Trends in Robotics
Robot manipulation tasks usually involve a large number of possible actions at a given state. Specifically, we differentiate between locomotion, the ability of the robot to move itself, and manipulation, the ability to move objects in the robot's environment. The two activities are closely related: during locomotion the robot uses its motors to exert forces on its environment (ground, water or air) to move itself; during manipulation it uses its motors to exert forces on objects to move them relative to the environment. Here, locomotion includes very different modes of motion such as rolling, walking, running, jumping, sliding (undulatory locomotion), crawling, climbing, swimming, and flying.
Although locomotion and manipulation are related and, in some ways, subject to the same physical constraints and limitations imposed by our models of physical laws, there is currently a dichotomy between the two in techniques for planning, control, perception, and design.
Taking a cue from cognitive psychology, which studies how people perceive, learn, remember, and think about information as a five-step process (environment, sensations, perceptions, mental activity or formations, consciousness), we gain clues for developing tools and methodologies for robots equipped with human-like locomotion and manipulation capabilities.
In this special issue, we have invited experts from different domains to share and report on the design, implementation, and fabrication of new actuators for humanoid robots.
More details can be found at:
Special Issue on Human-Like Locomotion and Manipulation: Current Achievements and Challenges (worldscientific.com)
Deadline for submission: Feb 15, 2023 (extended)
https://cis.ieee.org/images/files/Documents/call-for-special-issues/Cognitive_Learning_of_Multi-Agent_Systems-IEEE_TCDS-CFP-20211210.pdf
The 2021 Nobel Prize in Physics was awarded to Prof. Giorgio Parisi, whose exceptional research contributions include deciphering the collective behavior of birds. This phenomenon reflects the development and cognition of biological, intelligent individuals, and sheds light on the development of cognitive, autonomous and evolutionary robotics. Each individual effectively transmits information to and learns from several neighbors, thus enabling cooperative decision-making among them. Such interactions among individuals reveal the development and cognition of natural groups in the evolutionary process, which can be modeled as multi-agent systems. Multi-agent systems are capable of solving complex tasks, and they also improve robustness and efficiency through collaborative learning. Multi-agent learning is playing an increasingly important role in various fields, such as aerospace systems, intelligent transportation, and smart grids. As environments become more complicated (e.g., highly dynamic settings with incomplete/imperfect observations) and tasks become more difficult (e.g., how to share information, how to set learning objectives, and how to deal with the curse of dimensionality), most existing methods cannot effectively solve these complex problems in cognitive intelligence. In addition, cognitive learning of multi-agent systems demonstrates the efficiency of learning how to learn in a distributed way. From this perspective, multi-agent learning, though of great research value, faces the challenge of scaling learning problems from single to multiple agents, from simplicity to complexity, from low to high dimensions, and from one domain to others.
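The neighbor-to-neighbor information sharing described above is often modeled as a consensus process. The following is a minimal illustrative sketch (an assumption for exposition, not drawn from the call itself): each agent repeatedly averages its own state with its neighbors' states, and purely local interactions drive the group toward a collective decision.

```python
import numpy as np

def consensus(states, neighbors, steps=50, alpha=0.5):
    """Minimal consensus sketch: each agent mixes its own state with
    the mean of its neighbors' states at every step."""
    x = np.array(states, dtype=float)
    for _ in range(steps):
        new_x = x.copy()
        for i, nbrs in neighbors.items():
            if nbrs:
                # Weighted average of own state and neighbor mean.
                new_x[i] = (1 - alpha) * x[i] + alpha * np.mean([x[j] for j in nbrs])
        x = new_x
    return x

# Four agents on a ring, each starting with a different "opinion".
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
final = consensus([0.0, 1.0, 2.0, 3.0], neighbors)
# All agents converge to the network average (1.5 for this ring).
```

On this symmetric four-agent ring the update is doubly stochastic, so every agent converges to the network average: a toy instance of how local information exchange yields cooperative, group-level decisions.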
In addition, multi-agent systems can involve competitive or even adversarial activities, a situation in which agents must make more complex decisions through cognitive learning. In recent years, scientists and engineers working on adversarial multi-agent systems have made great breakthroughs, the most representative being AlphaGo/AlphaZero, Pluribus, and AlphaStar. However, limitations and challenges remain, including incomplete/imperfect information environments and data/strategy generalization. How can agents autonomously and quickly reach swarm-intelligent decisions via cognitive learning in complex environments under these circumstances? Answering this question is of great significance to the development of many practical fields.
This special issue aims to investigate cognitive learning in multi-agent systems from the perspective of applications, including cognitive, autonomous and evolutionary robotics. All related original research that contributes to the development and cognition of multi-agent systems, along with applications, is particularly welcome and encouraged.
The primary list of topics includes (but is not limited to):
Development and cognition of multi-agent systems
Brain-inspired optimization/learning in multi-agent systems
Federated learning/distributed learning
Causal inference in distributed learning
Zero-shot/few-shot/no-regret learning
Critical behavior in multi-agent systems
Meta multi-agent reinforcement learning
Multi-agent reinforcement learning in games
Best-response and learning dynamics
Multi-agent multi-armed bandits
Cooperative-competitive multi-agent frameworks
Applications to cognitive, autonomous and evolutionary robotics
Submission: Manuscripts should be prepared according to the guidelines in the "Submission Guidelines" of the IEEE Transactions on Cognitive and Developmental Systems at
https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=7274989.
Visual inspection has been, and will remain, an integral part of the manufacturing quality control and assessment process. Yet human classification of defects is typically inconsistent and inaccurate due to distraction or fatigue, resulting in many man-hours wasted on manual visual inspection. While Automated Optical Inspection (AOI) systems have largely addressed the shortcomings of manual quality control, rule-based AOIs tend to over-reject simple flaws as defects, resulting in substantial yield loss.
Visual Artificial Intelligence can identify defects accurately and consistently by leveraging the data-rich environment of manufacturing, translating to a smaller margin of error during defect analysis.
This technology is an in-line visual inspection Artificial Intelligence (AI) platform that utilises image-based data obtained from any existing automated image-capture system (for quality control and assurance) to conduct automated inspections at a higher rate than a human inspector, delivering highly accurate, consistent defect classification and yield improvement.
This technology is designed and built to automate defect analysis for large volumes of images from a variety of image-based sources, e.g. Automated Visual Inspection (AVI) machines, Energy-Dispersive X-Ray Spectroscopy (EDX), Automated Optical Inspection (AOI) machines, Advanced 3D X-Ray Inspection (AXI) machines, Complementary Metal Oxide Semiconductor (CMOS) cameras, and Scanning Electron Microscopes (SEM).
It has the following key features:
This technology has applications in the following sectors/industries:
The technology owner is keen to collaborate with high-value, complex manufacturing companies through R&D collaboration, new product/service co-development, test-bedding, and/or licensing. Additionally, the technology owner is also keen to work with technology partners to co-develop enhanced Artificial Intelligence (AI) root-cause analysis and predictive analytics capabilities.
Vision-based Artificial Intelligence (AI) models require substantial time to train, fine-tune and deploy in production. Even after deployment, this process is still required whenever performance degrades and re-training on a new dataset becomes necessary; this maintenance continues throughout the model's lifetime to ensure optimal performance. Rather than embarking on the time-consuming and painful process of collecting/acquiring data to train and tune an AI model, many organisations have turned to pre-trained models to accelerate the AI model development process.
This technology consists of a suite of pre-trained AI models intended to recognise food, human behaviours and facial features, and to count people. The models operate on video footage and static images obtained from cameras. They are tuned and trained on various use-cases and are accessible via API calls or embedded within software as a Software Development Kit (SDK) library.
The technology consists of a suite of pre-trained AI models that provide high accuracy (over 80%) and can be further customised to improve accuracy and adapted to different use-case scenarios. Models can be integrated in the following ways:
The following are the features for various AI models:
Abnormal Behaviour Recognition
Event Detection
Food (Fresh and Packaged) Recognition
Privacy-Preserving Person Recognition
Free (Empty) Space Recognition
Safety Monitoring
Wellbeing and Safety Detection
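As a rough illustration of the API-based integration mentioned above, the sketch below assembles a request payload and filters a model response by confidence. The helper names, JSON fields, and response schema here are hypothetical assumptions for illustration only, not the vendor's actual interface; a mocked response stands in for a real network call.

```python
import json

def build_request(image_path, model="food-recognition", min_confidence=0.8):
    """Assemble the JSON payload an API call might carry.
    (Hypothetical schema, for illustration only.)"""
    return json.dumps({
        "model": model,
        "image": image_path,          # in practice, likely base64-encoded bytes
        "min_confidence": min_confidence,
    })

def parse_detections(response_body, min_confidence=0.8):
    """Keep only detection labels at or above the confidence threshold."""
    detections = json.loads(response_body)["detections"]
    return [d["label"] for d in detections if d["confidence"] >= min_confidence]

# Mocked model response standing in for a real API reply.
mock_response = json.dumps({"detections": [
    {"label": "packaged_food", "confidence": 0.93},
    {"label": "fresh_food", "confidence": 0.41},
]})
labels = parse_detections(mock_response)
# Only the high-confidence detection survives the 0.8 threshold.
```

The same request/response handling would apply whether the model is reached over a REST endpoint or linked in via the SDK library; only the transport differs.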
This technology offer comprises a suite of AI models for the following applications:
Biomimetics | Special Issue : Artificial Intelligence for Autonomous Robots (mdpi.com)
Call or Email Us: Office: +65-6790 5754
Email: contact@rss.org.sg
Address: Room N3-02C-96, School of Mechanical & Aerospace Engineering, Nanyang Technological University, Singapore 639798