
Robotics Society of Singapore



Trends in Robotics

  • 17 Jan 2023 4:17 PM | Anonymous

    Robot manipulation tasks usually involve a large number of possible actions at a given state. Specifically, we differentiate between locomotion, the ability of the robot to move itself, and manipulation, the ability to move objects in the robot's environment. The two activities are closely related: during locomotion the robot uses its motors to exert forces on its environment (ground, water or air) to move itself; during manipulation it uses motors to exert forces on objects to move them relative to the environment. Here, locomotion covers very different modes of motion such as rolling, walking, running, jumping, sliding (undulatory locomotion), crawling, climbing, swimming, and flying.

    Although locomotion and manipulation are related, similar in some ways, and subject to the same constraints and limitations imposed by our models of physical laws, there is currently a dichotomy in the techniques used for planning, control, perception, and design in each area.

    Taking a cue from cognitive psychology, which studies how people perceive, learn, remember, and think about information as a five-step process (environment → sensations → perceptions → mental activity or formations → consciousness), we have gained clues for developing tools and methodologies for robots equipped with human-like locomotion and manipulation capabilities.

    In this special issue, we have invited experts from different domains to share and report on the design, implementation, and fabrication of new actuators for humanoid robots, covering

    • Design and fabrication of new sensors
    • Cognition skills learning
    • Design and development of mechanical and mechatronic platform for humanoid robotics including design of anthropomorphic robotic arm and hand manipulations
    • Planning and control algorithms for human-like locomotion and manipulation
    • Human-Robot Interaction for collaborative locomotion and manipulation
    • Trustworthy AI/Robot

    More details can be found at:

    Special Issue on Human-Like Locomotion and Manipulation: Current Achievements and Challenges

  • 14 Jan 2023 10:30 AM | Anonymous

    Deadline for submission:  Feb 15, 2023 (extended)


    The 2021 Nobel Prize in Physics was awarded to Prof. Giorgio Parisi, whose exceptional research contributions include deciphering the collective behavior of birds. This phenomenon reflects the development and cognition of biological, intelligent individuals, and it sheds light on the development of cognitive, autonomous and evolutionary robotics. Each individual effectively transmits information to and learns from several neighbors, enabling cooperative decision-making among them. Such interactions among individuals reveal the development and cognition of natural groups over the evolutionary process, and they can be modeled as multi-agent systems. Multi-agent systems are capable of solving complex tasks, and collaborative learning improves their robustness and efficiency. Multi-agent learning is playing an increasingly important role in fields such as aerospace systems, intelligent transportation and smart grids. As environments become more complicated (e.g., highly dynamic, with incomplete/imperfect observational information) and tasks become more difficult (e.g., how to share information, how to set learning objectives, and how to deal with the curse of dimensionality), most existing methods cannot effectively solve these complex problems in cognitive intelligence. In addition, cognitive learning in multi-agent systems demonstrates the efficiency of learning how to learn in a distributed way. From this perspective, multi-agent learning, though of great research value, faces the challenge of extending learning from single to multiple agents, from simplicity to complexity, from low to high dimensions, and from one domain to others.

    In addition, multi-agent systems may involve competitive or even adversarial activities, in which agents must make more complex decisions through cognitive learning. In recent years, scientists and engineers working on adversarial multi-agent systems have made great breakthroughs, the most representative being AlphaGo/AlphaZero, Pluribus and AlphaStar. However, limitations and challenges remain, including incomplete/imperfect information environments and data/strategy generalization. How can agents autonomously and quickly reach swarm-intelligent decisions via cognitive learning in such complex environments? Answering this question is of great significance to the development of many practical fields.

    This special issue aims to investigate cognitive learning in multi-agent systems from the perspective of applications, including cognitive, autonomous and evolutionary robotics. All related original research that contributes to the development and cognition of multi-agent systems, along with its applications, is particularly welcome and encouraged.

    The primary list of topics includes (but is not limited to):

    • Development and cognition of multi-agent systems
    • Brain-inspired optimization/learning in multi-agent systems
    • Federated learning/distributed learning
    • Causal inference in distributed learning
    • Zero-shot/few-shot/no-regret learning
    • Critical behavior in multi-agent systems
    • Meta multi-agent reinforcement learning
    • Multi-agent reinforcement learning in games
    • Best-response and learning dynamics
    • Multi-agent multi-armed bandits
    • Cooperative-competitive multi-agent frameworks
    • Applications to cognitive, autonomous and evolutionary robotics

    Submission: Manuscripts should be prepared according to the "Submission Guidelines" of the IEEE Transactions on Cognitive and Developmental Systems.

  • 13 Jan 2023 1:51 PM | Anonymous


    Visual inspection has been, and will remain, an integral part of the manufacturing quality control and assessment process. Yet human classification of defects is typically inconsistent and inaccurate due to distraction or fatigue, resulting in many man-hours wasted on manual visual inspection. While Automated Optical Inspection (AOI) systems have largely addressed the shortcomings of manual quality control, rule-based AOIs tend to over-reject simple flaws as defects, resulting in substantial yield loss.

    Visual Artificial Intelligence is able to accurately and consistently identify defects, leveraging the data-rich environment of manufacturing - translating to a smaller margin of error during defect analysis.

    This technology is an in-line visual inspection Artificial Intelligence (AI) platform that utilises image-based data obtained from any existing automated image capture systems (for quality control and assurance), to conduct automated inspections at a higher rate than a human being for highly accurate, consistent defect classification and yield improvement.


    This technology is designed and built to automate defect analysis for large numbers of images from a variety of image-based sources, e.g. Automated Visual Inspection (AVI) machines, Energy-Dispersive X-Ray Spectroscopy (EDX), Automated Optical Inspection (AOI) machines, Advanced 3D X-Ray Inspection (AXI) machines, Complementary Metal Oxide Semiconductor (CMOS) cameras and Scanning Electron Microscopes (SEM).

    1. Upload large volumes of image data
    2. AI-assisted data labelling (e.g. bent lead, lead deviation burr, scratch, etc)
    3. Pre-trained AI models purpose-built for defect identification accelerate AI deployment
    4. After testing, the best-performing AI model is automatically deployed
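    The automatic deployment in step 4 can be sketched as picking the best-scoring candidate after testing. The class, function, and model names below are illustrative assumptions, not the platform's actual API.

```python
# Minimal sketch of step 4: after testing, deploy the candidate model
# with the best validation accuracy. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class CandidateModel:
    name: str
    accuracy: float  # validation accuracy measured during testing

def select_best_model(candidates):
    """Return the best-performing candidate for automatic deployment."""
    if not candidates:
        raise ValueError("no candidate models to deploy")
    return max(candidates, key=lambda m: m.accuracy)

candidates = [
    CandidateModel("resnet_defects_v1", 0.91),
    CandidateModel("efficientnet_defects_v2", 0.96),
    CandidateModel("mobilenet_defects_v1", 0.88),
]
best = select_best_model(candidates)
print(best.name)  # efficientnet_defects_v2
```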

    It has the following key features:

    • Automates the visual defect review and classification process
    • Sets up and self-maintains an accurate AI model within hours, not days
    • Easily integrates with existing automated image capture tools
    • Built-in explainable AI assists with debugging model performance and provides classification transparency
    • Automatically identifies and learns new defects, adapting to line changes
    • Drift-aware: tracks model performance drift over time and automatically prompts when accuracy degrades
    • Visualises heat maps of defect occurrence, yield loss percentage, yield recovery and Return-on-Investment (ROI) metrics
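    The drift-aware feature above can be sketched as a rolling accuracy check. The window size and threshold below are illustrative assumptions, not the product's actual mechanism.

```python
# Sketch of drift awareness: prompt for attention when a rolling
# accuracy estimate over recent classifications falls below a threshold.
from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 100, threshold: float = 0.9):
        self.results = deque(maxlen=window)  # recent correct/incorrect flags
        self.threshold = threshold

    def record(self, correct: bool) -> None:
        self.results.append(correct)

    def drifted(self) -> bool:
        """True when rolling accuracy has degraded below the threshold."""
        if not self.results:
            return False
        return sum(self.results) / len(self.results) < self.threshold

monitor = DriftMonitor(window=10, threshold=0.9)
for outcome in [True] * 8 + [False] * 2:  # rolling accuracy now 0.8
    monitor.record(outcome)
print(monitor.drifted())  # True: 0.8 < 0.9, so prompt for retraining
```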


    This technology has applications in the following sectors/industries:

    • Semiconductor
    • Electronics
    • Automotive
    • Heavy machinery
    • Aviation
    • Medical device assembly
    • Pharmaceuticals
    • Food inspection


    • Fully automated visual inspection process, resulting in a > 90% reduction in visual inspection headcount
    • Reduction in engineering man-hours attributable to insights gained from AI-enabled defect review
    • Increase in defect classification accuracy (as compared to human defect classification)
    • Reduction in number of escapees, false rejections and over-rejection from rule-based AOIs - increasing yield recovery by 10% (yield-loss minimisation)
    • Cost and cycle time reduction through yearly yield recovery and headcount optimisation

    The technology owner is keen to collaborate with high-value, complex manufacturing companies through R&D collaboration, new product/service co-development, test-bedding, and/or licensing.
    Additionally, the technology owner is also keen to work with technology partners to co-develop enhanced Artificial Intelligence (AI) root-cause analysis and predictive analytics capabilities.

  • 13 Jan 2023 1:47 PM | Anonymous


    Vision-based Artificial Intelligence (AI) models require substantial time to train, fine-tune and deploy in production. After production, this process is still required when performance degrades and re-training on a new dataset becomes necessary; this maintenance process exists throughout the model's lifetime to ensure optimal performance. Rather than embarking on the time-consuming and painful process of collecting/acquiring data to train and tune the AI model, many organisations have turned to the use of pre-trained models to accelerate the AI model development process.

    This technology consists of a suite of pre-trained models intended to detect food, human behaviours and facial features, and to count people. These AI models operate on video footage and static images obtained from cameras. The models are tuned and trained on various use cases and are accessible via API calls or embedded within software as a Software Development Kit (SDK) library.


    The technology consists of a suite of pre-trained AI models that provide high accuracy (over 80%) and can be further customised to improve accuracy and adapt to different use-case scenarios. Models can be integrated in the following ways:

    1. Installed library package embedded within on-device/on-premise software
    2. HTTP-based Application Programming Interface (API) calls with video/image data to a cloud-installed library package
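    Integration option 2 might look like the following hypothetical sketch. The endpoint URL, JSON field names, and request shape are assumptions for illustration, not the vendor's documented API.

```python
# Hypothetical sketch of sending an image to a cloud-hosted model over
# HTTP. Everything here (URL, field names) is an illustrative assumption.
import base64
import json

API_URL = "https://example.com/v1/models/person-recognition:predict"  # placeholder

def build_predict_request(image_bytes: bytes) -> dict:
    """Package an image as a JSON body for an HTTP POST to the model API."""
    return {
        "url": API_URL,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "image": base64.b64encode(image_bytes).decode("ascii"),
        }),
    }

req = build_predict_request(b"\x89PNG...")  # stand-in for real image bytes
# In production the request would then be sent, e.g. with urllib.request:
#   urllib.request.urlopen(
#       urllib.request.Request(req["url"], req["body"].encode(), req["headers"]))
```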

    The following are the features for various AI models:

    Abnormal Behaviour Recognition

    • Continuous monitoring and detection of abnormal human behaviours e.g. fighting, loitering

    Event Detection

    • Recognises a variety of subjects and events e.g. sports day, graduation, wedding, festival, Christmas, from video footage
    • Optimised for lightweight compute capability (Intel OpenVino)

    Food (Fresh and Packaged) Recognition

    • Real-time detection of fresh and packaged foods
    • Detects abnormal fresh food or defective packaged food
    • Classifies food types e.g. lotus, spinach, cucumber, radish etc.
    • Optimised for low compute capability

    Privacy-Preserving Person Recognition

    • Privacy preserved people detection, counting and human activity recognition
    • Images are blurred to preserve private information that can lead to personal identification (irreversible)
    • Optimised for lightweight edge computing
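    As a toy illustration of why such blurring is irreversible, the sketch below block-pixelates a grayscale image by averaging tiles, discarding the fine detail needed for identification. Real systems use stronger, detection-guided obfuscation; this code is purely illustrative.

```python
# Toy irreversible blur: replace each block x block tile of a grayscale
# image (list of rows of ints) with its average. Averaging discards the
# original pixel values, so the operation cannot be undone.
def pixelate(image, block=2):
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = [image[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            avg = sum(tile) // len(tile)
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    out[y][x] = avg
    return out

img = [[10, 20, 30, 40],
       [50, 60, 70, 80]]
print(pixelate(img, block=2))  # [[35, 35, 55, 55], [35, 35, 55, 55]]
```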

    Free (Empty) Space Recognition

    • Semantic segmentation to identify empty spaces
    • Customisable for any free-space detection scenario
    • High accuracy in night scenes

    Safety Monitoring

    • Object detection with prohibited and allowed zones (e.g. person or forklift)
    • Detects and identifies safety risks associated with safety distances
    • Enables audible alarm systems of abnormal situations

    Wellbeing and Safety Detection

    • Automatically detects and classifies nudity in images
    • Enables alerts to be delivered to parent/caregiver's device
    • Customisable to detect new categories of inappropriate content


    This technology offer comprises a suite of AI models for the following applications:

    Abnormal Behaviour Recognition

    • Public areas or areas where social order needs to be maintained e.g. food & beverage, entertainment establishments

    Event Detection

    • Automatic creation and/or organisation of media content i.e. photo classification
    • Automated adjustment of device hardware parameters e.g. audio, colour, brightness when displaying specific types of content e.g. sports

    Food (Fresh and Packaged) Recognition

    • Food stock level detection, food inventory management
    • Automatic detection of fresh/packaged goods within a constrained area

    Privacy-Preserving Person Recognition

    • Privacy protection of visual information, in high traffic areas, without deterioration of video quality

    Free (Empty) Space Recognition

    • Vehicle position localisation on roads
    • Navigation (free-space localisation) in partial/fully self-driving automotive vehicles
    • Identification of free storage spaces in the logistics industry

    Safety Monitoring

    • Automated compliance checks
    • Workplace safety analysis and tracking

    Wellbeing and Safety Detection

    •  Parental control in browsers, smartphones or other image storage devices e.g. Network Attached Storage (NAS), Solid State Drives (SSD)


    • Accelerate AI development - eliminate the need for dataset creation, annotation, tuning and testing
    • Customisable AI models - fine-tuned to environment and condition
    • Operational support to continuously improve AI accuracy from newly collected data



Call or Email Us:
+65-6790 5754


Room N3-02C-96, School of Mechanical & Aerospace Engineering, Nanyang Technological University, Singapore 639798
