

Invited Speakers:

Christos G. Cassandras
Distinguished Professor
Division of Systems Engineering
Boston University
Jiaoyang Li
Assistant Professor
Robotics Institute
Carnegie Mellon University
Danwei Wang
Professor
School of Electrical and Electronic Engineering
Nanyang Technological University
Ziwei Wang
Assistant Professor
School of Electrical and Electronic Engineering
Nanyang Technological University
Ji Zhang
Systems Scientist
Robotics Institute
Carnegie Mellon University
Boyu Zhou
Tenure-Track Assistant Professor
Department of Mechanical and Energy Engineering
Southern University of Science and Technology



Workshop Program (GMT+8)

08:40

Opening and welcome

08:45

From GPS Limitations to Magnetic Intelligence: Rethinking Autonomous Navigation with Mag-Loc

Abstract: As autonomous systems evolve to operate in increasingly complex, GPS-denied, and infrastructure-challenged environments, the need for robust, absolute localization becomes critical. In this talk, we explore the trajectory of autonomous navigation technologies through the lens of Mag-Loc—an innovative Ambient Magnetic Field (AMF)-based positioning system. Unlike traditional GPS or infrastructure-dependent methods, Mag-Loc leverages the naturally occurring and environment-shaped geomagnetic field as a stable reference. By deriving magnetic fingerprints of environments, Mag-Loc enables fast mapping, real-time absolute positioning, and robust operation across day/night and all-weather conditions—even in magnetically disturbed settings such as factories, warehouses, or underground spaces. We will showcase how Mag-Loc’s deep sensor fusion and modular hardware design empower mobile robots, autonomous vehicles, and logistics platforms to achieve localization resilience with minimal environmental dependency. This talk will also discuss the broader implications of magnetic-based localization in the evolution of intelligent autonomy—bridging the gap between theory and industrial deployment.
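Mag-Loc's internal algorithms are not detailed in the abstract; as a rough illustration of the fingerprinting idea it describes, the sketch below matches a live magnetometer reading against a pre-built map of magnetic fingerprints via nearest-neighbor search. All names (`nearest_fingerprint`, `fingerprint_map`) are hypothetical and not from the Mag-Loc system.

```python
import math

def nearest_fingerprint(measurement, fingerprint_map):
    """Return the map position whose stored magnetic fingerprint is
    closest (Euclidean distance) to the current magnetometer reading.

    fingerprint_map: dict mapping (x, y) grid cells to (bx, by, bz)
    magnetic field vectors recorded during the mapping phase.
    measurement: the live (bx, by, bz) magnetometer reading.
    """
    best_pos, best_dist = None, math.inf
    for pos, stored in fingerprint_map.items():
        d = math.dist(measurement, stored)
        if d < best_dist:
            best_pos, best_dist = pos, d
    return best_pos
```

In practice such systems match sequences of readings (fused with odometry) rather than single samples, since one field vector is rarely unique across a building; this single-sample version only conveys the fingerprint-lookup principle.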

Safety-guaranteed optimal controllers in the navigation of autonomous multi-robot systems

Abstract: Optimal control methods provide solutions to safety-critical problems ubiquitous in multi-robot systems, but they easily become intractable. Control Barrier Functions (CBFs) have emerged as a technique that facilitates their solution by provably guaranteeing safety (e.g., collision avoidance) through their forward invariance property, while trading off conservativeness with performance. We will describe how CBF-based controllers assisted by Reinforcement Learning (RL) methods can ensure safe navigation of multi-robot systems performing cooperative tasks in complex environments. In particular, RL is used to learn optimal parameters for these controllers with safety always guaranteed even during on-line training. We will also discuss the more challenging situation arising in a game setting where some robots cooperate and must interact with others which are non-cooperative with generally unknown dynamics.
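To picture the safety-filtering mechanism the abstract refers to: a CBF-based controller minimally perturbs a nominal command so the safety condition h(x) >= 0 stays forward invariant. The sketch below, assuming a single-integrator robot and one circular obstacle, uses the closed-form projection for a single linear constraint in place of a full QP solver; it is a minimal illustration, not any speaker's implementation.

```python
def cbf_safety_filter(x, u_nom, x_obs, r, alpha=1.0):
    """Project a nominal velocity command onto the CBF-safe set.

    Single-integrator robot (x_dot = u) avoiding a circular obstacle.
    Barrier: h(x) = ||x - x_obs||^2 - r^2, with h >= 0 meaning safe.
    Forward invariance requires  h_dot + alpha*h >= 0, i.e.
    2(x - x_obs) . u >= -alpha*h  -- one linear constraint on u,
    so the QP 'min ||u - u_nom||^2' has a closed-form solution.
    """
    dx = (x[0] - x_obs[0], x[1] - x_obs[1])
    h = dx[0] ** 2 + dx[1] ** 2 - r * r
    a = (2.0 * dx[0], 2.0 * dx[1])          # gradient of h
    b = -alpha * h                           # lower bound on a . u
    dot = a[0] * u_nom[0] + a[1] * u_nom[1]
    if dot >= b:                             # nominal command already safe
        return list(u_nom)
    # minimally perturb u_nom onto the constraint boundary
    lam = (b - dot) / (a[0] ** 2 + a[1] ** 2)
    return [u_nom[0] + lam * a[0], u_nom[1] + lam * a[1]]
```

The parameter alpha is the trade-off the abstract mentions: larger values let the robot approach the obstacle more aggressively (less conservative), and in the RL-assisted setting such parameters are what gets tuned while the filter itself keeps safety guaranteed.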

09:45

How to Coordinate Thousands of Robots without a Traffic Jam

Abstract: Today’s automated warehouses deploy hundreds, and sometimes even thousands, of robots to transport inventory in shared, cluttered spaces, posing significant challenges for ensuring safe, deadlock-free, and congestion-free navigation. In this talk, I will present our recent progress on scalable imitation learning methods for coordinating 10k robots, automatic environment optimization techniques for mitigating congestion, and robust multi-robot execution frameworks to ensure safe and deadlock-free operations.

10:15

Coffee break and Poster Session

10:30

Efficient Active Perception and Manipulation for Drones in Cluttered Scenes

Abstract: Unmanned aerial vehicles (UAVs) offer exceptional flexibility and mobility, making them well-suited for logistics and inspection tasks in complex environments. This talk presents recent research on active perception and mobile manipulation with autonomous UAVs, directly addressing challenges in multi-robot perception and navigation for real-world applications. We first discuss methods enabling autonomous UAVs to efficiently explore unknown environments, focusing on real-time planning, compact environmental representation, and collaborative strategies for UAV swarms. Next, we address the problem of coverage and 3D reconstruction in cluttered scenes, introducing prediction-enhanced real-time coverage planning and heterogeneous UAV coordination approaches. Finally, we present advances in UAV-based transportation, delivery, and manipulation, highlighting how perception-driven navigation supports robust and efficient operations.

11:00

From Lidar SLAM to Full-scale Autonomy and Beyond

Abstract: In this talk, I will introduce our full autonomy stack, which enables robots to autonomously navigate to goal points and to explore and map unknown environments. Our work started several years ago from the fundamental SLAM building block - Lidar Odometry And Mapping (LOAM). Building upon the SLAM module, the full autonomy stack now provides multiple low-level features, e.g., collision avoidance, terrain traversability analysis, and waypoint following. The autonomy stack further contains two high-level planners for goal-point navigation and exploration, respectively. I will show a simulation demo and briefly discuss the latest extension to AI-powered vision-language navigation. This series of work has won both the Best Paper Award and Best System Paper Award at RSS 2021, the only time in the history of RSS that one paper has collected both major awards, as well as the Best Student Paper Award at IROS 2022. The extended journal version was recently published in Science Robotics. Aimed at lowering the bar for everybody to acquire autonomy and make further use of it, our system is open-sourced at cmu-exploration.com.

11:30

VLA on Wheels: Empowering Vision-language-action Models for Mobile Manipulation

Abstract: Vision-language-action (VLA) models achieve high generalization ability and success rates thanks to their large parameter counts and large-scale training data. However, deploying VLA models for mobile manipulation remains limited, as current VLA models are designed for fixed-base manipulation. To empower VLA models for mobile manipulation, we efficiently adapt them by (1) designing a whole-body motion planning framework that realizes the desired manipulation trajectories produced by the VLA model, and (2) building a geometric scene graph representation for base docking point selection. Our robotic manipulation system significantly broadens the application scenarios of VLA models to tasks that require mobility.

12:00

Announcement of the CARIC Champion and Best Paper Award

Champion Solution

Abstract: TBD