Thursday, September 12, 2024

Autonomous aerial robots for multiroom exploration

An estimated 100 earthquakes cause damage around the world each year, collapsing buildings, downing electrical lines, and leaving other destruction behind. For first responders, evaluating the situation and deciding where to concentrate rescue efforts is both crucial and hazardous.

Researchers at Carnegie Mellon University’s Robotics Institute (RI) in the School of Computer Science have developed a new approach to autonomous aerial exploration that coordinates multiple robots inside deserted buildings. This innovation could assist first responders in gathering information and making well-informed decisions following a disaster.

“A key idea of this research was avoiding redundancy in exploration,” said RI Ph.D. student Seungchan Kim. “Since this is multi-robot exploration, coordination and communication among robots is vital. We designed this system so each robot explores different rooms, maximizing the rooms a set number of drones could explore.”

The drones prioritize detecting doors quickly because significant targets, such as people, are more likely to be found in rooms than in corridors. To identify these entry points, the robots analyze the geometric features of their environment using an onboard lidar sensor.

While hovering about six feet above the ground, the aerial robots convert 3D lidar point cloud data into a 2D map. This map represents the spatial layout as an image composed of cells, or pixels, which the robots then scan for structural patterns that indicate doors and rooms. Walls appear as occupied pixels near the drone, while an open door or passage appears as empty pixels.
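The projection step can be pictured with a short sketch. The function below is a hypothetical simplification, not the researchers' implementation: it keeps only lidar points in a horizontal slice around the drone's hover height and marks their x/y cells as occupied. The parameter names and values (resolution, grid size, slice bounds) are illustrative assumptions.

```python
import numpy as np

def pointcloud_to_occupancy(points, resolution=0.1, grid_size=200,
                            z_min=1.5, z_max=2.0):
    """Project a 3D lidar point cloud (N x 3 array, robot at the origin)
    onto a 2D occupancy grid. Points within a horizontal slice around the
    drone's hover height (~6 ft / 1.8 m) are treated as walls."""
    grid = np.zeros((grid_size, grid_size), dtype=np.uint8)
    # Keep only points near the drone's hover height.
    sliced = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]
    # Convert metric x/y coordinates to grid indices centered on the robot.
    ij = np.floor(sliced[:, :2] / resolution).astype(int) + grid_size // 2
    # Discard points that fall outside the grid.
    in_bounds = np.all((ij >= 0) & (ij < grid_size), axis=1)
    ij = ij[in_bounds]
    grid[ij[:, 1], ij[:, 0]] = 1  # occupied pixels represent walls
    return grid
```

Slicing at hover height, rather than flattening the full cloud, is what makes doorways show up as gaps: a door lintel above the slice does not register as an obstacle.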

Using the researchers’ model, the robots identify doors as saddle points in this map and quickly navigate passageways. Once a robot enters a room, the surrounding walls appear as a circle around it.

Kim gave two main reasons the researchers chose a lidar sensor over a camera. First, lidar requires less computing power. Second, unlike a camera, lidar is not blinded by the dust and smoke common in collapsed buildings and other disaster sites.

Rather than being controlled by a centralized base, each robot makes decisions autonomously, determining the best trajectories from its understanding of the surroundings and from communication with the other robots. The aerial robots exchange information about the doors and rooms they have explored and use it to avoid areas that have already been visited.
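The decentralized coordination described above can be sketched as each robot maintaining a shared set of claimed rooms. The class below is a toy illustration under assumed names; the actual system exchanges richer information about doors and trajectories.

```python
class ExplorationAgent:
    """Toy sketch of decentralized coordination: each robot tracks which
    rooms the team has already claimed and chooses an unclaimed one.
    Hypothetical names and logic, for illustration only."""

    def __init__(self, name):
        self.name = name
        self.claimed = set()  # rooms claimed by any robot on the team

    def receive(self, rooms):
        """Merge room claims broadcast by teammates."""
        self.claimed |= set(rooms)

    def choose_room(self, detected_rooms):
        """Claim and return the first detected room no teammate has taken,
        or None if every nearby room is already covered."""
        for room in detected_rooms:
            if room not in self.claimed:
                self.claimed.add(room)
                return room
        return None
```

Because each agent merges teammates' claims before choosing, two robots that can communicate will not pick the same room, which is the redundancy-avoidance Kim describes.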

Together with Kim, the research team consisted of Micah Corah, a former RI postdoctoral fellow; John Keller, a senior robotics engineer in the RI; Sebastian Scherer, an associate research professor in the RI with a courtesy appointment in the Electrical and Computer Engineering Department; and Graeme Best, a researcher at the University of Technology Sydney.