Robots could help people with disabilities get dressed

Credit: MIT CSAIL

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new algorithm that makes a robot’s movements safe for the humans it works with. The algorithm helps the robot find efficient motion plans while guaranteeing the physical safety of its human counterpart.

The researchers explain that robots have great potential to help people with reduced mobility, but assistive dressing is a complex task: the device must be agile and fast without compromising either the user’s safety or the efficiency of the task. Even so, the MIT robot was able to dress a person while that person was doing something else, such as looking at a phone. The device could become a powerful tool for expanding assistance to those with disabilities or limited mobility.

To provide a theoretical guarantee of human safety, the MIT team’s algorithm reasons about the uncertainty in the human model. Instead of having a single, default model where the robot only understands one potential reaction, the team gave the machine an understanding of many possible models to more closely mimic how a human can understand other humans. As the robot gathers more data, it will reduce uncertainty and refine those models.

If the person being dressed changes posture, raises a hand, or turns, the robot reacts to the new conditions and selects the model that best fits the situation, taking the person’s changing behavior into account.
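As a rough illustration of this idea, here is a minimal sketch of how a robot might maintain a belief over several candidate human-motion models and favor the one that best matches what it observes. This is not the team’s actual code: the candidate models, positions, and Gaussian-style likelihood are assumptions made purely for illustration.

```python
import numpy as np

def update_beliefs(priors, models, observed_pos):
    """Bayesian-style update over candidate human-motion models.

    priors:       array of prior probabilities, one per model
    models:       list of callables, each predicting the human's next position
    observed_pos: the position that was actually observed
    """
    # Score each model by how close its prediction lands to the observation.
    likelihoods = np.array([
        np.exp(-np.linalg.norm(m() - observed_pos) ** 2) for m in models
    ])
    posterior = priors * likelihoods
    return posterior / posterior.sum()   # renormalize to a probability distribution

# Hypothetical example: three models of where the person's arm will be next.
models = [lambda: np.array([0.0, 0.0]),   # arm stays still
          lambda: np.array([0.2, 0.0]),   # arm rises slowly
          lambda: np.array([0.5, 0.1])]   # arm moves quickly
priors = np.array([1/3, 1/3, 1/3])        # initially, all models equally plausible

observed = np.array([0.21, 0.01])         # the arm actually rose slowly
posterior = update_beliefs(priors, models, observed)
best = int(np.argmax(posterior))          # index of the best-matching model (here: 1)
```

Repeating this update as more observations arrive is what lets the robot reduce its uncertainty over time, as described above.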

In addition, the MIT team redefined safety for human-aware motion planners as either collision avoidance or safe impact in the event of a collision. This allows the robot to make non-harmful contact with the human in order to make progress, as long as the impact force it exerts stays low. Thanks to this two-pronged definition of safety, the robot could safely complete the dressing task in less time.
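The two-pronged definition can be sketched as a simple predicate on a candidate motion. The clearance and force thresholds below are invented for illustration and are not taken from the paper:

```python
def motion_is_safe(min_distance, impact_force, clearance=0.05, force_limit=5.0):
    """Two-pronged safety check (illustrative thresholds, not from the paper).

    A candidate robot motion passes if it either keeps clear of the human
    (collision avoidance) or any predicted contact stays below a low
    impact-force limit (safe impact).
    """
    collision_free = min_distance > clearance   # metres
    safe_impact = impact_force < force_limit    # newtons
    return collision_free or safe_impact

# A motion that gently grazes the sleeve (contact, but only ~1.2 N) is allowed,
# while a fast motion that would strike the person with ~20 N is rejected.
allowed = motion_is_safe(min_distance=0.0, impact_force=1.2)    # True
rejected = motion_is_safe(min_distance=0.0, impact_force=20.0)  # False
```

Under a pure collision-avoidance definition, both motions above would be rejected; admitting the low-force contact is what lets the planner finish the dressing task faster.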

“This multifaceted approach combines set theory, human-aware safety constraints, human motion prediction, and feedback control for safe human-robot interaction,” says Zackory Erickson, Assistant Professor in The Robotics Institute at Carnegie Mellon University (Fall 2021). “This research could potentially be applied to a wide variety of assistive robotics scenarios, towards the ultimate goal of enabling robots to provide safer physical assistance to people with disabilities.”