Wednesday, April 24, 2024

MIT’s AI smart carpet estimates human poses without using cameras

Engineers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a smart carpet that can accurately estimate human body movements or poses without the need for a camera. The new tactile sensing carpet could be useful for sports feedback, fall monitoring, VR, and game tracking.

The smart carpet itself is made of commercial, pressure-sensitive film and conductive thread, with over 9,000 sensors spanning 36-by-2 feet. When weight is applied to the carpet, each sensor converts the pressure from the person’s feet, limbs, or torso into an electrical signal at the point of contact.
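To illustrate the sensing side, here is a minimal sketch of assembling raw sensor readings into a 2D pressure map. The grid dimensions (32 × 288, to approximate the article’s 9,000-plus sensors over a 36-by-2-foot strip) and the 10-bit ADC range are assumptions for illustration, not MIT’s actual hardware specifications.

```python
# Hypothetical sensor layout: 32 x 288 = 9,216 sensors (assumed, not MIT's spec).
GRID_ROWS, GRID_COLS = 32, 288
ADC_MAX = 1023  # assumed 10-bit analog-to-digital converter

def to_pressure_map(raw_readings):
    """Normalize a flat, row-major list of raw ADC readings into a 2D map
    with values in [0, 1], where higher means more pressure."""
    if len(raw_readings) != GRID_ROWS * GRID_COLS:
        raise ValueError("expected one reading per sensor")
    normalized = [r / ADC_MAX for r in raw_readings]
    return [normalized[i * GRID_COLS:(i + 1) * GRID_COLS]
            for i in range(GRID_ROWS)]

# Example: an empty carpet except for one fully pressed sensor.
raw = [0] * (GRID_ROWS * GRID_COLS)
raw[5 * GRID_COLS + 10] = 1023  # contact at row 5, column 10
pmap = to_pressure_map(raw)
```

A full frame of such a map is what the article calls the “heat map” of a pose.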

The system was trained on synchronized tactile and visual data, such as video of someone doing pushups, situps, or stretches paired with the corresponding pressure heat map. A pressure map from each action is then assigned to a virtual model of the person performing it, allowing the system to estimate the person’s body pose based solely on the pressure exerted on individual sections of the smart carpet. Notably, the system could estimate even upper-body movements quite accurately, despite those parts of the body rarely touching the carpet directly.
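To make the training idea concrete, here is a toy sketch of learning from synchronized (pressure map, pose) pairs. A 1-nearest-neighbor lookup stands in for the neural network the team actually trained, and the four-element maps and string “poses” are purely illustrative.

```python
# Toy stand-in for the learned tactile-to-pose mapping: nearest-neighbor
# retrieval over synchronized (pressure map, pose) training pairs.

def distance(map_a, map_b):
    """Sum of squared differences between two flattened pressure maps."""
    return sum((a - b) ** 2 for a, b in zip(map_a, map_b))

def predict_pose(query_map, training_pairs):
    """Return the pose whose recorded pressure map best matches the query."""
    _, best_pose = min(training_pairs,
                       key=lambda pair: distance(pair[0], query_map))
    return best_pose

# Synchronized tactile/visual data: (pressure map, pose label) pairs.
training_pairs = [
    ([1.0, 0.0, 0.0, 1.0], "standing"),  # weight on both feet
    ([1.0, 1.0, 1.0, 1.0], "pushup"),    # hands and feet on the carpet
]

pose = predict_pose([0.9, 0.1, 0.0, 0.8], training_pairs)
```

The real system predicts full 3D keypoints rather than a class label, but the supervision signal is the same: pressure in, pose out.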

The CSAIL team found that the system was ultimately 97% accurate at identifying specific actions and could predict a person’s pose to within 10 centimeters. However, the model was unable to predict poses without more explicit floor contact, like free-floating legs during situps or a twisted torso while standing up.
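To show how the “within 10 centimeters” figure could be measured, here is a sketch of one plausible pose-error metric: the mean Euclidean distance between predicted and ground-truth keypoints. The joint coordinates below are made up for illustration; this is not necessarily the exact metric CSAIL used.

```python
import math

def mean_keypoint_error(predicted, ground_truth):
    """Average Euclidean distance between corresponding 3D keypoints (cm)."""
    errors = [math.dist(p, g) for p, g in zip(predicted, ground_truth)]
    return sum(errors) / len(errors)

# Illustrative two-joint example (coordinates in centimeters, made up):
predicted    = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
ground_truth = [(3.0, 4.0, 0.0), (10.0, 0.0, 12.0)]
err = mean_keypoint_error(predicted, ground_truth)  # (5 + 12) / 2 = 8.5 cm
```

An average error under 10 cm means the reconstructed skeleton lands within roughly a hand’s width of the true joint positions.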

“You could envision using the carpet for workout purposes. Based solely on tactile information, it can recognize the activity, count the number of reps, and calculate the amount of burned calories,” says MIT CSAIL Ph.D. student Yunzhu Li.
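As a hedged sketch of one workout feature the quote mentions, reps could be counted from the carpet’s total pressure over time: a pushup cycle lowers and raises the body, so the summed pressure oscillates, and counting upward threshold crossings approximates the rep count. The threshold and the simulated signal below are illustrative, not from the paper.

```python
def count_reps(total_pressure, threshold):
    """Count rising edges where the signal crosses above the threshold."""
    reps = 0
    above = total_pressure[0] > threshold
    for value in total_pressure[1:]:
        now_above = value > threshold
        if now_above and not above:  # low-to-high transition = one rep
            reps += 1
        above = now_above
    return reps

# Simulated total-pressure trace for three pushups (arbitrary units).
signal = [1.0, 1.8, 1.0, 1.9, 1.1, 1.7, 1.0]
reps = count_reps(signal, threshold=1.5)
```

Calorie estimates could then follow from the recognized activity and rep count, as the quote suggests.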

The team says the smart carpet is low-cost and scalable. Next, they aim to find a way to gather more information from the tactile signals, such as a user’s height or weight, and want to improve the metrics for multiple users, where two people might be dancing or hugging on the carpet.