Thursday, April 18, 2024

MIT’s reconfigurable AI chip could reduce electronics waste

MIT engineers have built a LEGO-like artificial intelligence chip with a view toward sustainable, modular electronics. The chip can be reconfigured, with layers that can be swapped out or stacked on, for instance to add new sensors or updated processors. The engineers call it a “reconfigurable” AI chip because its capabilities can be expanded through different combinations of layers. Such reconfigurable chips could keep devices up to date while reducing electronic waste.

The AI chip design comprises alternating layers of sensing and processing elements, along with light-emitting diodes (LEDs) that allow the chip’s layers to communicate optically. Other modular chip designs generally employ conventional wiring to relay signals between layers; such intricate connections are difficult, if not impossible, to sever and rewire, which makes those stackable designs effectively non-reconfigurable.

The MIT design uses light, rather than physical wires, to transmit information through the chip. The chip can therefore be reconfigured with layers that can be swapped out or stacked on, for instance, to add new sensors or updated processors.

“You can add as many computing layers and sensors as you want, such as for light, pressure, and even smell,” says MIT postdoc Jihoon Kang. “We call this a LEGO-like reconfigurable AI chip because it has unlimited expandability depending on the combination of layers.”
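
To make the LEGO-like idea concrete, here is a minimal software sketch of a stack whose layers can be appended or swapped without rewiring. It is purely illustrative: the class names and the trivial “denoising” step are hypothetical stand-ins, not part of the MIT hardware, where the layers are physical devices linked by light.

```python
# Purely illustrative sketch of a stackable, swappable layer design.
# Every name here is hypothetical; the real chip implements this idea
# in hardware, with layers communicating via LEDs and photodetectors.

class Layer:
    """One swappable layer in the stack (a sensor or a processor)."""
    def process(self, signal):
        raise NotImplementedError

class LightSensor(Layer):
    def process(self, signal):
        # Normalize raw pixel readings to the range [0, 1].
        return [p / 255.0 for p in signal]

class DenoisingProcessor(Layer):
    def process(self, signal):
        # Naive 3-point moving average, a crude stand-in for real denoising.
        out = []
        for i in range(len(signal)):
            window = signal[max(0, i - 1): i + 2]
            out.append(sum(window) / len(window))
        return out

class Chip:
    """A stack of layers; each layer hands its output to the next."""
    def __init__(self, layers):
        self.layers = list(layers)

    def add_layer(self, layer):
        self.layers.append(layer)        # stack on a new sensor or processor

    def swap_layer(self, index, new_layer):
        self.layers[index] = new_layer   # reconfigure without rewiring

    def run(self, signal):
        for layer in self.layers:
            signal = layer.process(signal)
        return signal

chip = Chip([LightSensor()])
chip.add_layer(DenoisingProcessor())
print(chip.run([0, 128, 255, 255, 0]))
```

In this sketch, exchanging the processing layer is a one-line call to swap_layer, which loosely mirrors how the physical layers can be exchanged thanks to the optical links between them.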

The researchers are eager to apply the design to edge computing devices – self-sufficient sensors and other electronics that work independently of any central or distributed resources such as supercomputers or cloud-based computing.

In the new chip design, the researchers paired image sensors with artificial synapse arrays, each of which they trained to recognize certain letters – in this case, M, I, and T. A conventional approach would be to relay a sensor’s signals to a processor via physical wires. The team instead fabricated an optical system between each sensor and artificial synapse array to enable communication between the layers without requiring a physical connection.

MIT’s optical communication system consists of paired photodetectors and LEDs, each patterned with tiny pixels. The photodetectors constitute an image sensor for receiving data, and the LEDs transmit that data to the next layer.

The team fabricated a single chip with a computing core measuring about 4 square millimeters. The chip is stacked with three image recognition blocks, each comprising an image sensor, an optical communication layer, and an artificial synapse array for classifying one of three letters: M, I, or T. They then shone pixelated images of random letters onto the chip and measured the electrical current that each neural network array produced in response.
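
As a rough illustration of that readout scheme (not the team’s actual code or data), the following toy sketch treats each letter’s “synapse array” as a simple weight vector whose dot product with the incoming pixels stands in for the measured current; the 5x5 letter patterns are invented for the example.

```python
# Toy, software-only illustration of the letter-recognition readout described
# above. Each per-letter weight vector plays the role of a trained synapse
# array, and its dot product with the input image stands in for the electrical
# current read out of that array; the largest response wins. The 5x5 patterns
# are invented for this example and are not the MIT team's actual stimuli.
import numpy as np

LETTERS = {
    "M": ["10001", "11011", "10101", "10001", "10001"],
    "I": ["11111", "00100", "00100", "00100", "11111"],
    "T": ["11111", "00100", "00100", "00100", "00100"],
}

def to_vector(rows):
    # Bipolar pixels: lit -> +1, dark -> -1, flattened row by row.
    return np.array([1.0 if c == "1" else -1.0 for row in rows for c in row])

# One weight vector per letter, standing in for one trained synapse array.
WEIGHTS = {name: to_vector(rows) for name, rows in LETTERS.items()}

def classify(image_rows):
    x = to_vector(image_rows)
    # "Current" produced by each array in response to the image.
    responses = {name: float(w @ x) for name, w in WEIGHTS.items()}
    return max(responses, key=responses.get), responses

label, responses = classify(LETTERS["T"])
print(label, responses)  # 'T' yields the largest response for a clean T image
```

In this toy version, a blurry or noisy input simply shrinks the gap between the largest response and the runner-up, which is loosely analogous to the I-versus-T confusion described next.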

They found that the chip correctly classified clear images of each letter but was less able to distinguish between blurry images, for instance between I and T. However, the researchers were able to quickly swap out the chip’s processing layer for a better denoising processor, after which the chip accurately identified the images.

The researchers plan to add more sensing and processing capabilities to the chip, and they envision the applications as boundless.