Google and other technology giants work hard to keep their maps as detailed and up to date as possible, but doing so is an expensive and time-consuming process. And in some areas, the data is simply limited.
To improve this, researchers at MIT and the Qatar Computing Research Institute (QCRI) have developed a new machine-learning model based on satellite images that could significantly improve digital maps for GPS navigation. The system, called “RoadTagger,” recognizes road types and the number of lanes in satellite images, even when trees or buildings obscure the view. In the future, the system could also recognize finer details, such as bike paths and parking spaces.
RoadTagger relies on a novel combination of a convolutional neural network (CNN), which extracts visual features from the satellite imagery, and a graph neural network (GNN), which shares those features between connected road segments, to automatically predict the number of lanes and road types (residential or highway) even behind obstructions.
Simply put, the model is end-to-end: it is fed only raw satellite imagery and produces its predictions without human intervention. It can, for example, infer the type of a road, or how many lanes it has behind a grove of trees, from the visual characteristics of the surrounding imagery.
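The core idea behind combining the two networks can be illustrated with a toy sketch. The code below is not the authors' implementation: the graph, feature vectors, and threshold are all hypothetical stand-ins. It shows only the GNN-style message-passing step, where an occluded road segment (with almost no visual signal of its own) inherits context from its visible neighbors along the road network.

```python
# Toy road graph: five connected road segments; segment 2 is "occluded"
# (e.g. hidden by trees), so its feature vector carries almost no signal.
# Each 2-dim vector stands in for a CNN's output on that segment's
# satellite-image patch -- hypothetical values, not real RoadTagger features.
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
features = {
    0: [1.0, 0.0],    # clear view, strong "highway"-like signature
    1: [0.9, 0.1],
    2: [0.1, 0.1],    # occluded: nearly no signal of its own
    3: [0.95, 0.05],
    4: [1.0, 0.0],
}

def propagate(feats, adj, steps=2):
    """GNN-style message passing: each segment averages its own features
    with its neighbors', so occluded segments borrow context from
    visible ones along the road graph."""
    out = {n: list(v) for n, v in feats.items()}
    for _ in range(steps):
        new = {}
        for node, nbrs in adj.items():
            group = [out[node]] + [out[n] for n in nbrs]
            new[node] = [sum(col) / len(group) for col in zip(*group)]
        out = new
    return out

smoothed = propagate(features, adjacency)

# A trivial stand-in classifier: threshold the first feature dimension.
labels = {n: "highway" if v[0] > 0.5 else "residential"
          for n, v in smoothed.items()}
```

After propagation, the occluded segment's features have moved toward those of its neighbors, so it is classified consistently with the rest of the road despite having little visual evidence of its own.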
The research team has already tested RoadTagger on real data covering 688 square kilometers of maps across 20 U.S. cities, achieving 93% accuracy in detecting road types and 77% in counting lanes.
Maintaining this degree of accuracy in digital maps would not only save time and spare drivers many headaches but could also prevent accidents. And of course, it would be vital information in emergencies or disasters.
The researchers now want to further improve the system and have it record additional properties, including bike paths, parking bays, and the road surface – after all, it matters to drivers whether a remote road that was once a gravel track has since been paved.