Engineers at Binghamton University’s Computer Science Department in New York State have been working on a robotic seeing-eye dog to increase accessibility for visually impaired people.
Only a small fraction of blind and visually impaired people ever get to work with a real seeing-eye dog. Real seeing-eye dogs cost about $50,000 and take two to three years to train, and only about half of the dogs that undergo training successfully graduate and go on to serve visually impaired handlers.
Seeing-eye robot dogs offer a possible alternative that could reduce cost, increase efficiency, and expand accessibility. Robot dogs equipped with sensors, cameras, and artificial intelligence can help elderly and visually impaired people navigate their environment and perform daily tasks, potentially improving their quality of life and independence.
Last year, the Binghamton researchers performed a trick-or-treating exercise with a quadruped robotic dog. Now the team is using the robot for something “much more important”: they presented a demonstration in which the robot dog led a person around a lab hallway, responding carefully and confidently to the handler’s input.
This is one of the early attempts at developing a seeing-eye robot, made possible by recent advances in quadruped technology and its falling cost. After nearly a year of development, the team built a unique leash-tugging interface, implemented through reinforcement learning.
“In about 10 hours of training, these robots are able to move around, navigating the indoor environment, guiding people, avoiding obstacles, and at the same time, being able to detect the tugs,” Assistant Professor Shiqi Zhang said.
The tugging interface allows the user to pull the robot in a certain direction at an intersection in a hallway, and the robot turns in response. However, the researchers caution that while the technology shows promise, further research and development are needed before it is suitable for some environments.
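The article does not describe how the robot interprets a tug; the team’s actual detector is learned via reinforcement learning. As an illustration only, the underlying idea of turning a leash-force reading into a discrete turn command can be sketched with hand-set thresholds (the sensor reading, threshold, and angle cutoffs below are all assumptions, not details from the project):

```python
import math

# Assumed threshold (newtons) below which leash force is treated as
# slack or noise rather than a deliberate tug.
TUG_THRESHOLD = 5.0

def classify_tug(force_xy):
    """Map a 2D leash-force vector in the robot's frame
    (+x forward, +y left) to a command for the next intersection."""
    fx, fy = force_xy
    magnitude = math.hypot(fx, fy)
    if magnitude < TUG_THRESHOLD:
        return "none"  # slack leash: keep current heading
    angle = math.degrees(math.atan2(fy, fx))
    if angle > 30:
        return "left"   # handler pulled toward the robot's left
    if angle < -30:
        return "right"  # handler pulled toward the robot's right
    return "straight"
```

A learned policy would replace these fixed cutoffs with behavior tuned from the roughly 10 hours of training Zhang describes, but the input-to-command mapping is the same shape.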
“Our next step is to add a natural language interface. So ideally, I could have a conversation with the robot based on the situation to get some help,” PhD student David DeFazio said. “Also, intelligent disobedience is an important capability. For example, if I’m visually impaired and I tell the robot dog to walk into traffic, we would want the robot to understand that. We should disregard what the human wants in that situation. Those are some future directions we’re looking into.”
The researchers are also in touch with the Syracuse chapter of the National Federation of the Blind in order to get direct and valuable feedback from members of the visually impaired community, input that will help guide further research. That feedback has led the team to believe the robots might be most useful in specific environments.
The ability to store maps of complex environments gives the robots an advantage over real seeing-eye dogs in guiding visually impaired people. The robots can use their maps to plan the best routes and avoid obstacles, while the real dogs may rely more on their instincts and training.
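The article does not say which planning algorithm the robots use. A common minimal formulation of route planning over a stored map is a shortest-path search on an occupancy grid; the grid encoding and function below are illustrative assumptions, shown here with breadth-first search:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Return a shortest list of (row, col) cells from start to goal
    on an occupancy grid (0 = free, 1 = obstacle), or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # also serves as the visited set
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the parent links back to start, then reverse.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0
                    and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable
```

A real robot would plan on a metric map with continuous poses and replan as sensors report new obstacles, but the stored-map advantage the researchers describe reduces to this kind of search.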
“If this is going well, then potentially, in a few years, we can set up this seeing-eye robot dog at shopping malls and airports. It’s pretty much like how people use shared bicycles on campus,” Zhang said in an official statement.
While still in its early stages, the team believes this research is a promising step for increasing the accessibility of public spaces for the visually impaired community.