Researchers at the University of Lincoln are working to create an effective visual-aid system that leverages the spatial-awareness capabilities of devices like Project Tango, which have started to appear on the market. These devices carry an array of cameras and depth sensors that give them a three-dimensional view of their surroundings. The Lincoln Centre for Autonomous Systems has previously made progress in indoor mapping and object recognition, and those findings have fed into the creation of this new visual-aid system.
The researchers have developed an interface that recognizes the objects around it and relays this information to the user through vibration, sound, or a spoken hint, depending on the user and the object.
In essence, the interface learns over time how the user responds to particular cues, allowing it to convey information more effectively. The key difference from similar visual-aid devices is that it adapts dynamically to both the user and the object, making recognition faster and more reliable.
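The article does not describe the learning method, but the adaptive behaviour it reports resembles a simple contextual bandit: try a feedback modality, observe how well the user responds, and favour whatever has worked for that kind of object. The Python sketch below is a hypothetical illustration of that idea only; names such as AdaptiveFeedbackSelector are invented here and do not come from the researchers' actual implementation.

import random
from collections import defaultdict

MODALITIES = ["vibration", "sound", "speech"]

class AdaptiveFeedbackSelector:
    """Pick the feedback modality that has worked best for a given
    object category, while occasionally exploring alternatives."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon               # exploration rate
        self.successes = defaultdict(float)  # (category, modality) -> reward sum
        self.attempts = defaultdict(int)     # (category, modality) -> trial count

    def choose(self, category):
        # With probability epsilon, explore a random modality...
        if random.random() < self.epsilon:
            return random.choice(MODALITIES)
        # ...otherwise exploit the modality with the best observed success rate.
        def rate(m):
            n = self.attempts[(category, m)]
            return self.successes[(category, m)] / n if n else 0.0
        return max(MODALITIES, key=rate)

    def record(self, category, modality, succeeded):
        # Update statistics from the user's observed response, e.g. whether
        # they reacted quickly and correctly to the cue.
        self.attempts[(category, modality)] += 1
        self.successes[(category, modality)] += 1.0 if succeeded else 0.0

# Example: the user has responded well to vibration cues for doorways,
# so the selector will usually pick vibration for that category again.
selector = AdaptiveFeedbackSelector()
selector.record("doorway", "vibration", succeeded=True)
print(selector.choose("doorway"))

Over many interactions, a mechanism like this would converge on the cues a particular user responds to best, which matches the adaptive behaviour the researchers describe.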
"There are also existing smartphone apps that are able to, for example, recognise an object or speak text to describe places. But the sensors embedded in the device are still not fully exploited.”, said Dr Nicola Bellotto, who is the lead researcher for the project.
This new research project brings together several technologies, chiefly indoor mapping and machine learning, to create an interface that communicates with the user effectively. Notably, it can be integrated into smartphones equipped with the necessary cameras and sensors, such as Project Tango devices, so as more of them reach the market the visual-aid interface could be adopted more quickly.