Robotics and automation have steadily evolved over time, producing machines capable of executing complex tasks without supervision. We’ve built robots with biodegradable skin, advanced vision units, bipedal stability, camouflage and many other capabilities, but one aspect has remained noticeably missing: imparting sentiments, feelings and a sense of apprehension and judgement to artificial beings. It is exactly this that researchers at Carnegie Mellon University have been working on, and their latest paper explains how robots may one day be given a sense of what to avoid, and what to embrace.
This particular notion, which plays a crucial role in saving us humans from many unfortunate circumstances, is self-doubt. Rooted in how your neurons react to situations you have faced before, self-doubt rises to stop you from running into danger. Memory plays an important role here, reminding you of what you have encountered in the past, and based on intuition and self-doubt you gauge whether or not to “risk” a situation. For robots, machine learning and deep neural networks handle the ‘learning’ and ‘remembering’ of past incidents, after which vision processing units and artificial intelligence use that stored data to gauge which situations may be risky, raising self-doubt and steering the machine away from harm.
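To make this concrete, here is a minimal sketch, in Python, of what such an introspection model could look like: a classifier trained on features from past flights, labelled by whether the system later ran into trouble. The data, features and threshold are invented for illustration; this is not the researchers’ actual implementation.

```python
# A hedged sketch of learned "self-doubt": a classifier maps image features
# to the probability that acting on the current perception will end badly.
# All data, features and labels below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: each row summarises one camera frame
# (e.g. blur, contrast, texture statistics); the label records whether
# the navigation system failed shortly after seeing that frame.
X_train = rng.normal(size=(500, 8))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 1.0).astype(int)

# Train the "self-doubt" model on remembered incidents.
introspection = LogisticRegression().fit(X_train, y_train)

def failure_probability(features: np.ndarray) -> float:
    """Estimated probability that the perception system is about to fail."""
    return float(introspection.predict_proba(features.reshape(1, -1))[0, 1])
```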
The process, though, is quite complicated. While all of this comes naturally to us humans, a machine would also have to gauge the implications of a decision in the immediate future, and whether or not it would benefit from taking it. To illustrate this, the researchers at Carnegie Mellon used a drone flying a path with obstacles. Immediate physical obstacle detection and avoidance is something many commercial drones like the DJI Phantom 4 already offer; the latest research paper goes further, covering elements like weather that may harm the drone’s sensors and potential hazards along the flight route, using onboard processing units and image data from the drone’s cameras to exercise self-doubt and precaution.
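Building on the sketch above, a drone could use that estimated failure probability to decide whether to trust its planner or fall back to a cautious behaviour. The threshold and fallback action here are illustrative assumptions, not values from the paper.

```python
# Hypothetical decision rule gating a drone's actions with "self-doubt".
# Reuses failure_probability() from the previous sketch.
DOUBT_THRESHOLD = 0.3  # assumed value; would be tuned on validation flights

def choose_action(features: np.ndarray, planned_action: str) -> str:
    """Follow the planner only when the estimated risk is acceptably low."""
    if failure_probability(features) > DOUBT_THRESHOLD:
        return "hover_and_replan"  # cautious fallback: stop and reassess
    return planned_action          # confidence is high enough; proceed
```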
In the laboratory experiment, researchers Shreyansh Daftry, Sam Zeng, J. Andrew Bagnell and Martial Hebert modified a drone’s algorithms and flew it in a park without any human intervention, achieving double the flying time of a standard drone. This was made possible by vision-based autonomous navigation and quantifiable introspection abilities. While it remains a laboratory result, it marks a significant step forward in instilling neural reactions and abstract values in artificial beings. The concept is not entirely new; previous research in this field has tried to combine multiple decision-making and deep learning algorithms.
Instilling precaution and self-doubt in robots and autonomous, artificially intelligent beings may well be the only way to prevent them from running into major hazards. As research continues from this first step towards self-aware automation, it may even lead to instilling a sense of emotion in robots. Is that day really so far away?