Google will abide by these seven principles to ensure responsible use of AI
In a blog post penned by Google CEO Sundar Pichai, the company has shared seven principles that will guide its approach to AI development and future applications of the technology.
At Google I/O 2018, CEO Sundar Pichai stressed the importance of responsible use of AI by the company. The announcements at I/O were also centered on Google's various achievements in the fields of artificial intelligence and machine learning. At the time, Pichai said that the company would not take a "wide-eyed" approach to AI and would get things done responsibly.
Pichai's statements came in light of the public ire that Google's controversial military pilot project drew for a good part of this year. Project Maven, a programme that let the US military use Google's vision recognition systems to guide drones and analyse the footage they gathered, caused an internal upheaval at the company, with a number of employees resigning in protest against unethical applications of AI technology.
After telling employees last week that it would be terminating its contract with the US Department of Defense next year, Google has now laid down a set of principles which the company will abide by to ensure that its AI technology is used and distributed responsibly as well as ethically.
In a blog post dedicated to its self-created and self-imposed AI fundamentals, Google has made it clear that it will not dedicate its AI efforts to developing “technologies that cause or are likely to cause overall harm”. The company is also pledging not to apply its AI tech in “Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.” Google will neither create AI-based surveillance tools that violate internationally accepted norms nor contravene widely accepted principles of international law and human rights, Pichai wrote in the blog post.
“We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right,” Google declared, setting down the seven principles it will follow in the current and future development of artificially intelligent technologies.
Summed up, Google's seven-point AI mandate commits the company to creating socially relevant and beneficial applications of the technology, and to securing AI applications against unintended results that create risks of harm.
One of the seven principles notes that the company will apply its current user privacy approach to how it distributes its AI technology. “We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data,” Pichai wrote. While Google shunned the weaponisation of AI, it did say that it will continue to work with the military in other areas such as cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.
"These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe," Pichai wrote in conclusion.
Read all of Google’s seven AI principles in the company’s blog post.