As AI (Artificial Intelligence) and ML (Machine Learning) solutions permeate every facet of our lives, users' trust in them has been declining at a slow but steady rate. According to Gartner, multiple incidents of astounding privacy breaches and data misuse have led to a growing dissonance between AI solutions and their consumers. Regulatory scrutiny of AI is rising across the world to combat such breaches. Despite these efforts, Gartner predicts that by 2023, 75 per cent of large organizations working with AI will hire "AI behavior forensic, privacy and customer trust specialists" – in simple words, AI ethicists – to diminish brand and reputation risk. According to a recent KPMG report, the role of an AI ethicist will be one of the most sought-after AI jobs in the coming years.
Bias has long been a risk in training AI models, whether it is based on race, gender, age, location, or on the particular structure of the training data. Additionally, opaque algorithms such as deep learning factor numerous, highly variable interactions into their predictions, which can be extremely tricky to interpret. In a 2018 Deloitte survey, 32 per cent of AI-aware executives ranked the ethical risks of AI among their top AI-related concerns. Many organizations are therefore beginning to pay close attention to ethical issues surrounding AI.
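To make the bias risk concrete: one simple and widely used check is the demographic parity difference, the gap in positive-prediction rates between two groups. The sketch below is a minimal, hypothetical illustration in Python; the function names and the toy data are invented for demonstration and are not from any particular company's toolkit.

```python
# Minimal sketch of a demographic parity check: how different are the
# positive-prediction rates between two demographic groups?
# All names and data here are invented for illustration.

def positive_rate(predictions, groups, group):
    """Fraction of members of `group` that received a positive prediction (1)."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Absolute gap in positive-prediction rates between two groups."""
    return abs(positive_rate(predictions, groups, group_a)
               - positive_rate(predictions, groups, group_b))

# Toy predictions (1 = approved, 0 = rejected) for applicants in groups "a" and "b".
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap = demographic_parity_difference(preds, groups, "a", "b")
print(f"Demographic parity difference: {gap:.2f}")  # 0.80 - 0.20 = 0.60
```

A gap this large would flag the model for review; in practice, auditors combine several such metrics, since no single number captures fairness on its own.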
Microsoft created a specific job profile to address AI's ethical concerns in 2018 by launching the position of AI ethicist. Tim O'Brien, who had worked at Microsoft for 15 years as a general manager, took on this novel role.
An AI ethicist deals with the ethical challenges of AI, including sensitive projects and risk assessment. They attempt to answer a fundamental question in this field: not whether we 'should' use AI, but 'where' AI should be used. An AI ethicist usually works closely with a company's business and legal units. As mentioned above, bias in AI is a huge part of the profile as well. The problem is so large that, in addition to AI ethicists, companies are also beginning to employ AI bias specialists, who define the company's approach to performing AI operations in an unbiased manner.
AI ethicists therefore focus closely on the ethical and social implications of AI and on formulating AI frameworks that uphold the standard code of ethics within the company and beyond. The task of ridding AI of bias is also sometimes taken up by the AI ethicist if the company has not tasked an AI bias specialist separately with this gargantuan responsibility. The role could also be handled by an existing leader in an organization if the company does not have someone working on AI ethics full-time. Companies deploying AI need an AI ethicist who will ensure that ethical principles – including fairness, accountability and transparency – are prioritized when developing an algorithm.
The role of an AI ethicist requires certain qualifications. The person must be technologically literate and should preferably be trained in social sciences, anthropology, ethnography, psychology or other such humanistic disciplines.
AI ethicists are typically in charge of educating employees, as well as customers, on ethical considerations surrounding AI. They may also develop cross-functional teams that act as sounding boards focused on the implications of AI deployments. O'Brien believes, "It is important to think about the diversity of teams that are building AI-enabled products".
An AI ethicist also develops a collection of company policies on how to use AI and related technologies. These could include policies focused on model transparency to avoid algorithmic bias, as well as policies on specific applications such as predictive policing. Essentially, AI ethicists are responsible for ensuring that the output and performance of AI systems stay within the company's prescribed ethical framework.
The call for AI ethics experts, or AI ethicists, has never been more discernible, since even technology leaders are publicly acknowledging that their AI-enabled products may be flawed and detrimental to employment, privacy and human rights. Companies are increasingly likely to face risk owing to faulty AI ethics. Microsoft and Alphabet Inc. have also disclosed AI-related risks in their Securities and Exchange Commission filings. Microsoft's 2018 annual report stated, "AI algorithms may be flawed. If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other issues, we may experience brand or reputational harm."
Google took a similar stance in its 10-K filing (December 2018), stating that AI-enabled products and services can raise ethical challenges which may ultimately harm its brand and the demand for its products and services.
Therefore, the need for businesses, especially large ones, to hire AI ethicists and form AI review boards has never been greater. Companies must develop AI audit trails and run AI training programs. In addition, a Brookings Institution report advised companies to fashion a remediation plan that can be executed if their AI technology inflicts any kind of social harm. For example, Amazon was found to have an internal algorithm under development that analysed resumes from job applicants. According to news reports, the algorithm developed a penchant for male applicants, downgrading women applicants' resumes. AI ethicists will thus become increasingly important as AI is adopted across the globe, since such situations can have grave consequences for companies.
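Hiring outcomes like the one reported at Amazon are often screened with the "four-fifths rule", a rough test for adverse impact under which a group's selection rate should be at least 80 per cent of the highest group's rate. The sketch below is a hypothetical illustration in Python; the function names and the selection numbers are invented and do not reflect any company's actual data.

```python
# Hypothetical sketch of a four-fifths (80%) rule screen for adverse impact
# in hiring: each group's selection rate should be at least 80% of the
# highest group's rate. All numbers below are invented for illustration.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def passes_four_fifths(outcomes):
    """True if every group's rate is >= 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return all(rate >= 0.8 * highest for rate in rates.values())

# Invented screening outcomes loosely resembling the reported scenario.
outcomes = {"men": (60, 100), "women": (30, 100)}
print(passes_four_fifths(outcomes))  # 0.30 < 0.8 * 0.60, so False
```

A failing screen like this would not prove discrimination on its own, but it is exactly the kind of red flag an AI audit trail is meant to surface before a model reaches production.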
Concerns from large companies employing AI in their products and services have led to the advent of the 'AI ethicist' job title. However, questions have been raised about whether this measure is directed towards actually solving AI's ethical issues or is merely a business risk mitigation effort. Looking at the comments from Google and Microsoft in the last section, there is a tangible emphasis on the words 'brand' and 'reputation', which calls into question whether these companies truly care about the implications of unethical AI for their customers or are merely worried about the legal and reputational implications that would soil their brand perception in customers' minds. There have been concerns that AI ethicists are just part of a 'machine-washing' scheme, where companies simply adopt a "trend" to further their public image.
While the intentions of big brands may be unclear, and somewhat worrisome, they did exhibit tremendous transparency in their statements regarding AI ethics. They are worried about 'brand' and 'reputation' but are also working hard to avoid unethical AI behavior, which, ultimately, benefits the end-users. AI ethicists must become visible representatives of companies and their actions and policies must have a tangible impact if they want to escape being labeled as 'machine-washing' schemes. Ultimately, creating effective AI guidelines and ensuring they are upheld is what AI ethicists must strive to do in order to minimize the implications of unethical AI on the company, and more importantly, the consumers.