When AI misbehaves: Google Gemini and Meta AI image controversies

Through his three laws of robotics, Asimov warned us against robots or AI going rogue, but 2024 has us worrying about AI seemingly going woke, and from one of the unlikeliest of big tech sources: Google. The mother of all fumbles from the world’s fourth richest company, one whose products touch the lives of billions of people around the world every single day.

Soon after Google released its text-to-image generation feature to the public in late February 2024, its AI image generator, Gemini, instantly came under fire for producing inaccurate and offensive results. In its attempt to promote diversity, Gemini generated historically inaccurate images, such as Black individuals as Founding Fathers of the United States of America (they were all White) and a woman as the Pope (there has been no female Pope so far). Not just fumbling at inclusivity, Gemini also produced insensitive images, including a person of colour as a Nazi soldier (a historical contradiction at best). These are just a few examples; the controversy surrounding Gemini involved a wider range of problematic outputs, not limited to text-to-image, prompting Elon Musk to label Google Gemini “super racist and sexist!”

In response to the criticism and social media backlash, Sundar Pichai, CEO of Google, addressed the controversy surrounding Gemini’s historically inaccurate blunders. He took full responsibility, acknowledging that the generated responses were “completely unacceptable” and admitting that Google “got it wrong.” Officially, Google attributed Gemini’s problems to issues in the AI model’s development rather than intentional bias, even as it temporarily disabled Gemini’s image generation feature and issued a public apology.

Fundamentally, Gemini’s example goes beyond surface-level racial or historical bias in generative AI applications and points towards the need for more robust AI development and responsible implementation. An AI model’s inability to understand historical context, or the potential offensiveness of certain combinations of outputs, could be exploited to generate harmful content.

That’s precisely what Gemini was trying not to do, according to a blog post by Google executive Prabhakar Raghavan. Google wanted to avoid the problems other image generators had run into, like creating violent or inappropriate pictures of people. It also wanted Gemini to be fair and show all types of people in the images it generated, not just one kind. For example, if you asked for a picture of someone walking a dog, you wouldn’t want to see only people of a certain race. Google was trying to correct for the fact that pictures online overwhelmingly show one type of person in certain roles, like white men as doctors. It wanted to show more variety, which is a fair aspiration to have. Why wouldn’t it, seeing as several billion people use its products and services all over the world? But in trying to avoid complaints about unfairness, Google ended up making new mistakes.

If you think this is only a Google problem, think again. Over this past weekend, Meta’s Imagine AI art generator, which is similar to Google Gemini (for the end user), was found to produce images that depict historical figures in ways that don’t align with actual history. For example, images generated by Meta’s AI tool depicted the Founding Fathers of America as people of colour, and portrayed people of Southeast Asian appearance as examples of Americans from colonial times. Additionally, when prompted for “professional American football players”, Meta AI only produced photos of women in football uniforms, according to an Axios report.

These recent controversies surrounding AI image generators like Google’s Gemini and Meta’s Imagine AI offer a glimpse into just how difficult it is to get an AI model right. Despite having access to an insane amount of public data on their platforms, it’s tricky even for the likes of Google and Meta to prevent their AI from misbehaving. It also highlights how AI doesn’t always understand what we mean: these models have a tendency to generate images or results that are completely off topic or don’t match our instructions, something all of us might have experienced while playing with generative AI tools.

As researchers continue working on making AI models more accurate and better at understanding what we want, that effort might involve using smaller sets of data and clearer instructions when training future models. As the quest to balance diversity and accuracy continues, ethical considerations in responsible and safe AI development shouldn’t be ignored. As Asimov rightly said: “A robot may not harm humanity, or, through inaction, allow humanity to come to harm.” Ultimately, the stakes extend far beyond just instant image generation, wouldn’t you agree?

Jayesh Shinde

Executive Editor at Digit. Technology journalist since Jan 2008, with stints at Indiatimes.com and PCWorld.in. Enthusiastic dad, reluctant traveler, weekend gamer, LOTR nerd, pseudo bon vivant.
