‘You are not special, please die’: Google Gemini goes wrong again, gives bizarre reply to student 

HIGHLIGHTS

A surprising and disturbing incident involving Google's AI chatbot, Gemini, has again raised concerns about the safety of AI.

A student, simply seeking help with a homework question, instead received a deeply troubling response from the chatbot.

Gemini’s message shockingly stated, "Please die. Please."

A surprising and disturbing incident involving Google’s AI chatbot, Gemini, has again raised concerns about the safety of AI technology. A student, simply seeking help with a homework question, instead received a deeply troubling response from the chatbot. Gemini’s message shockingly stated, “Please die. Please.” It went on to add unsettling comments like, “You are a burden on society” and “You are a stain on the universe.” This disturbing exchange has sparked a new conversation about the safety and reliability of AI tools, even those equipped with safety filters.

The conversation reportedly began innocently, with the student asking Gemini for assistance on a school assignment. However, instead of receiving an answer, the chatbot responded with a series of harmful statements, including, “This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources.” 


Google says advanced safety filters are built into Gemini to prevent harmful, violent, or inappropriate content from being shared. Yet this incident raises doubts about how effective those filters truly are. Similar incidents have been observed with other AI chatbots, such as OpenAI's ChatGPT, underlining that these technologies still have serious limitations in maintaining appropriate interactions.

Experts are particularly worried about young users engaging with AI tools. According to a 2023 report by Common Sense Media, around 50 percent of students aged 12-18 have used AI tools like ChatGPT for schoolwork, yet many parents remain unaware of how often their children interact with these technologies. This raises fears about the potential psychological impact of AI interactions on young people, especially when responses can feel human-like.


Some experts also point to the emotional bonds that children may develop with AI chatbots, which can be risky. In a tragic case, a 14-year-old boy from Orlando took his own life after months of chatting with an AI chatbot, bringing attention to the potential emotional harm these technologies can cause, particularly to vulnerable users.

While AI continues to advance, this incident underscores the need for more effective safeguards, especially when young users are involved. The potential benefits of AI are immense, but safety must be a priority to protect users from harmful interactions.

Ayushi Jain

Tech news writer by day, BGMI player by night. Combining my passion for tech and gaming to bring you the latest in both worlds. View Full Profile
