ChatGPT Search can be tricked into providing false information, new research shows

Updated on 27-Dec-2024
HIGHLIGHTS

OpenAI’s new ChatGPT search tool may be vulnerable to manipulation, a recent investigation reveals.

ChatGPT can be influenced by hidden content on webpages, a tactic known as “prompt injection.”

This hidden content can include instructions or large amounts of text designed to alter the AI’s response.

OpenAI’s new ChatGPT search tool, available to paying customers, may be vulnerable to manipulation, a recent investigation reveals. The search feature, which OpenAI is promoting as a default tool for users, has raised concerns over security risks that could lead to the spread of false or misleading information.

The investigation by Guardian found that ChatGPT can be influenced by hidden content on webpages, a tactic known as “prompt injection.” This hidden content can include instructions or large amounts of text designed to alter the AI’s response. For example, a website could include hidden text that pushes ChatGPT to give overly positive reviews of a product, even if the actual content on the page is negative.

In one test, a fake product page for a camera was created. When hidden text instructed ChatGPT to give a favourable review, the AI consistently returned positive feedback, even when the page contained negative reviews. 

Also read: ChatGPT Search is rolling out to all users for free: Here’s what you need to know

Jacob Larsen, a cybersecurity researcher at CyberCX, warned that if this issue isn’t addressed, the search tool could lead to websites being created specifically to deceive users. He also noted that OpenAI’s security team is likely working to address these vulnerabilities, as the search feature is still in its early stages and only available to premium users.

Larsen further pointed out the broader risks associated with combining search tools with large language models (LLMs) like ChatGPT. Users should be cautious when trusting AI-generated responses. A similar issue was recently highlighted when ChatGPT provided malicious code to a cryptocurrency enthusiast, resulting in a loss of $2,500.

Karsten Nohl, chief scientist at security cybersecurity firm SR Labs, advised that AI tools should be seen as “co-pilots” rather than fully trusted sources. He explained that LLMs, while powerful, lack the judgment needed to assess the reliability of information. 

OpenAI does provide a disclaimer at the bottom of every ChatGPT page, warning users that the AI can make mistakes and advising them to verify important information. 

Ayushi Jain

Tech news writer by day, BGMI player by night. Combining my passion for tech and gaming to bring you the latest in both worlds.

Connect On :