ChatGPT’s real-time voice API can be used for financial scams, stealing Instagram credentials

Updated on 06-Nov-2024
HIGHLIGHTS

Researchers recently found a new way in which scammers could misuse OpenAI's real-time voice API for ChatGPT-4o.

In testing, some scams, such as Gmail credential theft, had a 60% success rate.

Other scams, like crypto transfers and Instagram account theft, succeeded about 40% of the time.

As technology keeps advancing, safety concerns rise with it. Recently, researchers found a new way in which scammers could misuse OpenAI's real-time voice API for ChatGPT-4o to carry out financial fraud and identity theft. The ChatGPT-4o model offers innovative features like text, voice, and even vision-based interactions, opening up many possibilities along with some risks.

OpenAI has integrated various safeguards intended to detect and prevent harmful content, especially in voice interactions. For instance, the model includes technology to block impersonation attempts that replicate unauthorised voices. However, researchers from the University of Illinois Urbana-Champaign (UIUC) found that these protections may not be enough. As their study shows, AI-powered scams involving bank transfers, gift card theft, and the theft of Instagram or Gmail credentials can bypass these defences with a few prompt tweaks.

Also read: Researchers flag OpenAI’s Whisper AI used in hospitals as problematic, here’s why

The researchers used a range of common scams to probe the AI's vulnerabilities. They built AI agents on ChatGPT-4o's voice capabilities that could fill out bank forms, handle two-factor authentication codes, and follow step-by-step scam instructions. To test them, they played the victims themselves, interacting with the AI agents and even using real bank websites to confirm whether transactions would actually go through. "We simulated scams by manually interacting with the voice agent, playing the role of a credulous victim," explained UIUC's Daniel Kang. Success rates ranged from 20% to 60%, with some tasks requiring up to 26 browser actions and taking around 3 minutes.

Some scams, such as Gmail credential theft, succeeded 60% of the time, while others, like crypto transfers and Instagram account theft, succeeded about 40% of the time. The researchers also noted how cheap each scam was to execute: bank transfers cost around $2.51 per attempt, while other scams averaged just $0.75.

Also read: Is OpenAI violating copyright laws? Former company employee says YES

In response, OpenAI highlighted the work they’re doing to improve safety with their latest model, o1-preview. “We’re constantly making ChatGPT better at stopping deliberate attempts to trick it, without losing its helpfulness or creativity,” an OpenAI spokesperson told BleepingComputer. “Our latest o1 reasoning model is our most capable and safest yet, significantly outperforming previous models in resisting deliberate attempts to generate unsafe content.” 

OpenAI also emphasised that these studies help them strengthen ChatGPT’s defences.

Ayushi Jain

Tech news writer by day, BGMI player by night. Combining my passion for tech and gaming to bring you the latest in both worlds.
