OpenAI under fire: Mac app security flaw and whistleblower’s claims raise concerns

HIGHLIGHTS

Two major security issues have raised concerns about OpenAI's handling of user data and internal security practices.

The first issue revolves around OpenAI's Mac app for ChatGPT.

The second issue dates back to 2023 but continues to have repercussions today.

OpenAI has been making headlines recently, but not for the reasons the company would have hoped. Two major security issues have raised concerns about the company’s handling of user data and internal security practices.

The first issue revolves around OpenAI’s Mac app for ChatGPT. Earlier this week, Pedro Jose Pereira Vieito, an engineer and Swift developer, discovered a troubling flaw: the app was storing user conversations locally in plain text, rather than encrypting them.

This means that potentially sensitive chats could be easily accessed by other apps or malware on a user’s device. What’s more, since the app isn’t available on the App Store, it sidesteps Apple’s stringent sandboxing requirements.
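
To make the risk concrete, here is a minimal Swift sketch, assuming a hypothetical storage path, of how any process running as the same user could have read those conversations:

```swift
import Foundation

// Minimal sketch: any process running as the same macOS user can read a
// plain-text file in another app's data directory. The directory and file
// name below are hypothetical, not the ChatGPT app's actual location.
let chatStore = FileManager.default
    .homeDirectoryForCurrentUser
    .appendingPathComponent("Library/Application Support/ExampleChatApp/conversations.json")

// No decryption or special privileges required: the chats read back as plain text.
if let contents = try? String(contentsOf: chatStore, encoding: .utf8) {
    print(contents)
}
```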



Vieito’s findings gained attention after being covered by The Verge (via Engadget), prompting OpenAI to swiftly release an update that encrypts locally stored chats. For those less familiar with the jargon, sandboxing is a security practice that confines each application to its own isolated environment, limiting how far a vulnerability can spread across a system.
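
OpenAI hasn’t published the details of its fix, but a common way to encrypt data at rest on macOS is Apple’s CryptoKit framework. The sketch below is illustrative only, with assumed key handling and message format, and is not OpenAI’s actual implementation:

```swift
import CryptoKit
import Foundation

// Illustrative sketch of at-rest encryption for locally stored chats.
// Key handling and the message format are assumptions for this example;
// this is not OpenAI's published implementation.

// In a real app the key would live in the Keychain (or be protected by the
// Secure Enclave) rather than being regenerated on every launch.
let key = SymmetricKey(size: .bits256)

func encryptChat(_ plaintext: String, with key: SymmetricKey) throws -> Data {
    let sealed = try AES.GCM.seal(Data(plaintext.utf8), using: key)
    // `combined` packs nonce + ciphertext + tag into one blob for disk storage.
    return sealed.combined!
}

func decryptChat(_ stored: Data, with key: SymmetricKey) throws -> String {
    let box = try AES.GCM.SealedBox(combined: stored)
    return String(decoding: try AES.GCM.open(box, using: key), as: UTF8.self)
}

// Usage: encrypt before writing to disk, decrypt after reading back.
let blob = try encryptChat("user: hello", with: key)
let roundTrip = try decryptChat(blob, with: key)
assert(roundTrip == "user: hello")
```

AES-GCM is a typical choice here because it authenticates the ciphertext as well as encrypting it, so tampering with the stored file is detectable when the data is read back.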

The second issue dates back to 2023 but continues to have repercussions today. That year, a hacker gained unauthorized access to OpenAI’s internal messaging systems. The breach not only exposed weaknesses in the company’s security protocols but also sparked internal controversy: Leopold Aschenbrenner, a technical program manager at OpenAI, argued that the incident revealed vulnerabilities that foreign adversaries could exploit.

Aschenbrenner’s advocacy for improved security practices allegedly led to his termination, with OpenAI asserting to The New York Times that his departure was unrelated to whistleblowing. This underscores broader tensions within tech companies regarding how security concerns are managed and communicated internally.



These incidents highlight common challenges in the tech industry, where app vulnerabilities and cybersecurity breaches are unfortunately frequent. For OpenAI, a company playing a pivotal role in AI development and deployment, these issues raise significant questions about its ability to safeguard user data and maintain robust internal security measures amidst its ambitious goals.

As ChatGPT becomes more widely used in different services and OpenAI becomes more influential, people will pay closer attention to how well the company protects user information and maintains its reputation. It’s a tough balancing act for tech firms: they need to innovate quickly but also make sure their security measures are strong enough. This challenge is especially critical for companies leading in advanced technologies like artificial intelligence.

Ayushi Jain


Tech news writer by day, BGMI player by night. Combining my passion for tech and gaming to bring you the latest in both worlds.
