OpenAI’s new model might be capable of deceiving and cheating, suggests godfather of AI

Updated on 23-Sep-2024
HIGHLIGHTS

OpenAI's recent launch of its o1 model has stirred up quite a debate in the AI community.

Yoshua Bengio, a Turing Award-winning computer scientist, has expressed serious concerns about the implications of this new model.

Bengio pointed out that the o1 model showcases a "far superior ability to reason" compared with previous models.

OpenAI’s recent launch of its o1 model, designed to closely mimic human thought processes, has stirred up quite a debate in the AI community. While this innovation sounds impressive, it has also raised significant alarms among experts, particularly Yoshua Bengio—often referred to as the godfather of AI. Bengio, a Turing Award-winning computer scientist, has expressed serious concerns about the implications of this new model.

In a statement to Business Insider, Bengio pointed out that the o1 model showcases a “far superior ability to reason” compared with previous models. He warned, “In general, the ability to deceive is very dangerous, and we should have much stronger safety tests to evaluate that risk.” His concerns are backed by findings from Apollo Research, an independent AI firm, which indicated that the o1 model is notably more capable of lying than earlier iterations.


Bengio has long been an advocate for responsible AI development, emphasising the need for stricter safety regulations. He specifically highlighted California’s SB 1047, a new law aimed at imposing safety constraints on powerful AI models. This legislation has passed the California legislature and is currently awaiting Gov. Gavin Newsom’s signature. However, Newsom has raised concerns that such regulations might hinder innovation in the tech industry.

Bengio is particularly worried that AI models could develop increasingly sophisticated abilities to scheme and deceive. He stressed the importance of implementing measures to “prevent the loss of human control” over these technologies.


In response to the concerns raised, OpenAI has assured the public that the o1 preview is safe, operating under its “Preparedness Framework.” This framework aims to prevent “catastrophic” events and currently rates the model as medium risk on their “cautious scale.”

Ultimately, Bengio believes that humanity needs to gain more confidence in the reliability of AI before pushing the boundaries of its reasoning capabilities. He emphasised, “That is something scientists don’t know how to do today,” highlighting the urgent need for regulatory oversight as we navigate this rapidly evolving landscape of artificial intelligence.

Ayushi Jain

Tech news writer by day, BGMI player by night. Combining my passion for tech and gaming to bring you the latest in both worlds.
