OpenAI wants ChatGPT to stop going crazy, be more human-like

HIGHLIGHTS

Process supervision is aimed at making the AI more human-like.

OpenAI has acknowledged the shortcomings of the system

Experts say that more accuracy and transparency are required from the software

OpenAI is finally working on changes to ChatGPT to build a better interaction program and eliminate AI hallucinations. If you haven’t heard of AI hallucination but have used ChatGPT before, you might have already experienced it. Allow us to explain. 

Has it ever happened that, while you were using ChatGPT or any other AI chatbot, the system just started churning out information and content unrelated to the prompt you put in? That phenomenon is termed an AI hallucination, and it results in misinformation. 


In an effort to reduce these outputs, OpenAI has finally come up with a solution, a training approach it calls process supervision. It is easy to confuse process supervision with outcome supervision: under the latter, the system is rewarded only for the final conclusion of the task, whereas under the former, the new approach, the system is rewarded at every step of the task. 
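To make the distinction concrete, here is a minimal, purely illustrative sketch in Python (not OpenAI’s actual implementation): it contrasts how a toy reward signal would be assigned to a multi-step solution under outcome supervision versus process supervision. The function names, the example steps, and the validity labels are all assumptions made for illustration.

```python
from typing import List


def outcome_supervision_reward(steps: List[str], final_answer_correct: bool) -> List[float]:
    """Outcome supervision: only the final result is judged.
    Every step shares a single reward based on whether the final answer is right."""
    reward = 1.0 if final_answer_correct else 0.0
    return [reward] * len(steps)


def process_supervision_reward(steps: List[str], step_is_valid: List[bool]) -> List[float]:
    """Process supervision: each intermediate step is judged on its own,
    so a single logical error is penalised exactly where it occurs."""
    return [1.0 if ok else 0.0 for ok in step_is_valid[: len(steps)]]


# Hypothetical three-step math solution whose second step contains a mistake.
steps = [
    "2x + 4 = 10",
    "2x = 14   # faulty subtraction",
    "x = 7",
]

print(outcome_supervision_reward(steps, final_answer_correct=False))
# [0.0, 0.0, 0.0] -- the whole chain is scored by the end result alone
print(process_supervision_reward(steps, step_is_valid=[True, False, False]))
# [1.0, 0.0, 0.0] -- the first valid step is still rewarded; the error is localised
```

The point of the sketch is simply that step-level feedback tells the model where a chain of reasoning went wrong, rather than only whether it ended up right. 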

In an official blog post, OpenAI shared mathematical examples in which this approach resulted in better accuracy overall; however, the company says it cannot comment on how process supervision will perform outside the domain of maths. 


OpenAI has previously acknowledged the shortcomings of the software and warned users that ChatGPT can be inaccurate with the information it puts out.   

“Even state-of-the-art models are prone to producing falsehoods — they exhibit a tendency to invent facts in moments of uncertainty. These hallucinations are particularly problematic in domains that require multi-step reasoning, since a single logical error is enough to derail a much larger solution. Detecting and mitigating hallucinations is essential to improve reasoning capabilities.”


With this effort to make the user experience more transparent and seamless, OpenAI aims to build a technology that grounds its answers in human interactions and to structure the system so that it understands humans and responds accordingly. 

Some experts still argue that the software needs more accuracy and transparency, and are also calling for it to be regulated.

Ichha Sharma
Digit.in