OpenAI’s AI chatbot ChatGPT has finally received the long-awaited Advanced Voice Mode with Vision feature, allowing users to interact with it in real time using voice and visual inputs. Specifically, ChatGPT Plus, Team, and Pro subscribers can point their phone cameras at objects or share their screens and receive immediate AI responses.
Seven months after it was first demoed, Advanced Voice Mode can now interpret a live video feed, providing explanations, answering questions, and even making suggestions on topics such as math problems or device menu options.
Users can enable the feature by tapping the application’s voice icon and then selecting the video icon, which launches live video mode. Screen sharing can also be enabled via a simple menu option, allowing users to interact with the AI about what is on their display.
The feature is rolling out gradually and is expected to reach all eligible users within the next week, so access will not be immediate for everyone. Enterprise and Edu subscribers will not get it until January next year. In addition, users in the EU, Switzerland, Iceland, Norway, and Liechtenstein will be unable to use Advanced Voice Mode with Vision for now, as there is no set timeline for availability in those regions.
OpenAI introduced the feature as part of its 12 Days of OpenAI event, during which the San Francisco-based company is announcing new updates daily.
Meanwhile, OpenAI has also added a Santa Mode for the holiday season, which lets users chat with ChatGPT in a cheerful tone using Santa’s voice as a preset for responses. Users can access it by tapping the snowflake icon next to the prompt bar in the application.