OpenAI brings video to ChatGPT Advanced Voice Mode
ChatGPT’s Advanced Voice Mode now has video and screenshare capabilities.
The feature was first announced last May alongside the release of GPT-4o, but until now only the audio modality has been live. Users can now chat with ChatGPT using a phone camera, and the model will "see" what you see.
In the livestream, CPO Kevin Weil and other OpenAI team members demoed ChatGPT assisting with making pour-over coffee. With the camera pointed at the action, Advanced Voice Mode showed that it understood how the coffee maker worked and walked the team through brewing. The team also showed off ChatGPT's screensharing support, with the model interpreting an open message on a phone, all while Weil wore a Santa beard.
The long-awaited announcement comes a day after Google unveiled the next generation of its flagship model, Gemini 2.0. Gemini 2.0 can also process visual and audio inputs and has more agentic capabilities, meaning it can perform multi-step tasks on the user's behalf. Gemini 2.0's agent features currently exist as research prototypes under three names: Project Astra, a universal AI assistant; Project Mariner, for task-specific AI; and Jules, an agent for developers.
Not to be outdone, OpenAI's demo showcased how ChatGPT's vision modality accurately identified objects, and even handled being interrupted. And yes, part of this included a Santa voice option in Voice Mode, complete with a deep, jolly voice and lots of "ho-ho-hos." You can chat with OpenAI's version of Santa by tapping the snowflake icon in ChatGPT. No word yet on whether the real Santa Claus contributed his voice for AI training, or whether OpenAI used it without prior consent.
Oddly, users who select the Santa voice in the ChatGPT app are warned that the voice is only for people 13 and older.
Starting today, video and screenshare are available to ChatGPT Plus and Pro users, with Enterprise and Edu availability coming in January.