OpenAI’s Post


We're sharing an update on the advanced Voice Mode we demoed during our Spring Update, which we remain very excited about: we had planned to start rolling this out in alpha to a small group of ChatGPT Plus users in late June, but need one more month to reach our bar to launch. For example, we're improving the model's ability to detect and refuse certain content. We're also working on improving the user experience and preparing our infrastructure to scale to millions while maintaining real-time responses.

As part of our iterative deployment strategy, we'll start the alpha with a small group of users to gather feedback and expand based on what we learn. We are planning for all Plus users to have access in the fall. Exact timelines depend on meeting our high safety and reliability bar. We are also working on rolling out the new video and screen-sharing capabilities we demoed separately, and will keep you posted on that timeline.

ChatGPT's advanced Voice Mode can understand and respond with emotions and non-verbal cues, moving us closer to real-time, natural conversations with AI. Our mission is to bring these new experiences to you thoughtfully.

Tim Jackson

IT Delivery: Security, Productivity, Innovation

2w

You haven't even finished rolling out memory to Plus users in the UK yet. I'm stuck paying £20 for the same service free users get. Switching to Anthropic.

Ravi Rai

Freelance Machine Learning Engineer | Building and Innovating AI Apps with AWS | 5 Years in Math Research and Applied ML

2w

Who's the voice this time 😂

Valentin RUDLOFF

Tech Lead at Worldline | ISTQB Certified

2w

How do I get into the group of alpha users? Or even beta?

Adam Davis

Product Design at Dell Technologies, Human-AI Experiences

2w

“We’re also working on improving the user experience…” = music to my ears. Delay the launch to reach the bar you set for your users. Deliver a high quality experience and customers will continue to love (and trust) your product.

Voice mode is the b*mb. Use case I'm developing: CogBot goes through a number of question types and exercises during an evening check-in while dog walking. Over time it can both detect and decrease cognitive decline, partly by evaluating the conversation against an established historical user profile (topic preferences, level of detail, tests on current news, etc.).

🌟 I recently shared a demo of my own AI receptionist handling calls and scheduling seamlessly 🚀. The potential for these technologies to enhance efficiency and deliver real-time, natural conversations is enormous. My receptionist could:
- Handle inbound/outbound calls.
- Answer any client queries.
- Schedule meetings.
Can't wait for the new update. Will AI replace humans 🙄?

Exciting news! ChatGPT's Voice Mode advancements sound promising, and we look forward to experiencing the improved user interactions. At ModalX, we're also pushing the boundaries of voice interaction, weaving together technology and AI to redefine how humans and machines converse. Together, we're paving the way for more natural and engaging AI-driven conversations. 🔥

Szymon Stasik

proactive Tech Lead 🧑💼 viable Software Engineer 🖥️ AI Performance & Integration 🧠 Mobile 📱 passionate about DevOps & Agile 🚀 let's collaborate to achieve impactful results!

2w

Meanwhile, GPT-4o is hard to chat with. It's stubborn, answering everything at once, and even after a simple question or a request to fix one mistake, it repeats the full answer, mistakes included, or makes random changes. To get what I need, I have to provide very detailed instructions; it wasn't like that with GPT-4.

Mary Coyne

IAE Chambéry - English for specific purposes - International Management and Tourism. International Relations Certification training: Test Centre Administrator for TOEIC at IAE. Other : IELTS, Linguaskills, TOEFL

2w

It would be great if it could "hear" and correct second-language learners of English so our students could train their fluency at home. For the moment it transforms text to speech and speech to text. When will it be able to hear (and correct)?
