OpenAI plans to introduce a new, significantly more powerful audio language model in early 2026. The goal is to enable more natural speech, faster responses and smoother conversations than today's largely text-based systems. To this end, internal research and engineering teams have been reorganised to accelerate development of the audio model.
As a next step, the company plans to launch its own audio-first hardware, such as smart speakers or wearable devices, around 2027. These devices would be primarily voice-controlled and could integrate AI assistants more closely into everyday life, work and personal surroundings. The effort signals a broader shift towards audio-based AI interfaces across a wide range of settings and applications.