OpenAI debuts Whisper API for speech-to-text transcription and translation

Mar 2023

To coincide with the rollout of the ChatGPT API, OpenAI today launched the Whisper API, a hosted version of the open source Whisper speech-to-text model that the company released in September.

Priced at $0.006 per minute, Whisper is an automatic speech recognition system that OpenAI claims enables "robust" transcription in multiple languages as well as translation from those languages into English. It takes files in a variety of formats, including M4A, MP3, MP4, MPEG, MPGA, WAV and WEBM.
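For developers, access looks like a single API call. Here's a minimal sketch using the openai Python package's launch-era interface; the API key and the filename "speech.mp3" are placeholders:

```python
# Minimal sketch of the Whisper API via the openai Python SDK (v0.27-era).
import openai

openai.api_key = "sk-..."  # placeholder; use your own key

# Transcription: audio in, text out in the spoken language.
with open("speech.mp3", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)
print(transcript["text"])

# Translation: audio in a supported language, English text out.
with open("speech.mp3", "rb") as audio_file:
    translation = openai.Audio.translate("whisper-1", audio_file)
print(translation["text"])
```

At $0.006 per minute, a one-hour recording would cost about $0.36 to transcribe.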

Countless organizations have developed highly capable speech recognition systems, which sit at the core of software and services from tech giants like Google, Amazon and Meta. But what makes Whisper different is that it was trained on 680,000 hours of multilingual and "multitask" data collected from the web, according to OpenAI president and chairman Greg Brockman, which led to improved recognition of unique accents, background noise and technical jargon.

"We released a model, but that actually was not enough to cause the whole developer ecosystem to build around it," Brockman said in a video call with TechCrunch yesterday afternoon. "The Whisper API is the same large model that you can get open source, but we've optimized to the extreme. It's much, much faster and extremely convenient."

To Brockman's point, there are plenty of barriers to enterprises adopting voice transcription technology. According to a 2020 Statista survey, companies cite accuracy, accent- or dialect-related recognition issues and cost as the top reasons they haven't embraced tech like speech-to-text.

Whisper has its limitations, though -- particularly in the area of "next-word" prediction. Because the system was trained on a large amount of noisy data, OpenAI cautions that Whisper might include words in its transcriptions that weren't actually spoken -- likely because it's simultaneously trying to predict the next word in the audio and to transcribe the recording itself. Moreover, Whisper doesn't perform equally well across languages, suffering from a higher error rate with speakers of languages that aren't well-represented in the training data.

That last bit is nothing new to the world of speech recognition, unfortunately. Biases have long plagued even the best systems, with a 2020 Stanford study finding systems from Amazon, Apple, Google, IBM and Microsoft made far fewer errors -- about 19% -- with users who are white than with users who are Black.

Despite this, OpenAI sees Whisper's transcription capabilities being used to improve existing apps, services, products and tools. Already, AI-powered language learning app Speak is using the Whisper API to power a new in-app virtual speaking companion.

If OpenAI can break into the speech-to-text market in a major way, it could be quite profitable for the Microsoft-backed company. According to one report, the segment could be worth $5.4 billion by 2026, up from $2.2 billion in 2021.

"Our picture is that we really want to be this universal intelligence," Brockman said. "We really want to, very flexibly, be able to take in whatever kind of data you have -- whatever kind of task you want to accomplish -- and be a force multiplier on that attention."