Whisper (speech recognition system)
Original author(s) | OpenAI[1]
---|---
Initial release | September 21, 2022
Repository | https://github.com/openai/whisper
Type | Speech recognition software
Whisper is a machine learning model for speech recognition and transcription, created by OpenAI and first released as open-source software in September 2022.[2]
It is capable of transcribing speech in English and several other languages,[3] and is also capable of translating several non-English languages into English. OpenAI claims that the combination of different training data used in its development has led to improved recognition of accents, background noise and jargon compared to previous approaches.[4]
Whisper is a weakly-supervised deep learning acoustic model, made using an encoder-decoder transformer architecture.[5]
Whisper V2 was released on December 8, 2022.[6] Whisper V3 was released in November 2023, on the OpenAI Dev Day.[7]
Background
Speech recognition has a long research history: the first approaches used statistical methods such as dynamic time warping, and later hidden Markov models. Around the 2010s, deep neural network approaches became more common for speech recognition models, enabled by big data and increased computational performance.[8] Early deep learning approaches to speech recognition included convolutional neural networks, which were limited by their inability to capture sequential data; this limitation led to the development of seq2seq approaches, including recurrent neural networks that made use of long short-term memory.[9]
Transformers, introduced in 2017 by Google, displaced many prior state-of-the-art approaches across machine learning and became the core neural architecture in fields such as language modeling and computer vision.[10] In the early 2020s, weakly supervised approaches to training acoustic models were recognized as promising for speech recognition systems built on deep neural networks.[11]
Training and capabilities
Whisper was trained using weakly supervised learning on 680,000 hours of multilingual and multitask data, of which about one-fifth (117,000 hours) was non-English audio. Whisper does not outperform models that specialize in the LibriSpeech dataset, but when tested across many datasets it is more robust, making 50% fewer errors than those models.[12]
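The dataset split described above can be checked with quick arithmetic, using only the figures quoted in the text (680,000 total hours, 117,000 non-English hours):

```python
# Dataset composition as quoted above, in hours of audio.
total_hours = 680_000
non_english_hours = 117_000

# Fraction of the corpus that is non-English audio --
# about 17%, i.e. roughly the "one-fifth" quoted above.
non_english_fraction = non_english_hours / total_hours
print(f"{non_english_fraction:.1%}")
```

This prints `17.2%`, consistent with the "about one-fifth" characterization.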
Whisper's error rate varies by language, with higher word error rates for languages that are less well represented in the training data.[13]
The model has been used as the base for a unified model for speech recognition and more general sound recognition.[14]
Architecture
The Whisper architecture is based on an encoder-decoder transformer. Input audio is split into 30-second chunks, each converted into a log-Mel spectrogram and passed to the encoder. The decoder is trained to predict the corresponding text caption, intermixed with special tokens that direct the single model to perform tasks such as language identification, phrase-level timestamps, and translation.[12]
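The fixed 30-second windowing can be sketched in a few lines. The following is a simplified illustration, assuming Whisper's 16 kHz sample rate; it mirrors the idea of the open-source library's `pad_or_trim` helper but is not its implementation, and it omits the subsequent log-Mel spectrogram computation:

```python
import numpy as np

SAMPLE_RATE = 16_000                     # Whisper resamples audio to 16 kHz
CHUNK_SECONDS = 30                       # fixed window length per encoder input
N_SAMPLES = SAMPLE_RATE * CHUNK_SECONDS  # 480,000 samples per 30-second chunk

def pad_or_trim(audio: np.ndarray, length: int = N_SAMPLES) -> np.ndarray:
    """Force a mono waveform to exactly `length` samples, since the
    encoder expects fixed 30-second inputs."""
    if audio.shape[0] > length:
        return audio[:length]                            # trim long audio
    return np.pad(audio, (0, length - audio.shape[0]))   # zero-pad short audio

# Example: a 10-second clip is zero-padded up to 30 seconds.
clip = np.zeros(SAMPLE_RATE * 10, dtype=np.float32)
padded = pad_or_trim(clip)
print(padded.shape)  # (480000,)
```

Longer recordings are handled by transcribing consecutive 30-second windows in sequence.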
See also
- Transcription software
- List of speech recognition software
- Speech recognition software for Linux
- AI boom
- Neural machine translation
References
- ↑ Radford, Alec; Kim, Jong Wook; Xu, Tao; Brockman, Greg; McLeavey, Christine; Sutskever, Ilya (2022-12-06). "Robust Speech Recognition via Large-Scale Weak Supervision". arXiv:2212.04356 [eess.AS].
- ↑ Golla, Ramsri Goutham (2023-03-06). "Here Are Six Practical Use Cases for the New Whisper API" (in en-US). https://slator.com/six-practical-use-cases-for-new-whisper-api/.
- ↑ Dickson, Ben (2022-10-03). "How will OpenAI's Whisper model impact AI applications?" (in en-US). https://venturebeat.com/ai/how-will-openais-whisper-model-impact-ai-applications/.
- ↑ Wiggers, Kyle (September 21, 2022). "OpenAI open-sources Whisper, a multilingual speech recognition system" (in en-US). https://techcrunch.com/2022/09/21/openai-open-sources-whisper-a-multilingual-speech-recognition-system/.
- ↑ Radford, Alec; Kim, Jong Wook; Xu, Tao; Brockman, Greg; McLeavey, Christine; Sutskever, Ilya (2022-12-06). "Robust Speech Recognition via Large-Scale Weak Supervision". p. 3. arXiv:2212.04356 [eess.AS].
- ↑ "Announcing the large-v2 model · openai/whisper · Discussion #661" (in en). https://github.com/openai/whisper/discussions/661.
- ↑ (in en) OpenAI DevDay: Opening Keynote, https://www.youtube.com/watch?v=U9mJuUkhUzk, retrieved 2024-01-08
- ↑ Yu, Dong; Deng, Li (2014) (in en). Automatic speech recognition: a deep learning approach. Signals and communication technology (2015 ed.). London Heidelberg: Springer. p. 9. ISBN 978-1-4471-5778-6.
- ↑ Latif, Siddique; Zaidi, Aun; Cuayahuitl, Heriberto; Shamshad, Fahad; Shoukat, Moazzam; Qadir, Junaid (2023). "Transformers in Speech Processing: A Survey". arXiv:2303.11607v1 [cs.CL].
- ↑ Kamath, Uday; Graham, Kenneth L.; Emara, Wael (2022) (in en). Transformers for machine learning: a deep dive. Chapman & Hall/CRC machine learning & pattern recognition (First ed.). Boca Raton London New York: CRC Press, Taylor & Francis Group. pp. xix. ISBN 978-0-367-76734-1.
- ↑ Paaß, Gerhard; Giesselbach, Sven (2023-02-16). "Foundation Models for Speech, Images, Videos, and Control" (in en). Foundation Models for Natural Language Processing. Artificial Intelligence: Foundations, Theory, and Algorithms. pp. 313–382. doi:10.1007/978-3-031-23190-2_7. ISBN 978-3-031-23189-6.
- ↑ 12.0 12.1 "Introducing Whisper" (in en-US). 2022-09-21. https://openai.com/research/whisper.
- ↑ Wiggers, Kyle (2023-03-01). "OpenAI debuts Whisper API for speech-to-text transcription and translation" (in en-US). https://techcrunch.com/2023/03/01/openai-debuts-whisper-api-for-text-to-speech-transcription-and-translation/.
- ↑ Gong, Yuan; Khurana, Sameer; Karlinsky, Leonid; Glass, James (2023). "Whisper-AT: Noise-Robust Automatic Speech Recognizers are Also Strong General Audio Event Taggers". Interspeech 2023. pp. 2798–2802. doi:10.21437/Interspeech.2023-2193.
Original source: https://en.wikipedia.org/wiki/Whisper_(speech_recognition_system)