WaveNet
WaveNet is a deep neural network for generating raw audio. It was created by researchers at London-based AI firm DeepMind. The technique, outlined in a paper in September 2016,[1] is able to generate relatively realistic-sounding human-like voices by directly modelling waveforms using a neural network method trained with recordings of real speech. Tests with US English and Mandarin reportedly showed that the system outperforms Google's best existing text-to-speech (TTS) systems, although as of 2016 its text-to-speech synthesis was still less convincing than actual human speech.[2] WaveNet's ability to generate raw waveforms means that it can model any kind of audio, including music.[3]
History
Generating speech from text is an increasingly common task thanks to the popularity of software such as Apple's Siri, Microsoft's Cortana, Amazon Alexa and the Google Assistant.[4]
Most such systems use a variation of a technique that involves concatenating sound fragments together to form recognisable sounds and words.[5] The most common of these is called concatenative TTS.[6] It consists of a large library of speech fragments, recorded from a single speaker, that are then concatenated to produce complete words and sounds. The result sounds unnatural, with an odd cadence and tone.[7] The reliance on a recorded library also makes it difficult to modify or change the voice.[8]
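The concatenative approach can be illustrated with a minimal sketch, assuming a set of pre-recorded waveform fragments (the random arrays below are placeholders for entries in a real speech database) joined with a short crossfade; actual unit-selection systems choose fragments using elaborate cost functions, which are omitted here.

```python
import numpy as np

def crossfade_concatenate(fragments, fade_samples=160):
    """Join pre-recorded waveform fragments with a short linear crossfade.

    fragments: list of 1-D numpy arrays of raw audio samples.
    fade_samples: overlap length used to smooth the joins (10 ms at 16 kHz).
    """
    out = fragments[0].astype(np.float64)
    fade_in = np.linspace(0.0, 1.0, fade_samples)
    fade_out = 1.0 - fade_in
    for frag in fragments[1:]:
        frag = frag.astype(np.float64)
        # Blend the tail of the current output with the head of the next fragment.
        overlap = out[-fade_samples:] * fade_out + frag[:fade_samples] * fade_in
        out = np.concatenate([out[:-fade_samples], overlap, frag[fade_samples:]])
    return out

# Hypothetical usage: 'units' would come from a recorded speech database,
# indexed by the phone sequence of the target text.
units = [np.random.randn(3200), np.random.randn(2400), np.random.randn(2800)]
utterance = crossfade_concatenate(units)
```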
Another technique, known as parametric TTS,[9] uses mathematical models to recreate sounds that are then assembled into words and sentences. The information required to generate the sounds is stored in the parameters of the model. The characteristics of the output speech are controlled via the inputs to the model, while the speech is typically created using a voice synthesiser known as a vocoder. This can also result in unnatural sounding audio.
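A toy illustration of the parametric idea is given below: a handful of made-up parameters (a pitch value and two filter coefficients, chosen purely for demonstration) drive a crude source-filter synthesiser standing in for a vocoder. Real statistical parametric systems predict far richer parameter sets frame by frame from the input text.

```python
import numpy as np
from scipy.signal import lfilter

def toy_vocoder(f0_hz=120.0, duration_s=0.5, sample_rate=16000,
                filter_coeffs=(1.0, -0.95)):
    """Render audio from a few parameters: a pulse-train 'source' shaped by
    a simple all-pole 'filter' (a crude stand-in for the vocal tract)."""
    n = int(duration_s * sample_rate)
    period = int(sample_rate / f0_hz)
    source = np.zeros(n)
    source[::period] = 1.0                          # glottal pulse train at the target pitch
    return lfilter([1.0], list(filter_coeffs), source)  # spectral shaping

audio = toy_vocoder(f0_hz=140.0)                    # change the parameters, change the voice
```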
Design and ongoing research
Background
WaveNet is a type of feedforward neural network known as a deep convolutional neural network (CNN). In WaveNet, the CNN takes a raw signal as an input and synthesises an output one sample at a time. It does so by sampling from a softmax (i.e. categorical) distribution of a signal value that is encoded using a μ-law companding transformation and quantized to 256 possible values.[11]
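The μ-law companding and 256-level quantisation can be written out directly. The following sketch (plain NumPy, with μ = 255) maps a raw sample in [−1, 1] to one of the 256 classes over which the network's softmax is defined, and approximately inverts the mapping; the function names are illustrative, not taken from WaveNet's code.

```python
import numpy as np

MU = 255  # 256 quantisation levels

def mu_law_encode(x, mu=MU):
    """Map raw audio in [-1, 1] to integer class labels 0..mu."""
    x = np.clip(x, -1.0, 1.0)
    compressed = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return ((compressed + 1.0) / 2.0 * mu + 0.5).astype(np.int64)

def mu_law_decode(labels, mu=MU):
    """Invert the companding: class labels back to approximate raw samples."""
    compressed = 2.0 * labels.astype(np.float64) / mu - 1.0
    return np.sign(compressed) * np.expm1(np.abs(compressed) * np.log1p(mu)) / mu

samples = np.array([-0.5, 0.0, 0.01, 0.8])
labels = mu_law_encode(samples)          # e.g. targets for the 256-way softmax
reconstructed = mu_law_decode(labels)    # close to, but not exactly, the originals
```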
Initial concept and results
According to the original September 2016 DeepMind research paper WaveNet: A Generative Model for Raw Audio,[12] the network was fed real waveforms of speech in English and Mandarin. As these pass through the network, it learns a set of rules to describe how the audio waveform evolves over time. The trained network can then be used to create new speech-like waveforms at 16,000 samples per second. These waveforms include realistic breaths and lip smacks – but do not conform to any language.[13]
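Generation can be sketched as a plain autoregressive loop: at every step a network (represented here by a placeholder function returning random probabilities) produces a 256-way distribution conditioned on recent samples, one class is drawn from it, and the result is appended to the context. One second of audio at 16,000 samples per second therefore requires 16,000 such forward passes, which is part of why the original model was computationally expensive.

```python
import numpy as np

rng = np.random.default_rng(0)

def dummy_wavenet(context):
    """Placeholder for the trained network: returns a probability
    distribution over the 256 quantised signal values given past samples."""
    logits = rng.normal(size=256)
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

def generate(num_samples=16000, receptive_field=1024):
    generated = [128]  # start from the "zero" class
    for _ in range(num_samples):
        context = np.array(generated[-receptive_field:])
        probs = dummy_wavenet(context)            # softmax over 256 classes
        next_class = rng.choice(256, p=probs)     # sample, don't take the argmax
        generated.append(int(next_class))
    return np.array(generated[1:])                # sequence of class labels

waveform_classes = generate(num_samples=1600)     # 0.1 s of audio at 16 kHz
```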
WaveNet is able to accurately model different voices, with the accent and tone of the input correlating with the output. For example, if it is trained with German, it produces German speech.[14] The capability also means that if the WaveNet is fed other inputs – such as music – its output will be musical. At the time of its release, DeepMind showed that WaveNet could produce waveforms that sound like classical music.[15]
Content (voice) swapping
According to the June 2018 paper Disentangled Sequential Autoencoder,[16] DeepMind has successfully used WaveNet for audio and voice "content swapping": the network can swap the voice on an audio recording for another, pre-existing voice while maintaining the text and other features of the original recording. "We also experiment on audio sequence data. Our disentangled representation allows us to convert speaker identities into each other while conditioning on the content of the speech." (p. 5) "For audio, this allows us to convert a male speaker into a female speaker and vice versa [...]." (p. 1) According to the paper, at least tens of hours (c. 50 hours) of pre-existing speech recordings of both the source and the target voice must be fed into WaveNet for the program to learn their individual features before it can perform the conversion from one voice to another at a satisfying quality. The authors stress that "[a]n advantage of the model is that it separates dynamical from static features [...]" (p. 8); that is, WaveNet distinguishes between the spoken text and its modes of delivery (modulation, speed, pitch, mood, etc.), which are maintained during the conversion, and the basic characteristics of the source and target voices, which are swapped.
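In outline, the swap amounts to exchanging the static (speaker) part of the learned representation while keeping the dynamic (content) part and then decoding. The sketch below uses hypothetical encode and decode functions purely to show the shape of the operation; it is not DeepMind's implementation.

```python
import numpy as np

def encode(audio):
    """Hypothetical disentangling encoder: returns a single static vector
    (speaker characteristics) and a per-frame dynamic sequence (content)."""
    static = np.random.randn(64)                      # stand-in for the speaker latent
    dynamic = np.random.randn(len(audio) // 160, 32)  # stand-in for content latents
    return static, dynamic

def decode(static, dynamic):
    """Hypothetical decoder (a WaveNet in the paper) rendering audio from
    a speaker latent plus a content sequence."""
    return np.random.randn(dynamic.shape[0] * 160)

source_audio = np.random.randn(16000)   # speaker A saying something
target_audio = np.random.randn(16000)   # speaker B, different content

static_a, dynamic_a = encode(source_audio)
static_b, _ = encode(target_audio)

# Voice swap: speaker B's static latent combined with speaker A's spoken content.
converted = decode(static_b, dynamic_a)
```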
The January 2019 follow-up paper Unsupervised speech representation learning using WaveNet autoencoders[17] details a method for improving the automatic recognition and separation of dynamical and static features, making "content swapping", notably swapping voices on existing audio recordings, more reliable. Another follow-up paper, Sample Efficient Adaptive Text-to-Speech,[18] dated September 2018 (latest revision January 2019), states that DeepMind has successfully reduced the minimum amount of real-life recordings required to sample an existing voice via WaveNet to "merely a few minutes of audio data" while maintaining high-quality results.
WaveNet's ability to clone voices has raised ethical concerns about its potential to mimic the voices of living and dead persons. According to a 2016 BBC article, companies working on similar voice-cloning technologies (such as Adobe Voco) intend to insert watermarking inaudible to humans to prevent counterfeiting. They also maintain that voice cloning good enough for, say, entertainment-industry purposes is of far lower complexity, and uses different methods, than what would be required to fool forensic evidencing methods and electronic ID devices, so that natural voices and voices cloned for entertainment purposes could still be easily told apart by technological analysis.[19]
Applications
At the time of its release, DeepMind said that WaveNet required too much computational processing power to be used in real-world applications.[20] In October 2017, Google announced a 1,000-fold performance improvement along with better voice quality; WaveNet was then used to generate Google Assistant voices for US English and Japanese across all Google platforms.[21] In November 2017, DeepMind researchers released a research paper detailing a proposed method, called "Probability Density Distillation", for "generating high-fidelity speech samples at more than 20 times faster than real-time".[22] At the annual I/O developer conference in May 2018, it was announced that new Google Assistant voices were available and made possible by WaveNet; WaveNet greatly reduced the number of audio recordings required to create a voice model by modeling the raw audio of the voice actor samples.[23]
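The distillation objective can be illustrated at the level of a single output step: the parallel "student" network is trained to minimise the Kullback–Leibler divergence from its output distribution to that of the autoregressive "teacher" WaveNet. The toy computation below uses arbitrary made-up distributions over the 256 signal classes simply to show the quantity being minimised; it is not the full training procedure.

```python
import numpy as np

def kl_divergence(student_probs, teacher_probs, eps=1e-12):
    """KL(student || teacher) for two categorical distributions over 256 values,
    the per-sample quantity minimised in probability density distillation."""
    s = np.clip(student_probs, eps, 1.0)
    t = np.clip(teacher_probs, eps, 1.0)
    return float(np.sum(s * (np.log(s) - np.log(t))))

rng = np.random.default_rng(1)
student = rng.dirichlet(np.ones(256))   # made-up student softmax output
teacher = rng.dirichlet(np.ones(256))   # made-up teacher softmax output
loss = kl_divergence(student, teacher)  # driven toward zero during training
```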
References
- ↑ van den Oord, Aaron; Dieleman, Sander; Zen, Heiga; Simonyan, Karen; Vinyals, Oriol; Graves, Alex; Kalchbrenner, Nal; Senior, Andrew et al. (2016-09-12). "WaveNet: A Generative Model for Raw Audio". doi:10.48550/arXiv.1609.03499. https://arxiv.org/abs/1609.03499.
- ↑ Kahn, Jeremy (2016-09-09). "Google's DeepMind Achieves Speech-Generation Breakthrough". Bloomberg.com. https://www.bloomberg.com/news/articles/2016-09-09/google-s-ai-brainiacs-achieve-speech-generation-breakthrough.
- ↑ Meyer, David (2016-09-09). "Google's DeepMind Claims Massive Progress in Synthesized Speech". http://fortune.com/2016/09/09/google-deepmind-wavenet-ai/.
- ↑ Kahn, Jeremy (2016-09-09). "Google's DeepMind Achieves Speech-Generation Breakthrough". Bloomberg.com. https://www.bloomberg.com/news/articles/2016-09-09/google-s-ai-brainiacs-achieve-speech-generation-breakthrough.
- ↑ Condliffe, Jamie (2016-09-09). "When this computer talks, you may actually want to listen" (in en). MIT Technology Review. https://www.technologyreview.com/s/602343/face-of-a-robot-voice-of-an-angel/.
- ↑ Hunt, A. J.; Black, A. W. (May 1996). "Unit selection in a concatenative speech synthesis system using a large speech database". 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings. 1. pp. 373–376. doi:10.1109/ICASSP.1996.541110. ISBN 978-0-7803-3192-1. https://www.ee.columbia.edu/~dpwe/e6820/papers/HuntB96-speechsynth.pdf.
- ↑ Coldewey, Devin (2016-09-09). "Google's WaveNet uses neural nets to generate eerily convincing speech and music". https://techcrunch.com/2016/09/09/googles-wavenet-uses-neural-nets-to-generate-eerily-convincing-speech-and-music/.
- ↑ van den Oord, Aäron; Dieleman, Sander; Zen, Heiga (2016-09-08). "WaveNet: A Generative Model for Raw Audio". https://deepmind.com/blog/wavenet-generative-model-raw-audio/.
- ↑ Zen, Heiga; Tokuda, Keiichi; Black, Alan W. (2009). "Statistical parametric speech synthesis". Speech Communication 51 (11): 1039–1064. doi:10.1016/j.specom.2009.04.004.
- ↑ van den Oord, Aäron (2017-11-12). "High-fidelity speech synthesis with WaveNet". https://www.deepmind.com/blog/high-fidelity-speech-synthesis-with-wavenet.
- ↑ van den Oord, Aaron; Dieleman, Sander; Zen, Heiga; Simonyan, Karen; Vinyals, Oriol; Graves, Alex; Kalchbrenner, Nal; Senior, Andrew et al. (2016-09-12). "WaveNet: A Generative Model for Raw Audio". arXiv:1609.03499. Bibcode: 2016arXiv160903499V.
- ↑ van den Oord et al. (2016). "WaveNet: A Generative Model for Raw Audio". arXiv, 19 September 2016.
- ↑ Gershgorn, Dave (2016-09-09). "Are you sure you're talking to a human? Robots are starting to sound eerily lifelike" (in en-US). Quartz. https://qz.com/778056/google-deepminds-wavenet-algorithm-can-accurately-mimic-human-voices/.
- ↑ Coldewey, Devin (2016-09-09). "Google's WaveNet uses neural nets to generate eerily convincing speech and music". https://techcrunch.com/2016/09/09/googles-wavenet-uses-neural-nets-to-generate-eerily-convincing-speech-and-music/.
- ↑ van den Oord, Aäron; Dieleman, Sander; Zen, Heiga (2016-09-08). "WaveNet: A Generative Model for Raw Audio". https://deepmind.com/blog/wavenet-generative-model-raw-audio/.
- ↑ Li & Mand (2016). Disentangled Sequential Autoencoder, 12 June 2018, Cornell University
- ↑ Chorowsky et al. (2019). Unsupervised speech representation learning using WaveNet autoencoders, 25 January 2019, Cornell University
- ↑ Chen et al. (2018). Sample Efficient Adaptive Text-to-Speech, 27 September 2018, Cornell University. Also see this paper's latest January 2019 revision.
- ↑ "Adobe Voco 'Photoshop-for-voice' causes concern". BBC News. 2016-11-07.
- ↑ "Adobe Voco 'Photoshop-for-voice' causes concern" (in en-GB). BBC News. 2016-11-07. https://www.bbc.co.uk/news/technology-37899902.
- ↑ "WaveNet launches in the Google Assistant". DeepMind Blog, October 2017.
- ↑ van den Oord et al. (2017). "Parallel WaveNet: Fast High-Fidelity Speech Synthesis". arXiv, 28 November 2017.
- ↑ Martin, Taylor (May 9, 2018). "Try the all-new Google Assistant voices right now" (in en). CNET. https://www.cnet.com/how-to/how-to-get-all-google-assistants-new-voices-right-now/.
External links
Original source: https://en.wikipedia.org/wiki/WaveNet.