Lyra (codec)

From HandWiki
Short description: Lossy audio codec developed by Google
Filename extension: .lyra
Developed by: Google
Initial release: 2021
Latest release: 1.3.2 (December 20, 2022)
Type of format: speech codec
Open format: Yes (Apache-2.0)

Lyra is a lossy audio codec developed by Google that is designed for compressing speech at very low bitrates. Unlike most other audio formats, it compresses data using a machine learning-based algorithm.

Features

The Lyra codec is designed to transmit speech in real time when bandwidth is severely restricted, such as over slow or unreliable network connections.[1] It operates at fixed bitrates of 3.2, 6, and 9 kbit/s and is intended to provide better quality than codecs that use traditional waveform-based algorithms at similar bitrates.[2][3] Rather than encoding the waveform directly, compression is achieved via a machine learning algorithm that encodes the input with feature extraction and then reconstructs an approximation of the original using a generative model.[1] This model was trained on thousands of hours of speech recorded in over 70 languages so that it works across diverse speakers.[2] Because generative models are more computationally demanding than traditional codecs, Lyra uses a simple model that processes different frequency ranges in parallel to obtain acceptable performance.[4] The codec's frame size imposes 20 ms of latency.[3] Google's reference implementation is available for Android and Linux.[4]
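The fixed bitrates and 20 ms frame duration above imply a small, fixed payload per frame. A quick sketch of that arithmetic (illustrative only, not code from the Lyra project):

```python
# Payload per frame implied by a fixed bitrate and Lyra's 20 ms frame
# duration. The function name and structure are illustrative assumptions.
FRAME_SECONDS = 0.02  # 20 ms frames, per the codec's stated latency

def bits_per_frame(bitrate_bits_per_s: float) -> float:
    """Bits available to each 20 ms frame at a given fixed bitrate."""
    return bitrate_bits_per_s * FRAME_SECONDS

for kbps in (3.2, 6, 9):
    print(f"{kbps} kbit/s -> {bits_per_frame(kbps * 1000):.0f} bits per frame")
```

At 3.2 kbit/s this works out to 64 bits (8 bytes) per frame, which is why the quantized features must be so compact.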

Quality

Lyra's initial version performed significantly better than traditional codecs at similar bitrates.[1][4][5] Ian Buckley at MakeUseOf said, "It succeeds in creating almost eerie levels of audio reproduction with bitrates as low as 3 kbps." Google claims that it reproduces natural-sounding speech, and that Lyra at 3 kbit/s beats Opus at 8 kbit/s.[2] Tsahi Levent-Levi writes that Satin, Microsoft's AI-based codec, outperforms it at higher bitrates.[5]

History

In December 2017, Google researchers published a preprint on replacing the Codec 2 decoder with a WaveNet neural network. They found that a neural network could extrapolate features of the voice not described in the Codec 2 bitstream, yielding better audio quality, and that conditioning on conventional features makes the network cheaper to compute than a purely waveform-based network. Lyra version 1 would reuse this overall framework of feature extraction, quantization, and neural synthesis.[6]

Lyra was first announced in February 2021,[2] and in April, Google released the source code of their reference implementation.[1] The initial version had a fixed bitrate of 3 kbit/s and around 90 ms latency.[1][2] The encoder calculates a log mel spectrogram and performs vector quantization to store the spectrogram in a data stream. The decoder is a WaveNet neural network that takes the spectrogram and reconstructs the input audio.[2]
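The encoder pipeline described above, a log mel spectrogram followed by per-frame vector quantization, can be sketched in toy form. Everything below (sample rate, frame and hop sizes, mel band count, the random codebook, and all function names) is an illustrative assumption, not Lyra's actual implementation:

```python
import numpy as np

def mel_filterbank(n_mels, n_fft, sr):
    """Triangular mel filters mapping an FFT magnitude spectrum to mel bands."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    hz_pts = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):
            fb[i, k] = (k - l) / max(c - l, 1)   # rising slope
        for k in range(c, r):
            fb[i, k] = (r - k) / max(r - c, 1)   # falling slope
    return fb

def log_mel_features(signal, sr=16000, frame=320, hop=160, n_fft=512, n_mels=16):
    """Frame the signal, take FFT magnitudes, apply mel filters, take the log."""
    n_frames = 1 + (len(signal) - frame) // hop
    window = np.hanning(frame)
    frames = np.stack([signal[i * hop:i * hop + frame] * window
                       for i in range(n_frames)])
    spectra = np.abs(np.fft.rfft(frames, n_fft, axis=1))
    return np.log(spectra @ mel_filterbank(n_mels, n_fft, sr).T + 1e-6)

def vq_encode(features, codebook):
    """Map each frame's feature vector to the index of its nearest codeword."""
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

rng = np.random.default_rng(0)
audio = rng.standard_normal(16000)            # 1 s of noise standing in for speech
feats = log_mel_features(audio)               # (frames, mel bands)
codebook = rng.standard_normal((64, feats.shape[1]))  # toy codebook; Lyra learns its own
indices = vq_encode(feats, codebook)          # one codeword index per frame
```

Only the indices need to be transmitted; a decoder holding the same codebook can look up the quantized spectrogram and pass it to the generative model.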

A second version (v2/1.2.0), released in September 2022, improved sound quality, latency, and performance, and added support for multiple bitrates. V2 uses a "SoundStream" architecture in which both the encoder and decoder are neural networks, a kind of autoencoder. A residual vector quantizer turns the encoder's feature values into a transferable bitstream.[3]
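A residual vector quantizer can be illustrated with a toy sketch: each stage quantizes whatever residual the previous stage left over, and summing the chosen codewords reconstructs the vector. Dropping trailing stages trades quality for bitrate. The codebooks, dimensions, and function names below are assumptions for illustration; a real codec such as Lyra v2 learns its codebooks during training:

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Greedy residual VQ: each stage quantizes the remaining residual."""
    residual = x.astype(float).copy()
    indices = []
    for cb in codebooks:
        d = np.linalg.norm(residual[None, :] - cb, axis=1)
        i = int(d.argmin())          # nearest codeword in this stage
        indices.append(i)
        residual -= cb[i]            # pass what's left to the next stage
    return indices

def rvq_decode(indices, codebooks):
    """Reconstruction is simply the sum of the selected codewords."""
    return sum(cb[i] for cb, i in zip(codebooks, indices))

rng = np.random.default_rng(1)
dim = 8
# Toy codebooks; later stages are scaled down to capture finer detail.
codebooks = [rng.standard_normal((32, dim)) * 0.5 ** s for s in range(4)]
x = rng.standard_normal(dim)
idx = rvq_encode(x, codebooks)       # 4 small indices instead of 8 floats
x_hat = rvq_decode(idx, codebooks)
err = np.linalg.norm(x - x_hat)
```

With 32-entry codebooks, each stage costs only 5 bits, so the number of stages kept directly sets the bitrate, which is how a single quantizer can serve multiple fixed bitrates.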

Support

Implementations

Google's implementation is available on GitHub under the Apache License.[1][7] Written in C++, it is optimized for 64-bit ARM but also runs on x86, on either Android or Linux.[4]

Applications

Google Duo uses Lyra to transmit sound for video chats when bandwidth is limited.[1][5]

References
