Audio coding format

Short description: Digitally coded format for audio signals
[Figure: Comparison of coding efficiency between popular audio formats]

An audio coding format[1] (or sometimes audio compression format) is a content representation format for storage or transmission of digital audio (such as in digital television, digital radio and in audio and video files). Examples of audio coding formats include MP3, AAC, Vorbis, FLAC, and Opus. A specific software or hardware implementation capable of compressing and decompressing audio to and from a specific audio coding format is called an audio codec; an example of an audio codec is LAME, one of several codecs that implement encoding and decoding of audio in the MP3 audio coding format in software.

Some audio coding formats are documented by a detailed technical specification document known as an audio coding specification. Some such specifications are written and approved by standardization organizations as technical standards, and are thus known as audio coding standards. The term "standard" is sometimes also applied to de facto standards, not only formal ones.

Audio content encoded in a particular audio coding format is normally encapsulated within a container format. As such, the user normally doesn't have a raw AAC file, but instead has a .m4a audio file, which is an MPEG-4 Part 14 container containing AAC-encoded audio. The container also holds metadata such as the title and other tags, and perhaps an index for fast seeking.[2] A notable exception is MP3 files, which are raw audio streams without a container format. De facto standards for adding metadata tags such as title and artist to MP3s, such as ID3, are hacks which work by appending the tags to the MP3 file and then relying on the MP3 player to recognize the chunk as malformed audio data and therefore skip it. In video files with audio, the encoded audio content is bundled with video (in a video coding format) inside a multimedia container format.
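
As an illustration of the appended-tag approach, the following minimal Python sketch (the file name "song.mp3" is hypothetical) looks for an ID3v1 tag: a fixed 128-byte block that taggers append after the final MP3 frame and that players simply skip over:

    import os

    def read_id3v1(path):
        if os.path.getsize(path) < 128:      # too small to hold an appended tag
            return None
        with open(path, "rb") as f:
            f.seek(-128, os.SEEK_END)        # ID3v1 occupies the final 128 bytes
            block = f.read(128)
        if block[:3] != b"TAG":              # no ID3v1 tag appended
            return None
        text = lambda raw: raw.decode("latin-1", "replace").rstrip("\x00 ")
        return {
            "title":  text(block[3:33]),
            "artist": text(block[33:63]),
            "album":  text(block[63:93]),
            "year":   text(block[93:97]),
        }

    print(read_id3v1("song.mp3"))            # None if no appended tag is present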

An audio coding format does not dictate all algorithms used by a codec implementing the format. An important part of how lossy audio compression works is by removing data in ways humans can't hear, according to a psychoacoustic model; the implementer of an encoder has some freedom of choice in which data to remove (according to their psychoacoustic model).
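
As a deliberately simplified illustration of this freedom (not a real psychoacoustic model, which would use MDCT filter banks and masking curves), the NumPy sketch below drops frequency components that fall far below the loudest component in a frame, on the assumption that they contribute least to what a listener hears; the threshold of -60 dB is an arbitrary choice for the example:

    import numpy as np

    def toy_lossy_frame(frame, threshold_db=-60.0):
        spectrum = np.fft.rfft(frame)                      # frame to frequency domain
        magnitude = np.abs(spectrum)
        floor = magnitude.max() * 10 ** (threshold_db / 20)
        spectrum[magnitude < floor] = 0                    # discard the quietest components
        return np.fft.irfft(spectrum, n=len(frame))        # back to the time domain

    # Example: a 440 Hz tone plus faint noise; the noise components are discarded
    rate = 44100
    t = np.arange(1024) / rate
    frame = np.sin(2 * np.pi * 440 * t) + 1e-4 * np.random.randn(1024)
    decoded = toy_lossy_frame(frame)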

Lossless, lossy, and uncompressed audio coding formats

A lossless audio coding format reduces the total data needed to represent a sound but can be decoded to its original, uncompressed form. A lossy audio coding format additionally discards information, typically the parts of the signal judged least audible, which results in far less data at the cost of irretrievably lost information.

Transmitted (streamed) audio is most often compressed using lossy audio codecs as the smaller size is far more convenient for distribution. The most widely used audio coding formats are MP3 and Advanced Audio Coding (AAC), both of which are lossy formats based on modified discrete cosine transform (MDCT) and perceptual coding algorithms.

Lossless audio coding formats such as FLAC and Apple Lossless are sometimes available, though at the cost of larger files.

Uncompressed audio formats, such as pulse-code modulation (PCM, as stored in .wav files), are also sometimes used. PCM is the standard format for Compact Disc Digital Audio (CDDA).
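
For a sense of scale, the uncompressed data rate of CD audio follows directly from its parameters (44.1 kHz sampling, 16 bits per sample, 2 channels); the small calculation below compares it with a 128 kbit/s MP3 stream, a typical lossy bit rate chosen for the example rather than taken from this article:

    # CD-quality PCM: 44,100 samples/s x 16 bits x 2 channels
    pcm_bitrate = 44_100 * 16 * 2                 # = 1,411,200 bit/s (about 1411 kbit/s)
    mp3_bitrate = 128_000                         # a common lossy target, 128 kbit/s
    print(pcm_bitrate / 1000, "kbit/s uncompressed")
    print(round(pcm_bitrate / mp3_bitrate, 1), ": 1 reduction at 128 kbit/s")  # about 11 : 1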

History

[Image: Solidyne 922, the world's first commercial audio bit-compression sound card for PC (1990)]

In 1950, Bell Labs filed the patent on differential pulse-code modulation (DPCM).[3] Adaptive DPCM (ADPCM) was introduced by P. Cummiskey, Nikil S. Jayant and James L. Flanagan at Bell Labs in 1973.[4][5]

Perceptual coding was first used for speech compression, with linear predictive coding (LPC).[6] Initial concepts for LPC date back to the work of Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966.[7] During the 1970s, Bishnu S. Atal and Manfred R. Schroeder at Bell Labs developed a form of LPC called adaptive predictive coding (APC), a perceptual coding algorithm that exploited the masking properties of the human ear, followed in the early 1980s by the code-excited linear prediction (CELP) algorithm, which achieved a significant compression ratio for its time.[6] Perceptual coding is used by modern audio compression formats such as MP3[6] and AAC.

The discrete cosine transform (DCT), developed by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974,[8] provided the basis for the modified discrete cosine transform (MDCT), which was proposed by J. P. Princen, A. W. Johnson and A. B. Bradley in 1987,[10] following earlier work by Princen and Bradley in 1986.[11] The MDCT is used by modern audio compression formats such as Dolby Digital,[12][13] MP3,[9] and Advanced Audio Coding (AAC).[14]
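
For reference, the MDCT in its usual formulation maps 2N input samples x_0, ..., x_{2N-1} of a frame (frames overlap by 50%) to N coefficients X_0, ..., X_{N-1}:

    X_k = \sum_{n=0}^{2N-1} x_n \cos\left[ \frac{\pi}{N} \left( n + \frac{1}{2} + \frac{N}{2} \right) \left( k + \frac{1}{2} \right) \right], \qquad k = 0, 1, \ldots, N-1.

The halving from 2N samples to N coefficients introduces time-domain aliasing that cancels when the overlapping decoded frames are added back together, the "time domain aliasing cancellation" of Princen and Bradley's papers.[10][11]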

List of lossy formats

General

Basic compression algorithm | Audio coding standard | Abbreviation | Introduction | Market share (2019)[15] | Ref
Modified discrete cosine transform (MDCT) | Dolby Digital (AC-3) | AC3 | 1991 | 58% | [12][16]
MDCT | Adaptive Transform Acoustic Coding | ATRAC | 1992 | Unknown | [12]
MDCT | MPEG Layer III | MP3 | 1993 | 49% | [9][17]
MDCT | Advanced Audio Coding (MPEG-2 / MPEG-4) | AAC | 1997 | 88% | [14][12]
MDCT | Windows Media Audio | WMA | 1999 | Unknown | [12]
MDCT | Ogg Vorbis | Ogg | 2000 | 7% | [18][12]
MDCT | Constrained Energy Lapped Transform | CELT | 2011 | N/A | [19]
MDCT | Opus | Opus | 2012 | 8% | [20]
MDCT | LDAC | LDAC | 2015 | Unknown | [21][22]
Adaptive differential pulse-code modulation (ADPCM) | aptX / aptX-HD | aptX | 1989 | Unknown | [23]
ADPCM | Digital Theater Systems | DTS | 1990 | 14% | [24][25]
ADPCM | Master Quality Authenticated | MQA | 2014 | Unknown |
Sub-band coding (SBC) | MPEG-1 Audio Layer II | MP2 | 1993 | Unknown |
SBC | Musepack | MPC | 1997 | |

Speech

List of lossless formats

See also

References

  1. The term "audio coding" can be seen in e.g. the name Advanced Audio Coding, and is analogous to the term video coding.
  2. "Video – Where is synchronization information stored in container formats?". http://superuser.com/questions/357686/where-is-synchronization-information-stored-in-container-formats. 
  3. US patent 2605361, C. Chapin Cutler, "Differential Quantization of Communication Signals", issued 1952-07-29 
  4. Cummiskey, P.; Jayant, N. S.; Flanagan, J. L. (1973). "Adaptive Quantization in Differential PCM Coding of Speech". Bell System Technical Journal 52 (7): 1105–1118. doi:10.1002/j.1538-7305.1973.tb02007.x. https://ieeexplore.ieee.org/document/6770730. 
  5. Cummiskey, P.; Jayant, Nikil S.; Flanagan, J. L. (1973). "Adaptive quantization in differential PCM coding of speech". The Bell System Technical Journal 52 (7): 1105–1118. doi:10.1002/j.1538-7305.1973.tb02007.x. ISSN 0005-8580. 
  6. 6.0 6.1 6.2 Schroeder, Manfred R. (2014). "Bell Laboratories". Acoustics, Information, and Communication: Memorial Volume in Honor of Manfred R. Schroeder. Springer. p. 388. ISBN 9783319056609. https://books.google.com/books?id=d9IkBAAAQBAJ&pg=PA388. 
  7. Gray, Robert M. (2010). "A History of Realtime Digital Speech on Packet Networks: Part II of Linear Predictive Coding and the Internet Protocol". Found. Trends Signal Process. 3 (4): 203–303. doi:10.1561/2000000036. ISSN 1932-8346. https://ee.stanford.edu/~gray/lpcip.pdf. 
  8. Nasir Ahmed; T. Natarajan; Kamisetty Ramamohan Rao (January 1974). "Discrete Cosine Transform". IEEE Transactions on Computers C-23 (1): 90–93. doi:10.1109/T-C.1974.223784. https://www.ic.tu-berlin.de/fileadmin/fg121/Source-Coding_WS12/selected-readings/Ahmed_et_al.__1974.pdf. Retrieved 2019-10-20. 
  9. 9.0 9.1 9.2 Guckert, John (Spring 2012). "The Use of FFT and MDCT in MP3 Audio Compression". http://www.math.utah.edu/~gustafso/s2012/2270/web-projects/Guckert-audio-compression-svd-mdct-MP3.pdf. 
  10. Princen, J.; Johnson, A.; Bradley, A. (1987). "Subband/Transform coding using filter bank designs based on time domain aliasing cancellation". ICASSP '87. IEEE International Conference on Acoustics, Speech, and Signal Processing. 12. pp. 2161–2164. doi:10.1109/ICASSP.1987.1169405. https://ieeexplore.ieee.org/document/1169405. 
  11. Princen, J.; Bradley, A. (1986). "Analysis/Synthesis filter bank design based on time domain aliasing cancellation". IEEE Transactions on Acoustics, Speech, and Signal Processing 34 (5): 1153–1161. doi:10.1109/TASSP.1986.1164954. https://ieeexplore.ieee.org/document/1164954. 
  12. 12.0 12.1 12.2 12.3 12.4 12.5 Luo, Fa-Long (2008). Mobile Multimedia Broadcasting Standards: Technology and Practice. Springer Science & Business Media. p. 590. ISBN 9780387782638. https://books.google.com/books?id=l6PovWat8SMC&pg=PA590. 
  13. Britanak, V. (2011). "On Properties, Relations, and Simplified Implementation of Filter Banks in the Dolby Digital (Plus) AC-3 Audio Coding Standards". IEEE Transactions on Audio, Speech, and Language Processing 19 (5): 1231–1241. doi:10.1109/TASL.2010.2087755. 
  14. 14.0 14.1 Brandenburg, Karlheinz (1999). "MP3 and AAC Explained". http://graphics.ethz.ch/teaching/mmcom12/slides/mp3_and_aac_brandenburg.pdf. 
  15. "Video Developer Report 2019". 2019. https://cdn2.hubspot.net/hubfs/3411032/Bitmovin%20Magazine/Video%20Developer%20Report%202019/bitmovin-video-developer-report-2019.pdf. 
  16. Britanak, V. (2011). "On Properties, Relations, and Simplified Implementation of Filter Banks in the Dolby Digital (Plus) AC-3 Audio Coding Standards". IEEE Transactions on Audio, Speech, and Language Processing 19 (5): 1231–1241. doi:10.1109/TASL.2010.2087755. 
  17. Stanković, Radomir S.; Astola, Jaakko T. (2012). "Reminiscences of the Early Work in DCT: Interview with K.R. Rao". Reprints from the Early Days of Information Sciences 60. http://ticsp.cs.tut.fi/reports/ticsp-report-60-reprint-rao-corrected.pdf. Retrieved 13 October 2019. 
  18. Xiph.Org Foundation (2009-06-02). "Vorbis I specification - 1.1.2 Classification". Xiph.Org Foundation. http://www.xiph.org/vorbis/doc/Vorbis_I_spec.html#x1-50001.1.2. 
  19. Terriberry, Timothy B. Presentation of the CELT codec (PDF presentation).
  20. Valin, Jean-Marc; Maxwell, Gregory; Terriberry, Timothy B.; Vos, Koen (October 2013). "High-Quality, Low-Delay Music Coding in the Opus Codec". 135th AES Convention. Audio Engineering Society. 
  21. Darko, John H. (2017-03-29). "The inconvenient truth about Bluetooth audio". http://www.digitalaudioreview.net/2017/03/the-inconvenient-truth-about-bluetooth-audio/. 
  22. Ford, Jez (2015-08-24). "What is Sony LDAC, and how does it do it?". http://www.avhub.com.au/news/sound-image/what-is-sony-ldac-and-how-does-it-do-it-408285. 
  23. Ford, Jez (2016-11-22). "aptX HD - lossless or lossy?". http://www.avhub.com.au/news/sound-image/aptx-hd---lossless-or-lossy-442124. 
  24. "Digital Theater Systems Audio Formats". 27 December 2011. https://www.loc.gov/preservation/digital/formats/fdd/fdd000232.shtml. 
  25. Spanias, Andreas; Painter, Ted; Atti, Venkatraman (2006). Audio Signal Processing and Coding. John Wiley & Sons. p. 338. ISBN 9780470041963. https://books.google.com/books?id=a1RULRErhOYC&pg=PA338.