Pitch detection algorithm

A pitch detection algorithm (PDA) is an algorithm designed to estimate the pitch or fundamental frequency of a quasiperiodic or oscillating signal, usually a digital recording of speech or a musical note or tone. This can be done in the time domain, the frequency domain, or both.

PDAs are used in various contexts (e.g. phonetics, music information retrieval, speech coding, musical performance systems) and so there may be different demands placed upon the algorithm. There is as yet[when?] no single ideal PDA, so a variety of algorithms exist, most falling broadly into the classes given below.[1]

A PDA typically estimates the period of a quasiperiodic signal, then inverts that value to give the frequency.

General approaches

One simple approach would be to measure the distance between zero crossing points of the signal (i.e. the zero-crossing rate). However, this does not work well with complicated waveforms composed of multiple sine waves with differing periods, or with noisy data. Nevertheless, there are cases in which zero-crossing can be a useful measure, e.g. in some speech applications where a single source is assumed.[citation needed] The algorithm's simplicity makes it "cheap" to implement.
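
A minimal sketch of the zero-crossing idea in Python (the function name and parameters are illustrative, not taken from any cited source):

```python
# Hypothetical sketch: estimating pitch from rising zero crossings of a clean tone.
import numpy as np

def estimate_pitch_zero_crossings(signal, sample_rate):
    """Estimate fundamental frequency from the average spacing of rising zero crossings."""
    # Indices where the signal crosses zero going from negative to non-negative.
    crossings = np.where((signal[:-1] < 0) & (signal[1:] >= 0))[0]
    if len(crossings) < 2:
        return None  # not enough crossings to measure even one period
    # Average number of samples between successive rising crossings = one period.
    mean_period_samples = np.mean(np.diff(crossings))
    return sample_rate / mean_period_samples

# A pure 440 Hz sine sampled at 44.1 kHz is estimated almost exactly;
# a mixture of partials or a noisy signal generally would not be.
fs = 44100
t = np.arange(0, 0.1, 1 / fs)
print(estimate_pitch_zero_crossings(np.sin(2 * np.pi * 440 * t), fs))  # ~440.0
```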

More sophisticated approaches compare segments of the signal with other segments offset by a trial period to find a match. AMDF (average magnitude difference function), ASMDF (average squared mean difference function), and other similar autocorrelation algorithms work this way. These algorithms can give quite accurate results for highly periodic signals. However, they have false detection problems (often "octave errors"), can sometimes cope badly with noisy signals (depending on the implementation), and, in their basic implementations, do not deal well with polyphonic sounds (which involve multiple musical notes of different pitches).[citation needed]
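
A basic AMDF-style detector can be sketched as follows; the search range, names, and defaults are illustrative assumptions rather than a reference implementation. Note that a lag equal to twice the true period also produces a deep minimum, which is one source of the octave errors mentioned above.

```python
# A minimal AMDF (average magnitude difference function) sketch.
import numpy as np

def estimate_pitch_amdf(frame, sample_rate, f_min=60.0, f_max=500.0):
    """Return the frequency whose trial period gives the smallest average |x[n] - x[n+tau]|."""
    tau_min = int(sample_rate / f_max)          # shortest candidate period in samples
    tau_max = int(sample_rate / f_min)          # longest candidate period in samples
    best_tau, best_d = None, np.inf
    for tau in range(tau_min, min(tau_max, len(frame) // 2) + 1):
        # Average magnitude difference between the frame and a copy delayed by tau samples.
        d = np.mean(np.abs(frame[:-tau] - frame[tau:]))
        if d < best_d:
            best_tau, best_d = tau, d
    return sample_rate / best_tau if best_tau else None
```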

Current[when?] time-domain pitch detector algorithms tend to build upon the basic methods mentioned above, with additional refinements to bring the performance more in line with a human assessment of pitch. For example, the YIN algorithm[2] and the MPM algorithm[3] are both based upon autocorrelation.
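
As an indication of the kind of refinement involved, the sketch below implements the cumulative mean normalized difference step that YIN layers on top of a plain squared-difference function; the published algorithm adds further steps (an absolute threshold search, parabolic interpolation, and a best-local-estimate stage) that are omitted here, and the defaults are illustrative.

```python
# Hedged sketch of a YIN-style cumulative mean normalized difference.
import numpy as np

def yin_style_pitch(frame, sample_rate, tau_max=600, threshold=0.1):
    # Squared-difference function d(tau) for each candidate lag
    # (the frame must be longer than tau_max samples).
    d = np.array([np.sum((frame[:-tau] - frame[tau:]) ** 2)
                  for tau in range(1, tau_max)])
    # Cumulative mean normalized difference d'(tau): divide by the running mean,
    # which suppresses the spurious minimum at very small lags.
    cmnd = d * np.arange(1, tau_max) / (np.cumsum(d) + 1e-12)
    # First lag whose normalized difference dips below the threshold,
    # falling back to the global minimum if none does.
    below = np.where(cmnd < threshold)[0]
    tau = below[0] + 1 if len(below) else np.argmin(cmnd) + 1
    return sample_rate / tau
```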

Frequency-domain approaches

In the frequency domain, polyphonic detection is possible, usually utilizing the periodogram to convert the signal to an estimate of the frequency spectrum.[4] This requires more processing power as the desired accuracy increases, although the well-known efficiency of the FFT, a key part of the periodogram algorithm, makes it suitably efficient for many purposes.
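
A bare-bones frequency-domain estimator simply takes the FFT of a windowed frame and reports the frequency of the strongest bin, as in the illustrative sketch below. On its own this is not a pitch detector, since the strongest partial is not necessarily the fundamental, which motivates methods such as the harmonic product spectrum discussed next.

```python
# Rough sketch: periodogram peak picking on a single windowed frame.
import numpy as np

def spectral_peak_pitch(frame, sample_rate):
    windowed = frame * np.hanning(len(frame))       # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))        # magnitude spectrum (periodogram up to a scale factor)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum[1:]) + 1]       # skip the DC bin
```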

Popular frequency-domain algorithms include: the harmonic product spectrum;[5][6] cepstral analysis;[7] maximum likelihood, which attempts to match the frequency-domain characteristics to pre-defined frequency maps (useful for detecting the pitch of fixed-tuning instruments); and the detection of peaks due to harmonic series.[8]
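
The harmonic product spectrum can be sketched as follows: the magnitude spectrum is downsampled by successive integer factors and the results are multiplied, so that the bin whose harmonics line up is reinforced. The number of harmonics and other details here are illustrative assumptions.

```python
# Illustrative harmonic product spectrum (HPS) sketch.
import numpy as np

def hps_pitch(frame, sample_rate, num_harmonics=4):
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    hps = spectrum.copy()
    for h in range(2, num_harmonics + 1):
        decimated = spectrum[::h]                   # spectrum compressed by factor h
        hps[:len(decimated)] *= decimated           # harmonics of the fundamental align and multiply up
    bin_index = np.argmax(hps[1:]) + 1              # ignore the DC bin
    return bin_index * sample_rate / len(frame)
```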

To improve on the pitch estimate derived from the discrete Fourier spectrum, techniques such as spectral reassignment (phase based) or Grandke interpolation (magnitude based) can be used to go beyond the precision provided by the FFT bins. Another phase-based approach is offered by Brown and Puckette.[9]
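
As an example of a magnitude-based refinement in the same spirit (though not the reassignment or Grandke formulas cited above), quadratic interpolation of the log-magnitude spectrum around the peak bin gives a fractional-bin correction:

```python
# Sketch: parabolic (quadratic) interpolation of the spectral peak.
import numpy as np

def refined_peak_frequency(frame, sample_rate):
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    k = int(np.argmax(spectrum[1:-1])) + 1                       # peak bin, excluding the edges
    a = np.log(spectrum[k - 1] + 1e-12)
    b = np.log(spectrum[k] + 1e-12)
    c = np.log(spectrum[k + 1] + 1e-12)
    delta = 0.5 * (a - c) / (a - 2 * b + c)                      # fractional-bin offset in [-0.5, 0.5]
    return (k + delta) * sample_rate / len(frame)
```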

Spectral/temporal approaches

Spectral/temporal pitch detection algorithms, e.g. the YAAPT pitch tracking algorithm,[10][11] are based upon a combination of time domain processing using an autocorrelation function such as normalized cross correlation, and frequency domain processing utilizing spectral information to identify the pitch. Then, among the candidates estimated from the two domains, a final pitch track can be computed using dynamic programming. The advantage of these approaches is that the tracking error in one domain can be reduced by the process in the other domain.
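
The dynamic-programming step can be illustrated generically: given per-frame pitch candidates and local costs (derived from either domain), a Viterbi-style search picks the track that balances local evidence against large frame-to-frame jumps. The cost terms below are simple placeholders, not the specific functions used by YAAPT.

```python
# Generic dynamic-programming pitch-track selection over per-frame candidates.
import numpy as np

def track_pitch(candidates, local_costs, jump_weight=0.01):
    """candidates[t] and local_costs[t] are equal-length lists for frame t."""
    n_frames = len(candidates)
    # cost[t][j]: best accumulated cost ending at candidate j of frame t.
    cost = [np.array(local_costs[0], dtype=float)]
    back = [np.zeros(len(candidates[0]), dtype=int)]
    for t in range(1, n_frames):
        prev_f = np.array(candidates[t - 1], dtype=float)
        cur_cost, cur_back = [], []
        for j, f in enumerate(candidates[t]):
            # Transition penalty grows with the pitch jump between adjacent frames.
            total = cost[-1] + jump_weight * np.abs(prev_f - f)
            i = int(np.argmin(total))
            cur_cost.append(total[i] + local_costs[t][j])
            cur_back.append(i)
        cost.append(np.array(cur_cost))
        back.append(np.array(cur_back))
    # Trace back the lowest-cost path.
    j = int(np.argmin(cost[-1]))
    track = [candidates[-1][j]]
    for t in range(n_frames - 1, 0, -1):
        j = int(back[t][j])
        track.append(candidates[t - 1][j])
    return track[::-1]
```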

Speech pitch detection

The fundamental frequency of speech can vary from 40 Hz for low-pitched voices to 600 Hz for high-pitched voices.[12]

Autocorrelation methods need at least two pitch periods to detect pitch. This means that in order to detect a fundamental frequency of 40 Hz, at least 50 milliseconds (ms) of the speech signal must be analyzed. However, the fundamental frequency of speech, especially at higher pitches, will not necessarily remain constant throughout a 50 ms window.[12]
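
The arithmetic is straightforward: two periods of a 40 Hz fundamental span 2 × (1/40 s) = 50 ms, and the required window shrinks proportionally as the lowest frequency of interest rises, as this small check shows.

```python
# Minimum analysis window (two pitch periods) for a few fundamental frequencies.
for f0 in (40, 100, 400):
    window_ms = 2 * 1000.0 / f0
    print(f"f0 = {f0} Hz -> minimum analysis window = {window_ms:.1f} ms")
# f0 = 40 Hz -> minimum analysis window = 50.0 ms
# f0 = 100 Hz -> minimum analysis window = 20.0 ms
# f0 = 400 Hz -> minimum analysis window = 5.0 ms
```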

References

  1. D. Gerhard. Pitch Extraction and Fundamental Frequency: History and Current Techniques, technical report, Dept. of Computer Science, University of Regina, 2003.
  2. de Cheveigné, Alain; Kawahara, Hideki (2002). "YIN, a fundamental frequency estimator for speech and music". The Journal of the Acoustical Society of America 111 (4): 1917–1930. doi:10.1121/1.1458024. PMID 12002874. Bibcode: 2002ASAJ..111.1917D. http://audition.ens.fr/adc/pdf/2002_JASA_YIN.pdf.
  3. P. McLeod and G. Wyvill. A smarter way to find pitch. In Proceedings of the International Computer Music Conference (ICMC’05), 2005.
  4. Hayes, Monson (1996). Statistical Digital Signal Processing and Modeling. John Wiley & Sons, Inc. p. 393. ISBN 0-471-59431-8.
  5. Pitch Detection Algorithms, online resource from Connexions
  6. A. Michael Noll, “Pitch Determination of Human Speech by the Harmonic Product Spectrum, the Harmonic Sum Spectrum and a Maximum Likelihood Estimate,” Proceedings of the Symposium on Computer Processing in Communications, Vol. XIX, Polytechnic Press: Brooklyn, New York, (1970), pp. 779–797.
  7. A. Michael Noll, “Cepstrum Pitch Determination,” Journal of the Acoustical Society of America, Vol. 41, No. 2, (February 1967), pp. 293–309.
  8. Mitre, Adriano; Queiroz, Marcelo; Faria, Régis. Accurate and Efficient Fundamental Frequency Determination from Precise Partial Estimates. Proceedings of the 4th AES Brazil Conference. 113-118, 2006.
  9. Brown, J. C.; Puckette, M. S. (1993). "A high resolution fundamental frequency determination based on phase changes of the Fourier transform". The Journal of the Acoustical Society of America 94 (2): 662–667.
  10. Zahorian, Stephen A.; Hu, Hongbing (2008). "A spectral/temporal method for robust fundamental frequency tracking". The Journal of the Acoustical Society of America 123 (6): 4559–4571. doi:10.1121/1.2916590. PMID 18537404. Bibcode: 2008ASAJ..123.4559Z. http://bingweb.binghamton.edu/~hhu1/paper/Zahorian2008spectral.pdf.
  11. Stephen A. Zahorian and Hongbing Hu. YAAPT Pitch Tracking MATLAB Function
  12. Huang, Xuedong; Acero, Alex; Hon, Hsiao-Wuen (2001). Spoken Language Processing. Prentice Hall PTR. p. 325. ISBN 0-13-022616-5.
