Stereo and surround recording techniques

In this entry we are going to recall the main techniques for stereo recording:

  • Coincident microphones, or intensity stereophony. Stereo information comes from level differences between the two microphones, with no time differences.
    • XY pair. Consists of two directional microphones at the same point, with opening angles typically between 90° and 150°.
    • Blumlein pair. Consists of two figure-of-eight microphones angled at 90° to each other.
    • Mid/side pair, or MS pair. Employs a bidirectional (figure-of-eight) microphone pointing sideways (side, or S) plus either an omnidirectional or a cardioid-family microphone pointing forward (mid, or M). The L and R signals are formed by linear combinations of M and S (see the decoding sketch after this list).
  • Spaced microphones, time-of-arrival stereophony, or AB technique. This technique consists of two omnidirectional microphones separated by a distance ranging from a few tens of centimetres to a few metres. The signal levels are mostly the same, at least for small separations, and the stereophony is due to time-of-arrival differences only.
  • Near-coincident pair techniques, the most common of which is the ORTF system. It consists of two cardioid microphones angled at 110° and spaced 17 cm apart.
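To make the mid/side decoding concrete, here is a minimal sketch in Python of how L and R are obtained from M and S. The function name, the width parameter and the test signal are illustrative, not taken from any particular library; an overall normalization factor (e.g. 1/√2) is sometimes applied as well.

    import numpy as np

    def ms_to_lr(mid, side, width=1.0):
        # Standard mid/side decode: L = M + w*S, R = M - w*S.
        # width = 0 collapses to mono; width = 1 is the plain decode;
        # larger values widen the stereo image.
        left = mid + width * side
        right = mid - width * side
        return left, right

    # Example: a 1 kHz tone with a small side component, 48 kHz sample rate
    fs = 48000
    t = np.arange(fs) / fs
    mid = np.sin(2 * np.pi * 1000 * t)         # forward-facing (M) capsule
    side = 0.3 * np.sin(2 * np.pi * 1000 * t)  # figure-of-eight (S) capsule
    left, right = ms_to_lr(mid, side)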

By checking some references on stereo microphony, try to answer the following questions:

  • What are the advantages and disadvantages of each of these techniques?
  • How is the stereo image perceived in each case?
  • What kinds of directivity patterns can one obtain with the MS technique? How does one go from MS to LR signals?
  • Is it always optimal to place two microphones near the stage, in the approximate positions the playback loudspeakers would occupy? Why?

There are also microphone configurations suitable for recording directly in 5.1 surround sound.

Surround sound recording techniques are based on the same principles (summing localization: time and/or intensity differences). They try to capture the main soundstage in the L, C and R channels and to leave ambience and reverberation for Ls and Rs. In general, surround listening reduces the perceived stereo separation because of the center channel, and surround recording techniques try to counteract this effect.
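The intensity-difference mechanism mentioned above is the same one exploited by amplitude panning between loudspeakers. As a rough illustration, here is a sketch of a constant-power pan between two channels; the helper function and its parameters are hypothetical, not taken from any specific tool.

    import numpy as np

    def constant_power_pan(signal, pan):
        # Pan a mono signal between two loudspeakers using only level
        # (intensity) differences.  pan = -1 is fully left, 0 is centre,
        # +1 is fully right.  The cos/sin law keeps gL^2 + gR^2 = 1,
        # so the perceived loudness stays roughly constant across the pan.
        theta = (pan + 1.0) * np.pi / 4.0   # map [-1, 1] to [0, pi/2]
        g_left, g_right = np.cos(theta), np.sin(theta)
        return g_left * signal, g_right * signal

    # Example: place a 440 Hz tone halfway between centre and right
    fs = 48000
    t = np.arange(fs) / fs
    s = np.sin(2 * np.pi * 440 * t)
    left, right = constant_power_pan(s, pan=0.5)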

Wave Field Synthesis

WFS is a spatial sound rendering technique that generates a true sound field using loudspeaker arrays [Berkhout et al., 1993]. Wave fields are synthesized based on virtual sound sources placed at some position in the sound stage behind the loudspeakers, or even inside the listening area. Contrary to traditional spatialization techniques such as stereo or surround sound, the localization of virtual sources in WFS does not depend on or change with the listener’s position.

When sound reproduction is based on WFS, sound fields can be generated in a spatially and temporally correct way, so listeners perceive the sound as actually originating at the positions of the virtual sources. Furthermore, the synthesized wave field is correct over an extended listening area, with much larger dimensions than the “sweet spot” of current surround systems such as commercial 5.1 channel surround.
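As a toy illustration of the geometry involved (not the actual WFS driving functions, which include pre-filtering, amplitude-correction and tapering factors), each loudspeaker of the array can be fed a delayed and attenuated copy of the virtual source signal, with the delay proportional to its distance from the virtual source. The function and array layout below are purely illustrative.

    import numpy as np

    C = 343.0  # speed of sound, m/s

    def wfs_delays_and_gains(source_pos, speaker_positions):
        # Per-loudspeaker delay = distance / c and a crude 1/distance gain
        # for a virtual point source behind a linear array.  A real WFS
        # driving function also applies a sqrt(jk) pre-filter, obliquity
        # factors and array tapering.
        source_pos = np.asarray(source_pos, dtype=float)
        speaker_positions = np.asarray(speaker_positions, dtype=float)
        distances = np.linalg.norm(speaker_positions - source_pos, axis=1)
        delays = distances / C        # seconds
        gains = 1.0 / distances       # spherical-spreading attenuation
        return delays, gains

    # Example: 32 loudspeakers spaced 15 cm along the x-axis,
    # virtual source 2 m behind the array
    speakers = np.stack([np.arange(32) * 0.15, np.zeros(32)], axis=1)
    virtual_source = np.array([2.4, -2.0])
    delays, gains = wfs_delays_and_gains(virtual_source, speakers)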

The major drawback is that the number of loudspeakers needed for an acceptable sound field reproduction is very high (usually on the order of hundreds). Moreover, the WFS algorithm requires a considerable amount of computational power. As a consequence, three-dimensional WFS systems are still not practical, although the mathematical formulations are already available.

See more at: http://en.wikipedia.org/wiki/Wave_field_synthesis

Bibliography:

Jens Ahrens – Analytic methods of sound field synthesis

Basilio Pueo Ortega – Analysis and enhancement of multiactuator panels for wave field synthesis reproduction (Appendix A)

Sergio Bleda Pérez – Contribuciones a la implementación de sistemas de Wavefield Synthesis – Section 2.5 (in Spanish)

Sascha Spors – The Theory of Wave Field Synthesis Revisited

Software:

http://www.mattmontag.com/projects-page/wfs-visualizer  –  Applet that simulates wave field synthesis

http://www.mattmontag.com/projects-page/wfs-designer – Open-source, cross-platform application for performing wave field synthesis with large speaker arrays

Ambisonics


Ambisonics is a method of encoding a sound field that takes into account its directional properties. Instead of each channel carrying the signal that a particular loudspeaker should emit, as in stereo or 5.1 surround, each Ambisonics channel carries information about certain physical properties of the acoustic field, such as the pressure or the acoustic velocity.
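For instance, in first-order Ambisonics (traditional B-format) a mono source at a given azimuth and elevation is encoded into four channels W, X, Y and Z, which correspond to the omnidirectional pressure component and to the three figure-of-eight velocity components. A minimal encoding sketch follows; it uses the conventional 1/√2 weight on W, and the angle convention (azimuth counter-clockwise from the front) is stated explicitly because conventions vary.

    import numpy as np

    def encode_bformat(signal, azimuth, elevation):
        # First-order (B-format) encoding of a mono signal.
        # W: omnidirectional (pressure) component, with the usual 1/sqrt(2) weight.
        # X, Y, Z: figure-of-eight (velocity) components along the three axes.
        # Angles in radians; azimuth measured counter-clockwise from the front.
        w = signal / np.sqrt(2.0)
        x = signal * np.cos(azimuth) * np.cos(elevation)
        y = signal * np.sin(azimuth) * np.cos(elevation)
        z = signal * np.sin(elevation)
        return w, x, y, z

    # Example: encode a 440 Hz tone arriving from 45 degrees to the left,
    # in the horizontal plane
    fs = 48000
    t = np.arange(fs) / fs
    s = np.sin(2 * np.pi * 440 * t)
    w, x, y, z = encode_bformat(s, azimuth=np.radians(45), elevation=0.0)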

In the two related lectures we will review Ambisonics theory, going from recording and encoding to playback, passing through transmission and format encoding. We will mostly concentrate on first-order Ambisonics, although we will also give a brief introduction to higher-order Ambisonics (HOA).

Since it is relatively difficult to find good introductory material about Ambisonics, you can find some preliminary lecture notes at:

Lecture notes on Ambisonics (version 0.3)

This material has been prepared by Daniel Arteaga and is published under a Creative Commons Attribution-ShareAlike 4.0 International License.

Many things are missing or could be improved: a better list of references, historical remarks, a clearer exposition of some concepts, a revision of the English, a typographical revision, etc. The notes will also probably contain errors. Use them at your own risk!

Update 9/06/2015: Lecture notes have been updated with some minor corrections.

Stereo and multi-loudspeaker reproduction

Some notions before starting:

from http://stereos.about.com/od/introductiontostereos/a/soundformats.htm

Monophonic Sound

Monophonic sound is sound created by one channel or speaker and is also known as Monaural or High-Fidelity sound. Monophonic sound was replaced by Stereo or Stereophonic sound in the 1960s.

Stereophonic Sound

Stereo or Stereophonic sound is created by two independent audio channels or speakers and provides a sense of directionality because sounds can be heard from different directions.

The term stereophonic is derived from the Greek words stereos, which means solid and phone, which means sound. Stereo sound can reproduce sounds and music from various directions or positions the way we hear things naturally, hence the term solid sound. Stereo sound is a common form of sound reproduction.

Multichannel Surround Sound

Multichannel sound, also known as surround sound, is created by at least four and up to seven independent audio channels or speakers placed in front of and behind the listener, surrounding the listener in sound. Multichannel sound can be enjoyed on DVD music discs, DVD movies and some CDs.

In this lecture we describe the principles of two-channel stereo, analyse the most common configurations for multichannel reproduction and briefly describe the most widely used stereo recording techniques.

A detailed overview is given below:

  1. Introduction
  2. Two loudspeaker Stereo – More info in [1,2,3]
    1. Two channel (2-0) stereo
      1. Basic principles of loudspeaker stereo: ‘Blumlein Stereo’
      2. Cross-Talk
      3. Basic principles of loudspeaker stereo
      4. Intensity Stereo
      5. Time Difference Stereo
    2. Basic two-channel signal formats
    3. Limitations of two-channel loudspeaker stereo
  3. Multichannel stereo and surround systems – More info in [1]
    1. Three channel stereo (3-0)
    2. Four-channel surround (3-1 stereo)
    3. Five-channel surround (3-2 stereo)
    4. Other multichannel configurations
      1. 7.1 channel surround
      2. 10.2 channel surround
  4. Surround Sound Systems – More info in [1]
  5. Matrixed surround sound systems – More info in [1]
    1. Dolby Stereo, Surround and Prologic
    2. Circle Surround
    3. Lexicon Logic 7
    4. Dolby EX
  6. Digital surround sound formats – More info in [1]
    1. Dolby Digital
    2. MPEG
  7. Stereo Recording Techniques – More info in [3, 4]
    1. X-Y technique
    2. A-B technique
    3. ORTF technique (mixed technique)

References:

[1] F. Rumsey and T. McCormick – Sound and recording (Chapters 3 and 4)

[2] V. Pulkki – “Compensating displacement of amplitude-panned virtual sources.” Audio Engineering Society 22nd Int. Conf. on Virtual, Synthetic and Entertainment Audio, pp. 186-195, 2002, Espoo, Finland

[3] Bennett et al. – A new approach to the assessment of stereophonic sound system performance

[4] Bruce Bartlett, Jenny Bartlett – On Location Recording Techniques

[5] http://en.wikipedia.org/wiki/Microphone_practice

Spatial Audio Psychoacoustics

From [2]

Most research into the mechanisms underlying directional sound perception concludes that there are two primary mechanisms at work, the importance of each depending on the nature of the sound signal and on the conflicting environmental cues that may accompany discrete sources. These broad mechanisms involve the detection of timing or phase differences between the ears, and of amplitude or spectral differences between the ears. The majority of spatial perception depends on the listener having two ears, although certain monaural cues have been shown to exist; in other words, it is mainly the differences between the signals received by the two ears that matter.
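As a rough illustration of the timing cue, the classic spherical-head (Woodworth) approximation estimates the interaural time difference as ITD ≈ (a/c)(θ + sin θ) for a distant source at azimuth θ, where a is the head radius and c the speed of sound. The sketch below assumes a typical head radius of about 8.75 cm.

    import numpy as np

    def itd_woodworth(azimuth_deg, head_radius=0.0875, c=343.0):
        # Approximate interaural time difference (in seconds) for a far-field
        # source, using the Woodworth spherical-head model:
        #     ITD ~ (a / c) * (theta + sin(theta))
        # theta = 0 corresponds to a source straight ahead.
        theta = np.radians(azimuth_deg)
        return (head_radius / c) * (theta + np.sin(theta))

    # A source at 90 degrees to the side gives roughly 0.65 ms of delay
    print(itd_woodworth(90.0) * 1e3, "ms")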

In this lecture we cover issues related to the perception and cognition of spatial sound as it relates to sound recording and reproduction. The overview of the lecture is as follows:

  1. 3D Sound and Spatial Audio
  2. Important terms
  3. Geometric convention
  4. Introduction to sound localization
  5. The minimum audible angle (MAA)
  6. Acoustic cues used in localization
  7. Measurements
  8. Subjective Attributes of Spatial Sound (please read **, pages 35-39)
  9. Conclusions

More info at:

[1] S. A. Gelfand – Hearing: an introduction to psychological and physiological acoustics (Chapter 13)

[2] F. Rumsey and T. McCormick – Sound and recording (Chapter 2) **

Even more…

[3] G. Kendall – A 3-D Sound Primer: Directional Hearing and Stereo Reproduction

[4] W. Yost – Fundamentals of hearing: an introduction

[5] J. Blauert – Spatial hearing: the psychophysics of human sound localization

[6] B. Moore – An introduction to the psychology of hearing