Auditory scene analysis

Auditory scene analysis (ASA) is a term coined by the psychologist Albert Bregman to describe the process by which the human auditory system organizes complex mixtures of sound. ASA involves a grouping process, in which sound energy that is likely to have arisen from the same environmental source is packaged together to form a perceptual whole.

A well-known illustration of ASA is the cocktail party effect: at a busy party, a listener can follow a single conversation even though other voices and background music are present.

Most natural sounds, such as the human voice, musical instruments, or cars passing in the street, are made up of many frequencies, which contribute to the perceived quality (or timbre) of the sound. When two or more natural sounds occur at once, the components of the simultaneously active sounds reach the listener's ears at the same time, or overlapping in time. This confronts the auditory system with a problem: which frequency components should be grouped together and treated as parts of the same sound? Grouping them incorrectly can cause the listener to hear non-existent sounds built from the wrong combinations of the original components.

A number of grouping principles appear to underlie ASA, many of which are related to principles of perceptual organization discovered by the school of Gestalt psychology. These can be broadly categorised into sequential grouping cues (those that operate across time) and simultaneous grouping cues (those that operate across frequency). In addition, schemas (learned patterns) play an important role.

Errors in simultaneous grouping can cause sounds that should be heard as separate to blend, and the blended sound may have perceived qualities (such as pitch or timbre) different from those of any of the sounds actually received.

Errors in sequential grouping can lead, for example, to hearing a word assembled from syllables originating in two different voices. The job of ASA is to group incoming sensory information to form an accurate mental representation of the environmental sounds.

When the auditory system groups sounds into a perceived sequence, distinct from other co-occurring sequences, each of these perceived sequences is called an "auditory stream". Normally, a stream corresponds to a distinct environmental sound pattern that persists over time, such as a person talking, a piano playing, or a dog barking, but perceptual errors and illusions are possible under unusual circumstances. One example is the laboratory phenomenon of "streaming", also called "stream segregation". If two sounds, A and B, are rapidly alternated in time, after a few seconds the perception may "split", so that the listener hears two streams of sound rather than one, each stream corresponding to the repetitions of one of the two sounds: A-A-A-A-, etc., accompanied by B-B-B-B-, etc. The tendency to segregate into separate streams is favored by differences in the acoustical properties of sounds A and B. Among the differences that favor segregation are those of frequency (for pure tones), fundamental frequency (for rich tones), frequency composition, spatial position, and speed of the sequence (faster sequences segregate more readily).
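The alternating-tone stimulus used in streaming experiments is easy to synthesize. The sketch below is a minimal illustration using NumPy; the function names, the 500 Hz / 1000 Hz frequencies, and the 100 ms tone duration are illustrative choices, not parameters from any particular study:

```python
import numpy as np

def tone(freq, dur=0.1, sr=44100):
    """A pure tone with short raised-cosine ramps to avoid onset clicks."""
    t = np.arange(int(dur * sr)) / sr
    x = np.sin(2 * np.pi * freq * t)
    ramp = int(0.005 * sr)
    env = np.ones_like(x)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return x * env

def abab_sequence(freq_a=500.0, freq_b=1000.0, n_pairs=20, dur=0.1, sr=44100):
    """Alternate tones A and B in time. Large frequency separations and
    fast presentation rates favor hearing two segregated streams
    (A-A-A-... and B-B-B-...) rather than a single alternating stream."""
    pair = np.concatenate([tone(freq_a, dur, sr), tone(freq_b, dur, sr)])
    return np.tile(pair, n_pairs)

seq = abab_sequence()
```

Playing `seq` at the chosen sample rate (e.g. by writing it to a WAV file) reproduces the basic streaming demonstration; shrinking the frequency separation or slowing the rate makes the single integrated stream more likely to be heard.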

Many experiments have studied the segregation of more complex patterns of sound, such as a sequence of high notes of different pitches interleaved with low ones. In such sequences, the segregation of the interleaved sounds into distinct streams has a profound effect on the way they are heard. A melody is perceived more easily if all its notes fall in the same auditory stream. We tend to hear the rhythms among notes that are in the same stream, excluding those that are in other streams. Judgments of timing are more precise between notes in the same stream than between notes in separate streams. Even perceived spatial location and perceived loudness can be affected by sequential grouping.

While the initial research on this topic was done on human adults, recent studies have shown that some ASA capabilities are present in newborn infants, suggesting that they are built in rather than learned through experience. Other research has shown that non-human animals also display ASA. Currently, scientists are studying the activity of neurons in the auditory regions of the cerebral cortex to discover the mechanisms underlying ASA.

The field of computational auditory scene analysis attempts to implement ASA in machine systems, and is closely related to the problems of source separation and blind signal separation.
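As a toy illustration of the kind of cue such systems exploit, the sketch below applies one simultaneous-grouping principle, harmonicity: spectral peaks are assigned to whichever candidate fundamental frequency's harmonic series they fit best. This is only a sketch under simplifying assumptions (the fundamentals are given in advance, and the function name and tolerance are hypothetical); real CASA systems estimate the sources and combine many cues:

```python
import numpy as np

def group_by_harmonicity(peaks_hz, f0_candidates, tol=0.03):
    """Assign each spectral peak to the candidate fundamental (f0) whose
    harmonic series it fits best, within a relative tolerance.
    Peaks fitting no candidate within tolerance are left ungrouped."""
    groups = {f0: [] for f0 in f0_candidates}
    for p in peaks_hz:
        best, best_err = None, tol
        for f0 in f0_candidates:
            n = max(1, round(p / f0))        # nearest harmonic number
            err = abs(p - n * f0) / p        # relative mistuning
            if err < best_err:
                best, best_err = f0, err
        if best is not None:
            groups[best].append(p)
    return groups

# A mixture of two harmonic sources (fundamentals 200 Hz and 310 Hz),
# with the peaks of both sources received interleaved, as at the ear.
peaks = [200, 310, 400, 620, 600, 930, 800, 1240]
grouped = group_by_harmonicity(peaks, f0_candidates=[200.0, 310.0])
```

Here each peak is recovered by the fundamental that generated it, illustrating how a shared harmonic relationship can "package together" frequency components likely to have arisen from the same source.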