Sound is all around

In today’s world, we are continuously bombarded by sounds. We rely on speech as our main form of communication and are additionally exposed to a host of other sounds, whether voluntary (e.g. music) or involuntary (e.g. traffic noise). The human brain has an extraordinary capacity to make sense of this acoustic environment, and over the past decades many efforts have been dedicated to developing technology that can achieve the same feats as humans.

“If we had machines that could hear as humans do, we would expect them to be able to easily distinguish speech from music and background noises, to pull out the speech and music parts for special treatment, to know what direction sounds are coming from, to learn which noises are typical and which are noteworthy. Hearing machines should be able to organize what they hear; learn names for recognizable objects, actions, events, places, musical styles, instruments, and speakers; and retrieve sounds by reference to those names. These machines should be able to listen and react in real time”

    - Lyon, R. F. (2010). Machine hearing: An emerging field [exploratory DSP]. IEEE Signal Processing Magazine, 27(5), 131-139.