Acoustic Source Localization and Separation

Distant automatic speech recognition for human-machine interaction, as well as automatic event detection and localization in e-Health surveillance and assistance applications, require automatic localization and separation of the acoustic sources. At our lab, research in Acoustic Source Localization and Separation covers the localization of single and of multiple concurrent acoustic sources using single-channel or multi-channel audio inputs. Given these inputs, we apply a combination of nonlinear signal processing and machine learning concepts to achieve single-channel or multi-channel source separation, using techniques such as blind source separation, adaptive beamforming, and fundamental-frequency-based source separation. To record real-life audio databases, we have recently set up a dedicated recording room equipped with a flexible arrangement of microphone arrays, which allows us to record different meeting situations, assisted-living simulations, and other distant speech recognition scenarios.
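As an illustration of the multi-channel localization idea, the sketch below estimates the time difference of arrival (TDOA) between two microphones using the generalized cross-correlation with phase transform (GCC-PHAT), a standard building block for acoustic source localization. The signals, sampling rate, and delay here are synthetic assumptions for the demonstration, not data or code from our lab:

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the delay (in seconds) of `sig` relative to `ref` via GCC-PHAT."""
    n = sig.shape[0] + ref.shape[0]
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    # PHAT weighting: normalize the magnitude, keeping only phase information,
    # which sharpens the correlation peak in reverberant conditions.
    cross /= np.abs(cross) + 1e-12
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    # Reorder so the array runs from -max_shift to +max_shift lags.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

# Synthetic two-microphone test: one channel is a delayed copy of a noise burst.
fs = 16000
rng = np.random.default_rng(0)
src = rng.standard_normal(fs)                 # 1 s of white noise as the "source"
delay_samples = 40                            # assumed inter-microphone delay (2.5 ms)
mic1 = src
mic2 = np.concatenate((np.zeros(delay_samples), src[:-delay_samples]))

tau = gcc_phat(mic2, mic1, fs)
print(round(tau * fs))                        # → 40 (delay recovered in samples)
```

Given the estimated delay and the known microphone geometry, the direction of arrival follows from simple trigonometry; with more than two microphones, delays from several pairs can be combined for full localization.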