Acoustic Source Localization and Separation
Distant automatic speech recognition for human-machine interaction, as well as automatic event detection and localization in e-Health surveillance and assistance applications, requires automatic localization and separation of the acoustic sources. At our lab, research in acoustic source localization and separation covers the localization of single and of multiple concurrent acoustic sources from single-channel or multi-channel audio input. Given this input, we combine nonlinear signal processing with machine learning concepts to achieve single-channel or multi-channel source separation, using techniques such as blind source separation, adaptive beamforming, and fundamental-frequency-based source separation.
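A common building block of multi-channel localization is estimating the time difference of arrival (TDOA) between microphone pairs. The sketch below, assuming NumPy, implements the classic generalized cross-correlation with phase transform (GCC-PHAT); it illustrates the general idea only and is not the specific algorithm used in our lab.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the TDOA of `sig` relative to `ref` in seconds
    (a positive result means `sig` arrives later) via GCC-PHAT."""
    n = len(sig) + len(ref)                  # zero-pad to avoid circular wrap-around
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)                   # cross-power spectrum
    R /= np.abs(R) + 1e-12                   # PHAT weighting: keep only the phase
    cc = np.fft.irfft(R, n=n)                # generalized cross-correlation
    max_shift = n // 2
    if max_tau is not None:                  # optionally restrict the search to
        max_shift = min(int(fs * max_tau), max_shift)  # physically possible lags
    # rearrange so that lag 0 sits in the middle, covering -max_shift..+max_shift
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs
```

For two microphones spaced d metres apart, the estimated delay tau maps to an arrival angle via theta = arcsin(c * tau / d), with c ≈ 343 m/s the speed of sound.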
For several years we have been using deep neural networks for speech separation, dereverberation, and speech enhancement. Most of our techniques exploit multiple microphone signals to achieve performance beyond the state of the art. Furthermore, for recording real-life audio databases, we have recently set up a dedicated recording room equipped with a flexible arrangement of microphone arrays, which allows us to record different meeting situations, assisted-living simulations, and other distant speech recognition tasks.
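To illustrate how multiple microphone signals can be exploited, the following is a minimal delay-and-sum beamformer, the simplest fixed multi-channel enhancement technique. It is a sketch assuming known per-channel arrival delays (obtained, for example, from gcc_phat above), not the adaptive or DNN-based front ends used in our work.

```python
import numpy as np

def delay_and_sum(x, delays, fs):
    """Steer a multi-channel recording x (channels x samples) towards a
    source whose relative arrival delay at each microphone is given in
    `delays` (seconds), then average the time-aligned channels.

    Note: the frequency-domain shift is circular; in practice the input
    should be zero-padded or processed frame by frame.
    """
    n_ch, n = x.shape
    X = np.fft.rfft(x, axis=1)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # a time advance of tau seconds is a linear phase e^{+2j*pi*f*tau};
    # advancing each channel by its arrival delay aligns the wavefronts
    steering = np.exp(2j * np.pi * freqs[None, :] * np.asarray(delays)[:, None])
    aligned = np.fft.irfft(X * steering, n=n, axis=1)
    # coherent averaging reinforces the steered direction and attenuates
    # noise and interferers arriving from other directions
    return aligned.mean(axis=0)
```

Applying the phase shift in the frequency domain permits fractional-sample delays, which plain integer shifting of the time-domain samples cannot represent.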