
Acoustic Signal Processing

Acoustic Source Localization

The problem of spatially localizing objects can be solved with different sensor modalities. Our research focuses on acoustic source localization, which uses audible cues to localize acoustic events in time and space. Knowledge about the location of objects, and in our case especially of speakers, is rarely useful on its own. Hence, we additionally investigate possible applications for this kind of knowledge in the context of smart environments.
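A classic building block for localizing a sound source with a microphone pair is estimating the time difference of arrival (TDOA) between the two channels. The sketch below is a minimal, generic GCC-PHAT implementation in NumPy, not the specific algorithm of the publications listed here; the signal lengths and sampling rate are illustrative.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the time delay between two microphone signals with
    GCC-PHAT: the cross-power spectrum is whitened by its magnitude,
    so only the phase carries the delay information."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12            # PHAT weighting
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    # rearrange so zero lag sits in the middle of the search window
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs                 # delay in seconds

# toy check: a broadband signal delayed by 8 samples at 16 kHz
fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
delayed = np.concatenate((np.zeros(8), x[:-8]))
tau = gcc_phat(delayed, x, fs)        # close to 8 / 16000 s
```

Given the TDOA estimates of several microphone pairs with known geometry, the source position can then be triangulated.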

Related publications:

Plinge, A., Fink, G. A. Multi-Speaker Tracking using Multiple Distributed Microphone Arrays, ICASSP 2014.

Plinge, A., Hauschildt, D., Hennecke, M. H., Fink, G. A. Multiple Speaker Tracking using a Microphone Array by Combining Auditory Processing and a Gaussian Mixture Cardinalized Probability Hypothesis Density Filter, ICASSP 2011.

Kleine-Cosack, C., Hennecke, M. H., Vajda, S., Fink, G. A. Exploiting Acoustic Source Localization for Context Classification in Smart Environments, AMI 2010.


Acoustic Event Classification

The classification of acoustic events in indoor environments is an important task for many practical applications in smart environments. In our group, a novel method for classifying acoustic events was developed based on the Bag-of-Features approach. Mel- and gammatone-frequency cepstral coefficients, which originate from psychoacoustic models, are used as input features for the Bag-of-Features representation. Rather than using a prior classification or segmentation step to eliminate silence and background noise, Bag-of-Features representations are also learned for a background class. Supervised learning of codebooks and temporal coding were shown to improve the recognition rates.
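The core of a Bag-of-Features representation is quantizing frame-level features against a learned codebook and describing a clip by its codeword histogram. The following sketch illustrates this pipeline on synthetic feature vectors standing in for cepstral frames; it uses generic k-means rather than the supervised codebook learning of the cited work, and all sizes (13-dimensional frames, 8 codewords) are arbitrary choices for the example.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def bag_of_features(frames, codebook):
    """Quantize frame-level features against the codebook and return a
    normalized codeword histogram (the Bag-of-Features vector)."""
    labels, _ = vq(frames, codebook)
    hist = np.bincount(labels, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# synthetic "cepstral" frames from two acoustic conditions
rng = np.random.default_rng(1)
train = np.vstack([rng.normal(0.0, 0.3, (200, 13)),
                   rng.normal(3.0, 0.3, (200, 13))])
codebook, _ = kmeans2(train, 8, minit='++', seed=1)

clip = rng.normal(3.0, 0.3, (50, 13))   # frames of one test clip
h = bag_of_features(clip, codebook)     # fixed-length clip descriptor
```

The resulting fixed-length histogram can be fed to any standard classifier; a separate histogram model learned for a background class then absorbs silence and ambient noise without an explicit segmentation step.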

Related publications:

Plinge, A., Grzeszik, R., Fink, G. A. A Bag-of-Features Approach to Acoustic Event Detection, ICASSP 2014.


Self-Localization of Microphone Arrays

All distributed sensor systems that derive spatial information about their deployment environment or the objects therein need to know the positions of the sensors themselves. Without this knowledge, measurements from multiple spatially distributed sensors cannot be put into relation. To ease the installation and maintenance of such a system, an unsupervised method for estimating the geometry of the sensor setup would be beneficial.

We study this so-called self-localization problem of distributed sensor systems. Our focus lies on the development of self-localization algorithms for distributed microphone arrays. In contrast to existing methods, no prior knowledge about the deployment environment or the type of calibration signal is assumed. A first attempt to solve this problem used specific noise characteristics and localization results of a moving speaker as cues for the self-localization task.
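One generic way to recover a sensor geometry once pairwise distances between microphones have been estimated (e.g. from acoustic measurements) is classical multidimensional scaling, which reconstructs coordinates up to rotation, translation, and reflection. This is a textbook building block, not the hierarchical calibration method of the publications below; the four-microphone layout is a made-up example.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover sensor coordinates (up to a rigid transform) from a
    matrix of pairwise distances: double-center the squared distances
    and take the top eigenvectors of the resulting Gram matrix."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # Gram matrix of coordinates
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# toy example: four microphones at the corners of a 2 m x 1.5 m room
pos = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.5], [0.0, 1.5]])
D = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
est = classical_mds(D)   # reproduces the geometry up to a rigid transform
```

In practice the estimated distances are noisy and incomplete, which is what makes self-localization from uncontrolled signals a research problem rather than a direct application of this formula.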

Related publications:

Plinge, A., Fink, G. A. Geometry Calibration of Multiple Microphone Arrays in Highly Reverberant Environments, IWAENC 2014.

Plinge, A., Fink, G. A. Geometry Calibration of Distributed Microphone Arrays Exploiting Audio-Visual Correspondences, EUSIPCO 2014.

Hennecke, M., Fink, G. A. Towards Acoustic Self-Localization of Ad Hoc Smartphone Arrays, HSCMA 2011.

Hennecke, M., Plötz, T., Fink, G. A., Schmalenströer, J., Häb-Umbach, R. A Hierarchical Approach to Unsupervised Shape Calibration of Microphone Array Networks, WSSP 2009.




Prof. Dr.-Ing. Gernot A. Fink
Head of Research Group
Tel.: 0231 755-6151