Individual Binaural Synthesis of Virtual Acoustic Scenes
- Funding Program: DFG Sachbeihilfe
- Research Area:
In recent years, Virtual Reality (VR) has had a major impact in many areas. In the technical development of VR, the focus has traditionally been on the visual component; however, the auditory component is essential for a convincing application. Various methods are available for headphone-based rendering of acoustic virtual reality. Model-based binaural synthesis uses Head-Related Transfer Functions (HRTFs) for the auralization of individual virtual sound sources, whereas data-based binaural synthesis combines microphone-array recordings with audio signal processing and HRTFs.

With both approaches, generic HRTFs measured on artificial heads can cause problems such as in-the-head localization, front-back confusions, and timbre coloration. Numerous methods exist for the fast measurement of individual HRTFs, as well as algorithms for the individualization of generic HRTFs. For the former, however, there is a lack of systematic validation of the measurement uncertainties caused by the apparatus and by the subject's movement. The same applies to the validation of algorithms for the individualization of generic HRTFs.

The central goal of this project is to investigate how measurement uncertainties and algorithmic approximations in model- and data-based binaural synthesis affect the authenticity and plausibility of acoustic scenes. The essential question is to what extent individualized or individually measured HRTFs improve a virtual acoustic scene in comparison to generic HRTFs. In addition to instrumental measurements, the evaluation employs empirical studies with listening experiments.
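To illustrate the model-based approach mentioned above, here is a minimal NumPy sketch under simplifying assumptions: a single anechoic virtual source is rendered binaurally by convolving its mono signal with a left/right head-related impulse response (HRIR) pair, the time-domain counterpart of the HRTF. The HRIRs below are synthetic placeholders (a decaying noise burst plus a crude interaural delay), not measured data; a real system would load measured or individualized HRIRs, e.g. from a SOFA file.

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Render a mono source binaurally by convolving it with an HRIR pair.

    Returns a (2, N) array: row 0 is the left-ear signal, row 1 the right.
    """
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Hypothetical toy HRIRs -- real HRIRs are measured per subject and
# direction (typically a few hundred taps at 44.1 or 48 kHz).
rng = np.random.default_rng(0)
source = rng.standard_normal(1000)                               # mono source signal
hrir_l = rng.standard_normal(256) * np.exp(-np.arange(256) / 32)  # decaying noise burst
hrir_r = np.roll(hrir_l, 20)                                      # crude interaural time delay

binaural = binaural_render(source, hrir_l, hrir_r)
print(binaural.shape)  # (2, 1255): full convolution length 1000 + 256 - 1
```

In practice the HRIR pair is selected (or interpolated) for the source direction relative to the listener's head, which is exactly where the differences between generic, individualized, and individually measured HRTFs enter the rendering chain.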