Implementing and validating headphone-based virtual acoustic methods to study auditory distraction
Master Thesis of Guo, Shuyao
Research area: Acoustic Virtual Reality
Supervisors: Yadav, Manuj / Fels, Janina
1. Introduction
Background noise, including sound that is irrelevant to the task at hand, impairs cognitive
performance in both adults and children; this is the so-called irrelevant sound effect (ISE).
According to the current state of research, mainly two different mechanisms lead to
auditory distraction, namely sound-related attentional distraction and specific interference.
However, there are still controversies regarding the ISE, given that the similarity between the
background noise and the task-related language has proved to be irrelevant, and that the ISE
can also be evoked by non-linguistic sounds. Moreover, the relationship between attentional
distraction and specific interference warrants further study.
A series of experiments has been conducted to clarify these mechanisms, in which participants
were asked to complete certain cognitive tasks in environments with or without
background noise. In earlier studies on this topic, the background noise mostly comprised
unrealistic sounds that were presented diotically, which differs considerably from
real scenarios. Stimuli closer to real background noise should be used in such
experiments to support better-grounded conclusions. In addition, the effect of room acoustics
and sound reproduction methods on cognitive performance is also unclear.
Different types of noise stimuli are thus necessary for this purpose, including stimuli
closer to reality than the simple diotic or monaural sounds used in earlier studies. According to
current research, acoustic technology can establish a realistic virtual acoustic
environment via headphones and head-related transfer functions (HRTFs). Head- and
eye-tracking equipment and techniques are introduced to enable interaction between the
participants and the environment. In the actual experiments, these implementations can be
reproduced via headphones while participants perform certain cognitive tasks. This is currently
being explored in a joint DFG project of acoustics (at IHTA) and psychology researchers at
RWTH and TU Kaiserslautern.
In this master thesis, which will benefit from the expertise within the DFG project,
different classes of headphone-based background noise will be implemented in
order of increasing realism, using room acoustic simulation, generic and individual HRTFs,
and head- and eye-tracking technology. Finally, at least one short case study per
implementation will be carried out to test the validity of these implementations.
2. Methods
2.1 Implementations
Two different room models, namely a classroom for children and an open-plan office for
adults, have been designed using SketchUp. Room acoustic simulation and auralization are
carried out with MATLAB, RAVEN, VA, and suitable sound engines. Room acoustic and
psychoacoustic parameters are also considered in the simulation, based on the existing
literature on these parameters in classrooms and offices. A minimal sketch of this baseline
auralization chain is given below; starting from it, the following new elements will be
introduced to the workflow.
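The following sketch illustrates only the offline part of that chain, under the assumption that the room simulation has already produced a two-channel binaural room impulse response (BRIR) saved as a WAV file; the file names and levels are hypothetical placeholders, not the project's actual data.

    % Minimal sketch of the baseline auralization chain (offline).
    % Assumption: the room simulation has exported a two-channel BRIR as a
    % WAV file; all file names below are placeholders.
    [brir, fs ] = audioread('classroom_brir.wav');   % simulated BRIR, columns = left/right ear
    [dry,  fs2] = audioread('speech_dry.wav');       % anechoic (dry) source recording
    assert(fs == fs2, 'sampling rates must match');
    dry   = dry(:, 1);                               % use a single channel of the dry signal
    out_L = conv(dry, brir(:, 1));                   % convolve with the left-ear BRIR
    out_R = conv(dry, brir(:, 2));                   % convolve with the right-ear BRIR
    stim  = [out_L, out_R];
    stim  = 0.9 * stim / max(abs(stim(:)));          % normalize to avoid clipping
    audiowrite('classroom_stimulus.wav', stim, fs);  % headphone stimulus for the experiment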
a. Generic HRTF
A generic artificial-head HRTF for adults and children is used to produce more realistic
acoustic environments than the diotic stimuli used in previous studies, since it takes
the spatial distribution of the sound sources into account. With binaural technology,
these stimuli can be presented via headphones during the experiments, as sketched below.
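A minimal sketch of such generic-HRTF rendering for a small background scene, assuming the dry source signals and the HRIR matrices of a generic artificial head have already been loaded; the variable names and the dir_index() lookup are hypothetical.

    % Minimal sketch of generic-HRTF rendering for a small background scene.
    % Assumptions: the dry source signals are column vectors of equal length,
    % hrir_L / hrir_R are [taps x directions] matrices from a generic
    % artificial-head data set, and dir_index() is a hypothetical lookup that
    % returns the column index of the nearest measured direction.
    sources  = {babble, keyboard, door};          % dry background sources (assumed loaded)
    azimuths = [-45, 30, 120];                    % their azimuths in degrees (elevation 0)
    mix_L = 0;
    mix_R = 0;
    for k = 1:numel(sources)
        idx   = dir_index(azimuths(k), 0);        % nearest HRIR pair on the measurement grid
        mix_L = mix_L + conv(sources{k}, hrir_L(:, idx));
        mix_R = mix_R + conv(sources{k}, hrir_R(:, idx));
    end
    binaural = [mix_L, mix_R];                    % spatialized noise scene for headphone playback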
b. Individualized HRTF
The next stage of noise stimuli takes individualized HRTFs into account instead of
using the same generic HRTF for all participants. Based on data about the head
dimensions, the generic HRTFs are slightly modified with a dedicated algorithm to adapt
them to the individual participants. Thus, more precise spatial information will be
provided in the experiments.
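As a purely illustrative sketch of such an adaptation (the project's actual individualization algorithm is not fixed here), the interaural time difference of a generic HRIR pair could be rescaled to a participant's measured head width using the spherical-head (Woodworth) model; the head dimensions and HRIRs are assumed inputs.

    % Illustrative sketch only; the actual individualization algorithm may differ.
    % The interaural time difference (ITD) of a generic HRIR pair is rescaled to
    % a participant's measured head width; hrir_L / hrir_R are assumed loaded.
    fs         = 44100;                              % sampling rate in Hz
    c          = 343;                                % speed of sound in m/s
    a_gen      = 0.0875;                             % assumed radius of the generic head (m)
    head_width = 0.16;                               % example measured head width (m)
    a_ind      = 0.5 * head_width;                   % participant head radius (m)
    az         = deg2rad(30);                        % azimuth of this HRIR pair
    itd_gen    = (a_gen / c) * (az + sin(az));       % Woodworth ITD of the generic head
    itd_ind    = (a_ind / c) * (az + sin(az));       % Woodworth ITD of this participant
    shift      = round((itd_ind - itd_gen) * fs);    % change of interaural delay in samples
    hrir_R_ind = circshift(hrir_R, shift);           % shift the contralateral ear (assumes
                                                     % enough leading/trailing zeros in the HRIR)
    hrir_L_ind = hrir_L;                             % ipsilateral ear kept unchanged here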
c. Headtracking
All of the simulations mentioned above are static, meaning there is no interaction between
the participants and the environment. To introduce such interaction into the scene,
head- and eye-tracking equipment will be used to record the spatial orientation of the
participants over time, such as the position and rotation angles of the head. With real-time
processing, the virtual acoustic environments used in the experiments will react to the
participants' movements, taking them one step closer to reality.
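The core of this update can be sketched as follows, assuming a single source that is fixed in the room and an HRTF set sampled on a 5° azimuth grid; get_head_yaw() is a hypothetical stand-in for the tracker interface, and hrir_set_L / hrir_set_R are assumed HRIR matrices.

    % Minimal sketch of the head-tracking update for one source. The source is
    % fixed in room coordinates, so its direction relative to the listener is
    % recomputed from the tracked head yaw before every processing block.
    % get_head_yaw() stands in for the tracker interface and is hypothetical;
    % hrir_set_L / hrir_set_R are [taps x azimuths] matrices of the HRTF set.
    source_az_room = 60;                                    % source azimuth in room coordinates (deg)
    hrir_az_grid   = 0:5:355;                               % azimuths available in the HRTF set (deg)
    head_yaw       = get_head_yaw();                        % current head yaw from the tracker (deg)
    source_az_head = mod(source_az_room - head_yaw, 360);   % source direction relative to the head
    ang_dist = abs(mod(hrir_az_grid - source_az_head + 180, 360) - 180);
    [~, idx] = min(ang_dist);                               % nearest azimuth on the grid
    hrir_L   = hrir_set_L(:, idx);                          % HRIR pair for the current block
    hrir_R   = hrir_set_R(:, idx);
    % The block-wise convolution then continues with the updated pair; in practice,
    % crossfading between blocks is needed to avoid audible switching artefacts.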
d. Individual HRTF
Finally, individually measured HRTFs are included in the workflow. The HRTF measurements
will be carried out in an anechoic chamber and take approximately 10 minutes per participant.
The individual HRTFs will be combined with all of the implementations described above.
The virtual environment thereby becomes highly personalized, providing background
noise that is as authentic as possible.
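One central processing step of such a measurement, sketched under the assumption that an exponential sweep is played from a loudspeaker and recorded by in-ear microphones, is the recovery of the impulse response by regularized deconvolution; the column vectors sweep and rec_L (excitation and left-ear recording) are assumed inputs.

    % Sketch of one processing step of the HRTF measurement: recover the HRIR
    % from a recorded exponential sweep by regularized deconvolution.
    % sweep and rec_L are assumed to be column vectors of the excitation signal
    % and the left in-ear microphone recording.
    N      = 2^nextpow2(length(sweep) + length(rec_L));
    SWEEP  = fft(sweep, N);
    REC_L  = fft(rec_L, N);
    reg    = 1e-6 * max(abs(SWEEP).^2);               % regularization against near-zero division
    HRTF_L = REC_L .* conj(SWEEP) ./ (abs(SWEEP).^2 + reg);
    hrir_L = real(ifft(HRTF_L));
    hrir_L = hrir_L(1:512);                           % keep the initial part containing the HRIR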
2.2 Validation
Section 2.1 lists the types of simulation that will be implemented and the
corresponding background noise stimuli that will be created, which fit the needs of this
project well. To verify the validity of these stimuli, trial runs will be conducted using
all types of background noise together with cognitive tasks designed by the psychologists.
Given the current logistical limitations on running experiments with human participants,
at least one individual HRTF will be measured completely and used in the implementation
of the virtual acoustic environment that is closest to reality.
2.3 Scheduling
A total period of 26 weeks is planned for this thesis. The first 2 weeks will be used
to become familiar with the current state of the project and the required tools. The next 4
weeks are allocated to implementing the generic and individualized HRTFs.
Implementing headtracking will be the most difficult part of the project and
should take 6 weeks. Finalizing the complete virtual environment will take another 6
weeks. Finally, 4 weeks are planned for validation and experiments, and the last 4 weeks
for writing and revising the thesis.