Study of a 3D audio-motor coupling with an electromagnetic motion capture device

Thomas Hoellinger¹, Johanna Robertson¹٬², Malika Auvray¹, Agnès Roby-Brami¹٬², & Sylvain Hanneton¹
¹ Laboratoire de Neurophysique et Physiologie, CNRS UMR 8119, Université Paris 5
² Département de Médecine Physique et Rééducation, Hôpital Raymond Poincaré, Garches, France
INTRODUCTION

Sensory substitution devices convert information normally processed by one sensory modality (e.g., vision) into stimulation provided through another sensory modality (e.g., audition) [1]. Many studies have shown the feasibility of converting visual information into sounds [2]. However, the sensorimotor parameters involved in learning to use sensory substitution devices remain unknown. The aims of the study reported here were twofold: 1) to investigate participants' ability to localize a source within a 3D visual-to-auditory environment, and whether performance depends on the placement of the sensor (hand versus head); 2) to investigate the kinematics of the participants' head and hand while they adapted to this new system.

MATERIAL & METHODS

Participants
10 blindfolded sighted participants (age range: 25-50 years) took part in this experiment. None of the participants suffered from hearing deficits (as assessed by auditory tests) or from orthopaedic or neurological deficits.

Experimental Set-Up
We used an electromagnetic motion capture device (Polhemus) connected to a 3D audio rendering system (OpenAL) that provides auditory feedback of movements. The model works with a listener (sensor) and an audio source. The sound received by the listener depends on its position and orientation with respect to the source: the rendered sound varies in intensity (Eq. 1) and in frequency (computed by the audio device). The source was a "buzzing" sound with significant harmonics between 100 and 2000 Hz.

Task
The participants' aim was to catch a fixed audio source with their hand. The sound was perceived through "virtual ears" (the listener) located either on the hand ("hand mode") or on the head ("head mode"), and was modulated according to the position and orientation of the virtual ears (Fig. 1). For each mode, the experiment consisted of 3 blocks of 9 trials, giving a total of 54 trials per participant.

Sound spatialisation modeling
According to the model used, the intensity decreases and the sound is slightly muffled as the listener sensor moves away from the source. The sound attenuates according to Equation 1 (Eq. 1) below:

G(d) = Dref / (Dref + R · (d − Dref))    (Eq. 1)

where d is the distance to the source, G is the gain applied to the sound, R (= 1.3) is the roll-off factor, and Dref (= 5 cm) is a "reference distance" (the distance that gives a unitary gain). The sound perceived is also influenced by the relative orientation of the listener with respect to the source (Fig. 1), including further binaural effects of sound perception directly modeled and computed by the audio device (SoundMax integrated digital audio by Analog Devices Inc.).

RESULTS

Performances
[Fig. 2, panels A-B: success rate (%) and trial duration (s) across blocks 1-3, hand mode vs. head mode.]
Fig. 2: The participants' level of performance was significantly higher in the "hand mode" (the less "natural" mode) than in the "head mode". In addition, for both modes we observed a block effect, with a significant increase in the success rate (F[2,18] = 5.87, p < .01) and a significant decrease in the trial duration (F[2,18] = 6.26, p < .01), demonstrating an adaptation to the task.

Kinematics
[Fig. 3, panels A-D: right-left displacements and azimuth rotation of hand and head movements over time (s), in hand mode and head mode; the start of the hand movements is marked.]
Fig. 3: We observed a tendency for the oscillatory movements of the hand to decrease when approaching the target. We suggest that this pattern of low-amplitude oscillations close to the target is related to the success of the strategy used to accomplish the task.
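The attenuation of Eq. 1 corresponds to OpenAL's inverse-distance model. The sketch below is illustrative, not the poster's actual implementation: it evaluates Eq. 1 with the stated values (R = 1.3, Dref = 5 cm) and adds a hypothetical azimuth helper for intuition, since the binaural (orientation-dependent) cues were in fact computed by the audio hardware.

```python
import math

D_REF = 5.0  # reference distance (cm); the gain is 1 at this distance
R = 1.3      # roll-off factor

def gain(d):
    """Gain applied to the source at distance d (cm), per Eq. 1.
    OpenAL's clamped variant holds the gain at 1 below D_REF."""
    d = max(d, D_REF)
    return D_REF / (D_REF + R * (d - D_REF))

def azimuth_deg(listener_xy, facing_xy, source_xy):
    """Illustrative assumption only: signed azimuth (deg) of the source
    in the listener's horizontal frame, from the listener position and
    its facing direction (the device computed binaural cues itself)."""
    bearing = math.atan2(source_xy[1] - listener_xy[1],
                         source_xy[0] - listener_xy[0])
    facing = math.atan2(facing_xy[1], facing_xy[0])
    a = math.degrees(bearing - facing)
    return (a + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)

print(gain(5.0))             # 1.0: unitary gain at the reference distance
print(round(gain(50.0), 3))  # 0.079: strongly attenuated 45 cm past D_REF
# listener at origin facing +y, source ahead-right:
print(round(azimuth_deg((0.0, 0.0), (0.0, 1.0), (1.0, 1.0)), 1))  # -45.0
```

With these parameter values the gain drops below half within about 10 cm of the source, which is consistent with the sound becoming a sharp proximity cue near the target.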
DISCUSSION

The study reported here revealed that auditory feedback can be used to guide reaching movements. Participants' performance was higher in the hand mode than in the head mode, and it improved significantly across trials for both modes, as assessed both by an increase in success rate and by a decrease in trial duration. Further analyses of the kinematic differences between hand and head have to be conducted in order to investigate the participants' sensorimotor strategies more precisely. This study thus reveals that healthy participants are able to adapt to a new audio-motor environment and to acquire new sensorimotor strategies. This possibility raises hope for the use of audio feedback for hand guidance in rehabilitation (see Robertson et al. poster).
REFERENCES
[1] Bach-y-Rita, P., & Kercel, S. W. (2003). Sensory substitution and the human-machine interface. Trends in Cognitive Sciences, 7, 541-546.
Fig. 1: Model of the sound perceived by the participant as a function of the mode and the orientation of the listener (located on the hand in "hand mode" and on the head in "head mode").
[2] Auvray, M., Hanneton, S., & O'Regan, J. K. (2007). Learning to perceive with a visuo-auditory substitution system: Localization and object recognition with The Voice. Perception, 36, 416-430.
ACKNOWLEDGEMENTS: This work was supported by a PHRC 2003 grant ("Comprendre et réduire le handicap moteur"). Agnès Roby-Brami is supported by INSERM.
[email protected] www.neurophys.biomedicale.univ-paris5.fr/neuromouv