CaltechAUTHORS
  A Caltech Library Service

Generic HRTFs May be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity

Berger, Christopher C. and Gonzalez-Franco, Mar and Tajadura-Jiménez, Ana and Florencio, Dinei and Zhang, Zhengyou (2018) Generic HRTFs May be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity. Frontiers in Neuroscience, 12 . Art. No. 21. ISSN 1662-453X. http://resolver.caltech.edu/CaltechAUTHORS:20180215-104047810

PDF (Published Version) - Creative Commons Attribution - 956 kB
Video (MPEG) (Video V1) - Supplemental Material - Creative Commons Attribution - 4 MB
Archive (ZIP) (Data Sheet) - Supplemental Material - Creative Commons Attribution - 173 kB

Use this Persistent URL to link to this item: http://resolver.caltech.edu/CaltechAUTHORS:20180215-104047810

Abstract

Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, and spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head-Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus with a spatio-temporally aligned visual counterpart enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems, as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
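The interaural time difference mentioned in the abstract can be illustrated with the classic Woodworth spherical-head approximation. This is a minimal sketch, not the model used in the paper; the head radius, speed of sound, and function name are illustrative assumptions:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (in seconds) for a sound
    source at the given azimuth (0 = straight ahead, 90 = directly to
    one side), using Woodworth's spherical-head model.

    head_radius_m defaults to an assumed average adult head radius.
    """
    theta = math.radians(azimuth_deg)
    # Woodworth's formula: ITD = (a / c) * (sin(theta) + theta)
    return (head_radius_m / speed_of_sound) * (math.sin(theta) + theta)

# A source straight ahead produces no ITD; a source 90 degrees to the
# side produces the maximum ITD, roughly 0.66 ms for this head radius.
print(woodworth_itd(0))
print(woodworth_itd(90))
```

Generic HRTFs bake an average listener's ITD, ILD, and spectral cues into one filter set, which is why their localization accuracy varies across individuals; the study above shows that synchronized audiovisual exposure can partly compensate for that mismatch.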


Item Type: Article
Related URLs:
URL | URL Type | Description
https://doi.org/10.3389/fnins.2018.00021 | DOI | Article
https://www.frontiersin.org/articles/10.3389/fnins.2018.00021/full | Publisher | Article
Additional Information: © 2018 Berger, Gonzalez-Franco, Tajadura-Jiménez, Florencio and Zhang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. Received: 22 August 2017; Accepted: 11 January 2018; Published: 02 February 2018. Author Contributions: MG-F, CB, DF, ZZ: Designed the experiments; MG-F: Developed the rendering apparatus; CB: Prepared and ran the experiments; CB and MG-F: Analyzed the data; CB, MG-F, AT-J, DF, and ZZ: Discussed the data; CB and MG-F: Wrote the paper; AT-J: Provided critical revisions. Funding: Supported by grants RYC-2014-15421 and PSI2016-79004-R ("MAGIC SHOES: Changing sedentary lifestyles by altering mental body-representation using sensory feedback;" AEI/FEDER, UE), Ministerio de Economía, Industria y Competitividad of Spain. Conflict of Interest Statement: The authors report their affiliation to Microsoft, an entity with a financial interest in the subject matter or materials discussed in this manuscript. The authors, however, have conducted the research following scientific research standards, and declare that the current manuscript presents a balanced and unbiased study.
Subject Keywords: virtual reality, HRTF (head related transfer function), spatial audio, auditory perception, auditory training, cross-modal perception, cross-modal plasticity
Record Number: CaltechAUTHORS:20180215-104047810
Persistent URL: http://resolver.caltech.edu/CaltechAUTHORS:20180215-104047810
Official Citation: Berger CC, Gonzalez-Franco M, Tajadura-Jiménez A, Florencio D and Zhang Z (2018) Generic HRTFs May be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity. Front. Neurosci. 12:21. doi: 10.3389/fnins.2018.00021
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 84850
Collection: CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On: 15 Feb 2018 22:55
Last Modified: 15 Feb 2018 22:55
