Auditory Sensory Substitution is Intuitive and Automatic with Texture Stimuli
- Creators
- Stiles, Noelle R. B.
- Shimojo, Shinsuke
Abstract
Millions of people are blind worldwide. Sensory substitution (SS) devices (e.g., The vOICe) can assist the blind by encoding a video stream into a sound pattern, recruiting visual brain areas for auditory analysis via crossmodal interactions and plasticity. SS devices often require extensive training to attain limited functionality. In contrast to conventional attention-intensive SS training that starts with visual primitives (e.g., geometrical shapes), we argue that sensory substitution can be engaged efficiently by using stimuli (such as textures) associated with intrinsic crossmodal mappings. Crossmodal mappings link images with sounds and tactile patterns. We show that intuitive SS sounds can be matched to the correct images by naive sighted participants just as well as by intensively trained participants. This result indicates that existing crossmodal interactions and amodal sensory cortical processing may be as important in the interpretation of patterns by SS as crossmodal plasticity (e.g., the strengthening of existing connections or the formation of new ones), especially at the earlier stages of SS usage. An SS training procedure based on crossmodal mappings could both considerably improve participant performance and shorten training times, thereby enabling SS devices to significantly expand the capabilities of blind users.
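The vOICe encoding referenced above scans an image column by column from left to right, mapping each pixel's vertical position to the pitch of a sine wave (higher in the image = higher pitch) and its brightness to that sine's loudness. The Python sketch below is a simplified, hypothetical illustration of this style of image-to-sound mapping; the function name and all parameter values (sample rate, scan time, frequency range) are illustrative assumptions, not the actual device's settings.

```python
import numpy as np

def encode_image_to_sound(image, sample_rate=44100, scan_time=1.0,
                          f_min=500.0, f_max=5000.0):
    """Sketch of a vOICe-style encoding (illustrative parameters only).

    `image` is a grayscale array (rows x cols, values in [0, 1]).
    Columns are scanned left to right over `scan_time` seconds; each row
    is assigned a sine-wave frequency (top row = highest pitch), and
    pixel brightness sets that sine's amplitude.
    """
    rows, cols = image.shape
    # One frequency per row, highest at the top of the image.
    freqs = np.linspace(f_max, f_min, rows)
    samples_per_col = int(sample_rate * scan_time / cols)
    t = np.arange(samples_per_col) / sample_rate
    audio = []
    for c in range(cols):
        # Sum of sinusoids, each weighted by its pixel's brightness.
        col = image[:, c][:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        audio.append(col.sum(axis=0))
    audio = np.concatenate(audio)
    # Normalize to [-1, 1] to avoid clipping on playback.
    return audio / (np.abs(audio).max() + 1e-12)
```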
Additional Information
© 2015 Macmillan Publishers Limited. This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Received: 15 December 2014. Accepted: 29 September 2015. Published online: 22 October 2015.

We are grateful for a fellowship from the National Science Foundation (NSF) Graduate Research Fellowship Program, and for research grants from the Della Martin Fund for Discoveries in Mental Illness and from the Japan Science and Technology Agency, Core Research for Evolutional Science and Technology. We appreciate Yuqian Zheng's support with training participants on the vOICe device, and Carmel Levitan's and Armand R. Tanguay, Jr.'s comments on the manuscript. We would also like to thank Peter Meijer, Luis Goncalves, and Enrico Di Bernardo of MetaModal LLC for the use of several of the vOICe devices used in this study.

Contributions: N.S. designed experiments, collected and analyzed data, and drafted the paper. S.S. designed experiments, interpreted data, and drafted the paper. The authors declare no competing financial interests.

Attached Files
Published - srep15628.pdf
Supplemental Material - srep15628-s1.pdf
Supplemental Material - srep15628-s2.mov
Additional details
- PMCID
- PMC4615028
- Eprint ID
- 61536
- Resolver ID
- CaltechAUTHORS:20151026-210816125
- Funders
- NSF Graduate Research Fellowship
- Della Martin Fund for Discoveries in Mental Illness
- Japan Science and Technology Agency (JST) Core Research for Evolutional Science and Technology (CREST)
- Created
- 2015-10-27 (from EPrint's datestamp field)
- Updated
- 2021-11-10 (from EPrint's last_modified field)