Projects


I. Immersive Soundscapes

The Immersive Memory Project contributes to the preservation of the tangible and intangible cultural heritage of Peru. Preserving 360° visual scenes and 3D soundscapes in multisensory formats allows future generations to experience what it is like to tour the historical spaces of Peru today.

Our collection currently contains dozens of immersive videos, each approximately 5 minutes long. Recording began in February 2019. So far we have published content from the cities of Arequipa, Cusco, Lima, and Rioja.

We recommend headphones for a better spatial listening experience. We also recommend Google Cardboard viewers or head-mounted displays so that head movements are taken into account while interactively exploring each space.


II. Spatial Acoustics

The Spatial Acoustics Library for MATLAB (SALM) is a collection of MATLAB functions and scripts for spatial acoustic signal processing and spatial audio processing.

The GitHub repository of this project is available at https://github.com/cesardsalvador/SpatialAcousticsLibraryMATLAB.
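
As a taste of the kind of processing the library targets, the sketch below renders a mono signal binaurally by convolving it with a pair of head-related impulse responses (HRIRs). This is plain MATLAB rather than SALM code, and the file names are placeholders; see the repository for the library's actual functions and scripts.

  % Get the library and make it visible to MATLAB
  % (run the clone command in a shell first):
  %   git clone https://github.com/cesardsalvador/SpatialAcousticsLibraryMATLAB.git
  addpath(genpath('SpatialAcousticsLibraryMATLAB'));

  % Illustrative binaural rendering in plain MATLAB (placeholder file names).
  [x, fs]      = audioread('mono_source.wav');      % mono input signal
  [hrir, fsIR] = audioread('hrir_left_right.wav');  % 2-channel HRIR: [left, right]
  assert(fs == fsIR, 'Signal and HRIR sampling rates must match.');

  yL = conv(x(:, 1), hrir(:, 1));                   % left-ear signal
  yR = conv(x(:, 1), hrir(:, 2));                   % right-ear signal

  y = [yL, yR];                                     % stereo binaural output
  y = y / max(abs(y(:)));                           % normalize to avoid clipping
  audiowrite('binaural_output.wav', y, fs);

Listening to the result over headphones, as recommended above, preserves the spatial cues introduced by the HRIRs.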

If you use any script, function, or dataset available in this repository, please cite SALM as follows:

  • C. D. Salvador, “Spatial Acoustics Library for MATLAB (SALM),” GitHub, Feb. 2024.
    DOI: 10.5281/zenodo.10648288


III. Spatial Hearing

This research aims to contribute to the emergence of future cognition-based audio processing methods for acoustic environment recognition. To this end, patterns of invariance are being identified across databases of morphological and acoustical descriptors of the listeners’ external anatomy.

As part of this project, a dataset of near-distance head-related transfer functions (HRTFs) has been constructed. The dataset is publicly available for download at cesardsalvador.github.io/download.html.
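
As a minimal example of working with such measurements, the sketch below estimates the interaural time difference (ITD) of one HRIR pair by cross-correlating the left- and right-ear impulse responses. The variable names, sampling rate, and the assumption that each measurement is available as two impulse-response vectors are placeholders, since the dataset's file format is not described on this page.

  % Estimate the interaural time difference (ITD) of one HRIR pair.
  % hrirLeft and hrirRight are placeholders for the left- and right-ear
  % impulse responses of a single measurement position, sampled at fs Hz.
  fs        = 48000;             % assumed sampling rate
  hrirLeft  = randn(256, 1);     % replace with an impulse response from the dataset
  hrirRight = randn(256, 1);     % replace with an impulse response from the dataset

  % Cross-correlate the two ears; the lag of the peak gives the ITD in samples.
  % (xcorr is provided by the Signal Processing Toolbox.)
  [r, lags]  = xcorr(hrirLeft, hrirRight);
  [~, iPeak] = max(abs(r));
  itdSeconds = lags(iPeak) / fs;

  fprintf('Estimated ITD: %.1f microseconds\n', itdSeconds * 1e6);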

This research also aims to establish mathematical correspondences between the acoustical patterns of invariance and the statistical relations of connectivity in the auditory brain. We are reviewing recent models of structural (anatomical links) and functional (statistical association) connectivity in the bottom-up (stimulus-driven) and top-down (task-oriented) neural pathways of the human auditory brain during sound identification and localization tasks. By interpreting the state of the art in auditory brain modeling from a signal processing perspective, we are integrating recent findings from neurobiology, cognitive science, complex brain networks, and mathematical neuroscience into a comprehensive framework for acoustic environment recognition.

This project was supported by a Grant-in-Aid for Young Scientists (B) from the Japan Society for the Promotion of Science (JSPS), under Grant JP17K12708, 2017-2018.