Experienced researcher in acoustics and lifelong learner of science. My interests include the perception of space through hearing, integrated with other modalities such as vision and touch.

I received my M.Sc. and Ph.D. degrees in 2013 and 2016, respectively, from the Graduate School of Information Sciences (GSIS), Tohoku University, Sendai, Japan. From April 2017 to March 2019, I worked as an Assistant Professor at the Research Institute of Electrical Communication (RIEC), Tohoku University. From August 2019 to January 2021, I worked as Chief Audio Scientist at Silicon Integrated, Wuhan, China. In August 2019, I founded Perception Research in Lima, Peru, to foster education and research on spatial acoustics in the context of multisensory perception, artificial intelligence, and immersive technology. In 2022, our project Memoria Inmersiva received funding from the Ministry of Culture of Peru to continue preserving the immersive soundscapes of historical spaces in Peru. I am also a Full-Time Professor at the Peruvian University of Applied Sciences (UPC), where I teach Physics (NCUK Program) as well as Signals and Systems and Digital Signal Processing (Electrical Engineering Program).

My research lies at the intersection of multimodal signal processing, multisensory perception, and computational neuroscience. It focuses on the challenges that arise in capturing environments flexibly and efficiently, analyzing them, and reconstructing them with high levels of realism and naturalness. I formulate and design the computational tools required, for instance, by machines that aim to match human perceptual performance in environment recognition, by acoustically transparent hearing-aid devices, by three-dimensional audio installations for large audiences, and by personal audio systems for virtual and augmented reality.

I am constantly seeking opportunities for interdisciplinary collaboration toward the synergistic integration of hearing with other modes of perception (e.g., vision and touch). Such integration aims to devise a comprehensive framework for multimodal information processing, which will enable multimodal perception systems for robots, human-computer interaction, enhanced hearing aids, immersive environments, and telepresence systems.