Natural scenes such as bubbling water and rustling trees give rise to distinctive perceptions of sound texture. We developed a method for artificially imparting such textures to speech and music. Inspired by research on manipulating image textures, we adapted that approach so that it can be applied to sound. A computational model that takes our hearing mechanisms into account enabled effective control of sound textures in terms of auditory sensation.
The method is realized in the same framework as image texture manipulation, which suggests that seen and heard textures are processed by similar mechanisms in the brain. From a scientific viewpoint, this study contributes to understanding the mechanisms of sound texture perception by comparing the model's internal states with the brain activity induced by hearing sounds. From an application perspective, the proposed method enables us to speak in a voice that does not actually exist or to play music on an instrument that does not exist.
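The abstract does not specify how the image texture framework carries over to sound. In image texture work (e.g. Gatys et al.), a texture is commonly summarized by correlations between feature channels of a model's activations (a Gram matrix), and synthesis matches those statistics. As a purely illustrative sketch, with made-up "auditory feature" activations standing in for the model's real internal states, such a texture statistic and matching loss could look like:

```python
import numpy as np

def gram_matrix(features):
    """Texture statistic: correlations between feature channels,
    averaged over time. features: (channels, time) activations."""
    _, t = features.shape
    return features @ features.T / t  # (channels, channels)

def texture_loss(feat_a, feat_b):
    """Squared distance between two sounds' texture statistics;
    synthesis would minimize this by gradient descent on the input."""
    return np.sum((gram_matrix(feat_a) - gram_matrix(feat_b)) ** 2)

# Placeholder activations (hypothetical; not the paper's actual model).
rng = np.random.default_rng(0)
feat_water = rng.standard_normal((8, 100))
feat_voice = rng.standard_normal((8, 100))

print(texture_loss(feat_water, feat_water))  # identical textures -> 0.0
print(texture_loss(feat_water, feat_voice))  # different textures -> positive
```

Because the Gram matrix discards temporal order while keeping channel co-activation structure, matching it transfers "texture" without copying the signal itself, which is what makes the same framework plausible for both images and sound.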
Takuya Koumura, Sensory Resonance Research Group, Human Information Science Laboratory