Abstract
Hen-Gen-Tou, invented by CS Labs, is an illusion-based projection mapping technique that adds an impression of motion to real static objects. It produces illusory motion in the projection target by projecting luminance motion signals that selectively drive the motion detectors in the human visual system. However, to successfully "fool" human vision, the amount of motion must be properly adjusted because there is a limit to the shift size that can create the illusion. Here, to automate this laborious adjustment task, we propose an optimization framework that adaptively retargets the motion information in real time based on a perceptual model. The perceptual model predicts the perceived deviation of a projected pattern from the original surface pattern using a computational model of human visual information processing. This technique will broaden the range of applications of Hen-Gen-Tou, including interactive applications.
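To make the retargeting idea concrete, the sketch below shows one possible way to adapt the motion amplitude under a perceptual constraint. It is only an illustrative outline, not the authors' actual method: the function names (`retarget_motion_scale`, `predict_deviation`), the binary-search strategy, and the toy deviation model are all assumptions introduced here, standing in for a perceptual model that is assumed to be monotonic in motion magnitude.

```python
import numpy as np


def retarget_motion_scale(motion_field, predict_deviation, tolerance=1.0,
                          lo=0.0, hi=1.0, iters=20):
    """Find the largest scale factor for the motion field such that the
    predicted perceptual deviation stays within the given tolerance.

    `predict_deviation(scaled_motion)` is a hypothetical stand-in for a
    perceptual model mapping a motion field to a scalar deviation score,
    assumed non-decreasing in motion magnitude.
    """
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if predict_deviation(motion_field * mid) <= tolerance:
            lo = mid  # deviation still acceptable: try larger motion
        else:
            hi = mid  # illusion would break down: shrink the motion
    return lo


if __name__ == "__main__":
    # Toy stand-in for the perceptual model: deviation grows with mean speed.
    rng = np.random.default_rng(0)
    motion = rng.normal(scale=3.0, size=(64, 64, 2))  # per-pixel (dx, dy)
    toy_model = lambda m: float(np.mean(np.linalg.norm(m, axis=-1)))

    scale = retarget_motion_scale(motion, toy_model, tolerance=2.0)
    print(f"retargeted motion scale: {scale:.3f}")
```

In this reading, the retargeted motion is the largest-magnitude signal whose predicted deviation from the original surface pattern remains below a perceptual threshold, which is one way the "limit in shift size" constraint could be automated.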
Taiki Fukiage / Sensory Representation Group, Human Information Science Laboratory
Email: