Oral Presentations
May 30th (Thu)
Head's Talk
13:20-13:50
Processing like people, understanding people, helping people
- Toward a future where humans and AI coexist and co-create -
Takeshi Yamada, Director of NTT Communication Science Laboratories
Special Talk
15:00-16:00
Sports in the future and human potential
Dai Tamesue, Former athlete
Makio Kashino, Fellow, Head of Sports Brain Science Project
May 31st (Fri)
Research Talk
11:00-11:40
See, hear, and learn to describe
- Crossmodal information processing opens the way to smarter AI -
Kunio Kashino, Media Information Laboratory
Research Talk
13:00-13:40
Measuring multiple visual abilities in daily life
- Toward establishing a daily self-check for eye health -
Kazushi Maruya, Human Information Science Laboratory
Research Talk
13:50-14:30
“Like likes like” strategy: search suitable for various viewpoints
- Picture-book search system “Pitarie” with graph-index-based search -
Takashi Hattori, Innovative Communication Laboratory
Exhibition Program
May 30th (Thu) 12:00-17:30
May 31st (Fri) 9:30-16:00
Science of Machine Learning
Learning and finding congestion-free routes
Online shortest path algorithm with binary decision diagrams
Efficient and comfortable air-conditioning control by AI
Environment reproduction and control optimization system
Recovering urban people flow from population data
People flow estimation from spatiotemporal population data
Improving the accuracy of deep learning
Larger capacity output function for deep learning
Which is the cause? Which is the effect? Learn from data!
Causal inference in time series via supervised learning
Forecasting future data for unobserved locations
Tensor factorization for spatio-temporal data analysis
Search suitable for various viewpoints
"Pitarie": Looking for picture books with graph index based search
Science of Communication and Computation
We can transmit messages at the efficiency limit
Error-correcting codes achieving the Shannon limit
New secrets threaten past secrets
Vulnerability assessment of quantum secret sharing
Analyzing the discourse structure behind the text
Hierarchical top-down RST parsing based on neural networks
When children begin to understand hiragana
Emergent literacy development in Japanese
Measuring emotional response and emotion sharing
Quantitative assessment of empathic communication
Touch, enhance, and measure empathy in a crowd
Toward tactile-enhanced empathetic communication in crowds
A robot that understands events in your story
Chat-oriented dialogue system based on event understanding
Voice commands and speech communication in the car
World's best voice capture and recognition technologies
Learning speech recognition from small paired data
Semi-supervised end-to-end training with text-to-speech
Who spoke when & what? How many people were there?
All-neural source separation, counting and diarization model
Changing your voice and speaking style
Voice and prosody conversion with sequence-to-sequence model
Face-to-voice conversion and voice-to-face conversion
Crossmodal voice conversion with deep generative models
Learning unknown objects from speech and vision
Crossmodal audio-visual concept discovery
Neural audio captioning
Generating text describing non-speech audio
Recognizing types and shapes of objects from sound
Crossmodal audio-visual analysis for scene understanding
Science of Humans
Speech of chirping birds, music of bubbling water
Sound texture conversion with an auditory model
Danswing papers
An illusion to give motion impressions to papers
Measuring visual abilities in a delightful manner
Self eye-check system using video games and tablet PCs
How do winners control their mental states?
Physiological states and sports performance in real games
Split-second brain function during baseball hitting
Instantaneous cooperation between vision and action
Designing technologies for mindful inclusion
How sharing caregiving data affects family communication
Real-world motion that the body sees
Distinct visuomotor control revealed by natural statistics
Creating a walking sensation for the seated
A sensation of pseudo-walking expands peripersonal space