| Morning | Wearable computing |
|---|---|
| 07:30 | Welcome |
| 07:35 | Introduction to some of the challenges in the field. Samy Bengio (IDIAP) |
| 08:00 | Learning to Imitate using Wearable Audio-Visual Sensors. Tony Jebara (Columbia University) |
| 08:30 | The Audio Epitome: A New Representation for Modeling and Classifying Auditory Phenomena. Sumit Basu (Microsoft Research) |
| 09:00 | Break |
| 09:20 | Choosing the Right Modalities and Features for Activity Recognition. Tanzeem Choudhury (Intel Research) |
| 09:50 | Multi-Sensor Context Awareness: Potentials and Challenges for Human-Computer Interaction. Bernt Schiele (Darmstadt University of Technology) |
| 10:20 | Information Theoretic Concepts for Multi Modal Signal Processing. Jose Principe (University of Florida) |
| 10:30 | Lunch++ |
| Unfortunately Kristof Van Laerhoven had to cancel. Sumit Basu has agreed to step in. This means the times may change slightly. | |
| Afternoon | Sound and image integration |
| 16:00 | A Framework for Evaluating Multimodal Processing and a Role for Embodied Conversational Agents. Dominic W. Massaro (University of California - Santa Cruz) |
| 16:45 | A State Space Approach to Talking Faces. Tue Lehn-Schiøler (The Technical University of Denmark) |
| 17:10 | Multimodal Meeting Action Modeling Using Multi-Layer HMM Framework. Samy Bengio (IDIAP) |
| 17:35 | Break |
| 17:50 | Audio-Visual Tracking of Multiple Speakers in Meetings. Daniel Gatica-Perez (IDIAP) |
| 18:15 | Audio-Visual Fusion with Streams of Articulatory Features. Kate Saenko (MIT) |
| 18:40 | Do We Have a Road Map? Jan Larsen (The Technical University of Denmark) |
| 19:00 | |