Schedule

Friday, December 17, 2004

Morning: Wearable computing
07:30 Welcome
07:35 Introduction to some of the challenges in the field
Samy Bengio (IDIAP)
08:00 Learning to Imitate using Wearable Audio-Visual Sensors
Tony Jebara (Columbia University)
08:30 The Audio Epitome: A New Representation For Modeling And Classifying Auditory Phenomena
Sumit Basu (Microsoft Research)
09:00 Break
09:20 Choosing the Right Modalities and Features for Activity Recognition
Tanzeem Choudhury (Intel Research)
09:50 Multi-Sensor Context Awareness: Potentials and Challenges for Human-Computer Interaction
Bernt Schiele (Darmstadt University of Technology)
10:20 Information Theoretic Concepts for Multi Modal Signal Processing
Jose Principe (University of Florida)
10:30 Lunch
 
Unfortunately, Kristof Van Laerhoven had to cancel; Sumit Basu has kindly agreed to step in. As a result, the times may change slightly.
 
Afternoon: Sound and image integration
16:00 A Framework for Evaluating Multimodal Processing and a Role for Embodied Conversational Agents
Dominic W. Massaro (University of California - Santa Cruz)
16:45 A State Space Approach to Talking Faces
Tue Lehn-Schiøler (The Technical University of Denmark)
17:10 Multimodal Meeting Action Modeling Using Multi-Layer HMM Framework
Samy Bengio (IDIAP)
17:35 Break
17:50 Audio-Visual Tracking of Multiple Speakers in Meetings
Daniel Gatica-Perez (IDIAP)
18:15 Audio-Visual Fusion with Streams of Articulatory Features
Kate Saenko (MIT)
18:40 Do we have a road map?
Jan Larsen (The Technical University of Denmark)
19:00 Close

Organizers

Tue Lehn-Schiøler
Samy Bengio
Lars Kai Hansen
Stephane Canu
Jan Larsen

Links

NIPS 2004
Workshops at NIPS 2004
MLMI'04
NIPS'03 workshop
Video Lectures