Multimodality modeling
Our goal is to obtain realistic structured models from multimodal (possibly dynamic) data, to be used in AR systems for interaction management, visualization, or annotation. Two projects are described:
Having a realistic augmented head displaying both external and internal articulators could help language learning technology progress. The long-term aim of the project is the acquisition of articulatory data and the design of a 3D+t articulatory model from various image modalities: external articulators are extracted from stereovision data, the tongue shape is acquired through ultrasound imaging, 3D images of all articulators can be obtained with MRI for sustained sounds, and magnetic sensors are used to recover the position of the tongue tip.
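The report does not detail how the modalities are fused into a single model; a common choice for this kind of problem is a linear (PCA) shape model, built for instance from registered MRI shapes and then fitted to the partial point sets delivered by each sensor. The sketch below illustrates that idea with NumPy; all function names and the synthetic data are hypothetical, not the project's actual pipeline.

```python
import numpy as np

def build_pca_model(shapes, n_modes=5):
    """Build a linear shape model from registered training shapes.

    shapes: (n_samples, n_points, 3) vertex coordinates, assumed
            already registered to a common reference frame.
    Returns the mean shape and the first n_modes deformation modes.
    """
    X = shapes.reshape(len(shapes), -1)        # flatten to (n_samples, 3*n_points)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes]                  # modes: (n_modes, 3*n_points)

def fit_to_partial_observation(mean, modes, observed_idx, observed_pts):
    """Recover the full articulator shape from points seen by one modality.

    observed_idx: indices of model vertices visible in this modality
                  (e.g. lip contour from stereovision, tongue profile
                  from ultrasound).
    observed_pts: (len(observed_idx), 3) measured coordinates.
    Solves a linear least-squares problem for the mode weights.
    """
    sel = np.ravel([3 * i + np.arange(3) for i in observed_idx])
    A = modes[:, sel].T                        # (3*n_obs, n_modes)
    b = observed_pts.ravel() - mean[sel]
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return (mean + w @ modes).reshape(-1, 3)   # reconstruct all vertices

# Toy usage with synthetic shapes (real training data would come from MRI).
rng = np.random.default_rng(0)
train = rng.normal(size=(20, 100, 3))
mean, modes = build_pca_model(train)
obs_idx = np.arange(10)                        # e.g. vertices seen by stereovision
obs_pts = train[0, obs_idx] + 0.01 * rng.normal(size=(10, 3))
full_shape = fit_to_partial_observation(mean, modes, obs_idx, obs_pts)
```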
We address the problem of obtaining realistic facial animation within the augmented head application. The main idea of this work is to transfer the dynamics learned on the sparse meshes of the face onto a dense 3D mesh acquired with a scanner.
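The transfer method itself is not specified here; one standard way to propagate a sparse displacement field onto a dense mesh is scattered-data interpolation with radial basis functions. The following sketch assumes both meshes are expressed in the same coordinate frame and that a neutral (rest) pose is available for each; the function name and the kernel width are illustrative.

```python
import numpy as np

def transfer_displacements(sparse_rest, sparse_deformed, dense_rest, sigma=0.2):
    """Propagate motion captured on a sparse marker mesh to a dense scan.

    sparse_rest:     (m, 3) sparse vertices in the neutral pose
    sparse_deformed: (m, 3) same vertices in the current animation frame
    dense_rest:      (n, 3) dense scanner mesh in the neutral pose
    Uses Gaussian radial basis functions to interpolate the sparse
    displacement field over the dense vertices.
    """
    disp = sparse_deformed - sparse_rest                  # (m, 3) displacements
    d2 = ((sparse_rest[:, None] - sparse_rest[None]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))                    # (m, m) RBF kernel
    weights = np.linalg.solve(K + 1e-8 * np.eye(len(K)), disp)
    d2_dense = ((dense_rest[:, None] - sparse_rest[None]) ** 2).sum(-1)
    K_dense = np.exp(-d2_dense / (2 * sigma ** 2))        # (n, m) cross kernel
    return dense_rest + K_dense @ weights                 # deformed dense mesh

# Toy usage with random points; real inputs would come from tracked markers.
rng = np.random.default_rng(1)
sparse_rest = rng.random((30, 3))
sparse_def = sparse_rest + 0.02 * rng.normal(size=(30, 3))
dense_rest = rng.random((500, 3))
dense_def = transfer_displacements(sparse_rest, sparse_def, dense_rest)
```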
The focus of this work is the development of statistical methods that permit the modeling and monitoring of surgical processes, based on signals available in the operating room. The goal is to combine low-level signals with high-level information in order to detect events and trigger pre-defined actions. A main application is the development of context-aware operating rooms, providing adaptive user interfaces, better synchronization within the surgery department, and automatic documentation. The work has been carried out within N. Padoy's PhD thesis, in collaboration with the Technische Universität München.
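As a rough illustration of phase monitoring from low-level signals, the sketch below decodes a surgical phase sequence from binary instrument-usage signals with a left-to-right hidden Markov model and Viterbi decoding; the phase and instrument probabilities are invented for the example, and the actual models used in the thesis may differ.

```python
import numpy as np

def viterbi(obs_loglik, log_trans, log_init):
    """Most likely phase sequence given per-frame observation log-likelihoods.

    obs_loglik: (T, K) log p(signal_t | phase k)
    log_trans:  (K, K) log transition matrix (left-to-right for surgery)
    log_init:   (K,)   log initial phase distribution
    """
    T, K = obs_loglik.shape
    delta = log_init + obs_loglik[0]
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans      # (K, K): from -> to
        backptr[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + obs_loglik[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):               # trace the best path backwards
        path[t] = backptr[t + 1, path[t + 1]]
    return path

# Toy example: 3 phases, 2 binary instrument signals per frame.
# p[k, i] = probability that instrument i is in use during phase k (invented).
p = np.array([[0.9, 0.1],
              [0.5, 0.8],
              [0.1, 0.9]])
signals = np.array([[1, 0], [1, 0], [1, 1], [0, 1], [0, 1]])   # T = 5 frames
obs_loglik = (signals[:, None] * np.log(p)
              + (1 - signals[:, None]) * np.log(1 - p)).sum(-1)
trans = np.array([[0.8, 0.2, 0.0],               # left-to-right: phases are
                  [0.0, 0.8, 0.2],               # never revisited
                  [0.0, 0.0, 1.0]])
with np.errstate(divide="ignore"):               # log(0) -> -inf is intended
    log_trans = np.log(trans)
phases = viterbi(obs_loglik, log_trans, np.log([0.98, 0.01, 0.01]))
print(phases)                                    # most likely phase per frame
```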