This project builds on recent research to construct video-realistic statistical models from training data (i.e. video, both 2D and 3D). These models (termed active appearance models, or AAMs) can be controlled by varying their parameters, and allow new images/videos covering the range of likely appearance to be synthesised. In addition, the models can be used for analysis of new video, in which the model is fitted to the video frames in order to estimate the model parameters. Such models have been used and developed extensively in our lab.
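The synthesis side of such a model can be pictured as a mean appearance plus a linear combination of learned modes of variation. The following is a minimal sketch of that idea, assuming a PCA-based linear appearance model; the random "training" data, the dimensions, and the function name `synthesise` are illustrative assumptions, not the lab's actual code.

```python
import numpy as np

# Hypothetical toy data: each row stands in for a vectorised face image.
rng = np.random.default_rng(0)
training = rng.normal(size=(50, 64))  # 50 "images", 64 "pixels" each

# Linear appearance model: mean plus principal modes of variation.
mean = training.mean(axis=0)
centred = training - mean
# SVD of the centred data gives orthonormal modes (principal components).
_, singular_values, modes = np.linalg.svd(centred, full_matrices=False)
n_modes = 5
P = modes[:n_modes]  # (n_modes, 64): basis spanning likely appearance

def synthesise(params):
    """Generate a new image vector by varying the model parameters."""
    return mean + params @ P

# Varying the parameters sweeps through the range of likely appearance.
new_image = synthesise(np.array([1.0, -0.5, 0.2, 0.0, 0.0]))
```

Setting all parameters to zero reproduces the mean appearance; moving along each mode varies the synthesised image within the span of the training data.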
The project would target only one of the items briefly outlined below, chosen to suit the student's programming skills, experience and interests.
1. Facial dynamics analysis. Our methods (developed in part on past IAESTE projects) allow the dynamics of facial expressions to be modelled and analysed. For example, we have analysed real vs. fake smiles and used expression as a biometric. We would be interested in developing this work further.

2. Real-time facial synthesis. Most of our work uses MATLAB, which does not natively allow real-time processing of our methods. One project could look at (re)implementing some methods, in particular model fitting and synthesis, to run in as near to real time as possible. Our methods could be revised to utilise GPU programming and ported to C/C++ (possibly Java).

3. Mobile device implementation. We would also be interested in porting some code (our facial synthesis routines) to run on mobile devices (e.g. iPhone).

4. 3D facial analysis and rendering. We have a 3D facial video capture system. Our basic tools for facial modelling and analysis of dynamics are similar for 2D and 3D video; however, capturing, preparing and rendering 3D data is far more complex, and we are still building tools for these purposes. One project could look at this area. We would be interested in integrating our models into 3D animation packages such as Blender, Poser, Maya or 3ds Max.
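To make the model-fitting step mentioned in item 2 concrete: for a linear model with orthonormal modes, a baseline parameter estimate for a new image is simply its projection onto the modes (full AAM fitting is an iterative optimisation, e.g. Gauss-Newton, but projection captures the core operation a real-time port would accelerate). This is a minimal sketch under those assumptions; the toy data and the names `fit` and `synthesise` are illustrative, not the lab's actual interface.

```python
import numpy as np

# Toy stand-in for a trained model (see the synthesis sketch above).
rng = np.random.default_rng(1)
training = rng.normal(size=(50, 64))
mean = training.mean(axis=0)
_, _, modes = np.linalg.svd(training - mean, full_matrices=False)
P = modes[:5]  # orthonormal appearance modes

def fit(image):
    """Estimate model parameters for a new image: project it onto the modes."""
    return (image - mean) @ P.T

def synthesise(params):
    """Regenerate an image vector from estimated parameters."""
    return mean + params @ P

# Analysis then re-synthesis: the round trip approximates the input
# within the span of the retained modes.
params = fit(training[0])
reconstruction = synthesise(params)
```

Both `fit` and `synthesise` reduce to dense matrix-vector products, which is why GPU or C/C++ implementations are natural candidates for a real-time version.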