One of the fundamental problems in computational music is the analysis and modelling of performance style. The aims of this project are as follows: (1) investigate which relevant performance features (e.g. variations in dynamics, tempo, timbre) can be reliably extracted from audio recordings; (2) provide a means of exploring which of these features contribute to the perception of performance style; (3) investigate what operations can be applied to styles, e.g. interpolation between styles, style removal, editing, and transfer.

Answers to these questions are central to understanding what makes a good musical performance and how, quantitatively, the performance styles of professional musicians differ. The project requires the integration of data mining, machine learning, and digital signal processing. It builds on initial work we have already done in this area on aligning musical performances and identifying variations in tempo. The code-base (in MATLAB) for this existing work will be made freely available to whoever undertakes the proposed project.
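To give a flavour of the alignment work mentioned above, the following is a minimal sketch of dynamic time warping (DTW), a standard technique for aligning two performances of the same piece so that tempo variations can be compared. It is written in Python for portability (the project's existing code-base is in MATLAB), and the inter-onset-interval sequences are illustrative data, not drawn from that code-base.

```python
def dtw_cost(a, b):
    """Accumulated DTW cost between two feature sequences,
    using absolute difference as the local distance."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] holds the minimal cost of aligning a[:i] with b[:j].
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Allowed steps: insertion, deletion, or diagonal match.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Two hypothetical inter-onset-interval sequences (in seconds):
# the second performance is played slightly slower throughout.
perf_a = [0.50, 0.50, 0.25, 0.25, 0.50]
perf_b = [0.55, 0.55, 0.28, 0.27, 0.55]
print(round(dtw_cost(perf_a, perf_b), 2))  # → 0.2
```

In practice the sequences would be feature vectors extracted from audio (chroma, onsets, energy), and the full backtracked warping path, rather than just the accumulated cost, is what reveals where one performer speeds up or slows down relative to another.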