In security-sensitive applications, the success of machine learning depends on a thorough vetting of its resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples.
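To make the scenario concrete, below is a minimal sketch of test-time evasion against a toy bag-of-words spam filter. The tiny training corpus, the word substitutions, and the choice of logistic regression are illustrative assumptions for this brief, not the gradient-based attack of Biggio et al.

#+BEGIN_SRC python
# Minimal sketch of test-time evasion against a toy spam filter.
# The training data and the reworded message are illustrative
# assumptions; Biggio et al. study gradient-based manipulation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "cheap pills buy now",      # spam
    "win free money now",       # spam
    "meeting agenda attached",  # ham
    "lunch at noon tomorrow",   # ham
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_texts)
clf = LogisticRegression().fit(X, labels)

spam = "buy cheap pills now"
# The adversary rewords spam-indicative tokens into synonyms the
# vectorizer has never seen, zeroing out the spam-weighted features.
evasive = "purchase inexpensive meds before tomorrow"

print(clf.predict(vectorizer.transform([spam])))     # expected: [1] (caught)
print(clf.predict(vectorizer.transform([evasive])))  # expected: [0] (evades)
#+END_SRC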
* Project Goals
1. Reviewing the Adversarial Machine Learning library https://github.com/vu-aml/adlib and gaining an understanding of the scikit-learn machine learning library http://scikit-learn.org/stable/
2. Identifying and analysing data related to spam email filtering, and exploring the range of possible attacks using the scikit-learn implementations and the AML library (a minimal baseline sketch follows this list)
3. Producing documentation and reflective write-ups on the achievements
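As a concrete starting point for goals 1 and 2, the sketch below shows the kind of scikit-learn baseline the project would begin from: a bag-of-words spam/ham classifier. The inline mini-corpus is an assumed placeholder for a real spam email dataset, which identifying is itself part of goal 2.

#+BEGIN_SRC python
# Baseline spam/ham classifier sketch with scikit-learn.
# The inline corpus is a placeholder; goal 2 is to identify a real
# spam email dataset to substitute here.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

texts = [
    "win a free prize now", "cheap meds online",
    "limited offer act now", "claim your reward today",
    "minutes from today's meeting", "see you at lunch",
    "draft report attached", "schedule for next week",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = spam, 0 = ham

# A Pipeline keeps feature extraction inside each cross-validation
# fold, so the vectorizer never sees the held-out texts.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
scores = cross_val_score(model, texts, labels, cv=4)
print("mean accuracy:", scores.mean())
#+END_SRC

The same pipeline object is a natural target for the evasion experiments in goal 2, since it exposes a single fit/predict interface to attack.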
* Requirements
- Python programming
- Desirable: a basic understanding of machine learning
- Software developed must be released under the MIT Licence (https://opensource.org/licenses/MIT)
- Documentation produced must be released under the CC-BY Licence (https://creativecommons.org/licenses/by/4.0/)
* References
Biggio B, Corona I, Maiorca D, et al. Evasion Attacks against Machine Learning at Test Time. In: Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2013). Springer, Berlin, Heidelberg; 2013:387-402. doi:10.1007/978-3-642-40994-3_25