The spread of misinformation via social networks has become a major problem in societies around the world in recent years, affecting areas such as political elections and public health. As a result, many agencies are looking to technologies - including artificial intelligence (AI), machine learning (ML), data science and cybersecurity - to help manage the impact of rumours and false information. This is an active area of research in Cardiff University's Crime & Security Research Institute (CSRI), and we are looking for project students to help with our efforts this summer.
The focus of this project is to employ data science techniques to track the spread of misinformation on social network platforms. Multiple projects are possible here, depending on your interests. Some options are listed below:

o The project may focus on a single platform (e.g., Twitter or Reddit) or on multiple platforms, e.g., to examine how misinformation spreads between platforms; it could also consider English-only or multilingual social media content.

o The project may focus on text-based content, using natural language processing (NLP) techniques, e.g., building classifiers for particular types of content such as specific conspiracy theories (see the sketch after this list); on image-based content, using image processing techniques, e.g., detecting features of common memes; or on multimodal content, e.g., text plus images.

o The project could focus on detecting the spread of misinformation, or on trying to predict its future spread.
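To make the NLP option above a little more concrete, here is a minimal sketch of a baseline text classifier built with scikit-learn (TF-IDF features plus logistic regression). The example posts, labels and settings are hypothetical placeholders; the actual datasets, label schemes and modelling choices would be agreed during the project.

```python
# A minimal sketch of the text-classification option: TF-IDF features feeding a
# logistic regression baseline for flagging posts related to a particular
# conspiracy theory. The posts and labels below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Placeholder labelled posts (1 = misinformation-related, 0 = other).
texts = [
    "5G towers are spreading the virus, share before they delete this",
    "Local council confirms a new 5G mast is planned for the high street",
    "Vaccines contain microchips to track the population",
    "New study reports vaccine efficacy figures for the latest variant",
]
labels = [1, 0, 1, 0]

# Stratified split so both classes appear in the training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=42, stratify=labels
)

# Word and bigram TF-IDF features with a linear classifier: a common baseline
# before moving to more sophisticated NLP models.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test), zero_division=0))
```

A simple baseline like this is usually only a starting point; a project could go on to compare it against more sophisticated approaches such as transformer-based language models.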
Students undertaking this project will be supervised as part of a group tackling misinformation with AI and data analytics techniques, and will be supported by the research team in the Crime & Security Research Institute and by our Sentinel software platform for social media data collection and processing. Datasets will be made available before the project starts, and guidance will be provided on collecting additional data. You will be expected to present your work regularly to the CSRI team and will receive regular feedback.
Note that aspects of this topic are negotiable: if parts of the proposal interest you but you would prefer a slightly different focus, please get in touch so we can discuss it. For example, the technical approach may appeal to you, but you may prefer to study a different kind of social media content (e.g., tracking news events rather than misinformation) or to apply a technology not mentioned above (e.g., graph-based social network analysis, or 'big data' approaches for large-scale data processing).
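As an illustration of the graph-based direction mentioned above, the sketch below models "user A shared a post originating from user B" as edges in a directed graph and computes simple spread and centrality measures with networkx. The accounts and share edges are hypothetical placeholders; in practice they would be derived from collected platform data.

```python
# A minimal sketch of the graph-based option: model "user A shared a post
# originating from user B" as a directed edge A -> B and look at simple
# spread and centrality measures. The accounts and edges are hypothetical.
import networkx as nx

# (sharer, account_shared_from) pairs; placeholders standing in for real data.
share_edges = [
    ("alice", "origin"), ("bob", "origin"), ("carol", "alice"),
    ("dave", "alice"), ("erin", "bob"), ("frank", "carol"),
]

G = nx.DiGraph()
G.add_edges_from(share_edges)

# Accounts that directly or indirectly re-shared content from "origin"
# (every node with a path towards "origin" in this edge direction).
reach = nx.ancestors(G, "origin")

# Simple centrality measures as a first look at influential spreaders:
# in-degree counts direct re-shares; PageRank also credits indirect spread.
in_degree = dict(G.in_degree())
pagerank = nx.pagerank(G)

print(f"accounts reached by 'origin': {len(reach)}")
print("most re-shared accounts:", sorted(in_degree, key=in_degree.get, reverse=True)[:3])
print("highest PageRank:", max(pagerank, key=pagerank.get))
```

On real data, a graph like this could be built per post or per URL and combined with timestamps to study how quickly content cascades, including across platforms.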
FURTHER READING
CSRI: https://crimeandsecurity.org/feed/
Misinformation: http://upsi.org.uk/news/2018/4/23/briefing-paper-digital-influence-engineering
Our Sentinel platform: http://orca.cf.ac.uk/105513
Open source intelligence processing: https://orca.cf.ac.uk/89374/