Patching LLM Chatbot Vulnerabilities in Web Applications


Christopher Petrov

15/05/2025

Supervised by Andrew Hood; Moderated by Jandson Santos Ribeiro Santos

This project explores methods of patching vulnerabilities in LLM chatbot integrations in a way that is accessible to web developers, particularly through monitoring and blocking malicious user input and poor LLM output. The investigation includes testing patching techniques across different scenarios (e.g. conversational chats, tool calling via LLM) and across different models to assess flexibility. A combination of fuzzing and external user testing will be used to evaluate the effectiveness of these techniques and to gather additional insight into the field of LLM cybersecurity.
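The monitoring-and-blocking approach described above could be sketched as a pair of guard functions wrapped around the LLM call, one screening user input before it reaches the model and one screening model output before it reaches the page. This is an illustrative sketch only; the function names, patterns, and structure are assumptions, not taken from the project itself, and a real deployment would rely on a maintained classifier rather than a static deny-list:

```python
import re

# Hypothetical deny-list of common prompt-injection phrasings (illustrative only).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal (your|the) system prompt", re.IGNORECASE),
]


def screen_user_input(text: str) -> bool:
    """Return True if the message looks safe to forward to the LLM."""
    return not any(p.search(text) for p in INJECTION_PATTERNS)


def screen_llm_output(text: str, secrets: list[str]) -> bool:
    """Return True if the model's reply leaks none of the given secret strings."""
    return not any(secret in text for secret in secrets)
```

In a web application these checks would sit in middleware between the chat endpoint and the model API, rejecting or logging any message that fails either screen.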


Initial Plan (03/02/2025) [Zip Archive]

Final Report (15/05/2025) [Zip Archive]

Publication Form