
Safeguarding Against AI-Driven Cyber Threats: Mitigating Risks Posed by Large Language Models (LLMs) [INDUSTRY PROJECT]


Jake Palmer

04/09/2025

Supervised by Eirini S Anthi; Moderated by Padraig Corcoran

This project explores the dual-use nature of Large Language Models (LLMs) in cybersecurity, focusing on their potential both to enhance security measures and to be exploited for cyber-attacks. With the rapid advancement and adoption of LLMs across a wide range of applications, there is growing concern about their misuse, including the generation of sophisticated phishing emails, the crafting of malware, and the automation of social engineering attacks. This research aims to develop strategies and tools to mitigate the risks posed by LLMs, ensuring their safe and secure deployment in cyber environments.

Main Objectives:

Objective 1: Analyse and Identify AI-Driven Cyber Threats

Perform a thorough analysis to identify how Large Language Models (LLMs) can be exploited for cyber-attacks, such as generating phishing content or malware. Assess the current cybersecurity landscape to pinpoint vulnerabilities to AI-driven threats.

Objective 2: Develop AI-Based Detection and Defence Mechanisms

Create advanced detection tools using AI and NLP techniques to identify LLM-generated content and cyber threats. Leverage LLMs to enhance cybersecurity defences, including training simulations and threat intelligence analysis.
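As a minimal illustrative sketch of the kind of signal such detection tools might start from, the snippet below extracts two simple stylometric features (lexical diversity and average sentence length) and applies a toy decision rule. The function names and the threshold value are assumptions introduced here for illustration only; practical detectors use model-based methods such as perplexity scoring or fine-tuned classifiers rather than a single hand-set threshold.

```python
import re

def stylometric_features(text: str) -> dict:
    """Extract simple stylometric signals sometimes used when screening
    for machine-generated text (illustrative only, not a real detector)."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        # Vocabulary diversity: machine-generated prose is often reported
        # to repeat vocabulary more than human prose of similar length.
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        # Crude burstiness proxy: human writing tends to vary sentence length.
        "avg_sentence_length": len(words) / max(len(sentences), 1),
    }

def looks_generated(text: str, ttr_threshold: float = 0.5) -> bool:
    """Toy decision rule: flag text with unusually low lexical diversity.
    The threshold is an arbitrary placeholder, not an empirical value."""
    return stylometric_features(text)["type_token_ratio"] < ttr_threshold
```

For example, highly repetitive text such as "The cat sat. The cat sat. The cat sat." yields a low type-token ratio and is flagged, while a varied sentence is not; a real system would combine many such features with learned models.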

Objective 3: Formulate Ethical Guidelines and Raise Awareness

Establish ethical guidelines for the responsible use of LLMs in cybersecurity. Conduct educational initiatives to inform the public, industry, and policymakers about the risks and benefits of LLMs, promoting informed and secure AI deployment.

