MelAi: LibMelee AI via Deep Reinforcement Learning

Finn Trowell


Supervised by Frank C Langbein; Moderated by Sylwia Polberg

Super Smash Bros. Melee is a platform fighting game in which two or more players fight across 29 unique stages with 26 different characters. It has grown into a long-standing competitive title with an established Esports scene, thanks to its uniquely precise analog controls. These controls let players perform frame-perfect actions at blistering speeds, which makes the game well suited for artificial intelligence to exploit.

The goal of this project is to create an artificially intelligent agent that plays Super Smash Bros. Melee at or above human level.

Melee was created to run on the Nintendo GameCube, so it must be emulated in order to interface with the game. The emulator needs to expose the current game state as input to the CNN, and it must accept outputs from a controller agent. The reward characteristics for the AI will need to be refined, given Melee's ambiguity over what counts as "good" and "bad" play. This ties into choosing how the AI will be trained, and whether it will learn from replay data or live data.
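One way to feed the game state into a network is to flatten a per-player snapshot into a fixed-length feature vector. The sketch below is illustrative only: the field names and the choice of features are hypothetical, not the project's actual encoding.

```python
from dataclasses import dataclass

@dataclass
class PlayerState:
    # Hypothetical subset of the per-player data the emulator exposes.
    x: float            # horizontal position on the stage
    y: float            # vertical position
    percent: float      # damage percentage
    stocks: int         # lives remaining
    facing_right: bool  # which way the character faces

def to_features(p1: PlayerState, p2: PlayerState) -> list[float]:
    """Flatten both players' states into one input vector for the network."""
    def encode(p: PlayerState) -> list[float]:
        return [p.x, p.y, p.percent, float(p.stocks),
                1.0 if p.facing_right else 0.0]
    return encode(p1) + encode(p2)
```

A vector like this (rather than raw pixels) keeps the network input small and frame-rate friendly; the trade-off is that any information not explicitly encoded is invisible to the agent.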

Dolphin is a Nintendo GameCube and Wii emulator, so it can be used to emulate Melee. A fork of this project named "Faster Melee" will be used for the emulation, given its optimisations and speed features. Dolphin will be used in conjunction with a library named "LibMelee", which allows Python code to interface with Melee while also extracting useful data such as player position. The reward characteristics of Melee are defined by opponent percentage (health), player percentage, and how many stocks (lives) a character has left; combining these three factors will form the foundation of the reward function. So far, CNNs combined with generational algorithms are the main system I have started implementing for training this AI; however, an actor-critic approach, and an approach using replay data rather than live data, both need further research.
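A minimal sketch of how those three factors could combine into a per-frame reward, assuming hypothetical weights and state tuples (the real function would be tuned during training, and edge cases such as the percentage resetting to zero when a stock is lost would need extra handling):

```python
def reward(prev: tuple, curr: tuple,
           w_damage: float = 0.01, w_stock: float = 1.0) -> float:
    """Reward = damage dealt minus damage taken, plus stock swings.

    `prev` and `curr` are hypothetical state snapshots from consecutive
    frames: (player_percent, opponent_percent, player_stocks,
    opponent_stocks). Note: a real implementation must handle the
    percentage resetting to 0 when a stock is lost.
    """
    damage_dealt = curr[1] - prev[1]   # opponent percent increased
    damage_taken = curr[0] - prev[0]   # our percent increased
    stocks_lost = prev[2] - curr[2]    # we lost a life
    stocks_taken = prev[3] - curr[3]   # opponent lost a life
    return (w_damage * (damage_dealt - damage_taken)
            + w_stock * (stocks_taken - stocks_lost))
```

Weighting stocks far above raw damage reflects that taking a life matters more than chip damage; the exact ratio is one of the characteristics that would be refined experimentally.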

Initial Plan (06/02/2023) [Zip Archive]

Final Report (12/05/2023) [Zip Archive]

Publication Form