
Doom-playing AI via Deep Reinforcement Learning


Hugo Huang

19/05/2023

Supervised by Frank C Langbein; Moderated by Fernando Alva Manchego

Training artificial intelligence (AI) agents to play games directly from high-dimensional sensory inputs (such as visuals or audio) was widely considered one of the greatest challenges of reinforcement learning (RL) until the Deep Q-Network (DQN) was proposed. Even today, training agents to play first-person shooter (FPS) games like Doom is a non-trivial task, and in this project I will attempt to create an AI agent that plays Doom using a Deep Q-Learning-based approach. The goal is to use the VizDoom platform and train my AI agents to play most of its provided scenarios (customized maps for RL) at human level; if possible, the agent will also aim to play the first map of Doom, E1M1, at a close-to-human level. Instead of relying solely on high-dimensional sensory inputs, my model will use both the pixels on the screen and the depth buffer (essentially RGB-D images) as input, with the depth buffer supplied to compensate for the model's lack of depth-completion ability.
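As a concrete illustration of the RGB-D input described above, the sketch below shows one way the screen pixels and depth buffer could be combined into a single observation using the VizDoom Python API. The scenario config, action set, and frame-skip value are illustrative assumptions, not the project's final settings, and the random action stands in for the Q-network's policy.

```python
import numpy as np
import vizdoom as vzd

def make_rgbd(state):
    """Stack the RGB screen buffer and the depth buffer into one H x W x 4 array."""
    rgb = state.screen_buffer               # H x W x 3 when ScreenFormat.RGB24 is used
    depth = state.depth_buffer[..., None]   # H x W  -> H x W x 1
    return np.concatenate([rgb, depth], axis=-1).astype(np.float32) / 255.0

game = vzd.DoomGame()
game.load_config("basic.cfg")               # assumed path to a VizDoom scenario config
game.set_screen_format(vzd.ScreenFormat.RGB24)
game.set_depth_buffer_enabled(True)         # expose the depth buffer alongside the pixels
game.set_window_visible(False)
game.init()

# basic.cfg exposes three buttons: MOVE_LEFT, MOVE_RIGHT, ATTACK
actions = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

game.new_episode()
while not game.is_episode_finished():
    obs = make_rgbd(game.get_state())       # RGB-D observation that would feed the Q-network
    action = actions[np.random.randint(len(actions))]  # placeholder for an epsilon-greedy policy
    reward = game.make_action(action, 4)    # frame skip of 4 (assumption)

game.close()
```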


Initial Plan (05/02/2023) [Zip Archive]

Final Report (19/05/2023) [Zip Archive]
