
Super-resolution Enhancement of Magnetic Resonance Images using Deep Learning


Theodor Baur

30/08/2025

Supervised by Frank C Langbein; Moderated by Yazmin Ibanez Garcia

Magnetic Resonance Imaging (MRI) is an indispensable tool in modern medical diagnosis, providing detailed anatomical and functional information. However, inherent limitations in MRI hardware, clinical workflow demands for faster acquisition times, and energy and cost considerations often result in images acquired at lower resolutions (e.g., using 1.5T scanners or fast-scanning protocols). Lower resolution can compromise diagnostic accuracy, particularly for subtle pathologies or fine anatomical details. This project aims to develop and validate a deep learning-based super-resolution algorithm to computationally enhance the resolution of MRI images, effectively reconstructing high-resolution (HR) images comparable to those from 3T scans or full-resolution acquisitions, starting from lower-resolution (LR) inputs. This technology has the potential to improve diagnostic confidence, reduce the need for repeat scans, and broaden access to high-quality MRI in resource-constrained environments.

This project will focus on developing a robust and effective MRI super-resolution algorithm using machine learning. It will use deep learning to learn complex mappings from low-resolution MRI data to high-resolution representations. A key aspect of the project is the creation of a realistic training dataset. This will be achieved by simulating lower-resolution MRI images from existing high-resolution datasets. The simulation process will involve manipulating the k-space representation of HR images to mimic the effects of lower-resolution acquisition, considering factors such as k-space undersampling and imaging sequence characteristics. This simulated LR-HR image dataset will then be used to train and rigorously validate a chosen super-resolution machine learning model. The project will encompass three core stages: (1) developing a realistic k-space simulation pipeline for generating LR-HR image pairs; (2) designing, implementing, and training a deep learning-based super-resolution technique; and (3) comprehensive evaluation and iterative refinement of the super-resolution performance.
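As a starting point, the simplest k-space manipulation is truncation: discarding the high spatial frequencies of an HR image, which mimics acquiring fewer phase-encode steps or a shorter readout. The sketch below (a minimal NumPy illustration, not the project's actual pipeline, which would also model undersampling patterns and sequence effects) generates one LR image from an HR slice this way:

```python
import numpy as np

def simulate_low_res(hr_image: np.ndarray, factor: int = 2) -> np.ndarray:
    """Simulate a lower-resolution acquisition by truncating k-space.

    Transforms the HR slice to k-space, keeps only the central
    1/factor portion along each axis (discarding high spatial
    frequencies), and reconstructs the image at the reduced matrix size.
    """
    # Forward FFT to k-space, with the DC component shifted to the centre
    kspace = np.fft.fftshift(np.fft.fft2(hr_image))
    ny, nx = kspace.shape
    cy, cx = ny // (2 * factor), nx // (2 * factor)
    # Keep only the central block of k-space (low spatial frequencies)
    centre = kspace[ny // 2 - cy: ny // 2 + cy, nx // 2 - cx: nx // 2 + cx]
    # Inverse FFT back to image space; take the magnitude and rescale for
    # the smaller matrix size so intensities remain comparable to the HR input
    lr = np.abs(np.fft.ifft2(np.fft.ifftshift(centre))) / factor**2
    return lr
```

Pairing each HR slice with its simulated LR counterpart yields the LR-HR training set; more realistic variants would add noise and non-uniform k-space sampling masks before the inverse transform.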

Several deep learning architectures are well-suited for image super-resolution and could be explored in this project. Convolutional Neural Networks (CNNs), including architectures such as SRCNN, EDSR, or RRDB, are a strong starting point. Generative Adversarial Networks (GANs) could also be investigated for generating perceptually realistic HR images, potentially using loss functions that combine pixel-wise similarity with perceptual metrics. More recent approaches using transformers, which have shown promise in various image tasks, could also be considered for their ability to capture long-range dependencies in MRI data. The project will involve careful selection of a suitable architecture, optimization of network parameters, and experimentation with different loss functions and training strategies to achieve optimal super-resolution performance. Performance will be evaluated using quantitative metrics such as Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM). Further refinement may involve incorporating domain-specific knowledge of MRI physics and image characteristics into the network design or training process. For an advanced solution, note that 1.5T and 3T MRI differ in more than spatial resolution (for example, in signal-to-noise ratio and tissue contrast), so the task is not simply a matter of upsampling images.
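The two evaluation metrics mentioned above are straightforward to compute. The sketch below gives PSNR and a simplified single-window SSIM in NumPy (the standard SSIM is computed over local sliding windows and averaged, e.g. via `skimage.metrics.structural_similarity`; the global version here is only an illustration of the formula):

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, data_range: float = 1.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range**2 / mse)

def ssim_global(ref: np.ndarray, test: np.ndarray, data_range: float = 1.0) -> float:
    """Simplified SSIM using a single window over the whole image.

    Uses the standard stabilisation constants c1 = (0.01 L)^2 and
    c2 = (0.03 L)^2, where L is the dynamic range of the data.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov = ((ref - mu_x) * (test - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))
```

For reporting results, both metrics would be averaged over a held-out test set of LR-HR pairs; since PSNR and SSIM correlate imperfectly with perceived quality, a GAN-based variant would additionally be assessed with perceptual metrics.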

For this project, a solid understanding of MRI techniques and imaging protocols is highly beneficial. Strong mathematical and programming skills are essential, particularly in machine learning and image processing. Proficiency in Python or Julia and experience with deep learning frameworks such as TensorFlow, PyTorch, JAX, or Flux/Lux are required. Access to computational resources with GPUs is necessary for training deep learning models, and university facilities are available for this purpose. The code developed for this project is expecte


Final Report (30/08/2025) [Zip Archive]
