Image Super-Resolution

Jiahao Li


Supervised by Xianfang Sun; Moderated by Padraig Corcoran

Image Super-Resolution (SR) is a fundamental class of image processing techniques in computer vision that recovers a high-resolution (HR) image from a low-resolution (LR) one, improving the visual perception and enhancing the details of the image. Deep-learning-based methods have shown impressive performance on SR tasks. However, most previous single-image SR (SISR) methods treat SR at different scale factors as independent tasks and train a specific model for each scale factor, which is computationally inefficient. This work proposes the Hybrid Attention Transformer based Neural Operator for Image Super-resolution (HAT-SRNO), a deep network that targets detail restoration at arbitrary upsampling scales. First, the SR model is divided into a feature-extraction part and an upsampling part. The feature-extraction module, the Hybrid Attention Transformer (HAT), activates more pixels for reconstruction by combining channel attention and self-attention. Meanwhile, by treating the LR-HR image pairs as continuous functions approximated with different grid sizes, the Super-resolution Neural Operator (SRNO) can learn the map between different levels of discretisation of continuous functions. Experiments show that HAT-SRNO attains better performance at upsampling scale ×2 on the Set5 and DIV2K datasets in quantitative evaluation, and produces more natural images for patches with repeated similar patterns in visual evaluation.
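The two-stage design described above (a feature extractor followed by an arbitrary-scale upsampler) can be sketched in plain NumPy. This is only an illustrative stand-in, not the report's implementation: the learned HAT backbone is replaced by an identity map and the SRNO upsampler by bilinear sampling on a continuous coordinate grid, which is enough to show how a single module can serve any scale factor.

```python
import numpy as np

def extract_features(lr):
    # Stand-in for the HAT backbone: returns a feature map with the
    # same spatial size as the LR input (here, just the image itself).
    return lr

def upsample_any_scale(feat, scale):
    # Stand-in for the SRNO upsampler: queries the feature map at a
    # continuous coordinate grid of arbitrary density (bilinear here),
    # so the same code handles any (even non-integer) scale factor.
    h, w = feat.shape
    H, W = int(round(h * scale)), int(round(w * scale))
    ys = np.linspace(0, h - 1, H)          # continuous row coordinates
    xs = np.linspace(0, w - 1, W)          # continuous column coordinates
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = feat[y0][:, x0] * (1 - wx) + feat[y0][:, x1] * wx
    bot = feat[y1][:, x0] * (1 - wx) + feat[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

lr = np.arange(16, dtype=float).reshape(4, 4)
for s in (2.0, 3.0, 2.5):                  # one model, any scale factor
    hr = upsample_any_scale(extract_features(lr), s)
    print(s, hr.shape)
```

In the actual HAT-SRNO network the query at each continuous coordinate is produced by a learned operator acting on deep features rather than by fixed bilinear weights, but the interface is the same: the feature-extraction stage is scale-agnostic, and only the final sampling stage depends on the requested grid size.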

Final Report (12/09/2023) [Zip Archive]
