No-Reference Image Quality Assessment using Deep Convolutional Neural Networks

Abstract

No-reference image quality assessment (NR-IQA) is a challenging field of research that aims to predict the quality of an image as perceived by the human visual system (HVS), without making use of a reference image. Many NR-IQA methods have been proposed over time, but recently proposed convolutional neural network (CNN) based approaches, through their powerful feature learning capabilities, have outperformed all previously existing NR-IQA methods. However, these CNN-based approaches rest on the perceptually incorrect assumption that distortions are distributed homogeneously across an image: they operate on very small image portions and treat all of them as having identical perceptual quality, whereas in reality different parts of an image, depending on their structure and content, can bear different perceptual quality. Furthermore, these approaches employ shallow CNN architectures, which prevents them from exploiting the advantages offered by deeper architectures. To address these limitations, we conducted a design space exploration of CNNs and propose a CNN design suited to the NR-IQA task that operates on larger image portions and employs a deeper architecture. The proposed design achieves state-of-the-art performance on the LIVE and TID datasets. We further provide informative visualizations of the features learned by the proposed CNN, which shed light on its internal workings and promote further understanding of the nature of image quality.
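
To make the idea of "larger image portions and a deeper architecture" concrete, the following is a minimal PyTorch sketch of a patch-based NR-IQA regressor. It is not the thesis's exact architecture: the layer widths, depth, patch size (112×112), and the class name DeepNRIQA are illustrative assumptions. The image-level score is obtained by averaging the predictions over sampled patches, which is a common aggregation choice in patch-based NR-IQA.

```python
# Minimal sketch (PyTorch, hypothetical layer sizes): a deeper patch-based
# NR-IQA regressor operating on larger image portions than earlier shallow
# designs, with patch scores averaged into an image-level prediction.
import torch
import torch.nn as nn


class DeepNRIQA(nn.Module):
    """Deeper CNN mapping an image patch to a scalar quality score."""

    def __init__(self, in_channels: int = 3):
        super().__init__()

        def block(c_in, c_out):
            # Two conv-BN-ReLU layers followed by downsampling; repeating
            # this block builds depth. Exact widths here are illustrative.
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            )

        self.features = nn.Sequential(
            block(in_channels, 32),
            block(32, 64),
            block(64, 128),
            block(128, 256),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(256, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 1),  # predicted quality score for the patch
        )

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (N, C, H, W), with H = W = 112 in this sketch, i.e.
        # considerably larger portions than the tiny crops used by earlier
        # shallow patch-based approaches.
        return self.head(self.features(patches))


if __name__ == "__main__":
    model = DeepNRIQA()
    # Image-level quality: average the predictions over sampled patches.
    patches = torch.rand(8, 3, 112, 112)  # 8 patches from one image
    image_score = model(patches).mean()
    print(float(image_score))
```

During training, such a model would typically regress patch predictions against the subjective quality score (e.g., MOS or DMOS) of the source image, so the averaging step at test time mirrors the training target.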