Pan Y, Zhou W, Ye L, Yu L. HFFNet: hierarchical feature fusion network for blind binocular image quality prediction. Applied Optics 2022;61:7602-7607. [PMID: 36256359 DOI: 10.1364/ao.465349]
[Received: 05/30/2022] [Accepted: 08/17/2022]
Abstract
Compared with monocular images, scene discrepancies between the left- and right-view images pose additional challenges for visual quality prediction in binocular images. Herein, we propose a hierarchical feature fusion network (HFFNet) for blind binocular image quality prediction that handles such scene discrepancies and uses multilevel fusion features from the left- and right-view images to reflect distortions in binocular images. Specifically, a feature extraction network based on MobileNetV2 extracts multilevel features from the distorted binocular images; low-, middle-, and high-level binocular fusion features are then obtained by fusing the corresponding left and right monocular features at each level with a feature gate module; further, three feature enhancement modules enrich the information in the extracted features at the different levels. Finally, the feature maps obtained from the high-, middle-, and low-level fusion features are merged by a three-input feature fusion module. To the best of our knowledge, the proposed HFFNet provides better results than existing methods on two benchmark datasets.
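The abstract names the pipeline stages (MobileNetV2 backbone, per-level feature gate, feature enhancement, three-input fusion) but not their internal designs. The PyTorch sketch below illustrates one plausible reading of that pipeline; the GateFuse gating scheme, the enhancement block, the backbone split points, and the regression head are all illustrative assumptions, not the authors' implementation.

    # A minimal sketch of the hierarchical binocular fusion idea, assuming a
    # sigmoid-gated left/right fusion and conv-based enhancement at each level.
    import torch
    import torch.nn as nn
    from torchvision.models import mobilenet_v2

    class GateFuse(nn.Module):
        """Fuse left/right features with a learned sigmoid gate (assumed design)."""
        def __init__(self, ch):
            super().__init__()
            self.gate = nn.Sequential(nn.Conv2d(2 * ch, ch, 1), nn.Sigmoid())

        def forward(self, fl, fr):
            g = self.gate(torch.cat([fl, fr], dim=1))  # per-pixel binocular weight
            return g * fl + (1 - g) * fr

    def enhance(ch):
        """Conv-BN-ReLU block standing in for a feature enhancement module."""
        return nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                             nn.BatchNorm2d(ch), nn.ReLU(inplace=True))

    class HFFNetSketch(nn.Module):
        def __init__(self):
            super().__init__()
            # Random init keeps the sketch self-contained; in practice an
            # ImageNet-pretrained backbone would likely be used.
            feats = mobilenet_v2(weights=None).features
            # Tap MobileNetV2 at three depths (24/32/96 channels); the exact
            # split points for "low/middle/high" are a guess.
            self.low, self.mid, self.high = feats[:4], feats[4:7], feats[7:14]
            self.fuse = nn.ModuleList(GateFuse(c) for c in (24, 32, 96))
            self.enh = nn.ModuleList(enhance(c) for c in (24, 32, 96))
            self.pool = nn.AdaptiveAvgPool2d(1)
            # Three-input fusion: concatenate pooled levels, regress a score.
            self.head = nn.Sequential(nn.Linear(24 + 32 + 96, 128),
                                      nn.ReLU(inplace=True), nn.Linear(128, 1))

        def _pyramid(self, x):
            l = self.mid.__class__  # placeholder removed; see below
            low = self.low(x)
            mid = self.mid(low)
            high = self.high(mid)
            return low, mid, high

        def forward(self, left, right):
            fl, fr = self._pyramid(left), self._pyramid(right)
            fused = [e(f(a, b)) for f, e, a, b in zip(self.fuse, self.enh, fl, fr)]
            vec = torch.cat([self.pool(z).flatten(1) for z in fused], dim=1)
            return self.head(vec)  # predicted quality score

    # Usage: score a batch of stereo pairs.
    model = HFFNetSketch().eval()
    left, right = torch.rand(2, 3, 224, 224), torch.rand(2, 3, 224, 224)
    with torch.no_grad():
        print(model(left, right).shape)  # torch.Size([2, 1])

The gate makes the left/right trade-off spatially adaptive, which matches the abstract's emphasis on handling scene discrepancies between the two views; a fixed average would weight both views equally everywhere.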