Abstract
In image processing, better results are often achieved by deepening neural networks, which introduces considerably more parameters. In image classification, improving accuracy without adding too many parameters remains a challenge, and in image conversion, translation models based on generative adversarial networks often produce semantic artifacts that degrade image quality. To address these problems, this paper proposes a new attention module: the pixel–channel hybrid attention (PCHA) mechanism, which combines attention information from the pixel and channel domains. Comparative experiments with different attention modules on multiple image datasets verify the superiority of the PCHA module on classification tasks. For image conversion, we further propose a skip structure based on PCHA (the S-PCHA model) that links the up- and down-sampling paths. This structure helps the algorithm identify the most distinctive semantic objects in an image by enabling the exchange of information between the encoder and decoder. The results also show that the attention model establishes a more faithful mapping from the source domain to the target domain in the image conversion algorithm, thereby improving the quality of the images generated by the conversion model.
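The abstract does not specify how the pixel and channel branches are combined, so the following is only an illustrative sketch of the general idea of hybrid pixel–channel attention: a per-channel gate derived from global average pooling, followed by a per-pixel gate derived from the cross-channel mean. All function names (`channel_attention`, `pixel_attention`, `pcha`) and the sequential ordering of the two gates are assumptions, not the paper's actual design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x):
    # x: (C, H, W). Global average pooling yields one scalar per channel,
    # squashed to (0, 1) and used to reweight that channel's feature map.
    pooled = x.mean(axis=(1, 2))            # (C,)
    weights = sigmoid(pooled)               # (C,)
    return x * weights[:, None, None]

def pixel_attention(x):
    # A per-pixel gate from the cross-channel mean: one (H, W) map in (0, 1)
    # shared across channels, emphasizing spatially salient locations.
    pooled = x.mean(axis=0)                 # (H, W)
    weights = sigmoid(pooled)               # (H, W)
    return x * weights[None, :, :]

def pcha(x):
    # Hybrid attention: channel gating followed by pixel gating.
    # (The actual PCHA module may fuse the two branches differently.)
    return pixel_attention(channel_attention(x))

feat = np.random.randn(8, 16, 16)           # toy feature map: 8 channels, 16x16
out = pcha(feat)
```

Because both gates lie in (0, 1), the module only attenuates features — it preserves the tensor shape and never amplifies activations, which makes it cheap to drop into an existing network (e.g. on the encoder features carried across a skip connection).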