Abstract
Retinal images play an essential role in the early diagnosis of ophthalmic diseases. Automatic segmentation of retinal vessels in color fundus images is challenging due to the morphological variability of the vessels and their low contrast against the background. Moreover, automated models struggle to capture representative and discriminative retinal vascular features. To fully exploit the structural information of the retinal blood vessels, we propose a novel deep learning network called the Pre-Activated Convolution Residual and Triple Attention Mechanism Network (PCRTAM-Net). PCRTAM-Net uses a pre-activated dropout convolution residual method to improve the feature learning ability of the network. In addition, a residual atrous convolution spatial pyramid is integrated at both ends of the network encoder to extract multiscale information and improve the flow of vessel information. A triple attention mechanism is proposed to extract structural information between vessel contexts and to learn long-range feature dependencies. We evaluate PCRTAM-Net on four publicly available datasets: DRIVE, CHASE_DB1, STARE, and HRF. Our model achieves state-of-the-art performance, with accuracies (ACC) of 97.10%, 97.70%, 97.68%, and 97.14% and F1 scores of 83.05%, 82.26%, 84.64%, and 81.16%, respectively.
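As a rough illustration of the pre-activation idea named in the abstract (not the authors' implementation), the sketch below shows a generic pre-activated residual block with dropout in PyTorch; the class name, channel sizes, dropout rate, and exact layer ordering are assumptions for demonstration only.

```python
# Minimal sketch, assuming a standard pre-activation layout (BN -> ReLU -> Dropout -> Conv)
# with an identity/1x1 skip connection. Hyperparameters are hypothetical.
import torch
import torch.nn as nn


class PreActDropoutResBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, p_drop: float = 0.2):
        super().__init__()
        # Pre-activation: normalization and non-linearity precede each convolution.
        self.branch = nn.Sequential(
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
            nn.Dropout2d(p_drop),
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Dropout2d(p_drop),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
        )
        # 1x1 projection so the skip path matches the branch's channel count.
        self.skip = (
            nn.Identity() if in_ch == out_ch
            else nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.branch(x) + self.skip(x)


if __name__ == "__main__":
    block = PreActDropoutResBlock(in_ch=3, out_ch=32)
    fundus_patch = torch.randn(1, 3, 64, 64)  # dummy RGB fundus patch
    print(block(fundus_patch).shape)  # torch.Size([1, 32, 64, 64])
```

Placing the normalization and activation before each convolution keeps the residual sum unactivated, which is the usual motivation for pre-activated residual designs; how PCRTAM-Net combines this with dropout in detail is described in the paper body, not here.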