CLC number: TP391  Document code: A  DOI: 10.16157/j.issn.0258-7998.233934
Chinese citation: 張婷, 秦涵書, 趙若璇. 基于多尺度注意力融合網絡的胃癌病理圖像分割方法[J]. 電子技術應用, 2023, 49(9): 46-52.
English citation: Zhang Ting, Qin Hanshu, Zhao Ruoxuan. Gastric cancer pathological image segmentation method based on multi-scale attention fusion network[J]. Application of Electronic Technique, 2023, 49(9): 46-52.
Gastric cancer pathological image segmentation method based on multi-scale attention fusion network
Zhang Ting1, Qin Hanshu1, Zhao Ruoxuan2
(1. Information Center, The First Affiliated Hospital of Chongqing Medical University, Chongqing 400016, China; 2. Key Laboratory of Optoelectronic Technique System of the Ministry of Education, Chongqing University, Chongqing 400044, China)
Abstract: In recent years, with the development of deep learning technology, encoder-decoder image segmentation methods have been widely studied and applied in the automatic analysis of pathological images. However, because gastric cancer lesions are complex and variable, span large changes in scale, and have boundaries blurred by digital staining, segmentation algorithms designed at a single scale often fail to recover accurate lesion boundaries. To improve the accuracy of gastric cancer lesion segmentation, this paper proposes a gastric cancer image segmentation algorithm based on a multi-scale attention fusion network built on an encoder-decoder structure. The encoder uses EfficientNet as the feature extractor. In the decoder, deep supervision of the network is realized by extracting and fusing multi-path features from different levels, and at the output stage spatial and channel attention are applied to screen the multi-scale feature maps. A combined loss function is used during training to optimize the model. Experimental results show that the proposed method achieves a Dice coefficient of 0.8069 on the SEED dataset, yielding more refined gastric cancer lesion segmentation than the FCN and UNet series of networks.
Key words: pathological image; image segmentation; attention fusion
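The abstract reports performance as a Dice coefficient, the standard overlap metric for segmentation masks. As a minimal sketch (the function name and the smoothing term `eps` are our own choices, not the paper's code), the metric can be computed on flat binary masks as:

```python
# Dice coefficient between a predicted and a ground-truth binary mask.
# Illustrative sketch only; not the authors' implementation.
def dice_coefficient(pred, target, eps=1e-7):
    """pred, target: flat sequences of 0/1 labels of equal length.

    Dice = 2|P ∩ T| / (|P| + |T|), with eps to avoid division by zero
    when both masks are empty.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * intersection + eps) / (total + eps)


pred = [1, 1, 0, 0, 1, 0]
target = [1, 0, 0, 1, 1, 0]
print(round(dice_coefficient(pred, target), 4))  # → 0.6667
```

In practice the same quantity, computed on soft predictions, is often used directly as a differentiable loss term (1 − Dice), which is one common way to build the kind of combined loss the abstract mentions.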