CLC number: TP391.4  Document code: A  DOI: 10.16157/j.issn.0258-7998.223139  Citation format: An Henan, Guan Cong, Deng Wucai, et al. FSA-FPN reconstruction method that fused self-attention mechanism based on YOLOX[J]. Application of Electronic Technique, 2023, 49(3):61-66.
FSA-FPN reconstruction method that fused self-attention mechanism based on YOLOX
An Henan1, Guan Cong2, Deng Wucai1, Yang Jiazhou2, Ma Chao2
(1. College of Electronics and Information Engineering, Shenzhen University, Shenzhen 518000, China; 2. Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen 518000, China)
Abstract: As the resolution of input images in current object detection tasks increases, the feature information extracted by the feature extraction network becomes increasingly limited when its receptive field remains unchanged, and the information overlap between adjacent feature points also grows. This paper proposes FSA (fusion self-attention)-FPN and designs an SAU (self-attention upsample) module. Inside the SAU, the self-attention mechanism and the CNN are computed in a crossed fashion for further feature fusion, and an FCU (feature coupling unit) is reconstructed to eliminate feature misalignment between the two branches and bridge their semantic gap. Comparative experiments are carried out on the Pascal VOC2007 dataset using YOLOX-Darknet53 as the backbone network. The experimental results show that, compared with the FPN of the original network, replacing it with FSA-FPN improves mAP@[.5:.95] by 1.5%, and the positions of the predicted boxes are also more accurate. The method has good application value in detection scenarios requiring higher accuracy.
Key words: FSA-feature pyramid networks; feature fusion; SAU; self-attention mechanism
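As a rough illustration of the idea behind a self-attention upsample step, the sketch below applies plain (single-head) self-attention over the flattened feature tokens of a small feature map and then upsamples the attended map 2x. This is a minimal NumPy sketch under our own assumptions, not the paper's SAU: the function names (`self_attention`, `sau_upsample`), the random projection weights, and the nearest-neighbor upsampling are all hypothetical stand-ins for the module described in the abstract.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    # x: (N, C) tokens; wq/wk/wv: (C, C) projection matrices (hypothetical).
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = (q @ k.T) / np.sqrt(k.shape[-1])   # scaled dot-product attention
    return softmax(scores, axis=-1) @ v          # (N, C)

def sau_upsample(feat, wq, wk, wv):
    # Hypothetical SAU-like step: attend over flattened feature tokens,
    # then upsample the attended map 2x with nearest-neighbor repetition.
    c, h, w = feat.shape
    tokens = feat.reshape(c, h * w).T            # (H*W, C) token matrix
    attended = self_attention(tokens, wq, wk, wv)
    fmap = attended.T.reshape(c, h, w)           # back to (C, H, W)
    return fmap.repeat(2, axis=1).repeat(2, axis=2)  # (C, 2H, 2W)

rng = np.random.default_rng(0)
c, h, w = 8, 4, 4
feat = rng.standard_normal((c, h, w))
wq, wk, wv = (rng.standard_normal((c, c)) for _ in range(3))
out = sau_upsample(feat, wq, wk, wv)
print(out.shape)  # (8, 8, 8)
```

In the actual FSA-FPN, the attended features would additionally be fused with the CNN branch through the reconstructed FCU before entering the pyramid; the sketch only shows the attention-then-upsample skeleton.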