《電子技術(shù)應(yīng)用》 (Application of Electronic Technique)
Video surveillance object detection method based on YOLOv3-tiny
Application of Electronic Technique, 2022, Issue 7
Wang Juncheng1,2,3, He Chao1,2,3, Zhao Zhiyuan1,2,3, Zou Jianwen1,2,3
1. School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; 2. Advanced Network and Intelligent Connection Technology Key Laboratory of Chongqing Education Commission of China, Chongqing 400065, China; 3. Chongqing Key Laboratory of Ubiquitous Sensing and Networking, Chongqing 400065, China
Abstract: Object detection algorithms have considerable practical value in the field of video surveillance. To address the difficulty of achieving real-time object detection on resource-constrained video surveillance systems, an improved object detection algorithm based on YOLOv3-tiny is proposed. Building on the YOLOv3-tiny architecture, the algorithm optimizes the backbone network by adding feature reuse and introduces a fully-connected attention mix module to learn richer spatial information, making it better suited to object detection under resource constraints. Experimental results show that, compared with YOLOv3-tiny, the algorithm reduces model size by 39.2% and the number of parameters by 39.8% while improving mAP on the VOC dataset by 2.7%, significantly lowering the model's resource footprint while improving detection accuracy.
CLC number: TP391.4
Document code: A
DOI:10.16157/j.issn.0258-7998.212121
Chinese citation format: 王均成,賀超,趙志源,等. 基于YOLOv3-tiny的視頻監(jiān)控目標(biāo)檢測算法[J]. 電子技術(shù)應(yīng)用,2022,48(7):30-33,39.
English citation format: Wang Juncheng, He Chao, Zhao Zhiyuan, et al. Video surveillance object detection method based on YOLOv3-tiny[J]. Application of Electronic Technique, 2022, 48(7):30-33, 39.
Key words: object detection; video surveillance; YOLOv3; feature reuse; attention mechanism

0 Introduction

    In recent years, object detection algorithms have been widely applied in video surveillance scenarios, including vehicle detection[1], pedestrian detection[2], agricultural detection[3], and human abnormal behavior detection[4], and increasingly complex object detection networks have demonstrated state-of-the-art detection performance. In practice, however, real-time object detection often has to run on surveillance devices with limited computing power and memory. Embedded video surveillance platforms, for example, typically have only a low-power embedded Graphics Processing Unit (GPU) available. This severely limits the wide deployment of such networks in these settings and makes real-time object detection on resource-constrained devices highly challenging.

    To address the challenge of object detection on resource-limited devices, there is growing interest in studying and designing efficient, low-complexity neural network architectures. The well-known YOLO[5] (You Only Look Once) is a one-stage object detection algorithm designed around efficiency, and it can perform video surveillance object detection efficiently on high-end GPUs. For many resource-constrained surveillance devices, however, these network architectures have too many parameters and too high a computational complexity, so inference speed drops sharply when they run on embedded surveillance hardware. YOLOv3[6] is the most widely applied algorithm of the YOLO family; YOLOv3-tiny is a simplified version of it that trades a noticeable loss in accuracy for a much lower computational cost, greatly increasing the feasibility of deploying object detection on resource-constrained surveillance devices.
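    The full method is only summarized in the abstract above, but as a rough illustration of its two ingredients, feature reuse in the backbone and a fully-connected attention mix module, the following PyTorch sketch shows one plausible arrangement. It is not the authors' code: the module names, channel counts, reduction ratio, and the exact fusion point are assumptions made for illustration only.

import torch
import torch.nn as nn


def conv_bn_leaky(in_ch, out_ch, k=3):
    # Conv + BN + LeakyReLU, the basic block used throughout YOLOv3-tiny.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1, inplace=True),
    )


class FCAttentionMix(nn.Module):
    # Hypothetical "fully-connected attention mix": channel weights from a small
    # fully-connected bottleneck, combined with a spatial attention map.
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)        # channel attention weights
        s = self.spatial(torch.cat([x.mean(1, keepdim=True),    # spatial attention map
                                    x.max(1, keepdim=True).values], dim=1))
        return x * w * s


class TinyBackboneWithReuse(nn.Module):
    # YOLOv3-tiny-like backbone stub with one feature-reuse (concatenation) link.
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(conv_bn_leaky(3, 16), nn.MaxPool2d(2),
                                  conv_bn_leaky(16, 32), nn.MaxPool2d(2))
        self.stage3 = nn.Sequential(conv_bn_leaky(32, 64), nn.MaxPool2d(2))
        self.stage4 = nn.Sequential(conv_bn_leaky(64, 128), nn.MaxPool2d(2))
        self.down = nn.MaxPool2d(2)                    # aligns the reused map's resolution
        self.fuse = conv_bn_leaky(64 + 128, 128, k=1)  # fuses reused + current features
        self.attn = FCAttentionMix(128)

    def forward(self, x):
        x = self.stem(x)
        f3 = self.stage3(x)                            # earlier feature map to be reused
        f4 = self.stage4(f3)
        f4 = self.fuse(torch.cat([self.down(f3), f4], dim=1))  # feature reuse
        return self.attn(f4)                           # attention-refined detection feature


if __name__ == "__main__":
    out = TinyBackboneWithReuse()(torch.randn(1, 3, 416, 416))
    print(out.shape)  # torch.Size([1, 128, 26, 26])

    Concatenating the downsampled earlier feature map with the later one and compressing the result with a 1×1 convolution keeps the extra computation small, which is in the spirit of the reported reductions in model size and parameter count.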




    For the full text of this article, please download: http://forexkbc.com/resource/share/2000004582.




This content is original to the AET website; reproduction without authorization is prohibited.