A 6D Pose Estimation Method Based on Point Cloud Instance Segmentation
網(wǎng)絡(luò)安全與數(shù)據(jù)治理
Zhou Jian
DEEPerceptron Tech, Suzhou
Abstract: A method for estimating object poses based on the SoftGroup instance segmentation model and the principal component analysis (PCA) algorithm is proposed. In industrial automation, robots and robotic arms are commonly equipped with vision systems that estimate the position of target objects from 2D images, but in complex scenes with stacking or occlusion, the recognition accuracy on 2D images tends to drop. To obtain object positions accurately and efficiently, the method makes full use of the high resolution and high precision of 3D point cloud data: the RGB-D images captured by a depth camera are first converted into point clouds, the SoftGroup model then segments the target objects out of the point cloud, and finally the PCA algorithm yields the 6D pose of each target. Validation on a self-built workpiece dataset shows an average AP of 97.5% across three types of workpieces, and recognizing a single point cloud takes only 0.73 ms, demonstrating that the proposed method is efficient and runs in real time. It offers a new perspective and solution for scenarios such as robot localization and autonomous robotic-arm grasping, and has significant potential for engineering applications.
CLC classification number: TP391; Document code: A; DOI: 10.19358/j.issn.2097-1788.2024.05.006
Citation: Zhou Jian. A 6D pose estimation method based on point cloud instance segmentation[J]. 網(wǎng)絡(luò)安全與數(shù)據(jù)治理, 2024, 43(5): 42-45, 60.
6D pose estimation based on point cloud instance segmentation
Zhou Jian
DEEPerceptron Tech
Abstract: This paper proposes a method for estimating object poses based on the SoftGroup instance segmentation model and the principal component analysis (PCA) algorithm. In industrial automation, robots and robotic arms are often equipped with vision systems that estimate the position of target objects from 2D images; however, in complex scenarios such as stacking and occlusion, the recognition accuracy of 2D images tends to decrease. To obtain object positions accurately and efficiently, this paper fully leverages the high-resolution and high-precision advantages of 3D point cloud data. First, RGB-D images captured by a depth camera are converted into point clouds. Then, the SoftGroup model is employed to segment the target objects in the point cloud, and finally the PCA algorithm is used to obtain the six-dimensional pose of each target. Validation on a self-built workpiece dataset shows an average AP of 97.5% for the recognition of three types of workpieces, and the recognition time for a single point cloud is only 0.73 ms, demonstrating the efficiency and real-time capability of the proposed method. This approach provides a new perspective and solution for scenarios such as robot localization and autonomous robotic-arm grasping, with significant potential for practical engineering applications.
Key words: point cloud data; SoftGroup instance segmentation; 6D pose estimation
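
As a concrete illustration of the pipeline the abstract outlines, the Python sketch below shows the two geometric steps in minimal form: back-projecting a depth image into a point cloud using the camera intrinsics, and recovering a 6D pose (rotation plus translation) for one segmented instance with PCA. This is a sketch under stated assumptions, not the paper's implementation: the function names, the camera parameters (fx, fy, cx, cy), and the placeholder mask standing in for a SoftGroup segmentation result are all illustrative.

import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, depth_scale=1000.0):
    """Back-project a depth image (H x W) into an N x 3 point cloud in camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))         # pixel grids, shape (h, w)
    z = depth.astype(np.float64) / depth_scale              # raw depth units -> metres
    x = (u - cx) * z / fx                                   # pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                         # drop invalid (zero-depth) pixels

def pca_pose(instance_points):
    """Estimate a 6D pose for one segmented instance via PCA.

    Translation: centroid of the instance points.
    Rotation:    principal axes of the point distribution (eigenvectors of the
                 covariance matrix), ordered by decreasing variance.
    """
    centroid = instance_points.mean(axis=0)
    centered = instance_points - centroid
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))   # eigenvalues in ascending order
    axes = eigvecs[:, ::-1]                                  # largest-variance axis first
    if np.linalg.det(axes) < 0:                              # enforce a right-handed frame
        axes[:, -1] *= -1
    return axes, centroid                                    # rotation matrix R, translation t

if __name__ == "__main__":
    # Toy usage: a synthetic depth image and a placeholder mask standing in for
    # the per-instance mask that the SoftGroup model would output.
    depth = np.full((480, 640), 800, dtype=np.uint16)        # flat surface 0.8 m away
    cloud = depth_to_point_cloud(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
    instance_mask = np.arange(len(cloud)) < 10000            # hypothetical segmentation mask
    R, t = pca_pose(cloud[instance_mask])
    print("rotation:\n", R, "\ntranslation:", t)

Note that PCA determines the principal axes only up to sign and ordering, so in practice the recovered frame typically has to be disambiguated against a reference model of the workpiece before it is passed on to a localization or grasping module.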

Introduction

In recent years, with the development and spread of hardware such as laser scanners, cameras, and 3D scanners, point cloud data can be acquired through more and more channels and at ever lower cost. Compared with 2D images, 3D point cloud data has distinct advantages: its high resolution, high precision, and high dimensionality give it richer spatial geometric information, allowing it to express the shape of an object directly. Point cloud data has accordingly been applied widely in industrial measurement, robotic-arm grasping, object detection, robot vision, and other fields [1–3].

In industrial automation, the pose of an object usually has to be obtained before any subsequent grasping action. Automatic grasping can be divided into structured and unstructured scenarios. In a structured workspace, the robotic arm grasps objects at fixed positions; this mode requires extensive commissioning and teaching, the arm can only follow a preset program without autonomous recognition or decision-making, and any deformation or displacement of the target object may cause the grasp to fail. In unstructured scenarios, the robotic arm is usually equipped with visual sensing hardware and object detection algorithms so that it can perceive and understand a relatively complex grasping environment. However, in real, complex grasping scenes (scattered, stacked, or occluded objects), the accuracy of common object detection methods such as point cloud registration [4] and 2D image instance segmentation [5] drops, which in turn reduces grasping efficiency [6].


For the full text of this article, please download:

http://forexkbc.com/resource/share/2000006014


Author information:

Zhou Jian

(DEEPerceptron Tech, Suzhou, Jiangsu 215124, China)



This content is original to the AET website; reproduction without authorization is prohibited.