Copyright Protection for Deep Learning Models Utilizing a Black-Box Testing Framework
Cyber Security and Data Governance
Qu Xiangyan1,2, Yu Jing1,2, Xiong Gang1,2, Gai Keke3
1. Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100085, China; 2. School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100049, China; 3. School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing 100081, China
Abstract: With the rapid development of generative artificial intelligence, copyright protection for deep learning models, as key technical assets, has become increasingly important. Existing model copyright protection methods generally adopt deterministic test-sample generation algorithms, which suffer from low selection efficiency and vulnerability to adversarial attacks. To address these problems, this paper proposes a copyright protection method for deep learning models based on a black-box testing framework. First, a sample generation strategy based on randomized algorithms is introduced, which effectively improves testing efficiency and reduces the risk of adversarial attacks. In addition, new test metrics and algorithms are introduced for the black-box scenario, strengthening black-box defense while ensuring that each metric is sufficiently orthogonal. In experiments, the proposed method demonstrates accurate and reliable copyright judgment and effectively reduces the number of highly correlated metrics.
CLC number: TP181
Document code: A    DOI: 10.19358/j.issn.2097-1788.2023.12.001
Citation: Qu Xiangyan, Yu Jing, Xiong Gang, et al. Copyright protection for deep learning models utilizing a black box testing framework[J]. Cyber Security and Data Governance, 2023, 42(12): 1-6, 13.
Keywords: generative artificial intelligence; deep learning models; copyright protection; black-box defense
Introduction
Driven by the rapid development of generative artificial intelligence, the copyright protection of deep learning models has drawn increasing attention. Deep learning models, especially large-scale, high-performance ones, are expensive to train and therefore prone to unauthorized copying or reproduction, leading to copyright infringement and economic losses for model owners [1-2]. Traditional copyright protection methods mostly rely on watermarking techniques [3-4], which confirm ownership by embedding a specific watermark into the model. Although such methods provide definitive ownership verification, they are intrusive to the original model and may degrade its performance or introduce new security risks; moreover, they lack robustness against adaptive attacks and emerging model extraction attacks [5-6].
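The two ideas sketched in the abstract, randomized selection of test samples and keeping only test metrics that are sufficiently orthogonal to one another, can be illustrated with a minimal sketch. This is an illustrative assumption, not the paper's actual algorithm: the function names, the greedy filtering scheme, and the correlation threshold below are all hypothetical.

```python
import numpy as np

def select_test_samples(pool, k, rng):
    """Randomly draw k fingerprint samples from a candidate pool.

    A randomized draw (versus a fixed deterministic test set) makes it
    harder for an adversary to anticipate and evade the test samples.
    """
    idx = rng.choice(len(pool), size=k, replace=False)
    return [pool[i] for i in idx]

def filter_correlated_metrics(scores, threshold=0.9):
    """Greedily keep metric columns whose absolute Pearson correlation
    with every already-kept column stays below `threshold`, so the
    retained metrics are approximately orthogonal.

    scores: (n_models, n_metrics) array of per-model metric values.
    Returns the indices of the retained metric columns.
    """
    corr = np.corrcoef(scores, rowvar=False)
    kept = []
    for j in range(scores.shape[1]):
        if all(abs(corr[j, i]) < threshold for i in kept):
            kept.append(j)
    return kept

# Example: metric 1 is a rescaled copy of metric 0 (correlation 1.0),
# so it is dropped; the independent metric 2 is retained.
rng = np.random.default_rng(0)
m0 = rng.normal(size=50)
m2 = rng.normal(size=50)
scores = np.stack([m0, 2 * m0, m2], axis=1)
print(filter_correlated_metrics(scores))  # -> [0, 2]
```

A threshold of 0.9 is an arbitrary choice here; in practice the cutoff would be tuned so that the surviving metrics each contribute distinct evidence to the copyright judgment.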
Article download link: http://forexkbc.com/resource/share/2000005869
This content is original to the AET website; reproduction without authorization is prohibited.