The challenge and governance path of generative artificial intelligence to personal information protection
Cyber Security and Data Governance (網(wǎng)絡(luò)安全與數(shù)據(jù)治理)
Wan Meixiu
Law School, Nanchang University
CLC number: D913; TP399    Document code: A    DOI: 10.19358/j.issn.2097-1788.2024.04.009
Citation: Wan Meixiu. The challenge and governance path of generative artificial intelligence to personal information protection[J]. Cyber Security and Data Governance, 2024, 43(4): 53-60.
Abstract: Generative artificial intelligence technology, represented by ChatGPT, has brought disruptive changes to all walks of life, but it has also triggered personal information infringement crises such as personal information leakage, algorithmic bias, and the spread of false information. The traditional "rights protection-based" path places excessive emphasis on personal information protection and hinders the development of the artificial intelligence industry, whereas the "risk prevention-based" path gives greater weight to the value of reasonably using personal information and is preferable as a matter of value choice. Only by governing through rights protection and risk prevention together, however, can a balance of interests be achieved and a long-term protection mechanism for personal information be established. In terms of personal information processing rules, the rigid and strict informed-consent rule should be replaced by a "weak consent" rule; in terms of the purpose limitation principle, "purpose limitation" should be replaced by "risk limitation"; in terms of the personal information minimization principle, "purpose minimization" should be replaced by "risk minimization". On this basis, compliance supervision of generative artificial intelligence data sources should be further strengthened, the transparency and explainability of algorithms improved, and ethical norms for science and technology and the pursuit of tort liability reinforced.
Key words: generative AI; ChatGPT; personal information protection; governance path

Introduction

Generative artificial intelligence represented by ChatGPT has set off the global wave of the fourth scientific and technological revolution and become a new engine driving global economic growth[1]. However, as a new generation of artificial intelligence technology, generative AI, while continuously iterating and transforming relations of production, also brings many legal risks to personal information protection. Generative AI runs on the personal information of a massive number of users: every stage of its operation, from input and training to optimization and output, depends on the use of personal information. Against the background of large-scale data processing and an opaque algorithmic black box, generative AI gives rise to problems such as the unlawful collection of personal information, the creation of false and harmful information, and algorithmic bias and discrimination. Regulators in many jurisdictions have taken notice: the governments of the United States, France, Italy, Spain, Canada and other countries have announced investigations into and regulation of ChatGPT and issued corresponding regulatory rules. On 10 July 2023, the Cyberspace Administration of China, together with six other departments, issued the Interim Measures for the Administration of Generative Artificial Intelligence Services (hereinafter the "Interim Measures"), which set out concrete measures to promote the development of generative AI technology and respond positively and forcefully to the need both to support and to regulate that development. It should be noted, however, that on personal information protection the Interim Measures merely invoke, in Articles 4, 7, 9, 11 and 19, the relevant provisions of the Personal Information Protection Law; they contain no dedicated rules for the new forms of personal information infringement created by the use of generative AI, and continuing to rely on the Personal Information Protection Law faces many difficulties in application.


The full text of this article is available for download at:

http://forexkbc.com/resource/share/2000005969


Author information:

Wan Meixiu

(Law School, Nanchang University, Nanchang 330031, Jiangxi, China)



This content is original to the AET website. Reproduction without authorization is prohibited.