Application of Electronic Technique
CLC number: TP391.1    Document code: A    DOI: 10.16157/j.issn.0258-7998.233942
Citation: Wu Lei, Wang Hangjun. Identification method of sensitive entities in grassroots governance based on pre-training models[J]. Application of Electronic Technique, 2023, 49(9): 109-114.
Identification method of sensitive entities in grassroots governance based on pre-training models
Wu Lei1, Wang Hangjun2
(1. School of Mathematics and Computer Science, Zhejiang A&F University, Hangzhou 311300, China; 2. College of Engineering and Technology, Jiyang College of Zhejiang A&F University, Zhuji 311800, China)
Abstract: Grassroots governance generates large volumes of sensitive data whose private content can be removed by data masking, but much of this data is unstructured text that is difficult to mask directly. Named entity recognition is therefore needed to extract sensitive data from the unstructured text. First, sensitive entities are divided into 16 categories and the petition texts are annotated accordingly. The input layer is represented by the pre-trained model BERT, the encoding layer uses a bidirectional long short-term memory network to capture context information, and the decoding layer optimizes the tag sequence with a conditional random field, yielding a high-accuracy model for identifying sensitive entities in grassroots governance. To meet the needs of data masking, the loss weights of false negatives and false positives are adjusted, and a sensitive-entity box-selection rate is used as an auxiliary measure of model performance. Experiments on a grassroots governance petition dataset and the public dataset MSRA achieve F1 scores of 88.38% and 90.11%, respectively, 4.64% and 3.78% higher than the baseline model. The model can be applied to sensitive entity recognition in unstructured text with a high recognition success rate. Existing evaluation metrics fail to reflect the indirect inference relations among sensitive entities, and a more complete evaluation system for sensitive entities should be explored.
Key words: pre-trained language model; grassroots governance; Chinese named entity recognition; data masking

0 Introduction

As socialism with Chinese characteristics enters a new era, building a modern grassroots governance system is of great significance to rural revitalization and long-term national stability, and modernizing the governance system requires integrating informatization [1]. The large amount of data produced by digitized grassroots governance can, after analysis and mining, support the informatization and smart construction of townships, towns, and sub-districts. Such data inevitably contains personal privacy, and under current security conditions its collection and use may carry a risk of data leakage [2]. Data masking is a technique that transforms the sensitive information in structured or unstructured data according to predefined masking rules; the masked data balances usability and security and can be applied normally in various scenarios while protecting privacy. Reference [3] discusses the masking of structured and unstructured text in the judicial domain, using Hungarian legal documents as a case study of possible solutions, and suggests an approach that links named entity recognition with data masking. In structured data, sensitive data is relatively explicit and can be partitioned by data column, but in unstructured data the sensitive items must first be identified within large amounts of text, which calls for applying named entity recognition in the data-masking process for grassroots governance texts.
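The masking step itself is mechanical once sensitive spans have been identified. A minimal sketch (the function name and the `(start, end, label)` span format are illustrative, not from the paper) that replaces each identified span with mask characters:

```python
def mask_spans(text, spans, mask_char="*"):
    """Replace each (start, end, label) span with mask characters.

    Spans are applied right-to-left so earlier offsets stay valid
    while the string is being rewritten.
    """
    out = text
    for start, end, _label in sorted(spans, reverse=True):
        out = out[:start] + mask_char * (end - start) + out[end:]
    return out

# A person name at [0, 2) and a phone number at [6, 17):
masked = mask_spans("張三的電話是13912345678", [(0, 2, "PER"), (6, 17, "PHONE")])
# masked == "**的電話是***********"
```

In practice the spans would come from the recognition model described in this paper; the masking rule (asterisks, pseudonyms, generalization) depends on the deployment scenario.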

Named entity recognition (NER) [4] is a technique for identifying entities with specific meanings in unstructured text and is a fundamental task in natural language processing. It supports downstream tasks such as relation extraction and knowledge graph construction [5]. Common entities include person names, place names, and organization names; for example, in "李彥宏在北京舉辦了百度AI開(kāi)發(fā)大會(huì)" ("Robin Li held the Baidu AI Developer Conference in Beijing"), three entities can be identified: 李彥宏 (person), 北京 (place), and 百度 (organization). The development of NER can be divided into three stages: dictionary- and rule-based methods, machine learning methods, and deep learning methods [4]. Beyond general-purpose corpora, there is also domain named entity recognition (DNER) for specific application scenarios such as medicine, biology, finance, justice, and agriculture [6]. Owing to its strong performance, the combination of a bidirectional long short-term memory network (BiLSTM) and a conditional random field (CRF) is widely used across domains as the classic model. This paper casts the sensitive-word recognition task on unstructured grassroots governance text as a named entity recognition task and follows the conventional sequence-labeling approach.
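The role of the CRF decoding layer in the BiLSTM-CRF combination can be illustrated with a toy Viterbi decoder: emission scores come from the encoder, and learned transition scores let the decoder veto illegal tag sequences. This is a generic pure-Python sketch, not the paper's implementation; the tag set, scores, and dictionary-based score containers are invented for the example:

```python
def viterbi_decode(emissions, transitions, tags):
    """Find the highest-scoring tag path.

    emissions: one dict per position mapping tag -> score (from the encoder);
    transitions: dict mapping (prev_tag, cur_tag) -> score (CRF parameters).
    """
    score = {t: emissions[0].get(t, 0.0) for t in tags}
    backpointers = []
    for emit in emissions[1:]:
        new_score, ptr = {}, {}
        for cur in tags:
            prev = max(tags, key=lambda p: score[p] + transitions.get((p, cur), 0.0))
            ptr[cur] = prev
            new_score[cur] = score[prev] + transitions.get((prev, cur), 0.0) + emit.get(cur, 0.0)
        score = new_score
        backpointers.append(ptr)
    # Walk the backpointers from the best final tag to recover the path.
    last = max(tags, key=score.get)
    path = [last]
    for ptr in reversed(backpointers):
        path.append(ptr[path[-1]])
    path.reverse()
    return path

# A strongly negative transition score vetoes the illegal pair O -> I:
tags = ["B", "I", "O"]
emissions = [{"B": 2.0, "O": 1.0}, {"I": 2.0, "O": 1.0}, {"O": 2.0}]
transitions = {("B", "I"): 1.0, ("O", "I"): -10.0}
print(viterbi_decode(emissions, transitions, tags))  # ['B', 'I', 'O']
```

A real CRF layer learns the transition matrix jointly with the encoder; the point of the sketch is only that decoding picks a globally consistent tag sequence rather than the best tag per position.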

English words are separated by spaces, so word boundaries are explicit, and distinguishing cues such as capitalization, roots, and suffixes help NER perform well. The most salient feature of Chinese, by contrast, is that word boundaries are blurred: there is no delimiter to mark them [7]. Because Chinese characters are not separated by spaces, word-granularity Chinese NER must first perform word segmentation, and the error propagation caused by segmentation mistakes makes word-granularity recognition worse than character-granularity recognition. Chinese NER therefore usually operates at character granularity. Reference [8] surveys the methods, difficulties, and future research directions of Chinese NER. Reference [9] improves recognition by adding semantic and phonetic information to Chinese word embeddings. NER is now widely applied in many fields, but applications in grassroots governance remain scarce. Compared with general-domain data, sensitive information recognition in grassroots governance suffers more severely from nested entities, polysemy, and character and word errors.
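Under character-granularity sequence labeling, each character receives a BIO tag and entities are recovered by grouping consecutive tags. A minimal sketch using the example sentence from above (the tag scheme and the hand-written tags are illustrative, not model output):

```python
def extract_entities(chars, tags):
    """Group per-character BIO tags back into (entity_text, type) pairs."""
    entities, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel "O" flushes the final span
        if start is not None and (tag == "O" or tag.startswith("B-")):
            entities.append(("".join(chars[start:i]), etype))
            start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    return entities

chars = list("李彥宏在北京舉辦了百度AI開(kāi)發(fā)大會(huì)")
tags = ["B-PER", "I-PER", "I-PER", "O", "B-LOC", "I-LOC",
        "O", "O", "O", "B-ORG", "I-ORG", "O", "O", "O", "O", "O", "O"]
print(extract_entities(chars, tags))
# [('李彥宏', 'PER'), ('北京', 'LOC'), ('百度', 'ORG')]
```

Working per character sidesteps segmentation errors entirely: no tokenizer decision can split or merge an entity before tagging happens.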

Moreover, although general-domain NER covers some sensitive entities such as person names, place names, and organization names, it does not annotate digit-type sensitive entities such as ID card numbers, phone numbers, and bank card numbers, and thus cannot cover the many types of sensitive entities produced in grassroots governance.
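Digit-type sensitive entities have highly regular surface forms, so systems often match them with rules alongside the learned tagger. A hedged sketch (these patterns are simplified illustrations, not the paper's rules; production matching would add checksum validation for ID and bank card numbers plus context constraints):

```python
import re

# Priority order resolves overlaps: an 18-digit ID number would otherwise
# also satisfy the looser bank-card pattern.
PATTERNS = [
    ("ID_CARD",   re.compile(r"(?<!\d)\d{17}[0-9Xx](?![0-9Xx])")),
    ("BANK_CARD", re.compile(r"(?<!\d)\d{16,19}(?!\d)")),
    ("PHONE",     re.compile(r"(?<!\d)1[3-9]\d{9}(?!\d)")),
]

def find_digit_entities(text):
    """Scan text with each pattern, skipping spans already claimed."""
    taken, hits = set(), []
    for label, pat in PATTERNS:
        for m in pat.finditer(text):
            span = range(m.start(), m.end())
            if taken.isdisjoint(span):
                taken.update(span)
                hits.append((m.group(), label))
    return hits

hits = find_digit_entities("證號(hào)110101199003071234,電話13912345678")
# [('110101199003071234', 'ID_CARD'), ('13912345678', 'PHONE')]
```

The lookaround assertions `(?<!\d)`/`(?!\d)` are used instead of `\b` because Chinese characters count as word characters in Python's Unicode-aware regex engine, so `\b` would fail between a Chinese character and a digit.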



The full text of this article can be downloaded at: http://forexkbc.com/resource/share/2000005647






This content is original to the AET website; reproduction without authorization is prohibited.