Elasticsearch: modifying the pinyin tokenizer source so mixed Chinese/pinyin/initial-letter search does not match homophones
[Copyright notice]: This article was published by danvid at http://danvid.cnblogs.com/. Please credit the source if you reprint or reuse any part of it.
Pinyin matching comes up constantly in business scenarios, and the usual tool is a pinyin tokenizer. Pinyin tokenizers have one problem, though: they match homophones, which is sometimes exactly what the business does not want.
Scenario: I search for "純生pi酒", and the index contains the following documents:
doc[1]:{"name":"純生啤酒"}
doc[2]:{"name":"春生啤酒"}
doc[3]:{"name":"純生劈酒"}
The business expectation for the input "純生pi酒" is that only doc[1]:{"name":"純生啤酒"} and doc[3]:{"name":"純生劈酒"} come back; everything else is noise. From a business point of view, once the user has typed the characters "純生", only documents actually containing "純生" should be returned (though in plenty of other scenarios you would also want "春生" back). A stock pinyin tokenizer, however, returns doc[2] as well, because it tokenizes doc[2] into:
```json
{"tokens": [
  {"token": "c",     "start_offset": 0, "end_offset": 1, "type": "word", "position": 0},
  {"token": "chun",  "start_offset": 0, "end_offset": 1, "type": "word", "position": 0},
  {"token": "s",     "start_offset": 1, "end_offset": 2, "type": "word", "position": 1},
  {"token": "sheng", "start_offset": 1, "end_offset": 2, "type": "word", "position": 1},
  {"token": "p",     "start_offset": 2, "end_offset": 3, "type": "word", "position": 2},
  {"token": "pi",    "start_offset": 2, "end_offset": 3, "type": "word", "position": 2},
  {"token": "j",     "start_offset": 3, "end_offset": 4, "type": "word", "position": 3},
  {"token": "jiu",   "start_offset": 3, "end_offset": 4, "type": "word", "position": 3}
]}
```
Because "純生" and "春生" are homophones, doc[1] and doc[2] produce identical token streams, so matching doc[2] is inevitable. How do we fix this?
What we actually need is this: when the search text mixes Chinese and pinyin, the Chinese parts should match as Chinese and the pinyin parts as pinyin. That alone eliminates homophone hits for the Chinese portion of the input. So the plan is: at index time, emit all three token types (full pinyin / first letters / Chinese characters); at search time, split any Chinese in the input into individual characters and tokenize any Latin text as pinyin. For example:
At index time, "純生啤酒" is tokenized as:
Index-time tokens:
```json
{"tokens": [
  {"token": "c",     "start_offset": 0, "end_offset": 1, "type": "word", "position": 0},
  {"token": "chun",  "start_offset": 0, "end_offset": 1, "type": "word", "position": 0},
  {"token": "純",    "start_offset": 0, "end_offset": 1, "type": "word", "position": 0},
  {"token": "s",     "start_offset": 1, "end_offset": 2, "type": "word", "position": 1},
  {"token": "sheng", "start_offset": 1, "end_offset": 2, "type": "word", "position": 1},
  {"token": "生",    "start_offset": 1, "end_offset": 2, "type": "word", "position": 1},
  {"token": "p",     "start_offset": 2, "end_offset": 3, "type": "word", "position": 2},
  {"token": "pi",    "start_offset": 2, "end_offset": 3, "type": "word", "position": 2},
  {"token": "啤",    "start_offset": 2, "end_offset": 3, "type": "word", "position": 2},
  {"token": "j",     "start_offset": 3, "end_offset": 4, "type": "word", "position": 3},
  {"token": "jiu",   "start_offset": 3, "end_offset": 4, "type": "word", "position": 3},
  {"token": "酒",    "start_offset": 3, "end_offset": 4, "type": "word", "position": 3}
]}
```
Searching for "純生pi酒" is tokenized as:
Search-time tokens:
```json
{"tokens": [
  {"token": "純", "start_offset": 0, "end_offset": 1, "type": "word", "position": 0},
  {"token": "生", "start_offset": 1, "end_offset": 2, "type": "word", "position": 1},
  {"token": "pi", "start_offset": 2, "end_offset": 4, "type": "word", "position": 2},
  {"token": "酒", "start_offset": 4, "end_offset": 5, "type": "word", "position": 3}
]}
```
Now only documents containing the characters "純" / "生" / "酒" are matched, and the "春" documents stay out. With the approach settled, we need an implementation.
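The search-side rule above (one token per Chinese character, Latin runs kept together for the pinyin step) can be sketched in plain Java. This is only an illustration of the splitting idea, not the plugin's actual code; digits and punctuation are simply skipped here:

```java
import java.util.ArrayList;
import java.util.List;

public class MixedTokenizerSketch {
    // Split mixed Chinese/Latin input: each CJK character becomes its own token,
    // while consecutive Latin letters are kept together as one pinyin run.
    public static List<String> split(String input) {
        List<String> tokens = new ArrayList<>();
        StringBuilder latin = new StringBuilder();
        for (char c : input.toCharArray()) {
            if (c >= '\u4E00' && c <= '\u9FA5') { // CJK Unified Ideographs range used by the plugin
                if (latin.length() > 0) { // flush any pending Latin run first
                    tokens.add(latin.toString());
                    latin.setLength(0);
                }
                tokens.add(String.valueOf(c));
            } else if (Character.isLetter(c)) {
                latin.append(c);
            } // digits/punctuation are skipped in this sketch
        }
        if (latin.length() > 0) {
            tokens.add(latin.toString());
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(split("純生pi酒")); // [純, 生, pi, 酒]
    }
}
```

In the real plugin the Latin run ("pi") is then further segmented into pinyin syllables by the maximum-matching step described below.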
The current Elasticsearch pinyin tokenizer has no option to split out and keep the original Chinese characters, so we have to modify its source to add that feature (the pinyin tokenizer in question: https://github.com/medcl/elasticsearch-analysis-pinyin).
The source can be downloaded from the address above. Roughly, it works like this: it calls an NLP toolkit (https://github.com/NLPchina) to convert the input text into pinyin, so "純生pi酒" is parsed into the array ["chun","sheng",null,null,"jiu"]. (Worth noting: this toolkit parses at the word level rather than character by character, so "廈/門" becomes "xia/men" rather than "sha/men", which really matters in practice. It also offers other utilities, such as simplified/traditional conversion, that are worth exploring.) The Latin letters and digits are then buffered separately for a second matching pass: the plugin runs forward maximum matching and reverse maximum matching (both standard segmentation techniques) and keeps the better result to extract pinyin syllables. The source looks like this:
```java
// Run forward and reverse maximum matching separately and take the shorter result as optimal
List<String> forward = positiveMaxMatch(pinyinText, PINYIN_MAX_LENGTH);
if (forward.size() == 1) { // forward matching produced a single token: no need for the reverse pass
    pinyinList.addAll(forward);
} else {
    // run the reverse pass and keep whichever segmentation is shorter
    List<String> backward = reverseMaxMatch(pinyinText, PINYIN_MAX_LENGTH);
    if (forward.size() <= backward.size()) {
        pinyinList.addAll(forward);
    } else {
        pinyinList.addAll(backward);
    }
}
```
As for the dictionary lookup: since there are not many distinct pinyin syllables, the plugin stores them in a HashSet rather than the trie ("dictionary tree") that IK uses. (Forward and reverse maximum matching are well documented elsewhere, so I won't explain them here.)
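For illustration, forward maximum matching over a HashSet syllable dictionary can be sketched like this. The tiny dictionary below (including single initial letters) is made up for the demo; the real plugin loads the full pinyin syllable set and also runs the reverse pass shown above:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PinyinMaxMatch {
    // Illustrative dictionary only; the real plugin loads the complete pinyin set.
    static final Set<String> DICT = new HashSet<>(Arrays.asList(
            "xue", "hua", "chun", "sheng", "pi", "jiu", "h", "c", "s"));

    // Forward maximum matching: greedily take the longest dictionary entry from the left;
    // when nothing matches, emit the single character and move on.
    static List<String> forwardMaxMatch(String text, int maxLen) {
        List<String> out = new ArrayList<>();
        int i = 0;
        while (i < text.length()) {
            int end = Math.min(i + maxLen, text.length());
            String hit = null;
            for (int j = end; j > i; j--) { // try the longest candidate first
                String cand = text.substring(i, j);
                if (DICT.contains(cand)) {
                    hit = cand;
                    break;
                }
            }
            if (hit == null) {
                hit = text.substring(i, i + 1); // no dictionary match: emit one char
            }
            out.add(hit);
            i += hit.length();
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(forwardMaxMatch("xuehcs", 6)); // [xue, h, c, s]
    }
}
```

This is exactly why a query like "xuehcs" (used in the verification section later) decomposes into one full syllable plus first letters.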
That covers the internals. For our requirement we don't need to touch the Latin/digit matching logic at all; we only need to change the logic around Chinese-to-pinyin conversion.
First, write a utility class (or method) that splits Chinese text:
```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;

public class ChineseUtil {

    /** Start of the CJK Unified Ideographs range */
    public static final char CJK_UNIFIED_IDEOGRAPHS_START = '\u4E00';
    /** End of the CJK Unified Ideographs range */
    public static final char CJK_UNIFIED_IDEOGRAPHS_END = '\u9FA5';

    /**
     * Splits a string into single Chinese characters; non-Chinese positions
     * are kept as null so that indexes stay aligned with the original text.
     */
    public static List<String> segmentChinese(String str) {
        // StringUtil.isBlank comes from the nlp-lang dependency the plugin already uses
        if (StringUtil.isBlank(str)) {
            return Collections.emptyList();
        }
        List<String> lists = str.length() <= 32767 ? new ArrayList<>(str.length()) : new LinkedList<>();
        for (int i = 0; i < str.length(); i++) {
            char c = str.charAt(i);
            if (c >= CJK_UNIFIED_IDEOGRAPHS_START && c <= CJK_UNIFIED_IDEOGRAPHS_END) {
                lists.add(String.valueOf(c));
            } else {
                lists.add(null);
            }
        }
        return lists;
    }
}
```
The CJK range boundaries can be found in the NLP toolkit's source (PinyinUtil) or looked up online. Next, add a Chinese-segmentation option to the plugin's PinyinConfig class:
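The screenshot of the PinyinConfig change is missing from this copy of the article; what it presumably adds is a boolean option wired to the keep_separate_chinese setting used in the index definition later on. A sketch under that assumption (the field name is inferred and may differ from the real patch):

```java
// Sketch inside PinyinConfig: new option, key inferred from the
// "keep_separate_chinese" setting shown in the index definition below
public boolean keepSeparateChinese = false;

// and in the constructor that reads the analyzer settings:
// this.keepSeparateChinese = settings.getAsBoolean("keep_separate_chinese", false);
```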
Defaulting it to false is fine. Then we need to modify two classes, PinyinTokenFilter and PinyinTokenizer. These are the core tokenization classes, corresponding to the filter and tokenizer sides of Elasticsearch analysis.
The changes are the same in both classes, so I'll walk through just one. First, extend the constructor's validation to account for the new option:
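The screenshot of the constructor change is also missing; presumably it extends the existing sanity check that rejects a configuration with every output type disabled. A sketch under that assumption (field and exception names may differ from the real source):

```java
// Sketch: include the new flag in the "at least one output enabled" check
if (!config.keepFirstLetter && !config.keepSeparateFirstLetter
        && !config.keepFullPinyin && !config.keepSeparateChinese) {
    throw new ConfigErrorException(
        "pinyin config error: enable at least one of first_letter, separate_first_letter, full_pinyin or separate_chinese");
}
```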
Then modify that class's readTerm() method, as follows:
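The readTerm() screenshot is missing as well, so the following is only a hypothetical sketch of the idea, not the actual patch: alongside the pinyin list produced by the NLP toolkit, obtain the per-character list from ChineseUtil.segmentChinese(), and whenever the new option is enabled and the current position holds a Chinese character, emit that character as an extra token with the same offsets and position as its pinyin (names like addCandidate/TermItem follow the plugin's style but may differ in the real code):

```java
// Hypothetical sketch: emit the Chinese character itself alongside
// its full pinyin and first letter at the same position.
List<String> pinyinList = Pinyin.pinyin(source);               // e.g. ["chun","sheng",null,null,"jiu"]
List<String> chineseList = ChineseUtil.segmentChinese(source); // e.g. ["純","生",null,null,"酒"]
for (int i = 0; i < source.length(); i++) {
    String chinese = chineseList.get(i);
    if (config.keepSeparateChinese && chinese != null) {
        // same start/end offsets and position as the pinyin tokens for this character
        addCandidate(new TermItem(chinese, i, i + 1, lastIncrementPosition));
    }
    // existing full-pinyin / first-letter handling stays unchanged
}
```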
Once both classes are changed, the source modification is done and the plugin needs to be rebuilt: a `mvn install` produces elasticsearch-analysis-pinyin-5.6.4.jar (make sure you downloaded and modified a release tag, and that its version matches your Elasticsearch version). Also grab nlp-lang-1.7.jar from the source's lib directory, plus plugin-descriptor.properties from resources (it defines the plugin version, entry class and so on; unpack a working plugin from a pinyin release and follow its layout). You end up with this:
Put those files in one folder (name it whatever you like); that folder is the packaged plugin. Drop it into the Elasticsearch plugins directory and the modification is deployed.
What's left is the index settings and mapping. Following the idea from the beginning, simply give search_analyzer and analyzer separate definitions:
```json
PUT /test_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "pinyin_chinese_analyzer": { "tokenizer": "pinyin_tokenizer" },
        "pinyin_analyzer": { "tokenizer": "pinyin_chinese_tokenizer" }
      },
      "tokenizer": {
        "pinyin_chinese_tokenizer": {
          "type": "pinyin",
          "keep_first_letter": false,
          "keep_separate_first_letter": false,
          "keep_full_pinyin": false,
          "keep_original": false,
          "limit_first_letter_length": 50,
          "keep_separate_chinese": true,
          "lowercase": true
        },
        "pinyin_tokenizer": {
          "type": "pinyin",
          "keep_first_letter": false,
          "keep_separate_first_letter": true,
          "keep_full_pinyin": true,
          "keep_original": false,
          "limit_first_letter_length": 50,
          "keep_separate_chinese": true,
          "lowercase": true
        }
      }
    }
  },
  "mappings": {
    "indexType": {
      "properties": {
        "name": {
          "type": "text",
          "search_analyzer": "pinyin_chinese_analyzer",
          "analyzer": "pinyin_analyzer"
        }
      }
    }
  }
}
```
Query with match_phrase (the reasoning is covered in https://www.cnblogs.com/danvid/p/10570334.html); other query types can work too, depending on the business.
Here is a quick verification. The index contains the following documents:
doc[1]:{"name": "雪花純生啤酒200ml"}
doc[2]:{"name": "雪花純爽啤酒200ml"}
doc[3]:{"name": "雪花春生啤酒200ml"}
Query:
```json
GET /test_index/_search
{ "query": { "match_phrase": { "name": "xuehcs" } } }
```
Result:
```json
"hits": [
  { "_index": "test_index", "_type": "indexType", "_id": "2", "_source": { "name": "雪花純爽啤酒200ml" } },
  { "_index": "test_index", "_type": "indexType", "_id": "1", "_source": { "name": "雪花純生啤酒200ml" } },
  { "_index": "test_index", "_type": "indexType", "_id": "3", "_source": { "name": "雪花春生啤酒200ml" } }
]
```
Query:
```json
GET /test_index/_search
{ "query": { "match_phrase": { "name": "xueh純生" } } }
```
Result:
```json
"hits": [
  { "_index": "test_index", "_type": "indexType", "_id": "1", "_source": { "name": "雪花純生啤酒200ml" } }
]
```
Summary: the solution itself is not complicated. Before modifying the source I did consider other approaches, such as a standard tokenizer, or IK combined with a pinyin token filter, but each fell short in some way. With standard, the text is already split into individual characters, so polyphonic words like "廈門" get converted to "shamen" instead of "xiamen"; with IK, match_phrase behavior is hard to control and depends on the dictionary. In the end I chose to modify the source and add the feature. If you know a better approach, please share it.
[Note]: Elasticsearch version 5.6.4
Reposted from: https://www.cnblogs.com/danvid/p/10691547.html