Excuse me, how can I support search in other languages, such as Chinese? Thank you. #201
Comments
Hello @sinianzhiren, unfortunately I do not know enough about the Chinese language to guide you here, but other users have successfully used MiniSearch for Chinese (see for example this comment or this issue). Did you encounter a specific problem supporting Chinese or other languages? If so, please describe it, and I would be happy to help if I can.
You should do Chinese word segmentation with a library like nodejieba before indexing documents.
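To illustrate the wiring suggested above: MiniSearch's `processTerm` may return an array of tokens (as the later examples in this thread rely on), so a segmenter can expand one term into several indexed words. The sketch below uses a toy greedy dictionary splitter, `segmentChinese`, purely as a stand-in for a real segmenter such as nodejieba's `cut(text)`; the dictionary and function names are illustrative assumptions, not part of any library.

```javascript
// Toy stand-in for a real Chinese segmenter such as nodejieba's cut(text).
// Greedy longest-match against a tiny dictionary; unknown runs fall through
// as single characters. A real segmenter does this far better.
const DICTIONARY = ["表单", "验证", "添加", "属性"];

function segmentChinese(text) {
  const tokens = [];
  let i = 0;
  while (i < text.length) {
    const match = DICTIONARY.find((word) => text.startsWith(word, i));
    if (match) {
      tokens.push(match);
      i += match.length;
    } else {
      tokens.push(text[i]);
      i += 1;
    }
  }
  return tokens;
}

// A processTerm suitable for MiniSearch: lowercase, then segment.
// Returning an array expands the term into multiple indexed tokens.
const processTerm = (term) => segmentChinese(term.toLowerCase());

console.log(processTerm("表单验证")); // → ["表单", "验证"]
```

With this in place, the document text is indexed word by word rather than as one long run of characters.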
If you don't care about supporting Firefox (at the time, only Firefox Nightly supported `Intl.Segmenter`), you can use it to segment terms:

```javascript
const segmenter =
  Intl.Segmenter && new Intl.Segmenter("zh", { granularity: "word" });

const miniSearch = new MiniSearch({
  fields: ["text"],
  processTerm: (term) => {
    if (!segmenter) return term;
    const tokens = [];
    for (const seg of segmenter.segment(term)) {
      tokens.push(seg.segment);
    }
    return tokens;
  },
});

const documents = [
  // "Add a required attribute to the field, and validate the form on submit"
  { id: 1, text: "为字段添加 required 属性,并在提交时进行表单验证" },
  {
    id: 2,
    text: "By default, the same processing is applied to search queries. In order to apply a different processing to search queries, supply a processTerm search option:",
  },
];

miniSearch.addAll(documents);
console.log("===");
console.log(miniSearch.search("添加"));
```

Here is an online example: https://duoyun-ui.gemjs.org/zh/
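As a quick demonstration of what `Intl.Segmenter` produces (available in Node 16+ and most modern browsers): it splits a string into word-granularity segments. Note that the exact Chinese word boundaries depend on the underlying ICU data and may vary between runtimes, so the only invariant worth relying on is that the segments concatenate back to the input.

```javascript
// Split a string into word-granularity segments with Intl.Segmenter.
const segmenter = new Intl.Segmenter("zh", { granularity: "word" });

function toTokens(text) {
  return [...segmenter.segment(text)].map((s) => s.segment);
}

const input = "为字段添加属性";
const tokens = toTokens(input);
console.log(tokens);
// Whatever the boundaries, the segments always rejoin to the input:
console.log(tokens.join("") === input); // true
```

This is why the fallback branches above (`if (!segmenter) return term;`) matter: in environments without `Intl.Segmenter`, the term is indexed unchanged.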
I also ran into this problem when searching Chinese. For example, when searching for "预置", the content could not be found because the term was segmented into "预" and "置". My project uses VitePress, which uses MiniSearch indirectly, and I finally configured it like this to support search:

```typescript
export default defineConfig({
  // ...
  themeConfig: {
    search: {
      options: {
        miniSearch: {
          options: {
            tokenize: (term) => {
              if (typeof term === 'string') term = term.toLowerCase();
              // @ts-ignore
              const segmenter = Intl.Segmenter && new Intl.Segmenter('zh', { granularity: 'word' });
              if (!segmenter) return [term];
              const tokens = [];
              for (const seg of segmenter.segment(term)) {
                // @ts-ignore
                tokens.push(seg.segment);
              }
              return tokens;
            },
          },
          searchOptions: {
            combineWith: 'AND', // important for searching Chinese
            processTerm: (term) => {
              if (typeof term === 'string') term = term.toLowerCase();
              // @ts-ignore
              const segmenter = Intl.Segmenter && new Intl.Segmenter('zh', { granularity: 'word' });
              if (!segmenter) return term;
              const tokens = [];
              for (const seg of segmenter.segment(term)) {
                // @ts-ignore
                tokens.push(seg.segment);
              }
              return tokens;
            },
          },
        },
      },
    },
  },
  // ...
});
```

Thanks to @mantou132
I found that `Intl.Segmenter` is not reliable enough for search. For example, '懵逼了' in a document is broken down into ['懵', '逼了'], while the search term '懵逼' is broken down into ['懵', '逼'], so the document cannot be found. This damages searchability, so I use bigrams for Chinese search instead:

```typescript
import MiniSearch from 'minisearch'
import { bigram } from 'n-gram'

const SPACE_OR_PUNCTUATION = /[\n\r\p{Z}\p{P}]+/u

// From https://github.com/vinta/pangu.js/blob/master/src/shared/core.js
const CJK_RANGE = '\u2e80-\u2eff\u2f00-\u2fdf\u3040-\u309f\u30a0-\u30fa\u30fc-\u30ff\u3100-\u312f\u3200-\u32ff\u3400-\u4dbf\u4e00-\u9fff\uf900-\ufaff'
const CJK_NCJK = new RegExp(`([${CJK_RANGE}])([^${CJK_RANGE}])`, 'g')
const NCJK_CJK = new RegExp(`([^${CJK_RANGE}])([${CJK_RANGE}])`, 'g')
const CJK_WORD = new RegExp(`^[${CJK_RANGE}]+$`)

function isCJKTerm(term: string) {
  return !!term.match(CJK_WORD)
}

// Add a space between CJK and non-CJK characters:
// '中文Latin中文' => '中文 Latin 中文'
function addSpaceBetweenCJKandNonCJK(text: string) {
  return text.replace(CJK_NCJK, '$1 $2').replace(NCJK_CJK, '$1 $2')
}

const miniSearch = new MiniSearch({
  fields: ['text'],
  tokenize(text) {
    const tokens: string[] = []
    // Add a space between CJK and non-CJK, then split on spaces and punctuation.
    const segments = addSpaceBetweenCJKandNonCJK(text).split(SPACE_OR_PUNCTUATION)
    segments.forEach(segment => {
      if (isCJKTerm(segment)) {
        // Conversion between Traditional Chinese and Simplified Chinese could happen here.
        // A simple character table can be found at:
        // https://github.com/tongwentang/tongwen-dict/blob/main/src/charater/t2s-char.json
        // Each single character is added: '樣例詞組' => ['樣', '例', '詞', '組']
        Array.from(segment).forEach(char => tokens.push(char))
        // Each bigram is added: '樣例詞組' => ['樣例', '例詞', '詞組']
        bigram(segment).forEach(token => tokens.push(token))
      } else {
        // For non-CJK terms, add the segment to the tokens directly.
        tokens.push(segment)
      }
    })
    return tokens
  },
  searchOptions: {
    combineWith: 'AND',
    fuzzy(term) {
      // For CJK terms, disable fuzzy search. Otherwise, use a fuzzy option.
      if (isCJKTerm(term)) {
        return false
      } else {
        return 0.35
      }
    },
    maxFuzzy: 4
  }
})
```
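To see why the bigram approach sidesteps the segmentation mismatch described above: every overlapping 2-character slice of a CJK run is indexed, so any 2-character query slice is guaranteed to be among the indexed tokens. The sketch below reimplements a minimal `bigram` inline (mirroring what the `n-gram` package's `bigram` helper does) so it is self-contained; it is an illustration, not the package's actual code.

```javascript
// Minimal bigram helper: all overlapping 2-character slices of a string.
// Mirrors the behaviour of `bigram` from the `n-gram` package.
function bigram(value) {
  const grams = [];
  for (let i = 0; i + 2 <= value.length; i++) {
    grams.push(value.slice(i, i + 2));
  }
  return grams;
}

// Index side: '懵逼了' is stored as single characters plus bigrams.
const indexed = [...Array.from("懵逼了"), ...bigram("懵逼了")];
console.log(indexed); // ["懵", "逼", "了", "懵逼", "逼了"]

// Query side: the query '懵逼' yields the bigram '懵逼', which is among the
// indexed tokens, so with combineWith: 'AND' the document still matches.
console.log(indexed.includes(bigram("懵逼")[0])); // true
```

This is also why `combineWith: 'AND'` is safe here: every bigram of the query appears in any document containing the query as a substring, so requiring all query tokens to match does not lose results.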