分析文本获取对应的 token
请求
POST /es/_analyze
{"analyzer" : "standard","text" : "50 first dates"}
响应
{"tokens": [{"end_offset": 2,"keyword": false,"position": 1,"start_offset": 0,"token": "50","type": "Numeric"},{"end_offset": 8,"keyword": false,"position": 1,"start_offset": 3,"token": "first","type": "AlphaNumeric"},{"end_offset": 14,"keyword": false,"position": 1,"start_offset": 9,"token": "dates","type": "AlphaNumeric"}]}
使用指定的分析器
{"analyzer" : "standard","text" : "50 first dates"}
使用指定的 tokenizer
{"tokenizer" : "standard","text" : "50 first dates"}
使用指定的 tokenizer 和 filter
{"tokenizer" : "standard","char_filter" : ["html"],"token_filter" : ["camel_case"],"text" : "50 first dates"}
支持的分析器
- standard
- simple
- keyword
- web
- regexp
- stop
- whitespace
