From 54c9711a2ce0cc3a4f7bda1ac01df252b0634ee3 Mon Sep 17 00:00:00 2001
From: Anton Rubin
Date: Tue, 3 Sep 2024 12:57:03 +0100
Subject: [PATCH 1/8] adding limit token filter docs

Signed-off-by: Anton Rubin
---
 _analyzers/token-filters/limit.md | 88 ++++++++++++++++++++++++++++++
 1 file changed, 88 insertions(+)
 create mode 100644 _analyzers/token-filters/limit.md

diff --git a/_analyzers/token-filters/limit.md b/_analyzers/token-filters/limit.md
new file mode 100644
index 00000000000..105fa1b4076
--- /dev/null
+++ b/_analyzers/token-filters/limit.md
@@ -0,0 +1,88 @@
+---
+layout: default
+title: Limit
+parent: Token filters
+nav_order: 250
+---
+
+# Limit token filter
+
+The `limit` token filter in OpenSearch is used to limit the number of tokens that are passed through the analysis chain.
+
+## Parameters
+
+The `limit` token filter in OpenSearch can be configured with the following parameters:
+
+- `max_token_count`: Maximum number of tokens that will be generated. Default is `1` (Integer, _Optional_)
+- `consume_all_tokens`: Use all token, even if result exceeds `max_token_count`. Default is `false` (Boolean, _Optional_)
+
+
+## Example
+
+The following example request creates a new index named `my_index` and configures an analyzer with `limit` filter:
+
+```json
+PUT my_index
+{
+  "settings": {
+    "analysis": {
+      "analyzer": {
+        "three_token_limit": {
+          "tokenizer": "standard",
+          "filter": [ "custom_token_limit" ]
+        }
+      },
+      "filter": {
+        "custom_token_limit": {
+          "type": "limit",
+          "max_token_count": 3
+        }
+      }
+    }
+  }
+}
+```
+{% include copy-curl.html %}
+
+## Generated tokens
+
+Use the following request to examine the tokens generated using the created analyzer:
+
+```json
+GET /my_index/_analyze
+{
+  "analyzer": "three_token_limit",
+  "text": "OpenSearch is a powerful and flexible search engine."
+}
+```
+{% include copy-curl.html %}
+
+The response contains the generated tokens:
+
+```json
+{
+  "tokens": [
+    {
+      "token": "OpenSearch",
+      "start_offset": 0,
+      "end_offset": 10,
+      "type": "<ALPHANUM>",
+      "position": 0
+    },
+    {
+      "token": "is",
+      "start_offset": 11,
+      "end_offset": 13,
+      "type": "<ALPHANUM>",
+      "position": 1
+    },
+    {
+      "token": "a",
+      "start_offset": 14,
+      "end_offset": 15,
+      "type": "<ALPHANUM>",
+      "position": 2
+    }
+  ]
+}
+```

From 4e4e84dd10ce83361e983291b42ae9e33b08b840 Mon Sep 17 00:00:00 2001
From: Anton Rubin
Date: Tue, 3 Sep 2024 14:36:48 +0100
Subject: [PATCH 2/8] removing parameter which is not working #8153

Signed-off-by: Anton Rubin
---
 _analyzers/token-filters/index.md | 2 +-
 _analyzers/token-filters/limit.md | 5 ++---
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/_analyzers/token-filters/index.md b/_analyzers/token-filters/index.md
index f4e9c434e74..93165208251 100644
--- a/_analyzers/token-filters/index.md
+++ b/_analyzers/token-filters/index.md
@@ -37,7 +37,7 @@ Token filter | Underlying Lucene token filter| Description
`kstem` | [KStemFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/en/KStemFilter.html) | Provides kstem-based stemming for the English language. Combines algorithmic stemming with a built-in dictionary.
`kuromoji_completion` | [JapaneseCompletionFilter](https://lucene.apache.org/core/9_10_0/analysis/kuromoji/org/apache/lucene/analysis/ja/JapaneseCompletionFilter.html) | Adds Japanese romanized terms to the token stream (in addition to the original tokens). Usually used to support autocomplete on Japanese search terms.
Note that the filter has a `mode` parameter, which should be set to `index` when used in an index analyzer and `query` when used in a search analyzer. Requires the `analysis-kuromoji` plugin. For information about installing the plugin, see [Additional plugins]({{site.url}}{{site.baseurl}}/install-and-configure/plugins/#additional-plugins).
`length` | [LengthFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/LengthFilter.html) | Removes tokens whose lengths are shorter or longer than the length range specified by `min` and `max`.
-`limit` | [LimitTokenCountFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/LimitTokenCountFilter.html) | Limits the number of output tokens. A common use case is to limit the size of document field values based on token count.
+[`limit`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/limit/) | [LimitTokenCountFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/LimitTokenCountFilter.html) | Limits the number of output tokens. A common use case is to limit the size of document field values based on token count.
`lowercase` | [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) | Converts tokens to lowercase. The default [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) is for the English language. You can set the `language` parameter to `greek` (uses [GreekLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/el/GreekLowerCaseFilter.html)), `irish` (uses [IrishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/ga/IrishLowerCaseFilter.html)), or `turkish` (uses [TurkishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/tr/TurkishLowerCaseFilter.html)).
`min_hash` | [MinHashFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/minhash/MinHashFilter.html) | Uses the [MinHash technique](https://en.wikipedia.org/wiki/MinHash) to estimate document similarity. Performs the following operations on a token stream sequentially:
1. Hashes each token in the stream.
2. Assigns the hashes to buckets, keeping only the smallest hashes of each bucket.
3. Outputs the smallest hash from each bucket as a token stream. `multiplexer` | N/A | Emits multiple tokens at the same position. Runs each token through each of the specified filter lists separately and outputs the results as separate tokens. diff --git a/_analyzers/token-filters/limit.md b/_analyzers/token-filters/limit.md index 105fa1b4076..23bcebc8f8a 100644 --- a/_analyzers/token-filters/limit.md +++ b/_analyzers/token-filters/limit.md @@ -11,10 +11,9 @@ The `limit` token filter in OpenSearch is used to limit the number of tokens tha ## Parameters -The `limit` token filter in OpenSearch can be configured with the following parameters: +The `limit` token filter in OpenSearch can be configured with the following parameter: -- `max_token_count`: Maximum number of tokens that will be generated. Default is `1` (Integer, _Optional_) -- `consume_all_tokens`: Use all token, even if result exceeds `max_token_count`. Default is `false` (Boolean, _Optional_) +- `max_token_count`: Maximum number of tokens to be generated. Default is `1` (Integer, _Optional_) ## Example From 2b4899fd166a02291b04e7030c5e4ea53994076c Mon Sep 17 00:00:00 2001 From: Anton Rubin Date: Thu, 5 Sep 2024 13:05:29 +0100 Subject: [PATCH 3/8] adding consume_all_tokens Signed-off-by: Anton Rubin --- _analyzers/token-filters/limit.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_analyzers/token-filters/limit.md b/_analyzers/token-filters/limit.md index 23bcebc8f8a..727ae762661 100644 --- a/_analyzers/token-filters/limit.md +++ b/_analyzers/token-filters/limit.md @@ -14,7 +14,7 @@ The `limit` token filter in OpenSearch is used to limit the number of tokens tha The `limit` token filter in OpenSearch can be configured with the following parameter: - `max_token_count`: Maximum number of tokens to be generated. Default is `1` (Integer, _Optional_) - +- `consume_all_tokens`: (Expect level setting) Use all tokens from tokenizer, even if result exceeds `max_token_count`. The output will still only contain the number of tokens specified in `max_token_count`, however all of the token from tokenizer will be processed. 
Default is `false` (Boolean, _Optional_) ## Example From 9d4629314e21a8537041299ca2e242c51d4a8c6d Mon Sep 17 00:00:00 2001 From: AntonEliatra Date: Thu, 12 Sep 2024 11:04:37 +0100 Subject: [PATCH 4/8] Update limit.md Signed-off-by: AntonEliatra --- _analyzers/token-filters/limit.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_analyzers/token-filters/limit.md b/_analyzers/token-filters/limit.md index 727ae762661..845b791d3a0 100644 --- a/_analyzers/token-filters/limit.md +++ b/_analyzers/token-filters/limit.md @@ -45,7 +45,7 @@ PUT my_index ## Generated tokens -Use the following request to examine the tokens generated using the created analyzer: +Use the following request to examine the tokens generated using the analyzer: ```json GET /my_index/_analyze From 22707435b83966687bbb3f521fb17bf70463e393 Mon Sep 17 00:00:00 2001 From: Anton Rubin Date: Wed, 16 Oct 2024 18:45:25 +0100 Subject: [PATCH 5/8] updating parameter table Signed-off-by: Anton Rubin --- _analyzers/token-filters/limit.md | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/_analyzers/token-filters/limit.md b/_analyzers/token-filters/limit.md index 845b791d3a0..0bc17d60890 100644 --- a/_analyzers/token-filters/limit.md +++ b/_analyzers/token-filters/limit.md @@ -11,10 +11,12 @@ The `limit` token filter in OpenSearch is used to limit the number of tokens tha ## Parameters -The `limit` token filter in OpenSearch can be configured with the following parameter: +The `limit` token filter in OpenSearch can be configured with the following parameter. -- `max_token_count`: Maximum number of tokens to be generated. Default is `1` (Integer, _Optional_) -- `consume_all_tokens`: (Expect level setting) Use all tokens from tokenizer, even if result exceeds `max_token_count`. The output will still only contain the number of tokens specified in `max_token_count`, however all of the token from tokenizer will be processed. Default is `false` (Boolean, _Optional_) +Parameter | Required/Optional | Data type | Description +:--- | :--- | :--- | :--- +`max_token_count` | Optional | Integer | Maximum number of tokens to be generated. Default is `1`. +`consume_all_tokens` | Optional | Boolean | (Expect level setting) Use all tokens from tokenizer, even if result exceeds `max_token_count`. The output will still only contain the number of tokens specified in `max_token_count`, however all of the token from tokenizer will be processed. Default is `false`.` ## Example From 44f785dafe234e896cd0ba26f185f4afcd5ec0be Mon Sep 17 00:00:00 2001 From: Fanit Kolchina Date: Fri, 15 Nov 2024 14:07:13 -0500 Subject: [PATCH 6/8] Doc review Signed-off-by: Fanit Kolchina --- _analyzers/token-filters/limit.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/_analyzers/token-filters/limit.md b/_analyzers/token-filters/limit.md index 0bc17d60890..ac4597b47bd 100644 --- a/_analyzers/token-filters/limit.md +++ b/_analyzers/token-filters/limit.md @@ -7,20 +7,20 @@ nav_order: 250 # Limit token filter -The `limit` token filter in OpenSearch is used to limit the number of tokens that are passed through the analysis chain. +The `limit` token filter is used to limit the number of tokens that are passed through the analysis chain. ## Parameters -The `limit` token filter in OpenSearch can be configured with the following parameter. +The `limit` token filter can be configured with the following parameters. 
Parameter | Required/Optional | Data type | Description :--- | :--- | :--- | :--- -`max_token_count` | Optional | Integer | Maximum number of tokens to be generated. Default is `1`. -`consume_all_tokens` | Optional | Boolean | (Expect level setting) Use all tokens from tokenizer, even if result exceeds `max_token_count`. The output will still only contain the number of tokens specified in `max_token_count`, however all of the token from tokenizer will be processed. Default is `false`.` +`max_token_count` | Optional | Integer | The maximum number of tokens to be generated. Default is `1`. +`consume_all_tokens` | Optional | Boolean | (Expect level setting) Use all tokens from tokenizer, even if result exceeds `max_token_count`. When this parameter is set, the output still only contains the number of tokens specified in `max_token_count`. However, all tokens generated by the tokenizer are processed. Default is `false`. ## Example -The following example request creates a new index named `my_index` and configures an analyzer with `limit` filter: +The following example request creates a new index named `my_index` and configures an analyzer with a `limit` filter: ```json PUT my_index From 7105763650a302c2866bed7674cc0b6b55e47e11 Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Mon, 2 Dec 2024 11:00:33 -0500 Subject: [PATCH 7/8] Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- _analyzers/token-filters/limit.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/_analyzers/token-filters/limit.md b/_analyzers/token-filters/limit.md index ac4597b47bd..a849f5f06b0 100644 --- a/_analyzers/token-filters/limit.md +++ b/_analyzers/token-filters/limit.md @@ -7,7 +7,7 @@ nav_order: 250 # Limit token filter -The `limit` token filter is used to limit the number of tokens that are passed through the analysis chain. +The `limit` token filter is used to limit the number of tokens passed through the analysis chain. ## Parameters @@ -16,7 +16,7 @@ The `limit` token filter can be configured with the following parameters. Parameter | Required/Optional | Data type | Description :--- | :--- | :--- | :--- `max_token_count` | Optional | Integer | The maximum number of tokens to be generated. Default is `1`. -`consume_all_tokens` | Optional | Boolean | (Expect level setting) Use all tokens from tokenizer, even if result exceeds `max_token_count`. When this parameter is set, the output still only contains the number of tokens specified in `max_token_count`. However, all tokens generated by the tokenizer are processed. Default is `false`. +`consume_all_tokens` | Optional | Boolean | (Expert-level setting) Uses all tokens from the tokenizer, even if the result exceeds `max_token_count`. When this parameter is set, the output still only contains the number of tokens specified by `max_token_count`. However, all tokens generated by the tokenizer are processed. Default is `false`. 
## Example From 2e718a0bad97f781552781aab4e68f440a879574 Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Mon, 2 Dec 2024 11:00:43 -0500 Subject: [PATCH 8/8] Update _analyzers/token-filters/index.md Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- _analyzers/token-filters/index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_analyzers/token-filters/index.md b/_analyzers/token-filters/index.md index 93165208251..4569897be29 100644 --- a/_analyzers/token-filters/index.md +++ b/_analyzers/token-filters/index.md @@ -37,7 +37,7 @@ Token filter | Underlying Lucene token filter| Description `kstem` | [KStemFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/en/KStemFilter.html) | Provides kstem-based stemming for the English language. Combines algorithmic stemming with a built-in dictionary. `kuromoji_completion` | [JapaneseCompletionFilter](https://lucene.apache.org/core/9_10_0/analysis/kuromoji/org/apache/lucene/analysis/ja/JapaneseCompletionFilter.html) | Adds Japanese romanized terms to the token stream (in addition to the original tokens). Usually used to support autocomplete on Japanese search terms. Note that the filter has a `mode` parameter, which should be set to `index` when used in an index analyzer and `query` when used in a search analyzer. Requires the `analysis-kuromoji` plugin. For information about installing the plugin, see [Additional plugins]({{site.url}}{{site.baseurl}}/install-and-configure/plugins/#additional-plugins). `length` | [LengthFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/LengthFilter.html) | Removes tokens whose lengths are shorter or longer than the length range specified by `min` and `max`. -[`limit`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/limit/) | [LimitTokenCountFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/LimitTokenCountFilter.html) | Limits the number of output tokens. A common use case is to limit the size of document field values based on token count. +[`limit`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/limit/) | [LimitTokenCountFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/LimitTokenCountFilter.html) | Limits the number of output tokens. For example, document field value sizes can be limited based on the token count. `lowercase` | [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) | Converts tokens to lowercase. The default [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) is for the English language. You can set the `language` parameter to `greek` (uses [GreekLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/el/GreekLowerCaseFilter.html)), `irish` (uses [IrishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/ga/IrishLowerCaseFilter.html)), or `turkish` (uses [TurkishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/tr/TurkishLowerCaseFilter.html)). 
`min_hash` | [MinHashFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/minhash/MinHashFilter.html) | Uses the [MinHash technique](https://en.wikipedia.org/wiki/MinHash) to estimate document similarity. Performs the following operations on a token stream sequentially:
1. Hashes each token in the stream.
2. Assigns the hashes to buckets, keeping only the smallest hashes of each bucket.
3. Outputs the smallest hash from each bucket as a token stream.
`multiplexer` | N/A | Emits multiple tokens at the same position. Runs each token through each of the specified filter lists separately and outputs the results as separate tokens.
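For illustration, the two parameters documented in these patches can be combined in a single filter definition. The following request is only a sketch — the index, analyzer, and filter names are hypothetical and are not part of this PR — showing a `limit` filter that processes every token produced by the tokenizer (`consume_all_tokens: true`) while still emitting at most two tokens (`max_token_count: 2`):

```json
PUT /hypothetical_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "two_token_limit": {
          "tokenizer": "standard",
          "filter": [ "strict_token_limit" ]
        }
      },
      "filter": {
        "strict_token_limit": {
          "type": "limit",
          "max_token_count": 2,
          "consume_all_tokens": true
        }
      }
    }
  }
}
```

Analyzing text with `two_token_limit` through the `_analyze` API, as in the examples above, would then return only the first two tokens of the input, even though the remaining tokens are still consumed from the tokenizer.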