This meta issue tracks the new pages we need for each of the text analysis components that are currently missing documentation. For context, a minimal example request is sketched after each list below.
Note: The language analyzer is already documented, as is the concepts page: Optimizing text for searches with text analyzers.
Analyzers (10)
- Standard analyzer
- Simple
- Whitespace
- Stop
- Keyword
- Pattern
- Fingerprint
- Custom
- Stemming
- Token graphs
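Each analyzer page should end with a runnable example. A minimal sketch of exercising a few built-in analyzers through the `_analyze` API, assuming a local cluster at `http://localhost:9200` with security disabled:

```python
import requests

# Minimal sketch: run several built-in analyzers through the _analyze API.
# Assumes a local OpenSearch cluster at http://localhost:9200 with security disabled.
BASE_URL = "http://localhost:9200"

for analyzer in ["standard", "simple", "whitespace", "stop", "keyword", "fingerprint"]:
    response = requests.post(
        f"{BASE_URL}/_analyze",
        json={"analyzer": analyzer, "text": "The QUICK brown fox, jumping over 2 lazy dogs!"},
    )
    response.raise_for_status()
    tokens = [t["token"] for t in response.json()["tokens"]]
    print(f"{analyzer}: {tokens}")
```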
Language analyzers (24)
- A page for each language, 24 in total. See the Language analyzer section currently on the concepts page.
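Each language page could reuse the same `_analyze` shape with the language analyzer's name. A sketch, with `english` and `french` chosen only as example names and the same assumed local cluster:

```python
import requests

# Sketch: language analyzers add language-specific stemming and stop word removal.
# "english" and "french" are example names; there is one built-in analyzer per supported language.
for analyzer in ["english", "french"]:
    response = requests.post(
        "http://localhost:9200/_analyze",
        json={"analyzer": analyzer, "text": "The foxes were jumping over the fences"},
    )
    print(analyzer, [t["token"] for t in response.json()["tokens"]])
```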
Tokenizers (15 + index page)
- Index page
- Character group
- Classic
- Edge n-gram
- Keyword
- Letter
- Lowercase
- N-gram
- Path hierarchy
- Pattern
- Simple pattern
- Simple pattern split
- Standard
- Thai
- UAX URL email
- Whitespace
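For the tokenizer pages, a minimal sketch of calling `_analyze` with a tokenizer on its own, including an inline tokenizer definition. The `edge_ngram` parameters are illustrative only, and a local cluster is assumed again:

```python
import requests

# Sketch: a tokenizer can be tested by itself via _analyze, without a full analyzer.
# The edge_ngram settings below are illustrative, not a recommendation.
response = requests.post(
    "http://localhost:9200/_analyze",
    json={
        "tokenizer": {"type": "edge_ngram", "min_gram": 2, "max_gram": 4},
        "text": "search",
    },
)
print([t["token"] for t in response.json()["tokens"]])  # e.g. ["se", "sea", "sear"]
```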
Token filters (48)
- Page for each one
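With 48 filter pages to write, a shared example shape would help. A sketch of chaining token filters in an `_analyze` request; the filter names here are examples only:

```python
import requests

# Sketch: token filters run after the tokenizer, and their order in the list matters.
response = requests.post(
    "http://localhost:9200/_analyze",
    json={
        "tokenizer": "standard",
        "filter": ["lowercase", "stop", {"type": "stemmer", "language": "english"}],
        "text": "The Quick Foxes are Jumping",
    },
)
print([t["token"] for t in response.json()["tokens"]])
```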
Character filters (3 + index page)
- Index page
- HTML strip
- Mapping
- Pattern replace
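A sketch of the `html_strip` character filter that the index page could adapt for all three filters, again against an assumed local cluster:

```python
import requests

# Sketch: character filters run before the tokenizer; html_strip removes HTML markup.
response = requests.post(
    "http://localhost:9200/_analyze",
    json={
        "tokenizer": "standard",
        "char_filter": ["html_strip"],
        "text": "<p>Actually</p> <b>useful</b> text",
    },
)
print([t["token"] for t in response.json()["tokens"]])  # tags are stripped before tokenizing
```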
Normalizers
- Normalizers
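A sketch of what the normalizers page could show: defining a custom normalizer in index settings and applying it to a keyword field. The index name, normalizer name, and field name are placeholders:

```python
import requests

# Sketch: normalizers apply character and token filters to keyword fields,
# but never a tokenizer, so the value stays a single token.
# "my-index", "lowercase_ascii", and "brand" are placeholder names.
requests.put(
    "http://localhost:9200/my-index",
    json={
        "settings": {
            "analysis": {
                "normalizer": {
                    "lowercase_ascii": {
                        "type": "custom",
                        "char_filter": [],
                        "filter": ["lowercase", "asciifolding"],
                    }
                }
            }
        },
        "mappings": {
            "properties": {
                "brand": {"type": "keyword", "normalizer": "lowercase_ascii"}
            }
        },
    },
).raise_for_status()
```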