-
Hi @howdymatt. Can you please provide more details, such as which step you are on and what kind of error you are getting? Also, are you using open-source OpenSearch or Amazon OpenSearch Service?
-
The error occurs when simulating a doc ingest using the pipeline created in step 2. Here is my call (I've removed the IDs here), and the response it returns:
I'm trying to use one of the Hugging Face encoders (I'm still resolving some role-access issues, so I haven't been able to use the Bedrock connector yet).
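For context, the simulate call I'm making has roughly this shape (the pipeline name and document fields below are placeholders, not the ones from my actual request):

```json
POST /_ingest/pipeline/my-chunking-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "text": "A long document to be chunked and embedded..."
      }
    }
  ]
}
```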
-
You should use the correct model ID when creating the ingest pipeline.
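For example, with the `text_embedding` processor the model ID is set in the pipeline definition itself, so the pipeline must reference a model that is actually deployed in your cluster. A minimal sketch (pipeline name, field names, and the model ID are placeholders):

```json
PUT /_ingest/pipeline/my-embedding-pipeline
{
  "description": "Generate embeddings at ingest time",
  "processors": [
    {
      "text_embedding": {
        "model_id": "<your-deployed-model-id>",
        "field_map": {
          "text": "text_embedding"
        }
      }
    }
  ]
}
```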
-
I did have actual IDs in my original request -- I've removed them in this discussion. The error occurs when the original IDs are used.
-
The tutorial laid out in docs/tutorials/ml_inference/semantic_search/semantic_search_for_long_document.md only supports 2.19 and later. I am using AWS OpenSearch 2.17 (their latest available version), so I am unable to run this example.
Can someone help me with a 2.17-compatible version that produces the same query responses for chunked documents? Right now I'm using a Hugging Face model but will move to a Bedrock connector, so I'm looking for an example that could run with either model.
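One approach I've been considering (my own assumption, not something from the tutorial) is to combine the `text_chunking` and `text_embedding` ingest processors, which as I understand it were both available before 2.19. The pipeline name, field names, chunking parameters, and model ID below are placeholders:

```json
PUT /_ingest/pipeline/chunk-and-embed-pipeline
{
  "description": "Chunk long documents, then embed each chunk",
  "processors": [
    {
      "text_chunking": {
        "algorithm": {
          "fixed_token_length": {
            "token_limit": 384,
            "overlap_rate": 0.2,
            "tokenizer": "standard"
          }
        },
        "field_map": {
          "text": "text_chunks"
        }
      }
    },
    {
      "text_embedding": {
        "model_id": "<your-deployed-model-id>",
        "field_map": {
          "text_chunks": "text_chunk_embeddings"
        }
      }
    }
  ]
}
```

I don't know whether this reproduces the exact query responses the 2.19 tutorial gets, so any corrections are welcome.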