
Commit 67fb1d9: fix links
Parent: 8d73c60

3 files changed, 3 insertions(+), 3 deletions(-)


src/oss/javascript/integrations/chat/openai.mdx (1 addition, 1 deletion)

@@ -15,7 +15,7 @@ This guide will help you getting started with ChatOpenAI [chat models](/oss/lang
 <Info>
 **OpenAI models hosted on Azure**
 
-Note that certain OpenAI models can also be accessed via the [Microsoft Azure platform](https://azure.microsoft.com/en-us/products/ai-foundry/models/openai/). To use the Azure OpenAI service use the [`AzureChatOpenAI`](/oss/integrations/chat/azure_chat_openai/) integration.
+Note that certain OpenAI models can also be accessed via the [Microsoft Azure platform](https://azure.microsoft.com/en-us/products/ai-foundry/models/openai/). To use the Azure OpenAI service use the [`AzureChatOpenAI`](/oss/integrations/chat/azure_chat_openai) integration.
 
 </Info>
 
 ## Overview

src/oss/langchain/errors/MODEL_RATE_LIMIT.mdx (1 addition, 1 deletion)

@@ -10,7 +10,7 @@ This error occurs when you exceed the maximum number of requests permitted by yo
 
 To resolve this error, you can:
 
-1. **Implement Rate Limiting**: Deploy a rate limiter to regulate the frequency of requests sent to the model. See [rate limiting](/oss/python/langchain/models#rate-limiting) docs.
+1. **Implement Rate Limiting**: Deploy a rate limiter to regulate the frequency of requests sent to the model. See [rate limiting](/oss/langchain/models#rate-limiting) docs.
 2. **Implement Response Caching**: Use model response caching to reduce redundant requests when incoming queries are repetitive.
 3. **Use Multiple Providers**: Distribute requests across multiple providers if your application architecture supports this approach
 4. **Contact Your Provider**: Reach out to your model provider requesting an increase to your rate limits
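As context for item 1 in the hunk above, a client-side rate limiter can be sketched as a simple token bucket. This is an illustrative standalone example, not the LangChain implementation; the class name and parameters are invented for the sketch:

```python
import time
import threading


class TokenBucketRateLimiter:
    """Token-bucket limiter: allows `rate` requests per second on average,
    with bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self) -> None:
        """Block until a token is available, then consume it."""
        while True:
            with self.lock:
                now = time.monotonic()
                # Refill tokens based on elapsed time, capped at capacity.
                self.tokens = min(
                    self.capacity,
                    self.tokens + (now - self.last) * self.rate,
                )
                self.last = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return
                wait = (1 - self.tokens) / self.rate
            time.sleep(wait)


# 10 requests/second, burst of 2: the first 2 acquires pass immediately,
# each later acquire waits ~0.1 s for a token to refill.
limiter = TokenBucketRateLimiter(rate=10.0, capacity=2)
start = time.monotonic()
for _ in range(6):
    limiter.acquire()  # call before each model request
elapsed = time.monotonic() - start
```

In practice you would call `acquire()` immediately before each model invocation; recent versions of `langchain_core` also ship an `InMemoryRateLimiter` that can be passed to a chat model to play the same role.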

src/oss/langchain/errors/OUTPUT_PARSING_FAILURE.mdx (1 addition, 1 deletion)

@@ -5,7 +5,7 @@ title: OUTPUT_PARSING_FAILURE
 An [output parser](https://reference.langchain.com/python/langchain_core/output_parsers/) was unable to handle model output as expected.
 
 <Note>
-Some prebuilt constructs like [legacy LangChain agents](/docs/how_to/agent_executor) and chains may use output parsers internally, so you may see this error even if you're not visibly instantiating and using an output parser.
+Some prebuilt constructs like legacy LangChain agents and chains may use output parsers internally, so you may see this error even if you're not visibly instantiating and using an output parser.
 
 </Note>
 
 ## Troubleshooting
