docs(genapi): add a langchain integration and update bolt.diy integration using GenAPIs #4911

LangChain is a popular library for building AI applications. Scaleway's Generative APIs support LangChain for both inference and embeddings.

### Python

#### Function calling

1. Run the following commands to install LangChain and its dependencies:
```bash
pip install 'langchain>=0.3.24'
pip install 'langchain-core>=0.3.55'
pip install 'langchain-openai>=0.3.14'
pip install 'langchain-text-splitters>=0.3.8'
```
2. Create a file named `tools.py` and paste the code below into it to import and create the example tools:

```python
from langchain_core.messages import HumanMessage
from langchain.chat_models import init_chat_model
from langchain_core.tools import tool


@tool
def add(a: int, b: int) -> int:
"""Adds a and b."""
return a + b


@tool
def multiply(a: int, b: int) -> int:
"""Multiplies a and b."""
return a * b


tools = [add, multiply]
```
3. Configure the `init_chat_model` function to use Scaleway's Generative APIs:

```python
llm = init_chat_model("mistral-small-3.1-24b-instruct-2503", model_provider="openai", base_url="https://api.scaleway.ai/v1")
# Bind the tools so the model can emit tool calls for them.
llm_with_tools = llm.bind_tools(tools)
```
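Note that `init_chat_model` with `model_provider="openai"` authenticates the same way the OpenAI client does, so your Scaleway secret key needs to be available as `OPENAI_API_KEY` (or passed explicitly through the `api_key` parameter), for example:

```shell
# Replace the placeholder with your own Scaleway secret key.
export OPENAI_API_KEY="<SCW_SECRET_KEY>"
```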
4. Use the `llm_with_tools` object and the `tools` list to generate a response to your query with the following code:
4. Use the `llm` object and the `tools` list to generate a response to your query with the following code:

```python
query = "What is 3 * 12?"
# You can also try the following query:
# query = "What is 42 + 4?"

messages = [HumanMessage(query)] # We initialize the messages list with the user's query.

ai_msg = llm_with_tools.invoke(messages) # We generate a response to the query.
messages.append(ai_msg) # We append the response to the messages list.

for tool_call in ai_msg.tool_calls:
selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()] # Depending on the tool name, we select the appropriate tool.
tool_msg = selected_tool.invoke(tool_call) # We invoke the selected tool with the tool call.
messages.append(tool_msg) # We append the tool's response to the messages list.

print(llm_with_tools.invoke(messages).content) # We print the content of the final response.
```
5. Run `tools.py`:
```bash
python tools.py
```
The result of the calculation should display in the output:

```bash
The result of 3 * 12 is 36.
```
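The dispatch step in the loop above is just a name-to-function lookup: the model names a tool and supplies its arguments, and the client code calls the matching Python function. A dependency-free sketch of the same pattern (the simulated `tool_calls` payload mirrors the shape of LangChain's `ai_msg.tool_calls`, but the values here are illustrative):

```python
def add(a: int, b: int) -> int:
    """Adds a and b."""
    return a + b


def multiply(a: int, b: int) -> int:
    """Multiplies a and b."""
    return a * b


# Simulated tool calls in the shape LangChain produces: a name plus an args dict.
tool_calls = [
    {"name": "multiply", "args": {"a": 3, "b": 12}},
    {"name": "add", "args": {"a": 42, "b": 4}},
]

# Select the appropriate function by tool name, then call it with the arguments.
registry = {"add": add, "multiply": multiply}
results = [registry[call["name"].lower()](**call["args"]) for call in tool_calls]
print(results)  # [36, 46]
```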

<Message type="tip">
Refer to our dedicated documentation for [implementing Retrieval-Augmented Generation (RAG) with LangChain and Scaleway Generative APIs](/tutorials/how-to-implement-rag-generativeapis/).
</Message>

## LlamaIndex (advanced RAG applications)

LlamaIndex is an open-source framework for building applications based on Large Language Models (LLMs), with a particular focus on optimizing RAG (Retrieval-Augmented Generation) pipelines.
1. Install the required dependencies to use the LlamaIndex framework with Scaleway's Generative APIs:
```bash
pip install llama-index-llms-openai-like

## Bolt.diy (code generation)

Bolt.diy is a software tool that enables users to create web applications from a prompt.

1. Install and launch Bolt.diy locally. Follow the setup instructions provided in the [Bolt.diy GitHub repository](https://github.com/stackblitz-labs/bolt.diy?tab=readme-ov-file#setup).
2. Once Bolt.diy is running, open the interface in your web browser.
5. Click **Local Providers** to add a new external provider configuration.
6. Toggle the switch next to **OpenAILike** to enable it. Then, enter the Scaleway API endpoint: `https://api.scaleway.ai/v1` as the base URL.
7. In Bolt's main menu, select `OpenAILike` and input your **Scaleway Secret Key** as the `OpenAILike API Key`.
8. Select one of the supported models from Scaleway Generative APIs. For best results with Bolt.diy, which requires a large number of output tokens (8,000 by default), start with the `gemma-3-27b-it` model.
9. Enter your prompt in the Bolt.diy interface to see your application being generated.

<Message type="important">
Only models with a maximum output of at least 8,000 tokens are supported. Refer to the [list of Generative APIs models](/generative-apis/reference-content/supported-models/#chat-models) for more information.

</Message>
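Before pointing Bolt.diy at the endpoint, you can check that your secret key and base URL are valid by querying the OpenAI-compatible `/models` route directly. The following is a minimal sketch using only the standard library; the route and response shape follow the OpenAI convention, and the commented-out call and the `SCW_SECRET_KEY` variable name are assumptions:

```python
import json
import urllib.request


def extract_model_ids(payload: dict) -> list[str]:
    """Pull the model IDs out of an OpenAI-style /models response."""
    return [m["id"] for m in payload.get("data", [])]


def list_models(base_url: str, api_key: str) -> list[str]:
    """Query an OpenAI-compatible /models endpoint and return the model IDs."""
    request = urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(request) as response:
        return extract_model_ids(json.load(response))


# Example call (requires a valid key; the environment variable name is an assumption):
# import os
# print(list_models("https://api.scaleway.ai/v1", os.environ["SCW_SECRET_KEY"]))
```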

Alternatively, you can also set up your Scaleway secret key by renaming `.env.example` to `.env`, adding the corresponding environment variable values, and restarting Bolt.diy:
```bash
OPENAI_LIKE_API_BASE_URL=https://api.scaleway.ai/v1
```