198 changes: 129 additions & 69 deletions module-0/basics.ipynb
@@ -15,13 +15,13 @@
"source": [
"# LangChain Academy\n",
"\n",
"Welcome to LangChain Academy! \n",
"Welcome to LangChain Academy!\n",
"\n",
"## Context\n",
"\n",
"At LangChain, we aim to make it easy to build LLM applications. One type of LLM application you can build is an agent. There’s a lot of excitement around building agents because they can automate a wide range of tasks that were previously impossible. \n",
"At LangChain, we aim to make it easy to build LLM applications. One type of LLM application you can build is an agent. There’s a lot of excitement around building agents because they can automate a wide range of tasks that were previously impossible.\n",
"\n",
"In practice though, it is incredibly difficult to build systems that reliably execute on these tasks. As we’ve worked with our users to put agents into production, we’ve learned that more control is often necessary. You might need an agent to always call a specific tool first or use different prompts based on its state. \n",
"In practice though, it is incredibly difficult to build systems that reliably execute on these tasks. As we’ve worked with our users to put agents into production, we’ve learned that more control is often necessary. You might need an agent to always call a specific tool first or use different prompts based on its state.\n",
"\n",
"To tackle this problem, we’ve built [LangGraph](https://langchain-ai.github.io/langgraph/) — a framework for building agent and multi-agent applications. Separate from the LangChain package, LangGraph’s core design philosophy is to help developers add better precision and control into agent workflows, suitable for the complexity of real-world systems.\n",
"\n",
@@ -42,21 +42,29 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "0f9a52c8",
"metadata": {},
"outputs": [],
"metadata": {
"ExecuteTime": {
"end_time": "2025-08-26T13:13:40.614247Z",
"start_time": "2025-08-26T13:13:38.663701Z"
}
},
"source": [
"%%capture --no-stderr\n",
"%pip install --quiet -U langchain_openai langchain_core langchain_community tavily-python"
]
],
"outputs": [],
"execution_count": 1
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c2a15227",
"metadata": {},
"outputs": [],
"metadata": {
"ExecuteTime": {
"end_time": "2025-08-26T13:14:00.376166Z",
"start_time": "2025-08-26T13:13:50.269848Z"
}
},
"source": [
"import os, getpass\n",
"\n",
@@ -65,34 +73,41 @@
" os.environ[var] = getpass.getpass(f\"{var}: \")\n",
"\n",
"_set_env(\"OPENAI_API_KEY\")"
]
],
"outputs": [],
"execution_count": 2
},
{
"cell_type": "markdown",
"id": "a326f35b",
"metadata": {},
"source": [
"[Here](https://python.langchain.com/v0.2/docs/how_to/#chat-models) is a useful how-to for all the things that you can do with chat models, but we'll show a few highlights below. If you've run `pip install -r requirements.txt` as noted in the README, then you've installed the `langchain-openai` package. With this, we can instantiate our `ChatOpenAI` model object. If you are signing up for the API for the first time, you should receive [free credits](https://community.openai.com/t/understanding-api-limits-and-free-tier/498517) that can be applied to any of the models. You can see pricing for various models [here](https://openai.com/api/pricing/). The notebooks will default to `gpt-4o` because it's a good balance of quality, price, and speed [see more here](https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4-gpt-4-turbo-gpt-4o-and-gpt-4o-mini), but you can also opt for the lower priced `gpt-3.5` series models. \n",
"[Here](https://python.langchain.com/v0.2/docs/how_to/#chat-models) is a useful how-to for all the things that you can do with chat models, but we'll show a few highlights below. If you've run `pip install -r requirements.txt` as noted in the README, then you've installed the `langchain-openai` package. With this, we can instantiate our `ChatOpenAI` model object. If you are signing up for the API for the first time, you should receive [free credits](https://community.openai.com/t/understanding-api-limits-and-free-tier/498517) that can be applied to any of the models. You can see pricing for various models [here](https://openai.com/api/pricing/). The notebooks will default to `gpt-4o` because it's a good balance of quality, price, and speed [see more here](https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4-gpt-4-turbo-gpt-4o-and-gpt-4o-mini), but you can also opt for the lower priced `gpt-3.5` series models.\n",
"\n",
"There are [a few standard parameters](https://python.langchain.com/v0.2/docs/concepts/#chat-models) that we can set with chat models. Two of the most common are:\n",
"\n",
"* `model`: the name of the model\n",
"* `temperature`: the sampling temperature\n",
"\n",
"`Temperature` controls the randomness or creativity of the model's output where low temperature (close to 0) is more deterministic and focused outputs. This is good for tasks requiring accuracy or factual responses. High temperature (close to 1) is good for creative tasks or generating varied responses. "
"`Temperature` controls the randomness or creativity of the model's output where low temperature (close to 0) is more deterministic and focused outputs. This is good for tasks requiring accuracy or factual responses. High temperature (close to 1) is good for creative tasks or generating varied responses."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "e19a54d3",
"metadata": {},
"outputs": [],
"metadata": {
"ExecuteTime": {
"end_time": "2025-08-26T13:17:18.363961Z",
"start_time": "2025-08-26T13:17:18.360338Z"
}
},
"source": [
"from langchain_openai import ChatOpenAI\n",
"gpt4o_chat = ChatOpenAI(model=\"gpt-4o\", temperature=0)\n",
"gpt35_chat = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
]
],
"outputs": [],
"execution_count": 9
},
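As a quick illustration of the `temperature` parameter described above, here is a minimal sketch comparing a deterministic and a creative configuration. It assumes `OPENAI_API_KEY` is set as in the setup cell; the variable names and prompt are illustrative, not part of the notebook.

```python
from langchain_openai import ChatOpenAI

# Low temperature: near-deterministic, focused output (good for factual tasks)
factual_chat = ChatOpenAI(model="gpt-4o", temperature=0)

# High temperature: more varied, creative output
creative_chat = ChatOpenAI(model="gpt-4o", temperature=0.9)

prompt = "Suggest a name for a coffee shop run by robots."

# The low-temperature model tends to repeat itself across calls,
# while the high-temperature model typically gives different answers.
print(factual_chat.invoke(prompt).content)
print(creative_chat.invoke(prompt).content)
print(creative_chat.invoke(prompt).content)
```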
{
"cell_type": "markdown",
@@ -109,21 +124,13 @@
},
{
"cell_type": "code",
"execution_count": 3,
"id": "b1280e1b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Hello! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 11, 'total_tokens': 20}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_157b3831f5', 'finish_reason': 'stop', 'logprobs': None}, id='run-d3c4bc85-ef14-49f6-ba7e-91bf455cffee-0', usage_metadata={'input_tokens': 11, 'output_tokens': 9, 'total_tokens': 20})"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
"metadata": {
"ExecuteTime": {
"end_time": "2025-08-26T13:14:39.110579Z",
"start_time": "2025-08-26T13:14:37.771547Z"
}
],
},
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
@@ -133,9 +140,22 @@
"# Message list\n",
"messages = [msg]\n",
"\n",
"# Invoke the model with a list of messages \n",
"# Invoke the model with a list of messages\n",
"gpt4o_chat.invoke(messages)"
]
],
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Hello! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 11, 'total_tokens': 20, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_80956533cb', 'id': 'chatcmpl-C8nnSalisChEFiomRJq7yGX2kPUrn', 'service_tier': 'default', 'finish_reason': 'stop', 'logprobs': None}, id='run--7dde0149-4e27-4dae-acb4-2d419de42b8a-0', usage_metadata={'input_tokens': 11, 'output_tokens': 9, 'total_tokens': 20, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"execution_count": 4
},
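The `AIMessage` returned above carries more than the generated text: the token usage and response metadata visible in the repr are accessible as attributes. A small sketch, reusing the `gpt4o_chat` and `messages` objects from the cells above:

```python
# Capture the result instead of only displaying its repr
result = gpt4o_chat.invoke(messages)

# The generated text
print(result.content)                          # "Hello! How can I assist you today?"

# Token accounting, as seen in usage_metadata above
print(result.usage_metadata)                   # {'input_tokens': 11, 'output_tokens': 9, ...}

# Provider-specific details, e.g. the resolved model version
print(result.response_metadata["model_name"])  # 'gpt-4o-2024-08-06'
```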
{
"cell_type": "markdown",
@@ -147,52 +167,62 @@
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f27c6c9a",
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2025-08-26T13:15:41.937517Z",
"start_time": "2025-08-26T13:15:40.726738Z"
}
},
"source": [
"gpt4o_chat.invoke(\"hello world\")"
],
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Hello! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 9, 'total_tokens': 18}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_157b3831f5', 'finish_reason': 'stop', 'logprobs': None}, id='run-d6f6b682-e29a-44de-b45e-79fad1e405e5-0', usage_metadata={'input_tokens': 9, 'output_tokens': 9, 'total_tokens': 18})"
"AIMessage(content='Hello! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 9, 'total_tokens': 18, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_80956533cb', 'id': 'chatcmpl-C8noTLhdNYZzFdj96XJHSBkr641wp', 'service_tier': 'default', 'finish_reason': 'stop', 'logprobs': None}, id='run--932c9b9b-7398-447e-833c-93cee54d5443-0', usage_metadata={'input_tokens': 9, 'output_tokens': 9, 'total_tokens': 18, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})"
]
},
"execution_count": 4,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"gpt4o_chat.invoke(\"hello world\")"
]
"execution_count": 6
},
{
"cell_type": "code",
"execution_count": 5,
"id": "fdc2f0ca",
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2025-08-26T13:15:47.350587Z",
"start_time": "2025-08-26T13:15:46.538148Z"
}
},
"source": [
"gpt35_chat.invoke(\"hello world\")"
],
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Hello! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 9, 'total_tokens': 18}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-c75d3f0f-2d71-47be-b14c-42b8dd2b4b08-0', usage_metadata={'input_tokens': 9, 'output_tokens': 9, 'total_tokens': 18})"
"AIMessage(content='Hello! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 9, 'total_tokens': 18, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'id': 'chatcmpl-C8noZFLPfSAt5InzJ5zhMIRGMfGyW', 'service_tier': 'default', 'finish_reason': 'stop', 'logprobs': None}, id='run--2b9c9ef9-b23e-4b8b-bdca-f4929b8c7b87-0', usage_metadata={'input_tokens': 9, 'output_tokens': 9, 'total_tokens': 18, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})"
]
},
"execution_count": 5,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"gpt35_chat.invoke(\"hello world\")"
]
"execution_count": 7
},
{
"cell_type": "markdown",
"id": "582c0e5a",
"metadata": {},
"source": [
"The interface is consistent across all chat models and models are typically initialized once at the start up each notebooks. \n",
"The interface is consistent across all chat models and models are typically initialized once at the start up each notebooks.\n",
"\n",
"So, you can easily switch between models without changing the downstream code if you have strong preference for another provider.\n"
]
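For example, swapping in another provider is a one-line change, since `.invoke` works the same way across chat models. A hedged sketch, assuming the `langchain-anthropic` package is installed and `ANTHROPIC_API_KEY` is set (neither is part of this notebook's setup), with an illustrative model name:

```python
from langchain_anthropic import ChatAnthropic

# Only the constructor changes; the interface stays the same
claude_chat = ChatAnthropic(model="claude-3-5-sonnet-20240620", temperature=0)

# Downstream code is unchanged
claude_chat.invoke("hello world")
```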
@@ -209,51 +239,81 @@
},
{
"cell_type": "code",
"execution_count": 10,
"id": "091dff13",
"metadata": {},
"outputs": [],
"metadata": {
"ExecuteTime": {
"end_time": "2025-08-26T13:19:01.850458Z",
"start_time": "2025-08-26T13:18:54.162015Z"
}
},
"source": [
"_set_env(\"TAVILY_API_KEY\")"
]
],
"outputs": [],
"execution_count": 11
},
{
"cell_type": "code",
"execution_count": 6,
"id": "52d69da9",
"metadata": {},
"outputs": [],
"metadata": {
"ExecuteTime": {
"end_time": "2025-08-26T13:21:42.112629Z",
"start_time": "2025-08-26T13:21:40.853660Z"
}
},
"source": [
"from langchain_community.tools.tavily_search import TavilySearchResults\n",
"tavily_search = TavilySearchResults(max_results=3)\n",
"from langchain_tavily import TavilySearch\n",
"tavily_search = TavilySearch(max_results=3)\n",
"search_docs = tavily_search.invoke(\"What is LangGraph?\")"
]
],
"outputs": [],
"execution_count": 13
},
{
"cell_type": "code",
"execution_count": 7,
"id": "d06f87e6",
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2025-08-26T13:21:44.261084Z",
"start_time": "2025-08-26T13:21:44.257122Z"
}
},
"source": [
"search_docs"
],
"outputs": [
{
"data": {
"text/plain": [
"[{'url': 'https://www.datacamp.com/tutorial/langgraph-tutorial',\n",
" 'content': 'LangGraph is a library within the LangChain ecosystem designed to tackle these challenges head-on. LangGraph provides a framework for defining, coordinating, and executing multiple LLM agents (or chains) in a structured manner.'},\n",
" {'url': 'https://langchain-ai.github.io/langgraph/',\n",
" 'content': 'Overview LangGraph is a library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows. Compared to other LLM frameworks, it offers these core benefits: cycles, controllability, and persistence. LangGraph allows you to define flows that involve cycles, essential for most agentic architectures, differentiating it from DAG-based solutions. As a ...'},\n",
" {'url': 'https://www.youtube.com/watch?v=nmDFSVRnr4Q',\n",
" 'content': 'LangGraph is an extension of LangChain enabling Multi-Agent conversation and cyclic chains. This video explains the basics of LangGraph and codesLangChain in...'}]"
"{'query': 'What is LangGraph?',\n",
" 'follow_up_questions': None,\n",
" 'answer': None,\n",
" 'images': [],\n",
" 'results': [{'url': 'https://www.datacamp.com/tutorial/langgraph-tutorial',\n",
" 'title': 'LangGraph Tutorial: What Is LangGraph and How to Use It?',\n",
" 'content': 'LangGraph is a library within the LangChain ecosystem that provides a framework for defining, coordinating, and executing multiple LLM agents (or chains) in a structured and efficient manner. By managing the flow of data and the sequence of operations, LangGraph allows developers to focus on the high-level logic of their applications rather than the intricacies of agent coordination. Whether you need a chatbot that can handle various types of user requests or a multi-agent system that performs complex tasks, LangGraph provides the tools to build exactly what you need. LangGraph significantly simplifies the development of complex LLM applications by providing a structured framework for managing state and coordinating agent interactions.',\n",
" 'score': 0.9581988,\n",
" 'raw_content': None},\n",
" {'url': 'https://www.ibm.com/think/topics/langgraph',\n",
" 'title': 'What is LangGraph? - IBM',\n",
" 'content': 'LangGraph, created by LangChain, is an open source AI agent framework designed to build, deploy and manage complex generative AI agent workflows. At its core, LangGraph uses the power of graph-based architectures to model and manage the intricate relationships between various components of an AI agent workflow. LangGraph illuminates the processes within an AI workflow, allowing full transparency of the agent’s state. By combining these technologies with a set of APIs and tools, LangGraph provides users with a versatile platform for developing AI solutions and workflows including chatbots, state graphs and other agent-based systems. **Nodes**: In LangGraph, nodes represent individual components or agents within an AI workflow. LangGraph uses enhanced decision-making by modeling complex relationships between nodes, which means it uses AI agents to analyze their past actions and feedback.',\n",
" 'score': 0.95250803,\n",
" 'raw_content': None},\n",
" {'url': 'https://cobusgreyling.medium.com/langgraph-from-langchain-explained-in-simple-terms-f7cd0c12cdbf',\n",
" 'title': 'LangGraph From LangChain Explained In Simple Terms',\n",
" 'content': \"LangGraph is a method for creating state machines for conversational flow by defining them as graphs & it's easier to understand than you might think.\",\n",
" 'score': 0.9443876,\n",
" 'raw_content': None}],\n",
" 'response_time': 0.77,\n",
" 'request_id': 'b5c91e11-b175-4a38-91ca-0ab3beb52163'}"
]
},
"execution_count": 7,
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"search_docs"
]
"execution_count": 14
},
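Note that the new `TavilySearch` tool returns a dictionary rather than the bare list of results the old `TavilySearchResults` produced, so downstream code may need to index into the `results` key. A small sketch based on the output shown above:

```python
# TavilySearch returns a dict; the individual hits live under "results"
for doc in search_docs["results"]:
    print(doc["url"])
    print(doc["title"])
    print(doc["content"][:100], "...")      # first 100 characters of the snippet
    print(f"relevance score: {doc['score']}\n")
```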
{
"cell_type": "code",