From 7080eff9f20640c9a294c6078697f55563f5e8d7 Mon Sep 17 00:00:00 2001 From: Abhigyan Date: Tue, 25 Mar 2025 22:16:37 +0530 Subject: [PATCH 1/4] =?UTF-8?q?Gemini=20Chatbot=20Tutorial=20=E2=80=93=20I?= =?UTF-8?q?nteractive=20Chat=20with=20Streaming=20&=20History?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- quickstarts/Gemini_Chatbot_Tutorial.ipynb | 216 ++++++++++++++++++++++ 1 file changed, 216 insertions(+) create mode 100644 quickstarts/Gemini_Chatbot_Tutorial.ipynb diff --git a/quickstarts/Gemini_Chatbot_Tutorial.ipynb b/quickstarts/Gemini_Chatbot_Tutorial.ipynb new file mode 100644 index 000000000..abab47cce --- /dev/null +++ b/quickstarts/Gemini_Chatbot_Tutorial.ipynb @@ -0,0 +1,216 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "##### Copyright 2025 Google LLC." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "b7ce279c7b07" + }, + "outputs": [], + "source": [ + "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", + "# you may not use this file except in compliance with the License.\n", + "# You may obtain a copy of the License at\n", + "#\n", + "# https://www.apache.org/licenses/LICENSE-2.0\n", + "#\n", + "# Unless required by applicable law or agreed to in writing, software\n", + "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", + "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", + "# See the License for the specific language governing permissions and\n", + "# limitations under the License." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "153886bac3ca" + }, + "source": [ + "# Gemini API: Chatbot Quickstart\n", + "\n", + "---\n", + "\n", + "This notebook provides an example of how to use the **Gemini API to build a chatbot**. 
\n", + "You will create a conversational AI that generates responses based on user input.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "3f7f47043869" + }, + "outputs": [], + "source": [ + "# Install dependencies\n", + "%pip install -U -q \"google-generativeai>=0.7.2\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "53059828e644" + }, + "outputs": [], + "source": [ + "# Import necessary libraries\n", + "import google.generativeai as genai" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "a0656906d23e" + }, + "source": [ + "## Set up your API key\n", + "\n", + "To run the following cell, store your API key in a Colab Secret named `GOOGLE_API_KEY`. \n", + "If you don't already have an API key, or you're unsure how to create a Colab Secret, see the \n", + "[Authentication](https://github.com/google-gemini/cookbook/blob/a0b506a8f65141cec4eb9143db760c735f652a59/quickstarts/Authentication.ipynb) quickstart for an example.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "235ce8c11240" + }, + "outputs": [], + "source": [ + "# # Configure your API key using Colab Secrets\n", + "from google.colab import userdata\n", + "\n", + "GOOGLE_API_KEY = userdata.get(\"GOOGLE_API_KEY\")\n", + "genai.configure(api_key=GOOGLE_API_KEY)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "212ba7018c0e" + }, + "source": [ + "## **Interact with the Gemini Chatbot** \n", + "\n", + "The function below sends a user prompt to the **Gemini API**, maintains conversation history, and streams responses in real-time. \n", + "You can modify the model version using the `MODEL_ID` variable." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "ccb55b1126b1" + }, + "outputs": [], + "source": [ + "# Select the Gemini model\n", + "MODEL_ID = \"gemini-2.0-flash\" # @param [\"gemini-2.0-flash-lite\", \"gemini-2.0-flash\", \"gemini-2.0-pro-exp-02-05\"] {\"allow-input\": true, \"isTemplate\": true}\n", + "\n", + "# Previous conversation history\n", + "conversation_history = [\n", + " {\"role\": \"user\", \"parts\": [{\"text\": \"Hello!\"}]},\n", + " {\"role\": \"model\", \"parts\": [{\"text\": \"Hi there! How can I assist you today?\"}]},\n", + " {\"role\": \"user\", \"parts\": [{\"text\": \"I have two dogs in my house. How many paws are in my house?\"}]},\n", + " {\"role\": \"model\", \"parts\": [{\"text\": \"Each dog has 4 paws, so if you have 2 dogs, that makes 8 paws in your house.\"}]}\n", + "]\n", + "\n", + "# Function to interact with Gemini Chatbot\n", + "def chat_with_gemini(prompt: str) -> str:\n", + " \"\"\"\n", + " Generates a response from the Gemini chatbot for a given prompt.\n", + " - Maintains multi-turn conversation history\n", + " - Streams AI-generated responses in real-time\n", + " - Uses proper message formatting\n", + " \"\"\"\n", + " \n", + " # Append user input to conversation history\n", + " conversation_history.append({\"role\": \"user\", \"parts\": [{\"text\": prompt}]})\n", + "\n", + " # Initialize the model\n", + " model = genai.GenerativeModel(MODEL_ID)\n", + "\n", + " # Generate response with streaming\n", + " response = model.generate_content(conversation_history, stream=True) # Pass history\n", + "\n", + " full_response = \"\"\n", + " print(\"Gemini:\", end=\" \", flush=True)\n", + " for chunk in response:\n", + " print(chunk.text, end=\"\", flush=True) # Streaming output\n", + " full_response += chunk.text\n", + "\n", + " # Append response to history\n", + " conversation_history.append({\"role\": \"model\", \"parts\": [{\"text\": full_response}]})\n", + "\n", + " return full_response\n", + 
"\n", + "# ask a follow-up question using the history\n", + "user_input = \"What if I get another dog?\"\n", + "response = chat_with_gemini(user_input)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "db2b9411e1e1" + }, + "source": [ + "### **Gemini's Response:** \n", + "If you get another dog, you'll have a total of **3 dogs**. That means you'll have: \n", + "\n", + "**3 dogs Γ— 4 paws/dog = 12 dog paws** \n", + "\n", + "Don't forget about your own feet! So, you'd have: \n", + "\n", + "**12 dog paws + 2 of your feet** " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "d5530e2f354c" + }, + "source": [ + "## **Learn More** \n", + " \n", + "πŸ“– **[Read More on Handling Long Context](https://ai.google.dev/gemini-api/docs/long-context)** \n" + ] + } + ], + "metadata": { + "colab": { + "name": "Gemini_Chatbot_Tutorial.ipynb", + "toc_visible": true + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.21" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} From 1e60be999b1efe72dc3189ed919e850d9ee9786b Mon Sep 17 00:00:00 2001 From: Abhigyan Date: Mon, 31 Mar 2025 12:08:09 +0530 Subject: [PATCH 2/4] migrated to latest genai sdk --- quickstarts/Gemini_Chatbot_Tutorial.ipynb | 39 +++++++++-------------- 1 file changed, 15 insertions(+), 24 deletions(-) diff --git a/quickstarts/Gemini_Chatbot_Tutorial.ipynb b/quickstarts/Gemini_Chatbot_Tutorial.ipynb index abab47cce..0a2984a5b 100644 --- a/quickstarts/Gemini_Chatbot_Tutorial.ipynb +++ b/quickstarts/Gemini_Chatbot_Tutorial.ipynb @@ -52,19 +52,7 @@ "outputs": [], "source": [ "# Install dependencies\n", - "%pip install -U -q \"google-generativeai>=0.7.2\"" - ] - }, - { - "cell_type": "code", - 
"execution_count": null, - "metadata": { - "id": "53059828e644" - }, - "outputs": [], - "source": [ - "# Import necessary libraries\n", - "import google.generativeai as genai" + "%pip install -U \"google-genai>=1.0.0\"" ] }, { @@ -88,11 +76,14 @@ }, "outputs": [], "source": [ - "# # Configure your API key using Colab Secrets\n", + "# Import Libraries\n", + "from google import genai\n", + "from google.genai import types\n", "from google.colab import userdata\n", "\n", + "# Configure your API key using Colab Secrets\n", "GOOGLE_API_KEY = userdata.get(\"GOOGLE_API_KEY\")\n", - "genai.configure(api_key=GOOGLE_API_KEY)" + "client = genai.Client(api_key=GOOGLE_API_KEY)" ] }, { @@ -120,10 +111,10 @@ "\n", "# Previous conversation history\n", "conversation_history = [\n", - " {\"role\": \"user\", \"parts\": [{\"text\": \"Hello!\"}]},\n", - " {\"role\": \"model\", \"parts\": [{\"text\": \"Hi there! How can I assist you today?\"}]},\n", - " {\"role\": \"user\", \"parts\": [{\"text\": \"I have two dogs in my house. How many paws are in my house?\"}]},\n", - " {\"role\": \"model\", \"parts\": [{\"text\": \"Each dog has 4 paws, so if you have 2 dogs, that makes 8 paws in your house.\"}]}\n", + " types.Content(role=\"user\", parts=[types.Part(text=\"Hello!\")]),\n", + " types.Content(role=\"model\", parts=[types.Part(text=\"Hi there! How can I assist you today?\")]),\n", + " types.Content(role=\"user\", parts=[types.Part(text=\"I have two dogs in my house. 
How many paws are in my house?\")]),\n", + " types.Content(role=\"model\", parts=[types.Part(text=\"Each dog has 4 paws, so if you have 2 dogs, that makes 8 paws in your house.\")]),\n", "]\n", "\n", "# Function to interact with Gemini Chatbot\n", @@ -136,13 +127,13 @@ " \"\"\"\n", " \n", " # Append user input to conversation history\n", - " conversation_history.append({\"role\": \"user\", \"parts\": [{\"text\": prompt}]})\n", + " conversation_history.append(types.Content(role=\"user\", parts=[types.Part(text=prompt)]))\n", "\n", - " # Initialize the model\n", - " model = genai.GenerativeModel(MODEL_ID)\n", + " # Create a chat session with history\n", + " chat = client.chats.create(model=MODEL_ID, history=conversation_history)\n", "\n", - " # Generate response with streaming\n", - " response = model.generate_content(conversation_history, stream=True) # Pass history\n", + " response = chat.send_message_stream(message=prompt) # Stream model response \n", + " # response = chat.send_message(message=prompt) # Get full response at once \n", "\n", " full_response = \"\"\n", " print(\"Gemini:\", end=\" \", flush=True)\n", From c43f708c3511c7e42f19381d19f9bb4d9463f1be Mon Sep 17 00:00:00 2001 From: Abhigyan Date: Wed, 2 Apr 2025 21:29:45 +0530 Subject: [PATCH 3/4] Revamped onboarding flow: Start with basic chat setup, step-by-step progression before advanced features like history & multimodal. 
--- quickstarts/Gemini_Chatbot_Tutorial.ipynb | 238 ++++++++++++++++++++-- 1 file changed, 218 insertions(+), 20 deletions(-) diff --git a/quickstarts/Gemini_Chatbot_Tutorial.ipynb b/quickstarts/Gemini_Chatbot_Tutorial.ipynb index 0a2984a5b..1fc9356b4 100644 --- a/quickstarts/Gemini_Chatbot_Tutorial.ipynb +++ b/quickstarts/Gemini_Chatbot_Tutorial.ipynb @@ -92,23 +92,153 @@ "id": "212ba7018c0e" }, "source": [ - "## **Interact with the Gemini Chatbot** \n", + "### **Interacting with the Gemini Chatbot** \n", + "This guide walks you through building a chatbot using the Gemini API, from basics to advanced features. Each section builds on the previous one, helping you understand how to create a more interactive and intelligent chatbot. \n", "\n", - "The function below sends a user prompt to the **Gemini API**, maintains conversation history, and streams responses in real-time. \n", - "You can modify the model version using the `MODEL_ID` variable." + "### **Building a Gemini Chatbot** \n", + "1. **Send a Basic Message** – Get a response from Gemini with a simple prompt. \n", + "2. **Multi-Turn Conversations** – Break complex queries into steps for clearer answers. \n", + "3. **Streaming Responses** – Get real-time responses for a more interactive experience. \n", + "4. **Conversation History** – Maintain context by referring to past messages. \n", + "5. **Configuring Model Parameters** – Control response behavior with settings like temperature and max tokens. \n", + "6. **System Instructions** – Customize chatbot behavior by defining specific roles and response styles.\n", + "\n", + "You can modify the model version using the `MODEL_ID` variable. This step-by-step guide aims to help developers build and optimize chatbots with Gemini." 
] }, { "cell_type": "code", "execution_count": null, - "metadata": { - "id": "ccb55b1126b1" - }, + "metadata": {}, "outputs": [], "source": [ "# Select the Gemini model\n", - "MODEL_ID = \"gemini-2.0-flash\" # @param [\"gemini-2.0-flash-lite\", \"gemini-2.0-flash\", \"gemini-2.0-pro-exp-02-05\"] {\"allow-input\": true, \"isTemplate\": true}\n", + "MODEL_ID = \"gemini-2.0-flash\" # @param [\"gemini-2.0-flash-lite\", \"gemini-2.0-flash\", \"gemini-2.0-pro-exp-02-05\"] {\"allow-input\": true, \"isTemplate\": true}" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### **Sending a Basic Message** \n", + "The following section demonstrates how to send a simple message to the Gemini model and receive a response. [Learn more](). \n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "response = client.models.generate_content(\n", + " model=MODEL_ID,\n", + " contents=[\"How does AI work?\"]\n", + ")\n", + "print(response.text)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### **Multi-Turn Conversations** \n", + "The following section demonstrates Gemini's ability to collect multiple questions and answers in a chat session. This chat format enables users to step incrementally toward answers and to get help with multipart problems.\n", + "[Learn more]()." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Create a chat session\n", + "chat = client.chats.create(model=MODEL_ID)\n", + "\n", + "# Send the first message\n", + "response = chat.send_message(\"Each orange has 8 slices.\")\n", + "print(response.text)\n", "\n", + "# Send a follow-up message\n", + "response = chat.send_message(\"I have 8 oranges. If 4 slices in total are rotten, how many complete oranges can I eat?\")\n", + "print(response.text)\n", + "\n", + "# Send another follow-up message\n", + "response = chat.send_message(\"I ate 3 oranges. What is the probability that at least one of the slices I ate was rotten?\")\n", + "print(response.text)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can retrieve the conversation history using the `chat.get_history()` method and print it in a structured format." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Display conversation history\n", + "for message in chat.get_history():\n", + " print(f'role - {message.role}', end=\": \")\n", + " print(message.parts[0].text)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### **Streaming Responses**\n", + "\n", + "By default, the model generates and returns the complete response only after finishing the entire text generation process. However, streaming allows real-time output using `send_message_stream()`, enabling responses to be processed as they are generated. This enhances responsiveness, making interactions feel faster and more dynamic, especially for longer replies. [Learn more]()." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Send a message with streaming response\n", + "response = chat.send_message_stream(\"Each orange has 8 slices.\")\n", + "for chunk in response:\n", + " print(chunk.text, end=\"\")\n", + "\n", + "# Send a follow-up message with streaming\n", + "response = chat.send_message_stream(\"I have 8 oranges. If 4 slices in total are rotten, how many complete oranges can I eat?\")\n", + "for chunk in response:\n", + " print(chunk.text, end=\"\")\n", + "\n", + "# Send another follow-up message with streaming\n", + "response = chat.send_message_stream(\"I ate 3 oranges. What is the probability that at least one of the slices I ate was rotten?\")\n", + "for chunk in response:\n", + " print(chunk.text, end=\"\")\n", + "\n", + "# Display conversation history\n", + "for message in chat.get_history():\n", + " print(f\"Role: {message.role}\", end=\": \")\n", + " print(message.parts[0].text)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### **Maintaining Conversation History** \n", + "\n", + "In multi-turn conversations, keeping track of previous messages helps to provide relevant and context-aware responses. This section shows how to store past exchanges so users can ask follow-up questions without losing context.\n", + "[Learn more]()." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Previous conversation history\n", "conversation_history = [\n", " types.Content(role=\"user\", parts=[types.Part(text=\"Hello!\")]),\n", @@ -142,29 +272,93 @@ " full_response += chunk.text\n", "\n", " # Append response to history\n", - " conversation_history.append({\"role\": \"model\", \"parts\": [{\"text\": full_response}]})\n", + " conversation_history.append(types.Content(role=\"model\", parts=[types.Part(text=full_response)]))\n", "\n", " return full_response\n", "\n", - "# ask a follow-up question using the history\n", + "# Ask a follow-up question using the history\n", "user_input = \"What if I get another dog?\"\n", "response = chat_with_gemini(user_input)" ] }, { "cell_type": "markdown", - "metadata": { - "id": "db2b9411e1e1" - }, + "metadata": {}, "source": [ - "### **Gemini's Response:** \n", - "If you get another dog, you'll have a total of **3 dogs**. That means you'll have: \n", + "### **Configuring Model Parameters** \n", "\n", - "**3 dogs × 4 paws/dog = 12 dog paws** \n", + "When sending a request to the Gemini API, various parameters can be adjusted to control how the model generates responses. If not specified, the model uses default values. Below are key parameters that can be configured: \n", "\n", - "Don't forget about your own feet! So, you'd have: \n", + "- **`stopSequences`** – Defines sequences that stop response generation. The model stops output when encountering these sequences. \n", + "- **`temperature`** – Controls randomness. Higher values make responses more creative, while lower values make them more deterministic (range: 0.0 to 2.0). \n", + "- **`maxOutputTokens`** – Limits the number of tokens in the response. \n", + "- **`topP`** – Adjusts token selection by probability. A lower value makes responses more focused, while a higher value allows more variety. \n", + "- **`topK`** – Limits token selection to the most probable options. A `topK` of 1 chooses the highest-probability token, while higher values allow more diversity. \n", "\n", - "**12 dog paws + 2 of your feet** " + "The example below demonstrates how to configure these parameters. [Learn more]()." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Configure model parameters for response generation\n", + "response = client.models.generate_content(\n", + " model=MODEL_ID,\n", + " contents=[\"Explain how AI works\"],\n", + " config=types.GenerateContentConfig(\n", + " max_output_tokens=500, # Limit response length\n", + " temperature=0.1 # Lower value for more precise responses\n", + " )\n", + ")\n", + "print(response.text)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### **Using System Instructions** \n", + "\n", + "System instructions define the model's behavior for a specific use case, ensuring consistent responses throughout a conversation. This is especially useful for chatbots, as it helps maintain a specific persona, tone, or role without requiring repeated context in every user message. \n", + "\n", + "The following code configures a chatbot to act as a friendly travel assistant. [Learn more]()." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Setting up the chatbot with system instructions\n", + "response = client.models.generate_content(\n", + " model=MODEL_ID,\n", + " config=types.GenerateContentConfig(\n", + " system_instruction=\"You are a friendly travel assistant. Suggest travel destinations and give recommendations.\"\n", + " ),\n", + " contents=\"Can you suggest a good beach destination?\"\n", + ")\n", + "\n", + "# Print response\n", + "print(response.text)\n", + "\n", + "# Multiple follow-up questions\n", + "follow_ups = [\n", + " \"What activities can I do there?\",\n", + " \"When is the best time to visit?\",\n", + " \"Are there any budget-friendly accommodations?\"\n", + "]\n", + "\n", + "# Send them one-by-one, keeping the same system instruction\n", + "for question in follow_ups:\n", + " response = client.models.generate_content(\n", + " model=MODEL_ID,\n", + " config=types.GenerateContentConfig(\n", + " system_instruction=\"You are a friendly travel assistant. Suggest travel destinations and give recommendations.\"\n", + " ),\n", + " contents=question\n", + " )\n", + " print(response.text)" ] }, { "cell_type": "markdown", "metadata": { "id": "d5530e2f354c" }, "source": [ - "## **Learn More** \n", - " \n", - "📖 **[Read More on Handling Long Context](https://ai.google.dev/gemini-api/docs/long-context)** \n" + "## **Next Steps** \n", + "\n", + "🔹 **[Handling Long Context](https://ai.google.dev/gemini-api/docs/long-context)** \n", + "\n", + "🔹 **[Build an AI Chat Web App](https://ai.google.dev/gemini-api/tutorials/web-app?lang=python)** \n", + "\n", + "🔹 **[Prompt Design Strategies](https://ai.google.dev/gemini-api/docs/prompting-strategies?hl=en)** " ] } ], From 8601cf7ed03a11942796c3e89cee874e71d709da Mon Sep 17 00:00:00 2001 From: Abhigyan Date: Fri, 4 Apr 2025 22:30:27 +0530 Subject: [PATCH 4/4] ran nbfmt --- quickstarts/Gemini_Chatbot_Tutorial.ipynb | 823 +++++++++++----------- 1 file changed, 421 insertions(+), 402 deletions(-) diff --git a/quickstarts/Gemini_Chatbot_Tutorial.ipynb b/quickstarts/Gemini_Chatbot_Tutorial.ipynb index 1fc9356b4..e5afcb863 100644 --- 
a/quickstarts/Gemini_Chatbot_Tutorial.ipynb +++ b/quickstarts/Gemini_Chatbot_Tutorial.ipynb @@ -1,405 +1,424 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "##### Copyright 2025 Google LLC." - ] + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "3c5dbcc9ae0c" + }, + "source": [ + "##### Copyright 2025 Google LLC." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "b7ce279c7b07" + }, + "outputs": [], + "source": [ + "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", + "# you may not use this file except in compliance with the License.\n", + "# You may obtain a copy of the License at\n", + "#\n", + "# https://www.apache.org/licenses/LICENSE-2.0\n", + "#\n", + "# Unless required by applicable law or agreed to in writing, software\n", + "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", + "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", + "# See the License for the specific language governing permissions and\n", + "# limitations under the License." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "153886bac3ca" + }, + "source": [ + "# Gemini API: Chatbot Quickstart\n", + "\n", + "---\n", + "\n", + "This notebook provides an example of how to use the **Gemini API to build a chatbot**. \n", + "You will create a conversational AI that generates responses based on user input.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "3f7f47043869" + }, + "outputs": [], + "source": [ + "# Install dependencies\n", + "%pip install -U \"google-genai>=1.0.0\"" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "a0656906d23e" + }, + "source": [ + "## Set up your API key\n", + "\n", + "To run the following cell, store your API key in a Colab Secret named `GOOGLE_API_KEY`. 
\n", + "If you don't already have an API key, or you're unsure how to create a Colab Secret, see the \n", + "[Authentication](https://github.com/google-gemini/cookbook/blob/a0b506a8f65141cec4eb9143db760c735f652a59/quickstarts/Authentication.ipynb) quickstart for an example.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "235ce8c11240" + }, + "outputs": [], + "source": [ + "# Import Libraries\n", + "from google import genai\n", + "from google.genai import types\n", + "from google.colab import userdata\n", + "\n", + "# Configure your API key using Colab Secrets\n", + "GOOGLE_API_KEY = userdata.get(\"GOOGLE_API_KEY\")\n", + "client = genai.Client(api_key=GOOGLE_API_KEY)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "212ba7018c0e" + }, + "source": [ + "### **Interacting with the Gemini Chatbot** \n", + "This guide walks you through building a chatbot using the Gemini API, from basics to advanced features. Each section builds on the previous one, helping you understand how to create a more interactive and intelligent chatbot. \n", + "\n", + "### **Building a Gemini Chatbot** \n", + "1️. **Send a Basic Message** – Get a response from Gemini with a simple prompt. \n", + "2️. **Multi-Turn Conversations** – Break complex queries into steps for clearer answers. \n", + "3️. **Streaming Responses** – Get real-time responses for a more interactive experience. \n", + "4️. **Conversation History** – Maintain context by referring to past messages. \n", + "5️. **Configuring Model Parameters** – Control response behavior with settings like temperature and max tokens. \n", + "6️. **System Instructions** – Customize chatbot behavior by defining specific roles and response styles.\n", + "\n", + "You can modify the model version using the `MODEL_ID` variable. This step-by-step guide aims to help developers build and optimize chatbots with Gemini." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "649c9b112991" + }, + "outputs": [], + "source": [ + "# Select the Gemini model\n", + "MODEL_ID = \"gemini-2.0-flash\" # @param [\"gemini-2.0-flash-lite\", \"gemini-2.0-flash\", \"gemini-2.0-pro-exp-02-05\"] {\"allow-input\": true, \"isTemplate\": true}" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "880fc7506f64" + }, + "source": [ + "### **Sending a Basic Message** \n", + "The following section demonstrates how to send a simple message to the Gemini model and receive a response. [Learn more](). \n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "5af217fc983b" + }, + "outputs": [], + "source": [ + "response = client.models.generate_content(\n", + " model=MODEL_ID,\n", + " contents=[\"How does AI work?\"]\n", + ")\n", + "print(response.text)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "7302a57b9bd5" + }, + "source": [ + "### **Multi-Turn Conversations** \n", + "The following section demonstrates Gemini ability to collect multiple questions and answers in a chat session. This chat format enables users to step incrementally toward answers and to get help with multipart problems\n", + "[Learn more]()." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "63d374ac3804" + }, + "outputs": [], + "source": [ + "#create a chat session\n", + "chat = client.chats.create(model = MODEL_ID)\n", + "\n", + "# Send the first message\n", + "response = chat.send_message(\"Each orange has 8 slices.\")\n", + "print(response.text)\n", + "\n", + "# Send a follow-up message\n", + "response = chat.send_message(\"I have 8 oranges. If 4 slices in total are rotten, how many complete oranges can I eat?\")\n", + "print(response.text)\n", + "\n", + "# Send another follow-up message\n", + "response = chat.send_message(\"I ate 3 oranges. 
What is the probability that at least one of the slices I ate was rotten?\")\n", + "print(response.text)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "7b8c80d673ff" + }, + "source": [ + "We can use the conversation history using `chat.get_history()` method and print it in a structured format." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "e6dc83c05415" + }, + "outputs": [], + "source": [ + "# Display conversation history\n", + "for message in chat.get_history():\n", + " print(f'role - {message.role}',end=\": \")\n", + " print(message.parts[0].text)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "371a72ef17b9" + }, + "source": [ + "### **Streaming Responses**\n", + "\n", + "By default, the model generates and returns the complete response only after finishing the entire text generation process. However, streaming allows real-time output using `send_message_stream()`, enabling responses to be processed as they are generated. This enhances responsiveness, making interactions feel faster and more dynamic, especially for longer replies. [Learn more]()." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "33a58449d1b7" + }, + "outputs": [], + "source": [ + "# Send a message with streaming response\n", + "response = chat.send_message_stream(\"Each orange has 8 slices.\")\n", + "for chunk in response:\n", + " print(chunk.text, end=\"\")\n", + "\n", + "# Send a follow-up message with streaming\n", + "response = chat.send_message_stream(\"I have 8 oranges. If 4 slices in total are rotten, how many complete oranges can I eat?\")\n", + "for chunk in response:\n", + " print(chunk.text, end=\"\")\n", + "\n", + "# Send another follow-up message with streaming\n", + "response = chat.send_message_stream(\"I ate 3 oranges. 
What is the probability that at least one of the slices I ate was rotten?\")\n", + "for chunk in response:\n", + " print(chunk.text, end=\"\")\n", + "\n", + "# Display conversation history\n", + "for message in chat.get_history():\n", + " print(f\"Role: {message.role}\", end=\": \")\n", + " print(message.parts[0].text)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "3280efdcf721" + }, + "source": [ + "### **Maintaining Conversation History** \n", + "\n", + "In multi-turn conversations, keeping track of previous messages helps to provide relevant and context-aware responses. This section shows how to store past exchanges so users can ask follow-up questions without losing context.\n", + "[Learn more]()." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "8c02588e3523" + }, + "outputs": [], + "source": [ + "# Previous conversation history\n", + "conversation_history = [\n", + " types.Content(role=\"user\", parts=[types.Part(text=\"Hello!\")]),\n", + " types.Content(role=\"model\", parts=[types.Part(text=\"Hi there! How can I assist you today?\")]),\n", + " types.Content(role=\"user\", parts=[types.Part(text=\"I have two dogs in my house. 
How many paws are in my house?\")]),\n", + " types.Content(role=\"model\", parts=[types.Part(text=\"Each dog has 4 paws, so if you have 2 dogs, that makes 8 paws in your house.\")]),\n", + "]\n", + "\n", + "# Function to interact with Gemini Chatbot\n", + "def chat_with_gemini(prompt: str) -> str:\n", + " \"\"\"\n", + " Generates a response from the Gemini chatbot for a given prompt.\n", + " - Maintains multi-turn conversation history\n", + " - Streams AI-generated responses in real-time\n", + " - Uses proper message formatting\n", + " \"\"\"\n", + " \n", + " # Append user input to conversation history\n", + " conversation_history.append(types.Content(role=\"user\", parts=[types.Part(text=prompt)]))\n", + "\n", + " # Create a chat session with history\n", + " chat = client.chats.create(model=MODEL_ID, history=conversation_history)\n", + "\n", + " response = chat.send_message_stream(message=prompt) # Stream model response \n", + " # response = chat.send_message(message=prompt) # Get full response at once \n", + "\n", + " full_response = \"\"\n", + " print(\"Gemini:\", end=\" \", flush=True)\n", + " for chunk in response:\n", + " print(chunk.text, end=\"\", flush=True) # Streaming output\n", + " full_response += chunk.text\n", + "\n", + " # Append response to history\n", + " conversation_history.append(types.Content(role=\"model\", parts=[types.Part(text=full_response)]))\n", + "\n", + " return full_response\n", + "\n", + "# Ask a follow-up question using the history\n", + "user_input = \"What if I get another dog?\"\n", + "response = chat_with_gemini(user_input)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "7fee3037581c" + }, + "source": [ + "### **Configuring Model Parameters** \n", + "\n", + "When sending a request to the Gemini API, various parameters can be adjusted to control how the model generates responses. If not specified, the model uses default values. 
Below are key parameters that can be configured: \n",
+ "\n",
+ "- **`stopSequences`** – Defines sequences that stop response generation. The model stops output when it encounters one of these sequences. \n",
+ "- **`temperature`** – Controls randomness. Higher values make responses more creative, while lower values make them more deterministic (range: 0.0 to 2.0). \n",
+ "- **`maxOutputTokens`** – Limits the number of tokens in the response. \n",
+ "- **`topP`** – Adjusts token selection by cumulative probability. A lower value makes responses more focused, while a higher value allows more variety. \n",
+ "- **`topK`** – Limits token selection to the most probable options. A `topK` of 1 chooses the highest-probability token, while higher values allow more diversity. \n",
+ "\n",
+ "The example below demonstrates how to configure these parameters. [Learn more]()."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "3e02d89d9ca3"
+ },
+ "outputs": [],
+ "source": [
+ "# Configure model parameters for response generation\n",
+ "response = client.models.generate_content(\n",
+ "    model=MODEL_ID,\n",
+ "    contents=[\"Explain how AI works\"],\n",
+ "    config=types.GenerateContentConfig(\n",
+ "        max_output_tokens=500,  # Limit response length\n",
+ "        temperature=0.1  # Lower value for more precise responses\n",
+ "    )\n",
+ ")\n",
+ "print(response.text)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "97b2c692dfb8"
+ },
+ "source": [
+ "### **Using System Instructions** \n",
+ "\n",
+ "System instructions define the model's behavior for a specific use case, ensuring consistent responses throughout a conversation. This is especially useful for chatbots, as it helps maintain a specific persona, tone, or role without requiring repeated context in every user message. \n",
+ "\n",
+ "The following code configures a chatbot to act as a friendly travel assistant. [Learn more]()."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "39e3dd4ddde7"
+ },
+ "outputs": [],
+ "source": [
+ "# Set up a chat session with system instructions so follow-ups keep context\n",
+ "chat = client.chats.create(\n",
+ "    model=MODEL_ID,\n",
+ "    config=types.GenerateContentConfig(\n",
+ "        system_instruction=\"You are a friendly travel assistant. Suggest travel destinations and give recommendations.\"\n",
+ "    )\n",
+ ")\n",
+ "\n",
+ "response = chat.send_message(\"Can you suggest a good beach destination?\")\n",
+ "print(response.text)\n",
+ "\n",
+ "# Multiple follow-up questions\n",
+ "follow_ups = [\n",
+ "    \"What activities can I do there?\",\n",
+ "    \"When is the best time to visit?\",\n",
+ "    \"Are there any budget-friendly accommodations?\"\n",
+ "]\n",
+ "\n",
+ "# Send them one by one; the chat session carries the conversation context\n",
+ "for question in follow_ups:\n",
+ "    response = chat.send_message(question)\n",
+ "    print(response.text)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "d5530e2f354c"
+ },
+ "source": [
+ "## **Next Steps** \n",
+ "\n",
+ "🔹 **[Handling Long Context](https://ai.google.dev/gemini-api/docs/long-context)** \n",
+ "\n",
+ "🔹 **[Build an AI Chat Web App](https://ai.google.dev/gemini-api/tutorials/web-app?lang=python)** \n",
+ "\n",
+ "🔹 **[Prompt Design Strategies](https://ai.google.dev/gemini-api/docs/prompting-strategies?hl=en)** "
+ ]
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "name": "Gemini_Chatbot_Tutorial.ipynb",
+ "toc_visible": true
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "name": "python3"
+ }
+ }
},
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "cellView": "form",
- "id": "b7ce279c7b07"
- },
- "outputs": [],
- "source": [
- "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
- "# you may not use this file except in compliance with the License.\n",
- "# You may obtain a copy of the 
License at\n", - "#\n", - "# https://www.apache.org/licenses/LICENSE-2.0\n", - "#\n", - "# Unless required by applicable law or agreed to in writing, software\n", - "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", - "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", - "# See the License for the specific language governing permissions and\n", - "# limitations under the License." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "153886bac3ca" - }, - "source": [ - "# Gemini API: Chatbot Quickstart\n", - "\n", - "---\n", - "\n", - "This notebook provides an example of how to use the **Gemini API to build a chatbot**. \n", - "You will create a conversational AI that generates responses based on user input.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "3f7f47043869" - }, - "outputs": [], - "source": [ - "# Install dependencies\n", - "%pip install -U \"google-genai>=1.0.0\"" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "a0656906d23e" - }, - "source": [ - "## Set up your API key\n", - "\n", - "To run the following cell, store your API key in a Colab Secret named `GOOGLE_API_KEY`. 
\n", - "If you don't already have an API key, or you're unsure how to create a Colab Secret, see the \n", - "[Authentication](https://github.com/google-gemini/cookbook/blob/a0b506a8f65141cec4eb9143db760c735f652a59/quickstarts/Authentication.ipynb) quickstart for an example.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "235ce8c11240" - }, - "outputs": [], - "source": [ - "# Import Libraries\n", - "from google import genai\n", - "from google.genai import types\n", - "from google.colab import userdata\n", - "\n", - "# Configure your API key using Colab Secrets\n", - "GOOGLE_API_KEY = userdata.get(\"GOOGLE_API_KEY\")\n", - "client = genai.Client(api_key=GOOGLE_API_KEY)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "212ba7018c0e" - }, - "source": [ - "### **Interacting with the Gemini Chatbot** \n", - "This guide walks you through building a chatbot using the Gemini API, from basics to advanced features. Each section builds on the previous one, helping you understand how to create a more interactive and intelligent chatbot. \n", - "\n", - "### **Building a Gemini Chatbot** \n", - "1️. **Send a Basic Message** – Get a response from Gemini with a simple prompt. \n", - "2️. **Multi-Turn Conversations** – Break complex queries into steps for clearer answers. \n", - "3️. **Streaming Responses** – Get real-time responses for a more interactive experience. \n", - "4️. **Conversation History** – Maintain context by referring to past messages. \n", - "5️. **Configuring Model Parameters** – Control response behavior with settings like temperature and max tokens. \n", - "6️. **System Instructions** – Customize chatbot behavior by defining specific roles and response styles.\n", - "\n", - "You can modify the model version using the `MODEL_ID` variable. This step-by-step guide aims to help developers build and optimize chatbots with Gemini." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Select the Gemini model\n", - "MODEL_ID = \"gemini-2.0-flash\" # @param [\"gemini-2.0-flash-lite\", \"gemini-2.0-flash\", \"gemini-2.0-pro-exp-02-05\"] {\"allow-input\": true, \"isTemplate\": true}" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### **Sending a Basic Message** \n", - "The following section demonstrates how to send a simple message to the Gemini model and receive a response. [Learn more](). \n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "response = client.models.generate_content(\n", - " model=MODEL_ID,\n", - " contents=[\"How does AI work?\"]\n", - ")\n", - "print(response.text)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### **Multi-Turn Conversations** \n", - "The following section demonstrates Gemini ability to collect multiple questions and answers in a chat session. This chat format enables users to step incrementally toward answers and to get help with multipart problems\n", - "[Learn more]()." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "#create a chat session\n", - "chat = client.chats.create(model = MODEL_ID)\n", - "\n", - "# Send the first message\n", - "response = chat.send_message(\"Each orange has 8 slices.\")\n", - "print(response.text)\n", - "\n", - "# Send a follow-up message\n", - "response = chat.send_message(\"I have 8 oranges. If 4 slices in total are rotten, how many complete oranges can I eat?\")\n", - "print(response.text)\n", - "\n", - "# Send another follow-up message\n", - "response = chat.send_message(\"I ate 3 oranges. 
What is the probability that at least one of the slices I ate was rotten?\")\n", - "print(response.text)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We can use the conversation history using `chat.get_history()` method and print it in a structured format." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Display conversation history\n", - "for message in chat.get_history():\n", - " print(f'role - {message.role}',end=\": \")\n", - " print(message.parts[0].text)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### **Streaming Responses**\n", - "\n", - "By default, the model generates and returns the complete response only after finishing the entire text generation process. However, streaming allows real-time output using `send_message_stream()`, enabling responses to be processed as they are generated. This enhances responsiveness, making interactions feel faster and more dynamic, especially for longer replies. [Learn more]()." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Send a message with streaming response\n", - "response = chat.send_message_stream(\"Each orange has 8 slices.\")\n", - "for chunk in response:\n", - " print(chunk.text, end=\"\")\n", - "\n", - "# Send a follow-up message with streaming\n", - "response = chat.send_message_stream(\"I have 8 oranges. If 4 slices in total are rotten, how many complete oranges can I eat?\")\n", - "for chunk in response:\n", - " print(chunk.text, end=\"\")\n", - "\n", - "# Send another follow-up message with streaming\n", - "response = chat.send_message_stream(\"I ate 3 oranges. 
What is the probability that at least one of the slices I ate was rotten?\")\n", - "for chunk in response:\n", - " print(chunk.text, end=\"\")\n", - "\n", - "# Display conversation history\n", - "for message in chat.get_history():\n", - " print(f\"Role: {message.role}\", end=\": \")\n", - " print(message.parts[0].text)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### **Maintaining Conversation History** \n", - "\n", - "In multi-turn conversations, keeping track of previous messages helps to provide relevant and context-aware responses. This section shows how to store past exchanges so users can ask follow-up questions without losing context.\n", - "[Learn more]()." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Previous conversation history\n", - "conversation_history = [\n", - " types.Content(role=\"user\", parts=[types.Part(text=\"Hello!\")]),\n", - " types.Content(role=\"model\", parts=[types.Part(text=\"Hi there! How can I assist you today?\")]),\n", - " types.Content(role=\"user\", parts=[types.Part(text=\"I have two dogs in my house. 
How many paws are in my house?\")]),\n", - " types.Content(role=\"model\", parts=[types.Part(text=\"Each dog has 4 paws, so if you have 2 dogs, that makes 8 paws in your house.\")]),\n", - "]\n", - "\n", - "# Function to interact with Gemini Chatbot\n", - "def chat_with_gemini(prompt: str) -> str:\n", - " \"\"\"\n", - " Generates a response from the Gemini chatbot for a given prompt.\n", - " - Maintains multi-turn conversation history\n", - " - Streams AI-generated responses in real-time\n", - " - Uses proper message formatting\n", - " \"\"\"\n", - " \n", - " # Append user input to conversation history\n", - " conversation_history.append(types.Content(role=\"user\", parts=[types.Part(text=prompt)]))\n", - "\n", - " # Create a chat session with history\n", - " chat = client.chats.create(model=MODEL_ID, history=conversation_history)\n", - "\n", - " response = chat.send_message_stream(message=prompt) # Stream model response \n", - " # response = chat.send_message(message=prompt) # Get full response at once \n", - "\n", - " full_response = \"\"\n", - " print(\"Gemini:\", end=\" \", flush=True)\n", - " for chunk in response:\n", - " print(chunk.text, end=\"\", flush=True) # Streaming output\n", - " full_response += chunk.text\n", - "\n", - " # Append response to history\n", - " conversation_history.append(types.Content(role=\"model\", parts=[types.Part(text=full_response)]))\n", - "\n", - " return full_response\n", - "\n", - "# Ask a follow-up question using the history\n", - "user_input = \"What if I get another dog?\"\n", - "response = chat_with_gemini(user_input)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### **Configuring Model Parameters** \n", - "\n", - "When sending a request to the Gemini API, various parameters can be adjusted to control how the model generates responses. If not specified, the model uses default values. 
Below are key parameters that can be configured: \n", - "\n", - "- **`stopSequences`** – Defines sequences that stop response generation. The model stops output when encountering these sequences. \n", - "- **`temperature`** – Controls randomness. Higher values make responses more creative, while lower values make them more deterministic (range: 0.0 to 2.0). \n", - "- **`maxOutputTokens`** – Limits the number of tokens in the response. \n", - "- **`topP`** – Adjusts token selection by probability. A lower value makes responses more focused, while a higher value allows more variety. \n", - "- **`topK`** – Limits token selection to the most probable options. A `topK` of 1 chooses the highest-probability token, while higher values allow more diversity. \n", - "\n", - "The example below demonstrates how to configure these parameters. [Learn more]()." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Configure model parameters for response generation\n", - "response = client.models.generate_content(\n", - " model=MODEL_ID,\n", - " contents=[\"Explain how AI works\"],\n", - " config=types.GenerateContentConfig(\n", - " max_output_tokens=500, # Limit response length\n", - " temperature=0.1 # Lower value for more precise responses\n", - " )\n", - ")\n", - "print(response.text)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### **Using System Instructions** \n", - "\n", - "System instructions define the model’s behavior for a specific use case, ensuring consistent responses throughout a conversation. This is especially useful for chatbots, as it helps maintain a specific persona, tone, or role without requiring repeated context in every user message. \n", - "\n", - "The following code configures a chatbot to act as a friendly travel assistant: [Learn more]()." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Setting up the chatbot with system instructions\n", - "response = client.models.generate_content(\n", - " model=MODEL_ID,\n", - " config=types.GenerateContentConfig(\n", - " system_instruction=\"You are a friendly travel assistant. Suggest travel destinations and give recommendations.\"\n", - " ),\n", - " contents=\"Can you suggest a good beach destination?\"\n", - ")\n", - "\n", - "# Print response\n", - "print(response.text)\n", - "\n", - "# Multiple follow-up questions\n", - "follow_ups = [\n", - " \"What activities can I do there?\",\n", - " \"When is the best time to visit?\",\n", - " \"Are there any budget-friendly accommodations?\"\n", - "]\n", - "\n", - "# Send them one-by-one\n", - "for question in follow_ups:\n", - " response = client.models.generate_content(\n", - " model=MODEL_ID,\n", - " contents=question\n", - " )\n", - "print(response.text)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "d5530e2f354c" - }, - "source": [ - "## **Next Steps** \n", - "\n", - "πŸ”Ή **[Handling Long Context](https://ai.google.dev/gemini-api/docs/long-context)** \n", - "\n", - "πŸ”Ή **[Build an AI Chat Web App](https://ai.google.dev/gemini-api/tutorials/web-app?lang=python)** \n", - "\n", - "πŸ”Ή **[Prompt Design Strategies](https://ai.google.dev/gemini-api/docs/prompting-strategies?hl=en)** " - ] - } - ], - "metadata": { - "colab": { - "name": "Gemini_Chatbot_Tutorial.ipynb", - "toc_visible": true - }, - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.9.21" - } - }, - "nbformat": 4, - "nbformat_minor": 4 + "nbformat": 4, + "nbformat_minor": 
0 }
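
---

Reviewer note: the multi-turn cells in this patch ask Gemini for the probability that at least one eaten slice was rotten. Under one plausible reading of the puzzle (an assumption, since the notebook leaves it ambiguous: 8 oranges × 8 slices = 64 slices, 4 rotten slices placed uniformly at random, and 3 whole oranges — 24 slices — eaten), the answer can be checked locally with a hypergeometric count. This is a sketch for sanity-checking the model's reply, not part of the notebook:

```python
from math import comb

# One plausible model of the orange puzzle (assumption, not stated in the
# patch): 64 slices total, 4 rotten, 24 slices eaten uniformly at random.
total_slices = 8 * 8   # 8 oranges x 8 slices each
rotten = 4
eaten = 3 * 8          # 3 whole oranges

# P(no rotten slice among the eaten ones), by hypergeometric counting
p_none = comb(total_slices - rotten, eaten) / comb(total_slices, eaten)
p_at_least_one = 1 - p_none
print(f"P(at least one rotten slice) = {p_at_least_one:.4f}")  # ~0.8562
```

A streamed Gemini answer that lands far from this value would suggest the model mis-modeled the puzzle (or used a different reading of it).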
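The history-management pattern in `chat_with_gemini` — seed a chat with prior turns, generate a reply, then append both new turns — can be exercised without network access by stubbing out the model call. Everything below (`fake_model`, the dict-based turn format) is a hypothetical stand-in for illustration, not the SDK's API:

```python
# API-free sketch of the history pattern used in chat_with_gemini:
# seed a history, answer from history + prompt, then append both turns.
# fake_model is a hypothetical stand-in for the real Gemini call.
history = [
    {"role": "user", "text": "I have two dogs in my house. How many paws are in my house?"},
    {"role": "model", "text": "Each dog has 4 paws, so 2 dogs make 8 paws."},
]

def fake_model(history, prompt):
    # Stub logic: the seeded history establishes 2 dogs; every mention of
    # "another dog" in an earlier user turn or in the new prompt adds one.
    dogs = 2
    dogs += sum("another dog" in t["text"] for t in history if t["role"] == "user")
    dogs += "another dog" in prompt
    return f"{dogs} dogs means {dogs * 4} paws."

def chat_turn(history, prompt):
    reply = fake_model(history, prompt)
    # Append both turns so the next call sees the full context
    history.append({"role": "user", "text": prompt})
    history.append({"role": "model", "text": reply})
    return reply

print(chat_turn(history, "What if I get another dog?"))  # 3 dogs means 12 paws.
```

A second `chat_turn` call sees the first follow-up in the history, which is exactly why the real function must append the model's reply before the next request.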
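The `topK` and `topP` descriptions in the parameter list can be made concrete with a toy next-token distribution (the tokens and probabilities below are made up for illustration): top-k keeps the k most probable candidates, while top-p keeps the smallest high-probability set whose cumulative mass reaches p.

```python
# Toy next-token distribution (illustrative numbers only)
probs = {"beach": 0.40, "island": 0.25, "coast": 0.15, "lake": 0.12, "desert": 0.08}

def top_k(dist, k):
    """Keep only the k most probable tokens."""
    ranked = sorted(dist.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:k])

def top_p(dist, p):
    """Keep the smallest set of top tokens whose cumulative probability >= p."""
    kept, total = {}, 0.0
    for tok, pr in sorted(dist.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = pr
        total += pr
        if total >= p:
            break
    return kept

print(top_k(probs, 1))    # only the single most probable token survives
print(top_p(probs, 0.5))  # smallest top set reaching 50% cumulative mass
```

This mirrors why `topK=1` is effectively greedy decoding, and why lowering `topP` narrows the model's choices in the same spirit as lowering `temperature`.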