
Commit f45ef52

Merge pull request #12 from Azure-Samples/funccalling
More function calling examples
2 parents: 1adf116 + f6f099c · commit f45ef52

16 files changed: +1471 / -219 lines

AGENTS.md

Lines changed: 7 additions & 5 deletions
@@ -29,10 +29,12 @@ All example scripts are located in the root directory. They follow a consistent
 - `chat_safety.py` - Content safety filter exception handling
 
 **Function Calling Scripts:**
-- `function_calling_basic.py` - Single function declaration, prints tool calls
-- `function_calling_call.py` - Executes the function when requested
-- `function_calling_extended.py` - Full round-trip with function execution and response
-- `function_calling_multiple.py` - Multiple functions, demonstrates choice logic
+- `function_calling_basic.py` - Single function declaration, prints tool calls (no execution)
+- `function_calling_call.py` - Executes the function once if the model requests it
+- `function_calling_extended.py` - Full round-trip: executes, returns tool output, gets final answer
+- `function_calling_errors.py` - Same as extended but with robust error handling (malformed JSON args, missing tool, tool exceptions, JSON serialization)
+- `function_calling_parallel.py` - Shows model requesting multiple tools in one response
+- `function_calling_while_loop.py` - Conversation loop that keeps executing sequential tool calls until the model produces a final natural language answer (with error handling)
 
 **Structured Outputs Scripts:**
 - `structured_outputs_basic.py` - Basic Pydantic model extraction
@@ -119,7 +121,7 @@ These scripts are automatically run by `azd provision` via the `azure.yaml` post
 - `.devcontainer/devcontainer.json` - Default dev container (Azure OpenAI setup with azd)
 - `.devcontainer/Dockerfile` - Base Python 3.12 image, installs all requirements-dev.txt
 - `.devcontainer/github/` - GitHub Models variant
-- `.devcontainer/ollama/` - Ollama variant
+- `.devcontainer/ollama/` - Ollama variant
 - `.devcontainer/openai/` - OpenAI.com variant
 
 All dev containers install all dependencies from `requirements-dev.txt` which includes base, RAG, and dev tools.
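
The new `function_calling_while_loop.py` is listed above, but its source is not part of this diff view. A minimal sketch of the loop pattern it describes, assuming the same `client`, `MODEL_NAME`, `tools`, `tool_mapping`, and starting `messages` as in `function_calling_errors.py` further down (the actual script may differ):

import json

while True:
    response = client.chat.completions.create(model=MODEL_NAME, messages=messages, tools=tools)
    message = response.choices[0].message
    if not message.tool_calls:
        # No further tool requests: this is the final natural language answer
        print(message.content)
        break
    # Record the assistant turn, including its tool call metadata
    messages.append(
        {
            "role": "assistant",
            "content": message.content or "",
            "tool_calls": [tc.model_dump() for tc in message.tool_calls],
        }
    )
    for tool_call in message.tool_calls:
        try:
            args = json.loads(tool_call.function.arguments or "{}")
            result = tool_mapping[tool_call.function.name](**args)
        except Exception as e:  # fold errors into the tool result, as in the errors example
            result = f"Tool error: {e}"
        messages.append(
            {
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(result, default=str),
            }
        )

Because errors are returned as tool content rather than raised, a bad tool call does not kill the loop; the model can retry with corrected arguments on the next iteration.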

README.md

Lines changed: 3 additions & 1 deletion
@@ -44,7 +44,9 @@ Scripts (in increasing order of capability):
 1. [`function_calling_basic.py`](./function_calling_basic.py): Declares a single `lookup_weather` function and prompts the model. It prints the tool call (if any) or falls back to the model's normal content. No actual function execution occurs.
 2. [`function_calling_call.py`](./function_calling_call.py): Executes the `lookup_weather` function if the model requests it by parsing the returned arguments JSON and calling the local Python function.
 3. [`function_calling_extended.py`](./function_calling_extended.py): Shows a full round‑trip: after executing the function, it appends a `tool` role message containing the function result and asks the model again so it can incorporate real data into a final user-facing response.
-4. [`function_calling_multiple.py`](./function_calling_multiple.py): Exposes multiple functions (`lookup_weather`, `lookup_movies`) so you can see how the model chooses among them and how multiple tool calls could be returned.
+4. [`function_calling_errors.py`](./function_calling_errors.py): Same as the extended example but adds robust error handling (malformed JSON arguments, missing tool implementations, execution exceptions, JSON serialization fallback).
+5. [`function_calling_parallel.py`](./function_calling_parallel.py): Demonstrates the model returning multiple tool calls in a single response.
+6. [`function_calling_while_loop.py`](./function_calling_while_loop.py): An iterative conversation loop that keeps executing sequential tool calls (with error handling) until the model produces a final natural language answer.
 
 You must use a model that supports function calling (such as the defaults `gpt-4o`, `gpt-4o-mini`, etc.). Some local or older models may not support the `tools` parameter.
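
`function_calling_parallel.py` (item 5) is likewise not shown in this diff. A hedged sketch of the parallel pattern, under the same setup assumptions as the loop sketch above (the real script may differ):

import json

response = client.chat.completions.create(
    model=MODEL_NAME,
    messages=messages,
    tools=tools,
    parallel_tool_calls=True,  # allow several tool calls in one response
)
message = response.choices[0].message
if message.tool_calls:
    messages.append(
        {
            "role": "assistant",
            "content": message.content or "",
            "tool_calls": [tc.model_dump() for tc in message.tool_calls],
        }
    )
    # Each tool_call_id needs a matching "tool" message before the follow-up
    # request, or the API rejects the conversation.
    for tool_call in message.tool_calls:
        args = json.loads(tool_call.function.arguments or "{}")
        result = tool_mapping[tool_call.function.name](**args)
        messages.append({"role": "tool", "tool_call_id": tool_call.id, "content": json.dumps(result)})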

function_calling_basic.py

Lines changed: 2 additions & 0 deletions
@@ -1,3 +1,4 @@
+import logging
 import os
 
 import azure.identity
@@ -6,6 +7,7 @@
 
 # Setup the OpenAI client to use either Azure, OpenAI.com, or Ollama API
 load_dotenv(override=True)
+logging.basicConfig(level=logging.DEBUG)
 API_HOST = os.getenv("API_HOST", "github")
 
 if API_HOST == "azure":
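
The added `logging.basicConfig(level=logging.DEBUG)` enables debug output on the root logger, which surfaces the `openai` client's request/response logging (carried by `httpx`). If that proves too noisy, a scoped alternative (a suggestion, not what this commit does) would be:

import logging

logging.basicConfig(level=logging.WARNING)
logging.getLogger("openai").setLevel(logging.DEBUG)  # client request/response detail
logging.getLogger("httpx").setLevel(logging.DEBUG)  # underlying HTTP traffic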

function_calling_errors.py

Lines changed: 170 additions & 0 deletions
@@ -0,0 +1,170 @@
import json
import os
from collections.abc import Callable
from typing import Any

import azure.identity
import openai
from dotenv import load_dotenv

# Setup the OpenAI client to use either Azure, OpenAI.com, or Ollama API
load_dotenv(override=True)
API_HOST = os.getenv("API_HOST", "github")

if API_HOST == "azure":
    token_provider = azure.identity.get_bearer_token_provider(
        azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
    )
    client = openai.OpenAI(
        base_url=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=token_provider,
    )
    MODEL_NAME = os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"]

elif API_HOST == "ollama":
    client = openai.OpenAI(base_url=os.environ["OLLAMA_ENDPOINT"], api_key="nokeyneeded")
    MODEL_NAME = os.environ["OLLAMA_MODEL"]

elif API_HOST == "github":
    client = openai.OpenAI(base_url="https://models.github.ai/inference", api_key=os.environ["GITHUB_TOKEN"])
    MODEL_NAME = os.getenv("GITHUB_MODEL", "openai/gpt-4o")

else:
    client = openai.OpenAI(api_key=os.environ["OPENAI_KEY"])
    MODEL_NAME = os.environ["OPENAI_MODEL"]


# ---------------------------------------------------------------------------
# Tool implementation(s)
# ---------------------------------------------------------------------------
def search_database(search_query: str, price_filter: dict | None = None) -> list[dict[str, Any]]:
    """Search database for relevant products based on user query"""
    if not search_query:
        raise ValueError("search_query is required")
    if price_filter:
        if "comparison_operator" not in price_filter or "value" not in price_filter:
            raise ValueError("Both comparison_operator and value are required in price_filter")
        if price_filter["comparison_operator"] not in {">", "<", ">=", "<=", "="}:
            raise ValueError("Invalid comparison_operator in price_filter")
        if not isinstance(price_filter["value"], int | float):
            raise ValueError("Value in price_filter must be a number")
    return [{"id": "123", "name": "Example Product", "price": 19.99}]


tool_mapping: dict[str, Callable[..., Any]] = {
    "search_database": search_database,
}

tools = [
    {
        "type": "function",
        "function": {
            "name": "search_database",
            "description": "Search database for relevant products based on user query",
            "parameters": {
                "type": "object",
                "properties": {
                    "search_query": {
                        "type": "string",
                        "description": "Query string to use for full text search, e.g. 'red shoes'",
                    },
                    "price_filter": {
                        "type": "object",
                        "description": "Filter search results based on price of the product",
                        "properties": {
                            "comparison_operator": {
                                "type": "string",
                                "description": "Operator to compare the column value, either '>', '<', '>=', '<=', '='",  # noqa
                            },
                            "value": {
                                "type": "number",
                                "description": "Value to compare against, e.g. 30",
                            },
                        },
                    },
                },
                "required": ["search_query"],
            },
        },
    }
]

messages: list[dict[str, Any]] = [
    {"role": "system", "content": "You are a product search assistant."},
    {"role": "user", "content": "Find me a red shirt under $20."},
]

print(f"Model: {MODEL_NAME} on Host: {API_HOST}\n")

# First model response (may include tool call)
response = client.chat.completions.create(
    model=MODEL_NAME,
    messages=messages,
    tools=tools,
    tool_choice="auto",
    parallel_tool_calls=False,
)

assistant_msg = response.choices[0].message

# If no tool calls were requested, just print the answer.
if not assistant_msg.tool_calls:
    print("Assistant:")
    print(assistant_msg.content)
else:
    # Append assistant message including tool call metadata
    messages.append(
        {
            "role": "assistant",
            "content": assistant_msg.content or "",
            "tool_calls": [tc.model_dump() for tc in assistant_msg.tool_calls],
        }
    )

    # Process each requested tool sequentially (though usually one here)
    for tool_call in assistant_msg.tool_calls:
        fn_name = tool_call.function.name
        raw_args = tool_call.function.arguments or "{}"
        print(f"Tool request: {fn_name}({raw_args})")

        target = tool_mapping.get(fn_name)
        if not target:
            tool_result: Any = f"ERROR: No implementation registered for tool '{fn_name}'"
        else:
            # Parse arguments safely
            try:
                parsed_args = json.loads(raw_args) if raw_args.strip() else {}
            except json.JSONDecodeError:
                parsed_args = {}
                tool_result = "Warning: Malformed JSON arguments received; proceeding with empty args"
            else:
                try:
                    tool_result = target(**parsed_args)
                except Exception as e:  # safeguard tool execution
                    tool_result = f"Tool execution error in {fn_name}: {e}"

        # Serialize tool output (dict or str) as JSON string for the model
        try:
            tool_content = json.dumps(tool_result)
        except Exception:
            # Fallback to string conversion if something isn't JSON serializable
            tool_content = json.dumps({"result": str(tool_result)})

        messages.append(
            {
                "role": "tool",
                "tool_call_id": tool_call.id,
                "name": fn_name,
                "content": tool_content,
            }
        )

    # Follow-up model response after supplying tool outputs
    followup = client.chat.completions.create(
        model=MODEL_NAME,
        messages=messages,
        tools=tools,
    )
    final_msg = followup.choices[0].message
    print("Assistant (final):")
    print(final_msg.content)
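
Design note: every failure path above (unregistered tool, malformed JSON arguments, tool exception, non-serializable output) is folded into the `tool` message content rather than raised, so the model sees the error text and can still produce a sensible final answer. The validation in `search_database` can also be exercised directly; for illustration:

search_database("red shoes")  # returns the stub product list
search_database("red shoes", price_filter={"comparison_operator": "<", "value": 20})  # valid filter
search_database("red shoes", price_filter={"comparison_operator": "between", "value": 20})  # raises ValueError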
