26 changes: 26 additions & 0 deletions .github/actions/azure-functions-integration-setup/action.yml
@@ -0,0 +1,26 @@
name: Azure Functions Integration Test Setup
description: Prepare local emulators and tools for Azure Functions integration tests

runs:
using: "composite"
steps:
- name: Start Durable Task Scheduler Emulator
shell: bash
run: |
if [ "$(docker ps -aq -f name=dts-emulator)" ]; then
docker rm -f dts-emulator
fi
docker run -d --name dts-emulator -p 8080:8080 -p 8082:8082 mcr.microsoft.com/dts/dts-emulator:latest
timeout 30 bash -c 'until curl --silent --fail http://localhost:8080/healthz; do sleep 1; done'
- name: Start Azurite (Azure Storage emulator)
shell: bash
run: |
if [ "$(docker ps -aq -f name=azurite)" ]; then
docker rm -f azurite
fi
docker run -d --name azurite -p 10000:10000 -p 10001:10001 -p 10002:10002 mcr.microsoft.com/azure-storage/azurite
- name: Install Azure Functions Core Tools
shell: bash
run: |
npm install -g azure-functions-core-tools@4 --unsafe-perm true
func --version
8 changes: 8 additions & 0 deletions .github/workflows/python-merge-tests.yml
@@ -66,6 +66,11 @@ jobs:
AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME: ${{ vars.AZUREOPENAI__RESPONSESDEPLOYMENTNAME }}
AZURE_OPENAI_ENDPOINT: ${{ vars.AZUREOPENAI__ENDPOINT }}
LOCAL_MCP_URL: ${{ vars.LOCAL_MCP__URL }}
# For Azure Functions integration tests
FUNCTIONS_WORKER_RUNTIME: "python"
DURABLE_TASK_SCHEDULER_CONNECTION_STRING: "Endpoint=http://localhost:8080;TaskHub=default;Authentication=None"
AzureWebJobsStorage: "UseDevelopmentStorage=true"

defaults:
run:
working-directory: python
@@ -87,6 +92,9 @@ jobs:
client-id: ${{ secrets.AZURE_CLIENT_ID }}
tenant-id: ${{ secrets.AZURE_TENANT_ID }}
subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
- name: Set up Azure Functions Integration Test Emulators
uses: ./.github/actions/azure-functions-integration-setup
id: azure-functions-setup
- name: Test with pytest
timeout-minutes: 10
run: uv run poe all-tests -n logical --dist loadfile --dist worksteal --timeout 300 --retries 3 --retry-delay 10
@@ -0,0 +1,12 @@
# Azure OpenAI Configuration
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME=your-deployment-name
AZURE_OPENAI_API_KEY=your-api-key-here
FUNCTIONS_WORKER_RUNTIME=python
RUN_INTEGRATION_TESTS=true

# Azure Functions Configuration
AzureWebJobsStorage=UseDevelopmentStorage=true
DURABLE_TASK_SCHEDULER_CONNECTION_STRING=Endpoint=http://localhost:8080;Authentication=None

# Note: TASKHUB_NAME is not required for integration tests; it is auto-generated per test run.
@@ -0,0 +1,81 @@
# Sample Integration Tests

Integration tests that validate the Durable Agent Framework samples by running them as Azure Functions apps.

## Setup

### 1. Create `.env` file

Copy `.env.example` to `.env` and fill in your Azure credentials:

```bash
cp .env.example .env
```

Required variables:
- `AZURE_OPENAI_ENDPOINT`
- `AZURE_OPENAI_CHAT_DEPLOYMENT_NAME`
- `AZURE_OPENAI_API_KEY`
- `AzureWebJobsStorage`
- `DURABLE_TASK_SCHEDULER_CONNECTION_STRING`
- `FUNCTIONS_WORKER_RUNTIME`

### 2. Start required services

**Azurite (for orchestration tests):**
```bash
docker run -d -p 10000:10000 -p 10001:10001 -p 10002:10002 mcr.microsoft.com/azure-storage/azurite
```

**Durable Task Scheduler:**
```bash
docker run -d -p 8080:8080 -p 8082:8082 mcr.microsoft.com/dts/dts-emulator:latest
```

## Running Tests

The tests automatically start and stop the Azure Functions app for each sample.

### Run all sample tests
```bash
uv run pytest packages/azurefunctions/tests/integration_tests -v
```

### Run specific sample
```bash
uv run pytest packages/azurefunctions/tests/integration_tests/test_01_single_agent.py -v
```

### Run with verbose output
```bash
uv run pytest packages/azurefunctions/tests/integration_tests -sv
```

## How It Works

Each test file uses pytest markers to automatically configure and start the function app:

```python
pytestmark = [
pytest.mark.sample("01_single_agent"),
pytest.mark.usefixtures("function_app_for_test"),
skip_if_azure_functions_integration_tests_disabled,
]
```

The `function_app_for_test` fixture:
1. Loads environment variables from `.env`
2. Validates required variables are present
3. Starts the function app on a dynamically allocated port
4. Waits for the app to be ready
5. Runs your tests
6. Tears down the function app
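
A test module only needs the markers shown above plus the `base_url` fixture exposed by `conftest.py`. A minimal sketch (the agent name `Joker` and the endpoint path mirror the single-agent sample; adjust them for your own sample):

```python
# Minimal sketch of a test that relies on the function_app_for_test fixture.
import pytest

from .testutils import SampleTestHelper, skip_if_azure_functions_integration_tests_disabled

pytestmark = [
    pytest.mark.sample("01_single_agent"),
    pytest.mark.usefixtures("function_app_for_test"),
    skip_if_azure_functions_integration_tests_disabled,
]


def test_agent_run(base_url: str) -> None:
    # base_url points at the app started by the fixture, e.g. http://localhost:<port>
    response = SampleTestHelper.post_json(
        f"{base_url}/api/agents/Joker/run",
        {"message": "Tell me a short joke.", "sessionId": "readme-example"},
    )
    # The agent may answer synchronously (200) or accept the request for async processing (202).
    assert response.status_code in (200, 202)
```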

## Troubleshooting

**Missing environment variables:**
Ensure your `.env` file contains all required variables from `.env.example`.

**Tests timeout:**
Check that Azure OpenAI credentials are valid and the service is accessible.
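
If the tests stall before ever reaching Azure OpenAI, it is also worth confirming that the local emulators are listening. A quick check (the ports match the `docker run` commands above; the `/healthz` probe is the same one the CI setup action polls):

```bash
docker ps   # both emulator containers should be running
curl --silent --fail http://localhost:8080/healthz && echo "DTS emulator healthy"
```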
@@ -0,0 +1 @@
# Copyright (c) Microsoft. All rights reserved.
121 changes: 121 additions & 0 deletions python/packages/azurefunctions/tests/integration_tests/conftest.py
@@ -0,0 +1,121 @@
# Copyright (c) Microsoft. All rights reserved.
"""
Pytest configuration for Durable Agent Framework tests.

This module provides fixtures and configuration for pytest.
"""

import subprocess
from collections.abc import Iterator, Mapping
from typing import Any

import pytest
import requests

from .testutils import (
FunctionAppStartupError,
build_base_url,
cleanup_function_app,
find_available_port,
get_sample_path_from_marker,
load_and_validate_env,
start_function_app,
wait_for_function_app_ready,
)


def pytest_configure(config: pytest.Config) -> None:
"""Register custom markers."""
config.addinivalue_line("markers", "orchestration: marks tests that use orchestrations (require Azurite)")
config.addinivalue_line(
"markers",
"sample(path): specify the sample directory path for the test (e.g., @pytest.mark.sample('01_single_agent'))",
)


@pytest.fixture(scope="session")
def function_app_running() -> bool:
"""
Check if the function app is running on localhost:7071.

This fixture can be used to skip tests if the function app is not available.
"""
try:
response = requests.get("http://localhost:7071/api/health", timeout=2)
return response.status_code == 200
except requests.exceptions.RequestException:
return False


@pytest.fixture(scope="session")
def skip_if_no_function_app(function_app_running: bool) -> None:
"""Skip test if function app is not running."""
if not function_app_running:
pytest.skip("Function app is not running on http://localhost:7071")


@pytest.fixture(scope="module")
def function_app_for_test(request: pytest.FixtureRequest) -> Iterator[dict[str, int | str]]:
"""
Start the function app for the corresponding sample based on marker.

This fixture:
1. Determines which sample to run from @pytest.mark.sample()
2. Validates environment variables
3. Starts the function app using 'func start'
4. Waits for the app to be ready
5. Tears down the app after tests complete

Usage:
@pytest.mark.sample("01_single_agent")
@pytest.mark.usefixtures("function_app_for_test")
class TestSample01SingleAgent:
...
"""
# Get sample path from marker
sample_path, error_message = get_sample_path_from_marker(request)
if error_message:
pytest.fail(error_message)

assert sample_path is not None, "Sample path must be resolved before starting the function app"

# Load .env file if it exists and validate required env vars
load_and_validate_env()

max_attempts = 3
last_error: Exception | None = None
func_process: subprocess.Popen[Any] | None = None
base_url = ""
port = 0

for _ in range(max_attempts):
port = find_available_port()
base_url = build_base_url(port)
func_process = start_function_app(sample_path, port)

try:
wait_for_function_app_ready(func_process, port)
last_error = None
break
except FunctionAppStartupError as exc:
last_error = exc
cleanup_function_app(func_process)
func_process = None

if func_process is None:
error_message = f"Function app failed to start after {max_attempts} attempt(s)."
if last_error is not None:
error_message += f" Last error: {last_error}"
pytest.fail(error_message)

try:
yield {"base_url": base_url, "port": port}
finally:
if func_process is not None:
cleanup_function_app(func_process)


@pytest.fixture(scope="module")
def base_url(function_app_for_test: Mapping[str, int | str]) -> str:
"""Expose the function app's base URL to tests."""
return str(function_app_for_test["base_url"])
@@ -0,0 +1,116 @@
# Copyright (c) Microsoft. All rights reserved.
"""
Integration Tests for Single Agent Sample

Tests the single agent sample with various message formats and session management.

The function app is automatically started by the test fixture.

Prerequisites:
- Azure OpenAI credentials configured (see packages/azurefunctions/tests/integration_tests/.env.example)
- Azurite or Azure Storage account configured

Usage:
uv run pytest packages/azurefunctions/tests/integration_tests/test_01_single_agent.py -v
"""

import pytest

from .testutils import SampleTestHelper, skip_if_azure_functions_integration_tests_disabled

# Module-level markers - applied to all tests in this file
pytestmark = [
pytest.mark.sample("01_single_agent"),
pytest.mark.usefixtures("function_app_for_test"),
skip_if_azure_functions_integration_tests_disabled,
]


class TestSampleSingleAgent:
"""Tests for 01_single_agent sample."""

@pytest.fixture(autouse=True)
def _set_base_url(self, base_url: str) -> None:
"""Provide agent-specific base URL for the tests."""
self.base_url = f"{base_url}/api/agents/Joker"

def test_health_check(self, base_url: str) -> None:
"""Test health check endpoint."""
response = SampleTestHelper.get(f"{base_url}/api/health")
assert response.status_code == 200
data = response.json()
assert data["status"] == "healthy"

def test_simple_message_json(self) -> None:
"""Test sending a simple message with JSON payload."""
response = SampleTestHelper.post_json(
f"{self.base_url}/run",
{"message": "Tell me a short joke about cloud computing.", "sessionId": "test-simple-json"},
)
# Agent can return 200 (immediate) or 202 (async with wait_for_completion=false)
assert response.status_code in [200, 202]
data = response.json()

if response.status_code == 200:
# Synchronous response - check result directly
assert data["status"] == "success"
assert "response" in data
assert data["message_count"] >= 1
else:
# Async response - check we got correlation info
assert "correlationId" in data or "sessionId" in data

def test_simple_message_plain_text(self) -> None:
"""Test sending a message with plain text payload."""
response = SampleTestHelper.post_text(f"{self.base_url}/run", "Tell me a short joke about networking.")
assert response.status_code in [200, 202]
data = response.json()

if response.status_code == 200:
assert data["status"] == "success"
assert "response" in data

def test_session_key_in_query(self) -> None:
"""Test using sessionKey in query parameter."""
response = SampleTestHelper.post_text(
f"{self.base_url}/run?sessionKey=test-query-session", "Tell me a short joke about weather in Texas."
)
assert response.status_code in [200, 202]
data = response.json()

if response.status_code == 200:
assert data["status"] == "success"

def test_conversation_continuity(self) -> None:
"""Test conversation context is maintained across requests."""
session_id = "test-continuity"

# First message
response1 = SampleTestHelper.post_json(
f"{self.base_url}/run",
{"message": "Tell me a short joke about weather in Seattle.", "sessionId": session_id},
)
assert response1.status_code in [200, 202]

if response1.status_code == 200:
data1 = response1.json()
assert data1["message_count"] == 1

# Second message in same session
response2 = SampleTestHelper.post_json(
f"{self.base_url}/run", {"message": "What about San Francisco?", "sessionId": session_id}
)
assert response2.status_code == 200
data2 = response2.json()
assert data2["message_count"] == 2
else:
# In async mode, we can't easily test message count
# Just verify we can make multiple calls
response2 = SampleTestHelper.post_json(
f"{self.base_url}/run", {"message": "What about Texas?", "sessionId": session_id}
)
assert response2.status_code == 202


if __name__ == "__main__":
pytest.main([__file__, "-v"])