Commit dbfca47

Update AI Guard python SDK information
1 parent 63f4976 commit dbfca47

File tree: 1 file changed (+44, -49 lines)

content/en/security/ai_guard/onboarding.md

Lines changed: 44 additions & 49 deletions
@@ -294,71 +294,75 @@ Example:

SDK instrumentation allows you to set up and monitor AI Guard activity in real time.

+To use the SDK, ensure the following environment variables are configured:
+
+| Variable               | Value                    |
+|:-----------------------|:-------------------------|
+| `DD_AI_GUARD_ENABLED`  | `true`                   |
+| `DD_API_KEY`           | `<YOUR_API_KEY>`         |
+| `DD_APP_KEY`           | `<YOUR_APPLICATION_KEY>` |
+| `DD_TRACE_ENABLED`     | `true`                   |
+
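
These values are typically exported in the shell or set in the deployment environment. Purely as an illustrative sketch (assuming nothing reads the variables earlier in the process), they can also be set from Python before `ddtrace` and the AI Guard client initialize:

```py
import os

# Illustrative only: mirror the table above. Exporting these variables in the
# shell or deployment environment is the usual approach.
os.environ.setdefault("DD_AI_GUARD_ENABLED", "true")
os.environ.setdefault("DD_API_KEY", "<YOUR_API_KEY>")
os.environ.setdefault("DD_APP_KEY", "<YOUR_APPLICATION_KEY>")
os.environ.setdefault("DD_TRACE_ENABLED", "true")
```
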
{{< tabs >}}
{{% tab "Python" %}}
-Beginning with [dd-trace-py v3.14.0rc1][1], a new Python SDK has been introduced. This SDK provides a streamlined interface for invoking the REST API directly from Python code. The following examples demonstrate its usage:
+Beginning with [dd-trace-py v3.18.0][1], a new Python SDK has been introduced. This SDK provides a streamlined interface for invoking the REST API directly from Python code. The following examples demonstrate its usage:
+
+<div class="alert alert-info">
+The SDK introduced in **dd-trace-py v3.14.0-rc1** has been replaced; the current SDK uses the new standardized common message format.
+</div>

```py
-from ddtrace.appsec.ai_guard import new_ai_guard_client, Prompt, ToolCall
+from ddtrace.appsec.ai_guard import new_ai_guard_client, Function, Message, Options, ToolCall

-client = new_ai_guard_client(
-    api_key="<YOUR_API_KEY>",
-    app_key="<YOUR_APPLICATION_KEY>"
-)
+client = new_ai_guard_client()
```

#### Example: Evaluate a user prompt {#python-example-evaluate-user-prompt}

```py
# Check if processing the user prompt is considered safe
-prompt_evaluation = client.evaluate_prompt(
-    history=[
-        Prompt(role="system", content="You are an AI Assistant"),
+result = client.evaluate(
+    messages=[
+        Message(role="system", content="You are an AI Assistant"),
+        Message(role="user", content="What is the weather like today?"),
    ],
-    role="user",
-    content="What is the weather like today?"
+    options=Options(block=False)
)
```

-The `evaluate_prompt` method accepts the following parameters:
-- `history` (optional): A list of `Prompt` or `ToolCall` objects representing previous prompts or tool evaluations.
-- `role` (required): A string specifying the role associated with the prompt.
-- `content` (required): The content of the prompt.
+The `evaluate` method accepts the following parameters:
+- `messages` (required): list of messages (prompts or tool calls) for AI Guard to evaluate.
+- `options` (optional): an `Options` object with a `block` flag; if set to `True`, the SDK raises an `AIGuardAbortError` when the assessment is `DENY` or `ABORT` and the service is configured with blocking enabled.

-The method returns a Boolean value: `True` if the prompt is considered safe to execute, or `False` otherwise. If the REST API detects potentially dangerous content, it raises an `AIGuardAbortError`.
+The method returns an Evaluation object containing:
+- `action`: `ALLOW`, `DENY`, or `ABORT`.
+- `reason`: a natural language summary of the decision.
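
For illustration, a minimal sketch of how the non-blocking and blocking modes might be handled, assuming the returned Evaluation exposes `action` and `reason` as attributes and that `AIGuardAbortError` is importable from the same package (verify the exact import path against the installed SDK):

```py
from ddtrace.appsec.ai_guard import Message, Options, new_ai_guard_client
# Assumption: the abort error is exposed by the same package; adjust the
# import to match the SDK you have installed.
from ddtrace.appsec.ai_guard import AIGuardAbortError

client = new_ai_guard_client()
messages = [
    Message(role="system", content="You are an AI Assistant"),
    Message(role="user", content="What is the weather like today?"),
]

# Non-blocking mode: inspect the decision yourself.
result = client.evaluate(messages=messages, options=Options(block=False))
if result.action != "ALLOW":  # assumes `action` compares equal to its string form
    print(f"Refusing request: {result.reason}")

# Blocking mode: the SDK raises on DENY/ABORT when blocking is enabled for the service.
try:
    client.evaluate(messages=messages, options=Options(block=True))
except AIGuardAbortError:
    print("AI Guard blocked this request")
```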

#### Example: Evaluate a tool call {#python-example-evaluate-tool-call}

+In addition to user prompts, the same method can be used to evaluate tool calls:
+
```py
# Check if executing the shell tool is considered safe
-tool_evaluation = client.evaluate_tool(
-    tool_name="shell",
-    tool_args={"command": "shutdown"}
+result = client.evaluate(
+    messages=[
+        Message(
+            role="assistant",
+            tool_calls=[
+                ToolCall(
+                    id="call_1",
+                    function=Function(name="shell", arguments='{ "command": "shutdown" }'))
+            ],
+        )
+    ]
)
```
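
The decision can then gate whether the tool actually runs. The following sketch is illustrative only: `run_shell` is a hypothetical stand-in for your own tool executor, and the attribute access on the result assumes the Evaluation shape described earlier:

```py
import json
import subprocess

from ddtrace.appsec.ai_guard import Function, Message, ToolCall, new_ai_guard_client

client = new_ai_guard_client()


def run_shell(command: str) -> str:
    # Hypothetical tool executor; not part of the SDK.
    return subprocess.run(command, shell=True, capture_output=True, text=True).stdout


def guarded_shell(command: str) -> str:
    # Ask AI Guard about the pending tool call before executing it.
    result = client.evaluate(
        messages=[
            Message(
                role="assistant",
                tool_calls=[
                    ToolCall(
                        id="call_1",
                        function=Function(name="shell", arguments=json.dumps({"command": command})),
                    )
                ],
            )
        ]
    )
    if result.action == "ALLOW":  # assumes `action` compares equal to its string form
        return run_shell(command)
    raise RuntimeError(f"Tool call blocked by AI Guard: {result.reason}")
```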

-In this case, the `evaluate_tool` method accepts the following parameters:
-
-- `history` (optional): A list of `Prompt` or `ToolCall` objects representing previous prompts or tool evaluations.
-- `tool_name` (required): A string specifying the name of the tool to invoke.
-- `tool_args` (required): A dictionary containing the required tool arguments.
-
-The method returns a Boolean value: `True` if the tool invocation is considered safe, or `False` otherwise. If the REST API identifies potentially dangerous content, it raises an `AIGuardAbortError`.
-
-[1]: https://github.com/DataDog/dd-trace-py/releases/tag/v3.14.0rc1
+[1]: https://github.com/DataDog/dd-trace-py/releases/tag/v3.18.0
{{% /tab %}}
{{% tab "Javascript" %}}
Starting with [dd-trace-js v5.69.0][1], a new JavaScript SDK is available. This SDK offers a simplified interface for interacting with the REST API directly from JavaScript applications.

-To use the SDK, ensure the following environment variables are configured:
-
-| Variable               | Value                    |
-|:-----------------------|:-------------------------|
-| `DD_AI_GUARD_ENABLED`  | `true`                   |
-| `DD_API_KEY`           | `<YOUR_API_KEY>`         |
-| `DD_APP_KEY`           | `<YOUR_APPLICATION_KEY>` |
-| `DD_TRACE_ENABLED`     | `true`                   |
-
The SDK is described in a dedicated [TypeScript][2] definition file. For convenience, the following sections provide practical usage examples:

#### Example: Evaluate a user prompt {#javascript-example-evaluate-user-prompt}
@@ -376,7 +380,7 @@ const result = await tracer.aiguard.evaluate([

The evaluate method returns a promise and receives the following parameters:
- `messages` (required): list of messages (prompts or tool calls) for AI Guard to evaluate.
-- `opts` (optional): dictionary with a block flag; if set to `true`, the SDK rejects the promise with `AIGuardAbortError` when the assessment is `DENY` or `ABORT`.
+- `opts` (optional): dictionary with a block flag; if set to `true`, the SDK rejects the promise with `AIGuardAbortError` when the assessment is `DENY` or `ABORT` and the service is configured with blocking enabled.

The method returns a promise that resolves to an Evaluation object containing:
- `action`: `ALLOW`, `DENY`, or `ABORT`.
@@ -412,15 +416,6 @@ const result = await tracer.aiguard.evaluate([
{{% tab "Java" %}}
Beginning with [dd-trace-java v1.54.0][1], a new Java SDK is available. This SDK provides a streamlined interface for directly interacting with the REST API from Java applications.

-Before using the SDK, make sure the following environment variables are properly configured:
-
-| Variable               | Value                    |
-|:-----------------------|:-------------------------|
-| `DD_AI_GUARD_ENABLED`  | `true`                   |
-| `DD_API_KEY`           | `<YOUR_API_KEY>`         |
-| `DD_APP_KEY`           | `<YOUR_APPLICATION_KEY>` |
-| `DD_TRACE_ENABLED`     | `true`                   |
-
The following sections provide practical usage examples:

#### Example: Evaluate a user prompt {#java-example-evaluate-user-prompt}
@@ -439,7 +434,7 @@ final AIGuard.Evaluation evaluation = AIGuard.evaluate(

The evaluate method receives the following parameters:
- `messages` (required): list of messages (prompts or tool calls) for AI Guard to evaluate.
-- `options` (optional): object with a block flag; if set to `true`, the SDK throws an `AIGuardAbortError` when the assessment is `DENY` or `ABORT`.
+- `options` (optional): object with a block flag; if set to `true`, the SDK throws an `AIGuardAbortError` when the assessment is `DENY` or `ABORT` and the service is configured with blocking enabled.

The method returns an Evaluation object containing:
- `action`: `ALLOW`, `DENY`, or `ABORT`.
@@ -508,4 +503,4 @@ Follow the instructions to create a new [metric monitor][11].
[9]: /monitors/
[10]: /monitors/types/apm/?tab=traceanalytics
[11]: /monitors/types/metric/
-[12]: https://platform.openai.com/docs/api-reference/chat/object
+[12]: https://platform.openai.com/docs/api-reference/chat/object
0 commit comments

Comments
 (0)