diff --git a/v2.6-rc/docs/creative/brand-manifest.mdx b/v2.6-rc/docs/creative/brand-manifest.mdx index 692a6fe9..cb80d8f9 100644 --- a/v2.6-rc/docs/creative/brand-manifest.mdx +++ b/v2.6-rc/docs/creative/brand-manifest.mdx @@ -22,6 +22,7 @@ Brand manifests solve a key problem: how to efficiently identify advertisers and ### Key Benefits - **Know Your Customer**: Publishers can verify advertisers meet their standards +- **Privacy Transparency**: Link to privacy policy for consumer consent flows - **Minimal Friction**: Start with just a name or URL, expand as needed - **Cacheable**: Same brand manifest reused across all requests - **Standardized**: Consistent format across all AdCP implementations @@ -243,6 +244,7 @@ The structure of the brand manifest object itself (whether provided inline or ho | Field | Type | Description | |-------|------|-------------| | `name` | string | Brand or business name | +| `privacy_policy_url` | string (uri) | URL to the brand's privacy policy for consumer consent flows | | `logos` | Logo[] | Brand logo assets with semantic tags | | `colors` | Colors | Brand color palette (hex format) | | `fonts` | Fonts | Brand typography guidelines | @@ -325,10 +327,21 @@ The structure of the brand manifest object itself (whether provided inline or ho ```typescript { feed_url: string; // URL to product catalog feed - feed_format?: string; // Format of the product feed (default: "google_merchant_center") + feed_format?: string; // Format: "google_merchant_center" | "facebook_catalog" | "openai_product_feed" | "custom" categories?: string[]; // Product categories available in the catalog last_updated?: string; // When the product catalog was last updated (ISO 8601) update_frequency?: string; // How frequently the catalog is updated + agentic_checkout?: AgenticCheckout; // Optional checkout endpoint configuration +} +``` + +### AgenticCheckout Object + +```typescript +{ + endpoint: string; // Base URL for checkout session API + spec: string; // Checkout 
API specification (e.g., "openai_agentic_checkout_v1") + supported_payment_providers?: string[]; // Payment providers (e.g., ["stripe", "adyen"]) } ``` @@ -459,7 +472,7 @@ Large retailers should provide product feeds: } ``` -**Supported Feed Formats**: RSS, JSON Feed, Product CSV +**Supported Feed Formats**: Google Merchant Center, Facebook Catalog, [OpenAI Product Feed](https://developers.openai.com/commerce/specs/feed), Custom ### 5. Asset Libraries for Enterprise @@ -492,6 +505,85 @@ Enterprise brands with large asset libraries should provide explicit assets: } ``` +## Privacy Integration + +Brand manifests support privacy transparency through the `privacy_policy_url` field. This enables AI platforms to present explicit privacy choices to users before sharing personal data with advertisers. + +### Consumer Consent Flow + +When an AI assistant helps a user engage with an advertiser (booking a flight, making a purchase, etc.), the platform can use the brand manifest's privacy policy URL to: + +1. **Present explicit consent**: "May I share your details with Delta? [View their privacy policy]" +2. **Enable informed decisions**: Users can review data practices before data handoff +3. **Support machine-readable terms**: Works alongside [MyTerms/IEEE P7012](https://myterms.info) for automated privacy negotiation + +### Example with Privacy Policy + +```json +{ + "$schema": "https://adcontextprotocol.org/schemas/v2/core/brand-manifest.json", + "name": "Delta Airlines", + "url": "https://delta.com", + "privacy_policy_url": "https://delta.com/privacy" +} +``` + +### MyTerms Discovery + +For advertisers implementing [IEEE P7012 (MyTerms)](https://myterms.info), AI platforms can discover machine-readable privacy terms from the advertiser's domain (e.g., `/.well-known/myterms`). The brand manifest's `privacy_policy_url` serves as the human-readable fallback and explicit consent path. 
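As a sketch, the discovery order described above might look like this in code. The helper name is ours and purely illustrative; the `/.well-known/myterms` path is the discovery location mentioned above, and `privacy_policy_url` is the human-readable fallback:

```python
from urllib.parse import urlparse

def privacy_terms_candidates(manifest: dict) -> dict:
    """Derive candidate privacy-terms URLs from a brand manifest (illustrative only)."""
    domain = urlparse(manifest["url"]).netloc
    return {
        # machine-readable terms discovered from the advertiser's domain
        "machine_readable": f"https://{domain}/.well-known/myterms",
        # human-readable fallback and explicit consent path
        "human_readable": manifest.get("privacy_policy_url"),
    }
```

A platform would attempt the machine-readable URL first and fall back to presenting the human-readable policy for explicit consent.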
+ +## Agentic Commerce Integration + +Brand manifests support integration with AI commerce platforms through the `product_catalog` field. This enables AI agents to discover products and complete purchases on behalf of users. + +### OpenAI Commerce + +For merchants implementing [OpenAI's Commerce specifications](https://developers.openai.com/commerce), the brand manifest provides a bridge: + +```json +{ + "$schema": "https://adcontextprotocol.org/schemas/v2/core/brand-manifest.json", + "name": "Shop Example", + "url": "https://shopexample.com", + "privacy_policy_url": "https://shopexample.com/privacy", + "product_catalog": { + "feed_url": "https://shopexample.com/products.jsonl.gz", + "feed_format": "openai_product_feed", + "update_frequency": "daily", + "agentic_checkout": { + "endpoint": "https://api.shopexample.com/checkout_sessions", + "spec": "openai_agentic_checkout_v1", + "supported_payment_providers": ["stripe", "adyen"] + } + } +} +``` + +**Key fields for OpenAI Commerce:** + +| Field | Description | +|-------|-------------| +| `feed_format: "openai_product_feed"` | Indicates the feed conforms to [OpenAI's Product Feed spec](https://developers.openai.com/commerce/specs/feed) | +| `agentic_checkout.endpoint` | Base URL for [OpenAI's Agentic Checkout API](https://developers.openai.com/commerce/specs/checkout) | +| `agentic_checkout.spec` | Version identifier for the checkout spec | + +### Feed Format Mapping + +If you have an existing Google Merchant Center feed, here's how key fields map to OpenAI's spec: + +| OpenAI Field | Google Merchant Center | Notes | +|--------------|----------------------|-------| +| `item_id` | `id` | Direct mapping | +| `title` | `title` | Direct mapping | +| `description` | `description` | Direct mapping | +| `url` | `link` | Direct mapping | +| `brand` | `brand` | Direct mapping | +| `price` | `price` | OpenAI uses number + currency code | +| `availability` | `availability` | Same enum values | +| `image_url` | `image_link` | 
Direct mapping | +| `is_eligible_search` | N/A | OpenAI-specific flag | +| `is_eligible_checkout` | N/A | OpenAI-specific flag | + ## Evolution and Versioning Brand cards are versioned using the `metadata.version` field: diff --git a/v2.6-rc/docs/governance/content-standards/artifacts.mdx b/v2.6-rc/docs/governance/content-standards/artifacts.mdx new file mode 100644 index 00000000..7d2f834a --- /dev/null +++ b/v2.6-rc/docs/governance/content-standards/artifacts.mdx @@ -0,0 +1,304 @@ +--- +title: Artifacts +sidebar_position: 2 +--- + +# Artifacts + +An **artifact** is a unit of content adjacent to an ad placement. When evaluating brand safety and suitability, you're asking: "Is this artifact appropriate for my brand's ads?" + +## What Is an Artifact? + +Artifacts represent the content context where an ad appears: + +- A **news article** on a website +- A **podcast segment** between ad breaks +- A **video chapter** in a YouTube video +- A **social media post** in a feed +- A **scene** in a CTV show +- An **AI-generated image** in a chat conversation + +Artifacts are identified by `property_id` + `artifact_id` - the property defines where the content lives, and the artifact_id is an opaque identifier for that specific piece of content. The artifact_id scheme is flexible - it could be a URL path, a platform-specific ID, or any consistent identifier the property owner uses internally. 
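Because the scheme is opaque, a consumer can treat the pairing as a composite key and compare artifacts by equality alone. The helper below is a hypothetical sketch, not part of the schema:

```python
def artifact_key(artifact: dict) -> tuple:
    """Composite identity for an artifact.

    Consumers treat artifact_id as opaque - only equality matters,
    not the property owner's internal scheme.
    """
    prop = artifact["property_id"]
    return (prop["type"], prop["value"], artifact["artifact_id"])
```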
+ +## Structure + +**Schema**: [artifact.json](https://adcontextprotocol.org/schemas/v2/content-standards/artifact.json) + +```json +{ + "property_id": {"type": "domain", "value": "reddit.com"}, + "artifact_id": "r_fitness_post_abc123", + "assets": [ + {"type": "text", "role": "title", "content": "Best protein sources for muscle building", "language": "en"}, + {"type": "text", "role": "paragraph", "content": "Looking for recommendations on high-quality protein sources...", "language": "en"}, + {"type": "image", "url": "https://cdn.reddit.com/fitness-image.jpg", "alt_text": "Person lifting weights"} + ] +} +``` + +### Required Fields + +| Field | Description | +|-------|-------------| +| `property_id` | Where this artifact lives - uses standard identifier types (`domain`, `app_id`, `apple_podcast_id`, etc.) | +| `artifact_id` | Unique identifier within the property - the property owner defines their scheme | +| `assets` | Content in document order - text blocks, images, video, audio | + +### Optional Fields + +| Field | Description | +|-------|-------------| +| `variant_id` | Identifies a specific variant (A/B test, translation, temporal version) | +| `format_id` | Reference to format registry (same as creative formats) | +| `url` | Web URL if the artifact has one | +| `metadata` | Artifact-level metadata (Open Graph, JSON-LD, author info) | +| `published_time` | When the artifact was published | +| `last_update_time` | When the artifact was last modified | + +## Variants + +The same artifact may have multiple variants: + +- **Translations** - English version vs Spanish version +- **A/B tests** - Different headlines being tested +- **Temporal versions** - Content that changed on Wednesday + +Use `variant_id` to distinguish between them: + +```json +// English version +{ + "property_id": {"type": "domain", "value": "nytimes.com"}, + "artifact_id": "article_12345", + "variant_id": "en", + "assets": [ + {"type": "text", "role": "title", "content": "Breaking News 
Story", "language": "en"} + ] +} + +// Spanish translation +{ + "property_id": {"type": "domain", "value": "nytimes.com"}, + "artifact_id": "article_12345", + "variant_id": "es", + "assets": [ + {"type": "text", "role": "title", "content": "Noticia de última hora", "language": "es"} + ] +} + +// A/B test variant +{ + "property_id": {"type": "domain", "value": "nytimes.com"}, + "artifact_id": "article_12345", + "variant_id": "headline_test_b", + "assets": [ + {"type": "text", "role": "title", "content": "Alternative Headline Being Tested", "language": "en"} + ] +} +``` + +The combination of `artifact_id` + `variant_id` must be unique within a property. This lets you track which variant a user saw and correlate it with delivery reports. + +## Asset Types + +Assets are the actual content within an artifact. Everything is an asset - titles, paragraphs, images, videos. + +### Text + +```json +{"type": "text", "role": "title", "content": "Article Title", "language": "en"} +{"type": "text", "role": "paragraph", "content": "The article body text...", "language": "en"} +{"type": "text", "role": "description", "content": "A summary of the article", "language": "en"} +{"type": "text", "role": "heading", "content": "Section Header", "heading_level": 2} +{"type": "text", "role": "quote", "content": "A quoted statement"} +``` + +Roles: `title`, `description`, `paragraph`, `heading`, `caption`, `quote`, `list_item` + +Each text asset can have its own `language` tag for mixed-language content. 
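For example, an evaluator might group text content by language tag before classification. A minimal sketch (the helper is ours, not part of the spec; `"und"` for untagged text is our convention):

```python
def text_by_language(artifact: dict) -> dict:
    """Group text asset content by language tag ('und' when untagged)."""
    grouped: dict = {}
    for asset in artifact.get("assets", []):
        if asset.get("type") == "text":
            # each text asset carries its own language tag
            grouped.setdefault(asset.get("language", "und"), []).append(asset["content"])
    return grouped
```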
+ +### Image + +```json +{ + "type": "image", + "url": "https://cdn.example.com/photo.jpg", + "alt_text": "Description of the image" +} +``` + +### Video + +```json +{ + "type": "video", + "url": "https://cdn.example.com/video.mp4", + "transcript": "Full transcript of the video content...", + "duration_ms": 180000 +} +``` + +### Audio + +```json +{ + "type": "audio", + "url": "https://cdn.example.com/podcast.mp3", + "transcript": "Today we're discussing...", + "duration_ms": 3600000 +} +``` + +## Metadata + +Artifact-level metadata describes the artifact as a whole, not individual assets: + +```json +{ + "metadata": { + "author": "Jane Smith", + "canonical": "https://example.com/article/12345", + "open_graph": { + "og:type": "article", + "og:site_name": "Example News" + }, + "json_ld": [ + { + "@type": "NewsArticle", + "datePublished": "2025-01-15" + } + ] + } +} +``` + +This is separate from assets because it's about the artifact container, not the content itself. + +## Secured Asset Access + +Many assets aren't publicly accessible - AI-generated images, private conversations, paywalled content. The artifact schema supports authenticated access. + +### Pre-Configuration (Recommended) + +For ongoing partnerships, configure access once during onboarding rather than per-request: + +1. **Service account sharing** - Grant the verification agent access to your cloud storage +2. **OAuth client credentials** - Set up machine-to-machine authentication +3. **API key exchange** - Share long-lived API keys during setup + +This happens during the activation phase when the seller first receives content standards from a buyer. + +### Per-Asset Authentication + +When pre-configuration isn't possible, include access credentials with individual assets: + +```json +{ + "type": "image", + "url": "https://cdn.openai.com/secured/img_abc123.png", + "access": { + "method": "bearer_token", + "token": "eyJhbGciOiJIUzI1NiIs..." 
+ } +} +``` + +**Note on token size**: For artifacts with many assets, per-asset tokens can significantly increase payload size. Consider: + +1. **Pre-configured access** - Set up service account access once during onboarding +2. **Shared token reference** - Define tokens at the artifact level and reference by ID +3. **Signed URLs** - Use pre-signed URLs where the URL itself is the credential + +The `url` field is the access URL - it may differ from the artifact's canonical/published URL. For example, a published article at `https://news.example.com/article/123` might have assets served from `https://cdn.example.com/secured/...`. + +### Access Methods + +| Method | Use Case | +|--------|----------| +| `bearer_token` | OAuth2 bearer token in Authorization header | +| `service_account` | GCP/AWS service account credentials | +| `signed_url` | Pre-signed URL with embedded credentials (URL itself is the credential) | + +### Service Account Setup + +For GCP: + +```json +{ + "access": { + "method": "service_account", + "provider": "gcp", + "credentials": { + "type": "service_account", + "project_id": "my-project", + "private_key_id": "...", + "private_key": "-----BEGIN PRIVATE KEY-----\n...", + "client_email": "verification-agent@my-project.iam.gserviceaccount.com" + } + } +} +``` + +For AWS: + +```json +{ + "access": { + "method": "service_account", + "provider": "aws", + "credentials": { + "access_key_id": "AKIAIOSFODNN7EXAMPLE", + "secret_access_key": "...", + "region": "us-east-1" + } + } +} +``` + +### Pre-Signed URLs + +For one-off access without sharing credentials: + +```json +{ + "type": "video", + "url": "https://storage.googleapis.com/bucket/video.mp4?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=...&X-Goog-Signature=...", + "access": { + "method": "signed_url" + } +} +``` + +The URL itself contains the credentials - no additional authentication needed. 
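A fetcher might dispatch on the access method when building requests. A sketch covering only the header-based cases (`service_account` is omitted because it requires a cloud provider SDK; the helper name is ours):

```python
def asset_request_headers(asset: dict) -> dict:
    """Build HTTP headers for fetching an asset.

    signed_url and public assets need no headers - for signed URLs
    the URL itself is the credential.
    """
    access = asset.get("access")
    if access is None or access["method"] == "signed_url":
        return {}
    if access["method"] == "bearer_token":
        return {"Authorization": f"Bearer {access['token']}"}
    raise NotImplementedError(f"access method not handled here: {access['method']}")
```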
+ +## Property Identifier Types + +The `property_id` uses standard identifier types from the AdCP property schema: + +| Type | Example | Use Case | +|------|---------|----------| +| `domain` | `reddit.com` | Websites | +| `app_id` | `com.spotify.music` | Mobile apps | +| `apple_podcast_id` | `1234567890` | Apple Podcasts | +| `spotify_show_id` | `4rOoJ6Egrf8K2IrywzwOMk` | Spotify podcasts | +| `youtube_channel_id` | `UCddiUEpeqJcYeBxX1IVBKvQ` | YouTube channels | +| `rss_url` | `https://feeds.example.com/podcast.xml` | RSS feeds | + +## Artifact ID Schemes + +The property owner defines their artifact_id scheme. Examples: + +| Property Type | Artifact ID Pattern | Example | +|---------------|---------------------|---------| +| News website | `article_{id}` | `article_12345` | +| Reddit | `r_{subreddit}_{post_id}` | `r_fitness_abc123` | +| Podcast | `episode_{num}_segment_{num}` | `episode_42_segment_2` | +| CTV | `show_{id}_s{season}e{episode}_scene_{num}` | `show_abc_s3e5_scene_12` | +| Social feed | `post_{id}` | `post_xyz789` | + +The verification agent doesn't need to understand the scheme - it's opaque. The property owner uses it to correlate artifacts with their content. 
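For instance, a property owner following the Reddit-style pattern from the table might generate IDs like this. Purely illustrative - consumers never parse these, only the owner does:

```python
def make_post_artifact_id(subreddit: str, post_id: str) -> str:
    """One possible owner-side scheme following the r_{subreddit}_{post_id} pattern."""
    return f"r_{subreddit}_{post_id}"
```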
+ +## Related + +- [Content Standards Overview](/docs/governance/content-standards) - How artifacts fit into the content standards workflow +- [calibrate_content](/docs/governance/content-standards/tasks/calibrate_content) - Sending artifacts for calibration diff --git a/v2.6-rc/docs/governance/content-standards/implementation-guide.mdx b/v2.6-rc/docs/governance/content-standards/implementation-guide.mdx new file mode 100644 index 00000000..c8efbcb7 --- /dev/null +++ b/v2.6-rc/docs/governance/content-standards/implementation-guide.mdx @@ -0,0 +1,381 @@ +--- +title: Implementation Guide +description: How to implement the Content Standards Protocol as a sales agent, orchestrator, or governance agent +--- + +This guide covers implementation patterns for the Content Standards Protocol from three perspectives: + +1. **Sales agents** accepting and enforcing brand safety standards +2. **Orchestrators** coordinating content standards across publishers +3. **Governance agents** providing content evaluation services + +## Roles Overview + +Before diving in, understand who does what: + +| Role | Examples | Responsibilities | +|------|----------|-----------------| +| **Orchestrator** | DSP, trading desk, agency platform | Coordinates media buying; passes standards refs to sellers; receives artifacts for validation | +| **Sales Agent** | Publisher ad server, SSP | Accepts standards; calibrates local model; enforces during delivery; pushes artifacts | +| **Governance Agent** | IAS, DoubleVerify, brand safety service | Hosts standards; implements `calibrate_content` and `validate_content_delivery` | + +The typical flow: + +``` +1. Brand sets up standards with governance agent (via orchestrator) +2. Orchestrator sends standards_ref with get_products/create_media_buy +3. Sales agent accepts or rejects based on capability +4. Sales agent calibrates against governance agent +5. Sales agent enforces during delivery +6. 
Sales agent provides artifacts (push via webhook or pull via get_media_buy_artifacts) +7. Orchestrator forwards artifacts to governance agent for validation +``` + +--- + +## For Sales Agents + +If you're a sales agent (publisher ad server, SSP, or platform), implementing Content Standards means accepting orchestrator policies and enforcing them during delivery. + +### The Core Model + +When an orchestrator includes a `content_standards_ref` in their request, you must: + +1. **Fetch the standards** from the governance agent and evaluate if you can fulfill them +2. **Accept or reject** the buy based on your capabilities +3. **Calibrate** your evaluation model against the governance agent's expectations +4. **Enforce** the standards during delivery +5. **Provide artifacts** to the orchestrator for validation + +If you cannot fulfill the content standards requirements, **reject the buy**. Don't accept a campaign you can't properly enforce. + +### What You Need to Implement + +**1. Accept content standards references on `get_products` and `create_media_buy`** + +Orchestrators pass their standards via reference: + +```json +{ + "content_standards_ref": { + "standards_id": "nike_emea_brand_safety", + "agent_url": "https://brandsafety.ias.com" + } +} +``` + +When you receive this: +- Fetch the standards document from the governance agent at `agent_url` +- Evaluate whether you can enforce these requirements +- If you cannot meet the standards, reject the request +- If you can, accept and store the association with the media buy + +**2. Decide: Can you fulfill this?** + +The standards document contains: +- Policy (natural language description of acceptable/unacceptable content) +- Calibration exemplars (pass/fail examples to interpret edge cases) +- Floor (reference to external baseline safety standards) + +Review these requirements against your capabilities. 
Different publishers have different definitions of "adjacency" - Reddit might include comments, YouTube might include related videos, a news site might mean the article body. That's fine - as long as you can meaningfully enforce the brand's intent, accept the buy. + +If you can't - for example, they need adjacency data for a channel where it doesn't apply (like billboards) - reject the buy. + +**3. Build your evaluation capability** + +Use the standards document to train or configure your content evaluation system. This could be: +- An LLM with the rules as system prompt +- A classifier trained on the calibration examples +- A rules engine for deterministic evaluation +- A third-party brand safety vendor + +The protocol doesn't prescribe your implementation - just that you honor the standards. + +**4. Calibrate against the governance agent** + +After accepting the buy, calibrate your local model by calling `calibrate_content` on the governance agent. You send sample artifacts from your inventory, they tell you how they would rate them: + +```json +// You send examples from your inventory to the governance agent +{ + "standards_id": "nike_emea_brand_safety", + "artifacts": [ + { + "property_id": { "type": "domain", "value": "espn.com" }, + "artifact_id": "article_123", + "assets": [{ "type": "text", "role": "title", "content": "Marathon Runner Collapses at Finish Line" }] + } + ] +} + +// Governance agent responds with their interpretation +{ + "evaluations": [{ + "artifact_id": "article_123", + "suitable": true, + "confidence": 0.9, + "explanation": "Sports injury coverage in athletic context - aligns with brand's sports marketing positioning" + }] +} +``` + +Use these responses to train your local model. If you disagree with a rating, ask follow-up questions to understand the governance agent's reasoning. + +**5. Push artifacts to the orchestrator** + +After delivery, push artifacts to the orchestrator so they can validate against the governance agent. 
Configure via `artifact_webhook` in the media buy: + +```json +// Artifact webhook payload (you send this to the orchestrator) +{ + "media_buy_id": "mb_nike_reddit_q1", + "batch_id": "batch_20250115_001", + "timestamp": "2025-01-15T11:00:00Z", + "artifacts": [ + { + "artifact": { + "property_id": { "type": "domain", "value": "reddit.com" }, + "artifact_id": "r_fitness_abc123", + "assets": [{ "type": "text", "role": "title", "content": "Best protein sources" }] + }, + "delivered_at": "2025-01-15T10:30:00Z", + "impression_id": "imp_abc123" + } + ] +} +``` + +Also support `get_media_buy_artifacts` for orchestrators who prefer to poll. + +### Implementation Checklist + +- [ ] Parse `content_standards_ref` in `get_products` and `create_media_buy` +- [ ] Fetch and evaluate standards documents from governance agents +- [ ] Reject buys you cannot fulfill - don't accept campaigns you can't enforce +- [ ] Build content evaluation against the standards document +- [ ] Call `calibrate_content` on the governance agent to align interpretation +- [ ] Implement `get_media_buy_artifacts` so orchestrators can retrieve content for validation +- [ ] Support `artifact_webhook` for push-based artifact delivery +- [ ] Support `reporting_webhook` for delivery metrics + +--- + +## For Orchestrators + +If you're an orchestrator (DSP, trading desk, or agency platform), you coordinate content standards between brands, governance agents, and publishers. + +### The Orchestration Pattern + +``` +Brand → Orchestrator → Governance Agent (setup) + → Sales Agent (buying) + ← Sales Agent (artifacts) + → Governance Agent (validation) + → Brand (reporting) +``` + +**1. Help brands set up standards with governance agents** + +Brands create content standards through a governance agent. 
You might facilitate this or the brand may do it directly: + +```json +// Standards stored at the governance agent +{ + "standards_id": "nike_emea_brand_safety", + "name": "Nike EMEA Brand Safety Policy", + "brand_id": "nike", + "policy": "Sports and fitness content is ideal. Avoid violence, adult themes, drugs.", + "calibration_exemplars": { + "pass": [ + { "type": "url", "value": "https://espn.com/nba/story/_/id/12345/lakers-win", "language": "en" } + ], + "fail": [ + { "type": "url", "value": "https://tabloid.example.com/celebrity-scandal", "language": "en" } + ] + } +} +``` + +**2. Pass standards references when buying** + +When discovering products or creating media buys, include the governance agent reference: + +```json +{ + "product_id": "espn_sports_display", + "packages": [...], + "content_standards_ref": { + "standards_id": "nike_emea_brand_safety", + "agent_url": "https://brandsafety.ias.com" + }, + "artifact_webhook": { + "url": "https://your-platform.com/webhooks/artifacts", + "authentication": { + "schemes": ["HMAC-SHA256"], + "credentials": "your-shared-secret-min-32-chars" + }, + "delivery_mode": "batched", + "batch_frequency": "hourly", + "sampling_rate": 0.25 + } +} +``` + +If the publisher cannot fulfill the standards, they should reject the buy. Handle rejections gracefully and find alternative inventory. + +**3. Receive artifacts from sales agents** + +Sales agents push artifacts to your `artifact_webhook` endpoint. 
Forward them to the governance agent for validation: + +```python +# Receive artifact webhook from sales agent +@app.post("/webhooks/artifacts") +async def receive_artifacts(payload: ArtifactWebhookPayload): + # Forward to governance agent for validation + validation_result = await governance_agent.validate_content_delivery( + standards_id=get_standards_id(payload.media_buy_id), + records=[ + {"artifact": a.artifact, "record_id": a.impression_id} + for a in payload.artifacts + ] + ) + + # Log any failures + for result in validation_result.results: + if any(f.status == "failed" for f in result.features): + log_brand_safety_incident(payload.media_buy_id, result) + + return {"status": "received", "batch_id": payload.batch_id} +``` + +**4. Report to brands** + +Surface validation results to the brand: +- **Incidents**: Content that didn't meet standards +- **Coverage**: What percentage of delivery was validated +- **Trends**: Changes in content safety over time + +### Implementation Checklist + +- [ ] Facilitate brand setup with governance agents +- [ ] Include `content_standards_ref` in `get_products` and `create_media_buy` requests +- [ ] Configure `artifact_webhook` to receive artifacts from sales agents +- [ ] Handle rejections from publishers who can't fulfill standards +- [ ] Forward artifacts to governance agent via `validate_content_delivery` +- [ ] Build reporting for brands + +--- + +## For Governance Agents + +If you're a governance agent (IAS, DoubleVerify, or brand safety service), you provide content evaluation as a service. + +### What You Implement + +**1. Host and serve content standards** + +Store standards configurations and expose them via `get_content_standards`: + +```json +// Response to get_content_standards +{ + "standards_id": "nike_emea_brand_safety", + "version": "1.2.0", + "name": "Nike EMEA - all digital channels", + "policy": "Sports and fitness content is ideal. 
Lifestyle content about health is good...", + "calibration_exemplars": { + "pass": [...], + "fail": [...] + } +} +``` + +**2. Implement `calibrate_content`** + +Sales agents call this to align their local models before campaign execution. They send sample artifacts, you respond with how the brand would rate them: + +```python +def calibrate_content(standards_id: str, artifacts: list) -> dict: + standards = get_standards(standards_id) + evaluations = [] + + for artifact in artifacts: + # Evaluate against brand's policy + result = evaluate_against_policy(artifact, standards) + evaluations.append({ + "artifact_id": artifact["artifact_id"], + "suitable": result.suitable, + "confidence": result.confidence, + "explanation": result.explanation # Help them understand your reasoning + }) + + return {"evaluations": evaluations} +``` + +Calibration is a dialogue - be prepared for follow-up questions and edge cases. + +**3. Implement `validate_content_delivery`** + +Orchestrators call this to validate artifacts after delivery. 
Batch evaluation at scale: + +```python +def validate_content_delivery(standards_id: str, records: list) -> dict: + standards = get_standards(standards_id) + results = [] + + for record in records: + features = [] + for feature in ["brand_safety", "brand_suitability"]: + evaluation = evaluate_feature(record["artifact"], standards, feature) + features.append({ + "feature_id": feature, + "status": "passed" if evaluation.passed else "failed", + "value": evaluation.value, + "message": evaluation.message if not evaluation.passed else None + }) + results.append({ + "record_id": record["record_id"], + "features": features + }) + + return { + "summary": compute_summary(results), + "results": results + } +``` + +### Implementation Checklist + +- [ ] Implement `create_content_standards` for brands to set up policies +- [ ] Implement `get_content_standards` for sales agents to fetch policies +- [ ] Implement `calibrate_content` for sales agents to align their models +- [ ] Implement `validate_content_delivery` for orchestrators to validate delivery +- [ ] Support dialogue in calibration (follow-up questions, edge cases) + +--- + +## Content Access Pattern + +All three roles may need to exchange content securely. The `content_access` pattern provides authenticated access to a URL namespace: + +```json +{ + "content_access": { + "url_pattern": "https://cache.example.com/*", + "auth": { + "type": "bearer", + "token": "eyJ..." + } + } +} +``` + +- **url_pattern**: URLs matching this pattern use this auth +- **auth.type**: Authentication method (`bearer`, `api_key`, `signed_url`) +- **auth.token**: The credential + +Include this in: +- `get_content_standards` response (governance agent → sales agent: "fetch examples here") +- `get_media_buy_artifacts` response (sales agent → orchestrator: "fetch content here") + +This avoids per-asset tokens and keeps payloads small while enabling secure content exchange. 
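A sketch of how a recipient might apply a `content_access` grant when fetching content. The glob interpretation of `url_pattern` and the `X-API-Key` header name are assumptions on our part, not defined by the pattern above:

```python
from fnmatch import fnmatch

def auth_headers_for(url: str, content_access: dict) -> dict:
    """Headers for fetching a URL covered by a content_access grant."""
    # only URLs under the granted namespace get credentials attached
    if not fnmatch(url, content_access["url_pattern"]):
        return {}
    auth = content_access["auth"]
    if auth["type"] == "bearer":
        return {"Authorization": f"Bearer {auth['token']}"}
    if auth["type"] == "api_key":
        return {"X-API-Key": auth["token"]}  # header name is an assumption
    return {}  # signed_url: the credential is embedded in the URL itself
```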
diff --git a/v2.6-rc/docs/governance/content-standards/index.mdx b/v2.6-rc/docs/governance/content-standards/index.mdx new file mode 100644 index 00000000..eda9db83 --- /dev/null +++ b/v2.6-rc/docs/governance/content-standards/index.mdx @@ -0,0 +1,358 @@ +--- +title: Overview +sidebar_position: 1 +--- + +# Content Standards Protocol + +The Content Standards Protocol enables **privacy-preserving brand safety** for ephemeral and sensitive content that cannot leave a publisher's infrastructure. + +## The Problem + +Traditional brand safety relies on third-party verification: send your content to IAS or DoubleVerify, they evaluate it, return a verdict. This works for static web pages. It fundamentally cannot work for: + +- **AI-generated content** - ChatGPT responses, DALL-E images that exist only in a user session +- **Private conversations** - Content in messaging apps, private social feeds +- **Ephemeral content** - Stories, live streams, real-time feeds that disappear +- **Privacy-regulated content** - GDPR-protected data that cannot be exported + +For these platforms, **there is no traditional verification option**. The content simply cannot leave. OpenAI cannot send user conversations to an external service. A messaging app cannot export private chats. A streaming platform cannot share real-time content before it disappears. + +Yet these are exactly the environments where advertising is growing fastest - and where brands most need safety guarantees. Without a privacy-preserving approach, brands either avoid these channels entirely or accept unknown risk. + +## The Solution: Calibration-Based Alignment + +Content Standards solves this by **using agents to protect privacy**. It's a three-phase model where no sensitive content ever leaves the publisher's infrastructure: + +| Phase | Where It Runs | What Happens | +|-------|---------------|--------------| +| **1. 
Calibration** | External (safe data only) | Publisher and verification agent align on policy interpretation using synthetic examples or public samples - no PII, no sensitive content | +| **2. Local Execution** | Inside publisher's walls | Publisher runs evaluation on every impression using a local model trained during calibration - content never leaves | +| **3. Validation** | Statistical sampling | Verification agent audits a sample to detect drift - both parties can verify the system is working without exposing PII | + +This inverts the traditional model. Instead of "send us your content, we'll evaluate it," it's "we'll teach you our standards, you evaluate locally, we'll audit statistically." + +**The key insight**: The execution engine runs entirely inside the publisher's infrastructure. For OpenAI, that means brand safety evaluation happens within their firewall - user conversations never leave. For a messaging app, it means private content stays private. The calibration and validation phases provide confidence that the local model is working correctly, without ever requiring access to sensitive data. + +## What It Covers + +- **Brand safety** - Is this content safe for *any* brand? (universal thresholds like hate speech, illegal content) +- **Brand suitability** - Is this content appropriate for *my* brand? (brand-specific preferences and tone) + +## Key Concepts + +Content standards evaluation involves four key questions that buyers and sellers negotiate: + +1. **What content?** - What [artifacts](/docs/governance/content-standards/artifacts) to evaluate (the ad-adjacent content) +2. **How much adjacency?** - How many artifacts around the ad slot to consider +3. **What sampling rate?** - What percentage of traffic to evaluate +4. **How to calibrate?** - How to align on policy interpretation before runtime + +These parameters are negotiated between buyer and seller during product discovery and media buy creation. 
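As an illustration, the four negotiated parameters map naturally onto a single configuration record. A hypothetical shape, not a schema defined by the protocol:

```python
from dataclasses import dataclass

@dataclass
class NegotiatedStandards:
    """Illustrative bundle of the four negotiated parameters."""
    standards_id: str    # which policy to apply (what content passes)
    agent_url: str       # governance agent used for calibration
    adjacency: dict      # e.g. {"before": 2, "after": 2, "unit": "posts"}
    sampling_rate: float # fraction of traffic validated, 0.0 - 1.0
```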
+ +## Workflow + +```mermaid +sequenceDiagram + participant Brand + participant Buyer as Buyer Agent + participant Seller as Seller Agent + participant Verifier as Verification Agent + + Note over Brand,Verifier: 1. SETUP PHASE + Brand->>Verifier: create_content_standards (policy + calibration examples) + Verifier-->>Brand: standards_id + + Note over Brand,Verifier: 2. ACTIVATION PHASE + Brand->>Buyer: "Buy inventory from Reddit, use standards_id X" + Buyer->>Seller: create_media_buy (includes content_standards reference) + + Seller->>Verifier: calibrate_content (sample artifacts) + Verifier-->>Seller: verdict + explanation + Seller->>Verifier: "What about this edge case?" + Verifier-->>Seller: clarification + Note over Seller: Seller builds local model + + Note over Brand,Verifier: 3. RUNTIME PHASE + loop High-volume decisioning + Note over Seller: Local model evaluates artifacts + end + + Buyer->>Seller: get_media_buy_artifacts (sampled) + Seller-->>Buyer: Content artifacts + Buyer->>Verifier: validate_content_delivery + Verifier-->>Buyer: Validation results +``` + +**Key insight**: Runtime decisioning happens locally at the seller (for scale). Buyers pull content samples from sellers and validate against the verification agent. + +## Adjacency + +How much content around the ad slot should be evaluated? + +| Context | Adjacency Examples | +|---------|-------------------| +| **News article** | The article where the ad appears | +| **Social feed** | 1-2 posts above and below the ad slot | +| **Podcast** | The segment before and after the ad break | +| **CTV** | 1-2 scenes before and after the ad pod | +| **Infinite scroll** | Posts within the visible viewport | + +Adjacency requirements are defined by the seller in their product catalog (`get_products`). 
The buyer can filter products based on adjacency guarantees: + +```json +{ + "product_id": "reddit_feed_standard", + "content_standards_adjacency_definition": { + "before": 2, + "after": 2, + "unit": "posts" + } +} +``` + +### Adjacency Units + +| Unit | Use Case | +|------|----------| +| `posts` | Social feeds, forums, comment threads | +| `scenes` | CTV, streaming video content | +| `segments` | Podcasts, audio content | +| `seconds` | Time-based adjacency in video/audio | +| `viewports` | Infinite scroll contexts | +| `articles` | News sites, content aggregators | + +Different products may offer different adjacency guarantees at different price points. + +## Sampling Rate + +What percentage of traffic should be evaluated by the verification agent? + +| Rate | Use Case | +|------|----------| +| **100%** | Premium brand safety - every impression validated | +| **10-25%** | Standard monitoring - statistical confidence | +| **1-5%** | Spot checking - drift detection only | + +Sampling rate is negotiated in the media buy: + +```json +{ + "governance": { + "content_standards": { + "agent_url": "https://safety.ias.com/adcp", + "standards_id": "nike_brand_safety", + "sampling_rate": 0.25 + } + } +} +``` + +Higher sampling rates typically cost more but provide stronger guarantees. The seller is responsible for implementing the agreed sampling rate and reporting actual coverage. + +## Validation Thresholds + +When a seller calibrates their local model against a verification agent, there's an expected drift - the local model won't match the verification agent 100% of the time. **Validation thresholds** define acceptable drift between local execution and validation samples. 
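The drift check itself is simple to state in code. Assuming paired verdicts from the seller's local model and the verification agent (as returned by the validation tasks described later), a buyer can compute the agreement rate and compare it against the advertised threshold — this helper is a sketch, not a protocol-defined function:

```python
# Sketch: measuring drift between local-model verdicts and verification
# agent verdicts, then checking the seller's advertised threshold.

def agreement_rate(local_verdicts, verified_verdicts):
    """Fraction of sampled records where the local model matched the verifier."""
    if len(local_verdicts) != len(verified_verdicts):
        raise ValueError("verdict lists must be the same length")
    matches = sum(l == v for l, v in zip(local_verdicts, verified_verdicts))
    return matches / len(local_verdicts)

local    = ["pass", "pass", "fail", "pass", "pass"]
verified = ["pass", "fail", "fail", "pass", "pass"]

rate = agreement_rate(local, verified)   # 4 of 5 match = 0.8
within_sla = rate >= 0.95                # seller advertised 0.95
print(rate, within_sla)                  # 0.8 False -> remediation due
```

A result below the advertised threshold is what triggers the contractual remediation (makegoods, refunds) described below.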
+ +Sellers advertise their content safety capabilities in their product catalog: + +```json +{ + "product_id": "reddit_feed_premium", + "content_standards": { + "validation_threshold": 0.95, + "validation_threshold_description": "Local model matches verification agent 95% of the time" + } +} +``` + +| Threshold | Meaning | +|-----------|---------| +| **0.99** | Premium - local model is 99% aligned with verification agent | +| **0.95** | Standard - local model is 95% aligned | +| **0.90** | Budget - local model is 90% aligned | + +**This is a contractual guarantee.** If the seller's validation results show more drift than the advertised threshold, buyers can expect remediation (makegoods, refunds, etc.) just like any other delivery discrepancy. + +The threshold answers the key buyer question: "If I accept your local model, how confident can I be that you're enforcing my standards correctly?" + +## Policies + +Content Standards uses **natural language prompts** rather than rigid keyword lists: + +```json +{ + "policy": "Sports and fitness content is ideal. Lifestyle content about health is good. Entertainment is generally acceptable. Avoid content about violence, controversial politics, adult themes, or content portraying sedentary lifestyle positively. Block hate speech, illegal activities, or ongoing litigation against our company.", + "calibration_exemplars": { + "pass": [ + { + "property_id": {"type": "domain", "value": "espn.com"}, + "artifact_id": "nba_championship_recap_2024", + "assets": [{"type": "text", "role": "title", "content": "Championship Game Recap"}] + } + ], + "fail": [ + { + "property_id": {"type": "domain", "value": "tabloid.example.com"}, + "artifact_id": "scandal_story_123", + "assets": [{"type": "text", "role": "title", "content": "Celebrity Scandal Exposed"}] + } + ] + } +} +``` + +The policy prompt enables AI-powered verification agents to understand context and nuance. 
**Calibration** examples provide a training/test set that helps the agent interpret the policy correctly. + +See [Artifacts](/docs/governance/content-standards/artifacts) for details on artifact structure and secured asset access. + +## Scoped Standards + +Buyers typically maintain multiple standards configurations for different contexts - UK TV campaigns have different regulations than US display, and children's brands need stricter safety than adult beverages. + +```json +{ + "standards_id": "uk_tv_zero_calorie", + "name": "UK TV - zero-calorie brands", + "countries_all": ["GB"], + "channels_any": ["ctv", "linear_tv"] +} +``` + +**The buyer selects the appropriate `standards_id` when creating a media buy.** The seller receives a reference to the resolved standards - they don't need to do scope matching themselves. + +## Calibration + +Before running campaigns, sellers calibrate their local models against the verification agent. This is a **dialogue-based process** that may involve human review on either side: + +1. Seller sends sample artifacts to the verification agent +2. Verification agent returns verdicts with detailed explanations +3. Seller asks follow-up questions about edge cases +4. Process repeats until alignment is achieved + +**Human-in-the-loop**: Calibration often involves humans on both sides. A brand safety specialist at the buyer might review edge cases flagged by the verification agent. A content operations team at the seller might curate calibration samples and validate the local model's learning. The protocol supports async workflows where either party can pause for human review before responding. + +```json +// Seller: "Does this pass?" +{ + "artifact": { + "property_id": {"type": "domain", "value": "reddit.com"}, + "artifact_id": "r_news_politics_123", + "assets": [{"type": "text", "role": "title", "content": "Political News Article"}] + } +} + +// Verification agent: "No, because..." 
+{ + "verdict": "fail", + "explanation": "Political content is excluded by brand policy, even when balanced.", + "policy_alignment": { + "violations": [{ + "policy_text": "Avoid content about controversial politics", + "violation_reason": "Article discusses ongoing political controversy" + }] + } +} +``` + +See [calibrate_content](/docs/governance/content-standards/tasks/calibrate_content) for the full task specification. + +## Tasks + +### Discovery + +| Task | Description | +|------|-------------| +| [list_content_standards](/docs/governance/content-standards/tasks/list_content_standards) | List available standards configurations | +| [get_content_standards](/docs/governance/content-standards/tasks/get_content_standards) | Retrieve a specific standards configuration | + +### Management + +| Task | Description | +|------|-------------| +| [create_content_standards](/docs/governance/content-standards/tasks/create_content_standards) | Create a new standards configuration | +| [update_content_standards](/docs/governance/content-standards/tasks/update_content_standards) | Update an existing standards configuration | +| [delete_content_standards](/docs/governance/content-standards/tasks/delete_content_standards) | Delete a standards configuration | + +### Calibration & Validation + +| Task | Description | +|------|-------------| +| [calibrate_content](/docs/governance/content-standards/tasks/calibrate_content) | Collaborative dialogue to align on policy interpretation | +| [get_media_buy_artifacts](/docs/governance/content-standards/tasks/get_media_buy_artifacts) | Retrieve content artifacts from a media buy | +| [validate_content_delivery](/docs/governance/content-standards/tasks/validate_content_delivery) | Batch validation of content artifacts | + +## Typical Providers + +- **IAS** - Integral Ad Science +- **DoubleVerify** - Brand safety and verification +- **Scope3** - Sustainability-focused brand safety with prompt-based policies +- **Custom** - Brand-specific 
implementations + +## Future: Secure Enclaves + +The current model trusts the publisher to faithfully implement the calibrated standards. A future evolution uses **secure enclaves** (Trusted Execution Environments / TEEs) to provide cryptographic guarantees: + +```mermaid +flowchart TB + subgraph VS["Verification Service"] + Models["Models & Calibration Data"] + Results["Aggregate Results"] + end + + subgraph PUB["Publisher Infrastructure"] + subgraph TEE["Secure Enclave (TEE)"] + Agent["Containerized
Governance Agent"] + end + Content["Content Artifacts"] + end + + Models -->|"Pinhole IN:
models, policy, examples"| Agent + Agent -->|"Pinhole OUT:
pass rates, drift metrics"| Results + Content -->|"evaluate"| Agent + Agent -->|"pass/fail verdict"| Content + + style TEE fill:#e8f5e9,stroke:#4caf50 + style Agent fill:#c8e6c9,stroke:#388e3c + style PUB fill:#fafafa,stroke:#9e9e9e +``` + +**Content never crosses the pinhole** - only models flow in, only aggregates flow out. + +### The Pinhole Interface + +The enclave maintains a narrow, well-defined interface to the verification service: + +**Inbound (verification service → enclave):** +- Updated brand safety models +- Policy changes and calibration exemplars +- Configuration updates + +**Outbound (enclave → verification service):** +- Aggregated validation results (pass rates, drift metrics) +- Statistical summaries +- Attestation proofs + +**Never crosses the boundary:** +- Raw content artifacts +- User data or PII +- Individual impression-level data + +This pinhole is the interface that needs standardization - it defines exactly what flows in and out while keeping sensitive content locked inside the publisher's walls. + +### Why This Matters + +- **Publisher** hosts a secure enclave inside their infrastructure +- **Governance agent** (from IAS, DoubleVerify, etc.) runs as a container within the enclave +- **Content** flows into the enclave for evaluation but never leaves the publisher's walls +- **Both parties** can verify the governance code is running unmodified via attestation +- **Models stay current** - the enclave can receive updates without exposing content + +This provides the same privacy guarantees as local execution, but with cryptographic proof that the correct algorithm is running. The brand knows their standards are being enforced faithfully. The publisher proves compliance without exposing content. + +This architecture aligns with the [IAB Tech Lab ARTF (Agentic RTB Framework)](https://iabtechlab.com/standards/artf/), which defines how service providers can package offerings as containers deployed into host infrastructure. 
ARTF enables hosts to "provide greater access to data and more interaction opportunities to service agents without concerns about leakage, misappropriation or latency" - exactly the model Content Standards requires for privacy-preserving brand safety. + +## Related + +- [Artifacts](/docs/governance/content-standards/artifacts) - What artifacts are and how to structure them +- [Brand Manifest](/docs/creative/brand-manifest) - Static brand identity that can link to standards agents diff --git a/v2.6-rc/docs/governance/content-standards/tasks/calibrate_content.mdx b/v2.6-rc/docs/governance/content-standards/tasks/calibrate_content.mdx new file mode 100644 index 00000000..801ee3bb --- /dev/null +++ b/v2.6-rc/docs/governance/content-standards/tasks/calibrate_content.mdx @@ -0,0 +1,228 @@ +--- +title: calibrate_content +sidebar_position: 7 +--- + +# calibrate_content + +Collaborative calibration task for aligning on content standards interpretation. Used during setup to help sellers understand and internalize a buyer's content policies before campaign execution. + +Unlike high-volume runtime evaluation, calibration is a **dialogue-based process** where parties exchange examples and explanations until aligned. 
+ +## When to Use + +- **Seller onboarding**: When a seller first receives content standards from a buyer +- **Policy clarification**: When a seller needs to understand why specific content passes or fails +- **Model training**: When building a local model to run against the standards +- **Drift detection**: Periodic re-calibration to ensure continued alignment + +## Request + +**Schema**: [calibrate-content-request.json](https://adcontextprotocol.org/schemas/v2/content-standards/calibrate-content-request.json) + +| Parameter | Type | Required | Description | +|-----------|------|----------|-------------| +| `standards_id` | string | Yes | Standards configuration to calibrate against | +| `artifact` | artifact | Yes | Artifact to evaluate | + +### Artifact + +**Schema**: [artifact.json](https://adcontextprotocol.org/schemas/v2/content-standards/artifact.json) + +An artifact represents content context where ad placements occur - identified by `property_id` + `artifact_id` and represented as a collection of assets: + +```json +{ + "property_id": {"type": "domain", "value": "reddit.com"}, + "artifact_id": "r_fitness_abc123", + "assets": [ + {"type": "text", "role": "title", "content": "Best protein sources for muscle building", "language": "en"}, + {"type": "text", "role": "paragraph", "content": "Looking for recommendations on high-quality protein sources...", "language": "en"}, + {"type": "text", "role": "paragraph", "content": "I've been lifting for 6 months and want to optimize my diet.", "language": "en"}, + {"type": "image", "url": "https://cdn.reddit.com/fitness-image.jpg", "alt_text": "Person lifting weights"} + ] +} +``` + +## Response + +**Schema**: [calibrate-content-response.json](https://adcontextprotocol.org/schemas/v2/content-standards/calibrate-content-response.json) + +### Passing Response + +```json +{ + "verdict": "pass", + "explanation": "This content aligns well with the brand's fitness-focused positioning. 
Health and fitness content is explicitly marked as 'ideal' in the policy. The discussion is constructive and educational.", + "features": [ + { + "feature_id": "brand_safety", + "status": "passed", + "explanation": "No safety concerns. Content is user-generated but constructive fitness discussion." + }, + { + "feature_id": "brand_suitability", + "status": "passed", + "explanation": "Fitness content matches brand's athletic positioning." + } + ] +} +``` + +### Failing Response with Detailed Explanation + +```json +{ + "verdict": "fail", + "explanation": "This content discusses political topics which the policy explicitly excludes. While the article itself is balanced journalism, the brand has requested to avoid all controversial political content regardless of tone.", + "features": [ + { + "feature_id": "brand_safety", + "status": "passed", + "explanation": "No hate speech, illegal content, or explicit material." + }, + { + "feature_id": "brand_suitability", + "status": "failed", + "explanation": "Political content is excluded by brand policy, even when balanced." + } + ] +} +``` + +### Response Fields + +| Field | Required | Description | +|-------|----------|-------------| +| `verdict` | Yes | Overall `pass` or `fail` decision | +| `explanation` | No | Detailed natural language explanation of the decision | +| `features` | No | Per-feature breakdown with explanations | +| `confidence` | No | Model confidence in the verdict (0-1), when available | + +## Dialogue Flow + +Calibration supports back-and-forth dialogue using the protocol's conversation management. The seller sends content, the verification agent responds with an evaluation and explanation, and the seller can respond with questions or try different content - all within the same conversation context. 
+ +### A2A Example + +```javascript +// Seller sends artifact to evaluate +const response1 = await a2a.send({ + message: { + parts: [{ + kind: "data", + data: { + skill: "calibrate_content", + parameters: { + standards_id: "nike_brand_safety", + artifact: { + property_id: { type: "domain", value: "reddit.com" }, + artifact_id: "r_news_politics_123", + assets: [ + { type: "text", role: "title", content: "Political News Article" } + ] + } + } + } + }] + } +}); +// Response: verdict=fail with feature breakdown + +// Seller asks follow-up question about the decision +const response2 = await a2a.send({ + contextId: response1.contextId, + message: { + parts: [{ + kind: "text", + text: "This is factual news, not opinion. Should balanced journalism be excluded?" + }] + } +}); +// Verification agent clarifies that brand policy excludes ALL political content + +// Seller tries different artifact +const response3 = await a2a.send({ + contextId: response1.contextId, + message: { + parts: [{ + kind: "data", + data: { + skill: "calibrate_content", + parameters: { + standards_id: "nike_brand_safety", + artifact: { + property_id: { type: "domain", value: "reddit.com" }, + artifact_id: "r_running_tips_456", + assets: [ + { type: "text", role: "title", content: "Running Tips" } + ] + } + } + } + }] + } +}); +// Response: verdict=pass - now seller understands the boundaries +``` + +### MCP Example + +```javascript +// Initial calibration request +const response1 = await mcp.call('calibrate_content', { + standards_id: "nike_brand_safety", + artifact: { + property_id: { type: "domain", value: "reddit.com" }, + artifact_id: "r_news_politics_123", + assets: [ + { type: "text", role: "title", content: "Political News Article" } + ] + } +}); +// Response includes context_id for conversation continuity + +// Continue dialogue with follow-up question +const response2 = await mcp.call('calibrate_content', { + context_id: response1.context_id, + standards_id: "nike_brand_safety", + artifact: { 
+ property_id: { type: "domain", value: "reddit.com" }, + artifact_id: "r_news_politics_123", + assets: [ + { type: "text", role: "title", content: "Political News Article" } + ] + } +}); +// Include text message in the protocol envelope asking about balanced journalism + +// Try different artifact in same conversation +const response3 = await mcp.call('calibrate_content', { + context_id: response1.context_id, + standards_id: "nike_brand_safety", + artifact: { + property_id: { type: "domain", value: "reddit.com" }, + artifact_id: "r_running_tips_456", + assets: [ + { type: "text", role: "title", content: "Running Tips" } + ] + } +}); +``` + +The key insight is that the dialogue happens at the **protocol layer**, not the task layer. The verification agent maintains conversation context and can respond to follow-up questions, disagreements, or requests for clarification - just like any agent-to-agent conversation. + +## Calibration vs Runtime + +| Aspect | calibrate_content | Runtime (local model) | +|--------|-------------------|----------------------| +| **Purpose** | Alignment & understanding | High-volume decisioning | +| **Volume** | Low (setup/periodic) | High (every impression) | +| **Response** | Verbose explanations | Pass/fail only | +| **Latency** | Seconds acceptable | Milliseconds required | +| **Dialogue** | Multi-turn conversation | Stateless | + +## Related Tasks + +- [get_content_standards](/docs/governance/content-standards/tasks/get_content_standards) - Retrieve the policies being calibrated against +- [validate_content_delivery](/docs/governance/content-standards/tasks/validate_content_delivery) - Post-campaign delivery validation diff --git a/v2.6-rc/docs/governance/content-standards/tasks/create_content_standards.mdx b/v2.6-rc/docs/governance/content-standards/tasks/create_content_standards.mdx new file mode 100644 index 00000000..51859ca8 --- /dev/null +++ b/v2.6-rc/docs/governance/content-standards/tasks/create_content_standards.mdx @@ -0,0 
+1,90 @@ +--- +title: create_content_standards +sidebar_position: 5 +--- + +# create_content_standards + +Create a new content standards configuration. + +**Response time**: < 1s + +## Request + +**Schema**: [create-content-standards-request.json](https://adcontextprotocol.org/schemas/v2/content-standards/create-content-standards-request.json) + +| Parameter | Type | Required | Description | +|-----------|------|----------|-------------| +| `scope` | object | Yes | Where this standards configuration applies (must include `languages_any`) | +| `policy` | string | Yes | Natural language policy prompt | +| `calibration_exemplars` | object | No | Training set of pass/fail artifacts for calibration | + +:::note[Brand Safety Floor Requirement] +Implementors MUST apply a brand safety floor regardless of what policy is defined in content standards. Content that violates the floor (hate speech, illegal content, etc.) must be excluded even when no content standards are specified. AdCP does not define the floor specification; this is left to implementors and industry standards (e.g., GARM categories). +::: + +### Example Request + +```json +{ + "scope": { + "countries_all": ["GB", "DE", "FR"], + "channels_any": ["display", "video", "ctv"], + "languages_any": ["en", "de", "fr"], + "description": "EMEA - all digital channels" + }, + "policy": "Sports and fitness content is ideal. Lifestyle content about health and wellness is good. Entertainment content is generally acceptable. 
Avoid content about violence, controversial political topics, adult themes, or content that portrays sedentary lifestyle positively.", + "calibration_exemplars": { + "pass": [ + { "type": "url", "value": "https://espn.com/nba/story/_/id/12345/lakers-championship", "language": "en" }, + { "type": "url", "value": "https://healthline.com/fitness/cardio-workout", "language": "en" } + ], + "fail": [ + { "type": "url", "value": "https://tabloid.example.com/celebrity-scandal", "language": "en" }, + { "type": "url", "value": "https://news.example.com/controversial-politics-article", "language": "en" } + ] + } +} +``` + +## Response + +**Schema**: [create-content-standards-response.json](https://adcontextprotocol.org/schemas/v2/content-standards/create-content-standards-response.json) + +### Success Response + +```json +{ + "standards_id": "emea_digital_safety" +} +``` + +### Error Responses + +**Scope Conflict:** + +```json +{ + "errors": [ + { + "code": "SCOPE_CONFLICT", + "message": "Standards already exist for country 'DE' on channel 'display'", + "conflicting_standards_id": "emea_digital_safety" + } + ] +} +``` + +## Scope Conflict Handling + +Multiple standards cannot have overlapping scopes for the same country/channel/language combination. When creating standards that would conflict: + +1. **Check existing standards** - Use [list_content_standards](/docs/governance/content-standards/tasks/list_content_standards) filtered by your scope +2. **Update rather than create** - If standards already exist, use [update_content_standards](/docs/governance/content-standards/tasks/update_content_standards) +3. 
**Narrow the scope** - Adjust countries or channels to avoid overlap + +## Related Tasks + +- [list_content_standards](/docs/governance/content-standards/tasks/list_content_standards) - List all configurations +- [update_content_standards](/docs/governance/content-standards/tasks/update_content_standards) - Update a configuration +- [delete_content_standards](/docs/governance/content-standards/tasks/delete_content_standards) - Delete a configuration diff --git a/v2.6-rc/docs/governance/content-standards/tasks/delete_content_standards.mdx b/v2.6-rc/docs/governance/content-standards/tasks/delete_content_standards.mdx new file mode 100644 index 00000000..2f38244c --- /dev/null +++ b/v2.6-rc/docs/governance/content-standards/tasks/delete_content_standards.mdx @@ -0,0 +1,70 @@ +--- +title: delete_content_standards +sidebar_position: 7 +--- + +# delete_content_standards + +Delete a content standards configuration. + +**Response time**: < 500ms + +## Request + +| Parameter | Type | Required | Description | +|-----------|------|----------|-------------| +| `standards_id` | string | Yes | ID of the standards configuration to delete | + +### Example Request + +```json +{ + "standards_id": "nike_emea_safety" +} +``` + +## Response + +### Success Response + +```json +{ + "deleted": true, + "standards_id": "nike_emea_safety" +} +``` + +### Error Responses + +**Not Found:** + +```json +{ + "errors": [ + { + "code": "STANDARDS_NOT_FOUND", + "message": "No standards found with ID 'invalid_id'" + } + ] +} +``` + +**Standards In Use:** + +```json +{ + "errors": [ + { + "code": "STANDARDS_IN_USE", + "message": "Cannot delete standards 'nike_emea_safety' - currently referenced by active media buys" + } + ] +} +``` + +Standards cannot be deleted while they are referenced by active media buys. Use [list_content_standards](/docs/governance/content-standards/tasks/list_content_standards) to identify usage, or archive standards by setting an expiration date rather than deleting. 
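A client can handle the in-use case with a simple fallback. In this sketch, `delete_standards` and `archive_standards` are hypothetical client methods standing in for the protocol task and for the archive-by-expiration workaround described above:

```python
# Sketch: delete a standards configuration, falling back to archiving
# when it is still referenced by active media buys. The client methods
# here are illustrative, not protocol-defined.

def safe_delete(client, standards_id):
    """Delete a configuration; archive it instead if still in use."""
    response = client.delete_standards(standards_id)
    errors = response.get("errors", [])
    if any(e["code"] == "STANDARDS_IN_USE" for e in errors):
        # Still referenced by active media buys: expire rather than delete.
        return client.archive_standards(standards_id)
    return response
```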
+ +## Related Tasks + +- [list_content_standards](/docs/governance/content-standards/tasks/list_content_standards) - List all configurations +- [create_content_standards](/docs/governance/content-standards/tasks/create_content_standards) - Create a new configuration diff --git a/v2.6-rc/docs/governance/content-standards/tasks/get_content_standards.mdx b/v2.6-rc/docs/governance/content-standards/tasks/get_content_standards.mdx new file mode 100644 index 00000000..dec5de45 --- /dev/null +++ b/v2.6-rc/docs/governance/content-standards/tasks/get_content_standards.mdx @@ -0,0 +1,77 @@ +--- +title: get_content_standards +sidebar_position: 2 +--- + +# get_content_standards + +Retrieve content safety policies for a specific standards configuration. + +## Request + +**Schema**: [get-content-standards-request.json](https://adcontextprotocol.org/schemas/v2/content-standards/get-content-standards-request.json) + +| Parameter | Type | Required | Description | +|-----------|------|----------|-------------| +| `standards_id` | string | Yes | Identifier for the standards configuration | + +## Response + +**Schema**: [get-content-standards-response.json](https://adcontextprotocol.org/schemas/v2/content-standards/get-content-standards-response.json) + +### Success Response + +```json +{ + "standards_id": "emea_digital_safety", + "name": "EMEA - all digital channels", + "countries_all": ["GB", "DE", "FR"], + "channels_any": ["display", "video", "ctv"], + "languages_any": ["en", "de", "fr"], + "policy": "Sports and fitness content is ideal. Lifestyle content about health and wellness is good. Entertainment content is generally acceptable. Avoid content about violence, controversial political topics, adult themes, or content that portrays sedentary lifestyle positively. 
Block hate speech, illegal activities, or content disparaging athletes.", + "calibration_exemplars": { + "pass": [ + { "type": "url", "value": "https://espn.com/nba/story/_/id/12345/lakers-championship", "language": "en" }, + { "type": "url", "value": "https://healthline.com/fitness/cardio-workout", "language": "en" } + ], + "fail": [ + { "type": "url", "value": "https://tabloid.example.com/celebrity-scandal", "language": "en" }, + { "type": "url", "value": "https://news.example.com/controversial-politics-article", "language": "en" } + ] + } +} +``` + +### Fields + +| Field | Description | +|-------|-------------| +| `standards_id` | Unique identifier for this standards configuration | +| `name` | Human-readable name | +| `countries_all` | ISO country codes - standards apply in ALL listed countries | +| `channels_any` | Ad channels - standards apply to ANY of the listed channels | +| `languages_any` | BCP 47 language tags - standards apply to content in ANY of these languages | +| `policy` | Natural language policy describing acceptable and unacceptable content contexts | +| `calibration_exemplars` | Training/test set of content contexts (pass/fail) to calibrate policy interpretation | + +:::note[Brand Safety Floor Requirement] +Implementors MUST apply a brand safety floor regardless of what policy is defined. AdCP does not define the floor specification. 
+::: + +### Error Response + +```json +{ + "errors": [ + { + "code": "STANDARDS_NOT_FOUND", + "message": "No standards found with ID 'invalid_id'" + } + ] +} +``` + +## Related Tasks + +- [calibrate_content](/docs/governance/content-standards/tasks/calibrate_content) - Collaborative calibration against these standards +- [list_content_standards](/docs/governance/content-standards/tasks/list_content_standards) - List available standards configurations diff --git a/v2.6-rc/docs/governance/content-standards/tasks/get_media_buy_artifacts.mdx b/v2.6-rc/docs/governance/content-standards/tasks/get_media_buy_artifacts.mdx new file mode 100644 index 00000000..7b430da9 --- /dev/null +++ b/v2.6-rc/docs/governance/content-standards/tasks/get_media_buy_artifacts.mdx @@ -0,0 +1,205 @@ +--- +title: get_media_buy_artifacts +sidebar_position: 8 +--- + +# get_media_buy_artifacts + +Retrieve content artifacts from a media buy for validation. This is separate from `get_media_buy_delivery` which returns performance metrics - artifacts contain the actual content (text, images, video) where ads were placed. + +**Response time**: < 5s (batch of 1,000 artifacts) + +## Data Flow + +```mermaid +sequenceDiagram + participant Buyer as Buyer Agent + participant Seller as Seller Agent + participant Verifier as Verification Agent + + Buyer->>Seller: get_media_buy_artifacts (sampled or full) + Seller-->>Buyer: Artifacts with content + Buyer->>Verifier: validate_content_delivery + Verifier-->>Buyer: Validation results +``` + +The buyer requests artifacts from the seller using the same media buy parameters. The seller returns content samples based on the agreed sampling rate. The buyer then validates these against the verification agent. 
+ +## Request + +**Schema**: [get-media-buy-artifacts-request.json](https://adcontextprotocol.org/schemas/v2/content-standards/get-media-buy-artifacts-request.json) + +| Parameter | Type | Required | Description | +|-----------|------|----------|-------------| +| `media_buy_id` | string | Yes | Media buy to get artifacts from | +| `package_ids` | array | No | Filter to specific packages | +| `sampling` | object | No | Sampling parameters (defaults to media buy agreement) | +| `time_range` | object | No | Filter to specific time period | +| `limit` | integer | No | Maximum artifacts to return (default: 1000) | +| `cursor` | string | No | Pagination cursor for large result sets | + +### Sampling Options + +```json +{ + "sampling": { + "rate": 0.25, + "method": "random" + } +} +``` + +| Method | Description | +|--------|-------------| +| `random` | Random sample across all deliveries | +| `stratified` | Sample proportionally across packages/properties | +| `recent` | Most recent deliveries first | +| `failures_only` | Only artifacts that failed local evaluation | + +## Response + +**Schema**: [get-media-buy-artifacts-response.json](https://adcontextprotocol.org/schemas/v2/content-standards/get-media-buy-artifacts-response.json) + +### Success Response + +```json +{ + "media_buy_id": "mb_nike_reddit_q1", + "artifacts": [ + { + "record_id": "imp_12345", + "timestamp": "2025-01-15T10:30:00Z", + "package_id": "pkg_feed_standard", + "artifact": { + "property_id": {"type": "domain", "value": "reddit.com"}, + "artifact_id": "r_fitness_abc123", + "assets": [ + {"type": "text", "role": "title", "content": "Best protein sources for muscle building", "language": "en"}, + {"type": "text", "role": "paragraph", "content": "Looking for recommendations on high-quality protein sources...", "language": "en"}, + {"type": "image", "url": "https://cdn.reddit.com/fitness-image.jpg", "alt_text": "Person lifting weights"} + ] + }, + "country": "US", + "channel": "social", + "brand_context": 
{"brand_id": "nike_global", "sku_id": "air_max_2025"}, + "local_verdict": "pass" + }, + { + "record_id": "imp_12346", + "timestamp": "2025-01-15T10:35:00Z", + "package_id": "pkg_feed_standard", + "artifact": { + "property_id": {"type": "domain", "value": "reddit.com"}, + "artifact_id": "r_news_politics_456", + "assets": [ + {"type": "text", "role": "title", "content": "Election Results Analysis", "language": "en"}, + {"type": "text", "role": "paragraph", "content": "The latest polling data shows...", "language": "en"} + ] + }, + "country": "US", + "channel": "social", + "brand_context": {"brand_id": "nike_global", "sku_id": "air_max_2025"}, + "local_verdict": "fail" + } + ], + "sampling_info": { + "total_deliveries": 100000, + "sampled_count": 1000, + "effective_rate": 0.01, + "method": "random" + }, + "pagination": { + "cursor": "eyJvZmZzZXQiOjEwMDB9", + "has_more": true + } +} +``` + +### Response Fields + +| Field | Description | +|-------|-------------| +| `artifacts` | Array of delivery records with full artifact content | +| `artifacts[].country` | ISO 3166-1 alpha-2 country code where delivery occurred | +| `artifacts[].channel` | Channel type (display, video, audio, social) | +| `artifacts[].brand_context` | Brand/SKU information for policy evaluation (schema TBD) | +| `artifacts[].local_verdict` | Seller's local model verdict (pass/fail/unevaluated) | +| `sampling_info` | How the sample was generated | +| `pagination` | Cursor for fetching more results | + +## Use Cases + +### Validate Sample Against Standards + +```python +# Get artifacts from seller +artifacts_response = seller_agent.get_media_buy_artifacts( + media_buy_id="mb_nike_reddit_q1", + sampling={"rate": 0.25, "method": "random"} +) + +# Convert to validation records +records = [ + { + "record_id": a["record_id"], + "timestamp": a["timestamp"], + "media_buy_id": artifacts_response["media_buy_id"], + "artifact": a["artifact"], + "country": a.get("country"), + "channel": a.get("channel"), + 
"brand_context": a.get("brand_context") + } + for a in artifacts_response["artifacts"] +] + +# Validate against verification agent +validation = verification_agent.validate_content_delivery( + standards_id="nike_brand_safety", + records=records +) + +# Check for drift between local and verified verdicts +for i, result in enumerate(validation["results"]): + local = artifacts_response["artifacts"][i]["local_verdict"] + verified = result["verdict"] + if local != verified: + print(f"Drift detected: {result['record_id']} - local={local}, verified={verified}") +``` + +### Focus on Local Failures + +```python +# Get only artifacts that failed local evaluation +failures = seller_agent.get_media_buy_artifacts( + media_buy_id="mb_nike_reddit_q1", + sampling={"method": "failures_only"}, + limit=100 +) + +# Verify these were correctly flagged +validation = verification_agent.validate_content_delivery( + standards_id="nike_brand_safety", + records=[{"record_id": a["record_id"], "artifact": a["artifact"]} + for a in failures["artifacts"]] +) + +# Check false positive rate +false_positives = sum(1 for r in validation["results"] if r["verdict"] == "pass") +print(f"False positive rate: {false_positives / len(failures['artifacts']):.1%}") +``` + +## Delivery vs Artifacts + +| Aspect | get_media_buy_delivery | get_media_buy_artifacts | +|--------|------------------------|-------------------------| +| **Purpose** | Performance reporting | Content validation | +| **Data size** | Small (metrics) | Large (full content) | +| **Frequency** | Regular reporting | Sampled validation | +| **Contains** | Impressions, clicks, spend | Text, images, video | +| **Consumer** | Buyer for optimization | Verification agent | + +## Related Tasks + +- [validate_content_delivery](/docs/governance/content-standards/tasks/validate_content_delivery) - Validate the artifacts +- [calibrate_content](/docs/governance/content-standards/tasks/calibrate_content) - Understand why artifacts pass/fail +- 
[get_media_buy_delivery](/docs/media-buy/task-reference/get_media_buy_delivery) - Get performance metrics diff --git a/v2.6-rc/docs/governance/content-standards/tasks/list_content_standards.mdx b/v2.6-rc/docs/governance/content-standards/tasks/list_content_standards.mdx new file mode 100644 index 00000000..cdaea651 --- /dev/null +++ b/v2.6-rc/docs/governance/content-standards/tasks/list_content_standards.mdx @@ -0,0 +1,67 @@ +--- +title: list_content_standards +sidebar_position: 2 +--- + +# list_content_standards + +List available content standards configurations. + +**Response time**: < 500ms + +## Request + +**Schema**: [list-content-standards-request.json](https://adcontextprotocol.org/schemas/v2/content-standards/list-content-standards-request.json) + +| Parameter | Type | Required | Description | +|-----------|------|----------|-------------| +| `countries` | array | No | Filter by country codes | +| `channels` | array | No | Filter by channels | +| `languages` | array | No | Filter by BCP 47 language tags | + +## Response + +**Schema**: [list-content-standards-response.json](https://adcontextprotocol.org/schemas/v2/content-standards/list-content-standards-response.json) + +Returns an abbreviated list of standards configurations. Use [get_content_standards](/docs/governance/content-standards/tasks/get_content_standards) to retrieve full details including policy text and calibration data. 
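The list-then-fetch pattern described above can be sketched as follows. The `agent` object and its method names are illustrative assumptions — any client that maps AdCP task names onto calls will do:

```python
def fetch_full_standards(agent, countries=None, channels=None):
    """List matching standards, then fetch full details for each ID.

    `agent` is assumed to expose the AdCP task names as methods;
    substitute your own transport/SDK here.
    """
    listing = agent.list_content_standards(countries=countries, channels=channels)
    # The list response is abbreviated -- one get_content_standards
    # call per standards_id retrieves policy text and calibration data.
    return [
        agent.get_content_standards(standards_id=s["standards_id"])
        for s in listing["standards"]
    ]
```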
+ +### Success Response + +```json +{ + "standards": [ + { + "standards_id": "emea_digital_safety", + "name": "EMEA - all digital channels", + "countries_all": ["GB", "DE", "FR"], + "channels_any": ["display", "video", "ctv"], + "languages": ["en", "de", "fr"] + }, + { + "standards_id": "us_display_only", + "name": "US - display only", + "countries_all": ["US"], + "channels_any": ["display"], + "languages": ["en"] + } + ] +} +``` + +### Error Response + +```json +{ + "errors": [ + { + "code": "UNAUTHORIZED", + "message": "Invalid or expired token" + } + ] +} +``` + +## Related Tasks + +- [get_content_standards](/docs/governance/content-standards/tasks/get_content_standards) - Get a specific standards configuration +- [create_content_standards](/docs/governance/content-standards/tasks/create_content_standards) - Create a new configuration diff --git a/v2.6-rc/docs/governance/content-standards/tasks/update_content_standards.mdx b/v2.6-rc/docs/governance/content-standards/tasks/update_content_standards.mdx new file mode 100644 index 00000000..4f2fb5bc --- /dev/null +++ b/v2.6-rc/docs/governance/content-standards/tasks/update_content_standards.mdx @@ -0,0 +1,72 @@ +--- +title: update_content_standards +sidebar_position: 6 +--- + +# update_content_standards + +Update an existing content standards configuration. Creates a new version. 
+ +**Response time**: < 1s + +## Request + +**Schema**: [update-content-standards-request.json](https://adcontextprotocol.org/schemas/v2/content-standards/update-content-standards-request.json) + +| Parameter | Type | Required | Description | +|-----------|------|----------|-------------| +| `standards_id` | string | Yes | ID of the standards configuration to update | +| `scope` | object | No | Updated scope | +| `policy` | string | No | Updated policy prompt | +| `calibration_exemplars` | object | No | Updated training exemplars (pass/fail) | + +### Example Request + +```json +{ + "standards_id": "nike_emea_safety", + "policy": "Sports and fitness content is ideal. Lifestyle content about health and wellness is good. Entertainment content is generally acceptable. Avoid violence, controversial politics, adult themes. Block hate speech and illegal activities.", + "calibration_exemplars": { + "pass": [ + { "type": "url", "value": "https://espn.com/nba/story/_/id/12345/lakers-win", "language": "en" }, + { "type": "url", "value": "https://healthline.com/fitness/cardio-workout", "language": "en" }, + { "type": "url", "value": "https://runnersworld.com/training/marathon-tips", "language": "en" } + ], + "fail": [ + { "type": "url", "value": "https://tabloid.example.com/celebrity-scandal", "language": "en" }, + { "type": "url", "value": "https://gambling.example.com/betting-guide", "language": "en" } + ] + } +} +``` + +## Response + +**Schema**: [update-content-standards-response.json](https://adcontextprotocol.org/schemas/v2/content-standards/update-content-standards-response.json) + +### Success Response + +```json +{ + "standards_id": "nike_emea_safety" +} +``` + +### Error Response + +```json +{ + "errors": [ + { + "code": "STANDARDS_NOT_FOUND", + "message": "No standards found with ID 'invalid_id'" + } + ] +} +``` + +## Related Tasks + +- [get_content_standards](/docs/governance/content-standards/tasks/get_content_standards) - Get current configuration +- 
[create_content_standards](/docs/governance/content-standards/tasks/create_content_standards) - Create a new configuration +- [delete_content_standards](/docs/governance/content-standards/tasks/delete_content_standards) - Delete a configuration diff --git a/v2.6-rc/docs/governance/content-standards/tasks/validate_content_delivery.mdx b/v2.6-rc/docs/governance/content-standards/tasks/validate_content_delivery.mdx new file mode 100644 index 00000000..97f36c2b --- /dev/null +++ b/v2.6-rc/docs/governance/content-standards/tasks/validate_content_delivery.mdx @@ -0,0 +1,183 @@ +--- +title: validate_content_delivery +sidebar_position: 4 +--- + +# validate_content_delivery + +Validate delivery records against content safety policies. Designed for batch auditing of where ads were actually delivered. + +**Asynchronous**: Accept immediately, process in background. Returns a `validation_id` for status polling. + +## Data Flow + +Content artifacts are separate from delivery metrics. Use `get_media_buy_artifacts` to retrieve content for validation: + +```mermaid +sequenceDiagram + participant Buyer as Buyer Agent + participant Seller as Seller Agent + participant Verifier as Verification Agent + + Buyer->>Seller: get_media_buy_artifacts (sampled) + Seller-->>Buyer: Artifacts with content + Buyer->>Verifier: validate_content_delivery + Verifier-->>Buyer: Validation results +``` + +**Why through the buyer?** + +- The **buyer** owns the media buy and knows which `standards_id` applies +- The **buyer** requests artifacts from sellers (separate from performance metrics) +- The **buyer** is accountable for brand safety compliance +- The **verification agent** works on behalf of the buyer + +This keeps responsibilities clear: sellers provide content samples via `get_media_buy_artifacts`, buyers validate samples against the verification agent. 
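A minimal buyer-side sketch of the sequence above, assuming agent clients that expose the task names as methods (the signatures are illustrative, not a prescribed SDK):

```python
def audit_media_buy(seller_agent, verification_agent, media_buy_id,
                    standards_id, rate=0.25):
    """Pull a sampled content set from the seller, then validate it."""
    # Step 1: buyer requests sampled artifacts from the seller
    artifacts = seller_agent.get_media_buy_artifacts(
        media_buy_id=media_buy_id,
        sampling={"rate": rate, "method": "random"},
    )
    # Step 2: buyer forwards the sample to the verification agent
    records = [
        {"record_id": a["record_id"], "artifact": a["artifact"]}
        for a in artifacts["artifacts"]
    ]
    return verification_agent.validate_content_delivery(
        standards_id=standards_id,
        records=records,
    )
```

The seller never talks to the verification agent directly — the buyer mediates both calls, which is the accountability split described above.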
+ +## Request + +**Schema**: [validate-content-delivery-request.json](https://adcontextprotocol.org/schemas/v2/content-standards/validate-content-delivery-request.json) + +| Parameter | Type | Required | Description | +|-----------|------|----------|-------------| +| `standards_id` | string | Yes | Standards configuration to validate against | +| `records` | array | Yes | Delivery records to validate (max 10,000) | +| `feature_ids` | array | No | Specific features to evaluate (defaults to all) | +| `include_passed` | boolean | No | Include passed records in results (default: true) | + +### Delivery Record + +```json +{ + "record_id": "imp_12345", + "timestamp": "2025-01-15T10:30:00Z", + "media_buy_id": "mb_nike_reddit_q1", + "artifact": { + "property_id": {"type": "domain", "value": "example.com"}, + "artifact_id": "article_12345", + "assets": [ + {"type": "text", "role": "title", "content": "Article Title"} + ] + }, + "country": "US", + "channel": "display", + "brand_context": { + "brand_id": "nike_global", + "sku_id": "air_max_2025" + } +} +``` + +| Field | Required | Description | +|-------|----------|-------------| +| `record_id` | Yes | Unique identifier for this delivery record | +| `artifact` | Yes | Content artifact where ad was delivered | +| `media_buy_id` | No | Media buy this record belongs to (for multi-buy batches) | +| `timestamp` | No | When the delivery occurred | +| `country` | No | ISO 3166-1 alpha-2 country code for targeting context | +| `channel` | No | Channel type (display, video, audio, social) | +| `brand_context` | No | Brand/SKU information for policy evaluation (schema TBD) | + +## Response + +**Schema**: [validate-content-delivery-response.json](https://adcontextprotocol.org/schemas/v2/content-standards/validate-content-delivery-response.json) + +### Success Response + +```json +{ + "summary": { + "total_records": 1000, + "passed_records": 950, + "failed_records": 50, + "total_features": 5000, + "passed_features": 4750, + 
"failed_features": 250 + }, + "results": [ + { + "record_id": "imp_12345", + "features": [ + { + "feature_id": "brand_safety", + "status": "passed", + "value": "safe" + } + ] + }, + { + "record_id": "imp_12346", + "features": [ + { + "feature_id": "brand_safety", + "status": "failed", + "value": "high_risk", + "message": "Content contains violence" + } + ] + } + ] +} +``` + +## Use Cases + +### Post-Campaign Audit + +```python +def audit_campaign_delivery(campaign_id, standards_id, content_standards_agent): + """Audit all delivery records from a campaign.""" + # Fetch delivery records from your ad server + records = fetch_delivery_records(campaign_id) + + # Validate in batches + batch_size = 10000 + all_results = [] + + for i in range(0, len(records), batch_size): + batch = records[i:i + batch_size] + response = content_standards_agent.validate_content_delivery( + standards_id=standards_id, + records=batch + ) + all_results.extend(response["results"]) + + return all_results +``` + +### Real-Time Monitoring Sample + +```python +import random + +def sample_and_validate(records, standards_id, sample_size=1000): + """Validate a random sample for real-time monitoring.""" + sample = random.sample(records, min(sample_size, len(records))) + return content_standards_agent.validate_content_delivery( + standards_id=standards_id, + records=sample + ) +``` + +### Filter for Issues Only + +```python +# Only get failed records to reduce response size +response = content_standards_agent.validate_content_delivery( + standards_id="nike_emea_safety", + records=delivery_records, + include_passed=False # Only return failures +) + +for result in response["results"]: + print(f"Issue with {result['record_id']}") + for feature in result["features"]: + if feature["status"] == "failed": + print(f" - {feature['feature_id']}: {feature['message']}") +``` + +## Related Tasks + +- [get_media_buy_artifacts](/docs/governance/content-standards/tasks/get_media_buy_artifacts) - Get content artifacts 
from seller +- [calibrate_content](/docs/governance/content-standards/tasks/calibrate_content) - Understand why artifacts pass/fail +- [get_content_standards](/docs/governance/content-standards/tasks/get_content_standards) - Retrieve the policies diff --git a/v2.6-rc/docs/media-buy/task-reference/create_media_buy.mdx b/v2.6-rc/docs/media-buy/task-reference/create_media_buy.mdx index 80001c91..fa138364 100644 --- a/v2.6-rc/docs/media-buy/task-reference/create_media_buy.mdx +++ b/v2.6-rc/docs/media-buy/task-reference/create_media_buy.mdx @@ -182,8 +182,8 @@ npx adcp \ | `pacing` | string | No | `"even"` (default), `"asap"`, or `"front_loaded"` | | `bid_price` | number | No | Bid price for auction pricing (required when `is_fixed` is false) | | `targeting_overlay` | TargetingOverlay | No | Additional targeting criteria (see [Targeting](/docs/media-buy/advanced-topics/targeting)) | -| `creative_ids` | string[] | No | Existing library creative IDs to assign | -| `creatives` | CreativeAsset[] | No | Full creative objects to upload and assign | +| `creative_assignments` | CreativeAssignment[] | No | Assign existing library creatives with optional weights and placement targeting | +| `creatives` | CreativeAsset[] | No | Upload new creative assets and assign (`creative_id` must not already exist in library) | ## Response @@ -550,6 +550,7 @@ Common errors and resolutions: | `TARGETING_TOO_NARROW` | Targeting yields zero inventory | Broaden geographic or audience criteria | | `POLICY_VIOLATION` | Brand/product violates policy | Review publisher's content policies | | `INVALID_PRICING_OPTION` | pricing_option_id not found | Use ID from product's `pricing_options` | +| `CREATIVE_ID_EXISTS` | Creative ID already exists in library | Use a different `creative_id`, assign existing creatives via `creative_assignments`, or update via `sync_creatives` | Example error response: @@ -1025,6 +1026,7 @@ For complete async handling patterns, see [Task Management](/docs/protocols/task - AXE 
segments enable advanced audience targeting - Pending states (`working`, `submitted`) are normal, not errors - Orchestrators MUST handle pending states as part of normal workflow +- **Inline creatives**: The `creatives` array creates NEW creatives only. To update existing creatives, use [`sync_creatives`](/docs/media-buy/task-reference/sync_creatives). To assign existing library creatives, use `creative_assignments` instead. ## Policy Compliance diff --git a/v2.6-rc/docs/media-buy/task-reference/sync_creatives.mdx b/v2.6-rc/docs/media-buy/task-reference/sync_creatives.mdx index e5133c59..30a51827 100644 --- a/v2.6-rc/docs/media-buy/task-reference/sync_creatives.mdx +++ b/v2.6-rc/docs/media-buy/task-reference/sync_creatives.mdx @@ -106,7 +106,7 @@ asyncio.run(main()) | `assignments` | object | No | Map of creative_id to array of package_ids for bulk assignment | | `dry_run` | boolean | No | When true, preview changes without applying them (default: false) | | `validation_mode` | string | No | Validation strictness: `"strict"` (default) or `"lenient"` | -| `delete_missing` | boolean | No | When true, creatives not in this sync are archived (default: false) | +| `delete_missing` | boolean | No | When true, creatives not in this sync are archived (default: false). Cannot delete creatives assigned to active, non-paused packages. | ### Creative Object @@ -651,11 +651,11 @@ The operation status (`completed`) means the review process finished. 
Individual | `PACKAGE_NOT_FOUND` | Package ID doesn't exist in media buy | Verify `package_id` from `create_media_buy` response | | `BRAND_SAFETY_VIOLATION` | Creative failed brand safety scan | Review content against publisher's brand safety guidelines | | `FORMAT_MISMATCH` | Assets don't match format requirements | Verify asset types and specifications match format definition | -| `DUPLICATE_CREATIVE_ID` | Creative ID already exists in different media buy | Use unique `creative_id` or sync to correct media buy | +| `CREATIVE_IN_ACTIVE_DELIVERY` | Creative is assigned to an active, non-paused package (blocks updates and `delete_missing` deletions) | Pause the package first, or create a new creative version | ## Best Practices -1. **Use upsert semantics** - Same `creative_id` updates existing creative rather than creating duplicates. This allows iterative creative development. +1. **Use upsert semantics** - Same `creative_id` updates existing creative rather than creating duplicates. This allows iterative creative development. Note: updates are blocked for creatives in active delivery (see #7). 2. **Validate first** - Use `dry_run: true` to catch errors before actual upload. This saves bandwidth and processing time. @@ -667,6 +667,8 @@ The operation status (`completed`) means the review process finished. Individual 6. **Check format support** - Use `list_creative_formats` to verify product supports your creative formats before uploading. +7. **Active delivery protection** - Creatives assigned to active, non-paused packages cannot be updated or deleted via `delete_missing`. Pause the package first, unassign the creative via `update_media_buy`, or create a new creative with a different `creative_id`. 
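The pause-then-update workflow from practice #7 might be sketched like this. The client methods, the package-level `paused` field, and the error-response shape are assumptions for illustration — check your SDK and the schemas for the actual contracts:

```python
def safe_update_creative(agent, media_buy_id, package_buyer_ref, creative):
    """Retry a creative update that is blocked by active delivery."""
    result = agent.sync_creatives(creatives=[creative])
    codes = {e.get("code") for e in result.get("errors", [])}
    if "CREATIVE_IN_ACTIVE_DELIVERY" in codes:
        # Pause the owning package (field name assumed), retry, then resume
        agent.update_media_buy(
            media_buy_id=media_buy_id,
            packages=[{"buyer_ref": package_buyer_ref, "paused": True}],
        )
        result = agent.sync_creatives(creatives=[creative])
        agent.update_media_buy(
            media_buy_id=media_buy_id,
            packages=[{"buyer_ref": package_buyer_ref, "paused": False}],
        )
    return result
```

Alternatively, skip the pause entirely by uploading the revision under a new `creative_id` and reassigning, as the practice suggests.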
+ ## Next Steps - [list_creative_formats](/docs/media-buy/task-reference/list_creative_formats) - Check supported formats before upload diff --git a/v2.6-rc/docs/media-buy/task-reference/update_media_buy.mdx b/v2.6-rc/docs/media-buy/task-reference/update_media_buy.mdx index 237b6847..0dd78ea2 100644 --- a/v2.6-rc/docs/media-buy/task-reference/update_media_buy.mdx +++ b/v2.6-rc/docs/media-buy/task-reference/update_media_buy.mdx @@ -149,11 +149,25 @@ asyncio.run(create_and_pause_campaign()) | `end_time` | string | No | Updated campaign end time | | `paused` | boolean | No | Pause/resume entire media buy (`true` = paused, `false` = active) | | `packages` | PackageUpdate[] | No | Package-level updates (see below) | -| `creatives` | CreativeAsset[] | No | Upload and assign new creative assets inline | -| `creative_assignments` | CreativeAssignment[] | No | Update creative rotation weights and placement targeting | +| `reporting_webhook` | object | No | Update reporting webhook configuration (see below) | +| `push_notification_config` | object | No | Webhook for async operation notifications | *Either `media_buy_id` OR `buyer_ref` is required (not both) +### Reporting Webhook Object + +Configure automated delivery reporting for this media buy: + +| Parameter | Type | Required | Description | +|-----------|------|----------|-------------| +| `url` | string | Yes | Webhook endpoint URL | +| `authentication` | object | Yes | Auth config with `schemes` and `credentials` | +| `reporting_frequency` | string | Yes | `hourly`, `daily`, or `monthly` | +| `requested_metrics` | string[] | No | Specific metrics to include (defaults to all) | +| `token` | string | No | Client token for validation (min 16 chars) | + +**Note**: `reporting_webhook` configures ongoing campaign reporting. `push_notification_config` is for async operation notifications (e.g., "notify me when this update completes"). 
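A minimal `update_media_buy` request enabling daily reporting might look like the following. The URL, credential reference, and metric names are placeholders, and the exact `authentication` shape should be verified against the schema:

```json
{
  "media_buy_id": "mb_12345",
  "reporting_webhook": {
    "url": "https://buyer.example.com/webhooks/delivery",
    "authentication": {
      "schemes": ["bearer"],
      "credentials": "credential-reference-placeholder"
    },
    "reporting_frequency": "daily",
    "requested_metrics": ["impressions", "spend"]
  }
}
```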
+ ### Package Update Object | Parameter | Type | Description | @@ -166,7 +180,8 @@ asyncio.run(create_and_pause_campaign()) | `pacing` | string | Updated pacing strategy | | `bid_price` | number | Updated bid price (auction products only) | | `targeting_overlay` | TargetingOverlay | Updated targeting restrictions | -| `creative_ids` | string[] | Replace assigned creatives | +| `creative_assignments` | CreativeAssignment[] | Replace assigned creatives with optional weights and placement targeting | +| `creatives` | CreativeAsset[] | Upload and assign new creatives inline (must not exist in library) | *Either `package_id` OR `buyer_ref` is required for each package update @@ -372,7 +387,7 @@ asyncio.run(update_targeting()) ### Replace Creatives -Swap out creative assets for a package: +Swap out creative assignments for a package: @@ -384,7 +399,10 @@ const result = await testAgent.updateMediaBuy({ media_buy_id: 'mb_12345', packages: [{ buyer_ref: 'ctv_package', - creative_ids: ['creative_video_v2', 'creative_display_v2'] + creative_assignments: [ + { creative_id: 'creative_video_v2' }, + { creative_id: 'creative_display_v2', weight: 60 } + ] }] }); @@ -415,7 +433,10 @@ async def replace_creatives(): packages=[ { 'buyer_ref': 'ctv_package', - 'creative_ids': ['creative_video_v2', 'creative_display_v2'] + 'creative_assignments': [ + {'creative_id': 'creative_video_v2'}, + {'creative_id': 'creative_display_v2', 'weight': 60} + ] } ] ) @@ -513,6 +534,7 @@ asyncio.run(update_multiple_packages()) ✅ **Can update:** - Start/end times (subject to seller approval) - Campaign status (active/paused) +- Reporting webhook configuration (URL, frequency, metrics) ❌ **Cannot update:** - Media buy ID @@ -548,6 +570,7 @@ Common errors and resolutions: | `BUDGET_INSUFFICIENT` | New budget below minimum | Increase budget amount | | `POLICY_VIOLATION` | Update violates content policy | Review policy requirements | | `INVALID_STATE` | Operation not allowed in current state | Check campaign 
status | +| `CREATIVE_ID_EXISTS` | Creative ID already exists in library | Use a different `creative_id`, assign existing creatives via `creative_assignments`, or update via `sync_creatives` | Example error response: @@ -596,13 +619,16 @@ Only specified fields are updated - omitted fields remain unchanged: } ``` -**Array replacement**: When updating arrays (like `creative_ids`), provide the complete new array: +**Array replacement**: When updating arrays (like `creative_assignments`), provide the complete new array: ```json { "packages": [{ "buyer_ref": "ctv_package", - "creative_ids": ["creative_video_v2", "creative_display_v2"] + "creative_assignments": [ + { "creative_id": "creative_video_v2" }, + { "creative_id": "creative_display_v2", "weight": 60 } + ] }] } ``` @@ -678,13 +704,14 @@ Check `affected_packages` in response to confirm changes were applied correctly. - Pending states (`working`, `submitted`) are normal, not errors - Orchestrators MUST handle pending states as part of normal workflow - `implementation_date` indicates when changes take effect (null if pending approval) +- **Inline creatives**: The `creatives` array creates NEW creatives only. To update existing creatives, use [`sync_creatives`](/docs/media-buy/task-reference/sync_creatives). To assign existing library creatives, use `creative_assignments` in the Package Update object. ## Next Steps After updating a media buy: 1. **Verify Changes**: Use [`get_media_buy_delivery`](/docs/media-buy/task-reference/get_media_buy_delivery) to confirm updates -2. **Upload New Creatives**: Use [`sync_creatives`](/docs/media-buy/task-reference/sync_creatives) if creative_ids changed +2. **Upload New Creatives**: Use [`sync_creatives`](/docs/media-buy/task-reference/sync_creatives) if creative assignments changed 3. **Monitor Performance**: Track impact of changes on campaign metrics 4. 
**Optimize Further**: Use [`provide_performance_feedback`](/docs/media-buy/task-reference/provide_performance_feedback) for ongoing optimization diff --git a/v2.6-rc/docs/reference/error-codes.mdx b/v2.6-rc/docs/reference/error-codes.mdx index 0132e64c..0eac0041 100644 --- a/v2.6-rc/docs/reference/error-codes.mdx +++ b/v2.6-rc/docs/reference/error-codes.mdx @@ -425,6 +425,61 @@ Request exceeded maximum processing time. **Resolution**: Refine request parameters or retry. +## Content Standards Errors + +### STANDARDS_NOT_FOUND +Specified standards ID doesn't exist. + +**Example**: +```json +{ + "$schema": "https://adcontextprotocol.org/schemas/v2/core/error.json", + "code": "STANDARDS_NOT_FOUND", + "message": "No standards found with ID 'invalid_id'", + "details": { + "standards_id": "invalid_id" + } +} +``` + +**Resolution**: Use `list_content_standards` to find valid standards IDs. + +### STANDARDS_IN_USE +Cannot delete standards that are referenced by active media buys. + +**Example**: +```json +{ + "$schema": "https://adcontextprotocol.org/schemas/v2/core/error.json", + "code": "STANDARDS_IN_USE", + "message": "Cannot delete standards 'nike_emea_safety' - currently referenced by active media buys", + "details": { + "standards_id": "nike_emea_safety", + "active_media_buy_count": 3 + } +} +``` + +**Resolution**: Wait for media buys to complete before deleting. + +### STANDARDS_SCOPE_CONFLICT +New standards configuration conflicts with existing standards for the same scope. + +**Example**: +```json +{ + "$schema": "https://adcontextprotocol.org/schemas/v2/core/error.json", + "code": "STANDARDS_SCOPE_CONFLICT", + "message": "Standards already exist for brand 'nike' in countries ['GB', 'DE']", + "details": { + "conflicting_standards_id": "nike_emea_safety", + "overlapping_countries": ["GB", "DE"] + } +} +``` + +**Resolution**: Update existing standards or narrow scope to avoid overlap. 
+ ## Data Errors ### DATA_QUALITY_ISSUE @@ -506,7 +561,10 @@ const PERMANENT_ERRORS = [ 'INSUFFICIENT_PERMISSIONS', 'SEGMENT_NOT_FOUND', 'PLATFORM_UNAUTHORIZED', - 'UNSUPPORTED_VERSION' + 'UNSUPPORTED_VERSION', + 'STANDARDS_NOT_FOUND', + 'STANDARDS_IN_USE', + 'STANDARDS_SCOPE_CONFLICT' ]; function isRetryable(errorCode: string): boolean {