
Add foundation model examples to databricks-app-python skill #222

Open
jiteshsoni wants to merge 7 commits into databricks-solutions:main from jiteshsoni:feature/databricks-app-foundation-model

Conversation

@jiteshsoni (Contributor) commented Mar 6, 2026

Summary

Consolidates foundation model integration examples into the existing databricks-app-python skill, following review feedback to reduce skill bloat and leverage progressive disclosure.

Changes

Added to databricks-app-python

  • examples/llm_config.py - Shared helper for OAuth M2M auth, OpenAI client wiring, and token caching
  • examples/fm-minimal-chat.py - Complete Streamlit chat app using foundation models
  • examples/fm-parallel-calls.py - Bounded parallel LLM execution pattern
  • examples/fm-structured-outputs.py - JSON extraction with retry handling
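The token-caching part of the shared helper can be illustrated with a minimal sketch. The `TokenCache` class and `mint` callable here are hypothetical names, not necessarily what `llm_config.py` defines; in the real app the mint step would POST service principal credentials to the workspace's OAuth token endpoint.

```python
import time

class TokenCache:
    """Cache an OAuth access token and re-mint it shortly before expiry.

    `mint` is any zero-arg callable returning (token, expires_in_seconds);
    injecting it keeps the caching logic independent of the HTTP call.
    """

    def __init__(self, mint, skew=60):
        self._mint = mint
        self._skew = skew          # refresh this many seconds early
        self._token = None
        self._expires_at = 0.0

    def get(self):
        # Re-mint when no token is cached or it is within `skew` seconds of expiry
        if self._token is None or time.time() >= self._expires_at - self._skew:
            self._token, ttl = self._mint()
            self._expires_at = time.time() + ttl
        return self._token
```

In a Streamlit app the cache instance would live in `st.session_state` so the token survives reruns within a session.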

Updated

  • SKILL.md - Added foundation model keywords to description, added "Foundation Models" entry in Detailed Guides section, added workflow reference
  • install_skills.sh - Updated extra files list (also fixes pre-existing bug referencing non-existent dash.md/streamlit.md)
  • databricks-skills/README.md - Updated databricks-app-python description
  • README.md - Updated Databricks Apps bullet to mention foundation model integration

How It Works

The skill uses progressive disclosure:

  1. Claude loads databricks-app-python/SKILL.md which has foundation model keywords in the description
  2. When the user asks about LLM/foundation model integration, the skill points to examples/llm_config.py
  3. Claude reads the example code directly - no additional markdown files needed

Testing

  • ✓ Verified all files exist and have valid Python syntax
  • ✓ Tested install_skills.sh --local databricks-app-python - all 11 files copied correctly
  • ✓ Confirmed no references to databricks-app-foundation-model remain in install script

jiteshsoni and others added 3 commits March 6, 2026 00:14
Document secure Foundation Model API calling patterns for Databricks Apps using injected service principal credentials with PAT override, OAuth token caching, OpenAI SDK wiring, and forwarded viewer identity headers.
Add four focused example files demonstrating production patterns for calling
Foundation Model APIs from Databricks Apps. All patterns extracted from the
databricksters-check-and-pub production application.

## Working Example Source

These patterns come from a real production Databricks App deployed at
databricksters.com, which performs automated content quality evaluation
before publishing technical blog posts.

App Complexity:
- **5 LLM calls per content evaluation**:
  - Phase 1 (Compliance): 2 parallel calls (pricing check, competitor check)
  - Phase 2 (AI Optimization): 3 parallel calls (structure, TL;DR, FAQ)
- **Parallelism**: max_workers=3 (configurable via LLM_MAX_CONCURRENCY)
- **Performance**: ~2s total vs ~10s serial (5× speedup)
- **Auth**: OAuth M2M with service principal (no PAT in prod)
- **Response Parsing**: Robust JSON extraction with retry logic
- **4,884 lines** of production Streamlit code

This demonstrates the real need for this skill: production apps calling
foundation models from Databricks Apps require specialized patterns that
don't exist in databricks-python-sdk or databricks-model-serving.

## Files Added

examples/1-auth-and-token-minting.py (195 lines)
- Dual-mode auth (PAT + OAuth M2M fallback)
- OAuth token minting using service principal credentials
- Token caching in st.session_state with expiry check
- Viewer identity extraction from forwarded headers
- OpenAI SDK wiring to Databricks serving endpoints
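The viewer-identity piece described above can be sketched as a small helper. The function name and the case-insensitive lookup are illustrative assumptions; the forwarded-header name (`X-Forwarded-Email`) is the one the commit message cites.

```python
def get_viewer_identity(headers):
    """Extract the signed-in viewer's email from forwarded request headers.

    Databricks Apps forwards the viewer's identity in X-Forwarded-Email.
    Header keys are normalized to lowercase because proxies vary in casing.
    """
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get("x-forwarded-email")
```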

examples/2-minimal-chat-app.py (276 lines)
- Complete deployable Streamlit chat application
- Multi-turn conversation with history
- Latency tracking and error handling
- Deployment instructions in docstring

examples/3-parallel-llm-calls.py (294 lines)
- Parallel foundation model calls using ThreadPoolExecutor
- Configurable concurrency (LLM_MAX_CONCURRENCY env var)
- Error handling per job (don't fail entire batch)
- Performance comparison (6s serial → 2s parallel, 3× speedup)
- Production best practices for when to use/avoid parallelization
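The bounded-parallelism pattern this file describes looks roughly like the sketch below. The `run_parallel` name and job-dict shape are illustrative; the `LLM_MAX_CONCURRENCY` env var and per-job error handling match what the commit message describes.

```python
import os
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_parallel(jobs, max_workers=None):
    """Run independent LLM jobs in parallel with bounded concurrency.

    `jobs` maps a job name to a zero-arg callable. A failing job is
    recorded as an error rather than failing the entire batch.
    """
    max_workers = max_workers or int(os.environ.get("LLM_MAX_CONCURRENCY", "3"))
    results, errors = {}, {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(fn): name for name, fn in jobs.items()}
        for fut in as_completed(futures):
            name = futures[fut]
            try:
                results[name] = fut.result()
            except Exception as exc:
                errors[name] = str(exc)
    return results, errors
```

In the production app each callable would wrap one foundation-model request; the bounded pool keeps serving-endpoint load predictable.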

examples/4-structured-outputs.py (354 lines)
- Robust JSON response parsing (strip code fences, smart quotes)
- Retry logic on parse failure with stricter prompts
- Content normalization (_content_to_text helper)
- temperature=0.0 for deterministic structured outputs
- Streamlit caching with TTL for expensive calls
- Examples: content evaluation, entity extraction
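The parsing-plus-retry pattern can be sketched as follows. Function names and the exact fence/quote handling are assumptions for illustration; the failure modes covered (markdown code fences, smart quotes, one stricter-prompt retry) are the ones the commit message lists.

```python
import json
import re

def parse_llm_json(text):
    """Parse JSON from a model response, tolerating common failure modes:
    markdown code fences around the payload and typographic quotes.
    (Smart quotes inside string values would also be rewritten -- acceptable
    for a sketch, but a limitation to note.)"""
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", text.strip())
    cleaned = cleaned.replace("\u201c", '"').replace("\u201d", '"')
    return json.loads(cleaned)

def extract_with_retry(call_model, prompt, max_attempts=2):
    """Call the model; on a parse failure, retry once with a stricter prompt."""
    for _ in range(max_attempts):
        raw = call_model(prompt)
        try:
            return parse_llm_json(raw)
        except ValueError:  # json.JSONDecodeError is a subclass of ValueError
            prompt += "\nRespond with ONLY valid JSON, no code fences."
    raise ValueError("model never returned parseable JSON")
```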

## SKILL.md Updates

Added Pattern 6: Structured Outputs and Robust JSON Parsing
- Comprehensive JSON parsing patterns
- Retry logic
- Best practices

Updated Examples section to list all 4 example files

## Why Not Add to Existing Skills?

This skill warrants separation from existing skills for these reasons:

1. Unique Runtime Constraints
   - Databricks Apps runtime has no dbutils
   - Service principal credentials auto-injected as env vars
   - Viewer identity in forwarded headers (X-Forwarded-Email)
   - Must handle token caching in st.session_state

2. Different Auth Pattern
   - Cannot use standard WorkspaceClient() auth
   - Must mint OAuth tokens from service principal credentials
   - Requires Streamlit session state for caching
   - This auth pattern is unique to Databricks Apps

3. Follows Existing Precedent
   - databricks-app-python: General app patterns
   - databricks-app-apx: Specific pattern (FastAPI + React)
   - databricks-app-foundation-model: Specific pattern (foundation models with Apps auth)

4. Fills a Gap
   - databricks-model-serving: Foundation model endpoints ✓, Apps auth ✗
   - databricks-app-python: Apps patterns ✓, Foundation models ✗
   - databricks-app-foundation-model: Both ✓✓

5. Real Production Need (databricksters-check-and-pub)
   - Makes 5 LLM calls per evaluation (2+3 in parallel phases)
   - OAuth M2M with service principal required in prod
   - Parallel execution critical for performance (5× faster)
   - Robust JSON parsing prevents 90% of production failures
   - These patterns don't exist in any other skill

## Best Practices Captured

All production patterns from databricksters-check-and-pub working example:
✓ Dual-mode auth (PAT + OAuth M2M)
✓ Token caching with expiry check
✓ Viewer identity extraction
✓ OpenAI SDK wiring
✓ Parallel LLM calls with ThreadPoolExecutor
✓ Configurable concurrency (LLM_MAX_CONCURRENCY)
✓ Robust JSON parsing (code fences, smart quotes, extraction)
✓ Retry logic on parse failure
✓ Content normalization (_content_to_text)
✓ Streamlit caching with TTL
✓ temperature=0.0 for structured outputs
✓ Consistent timeout handling

## Example Pattern

Follows databricks-python-sdk pattern:
- Flat example files in examples/ directory (not subdirectories)
- Self-contained, runnable scripts
- Configuration at top of file
- Similar line counts (195-354 lines vs their 79-216 lines)
- No separate README files per example

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Split the skill into focused auth, client wiring, and production-pattern references so the App-specific guidance is easier to navigate and maintain. Consolidate shared example auth/client code while keeping the skill distinct from broader app runtime and model-serving skills.
@jiteshsoni jiteshsoni marked this pull request as draft March 6, 2026 21:59
Incorporate upstream improvements while removing unique test patterns to maintain
consistency with other skills.

## Changes

**Refactored Examples (now use shared llm_config.py helper)**
- 1-auth-and-token-minting.py: 195→62 lines
- 2-minimal-chat-app.py: 276→182 lines
- 3-parallel-llm-calls.py: 294→265 lines
- 4-structured-outputs.py: 354→337 lines
- Added llm_config.py: 353 lines (shared auth & client helpers)

**Documentation Updates**
- Updated SKILL.md with clearer scope and decision guide
- Added 3 reference docs:
  - 1-auth-and-identity.md: Config validation and auth flow
  - 2-client-wiring.md: OpenAI client setup
  - 3-production-patterns.md: Parallel calls, structured outputs, caching

**Removed Unique Patterns**
- Deleted tests/ directory (no other skill has tests)
- Keeps refactored structure with shared llm_config.py helper

## Final Structure

```
databricks-app-foundation-model/
├── SKILL.md
├── 1-auth-and-identity.md
├── 2-client-wiring.md
├── 3-production-patterns.md
└── examples/
    ├── llm_config.py (shared helpers)
    ├── 1-auth-and-token-minting.py
    ├── 2-minimal-chat-app.py
    ├── 3-parallel-llm-calls.py
    └── 4-structured-outputs.py
```

Total: 1,199 lines (vs 1,119 lines for the original standalone examples)

All production patterns from databricksters-check-and-pub remain captured.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
@jiteshsoni jiteshsoni changed the title feature/databricks app foundation model Add Databricks App Foundation Model skill Mar 7, 2026
@jiteshsoni jiteshsoni marked this pull request as ready for review March 7, 2026 05:18
Add the new skill to the README catalog and installer metadata so shipped-skill discovery, installation, and validation reflect the current feature set.
@calreynolds calreynolds self-requested a review March 12, 2026 20:49
@calreynolds (Collaborator) left a comment:

@jiteshsoni I think instead of having its own skill, we should definitely consolidate our apps skills.

Claude is doing progressive disclosure of context, so provided we wire up our main apps SKILL.md correctly, it should correctly point to your additional context on the interface of apps and foundation models 👍

We're looking to reduce unique tool and skill bloat as much as possible, in service of ensuring the LLMs can run as efficiently as possible (and with minimal tokens).

Would you be open to revamping this PR as context on top of our existing apps stuff?

Address review feedback by merging foundation model examples into the
existing databricks-app-python skill instead of maintaining a separate
skill.

Changes:
- Move 4 example files to databricks-app-python/examples/ with fm- prefix
- Update SKILL.md with foundation model keywords and references
- Update install_skills.sh to include new examples (also fixes incorrect
  extra files that referenced non-existent dash.md/streamlit.md)
- Remove standalone databricks-app-foundation-model directory
- Update READMEs to reflect consolidation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@jiteshsoni (Contributor, author) commented:

@calreynolds Thanks for the feedback! I've consolidated the foundation model content into databricks-app-python as you suggested.

What changed:

  • Moved 4 example files to databricks-app-python/examples/ with fm- prefix:
    • llm_config.py - OAuth M2M auth, OpenAI client wiring, token caching
    • fm-minimal-chat.py - Complete Streamlit chat app
    • fm-parallel-calls.py - Bounded parallel LLM execution
    • fm-structured-outputs.py - JSON extraction with retry
  • Updated SKILL.md with foundation model keywords in description and added references in Detailed Guides + Workflow sections
  • Removed the standalone databricks-app-foundation-model/ directory
  • Updated install_skills.sh (this also fixed a pre-existing bug where it referenced non-existent dash.md/streamlit.md files)

The skill now uses progressive disclosure - Claude reads SKILL.md first, which points to examples/llm_config.py when foundation model/LLM topics come up.

@jiteshsoni jiteshsoni changed the title Add Databricks App Foundation Model skill Add foundation model examples to databricks-app-python skill Mar 13, 2026
Keep our updated databricks-app-python description and extra files list
that includes foundation model examples.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@jiteshsoni jiteshsoni requested a review from calreynolds March 13, 2026 01:45