2 changes: 1 addition & 1 deletion .test/README.md
@@ -149,7 +149,7 @@ uv run python .test/scripts/optimize.py databricks-metric-views --include-tools
When `--tool-modules` is specified, both tool stats and the cross-skill dataset are filtered:

- **Tool stats** report only the requested modules (e.g., `Tool modules: 1, tools: 5` for `--tool-modules sql`).
-- **Cross-skill dataset** includes only skills whose `tool_modules` in `manifest.yaml` overlap with the requested modules. Skills that *don't declare* `tool_modules` are always included as a safe fallback (e.g., `databricks-config`, `databricks-docs`). This means the dataset won't shrink to *only* SQL skills — general-purpose skills without the field are kept so the evaluator still has broad coverage.
+- **Cross-skill dataset** includes only skills whose `tool_modules` in `manifest.yaml` overlap with the requested modules. Skills that *don't declare* `tool_modules` are always included as a safe fallback (e.g., `databricks-docs`, `dev-best-practices`). This means the dataset won't shrink to *only* SQL skills — general-purpose skills without the field are kept so the evaluator still has broad coverage.

To reduce the dataset further, add `tool_modules` to any remaining skills that should be excluded for certain module filters. Without `--tool-modules`, all skills are included regardless (no regression).
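The inclusion rule described above can be sketched as a small predicate. This is a hypothetical helper, not the actual `optimize.py` code; the manifest dict shape and field name `tool_modules` are taken from the description above.

```python
def include_skill(manifest: dict, requested_modules: set[str]) -> bool:
    """Decide whether a skill stays in the cross-skill dataset
    when --tool-modules is specified.

    Skills that don't declare `tool_modules` in manifest.yaml are
    always included as a safe fallback; skills that declare it are
    kept only if it overlaps with the requested modules.
    """
    declared = manifest.get("tool_modules")
    if not declared:  # undeclared -> always include (safe fallback)
        return True
    return bool(set(declared) & requested_modules)


# With --tool-modules sql:
requested = {"sql"}
print(include_skill({"tool_modules": ["sql", "jobs"]}, requested))  # True (overlap)
print(include_skill({}, requested))                                 # True (fallback)
print(include_skill({"tool_modules": ["apps"]}, requested))         # False
```

Adding `tool_modules` to a skill's `manifest.yaml` therefore opts it in to filtering; leaving the field out keeps it in every filtered dataset.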

2 changes: 1 addition & 1 deletion databricks-builder-app/app.yaml
@@ -30,7 +30,7 @@ env:
# =============================================================================
# Comma-separated list of skills to enable
- name: ENABLED_SKILLS
-value: "databricks-asset-bundles,databricks-agent-bricks,databricks-aibi-dashboards,databricks-app-apx,databricks-app-python,databricks-config,databricks-docs,databricks-jobs,databricks-python-sdk,databricks-unity-catalog,mlflow-evaluation,spark-declarative-pipelines,synthetic-data-generation,unstructured-pdf-generation"
+value: "databricks-asset-bundles,databricks-agent-bricks,databricks-aibi-dashboards,databricks-app-apx,databricks-app-python,databricks-docs,dev-best-practices,databricks-jobs,databricks-python-sdk,databricks-unity-catalog,mlflow-evaluation,spark-declarative-pipelines,synthetic-data-generation,unstructured-pdf-generation"
- name: SKILLS_ONLY_MODE
value: "false"
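The `ENABLED_SKILLS` value above is a plain comma-separated string. As an illustration only (the app's real parsing code may differ), a consumer could split it like this:

```python
def parse_enabled_skills(raw: str) -> list[str]:
    # Illustrative: split the comma-separated list and drop blanks/whitespace;
    # the builder app's actual implementation may differ.
    return [s.strip() for s in raw.split(",") if s.strip()]


# A truncated example mirroring the app.yaml value above:
skills = parse_enabled_skills("databricks-asset-bundles,databricks-docs,dev-best-practices")
print(skills)  # ['databricks-asset-bundles', 'databricks-docs', 'dev-best-practices']
```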

2 changes: 1 addition & 1 deletion databricks-skills/README.md
@@ -64,8 +64,8 @@ cp -r ai-dev-kit/databricks-skills/databricks-agent-bricks .claude/skills/
- **databricks-asset-bundles** - DABs for multi-environment deployments
- **databricks-app-apx** - Full-stack apps (FastAPI + React)
- **databricks-app-python** - Python web apps (Dash, Streamlit, Flask)
+- **dev-best-practices** - Databricks development best practices: Git workflow, code quality, architecture, CI/CD, and production handoff
- **databricks-python-sdk** - Python SDK, Connect, CLI, REST API
-- **databricks-config** - Profile authentication setup
- **databricks-lakebase-provisioned** - Managed PostgreSQL for OLTP workloads

### 📚 Reference
2 changes: 1 addition & 1 deletion databricks-skills/databricks-asset-bundles/SKILL.md
@@ -312,7 +312,7 @@ databricks bundle destroy -t prod --auto-approve
- **[databricks-spark-declarative-pipelines](../databricks-spark-declarative-pipelines/SKILL.md)** - pipeline definitions referenced by DABs
- **[databricks-app-apx](../databricks-app-apx/SKILL.md)** - app deployment via DABs
- **[databricks-app-python](../databricks-app-python/SKILL.md)** - Python app deployment via DABs
-- **[databricks-config](../databricks-config/SKILL.md)** - profile and authentication setup for CLI/SDK
> **Author comment:** We are dropping this skill in favor of FDE playbook

+- **[dev-best-practices](../dev-best-practices/SKILL.md)** - Databricks development best practices including DABs philosophy, CI/CD, and dev environment setup
- **[databricks-jobs](../databricks-jobs/SKILL.md)** - job orchestration managed through bundles

## Resources
22 changes: 0 additions & 22 deletions databricks-skills/databricks-config/SKILL.md

This file was deleted.

2 changes: 1 addition & 1 deletion databricks-skills/databricks-python-sdk/SKILL.md
@@ -616,7 +616,7 @@ If I'm unsure about a method, I should:

## Related Skills

-- **[databricks-config](../databricks-config/SKILL.md)** - profile and authentication setup
+- **[dev-best-practices](../dev-best-practices/SKILL.md)** - Databricks development best practices (dev environment, Git workflow, architecture, CI/CD)
- **[databricks-asset-bundles](../databricks-asset-bundles/SKILL.md)** - deploying resources via DABs
- **[databricks-jobs](../databricks-jobs/SKILL.md)** - job orchestration patterns
- **[databricks-unity-catalog](../databricks-unity-catalog/SKILL.md)** - catalog governance
2 changes: 1 addition & 1 deletion databricks-skills/databricks-zerobus-ingest/SKILL.md
@@ -224,7 +224,7 @@ The timestamp generation must use microseconds for Databricks.
- **[databricks-spark-declarative-pipelines](../databricks-spark-declarative-pipelines/SKILL.md)** - Downstream pipeline processing of ingested data
- **[databricks-unity-catalog](../databricks-unity-catalog/SKILL.md)** - Managing catalogs, schemas, and tables that Zerobus writes to
- **[databricks-synthetic-data-gen](../databricks-synthetic-data-gen/SKILL.md)** - Generate test data to feed into Zerobus producers
-- **[databricks-config](../databricks-config/SKILL.md)** - Profile and authentication setup
+- **[dev-best-practices](../dev-best-practices/SKILL.md)** - CLI and authentication setup (§2.5), development best practices

## Resources
