Adds GitHub Action to trigger tests upon raising PR #78
base: beta
Changes from 9 commits
.github/workflows/tests.yml

@@ -0,0 +1,130 @@
name: Run Tests

on:
  push:
    branches: [main,beta]
  pull_request:
    branches: [main,beta]
    types: [opened, synchronize, reopened, ready_for_review]

jobs:
  test-main:
    name: Test Main SDK (Python 3.9)
    runs-on: ubuntu-latest
Comment on lines +16 to +18

🧹 Nitpick (assertive): Add workflow-level hardening and concurrency. This reduces the default token scope and auto-cancels superseded runs on the same branch.

+# Cancel previous in-progress runs for the same ref
+# and restrict default permissions
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
+permissions:
+  contents: read
 jobs:
    if: github.event.pull_request.draft == false
Comment on lines +19 to +20

Draft/fork-safe gating: keep runs on push, skip drafts and forks on PRs. The current condition breaks on push events; use an event-aware expression.

-    if: github.event.pull_request.draft == false
+    if: ${{ github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false) }}
    steps:
      - uses: actions/checkout@v4

      - name: Install uv
        uses: astral-sh/setup-uv@v4
        with:
          version: "latest"
🧹 Nitpick (assertive): Pin actions to immutable SHAs and set minimal permissions. For supply-chain safety and least privilege, these are standard hardening steps. Proposed additions (example):

 name: Run Tests
+permissions:
+  contents: read
+
 on:

Please replace version tags with their corresponding SHAs (example snippet below; update with current SHAs):

-      - uses: actions/checkout@v4
+      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac # v4
-        uses: astral-sh/setup-uv@v4
+        uses: astral-sh/setup-uv@<pin-to-latest-v4-sha>

If you want, I can fetch the latest SHAs and update the patch. Also applies to: 70-77

🧹 Nitpick (assertive): Use fixed action versions instead of "latest" for reproducibility.

       - name: Install uv
         uses: astral-sh/setup-uv@v4
         with:
-          version: "latest"
+          version: "0.5.x" # or a specific known-good version

Also applies to: 67-74

🧹 Nitpick (assertive): Optional: add timeout-minutes to guard against hung runs.

   test-main:
     name: Test Main SDK (Python 3.9)
     runs-on: ubuntu-latest
+    timeout-minutes: 30
 @@
   additional-tests:
     name: Test Additional Integrations (Python 3.10)
     runs-on: ubuntu-latest
+    timeout-minutes: 30

Comment on lines +32 to +36

🧹 Nitpick (assertive): Pin the uv installer version for reproducibility; avoid "latest" drift.

       - name: Install uv
         uses: astral-sh/setup-uv@v4
         with:
-          version: "latest"
+          version: "0.5.x"

Also applies to: 107-111
      - name: Set up Python 3.9
        run: uv python install 3.9

SamstyleGhost marked this conversation as resolved.
      - name: Backup pyproject.toml
        run: |
          cp pyproject.toml pyproject.toml.bak

      - name: Remove additional dependencies
        run: |
          sed -i.bak '/additional_dev = \[/,/\]/d' pyproject.toml
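The sed range expression above deletes everything from the line matching `additional_dev = [` through the next `]`. A rough Python equivalent of that range delete, useful for reasoning about what the step removes (illustrative only, not part of the workflow):

```python
def drop_additional_dev(text: str) -> str:
    """Delete lines from 'additional_dev = [' through the next ']' inclusive."""
    out, skipping = [], False
    for line in text.splitlines():
        if not skipping and "additional_dev = [" in line:
            skipping = True   # start of the block: drop this line
            continue
        if skipping:
            if "]" in line:   # end of the block: drop it and resume copying
                skipping = False
            continue
        out.append(line)
    return "\n".join(out)

toml = "name = 'x'\nadditional_dev = [\n  'crewai',\n]\ndev = []"
assert drop_additional_dev(toml) == "name = 'x'\ndev = []"
```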
coderabbitai[bot] marked this conversation as resolved. (Outdated)
||||||||||||||||||||||||||||||||||||||
| - name: Install dependencies (dev only) | ||||||||||||||||||||||||||||||||||||||
| run: | | ||||||||||||||||||||||||||||||||||||||
| uv sync --python 3.9 | ||||||||||||||||||||||||||||||||||||||
|
|
||||||||||||||||||||||||||||||||||||||
Comment on lines +40 to +43

🛠️ Refactor suggestion: Install only dev dependencies and lock resolution for reproducibility. Use group selection and --frozen to catch accidental resolver drift.

-      - name: Install dependencies (dev only)
-        run: |
-          uv sync --python 3.9
+      - name: Install dependencies (dev only)
+        run: |
+          uv sync --group dev --python 3.9.20 --frozen
      - name: Run main tests (excluding CrewAI)
        env:
          MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }}
          MAXIM_BASE_URL: ${{ secrets.MAXIM_BASE_URL }}
          MAXIM_DATASET_ID: ${{ secrets.MAXIM_DATASET_ID }}
          MAXIM_WORKSPACE_ID: ${{ secrets.MAXIM_WORKSPACE_ID }}
          MAXIM_LOG_REPO_ID: ${{ secrets.MAXIM_LOG_REPO_ID }}
          MAXIM_PROMPT_CHAIN_VERSION_ID: ${{ secrets.MAXIM_PROMPT_CHAIN_VERSION_ID }}
          MAXIM_ASSISTANT_PROMPT_VERSION_ID: ${{ secrets.MAXIM_ASSISTANT_PROMPT_VERSION_ID }}
          MAXIM_ASSISTANT_PROMPT_CHAIN_VERSION_ID: ${{ secrets.MAXIM_ASSISTANT_PROMPT_CHAIN_VERSION_ID }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          AZURE_OPENAI_ENDPOINT: ${{ secrets.AZURE_OPENAI_ENDPOINT }}
          AZURE_OPENAI_KEY: ${{ secrets.AZURE_OPENAI_KEY }}
          PORTKEY_API_KEY: ${{ secrets.PORTKEY_API_KEY }}
          PORTKEY_VIRTUAL_KEY: ${{ secrets.PORTKEY_VIRTUAL_KEY }}
          LLAMAINDEX_API_KEY: ${{ secrets.LLAMAINDEX_API_KEY }}
          TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
          TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
Comment on lines +58 to +62

🧹 Nitpick (assertive): Consider removing unused secrets (PORTKEY_VIRTUAL_KEY). Tests moved to provider-based config; if PORTKEY_VIRTUAL_KEY is unused, omit it from env to reduce the exposure surface. Do you want me to scan the repo for usages and submit a follow-up patch?
          FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
          GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
          GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
          MAXIM_PROMPT_1_ID: ${{ secrets.MAXIM_PROMPT_1_ID }}
          MAXIM_PROMPT_2_ID: ${{ secrets.MAXIM_PROMPT_2_ID }}
          MAXIM_PROMPT_3_ID: ${{ secrets.MAXIM_PROMPT_3_ID }}
          MAXIM_PROMPT_1_VERSION_1_ID: ${{ secrets.MAXIM_PROMPT_1_VERSION_1_ID }}
          MAXIM_PROMPT_1_VERSION_3_ID: ${{ secrets.MAXIM_PROMPT_1_VERSION_3_ID }}
          MAXIM_PROMPT_1_VERSION_4_ID: ${{ secrets.MAXIM_PROMPT_1_VERSION_4_ID }}
          MAXIM_PROMPT_1_VERSION_5_ID: ${{ secrets.MAXIM_PROMPT_1_VERSION_5_ID }}
          MAXIM_FOLDER_1_ID: ${{ secrets.MAXIM_FOLDER_1_ID }}
          MAXIM_FOLDER_2_ID: ${{ secrets.MAXIM_FOLDER_2_ID }}
          MAXIM_TEST_RUN_PROMPT_VERSION_ID: ${{ secrets.MAXIM_TEST_RUN_PROMPT_VERSION_ID }}
          MAXIM_TEST_RUN_ASSISTANT_PROMPT_VERSION_ID: ${{ secrets.MAXIM_TEST_RUN_ASSISTANT_PROMPT_VERSION_ID }}
          MAXIM_TEST_RUN_ASSISTANT_PROMPT_CHAIN_VERSION_ID: ${{ secrets.MAXIM_TEST_RUN_ASSISTANT_PROMPT_CHAIN_VERSION_ID }}
        run: |
          uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml
Suggested change: pytest writes its JUnit XML to junit/main-test-results.xml, but the junit directory may not exist. Create it before running pytest so pytest can write the XML and the artifact step won't fail.

         run: |
+          mkdir -p junit
           uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml
🧹 Nitpick (assertive): Use always() for publishing steps (and consider pinning action SHAs). This ensures artifacts and checks publish on failure or cancellation.

-        if: success() || failure() # run this step even if previous step failed
+        if: always()

Apply to both the upload-artifact and publish-unit-test-result steps in both jobs. Also consider pinning actions to commit SHAs for supply-chain safety. Also applies to: 124-136
🧹 Nitpick (assertive): Use always() for artifact publishing, pin actions to SHAs, and grant checks: write. This ensures publishing even on cancelled runs and satisfies permission needs.

-      - name: Upload main test results
-        uses: actions/upload-artifact@v4
-        if: success() || failure() # run this step even if previous step failed
+      - name: Upload main test results
+        uses: actions/upload-artifact@<sha-for-v4>
+        if: always()
 @@
-      - name: Publish main test results
-        uses: EnricoMi/publish-unit-test-result-action@v2
-        if: success() || failure() # run this step even if previous step failed
+      - name: Publish main test results
+        uses: EnricoMi/publish-unit-test-result-action@<sha-for-v2>
+        if: always()
         with:
           junit_files: "junit/main-test-results.xml"
           check_name: "Main Test Results"

Add job-level permissions to allow creating check runs:

   test-main:
     name: Test Main SDK (Python 3.9)
     runs-on: ubuntu-latest
+    permissions:
+      contents: read
+      checks: write
+      pull-requests: write

Committable suggestion skipped: line range outside the PR's diff.
coderabbitai[bot] marked this conversation as resolved.
Apply the same draft/fork gating to additional-tests; mirror the main job's condition.

   name: Test Additional Integrations (Python 3.10)
   runs-on: ubuntu-latest
-  if: github.event.pull_request.draft == false
+  if: ${{ github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false) }}
🛠️ Refactor suggestion: Pin the Python 3.10 patch version, install dev-only with a frozen lockfile, and rename the step. This increases determinism and clarity.

-      - name: Set up Python 3.10
-        run: uv python install 3.10
+      - name: Set up Python 3.10
+        run: uv python install 3.10.15
 @@
-      - name: Install dependencies (CrewAI only)
-        run: |
-          uv sync --python 3.10
+      - name: Install dependencies (dev only)
+        run: |
+          uv sync --group dev --python 3.10.15 --frozen
Ensure additional_dev dependencies are installed; pin Python and freeze. CrewAI lives in additional_dev, so install both dev and additional_dev.

-      - name: Set up Python 3.10
-        run: uv python install 3.10
+      - name: Set up Python 3.10
+        run: uv python install 3.10.15
-      - name: Install dependencies (CrewAI only)
-        run: |
-          uv sync --python 3.10
+      - name: Install dependencies (dev + additional_dev)
+        run: |
+          uv sync --group dev --group additional_dev --python 3.10.15 --frozen
🛠️ Refactor suggestion: Install the required groups for integrations with frozen resolution.

-      - name: Install dependencies (CrewAI only)
-        run: |
-          uv sync --python 3.10
+      - name: Install dependencies (dev + additional_dev)
+        run: |
+          uv sync --group dev --group additional_dev --python 3.10.15 --frozen
🧹 Nitpick (assertive): Create the junit directory and pass the required env for integration tests. This prevents junit write failures and ensures secrets are available if needed.

-      - name: Run additional integration tests
-        run: |
-          uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
+      - name: Run additional integration tests
+        env:
+          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
+          MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }}
+          MAXIM_BASE_URL: ${{ secrets.MAXIM_BASE_URL }}
+          TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
+        run: |
+          mkdir -p junit
+          uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
@@ -75,10 +75,16 @@ def parse_incoming_query(incoming_query: str) -> List[RuleType]:
                 value = bool(value)
                 parsed = True
             if not parsed:
-                if type(value) == bool:
+                try:
+                    value = int(value)
+                    parsed = True
+                except ValueError:
+                    pass
+            if not parsed:
+                if isinstance(value, bool):
                     value = bool(value)
                     parsed = True
-                elif type(value) is int:
+                elif isinstance(value, int):
🧹 Nitpick (assertive): Fix the inconsistent type checking pattern. Line 87 uses isinstance(value, int) while the surrounding code checks exact types. For consistency with the existing codebase pattern, apply this diff:

-                elif isinstance(value, int):
+                elif type(value) is int:
                     value = int(value)
                     parsed = True
Comment on lines +84 to 89

🛠️ Refactor suggestion: Remove redundant type checks after parsing. Lines 84-89 contain redundant type checks that will never execute for values already handled by the branches above. Apply this diff to remove the redundant checks:

-            if not parsed:
-                if isinstance(value, bool):
-                    value = bool(value)
-                    parsed = True
-                elif isinstance(value, int):
-                    value = int(value)
-                    parsed = True
             if not parsed:
Comment on lines +78 to 90

🛠️ Refactor suggestion: Be conservative when casting to int and remove the unreachable isinstance branches.

-            if not parsed:
-                try:
-                    value = int(value)
-                    parsed = True
-                except ValueError:
-                    pass
-            if not parsed:
-                if isinstance(value, bool):
-                    value = bool(value)
-                    parsed = True
-                elif isinstance(value, int):
-                    value = int(value)
-                    parsed = True
+            if not parsed and isinstance(value, str) and re.fullmatch(r"[+-]?\d+", value.strip()):
+                try:
+                    value = int(value.strip())
+                    parsed = True
+                except ValueError:
+                    pass
@@ -7,7 +7,11 @@
 from maxim import Maxim, Config
 from maxim.models.dataset import DatasetEntry, Variable, DatasetEntryWithRowNo, FileVariablePayload, VariableFileAttachment
 from maxim.logger.components.attachment import FileAttachment, FileDataAttachment, UrlAttachment
+from dotenv import load_dotenv
+load_dotenv()
+
+MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL")
+MAXIM_API_KEY = os.getenv("MAXIM_API_KEY")
Comment on lines +10 to +14

🛠️ Refactor suggestion: Avoid module-scope dotenv loading and env caching; this breaks test hermeticity. Apply this diff to make resolution explicit and add safe test defaults (while still honoring env when set):

-from dotenv import load_dotenv
-load_dotenv()
-
-MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL")
-MAXIM_API_KEY = os.getenv("MAXIM_API_KEY")
+from dotenv import load_dotenv, find_dotenv
+# Resolve nearest .env without overriding already-set env
+load_dotenv(find_dotenv(usecwd=True), override=False)
+# Note: do not cache real secrets at import time; provide safe test defaults
+MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL", "http://localhost:8000")
+MAXIM_API_KEY = os.getenv("MAXIM_API_KEY", "test-api-key")

If you don't want defaults, drop them here and inject via setUp using patch.dict. I can provide a version that fully avoids module-level reads.
 class TestAddDatasetEntriesComprehensive(unittest.TestCase):
     """Comprehensive test suite for the updated add_dataset_entries method."""

@@ -17,13 +21,9 @@ def setUp(self) -> None:
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")

-        # Set up test environment variables
-        os.environ["MAXIM_API_KEY"] = "test-api-key"
-        os.environ["MAXIM_BASE_URL"] = "https://app.getmaxim.ai"
-
         config = Config(
-            api_key="test-api-key",
-            base_url="https://app.getmaxim.ai",
+            api_key=MAXIM_API_KEY,
+            base_url=MAXIM_BASE_URL,
             debug=True,
             raise_exceptions=True
         )
Comment on lines 24 to 29

🛠️ Refactor suggestion: Do not pass None into Config; load env in setUp with safe defaults. Maxim requires an API key; without defaults, CI will fail. Build kwargs conditionally or provide test-safe defaults. Apply:

-        config = Config(
-            api_key=MAXIM_API_KEY,
-            base_url=MAXIM_BASE_URL,
-            debug=True,
-            raise_exceptions=True
-        )
+        # Resolve env at setup time
+        load_dotenv(find_dotenv(usecwd=True), override=False)
+        api_key = os.getenv("MAXIM_API_KEY", "test-api-key")
+        base_url = os.getenv("MAXIM_BASE_URL", "http://localhost:8000")
+        cfg = {"debug": True, "raise_exceptions": True}
+        # Only add keys if present to avoid None paths; safe defaults are provided above.
+        if api_key:
+            cfg["api_key"] = api_key
+        if base_url:
+            cfg["base_url"] = base_url
+        config = Config(**cfg)
@@ -175,7 +175,7 @@ def test_add_dataset_entries_with_file_attachments(self) -> None:
                 name="test.txt",
                 mime_type="text/plain"
             )

             entry = DatasetEntry(entry={
                 "input": Variable(type="text", payload="Input with file"),
                 "files": Variable(type="file", payload=[file_attachment])

@@ -186,7 +186,7 @@ def test_add_dataset_entries_with_file_attachments(self) -> None:

             # Verify file upload process was triggered
             self.assertEqual(mock_client.request.call_count, 4)  # total_rows + entries + upload_url + patch

         finally:
             # Clean up temp file
             os.unlink(temp_file_path)

@@ -201,7 +201,7 @@ def test_add_dataset_entries_with_url_attachment(self) -> None:
             b'{"data": {"url": "https://signed-url.com", "key": "datasets/test-dataset-id/entry123/test-file-key"}}'
         )
         mock_patch_response = self._create_mock_response(b'{"success": true}')

         self._setup_mock_network_calls([
             mock_total_rows_response,
             mock_add_entries_response,

@@ -215,7 +215,7 @@ def test_add_dataset_entries_with_url_attachment(self) -> None:
             name="image.jpg",
             mime_type="image/jpeg"
         )

         entry = DatasetEntry(entry={
             "input": Variable(type="text", payload="Input with URL"),
             "images": Variable(type="file", payload=[url_attachment])

@@ -240,7 +240,7 @@ def test_add_dataset_entries_with_file_data_attachment(self) -> None:
             b'{"data": {"url": "https://signed-url.com", "key": "datasets/test-dataset-id/entry123/test-file-key"}}'
         )
         mock_patch_response = self._create_mock_response(b'{"success": true}')

         mock_client = self._setup_mock_network_calls([
             mock_total_rows_response,
             mock_add_entries_response,

@@ -254,7 +254,7 @@ def test_add_dataset_entries_with_file_data_attachment(self) -> None:
             name="data.bin",
             mime_type="application/octet-stream"
         )

         entry = DatasetEntry(entry={
             "input": Variable(type="text", payload="Input with file data"),
             "data": Variable(type="file", payload=[file_data_attachment])

@@ -272,7 +272,7 @@ def test_add_dataset_entries_mixed_input_types(self) -> None:
         mock_add_entries_response = self._create_mock_response(
             b'{"data": {"ids": ["entry1", "entry2"], "cells": []}}'
         )

         mock_client = self._setup_mock_network_calls([
             mock_total_rows_response,
             mock_add_entries_response

@@ -283,7 +283,7 @@ def test_add_dataset_entries_mixed_input_types(self) -> None:
             "input": Variable(type="text", payload="Object input"),
             "output": Variable(type="json", payload={"result": "object"}),
         })

         dict_entry = {
             "input": "Dict input",
             "output": {"result": "dict"}
maxim/tests/test_add_dataset_entries_integration.py

@@ -16,7 +16,7 @@
 # Environment variables for integration testing
 MAXIM_API_KEY = os.getenv("MAXIM_API_KEY")
 MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL") or "https://app.getmaxim.ai"
-MAXIM_DATASET_ID = os.getenv("MAXIM_DATASET_ID")  # Dataset ID for integration tests
+MAXIM_DATASET_ID = "cmesayj1f001c9ybdxwxmtilk"  # Dataset ID for integration tests

Suggested change:
-MAXIM_DATASET_ID = "cmesayj1f001c9ybdxwxmtilk"  # Dataset ID for integration tests
+MAXIM_DATASET_ID = os.getenv("MAXIM_DATASET_ID")
🤖 Prompt for AI Agents
In maxim/tests/test_add_dataset_entries_integration.py around line 19, the test
hard-codes MAXIM_DATASET_ID which bypasses the env-driven gating and causes
shared-state/CI flakiness; replace the constant with reading from an environment
variable (e.g., os.getenv("MAXIM_DATASET_ID")) and add a pytest skip guard
(pytest.skip or pytest.mark.skipif) when the env var is not set so the test
won't run or leak IDs in CI; ensure no fallback hard-coded ID is used and update
imports to include os and pytest if necessary.
maxim/tests/test_anthropic.py

@@ -20,15 +20,14 @@
 baseUrl = os.getenv("MAXIM_BASE_URL") or "https://app.getmaxim.ai"
 repoId = os.getenv("MAXIM_LOG_REPO_ID")


 class TestAnthropicWithMockWriter(unittest.TestCase):
     """Test class demonstrating how to use MockLogWriter for verification."""

     def setUp(self):
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
         # Create logger and patch its writer
-        self.logger = Maxim().logger()
+        self.logger = Maxim({"base_url": baseUrl}).logger()
         self.mock_writer = inject_mock_writer(self.logger)
Comment on lines +30 to 31
💡 Verification agent

🧩 Analysis chain

Stabilize logger creation: provide repo-id fallback and guard on missing secrets

Right now the test will fail (or be flaky in CI) if MAXIM_LOG_REPO_ID / MAXIM_API_KEY / ANTHROPIC_API_KEY aren't present. Add a skip guard and supply a deterministic logger id fallback so the test runs without repo-id secrets.

-        # Create logger and patch its writer
-        self.logger = Maxim({"base_url": baseUrl}).logger()
+        # Create logger and patch its writer
+        # Fail fast when required keys are missing (stabilizes CI without secrets)
+        if not apiKey or not anthropicApiKey:
+            self.skipTest("MAXIM_API_KEY and/or ANTHROPIC_API_KEY not set")
+        self.logger = Maxim({"base_url": baseUrl}).logger(
+            {"id": (repoId or f"test-repo-{uuid4()}")}
+        )

🏁 Script executed:

#!/bin/bash
# Locate the test file path
fd --glob "*test_anthropic.py"

Length of output: 62

🏁 Script executed:

#!/bin/bash
# Display the first 200 lines of the test file to inspect context around logger creation
sed -n '1,200p' maxim/tests/test_anthropic.py

Length of output: 7467

🏁 Script executed:

#!/bin/bash
# Search for the Maxim class definition to inspect logger() signature
rg -nP '^class\s+Maxim\b' -g '*.py' -n -C2

Length of output: 191

🏁 Script executed:

#!/bin/bash
# Search for the logger method in maxim/maxim.py
grep -R -n "def logger" maxim/maxim.py

Length of output: 63

🏁 Script executed:

#!/bin/bash
# Show lines around the logger method definition to inspect its signature
sed -n '880,940p' maxim/maxim.py

Length of output: 2247
sed -n '880,940p' maxim/maxim.pyLength of output: 2247 Stabilize logger creation in TestAnthropicWithMockWriter.setUp The existing test will error out whenever
--- a/maxim/tests/test_anthropic.py
+++ b/maxim/tests/test_anthropic.py
@@ class TestAnthropicWithMockWriter(unittest.TestCase):
def setUp(self):
- # Create logger and patch its writer
- self.logger = Maxim({"base_url": baseUrl}).logger()
+ # Skip if required API keys aren’t configured (avoids CI flakiness)
+ if not apiKey or not anthropicApiKey:
+ self.skipTest("MAXIM_API_KEY and/or ANTHROPIC_API_KEY not set")
+ # Provide a deterministic repo ID (env or test-scoped UUID)
+ test_repo_id = repoId or f"test-repo-{uuid4()}"
+ self.logger = Maxim({"base_url": baseUrl}).logger(
+ {"id": test_repo_id}
+ )
self.mock_writer = inject_mock_writer(self.logger)This ensures:
🤖 Prompt for AI Agents |
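The repo-id fallback from the diff above can be isolated into a small helper for clarity. The function name is hypothetical; `MAXIM_LOG_REPO_ID` is the env variable the test module already reads:

```python
import os
from uuid import uuid4


def resolve_repo_id(env=None):
    """Return the log-repo ID for tests: the env value when present,
    otherwise a test-scoped UUID so setUp never crashes on a missing
    secret. (Sketch of the review suggestion, not SDK code.)
    """
    env = os.environ if env is None else env
    return env.get("MAXIM_LOG_REPO_ID") or f"test-repo-{uuid4()}"
```

Keeping the fallback in one place means both `setUp` methods in the file can share it instead of duplicating the `or` expression.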
@@ -123,7 +122,7 @@ def setUp(self):
         # This is a hack to ensure that the Maxim instance is not cached
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
-        self.logger = Maxim().logger()
+        self.logger = Maxim({"base_url": baseUrl}).logger()
         self.mock_writer = inject_mock_writer(self.logger)

     def test_messages_using_wrapper(self):