130 changes: 130 additions & 0 deletions .github/workflows/tests.yml
@@ -0,0 +1,130 @@
name: Run Tests

on:
  push:
    branches: [main,beta]
  pull_request:
    branches: [main,beta]
    types: [opened, synchronize, reopened, ready_for_review]

jobs:
  test-main:
    name: Test Main SDK (Python 3.9)
    runs-on: ubuntu-latest
Comment on lines +16 to +18

🧹 Nitpick (assertive)

Add workflow-level hardening and concurrency.

Reduces default token scope and auto-cancels superseded runs on the same branch.

+# Restrict the default GITHUB_TOKEN scope
+permissions:
+  contents: read
+
+# Cancel previous in-progress runs for the same ref
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
 jobs:

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In .github/workflows/tests.yml around lines 10 to 12, add workflow-level
hardening by restricting the default GITHUB_TOKEN permissions (for example:
permissions: contents: read) and enable concurrency to auto-cancel superseded
runs (for example: concurrency: group: ${{ github.workflow }}-${{ github.ref }},
cancel-in-progress: true). Update the top-level of the workflow file to include
these two keys so runs use the reduced token scope and newer runs cancel older
in-progress runs on the same branch.

    if: github.event.pull_request.draft == false

Comment on lines +19 to +20

⚠️ Potential issue

Draft/fork-safe gating: keep runs on push, skip drafts and forks on PRs.

Current condition breaks on push events. Use an event-aware expression.

-    if: github.event.pull_request.draft == false
+    if: ${{ github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false) }}
📝 Committable suggestion


Suggested change
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
if: ${{ github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false) }}
steps:
# …rest of this job’s steps…
🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 19-20: the current if clause
(github.event.pull_request.draft == false) fails on push events; update the
condition to be event-aware so pushes always run and pull_request runs only when
not a draft (e.g., check github.event_name == 'push' OR when github.event_name
== 'pull_request' ensure github.event.pull_request.draft == false), and ensure
the expression also guards accessing github.event.pull_request when the event is
not a pull_request to avoid errors.

    steps:
      - uses: actions/checkout@v4

      - name: Install uv
        uses: astral-sh/setup-uv@v4
        with:
          version: "latest"


🧹 Nitpick (assertive)

Pin actions to immutable SHAs and set minimal permissions

For supply-chain safety and least privilege:

  • Pin actions/checkout@v4 and astral-sh/setup-uv@v4 to commit SHAs.
  • Add permissions: contents: read at workflow top.

These are standard hardening steps.

Proposed additions (example):

 name: Run Tests

+permissions:
+  contents: read
+
 on:

Please replace version tags with their corresponding SHAs (example snippet below—update with current SHAs):

- uses: actions/checkout@v4
+ uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac  # v4

- uses: astral-sh/setup-uv@v4
+ uses: astral-sh/setup-uv@<pin-to-latest-v4-sha>

If you want, I can fetch the latest SHAs and update the patch.

Also applies to: 70-77

🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 26 to 33 (and similar change at 70-77):
the workflow uses action version tags which should be pinned to immutable commit
SHAs and the workflow lacks least-privilege permissions; update each uses: entry
(actions/checkout@v4 and astral-sh/setup-uv@v4) to the corresponding commit SHA
for that tag (replace the tag with the full SHA for the referenced release) and
add a top-level permissions: contents: read entry to the workflow YAML; apply
the same SHA-pinning for the other occurrences at lines 70-77.


🧹 Nitpick (assertive)

Use fixed action versions instead of “latest” for reproducibility.

-      - name: Install uv
-        uses: astral-sh/setup-uv@v4
-        with:
-          version: "latest"
+      - name: Install uv
+        uses: astral-sh/setup-uv@v4
+        with:
+          version: "0.5.x"  # or a specific known-good version

Also applies to: 67-74

🤖 Prompt for AI Agents
.github/workflows/tests.yml lines 26-33 (and similarly lines 67-74): the
workflow uses the action astral-sh/setup-uv@v4 with with: version: "latest",
which reduces reproducibility; change the workflow to pin a specific, tested
version (replace "latest" with a concrete version string or tag like "vX.Y.Z" or
a commit SHA), and update both occurrences (lines 26-33 and 67-74) to the chosen
fixed version so the CI is deterministic and reproducible.

🧹 Nitpick (assertive)

Optional: Add timeout-minutes to guard against hung runs.

   test-main:
     name: Test Main SDK (Python 3.9)
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-latest
+    timeout-minutes: 30
@@
   additional-tests:
     name: Test Additional Integrations (Python 3.10)
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-latest
+    timeout-minutes: 30

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 26-33: the workflow currently lacks a
timeout and can hang indefinitely; add a timeout-minutes entry at the job level
(same indentation as runs-on) for the job containing these steps (e.g.,
timeout-minutes: 10 or an org-appropriate value) to guard against hung runs and
make it configurable via a workflow var if needed.

Comment on lines +32 to +36

🧹 Nitpick (assertive)

Pin uv installer version for reproducibility.

Avoid “latest” drift.

-      - name: Install uv
-        uses: astral-sh/setup-uv@v4
-        with:
-          version: "latest"
+      - name: Install uv
+        uses: astral-sh/setup-uv@v4
+        with:
+          version: "0.5.x"

Also applies to: 107-111

🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 32-36 (and also apply same change to
lines 107-111): the workflow currently pins the astral-sh/setup-uv action to
version "latest", which can cause nondeterministic CI behavior; replace "latest"
with a specific, tested tag or semver (for example v4.2.1 or v4@sha256:<commit>)
to pin the installer version for reproducibility, and update both occurrences
(lines 32-36 and 107-111) to use the same explicit version string.

      - name: Set up Python 3.9
        run: uv python install 3.9

      - name: Backup pyproject.toml
        run: |
          cp pyproject.toml pyproject.toml.bak

      - name: Remove additional dependencies
        run: |
          sed -i.bak '/additional_dev = \[/,/\]/d' pyproject.toml

      - name: Install dependencies (dev only)
        run: |
          uv sync --python 3.9

Comment on lines +40 to +43

🛠️ Refactor suggestion

Install only dev dependencies and lock resolution for reproducibility.

Use group selection and --frozen to catch accidental resolver drift.

-      - name: Install dependencies (dev only)
-        run: |
-          uv sync --python 3.9
+      - name: Install dependencies (dev only)
+        run: |
+          uv sync --group dev --python 3.9.20 --frozen
📝 Committable suggestion


Suggested change
- name: Install dependencies (dev only)
run: |
uv sync --python 3.9
- name: Install dependencies (dev only)
run: |
uv sync --group dev --python 3.9.20 --frozen
🤖 Prompt for AI Agents
.github/workflows/tests.yml lines 45-48: the workflow currently runs "uv sync
--python 3.9" which installs all dependencies without locking or restricting to
dev-only; update the command to select the dev group and enable frozen
resolution (e.g., use the CLI flags to install only the dev group and add
--frozen) so that only dev dependencies are installed and the resolver will fail
on drift, ensuring reproducible installs.

      - name: Run main tests (excluding CrewAI)
        env:
          MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }}
          MAXIM_BASE_URL: ${{ secrets.MAXIM_BASE_URL }}
          MAXIM_DATASET_ID: ${{ secrets.MAXIM_DATASET_ID }}
          MAXIM_WORKSPACE_ID: ${{ secrets.MAXIM_WORKSPACE_ID }}
          MAXIM_LOG_REPO_ID: ${{ secrets.MAXIM_LOG_REPO_ID }}
          MAXIM_PROMPT_CHAIN_VERSION_ID: ${{ secrets.MAXIM_PROMPT_CHAIN_VERSION_ID }}
          MAXIM_ASSISTANT_PROMPT_VERSION_ID: ${{ secrets.MAXIM_ASSISTANT_PROMPT_VERSION_ID }}
          MAXIM_ASSISTANT_PROMPT_CHAIN_VERSION_ID: ${{ secrets.MAXIM_ASSISTANT_PROMPT_CHAIN_VERSION_ID }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          AZURE_OPENAI_ENDPOINT: ${{ secrets.AZURE_OPENAI_ENDPOINT }}
          AZURE_OPENAI_KEY: ${{ secrets.AZURE_OPENAI_KEY }}
          PORTKEY_API_KEY: ${{ secrets.PORTKEY_API_KEY }}
          PORTKEY_VIRTUAL_KEY: ${{ secrets.PORTKEY_VIRTUAL_KEY }}
          LLAMAINDEX_API_KEY: ${{ secrets.LLAMAINDEX_API_KEY }}
          TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
          TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
Comment on lines +58 to +62

🧹 Nitpick (assertive)

Consider removing unused secrets (PORTKEY_VIRTUAL_KEY).

Tests moved to provider-based config; if PORTKEY_VIRTUAL_KEY is unused, omit it from env to reduce exposure surface.

Do you want me to scan the repo for usages and submit a follow-up patch?

🤖 Prompt for AI Agents
.github/workflows/tests.yml lines 51-55: the workflow exposes
PORTKEY_VIRTUAL_KEY which appears unused; search the repository for any
references to PORTKEY_VIRTUAL_KEY (including CI, infra, and provider config
files) and if no usages exist, remove the PORTKEY_VIRTUAL_KEY entry from the env
block in this workflow and delete the corresponding GitHub Actions secret to
reduce exposure; if usages are found, update those callers to use the
provider-based config instead or consolidate to a single secret name, then
adjust the workflow accordingly and run CI locally or via a test branch to
verify no breakage.
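If it helps to confirm the secret really is unused before dropping it, a quick scan from the repo root can list any remaining references. This is only an illustrative sketch (the skip list and the standalone-script framing are assumptions, not part of the PR):

from pathlib import Path

NEEDLE = "PORTKEY_VIRTUAL_KEY"
SKIP_DIRS = {".git", ".venv", "node_modules", "__pycache__"}  # assumed layout


def find_references(root: Path = Path(".")) -> list[Path]:
    """Return files under root that still mention the secret name."""
    hits: list[Path] = []
    for path in root.rglob("*"):
        if not path.is_file() or any(part in SKIP_DIRS for part in path.parts):
            continue
        try:
            text = path.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue
        if NEEDLE in text:
            hits.append(path)
    return hits


if __name__ == "__main__":
    for hit in find_references():
        print(hit)

If the scan comes back empty, both the env entry and the repository secret can be removed.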

          FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
          GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
          GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
          MAXIM_PROMPT_1_ID: ${{ secrets.MAXIM_PROMPT_1_ID }}
          MAXIM_PROMPT_2_ID: ${{ secrets.MAXIM_PROMPT_2_ID }}
          MAXIM_PROMPT_3_ID: ${{ secrets.MAXIM_PROMPT_3_ID }}
          MAXIM_PROMPT_1_VERSION_1_ID: ${{ secrets.MAXIM_PROMPT_1_VERSION_1_ID }}
          MAXIM_PROMPT_1_VERSION_3_ID: ${{ secrets.MAXIM_PROMPT_1_VERSION_3_ID }}
          MAXIM_PROMPT_1_VERSION_4_ID: ${{ secrets.MAXIM_PROMPT_1_VERSION_4_ID }}
          MAXIM_PROMPT_1_VERSION_5_ID: ${{ secrets.MAXIM_PROMPT_1_VERSION_5_ID }}
          MAXIM_FOLDER_1_ID: ${{ secrets.MAXIM_FOLDER_1_ID }}
          MAXIM_FOLDER_2_ID: ${{ secrets.MAXIM_FOLDER_2_ID }}
          MAXIM_TEST_RUN_PROMPT_VERSION_ID: ${{ secrets.MAXIM_TEST_RUN_PROMPT_VERSION_ID }}
          MAXIM_TEST_RUN_ASSISTANT_PROMPT_VERSION_ID: ${{ secrets.MAXIM_TEST_RUN_ASSISTANT_PROMPT_VERSION_ID }}
          MAXIM_TEST_RUN_ASSISTANT_PROMPT_CHAIN_VERSION_ID: ${{ secrets.MAXIM_TEST_RUN_ASSISTANT_PROMPT_CHAIN_VERSION_ID }}
        run: |
          uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml

🛠️ Refactor suggestion

Create junit directory before pytest writes XML.

Prevents artifact step failures.

-        run: |
-          uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml
+        run: |
+          mkdir -p junit
+          uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml
📝 Committable suggestion


Suggested change
run: |
uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml
run: |
mkdir -p junit
uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml
🤖 Prompt for AI Agents
In .github/workflows/tests.yml around lines 78 to 79, pytest writes its JUnit
XML to junit/main-test-results.xml but the junit directory may not exist; modify
the workflow to create the directory before running pytest (e.g., add a step or
prepend the run script with mkdir -p junit) so pytest can write the XML and the
artifact step won't fail.


      - name: Upload main test results
        uses: actions/upload-artifact@v4
        if: success() || failure()    # run this step even if previous step failed
        with:
          name: main-test-results
          path: junit/main-test-results.xml

      - name: Publish main test results
        uses: EnricoMi/publish-unit-test-result-action@v2
        if: success() || failure()    # run this step even if previous step failed
        with:
          junit_files: "junit/main-test-results.xml"
          check_name: "Main Test Results"
Comment on lines +98 to +110

🧹 Nitpick (assertive)

Use always() for publishing steps (and consider pinning action SHAs).

Ensures artifacts/checks publish on failure/cancel.

-        if: success() || failure()    # run this step even if previous step failed
+        if: always()

Apply to both upload-artifact and publish-unit-test-result steps in both jobs. Also consider pinning actions to commit SHAs for supply-chain safety.

Also applies to: 124-136

🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 81 to 93 (and likewise update lines
124-136): the upload-artifact and publish-unit-test-result steps use "if:
success() || failure()" which should be replaced with "if: always()" so these
artifact and test-publishing steps run on cancelled/neutral runs as well; update
both occurrences to use if: always() and, where possible, pin the actions to
specific commit SHAs (e.g., actions/upload-artifact@<sha> and
EnricoMi/publish-unit-test-result-action@<sha>) to improve supply-chain safety.


Comment on lines +98 to +111

🧹 Nitpick (assertive)

Use always() for artifact publishing and pin actions to SHAs; grant checks: write.

Ensures publishing even on cancelled and satisfies permission needs.

-      - name: Upload main test results
-        uses: actions/upload-artifact@v4
-        if: success() || failure()    # run this step even if previous step failed
+      - name: Upload main test results
+        uses: actions/upload-artifact@<sha-for-v4>
+        if: always()
@@
-      - name: Publish main test results
-        uses: EnricoMi/publish-unit-test-result-action@v2
-        if: success() || failure()    # run this step even if previous step failed
+      - name: Publish main test results
+        uses: EnricoMi/publish-unit-test-result-action@<sha-for-v2>
+        if: always()
         with:
           junit_files: "junit/main-test-results.xml"
           check_name: "Main Test Results"

Add job-level permissions to allow creating check runs:

   test-main:
     name: Test Main SDK (Python 3.9)
     runs-on: ubuntu-latest
+    permissions:
+      contents: read
+      checks: write
+      pull-requests: write

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 76 to 89: replace the conditional "if:
success() || failure()" with "if: always()" for both the Upload main test
results and Publish main test results steps, pin both actions to immutable
commit SHAs instead of tags (replace actions/upload-artifact@v4 and
EnricoMi/publish-unit-test-result-action@v2 with their respective full commit
SHAs), and add job-level permissions with "checks: write" (and other minimal
required permissions) so the publish action can create check runs.

      - name: Restore pyproject.toml
        run: |
          mv pyproject.toml.bak pyproject.toml

  additional-tests:
    name: Test Additional Integrations (Python 3.10)
    runs-on: ubuntu-latest
    if: github.event.pull_request.draft == false

Comment on lines +117 to +120

⚠️ Potential issue

Apply the same draft/fork gating to additional-tests.

Mirror the main job condition.

-    if: github.event.pull_request.draft == false
+    if: ${{ github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false) }}
📝 Committable suggestion


Suggested change
name: Test Additional Integrations (Python 3.10)
runs-on: ubuntu-latest
if: github.event.pull_request.draft == false
name: Test Additional Integrations (Python 3.10)
runs-on: ubuntu-latest
if: ${{ github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false) }}
🤖 Prompt for AI Agents
In .github/workflows/tests.yml around lines 100 to 103, the "Test Additional
Integrations (Python 3.10)" job is missing the same pull request gating used by
the main job; update the job to include the identical if condition from the main
job (i.e., mirror the draft/fork check) so the job only runs for non-draft,
non-fork PRs by adding the same if: expression to this job definition.

    steps:
      - uses: actions/checkout@v4

      - name: Install uv
        uses: astral-sh/setup-uv@v4
        with:
          version: "latest"

      - name: Set up Python 3.10
        run: uv python install 3.10

      - name: Install dependencies (CrewAI only)
        run: |
          uv sync --python 3.10
Comment on lines +137 to +142

🛠️ Refactor suggestion

Pin Python 3.10 patch; install dev-only with frozen; rename step.

Increases determinism and clarity.

-      - name: Set up Python 3.10
-        run: uv python install 3.10
+      - name: Set up Python 3.10
+        run: uv python install 3.10.15
@@
-      - name: Install dependencies (CrewAI only)
-        run: |
-          uv sync --python 3.10
+      - name: Install dependencies (dev only)
+        run: |
+          uv sync --group dev --python 3.10.15 --frozen
📝 Committable suggestion


Suggested change
- name: Set up Python 3.10
run: uv python install 3.10
- name: Install dependencies (CrewAI only)
run: |
uv sync --python 3.10
- name: Set up Python 3.10
run: uv python install 3.10.15
- name: Install dependencies (dev only)
run: |
uv sync --group dev --python 3.10.15 --frozen
🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 107 to 112: the workflow currently
installs "python 3.10" and runs "uv sync --python 3.10" without pinning a patch,
and the step name is vague; update the install to pin a specific 3.10 patch
(e.g., 3.10.12) by changing the install invocation to that exact version, change
the sync command to install dev-only dependencies with a frozen lockfile (use
the uv flags --dev-only and --frozen along with --python <pinned-version>), and
rename the step to something explicit like "Install dev dependencies (CrewAI
only, frozen)" to improve clarity and determinism.


Comment on lines +137 to +143

⚠️ Potential issue

Ensure additional_dev deps are installed; pin Python and freeze.

CrewAI lives in additional_dev; install both dev and additional_dev.

-      - name: Set up Python 3.10
-        run: uv python install 3.10
+      - name: Set up Python 3.10
+        run: uv python install 3.10.15

-      - name: Install dependencies (CrewAI only)
-        run: |
-          uv sync --python 3.10
+      - name: Install dependencies (dev + additional_dev)
+        run: |
+          uv sync --group dev --group additional_dev --python 3.10.15 --frozen
📝 Committable suggestion


Suggested change
- name: Set up Python 3.10
run: uv python install 3.10
- name: Install dependencies (CrewAI only)
run: |
uv sync --python 3.10
- name: Set up Python 3.10
run: uv python install 3.10.15
- name: Install dependencies (dev + additional_dev)
run: |
uv sync --group dev --group additional_dev --python 3.10.15 --frozen
🤖 Prompt for AI Agents
In .github/workflows/tests.yml around lines 123 to 129, the workflow currently
mis-invokes Python setup and only installs CrewAI’s dev deps; update the step to
pin/setup Python 3.10 correctly, replace the incorrect "uv python install 3.10"
with a proper pinned Python setup, then ensure dependency installation runs uv
(or the project’s installer) to install both dev and additional_dev groups for
CrewAI, and finally freeze the installed packages to a requirements
file/artifact so the exact resolved deps are recorded.

Comment on lines +140 to +143

🛠️ Refactor suggestion

Install required groups for integrations with frozen resolution.

-      - name: Install dependencies (CrewAI only)
-        run: |
-          uv sync --python 3.10
+      - name: Install dependencies (dev + additional_dev)
+        run: |
+          uv sync --group dev --group additional_dev --python 3.10.15 --frozen
📝 Committable suggestion


Suggested change
- name: Install dependencies (CrewAI only)
run: |
uv sync --python 3.10
- name: Install dependencies (dev + additional_dev)
run: |
uv sync --group dev --group additional_dev --python 3.10.15 --frozen
🤖 Prompt for AI Agents
In .github/workflows/tests.yml around lines 126 to 129, the workflow currently
runs a plain "uv sync --python 3.10" which does not install the integration
extras nor enforce frozen resolutions; update the step to invoke uv sync with
the flags that (1) install the required integrations group/extras and (2) enable
frozen resolution/lockfile enforcement (use the UV CLI options for specifying
groups/extras and the frozen-lock or equivalent flag) so the integrations
dependencies are installed exactly as pinned.

      - name: Run additional integration tests
        run: |
          uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
Comment on lines +144 to +146

🧹 Nitpick (assertive)

Create junit dir and pass required env for integration tests.

Prevents junit write failure; ensure secrets available if needed.

-      - name: Run additional integration tests
-        run: |
-          uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
+      - name: Run additional integration tests
+        env:
+          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
+          MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }}
+          MAXIM_BASE_URL: ${{ secrets.MAXIM_BASE_URL }}
+          TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
+        run: |
+          mkdir -p junit
+          uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
📝 Committable suggestion


Suggested change
- name: Run additional integration tests
run: |
uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
- name: Run additional integration tests
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }}
MAXIM_BASE_URL: ${{ secrets.MAXIM_BASE_URL }}
TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
run: |
mkdir -p junit
uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 114 to 116: the workflow step that runs
the additional integration tests can fail because the junit output directory may
not exist and required environment variables/secrets for the tests may be
missing; update the step to create the junit directory before running tests
(e.g., mkdir -p junit) and ensure the needed secrets/env vars are passed into
the job or step (via workflow env: or with: env variables or by referencing
secrets) so pytest can write junit/additional-test-results.xml and the
integration tests have required credentials.


      - name: Upload additional test results
        uses: actions/upload-artifact@v4
        if: success() || failure()    # run this step even if previous step failed
        with:
          name: additional-test-results
          path: junit/additional-test-results.xml

      - name: Publish additional test results
        uses: EnricoMi/publish-unit-test-result-action@v2
        if: success() || failure()    # run this step even if previous step failed
        with:
          junit_files: "junit/additional-test-results.xml"
          check_name: "Additional Test Results"
10 changes: 8 additions & 2 deletions maxim/filter_objects.py
@@ -75,10 +75,16 @@ def parse_incoming_query(incoming_query: str) -> List[RuleType]:
                    value = bool(value)
                    parsed = True
                if not parsed:
-                    if type(value) == bool:
+                    try:
+                        value = int(value)
+                        parsed = True
+                    except ValueError:
+                        pass
+                if not parsed:
+                    if isinstance(value, bool):
                        value = bool(value)
                        parsed = True
-                    elif type(value) is int:
+                    elif isinstance(value, int):

🧹 Nitpick (assertive)

Fix inconsistent type checking pattern.

Line 87 uses isinstance(value, int) while the rest of the codebase uses type(value) is int (e.g., line 303). This inconsistency could lead to different behavior since isinstance returns True for boolean values (bool is a subclass of int), while type() is checks for exact type matches.

For consistency with the existing codebase pattern, apply this diff:

-                    elif isinstance(value, int):
+                    elif type(value) is int:
📝 Committable suggestion


Suggested change
elif isinstance(value, int):
elif type(value) is int:
🤖 Prompt for AI Agents
In maxim/filter_objects.py around line 87, replace the isinstance(value, int)
check with an exact type comparison (type(value) is int) to match the project
pattern and avoid treating bools as ints; update the conditional so it uses
type(value) is int (ensuring boolean values are not considered integers) and
keep the rest of the branch logic unchanged.

                        value = int(value)
                        parsed = True
Comment on lines +84 to 89

🛠️ Refactor suggestion

Remove redundant type checks after parsing.

Lines 84-89 contain redundant type checks that will never execute because value starts as a string and only gets converted through explicit parsing steps above. The boolean check will always be false since value was never converted to a boolean in the reachable code path, and the integer check is redundant after the integer parsing attempt above.

Apply this diff to remove the redundant checks:

-                if not parsed:
-                    if isinstance(value, bool):
-                        value = bool(value)
-                        parsed = True
-                    elif isinstance(value, int):
-                        value = int(value)
-                        parsed = True
📝 Committable suggestion


Suggested change
if isinstance(value, bool):
value = bool(value)
parsed = True
elif type(value) is int:
elif isinstance(value, int):
value = int(value)
parsed = True
🤖 Prompt for AI Agents
In maxim/filter_objects.py around lines 84 to 89, remove the redundant
isinstance checks for bool and int that follow the explicit parsing steps above;
they are unreachable because value starts as a string and parsing blocks already
handle int/bool conversion. Delete those two conditional branches and ensure
subsequent logic relies on the parsed flag/state set by the actual parsing
attempts instead of re-checking value's type.

                if not parsed:
Comment on lines +78 to 90

🛠️ Refactor suggestion

Be conservative when casting to int and remove unreachable isinstance branches

  • Guard integer casting with a strict regex so we don’t accidentally coerce arbitrary strings.
  • The subsequent isinstance checks are unreachable for this code path and add no value.
-                if not parsed:
-                    try:
-                        value = int(value)
-                        parsed = True
-                    except ValueError:
-                        pass
-                if not parsed:
-                    if isinstance(value, bool):
-                        value = bool(value)
-                        parsed = True
-                    elif isinstance(value, int):
-                        value = int(value)
-                        parsed = True
+                if not parsed and isinstance(value, str) and re.fullmatch(r"[+-]?\d+", value.strip()):
+                    try:
+                        value = int(value.strip())
+                        parsed = True
+                    except ValueError:
+                        pass

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In maxim/filter_objects.py around lines 78-90, the code tries int(value)
unguarded then has unreachable isinstance branches; change the flow to first
detect booleans (if value is bool, keep as-is), then only attempt integer
coercion for string-like inputs that strictly match a digits regex (e.g.
r'^[+-]?\d+$') before calling int(value), and remove the redundant
isinstance(value, int) branch (and any other unreachable isinstance checks) so
only valid numeric strings are coerced to int.
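For reference, the coercion order described above can be seen in a standalone sketch. The helper name and signature are illustrative and not taken from filter_objects.py; it covers only the integer branch and assumes the earlier boolean-string handling has already run:

import re
from typing import Union

_INT_PATTERN = re.compile(r"[+-]?\d+")


def coerce_value(value: Union[str, bool, int]) -> Union[str, bool, int]:
    """Illustrative sketch of the suggested conservative coercion.

    Booleans pass through untouched (bool is a subclass of int, so it is
    checked first), and only strings that are strictly numeric are converted
    to int; everything else is returned unchanged.
    """
    if isinstance(value, bool):
        return value
    if isinstance(value, str) and _INT_PATTERN.fullmatch(value.strip()):
        return int(value.strip())
    return value

With this shape, coerce_value("42") yields 42, while coerce_value("4.2") and coerce_value(True) come back unchanged, which is the conservative behaviour the review asks for.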

2 changes: 2 additions & 0 deletions maxim/logger/portkey/portkey.py
@@ -134,6 +134,7 @@ def create(self, *args, **kwargs):
.get("message", {})
.get("content", "")
)
if trace is not None:
trace.end()
except Exception as e:
scribe().warning(
@@ -201,6 +202,7 @@ async def create(self, *args, **kwargs):
.get("message", {})
.get("content", "")
)
if trace is not None:
trace.end()
except Exception as e:
scribe().warning(
28 changes: 14 additions & 14 deletions maxim/tests/test_add_dataset_entries_comprehensive.py
@@ -7,7 +7,11 @@
from maxim import Maxim, Config
from maxim.models.dataset import DatasetEntry, Variable, DatasetEntryWithRowNo, FileVariablePayload, VariableFileAttachment
from maxim.logger.components.attachment import FileAttachment, FileDataAttachment, UrlAttachment
from dotenv import load_dotenv
load_dotenv()

MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL")
MAXIM_API_KEY = os.getenv("MAXIM_API_KEY")
Comment on lines +10 to +14

🧹 Nitpick (assertive)

🛠️ Refactor suggestion

Avoid module-scope dotenv loading and env caching; this breaks test hermeticity.

  • load_dotenv() at import time and caching MAXIM_BASE_URL/MAXIM_API_KEY as module constants make tests order-dependent and hard to override per-test. Combined with tearDown deleting env vars (see below), this can corrupt the global process env for other tests.
  • Prefer resolving .env deterministically and providing safe test defaults, or read env inside setUp.

Apply this diff to make resolution explicit and add safe test defaults (while still honoring env when set):

-from dotenv import load_dotenv
-load_dotenv()
-
-MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL")
-MAXIM_API_KEY = os.getenv("MAXIM_API_KEY")
+from dotenv import load_dotenv, find_dotenv
+# Resolve nearest .env without overriding already-set env
+load_dotenv(find_dotenv(usecwd=True), override=False)
+# Note: do not cache real secrets at import-time; provide safe test defaults
+MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL", "http://localhost:8000")
+MAXIM_API_KEY = os.getenv("MAXIM_API_KEY", "test-api-key")

If you don’t want defaults, drop them here and inject via setUp using patch.dict. I can provide a version that fully avoids module-level reads.

📝 Committable suggestion


Suggested change
from dotenv import load_dotenv
load_dotenv()
MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL")
MAXIM_API_KEY = os.getenv("MAXIM_API_KEY")
from dotenv import load_dotenv, find_dotenv
# Resolve nearest .env without overriding already-set env
load_dotenv(find_dotenv(usecwd=True), override=False)
# Note: do not cache real secrets at import-time; provide safe test defaults
MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL", "http://localhost:8000")
MAXIM_API_KEY = os.getenv("MAXIM_API_KEY", "test-api-key")
🤖 Prompt for AI Agents
In maxim/tests/test_add_dataset_entries_comprehensive.py around lines 10-14,
remove the module-level load_dotenv() and module-scoped
MAXIM_BASE_URL/MAXIM_API_KEY reads; instead, resolve .env and read env vars
inside the test's setUp (or individual tests) so each test can control/override
values deterministically. Either call load_dotenv(path) in setUp and then use
os.getenv("MAXIM_BASE_URL", "http://localhost:...")/os.getenv("MAXIM_API_KEY",
"test-key") to provide safe defaults, or avoid defaults and inject values via
unittest.mock.patch.dict(os.environ) per-test; do not mutate global env at
import time or rely on tearDown to undo module-level caching.
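For the variant that avoids module-level reads entirely, each test can inject its own environment with unittest.mock.patch.dict. A minimal sketch with placeholder values (not real credentials):

import os
import unittest
from unittest.mock import patch


class TestWithInjectedEnv(unittest.TestCase):
    """Sketch: the test owns its environment and restores it automatically."""

    def setUp(self) -> None:
        patcher = patch.dict(
            os.environ,
            {
                "MAXIM_API_KEY": "test-api-key",            # placeholder
                "MAXIM_BASE_URL": "http://localhost:8000",  # placeholder
            },
        )
        patcher.start()
        self.addCleanup(patcher.stop)

    def test_env_is_visible_inside_the_test(self) -> None:
        self.assertEqual(os.getenv("MAXIM_API_KEY"), "test-api-key")

Because patch.dict restores the original environment when the patcher stops, nothing has to be deleted in tearDown and other tests are unaffected.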


class TestAddDatasetEntriesComprehensive(unittest.TestCase):
"""Comprehensive test suite for the updated add_dataset_entries method."""
@@ -17,13 +21,9 @@ def setUp(self) -> None:
if hasattr(Maxim, "_instance"):
delattr(Maxim, "_instance")

# Set up test environment variables
os.environ["MAXIM_API_KEY"] = "test-api-key"
os.environ["MAXIM_BASE_URL"] = "https://app.getmaxim.ai"

config = Config(
api_key="test-api-key",
base_url="https://app.getmaxim.ai",
api_key=MAXIM_API_KEY,
base_url=MAXIM_BASE_URL,
debug=True,
raise_exceptions=True
)
Comment on lines 24 to 29

🛠️ Refactor suggestion

Do not pass None into Config; load env in setUp with safe defaults

Maxim requires an API key; without defaults, CI will fail. Build kwargs conditionally or provide test-safe defaults.

Apply:

-        config = Config(
-            api_key=MAXIM_API_KEY,
-            base_url=MAXIM_BASE_URL,
-            debug=True,
-            raise_exceptions=True
-        )
+        # Resolve env at setup time
+        load_dotenv(find_dotenv(usecwd=True), override=False)
+        api_key = os.getenv("MAXIM_API_KEY", "test-api-key")
+        base_url = os.getenv("MAXIM_BASE_URL", "http://localhost:8000")
+        cfg = {"debug": True, "raise_exceptions": True}
+        # Only add keys if present to avoid None-paths; we provide safe defaults above.
+        if api_key:
+            cfg["api_key"] = api_key
+        if base_url:
+            cfg["base_url"] = base_url
+        config = Config(**cfg)

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In maxim/tests/test_add_dataset_entries_comprehensive.py around lines 24-29,
avoid passing None into Config by loading env vars in setUp with safe defaults:
import os, read api_key = os.getenv('MAXIM_API_KEY', 'test-api-key') and
base_url = os.getenv('MAXIM_BASE_URL', 'http://localhost'), then construct
Config using those values (and other flags) so kwargs are never None;
alternatively build the kwargs dict conditionally (only include a key if env
value exists) or use the safe defaults to ensure CI won't fail when env vars are
absent.

@@ -175,7 +175,7 @@ def test_add_dataset_entries_with_file_attachments(self) -> None:
name="test.txt",
mime_type="text/plain"
)

entry = DatasetEntry(entry={
"input": Variable(type="text", payload="Input with file"),
"files": Variable(type="file", payload=[file_attachment])
@@ -186,7 +186,7 @@

# Verify file upload process was triggered
self.assertEqual(mock_client.request.call_count, 4) # total_rows + entries + upload_url + patch

finally:
# Clean up temp file
os.unlink(temp_file_path)
@@ -201,7 +201,7 @@
b'{"data": {"url": "https://signed-url.com", "key": "datasets/test-dataset-id/entry123/test-file-key"}}'
)
mock_patch_response = self._create_mock_response(b'{"success": true}')

self._setup_mock_network_calls([
mock_total_rows_response,
mock_add_entries_response,
@@ -215,7 +215,7 @@
name="image.jpg",
mime_type="image/jpeg"
)

entry = DatasetEntry(entry={
"input": Variable(type="text", payload="Input with URL"),
"images": Variable(type="file", payload=[url_attachment])
@@ -240,7 +240,7 @@
b'{"data": {"url": "https://signed-url.com", "key": "datasets/test-dataset-id/entry123/test-file-key"}}'
)
mock_patch_response = self._create_mock_response(b'{"success": true}')

mock_client = self._setup_mock_network_calls([
mock_total_rows_response,
mock_add_entries_response,
@@ -254,7 +254,7 @@
name="data.bin",
mime_type="application/octet-stream"
)

entry = DatasetEntry(entry={
"input": Variable(type="text", payload="Input with file data"),
"data": Variable(type="file", payload=[file_data_attachment])
@@ -272,7 +272,7 @@
mock_add_entries_response = self._create_mock_response(
b'{"data": {"ids": ["entry1", "entry2"], "cells": []}}'
)

mock_client = self._setup_mock_network_calls([
mock_total_rows_response,
mock_add_entries_response
@@ -283,7 +283,7 @@
"input": Variable(type="text", payload="Object input"),
"output": Variable(type="json", payload={"result": "object"}),
})

dict_entry = {
"input": "Dict input",
"output": {"result": "dict"}
18 changes: 9 additions & 9 deletions maxim/tests/test_add_dataset_entries_integration.py
@@ -16,7 +16,7 @@
# Environment variables for integration testing
MAXIM_API_KEY = os.getenv("MAXIM_API_KEY")
MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL") or "https://app.getmaxim.ai"
MAXIM_DATASET_ID = os.getenv("MAXIM_DATASET_ID") # Dataset ID for integration tests
MAXIM_DATASET_ID = "cmesayj1f001c9ybdxwxmtilk" # Dataset ID for integration tests

⚠️ Potential issue

Do not hard-code MAXIM_DATASET_ID; restore env-driven gating to avoid leaking IDs and CI flakiness.

A fixed dataset ID defeats the skip guard, risks polluting a shared dataset across runs, and likely explains the pipeline failure “Total rows mismatch … expected 4, got 2” when tests run concurrently or in different environments.

Apply this diff:

-MAXIM_DATASET_ID = "cmesayj1f001c9ybdxwxmtilk"  # Dataset ID for integration tests
+MAXIM_DATASET_ID = os.getenv("MAXIM_DATASET_ID")
📝 Committable suggestion


Suggested change
MAXIM_DATASET_ID = "cmesayj1f001c9ybdxwxmtilk" # Dataset ID for integration tests
MAXIM_DATASET_ID = os.getenv("MAXIM_DATASET_ID")
🤖 Prompt for AI Agents
In maxim/tests/test_add_dataset_entries_integration.py around line 19, the test
hard-codes MAXIM_DATASET_ID which bypasses the env-driven gating and causes
shared-state/CI flakiness; replace the constant with reading from an environment
variable (e.g., os.getenv("MAXIM_DATASET_ID")) and add a pytest skip guard
(pytest.skip or pytest.mark.skipif) when the env var is not set so the test
won't run or leak IDs in CI; ensure no fallback hard-coded ID is used and update
imports to include os and pytest if necessary.
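A minimal sketch of the env-driven gating described above, using unittest's skip decorator; the class and test names are illustrative, not the real ones in this file:

import os
import unittest

MAXIM_DATASET_ID = os.getenv("MAXIM_DATASET_ID")


@unittest.skipIf(
    not MAXIM_DATASET_ID,
    "MAXIM_DATASET_ID is not set; skipping dataset integration tests",
)
class TestDatasetEntriesGated(unittest.TestCase):
    def test_dataset_id_is_configured(self) -> None:
        # Real tests would pass MAXIM_DATASET_ID to the SDK; this is a placeholder.
        self.assertTrue(MAXIM_DATASET_ID)

With the hard-coded ID removed, runs without the secret are skipped instead of writing into a shared dataset.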



class TestAddDatasetEntriesIntegration(unittest.TestCase):
@@ -118,7 +118,7 @@ def test_add_dataset_entries_with_dictionary_input(self) -> None:
entries = [
{
"Input": "How do I calculate the area of a circle?",
"Expected Output": {
"Test": {
"formula": "A = π × r²",
"explanation": "The area of a circle is π times the radius squared",
"example": "For radius 5: A = π × 25 = 78.54 square units",
@@ -129,7 +129,7 @@
},
{
"Input": "What is 15 + 27?",
"Expected Output": {
"Test": {
"answer": 42,
"method": "simple addition",
"step_by_step": ["15 + 27", "= 42"],
@@ -195,7 +195,7 @@ def test_add_dataset_entries_with_dataset_entry_objects(self) -> None:
# Create DatasetEntry objects with correct column structure
entry1 = DatasetEntry(entry={
"Input": Variable(type="text", payload="What is the capital of France?"),
"Expected Output": Variable(type="json", payload={
"Test": Variable(type="json", payload={
"answer": "Paris",
"confidence": 0.95,
"reasoning": "Paris is the capital and largest city of France",
@@ -206,7 +206,7 @@

entry2 = DatasetEntry(entry={
"Input": Variable(type="text", payload="Explain the process of photosynthesis"),
"Expected Output": Variable(type="json", payload={
"Test": Variable(type="json", payload={
"answer": "Photosynthesis is the process by which plants convert sunlight into energy",
"confidence": 0.98,
"key_components": ["chlorophyll", "sunlight", "carbon dioxide", "water"],
@@ -265,7 +265,7 @@ def test_add_dataset_entries_with_file_attachments(self) -> None:

entry_with_file = DatasetEntry(entry={
"Input": Variable(type="text", payload="Integration test with file attachment"),
"Expected Output": Variable(type="json", payload={
"Test": Variable(type="json", payload={
"result": "Successfully processed file attachment",
"file_type": "text",
"confidence": 0.95
@@ -283,7 +283,7 @@

entry_with_data = DatasetEntry(entry={
"Input": Variable(type="text", payload="Integration test with binary data"),
"Expected Output": Variable(type="json", payload={
"Test": Variable(type="json", payload={
"result": "Successfully processed binary data",
"file_size_bytes": len(binary_data),
"confidence": 0.97
@@ -349,7 +349,7 @@ def test_mixed_entry_types_integration(self) -> None:
# Create mixed entry types with correct columns
dict_entry = {
"Input": "What are the benefits of renewable energy?",
"Expected Output": {
"Test": {
"benefits": ["environmental protection", "sustainability", "cost savings"],
"types": ["solar", "wind", "hydro", "geothermal"],
"impact": "positive long-term effects on climate",
@@ -360,7 +360,7 @@

dataset_entry = DatasetEntry(entry={
"Input": Variable(type="text", payload="Explain machine learning in simple terms"),
"Expected Output": Variable(type="json", payload={
"Test": Variable(type="json", payload={
"definition": "Machine learning is AI that learns from data to make predictions",
"key_concepts": ["algorithms", "training", "predictions", "patterns"],
"applications": ["image recognition", "natural language processing", "recommendation systems"],
1 change: 0 additions & 1 deletion maxim/tests/test_agno.py
@@ -1,6 +1,5 @@
import unittest
import os
from uuid import uuid4

from maxim.logger import Logger
from maxim.logger.agno import instrument_agno
5 changes: 2 additions & 3 deletions maxim/tests/test_anthropic.py
@@ -20,15 +20,14 @@
baseUrl = os.getenv("MAXIM_BASE_URL") or "https://app.getmaxim.ai"
repoId = os.getenv("MAXIM_LOG_REPO_ID")


class TestAnthropicWithMockWriter(unittest.TestCase):
"""Test class demonstrating how to use MockLogWriter for verification."""

def setUp(self):
if hasattr(Maxim, "_instance"):
delattr(Maxim, "_instance")
# Create logger and patch its writer
self.logger = Maxim().logger()
self.logger = Maxim({"base_url": baseUrl}).logger()
self.mock_writer = inject_mock_writer(self.logger)
Comment on lines +30 to 31

💡 Verification agent

🧩 Analysis chain

Stabilize logger creation: provide repo-id fallback and guard on missing secrets

Right now the test will fail (or be flaky in CI) if MAXIM_LOG_REPO_ID / MAXIM_API_KEY / ANTHROPIC_API_KEY aren’t present. Add a skip guard and supply a deterministic logger id fallback so the test runs without repo-id secrets.

-        # Create logger and patch its writer
-        self.logger = Maxim({"base_url": baseUrl}).logger()
+        # Create logger and patch its writer
+        # Fail fast when required keys are missing (stabilizes CI without secrets)
+        if not apiKey or not anthropicApiKey:
+            self.skipTest("MAXIM_API_KEY and/or ANTHROPIC_API_KEY not set")
+        self.logger = Maxim({"base_url": baseUrl}).logger(
+            {"id": (repoId or f"test-repo-{uuid4()}")}
+        )

🏁 Script executed:

#!/bin/bash
# Locate the test file path
fd --glob "*test_anthropic.py"

Length of output: 62


🏁 Script executed:

#!/bin/bash
# Display the first 200 lines of the test file to inspect context around logger creation
sed -n '1,200p' maxim/tests/test_anthropic.py

Length of output: 7467


🏁 Script executed:

#!/bin/bash
# Search for the Maxim class definition to inspect logger() signature
rg -nP '^class\s+Maxim\b' -g '*.py' -n -C2

Length of output: 191


🏁 Script executed:

#!/bin/bash
# Search for the logger method in maxim/maxim.py
grep -R -n "def logger" -n maxim/maxim.py

Length of output: 63


🏁 Script executed:

#!/bin/bash
# Show lines around the logger method definition to inspect its signature
sed -n '880,940p' maxim/maxim.py

Length of output: 2247


Stabilize logger creation in TestAnthropicWithMockWriter.setUp

The existing test will error out whenever MAXIM_LOG_REPO_ID (or MAXIM_API_KEY/ANTHROPIC_API_KEY) isn’t set, causing CI failures. To make this deterministic and non-flaky:

  • In maxim/tests/test_anthropic.py at lines ~30–31, wrap the logger setup with a guard that skips the test if either API key is missing.
  • Pass an explicit id into logger(...), falling back to a static test-scoped UUID when MAXIM_LOG_REPO_ID isn’t defined.
--- a/maxim/tests/test_anthropic.py
+++ b/maxim/tests/test_anthropic.py
@@ class TestAnthropicWithMockWriter(unittest.TestCase):
     def setUp(self):
-        # Create logger and patch its writer
-        self.logger = Maxim({"base_url": baseUrl}).logger()
+        # Skip if required API keys aren’t configured (avoids CI flakiness)
+        if not apiKey or not anthropicApiKey:
+            self.skipTest("MAXIM_API_KEY and/or ANTHROPIC_API_KEY not set")
+        # Provide a deterministic repo ID (env or test-scoped UUID)
+        test_repo_id = repoId or f"test-repo-{uuid4()}"
+        self.logger = Maxim({"base_url": baseUrl}).logger(
+            {"id": test_repo_id}
+        )
         self.mock_writer = inject_mock_writer(self.logger)

This ensures:

  • Tests are auto-skipped when credentials are missing.
  • logger() never raises due to a missing repo ID, since we always supply id.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In maxim/tests/test_anthropic.py around lines 30–31, the test currently
constructs the logger using environment values and can raise when repo/API keys
are missing; modify setUp to first check environment and skip the test if either
MAXIM_API_KEY or ANTHROPIC_API_KEY is not set (use self.skipTest or raise
unittest.SkipTest), then call Maxim(...).logger(id=repo_id) supplying repo_id =
os.environ.get("MAXIM_LOG_REPO_ID") or a fixed test-scoped UUID (e.g. create one
via uuid.uuid4().hex) so logger() always receives an explicit id and cannot
raise due to a missing repo ID.


def test_messages_with_mock_writer_verification(self):
@@ -123,7 +122,7 @@ def setUp(self):
# This is a hack to ensure that the Maxim instance is not cached
if hasattr(Maxim, "_instance"):
delattr(Maxim, "_instance")
self.logger = Maxim().logger()
self.logger = Maxim({"base_url": baseUrl}).logger()
self.mock_writer = inject_mock_writer(self.logger)

def test_messages_using_wrapper(self):