151 changes: 151 additions & 0 deletions .github/workflows/tests.yml
@@ -0,0 +1,151 @@
name: Run Tests

on:
  push:
    branches: [main, beta]
  pull_request:
    branches: [main, beta]
    types: [opened, synchronize, reopened, ready_for_review]

permissions:
  contents: read
  checks: write
  pull-requests: write

jobs:
  test-main:
    name: Test Main SDK (Python 3.9)
    runs-on: ubuntu-latest
Comment on lines +16 to +18
🧹 Nitpick (assertive)

Add workflow-level hardening and concurrency.

Reduces default token scope and auto-cancels superseded runs on the same branch.

+# Cancel previous in-progress runs for the same ref
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
+# Restrict the default GITHUB_TOKEN scope
+permissions:
+  contents: read
+
 jobs:

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In .github/workflows/tests.yml around lines 10 to 12, add workflow-level
hardening by restricting the default GITHUB_TOKEN permissions (for example:
permissions: contents: read) and enable concurrency to auto-cancel superseded
runs (for example: concurrency: group: ${{ github.workflow }}-${{ github.ref }},
cancel-in-progress: true). Update the top-level of the workflow file to include
these two keys so runs use the reduced token scope and newer runs cancel older
in-progress runs on the same branch.
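
As an aside, `concurrency` can also be scoped to a single job rather than the whole workflow when only some jobs should cancel each other; a sketch of that variant (the group suffix here is illustrative, not part of the suggested change):

jobs:
  test-main:
    concurrency:
      group: ${{ github.workflow }}-${{ github.ref }}-test-main  # suffix is illustrative
      cancel-in-progress: true
    # …rest of the job…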

    if: github.event.pull_request.draft == false

Comment on lines +19 to +20
⚠️ Potential issue

Draft/fork-safe gating: keep runs on push, skip drafts and forks on PRs.

Current condition breaks on push events. Use an event-aware expression.

-    if: github.event.pull_request.draft == false
+    if: ${{ github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false) }}
🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 19-20: the current if clause
(github.event.pull_request.draft == false) fails on push events; update the
condition to be event-aware so pushes always run and pull_request runs only when
not a draft (e.g., check github.event_name == 'push' OR when github.event_name
== 'pull_request' ensure github.event.pull_request.draft == false), and ensure
the expression also guards accessing github.event.pull_request when the event is
not a pull_request to avoid errors.

    steps:
      - uses: actions/checkout@v4

      - name: Backup pyproject.toml
        run: |
          cp pyproject.toml pyproject.toml.bak

      - name: Remove additional dependencies
        run: |
          sed -i.bak '/^additional_dev = \[$/,/]$/d' pyproject.toml

      - name: Install uv
        uses: astral-sh/setup-uv@v4
        with:
          version: "latest"

Comment on lines +32 to +36
🧹 Nitpick (assertive)

Pin uv installer version for reproducibility.

Avoid “latest” drift.

-      - name: Install uv
-        uses: astral-sh/setup-uv@v4
-        with:
-          version: "latest"
+      - name: Install uv
+        uses: astral-sh/setup-uv@v4
+        with:
+          version: "0.5.x"

Also applies to: 107-111

🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 32-36 (and also apply the same change to
lines 107-111): the workflow currently sets the astral-sh/setup-uv action's
`version` input to "latest", which can cause nondeterministic CI behavior;
replace "latest" with a specific, tested uv version (for example "0.5.x"), and
consider pinning the action itself to a release tag or full commit SHA rather
than a floating major tag, updating both occurrences (lines 32-36 and 107-111)
to use the same explicit version string.
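
For illustration, a SHA-pinned step would look like the sketch below; the SHA shown is a placeholder, not a real setup-uv commit (resolve the actual one with `git ls-remote https://github.com/astral-sh/setup-uv refs/tags/v4`):

      - name: Install uv
        uses: astral-sh/setup-uv@0000000000000000000000000000000000000000  # v4 (placeholder SHA)
        with:
          version: "0.5.x"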

      - name: Set up Python 3.9
        run: uv python install 3.9

      - name: Install dependencies (dev only)
        run: |
          uv sync --python 3.9
Comment on lines +40 to +43
🛠️ Refactor suggestion

Install only dev dependencies and lock resolution for reproducibility.

Use group selection and --frozen to catch accidental resolver drift.

-      - name: Install dependencies (dev only)
-        run: |
-          uv sync --python 3.9
+      - name: Install dependencies (dev only)
+        run: |
+          uv sync --group dev --python 3.9.20 --frozen
🤖 Prompt for AI Agents
.github/workflows/tests.yml lines 45-48: the workflow currently runs "uv sync
--python 3.9" which installs all dependencies without locking or restricting to
dev-only; update the command to select the dev group and enable frozen
resolution (e.g., use the CLI flags to install only the dev group and add
--frozen) so that only dev dependencies are installed and the resolver will fail
on drift, ensuring reproducible installs.

Comment on lines +37 to +43
🛠️ Refactor suggestion

Pin Python patch and install dev-only with frozen lock.

Improves determinism; avoids pulling unintended extras.

-      - name: Set up Python 3.9
-        run: uv python install 3.9
+      - name: Set up Python 3.9
+        run: uv python install 3.9.20
@@
-      - name: Install dependencies (dev only)
-        run: |
-          uv sync --python 3.9
+      - name: Install dependencies (dev only)
+        run: |
+          uv sync --group dev --python 3.9.20 --frozen
-      - name: Set up Python 3.10
-        run: uv python install 3.10
+      - name: Set up Python 3.10
+        run: uv python install 3.10.15
@@
-      - name: Install dependencies (CrewAI only)
-        run: |
-          uv sync --python 3.10
+      - name: Install dependencies (dev only)
+        run: |
+          uv sync --group dev --python 3.10.15 --frozen

Also applies to: 112-118

🤖 Prompt for AI Agents
In .github/workflows/tests.yml around lines 37-43 (and also apply the same changes
to 112-118): pin the Python 3.9 runtime to a specific patch release (e.g.
3.9.17) instead of the generic "3.9", and change the dependency installation to
perform a dev-only install against the frozen lockfile; for example replace the
python install step with the pinned patch version and replace "uv sync
--python 3.9" with "uv sync --group dev --python 3.9.17 --frozen" (apply
identical edits at lines 112-118).

      - name: Run main tests (excluding CrewAI)
        env:
          MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }}
          MAXIM_BASE_URL: ${{ secrets.MAXIM_BASE_URL }}
          MAXIM_DATASET_ID: ${{ secrets.MAXIM_DATASET_ID }}
          MAXIM_WORKSPACE_ID: ${{ secrets.MAXIM_WORKSPACE_ID }}
          MAXIM_LOG_REPO_ID: ${{ secrets.MAXIM_LOG_REPO_ID }}
          MAXIM_PROMPT_CHAIN_VERSION_ID: ${{ secrets.MAXIM_PROMPT_CHAIN_VERSION_ID }}
          MAXIM_ASSISTANT_PROMPT_VERSION_ID: ${{ secrets.MAXIM_ASSISTANT_PROMPT_VERSION_ID }}
          MAXIM_ASSISTANT_PROMPT_CHAIN_VERSION_ID: ${{ secrets.MAXIM_ASSISTANT_PROMPT_CHAIN_VERSION_ID }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          AZURE_OPENAI_ENDPOINT: ${{ secrets.AZURE_OPENAI_ENDPOINT }}
          AZURE_OPENAI_KEY: ${{ secrets.AZURE_OPENAI_KEY }}
          PORTKEY_API_KEY: ${{ secrets.PORTKEY_API_KEY }}
          PORTKEY_VIRTUAL_KEY: ${{ secrets.PORTKEY_VIRTUAL_KEY }}
          LLAMAINDEX_API_KEY: ${{ secrets.LLAMAINDEX_API_KEY }}
          TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
          TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
Comment on lines +58 to +62
🧹 Nitpick (assertive)

Consider removing unused secrets (PORTKEY_VIRTUAL_KEY).

Tests moved to provider-based config; if PORTKEY_VIRTUAL_KEY is unused, omit it from env to reduce exposure surface.

Do you want me to scan the repo for usages and submit a follow-up patch?

🤖 Prompt for AI Agents
.github/workflows/tests.yml lines 51-55: the workflow exposes
PORTKEY_VIRTUAL_KEY which appears unused; search the repository for any
references to PORTKEY_VIRTUAL_KEY (including CI, infra, and provider config
files) and if no usages exist, remove the PORTKEY_VIRTUAL_KEY entry from the env
block in this workflow and delete the corresponding GitHub Actions secret to
reduce exposure; if usages are found, update those callers to use the
provider-based config instead or consolidate to a single secret name, then
adjust the workflow accordingly and run CI locally or via a test branch to
verify no breakage.
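
A throwaway step along these lines could confirm whether the secret is referenced anywhere before dropping it (a sketch only; the step name is illustrative and this is not part of the suggested change):

      - name: Check for PORTKEY_VIRTUAL_KEY usages
        run: git grep -n "PORTKEY_VIRTUAL_KEY" || echo "No usages found"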

          FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
          GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
          GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
          MAXIM_PROMPT_1_ID: ${{ secrets.MAXIM_PROMPT_1_ID }}
          MAXIM_PROMPT_2_ID: ${{ secrets.MAXIM_PROMPT_2_ID }}
          MAXIM_PROMPT_3_ID: ${{ secrets.MAXIM_PROMPT_3_ID }}
          MAXIM_PROMPT_1_VERSION_1_ID: ${{ secrets.MAXIM_PROMPT_1_VERSION_1_ID }}
          MAXIM_PROMPT_1_VERSION_3_ID: ${{ secrets.MAXIM_PROMPT_1_VERSION_3_ID }}
          MAXIM_PROMPT_1_VERSION_4_ID: ${{ secrets.MAXIM_PROMPT_1_VERSION_4_ID }}
          MAXIM_PROMPT_1_VERSION_5_ID: ${{ secrets.MAXIM_PROMPT_1_VERSION_5_ID }}
          MAXIM_FOLDER_1_ID: ${{ secrets.MAXIM_FOLDER_1_ID }}
          MAXIM_FOLDER_2_ID: ${{ secrets.MAXIM_FOLDER_2_ID }}
          PROMPT_VERSION_ID: ${{ secrets.PROMPT_VERSION_ID }}
          FOLDER_ID: ${{ secrets.FOLDER_ID }}
          MAXIM_PROMPT_3_VERSION_2_ID: ${{ secrets.MAXIM_PROMPT_3_VERSION_2_ID }}
          MAXIM_TEST_RUN_PROMPT_VERSION_ID: ${{ secrets.MAXIM_TEST_RUN_PROMPT_VERSION_ID }}
          MAXIM_TEST_RUN_ASSISTANT_PROMPT_VERSION_ID: ${{ secrets.MAXIM_TEST_RUN_ASSISTANT_PROMPT_VERSION_ID }}
          MAXIM_TEST_RUN_ASSISTANT_PROMPT_CHAIN_VERSION_ID: ${{ secrets.MAXIM_TEST_RUN_ASSISTANT_PROMPT_CHAIN_VERSION_ID }}
        run: |
          uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml
🛠️ Refactor suggestion

Create junit directory before pytest writes XML.

Prevents artifact step failures.

-        run: |
-          uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml
+        run: |
+          mkdir -p junit
+          uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml
🤖 Prompt for AI Agents
In .github/workflows/tests.yml around lines 78 to 79, pytest writes its JUnit
XML to junit/main-test-results.xml but the junit directory may not exist; modify
the workflow to create the directory before running pytest (e.g., add a step or
prepend the run script with mkdir -p junit) so pytest can write the XML and the
artifact step won't fail.

      - name: Upload main test results
        uses: actions/upload-artifact@v4
        if: success() || failure()    # run this step even if previous step failed
        with:
          name: main-test-results
          path: junit/main-test-results.xml

      - name: Publish main test results
        uses: EnricoMi/publish-unit-test-result-action@v2
        if: success() || failure()    # run this step even if previous step failed
        with:
          junit_files: "junit/main-test-results.xml"
          check_name: "Main Test Results"
Comment on lines +98 to +110
🧹 Nitpick (assertive)

Use always() for publishing steps (and consider pinning action SHAs).

Ensures artifacts/checks publish on failure/cancel.

-        if: success() || failure()    # run this step even if previous step failed
+        if: always()

Apply to both upload-artifact and publish-unit-test-result steps in both jobs. Also consider pinning actions to commit SHAs for supply-chain safety.

Also applies to: 124-136

🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 81 to 93 (and likewise update lines
124-136): the upload-artifact and publish-unit-test-result steps use "if:
success() || failure()" which should be replaced with "if: always()" so these
artifact and test-publishing steps run on cancelled/neutral runs as well; update
both occurrences to use if: always() and, where possible, pin the actions to
specific commit SHAs (e.g., actions/upload-artifact@<sha> and
EnricoMi/publish-unit-test-result-action@<sha>) to improve supply-chain safety.


Comment on lines +98 to +111
🧹 Nitpick (assertive)

Use always() for artifact publishing and pin actions to SHAs; grant checks: write.

Ensures publishing even on cancelled and satisfies permission needs.

-      - name: Upload main test results
-        uses: actions/upload-artifact@v4
-        if: success() || failure()    # run this step even if previous step failed
+      - name: Upload main test results
+        uses: actions/upload-artifact@<sha-for-v4>
+        if: always()
@@
-      - name: Publish main test results
-        uses: EnricoMi/publish-unit-test-result-action@v2
-        if: success() || failure()    # run this step even if previous step failed
+      - name: Publish main test results
+        uses: EnricoMi/publish-unit-test-result-action@<sha-for-v2>
+        if: always()
         with:
           junit_files: "junit/main-test-results.xml"
           check_name: "Main Test Results"

Add job-level permissions to allow creating check runs:

   test-main:
     name: Test Main SDK (Python 3.9)
     runs-on: ubuntu-latest
+    permissions:
+      contents: read
+      checks: write
+      pull-requests: write

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 76 to 89: replace the conditional "if:
success() || failure()" with "if: always()" for both the Upload main test
results and Publish main test results steps, pin both actions to immutable
commit SHAs instead of tags (replace actions/upload-artifact@v4 and
EnricoMi/publish-unit-test-result-action@v2 with their respective full commit
SHAs), and add job-level permissions with "checks: write" (and other minimal
required permissions) so the publish action can create check runs.

      - name: Restore pyproject.toml
        run: |
          mv pyproject.toml.bak pyproject.toml

  additional-tests:
    name: Test Additional Integrations (Python 3.10)
    runs-on: ubuntu-latest
    if: github.event.pull_request.draft == false

Comment on lines +117 to +120
⚠️ Potential issue

Apply the same draft/fork gating to additional-tests.

Mirror the main job condition.

-    if: github.event.pull_request.draft == false
+    if: ${{ github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false) }}
🤖 Prompt for AI Agents
In .github/workflows/tests.yml around lines 100 to 103, the "Test Additional
Integrations (Python 3.10)" job is missing the same pull request gating used by
the main job; update the job to include the identical if condition from the main
job (i.e., mirror the draft/fork check) so the job only runs for non-draft,
non-fork PRs by adding the same if: expression to this job definition.

    steps:
      - uses: actions/checkout@v4

      - name: Install uv
        uses: astral-sh/setup-uv@v4
        with:
          version: "latest"

      - name: Backup .python-version
        run: |
          cp .python-version .python-version.bak

      - name: Update .python-version for CrewAI
        run: |
          echo "3.10.0" > .python-version
Comment on lines +129 to +136
🛠️ Refactor suggestion

Avoid editing .python-version in CI.

These steps are brittle; remove them and install the desired Python directly.

-      - name: Backup .python-version
-        run: |
-          cp .python-version .python-version.bak
-
-      - name: Update .python-version for CrewAI
-        run: |
-          echo "3.10.0" > .python-version
...
-      - name: Restore .python-version
-        if: always()
-        run: |
-          mv .python-version.bak .python-version
+      # No .python-version mutation required

Also applies to: 148-151

🤖 Prompt for AI Agents
.github/workflows/tests.yml lines 115-122 (and also lines 148-151): remove the
steps that copy and overwrite .python-version in the CI; instead configure the
runner's Python using the official actions/setup-python (or
actions/setup-python@v4) with the required version (e.g., 3.10.0) so CI doesn't
mutate repository files; delete the backup/echo steps and replace them with a
single setup-python step specifying the desired python-version and cache options
as needed.
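
A minimal sketch of the setup-python alternative the comment describes, assuming the job can rely on the runner's Python rather than uv-managed interpreters:

      - name: Set up Python 3.10
        uses: actions/setup-python@v5
        with:
          python-version: "3.10"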

      - name: Set up Python 3.10
        run: uv python install 3.10
Comment on lines +129 to +138
🛠️ Refactor suggestion

Remove .python-version hacks; install pinned 3.10 directly.
Simpler and less brittle.

-      - name: Backup .python-version
-        run: |
-          cp .python-version .python-version.bak
-
-      - name: Update .python-version for CrewAI
-        run: |
-          echo "3.10.0" > .python-version
-
       - name: Set up Python 3.10
-        run: uv python install 3.10
+        run: uv python install 3.10.15

Also remove the final “Restore .python-version” step (see below).

🤖 Prompt for AI Agents
In .github/workflows/tests.yml around lines 115 to 124, remove the three-step
".python-version" backup/update hack and instead install Python 3.10 directly
with the standard setup action; delete the prior "Backup .python-version" and
"Update .python-version for CrewAI" steps and replace the "Set up Python 3.10"
step with a single action that installs/pins Python 3.10 (use the official
setup-python action and set the version to 3.10), and also delete the later
"Restore .python-version" step mentioned in the comment.


      - name: Install dependencies (CrewAI only)
        run: |
          uv sync --python 3.10
Comment on lines +137 to +142
🛠️ Refactor suggestion

Pin Python 3.10 patch; install dev-only with frozen; rename step.

Increases determinism and clarity.

-      - name: Set up Python 3.10
-        run: uv python install 3.10
+      - name: Set up Python 3.10
+        run: uv python install 3.10.15
@@
-      - name: Install dependencies (CrewAI only)
-        run: |
-          uv sync --python 3.10
+      - name: Install dependencies (dev only)
+        run: |
+          uv sync --group dev --python 3.10.15 --frozen
🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 107 to 112: the workflow currently
installs "python 3.10" and runs "uv sync --python 3.10" without pinning a patch,
and the step name is vague; update the install to pin a specific 3.10 patch
(e.g., 3.10.12) by changing the install invocation to that exact version, change
the sync command to install dev-only dependencies against the frozen lockfile
(use the uv flags --only-dev, or --group dev, plus --frozen along with
--python <pinned-version>), and rename the step to something explicit like
"Install dev dependencies (CrewAI only, frozen)" to improve clarity and determinism.

Comment on lines +137 to +143
⚠️ Potential issue

Ensure additional_dev deps are installed; pin Python and freeze.

CrewAI lives in additional_dev; install both dev and additional_dev.

-      - name: Set up Python 3.10
-        run: uv python install 3.10
+      - name: Set up Python 3.10
+        run: uv python install 3.10.15

-      - name: Install dependencies (CrewAI only)
-        run: |
-          uv sync --python 3.10
+      - name: Install dependencies (dev + additional_dev)
+        run: |
+          uv sync --group dev --group additional_dev --python 3.10.15 --frozen
🤖 Prompt for AI Agents
In .github/workflows/tests.yml around lines 123 to 129, the workflow currently
mis-invokes Python setup and only installs CrewAI’s dev deps; update the step to
pin/setup Python 3.10 correctly, replace the incorrect "uv python install 3.10"
with a proper pinned Python setup, then ensure dependency installation runs uv
(or the project’s installer) to install both dev and additional_dev groups for
CrewAI, and finally freeze the installed packages to a requirements
file/artifact so the exact resolved deps are recorded.

Comment on lines +140 to +143
🛠️ Refactor suggestion

Install required groups for integrations with frozen resolution.

-      - name: Install dependencies (CrewAI only)
-        run: |
-          uv sync --python 3.10
+      - name: Install dependencies (dev + additional_dev)
+        run: |
+          uv sync --group dev --group additional_dev --python 3.10.15 --frozen
🤖 Prompt for AI Agents
In .github/workflows/tests.yml around lines 126 to 129, the workflow currently
runs a plain "uv sync --python 3.10" which installs neither the integration
groups nor an enforced lockfile resolution; update the step to invoke uv sync
with the flags that (1) install the required integration groups (e.g. --group
additional_dev) and (2) enforce the lockfile with --frozen, so the integration
dependencies are installed exactly as pinned.

      - name: Run additional integration tests
        run: |
          uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
Comment on lines +144 to +146
🧹 Nitpick (assertive)

Create junit dir and pass required env for integration tests.

Prevents junit write failure; ensure secrets available if needed.

-      - name: Run additional integration tests
-        run: |
-          uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
+      - name: Run additional integration tests
+        env:
+          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
+          MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }}
+          MAXIM_BASE_URL: ${{ secrets.MAXIM_BASE_URL }}
+          TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
+        run: |
+          mkdir -p junit
+          uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 114 to 116: the workflow step that runs
the additional integration tests can fail because the junit output directory may
not exist and required environment variables/secrets for the tests may be
missing; update the step to create the junit directory before running tests
(e.g., mkdir -p junit) and ensure the needed secrets/env vars are passed into
the job or step (via workflow env: or with: env variables or by referencing
secrets) so pytest can write junit/additional-test-results.xml and the
integration tests have required credentials.

      - name: Upload additional test results
        uses: actions/upload-artifact@v4
        if: success() || failure()    # run this step even if previous step failed
        with:
          name: additional-test-results
          path: junit/additional-test-results.xml

      - name: Publish additional test results
        uses: EnricoMi/publish-unit-test-result-action@v2
        if: success() || failure()    # run this step even if previous step failed
        with:
          junit_files: "junit/additional-test-results.xml"
          check_name: "Additional Test Results"

      - name: Restore .python-version
        if: always()    # run this step even if previous steps failed
        run: |
          mv .python-version.bak .python-version
Comment on lines +162 to +165
🧹 Nitpick (assertive)

Drop .python-version restore (no longer modified).

-      - name: Restore .python-version
-        if: always()    # run this step even if previous steps failed
-        run: |
-          mv .python-version.bak .python-version
+      # No restore needed
🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 148 to 151: the workflow contains a
step that restores .python-version using mv .python-version.bak .python-version
but the file is no longer modified earlier in the job; remove this entire step
(including name, if condition and run block) to avoid failing or no-op
operations and update any subsequent step numbering or references if applicable.

10 changes: 8 additions & 2 deletions maxim/filter_objects.py
@@ -75,10 +75,16 @@ def parse_incoming_query(incoming_query: str) -> List[RuleType]:
                 value = bool(value)
                 parsed = True
                 if not parsed:
-                    if type(value) == bool:
+                    try:
+                        value = int(value)
+                        parsed = True
+                    except ValueError:
+                        pass
+                if not parsed:
+                    if isinstance(value, bool):
                         value = bool(value)
                         parsed = True
-                    elif type(value) is int:
+                    elif isinstance(value, int):
🧹 Nitpick (assertive)

Fix inconsistent type checking pattern.

Line 87 uses isinstance(value, int) while the rest of the codebase uses type(value) is int (e.g., line 303). This inconsistency could lead to different behavior since isinstance returns True for boolean values (bool is a subclass of int), while type() is checks for exact type matches.

For consistency with the existing codebase pattern, apply this diff:

-                    elif isinstance(value, int):
+                    elif type(value) is int:
🤖 Prompt for AI Agents
In maxim/filter_objects.py around line 87, replace the isinstance(value, int)
check with an exact type comparison (type(value) is int) to match the project
pattern and avoid treating bools as ints; update the conditional so it uses
type(value) is int (ensuring boolean values are not considered integers) and
keep the rest of the branch logic unchanged.

                         value = int(value)
                         parsed = True
Comment on lines +84 to 89
🛠️ Refactor suggestion

Remove redundant type checks after parsing.

Lines 84-89 contain redundant type checks that will never execute because value starts as a string and only gets converted through explicit parsing steps above. The boolean check will always be false since value was never converted to a boolean in the reachable code path, and the integer check is redundant after the integer parsing attempt above.

Apply this diff to remove the redundant checks:

-                if not parsed:
-                    if isinstance(value, bool):
-                        value = bool(value)
-                        parsed = True
-                    elif isinstance(value, int):
-                        value = int(value)
-                        parsed = True
🤖 Prompt for AI Agents
In maxim/filter_objects.py around lines 84 to 89, remove the redundant
isinstance checks for bool and int that follow the explicit parsing steps above;
they are unreachable because value starts as a string and parsing blocks already
handle int/bool conversion. Delete those two conditional branches and ensure
subsequent logic relies on the parsed flag/state set by the actual parsing
attempts instead of re-checking value's type.

                 if not parsed:
Comment on lines +78 to 90
🛠️ Refactor suggestion

Be conservative when casting to int and remove unreachable isinstance branches

  • Guard integer casting with a strict regex (note this requires `import re` at the top of the module) so we don’t accidentally coerce arbitrary strings.
  • The subsequent isinstance checks are unreachable for this code path and add no value.
-                if not parsed:
-                    try:
-                        value = int(value)
-                        parsed = True
-                    except ValueError:
-                        pass
-                if not parsed:
-                    if isinstance(value, bool):
-                        value = bool(value)
-                        parsed = True
-                    elif isinstance(value, int):
-                        value = int(value)
-                        parsed = True
+                if not parsed and isinstance(value, str) and re.fullmatch(r"[+-]?\d+", value.strip()):
+                    try:
+                        value = int(value.strip())
+                        parsed = True
+                    except ValueError:
+                        pass

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In maxim/filter_objects.py around lines 78-90, the code tries int(value)
unguarded then has unreachable isinstance branches; change the flow to first
detect booleans (if value is bool, keep as-is), then only attempt integer
coercion for string-like inputs that strictly match a digits regex (e.g.
r'^[+-]?\d+$') before calling int(value), and remove the redundant
isinstance(value, int) branch (and any other unreachable isinstance checks) so
only valid numeric strings are coerced to int.

2 changes: 2 additions & 0 deletions maxim/logger/portkey/portkey.py
@@ -134,6 +134,7 @@ def create(self, *args, **kwargs):
                     .get("message", {})
                     .get("content", "")
                 )
+                if trace is not None:
                     trace.end()
             except Exception as e:
                 scribe().warning(
@@ -201,6 +202,7 @@ async def create(self, *args, **kwargs):
                     .get("message", {})
                     .get("content", "")
                 )
+                if trace is not None:
                     trace.end()
             except Exception as e:
                 scribe().warning(
32 changes: 14 additions & 18 deletions maxim/tests/test_add_dataset_entries_comprehensive.py
@@ -7,7 +7,11 @@
 from maxim import Maxim, Config
 from maxim.models.dataset import DatasetEntry, Variable, DatasetEntryWithRowNo, FileVariablePayload, VariableFileAttachment
 from maxim.logger.components.attachment import FileAttachment, FileDataAttachment, UrlAttachment
+from dotenv import load_dotenv
+load_dotenv()
 
+MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL")
+MAXIM_API_KEY = os.getenv("MAXIM_API_KEY")
Comment on lines +10 to +14
🧹 Nitpick (assertive)

🛠️ Refactor suggestion

Avoid module-scope dotenv loading and env caching; this breaks test hermeticity.

  • load_dotenv() at import time and caching MAXIM_BASE_URL/MAXIM_API_KEY as module constants make tests order-dependent and hard to override per-test. Combined with tearDown deleting env vars (see below), this can corrupt the global process env for other tests.
  • Prefer resolving .env deterministically and providing safe test defaults, or read env inside setUp.

Apply this diff to make resolution explicit and add safe test defaults (while still honoring env when set):

-from dotenv import load_dotenv
-load_dotenv()
-
-MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL")
-MAXIM_API_KEY = os.getenv("MAXIM_API_KEY")
+from dotenv import load_dotenv, find_dotenv
+# Resolve nearest .env without overriding already-set env
+load_dotenv(find_dotenv(usecwd=True), override=False)
+# Note: do not cache real secrets at import-time; provide safe test defaults
+MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL", "http://localhost:8000")
+MAXIM_API_KEY = os.getenv("MAXIM_API_KEY", "test-api-key")

If you don’t want defaults, drop them here and inject via setUp using patch.dict. I can provide a version that fully avoids module-level reads.

🤖 Prompt for AI Agents
In maxim/tests/test_add_dataset_entries_comprehensive.py around lines 10-14,
remove the module-level load_dotenv() and module-scoped
MAXIM_BASE_URL/MAXIM_API_KEY reads; instead, resolve .env and read env vars
inside the test's setUp (or individual tests) so each test can control/override
values deterministically. Either call load_dotenv(path) in setUp and then use
os.getenv("MAXIM_BASE_URL", "http://localhost:...")/os.getenv("MAXIM_API_KEY",
"test-key") to provide safe defaults, or avoid defaults and inject values via
unittest.mock.patch.dict(os.environ) per-test; do not mutate global env at
import time or rely on tearDown to undo module-level caching.


 class TestAddDatasetEntriesComprehensive(unittest.TestCase):
     """Comprehensive test suite for the updated add_dataset_entries method."""
@@ -17,13 +21,9 @@ def setUp(self) -> None:
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
 
-        # Set up test environment variables
-        os.environ["MAXIM_API_KEY"] = "test-api-key"
-        os.environ["MAXIM_BASE_URL"] = "https://app.getmaxim.ai"
-
         config = Config(
-            api_key="test-api-key",
-            base_url="https://app.getmaxim.ai",
+            api_key=MAXIM_API_KEY,
+            base_url=MAXIM_BASE_URL,
             debug=True,
             raise_exceptions=True
         )
Comment on lines 24 to 29
🛠️ Refactor suggestion

Do not pass None into Config; load env in setUp with safe defaults

Maxim requires an API key; without defaults, CI will fail. Build kwargs conditionally or provide test-safe defaults.

Apply:

-        config = Config(
-            api_key=MAXIM_API_KEY,
-            base_url=MAXIM_BASE_URL,
-            debug=True,
-            raise_exceptions=True
-        )
+        # Resolve env at setup time
+        load_dotenv(find_dotenv(usecwd=True), override=False)
+        api_key = os.getenv("MAXIM_API_KEY", "test-api-key")
+        base_url = os.getenv("MAXIM_BASE_URL", "http://localhost:8000")
+        cfg = {"debug": True, "raise_exceptions": True}
+        # Only add keys if present to avoid None-paths; we provide safe defaults above.
+        if api_key:
+            cfg["api_key"] = api_key
+        if base_url:
+            cfg["base_url"] = base_url
+        config = Config(**cfg)

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In maxim/tests/test_add_dataset_entries_comprehensive.py around lines 24-29,
avoid passing None into Config by loading env vars in setUp with safe defaults:
import os, read api_key = os.getenv('MAXIM_API_KEY', 'test-api-key') and
base_url = os.getenv('MAXIM_BASE_URL', 'http://localhost'), then construct
Config using those values (and other flags) so kwargs are never None;
alternatively build the kwargs dict conditionally (only include a key if env
value exists) or use the safe defaults to ensure CI won't fail when env vars are
absent.

@@ -34,10 +34,6 @@ def tearDown(self) -> None:
         """Clean up after tests."""
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
-        if "MAXIM_API_KEY" in os.environ:
-            del os.environ["MAXIM_API_KEY"]
-        if "MAXIM_BASE_URL" in os.environ:
-            del os.environ["MAXIM_BASE_URL"]
 
     def _create_mock_response(self, content: bytes, headers: dict = None) -> Mock:
         """Create a mock HTTP response."""
@@ -175,7 +171,7 @@ def test_add_dataset_entries_with_file_attachments(self) -> None:
                 name="test.txt",
                 mime_type="text/plain"
             )
 
             entry = DatasetEntry(entry={
                 "input": Variable(type="text", payload="Input with file"),
                 "files": Variable(type="file", payload=[file_attachment])
@@ -186,7 +182,7 @@ def test_add_dataset_entries_with_file_attachments(self) -> None:
 
             # Verify file upload process was triggered
             self.assertEqual(mock_client.request.call_count, 4)  # total_rows + entries + upload_url + patch
 
         finally:
             # Clean up temp file
             os.unlink(temp_file_path)
@@ -201,7 +197,7 @@ def test_add_dataset_entries_with_url_attachment(self) -> None:
             b'{"data": {"url": "https://signed-url.com", "key": "datasets/test-dataset-id/entry123/test-file-key"}}'
         )
         mock_patch_response = self._create_mock_response(b'{"success": true}')
 
         self._setup_mock_network_calls([
             mock_total_rows_response,
             mock_add_entries_response,
@@ -215,7 +211,7 @@ def test_add_dataset_entries_with_url_attachment(self) -> None:
             name="image.jpg",
             mime_type="image/jpeg"
         )
 
         entry = DatasetEntry(entry={
             "input": Variable(type="text", payload="Input with URL"),
             "images": Variable(type="file", payload=[url_attachment])
@@ -240,7 +236,7 @@ def test_add_dataset_entries_with_file_data_attachment(self) -> None:
             b'{"data": {"url": "https://signed-url.com", "key": "datasets/test-dataset-id/entry123/test-file-key"}}'
         )
         mock_patch_response = self._create_mock_response(b'{"success": true}')
 
         mock_client = self._setup_mock_network_calls([
             mock_total_rows_response,
             mock_add_entries_response,
@@ -254,7 +250,7 @@ def test_add_dataset_entries_with_file_data_attachment(self) -> None:
             name="data.bin",
             mime_type="application/octet-stream"
         )
 
         entry = DatasetEntry(entry={
             "input": Variable(type="text", payload="Input with file data"),
             "data": Variable(type="file", payload=[file_data_attachment])
@@ -272,7 +268,7 @@ def test_add_dataset_entries_mixed_input_types(self) -> None:
         mock_add_entries_response = self._create_mock_response(
             b'{"data": {"ids": ["entry1", "entry2"], "cells": []}}'
         )
 
         mock_client = self._setup_mock_network_calls([
             mock_total_rows_response,
             mock_add_entries_response
@@ -283,7 +279,7 @@ def test_add_dataset_entries_mixed_input_types(self) -> None:
             "input": Variable(type="text", payload="Object input"),
             "output": Variable(type="json", payload={"result": "object"}),
         })
 
         dict_entry = {
             "input": "Dict input",
             "output": {"result": "dict"}
16 changes: 8 additions & 8 deletions maxim/tests/test_add_dataset_entries_integration.py
@@ -118,7 +118,7 @@ def test_add_dataset_entries_with_dictionary_input(self) -> None:
         entries = [
             {
                 "Input": "How do I calculate the area of a circle?",
-                "Expected Output": {
+                "Test": {
                     "formula": "A = π × r²",
                     "explanation": "The area of a circle is π times the radius squared",
                     "example": "For radius 5: A = π × 25 = 78.54 square units",
@@ -129,7 +129,7 @@ def test_add_dataset_entries_with_dictionary_input(self) -> None:
             },
             {
                 "Input": "What is 15 + 27?",
-                "Expected Output": {
+                "Test": {
                     "answer": 42,
                     "method": "simple addition",
                     "step_by_step": ["15 + 27", "= 42"],
@@ -195,7 +195,7 @@ def test_add_dataset_entries_with_dataset_entry_objects(self) -> None:
         # Create DatasetEntry objects with correct column structure
         entry1 = DatasetEntry(entry={
             "Input": Variable(type="text", payload="What is the capital of France?"),
-            "Expected Output": Variable(type="json", payload={
+            "Test": Variable(type="json", payload={
                 "answer": "Paris",
                 "confidence": 0.95,
                 "reasoning": "Paris is the capital and largest city of France",
@@ -206,7 +206,7 @@ def test_add_dataset_entries_with_dataset_entry_objects(self) -> None:
 
         entry2 = DatasetEntry(entry={
             "Input": Variable(type="text", payload="Explain the process of photosynthesis"),
-            "Expected Output": Variable(type="json", payload={
+            "Test": Variable(type="json", payload={
                 "answer": "Photosynthesis is the process by which plants convert sunlight into energy",
                 "confidence": 0.98,
                 "key_components": ["chlorophyll", "sunlight", "carbon dioxide", "water"],
@@ -265,7 +265,7 @@ def test_add_dataset_entries_with_file_attachments(self) -> None:
 
         entry_with_file = DatasetEntry(entry={
             "Input": Variable(type="text", payload="Integration test with file attachment"),
-            "Expected Output": Variable(type="json", payload={
+            "Test": Variable(type="json", payload={
                 "result": "Successfully processed file attachment",
                 "file_type": "text",
                 "confidence": 0.95
@@ -283,7 +283,7 @@ def test_add_dataset_entries_with_file_attachments(self) -> None:
 
         entry_with_data = DatasetEntry(entry={
             "Input": Variable(type="text", payload="Integration test with binary data"),
-            "Expected Output": Variable(type="json", payload={
+            "Test": Variable(type="json", payload={
                 "result": "Successfully processed binary data",
                 "file_size_bytes": len(binary_data),
                 "confidence": 0.97
@@ -349,7 +349,7 @@ def test_mixed_entry_types_integration(self) -> None:
         # Create mixed entry types with correct columns
         dict_entry = {
             "Input": "What are the benefits of renewable energy?",
-            "Expected Output": {
+            "Test": {
                 "benefits": ["environmental protection", "sustainability", "cost savings"],
                 "types": ["solar", "wind", "hydro", "geothermal"],
                 "impact": "positive long-term effects on climate",
@@ -360,7 +360,7 @@ def test_mixed_entry_types_integration(self) -> None:
 
         dataset_entry = DatasetEntry(entry={
             "Input": Variable(type="text", payload="Explain machine learning in simple terms"),
-            "Expected Output": Variable(type="json", payload={
+            "Test": Variable(type="json", payload={
                 "definition": "Machine learning is AI that learns from data to make predictions",
                 "key_concepts": ["algorithms", "training", "predictions", "patterns"],
                 "applications": ["image recognition", "natural language processing", "recommendation systems"],
7 changes: 4 additions & 3 deletions maxim/tests/test_agno.py
@@ -1,11 +1,12 @@
 import unittest
 import os
 from uuid import uuid4
+from dotenv import load_dotenv
🧹 Nitpick (assertive)

Deterministic .env resolution without clobbering pre-set env.

Use find_dotenv(usecwd=True) so the .env is resolved deterministically, and pass override=False explicitly (the python-dotenv default) so pre-set CI env always wins.

-from dotenv import load_dotenv
-load_dotenv()
+from dotenv import load_dotenv, find_dotenv
+load_dotenv(find_dotenv(usecwd=True), override=False)

Also applies to: 9-9

🤖 Prompt for AI Agents
In maxim/tests/test_agno.py around lines 3 and 9, the test currently uses
load_dotenv which may clobber pre-set environment variables in CI; update the
dotenv resolution to use find_dotenv(usecwd=True) and call load_dotenv with
override=False so the discovered .env is loaded relative to the current working
directory and does not overwrite any existing environment variables.


 from maxim.logger import Logger
 from maxim.logger.agno import instrument_agno
 from maxim.tests.mock_writer import inject_mock_writer
 
+load_dotenv()
🧹 Nitpick (assertive)

Avoid module-scope dotenv loading; resolve .env deterministically

Module-level load_dotenv resolves .env relative to the import-time working directory, which is nondeterministic in CI. Use find_dotenv, and pass override=False explicitly so pre-set env is never clobbered.

Apply:

-from dotenv import load_dotenv
+from dotenv import load_dotenv, find_dotenv
@@
-load_dotenv()
+load_dotenv(find_dotenv(usecwd=True), override=False)
🤖 Prompt for AI Agents
In maxim/tests/test_agno.py around line 9, a module-level call to load_dotenv()
can clobber CI environment; replace this with a deterministic, non-overriding
load: use dotenv.find_dotenv() to locate a .env path and call load_dotenv(path,
override=False) only when a path is found, and move the call out of module scope
into a test setup fixture or inside tests so it doesn't run unconditionally at
import time.


 class TestAgnoIntegration(unittest.TestCase):
     def test_agno_generation_logging(self):
@@ -19,8 +20,8 @@ def test_agno_generation_logging(self):
         # Setup Maxim logger with mock writer
         logger = Logger(
             {"id": os.getenv("MAXIM_LOG_REPO_ID", "test-repo")},
-            api_key="test-api-key",
-            base_url="https://app.getmaxim.ai",
+            api_key=os.getenv("MAXIM_API_KEY"),
+            base_url=os.getenv("MAXIM_BASE_URL"),
         )
Comment on lines 21 to 25
🧹 Nitpick (assertive)

Avoid passing None for api_key/base_url to Logger.

Build kwargs conditionally to prevent None from altering code paths.

-        logger = Logger(
-            {"id": os.getenv("MAXIM_LOG_REPO_ID", "test-repo")},
-            api_key=os.getenv("MAXIM_API_KEY"),
-            base_url=os.getenv("MAXIM_BASE_URL"),
-        )
+        cfg = {"id": os.getenv("MAXIM_LOG_REPO_ID", "test-repo")}
+        kwargs = {}
+        ak = os.getenv("MAXIM_API_KEY")
+        bu = os.getenv("MAXIM_BASE_URL")
+        if ak: kwargs["api_key"] = ak
+        if bu: kwargs["base_url"] = bu
+        logger = Logger(cfg, **kwargs)
🤖 Prompt for AI Agents
In maxim/tests/test_agno.py around lines 21 to 25, the test currently passes
os.getenv(...) results directly into Logger which can pass None for
api_key/base_url; instead build the kwargs dict conditionally: read the env vars
into variables, add api_key and base_url to the kwargs only if they are not
None/empty, then call Logger with the repo dict and the constructed kwargs so
None values are not passed through and code paths remain unchanged.

Comment on lines 21 to 25
🛠️ Refactor suggestion

Provide safe defaults for logger construction to keep tests hermetic

Passing None for api_key/base_url can fail constructor. Tests use a mock writer, so test-safe defaults are fine.

Apply:

-        logger = Logger(
-            {"id": os.getenv("MAXIM_LOG_REPO_ID", "test-repo")},
-            api_key=os.getenv("MAXIM_API_KEY"),
-            base_url=os.getenv("MAXIM_BASE_URL"),
-        )
+        logger = Logger(
+            {"id": os.getenv("MAXIM_LOG_REPO_ID", "test-repo")},
+            api_key=os.getenv("MAXIM_API_KEY") or "test-api-key",
+            base_url=os.getenv("MAXIM_BASE_URL") or "https://app.getmaxim.ai",
+        )
🤖 Prompt for AI Agents
In maxim/tests/test_agno.py around lines 21 to 25, the Logger is constructed
with api_key and base_url that may be None if environment variables are absent,
which can cause the constructor to fail; change the calls to use safe test
defaults (e.g., api_key=os.getenv("MAXIM_API_KEY", "test-api-key") and
base_url=os.getenv("MAXIM_BASE_URL", "http://localhost")) so the test remains
hermetic while still using the mock writer.

         mock_writer = inject_mock_writer(logger)
         instrument_agno(logger)