Adds GitHub action to trigger tests upon raising PR #78
base: beta
Conversation
📝 Walkthrough (Summary by CodeRabbit)
Adds a CI workflow; centralizes tests into two jobs; migrates many tests to dotenv-driven environment config and instantiates Maxim with explicit base_url; updates LangChain/Azure/OpenAI test patterns; splits dev extras in pyproject.toml; improves numeric parsing in filters; guards Portkey trace.end() against None; updates Portkey/Maxim constructor usage in tests.
Sequence Diagram(s)
```mermaid
sequenceDiagram
    autonumber
    participant Dev as Developer
    participant GH as GitHub
    participant WF as CI Workflow
    note over GH,WF: Trigger: push / pull_request to main or beta
    Dev->>GH: push / open PR
    GH-->>WF: start tests.yml
    rect rgba(235,245,255,0.6)
        WF->>WF: checkout repo
        WF->>WF: setup uv + Python (3.9 / 3.10)
        WF->>WF: backup/modify pyproject.toml
        WF->>WF: install deps
        WF->>WF: run pytest (scoped lists per job)
        WF->>WF: write JUnit XML, upload artifacts, publish checks
    end
    WF-->>GH: job statuses & artifacts
    GH-->>Dev: CI result notification
```
```mermaid
sequenceDiagram
    autonumber
    participant Test as Test code
    participant Env as Environment (.env / OS)
    participant Maxim as Maxim singleton
    Note over Test,Env: Tests load dotenv and read env vars (e.g., MAXIM_BASE_URL)
    Test->>Env: read MAXIM_BASE_URL and other vars
    alt Maxim singleton exists
        Test->>Maxim: delete Maxim._instance
    end
    Test->>Maxim: instantiate Maxim({"base_url": baseUrl})
    Test->>Maxim: call .logger() and run tests
    Note right of Maxim: Logger and integrations configured with provided base_url
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Actionable comments posted: 22
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
maxim/tests/test_fireworks.py (1)
21-30: Don’t hard-fail in setUp; skip tests when creds are absent and pass config explicitly. Raising ValueError in setUp aborts the test class instead of marking tests as skipped. Also, Maxim(...) requires MAXIM_API_KEY and logger() requires MAXIM_LOG_REPO_ID; if either is missing, setUp will fail before tests can self-skip.
Apply:
```diff
-        if not fireworksApiKey:
-            raise ValueError("FIREWORKS_API_KEY environment variable is not set")
-
-        self.logger = Maxim({"base_url": baseUrl}).logger()
+        if not fireworksApiKey or not apiKey or not repoId:
+            self.skipTest("Skipping Fireworks tests: missing FIREWORKS_API_KEY / MAXIM_API_KEY / MAXIM_LOG_REPO_ID")
+        self.logger = Maxim({"base_url": baseUrl, "api_key": apiKey}).logger({"id": repoId})
```
This maintains clean CI behavior and removes hidden env coupling.
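For reference, a minimal self-contained sketch of why skipTest is the right tool here (standard unittest behavior, not project code):
```python
import unittest

class Demo(unittest.TestCase):
    def setUp(self):
        # skipTest marks every test in the class as "skipped" and CI stays green;
        # raising ValueError here would instead record each test as an error.
        self.skipTest("FIREWORKS_API_KEY not set")

    def test_never_runs(self):
        self.fail("unreachable: setUp skipped this test")

if __name__ == "__main__":
    unittest.main()
```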
maxim/tests/test_prompts.py (1)
25-31: Undefined variable 'data' used later; plus missing env gating will break CI
- Multiple assertions reference data[env][...], but data is never defined/loaded in this module, leading to NameError at runtime (e.g., lines 142, 161, 191).
- These tests call live services and rely on PROMPT_ID, PROMPT_VERSION_ID, FOLDER_ID, MAXIM_API_KEY being present. On PR workflows (especially from forks), secrets are absent — tests will hard-fail.
Suggested fixes:
- Load testConfig.json (or remove those data-based assertions), and gate on required env vars:
```diff
 class TestMaximPromptManagement(unittest.TestCase):
     def setUp(self):
         # Clear singleton instance if it exists
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
-        self.maxim = Maxim(
+        # Gate on required integration env before hitting network
+        missing = [k for k, v in {
+            "MAXIM_API_KEY": apiKey,
+            "PROMPT_ID": promptId,
+            "PROMPT_VERSION_ID": promptVersionId,
+            "FOLDER_ID": folderID,
+        }.items() if not v]
+        if missing:
+            self.skipTest(f"Missing required env vars: {', '.join(missing)}")
+
+        self.maxim = Maxim(
             {
                 "api_key": apiKey,
                 "debug": True,
                 "prompt_management": True,
                 "base_url": baseUrl
             }
         )
```
Outside the selected range, initialize data near the top:
```python
# Place near other globals in this file
config_path = os.path.join(os.path.dirname(__file__), "testConfig.json")
try:
    with open(config_path, "r", encoding="utf-8") as fh:
        data = json.load(fh)
except FileNotFoundError:
    data = {}  # or self.skipTest(...) in setUp if strict
```
If you want, I can push a follow-up patch that either loads data or rewrites assertions to not depend on external JSON.
maxim/tests/test_litellm.py (1)
221-227: Same as sync setUp: make async setup explicit and resilient; add return type annotation. Explicitly passing the API key and log repo id, and skipping when absent, avoids test fragility. Also add a `-> None` annotation to satisfy type-checkers.
```diff
-        self.maxim = Maxim({"base_url": baseUrl})
-        self.logger = self.maxim.logger()
+        if not (apiKey and repoId):
+            self.skipTest("MAXIM_API_KEY or MAXIM_LOG_REPO_ID is not set; skipping integration test")
+        self.maxim = Maxim({"base_url": baseUrl, "api_key": apiKey})
+        self.logger = self.maxim.logger({"id": repoId})
```
Outside the selected range:
```diff
-    async def asyncSetUp(self):
+    async def asyncSetUp(self) -> None:
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
uv.lock is excluded by !**/*.lock
📒 Files selected for processing (14)
- .github/workflows/tests.yml (1 hunks)
- maxim/tests/test_anthropic.py (1 hunks)
- maxim/tests/test_decorators.py (2 hunks)
- maxim/tests/test_fireworks.py (2 hunks)
- maxim/tests/test_gemini.py (2 hunks)
- maxim/tests/test_groq.py (2 hunks)
- maxim/tests/test_langgraph.py (1 hunks)
- maxim/tests/test_litellm.py (2 hunks)
- maxim/tests/test_logger.py (3 hunks)
- maxim/tests/test_maxim_core_simple.py (4 hunks)
- maxim/tests/test_openai.py (1 hunks)
- maxim/tests/test_portkey.py (2 hunks)
- maxim/tests/test_prompts.py (2 hunks)
- pyproject.toml (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (11)
maxim/tests/test_groq.py (1)
- maxim/maxim.py (2): logger (897-942), Maxim (127-1045)
maxim/tests/test_openai.py (1)
- maxim/maxim.py (2): logger (897-942), Maxim (127-1045)
maxim/tests/test_logger.py (2)
- maxim/tests/test_anthropic.py (2): setUp (26-31), setUp (121-126)
- maxim/maxim.py (1): Maxim (127-1045)
maxim/tests/test_langgraph.py (1)
- maxim/maxim.py (1): Maxim (127-1045)
maxim/tests/test_decorators.py (1)
- maxim/maxim.py (1): logger (897-942)
maxim/tests/test_litellm.py (1)
- maxim/maxim.py (1): Maxim (127-1045)
maxim/tests/test_fireworks.py (1)
- maxim/maxim.py (2): logger (897-942), Maxim (127-1045)
maxim/tests/test_anthropic.py (1)
- maxim/maxim.py (2): logger (897-942), Maxim (127-1045)
maxim/tests/test_portkey.py (1)
- maxim/maxim.py (2): logger (897-942), Maxim (127-1045)
maxim/tests/test_gemini.py (5)
- maxim/tests/test_fireworks.py (1): asyncSetUp (467-483)
- maxim/tests/test_groq.py (1): asyncSetUp (638-650)
- maxim/tests/test_litellm.py (1): asyncSetUp (213-227)
- maxim/tests/test_openai.py (1): asyncSetUp (23-27)
- maxim/maxim.py (2): Maxim (127-1045), logger (897-942)
maxim/tests/test_maxim_core_simple.py (2)
- maxim/maxim.py (1): Maxim (127-1045)
- maxim/cache/inMemory.py (1): MaximInMemoryCache (4-41)
🪛 Ruff (0.12.2)
maxim/tests/test_logger.py
34-34: Missing return type annotation for public function setUp
Add return type annotation: None
(ANN201)
maxim/tests/test_gemini.py
19-19: Missing return type annotation for public function asyncSetUp
Add return type annotation: None
(ANN201)
maxim/tests/test_maxim_core_simple.py
147-147: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
149-149: Missing return type annotation for public function test_maxim_uses_env_api_key
Add return type annotation: None
(ANN201)
155-155: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
157-157: Missing return type annotation for public function test_maxim_singleton_pattern
Add return type annotation: None
(ANN201)
161-161: Local variable maxim1 is assigned to but never used
Remove assignment to unused variable maxim1
(F841)
163-163: Use pytest.raises instead of unittest-style assertRaises
Replace assertRaises with pytest.raises
(PT027)
166-166: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
168-168: Missing return type annotation for public function test_maxim_default_cache_creation
Add return type annotation: None
(ANN201)
172-172: Do not call getattr with a constant attribute value. It is not any safer than normal property access.
Replace getattr with attribute access
(B009)
173-173: Use a regular assert instead of unittest-style assertIsInstance
Replace assertIsInstance(...) with assert ...
(PT009)
175-175: Missing return type annotation for public function test_cleanup_method_stops_running
Add return type annotation: None
(ANN201)
🪛 YAMLlint (1.37.1)
.github/workflows/tests.yml
[warning] 3-3: truthy value should be one of [false, true]
(truthy)
[error] 5-5: too few spaces after comma
(commas)
[error] 7-7: too few spaces after comma
(commas)
[error] 84-84: no new line character at the end of file
(new-line-at-end-of-file)
🪛 GitHub Actions: Run Tests
maxim/tests/test_langgraph.py
[error] 37-37: Step: uv run pytest maxim/tests/ -v --ignore=maxim/tests/integrations/crewai/ --ignore=maxim/tests/test_livekit.py --ignore=maxim/tests/test_livekit_realtime.py; Error: pydantic_core._pydantic_core.ValidationError: 1 validation error for TavilySearchAPIWrapper: Did not find tavily_api_key; please set an environment variable TAVILY_API_KEY or pass tavily_api_key as a named parameter
[warning] 37-37: LangChainDeprecationWarning: TavilySearchResults is deprecated in LangChain 0.3.25 and will be removed in 1.0. Install langchain_tavily and import TavilySearch from there.
maxim/tests/test_decorators.py
[error] 12-12: Step: uv run pytest maxim/tests/ -v --ignore=maxim/tests/integrations/crewai/ --ignore=maxim/tests/test_livekit.py --ignore=maxim/tests/test_livekit_realtime.py; Error: ImportError: attempted relative import with no known parent package
[warning] 10-10: LangChainDeprecationWarning: Importing ChatOpenAI from langchain.chat_models is deprecated. Replace with 'from langchain_community.chat_models.openai import ChatOpenAI'.
🔇 Additional comments (7)
pyproject.toml (1)
15-19: Classifiers reformatting is fine. No semantic change; keeping these standardized helps packaging metadata.
maxim/tests/test_logger.py (1)
30-31: Base URL defaulting looks good. Using a sane default keeps local runs simple while allowing override via MAXIM_BASE_URL.
maxim/tests/test_prompts.py (1)
16-16: Good: baseUrl now defaults to the public SaaS URL. This keeps tests resilient when MAXIM_BASE_URL isn’t set.
maxim/tests/test_litellm.py (1)
42-46: Drop credentials guard in TestLiteLLM – tests use in-memory writer. The LiteLLM tracer tests in maxim/tests/test_litellm.py neither use the loaded apiKey nor repoId — they only instantiate Maxim with base_url and immediately replace its writer with the in-memory mock (inject_mock_writer) before exercising litellm.completion. No real API or repo-ID calls are ever made, and missing MAXIM_API_KEY/MAXIM_LOG_REPO_ID will not cause CI failures.
• In setUp(), only baseUrl is passed to Maxim({"base_url": baseUrl}) and then inject_mock_writer swaps in the mock writer (no external network I/O).
• All tests guard only on the actual LLM keys (OPENAI_API_KEY, ANTHROPIC_API_KEY) and correctly call skipTest when those are missing.
The proposed diff to add a skip-guard for MAXIM_API_KEY/MAXIM_LOG_REPO_ID is unnecessary. Please disregard the previous suggestion.
Likely an incorrect or invalid review comment.
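For context, a rough sketch of the mock-writer pattern these tests rely on; the class and attribute names below are assumptions for illustration, and the real helper lives in maxim/tests/mock_writer.py:
```python
class MockLogWriter:
    """In-memory stand-in for the real log writer (assumed interface)."""

    def __init__(self):
        self._logs = []

    def commit(self, log):
        # Hypothetical entry point the logger calls instead of doing network I/O.
        self._logs.append(log)

    def get_all_logs(self):
        return list(self._logs)


def inject_mock_writer(logger):
    mock = MockLogWriter()
    logger.writer = mock  # assumed attribute; swaps the real writer for the mock
    return mock
```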
maxim/tests/test_gemini.py (1)
16-16: Good: added sensible default for baseUrl. This mirrors Maxim’s default and makes local runs predictable.
maxim/tests/test_anthropic.py (1)
30-31: Ignore gating of API key and repo ID in this test setup. The inject_mock_writer(self.logger) call replaces the real writer before any network activity, so neither MAXIM_API_KEY nor MAXIM_LOG_REPO_ID need to be present for the test to pass — no external calls are made once the mock writer is injected. Likewise, passing an explicit api_key or repo_id into Maxim or its logger() call is unnecessary here, as the mock handles all logging interactions.
Likely an incorrect or invalid review comment.
.github/workflows/tests.yml (1)
78-81: Verify UV configuration and dependency group. I ran searches against pyproject.toml and didn’t find any tool.uv sections or a groups.additional_dev block. Please confirm whether your CrewAI dependencies live in a separate UV group.
- If they do, update the sync step to explicitly target that group. For example:
```diff
-          uv sync --python 3.10
+          uv sync --python 3.10 --group <correct-group-name>
```
- If no custom group exists, ensure those dependencies are included in the default set (or define a new group) so the CI picks them up.
```yaml
jobs:
  test-main:
    name: Test Main SDK (Python 3.9)
    runs-on: ubuntu-latest
    env:
      MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }}
      MAXIM_LOG_REPO_ID: ${{ secrets.MAXIM_LOG_REPO_ID }}
      MAXIM_BASE_URL: ${{ secrets.MAXIM_BASE_URL }}
      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
      ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
      GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
      GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
      FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
      MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
      TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
      TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
```
🧹 Nitpick (assertive)
Optional: add concurrency control to cancel superseded runs on the same ref
Speeds up CI by cancelling in-flight runs when new commits land on the same branch/PR.
Proposed addition:
```diff
+concurrency:
+  group: tests-${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
 jobs:
```
🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 9 to 25: the workflow lacks concurrency
control so CI runs for the same branch/PR are not cancelled when newer commits
arrive; add a top-level concurrency stanza to cancel superseded runs by setting
concurrency.group to a unique key like "${{ github.workflow }}-${{ github.ref
}}" and concurrency.cancel-in-progress to true (place this at the top level of
the workflow file, not inside a job) so in-flight runs on the same ref are
automatically cancelled when a new run is triggered.
```yaml
steps:
  - uses: actions/checkout@v4

  - name: Install uv
    uses: astral-sh/setup-uv@v4
    with:
      version: "latest"
```
🧹 Nitpick (assertive)
Pin actions to immutable SHAs and set minimal permissions
For supply-chain safety and least privilege:
- Pin actions/checkout@v4 and astral-sh/setup-uv@v4 to commit SHAs.
- Add permissions: contents: read at workflow top.
These are standard hardening steps.
Proposed additions (example):
```diff
 name: Run Tests
+permissions:
+  contents: read
+
 on:
```
Please replace version tags with their corresponding SHAs (example snippet below—update with current SHAs):
```diff
-  uses: actions/checkout@v4
+  uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac # v4
-  uses: astral-sh/setup-uv@v4
+  uses: astral-sh/setup-uv@<pin-to-latest-v4-sha>
```
If you want, I can fetch the latest SHAs and update the patch.
Also applies to: 70-77
🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 26 to 33 (and similar change at 70-77):
the workflow uses action version tags which should be pinned to immutable commit
SHAs and the workflow lacks least-privilege permissions; update each uses: entry
(actions/checkout@v4 and astral-sh/setup-uv@v4) to the corresponding commit SHA
for that tag (replace the tag with the full SHA for the referenced release) and
add a top-level permissions: contents: read entry to the workflow YAML; apply
the same SHA-pinning for the other occurrences at lines 70-77.
```yaml
additional-tests:
  name: Test Additional Integrations (Python 3.10)
  runs-on: ubuntu-latest
  env:
    MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }}
    MAXIM_BASE_URL: ${{ secrets.MAXIM_BASE_URL }}
    MAXIM_LOG_REPO_ID: ${{ secrets.MAXIM_LOG_REPO_ID }}
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}

  steps:
```
Guard secrets on PRs from forks to prevent failing runs and reduce risk
GitHub doesn’t expose repository secrets to workflows triggered by PRs from forks. This workflow will fail on such PRs because tests require API keys. Gate the jobs so they run on:
- Pushes, or
- PRs where head.repo.fork == false.
This keeps CI green for external contributions and prevents secret-dependent steps from executing unintentionally.
Apply this diff:
```diff
 additional-tests:
   name: Test Additional Integrations (Python 3.10)
   runs-on: ubuntu-latest
+  if: ${{ github.event_name != 'pull_request' || github.event.pull_request.head.repo.fork == false }}
```
And similarly for the first job:
```diff
 test-main:
   name: Test Main SDK (Python 3.9)
   runs-on: ubuntu-latest
+  if: ${{ github.event_name != 'pull_request' || github.event.pull_request.head.repo.fork == false }}
```
🤖 Prompt for AI Agents
In .github/workflows/tests.yml around lines 57-67, the additional-tests job (and
the first job) will fail for PRs from forks because repository secrets are not
exposed; update each secret-dependent job to only run on pushes or on
pull_request events where head.repo.fork == false by adding an appropriate
job-level conditional that checks github.event_name == 'push' OR
github.event.pull_request.head.repo.fork == false, ensuring secret-based env
vars are only used when allowed.
```python
self.logger = Maxim({"base_url": baseUrl}).logger()
self.mock_writer = inject_mock_writer(self.logger)
```
🛠️ Refactor suggestion
Initialize logger with repo id and skip when missing
Avoid env-dependent hard failures by checking MAXIM_LOG_REPO_ID and passing the id to logger() explicitly.
Apply this diff:
```diff
-        self.logger = Maxim({"base_url": baseUrl}).logger()
+        repo_id = os.getenv("MAXIM_LOG_REPO_ID")
+        if not repo_id:
+            self.skipTest("MAXIM_LOG_REPO_ID environment variable is not set")
+        self.logger = Maxim({"base_url": baseUrl}).logger({"id": repo_id})
```
🤖 Prompt for AI Agents
In maxim/tests/test_portkey.py around lines 31-33, the test currently
initializes the logger without an explicit repo id which causes env-dependent
failures; change it to read repo_id = os.environ.get("MAXIM_LOG_REPO_ID"), call
pytest.skip(...) if repo_id is missing, and then pass that id explicitly to
logger() (e.g. .logger(repo_id=repo_id)); also add the required imports (os and
pytest) at the top of the test file.
Actionable comments posted: 23
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (16)
maxim/tests/test_prompts.py (1)
25-31: Passing base_url in config aligns with Maxim’s initializer; consider also skipping when required envs are absent. These tests depend on MAXIM_API_KEY and prompt/folder IDs. To avoid CI flakes, guard with skipIf when critical env vars are missing.
```diff
 def setUp(self):
     # Clear singleton instance if it exists
     if hasattr(Maxim, "_instance"):
         delattr(Maxim, "_instance")
-    self.maxim = Maxim(
+    if not apiKey:
+        self.skipTest("MAXIM_API_KEY not set")
+    self.maxim = Maxim(
         {
             "api_key": apiKey,
             "debug": True,
             "prompt_management": True,
             "base_url": baseUrl
         }
     )
```
maxim/tests/test_langgraph.py (2)
def setUp(self): # Clear singleton instance if it exists if hasattr(Maxim, "_instance"): delattr(Maxim, "_instance") - self.maxim = Maxim( + if not apiKey: + self.skipTest("MAXIM_API_KEY not set") + self.maxim = Maxim( { "api_key": apiKey, "debug": True, "prompt_management": True, "base_url": baseUrl } )maxim/tests/test_langgraph.py (2)
37-38: CI failure root cause: Tavily tool is instantiated at import time and requires TAVILY_API_KEY.Pipeline error shows: “Did not find tavily_api_key; set TAVILY_API_KEY…”. Instantiate tools lazily and skip tests when the key is absent to avoid module import failures in PRs without secrets.
-tools = [TavilySearchResults(max_results=1)] +tavily_api_key = os.getenv("TAVILY_API_KEY") +tools = [] +if tavily_api_key: + tools = [TavilySearchResults(max_results=1, tavily_api_key=tavily_api_key)]Additionally, make graph/tool-node creation conditional:
-# Define the function to execute tools -tool_node = ToolNode(tools) +# Define the function to execute tools (if available) +tool_node = ToolNode(tools) if tools else None @@ -workflow.add_node("action", tool_node) +if tool_node: + workflow.add_node("action", tool_node) @@ -workflow.add_conditional_edges( +if tool_node: + workflow.add_conditional_edges( "agent", should_continue, { "continue": "action", "end": END, }, -) +) @@ -workflow.add_edge("action", "agent") +if tool_node: + workflow.add_edge("action", "agent")And early-exit in should_continue when no tools are available:
def should_continue(state): + if not tools: + return "end" messages = state["messages"] last_message = messages[-1]
141-169: Optionally skip the whole test suite when TAVILY_API_KEY is missing. If you prefer not to conditionally rewire the graph, you can skip these tests on PRs that don’t expose secrets.
```diff
-class TestLangGraph(unittest.TestCase):
+@unittest.skipUnless(os.getenv("TAVILY_API_KEY"), "TAVILY_API_KEY not set; skipping LangGraph tests")
+class TestLangGraph(unittest.TestCase):
```
maxim/tests/test_anthropic.py (1)
120-126: Align base_url usage and repo-id handling across tests in this module. The second test class still constructs Maxim() without base_url and without an explicit repo id. For consistency and CI stability, use the same baseUrl and provide a repo-id fallback like above.
```diff
-        self.logger = Maxim().logger()
+        if not apiKey or not anthropicApiKey:
+            self.skipTest("MAXIM_API_KEY and/or ANTHROPIC_API_KEY not set")
+        self.logger = Maxim({"base_url": baseUrl}).logger(
+            {"id": (repoId or f"test-repo-{uuid4()}")}
+        )
```
maxim/tests/test_portkey.py (2)
323-391: Async test method is never awaited under unittest.TestCase. async def tests are ignored by unittest.TestCase. Wrap with asyncio.run or switch the class to IsolatedAsyncioTestCase. Below converts this specific test to a sync wrapper that drives an async helper.
```diff
-    async def test_portkey_tool_calls_async(self):
+    def test_portkey_tool_calls_async(self):
+        import asyncio
+        asyncio.run(self._test_portkey_tool_calls_async())
+
+    async def _test_portkey_tool_calls_async(self):
         """Test Portkey integration with tool calls (asynchronous)."""
         # Create an async Portkey client and instrument it
         async_portkey_client = portkey_ai.AsyncPortkey(
             api_key=portkey_api_key, virtual_key=portkey_virtual_key
         )
         instrumented_client = MaximPortkeyClient(async_portkey_client, self.logger)
@@
         # Async tool calls should also generate logs
         all_logs = self.mock_writer.get_all_logs()
         self.assertGreater(len(all_logs), 0, "Expected at least one log to be captured")
```
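Alternatively, a minimal sketch of the IsolatedAsyncioTestCase route (standard library since Python 3.8; test body elided):
```python
import unittest

class TestPortkeyToolCallsAsync(unittest.IsolatedAsyncioTestCase):
    async def test_portkey_tool_calls_async(self):
        # `async def test_*` methods are awaited natively by this base class,
        # so no asyncio.run wrapper is needed.
        ...
```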
27-33: Add guards and fallbacks for Portkey and Maxim env vars in test_portkey.py. The current tests unconditionally initialize a global logger and a Portkey client without checking for required environment variables. In a CI or local dev setup without these secrets, you’ll get ValueError or real-service calls. We need to:
- Remove or guard the unscoped, module-level logger initialization.
- Skip the entire test class if PORTKEY_API_KEY or PORTKEY_VIRTUAL_KEY is missing.
- In setUp, ensure MAXIM_API_KEY is set (via a test default) and supply a deterministic repo_id to logger(), so you never hit the “Log repo id is required” error.
Locations to update:
• maxim/tests/test_portkey.py: top of file
• Lines 27–33 (inside setUp)
Suggested diffs:
```diff
--- a/maxim/tests/test_portkey.py
@@
-portkey_api_key = os.getenv("PORTKEY_API_KEY")
-portkey_virtual_key = os.getenv("PORTKEY_VIRTUAL_KEY")
-baseUrl = os.getenv("MAXIM_BASE_URL") or "https://app.getmaxim.ai"
-logger = Maxim({"base_url": baseUrl}).logger()
+portkey_api_key = os.getenv("PORTKEY_API_KEY")
+portkey_virtual_key = os.getenv("PORTKEY_VIRTUAL_KEY")
+baseUrl = os.getenv("MAXIM_BASE_URL") or "https://app.getmaxim.ai"
+
+# Skip all Portkey integration tests if no credentials
+if not (portkey_api_key and portkey_virtual_key):
+    raise unittest.SkipTest(
+        "Skipping Portkey integration tests: PORTKEY_API_KEY/PORTKEY_VIRTUAL_KEY not set"
+    )
+
+# Defer logger init into each test's setUp to control env fallbacks
+logger = None
@@
 class TestPortkeyIntegration(unittest.TestCase):
     def setUp(self):
         # This is a hack to ensure that the Maxim instance is not cached
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
-        self.logger = Maxim({"base_url": baseUrl}).logger()
+        # Provide a test API key and deterministic repo ID to avoid external dependencies
+        os.environ.setdefault("MAXIM_API_KEY", "test-api-key")
+        self.logger = Maxim({"base_url": baseUrl}).logger(
+            {"id": os.getenv("MAXIM_LOG_REPO_ID") or "test-repo"}
+        )
         self.mock_writer = inject_mock_writer(self.logger)
```
These changes ensure the test file:
- Skips all tests cleanly when Portkey secrets aren’t set.
- Uses a safe default MAXIM_API_KEY so Maxim() never errors.
- Supplies a static MAXIM_LOG_REPO_ID fallback, avoiding random UUIDs in CI logs.
maxim/tests/test_gemini.py (1)
62-69: Mirror the same CI-safe setup in the sync test class. Make the sync setUp deterministic and independent of secrets.
```diff
-    def setUp(self):
+    def setUp(self) -> None:
         # This is a hack to ensure that the Maxim instance is not cached
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
-        self.logger = Maxim({"base_url": baseUrl}).logger()
+        if not geminiApiKey:
+            self.skipTest("GEMINI_API_KEY not set")
+        os.environ.setdefault("MAXIM_API_KEY", "test-api-key")
+        self.logger = Maxim({"base_url": baseUrl}).logger(
+            {"id": os.getenv("MAXIM_LOG_REPO_ID") or f"test-repo-{uuid4()}"}
+        )
         self.mock_writer = inject_mock_writer(self.logger)
```
maxim/tests/test_logger.py (2)
47-52: Same fix for TestLogging.setUp (annotation + test API key). Replicate the CI-safe setup.
```diff
-    def setUp(self):
+    def setUp(self) -> None:
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
-        self.maxim = Maxim({"base_url": baseUrl})
+        os.environ.setdefault("MAXIM_API_KEY", "test-api-key")
+        self.maxim = Maxim({"base_url": baseUrl})
```
87-93: Same fix for TestCreatingSession.setUp (annotation + test API key). Replicate the CI-safe setup.
```diff
-    def setUp(self):
+    def setUp(self) -> None:
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
-        self.maxim = Maxim({"base_url": baseUrl})
+        os.environ.setdefault("MAXIM_API_KEY", "test-api-key")
+        self.maxim = Maxim({"base_url": baseUrl})
```
maxim/tests/test_litellm.py (2)
221-228: Same CI-safe setup for async tests. Replicate the API key default and repo-id fallback in asyncSetUp.
```diff
-        self.maxim = Maxim({"base_url": baseUrl})
-        self.logger = self.maxim.logger()
+        os.environ.setdefault("MAXIM_API_KEY", "test-api-key")
+        self.maxim = Maxim({"base_url": baseUrl})
+        self.logger = self.maxim.logger(
+            {"id": os.getenv("MAXIM_LOG_REPO_ID") or f"test-repo-{uuid4()}"}
+        )
```
242-246: Don’t block the event loop in async tests. time.sleep(0.5) blocks the loop under IsolatedAsyncioTestCase. Use await asyncio.sleep.
```diff
-        # Give LiteLLM callback time to complete
-        time.sleep(0.5)
+        # Give LiteLLM callback time to complete
+        await asyncio.sleep(0.5)
```
maxim/tests/test_decorators.py (5)
12-17: Fix ImportError causing CI failure: use absolute imports instead of package-relative. Pytest runs tests as top-level modules in this workflow, so “attempted relative import with no known parent package” occurs. Switch to absolute imports under maxim.*.
```diff
-from .. import Config, Maxim
-from ..decorators import current_retrieval, current_trace, retrieval, span, trace
-from ..decorators.langchain import langchain_callback, langchain_llm_call
-from ..logger import LoggerConfig
-from ..tests.mock_writer import inject_mock_writer
+from maxim import Config, Maxim
+from maxim.decorators import current_retrieval, current_trace, retrieval, span, trace
+from maxim.decorators.langchain import langchain_callback, langchain_llm_call
+from maxim.logger import LoggerConfig
+from maxim.tests.mock_writer import inject_mock_writer
```
Alternative (if you prefer keeping relative imports): add an __init__.py under maxim/tests so modules are package-aware. I can open a follow-up PR to add that file.
33-38: Make tests independent of CI secrets: ensure API key and repo id exist before creating the Maxim logger. Without MAXIM_API_KEY or MAXIM_LOG_REPO_ID, Maxim.logger() raises. Since this suite uses a mock writer, a dummy API key is sufficient and avoids leaking secrets.
```diff
@@
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
-        self.logger = Maxim({"base_url": baseUrl}).logger()
+        if not os.getenv("MAXIM_API_KEY"):
+            os.environ["MAXIM_API_KEY"] = "test-api-key"
+        repo_id = os.getenv("MAXIM_LOG_REPO_ID") or "test-repo"
+        self.logger = Maxim({"base_url": baseUrl}).logger({"id": repo_id})
```
84-90: Same resiliency for the Flask/decorators test setup. Provide a test-safe API key and logger id to avoid environment coupling.
```diff
@@
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
-        self.maxim = Maxim({"base_url": baseUrl})
-        self.logger = self.maxim.logger()
+        if not os.getenv("MAXIM_API_KEY"):
+            os.environ["MAXIM_API_KEY"] = "test-api-key"
+        repo_id = os.getenv("MAXIM_LOG_REPO_ID") or "test-repo"
+        self.maxim = Maxim({"base_url": baseUrl})
+        self.logger = self.maxim.logger({"id": repo_id})
```
1-1: Fix remaining relative imports in tests to absolute imports. The search revealed that several test files still use relative imports, which will break pytest execution when run from the repository root. Please update them to use absolute imports from the maxim package:
• File: maxim/tests/test_decorators.py (lines 12–16)
• File: maxim/tests/test_logger_azure_openai.py (lines 9–11)
Suggested diff for each occurrence (example from test_decorators.py):
```diff
-from .. import Config, Maxim
-from ..decorators import current_retrieval, current_trace, retrieval, span, trace
-from ..decorators.langchain import langchain_callback, langchain_llm_call
-from ..logger import LoggerConfig
-from ..tests.mock_writer import inject_mock_writer
+from maxim import Config, Maxim
+from maxim.decorators import current_retrieval, current_trace, retrieval, span, trace
+from maxim.decorators.langchain import langchain_callback, langchain_llm_call
+from maxim.logger import LoggerConfig
+from maxim.tests.mock_writer import inject_mock_writer
```
Repeat similar updates in test_logger_azure_openai.py. This will ensure pytest can discover and import modules reliably.
132-171: Ensure compatibility with LangChain’s relocated ChatOpenAI import. We’ve verified that maxim/tests/test_decorators.py currently imports ChatOpenAI from the legacy path (line 10):
from langchain.chat_models.openai import ChatOpenAI
To maintain compatibility with newer LangChain releases (where ChatOpenAI lives in langchain_openai), consider adding a fallback import:
```diff
-from langchain.chat_models.openai import ChatOpenAI
+try:
+    from langchain.chat_models.openai import ChatOpenAI  # legacy path
+except ImportError:
+    from langchain_openai import ChatOpenAI  # new path
```
This change is optional but recommended to prevent import errors if LangChain is upgraded.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
uv.lock is excluded by !**/*.lock
📒 Files selected for processing (14)
- .github/workflows/tests.yml (1 hunks)
- maxim/tests/test_anthropic.py (1 hunks)
- maxim/tests/test_decorators.py (2 hunks)
- maxim/tests/test_fireworks.py (2 hunks)
- maxim/tests/test_gemini.py (2 hunks)
- maxim/tests/test_groq.py (2 hunks)
- maxim/tests/test_langgraph.py (1 hunks)
- maxim/tests/test_litellm.py (2 hunks)
- maxim/tests/test_logger.py (3 hunks)
- maxim/tests/test_maxim_core_simple.py (4 hunks)
- maxim/tests/test_openai.py (1 hunks)
- maxim/tests/test_portkey.py (2 hunks)
- maxim/tests/test_prompts.py (2 hunks)
- pyproject.toml (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (11)
maxim/tests/test_groq.py (1)
- maxim/maxim.py (2): logger (897-942), Maxim (127-1045)
maxim/tests/test_litellm.py (1)
- maxim/maxim.py (1): Maxim (127-1045)
maxim/tests/test_portkey.py (1)
- maxim/maxim.py (2): logger (897-942), Maxim (127-1045)
maxim/tests/test_gemini.py (4)
- maxim/tests/test_fireworks.py (1): asyncSetUp (467-483)
- maxim/tests/test_groq.py (1): asyncSetUp (638-650)
- maxim/tests/test_litellm.py (1): asyncSetUp (213-227)
- maxim/tests/test_openai.py (1): asyncSetUp (23-27)
maxim/tests/test_openai.py (1)
- maxim/maxim.py (2): logger (897-942), Maxim (127-1045)
maxim/tests/test_logger.py (11)
- maxim/tests/test_anthropic.py (2): setUp (26-31), setUp (121-126)
- maxim/tests/test_decorators.py (2): setUp (32-38), setUp (84-90)
- maxim/tests/test_fireworks.py (1): setUp (21-37)
- maxim/tests/test_gemini.py (1): setUp (63-68)
- maxim/tests/test_groq.py (1): setUp (22-34)
- maxim/tests/test_litellm.py (1): setUp (34-48)
- maxim/tests/test_maxim_core_simple.py (2): setUp (39-43), setUp (130-133)
- maxim/tests/test_openai.py (1): setUp (193-197)
- maxim/tests/test_portkey.py (1): setUp (27-32)
- maxim/tests/test_prompts.py (1): setUp (20-32)
- maxim/maxim.py (1): Maxim (127-1045)
maxim/tests/test_langgraph.py (1)
- maxim/maxim.py (1): Maxim (127-1045)
maxim/tests/test_anthropic.py (1)
- maxim/maxim.py (2): logger (897-942), Maxim (127-1045)
maxim/tests/test_maxim_core_simple.py (2)
- maxim/maxim.py (1): Maxim (127-1045)
- maxim/cache/inMemory.py (1): MaximInMemoryCache (4-41)
maxim/tests/test_fireworks.py (1)
- maxim/maxim.py (2): logger (897-942), Maxim (127-1045)
maxim/tests/test_decorators.py (1)
- maxim/maxim.py (2): logger (897-942), Maxim (127-1045)
🪛 Ruff (0.12.2)
maxim/tests/test_gemini.py
19-19: Missing return type annotation for public function asyncSetUp
Add return type annotation: None
(ANN201)
maxim/tests/test_logger.py
34-34: Missing return type annotation for public function setUp
Add return type annotation: None
(ANN201)
maxim/tests/test_maxim_core_simple.py
147-147: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
149-149: Missing return type annotation for public function test_maxim_uses_env_api_key
Add return type annotation: None
(ANN201)
155-155: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
157-157: Missing return type annotation for public function test_maxim_singleton_pattern
Add return type annotation: None
(ANN201)
161-161: Local variable maxim1 is assigned to but never used
Remove assignment to unused variable maxim1
(F841)
163-163: Use pytest.raises instead of unittest-style assertRaises
Replace assertRaises with pytest.raises
(PT027)
166-166: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
168-168: Missing return type annotation for public function test_maxim_default_cache_creation
Add return type annotation: None
(ANN201)
172-172: Do not call getattr with a constant attribute value. It is not any safer than normal property access.
Replace getattr with attribute access
(B009)
173-173: Use a regular assert instead of unittest-style assertIsInstance
Replace assertIsInstance(...) with assert ...
(PT009)
175-175: Missing return type annotation for public function test_cleanup_method_stops_running
Add return type annotation: None
(ANN201)
🪛 YAMLlint (1.37.1)
.github/workflows/tests.yml
[warning] 3-3: truthy value should be one of [false, true]
(truthy)
[error] 5-5: too few spaces after comma
(commas)
[error] 7-7: too few spaces after comma
(commas)
[error] 84-84: no new line character at the end of file
(new-line-at-end-of-file)
🪛 GitHub Actions: Run Tests
maxim/tests/test_langgraph.py
[error] 37-37: During pytest run (uv run pytest maxim/tests/ -v --ignore=maxim/tests/integrations/crewai/ --ignore=maxim/tests/test_livekit.py --ignore=maxim/tests/test_livekit_realtime.py): pydantic ValidationError: TavilySearchAPIWrapper: Did not find tavily_api_key; please set environment variable TAVILY_API_KEY or pass tavily_api_key as a named parameter.
maxim/tests/test_decorators.py
[error] 12-12: During pytest run (uv run pytest maxim/tests/ -v --ignore=maxim/tests/integrations/crewai/ --ignore=maxim/tests/test_livekit.py --ignore=maxim/tests/test_livekit_realtime.py): ImportError: attempted relative import with no known parent package.
🔇 Additional comments (7)
maxim/tests/test_openai.py (1)
197-197: Constructor change to pass base_url via config is correct and future-proof. Maxim.__init__ falls back to "https://app.getmaxim.ai" when base_url is None, so passing an env-derived value (possibly None) is safe.
maxim/tests/test_prompts.py (1)
16-16: Good: defaulting MAXIM_BASE_URL prevents None propagation in tests. Keeps tests stable across environments without MAXIM_BASE_URL.
maxim/tests/test_gemini.py (2)
16-16: LGTM on baseUrl default. Reads MAXIM_BASE_URL with a sensible default. Matches the constructor behavior in Maxim.
19-24: I’ve added a type annotation to asyncSetUp, guarded the test against missing API keys, and provided a fallback repo_id for CI stability:
```diff
--- a/maxim/tests/test_gemini.py
+++ b/maxim/tests/test_gemini.py
@@ -19,8 +19,17 @@ class TestGemini(TestCase):
     async def asyncSetUp(self) -> None:
         # Ensure that the Maxim singleton isn't reused across tests
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
-        self.logger = Maxim({"base_url": baseUrl}).logger()
+        # Skip if Gemini/Maxim API key isn't configured in CI
+        gemini_key = os.getenv("GEMINI_API_KEY") or os.getenv("MAXIM_API_KEY")
+        if not gemini_key:
+            self.skipTest("GEMINI_API_KEY or MAXIM_API_KEY not set")
+
+        # Provide a stable fallback for logger repo ID in CI
+        os.environ.setdefault("MAXIM_API_KEY", gemini_key)
+        repo_id = os.getenv("MAXIM_LOG_REPO_ID") or f"test-repo-{uuid4()}"
+        self.logger = Maxim({"base_url": baseUrl}).logger({"id": repo_id})
         self.mock_writer = inject_mock_writer(self.logger)
```
- Added the -> None return type to satisfy the lint rule.
- Checked for either GEMINI_API_KEY or MAXIM_API_KEY and call skipTest if neither is set.
- Set MAXIM_API_KEY in the environment to ensure the logger sees a valid key.
- Passed a fallback repo_id into logger() so that CI runs remain deterministic.
maxim/tests/test_logger.py (2)
30-31: LGTM on baseUrl source and default. Pulling MAXIM_BASE_URL with a default is consistent with the broader test changes.
37-45: Ensure logger tests explicitly control MAXIM_LOG_REPO_ID. To make these tests order-independent and CI-safe, each one should set or clear the MAXIM_LOG_REPO_ID env var within its own scope. For example:
```diff
 def test_initialize_logger_if_log_repository_exists(self):
-    logger = self.maxim.logger()
+    from unittest import mock
+    with mock.patch.dict(os.environ, {"MAXIM_LOG_REPO_ID": "test-repo"}, clear=False):
+        logger = self.maxim.logger()
     self.assertIsNotNone(logger)

 def test_should_throw_error_if_log_repository_does_not_exist(self):
-    with self.assertRaises(Exception) as context:
-        self.maxim.logger()
-    self.assertTrue("Log repository not found" in str(context.exception))
+    from unittest import mock
+    # Ensure MAXIM_LOG_REPO_ID is unset
+    with mock.patch.dict(os.environ, {}, clear=False):
+        os.environ.pop("MAXIM_LOG_REPO_ID", None)
+        with self.assertRaises(ValueError) as context:
+            self.maxim.logger()
+        self.assertIn("Log repo id is required", str(context.exception))
```
• Wrap each call to self.maxim.logger() in its own mock.patch.dict context
• Verify the exception type (ValueError) and message match the implementation
• This makes the tests deterministic regardless of CI env or execution order
9-9: Config-driven Maxim initialization LGTM. Using base_url from MAXIM_BASE_URL (with safe default) and passing it explicitly into Maxim creates deterministic tests and aligns with the constructor change. No functional issues spotted.
Also applies to: 54-55, 145-146, 153-154, 161-165, 170-171, 177-178, 189-190
```yaml
on:
  push:
    branches: [main,beta]
  pull_request:
    branches: [main,beta]
```
🧹 Nitpick (assertive)
Quote the “on” key and fix branch list spacing to satisfy yamllint.
Also ensures better compatibility with YAML 1.1 parsers that treat bare “on/off” as booleans.
```diff
 name: Run Tests

-on:
+"on":
   push:
-    branches: [main,beta]
+    branches: [main, beta]
   pull_request:
-    branches: [main,beta]
+    branches: [main, beta]
```
+ branches: [main, beta]📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| on: | |
| push: | |
| branches: [main,beta] | |
| pull_request: | |
| branches: [main,beta] | |
| name: Run Tests | |
| "on": | |
| push: | |
| branches: [main, beta] | |
| pull_request: | |
| branches: [main, beta] |
🧰 Tools
🪛 YAMLlint (1.37.1)
[warning] 3-3: truthy value should be one of [false, true]
(truthy)
[error] 5-5: too few spaces after comma
(commas)
[error] 7-7: too few spaces after comma
(commas)
🤖 Prompt for AI Agents
In .github/workflows/tests.yml around lines 3 to 8, the top-level on key should
be quoted and the branch list spacing fixed to satisfy yamllint and avoid YAML
1.1 on/off boolean parsing; update the file to use a quoted key ("on":) and
format the branches arrays with a space after the comma (e.g. branches: [main,
beta]) while preserving existing indentation and semantics.
```yaml
test-main:
  name: Test Main SDK (Python 3.9)
  runs-on: ubuntu-latest
```
🧹 Nitpick (assertive)
Add workflow-level hardening and concurrency.
Reduces default token scope and auto-cancels superseded runs on the same branch.
```diff
+# Cancel previous in-progress runs for the same ref
+# and restrict default permissions
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
+permissions:
+  contents: read
+
 jobs:
```
🤖 Prompt for AI Agents
In .github/workflows/tests.yml around lines 10 to 12, add workflow-level
hardening by restricting the default GITHUB_TOKEN permissions (for example:
permissions: contents: read) and enable concurrency to auto-cancel superseded
runs (for example: concurrency: group: ${{ github.workflow }}-${{ github.ref }},
cancel-in-progress: true). Update the top-level of the workflow file to include
these two keys so runs use the reduced token scope and newer runs cancel older
in-progress runs on the same branch.
```yaml
steps:
  - uses: actions/checkout@v4

  - name: Install uv
    uses: astral-sh/setup-uv@v4
    with:
      version: "latest"
```
🧹 Nitpick (assertive)
Use fixed action versions instead of “latest” for reproducibility.
```diff
-      - name: Install uv
-        uses: astral-sh/setup-uv@v4
-        with:
-          version: "latest"
+      - name: Install uv
+        uses: astral-sh/setup-uv@v4
+        with:
+          version: "0.5.x" # or a specific known-good version
```
Also applies to: 67-74
🤖 Prompt for AI Agents
.github/workflows/tests.yml lines 26-33 (and similarly lines 67-74): the
workflow uses the action astral-sh/setup-uv@v4 with with: version: "latest",
which reduces reproducibility; change the workflow to pin a specific, tested
version (replace "latest" with a concrete version string or tag like "vX.Y.Z" or
a commit SHA), and update both occurrences (lines 26-33 and 67-74) to the chosen
fixed version so the CI is deterministic and reproducible.
🧹 Nitpick (assertive)
Optional: Add timeout-minutes to guard against hung runs.
```diff
 test-main:
   name: Test Main SDK (Python 3.9)
-  runs-on: ubuntu-latest
+  runs-on: ubuntu-latest
+  timeout-minutes: 30
@@
 additional-tests:
   name: Test Additional Integrations (Python 3.10)
-  runs-on: ubuntu-latest
+  runs-on: ubuntu-latest
+  timeout-minutes: 30
```
🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 26-33: the workflow currently lacks a
timeout and can hang indefinitely; add a timeout-minutes entry at the job level
(same indentation as runs-on) for the job containing these steps (e.g.,
timeout-minutes: 10 or an org-appropriate value) to guard against hung runs and
make it configurable via a workflow var if needed.
```python
self.maxim = Maxim({"base_url": baseUrl})
self.logger = self.maxim.logger()
self.mock_writer = inject_mock_writer(self.logger)
```
💡 Verification agent
🧩 Analysis chain
Harden setUp for CI (test API key + repo-id fallback)
Make logger initialization independent of env secrets.
```diff
-        self.maxim = Maxim({"base_url": baseUrl})
-        self.logger = self.maxim.logger()
+        os.environ.setdefault("MAXIM_API_KEY", "test-api-key")
+        self.maxim = Maxim({"base_url": baseUrl})
+        self.logger = self.maxim.logger(
+            {"id": os.getenv("MAXIM_LOG_REPO_ID") or f"test-repo-{uuid4()}"}
+        )
```
🏁 Script executed:
```bash
#!/bin/bash
set -e
# 1. Locate the test file
TEST_FILE=$(fd --type f test_litellm.py)
echo "Test file: $TEST_FILE"
echo
# 2. Inspect imports and setUp context
echo "=== Imports (lines 1–50) ==="
sed -n '1,50p' "$TEST_FILE"
echo
echo "=== setUp Context (lines 30–60) ==="
sed -n '30,60p' "$TEST_FILE"
echo
# 3. Verify if os and uuid4 are imported
echo "=== Checking for os/uuid imports ==="
grep -nP 'import\s+(os|uuid)' "$TEST_FILE" || echo "No os/uuid import found in test file."
echo
# 4. Find where the Maxim class is defined
MAXIM_FILE=$(rg -lP 'class\s+Maxim' -g '*.py' | head -n1)
echo "Maxim class defined in: $MAXIM_FILE"
echo
# 5. Inspect the logger() method signature and implementation
echo "=== logger() Signature ==="
rg -nP 'def\s+logger' -n "$MAXIM_FILE"
echo
echo "=== logger() Implementation (±5 lines) ==="
rg -nP -C5 'def\s+logger' -n "$MAXIM_FILE"
```
Length of output: 3644
🏁 Script executed:
```bash
#!/bin/bash
set -e
# Locate the Maxim class definition file
MAXIM_FILE=$(rg -lP 'class\s+Maxim' -g '*.py' | head -n1)
echo "Maxim class file: $MAXIM_FILE"
echo
# Show the logger method definition lines
echo "=== logger() definition ==="
rg -nP 'def\s+logger' "$MAXIM_FILE"
# Show context around the logger() method
echo
echo "=== Context around logger() ==="
rg -nP -C5 'def\s+logger' "$MAXIM_FILE"
```
Length of output: 807
🏁 Script executed:
```bash
#!/bin/bash
set -e
MAXIM_FILE="maxim/maxim.py"
echo "Maxim implementation file: $MAXIM_FILE"
echo
echo "=== Context around class Maxim ==="
rg -nP -C5 'class\s+Maxim' -n "$MAXIM_FILE"
echo
echo "=== __init__ method (first 50 lines) ==="
INIT_LINE=$(rg -nP 'def\s+__init__' -n "$MAXIM_FILE" | cut -d: -f1 | head -n1)
echo "__init__ starts at line $INIT_LINE"
sed -n "${INIT_LINE},$((INIT_LINE+50))p" "$MAXIM_FILE"
echo
echo "=== logger() method (lines 897–1030) ==="
sed -n '897,1030p' "$MAXIM_FILE"
```
Length of output: 8128
Harden setUp in maxim/tests/test_litellm.py for CI fallback
Ensure tests don’t fail in CI when env vars are missing:
File: maxim/tests/test_litellm.py, inside def setUp(self):
```diff
--- a/maxim/tests/test_litellm.py
+++ b/maxim/tests/test_litellm.py
@@ def setUp(self):
-        self.maxim = Maxim({"base_url": baseUrl})
-        self.logger = self.maxim.logger()
+        # Ensure a test API key is always set for Maxim.__init__
+        os.environ.setdefault("MAXIM_API_KEY", "test-api-key")
+        self.maxim = Maxim({"base_url": baseUrl})
+        # Provide a fallback repo ID so logger() never errors
+        self.logger = self.maxim.logger(
+            {"id": os.getenv("MAXIM_LOG_REPO_ID") or f"test-repo-{uuid4()}"}
+        )
         self.mock_writer = inject_mock_writer(self.logger)
```
🤖 Prompt for AI Agents
In maxim/tests/test_litellm.py around lines 42 to 44, the setUp currently
constructs Maxim with a baseUrl from environment variables and assumes
logger/writer injection will succeed; harden it for CI by reading env vars with
safe defaults (e.g., use os.environ.get("MAXIM_BASE_URL",
"http://localhost:8000") or another minimal stub URL), wrap Maxim initialization
in a try/except to fall back to a mocked or no-op Maxim instance if creation
fails, and ensure logger() and inject_mock_writer are only called when logger is
present (otherwise create a simple test logger or mock). Ensure the setup always
results in self.maxim, self.logger, and self.mock_writer being defined so tests
do not error when CI env vars are missing.
```diff
 def setUp(self):
-    self.maxim = Maxim()
+    self.maxim = Maxim({"base_url": baseUrl})
```
🛠️ Refactor suggestion
Annotate setUp and ensure a test API key is present
Without MAXIM_API_KEY set, Maxim() raises; set a harmless default for tests and satisfy Ruff’s ANN201 by annotating setUp.
```diff
-    def setUp(self):
-        self.maxim = Maxim({"base_url": baseUrl})
+    def setUp(self) -> None:
+        os.environ.setdefault("MAXIM_API_KEY", "test-api-key")
+        self.maxim = Maxim({"base_url": baseUrl})
```
🧰 Tools
🪛 Ruff (0.12.2)
34-34: Missing return type annotation for public function setUp
Add return type annotation: None
(ANN201)
🤖 Prompt for AI Agents
In maxim/tests/test_logger.py around lines 34 to 36, the setUp method is missing
a return type annotation and tests fail when MAXIM_API_KEY is not set; update
the signature to annotate the method (e.g., def setUp(self) -> None:) and ensure
a harmless test key is present before instantiating Maxim by setting
os.environ["MAXIM_API_KEY"] = "test" (or passing an "api_key": "test" in the
Maxim config), and add an import for os at the top if not already present.
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (16)
maxim/filter_objects.py (8)
74-77: Fix boolean parsing: bool("False") evaluates to True. The current logic converts both "True" and "False" to True. Parse booleans case-insensitively by value, not by truthiness of a non-empty string.
```diff
-        # converting 'True'/'False':str to True/False:bool
-        if value == "True" or value == "False":
-            value = bool(value)
-            parsed = True
+        # converting 'True'/'False' (any case) to True/False: bool
+        if isinstance(value, str) and value.strip().lower() in ("true", "false"):
+            value = value.strip().lower() == "true"
+            parsed = True
```
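A two-line illustration of the pitfall:
```python
print(bool("False"))                      # True: any non-empty string is truthy
print("False".strip().lower() == "true")  # False: compare by value instead
```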
67-68: Split on the first operator only to avoid corrupting values. If the value part contains the operator text (e.g., tags=["a=b"]), splitting without a limit will break the value. Use maxsplit=1.
```diff
-        field, value = map(str.strip, condition.split(op))
+        field, value = map(str.strip, condition.split(op, 1))
```
103-107: Break after handling the first detected operator to prevent duplicate/incorrect rules. Without a break, a condition like "a!=b" is parsed twice: once for "!=" and again for "=" (since "=" is still in the string), producing a bogus second rule.
```diff
         result.append(
             RuleType(
                 field=field, value=value, operator=op, exactMatch=exact_match
             )
         )
+        # Stop after the first matching operator
+        break
```
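A quick repro of the double parse, with simplified stand-ins for the module's operator list and rule type:
```python
operators = ["!=", "="]  # "!=" is checked first, but nothing stops the "=" branch
condition = "a!=b"

rules = []
for op in operators:
    if op in condition:
        field, value = map(str.strip, condition.split(op, 1))
        rules.append((field, op, value))

# Without `break`, both branches fire and the second rule is bogus:
print(rules)  # [('a', '!=', 'b'), ('a!', '=', 'b')]
```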
258-260: Fix 'does not include' semantics for list vs list. The current check tests list containment of the entire incoming list, not per-element exclusion. It should ensure none of the incoming elements appear in the rule list.
```diff
-        return field_incoming_rule.value not in field_rule.value
+        return all(el not in field_rule.value for el in field_incoming_rule.value)
```
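A quick illustration with concrete values:
```python
rule_values = ["a", "b", "c"]   # field_rule.value
incoming = ["a", "z"]           # field_incoming_rule.value

# Old check: asks whether the whole incoming list is an element of rule_values
print(incoming not in rule_values)                    # True: wrongly reports "does not include"

# Fixed check: no incoming element may appear in rule_values
print(all(el not in rule_values for el in incoming))  # False: "a" is included
```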
91-101: Allow empty string arrays and validate all elements. The current JSON list parsing rejects empty lists and only checks the first element's type. Prefer validating all elements and allowing empty lists.
```diff
-        if (
-            isinstance(parsed_value, list)
-            and len(parsed_value) > 0
-            and isinstance(parsed_value[0], str)
-        ):
-            value = parsed_value
+        if isinstance(parsed_value, list) and all(isinstance(el, str) for el in parsed_value):
+            value = parsed_value
```
56-66: Operator detection by substring is brittle; consider regex with word boundaries (follow-up). Using `if op in condition` can misfire on unexpected substrings. You’ve mitigated ordering, but a robust approach is to parse with a regex that respects token boundaries and brackets. Not a blocker if you adopt the break and split fixes, but worth tracking.
I can propose a small, bracket-aware operator regex to reduce edge cases if helpful.
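One possible shape for that follow-up, as a sketch only: the operator set and function name below are assumptions, not the module's actual definitions.
```python
import re

# Longest operators first so "!=" wins over "=", ">=" over ">", etc.
_OPS = ["does not include", "includes", ">=", "<=", "!=", ">", "<", "="]
_OP_RE = re.compile(
    r"^(?P<field>[^=!<>\[\]]+?)\s*(?P<op>"
    + "|".join(map(re.escape, _OPS))
    + r")\s*(?P<value>.+)$"
)

def split_condition(condition: str):
    """Split a condition into (field, op, value); None if no operator matches."""
    m = _OP_RE.match(condition.strip())
    if m is None:
        return None
    return m.group("field").strip(), m.group("op"), m.group("value").strip()

print(split_condition('tags includes ["a=b", "c"]'))  # ('tags', 'includes', '["a=b", "c"]')
print(split_condition("a!=b"))                        # ('a', '!=', 'b')
```
Because the field pattern excludes brackets and operator characters, an operator appearing inside a bracketed value never splits the condition, and at most one rule is produced per condition.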
287-305: Type coercion paths are asymmetric and may surprise callers
- The special-case converting incoming ints to strings (Line 303-305) applies even when field_rule.value is not str, which seems unintended.
- Consider constraining that branch to only when the rule expects a string.
Would you like a targeted refactor that centralizes type coercion rules (int<->bool, str, list[str]) into a dedicated helper with explicit cases?
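If useful, one possible shape for that helper, sketched under the assumption that the coercion matrix should follow the module's intended semantics:
```python
from typing import Any

def coerce_incoming(incoming: Any, expected: Any) -> Any:
    """Coerce an incoming value toward the type of the rule's expected value."""
    # bool is checked before the str branch since bool is a subclass of int
    if isinstance(expected, bool) and isinstance(incoming, int):
        return bool(incoming)   # int -> bool only when the rule expects a bool
    if isinstance(expected, str) and isinstance(incoming, int):
        return str(incoming)    # int -> str only when the rule expects a string
    if isinstance(expected, list) and isinstance(incoming, str):
        return [incoming]       # wrap a lone string when the rule expects list[str]
    return incoming             # otherwise leave the value untouched
```
Centralizing the cases this way makes the int-to-str special case apply only when the rule actually expects a string, addressing the asymmetry above.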
46-55: Add tests for new normalization paths. Given the normalization changes, add unit tests to lock behavior (a scaffold follows the list below):
- "is_active=True" → bool True; "is_active=false" → bool False.
- "age=05" → int 5 (confirm desired handling of leading zeros).
- "age=-12" → int -12.
- 'tags includes ["a=b", "c"]' is parsed correctly with split(op, 1).
- Single operator per condition ("a!=b" results in one rule).
- 'does not include' for list vs list excludes overlapping elements.
I can scaffold these tests quickly if you share the existing test layout.
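A sketch of those cases as a test class; parse_incoming_query is an assumed entry point, so adjust the import to whatever the parser is actually exposed as:

```python
import unittest

from maxim.filter_objects import parse_incoming_query  # assumed entry point


class TestConditionNormalization(unittest.TestCase):
    def test_bool_parsing_is_case_insensitive(self) -> None:
        self.assertIs(parse_incoming_query("is_active=True")[0].value, True)
        self.assertIs(parse_incoming_query("is_active=false")[0].value, False)

    def test_int_parsing_handles_signs_and_leading_zeros(self) -> None:
        self.assertEqual(parse_incoming_query("age=05")[0].value, 5)
        self.assertEqual(parse_incoming_query("age=-12")[0].value, -12)

    def test_value_containing_operator_text_survives(self) -> None:
        rule = parse_incoming_query('tags includes ["a=b", "c"]')[0]
        self.assertEqual(rule.value, ["a=b", "c"])

    def test_single_rule_per_condition(self) -> None:
        self.assertEqual(len(parse_incoming_query("a!=b")), 1)
```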
maxim/tests/test_portkey.py (2)
409-422: Make tool_calls assertion robust to None/[] across SDK versions
Some clients return None, others []. Use a falsy check.

```diff
-        # Verify no tool calls
-        self.assertIsNone(response.choices[0].message.tool_calls)
+        # Verify no tool calls
+        self.assertFalse(response.choices[0].message.tool_calls)
```
51-53: Reduce max_tokens for CI speed and cost
Large max_tokens increases latency and spend. 64 is enough to validate logging/instrumentation.

```diff
-            max_tokens=1000,
+            max_tokens=64,
@@
-            max_tokens=150,
+            max_tokens=64,
@@
-            max_tokens=200,
+            max_tokens=64,
@@
-            max_tokens=100,
+            max_tokens=64,
@@
-            max_tokens=50,
+            max_tokens=64,
```

Also applies to: 141-142, 268-269, 373-374, 412-413
maxim/tests/test_logger_langchain_03x.py (2)
498-501: Fix typo: "meth teacher" → "math teacher"
User-facing text in logs/tests.
- "You are a meth teacher", + "You are a math teacher", @@ - "You are a meth teacher", + "You are a math teacher",Also applies to: 519-522
468-476: Skip Anthropic tests when ANTHROPIC_API_KEY is missing
Mirror the Azure gating for consistency.
```diff
     def test_generation_chat_prompt_anthropic_sonnet_chat_model_streaming(self):
+        if not anthropicApiKey:
+            self.skipTest("ANTHROPIC_API_KEY is not set")
         model = ChatAnthropic(
@@
     def test_generation_chat_prompt_anthropic_sonnet_chat_model_with_tool_call(self):
+        if not anthropicApiKey:
+            self.skipTest("ANTHROPIC_API_KEY is not set")
         model = ChatAnthropic(
@@
     def test_generation_chat_prompt_anthropic_3_sonnet_chat_model(self):
+        if not anthropicApiKey:
+            self.skipTest("ANTHROPIC_API_KEY is not set")
         model = ChatAnthropic(
@@
     def test_generation_chat_prompt_anthropic_haiku_chat_model(self):
+        if not anthropicApiKey:
+            self.skipTest("ANTHROPIC_API_KEY is not set")
         model = ChatAnthropic(
```

Also applies to: 549-556, 569-576, 589-595
maxim/logger/portkey/portkey.py (2)
126-139: Don't end traces you didn't create; gate trace.end() on is_local_trace and make closure resilient.
Right now trace.end() runs regardless of whether the trace was locally created. If x-maxim-trace-id was provided (is_local_trace is False), this can prematurely end an upstream-owned trace. Also, because trace.end() is inside the try block, any exception during parsing/result logging will skip closure and leave the trace open.
- Gate end() with is_local_trace.
- Prefer a finally block to guarantee closure even on exceptions.
Apply the minimal guard within this hunk:
```diff
-            if trace is not None:
+            if is_local_trace and trace is not None:
                 trace.end()
```

For stronger safety, consider this restructuring (illustrative; touches lines outside this hunk):
```python
response = self._client.chat.completions.create(*args, **kwargs)
parsed_response = None
try:
    if generation is not None:
        parsed_response = OpenAIUtils.parse_completion(response)
        generation.result(parsed_response)
    if is_local_trace and trace is not None and parsed_response is not None:
        trace.set_output(
            parsed_response.get("choices", [{}])[0]
            .get("message", {})
            .get("content", "")
        )
except Exception as e:
    scribe().warning(
        f"[MaximSDK][MaximPortkeyChatCompletions] Error in logging generation: {str(e)}"
    )
finally:
    try:
        if is_local_trace and trace is not None:
            trace.end()
    except Exception as ee:
        scribe().warning(
            f"[MaximSDK][MaximPortkeyChatCompletions] Error ending trace: {str(ee)}"
        )
```
194-207: Async path mirrors the same ownership and closure issues; apply the same guard and finally.
The async version should not end non-local traces either, and should guarantee closure via finally.
Minimal fix in this hunk:
```diff
-            if trace is not None:
+            if is_local_trace and trace is not None:
                 trace.end()
```

Recommend mirroring the same try/except/finally structure shown in the sync comment to ensure end() is always attempted but never crashes the call path.
maxim/tests/test_logger_azure_openai.py (2)
29-36: Guard against missing MAXIM_* env vars before constructing Logger; set safe defaults or pass id explicitly.
setUp currently creates Maxim() and logger() before skipping when Azure creds are missing. If MAXIM_LOG_REPO_ID or MAXIM_API_KEY aren't set, logger() raises before the skip triggers. Provide defaults (non-overriding) or pass a test id directly.
```diff
     def setUp(self) -> None:
         # This is a hack to ensure that the Maxim instance is not cached
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
-        self.maxim = Maxim({ "base_url": baseUrl })
-        self.logger = self.maxim.logger()
+        # Ensure tests don't fail without env; set harmless defaults if missing
+        os.environ.setdefault("MAXIM_API_KEY", "test-api-key")
+        os.environ.setdefault("MAXIM_LOG_REPO_ID", "test-repo-id")
+        self.maxim = Maxim({ "base_url": baseUrl or "https://app.getmaxim.ai" })
+        # Pass explicit repo id to avoid env coupling and keep auto_flush off for determinism
+        self.logger = self.maxim.logger({"id": os.environ["MAXIM_LOG_REPO_ID"], "auto_flush": False})
         self.mock_writer = inject_mock_writer(self.logger)
```
37-45: Skip check should run before any cloud-dependent calls.
The Azure credentials gate should be evaluated before doing any work that can raise due to unrelated env (e.g., logger creation). After adopting the previous change, this remains less critical, but ordering the skip first reduces test setup cost.
```diff
-        # Skip tests if Azure credentials are not available
-        if not azureApiKey or not azureEndpoint:
-            self.skipTest("Azure OpenAI credentials not available")
-
-        self.client = AzureOpenAI(
+        # Skip tests if Azure credentials are not available
+        if not azureApiKey or not azureEndpoint:
+            self.skipTest("Azure OpenAI credentials not available")
+        self.client = AzureOpenAI(
             api_version="2024-08-01-preview",
             api_key=azureApiKey,
             azure_endpoint=azureEndpoint,
         )
```
♻️ Duplicate comments (10)
maxim/tests/test_portkey.py (1)
32-33: Pass an explicit repo id to logger() and skip when MAXIM_LOG_REPO_ID is missing
Calling Maxim(...).logger() without an id depends on MAXIM_LOG_REPO_ID being set and will raise at import/CI time when absent. Make the test robust by reading the env var, skipping when missing, and passing the id explicitly.

```diff
-        self.logger = Maxim({"base_url": baseUrl}).logger()
+        repo_id = os.getenv("MAXIM_LOG_REPO_ID")
+        if not repo_id:
+            self.skipTest("MAXIM_LOG_REPO_ID environment variable is not set")
+        self.logger = Maxim({"base_url": baseUrl}).logger({"id": repo_id})
```

.github/workflows/tests.yml (7)
3-3: Quote the "on" key to avoid YAML parsing issues.
Apply this diff:
```diff
-on:
+"on":
```
5-5: Fix branch list spacing for YAML lint compliance.
Apply this diff:
```diff
-  branches: [main,beta]
+  branches: [main, beta]
```

Also applies to: 7-7
9-12: Add workflow-level security hardening and concurrency control.
Adding permissions restriction and concurrency control improves security and CI efficiency.
Apply this diff:
```diff
+permissions:
+  contents: read
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
 jobs:
```
25-31: Simplify dependency management by removing redundant backup operations.
The current approach creates multiple backups and mutates pyproject.toml unnecessarily. The sed pattern doesn't match the actual content in pyproject.toml.
Apply this diff to simplify the approach:
```diff
-      - name: Backup pyproject.toml
-        run: |
-          cp pyproject.toml pyproject.toml.bak
-
-      - name: Remove additional dependencies
-        run: |
-          sed -i.bak '/additional_dev = \[/,/\]/d' pyproject.toml
+      # Install dev dependencies directly without mutating pyproject.toml
```
75-77: Remove redundant restore step.
Since we're no longer mutating pyproject.toml, the restore step is unnecessary.
Apply this diff:
```diff
-      - name: Restore pyproject.toml
-        run: |
-          mv pyproject.toml.bak pyproject.toml
+      # No restore needed since pyproject.toml is no longer modified
```
15-20: Pin action versions to specific releases instead of "latest".
Using "latest" reduces reproducibility. Pin to specific versions for stable CI.
Apply this diff:
```diff
-        uses: astral-sh/setup-uv@v4
-        with:
-          version: "latest"
+        uses: astral-sh/setup-uv@v4
+        with:
+          version: "0.5.0"  # or your preferred stable version
```

Also applies to: 84-89
100-100: Add trailing newline at end of file.
Apply this diff:
```diff
-          uv run pytest maxim/tests/test_crewai.py
+          uv run pytest maxim/tests/test_crewai.py
+
```

maxim/tests/test_logger.py (2)
33-36: Annotate all setUp methods with -> None (Ruff ANN201).
Keeps CI lint clean and is consistent with other tearDown annotations.
```diff
-class TestLoggerInitialization(unittest.TestCase):
-    def setUp(self):
+class TestLoggerInitialization(unittest.TestCase):
+    def setUp(self) -> None:
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
         self.maxim = Maxim({"base_url": baseUrl})

 class TestLogging(unittest.TestCase):
-    def setUp(self):
+    def setUp(self) -> None:
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
         self.maxim = Maxim({"base_url": baseUrl})

 class TestCreatingSession(unittest.TestCase):
-    def setUp(self):
+    def setUp(self) -> None:
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
         self.maxim = Maxim({"base_url": baseUrl})
```

Also applies to: 49-53, 90-94
29-30: Tests still depend on MAXIM_* env; consider providing safe defaults or mocking to avoid CI flakiness.
Many tests call self.maxim.logger() without a config id; Maxim.logger() then requires MAXIM_LOG_REPO_ID (and MAXIM_API_KEY for Maxim()) to be present in the environment. This will fail locally and in CI unless those are set. Either ensure secrets are wired in the workflow, or mock the writer in these tests as done in Azure tests.
Options:
- Provide defaults in setUp (won’t override real secrets):
```python
import os

os.environ.setdefault("MAXIM_API_KEY", "test-api-key")
os.environ.setdefault("MAXIM_LOG_REPO_ID", "test-repo-id")
```
- Or inject the mock writer where the network isn’t under test:
```python
from maxim.tests.mock_writer import inject_mock_writer

logger = self.maxim.logger({"id": "test-repo-id", "auto_flush": False})
mock_writer = inject_mock_writer(logger)
```
- Or ensure your GitHub Actions exports secrets:
```yaml
env:
  MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }}
  MAXIM_LOG_REPO_ID: ${{ secrets.MAXIM_LOG_REPO_ID }}
  MAXIM_BASE_URL: ${{ vars.MAXIM_BASE_URL || 'https://app.getmaxim.ai' }}
```

If you want, I can open a follow-up PR to add mock injection to the non-integration tests.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (12)
- .github/workflows/tests.yml (1 hunks)
- maxim/filter_objects.py (1 hunks)
- maxim/logger/portkey/portkey.py (2 hunks)
- maxim/tests/test_agno.py (0 hunks)
- maxim/tests/test_decorators.py (3 hunks)
- maxim/tests/test_logger.py (3 hunks)
- maxim/tests/test_logger_azure_openai.py (6 hunks)
- maxim/tests/test_logger_langchain_03x.py (24 hunks)
- maxim/tests/test_portkey.py (6 hunks)
- maxim/tests/test_prompt_chains.py (3 hunks)
- maxim/tests/test_prompts.py (12 hunks)
- maxim/tests/test_test_runs.py (19 hunks)
💤 Files with no reviewable changes (1)
- maxim/tests/test_agno.py
🧰 Additional context used
🧬 Code graph analysis (8)
- maxim/tests/test_decorators.py (2)
  - maxim/maxim.py (2): Maxim (127-1045), logger (897-942)
  - maxim/tests/mock_writer.py (1): inject_mock_writer (243-259)
- maxim/tests/test_logger_azure_openai.py (4)
  - maxim/maxim.py (2): Maxim (127-1045), logger (897-942)
  - maxim/logger/components/generation.py (1): GenerationConfig (67-76)
  - maxim/logger/components/trace.py (1): TraceConfig (45-55)
  - maxim/tests/mock_writer.py (1): inject_mock_writer (243-259)
- maxim/tests/test_test_runs.py (2)
  - maxim/test_runs/test_run_builder.py (7): with_data (319-332), with_concurrency (512-523), with_evaluators (334-351), with_logger (525-536), with_data_structure (305-317), with_prompt_version_id (411-443), with_prompt_chain_version_id (445-477)
  - maxim/apis/maxim_apis.py (1): create_test_run (1157-1247)
- maxim/tests/test_prompt_chains.py (1)
  - maxim/models/query_builder.py (4): QueryBuilder (18-124), and_ (29-37), deployment_var (72-88), build (107-124)
- maxim/tests/test_portkey.py (3)
  - maxim/maxim.py (2): logger (897-942), Maxim (127-1045)
  - maxim/tests/mock_writer.py (1): inject_mock_writer (243-259)
  - maxim/logger/portkey/client.py (1): instrument_portkey (32-46)
- maxim/tests/test_logger.py (1)
  - maxim/maxim.py (2): Maxim (127-1045), logger (897-942)
- maxim/tests/test_prompts.py (4)
  - maxim/models/query_builder.py (6): QueryBuilder (18-124), and_ (29-37), deployment_var (72-88), build (107-124), tag (90-105), folder (49-60)
  - maxim/apis/maxim_apis.py (2): get_prompt (312-347), get_folders (669-702)
  - maxim/maxim.py (3): get_prompt (736-772), get_folder_by_id (849-873), get_folders (875-895)
  - maxim/runnable/prompt.py (1): run (30-41)
- maxim/tests/test_logger_langchain_03x.py (5)
  - maxim/maxim.py (2): Maxim (127-1045), logger (897-942)
  - maxim/logger/logger.py (1): LoggerConfig (59-75)
  - maxim/logger/langchain/tracer.py (1): MaximLangchainTracer (39-812)
  - maxim/logger/components/trace.py (1): TraceConfig (45-55)
  - maxim/logger/components/span.py (1): SpanConfig (37-45)
🪛 YAMLlint (1.37.1)
.github/workflows/tests.yml
[warning] 3-3: truthy value should be one of [false, true]
(truthy)
[error] 5-5: too few spaces after comma
(commas)
[error] 7-7: too few spaces after comma
(commas)
[error] 100-100: no new line character at the end of file
(new-line-at-end-of-file)
🪛 Ruff (0.12.2)
maxim/tests/test_decorators.py
9-9: Module level import not at top of file
(E402)
10-10: Module level import not at top of file
(E402)
12-12: Module level import not at top of file
(E402)
13-13: Module level import not at top of file
(E402)
14-14: Module level import not at top of file
(E402)
15-15: Module level import not at top of file
(E402)
maxim/tests/test_test_runs.py
5-5: typing.Dict is deprecated, use dict instead
(UP035)
65-65: datetime.datetime.now() called without a tz argument
(DTZ005)
84-84: datetime.datetime.now() called without a tz argument
(DTZ005)
105-105: datetime.datetime.now() called without a tz argument
(DTZ005)
127-127: datetime.datetime.now() called without a tz argument
(DTZ005)
162-162: datetime.datetime.now() called without a tz argument
(DTZ005)
208-208: datetime.datetime.now() called without a tz argument
(DTZ005)
265-265: datetime.datetime.now() called without a tz argument
(DTZ005)
313-313: datetime.datetime.now() called without a tz argument
(DTZ005)
365-365: datetime.datetime.now() called without a tz argument
(DTZ005)
417-417: datetime.datetime.now() called without a tz argument
(DTZ005)
486-486: datetime.datetime.now() called without a tz argument
(DTZ005)
554-554: datetime.datetime.now() called without a tz argument
(DTZ005)
641-641: datetime.datetime.now() called without a tz argument
(DTZ005)
728-728: datetime.datetime.now() called without a tz argument
(DTZ005)
845-845: datetime.datetime.now() called without a tz argument
(DTZ005)
maxim/tests/test_portkey.py
35-35: Missing return type annotation for public function test_instrument_portkey_sync
Add return type annotation: None
(ANN201)
maxim/tests/test_logger.py
33-33: Missing return type annotation for public function setUp
Add return type annotation: None
(ANN201)
38-38: Missing return type annotation for public function test_initialize_logger_if_log_repository_exists
Add return type annotation: None
(ANN201)
40-40: Use a regular assert instead of unittest-style assertIsNotNone
Replace assertIsNotNone(...) with assert ...
(PT009)
42-42: Missing return type annotation for public function test_should_throw_error_if_log_repository_does_not_exist
Add return type annotation: None
(ANN201)
43-43: Use pytest.raises instead of unittest-style assertRaises
Replace assertRaises with pytest.raises
(PT027)
45-45: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
49-49: Missing return type annotation for public function setUp
Add return type annotation: None
(ANN201)
maxim/tests/test_prompts.py
47-47: Create your own exception
(TRY002)
47-47: Avoid specifying long messages outside the exception class
(TRY003)
48-48: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
49-49: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
50-50: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
51-51: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
52-52: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
71-71: Create your own exception
(TRY002)
71-71: Avoid specifying long messages outside the exception class
(TRY003)
73-73: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
74-74: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
75-75: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
76-76: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
111-111: Create your own exception
(TRY002)
111-111: Avoid specifying long messages outside the exception class
(TRY003)
112-112: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
113-113: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
114-114: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
131-131: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
132-132: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
134-134: Missing return type annotation for public function test_getPrompt_with_deployment_variables_multiselect_includes
Add return type annotation: None
(ANN201)
142-142: Create your own exception
(TRY002)
142-142: Avoid specifying long messages outside the exception class
(TRY003)
143-143: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
144-144: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
148-148: Missing return type annotation for public function test_if_prompt_cache_works_fine
Add return type annotation: None
(ANN201)
234-234: Unnecessary list comprehension (rewrite using list())
Rewrite using list()
(C416)
236-236: Missing return type annotation for public function test_getFolderUsingId
Add return type annotation: None
(ANN201)
239-239: Create your own exception
(TRY002)
239-239: Avoid specifying long messages outside the exception class
(TRY003)
240-240: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
242-242: Missing return type annotation for public function test_getFolderUsingTags
Add return type annotation: None
(ANN201)
244-244: Boolean positional value in function call
(FBT003)
maxim/tests/test_logger_langchain_03x.py
65-65: Missing return type annotation for public function test_generation_chat_prompt
Add return type annotation: None
(ANN201)
136-136: Missing return type annotation for public function test_generation_chat_prompt_azure_chat_model_with_streaming
Add return type annotation: None
(ANN201)
467-467: Missing return type annotation for public function test_generation_chat_prompt_anthropic_sonnet_chat_model_streaming
Add return type annotation: None
(ANN201)
488-488: Missing return type annotation for public function test_generation_chat_prompt_openai_chat_model_with_tool_call
Add return type annotation: None
(ANN201)
506-506: Missing return type annotation for public function test_generation_chat_prompt_openai_chat_model_with_tool_call_with_streaming
Add return type annotation: None
(ANN201)
528-528: Missing return type annotation for public function test_generation_chat_prompt_openai_chat_model_with_streaming
Add return type annotation: None
(ANN201)
549-549: Missing return type annotation for public function test_generation_chat_prompt_anthropic_sonnet_chat_model_with_tool_call
Add return type annotation: None
(ANN201)
569-569: Missing return type annotation for public function test_generation_chat_prompt_anthropic_3_sonnet_chat_model
Add return type annotation: None
(ANN201)
588-588: Missing return type annotation for public function test_generation_chat_prompt_anthropic_haiku_chat_model
Add return type annotation: None
(ANN201)
607-607: Missing return type annotation for public function test_generation_chat_prompt_azure_chat_model_old_class
Add return type annotation: None
(ANN201)
627-627: Missing return type annotation for public function test_generation_chat_prompt_chat_model_with_span
Add return type annotation: None
(ANN201)
697-697: Missing return type annotation for public function test_langchain_generation_with_azure_multi_prompt_chain
Add return type annotation: None
(ANN201)
772-772: Missing return type annotation for public function test_custom_result_with_generation_chat_prompt
Add return type annotation: None
(ANN201)
833-833: langchain.schema.runnable.RunnablePassthrough imported but unused
Remove unused import: langchain.schema.runnable.RunnablePassthrough
(F401)
🪛 GitHub Actions: Run Tests
maxim/tests/test_prompts.py
[error] 239-239: Folder not found
🔇 Additional comments (7)
maxim/tests/test_decorators.py (3)
6-7: Environment configuration loading implemented correctly.
Good addition of dotenv loading to support environment-driven test configuration, consistent with other test files in this PR.
10-15: Import updates properly implemented.
The imports have been correctly updated to use absolute paths and the modern langchain_openai package, resolving the relative import issues and deprecation warnings.
35-35: Configuration-based Maxim initialization applied consistently.
Both test classes now properly use the config dictionary pattern for Maxim initialization with base_url, aligning with the broader test suite changes in this PR.
Also applies to: 86-86
maxim/tests/test_prompt_chains.py (1)
46-47: LGTM: QueryBuilder usage with enforced deployment var "Environment=Prod"
Consistent with the repo's pattern; exact-match not required here.
Also applies to: 55-56
maxim/tests/test_logger.py (1)
9-9: Import adjustment looks good.
Switching to from maxim import Maxim aligns with the updated constructor usage across the suite.
maxim/tests/test_logger_azure_openai.py (2)
61-71: Migration to chat.completions with gpt-4o looks correct and matches GenerationConfig.
- Messages payloads align with chat API.
- Stop sequences provided consistently in both GenerationConfig and request.
- Tool-call payload conforms to function tools schema.
Also applies to: 111-122, 144-186
41-45: Use a current api_version and your deployment name.
1. Recommended api_version: use "2024-07-01-preview" in your AzureOpenAI client instantiation for Chat Completions (learn.microsoft.com).
2. model must be your Azure deployment name, not the underlying public model identifier. For example, if you deployed "gpt-4o" as the deployment name, you pass model="gpt-4o" (learn.microsoft.com).

Latest example from the official Python docs:

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-07-01-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
)

response = client.chat.completions.create(
    model="gpt-4o",  # must match your deployment name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "When was Microsoft founded?"}
    ]
)

print(response.choices[0].message.content)
```

([learn.microsoft.com](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/switching-endpoints))
```diff
         try:
             value = int(value)
             parsed = True
         except ValueError:
             pass
         if not parsed:
             if isinstance(value, bool):
                 value = bool(value)
                 parsed = True
-            elif type(value) is int:
+            elif isinstance(value, int):
                 value = int(value)
                 parsed = True
         if not parsed:
```
🛠️ Refactor suggestion
Be conservative when casting to int and remove unreachable isinstance branches
- Guard integer casting with a strict regex so we don’t accidentally coerce arbitrary strings.
- The subsequent isinstance checks are unreachable for this code path and add no value.
```diff
-        if not parsed:
-            try:
-                value = int(value)
-                parsed = True
-            except ValueError:
-                pass
-        if not parsed:
-            if isinstance(value, bool):
-                value = bool(value)
-                parsed = True
-            elif isinstance(value, int):
-                value = int(value)
-                parsed = True
+        if not parsed and isinstance(value, str) and re.fullmatch(r"[+-]?\d+", value.strip()):
+            try:
+                value = int(value.strip())
+                parsed = True
+            except ValueError:
+                pass
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In maxim/filter_objects.py around lines 78-90, the code tries int(value)
unguarded then has unreachable isinstance branches; change the flow to first
detect booleans (if value is bool, keep as-is), then only attempt integer
coercion for string-like inputs that strictly match a digits regex (e.g.
r'^[+-]?\d+$') before calling int(value), and remove the redundant
isinstance(value, int) branch (and any other unreachable isinstance checks) so
only valid numeric strings are coerced to int.
```diff
+from dotenv import load_dotenv
+load_dotenv()
+
 from flask import Flask, request
-from langchain.chat_models.openai import ChatOpenAI
+from langchain_openai import ChatOpenAI

-from .. import Config, Maxim
-from ..decorators import current_retrieval, current_trace, retrieval, span, trace
-from ..decorators.langchain import langchain_callback, langchain_llm_call
-from ..logger import LoggerConfig
-from ..tests.mock_writer import inject_mock_writer
+from maxim import Maxim
+from maxim.decorators import current_retrieval, current_trace, retrieval, span, trace
+from maxim.decorators.langchain import langchain_callback, langchain_llm_call
+from maxim.tests.mock_writer import inject_mock_writer
```
🧹 Nitpick (assertive)
Module-level imports after dotenv loading.
Static analysis flags that module imports should be at the top of the file. While the current pattern works, consider moving the dotenv loading to a conftest.py or test setup to maintain proper import ordering.
Consider this refactoring:
```diff
+import concurrent.futures
+import logging
+import os
+import unittest
+from uuid import uuid4
+
+from flask import Flask, request
+from langchain_openai import ChatOpenAI
+
+from maxim import Maxim
+from maxim.decorators import current_retrieval, current_trace, retrieval, span, trace
+from maxim.decorators.langchain import langchain_callback, langchain_llm_call
+from maxim.tests.mock_writer import inject_mock_writer
+
 from dotenv import load_dotenv
 load_dotenv()
-
-from flask import Flask, request
-from langchain_openai import ChatOpenAI
-
-from maxim import Maxim
-from maxim.decorators import current_retrieval, current_trace, retrieval, span, trace
-from maxim.decorators.langchain import langchain_callback, langchain_llm_call
-from maxim.tests.mock_writer import inject_mock_writer
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
import concurrent.futures
import logging
import os
import unittest
from uuid import uuid4

from flask import Flask, request
from langchain_openai import ChatOpenAI

from maxim import Maxim
from maxim.decorators import current_retrieval, current_trace, retrieval, span, trace
from maxim.decorators.langchain import langchain_callback, langchain_llm_call
from maxim.tests.mock_writer import inject_mock_writer

from dotenv import load_dotenv
load_dotenv()
```
🧰 Tools
🪛 Ruff (0.12.2)
9-9: Module level import not at top of file
(E402)
10-10: Module level import not at top of file
(E402)
12-12: Module level import not at top of file
(E402)
13-13: Module level import not at top of file
(E402)
14-14: Module level import not at top of file
(E402)
15-15: Module level import not at top of file
(E402)
🤖 Prompt for AI Agents
In maxim/tests/test_decorators.py around lines 6 to 15, the file calls
load_dotenv() before importing other modules which violates the rule that
module-level imports should be at the top; move the dotenv loading into your
test setup (e.g., maxim/tests/conftest.py or a pytest fixture) so that
maxim/tests/test_decorators.py only contains imports at the top, remove the
load_dotenv() call from this file, and ensure the conftest.py executes
load_dotenv() before tests run (or provide a session-scoped fixture that calls
load_dotenv) so environment variables are available without altering import
order.
```diff
 class TestLoggingUsingLangchain(unittest.TestCase):
     def setUp(self):
-        self.maxim = Maxim()
+        if hasattr(Maxim, "_instance"):
+            delattr(Maxim, "_instance")
+        self.maxim = Maxim({ "base_url": baseUrl, "api_key": apiKey })
         self.logger = self.maxim.logger(LoggerConfig(id=repoId))
```
🛠️ Refactor suggestion
Skip all tests in this class when core env vars are missing
These tests hit OpenAI/Azure/Anthropic. Skip early if repo id, OpenAI key, or Maxim credentials are missing to keep PR CI reliable.
```diff
     def setUp(self):
-        if hasattr(Maxim, "_instance"):
-            delattr(Maxim, "_instance")
-        self.maxim = Maxim({ "base_url": baseUrl, "api_key": apiKey })
-        self.logger = self.maxim.logger(LoggerConfig(id=repoId))
+        if hasattr(Maxim, "_instance"):
+            delattr(Maxim, "_instance")
+        missing = [name for name, val in [
+            ("MAXIM_API_KEY", apiKey),
+            ("MAXIM_LOG_REPO_ID", repoId),
+            ("OPENAI_API_KEY", openAIKey),
+        ] if not val]
+        if missing:
+            self.skipTest(f"Skipping langchain logger tests; missing env: {', '.join(missing)}")
+        self.maxim = Maxim({"base_url": baseUrl, "api_key": apiKey})
+        self.logger = self.maxim.logger(LoggerConfig(id=repoId))
```

📝 Committable suggestion
```python
class TestLoggingUsingLangchain(unittest.TestCase):
    def setUp(self):
        if hasattr(Maxim, "_instance"):
            delattr(Maxim, "_instance")
        missing = [name for name, val in [
            ("MAXIM_API_KEY", apiKey),
            ("MAXIM_LOG_REPO_ID", repoId),
            ("OPENAI_API_KEY", openAIKey),
        ] if not val]
        if missing:
            self.skipTest(f"Skipping langchain logger tests; missing env: {', '.join(missing)}")
        self.maxim = Maxim({"base_url": baseUrl, "api_key": apiKey})
        self.logger = self.maxim.logger(LoggerConfig(id=repoId))
```
🧰 Tools
🪛 Ruff (0.12.2)
59-59: Missing return type annotation for public function setUp
Add return type annotation: None
(ANN201)
🤖 Prompt for AI Agents
In maxim/tests/test_logger_langchain_03x.py around lines 58-64, the test class
setUp always attempts to create a Maxim instance and logger even when required
environment credentials are missing; update setUp to detect missing core env
vars (repoId, OpenAI key, and Maxim credentials like baseUrl/apiKey) and call
self.skipTest(...) (or raise unittest.SkipTest) to skip the whole class early
when any are absent so CI doesn't hit external OpenAI/Azure/Anthropic services.
```python
        self.maxim = Maxim({ "base_url": baseUrl, "api_key": apiKey })
        self.logger = self.maxim.logger(LoggerConfig(id=repoId))

    def test_generation_chat_prompt(self):
```
🧹 Nitpick (assertive)
Add -> None annotations to public test methods flagged by Ruff
Mechanical, keeps CI lint happy.
```diff
-    def test_generation_chat_prompt(self):
+    def test_generation_chat_prompt(self) -> None:
@@
-    def test_generation_chat_prompt_chat_model(self):
+    def test_generation_chat_prompt_chat_model(self) -> None:
@@
-    def test_generation_chat_prompt_azure_chat_model(self):
+    def test_generation_chat_prompt_azure_chat_model(self) -> None:
@@
-    def test_generation_chat_prompt_azure_chat_model_with_streaming(self):
+    def test_generation_chat_prompt_azure_chat_model_with_streaming(self) -> None:
@@
-    def test_generation_chat_prompt_anthropic_sonnet_chat_model_streaming(self):
+    def test_generation_chat_prompt_anthropic_sonnet_chat_model_streaming(self) -> None:
@@
-    def test_generation_chat_prompt_openai_chat_model_with_tool_call(self):
+    def test_generation_chat_prompt_openai_chat_model_with_tool_call(self) -> None:
@@
     def test_generation_chat_prompt_openai_chat_model_with_tool_call_with_streaming(
         self,
-    ):
+    ) -> None:
@@
-    def test_generation_chat_prompt_openai_chat_model_with_streaming(self):
+    def test_generation_chat_prompt_openai_chat_model_with_streaming(self) -> None:
@@
-    def test_generation_chat_prompt_anthropic_sonnet_chat_model_with_tool_call(self):
+    def test_generation_chat_prompt_anthropic_sonnet_chat_model_with_tool_call(self) -> None:
@@
-    def test_generation_chat_prompt_anthropic_3_sonnet_chat_model(self):
+    def test_generation_chat_prompt_anthropic_3_sonnet_chat_model(self) -> None:
@@
-    def test_generation_chat_prompt_anthropic_haiku_chat_model(self):
+    def test_generation_chat_prompt_anthropic_haiku_chat_model(self) -> None:
@@
-    def test_generation_chat_prompt_azure_chat_model_old_class(self):
+    def test_generation_chat_prompt_azure_chat_model_old_class(self) -> None:
@@
-    def test_generation_chat_prompt_chat_model_with_span(self):
+    def test_generation_chat_prompt_chat_model_with_span(self) -> None:
@@
-    def test_langchain_generation_with_azure_multi_prompt_chain(self):
+    def test_langchain_generation_with_azure_multi_prompt_chain(self) -> None:
@@
-    def test_custom_result_with_generation_chat_prompt(self):
+    def test_custom_result_with_generation_chat_prompt(self) -> None:
@@
-    def test_multi_node_langchain_chain(self):
+    def test_multi_node_langchain_chain(self) -> None:
```

Also applies to: 104-104, 116-116, 136-136, 467-467, 488-488, 506-506, 528-528, 549-549, 569-569, 588-588, 607-607, 627-627, 697-697, 772-772, 817-817
🧰 Tools
🪛 Ruff (0.12.2)
65-65: Missing return type annotation for public function test_generation_chat_prompt
Add return type annotation: None
(ANN201)
🤖 Prompt for AI Agents
In maxim/tests/test_logger_langchain_03x.py around lines 65, 104, 116, 136, 467,
488, 506, 528, 549, 569, 588, 607, 627, 697, 772, and 817, public test methods
are missing explicit return annotations; add "-> None" to each test method
signature (e.g., def test_generation_chat_prompt(self) -> None:) so Ruff no
longer flags them, ensure spacing and typing style matches existing file
conventions.
```diff
         model = AzureChatOpenAI(
             api_key=azureOpenAIKey,
-            model="gpt-35-turbo-16k",
+            model="gpt-4o",
             azure_endpoint=azureOpenAIBaseUrl,
-            callbacks=[MaximLangchainTracer(logger)],
+            callbacks=[MaximLangchainTracer(self.logger)],
             api_version="2024-02-01",
         )
```
🛠️ Refactor suggestion
Skip provider-specific tests when their keys are absent
Azure tests should be skipped when AZURE_OPENAI_KEY or AZURE_OPENAI_BASE_URL is not set.
```diff
     def test_generation_chat_prompt_azure_chat_model(self):
+        if not (azureOpenAIKey and azureOpenAIBaseUrl):
+            self.skipTest("Azure OpenAI env vars are not set")
         model = AzureChatOpenAI(
             api_key=azureOpenAIKey,
             model="gpt-4o",
             azure_endpoint=azureOpenAIBaseUrl,
             callbacks=[MaximLangchainTracer(self.logger)],
             api_version="2024-02-01",
         )
@@
     def test_generation_chat_prompt_azure_chat_model_with_streaming(self):
+        if not (azureOpenAIKey and azureOpenAIBaseUrl):
+            self.skipTest("Azure OpenAI env vars are not set")
         model = AzureChatOpenAI(
             api_key=azureOpenAIKey,
             model="gpt-4o",
             azure_endpoint=azureOpenAIBaseUrl,
             callbacks=[MaximLangchainTracer(self.logger)],
             api_version="2024-02-01",
             streaming=True,
         )
@@
     def test_generation_chat_prompt_azure_chat_model_old_class(self):
+        if not (azureOpenAIKey and azureOpenAIBaseUrl):
+            self.skipTest("Azure OpenAI env vars are not set")
         model = AzureOpenAI(
             api_key=azureOpenAIKey,
             model="gpt-4o",
             azure_endpoint=azureOpenAIBaseUrl,
             callbacks=[MaximLangchainTracer(self.logger)],
             api_version="2024-02-01",
         )
```

Also applies to: 136-144, 607-614
🤖 Prompt for AI Agents
In maxim/tests/test_logger_langchain_03x.py around lines 116-122 (and also apply
same change at 136-144 and 607-614), the Azure-specific test/config is
unconditionally instantiated; change it to first check for AZURE_OPENAI_KEY and
AZURE_OPENAI_BASE_URL in the environment and skip the test if either is missing
(use pytest.skip or @pytest.mark.skipif with a clear reason). Concretely, guard
the AzureChatOpenAI creation with a conditional that reads os.environ for those
two keys and calls pytest.skip("Azure OpenAI credentials not set") when absent,
or apply a skipif decorator to the test functions covering these line ranges so
the provider-specific tests are skipped when keys are not provided.
```diff
     def test_getFolderUsingId(self):
-        folder = self.maxim.get_folder_by_id(folderID)
+        folder = self.maxim.get_folder_by_id(os.getenv("MAXIM_FOLDER_1_ID"))
         if folder is None:
             raise Exception("Folder not found")
         self.assertEqual(folder.name, "SDK Tests")

     def test_getFolderUsingTags(self):
```
Fix CI failure: Skip folder test when MAXIM_FOLDER_1_ID is missing or invalid
The pipeline error “Folder not found” indicates env is not provisioned. Avoid hard failures by skipping when the id isn’t configured.
```diff
-    def test_getFolderUsingId(self):
-        folder = self.maxim.get_folder_by_id(os.getenv("MAXIM_FOLDER_1_ID"))
+    def test_getFolderUsingId(self) -> None:
+        folder_id = os.getenv("MAXIM_FOLDER_1_ID")
+        if not folder_id:
+            self.skipTest("MAXIM_FOLDER_1_ID is not set")
+        folder = self.maxim.get_folder_by_id(folder_id)
         if folder is None:
             raise Exception("Folder not found")
         self.assertEqual(folder.name, "SDK Tests")
```

📝 Committable suggestion
```python
    def test_getFolderUsingId(self) -> None:
        folder_id = os.getenv("MAXIM_FOLDER_1_ID")
        if not folder_id:
            self.skipTest("MAXIM_FOLDER_1_ID is not set")
        folder = self.maxim.get_folder_by_id(folder_id)
        if folder is None:
            raise Exception("Folder not found")
        self.assertEqual(folder.name, "SDK Tests")
```
🧰 Tools
🪛 Ruff (0.12.2)
236-236: Missing return type annotation for public function test_getFolderUsingId
Add return type annotation: None
(ANN201)
239-239: Create your own exception
(TRY002)
239-239: Avoid specifying long messages outside the exception class
(TRY003)
240-240: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
🪛 GitHub Actions: Run Tests
[error] 239-239: Folder not found
🤖 Prompt for AI Agents
In maxim/tests/test_prompts.py around lines 236 to 241, the test raises an
exception when MAXIM_FOLDER_1_ID is missing or the lookup returns None which
fails CI; change the test to gracefully skip instead of hard-failing by checking
os.getenv("MAXIM_FOLDER_1_ID") at the top of the test and calling
self.skipTest("MAXIM_FOLDER_1_ID not configured") when it's falsy, and also
replace the raise Exception("Folder not found") with self.skipTest("Configured
MAXIM_FOLDER_1_ID not found or invalid") if folder is None so the test is
skipped rather than failing.
```python
        )
        self.assertEqual(folders[0].name, "SDK Tests")
        self.assertEqual(len(folders), 1)
```
🧹 Nitpick (assertive)
Prefer keyword for boolean arg in tag() to satisfy Ruff FBT003
Nit, improves clarity.
```diff
-        folders = self.maxim.get_folders(
-            QueryBuilder().and_().tag("Testing", True).build()
-        )
+        folders = self.maxim.get_folders(
+            QueryBuilder().and_().tag("Testing", True,).build()
+        )
```

Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 Ruff (0.12.2)
244-244: Boolean positional value in function call
(FBT003)
246-246: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
🤖 Prompt for AI Agents
In maxim/tests/test_prompts.py around lines 244 to 246, the call
QueryBuilder().and_().tag("Testing", True).build() passes a boolean as a
positional argument which triggers Ruff FBT003; change the call to pass the
boolean as a keyword matching the tag() signature (for example tag("Testing",
enabled=True) or the actual parameter name used by tag) so the boolean is
explicit and satisfies the linter.
```diff
@@ -1,8 +1,9 @@
 import json
 from datetime import datetime
```
🧹 Nitpick (assertive)
Use timezone-aware timestamps for run names
Ruff DTZ005: prefer datetime.now(timezone.utc) to avoid naive datetimes.
```diff
-from datetime import datetime
+from datetime import datetime, timezone
@@
-            name=f"SDK Test run 1 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}", in_workspace_id=workspaceId
+            name=f"SDK Test run 1 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S %Z')}", in_workspace_id=workspaceId
```

Repeat for other run-name call sites in this file. I can push a batch edit if you prefer.
Also applies to: 65-65, 84-84, 105-105, 127-127, 162-162, 208-208, 265-265, 313-313, 365-365, 417-417, 486-486, 554-554, 641-641, 728-728, 845-845
🤖 Prompt for AI Agents
In maxim/tests/test_test_runs.py around lines 1 and at call sites 65, 84, 105,
127, 162, 208, 265, 313, 365, 417, 486, 554, 641, 728, and 845, the tests create
run names using naive datetimes; update the code to use timezone-aware
timestamps by importing timezone from datetime and replacing datetime.now() (or
similar naive calls) with datetime.now(timezone.utc) at each listed call site so
all run-name timestamps are UTC-aware.
```python
import os
import unittest
from typing import Dict, Optional
from dotenv import load_dotenv
```
🧹 Nitpick (assertive)
Prefer builtin dict over typing.Dict
Minor modern typing cleanup.
```diff
-from typing import Dict, Optional
+from typing import Optional
```

And update return annotations like Dict[str, LocalEvaluatorReturn] → dict[str, LocalEvaluatorReturn] (PEP 585). I can apply this across the file if you'd like.
📝 Committable suggestion
```python
from typing import Optional
```
🧰 Tools
🪛 Ruff (0.12.2)
5-5: typing.Dict is deprecated, use dict instead
(UP035)
🤖 Prompt for AI Agents
In maxim/tests/test_test_runs.py around line 5, replace the use of typing.Dict
with the built-in dict (PEP 585): remove Dict from the typing import list,
update any annotations like Dict[str, LocalEvaluatorReturn] to dict[str,
LocalEvaluatorReturn], and clean up the import line so it only imports Optional
(or remove typing import entirely if unused); ensure all return/type hints are
updated accordingly.
maxim/tests/test_test_runs.py (Outdated)
```python
baseUrl = os.getenv("MAXIM_BASE_URL")
workspaceId = os.getenv("MAXIM_WORKSPACE_ID")
datasetId = os.getenv("MAXIM_DATASET_ID")
workflowId = os.getenv("MAXIM_WORKFLOW_ID")
promptVersionId = os.getenv("MAXIM_TEST_RUN_PROMPT_VERSION_ID")
promptChainVersionId = os.getenv("MAXIM_PROMPT_CHAIN_VERSION_ID")
assistantPromptVersionId = os.getenv("MAXIM_ASSISTANT_PROMPT_VERSION_ID")
assistantPromptChainVersionId = os.getenv("MAXIM_ASSISTANT_PROMPT_CHAIN_VERSION_ID")
```
🧹 Nitpick (assertive)
Skip integration-heavy tests when required env vars are missing
These tests call real Maxim APIs (create test runs, evaluators, workflows) and will fail in CI without secrets or fixtures. Add a guard to skip when mandatory env vars are not set.
```diff
 class TestTestRuns(unittest.TestCase):
     def setUp(self):
         # Clear singleton instance if it exists
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
-        config = Config(
+        missing = [name for name, val in [
+            ("MAXIM_API_KEY", apiKey),
+            ("MAXIM_WORKSPACE_ID", workspaceId),
+            ("MAXIM_DATASET_ID", datasetId),
+        ] if not val]
+        if missing:
+            self.skipTest(f"Skipping Maxim test runs; missing env: {', '.join(missing)}")
+
+        config = Config(
             api_key=apiKey, base_url=baseUrl, debug=True, raise_exceptions=True
         )
         self.maxim = Maxim(config)
```

Optionally, gate all tests in this module behind a single toggle like RUN_MAXIM_INTEGRATION_TESTS=1. I can wire that if you want.
Also applies to: 48-52
Actionable comments posted: 19
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (16)
maxim/logger/portkey/portkey.py (4)
126-139: End traces only when we created them locally, and guarantee end() via finally.
Right now, trace.end() is inside a try block and will be skipped if parsing/setting output raises; also it ends external traces (when trace_id is provided), which likely belong to the caller. Wrap response handling in try/finally and gate end() by is_local_trace to avoid prematurely closing externally managed traces.
Apply:
```diff
-        try:
+        try:
             parsed_response = None
             if generation is not None:
                 parsed_response = OpenAIUtils.parse_completion(response)
                 generation.result(parsed_response)
             if is_local_trace and trace is not None and parsed_response is not None:
                 trace.set_output(
                     parsed_response.get("choices", [{}])[0]
                     .get("message", {})
                     .get("content", "")
                 )
-            if trace is not None:
-                trace.end()
         except Exception as e:
             scribe().warning(
                 f"[MaximSDK][MaximPortkeyChatCompletions] Error in logging generation: {str(e)}"
             )
+        finally:
+            if is_local_trace and trace is not None:
+                try:
+                    trace.end()
+                except Exception as e:
+                    scribe().warning(
+                        f"[MaximSDK][MaximPortkeyChatCompletions] Error ending trace: {str(e)}"
+                    )
```
123-125: Capture API-call exceptions and close local traces on failure.
If self._client.chat.completions.create raises, we never end the local trace. Add a guard to set minimal error output and end the local trace before re-raising.
Apply:
```diff
-        # Call the actual Portkey completion
-        response = self._client.chat.completions.create(*args, **kwargs)
+        # Call the actual Portkey completion
+        try:
+            response = self._client.chat.completions.create(*args, **kwargs)
+        except Exception as e:
+            if is_local_trace and trace is not None:
+                try:
+                    trace.set_output(f"[error] {e}")
+                    trace.end()
+                except Exception:
+                    pass
+            raise
```
194-207: Mirror the same lifecycle fixes in the async path.
Async create has the same two issues: it ends external traces and can leave local traces open on exceptions. Align with the sync fix (gate end by is_local_trace, use finally, and guard the API call).
Apply:
```diff
-        # Call the actual async Portkey completion
-        response = await self._client.chat.completions.create(*args, **kwargs)
+        # Call the actual async Portkey completion
+        try:
+            response = await self._client.chat.completions.create(*args, **kwargs)
+        except Exception as e:
+            if is_local_trace and trace is not None:
+                try:
+                    trace.set_output(f"[error] {e}")
+                    trace.end()
+                except Exception:
+                    pass
+            raise
@@
-        try:
+        try:
             parsed_response = None
             if generation is not None:
                 parsed_response = OpenAIUtils.parse_completion(response)
                 generation.result(parsed_response)
             if is_local_trace and trace is not None and parsed_response is not None:
                 trace.set_output(
                     parsed_response.get("choices", [{}])[0]
                     .get("message", {})
                     .get("content", "")
                 )
-            if trace is not None:
-                trace.end()
         except Exception as e:
             scribe().warning(
                 f"[MaximSDK][MaximAsyncPortkeyChatCompletions] Error in logging generation: {str(e)}"
             )
+        finally:
+            if is_local_trace and trace is not None:
+                try:
+                    trace.end()
+                except Exception as e:
+                    scribe().warning(
+                        f"[MaximSDK][MaximAsyncPortkeyChatCompletions] Error ending trace: {str(e)}"
+                    )
```
11-11: Remove unused import.
ChatCompletion is imported but never used.

```diff
-from portkey_ai import ChatCompletion
```

maxim/tests/test_logger_langchain_03x.py (5)
509-514: Fix repeated assignment in streaming tool-call test.
Same issue as above.
```diff
-        model = model = ChatOpenAI(
+        model = ChatOpenAI(
             callbacks=[MaximLangchainTracer(self.logger)],
             api_key=openAIKey,
             model="gpt-4o-mini",
             streaming=True,
         )
```
529-534: Fix repeated assignment in streaming test.
Same issue as above.
```diff
-        model = model = ChatOpenAI(
+        model = ChatOpenAI(
             callbacks=[MaximLangchainTracer(self.logger)],
             api_key=openAIKey,
             model="gpt-4o-mini",
             streaming=True,
         )
```
497-501: Correct "meth teacher" to "math teacher".
User-facing prompt typo.
- "You are a meth teacher", + "You are a math teacher",Also applies to: 518-521
65-75: Skip tests gracefully when provider keys are absent.
Several tests assume openAIKey/azureOpenAIKey/anthropicApiKey exist and will fail in CI for forks. Add conditional skips or class-level skipUnless to avoid noisy failures.
Example pattern:
```diff
     def test_generation_chat_prompt(self):
-        model = OpenAI(callbacks=[MaximLangchainTracer(self.logger)], api_key=openAIKey)
+        if not openAIKey:
+            self.skipTest("Missing OPENAI_API_KEY")
+        model = OpenAI(callbacks=[MaximLangchainTracer(self.logger)], api_key=openAIKey)
```

Apply similarly for Azure/Anthropic tests.
743-771: Add minimal assertions to increase test value.
Many tests only print results. Assert on non-empty outputs or expected types to catch regressions.
For example:
```diff
-        response = llm_chain.run(first_int=4, second_int=5)
-        print(response)
+        response = llm_chain.run(first_int=4, second_int=5)
+        self.assertIsInstance(response, str)
+        self.assertTrue(len(response) > 0)
```

maxim/tests/test_decorators.py (1)
54-57: Lower thread fan-out to reduce test flakiness.
max_workers=10 on CI runners can add noise. 2-4 is typically sufficient here.
```diff
-        with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
+        with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
```

maxim/tests/test_logger.py (2)
273-284: Avoid 40-100s sleeps in CI; skip or gate slow tests.
These sleeps will blow up CI time. Gate with an env flag or mark as slow and skip in CI.
Apply (pattern shown for two tests here; mirror for other long-sleep tests in this class):
```diff
     def test_session_changes(self):
-        logger = self.maxim.logger()
+        if os.getenv("MAXIM_SKIP_SLOW_TESTS") == "1" or os.getenv("CI"):
+            self.skipTest("Skipping slow session test in CI")
+        logger = self.maxim.logger()
         ...
-        time.sleep(40)
+        time.sleep(1)
         session.add_tag("test", "test tag should appear")
         session.end()

     def test_unended_session(self):
-        logger = self.maxim.logger()
+        if os.getenv("MAXIM_SKIP_SLOW_TESTS") == "1" or os.getenv("CI"):
+            self.skipTest("Skipping slow unended session test in CI")
+        logger = self.maxim.logger()
         session_id = str(uuid4())
         session = logger.session(SessionConfig(id=session_id, name="test session"))
-        time.sleep(100)
+        time.sleep(2)
         session.add_tag("test", "test tag should appear")
-        time.sleep(100)
+        time.sleep(2)
```
521-526: Likely tag misuse: using trace.id as the tag key will fragment analytics.
add_tag(key, value) should use a semantic key. Using a random UUID as key prevents aggregations/filters later.
Apply:
```diff
-        trace.add_tag(trace.id, json.dumps(json.loads(jsonObj)))
+        trace.add_tag("debugJson", json.dumps(json.loads(jsonObj)))
```

If "key must be stable" is intended elsewhere, make this consistent across tests.
maxim/tests/test_portkey.py (1)
327-335: Async test is never awaited under unittest; will pass vacuously or raise "coroutine was never awaited".
Wrap the coroutine in asyncio.run() or switch to pytest-asyncio. Here's an unittest-friendly fix.
Apply:
```diff
-    async def test_portkey_tool_calls_async(self):
-        """Test Portkey integration with tool calls (asynchronous)."""
-        # Create an async Portkey client and instrument it
-        async_portkey_client = portkey_ai.AsyncPortkey(
-            api_key=portkey_api_key,
-            provider=portkey_virtual_key,
-        )
-        instrumented_client = MaximPortkeyClient(async_portkey_client, self.logger)
+    def test_portkey_tool_calls_async(self) -> None:
+        """Test Portkey integration with tool calls (asynchronous)."""
+        import asyncio
+
+        async def _run():
+            # Create an async Portkey client and instrument it
+            async_portkey_client = portkey_ai.AsyncPortkey(
+                api_key=portkey_api_key,
+                provider=portkey_virtual_key,
+            )
+            instrumented_client = MaximPortkeyClient(async_portkey_client, self.logger)
+            # ... rest of the test body remains identical ...
+            return instrumented_client
+
+        asyncio.run(_run())
```

Alternatively, mark as integration and skip when PORTKEY_API_KEY/PORTKEY_VIRTUAL_KEY are absent to avoid network dependency.
maxim/tests/test_prompts.py (1)
18-30: Annotate setUp and bail out early if required env isn't present.
These tests are env-driven; skipping when IDs are missing prevents CI flakes.
Apply:
```diff
 class TestMaximPromptManagement(unittest.TestCase):
-    def setUp(self):
+    def setUp(self) -> None:
         # Clear singleton instance if it exists
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
-        self.maxim = Maxim(
+        required = [apiKey, promptId]
+        if not all(required):
+            self.skipTest("Missing MAXIM_API_KEY or MAXIM_PROMPT_1_ID")
+        self.maxim = Maxim(
             {
                 "api_key": apiKey,
                 "debug": True,
                 "prompt_management": True,
                 "base_url": baseUrl
             }
         )
```
42-52: Skip integration tests when required env is missing. These are networked E2E tests that will fail without real workspace/dataset IDs and an API key. Skip cleanly to keep PR checks signalful.
Apply:

```diff
     def setUp(self):
         # Clear singleton instance if it exists
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
-        config = Config(
+        if not apiKey or not workspaceId:
+            self.skipTest("Missing MAXIM_API_KEY or MAXIM_WORKSPACE_ID")
+        config = Config(
             api_key=apiKey, base_url=baseUrl, debug=True, raise_exceptions=True
         )
         self.maxim = Maxim(config)
```
Optionally add per-test guards for datasetId/workflowId and the assistant ids, depending on what each test uses.
64-71: Consider marking these as integration tests, opt-in via an env flag. To keep PR CI lean, gate .run() calls with e.g. RUN_MAXIM_INTEGRATION_TESTS=1 and skip otherwise. This reduces timeouts and flakes on external dependencies.
I can propose a small helper that checks os.getenv("RUN_MAXIM_INTEGRATION_TESTS") and calls self.skipTest otherwise; a sketch follows below. Let me know if you want a patch across the file.
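A minimal sketch of such a guard, assuming a unittest.TestCase subclass; the mixin and method names are illustrative, not part of the SDK:

```python
import os
import unittest


class IntegrationGuardMixin(unittest.TestCase):
    """Opt-in gate for networked tests (hypothetical helper, not SDK API)."""

    def require_integration(self) -> None:
        # Tests stay green on PRs unless the flag is set explicitly.
        if os.getenv("RUN_MAXIM_INTEGRATION_TESTS") not in ("1", "true", "True"):
            self.skipTest("Integration tests disabled (set RUN_MAXIM_INTEGRATION_TESTS=1)")


class TestRunsIntegration(IntegrationGuardMixin):
    def test_run_with_dataset(self) -> None:
        self.require_integration()
        # ... the real .run() call against the API would go here ...
```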
♻️ Duplicate comments (17)
.github/workflows/tests.yml (13)
3-7: Quote "on" and fix branch-list spacing (yamllint failures). Unquoted on and missing spaces after commas trigger lint errors.
Apply:

```diff
-on:
+"on":
   push:
-    branches: [main,beta]
+    branches: [main, beta]
   pull_request:
-    branches: [main,beta]
+    branches: [main, beta]
```
9-12: Add workflow-level hardening and cancel superseded runs. Restrict the default token, and auto-cancel in-progress runs on the same ref.
Apply at the top level:

```diff
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
+permissions:
+  contents: read
+
 jobs:
```
15-20: Pin actions to immutable SHAs and avoid "latest" for uv. Supply-chain hardening and reproducibility: replace tags with commit SHAs and pin uv to a known-good version.
Apply (use current SHAs per your org's policy):

```diff
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@<sha-for-v4>
@@
-        uses: astral-sh/setup-uv@v4
+        uses: astral-sh/setup-uv@<sha-for-v4>
         with:
-          version: "latest"
+          version: "0.5.x"  # pin a tested version
```
22-24: Pin Python to a patch version (meets requires-python >=3.9.20). Installing just "3.9" can pick older patches. Pin to 3.9.20 (or higher if declared).

```diff
-      - name: Set up Python 3.9
-        run: uv python install 3.9
+      - name: Set up Python 3.9
+        run: uv python install 3.9.20
```
25-32: Stop mutating pyproject.toml; rely on uv groups instead. The backup/mutate/restore steps are brittle and unnecessary. Install the dev group directly with a frozen lock.
Apply:

```diff
-      - name: Backup pyproject.toml
-        run: |
-          cp pyproject.toml pyproject.toml.bak
-
-      - name: Remove additional dependencies
-        run: |
-          sed -i.bak '/additional_dev = \[/,/\]/d' pyproject.toml
+      # No pyproject mutation required; install only dev deps below
```
33-36: Install only the dev group and freeze resolution. This guarantees a reproducible env and avoids pulling non-dev groups.

```diff
-      - name: Install dependencies (dev only)
-        run: |
-          uv sync --python 3.9
+      - name: Install dependencies (dev only)
+        run: |
+          uv sync --group dev --python 3.9.20 --frozen
```
37-73: Guard secret-dependent tests on PRs from forks and fail fast on missing secrets. PRs from forks don't receive repo secrets. Add a job-level if, and optionally a pre-check to surface missing secrets early.
Apply:

```diff
   test-main:
     name: Test Main SDK (Python 3.9)
     runs-on: ubuntu-latest
+    if: ${{ github.event_name != 'pull_request' || github.event.pull_request.head.repo.fork == false }}
@@
-      - name: Run main tests (excluding CrewAI)
+      - name: Run main tests (excluding CrewAI)
         env:
           MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }}
@@
           MAXIM_TEST_RUN_ASSISTANT_PROMPT_CHAIN_VERSION_ID: ${{ secrets.MAXIM_TEST_RUN_ASSISTANT_PROMPT_CHAIN_VERSION_ID }}
-        run: |
-          uv run pytest maxim/tests/ -v --ignore=maxim/tests/test_crewai.py --ignore=maxim/tests/test_livekit.py --ignore=maxim/tests/test_livekit_realtime.py --ignore=maxim/tests/test_agno.py
+        run: |
+          for var in MAXIM_API_KEY MAXIM_LOG_REPO_ID OPENAI_API_KEY; do
+            if [ -z "${!var}" ]; then
+              echo "Missing required secret: $var"; exit 1;
+            fi
+          done
+          uv run pytest maxim/tests/ -v \
+            --ignore=maxim/tests/test_crewai.py \
+            --ignore=maxim/tests/test_livekit.py \
+            --ignore=maxim/tests/test_livekit_realtime.py \
+            --ignore=maxim/tests/test_agno.py
```
75-77: Remove the pyproject restore step (we no longer mutate). This restore can fail and is redundant.

```diff
-      - name: Restore pyproject.toml
-        run: |
-          mv pyproject.toml.bak pyproject.toml
+      # No restore needed
```
79-83: Add timeouts to jobs to avoid hung runs and stuck CI.

```diff
   additional-tests:
     name: Test Additional Integrations (Python 3.10)
     runs-on: ubuntu-latest
+    timeout-minutes: 30
```
84-90: Repeat the SHA pinning and uv version pin for the second job. Mirror the hardening across jobs.

```diff
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@<sha-for-v4>
@@
-        uses: astral-sh/setup-uv@v4
+        uses: astral-sh/setup-uv@<sha-for-v4>
         with:
-          version: "latest"
+          version: "0.5.x"
```
91-97: Pin the Python 3.10 patch and install the dev group only. Keep environments consistent and reproducible.

```diff
-      - name: Set up Python 3.10
-        run: uv python install 3.10
+      - name: Set up Python 3.10
+        run: uv python install 3.10.14
@@
-      - name: Install dependencies (CrewAI only)
-        run: |
-          uv sync --python 3.10
+      - name: Install dependencies (dev only)
+        run: |
+          uv sync --group dev --python 3.10.14 --frozen
```
79-83: Gate the second job on secrets for forks, and keep env parity. Apply the same PR-fork guard; add missing env (if used elsewhere) for parity.

```diff
   additional-tests:
     name: Test Additional Integrations (Python 3.10)
     runs-on: ubuntu-latest
+    if: ${{ github.event_name != 'pull_request' || github.event.pull_request.head.repo.fork == false }}
```
100-100: Add a trailing newline at EOF. Fixes yamllint: "no new line character at the end of file".

```diff
-          uv run pytest maxim/tests/test_crewai.py
+          uv run pytest maxim/tests/test_crewai.py
+
```
maxim/tests/test_decorators.py (2)
82-88: Mirror deterministic init in the Flask tests. Same issue as above: pass api_key and the repo id explicitly.

```diff
-        self.maxim = Maxim({"base_url": baseUrl})
-        self.logger = self.maxim.logger()
+        self.maxim = Maxim({"base_url": baseUrl, "api_key": apiKey})
+        self.logger = self.maxim.logger({"id": repoId})
```
33-36: Pass an explicit API key and repo id to avoid env flakiness. Maxim(...) without api_key, and logger() without an id, depend on env; make it deterministic.

```diff
-        self.logger = Maxim({"base_url": baseUrl}).logger()
+        self.logger = Maxim({"base_url": baseUrl, "api_key": apiKey}).logger({"id": repoId})
```
maxim/tests/test_logger.py (1)
33-37: Annotate setUp and ensure per-test cleanup to avoid singleton residue. Add a return type for lint (Ruff ANN201) and a tearDown that calls Maxim.cleanup so the singleton doesn't linger between classes.
Apply:

```diff
-class TestLoggerInitialization(unittest.TestCase):
-    def setUp(self):
+class TestLoggerInitialization(unittest.TestCase):
+    def setUp(self) -> None:
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
         self.maxim = Maxim({"base_url": baseUrl})
+
+    def tearDown(self) -> None:
+        self.maxim.cleanup()
+        return super().tearDown()
```
maxim/tests/test_portkey.py (1)
28-34: Initialize Maxim safely: set an API key default, require a repo id, and store self.maxim for cleanup. logger() without an id depends on MAXIM_LOG_REPO_ID, and Maxim() requires MAXIM_API_KEY. Make the tests robust and avoid global singleton residue.
Apply:

```diff
     def setUp(self):
         # This is a hack to ensure that the Maxim instance is not cached
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
-        self.logger = Maxim({"base_url": baseUrl}).logger()
+        # Ensure API key exists for Maxim ctor even when running locally
+        os.environ.setdefault("MAXIM_API_KEY", "test-api-key")
+        repo_id = os.getenv("MAXIM_LOG_REPO_ID")
+        if not repo_id:
+            self.skipTest("MAXIM_LOG_REPO_ID is not set")
+        self.maxim = Maxim({"base_url": baseUrl})
+        self.logger = self.maxim.logger({"id": repo_id})
         self.mock_writer = inject_mock_writer(self.logger)
```
Also add in tearDown:

```diff
     def tearDown(self) -> None:
         # Print final summary for debugging
         self.mock_writer.print_logs_summary()
         # Cleanup the mock writer
         self.mock_writer.cleanup()
+        # Cleanup Maxim singleton
+        if hasattr(self, "maxim"):
+            self.maxim.cleanup()
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (12)
- .github/workflows/tests.yml (1 hunks)
- maxim/filter_objects.py (1 hunks)
- maxim/logger/portkey/portkey.py (2 hunks)
- maxim/tests/test_agno.py (0 hunks)
- maxim/tests/test_decorators.py (3 hunks)
- maxim/tests/test_logger.py (3 hunks)
- maxim/tests/test_logger_azure_openai.py (6 hunks)
- maxim/tests/test_logger_langchain_03x.py (24 hunks)
- maxim/tests/test_portkey.py (6 hunks)
- maxim/tests/test_prompt_chains.py (3 hunks)
- maxim/tests/test_prompts.py (12 hunks)
- maxim/tests/test_test_runs.py (19 hunks)
💤 Files with no reviewable changes (1)
- maxim/tests/test_agno.py
🧰 Additional context used
🧬 Code graph analysis (8)
maxim/tests/test_prompt_chains.py (1)
- maxim/models/query_builder.py (4): QueryBuilder (18-124), and_ (29-37), deployment_var (72-88), build (107-124)

maxim/tests/test_portkey.py (3)
- maxim/maxim.py (2): logger (897-942), Maxim (127-1045)
- maxim/tests/mock_writer.py (1): inject_mock_writer (243-259)
- maxim/logger/portkey/client.py (1): instrument_portkey (32-46)

maxim/tests/test_logger.py (2)
- maxim/maxim.py (2): Maxim (127-1045), logger (897-942)
- maxim/tests/test_portkey.py (1): setUp (28-33)

maxim/tests/test_prompts.py (4)
- maxim/models/query_builder.py (6): QueryBuilder (18-124), and_ (29-37), deployment_var (72-88), build (107-124), tag (90-105), folder (49-60)
- maxim/apis/maxim_apis.py (2): get_prompt (312-347), get_folders (669-702)
- maxim/maxim.py (3): get_prompt (736-772), get_folder_by_id (849-873), get_folders (875-895)
- maxim/runnable/prompt.py (1): run (30-41)

maxim/tests/test_decorators.py (2)
- maxim/maxim.py (2): Maxim (127-1045), logger (897-942)
- maxim/tests/mock_writer.py (1): inject_mock_writer (243-259)

maxim/tests/test_test_runs.py (2)
- maxim/test_runs/test_run_builder.py (7): with_data (319-332), with_concurrency (512-523), with_evaluators (334-351), with_logger (525-536), with_data_structure (305-317), with_prompt_version_id (411-443), with_prompt_chain_version_id (445-477)
- maxim/apis/maxim_apis.py (1): create_test_run (1157-1247)

maxim/tests/test_logger_azure_openai.py (4)
- maxim/maxim.py (2): Maxim (127-1045), logger (897-942)
- maxim/logger/components/generation.py (1): GenerationConfig (67-76)
- maxim/logger/components/trace.py (1): TraceConfig (45-55)
- maxim/tests/mock_writer.py (1): inject_mock_writer (243-259)

maxim/tests/test_logger_langchain_03x.py (4)
- maxim/maxim.py (2): Maxim (127-1045), logger (897-942)
- maxim/logger/logger.py (1): LoggerConfig (59-75)
- maxim/logger/components/trace.py (1): TraceConfig (45-55)
- maxim/tests/mock_writer.py (1): flush (80-98)
🪛 Ruff (0.12.2)
maxim/tests/test_portkey.py
35-35: Missing return type annotation for public function test_instrument_portkey_sync
Add return type annotation: None
(ANN201)
maxim/tests/test_logger.py
33-33: Missing return type annotation for public function setUp
Add return type annotation: None
(ANN201)
38-38: Missing return type annotation for public function test_initialize_logger_if_log_repository_exists
Add return type annotation: None
(ANN201)
40-40: Use a regular assert instead of unittest-style assertIsNotNone
Replace assertIsNotNone(...) with assert ...
(PT009)
42-42: Missing return type annotation for public function test_should_throw_error_if_log_repository_does_not_exist
Add return type annotation: None
(ANN201)
43-43: Use pytest.raises instead of unittest-style assertRaises
Replace assertRaises with pytest.raises
(PT027)
45-45: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
49-49: Missing return type annotation for public function setUp
Add return type annotation: None
(ANN201)
maxim/tests/test_prompts.py
47-47: Create your own exception
(TRY002)
47-47: Avoid specifying long messages outside the exception class
(TRY003)
48-48: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
49-49: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
50-50: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
51-51: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
52-52: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
71-71: Create your own exception
(TRY002)
71-71: Avoid specifying long messages outside the exception class
(TRY003)
73-73: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
74-74: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
75-75: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
76-76: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
111-111: Create your own exception
(TRY002)
111-111: Avoid specifying long messages outside the exception class
(TRY003)
112-112: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
113-113: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
114-114: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
131-131: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
132-132: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
134-134: Missing return type annotation for public function test_getPrompt_with_deployment_variables_multiselect_includes
Add return type annotation: None
(ANN201)
142-142: Create your own exception
(TRY002)
142-142: Avoid specifying long messages outside the exception class
(TRY003)
143-143: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
144-144: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
148-148: Missing return type annotation for public function test_if_prompt_cache_works_fine
Add return type annotation: None
(ANN201)
234-234: Unnecessary list comprehension (rewrite using list())
Rewrite using list()
(C416)
236-236: Missing return type annotation for public function test_getFolderUsingId
Add return type annotation: None
(ANN201)
239-239: Create your own exception
(TRY002)
239-239: Avoid specifying long messages outside the exception class
(TRY003)
240-240: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
242-242: Missing return type annotation for public function test_getFolderUsingTags
Add return type annotation: None
(ANN201)
244-244: Boolean positional value in function call
(FBT003)
maxim/tests/test_decorators.py
9-9: Module level import not at top of file
(E402)
10-10: Module level import not at top of file
(E402)
12-12: Module level import not at top of file
(E402)
13-13: Module level import not at top of file
(E402)
14-14: Module level import not at top of file
(E402)
15-15: Module level import not at top of file
(E402)
maxim/tests/test_test_runs.py
5-5: typing.Dict is deprecated, use dict instead
(UP035)
65-65: datetime.datetime.now() called without a tz argument
(DTZ005)
84-84: datetime.datetime.now() called without a tz argument
(DTZ005)
105-105: datetime.datetime.now() called without a tz argument
(DTZ005)
127-127: datetime.datetime.now() called without a tz argument
(DTZ005)
162-162: datetime.datetime.now() called without a tz argument
(DTZ005)
208-208: datetime.datetime.now() called without a tz argument
(DTZ005)
265-265: datetime.datetime.now() called without a tz argument
(DTZ005)
313-313: datetime.datetime.now() called without a tz argument
(DTZ005)
365-365: datetime.datetime.now() called without a tz argument
(DTZ005)
417-417: datetime.datetime.now() called without a tz argument
(DTZ005)
486-486: datetime.datetime.now() called without a tz argument
(DTZ005)
554-554: datetime.datetime.now() called without a tz argument
(DTZ005)
641-641: datetime.datetime.now() called without a tz argument
(DTZ005)
728-728: datetime.datetime.now() called without a tz argument
(DTZ005)
845-845: datetime.datetime.now() called without a tz argument
(DTZ005)
maxim/tests/test_logger_langchain_03x.py
65-65: Missing return type annotation for public function test_generation_chat_prompt
Add return type annotation: None
(ANN201)
136-136: Missing return type annotation for public function test_generation_chat_prompt_azure_chat_model_with_streaming
Add return type annotation: None
(ANN201)
467-467: Missing return type annotation for public function test_generation_chat_prompt_anthropic_sonnet_chat_model_streaming
Add return type annotation: None
(ANN201)
488-488: Missing return type annotation for public function test_generation_chat_prompt_openai_chat_model_with_tool_call
Add return type annotation: None
(ANN201)
506-506: Missing return type annotation for public function test_generation_chat_prompt_openai_chat_model_with_tool_call_with_streaming
Add return type annotation: None
(ANN201)
528-528: Missing return type annotation for public function test_generation_chat_prompt_openai_chat_model_with_streaming
Add return type annotation: None
(ANN201)
549-549: Missing return type annotation for public function test_generation_chat_prompt_anthropic_sonnet_chat_model_with_tool_call
Add return type annotation: None
(ANN201)
569-569: Missing return type annotation for public function test_generation_chat_prompt_anthropic_3_sonnet_chat_model
Add return type annotation: None
(ANN201)
588-588: Missing return type annotation for public function test_generation_chat_prompt_anthropic_haiku_chat_model
Add return type annotation: None
(ANN201)
607-607: Missing return type annotation for public function test_generation_chat_prompt_azure_chat_model_old_class
Add return type annotation: None
(ANN201)
627-627: Missing return type annotation for public function test_generation_chat_prompt_chat_model_with_span
Add return type annotation: None
(ANN201)
697-697: Missing return type annotation for public function test_langchain_generation_with_azure_multi_prompt_chain
Add return type annotation: None
(ANN201)
772-772: Missing return type annotation for public function test_custom_result_with_generation_chat_prompt
Add return type annotation: None
(ANN201)
833-833: langchain.schema.runnable.RunnablePassthrough imported but unused
Remove unused import: langchain.schema.runnable.RunnablePassthrough
(F401)
🪛 YAMLlint (1.37.1)
.github/workflows/tests.yml
[warning] 3-3: truthy value should be one of [false, true]
(truthy)
[error] 5-5: too few spaces after comma
(commas)
[error] 7-7: too few spaces after comma
(commas)
[error] 100-100: no new line character at the end of file
(new-line-at-end-of-file)
🔇 Additional comments (7)
maxim/filter_objects.py (1)
78-82: Move integer parsing before boolean handling to fix the logic flow. The current placement of integer parsing after boolean handling creates unreachable code. Since value is already a string at this point, the boolean check will always fail, making lines 78-82 execute unnecessarily.
Apply this diff to fix the logical flow:

```diff
-        if not parsed:
-            try:
-                value = int(value)
-                parsed = True
-            except ValueError:
-                pass
-        if not parsed:
-            if isinstance(value, bool):
+        if not parsed:
+            if isinstance(value, bool):
                 value = bool(value)
                 parsed = True
+        if not parsed:
+            try:
+                value = int(value)
+                parsed = True
+            except ValueError:
+                pass
+        if not parsed:
             elif isinstance(value, int):
```
Likely an incorrect or invalid review comment.
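For orientation, here is a minimal, self-contained sketch of string-value coercion that tries integers before booleans, as the comment title suggests; the helper name and the accepted boolean literals are illustrative and are not taken from filter_objects.py:

```python
def coerce_filter_value(raw: str):
    """Coerce a raw string filter value to int, bool, or leave it as str."""
    # Integers first: "123" -> 123. Trying int before the boolean check
    # sidesteps bool/int ambiguity (bool is a subclass of int in Python).
    try:
        return int(raw)
    except ValueError:
        pass
    # Then booleans, from explicit string literals only.
    lowered = raw.strip().lower()
    if lowered in ("true", "false"):
        return lowered == "true"
    # Anything else stays a plain string.
    return raw


assert coerce_filter_value("123") == 123
assert coerce_filter_value("TRUE") is True
assert coerce_filter_value("Prod") == "Prod"
```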
maxim/tests/test_prompt_chains.py (1)
24-26: Confirm "Prod" case-sensitivity in deployment_var. If the backend treats deployment variables case-sensitively, switching to "Prod" is fine; otherwise it may fail to match.
Do you want me to add a fallback that tries both "Prod" and "prod" to tolerate mixed data? A sketch is shown below.
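A sketch of that fallback, using the QueryBuilder chain these tests already use; the helper name is illustrative, and the loop order assumes "Prod" is the expected casing:

```python
from maxim.models.query_builder import QueryBuilder


def get_prompt_any_case(maxim, prompt_id):
    """Try the 'Prod' deployment variable first, then fall back to 'prod'."""
    for env_value in ("Prod", "prod"):
        rule = (
            QueryBuilder()
            .and_()
            .deployment_var("Environment", env_value)
            .build()
        )
        prompt = maxim.get_prompt(prompt_id, rule)
        if prompt is not None:
            return prompt
    return None
```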
maxim/tests/test_decorators.py (1)
130-176: Validate decorator parameter names (sessionId vs session_id). Confirm the trace decorator accepts sessionId (camelCase). If the API expects session_id, this will silently drop the session linkage.
If needed, I can scan usages and update to session_id consistently.
maxim/tests/test_logger.py (2)
29-30: Good: centralized base URL via env with a sane default. Using MAXIM_BASE_URL with a fallback to https://app.getmaxim.ai is the right pattern and keeps tests endpoint-configurable.
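The pattern in question reduces to a single line (the default mirrors what the tests use):

```python
import os

baseUrl = os.getenv("MAXIM_BASE_URL", "https://app.getmaxim.ai")
```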
52-56: LGTM: tearDown properly cleans up resources. This prevents logger/writer leaks and reduces test flakiness.
maxim/tests/test_portkey.py (1)
72-79: Portkey constructor parameter usage is correct. I've confirmed that in the released portkey-ai Python package (v0.1.1 and later), the Portkey class's __init__ signature uses a provider named argument rather than virtual_key. Passing provider=portkey_virtual_key in your tests and client instantiations is therefore aligned with the published API; no changes are needed for lines 72-79, 205-209, or 399-403.
maxim/tests/test_logger_azure_openai.py (1)
41-45: Request verification: Azure OpenAI API version. Validate that 2024-08-01-preview is still available in your target Azure region/tenant; preview versions get retired.
If needed, update api_version and add it to CI via AZURE_OPENAI_API_VERSION to avoid hard-coding.
```yaml
          PORTKEY_API_KEY: ${{ secrets.PORTKEY_API_KEY }}
          PORTKEY_VIRTUAL_KEY: ${{ secrets.PORTKEY_VIRTUAL_KEY }}
          LLAMAINDEX_API_KEY: ${{ secrets.LLAMAINDEX_API_KEY }}
          TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
          TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
```
🧹 Nitpick (assertive)
Consider removing unused secrets (PORTKEY_VIRTUAL_KEY).
Tests moved to provider-based config; if PORTKEY_VIRTUAL_KEY is unused, omit it from env to reduce exposure surface.
Do you want me to scan the repo for usages and submit a follow-up patch?
🤖 Prompt for AI Agents
.github/workflows/tests.yml lines 51-55: the workflow exposes
PORTKEY_VIRTUAL_KEY which appears unused; search the repository for any
references to PORTKEY_VIRTUAL_KEY (including CI, infra, and provider config
files) and if no usages exist, remove the PORTKEY_VIRTUAL_KEY entry from the env
block in this workflow and delete the corresponding GitHub Actions secret to
reduce exposure; if usages are found, update those callers to use the
provider-based config instead or consolidate to a single secret name, then
adjust the workflow accordingly and run CI locally or via a test branch to
verify no breakage.
```diff
         if isinstance(value, bool):
             value = bool(value)
             parsed = True
-        elif type(value) is int:
+        elif isinstance(value, int):
             value = int(value)
             parsed = True
```
🛠️ Refactor suggestion
Remove redundant type checks after parsing.
Lines 84-89 contain redundant type checks that will never execute because value starts as a string and only gets converted through explicit parsing steps above. The boolean check will always be false since value was never converted to a boolean in the reachable code path, and the integer check is redundant after the integer parsing attempt above.
Apply this diff to remove the redundant checks:
```diff
-        if not parsed:
-            if isinstance(value, bool):
-                value = bool(value)
-                parsed = True
-            elif isinstance(value, int):
-                value = int(value)
-                parsed = True
```
🤖 Prompt for AI Agents
In maxim/filter_objects.py around lines 84 to 89, remove the redundant
isinstance checks for bool and int that follow the explicit parsing steps above;
they are unreachable because value starts as a string and parsing blocks already
handle int/bool conversion. Delete those two conditional branches and ensure
subsequent logic relies on the parsed flag/state set by the actual parsing
attempts instead of re-checking value's type.
```diff
             value = bool(value)
             parsed = True
-        elif type(value) is int:
+        elif isinstance(value, int):
```
🧹 Nitpick (assertive)
Fix inconsistent type checking pattern.
Line 87 uses isinstance(value, int) while the rest of the codebase uses type(value) is int (e.g., line 303). This inconsistency could lead to different behavior since isinstance returns True for boolean values (bool is a subclass of int), while type() is checks for exact type matches.
For consistency with the existing codebase pattern, apply this diff:
```diff
-        elif isinstance(value, int):
+        elif type(value) is int:
```
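The behavioral difference is easy to demonstrate; this is standard Python semantics, not project-specific code:

```python
# bool is a subclass of int, so isinstance treats True as an int...
assert isinstance(True, int) is True
# ...while an exact type check does not.
assert (type(True) is int) is False
```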
🤖 Prompt for AI Agents
In maxim/filter_objects.py around line 87, replace the isinstance(value, int)
check with an exact type comparison (type(value) is int) to match the project
pattern and avoid treating bools as ints; update the conditional so it uses
type(value) is int (ensuring boolean values are not considered integers) and
keep the rest of the branch logic unchanged.
```diff
 from dotenv import load_dotenv
 load_dotenv()

 from flask import Flask, request
-from langchain.chat_models.openai import ChatOpenAI
+from langchain_openai import ChatOpenAI

-from .. import Config, Maxim
-from ..decorators import current_retrieval, current_trace, retrieval, span, trace
-from ..decorators.langchain import langchain_callback, langchain_llm_call
-from ..logger import LoggerConfig
-from ..tests.mock_writer import inject_mock_writer
+from maxim import Maxim
+from maxim.decorators import current_retrieval, current_trace, retrieval, span, trace
+from maxim.decorators.langchain import langchain_callback, langchain_llm_call
+from maxim.tests.mock_writer import inject_mock_writer
```
🛠️ Refactor suggestion
Fix E402: avoid executable statements between import groups.
load_dotenv() before the remaining imports causes “module level import not at top of file”. Move load_dotenv() after all imports and before env reads.
```diff
-from dotenv import load_dotenv
-load_dotenv()
+from dotenv import load_dotenv
@@
-from flask import Flask, request
-from langchain_openai import ChatOpenAI
-
-from maxim import Maxim
-from maxim.decorators import current_retrieval, current_trace, retrieval, span, trace
-from maxim.decorators.langchain import langchain_callback, langchain_llm_call
-from maxim.tests.mock_writer import inject_mock_writer
+from flask import Flask, request
+from langchain_openai import ChatOpenAI
+from maxim import Maxim
+from maxim.decorators import current_retrieval, current_trace, retrieval, span, trace
+from maxim.decorators.langchain import langchain_callback, langchain_llm_call
+from maxim.tests.mock_writer import inject_mock_writer
+
+load_dotenv()
```
🧰 Tools
🪛 Ruff (0.12.2)
9-9: Module level import not at top of file
(E402)
10-10: Module level import not at top of file
(E402)
12-12: Module level import not at top of file
(E402)
13-13: Module level import not at top of file
(E402)
14-14: Module level import not at top of file
(E402)
15-15: Module level import not at top of file
(E402)
🤖 Prompt for AI Agents
In maxim/tests/test_decorators.py around lines 6 to 15, the call to
load_dotenv() is placed between import groups causing an E402 violation; move
the load_dotenv() call so all imports remain at the top of the file, then invoke
load_dotenv() immediately after the import block (and before any environment
reads) to preserve import ordering and satisfy the linter.
```python
        self.maxim = Maxim({ "base_url": baseUrl })
        self.logger = self.maxim.logger()
        self.mock_writer = inject_mock_writer(self.logger)
```
🧹 Nitpick (assertive)
Ensure Maxim can initialize without secret leaks in local runs.
Maxim() will raise if MAXIM_API_KEY is unset. Set a harmless default for local/dev; CI should provide a real key via secrets.
Apply:
```diff
-        self.maxim = Maxim({ "base_url": baseUrl })
+        os.environ.setdefault("MAXIM_API_KEY", "test-api-key")
+        self.maxim = Maxim({"base_url": baseUrl or "https://app.getmaxim.ai"})
         self.logger = self.maxim.logger()
         self.mock_writer = inject_mock_writer(self.logger)
```
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In maxim/tests/test_logger_azure_openai.py around lines 33 to 36, the test fails
locally when MAXIM_API_KEY is unset; set a harmless default API key in the test
process before creating Maxim so local/dev runs don't raise (CI will supply the
real key via secrets). Add a line to set os.environ["MAXIM_API_KEY"] to a
non-sensitive value like "local-test-key" immediately before instantiating
Maxim(), then proceed to create the logger and inject the mock writer.
```diff
         self,
     ):
+        prompt_3_id = os.getenv("MAXIM_PROMPT_3_ID")
         prompt = self.maxim.get_prompt(
-            promptId,
+            prompt_3_id,
             QueryBuilder()
             .and_()
             .deployment_var("Environment", "Prod")
-            .deployment_var("TenantId", 123)
+            .deployment_var("Test number", 123)
             .build(),
         )
         if prompt is None:
             raise Exception("Prompt not found")
-        self.assertEqual(prompt.prompt_id, promptId)
-        self.assertEqual(prompt.version_id, data[env]["prodAndT123PromptVersionId"])
-        self.assertEqual(len(prompt.messages), 2)
+        self.assertEqual(prompt.prompt_id, prompt_3_id)
+        self.assertEqual(prompt.version_id, os.getenv("MAXIM_PROMPT_3_VERSION_2_ID"))
+        self.assertEqual(len(prompt.messages), 1)

     def test_getPrompt_with_deployment_variables_multiselect(
```
🧹 Nitpick (assertive)
Brittle env-dependent tests: add a small helper to require envs and skip when absent.
A helper keeps each test concise and reduces boilerplate.
Add near the top of the file (outside changed lines):
```python
def require_env(self, *keys: str) -> None:
    missing = [k for k in keys if not os.getenv(k)]
    if missing:
        self.skipTest(f"Missing env: {', '.join(missing)}")
```
Then start tests with e.g.:

```python
self.require_env("MAXIM_PROMPT_3_ID", "MAXIM_PROMPT_3_VERSION_2_ID")
```
Also applies to: 134-146, 148-171, 172-198, 216-233, 253-261
🧰 Tools
🪛 Ruff (0.12.2)
99-99: Missing return type annotation for public function test_getPrompt_with_deployment_variables_Environment_prod_and_TenantId_123
Add return type annotation: None
(ANN201)
111-111: Create your own exception
(TRY002)
111-111: Avoid specifying long messages outside the exception class
(TRY003)
112-112: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
113-113: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
114-114: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
🤖 Prompt for AI Agents
In maxim/tests/test_prompts.py around lines 99 to 115, the test relies on
environment variables and can fail when those envs are absent; add a small
helper function near the top of the file (outside the changed lines) named
require_env(self, *keys) that checks os.getenv for each key and calls
self.skipTest(...) if any are missing, then update this test to call
self.require_env("MAXIM_PROMPT_3_ID", "MAXIM_PROMPT_3_VERSION_2_ID") at the
start so it skips instead of failing when envs are missing; apply the same
pattern (call the helper at the start) to the other affected test ranges:
134-146, 148-171, 172-198, 216-233, and 253-261.
```diff
     def test_getFolderUsingId(self):
-        folder = self.maxim.get_folder_by_id(folderID)
+        folder = self.maxim.get_folder_by_id(os.getenv("MAXIM_FOLDER_1_ID"))
         if folder is None:
             raise Exception("Folder not found")
         self.assertEqual(folder.name, "SDK Tests")

     def test_getFolderUsingTags(self):
```
🧹 Nitpick (assertive)
Assertion on folder name is environment-specific; assert on id instead.
Name can change; id is the stable handle.
Apply:
```diff
-        self.assertEqual(folder.name, "SDK Tests")
+        self.assertEqual(folder.id, os.getenv("MAXIM_FOLDER_1_ID"))
```
🧰 Tools
🪛 Ruff (0.12.2)
236-236: Missing return type annotation for public function test_getFolderUsingId
Add return type annotation: None
(ANN201)
239-239: Create your own exception
(TRY002)
239-239: Avoid specifying long messages outside the exception class
(TRY003)
240-240: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
🤖 Prompt for AI Agents
In maxim/tests/test_prompts.py around lines 236 to 241, the test asserts on
folder.name which is environment-specific; change the assertion to verify the
returned folder's id matches the input id. Retrieve the expected id from
os.getenv("MAXIM_FOLDER_1_ID") into a variable, assert folder is not None, and
replace self.assertEqual(folder.name, "SDK Tests") with an assertion that
folder.id (or the attribute used for the identifier) equals the expected id.
```diff
         folders = self.maxim.get_folders(
-            QueryBuilder().and_().tag("test", True).build()
+            QueryBuilder().and_().tag("Testing", True).build()
         )
         self.assertEqual(folders[0].name, "SDK Tests")
         self.assertEqual(len(folders), 1)
```
🧹 Nitpick (assertive)
Add return type to tests and avoid boolean positional for clarity.
Minor style/readability improvements flagged by Ruff (ANN201, FBT003).
Apply:
```diff
-    def test_getFolderUsingTags(self):
+    def test_getFolderUsingTags(self) -> None:
         folders = self.maxim.get_folders(
-            QueryBuilder().and_().tag("Testing", True).build()
+            QueryBuilder().and_().tag(key="Testing", value=True).build()
         )
```
🧰 Tools
🪛 Ruff (0.12.2)
242-242: Missing return type annotation for public function test_getFolderUsingTags
Add return type annotation: None
(ANN201)
244-244: Boolean positional value in function call
(FBT003)
246-246: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
🤖 Prompt for AI Agents
In maxim/tests/test_prompts.py around lines 242 to 246, the test function lacks
an explicit return type and uses a boolean positional argument for clarity;
update the function signature to include an explicit return type (-> None) and
replace the positional boolean in QueryBuilder().tag("Testing", True) with a
keyword argument (e.g., present=True) so the call becomes
QueryBuilder().and_().tag("Testing", present=True).build().
```python
from datetime import datetime
import logging
import os
import unittest
from typing import Dict, Optional
from dotenv import load_dotenv
```
🧹 Nitpick (assertive)
Use timezone-aware timestamps to satisfy DTZ005 and make names stable.
Switch to datetime.now(timezone.utc) across all dynamic names; import timezone.
Apply:
```diff
-from datetime import datetime
+from datetime import datetime, timezone
...
-    name=f"SDK Test run 1 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}", in_workspace_id=workspaceId
+    name=f"SDK Test run 1 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S')}", in_workspace_id=workspaceId
...
-    name=f"SDK Test run 2 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 2 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S')}",
...
-    name=f"SDK Test run Workflow {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run Workflow {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S')}",
...
-    name=f"SDK Test run 3 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 3 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S')}",
...
-    name=f"SDK Test run 4 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 4 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S')}",
...
-    name=f"SDK Test run 5 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 5 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S')}",
...
-    self.maxim.create_test_run(name=f"SDK Test run 6 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}", in_workspace_id=workspaceId)
+    self.maxim.create_test_run(name=f"SDK Test run 6 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S')}", in_workspace_id=workspaceId)
...
-    name=f"SDK Test run 7 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 7 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S')}",
...
-    name=f"SDK Test run 8 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 8 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S')}",
...
-    name=f"SDK Test run 9 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 9 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S')}",
...
-    name=f"SDK Test run 10 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 10 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S')}",
...
-    name=f"SDK Test run 11 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 11 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S')}",
...
-    name=f"SDK Test run 12 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 12 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S')}",
...
-    name=f"SDK Test run 13 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 13 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S')}",
...
-    name=f"SDK Test run 14 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 14 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S')}",
```
Also applies to: 65-65, 84-84, 105-105, 127-127, 162-162, 208-208, 265-265, 313-313, 365-365, 417-417, 486-486, 641-641, 728-728, 845-845
🧰 Tools
🪛 Ruff (0.12.2)
5-5: typing.Dict is deprecated, use dict instead
(UP035)
🤖 Prompt for AI Agents
In maxim/tests/test_test_runs.py around lines 1 to 7 (and also at lines 65, 84,
105, 127, 162, 208, 265, 313, 365, 417, 486, 641, 728, 845), the code uses naive
datetime.now() for dynamic names which breaks DTZ005 and produces unstable
names; import timezone from datetime and replace all datetime.now() calls used
to build timestamps or names with datetime.now(timezone.utc) so timestamps are
timezone-aware and stable across environments.
```python
import os
import unittest
from typing import Dict, Optional
from dotenv import load_dotenv
```
🧹 Nitpick (assertive)
Modernize typing: prefer built-in generics (dict) over typing.Dict.
Aligns with Ruff UP035 and current typing best practices.
Apply:
```diff
-from typing import Dict, Optional
+from typing import Optional
...
-        class MyCustomEvaluator(BaseEvaluator):
-            def evaluate(
-                self, result: LocalEvaluatorResultParameter, data: LocalData
-            ) -> Dict[str, LocalEvaluatorReturn]:
+        class MyCustomEvaluator(BaseEvaluator):
+            def evaluate(
+                self, result: LocalEvaluatorResultParameter, data: LocalData
+            ) -> dict[str, LocalEvaluatorReturn]:
                 return {
                     "abc": LocalEvaluatorReturn(score=1),
                     "cde": LocalEvaluatorReturn(score=False, reasoning="Just chillll"),
                 }
```
Also applies to: 175-183, 596-608, 683-695, 800-812
🧰 Tools
🪛 Ruff (0.12.2)
5-5: typing.Dict is deprecated, use dict instead
(UP035)
🤖 Prompt for AI Agents
In maxim/tests/test_test_runs.py around line 5, replace typing.Dict and
typing.Optional with built-in generics dict and Optional (or remove Optional
from typing if using from typing import Optional) to modernize typing; update
the function/class signatures and return type annotations accordingly. Also make
the same substitutions for the other MyCustomEvaluator classes at lines 175-183,
596-608, 683-695, and 800-812 so all typing.Dict usages become dict (and
typing.Optional becomes Optional or the built-in equivalent) across those
ranges.
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (18)
.github/workflows/tests.yml (2)
118-131: Apply the same artifact/publisher hardening to the second job, for consistency and reliability.

```diff
-      - name: Upload additional test results
-        uses: actions/upload-artifact@v4
-        if: success() || failure()  # run this step even if previous step failed
+      - name: Upload additional test results
+        uses: actions/upload-artifact@<sha-for-v4>
+        if: always()
@@
-      - name: Publish additional test results
-        uses: EnricoMi/publish-unit-test-result-action@v2
-        if: success() || failure()  # run this step even if previous step failed
+      - name: Publish additional test results
+        uses: EnricoMi/publish-unit-test-result-action@<sha-for-v2>
+        if: always()
         with:
           junit_files: "junit/additional-test-results.xml"
           check_name: "Additional Test Results"
```
10-131: Optional: collapse to a matrix strategy for maintainability. This reduces duplication across jobs and keeps Python/version drift in one place.
- Define a matrix: python: [3.9.20, 3.10.15]; job name: Test SDK (Python ${{ matrix.python }}).
- Share steps (checkout, uv, install dev with --frozen, mkdir junit, pytest with conditional selection).
- Use include/exclude to run CrewAI-only tests on 3.10 matrix entry.
maxim/tests/test_add_dataset_entries_integration.py (4)
81-81: Align attachments with the log message and remove the external URL dependency. You create a markdown file but don't attach it; instead you attach a remote image URL. Prefer deterministic, local inputs.
Apply this diff:

```diff
-    print("\n📝 Testing dictionary input with real API (including 3 attachments: image, PDF, and markdown)...")
+    print("\n📝 Testing dictionary input with real API (including 3 attachments: image, audio, and markdown)...")
@@
-    context_file3 = UrlAttachment(
-        url="https://images.pexels.com/photos/842711/pexels-photo-842711.jpeg",
-        name="background.jpeg"
-    )
+    context_file3 = FileAttachment(
+        path=str(temp_file_path3),
+        name="additional_context.md",
+        mime_type="text/markdown"
+    )
```
Also applies to: 95-98, 111-115
91-93: Use unittest assertions instead of bare asserts. Bare asserts can be optimized out; unittest assertions give clearer failure output.

```diff
-        assert real_image_path.exists(), "Image file not found"
-        assert real_audio_path.exists(), "Audio file not found"
+        self.assertTrue(real_image_path.exists(), "Image file not found")
+        self.assertTrue(real_audio_path.exists(), "Audio file not found")
```
150-150: Assert the exact number of created IDs rather than > 0. This strengthens validation and would catch partial failures.

```diff
-        self.assertTrue(len(response["data"]["ids"]) > 0)
+        self.assertEqual(len(response["data"]["ids"]), 2)
```
Apply at each occurrence shown in the selected ranges.
Also applies to: 228-228, 303-303, 382-382
374-376: Isolate integration runs to a per-run dataset to prevent row-count drift. The CI error "Total rows mismatch … expected 4, got 2" suggests multiple jobs sharing a dataset. Use an ephemeral dataset per run (create in setUpClass, delete in tearDownClass), or gate on a unique dataset ID via environment/secrets. This avoids interference across PRs and reruns.
If the SDK supports dataset lifecycle, I can propose a small fixture to create and clean up a temporary dataset for these tests; a rough sketch follows. Want me to draft a full patch?
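A rough, runnable sketch of that fixture; create_dataset/delete_dataset are hypothetical names standing in for whatever dataset-lifecycle API the SDK actually exposes, with a stub client so the example executes as written:

```python
import unittest
from uuid import uuid4


class FakeDatasetAPI:
    """Stand-in for the real client; create_dataset/delete_dataset are
    illustrative names, not confirmed Maxim SDK methods."""

    def __init__(self) -> None:
        self.datasets = {}

    def create_dataset(self, name: str) -> str:
        dataset_id = uuid4().hex
        self.datasets[dataset_id] = name
        return dataset_id

    def delete_dataset(self, dataset_id: str) -> None:
        self.datasets.pop(dataset_id, None)


class DatasetEntriesIntegrationTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls) -> None:
        cls.api = FakeDatasetAPI()  # swap in the real client in CI
        cls.dataset_id = cls.api.create_dataset(name=f"sdk-it-{uuid4().hex[:8]}")

    @classmethod
    def tearDownClass(cls) -> None:
        cls.api.delete_dataset(cls.dataset_id)

    def test_add_entries_targets_private_dataset(self) -> None:
        # Every assertion in the suite runs against this run's own dataset,
        # so concurrent CI jobs cannot skew each other's row counts.
        self.assertIn(self.dataset_id, self.api.datasets)
```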
maxim/tests/test_together.py (7)
24-36: Skip instead of raising; also gate on MAXIM_API_KEY and provide a deterministic logger id. Avoids CI failures without secrets and prevents logger() from raising on a missing repo id.

```diff
@@ def setUp(self): @@
-        if not togetherApiKey:
-            raise ValueError("TOGETHER_API_KEY environment variable is not set")
+        if not togetherApiKey:
+            self.skipTest("TOGETHER_API_KEY not set")
+        if not apiKey:
+            self.skipTest("MAXIM_API_KEY not set")
@@
-        self.logger = Maxim({ "base_url": baseUrl }).logger()
+        self.logger = Maxim({"base_url": baseUrl}).logger(
+            {"id": (repoId or f"test-repo-{uuid4()}")}
+        )
```
Also applies to: 32-36
64-87: Harden the test: wrap the network call and skip on failure. Network flakiness should not fail the suite.

```diff
     def test_non_streaming_with_system_message(self):
@@
-        res: ChatCompletionResponse = self.client.chat.completions.create(
+        try:
+            res: ChatCompletionResponse = self.client.chat.completions.create(
@@
-        )
+            )
@@
-        # Verify response structure
-        self.assertIsNotNone(res)
-        self.assertTrue(hasattr(res, 'choices'))
-        self.assertTrue(len(res.choices) > 0)
-        self.assertIsNotNone(res.choices[0].message.content)
+            self.assertIsNotNone(res)
+            self.assertTrue(hasattr(res, 'choices'))
+            self.assertTrue(len(res.choices) > 0)
+            self.assertIsNotNone(res.choices[0].message.content)
+        except Exception as e:
+            self.skipTest(f"Error: {e}")
```
88-111: Add try/except to the streaming test to avoid hard failures without creds/connectivity.

```diff
     def test_streaming_chat_completion(self):
@@
-        stream = self.client.chat.completions.create(
+        try:
+            stream = self.client.chat.completions.create(
@@
-        )
+            )
@@
-        for chunk in stream:
+            for chunk in stream:
                 ...
@@
-        self.assertGreater(chunk_count, 0, "Expected to receive streaming chunks")
-        self.assertGreater(len(full_response), 0, "Expected non-empty response")
+            self.assertGreater(chunk_count, 0, "Expected to receive streaming chunks")
+            self.assertGreater(len(full_response), 0, "Expected non-empty response")
+        except Exception as e:
+            self.skipTest(f"Streaming error: {e}")
```
111-136: Ditto for the streaming-with-system test.

```diff
-        stream = self.client.chat.completions.create(
+        try:
+            stream = self.client.chat.completions.create(
@@
-        )
+            )
@@
-        for chunk in stream:
+            for chunk in stream:
                 ...
@@
-        self.assertGreater(chunk_count, 0, "Expected to receive streaming chunks")
-        self.assertGreater(len(full_response), 0, "Expected non-empty response")
+            self.assertGreater(chunk_count, 0, "Expected to receive streaming chunks")
+            self.assertGreater(len(full_response), 0, "Expected non-empty response")
+        except Exception as e:
+            self.skipTest(f"Streaming error: {e}")
```
630-632: Clean up the logger to release resources.

```diff
-    def tearDown(self) -> None:
-        pass
+    def tearDown(self) -> None:
+        if hasattr(self, "logger"):
+            self.logger.cleanup()
```
636-649: Async setup: also gate on MAXIM_API_KEY and set the logger id.

```diff
@@ async def asyncSetUp(self): @@
-        if not togetherApiKey:
-            self.skipTest("TOGETHER_API_KEY environment variable is not set")
+        if not togetherApiKey:
+            self.skipTest("TOGETHER_API_KEY environment variable is not set")
+        if not apiKey:
+            self.skipTest("MAXIM_API_KEY not set")
@@
-        self.logger = Maxim({ "base_url": baseUrl }).logger()
+        self.logger = Maxim({"base_url": baseUrl}).logger(
+            {"id": (repoId or f"test-repo-{uuid4()}")}
+        )
```
Also applies to: 644-649
753-762: Make the invalid-model test assertful.

```diff
-        try:
-            res = await self.async_client.chat.completions.create(
+        with self.assertRaises(Exception):
+            await self.async_client.chat.completions.create(
                 model="invalid-model-name",
                 messages=[{"role": "user", "content": "Hello"}],
                 max_tokens=10
             )
-        except Exception as e:
-            print(e)
```
maxim/tests/test_prompts.py (2)
23-31: Skip the suite when MAXIM_API_KEY is missing, to avoid 403s.

```diff
@@ def setUp(self): @@
-        self.maxim = Maxim(
+        if not apiKey:
+            self.skipTest("MAXIM_API_KEY not set")
+        self.maxim = Maxim(
             {
                 "api_key": apiKey,
                 "debug": True,
                 "prompt_management": True,
                 "base_url": baseUrl
             }
         )
```
216-233: The print list-comprehension nit is fine to keep; consider list() if you want to silence Ruff C416.
maxim/tests/test_test_runs.py (3)
41-49: Skip integration tests when mandatory envs are absent (or toggle via RUN_MAXIM_INTEGRATION_TESTS).

```diff
@@ def setUp(self): @@
-        config = Config(
+        if os.getenv("RUN_MAXIM_INTEGRATION_TESTS") not in ("1", "true", "True"):
+            self.skipTest("Integration tests disabled (set RUN_MAXIM_INTEGRATION_TESTS=1 to enable)")
+        missing = [name for name, val in [
+            ("MAXIM_API_KEY", apiKey),
+            ("MAXIM_WORKSPACE_ID", workspaceId),
+            ("MAXIM_DATASET_ID", datasetId),
+        ] if not val]
+        if missing:
+            self.skipTest(f"Missing env: {', '.join(missing)}")
+        config = Config(
             api_key=apiKey, base_url=baseUrl, debug=True, raise_exceptions=True
         )
```
62-69: Replace naive datetime.now() with timezone-aware now(timezone.utc).

```diff
-    name=f"SDK Test run 1 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}", in_workspace_id=workspaceId
+    name=f"SDK Test run 1 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S %Z')}", in_workspace_id=workspaceId
@@
-    name=f"SDK Test run 2 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 2 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S %Z')}",
@@
-    name=f"SDK Test run Workflow {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run Workflow {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S %Z')}",
@@
-    name=f"SDK Test run 3 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 3 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S %Z')}",
@@
-    name=f"SDK Test run 4 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 4 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S %Z')}",
@@
-    name=f"SDK Test run 5 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 5 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S %Z')}",
@@
-    self.maxim.create_test_run(name=f"SDK Test run 6 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}", in_workspace_id=workspaceId)
+    self.maxim.create_test_run(name=f"SDK Test run 6 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S %Z')}", in_workspace_id=workspaceId)
@@
-    name=f"SDK Test run 7 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 7 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S %Z')}",
@@
-    name=f"SDK Test run 8 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 8 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S %Z')}",
@@
-    name=f"SDK Test run 9 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 9 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S %Z')}",
@@
-    name=f"SDK Test run 10 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 10 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S %Z')}",
@@
-    name=f"SDK Test run 11 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 11 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S %Z')}",
@@
-    name=f"SDK Test run 12 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 12 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S %Z')}",
@@
-    name=f"SDK Test run 13 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 13 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S %Z')}",
@@
-    name=f"SDK Test run 14 {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+    name=f"SDK Test run 14 {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S %Z')}",
```
Also applies to: 81-93, 102-112, 124-132, 159-171, 205-233, 262-275, 297-308, 349-367, 401-414, 470-482, 538-556, 625-656, 712-745, 829-861
172-181: Return type: use dict[str, LocalEvaluatorReturn].

```diff
-            ) -> Dict[str, LocalEvaluatorReturn]:
+            ) -> dict[str, LocalEvaluatorReturn]:
```
Also applies to: 495-506, 582-594, 786-797
♻️ Duplicate comments (19)
.github/workflows/tests.yml (8)
3-7: Quote "on" and fix branch-list spacing (yamllint). Prevents YAML 1.1 boolean misparse and lint errors.

```diff
-on:
+"on":
   push:
-    branches: [main,beta]
+    branches: [main, beta]
   pull_request:
-    branches: [main,beta]
+    branches: [main, beta]
```
11-15: Fix job "if" for push events and skip forked PRs without secrets; add timeout. Avoids null access on push and prevents secret-based steps from running on forks.

```diff
   test-main:
     name: Test Main SDK (Python 3.9)
     runs-on: ubuntu-latest
-    if: github.event.pull_request.draft == false
+    if: ${{ github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false) }}
+    timeout-minutes: 30
```
94-98: Mirror gating/timeout on additional-tests (push-safe, fork-safe). Keep CI green for external PRs.

```diff
   additional-tests:
     name: Test Additional Integrations (Python 3.10)
     runs-on: ubuntu-latest
-    if: github.event.pull_request.draft == false
+    if: ${{ github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false) }}
+    timeout-minutes: 30
+    permissions:
+      contents: read
+      checks: write
+      pull-requests: write
```
35-38: Install dev group only and lock resolution. Improves speed and reproducibility.

```diff
-      - name: Install dependencies (dev only)
-        run: |
-          uv sync --python 3.9
+      - name: Install dependencies (dev only)
+        run: |
+          uv sync --group dev --python 3.9.20 --frozen
```
1-3: Add top-level least-privilege permissions and concurrency cancellation. Harden token scope and auto-cancel superseded runs.

```diff
 name: Run Tests
+"permissions":
+  contents: read
+
+"concurrency":
+  group: tests-${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
 "on":
```
100-106: Pin actions/setup to SHAs and uv version. Same hardening as the main job.

```diff
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@<sha-for-v4>
@@
-      - name: Install uv
-        uses: astral-sh/setup-uv@v4
-        with:
-          version: "latest"
+      - name: Install uv
+        uses: astral-sh/setup-uv@<sha-for-v4>
+        with:
+          version: "0.5.x"
```
17-23: Pin GitHub Actions and uv to immutable SHAs. Replace floating tags with the exact commit SHAs in .github/workflows/tests.yml:

```diff
File: .github/workflows/tests.yml
@@ -17,7 +17,7 @@
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955
@@ -21,7 +21,7 @@
-      - name: Install uv
-        uses: astral-sh/setup-uv@v4
-        with:
-          version: "latest"
+      - name: Install uv
+        uses: astral-sh/setup-uv@38f3f104447c67c051c4a08e39b64a148898af3a
+        with:
+          version: "0.5.x"
```
24-25: Pin Python version to requires-python floor (3.9.20)
pyproject.toml defines requires-python = ">=3.9.20" (line 10), so CI must pin Python 3.9.20; a bare "3.9" install can resolve to an older patch release below that floor.

```diff
-      - name: Set up Python 3.9
-        run: uv python install 3.9
+      - name: Set up Python 3.9
+        run: uv python install 3.9.20
```

maxim/tests/test_anthropic.py (2)
121-127: Apply the same guard/id fallback in TestAnthropic.setUp

```diff
@@ def setUp(self):
         # This is a hack to ensure that the Maxim instance is not cached
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
-        self.logger = Maxim({"base_url": baseUrl}).logger()
+        if not apiKey or not anthropicApiKey:
+            self.skipTest("MAXIM_API_KEY and/or ANTHROPIC_API_KEY not set")
+        self.logger = Maxim({"base_url": baseUrl}).logger(
+            {"id": (repoId or f"test-repo-{uuid4()}")}
+        )
         self.mock_writer = inject_mock_writer(self.logger)
```
26-32: Stabilize logger creation and skip when secrets missing

```diff
@@ def setUp(self):
         if hasattr(Maxim, "_instance"):
             delattr(Maxim, "_instance")
-        # Create logger and patch its writer
-        self.logger = Maxim({"base_url": baseUrl}).logger()
+        # Create logger and patch its writer
+        if not apiKey or not anthropicApiKey:
+            self.skipTest("MAXIM_API_KEY and/or ANTHROPIC_API_KEY not set")
+        self.logger = Maxim({"base_url": baseUrl}).logger(
+            {"id": (repoId or f"test-repo-{uuid4()}")}
+        )
         self.mock_writer = inject_mock_writer(self.logger)
```

maxim/tests/test_prompts.py (7)
61-79: Gate multiselect variant behind env and relax brittle content assert

```diff
@@
-    def test_getPrompt_with_multiselect_deployment_variables_and_execute(self):
+    def test_getPrompt_with_multiselect_deployment_variables_and_execute(self):
+        self.require_env("MAXIM_PROMPT_1_ID")
@@
-        self.assertEqual(prompt.model, "gpt-4o")
+        self.assertTrue(prompt.model)
@@
-            "You are a helpful assistant. Answer the user query.",
+            "You are a helpful assistant. Answer the user query.",
```

Also applies to: 66-79
148-171: Cache test: add annotation and avoid raising for missing prompt

```diff
@@
-    def test_if_prompt_cache_works_fine(self):
+    def test_if_prompt_cache_works_fine(self) -> None:
@@
-        if prompt is None:
-            raise Exception("Prompt not found")
+        if prompt is None:
+            self.skipTest("Prompt not found")
@@
-        if prompt2 is None:
-            raise Exception("Prompt2 not found")
+        if prompt2 is None:
+            self.skipTest("Prompt2 not found")
```

Also applies to: 153-166, 164-166
41-53: Gate env-dependent prompt assertions; prefer invariants over tenant-specific values. Add a small helper near the top (outside the diffed lines):
```python
def require_env(self, *keys: str) -> None:
    missing = [k for k in keys if not os.getenv(k)]
    if missing:
        self.skipTest(f"Missing env: {', '.join(missing)}")
```

And adjust this test:
```diff
@@
-    def test_get_prompt_with_deployment_variables(self):
+    def test_get_prompt_with_deployment_variables(self):
+        self.require_env("MAXIM_PROMPT_1_ID", "MAXIM_PROMPT_1_VERSION_1_ID")
@@
-        self.assertEqual(prompt.version_id, os.getenv("MAXIM_PROMPT_1_VERSION_1_ID"))
-        self.assertEqual(prompt.model, "gpt-4o")
-        self.assertEqual(prompt.provider, "openai")
-        self.assertEqual(prompt.messages[0].content, "You are a helpful assistant. You talk like Chandler from Friends.")
+        self.assertEqual(prompt.version_id, os.getenv("MAXIM_PROMPT_1_VERSION_1_ID"))
+        self.assertTrue(prompt.model)     # avoid env-specific exact value
+        self.assertTrue(prompt.provider)  # avoid env-specific exact value
+        self.assertTrue(prompt.messages[0].content)
```

Also applies to: 49-53
99-115: Guard prompt_3 path with env checks and avoid hard raise

```diff
@@
-    ):
-        prompt_3_id = os.getenv("MAXIM_PROMPT_3_ID")
+    ):
+        self.require_env("MAXIM_PROMPT_3_ID", "MAXIM_PROMPT_3_VERSION_2_ID")
+        prompt_3_id = os.getenv("MAXIM_PROMPT_3_ID")
@@
-        if prompt is None:
-            raise Exception("Prompt not found")
+        if prompt is None:
+            self.skipTest("Configured prompt not found")
@@
-        self.assertEqual(len(prompt.messages), 1)
+        self.assertEqual(len(prompt.messages), 1)
```
131-133: Add return annotation; gate env and avoid hard raises

```diff
@@
-        self.assertEqual(prompt.version_id, os.getenv("MAXIM_PROMPT_1_VERSION_4_ID"))
+        self.assertEqual(prompt.version_id, os.getenv("MAXIM_PROMPT_1_VERSION_4_ID"))
@@
-    def test_getPrompt_with_deployment_variables_multiselect_includes(
+    def test_getPrompt_with_deployment_variables_multiselect_includes(
         self,
     ):
+        self.require_env("MAXIM_PROMPT_1_ID", "MAXIM_PROMPT_1_VERSION_5_ID")
@@
-        if prompt is None:
-            raise Exception("Prompt not found")
+        if prompt is None:
+            self.skipTest("Prompt not found")
```

Also applies to: 134-147
236-241: Folder test: guard env, skip on lookup miss, assert by id (not name)

```diff
@@
-    def test_getFolderUsingId(self):
-        folder = self.maxim.get_folder_by_id(os.getenv("MAXIM_FOLDER_1_ID"))
+    def test_getFolderUsingId(self) -> None:
+        folder_id = os.getenv("MAXIM_FOLDER_1_ID")
+        if not folder_id:
+            self.skipTest("MAXIM_FOLDER_1_ID is not set")
+        folder = self.maxim.get_folder_by_id(folder_id)
         if folder is None:
-            raise Exception("Folder not found")
-        self.assertEqual(folder.name, "SDK Tests")
+            self.skipTest("Configured folder id not found")
+        self.assertEqual(folder.id, folder_id)
```
242-246: Boolean positional argument in tag(): prefer keyword for clarity (FBT003)

```diff
@@
-    def test_getFolderUsingTags(self):
+    def test_getFolderUsingTags(self) -> None:
         folders = self.maxim.get_folders(
-            QueryBuilder().and_().tag("Testing", True).build()
+            QueryBuilder().and_().tag(key="Testing", value=True).build()
         )
```

Also applies to: 244-245
maxim/tests/test_test_runs.py (2)
1-1: Use timezone-aware timestamps

```diff
-from datetime import datetime
+from datetime import datetime, timezone
```
5-5: Modernize typing: prefer built-in generics

```diff
-from typing import Dict, Optional
+from typing import Optional
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (6)
- .github/workflows/tests.yml (1 hunks)
- maxim/tests/test_add_dataset_entries_integration.py (9 hunks)
- maxim/tests/test_anthropic.py (2 hunks)
- maxim/tests/test_prompts.py (12 hunks)
- maxim/tests/test_test_runs.py (19 hunks)
- maxim/tests/test_together.py (7 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
maxim/tests/test_test_runs.py (3)
- maxim/test_runs/test_run_builder.py (7): with_data (319-332), with_concurrency (512-523), with_evaluators (334-351), with_logger (525-536), with_data_structure (305-317), with_prompt_version_id (411-443), with_prompt_chain_version_id (445-477)
- maxim/logger/components/generation.py (1): data (795-810)
- maxim/apis/maxim_apis.py (1): create_test_run (1157-1247)
🪛 Ruff (0.12.2)
maxim/tests/test_together.py
740-740: Use a regular assert instead of unittest-style assertIsNotNone
Replace assertIsNotNone(...) with assert ...
(PT009)
741-741: Use a regular assert instead of unittest-style assertIsNotNone
Replace assertIsNotNone(...) with assert ...
(PT009)
743-743: Use a regular assert instead of unittest-style assertIsNotNone
Replace assertIsNotNone(...) with assert ...
(PT009)
744-744: Use a regular assert instead of unittest-style assertIsNotNone
Replace assertIsNotNone(...) with assert ...
(PT009)
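For readers unfamiliar with PT009, the rewrite Ruff is asking for looks roughly like this (a minimal, self-contained sketch; the asserted value is illustrative, not taken from test_together.py):

```python
import unittest


class UnittestStyle(unittest.TestCase):
    def test_response_id(self) -> None:
        response_id = "resp-123"  # illustrative value
        # unittest-style assertion; Ruff flags this as PT009 in pytest-first codebases
        self.assertIsNotNone(response_id)


def test_response_id_pytest_style() -> None:
    response_id = "resp-123"  # illustrative value
    # the plain assert that PT009 suggests instead
    assert response_id is not None
```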
maxim/tests/test_prompts.py
47-47: Create your own exception
(TRY002)
47-47: Avoid specifying long messages outside the exception class
(TRY003)
48-48: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
49-49: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
50-50: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
51-51: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
52-52: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
71-71: Create your own exception
(TRY002)
71-71: Avoid specifying long messages outside the exception class
(TRY003)
73-73: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
74-74: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
75-75: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
76-76: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
111-111: Create your own exception
(TRY002)
111-111: Avoid specifying long messages outside the exception class
(TRY003)
112-112: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
113-113: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
114-114: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
131-131: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
132-132: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
134-134: Missing return type annotation for public function test_getPrompt_with_deployment_variables_multiselect_includes
Add return type annotation: None
(ANN201)
142-142: Create your own exception
(TRY002)
142-142: Avoid specifying long messages outside the exception class
(TRY003)
143-143: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
144-144: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
148-148: Missing return type annotation for public function test_if_prompt_cache_works_fine
Add return type annotation: None
(ANN201)
234-234: Unnecessary list comprehension (rewrite using list())
Rewrite using list()
(C416)
236-236: Missing return type annotation for public function test_getFolderUsingId
Add return type annotation: None
(ANN201)
239-239: Create your own exception
(TRY002)
239-239: Avoid specifying long messages outside the exception class
(TRY003)
240-240: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
242-242: Missing return type annotation for public function test_getFolderUsingTags
Add return type annotation: None
(ANN201)
244-244: Boolean positional value in function call
(FBT003)
maxim/tests/test_test_runs.py
5-5: typing.Dict is deprecated, use dict instead
(UP035)
63-63: datetime.datetime.now() called without a tz argument
(DTZ005)
82-82: datetime.datetime.now() called without a tz argument
(DTZ005)
103-103: datetime.datetime.now() called without a tz argument
(DTZ005)
125-125: datetime.datetime.now() called without a tz argument
(DTZ005)
160-160: datetime.datetime.now() called without a tz argument
(DTZ005)
206-206: datetime.datetime.now() called without a tz argument
(DTZ005)
263-263: datetime.datetime.now() called without a tz argument
(DTZ005)
299-299: datetime.datetime.now() called without a tz argument
(DTZ005)
351-351: datetime.datetime.now() called without a tz argument
(DTZ005)
403-403: datetime.datetime.now() called without a tz argument
(DTZ005)
472-472: datetime.datetime.now() called without a tz argument
(DTZ005)
540-540: datetime.datetime.now() called without a tz argument
(DTZ005)
627-627: datetime.datetime.now() called without a tz argument
(DTZ005)
714-714: datetime.datetime.now() called without a tz argument
(DTZ005)
831-831: datetime.datetime.now() called without a tz argument
(DTZ005)
🪛 YAMLlint (1.37.1)
.github/workflows/tests.yml
[warning] 3-3: truthy value should be one of [false, true]
(truthy)
[error] 5-5: too few spaces after comma
(commas)
[error] 7-7: too few spaces after comma
(commas)
🪛 GitHub Actions: Run Tests
maxim/tests/test_prompts.py
[error] 1-1: HTTP 403 Forbidden when prompting (GET /api/sdk/v4/prompts) or running prompts (POST /api/sdk/v4/prompts/run).
maxim/tests/test_add_dataset_entries_integration.py
[error] 374-374: Total rows mismatch in dataset total-rows API call (expected 4 rows, got 2).
🔇 Additional comments (2)
.github/workflows/tests.yml (1)
53-55: Verification script run for this range:

```shell
sed -n '1,200p' maxim/tests/test_portkey.py
```

maxim/tests/test_add_dataset_entries_integration.py (1)

121-127: All instances updated from "Expected Output" to "Test". Ripgrep search found zero occurrences of "Expected Output" and tests consistently reference "Test".
.github/workflows/tests.yml
Outdated
```yaml
      - name: Run main tests (excluding CrewAI)
        env:
          MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }}
          MAXIM_BASE_URL: ${{ secrets.MAXIM_BASE_URL }}
          MAXIM_DATASET_ID: ${{ secrets.MAXIM_DATASET_ID }}
          MAXIM_WORKSPACE_ID: ${{ secrets.MAXIM_WORKSPACE_ID }}
          MAXIM_LOG_REPO_ID: ${{ secrets.MAXIM_LOG_REPO_ID }}
          MAXIM_PROMPT_CHAIN_VERSION_ID: ${{ secrets.MAXIM_PROMPT_CHAIN_VERSION_ID }}
          MAXIM_ASSISTANT_PROMPT_VERSION_ID: ${{ secrets.MAXIM_ASSISTANT_PROMPT_VERSION_ID }}
          MAXIM_ASSISTANT_PROMPT_CHAIN_VERSION_ID: ${{ secrets.MAXIM_ASSISTANT_PROMPT_CHAIN_VERSION_ID }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          AZURE_OPENAI_ENDPOINT: ${{ secrets.AZURE_OPENAI_ENDPOINT }}
          AZURE_OPENAI_KEY: ${{ secrets.AZURE_OPENAI_KEY }}
          PORTKEY_API_KEY: ${{ secrets.PORTKEY_API_KEY }}
          PORTKEY_VIRTUAL_KEY: ${{ secrets.PORTKEY_VIRTUAL_KEY }}
          LLAMAINDEX_API_KEY: ${{ secrets.LLAMAINDEX_API_KEY }}
          TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
          TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
          FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
          GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
          GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
          MAXIM_PROMPT_1_ID: ${{ secrets.MAXIM_PROMPT_1_ID }}
          MAXIM_PROMPT_2_ID: ${{ secrets.MAXIM_PROMPT_2_ID }}
          MAXIM_PROMPT_3_ID: ${{ secrets.MAXIM_PROMPT_3_ID }}
          MAXIM_PROMPT_1_VERSION_1_ID: ${{ secrets.MAXIM_PROMPT_1_VERSION_1_ID }}
          MAXIM_PROMPT_1_VERSION_3_ID: ${{ secrets.MAXIM_PROMPT_1_VERSION_3_ID }}
          MAXIM_PROMPT_1_VERSION_4_ID: ${{ secrets.MAXIM_PROMPT_1_VERSION_4_ID }}
          MAXIM_PROMPT_1_VERSION_5_ID: ${{ secrets.MAXIM_PROMPT_1_VERSION_5_ID }}
          MAXIM_FOLDER_1_ID: ${{ secrets.MAXIM_FOLDER_1_ID }}
          MAXIM_FOLDER_2_ID: ${{ secrets.MAXIM_FOLDER_2_ID }}
          MAXIM_TEST_RUN_PROMPT_VERSION_ID: ${{ secrets.MAXIM_TEST_RUN_PROMPT_VERSION_ID }}
          MAXIM_TEST_RUN_ASSISTANT_PROMPT_VERSION_ID: ${{ secrets.MAXIM_TEST_RUN_ASSISTANT_PROMPT_VERSION_ID }}
          MAXIM_TEST_RUN_ASSISTANT_PROMPT_CHAIN_VERSION_ID: ${{ secrets.MAXIM_TEST_RUN_ASSISTANT_PROMPT_CHAIN_VERSION_ID }}
        run: |
          uv run pytest maxim/tests/ -v --ignore=maxim/tests/test_crewai.py --ignore=maxim/tests/test_livekit.py --ignore=maxim/tests/test_livekit_realtime.py --ignore=maxim/tests/test_agno.py --junitxml=junit/main-test-results.xml
```
🧹 Nitpick (assertive)
Ensure junit dir exists; keep artifact steps from failing.
pytest won’t create parent dirs for --junitxml.
- - name: Run main tests (excluding CrewAI)
+ - name: Run main tests (excluding CrewAI)
env:
@@
- run: |
- uv run pytest maxim/tests/ -v --ignore=maxim/tests/test_crewai.py --ignore=maxim/tests/test_livekit.py --ignore=maxim/tests/test_livekit_realtime.py --ignore=maxim/tests/test_agno.py --junitxml=junit/main-test-results.xml
+ run: |
+ mkdir -p junit
+ uv run pytest maxim/tests/ -v \
+ --ignore=maxim/tests/test_crewai.py \
+ --ignore=maxim/tests/test_livekit.py \
+ --ignore=maxim/tests/test_livekit_realtime.py \
+ --ignore=maxim/tests/test_agno.py \
+ --junitxml=junit/main-test-results.xml

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
      - name: Run main tests (excluding CrewAI)
        env:
          MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }}
          MAXIM_BASE_URL: ${{ secrets.MAXIM_BASE_URL }}
          MAXIM_DATASET_ID: ${{ secrets.MAXIM_DATASET_ID }}
          MAXIM_WORKSPACE_ID: ${{ secrets.MAXIM_WORKSPACE_ID }}
          MAXIM_LOG_REPO_ID: ${{ secrets.MAXIM_LOG_REPO_ID }}
          MAXIM_PROMPT_CHAIN_VERSION_ID: ${{ secrets.MAXIM_PROMPT_CHAIN_VERSION_ID }}
          MAXIM_ASSISTANT_PROMPT_VERSION_ID: ${{ secrets.MAXIM_ASSISTANT_PROMPT_VERSION_ID }}
          MAXIM_ASSISTANT_PROMPT_CHAIN_VERSION_ID: ${{ secrets.MAXIM_ASSISTANT_PROMPT_CHAIN_VERSION_ID }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          AZURE_OPENAI_ENDPOINT: ${{ secrets.AZURE_OPENAI_ENDPOINT }}
          AZURE_OPENAI_KEY: ${{ secrets.AZURE_OPENAI_KEY }}
          PORTKEY_API_KEY: ${{ secrets.PORTKEY_API_KEY }}
          PORTKEY_VIRTUAL_KEY: ${{ secrets.PORTKEY_VIRTUAL_KEY }}
          LLAMAINDEX_API_KEY: ${{ secrets.LLAMAINDEX_API_KEY }}
          TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
          TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
          FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
          GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
          GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
          MAXIM_PROMPT_1_ID: ${{ secrets.MAXIM_PROMPT_1_ID }}
          MAXIM_PROMPT_2_ID: ${{ secrets.MAXIM_PROMPT_2_ID }}
          MAXIM_PROMPT_3_ID: ${{ secrets.MAXIM_PROMPT_3_ID }}
          MAXIM_PROMPT_1_VERSION_1_ID: ${{ secrets.MAXIM_PROMPT_1_VERSION_1_ID }}
          MAXIM_PROMPT_1_VERSION_3_ID: ${{ secrets.MAXIM_PROMPT_1_VERSION_3_ID }}
          MAXIM_PROMPT_1_VERSION_4_ID: ${{ secrets.MAXIM_PROMPT_1_VERSION_4_ID }}
          MAXIM_PROMPT_1_VERSION_5_ID: ${{ secrets.MAXIM_PROMPT_1_VERSION_5_ID }}
          MAXIM_FOLDER_1_ID: ${{ secrets.MAXIM_FOLDER_1_ID }}
          MAXIM_FOLDER_2_ID: ${{ secrets.MAXIM_FOLDER_2_ID }}
          MAXIM_TEST_RUN_PROMPT_VERSION_ID: ${{ secrets.MAXIM_TEST_RUN_PROMPT_VERSION_ID }}
          MAXIM_TEST_RUN_ASSISTANT_PROMPT_VERSION_ID: ${{ secrets.MAXIM_TEST_RUN_ASSISTANT_PROMPT_VERSION_ID }}
          MAXIM_TEST_RUN_ASSISTANT_PROMPT_CHAIN_VERSION_ID: ${{ secrets.MAXIM_TEST_RUN_ASSISTANT_PROMPT_CHAIN_VERSION_ID }}
        run: |
          mkdir -p junit
          uv run pytest maxim/tests/ -v \
            --ignore=maxim/tests/test_crewai.py \
            --ignore=maxim/tests/test_livekit.py \
            --ignore=maxim/tests/test_livekit_realtime.py \
            --ignore=maxim/tests/test_agno.py \
            --junitxml=junit/main-test-results.xml
```
🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 39 to 74: the pytest command writes
--junitxml=junit/main-test-results.xml but pytest won't create the parent junit
directory, which can make later artifact upload fail; before running pytest,
create the directory (e.g., run mkdir -p junit) or otherwise ensure the junit
path exists so the junit file can be written and the artifact steps won't fail.
```yaml
      - name: Upload main test results
        uses: actions/upload-artifact@v4
        if: success() || failure()  # run this step even if previous step failed
        with:
          name: main-test-results
          path: junit/main-test-results.xml

      - name: Publish main test results
        uses: EnricoMi/publish-unit-test-result-action@v2
        if: success() || failure()  # run this step even if previous step failed
        with:
          junit_files: "junit/main-test-results.xml"
          check_name: "Main Test Results"
```
|
|
🧹 Nitpick (assertive)
Use always() for artifact publishing and pin actions to SHAs; grant checks: write.
Ensures publishing even on cancelled and satisfies permission needs.
- - name: Upload main test results
- uses: actions/upload-artifact@v4
- if: success() || failure() # run this step even if previous step failed
+ - name: Upload main test results
+ uses: actions/upload-artifact@<sha-for-v4>
+ if: always()
@@
- - name: Publish main test results
- uses: EnricoMi/publish-unit-test-result-action@v2
- if: success() || failure() # run this step even if previous step failed
+ - name: Publish main test results
+ uses: EnricoMi/publish-unit-test-result-action@<sha-for-v2>
+ if: always()
with:
junit_files: "junit/main-test-results.xml"
check_name: "Main Test Results"

Add job-level permissions to allow creating check runs:
test-main:
name: Test Main SDK (Python 3.9)
runs-on: ubuntu-latest
+ permissions:
+ contents: read
+ checks: write
+ pull-requests: write

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 76 to 89: replace the conditional "if:
success() || failure()" with "if: always()" for both the Upload main test
results and Publish main test results steps, pin both actions to immutable
commit SHAs instead of tags (replace actions/upload-artifact@v4 and
EnricoMi/publish-unit-test-result-action@v2 with their respective full commit
SHAs), and add job-level permissions with "checks: write" (and other minimal
required permissions) so the publish action can create check runs.
```yaml
      - name: Set up Python 3.10
        run: uv python install 3.10

      - name: Install dependencies (CrewAI only)
        run: |
          uv sync --python 3.10
```
🛠️ Refactor suggestion
Pin Python 3.10 patch; install dev-only with frozen; rename step.
Increases determinism and clarity.
- - name: Set up Python 3.10
- run: uv python install 3.10
+ - name: Set up Python 3.10
+ run: uv python install 3.10.15
@@
- - name: Install dependencies (CrewAI only)
- run: |
- uv sync --python 3.10
+ - name: Install dependencies (dev only)
+ run: |
+ uv sync --group dev --python 3.10.15 --frozen

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
      - name: Set up Python 3.10
        run: uv python install 3.10.15

      - name: Install dependencies (dev only)
        run: |
          uv sync --group dev --python 3.10.15 --frozen
```
🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 107 to 112: the workflow currently
installs "python 3.10" and runs "uv sync --python 3.10" without pinning a patch,
and the step name is vague; update the install to pin a specific 3.10 patch
(e.g., 3.10.12) by changing the install invocation to that exact version, change
the sync command to install dev-only dependencies with a frozen lockfile (use
the uv flags --dev-only and --frozen along with --python <pinned-version>), and
rename the step to something explicit like "Install dev dependencies (CrewAI
only, frozen)" to improve clarity and determinism.
```yaml
      - name: Run additional integration tests
        run: |
          uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
```
🧹 Nitpick (assertive)
Create junit dir and pass required env for integration tests.
Prevents junit write failure; ensure secrets available if needed.
- - name: Run additional integration tests
- run: |
- uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
+ - name: Run additional integration tests
+ env:
+ OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+ ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
+ MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }}
+ MAXIM_BASE_URL: ${{ secrets.MAXIM_BASE_URL }}
+ TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
+ run: |
+ mkdir -p junit
+ uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
      - name: Run additional integration tests
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }}
          MAXIM_BASE_URL: ${{ secrets.MAXIM_BASE_URL }}
          TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
        run: |
          mkdir -p junit
          uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
```
🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 114 to 116: the workflow step that runs
the additional integration tests can fail because the junit output directory may
not exist and required environment variables/secrets for the tests may be
missing; update the step to create the junit directory before running tests
(e.g., mkdir -p junit) and ensure the needed secrets/env vars are passed into
the job or step (via workflow env: or with: env variables or by referencing
secrets) so pytest can write junit/additional-test-results.xml and the
integration tests have required credentials.
```diff
 MAXIM_API_KEY = os.getenv("MAXIM_API_KEY")
 MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL") or "https://app.getmaxim.ai"
-MAXIM_DATASET_ID = os.getenv("MAXIM_DATASET_ID")  # Dataset ID for integration tests
+MAXIM_DATASET_ID = "cmesayj1f001c9ybdxwxmtilk"  # Dataset ID for integration tests
```
Do not hard-code MAXIM_DATASET_ID; restore env-driven gating to avoid leaking IDs and CI flakiness.
A fixed dataset ID defeats the skip guard, risks polluting a shared dataset across runs, and likely explains the pipeline failure “Total rows mismatch … expected 4, got 2” when tests run concurrently or in different environments.
Apply this diff:
-MAXIM_DATASET_ID = "cmesayj1f001c9ybdxwxmtilk" # Dataset ID for integration tests
+MAXIM_DATASET_ID = os.getenv("MAXIM_DATASET_ID")📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
MAXIM_DATASET_ID = os.getenv("MAXIM_DATASET_ID")
```
🤖 Prompt for AI Agents
In maxim/tests/test_add_dataset_entries_integration.py around line 19, the test
hard-codes MAXIM_DATASET_ID which bypasses the env-driven gating and causes
shared-state/CI flakiness; replace the constant with reading from an environment
variable (e.g., os.getenv("MAXIM_DATASET_ID")) and add a pytest skip guard
(pytest.skip or pytest.mark.skipif) when the env var is not set so the test
won't run or leak IDs in CI; ensure no fallback hard-coded ID is used and update
imports to include os and pytest if necessary.
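For illustration, a minimal sketch of the env-driven gate described in that prompt (assuming pytest; the module-level pytestmark placement is a choice, not something the test file already does):

```python
import os

import pytest

MAXIM_DATASET_ID = os.getenv("MAXIM_DATASET_ID")

# Skip every test in this module when the integration dataset is not configured,
# so CI runs without secrets stay green instead of failing on a missing ID.
pytestmark = pytest.mark.skipif(
    not MAXIM_DATASET_ID,
    reason="MAXIM_DATASET_ID is not set; skipping integration tests",
)
```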
```diff
     def test_get_prompt_with_deployment_variables(self):
         prompt = self.maxim.get_prompt(
             promptId,
-            QueryBuilder().and_().deployment_var("Environment", "prod").build(),
+            QueryBuilder().and_().deployment_var("Environment", "Prod").build(),
         )
         if prompt is None:
             raise Exception("Prompt not found")
         self.assertEqual(prompt.prompt_id, promptId)
-        self.assertEqual(prompt.version_id, promptVersionId)
-        self.assertEqual(prompt.model, "gpt-3.5-turbo")
+        self.assertEqual(prompt.version_id, os.getenv("MAXIM_PROMPT_1_VERSION_1_ID"))
+        self.assertEqual(prompt.model, "gpt-4o")
         self.assertEqual(prompt.provider, "openai")
-        self.assertEqual(prompt.messages[0].content, "You are a helpful assistant")
+        self.assertEqual(prompt.messages[0].content, "You are a helpful assistant. You talk like Chandler from Friends.")
         self.assertEqual(len(prompt.messages), 1)
```
🛠️ Refactor suggestion
Handle 403 Forbidden from backend by skipping, not failing
Recent CI error shows 403s on prompts endpoints; wrap calls in try/except and call self.skipTest(...) on “403”/“forbidden” substrings to stabilize pipelines.
Also applies to: 61-79, 99-115, 134-147, 148-171, 236-246
🧰 Tools
🪛 Ruff (0.12.2)
41-41: Missing return type annotation for public function test_get_prompt_with_deployment_variables
Add return type annotation: None
(ANN201)
47-47: Create your own exception
(TRY002)
47-47: Avoid specifying long messages outside the exception class
(TRY003)
48-48: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
49-49: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
50-50: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
51-51: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
52-52: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
53-53: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
🤖 Prompt for AI Agents
In maxim/tests/test_prompts.py around lines 41 to 53 (and also apply same change
to ranges 61-79, 99-115, 134-147, 148-171, 236-246), wrap the call to
self.maxim.get_prompt(...) in a try/except that catches exceptions from the
backend call, inspect the exception message (and any HTTP error details) for
substrings "403" or "forbidden" (case-insensitive), and if found call
self.skipTest("Backend returned 403/forbidden, skipping test"); otherwise
re-raise or fail normally so non-403 errors still surface. Ensure the rest of
the assertions remain inside the try block so they are only executed when the
call succeeds.
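A sketch of that guard under the stated assumptions (unittest-style tests; the exact exception type raised by the SDK is not confirmed here, so a deliberately broad catch is shown, and the class and helper names are hypothetical):

```python
import unittest


class PromptTests(unittest.TestCase):  # hypothetical class name
    maxim = None  # stands in for the SDK client configured in setUp

    def _get_prompt_or_skip(self, prompt_id, rule):
        """Fetch a prompt, skipping the test when the backend denies access."""
        try:
            return self.maxim.get_prompt(prompt_id, rule)
        except Exception as err:  # broad on purpose: we only inspect the message
            message = str(err).lower()
            if "403" in message or "forbidden" in message:
                self.skipTest("Backend returned 403/forbidden, skipping test")
            raise
```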
```diff
 load_dotenv()
 logging.basicConfig(level=logging.INFO)
-env = "prod"
-
-apiKey = data[env]["apiKey"]
-baseUrl = data[env]["baseUrl"]
-workspaceId = data[env]["workspaceId"]
-datasetId = data[env]["datasetId"]
-workflowId = data[env]["workflowId"]
-promptVersionId = data[env]["promptVersionId"]
-promptChainVersionId = data[env]["promptChainVersionId"]
+apiKey = os.getenv("MAXIM_API_KEY")
+baseUrl = os.getenv("MAXIM_BASE_URL")
+workspaceId = os.getenv("MAXIM_WORKSPACE_ID")
+datasetId = os.getenv("MAXIM_DATASET_ID")
+workflowId = os.getenv("MAXIM_WORKFLOW_ID")
+promptVersionId = os.getenv("MAXIM_TEST_RUN_PROMPT_VERSION_ID")
+promptChainVersionId = os.getenv("MAXIM_PROMPT_CHAIN_VERSION_ID")
+assistantPromptVersionId = os.getenv("MAXIM_TEST_RUN_ASSISTANT_PROMPT_VERSION_ID")
+assistantPromptChainVersionId = os.getenv("MAXIM_TEST_RUN_ASSISTANT_PROMPT_CHAIN_VERSION_ID")
```
🧹 Nitpick (assertive)
Optional: single toggle to disable all integration-heavy tests in CI
You can skip the env-by-env checks and just use RUN_MAXIM_INTEGRATION_TESTS to guard the whole class, as added above. Keeps CI green by default.
Also applies to: 41-49
🤖 Prompt for AI Agents
In maxim/tests/test_test_runs.py around lines 27-39 (and similarly update lines
41-49), replace the multiple per-variable environment checks with a single
RUN_MAXIM_INTEGRATION_TESTS guard: read RUN_MAXIM_INTEGRATION_TESTS from env and
if it's not truthy, skip or mark the entire integration test class, otherwise
load the required env vars as before; implement this by adding a top-level
boolean flag (e.g., RUN_MAXIM_INTEGRATION_TESTS =
os.getenv("RUN_MAXIM_INTEGRATION_TESTS")) and using it to conditionally skip the
test class or return early so CI can opt-in to integration-heavy tests with one
toggle.
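A minimal version of that single toggle might look like this (a sketch; the class name and the accepted truthy values are illustrative choices, not SDK behavior):

```python
import os
import unittest

RUN_MAXIM_INTEGRATION_TESTS = os.getenv(
    "RUN_MAXIM_INTEGRATION_TESTS", ""
).lower() in ("1", "true", "yes")


@unittest.skipUnless(
    RUN_MAXIM_INTEGRATION_TESTS,
    "Set RUN_MAXIM_INTEGRATION_TESTS=1 to run integration-heavy tests",
)
class TestRunsIntegration(unittest.TestCase):  # hypothetical class name
    def test_smoke(self) -> None:
        # Integration-heavy assertions would go here.
        self.assertTrue(True)
```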
```python
baseUrl = os.getenv("MAXIM_BASE_URL")
workspaceId = os.getenv("MAXIM_WORKSPACE_ID")
datasetId = os.getenv("MAXIM_DATASET_ID")
workflowId = os.getenv("MAXIM_WORKFLOW_ID")
promptVersionId = os.getenv("MAXIM_TEST_RUN_PROMPT_VERSION_ID")
promptChainVersionId = os.getenv("MAXIM_PROMPT_CHAIN_VERSION_ID")
assistantPromptVersionId = os.getenv("MAXIM_TEST_RUN_ASSISTANT_PROMPT_VERSION_ID")
assistantPromptChainVersionId = os.getenv("MAXIM_TEST_RUN_ASSISTANT_PROMPT_CHAIN_VERSION_ID")
```
|
|
🧹 Nitpick (assertive)
Provide sensible defaults for base_url to avoid None in Config
@@
-apiKey = os.getenv("MAXIM_API_KEY")
-baseUrl = os.getenv("MAXIM_BASE_URL")
+apiKey = os.getenv("MAXIM_API_KEY")
+baseUrl = os.getenv("MAXIM_BASE_URL") or "https://app.getmaxim.ai"

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
apiKey = os.getenv("MAXIM_API_KEY")
baseUrl = os.getenv("MAXIM_BASE_URL") or "https://app.getmaxim.ai"
workspaceId = os.getenv("MAXIM_WORKSPACE_ID")
datasetId = os.getenv("MAXIM_DATASET_ID")
workflowId = os.getenv("MAXIM_WORKFLOW_ID")
promptVersionId = os.getenv("MAXIM_TEST_RUN_PROMPT_VERSION_ID")
promptChainVersionId = os.getenv("MAXIM_PROMPT_CHAIN_VERSION_ID")
assistantPromptVersionId = os.getenv("MAXIM_TEST_RUN_ASSISTANT_PROMPT_VERSION_ID")
assistantPromptChainVersionId = os.getenv("MAXIM_TEST_RUN_ASSISTANT_PROMPT_CHAIN_VERSION_ID")
```
🤖 Prompt for AI Agents
In maxim/tests/test_test_runs.py around lines 30 to 38, the baseUrl environment
variable can be None which causes Config to receive a None value; change the
getenv call to provide a sensible default (e.g. the expected API base URL used
in tests or a localhost/mock endpoint) so baseUrl is never None, or set the
default before instantiating Config; ensure the chosen default matches other
test fixtures and update any related assertions if necessary.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
maxim/tests/test_add_dataset_entries_comprehensive.py (2)
37-40: tearDown currently mutates the real process environment; remove or restore instead. This file no longer sets MAXIM_* env vars, yet tearDown deletes them globally. In CI or local runs where they are legitimately set, subsequent tests can fail. Replace deletion with restoration:

```python
# In setUp:
self._orig_env = {k: os.environ.get(k) for k in ("MAXIM_API_KEY", "MAXIM_BASE_URL")}

# In tearDown:
for k, v in self._orig_env.items():
    if v is None:
        os.environ.pop(k, None)
    else:
        os.environ[k] = v
```
# In setUp: self._orig_env = {k: os.environ.get(k) for k in ("MAXIM_API_KEY", "MAXIM_BASE_URL")} # In tearDown: for k, v in self._orig_env.items(): if v is None: os.environ.pop(k, None) else: os.environ[k] = vOr simply drop the env cleanup entirely if this class doesn’t set them.
244-249: Brittle network call-count assertion; this caused the pipeline failure. The job shows "AssertionError: 2 != 4" at Line 267. If upload_to_signed_url internally obtains the signed URL and/or handles the PATCH, mocking it to return True will suppress those get_client calls, reducing call_count. Asserting exact totals is brittle.
Refocus the assertion on behavior (presence of GET for upload URL and PATCH for entries), not the raw count:
```diff
-        # Verify the upload process was initiated
-        self.assertEqual(mock_client.request.call_count, 4)
+        # Verify upload URL retrieval and entries patch were invoked
+        methods = [kwargs["method"] for _, kwargs in mock_client.request.call_args_list]
+        self.assertIn("GET", methods)    # signed upload URL
+        self.assertIn("PATCH", methods)  # entries update
```

Additionally, track the mocked uploader explicitly:
```python
with patch.object(self.maxim.maxim_api, 'upload_to_signed_url', return_value=True) as mock_upload:
    self.maxim.add_dataset_entries(self.dataset_id, [entry])
mock_upload.assert_called_once()
```

If the implementation truly centralizes signed-URL retrieval inside upload_to_signed_url, drop the GET assertion and only assert PATCH + mock_upload.called.
♻️ Duplicate comments (15)
.github/workflows/tests.yml (15)
1-131: EOF newline. Ensure the file ends with a single trailing newline.
3-7: Quote "on" and fix branch-list commas to satisfy yamllint. Prevents YAML 1.1 boolean gotcha and lint errors.

```diff
-on:
+"on":
   push:
-    branches: [main,beta]
+    branches: [main, beta]
   pull_request:
-    branches: [main,beta]
+    branches: [main, beta]
```
1-10: Add workflow-level least-privilege and cancel superseded runs. Harden token scope and save CI minutes.

```diff
 name: Run Tests
+permissions:
+  contents: read
+
+"on":
@@
 jobs:
+  concurrency:
+    group: tests-${{ github.workflow }}-${{ github.ref }}
+    cancel-in-progress: true
```
13-15: Fix job condition to work on push events and gate fork PRs. Current expression dereferences pull_request on push and will error; also secrets won't be available on forks.

```diff
-    if: github.event.pull_request.draft == false
+    if: github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false)
+    timeout-minutes: 30
```
16-23: Pin actions to immutable SHAs and avoid "latest" for uv. Supply-chain hygiene and reproducibility.

```diff
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@<sha-for-v4>  # v4
@@
-        uses: astral-sh/setup-uv@v4
+        uses: astral-sh/setup-uv@<sha-for-v4>  # v4
         with:
-          version: "latest"
+          version: "0.5.x"
```
24-25: Pin Python to 3.9.20 to satisfy requires-python >=3.9.20.

```diff
-      - name: Set up Python 3.9
-        run: uv python install 3.9
+      - name: Set up Python 3.9
+        run: uv python install 3.9.20
```
27-37: Remove brittle pyproject.toml mutation/restore; install dev group frozen. Use resolver flags instead of editing files.

```diff
-      - name: Backup pyproject.toml
-        run: |
-          cp pyproject.toml pyproject.toml.bak
-
-      - name: Remove additional dependencies
-        run: |
-          sed -i.bak '/additional_dev = \[/,/\]/d' pyproject.toml
-
-      - name: Install dependencies (dev only)
-        run: |
-          uv sync --python 3.9
+      - name: Install dependencies (dev only, frozen)
+        run: |
+          uv sync --group dev --python 3.9.20 --frozen
```
39-74: Ensure junit dir exists; consider trimming unused secrets; create job perms for checks.
- Create junit dir before pytest.
- Remove unused secrets like PORTKEY_VIRTUAL_KEY if not needed.
- Add job-level permissions for check publishing.
```diff
   test-main:
     name: Test Main SDK (Python 3.9)
     runs-on: ubuntu-latest
+    permissions:
+      contents: read
+      checks: write
+      pull-requests: write
@@
-        run: |
-          uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml
+        run: |
+          mkdir -p junit
+          uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml
```

If PORTKEY_VIRTUAL_KEY is unused, delete line 54.
76-88: Publish artifacts/results unconditionally and pin actions to SHAs.

```diff
-      - name: Upload main test results
-        uses: actions/upload-artifact@v4
-        if: success() || failure()
+      - name: Upload main test results
+        uses: actions/upload-artifact@<sha-for-v4>  # v4
+        if: always()
@@
-      - name: Publish main test results
-        uses: EnricoMi/publish-unit-test-result-action@v2
-        if: success() || failure()
+      - name: Publish main test results
+        uses: EnricoMi/publish-unit-test-result-action@<sha-for-v2>  # v2
+        if: always()
         with:
           junit_files: "junit/main-test-results.xml"
           check_name: "Main Test Results"
```
90-92: Remove restore step (no longer mutating pyproject).

```diff
-      - name: Restore pyproject.toml
-        run: |
-          mv pyproject.toml.bak pyproject.toml
+      # No restore needed
```
95-99: Mirror job guards and timeout for 3.10 job; add perms. Prevents fork/draft failures and enables check publishing.

```diff
   additional-tests:
     name: Test Additional Integrations (Python 3.10)
     runs-on: ubuntu-latest
-    if: github.event.pull_request.draft == false
+    if: github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false)
+    timeout-minutes: 30
+    permissions:
+      contents: read
+      checks: write
+      pull-requests: write
```
100-106: Pin actions and uv version.

```diff
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@<sha-for-v4>  # v4
@@
-        uses: astral-sh/setup-uv@v4
+        uses: astral-sh/setup-uv@<sha-for-v4>  # v4
         with:
-          version: "latest"
+          version: "0.5.x"
```
107-113: Pin Python 3.10 patch and install dev-only, frozen; fix step label.

```diff
-      - name: Set up Python 3.10
-        run: uv python install 3.10
+      - name: Set up Python 3.10
+        run: uv python install 3.10.15
@@
-      - name: Install dependencies (CrewAI only)
-        run: |
-          uv sync --python 3.10
+      - name: Install dependencies (dev only, frozen)
+        run: |
+          uv sync --group dev --python 3.10.15 --frozen
```
114-116: Create junit dir and pass required env for integration tests.

```diff
-      - name: Run additional integration tests
-        run: |
-          uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
+      - name: Run additional integration tests
+        env:
+          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
+          MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }}
+          MAXIM_BASE_URL: ${{ secrets.MAXIM_BASE_URL }}
+          TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
+        run: |
+          mkdir -p junit
+          uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
```
118-130: Use always() for artifacts/results and pin action SHAs.

```diff
-      - name: Upload additional test results
-        uses: actions/upload-artifact@v4
-        if: success() || failure()
+      - name: Upload additional test results
+        uses: actions/upload-artifact@<sha-for-v4>  # v4
+        if: always()
@@
-      - name: Publish additional test results
-        uses: EnricoMi/publish-unit-test-result-action@v2
-        if: success() || failure()
+      - name: Publish additional test results
+        uses: EnricoMi/publish-unit-test-result-action@<sha-for-v2>  # v2
+        if: always()
         with:
           junit_files: "junit/additional-test-results.xml"
           check_name: "Additional Test Results"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- .github/workflows/tests.yml (1 hunks)
- maxim/tests/test_add_dataset_entries_comprehensive.py (10 hunks)
🧰 Additional context used
🪛 GitHub Actions: Run Tests
maxim/tests/test_add_dataset_entries_comprehensive.py
[error] 267-267: AssertionError: 2 != 4; expected 4 network calls, got 2
🪛 YAMLlint (1.37.1)
.github/workflows/tests.yml
[warning] 3-3: truthy value should be one of [false, true]
(truthy)
[error] 5-5: too few spaces after comma
(commas)
[error] 7-7: too few spaces after comma
(commas)
🔇 Additional comments (7)
maxim/tests/test_add_dataset_entries_comprehensive.py (7)
178-178: No-op whitespace.
189-189: No-op whitespace.
218-218: No-op whitespace.
243-243: No-op whitespace.
275-275: No-op whitespace.
286-286: No-op whitespace.
204-204: Use a consistent API response shape for the dataset-entries patch. Mock the patch response to return a data object (e.g. {"data": {"message": "Entries updated successfully"}}) instead of {"success": true}, so that downstream result handling isn't skipped.
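A sketch of that mock shape (assuming unittest.mock; the message text is illustrative):

```python
from unittest.mock import MagicMock

mock_client = MagicMock()
# Return the shape the result handling expects: a top-level "data" object,
# not a bare {"success": True} flag that short-circuits the happy path.
mock_client.request.return_value.json.return_value = {
    "data": {"message": "Entries updated successfully"}
}
```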
```python
from dotenv import load_dotenv
load_dotenv()

MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL")
MAXIM_API_KEY = os.getenv("MAXIM_API_KEY")
```
🧹 Nitpick (assertive)
🛠️ Refactor suggestion
Avoid module-scope dotenv loading and env caching; this breaks test hermeticity.
- load_dotenv() at import time and caching MAXIM_BASE_URL/MAXIM_API_KEY as module constants make tests order-dependent and hard to override per-test. Combined with tearDown deleting env vars (see below), this can corrupt the global process env for other tests.
- Prefer resolving .env deterministically and providing safe test defaults, or read env inside setUp.
Apply this diff to make resolution explicit and add safe test defaults (while still honoring env when set):
-from dotenv import load_dotenv
-load_dotenv()
-
-MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL")
-MAXIM_API_KEY = os.getenv("MAXIM_API_KEY")
+from dotenv import load_dotenv, find_dotenv
+# Resolve nearest .env without overriding already-set env
+load_dotenv(find_dotenv(usecwd=True), override=False)
+# Note: do not cache real secrets at import-time; provide safe test defaults
+MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL", "http://localhost:8000")
+MAXIM_API_KEY = os.getenv("MAXIM_API_KEY", "test-api-key")If you don’t want defaults, drop them here and inject via setUp using patch.dict. I can provide a version that fully avoids module-level reads.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
from dotenv import load_dotenv, find_dotenv
# Resolve nearest .env without overriding already-set env
load_dotenv(find_dotenv(usecwd=True), override=False)
# Note: do not cache real secrets at import-time; provide safe test defaults
MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL", "http://localhost:8000")
MAXIM_API_KEY = os.getenv("MAXIM_API_KEY", "test-api-key")
```
🤖 Prompt for AI Agents
In maxim/tests/test_add_dataset_entries_comprehensive.py around lines 10-14,
remove the module-level load_dotenv() and module-scoped
MAXIM_BASE_URL/MAXIM_API_KEY reads; instead, resolve .env and read env vars
inside the test's setUp (or individual tests) so each test can control/override
values deterministically. Either call load_dotenv(path) in setUp and then use
os.getenv("MAXIM_BASE_URL", "http://localhost:...")/os.getenv("MAXIM_API_KEY",
"test-key") to provide safe defaults, or avoid defaults and inject values via
unittest.mock.patch.dict(os.environ) per-test; do not mutate global env at
import time or rely on tearDown to undo module-level caching.
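A minimal sketch of the per-test injection described above (only patch.dict and addCleanup semantics are assumed; the values are placeholders):

```python
import os
import unittest
from unittest.mock import patch


class DatasetEntriesTest(unittest.TestCase):  # hypothetical class name
    def setUp(self) -> None:
        # Override env for this test only; addCleanup restores it afterwards.
        patcher = patch.dict(os.environ, {
            "MAXIM_BASE_URL": "http://localhost:8000",  # placeholder endpoint
            "MAXIM_API_KEY": "test-api-key",            # placeholder credential
        })
        patcher.start()
        self.addCleanup(patcher.stop)
```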
Main Test Results: 131 tests, 104 ✅, 5m 5s ⏱️. For more details on these failures, see this check. Results for commit 0e49b4e.

♻️ This comment has been updated with latest results.
Additional Test Results: 0 tests, 0 ✅, 0s ⏱️. Results for commit 0e49b4e.

♻️ This comment has been updated with latest results.
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
maxim/tests/test_fireworks.py (2)
337-341: Possible AttributeError in streaming loop (tool calls). Chunks may omit delta; this will crash intermittently.

```diff
-        for chunk in stream:
-            if chunk.choices[0].delta.content:
-                full_response += chunk.choices[0].delta.content
-            if hasattr(chunk.choices[0].delta, 'tool_calls') and chunk.choices[0].delta.tool_calls:
-                tool_calls_received = True
+        for chunk in stream:
+            choice = chunk.choices[0]
+            if hasattr(choice, "delta"):
+                delta = choice.delta
+                if hasattr(delta, "content") and delta.content:
+                    full_response += delta.content
+                if hasattr(delta, "tool_calls") and delta.tool_calls:
+                    tool_calls_received = True
```
460-462: Guard cleanup to avoid AttributeError when setUp skipped early. Defensive cleanup makes the suite resilient.

```diff
-        self.logger.cleanup()
+        if hasattr(self, "logger"):
+            self.logger.cleanup()
```

Also applies to: 722-723
maxim/tests/test_add_dataset_entries_integration.py (1)
14-14: Prefer explicit .env discovery. Minor: use find_dotenv(usecwd=True) and override=False for predictable CI behavior.

```diff
-dotenv.load_dotenv()
+from dotenv import find_dotenv
+dotenv.load_dotenv(find_dotenv(usecwd=True), override=False)
```
♻️ Duplicate comments (8)
maxim/tests/test_fireworks.py (2)
26-31: Skip tests when secrets are missing; pass explicit config to logger. Raising here fails CI for forks. Also, Maxim.logger() will raise without a repo id. Guard all required env vars and pass them explicitly.

```diff
-        if not fireworksApiKey:
-            raise ValueError("FIREWORKS_API_KEY environment variable is not set")
-
-        self.logger = Maxim({"base_url": baseUrl}).logger()
+        if not fireworksApiKey or not apiKey or not repoId:
+            self.skipTest("Skipping Fireworks tests: missing FIREWORKS_API_KEY / MAXIM_API_KEY / MAXIM_LOG_REPO_ID")
+        self.logger = Maxim({"base_url": baseUrl, "api_key": apiKey}).logger({"id": repoId})
```
472-476: Async setup parity: skip on missing env and pass explicit config. Mirror sync setup to avoid CI failures and logger() ValueError.

```diff
-        if not fireworksApiKey:
-            raise ValueError("FIREWORKS_API_KEY environment variable is not set")
-
-        self.logger = Maxim({"base_url": baseUrl}).logger()
+        if not fireworksApiKey or not apiKey or not repoId:
+            self.skipTest("Skipping Fireworks async tests: missing FIREWORKS_API_KEY / MAXIM_API_KEY / MAXIM_LOG_REPO_ID")
+        self.logger = Maxim({"base_url": baseUrl, "api_key": apiKey}).logger({"id": repoId})
```

maxim/tests/test_add_dataset_entries_comprehensive.py (2)
10-14: Avoid module-scope dotenv loading and env caching. Repeat of prior feedback: move env reads out of module scope to keep tests hermetic.

```diff
-from dotenv import load_dotenv
-load_dotenv()
-
-MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL")
-MAXIM_API_KEY = os.getenv("MAXIM_API_KEY")
+from dotenv import load_dotenv, find_dotenv
+load_dotenv(find_dotenv(usecwd=True), override=False)
```

Then read env inside setUp (see next diff).
24-29: Don’t pass None into Config; build kwargs conditionally.Prevents accidental None from changing code paths.
- config = Config( - api_key=MAXIM_API_KEY, - base_url=MAXIM_BASE_URL, - debug=True, - raise_exceptions=True - ) + ak = os.getenv("MAXIM_API_KEY") + bu = os.getenv("MAXIM_BASE_URL") + cfg = {"debug": True, "raise_exceptions": True} + if ak: cfg["api_key"] = ak + if bu: cfg["base_url"] = bu + config = Config(**cfg).github/workflows/tests.yml (3)
14-15: Add top-level concurrency to cancel superseded runs. Speeds CI and reduces cost.

 permissions:
   contents: read
   checks: write
   pull-requests: write
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
 jobs:

119-121: Ensure env passed and junit dir exists for integration tests. Prevents missing-secret flakiness and junit write errors.

- - name: Run additional integration tests
-   run: |
-     uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
+ - name: Run additional integration tests
+   env:
+     MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }}
+     MAXIM_BASE_URL: ${{ secrets.MAXIM_BASE_URL }}
+     MAXIM_LOG_REPO_ID: ${{ secrets.MAXIM_LOG_REPO_ID }}
+     OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+     ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
+     TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
+   run: |
+     mkdir -p junit
+     uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml

3-7: Fix YAML lint and boolean parsing: quote "on" and add spaces after commas. Prevents parse/lint issues.

-on:
+"on":
   push:
-    branches: [main,beta]
+    branches: [main, beta]
   pull_request:
-    branches: [main,beta]
+    branches: [main, beta]

pyproject.toml (1)

24-62: Requires-Python bound is too strict for the stated CI matrix. Your CI includes Python 3.9 and 3.10, but requires-python = ">=3.9.20" can exclude valid runners (e.g., 3.9.19). Mirror prior advice:

-requires-python = ">=3.9.20"
+requires-python = ">=3.9"
+# optionally bound upper versions not yet validated:
+# requires-python = ">=3.9, <3.13"

Also ensure the GH Actions matrix stays within these bounds.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (7)
- .github/workflows/tests.yml (1 hunks)
- maxim/tests/test_add_dataset_entries_comprehensive.py (10 hunks)
- maxim/tests/test_add_dataset_entries_integration.py (8 hunks)
- maxim/tests/test_agno.py (2 hunks)
- maxim/tests/test_crewai.py (1 hunks)
- maxim/tests/test_fireworks.py (2 hunks)
- pyproject.toml (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
maxim/tests/test_crewai.py (1)
maxim/maxim.py (1)
logger(897-942)
maxim/tests/test_agno.py (3)
maxim/maxim.py (1)
logger(897-942)
maxim/logger/agno/client.py (1)
instrument_agno(440-465)
maxim/tests/mock_writer.py (1)
inject_mock_writer(243-259)
maxim/tests/test_add_dataset_entries_integration.py (1)
maxim/models/dataset.py (1)
Variable(23-81)
maxim/tests/test_fireworks.py (1)
maxim/maxim.py (2)
logger(897-942)
Maxim(127-1045)
🪛 GitHub Actions: Run Tests
pyproject.toml
[warning] 64-64: uv sync --python 3.9: TOML parse warning in pyproject.toml at line 64, column 22: key with no value, expected '='
[error] 64-64: uv sync --python 3.9: TOML parse error in pyproject.toml at line 64, column 22: key with no value, expected '='. Process completed with exit code 2.
🪛 YAMLlint (1.37.1)
.github/workflows/tests.yml
[warning] 3-3: truthy value should be one of [false, true]
(truthy)
[error] 5-5: too few spaces after comma
(commas)
[error] 7-7: too few spaces after comma
(commas)
🔇 Additional comments (3)
maxim/tests/test_add_dataset_entries_integration.py (1)
119-140: LGTM: Standardized to "Test" column. Consistent renaming to "Test" across dict- and object-based entries aligns the suite. No API surface change here.
Also applies to: 196-206, 208-216, 266-274, 285-293, 349-359, 361-370
pyproject.toml (2)
64-68: Ignore parse error suggestion: pyproject.toml loads without errors. Likely an incorrect or invalid review comment.
24-62: Bump stale dev pins & remove duplicate dev dependencies
- Bump python-dotenv to ~=1.0.1 (load_dotenv API unchanged since v1.0.1) (pypi.org, newreleases.io)
- Bump boto3 & botocore to ~=1.34.0 (v1.34.0 released Dec 2023; BaseClient import in botocore.client remains available) (boto3.amazonaws.com)
- Remove duplicate filetype & typing-extensions entries from the dev deps group
if: github.event.pull_request.draft == false
Draft/fork-safe gating: keep runs on push, skip drafts and forks on PRs.
Current condition breaks on push events. Use an event-aware expression.
- if: github.event.pull_request.draft == false
+ if: ${{ github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false) }}
🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 19-20: the current if clause
(github.event.pull_request.draft == false) fails on push events; update the
condition to be event-aware so pushes always run and pull_request runs only when
not a draft (e.g., check github.event_name == 'push' OR when github.event_name
== 'pull_request' ensure github.event.pull_request.draft == false), and ensure
the expression also guards accessing github.event.pull_request when the event is
not a pull_request to avoid errors.
- name: Backup pyproject.toml
  run: |
    cp pyproject.toml pyproject.toml.bak

- name: Remove additional dependencies
  run: |
    sed -i.bak '/additional_dev = \[/,/\]/d' pyproject.toml
🛠️ Refactor suggestion
Remove brittle pyproject mutation/restore.
Editing pyproject.toml in CI is flaky; rely on resolver flags/groups.
- - name: Backup pyproject.toml
- run: |
- cp pyproject.toml pyproject.toml.bak
-
- - name: Remove additional dependencies
- run: |
- sed -i.bak '/additional_dev = \[/,/\]/d' pyproject.toml
@@
- - name: Restore pyproject.toml
- run: |
- mv pyproject.toml.bak pyproject.toml
+ # No pyproject.toml mutation required

Also applies to: 95-97
🤖 Prompt for AI Agents
.github/workflows/tests.yml lines 24-31 (and similarly 95-97): the workflow
mutates pyproject.toml by copying and sed-removing the additional_dev section
which is brittle; instead remove these steps and update the job to use resolver
flags or dependency groups to exclude the extra dependencies (e.g., pass the
appropriate resolver CLI flags or configure dependency groups in the tool
invocation or matrix), and remove the backup/restore and sed steps so the CI
does not edit the source pyproject.toml.
- name: Install uv
  uses: astral-sh/setup-uv@v4
  with:
    version: "latest"
🧹 Nitpick (assertive)
Pin uv installer version for reproducibility.
Avoid “latest” drift.
- - name: Install uv
- uses: astral-sh/setup-uv@v4
- with:
- version: "latest"
+ - name: Install uv
+ uses: astral-sh/setup-uv@v4
+ with:
+     version: "0.5.x"

Also applies to: 107-111
🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 32-36 (and also apply same change to
lines 107-111): the workflow currently pins the astral-sh/setup-uv action to
version "latest", which can cause nondeterministic CI behavior; replace "latest"
with a specific, tested tag or semver (for example v4.2.1 or v4@sha256:<commit>)
to pin the installer version for reproducibility, and update both occurrences
(lines 32-36 and 107-111) to use the same explicit version string.
- name: Set up Python 3.9
  run: uv python install 3.9

- name: Install dependencies (dev only)
  run: |
    uv sync --python 3.9
🛠️ Refactor suggestion
Pin Python patch and install dev-only with frozen lock.
Improves determinism; avoids pulling unintended extras.
- - name: Set up Python 3.9
- run: uv python install 3.9
+ - name: Set up Python 3.9
+ run: uv python install 3.9.20
@@
- - name: Install dependencies (dev only)
- run: |
- uv sync --python 3.9
+ - name: Install dependencies (dev only)
+ run: |
+     uv sync --group dev --python 3.9.20 --frozen

- - name: Set up Python 3.10
- run: uv python install 3.10
+ - name: Set up Python 3.10
+ run: uv python install 3.10.15
@@
- - name: Install dependencies (CrewAI only)
- run: |
- uv sync --python 3.10
+ - name: Install dependencies (dev only)
+ run: |
+     uv sync --group dev --python 3.10.15 --frozen

Also applies to: 112-118
🤖 Prompt for AI Agents
In .github/workflows/tests.yml around lines 37-43 (and also apply same changes
to 112-118): pin the Python 3.9 runtime to a specific patch release (e.g.
3.9.17) instead of the generic "3.9", and change the dependency installation to
perform a dev-only install using the frozen lockfile flag; for example replace
the python install step with the pinned patch version and replace "uv sync
--python 3.9" with "uv sync --python 3.9.17 --dev --frozen-lockfile" (apply
identical edits at lines 112-118).
.github/workflows/tests.yml (Outdated)

run: |
  uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml
🛠️ Refactor suggestion
Create junit directory before pytest writes XML.
Prevents artifact step failures.
- run: |
- uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml
+ run: |
+ mkdir -p junit
+     uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml
🤖 Prompt for AI Agents
In .github/workflows/tests.yml around lines 78 to 79, pytest writes its JUnit
XML to junit/main-test-results.xml but the junit directory may not exist; modify
the workflow to create the directory before running pytest (e.g., add a step or
prepend the run script with mkdir -p junit) so pytest can write the XML and the
artifact step won't fail.
  logger = Logger(
      {"id": os.getenv("MAXIM_LOG_REPO_ID", "test-repo")},
-     api_key="test-api-key",
-     base_url="https://app.getmaxim.ai",
+     api_key=os.getenv("MAXIM_API_KEY"),
+     base_url=os.getenv("MAXIM_BASE_URL"),
  )
🧹 Nitpick (assertive)
Avoid passing None for api_key/base_url to Logger.
Build kwargs conditionally to prevent None from altering code paths.
- logger = Logger(
- {"id": os.getenv("MAXIM_LOG_REPO_ID", "test-repo")},
- api_key=os.getenv("MAXIM_API_KEY"),
- base_url=os.getenv("MAXIM_BASE_URL"),
- )
+ cfg = {"id": os.getenv("MAXIM_LOG_REPO_ID", "test-repo")}
+ kwargs = {}
+ ak = os.getenv("MAXIM_API_KEY")
+ bu = os.getenv("MAXIM_BASE_URL")
+ if ak: kwargs["api_key"] = ak
+ if bu: kwargs["base_url"] = bu
+ logger = Logger(cfg, **kwargs)
🤖 Prompt for AI Agents
In maxim/tests/test_agno.py around lines 21 to 25, the test currently passes
os.getenv(...) results directly into Logger which can pass None for
api_key/base_url; instead build the kwargs dict conditionally: read the env vars
into variables, add api_key and base_url to the kwargs only if they are not
None/empty, then call Logger with the repo dict and the constructed kwargs so
None values are not passed through and code paths remain unchanged.
| "debug": True, | ||
| }) | ||
| self.logger = self.maxim.logger({"id": os.getenv("MAXIM_LOGGER_ID")}) | ||
| self.logger = self.maxim.logger({"id": os.getenv("MAXIM_LOG_REPO_ID")}) |
🧹 Nitpick (assertive)
Avoid passing None as logger id; rely on env fallback or set conditionally.
If MAXIM_LOG_REPO_ID is unset, this passes {"id": None}, then raises. Either omit config to use the built-in env fallback, or include id only when present.
- self.logger = self.maxim.logger({"id": os.getenv("MAXIM_LOG_REPO_ID")})
+ repo_id = os.getenv("MAXIM_LOG_REPO_ID")
+ self.logger = self.maxim.logger({} if not repo_id else {"id": repo_id})
🤖 Prompt for AI Agents
In maxim/tests/test_crewai.py around line 25, the logger is being initialized
with {"id": os.getenv("MAXIM_LOG_REPO_ID")} which can pass {"id": None} when the
env var is unset and cause an error; change the call to either omit the config
so the logger uses its built-in env fallback (self.logger = self.maxim.logger())
or read the env var into a variable and pass a config only when it is truthy
(e.g., build the dict conditionally and call self.maxim.logger(config) only if
id is present).
model="llama4-maverick-instruct-basic",
deployment_type="serverless",
api_key=fireworksApiKey
🧹 Nitpick (assertive)
External dependency drift: make model and deployment type configurable
Models change frequently; hardcoding risks flakes. Read from env with sensible defaults and document in CI secrets.
- model="llama4-maverick-instruct-basic",
- deployment_type="serverless",
+ model=os.getenv("FIREWORKS_MODEL", "llama4-maverick-instruct-basic"),
+ deployment_type=os.getenv("FIREWORKS_DEPLOYMENT_TYPE", "serverless"),
🤖 Prompt for AI Agents
In maxim/tests/test_fireworks.py around lines 33 to 35, the test hardcodes
model="llama4-maverick-instruct-basic" and deployment_type="serverless"; change
these to read from environment variables (e.g., FIREWORKS_MODEL and
FIREWORKS_DEPLOYMENT_TYPE) with sensible defaults so the values can be
overridden in CI or local runs; validate or coerce values if necessary, and
update repository/CI docs to list the new env vars (and remind to add the API
key secret) so maintainers can configure model/deployment type without touching
tests.
| "Intended Audience :: Developers", | ||
| "Programming Language :: Python :: 3", | ||
| "Operating System :: Unix", | ||
| "Operating System :: MacOS :: MacOS X", | ||
| "Operating System :: Microsoft :: Windows", |
🧹 Nitpick (assertive)
Add explicit Python version classifiers to match your CI matrix.
Your workflow runs 3.9 and 3.10; reflect that in Trove classifiers.
classifiers = [
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
"Operating System :: Unix",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
 ]
🤖 Prompt for AI Agents
In pyproject.toml around lines 15 to 19, the Trove classifiers list a generic
"Programming Language :: Python :: 3" but do not include explicit Python version
classifiers matching the CI matrix; add "Programming Language :: Python :: 3.9"
and "Programming Language :: Python :: 3.10" to the classifiers list so the
package metadata reflects the supported Python versions used in CI.
| "agno~=1.7.1", | ||
| "ollama~=0.5.1", | ||
| "pyyaml~=6.0.0", | ||
| "pydoc-markdown~=2.1.3", | ||
| "pytest~=8.4.0", | ||
| "boto3~=1.7.84", | ||
| "langchain-anthropic~=0.3.15", | ||
| "langchain-community~=0.3.25", | ||
| "langchain-openai~=0.3.23", | ||
| "langgraph~=0.4.8", | ||
| "flask~=3.1.1", | ||
| "google-genai~=1.18.0", | ||
| "litellm~=1.72.6", | ||
| "botocore~=1.10.84", | ||
| "anthropic~=0.54.0", | ||
| "python-dotenv~=0.9.9", | ||
| "filetype~=1.2.0", | ||
| "langchain==0.3.25", | ||
| "livekit~=1.0.8", | ||
| "livekit-agents[openai]~=1.0.22", | ||
| "livekit-agents[google]~=1.0", | ||
| "livekit-api~=1.0.2", | ||
| "portkey-ai~=1.13.0", | ||
| "requests~=2.32.3", | ||
| "build~=1.2.2.post1", | ||
| "typing-extensions~=4.13.2", | ||
| "urllib3~=2.4.0", | ||
| "setuptools~=80.9.0", | ||
| "wheel~=0.45.1", | ||
| "openai-agents~=0.0.17", | ||
| "mistralai~=1.8.2", | ||
| "together~=1.4.6", | ||
| "groq~=0.30.0", | ||
| "fireworks-ai~=0.17.9", | ||
| "livekit-plugins-noise-cancellation~=0.2", | ||
| "llama-index-core~=0.12.49", | ||
| "llama-index-llms-openai~=0.4.7", | ||
| "tavily-python~=0.7.9", | ||
| ] |
💡 Verification agent
❓ Verification inconclusive
Mixed exact pin and broad ranges may cause conflicts (langchain family).
You pin langchain==0.3.25 while using compatible ranges for langchain-community/openai. Ensure their minor versions remain mutually compatible across CI.
Run a dry resolve to catch conflicts early:
Consider aligning with compatible caps, e.g., langchain~=0.3.25 and langchain-* ~=0.3.x, or commit a lock for CI determinism.
🏁 Script executed:
#!/bin/bash
uv pip compile -q -o /dev/stdout --python 3.10 --all-extras pyproject.toml | rg -n 'langchain'

Length of output: 187
Verify LangChain dependency compatibility
Mixed exact pin (langchain==0.3.25) alongside ~= ranges for langchain-community/langchain-openai may yield conflicting sub-versions. Please manually run your CI’s resolver (e.g. pip-compile --output-file=- pyproject.toml | rg langchain) to confirm no version clashes.
Optional: align all LangChain-related packages on ~=0.3.25 or commit a lockfile for deterministic installs.
🤖 Prompt for AI Agents
pyproject.toml lines 24-62: The file mixes an exact pin for langchain
(langchain==0.3.25) with ~= ranges for related packages (langchain-community,
langchain-openai) which can cause resolver conflicts; run your CI resolver
locally (e.g. pip-compile --output-file=- pyproject.toml | rg langchain) to
verify there are no incompatible versions, then either align all
LangChain-related packages to the same ~=0.3.25 range or remove the ~= ranges
and pin them to compatible versions, or commit a generated lockfile (e.g.
requirements.txt/poetry lock) so installs are deterministic.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
maxim/tests/test_add_dataset_entries_integration.py (5)
95-129: Fix: created markdown file is never attached; docstring/log mention "PDF/markdown" but code uses audio+URL. You create a temp .md (Lines 95-98), but never attach it. Meanwhile the log (Line 81) says "image, PDF, and markdown" yet the attachments are image, audio, and a URL. This causes misleading logs and unnecessary filesystem work; it also introduces an external dependency via URL, which can be flaky.
Recommend replacing the UrlAttachment with a FileAttachment for the markdown you already created:
- context_file3 = UrlAttachment(
-     url="https://images.pexels.com/photos/842711/pexels-photo-842711.jpeg",
-     name="background.jpeg"
- )
+ context_file3 = FileAttachment(
+     path=str(temp_file_path3),
+     name=Path(temp_file_path3).name
+ )

And optionally update the log/docstring to say "image, audio, and markdown."
90-93: Use unittest assertions instead of bare assert for file presence. Bare asserts can be optimized away. Prefer unittest APIs to keep failures visible.

- assert real_image_path.exists(), "Image file not found"
- assert real_audio_path.exists(), "Audio file not found"
+ self.assertTrue(real_image_path.exists(), "Image file not found")
+ self.assertTrue(real_audio_path.exists(), "Audio file not found")
119-128: Confirm "Test" column name with API and consider centralizing as a constant. If the backend expects a specific output/label column, drift will break integrations. Suggest:
- Define once near other env config:
OUTPUT_COL = os.getenv("MAXIM_OUTPUT_COL", "Test")

- Use {OUTPUT_COL: {...}} in entries.
This keeps tests aligned if the column name changes across the suite.
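For illustration, a sketch of how the constant could be threaded through an entry; the entry shape mirrors the dict entries above, and MAXIM_OUTPUT_COL is a hypothetical override variable, not an existing setting:

import os

# Single source of truth for the output column name; "Test" stays the default.
OUTPUT_COL = os.getenv("MAXIM_OUTPUT_COL", "Test")

entry = {
    "Input": "What is the capital of France?",
    OUTPUT_COL: {"expected_behavior": "correct_answer"},
}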
Would you like me to propagate this constant through all occurrences in this file and the rest of the test suite?
119-129: Optional: derive "supporting_docs" from attachments to avoid drift. Hard-coding 3 can desync if attachments change.

- "supporting_docs": 3
+ "supporting_docs": len([context_file1, context_file2, context_file3])

407-415: Use valid entry shape in invalid-ID error test. The test currently uses lower-case "input"/"expected_behavior", which doesn't match the expected schema ("Input"/"Test"), so the exception may be due to payload shape rather than an invalid dataset ID. Update the entry to use the correct keys:

- test_entry = {
-     "input": "This should fail due to invalid dataset ID",
-     "expected_behavior": "error"
- }
+ test_entry = {
+     "Input": "This should fail due to invalid dataset ID",
+     "Test": {"expected_behavior": "error"}
+ }

pyproject.toml (1)
10-10: Relax requires-python to avoid excluding runners (patch-level bound). ">=3.9.20" is too strict; your workflow likely targets generic 3.9.x.

Apply:

-requires-python = ">=3.9.20"
+requires-python = ">=3.9"
♻️ Duplicate comments (20)
maxim/tests/test_add_dataset_entries_integration.py (8)
16-21: Resolved: env-driven gating for API key and dataset ID (and skip behavior) looks correct. This addresses the prior feedback about hard-coded IDs and flakiness. Good use of dotenv + unittest.SkipTest; a sketch of the pattern follows the next line.
Also applies to: 35-44
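For reference, a minimal sketch of the gating pattern being praised, assuming the env variable names used elsewhere in this suite (the dataset-ID variable name is an assumption):

import os
import unittest
from dotenv import load_dotenv

load_dotenv()

class DatasetEntriesIntegrationTest(unittest.TestCase):
    def setUp(self):
        self.api_key = os.getenv("MAXIM_API_KEY")
        self.dataset_id = os.getenv("MAXIM_DATASET_ID")
        if not self.api_key or not self.dataset_id:
            # SkipTest marks the case as skipped instead of failing CI on forks
            raise unittest.SkipTest("MAXIM_API_KEY / MAXIM_DATASET_ID not configured")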
131-138: Same note: use centralized OUTPUT_COL or verify “Test” is the canonical column.
196-205: Same note for DatasetEntry JSON column key (“Test”).
207-216: Same note for DatasetEntry JSON column key (“Test”).
266-274: Same note for DatasetEntry JSON column key (“Test”).
284-292: Same note for DatasetEntry JSON column key (“Test”).
349-359: Same note for dict entry output column (“Test”).
361-370: Same note for DatasetEntry JSON column key ("Test").

maxim/tests/test_fireworks.py (2)

29-29: Initialize Maxim with api_key and explicit repo id; skip on missing secrets. Current call omits api_key/repo id; Maxim() will raise if MAXIM_API_KEY isn't set, and logger() will fail without a repo id. Align with the sync/async patterns suggested previously.

Apply:

- self.logger = Maxim({"base_url": baseUrl}).logger()
+ if not fireworksApiKey or not apiKey or not repoId:
+     self.skipTest("Missing FIREWORKS_API_KEY / MAXIM_API_KEY / MAXIM_LOG_REPO_ID")
+ self.logger = Maxim({"base_url": baseUrl, "api_key": apiKey}).logger({"id": repoId})
475-476: Async setup: mirror the same skip-and-logger-id pattern. Parity keeps CI behavior consistent.

Apply:

- self.logger = Maxim({"base_url": baseUrl}).logger()
+ if not fireworksApiKey or not apiKey or not repoId:
+     self.skipTest("Missing FIREWORKS_API_KEY / MAXIM_API_KEY / MAXIM_LOG_REPO_ID")
+ self.logger = Maxim({"base_url": baseUrl, "api_key": apiKey}).logger({"id": repoId})

maxim/tests/test_add_dataset_entries_comprehensive.py (1)
10-15: Remove module-level dotenv loading and env caching; read in setUp. This pattern breaks test hermeticity and makes env order-dependent. Prior feedback already flagged this.

Apply:

-from dotenv import load_dotenv
-load_dotenv()
-
-MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL")
-MAXIM_API_KEY = os.getenv("MAXIM_API_KEY")
+from dotenv import load_dotenv, find_dotenv

Then, in setUp (see next comment), load and read env with safe defaults.
pyproject.toml (1)
24-62: Align dependency constraints for determinism. Mixing an exact pin (langchain==0.3.25) with many ~=/>= ranges increases resolver churn. Consider pinning dev groups or adding upper bounds (e.g., <0.4.0) for stability.

Example:

- "langchain==0.3.25",
+ "langchain~=0.3.25",

Optionally commit a lock (uv pip compile or requirements.txt) for CI reproducibility.
.github/workflows/tests.yml (8)
3-9: Fix YAML lint (quote "on", add spaces after commas in branch lists). Prevents yamllint errors/warnings and keeps parsers happy.

-on:
+"on":
   push:
-    branches: [main,beta]
+    branches: [main, beta]
   pull_request:
-    branches: [main,beta]
+    branches: [main, beta]
   types: [opened, synchronize, reopened, ready_for_review]

10-15: Add top-level concurrency to cancel superseded runs. Saves CI minutes and noise on rapid pushes.

 permissions:
   contents: read
   checks: write
   pull-requests: write
+concurrency:
+  group: tests-${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true

15-20: Gate jobs for forked PRs and add timeouts. Secrets won't be available on forks; add guard. Also prevent hung runs.

 test-main:
   name: Test Main SDK (Python 3.9)
   runs-on: ubuntu-latest
-  if: github.event.pull_request.draft == false
+  if: ${{ github.event.pull_request.draft == false && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.fork == false) }}
+  timeout-minutes: 30
@@
 additional-tests:
   name: Test Additional Integrations (Python 3.10)
   runs-on: ubuntu-latest
-  if: github.event.pull_request.draft == false
+  if: ${{ github.event.pull_request.draft == false && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.fork == false) }}
+  timeout-minutes: 30

Also applies to: 99-103
21-23: Harden action pinning; remove brittle pyproject mutation; pin Python and frozen dev sync.
- Pin actions to SHAs; avoid “latest”.
- Drop cp/sed/mv dance; use groups + --frozen.
- Pin Python patch versions for determinism.
- uses: actions/checkout@v4
+ # TODO: pin to immutable SHA for v4
+ # uses: actions/checkout@<sha-for-v4>

- - name: Backup pyproject.toml
-   run: |
-     cp pyproject.toml pyproject.toml.bak
-
- - name: Remove additional dependencies
-   run: |
-     sed -i.bak '/additional_dev = \[/,/\]/d' pyproject.toml

  - name: Install uv
-   uses: astral-sh/setup-uv@v4
+   # TODO: pin to immutable SHA for v4
+   uses: astral-sh/setup-uv@v4
    with:
-     version: "latest"
+     version: "0.5.x"

  - name: Set up Python 3.9
-   run: uv python install 3.9
+   run: uv python install 3.9.20

  - name: Install dependencies (dev only)
    run: |
-     uv sync --python 3.9
+     uv sync --group dev --python 3.9.20 --frozen
@@
- - name: Restore pyproject.toml
-   run: |
-     mv pyproject.toml.bak pyproject.toml
+ # No restore needed; pyproject.toml not mutated

Also applies to: 32-43, 24-31, 95-97
44-79: Ensure junit dir exists; optionally fail fast on missing secrets. Prevents junit write errors; clearer diagnostics.

  - name: Run main tests (excluding CrewAI)
    env:
@@
-   run: |
-     uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml
+   run: |
+     mkdir -p junit
+     uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml

81-93: Publish artifacts/results with always() and pin actions to SHAs. Ensures publishing on cancel and improves supply-chain hygiene.

- - name: Upload main test results
-   uses: actions/upload-artifact@v4
-   if: success() || failure()
+ - name: Upload main test results
+   uses: actions/upload-artifact@<sha-for-v4>
+   if: always()
@@
- - name: Publish main test results
-   uses: EnricoMi/publish-unit-test-result-action@v2
-   if: success() || failure()
+ - name: Publish main test results
+   uses: EnricoMi/publish-unit-test-result-action@<sha-for-v2>
+   if: always()
    with:
      junit_files: "junit/main-test-results.xml"
      check_name: "Main Test Results"

Also applies to: 123-135

104-118: Pin Python 3.10; install dev-only with frozen; fix step label; pin actions. Improves determinism and clarity.

- - uses: actions/checkout@v4
+ # TODO: pin to immutable SHA for v4
  - name: Install uv
-   uses: astral-sh/setup-uv@v4
+   uses: astral-sh/setup-uv@v4  # TODO: pin to SHA
    with:
-     version: "latest"
+     version: "0.5.x"
  - name: Set up Python 3.10
-   run: uv python install 3.10
+   run: uv python install 3.10.15
- - name: Install dependencies (CrewAI only)
+ - name: Install dependencies (dev only)
    run: |
-     uv sync --python 3.10
+     uv sync --group dev --python 3.10.15 --frozen

119-121: Pass required env and create junit dir for additional tests. Avoids failures due to missing creds/paths.

- - name: Run additional integration tests
-   run: |
-     uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
+ - name: Run additional integration tests
+   env:
+     OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+     ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
+     MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }}
+     MAXIM_BASE_URL: ${{ secrets.MAXIM_BASE_URL }}
+     TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
+   run: |
+     mkdir -p junit
+     uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (7)
- .github/workflows/tests.yml (1 hunks)
- maxim/tests/test_add_dataset_entries_comprehensive.py (10 hunks)
- maxim/tests/test_add_dataset_entries_integration.py (8 hunks)
- maxim/tests/test_agno.py (2 hunks)
- maxim/tests/test_crewai.py (1 hunks)
- maxim/tests/test_fireworks.py (2 hunks)
- pyproject.toml (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
maxim/tests/test_fireworks.py (1)
maxim/maxim.py (2)
logger(897-942)
Maxim(127-1045)
maxim/tests/test_agno.py (3)
maxim/maxim.py (1)
logger(897-942)
maxim/logger/agno/client.py (1)
instrument_agno(440-465)
maxim/tests/mock_writer.py (1)
inject_mock_writer(243-259)
maxim/tests/test_add_dataset_entries_integration.py (1)
maxim/models/dataset.py (1)
Variable(23-81)
maxim/tests/test_crewai.py (1)
maxim/maxim.py (1)
logger(897-942)
🪛 YAMLlint (1.37.1)
.github/workflows/tests.yml
[warning] 3-3: truthy value should be one of [false, true]
(truthy)
[error] 5-5: too few spaces after comma
(commas)
[error] 7-7: too few spaces after comma
(commas)
🪛 GitHub Actions: Run Tests
pyproject.toml
[warning] 64-64: uv sync --python 3.9: Failed to parse pyproject.toml during settings discovery (TOML parse error at line 64, column 22).
[error] 64-64: uv sync --python 3.9: TOML parse error in pyproject.toml: key with no value, expected '=' at line 64, column 22. The line 'crewai==0.130.0' appears misformatted; should be 'crewai = "0.130.0"'.
🔇 Additional comments (5)
maxim/tests/test_add_dataset_entries_integration.py (1)
46-60: Double-check reliance on private Maxim._instance. Deleting a private attribute is brittle. Prefer a public reset/close if available; otherwise consider constructing a fresh client without touching internals.
Would you like me to scan the codebase for a public reset/close API and update this accordingly?
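If no public reset exists, one contained workaround is to confine the private-attribute handling to a single base class, so a future public API only needs adopting in one place. A sketch, assuming the import path these tests already use; Maxim._instance stays private, so this remains a workaround:

import unittest
from maxim import Maxim

class MaximSingletonTestCase(unittest.TestCase):
    def setUp(self):
        # Save and drop the singleton so each test constructs a fresh client.
        self._saved_instance = getattr(Maxim, "_instance", None)
        if hasattr(Maxim, "_instance"):
            delattr(Maxim, "_instance")

    def tearDown(self):
        # Restore whatever was there before to avoid cross-test leakage.
        if self._saved_instance is not None:
            Maxim._instance = self._saved_instance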
maxim/tests/test_crewai.py (1)
25-25: File maxim/tests/test_crewai.py not found; there is no such test file in the repository, so disregard this comment. Likely an incorrect or invalid review comment.
maxim/tests/test_fireworks.py (1)
33-35: No action required: "llama4-maverick-instruct-basic" and deployment_type="serverless" are valid per the Fireworks SDK as of August 2025.

pyproject.toml (1)
64-68: No action needed: quotes already present in additional_dev
The crewai==0.130.0 entry is already quoted and pyproject.toml parses successfully. Likely an incorrect or invalid review comment.
.github/workflows/tests.yml (1)
58-62: Do not remove PORTKEY_VIRTUAL_KEY; it's referenced in maxim/tests/test_portkey.py as the provider argument and is required for those tests. Likely an incorrect or invalid review comment.
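For context, a sketch of the dependency being defended; the client fields follow Portkey's documented Python SDK, but the exact wiring lives in test_portkey.py, not here:

import os
from portkey_ai import Portkey

# PORTKEY_VIRTUAL_KEY selects provider credentials on Portkey's side,
# which is why the workflow must keep exporting this secret for the tests.
client = Portkey(
    api_key=os.getenv("PORTKEY_API_KEY"),
    virtual_key=os.getenv("PORTKEY_VIRTUAL_KEY"),
)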
  config = Config(
-     api_key="test-api-key",
-     base_url="https://app.getmaxim.ai",
+     api_key=MAXIM_API_KEY,
+     base_url=MAXIM_BASE_URL,
      debug=True,
      raise_exceptions=True
  )
🛠️ Refactor suggestion
Do not pass None into Config; load env in setUp with safe defaults
Maxim requires an API key; without defaults, CI will fail. Build kwargs conditionally or provide test-safe defaults.
Apply:
- config = Config(
- api_key=MAXIM_API_KEY,
- base_url=MAXIM_BASE_URL,
- debug=True,
- raise_exceptions=True
- )
+ # Resolve env at setup time
+ load_dotenv(find_dotenv(usecwd=True), override=False)
+ api_key = os.getenv("MAXIM_API_KEY", "test-api-key")
+ base_url = os.getenv("MAXIM_BASE_URL", "http://localhost:8000")
+ cfg = {"debug": True, "raise_exceptions": True}
+ # Only add keys if present to avoid None-paths; we provide safe defaults above.
+ if api_key:
+ cfg["api_key"] = api_key
+ if base_url:
+ cfg["base_url"] = base_url
+ config = Config(**cfg)Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In maxim/tests/test_add_dataset_entries_comprehensive.py around lines 24-29,
avoid passing None into Config by loading env vars in setUp with safe defaults:
import os, read api_key = os.getenv('MAXIM_API_KEY', 'test-api-key') and
base_url = os.getenv('MAXIM_BASE_URL', 'http://localhost'), then construct
Config using those values (and other flags) so kwargs are never None;
alternatively build the kwargs dict conditionally (only include a key if env
value exists) or use the safe defaults to ensure CI won't fail when env vars are
absent.
from maxim.logger.agno import instrument_agno
from maxim.tests.mock_writer import inject_mock_writer

load_dotenv()
🧹 Nitpick (assertive)
Avoid module-scope env override; resolve .env deterministically
Module-level load_dotenv can clobber existing env in CI. Use find_dotenv and avoid override.
Apply:
-from dotenv import load_dotenv
+from dotenv import load_dotenv, find_dotenv
@@
-load_dotenv()
+load_dotenv(find_dotenv(usecwd=True), override=False)
🤖 Prompt for AI Agents
In maxim/tests/test_agno.py around line 9, a module-level call to load_dotenv()
can clobber CI environment; replace this with a deterministic, non-overriding
load: use dotenv.find_dotenv() to locate a .env path and call load_dotenv(path,
override=False) only when a path is found, and move the call out of module scope
into a test setup fixture or inside tests so it doesn't run unconditionally at
import time.
  logger = Logger(
      {"id": os.getenv("MAXIM_LOG_REPO_ID", "test-repo")},
-     api_key="test-api-key",
-     base_url="https://app.getmaxim.ai",
+     api_key=os.getenv("MAXIM_API_KEY"),
+     base_url=os.getenv("MAXIM_BASE_URL"),
  )
🛠️ Refactor suggestion
Provide safe defaults for logger construction to keep tests hermetic
Passing None for api_key/base_url can fail constructor. Tests use a mock writer, so test-safe defaults are fine.
Apply:
- logger = Logger(
- {"id": os.getenv("MAXIM_LOG_REPO_ID", "test-repo")},
- api_key=os.getenv("MAXIM_API_KEY"),
- base_url=os.getenv("MAXIM_BASE_URL"),
- )
+ logger = Logger(
+ {"id": os.getenv("MAXIM_LOG_REPO_ID", "test-repo")},
+ api_key=os.getenv("MAXIM_API_KEY") or "test-api-key",
+ base_url=os.getenv("MAXIM_BASE_URL") or "https://app.getmaxim.ai",
+ )
🤖 Prompt for AI Agents
In maxim/tests/test_agno.py around lines 21 to 25, the Logger is constructed
with api_key and base_url that may be None if environment variables are absent,
which can cause the constructor to fail; change the calls to use safe test
defaults (e.g., api_key=os.getenv("MAXIM_API_KEY", "test-api-key") and
base_url=os.getenv("MAXIM_BASE_URL", "http://localhost")) so the test remains
hermetic while still using the mock writer.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pyproject.toml (1)
11-11: Avoid conflicting specs across runtime/dev; align versions.
- filetype and typing-extensions appear in both [project.dependencies] and dev with different/unspecified versions.
- Align runtime deps and remove duplicates from dev to avoid resolver churn.
-[project]
-...
-dependencies = ["typing-extensions", "filetype", "httpx>=0.28.1"]
+[project]
+...
+dependencies = ["typing-extensions~=4.13.2", "filetype~=1.2.0", "httpx~=0.28.1"]

 [dependency-groups]
 dev = [
-  "filetype~=1.2.0",
-  "typing-extensions~=4.13.2",
+  # removed: already declared in runtime

Also applies to: 40-41, 49-51
♻️ Duplicate comments (13)
pyproject.toml (2)
10-10: Loosen requires-python or pin CI interpreter patch. Mismatch with CI (3.9) and brittle lower bound (>=3.9.20). Prefer widening to >=3.9 (optionally cap upper), or pin Python 3.9.20 in CI.

-requires-python = ">=3.9.20"
+requires-python = ">=3.9"
+# optionally: ">=3.9, <3.13"

15-20: Add explicit Python classifiers (3.9/3.10) to match CI.

 classifiers = [
   "Intended Audience :: Developers",
   "Programming Language :: Python :: 3",
+  "Programming Language :: Python :: 3.9",
+  "Programming Language :: Python :: 3.10",
   "Operating System :: Unix",

.github/workflows/tests.yml (11)

3-7: Quote "on" and fix branch list spacing (yamllint).

-on:
+"on":
   push:
-    branches: [main,beta]
+    branches: [main, beta]
   pull_request:
-    branches: [main,beta]
+    branches: [main, beta]

19-19: Broken condition on push events; guard drafts/forks safely. Accessing github.event.pull_request on push causes evaluation errors; also gate forks.

- if: github.event.pull_request.draft == false
+ if: ${{ github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false) }}
22-23: Pin actions to SHAs and avoid “latest” tool drift.
- Pin actions/checkout and astral-sh/setup-uv to immutable SHAs.
- Replace uv version "latest" with a pinned series (or exact).
- - uses: actions/checkout@v4
+ - uses: actions/checkout@<sha-for-v4>
...
- - name: Install uv
-   uses: astral-sh/setup-uv@v4
-   with:
-     version: "latest"
+ - name: Install uv
+   uses: astral-sh/setup-uv@<sha-for-v4>
+   with:
+     version: "0.5.x"

Also applies to: 32-36, 110-114, 108-108

24-31: Stop mutating pyproject.toml in CI; use uv groups instead. Backup/sed/restore is brittle; select groups in uv sync.

- - name: Backup pyproject.toml
-   run: |
-     cp pyproject.toml pyproject.toml.bak
-
- - name: Remove additional dependencies
-   run: |
-     sed -i.bak '/additional_dev = \[/,/\]/d' pyproject.toml
...
- - name: Restore pyproject.toml
-   run: |
-     mv pyproject.toml.bak pyproject.toml
+ # No pyproject.toml mutation required

Also applies to: 98-101

37-43: Pin Python patch and install dev-only with frozen resolver.

- - name: Set up Python 3.9
-   run: uv python install 3.9
+ - name: Set up Python 3.9
+   run: uv python install 3.9.20

- - name: Install dependencies (dev only)
-   run: |
-     uv sync --python 3.9
+ - name: Install dependencies (dev only)
+   run: |
+     uv sync --group dev --python 3.9.20 --frozen

Also applies to: 40-42
44-82: Create junit dir; consider pruning unused secrets.
- Ensure junit/ exists before pytest.
- If PORTKEY_VIRTUAL_KEY is unused, drop it.
  - name: Run main tests (excluding CrewAI)
@@
-   run: |
-     uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml
+   run: |
+     mkdir -p junit
+     uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml

If PORTKEY_VIRTUAL_KEY is unused, remove its env entry.

84-96: Publish even on cancel; pin action SHAs.

- uses: actions/upload-artifact@v4
- if: success() || failure()
+ uses: actions/upload-artifact@<sha-for-v4>
+ if: always()
...
- uses: EnricoMi/publish-unit-test-result-action@v2
- if: success() || failure()
+ uses: EnricoMi/publish-unit-test-result-action@<sha-for-v2>
+ if: always()

Also applies to: 123-135

102-106: Apply same safe gating to additional-tests job.

- if: github.event.pull_request.draft == false
+ if: ${{ github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false) }}

130-133: Create junit dir and pass required env to integration tests.

- - name: Run additional integration tests
-   run: |
-     uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
+ - name: Run additional integration tests
+   env:
+     OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+     ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
+     MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }}
+     MAXIM_BASE_URL: ${{ secrets.MAXIM_BASE_URL }}
+     TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
+   run: |
+     mkdir -p junit
+     uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml

134-146: Use always() for artifacts/publisher; pin SHAs; ensure permissions. Mirror main job changes and keep checks publishing on cancel.

- uses: actions/upload-artifact@v4
- if: success() || failure()
+ uses: actions/upload-artifact@<sha-for-v4>
+ if: always()
...
- uses: EnricoMi/publish-unit-test-result-action@v2
- if: success() || failure()
+ uses: EnricoMi/publish-unit-test-result-action@<sha-for-v2>
+ if: always()

10-13: Add workflow-level concurrency to cancel superseded runs.

 permissions:
   contents: read
   checks: write
   pull-requests: write
+concurrency:
+  group: tests-${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true

Also applies to: 15-16
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- .github/workflows/tests.yml (1 hunks)
- pyproject.toml (1 hunks)
🧰 Additional context used
🪛 YAMLlint (1.37.1)
.github/workflows/tests.yml
[warning] 3-3: truthy value should be one of [false, true]
(truthy)
[error] 5-5: too few spaces after comma
(commas)
[error] 7-7: too few spaces after comma
(commas)
🔇 Additional comments (1)
pyproject.toml (1)
31-42: Align LangChain versions: change "langchain==0.3.25" to "langchain~=0.3.25" and bump langchain-community and langchain-openai to ~=0.3.25 to ensure consistency.
- name: Backup .python-version
  run: |
    cp .python-version .python-version.bak

- name: Update .python-version for CrewAI
  run: |
    echo "3.10.0" > .python-version
🛠️ Refactor suggestion
Avoid editing .python-version in CI.
These steps are brittle; remove them and install the desired Python directly.
- - name: Backup .python-version
- run: |
- cp .python-version .python-version.bak
-
- - name: Update .python-version for CrewAI
- run: |
- echo "3.10.0" > .python-version
...
- - name: Restore .python-version
- if: always()
- run: |
- mv .python-version.bak .python-version
+ # No .python-version mutation required

Also applies to: 148-151
🤖 Prompt for AI Agents
.github/workflows/tests.yml lines 115-122 (and also lines 148-151): remove the
steps that copy and overwrite .python-version in the CI; instead configure the
runner's Python using the official actions/setup-python (or
actions/setup-python@v4) with the required version (e.g., 3.10.0) so CI doesn't
mutate repository files; delete the backup/echo steps and replace them with a
single setup-python step specifying the desired python-version and cache options
as needed.
- name: Set up Python 3.10
  run: uv python install 3.10

- name: Install dependencies (CrewAI only)
  run: |
    uv sync --python 3.10
Ensure additional_dev deps are installed; pin Python and freeze.
CrewAI lives in additional_dev; install both dev and additional_dev.
- - name: Set up Python 3.10
- run: uv python install 3.10
+ - name: Set up Python 3.10
+ run: uv python install 3.10.15
- - name: Install dependencies (CrewAI only)
- run: |
- uv sync --python 3.10
+ - name: Install dependencies (dev + additional_dev)
+ run: |
+     uv sync --group dev --group additional_dev --python 3.10.15 --frozen
🤖 Prompt for AI Agents
In .github/workflows/tests.yml around lines 123 to 129, the workflow currently
mis-invokes Python setup and only installs CrewAI’s dev deps; update the step to
pin/setup Python 3.10 correctly, replace the incorrect "uv python install 3.10"
with a proper pinned Python setup, then ensure dependency installation runs uv
(or the project’s installer) to install both dev and additional_dev groups for
CrewAI, and finally freeze the installed packages to a requirements
file/artifact so the exact resolved deps are recorded.
| "agno~=1.7.1", | ||
| "ollama~=0.5.1", | ||
| "pyyaml~=6.0.0", | ||
| "pydoc-markdown~=2.1.3", | ||
| "pytest~=8.4.0", | ||
| "boto3~=1.7.84", | ||
| "langchain-anthropic~=0.3.15", | ||
| "langchain-community~=0.3.25", | ||
| "langchain-openai~=0.3.23", | ||
| "langgraph~=0.4.8", | ||
| "flask~=3.1.1", | ||
| "google-genai~=1.18.0", | ||
| "litellm>=1.72.0", | ||
| "botocore~=1.10.84", | ||
| "anthropic~=0.54.0", | ||
| "python-dotenv>=1.0.0", | ||
| "filetype~=1.2.0", | ||
| "langchain==0.3.25", | ||
| "livekit~=1.0.8", | ||
| "livekit-agents[openai]~=1.0.22", | ||
| "livekit-agents[google]~=1.0", | ||
| "livekit-api~=1.0.2", | ||
| "portkey-ai~=1.13.0", | ||
| "requests~=2.32.3", | ||
| "build~=1.2.2.post1", | ||
| "typing-extensions~=4.13.2", | ||
| "urllib3~=2.4.0", | ||
| "setuptools~=80.9.0", | ||
| "wheel~=0.45.1", | ||
| "openai-agents~=0.0.17", | ||
| "mistralai~=1.8.2", | ||
| "together~=1.4.6", | ||
| "groq~=0.30.0", | ||
| "fireworks-ai~=0.17.9", | ||
| "livekit-plugins-noise-cancellation~=0.2", | ||
| "llama-index-core~=0.12.49", | ||
| "llama-index-llms-openai~=0.4.7", | ||
| "tavily-python~=0.7.9", | ||
| ] |
🧹 Nitpick (assertive)
Normalize specifiers; prefer consistent ~= caps for fast-moving libs.
- Use ~= instead of >= for litellm and python-dotenv to bound minor drift.
- "litellm>=1.72.0",
+ "litellm~=1.72.0",
- "python-dotenv>=1.0.0",
+ "python-dotenv~=1.0.0",📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "agno~=1.7.1", | |
| "ollama~=0.5.1", | |
| "pyyaml~=6.0.0", | |
| "pydoc-markdown~=2.1.3", | |
| "pytest~=8.4.0", | |
| "boto3~=1.7.84", | |
| "langchain-anthropic~=0.3.15", | |
| "langchain-community~=0.3.25", | |
| "langchain-openai~=0.3.23", | |
| "langgraph~=0.4.8", | |
| "flask~=3.1.1", | |
| "google-genai~=1.18.0", | |
| "litellm>=1.72.0", | |
| "botocore~=1.10.84", | |
| "anthropic~=0.54.0", | |
| "python-dotenv>=1.0.0", | |
| "filetype~=1.2.0", | |
| "langchain==0.3.25", | |
| "livekit~=1.0.8", | |
| "livekit-agents[openai]~=1.0.22", | |
| "livekit-agents[google]~=1.0", | |
| "livekit-api~=1.0.2", | |
| "portkey-ai~=1.13.0", | |
| "requests~=2.32.3", | |
| "build~=1.2.2.post1", | |
| "typing-extensions~=4.13.2", | |
| "urllib3~=2.4.0", | |
| "setuptools~=80.9.0", | |
| "wheel~=0.45.1", | |
| "openai-agents~=0.0.17", | |
| "mistralai~=1.8.2", | |
| "together~=1.4.6", | |
| "groq~=0.30.0", | |
| "fireworks-ai~=0.17.9", | |
| "livekit-plugins-noise-cancellation~=0.2", | |
| "llama-index-core~=0.12.49", | |
| "llama-index-llms-openai~=0.4.7", | |
| "tavily-python~=0.7.9", | |
| ] | |
| "google-genai~=1.18.0", | |
| "litellm~=1.72.0", | |
| "botocore~=1.10.84", | |
| "anthropic~=0.54.0", | |
| "python-dotenv~=1.0.0", | |
| "filetype~=1.2.0", |
🤖 Prompt for AI Agents
In pyproject.toml around lines 24 to 62, two dependency specifiers use >= which
is inconsistent with the project's use of ~=; change "litellm>=1.72.0" to use
the compatible release operator (e.g. "litellm~=1.72.0") and change
"python-dotenv>=1.0.0" to "python-dotenv~=1.0.0" so both are bounded similarly
to other fast-moving libraries.
additional_dev = [
    "livekit-agents[silero]~=1.0",
    "crewai==0.130.0",
    "smolagents~=1.21.2"
💡 Verification agent
🧩 Analysis chain
Ensure CI actually installs additional_dev where needed.
The 3.10 job must include this group (otherwise CrewAI tests may miss deps). See workflow comments for the specific fix.
🏁 Script executed:
#!/bin/bash
find .github/workflows -type f
Length of output: 60
🏁 Script executed:
#!/bin/bash
sed -n '1,200p' .github/workflows/tests.yml
Length of output: 6042
🏁 Script executed:
#!/bin/bash
sed -n '1,120p' pyproject.toml
Length of output: 1910
Include the additional_dev group in the Python 3.10 job
Update the “Install dependencies (CrewAI only)” step in .github/workflows/tests.yml to run:
uv sync --python 3.10 --group additional_dev
Otherwise the CrewAI integration tests will miss required dependencies.
🤖 Prompt for AI Agents
In pyproject.toml around lines 64 to 67, the new dependency group additional_dev
is defined but the CI job for Python 3.10 doesn't install it; update the
"Install dependencies (CrewAI only)" step in .github/workflows/tests.yml to run
the install command with the additional_dev group and Python 3.10 (i.e., use uv
sync --python 3.10 --group additional_dev) so the CrewAI integration tests have
the required packages.
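On the test side, the failure mode can also be softened. The sketch below is hypothetical (it is not part of this PR) and assumes only pytest's standard importorskip helper; if the additional_dev group is missing, the CrewAI tests skip instead of erroring at import time:

import pytest

# Skip this module cleanly when the optional dependency group is absent;
# "crewai" is the package pinned under additional_dev.
crewai = pytest.importorskip("crewai", reason="additional_dev group not installed")


def test_crewai_import_smoke() -> None:
    # Minimal smoke check; the real suite exercises Maxim's CrewAI tracing.
    assert crewai is not None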
Actionable comments posted: 3
♻️ Duplicate comments (14)
pyproject.toml (2)
15-19: Add explicit Python version classifiers to match CI (3.9, 3.10).
Keeps metadata aligned with workflow matrix.
 classifiers = [
     "Intended Audience :: Developers",
     "Programming Language :: Python :: 3",
+    "Programming Language :: Python :: 3.9",
+    "Programming Language :: Python :: 3.10",
     "Operating System :: Unix",
     "Operating System :: MacOS :: MacOS X",
     "Operating System :: Microsoft :: Windows",
 ]
64-67: Ensure CI installs this group for integration tests.
The workflow should sync both dev and additional_dev for the “Additional Integrations” job; otherwise CrewAI/SmolAgents tests will fail.
Proposed change in workflow (see tests.yml comments):
- uv sync --python 3.10
+ uv sync --group dev --group additional_dev --python 3.10.15 --frozen
.github/workflows/tests.yml (12)
107-114: Pin uv installer; avoid “latest”.
Deterministic toolchain.
  - name: Install uv
    uses: astral-sh/setup-uv@v4
    with:
-     version: "latest"
+     version: "0.5.x"
81-83: Ensure junit dir exists before pytest writes XML.
Prevents artifact upload failures.
- run: |
-   uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml
+ run: |
+   mkdir -p junit
+   uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml
103-106: Apply the same event-aware gating to additional-tests.
- if: github.event.pull_request.draft == false
+ if: ${{ github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false) }}
98-101: Remove restore step; pyproject is no longer mutated.
Avoids unnecessary file ops.
- - name: Restore pyproject.toml
-   run: |
-     mv pyproject.toml.bak pyproject.toml
+ # No restore needed
24-31: Stop mutating pyproject in CI; use uv groups instead.
Backup/sed/restore is brittle and error-prone.
- - name: Backup pyproject.toml
-   run: |
-     cp pyproject.toml pyproject.toml.bak
-
- - name: Remove additional dependencies
-   run: |
-     sed -i.bak '/^additional_dev = \[$/,/]$/d' pyproject.toml
+ # No pyproject.toml mutation required; control installs via uv groups
130-133: Create junit dir and pass required env for integration tests.
- - name: Run additional integration tests
-   run: |
-     uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
+ - name: Run additional integration tests
+   env:
+     OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+     ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
+     MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }}
+     MAXIM_BASE_URL: ${{ secrets.MAXIM_BASE_URL }}
+     TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
+   run: |
+     mkdir -p junit
+     uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml
84-96: Publish steps should run on all outcomes; also pin action SHAs.
More robust artifact/check publishing.
- if: success() || failure()
+ if: always()
@@
- if: success() || failure()
+ if: always()
Also replace action tags with immutable SHAs.
32-43: Pin tool/runtime and install dev-only with frozen resolution.
Improves determinism and speed.
  - name: Install uv
    uses: astral-sh/setup-uv@v4
    with:
-     version: "latest"
+     version: "0.5.x"
@@
- - name: Set up Python 3.9
-   run: uv python install 3.9
+ - name: Set up Python 3.9
+   run: uv python install 3.9.20
@@
- - name: Install dependencies (dev only)
-   run: |
-     uv sync --python 3.9
+ - name: Install dependencies (dev only)
+   run: |
+     uv sync --group dev --python 3.9.20 --frozen
Note: Consider pinning actions to SHAs for supply-chain hardening.
19-20: Gating is not event-aware; access to pull_request fields on push will fail.
Gate drafts and forks only on PRs; always run on push.
- if: github.event.pull_request.draft == false
+ if: ${{ github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false) }}
15-15: Add workflow-level concurrency to cancel superseded runs.
Speeds up CI on rapid pushes. Note the concurrency block belongs at the workflow level, above jobs:
+ concurrency:
+   group: ${{ github.workflow }}-${{ github.ref }}
+   cancel-in-progress: true
jobs:
134-147: Use always() for artifacts/publishing; pin action SHAs.
Consistency with main job.
- if: success() || failure()
+ if: always()
@@
- if: success() || failure()
+ if: always()
Also pin actions to immutable SHAs.
3-7: Fix YAML lint (quote “on”, add spaces after commas).
Prevents lint/parser edge cases.
-on:
+"on":
   push:
-    branches: [main,beta]
+    branches: [main, beta]
   pull_request:
-    branches: [main,beta]
+    branches: [main, beta]
     types: [opened, synchronize, reopened, ready_for_review]
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
.github/workflows/tests.yml (1 hunks)
pyproject.toml (1 hunks)
🧰 Additional context used
🪛 YAMLlint (1.37.1)
.github/workflows/tests.yml
[warning] 3-3: truthy value should be one of [false, true]
(truthy)
[error] 5-5: too few spaces after comma
(commas)
[error] 7-7: too few spaces after comma
(commas)
🔇 Additional comments (2)
pyproject.toml (1)
24-62: Resolve duplicated/competing deps and align spec styles (prevents resolver churn).
- Duplicates: filetype and typing-extensions appear in both the dev group and [project.dependencies]. Remove them from the dev group.
- Mixed styles: most libs use ~=, but python-dotenv and litellm use >=, and the langchain family is pinned with ==/~= inconsistently. Use ~= for all and pin the entire langchain suite to the same micro-release.
Suggested edits in the dev group:
- "python-dotenv>=1.0.0",
+ "python-dotenv~=1.0",
@@
- "litellm>=1.72.0",
+ "litellm~=1.72",
@@
- "langchain-openai~=0.3.23",
+ "langchain-openai~=0.3.25",
And ensure your
[project.dependencies] includes:
[project]
dependencies = [
    "typing-extensions~=4.13.2",
    "filetype~=1.2.0",
    # other deps…
]
Verification: the prior pip-compile script failed (“uv: command not found”). Please manually confirm (via your lockfile or pip-compile/Poetry lock) that all langchain-family packages resolve to 0.3.25 (or a single, consistent version) before merging.
.github/workflows/tests.yml (1)
45-63: Retain PORTKEY_VIRTUAL_KEY env var: It’s referenced in maxim/tests/test_portkey.py (line 19), so it must remain.
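For context, a hedged sketch of that dependency; the class and method names here are illustrative, and only the os.getenv read is assumed from the test file:

import os
import unittest


class TestPortkey(unittest.TestCase):
    def setUp(self) -> None:
        # CI must export PORTKEY_VIRTUAL_KEY for these tests to run;
        # skipping keeps runs green when the secret is absent.
        self.virtual_key = os.getenv("PORTKEY_VIRTUAL_KEY")
        if not self.virtual_key:
            self.skipTest("PORTKEY_VIRTUAL_KEY is not set")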
| - name: Backup .python-version | ||
| run: | | ||
| cp .python-version .python-version.bak | ||
| - name: Update .python-version for CrewAI | ||
| run: | | ||
| echo "3.10.0" > .python-version | ||
| - name: Set up Python 3.10 | ||
| run: uv python install 3.10 |
🛠️ Refactor suggestion
Remove .python-version hacks; install pinned 3.10 directly.
Simpler and less brittle.
- - name: Backup .python-version
- run: |
- cp .python-version .python-version.bak
-
- - name: Update .python-version for CrewAI
- run: |
- echo "3.10.0" > .python-version
-
- name: Set up Python 3.10
- run: uv python install 3.10
+   run: uv python install 3.10.15
Also remove the final “Restore .python-version” step (see below).
📝 Committable suggestion
| - name: Set up Python 3.10 | |
| run: uv python install 3.10.15 |
🤖 Prompt for AI Agents
In .github/workflows/tests.yml around lines 115 to 124, remove the three-step
".python-version" backup/update hack and instead install Python 3.10 directly
with the standard setup action; delete the prior "Backup .python-version" and
"Update .python-version for CrewAI" steps and replace the "Set up Python 3.10"
step with a single action that installs/pins Python 3.10 (use the official
setup-python action and set the version to 3.10), and also delete the later
"Restore .python-version" step mentioned in the comment.
| - name: Install dependencies (CrewAI only) | ||
| run: | | ||
| uv sync --python 3.10 | ||
🛠️ Refactor suggestion
Install required groups for integrations with frozen resolution.
- - name: Install dependencies (CrewAI only)
- run: |
- uv sync --python 3.10
+ - name: Install dependencies (dev + additional_dev)
+ run: |
+     uv sync --group dev --group additional_dev --python 3.10.15 --frozen
📝 Committable suggestion
| - name: Install dependencies (dev + additional_dev) | |
| run: | | |
| uv sync --group dev --group additional_dev --python 3.10.15 --frozen |
🤖 Prompt for AI Agents
In .github/workflows/tests.yml around lines 126 to 129, the workflow currently
runs a plain "uv sync --python 3.10" which does not install the integration
extras nor enforce frozen resolutions; update the step to invoke uv sync with
the flags that (1) install the required integrations group/extras and (2) enable
frozen resolution/lockfile enforcement (use the UV CLI options for specifying
groups/extras and the frozen-lock or equivalent flag) so the integrations
dependencies are installed exactly as pinned.
| - name: Restore .python-version | ||
| if: always() # run this step even if previous steps failed | ||
| run: | | ||
| mv .python-version.bak .python-version |
🧹 Nitpick (assertive)
Drop .python-version restore (no longer modified).
- - name: Restore .python-version
- if: always() # run this step even if previous steps failed
- run: |
- mv .python-version.bak .python-version
+ # No restore needed
📝 Committable suggestion
| # No restore needed |
🤖 Prompt for AI Agents
.github/workflows/tests.yml around lines 148 to 151: the workflow contains a
step that restores .python-version using mv .python-version.bak .python-version
but the file is no longer modified earlier in the job; remove this entire step
(including name, if condition and run block) to avoid failing or no-op
operations and update any subsequent step numbering or references if applicable.
Actionable comments posted: 1
♻️ Duplicate comments (15)
pyproject.toml (2)
10-10: Loosen requires-python or pin interpreter patch in CI.
Patch-level lower bound (>=3.9.20) is brittle and conflicts with your CI installing “3.9”. Either relax to >=3.9 or pin Python 3.9.20 in CI.
-requires-python = ">=3.9.20"
+requires-python = ">=3.9"
+# Optionally bound upper versions you haven't validated yet:
+# requires-python = ">=3.9, <3.13"
15-19: Add explicit Python version classifiers to match CI matrix.
Reflect 3.9 and 3.10 support in Trove classifiers.
 classifiers = [
     "Intended Audience :: Developers",
     "Programming Language :: Python :: 3",
+    "Programming Language :: Python :: 3.9",
+    "Programming Language :: Python :: 3.10",
     "Operating System :: Unix",
     "Operating System :: MacOS :: MacOS X",
     "Operating System :: Microsoft :: Windows",
 ]
.github/workflows/tests.yml (13)
3-7: Fix YAML lint issues: quote “on” and add spaces after commas.
Prevents linter noise and YAML 1.1 “on/off” ambiguity.
-on:
+"on":
   push:
-    branches: [main,beta]
+    branches: [main, beta]
   pull_request:
-    branches: [main,beta]
+    branches: [main, beta]
15-15: Add workflow-level concurrency to cancel superseded runs.
Saves minutes on rapid pushes. Place the concurrency block at the workflow level, above jobs:
+ concurrency:
+   group: tests-${{ github.workflow }}-${{ github.ref }}
+   cancel-in-progress: true
jobs:
19-20: Gate jobs for push vs PR, drafts, and forks (secrets-safe).
Current condition errors on push and runs on fork PRs without secrets.
- if: github.event.pull_request.draft == false
+ if: ${{ github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false) }}
Apply the same change to the additional-tests job (Line 105).
24-31: Stop mutating pyproject.toml in CI; install only the dev group.
Backup/sed/restore is brittle. Use uv group selection and git for restore if ever needed.
- - name: Backup pyproject.toml
-   run: |
-     cp pyproject.toml pyproject.toml.bak
-
- - name: Remove additional dependencies
-   run: |
-     sed -i.bak '/^additional_dev = \[$/,/]$/d' pyproject.toml
+ # No pyproject.toml mutation required
@@
- - name: Restore pyproject.toml
-   run: |
-     mv pyproject.toml.bak pyproject.toml
+ # No restore needed
And restrict installation to the dev group (see Line 40 comment).
Also applies to: 98-101
32-36: Pin uv installer version (and prefer SHAs for actions).
Avoid “latest” drift; also pin actions to immutable SHAs.
- - name: Install uv
-   uses: astral-sh/setup-uv@v4
-   with:
-     version: "latest"
+ - name: Install uv
+   uses: astral-sh/setup-uv@v4
+   with:
+     version: "0.5.x"  # or a specific tested version
- actions/checkout@v4 → actions/checkout@
- astral-sh/setup-uv@v4 → astral-sh/setup-uv@
Would you like me to fetch the current SHAs?
37-43: Pin Python patch to match requires-python or relax the requirement.
Either install 3.9.20 explicitly or adjust pyproject to >=3.9.- - name: Set up Python 3.9 - run: uv python install 3.9 + - name: Set up Python 3.9 + run: uv python install 3.9.20 @@ - uv sync --python 3.9 + uv sync --group dev --python 3.9.20 --frozen
44-82: Ensure junit dir exists before pytest; keep secrets minimal.
Create junit/ to avoid artifact failures; consider trimming unused secrets (e.g., PORTKEY_VIRTUAL_KEY) if not needed.- name: Run main tests (excluding CrewAI) @@ - run: | - uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml + run: | + mkdir -p junit + uv run pytest maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xmlIf PORTKEY_VIRTUAL_KEY is unused, remove it from env.
84-96: Publish artifacts/results even on cancelled runs; pin actions.
Use always(); consider pinning action SHAs.- uses: actions/upload-artifact@v4 - if: success() || failure() + uses: actions/upload-artifact@<sha-for-v4> + if: always() @@ - uses: EnricoMi/publish-unit-test-result-action@v2 - if: success() || failure() + uses: EnricoMi/publish-unit-test-result-action@<sha-for-v2> + if: always()
102-106: Apply the same draft/fork gating to the second job.
Mirror main job’s if condition for consistency and secret safety.- if: github.event.pull_request.draft == false + if: ${{ github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false) }}
110-114: Pin uv installer version in second job too.
Mirror the first job’s change.- - name: Install uv - uses: astral-sh/setup-uv@v4 - with: - version: "latest" + - name: Install uv + uses: astral-sh/setup-uv@v4 + with: + version: "0.5.x"
115-123: Don’t edit .python-version in CI; remove backup/restore steps.
Let the workflow select the interpreter; avoid mutating repo files.- - name: Backup .python-version - run: | - cp .python-version .python-version.bak - - - name: Update .python-version for CrewAI - run: | - echo "3.10.0" > .python-version @@ - - name: Restore .python-version - if: always() # run this step even if previous steps failed - run: | - mv .python-version.bak .python-version + # No .python-version mutation requiredAlso applies to: 148-151
123-129: Pin Python 3.10 patch and install dev-only with frozen resolution.
Stabilizes runs and avoids pulling extras.- - name: Set up Python 3.10 - run: uv python install 3.10 + - name: Set up Python 3.10 + run: uv python install 3.10.15 @@ - - name: Install dependencies (CrewAI only) - run: | - uv sync --python 3.10 + - name: Install dependencies (dev only) + run: | + uv sync --group dev --python 3.10.15 --frozen
130-146: Pass required env, create junit dir, and make publishing resilient.
Ensure integration tests have secrets and junit path; reuse always() + pinned actions.- - name: Run additional integration tests - run: | - uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml + - name: Run additional integration tests + env: + OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }} + ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }} + MAXIM_API_KEY: ${{ secrets.MAXIM_API_KEY }} + MAXIM_BASE_URL: ${{ secrets.MAXIM_BASE_URL }} + TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }} + run: | + mkdir -p junit + uv run pytest maxim/tests/test_crewai.py --junitxml=junit/additional-test-results.xml @@ - - name: Upload additional test results - uses: actions/upload-artifact@v4 - if: success() || failure() + - name: Upload additional test results + uses: actions/upload-artifact@<sha-for-v4> + if: always() @@ - - name: Publish additional test results - uses: EnricoMi/publish-unit-test-result-action@v2 - if: success() || failure() + - name: Publish additional test results + uses: EnricoMi/publish-unit-test-result-action@<sha-for-v2> + if: always()
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
.github/workflows/tests.yml (1 hunks)
pyproject.toml (1 hunks)
🧰 Additional context used
🪛 YAMLlint (1.37.1)
.github/workflows/tests.yml
[warning] 3-3: truthy value should be one of [false, true]
(truthy)
[error] 5-5: too few spaces after comma
(commas)
[error] 7-7: too few spaces after comma
(commas)
🔇 Additional comments (2)
pyproject.toml (2)
64-67: Good: optional integrations isolated in additional_dev.
Keeping CrewAI and Smolagents in a separate group keeps the core CI install lean.
30-33: Align LangChain family constraints to 0.3.25
Pin all LangChain packages to the same minor release to avoid resolver churn:
- "langchain-community~=0.3.25",
- "langchain-openai~=0.3.23",
+ "langchain-community~=0.3.25",
+ "langchain-openai~=0.3.25",
@@
- "langchain==0.3.25",
+ "langchain~=0.3.25",
Manually verify with your resolver of choice, for example:
pip-compile --quiet --python-version=3.10 pyproject.toml | rg -n 'langchain'
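The same invariant can also be asserted at runtime. A minimal sketch, assuming the packages are importable in the environment under test (importlib.metadata is in the standard library):

from importlib.metadata import version

# Fail fast if the langchain family has drifted apart after resolution.
packages = ["langchain", "langchain-community", "langchain-openai"]
versions = {pkg: version(pkg) for pkg in packages}
assert len(set(versions.values())) == 1, f"version drift: {versions}"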
| "litellm>=1.72.0", | ||
| "botocore~=1.10.84", | ||
| "anthropic~=0.54.0", | ||
| "python-dotenv>=1.0.0", | ||
| "filetype~=1.2.0", | ||
| "langchain==0.3.25", |
🧹 Nitpick (assertive)
Prefer consistent “~=” caps over “>=” for dev deps to stabilize CI.
Two entries still use >= which invites drift.
- "litellm>=1.72.0",
+ "litellm~=1.72.0",
@@
- "python-dotenv>=1.0.0",
+ "python-dotenv~=1.0.0",📝 Committable suggestion
| "litellm>=1.72.0", | |
| "botocore~=1.10.84", | |
| "anthropic~=0.54.0", | |
| "python-dotenv>=1.0.0", | |
| "filetype~=1.2.0", | |
| "langchain==0.3.25", | |
| "litellm~=1.72.0", | |
| "botocore~=1.10.84", | |
| "anthropic~=0.54.0", | |
| "python-dotenv~=1.0.0", | |
| "filetype~=1.2.0", | |
| "langchain==0.3.25", |
🤖 Prompt for AI Agents
In pyproject.toml around lines 36 to 41, two dev dependency entries use ">="
which allows version drift; change "litellm>=1.72.0" to use the compatible
release operator "~=1.72.0" and change "python-dotenv>=1.0.0" to "~=1.0.0" so
all dev deps consistently use "~=" to stabilize CI.
Actionable comments posted: 1
♻️ Duplicate comments (13)
maxim/tests/test_add_dataset_entries_comprehensive.py (1)
10-14: Stop module-level dotenv/env caching; build Config in setUp with safe defaults.
Import-time load_dotenv() and caching MAXIM_* make tests order-dependent and hard to override. Also avoid passing None into Config.
-from dotenv import load_dotenv
-load_dotenv()
-
-MAXIM_BASE_URL = os.getenv("MAXIM_BASE_URL")
-MAXIM_API_KEY = os.getenv("MAXIM_API_KEY")
@@
-        config = Config(
-            api_key=MAXIM_API_KEY,
-            base_url=MAXIM_BASE_URL,
-            debug=True,
-            raise_exceptions=True
-        )
+        from dotenv import load_dotenv, find_dotenv
+        load_dotenv(find_dotenv(usecwd=True), override=False)
+        api_key = os.getenv("MAXIM_API_KEY", "test-api-key")
+        base_url = os.getenv("MAXIM_BASE_URL", "http://localhost:8000")
+        config = Config(
+            api_key=api_key,
+            base_url=base_url,
+            debug=True,
+            raise_exceptions=True,
+        )
Also applies to: 24-29
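Pulled out of diff form, the suggested pattern looks like this; a self-contained sketch where the localhost default is the diff's assumption, not a documented endpoint:

import os
import unittest

from dotenv import find_dotenv, load_dotenv


class EnvDrivenTestCase(unittest.TestCase):
    def setUp(self) -> None:
        # Resolve .env from the working directory, and never override values
        # the CI job already exported; tests stay order-independent.
        load_dotenv(find_dotenv(usecwd=True), override=False)
        self.api_key = os.getenv("MAXIM_API_KEY", "test-api-key")
        self.base_url = os.getenv("MAXIM_BASE_URL", "http://localhost:8000")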
maxim/tests/test_prompts.py (4)
4-4: Avoid import-time dotenv loading; resolve .env deterministically.
-from dotenv import load_dotenv
+from dotenv import load_dotenv, find_dotenv
@@
-load_dotenv()
+load_dotenv(find_dotenv(usecwd=True), override=False)
Also applies to: 8-8
10-15: Reduce brittleness: don't cache env at import-time; remove noisy print; supply safe defaults in setUp.
-apiKey = os.getenv("MAXIM_API_KEY")
-promptId = os.getenv("MAXIM_PROMPT_1_ID")
-promptVersionId = os.getenv("PROMPT_VERSION_ID")
-folderID = os.getenv("FOLDER_ID")
-baseUrl = os.getenv("MAXIM_BASE_URL")
@@
-        print("Base URL: >>> ", baseUrl)
-        self.maxim = Maxim(
-            {
-                "api_key": apiKey,
-                "debug": True,
-                "prompt_management": True,
-                "base_url": baseUrl
-            }
-        )
+        api_key = os.getenv("MAXIM_API_KEY", "test-api-key")
+        base_url = os.getenv("MAXIM_BASE_URL", "https://app.getmaxim.ai")
+        self.maxim = Maxim({
+            "api_key": api_key,
+            "debug": True,
+            "prompt_management": True,
+            "base_url": base_url,
+        })
Optional helper to skip when env-driven ids are absent:
# place near top-level in this file
def require_env(testcase: unittest.TestCase, *keys: str) -> None:
    missing = [k for k in keys if not os.getenv(k)]
    if missing:
        testcase.skipTest(f"Missing env: {', '.join(missing)}")
Also applies to: 23-31
42-53: Handle backend 403s by skipping and avoid tenant-coupled assertions.
-        prompt = self.maxim.get_prompt(
-            promptId,
+        require_env(self, "MAXIM_PROMPT_1_ID")
+        pid = os.getenv("MAXIM_PROMPT_1_ID")
+        try:
+            prompt = self.maxim.get_prompt(
+                pid,
                 QueryBuilder().and_().deployment_var("Environment", "Prod").build(),
-        )
-        if prompt is None:
-            raise Exception("Prompt not found")
-        self.assertEqual(prompt.prompt_id, promptId)
-        self.assertEqual(prompt.version_id, os.getenv("MAXIM_PROMPT_1_VERSION_1_ID"))
-        self.assertEqual(prompt.model, "gpt-4o")
-        self.assertEqual(prompt.provider, "openai")
-        self.assertEqual(prompt.messages[0].content, "You are a helpful assistant. You talk like Chandler from Friends.")
+            )
+        except Exception as e:
+            if "403" in str(e).lower() or "forbidden" in str(e).lower():
+                self.skipTest("Backend returned 403 Forbidden")
+            raise
+        self.assertIsNotNone(prompt, "Prompt not found")
+        self.assertEqual(prompt.prompt_id, pid)
+        # Guard on version id presence if provided
+        vid = os.getenv("MAXIM_PROMPT_1_VERSION_1_ID")
+        if vid:
+            self.assertEqual(prompt.version_id, vid)
+        # Avoid brittle model/content coupling; assert invariants
+        self.assertTrue(prompt.model)
+        self.assertTrue(prompt.provider)
+        self.assertTrue(prompt.messages and prompt.messages[0].content)
237-248: Folder tests: skip when env missing; assert on id; avoid brittle name; fix boolean positional.
-    def test_getFolderUsingId(self):
-        folder = self.maxim.get_folder_by_id(os.getenv("MAXIM_FOLDER_1_ID"))
-        if folder is None:
-            raise Exception("Folder not found")
-        self.assertEqual(folder.name, "SDK Tests")
+    def test_getFolderUsingId(self) -> None:
+        fid = os.getenv("MAXIM_FOLDER_1_ID")
+        if not fid:
+            self.skipTest("MAXIM_FOLDER_1_ID is not set")
+        folder = self.maxim.get_folder_by_id(fid)
+        if folder is None:
+            self.skipTest("Configured folder not found or 403")
+        self.assertEqual(folder.id, fid)
@@
-    def test_getFolderUsingTags(self):
+    def test_getFolderUsingTags(self) -> None:
         folders = self.maxim.get_folders(
-            QueryBuilder().and_().tag("Testing", True).build()
+            QueryBuilder().and_().tag("Testing", value=True).build()
         )
-        self.assertEqual(folders[0].name, "SDK Tests")
-        self.assertEqual(len(folders), 1)
+        self.assertTrue(isinstance(folders, list))
+        self.assertGreaterEqual(len(folders), 0)
.github/workflows/tests.yml (8)
3-7: YAML hygiene: quote “on” and fix branch list spacing.
-on:
+"on":
   push:
-    branches: [main,beta]
+    branches: [main, beta]
   pull_request:
-    branches: [main,beta]
+    branches: [main, beta]
15-20: Gate jobs for pushes and non-fork, non-draft PRs (secrets-safe).
  test-main:
    name: Test Main SDK (Python 3.9)
    runs-on: ubuntu-latest
-   if: github.event.pull_request.draft == false
+   if: ${{ github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false) }}
Also add top-level concurrency to cancel superseded runs:
  permissions:
    contents: read
    checks: write
    pull-requests: write
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
24-31: Don't mutate pyproject in CI; remove backup/sed/restore steps.
- - name: Backup pyproject.toml
-   run: |
-     cp pyproject.toml pyproject.toml.bak
@@
- - name: Remove additional dependencies
-   run: |
-     sed -i.bak '/^additional_dev = \[$/,/]$/d' pyproject.toml
@@
- - name: Restore pyproject.toml
-   run: |
-     mv pyproject.toml.bak pyproject.toml
+ # No pyproject.toml mutation required; install selected groups with uv
Also applies to: 112-115
32-43: Pin tools/interpreters and install only needed groups with frozen lock.
- - name: Install uv
-   uses: astral-sh/setup-uv@v4
-   with:
-     version: "latest"
+ - name: Install uv
+   uses: astral-sh/setup-uv@v4
+   with:
+     version: "0.5.x"
@@
- - name: Set up Python 3.9
-   run: uv python install 3.9
+ - name: Set up Python 3.9
+   run: uv python install 3.9.20
@@
- - name: Install dependencies (dev only)
-   run: |
-     uv sync --python 3.9
+ - name: Install dependencies (dev only)
+   run: |
+     uv sync --group dev --python 3.9.20 --frozen
Also applies to: 124-128, 137-143
44-83: Ensure junit directory exists and pass required env; remove noisy debug step.
- run: |
-   uv run pytest -v maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml
+ run: |
+   mkdir -p junit
+   uv run pytest -v maxim/tests/test_prompts.py maxim/tests/test_openai.py maxim/tests/test_llamaindex.py maxim/tests/test_together.py maxim/tests/test_fireworks.py maxim/tests/test_groq.py maxim/tests/test_add_dataset_entries_comprehensive.py maxim/tests/test_maxim_core_simple.py --junitxml=junit/main-test-results.xml
@@
- - name: Debug environment
-   run: |
-     echo "Event name: ${{ github.event_name }}"
-     echo "PR from fork: ${{ github.event.pull_request.head.repo.full_name != github.repository }}"
-     echo "Base ref: ${{ github.base_ref }}"
-     echo "Head ref: ${{ github.head_ref }}"
-     echo "Repository: ${{ github.repository }}"
-     # Test if secrets are available (without exposing them)
-     if [ -z "${{ secrets.OPENAI_API_KEY }}" ]; then
-       echo "OPENAI_API_KEY is not available"
-     else
-       echo "OPENAI_API_KEY is available"
-     fi
+ # Removed debug step; avoid using untrusted context in shell
Also applies to: 144-147
98-107: Publish artifacts/results even on failure/cancel; use always().
- - name: Upload main test results
    uses: actions/upload-artifact@v4
-   if: success() || failure()
+   if: always()
@@
- - name: Publish main test results
    uses: EnricoMi/publish-unit-test-result-action@v2
-   if: success() || failure()
+   if: always()
Also applies to: 148-161
116-120: Mirror gating on additional-tests to avoid fork/draft failures.
  additional-tests:
    name: Test Additional Integrations (Python 3.10)
    runs-on: ubuntu-latest
-   if: github.event.pull_request.draft == false
+   if: ${{ github.event_name != 'pull_request' || (github.event.pull_request.draft == false && github.event.pull_request.head.repo.fork == false) }}
129-136: Remove brittle .python-version backup/restore hacks.
- - name: Backup .python-version
-   run: |
-     cp .python-version .python-version.bak
-
- - name: Update .python-version for CrewAI
-   run: |
-     echo "3.10.0" > .python-version
@@
- - name: Restore .python-version
-   if: always()
-   run: |
-     mv .python-version.bak .python-version
+ # No .python-version mutation required
Also applies to: 162-165
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
.github/workflows/tests.yml (1 hunks)
maxim/tests/test_add_dataset_entries_comprehensive.py (10 hunks)
maxim/tests/test_prompts.py (12 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
maxim/tests/test_prompts.py (4)
maxim/maxim.py (4)
Maxim (127-1045), get_prompt (736-772), get_folder_by_id (849-873), get_folders (875-895)
maxim/models/query_builder.py (6)
QueryBuilder (18-124), and_ (29-37), deployment_var (72-88), build (107-124), tag (90-105), folder (49-60)
maxim/apis/maxim_apis.py (2)
get_prompt (312-347), get_folders (669-702)
maxim/runnable/prompt.py (1)
run(30-41)
🪛 Ruff (0.12.2)
maxim/tests/test_add_dataset_entries_comprehensive.py
263-263: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
maxim/tests/test_prompts.py
48-48: Create your own exception
(TRY002)
48-48: Avoid specifying long messages outside the exception class
(TRY003)
49-49: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
50-50: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
51-51: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
52-52: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
53-53: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
72-72: Create your own exception
(TRY002)
72-72: Avoid specifying long messages outside the exception class
(TRY003)
74-74: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
75-75: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
76-76: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
77-77: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
112-112: Create your own exception
(TRY002)
112-112: Avoid specifying long messages outside the exception class
(TRY003)
113-113: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
114-114: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
115-115: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
132-132: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
133-133: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
135-135: Missing return type annotation for public function test_getPrompt_with_deployment_variables_multiselect_includes
Add return type annotation: None
(ANN201)
143-143: Create your own exception
(TRY002)
143-143: Avoid specifying long messages outside the exception class
(TRY003)
144-144: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
145-145: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
149-149: Missing return type annotation for public function test_if_prompt_cache_works_fine
Add return type annotation: None
(ANN201)
235-235: Unnecessary list comprehension (rewrite using list())
Rewrite using list()
(C416)
237-237: Missing return type annotation for public function test_getFolderUsingId
Add return type annotation: None
(ANN201)
240-240: Create your own exception
(TRY002)
240-240: Avoid specifying long messages outside the exception class
(TRY003)
241-241: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
243-243: Missing return type annotation for public function test_getFolderUsingTags
Add return type annotation: None
(ANN201)
245-245: Boolean positional value in function call
(FBT003)
🪛 actionlint (1.7.7)
.github/workflows/tests.yml
85-85: "github.head_ref" is potentially untrusted. avoid using it directly in inline scripts. instead, pass it through an environment variable. see https://docs.github.com/en/actions/security-for-github-actions/security-guides/security-hardening-for-github-actions for more details
(expression)
🪛 YAMLlint (1.37.1)
.github/workflows/tests.yml
[warning] 3-3: truthy value should be one of [false, true]
(truthy)
[error] 5-5: too few spaces after comma
(commas)
[error] 7-7: too few spaces after comma
(commas)
[error] 83-83: trailing spaces
(trailing-spaces)
🪛 GitHub Actions: Run Tests
maxim/tests/test_prompts.py
[error] 90-90: Test 'TestMaximPromptManagement.test_custom_prompt_execution' failed due to HTTP 403 Forbidden on Maxim API (/api/sdk/v4/prompts/run). Command: uv run pytest -v maxim/tests/test_prompts.py ...
[error] 240-240: Folder not found in test_getFolderUsingId. HTTP 403 Forbidden from Maxim API (/api/sdk/v3/folders). Command: uv run pytest -v maxim/tests/test_prompts.py ...
[error] 247-247: IndexError: list index out of range in test_getFolderUsingTags due to API access error (HTTP 403). Command: uv run pytest -v maxim/tests/test_prompts.py ...
[error] 104-104: Test 'test_get_prompt_with_deployment_variables_Environment_prod_and_TenantId_123' failed due to HTTP 403 Forbidden (GET /api/sdk/v4/prompts?promptId=...). Command: uv run pytest -v maxim/tests/test_prompts.py ...
[error] 120-120: Test 'test_get_prompt_with_deployment_variables_multiselect' failed due to HTTP 403 Forbidden (GET /api/sdk/v4/prompts?promptId=...). Command: uv run pytest -v maxim/tests/test_prompts.py ...
[error] 138-138: Test 'test_get_prompt_with_deployment_variables_multiselect_includes' failed due to HTTP 403 Forbidden (GET /api/sdk/v4/prompts?promptId=...). Command: uv run pytest -v maxim/tests/test_prompts.py ...
[error] 63-63: Test 'test_get_prompt_with_multiselect_deployment_variables_and_execute' failed due to HTTP 403 Forbidden (GET /api/sdk/v4/prompts?promptId=...). Command: uv run pytest -v maxim/tests/test_prompts.py ...
[error] 43-43: Test 'test_get_prompt_with_deployment_variables' failed due to HTTP 403 Forbidden (GET /api/sdk/v4/prompts?promptId=...). Command: uv run pytest -v maxim/tests/test_prompts.py ...
[error] 174-174: Test 'test_if_fallback_works_fine' failed due to HTTP 403 Forbidden (GET /api/sdk/v4/prompts?promptId=...). Command: uv run pytest -v maxim/tests/test_prompts.py ...
[error] 187-187: Test 'test_if_fallback_works_fine_forceful' failed due to HTTP 403 Forbidden (GET /api/sdk/v4/prompts?promptId=...). Command: uv run pytest -v maxim/tests/test_prompts.py ...
[error] 150-150: Test 'test_if_prompt_cache_works_fine' failed due to HTTP 403 Forbidden (GET /api/sdk/v4/prompts?promptId=...). Command: uv run pytest -v maxim/tests/test_prompts.py ...
| # Verify the upload process was initiated | ||
| self.assertEqual(mock_client.request.call_count, 4) | ||
| self.assertEqual(mock_client.request.call_count, 2) | ||
|
|
🧹 Nitpick (assertive)
Ruff PT009: keep or suppress consistently.
File uses unittest.TestCase assertions everywhere. Either keep self.assertEqual(...) for consistency or migrate all to bare assert. Don’t mix styles.
🧰 Tools
🪛 Ruff (0.12.2)
263-263: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
🤖 Prompt for AI Agents
In maxim/tests/test_add_dataset_entries_comprehensive.py around lines 262-264,
the test mixes assertion styles (Ruff PT009); standardize to the TestCase style
used by the test class: replace any bare assert statements in this file with the
appropriate self.assert* calls (e.g., convert "assert X == Y" to
"self.assertEqual(X, Y)" and similar for truthiness/inequality), ensuring all
assertions use unittest.TestCase methods consistently.
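To make the trade-off concrete, a small illustrative sketch of the two styles PT009 distinguishes (the test names are hypothetical):

import unittest


class AssertionStyles(unittest.TestCase):
    def test_testcase_style(self) -> None:
        call_count = 2
        # unittest style: richer failure messages; matches this file's convention.
        self.assertEqual(call_count, 2)

    def test_bare_assert_style(self) -> None:
        call_count = 2
        # pytest-style bare assert, which Ruff's PT009 prefers; fine, but don't mix.
        assert call_count == 2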
No description provided.