Introduce runtime modes with feature-gated initialization #253
base: main
Conversation
📝 Walkthrough

Split eager wiring into lazy, feature-gated initialization: the API router is split into a core router plus a lazily composed auth router.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Main as App Startup
    participant Settings as Settings
    participant API as API Router
    participant DB as DB Clients
    participant Discord as Discord Bot
    participant Queue as Queue Manager
    Main->>Settings: load configuration
    Settings-->>Main: return feature flags (discord_enabled, github_enabled, code_intelligence_enabled)
    Main->>API: include core_router
    alt github_enabled
        Main->>API: call get_auth_router()
        API-->>Main: returns composed auth/integrations router (deferred imports)
    end
    alt discord_enabled
        Main->>Discord: init bot (queue_manager optional)
        alt code_intelligence_enabled
            Main->>Queue: create queue_manager
            Discord->>Queue: register handlers on_ready
            Discord->>Discord: get_classifier() (lazy init)
        else
            Discord-->>Main: run in Discord-only mode
        end
        Main->>Discord: start bot
    end
    alt code_intelligence_enabled
        Main->>DB: test Weaviate connection (deferred client import)
    end
    Main-->>Main: Application ready
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
backend/main.py (1)
109-117: Guard shutdown calls when components are absent.
`self.discord_bot` can be set to `None` on startup failure, and `self.queue_manager` may be unset in minimal/discord-only modes.

🛠️ Suggested fix

```diff
-        if settings.discord_enabled and not self.discord_bot.is_closed():
+        if settings.discord_enabled and self.discord_bot and not self.discord_bot.is_closed():
             await self.discord_bot.close()
             logger.info("Discord bot has been closed.")
@@
-        await self.queue_manager.stop()
-        logger.info("Queue manager has been stopped.")
+        if self.queue_manager:
+            await self.queue_manager.stop()
+            logger.info("Queue manager has been stopped.")
```

backend/app/api/v1/health.py (1)
46-55: Return “disabled” when Weaviate isn’t enabled. Currently the endpoint reports “not_ready” even when the feature is disabled.
🛠️ Suggested fix
```diff
     try:
-        is_ready = None
-        if settings.code_intelligence_enabled:
+        if settings.code_intelligence_enabled:
             from app.database.weaviate.client import get_weaviate_client
             async with get_weaviate_client() as client:
-                is_ready = await client.is_ready()
+                is_ready = await client.is_ready()
+            status = "ready" if is_ready else "not_ready"
+        else:
+            status = "disabled"
         return {
             "service": "weaviate",
-            "status": "ready" if is_ready else "not_ready"
+            "status": status
         }
```
🤖 Fix all issues with AI agents
In `@backend/app/api/router.py`:
- Around line 12-33: The module currently creates a top-level auth_router
variable (auth_router) but never populates it and still exports it via __all__;
update the module to either remove the unused top-level auth_router and delete
it from __all__, or wire the existing get_auth_router() result into that symbol
(assign auth_router = get_auth_router()) so callers receive a populated router;
ensure __all__ only exports valid objects (e.g., include "auth_router" only if
you assign it, or remove it if you delete the top-level variable).
In `@backend/app/api/v1/health.py`:
- Around line 7-30: The health_check function currently relies on a module-level
mutable dict services and an unused request parameter; change this by removing
the top-level services variable and creating a fresh local services = {} inside
health_check, drop the unused request parameter from the function signature (and
remove the unused Request import), then keep the existing logic that checks
settings.code_intelligence_enabled and calls get_weaviate_client/is_ready to
populate the local services dict before returning the response.
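A minimal runnable sketch of the shape this comment asks for: a fresh local `services` dict per call and no unused parameters. The `FakeWeaviateClient` and the plain `settings` namespace here are stand-ins for the project's real Weaviate client and pydantic settings, not the actual implementations.

```python
import asyncio
from types import SimpleNamespace


class FakeWeaviateClient:
    # Stand-in for the real app.database.weaviate client (assumed interface).
    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        return False

    async def is_ready(self) -> bool:
        return True


async def health_check(settings) -> dict:
    services = {}  # fresh per call; no module-level mutable state
    if settings.code_intelligence_enabled:
        async with FakeWeaviateClient() as client:
            services["weaviate"] = "ready" if await client.is_ready() else "not_ready"
    else:
        services["weaviate"] = "disabled"
    return {"status": "healthy", "services": services}


result = asyncio.run(health_check(SimpleNamespace(code_intelligence_enabled=False)))
print(result["services"]["weaviate"])  # disabled
```

Because the dict is built inside the handler, concurrent requests can never observe each other's partially written state.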
In `@backend/app/core/config/settings.py`:
- Around line 10-12: Ensure GitHub enablement is gated on a non-empty
backend_url: update the logic that sets or evaluates github_enabled in
settings.py to require backend_url to be truthy (non-empty) before allowing
github_enabled == True; locate the Settings/config structure where backend_url
and github_enabled are defined and change the computed value or validation so
github_enabled is false if backend_url is empty, and add a clear
validation/error message if needed to prevent invalid OAuth callbacks (affects
the backend_url and github_enabled fields and any initialization/validation code
using them).
In `@backend/app/database/supabase/client.py`:
- Around line 4-17: The RuntimeError in get_supabase_client() should report
which Supabase configuration keys are missing; modify the check around
settings.supabase_url and settings.supabase_key (used in get_supabase_client and
referenced via _client) to collect the names of any missing/empty values (e.g.,
"supabase_url", "supabase_key") and raise a RuntimeError that includes that list
(for example: "Supabase is not configured: missing supabase_key, supabase_url"),
so callers immediately know which settings to fix.
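A sketch of the missing-key reporting described above. The real function reads the app's pydantic settings and constructs a Supabase client; here a `SimpleNamespace` and a placeholder return value stand in for both, so only the error-building logic is real.

```python
from types import SimpleNamespace


def get_supabase_client(settings):
    # Collect every missing key so callers see the full fix in one error.
    missing = [name for name in ("supabase_url", "supabase_key")
               if not getattr(settings, name, None)]
    if missing:
        raise RuntimeError(f"Supabase is not configured: missing {', '.join(missing)}")
    return "client"  # placeholder for the real client construction


try:
    get_supabase_client(SimpleNamespace(supabase_url="https://x.supabase.co",
                                        supabase_key=None))
except RuntimeError as e:
    print(e)  # Supabase is not configured: missing supabase_key
```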
In `@backend/integrations/discord/bot.py`:
- Around line 4-8: The code references QueuePriority at runtime in
_handle_devrel_message but currently imports it only under TYPE_CHECKING,
causing a NameError; fix by performing a lazy/runtime import of QueuePriority
(e.g., inside _handle_devrel_message or at top-level without TYPE_CHECKING) so
the symbol is available at runtime—locate the typing block that currently reads
"if TYPE_CHECKING: from app.core.orchestration.queue_manager import
AsyncQueueManager, QueuePriority" and either move QueuePriority into a normal
import or add a local "from app.core.orchestration.queue_manager import
QueuePriority" inside the _handle_devrel_message function before it is used.
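The pattern at issue, shown runnable with a stdlib module standing in for `app.core.orchestration.queue_manager`: a `TYPE_CHECKING` import exists only for type checkers, so any runtime use needs its own local import.

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Visible to type checkers only; this binding does NOT exist at runtime.
    from fractions import Fraction


def handle_message(numerator: int, denominator: int) -> str:
    # Runtime use needs a real import, so import locally here
    # (mirrors importing QueuePriority inside _handle_devrel_message).
    from fractions import Fraction
    return str(Fraction(numerator, denominator))


print(handle_message(2, 4))  # 1/2
```

Without the local import, calling the function would raise `NameError` even though static analysis is satisfied.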
In `@backend/integrations/discord/cogs.py`:
- Around line 416-419: The check incorrectly calls the boolean property
settings.code_intelligence_enabled() causing a TypeError; change the conditional
to reference the property without calling it (use
settings.code_intelligence_enabled) in the deletion block that calls
falkor_unavailable (see the if block around settings.code_intelligence_enabled,
logger.info and await falkor_unavailable(interaction)), leaving the rest of the
flow unchanged.
- Around line 18-27: send_github_unavailable currently always uses
interaction.followup.send which causes "interaction not responded" errors when
the interaction hasn't been responded to yet; update send_github_unavailable to
mirror the safe-send pattern used by falkor_unavailable: check
interaction.response.is_done() and call interaction.followup.send if true,
otherwise call interaction.response.send_message, preserving the same embed,
ephemeral=True behavior so the message is sent correctly whether or not the
interaction has been responded to.
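A runnable sketch of the safe-send pattern the comment describes, using hand-rolled fakes in place of discord.py's `Interaction` objects (the fake classes and message text are assumptions, not the project's code).

```python
import asyncio


class FakeResponse:
    # Minimal stand-in for discord.InteractionResponse.
    def __init__(self, done: bool):
        self._done = done
        self.sent = None

    def is_done(self) -> bool:
        return self._done

    async def send_message(self, content: str, ephemeral: bool = False):
        self.sent = content


class FakeFollowup:
    def __init__(self):
        self.sent = None

    async def send(self, content: str, ephemeral: bool = False):
        self.sent = content


class FakeInteraction:
    def __init__(self, already_responded: bool):
        self.response = FakeResponse(already_responded)
        self.followup = FakeFollowup()


async def send_github_unavailable(interaction):
    msg = "GitHub integration is not configured."
    # Followup is only valid after the initial response; otherwise respond directly.
    if interaction.response.is_done():
        await interaction.followup.send(msg, ephemeral=True)
    else:
        await interaction.response.send_message(msg, ephemeral=True)


fresh = FakeInteraction(already_responded=False)
asyncio.run(send_github_unavailable(fresh))
print(fresh.response.sent is not None)  # True
```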
In `@backend/main.py`:
- Around line 94-99: The code calls get_weaviate_client inside
test_weaviate_connection but never imports it, causing a NameError; fix by
adding the proper import for get_weaviate_client (or performing a local import
inside test_weaviate_connection) from the module that provides it so the symbol
is defined when test_weaviate_connection runs; locate references to
get_weaviate_client and ensure the import statement is added/adjusted
accordingly.
- Around line 33-49: Add a default initializer for self.queue_manager so it
always exists in minimal/discord-only modes and avoid AttributeError in
stop_background_tasks: set self.queue_manager = None alongside self.discord_bot
and self.agent_coordinator at the top of the constructor, and ensure
stop_background_tasks (or any code using self.queue_manager) checks for
truthiness before calling methods; reference symbols: self.queue_manager,
stop_background_tasks, DiscordBot, AsyncQueueManager, AgentCoordinator,
settings.discord_enabled, settings.code_intelligence_enabled.
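The default-initializer fix above can be sketched like this; `FakeQueueManager` and the constructor flag are illustrative stand-ins for the project's `AsyncQueueManager` and settings.

```python
import asyncio


class FakeQueueManager:
    # Assumed stand-in for AsyncQueueManager.
    def __init__(self):
        self.stopped = False

    async def stop(self):
        self.stopped = True


class Application:
    def __init__(self, code_intelligence_enabled: bool):
        # Always define optional components so shutdown can test truthiness.
        self.discord_bot = None
        self.agent_coordinator = None
        self.queue_manager = FakeQueueManager() if code_intelligence_enabled else None

    async def stop_background_tasks(self):
        if self.queue_manager:
            await self.queue_manager.stop()


app = Application(code_intelligence_enabled=False)
asyncio.run(app.stop_background_tasks())  # no AttributeError in minimal mode
```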
🧹 Nitpick comments (2)
backend/app/api/v1/health.py (1)
69-85: Consider returning a flat status string for Discord health. This keeps the response shape consistent with `/health/weaviate`.

♻️ Optional adjustment

```diff
-    services = {
-        "discord_bot": (
-            "running"
-            if app_instance.discord_bot
-            and not app_instance.discord_bot.is_closed()
-            else "stopped"
-        )
-    }
+    status = (
+        "running"
+        if app_instance.discord_bot
+        and not app_instance.discord_bot.is_closed()
+        else "stopped"
+    )
     return {
         "service": "discord_bot",
-        "status": services
+        "status": status
     }
```

backend/app/core/config/settings.py (1)
48-58: Consider removing the commented-out validator/config block. Keeping large commented sections tends to drift; git history already preserves prior behavior.
♻️ Optional cleanup
```diff
-    # @field_validator("supabase_url", "supabase_key", mode="before")
-    # @classmethod
-    # def _not_empty(cls, v, field):
-    #     if not v:
-    #         raise ValueError(f"{field.name} must be set")
-    #     return v
-
-    # model_config = ConfigDict(
-    #     env_file=".env",
-    #     extra="ignore"
-    # )  # to prevent errors from extra env variables
```
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/integrations/discord/cogs.py (1)
75-92: ⚠️ Potential issue | 🔴 Critical — Add missing import for the `cleanup_expired_tokens` function. The task method at line 89 calls `await cleanup_expired_tokens()`, but the function from `backend/app/services/auth/verification.py` is not imported. This will raise `NameError` at runtime when the task executes. Add to imports:
from app.services.auth.verification import cleanup_expired_tokens
🤖 Fix all issues with AI agents
In `@backend/app/llm/chat.py`:
- Around line 5-22: The chat_completion function lacks a guard for a missing
settings.gemini_api_key and accepts an unused context parameter; update
chat_completion to first check settings.gemini_api_key and raise or return a
clear error message when it's None before instantiating ChatGoogleGenerativeAI,
and either remove the unused context argument from the signature or make use of
it when building the HumanMessage/inputs (reference function name
chat_completion, symbol settings.gemini_api_key, and ChatGoogleGenerativeAI to
locate the code).
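A sketch of the guard described above. The real `chat_completion` instantiates `ChatGoogleGenerativeAI`; here that call is stubbed out, and the settings object is a plain namespace, so only the fail-fast check is the point.

```python
from types import SimpleNamespace


def chat_completion(prompt: str, settings) -> str:
    # Guard before touching the LLM client so a missing key fails with a clear message.
    if not getattr(settings, "gemini_api_key", None):
        raise RuntimeError("chat is unavailable: GEMINI_API_KEY is not configured")
    # The real code would call ChatGoogleGenerativeAI here; stubbed for the sketch.
    return f"(model reply to: {prompt})"


print(chat_completion("hello", SimpleNamespace(gemini_api_key="key")))
```

Raising before client construction turns a confusing downstream SDK error into an actionable configuration message.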
In `@backend/integrations/discord/bot.py`:
- Around line 100-115: The exception handler in _handle_devrel_direct can raise
ValueError when thread_id is falsy because it blindly calls
self.get_channel(int(thread_id)); change the except block to safely resolve the
target channel by: if thread_id is truthy, attempt to get the thread via
self.get_channel(int(thread_id)) and if that returns None fallback to
message.channel; if thread_id is falsy skip int() and directly use
message.channel; then send the error message to that resolved channel. Also
address the unused triage_result parameter on _handle_devrel_direct by either
removing it from the function signature (and updating the caller) or renaming it
to _triage_result to indicate intentional unused status.
In `@backend/integrations/discord/cogs.py`:
- Line 283: Fix the typo in the log message by updating the logger.info call
that currently logs "Idexing blocked: FalkorDB not configured" to the correct
wording "Indexing blocked: FalkorDB not configured"; locate the logger.info
invocation in backend/integrations/discord/cogs.py (the logger.info statement
shown in the diff) and change the string literal only.
- Around line 109-117: The reset command currently only enqueues the cleanup
task when self.queue is present, yet always informs the user the thread was
reset; update the block around self.queue so that if self.queue exists you keep
enqueuing cleanup with QueuePriority.HIGH, but if self.queue is None you call
the cleanup coroutine directly (await cleanup(...)) or explicitly send a
different message indicating background cleanup was skipped; adjust the
interaction.response.send_message text accordingly and ensure you still clear
any in-memory state like active_threads so the behavior and user-facing message
reflect whether cleanup actually ran.
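The reset-command fallback above can be sketched as follows; `FakeQueue`, the `cleanup` coroutine, and the user-facing strings are all illustrative stand-ins for the cog's real queue, task, and messages.

```python
import asyncio


class FakeQueue:
    # Assumed stand-in for the real queue's enqueue API.
    def __init__(self):
        self.enqueued = []

    async def enqueue(self, coro_fn, *args, priority=None):
        self.enqueued.append((coro_fn.__name__, args, priority))


cleaned = []


async def cleanup(thread_id):
    cleaned.append(thread_id)


async def reset_thread(queue, thread_id, active_threads) -> str:
    active_threads.discard(thread_id)  # always clear in-memory state
    if queue is not None:
        await queue.enqueue(cleanup, thread_id, priority="HIGH")
        return "Thread reset; cleanup scheduled in the background."
    await cleanup(thread_id)  # no queue: run cleanup inline so it still happens
    return "Thread reset; cleanup ran inline (queue disabled)."


msg = asyncio.run(reset_thread(None, 42, {42}))
print(msg)
```

Either way the user message now reflects what actually happened to their data.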
🧹 Nitpick comments (3)
backend/integrations/discord/bot.py (1)
199-200: Use `logger.exception` for full traceback context. Per static analysis hint, `logger.exception` automatically includes the traceback, making debugging easier.

♻️ Suggested fix

```diff
-            logger.error(f"Error handling agent response: {str(e)}")
+            logger.exception("Error handling agent response")
```

backend/app/core/config/settings.py (1)
66-76: Mixed configuration sources: Pydantic fields vs raw `os.getenv`. `code_intelligence_enabled` reads environment variables directly via `os.getenv()` while other settings use Pydantic's declarative approach. This inconsistency makes configuration harder to test (can't override via Settings instantiation) and violates the single-source-of-truth principle.

♻️ Suggested improvement

Consider declaring these as optional Pydantic fields:

```diff
+    # FalkorDB / CodeGraph configuration
+    falkordb_host: Optional[str] = None
+    falkordb_port: Optional[str] = None
+    codegraph_backend_url: Optional[str] = None
+    codegraph_secret_token: Optional[str] = None

     @property
     def code_intelligence_enabled(self) -> bool:
         """
         FalkorDB / GraphRAG / indexing.
         """
         return self.github_enabled and all([
-            os.getenv("FALKORDB_HOST"),
-            os.getenv("FALKORDB_PORT"),
-            os.getenv("CODEGRAPH_BACKEND_URL"),
-            os.getenv("CODEGRAPH_SECRET_TOKEN"),
+            self.falkordb_host,
+            self.falkordb_port,
+            self.codegraph_backend_url,
+            self.codegraph_secret_token,
         ])
```

This enables easier testing and maintains consistency with other configuration fields.
backend/integrations/discord/cogs.py (1)
67-67: Remove redundant exception object from `logging.exception`. `logger.exception()` automatically includes the exception info; passing `{e}` in the message is redundant.

♻️ Suggested fix

```diff
-            logger.exception(f"Failed to send GitHub unavailable DM to {user}: {e}")
+            logger.exception(f"Failed to send GitHub unavailable DM to {user}")
```
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@backend/main.py`:
- Around line 112-114: The shutdown code calls self.discord_bot.is_closed()
without ensuring self.discord_bot is not None; update the shutdown path to first
check that self.discord_bot is truthy (e.g., if self.discord_bot and not
self.discord_bot.is_closed():) before awaiting self.discord_bot.close() and
logging, so AttributeError is avoided when discord startup failed and
self.discord_bot is None; make this change around the shutdown logic referencing
self.discord_bot and the logger.info call.
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@backend/main.py`:
- Around line 65-70: The create_task call that starts the Discord bot must be
stored to avoid garbage collection: replace the unreferenced
asyncio.create_task(self.discord_bot.start(settings.discord_bot_token)) with an
assignment to an instance attribute (e.g., self.discord_task =
asyncio.create_task(...)) so the Task is retained; ensure any shutdown logic
later cancels or awaits self.discord_task if present. Also expand the truncated
docstring at the method beginning (the docstring currently reading "Starts the
background.") to a full descriptive sentence explaining what background services
are started (mention Discord bot task).
- Around line 73-79: The try/except block sets self.weaviate_enabled on failure
but that attribute is neither declared nor used; either remove the dead
assignment or implement the fallback: declare and initialize weaviate_enabled in
__init__ (e.g., self.weaviate_enabled = True/False) and update/consume it
elsewhere, or simply delete "self.weaviate_enabled = False" from the except
clause; locate the logic around test_weaviate_connection and __init__ to decide
which option fits the design and make the corresponding change.
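The task-retention fix for `start_background_tasks` can be sketched like this; the `Application` class and one-hour sleep stand in for the real app and `bot.start(token)`.

```python
import asyncio


class Application:
    def __init__(self):
        self.discord_task = None

    async def start_background_tasks(self):
        # Keep a reference; an unreferenced Task may be garbage-collected mid-flight.
        self.discord_task = asyncio.create_task(self._run_bot())

    async def _run_bot(self):
        await asyncio.sleep(3600)  # stands in for self.discord_bot.start(token)

    async def stop_background_tasks(self):
        if self.discord_task:
            self.discord_task.cancel()
            try:
                await self.discord_task
            except asyncio.CancelledError:
                pass


async def main():
    app = Application()
    await app.start_background_tasks()
    await app.stop_background_tasks()
    return app.discord_task.cancelled()


print(asyncio.run(main()))  # True
```

Holding the task on the instance also gives shutdown a handle to cancel and await, so the bot coroutine is torn down deterministically.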
🧹 Nitpick comments (1)
backend/main.py (1)
170-177: Remove unused `optional_vars` dictionary. This dictionary is defined but never used, resulting in dead code.
♻️ Proposed fix
```diff
     required_vars = [
         "BACKEND_URL"
     ]
-    optional_vars = {
-        "supabase": ["SUPABASE_URL", "SUPABASE_KEY"],
-        "discord": ["DISCORD_BOT_TOKEN"],
-        "llm": ["GEMINI_API_KEY"],
-        "queue": ["RABBITMQ_URL"],
-        "search": ["TAVILY_API_KEY"],
-        "github": ["GITHUB_TOKEN"]
-    }
     missing_vars = [var for var in required_vars if not getattr(settings, var.lower(), None)]
```

If the intent was to log which optional features are missing, consider implementing that or remove this code.
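If the dictionary is meant to be kept, a sketch of putting it to use (the feature-to-variable map is taken from the dead code above; the reporting function itself is a suggestion, not project code):

```python
from types import SimpleNamespace

OPTIONAL_VARS = {
    "supabase": ["SUPABASE_URL", "SUPABASE_KEY"],
    "discord": ["DISCORD_BOT_TOKEN"],
}


def missing_optional_features(settings) -> dict:
    # Map each optional feature to the settings it still needs, if any.
    return {
        feature: [v for v in names if not getattr(settings, v.lower(), None)]
        for feature, names in OPTIONAL_VARS.items()
        if any(not getattr(settings, v.lower(), None) for v in names)
    }


settings = SimpleNamespace(supabase_url="https://x.supabase.co", supabase_key=None,
                           discord_bot_token="token")
print(missing_optional_features(settings))  # {'supabase': ['SUPABASE_KEY']}
```

Logging this at startup tells operators exactly which feature each missing variable disables.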
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@backend/main.py`:
- Around line 62-65: The queue is started when
settings.code_intelligence_enabled is true but AsyncQueueManager.connect() will
try the default RabbitMQ URL and crash if rabbitmq_url is not configured; change
the startup gating so that before calling
self.queue_manager.start(num_workers=3) you also require settings.rabbitmq_url
to be set (i.e., gate on both settings.code_intelligence_enabled and
settings.rabbitmq_url) or alternatively modify AsyncQueueManager.connect() to
treat a missing rabbitmq_url as a no-op/disabled state and return without
raising; update the check around self.queue_manager.start(...) or add a guard in
AsyncQueueManager.connect() accordingly to avoid attempting to connect to
RabbitMQ when settings.rabbitmq_url is absent.
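The double-gate above in sketch form; `FakeQueueManager` and the settings namespace stand in for `AsyncQueueManager` and the real pydantic settings.

```python
import asyncio
from types import SimpleNamespace


class FakeQueueManager:
    # Assumed stand-in for AsyncQueueManager.
    def __init__(self):
        self.started = False

    async def start(self, num_workers: int = 3):
        self.started = True


async def maybe_start_queue(settings, queue_manager) -> bool:
    # Require BOTH the feature flag and a broker URL before connecting.
    if settings.code_intelligence_enabled and settings.rabbitmq_url:
        await queue_manager.start(num_workers=3)
        return True
    return False


qm = FakeQueueManager()
settings = SimpleNamespace(code_intelligence_enabled=True, rabbitmq_url=None)
print(asyncio.run(maybe_start_queue(settings, qm)))  # False; no crash on missing URL
```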
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@backend/app/core/config/settings.py`:
- Around line 51-64: discord_enabled should not require the LLM key: change the
discord_enabled property to only require discord_bot_token (i.e., return
bool(self.discord_bot_token)); update github_enabled to stop depending on
discord_enabled and instead explicitly validate the credentials it needs (check
bool(self.github_token), bool(self.backend_url), bool(self.supabase_url),
bool(self.supabase_key) and, if GitHub features do require the LLM, include
bool(self.gemini_api_key) explicitly) so github_enabled uses direct checks
rather than referencing discord_enabled; update references to discord_enabled
and github_enabled accordingly.
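The decoupled properties described above, sketched on a hypothetical minimal mirror of the real pydantic `Settings` (field names follow the review; the plain-class form is for illustration only):

```python
class Settings:
    # Hypothetical minimal mirror of the real pydantic Settings fields.
    def __init__(self, discord_bot_token=None, github_token=None,
                 backend_url="", supabase_url=None, supabase_key=None):
        self.discord_bot_token = discord_bot_token
        self.github_token = github_token
        self.backend_url = backend_url
        self.supabase_url = supabase_url
        self.supabase_key = supabase_key

    @property
    def discord_enabled(self) -> bool:
        # Only the bot token is required; the LLM key is a separate concern.
        return bool(self.discord_bot_token)

    @property
    def github_enabled(self) -> bool:
        # Validate GitHub's own credentials directly, including backend_url,
        # so OAuth callbacks cannot be built from an empty base URL.
        return all([self.github_token, self.backend_url,
                    self.supabase_url, self.supabase_key])


s = Settings(discord_bot_token="token")
print(s.discord_enabled, s.github_enabled)  # True False
```

Each gate now states its own prerequisites, so enabling Discord never implicitly drags in GitHub's requirements or vice versa.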
Updating this: since tools are now separate, I will remove the classification step and choose the tool directly at runtime. Refactor in progress. |
🧩 Analysis chain

Keep Classification, But Make It Capability-Aware

The core insight: Separate what the query needs (classification) from where it should go (routing based on available services).

Recommended Architecture

```python
# 1. Classification determines INTENT (what the user needs)
class QueryIntent(Enum):
    CODE_QUERY = "code"        # Needs code intelligence
    WEB_SEARCH = "web"         # Needs web search
    GENERAL_CHAT = "general"   # General LLM response

# 2. Router selects HANDLER based on intent + available capabilities
async def route_query(intent: QueryIntent, query: str, context: dict):
    if intent == QueryIntent.CODE_QUERY:
        if settings.code_intelligence_enabled:
            return await handle_code_query(query, context)
        else:
            # Graceful degradation
            logger.info("Code intelligence disabled, falling back to Gemini")
            return await handle_with_gemini(query, context, hint="code-related")
    elif intent == QueryIntent.WEB_SEARCH:
        if settings.tavily_api_key:
            return await handle_web_search(query, context)
        else:
            # Fallback to Gemini
            return await handle_with_gemini(query, context, hint="requires current info")
    else:  # GENERAL_CHAT
        return await handle_with_gemini(query, context)
```

For Different Runtime Modes

Minimal/Discord-only mode: Skip classification entirely, go straight to Gemini

```python
if not settings.code_intelligence_enabled:
    # No point classifying if we can't route anywhere
    return await _basic_discord_response(message)
```

Discord + GitHub mode: Classify between general chat and web search

```python
intents = [QueryIntent.GENERAL_CHAT, QueryIntent.WEB_SEARCH]
```

Full mode: Use complete classification with all three intents

Implementation Pattern

In the meantime, here's the pattern I recommend:
This maintains the intelligence of your classification while gracefully handling optional services. The classification isn't "useless" - it helps Gemini provide better responses even when specialized services aren't available! Would you like me to look at your specific ClassificationRouter implementation to provide more targeted refactoring suggestions?
Your Current Architecture

The actual routing happens inside your ReAct supervisor: it can call tools multiple times in a loop until it has enough info to generate a response.

The Problem

Your ReAct supervisor doesn't know which services are available! It might try to call unavailable tools.

The Solution: Make Your ReAct Supervisor Capability-Aware

You need to conditionally initialize tools and only offer available tools to the supervisor. Here's how. In the meantime, here's the pattern:

1. Make DevRelAgent.__init__ conditional

```python
class DevRelAgent(BaseAgent):
    def __init__(self, config: Dict[str, Any] = None):
        self.config = config or {}
        self.llm = ChatGoogleGenerativeAI(
            model=settings.devrel_agent_model,
            temperature=0.3,
            google_api_key=settings.gemini_api_key
        )

        # Conditionally initialize tools based on capabilities
        self.available_tools = []

        # Web search (always available with DuckDuckGo fallback)
        if settings.tavily_api_key:
            from .tools.search_tool.tavilly import TavilySearchTool
            self.search_tool = TavilySearchTool()
        else:
            self.search_tool = DuckDuckGoSearchTool()
        self.available_tools.append("web_search")

        # FAQ (always available - local data)
        self.faq_tool = FAQTool()
        self.available_tools.append("faq_handler")

        # GitHub toolkit (only if code intelligence enabled)
        if settings.code_intelligence_enabled:
            self.github_toolkit = GitHubToolkit()
            self.available_tools.append("github_toolkit")
        else:
            self.github_toolkit = None

        # Onboarding (only if GitHub enabled)
        if settings.github_enabled:
            self.available_tools.append("onboarding")

        self.checkpointer = InMemorySaver()
        super().__init__("DevRelAgent", self.config)
```

2. Pass available tools to supervisor

Your supervisor needs to know which tools are available:

```python
def _build_graph(self):
    workflow = StateGraph(AgentState)
    # ... other nodes ...

    # Pass available tools to supervisor
    workflow.add_node(
        "react_supervisor",
        partial(
            react_supervisor_node,
            llm=self.llm,
            available_tools=self.available_tools  # NEW!
        )
    )

    # Conditionally add tool nodes
    workflow.add_node("web_search_tool", partial(web_search_tool_node, search_tool=self.search_tool, llm=self.llm))
    workflow.add_node("faq_handler_tool", partial(faq_handler_tool_node, faq_tool=self.faq_tool))

    if "github_toolkit" in self.available_tools:
        workflow.add_node("github_toolkit_tool", partial(github_toolkit_tool_node, github_toolkit=self.github_toolkit))
    if "onboarding" in self.available_tools:
        workflow.add_node("onboarding_tool", onboarding_tool_node)

    # ... rest of graph setup ...
```

3. Update supervisor to only offer available tools

The supervisor's prompt should list only available tools, and the router should only route to available nodes.

Bottom Line

Your classification is fine! You just need to:
This way:
Want me to look at your
Perfect! Now I can give you exact code changes. Let me first check the supervisor prompt. Now here are the specific changes you need: 1. Update
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@backend/.env.example`:
- Around line 10-41: Replace all real-looking secret values in the .env.example
with clearly non-functional placeholders: update entries such as
DISCORD_BOT_TOKEN, ENABLE_DISCORD_BOT, GEMINI_API_KEY, GEMINI_MODEL,
SUPABASE_URL, SUPABASE_KEY, GITHUB_TOKEN, FALKORDB_HOST, FALKORDB_PORT,
CODEGRAPH_BACKEND_URL, CODEGRAPH_SECRET_TOKEN, SECRET_TOKEN, RABBITMQ_URL,
TAVILY_API_KEY, LANGSMITH_TRACING, LANGSMITH_ENDPOINT, and LANGSMITH_API_KEY to
use obvious dummy placeholders (e.g., "DISCORD_BOT_TOKEN_HERE",
"GEMINI_API_KEY_HERE", "SUPABASE_KEY_HERE", etc.) so no real credentials appear
in the example file.
🧹 Nitpick comments (1)
backend/.env.example (1)
10-36: Avoid enabling multiple modes by default. All mode blocks are active and `ENABLE_DISCORD_BOT="true"` is set, which can unintentionally enable integrations when users copy the file. Consider commenting out non-selected modes and defaulting `ENABLE_DISCORD_BOT` to `"false"` with a note on how to enable each mode.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@backend/.env.example`:
- Around line 14-16: Remove the stale ENABLE_DISCORD_BOT entry from the
.env.example file; the codebase determines Discord enablement via the presence
of discord_bot_token and gemini_api_key in settings.py (see variables
discord_bot_token and gemini_api_key and the derived boolean discord_enabled),
so delete the ENABLE_DISCORD_BOT="true" line to avoid confusion.
In `@backend/app/core/config/settings.py`:
- Around line 10-11: The backend_url field is currently set to an empty default
(backend_url: str = ""), making it effectively optional; change this so missing
configuration fails: either remove the default so backend_url is a required
field (e.g., declare backend_url: str) or add explicit validation after loading
settings (e.g., validate_backend_url() or a __post_init__/validate method) that
raises a clear error when backend_url is empty or missing; update any settings
construction code to rely on that validation to surface configuration errors at
startup.
- Around line 66-77: The property code_intelligence_enabled currently calls
os.getenv() directly, bypassing Pydantic validation; add Optional[str] fields to
the Settings class for FALKORDB_HOST, FALKORDB_PORT, CODEGRAPH_BACKEND_URL, and
SECRET_TOKEN (defaulting to None) and update code_intelligence_enabled to
reference self.FALKORDB_HOST, self.FALKORDB_PORT, self.CODEGRAPH_BACKEND_URL,
and self.SECRET_TOKEN instead of os.getenv(); this keeps all config in Pydantic,
allows typing/validation, and lets you remove the unused import os if nothing
else uses it.
🧹 Nitpick comments (2)
backend/.env.example (1)
10-41: Clean up trailing whitespace flagged by linter. Multiple lines have trailing whitespace (lines 12, 20, 26, 27, 28, 30, 35, 38, 39) as reported by dotenv-linter. While cosmetic, these are trivially fixable and keep the file clean.

Proposed fix

```diff
 #1. backend only
 BACKEND_URL=http://localhost:8000
-
+
 #2. Discord-only mode (no GitHub, no Supabase)
 DISCORD_BOT_TOKEN="your_discord_bot_token"
 ENABLE_DISCORD_BOT="true"
 GEMINI_API_KEY="your_gemini_api_key"
 GEMINI_MODEL=gemini-2.0-flash

 #3. Supabase setting (Discord + Github + Supabase)
-SUPABASE_URL="https://...supabase.co"
-SUPABASE_KEY="your_supabase_key"
-GITHUB_TOKEN="your_github_token"
+SUPABASE_URL="https://...supabase.co"
+SUPABASE_KEY=your_supabase_key
+GITHUB_TOKEN=your_github_token

 #4. Full mode (FalkorDB + CodeGraph + Agents)
 # FalkorDB Configuration
-FALKORDB_HOST=localhost
-FALKORDB_PORT=6379
-CODEGRAPH_BACKEND_URL=http://localhost:5000
-SECRET_TOKEN="your_codegraph_secret"
-CODE_GRAPH_PUBLIC=1
+FALKORDB_HOST=localhost
+FALKORDB_PORT=6379
+CODEGRAPH_BACKEND_URL=http://localhost:5000
+SECRET_TOKEN=your_codegraph_secret
+CODE_GRAPH_PUBLIC=1
 FLASK_RUN_HOST=0.0.0.0
 FLASK_RUN_PORT=5000
 RABBITMQ_URL=amqp://localhost:5672/
-TAVILY_API_KEY="your_tavily_api_key"
+TAVILY_API_KEY=your_tavily_api_key

 # Langsmith
-LANGSMITH_TRACING="true"
-LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
-LANGSMITH_API_KEY="your_langsmith_api_key"
+LANGSMITH_TRACING="true"
+LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
+LANGSMITH_API_KEY=your_langsmith_api_key
```

(The removed lines carry trailing whitespace that is trimmed in the added lines.)

backend/app/core/config/settings.py (1)
3-3: Remove unused imports `field_validator` and `ConfigDict`. Neither is used in this file. The class uses only simple field assignments and `@property` decorators, with no validators.

Proposed fix

```diff
-from pydantic import field_validator, ConfigDict
```

(Remove the entire import line if `Field` is not needed elsewhere.)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@backend/.env.example`:
- Around line 11-12: Remove the trailing whitespace characters in the .env
example entries (e.g., the BACKEND_URL line and the other affected ranges) so
dotenv-linter stops reporting errors; open backend/.env.example, trim any spaces
or tabs at end of each environment variable line (including the ranges
mentioned: lines ~19-29 and 33-38), and save the file so each line ends exactly
after the variable value with no trailing whitespace.
🧹 Nitpick comments (3)
backend/.env.example (1)
18-31: Consider reordering keys to align with dotenv-linter expectations. This is non-functional but keeps the example file consistent and lint-clean.

♻️ Example ordering fix

```diff
-#3. Supabase setting (Discord + Github + Supabase)
-SUPABASE_URL="https://...supabase.co"
-SUPABASE_KEY="your_supabase_key"
-GITHUB_TOKEN="your_github_token"
+#3. Supabase setting (Discord + Github + Supabase)
+SUPABASE_URL="https://...supabase.co"
+SUPABASE_KEY="your_supabase_key"
+GITHUB_TOKEN="your_github_token"

-#4. Full mode (FalkorDB + CodeGraph + Agents)
-# FalkorDB Configuration
-FALKORDB_HOST=localhost
-FALKORDB_PORT=6379
-CODEGRAPH_BACKEND_URL=http://localhost:5000
-SECRET_TOKEN="your_codegraph_secret"
-CODE_GRAPH_PUBLIC=1
-FLASK_RUN_HOST=0.0.0.0
-FLASK_RUN_PORT=5000
+#4. Full mode (FalkorDB + CodeGraph + Agents)
+# FalkorDB Configuration
+CODEGRAPH_BACKEND_URL=http://localhost:5000
+CODE_GRAPH_PUBLIC=1
+FALKORDB_HOST=localhost
+FALKORDB_PORT=6379
+FLASK_RUN_HOST=0.0.0.0
+FLASK_RUN_PORT=5000
+SECRET_TOKEN="your_codegraph_secret"

-# Langsmith
-LANGSMITH_TRACING="true"
-LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
-LANGSMITH_API_KEY="your_langsmith_api_key"
+# Langsmith
+LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
+LANGSMITH_TRACING="true"
+LANGSMITH_API_KEY="your_langsmith_api_key"
```

Also applies to: 36-39
backend/app/core/config/settings.py (2)
3-3: Remove unused imports: `os`, `field_validator`, `ConfigDict`. `field_validator` and `ConfigDict` (line 3) are imported but never used in this file. The `os` import (line 5) is also now unused since `code_intelligence_enabled` was refactored to use `self` fields instead of `os.getenv()`.

Proposed fix

```diff
-from pydantic import field_validator, ConfigDict
-from typing import Optional
-import os
+from typing import Optional
```
42-46: Consider a more specific name for `secret_token`. `secret_token` is quite generic and could easily collide with other environment variables. A name like `codegraph_secret_token` or `webhook_secret_token` would clarify its purpose and reduce the risk of env var conflicts, especially in `.env` files with many services.
Closes #192
feat: introduce runtime modes with feature-gated initialization
📝 Description
This PR introduces runtime modes for DevR.AI and refactors initialization to make all major integrations optional, feature-gated, and safe to load independently.
The goal is to prevent startup crashes caused by missing external services (Supabase, FalkorDB, etc.) while allowing DevR.AI to run in progressively richer configurations.
🚦 Supported Runtime Modes
DevR.AI can now start in the following modes:
1. Backend minimal
2. Discord only
3. Discord + Supabase
4. Full mode
🔧 Changes Made
- Made all fields optional in `settings.py` except `backend_url`.
- Added feature gates in `settings.py` and used them to gate every changed file.
- Used lazy imports in `cogs.py`, `router.py`, `health.py`, `bot.py`, and `main.py`.
- Separated user query handling paths into a direct Discord-only LLM flow and a queue-based DevRel agent flow, selected explicitly based on queue availability.
📷 Screenshots or Visual Changes (if applicable)
1) Basic Discord only

2) Discord+Supabase

3) Full mode

🤝 Collaboration
Collaborated with:
@username (optional)

✅ Checklist
Summary by CodeRabbit
New Features
Improvements
Bug Fixes