Add requesting-human-help skill for structured human-in-the-loop collaboration #673

pratyush618 wants to merge 2 commits into obra:main from …
Conversation
…aboration — Implements the skill from issue obra#594: structured, evidence-driven requests at capability limits and for high-risk actions, with a validation chain and an audit trail from request through human action to agent decision.
📝 Walkthrough

Adds a new skill documentation file that defines a standardized, evidence-driven workflow and template for agent requests for human assistance, covering required fields, validation rules, audit-trail chaining, red flags, usage scenarios, and a common-mistakes table with fixes.
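As context for the review comments below, a request under this pattern might look roughly like the following. This is a hypothetical sketch; the field names are paraphrased from the walkthrough, not quoted from the skill file itself:

```text
## Human Help Needed

**Goal:** [What the agent is ultimately trying to accomplish]
**Blocked on:** [The capability limit or high-risk action requiring a human]
**Steps:** [Numbered, copy-pasteable actions for the human to perform]
**Expected output / evidence needed:** [Screenshot, log output, command result, confirmation text]
**Acceptance criteria:**
- [ ] [Specific, verifiable condition that means "this worked"]
```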
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Agent as Agent
    participant Human as Human
    participant EvidenceStore as EvidenceStore
    participant Auditor as Auditor
    Agent->>Human: Submit structured REQUEST (goal, steps, prerequisites, acceptance criteria)
    Human->>EvidenceStore: Perform action & upload EVIDENCE (screenshots, logs, outputs)
    Human-->>Agent: Respond with EVIDENCE + human notes
    Agent->>EvidenceStore: Validate EVIDENCE against acceptance criteria
    alt Evidence meets criteria
        Agent->>Auditor: Log chain: REQUEST → HUMAN ACTION → EVIDENCE → AGENT DECISION
        Agent-->>Human: Confirm completion / continue workflow
    else Evidence missing or fails
        Agent-->>Human: Minimal follow-up request for missing pieces or escalate
    end
```
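The validation branch in the diagram above (accept when the evidence meets every acceptance criterion, otherwise follow up on only the missing pieces) can be sketched in Python. The names below are illustrative, not part of the skill:

```python
from dataclasses import dataclass, field


@dataclass
class HelpRequest:
    goal: str
    acceptance_criteria: list[str]
    audit_trail: list[str] = field(default_factory=list)


def validate_evidence(request: HelpRequest, evidence: dict[str, bool]) -> list[str]:
    """Return the acceptance criteria not yet covered by the returned evidence."""
    return [c for c in request.acceptance_criteria if not evidence.get(c, False)]


def decide(request: HelpRequest, evidence: dict[str, bool]) -> str:
    missing = validate_evidence(request, evidence)
    if not missing:
        # Log the full chain so the decision is auditable end to end.
        request.audit_trail += ["REQUEST", "HUMAN ACTION", "EVIDENCE",
                                "AGENT DECISION: proceed"]
        return "Confirmed: all criteria met. Proceeding."
    # Ask only for the missing pieces, not a full re-run.
    return f"Follow-up needed: {', '.join(missing)}"


req = HelpRequest(goal="Rotate the API key",
                  acceptance_criteria=["new key works", "old key revoked"])
print(decide(req, {"new key works": True}))
print(decide(req, {"new key works": True, "old key revoked": True}))
```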
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@skills/requesting-human-help/SKILL.md`:
- Around line 38-66: Add a language tag (use "text") to the triple-backticked
fenced code blocks in the "Human Help Needed" section of SKILL.md to satisfy
markdownlint MD040; locate the fenced blocks that start/stop around the "##
Human Help Needed" content (and the additional blocks noted at ranges
corresponding to the acceptance criteria and REQUEST examples) and change the
opening "```" to "```text" for each block so all fenced sections are annotated
consistently.
- Around line 34-66: Add an explicit "Human involvement level" field to the
request template so each help request records whether the human is being asked
for "Clarification", "Execution", or "Approval/Takeover": update the structured
block under "## Human Help Needed" to include a line like "**Human involvement
level:** [Clarification | Execution | Approval/Takeover]" and, where relevant,
ensure "Steps", "Expected output / evidence needed", and "Acceptance criteria"
reference or differentiate behavior based on that level (e.g., approval requires
explicit confirm/accept output while clarification may require only answers), so
reviewers can unambiguously determine the authority granted by the human
response.
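Concretely, the suggested field might look like this in the template. This is a sketch of the reviewer's suggestion, not the merged wording:

```text
## Human Help Needed

**Human involvement level:** [Clarification | Execution | Approval/Takeover]

**Acceptance criteria:**
- [ ] Clarification: the question is answered (with a cited source if needed)
- [ ] Execution: the returned artifact shows the step actually ran
- [ ] Approval/Takeover: explicit confirm/accept output, with stated scope
```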
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 38f9aabf-63f4-498d-93b7-bac4b6b92ec0
📒 Files selected for processing (1)
skills/requesting-human-help/SKILL.md
…uage tags
- Add explicit involvement level (clarification/execution/approval) to the request template so the audit trail records what authority the human response grants
- Add 'text' language tag to all fenced blocks to satisfy MD040
♻️ Duplicate comments (1)
skills/requesting-human-help/SKILL.md (1)
62-103: ⚠️ Potential issue | 🟠 Major — Make evidence and validation branch on the involvement level.

`approval/takeover` is now named, but the template still treats its proof the same way as `execution`. For approval flows, the key artifact is the explicit authorization itself, including scope/constraints; otherwise `REQUEST → HUMAN ACTION → EVIDENCE → AGENT DECISION` still doesn't prove what authority was granted.

Suggested doc update:

```diff
 **Expected output / evidence needed:**
-- [What to capture: screenshot, log output, command result, confirmation text]
-- [Format: paste text output, attach screenshot, confirm yes/no]
+- clarification: [Direct answer, plus cited source/screenshot if needed]
+- execution: [Artifact proving the step ran: screenshot, log output, command result]
+- approval/takeover: [Exact approval/takeover statement, scope, constraints, and who owns the action]

 **Acceptance criteria:**
-- [ ] [Specific, verifiable condition that means "this worked"]
-- [ ] [What distinguishes success from partial success]
+- [ ] [Criteria appropriate to the involvement level]
+- [ ] [For approval/takeover: authorization is explicit and scoped]

 IF all criteria met:
 → State: "Confirmed: [criterion 1], [criterion 2]. Proceeding."
+→ For approval/takeover, restate exactly what was approved and any limits before continuing.

 REQUEST → [structured block above]
 HUMAN ACTION → [what they did]
 EVIDENCE → [artifact they returned]
-AGENT DECISION → [what you decided based on evidence]
+AGENT DECISION → [what you decided based on evidence, including approval scope/limits]
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@skills/requesting-human-help/SKILL.md` around lines 62-103: the validation template treats approval/takeover like execution and doesn't require explicit authorization artifacts. In the "Validating the Human Response" section, add branching on the involvement type ("approval/takeover" vs "execution"): for approval flows, require an explicit authorization artifact (authorization text, scope/constraints, signer identity) as evidence; for execution flows, keep the current evidence rules (logs/screenshots/outputs). Update the guidance text and the REQUEST → HUMAN ACTION → EVIDENCE → AGENT DECISION audit chain to note that for approval flows the EVIDENCE must include the authorization scope/constraints and signer, and adjust "If any criterion unmet → Request ONLY the missing piece" to reflect involvement-specific missing-artifact handling.
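The involvement-level branching the reviewer asks for could be sketched as follows. The field names are hypothetical; the skill itself is a prose template, not code:

```python
def required_evidence(level: str) -> list[str]:
    """Evidence fields an agent should demand before acting on a human
    response, branching on the involvement level (field names illustrative)."""
    if level == "clarification":
        return ["answer"]  # a direct answer suffices
    if level == "approval/takeover":
        # Approval must capture the authorization itself, not just activity.
        return ["authorization_text", "scope", "signer"]
    return ["artifact"]  # execution: log, screenshot, or command output


def missing_fields(level: str, response: dict[str, str]) -> list[str]:
    """Per the skill's rule, follow up on ONLY the missing pieces."""
    return [f for f in required_evidence(level) if not response.get(f)]


# An approval response with no stated scope triggers a minimal follow-up:
print(missing_fields("approval/takeover",
                     {"authorization_text": "Approved", "signer": "alice"}))  # → ['scope']
```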
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 0f564151-447d-4966-abaa-a17244137054
📒 Files selected for processing (1)
skills/requesting-human-help/SKILL.md
Summary

Adds `skills/requesting-human-help/SKILL.md` with a structured human-in-the-loop pattern.

Closes #594