Changes from all commits
79 commits
3986bad
Load env vars in node hook. Update vector search
danielcampagnolitg Nov 5, 2025
e5cbc92
Generate function schemas separately before unit tests in GitHub actions
danielcampagnolitg Nov 5, 2025
51e9dda
Increase mem when generating functionSchemas
danielcampagnolitg Nov 5, 2025
e32cd02
Add generated function schemas
danielcampagnolitg Nov 5, 2025
83adc30
Update functionSchemas script
danielcampagnolitg Nov 5, 2025
a33f208
Update .dockerignore
danielcampagnolitg Nov 5, 2025
f5f2ed6
Update .dockerignore
danielcampagnolitg Nov 5, 2025
e64d5c9
Update Dockerfile
danielcampagnolitg Nov 5, 2025
9fb99ad
Build fixes
danielcampagnolitg Nov 5, 2025
521f21d
Update Dockerfile
danielcampagnolitg Nov 5, 2025
639038e
Update env.js
danielcampagnolitg Nov 5, 2025
a2aa8ba
Update env.js
danielcampagnolitg Nov 5, 2025
efa4fec
Update Dockerfile to extend from gcloud image
danielcampagnolitg Nov 5, 2025
46b6573
Update Slack postMessage. Cerebras GLM 4.6
danielcampagnolitg Nov 6, 2025
c79a7b0
Add refactored exec
danielcampagnolitg Nov 7, 2025
676cf90
Update vector search
campers Nov 8, 2025
13d21cb
Update appDirs to lazy init agentContext
campers Nov 10, 2025
b6391d0
Update defaultLlms.ts
danielcampagnolitg Nov 10, 2025
b86d2a3
fix: Handle unborn branches in `switchToBranch` logic
danielcampagnolitg Nov 10, 2025
ee38876
Implement mask in execCommand
danielcampagnolitg Nov 10, 2025
9d47c14
Make the GitLab agent name configurable
danielcampagnolitg Nov 10, 2025
7eb80f1
Update Python codegen to allow top level await and remove the main wr…
danielcampagnolitg Nov 10, 2025
69a3cad
Update SlackBot
danielcampagnolitg Nov 13, 2025
5ba3305
Fix Cerebras GLM4.6 model id
danielcampagnolitg Nov 13, 2025
23d4aca
Add initial AlloyDB vector store implementation
danielcampagnolitg Nov 13, 2025
0d9d340
Update vector config
danielcampagnolitg Nov 13, 2025
24013ad
Update Postgres implementation. Add auto-start pg container
campers Nov 25, 2025
80371dd
Update postgresFunctionCacheService.ts
campers Nov 25, 2025
1373607
Vector search updates
campers Nov 25, 2025
04bb245
Vector updates
campers Nov 25, 2025
0459e91
Move countTokens to core agent functions
campers Nov 25, 2025
d2d6f30
Add Claude Code and Codex LLMs
campers Nov 25, 2025
49caa7e
Add gemini 3.0. Update ai-sdk packages
campers Nov 25, 2025
f1a0797
Update tests scripts
campers Nov 25, 2025
e534045
Update xai.ts
campers Nov 25, 2025
11a2af2
Update testSetup.ts
campers Nov 25, 2025
a5a63eb
Update llmAliases.ts
campers Nov 28, 2025
0f495d0
Update multi-agent debate implementations
campers Nov 28, 2025
cf42927
Update LLMs and costs
campers Nov 28, 2025
ff12f62
Update frontend ai package
campers Nov 28, 2025
c14d1c2
Update Dockerfile
danielcampagnolitg Dec 1, 2025
9a83b8e
Update ccproxy.ts
danielcampagnolitg Dec 1, 2025
d4bb49c
Fix openai
danielcampagnolitg Dec 1, 2025
4c1a132
Update Dockerfile
danielcampagnolitg Dec 1, 2025
64efab9
Update fireworks.ts
danielcampagnolitg Dec 1, 2025
e5b4614
LLM updates
danielcampagnolitg Dec 1, 2025
fb01b68
Update gitlabPipelineHandler.ts
danielcampagnolitg Dec 1, 2025
a7abb38
Vector updates
campers Dec 7, 2025
cd3c3e6
Refactor to break cyclic dependencies. Fix TypeScriptRefactor for lat…
campers Dec 7, 2025
153c72e
PG test fixes. Change ConsoleCompletedHandler to LoggerCompletedHandler
campers Dec 7, 2025
318b4c9
Update postgresAgentStateService.ts
campers Dec 7, 2025
6567996
Update discoveryEngine.ts
campers Dec 7, 2025
ff9152a
MCP and CLI tools for discovery agents and file system tree with summ…
campers Dec 9, 2025
7294974
Add fileSystemTree config to .typedai.json
danielcampagnolitg Dec 9, 2025
98da160
Merge branches 'main' and 'main' of https://www.github.com/TrafficGua…
danielcampagnolitg Dec 9, 2025
c09d04b
Update pnpm-lock.yaml
danielcampagnolitg Dec 9, 2025
324062a
Compile and lint
danielcampagnolitg Dec 9, 2025
f6d2e3a
Fixes
danielcampagnolitg Dec 9, 2025
b223740
Update ci.yml
danielcampagnolitg Dec 9, 2025
0c8789a
Update ci.yml
danielcampagnolitg Dec 9, 2025
35335e7
Update functionSchemaParser.ts
danielcampagnolitg Dec 9, 2025
1e11990
Fix vector config to not require GCLOUD_PROJECT in unit tests
danielcampagnolitg Dec 9, 2025
51ec592
Use synchronous deferred loading of defaultLLMs
danielcampagnolitg Dec 10, 2025
6872c63
Fix Vertex LLMs
danielcampagnolitg Dec 10, 2025
30903e2
Update vertexai.ts
danielcampagnolitg Dec 10, 2025
9124790
Remove Slack reaction when a message has been queued for processing
danielcampagnolitg Dec 10, 2025
cd68e48
Update Claude agent SDK
campers Dec 13, 2025
135a679
Fix agent lastUpdate in pg implementation
campers Dec 13, 2025
d8a3757
Optimise tests. Add Postgres store for file summaries. Add vertex bat…
campers Dec 14, 2025
413866e
Rename indexDocs to summaries
campers Dec 14, 2025
9ed08dd
Cleanup test output
campers Dec 14, 2025
c94b499
Tidy up query and file-tree scripts. Add pnpm check hook
campers Dec 14, 2025
ecfb4ab
Add Flash 3 support
danielcampagnolitg Dec 18, 2025
8a5c214
Add AlloyDB ScaNN index to the columnar engine
danielcampagnolitg Dec 18, 2025
9ea0fd8
Update Slack controller hard LLM to Flash 3 for speed
danielcampagnolitg Dec 18, 2025
6eff48f
AI SDK beta 6. Add agentic debate with tools. Add Rollbar tool. Add C…
danielcampagnolitg Dec 19, 2025
f291bdb
Update path CLI tools. Add nextgen coding agent orchestration ideas
campers Dec 21, 2025
3f80e15
Add Claude plugins. Update frontend ai package.
campers Dec 21, 2025
34139ae
Refactor to RateLimitCircuitBreaker. Add serial mode to summary gener…
campers Dec 21, 2025
103 changes: 103 additions & 0 deletions .claude/hooks/ensure-pnpm-check.test.sh
@@ -0,0 +1,103 @@
#!/bin/bash
# Test harness for ensure-pnpm-check.sh

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
HOOK="$SCRIPT_DIR/ensure-pnpm-check.sh"
TEST_DIR="$SCRIPT_DIR/test-fixtures"

mkdir -p "$TEST_DIR"

PASS=0
FAIL=0

test_result() {
    local name="$1"
    local expected="$2" # "block" (exit 2) or "allow" (exit 0)
    local exit_code="$3"

    if [ "$expected" = "block" ]; then
        if [ "$exit_code" -eq 2 ]; then
            echo " PASS: $name"
            ((PASS++))
        else
            echo " FAIL: $name - expected exit 2 (block), got exit $exit_code"
            ((FAIL++))
        fi
    else
        if [ "$exit_code" -eq 0 ]; then
            echo " PASS: $name"
            ((PASS++))
        else
            echo " FAIL: $name - expected exit 0 (allow), got exit $exit_code"
            ((FAIL++))
        fi
    fi
}

# Test 1: No pnpm check run - should block
echo "Test 1: No pnpm check run"
cat > "$TEST_DIR/no-check.jsonl" << 'EOF'
{"type":"user","message":{"role":"user","content":"hello"}}
{"type":"assistant","message":{"role":"assistant","content":[{"type":"text","text":"hi there"}]}}
EOF
echo "{\"transcript_path\": \"$TEST_DIR/no-check.jsonl\", \"stop_hook_active\": false}" | "$HOOK" 2>/dev/null
exit_code=$?
test_result "No pnpm check - should block" "block" "$exit_code"

# Test 2: pnpm check was EXECUTED - should allow
echo "Test 2: pnpm check was executed"
cat > "$TEST_DIR/with-check.jsonl" << 'EOF'
{"type":"user","message":{"role":"user","content":"hello"}}
{"type":"assistant","message":{"role":"assistant","content":[{"type":"tool_use","name":"Bash","input":{"command":"pnpm check"}}]}}
{"type":"user","message":{"role":"user","content":[{"type":"tool_result","content":"ok"}]}}
EOF
echo "{\"transcript_path\": \"$TEST_DIR/with-check.jsonl\", \"stop_hook_active\": false}" | "$HOOK" 2>/dev/null
exit_code=$?
test_result "pnpm check was executed - should allow" "allow" "$exit_code"

# Test 3: pnpm check MENTIONED but not executed - should block
echo "Test 3: pnpm check mentioned but not executed"
cat > "$TEST_DIR/mentioned-only.jsonl" << 'EOF'
{"type":"user","message":{"role":"user","content":"please run pnpm check"}}
{"type":"assistant","message":{"role":"assistant","content":[{"type":"text","text":"I will run pnpm check for you"}]}}
EOF
echo "{\"transcript_path\": \"$TEST_DIR/mentioned-only.jsonl\", \"stop_hook_active\": false}" | "$HOOK" 2>/dev/null
exit_code=$?
test_result "pnpm check mentioned only - should block" "block" "$exit_code"

# Test 4: pnpm check executed BEFORE last stop - should block
echo "Test 4: pnpm check before last stop"
cat > "$TEST_DIR/check-before-stop.jsonl" << 'EOF'
{"type":"assistant","message":{"role":"assistant","content":[{"type":"tool_use","name":"Bash","input":{"command":"pnpm check"}}]}}
{"type":"user","message":{"role":"user","content":[{"type":"tool_result","content":"ok"}]}}
{"type":"result","subtype":"stop"}
{"type":"user","message":{"role":"user","content":"continue please"}}
EOF
echo "{\"transcript_path\": \"$TEST_DIR/check-before-stop.jsonl\", \"stop_hook_active\": false}" | "$HOOK" 2>/dev/null
exit_code=$?
test_result "pnpm check before stop - should block" "block" "$exit_code"

# Test 5: pnpm check executed AFTER last stop - should allow
echo "Test 5: pnpm check after last stop"
cat > "$TEST_DIR/check-after-stop.jsonl" << 'EOF'
{"type":"result","subtype":"stop"}
{"type":"user","message":{"role":"user","content":"continue please"}}
{"type":"assistant","message":{"role":"assistant","content":[{"type":"tool_use","name":"Bash","input":{"command":"pnpm check"}}]}}
{"type":"user","message":{"role":"user","content":[{"type":"tool_result","content":"ok"}]}}
EOF
echo "{\"transcript_path\": \"$TEST_DIR/check-after-stop.jsonl\", \"stop_hook_active\": false}" | "$HOOK" 2>/dev/null
exit_code=$?
test_result "pnpm check after stop - should allow" "allow" "$exit_code"

# Test 6: stop_hook_active=true - should always allow (prevent infinite loop)
echo "Test 6: stop_hook_active=true (infinite loop prevention)"
echo "{\"transcript_path\": \"$TEST_DIR/no-check.jsonl\", \"stop_hook_active\": true}" | "$HOOK" 2>/dev/null
exit_code=$?
test_result "stop_hook_active=true - should allow" "allow" "$exit_code"

# Cleanup
rm -rf "$TEST_DIR"

echo ""
echo "Results: $PASS passed, $FAIL failed"
[ "$FAIL" -eq 0 ] && exit 0 || exit 1
228 changes: 228 additions & 0 deletions .claude/hooks/extract-qa.py
@@ -0,0 +1,228 @@
#!/usr/bin/env python3
"""
Extract Q&A pairs from Claude Code session files.
Finds AskUserQuestion tool calls and their corresponding user answers.

Usage:
python extract-qa.py <session_file> [--output-dir <dir>] [--format md|json|both]

Or as a hook:
Receives session data via stdin (hook context)
"""

import json
import sys
import os
import argparse
from datetime import datetime
from pathlib import Path


def extract_qa_from_session(session_lines: list[str]) -> list[dict]:
    """Extract Q&A pairs from session JSONL lines."""
    qa_pairs = []
    tool_use_map = {}  # Map tool_use_id to question data

    for line in session_lines:
        if not line.strip():
            continue
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue

        # Look for AskUserQuestion tool_use
        message = entry.get("message", {})
        content = message.get("content", [])

        if isinstance(content, list):
            for item in content:
                if isinstance(item, dict):
                    # Found a tool_use for AskUserQuestion
                    if item.get("type") == "tool_use" and item.get("name") == "AskUserQuestion":
                        tool_id = item.get("id")
                        questions = item.get("input", {}).get("questions", [])
                        timestamp = entry.get("timestamp")
                        if tool_id and questions:
                            tool_use_map[tool_id] = {
                                "questions": questions,
                                "timestamp": timestamp
                            }

                    # Found a tool_result - check if it's for an AskUserQuestion
                    if item.get("type") == "tool_result":
                        tool_id = item.get("tool_use_id")
                        if tool_id and tool_id in tool_use_map:
                            # Get answers from toolUseResult
                            tool_result = entry.get("toolUseResult", {})
                            answers = tool_result.get("answers", {})

                            if answers:
                                question_data = tool_use_map[tool_id]
                                qa_pairs.append({
                                    "timestamp": question_data["timestamp"],
                                    "answer_timestamp": entry.get("timestamp"),
                                    "questions": question_data["questions"],
                                    "answers": answers
                                })

    return qa_pairs


def format_qa_markdown(qa_pairs: list[dict], session_id: str = None) -> str:
    """Format Q&A pairs as markdown."""
    lines = ["# Claude Code Q&A Extraction\n"]

    if session_id:
        lines.append(f"Session: `{session_id}`\n")

    lines.append(f"Extracted: {datetime.now().isoformat()}\n")
    lines.append("---\n")

    for i, qa in enumerate(qa_pairs, 1):
        lines.append(f"## Q&A #{i}\n")

        if qa.get("timestamp"):
            lines.append(f"*Asked: {qa['timestamp']}*\n")

        for question in qa.get("questions", []):
            q_text = question.get("question", "Unknown question")
            header = question.get("header", "")
            options = question.get("options", [])
            answer = qa.get("answers", {}).get(q_text, "No answer recorded")

            lines.append(f"### {header}: {q_text}\n")

            if options:
                lines.append("**Options:**")
                for opt in options:
                    label = opt.get("label", "")
                    desc = opt.get("description", "")
                    marker = "**[Selected]**" if label == answer else ""
                    lines.append(f"- {label}: {desc} {marker}")
                lines.append("")

            lines.append(f"**Answer:** {answer}\n")

        lines.append("---\n")

    return "\n".join(lines)


def format_qa_json(qa_pairs: list[dict], session_id: str = None) -> str:
    """Format Q&A pairs as JSON."""
    output = {
        "session_id": session_id,
        "extracted_at": datetime.now().isoformat(),
        "qa_pairs": qa_pairs
    }
    return json.dumps(output, indent=2)


def process_session_file(filepath: str, output_dir: str = None, output_format: str = "md"):
    """Process a single session file and extract Q&A pairs."""
    with open(filepath, "r") as f:
        lines = f.readlines()

    qa_pairs = extract_qa_from_session(lines)

    if not qa_pairs:
        print(f"No Q&A pairs found in {filepath}", file=sys.stderr)
        return None

    session_id = Path(filepath).stem

    if output_dir:
        os.makedirs(output_dir, exist_ok=True)
        base_name = f"qa-{session_id}-{datetime.now().strftime('%Y%m%d-%H%M%S')}"

        if output_format in ("md", "both"):
            md_path = os.path.join(output_dir, f"{base_name}.md")
            with open(md_path, "w") as f:
                f.write(format_qa_markdown(qa_pairs, session_id))
            print(f"Markdown saved to: {md_path}", file=sys.stderr)

        if output_format in ("json", "both"):
            json_path = os.path.join(output_dir, f"{base_name}.json")
            with open(json_path, "w") as f:
                f.write(format_qa_json(qa_pairs, session_id))
            print(f"JSON saved to: {json_path}", file=sys.stderr)
    else:
        # Output to stdout
        if output_format == "json":
            print(format_qa_json(qa_pairs, session_id))
        else:
            print(format_qa_markdown(qa_pairs, session_id))

    return qa_pairs


def process_hook_input():
    """Process input when running as a Claude Code hook."""
    # Read hook context from stdin
    try:
        hook_input = json.load(sys.stdin)
    except json.JSONDecodeError:
        print("Error: Invalid JSON input from hook", file=sys.stderr)
        sys.exit(1)

    session_id = hook_input.get("session_id")
    cwd = hook_input.get("cwd", os.getcwd())

    # PreCompact hook provides transcript_path directly
    transcript_path = hook_input.get("transcript_path")

    if transcript_path:
        # Expand ~ in path
        session_file = os.path.expanduser(transcript_path)
    else:
        # Fallback: Find the session file from session_id
        home = os.path.expanduser("~")
        project_path = cwd.replace("/", "-").lstrip("-")
        session_dir = os.path.join(home, ".claude", "projects", f"-{project_path}")
        session_file = os.path.join(session_dir, f"{session_id}.jsonl")

    if not os.path.exists(session_file):
        print(f"Session file not found: {session_file}", file=sys.stderr)
        sys.exit(0)  # Don't fail the hook

    # Extract to project's .claude directory
    output_dir = os.path.join(cwd, ".claude", "qa-extractions")
    qa_pairs = process_session_file(session_file, output_dir, "md")

    # Return success for hook
    if qa_pairs:
        print(f"Extracted {len(qa_pairs)} Q&A pair(s) before compaction", file=sys.stderr)


def main():
    parser = argparse.ArgumentParser(description="Extract Q&A pairs from Claude Code sessions")
    parser.add_argument("session_file", nargs="?", help="Path to session .jsonl file")
    parser.add_argument("--output-dir", "-o", help="Directory to save output files")
    parser.add_argument("--format", "-f", choices=["md", "json", "both"], default="md",
                        help="Output format (default: md)")
    parser.add_argument("--hook", action="store_true", help="Run in hook mode (read from stdin)")

    args = parser.parse_args()

    # If explicit --hook flag, run in hook mode
    if args.hook:
        process_hook_input()
        return

    # If session file provided, process it directly
    if args.session_file:
        process_session_file(args.session_file, args.output_dir, args.format)
        return

    # No args - check if running as a hook with stdin
    if not sys.stdin.isatty():
        process_hook_input()
        return

    parser.print_help()
    sys.exit(1)


if __name__ == "__main__":
    main()
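For reference, the script can be driven either directly against a transcript or from a PreCompact-style payload on stdin. The invocations below are illustrative sketches only; the session ID and project path are made up, and real transcripts live under your own `~/.claude/projects/` directory.

```bash
# Standalone: write both markdown and JSON extractions into .claude/qa-extractions/
python3 .claude/hooks/extract-qa.py \
  ~/.claude/projects/-home-user-typedai/0123abcd.jsonl \
  --output-dir .claude/qa-extractions --format both

# Hook mode: simulate a PreCompact payload on stdin (the script also auto-detects piped stdin)
echo '{"session_id":"0123abcd","cwd":"'"$PWD"'","transcript_path":"~/.claude/projects/-home-user-typedai/0123abcd.jsonl"}' \
  | python3 .claude/hooks/extract-qa.py --hook
```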
48 changes: 46 additions & 2 deletions .claude/settings.local.json
@@ -1,13 +1,57 @@
{
"permissions": {
"allow": [
"Bash(npm test:*)",
"Bash(ls:*)",
"Bash(cat:*)",
"Bash(echo:*)",
"Bash(find:*)",
"Bash(grep:*)",
"Bash(chmod:*)",
"Bash(file-tree)",
"Bash(query)",
"Bash(node --version)",
"Bash(timeout 20 node:*)",
"Bash(timeout 30 node:*)",
"Bash(timeout 60 node:*)",
"Bash(npm test:*)",
"Bash(pnpm run build:*)",
"Bash(pnpm run lint:*)",
"Bash(pnpm lint:*)",
"Bash(pnpm check:*)",
"Bash(pnpm test:*)",
"Bash(pnpm test:unit:*)",
"Bash(pnpm test:single:*)",
"Bash(pnpm run test:unit:*)",
"Bash(pnpm run test:single:*)",
"Bash(pnpm run test:postgres:*)",
"Bash(timeout 30 pnpm test:single:*)",
"Bash(timeout 60 pnpm test:single:*)",
"Bash(pnpm summaries build:*)",
"Bash(pnpm summaries:*)",
"Bash(timeout 30 pnpm nextgen:*)",
"Bash(timeout 60 pnpm nextgen:*)",
"Bash(timeout 180 pnpm run nextgen:*)",
"WebSearch",
"WebFetch(domain:github.com)",
"WebFetch(domain:code.claude.com)",
"WebFetch(domain:docs.anthropic.com)",
"WebFetch(domain:docs.docker.com)",
"WebFetch(domain:docs.cloud.google.com)",
"WebFetch(domain:docs.github.com)",
"WebFetch(domain:docs.gitlab.com)",
"WebFetch(domain:angular.dev)",
"WebFetch(domain:ai-sdk.dev)",
"WebFetch(domain:fastify.dev)",
"WebFetch(domain:zod.dev)",
"WebFetch(domain:biomejs.dev)",
"WebFetch(domain:ui.shadcn.com)",
"mcp__ide__getDiagnostics",
"mcp__perplexity__search"
],
"deny": [],
"ask": []
}
},
"enabledMcpjsonServers": [
"perplexity"
]
}