Merged
53 changes: 53 additions & 0 deletions source/debug-logger.js
Collaborator
Good things: Clean factory function (no class), destructured options with defaults, buffered writes, composable design. Solid overall.

Issues to address:

  1. Mutation via buffer.length = 0 — This is a subtle mutation pattern. Consider replacing the buffer entirely or using a more explicit approach:
const flush = () => {
  if (!logFile || buffer.length === 0) return;
  appendFileSync(logFile, buffer.join(''));
  buffer.splice(0, buffer.length);
};

Though honestly, since buffer is internal mutable state by design (accumulating log entries), this is a pragmatic trade-off. If you wanted to go fully immutable you'd need a different architecture (e.g., returning new logger instances). The length = 0 trick is just less idiomatic than splice(0) — both mutate.

  2. formatMessage has no guard around JSON.stringify — circular references will throw, and the error surfaces far from the call site. Consider:
const formatMessage = (parts) =>
  parts.map(part => {
    if (typeof part !== 'object') return String(part);
    try { return JSON.stringify(part); }
    catch { return '[Circular]'; }
  }).join(' ');
  3. process naming collision — Exporting as process shadows Node's global process. The internal rename to logProcess shows awareness, but consumers doing const { process } = createDebugLogger() will shadow the global. Consider naming it logProcess in the public API too, or something like output.

  4. writeToFile is misleading — It doesn't write to a file; it writes to a buffer. Call it bufferMessage or enqueue.

  5. No flush on exit — If the process crashes, buffered logs are lost. Consider registering a process.on('exit', flush) or documenting that the consumer is responsible for calling flush.

  6. Missing logFile validation — If someone passes logFile: 123, appendFileSync will fail at flush time, far from the source of the bug. A guard in the factory would surface this earlier.

Minor nits:

  • The docblock says @returns {Object} — could be more specific with @returns {{ log, command, process, result, flush }} or a @typedef.
  • The args = [] default in command is fine, but the function could just use rest params: const command = (cmd, ...args) => log(`Command: ${cmd} ${args.join(' ')}`).
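The flush-on-exit wiring from issue 5 can be sketched in consumer code. This is a minimal standalone sketch: the bare buffer and flush below stand in for the logger's internals, and the names are illustrative, not the module's API.

```javascript
// Standalone sketch of the flush-on-exit pattern from issue 5. The buffer
// and flush stand in for the logger's internals; registering the handler in
// consumer code keeps the factory itself free of side effects.
const buffer = ['[2024-01-01T00:00:00.000Z] pending entry\n'];

const flush = () => {
  if (buffer.length === 0) return;
  // appendFileSync(logFile, buffer.join('')) would run here in the real logger
  buffer.length = 0;
};

// 'exit' handlers must be synchronous, which this flush already is.
process.on('exit', flush);

flush();
console.log(buffer.length); // buffer drained
```

Registering the handler in the consumer rather than the factory means calling createDebugLogger twice never stacks duplicate listeners.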

Collaborator Author
Remediated in PR #412 — all 6 issues addressed:

  1. buffer.length = 0 → kept (pragmatic internal state; noted as known deviation from immutability preference)
  2. Circular ref guard added to formatMessage (try/catch returns '[Circular]')
  3. process export renamed to logProcess — no more global shadowing
  4. writeToFile renamed to bufferEntry — accurately describes deferred buffering
  5. Flush-on-exit responsibility documented in JSDoc; process.on('exit') avoided (side effect in factory)
  6. logFile type guard added — throws TypeError at factory call time, not at flush time
    Minor nits: @returns updated with specific method list; command(cmd, args=[]) converted to rest params command(cmd, ...args).
    See: fix(ai): Retroactive review remediation — PR 1-3 findings #412

@@ -0,0 +1,53 @@
import { appendFileSync } from 'fs';

/**
* Create a debug logger that can write to console and/or log file.
* @param {Object} options
* @param {boolean} [options.debug=false] - Enable debug logging
* @param {string} [options.logFile] - Optional log file path
* @returns {Object} Logger with log, command, process, result, and flush methods
*/
export const createDebugLogger = ({ debug = false, logFile } = {}) => {
  const buffer = [];

  const formatMessage = (parts) =>
    parts.map(part => typeof part === 'object' ? JSON.stringify(part) : String(part)).join(' ');

  const writeToFile = (message) => {
    if (!logFile) return;

    const timestamp = new Date().toISOString();
    const logEntry = `[${timestamp}] ${message}\n`;
    buffer.push(logEntry);
  };

  const log = (...parts) => {
    const message = formatMessage(parts);

    if (debug) {
      console.error(`[DEBUG] ${message}`);
    }

    writeToFile(message);
  };

  const command = (cmd, args = []) => {
    log(`Command: ${cmd} ${args.join(' ')}`);
  };

  const logProcess = (data) => {
    log('Process:', data);
  };

  const result = (data) => {
    log('Result:', data);
  };

  const flush = () => {
    if (!logFile || buffer.length === 0) return;
    appendFileSync(logFile, buffer.join(''));
    buffer.length = 0;
  };

  return { log, command, process: logProcess, result, flush };
};
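The logFile guard flagged in review issue 6 could be added at the top of the factory. A hedged sketch: the factory shape mirrors the file above but is truncated, and the TypeError message is an assumption, not the remediated code in PR #412.

```javascript
// Sketch of the logFile type guard from issue 6: fail at factory-call time,
// near the source of the bug, instead of later inside flush.
const createDebugLogger = ({ debug = false, logFile } = {}) => {
  if (logFile !== undefined && typeof logFile !== 'string') {
    throw new TypeError(`logFile must be a string path, got ${typeof logFile}`);
  }
  // ...rest of the factory as in the file above...
  return { debug, logFile };
};

let caught;
try {
  createDebugLogger({ logFile: 123 });
} catch (error) {
  caught = error;
}
console.log(caught instanceof TypeError); // guard fires at construction time
```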
172 changes: 172 additions & 0 deletions source/debug-logger.test.js
@@ -0,0 +1,172 @@
import { describe, test, beforeEach, afterEach, vi } from 'vitest';
import { assert } from './vitest.js';
import { createDebugLogger } from './debug-logger.js';
import { rmSync, existsSync, readFileSync, mkdtempSync } from 'fs';
import { join } from 'path';
import { tmpdir } from 'os';

describe('debug-logger', () => {
  describe('createDebugLogger()', () => {
    let consoleErrorSpy;

    beforeEach(() => {
      consoleErrorSpy = vi.spyOn(console, 'error').mockImplementation(() => {});
    });

    afterEach(() => {
      consoleErrorSpy.mockRestore();
    });
Comment on lines +12 to +18
Collaborator
NIT: should we use onTestFinished instead to keep tests self-contained and explicit?

Collaborator Author

Remediated in PR #412: replaced the shared beforeEach/afterEach spy pattern with per-test onTestFinished(() => consoleErrorSpy.mockRestore()) in every test that creates a spy. The file-based tests' try/finally { rmSync(...) } cleanup has also been migrated to onTestFinished for full consistency. See: #412


    test('logs to console when debug is enabled', () => {
      const logger = createDebugLogger({ debug: true });
      logger.log('test message');

      assert({
        given: 'debug enabled',
        should: 'write to console.error',
        actual: consoleErrorSpy.mock.calls.length > 0,
        expected: true
      });

      assert({
        given: 'a log message',
        should: 'include [DEBUG] prefix',
        actual: consoleErrorSpy.mock.calls[0][0],
        expected: '[DEBUG] test message'
      });
    });

    test('does not log to console when debug is disabled', () => {
      const logger = createDebugLogger({ debug: false });
      logger.log('test message');

      assert({
        given: 'debug disabled',
        should: 'not write to console.error',
        actual: consoleErrorSpy.mock.calls.length,
        expected: 0
      });
    });

    test('writes to log file when logFile is specified', () => {
      const testDir = mkdtempSync(join(tmpdir(), 'riteway-'));
      const logFile = join(testDir, 'debug.log');

      try {
        const logger = createDebugLogger({ debug: true, logFile });

        logger.log('test message 1');
        logger.log('test message 2');
        logger.flush();

        assert({
          given: 'logFile specified',
          should: 'create the log file',
          actual: existsSync(logFile),
          expected: true
        });

        const logContents = readFileSync(logFile, 'utf-8');

        assert({
          given: 'multiple log messages',
          should: 'write all messages to file',
          actual: logContents.includes('test message 1') && logContents.includes('test message 2'),
          expected: true
        });
      } finally {
        rmSync(testDir, { recursive: true, force: true });
      }
    });

    test('includes timestamp in log file entries', () => {
      const testDir = mkdtempSync(join(tmpdir(), 'riteway-'));
      const logFile = join(testDir, 'debug.log');

      try {
        const logger = createDebugLogger({ debug: true, logFile });

        logger.log('test message');
        logger.flush();

        const logContents = readFileSync(logFile, 'utf-8');

        assert({
          given: 'a log message',
          should: 'include ISO timestamp',
          actual: /\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}/.test(logContents),
          expected: true
        });
      } finally {
        rmSync(testDir, { recursive: true, force: true });
      }
    });

    test('handles object logging with JSON serialization', () => {
      const logger = createDebugLogger({ debug: true });
      const testObj = { key: 'value', nested: { data: 123 } };

      logger.log('Object:', testObj);

      assert({
        given: 'object in log message',
        should: 'serialize object to JSON',
        actual: consoleErrorSpy.mock.calls[0][0].includes('"key":"value"'),
        expected: true
      });
    });

    test('provides structured logging methods', () => {
      const logger = createDebugLogger({ debug: true });

      logger.command('node', ['arg1', 'arg2']);
      logger.process({ pid: 123, exitCode: 0 });
      logger.result({ passed: true });

      assert({
        given: 'command logged',
        should: 'format command with args',
        actual: consoleErrorSpy.mock.calls[0][0].includes('Command: node arg1 arg2'),
        expected: true
      });

      assert({
        given: 'process info logged',
        should: 'serialize process data',
        actual: consoleErrorSpy.mock.calls[1][0].includes('"pid":123'),
        expected: true
      });

      assert({
        given: 'result logged',
        should: 'serialize result data',
        actual: consoleErrorSpy.mock.calls[2][0].includes('"passed":true'),
        expected: true
      });
    });

    test('flush is a no-op when no logFile is configured', () => {
      const logger = createDebugLogger({ debug: true });
      logger.log('test message');

      assert({
        given: 'no logFile configured',
        should: 'not throw when flush is called',
        actual: (() => { logger.flush(); return true; })(),
        expected: true
      });
    });

    test('default debug value is false', () => {
      const logger = createDebugLogger();
      logger.log('test');

      assert({
        given: 'no debug option',
        should: 'default to debug disabled',
        actual: consoleErrorSpy.mock.calls.length,
        expected: 0
      });
    });
  });
});
28 changes: 28 additions & 0 deletions source/limit-concurrency.js
@@ -0,0 +1,28 @@
/**
* Simple concurrency limiter to avoid resource exhaustion.
* Executes tasks with a maximum concurrency limit.
* @param {Array<Function>} tasks - Array of async task functions
* @param {number} limit - Maximum number of concurrent tasks
* @returns {Promise<Array>} Results from all tasks
*/
export const limitConcurrency = async (tasks, limit) => {
  const results = [];
  const executing = [];

  // Sequential launch is required; await throttles entry into the pool
  for (const task of tasks) {
    const promise = task().then(result => {
      executing.splice(executing.indexOf(promise), 1);
      return result;
    });

    results.push(promise);
    executing.push(promise);

    if (executing.length >= limit) {
      await Promise.race(executing);
    }
  }

  return Promise.all(results);
};
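The limiter can be exercised with a peak-tracking harness like the one below. The function body is repeated verbatim so the sketch runs standalone; the harness itself (active/peak counters, timer delays) is illustrative, not part of the module.

```javascript
// limitConcurrency re-inlined verbatim so this sketch is self-contained.
const limitConcurrency = async (tasks, limit) => {
  const results = [];
  const executing = [];
  for (const task of tasks) {
    const promise = task().then(result => {
      executing.splice(executing.indexOf(promise), 1);
      return result;
    });
    results.push(promise);
    executing.push(promise);
    if (executing.length >= limit) {
      await Promise.race(executing);
    }
  }
  return Promise.all(results);
};

// Track peak in-flight tasks to confirm the cap holds.
let active = 0;
let peak = 0;
const makeTask = (value) => async () => {
  active += 1;
  peak = Math.max(peak, active);
  await new Promise(resolve => setTimeout(resolve, 5));
  active -= 1;
  return value;
};

const results = await limitConcurrency([1, 2, 3, 4, 5].map(makeTask), 2);
console.log(results, peak); // results stay in input order; peak never exceeds 2
```

Note that results preserve input order even though completion order may differ, because the promises are pushed before any awaiting happens.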
100 changes: 100 additions & 0 deletions source/limit-concurrency.test.js
@@ -0,0 +1,100 @@
import { describe, test, vi } from 'vitest';
import { assert } from './vitest.js';
import { Try } from './riteway.js';
import { limitConcurrency } from './limit-concurrency.js';

describe('limit-concurrency', () => {
  describe('limitConcurrency()', () => {
    test('returns results for all tasks in input order', async () => {
      const tasks = [
        () => Promise.resolve(1),
        () => Promise.resolve(2),
        () => Promise.resolve(3),
      ];

      const results = await limitConcurrency(tasks, 2);

      assert({
        given: 'three tasks and a limit of 2',
        should: 'return all results in input order',
        actual: results,
        expected: [1, 2, 3],
      });
    });

    test('limits concurrent execution to the specified limit', async () => {
      vi.useFakeTimers();
      let activeConcurrent = 0;
      let maxConcurrent = 0;

      const makeTask = () => async () => {
        activeConcurrent += 1;
        maxConcurrent = Math.max(maxConcurrent, activeConcurrent);
        await new Promise(resolve => setTimeout(resolve, 10));
        activeConcurrent -= 1;
      };

      const tasks = Array.from({ length: 6 }, makeTask);
      const run = limitConcurrency(tasks, 2);
      await vi.runAllTimersAsync();
      await run;
      vi.useRealTimers();

      assert({
        given: '6 tasks with a limit of 2',
        should: 'never exceed 2 concurrent executions',
        actual: maxConcurrent <= 2,
        expected: true,
      });

      assert({
        given: '6 tasks with a limit of 2',
        should: 'reach the full concurrency limit',
        actual: maxConcurrent,
        expected: 2,
      });
    });

    test('returns empty array for empty task list', async () => {
      const results = await limitConcurrency([], 3);

      assert({
        given: 'empty task array',
        should: 'return empty array',
        actual: results,
        expected: [],
      });
    });

    test('propagates rejection when a task fails', async () => {
      const tasks = [
        () => Promise.resolve('ok'),
        () => Promise.reject(new Error('task failed')),
        () => Promise.resolve('also ok'),
      ];

      const error = await Try(limitConcurrency, tasks, 3);

      assert({
        given: 'a task that rejects',
        should: 'propagate the error',
        actual: error?.message,
        expected: 'task failed',
      });
    });

    test('executes all tasks when limit equals task count', async () => {
      const results = await limitConcurrency(
        [() => Promise.resolve('a'), () => Promise.resolve('b')],
        2,
      );

      assert({
        given: '2 tasks with limit matching task count',
        should: 'execute all tasks and return results',
        actual: results,
        expected: ['a', 'b'],
      });
    });
  });
});
35 changes: 35 additions & 0 deletions source/tap-yaml.js
@@ -0,0 +1,35 @@
import { createError } from 'error-causes';

/**
* Parse the judge agent's TAP YAML diagnostic output into a structured object.
*
* @param {string} output - Raw output from judge agent containing TAP YAML block
* @returns {{ passed: boolean, actual: string, expected: string, score: number }}
* @throws {Error} If no valid TAP YAML block found
*/
export const parseTAPYAML = (output) => {
  // Strict regex: requires --- at line boundaries. LLMs sometimes add surrounding text,
  // so strict matching ensures we only accept clean TAP YAML blocks.
  const match = output.match(/^---\s*\n([\s\S]*?)\n---\s*$/m);
  if (!match) {
    throw createError({
      name: 'ParseError',
      message: 'Judge output does not contain a valid TAP YAML block (--- delimited)',
      code: 'JUDGE_INVALID_TAP_YAML',
      rawOutput: output
    });
  }

  const parseField = (key, rawValue) => {
    const value = rawValue.replace(/^["']|["']$/g, '').trim();
    if (key === 'passed') return { [key]: value === 'true' };
    if (key === 'score') return { [key]: Number(value) };
    return { [key]: value };
  };

  return match[1]
    .split('\n')
    .map(line => line.match(/^(\w+):\s*(.+)$/))
    .filter(Boolean)
    .reduce((acc, [, key, rawValue]) => ({ ...acc, ...parseField(key, rawValue) }), {});
};
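A happy-path call might look like the following. This is a standalone sketch: the parser is re-inlined with a plain Error in place of the error-causes dependency, and the sample judge output is invented for illustration.

```javascript
// parseTAPYAML re-inlined (plain Error substituted for createError) so this
// sketch runs without the error-causes dependency.
const parseTAPYAML = (output) => {
  const match = output.match(/^---\s*\n([\s\S]*?)\n---\s*$/m);
  if (!match) throw new Error('JUDGE_INVALID_TAP_YAML');
  const parseField = (key, rawValue) => {
    const value = rawValue.replace(/^["']|["']$/g, '').trim();
    if (key === 'passed') return { [key]: value === 'true' };
    if (key === 'score') return { [key]: Number(value) };
    return { [key]: value };
  };
  return match[1]
    .split('\n')
    .map(line => line.match(/^(\w+):\s*(.+)$/))
    .filter(Boolean)
    .reduce((acc, [, key, rawValue]) => ({ ...acc, ...parseField(key, rawValue) }), {});
};

// Invented sample of what a judge agent's TAP YAML diagnostics might contain.
const sample = [
  '---',
  'passed: true',
  "actual: 'rendered output'",
  "expected: 'rendered output'",
  'score: 0.9',
  '---',
].join('\n');

const parsed = parseTAPYAML(sample);
console.log(parsed); // { passed: true, actual: 'rendered output', expected: 'rendered output', score: 0.9 }
```

Quoted values are unwrapped, passed is coerced to a boolean, and score to a number; any line that is not a simple key: value pair is silently skipped by the filter(Boolean) step.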