A serverless mock API deployed on Vercel that provides pre-generated test data for the Autobox multi-agent AI simulation platform.
autobox-mocks-api serves as a development and testing backend by exposing all autobox-api endpoints with realistic mock data. It uses pre-generated static data to ensure consistent responses across environments, making it ideal for frontend development, testing, and demos.
- Serverless deployment on Vercel
- Pre-generated mock data with LLM-powered content
- Type-safe with TypeScript and Zod validation
- Fast responses with static data serving
- Automated mock generation and synchronization pipeline
- Comprehensive test coverage
- Node.js 18+
- Yarn package manager
- OpenAI API key (for generating new mock data)
```bash
# Clone the repository
git clone https://github.com/yourusername/autobox-mocks-api.git
cd autobox-mocks-api

# Install dependencies
yarn install

# Start Vercel dev server on port 9000
yarn start:vercel
```
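The health-check endpoint in `api/ping.ts` is the simplest function in the project; a Vercel handler of that shape can be sketched as below. The response payload and the `MockResponse` stub are assumptions for illustration, not the actual contents of `api/ping.ts`:

```typescript
// Minimal structural stand-in for the parts of Vercel's response object
// that a handler uses (an assumption for illustration).
interface MockResponse {
  statusCode?: number;
  body?: unknown;
  status(code: number): MockResponse;
  json(data: unknown): MockResponse;
}

// Hypothetical health-check handler of the shape api/ping.ts likely has;
// the real payload may differ.
function pingHandler(req: unknown, res: MockResponse): MockResponse {
  return res.status(200).json({ status: 'ok' });
}

// Exercise the handler with a stub response object.
const res: MockResponse = {
  status(code) { this.statusCode = code; return this; },
  json(data) { this.body = data; return this; },
};
pingHandler({}, res);
console.log(res.statusCode, JSON.stringify(res.body));
```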
```bash
# Set your OpenAI API key
export OPENAI_API_KEY=your-api-key

# Generate new mock data
yarn generate

# Or build and run the compiled generator
yarn generate:build
yarn generate:prod
```
The project uses a two-stage data pipeline:

- JSON mock files in `mocks/` (generated)
- TypeScript files in `src/mocks/` (synced from JSON)
```bash
# One-time sync from JSON to TypeScript
yarn sync:mocks

# Watch mode for automatic syncing during development
yarn sync:mocks:watch

# Generate and sync in one command
yarn generate:all

# Generate and sync with watch mode
yarn dev:mocks
```
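The core transformation in the sync step — turning a generated JSON file into a typed TypeScript module — can be sketched as follows. The emitted module shape is an assumption for illustration; the real logic lives in `scripts/sync-mocks.ts`:

```typescript
// Sketch of the JSON → TypeScript sync step. The emitted module shape
// (a single `as const` export per file) is an assumption; see
// scripts/sync-mocks.ts for the actual implementation.
function jsonToTsModule(exportName: string, data: unknown): string {
  const literal = JSON.stringify(data, null, 2);
  return `// Auto-generated from mocks/${exportName}.json — do not edit.\n` +
         `export const ${exportName} = ${literal} as const;\n`;
}

const out = jsonToTsModule('organizations', [{ id: 'org-1', name: 'Acme AI' }]);
console.log(out);
```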
```
autobox-mocks-api/
├── api/                  # Vercel serverless functions
│   ├── index.ts          # Main API router
│   └── ping.ts           # Health check endpoint
├── mocks/                # Generated JSON mock data
├── src/
│   ├── handler/          # Request handlers
│   ├── mocks/            # TypeScript mock data
│   └── utils/            # Utility functions
└── scripts/
    ├── generator/        # Mock data generation
    │   ├── generators/   # Entity generators
    │   ├── openai/       # LLM integration
    │   └── schemas/      # Zod schemas
    └── sync-mocks.ts     # JSON to TS sync
```
- `GET /api/organizations` - List all organizations
- `GET /api/organizations/[oid]` - Get organization by ID
- `GET /api/organizations/[oid]/projects` - List projects for organization
- `GET /api/organizations/[oid]/projects/[pid]` - Get project by ID
- `GET /api/organizations/[oid]/projects/[pid]/simulations` - List simulations
- `GET /api/organizations/[oid]/projects/[pid]/simulations/[id]` - Get simulation
- `GET /api/organizations/[oid]/projects/[pid]/simulations/[sid]/runs` - List runs
- `GET /api/organizations/[oid]/projects/[pid]/simulations/[sid]/runs/[id]` - Get run
- `GET /api/.../runs/[rid]/agents` - List agents for run
- `GET /api/.../runs/[rid]/agents/[id]` - Get agent by ID
- `GET /api/.../runs/[rid]/traces` - Get traces for run
- `GET /api/.../runs/[rid]/agents/[aid]/traces` - Get agent traces
- `GET /api/.../runs/[rid]/metrics` - Get run metrics
- `GET /api/.../simulations/[sid]/metrics/definitions` - Get metric definitions
- `GET /api/ping` - Health check
- `POST /api/organizations/[oid]/chat` - Chat endpoint
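Because every resource nests under its parents, clients typically centralize path construction. A sketch of such helpers (the function names are hypothetical and not part of this API):

```typescript
// Hypothetical client-side helpers for building the nested resource
// paths listed above; the mock API defines only the routes themselves.
const base = '/api/organizations';

function projectPath(oid: string, pid: string): string {
  return `${base}/${oid}/projects/${pid}`;
}

function runPath(oid: string, pid: string, sid: string, rid: string): string {
  return `${projectPath(oid, pid)}/simulations/${sid}/runs/${rid}`;
}

console.log(runPath('org-1', 'proj-1', 'sim-1', 'run-1'));
// → /api/organizations/org-1/projects/proj-1/simulations/sim-1/runs/run-1
```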
```bash
# Run all tests
yarn test

# Run unit tests only
yarn test:unit

# Run integration tests
yarn test:integration

# Type checking
yarn check:types
```
1. Create a handler in `src/handler/`:

   ```typescript
   import type { VercelRequest, VercelResponse } from '@vercel/node';

   export function getNewEndpoint(req: VercelRequest, res: VercelResponse) {
     const data = mockData.newEndpoint;
     return res.status(200).json(data);
   }
   ```

2. Add the route to `api/index.ts`:

   ```typescript
   const routes = {
     'GET:/api/new-endpoint': getNewEndpoint,
     // ...
   };
   ```

3. Add mock data to the generator or seeds
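The `'METHOD:path'` keys in the route table imply a lookup that also resolves dynamic `[param]` segments. One way that matching might work (a sketch of the idea, not the actual router in `api/index.ts`):

```typescript
// Sketch of matching a concrete request path against a route pattern
// containing [param] segments. The real router in api/index.ts may
// differ; this only illustrates the idea behind the route-table keys.
function matchRoute(pattern: string, path: string): Record<string, string> | null {
  const patternSegs = pattern.split('/');
  const pathSegs = path.split('/');
  if (patternSegs.length !== pathSegs.length) return null;

  const params: Record<string, string> = {};
  for (let i = 0; i < patternSegs.length; i++) {
    const seg = patternSegs[i];
    if (seg.startsWith('[') && seg.endsWith(']')) {
      // Dynamic segment: capture its value under the bracketed name.
      params[seg.slice(1, -1)] = pathSegs[i];
    } else if (seg !== pathSegs[i]) {
      return null; // Static segment mismatch.
    }
  }
  return params;
}

const match = matchRoute('/api/organizations/[oid]', '/api/organizations/org-42');
console.log(match); // → { oid: 'org-42' }
```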
- Edit the generator in `scripts/generator/generators/`
- Run `yarn generate` to create new JSON
- Run `yarn sync:mocks` to update the TypeScript mocks
- Test with `yarn start:vercel`
The API is automatically deployed to Vercel on every push to the main branch.
```bash
# Install Vercel CLI
npm i -g vercel

# Deploy to production
vercel --prod
```
The mock data follows this hierarchy:

```
Organizations (1) → (N) Projects
Projects      (1) → (N) Simulations
Simulations   (1) → (N) Runs
Runs          (1) → (N) Agents, Traces, Metrics
```
Each entity has realistic attributes generated using OpenAI GPT-4:
- Organizations: Tech companies with various focus areas
- Projects: AI/ML initiatives with business goals
- Simulations: Multi-agent scenarios for negotiation, planning, etc.
- Runs: Execution instances with timestamps and status
- Agents: Specialized AI agents with roles and objectives
- Traces: Conversation history and agent interactions
- Metrics: Performance and outcome measurements
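In TypeScript terms, the hierarchy corresponds roughly to nested interfaces like the following. The field names here are illustrative assumptions; the authoritative shapes are the Zod schemas in `scripts/generator/schemas/`:

```typescript
// Illustrative interfaces for the mock-data hierarchy. Field names are
// assumptions; the real shapes are defined by the project's Zod schemas.
interface Organization { id: string; name: string; projects: Project[] }
interface Project { id: string; name: string; simulations: Simulation[] }
interface Simulation { id: string; name: string; runs: Run[] }
interface Run {
  id: string;
  status: 'running' | 'completed' | 'failed';
  startedAt: string; // ISO timestamp
  agents: { id: string; role: string }[];
}

const sample: Organization = {
  id: 'org-1',
  name: 'Acme AI',
  projects: [{
    id: 'proj-1',
    name: 'Negotiation Study',
    simulations: [{
      id: 'sim-1',
      name: 'Price Negotiation',
      runs: [{
        id: 'run-1',
        status: 'completed',
        startedAt: '2024-01-01T00:00:00Z',
        agents: [{ id: 'agent-1', role: 'buyer' }],
      }],
    }],
  }],
};
console.log(sample.projects[0].simulations[0].runs[0].agents[0].role);
```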
```bash
# For mock generation
OPENAI_API_KEY=your-openai-api-key
OPENAI_MODEL=gpt-4-turbo-preview  # Optional, defaults to gpt-4

# For local development
PORT=9000  # Vercel dev server port
```
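Reading these variables with the documented fallback for the optional model might look like this (a sketch; the generator's actual configuration code may differ):

```typescript
// Sketch of reading the generator's environment configuration; per the
// variables documented above, OPENAI_MODEL defaults to gpt-4 when unset.
function generatorConfig(env: Record<string, string | undefined> = process.env) {
  const apiKey = env.OPENAI_API_KEY;
  if (!apiKey) {
    throw new Error('OPENAI_API_KEY is required for mock generation');
  }
  return { apiKey, model: env.OPENAI_MODEL ?? 'gpt-4' };
}

const cfg = generatorConfig({ OPENAI_API_KEY: 'test-key' });
console.log(cfg.model); // → gpt-4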
See `vercel.json` for deployment settings:
- Function timeout: 5 seconds
- All routes handled by single function
- GitHub integration enabled
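A `vercel.json` consistent with those settings might look like this sketch; only the 5-second timeout is documented above, and the other fields are assumptions:

```json
{
  "functions": {
    "api/index.ts": { "maxDuration": 5 }
  },
  "rewrites": [
    { "source": "/api/:path*", "destination": "/api/index" }
  ]
}
```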
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
This project is part of the Autobox platform. See LICENSE for details.
- autobox-api - Backend API with agent orchestration
- autobox-ui - Frontend application
- autobox-mocks - Python mock data generator
For issues, questions, or suggestions, please open an issue on GitHub.