MockForge
MockForge is a comprehensive mocking framework for APIs, gRPC services, and WebSockets. It provides a unified interface for creating, managing, and deploying mock servers across different protocols.
Features
- Multi-Protocol Support: HTTP REST APIs, gRPC services, and WebSocket connections
- Dynamic Response Generation: Create realistic mock responses with configurable latency and failure rates
- Scenario Management: Define complex interaction scenarios with state management
- CLI Tool: Easy-to-use command-line interface for local development
- Admin UI: Web-based interface for managing mock servers
- Extensible Architecture: Plugin system for custom response generators
Quick Start
Installation
cargo install mockforge-cli
Basic Usage
# Start a mock server with an OpenAPI spec
cargo run -p mockforge-cli -- serve --spec examples/openapi-demo.json --http-port 3000
# Add WebSocket support with replay file
MOCKFORGE_WS_REPLAY_FILE=examples/ws-demo.jsonl cargo run -p mockforge-cli -- serve --ws-port 3001
# Full configuration with Admin UI
MOCKFORGE_WS_REPLAY_FILE=examples/ws-demo.jsonl \
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
cargo run -p mockforge-cli -- serve --spec examples/openapi-demo.json --admin --admin-port 9080
# Use configuration file
cargo run -p mockforge-cli -- serve --config demo-config.yaml
Docker
# Pre-built images are not yet published to Docker Hub; build locally first
git clone https://github.com/SaaSy-Solutions/mockforge.git && cd mockforge
docker build -t mockforge .
docker run -p 3000:3000 -p 3001:3001 -p 50051:50051 mockforge
Documentation Structure
- Getting Started - Installation and basic setup
- HTTP Mocking - REST API mocking guide
- gRPC Mocking - gRPC service mocking
- WebSocket Mocking - WebSocket connection mocking
- Configuration - Advanced configuration options
- API Reference - Complete API documentation
- Contributing - How to contribute to MockForge
- FAQ - Frequently asked questions
Examples
Check out the examples/ directory for sample configurations and use cases.
Community
- GitHub Issues - Report bugs and request features
- GitHub Discussions - Ask questions and share ideas
- Discord - Join our community chat
License
Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE)
- MIT License (LICENSE-MIT)
at your option.
Getting Started with MockForge
Welcome to MockForge! This guide will get you up and running in minutes. MockForge is a powerful, multi-protocol mocking framework that helps frontend and backend teams work in parallel by providing realistic API mocks.
What is MockForge?
MockForge is a comprehensive mocking framework that supports multiple protocols:
- HTTP/REST APIs - Mock REST endpoints from OpenAPI/Swagger specs
- WebSocket - Simulate real-time connections with replay and interactive modes
- gRPC - Mock gRPC services from .proto files
- GraphQL - Generate mock resolvers from GraphQL schemas
Why MockForge?
- 🚀 Fast Setup: Go from OpenAPI spec to running mock server in seconds
- 🔄 Multi-Protocol: Mock HTTP, WebSocket, gRPC, and GraphQL in one tool
- 🎯 Realistic Data: Generate intelligent mock data with faker functions and templates
- 🔌 Extensible: Plugin system for custom authentication, templates, and data sources
- 📊 Admin UI: Visual interface for monitoring and managing mock servers
Installation
Prerequisites
MockForge requires one of:
- Rust toolchain (for cargo install)
- Docker (for containerized deployment)
Method 1: Cargo Install (Recommended)
cargo install mockforge-cli
Verify installation:
mockforge --version
Method 2: Docker
# Build the Docker image
git clone https://github.com/SaaSy-Solutions/mockforge.git
cd mockforge
docker build -t mockforge .
# Run with default ports
docker run -p 3000:3000 -p 3001:3001 -p 9080:9080 mockforge
Method 3: Build from Source
git clone https://github.com/SaaSy-Solutions/mockforge.git
cd mockforge
cargo build --release
# Install globally
cargo install --path crates/mockforge-cli
See Installation Guide for detailed instructions and troubleshooting.
Quick Start: Your First Mock API
Let’s create a simple mock API in 3 steps:
Step 1: Create an OpenAPI Specification
Create a file my-api.yaml:
openapi: 3.0.3
info:
title: My First API
version: 1.0.0
paths:
/users:
get:
summary: List users
responses:
'200':
description: Success
content:
application/json:
schema:
type: array
items:
$ref: '#/components/schemas/User'
post:
summary: Create user
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/User'
responses:
'201':
description: Created
content:
application/json:
schema:
$ref: '#/components/schemas/User'
/users/{id}:
get:
summary: Get user by ID
parameters:
- name: id
in: path
required: true
schema:
type: string
responses:
'200':
description: Success
content:
application/json:
schema:
$ref: '#/components/schemas/User'
components:
schemas:
User:
type: object
required:
- id
- name
- email
properties:
id:
type: string
example: "{{uuid}}"
name:
type: string
example: "John Doe"
email:
type: string
format: email
example: "john@example.com"
createdAt:
type: string
format: date-time
example: "{{now}}"
Step 2: Start MockForge with Your Spec
mockforge serve --spec my-api.yaml --http-port 3000
You should see:
🚀 MockForge v1.0.0 starting...
📡 HTTP server listening on 0.0.0.0:3000
📋 OpenAPI spec loaded from my-api.yaml
✅ Ready to serve requests at http://localhost:3000
Step 3: Test Your Mock API
Open a new terminal and test your endpoints:
# List users
curl http://localhost:3000/users
# Get a specific user
curl http://localhost:3000/users/123
# Create a user
curl -X POST http://localhost:3000/users \
-H "Content-Type: application/json" \
-d '{"name": "Jane Smith", "email": "jane@example.com"}'
Congratulations! You have a working mock API! 🎉
Enable Dynamic Data (Optional)
To get unique data on each request, enable template expansion:
# Stop the server (Ctrl+C), then restart with templates enabled
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
mockforge serve --spec my-api.yaml --http-port 3000
Now {{uuid}} and {{now}} in your spec will generate unique values!
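With expansion enabled, repeated requests return fresh values. For example (the IDs and timestamps shown here are illustrative, not literal output):
curl http://localhost:3000/users/123
# {"id": "1f0c...", "name": "John Doe", "email": "john@example.com", "createdAt": "2025-09-12T17:20:01Z"}
curl http://localhost:3000/users/123
# {"id": "8a4e...", ...}  a new UUID and timestamp on every call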
Basic Configuration
Using a Configuration File
Create mockforge.yaml for better control:
http:
port: 3000
openapi_spec: my-api.yaml
response_template_expand: true
cors:
enabled: true
allowed_origins: ["http://localhost:3000"]
admin:
enabled: true
port: 9080
logging:
level: info
Start with the config file:
mockforge serve --config mockforge.yaml
Environment Variables
All settings can be set via environment variables:
export MOCKFORGE_HTTP_PORT=3000
export MOCKFORGE_ADMIN_ENABLED=true
export MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
export MOCKFORGE_LOG_LEVEL=debug
mockforge serve --spec my-api.yaml
See Configuration Reference for all options.
Common Use Cases
Frontend Development
Start a mock server and point your frontend to it:
# Terminal 1: Start mock server
mockforge serve --spec api.json --http-port 3000 --admin
# Terminal 2: Point frontend to mock server
export REACT_APP_API_URL=http://localhost:3000
npm start
API Contract Testing
Test that your API matches the OpenAPI specification:
mockforge serve --spec api.json \
--validation enforce \
--http-port 3000
Team Collaboration
Share mock configurations via Git:
# Commit your mock config
git add mockforge.yaml
git commit -m "Add user API mocks"
# Team members can use the same mocks
git pull
mockforge serve --config mockforge.yaml
Next Steps
Now that you have MockForge running, explore these resources:
Tutorials
- 5-Minute API Tutorial - Build a complete mock API quickly
- Mock from OpenAPI Spec - Detailed OpenAPI workflow
- React + MockForge Workflow - Use MockForge with React apps
- Vue + MockForge Workflow - Use MockForge with Vue apps
User Guides
- HTTP Mocking - REST API mocking features
- WebSocket Mocking - Real-time connection mocking
- gRPC Mocking - gRPC service mocking
- Plugin System - Extend MockForge with plugins
Reference
- Configuration Guide - Complete configuration options
- FAQ - Common questions and answers
- Troubleshooting - Solve common issues
Examples
- React Demo - Complete React application
- Vue Demo - Complete Vue 3 application
- Example Projects - All available examples
Troubleshooting
Server Won’t Start
# Check if port is in use
lsof -i :3000
# Use a different port
mockforge serve --spec my-api.yaml --http-port 3001
Templates Not Working
Enable template expansion:
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true mockforge serve --spec my-api.yaml
Need More Help?
- Check the FAQ
- Review Troubleshooting Guide
- Open a GitHub Issue
Ready to dive deeper? Continue to the 5-Minute Tutorial or explore all available examples.
Installation
MockForge can be installed through multiple methods depending on your needs and environment. Choose the installation method that best fits your workflow.
Prerequisites
Before installing MockForge, ensure you have one of the following:
- Rust toolchain (for cargo installation or building from source)
- Docker (for containerized deployment)
- Pre-built binaries (when available)
Method 1: Cargo Install (Recommended)
The easiest way to install MockForge is through Cargo, Rust’s package manager:
cargo install mockforge-cli
This installs the MockForge CLI globally on your system. After installation, you can verify it’s working:
mockforge --version
Updating
To update to the latest version:
cargo install mockforge-cli --force
Method 2: Docker (Containerized)
MockForge is also available as a Docker image, which is ideal for:
- Isolated environments
- CI/CD pipelines
- Systems without Rust installed
Build Docker image
Since pre-built images are not yet published to Docker Hub, build the image locally:
# Clone and build
git clone https://github.com/SaaSy-Solutions/mockforge.git
cd mockforge
docker build -t mockforge .
Run with basic configuration
docker run -p 3000:3000 -p 3001:3001 -p 50051:50051 -p 9080:9080 \
-e MOCKFORGE_ADMIN_ENABLED=true \
-e MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
mockforge
Alternative: Docker Compose
For a complete setup with all services:
git clone https://github.com/SaaSy-Solutions/mockforge.git
cd mockforge
docker-compose up
Method 3: Building from Source
For development or custom builds, you can build MockForge from source:
git clone https://github.com/SaaSy-Solutions/mockforge.git
cd mockforge
cargo build --release
The binary will be available at target/release/mockforge.
To install it system-wide after building:
cargo install --path crates/mockforge-cli
Verification
After installation, verify MockForge is working:
# Check version
mockforge --version
# View help
mockforge --help
# Start with example configuration
mockforge serve --spec examples/openapi-demo.json --http-port 3000
Platform Support
MockForge supports:
- Linux (x86_64, aarch64)
- macOS (x86_64, aarch64)
- Windows (x86_64)
- Docker (any platform with Docker support)
Troubleshooting Installation
Cargo installation fails
If cargo install fails, ensure you have Rust installed:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
Docker permission issues
If Docker commands fail with permission errors:
# Add user to docker group (Linux)
sudo usermod -aG docker $USER
# Log out and back in for changes to take effect
Port conflicts
If default ports (3000, 3001, 9080, 50051) are in use:
# Check what's using the ports
lsof -i :3000
lsof -i :3001
# Kill conflicting processes or use different ports
mockforge serve --http-port 3001 --ws-port 3002 --admin-port 8081
Next Steps
Once installed, proceed to the Quick Start guide to create your first mock server, or read about Basic Concepts to understand how MockForge works.
Your First Mock API in 5 Minutes
Scenario: Your frontend team needs a /users API to continue development, but the backend isn’t ready. Let’s create a working mock in 5 minutes.
Step 1: Install MockForge (30 seconds)
cargo install mockforge-cli
Or use the pre-built binary from the releases page.
Step 2: Create a Simple Config (1 minute)
You can either create a config manually or use the init command:
# Option A: Use the init command (recommended)
mockforge init .
# This creates mockforge.yaml with sensible defaults
# Then edit it to match the config below
# Option B: Create manually
Create a file called my-api.yaml (or edit the generated mockforge.yaml):
http:
port: 3000
routes:
- path: /users
method: GET
response:
status: 200
body: |
[
{
"id": "{{uuid}}",
"name": "Alice Johnson",
"email": "alice@example.com",
"createdAt": "{{now}}"
},
{
"id": "{{uuid}}",
"name": "Bob Smith",
"email": "bob@example.com",
"createdAt": "{{now}}"
}
]
- path: /users/{id}
method: GET
response:
status: 200
body: |
{
"id": "{{request.path.id}}",
"name": "Alice Johnson",
"email": "alice@example.com",
"createdAt": "{{now}}"
}
- path: /users
method: POST
response:
status: 201
body: |
{
"id": "{{uuid}}",
"name": "{{request.body.name}}",
"email": "{{request.body.email}}",
"createdAt": "{{now}}"
}
Step 3: Validate Your Config (Optional but Recommended)
mockforge config validate --config my-api.yaml
You should see:
✅ Configuration is valid
📊 Summary:
Found 3 HTTP routes
Step 4: Start the Server (10 seconds)
mockforge serve --config my-api.yaml
You’ll see:
MockForge v1.0.0 starting...
HTTP server listening on 0.0.0.0:3000
Ready to serve requests at http://localhost:3000
Step 5: Test It (30 seconds)
Open a new terminal and test your endpoints:
# Get all users
curl http://localhost:3000/users
# Get a specific user
curl http://localhost:3000/users/123
# Create a new user
curl -X POST http://localhost:3000/users \
-H "Content-Type: application/json" \
-d '{"name": "Charlie Brown", "email": "charlie@example.com"}'
What just happened?
- {{uuid}} generates a unique ID each time
- {{now}} adds the current timestamp
- {{request.path.id}} captures the ID from the URL
- {{request.body.name}} reads data from POST requests
Step 6: Enable Dynamic Data (1 minute)
Want different data each time? Enable template expansion:
# Stop the server (Ctrl+C), then restart:
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true mockforge serve --config my-api.yaml
Now every request returns unique UUIDs and timestamps!
Step 7: Add the Admin UI (30 seconds)
Want to see requests in real-time?
mockforge serve --config my-api.yaml --admin --admin-port 9080
Open http://localhost:9080 in your browser to see:
- Live request logs
- API metrics
- Configuration controls
What’s Next?
In the next 5 minutes, you could:
- Use an OpenAPI Spec instead of YAML routes:
  mockforge serve --spec your-api.json --admin
- Add a Plugin for custom data generation:
  mockforge plugin install auth-jwt
  mockforge serve --config my-api.yaml --admin
- Mock a WebSocket for real-time features:
  websocket:
    port: 3001
    replay_file: chat-messages.jsonl
- Share with Your Team using workspace sync:
  mockforge sync start --directory ./team-mocks
  git add team-mocks && git commit -m "Add user API mocks"
Common Next Steps
| What You Need | Where to Go |
|---|---|
| OpenAPI/Swagger integration | OpenAPI Guide |
| More realistic fake data | Dynamic Data Guide |
| WebSocket/real-time mocking | WebSocket Guide |
| gRPC service mocking | gRPC Guide |
| Custom authentication | Security Guide |
| Team collaboration | Sync Guide |
Troubleshooting
Port already in use?
mockforge serve --config my-api.yaml --http-port 8080
Templates not working?
Make sure you set MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true or add it to your config:
http:
response_template_expand: true
Config errors?
# Validate your configuration
mockforge config validate --config my-api.yaml
# See all available options
# https://github.com/SaaSy-Solutions/mockforge/blob/main/config.template.yaml
Need help?
- Check the Configuration Validation Guide
- Review the Complete Config Template
- See Troubleshooting Guide
- Check the FAQ
Congratulations! You now have a working mock API that your frontend team can use immediately. The best part? As the real API evolves, just update your config file to match.
Quick Start
Get MockForge running in under 5 minutes with this hands-on guide. We’ll create a mock API server and test it with real HTTP requests.
Prerequisites
Ensure MockForge is installed and available in your PATH.
Step 1: Start a Basic HTTP Mock Server
MockForge can serve mock APIs defined in OpenAPI specifications. Let’s use the included example:
# Navigate to the MockForge directory (if building from source)
cd mockforge
# Start the server with the demo OpenAPI spec
mockforge serve --spec examples/openapi-demo.json --http-port 3000
You should see output like:
MockForge v0.1.0 starting...
HTTP server listening on 0.0.0.0:3000
OpenAPI spec loaded from examples/openapi-demo.json
Ready to serve requests at http://localhost:3000
Step 2: Test Your Mock API
Open a new terminal and test the API endpoints:
# Health check endpoint
curl http://localhost:3000/ping
Expected response:
{
"status": "pong",
"timestamp": "2025-09-12T17:20:01.512504405+00:00",
"requestId": "550e8400-e29b-41d4-a716-446655440000"
}
# List users endpoint
curl http://localhost:3000/users
Expected response:
[
{
"id": "550e8400-e29b-41d4-a716-446655440001",
"name": "John Doe",
"email": "john@example.com",
"createdAt": "2025-09-12T17:20:01.512504405+00:00",
"active": true
}
]
# Create a new user
curl -X POST http://localhost:3000/users \
-H "Content-Type: application/json" \
-d '{"name": "Jane Smith", "email": "jane@example.com"}'
# Get user by ID (path parameter)
curl http://localhost:3000/users/123
Step 3: Enable Template Expansion
MockForge supports dynamic content generation. Enable template expansion for more realistic data:
# Stop the current server (Ctrl+C), then restart with templates enabled
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
mockforge serve --spec examples/openapi-demo.json --http-port 3000
Now test the endpoints again - you’ll see different UUIDs and timestamps each time!
Step 4: Add WebSocket Support
MockForge can also mock WebSocket connections. Let’s add WebSocket support to our server:
# Stop the server, then restart with WebSocket support
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
MOCKFORGE_WS_REPLAY_FILE=examples/ws-demo.jsonl \
mockforge serve --spec examples/openapi-demo.json --ws-port 3001 --http-port 3000
Step 5: Test WebSocket Connection
Test the WebSocket endpoint (requires Node.js or a WebSocket client):
# Using Node.js
node -e "
const WebSocket = require('ws');
const ws = new WebSocket('ws://localhost:3001/ws');
ws.on('open', () => {
console.log('Connected! Sending CLIENT_READY...');
ws.send('CLIENT_READY');
});
ws.on('message', (data) => {
console.log('Received:', data.toString());
if (data.toString().includes('ACK')) {
ws.send('ACK');
}
if (data.toString().includes('CONFIRMED')) {
ws.send('CONFIRMED');
}
});
ws.on('close', () => console.log('Connection closed'));
"
Expected WebSocket message flow:
- Send CLIENT_READY
- Receive welcome message with session ID
- Receive data message, respond with ACK
- Receive heartbeat messages
- Receive notification, respond with CONFIRMED
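If you prefer not to script the client, any interactive WebSocket tool works; for example, the wscat package from npm opens a session where you can type CLIENT_READY, ACK, and CONFIRMED by hand to walk through the flow above:
npx wscat -c ws://localhost:3001/ws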
Step 6: Enable Admin UI (Optional)
For a visual interface to manage your mock server:
# Stop the server, then restart with admin UI
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
MOCKFORGE_WS_REPLAY_FILE=examples/ws-demo.jsonl \
mockforge serve --spec examples/openapi-demo.json \
--admin --admin-port 9080 \
--http-port 3000 --ws-port 3001
Access the admin interface at: http://localhost:9080
Step 7: Using Configuration Files
Instead of environment variables, you can use a configuration file:
# Stop the server, then start with config file
mockforge serve --config demo-config.yaml
Step 8: Docker Alternative
If you prefer Docker:
# Build and run with Docker
docker build -t mockforge .
docker run -p 3000:3000 -p 3001:3001 -p 9080:9080 \
-e MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
-e MOCKFORGE_WS_REPLAY_FILE=examples/ws-demo.jsonl \
mockforge
What’s Next?
Congratulations! You now have a fully functional mock server running. Here are some next steps:
- Learn about Basic Concepts to understand how MockForge works
- Explore HTTP Mocking for advanced REST API features
- Try WebSocket Mocking for real-time communication
- Check out the Admin UI for visual management
Troubleshooting
Server won’t start
- Check if ports 3000, 3001, or 9080 are already in use
- Verify the OpenAPI spec file path is correct
- Ensure MockForge is properly installed
Template variables not working
- Make sure MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true is set
- Check that template syntax {{variable}} is used correctly
WebSocket connection fails
- Verify WebSocket port (default 3001) is accessible
- Check that MOCKFORGE_WS_REPLAY_FILE points to a valid replay file
- Ensure the replay file uses the correct JSONL format
Need help?
- Check the examples README for detailed testing scripts
- Review Configuration Files for advanced setup
- Visit the Troubleshooting guide
Basic Concepts
Understanding MockForge’s core concepts will help you make the most of its capabilities. This guide explains the fundamental ideas behind MockForge’s design and functionality.
Multi-Protocol Architecture
MockForge is designed to mock multiple communication protocols within a single, unified framework:
HTTP/REST APIs
- OpenAPI/Swagger Support: Define API contracts using industry-standard OpenAPI specifications
- Dynamic Response Generation: Generate realistic responses based on request parameters
- Request/Response Matching: Route requests to appropriate mock responses based on HTTP methods, paths, and parameters
WebSocket Connections
- Replay Mode: Simulate scripted message sequences from recorded interactions
- Interactive Mode: Respond dynamically to client messages
- State Management: Maintain connection state across message exchanges
gRPC Services
- Protocol Buffer Integration: Mock services defined with .proto files
- Dynamic Service Discovery: Automatically discover and compile .proto files
- Streaming Support: Handle unary, server streaming, client streaming, and bidirectional streaming
- Reflection Support: Built-in gRPC reflection for service discovery
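Because reflection is built in, a generic client such as grpcurl can discover the mocked services without needing the .proto files locally. A quick check, assuming the gRPC mock is listening on the default port 50051 used elsewhere in these docs:
# List services exposed via reflection
grpcurl -plaintext localhost:50051 list
# Describe the methods of one of the listed services (substitute a real name)
grpcurl -plaintext localhost:50051 describe <service-name>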
Response Generation Strategies
MockForge offers multiple approaches to generating mock responses:
1. Static Responses
Define fixed response payloads that are returned for matching requests:
{
"status": "success",
"data": {
"id": 123,
"name": "Example Item"
}
}
2. Template-Based Dynamic Responses
Use template variables for dynamic content generation:
{
"id": "{{uuid}}",
"timestamp": "{{now}}",
"randomValue": "{{randInt 1 100}}",
"userData": "{{request.body}}"
}
3. Scenario-Based Responses
Define complex interaction scenarios with conditional logic and state management.
4. Advanced Data Synthesis (gRPC)
For gRPC services, MockForge provides sophisticated data synthesis capabilities:
- Smart Field Inference: Automatically detects data types from field names (emails, phones, IDs)
- Deterministic Generation: Reproducible test data with seeded randomness
- Relationship Awareness: Maintains referential integrity across related entities
- RAG-Driven Generation: Uses domain knowledge for contextually appropriate data
Template System
MockForge’s template system enables dynamic content generation using Handlebars-style syntax:
Built-in Template Functions
Data Generation
- {{uuid}} - Generate unique UUID v4 identifiers
- {{now}} - Current timestamp in ISO 8601 format
- {{now+1h}} - Future timestamps with offset support
- {{randInt min max}} - Random integers within a range
- {{randFloat min max}} - Random floating-point numbers
Request Data Access
- {{request.body}} - Access complete request body
- {{request.body.field}} - Access specific JSON fields
- {{request.path.param}} - Access URL path parameters
- {{request.query.param}} - Access query string parameters
- {{request.header.name}} - Access HTTP headers
Conditional Logic
- {{#if condition}}content{{/if}} - Conditional content rendering
- {{#each array}}item{{/each}} - Iterate over arrays
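These helpers compose inside a single response body. A sketch, assuming the Handlebars-style semantics described above (the field names are illustrative):
{
  "id": "{{uuid}}",
  "createdAt": "{{now}}",
  "score": "{{randInt 1 100}}",
  "greeting": "{{#if request.query.name}}Hello, {{request.query.name}}!{{/if}}"
}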
Template Expansion Control
Templates are only processed when explicitly enabled:
# Enable template expansion
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
This security feature prevents accidental template processing in production environments.
Configuration Hierarchy
MockForge supports multiple configuration methods with clear precedence:
1. Command Line Arguments (Highest Priority)
mockforge serve --http-port 3000 --ws-port 3001 --spec api.json
2. Environment Variables
MOCKFORGE_HTTP_PORT=3000
MOCKFORGE_WS_PORT=3001
MOCKFORGE_OPENAPI_SPEC=api.json
3. Configuration Files (Lowest Priority)
# config.yaml
server:
http_port: 3000
ws_port: 3001
spec: api.json
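A quick way to see the precedence in action (a sketch, assuming the layering above and the config file just shown):
# config.yaml sets http_port: 3000; the env var says 3001; the flag says 3002
MOCKFORGE_HTTP_PORT=3001 mockforge serve --config config.yaml --http-port 3002
# The server should bind to 3002: the CLI flag overrides both the env var and the file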
Server Modes
Development Mode
- Template Expansion: Enabled by default for dynamic content
- Verbose Logging: Detailed request/response logging
- Admin UI: Enabled for visual server management
- CORS: Permissive cross-origin requests
Production Mode
- Template Expansion: Disabled by default for security
- Minimal Logging: Essential information only
- Performance Optimized: Reduced overhead for high-throughput scenarios
Request Matching
MockForge uses a sophisticated matching system to route requests to appropriate responses:
HTTP Request Matching
- Method Matching: GET, POST, PUT, DELETE, PATCH
- Path Matching: Exact path or parameterized routes
- Query Parameter Matching: Optional query string conditions
- Header Matching: Conditional responses based on request headers
- Body Matching: Match against request payload structure
Priority Order
- Most specific match first (method + path + query + headers + body)
- Fall back to less specific matches
- Default response for unmatched requests
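As a sketch of that ordering, using only the route fields shown earlier in this guide (and assuming exact paths are preferred over parameterized ones, per the rules above), a GET to /users/me hits the first route while /users/123 falls through to the second:
http:
  routes:
    # Most specific: exact path
    - path: /users/me
      method: GET
      response:
        status: 200
        body: '{"id": "me", "name": "Current User"}'
    # Less specific: parameterized fallback
    - path: /users/{id}
      method: GET
      response:
        status: 200
        body: '{"id": "{{request.path.id}}", "name": "Some User"}'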
State Management
For complex scenarios, MockForge supports maintaining state across requests:
Session State
- Connection-specific data persists across WebSocket messages
- HTTP session cookies maintain state between requests
- Scenario progression tracks interaction flow
Global State
- Shared data accessible across all connections
- Configuration updates applied dynamically
- Metrics and counters maintained server-wide
Extensibility
MockForge is designed for extension through multiple mechanisms:
Custom Response Generators
Implement custom logic for generating complex responses based on business rules.
Plugin System
Extend functionality through compiled plugins for specialized use cases.
Configuration Extensions
Add custom configuration options for domain-specific requirements.
Security Considerations
Template Injection Prevention
- Templates are disabled by default in production
- Explicit opt-in required for template processing
- Input validation prevents malicious template injection
Access Control
- Configurable CORS policies
- Request rate limiting options
- Authentication simulation support
Data Privacy
- Request/response logging controls
- Sensitive data masking capabilities
- Compliance-friendly configuration options
Performance Characteristics
Throughput
- HTTP APIs: 10,000+ requests/second (depending on response complexity)
- WebSocket: 1,000+ concurrent connections
- Memory Usage: Minimal overhead per connection
Scalability
- Horizontal Scaling: Multiple instances behind load balancer
- Resource Efficiency: Low CPU and memory footprint
- Concurrent Users: Support for thousands of simultaneous connections
Integration Patterns
MockForge works well in various development and testing scenarios:
API Development
- Contract-First Development: Mock APIs before implementation
- Parallel Development: Frontend and backend teams work independently
- Integration Testing: Validate API contracts between services
Microservices Testing
- Service Virtualization: Mock dependent services during testing
- Chaos Engineering: Simulate service failures and latency
- Load Testing: Generate realistic traffic patterns
CI/CD Pipelines
- Automated Testing: Mock external dependencies in test environments
- Deployment Validation: Verify application behavior with mock services
- Performance Benchmarking: Consistent test conditions across environments
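A minimal CI sketch along those lines (assuming your spec defines a health endpoint such as the demo's /ping; substitute any known route):
# Start the mock in the background and remember its PID
mockforge serve --spec api.json --http-port 3000 &
MOCK_PID=$!
# Wait until the server answers before running tests
curl --silent --retry 10 --retry-connrefused --retry-delay 1 http://localhost:3000/ping > /dev/null
npm test          # or your project's test runner
kill $MOCK_PID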
This foundation will help you understand how to effectively use MockForge for your specific use case. The following guides provide detailed instructions for configuring and using each protocol and feature.
Tutorials
Welcome to the MockForge tutorials! These step-by-step guides walk you through common workflows and real-world scenarios.
Each tutorial is designed to be completed in 5-10 minutes and focuses on a specific goal.
Getting Started Tutorials
Your First Mock API in 5 Minutes
Time: 5 minutes | Level: Beginner
The fastest way to get started with MockForge. Create a simple REST API from scratch and test it.
You’ll learn:
- Basic YAML configuration
- Template variables
- Starting the server
- Testing endpoints
Perfect for: First-time users who want to see MockForge in action immediately.
Common Workflow Tutorials
Mock a REST API from an OpenAPI Spec
Time: 3 minutes | Level: Beginner
Automatically generate mock endpoints from your OpenAPI/Swagger specification.
You’ll learn:
- Loading OpenAPI specs
- Auto-generated responses
- Request validation
- Overriding specific endpoints
Perfect for: Teams with existing API documentation who want instant mocks.
Admin UI Walkthrough
Time: 5 minutes | Level: Beginner
Discover MockForge’s visual interface for managing your mock server without editing config files.
You’ll learn:
- Dashboard navigation
- Live request logs
- Fixture management
- Latency and fault simulation
- Full-text search
Perfect for: Visual learners and teams who prefer UI over CLI.
Add a Custom Plugin
Time: 10 minutes | Level: Intermediate
Extend MockForge with plugins for custom authentication, data generation, or business logic.
You’ll learn:
- Installing pre-built plugins
- Using plugins in configs
- Creating your own plugin
- Plugin security and permissions
Perfect for: Developers who need custom functionality beyond built-in features.
Scenario-Based Tutorials
Coming Soon
We’re working on tutorials for these common scenarios:
- Frontend Development Workflow: Set up mocks for a React/Vue/Angular app
- Microservices Testing: Mock a multi-service architecture
- Team Collaboration: Share mocks with Git and workspace sync
- CI/CD Integration: Use MockForge in automated testing pipelines
- Performance Testing: Simulate load and measure application behavior
- WebSocket Real-Time Apps: Mock chat, notifications, and live updates
- gRPC Service Development: Work with Protocol Buffers and streaming
Want to see a specific tutorial? Open an issue with your suggestion!
Tutorial Format
Each tutorial follows this structure:
- Goal: What you’ll accomplish
- Time: How long it takes
- Prerequisites: What you need before starting
- Step-by-step instructions: Clear, numbered steps
- Code examples: Ready-to-use configurations
- Troubleshooting: Common issues and solutions
- What’s next: Related guides and advanced topics
How to Use These Tutorials
For Beginners
Start with “Your First Mock API in 5 Minutes”, then move to “Mock a REST API from an OpenAPI Spec” if you have existing API documentation.
For Teams
Have team members complete “Admin UI Walkthrough” to get comfortable with the visual interface, then explore “Team Collaboration” (coming soon) for multi-user workflows.
For Developers
Jump straight to “Add a Custom Plugin” if you need advanced customization, or start with the basic tutorials to understand core concepts first.
Contributing Tutorials
Found a tutorial helpful? Have ideas for new ones? We welcome contributions!
See our Contributing Guide for details on how to submit tutorial ideas or write your own.
Quick Reference
| Tutorial | Time | Level | Tags |
|---|---|---|---|
| Your First Mock API | 5 min | Beginner | Getting Started, HTTP, Basic |
| Mock OpenAPI Spec | 3 min | Beginner | HTTP, OpenAPI, Validation |
| Admin UI Walkthrough | 5 min | Beginner | Admin UI, Monitoring, Visual |
| Add Custom Plugin | 10 min | Intermediate | Plugins, Extension, WASM |
Ready to start? Pick a tutorial above and follow along. Each one is designed to give you hands-on experience with MockForge’s powerful features.
Mock a REST API from an OpenAPI Spec
Goal: You have an OpenAPI specification (Swagger file) and want to automatically generate mock endpoints for frontend development.
Time: 3 minutes
What You’ll Learn
- Load an OpenAPI/Swagger spec into MockForge
- Auto-generate mock responses from schema definitions
- Enable dynamic data with template expansion
- Test your mocked API
Prerequisites
- MockForge installed (Installation Guide)
- An OpenAPI 3.0 or Swagger 2.0 spec file (JSON or YAML)
Step 1: Prepare Your OpenAPI Spec
Use your existing spec, or create a simple one for testing:
petstore-api.json:
{
"openapi": "3.0.0",
"info": {
"title": "Pet Store API",
"version": "1.0.0"
},
"paths": {
"/pets": {
"get": {
"summary": "List all pets",
"responses": {
"200": {
"description": "Successful response",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Pet"
}
}
}
}
}
}
},
"post": {
"summary": "Create a pet",
"requestBody": {
"required": true,
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/Pet"
}
}
}
},
"responses": {
"201": {
"description": "Pet created",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/Pet"
}
}
}
}
}
}
},
"/pets/{petId}": {
"get": {
"summary": "Get a pet by ID",
"parameters": [
{
"name": "petId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Successful response",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/Pet"
}
}
}
}
}
}
}
},
"components": {
"schemas": {
"Pet": {
"type": "object",
"required": ["id", "name"],
"properties": {
"id": {
"type": "string",
"example": "{{uuid}}"
},
"name": {
"type": "string",
"example": "Fluffy"
},
"species": {
"type": "string",
"example": "cat"
},
"age": {
"type": "integer",
"example": 3
}
}
}
}
}
}
Step 2: Start MockForge with Your Spec
mockforge serve --spec petstore-api.json --http-port 3000
What happened? MockForge:
- Parsed your OpenAPI spec
- Created mock endpoints for all defined paths
- Generated example responses from schemas
Step 3: Test the Auto-Generated Endpoints
# List all pets
curl http://localhost:3000/pets
# Create a pet
curl -X POST http://localhost:3000/pets \
-H "Content-Type: application/json" \
-d '{"name": "Rex", "species": "dog", "age": 5}'
# Get a specific pet
curl http://localhost:3000/pets/123
Step 4: Enable Dynamic Template Expansion
To get unique IDs and dynamic data on each request:
# Stop the server (Ctrl+C), then restart with templates enabled:
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
mockforge serve --spec petstore-api.json --http-port 3000
Now test again - the {{uuid}} in your schema examples will generate unique IDs!
Step 5: Add Request Validation
MockForge can validate requests against your OpenAPI schema:
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
MOCKFORGE_REQUEST_VALIDATION=enforce \
mockforge serve --spec petstore-api.json --http-port 3000
Try sending an invalid request:
# This will fail validation (missing required 'name' field)
curl -X POST http://localhost:3000/pets \
-H "Content-Type: application/json" \
-d '{"species": "dog"}'
Response:
{
"error": "request validation failed",
"details": [
{
"path": "body.name",
"code": "required",
"message": "Missing required field: name"
}
]
}
Step 6: Use a Configuration File (Optional)
For more control, create a config file:
petstore-config.yaml:
server:
http_port: 3000
spec: petstore-api.json
validation:
mode: enforce
response:
template_expand: true
admin:
enabled: true
port: 9080
Start with config:
mockforge serve --config petstore-config.yaml
Advanced: Override Specific Responses
You can override auto-generated responses for specific endpoints:
petstore-config.yaml:
http:
port: 3000
openapi_spec: petstore-api.json
response_template_expand: true
# Override the GET /pets endpoint
routes:
- path: /pets
method: GET
response:
status: 200
body: |
[
{
"id": "{{uuid}}",
"name": "{{faker.name}}",
"species": "cat",
"age": {{randInt 1 15}}
},
{
"id": "{{uuid}}",
"name": "{{faker.name}}",
"species": "dog",
"age": {{randInt 1 15}}
}
]
Step 7: Configure Request Validation
MockForge supports comprehensive OpenAPI request validation. Update your config to enable validation:
validation:
  mode: enforce              # Reject invalid requests
  aggregate_errors: true     # Combine multiple validation errors
  status_code: 422           # Use 422 for validation errors
  # Optional: skip validation for specific routes
  overrides:
    "GET /health": "off"     # Health checks don't need validation
Test validation by sending an invalid request:
# This will fail validation (missing required fields)
curl -X POST http://localhost:3000/pets \
-H "Content-Type: application/json" \
-d '{"species": "dog"}'
Response:
{
"error": "request validation failed",
"status": 422,
"details": [
{
"path": "body.name",
"code": "required",
"message": "Missing required field: name"
}
]
}
Validation Modes
- off: Disable validation completely
- warn: Log warnings but allow invalid requests
- enforce: Reject invalid requests with error responses
Common Use Cases
| Use Case | Configuration |
|---|---|
| Frontend development | Enable CORS, template expansion |
| API contract testing | Enable request validation (enforce mode) |
| Demo environments | Use faker functions for realistic data |
| Integration tests | Disable template expansion for deterministic responses |
Troubleshooting
Spec not loading?
- Verify the file path is correct
- Check that the spec is valid OpenAPI 3.0 or Swagger 2.0
- Use a validator like Swagger Editor
Validation too strict?
# Use 'warn' mode instead of 'enforce'
MOCKFORGE_REQUEST_VALIDATION=warn mockforge serve --spec petstore-api.json
Need custom responses?
- Add route overrides in your config file (see Advanced section above)
- Or use Custom Responses Guide
Complete Workflow Example
Here’s a complete workflow for generating mocks from an OpenAPI spec and using them in development:
1. Start with OpenAPI Spec
# Your API team provides this spec
cat petstore-api.json
2. Generate Mock Server
# Start MockForge with the spec
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
mockforge serve --spec petstore-api.json --http-port 3000 --admin
3. Test Generated Endpoints
# All endpoints from the spec are now available
curl http://localhost:3000/pets
curl http://localhost:3000/pets/123
curl -X POST http://localhost:3000/pets -d '{"name": "Fluffy", "species": "cat"}'
4. Monitor in Admin UI
Visit http://localhost:9080 to see:
- All requests in real-time
- Request/response bodies
- Response times
- Error rates
5. Use in Frontend Development
Point your frontend app to the mock server:
// In your React/Vue/Angular app
const API_URL = 'http://localhost:3000';
fetch(`${API_URL}/pets`)
.then(res => res.json())
.then(data => console.log('Pets:', data));
6. Iterate as API Evolves
When the API spec changes:
# 1. Update the OpenAPI spec file
vim petstore-api.json
# 2. Restart MockForge (it auto-reloads from spec)
# Or use watch mode if available
# 3. Regenerate client code (if using code generation)
mockforge client generate --spec petstore-api.json --framework react
Best Practices
Organization
- Keep specs in version control:
  git add petstore-api.json
  git commit -m "Add Pet Store API spec v1.2"
- Use environment-specific configs:
  # mockforge.dev.yaml
  http:
    port: 3000
    response_template_expand: true
    cors:
      enabled: true
- Document any custom overrides:
  # Custom route overrides
  http:
    routes:
      - path: /pets/{petId}
        method: GET
        response:
          # Override default response
          status: 200
          body: |
            {
              "id": "{{request.path.petId}}",
              "name": "Custom Pet",
              "species": "custom"
            }
Testing
- Use deterministic data for tests:
  # Disable template expansion for consistent test data
  response:
    template_expand: false
- Enable validation for contract testing:
  mockforge serve --spec api.json --validation enforce
- Record test scenarios:
- Use Admin UI to record request/response pairs
- Export as fixtures for automated tests
What’s Next?
- Dynamic Data Generation - Add faker functions and advanced templates
- React Workflow - Complete React + MockForge setup
- Vue Workflow - Complete Vue + MockForge setup
- Admin UI Walkthrough - Visualize and manage your mock server
- Add a Custom Plugin - Extend MockForge with custom functionality
- Team Collaboration - Share mocks with your team via Git
Pro Tip: Keep your OpenAPI spec in version control alongside your mock configuration. As the real API evolves, update the spec and your frontend automatically benefits from the changes.
React + MockForge Workflow
Goal: Build a React application that uses MockForge as a backend mock server for development and testing.
Time: 10-15 minutes
Overview
This tutorial shows you how to:
- Set up MockForge with an OpenAPI specification
- Generate TypeScript client code for React
- Build a React app that consumes the mock API
- Develop and test frontend features against mock data
Prerequisites
- MockForge installed (Installation Guide)
- Node.js 16+ and npm/pnpm installed
- Basic React and TypeScript knowledge
Step 1: Prepare Your OpenAPI Specification
Create or use an existing OpenAPI spec. For this tutorial, we’ll use a User Management API:
user-management-api.json:
{
"openapi": "3.0.3",
"info": {
"title": "User Management API",
"version": "1.0.0"
},
"paths": {
"/users": {
"get": {
"summary": "List all users",
"responses": {
"200": {
"description": "List of users",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/User"
}
}
}
}
}
}
},
"post": {
"summary": "Create a user",
"requestBody": {
"required": true,
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/UserInput"
}
}
}
},
"responses": {
"201": {
"description": "User created",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/User"
}
}
}
}
}
}
},
"/users/{id}": {
"get": {
"summary": "Get user by ID",
"parameters": [
{
"name": "id",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "User details",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/User"
}
}
}
}
}
}
}
},
"components": {
"schemas": {
"User": {
"type": "object",
"required": ["id", "name", "email"],
"properties": {
"id": {
"type": "string",
"example": "{{uuid}}"
},
"name": {
"type": "string",
"example": "John Doe"
},
"email": {
"type": "string",
"format": "email",
"example": "john@example.com"
},
"createdAt": {
"type": "string",
"format": "date-time",
"example": "{{now}}"
}
}
},
"UserInput": {
"type": "object",
"required": ["name", "email"],
"properties": {
"name": {
"type": "string"
},
"email": {
"type": "string",
"format": "email"
}
}
}
}
}
}
Step 2: Start MockForge Server
Start the mock server with your OpenAPI spec:
# Terminal 1: Start MockForge
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
mockforge serve --spec user-management-api.json --http-port 3000 --admin
You should see:
🚀 MockForge v1.0.0 starting...
📡 HTTP server listening on 0.0.0.0:3000
✅ Ready to serve requests at http://localhost:3000
Tip: Keep this terminal running. The --admin flag enables the admin UI at http://localhost:9080 for monitoring requests.
Step 3: Create React Application
Create a new React app (or use an existing one):
# Create React app with TypeScript
npx create-react-app my-app --template typescript
cd my-app
Step 4: Generate TypeScript Client (Optional)
MockForge can generate type-safe React hooks from your OpenAPI spec:
# Install MockForge CLI as dev dependency
npm install --save-dev mockforge-cli
# Add to package.json scripts
Update package.json:
{
"scripts": {
"generate-client": "mockforge client generate --spec ../user-management-api.json --framework react --output ./src/generated",
"start": "react-scripts start",
"build": "react-scripts build"
}
}
Generate the client:
npm run generate-client
This creates:
- src/generated/types.ts - TypeScript type definitions
- src/generated/hooks.ts - React hooks for API calls
Step 5: Configure React App
Option A: Using Generated Hooks
If you generated the client, use the hooks:
src/App.tsx:
import React, { useState } from 'react';
import { useGetUsers, useCreateUser } from './generated/hooks';
import type { UserInput } from './generated/types';
function App() {
const { data: users, loading, error, refetch } = useGetUsers();
const { execute: createUser, loading: creating } = useCreateUser();
const [formData, setFormData] = useState<UserInput>({
name: '',
email: ''
});
const handleSubmit = async (e: React.FormEvent) => {
e.preventDefault();
try {
await createUser(formData);
setFormData({ name: '', email: '' });
refetch(); // Refresh user list
} catch (error) {
console.error('Failed to create user:', error);
}
};
if (loading) return <div>Loading users...</div>;
if (error) return <div>Error: {error.message}</div>;
return (
<div className="App">
<h1>User Management</h1>
<form onSubmit={handleSubmit}>
<input
type="text"
placeholder="Name"
value={formData.name}
onChange={(e) => setFormData({ ...formData, name: e.target.value })}
/>
<input
type="email"
placeholder="Email"
value={formData.email}
onChange={(e) => setFormData({ ...formData, email: e.target.value })}
/>
<button type="submit" disabled={creating}>
{creating ? 'Creating...' : 'Create User'}
</button>
</form>
<ul>
{users?.map(user => (
<li key={user.id}>
<strong>{user.name}</strong> - {user.email}
</li>
))}
</ul>
</div>
);
}
export default App;
Option B: Manual Fetch Implementation
If you prefer manual implementation:
src/App.tsx:
import React, { useState, useEffect } from 'react';
interface User {
id: string;
name: string;
email: string;
createdAt: string;
}
function App() {
const [users, setUsers] = useState<User[]>([]);
const [loading, setLoading] = useState(true);
const [formData, setFormData] = useState({ name: '', email: '' });
useEffect(() => {
fetch('http://localhost:3000/users')
.then(res => res.json())
.then(data => {
setUsers(data);
setLoading(false);
})
.catch(err => {
console.error('Error fetching users:', err);
setLoading(false);
});
}, []);
const handleSubmit = async (e: React.FormEvent) => {
e.preventDefault();
try {
const res = await fetch('http://localhost:3000/users', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(formData)
});
const newUser = await res.json();
setUsers([...users, newUser]);
setFormData({ name: '', email: '' });
} catch (error) {
console.error('Failed to create user:', error);
}
};
if (loading) return <div>Loading...</div>;
return (
<div className="App">
<h1>User Management</h1>
<form onSubmit={handleSubmit}>
<input
type="text"
placeholder="Name"
value={formData.name}
onChange={(e) => setFormData({ ...formData, name: e.target.value })}
/>
<input
type="email"
placeholder="Email"
value={formData.email}
onChange={(e) => setFormData({ ...formData, email: e.target.value })}
/>
<button type="submit">Create User</button>
</form>
<ul>
{users.map(user => (
<li key={user.id}>
<strong>{user.name}</strong> - {user.email}
</li>
))}
</ul>
</div>
);
}
export default App;
Step 6: Configure API Base URL
Set the API URL as an environment variable:
.env.development:
REACT_APP_API_URL=http://localhost:3000
.env.production:
REACT_APP_API_URL=https://api.yourdomain.com
Update your fetch calls to use the environment variable:
const API_URL = process.env.REACT_APP_API_URL || 'http://localhost:3000';
fetch(`${API_URL}/users`)
Step 7: Start React App
# Terminal 2: Start React app
npm start
Your React app will be available at http://localhost:3001 (Create React App defaults to port 3000, which the mock server is already using, so it will offer the next free port).
Step 8: Test the Integration
- Create a user: Fill out the form and submit
- View users: See the list update with new users
- Monitor requests: Open http://localhost:9080 (Admin UI) to see all requests
Development Workflow
Typical Development Cycle
- Start MockForge with your API spec
- Develop React features against mock data
- View requests in Admin UI for debugging
- Update spec as API evolves
- Regenerate client when spec changes
Updating API Spec
When the OpenAPI spec changes:
# Regenerate TypeScript client
npm run generate-client
# Restart MockForge with updated spec
# (Ctrl+C in Terminal 1, then restart)
mockforge serve --spec user-management-api.json --http-port 3000 --admin
Testing
Run tests against the mock server:
# Start mock server in background
mockforge serve --spec user-management-api.json --http-port 3000 &
MOCKFORGE_PID=$!
# Run tests
npm test
# Stop mock server
kill $MOCKFORGE_PID
Common Issues
CORS Errors
If you see CORS errors, enable CORS in MockForge config:
# mockforge.yaml
http:
port: 3000
cors:
enabled: true
allowed_origins: ["http://localhost:3000", "http://localhost:3001"]
Template Variables Not Expanding
Make sure template expansion is enabled:
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true mockforge serve ...
Client Generation Fails
- Ensure MockForge CLI is in PATH
- Check OpenAPI spec is valid JSON/YAML
- Verify framework name is correct (react, not reactjs)
Advanced Usage
Custom Hooks
Wrap generated hooks with custom logic:
import { useEffect } from 'react';
import { useGetUsers as useGetUsersBase } from './generated/hooks';
export function useGetUsers() {
const result = useGetUsersBase();
// Add custom logic
useEffect(() => {
if (result.data) {
console.log('Users loaded:', result.data.length);
}
}, [result.data]);
return result;
}
Error Handling
Implement global error handling:
import { useGetUsers, useCreateUser } from './generated/hooks';
function App() {
const { data, error } = useGetUsers();
if (error) {
// Show user-friendly error message
return <ErrorDisplay error={error} />;
}
// ... rest of component
}
Request Interceptors
Add authentication or custom headers:
// In generated/hooks.ts, modify the base configuration
const apiConfig = {
baseUrl: 'http://localhost:3000',
headers: {
'Authorization': `Bearer ${getToken()}`,
}
};
Next Steps
- View Complete Example: See React Demo for a full implementation
- Learn Vue Workflow: Vue + MockForge Workflow
- Explore Admin UI: Admin UI Walkthrough
- Advanced Features: Dynamic Data Generation
Need help? Check the FAQ or Troubleshooting Guide.
Vue + MockForge Workflow
Goal: Build a Vue 3 application that uses MockForge as a backend mock server for development and testing.
Time: 10-15 minutes
Overview
This tutorial shows you how to:
- Set up MockForge with an OpenAPI specification
- Generate TypeScript client code for Vue 3
- Build a Vue app that consumes the mock API using Pinia
- Develop and test frontend features against mock data
Prerequisites
- MockForge installed (Installation Guide)
- Node.js 16+ and npm/pnpm installed
- Basic Vue 3 and TypeScript knowledge
Step 1: Prepare Your OpenAPI Specification
Create or use an existing OpenAPI spec. We’ll use the same User Management API from the React tutorial:
user-management-api.json:
{
"openapi": "3.0.3",
"info": {
"title": "User Management API",
"version": "1.0.0"
},
"paths": {
"/users": {
"get": {
"summary": "List all users",
"responses": {
"200": {
"description": "List of users",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/User"
}
}
}
}
}
}
},
"post": {
"summary": "Create a user",
"requestBody": {
"required": true,
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/UserInput"
}
}
}
},
"responses": {
"201": {
"description": "User created",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/User"
}
}
}
}
}
}
},
"/users/{id}": {
"get": {
"summary": "Get user by ID",
"parameters": [
{
"name": "id",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "User details",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/User"
}
}
}
}
}
}
}
},
"components": {
"schemas": {
"User": {
"type": "object",
"required": ["id", "name", "email"],
"properties": {
"id": {
"type": "string",
"example": "{{uuid}}"
},
"name": {
"type": "string",
"example": "John Doe"
},
"email": {
"type": "string",
"format": "email",
"example": "john@example.com"
},
"createdAt": {
"type": "string",
"format": "date-time",
"example": "{{now}}"
}
}
},
"UserInput": {
"type": "object",
"required": ["name", "email"],
"properties": {
"name": {
"type": "string"
},
"email": {
"type": "string",
"format": "email"
}
}
}
}
}
}
Step 2: Start MockForge Server
Start the mock server with your OpenAPI spec:
# Terminal 1: Start MockForge
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
mockforge serve --spec user-management-api.json --http-port 3000 --admin
You should see:
🚀 MockForge v1.0.0 starting...
📡 HTTP server listening on 0.0.0.0:3000
✅ Ready to serve requests at http://localhost:3000
Tip: Keep this terminal running. The --admin flag enables the admin UI at http://localhost:9080.
Step 3: Create Vue Application
Create a new Vue 3 app with TypeScript:
# Create Vue app with TypeScript
npm create vue@latest my-app
cd my-app
# Select TypeScript when prompted
# Install dependencies
npm install
Step 4: Install Pinia (State Management)
npm install pinia
Set up Pinia in src/main.ts:
import { createApp } from 'vue'
import { createPinia } from 'pinia'
import App from './App.vue'
const app = createApp(App)
app.use(createPinia())
app.mount('#app')
Step 5: Generate TypeScript Client (Optional)
MockForge can generate type-safe Vue composables from your OpenAPI spec:
# Install MockForge CLI as dev dependency
npm install --save-dev mockforge-cli
# Add to package.json scripts
Update package.json:
{
"scripts": {
"generate-client": "mockforge client generate --spec ../user-management-api.json --framework vue --output ./src/generated",
"dev": "vite",
"build": "vue-tsc && vite build"
}
}
Generate the client:
npm run generate-client
This creates:
- src/generated/types.ts - TypeScript type definitions
- src/generated/composables.ts - Vue composables for API calls
- src/generated/store.ts - Pinia store for state management
Step 6: Configure Vue App
Option A: Using Generated Composables
If you generated the client, use the composables:
src/App.vue:
<template>
<div class="app">
<h1>User Management</h1>
<form @submit.prevent="handleSubmit">
<input
v-model="formData.name"
type="text"
placeholder="Name"
required
/>
<input
v-model="formData.email"
type="email"
placeholder="Email"
required
/>
<button type="submit" :disabled="creating">
{{ creating ? 'Creating...' : 'Create User' }}
</button>
</form>
<div v-if="loading">Loading users...</div>
<div v-else-if="error">Error: {{ error.message }}</div>
<ul v-else>
<li v-for="user in users" :key="user.id">
<strong>{{ user.name }}</strong> - {{ user.email }}
</li>
</ul>
</div>
</template>
<script setup lang="ts">
import { ref } from 'vue';
import { useGetUsers, useCreateUser } from './generated/composables';
import type { UserInput } from './generated/types';
const { data: users, loading, error, refetch } = useGetUsers();
const { execute: createUser, loading: creating } = useCreateUser();
const formData = ref<UserInput>({
name: '',
email: ''
});
const handleSubmit = async () => {
try {
await createUser(formData.value);
formData.value = { name: '', email: '' };
refetch(); // Refresh user list
} catch (error) {
console.error('Failed to create user:', error);
}
};
</script>
<style scoped>
.app {
max-width: 800px;
margin: 0 auto;
padding: 20px;
}
form {
margin-bottom: 20px;
}
input {
margin-right: 10px;
padding: 8px;
}
button {
padding: 8px 16px;
cursor: pointer;
}
ul {
list-style: none;
padding: 0;
}
li {
padding: 10px;
margin: 5px 0;
background: #f5f5f5;
border-radius: 4px;
}
</style>
Option B: Manual Implementation with Pinia Store
Create a Pinia store for user management:
src/stores/userStore.ts:
import { defineStore } from 'pinia';
import { ref, computed } from 'vue';
interface User {
id: string;
name: string;
email: string;
createdAt: string;
}
interface UserInput {
name: string;
email: string;
}
const API_URL = import.meta.env.VITE_API_URL || 'http://localhost:3000';
export const useUserStore = defineStore('users', () => {
const users = ref<User[]>([]);
const loading = ref(false);
const error = ref<Error | null>(null);
const userCount = computed(() => users.value.length);
async function fetchUsers() {
loading.value = true;
error.value = null;
try {
const response = await fetch(`${API_URL}/users`);
if (!response.ok) throw new Error('Failed to fetch users');
users.value = await response.json();
} catch (e) {
error.value = e as Error;
console.error('Error fetching users:', e);
} finally {
loading.value = false;
}
}
async function createUser(input: UserInput) {
loading.value = true;
error.value = null;
try {
const response = await fetch(`${API_URL}/users`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(input)
});
if (!response.ok) throw new Error('Failed to create user');
const newUser = await response.json();
users.value.push(newUser);
} catch (e) {
error.value = e as Error;
console.error('Error creating user:', e);
throw e;
} finally {
loading.value = false;
}
}
return {
users,
loading,
error,
userCount,
fetchUsers,
createUser
};
});
Use the store in your component:
src/App.vue:
<template>
<div class="app">
<h1>User Management</h1>
<form @submit.prevent="handleSubmit">
<input
v-model="formData.name"
type="text"
placeholder="Name"
required
/>
<input
v-model="formData.email"
type="email"
placeholder="Email"
required
/>
<button type="submit" :disabled="userStore.loading">
{{ userStore.loading ? 'Creating...' : 'Create User' }}
</button>
</form>
<div v-if="userStore.loading && userStore.users.length === 0">
Loading users...
</div>
<div v-else-if="userStore.error">
Error: {{ userStore.error.message }}
</div>
<ul v-else>
<li v-for="user in userStore.users" :key="user.id">
<strong>{{ user.name }}</strong> - {{ user.email }}
</li>
</ul>
</div>
</template>
<script setup lang="ts">
import { ref, onMounted } from 'vue';
import { useUserStore } from './stores/userStore';
const userStore = useUserStore();
const formData = ref({ name: '', email: '' });
onMounted(() => {
userStore.fetchUsers();
});
const handleSubmit = async () => {
try {
await userStore.createUser(formData.value);
formData.value = { name: '', email: '' };
} catch (error) {
// Error already handled in store
}
};
</script>
Step 7: Configure API Base URL
Set the API URL as an environment variable:
.env.development:
VITE_API_URL=http://localhost:3000
.env.production:
VITE_API_URL=https://api.yourdomain.com
Step 8: Start Vue App
# Terminal 2: Start Vue app
npm run dev
Your Vue app will be available at http://localhost:5173 (or next available port).
Step 9: Test the Integration
- Create a user: Fill out the form and submit
- View users: See the list update with new users
- Monitor requests: Open http://localhost:9080 (Admin UI) to see all requests
Development Workflow
Typical Development Cycle
- Start MockForge with your API spec
- Develop Vue features against mock data
- View requests in Admin UI for debugging
- Update spec as API evolves
- Regenerate client when spec changes
Updating API Spec
When the OpenAPI spec changes:
# Regenerate TypeScript client
npm run generate-client
# Restart MockForge with updated spec
# (Ctrl+C in Terminal 1, then restart)
mockforge serve --spec user-management-api.json --http-port 3000 --admin
Testing with Vitest
Create tests against the mock server:
src/components/__tests__/UserForm.spec.ts:
import { describe, it, expect, beforeEach } from 'vitest';
import { mount } from '@vue/test-utils';
import { setActivePinia, createPinia } from 'pinia';
import UserForm from '../UserForm.vue';
describe('UserForm', () => {
beforeEach(() => {
setActivePinia(createPinia());
});
it('creates a user', async () => {
const wrapper = mount(UserForm);
// Your test logic here
});
});
Common Issues
CORS Errors
Enable CORS in MockForge config:
# mockforge.yaml
http:
port: 3000
cors:
enabled: true
allowed_origins: ["http://localhost:5173", "http://localhost:3000"]
Template Variables Not Expanding
Make sure template expansion is enabled:
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true mockforge serve ...
Environment Variables Not Loading
Vite requires the VITE_ prefix for environment variables. Ensure your .env file uses:
VITE_API_URL=http://localhost:3000
Advanced Usage
Reactive Data with Computed Properties
<script setup lang="ts">
import { computed } from 'vue';
import { useUserStore } from './stores/userStore';
const userStore = useUserStore();
const activeUsers = computed(() =>
userStore.users.filter(u => !u.deleted)
);
</script>
Error Handling with Vue Toast
import { useToast } from 'vue-toastification';
const toast = useToast();
async function createUser(input: UserInput) {
try {
await userStore.createUser(input);
toast.success('User created successfully!');
} catch (error) {
toast.error('Failed to create user');
}
}
Next Steps
- View Complete Example: See Vue Demo for a full implementation
- Learn React Workflow: React + MockForge Workflow
- Explore Admin UI: Admin UI Walkthrough
- Advanced Features: Dynamic Data Generation
Need help? Check the FAQ or Troubleshooting Guide.
Admin UI Walkthrough
Goal: Use MockForge’s Admin UI to visually manage your mock server, view live logs, and configure settings without editing files.
Time: 5 minutes
What You’ll Learn
- Access the Admin UI
- View real-time request logs
- Monitor server metrics
- Manage fixtures with drag-and-drop
- Configure latency and fault injection
- Search and filter logs
Prerequisites
- MockForge installed and running
- A basic understanding of MockForge concepts
Step 1: Start MockForge with Admin UI
You can run the Admin UI in two modes:
Standalone Mode (Separate Port)
mockforge serve --admin --admin-port 9080 --http-port 3000
Access at: http://localhost:9080
Embedded Mode (Under HTTP Server)
mockforge serve --admin-embed --admin-mount-path /admin --http-port 3000
Access at: http://localhost:3000/admin
For this tutorial, we’ll use standalone mode for simplicity.
Step 2: Access the Dashboard
Open your browser and navigate to http://localhost:9080.
You’ll see the Dashboard with:
Server Status Section
- HTTP Server: Running on port 3000
- WebSocket Server: Status and port
- gRPC Server: Status and port
- Uptime: How long the server has been running
Quick Stats
- Total Requests: Request counter
- Active Connections: Current open connections
- Average Response Time: Performance metrics
- Error Rate: Failed requests percentage
Recent Activity
- Last 10 requests with timestamps, methods, paths, and status codes
Step 3: View Live Logs
Click on the “Logs” tab in the navigation.
Features:
- Real-time updates: Logs stream via Server-Sent Events (SSE)
- Color-coded levels: INFO (blue), WARN (yellow), ERROR (red)
- Request details: Method, path, status code, response time
- Search: Filter logs by keyword
- Auto-scroll: Automatically scroll to newest logs
Try It:
- Keep the logs tab open
- In another terminal, send a request:
curl http://localhost:3000/users - Watch the log appear instantly in the UI!
Log Search
Use the search box to filter:
- Search by path:
/users - Search by method:
POST - Search by status:
404 - Search by error message:
validation failed
Step 4: Explore Metrics
Click on the “Metrics” tab.
Available Metrics:
- Request Rate: Requests per second over time
- Response Times: P50, P95, P99 latencies
- Status Code Distribution: 2xx, 4xx, 5xx breakdown
- Endpoint Performance: Slowest endpoints
- Error Trends: Error rates over time
Use Cases:
- Performance testing: Monitor response times under load
- Debugging: Identify which endpoints are failing
- Capacity planning: See throughput limits
Step 5: Manage Fixtures
Click on the “Fixtures” tab.
What are Fixtures?
Fixtures are saved mock scenarios - collections of requests and expected responses for testing.
Tree View Interface:
📁 Fixtures
📁 User Management
✅ Create User - Happy Path
✅ Create User - Validation Error
✅ Get User - Not Found
📁 Order Processing
✅ Create Order
✅ Update Order Status
Actions:
- Drag and Drop: Reorganize fixtures into folders
- Run Fixture: Test a specific scenario
- Run Folder: Execute all fixtures in a folder
- Export: Download fixtures as JSON
- Import: Upload fixture collections
Try It:
- Click “New Fixture”
- Name it: “Test User Creation”
- Configure:
- Method: POST
- Path:
/users - Expected Status: 201
- Request Body:
{"name": "Test User", "email": "test@example.com"}
- Click “Save”
- Click “Run” to test it
Step 6: Configure Latency Simulation
Click on the “Configuration” tab, then “Latency”.
Latency Profiles:
MockForge can simulate various network conditions:
| Profile | Description | Latency |
|---|---|---|
| None | No artificial delay | 0ms |
| Fast | Local network | 10-30ms |
| Normal | Good internet | 50-150ms |
| Slow | Poor connection | 300-800ms |
| Very Slow | Bad mobile | 1000-3000ms |
Configure:
- Select “Slow” profile
- Click “Apply”
- Test an endpoint:
time curl http://localhost:3000/users - Notice the delay!
Per-Endpoint Latency:
You can also configure latency for specific endpoints:
# In your config file
http:
latency:
enabled: true
default_profile: normal
endpoint_overrides:
"POST /orders": slow # Simulate slow order processing
"GET /products": fast # Fast product catalog
Step 7: Enable Fault Injection
Still in the “Configuration” tab, click “Fault Injection”.
Fault Types:
- Random Failures: Randomly return 500 errors
- Timeouts: Simulate request timeouts
- Malformed Responses: Return invalid JSON
- Connection Drops: Close connections unexpectedly
Configure:
- Enable Fault Injection: Toggle ON
- Error Rate: Set to 20% (1 in 5 requests fails)
- Fault Type: Select “Random Failures”
- Click “Apply”
Test It:
# Run this multiple times - some will fail!
for i in {1..10}; do
curl http://localhost:3000/users
echo ""
done
You’ll see some requests return 500 errors, simulating an unreliable backend.
Step 8: Search Across Services
Click on the “Search” tab.
Full-Text Search:
Search across:
- Service names
- Endpoint paths
- Request/response bodies
- Log messages
- Configuration values
Try It:
- Search for
users- finds all user-related endpoints - Search for
POST- finds all POST endpoints - Search for
validation- finds validation errors in logs
Step 9: Proxy Configuration (Advanced)
Click “Configuration” → “Proxy”.
Hybrid Mode:
MockForge can act as a proxy, forwarding unknown requests to a real backend:
- Enable Proxy: Toggle ON
- Target URL:
https://api.example.com - Fallback Mode: “Forward unknown requests”
- Click “Apply”
Now:
- Mocked endpoints return mock data
- Unknown endpoints are forwarded to the real API
- Perfect for gradual migration!
Common Workflows
Workflow 1: Debug a Failing Test
- Open Logs tab
- Enable “Error Only” filter
- Run your failing test
- Find the error in real-time
- Copy the request details
- Fix your test or mock configuration
Workflow 2: Create Test Fixtures
- Run your application manually (e.g., click through the UI)
- Admin UI captures all requests in Logs
- Click “Save as Fixture” on interesting requests
- Organize fixtures into folders
- Run fixtures as smoke tests before deployment
Workflow 3: Performance Testing
- Clear metrics (Metrics → “Reset”)
- Run load test against MockForge
- Monitor Metrics tab in real-time
- Identify performance bottlenecks
- Adjust mock configuration for better performance
Workflow 4: Demo Preparation
- Fixtures: Create realistic demo scenarios
- Latency: Set to “Fast” for smooth demos
- Fault Injection: Disable to prevent unexpected errors
- Logs: Keep open to show real-time activity
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
Ctrl+K | Open search |
Ctrl+L | Jump to logs |
Ctrl+M | Jump to metrics |
Ctrl+R | Refresh dashboard |
Esc | Close modals |
Troubleshooting
Admin UI not loading?
- Check that the admin port (9080) isn’t blocked
- Verify MockForge is running with
--adminflag - Check browser console for JavaScript errors
Logs not updating?
- Ensure Server-Sent Events (SSE) aren’t blocked by your browser or proxy
- Try refreshing the page
- Check that
/__mockforge/logsendpoint is accessible
Fixtures not saving?
- Verify you have write permissions to the MockForge data directory
- Check disk space availability
- Review logs for error messages
What’s Next?
- Custom Response Configuration - Build advanced mock responses
- Security Features - Add authentication to Admin UI (v1.1+)
- Workspace Sync - Share fixtures with your team
- Plugin System - Extend Admin UI functionality
Pro Tip: Use browser bookmarks for quick access:
http://localhost:9080/- Dashboardhttp://localhost:9080/?tab=logs- Jump directly to logshttp://localhost:9080/?tab=metrics- Jump directly to metrics
Add a Custom Plugin
Goal: Extend MockForge with a plugin to add custom authentication or data generation functionality.
Time: 10 minutes
What You’ll Learn
- Install a plugin from a remote source
- Install a plugin from a local file
- Use a plugin in your mock configuration
- Create a simple custom plugin
- Test and debug plugins
Prerequisites
- MockForge installed (Installation Guide)
- Basic understanding of MockForge configuration
- (Optional) Rust toolchain for building custom plugins
Step 1: Install a Pre-Built Plugin
MockForge comes with example plugins you can install immediately.
Install the JWT Authentication Plugin
# Install from the examples directory (if building from source)
mockforge plugin install examples/plugins/auth-jwt
# Or install from a URL (when published)
mockforge plugin install https://github.com/SaaSy-Solutions/mockforge/releases/download/v1.0.0/auth-jwt-plugin.wasm
Verify Installation
mockforge plugin list
Output:
Installed Plugins:
- auth-jwt (v1.0.0)
Description: JWT authentication and token generation
Author: MockForge Team
Step 2: Use the Plugin in Your Configuration
Create a config file that uses the JWT plugin:
api-with-auth.yaml:
http:
port: 3000
response_template_expand: true
# Load the plugin
plugins:
- name: auth-jwt
config:
secret: "my-super-secret-key"
algorithm: HS256
expiry: 3600 # 1 hour
routes:
# Login endpoint - generates JWT token
- path: /auth/login
method: POST
response:
status: 200
headers:
Content-Type: application/json
body: |
{
"token": "{{plugin:auth-jwt:generate_token({{request.body.username}})}}",
"expiresIn": 3600
}
# Protected endpoint - validates JWT
- path: /users/me
method: GET
middleware:
- plugin: auth-jwt
action: validate_token
response:
status: 200
body: |
{
"id": "{{uuid}}",
"username": "{{plugin:auth-jwt:get_claim(username)}}",
"email": "{{plugin:auth-jwt:get_claim(email)}}"
}
Step 3: Test the Plugin
Start the server:
mockforge serve --config api-with-auth.yaml
Login and Get Token
curl -X POST http://localhost:3000/auth/login \
-H "Content-Type: application/json" \
-d '{"username": "alice", "password": "secret123"}'
Response:
{
"token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6ImFsaWNlIiwiZXhwIjoxNzA5NTY3ODkwfQ.signature",
"expiresIn": 3600
}
Use Token to Access Protected Endpoint
# Save the token
TOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
# Access protected endpoint
curl http://localhost:3000/users/me \
-H "Authorization: Bearer $TOKEN"
Response:
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"username": "alice",
"email": "alice@example.com"
}
Try Without Token (Should Fail)
curl http://localhost:3000/users/me
Response:
{
"error": "Unauthorized",
"message": "Missing or invalid JWT token"
}
Step 4: Install the Template Crypto Plugin
Let’s install another plugin for encryption in templates:
mockforge plugin install examples/plugins/template-crypto
crypto-config.yaml:
http:
port: 3000
response_template_expand: true
plugins:
- name: template-crypto
config:
default_algorithm: aes-256-gcm
routes:
- path: /encrypt
method: POST
response:
status: 200
body: |
{
"encrypted": "{{plugin:template-crypto:encrypt({{request.body.message}})}}",
"algorithm": "aes-256-gcm"
}
- path: /decrypt
method: POST
response:
status: 200
body: |
{
"decrypted": "{{plugin:template-crypto:decrypt({{request.body.encrypted}})}}"
}
Test it:
# Encrypt a message
curl -X POST http://localhost:3000/encrypt \
-H "Content-Type: application/json" \
-d '{"message": "secret data"}'
# Decrypt the result
curl -X POST http://localhost:3000/decrypt \
-H "Content-Type: application/json" \
-d '{"encrypted": "base64-encrypted-string"}'
Step 5: Create a Simple Custom Plugin
Let’s create a custom plugin that generates fake company data.
Project Structure
mkdir my-company-plugin
cd my-company-plugin
cargo init --lib
Cargo.toml:
[package]
name = "company-data-plugin"
version = "0.1.0"
edition = "2021"
[lib]
crate-type = ["cdylib"]
[dependencies]
mockforge-plugin-api = "1.0"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
fake = { version = "2.9", features = ["derive"] }
src/lib.rs:
#![allow(unused)] fn main() { use mockforge_plugin_api::{Plugin, PluginContext, PluginResult}; use fake::{Fake, faker::company::en::*}; use serde_json::json; pub struct CompanyDataPlugin; impl Plugin for CompanyDataPlugin { fn name(&self) -> &str { "company-data" } fn version(&self) -> &str { "0.1.0" } fn execute(&self, ctx: &PluginContext) -> PluginResult { match ctx.action.as_str() { "generate_company" => { let company_name: String = CompanyName().fake(); let industry: String = Industry().fake(); let buzzword: String = Buzzword().fake(); Ok(json!({ "name": company_name, "industry": industry, "tagline": buzzword, "founded": (1950..2024).fake::<i32>(), "employees": (10..10000).fake::<i32>() })) } "generate_tagline" => { Ok(json!({ "tagline": Buzzword().fake::<String>() })) } _ => Err(format!("Unknown action: {}", ctx.action)) } } } mockforge_plugin_api::export_plugin!(CompanyDataPlugin); }
Build the Plugin
cargo build --release --target wasm32-unknown-unknown
The compiled plugin will be at:
target/wasm32-unknown-unknown/release/company_data_plugin.wasm
Step 6: Install and Use Your Custom Plugin
# Install from local file
mockforge plugin install ./target/wasm32-unknown-unknown/release/company_data_plugin.wasm
company-api.yaml:
http:
port: 3000
response_template_expand: true
plugins:
- name: company-data
routes:
- path: /companies
method: GET
response:
status: 200
body: |
[
{{plugin:company-data:generate_company()}},
{{plugin:company-data:generate_company()}},
{{plugin:company-data:generate_company()}}
]
- path: /tagline
method: GET
response:
status: 200
body: "{{plugin:company-data:generate_tagline()}}"
Test it:
mockforge serve --config company-api.yaml
# Generate fake companies
curl http://localhost:3000/companies
Response:
[
{
"name": "Acme Corporation",
"industry": "Technology",
"tagline": "Innovative solutions for tomorrow",
"founded": 1985,
"employees": 2500
},
{
"name": "GlobalTech Industries",
"industry": "Manufacturing",
"tagline": "Building the future",
"founded": 2001,
"employees": 850
},
{
"name": "DataSync Solutions",
"industry": "Software",
"tagline": "Connecting businesses worldwide",
"founded": 2015,
"employees": 120
}
]
Step 7: Plugin Management Commands
List Installed Plugins
mockforge plugin list
Get Plugin Info
mockforge plugin info auth-jwt
Update a Plugin
mockforge plugin update auth-jwt
Uninstall a Plugin
mockforge plugin uninstall company-data
Install with Version Pinning
# From Git with version tag
mockforge plugin install https://github.com/user/plugin#v1.2.0
# From URL with checksum verification
mockforge plugin install https://example.com/plugin.wasm --checksum sha256:abc123...
Common Plugin Use Cases
| Use Case | Plugin Type | Example |
|---|---|---|
| Authentication | Middleware | JWT, OAuth2, API keys |
| Data Generation | Template function | Faker, custom generators |
| Data Transformation | Response modifier | Format converters, encryption |
| External Integration | Data source | Database, CSV files, APIs |
| Custom Validation | Request validator | Business rule enforcement |
| Rate Limiting | Middleware | Token bucket, sliding window |
Plugin Security
MockForge plugins run in a WebAssembly sandbox with:
- Memory isolation: Plugins can’t access host memory
- Resource limits: CPU and memory usage capped
- No network access: Plugins can’t make external requests (unless explicitly allowed)
- File system restrictions: Limited file access
Configure Plugin Permissions
config.yaml:
plugins:
security:
max_memory_mb: 50
max_execution_ms: 1000
allow_network: false
allow_file_access: false
plugins:
- name: auth-jwt
permissions:
network: false
file_read: false
- name: db-connector
permissions:
network: true # Needs network for DB connection
file_read: true
Debugging Plugins
Enable Plugin Debug Logs
MOCKFORGE_LOG_LEVEL=debug mockforge serve --config api-with-auth.yaml
Test Plugin in Isolation
mockforge plugin test auth-jwt --action generate_token --input '{"username": "test"}'
Plugin Benchmarking
mockforge plugin bench auth-jwt --iterations 1000
Troubleshooting
Plugin not found after installation?
# Check plugin directory
mockforge plugin list --verbose
# Reinstall
mockforge plugin install ./path/to/plugin.wasm --force
Plugin execution fails?
- Check plugin logs with
MOCKFORGE_LOG_LEVEL=debug - Verify plugin configuration syntax
- Test plugin in isolation with
mockforge plugin test
Plugin build fails?
# Ensure wasm target is installed
rustup target add wasm32-unknown-unknown
# Clean and rebuild
cargo clean
cargo build --release --target wasm32-unknown-unknown
What’s Next?
- Plugin API Reference - Complete plugin API documentation
- Plugin Development Guide - Advanced plugin development
- Security Model - Plugin security architecture
- Example Plugins - More plugin examples
Pro Tip: Plugins can be version-controlled and shared with your team. Commit the .wasm file or the source code to Git, and everyone can use the same custom functionality!
Getting Started with MQTT
MockForge includes a fully functional MQTT (Message Queuing Telemetry Transport) broker for testing IoT and pub/sub workflows in your applications. This guide will help you get started quickly.
Quick Start
1. Enable MQTT in Configuration
Create a configuration file or modify your existing config.yaml:
mqtt:
enabled: true
port: 1883
host: "0.0.0.0"
max_connections: 1000
max_packet_size: 1048576 # 1MB
keep_alive_secs: 60
2. Start the Server
mockforge serve --config config.yaml
You should see:
📡 MQTT broker listening on localhost:1883
3. Connect and Publish a Test Message
Using the mosquitto command-line tools:
# Install mosquitto clients (Ubuntu/Debian)
sudo apt install mosquitto-clients
# Or on macOS
brew install mosquitto
# Publish a test message
mosquitto_pub -h localhost -p 1883 -t "sensors/temperature" -m "25.5" -q 1
# Subscribe to receive messages
mosquitto_sub -h localhost -p 1883 -t "sensors/temperature" -q 1
4. Verify Message Handling
Messages are processed according to your fixtures configuration. Check server logs for routing information and fixture matching.
Using Command-Line Tools
mosquitto_pub
Publish messages to topics:
# Simple publish
mosquitto_pub -h localhost -p 1883 -t "sensors/temp/room1" -m "23.5"
# With QoS 1 (at least once delivery)
mosquitto_pub -h localhost -p 1883 -t "devices/status" -m "online" -q 1
# With retained message
mosquitto_pub -h localhost -p 1883 -t "config/max_temp" -m "30.0" -r
# JSON payload
mosquitto_pub -h localhost -p 1883 -t "sensors/data" -m '{"temperature": 22.1, "humidity": 65}'
mosquitto_sub
Subscribe to topics:
# Subscribe to specific topic
mosquitto_sub -h localhost -p 1883 -t "sensors/temp/room1"
# Subscribe with wildcards
mosquitto_sub -h localhost -p 1883 -t "sensors/temp/+"
mosquitto_sub -h localhost -p 1883 -t "devices/#"
# Subscribe to all topics (for debugging)
mosquitto_sub -h localhost -p 1883 -t "#"
MQTT CLI Commands
MockForge provides MQTT-specific CLI commands:
# List active topics
mockforge mqtt topics
# List connected clients
mockforge mqtt clients
# Publish a message
mockforge mqtt publish sensors/temperature 25.5 --qos 1
# Subscribe to topics
mockforge mqtt subscribe "sensors/#" --qos 0
Supported MQTT Features
MockForge MQTT broker implements MQTT 3.1.1 and 5.0 specifications with the following features:
Quality of Service (QoS) Levels
- QoS 0 - At most once delivery (fire and forget)
- QoS 1 - At least once delivery (acknowledged delivery)
- QoS 2 - Exactly once delivery (assured delivery)
Topic Management
- Single-level wildcards (
+) - Match one topic level - Multi-level wildcards (
#) - Match multiple topic levels - Retained messages - Store last message per topic
- Clean sessions - Persistent vs ephemeral subscriptions
Connection Management
- Keep-alive handling - Automatic client timeout
- Will messages - Last-will-and-testament
- Session persistence - Restore subscriptions on reconnect
Basic Configuration Options
mqtt:
enabled: true # Enable/disable MQTT broker
port: 1883 # Port (1883 for MQTT, 8883 for MQTT over TLS)
host: "0.0.0.0" # Bind address
max_connections: 1000 # Maximum concurrent connections
max_packet_size: 1048576 # Maximum packet size (1MB)
keep_alive_secs: 60 # Default keep-alive timeout
# Advanced options
max_inflight_messages: 20 # Maximum QoS 1/2 messages in flight
max_queued_messages: 100 # Maximum queued messages per client
Environment Variables
Override configuration with environment variables:
export MOCKFORGE_MQTT_ENABLED=true
export MOCKFORGE_MQTT_PORT=1883
export MOCKFORGE_MQTT_HOST=0.0.0.0
export MOCKFORGE_MQTT_MAX_CONNECTIONS=1000
mockforge serve
Next Steps
- Configuration Reference - Detailed configuration options
- Fixtures - Create MQTT scenarios and mock responses
- Examples - Real-world usage examples
Troubleshooting
Connection Refused
Problem: Cannot connect to MQTT broker
Solutions:
- Verify MQTT is enabled:
mqtt.enabled: true - Check the port isn’t in use:
lsof -i :1883 - Ensure server is running: Look for “MQTT broker listening” in logs
Messages Not Received
Problem: Messages published but not received by subscribers
Solutions:
- Check topic matching patterns
- Verify QoS levels are compatible
- Check for retained message conflicts
- Review server logs for routing information
Wildcard Issues
Problem: Wildcard subscriptions not working as expected
Solutions:
+matches exactly one level:sensors/+/temperature#matches multiple levels:devices/#- Wildcards only work in subscriptions, not publications
Common Use Cases
IoT Device Simulation
# Simulate multiple IoT sensors
import paho.mqtt.client as mqtt
import time
import random
def simulate_sensor(sensor_id, topic_prefix):
client = mqtt.Client(f"sensor_{sensor_id}")
client.connect("localhost", 1883, 60)
while True:
temperature = 20 + random.uniform(-5, 5)
payload = f'{{"sensor_id": "{sensor_id}", "temperature": {temperature:.1f}}}'
client.publish(f"{topic_prefix}/temperature", payload, qos=1)
time.sleep(5)
# Start multiple sensors
for i in range(3):
simulate_sensor(f"sensor_{i}", f"sensors/room{i}")
Testing MQTT Applications
// In your test suite (Node.js with mqtt.js)
const mqtt = require('mqtt');
describe('Temperature Monitoring', () => {
let client;
beforeAll(() => {
client = mqtt.connect('mqtt://localhost:1883');
});
afterAll(() => {
client.end();
});
test('receives temperature updates', (done) => {
client.subscribe('sensors/temperature/+', { qos: 1 });
client.on('message', (topic, message) => {
const data = JSON.parse(message.toString());
expect(data).toHaveProperty('sensor_id');
expect(data).toHaveProperty('temperature');
expect(data.temperature).toBeGreaterThan(-50);
expect(data.temperature).toBeLessThan(100);
done();
});
// Trigger temperature reading in your app
// Your app should publish to sensors/temperature/+
});
});
CI/CD Integration
# .github/workflows/test.yml
- name: Start MockForge MQTT
run: |
mockforge serve --mqtt --mqtt-port 1883 &
sleep 2
- name: Run MQTT tests
env:
MQTT_HOST: localhost
MQTT_PORT: 1883
run: npm test
What’s Next?
Now that you have a basic MQTT broker running, explore:
- Fixtures - Define MQTT message patterns and mock responses
- Configuration - Fine-tune broker behavior
- Examples - See real-world implementations
MQTT Configuration Reference
This document provides a comprehensive reference for configuring the MockForge MQTT broker. The MQTT implementation supports all standard MQTT 3.1.1 and 5.0 features with additional MockForge-specific configuration options.
Basic Configuration
mqtt:
# Enable/disable MQTT broker
enabled: true
# Server binding
port: 1883
host: "0.0.0.0"
# Connection limits
max_connections: 1000
# Message size limits
max_packet_size: 1048576 # 1MB
# Connection timeouts
keep_alive_secs: 60
Advanced Configuration
Connection Management
mqtt:
# Maximum concurrent connections
max_connections: 1000
# Maximum packet size (bytes)
max_packet_size: 1048576 # 1MB
# Default keep-alive timeout (seconds)
keep_alive_secs: 60
# Maximum QoS 1/2 messages in flight per client
max_inflight_messages: 20
# Maximum queued messages per client
max_queued_messages: 100
Quality of Service (QoS)
MockForge supports all MQTT QoS levels:
- QoS 0: At most once delivery (fire and forget)
- QoS 1: At least once delivery (acknowledged)
- QoS 2: Exactly once delivery (assured)
QoS levels are configured per fixture and can be overridden by client requests.
Retained Messages
mqtt:
# Enable retained message support
retained_messages_enabled: true
# Maximum retained messages per topic
max_retained_per_topic: 1
# Maximum total retained messages
max_total_retained: 10000
Session Management
mqtt:
# Enable persistent sessions
persistent_sessions: true
# Session expiry (seconds)
session_expiry_secs: 3600
# Clean session behavior
force_clean_session: false
TLS/SSL Configuration
For secure MQTT (MQTT over TLS):
mqtt:
# Use TLS
tls_enabled: true
tls_port: 8883
# Certificate paths
tls_cert_path: "/path/to/server.crt"
tls_key_path: "/path/to/server.key"
# Client certificate verification
tls_require_client_cert: false
tls_ca_path: "/path/to/ca.crt"
Authentication and Authorization
Basic Authentication
mqtt:
# Enable authentication
auth_enabled: true
# Authentication method
auth_method: "basic" # basic, jwt, oauth2
# User database
users:
- username: "user1"
password: "password1"
permissions:
- "publish:sensors/#"
- "subscribe:actuators/#"
- username: "device1"
password: "devicepass"
permissions:
- "publish:devices/device1/#"
- "subscribe:commands/device1/#"
JWT Authentication
mqtt:
auth_method: "jwt"
jwt:
# JWT issuer
issuer: "mockforge"
# JWT audience
audience: "mqtt-clients"
# Secret key or public key path
secret: "your-jwt-secret"
# OR
public_key_path: "/path/to/public.pem"
# Token validation
validate_exp: true
validate_iat: true
validate_nbf: true
# Custom claims mapping
claims_mapping:
permissions: "perms"
client_id: "client"
Topic Authorization
mqtt:
# Topic access control
topic_acl:
# Allow anonymous access to these topics
anonymous_topics:
- "public/#"
# Deny access to these topics
denied_topics:
- "admin/#"
- "system/#"
# Require authentication for these topics
authenticated_topics:
- "private/#"
- "secure/#"
Logging and Monitoring
mqtt:
# Log level
log_level: "info"
# Enable connection logging
log_connections: true
# Enable message logging (WARNING: can be verbose)
log_messages: false
# Metrics collection
metrics_enabled: true
# Prometheus metrics
metrics_path: "/metrics"
metrics_port: 9090
Performance Tuning
mqtt:
# Thread pool size
worker_threads: 4
# Connection backlog
connection_backlog: 1024
# Socket options
socket:
# TCP_NODELAY
no_delay: true
# SO_KEEPALIVE
keep_alive: true
# Buffer sizes
send_buffer_size: 65536
recv_buffer_size: 65536
Environment Variables
Override configuration with environment variables:
# Basic settings
export MOCKFORGE_MQTT_ENABLED=true
export MOCKFORGE_MQTT_PORT=1883
export MOCKFORGE_MQTT_HOST=0.0.0.0
# Connection limits
export MOCKFORGE_MQTT_MAX_CONNECTIONS=1000
export MOCKFORGE_MQTT_MAX_PACKET_SIZE=1048576
# TLS settings
export MOCKFORGE_MQTT_TLS_ENABLED=false
export MOCKFORGE_MQTT_TLS_CERT_PATH=/path/to/cert.pem
export MOCKFORGE_MQTT_TLS_KEY_PATH=/path/to/key.pem
# Authentication
export MOCKFORGE_MQTT_AUTH_ENABLED=true
export MOCKFORGE_MQTT_AUTH_METHOD=basic
Configuration Validation
MockForge validates MQTT configuration on startup:
- Port conflicts: Checks if the configured port is available
- Certificate validation: Verifies TLS certificates exist and are valid
- ACL consistency: Ensures topic ACL rules don’t conflict
- Resource limits: Validates connection and message limits are reasonable
Configuration Examples
Development Setup
mqtt:
enabled: true
port: 1883
host: "127.0.0.1"
max_connections: 100
log_connections: true
log_messages: true
Production Setup
mqtt:
enabled: true
port: 1883
host: "0.0.0.0"
max_connections: 10000
tls_enabled: true
tls_port: 8883
tls_cert_path: "/etc/ssl/certs/mqtt.crt"
tls_key_path: "/etc/ssl/private/mqtt.key"
auth_enabled: true
auth_method: "jwt"
metrics_enabled: true
IoT Gateway
mqtt:
enabled: true
port: 1883
max_connections: 1000
max_packet_size: 524288 # 512KB for sensor data
keep_alive_secs: 300 # 5 minutes for battery-powered devices
retained_messages_enabled: true
max_total_retained: 5000
Troubleshooting
Common Issues
High CPU Usage
- Reduce
max_connectionsorworker_threads - Enable connection rate limiting
- Check for connection leaks
Memory Issues
- Lower
max_queued_messagesandmax_inflight_messages - Reduce
max_total_retained - Monitor retained message growth
Connection Timeouts
- Increase
keep_alive_secs - Check network connectivity
- Verify firewall settings
TLS Handshake Failures
- Verify certificate validity
- Check certificate chain
- Ensure correct certificate format (PEM)
Next Steps
- Getting Started - Basic MQTT setup
- Fixtures - Define MQTT mock scenarios
- Examples - Real-world usage examples
MQTT Fixtures
MQTT fixtures in MockForge define mock responses for MQTT topics. Unlike HTTP fixtures that respond to requests, MQTT fixtures define what messages should be published when clients publish to specific topics.
Basic Fixture Structure
mqtt:
fixtures:
- identifier: "temperature-sensor"
name: "Temperature Sensor Mock"
topic_pattern: "^sensors/temperature/[^/]+$"
qos: 1
retained: false
response:
payload:
sensor_id: "{{topic_param 2}}"
temperature: "{{faker.float 15.0 35.0}}"
unit: "celsius"
timestamp: "{{now}}"
auto_publish:
enabled: false
interval_ms: 1000
count: 10
Topic Patterns
MQTT fixtures use regex patterns to match topics:
# Match specific topic
topic_pattern: "^sensors/temperature/room1$"
# Match topic hierarchy with wildcards
topic_pattern: "^sensors/temperature/[^/]+$"
# Match multiple levels
topic_pattern: "^devices/.+/status$"
# Complex patterns
topic_pattern: "^([^/]+)/([^/]+)/(.+)$"
Response Configuration
Static Responses
response:
payload:
status: "online"
version: "1.2.3"
uptime: 3600
Dynamic Responses with Templates
response:
payload:
sensor_id: "{{topic_param 1}}"
temperature: "{{faker.float 20.0 30.0}}"
humidity: "{{faker.float 40.0 80.0}}"
timestamp: "{{now}}"
random_id: "{{uuid}}"
Template Variables
MockForge supports extensive templating for MQTT responses:
Topic Parameters
{{topic}}- Full topic string{{topic_param N}}- Nth segment of topic (0-indexed)
Random Data
{{uuid}}- Random UUID{{faker.float min max}}- Random float between min and max{{faker.int min max}}- Random integer between min and max{{rand.float}}- Random float 0.0-1.0{{rand.int}}- Random integer
Time and Dates
{{now}}- Current timestamp (RFC3339){{now + 1h}}- Future timestamp{{now - 30m}}- Past timestamp
Environment Variables
{{env VAR_NAME}}- Environment variable value
Quality of Service (QoS)
# QoS 0 - At most once (fire and forget)
qos: 0
# QoS 1 - At least once (acknowledged)
qos: 1
# QoS 2 - Exactly once (assured)
qos: 2
Retained Messages
# Message is retained on the broker
retained: true
# Message is not retained
retained: false
Auto-Publish Configuration
Automatically publish messages at regular intervals:
auto_publish:
enabled: true
interval_ms: 5000 # Publish every 5 seconds
count: 100 # Publish 100 messages, then stop (optional)
Advanced Fixtures
Conditional Responses
fixtures:
- identifier: "smart-sensor"
name: "Smart Temperature Sensor"
topic_pattern: "^sensors/temp/(.+)$"
response:
payload: |
{
"sensor_id": "{{topic_param 1}}",
"temperature": {{faker.float 15.0 35.0}},
"status": "{{#if (> temperature 30.0)}}critical{{else}}normal{{/if}}",
"timestamp": "{{now}}"
}
conditions:
- variable: "temperature"
operator: ">"
value: 30.0
response:
payload:
sensor_id: "{{topic_param 1}}"
temperature: "{{temperature}}"
status: "critical"
alert: true
Sequence Responses
fixtures:
- identifier: "sequence-demo"
name: "Sequence Response Demo"
topic_pattern: "^demo/sequence$"
sequence:
- payload:
step: 1
message: "Starting sequence"
- payload:
step: 2
message: "Processing..."
- payload:
step: 3
message: "Complete"
sequence_reset: "manual" # auto, manual, time
Error Simulation
fixtures:
- identifier: "faulty-sensor"
name: "Faulty Sensor"
topic_pattern: "^sensors/faulty/(.+)$"
error_simulation:
enabled: true
error_rate: 0.1 # 10% of messages fail
error_responses:
- payload:
error: "Sensor malfunction"
code: "SENSOR_ERROR"
- payload:
error: "Communication timeout"
code: "TIMEOUT"
Fixture Management
Loading Fixtures
# Load fixtures from file
mockforge mqtt fixtures load ./fixtures/mqtt.yaml
# Load fixtures from directory
mockforge mqtt fixtures load ./fixtures/mqtt/
Auto-Publish Control
# Start auto-publishing for all fixtures
mockforge mqtt fixtures start-auto-publish
# Stop auto-publishing
mockforge mqtt fixtures stop-auto-publish
# Start specific fixture
mockforge mqtt fixtures start-auto-publish temperature-sensor
Fixture Validation
MockForge validates fixtures on load:
- Topic pattern syntax - Valid regex patterns
- Template variables - Available variables and functions
- QoS levels - Valid QoS values (0, 1, 2)
- JSON structure - Valid JSON payloads
Examples
IoT Sensor Network
mqtt:
fixtures:
- identifier: "temp-sensor-room1"
name: "Room 1 Temperature Sensor"
topic_pattern: "^sensors/temperature/room1$"
qos: 1
retained: true
response:
payload:
sensor_id: "room1"
temperature: "{{faker.float 20.0 25.0}}"
humidity: "{{faker.float 40.0 60.0}}"
battery_level: "{{faker.float 80.0 100.0}}"
timestamp: "{{now}}"
- identifier: "motion-sensor"
name: "Motion Sensor"
topic_pattern: "^sensors/motion/(.+)$"
qos: 0
retained: false
response:
payload:
sensor_id: "{{topic_param 1}}"
motion_detected: "{{faker.boolean}}"
timestamp: "{{now}}"
auto_publish:
enabled: true
interval_ms: 30000 # Every 30 seconds
Smart Home Devices
mqtt:
fixtures:
- identifier: "smart-light"
name: "Smart Light Controller"
topic_pattern: "^home/lights/(.+)/command$"
qos: 1
response:
payload:
device_id: "{{topic_param 1}}"
command: "ack"
status: "success"
timestamp: "{{now}}"
- identifier: "thermostat"
name: "Smart Thermostat"
topic_pattern: "^home/climate/thermostat$"
qos: 2
retained: true
response:
payload:
temperature: "{{faker.float 18.0 25.0}}"
humidity: "{{faker.float 35.0 65.0}}"
mode: "{{faker.random_element heating cooling auto}}"
setpoint: "{{faker.float 19.0 23.0}}"
timestamp: "{{now}}"
Industrial IoT
mqtt:
fixtures:
- identifier: "conveyor-belt"
name: "Conveyor Belt Monitor"
topic_pattern: "^factory/conveyor/(.+)/status$"
qos: 1
retained: true
response:
payload:
conveyor_id: "{{topic_param 1}}"
status: "{{faker.random_element running stopped maintenance}}"
speed_rpm: "{{faker.float 50.0 150.0}}"
temperature: "{{faker.float 25.0 45.0}}"
vibration: "{{faker.float 0.1 2.0}}"
timestamp: "{{now}}"
auto_publish:
enabled: true
interval_ms: 5000
- identifier: "quality-control"
name: "Quality Control Station"
topic_pattern: "^factory/qc/(.+)/result$"
qos: 2
response:
payload:
station_id: "{{topic_param 1}}"
product_id: "{{uuid}}"
quality_score: "{{faker.float 85.0 100.0}}"
defects_found: "{{faker.int 0 3}}"
passed: "{{#if (> quality_score 90.0)}}true{{else}}false{{/if}}"
timestamp: "{{now}}"
Best Practices
Topic Design
- Use hierarchical topics:
building/floor/room/device - Include device IDs:
sensors/temp/sensor_001 - Use consistent naming conventions
QoS Selection
- QoS 0: Sensor data, non-critical updates
- QoS 1: Important status updates, commands
- QoS 2: Critical control messages, financial data
Retained Messages
- Use for current state:
device/status,sensor/last_reading - Avoid for event data:
sensor/trigger,button/press
Auto-Publish
- Reasonable intervals: 1-60 seconds for sensors
- Consider battery life for IoT devices
- Use for simulation, not production data
Next Steps
- Getting Started - Basic MQTT setup
- Configuration - Detailed configuration options
- Examples - Real-world usage examples
MQTT Examples
This document provides real-world examples of using MockForge MQTT for testing IoT applications, microservices communication, and pub/sub systems.
IoT Device Simulation
Smart Home System
Scenario: Test a smart home application that controls lights, thermostats, and security sensors.
MockForge Configuration:
mqtt:
enabled: true
port: 1883
fixtures:
# Smart Lights
- identifier: "living-room-light"
name: "Living Room Light"
topic_pattern: "^home/lights/living_room/command$"
qos: 1
response:
payload:
device_id: "living_room_light"
status: "success"
brightness: "{{faker.int 0 100}}"
timestamp: "{{now}}"
- identifier: "kitchen-light"
name: "Kitchen Light"
topic_pattern: "^home/lights/kitchen/command$"
qos: 1
response:
payload:
device_id: "kitchen_light"
status: "success"
color_temp: "{{faker.int 2700 6500}}"
timestamp: "{{now}}"
# Thermostat
- identifier: "thermostat"
name: "Smart Thermostat"
topic_pattern: "^home/climate/thermostat$"
qos: 2
retained: true
response:
payload:
temperature: "{{faker.float 18.0 25.0}}"
humidity: "{{faker.float 35.0 65.0}}"
mode: "{{faker.random_element heating cooling auto}}"
setpoint: "{{faker.float 19.0 23.0}}"
timestamp: "{{now}}"
auto_publish:
enabled: true
interval_ms: 30000
# Motion Sensors
- identifier: "motion-sensor"
name: "Motion Sensor"
topic_pattern: "^home/security/motion/(.+)$"
qos: 0
response:
payload:
sensor_id: "{{topic_param 1}}"
motion_detected: "{{faker.boolean}}"
battery_level: "{{faker.float 70.0 100.0}}"
timestamp: "{{now}}"
auto_publish:
enabled: true
interval_ms: 15000
Test Code (Python):
import paho.mqtt.client as mqtt
import json
import time
def test_smart_home_integration():
client = mqtt.Client("test-client")
client.connect("localhost", 1883, 60)
# Test light control
client.publish("home/lights/living_room/command", json.dumps({
"action": "turn_on",
"brightness": 80
}), qos=1)
# Subscribe to responses
responses = []
def on_message(client, userdata, msg):
responses.append(json.loads(msg.payload.decode()))
client.on_message = on_message
client.subscribe("home/lights/living_room/status")
client.loop_start()
# Wait for response
time.sleep(1)
client.loop_stop()
assert len(responses) > 0
assert responses[0]["device_id"] == "living_room_light"
assert responses[0]["status"] == "success"
# Test thermostat reading
client.subscribe("home/climate/thermostat")
client.loop_start()
time.sleep(2) # Wait for auto-published message
client.loop_stop()
# Verify thermostat data
thermostat_data = None
for response in responses:
if "temperature" in response:
thermostat_data = response
break
assert thermostat_data is not None
assert 18.0 <= thermostat_data["temperature"] <= 25.0
assert thermostat_data["mode"] in ["heating", "cooling", "auto"]
client.disconnect()
Industrial IoT Monitoring
Scenario: Test an industrial monitoring system with sensors, actuators, and PLCs.
MockForge Configuration:
mqtt:
enabled: true
port: 1883
max_connections: 100
fixtures:
# Temperature Sensors
- identifier: "temp-sensor-1"
name: "Temperature Sensor 1"
topic_pattern: "^factory/sensors/temp/1$"
qos: 1
retained: true
response:
payload:
sensor_id: "temp_1"
temperature: "{{faker.float 20.0 80.0}}"
unit: "celsius"
status: "operational"
timestamp: "{{now}}"
auto_publish:
enabled: true
interval_ms: 5000
# Pressure Sensors
- identifier: "pressure-sensor"
name: "Pressure Sensor"
topic_pattern: "^factory/sensors/pressure/(.+)$"
qos: 1
response:
payload:
sensor_id: "{{topic_param 1}}"
pressure: "{{faker.float 0.5 5.0}}"
unit: "bar"
threshold: 3.5
alert: "{{#if (> pressure 3.5)}}true{{else}}false{{/if}}"
timestamp: "{{now}}"
# Conveyor Belt Controller
- identifier: "conveyor-controller"
name: "Conveyor Belt Controller"
topic_pattern: "^factory/actuators/conveyor/(.+)/command$"
qos: 2
response:
payload:
actuator_id: "{{topic_param 1}}"
command_ack: true
status: "executing"
estimated_completion: "{{now + 5s}}"
timestamp: "{{now}}"
# Quality Control Station
- identifier: "qc-station"
name: "Quality Control Station"
topic_pattern: "^factory/qc/station_(.+)/result$"
qos: 2
response:
payload:
station_id: "{{topic_param 1}}"
product_id: "{{uuid}}"
quality_score: "{{faker.float 85.0 100.0}}"
defects: "{{faker.int 0 2}}"
passed: "{{#if (> quality_score 95.0)}}true{{else}}false{{/if}}"
timestamp: "{{now}}"
Test Code (JavaScript/Node.js):
const mqtt = require('mqtt');
describe('Industrial IoT System', () => {
let client;
beforeAll(() => {
client = mqtt.connect('mqtt://localhost:1883');
});
afterAll(() => {
client.end();
});
test('sensor data collection', (done) => {
const sensorData = [];
client.subscribe('factory/sensors/temp/1');
client.subscribe('factory/sensors/pressure/1');
client.on('message', (topic, message) => {
const data = JSON.parse(message.toString());
sensorData.push({ topic, data });
if (sensorData.length >= 2) {
// Verify temperature sensor
const tempSensor = sensorData.find(s => s.topic === 'factory/sensors/temp/1');
expect(tempSensor.data.temperature).toBeGreaterThanOrEqual(20);
expect(tempSensor.data.temperature).toBeLessThanOrEqual(80);
expect(tempSensor.data.unit).toBe('celsius');
// Verify pressure sensor
const pressureSensor = sensorData.find(s => s.topic === 'factory/sensors/pressure/1');
expect(pressureSensor.data.pressure).toBeGreaterThanOrEqual(0.5);
expect(pressureSensor.data.pressure).toBeLessThanOrEqual(5.0);
expect(pressureSensor.data.unit).toBe('bar');
client.unsubscribe(['factory/sensors/temp/1', 'factory/sensors/pressure/1']);
done();
}
});
// Trigger sensor readings
client.publish('factory/sensors/temp/1/trigger', 'read');
client.publish('factory/sensors/pressure/1/trigger', 'read');
});
test('actuator control', (done) => {
client.subscribe('factory/actuators/conveyor/1/status');
client.on('message', (topic, message) => {
if (topic === 'factory/actuators/conveyor/1/status') {
const status = JSON.parse(message.toString());
expect(status.actuator_id).toBe('1');
expect(status.command_ack).toBe(true);
expect(status.status).toBe('executing');
client.unsubscribe('factory/actuators/conveyor/1/status');
done();
}
});
// Send control command
client.publish('factory/actuators/conveyor/1/command', JSON.stringify({
action: 'start',
speed: 50
}), { qos: 2 });
});
test('quality control workflow', (done) => {
client.subscribe('factory/qc/station_1/result');
client.on('message', (topic, message) => {
const result = JSON.parse(message.toString());
expect(result.station_id).toBe('1');
expect(result.quality_score).toBeGreaterThanOrEqual(85);
expect(result.quality_score).toBeLessThanOrEqual(100);
expect(typeof result.defects).toBe('number');
expect(typeof result.passed).toBe('boolean');
client.unsubscribe('factory/qc/station_1/result');
done();
});
// Trigger quality check
client.publish('factory/qc/station_1/check', JSON.stringify({
product_id: 'PROD-001',
batch_id: 'BATCH-2024'
}));
});
});
Microservices Communication
Event-Driven Architecture
Scenario: Test microservices communicating via MQTT events.
MockForge Configuration:
mqtt:
enabled: true
port: 1883
fixtures:
# User Service Events
- identifier: "user-registered"
name: "User Registration Event"
topic_pattern: "^events/user/registered$"
qos: 1
response:
payload:
event_type: "user_registered"
user_id: "{{uuid}}"
email: "{{faker.email}}"
timestamp: "{{now}}"
source: "user-service"
# Order Service Events
- identifier: "order-created"
name: "Order Created Event"
topic_pattern: "^events/order/created$"
qos: 1
response:
payload:
event_type: "order_created"
order_id: "{{uuid}}"
user_id: "{{uuid}}"
amount: "{{faker.float 10.0 500.0}}"
currency: "USD"
items: "{{faker.int 1 10}}"
timestamp: "{{now}}"
source: "order-service"
# Payment Service Events
- identifier: "payment-processed"
name: "Payment Processed Event"
topic_pattern: "^events/payment/processed$"
qos: 2
response:
payload:
event_type: "payment_processed"
payment_id: "{{uuid}}"
order_id: "{{uuid}}"
amount: "{{faker.float 10.0 500.0}}"
currency: "USD"
status: "{{faker.random_element completed failed pending}}"
method: "{{faker.random_element credit_card paypal bank_transfer}}"
timestamp: "{{now}}"
source: "payment-service"
# Notification Service
- identifier: "email-notification"
name: "Email Notification"
topic_pattern: "^commands/notification/email$"
qos: 1
response:
payload:
command_type: "send_email"
notification_id: "{{uuid}}"
recipient: "{{faker.email}}"
subject: "Order Confirmation"
template: "order_confirmation"
status: "queued"
timestamp: "{{now}}"
Test Code (Go):
package main
import (
"encoding/json"
"testing"
"time"
mqtt "github.com/eclipse/paho.mqtt.golang"
)
func TestEventDrivenWorkflow(t *testing.T) {
opts := mqtt.NewClientOptions().AddBroker("tcp://localhost:1883")
client := mqtt.NewClient(opts)
if token := client.Connect(); token.Wait() && token.Error() != nil {
t.Fatalf("Failed to connect: %v", token.Error())
}
defer client.Disconnect(250)
// Test user registration -> order creation -> payment -> notification flow
events := make(chan map[string]interface{}, 10)
// Subscribe to all events
client.Subscribe("events/#", 1, func(client mqtt.Client, msg mqtt.Message) {
var event map[string]interface{}
json.Unmarshal(msg.Payload(), &event)
events <- event
})
// Trigger user registration
userEvent := map[string]interface{}{
"user_id": "user-123",
"email": "user@example.com",
}
payload, _ := json.Marshal(userEvent)
client.Publish("events/user/registered", 1, false, payload)
// Wait for events
timeout := time.After(5 * time.Second)
receivedEvents := make(map[string]int)
for {
select {
case event := <-events:
eventType := event["event_type"].(string)
receivedEvents[eventType]++
// Verify event structure
switch eventType {
case "user_registered":
if event["user_id"] == nil || event["email"] == nil {
t.Errorf("Invalid user_registered event: %v", event)
}
case "order_created":
if event["order_id"] == nil || event["amount"] == nil {
t.Errorf("Invalid order_created event: %v", event)
}
case "payment_processed":
if event["payment_id"] == nil || event["status"] == nil {
t.Errorf("Invalid payment_processed event: %v", event)
}
}
case <-timeout:
// Check that we received expected events
if receivedEvents["user_registered"] == 0 {
t.Error("Expected user_registered event")
}
if receivedEvents["order_created"] == 0 {
t.Error("Expected order_created event")
}
if receivedEvents["payment_processed"] == 0 {
t.Error("Expected payment_processed event")
}
return
}
}
}
Real-Time Data Streaming
Live Dashboard Testing
Scenario: Test a real-time dashboard that displays sensor data and alerts.
MockForge Configuration:
mqtt:
enabled: true
port: 1883
fixtures:
# Environmental Sensors
- identifier: "env-sensor-cluster"
name: "Environmental Sensor Cluster"
topic_pattern: "^sensors/env/(.+)/(.+)$"
qos: 0
response:
payload:
sensor_type: "{{topic_param 2}}"
location: "{{topic_param 1}}"
value: "{{#switch topic_param.2}}
{{#case 'temperature'}}{{faker.float 15.0 35.0}}{{/case}}
{{#case 'humidity'}}{{faker.float 30.0 90.0}}{{/case}}
{{#case 'co2'}}{{faker.float 400.0 2000.0}}{{/case}}
{{#default}}0{{/default}}
{{/switch}}"
unit: "{{#switch topic_param.2}}
{{#case 'temperature'}}celsius{{/case}}
{{#case 'humidity'}}percent{{/case}}
{{#case 'co2'}}ppm{{/case}}
{{#default}}unit{{/default}}
{{/switch}}"
timestamp: "{{now}}"
auto_publish:
enabled: true
interval_ms: 2000
# System Alerts
- identifier: "system-alerts"
name: "System Alerts"
topic_pattern: "^alerts/system/(.+)$"
qos: 1
response:
payload:
alert_type: "{{topic_param 1}}"
severity: "{{faker.random_element info warning error critical}}"
message: "{{#switch topic_param.1}}
{{#case 'temperature'}}High temperature detected{{/case}}
{{#case 'power'}}Power supply issue{{/case}}
{{#case 'network'}}Network connectivity lost{{/case}}
{{#default}}System alert{{/default}}
{{/switch}}"
sensor_id: "{{uuid}}"
timestamp: "{{now}}"
auto_publish:
enabled: true
interval_ms: 30000
Test Code (Rust):
#![allow(unused)] fn main() { use paho_mqtt as mqtt; use std::time::Duration; #[tokio::test] async fn test_realtime_dashboard() { let create_opts = mqtt::CreateOptionsBuilder::new() .server_uri("tcp://localhost:1883") .client_id("dashboard-test") .finalize(); let mut client = mqtt::AsyncClient::new(create_opts).unwrap(); let conn_opts = mqtt::ConnectOptions::new(); client.connect(conn_opts).await.unwrap(); // Subscribe to sensor data client.subscribe("sensors/env/+/temperature", mqtt::QOS_0).await.unwrap(); client.subscribe("sensors/env/+/humidity", mqtt::QOS_0).await.unwrap(); client.subscribe("alerts/system/+", mqtt::QOS_1).await.unwrap(); let mut receiver = client.get_stream(100); let mut message_count = 0; let mut alerts_received = 0; // Collect messages for 10 seconds let start_time = std::time::Instant::now(); while start_time.elapsed() < Duration::from_secs(10) { if let Ok(Some(msg)) = tokio::time::timeout(Duration::from_millis(100), receiver.recv()).await { message_count += 1; let payload: serde_json::Value = serde_json::from_str(&msg.payload_str()).unwrap(); // Verify sensor data structure if msg.topic().contains("sensors/env") { assert!(payload.get("sensor_type").is_some()); assert!(payload.get("location").is_some()); assert!(payload.get("value").is_some()); assert!(payload.get("unit").is_some()); assert!(payload.get("timestamp").is_some()); } // Count alerts if msg.topic().contains("alerts/system") { alerts_received += 1; assert!(payload.get("alert_type").is_some()); assert!(payload.get("severity").is_some()); assert!(payload.get("message").is_some()); } } } // Verify we received data assert!(message_count > 0, "No messages received"); assert!(alerts_received > 0, "No alerts received"); client.disconnect(None).await.unwrap(); } }
CI/CD Integration
Automated Testing Pipeline
# .github/workflows/mqtt-tests.yml
name: MQTT Integration Tests
on: [push, pull_request]
jobs:
mqtt-tests:
runs-on: ubuntu-latest
services:
mockforge:
image: mockforge:latest
ports:
- 1883:1883
env:
MOCKFORGE_MQTT_ENABLED: true
MOCKFORGE_MQTT_FIXTURES: ./test-fixtures/mqtt/
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '18'
- name: Install dependencies
run: npm ci
- name: Wait for MockForge
run: |
timeout 30 bash -c 'until nc -z localhost 1883; do sleep 1; done'
- name: Run MQTT tests
run: npm test -- --testPathPattern=mqtt
env:
MQTT_BROKER: localhost:1883
Performance Testing
Load Testing MQTT Broker
mqtt:
enabled: true
port: 1883
max_connections: 1000
fixtures:
- identifier: "load-test-sensor"
name: "Load Test Sensor"
topic_pattern: "^loadtest/sensor/(.+)$"
qos: 0
response:
payload:
sensor_id: "{{topic_param 1}}"
value: "{{faker.float 0.0 100.0}}"
timestamp: "{{now}}"
Load Test Script (Python):
import paho.mqtt.client as mqtt
import threading
import time
import json
def create_publisher(client_id, num_messages):
client = mqtt.Client(f"publisher-{client_id}")
client.connect("localhost", 1883, 60)
for i in range(num_messages):
payload = {
"sensor_id": f"sensor_{client_id}_{i}",
"value": i * 1.5,
"timestamp": time.time()
}
client.publish(f"loadtest/sensor/{client_id}", json.dumps(payload), qos=0)
client.disconnect()
def load_test():
num_publishers = 50
messages_per_publisher = 100
start_time = time.time()
threads = []
for i in range(num_publishers):
thread = threading.Thread(target=create_publisher, args=(i, messages_per_publisher))
threads.append(thread)
thread.start()
for thread in threads:
thread.join()
end_time = time.time()
total_messages = num_publishers * messages_per_publisher
duration = end_time - start_time
print(f"Published {total_messages} messages in {duration:.2f} seconds")
print(f"Throughput: {total_messages / duration:.0f} messages/second")
if __name__ == "__main__":
load_test()
Next Steps
- Getting Started - Basic MQTT setup
- Configuration - Detailed configuration options
- Fixtures - Define MQTT mock scenarios
Getting Started with SMTP
MockForge includes a fully functional SMTP (Simple Mail Transfer Protocol) server for testing email workflows in your applications. This guide will help you get started quickly.
Quick Start
1. Enable SMTP in Configuration
Create a configuration file or modify your existing config.yaml:
smtp:
enabled: true
port: 1025
host: "0.0.0.0"
hostname: "mockforge-smtp"
2. Start the Server
mockforge serve --config config.yaml
You should see:
📧 SMTP server listening on localhost:1025
3. Send a Test Email
Using Python’s built-in smtplib:
import smtplib
from email.message import EmailMessage
msg = EmailMessage()
msg['Subject'] = 'Test Email'
msg['From'] = 'sender@example.com'
msg['To'] = 'recipient@example.com'
msg.set_content('This is a test email from Python.')
with smtplib.SMTP('localhost', 1025) as server:
server.send_message(msg)
print("Email sent successfully!")
4. Verify Email Reception
Currently, emails are stored in the in-memory mailbox. You can verify by checking the server logs or using the API endpoints (if UI is enabled).
Using Command-Line Tools
telnet
telnet localhost 1025
> EHLO client.example.com
> MAIL FROM:<sender@example.com>
> RCPT TO:<recipient@example.com>
> DATA
> Subject: Test Email
>
> This is a test email.
> .
> QUIT
swaks (SMTP Testing Tool)
swaks is a powerful SMTP testing tool:
# Install swaks
# On Ubuntu/Debian: apt install swaks
# On macOS: brew install swaks
# Send test email
swaks --to recipient@example.com \
--from sender@example.com \
--server localhost:1025 \
--body "Test email from swaks" \
--header "Subject: Test"
Supported SMTP Commands
MockForge SMTP server implements RFC 5321 and supports:
- HELO / EHLO - Client introduction
- MAIL FROM - Specify sender
- RCPT TO - Specify recipient(s)
- DATA - Send message content
- RSET - Reset session
- NOOP - No operation (keepalive)
- QUIT - End session
- HELP - List supported commands
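A quick way to exercise this command set is Python’s standard smtplib, which maps each call onto the corresponding SMTP verb:
import smtplib
with smtplib.SMTP('localhost', 1025) as server:
    code, banner = server.ehlo('client.example.com')  # EHLO
    print(code, banner.decode())
    server.noop()                                     # NOOP (keepalive)
    server.mail('sender@example.com')                 # MAIL FROM
    server.rcpt('recipient@example.com')              # RCPT TO
    server.rset()                                     # RSET - abandon the transaction
    print(server.docmd('HELP'))                       # HELP
# QUIT is sent automatically when the with-block exits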
Basic Configuration Options
smtp:
enabled: true # Enable/disable SMTP server
port: 1025 # Port (1025 for dev, 25 for prod)
host: "0.0.0.0" # Bind address
hostname: "mockforge-smtp" # Server hostname in greeting
# Mailbox settings
enable_mailbox: true
max_mailbox_messages: 1000
# Timeouts
timeout_secs: 30
max_connections: 100
Environment Variables
Override configuration with environment variables:
export MOCKFORGE_SMTP_ENABLED=true
export MOCKFORGE_SMTP_PORT=1025
export MOCKFORGE_SMTP_HOST=0.0.0.0
export MOCKFORGE_SMTP_HOSTNAME=my-smtp-server
mockforge serve
Next Steps
- Configuration Reference - Detailed configuration options
- Fixtures - Create email scenarios and auto-replies
- Examples - Real-world usage examples
Troubleshooting
Connection Refused
Problem: Cannot connect to SMTP server
Solutions:
- Verify SMTP is enabled: smtp.enabled: true
- Check the port isn’t in use: lsof -i :1025
- Ensure the server is running: look for “SMTP server listening” in the logs
Email Not Received
Problem: Email sent but not stored
Solutions:
- Check the mailbox is enabled: smtp.enable_mailbox: true
- Verify the mailbox size limit: smtp.max_mailbox_messages
- Check server logs for errors
Permission Denied on Port 25
Problem: Cannot bind to port 25
Solution: Ports below 1024 require root privileges. Use port 1025 for development or run with sudo for production testing.
Common Use Cases
Testing Email Workflows
# In your test suite
def test_user_registration_sends_welcome_email():
# Register user (triggers email send)
response = client.post('/register', json={
'email': 'newuser@example.com',
'password': 'secret'
})
assert response.status_code == 201
# Verify email was sent to MockForge SMTP
    emails = get_emails_from_mockforge()  # see the mailbox helper sketched under "Verify Email Reception"
assert len(emails) == 1
assert emails[0]['to'] == 'newuser@example.com'
assert 'Welcome' in emails[0]['subject']
CI/CD Integration
# .github/workflows/test.yml
- name: Start MockForge SMTP
run: |
mockforge serve --smtp --smtp-port 1025 &
sleep 2
- name: Run tests
env:
SMTP_HOST: localhost
SMTP_PORT: 1025
run: pytest tests/
What’s Next?
Now that you have a basic SMTP server running, explore:
- Fixtures - Define email acceptance rules and auto-replies
- Configuration - Fine-tune server behavior
- Examples - See real-world implementations
SMTP Configuration Reference
This page provides comprehensive documentation for all SMTP configuration options in MockForge.
Configuration File
Configuration can be provided via YAML or JSON files:
# config.yaml
smtp:
# Server settings
enabled: true
port: 1025
host: "0.0.0.0"
hostname: "mockforge-smtp"
# Connection settings
timeout_secs: 30
max_connections: 100
# Mailbox settings
enable_mailbox: true
max_mailbox_messages: 1000
# Fixtures
fixtures_dir: "./fixtures/smtp"
Configuration Options
Server Settings
enabled
- Type: boolean
- Default: false
- Description: Enable or disable the SMTP server
smtp:
enabled: true
port
- Type: integer
- Default: 1025
- Description: Port number for the SMTP server to listen on
- Notes:
- Standard SMTP port is 25, but requires root/admin privileges
- Common development ports: 1025, 2525, 5025
- Must be between 1 and 65535
smtp:
port: 1025
host
- Type: string
- Default: "0.0.0.0"
- Description: IP address to bind the server to
- Options:
  - "0.0.0.0" - Listen on all interfaces
  - "127.0.0.1" - Listen only on localhost
  - A specific IP for one network interface
smtp:
host: "127.0.0.1" # Localhost only
hostname
- Type: string
- Default: "mockforge-smtp"
- Description: Server hostname used in the SMTP greeting and responses
- Notes: Appears in the 220 greeting and 250 HELO/EHLO responses
smtp:
hostname: "mail.example.com"
Connection Settings
timeout_secs
- Type: integer
- Default: 30
- Description: Connection timeout in seconds
- Range: 1 to 3600 (1 second to 1 hour)
smtp:
timeout_secs: 60 # 1 minute timeout
max_connections
- Type: integer
- Default: 100
- Description: Maximum number of concurrent SMTP connections
- Notes: Prevents resource exhaustion from too many connections
smtp:
max_connections: 500
Mailbox Settings
enable_mailbox
- Type: boolean
- Default: true
- Description: Enable the in-memory mailbox for storing received emails
smtp:
enable_mailbox: true
max_mailbox_messages
- Type: integer
- Default: 1000
- Description: Maximum number of emails to store in the mailbox
- Notes:
  - Uses FIFO (First In, First Out) eviction when the limit is reached
  - The oldest emails are removed when the limit is exceeded
  - Set to 0 for unlimited (not recommended)
smtp:
max_mailbox_messages: 5000
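The eviction behavior is that of a bounded FIFO queue; in Python terms (a conceptual model only):
from collections import deque
# With maxlen=1000, appending the 1001st email silently drops the oldest one
mailbox = deque(maxlen=1000)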
Fixture Settings
fixtures_dir
- Type: string (path)
- Default: null (no fixtures)
- Description: Directory containing SMTP fixture files
- Notes:
  - Can be an absolute or relative path
  - All .yaml and .yml files in the directory will be loaded
  - See the Fixtures documentation for the format
smtp:
fixtures_dir: "./fixtures/smtp"
Or with absolute path:
smtp:
fixtures_dir: "/opt/mockforge/fixtures/smtp"
Environment Variables
All configuration options can be overridden with environment variables using the prefix MOCKFORGE_SMTP_:
| Environment Variable | Config Option | Example |
|---|---|---|
| MOCKFORGE_SMTP_ENABLED | enabled | true |
| MOCKFORGE_SMTP_PORT | port | 2525 |
| MOCKFORGE_SMTP_HOST | host | 127.0.0.1 |
| MOCKFORGE_SMTP_HOSTNAME | hostname | testmail.local |
Example
export MOCKFORGE_SMTP_ENABLED=true
export MOCKFORGE_SMTP_PORT=2525
export MOCKFORGE_SMTP_HOST=0.0.0.0
export MOCKFORGE_SMTP_HOSTNAME=test-server
mockforge serve
Command-Line Arguments
Override configuration via CLI arguments:
mockforge serve \
--smtp-port 2525 \
--config ./config.yaml
Priority Order
Configuration is applied in the following order (highest to lowest priority):
- Command-line arguments
- Environment variables
- Configuration file
- Default values
Complete Example
Development Configuration
# config.dev.yaml
smtp:
enabled: true
port: 1025
host: "127.0.0.1"
hostname: "dev-smtp"
timeout_secs: 30
max_connections: 50
enable_mailbox: true
max_mailbox_messages: 500
fixtures_dir: "./fixtures/smtp"
Production-Like Configuration
# config.prod.yaml
smtp:
enabled: true
port: 2525
host: "0.0.0.0"
hostname: "mockforge.example.com"
timeout_secs: 60
max_connections: 1000
enable_mailbox: true
max_mailbox_messages: 10000
fixtures_dir: "/opt/mockforge/smtp-fixtures"
CI/CD Configuration
# config.ci.yaml
smtp:
enabled: true
port: 1025
host: "127.0.0.1"
hostname: "ci-smtp"
timeout_secs: 10
max_connections: 10
enable_mailbox: true
max_mailbox_messages: 100
fixtures_dir: "./test/fixtures/smtp"
Performance Tuning
High-Volume Scenarios
For testing high-volume email sending:
smtp:
max_connections: 2000
max_mailbox_messages: 50000
timeout_secs: 120
Memory considerations: Each stored email uses approximately 1-5 KB of memory depending on size. 50,000 emails ≈ 50-250 MB.
Low-Resource Environments
For constrained environments (CI, containers):
smtp:
max_connections: 25
max_mailbox_messages: 100
timeout_secs: 15
Best Practices
Security
- Bind to localhost in development: host: "127.0.0.1"
- Use non-privileged ports: port: 1025 (not 25)
- Limit connections: max_connections: 100
Testing
- Use fixtures for deterministic tests: fixtures_dir: "./fixtures/smtp"
- Configure an appropriate mailbox size: max_mailbox_messages: 1000 (adjust to your test suite)
- Set realistic timeouts: timeout_secs: 30 (not too short, not too long)
CI/CD
- Use environment variables for flexibility: MOCKFORGE_SMTP_PORT=1025
- Start the server in the background: mockforge serve --smtp &
- Bind to localhost for security: host: "127.0.0.1"
Troubleshooting
Port Already in Use
Error: Address already in use
Solution:
# Check what's using the port
lsof -i :1025
# Use a different port
mockforge serve --smtp-port 2525
Too Many Open Files
Error: Too many open files
Solution: Reduce max_connections:
smtp:
max_connections: 50
Out of Memory
Error: OOM or slowdown with large mailbox
Solution: Reduce max_mailbox_messages:
smtp:
max_mailbox_messages: 1000
SMTP Fixtures
SMTP fixtures allow you to define email acceptance rules, auto-reply behavior, and storage options based on pattern matching. This enables sophisticated email testing scenarios.
Fixture Format
Fixtures are defined in YAML format:
identifier: "welcome-email"
name: "Welcome Email Handler"
description: "Handles welcome emails to new users"
match_criteria:
recipient_pattern: "^welcome@example\\.com$"
sender_pattern: null
subject_pattern: null
match_all: false
response:
status_code: 250
message: "Message accepted"
delay_ms: 0
auto_reply:
enabled: false
storage:
save_to_mailbox: true
export_to_file: null
behavior:
failure_rate: 0.0
delay_ms: 0
Match Criteria
recipient_pattern
- Type: string (regex) or null
- Description: Regular expression matched against the recipient email address
- Examples:
  - ^user@example\.com$ - Exact match
  - ^.*@example\.com$ - Any user at the domain
  - ^admin.*@.*\.com$ - Admin users at any .com domain
match_criteria:
recipient_pattern: "^support@example\\.com$"
sender_pattern
- Type: string (regex) or null
- Description: Regular expression matched against the sender email address
match_criteria:
sender_pattern: "^no-reply@.*\\.com$"
subject_pattern
- Type: string (regex) or null
- Description: Regular expression matched against the email subject line
match_criteria:
subject_pattern: "^\\[URGENT\\].*"
match_all
- Type: boolean
- Default: false
- Description: When true, this fixture matches all emails (catch-all)
match_criteria:
match_all: true # Catch-all fixture
Matching Logic
Patterns are evaluated in order:
- If match_all is true, the fixture matches
- Otherwise, all non-null patterns must match:
  - If recipient_pattern is set, it must match
  - If sender_pattern is set, it must match
  - If subject_pattern is set, it must match
Response Configuration
status_code
- Type: integer
- Default: 250
- Description: SMTP status code to return
- Common codes:
  - 250 - OK (success)
  - 550 - Mailbox unavailable (rejection)
  - 451 - Temporary failure
  - 452 - Insufficient storage
response:
status_code: 550 # Reject email
message
- Type: string
- Description: Response message text
response:
status_code: 250
message: "Message accepted for delivery"
delay_ms
- Type: integer
- Default: 0
- Description: Artificial delay before responding (milliseconds)
- Use case: Simulate slow mail servers
response:
delay_ms: 500 # 500ms delay
Auto-Reply
Auto-replies allow MockForge to automatically send response emails.
Basic Auto-Reply
auto_reply:
enabled: true
from: "noreply@example.com"
to: "{{from}}" # Reply to sender
subject: "Re: {{subject}}"
body: |
Thank you for your email.
This is an automated response.
Template Variables
Use template variables in auto-reply fields:
- {{from}} - Original sender email
- {{to}} - Original recipient email
- {{subject}} - Original subject
- {{from_name}} - Name extracted from the sender
- {{now}} - Current timestamp
- Faker functions: {{faker.name}}, {{faker.email}}, etc.
Example: Welcome Email Auto-Reply
identifier: "welcome-autoresponder"
name: "Welcome Email Auto-Reply"
match_criteria:
recipient_pattern: "^register@example\\.com$"
response:
status_code: 250
message: "Message accepted"
auto_reply:
enabled: true
from: "welcome@example.com"
to: "{{from}}"
subject: "Welcome to Example.com!"
body: |
Hi {{from_name}},
Thank you for registering at Example.com!
Your registration was received at {{now}}.
If you have any questions, reply to this email.
Best regards,
The Example.com Team
Storage Configuration
save_to_mailbox
- Type: boolean
- Default: true
- Description: Store received emails in the in-memory mailbox
storage:
save_to_mailbox: true
export_to_file
- Type: string (path) or null
- Description: Export emails to files on disk
- Format: Emails are saved as .eml files
storage:
save_to_mailbox: true
export_to_file: "./emails/received"
File naming pattern: {timestamp}_{from}_{to}.eml
Example: 20240315_143022_sender@example.com_recipient@example.com.eml
Behavior Configuration
failure_rate
- Type: float (0.0 to 1.0)
- Default: 0.0
- Description: Probability of simulated failure (for testing error handling)
- Examples:
  - 0.0 - Never fail
  - 0.1 - 10% failure rate
  - 1.0 - Always fail
behavior:
failure_rate: 0.05 # 5% of emails fail
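Conceptually, the failure decision is an independent random draw per email (an illustrative sketch, not MockForge’s actual code):
import random
def should_fail(failure_rate: float) -> bool:
    """Return True with probability failure_rate (0.0 = never, 1.0 = always)."""
    return random.random() < failure_rate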
delay_ms
- Type: integer
- Default: 0
- Description: Artificial delay before processing (milliseconds)
behavior:
delay_ms: 1000 # 1 second delay
Complete Examples
Example 1: User Registration Emails
identifier: "user-registration"
name: "User Registration Handler"
description: "Handles new user registration confirmation emails"
match_criteria:
recipient_pattern: "^[^@]+@example\\.com$"
subject_pattern: "^Registration Confirmation"
response:
status_code: 250
message: "Registration email accepted"
delay_ms: 0
auto_reply:
enabled: true
from: "noreply@example.com"
to: "{{from}}"
subject: "Welcome! Please Confirm Your Email"
body: |
Hello,
Thank you for registering!
Please click the link below to confirm your email:
https://example.com/confirm?token={{uuid}}
This link expires in 24 hours.
Best regards,
Example.com Team
storage:
save_to_mailbox: true
export_to_file: "./logs/registration-emails"
behavior:
failure_rate: 0.0
delay_ms: 0
Example 2: Support Ticket System
identifier: "support-tickets"
name: "Support Ticket Handler"
description: "Auto-responds to support emails"
match_criteria:
recipient_pattern: "^support@example\\.com$"
response:
status_code: 250
message: "Support ticket created"
auto_reply:
enabled: true
from: "support@example.com"
to: "{{from}}"
subject: "Ticket Created: {{subject}}"
body: |
Your support ticket has been created.
Ticket ID: {{uuid}}
Subject: {{subject}}
Created: {{now}}
We'll respond within 24 hours.
Support Team
storage:
save_to_mailbox: true
Example 3: Bounced Email Simulation
identifier: "bounce-simulation"
name: "Simulate Bounced Emails"
description: "Rejects emails to invalid addresses"
match_criteria:
recipient_pattern: "^bounce-test@example\\.com$"
response:
status_code: 550
message: "Mailbox unavailable"
delay_ms: 0
auto_reply:
enabled: false
storage:
save_to_mailbox: false
behavior:
failure_rate: 1.0 # Always fail
Example 4: Slow Server Simulation
identifier: "slow-server"
name: "Slow SMTP Server"
description: "Simulates slow mail server response"
match_criteria:
recipient_pattern: "^slowtest@example\\.com$"
response:
status_code: 250
message: "OK"
delay_ms: 5000 # 5 second delay
storage:
save_to_mailbox: true
behavior:
delay_ms: 3000 # Additional 3 second processing delay
Example 5: Catch-All Default
identifier: "default-handler"
name: "Default Email Handler"
description: "Accepts all emails not matched by other fixtures"
match_criteria:
match_all: true
response:
status_code: 250
message: "Message accepted"
auto_reply:
enabled: false
storage:
save_to_mailbox: true
behavior:
failure_rate: 0.0
delay_ms: 0
Loading Fixtures
Directory Structure
fixtures/smtp/
├── welcome-email.yaml
├── support-tickets.yaml
├── bounce-simulation.yaml
└── default.yaml
Configuration
smtp:
fixtures_dir: "./fixtures/smtp"
Fixture Priority
Fixtures are evaluated in alphabetical order by filename, and the first matching fixture wins; place match_all catch-alls last so they only apply when nothing else matches.
To control priority, use numbered prefixes:
fixtures/smtp/
├── 01-bounce.yaml # Highest priority
├── 02-welcome.yaml
├── 03-support.yaml
└── 99-default.yaml # Lowest priority (catch-all)
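The load-and-match behavior can be modeled in a few lines; this sketch assumes PyYAML and reuses the fixture_matches predicate from the matching-logic sketch above:
import glob
import yaml  # PyYAML
def load_fixtures(fixtures_dir):
    """Load fixtures in alphabetical filename order, mirroring the evaluation order."""
    fixtures = []
    for path in sorted(glob.glob(f"{fixtures_dir}/*.yaml") + glob.glob(f"{fixtures_dir}/*.yml")):
        with open(path) as f:
            fixtures.append(yaml.safe_load(f))
    return fixtures
def find_match(fixtures, email):
    # First match wins; a 99-prefixed match_all fixture naturally sorts last
    for fixture in fixtures:
        if fixture_matches(fixture["match_criteria"], email):
            return fixture
    return None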
Testing Fixtures
1. Validate Fixture Syntax
# Future command (not yet implemented)
mockforge smtp fixtures validate ./fixtures/smtp/welcome.yaml
2. Test Fixture Matching
Send test email:
swaks --to welcome@example.com \
--from test@test.com \
--server localhost:1025 \
--header "Subject: Test"
Check server logs for fixture match:
[INFO] Matched fixture: welcome-email
3. Verify Auto-Reply
Check mailbox or export directory for auto-reply email.
Best Practices
1. Specific Before General
Place specific fixtures before general catch-all fixtures:
01-specific-user.yaml
02-domain-specific.yaml
99-catch-all.yaml
2. Use Descriptive Identifiers
identifier: "welcome-new-users" # Good
identifier: "fixture1" # Bad
3. Document with Descriptions
description: "Handles password reset emails with confirmation link"
4. Test Failure Scenarios
behavior:
failure_rate: 0.01 # Test with 1% failure
5. Limit Auto-Replies
Don’t create auto-reply loops:
- Avoid auto-replying to noreply@ addresses
- Check the sender before replying
Troubleshooting
Fixture Not Matching
- Check pattern syntax: Use regex tester (regex101.com)
- Check fixture order: Earlier fixtures may match first
- Enable debug logging: See which fixture matched
- Test with a simple pattern: start with ^.*@example\.com$
Auto-Reply Not Sending
- Verify it is enabled: auto_reply.enabled: true
- Check template syntax: ensure valid template variables
- Check logs: Look for auto-reply errors
Performance Issues
- Simplify regex: Complex patterns slow matching
- Reduce fixtures: Too many fixtures slow evaluation
- Disable storage: set save_to_mailbox: false if not needed
SMTP Examples
This page provides real-world examples of using MockForge SMTP for testing email workflows.
Table of Contents
- Testing User Registration
- Password Reset Flow
- Email Verification
- Newsletter Subscriptions
- CI/CD Integration
- Load Testing
- Multi-Language Applications
Testing User Registration
Scenario
Test that your application sends a welcome email when users register.
Fixture
fixtures/smtp/welcome-email.yaml:
identifier: "welcome-email"
name: "Welcome Email"
description: "Auto-responds to new user registration"
match_criteria:
recipient_pattern: "^[^@]+@example\\.com$"
subject_pattern: "^Welcome"
response:
status_code: 250
message: "Message accepted"
auto_reply:
enabled: true
from: "noreply@example.com"
to: "{{from}}"
subject: "Welcome to Our Platform!"
body: |
Hi there!
Thank you for registering at our platform.
Click here to verify your email:
https://example.com/verify?token={{uuid}}
Best regards,
The Team
storage:
save_to_mailbox: true
Python Test
import smtplib
import requests
from email.message import EmailMessage
def test_user_registration_sends_welcome_email():
# Register a new user
response = requests.post('http://localhost:8080/api/register', json={
'email': 'newuser@example.com',
'password': 'SecurePass123',
'name': 'Test User'
})
assert response.status_code == 201
# Verify email was sent
# (In real scenario, you'd query MockForge's mailbox API)
# For now, manually check logs or implement mailbox checking
def send_test_email():
"""Helper to test fixture directly"""
msg = EmailMessage()
msg['Subject'] = 'Welcome to Our Platform'
msg['From'] = 'system@myapp.com'
msg['To'] = 'newuser@example.com'
msg.set_content('Welcome!')
with smtplib.SMTP('localhost', 1025) as server:
server.send_message(msg)
print("Test email sent!")
if __name__ == "__main__":
send_test_email()
Node.js Test
const nodemailer = require('nodemailer');
const axios = require('axios');
const assert = require('assert');
describe('User Registration', () => {
it('should send welcome email', async () => {
// Configure nodemailer to use MockForge
const transporter = nodemailer.createTransport({
host: 'localhost',
port: 1025,
secure: false,
});
// Register user
const response = await axios.post('http://localhost:8080/api/register', {
email: 'newuser@example.com',
password: 'SecurePass123',
name: 'Test User'
});
assert.strictEqual(response.status, 201);
// Send test email
await transporter.sendMail({
from: 'system@myapp.com',
to: 'newuser@example.com',
subject: 'Welcome to Our Platform',
text: 'Welcome!',
});
// In production, query MockForge mailbox API here
});
});
Password Reset Flow
Scenario
Test password reset email with temporary token.
Fixture
fixtures/smtp/password-reset.yaml:
identifier: "password-reset"
name: "Password Reset"
match_criteria:
recipient_pattern: "^.*@.*$"
subject_pattern: "^Password Reset"
response:
status_code: 250
message: "Reset email accepted"
auto_reply:
enabled: true
from: "security@example.com"
to: "{{from}}"
subject: "Password Reset Instructions"
body: |
Hello,
You requested a password reset.
Click the link below to reset your password:
https://example.com/reset?token={{uuid}}
This link expires in 1 hour.
If you didn't request this, please ignore this email.
Security Team
storage:
save_to_mailbox: true
export_to_file: "./logs/password-resets"
Python Test
import pytest
import smtplib
from email.message import EmailMessage
def trigger_password_reset(email):
"""Trigger password reset in your application"""
import requests
response = requests.post('http://localhost:8080/api/password-reset',
json={'email': email})
return response.status_code == 200
def test_password_reset_email():
email = 'user@example.com'
# Trigger reset
assert trigger_password_reset(email)
# Verify email sent (check mailbox)
# TODO: Implement mailbox API check
def test_password_reset_invalid_email():
"""Test that invalid email is rejected"""
email = 'bounce-test@example.com' # Configured to fail
# This should fail
assert not trigger_password_reset(email)
Email Verification
Scenario
Test email verification link generation and sending.
Fixture
fixtures/smtp/email-verification.yaml:
identifier: "email-verification"
name: "Email Verification"
match_criteria:
subject_pattern: "^Verify Your Email"
response:
status_code: 250
message: "Verification email sent"
auto_reply:
enabled: true
from: "noreply@example.com"
to: "{{from}}"
subject: "Verify Your Email Address"
body: |
Please verify your email address by clicking below:
https://example.com/verify?email={{to}}&code={{faker.alphanumeric 32}}
This link expires in 24 hours.
storage:
save_to_mailbox: true
Go Test
package main
import (
"net/smtp"
"testing"
)
func TestEmailVerification(t *testing.T) {
// Setup
smtpHost := "localhost:1025"
from := "system@myapp.com"
to := []string{"user@example.com"}
// Create message
message := []byte(
"Subject: Verify Your Email\r\n" +
"\r\n" +
"Please verify your email.\r\n",
)
// Send email
err := smtp.SendMail(smtpHost, nil, from, to, message)
if err != nil {
t.Fatalf("Failed to send email: %v", err)
}
// Verify sent (check mailbox)
// TODO: Implement mailbox check
}
Newsletter Subscriptions
Scenario
Test newsletter subscription confirmation emails.
Fixture
fixtures/smtp/newsletter.yaml:
identifier: "newsletter-subscription"
name: "Newsletter Subscription"
match_criteria:
recipient_pattern: "^newsletter@example\\.com$"
response:
status_code: 250
message: "Subscription received"
auto_reply:
enabled: true
from: "newsletter@example.com"
to: "{{from}}"
subject: "Confirm Your Newsletter Subscription"
body: |
Thanks for subscribing to our newsletter!
Click to confirm: https://example.com/newsletter/confirm?email={{from}}
You'll receive our weekly digest every Monday.
storage:
save_to_mailbox: true
Ruby Test
require 'mail'
require 'minitest/autorun'
class NewsletterTest < Minitest::Test
def setup
Mail.defaults do
delivery_method :smtp,
address: "localhost",
port: 1025
end
end
def test_newsletter_subscription
email = Mail.new do
from 'user@test.com'
to 'newsletter@example.com'
subject 'Subscribe'
body 'Please subscribe me'
end
email.deliver!
# Verify subscription email sent
# TODO: Check MockForge mailbox
end
end
CI/CD Integration
GitHub Actions
.github/workflows/test.yml:
name: Test Email Workflows
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
services:
mockforge:
image: mockforge/mockforge:latest
ports:
- 1025:1025
env:
MOCKFORGE_SMTP_ENABLED: true
MOCKFORGE_SMTP_PORT: 1025
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install dependencies
run: |
pip install -r requirements.txt
- name: Run email tests
env:
SMTP_HOST: localhost
SMTP_PORT: 1025
run: |
pytest tests/test_emails.py -v
GitLab CI
.gitlab-ci.yml:
test:
image: python:3.11
services:
- name: mockforge/mockforge:latest
alias: mockforge
variables:
MOCKFORGE_SMTP_ENABLED: "true"
SMTP_HOST: mockforge
SMTP_PORT: "1025"
script:
- pip install -r requirements.txt
- pytest tests/test_emails.py
Docker Compose
docker-compose.test.yml:
version: '3.8'
services:
mockforge:
image: mockforge/mockforge:latest
ports:
- "1025:1025"
environment:
MOCKFORGE_SMTP_ENABLED: "true"
MOCKFORGE_SMTP_PORT: 1025
volumes:
- ./fixtures:/fixtures
app:
build: .
depends_on:
- mockforge
environment:
SMTP_HOST: mockforge
SMTP_PORT: 1025
command: pytest tests/
Load Testing
Scenario
Test application performance with high email volume.
Python Load Test
import concurrent.futures
import smtplib
from email.message import EmailMessage
import time
def send_email(index):
"""Send a single email"""
msg = EmailMessage()
msg['Subject'] = f'Load Test Email {index}'
msg['From'] = f'loadtest{index}@test.com'
msg['To'] = 'recipient@example.com'
msg.set_content(f'This is load test email #{index}')
try:
with smtplib.SMTP('localhost', 1025, timeout=5) as server:
server.send_message(msg)
return True
except Exception as e:
print(f"Error sending email {index}: {e}")
return False
def load_test(num_emails=1000, num_workers=10):
"""Send many emails concurrently"""
print(f"Starting load test: {num_emails} emails with {num_workers} workers")
start_time = time.time()
with concurrent.futures.ThreadPoolExecutor(max_workers=num_workers) as executor:
results = list(executor.map(send_email, range(num_emails)))
end_time = time.time()
duration = end_time - start_time
success_count = sum(results)
emails_per_second = num_emails / duration
print(f"\nResults:")
print(f" Total emails: {num_emails}")
print(f" Successful: {success_count}")
print(f" Failed: {num_emails - success_count}")
print(f" Duration: {duration:.2f}s")
print(f" Throughput: {emails_per_second:.2f} emails/sec")
if __name__ == "__main__":
load_test(num_emails=1000, num_workers=20)
Configuration for Load Testing
smtp:
enabled: true
port: 1025
host: "0.0.0.0"
max_connections: 500
max_mailbox_messages: 10000
timeout_secs: 60
Multi-Language Applications
Scenario
Test internationalized email content.
Fixture with Template
fixtures/smtp/i18n-welcome.yaml:
identifier: "i18n-welcome"
name: "Internationalized Welcome"
match_criteria:
recipient_pattern: "^[^@]+@example\\.com$"
subject_pattern: "^Welcome|Bienvenue|Willkommen"
response:
status_code: 250
message: "Message accepted"
auto_reply:
enabled: false # Handle in application
storage:
save_to_mailbox: true
Python Multi-Language Test
import smtplib
from email.message import EmailMessage
from email.mime.text import MIMEText
def send_welcome_email(recipient, language='en'):
"""Send welcome email in specified language"""
subjects = {
'en': 'Welcome to Our Platform',
'fr': 'Bienvenue sur notre plateforme',
'de': 'Willkommen auf unserer Plattform',
'es': 'Bienvenido a nuestra plataforma'
}
bodies = {
'en': 'Welcome! Thank you for registering.',
'fr': 'Bienvenue! Merci de vous être inscrit.',
'de': 'Willkommen! Danke für Ihre Registrierung.',
'es': '¡Bienvenido! Gracias por registrarse.'
}
msg = EmailMessage()
msg['Subject'] = subjects.get(language, subjects['en'])
msg['From'] = 'noreply@example.com'
msg['To'] = recipient
msg['Content-Language'] = language
msg.set_content(bodies.get(language, bodies['en']))
with smtplib.SMTP('localhost', 1025) as server:
server.send_message(msg)
def test_multi_language_emails():
"""Test emails in multiple languages"""
languages = ['en', 'fr', 'de', 'es']
for lang in languages:
send_welcome_email(f'user-{lang}@example.com', lang)
print(f"Sent {lang} email")
if __name__ == "__main__":
test_multi_language_emails()
Testing Email Bounces
Scenario
Test application handling of bounced emails.
Fixture
fixtures/smtp/bounce-test.yaml:
identifier: "bounce-simulation"
name: "Bounce Simulation"
match_criteria:
recipient_pattern: "^bounce@example\\.com$"
response:
status_code: 550
message: "Mailbox unavailable"
storage:
save_to_mailbox: false
behavior:
failure_rate: 1.0 # Always fail
Test
import smtplib
from email.message import EmailMessage
def test_bounce_handling():
"""Test that application handles bounces correctly"""
msg = EmailMessage()
msg['Subject'] = 'Test Bounce'
msg['From'] = 'sender@test.com'
msg['To'] = 'bounce@example.com'
msg.set_content('This should bounce')
try:
with smtplib.SMTP('localhost', 1025) as server:
server.send_message(msg)
assert False, "Expected SMTPRecipientsRefused"
except smtplib.SMTPRecipientsRefused as e:
# Expected behavior
print(f"Bounce handled correctly: {e}")
assert '550' in str(e)
Integration with Testing Frameworks
pytest Fixture
import pytest
import smtplib
from email.message import EmailMessage
@pytest.fixture
def smtp_client():
"""Provides SMTP client connected to MockForge"""
return smtplib.SMTP('localhost', 1025)
@pytest.fixture
def email_factory():
"""Factory for creating test emails"""
def _create_email(to, subject="Test", body="Test body"):
msg = EmailMessage()
msg['Subject'] = subject
msg['From'] = 'test@test.com'
msg['To'] = to
msg.set_content(body)
return msg
return _create_email
def test_with_fixtures(smtp_client, email_factory):
"""Test using pytest fixtures"""
email = email_factory('user@example.com', subject='Welcome')
smtp_client.send_message(email)
# Verify email sent
unittest Helper
import unittest
import smtplib
from email.message import EmailMessage
class EmailTestCase(unittest.TestCase):
@classmethod
def setUpClass(cls):
"""Set up SMTP connection for all tests"""
cls.smtp_host = 'localhost'
cls.smtp_port = 1025
    def send_test_email(self, to, subject, body):
        """Helper method to send test emails"""
        msg = EmailMessage()
        msg['Subject'] = subject
        msg['From'] = 'test@test.com'
        msg['To'] = to
        msg.set_content(body)
        with smtplib.SMTP(self.smtp_host, self.smtp_port) as server:
            server.send_message(msg)
def test_email_sending(self):
self.send_test_email('test@example.com', 'Test', 'Body')
# Verify
Best Practices
- Use dedicated fixtures for each test scenario
- Clean mailbox between test runs
- Test both success and failure scenarios
- Verify email content, not just delivery
- Use realistic delays in load tests
- Test internationalization early
- Mock external dependencies completely
Troubleshooting
Emails Not Received
Check:
- SMTP server is running
- Correct port (1025)
- Fixture patterns match
- Mailbox not full
Slow Tests
Optimize:
- Reduce delay_ms in fixtures
- Disable save_to_mailbox if not needed
- Use concurrent connections in load tests
Fixture Not Matching
Debug:
- Enable debug logging
- Simplify regex patterns
- Test patterns with regex101.com
- Check fixture load order
Getting Started with FTP Mocking
MockForge provides comprehensive FTP server mocking capabilities, allowing you to simulate FTP file transfers for testing and development purposes.
Quick Start
Starting an FTP Server
# Start a basic FTP server on port 2121
mockforge ftp serve --port 2121
# Start with custom configuration
mockforge ftp serve --host 0.0.0.0 --port 2121 --virtual-root /ftp
Connecting with an FTP Client
Once the server is running, you can connect using any FTP client:
# Using lftp
lftp ftp://localhost:2121
# Using curl
curl ftp://localhost:2121/
# Using FileZilla or other GUI clients
# Host: localhost
# Port: 2121
# Username: (leave blank for anonymous)
# Password: (leave blank)
Basic Concepts
Virtual File System
MockForge FTP uses an in-memory virtual file system that supports:
- Static files: Pre-defined content
- Template files: Dynamic content generation using Handlebars
- Generated files: Synthetic content (random, zeros, patterns)
- Upload handling: Configurable validation and storage rules
File Content Types
Static Content
# Add a static file
mockforge ftp vfs add /hello.txt --content "Hello, World!"
Template Content
# Add a template file with dynamic content
mockforge ftp vfs add /user.json --template '{"name": "{{faker.name}}", "id": "{{uuid}}", "timestamp": "{{now}}"}'
Generated Content
# Add a file with random content
mockforge ftp vfs add /random.bin --generate random --size 1024
# Add a file filled with zeros
mockforge ftp vfs add /zeros.bin --generate zeros --size 1024
FTP Commands Supported
MockForge supports standard FTP commands:
- LIST - Directory listing
- RETR - Download files
- STOR - Upload files
- DELE - Delete files
- PWD - Print working directory
- SIZE - Get file size
- CWD - Change directory (limited support)
Example Session
$ mockforge ftp serve --port 2121 &
$ lftp localhost:2121
lftp localhost:2121:~> ls
-rw-r--r-- 1 mockforge ftp 0 Jan 01 00:00 test.txt
lftp localhost:2121:~> put localfile.txt
lftp localhost:2121:~> get test.txt
lftp localhost:2121:~> quit
Next Steps
- Configuration - Advanced server configuration
- Fixtures - Pre-configured file structures
- Examples - Complete usage examples
FTP Server Configuration
MockForge FTP servers can be configured through command-line options or configuration files.
Command Line Options
Server Options
mockforge ftp serve [OPTIONS]
| Option | Description | Default |
|---|---|---|
| --port <PORT> | FTP server port | 2121 |
| --host <HOST> | FTP server host | 127.0.0.1 |
| --virtual-root <PATH> | Virtual file system root path | / |
| --config <FILE> | Configuration file path | - |
Examples
# Basic server
mockforge ftp serve
# Custom port and host
mockforge ftp serve --port 2122 --host 0.0.0.0
# With configuration file
mockforge ftp serve --config ftp-config.yaml
Configuration File
FTP servers can be configured using a YAML configuration file:
ftp:
host: "127.0.0.1"
port: 2121
virtual_root: "/"
fixtures:
- name: "sample_files"
description: "Sample files for testing"
virtual_files:
- path: "/welcome.txt"
content:
type: "static"
content: "Welcome to MockForge FTP!"
permissions: "644"
owner: "ftp"
group: "ftp"
upload_rules:
- path_pattern: "/uploads/.*"
auto_accept: true
max_size_bytes: 1048576 # 1MB
allowed_extensions: ["txt", "json", "xml"]
storage:
type: "memory"
Virtual File System Configuration
File Content Types
Static Content
content:
type: "static"
content: "Hello, World!"
Template Content
content:
type: "template"
template: '{"user": "{{faker.name}}", "id": "{{uuid}}", "time": "{{now}}"}'
Generated Content
content:
type: "generated"
size: 1024
pattern: "random" # random, zeros, ones, incremental
Upload Rules
Upload rules control how files are accepted and stored:
upload_rules:
- path_pattern: "/uploads/.*" # Regex pattern
auto_accept: true # Auto-accept uploads
max_size_bytes: 1048576 # Maximum file size
allowed_extensions: # Allowed file extensions
- "txt"
- "json"
storage: # Storage backend
type: "memory" # memory, file, discard
Storage Options
Memory Storage
Files are stored in memory (default):
storage:
type: "memory"
File Storage
Files are written to the local filesystem:
storage:
type: "file"
path: "/tmp/uploads"
Discard Storage
Files are accepted but not stored:
storage:
type: "discard"
Template Variables
When using template content, the following variables are available:
Timestamps
- {{now}} - Current timestamp in RFC 3339 format
- {{timestamp}} - Unix timestamp (seconds)
- {{date}} - Current date (YYYY-MM-DD)
- {{time}} - Current time (HH:MM:SS)
Random Values
- {{random_int}} - Random 64-bit integer
- {{random_float}} - Random float (0.0-1.0)
- {{uuid}} - Random UUID v4
Sample Data
- {{faker.name}} - Random name
- {{faker.email}} - Random email address
- {{faker.age}} - Random age (18-80)
Example Templates
# JSON response with dynamic data
content:
type: "template"
template: |
{
"id": "{{uuid}}",
"name": "{{faker.name}}",
"email": "{{faker.email}}",
"created_at": "{{now}}",
"age": {{faker.age}}
}
# Log file with timestamps
content:
type: "template"
template: "[{{timestamp}}] INFO: Application started at {{time}}"
Passive Mode Configuration
FTP passive mode uses dynamic port ranges. The server automatically configures passive ports in the range 49152-65535.
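Most clients negotiate passive mode automatically; in Python’s ftplib it is the default and can be toggled explicitly:
import ftplib
ftp = ftplib.FTP()
ftp.connect('localhost', 2121)
ftp.login('anonymous', '')
ftp.set_pasv(True)  # passive mode (ftplib's default); data connections use the 49152-65535 range
print(ftp.nlst())
ftp.quit()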
Authentication
Currently, MockForge FTP servers support anonymous access only; authentication may be added in future versions.
Performance Tuning
Memory Usage
- Virtual file system stores all files in memory
- Large files or many files may consume significant memory
- Consider using file-based storage for large uploads
Connection Limits
- No built-in connection limits
- Consider system ulimits for production use
Timeouts
- No configurable timeouts
- Uses libunftp defaults
FTP Fixtures
FTP fixtures allow you to pre-configure file structures and upload rules for your mock FTP server.
Fixture Structure
Fixtures are defined in YAML format and contain:
- Virtual files: Pre-defined files in the virtual file system
- Upload rules: Rules for accepting and handling file uploads
Basic Fixture Example
fixtures:
- name: "sample_files"
description: "Sample files for testing FTP clients"
virtual_files:
- path: "/welcome.txt"
content:
type: "static"
content: "Welcome to MockForge FTP Server!"
permissions: "644"
owner: "ftp"
group: "ftp"
- path: "/data.json"
content:
type: "template"
template: '{"timestamp": "{{now}}", "server": "mockforge"}'
permissions: "644"
owner: "ftp"
group: "ftp"
upload_rules:
- path_pattern: "/uploads/.*"
auto_accept: true
max_size_bytes: 1048576
allowed_extensions: ["txt", "json", "xml"]
storage:
type: "memory"
Virtual Files
Static Content Files
virtual_files:
- path: "/readme.txt"
content:
type: "static"
content: |
This is a mock FTP server.
You can upload files to the /uploads directory.
permissions: "644"
owner: "ftp"
group: "ftp"
Template Files
virtual_files:
- path: "/status.json"
content:
type: "template"
template: |
{
"server": "MockForge FTP",
"version": "1.0.0",
"uptime": "{{timestamp}}",
"status": "running"
}
permissions: "644"
owner: "ftp"
group: "ftp"
Generated Content Files
virtual_files:
- path: "/random.bin"
content:
type: "generated"
size: 1024
pattern: "random"
permissions: "644"
owner: "ftp"
group: "ftp"
Upload Rules
Upload rules control how the server handles file uploads.
Basic Upload Rule
upload_rules:
- path_pattern: "/uploads/.*"
auto_accept: true
storage:
type: "memory"
Advanced Upload Rule
upload_rules:
- path_pattern: "/documents/.*"
auto_accept: true
validation:
max_size_bytes: 5242880 # 5MB
allowed_extensions: ["pdf", "doc", "docx", "txt"]
mime_types: ["application/pdf", "application/msword"]
storage:
type: "file"
path: "/tmp/uploads"
Validation Options
File Size Limits
validation:
max_size_bytes: 1048576 # 1MB limit
File Extensions
validation:
allowed_extensions: ["jpg", "png", "gif"]
MIME Types
validation:
mime_types: ["image/jpeg", "image/png"]
Storage Backends
Memory Storage
Files are stored in memory (default):
storage:
type: "memory"
File Storage
Files are written to disk:
storage:
type: "file"
path: "/var/ftp/uploads"
Discard Storage
Files are accepted but not stored:
storage:
type: "discard"
Loading Fixtures
From Configuration File
mockforge ftp serve --config ftp-config.yaml
From Directory
mockforge ftp fixtures load ./fixtures/ftp/
Validate Fixtures
mockforge ftp fixtures validate fixture.yaml
Example Complete Fixture
fixtures:
- name: "test_environment"
description: "Complete test environment with various file types"
virtual_files:
# Static files
- path: "/readme.txt"
content:
type: "static"
content: "FTP Test Server - Upload files to /uploads/"
permissions: "644"
owner: "ftp"
group: "ftp"
# Template files
- path: "/server-info.json"
content:
type: "template"
template: |
{
"server": "MockForge FTP",
"started_at": "{{now}}",
"session_id": "{{uuid}}"
}
permissions: "644"
owner: "ftp"
group: "ftp"
# Generated files
- path: "/test-data.bin"
content:
type: "generated"
size: 4096
pattern: "random"
permissions: "644"
owner: "ftp"
group: "ftp"
upload_rules:
# General uploads
- path_pattern: "/uploads/.*"
auto_accept: true
validation:
max_size_bytes: 10485760 # 10MB
storage:
type: "memory"
# Image uploads
- path_pattern: "/images/.*"
auto_accept: true
validation:
max_size_bytes: 5242880 # 5MB
allowed_extensions: ["jpg", "jpeg", "png", "gif"]
mime_types: ["image/jpeg", "image/png", "image/gif"]
storage:
type: "file"
path: "/tmp/images"
# Log files (discard)
- path_pattern: "/logs/.*"
auto_accept: true
storage:
type: "discard"
CLI Management
List Fixtures
mockforge ftp fixtures list
Load Fixtures
# Load from directory
mockforge ftp fixtures load ./fixtures/
# Load specific file
mockforge ftp fixtures load fixture.yaml
Validate Fixtures
mockforge ftp fixtures validate fixture.yaml
Virtual File System Management
Add Files
# Static content
mockforge ftp vfs add /hello.txt --content "Hello World"
# Template content
mockforge ftp vfs add /user.json --template '{"name": "{{faker.name}}"}'
# Generated content
mockforge ftp vfs add /data.bin --generate random --size 1024
List Files
mockforge ftp vfs list /
Remove Files
mockforge ftp vfs remove /old-file.txt
Get File Info
mockforge ftp vfs info /hello.txt
FTP Examples
This section provides complete examples of using MockForge FTP for various testing scenarios.
Basic FTP Server
Starting a Simple Server
# Start FTP server on default port 2121
mockforge ftp serve
# Start on custom port
mockforge ftp serve --port 2122
# Start with custom host
mockforge ftp serve --host 0.0.0.0 --port 2121
Connecting with FTP Clients
Using lftp
# Connect to the server
lftp localhost:2121
# List files
lftp localhost:2121:~> ls
# Download a file
lftp localhost:2121:~> get test.txt
# Upload a file
lftp localhost:2121:~> put localfile.txt
# Exit
lftp localhost:2121:~> quit
Using curl
# List directory
curl ftp://localhost:2121/
# Download file
curl ftp://localhost:2121/test.txt -o downloaded.txt
# Upload file
curl -T localfile.txt ftp://localhost:2121/
Using Python
import ftplib
# Connect to FTP server
ftp = ftplib.FTP()
ftp.connect('localhost', 2121)  # the mock server listens on 2121, not the default port 21
ftp.login('anonymous', '')
# List files
files = ftp.nlst()
print("Files:", files)
# Download file
with open('downloaded.txt', 'wb') as f:
ftp.retrbinary('RETR test.txt', f.write)
# Upload file
with open('localfile.txt', 'rb') as f:
ftp.storbinary('STOR uploaded.txt', f)
ftp.quit()
File Management Examples
Adding Static Files
# Add a simple text file
mockforge ftp vfs add /hello.txt --content "Hello, FTP World!"
# Add a JSON file
mockforge ftp vfs add /config.json --content '{"server": "mockforge", "port": 2121}'
# Add a larger file
echo "This is a test file with multiple lines." > test.txt
mockforge ftp vfs add /multiline.txt --content "$(cat test.txt)"
Adding Template Files
# Add a dynamic JSON response
mockforge ftp vfs add /user.json --template '{"id": "{{uuid}}", "name": "{{faker.name}}", "created": "{{now}}"}'
# Add a log file with timestamps
mockforge ftp vfs add /server.log --template '[{{timestamp}}] Server started at {{time}}'
# Add a status file
mockforge ftp vfs add /status.xml --template '<?xml version="1.0"?><status><server>MockForge</server><time>{{now}}</time></status>'
Adding Generated Files
# Add a random binary file (1KB)
mockforge ftp vfs add /random.bin --generate random --size 1024
# Add a file filled with zeros (512 bytes)
mockforge ftp vfs add /zeros.dat --generate zeros --size 512
# Add an incremental pattern file
mockforge ftp vfs add /pattern.bin --generate incremental --size 256
Managing Files
# List all files
mockforge ftp vfs list /
# Get file information
mockforge ftp vfs info /hello.txt
# Remove a file
mockforge ftp vfs remove /old-file.txt
Configuration Examples
Basic Configuration File
# ftp-config.yaml
ftp:
host: "127.0.0.1"
port: 2121
virtual_root: "/"
fixtures:
- name: "basic_files"
description: "Basic test files"
virtual_files:
- path: "/readme.txt"
content:
type: "static"
content: "Welcome to MockForge FTP Server"
permissions: "644"
owner: "ftp"
group: "ftp"
upload_rules:
- path_pattern: "/uploads/.*"
auto_accept: true
storage:
type: "memory"
Advanced Configuration
# advanced-ftp-config.yaml
ftp:
host: "0.0.0.0"
port: 2121
virtual_root: "/ftp"
fixtures:
- name: "api_test_files"
description: "Files for API testing"
virtual_files:
# Static files
- path: "/api/v1/users"
content:
type: "static"
content: '[{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]'
permissions: "644"
owner: "api"
group: "users"
# Template files
- path: "/api/v1/status"
content:
type: "template"
template: '{"status": "ok", "timestamp": "{{now}}", "version": "1.0.0"}'
permissions: "644"
owner: "api"
group: "system"
# Generated test data
- path: "/test/data.bin"
content:
type: "generated"
size: 1048576 # 1MB
pattern: "random"
permissions: "644"
owner: "test"
group: "data"
upload_rules:
# General uploads
- path_pattern: "/uploads/.*"
auto_accept: true
validation:
max_size_bytes: 10485760 # 10MB
storage:
type: "memory"
# Image uploads
- path_pattern: "/images/.*"
auto_accept: true
validation:
max_size_bytes: 5242880 # 5MB
allowed_extensions: ["jpg", "png", "gif"]
storage:
type: "file"
path: "/tmp/ftp/images"
# Log files (accepted but discarded)
- path_pattern: "/logs/.*"
auto_accept: true
storage:
type: "discard"
Testing Scenarios
File Upload Testing
# Start server with upload configuration
mockforge ftp serve --config upload-config.yaml
# Test file upload with curl
echo "Test file content" > test.txt
curl -T test.txt ftp://localhost:2121/uploads/
# Test large file upload
dd if=/dev/zero of=large.bin bs=1M count=5
curl -T large.bin ftp://localhost:2121/uploads/
# Test invalid file type
echo "invalid content" > invalid.exe
curl -T invalid.exe ftp://localhost:2121/uploads/ # Should fail
Load Testing
# Start server
mockforge ftp serve --port 2121 &
# Simple load test with parallel uploads
for i in {1..10}; do
echo "File $i content" > "file$i.txt"
curl -T "file$i.txt" "ftp://localhost:2121/uploads/file$i.txt" &
done
wait
Integration Testing
With pytest
# test_ftp_integration.py
import ftplib
import pytest
import tempfile
import os
class TestFTPIntegration:
@pytest.fixture(scope="class")
def ftp_client(self):
# Connect to MockForge FTP server
        ftp = ftplib.FTP()
        ftp.connect('localhost', 2121)  # connect on MockForge's port, not the default 21
        ftp.login('anonymous', '')
yield ftp
ftp.quit()
def test_list_files(self, ftp_client):
files = ftp_client.nlst()
        assert isinstance(files, list)  # listing succeeds even for an empty directory
def test_download_file(self, ftp_client):
# Assuming server has a test file
with tempfile.NamedTemporaryFile(delete=False) as tmp:
try:
ftp_client.retrbinary('RETR test.txt', tmp.write)
assert os.path.getsize(tmp.name) > 0
finally:
os.unlink(tmp.name)
def test_upload_file(self, ftp_client):
# Create test file
with tempfile.NamedTemporaryFile(mode='w', delete=False) as tmp:
tmp.write("Test upload content")
tmp_path = tmp.name
try:
# Upload file
with open(tmp_path, 'rb') as f:
ftp_client.storbinary('STOR uploaded.txt', f)
# Verify upload (if server supports listing uploads)
files = ftp_client.nlst()
assert 'uploaded.txt' in [os.path.basename(f) for f in files]
finally:
os.unlink(tmp_path)
With Java
// FtpIntegrationTest.java
import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;
import org.junit.jupiter.api.*;
import java.io.*;
class FtpIntegrationTest {
private FTPClient ftpClient;
@BeforeEach
void setup() throws IOException {
ftpClient = new FTPClient();
ftpClient.connect("localhost", 2121);
ftpClient.login("anonymous", "");
ftpClient.enterLocalPassiveMode();
}
@AfterEach
void teardown() throws IOException {
if (ftpClient.isConnected()) {
ftpClient.disconnect();
}
}
@Test
void testFileDownload() throws IOException {
// Download a file
File tempFile = File.createTempFile("downloaded", ".txt");
try (FileOutputStream fos = new FileOutputStream(tempFile)) {
boolean success = ftpClient.retrieveFile("test.txt", fos);
Assertions.assertTrue(success, "File download should succeed");
Assertions.assertTrue(tempFile.length() > 0, "Downloaded file should not be empty");
} finally {
tempFile.delete();
}
}
@Test
void testFileUpload() throws IOException {
// Create test file
File tempFile = File.createTempFile("upload", ".txt");
try (FileWriter writer = new FileWriter(tempFile)) {
writer.write("Test upload content");
}
// Upload file
try (FileInputStream fis = new FileInputStream(tempFile)) {
boolean success = ftpClient.storeFile("uploaded.txt", fis);
Assertions.assertTrue(success, "File upload should succeed");
} finally {
tempFile.delete();
}
}
@Test
void testDirectoryListing() throws IOException {
FTPFile[] files = ftpClient.listFiles();
Assertions.assertNotNull(files, "Directory listing should not be null");
// Additional assertions based on expected files
}
}
Docker Integration
Running in Docker
# Dockerfile
FROM mockforge:latest
# Copy FTP configuration
COPY ftp-config.yaml /app/config/
# Expose FTP port
EXPOSE 2121
# Start FTP server
CMD ["mockforge", "ftp", "serve", "--config", "/app/config/ftp-config.yaml"]
# Build and run
docker build -t mockforge-ftp .
docker run -p 2121:2121 mockforge-ftp
Docker Compose
# docker-compose.yml
version: '3.8'
services:
ftp-server:
image: mockforge:latest
command: ["mockforge", "ftp", "serve", "--host", "0.0.0.0"]
ports:
- "2121:2121"
volumes:
- ./ftp-config.yaml:/app/config/ftp-config.yaml
- ./uploads:/tmp/uploads
environment:
- RUST_LOG=info
CI/CD Integration
GitHub Actions Example
# .github/workflows/ftp-test.yml
name: FTP Integration Tests
on: [push, pull_request]
jobs:
ftp-test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Rust
uses: actions-rust-lang/setup-rust-toolchain@v1
- name: Build MockForge
run: cargo build --release
- name: Start FTP Server
run: |
./target/release/mockforge ftp serve --port 2121 &
sleep 2
- name: Run FTP Tests
run: |
# Test with lftp
sudo apt-get update && sudo apt-get install -y lftp
echo "Test file content" > test.txt
lftp -c "open localhost:2121; put test.txt; ls; get test.txt -o downloaded.txt; quit"
# Verify files
test -f downloaded.txt
grep -q "Test file content" downloaded.txt
Jenkins Pipeline
// Jenkinsfile
pipeline {
agent any
stages {
stage('FTP Integration Test') {
steps {
sh 'cargo build --release'
// Start FTP server in background
sh './target/release/mockforge ftp serve --port 2121 &'
sh 'sleep 3'
// Run tests
sh '''
# Install FTP client
apt-get update && apt-get install -y lftp
# Create test file
echo "Integration test content" > test.txt
# Test FTP operations
lftp -c "
open localhost:2121
put test.txt
ls
get test.txt -o downloaded.txt
quit
"
# Verify
grep -q "Integration test content" downloaded.txt
'''
}
}
}
}
Troubleshooting
Common Issues
Connection Refused
# Check if server is running
netstat -tlnp | grep 2121
# Check server logs
mockforge ftp serve --port 2121 2>&1
Passive Mode Issues
# FTP clients may need passive mode
curl --ftp-pasv ftp://localhost:2121/
File Permission Issues
# Check file permissions in VFS
mockforge ftp vfs info /problematic-file.txt
# Check upload rules
mockforge ftp fixtures validate config.yaml
Memory Issues
# Monitor memory usage
ps aux | grep mockforge
# Use file storage for large files
# Configure storage type in upload rules
This rounds out MockForge's FTP support: the server provides comprehensive FTP mocking with virtual file systems, template rendering, and configurable upload handling.
HTTP Mocking
MockForge provides comprehensive HTTP API mocking capabilities with OpenAPI specification support, dynamic response generation, and advanced request matching. This guide covers everything you need to create realistic REST API mocks.
OpenAPI Integration
MockForge uses OpenAPI (formerly Swagger) specifications as the foundation for HTTP API mocking. This industry-standard approach ensures your mocks accurately reflect real API contracts.
Loading OpenAPI Specifications
# Load from JSON file
mockforge serve --spec api-spec.json --http-port 3000
# Load from YAML file
mockforge serve --spec api-spec.yaml --http-port 3000
# Load from URL
mockforge serve --spec https://api.example.com/openapi.json --http-port 3000
OpenAPI Specification Structure
MockForge supports OpenAPI 3.0+ specifications with the following key components:
- Paths: API endpoint definitions
- Methods: HTTP verbs (GET, POST, PUT, DELETE, PATCH)
- Parameters: Path, query, and header parameters
- Request Bodies: JSON/XML payload schemas
- Responses: Status codes and response schemas
- Components: Reusable schemas and examples
Example OpenAPI Specification
openapi: 3.0.3
info:
title: User Management API
version: 1.0.0
paths:
/users:
get:
summary: List users
parameters:
- name: limit
in: query
schema:
type: integer
default: 10
responses:
'200':
description: Successful response
content:
application/json:
schema:
type: array
items:
$ref: '#/components/schemas/User'
post:
summary: Create user
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/UserInput'
responses:
'201':
description: User created
content:
application/json:
schema:
$ref: '#/components/schemas/User'
/users/{id}:
get:
summary: Get user by ID
parameters:
- name: id
in: path
required: true
schema:
type: string
responses:
'200':
description: User found
content:
application/json:
schema:
$ref: '#/components/schemas/User'
'404':
description: User not found
components:
schemas:
User:
type: object
properties:
id:
type: string
format: uuid
name:
type: string
email:
type: string
format: email
createdAt:
type: string
format: date-time
UserInput:
type: object
required:
- name
- email
properties:
name:
type: string
email:
type: string
Dynamic Response Generation
MockForge generates realistic responses automatically based on your OpenAPI schemas, with support for dynamic data through templates.
Automatic Response Generation
For basic use cases, MockForge can generate responses directly from your OpenAPI schemas:
# Start server with automatic response generation
mockforge serve --spec api-spec.json --http-port 3000
This generates:
- UUIDs for ID fields
- Random data for string/number fields
- Current timestamps for date-time fields
- Valid email addresses for email fields
Template-Enhanced Responses
For more control, use MockForge’s template system in your OpenAPI examples:
paths:
/users:
get:
responses:
'200':
description: List of users
content:
application/json:
example:
users:
- id: "{{uuid}}"
name: "John Doe"
email: "john@example.com"
createdAt: "{{now}}"
lastLogin: "{{now-1d}}"
- id: "{{uuid}}"
name: "Jane Smith"
email: "jane@example.com"
createdAt: "{{now-7d}}"
lastLogin: "{{now-2h}}"
Template Functions
Data Generation Templates
- {{uuid}} - Generate a unique UUID
- {{now}} - Current timestamp
- {{now+1h}} - Future timestamp
- {{now-1d}} - Past timestamp
- {{randInt 1 100}} - Random integer
- {{randFloat 0.0 1.0}} - Random float
Request Data Templates
- {{request.path.id}} - Access path parameters
- {{request.query.limit}} - Access query parameters
- {{request.header.Authorization}} - Access headers
- {{request.body.name}} - Access request body fields
Request Matching and Routing
MockForge uses sophisticated matching to route requests to appropriate responses.
Matching Priority
- Exact Path + Method Match
- Parameterized Path Match (e.g., /users/{id})
- Header-Based Conditions
- Request Body Matching
- Default Response (catch-all)
Path Parameter Handling
/users/{id}:
get:
parameters:
- name: id
in: path
required: true
schema:
type: string
responses:
'200':
content:
application/json:
example:
id: "{{request.path.id}}"
name: "User {{request.path.id}}"
retrievedAt: "{{now}}"
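With that mapping loaded and template expansion enabled, the round trip can be verified from Python (a sketch using the requests library):
import requests
# Requires MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true on the server
resp = requests.get("http://localhost:3000/users/123")
body = resp.json()
assert body["id"] == "123"  # request.path.id echoed back
assert body["name"] == "User 123"
print(body["retrievedAt"])  # server-side {{now}} timestamp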
Query Parameter Filtering
/users:
get:
parameters:
- name: status
in: query
schema:
type: string
enum: [active, inactive]
- name: limit
in: query
schema:
type: integer
default: 10
responses:
'200':
content:
application/json:
example: "{{#if (eq request.query.status 'active')}}active_users{{else}}all_users{{/if}}"
Response Scenarios
MockForge supports multiple response scenarios for testing different conditions.
Success Responses
responses:
'200':
description: Success
content:
application/json:
example:
status: "success"
data: { ... }
Error Responses
responses:
'400':
description: Bad Request
content:
application/json:
example:
error: "INVALID_INPUT"
message: "The provided input is invalid"
'404':
description: Not Found
content:
application/json:
example:
error: "NOT_FOUND"
message: "Resource not found"
'500':
description: Internal Server Error
content:
application/json:
example:
error: "INTERNAL_ERROR"
message: "An unexpected error occurred"
Conditional Responses
Use templates to return different responses based on request data:
responses:
'200':
content:
application/json:
example: |
{{#if (eq request.query.format 'detailed')}}
{
"id": "{{uuid}}",
"name": "Detailed User",
"email": "user@example.com",
"profile": {
"bio": "Detailed user profile",
"preferences": { ... }
}
}
{{else}}
{
"id": "{{uuid}}",
"name": "Basic User",
"email": "user@example.com"
}
{{/if}}
Advanced Features
Response Latency Simulation
# Add random latency (100-500ms)
MOCKFORGE_LATENCY_ENABLED=true \
MOCKFORGE_LATENCY_MIN_MS=100 \
MOCKFORGE_LATENCY_MAX_MS=500 \
mockforge serve --spec api-spec.json
Failure Injection
# Enable random failures (10% chance)
MOCKFORGE_FAILURES_ENABLED=true \
MOCKFORGE_FAILURE_RATE=0.1 \
mockforge serve --spec api-spec.json
Request/Response Recording
# Record all HTTP interactions
MOCKFORGE_RECORD_ENABLED=true \
mockforge serve --spec api-spec.json
Response Replay
# Replay recorded responses
MOCKFORGE_REPLAY_ENABLED=true \
mockforge serve --spec api-spec.json
Testing Your Mocks
Manual Testing with curl
# Test GET endpoint
curl http://localhost:3000/users
# Test POST endpoint
curl -X POST http://localhost:3000/users \
-H "Content-Type: application/json" \
-d '{"name": "Test User", "email": "test@example.com"}'
# Test path parameters
curl http://localhost:3000/users/123
# Test query parameters
curl "http://localhost:3000/users?limit=5&status=active"
# Test error scenarios
curl http://localhost:3000/users/999 # Should return 404
Automated Testing
#!/bin/bash
# test-api.sh
BASE_URL="http://localhost:3000"
echo "Testing User API..."
# Test user creation
USER_RESPONSE=$(curl -s -X POST $BASE_URL/users \
-H "Content-Type: application/json" \
-d '{"name": "Test User", "email": "test@example.com"}')
echo "Created user: $USER_RESPONSE"
# Extract user ID (assuming response contains id)
USER_ID=$(echo $USER_RESPONSE | jq -r '.id')
# Test user retrieval
RETRIEVED_USER=$(curl -s $BASE_URL/users/$USER_ID)
echo "Retrieved user: $RETRIEVED_USER"
# Test user listing
USER_LIST=$(curl -s $BASE_URL/users)
echo "User list: $USER_LIST"
echo "API tests completed!"
Best Practices
OpenAPI Specification Tips
- Use descriptive operation IDs for better organization
- Include examples in your OpenAPI spec for consistent responses
- Define reusable components for common schemas
- Use appropriate HTTP status codes for different scenarios
- Document all parameters clearly
Template Usage Guidelines
- Enable templates only when needed for security
- Use meaningful template variables for maintainability
- Test template expansion thoroughly
- Avoid complex logic in templates - keep it simple
Response Design Principles
- Match real API behavior as closely as possible
- Include appropriate error responses for testing
- Use consistent data formats across endpoints
- Consider pagination for list endpoints
- Include metadata like timestamps and request IDs
Performance Considerations
- Use static responses when dynamic data isn’t needed
- Limit template complexity to maintain response times
- Configure appropriate timeouts for your use case
- Monitor memory usage with large response payloads
Troubleshooting
Common Issues
Templates not expanding: Ensure MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
OpenAPI spec not loading: Check file path and JSON/YAML syntax
Wrong response returned: Verify request matching rules and parameter handling
Performance issues: Reduce template complexity or use static responses
Port conflicts: Change default ports with --http-port option
Advanced Behavior and Simulation
MockForge supports advanced behavior simulation features for realistic API testing:
Record & Playback
Automatically record API interactions and convert them to replayable fixtures:
# Record requests while proxying
mockforge serve --spec api-spec.json --proxy --record
# Convert recordings to stub mappings
mockforge recorder convert --input recordings.db --output fixtures/
Stateful Behavior
Simulate stateful APIs where responses change based on previous requests:
core:
stateful:
enabled: true
state_machines:
- name: "order_workflow"
resource_id_extract:
type: "path_param"
param: "order_id"
transitions:
- method: "POST"
path_pattern: "/api/orders"
from_state: "initial"
to_state: "pending"
Per-Route Fault Injection
Configure fault injection on specific routes:
core:
routes:
- path: "/api/payments/process"
method: "POST"
fault_injection:
enabled: true
probability: 0.05
fault_types:
- type: "http_error"
status_code: 503
Per-Route Latency
Simulate network conditions per route:
core:
routes:
- path: "/api/search"
method: "GET"
latency:
enabled: true
distribution: "normal"
mean_ms: 500.0
std_dev_ms: 100.0
Conditional Proxying
Proxy requests conditionally based on request attributes:
core:
proxy:
rules:
- pattern: "/api/admin/*"
upstream_url: "https://admin-api.example.com"
condition: "$.user.role == 'admin'"
For detailed documentation on these features, see Advanced Behavior and Simulation.
For more advanced HTTP mocking features, see the following guides:
- OpenAPI Integration - Advanced OpenAPI features
- Custom Responses - Complex response scenarios
- Dynamic Data - Advanced templating techniques
OpenAPI Integration
MockForge provides advanced OpenAPI integration capabilities beyond basic spec loading and response generation. This guide covers sophisticated features for enterprise-grade API mocking.
Advanced Request Validation
MockForge supports comprehensive request validation against OpenAPI schemas with multiple validation modes and granular control.
Validation Modes
# Disable validation completely
MOCKFORGE_REQUEST_VALIDATION=off mockforge serve --spec api-spec.json
# Log warnings but allow invalid requests
MOCKFORGE_REQUEST_VALIDATION=warn mockforge serve --spec api-spec.json
# Reject invalid requests (default)
MOCKFORGE_REQUEST_VALIDATION=enforce mockforge serve --spec api-spec.json
Response Validation
Enable validation of generated responses against OpenAPI schemas:
# Validate responses against schemas
MOCKFORGE_RESPONSE_VALIDATION=true mockforge serve --spec api-spec.json
Custom Validation Status Codes
Configure HTTP status codes for validation failures:
# Use 422 Unprocessable Entity for validation errors
MOCKFORGE_VALIDATION_STATUS=422 mockforge serve --spec api-spec.json
Validation Overrides
Skip validation for specific routes:
validation:
mode: enforce
overrides:
"GET /health": "off"
"POST /webhooks/*": "warn"
Aggregated Error Reporting
Control how validation errors are reported:
# Report all validation errors at once
MOCKFORGE_AGGREGATE_ERRORS=true mockforge serve --spec api-spec.json
# Stop at first validation error
MOCKFORGE_AGGREGATE_ERRORS=false mockforge serve --spec api-spec.json
Security Scheme Validation
MockForge validates authentication and authorization requirements defined in your OpenAPI spec.
Supported Security Schemes
- HTTP Basic Authentication: Validates Authorization: Basic <credentials> headers
- Bearer Tokens: Validates Authorization: Bearer <token> headers
- API Keys: Supports header and query parameter API keys
- OAuth2: Basic OAuth2 flow validation
Security Validation Example
openapi: 3.0.0
components:
securitySchemes:
bearerAuth:
type: http
scheme: bearer
apiKey:
type: apiKey
in: header
name: X-API-Key
security:
- bearerAuth: []
- apiKey: []
paths:
/protected:
get:
security:
- bearerAuth: []
# Test with valid Bearer token
curl -H "Authorization: Bearer eyJ0eXAi..." http://localhost:3000/protected
# Test with API key
curl -H "X-API-Key: your-api-key" http://localhost:3000/protected
Schema Resolution and References
MockForge fully supports OpenAPI schema references ($ref) for reusable components.
Component References
components:
schemas:
User:
type: object
properties:
id:
type: string
format: uuid
name:
type: string
profile:
$ref: '#/components/schemas/UserProfile'
UserProfile:
type: object
properties:
bio:
type: string
avatar:
type: string
format: uri
responses:
UserResponse:
description: User data
content:
application/json:
schema:
$ref: '#/components/schemas/User'
paths:
/users/{id}:
get:
responses:
'200':
$ref: '#/components/responses/UserResponse'
Request Body References
components:
requestBodies:
UserCreate:
required: true
content:
application/json:
schema:
type: object
required:
- name
- email
properties:
name:
type: string
email:
type: string
format: email
paths:
/users:
post:
requestBody:
$ref: '#/components/requestBodies/UserCreate'
Multiple OpenAPI Specifications
MockForge can serve multiple OpenAPI specifications simultaneously with path-based routing.
Configuration for Multiple Specs
server:
http_port: 3000
specs:
- name: user-api
path: /api/v1
spec: user-api.json
- name: admin-api
path: /api/admin
spec: admin-api.json
Base Path Routing
# Routes to user-api.json endpoints
curl http://localhost:3000/api/v1/users
# Routes to admin-api.json endpoints
curl http://localhost:3000/api/admin/users
Advanced Routing and Matching
MockForge provides sophisticated request matching beyond simple path/method combinations.
Path Parameter Constraints
paths:
/users/{id}:
get:
parameters:
- name: id
in: path
required: true
schema:
type: string
pattern: '^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$'
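With this pattern in place, only UUID-shaped IDs should match the route. A quick check (how non-matching IDs are handled depends on your validation mode):
# Matches the UUID pattern
curl http://localhost:3000/users/550e8400-e29b-41d4-a716-446655440000
# Does not match the pattern; rejected or unmatched depending on validation mode
curl http://localhost:3000/users/not-a-uuid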
Query Parameter Matching
paths:
/users:
get:
parameters:
- name: status
in: query
schema:
type: string
enum: [active, inactive, pending]
- name: limit
in: query
schema:
type: integer
minimum: 1
maximum: 100
default: 10
Header-Based Routing
paths:
/api/v1/users:
get:
parameters:
- name: X-API-Version
in: header
schema:
type: string
enum: [v1, v2]
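The header-based parameter can be exercised with curl:
# Request the v1 behavior
curl -H "X-API-Version: v1" http://localhost:3000/api/v1/users
# Request the v2 behavior
curl -H "X-API-Version: v2" http://localhost:3000/api/v1/users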
Template Expansion in Responses
Advanced template features for dynamic response generation.
Advanced Template Functions
responses:
'200':
content:
application/json:
example:
id: "{{uuid}}"
createdAt: "{{now}}"
expiresAt: "{{now+1h}}"
lastModified: "{{now-30m}}"
randomValue: "{{randInt 1 100}}"
randomFloat: "{{randFloat 0.0 5.0}}"
userAgent: "{{request.header.User-Agent}}"
apiVersion: "{{request.header.X-API-Version}}"
userId: "{{request.path.id}}"
searchQuery: "{{request.query.q}}"
Conditional Templates
responses:
'200':
content:
application/json:
example: |
{{#if (eq request.query.format 'detailed')}}
{
"id": "{{uuid}}",
"name": "Detailed User",
"profile": {
"bio": "User biography",
"preferences": {}
}
}
{{else}}
{
"id": "{{uuid}}",
"name": "Basic User"
}
{{/if}}
Template Security
Enable template expansion only when needed:
# Enable template expansion
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true mockforge serve --spec api-spec.json
Performance Optimization
Strategies for handling large OpenAPI specifications efficiently.
Lazy Loading
MockForge loads and parses OpenAPI specs on startup but generates routes lazily:
# Monitor startup performance
time mockforge serve --spec large-api.json
Route Caching
Generated routes are cached in memory for optimal performance:
# Check memory usage with large specs
MOCKFORGE_LOG_LEVEL=debug mockforge serve --spec large-api.json
Validation Performance
Disable expensive validations in high-throughput scenarios:
# Disable response validation for better performance
MOCKFORGE_RESPONSE_VALIDATION=false mockforge serve --spec api-spec.json
Custom Validation Options
Fine-tune validation behavior for your specific needs.
Validation Configuration
validation:
mode: enforce
aggregate_errors: true
validate_responses: false
status_code: 422
overrides:
"GET /health": "off"
"POST /webhooks/*": "warn"
admin_skip_prefixes:
- "/admin"
- "/internal"
Environment Variables
# Validation mode
MOCKFORGE_REQUEST_VALIDATION=enforce
# Error aggregation
MOCKFORGE_AGGREGATE_ERRORS=true
# Response validation
MOCKFORGE_RESPONSE_VALIDATION=false
# Custom status code
MOCKFORGE_VALIDATION_STATUS=422
# Template expansion
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
OpenAPI Extensions
MockForge supports OpenAPI extensions (x- prefixed properties) for custom behavior.
Custom Extensions
paths:
/users:
get:
x-mockforge-delay: 1000 # Add 1 second delay
x-mockforge-failure-rate: 0.1 # 10% failure rate
responses:
'200':
x-mockforge-template: true # Enable template expansion
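Assuming the x-mockforge-delay extension above is honored, the added second should be visible in curl's timing output:
# Total time should include the configured 1s delay
curl -s -o /dev/null -w "total: %{time_total}s\n" http://localhost:3000/users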
Vendor Extensions
info:
x-mockforge-config:
enable_cors: true
default_response_format: json
paths:
/api/users:
x-vendor-custom-behavior: enabled
Troubleshooting
Common issues and solutions for advanced OpenAPI integration.
Validation Errors
Problem: Requests are rejected with validation errors
{
"error": "request validation failed",
"status": 422,
"details": [
{
"path": "body.name",
"code": "required",
"message": "Missing required field: name"
}
]
}
Solutions:
# Switch to warning mode
MOCKFORGE_REQUEST_VALIDATION=warn mockforge serve --spec api-spec.json
# Disable validation for specific routes
# Add to config.yaml:
validation:
overrides:
"POST /users": "off"
Schema Reference Issues
Problem: $ref references not resolving correctly
Solutions:
- Ensure component names match exactly
- Check that referenced components exist
- Validate your OpenAPI spec with external tools
Performance Issues
Problem: Slow startup or high memory usage with large specs
Solutions:
# Disable non-essential features
MOCKFORGE_RESPONSE_VALIDATION=false
MOCKFORGE_AGGREGATE_ERRORS=false
# Monitor with debug logging
MOCKFORGE_LOG_LEVEL=debug mockforge serve --spec api-spec.json
Security Validation Failures
Problem: Authentication requests failing
Solutions:
- Verify security scheme definitions
- Check header formats (e.g., the Bearer prefix)
- Ensure global security requirements are met
Template Expansion Issues
Problem: Templates not expanding in responses
Solutions:
# Enable template expansion
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true mockforge serve --spec api-spec.json
# Check template syntax
# Use {{variable}} format, not ${variable}
Best Practices
Specification Management
- Version Control: Keep OpenAPI specs in version control alongside mock configurations
- Validation: Use external validators to ensure spec correctness
- Documentation: Include comprehensive examples and descriptions
- Modularity: Use components and references for maintainable specs
Performance Tuning
- Selective Validation: Disable validation for high-traffic endpoints
- Template Usage: Only enable templates when dynamic data is needed
- Caching: Leverage MockForge’s built-in route caching
- Monitoring: Monitor memory usage and response times
Security Considerations
- Validation Modes: Use appropriate validation levels for different environments
- Template Security: Be cautious with user-controlled template input
- Authentication: Properly configure security schemes for protected endpoints
- Overrides: Use validation overrides judiciously
For basic OpenAPI integration features, see the HTTP Mocking guide. For dynamic data generation, see the Dynamic Data guide.
Custom Responses
MockForge provides multiple powerful ways to create custom HTTP responses beyond basic OpenAPI schema generation. This guide covers advanced response customization techniques including plugins, overrides, and dynamic generation.
Response Override Rules
Override rules allow you to modify OpenAPI-generated responses using JSON patches without changing the original specification.
Basic Override Configuration
# mockforge.yaml
http:
openapi_spec: api-spec.json
response_template_expand: true
# Override specific endpoints
overrides:
- targets: ["path:/users"]
patch:
- op: replace
path: "/responses/200/content/application~1json/example"
value:
users:
- id: "{{uuid}}"
name: "John Doe"
email: "john@example.com"
- id: "{{uuid}}"
name: "Jane Smith"
email: "jane@example.com"
- targets: ["operation:getUser"]
patch:
- op: add
path: "/responses/200/content/application~1json/example/profile"
value:
avatar: "https://example.com/avatar.jpg"
bio: "User biography"
Override Targeting
Target specific operations using different selectors:
overrides:
# By operation ID
- targets: ["operation:listUsers", "operation:createUser"]
patch: [...]
# By path pattern
- targets: ["path:/users/*"]
patch: [...]
# By tag
- targets: ["tag:Users"]
patch: [...]
# By regex
- targets: ["regex:^/api/v[0-9]+/users$"]
patch: [...]
Patch Operations
Supported JSON patch operations:
overrides:
- targets: ["path:/users"]
patch:
# Add new fields
- op: add
path: "/responses/200/content/application~1json/example/metadata"
value:
total: 100
page: 1
# Replace existing values
- op: replace
path: "/responses/200/content/application~1json/example/users/0/name"
value: "Updated Name"
# Remove fields
- op: remove
path: "/responses/200/content/application~1json/example/users/1/email"
# Copy values
- op: copy
from: "/responses/200/content/application~1json/example/users/0/id"
path: "/responses/200/content/application~1json/example/primaryUserId"
# Move values
- op: move
from: "/responses/200/content/application~1json/example/temp"
path: "/responses/200/content/application~1json/example/permanent"
Conditional Overrides
Apply overrides based on request conditions:
overrides:
- targets: ["path:/users"]
when: "request.query.format == 'detailed'"
patch:
- op: add
path: "/responses/200/content/application~1json/example/users/0/profile"
value:
bio: "Detailed user profile"
preferences: {}
- targets: ["path:/users"]
when: "request.header.X-API-Version == 'v2'"
patch:
- op: add
path: "/responses/200/content/application~1json/example/apiVersion"
value: "v2"
Override Modes
Control how patches are applied:
overrides:
# Replace mode (default) - complete replacement
- targets: ["path:/users"]
mode: replace
patch: [...]
# Merge mode - deep merge objects and arrays
- targets: ["path:/users"]
mode: merge
patch:
- op: add
path: "/responses/200/content/application~1json/example"
value:
additionalField: "value"
Response Plugins
Create custom response generation logic using MockForge’s plugin system.
Response Generator Plugin
Implement the ResponsePlugin trait for complete response control:
use mockforge_plugin_core::*;
use std::collections::HashMap;

pub struct CustomResponsePlugin;

#[async_trait::async_trait]
impl ResponsePlugin for CustomResponsePlugin {
    fn capabilities(&self) -> PluginCapabilities {
        PluginCapabilities {
            network: NetworkCapabilities {
                allow_http_outbound: true,
                allowed_hosts: vec!["api.example.com".to_string()],
            },
            filesystem: FilesystemCapabilities::default(),
            resources: PluginResources {
                max_memory_bytes: 50 * 1024 * 1024,
                max_cpu_time_ms: 5000,
            },
            custom: HashMap::new(),
        }
    }

    async fn initialize(&self, _config: &ResponsePluginConfig) -> Result<()> {
        // Plugin initialization
        Ok(())
    }

    async fn can_handle(
        &self,
        _context: &PluginContext,
        request: &ResponseRequest,
        _config: &ResponsePluginConfig,
    ) -> Result<PluginResult<bool>> {
        // Check if this plugin should handle the request
        let should_handle = request.path.starts_with("/api/custom/");
        Ok(PluginResult::success(should_handle, 0))
    }

    async fn generate_response(
        &self,
        _context: &PluginContext,
        request: &ResponseRequest,
        _config: &ResponsePluginConfig,
    ) -> Result<PluginResult<ResponseData>> {
        // Generate custom responses for the paths this plugin owns
        match request.path.as_str() {
            "/api/custom/weather" => {
                let weather_data = serde_json::json!({
                    "temperature": 22,
                    "condition": "sunny",
                    "location": request.query_param("location").unwrap_or("Unknown")
                });
                Ok(PluginResult::success(ResponseData::json(200, &weather_data)?, 0))
            }
            "/api/custom/time" => {
                let time_data = serde_json::json!({
                    "current_time": chrono::Utc::now().to_rfc3339(),
                    "timezone": request.query_param("tz").unwrap_or("UTC")
                });
                Ok(PluginResult::success(ResponseData::json(200, &time_data)?, 0))
            }
            _ => Ok(PluginResult::success(
                ResponseData::not_found("Custom endpoint not found"),
                0,
            )),
        }
    }

    fn priority(&self) -> i32 {
        100
    }

    fn validate_config(&self, _config: &ResponsePluginConfig) -> Result<()> {
        Ok(())
    }

    fn supported_content_types(&self) -> Vec<String> {
        vec!["application/json".to_string()]
    }
}
Plugin Configuration
Configure response plugins in your MockForge setup:
# plugin.yaml
name: custom-response-plugin
version: "1.0.0"
type: response
config:
enabled: true
priority: 100
content_types:
- "application/json"
url_patterns:
- "/api/custom/*"
methods:
- "GET"
- "POST"
settings:
external_api_timeout: 5000
cache_enabled: true
Response Modifier Plugin
Modify responses after generation using the ResponseModifierPlugin trait:
use mockforge_plugin_core::*;

// Named distinctly from the ResponseModifierPlugin trait to avoid a name clash
pub struct CustomResponseModifier;

#[async_trait::async_trait]
impl ResponseModifierPlugin for CustomResponseModifier {
    fn capabilities(&self) -> PluginCapabilities {
        PluginCapabilities::default()
    }

    async fn initialize(&self, _config: &ResponseModifierConfig) -> Result<()> {
        Ok(())
    }

    async fn should_modify(
        &self,
        _context: &PluginContext,
        _request: &ResponseRequest,
        response: &ResponseData,
        _config: &ResponseModifierConfig,
    ) -> Result<PluginResult<bool>> {
        // Only modify successful JSON responses
        let should_modify = response.status_code == 200
            && response.content_type == "application/json";
        Ok(PluginResult::success(should_modify, 0))
    }

    async fn modify_response(
        &self,
        _context: &PluginContext,
        _request: &ResponseRequest,
        mut response: ResponseData,
        _config: &ResponseModifierConfig,
    ) -> Result<PluginResult<ResponseData>> {
        // Add a custom header
        response.headers.insert(
            "X-Custom-Header".to_string(),
            "Modified by plugin".to_string(),
        );

        // Append metadata to JSON response bodies
        if let Some(json_str) = response.body_as_string() {
            if let Ok(mut json_value) = serde_json::from_str::<serde_json::Value>(&json_str) {
                if let Some(obj) = json_value.as_object_mut() {
                    obj.insert("_metadata".to_string(), serde_json::json!({
                        "modified_by": "CustomResponseModifier",
                        "timestamp": chrono::Utc::now().timestamp()
                    }));
                }
                let modified_body = serde_json::to_vec(&json_value)
                    .map_err(|e| PluginError::execution(format!("JSON serialization error: {}", e)))?;
                response.body = modified_body;
            }
        }

        Ok(PluginResult::success(response, 0))
    }

    fn priority(&self) -> i32 {
        50
    }

    fn validate_config(&self, _config: &ResponseModifierConfig) -> Result<()> {
        Ok(())
    }
}
Template Plugins
Extend MockForge’s templating system with custom functions.
Custom Template Functions
use mockforge_plugin_core::*;

pub struct BusinessTemplatePlugin;

impl TemplatePlugin for BusinessTemplatePlugin {
    fn execute_function(
        &mut self,
        function_name: &str,
        _args: &[TemplateArg],
        _context: &PluginContext,
    ) -> PluginResult<String> {
        match function_name {
            "business_id" => {
                // Generate a business-specific ID
                let id = format!("BIZ-{:010}", rand::random::<u32>());
                PluginResult::success(id, 0)
            }
            "department_name" => {
                // Pick a random department name
                let departments = ["Engineering", "Sales", "Marketing", "HR", "Finance"];
                let dept = departments[rand::random::<usize>() % departments.len()];
                PluginResult::success(dept.to_string(), 0)
            }
            "employee_data" => {
                // Generate a complete employee object
                let employee = serde_json::json!({
                    "id": format!("EMP-{:06}", rand::random::<u32>() % 1_000_000),
                    "name": "{{faker.name}}",
                    "department": "{{department_name}}",
                    "salary": rand::random::<u32>() % 50_000 + 50_000,
                    "hire_date": "{{faker.date.past 365}}"
                });
                PluginResult::success(employee.to_string(), 0)
            }
            _ => PluginResult::failure(
                format!("Unknown function: {}", function_name),
                0,
            ),
        }
    }

    fn get_available_functions(&self) -> Vec<TemplateFunction> {
        vec![
            TemplateFunction {
                name: "business_id".to_string(),
                description: "Generate a business ID".to_string(),
                args: vec![],
                return_type: "string".to_string(),
            },
            TemplateFunction {
                name: "department_name".to_string(),
                description: "Generate a department name".to_string(),
                args: vec![],
                return_type: "string".to_string(),
            },
            TemplateFunction {
                name: "employee_data".to_string(),
                description: "Generate complete employee data".to_string(),
                args: vec![],
                return_type: "json".to_string(),
            },
        ]
    }

    fn get_capabilities(&self) -> PluginCapabilities {
        PluginCapabilities::default()
    }

    fn health_check(&self) -> PluginHealth {
        PluginHealth::healthy("Template plugin healthy".to_string(), PluginMetrics::default())
    }
}
Using Custom Templates
# OpenAPI spec with custom templates
paths:
/employees:
get:
responses:
'200':
content:
application/json:
example:
employees:
- "{{employee_data}}"
- "{{employee_data}}"
business_id: "{{business_id}}"
Configuration-Based Custom Responses
Define custom responses directly in configuration files.
Route-Specific Responses
# mockforge.yaml
http:
port: 3000
routes:
- path: /api/custom/dashboard
method: GET
response:
status: 200
headers:
Content-Type: application/json
X-Custom-Header: Dashboard-Data
body: |
{
"widgets": [
{
"id": "sales-chart",
"type": "chart",
"data": [120, 150, 180, 200, 250]
},
{
"id": "user-stats",
"type": "stats",
"data": {
"total_users": 15420,
"active_users": 8920,
"new_signups": 245
}
}
],
"last_updated": "{{now}}"
}
- path: /api/custom/report
method: POST
response:
status: 201
headers:
Location: /api/reports/123
body: |
{
"report_id": "RPT-{{randInt 1000 9999}}",
"status": "processing",
"estimated_completion": "{{now+5m}}"
}
Dynamic Route Matching
routes:
# Path parameters
- path: /api/users/{userId}/profile
method: GET
response:
status: 200
body: |
{
"user_id": "{{request.path.userId}}",
"name": "{{faker.name}}",
"email": "{{faker.email}}",
"profile": {
"bio": "{{faker.sentence}}",
"location": "{{faker.city}}, {{faker.country}}"
}
}
# Query parameter conditions
- path: /api/search
method: GET
response:
status: 200
body: |
{{#if (eq request.query.type 'users')}}
{
"results": [
{"id": 1, "name": "John", "type": "user"},
{"id": 2, "name": "Jane", "type": "user"}
]
}
{{else if (eq request.query.type 'posts')}}
{
"results": [
{"id": 1, "title": "Post 1", "type": "post"},
{"id": 2, "title": "Post 2", "type": "post"}
]
}
{{else}}
{
"results": [],
"message": "No results found for type: {{request.query.type}}"
}
{{/if}}
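Each branch of the conditional can be exercised by varying the type query parameter:
curl "http://localhost:3000/api/search?type=users"    # users branch
curl "http://localhost:3000/api/search?type=posts"    # posts branch
curl "http://localhost:3000/api/search?type=unknown"  # else branch with the fallback message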
Error Response Customization
Create sophisticated error responses for different scenarios.
Structured Error Responses
routes:
- path: /api/users/{userId}
method: GET
response:
status: 404
headers:
Content-Type: application/json
body: |
{
"error": {
"code": "USER_NOT_FOUND",
"message": "User with ID {{request.path.userId}} not found",
"details": {
"user_id": "{{request.path.userId}}",
"requested_at": "{{now}}",
"request_id": "{{uuid}}"
},
"suggestions": [
"Check if the user ID is correct",
"Verify the user exists in the system",
"Try searching by email instead"
]
}
}
- path: /api/orders
method: POST
response:
status: 422
body: |
{
"error": {
"code": "VALIDATION_ERROR",
"message": "Request validation failed",
"validation_errors": [
{
"field": "customer_email",
"code": "invalid_format",
"message": "Email format is invalid"
},
{
"field": "order_items",
"code": "min_items",
"message": "At least one order item is required"
}
]
}
}
Conditional Error Responses
routes:
- path: /api/payments
method: POST
response:
status: 402
condition: "request.header.X-Test-Mode == 'insufficient_funds'"
body: |
{
"error": "INSUFFICIENT_FUNDS",
"message": "Payment failed due to insufficient funds",
"details": {
"available_balance": 50.00,
"requested_amount": 100.00,
"currency": "USD"
}
}
- path: /api/payments
method: POST
response:
status: 500
condition: "request.header.X-Test-Mode == 'server_error'"
body: |
{
"error": "INTERNAL_SERVER_ERROR",
"message": "An unexpected error occurred while processing payment",
"reference_id": "ERR-{{randInt 100000 999999}}",
"timestamp": "{{now}}"
}
Advanced Response Features
Response Delays and Latency
routes:
- path: /api/slow-endpoint
method: GET
response:
status: 200
delay_ms: 2000 # 2 second delay
body: |
{
"message": "This response was delayed",
"timestamp": "{{now}}"
}
- path: /api/variable-delay
method: GET
response:
status: 200
delay_ms: "{{randInt 100 5000}}" # Random delay between 100ms-5s
body: |
{
"message": "Random delay applied",
"delay_applied_ms": "{{_delay_ms}}"
}
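curl's timing output is a convenient way to confirm delays are applied:
# Fixed 2s delay: total time should be a little over 2 seconds
curl -s -o /dev/null -w "total: %{time_total}s\n" http://localhost:3000/api/slow-endpoint
# Variable delay: repeat a few times and compare timings
for i in 1 2 3; do
  curl -s -o /dev/null -w "total: %{time_total}s\n" http://localhost:3000/api/variable-delay
done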
Response Caching
routes:
- path: /api/cached-data
method: GET
response:
status: 200
headers:
Cache-Control: max-age=300
X-Cache-Status: "{{_cache_hit ? 'HIT' : 'MISS'}}"
cache: true
cache_ttl_seconds: 300
body: |
{
"data": "This response may be cached",
"generated_at": "{{now}}",
"cache_expires_at": "{{now+5m}}"
}
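Two back-to-back requests should reveal the cache behavior in the response headers; within the 300-second TTL, the second call is expected to be a hit:
# First request populates the cache (expect X-Cache-Status: MISS)
curl -s -D - -o /dev/null http://localhost:3000/api/cached-data | grep -i x-cache-status
# Second request within the TTL (expect X-Cache-Status: HIT)
curl -s -D - -o /dev/null http://localhost:3000/api/cached-data | grep -i x-cache-status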
Binary Response Handling
routes:
- path: /api/download/{filename}
method: GET
response:
status: 200
headers:
Content-Type: application/octet-stream
Content-Disposition: attachment; filename="{{request.path.filename}}"
body_file: "/path/to/binary/files/{{request.path.filename}}"
- path: /api/images/{imageId}
method: GET
response:
status: 200
headers:
Content-Type: image/png
Cache-Control: max-age=3600
body_base64: "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg=="
Testing Custom Responses
Manual Testing
# Test custom route
curl http://localhost:3000/api/custom/dashboard
# Test with parameters
curl "http://localhost:3000/api/users/123/profile"
# Test error conditions
curl -H "X-Test-Mode: insufficient_funds" \
http://localhost:3000/api/payments \
-X POST \
-d '{}'
Automated Testing
#!/bin/bash
# test-custom-responses.sh
BASE_URL="http://localhost:3000"
echo "Testing custom responses..."
# Test dashboard endpoint
DASHBOARD_RESPONSE=$(curl -s $BASE_URL/api/custom/dashboard)
echo "Dashboard response:"
echo $DASHBOARD_RESPONSE | jq '.'
# Test user profile with path parameter
USER_RESPONSE=$(curl -s $BASE_URL/api/users/456/profile)
echo "User profile response:"
echo $USER_RESPONSE | jq '.'
# Test error responses
ERROR_RESPONSE=$(curl -s -H "X-Test-Mode: insufficient_funds" \
-X POST \
-d '{}' \
$BASE_URL/api/payments)
echo "Error response:"
echo $ERROR_RESPONSE | jq '.'
echo "Custom response tests completed!"
Best Practices
Plugin Development
- Resource Limits: Set appropriate memory and CPU limits for plugins
- Error Handling: Implement proper error handling and logging
- Testing: Thoroughly test plugins with various inputs
- Documentation: Document plugin capabilities and configuration options
Override Usage
- Selective Application: Use specific targets to avoid unintended modifications
- Version Control: Keep override configurations in version control
- Testing: Test overrides with different request scenarios
- Performance: Minimize complex conditions and patch operations
Response Design
- Consistency: Maintain consistent response formats across endpoints
- Error Details: Provide meaningful error messages and codes
- Metadata: Include relevant metadata like timestamps and request IDs
- Content Types: Set appropriate Content-Type headers
Security Considerations
- Input Validation: Validate all inputs in custom plugins
- Resource Limits: Prevent resource exhaustion attacks
- Authentication: Implement proper authentication for sensitive endpoints
- Logging: Log security-relevant events without exposing sensitive data
Troubleshooting
Plugin Issues
Plugin not loading: Check plugin configuration and file paths
Plugin timeout: Increase resource limits or optimize plugin code
Plugin errors: Check plugin logs and error messages
Override Problems
Overrides not applying: Verify target selectors and patch syntax
JSON patch errors: Validate patch operations against the JSON structure
Condition evaluation: Test conditional expressions with sample requests
Performance Issues
Slow responses: Profile plugin execution and optimize bottlenecks
Memory usage: Monitor plugin memory consumption and adjust limits
Template expansion: Simplify complex templates or use static responses
For basic HTTP mocking features, see the HTTP Mocking guide. For advanced templating, see the Dynamic Data guide.
Dynamic Data
MockForge provides powerful dynamic data generation capabilities through its templating system and faker integration. This guide covers generating realistic, varied responses for comprehensive API testing and development.
Template Expansion Basics
MockForge uses a lightweight templating system with {{token}} syntax to inject dynamic values into responses.
Enabling Templates
Templates are disabled by default for security. Enable them using:
# Environment variable
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true mockforge serve --spec api-spec.json
# Configuration file
http:
response_template_expand: true
Basic Template Syntax
paths:
/users:
get:
responses:
'200':
content:
application/json:
example:
users:
- id: "{{uuid}}"
name: "{{faker.name}}"
email: "{{faker.email}}"
created_at: "{{now}}"
Time-Based Templates
Generate timestamps and time offsets for realistic temporal data.
Current Time
responses:
'200':
content:
application/json:
example:
current_time: "{{now}}"
server_timestamp: "{{now}}"
Time Offsets
responses:
'200':
content:
application/json:
example:
created_at: "{{now-7d}}"
expires_at: "{{now+1h}}"
last_login: "{{now-30m}}"
scheduled_for: "{{now+2h}}"
Supported units:
- s - seconds
- m - minutes
- h - hours
- d - days
Random Data Generation
Generate random values for varied test data.
Random Integers
responses:
'200':
content:
application/json:
example:
user_count: "{{randInt 1 100}}"
age: "{{randInt 18 80}}"
score: "{{randInt -10 10}}"
Random Floats
responses:
'200':
content:
application/json:
example:
price: "{{randFloat 9.99 999.99}}"
rating: "{{randFloat 1.0 5.0}}"
percentage: "{{randFloat 0.0 100.0}}"
UUID Generation
Generate unique identifiers for entities.
responses:
'200':
content:
application/json:
example:
id: "{{uuid}}"
order_id: "{{uuid}}"
transaction_id: "{{uuid}}"
Faker Data Generation
Generate realistic fake data using the Faker library.
Basic Faker Functions
responses:
'200':
content:
application/json:
example:
user:
id: "{{uuid}}"
name: "{{faker.name}}"
email: "{{faker.email}}"
created_at: "{{now}}"
Extended Faker Functions
When the data-faker feature is enabled, additional functions are available:
responses:
'200':
content:
application/json:
example:
user:
name: "{{faker.name}}"
email: "{{faker.email}}"
phone: "{{faker.phone}}"
address: "{{faker.address}}"
company: "{{faker.company}}"
product:
name: "{{faker.word}}"
description: "{{faker.sentence}}"
color: "{{faker.color}}"
url: "{{faker.url}}"
ip_address: "{{faker.ip}}"
Disabling Faker
For deterministic testing, disable faker tokens:
MOCKFORGE_FAKE_TOKENS=false mockforge serve --spec api-spec.json
Request Data Access
Access data from incoming requests to create dynamic responses.
Path Parameters
paths:
/users/{userId}:
get:
parameters:
- name: userId
in: path
required: true
schema:
type: string
responses:
'200':
content:
application/json:
example:
id: "{{request.path.userId}}"
name: "User {{request.path.userId}}"
retrieved_at: "{{now}}"
Query Parameters
paths:
/users:
get:
parameters:
- name: limit
in: query
schema:
type: integer
default: 10
- name: format
in: query
schema:
type: string
enum: [brief, detailed]
responses:
'200':
content:
application/json:
example: |
{{#if (eq request.query.format 'detailed')}}
{
"users": [
{
"id": "{{uuid}}",
"name": "{{faker.name}}",
"email": "{{faker.email}}",
"profile": {
"bio": "{{faker.sentence}}",
"location": "{{faker.address}}"
}
}
],
"limit": {{request.query.limit}},
"format": "{{request.query.format}}"
}
{{else}}
{
"users": [
{
"id": "{{uuid}}",
"name": "{{faker.name}}",
"email": "{{faker.email}}"
}
],
"limit": {{request.query.limit}}
}
{{/if}}
Request Body Access
paths:
/users:
post:
requestBody:
required: true
content:
application/json:
schema:
type: object
properties:
name:
type: string
email:
type: string
responses:
'201':
content:
application/json:
example:
id: "{{uuid}}"
name: "{{request.body.name}}"
email: "{{request.body.email}}"
created_at: "{{now}}"
welcome_message: "Welcome {{request.body.name}}!"
Headers Access
responses:
'200':
content:
application/json:
example:
user_agent: "{{request.header.User-Agent}}"
api_version: "{{request.header.X-API-Version}}"
authorization: "{{request.header.Authorization}}"
Conditional Templates
Use Handlebars-style conditionals for complex logic.
Basic Conditionals
responses:
'200':
content:
application/json:
example: |
{{#if (eq request.query.format 'detailed')}}
{
"data": {
"id": "{{uuid}}",
"name": "{{faker.name}}",
"details": {
"bio": "{{faker.paragraph}}",
"stats": {
"login_count": {{randInt 1 1000}},
"last_active": "{{now-1d}}"
}
}
}
}
{{else}}
{
"data": {
"id": "{{uuid}}",
"name": "{{faker.name}}"
}
}
{{/if}}
Multiple Conditions
responses:
'200':
content:
application/json:
example: |
{{#if (eq request.query.type 'admin')}}
{
"user": {
"id": "{{uuid}}",
"name": "{{faker.name}}",
"role": "admin",
"permissions": ["read", "write", "delete", "admin"]
}
}
{{else if (eq request.query.type 'premium')}}
{
"user": {
"id": "{{uuid}}",
"name": "{{faker.name}}",
"role": "premium",
"permissions": ["read", "write"]
}
}
{{else}}
{
"user": {
"id": "{{uuid}}",
"name": "{{faker.name}}",
"role": "basic",
"permissions": ["read"]
}
}
{{/if}}
Data Generation Templates
MockForge includes built-in data generation templates for common entities.
User Template
# Generate user data
mockforge data template user --rows 10 --format json
# Output:
[
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"email": "john.doe@example.com",
"name": "John Doe",
"created_at": "2024-01-15T10:30:00Z",
"active": true
}
]
Product Template
# Generate product data
mockforge data template product --rows 5 --format csv
# Output:
id,name,description,price,category,in_stock
550e8400-e29b-41d4-a716-446655440001,Wireless Headphones,High-quality wireless headphones with noise cancellation,199.99,Electronics,true
Order Template
# Generate order data with relationships
mockforge data template order --rows 3 --format json --rag
# Output:
[
{
"id": "550e8400-e29b-41d4-a716-446655440002",
"user_id": "550e8400-e29b-41d4-a716-446655440000",
"total_amount": 299.97,
"status": "completed",
"created_at": "2024-01-16T14:20:00Z"
}
]
Advanced Templating Features
Encryption Functions
Secure sensitive data in responses:
responses:
'200':
content:
application/json:
example:
user:
id: "{{uuid}}"
name: "{{encrypt 'user_name' faker.name}}"
email: "{{encrypt 'user_email' faker.email}}"
ssn: "{{encrypt 'sensitive' '123-45-6789'}}"
Decryption
Access encrypted data:
# In templates that need to decrypt
decrypted_name: "{{decrypt 'user_name' request.body.encrypted_name}}"
File System Access
Read external files for dynamic content:
responses:
'200':
content:
application/json:
example:
config: "{{fs.readFile 'config.json'}}"
template: "{{fs.readFile 'templates/welcome.html'}}"
Request Chaining Context
Access data from previous requests in chained scenarios.
Chain Variables
# In chained request templates
responses:
'200':
content:
application/json:
example:
previous_request_id: "{{chain.request_id}}"
previous_user_id: "{{chain.user.id}}"
session_token: "{{chain.auth.token}}"
Custom Template Plugins
Extend templating with custom functions via plugins.
Template Plugin Example
use mockforge_plugin_core::*;

pub struct BusinessTemplatePlugin;

impl TemplatePlugin for BusinessTemplatePlugin {
    fn execute_function(
        &mut self,
        function_name: &str,
        _args: &[TemplateArg],
        _context: &PluginContext,
    ) -> PluginResult<String> {
        match function_name {
            "business_id" => {
                let id = format!("BIZ-{:010}", rand::random::<u32>());
                PluginResult::success(id, 0)
            }
            "department" => {
                let depts = ["Engineering", "Sales", "Marketing", "HR"];
                let dept = depts[rand::random::<usize>() % depts.len()];
                PluginResult::success(dept.to_string(), 0)
            }
            "salary" => {
                let salary = rand::random::<u32>() % 150_000 + 50_000;
                PluginResult::success(salary.to_string(), 0)
            }
            _ => PluginResult::failure(
                format!("Unknown function: {}", function_name),
                0,
            ),
        }
    }

    fn get_available_functions(&self) -> Vec<TemplateFunction> {
        vec![
            TemplateFunction {
                name: "business_id".to_string(),
                description: "Generate business ID".to_string(),
                args: vec![],
                return_type: "string".to_string(),
            },
            TemplateFunction {
                name: "department".to_string(),
                description: "Generate department name".to_string(),
                args: vec![],
                return_type: "string".to_string(),
            },
            TemplateFunction {
                name: "salary".to_string(),
                description: "Generate salary amount".to_string(),
                args: vec![],
                return_type: "string".to_string(),
            },
        ]
    }

    fn get_capabilities(&self) -> PluginCapabilities {
        PluginCapabilities::default()
    }

    fn health_check(&self) -> PluginHealth {
        PluginHealth::healthy("Business template plugin healthy".to_string(), PluginMetrics::default())
    }
}
Using Custom Templates
responses:
'200':
content:
application/json:
example:
employee:
id: "{{business_id}}"
name: "{{faker.name}}"
department: "{{department}}"
salary: "{{salary}}"
hire_date: "{{now-1y}}"
Configuration and Security
Template Security Settings
# mockforge.yaml
http:
response_template_expand: true
template_security:
allow_file_access: false
allow_encryption: true
max_template_depth: 10
timeout_ms: 5000
Environment Variables
# Enable template expansion
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
# Disable faker for deterministic tests
MOCKFORGE_FAKE_TOKENS=false
# Set validation status for template errors
MOCKFORGE_VALIDATION_STATUS=422
# Control template execution timeout
MOCKFORGE_TEMPLATE_TIMEOUT_MS=5000
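For reproducible test fixtures, these variables can be combined; with faker tokens disabled, faker-driven fields should stop varying between requests (time- and UUID-based tokens still will). A minimal sketch:
# Serve with templates on but faker tokens off for more deterministic output
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
MOCKFORGE_FAKE_TOKENS=false \
mockforge serve --spec api-spec.json &

# Compare two responses; faker-driven fields should match
curl -s http://localhost:3000/users > a.json
curl -s http://localhost:3000/users > b.json
diff a.json b.json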
Testing with Dynamic Data
Manual Testing
# Test template expansion
curl http://localhost:3000/users
# Test with query parameters
curl "http://localhost:3000/users?format=detailed&limit=5"
# Test path parameters
curl http://localhost:3000/users/123
# Test POST with body access
curl -X POST http://localhost:3000/users \
-H "Content-Type: application/json" \
-d '{"name": "Test User", "email": "test@example.com"}'
Automated Testing
#!/bin/bash
# test-dynamic-data.sh
BASE_URL="http://localhost:3000"
echo "Testing dynamic data generation..."
# Test basic templates
USER_RESPONSE=$(curl -s $BASE_URL/users)
echo "User response with templates:"
echo $USER_RESPONSE | jq '.'
# Test conditional templates
DETAILED_RESPONSE=$(curl -s "$BASE_URL/users?format=detailed")
echo "Detailed format response:"
echo $DETAILED_RESPONSE | jq '.'
BASIC_RESPONSE=$(curl -s "$BASE_URL/users?format=basic")
echo "Basic format response:"
echo $BASIC_RESPONSE | jq '.'
# Test faker data
PRODUCT_RESPONSE=$(curl -s $BASE_URL/products)
echo "Product response with faker data:"
echo $PRODUCT_RESPONSE | jq '.'
echo "Dynamic data tests completed!"
Best Practices
Template Usage
- Enable Selectively: Only enable template expansion where needed for security
- Validate Input: Sanitize request data used in templates
- Test Thoroughly: Test template expansion with various inputs
- Monitor Performance: Templates add processing overhead
Data Generation
- Use Appropriate Faker: Choose faker functions that match your domain
- Maintain Consistency: Use consistent data patterns across endpoints
- Consider Relationships: Generate related data that makes sense together
- Balance Realism: Generate realistic but not sensitive data
Security Considerations
- Input Sanitization: Never trust request data in templates
- File Access: Disable file system access in production if not needed
- Encryption: Use encryption functions for sensitive data
- Rate Limiting: Consider rate limiting for expensive template operations
Performance Optimization
- Cache Static Parts: Cache template parsing for frequently used templates
- Limit Complexity: Avoid deeply nested conditionals and complex logic
- Profile Execution: Monitor template execution time and optimize slow functions
- Use Appropriate Timeouts: Set reasonable timeouts for template execution
Troubleshooting
Template Not Expanding
Problem: Templates appear as literal text in responses
Solutions:
# Enable template expansion
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true mockforge serve --spec api-spec.json
# Check configuration
# Ensure response_template_expand: true in config
Faker Functions Not Working
Problem: Faker functions return empty or error values
Solutions:
# Ensure faker is enabled
MOCKFORGE_FAKE_TOKENS=true mockforge serve --spec api-spec.json
# Check if data-faker feature is enabled
# For extended faker functions, ensure the feature is compiled in
Request Data Access Issues
Problem: request.* variables are empty or undefined
Solutions:
- Verify request format (JSON for body access)
- Check parameter names match exactly
- Ensure path/query parameters are properly defined in OpenAPI spec
Performance Issues
Problem: Template expansion is slow
Solutions:
- Simplify template logic
- Cache frequently used values
- Use static responses where dynamic data isn’t needed
- Profile and optimize custom template functions
For basic HTTP mocking features, see the HTTP Mocking guide. For custom response generation, see the Custom Responses guide.
Advanced Behavior and Simulation
MockForge provides advanced behavior and simulation features that allow you to create realistic, stateful, and resilient API mocks. This guide covers record & playback, stateful behavior simulation, fault injection, latency simulation, and conditional proxying.
Table of Contents
- Record & Playback
- Stateful Behavior Simulation
- Per-Route Fault Injection
- Per-Route Latency Simulation
- Conditional Proxying
- Browser Proxy with Conditional Forwarding
Record & Playback
The record & playback feature allows you to capture real API interactions and convert them into replayable stub mappings.
Quick Start
- Start recording while proxying to a real service:
mockforge serve --spec api-spec.json --proxy --record
- Convert recordings to stub mappings:
# Convert a specific recording
mockforge recorder convert --recording-id abc123 --output fixtures/user-api.yaml
# Batch convert all recordings
mockforge recorder convert --input recordings.db --output fixtures/ --format yaml
Configuration
core:
recorder:
enabled: true
auto_convert: true
output_dir: "./fixtures/recorded"
format: "yaml"
filters:
min_status_code: 200
max_status_code: 299
exclude_paths:
- "/health"
- "/metrics"
API Usage
# Convert via API
curl -X POST http://localhost:9080/api/recorder/convert/abc123 \
-H "Content-Type: application/json" \
-d '{"format": "yaml"}'
Stateful Behavior Simulation
Stateful behavior simulation allows responses to change based on previous requests, using state machines to track resource state.
Basic Example
core:
stateful:
enabled: true
state_machines:
- name: "order_workflow"
initial_state: "pending"
states:
- name: "pending"
response:
status_code: 200
body_template: '{"order_id": "{{resource_id}}", "status": "pending"}'
- name: "processing"
response:
status_code: 200
body_template: '{"order_id": "{{resource_id}}", "status": "processing"}'
- name: "shipped"
response:
status_code: 200
body_template: '{"order_id": "{{resource_id}}", "status": "shipped"}'
resource_id_extract:
type: "path_param"
param: "order_id"
transitions:
- method: "POST"
path_pattern: "/api/orders"
from_state: "initial"
to_state: "pending"
- method: "PUT"
path_pattern: "/api/orders/{order_id}/process"
from_state: "pending"
to_state: "processing"
Resource ID Extraction
Extract resource IDs from various sources:
# From path parameter
resource_id_extract:
type: "path_param"
param: "order_id"
# From header
resource_id_extract:
type: "header"
name: "X-Resource-ID"
# From JSONPath in request body
resource_id_extract:
type: "json_path"
path: "$.order.id"
# Composite (tries multiple sources)
resource_id_extract:
type: "composite"
extractors:
- type: "path_param"
param: "order_id"
- type: "header"
name: "X-Order-ID"
Per-Route Fault Injection
Configure fault injection on specific routes with multiple fault types.
Configuration
core:
routes:
- path: "/api/payments/process"
method: "POST"
fault_injection:
enabled: true
probability: 0.05 # 5% chance
fault_types:
- type: "http_error"
status_code: 503
message: "Service unavailable"
- type: "timeout"
duration_ms: 5000
- type: "connection_error"
message: "Connection refused"
Fault Types
- HTTP Error: Return specific status codes
- Connection Error: Simulate connection failures
- Timeout: Simulate request timeouts
- Partial Response: Truncate responses
- Payload Corruption: Corrupt response payloads
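With a 5% probability configured, a batch of requests should surface occasional injected faults. A rough empirical check:
# Send 100 requests and count non-2xx responses (roughly 5 expected)
for i in $(seq 1 100); do
  curl -s -o /dev/null -w "%{http_code}\n" -X POST http://localhost:3000/api/payments/process
done | grep -cv '^2'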
Per-Route Latency Simulation
Simulate network latency with various distributions.
Configuration
core:
routes:
- path: "/api/search"
method: "GET"
latency:
enabled: true
probability: 0.8
distribution: "normal" # fixed, normal, exponential, uniform
mean_ms: 500.0
std_dev_ms: 100.0
jitter_percent: 15.0
Distributions
- Fixed: Constant delay with optional jitter
- Normal: Gaussian distribution (realistic for most APIs)
- Exponential: Exponential distribution (simulates network delays)
- Uniform: Random delay within a range
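Sampled latencies can be eyeballed from curl timings; with the normal distribution configured above, most totals should cluster around 500ms:
# Collect timings for 10 requests against the latency-enabled route
for i in $(seq 1 10); do
  curl -s -o /dev/null -w "%{time_total}s\n" http://localhost:3000/api/search
done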
Conditional Proxying
Proxy requests conditionally based on request attributes using expressions.
Basic Examples
core:
proxy:
enabled: true
rules:
# Proxy admin requests
- pattern: "/api/admin/*"
upstream_url: "https://admin-api.example.com"
condition: "$.user.role == 'admin'"
# Proxy authenticated requests
- pattern: "/api/protected/*"
upstream_url: "https://protected-api.example.com"
condition: "header[authorization] != ''"
# Proxy based on query parameter
- pattern: "/api/data/*"
upstream_url: "https://data-api.example.com"
condition: "query[env] == 'production'"
Condition Types
JSONPath Expressions
condition: "$.user.role == 'admin'"
condition: "$.order.amount > 1000"
Header Checks
condition: "header[authorization] != ''"
condition: "header[user-agent] == 'MobileApp/1.0'"
Query Parameters
condition: "query[env] == 'production'"
condition: "query[version] == 'v2'"
Logical Operators
# AND
condition: "AND(header[authorization] != '', $.user.role == 'admin')"
# OR
condition: "OR(query[env] == 'production', query[env] == 'staging')"
# NOT
condition: "NOT(query[env] == 'development')"
Browser Proxy with Conditional Forwarding
The browser proxy mode supports the same conditional forwarding rules.
Usage
# Start browser proxy with conditional rules
mockforge proxy --port 8081 --config config.yaml
Configure your browser/mobile app to use 127.0.0.1:8081 as the HTTP proxy. All requests will be evaluated against conditional rules before proxying.
Example Configuration
proxy:
enabled: true
rules:
# Route admin users to production
- pattern: "/api/admin/*"
upstream_url: "https://admin-api.production.com"
condition: "$.user.role == 'admin'"
# Route authenticated users to staging
- pattern: "/api/*"
upstream_url: "https://api.staging.com"
condition: "header[authorization] != ''"
Priority Chain
MockForge processes requests through this priority chain:
- Replay - Check for recorded fixtures
- Stateful - Check for stateful response handling
- Route Chaos - Apply per-route fault injection and latency
- Global Fail - Apply global/tag-based failure injection
- Proxy - Check for conditional proxying
- Mock - Generate mock response from OpenAPI spec
- Record - Record request for future replay
Related Advanced Features
MockForge includes many additional advanced features that complement the basic advanced behavior:
- VBR Engine: Virtual database layer with automatic CRUD generation
- Temporal Simulation: Time travel and time-based data mutations
- Scenario State Machines: Visual flow editor for complex workflows
- MockAI: AI-powered intelligent response generation
- Chaos Lab: Interactive network condition simulation
- Reality Slider: Unified control for mock environment realism
For a complete overview, see Advanced Features.
Best Practices
- Start simple - Begin with basic configurations and add complexity gradually
- Test thoroughly - Verify state transitions and conditions work as expected
- Monitor performance - Latency injection can slow down tests
- Document conditions - Keep conditional logic well-documented
- Use version control - Track configuration changes over time
Examples
See the example configuration file for comprehensive examples of all features.
For more details, see the Advanced Behavior and Simulation documentation.
gRPC Mocking
MockForge provides comprehensive gRPC service mocking with dynamic Protocol Buffer discovery, streaming support, and flexible service registration. This enables testing of gRPC-based microservices and APIs with realistic mock responses.
Overview
MockForge’s gRPC mocking system offers:
- Dynamic Proto Discovery: Automatically discovers and compiles .proto files from configurable directories
- Flexible Service Registration: Register and mock any gRPC service without hardcoding
- Streaming Support: Full support for unary, server streaming, client streaming, and bidirectional streaming
- Reflection Support: Built-in gRPC reflection for service discovery and testing
- Template Integration: Use MockForge’s template system for dynamic response generation
- Advanced Data Synthesis: Intelligent mock data generation with deterministic seeding, relationship awareness, and RAG-driven domain knowledge
Quick Start
Basic gRPC Server
Start a gRPC mock server with default configuration:
# Start with default proto directory (proto/)
mockforge serve --grpc-port 50051
With Custom Proto Directory
# Specify custom proto directory
MOCKFORGE_PROTO_DIR=my-protos mockforge serve --grpc-port 50051
Complete Example
# Start MockForge with HTTP, WebSocket, and gRPC support
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
MOCKFORGE_WS_REPLAY_FILE=examples/ws-demo.jsonl \
MOCKFORGE_PROTO_DIR=examples/grpc-protos \
mockforge serve \
--spec examples/openapi-demo.json \
--http-port 3000 \
--ws-port 3001 \
--grpc-port 50051 \
--admin --admin-port 9080
Proto File Setup
Directory Structure
MockForge automatically discovers .proto files in a configurable directory:
your-project/
├── proto/ # Default proto directory
│ ├── user_service.proto # Will be discovered
│ ├── payment.proto # Will be discovered
│ └── subdir/
│ └── analytics.proto # Will be discovered (recursive)
└── examples/
└── grpc-protos/ # Custom proto directory
└── service.proto
Sample Proto File
syntax = "proto3";
package mockforge.user;
service UserService {
rpc GetUser(GetUserRequest) returns (UserResponse);
rpc ListUsers(ListUsersRequest) returns (stream UserResponse);
rpc CreateUser(stream CreateUserRequest) returns (UserResponse);
rpc Chat(stream ChatMessage) returns (stream ChatMessage);
}
message GetUserRequest {
string user_id = 1;
}
message UserResponse {
string user_id = 1;
string name = 2;
string email = 3;
int64 created_at = 4;
Status status = 5;
}
message ListUsersRequest {
int32 limit = 1;
string filter = 2;
}
message CreateUserRequest {
string name = 1;
string email = 2;
}
message ChatMessage {
string user_id = 1;
string content = 2;
int64 timestamp = 3;
}
enum Status {
UNKNOWN = 0;
ACTIVE = 1;
INACTIVE = 2;
SUSPENDED = 3;
}
Dynamic Response Generation
MockForge generates responses automatically based on your proto message schemas, with support for templates and custom logic.
Automatic Response Generation
For basic use cases, MockForge generates responses from proto schemas:
- Strings: Random realistic values
- Integers: Random numbers in appropriate ranges
- Timestamps: Current time or future dates
- Enums: Random valid enum values
- Messages: Nested objects with generated data
- Repeated fields: Arrays with multiple generated items
Template-Enhanced Responses
Use MockForge templates in proto comments for custom responses:
message UserResponse {
string user_id = 1; // {{uuid}}
string name = 2; // {{request.user_id == "123" ? "John Doe" : "Jane Smith"}}
string email = 3; // {{name | replace(" ", ".") | lower}}@example.com
int64 created_at = 4; // {{now}}
Status status = 5; // ACTIVE
}
Request Context Access
Access request data in templates:
message UserResponse {
string user_id = 1; // {{request.user_id}}
string requested_by = 2; // {{request.metadata.user_id}}
string message = 3; // User {{request.user_id}} was retrieved
}
Testing gRPC Services
Using gRPC CLI Tools
grpcurl (Recommended)
# Install grpcurl
go install github.com/fullstorydev/grpcurl/cmd/grpcurl@latest
# List available services
grpcurl -plaintext localhost:50051 list
# Call a unary method
grpcurl -plaintext -d '{"user_id": "123"}' \
localhost:50051 mockforge.user.UserService/GetUser
# Call a server streaming method
grpcurl -plaintext -d '{"limit": 5}' \
localhost:50051 mockforge.user.UserService/ListUsers
# Call a client streaming method
echo '{"name": "Alice", "email": "alice@example.com"}' | \
grpcurl -plaintext -d @ \
localhost:50051 mockforge.user.UserService/CreateUser
grpcui (Web Interface)
# Install grpcui
go install github.com/fullstorydev/grpcui/cmd/grpcui@latest
# Start web interface
grpcui -plaintext localhost:50051
# Open http://localhost:2633 in your browser
Programmatic Testing
Node.js with grpc-js
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');
const packageDefinition = protoLoader.loadSync(
'proto/user_service.proto',
{
keepCase: true,
longs: String,
enums: String,
defaults: true,
oneofs: true
}
);
const protoDescriptor = grpc.loadPackageDefinition(packageDefinition);
const client = new protoDescriptor.mockforge.user.UserService(
'localhost:50051',
grpc.credentials.createInsecure()
);
// Unary call
client.GetUser({ user_id: '123' }, (error, response) => {
if (error) {
console.error('Error:', error);
} else {
console.log('Response:', response);
}
});
// Server streaming
const stream = client.ListUsers({ limit: 5 });
stream.on('data', (response) => {
console.log('User:', response);
});
stream.on('end', () => {
console.log('Stream ended');
});
Python with grpcio
import grpc
from user_service_pb2 import GetUserRequest, ListUsersRequest
from user_service_pb2_grpc import UserServiceStub
channel = grpc.insecure_channel('localhost:50051')
stub = UserServiceStub(channel)
# Unary call
request = GetUserRequest(user_id='123')
response = stub.GetUser(request)
print(f"User: {response.name}, Email: {response.email}")
# Streaming
for user in stub.ListUsers(ListUsersRequest(limit=5)):
print(f"User: {user.name}")
Advanced Configuration
Custom Response Mappings
Create custom response logic by implementing service handlers:
use mockforge_grpc::{ServiceRegistry, ServiceImplementation};
use prost::Message;
use std::collections::HashMap;

// GetUserRequest, UserResponse, and Status are the prost-generated types
// from the user service proto shown earlier.
struct CustomUserService {
    user_data: HashMap<String, UserResponse>,
}

impl ServiceImplementation for CustomUserService {
    fn handle_unary(&self, method: &str, request: &[u8]) -> Vec<u8> {
        match method {
            "GetUser" => {
                let req: GetUserRequest = Message::decode(request).unwrap();
                // Look up the user, falling back to a placeholder record
                let response = self.user_data.get(&req.user_id)
                    .cloned()
                    .unwrap_or_else(|| UserResponse {
                        user_id: req.user_id,
                        name: "Unknown User".to_string(),
                        email: "unknown@example.com".to_string(),
                        created_at: std::time::SystemTime::now()
                            .duration_since(std::time::UNIX_EPOCH)
                            .unwrap()
                            .as_secs() as i64,
                        status: Status::Unknown as i32,
                    });
                let mut buf = Vec::new();
                response.encode(&mut buf).unwrap();
                buf
            }
            _ => Vec::new(),
        }
    }
}
Environment Variables
# Proto file configuration
MOCKFORGE_PROTO_DIR=proto/ # Directory containing .proto files
MOCKFORGE_GRPC_PORT=50051 # gRPC server port
# Service behavior
MOCKFORGE_GRPC_LATENCY_ENABLED=true # Enable response latency
MOCKFORGE_GRPC_LATENCY_MIN_MS=10 # Minimum latency
MOCKFORGE_GRPC_LATENCY_MAX_MS=100 # Maximum latency
# Reflection settings
MOCKFORGE_GRPC_REFLECTION_ENABLED=true # Enable gRPC reflection
Configuration File
grpc:
port: 50051
proto_dir: "proto/"
enable_reflection: true
latency:
enabled: true
min_ms: 10
max_ms: 100
services:
- name: "mockforge.user.UserService"
implementation: "dynamic"
- name: "custom.Service"
implementation: "custom_handler"
Streaming Support
MockForge supports all gRPC streaming patterns:
Unary (Request → Response)
rpc GetUser(GetUserRequest) returns (UserResponse);
Standard request-response pattern used for simple operations.
Server Streaming (Request → Stream of Responses)
rpc ListUsers(ListUsersRequest) returns (stream UserResponse);
Single request that returns multiple responses over time.
Client Streaming (Stream of Requests → Response)
rpc CreateUsers(stream CreateUserRequest) returns (UserSummary);
Multiple requests sent as a stream, single response returned.
Bidirectional Streaming (Stream ↔ Stream)
rpc Chat(stream ChatMessage) returns (stream ChatMessage);
Both client and server can send messages independently.
Error Handling
gRPC Status Codes
MockForge supports all standard gRPC status codes:
// In proto comments for custom error responses
rpc GetUser(GetUserRequest) returns (UserResponse);
// @error NOT_FOUND User not found
// @error INVALID_ARGUMENT Invalid user ID format
// @error INTERNAL Server error occurred
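On the client side, these annotated errors surface as standard gRPC status codes. A minimal Python sketch, assuming generated user_service_pb2 modules and that the mock maps unknown IDs to NOT_FOUND as annotated above:
import grpc
from user_service_pb2 import GetUserRequest        # assumed generated module
from user_service_pb2_grpc import UserServiceStub  # assumed generated module

stub = UserServiceStub(grpc.insecure_channel("localhost:50051"))

try:
    stub.GetUser(GetUserRequest(user_id="does-not-exist"))
except grpc.RpcError as err:
    # err.code() is a grpc.StatusCode such as NOT_FOUND
    print(f"status={err.code()}, details={err.details()}")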
Custom Error Responses
// Custom error handling
fn handle_unary(&self, method: &str, request: &[u8]) -> Result<Vec<u8>, tonic::Status> {
    match method {
        "GetUser" => {
            let req: GetUserRequest = prost::Message::decode(request)
                .map_err(|e| tonic::Status::invalid_argument(e.to_string()))?;
            if !is_valid_user_id(&req.user_id) {
                return Err(tonic::Status::invalid_argument("Invalid user ID"));
            }
            match self.get_user(&req.user_id) {
                Some(user) => {
                    let mut buf = Vec::new();
                    user.encode(&mut buf)
                        .map_err(|e| tonic::Status::internal(e.to_string()))?;
                    Ok(buf)
                }
                None => Err(tonic::Status::not_found("User not found")),
            }
        }
        _ => Err(tonic::Status::unimplemented("Method not implemented")),
    }
}
Integration Patterns
Microservices Testing
# Start multiple gRPC services
MOCKFORGE_PROTO_DIR=user-proto mockforge serve --grpc-port 50051 &
MOCKFORGE_PROTO_DIR=payment-proto mockforge serve --grpc-port 50052 &
MOCKFORGE_PROTO_DIR=inventory-proto mockforge serve --grpc-port 50053 &
# Test service communication
grpcurl -plaintext localhost:50051 mockforge.user.UserService/GetUser \
-d '{"user_id": "123"}'
Load Testing
# Simple load test with a shell loop and grpcurl
# (hey only speaks HTTP, so it cannot drive grpcurl; use a loop or ghz instead)
for i in $(seq 1 100); do
  grpcurl -plaintext -d '{"user_id": "123"}' \
    localhost:50051 mockforge.user.UserService/GetUser > /dev/null &
done
wait
# Advanced load testing with ghz
ghz --insecure \
--proto proto/user_service.proto \
--call mockforge.user.UserService.GetUser \
--data '{"user_id": "123"}' \
--concurrency 10 \
--total 1000 \
localhost:50051
CI/CD Integration
# .github/workflows/test.yml
name: gRPC Tests
on: [push, pull_request]
jobs:
grpc-test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Rust
uses: actions-rust-lang/setup-rust-toolchain@v1
- name: Start MockForge
run: |
cargo run --bin mockforge-cli -- serve --grpc-port 50051 &
sleep 5
- name: Run gRPC Tests
run: |
go install github.com/fullstorydev/grpcurl/cmd/grpcurl@latest
grpcurl -plaintext localhost:50051 list
# Add your test commands here
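A fixed sleep can be flaky on slow runners. One alternative is to wait until the channel is actually ready; a sketch using Python's grpcio (an assumption; any client that can probe the port works):
import grpc

channel = grpc.insecure_channel("localhost:50051")
try:
    # Blocks until the channel connects or the timeout elapses
    grpc.channel_ready_future(channel).result(timeout=30)
    print("MockForge gRPC server is ready")
except grpc.FutureTimeoutError:
    raise SystemExit("gRPC server did not become ready within 30s")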
Best Practices
Proto File Organization
- Clear Package Names: Use descriptive package names that reflect service domains
- Consistent Naming: Follow protobuf naming conventions
- Versioning: Include version information in package names when appropriate
- Documentation: Add comments to proto files for better API documentation
Service Design
- Appropriate Streaming: Choose the right streaming pattern for your use case
- Error Handling: Define clear error conditions and status codes
- Pagination: Implement pagination for large result sets
- Backwards Compatibility: Design for evolution and backwards compatibility
Testing Strategies
- Unit Tests: Test individual service methods
- Integration Tests: Test service interactions
- Load Tests: Verify performance under load
- Chaos Tests: Test failure scenarios and recovery
Performance Optimization
- Response Caching: Cache frequently requested data
- Connection Pooling: Reuse gRPC connections (see the sketch after this list)
- Async Processing: Use async operations for I/O bound tasks
- Memory Management: Monitor and optimize memory usage
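For the connection-pooling point above, reusing one channel across calls avoids repeated HTTP/2 handshakes. A minimal Python sketch; the generated user_service_pb2 modules are assumed:
import grpc
from user_service_pb2 import GetUserRequest        # assumed generated module
from user_service_pb2_grpc import UserServiceStub  # assumed generated module

# One channel (and its underlying HTTP/2 connection) shared by all calls
channel = grpc.insecure_channel("localhost:50051")
stub = UserServiceStub(channel)

for i in range(100):
    stub.GetUser(GetUserRequest(user_id=str(i)))

channel.close()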
Troubleshooting
Common Issues
Proto files not found: Check MOCKFORGE_PROTO_DIR environment variable and directory permissions
Service not available: Verify proto compilation succeeded and service names match
Connection refused: Ensure gRPC port is accessible and not blocked by firewall
Template errors: Check template syntax and available context variables
Debug Commands
# Check proto compilation
cargo build --verbose
# List available services
grpcurl -plaintext localhost:50051 list
# Check service methods
grpcurl -plaintext localhost:50051 describe mockforge.user.UserService
# Test with verbose output
grpcurl -plaintext -v -d '{"user_id": "123"}' \
localhost:50051 mockforge.user.UserService/GetUser
Log Analysis
# View gRPC logs
tail -f mockforge.log | grep -i grpc
# Count requests by service
grep "grpc.*call" mockforge.log | cut -d' ' -f5 | sort | uniq -c
# Monitor errors
grep -i "grpc.*error" mockforge.log
For detailed implementation guides, see:
- Protocol Buffers - Working with .proto files
- Streaming - Advanced streaming patterns
- Advanced Data Synthesis - Intelligent data generation with RAG and validation
Protocol Buffers
Protocol Buffers (protobuf) are the interface definition language used by gRPC services. MockForge provides comprehensive support for working with protobuf files, including automatic discovery, compilation, and dynamic service generation.
Understanding Proto Files
Basic Structure
A .proto file defines the service interface and message formats:
syntax = "proto3";
package myapp.user;
import "google/protobuf/timestamp.proto";
// Service definition
service UserService {
rpc GetUser(GetUserRequest) returns (User);
rpc ListUsers(ListUsersRequest) returns (stream User);
rpc CreateUser(CreateUserRequest) returns (User);
rpc UpdateUser(UpdateUserRequest) returns (User);
rpc DeleteUser(DeleteUserRequest) returns (google.protobuf.Empty);
}
// Message definitions
message GetUserRequest {
string user_id = 1;
}
message User {
string user_id = 1;
string email = 2;
string name = 3;
google.protobuf.Timestamp created_at = 4;
google.protobuf.Timestamp updated_at = 5;
UserStatus status = 6;
repeated string roles = 7;
}
message ListUsersRequest {
int32 page_size = 1;
string page_token = 2;
string filter = 3;
}
message CreateUserRequest {
string email = 1;
string name = 2;
repeated string roles = 3;
}
message UpdateUserRequest {
string user_id = 1;
string email = 2;
string name = 3;
repeated string roles = 4;
}
message DeleteUserRequest {
string user_id = 1;
}
enum UserStatus {
UNKNOWN = 0;
ACTIVE = 1;
INACTIVE = 2;
SUSPENDED = 3;
}
Key Components
Syntax Declaration
syntax = "proto3";
Declares the protobuf version. MockForge supports proto3.
Package Declaration
package myapp.user;
Defines the namespace for the service and messages.
Imports
import "google/protobuf/timestamp.proto";
Imports common protobuf types and other proto files.
Service Definition
service UserService {
rpc GetUser(GetUserRequest) returns (User);
// ... more methods
}
Defines the RPC methods available in the service.
Message Definitions
message User {
string user_id = 1;
string email = 2;
// ... more fields
}
Defines the structure of data exchanged between client and server.
Enum Definitions
enum UserStatus {
UNKNOWN = 0;
ACTIVE = 1;
// ... more values
}
Defines enumerated types with named constants.
Field Types
Scalar Types
| Proto Type | Go Type | Java Type | C++ Type | Notes |
|---|---|---|---|---|
| double | float64 | double | double | |
| float | float32 | float | float | |
| int32 | int32 | int | int32 | Uses variable-length encoding |
| int64 | int64 | long | int64 | Uses variable-length encoding |
| uint32 | uint32 | int | uint32 | Uses variable-length encoding |
| uint64 | uint64 | long | uint64 | Uses variable-length encoding |
| sint32 | int32 | int | int32 | Uses zigzag encoding |
| sint64 | int64 | long | int64 | Uses zigzag encoding |
| fixed32 | uint32 | int | uint32 | Always 4 bytes |
| fixed64 | uint64 | long | uint64 | Always 8 bytes |
| sfixed32 | int32 | int | int32 | Always 4 bytes |
| sfixed64 | int64 | long | int64 | Always 8 bytes |
| bool | bool | boolean | bool | |
| string | string | String | string | UTF-8 encoded |
| bytes | []byte | ByteString | string | May contain any arbitrary byte sequence |
Repeated Fields
message SearchResponse {
repeated Result results = 1;
}
Creates an array/list of the specified type.
Nested Messages
message Address {
string street = 1;
string city = 2;
string country = 3;
}
message Person {
string name = 1;
Address address = 2;
}
Messages can contain other messages as fields.
Oneof Fields
message Person {
string name = 1;
oneof contact_info {
string email = 2;
string phone = 3;
}
}
Only one of the specified fields can be set at a time.
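In generated code, setting one member of a oneof clears the others. For example, with Python's protobuf API (a sketch; person_pb2 is assumed to be generated from the message above):
from person_pb2 import Person  # assumed generated module

p = Person(name="Ada")
p.email = "ada@example.com"
p.phone = "+1-555-0100"  # setting phone clears email (oneof semantics)

print(p.WhichOneof("contact_info"))  # -> "phone"
print(p.email)                       # -> "" (cleared)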
Maps
message Config {
map<string, string> settings = 1;
}
Creates a key-value map structure.
Service Patterns
Unary RPC
service Calculator {
rpc Add(AddRequest) returns (AddResponse);
}
Standard request-response pattern.
Server Streaming
service NotificationService {
rpc Subscribe(SubscribeRequest) returns (stream Notification);
}
Server sends multiple responses for a single request.
Client Streaming
service UploadService {
rpc Upload(stream UploadChunk) returns (UploadResponse);
}
Client sends multiple requests, server responds once.
Bidirectional Streaming
service ChatService {
rpc Chat(stream ChatMessage) returns (stream ChatMessage);
}
Both client and server can send messages independently.
Proto File Organization
Directory Structure
proto/
├── user/
│ ├── v1/
│ │ ├── user.proto
│ │ └── user_service.proto
│ └── v2/
│ ├── user.proto
│ └── user_service.proto
├── payment/
│ ├── payment.proto
│ └── payment_service.proto
└── common/
├── types.proto
└── errors.proto
Versioning
// user/v1/user.proto
syntax = "proto3";
package myapp.user.v1;
// Version-specific message
message User {
string id = 1;
string name = 2;
string email = 3;
}
// user/v2/user.proto
syntax = "proto3";
package myapp.user.v2;
// Extended version with new fields
message User {
string id = 1;
string name = 2;
string email = 3;
string phone = 4; // New field
repeated string tags = 5; // New field
}
MockForge Integration
Automatic Discovery
MockForge automatically discovers .proto files in the configured directory:
# Default proto directory
mockforge serve --grpc-port 50051
# Custom proto directory
MOCKFORGE_PROTO_DIR=my-protos mockforge serve --grpc-port 50051
Service Registration
MockForge automatically registers all discovered services:
# List available services
grpcurl -plaintext localhost:50051 list
# Output:
# grpc.reflection.v1alpha.ServerReflection
# myapp.user.UserService
# myapp.payment.PaymentService
Dynamic Response Generation
MockForge generates responses based on proto message schemas:
message UserResponse {
string user_id = 1; // Generates UUID
string name = 2; // Generates random name
string email = 3; // Generates valid email
int64 created_at = 4; // Generates timestamp
UserStatus status = 5; // Random enum value
}
Template Support
Use MockForge templates for custom responses:
message UserResponse {
string user_id = 1; // {{uuid}}
string name = 2; // {{request.user_id == "123" ? "John Doe" : "Jane Smith"}}
string email = 3; // {{name | replace(" ", ".") | lower}}@example.com
int64 created_at = 4; // {{now}}
UserStatus status = 5; // ACTIVE
}
Best Practices
Naming Conventions
- Packages: Use lowercase with dots (e.g., myapp.user.v1)
- Services: Use PascalCase with “Service” suffix (e.g., UserService)
- Messages: Use PascalCase (e.g., UserProfile)
- Fields: Use snake_case (e.g., user_id, created_at)
- Enums: Use PascalCase for type, SCREAMING_SNAKE_CASE for values
Field Numbering
- Reserve numbers: Don’t reuse field numbers from deleted fields
- Start from 1: Field numbers start from 1
- Gap for extensions: Leave gaps for future extensions
- Document reservations: Comment reserved field numbers
message User {
string user_id = 1;
string name = 2;
string email = 3;
// reserved 4, 5, 6; // Reserved for future use
int64 created_at = 7;
}
Import Organization
- Standard imports: Import well-known protobuf types first
- Local imports: Import project-specific proto files
- Relative paths: Use relative paths for local imports
syntax = "proto3";
import "google/protobuf/timestamp.proto";
import "google/protobuf/empty.proto";
import "common/types.proto";
import "user/profile.proto";
package myapp.user;
Documentation
- Service comments: Document what each service does
- Method comments: Explain each RPC method
- Field comments: Describe field purposes and constraints
- Enum comments: Document enum value meanings
// User management service
service UserService {
// Get a user by ID
rpc GetUser(GetUserRequest) returns (User);
// List users with pagination
rpc ListUsers(ListUsersRequest) returns (ListUsersResponse);
}
message User {
string user_id = 1; // Unique identifier for the user
string email = 2; // User's email address (must be valid)
UserStatus status = 3; // Current account status
}
enum UserStatus {
UNKNOWN = 0; // Default value
ACTIVE = 1; // Account is active
INACTIVE = 2; // Account is deactivated
SUSPENDED = 3; // Account is temporarily suspended
}
Migration and Evolution
Adding Fields
// Original
message User {
string user_id = 1;
string name = 2;
}
// Extended (backwards compatible)
message User {
string user_id = 1;
string name = 2;
string email = 3; // New field
bool active = 4; // New field
}
Reserved Fields
message User {
reserved 5, 6, 7; // Reserved for future use
reserved "old_field"; // Reserved field name
string user_id = 1;
string name = 2;
string email = 3;
}
Versioning Strategy
- Package versioning: Include version in package name
- Service evolution: Extend services with new methods
- Deprecation notices: Mark deprecated fields
- Breaking changes: Create new service versions
Validation
Proto File Validation
# Validate proto syntax (protoc requires an output directive, so discard it)
protoc --proto_path=. --descriptor_set_out=/dev/null myproto.proto
# Generate descriptors
protoc --proto_path=. --descriptor_set_out=descriptor.pb myproto.proto
MockForge Integration Testing
# Test proto compilation
MOCKFORGE_PROTO_DIR=my-protos cargo build
# Verify service discovery
mockforge serve --grpc-port 50051 &
sleep 2
grpcurl -plaintext localhost:50051 list
Cross-Language Compatibility
# Generate code for multiple languages
protoc --proto_path=. \
--go_out=. \
--java_out=. \
--python_out=. \
--cpp_out=. \
myproto.proto
Troubleshooting
Common Proto Issues
Import resolution: Ensure all imported proto files are available in the proto path
Field conflicts: Check for duplicate field numbers or names within messages
Circular imports: Avoid circular dependencies between proto files
Syntax errors: Use protoc to validate proto file syntax
MockForge-Specific Issues
Services not discovered: Check proto directory configuration and file permissions
Invalid responses: Verify proto message definitions match expected schemas
Compilation failures: Check for proto syntax errors and missing dependencies
Template errors: Ensure template variables are properly escaped in proto comments
Debug Commands
# Check proto file discovery
find proto/ -name "*.proto" -type f
# Validate proto files
for file in $(find proto/ -name "*.proto"); do
echo "Validating $file..."
protoc --proto_path=. --descriptor_set_out=/dev/null "$file"
done
# Test service compilation
MOCKFORGE_PROTO_DIR=proto/ cargo check -p mockforge-grpc
# Inspect generated code
cargo doc --open --package mockforge-grpc
Protocol Buffers provide a robust foundation for gRPC service definitions. By following these guidelines and leveraging MockForge’s dynamic discovery capabilities, you can create well-structured, maintainable, and testable gRPC services.
Streaming
gRPC supports four fundamental communication patterns, with three involving streaming. MockForge provides comprehensive support for all streaming patterns, enabling realistic testing of real-time and batch data scenarios.
Streaming Patterns
Unary (Request → Response)
Standard request-response pattern - one message in, one message out.
Server Streaming (Request → Stream of Responses)
Single request initiates a stream of responses from server to client.
Client Streaming (Stream of Requests → Response)
Client sends multiple messages, server responds once with aggregated result.
Bidirectional Streaming (Stream ↔ Stream)
Both client and server can send messages independently and simultaneously.
Server Streaming
Basic Server Streaming
service NotificationService {
rpc Subscribe(SubscribeRequest) returns (stream Notification);
}
message SubscribeRequest {
repeated string topics = 1;
SubscriptionType type = 2;
}
message Notification {
string topic = 1;
string message = 2;
google.protobuf.Timestamp timestamp = 3;
Severity severity = 4;
}
enum SubscriptionType {
REALTIME = 0;
BATCH = 1;
}
enum Severity {
INFO = 0;
WARNING = 1;
ERROR = 2;
CRITICAL = 3;
}
MockForge Configuration
Server streaming generates multiple responses based on configuration:
// Basic server streaming - fixed number of responses
{"ts":0,"dir":"out","text":"{\"topic\":\"system\",\"message\":\"Connected\",\"severity\":\"INFO\"}"}
{"ts":1000,"dir":"out","text":"{\"topic\":\"user\",\"message\":\"New user registered\",\"severity\":\"INFO\"}"}
{"ts":2000,"dir":"out","text":"{\"topic\":\"payment\",\"message\":\"Payment processed\",\"severity\":\"INFO\"}"}
{"ts":3000,"dir":"out","text":"{\"topic\":\"system\",\"message\":\"Maintenance scheduled\",\"severity\":\"WARNING\"}"}
Dynamic Server Streaming
// Template-based dynamic responses
{"ts":0,"dir":"out","text":"{\"topic\":\"{{request.topics[0]}}\",\"message\":\"Subscribed to {{request.topics.length}} topics\",\"timestamp\":\"{{now}}\"}"}
{"ts":1000,"dir":"out","text":"{\"topic\":\"{{randFromArray request.topics}}\",\"message\":\"{{randParagraph}}\",\"timestamp\":\"{{now}}\"}"}
{"ts":2000,"dir":"out","text":"{\"topic\":\"{{randFromArray request.topics}}\",\"message\":\"{{randSentence}}\",\"timestamp\":\"{{now}}\"}"}
{"ts":5000,"dir":"out","text":"{\"topic\":\"system\",\"message\":\"Stream ending\",\"timestamp\":\"{{now}}\"}"}
Testing Server Streaming
Using grpcurl
# Test server streaming
grpcurl -plaintext -d '{"topics": ["user", "payment"], "type": "REALTIME"}' \
localhost:50051 myapp.NotificationService/Subscribe
Using Node.js
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');
const packageDefinition = protoLoader.loadSync('proto/notification.proto');
const proto = grpc.loadPackageDefinition(packageDefinition);
const client = new proto.myapp.NotificationService(
'localhost:50051',
grpc.credentials.createInsecure()
);
const call = client.Subscribe({
topics: ['user', 'payment'],
type: 'REALTIME'
});
call.on('data', (notification) => {
console.log('Notification:', notification);
});
call.on('end', () => {
console.log('Stream ended');
});
call.on('error', (error) => {
console.error('Error:', error);
});
Client Streaming
Basic Client Streaming
service UploadService {
rpc UploadFile(stream FileChunk) returns (UploadResponse);
}
message FileChunk {
bytes data = 1;
int32 sequence = 2;
bool is_last = 3;
}
message UploadResponse {
string file_id = 1;
int64 total_size = 2;
string checksum = 3;
UploadStatus status = 4;
}
enum UploadStatus {
SUCCESS = 0;
FAILED = 1;
PARTIAL = 2;
}
MockForge Configuration
Client streaming processes multiple incoming messages and returns a single response:
// Client streaming - processes multiple chunks
{"ts":0,"dir":"in","text":".*","response":"{\"file_id\":\"{{uuid}}\",\"total_size\":1024,\"status\":\"SUCCESS\"}"}
Advanced Client Streaming
// Process chunks and maintain state
{"ts":0,"dir":"in","text":"{\"sequence\":0}","response":"Chunk 0 received","state":"uploading","chunks":1}
{"ts":0,"dir":"in","text":"{\"sequence\":1}","response":"Chunk 1 received","chunks":"{{request.ws.state.chunks + 1}}"}
{"ts":0,"dir":"in","text":"{\"is_last\":true}","response":"{\"file_id\":\"{{uuid}}\",\"total_size\":\"{{request.ws.state.chunks * 1024}}\",\"status\":\"SUCCESS\"}"}
Testing Client Streaming
Using grpcurl
# Send multiple messages on a single client-streaming call
# (grpcurl reads newline-separated JSON messages from stdin with -d @)
grpcurl -plaintext -d @ localhost:50051 myapp.UploadService/UploadFile <<'EOF'
{"data": "chunk1", "sequence": 0}
{"data": "chunk2", "sequence": 1}
{"data": "chunk3", "sequence": 2, "is_last": true}
EOF
Using Python
import grpc
from upload_pb2 import FileChunk
from upload_pb2_grpc import UploadServiceStub
def generate_chunks():
# Simulate file chunks
chunks = [
b"chunk1",
b"chunk2",
b"chunk3"
]
for i, chunk in enumerate(chunks):
yield FileChunk(
data=chunk,
sequence=i,
is_last=(i == len(chunks) - 1)
)
channel = grpc.insecure_channel('localhost:50051')
stub = UploadServiceStub(channel)
response = stub.UploadFile(generate_chunks())
print(f"Upload result: {response}")
Bidirectional Streaming
Basic Bidirectional Streaming
service ChatService {
rpc Chat(stream ChatMessage) returns (stream ChatMessage);
}
message ChatMessage {
string user_id = 1;
string content = 2;
MessageType type = 3;
google.protobuf.Timestamp timestamp = 4;
}
enum MessageType {
TEXT = 0;
JOIN = 1;
LEAVE = 2;
SYSTEM = 3;
}
MockForge Configuration
Bidirectional streaming handles both incoming and outgoing messages:
// Welcome message on connection
{"ts":0,"dir":"out","text":"{\"user_id\":\"system\",\"content\":\"Welcome to chat!\",\"type\":\"SYSTEM\"}"}
// Handle join messages
{"ts":0,"dir":"in","text":"{\"type\":\"JOIN\"}","response":"{\"user_id\":\"system\",\"content\":\"{{request.ws.message.user_id}} joined the chat\",\"type\":\"SYSTEM\"}"}
// Handle text messages
{"ts":0,"dir":"in","text":"{\"type\":\"TEXT\"}","response":"{\"user_id\":\"{{request.ws.message.user_id}}\",\"content\":\"{{request.ws.message.content}}\",\"type\":\"TEXT\"}"}
// Handle leave messages
{"ts":0,"dir":"in","text":"{\"type\":\"LEAVE\"}","response":"{\"user_id\":\"system\",\"content\":\"{{request.ws.message.user_id}} left the chat\",\"type\":\"SYSTEM\"}"}
// Periodic system messages
{"ts":30000,"dir":"out","text":"{\"user_id\":\"system\",\"content\":\"Server uptime: {{randInt 1 24}} hours\",\"type\":\"SYSTEM\"}"}
Advanced Bidirectional Patterns
// State-aware responses
{"ts":0,"dir":"in","text":".*","condition":"{{!request.ws.state.authenticated}}","response":"Please authenticate first"}
{"ts":0,"dir":"in","text":"AUTH","response":"Authenticated","state":"authenticated"}
{"ts":0,"dir":"in","text":".*","condition":"{{request.ws.state.authenticated}}","response":"{{request.ws.message}}"}
{"ts":0,"dir":"in","text":"HELP","response":"Available commands: MSG, QUIT, STATUS"}
{"ts":0,"dir":"in","text":"STATUS","response":"Connected users: {{randInt 1 50}}"}
{"ts":0,"dir":"in","text":"QUIT","response":"Goodbye!","close":true}
Testing Bidirectional Streaming
Using Node.js
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');
const packageDefinition = protoLoader.loadSync('proto/chat.proto');
const proto = grpc.loadPackageDefinition(packageDefinition);
const client = new proto.myapp.ChatService(
'localhost:50051',
grpc.credentials.createInsecure()
);
const call = client.Chat();
// Handle incoming messages
call.on('data', (message) => {
console.log('Received:', message);
});
// Send messages
setInterval(() => {
call.write({
user_id: 'user123',
content: 'Hello from client',
type: 'TEXT'
});
}, 2000);
// Send join message
call.write({
user_id: 'user123',
content: 'Joined chat',
type: 'JOIN'
});
// Handle stream end
call.on('end', () => {
console.log('Stream ended');
});
// Close after 30 seconds
setTimeout(() => {
call.write({
user_id: 'user123',
content: 'Leaving chat',
type: 'LEAVE'
});
call.end();
}, 30000);
Streaming Configuration
Environment Variables
# Streaming behavior
MOCKFORGE_GRPC_STREAM_TIMEOUT=30000 # Stream timeout in ms
MOCKFORGE_GRPC_MAX_STREAM_MESSAGES=1000 # Max messages per stream
MOCKFORGE_GRPC_STREAM_BUFFER_SIZE=1024 # Buffer size for streaming
# Response timing
MOCKFORGE_GRPC_LATENCY_MIN_MS=10 # Minimum response latency
MOCKFORGE_GRPC_LATENCY_MAX_MS=100 # Maximum response latency
Stream Control Templates
// Conditional streaming
{"ts":0,"dir":"out","text":"Starting stream","condition":"{{request.stream_enabled}}"}
{"ts":1000,"dir":"out","text":"Stream data","condition":"{{request.ws.state.active}}"}
{"ts":0,"dir":"out","text":"Stream ended","condition":"{{request.ws.message.type === 'END'}}","close":true}
// Dynamic intervals
{"ts":"{{randInt 1000 5000}}","dir":"out","text":"Random interval message"}
{"ts":"{{request.interval || 2000}}","dir":"out","text":"Custom interval message"}
Performance Considerations
Memory Management
// Limit message history
{"ts":0,"dir":"in","text":".*","condition":"{{(request.ws.state.messageCount || 0) < 100}}","response":"Message received","messageCount":"{{(request.ws.state.messageCount || 0) + 1}}"}
{"ts":0,"dir":"in","text":".*","condition":"{{(request.ws.state.messageCount || 0) >= 100}}","response":"Message limit reached"}
Connection Limits
// Global connection tracking (requires custom implementation)
{"ts":0,"dir":"out","text":"Connection {{request.ws.connectionId}} established"}
{"ts":300000,"dir":"out","text":"Connection timeout","close":true}
Load Balancing
// Simulate load balancer behavior
{"ts":"{{randInt 100 1000}}","dir":"out","text":"Response from server {{randInt 1 3}}"}
{"ts":"{{randInt 2000 5000}}","dir":"out","text":"Health check from server {{randInt 1 3}}"}
Error Handling in Streams
Stream Errors
// Handle invalid messages
{"ts":0,"dir":"in","text":"","response":"Empty message not allowed"}
{"ts":0,"dir":"in","text":".{500,}","response":"Message too long (max 500 chars)"}
// Simulate network errors
{"ts":5000,"dir":"out","text":"Network error occurred","error":true,"close":true}
Recovery Patterns
// Automatic reconnection
{"ts":0,"dir":"out","text":"Connection lost, attempting reconnect..."}
{"ts":2000,"dir":"out","text":"Reconnected successfully"}
{"ts":100,"dir":"out","text":"Resuming stream from message {{request.ws.state.lastMessageId}}"}
Testing Strategies
Unit Testing Streams
// test-streaming.js
const { expect } = require('chai');
describe('gRPC Streaming', () => {
it('should handle server streaming', (done) => {
const call = client.subscribeNotifications({ topics: ['test'] });
let messageCount = 0;
call.on('data', (notification) => {
messageCount++;
expect(notification).to.have.property('topic');
expect(notification).to.have.property('message');
});
call.on('end', () => {
expect(messageCount).to.be.greaterThan(0);
done();
});
// End test after 5 seconds
setTimeout(() => call.cancel(), 5000);
});
it('should handle client streaming', (done) => {
const call = client.uploadFile((error, response) => {
expect(error).to.be.null;
expect(response).to.have.property('file_id');
expect(response.status).to.equal('SUCCESS');
done();
});
// Send test chunks
call.write({ data: Buffer.from('test'), sequence: 0 });
call.write({ data: Buffer.from('data'), sequence: 1, is_last: true });
call.end();
});
});
Load Testing
#!/bin/bash
# load-test-streams.sh
CONCURRENT_STREAMS=10
DURATION=60
echo "Load testing $CONCURRENT_STREAMS concurrent streams for ${DURATION}s"
for i in $(seq 1 $CONCURRENT_STREAMS); do
node stream-client.js &
done
# Wait for test duration
sleep $DURATION
# Kill all clients
pkill -f stream-client.js
echo "Load test completed"
Best Practices
Stream Design
- Appropriate Patterns: Choose the right streaming pattern for your use case
- Message Size: Keep individual messages reasonably sized
- Heartbeat Messages: Include periodic keepalive messages for long-running streams
- Error Recovery: Implement proper error handling and recovery mechanisms
Performance Optimization
- Buffering: Use appropriate buffer sizes for your throughput requirements
- Compression: Enable compression for large message streams
- Connection Reuse: Reuse connections when possible
- Resource Limits: Set appropriate limits on concurrent streams and message rates
Monitoring and Debugging
- Stream Metrics: Monitor stream duration, message counts, and error rates
- Logging: Enable detailed logging for debugging streaming issues
- Tracing: Implement request tracing across stream messages
- Health Checks: Regular health checks for long-running streams
Client Compatibility
- Protocol Versions: Ensure compatibility with different gRPC versions
- Language Support: Test with multiple client language implementations
- Network Conditions: Test under various network conditions (latency, packet loss)
- Browser Support: Consider WebSocket fallback for web clients
Troubleshooting
Common Streaming Issues
Stream doesn’t start: Check proto file definitions and service registration
Messages not received: Verify message encoding and template syntax
Stream hangs: Check for proper stream termination and timeout settings
Performance degradation: Monitor resource usage and adjust buffer sizes
Client disconnects: Implement proper heartbeat and reconnection logic
Debug Commands
# Monitor active streams
grpcurl -plaintext localhost:50051 list
# Check stream status
netstat -tlnp | grep :50051
# View stream logs
tail -f mockforge.log | grep -E "(stream|grpc)"
# Test basic connectivity
grpcurl -plaintext localhost:50051 grpc.reflection.v1alpha.ServerReflection/ServerReflectionInfo
Performance Profiling
# Profile gRPC performance
cargo flamegraph --bin mockforge-cli -- serve --grpc-port 50051
# Monitor system resources
htop -p $(pgrep mockforge)
# Network monitoring
iftop -i lo
Streaming patterns enable powerful real-time communication scenarios. MockForge’s comprehensive streaming support allows you to create sophisticated mock environments that accurately simulate production streaming services for thorough testing and development.
Advanced Data Synthesis
MockForge provides sophisticated data synthesis capabilities that go beyond simple random data generation. The advanced data synthesis system combines intelligent field inference, deterministic seeding, relationship-aware generation, and cross-endpoint validation to create realistic, coherent, and reproducible test data.
Overview
The advanced data synthesis system consists of four main components:
- Smart Mock Generator - Intelligent field-based mock data generation with deterministic seeding
- Schema Graph Extraction - Automatic discovery of relationships from protobuf schemas
- RAG-Driven Synthesis - Domain-aware data generation using Retrieval-Augmented Generation
- Validation Framework - Cross-endpoint consistency and integrity validation
These components work together to provide enterprise-grade test data generation that maintains referential integrity across your entire gRPC service ecosystem.
Smart Mock Generator
The Smart Mock Generator provides intelligent mock data generation based on field names, types, and patterns. It automatically detects the intent behind field names and generates appropriate realistic data.
Field Name Intelligence
The generator automatically infers appropriate data types based on field names:
| Field Pattern | Generated Data Type | Example Values |
|---|---|---|
| email, email_address | Realistic email addresses | user@example.com, alice.smith@company.org |
| phone, mobile, phone_number | Formatted phone numbers | +1-555-0123, (555) 123-4567 |
| id, user_id, order_id | Sequential or UUID-based IDs | user_001, 550e8400-e29b-41d4-a716-446655440000 |
| name, first_name, last_name | Realistic names | John Doe, Alice, Johnson |
| created_at, updated_at, timestamp | ISO timestamps | 2023-10-15T14:30:00Z |
| latitude, longitude | Geographic coordinates | 40.7128, -74.0060 |
| url, website | Valid URLs | https://example.com |
| token, api_key | Security tokens | sk_live_4eC39HqLyjWDarjtT1zdp7dc |
Deterministic Generation
For reproducible test fixtures, the Smart Mock Generator supports deterministic seeding:
use mockforge_grpc::reflection::smart_mock_generator::{SmartMockGenerator, SmartMockConfig};

// Create a deterministic generator with a fixed seed
let mut generator = SmartMockGenerator::new_with_seed(
    SmartMockConfig::default(),
    12345, // seed value
);

// Generate reproducible data
let uuid1 = generator.generate_uuid();
let email = generator.generate_random_string(10);

// Reset to regenerate the same sequence
generator.reset();
let uuid2 = generator.generate_uuid(); // Same as uuid1
This ensures that your tests produce consistent results across different runs and environments.
Schema Graph Extraction
The schema graph extraction system analyzes your protobuf definitions to automatically discover relationships and foreign key patterns between entities.
Foreign Key Detection
The system uses naming conventions to detect foreign key relationships:
message Order {
string id = 1;
string user_id = 2; // → Detected as foreign key to User
string customer_ref = 3; // → Detected as reference to Customer
int64 timestamp = 4;
}
message User {
string id = 1; // → Detected as primary key
string name = 2;
string email = 3;
}
Common Foreign Key Patterns:
- user_id → references the User entity
- orderId → references the Order entity
- customer_ref → references the Customer entity
Relationship Types
The system identifies various relationship types:
- Foreign Key: Direct ID references (user_id → User)
- Embedded: Nested message types within other messages
- One-to-Many: Repeated field relationships
- Composition: Ownership relationships between entities
RAG-Driven Data Synthesis
RAG (Retrieval-Augmented Generation) enables context-aware data generation using domain knowledge from documentation, examples, and business rules.
Configuration
grpc:
data_synthesis:
rag:
enabled: true
api_endpoint: "https://api.openai.com/v1/chat/completions"
model: "gpt-3.5-turbo"
embedding_model: "text-embedding-ada-002"
similarity_threshold: 0.7
max_documents: 5
context_sources:
- id: "user_docs"
type: "documentation"
path: "./docs/user_guide.md"
weight: 1.0
- id: "examples"
type: "examples"
path: "./examples/sample_data.json"
weight: 0.8
Business Rule Extraction
The RAG system automatically extracts business rules from your documentation:
- Email Validation: “Email fields must follow valid email format”
- Phone Formatting: “Phone numbers should be in international format”
- ID Requirements: “User IDs must be alphanumeric and 8 characters long”
- Relationship Constraints: “Orders must reference valid existing users”
Domain-Aware Generation
Instead of generic random data, RAG generates contextually appropriate values:
message User {
string role = 1; // Context: "admin", "user", "moderator"
string department = 2; // Context: "engineering", "marketing", "sales"
string location = 3; // Context: "San Francisco", "New York", "London"
}
Cross-Endpoint Validation
The validation framework ensures data coherence across different endpoints and validates referential integrity.
Validation Rules
The framework supports multiple types of validation rules:
Built-in Validations:
- Foreign key existence validation
- Field format validation (email, phone, URL)
- Range validation for numeric fields
- Unique constraint validation
Custom Validation Rules:
grpc:
data_synthesis:
validation:
enabled: true
strict_mode: false
custom_rules:
- name: "email_format"
applies_to: ["User", "Customer"]
fields: ["email"]
type: "format"
pattern: "^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$"
error: "Invalid email format"
- name: "age_range"
applies_to: ["User"]
fields: ["age"]
type: "range"
min: 0
max: 120
error: "Age must be between 0 and 120"
Referential Integrity
The validator automatically checks that:
- Foreign key references point to existing entities
- Required relationships are satisfied
- Cross-service data dependencies are maintained
- Business constraints are enforced
Configuration
Environment Variables
# Enable advanced data synthesis
MOCKFORGE_DATA_SYNTHESIS_ENABLED=true
# Deterministic generation
MOCKFORGE_DATA_SYNTHESIS_SEED=12345
MOCKFORGE_DATA_SYNTHESIS_DETERMINISTIC=true
# RAG configuration
MOCKFORGE_RAG_ENABLED=true
MOCKFORGE_RAG_API_KEY=your-api-key
MOCKFORGE_RAG_MODEL=gpt-3.5-turbo
# Validation settings
MOCKFORGE_VALIDATION_ENABLED=true
MOCKFORGE_VALIDATION_STRICT_MODE=false
Configuration File
grpc:
port: 50051
proto_dir: "proto/"
data_synthesis:
enabled: true
smart_generator:
field_inference: true
use_faker: true
deterministic: true
seed: 42
max_depth: 5
rag:
enabled: true
api_endpoint: "https://api.openai.com/v1/chat/completions"
api_key: "${RAG_API_KEY}"
model: "gpt-3.5-turbo"
embedding_model: "text-embedding-ada-002"
similarity_threshold: 0.7
max_context_length: 2000
cache_contexts: true
validation:
enabled: true
strict_mode: false
max_validation_depth: 3
cache_results: true
schema_extraction:
extract_relationships: true
detect_foreign_keys: true
confidence_threshold: 0.8
Example Usage
Basic Smart Generation
# Start MockForge with advanced data synthesis
MOCKFORGE_DATA_SYNTHESIS_ENABLED=true \
MOCKFORGE_DATA_SYNTHESIS_SEED=12345 \
mockforge serve --grpc-port 50051
With RAG Enhancement
# Start with RAG-powered domain awareness
MOCKFORGE_DATA_SYNTHESIS_ENABLED=true \
MOCKFORGE_RAG_ENABLED=true \
MOCKFORGE_RAG_API_KEY=your-api-key \
MOCKFORGE_VALIDATION_ENABLED=true \
mockforge serve --grpc-port 50051
Testing Deterministic Generation
# Generate data twice with same seed - should be identical
grpcurl -plaintext -d '{"user_id": "123"}' \
localhost:50051 com.example.UserService/GetUser
# Reset and call again - will generate same response
grpcurl -plaintext -d '{"user_id": "123"}' \
localhost:50051 com.example.UserService/GetUser
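The same check can be automated; a sketch in Python asserting that repeated calls reproduce the payload (assumes generated user_service_pb2 modules and the deterministic seeding behavior described above):
import grpc
from user_service_pb2 import GetUserRequest        # assumed generated module
from user_service_pb2_grpc import UserServiceStub  # assumed generated module

stub = UserServiceStub(grpc.insecure_channel("localhost:50051"))

r1 = stub.GetUser(GetUserRequest(user_id="123"))
r2 = stub.GetUser(GetUserRequest(user_id="123"))

# With a fixed seed, both responses should carry identical generated data
assert r1 == r2, "deterministic seeding should reproduce the same response"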
Best Practices
Deterministic Testing
- Use fixed seeds in CI/CD pipelines for reproducible tests
- Reset generators between test cases for consistency
- Document seed values used in critical test scenarios
Schema Design for Synthesis
- Use consistent naming conventions for foreign keys (user_id, customer_ref)
- Add comments to proto files describing business rules
- Consider field naming that indicates data type (email_address vs contact)
RAG Integration
- Provide high-quality domain documentation as context sources
- Use specific, actionable descriptions in documentation
- Monitor API costs and implement appropriate caching
Validation Strategy
- Start with lenient validation and gradually add stricter rules
- Use warnings for potential issues, errors for critical problems
- Provide helpful error messages with suggested fixes
Advanced Scenarios
Multi-Service Data Coherence
When mocking multiple related gRPC services, ensure data coherence:
# Start user service
MOCKFORGE_DATA_SYNTHESIS_SEED=100 \
mockforge serve --grpc-port 50051 --proto-dir user-proto &
# Start order service with same seed for consistency
MOCKFORGE_DATA_SYNTHESIS_SEED=100 \
mockforge serve --grpc-port 50052 --proto-dir order-proto &
Custom Field Overrides
Override specific fields with custom values:
grpc:
data_synthesis:
field_overrides:
"admin_email": "admin@company.com"
"api_version": "v2.1"
"environment": "testing"
Business Rule Templates
Define reusable business rule templates:
grpc:
data_synthesis:
rule_templates:
- name: "financial_data"
applies_to: ["Invoice", "Payment", "Transaction"]
rules:
- field_pattern: "*_amount"
type: "range"
min: 0.01
max: 10000.00
- field_pattern: "*_currency"
type: "enum"
values: ["USD", "EUR", "GBP"]
Troubleshooting
Common Issues
Generated data not realistic enough
- Enable RAG synthesis with domain documentation
- Check field naming conventions for better inference
- Add custom business rules for specific constraints
Non-deterministic behavior
- Ensure deterministic: true is set and provide a seed value
- Reset generators between test runs
- Check for external randomness sources
Validation failures
- Review foreign key naming conventions
- Ensure referenced entities are generated before referencing ones
- Check custom validation rule patterns
RAG not working
- Verify API credentials and endpoints
- Check context source file paths and permissions
- Monitor API rate limits and error responses
Debug Commands
# Test data synthesis configuration
mockforge validate-config
# Show detected schema relationships
mockforge analyze-schema --proto-dir proto/
# Test deterministic generation
MOCKFORGE_DATA_SYNTHESIS_DEBUG=true \
mockforge serve --grpc-port 50051
Advanced data synthesis transforms MockForge from a simple mocking tool into a comprehensive test data management platform, enabling realistic, consistent, and validated test scenarios across your entire service architecture.
GraphQL Mocking
MockForge provides comprehensive GraphQL API mocking capabilities, allowing you to create realistic GraphQL endpoints with schema-driven response generation, introspection support, and custom resolvers.
Overview
MockForge’s GraphQL support includes:
- Schema-Driven Mocking: Generate responses based on GraphQL schema definitions
- Introspection Support: Full GraphQL introspection query support
- Custom Resolvers: Implement custom logic for specific fields
- Query Validation: Validate incoming GraphQL queries against schema
- Subscription Support: Mock GraphQL subscriptions with real-time updates
- Schema Stitching: Combine multiple schemas into unified endpoints
- Performance Simulation: Configurable latency and complexity limits
Getting Started
Basic Setup
Enable GraphQL mocking in your MockForge configuration:
# config.yaml
graphql:
enabled: true
endpoint: "/graphql"
schema_file: "schema.graphql"
introspection: true
playground: true
server:
http_port: 3000
Start MockForge with GraphQL support:
mockforge serve --config config.yaml
Access your GraphQL endpoint:
- GraphQL Endpoint: http://localhost:3000/graphql
- GraphQL Playground: http://localhost:3000/graphql/playground
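With the server running, you can exercise the endpoint from any HTTP client; a sketch in Python (assumes the requests package and the default port and endpoint configured above):
import requests

query = """
query GetUsers {
  users { id name email }
}
"""

resp = requests.post(
    "http://localhost:3000/graphql",
    json={"query": query},
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["data"]["users"])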
Schema Definition
Create a GraphQL schema file:
# schema.graphql
type User {
id: ID!
name: String!
email: String!
age: Int
posts: [Post!]!
profile: UserProfile
}
type Post {
id: ID!
title: String!
content: String!
published: Boolean!
author: User!
createdAt: String!
tags: [String!]!
}
type UserProfile {
bio: String
website: String
location: String
avatarUrl: String
}
type Query {
users: [User!]!
user(id: ID!): User
posts: [Post!]!
post(id: ID!): Post
searchUsers(query: String!): [User!]!
}
type Mutation {
createUser(input: CreateUserInput!): User!
updateUser(id: ID!, input: UpdateUserInput!): User!
deleteUser(id: ID!): Boolean!
createPost(input: CreatePostInput!): Post!
}
type Subscription {
userCreated: User!
postPublished: Post!
userOnline(userId: ID!): Boolean!
}
input CreateUserInput {
name: String!
email: String!
age: Int
}
input UpdateUserInput {
name: String
email: String
age: Int
}
input CreatePostInput {
title: String!
content: String!
authorId: ID!
tags: [String!]
}
Configuration Options
Basic Configuration
graphql:
# Enable GraphQL support
enabled: true
# GraphQL endpoint path
endpoint: "/graphql"
# Schema configuration
schema_file: "schema.graphql"
schema_url: "https://api.example.com/schema" # Alternative: fetch from URL
# Development features
introspection: true
playground: true
playground_endpoint: "/graphql/playground"
# Response generation
mock_responses: true
default_list_length: 5
# Validation
validate_queries: true
max_query_depth: 10
max_query_complexity: 1000
Advanced Configuration
graphql:
# Performance settings
performance:
enable_query_complexity_analysis: true
max_query_depth: 15
max_query_complexity: 1000
timeout_ms: 30000
# Caching
caching:
enabled: true
ttl_seconds: 300
max_cache_size: 1000
# Custom resolvers
resolvers:
directory: "./graphql/resolvers"
auto_load: true
# Subscription settings
subscriptions:
enabled: true
transport: "websocket"
heartbeat_interval: 30
# Error handling
errors:
include_stack_trace: true
include_extensions: true
custom_error_codes: true
Response Generation
Automatic Response Generation
MockForge automatically generates realistic responses based on your schema:
# Query
query GetUsers {
users {
id
name
email
age
posts {
title
published
}
}
}
{
"data": {
"users": [
{
"id": "1a2b3c4d",
"name": "Alice Johnson",
"email": "alice.johnson@example.com",
"age": 29,
"posts": [
{
"title": "Getting Started with GraphQL",
"published": true
},
{
"title": "Advanced Query Techniques",
"published": false
}
]
},
{
"id": "2b3c4d5e",
"name": "Bob Smith",
"email": "bob.smith@example.com",
"age": 34,
"posts": [
{
"title": "Building Scalable APIs",
"published": true
}
]
}
]
}
}
Template-Based Responses
Use templates for more control over response data:
# graphql/responses/user.yaml
query: "query GetUser($id: ID!)"
response:
data:
user:
id: "{{args.id}}"
name: "{{faker.name.fullName}}"
email: "{{faker.internet.email}}"
age: "{{randInt 18 65}}"
profile:
bio: "{{faker.lorem.sentence}}"
website: "{{faker.internet.url}}"
location: "{{faker.address.city}}, {{faker.address.state}}"
avatarUrl: "https://api.dicebear.com/7.x/avataaars/svg?seed={{uuid}}"
Custom Field Resolvers
Create custom resolvers for specific fields:
// graphql/resolvers/user.js
const { v4: uuid } = require('uuid');
module.exports = {
User: {
// Custom resolver for posts field
posts: (parent, args, context) => {
return context.dataSources.posts.getByAuthorId(parent.id);
},
// Computed field
fullName: (parent) => {
return `${parent.firstName} ${parent.lastName}`;
},
// Async resolver with external data
socialStats: async (parent, args, context) => {
return await context.dataSources.social.getStats(parent.id);
}
},
Query: {
// Custom query resolver
searchUsers: (parent, args, context) => {
const { query, limit = 10 } = args;
return context.dataSources.users.search(query, limit);
}
},
Mutation: {
// Custom mutation resolver
createUser: (parent, args, context) => {
const { input } = args;
const user = {
id: uuid(),
...input,
createdAt: new Date().toISOString()
};
context.dataSources.users.create(user);
// Trigger subscription
context.pubsub.publish('USER_CREATED', { userCreated: user });
return user;
}
}
};
Data Sources
CSV Data Source
Connect GraphQL resolvers to CSV data:
# config.yaml
graphql:
data_sources:
users:
type: "csv"
file: "data/users.csv"
key_field: "id"
posts:
type: "csv"
file: "data/posts.csv"
key_field: "id"
relationships:
author_id: "users.id"
# data/users.csv
id,name,email,age
1,Alice Johnson,alice@example.com,29
2,Bob Smith,bob@example.com,34
3,Carol Davis,carol@example.com,27
REST API Data Source
Fetch data from external REST APIs:
graphql:
data_sources:
users:
type: "rest"
base_url: "https://jsonplaceholder.typicode.com"
endpoints:
getAll: "/users"
getById: "/users/{id}"
create:
method: "POST"
url: "/users"
posts:
type: "rest"
base_url: "https://jsonplaceholder.typicode.com"
endpoints:
getAll: "/posts"
getByUserId: "/posts?userId={userId}"
Database Data Source
Connect to databases for realistic data:
graphql:
data_sources:
database:
type: "postgresql"
connection_string: "postgresql://user:pass@localhost/mockdb"
tables:
users:
table: "users"
key_field: "id"
posts:
table: "posts"
key_field: "id"
relationships:
author_id: "users.id"
Subscriptions
WebSocket Subscriptions
Enable real-time GraphQL subscriptions:
graphql:
subscriptions:
enabled: true
transport: "websocket"
endpoint: "/graphql/ws"
heartbeat_interval: 30
connection_timeout: 60
Subscription Resolvers
// graphql/resolvers/subscriptions.js
module.exports = {
Subscription: {
userCreated: {
subscribe: (parent, args, context) => {
return context.pubsub.asyncIterator('USER_CREATED');
}
},
postPublished: {
subscribe: (parent, args, context) => {
return context.pubsub.asyncIterator('POST_PUBLISHED');
}
},
userOnline: {
subscribe: (parent, args, context) => {
const { userId } = args;
return context.pubsub.asyncIterator(`USER_ONLINE_${userId}`);
}
}
}
};
Triggering Subscriptions
Trigger subscriptions from mutations or external events:
// In mutation resolver
createPost: (parent, args, context) => {
const post = createNewPost(args.input);
// Trigger subscription
context.pubsub.publish('POST_PUBLISHED', {
postPublished: post
});
return post;
}
Schema Stitching
Combine multiple GraphQL schemas:
graphql:
schema_stitching:
enabled: true
schemas:
- name: "users"
file: "schemas/users.graphql"
endpoint: "http://users-service/graphql"
- name: "posts"
file: "schemas/posts.graphql"
endpoint: "http://posts-service/graphql"
- name: "comments"
file: "schemas/comments.graphql"
endpoint: "http://comments-service/graphql"
# Type extensions for stitching
extensions:
- |
extend type User {
posts: [Post]
}
- |
extend type Post {
comments: [Comment]
}
Error Handling
Custom Error Responses
Configure custom error handling:
graphql:
errors:
# Include detailed error information
include_stack_trace: true
include_extensions: true
# Custom error codes
custom_error_codes:
INVALID_INPUT: 400
UNAUTHORIZED: 401
FORBIDDEN: 403
NOT_FOUND: 404
RATE_LIMITED: 429
Error Response Format
{
"errors": [
{
"message": "User not found",
"locations": [
{
"line": 2,
"column": 3
}
],
"path": ["user"],
"extensions": {
"code": "NOT_FOUND",
"userId": "invalid-id",
"timestamp": "2024-01-01T00:00:00Z"
}
}
],
"data": {
"user": null
}
}
Performance & Optimization
Query Complexity Analysis
Prevent expensive queries:
graphql:
performance:
enable_query_complexity_analysis: true
max_query_depth: 10
max_query_complexity: 1000
complexity_scalarCost: 1
complexity_objectCost: 2
complexity_listFactor: 10
complexity_introspectionCost: 100
Caching
Cache responses for improved performance:
graphql:
caching:
enabled: true
ttl_seconds: 300
max_cache_size: 1000
cache_key_strategy: "query_and_variables"
# Cache per resolver
resolver_cache:
"Query.users": 600 # Cache for 10 minutes
"Query.posts": 300 # Cache for 5 minutes
Latency Simulation
Simulate real-world latency:
graphql:
latency:
enabled: true
default_delay_ms: 100
# Per-field latency
field_delays:
"Query.users": 200
"User.posts": 150
"Post.comments": 100
# Random latency ranges
random_delay:
min_ms: 50
max_ms: 500
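To confirm the simulated latency is applied, measure a round trip; a sketch in Python (assumes the requests package and the defaults above):
import time
import requests

start = time.monotonic()
requests.post(
    "http://localhost:3000/graphql",
    json={"query": "{ users { id } }"},
    timeout=10,
)
elapsed_ms = (time.monotonic() - start) * 1000
print(f"round trip: {elapsed_ms:.0f} ms")  # should reflect configured delays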
Testing & Development
GraphQL Playground
The built-in GraphQL Playground provides:
- Interactive Query Editor: Write and test GraphQL queries
- Schema Documentation: Browse your schema structure
- Query Variables: Test with different variable values
- Response Headers: View response metadata
- Subscription Testing: Test real-time subscriptions
Query Examples
Test your GraphQL API with these examples:
# Simple query
query GetAllUsers {
users {
id
name
email
}
}
# Query with variables
query GetUser($userId: ID!) {
user(id: $userId) {
id
name
email
posts {
title
published
}
}
}
# Mutation
mutation CreateUser($input: CreateUserInput!) {
createUser(input: $input) {
id
name
email
}
}
# Subscription
subscription UserUpdates {
userCreated {
id
name
email
}
}
Integration with HTTP Mocking
Combine GraphQL with REST API mocking:
# config.yaml
http:
enabled: true
spec: "openapi.yaml"
graphql:
enabled: true
schema_file: "schema.graphql"
# Use REST endpoints in GraphQL resolvers
graphql:
data_sources:
rest_api:
type: "rest"
base_url: "http://localhost:3000" # MockForge HTTP server
endpoints:
users: "/api/users"
posts: "/api/posts"
Best Practices
Schema Design
- Use Descriptive Names: Choose clear, self-documenting field names
- Follow Conventions: Use camelCase for fields, PascalCase for types
- Document Your Schema: Add descriptions to types and fields
- Version Carefully: Use field deprecation instead of breaking changes
Performance
- Implement Caching: Cache expensive resolver operations
- Limit Query Depth: Prevent deeply nested queries
- Use DataLoaders: Batch and cache data fetching
- Monitor Complexity: Track query complexity metrics
Testing
- Test Query Variations: Test different query structures and variables
- Validate Error Cases: Ensure proper error handling
- Test Subscriptions: Verify real-time functionality
- Performance Testing: Test with realistic query loads
Troubleshooting
Common Issues
Schema Loading Errors
# Validate GraphQL schema
mockforge graphql validate --schema schema.graphql
# Check schema syntax
graphql-schema-linter schema.graphql
Resolver Errors
# Enable debug logging
RUST_LOG=mockforge_graphql=debug mockforge serve
# Test individual resolvers
mockforge graphql test-resolver Query.users
Subscription Issues
# Test WebSocket connection
wscat -c ws://localhost:3000/graphql/ws
# Check subscription resolver
mockforge graphql test-subscription userCreated
This comprehensive GraphQL support makes MockForge a powerful tool for mocking modern GraphQL APIs with realistic data and behavior.
WebSocket Mocking
MockForge provides comprehensive WebSocket connection mocking with support for both scripted replay scenarios and interactive real-time communication. This enables testing of WebSocket-based applications, real-time APIs, and event-driven systems.
WebSocket Mocking Modes
MockForge supports two primary WebSocket mocking approaches:
1. Replay Mode (Scripted)
Pre-recorded message sequences that play back on schedule, simulating server behavior with precise timing control.
2. Interactive Mode (Real-time)
Dynamic responses based on client messages, enabling complex interactive scenarios and stateful communication.
Configuration
Basic WebSocket Setup
# Start MockForge with WebSocket support
mockforge serve --ws-port 3001 --ws-replay-file ws-scenario.jsonl
Environment Variables
# WebSocket configuration
MOCKFORGE_WS_ENABLED=true # Enable WebSocket support (default: false)
MOCKFORGE_WS_PORT=3001 # WebSocket server port
MOCKFORGE_WS_BIND=0.0.0.0 # Bind address
MOCKFORGE_WS_REPLAY_FILE=path/to/file.jsonl # Path to replay file
MOCKFORGE_WS_PATH=/ws # WebSocket endpoint path (default: /ws)
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true # Enable template processing
Command Line Options
mockforge serve \
--ws-port 3001 \
--ws-replay-file examples/ws-demo.jsonl \
--ws-path /websocket
Replay Mode
Replay mode uses JSONL-formatted files to define scripted message sequences with precise timing control.
Replay File Format
Each line in the replay file is a JSON object with the following structure:
{
"ts": 0,
"dir": "out",
"text": "Hello, client!",
"waitFor": "^CLIENT_READY$"
}
Field Definitions
- ts (number, required): Timestamp offset in milliseconds from connection start
- dir (string, required): Message direction:
  - "out" - Message sent from server to client
  - "in" - Expected message from client (for validation)
- text (string, required): Message content (supports templates)
- waitFor (string, optional): Regular expression to wait for before proceeding
Basic Replay Example
{"ts":0,"dir":"out","text":"Welcome to MockForge WebSocket server","waitFor":"^HELLO$"}
{"ts":1000,"dir":"out","text":"Connection established"}
{"ts":2000,"dir":"out","text":"Sending data: 42"}
{"ts":3000,"dir":"out","text":"Goodbye"}
Advanced Replay Features
Template Support
{"ts":0,"dir":"out","text":"Session {{uuid}} started at {{now}}"}
{"ts":1000,"dir":"out","text":"Random value: {{randInt 1 100}}"}
{"ts":2000,"dir":"out","text":"Future event at {{now+5m}}"}
Interactive Elements
{"ts":0,"dir":"out","text":"Please authenticate","waitFor":"^AUTH .+$"}
{"ts":100,"dir":"out","text":"Authentication successful"}
{"ts":200,"dir":"out","text":"Choose option (A/B/C)","waitFor":"^(A|B|C)$"}
Complex Message Structures
{"ts":0,"dir":"out","text":"{\"type\":\"welcome\",\"user\":{\"id\":\"{{uuid}}\",\"name\":\"John\"}}"}
{"ts":1000,"dir":"out","text":"{\"type\":\"data\",\"payload\":{\"items\":[{\"id\":1,\"value\":\"{{randInt 10 99}}\"},{\"id\":2,\"value\":\"{{randInt 100 999}}\"}]}}"}
Replay File Management
Creating Replay Files
# Record from live WebSocket connection
# (Feature in development - manual creation for now)
# Create from application logs
# Extract WebSocket messages and convert to JSONL format
# Generate programmatically
node -e "
const fs = require('fs');
const messages = [
{ts: 0, dir: 'out', text: 'HELLO', waitFor: '^HI$'},
{ts: 1000, dir: 'out', text: 'DATA: 42'}
];
fs.writeFileSync('replay.jsonl', messages.map(m => JSON.stringify(m)).join('\n'));
"
Validation
# Validate replay file syntax
node -e "
const fs = require('fs');
const lines = fs.readFileSync('replay.jsonl', 'utf8').split('\n');
lines.forEach((line, i) => {
if (line.trim()) {
try {
const msg = JSON.parse(line);
if (msg.ts === undefined || !msg.dir || !msg.text) {
console.log(\`Line \${i+1}: Missing required fields\`);
}
} catch (e) {
console.log(\`Line \${i+1}: Invalid JSON\`);
}
}
});
console.log('Validation complete');
"
Interactive Mode
Interactive mode enables dynamic responses based on client messages, supporting complex conversational patterns and state management.
Basic Interactive Setup
{"ts":0,"dir":"out","text":"What is your name?","waitFor":"^NAME .+$"}
{"ts":100,"dir":"out","text":"Hello {{request.ws.lastMessage.match(/^NAME (.+)$/)[1]}}!"}
State Management
{"ts":0,"dir":"out","text":"Welcome! Type 'START' to begin","waitFor":"^START$"}
{"ts":100,"dir":"out","text":"Game started. Score: 0","state":"playing"}
{"ts":200,"dir":"out","text":"Choose: ROCK/PAPER/SCISSORS","waitFor":"^(ROCK|PAPER|SCISSORS)$"}
{"ts":300,"dir":"out","text":"You chose {{request.ws.lastMessage}}. I chose ROCK. You win!","waitFor":"^PLAY_AGAIN$"}
Conditional Logic
{"ts":0,"dir":"out","text":"Enter command","waitFor":".+","condition":"{{request.ws.message.length > 0}}"}
{"ts":100,"dir":"out","text":"Processing: {{request.ws.message}}"}
{"ts":200,"dir":"out","text":"Command completed"}
Testing WebSocket Connections
Using WebSocket Clients
Node.js Client
const WebSocket = require('ws');
const ws = new WebSocket('ws://localhost:3001/ws');
ws.on('open', () => {
console.log('Connected to MockForge WebSocket');
ws.send('CLIENT_READY');
});
ws.on('message', (data) => {
const message = data.toString();
console.log('Received:', message);
// Auto-respond to common prompts
if (message.includes('ACK')) {
ws.send('ACK');
}
if (message.includes('CONFIRMED')) {
ws.send('CONFIRMED');
}
if (message.includes('AUTH')) {
ws.send('AUTH token123');
}
});
ws.on('close', () => {
console.log('Connection closed');
});
ws.on('error', (err) => {
console.error('WebSocket error:', err);
});
Browser JavaScript
const ws = new WebSocket('ws://localhost:3001/ws');
ws.onopen = () => {
console.log('Connected');
ws.send('CLIENT_READY');
};
ws.onmessage = (event) => {
console.log('Received:', event.data);
// Handle server messages
};
ws.onclose = () => {
console.log('Connection closed');
};
Command Line Tools
# Using websocat
websocat ws://localhost:3001/ws
# Using curl (WebSocket support experimental)
curl --include \
--no-buffer \
--header "Connection: Upgrade" \
--header "Upgrade: websocket" \
--header "Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==" \
--header "Sec-WebSocket-Version: 13" \
  http://localhost:3001/ws
Automated Testing
#!/bin/bash
# test-websocket.sh
echo "Testing WebSocket connection..."
# Test with Node.js
node -e "
const WebSocket = require('ws');
const ws = new WebSocket('ws://localhost:3001/ws');
ws.on('open', () => {
console.log('✓ Connection established');
ws.send('CLIENT_READY');
});
ws.on('message', (data) => {
console.log('✓ Message received:', data.toString());
ws.close();
});
ws.on('close', () => {
console.log('✓ Connection closed successfully');
process.exit(0);
});
ws.on('error', (err) => {
console.error('✗ WebSocket error:', err);
process.exit(1);
});
// Timeout after 10 seconds
setTimeout(() => {
console.error('✗ Test timeout');
process.exit(1);
}, 10000);
"
Advanced Features
Connection Pooling
# Support multiple concurrent connections
MOCKFORGE_WS_MAX_CONNECTIONS=100
MOCKFORGE_WS_CONNECTION_TIMEOUT=30000
Message Filtering
{"ts":0,"dir":"in","text":".*","filter":"{{request.ws.message.startsWith('VALID_')}}"}
{"ts":100,"dir":"out","text":"Valid message received"}
Error Simulation
{"ts":0,"dir":"out","text":"Error occurred","error":"true","code":1006}
{"ts":100,"dir":"out","text":"Connection will close","close":"true"}
Binary Message Support
{"ts":0,"dir":"out","text":"AQIDBAU=","binary":"true"}
{"ts":1000,"dir":"out","text":"Binary data sent"}
Integration Patterns
Real-time Applications
- Chat Applications: Mock user conversations and bot responses
- Live Updates: Simulate real-time data feeds and notifications
- Gaming: Mock multiplayer game state and player interactions
API Testing
- WebSocket APIs: Test GraphQL subscriptions and real-time queries
- Event Streams: Mock server-sent events and push notifications
- Live Dashboards: Simulate real-time metrics and monitoring data
Development Workflows
- Frontend Development: Mock WebSocket backends during UI development
- Integration Testing: Test WebSocket handling in microservices
- Load Testing: Simulate thousands of concurrent WebSocket connections
Best Practices
Replay File Organization
- Modular Files: Break complex scenarios into smaller, focused replay files
- Version Control: Keep replay files in Git for collaboration
- Documentation: Comment complex scenarios with clear descriptions
- Validation: Always validate replay files before deployment
Performance Considerations
- Message Volume: Limit concurrent connections based on system resources
- Memory Usage: Monitor memory usage with large replay files
- Timing Accuracy: Consider system clock precision for time-sensitive scenarios
- Connection Limits: Set appropriate connection pool sizes
Security Considerations
- Input Validation: Validate all client messages in interactive mode
- Rate Limiting: Implement connection rate limits for production
- Authentication: Mock authentication handshakes appropriately
- Data Sanitization: Avoid exposing sensitive data in replay files
Debugging Tips
- Verbose Logging: Enable detailed WebSocket logging for troubleshooting
- Connection Monitoring: Track connection lifecycle and message flow
- Replay Debugging: Step through replay files manually
- Client Compatibility: Test with multiple WebSocket client libraries
Troubleshooting
Common Issues
Connection fails: Check that WebSocket port is not blocked by firewall
Messages not received: Verify replay file path and JSONL format
Templates not expanding: Ensure MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
Timing issues: Check system clock and timestamp calculations
Debug Commands
# Check WebSocket port
netstat -tlnp | grep :3001
# Monitor connections
ss -tlnp | grep :3001
# Test basic connectivity
curl -I http://localhost:3001/health # If HTTP health endpoint exists
Log Analysis
# View WebSocket logs
tail -f mockforge.log | grep -i websocket
# Count connections
grep "WebSocket connection" mockforge.log | wc -l
# Find errors
grep -i "websocket.*error" mockforge.log
For detailed implementation guides, see:
- Replay Mode - Advanced scripted scenarios
- Interactive Mode - Dynamic real-time communication
Replay Mode
Replay mode provides precise, scripted WebSocket message sequences that execute on a predetermined schedule. This mode is ideal for testing deterministic scenarios, reproducing specific interaction patterns, and validating client behavior against known server responses.
Core Concepts
Message Timeline
Replay files define a sequence of messages that execute based on timestamps relative to connection establishment. Each message has a precise timing offset ensuring consistent playback.
Deterministic Execution
Replay scenarios execute identically each time, making them perfect for:
- Automated testing
- Regression testing
- Client behavior validation
- Demo environments
Replay File Structure
JSONL Format
Replay files use JSON Lines format where each line contains a complete JSON object representing a single message or directive.
{"ts":0,"dir":"out","text":"Welcome message"}
{"ts":1000,"dir":"out","text":"Data update","waitFor":"^ACK$"}
{"ts":2000,"dir":"out","text":"Connection closing"}
Message Object Schema
interface ReplayMessage {
ts: number; // Timestamp offset in milliseconds
dir: "out" | "in"; // Message direction
text: string; // Message content
waitFor?: string; // Optional regex pattern to wait for
binary?: boolean; // Binary message flag
close?: boolean; // Close connection after this message
error?: boolean; // Send as error frame
}
Basic Replay Examples
Simple Chat Simulation
{"ts":0,"dir":"out","text":"Chat server connected. Welcome!"}
{"ts":500,"dir":"out","text":"Type 'hello' to start chatting","waitFor":"^hello$"}
{"ts":100,"dir":"out","text":"Hello! How can I help you today?"}
{"ts":2000,"dir":"out","text":"Are you still there?","waitFor":".*"}
{"ts":500,"dir":"out","text":"Thanks for chatting! Goodbye."}
API Status Monitoring
{"ts":0,"dir":"out","text":"{\"type\":\"status\",\"message\":\"Monitor connected\"}"}
{"ts":1000,"dir":"out","text":"{\"type\":\"metrics\",\"cpu\":45,\"memory\":67}"}
{"ts":2000,"dir":"out","text":"{\"type\":\"metrics\",\"cpu\":42,\"memory\":68}"}
{"ts":3000,"dir":"out","text":"{\"type\":\"metrics\",\"cpu\":47,\"memory\":66}"}
{"ts":4000,"dir":"out","text":"{\"type\":\"alert\",\"level\":\"warning\",\"message\":\"High CPU usage\"}"}
Game State Synchronization
{"ts":0,"dir":"out","text":"{\"action\":\"game_start\",\"player_id\":\"{{uuid}}\",\"game_id\":\"{{uuid}}\"}"}
{"ts":1000,"dir":"out","text":"{\"action\":\"state_update\",\"position\":{\"x\":10,\"y\":20},\"score\":0}"}
{"ts":2000,"dir":"out","text":"{\"action\":\"enemy_spawn\",\"enemy_id\":\"{{uuid}}\",\"position\":{\"x\":50,\"y\":30}}"}
{"ts":1500,"dir":"out","text":"{\"action\":\"powerup\",\"type\":\"speed\",\"position\":{\"x\":25,\"y\":15}}"}
{"ts":3000,"dir":"out","text":"{\"action\":\"game_over\",\"final_score\":1250,\"reason\":\"timeout\"}"}
Advanced Replay Techniques
Conditional Branching
While replay mode is inherently linear, you can simulate branching using multiple replay files and external logic:
// File: login-success.jsonl
{"ts":0,"dir":"out","text":"Login successful","waitFor":"^ready$"}
{"ts":100,"dir":"out","text":"Welcome to your dashboard"}
// File: login-failed.jsonl
{"ts":0,"dir":"out","text":"Invalid credentials"}
{"ts":500,"dir":"out","text":"Connection will close","close":true}
Template Integration
{"ts":0,"dir":"out","text":"Session {{uuid}} established at {{now}}"}
{"ts":1000,"dir":"out","text":"Your lucky number is: {{randInt 1 100}}"}
{"ts":2000,"dir":"out","text":"Next maintenance window: {{now+24h}}"}
{"ts":3000,"dir":"out","text":"Server load: {{randInt 20 80}}%"}
Binary Message Support
{"ts":0,"dir":"out","text":"iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg==","binary":true}
{"ts":1000,"dir":"out","text":"Image sent successfully"}
Error Simulation
{"ts":0,"dir":"out","text":"Connection established"}
{"ts":5000,"dir":"out","text":"Internal server error","error":true}
{"ts":1000,"dir":"out","text":"Attempting reconnection..."}
{"ts":2000,"dir":"out","text":"Reconnection failed","close":true}
Creating Replay Files
Manual Creation
# Create a new replay file
cat > chat-replay.jsonl << 'EOF'
{"ts":0,"dir":"out","text":"Welcome to support chat!"}
{"ts":1000,"dir":"out","text":"How can I help you today?","waitFor":".*"}
{"ts":500,"dir":"out","text":"Thanks for your question. Let me check..."}
{"ts":2000,"dir":"out","text":"I found the solution! Here's what you need to do:"}
{"ts":1000,"dir":"out","text":"1. Go to settings\n2. Click preferences\n3. Enable feature X"}
{"ts":3000,"dir":"out","text":"Does this solve your issue?","waitFor":"^(yes|no)$"}
{"ts":500,"dir":"out","text":"Great! Glad I could help. Have a nice day!"}
EOF
From Application Logs
#!/bin/bash
# extract-websocket-logs.sh
# Extract WebSocket messages from application logs and convert to JSONL
# (the 3-argument match() below requires gawk)
grep "WEBSOCKET_MSG" app.log | \
gawk '{
# Extract timestamp, direction, and message
match($0, /([0-9]+).*dir=([^ ]*).*msg=(.*)/, arr)
printf "{\"ts\":%d,\"dir\":\"%s\",\"text\":\"%s\"}\n", arr[1], arr[2], arr[3]
}' > replay-from-logs.jsonl
Programmatic Generation
// generate-replay.js
const fs = require('fs');
function generateHeartbeatReplay(interval = 30000, duration = 300000) {
const messages = [];
const messageCount = duration / interval;
for (let i = 0; i < messageCount; i++) {
messages.push({
ts: i * interval,
dir: "out",
text: JSON.stringify({
type: "heartbeat",
timestamp: `{{now+${i * interval}ms}}`,
sequence: i + 1
})
});
}
fs.writeFileSync('heartbeat-replay.jsonl',
messages.map(JSON.stringify).join('\n'));
}
generateHeartbeatReplay();
# generate-replay.py
import json
import random
def generate_data_stream(count=100, interval=1000):
messages = []
for i in range(count):
messages.append({
"ts": i * interval,
"dir": "out",
"text": json.dumps({
"type": "data_point",
"id": f"{{{{uuid}}}}",
"value": random.randint(1, 100),
"timestamp": f"{{{{now+{i * interval}ms}}}}}"
})
})
return messages
# Write to file
with open('data-stream-replay.jsonl', 'w') as f:
for msg in generate_data_stream():
f.write(json.dumps(msg) + '\n')
Validation and Testing
Replay File Validation
# Validate JSONL syntax
node -e "
const fs = require('fs');
const lines = fs.readFileSync('replay.jsonl', 'utf8').split('\n');
let valid = true;
lines.forEach((line, i) => {
if (line.trim()) {
try {
const msg = JSON.parse(line);
      if (msg.ts === undefined || !msg.dir || !msg.text) {
console.log(\`Line \${i+1}: Missing required fields\`);
valid = false;
}
if (typeof msg.ts !== 'number' || msg.ts < 0) {
console.log(\`Line \${i+1}: Invalid timestamp\`);
valid = false;
}
if (!['in', 'out'].includes(msg.dir)) {
console.log(\`Line \${i+1}: Invalid direction\`);
valid = false;
}
} catch (e) {
console.log(\`Line \${i+1}: Invalid JSON - \${e.message}\`);
valid = false;
}
}
});
console.log(valid ? '✓ Replay file is valid' : '✗ Replay file has errors');
"
Timing Analysis
# Analyze replay timing
node -e "
const fs = require('fs');
const messages = fs.readFileSync('replay.jsonl', 'utf8')
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
const timings = messages.map((msg, i) => ({
index: i + 1,
ts: msg.ts,
interval: i > 0 ? msg.ts - messages[i-1].ts : 0
}));
console.log('Timing Analysis:');
timings.forEach(t => {
console.log(\`Message \${t.index}: \${t.ts}ms (interval: \${t.interval}ms)\`);
});
const totalDuration = Math.max(...messages.map(m => m.ts));
console.log(\`Total duration: \${totalDuration}ms (\${(totalDuration/1000).toFixed(1)}s)\`);
"
Functional Testing
#!/bin/bash
# test-replay.sh
REPLAY_FILE=$1
WS_URL="ws://localhost:3001/ws"
echo "Testing replay file: $REPLAY_FILE"
# Validate file exists and is readable
if [ ! -f "$REPLAY_FILE" ]; then
echo "✗ Replay file not found"
exit 1
fi
# Basic syntax check
if ! node -e "
const fs = require('fs');
const content = fs.readFileSync('$REPLAY_FILE', 'utf8');
const lines = content.split('\n').filter(l => l.trim());
lines.forEach((line, i) => {
try {
JSON.parse(line);
} catch (e) {
console.error(\`Line \${i+1}: \${e.message}\`);
process.exit(1);
}
});
console.log(\`✓ Valid JSONL: \${lines.length} messages\`);
"; then
echo "✗ Syntax validation failed"
exit 1
fi
echo "✓ Replay file validation passed"
echo "Ready to test with: mockforge serve --ws-replay-file $REPLAY_FILE"
Best Practices
File Organization
- Descriptive Names: Use clear, descriptive filenames such as user-authentication-flow.jsonl, real-time-data-stream.jsonl, or error-handling-scenarios.jsonl
- Modular Scenarios: Break complex interactions into focused files such as login-flow.jsonl, main-interaction.jsonl, and logout-flow.jsonl
- Version Control: Keep replay files in Git with meaningful commit messages
Performance Optimization
- Message Batching: Group related messages with minimal intervals
- Memory Management: Monitor memory usage with large replay files
- Connection Limits: Consider concurrent connection impact
Maintenance
- Regular Updates: Keep replay files synchronized with application changes
- Documentation: Comment complex scenarios inline
- Versioning: Tag replay files with application versions
Debugging
- Verbose Logging: Enable detailed WebSocket logging during development
- Step-through Testing: Test replay files incrementally
- Timing Verification: Validate message timing against expectations
Common Patterns
Authentication Flow
{"ts":0,"dir":"out","text":"Please authenticate","waitFor":"^AUTH .+$"}
{"ts":100,"dir":"out","text":"Authenticating..."}
{"ts":500,"dir":"out","text":"Authentication successful"}
{"ts":200,"dir":"out","text":"Welcome back, user!"}
Streaming Data
{"ts":0,"dir":"out","text":"{\"type\":\"stream_start\",\"stream_id\":\"{{uuid}}\"}"}
{"ts":100,"dir":"out","text":"{\"type\":\"data\",\"value\":{{randInt 1 100}}}"}
{"ts":100,"dir":"out","text":"{\"type\":\"data\",\"value\":{{randInt 1 100}}}"}
{"ts":100,"dir":"out","text":"{\"type\":\"data\",\"value\":{{randInt 1 100}}}"}
{"ts":5000,"dir":"out","text":"{\"type\":\"stream_end\",\"total_messages\":3}"}
Error Recovery
{"ts":0,"dir":"out","text":"System operational"}
{"ts":30000,"dir":"out","text":"Warning: High load detected"}
{"ts":10000,"dir":"out","text":"Error: Service unavailable","error":true}
{"ts":5000,"dir":"out","text":"Attempting recovery..."}
{"ts":10000,"dir":"out","text":"Recovery successful"}
{"ts":1000,"dir":"out","text":"System back to normal"}
Integration with CI/CD
Automated Testing
# .github/workflows/test.yml
name: WebSocket Tests
on: [push, pull_request]
jobs:
websocket-test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '18'
- name: Install dependencies
run: npm install ws
- name: Start MockForge
run: |
cargo install mockforge-cli
mockforge serve --ws-replay-file examples/ws-demo.jsonl &
sleep 2
- name: Run WebSocket tests
run: node test-websocket.js
Performance Benchmarking
#!/bin/bash
# benchmark-replay.sh
CONCURRENT_CONNECTIONS=100
DURATION=60
echo "Benchmarking WebSocket replay with $CONCURRENT_CONNECTIONS connections for ${DURATION}s"
# Start MockForge
mockforge serve --ws-replay-file benchmark-replay.jsonl &
SERVER_PID=$!
sleep 2
# Run benchmark
node benchmark-websocket.js $CONCURRENT_CONNECTIONS $DURATION
# Cleanup
kill $SERVER_PID
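The benchmark-websocket.js script referenced above is not shown in this guide; a minimal sketch of what it could look like, assuming the server is already running on ws://localhost:3001/ws:
// benchmark-websocket.js <connections> <duration-seconds>
const WebSocket = require('ws');

const connections = parseInt(process.argv[2] || '10', 10);
const durationMs = parseInt(process.argv[3] || '60', 10) * 1000;
let received = 0;

// Open N concurrent connections and count every message received
for (let i = 0; i < connections; i++) {
  const ws = new WebSocket('ws://localhost:3001/ws');
  ws.on('message', () => { received++; });
  ws.on('error', () => { /* ignore individual connection errors */ });
}

// Report throughput after the test window elapses
setTimeout(() => {
  console.log(`Connections: ${connections}`);
  console.log(`Messages received in ${durationMs / 1000}s: ${received}`);
  console.log(`Throughput: ${(received / (durationMs / 1000)).toFixed(1)} msg/s`);
  process.exit(0);
}, durationMs);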
This comprehensive approach to replay mode ensures reliable, deterministic WebSocket testing scenarios that can be easily created, validated, and maintained as part of your testing infrastructure.
Interactive Mode
Interactive mode enables dynamic, real-time WebSocket communication where MockForge responds intelligently to client messages. Unlike replay mode’s predetermined sequences, interactive mode supports complex conversational patterns, state management, and adaptive responses based on client input.
Core Concepts
Dynamic Response Logic
Interactive mode evaluates client messages and generates contextually appropriate responses using conditional logic, pattern matching, and state tracking.
State Management
Connections maintain state across messages, enabling complex interactions like authentication flows, game mechanics, and multi-step processes.
Message Processing Pipeline
- Receive client message
- Parse and validate input
- Evaluate conditions and state
- Generate appropriate response
- Update connection state
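To make these steps concrete, here is a conceptual sketch of such an evaluation loop in Node.js. It illustrates the pipeline shape only and is not MockForge's internal implementation; the rule fields and commands are hypothetical:
// Conceptual interactive-rule evaluation: first matching rule wins
const rules = [
  { pattern: /^HELP$/, response: () => 'Commands: HELP, TIME, QUIT' },
  { pattern: /^TIME$/, response: () => `Current time: ${new Date().toISOString()}` },
  { pattern: /^QUIT$/, response: () => 'Goodbye!', close: true },
  { pattern: /.*/,     response: (msg) => `You said: ${msg}` }, // catch-all last
];

function processMessage(state, message) {
  for (const rule of rules) {
    const match = rule.pattern.exec(message);      // 2. parse and validate input
    if (!match) continue;                          // 3. evaluate conditions
    const reply = rule.response(message, state);   // 4. generate response
    state.messageCount = (state.messageCount || 0) + 1; // 5. update state
    return { reply, close: !!rule.close };
  }
}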
Basic Interactive Setup
Simple Echo Server
{"ts":0,"dir":"out","text":"Echo server ready. Send me a message!"}
{"ts":0,"dir":"in","text":".*","response":"You said: {{request.ws.message}}"}
Command Processor
{"ts":0,"dir":"out","text":"Available commands: HELP, TIME, ECHO <message>, QUIT"}
{"ts":0,"dir":"in","text":"^HELP$","response":"Commands: HELP, TIME, ECHO <msg>, QUIT"}
{"ts":0,"dir":"in","text":"^TIME$","response":"Current time: {{now}}"}
{"ts":0,"dir":"in","text":"^ECHO (.+)$","response":"Echo: {{request.ws.message.match(/^ECHO (.+)$/)[1]}}"}
{"ts":0,"dir":"in","text":"^QUIT$","response":"Goodbye!","close":true}
Advanced Interactive Patterns
Authentication Flow
{"ts":0,"dir":"out","text":"Welcome! Please login with: LOGIN <username> <password>"}
{"ts":0,"dir":"in","text":"^LOGIN (\\w+) (\\w+)$","response":"Authenticating {{request.ws.message.match(/^LOGIN (\\w+) (\\w+)$/)[1]}}...","state":"authenticating"}
{"ts":1000,"dir":"out","text":"Login successful! Welcome, {{request.ws.state.username}}!","condition":"{{request.ws.state.authenticating}}"}
{"ts":0,"dir":"out","text":"Login failed. Try again.","condition":"{{!request.ws.state.authenticating}}"}
State-Based Conversations
{"ts":0,"dir":"out","text":"Welcome to the survey bot. What's your name?","state":"awaiting_name"}
{"ts":0,"dir":"in","text":".+","response":"Nice to meet you, {{request.ws.message}}! How old are you?","state":"awaiting_age","condition":"{{request.ws.state.awaiting_name}}"}
{"ts":0,"dir":"in","text":"^\\d+$","response":"Thanks! You're {{request.ws.message}} years old. Survey complete!","state":"complete","condition":"{{request.ws.state.awaiting_age}}"}
{"ts":0,"dir":"in","text":".*","response":"Please enter a valid age (numbers only).","condition":"{{request.ws.state.awaiting_age}}"}
Game Mechanics
{"ts":0,"dir":"out","text":"Welcome to Number Guessing Game! I'm thinking of a number between 1-100.","state":"playing","game":{"target":42,"attempts":0}}
{"ts":0,"dir":"in","text":"^GUESS (\\d+)$","condition":"{{request.ws.state.playing}}","response":"{{#if (eq (parseInt request.ws.message.match(/^GUESS (\\d+)$/) [1]) request.ws.state.game.target)}}You won in {{request.ws.state.game.attempts + 1}} attempts!{{else}}{{#if (gt (parseInt request.ws.message.match(/^GUESS (\\d+)$/) [1]) request.ws.state.game.target)}}Too high!{{else}}Too low!{{/if}} Try again.{{/if}}","state":"{{#if (eq (parseInt request.ws.message.match(/^GUESS (\\d+)$/) [1]) request.ws.state.game.target)}}won{{else}}playing{{/if}}","game":{"target":"{{request.ws.state.game.target}}","attempts":"{{request.ws.state.game.attempts + 1}}"}}
Message Processing Syntax
Input Patterns
Interactive mode uses regex patterns to match client messages:
// Exact match
{"dir":"in","text":"hello","response":"Hi there!"}
// Case-insensitive match
{"dir":"in","text":"(?i)hello","response":"Hi there!"}
// Pattern with capture groups
{"dir":"in","text":"^NAME (.+)$","response":"Hello, {{request.ws.message.match(/^NAME (.+)$/)[1]}}!"}
// Multiple accepted forms
{"dir":"in","text":"^(HELP|help|\\?)$","response":"Available commands: ..."}
Response Templates
Responses support the full MockForge template system:
{"dir":"in","text":".*","response":"Message received at {{now}}: {{request.ws.message}} (length: {{request.ws.message.length}})"}
Conditions
Use template conditions to control when rules apply:
{"dir":"in","text":".*","condition":"{{request.ws.state.authenticated}}","response":"Welcome back!"}
{"dir":"in","text":".*","condition":"{{!request.ws.state.authenticated}}","response":"Please authenticate first."}
State Updates
Modify connection state based on interactions:
// Set simple state
{"dir":"in","text":"START","response":"Starting...","state":"active"}
// Update complex state
{"dir":"in","text":"SCORE","response":"Current score: {{request.ws.state.score}}","state":"playing","score":"{{request.ws.state.score + 10}}"}
Advanced Features
Multi-Message Conversations
// Step 1: Greeting
{"ts":0,"dir":"out","text":"Hello! What's your favorite color?"}
{"ts":0,"dir":"in","text":".+","response":"{{request.ws.message}} is a great choice! What's your favorite food?","state":"asked_color","color":"{{request.ws.message}}","next":"food"}
// Step 2: Follow-up
{"ts":0,"dir":"out","text":"Based on your preferences, I recommend: ...","condition":"{{request.ws.state.next === 'complete'}}"}
{"ts":0,"dir":"in","text":".+","condition":"{{request.ws.state.next === 'food'}}","response":"Perfect! You like {{request.ws.state.color}} and {{request.ws.message}}. Here's a recommendation...","state":"complete"}
Error Handling
{"ts":0,"dir":"out","text":"Enter a command:"}
{"ts":0,"dir":"in","text":"","response":"Empty input not allowed. Try again."}
{"ts":0,"dir":"in","text":"^.{100,}$","response":"Input too long (max 99 characters). Please shorten."}
{"ts":0,"dir":"in","text":"^INVALID.*","response":"Unknown command. Type HELP for available commands."}
{"ts":0,"dir":"in","text":".*","response":"Processing: {{request.ws.message}}"}
Rate Limiting
{"ts":0,"dir":"in","text":".*","condition":"{{request.ws.state.messageCount < 10}}","response":"Message {{request.ws.state.messageCount + 1}}: {{request.ws.message}}","messageCount":"{{request.ws.state.messageCount + 1}}"}
{"ts":0,"dir":"in","text":".*","condition":"{{request.ws.state.messageCount >= 10}}","response":"Rate limit exceeded. Please wait."}
Session Management
// Initialize session
{"ts":0,"dir":"out","text":"Session started: {{uuid}}","sessionId":"{{uuid}}","startTime":"{{now}}","messageCount":0}
// Track activity
{"ts":0,"dir":"in","text":".*","response":"Received","messageCount":"{{request.ws.state.messageCount + 1}}","lastActivity":"{{now}}","condition":"{{request.ws.state.active}}"}
Template Functions for Interactive Mode
Message Analysis
// Message properties
{"dir":"in","text":".*","response":"Length: {{request.ws.message.length}}, Uppercase: {{request.ws.message.toUpperCase()}}"}
State Queries
// Check state existence
{"condition":"{{request.ws.state.userId}}","response":"Logged in as: {{request.ws.state.userId}}"}
{"condition":"{{!request.ws.state.userId}}","response":"Please log in first."}
// State comparisons
{"condition":"{{request.ws.state.score > 100}}","response":"High score achieved!"}
{"condition":"{{request.ws.state.level === 'expert'}}","response":"Expert mode enabled."}
Time-based Logic
// Session timeout
{"condition":"{{request.ws.state.lastActivity && (now - request.ws.state.lastActivity) > 300000}}","response":"Session expired. Please reconnect.","close":true}
// Time-based greetings
{"response":"{{#if (gte (now.getHours()) 18)}}Good evening!{{else if (gte (now.getHours()) 12)}}Good afternoon!{{else}}Good morning!{{/if}}"}
Creating Interactive Scenarios
From Scratch
# Create a new interactive scenario
cat > interactive-chat.jsonl << 'EOF'
{"ts":0,"dir":"out","text":"ChatBot: Hello! How can I help you today?"}
{"ts":0,"dir":"in","text":"(?i).*help.*","response":"ChatBot: I can answer questions, tell jokes, or just chat. What would you like?"}
{"ts":0,"dir":"in","text":"(?i).*joke.*","response":"ChatBot: Why did the computer go to the doctor? It had a virus! 😂"}
{"ts":0,"dir":"in","text":"(?i).*bye.*","response":"ChatBot: Goodbye! Have a great day! 👋","close":true}
{"ts":0,"dir":"in","text":".*","response":"ChatBot: I'm not sure how to respond to that. Try asking for help!"}
EOF
From Existing Logs
#!/bin/bash
# convert-logs-to-interactive.sh
# Extract conversation patterns from logs
grep "USER:" chat.log | sed 's/.*USER: //' | sort | uniq > user_patterns.txt
grep "BOT:" chat.log | sed 's/.*BOT: //' | sort | uniq > bot_responses.txt
# Generate interactive rules
paste user_patterns.txt bot_responses.txt | while IFS=$'\t' read -r user bot; do
echo "{\"dir\":\"in\",\"text\":\"$(echo "$user" | sed 's/[^a-zA-Z0-9]/\\&/g')\",\"response\":\"$bot\"}"
done > interactive-from-logs.jsonl
Testing Interactive Scenarios
#!/bin/bash
# test-interactive.sh
echo "Testing interactive WebSocket scenario..."
# Start MockForge with interactive file
mockforge serve --ws-replay-file interactive-test.jsonl &
SERVER_PID=$!
sleep 2
# Test conversation flow
node -e "
const WebSocket = require('ws');
const ws = new WebSocket('ws://localhost:3001/ws');
const conversation = [
'Hello',
'Tell me a joke',
'What can you do?',
'Goodbye'
];
let step = 0;
ws.on('open', () => {
console.log('Connected, starting conversation...');
ws.send(conversation[step++]);
});
ws.on('message', (data) => {
const response = data.toString();
console.log('Bot:', response);
if (step < conversation.length) {
setTimeout(() => {
ws.send(conversation[step++]);
}, 1000);
} else {
ws.close();
}
});
ws.on('close', () => {
console.log('Conversation complete');
process.exit(0);
});
ws.on('error', (err) => {
console.error('Error:', err);
process.exit(1);
});
"
# Cleanup
kill $SERVER_PID
Best Practices
Design Principles
- Clear Conversation Flow: Design conversations with clear paths and expectations
- Graceful Error Handling: Provide helpful responses for unexpected input
- State Consistency: Keep state updates predictable and logical
- Performance Awareness: Avoid complex regex or template processing
Pattern Guidelines
- Specific to General: Order patterns from most specific to most general
- Anchored Regex: Use ^ and $ to avoid partial matches
- Case Handling: Consider case sensitivity in user input
- Input Validation: Validate and sanitize user input
State Management
- Minimal State: Store only necessary information in connection state
- State Validation: Verify state consistency across interactions
- State Cleanup: Clear state when conversations end
- State Persistence: Consider state requirements for reconnection scenarios
Debugging Interactive Scenarios
- Verbose Logging: Enable detailed WebSocket logging
- State Inspection: Log state changes during conversations
- Pattern Testing: Test regex patterns independently
- Flow Tracing: Track conversation paths through state changes
Common Patterns
Customer Support Chat
{"ts":0,"dir":"out","text":"Welcome to support! How can I help you? (Type your question or 'menu' for options)"}
{"ts":0,"dir":"in","text":"(?i)menu","response":"Options: 1) Password reset 2) Billing 3) Technical issue 4) Other","state":"menu"}
{"ts":0,"dir":"in","text":"(?i).*password.*","response":"I'll help you reset your password. What's your email address?","state":"password_reset","issue":"password"}
{"ts":0,"dir":"in","text":"(?i).*billing.*","response":"For billing questions, please visit our billing portal at billing.example.com","state":"billing"}
{"ts":0,"dir":"in","text":".*","response":"Thanks for your question: '{{request.ws.message}}'. A support agent will respond shortly. Your ticket ID is: {{uuid}}"}
E-commerce Assistant
{"ts":0,"dir":"out","text":"Welcome to our store! What are you looking for?","state":"browsing"}
{"ts":0,"dir":"in","text":"(?i).*shirt.*","response":"We have various shirts: casual, formal, graphic. Which style interests you?","state":"shirt_selection","category":"shirts"}
{"ts":0,"dir":"in","text":"(?i).*size.*","response":"Available sizes: S, M, L, XL. Which size would you like?","state":"size_selection","condition":"{{request.ws.state.category}}"}
{"ts":0,"dir":"in","text":"(?i)(S|M|L|XL)","condition":"{{request.ws.state.size_selection}}","response":"Great! Adding {{request.ws.state.category}} in size {{request.ws.message.toUpperCase()}} to cart. Would you like to checkout or continue shopping?","state":"checkout_ready"}
Game Server
{"ts":0,"dir":"out","text":"Welcome to the game server! Choose your character: WARRIOR, MAGE, ROGUE","state":"character_select"}
{"ts":0,"dir":"in","text":"(?i)^(warrior|mage|rogue)$","response":"Excellent choice! You selected {{request.ws.message.toUpperCase()}}. Your adventure begins now...","state":"playing","character":"{{request.ws.message.toLowerCase()}}","health":100,"level":1}
{"ts":0,"dir":"in","text":"(?i)stats","condition":"{{request.ws.state.playing}}","response":"Character: {{request.ws.state.character}}, Level: {{request.ws.state.level}}, Health: {{request.ws.state.health}}"}
{"ts":0,"dir":"in","text":"(?i)fight","condition":"{{request.ws.state.playing}}","response":"You encounter a monster! Roll for attack... {{randInt 1 20}}! {{#if (gte (randInt 1 20) 10)}}Victory!{{else}}Defeat!{{/if}}"}
Integration Examples
With Testing Frameworks
// test-interactive.js
const WebSocket = require('ws');
class InteractiveWebSocketTester {
constructor(url) {
this.url = url;
this.ws = null;
}
async connect() {
return new Promise((resolve, reject) => {
this.ws = new WebSocket(this.url);
this.ws.on('open', () => resolve());
this.ws.on('error', reject);
});
}
async sendAndExpect(message, expectedResponse) {
return new Promise((resolve, reject) => {
const timeout = setTimeout(() => reject(new Error('Timeout')), 5000);
this.ws.send(message);
this.ws.once('message', (data) => {
clearTimeout(timeout);
const response = data.toString();
if (response === expectedResponse) {
resolve(response);
} else {
reject(new Error(`Expected "${expectedResponse}", got "${response}"`));
}
});
});
}
close() {
if (this.ws) this.ws.close();
}
}
module.exports = InteractiveWebSocketTester;
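A short usage sketch for this helper; the message and expected response are hypothetical and must match your interactive scenario exactly:
// run-test.js
const InteractiveWebSocketTester = require('./test-interactive');

(async () => {
  const tester = new InteractiveWebSocketTester('ws://localhost:3001/ws');
  await tester.connect();
  // Both strings are scenario-specific; adjust to your replay file
  await tester.sendAndExpect('HELP', 'Commands: HELP, TIME, QUIT');
  tester.close();
})().catch((err) => { console.error(err); process.exit(1); });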
Load Testing Interactive Scenarios
#!/bin/bash
# load-test-interactive.sh
CONCURRENT_USERS=50
DURATION=300
echo "Load testing interactive WebSocket with $CONCURRENT_USERS concurrent users for ${DURATION}s"
# Start MockForge
mockforge serve --ws-replay-file interactive-load-test.jsonl &
SERVER_PID=$!
sleep 2
# Run load test
node load-test-interactive.js $CONCURRENT_USERS $DURATION
# Generate report
echo "Generating performance report..."
node analyze-results.js
# Cleanup
kill $SERVER_PID
Interactive mode transforms MockForge from a simple message player into an intelligent conversation partner, enabling sophisticated testing scenarios that adapt to client behavior and maintain complex interaction state.
Plugin System
MockForge features a powerful WebAssembly-based plugin system that allows you to extend functionality without modifying the core framework. Plugins run in a secure sandbox with resource limits and provide capabilities for custom response generation, authentication, data sources, and template extensions.
Overview
The plugin system enables:
- Custom Response Generators: Create specialized mock data and responses
- Authentication Providers: Implement JWT, OAuth2, and custom authentication schemes
- Data Source Connectors: Connect to CSV files, databases, and external APIs
- Template Extensions: Add custom template functions and filters
- Protocol Handlers: Extend support for custom protocols and formats
Plugin Architecture
WebAssembly Runtime
Plugins are compiled to WebAssembly (WASM) and run in an isolated runtime environment:
- Security Sandbox: Isolated execution prevents plugins from accessing unauthorized resources
- Resource Limits: CPU, memory, and execution time constraints
- Capability System: Fine-grained permissions control what plugins can access
- Cross-platform: WASM plugins work on any platform MockForge supports
Plugin Types
MockForge supports several plugin types:
| Type | Description | Interface |
|---|---|---|
| response | Generate custom response data | ResponseGenerator |
| auth | Handle authentication and authorization | AuthProvider |
| datasource | Connect to external data sources | DataSourceConnector |
| template | Add custom template functions | TemplateExtension |
| protocol | Support custom protocols | ProtocolHandler |
Installing Plugins
From Plugin Registry
# Install plugin from registry
mockforge plugin install auth-jwt
# Install specific version
mockforge plugin install auth-jwt@1.2.0
# List available plugins
mockforge plugin search
From Local File
# Install from local WASM file
mockforge plugin install ./my-plugin.wasm
# Install with manifest
mockforge plugin install ./my-plugin/ --manifest plugin.yaml
From Git Repository
# Install from Git repository
mockforge plugin install https://github.com/example/mockforge-plugin-custom.git
# Install specific branch/tag
mockforge plugin install https://github.com/example/mockforge-plugin-custom.git#v1.0.0
Plugin Management
List Installed Plugins
# List all installed plugins
mockforge plugin list
# Show detailed information
mockforge plugin list --verbose
# Filter by type
mockforge plugin list --type auth
Enable/Disable Plugins
# Enable plugin
mockforge plugin enable auth-jwt
# Disable plugin
mockforge plugin disable auth-jwt
# Enable plugin for specific workspace
mockforge plugin enable auth-jwt --workspace my-workspace
Update Plugins
# Update specific plugin
mockforge plugin update auth-jwt
# Update all plugins
mockforge plugin update --all
# Check for updates
mockforge plugin outdated
Remove Plugins
# Remove plugin
mockforge plugin remove auth-jwt
# Remove plugin and its data
mockforge plugin remove auth-jwt --purge
Plugin Configuration
Global Configuration
Configure plugins in your MockForge configuration file:
plugins:
enabled: true
directory: "~/.mockforge/plugins"
runtime:
memory_limit_mb: 64
cpu_limit_percent: 10
execution_timeout_ms: 5000
# Plugin-specific configuration
auth-jwt:
enabled: true
config:
secret_key: "${JWT_SECRET}"
algorithm: "HS256"
expiration: 3600
datasource-csv:
enabled: true
config:
base_directory: "./data"
cache_ttl: 300
Environment Variables
# Plugin system settings
export MOCKFORGE_PLUGINS_ENABLED=true
export MOCKFORGE_PLUGINS_DIRECTORY=~/.mockforge/plugins
# Runtime limits
export MOCKFORGE_PLUGIN_MEMORY_LIMIT=64
export MOCKFORGE_PLUGIN_CPU_LIMIT=10
export MOCKFORGE_PLUGIN_TIMEOUT=5000
# Plugin-specific settings
export JWT_SECRET=your-secret-key
export CSV_DATA_DIR=./test-data
Developing Plugins
Plugin Manifest
Every plugin requires a plugin.yaml manifest file:
# plugin.yaml
name: "auth-jwt"
version: "1.0.0"
description: "JWT authentication provider"
author: "Your Name <email@example.com>"
license: "MIT"
repository: "https://github.com/example/mockforge-plugin-auth-jwt"
# Plugin metadata
type: "auth"
category: "authentication"
tags: ["jwt", "auth", "security"]
# Runtime requirements
runtime:
wasm_version: "0.1"
memory_limit_mb: 32
execution_timeout_ms: 1000
# Capabilities required
capabilities:
- "network.http.client"
- "storage.key_value"
- "template.functions"
# Configuration schema
config_schema:
type: "object"
properties:
secret_key:
type: "string"
description: "JWT signing secret"
required: true
algorithm:
type: "string"
enum: ["HS256", "HS384", "HS512", "RS256"]
default: "HS256"
expiration:
type: "integer"
description: "Token expiration in seconds"
default: 3600
minimum: 60
# Export information
exports:
auth_provider: "JwtAuthProvider"
template_functions:
- "jwt_encode"
- "jwt_decode"
- "jwt_verify"
Rust Plugin Development
Create a new Rust project for your plugin:
cargo new --lib mockforge-plugin-custom
cd mockforge-plugin-custom
Add dependencies to Cargo.toml:
[package]
name = "mockforge-plugin-custom"
version = "0.1.0"
edition = "2021"
[lib]
crate-type = ["cdylib"]
[dependencies]
mockforge-plugin-core = "0.1.0"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
wasm-bindgen = "0.2"
[dependencies.web-sys]
version = "0.3"
features = [
"console",
]
Implement your plugin in src/lib.rs:
use mockforge_plugin_core::{
    AuthProvider, AuthResult, PluginConfig, PluginError, PluginResult,
    export_auth_provider, export_template_functions,
};
use serde::{Deserialize, Serialize};
use wasm_bindgen::prelude::*;

#[derive(Deserialize)]
struct JwtConfig {
    secret_key: String,
    algorithm: String,
    expiration: u64,
}

pub struct JwtAuthProvider {
    config: JwtConfig,
}

impl JwtAuthProvider {
    pub fn new(config: PluginConfig) -> PluginResult<Self> {
        let jwt_config: JwtConfig = serde_json::from_value(config.into())?;
        Ok(Self { config: jwt_config })
    }
}

impl AuthProvider for JwtAuthProvider {
    fn authenticate(&self, token: &str) -> PluginResult<AuthResult> {
        // Implement JWT validation logic
        match self.verify_jwt(token) {
            Ok(claims) => Ok(AuthResult::success(claims)),
            Err(e) => Ok(AuthResult::failure(e.to_string())),
        }
    }

    fn generate_token(&self, user_id: &str) -> PluginResult<String> {
        // Implement JWT generation logic
        self.create_jwt(user_id)
    }
}

impl JwtAuthProvider {
    fn verify_jwt(&self, token: &str) -> Result<serde_json::Value, PluginError> {
        // JWT verification implementation
        todo!("Implement JWT verification")
    }

    fn create_jwt(&self, user_id: &str) -> PluginResult<String> {
        // JWT creation implementation
        todo!("Implement JWT creation")
    }
}

// Template functions
#[wasm_bindgen]
pub fn jwt_encode(payload: &str, secret: &str) -> String {
    // Implement JWT encoding for templates
    todo!("Implement template JWT encoding")
}

#[wasm_bindgen]
pub fn jwt_decode(token: &str) -> String {
    // Implement JWT decoding for templates
    todo!("Implement template JWT decoding")
}

// Export plugin interfaces
export_auth_provider!(JwtAuthProvider);
export_template_functions! {
    "jwt_encode" => jwt_encode,
    "jwt_decode" => jwt_decode,
}
Building Plugins
Build your plugin to WebAssembly:
# Install wasm-pack if not already installed
cargo install wasm-pack
# Build the plugin
wasm-pack build --target web --out-dir pkg
# The WASM file will be in pkg/mockforge_plugin_custom.wasm
Testing Plugins
MockForge provides a testing framework for plugins:
#[cfg(test)]
mod tests {
    use super::*;
    use mockforge_plugin_core::test_utils::*;

    #[test]
    fn test_jwt_authentication() {
        let config = test_config! {
            "secret_key": "test-secret",
            "algorithm": "HS256",
            "expiration": 3600
        };
        let provider = JwtAuthProvider::new(config).unwrap();

        // Test valid token
        let token = provider.generate_token("user123").unwrap();
        let result = provider.authenticate(&token).unwrap();
        assert!(result.is_success());

        // Test invalid token
        let invalid_result = provider.authenticate("invalid.token.here").unwrap();
        assert!(invalid_result.is_failure());
    }
}
Plugin Examples
MockForge includes several example plugins to demonstrate different capabilities:
Authentication Plugins
Basic Authentication (auth-basic)
# examples/plugins/auth-basic/plugin.yaml
name: "auth-basic"
type: "auth"
description: "HTTP Basic Authentication provider"
config_schema:
type: "object"
properties:
users:
type: "object"
description: "Username to password mapping"
realm:
type: "string"
default: "MockForge"
Usage in MockForge configuration:
plugins:
auth-basic:
enabled: true
config:
realm: "API Access"
users:
admin: "password123"
user: "userpass"
JWT Authentication (auth-jwt)
Advanced JWT authentication with support for multiple algorithms:
# examples/plugins/auth-jwt/plugin.yaml
name: "auth-jwt"
type: "auth"
description: "JWT authentication provider with multiple algorithm support"
capabilities:
- "storage.key_value"
- "template.functions"
config_schema:
type: "object"
properties:
secret_key:
type: "string"
required: true
algorithm:
type: "string"
enum: ["HS256", "HS384", "HS512", "RS256", "RS384", "RS512"]
default: "HS256"
issuer:
type: "string"
description: "JWT issuer claim"
audience:
type: "string"
description: "JWT audience claim"
Data Source Plugins
CSV Data Source (datasource-csv)
Connect to CSV files as data sources:
# examples/plugins/datasource-csv/plugin.yaml
name: "datasource-csv"
type: "datasource"
description: "CSV file data source connector"
config_schema:
type: "object"
properties:
base_directory:
type: "string"
description: "Base directory for CSV files"
required: true
cache_ttl:
type: "integer"
description: "Cache TTL in seconds"
default: 300
delimiter:
type: "string"
description: "CSV delimiter"
default: ","
Usage in templates:
response:
status: 200
body:
users: "{{datasource.csv('users.csv').random(5)}}"
products: "{{datasource.csv('products.csv').filter('category', 'electronics')}}"
Template Plugins
Crypto Functions (template-crypto)
Add cryptographic template functions:
# examples/plugins/template-crypto/plugin.yaml
name: "template-crypto"
type: "template"
description: "Cryptographic template functions"
exports:
template_functions:
- "crypto_hash"
- "crypto_hmac"
- "crypto_encrypt"
- "crypto_decrypt"
- "crypto_random"
Template usage:
response:
body:
user_id: "{{uuid}}"
password_hash: "{{crypto_hash(faker.password, 'sha256')}}"
api_key: "{{crypto_random(32, 'hex')}}"
signature: "{{crypto_hmac(request.body, env.API_SECRET, 'sha256')}}"
Response Plugins
GraphQL Response Generator (response-graphql)
Generate GraphQL responses from schema:
# examples/plugins/response-graphql/plugin.yaml
name: "response-graphql"
type: "response"
description: "GraphQL response generator"
config_schema:
type: "object"
properties:
schema_file:
type: "string"
description: "Path to GraphQL schema file"
required: true
resolvers:
type: "object"
description: "Custom resolver configuration"
Security Considerations
Capability System
Plugins must declare required capabilities:
# plugin.yaml
capabilities:
- "network.http.client" # Make HTTP requests
- "network.http.server" # Handle HTTP requests
- "storage.key_value" # Access key-value storage
- "storage.file.read" # Read files
- "storage.file.write" # Write files
- "template.functions" # Register template functions
- "crypto.random" # Access random number generation
- "crypto.hash" # Access hashing functions
Resource Limits
Configure resource limits per plugin:
plugins:
my-plugin:
runtime:
memory_limit_mb: 64 # Maximum memory usage
cpu_limit_percent: 5 # Maximum CPU usage
execution_timeout_ms: 2000 # Maximum execution time
network_timeout_ms: 1000 # Network request timeout
Sandboxing
Plugins run in a secure sandbox that:
- Prevents access to the host file system outside permitted directories
- Limits network access to declared endpoints
- Restricts system calls and resource usage
- Isolates plugin memory from the host process
Best Practices
Plugin Development
- Keep plugins focused: Each plugin should have a single, clear purpose
- Minimize resource usage: Use efficient algorithms and limit memory allocation
- Handle errors gracefully: Return meaningful error messages
- Document configuration: Provide clear schema and examples
- Test thoroughly: Include comprehensive tests for all functionality
Plugin Usage
- Review plugin capabilities: Understand what permissions plugins require
- Monitor resource usage: Check plugin performance and resource consumption
- Keep plugins updated: Regularly update to get security fixes and improvements
- Use official plugins: Prefer plugins from trusted sources
- Test in development: Thoroughly test plugins before production use
Security
- Audit plugin code: Review plugin source code when possible
- Limit capabilities: Only grant necessary permissions
- Monitor logs: Watch for suspicious plugin behavior
- Use resource limits: Prevent plugins from consuming excessive resources
- Isolate environments: Use separate plugin configurations for development and production
Troubleshooting
Common Issues
Plugin Won’t Load
# Check plugin status
mockforge plugin status my-plugin
# Validate plugin manifest
mockforge plugin validate ./my-plugin/plugin.yaml
# Check logs for errors
mockforge logs --filter "plugin"
Runtime Errors
# Enable debug logging
RUST_LOG=mockforge_plugin_loader=debug mockforge serve
# Check resource limits
mockforge plugin stats my-plugin
# Validate configuration
mockforge plugin config validate my-plugin
Performance Issues
# Monitor plugin performance
mockforge plugin stats --watch
# Check memory usage
mockforge plugin stats --memory
# Profile plugin execution
mockforge plugin profile my-plugin
Debug Mode
Enable debug mode for plugin development:
plugins:
debug_mode: true
verbose_logging: true
enable_profiling: true
This comprehensive plugin system enables powerful extensibility while maintaining security and performance. Plugins can significantly extend MockForge’s capabilities for specialized use cases and integrations.
Security & Encryption
MockForge provides enterprise-grade security features including end-to-end encryption, secure key management, and comprehensive authentication systems to protect your mock data and configurations.
Overview
MockForge’s security features include:
- End-to-End Encryption: AES-256-GCM and ChaCha20-Poly1305 algorithms
- Hierarchical Key Management: Master keys, workspace keys, and session keys
- Auto-Encryption: Automatic encryption of sensitive configuration data
- Secure Storage: OS keychain integration and file-based key storage
- Template Encryption: Built-in encryption/decryption functions in templates
- Role-Based Access Control: Admin and viewer roles in the UI
- Plugin Security: Sandboxed plugin execution with capability controls
Encryption Setup
Initial Configuration
Enable encryption when starting MockForge:
# Enable encryption with environment variables
export MOCKFORGE_ENCRYPTION_ENABLED=true
export MOCKFORGE_ENCRYPTION_ALGORITHM=aes-256-gcm
export MOCKFORGE_KEY_STORE_PATH=~/.mockforge/keys
# Start MockForge with encryption
mockforge serve --config config.yaml
Configuration File
Configure encryption in your YAML configuration:
# config.yaml
encryption:
enabled: true
algorithm: "aes-256-gcm" # or "chacha20-poly1305"
key_store:
type: "file" # or "os_keychain"
path: "~/.mockforge/keys"
auto_create: true
# Auto-encryption rules
auto_encrypt:
enabled: true
patterns:
- "*.password"
- "*.secret"
- "*.key"
- "*.token"
- "auth.headers.*"
- "database.connection_string"
# Key rotation
rotation:
enabled: true
interval_days: 30
backup_count: 5
Key Management
Key Hierarchy
MockForge uses a hierarchical key system:
- Master Key: Root encryption key stored securely
- Workspace Keys: Per-workspace encryption keys derived from master key
- Session Keys: Temporary keys for active sessions
- Data Keys: Keys for encrypting specific data elements
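To picture how the hierarchy works, here is a small Node.js sketch that derives a workspace key from a master key using standard HKDF. MockForge's actual derivation scheme is internal, so treat this purely as an illustration:
const crypto = require('crypto');

// Master key: in practice loaded from the key store, never hard-coded
const masterKey = crypto.randomBytes(32);

// Derive a per-workspace key from the master key via HKDF-SHA256
function deriveWorkspaceKey(masterKey, workspaceId) {
  return Buffer.from(crypto.hkdfSync(
    'sha256',
    masterKey,
    Buffer.alloc(0),                          // salt (empty for illustration)
    Buffer.from(`workspace:${workspaceId}`),  // context info binds the key to the workspace
    32                                        // 256-bit key, matching AES-256-GCM
  ));
}

const wsKey = deriveWorkspaceKey(masterKey, 'my-workspace');
console.log('Workspace key:', wsKey.toString('hex'));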
Key Storage Options
File-Based Storage
Store keys in encrypted files on the local filesystem:
encryption:
key_store:
type: "file"
path: "~/.mockforge/keys"
permissions: "0600" # Owner read/write only
backup_enabled: true
backup_path: "~/.mockforge/keys.backup"
OS Keychain Integration
Use the operating system’s secure keychain:
encryption:
key_store:
type: "os_keychain"
service_name: "mockforge"
account_prefix: "workspace_"
Supported Platforms:
- macOS: Uses Keychain Services
- Windows: Uses Windows Credential Manager
- Linux: Uses Secret Service API (GNOME Keyring, KWallet)
Key Generation
MockForge automatically generates keys when needed:
# Initialize new key store
mockforge keys init --algorithm aes-256-gcm
# Generate workspace key
mockforge keys generate --workspace my-workspace
# Rotate all keys
mockforge keys rotate --all
# Export keys for backup (encrypted)
mockforge keys export --output keys-backup.enc
Key Rotation
Implement automatic key rotation for enhanced security:
encryption:
rotation:
enabled: true
interval_days: 30
max_key_age_days: 90
backup_old_keys: true
notify_before_rotation_days: 7
Encryption Algorithms
AES-256-GCM (Default)
encryption:
algorithm: "aes-256-gcm"
config:
key_size: 256
iv_size: 12
tag_size: 16
Features:
- Performance: Optimized for speed on modern CPUs
- Security: NIST-approved, widely audited
- Authentication: Built-in message authentication
- Hardware Support: AES-NI acceleration on Intel/AMD
ChaCha20-Poly1305
encryption:
algorithm: "chacha20-poly1305"
config:
key_size: 256
nonce_size: 12
tag_size: 16
Features:
- Performance: Excellent on ARM and older CPUs
- Security: Modern, quantum-resistant design
- Authentication: Integrated Poly1305 MAC
- Simplicity: Fewer implementation pitfalls
Auto-Encryption
MockForge automatically encrypts sensitive data based on configurable patterns:
Configuration Patterns
encryption:
auto_encrypt:
enabled: true
patterns:
# Password fields
- "*.password"
- "*.passwd"
- "auth.password"
# API keys and tokens
- "*.api_key"
- "*.secret_key"
- "*.access_token"
- "*.refresh_token"
# Database connections
- "database.password"
- "database.connection_string"
- "redis.password"
# HTTP headers
- "auth.headers.Authorization"
- "auth.headers.X-API-Key"
# Custom patterns
- "custom.sensitive_data.*"
Field-Level Encryption
Encrypt specific fields in your configurations:
# Original configuration
database:
host: "localhost"
port: 5432
username: "user"
password: "secret123" # Will be auto-encrypted
auth:
jwt_secret: "my-secret" # Will be auto-encrypted
# After auto-encryption
database:
host: "localhost"
port: 5432
username: "user"
password: "{{encrypted:AES256:base64-encrypted-data}}"
auth:
jwt_secret: "{{encrypted:AES256:base64-encrypted-data}}"
Template Encryption Functions
Use encryption functions directly in your templates:
Encryption Functions
# Encrypt data in templates
response:
body:
user_id: "{{uuid}}"
encrypted_data: "{{encrypt('sensitive-data', 'workspace-key')}}"
hashed_password: "{{hash('password123', 'sha256')}}"
signed_token: "{{sign(user_data, 'signing-key')}}"
Decryption Functions
# Decrypt data in templates
request:
headers:
Authorization: "Bearer {{decrypt(encrypted_token, 'workspace-key')}}"
body:
password: "{{decrypt(user.encrypted_password, 'user-key')}}"
Available Functions
| Function | Description | Example |
|---|---|---|
| encrypt(data, key) | Encrypt data with specified key | {{encrypt('secret', 'my-key')}} |
| decrypt(data, key) | Decrypt data with specified key | {{decrypt(encrypted_data, 'my-key')}} |
| hash(data, algorithm) | Hash data with algorithm | {{hash('password', 'sha256')}} |
| hmac(data, key, algorithm) | Generate HMAC signature | {{hmac(message, 'secret', 'sha256')}} |
| sign(data, key) | Sign data with private key | {{sign(payload, 'private-key')}} |
| verify(data, signature, key) | Verify signature with public key | {{verify(data, sig, 'public-key')}} |
Mutual TLS (mTLS)
MockForge supports Mutual TLS (mTLS) for enhanced security, requiring both server and client certificates for authentication.
Quick Start
Enable mTLS in your configuration:
http:
tls:
enabled: true
cert_file: "./certs/server.crt"
key_file: "./certs/server.key"
ca_file: "./certs/ca.crt" # CA certificate for client verification
require_client_cert: true # Enable mTLS
Client Configuration
Clients must provide a certificate signed by the CA:
# Using cURL
curl --cert client.crt --key client.key --cacert ca.crt \
https://localhost:3000/api/endpoint
Certificate Generation
For development, use mkcert:
# Install mkcert
brew install mkcert
mkcert -install
# Generate certificates
mkcert localhost 127.0.0.1 ::1
mkcert -client localhost 127.0.0.1 ::1
For production, use OpenSSL or a trusted Certificate Authority.
Full Documentation: See mTLS Configuration Guide for complete setup instructions, certificate generation, client examples, and troubleshooting.
Authentication & Authorization
Admin UI Authentication
MockForge Admin UI v2 includes complete role-based authentication with JWT-based authentication:
admin:
auth:
enabled: true
jwt_secret: "{{encrypted:your-jwt-secret}}"
session_timeout: 86400 # 24 hours
# Built-in users
users:
admin:
password: "{{encrypted:admin-password}}"
role: "admin"
viewer:
password: "{{encrypted:viewer-password}}"
role: "viewer"
# Custom authentication provider
provider: "custom"
provider_config:
ldap_url: "ldap://company.com"
oauth2_client_id: "mockforge-client"
Role Permissions
| Role | Permissions |
|---|---|
| Admin | Full access to all features (workspace management, member management, all editing) |
| Editor | Create, edit, and delete mocks; view history; cannot manage workspace settings |
| Viewer | Read-only access to dashboard, logs, metrics, and mocks |
Full Documentation: See RBAC Guide for complete role and permission details.
Custom Authentication
Implement custom authentication via plugins:
// Custom auth plugin
use mockforge_plugin_core::{AuthProvider, AuthResult};

pub struct LdapAuthProvider {
    ldap_url: String,
    base_dn: String,
}

impl AuthProvider for LdapAuthProvider {
    fn authenticate(&self, username: &str, password: &str) -> AuthResult {
        // LDAP authentication logic
        match self.ldap_authenticate(username, password) {
            Ok(user_info) => AuthResult::success(user_info),
            Err(e) => AuthResult::failure(e.to_string()),
        }
    }
}
Plugin Security
Capability System
Plugins must declare required capabilities:
# plugin.yaml
capabilities:
- "crypto.encrypt" # Encryption functions
- "crypto.decrypt" # Decryption functions
- "crypto.hash" # Hashing functions
- "crypto.random" # Random number generation
- "storage.encrypted" # Encrypted storage access
- "network.tls" # TLS/SSL connections
Resource Limits
Configure security limits for plugins:
plugins:
security:
memory_limit_mb: 64
cpu_limit_percent: 5
network_timeout_ms: 5000
file_access_paths:
- "/app/data"
- "/tmp/plugin-cache"
# Encryption access
encryption_access:
allowed_algorithms: ["aes-256-gcm"]
key_access_patterns: ["workspace.*", "plugin.*"]
Sandboxing
Plugins run in secure sandboxes that:
- Isolate Memory: Separate memory space from host process
- Limit File Access: Restricted to declared paths only
- Control Network: Limited to specified endpoints
- Monitor Resources: CPU, memory, and execution time limits
- Audit Operations: Log all security-relevant operations
Transport Security
TLS Configuration
Enable TLS for all network communication:
# Server TLS
server:
tls:
enabled: true
cert_file: "/path/to/server.crt"
key_file: "/path/to/server.key"
min_version: "1.3"
cipher_suites:
- "TLS_AES_256_GCM_SHA384"
- "TLS_CHACHA20_POLY1305_SHA256"
# Client TLS (for outbound requests)
client:
tls:
verify_certificates: true
ca_bundle: "/path/to/ca-bundle.crt"
client_cert: "/path/to/client.crt"
client_key: "/path/to/client.key"
Certificate Management
# Generate self-signed certificates for development
mockforge certs generate --domain localhost --output ./certs/
# Use Let's Encrypt for production
mockforge certs letsencrypt --domain api.mockforge.dev --email admin@company.com
# Import existing certificates
mockforge certs import --cert server.crt --key server.key --ca ca.crt
Security Best Practices
Configuration Security
- Encrypt Sensitive Data: Use auto-encryption for passwords and keys
- Secure Key Storage: Use OS keychain in production
- Regular Key Rotation: Implement automatic key rotation
- Least Privilege: Grant minimal necessary permissions
- Audit Logging: Enable comprehensive security logging
Deployment Security
- Use TLS: Enable TLS for all network communication
- Network Isolation: Deploy in isolated network segments
- Access Control: Implement proper firewall rules
- Monitor Security: Set up security monitoring and alerting
- Regular Updates: Keep MockForge and dependencies updated
Plugin Security
- Review Plugin Code: Audit plugin source code before installation
- Limit Capabilities: Grant only necessary plugin permissions
- Monitor Resources: Watch plugin resource usage
- Isolate Environments: Use separate configs for dev/prod
- Update Regularly: Keep plugins updated for security fixes
Security Monitoring
Audit Logging
Enable comprehensive security logging:
logging:
security:
enabled: true
level: "info"
destinations:
- type: "file"
path: "/var/log/mockforge/security.log"
format: "json"
- type: "syslog"
facility: "local0"
tag: "mockforge-security"
events:
- "auth_success"
- "auth_failure"
- "key_access"
- "encryption_operation"
- "plugin_security_violation"
- "configuration_change"
Security Metrics
Monitor security-related metrics:
metrics:
security:
enabled: true
metrics:
- "auth_attempts_total"
- "auth_failures_total"
- "encryption_operations_total"
- "key_rotations_total"
- "plugin_security_violations_total"
Alerting
Set up security alerts:
alerts:
security:
enabled: true
rules:
- name: "High Authentication Failures"
condition: "auth_failures_rate > 10/minute"
action: "email_admin"
- name: "Plugin Security Violation"
condition: "plugin_security_violations > 0"
action: "disable_plugin"
- name: "Encryption Key Access Anomaly"
condition: "key_access_rate > 100/minute"
action: "alert_security_team"
Compliance & Standards
Standards Compliance
MockForge security features comply with:
- FIPS 140-2: Cryptographic standards compliance
- Common Criteria: Security evaluation criteria
- SOC 2 Type II: Security, availability, and confidentiality
- ISO 27001: Information security management
Data Protection
Features for data protection compliance:
- Data Encryption: All sensitive data encrypted at rest and in transit
- Key Management: Secure key lifecycle management
- Access Controls: Role-based access and audit trails
- Data Minimization: Only collect and store necessary data
- Right to Deletion: Secure data deletion capabilities
Audit Logging
MockForge provides comprehensive audit logging for security and compliance:
- Authentication Audit Logs: Track all authentication attempts (success/failure)
- Request Logs: Full request/response logging with metadata
- Collaboration History: Git-style version control for workspace changes
- Configuration Changes: Track all configuration modifications
- Plugin Activity: Monitor plugin execution and security events
Full Documentation: See Audit Trails Guide for complete audit logging configuration and usage.
Troubleshooting Security
Common Issues
Encryption Not Working
# Check encryption status
mockforge encryption status
# Verify key store
mockforge keys list
# Test encryption/decryption
mockforge encrypt test-data --key workspace-key
Authentication Failures
# Check auth configuration
mockforge auth status
# Verify JWT secret
mockforge auth verify-jwt your-token
# Reset admin credentials
mockforge auth reset-admin
Key Store Issues
# Initialize key store
mockforge keys init --force
# Repair key store
mockforge keys repair
# Backup and restore
mockforge keys backup --output keys.backup
mockforge keys restore --input keys.backup
Debug Mode
Enable security debug logging:
RUST_LOG=mockforge_core::encryption=debug,mockforge_core::auth=debug mockforge serve
This comprehensive security system ensures that MockForge can be safely used in enterprise environments while protecting sensitive mock data and configurations.
Directory Synchronization
MockForge’s sync daemon enables automatic synchronization between workspace files and MockForge’s internal storage, allowing you to work with your mock API definitions as files and keep them in version control.
Overview
The sync daemon monitors a directory for .yaml and .yml files and automatically imports them into MockForge workspaces. This enables:
- File-based workflows: Edit workspace files with your favorite text editor
- Version control: Keep workspace definitions in Git
- Team collaboration: Share workspaces via Git repositories
- Automated workflows: CI/CD integration and automated deployment
- Real-time feedback: See exactly what’s being synced as it happens
How It Works
The sync daemon provides bidirectional synchronization:
- Monitors Directory: Watches for file changes in the specified workspace directory
- Detects Changes: Identifies created, modified, and deleted .yaml/.yml files
- Imports Automatically: Parses and imports valid MockRequest files into workspaces
- Provides Feedback: Shows clear, real-time output of all sync operations
What Gets Synced
- File Types: Only .yaml and .yml files
- File Format: Files must be valid MockRequest YAML
- Subdirectories: Monitors all subdirectories recursively
- Exclusions: Skips hidden files (starting with .)
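As a rough illustration of these rules, the following sketch watches a directory recursively with the notify crate and applies the same filters (only .yaml/.yml, skip hidden files). It is a simplified stand-in for the real sync daemon, not its actual implementation:

use notify::{recommended_watcher, Event, RecursiveMode, Watcher};
use std::path::Path;
use std::sync::mpsc::channel;

// Apply the sync daemon's filter rules: .yaml/.yml only, no hidden files.
fn should_sync(path: &Path) -> bool {
    let is_yaml = matches!(
        path.extension().and_then(|e| e.to_str()),
        Some("yaml") | Some("yml")
    );
    let hidden = path
        .file_name()
        .and_then(|n| n.to_str())
        .map_or(false, |n| n.starts_with('.'));
    is_yaml && !hidden
}

fn main() -> notify::Result<()> {
    let (tx, rx) = channel();
    let mut watcher = recommended_watcher(tx)?;
    // Recursive watch mirrors the daemon's monitoring of all subdirectories.
    watcher.watch(Path::new("./my-workspace"), RecursiveMode::Recursive)?;
    for res in rx {
        let event: Event = res?;
        for path in event.paths.iter().filter(|p| should_sync(p)) {
            println!("would import: {}", path.display());
        }
    }
    Ok(())
}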
Getting Started
Starting the Sync Daemon
Use the CLI to start the sync daemon:
# Basic usage
mockforge sync --workspace-dir ./my-workspace
# Short form
mockforge sync -w ./my-workspace
# With custom configuration
mockforge sync --workspace-dir ./workspace --config sync-config.yaml
What You’ll See
When you start the sync daemon:
🔄 Starting MockForge Sync Daemon...
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📁 Workspace directory: ./my-workspace
ℹ️ What the sync daemon does:
• Monitors the workspace directory for .yaml/.yml file changes
• Automatically imports new or modified request files
• Syncs changes bidirectionally between files and workspace
• Skips hidden files (starting with .)
🔍 Monitoring for file changes...
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Sync daemon started successfully!
💡 Press Ctrl+C to stop
Real-time Feedback
As files change, you’ll see detailed output:
🔄 Detected 1 file change in workspace 'default'
➕ Created: new-endpoint.yaml
✅ Successfully imported
🔄 Detected 2 file changes in workspace 'default'
📝 Modified: user-api.yaml
✅ Successfully updated
🗑️ Deleted: old-endpoint.yaml
ℹ️ Auto-deletion from workspace is disabled
Directory Organization
You can organize your workspace files however you like. The sync daemon monitors all subdirectories recursively:
my-workspace/
├── api-v1/
│ ├── users.yaml
│ ├── products.yaml
│ └── orders.yaml
├── api-v2/
│ ├── users.yaml
│ └── graphql.yaml
├── internal/
│ └── admin.yaml
└── shared/
└── auth.yaml
All .yaml and .yml files will be monitored and imported automatically.
File Format
Each file should contain a valid MockRequest in YAML format:
id: "get-users"
name: "Get Users"
method: "GET"
path: "/api/users"
headers:
Content-Type: "application/json"
response_status: 200
response_body: |
[
{"id": 1, "name": "Alice"},
{"id": 2, "name": "Bob"}
]
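For generating or validating these files programmatically, a struct along the following lines can deserialize the format with serde. The field set mirrors the example above but is only an inferred, illustrative shape; consult MockForge's source for the authoritative MockRequest definition:

use serde::Deserialize;
use std::collections::HashMap;

// Illustrative shape inferred from the example above, not the canonical type.
#[derive(Debug, Deserialize)]
struct MockRequest {
    id: String,
    name: String,
    method: String,
    path: String,
    #[serde(default)]
    headers: HashMap<String, String>,
    response_status: u16,
    response_body: String,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let yaml = std::fs::read_to_string("get-users.yaml")?;
    let req: MockRequest = serde_yaml::from_str(&yaml)?;
    println!("{} {} -> {}", req.method, req.path, req.response_status);
    Ok(())
}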
Usage Examples
Git Integration
Keep your workspaces in version control:
# 1. Create a Git repository for your workspaces
mkdir api-mocks
cd api-mocks
git init
# 2. Start the sync daemon
mockforge sync --workspace-dir .
# 3. Create or edit workspace files
vim user-endpoints.yaml
# 4. Commit and push changes
git add .
git commit -m "Add user endpoints"
git push origin main
# 5. Team members can pull changes
# The sync daemon will automatically import updates
Development Workflow
Use the sync daemon during active development:
# Terminal 1: Start sync daemon
mockforge sync --workspace-dir ./workspaces
# Terminal 2: Edit files
vim ./workspaces/new-feature.yaml
# Changes are automatically imported
# You'll see real-time feedback in Terminal 1
CI/CD Integration
Automate workspace deployment:
#!/bin/bash
# deploy-mocks.sh
# Pull latest workspace definitions from Git
git pull origin main
# Start sync daemon in background
mockforge sync --workspace-dir ./workspaces &
SYNC_PID=$!
# Wait for initial sync
sleep 5
# Start MockForge server
mockforge serve --config mockforge.yaml
# Cleanup on exit
trap "kill $SYNC_PID" EXIT
Best Practices
1. Use Version Control
Keep workspace files in Git for team collaboration:
# Create a .gitignore to exclude temporary files
echo ".DS_Store" >> .gitignore
echo "*.swp" >> .gitignore
echo "*.tmp" >> .gitignore
# Commit workspace definitions
git add *.yaml
git commit -m "Add workspace definitions"
2. Organize Files Logically
Structure your workspace files for clarity:
workspaces/
├── production/ # Production endpoints
│ ├── users-api.yaml
│ └── orders-api.yaml
├── staging/ # Staging endpoints
│ └── beta-features.yaml
└── development/ # Development/experimental
└── new-feature.yaml
3. Use Descriptive Filenames
Name files based on what they contain:
✅ Good:
- user-authentication.yaml
- product-catalog-api.yaml
- payment-processing.yaml
❌ Bad:
- endpoint1.yaml
- test.yaml
- temp.yaml
4. Keep Sync Daemon Running
Run the sync daemon continuously during development:
# Use a terminal multiplexer like tmux
tmux new -s mockforge-sync
mockforge sync --workspace-dir ./workspaces
# Detach with Ctrl+B then D
# Reattach with: tmux attach -t mockforge-sync
5. Monitor Sync Output
Pay attention to the sync daemon’s output:
- ✅ Green checkmarks: Files imported successfully
- ⚠️ Warning icons: Import failed, check file format
- 🔄 Change notifications: Shows what’s being synced
- ❌ Error messages: Indicate issues that need fixing
6. Handle Errors Promptly
When you see errors, fix them immediately:
❌ Detected error:
📝 Modified: broken-endpoint.yaml
⚠️ Failed to import: File is not a recognized format
Action: Check the file syntax and fix YAML formatting
Troubleshooting
Files Not Being Imported
Check file extension:
# Only .yaml and .yml files are monitored
ls -la workspaces/
# Ensure files end with .yaml or .yml
Verify file format:
# Files must be valid MockRequest YAML
cat workspaces/my-file.yaml
# Check for proper YAML syntax and required fields
Check for hidden files:
# Hidden files (starting with .) are ignored
# Rename: .hidden.yaml → visible.yaml
mv .hidden.yaml visible.yaml
Permission Errors
# Ensure MockForge can read the directory
chmod 755 workspaces/
chmod 644 workspaces/*.yaml
# Check ownership
ls -la workspaces/
Changes Not Detected
Verify sync daemon is running:
# Check if the process is still active
ps aux | grep "mockforge sync"
Check filesystem notifications:
# Some network filesystems don't support notifications
# Try editing locally instead of over NFS/SMB
Restart sync daemon:
# Stop with Ctrl+C, then restart
mockforge sync --workspace-dir ./workspaces
YAML Syntax Errors
When files fail to import due to syntax errors:
# Use a YAML validator
yamllint workspaces/problematic-file.yaml
# Common issues:
# - Incorrect indentation
# - Missing quotes around special characters
# - Invalid escape sequences
Debug Logging
Enable detailed logging to see what’s happening:
# Enable debug logs for sync watcher
RUST_LOG=mockforge_core::sync_watcher=debug mockforge sync --workspace-dir ./workspaces
# Enable trace-level logs for maximum detail
RUST_LOG=mockforge_core::sync_watcher=trace mockforge sync --workspace-dir ./workspaces
# Log to a file
RUST_LOG=mockforge_core::sync_watcher=debug mockforge sync --workspace-dir ./workspaces 2>&1 | tee sync.log
Getting Help
If you’re still having issues:
- Check the sync daemon output for error messages
- Enable debug logging to see detailed information
- Verify file format matches MockRequest YAML structure
- Check file permissions and ownership
- Try with a minimal test file to isolate the issue
Example minimal test file:
# test-endpoint.yaml
id: "test"
name: "Test Endpoint"
method: "GET"
path: "/test"
response_status: 200
response_body: '{"status": "ok"}'
Save this file in your workspace directory and verify it gets imported successfully.
Admin UI
MockForge Admin UI is a modern React-based dashboard that provides comprehensive administrative capabilities for your MockForge instances. Built with Shadcn UI components and designed for power users, it eliminates the need for manual file editing while providing enhanced functionality and user experience.
Overview
The Admin UI replaces the legacy static HTML interface with a rich, interactive React application that offers:
- Service Management: Enable/disable services and routes with granular control
- Fixture Management: Visual editing, diffing, and organization of mock data
- Live Monitoring: Real-time logs and performance metrics
- Authentication: Secure role-based access control
- Advanced Search: Full-text search across services, fixtures, and logs
- Bulk Operations: Manage multiple services simultaneously
Getting Started
Enabling the Admin UI
The Admin UI is enabled by default when starting MockForge with the admin interface:
mockforge serve --admin-ui
Access the interface at http://localhost:9080/admin (or your configured admin port).
Authentication
The Admin UI includes secure authentication with two built-in roles:
Admin Role
- Username: admin
- Password: admin123
- Permissions: Full access to all features
Viewer Role
- Username: viewer
- Password: viewer123
- Permissions: Read-only access to dashboard, logs, and metrics
First Login
- Navigate to the admin URL
- Enter your credentials or click “Demo Admin” for quick access
- The interface will load with role-appropriate navigation
Core Features
Dashboard
The dashboard provides an overview of your MockForge instance:
- System Status: CPU, memory usage, uptime, and active threads
- Server Status: HTTP, WebSocket, and gRPC server health
- Recent Requests: Latest API calls with response times and status codes
- Quick Stats: Total routes, fixtures, and active connections
Service Management
Manage your mock services without editing configuration files:
Service Controls
- Service Toggle: Enable/disable entire services
- Route Toggle: Granular control over individual endpoints
- Bulk Operations: Enable/disable multiple services at once
- Tag Filtering: Filter services by tags for organized management
Service Information
- Request counts and error rates per route
- Response time averages
- HTTP method indicators (GET, POST, PUT, DELETE)
- gRPC service paths
// Example: Toggle a service programmatically
const { updateService } = useServiceStore();
updateService('user-service', { enabled: false });
Fixture Management
Complete fixture lifecycle management through the web interface:
File Operations
- Tree View: Hierarchical organization of fixture files
- Drag & Drop: Move fixtures between folders
- Inline Rename: Click to edit fixture names
- Rich Editor: Monaco-style editing with syntax highlighting
Content Management
- Real-time Editing: Live preview of fixture content
- Version Control: Track changes with version numbers
- Auto-save: Ctrl+S keyboard shortcut for quick saves
- File Metadata: Size, modification dates, and route associations
Visual Diff
- Change Detection: Automatic diff generation on content changes
- Side-by-side View: Color-coded comparison of old vs new content
- Change Statistics: Count of added, removed, and modified lines
- Diff History: Review previous changes with timestamps
Live Logs
Monitor your MockForge instance in real-time:
Log Streaming
- Real-time Updates: Live log feed with configurable refresh intervals
- Auto-scroll: Smart scrolling with pause/resume controls
- Connection Status: Visual indicators for WebSocket health
Advanced Filtering
- Method Filter: Filter by HTTP methods (GET, POST, etc.)
- Status Code Filter: Focus on specific response codes
- Path Search: Full-text search across request paths
- Time Range: Filter logs by time windows (1h, 6h, 24h, 7d)
Log Details
- Request Inspection: Click any log entry for detailed view
- Headers & Timing: Complete request/response metadata
- Error Analysis: Detailed error messages and stack traces
- Export Options: Download filtered logs for analysis
Performance Metrics
Comprehensive performance monitoring and analysis:
Latency Analysis
- Histogram Visualization: Response time distribution across buckets
- Percentile Metrics: P50, P95, and P99 latency measurements
- Service Comparison: Compare performance across different services
- Color-coded Buckets: Visual indicators for fast (green), medium (yellow), and slow (red) responses
Failure Analysis
- Success/Failure Ratios: Pie chart visualization of request outcomes
- Status Code Distribution: Bar chart of HTTP response codes
- Error Rate Tracking: Percentage of failed requests over time
- SLA Monitoring: Visual indicators for SLA compliance
Real-time Updates
- Auto-refresh: Metrics update every 30 seconds
- Manual Refresh: Force immediate data refresh
- Performance Alerts: Automatic warnings for high error rates or latency
Advanced Features
The Admin UI provides access to many advanced MockForge features:
- Chaos Lab: Interactive network condition simulation with real-time latency visualization
- Reality Slider: Unified control for adjusting mock environment realism
- Scenario State Machine Editor: Visual flow editor for creating state machines
- Time Travel Controls: Virtual clock controls for temporal simulation
- Contract Diff Dashboard: Visualize and analyze API contract mismatches
- Voice Interface: Create APIs using natural language commands
For detailed documentation on these features, see the Advanced Features section.
Authentication & Authorization
JWT-based Security
- Token Authentication: Secure JWT tokens with automatic refresh
- Session Persistence: Login state survives browser refresh
- Auto-logout: Automatic logout on token expiration
Role-based Access Control
- Admin Features: Full read/write access to all functionality
- Viewer Restrictions: Read-only access to monitoring features
- Navigation Adaptation: Menu items adjust based on user role
- Permission Guards: Graceful handling of unauthorized access
Search & Filtering
Global Search
- Service Search: Find services by name, route paths, or tags
- Fixture Search: Search fixture names, paths, and content
- Log Search: Full-text search across log messages and metadata
Advanced Filters
- Tag-based Filtering: Group services by functional tags
- Time-based Filtering: Filter data by time ranges
- Status Filtering: Focus on specific response codes or error states
- Persistent Filters: Maintain filter state across navigation
Bulk Operations
Service Management
// Enable all services in a tag group
services.filter(s => s.tags.includes('api'))
  .forEach(s => updateService(s.id, { enabled: true }));
Fixture Operations
- Batch Selection: Select multiple fixtures for operations
- Bulk Rename: Apply naming patterns to multiple files
- Mass Delete: Remove multiple fixtures with confirmation
Validation Management
The Admin UI provides comprehensive validation controls for OpenAPI request validation:
Validation Mode Control
- Global Mode Toggle: Switch between off, warn, and enforce validation modes
- Per-Route Overrides: Set custom validation rules for specific endpoints
- Real-time Application: Changes take effect immediately without server restart
Validation Monitoring
- Error Statistics: View validation failure rates and error types
- Route-specific Metrics: See which endpoints are failing validation
- Error Details: Inspect detailed validation error messages
Advanced Validation Features
- Aggregate Error Reporting: Combine multiple validation errors into single responses
- Response Validation: Validate response payloads against OpenAPI schemas
- Admin Route Exclusion: Skip validation for admin UI routes when configured
// Example: Update validation mode programmatically
const { updateValidation } = useValidationStore();
updateValidation({
mode: 'warn',
aggregate_errors: true,
overrides: {
'GET /health': 'off',
'POST /api/users': 'enforce'
}
});
Configuration
Environment Variables
Configure Admin UI behavior through environment variables:
# Enable Admin UI (default: true)
MOCKFORGE_ADMIN_UI_ENABLED=true
# Admin UI port (default: 9080)
MOCKFORGE_ADMIN_PORT=9080
# Authentication settings
MOCKFORGE_ADMIN_AUTH_ENABLED=true
MOCKFORGE_ADMIN_JWT_SECRET=your-secret-key
# Session timeout (default: 24h)
MOCKFORGE_ADMIN_SESSION_TIMEOUT=86400
Custom Authentication
Replace the default authentication with your own system:
// Custom auth provider
pub struct CustomAuthProvider {
    // Your authentication implementation
}

impl AuthProvider for CustomAuthProvider {
    fn authenticate(&self, username: &str, password: &str) -> Result<User> {
        // Your authentication logic
    }
}
Theming
The Admin UI supports light and dark themes with CSS custom properties:
:root {
--background: 0 0% 100%;
--foreground: 222.2 84% 4.9%;
--primary: 221.2 83.2% 53.3%;
/* ... additional theme variables */
}
.dark {
--background: 222.2 84% 4.9%;
--foreground: 210 40% 98%;
/* ... dark theme overrides */
}
API Integration
REST Endpoints
The Admin UI communicates with MockForge through RESTful APIs:
# Service management
GET /api/v2/services
PUT /api/v2/services/{id}
POST /api/v2/services/bulk
# Fixture management
GET /api/v2/fixtures
POST /api/v2/fixtures
PUT /api/v2/fixtures/{id}
DELETE /api/v2/fixtures/{id}
# Authentication
POST /api/v2/auth/login
POST /api/v2/auth/refresh
POST /api/v2/auth/logout
# Logs and metrics
GET /api/v2/logs
GET /api/v2/metrics/latency
GET /api/v2/metrics/failures
WebSocket Endpoints
Real-time features use WebSocket connections:
# Live log streaming
WS /api/v2/logs/stream
# Metrics updates
WS /api/v2/metrics/stream
# Configuration changes
WS /api/v2/config/stream
Troubleshooting
Common Issues
Authentication Problems
# Check JWT secret configuration
MOCKFORGE_ADMIN_JWT_SECRET=your-secret-key
# Verify admin credentials
curl -X POST http://localhost:9080/api/v2/auth/login \
-H "Content-Type: application/json" \
-d '{"username":"admin","password":"admin123"}'
WebSocket Connection Issues
# Check WebSocket endpoint
wscat -c ws://localhost:9080/api/v2/logs/stream
# Verify proxy configuration if behind reverse proxy
ProxyPass /api/v2/ ws://localhost:9080/api/v2/
Performance Issues
# Enable performance monitoring
MOCKFORGE_ADMIN_METRICS_ENABLED=true
# Increase memory limits for large datasets
MOCKFORGE_ADMIN_MEMORY_LIMIT=512MB
Debug Mode
Enable debug logging for troubleshooting:
MOCKFORGE_LOG_LEVEL=debug mockforge serve --admin-ui
Browser Compatibility
The Admin UI requires modern browsers with support for:
- ES2020 features
- WebSocket API
- CSS Grid and Flexbox
- Local Storage
Best Practices
Security
- Change default admin credentials in production
- Use HTTPS for admin interface in production
- Configure appropriate session timeouts
- Regularly rotate JWT secrets
Performance
- Use filtering to limit large datasets
- Enable auto-scroll only when monitoring actively
- Clear old logs periodically to improve performance
- Monitor memory usage with large fixture files
Organization
- Use descriptive service and fixture names
- Organize fixtures in logical folder structures
- Apply consistent tagging to services
- Document fixture purposes in comments
Examples
Service Management Workflow
// 1. Filter services by tag
const apiServices = services.filter(s => s.tags.includes('api'));
// 2. Enable all API services
apiServices.forEach(service => {
updateService(service.id, { enabled: true });
});
// 3. Disable specific routes within services
apiServices.forEach(service => {
service.routes
.filter(route => route.path.includes('/internal'))
.forEach(route => {
const routeId = `${route.method}-${route.path}`;
toggleRoute(service.id, routeId, false);
});
});
Fixture Management Workflow
// 1. Create new fixture
const newFixture = {
id: 'user-profile-success',
name: 'user-profile.json',
path: 'http/get/users/profile/user-profile.json',
content: JSON.stringify({
id: '{{uuid}}',
name: '{{faker.name.fullName}}',
email: '{{faker.internet.email}}',
created_at: '{{now}}'
}, null, 2)
};
// 2. Add to store
addFixture(newFixture);
// 3. Associate with route
updateFixture(newFixture.id, {
  route_path: '/api/users/profile',
  method: 'GET'
});
This comprehensive guide covers all aspects of the MockForge Admin UI, from basic usage to advanced configuration and troubleshooting. The interface provides a complete administrative solution that eliminates the need for manual file editing while offering enhanced functionality and user experience.
Advanced Features
MockForge includes a comprehensive set of advanced features that enable sophisticated mocking scenarios, intelligent behavior simulation, and production-like testing environments. This section provides an overview of all advanced features with links to detailed documentation.
Overview
MockForge’s advanced features are organized into several categories:
- Simulation & State Management: Virtual Backend Reality (VBR), Temporal Simulation, Scenario State Machines
- Intelligence & Automation: MockAI, Generative Schema Mode, AI Contract Diff
- Chaos & Realism: Chaos Lab, Reality Slider
- Collaboration & Cloud: Cloud Workspaces, Data Scenario Marketplace
- Developer Experience: ForgeConnect SDK
- Experimental Features: Deceptive Deploys, Voice + LLM Interface, Reality Continuum, Smart Personas
Simulation & State Management
Virtual Backend Reality (VBR) Engine
The VBR Engine provides a virtual “database” layer that automatically generates CRUD operations from OpenAPI specifications. It supports relationship mapping, data persistence, and state management.
Key Features:
- Automatic CRUD generation from OpenAPI specs
- Support for 1:N and N:N relationships
- Multiple storage backends (JSON, SQLite, in-memory)
- Data seeding and state snapshots
- Realistic ID generation
Learn More: VBR Engine Documentation
Temporal Simulation (Time Travel)
Temporal Simulation allows you to control time in your mock environment, enabling time-based data mutations, scheduled events, and time-travel debugging.
Key Features:
- Virtual clock abstraction
- Time advancement controls
- Data mutation rules triggered by time
- Scheduler for simulated cron events
- UI controls for time travel
Learn More: Temporal Simulation Documentation
Scenario State Machines 2.0
Advanced state machine system for modeling complex workflows and multi-step scenarios with visual flow editing and conditional transitions.
Key Features:
- Visual flow editor for state transitions
- Conditional transitions with if/else logic
- Reusable sub-scenarios
- Real-time preview of active state
- Programmatic state manipulation
Learn More: Scenario State Machines Documentation
Intelligence & Automation
MockAI (Intelligent Mocking)
MockAI uses artificial intelligence to generate contextually appropriate, realistic API responses. It learns from OpenAPI specifications and example payloads.
Key Features:
- Trainable rule engine from examples or schema
- Context-aware conditional logic generation
- LLM-based dynamic response option
- Automatic fake data consistency
- Realistic validation error simulation
Learn More: MockAI Documentation
Generative Schema Mode
Generate complete API ecosystems from JSON payloads, automatically creating routes, schemas, and entity relationships.
Key Features:
- Complete “JSON → entire API ecosystem” generation
- Auto-route generation with realistic CRUD mapping
- One-click environment creation from JSON payloads
- Entity relation inference
- Schema merging from multiple examples
Learn More: Generative Schema Mode Documentation
AI Contract Diff
Automatically detect and analyze differences between API contracts and live requests, providing contextual recommendations for mismatches.
Key Features:
- Contract diff analysis between schema and live requests
- Contextual recommendations for mismatches
- Inline schema correction proposals
- CI/CD integration (contract verification step)
- Dashboard visualization of mismatches
Learn More: AI Contract Diff Documentation
Chaos & Realism
Chaos Lab
Interactive network condition simulation with real-time latency visualization, network profiles, and error pattern scripting.
Key Features:
- Real-time latency visualization
- Network profile management (slow 3G, flaky Wi-Fi, etc.)
- Error pattern scripting (burst, random, sequential)
- Profile export/import
- CLI integration
Learn More: Chaos Lab Documentation
Reality Slider
Unified control mechanism that adjusts mock environment realism from simple static stubs to full production-level chaos.
Key Features:
- Configurable realism levels (1–5)
- Automated toggling of chaos, latency, and MockAI behaviors
- Persistent slider state per environment
- Export/import of realism presets
- Keyboard shortcuts for quick changes
Learn More: Reality Slider Documentation
Collaboration & Cloud
Cloud Workspaces
Multi-user collaborative editing with real-time state synchronization, version control, and role-based permissions.
Key Features:
- User authentication and access control
- Multi-user environment editing
- State synchronization between clients
- Git-style version control for mocks and data
- Role-based permissions (Owner, Editor, Viewer)
Learn More: Cloud Workspaces Documentation
Data Scenario Marketplace
Marketplace for downloadable mock templates with tags, ratings, versioning, and one-click import/export.
Key Features:
- Marketplace for downloadable mock templates
- Tags, ratings, and versioning
- One-click import/export
- Domain-specific packs (e-commerce, fintech, IoT)
- Automatic schema and route alignment
Learn More: Scenario Marketplace Documentation
Developer Experience
ForgeConnect SDK
Browser extension and SDK for capturing network traffic, auto-generating mocks, and integrating with popular frameworks.
Key Features:
- Browser extension to capture network traffic
- Auto-mock generation for unhandled requests
- Local mock preview in browser
- SDK for framework bindings (React, Vue, Angular)
- Auth passthrough support for OAuth flows
Learn More: ForgeConnect SDK Documentation
Experimental Features
Deceptive Deploys
Deploy mock APIs that look identical to production endpoints, perfect for demos, PoCs, and client presentations.
Key Features:
- Production-like headers and response patterns
- Production-like CORS configuration
- Production-like rate limiting
- OAuth flow simulation
- Auto-tunnel deployment
Learn More: Deceptive Deploys Documentation
Voice + LLM Interface
Generate OpenAPI specifications and mock APIs from natural language voice commands.
Key Features:
- Voice command parsing with LLM
- OpenAPI spec generation from voice commands
- Conversational mode for multi-turn interactions
- Single-shot mode for complete commands
- CLI and Web UI integration
Learn More: Voice + LLM Interface Documentation
Reality Continuum
Gradually transition from mock to real backend data by intelligently blending responses from both sources.
Key Features:
- Dynamic blending of mock and real responses
- Time-based progression with virtual clock integration
- Per-route, group-level, and global blend ratios
- Multiple merge strategies
- Fallback handling for failures
Learn More: Reality Continuum Documentation
Smart Personas
Generate coherent, consistent mock data using persona profiles with unique backstories and deterministic generation.
Key Features:
- Persona profile system with unique IDs and domains
- Coherent backstories with template-based generation
- Persona relationships (connections between personas)
- Deterministic data generation (same persona = same data)
- Domain-specific persona templates
Learn More: Smart Personas Documentation
Getting Started
To get started with advanced features:
- Review the feature documentation linked above for detailed information
- Check configuration examples in the Configuration Guide
- Try the tutorials in the Tutorials section
- Explore examples in the examples/ directory
Feature Comparison
| Feature | Use Case | Complexity |
|---|---|---|
| VBR Engine | Stateful CRUD operations | Medium |
| Temporal Simulation | Time-based testing | Medium |
| MockAI | Intelligent responses | High |
| Chaos Lab | Resilience testing | Low |
| Reality Slider | Quick realism adjustment | Low |
| Cloud Workspaces | Team collaboration | Medium |
| ForgeConnect SDK | Browser-based development | Low |
Best Practices
- Start Simple: Begin with basic features (Chaos Lab, Reality Slider) before moving to advanced features
- Read Documentation: Each feature has detailed documentation with examples
- Use Examples: Check the examples/ directory for working configurations
- Test Incrementally: Enable features one at a time to understand their impact
- Monitor Performance: Some features (like MockAI) may add latency
Related Documentation
- Advanced Behavior and Simulation - Basic advanced features
- Configuration Guide - How to configure features
- API Reference - Programmatic API access
- Tutorials - Step-by-step guides
Virtual Backend Reality (VBR) Engine
The Virtual Backend Reality (VBR) Engine provides a virtual “database” layer that automatically generates CRUD operations from OpenAPI specifications. It enables stateful mocking with relationship management, data persistence, and realistic data generation.
Overview
The VBR Engine transforms MockForge from a simple request/response mock server into a stateful backend simulator. Instead of returning static responses, VBR maintains a virtual database that supports:
- Automatic CRUD operations from OpenAPI specs
- Relationship mapping (1:N and N:N)
- Data persistence across server restarts
- State snapshots for point-in-time recovery
- Realistic ID generation with customizable patterns
Quick Start
From OpenAPI Specification
The easiest way to get started is to generate a VBR engine from an OpenAPI specification:
# Start server with VBR from OpenAPI spec
mockforge serve --spec api.yaml --vbr-enabled
Or in your configuration:
vbr:
enabled: true
openapi_spec: "./api.yaml"
backend: "sqlite" # or "json", "memory"
storage_path: "./vbr-data"
Programmatic Usage
use mockforge_vbr::VbrEngine;

// Create engine from OpenAPI spec
let (engine, result) = VbrEngine::from_openapi_file(config, "./api-spec.yaml").await?;

// Or create manually
let mut engine = VbrEngine::new(config).await?;
Features
Automatic CRUD Generation
VBR automatically detects CRUD operations from your OpenAPI specification:
- GET /users → List all users
- GET /users/{id} → Get user by ID
- POST /users → Create new user
- PUT /users/{id} → Update user
- DELETE /users/{id} → Delete user
Primary keys are auto-detected (fields named id, uuid, etc.), and foreign keys are inferred from field names ending in _id.
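The heuristic can be pictured as a simple classification over field names; the sketch below is a hypothetical stand-in for that inference, not VBR's actual code:

// Classify a schema field the way the auto-detection heuristic described
// above would: id/uuid become primary keys, *_id fields become foreign keys.
fn classify_field(name: &str) -> &'static str {
    match name {
        "id" | "uuid" => "primary key",
        n if n.ends_with("_id") => "foreign key",
        _ => "data field",
    }
}

fn main() {
    for field in ["id", "user_id", "title"] {
        println!("{field}: {}", classify_field(field));
    }
}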
Relationship Mapping
One-to-Many (1:N)
VBR automatically detects foreign key relationships:
# OpenAPI spec
components:
schemas:
User:
properties:
id: { type: integer }
name: { type: string }
Post:
properties:
id: { type: integer }
user_id: { type: integer } # Foreign key detected
title: { type: string }
This creates a relationship where one User can have many Posts. Access related resources:
# Get all posts for a user
GET /vbr-api/users/1/posts
Many-to-Many (N:N)
Define many-to-many relationships explicitly:
use mockforge_vbr::ManyToManyDefinition;

let m2m = ManyToManyDefinition::new("User".to_string(), "Role".to_string());
schema.with_many_to_many(m2m);
This creates a junction table automatically (e.g., user_role) and enables:
# Get all roles for a user
GET /vbr-api/users/1/roles
# Get all users with a role
GET /vbr-api/roles/1/users
Data Seeding
Seed your virtual database with initial data:
From File
# Seed from JSON file
mockforge vbr seed --file seed-data.json
# Seed from YAML file
mockforge vbr seed --file seed-data.yaml
Seed file format:
{
"users": [
{"id": 1, "name": "Alice", "email": "alice@example.com"},
{"id": 2, "name": "Bob", "email": "bob@example.com"}
],
"posts": [
{"id": 1, "user_id": 1, "title": "First Post"},
{"id": 2, "user_id": 1, "title": "Second Post"}
]
}
Programmatic Seeding
// Seed a single entity
engine.seed_entity("users", vec![
    json!({"name": "Alice", "email": "alice@example.com"}),
    json!({"name": "Bob", "email": "bob@example.com"}),
]).await?;

// Seed all entities from file
engine.seed_from_file("./seed-data.json").await?;

// Clear entity data
engine.clear_entity("users").await?;

// Clear all data
engine.reset().await?;
ID Generation
VBR supports multiple ID generation strategies:
Pattern-Based IDs
.with_auto_generation("id", AutoGenerationRule::Pattern("USR-{increment:06}".to_string()))
Template variables:
- {increment} or {increment:06} - Auto-incrementing with optional padding
- {timestamp} - Unix timestamp
- {random} or {random:8} - Random alphanumeric (default length 8)
- {uuid} - UUID v4
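As a rough sketch of how such a pattern expands, the hypothetical helper below handles only the {increment} token, assuming padding behaves like zero-padded numeric formatting:

// Expand "{increment}" / "{increment:NN}" in an ID pattern; illustrative only.
fn expand_pattern(pattern: &str, counter: u64) -> String {
    if let Some(start) = pattern.find("{increment") {
        // Assume a well-formed token and find its closing brace.
        let end = start + pattern[start..].find('}').expect("closing brace");
        let token = &pattern[start + 1..end]; // e.g. "increment:06"
        let width = token
            .split_once(':')
            .and_then(|(_, w)| w.parse::<usize>().ok())
            .unwrap_or(0);
        let value = format!("{counter:0width$}");
        format!("{}{}{}", &pattern[..start], value, &pattern[end + 1..])
    } else {
        pattern.to_string()
    }
}

fn main() {
    assert_eq!(expand_pattern("USR-{increment:06}", 42), "USR-000042");
}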
Realistic IDs (Stripe-style)
.with_auto_generation("id", AutoGenerationRule::Realistic { prefix: "cus".to_string(), length: 14 })
Generates IDs like: cus_abc123def456
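A minimal sketch of generating this style of ID with the rand crate; the underscore separator and lowercase-alphanumeric alphabet are assumptions, and MockForge's own generator may differ:

use rand::Rng; // rand = "0.8"

// Build "<prefix>_<length random alphanumeric chars>", e.g. cus_abc123def456.
fn realistic_id(prefix: &str, length: usize) -> String {
    const ALPHABET: &[u8] = b"abcdefghijklmnopqrstuvwxyz0123456789";
    let mut rng = rand::thread_rng();
    let suffix: String = (0..length)
        .map(|_| ALPHABET[rng.gen_range(0..ALPHABET.len())] as char)
        .collect();
    format!("{prefix}_{suffix}")
}

fn main() {
    println!("{}", realistic_id("cus", 14));
}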
State Snapshots
Create point-in-time snapshots of your virtual database:
Create Snapshot
# Via CLI
mockforge vbr snapshot create --name initial --description "Initial state"
# Via API
curl -X POST http://localhost:3000/vbr-api/snapshots \
-H "Content-Type: application/json" \
-d '{"name": "initial", "description": "Initial state"}'
Restore Snapshot
# Via CLI
mockforge vbr snapshot restore --name initial
# Via API
curl -X POST http://localhost:3000/vbr-api/snapshots/initial/restore
List Snapshots
# Via CLI
mockforge vbr snapshot list
# Via API
curl http://localhost:3000/vbr-api/snapshots
Delete Snapshot
# Via CLI
mockforge vbr snapshot delete --name initial
# Via API
curl -X DELETE http://localhost:3000/vbr-api/snapshots/initial
Time-Based Expiry
Configure records to expire after a certain time:
vbr:
entities:
- name: sessions
ttl_seconds: 3600 # Expire after 1 hour
aging_enabled: true
Records older than the TTL are automatically removed.
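The expiry rule amounts to a simple age check; a sketch of the idea (how VBR schedules the actual removal is an implementation detail):

use std::time::{Duration, SystemTime};

// A record is expired once its age exceeds the configured ttl_seconds.
fn is_expired(created_at: SystemTime, ttl_seconds: u64) -> bool {
    created_at
        .elapsed()
        .map(|age| age > Duration::from_secs(ttl_seconds))
        .unwrap_or(false)
}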
Storage Backends
VBR supports multiple storage backends:
SQLite (Recommended)
Persistent storage with full SQL support:
vbr:
backend: "sqlite"
storage_path: "./vbr-data.db"
Advantages:
- Full SQL query support
- ACID transactions
- Efficient for large datasets
- Easy to inspect with SQL tools
JSON
File-based storage for simple use cases:
vbr:
backend: "json"
storage_path: "./vbr-data.json"
Advantages:
- Human-readable
- Easy to version control
- Simple backup/restore
In-Memory
Fast, non-persistent storage:
vbr:
backend: "memory"
Advantages:
- Fastest performance
- No disk I/O
- Perfect for testing
Note: Data is lost on server restart.
API Endpoints
VBR automatically creates REST API endpoints for all entities:
Entity Operations
# List all entities
GET /vbr-api/{entity}
# Get entity by ID
GET /vbr-api/{entity}/{id}
# Create entity
POST /vbr-api/{entity}
Content-Type: application/json
{
"name": "Alice",
"email": "alice@example.com"
}
# Update entity
PUT /vbr-api/{entity}/{id}
Content-Type: application/json
{
"name": "Alice Updated"
}
# Delete entity
DELETE /vbr-api/{entity}/{id}
Relationship Operations
# Get related entities (1:N)
GET /vbr-api/{entity}/{id}/{relationship}
# Get related entities (N:N)
GET /vbr-api/{entity}/{id}/{relationship}
Snapshot Operations
# Create snapshot
POST /vbr-api/snapshots
Content-Type: application/json
{
"name": "snapshot1",
"description": "Optional description"
}
# List snapshots
GET /vbr-api/snapshots
# Get snapshot metadata
GET /vbr-api/snapshots/{name}
# Restore snapshot
POST /vbr-api/snapshots/{name}/restore
# Delete snapshot
DELETE /vbr-api/snapshots/{name}
Database Management
# Reset entire database
POST /vbr-api/reset
# Reset specific entity
POST /vbr-api/reset/{entity}
Configuration
Full Configuration Example
vbr:
enabled: true
# OpenAPI spec for auto-generation
openapi_spec: "./api.yaml"
# Storage backend
backend: "sqlite" # sqlite, json, memory
storage_path: "./vbr-data"
# Entity configuration
entities:
- name: users
primary_key: "id"
auto_generation:
id: "pattern:USR-{increment:06}"
ttl_seconds: null # No expiry
aging_enabled: false
- name: sessions
primary_key: "id"
ttl_seconds: 3600 # Expire after 1 hour
aging_enabled: true
# Relationships
relationships:
- type: "one_to_many"
from: "users"
to: "posts"
foreign_key: "user_id"
- type: "many_to_many"
from: "users"
to: "roles"
junction_table: "user_role"
# Snapshot configuration
snapshots:
enabled: true
directory: "./snapshots"
max_snapshots: 10
Use Cases
Development Environment
Create a realistic development environment without a real database:
vbr:
enabled: true
backend: "sqlite"
openapi_spec: "./api.yaml"
Integration Testing
Use VBR for integration tests with deterministic data:
// Setup
let engine = VbrEngine::from_openapi_file(config, "./api.yaml").await?;
engine.seed_from_file("./test-data.json").await?;

// Run tests
// ...

// Cleanup
engine.reset().await?;
Demo Environments
Create snapshots for consistent demo environments:
# Setup demo data
mockforge vbr seed --file demo-data.json
# Create snapshot
mockforge vbr snapshot create --name demo
# Later, restore for consistent demos
mockforge vbr snapshot restore --name demo
Best Practices
- Use SQLite for Production: SQLite provides the best balance of performance and features
- Seed Initial Data: Use seed files for consistent starting states
- Create Snapshots: Save important states for quick restoration
- Configure TTL: Use time-based expiry for session-like data
- Version Control Seed Files: Keep seed data in version control
- Use Realistic IDs: Pattern-based IDs make data look more realistic
Troubleshooting
Primary Key Not Detected
If VBR doesn’t detect your primary key, specify it explicitly:
vbr:
entities:
- name: users
primary_key: "user_id" # Explicit primary key
Foreign Key Not Detected
If foreign key relationships aren’t detected, define them explicitly:
vbr:
relationships:
- type: "one_to_many"
from: "users"
to: "posts"
foreign_key: "author_id" # Custom foreign key name
Snapshot Restore Fails
Ensure the snapshot directory exists and has write permissions:
mkdir -p ./snapshots
chmod 755 ./snapshots
Related Documentation
- Temporal Simulation - Time-based data mutations
- Scenario State Machines - State machine integration
- Configuration Guide - Complete configuration reference
Temporal Simulation (Time Travel)
Temporal Simulation allows you to control time in your mock environment, enabling time-based data mutations, scheduled events, and time-travel debugging. Test time-dependent behavior without waiting for real time to pass.
Overview
Time travel in MockForge works through a virtual clock that can be:
- Enabled/disabled at runtime
- Set to any specific point in time
- Advanced by arbitrary durations instantly
- Scaled to run faster or slower than real time
When time travel is enabled, all time-related features use the virtual clock instead of the system clock.
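Conceptually, a virtual clock can be as simple as real time plus an adjustable offset. The toy sketch below illustrates the idea; scaling and persistence are omitted, and this is not MockForge's actual VirtualClock:

use std::time::{Duration, SystemTime};

// Toy virtual clock: when enabled, report real time shifted by an offset.
struct ToyClock {
    enabled: bool,
    offset: Duration,
}

impl ToyClock {
    fn now(&self) -> SystemTime {
        let real = SystemTime::now();
        if self.enabled { real + self.offset } else { real }
    }

    // "Advancing" time just grows the offset; no waiting involved.
    fn advance(&mut self, by: Duration) {
        self.offset += by;
    }
}

fn main() {
    let mut clock = ToyClock { enabled: true, offset: Duration::ZERO };
    clock.advance(Duration::from_secs(3600)); // jump ahead one hour
    println!("{:?}", clock.now());
}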
Quick Start
Enable Time Travel
# config.yaml
core:
time_travel:
enabled: true
initial_time: "2025-01-01T00:00:00Z"
scale_factor: 1.0
enable_scheduling: true
Control Time via CLI
# Get time travel status
mockforge time status
# Enable time travel at a specific time
mockforge time enable --time "2025-01-01T00:00:00Z"
# Advance time by 1 month (instantly!)
mockforge time advance 1month
# Advance time by 2 hours
mockforge time advance 2h
# Set time to a specific point
mockforge time set "2025-06-01T12:00:00Z"
# Reset to real time
mockforge time reset
Use Time-Based Templates
Time-aware template tokens automatically use the virtual clock:
{
"timestamp": "{{now}}",
"expires_at": "{{now+1h}}",
"created_at": "{{now-30m}}"
}
Virtual Clock
The virtual clock is the core of temporal simulation. It provides:
Basic Operations
use mockforge_core::time_travel::VirtualClock;

let clock = VirtualClock::new();

// Enable and set time
clock.enable_and_set(DateTime::parse_from_rfc3339("2025-01-01T00:00:00Z")?);

// Advance time
clock.advance(Duration::from_secs(3600)); // Advance 1 hour

// Get current virtual time
let now = clock.now();

// Disable (return to real time)
clock.disable();
Time Scale
Run time faster or slower than real time:
# Run at 2x speed
mockforge time scale 2.0
# Run at 0.5x speed (half speed)
mockforge time scale 0.5
Cron Scheduler
Schedule recurring events using cron expressions:
Create Cron Job
# Via CLI
mockforge time cron create \
--schedule "0 */6 * * *" \
--action "callback" \
--callback-url "http://localhost:3000/api/cleanup"
# Via API
curl -X POST http://localhost:9080/__mockforge/time-travel/cron \
-H "Content-Type: application/json" \
-d '{
"schedule": "0 */6 * * *",
"action": {
"type": "callback",
"url": "http://localhost:3000/api/cleanup"
},
"enabled": true
}'
Cron Expression Format
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of week (0 - 6) (Sunday to Saturday)
│ │ │ │ │
* * * * *
Examples:
- 0 */6 * * * - Every 6 hours
- 0 0 * * * - Daily at midnight
- */15 * * * * - Every 15 minutes
- 0 9 * * 1-5 - Weekdays at 9 AM
List Cron Jobs
# Via CLI
mockforge time cron list
# Via API
curl http://localhost:9080/__mockforge/time-travel/cron
Mutation Rules
Automatically mutate data based on time triggers:
Interval-Based Mutations
Mutate data at regular intervals:
# Create mutation rule
mockforge time mutation create \
--entity "orders" \
--trigger "interval:1h" \
--operation "update_status" \
--field "status" \
--value "shipped"
# Via API
curl -X POST http://localhost:9080/__mockforge/time-travel/mutations \
-H "Content-Type: application/json" \
-d '{
"entity": "orders",
"trigger": {
"type": "interval",
"duration": "1h"
},
"operation": {
"type": "update_status",
"field": "status",
"value": "shipped"
}
}'
Time-Based Mutations
Mutate data at specific times:
{
"entity": "tokens",
"trigger": {
"type": "at_time",
"time": "2025-01-01T12:00:00Z"
},
"operation": {
"type": "set",
"field": "expired",
"value": true
}
}
Field Threshold Mutations
Mutate when a field reaches a threshold:
{
"entity": "orders",
"trigger": {
"type": "field_threshold",
"field": "age_days",
"operator": ">=",
"value": 30
},
"operation": {
"type": "set",
"field": "status",
"value": "archived"
}
}
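The trigger semantics reduce to a comparison between the field's current value and the configured threshold; a hypothetical evaluator for illustration:

// Evaluate a field-threshold trigger like the JSON above; illustrative only.
fn threshold_met(field_value: f64, operator: &str, threshold: f64) -> bool {
    match operator {
        ">=" => field_value >= threshold,
        "<=" => field_value <= threshold,
        ">" => field_value > threshold,
        "<" => field_value < threshold,
        "==" => (field_value - threshold).abs() < f64::EPSILON,
        _ => false,
    }
}

fn main() {
    // An order aged 31 days satisfies age_days >= 30 and would be archived.
    assert!(threshold_met(31.0, ">=", 30.0));
}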
Scheduled Responses
Schedule responses to be sent at specific times:
# Schedule a response for 30 minutes from now
curl -X POST http://localhost:9080/__mockforge/time-travel/schedule \
-H "Content-Type: application/json" \
-d '{
"trigger_time": "+30m",
"path": "/api/notifications",
"method": "POST",
"body": {"event": "token_expired"},
"status": 401
}'
VBR Integration
Temporal simulation integrates with the VBR Engine for time-based data mutations:
Snapshot with Time Travel
Create snapshots that include time travel state:
use mockforge_vbr::VbrEngine;

// Create snapshot with time travel state
engine.create_snapshot_with_time_travel(
    "snapshot1",
    Some("Description".to_string()),
    "./snapshots",
    &clock
).await?;

// Restore snapshot with time travel state
engine.restore_snapshot_with_time_travel(
    "snapshot1",
    "./snapshots",
    &clock
).await?;
Mutation Rules in VBR
VBR automatically executes mutation rules based on virtual time:
vbr:
entities:
- name: orders
mutation_rules:
- trigger: "interval:1h"
operation: "update_status"
field: "status"
value: "processing"
Admin API
Time Travel Status
GET /__mockforge/time-travel/status
Response:
{
"enabled": true,
"virtual_time": "2025-01-15T10:30:00Z",
"real_time": "2025-01-01T10:30:00Z",
"scale_factor": 1.0
}
Advance Time
POST /__mockforge/time-travel/advance
Content-Type: application/json
{
"duration": "2h" # or "1month", "30m", etc.
}
Set Time
PUT /__mockforge/time-travel/time
Content-Type: application/json
{
"time": "2025-06-01T12:00:00Z"
}
Enable/Disable
POST /__mockforge/time-travel/enable
Content-Type: application/json
{
"time": "2025-01-01T00:00:00Z" # Optional initial time
}
POST /__mockforge/time-travel/disable
CLI Commands
Time Control
# Status
mockforge time status
# Enable
mockforge time enable [--time "2025-01-01T00:00:00Z"]
# Disable
mockforge time disable
# Advance
mockforge time advance <duration> # e.g., "1month", "2h", "30m"
# Set
mockforge time set <time> # ISO 8601 format
# Scale
mockforge time scale <factor> # e.g., 2.0 for 2x speed
# Reset
mockforge time reset
Cron Jobs
# List
mockforge time cron list
# Create
mockforge time cron create --schedule "<cron>" --action "<action>"
# Get
mockforge time cron get <id>
# Update
mockforge time cron update <id> --enabled false
# Delete
mockforge time cron delete <id>
Mutation Rules
# List
mockforge time mutation list
# Create
mockforge time mutation create --entity "<entity>" --trigger "<trigger>" --operation "<operation>"
# Get
mockforge time mutation get <id>
# Update
mockforge time mutation update <id> --enabled false
# Delete
mockforge time mutation delete <id>
Use Cases
Token Expiration
Test token expiration without waiting:
# Create token that expires in 1 hour
mockforge time enable --time "2025-01-01T00:00:00Z"
# Advance 1 hour
mockforge time advance 1h
# Token is now expired
Session Timeouts
Test session timeout behavior:
vbr:
entities:
- name: sessions
ttl_seconds: 3600 # 1 hour
aging_enabled: true
Scheduled Events
Test scheduled notifications:
# Schedule notification for 1 day from now
mockforge time cron create \
--schedule "0 0 * * *" \
--action "callback" \
--callback-url "http://localhost:3000/api/send-daily-report"
Data Aging
Test data that changes over time:
# Create mutation rule to age orders
mockforge time mutation create \
--entity "orders" \
--trigger "interval:1d" \
--operation "increment" \
--field "age_days"
Best Practices
- Start with Simple Scenarios: Begin with basic time advancement before using cron or mutations
- Use Snapshots: Save important time states for quick restoration
- Test Edge Cases: Test behavior at midnight, month boundaries, etc.
- Monitor Performance: Time-based features add minimal overhead
- Combine with VBR: Use VBR entities with time-based mutations for realistic scenarios
Troubleshooting
Time Not Advancing
- Ensure time travel is enabled: mockforge time status
- Check that scheduling is enabled in configuration
- Verify cron jobs are enabled
Mutations Not Executing
- Check mutation rule is enabled
- Verify trigger conditions are met
- Review server logs for errors
Cron Jobs Not Running
- Ensure cron scheduler background task is running
- Check cron expression is valid
- Verify job is enabled
Related Documentation
- VBR Engine - State management with time-based mutations
- Scenario State Machines - Time-based state transitions
- Configuration Guide - Complete configuration reference
Scenario State Machines 2.0
Scenario State Machines 2.0 provides a visual flow editor for modeling complex workflows and multi-step scenarios. Create state machines with conditional transitions, reusable sub-scenarios, and real-time state tracking.
Overview
State machines enable you to model complex API behaviors that depend on previous interactions:
- Visual Flow Editor: Drag-and-drop interface for creating state machines
- Conditional Transitions: If/else logic for state transitions
- Reusable Sub-Scenarios: Compose complex workflows from simpler components
- Real-Time Preview: See active state and available transitions
- VBR Integration: Synchronize state with VBR entities
Quick Start
Create a State Machine
- Navigate to State Machines in the Admin UI
- Click Create New State Machine
- Add states and transitions using the visual editor
- Configure conditions for transitions
- Save the state machine
Basic Example: Order Workflow
name: order_workflow
initial_state: pending
states:
- name: pending
response:
status_code: 200
body: '{"order_id": "{{resource_id}}", "status": "pending"}'
- name: processing
response:
status_code: 200
body: '{"order_id": "{{resource_id}}", "status": "processing"}'
- name: shipped
response:
status_code: 200
body: '{"order_id": "{{resource_id}}", "status": "shipped"}'
transitions:
- from: pending
to: processing
condition: 'method == "PUT" && path == "/api/orders/{id}/process"'
- from: processing
to: shipped
condition: 'method == "PUT" && path == "/api/orders/{id}/ship"'
Visual Editor
The visual editor provides a React Flow-based interface for creating state machines:
Adding States
- Click Add State button
- Configure state name and response
- Position state on canvas
- Connect states with transitions
Creating Transitions
- Drag from one state to another
- Configure transition condition
- Set transition metadata (optional)
Editing States
- Double-click a state to edit
- Right-click for context menu
- Drag to reposition
Conditional Transitions
Transitions can include conditions that determine when they execute:
Method-Based Conditions
transitions:
- from: pending
to: processing
condition: 'method == "POST" && path == "/api/orders/{id}/process"'
Header-Based Conditions
transitions:
- from: pending
to: processing
condition: 'header["X-Admin"] == "true"'
Body-Based Conditions
transitions:
- from: pending
to: processing
condition: 'body.status == "ready"'
Complex Conditions
transitions:
- from: pending
to: processing
condition: '(method == "PUT" || method == "PATCH") && body.amount > 100'
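Under the hood, a condition like this is evaluated against the incoming request. The sketch below hard-codes that one rule to show the shape of the check; the types are hypothetical and MockForge's real expression engine parses conditions generically:

// Minimal request view for evaluating the example condition above.
struct RequestView<'a> {
    method: &'a str,
    body_amount: f64,
}

// (method == "PUT" || method == "PATCH") && body.amount > 100
fn pending_to_processing(req: &RequestView) -> bool {
    (req.method == "PUT" || req.method == "PATCH") && req.body_amount > 100.0
}

fn main() {
    let req = RequestView { method: "PATCH", body_amount: 150.0 };
    assert!(pending_to_processing(&req)); // transition would fire
}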
Sub-Scenarios
Create reusable sub-scenarios that can be embedded in larger workflows:
Define Sub-Scenario
name: payment_processing
states:
- name: initiated
- name: processing
- name: completed
- name: failed
transitions:
- from: initiated
to: processing
condition: 'method == "POST" && path == "/api/payments"'
Use Sub-Scenario
name: order_workflow
states:
- name: pending
- name: payment
sub_scenario: payment_processing
- name: completed
transitions:
- from: pending
to: payment
condition: 'method == "POST" && path == "/api/orders/{id}/pay"'
- from: payment
to: completed
condition: 'sub_scenario_state == "completed"'
VBR Integration
Synchronize state machine state with VBR entities:
Configure VBR Entity
vbr:
entities:
- name: orders
state_machine: order_workflow
state_field: status
State Synchronization
When a state transition occurs, the corresponding VBR entity is updated:
# Transition order to processing
PUT /api/orders/123/process
# VBR entity automatically updated
GET /vbr-api/orders/123
# Response: {"id": 123, "status": "processing", ...}
API Endpoints
State Machine CRUD
# Create state machine
POST /__mockforge/state-machines
Content-Type: application/json
{
"name": "order_workflow",
"initial_state": "pending",
"states": [...],
"transitions": [...]
}
# List state machines
GET /__mockforge/state-machines
# Get state machine
GET /__mockforge/state-machines/{id}
# Update state machine
PUT /__mockforge/state-machines/{id}
# Delete state machine
DELETE /__mockforge/state-machines/{id}
State Instances
# Create state instance
POST /__mockforge/state-machines/{id}/instances
Content-Type: application/json
{
"resource_id": "order-123",
"initial_state": "pending"
}
# List instances
GET /__mockforge/state-machines/{id}/instances
# Get instance
GET /__mockforge/state-machines/{id}/instances/{instance_id}
# Transition instance
POST /__mockforge/state-machines/{id}/instances/{instance_id}/transition
Content-Type: application/json
{
"to_state": "processing",
"condition_override": null
}
Current State
# Get current state
GET /__mockforge/state-machines/{id}/instances/{instance_id}/state
# Get next possible states
GET /__mockforge/state-machines/{id}/instances/{instance_id}/next-states
Import/Export
# Export state machine
GET /__mockforge/state-machines/{id}/export
# Import state machine
POST /__mockforge/state-machines/import
Content-Type: application/json
{
"name": "order_workflow",
"definition": {...}
}
Real-Time Updates
State machines support real-time updates via WebSocket:
WebSocket Events
{
"type": "state_machine_transition",
"state_machine_id": "uuid",
"instance_id": "uuid",
"from_state": "pending",
"to_state": "processing",
"timestamp": "2025-01-15T10:30:00Z"
}
Subscribe to Updates
const ws = new WebSocket('ws://localhost:9080/ws');
ws.onmessage = (event) => {
const data = JSON.parse(event.data);
if (data.type === 'state_machine_transition') {
console.log('State transition:', data);
}
};
Undo/Redo
The visual editor supports undo/redo operations:
- Undo: Ctrl+Z or Cmd+Z
- Redo: Ctrl+Shift+Z or Cmd+Shift+Z
- History: View edit history in editor
Use Cases
Order Processing Workflow
Model a complete order lifecycle:
states:
- pending
- payment_pending
- payment_processing
- payment_completed
- payment_failed
- processing
- shipped
- delivered
- cancelled
User Onboarding
Track user onboarding progress:
states:
- signup
- email_verification
- profile_setup
- onboarding_complete
Approval Workflows
Model multi-step approval processes:
states:
- draft
- submitted
- review
- approved
- rejected
Best Practices
- Start Simple: Begin with basic state machines before adding complexity
- Use Sub-Scenarios: Break complex workflows into reusable components
- Test Transitions: Verify all transitions work as expected
- Document Conditions: Keep transition conditions well-documented
- Version Control: Export and version control state machine definitions
Troubleshooting
State Not Transitioning
- Verify transition condition is correct
- Check that request matches condition
- Review server logs for errors
Sub-Scenario Not Executing
- Ensure sub-scenario is properly defined
- Verify input/output mapping is correct
- Check sub-scenario state transitions
VBR Sync Issues
- Verify VBR entity configuration
- Check state field name matches
- Review VBR entity state
Related Documentation
- VBR Engine - State persistence
- Temporal Simulation - Time-based state transitions
- Admin UI - Visual editor usage
MockAI (Intelligent Mocking)
MockAI is MockForge’s intelligent mock generation system that uses AI to create contextually appropriate, realistic API responses. It automatically learns from OpenAPI specifications and example payloads to generate intelligent behavior.
Overview
MockAI provides:
- Auto-Generated Rules: Automatically infers behavioral rules from OpenAPI specs or example payloads
- Context-Aware Responses: Maintains session state and conversation history across requests
- Mutation Detection: Intelligently detects create, update, and delete operations from request changes
- Validation Error Generation: Generates realistic, context-aware validation error responses
- Pagination Intelligence: Automatically generates realistic pagination metadata and responses
- Session Persistence: Tracks state across multiple requests within a session
Quick Start
Enable MockAI
# config.yaml
mockai:
enabled: true
auto_learn: true
mutation_detection: true
ai_validation_errors: true
intelligent_pagination: true
Start Server
mockforge serve --config config.yaml --spec api.yaml
MockAI will automatically:
- Learn from your OpenAPI specification
- Generate intelligent responses
- Track session state
- Handle mutations and pagination
Configuration
Basic Configuration
mockai:
enabled: true
auto_learn: true
mutation_detection: true
ai_validation_errors: true
intelligent_pagination: true
intelligent_behavior:
behavior_model:
provider: "ollama" # or "openai", "anthropic"
model: "llama3.2"
base_url: "http://localhost:11434"
LLM Provider Configuration
Ollama (Local, Free)
mockai:
intelligent_behavior:
behavior_model:
provider: "ollama"
model: "llama3.2"
base_url: "http://localhost:11434"
OpenAI
mockai:
intelligent_behavior:
behavior_model:
provider: "openai"
model: "gpt-3.5-turbo"
api_key: "${OPENAI_API_KEY}"
temperature: 0.7
max_tokens: 1000
Anthropic
mockai:
intelligent_behavior:
behavior_model:
provider: "anthropic"
model: "claude-3-sonnet-20240229"
api_key: "${ANTHROPIC_API_KEY}"
Performance Tuning
mockai:
intelligent_behavior:
performance:
max_history_length: 100
cache_enabled: true
cache_ttl_seconds: 3600
timeout_seconds: 30
CLI Commands
Enable/Disable MockAI
# Enable globally
mockforge mockai enable
# Enable for specific endpoints
mockforge mockai enable --endpoints "/users" "/products"
# Disable globally
mockforge mockai disable
# Disable for specific endpoints
mockforge mockai disable --endpoints "/admin/*"
Check Status
mockforge mockai status
Learn from Examples
# Learn from example request/response pairs
mockforge mockai learn --examples examples.json
Generate Response
# Generate a response for a request
mockforge mockai generate \
--method POST \
--path "/users" \
--body '{"name": "John"}'
Session Management
MockAI automatically tracks sessions to maintain context across requests:
Session Identification
Sessions are identified by:
- Header: X-Session-ID: <session-id>
- Cookie: mockforge_session=<session-id>
If no session ID is provided, MockAI generates a new one automatically.
Example with Session
# First request - creates session
curl http://localhost:3000/users
# Response includes session ID in Set-Cookie header
# Subsequent requests use the same session
# Second request with session
curl -H "X-Session-ID: my-session-123" \
http://localhost:3000/users
Mutation Detection
MockAI automatically detects mutations (create, update, delete) by comparing request bodies:
Create Detection
# First request - creates a new resource
curl -X POST http://localhost:3000/users \
-H "Content-Type: application/json" \
-d '{"name": "John", "email": "john@example.com"}'
# MockAI detects this as a create operation
# Response includes generated ID and created timestamp
Update Detection
# Second request with changes - detected as update
curl -X POST http://localhost:3000/users \
-H "Content-Type: application/json" \
-H "X-Session-ID: my-session-123" \
-d '{"name": "John Doe", "email": "john@example.com"}'
# MockAI detects changes and treats as update
# Response reflects updated values
Validation Errors
MockAI generates realistic validation errors when requests don’t match schemas:
Missing Required Field
curl -X POST http://localhost:3000/users \
-H "Content-Type: application/json" \
-d '{"email": "invalid"}' # Missing "name" field
Response:
{
"error": "Validation failed",
"details": [
{
"field": "name",
"message": "Field 'name' is required"
},
{
"field": "email",
"message": "Invalid email format"
}
]
}
Pagination
MockAI automatically handles pagination requests:
Paginated Request
curl "http://localhost:3000/users?page=1&limit=10"
Response:
{
"data": [...],
"pagination": {
"page": 1,
"limit": 10,
"total": 100,
"total_pages": 10,
"has_next": true,
"has_prev": false
}
}
Programmatic Usage
Create MockAI from OpenAPI
use mockforge_core::intelligent_behavior::{IntelligentBehaviorConfig, MockAI};
use mockforge_core::openapi::OpenApiSpec;
// Load OpenAPI spec
let spec = OpenApiSpec::from_file("api.yaml").await?;
// Create MockAI with default config
let config = IntelligentBehaviorConfig::default();
let mockai = MockAI::from_openapi(&spec, config).await?;
// Process a request
let request = Request {
    method: "POST".to_string(),
    path: "/users".to_string(),
    body: Some(json!({"name": "John"})),
    query_params: HashMap::new(),
    headers: HashMap::new(),
};
let response = mockai.process_request(&request).await?;
Learn from Examples
use mockforge_core::intelligent_behavior::rule_generator::ExamplePair;
// A request/response pair to learn from
let example = ExamplePair {
    method: "POST".to_string(),
    path: "/users".to_string(),
    request: Some(json!({"name": "John"})),
    response: Some(json!({"id": 1, "name": "John"})),
};
mockai.learn_from_example(example).await?;
Use Cases
Rapid Prototyping
Generate realistic API responses without writing fixtures:
mockai:
enabled: true
auto_learn: true
Testing Error Handling
Generate realistic validation errors:
mockai:
enabled: true
ai_validation_errors: true
Session-Based Testing
Test multi-step workflows with session persistence:
# Step 1: Create session
curl -X POST http://localhost:3000/sessions
# Step 2: Use session in subsequent requests
curl -H "X-Session-ID: <session-id>" \
http://localhost:3000/users
Best Practices
- Start with Defaults: Begin with default configuration and adjust as needed
- Use Local LLMs: For faster responses, use Ollama or similar local providers
- Monitor Performance: Track response times and adjust timeout_seconds accordingly
- Session Management: Use consistent session IDs across related requests
- Example Quality: Provide high-quality examples for better rule generation
Troubleshooting
MockAI Not Responding
- Check if MockAI is enabled: mockforge mockai status
- Verify the LLM provider is accessible. For Ollama: curl http://localhost:11434/api/tags
- Check logs for errors: mockforge serve --log-level debug
Session Not Persisting
- Ensure session ID is sent in headers or cookies
- Check session timeout settings
- Verify session storage is not being cleared
Slow Responses
- Use a smaller/faster model
- Enable caching
- Reduce max_history_length
- Use a local LLM provider (Ollama)
Limitations
- Query parameter extraction currently requires middleware enhancement
- Session contexts are stored in memory (not persisted to disk)
- Large OpenAPI specs may take longer to initialize
Related Documentation
- Reality Slider - Control MockAI via reality levels
- Configuration Guide - Complete configuration reference
- OpenAPI Integration - OpenAPI specification support
Generative Schema Mode
Generative Schema Mode enables you to generate complete API ecosystems from JSON payloads. Simply provide example JSON data, and MockForge automatically creates routes, schemas, and entity relationships for a fully functional mock API.
Overview
Generative Schema Mode transforms example JSON payloads into:
- Complete OpenAPI specifications with all endpoints
- Automatic CRUD routes for each entity
- Entity relationship inference from data structure
- One-click environment creation ready for deployment
- Preview and edit generated schemas before deployment
Quick Start
Generate from JSON File
# Generate API ecosystem from JSON payloads
mockforge generate --from-json examples.json --output ./generated-api
# Or from multiple files
mockforge generate --from-json file1.json file2.json --output ./generated-api
Generate from JSON Payloads
# Generate from inline JSON
mockforge generate --from-json '{"users": [{"id": 1, "name": "Alice"}]}' --output ./api
One-Click Environment Creation
# Generate and start server in one command
mockforge generate --from-json data.json --serve --port 3000
How It Works
1. Entity Inference
MockForge analyzes JSON payloads to infer entity structures:
Input JSON:
{
"users": [
{"id": 1, "name": "Alice", "email": "alice@example.com"},
{"id": 2, "name": "Bob", "email": "bob@example.com"}
],
"posts": [
{"id": 1, "user_id": 1, "title": "First Post", "content": "..."},
{"id": 2, "user_id": 1, "title": "Second Post", "content": "..."}
]
}
Inferred Entities:
- User entity with fields: id, name, email
- Post entity with fields: id, user_id, title, content
- Relationship: User has many Post (via user_id)
2. Route Generation
Automatically generates CRUD routes for each entity:
Generated Routes:
- GET /users - List all users
- GET /users/{id} - Get user by ID
- POST /users - Create user
- PUT /users/{id} - Update user
- DELETE /users/{id} - Delete user
Same routes generated for posts.
3. Schema Building
Creates complete OpenAPI 3.0 specification:
openapi: 3.0.0
info:
title: Generated API
version: 1.0.0
paths:
/users:
get:
summary: List users
responses:
'200':
description: List of users
content:
application/json:
schema:
type: array
items:
$ref: '#/components/schemas/User'
components:
schemas:
User:
type: object
properties:
id:
type: integer
name:
type: string
email:
type: string
format: email
Configuration
Generation Options
generative_schema:
enabled: true
# API metadata
title: "My Generated API"
version: "1.0.0"
# Naming rules
naming_rules:
entity_case: "PascalCase" # PascalCase, camelCase, snake_case
route_case: "kebab-case" # kebab-case, snake_case, camelCase
pluralization: "standard" # standard, none, custom
# Generation options
generate_crud: true
infer_relationships: true
merge_schemas: true
Naming Rules
Customize how entities and routes are named:
naming_rules:
# Entity naming
entity_case: "PascalCase" # User, OrderItem
entity_suffix: "" # Optional suffix
# Route naming
route_case: "kebab-case" # /api/users, /api/order-items
route_prefix: "/api" # Route prefix
# Pluralization
pluralization: "standard" # users, orders
custom_plurals:
person: "people"
child: "children"
CLI Commands
Generate from JSON
# Basic generation
mockforge generate --from-json data.json
# With output directory
mockforge generate --from-json data.json --output ./generated
# With options
mockforge generate \
--from-json data.json \
--title "My API" \
--version "1.0.0" \
--output ./generated
Preview Before Generation
# Preview generated schema without creating files
mockforge generate --from-json data.json --preview
Generate and Serve
# Generate and start server
mockforge generate --from-json data.json --serve --port 3000
Programmatic Usage
Generate Ecosystem
use mockforge_core::generative_schema::{EcosystemGenerator, GenerationOptions, NamingRules};
use serde_json::json;
// Example payloads
let payloads = vec![
    json!({
        "users": [
            {"id": 1, "name": "Alice", "email": "alice@example.com"}
        ]
    })
];
// Generation options
let options = GenerationOptions {
    title: Some("My API".to_string()),
    version: Some("1.0.0".to_string()),
    naming_rules: NamingRules::default(),
    generate_crud: true,
    output_dir: Some("./generated".into()),
};
// Generate ecosystem
let result = EcosystemGenerator::generate_from_json(payloads, options).await?;
// Access the generated spec, entities, and routes
let spec = result.spec;
let entities = result.entities;
let routes = result.routes;
Entity Relationship Inference
MockForge automatically detects relationships from JSON structure:
One-to-Many (1:N)
Detected from foreign key patterns:
{
"users": [{"id": 1, "name": "Alice"}],
"posts": [{"id": 1, "user_id": 1, "title": "Post"}]
}
Detected Relationship:
- User has many Post (via user_id)
Many-to-Many (N:N)
Detected from junction patterns:
{
"users": [{"id": 1, "name": "Alice"}],
"roles": [{"id": 1, "name": "admin"}],
"user_roles": [
{"user_id": 1, "role_id": 1}
]
}
Detected Relationship:
- User has many Role through user_roles
Schema Merging
When generating from multiple JSON files, schemas are intelligently merged:
# Generate from multiple files
mockforge generate \
--from-json users.json posts.json comments.json \
--output ./generated
Merging Strategy:
- Common fields are preserved
- New fields are added
- Type conflicts are resolved (prefer more specific types)
- Relationships are merged
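For instance (a hypothetical illustration, not actual tool output), merging two files whose users arrays overlap:
users-a.json:
{"users": [{"id": 1, "name": "Alice"}]}
users-b.json:
{"users": [{"id": 2, "name": "Bob", "email": "bob@example.com"}]}
The merged User schema keeps id and name, adds email, and, since email is absent from some records, would be expected to mark it optional rather than required.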
Preview and Edit
Before deploying, preview and edit the generated schema:
Preview Generated Schema
# Preview in terminal
mockforge generate --from-json data.json --preview
# Preview in browser (opens generated OpenAPI spec)
mockforge generate --from-json data.json --preview --open-browser
Edit Before Deployment
# Generate and open in editor
mockforge generate --from-json data.json --output ./generated --edit
# Manually edit generated/openapi.yaml, then deploy
mockforge serve --spec ./generated/openapi.yaml
Integration with VBR
Generated schemas can be automatically integrated with VBR:
# Generate with VBR integration
mockforge generate \
--from-json data.json \
--vbr-enabled \
--output ./generated
This creates:
- VBR entity definitions
- Relationship mappings
- Seed data from JSON
Use Cases
Rapid Prototyping
Quickly create mock APIs from example data:
# Generate API from sample responses
mockforge generate --from-json sample-responses.json --serve
API Design
Design APIs by example:
# Create API from design mockups
mockforge generate --from-json design-mockups.json --output ./api-design
Testing Data Generation
Generate test APIs with realistic data:
# Generate API with test data
mockforge generate --from-json test-data.json --output ./test-api
Best Practices
- Provide Complete Examples: Include all fields you want in the generated schema
- Use Consistent Naming: Consistent naming in JSON helps with entity inference
- Include Relationships: Show relationships in JSON for automatic detection
- Preview Before Deploy: Always preview generated schemas before deployment
- Version Control: Commit generated schemas to version control
Troubleshooting
Entities Not Detected
- Ensure JSON has a clear structure (arrays of objects)
- Use consistent field names
- Include ID fields for relationship detection
Routes Not Generated
- Check that generate_crud is enabled
- Verify entity names are valid
- Review naming rules configuration
Relationships Not Inferred
- Use standard foreign key naming (entity_id)
- Include junction tables for many-to-many
- Provide complete relationship data in JSON
Related Documentation
- VBR Engine - State management for generated entities
- OpenAPI Integration - Working with generated OpenAPI specs
- Configuration Guide - Complete configuration reference
AI Contract Diff
AI Contract Diff automatically detects and analyzes differences between API contracts (OpenAPI specifications) and live requests. It provides contextual recommendations for mismatches and generates correction proposals to keep your contracts in sync with reality.
Overview
AI Contract Diff helps you:
- Detect Contract Drift: Find discrepancies between your OpenAPI spec and actual API usage
- Get AI-Powered Recommendations: Understand why mismatches occur and how to fix them
- Generate Correction Patches: Automatically create JSON Patch files to update your specs
- Integrate with CI/CD: Automatically verify contracts in your pipeline
- Visualize Mismatches: Dashboard visualization of contract differences
Quick Start
Analyze a Request
# Analyze a captured request against an OpenAPI spec
mockforge contract-diff analyze \
--spec api.yaml \
--request-id <capture-id>
# Or analyze from file
mockforge contract-diff analyze \
--spec api.yaml \
--request-file request.json
Compare Two Specs
# Compare two OpenAPI specifications
mockforge contract-diff compare \
--spec1 api-v1.yaml \
--spec2 api-v2.yaml
Generate Correction Patch
# Generate JSON Patch file for corrections
mockforge contract-diff generate-patch \
--spec api.yaml \
--request-id <capture-id> \
--output patch.json
How It Works
1. Request Capture
MockForge automatically captures requests for contract analysis:
# config.yaml
core:
contract_diff:
enabled: true
auto_capture: true
capture_all: false # Only capture mismatches
2. Contract Analysis
When a request is captured, it’s analyzed against your OpenAPI specification:
- Path Matching: Verify request path matches spec
- Method Validation: Check HTTP method is defined
- Header Validation: Compare request headers with spec
- Query Parameter Validation: Verify query params match
- Body Validation: Validate request body against schema
3. Mismatch Detection
The analyzer identifies several types of mismatches:
- Missing Endpoint: Request path not in spec
- Invalid Method: HTTP method not allowed
- Missing Header: Required header not present
- Invalid Parameter: Query param doesn’t match spec
- Schema Mismatch: Request body doesn’t match schema
- Type Mismatch: Value type doesn’t match spec
4. AI Recommendations
AI-powered recommendations explain mismatches:
{
"mismatch": {
"type": "missing_field",
"field": "email",
"location": "request.body"
},
"recommendation": {
"message": "The 'email' field is required but missing from the request. Add it to the request body or mark it as optional in the schema.",
"confidence": 0.95,
"suggested_fix": "Add 'email' field to request body or update schema to make it optional"
}
}
5. Correction Proposals
Generate JSON Patch files to fix mismatches:
[
{
"op": "add",
"path": "/paths/~1users/post/requestBody/content/application~1json/schema/required",
"value": ["email"]
}
]
Configuration
Basic Configuration
core:
contract_diff:
enabled: true
auto_capture: true
capture_all: false
spec_path: "./api.yaml"
AI Provider Configuration
core:
contract_diff:
ai_provider: "ollama" # or "openai", "anthropic"
ai_model: "llama3.2"
ai_base_url: "http://localhost:11434"
ai_api_key: "${AI_API_KEY}" # For OpenAI/Anthropic
Webhook Configuration
core:
contract_diff:
webhooks:
- url: "https://example.com/webhook"
events: ["mismatch", "high_severity"]
secret: "${WEBHOOK_SECRET}"
CLI Commands
Analyze Request
# Analyze captured request
mockforge contract-diff analyze \
--spec api.yaml \
--request-id <capture-id>
# Analyze from file
mockforge contract-diff analyze \
--spec api.yaml \
--request-file request.json
# With AI recommendations
mockforge contract-diff analyze \
--spec api.yaml \
--request-id <capture-id> \
--ai-enabled \
--ai-provider ollama
Compare Specs
# Compare two OpenAPI specs
mockforge contract-diff compare \
--spec1 api-v1.yaml \
--spec2 api-v2.yaml
# Output to file
mockforge contract-diff compare \
--spec1 api-v1.yaml \
--spec2 api-v2.yaml \
--output diff.json
Generate Patch
# Generate correction patch
mockforge contract-diff generate-patch \
--spec api.yaml \
--request-id <capture-id> \
--output patch.json
# Apply patch automatically
mockforge contract-diff generate-patch \
--spec api.yaml \
--request-id <capture-id> \
--apply
Apply Patch
# Apply patch to spec
mockforge contract-diff apply-patch \
--spec api.yaml \
--patch patch.json \
--output api-updated.yaml
API Endpoints
Upload Request
POST /__mockforge/contract-diff/upload
Content-Type: application/json
{
"method": "POST",
"path": "/users",
"headers": {"Content-Type": "application/json"},
"query_params": {},
"body": {"name": "Alice", "email": "alice@example.com"}
}
Get Captured Requests
GET /__mockforge/contract-diff/captures?limit=10&offset=0
Analyze Request
POST /__mockforge/contract-diff/captures/{id}/analyze
Content-Type: application/json
{
"spec_path": "./api.yaml"
}
Generate Patch
POST /__mockforge/contract-diff/captures/{id}/patch
Content-Type: application/json
{
"spec_path": "./api.yaml"
}
Get Statistics
GET /__mockforge/contract-diff/statistics
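Tied together with curl, a capture-and-analyze round trip might look like this; the host/port and the presence of a capture id in the upload response are assumptions:
# Upload a request for analysis
curl -X POST http://localhost:9080/__mockforge/contract-diff/upload \
  -H "Content-Type: application/json" \
  -d '{"method": "POST", "path": "/users", "headers": {"Content-Type": "application/json"}, "query_params": {}, "body": {"name": "Alice"}}'
# Analyze the capture against the spec, using the id returned above
curl -X POST http://localhost:9080/__mockforge/contract-diff/captures/<id>/analyze \
  -H "Content-Type: application/json" \
  -d '{"spec_path": "./api.yaml"}'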
Dashboard
The Contract Diff dashboard provides:
- Statistics Overview: Total captures, analyzed requests, mismatch counts
- Captured Requests List: Browse and filter captured requests
- Analysis Results: View mismatches, recommendations, and confidence scores
- Patch Generation: Generate and download correction patches
Access via: Admin UI → Contract Diff
CI/CD Integration
GitHub Actions
name: Contract Diff Analysis
on:
pull_request:
paths:
- 'api.yaml'
- '**/*.yaml'
jobs:
contract-diff:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Analyze contracts
run: |
mockforge contract-diff analyze \
--spec api.yaml \
--request-id ${{ github.event.pull_request.number }}
- name: Generate patch
run: |
mockforge contract-diff generate-patch \
--spec api.yaml \
--request-id ${{ github.event.pull_request.number }} \
--output patch.json
- name: Upload patch
uses: actions/upload-artifact@v3
with:
name: contract-patch
path: patch.json
GitLab CI
contract-diff:
script:
- mockforge contract-diff analyze --spec api.yaml --request-id $CI_PIPELINE_ID
- mockforge contract-diff generate-patch --spec api.yaml --request-id $CI_PIPELINE_ID --output patch.json
artifacts:
paths:
- patch.json
Use Cases
Contract Validation
Ensure your API spec matches actual usage:
# Run analysis on all captured requests
for id in $(mockforge contract-diff list-captures --ids); do
mockforge contract-diff analyze --spec api.yaml --request-id $id
done
Spec Maintenance
Keep specs up-to-date automatically:
# Generate patches for all mismatches
mockforge contract-diff generate-patch \
--spec api.yaml \
--request-id <capture-id> \
--output patches/
# Review and apply patches
mockforge contract-diff apply-patch \
--spec api.yaml \
--patch patches/patch-1.json \
--output api-updated.yaml
API Versioning
Compare API versions:
# Compare v1 and v2
mockforge contract-diff compare \
--spec1 api-v1.yaml \
--spec2 api-v2.yaml \
--output version-diff.json
Best Practices
- Enable Auto-Capture: Automatically capture requests for analysis
- Regular Analysis: Run analysis regularly to catch drift early
- Review Recommendations: Always review AI recommendations before applying
- Version Control Patches: Commit patches to version control
- CI/CD Integration: Automate contract validation in your pipeline
Troubleshooting
No Mismatches Detected
- Verify OpenAPI spec is valid
- Check that request path matches spec
- Ensure method is defined in spec
AI Recommendations Not Available
- Check AI provider is configured
- Verify API key is set (for OpenAI/Anthropic)
- Ensure Ollama is running (for local provider)
Patch Generation Fails
- Verify spec path is correct
- Check that mismatches exist
- Review patch generation logs
Related Documentation
- OpenAPI Integration - Working with OpenAPI specs
- Configuration Guide - Complete configuration reference
- CI/CD Integration - Pipeline integration
Chaos Lab
Chaos Lab is an interactive module that enables you to simulate various real-world network conditions and errors directly from the UI. Test application resilience, debug network-related issues, and validate error handling logic.
Overview
Chaos Lab provides:
- Real-time latency visualization - Visual graph showing request latency over time
- Network profile management - Predefined and custom profiles for common network conditions
- Error pattern scripting - Configure burst, random, or sequential error injection
- Profile export/import - Share and version control chaos configurations
- CLI integration - Apply profiles and manage configurations from the command line
Quick Start
Using the UI
- Navigate to the Chaos Engineering page in the MockForge Admin UI
- Use the Network Profiles section to apply predefined conditions (slow 3G, flaky Wi-Fi, etc.)
- Monitor real-time latency in the Latency Metrics graph
- Configure error patterns in the Error Pattern Editor
Using the CLI
# Apply a network profile
mockforge serve --chaos-profile slow_3g
# List available profiles
mockforge chaos profile list
# Export a profile
mockforge chaos profile export slow_3g --format json --output profile.json
# Import a profile
mockforge chaos profile import --file profile.json
Features
Real-Time Latency Graph
The latency graph displays request latency over time with:
- Time-series visualization - See latency trends in real-time
- Statistics overlay - Min, max, average, P95, P99 percentiles
- Auto-refresh - Updates every 500ms for live monitoring
- Configurable history - View last 100 samples by default
Usage:
- Enable latency injection in the Quick Controls section
- The graph automatically populates as requests are made
- Hover over data points to see exact latency values
Network Profiles
Network profiles are pre-configured chaos settings that simulate specific network conditions:
Built-in Profiles
- slow_3g - Simulates slow 3G connection (high latency, low bandwidth)
- flaky_wifi - Intermittent connection issues with packet loss
- high_latency - Consistent high latency for all requests
- unstable_connection - Random connection drops and timeouts
Custom Profiles
Create your own profiles:
- Configure chaos settings in the Quick Controls
- Use the Profile Exporter to save your configuration
- Import it later or share with your team
Applying Profiles:
- Via UI: Click "Apply Profile" on any profile card
- Via CLI: mockforge chaos profile apply slow_3g
Error Pattern Editor
Configure sophisticated error injection patterns:
Burst Pattern
Inject multiple errors within a time window:
{
"type": "burst",
"count": 5,
"interval_ms": 1000
}
This injects 5 errors within 1 second, then waits for the next interval.
Random Pattern
Inject errors with a probability:
{
"type": "random",
"probability": 0.1
}
Each request has a 10% chance of receiving an error.
Sequential Pattern
Inject errors in a specific order:
{
"type": "sequential",
"sequence": [500, 502, 503, 504]
}
Errors are injected in the specified order, then the sequence repeats.
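One way to watch the cycle (a sketch that assumes every request is selected for injection) is to hit any mocked endpoint repeatedly and print the status codes:
# With the sequence [500, 502, 503, 504], the codes should cycle in order
for i in $(seq 1 5); do
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000/api/users
done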
Usage:
- Enable Fault Injection in Quick Controls
- Open the Error Pattern Editor
- Select pattern type and configure parameters
- Click “Save Pattern”
Profile Export/Import
Export and import chaos configurations for:
- Version control - Track chaos configurations in git
- Team sharing - Share tested configurations
- CI/CD integration - Apply profiles in automated tests
- Backup - Save working configurations
Export Format:
{
"name": "custom_profile",
"description": "Custom network condition",
"chaos_config": {
"latency": {
"enabled": true,
"fixed_delay_ms": 500,
"probability": 1.0
},
"fault_injection": {
"enabled": true,
"http_errors": [500, 502, 503],
"http_error_probability": 0.1
}
},
"tags": ["custom", "testing"],
"builtin": false
}
Import:
- Via UI: Use the Profile Exporter component
- Via CLI: mockforge chaos profile import --file profile.json
API Endpoints
Latency Metrics
GET /api/chaos/metrics/latency
Returns time-series latency data:
{
"samples": [
{
"timestamp": "2024-01-01T12:00:00Z",
"latency_ms": 150
}
]
}
GET /api/chaos/metrics/latency/stats
Returns aggregated statistics:
{
"avg_latency_ms": 145.5,
"min_latency_ms": 100,
"max_latency_ms": 200,
"total_requests": 100,
"p50_ms": 140,
"p95_ms": 180,
"p99_ms": 195
}
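These endpoints can also be polled outside the UI. A minimal sketch, assuming the metrics API is served by the Admin UI on port 9080:
# Print aggregated latency statistics every 2 seconds
watch -n 2 curl -s http://localhost:9080/api/chaos/metrics/latency/stats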
Profile Management
GET /api/chaos/profiles
List all available profiles.
GET /api/chaos/profiles/{name}
Get a specific profile.
POST /api/chaos/profiles/{name}/apply
Apply a profile to the current configuration.
POST /api/chaos/profiles
Create a custom profile.
DELETE /api/chaos/profiles/{name}
Delete a custom profile.
GET /api/chaos/profiles/{name}/export?format=json
Export a profile.
POST /api/chaos/profiles/import
Import a profile.
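For example, a profile can be round-tripped through export and import with curl (host assumed; the import payload is assumed to match the export format shown above):
# Export a profile to a file
curl "http://localhost:9080/api/chaos/profiles/slow_3g/export?format=json" -o slow_3g.json
# Re-import it, e.g. on another MockForge instance
curl -X POST http://localhost:9080/api/chaos/profiles/import \
  -H "Content-Type: application/json" \
  --data-binary @slow_3g.json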
Error Pattern Configuration
Update error patterns via the fault injection config endpoint:
PUT /api/chaos/config/faults
{
"enabled": true,
"http_errors": [500, 502, 503],
"error_pattern": {
"type": "burst",
"count": 5,
"interval_ms": 1000
}
}
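The same update issued with curl (Admin UI host and port assumed):
curl -X PUT http://localhost:9080/api/chaos/config/faults \
  -H "Content-Type: application/json" \
  -d '{"enabled": true, "http_errors": [500, 502, 503], "error_pattern": {"type": "burst", "count": 5, "interval_ms": 1000}}'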
CLI Commands
Profile Management
# List all profiles
mockforge chaos profile list
# Apply a profile
mockforge chaos profile apply slow_3g
# Export a profile
mockforge chaos profile export slow_3g --format json --output profile.json
# Import a profile
mockforge chaos profile import --file profile.json
Server Startup
# Start server with a profile applied
mockforge serve --chaos-profile slow_3g --spec openapi.json
Use Cases
Testing Resilience
- Apply a “flaky_wifi” profile
- Monitor your application’s retry logic
- Verify error handling and recovery
Debugging Network Issues
- Reproduce reported network conditions
- Use the latency graph to identify patterns
- Test fixes under controlled conditions
Load Testing Preparation
- Create profiles matching production network conditions
- Export profiles for CI/CD pipelines
- Apply profiles during automated tests
Team Collaboration
- Export tested chaos configurations
- Share profiles via version control
- Standardize testing across environments
Best Practices
Profile Naming
- Use descriptive names: production_like_network, mobile_edge_conditions
- Include tags for categorization: ["mobile", "edge", "testing"]
- Document the profile's purpose in the description field
Error Pattern Design
- Start with low probabilities (0.05-0.1) and increase gradually
- Use burst patterns to test rate limiting and circuit breakers
- Use sequential patterns to test specific error code handling
Monitoring
- Always monitor the latency graph when chaos is active
- Set up alerts for unexpected latency spikes
- Review statistics regularly to understand impact
Version Control
- Export profiles before making changes
- Commit profiles to version control
- Tag profiles with application versions
Troubleshooting
Latency Graph Not Updating
- Ensure latency injection is enabled
- Check that requests are being made to the server
- Verify the API endpoint is accessible: GET /api/chaos/metrics/latency
Profile Not Applying
- Verify the profile name is correct: mockforge chaos profile list
- Check server logs for errors
- Ensure chaos engineering is enabled in configuration
Error Pattern Not Working
- Verify fault injection is enabled
- Check error pattern configuration is valid JSON
- Ensure HTTP error codes are configured: http_errors: [500, 502, 503]
Configuration
Chaos Lab settings can be configured in mockforge.yaml:
observability:
chaos:
enabled: true
latency:
enabled: true
fixed_delay_ms: 200
probability: 0.5
fault_injection:
enabled: true
http_errors: [500, 502, 503]
http_error_probability: 0.1
error_pattern:
type: random
probability: 0.1
Integration with Test Automation
CI/CD Integration
# Example GitHub Actions workflow
- name: Test with chaos profile
run: |
mockforge serve --chaos-profile slow_3g &
sleep 5
pytest tests/
mockforge chaos profile apply none
Test Scripts
#!/bin/bash
# Apply profile and run tests
mockforge chaos profile apply flaky_wifi --base-url http://localhost:3000
npm test
mockforge chaos profile apply none --base-url http://localhost:3000
Performance Considerations
- Latency metrics are stored in memory (last 100 samples)
- Profile application is instant (no server restart required)
- Error pattern evaluation adds minimal overhead (< 1ms per request)
- Real-time graph updates every 500ms (configurable)
Limitations
- Latency samples are limited to the last 100 requests
- Custom profiles are stored in memory (not persisted across restarts)
- Error patterns apply globally (not per-endpoint)
- MockAI integration requires MockAI to be enabled
Related Documentation
- Reality Slider - Unified realism control
- Advanced Behavior - Basic chaos features
- Configuration Guide - Complete configuration reference
Reality Slider
The Reality Slider is a unified control mechanism that adjusts the realism of your mock environment from simple static stubs to full production-level chaos. It coordinates three key subsystems: Chaos Engineering, Latency Simulation, and MockAI.
Overview
By adjusting a single slider from 1 to 5, you can instantly transform your mock environment to match different testing scenarios without manually configuring each subsystem.
Reality Levels
Level 1: Static Stubs
Use Case: Fast, predictable responses for basic functionality testing
- Chaos: Disabled
- Latency: 0ms (instant responses)
- MockAI: Disabled
- Best For: Unit tests, rapid prototyping, simple integration checks
Level 2: Light Simulation
Use Case: Minimal realism with basic intelligence
- Chaos: Disabled
- Latency: 10-50ms (minimal network delay)
- MockAI: Basic AI (simple response generation)
- Best For: Frontend development, basic API testing, quick demos
Level 3: Moderate Realism (Default)
Use Case: Balanced realism for most development scenarios
- Chaos: 5% error rate, 10% delay probability
- Latency: 50-200ms (moderate network conditions)
- MockAI: Full AI enabled (intelligent responses, relationship awareness)
- Best For: Integration testing, development environments, staging-like behavior
Level 4: High Realism
Use Case: Production-like conditions with increased complexity
- Chaos: 10% error rate, 20% delay probability
- Latency: 100-500ms (realistic network conditions)
- MockAI: Full AI + session state management
- Best For: Pre-production testing, realistic user flows, stress testing preparation
Level 5: Production Chaos
Use Case: Maximum realism for resilience testing
- Chaos: 15% error rate, 30% delay probability
- Latency: 200-2000ms (production-like network conditions)
- MockAI: Full AI + mutations + advanced features
- Best For: Chaos engineering, resilience testing, production simulation
Usage
UI Usage
Dashboard
The Reality Slider is available on the Dashboard page:
- Navigate to Dashboard in the admin UI
- Find the Environment Control section
- Use the slider to adjust the reality level (1-5)
- Click level indicators for quick selection
- View current configuration in the details panel
Configuration Page
For advanced control and preset management:
- Navigate to Configuration → Reality Slider
- Use the full-featured slider with visual feedback
- Manage presets (export/import configurations)
- View keyboard shortcuts reference
CLI Usage
Command Line Flag
# Set reality level at startup
mockforge serve --reality-level 5
# With OpenAPI spec
mockforge serve --spec api.yaml --reality-level 3
Environment Variable
# Set via environment variable
export MOCKFORGE_REALITY_LEVEL=4
mockforge serve
# Or inline
MOCKFORGE_REALITY_LEVEL=2 mockforge serve --spec api.yaml
Precedence: CLI flag > Environment variable > Config file > Default (Level 3)
Configuration File
Add to your mockforge.yaml:
reality:
enabled: true
level: 3 # 1-5
Or use per-profile configuration:
profiles:
development:
reality:
level: 2
staging:
reality:
level: 4
production:
reality:
level: 5
Keyboard Shortcuts
Quick reality level changes from anywhere in the UI:
| Shortcut | Action |
|---|---|
| Ctrl+Shift+1 | Set to Level 1 (Static Stubs) |
| Ctrl+Shift+2 | Set to Level 2 (Light Simulation) |
| Ctrl+Shift+3 | Set to Level 3 (Moderate Realism) |
| Ctrl+Shift+4 | Set to Level 4 (High Realism) |
| Ctrl+Shift+5 | Set to Level 5 (Production Chaos) |
| Ctrl+Shift+R | Reset to default (Level 3) |
| Ctrl+Shift+P | Open preset manager (Config page) |
Note: Shortcuts are disabled when typing in input fields to avoid conflicts.
Presets
Exporting Presets
Save your current reality configuration for reuse:
- Navigate to Configuration → Reality Slider
- Click Export Current
- Enter a preset name (e.g., “production-chaos”, “staging-realistic”)
- Optionally add a description
- Click Export Preset
Presets are saved as JSON or YAML files in the workspace presets directory.
Importing Presets
- Navigate to Configuration → Reality Slider
- Click Import Preset
- Select a preset from the list
- Click Load to apply
Preset File Format
Presets are stored as JSON or YAML:
{
"metadata": {
"name": "production-chaos",
"description": "Maximum realism for resilience testing",
"created_at": "2025-01-15T10:30:00Z",
"version": "1.0"
},
"config": {
"chaos": {
"enabled": true,
"error_rate": 0.15,
"delay_rate": 0.30
},
"latency": {
"base_ms": 200,
"jitter_ms": 1800
},
"mockai": {
"enabled": true
}
}
}
CI/CD Integration
GitHub Actions
env:
MOCKFORGE_REALITY_LEVEL: 3 # Moderate Realism for tests
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Run tests with mock
run: |
mockforge serve --reality-level ${{ env.MOCKFORGE_REALITY_LEVEL }} &
# Run your tests
Docker Compose
services:
mockforge:
environment:
- MOCKFORGE_REALITY_LEVEL=${MOCKFORGE_REALITY_LEVEL:-3}
API Reference
Get Current Reality Level
GET /__mockforge/reality/level
Response:
{
"level": 3,
"level_name": "Moderate Realism",
"description": "Some chaos, moderate latency, full intelligence",
"chaos": {
"enabled": true,
"error_rate": 0.05,
"delay_rate": 0.10
},
"latency": {
"base_ms": 50,
"jitter_ms": 150
},
"mockai": {
"enabled": true
}
}
Set Reality Level
PUT /__mockforge/reality/level
Content-Type: application/json
{
"level": 5
}
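Equivalently with curl, assuming the Admin UI's default port 9080 (as in the troubleshooting example below):
curl -X PUT http://localhost:9080/__mockforge/reality/level \
  -H "Content-Type: application/json" \
  -d '{"level": 5}'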
Use Cases
Development Workflow
- Start Development: Level 2 (Light Simulation)
  - Fast responses for rapid iteration
  - Basic AI for realistic data
- Integration Testing: Level 3 (Moderate Realism)
  - Some chaos to catch error handling
  - Realistic latency for network-aware code
- Pre-Production: Level 4 (High Realism)
  - Production-like conditions
  - Full feature set enabled
- Resilience Testing: Level 5 (Production Chaos)
  - Maximum chaos for stress testing
  - Simulate worst-case scenarios
Testing Scenarios
Unit Tests
# Fast, predictable responses
MOCKFORGE_REALITY_LEVEL=1 npm test
Integration Tests
# Moderate realism
MOCKFORGE_REALITY_LEVEL=3 npm test
E2E Tests
# High realism for production-like testing
MOCKFORGE_REALITY_LEVEL=4 npm test
Chaos Engineering
# Maximum chaos for resilience testing
MOCKFORGE_REALITY_LEVEL=5 npm test
Best Practices
- Start Low, Increase Gradually: Begin with Level 1-2 for development, increase as you approach production
- Use Presets: Save common configurations for different environments
- CI/CD Integration: Set appropriate levels for different test stages
- Monitor Impact: Watch metrics as you change levels to understand the impact
- Document Your Levels: Use preset descriptions to document when to use each configuration
Troubleshooting
Level Changes Not Applying
- Check that the reality slider is enabled in configuration
- Verify the API endpoint is accessible: curl http://localhost:9080/__mockforge/reality/level
- Check server logs for errors
Shortcuts Not Working
- Ensure you’re not typing in an input field
- Check browser console for JavaScript errors
- Verify shortcuts are enabled (disabled in compact mode)
Presets Not Loading
- Verify preset file format (JSON or YAML)
- Check file permissions
- Ensure preset path is correct
- Review server logs for import errors
Related Documentation
- Chaos Lab - Detailed chaos engineering features
- MockAI - Intelligent mocking system
- Configuration Guide - Complete configuration reference
Cloud Workspaces (Collaboration)
Cloud Workspaces enables multi-user collaborative editing with real-time state synchronization, version control, and role-based permissions. Work together on mock configurations with Git-style versioning and conflict resolution.
Overview
Cloud Workspaces provides:
- User Authentication: JWT-based authentication with secure sessions
- Multi-User Editing: Real-time collaborative editing with presence awareness
- State Synchronization: WebSocket-based real-time sync between clients
- Version Control: Git-style version control for mocks and data
- Change Tracking: Full history with rollback capabilities
- Role-Based Permissions: Owner, Editor, and Viewer roles
Quick Start
Create a Workspace
# Create a new workspace
mockforge workspace create --name "My Workspace" --description "Team workspace"
# Or via API
curl -X POST http://localhost:9080/api/workspaces \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <token>" \
-d '{
"name": "My Workspace",
"description": "Team workspace"
}'
Join a Workspace
# List available workspaces
mockforge workspace list
# Join a workspace (requires invitation)
mockforge workspace join <workspace-id>
Start Collaborative Server
# Start server with collaboration enabled
mockforge serve --collab-enabled --collab-port 8080
Features
User Authentication
Register
# Register new user
mockforge auth register \
--email "user@example.com" \
--password "secure-password" \
--name "User Name"
Login
# Login and get JWT token
mockforge auth login \
--email "user@example.com" \
--password "secure-password"
Workspace Management
Create Workspace
mockforge workspace create \
--name "Team Workspace" \
--description "Shared workspace for team"
List Workspaces
# List your workspaces
mockforge workspace list
# List all workspaces (admin only)
mockforge workspace list --all
Get Workspace Details
mockforge workspace get <workspace-id>
Member Management
Add Member
# Add member to workspace
mockforge workspace member add \
--workspace <workspace-id> \
--user <user-id> \
--role editor
List Members
# List workspace members
mockforge workspace member list --workspace <workspace-id>
Change Role
# Change member role
mockforge workspace member role \
--workspace <workspace-id> \
--user <user-id> \
--role viewer
Remove Member
# Remove member from workspace
mockforge workspace member remove \
--workspace <workspace-id> \
--user <user-id>
Real-Time Synchronization
Workspaces use WebSocket for real-time synchronization:
WebSocket Connection
const ws = new WebSocket('ws://localhost:8080/ws');
// Subscribe to workspace
ws.send(JSON.stringify({
type: 'subscribe',
workspace_id: 'workspace-uuid'
}));
// Receive updates
ws.onmessage = (event) => {
const data = JSON.parse(event.data);
if (data.type === 'change') {
console.log('Change event:', data.event);
}
};
Change Events
- mock_created - New mock added
- mock_updated - Mock modified
- mock_deleted - Mock removed
- workspace_updated - Workspace settings changed
- member_added - New team member
- member_removed - Member left
- role_changed - Member role updated
- snapshot_created - New snapshot
- user_joined - User connected
- user_left - User disconnected
- cursor_moved - Cursor position updated
Version Control
Create Snapshot
# Create workspace snapshot
mockforge workspace snapshot create \
--workspace <workspace-id> \
--message "Initial state"
List Snapshots
# List workspace snapshots
mockforge workspace snapshot list --workspace <workspace-id>
Restore Snapshot
# Restore workspace to snapshot
mockforge workspace snapshot restore \
--workspace <workspace-id> \
--snapshot <snapshot-id>
Conflict Resolution
When multiple users edit simultaneously, conflicts are resolved automatically:
- Last Write Wins: Default strategy for simple conflicts
- Merge Strategy: Intelligent merging for compatible changes
- Manual Resolution: Manual conflict resolution for complex cases
API Endpoints
Authentication
POST /auth/register
Content-Type: application/json
{
"email": "user@example.com",
"password": "secure-password",
"name": "User Name"
}
POST /auth/login
Content-Type: application/json
{
"email": "user@example.com",
"password": "secure-password"
}
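A sketch of scripting the login flow; the base URL matches the Quick Start example above, but the name of the token field in the login response is an assumption:
# Log in and capture the JWT (assumes jq and a "token" field in the response)
TOKEN=$(curl -s -X POST http://localhost:9080/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email": "user@example.com", "password": "secure-password"}' | jq -r '.token')
# Use the token on subsequent calls
curl -H "Authorization: Bearer $TOKEN" http://localhost:9080/api/workspaces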
Workspaces
POST /workspaces
Authorization: Bearer <token>
Content-Type: application/json
{
"name": "My Workspace",
"description": "Team workspace"
}
GET /workspaces
Authorization: Bearer <token>
GET /workspaces/:id
Authorization: Bearer <token>
PUT /workspaces/:id
Authorization: Bearer <token>
Content-Type: application/json
{
"name": "Updated Name",
"description": "Updated description"
}
DELETE /workspaces/:id
Authorization: Bearer <token>
Members
POST /workspaces/:id/members
Authorization: Bearer <token>
Content-Type: application/json
{
"user_id": "user-uuid",
"role": "editor"
}
GET /workspaces/:id/members
Authorization: Bearer <token>
PUT /workspaces/:id/members/:user_id/role
Authorization: Bearer <token>
Content-Type: application/json
{
"role": "viewer"
}
DELETE /workspaces/:id/members/:user_id
Authorization: Bearer <token>
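For instance, adding a teammate as an editor (a sketch that reuses the /api prefix and port from the Quick Start example; the exact prefix for member routes is an assumption):
curl -X POST http://localhost:9080/api/workspaces/<workspace-id>/members \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"user_id": "user-uuid", "role": "editor"}'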
Role-Based Permissions
Owner
- Full access to workspace
- Can delete workspace
- Can manage all members
- Can change any member’s role
Editor
- Can create, update, and delete mocks
- Can view all workspace content
- Cannot delete workspace
- Cannot manage members
Viewer
- Can view workspace content
- Cannot modify anything
- Read-only access
Configuration
Server Configuration
collab:
enabled: true
port: 8080
database:
type: "sqlite" # or "postgres"
path: "./collab.db" # For SQLite
connection_string: "postgresql://..." # For PostgreSQL
jwt:
secret: "${JWT_SECRET}"
expiration_hours: 24
Client Configuration
collab:
server_url: "http://localhost:8080"
workspace_id: "workspace-uuid"
auto_sync: true
sync_interval_ms: 1000
Use Cases
Team Development
Multiple developers working on the same mock configuration:
- Create shared workspace
- Invite team members
- Edit mocks collaboratively
- View changes in real-time
Staging Environment
Shared staging environment with controlled access:
- Create workspace for staging
- Add team members as editors
- Add stakeholders as viewers
- Track all changes with version control
Client Demos
Share mock environments with clients:
- Create workspace for client
- Add client as viewer
- Update mocks as needed
- Client sees changes in real-time
Best Practices
- Use Appropriate Roles: Assign roles based on responsibilities
- Regular Snapshots: Create snapshots before major changes
- Monitor Conflicts: Watch for conflict warnings
- Version Control: Use snapshots for important milestones
- Secure Secrets: Never commit JWT secrets to version control
Troubleshooting
Connection Issues
- Verify WebSocket endpoint is accessible
- Check firewall settings
- Review server logs for errors
Sync Conflicts
- Review conflict resolution strategy
- Use manual resolution for complex cases
- Create snapshots before major changes
Permission Errors
- Verify user role has required permissions
- Check workspace membership
- Review JWT token expiration
Related Documentation
- VBR Engine - State management
- Scenario Marketplace - Sharing scenarios
- Configuration Guide - Complete configuration reference
Data Scenario Marketplace
The Data Scenario Marketplace allows you to discover, install, and use community-built realistic mock scenarios with one-click import functionality. Share your scenarios with the community or use pre-built scenarios for common use cases.
Overview
Scenarios are complete mock system configurations that include:
- MockForge configuration files (config.yaml)
- OpenAPI specifications
- Protocol-specific fixtures
- Example data files
- Documentation
Quick Start
Install a Scenario
# Install from local path
mockforge scenario install ./examples/scenarios/ecommerce-store
# Install from URL
mockforge scenario install https://example.com/scenarios/ecommerce-store.zip
# Install from Git repository
mockforge scenario install https://github.com/user/scenarios#main:ecommerce-store
# Install from registry
mockforge scenario install ecommerce-store
Apply Scenario to Workspace
# Apply installed scenario to current directory
mockforge scenario use ecommerce-store
# This copies:
# - config.yaml
# - openapi.json
# - fixtures/
# - examples/
Start the Server
mockforge serve --config config.yaml
Available Commands
Install
Install a scenario from various sources:
mockforge scenario install <source> [--force] [--skip-validation] [--checksum <sha256>]
Sources:
- Local path: ./scenarios/my-scenario
- URL: https://example.com/scenario.zip
- Git: https://github.com/user/repo#main:scenarios/my-scenario
- Registry: ecommerce-store or ecommerce-store@1.0.0
Options:
- --force: Force reinstall even if the scenario exists
- --skip-validation: Skip package validation
- --checksum: Expected SHA-256 checksum (for URL sources)
List
List all installed scenarios:
mockforge scenario list [--detailed]
Info
Show detailed information about an installed scenario:
mockforge scenario info <name> [--version <version>]
Use
Apply a scenario to the current workspace:
mockforge scenario use <name> [--version <version>]
This copies scenario files to the current directory, allowing you to start using the scenario immediately.
Search
Search for scenarios in the registry:
mockforge scenario search <query> [--category <category>] [--tags <tags>]
Publish
Publish your scenario to the marketplace:
mockforge scenario publish \
--name "my-scenario" \
--version "1.0.0" \
--description "My awesome scenario" \
--category "ecommerce" \
--tags "api,rest,mock"
Scenario Structure
A scenario package must follow this structure:
my-scenario/
├── scenario.yaml # Scenario metadata
├── config.yaml # MockForge configuration
├── openapi.json # OpenAPI specification
├── fixtures/ # Protocol-specific fixtures
│ ├── http/
│ ├── grpc/
│ └── websocket/
├── examples/ # Example data files
├── README.md # Documentation
└── CHANGELOG.md # Version history
scenario.yaml
name: ecommerce-store
version: 1.0.0
description: Complete e-commerce API mock
author: John Doe
category: ecommerce
tags:
- api
- rest
- ecommerce
- shopping
dependencies: []
Marketplace Features
Tags and Categories
Scenarios are organized by:
- Categories: ecommerce, fintech, healthcare, iot, etc.
- Tags: api, rest, grpc, websocket, etc.
- Ratings: Community ratings and reviews
- Versioning: Semantic versioning support
Ratings and Reviews
Rate and review scenarios:
# Rate a scenario
mockforge scenario rate <name> --rating 5 --comment "Great scenario!"
# View ratings
mockforge scenario info <name> --show-ratings
Versioning
Scenarios use semantic versioning:
# Install specific version
mockforge scenario install ecommerce-store@1.0.0
# Install latest version
mockforge scenario install ecommerce-store@latest
# Update to latest
mockforge scenario update ecommerce-store
Domain-Specific Packs
E-commerce
Complete e-commerce API scenarios:
mockforge scenario install ecommerce-store
Includes:
- Product catalog
- Shopping cart
- Order management
- Payment processing
- User accounts
Fintech
Financial services scenarios:
mockforge scenario install fintech-banking
Includes:
- Account management
- Transactions
- Payments
- Cards
- Loans
Healthcare
Healthcare API scenarios:
mockforge scenario install healthcare-api
Includes:
- Patient records
- Appointments
- Prescriptions
- Medical devices
IoT
IoT device scenarios:
mockforge scenario install iot-devices
Includes:
- Device management
- Sensor data
- Commands
- Telemetry
Integration with VBR and MockAI
Scenarios automatically integrate with VBR and MockAI:
VBR Integration
Scenarios can include VBR entity definitions:
# scenario.yaml
vbr_entities:
- name: users
schema: ./schemas/user.json
seed_data: ./data/users.json
MockAI Integration
Scenarios can include MockAI rules:
# scenario.yaml
mockai_rules:
- endpoint: "/users"
rules: ./rules/users.json
API Endpoints
Marketplace API
GET /api/scenarios/marketplace?category=ecommerce&tags=api
List scenarios from marketplace.
GET /api/scenarios/marketplace/{name}
Get scenario details.
POST /api/scenarios/marketplace/{name}/install
Install scenario from marketplace.
Local Scenarios
GET /api/scenarios/local
List installed scenarios.
GET /api/scenarios/local/{name}
Get installed scenario details.
POST /api/scenarios/local/{name}/use
Apply scenario to workspace.
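A minimal sketch of driving these endpoints with curl (Admin UI host and port assumed):
# Browse e-commerce scenarios in the marketplace
curl "http://localhost:9080/api/scenarios/marketplace?category=ecommerce&tags=api"
# Install one, then apply it to the current workspace
curl -X POST http://localhost:9080/api/scenarios/marketplace/ecommerce-store/install
curl -X POST http://localhost:9080/api/scenarios/local/ecommerce-store/use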
Use Cases
Quick Prototyping
Start with a pre-built scenario:
# Install e-commerce scenario
mockforge scenario install ecommerce-store
# Apply to workspace
mockforge scenario use ecommerce-store
# Start server
mockforge serve --config config.yaml
Team Sharing
Share scenarios within your team:
# Publish to internal registry
mockforge scenario publish \
--name "internal-api" \
--registry "https://internal-registry.example.com"
Community Contribution
Contribute scenarios to the community:
# Publish to public marketplace
mockforge scenario publish \
--name "my-awesome-scenario" \
--public
Best Practices
- Document Well: Include comprehensive README and examples
- Version Properly: Use semantic versioning
- Test Thoroughly: Ensure scenarios work out of the box
- Tag Appropriately: Use relevant tags and categories
- Keep Updated: Maintain scenarios with bug fixes and improvements
Troubleshooting
Installation Fails
- Verify scenario structure is correct
- Check file permissions
- Review scenario.yaml for errors
Scenario Not Working
- Check MockForge version compatibility
- Verify all dependencies are installed
- Review scenario documentation
Marketplace Connection Issues
- Verify network connectivity
- Check marketplace URL is correct
- Review authentication credentials
Related Documentation
- Cloud Workspaces - Sharing scenarios with teams
- VBR Engine - State management in scenarios
- Configuration Guide - Complete configuration reference
ForgeConnect SDK
ForgeConnect SDK provides browser extension and framework SDKs for capturing network traffic, auto-generating mocks, and integrating with popular frontend frameworks. Develop and test frontend applications with seamless mock integration.
Overview
ForgeConnect includes:
- Browser Extension: Capture network traffic and create mocks automatically
- Browser SDK: JavaScript/TypeScript SDK for framework integration
- Auto-Mock Generation: Automatically create mocks for unhandled requests
- Framework Adapters: React, Vue, Angular, Next.js support
- Auth Passthrough: Support for OAuth flows and authentication
Quick Start
Install Browser Extension
- Install from Chrome Web Store or Firefox Add-ons
- Open browser DevTools
- Navigate to “MockForge” tab
- Connect to MockForge server
Install Browser SDK
npm install @mockforge/forgeconnect
Basic Usage
import { ForgeConnect } from '@mockforge/forgeconnect';
// Initialize ForgeConnect
const forgeConnect = new ForgeConnect({
serverUrl: 'http://localhost:3000',
autoMock: true
});
// Start intercepting requests
forgeConnect.start();
Browser Extension
Features
- Request Capture: Automatically capture all network requests
- Mock Creation: Create mocks from captured requests with one click
- DevTools Integration: Full DevTools panel with React UI
- Auto-Discovery: Automatically discover MockForge server
- Request Filtering: Filter requests by URL, method, status
Usage
- Open DevTools: Press F12 or right-click → Inspect
- Navigate to MockForge Tab: Click “MockForge” in DevTools
- Connect to Server: Enter MockForge server URL or use auto-discovery
- Capture Requests: Requests are automatically captured
- Create Mocks: Click “Create Mock” on any captured request
Auto-Mock Generation
When a request fails or returns an error, ForgeConnect can automatically create a mock:
const forgeConnect = new ForgeConnect({
serverUrl: 'http://localhost:3000',
autoMock: true,
autoMockOnError: true // Create mock on 4xx/5xx errors
});
Browser SDK
Installation
npm install @mockforge/forgeconnect
Basic Setup
import { ForgeConnect } from '@mockforge/forgeconnect';
const forgeConnect = new ForgeConnect({
serverUrl: 'http://localhost:3000',
autoMock: true,
interceptFetch: true,
interceptXHR: true
});
// Start intercepting
forgeConnect.start();
Framework Adapters
React
import { useForgeConnect } from '@mockforge/forgeconnect/react';
function App() {
const { isConnected, mocks } = useForgeConnect({
serverUrl: 'http://localhost:3000'
});
return (
<div>
{isConnected ? 'Connected' : 'Disconnected'}
<ul>
{mocks.map(mock => (
<li key={mock.id}>{mock.path}</li>
))}
</ul>
</div>
);
}
Vue
import { useForgeConnect } from '@mockforge/forgeconnect/vue';
export default {
setup() {
const { isConnected, mocks } = useForgeConnect({
serverUrl: 'http://localhost:3000'
});
return { isConnected, mocks };
}
};
Next.js
// pages/_app.tsx
import { ForgeConnectProvider } from '@mockforge/forgeconnect/next';
function MyApp({ Component, pageProps }) {
return (
<ForgeConnectProvider serverUrl="http://localhost:3000">
<Component {...pageProps} />
</ForgeConnectProvider>
);
}
Request Interception
ForgeConnect intercepts both fetch and XMLHttpRequest:
const forgeConnect = new ForgeConnect({
serverUrl: 'http://localhost:3000',
interceptFetch: true,
interceptXHR: true
});
// All fetch requests are intercepted
fetch('/api/users')
.then(response => response.json())
.then(data => console.log(data));
// All XHR requests are intercepted
const xhr = new XMLHttpRequest();
xhr.open('GET', '/api/users');
xhr.send();
Mock Management
List Mocks
const mocks = await forgeConnect.listMocks();
console.log('Available mocks:', mocks);
Create Mock
const mock = await forgeConnect.createMock({
method: 'GET',
path: '/api/users',
response: {
status: 200,
body: { users: [] }
}
});
Update Mock
await forgeConnect.updateMock(mockId, {
response: {
status: 200,
body: { users: [{ id: 1, name: 'Alice' }] }
}
});
Delete Mock
await forgeConnect.deleteMock(mockId);
Auth Passthrough
ForgeConnect supports OAuth flows and authentication:
const forgeConnect = new ForgeConnect({
serverUrl: 'http://localhost:3000',
authPassthrough: true,
authPaths: ['/auth', '/oauth', '/login']
});
Requests to auth paths are passed through to the real server without interception.
Configuration
SDK Configuration
interface ForgeConnectConfig {
serverUrl: string;
autoMock?: boolean;
autoMockOnError?: boolean;
interceptFetch?: boolean;
interceptXHR?: boolean;
authPassthrough?: boolean;
authPaths?: string[];
mockPaths?: string[];
excludePaths?: string[];
}
Extension Configuration
Configure via extension options:
- Right-click extension icon
- Select “Options”
- Configure server URL and settings
Use Cases
Frontend Development
Develop frontend without backend:
// Start ForgeConnect
const forgeConnect = new ForgeConnect({
serverUrl: 'http://localhost:3000',
autoMock: true
});
forgeConnect.start();
// Develop frontend - mocks created automatically
API Testing
Test API integration:
// Capture real API calls
const forgeConnect = new ForgeConnect({
serverUrl: 'http://localhost:3000',
autoMock: false // Don't auto-create, capture only
});
// Review captured requests
const captures = await forgeConnect.getCaptures();
// Create mocks from captures
for (const capture of captures) {
await forgeConnect.createMockFromCapture(capture);
}
Debugging
Debug API issues:
// Enable detailed logging
const forgeConnect = new ForgeConnect({
serverUrl: 'http://localhost:3000',
debug: true
});
// View intercepted requests in console
forgeConnect.on('request', (request) => {
console.log('Intercepted:', request);
});
Best Practices
- Use Auto-Mock Sparingly: Only enable for development
- Filter Requests: Use mockPaths and excludePaths to control interception
- Auth Passthrough: Always enable for authentication flows
- Version Control Mocks: Export and commit mocks to version control
- Test with Real APIs: Periodically test against real APIs
Troubleshooting
Extension Not Connecting
- Verify MockForge server is running
- Check server URL is correct
- Review browser console for errors
Requests Not Intercepted
- Verify interception is enabled
- Check request paths match configuration
- Review SDK logs for errors
Mocks Not Working
- Verify mock is created correctly
- Check mock path matches request path
- Review MockForge server logs
Related Documentation
- Browser Proxy Mode - Proxy mode features
- Configuration Guide - Complete configuration reference
- SDK Documentation - Complete SDK reference
Deceptive Deploys
Deceptive Deploy allows you to deploy mock APIs that look identical to production endpoints. Perfect for front-end demos, PoCs, investor prototypes, and client presentations without exposing production systems.
Overview
Deceptive Deploy configures MockForge to automatically:
- ✅ Add production-like headers to all responses
- ✅ Configure CORS to match production settings
- ✅ Apply production-like rate limiting
- ✅ Support OAuth flows identical to production
- ✅ Deploy to public URLs via tunneling
The result: mock APIs that are indistinguishable from production endpoints to your application and users.
Quick Start
Basic Deployment
# Deploy with production preset
mockforge deploy deploy --production-preset --spec api.yaml
# Deploy with custom config
mockforge deploy deploy --config config.yaml --spec api.yaml
Configuration File
Create a config.yaml file:
http:
port: 3000
openapi_spec: "./api-spec.yaml"
deceptive_deploy:
enabled: true
auto_tunnel: true
Start the Server
mockforge serve --config config.yaml
The server will automatically:
- Apply production-like headers
- Configure CORS
- Set up rate limiting
- Start a tunnel (if auto_tunnel: true)
Configuration
Basic Configuration
deceptive_deploy:
enabled: true
auto_tunnel: true
Full Configuration
deceptive_deploy:
enabled: true
# Production-like CORS
cors:
allowed_origins: ["*"]
allowed_methods: ["GET", "POST", "PUT", "DELETE", "PATCH", "OPTIONS"]
allowed_headers: ["*"]
allow_credentials: true
# Production-like rate limiting
rate_limit:
requests_per_minute: 1000
burst: 2000
per_ip: true
# Production headers (supports templates)
headers:
X-API-Version: "1.0"
X-Request-ID: "{{uuid}}"
X-Powered-By: "MockForge"
# OAuth configuration (optional)
oauth:
client_id: "your-client-id"
client_secret: "your-client-secret"
introspection_url: "https://auth.example.com/introspect"
# Custom domain (optional)
custom_domain: "api.example.com"
# Auto-start tunnel
auto_tunnel: true
Production Headers
Deceptive Deploy automatically adds configured headers to all responses. Headers support template expansion:
Supported Templates
- {{uuid}} - Generates a unique UUID v4 for each request
- {{now}} - Current timestamp in RFC3339 format
- {{timestamp}} - Current Unix timestamp (seconds)
Example
headers:
X-Request-ID: "{{uuid}}" # Unique ID per request
X-Timestamp: "{{timestamp}}" # Unix timestamp
X-Request-Time: "{{now}}" # RFC3339 timestamp
X-API-Version: "1.0" # Static value
Common Production Headers
headers:
# Request tracking
X-Request-ID: "{{uuid}}"
X-Correlation-ID: "{{uuid}}"
# API information
X-API-Version: "1.0"
X-Environment: "production"
# Server information
X-Powered-By: "MockForge"
Server: "MockForge/1.0"
# Custom headers
X-Rate-Limit-Remaining: "999"
X-Rate-Limit-Reset: "{{timestamp}}"
CORS Configuration
Deceptive Deploy can configure CORS to match production settings:
cors:
# Allow all origins (use specific origins in production)
allowed_origins:
- "*"
# Or specific origins:
# - "https://app.example.com"
# - "https://staging.example.com"
# Allowed HTTP methods
allowed_methods:
- "GET"
- "POST"
- "PUT"
- "DELETE"
- "PATCH"
- "OPTIONS"
# Allowed headers
allowed_headers:
- "*"
# Or specific headers:
# - "Content-Type"
# - "Authorization"
# - "X-API-Key"
# Allow credentials (cookies, authorization headers)
allow_credentials: true
Rate Limiting
Configure production-like rate limiting:
rate_limit:
# Requests per minute
requests_per_minute: 1000
# Burst capacity (maximum requests in a short burst)
burst: 2000
# Enable per-IP rate limiting
per_ip: true
Rate Limit Headers
When rate limiting is enabled, responses include rate limit headers:
- X-Rate-Limit-Limit: Maximum requests per minute
- X-Rate-Limit-Remaining: Remaining requests in current window
- X-Rate-Limit-Reset: Unix timestamp when limit resets
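A client can read these headers off any response. A minimal sketch using fetch, with the header names listed above:
// Read rate-limit headers from a response (run inside an async context).
const res = await fetch('http://localhost:3000/api/users');
const limit = res.headers.get('X-Rate-Limit-Limit');
const remaining = res.headers.get('X-Rate-Limit-Remaining');
const reset = res.headers.get('X-Rate-Limit-Reset');
console.log(`Rate limit: ${remaining}/${limit}, resets at ${reset}`);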
OAuth Configuration
Configure OAuth flows to match production:
oauth:
client_id: "your-client-id"
client_secret: "your-client-secret"
introspection_url: "https://auth.example.com/introspect"
auth_url: "https://auth.example.com/authorize"
token_url: "https://auth.example.com/token"
token_type_hint: "access_token"
This enables:
- Token introspection
- Authorization code flow
- Client credentials flow
- Token validation
Tunneling
Deceptive Deploy can automatically start a tunnel to expose your mock API via a public URL:
deceptive_deploy:
auto_tunnel: true
custom_domain: "api.example.com" # Optional
Tunnel Providers
- Self-hosted: Use your own tunnel server
- Cloud: Use MockForge Cloud (if available)
- Cloudflare: Use Cloudflare Tunnel (coming soon)
Manual Tunnel
# Start tunnel manually
mockforge tunnel start \
--local-url http://localhost:3000 \
--subdomain my-api
CLI Commands
Deploy
# Deploy with production preset
mockforge deploy deploy --production-preset --spec api.yaml
# Deploy with custom config
mockforge deploy deploy --config config.yaml --spec api.yaml
# Deploy with auto-tunnel
mockforge deploy deploy --config config.yaml --auto-tunnel
# Deploy with custom domain
mockforge deploy deploy --config config.yaml --custom-domain api.example.com
Status
# Get deployment status
mockforge deploy status --config config.yaml
Stop
# Stop deployment
mockforge deploy stop --config config.yaml
Use Cases
Front-End Demo
# config.yaml
http:
port: 3000
openapi_spec: "./api.yaml"
deceptive_deploy:
enabled: true
auto_tunnel: true
headers:
X-API-Version: "1.0"
X-Request-ID: "{{uuid}}"
# Deploy
mockforge deploy deploy --config config.yaml
# Start server
mockforge serve --config config.yaml
# Front-end connects to public URL
# https://abc123.tunnel.mockforge.dev
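To sanity-check a deployment, fetch the public URL and confirm the production-like headers are present. A minimal sketch; the tunnel URL is the placeholder from above, and /api/health stands in for any endpoint in your spec:
// Hedged sketch: confirm production-like headers on the deployed mock.
const res = await fetch('https://abc123.tunnel.mockforge.dev/api/health');
console.log('X-API-Version:', res.headers.get('X-API-Version')); // expect "1.0"
console.log('X-Request-ID:', res.headers.get('X-Request-ID'));   // unique per request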
Investor Prototype
deceptive_deploy:
enabled: true
cors:
allowed_origins: ["*"]
allow_credentials: true
rate_limit:
requests_per_minute: 1000
burst: 2000
headers:
X-API-Version: "1.0"
X-Environment: "production"
auto_tunnel: true
custom_domain: "api.demo.example.com"
PoC with OAuth
deceptive_deploy:
enabled: true
oauth:
client_id: "demo-client"
client_secret: "demo-secret"
introspection_url: "https://auth.example.com/introspect"
headers:
X-Request-ID: "{{uuid}}"
X-Auth-Provider: "OAuth2"
Best Practices
1. Use Specific Origins
Instead of *, use specific origins:
cors:
allowed_origins:
- "https://app.example.com"
- "https://staging.example.com"
2. Set Realistic Rate Limits
Match production rate limits:
rate_limit:
requests_per_minute: 1000 # Match production
burst: 2000
3. Use Meaningful Headers
Add headers that match production:
headers:
X-API-Version: "1.0"
X-Request-ID: "{{uuid}}"
X-Environment: "production"
4. Secure OAuth Credentials
Never commit OAuth secrets to version control:
oauth:
client_id: "${OAUTH_CLIENT_ID}"
client_secret: "${OAUTH_CLIENT_SECRET}"
5. Use Custom Domains
For professional presentations:
deceptive_deploy:
custom_domain: "api.example.com"
Troubleshooting
Headers Not Appearing
Check that deceptive deploy is enabled:
deceptive_deploy:
enabled: true
headers:
X-Request-ID: "{{uuid}}"
CORS Errors
Verify CORS configuration:
cors:
allowed_origins: ["*"] # Or specific origins
allow_credentials: true
Rate Limiting Too Strict
Adjust rate limits:
rate_limit:
requests_per_minute: 1000 # Increase if needed
burst: 2000
Tunnel Not Starting
Check tunnel configuration:
deceptive_deploy:
auto_tunnel: true
Or start manually:
mockforge tunnel start --local-url http://localhost:3000
Related Documentation
- Tunneling Guide - Detailed tunnel setup
- Authentication Guide - OAuth configuration
- Configuration Reference - Full config options
Voice + LLM Interface
The Voice + LLM Interface allows you to create mock APIs conversationally using natural language commands, powered by LLM interpretation. Generate OpenAPI specifications and mock APIs from voice or text commands.
Overview
The Voice + LLM Interface provides:
- Voice Command Parsing: Use natural language to describe APIs
- OpenAPI Generation: Automatically generate OpenAPI 3.0 specifications
- Conversational Mode: Multi-turn interactions for complex APIs
- Single-Shot Mode: Complete API generation in one command
- CLI and Web UI: Use from command line or web interface
Quick Start
CLI Usage
Single-Shot Mode
Create a complete API in one command:
# Create API from text command
mockforge voice create \
--command "Create a user management API with endpoints for listing users, getting a user by ID, creating users, and updating users" \
--output api.yaml
# Or use interactive input
mockforge voice create
# Enter your command when prompted
Conversational Mode
Build APIs through conversation:
# Start interactive conversation
mockforge voice interactive
# Example conversation:
# > Create a user management API
# > Add an endpoint to get user by email
# > Add authentication to all endpoints
# > Show me the spec
# > done
Web UI Usage
- Navigate to Voice page in Admin UI
- Click microphone or type your command
- View generated OpenAPI spec
- Download or use the spec
Features
Natural Language Commands
Describe your API in plain English:
Create a REST API for an e-commerce store with:
- Product catalog with categories
- Shopping cart management
- Order processing
- User authentication
OpenAPI Generation
Automatically generates complete OpenAPI 3.0 specifications:
openapi: 3.0.0
info:
title: E-commerce Store API
version: 1.0.0
paths:
/products:
get:
summary: List products
responses:
'200':
description: List of products
/cart:
post:
summary: Add item to cart
requestBody:
required: true
content:
application/json:
schema:
type: object
properties:
product_id:
type: integer
quantity:
type: integer
Conversational Mode
Build complex APIs through multiple interactions:
> Create a blog API
✓ Created blog API with posts endpoint
> Add comments to posts
✓ Added comments endpoint with post_id relationship
> Add user authentication
✓ Added authentication to all endpoints
> Show me the spec
[Displays generated OpenAPI spec]
> done
✓ Saved to blog-api.yaml
Single-Shot Mode
Generate complete APIs in one command:
mockforge voice create \
--command "Create a task management API with CRUD operations for tasks, projects, and users" \
--output task-api.yaml
CLI Commands
Create (Single-Shot)
mockforge voice create \
--command "<description>" \
--output <file> \
--format yaml \
--ai-provider ollama \
--ai-model llama3.2
Options:
- --command: Natural language description of the API
- --output: Output file path (default: generated-api.yaml)
- --format: Output format (yaml or json)
- --ai-provider: LLM provider (ollama, openai, anthropic)
- --ai-model: Model name (e.g., llama3.2, gpt-3.5-turbo)
Interactive (Conversational)
mockforge voice interactive \
--ai-provider ollama \
--ai-model llama3.2
Special Commands:
- help - Show available commands
- show spec - Display current OpenAPI spec
- save <file> - Save spec to file
- done - Exit and save
- exit - Exit without saving
Web UI
Voice Input
Use Web Speech API for voice input:
- Click microphone button
- Speak your command
- View real-time transcript
- See generated spec
Text Input
Type commands directly:
- Enter command in text field
- Click “Generate” or press Enter
- View generated spec
- Download or use spec
Command History
View last 10 commands:
- Click on history item to reuse
- Edit before regenerating
- Save successful commands
Configuration
AI Provider Configuration
voice:
enabled: true
ai_provider: "ollama" # or "openai", "anthropic"
ai_model: "llama3.2"
ai_base_url: "http://localhost:11434" # For Ollama
ai_api_key: "${AI_API_KEY}" # For OpenAI/Anthropic
CLI Configuration
# Set AI provider via environment
export MOCKFORGE_VOICE_AI_PROVIDER=ollama
export MOCKFORGE_VOICE_AI_MODEL=llama3.2
export MOCKFORGE_VOICE_AI_BASE_URL=http://localhost:11434
# Or use OpenAI
export MOCKFORGE_VOICE_AI_PROVIDER=openai
export MOCKFORGE_VOICE_AI_MODEL=gpt-3.5-turbo
export MOCKFORGE_VOICE_AI_API_KEY=sk-...
API Endpoints
Process Voice Command
POST /api/v2/voice/process
Content-Type: application/json
{
"command": "Create a user management API",
"mode": "single_shot", # or "conversational"
"conversation_id": null # For conversational mode
}
Response:
{
"success": true,
"spec": {
"openapi": "3.0.0",
"info": {...},
"paths": {...}
},
"conversation_id": "uuid" # For conversational mode
}
Continue Conversation
POST /api/v2/voice/process
Content-Type: application/json
{
"command": "Add authentication",
"mode": "conversational",
"conversation_id": "uuid"
}
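The same endpoint can be driven from a script. A hedged sketch of a two-turn conversation over HTTP, assuming the Admin API is reachable at localhost:9080:
const base = 'http://localhost:9080'; // assumed Admin API address
// First turn: start a conversation
let res = await fetch(`${base}/api/v2/voice/process`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    command: 'Create a user management API',
    mode: 'conversational',
    conversation_id: null,
  }),
});
const { conversation_id } = await res.json();
// Second turn: refine the spec within the same conversation
res = await fetch(`${base}/api/v2/voice/process`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    command: 'Add authentication',
    mode: 'conversational',
    conversation_id,
  }),
});
console.log(await res.json()); // { success, spec, conversation_id }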
Use Cases
Rapid Prototyping
Quickly create API prototypes:
mockforge voice create \
--command "Create a simple todo API with CRUD operations" \
--output todo-api.yaml
API Design
Design APIs by describing them:
mockforge voice interactive
# > Create a social media API
# > Add posts, comments, and likes
# > Add user profiles
# > Show me the spec
Learning
Learn OpenAPI by example:
# Generate spec
mockforge voice create --command "..."
# Review generated spec
cat generated-api.yaml
Best Practices
- Be Specific: Provide clear, detailed descriptions
- Iterate: Use conversational mode for complex APIs
- Review Generated Specs: Always review and validate generated specs
- Use Local LLMs: Use Ollama for faster, free generation
- Save Good Examples: Save successful commands for reuse
Troubleshooting
Command Not Understood
- Be more specific in your description
- Break complex APIs into smaller parts
- Use conversational mode for clarification
Spec Generation Fails
- Check AI provider is accessible
- Verify API key is set (for OpenAI/Anthropic)
- Review server logs for errors
Voice Input Not Working
- Check browser permissions for microphone
- Verify Web Speech API is supported
- Use text input as fallback
Related Documentation
- Generative Schema Mode - JSON-based API generation
- OpenAPI Integration - Working with OpenAPI specs
- Configuration Guide - Complete configuration reference
Reality Continuum
The Reality Continuum feature enables gradual transition from mock to real backend data by intelligently blending responses from both sources. This allows teams to develop and test against a real backend that’s still under construction, smoothly transitioning from 100% mock to 100% real over time.
Overview
The Reality Continuum provides:
- Dynamic Blending: Intelligently merges mock and real responses based on configurable blend ratios
- Time-Based Progression: Automatically transitions blend ratios over time using virtual clock
- Flexible Configuration: Supports per-route, group-level, and global blend ratio settings
- Multiple Merge Strategies: Field-level merge, weighted selection, or body blending
- Fallback Handling: Gracefully handles failures from either source
Quick Start
Basic Configuration
reality_continuum:
enabled: true
default_ratio: 0.0 # Start with 100% mock
transition_mode: "manual" # or "time_based" or "scheduled"
merge_strategy: "field_level"
Time-Based Progression
Configure automatic progression from mock to real over a time period:
reality_continuum:
enabled: true
default_ratio: 0.0
transition_mode: "time_based"
time_schedule:
start_time: "2025-01-01T00:00:00Z"
end_time: "2025-02-01T00:00:00Z"
start_ratio: 0.0
end_ratio: 1.0
curve: "linear" # or "exponential" or "sigmoid"
Per-Route Configuration
Set different blend ratios for specific routes:
reality_continuum:
enabled: true
default_ratio: 0.0
routes:
- pattern: "/api/users/*"
ratio: 0.5 # 50% real for user endpoints
enabled: true
- pattern: "/api/orders/*"
ratio: 0.3 # 30% real for order endpoints
group: "api-v1"
enabled: true
Blend Ratio Priority
The blend ratio is determined in the following order (highest to lowest priority):
- Manual Overrides - Set via API calls
- Route-Specific Rules - Per-route configuration
- Group-Level Overrides - Migration group settings
- Time-Based Schedule - If time-based mode is enabled
- Default Ratio - Global default setting
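Conceptually, resolution walks this list top to bottom and returns the first ratio that applies. An illustrative TypeScript sketch of that lookup; the names and shapes are assumptions, not the engine's actual API:
// Illustrative only: mirrors the priority order documented above.
interface ContinuumState {
  manualOverrides: Map<string, number>;          // set via API calls
  routeRules: { pattern: RegExp; ratio: number }[];
  groupOverrides: Map<string, number>;
  scheduleRatio?: number;                        // present when time-based mode is active
  defaultRatio: number;
}

function resolveRatio(state: ContinuumState, path: string, group?: string): number {
  const manual = state.manualOverrides.get(path);
  if (manual !== undefined) return manual;       // 1. manual override
  const rule = state.routeRules.find(r => r.pattern.test(path));
  if (rule) return rule.ratio;                   // 2. route-specific rule
  if (group !== undefined) {
    const g = state.groupOverrides.get(group);
    if (g !== undefined) return g;               // 3. group-level override
  }
  if (state.scheduleRatio !== undefined) return state.scheduleRatio; // 4. time-based schedule
  return state.defaultRatio;                     // 5. global default
}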
Merge Strategies
Field-Level (Default)
Deep merges JSON objects, combines arrays, and uses weighted selection for primitives:
// Mock response
{
"id": 1,
"name": "Mock User",
"email": "mock@example.com"
}
// Real response
{
"id": 2,
"name": "Real User",
"status": "active"
}
// Blended (ratio: 0.5)
{
"id": 1.5, // Weighted average
"name": "Real User", // Selected based on ratio
"email": "mock@example.com", // From mock (ratio < 0.5)
"status": "active" // From real (ratio >= 0.5)
}
Weighted Selection
Randomly selects between mock and real based on ratio (for testing/demo).
Body Blend
Merges arrays, averages numeric fields, and deep merges objects with interleaving.
Transition Curves
Linear
Constant rate of progression:
Ratio
1.0 | *
| *
| *
| *
0.0 |*
+------------------- Time
Exponential
Slow start, fast end:
Ratio
1.0 | *
| *
| *
| *
0.0 |*
+------------------- Time
Sigmoid
Slow start and end, fast middle:
Ratio
1.0 | *
| *
| *
| *
0.0 |*
+------------------- Time
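All three curves map normalized elapsed time u in [0, 1] onto the range between start_ratio and end_ratio. A sketch of plausible formulas; the exact shapes MockForge uses are not documented here, so treat these as assumptions:
// Hedged sketch: one plausible formulation of the three curves.
function curveValue(curve: 'linear' | 'exponential' | 'sigmoid', u: number): number {
  const t = Math.min(Math.max(u, 0), 1); // clamp to [0, 1]
  if (curve === 'linear') return t;
  if (curve === 'exponential') return t * t; // slow start, fast end (quadratic, as one example)
  const k = 10; // sigmoid steepness; an assumed constant
  return 1 / (1 + Math.exp(-k * (t - 0.5))); // slow start and end, fast middle
}

function blendRatio(startRatio: number, endRatio: number, u: number,
                    curve: 'linear' | 'exponential' | 'sigmoid'): number {
  return startRatio + (endRatio - startRatio) * curveValue(curve, u);
}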
API Endpoints
Get Blend Ratio
GET /__mockforge/continuum/ratio?path=/api/users/123
Response:
{
"success": true,
"data": {
"path": "/api/users/123",
"blend_ratio": 0.5,
"enabled": true,
"transition_mode": "Manual",
"merge_strategy": "FieldLevel",
"default_ratio": 0.0
}
}
Set Blend Ratio
PUT /__mockforge/continuum/ratio
Content-Type: application/json
{
"path": "/api/users/*",
"ratio": 0.75
}
Get Time Schedule
GET /__mockforge/continuum/schedule
Update Time Schedule
PUT /__mockforge/continuum/schedule
Content-Type: application/json
{
"start_time": "2025-01-01T00:00:00Z",
"end_time": "2025-02-01T00:00:00Z",
"start_ratio": 0.0,
"end_ratio": 1.0,
"curve": "linear"
}
Manually Advance Ratio
POST /__mockforge/continuum/advance
Content-Type: application/json
{
"increment": 0.1
}
Enable/Disable
PUT /__mockforge/continuum/enabled
Content-Type: application/json
{
"enabled": true
}
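These endpoints compose into scripts. A hedged sketch that advances the global ratio by 10% and reads back the effective ratio for a path; the server address serving /__mockforge/* is an assumption:
const base = 'http://localhost:3000'; // assumed address serving /__mockforge/*
// Advance the blend ratio by 0.1
await fetch(`${base}/__mockforge/continuum/advance`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ increment: 0.1 }),
});
// Read back the effective ratio for a specific path
const res = await fetch(`${base}/__mockforge/continuum/ratio?path=/api/users/123`);
const { data } = await res.json();
console.log(`Blend ratio for ${data.path}: ${data.blend_ratio}`);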
Integration with Time Travel
The Reality Continuum integrates seamlessly with MockForge’s time travel system. When virtual time is enabled, blend ratios automatically progress based on the virtual clock:
use std::sync::Arc;

use mockforge_core::{
    ContinuumConfig, RealityContinuumEngine, TimeSchedule, TransitionMode, VirtualClock,
};

// `start_time` and `end_time` are timestamps defined elsewhere.
let clock = Arc::new(VirtualClock::new());
clock.enable_and_set(start_time);

let schedule = TimeSchedule::new(start_time, end_time, 0.0, 1.0);
let config = ContinuumConfig {
    enabled: true,
    transition_mode: TransitionMode::TimeBased,
    time_schedule: Some(schedule),
    ..Default::default()
};

let engine = RealityContinuumEngine::with_virtual_clock(config, clock);
Use Cases
Gradual Backend Migration
Start with 100% mock responses and gradually increase real backend usage as endpoints are implemented:
reality_continuum:
enabled: true
transition_mode: "time_based"
time_schedule:
start_time: "2025-01-01T00:00:00Z"
end_time: "2025-03-01T00:00:00Z" # 2 months transition
start_ratio: 0.0
end_ratio: 1.0
curve: "sigmoid" # Slow start and end
Per-Endpoint Rollout
Different endpoints migrate at different rates:
reality_continuum:
enabled: true
routes:
- pattern: "/api/users/*"
ratio: 0.9 # Almost fully migrated
- pattern: "/api/orders/*"
ratio: 0.3 # Still mostly mock
- pattern: "/api/payments/*"
ratio: 0.0 # Not yet migrated
A/B Testing
Compare mock and real responses by blending them:
reality_continuum:
enabled: true
default_ratio: 0.5 # 50/50 split
merge_strategy: "field_level"
Fallback Behavior
When continuum is enabled:
- Both sources succeed: Responses are blended according to the blend ratio
- Only proxy succeeds: Real response is returned (fallback to real)
- Only mock succeeds: Mock response is returned (fallback to mock)
- Both fail: Error is returned (unless migration mode is Real, which fails hard)
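An illustrative sketch of that decision logic (not the engine's actual code; blend stands in for the configured merge strategy):
// `blend` merges two responses per the configured strategy (not shown here).
declare function blend(mock: Response, real: Response, ratio: number): Response;

async function respond(
  mock: () => Promise<Response | null>,
  real: () => Promise<Response | null>,
  ratio: number,
): Promise<Response> {
  const [m, r] = await Promise.all([mock().catch(() => null), real().catch(() => null)]);
  if (m && r) return blend(m, r, ratio);  // both succeed: blend per ratio
  if (r) return r;                        // only proxy succeeds: fall back to real
  if (m) return m;                        // only mock succeeds: fall back to mock
  throw new Error('Both sources failed'); // both fail: surface an error
}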
Best Practices
- Start Conservative: Begin with default_ratio: 0.0 (100% mock)
- Use Time-Based Progression: Automate the transition with time schedules
- Monitor Both Sources: Ensure both mock and real backends are healthy
- Test Fallback Behavior: Verify graceful degradation when one source fails
- Use Groups for Batch Control: Group related routes for coordinated migration
- Leverage Virtual Clock: Use time travel to simulate weeks of development in minutes
Limitations
- Currently supports JSON responses only
- Merge strategies may not handle all edge cases perfectly
- Time-based progression requires time travel to be enabled for full effect
- Blending adds slight latency (both responses must be fetched)
Related Documentation
- Temporal Simulation - Time travel integration
- Proxy Mode - Proxy configuration
- Configuration Guide - Complete configuration reference
Smart Personas
Smart Personas generate coherent, consistent mock data from persona profiles with unique backstories and deterministic generation. The same persona always produces the same data, ensuring consistency across endpoints and requests.
Overview
Smart Personas provide:
- Persona Profiles: Unique personas with IDs and domain associations
- Coherent Backstories: Template-based backstory generation
- Persona Relationships: Connections between personas (users, devices, organizations)
- Deterministic Generation: Same persona = same data every time
- Domain-Specific Templates: Finance, E-commerce, Healthcare, IoT personas
Quick Start
Enable Smart Personas
# config.yaml
data:
personas:
enabled: true
auto_generate_backstories: true
domain: "ecommerce" # or "finance", "healthcare", "iot"
Use in Templates
responses:
- path: "/api/users/{id}"
body: |
{
"id": "{{persona.id}}",
"name": "{{persona.name}}",
"email": "{{persona.email}}",
"backstory": "{{persona.backstory}}"
}
Persona Profiles
Automatic Persona Creation
Personas are automatically created when referenced:
# Request to /api/users/123
# Persona with ID "123" is automatically created
# Same persona used for all requests with ID "123"
Manual Persona Creation
use mockforge_data::{PersonaProfile, PersonaRegistry};

let mut registry = PersonaRegistry::new();
let persona = PersonaProfile::new("user-123", "ecommerce");
registry.add_persona(persona);
Backstories
Automatic Backstory Generation
Backstories are automatically generated based on domain:
data:
personas:
enabled: true
auto_generate_backstories: true
domain: "ecommerce"
Domain-Specific Templates
E-commerce
"Alice is a 32-year-old marketing professional living in San Francisco.
She frequently shops online for electronics and fashion items.
Her average order value is $150, and she prefers express shipping."
Finance
"Bob is a 45-year-old investment banker based in New York.
He manages a portfolio worth $2.5M and prefers conservative investments.
He has been a customer for 8 years."
Healthcare
"Carol is a 28-year-old nurse practitioner in Boston.
She manages chronic conditions for 50+ patients.
She prefers digital health tools and telemedicine."
IoT
"Device-001 is a smart thermostat installed in a 3-bedroom home in Seattle.
It monitors temperature, humidity, and energy usage.
It's connected to 5 other smart home devices."
Custom Backstories
Set custom backstories:
let mut persona = PersonaProfile::new("user-123", "ecommerce");
persona.set_backstory("Custom backstory text".to_string());
Persona Relationships
Define Relationships
use mockforge_data::PersonaRegistry;

let mut registry = PersonaRegistry::new();

// Add a relationship
registry.add_relationship("user-123", "device-456", "owns");

// Get related personas
let devices = registry.get_related_personas("user-123", "owns");
Relationship Types
Common relationship types:
- owns - User owns device/organization
- belongs_to - Device/organization belongs to user
- manages - User manages organization
- connected_to - Device connected to another device
- parent_of - Organization parent-child relationship
Cross-Entity Consistency
Same base ID across different entity types:
// User persona
let user = registry.get_or_create_persona_by_type("123", EntityType::User, "ecommerce");

// Device persona (same ID, different type)
let device = registry.get_or_create_persona_by_type("123", EntityType::Device, "iot");

// A relationship between the two is established automatically
Deterministic Generation
Same Persona, Same Data
The same persona always generates the same data:
# First request
GET /api/users/123
# Response: {"id": 123, "name": "Alice", "email": "alice@example.com"}
# Second request (same persona ID)
GET /api/users/123
# Response: {"id": 123, "name": "Alice", "email": "alice@example.com"} # Same!
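This property is easy to verify in a test. A minimal sketch, assuming the server and endpoint from the example above:
// Hedged sketch: the same persona ID should yield identical bodies.
const first = await (await fetch('http://localhost:3000/api/users/123')).text();
const second = await (await fetch('http://localhost:3000/api/users/123')).text();
console.assert(first === second, 'persona 123 should be deterministic');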
Seed-Based Generation
Personas use deterministic seeds:
let persona = PersonaProfile::new("user-123", "ecommerce");
// The seed is derived from the persona ID and domain:
// same ID + same domain = same seed = same data
Template Functions
Persona Functions
# In response templates
{
"id": "{{persona.id}}",
"name": "{{persona.name}}",
"email": "{{persona.email}}",
"phone": "{{persona.phone}}",
"address": "{{persona.address}}",
"backstory": "{{persona.backstory}}",
"traits": "{{persona.traits}}"
}
Relationship Functions
# Get related personas
{
"user": {
"id": "{{persona.id}}",
"name": "{{persona.name}}"
},
"devices": "{{persona.related.owns}}"
}
Configuration
Full Configuration
data:
personas:
enabled: true
auto_generate_backstories: true
domain: "ecommerce" # finance, healthcare, iot, generic
backstory_templates:
ecommerce:
- "{{name}} is a {{age}}-year-old {{profession}} living in {{city}}."
- "They frequently shop for {{interests}} with an average order value of ${{avg_order_value}}."
relationship_types:
- owns
- belongs_to
- manages
- connected_to
Use Cases
Consistent User Data
Generate consistent user data across endpoints:
# User endpoint
responses:
- path: "/api/users/{id}"
body: |
{
"id": "{{persona.id}}",
"name": "{{persona.name}}",
"email": "{{persona.email}}"
}
# User's orders endpoint
responses:
- path: "/api/users/{id}/orders"
body: |
{
"user_id": "{{persona.id}}",
"user_name": "{{persona.name}}",
"orders": [...]
}
Device Relationships
Model device ownership:
# Device endpoint
responses:
- path: "/api/devices/{id}"
body: |
{
"id": "{{persona.id}}",
"owner_id": "{{persona.relationship.owner}}",
"type": "{{persona.type}}"
}
Organization Hierarchies
Model organizational structures:
# Organization endpoint
responses:
- path: "/api/organizations/{id}"
body: |
{
"id": "{{persona.id}}",
"name": "{{persona.name}}",
"parent_id": "{{persona.relationship.parent}}",
"children": "{{persona.related.children}}"
}
Best Practices
- Use Consistent IDs: Use the same persona ID across related endpoints
- Choose Appropriate Domain: Select domain that matches your use case
- Leverage Relationships: Use relationships to model complex data structures
- Customize Backstories: Add domain-specific details to backstories
- Test Determinism: Verify same persona generates same data
Troubleshooting
Persona Not Found
- Ensure personas are enabled in configuration
- Check persona ID is consistent across requests
- Verify domain matches persona domain
Backstory Not Generated
- Check auto_generate_backstories is enabled
- Verify domain is supported
- Review persona creation logs
Relationships Not Working
- Verify relationship types are defined
- Check relationship is added to registry
- Review relationship query syntax
Related Documentation
- VBR Engine - State management with personas
- Data Generation - Data generation features
- Configuration Guide - Complete configuration reference
Environment Variables
MockForge supports extensive configuration through environment variables. This page documents all available environment variables, their purposes, and usage examples.
Core Functionality
Server Control
- MOCKFORGE_LATENCY_ENABLED=true|false (default: true)
  - Enable/disable response latency simulation
  - When disabled, responses are immediate
- MOCKFORGE_FAILURES_ENABLED=true|false (default: false)
  - Enable/disable failure injection
  - When enabled, can simulate HTTP errors and timeouts
- MOCKFORGE_LOG_LEVEL=debug|info|warn|error (default: info)
  - Set the logging verbosity level
  - Available levels: debug, info, warn, error
Recording and Replay
- MOCKFORGE_RECORD_ENABLED=true|false (default: false)
  - Enable recording of HTTP requests as fixtures
  - Recorded fixtures can be replayed later
- MOCKFORGE_REPLAY_ENABLED=true|false (default: false)
  - Enable replay of recorded fixtures
  - When enabled, serves recorded responses instead of generating new ones
- MOCKFORGE_PROXY_ENABLED=true|false (default: false)
  - Enable proxy mode for forwarding requests
  - Useful for testing against real APIs
HTTP Server Configuration
Server Settings
- MOCKFORGE_HTTP_PORT=3000 (default: 3000)
  - Port for the HTTP server to listen on
- MOCKFORGE_HTTP_HOST=127.0.0.1 (default: 0.0.0.0)
  - Host address for the HTTP server to bind to
- MOCKFORGE_CORS_ENABLED=true|false (default: true)
  - Enable/disable CORS headers in responses
- MOCKFORGE_REQUEST_TIMEOUT_SECS=30 (default: 30)
  - Timeout for HTTP requests in seconds
OpenAPI Integration
- MOCKFORGE_HTTP_OPENAPI_SPEC=path/to/spec.json
  - Path to OpenAPI specification file
  - Enables automatic endpoint generation from the OpenAPI spec
Validation and Templating
- MOCKFORGE_REQUEST_VALIDATION=enforce|warn|off (default: enforce)
  - Level of request validation
  - enforce: Reject invalid requests with an error
  - warn: Log warnings but allow requests
  - off: Skip validation entirely
- MOCKFORGE_RESPONSE_VALIDATION=true|false (default: false)
  - Enable validation of generated responses
  - Useful for ensuring response format compliance
- MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true|false (default: false)
  - Enable template expansion in responses
  - Allows use of {{uuid}}, {{now}}, etc. in responses
- MOCKFORGE_AGGREGATE_ERRORS=true|false (default: true)
  - Aggregate multiple validation errors into a single response
  - When enabled, returns all validation errors at once
- MOCKFORGE_VALIDATION_STATUS=400|422 (default: 400)
  - HTTP status code for validation errors
  - 400: Bad Request (general); 422: Unprocessable Entity (validation-specific)
WebSocket Server Configuration
Server Settings
- MOCKFORGE_WS_PORT=3001 (default: 3001)
  - Port for the WebSocket server to listen on
- MOCKFORGE_WS_HOST=127.0.0.1 (default: 0.0.0.0)
  - Host address for the WebSocket server to bind to
- MOCKFORGE_WS_CONNECTION_TIMEOUT_SECS=300 (default: 300)
  - WebSocket connection timeout in seconds
Replay Configuration
- MOCKFORGE_WS_REPLAY_FILE=path/to/replay.jsonl
  - Path to WebSocket replay file
  - Enables scripted WebSocket message sequences
gRPC Server Configuration
Server Settings
- MOCKFORGE_GRPC_PORT=50051 (default: 50051)
  - Port for the gRPC server to listen on
- MOCKFORGE_GRPC_HOST=127.0.0.1 (default: 0.0.0.0)
  - Host address for the gRPC server to bind to
Admin UI Configuration
Server Settings
- MOCKFORGE_ADMIN_ENABLED=true|false (default: false)
  - Enable/disable the Admin UI
  - When enabled, provides a web interface for management
- MOCKFORGE_ADMIN_PORT=9080 (default: 9080)
  - Port for the Admin UI server to listen on
- MOCKFORGE_ADMIN_HOST=127.0.0.1 (default: 127.0.0.1)
  - Host address for the Admin UI server to bind to
UI Configuration
- MOCKFORGE_ADMIN_MOUNT_PATH=/admin (default: none)
  - Mount path for embedded Admin UI
  - When set, the Admin UI is served under the HTTP server
- MOCKFORGE_ADMIN_API_ENABLED=true|false (default: true)
  - Enable/disable Admin UI API endpoints
  - Controls whether /__mockforge/* endpoints are available
Data Generation Configuration
Faker Control
- MOCKFORGE_RAG_ENABLED=true|false (default: false)
  - Enable Retrieval-Augmented Generation for data
  - Requires additional setup for LLM integration
- MOCKFORGE_FAKE_TOKENS=true|false (default: true)
  - Enable/disable faker token expansion
  - Controls whether {{faker.email}} etc. work
Fixtures and Testing
Fixtures Configuration
- MOCKFORGE_FIXTURES_DIR=path/to/fixtures (default: ./fixtures)
  - Directory where fixtures are stored
  - Used for recording and replaying HTTP requests
- MOCKFORGE_RECORD_GET_ONLY=true|false (default: false)
  - When recording, only record GET requests
  - Reduces fixture file size for read-only APIs
Configuration Files
Configuration Loading
- MOCKFORGE_CONFIG_FILE=path/to/config.yaml
  - Path to YAML configuration file
  - Alternative to environment variables
Usage Examples
Basic HTTP Server with OpenAPI
export MOCKFORGE_HTTP_OPENAPI_SPEC=examples/openapi-demo.json
export MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
export MOCKFORGE_ADMIN_ENABLED=true
cargo run -p mockforge-cli -- serve --http-port 3000 --admin-port 9080
Full WebSocket Support
export MOCKFORGE_WS_REPLAY_FILE=examples/ws-demo.jsonl
export MOCKFORGE_WS_PORT=3001
export MOCKFORGE_HTTP_OPENAPI_SPEC=examples/openapi-demo.json
export MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
cargo run -p mockforge-cli -- serve --admin
Development Setup
export MOCKFORGE_LOG_LEVEL=debug
export MOCKFORGE_LATENCY_ENABLED=false
export MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
export MOCKFORGE_ADMIN_ENABLED=true
export MOCKFORGE_HTTP_OPENAPI_SPEC=examples/openapi-demo.json
cargo run -p mockforge-cli -- serve
Production Setup
export MOCKFORGE_LOG_LEVEL=warn
export MOCKFORGE_LATENCY_ENABLED=true
export MOCKFORGE_FAILURES_ENABLED=false
export MOCKFORGE_REQUEST_VALIDATION=enforce
export MOCKFORGE_ADMIN_ENABLED=false
export MOCKFORGE_HTTP_OPENAPI_SPEC=path/to/production-spec.json
cargo run -p mockforge-cli -- serve --http-port 80
Environment Variable Priority
Environment variables override configuration file settings. CLI flags take precedence over both. The priority order is:
- CLI flags (highest priority)
- Environment variables
- Configuration file settings
- Default values (lowest priority)
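Expressed as code, resolution is a first-defined-wins lookup. An illustrative sketch, not MockForge internals:
// Illustrative precedence: CLI flag > env var > config file > default.
function resolveSetting<T>(cli: T | undefined, env: T | undefined, file: T | undefined, dflt: T): T {
  return cli ?? env ?? file ?? dflt;
}

// Example: HTTP port resolution when no --http-port flag is passed
const port = resolveSetting(
  undefined,                                            // CLI flag
  Number(process.env.MOCKFORGE_HTTP_PORT) || undefined, // environment variable
  3000,                                                 // value from mockforge.yaml
  3000,                                                 // built-in default
);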
Security Considerations
- Be careful with MOCKFORGE_ADMIN_ENABLED=true in production
- Consider setting restrictive host bindings (127.0.0.1) for internal use
- Use MOCKFORGE_FAKE_TOKENS=false for deterministic testing
- Review MOCKFORGE_CORS_ENABLED settings for cross-origin requests
Troubleshooting
Common Issues
- Environment variables not taking effect
  - Check variable names for typos
  - Ensure variables are exported before running the command
  - Use env | grep MOCKFORGE to verify variables are set
- Port conflicts
  - Use different ports via MOCKFORGE_HTTP_PORT, MOCKFORGE_WS_PORT, etc.
  - Check what processes are using ports with netstat -tlnp
- OpenAPI spec not loading
  - Verify the file path in MOCKFORGE_HTTP_OPENAPI_SPEC
  - Ensure JSON/YAML syntax is valid
  - Check file permissions
- Template expansion not working
  - Set MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
  - Verify token syntax (e.g., {{uuid}} not {uuid})
For more detailed configuration options, see the Configuration Files documentation.
Configuration Files
MockForge supports comprehensive configuration through YAML files as an alternative to environment variables. This page documents the configuration file format, options, and usage.
Quick Start
Initialize a New Configuration
# Create a new project with template configuration
mockforge init my-project
# Or initialize in current directory
mockforge init .
This creates a mockforge.yaml file with sensible defaults and example configurations.
Validate Your Configuration
# Validate configuration file
mockforge config validate
# Validate specific file
mockforge config validate --config my-config.yaml
See the Configuration Validation Guide for detailed validation instructions.
Complete Configuration Template
For a fully documented configuration template with all available options, see: config.template.yaml
This template includes:
- Every configuration option with inline comments
- Default values and valid ranges
- Example configurations for common scenarios
- Links to detailed documentation
Configuration File Location
MockForge looks for configuration files in the following order:
- Path specified by the --config CLI flag
- Path specified by the MOCKFORGE_CONFIG_FILE environment variable
- Default location: ./mockforge.yaml or ./mockforge.yml
- No configuration file (uses defaults)
Basic Configuration Structure
# MockForge Configuration Example
# This file demonstrates all available configuration options
# HTTP server configuration
http:
port: 3000
host: "0.0.0.0"
openapi_spec: "examples/openapi-demo.json"
cors_enabled: true
request_timeout_secs: 30
request_validation: "enforce"
aggregate_validation_errors: true
validate_responses: false
response_template_expand: true
skip_admin_validation: true
# WebSocket server configuration
websocket:
port: 3001
host: "0.0.0.0"
replay_file: "examples/ws-demo.jsonl"
connection_timeout_secs: 300
# gRPC server configuration
grpc:
port: 50051
host: "0.0.0.0"
# Admin UI configuration
admin:
enabled: true
port: 9080
host: "127.0.0.1"
mount_path: null
api_enabled: true
# Core MockForge configuration
core:
latency_enabled: true
failures_enabled: false
# Logging configuration
logging:
level: "info"
json_format: false
file_path: null
max_file_size_mb: 10
max_files: 5
# Data generation configuration
data:
default_rows: 100
default_format: "json"
locale: "en"
HTTP Server Configuration
Basic Settings
http:
port: 3000 # Server port
host: "0.0.0.0" # Bind address (0.0.0.0 for all interfaces)
cors_enabled: true # Enable CORS headers
request_timeout_secs: 30 # Request timeout in seconds
OpenAPI Integration
http:
openapi_spec: "path/to/spec.json" # OpenAPI spec file for HTTP server
# Alternative: use URL
openapi_spec: "https://example.com/api-spec.yaml"
Validation and Response Handling
http:
request_validation: "enforce" # off|warn|enforce
aggregate_validation_errors: true # Combine multiple errors
validate_responses: false # Validate generated responses
response_template_expand: true # Enable {{uuid}}, {{now}} etc.
skip_admin_validation: true # Skip validation for admin endpoints
Validation Overrides
http:
validation_overrides:
"POST /users/{id}": "warn" # Override validation level per endpoint
"GET /internal/health": "off" # Skip validation for specific endpoints
WebSocket Server Configuration
websocket:
port: 3001 # Server port
host: "0.0.0.0" # Bind address
replay_file: "path/to/replay.jsonl" # WebSocket replay file
connection_timeout_secs: 300 # Connection timeout in seconds
gRPC Server Configuration
grpc:
port: 50051 # Server port
host: "0.0.0.0" # Bind address
proto_dir: null # Directory containing .proto files
tls: null # TLS configuration (optional)
Admin UI Configuration
Standalone Mode (Default)
admin:
enabled: true
port: 9080
host: "127.0.0.1"
api_enabled: true
Embedded Mode
admin:
enabled: true
mount_path: "/admin" # Mount under HTTP server
api_enabled: true # Enable API endpoints
# Note: port/host ignored when mount_path is set
Core Configuration
Latency Simulation
core:
latency_enabled: true
default_latency:
base_ms: 50
jitter_ms: 20
distribution: "fixed" # fixed, normal, or pareto
# For normal distribution
# std_dev_ms: 10.0
# For pareto distribution
# pareto_shape: 2.0
min_ms: 10 # Minimum latency
max_ms: 5000 # Maximum latency (optional)
# Per-operation overrides
tag_overrides:
auth: 100
payments: 200
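For intuition, here is a hedged sketch of how a latency model like this could sample a delay; the formulas are standard for these distributions, but MockForge's exact implementation may differ:
// Illustrative sampling for fixed, normal, and Pareto latency models.
interface LatencyConfig {
  baseMs: number; jitterMs: number;
  distribution: 'fixed' | 'normal' | 'pareto';
  stdDevMs?: number; paretoShape?: number;
  minMs: number; maxMs?: number;
}

function sampleLatency(c: LatencyConfig): number {
  let ms = c.baseMs;
  if (c.distribution === 'fixed') {
    ms += Math.random() * c.jitterMs; // base plus uniform jitter
  } else if (c.distribution === 'normal') {
    // Box-Muller transform: normal noise around the base latency
    const u1 = Math.random(), u2 = Math.random();
    const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
    ms = c.baseMs + z * (c.stdDevMs ?? 10);
  } else {
    // Pareto: heavy tail produces occasional very slow responses
    const shape = c.paretoShape ?? 2.0;
    ms = c.baseMs * Math.pow(1 - Math.random(), -1 / shape);
  }
  ms = Math.max(ms, c.minMs);
  return c.maxMs !== undefined ? Math.min(ms, c.maxMs) : ms;
}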
Failure Injection
core:
failures_enabled: true
failure_config:
global_error_rate: 0.05 # 5% global error rate
# Default status codes for failures
default_status_codes: [500, 502, 503, 504]
# Per-tag error rates and status codes
tag_configs:
auth:
error_rate: 0.1 # 10% error rate for auth operations
status_codes: [401, 403]
error_message: "Authentication failed"
payments:
error_rate: 0.02 # 2% error rate for payments
status_codes: [402, 503]
error_message: "Payment processing failed"
# Tag filtering
include_tags: [] # Empty means all tags included
exclude_tags: ["health", "metrics"] # Exclude these tags
Proxy Configuration
core:
proxy:
upstream_url: "http://api.example.com"
timeout_seconds: 30
Logging Configuration
logging:
level: "info" # debug|info|warn|error
json_format: false # Use JSON format for logs
file_path: "logs/mockforge.log" # Optional log file
max_file_size_mb: 10 # Rotate when file reaches this size
max_files: 5 # Keep this many rotated log files
Data Generation Configuration
data:
default_rows: 100 # Default number of rows to generate
default_format: "json" # Default output format
locale: "en" # Locale for generated data
# Custom faker templates
templates:
custom_user:
name: "{{faker.name}}"
email: "{{faker.email}}"
department: "{{faker.word}}"
# RAG (Retrieval-Augmented Generation) configuration
rag:
enabled: false
api_endpoint: null
api_key: null
model: null
context_window: 4000
Advanced Configuration
Request/Response Overrides
# YAML patch overrides for requests/responses
overrides:
- targets: ["operation:getUser"] # Target specific operations
patch:
- op: add
path: /metadata/requestId
value: "{{uuid}}"
- op: replace
path: /user/createdAt
value: "{{now}}"
- op: add
path: /user/score
value: "{{rand.float}}"
- targets: ["tag:Payments"] # Target by tags
patch:
- op: replace
path: /payment/status
value: "FAILED"
Latency Profiles
# External latency profiles file
latency_profiles: "config/latency.yaml"
# Example latency configuration:
# operation:getUser:
# fixed_ms: 120
# jitter_ms: 80
# fail_p: 0.0
#
# tag:Payments:
# fixed_ms: 200
# jitter_ms: 300
# fail_p: 0.05
# fail_status: 503
Configuration Examples
Development Configuration
# Development setup with debugging and fast responses
http:
port: 3000
response_template_expand: true
request_validation: "warn"
admin:
enabled: true
port: 9080
core:
latency_enabled: false # Disable latency for faster development
logging:
level: "debug"
json_format: false
Testing Configuration
# Testing setup with deterministic responses
http:
port: 3000
response_template_expand: false # Disable random tokens for determinism
core:
latency_enabled: false
data:
rag:
enabled: false # Disable RAG for consistent test data
Production Configuration
# Production setup with monitoring and reliability
http:
port: 80
host: "0.0.0.0"
request_validation: "enforce"
cors_enabled: false
admin:
enabled: false # Disable admin UI in production
core:
latency_enabled: true
failures_enabled: false
logging:
level: "warn"
json_format: true
file_path: "/var/log/mockforge.log"
Configuration File Validation
MockForge validates configuration files at startup. Common issues:
- Invalid YAML syntax - Check indentation and quotes
- Missing required fields - Some fields like request_timeout_secs are required
- Invalid file paths - Ensure OpenAPI spec and replay files exist
- Port conflicts - Choose unique ports for each service
Configuration Precedence
Configuration values are resolved in this priority order:
- CLI flags (highest priority)
- Environment variables
- Configuration file
- Default values (lowest priority)
This allows you to override specific values without changing your configuration file.
Hot Reloading
Configuration changes require a server restart to take effect. For development, you can use:
# Watch for changes and auto-restart
cargo watch -x "run -p mockforge-cli -- serve --config config.yaml"
For more information on environment variables, see the Environment Variables documentation.
Advanced Options
MockForge provides extensive advanced configuration options for enterprise-grade API mocking, testing, and chaos engineering scenarios. This guide covers sophisticated features like traffic shaping, time travel, ML-based anomaly detection, multi-tenancy, and advanced orchestration.
Traffic Shaping and Bandwidth Control
MockForge supports advanced traffic shaping beyond simple latency simulation, including bandwidth throttling and burst control.
Bandwidth Throttling
Configure bandwidth limits to simulate network constraints:
# mockforge.yaml
traffic_shaping:
bandwidth:
enabled: true
max_bytes_per_sec: 1024000 # 1MB/s
burst_capacity_bytes: 1048576 # 1MB burst allowance
# Tag-based overrides for specific routes
tag_overrides:
premium: 5242880 # 5MB/s for premium routes
admin: 0 # Unlimited for admin routes
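A configuration like this is typically implemented as a token bucket: tokens refill at max_bytes_per_sec and the bucket is capped at burst_capacity_bytes. A sketch under that assumption, illustrative rather than MockForge's actual code:
// Illustrative token bucket: refills at maxBytesPerSec, capped at burstCapacity.
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();
  constructor(private maxBytesPerSec: number, private burstCapacity: number) {
    this.tokens = burstCapacity;
  }
  /** Returns true if `bytes` may be sent now, consuming tokens. */
  tryConsume(bytes: number): boolean {
    const now = Date.now();
    const refill = ((now - this.lastRefill) / 1000) * this.maxBytesPerSec;
    this.tokens = Math.min(this.burstCapacity, this.tokens + refill);
    this.lastRefill = now;
    if (this.tokens >= bytes) { this.tokens -= bytes; return true; }
    return false;
  }
}

const bucket = new TokenBucket(1_024_000, 1_048_576); // mirrors the config above
if (!bucket.tryConsume(64 * 1024)) {
  // throttle: delay this chunk until enough tokens refill
}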
Packet Loss Simulation
Simulate network unreliability with configurable packet loss:
traffic_shaping:
packet_loss:
enabled: true
loss_rate: 0.05 # 5% packet loss
burst_loss_probability: 0.1 # 10% chance of burst loss
burst_length: 5 # 5 consecutive packets lost in burst
# Route-specific overrides
route_overrides:
"/api/health": 0.0 # No loss for health checks
"/api/slow/*": 0.2 # 20% loss for slow endpoints
Environment Variables
# Bandwidth throttling
MOCKFORGE_TRAFFIC_SHAPING_BANDWIDTH_ENABLED=true
MOCKFORGE_TRAFFIC_SHAPING_BANDWIDTH_MAX_BYTES_PER_SEC=1024000
MOCKFORGE_TRAFFIC_SHAPING_BANDWIDTH_BURST_CAPACITY=1048576
# Packet loss
MOCKFORGE_TRAFFIC_SHAPING_PACKET_LOSS_ENABLED=true
MOCKFORGE_TRAFFIC_SHAPING_PACKET_LOSS_RATE=0.05
Time Travel and Temporal Testing
MockForge’s time travel capabilities allow testing time-dependent behavior without waiting for real time to pass.
Virtual Clock Configuration
# mockforge.yaml
time_travel:
enabled: true
initial_time: "2024-01-01T00:00:00Z"
scale_factor: 1.0 # 1.0 = real time, 2.0 = 2x speed
# Scheduled time jumps
schedule:
- at: "2024-01-01T01:00:00Z"
jump_to: "2024-01-01T06:00:00Z"
- at: "2024-01-01T12:00:00Z"
advance_by: "1d"
Time Travel API
Control time programmatically through the Admin UI or REST API:
# Set virtual time
curl -X POST http://localhost:9080/api/v2/time/set \
-H "Content-Type: application/json" \
-d '{"time": "2024-01-01T12:00:00Z"}'
# Advance time
curl -X POST http://localhost:9080/api/v2/time/advance \
-H "Content-Type: application/json" \
-d '{"duration": "1h"}'
# Enable/disable time travel
curl -X POST http://localhost:9080/api/v2/time/enable \
-H "Content-Type: application/json" \
-d '{"enabled": true}'
Testing Time-Dependent Logic
# Example: Testing token expiry
routes:
- path: /api/auth/validate
method: GET
response:
status: 200
condition: "time_travel.now < time_travel.parse('2024-01-01T02:00:00Z')"
body: |
{
"valid": true,
"expires_at": "2024-01-01T02:00:00Z"
}
- path: /api/auth/validate
method: GET
response:
status: 401
condition: "time_travel.now >= time_travel.parse('2024-01-01T02:00:00Z')"
body: |
{
"error": "Token expired",
"expired_at": "2024-01-01T02:00:00Z"
}
ML-Based Anomaly Detection
MockForge integrates machine learning for intelligent anomaly detection in system behavior.
Anomaly Detection Configuration
# mockforge.yaml
anomaly_detection:
enabled: true
# Detection parameters
config:
std_dev_threshold: 3.0 # Standard deviations for anomaly
min_baseline_samples: 30 # Minimum samples for baseline
moving_average_window: 10 # Smoothing window
enable_seasonal: true # Account for seasonal patterns
seasonal_period: 24 # Hours in daily cycle
sensitivity: 0.7 # Detection sensitivity (0.0-1.0)
# Metrics to monitor
monitored_metrics:
- name: response_time_ms
baseline_samples: 100
alert_on_anomaly: true
severity_threshold: high
- name: error_rate
baseline_samples: 50
alert_on_anomaly: true
severity_threshold: medium
- name: request_throughput
baseline_samples: 100
alert_on_anomaly: false
severity_threshold: high
# Collective anomaly detection
collective_detection:
enabled: true
metric_groups:
- name: api_health
metrics:
- response_time_ms
- error_rate
- request_throughput
min_affected_metrics: 2
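Concretely, std_dev_threshold: 3.0 flags a sample that deviates from the baseline mean by more than three standard deviations. A hedged sketch of that core check; MockForge's detector additionally applies smoothing and seasonal adjustment per the options above:
// Illustrative z-score check against a rolling baseline.
function isAnomaly(sample: number, baseline: number[],
                   stdDevThreshold = 3.0, minSamples = 30): boolean {
  if (baseline.length < minSamples) return false; // not enough data for a baseline
  const mean = baseline.reduce((a, b) => a + b, 0) / baseline.length;
  const variance = baseline.reduce((a, b) => a + (b - mean) ** 2, 0) / baseline.length;
  const stdDev = Math.sqrt(variance);
  if (stdDev === 0) return sample !== mean;
  return Math.abs(sample - mean) / stdDev > stdDevThreshold;
}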
Anomaly Response Actions
Configure automatic responses to detected anomalies:
anomaly_detection:
response_actions:
- trigger: high_severity_anomaly
action: circuit_breaker
duration: 5m
routes: ["/api/*"]
- trigger: collective_anomaly
action: failover
target: backup_service
routes: ["/api/critical/*"]
- trigger: performance_degradation
action: scale_up
threshold: 2.0 # 2x normal response time
Chaos Mesh Integration
Integrate with Chaos Mesh for Kubernetes-native chaos engineering.
Chaos Mesh Configuration
# mockforge.yaml
chaos_mesh:
enabled: true
api_url: https://kubernetes.default.svc
namespace: chaos-testing
# Default experiment settings
defaults:
mode: one # one, all, fixed, fixed-percent, random-max-percent
duration: 5m
# Pre-configured experiments
experiments:
- name: pod-kill-test
type: PodChaos
action: pod-kill
selector:
namespaces:
- production
label_selectors:
app: api-gateway
tier: backend
mode: one
duration: 30s
schedule: "*/5 * * * *" # Every 5 minutes
- name: network-latency-test
type: NetworkChaos
action: delay
selector:
namespaces:
- production
label_selectors:
app: database
delay:
latency: 100ms
jitter: 10ms
correlation: "50"
duration: 3m
- name: cpu-stress-test
type: StressChaos
selector:
namespaces:
- staging
label_selectors:
app: worker-service
stressors:
cpu_workers: 4
cpu_load: 80
duration: 10m
Chaos Experiment Orchestration
# Orchestrate chaos experiments with MockForge scenarios
orchestration:
name: chaos-testing-workflow
description: Comprehensive chaos testing with monitoring
steps:
- name: baseline_measurement
type: metrics_collection
duration: 5m
- name: pod_failure_injection
type: chaos_mesh
experiment: pod-kill-test
wait_for_completion: true
- name: anomaly_detection
type: ml_detection
metrics: [response_time_ms, error_rate]
alert_threshold: high
- name: network_chaos
type: chaos_mesh
experiment: network-latency-test
- name: recovery_verification
type: health_check
endpoints: ["/api/health", "/api/status"]
timeout: 30s
Multi-Tenancy Configuration
MockForge supports multi-tenant deployments with configurable plans and quotas.
Tenant Plans Configuration
# mockforge.yaml
multi_tenancy:
enabled: true
# Define tenant plans
plans:
free:
quotas:
max_scenarios: 5
max_concurrent_executions: 1
max_orchestrations: 3
max_templates: 5
max_requests_per_minute: 50
max_storage_mb: 50
max_users: 1
max_experiment_duration_secs: 600
permissions:
can_create_scenarios: true
can_execute_scenarios: true
can_view_observability: false
can_manage_resilience: false
can_use_advanced_features: false
can_integrate_external: false
can_use_ml_features: false
can_manage_users: false
professional:
quotas:
max_scenarios: 100
max_concurrent_executions: 20
max_orchestrations: 50
max_templates: 100
max_requests_per_minute: 1000
max_storage_mb: 5000
max_users: 25
max_experiment_duration_secs: 14400
permissions:
can_create_scenarios: true
can_execute_scenarios: true
can_view_observability: true
can_manage_resilience: true
can_use_advanced_features: true
can_integrate_external: true
can_use_ml_features: true
can_manage_users: true
# Default tenants
tenants:
- name: acme-corp
plan: professional
enabled: true
metadata:
organization: Acme Corporation
contact: admin@acme.com
environment: production
Tenant Isolation
Configure tenant-specific resources and isolation:
multi_tenancy:
isolation:
# Database isolation
database:
separate_schemas: true
schema_prefix: "tenant_"
# File system isolation
filesystem:
tenant_directories: true
shared_resources: ["global-templates"]
# Network isolation
network:
tenant_subdomains: true
shared_ports: [80, 443]
Plugin System Configuration
Advanced plugin configuration for extending MockForge functionality.
Plugin Registry Configuration
# mockforge.yaml
plugins:
enabled: true
# Plugin registry settings
registry:
auto_discover: true
plugin_dirs:
- /etc/mockforge/plugins
- ~/.mockforge/plugins
- ./custom-plugins
# Built-in plugins
builtin:
- id: custom-fault-injector
enabled: true
config:
fault_probability: 0.1
default_timeout_ms: 5000
- id: metrics-collector
enabled: true
config:
export_interval_secs: 60
buffer_size: 1000
# Custom plugins
custom:
- id: database-fault-injector
enabled: true
path: /etc/mockforge/plugins/database_fault.so
config:
connection_timeout_ms: 5000
query_timeout_ms: 30000
fault_types:
- connection_timeout
- query_error
- slow_query
- deadlock
- id: prometheus-exporter
enabled: true
path: /etc/mockforge/plugins/prometheus.so
config:
export_port: 9090
metrics_path: /metrics
include_labels:
- tenant_id
- scenario_id
- experiment_type
# Plugin hooks
hooks:
- type: logging
enabled: true
config:
log_level: info
include_context: true
- type: metrics
enabled: true
config:
track_execution_time: true
track_success_rate: true
- type: rate_limiting
enabled: true
config:
max_executions_per_minute: 100
burst_size: 20
Plugin Security
Configure plugin execution security:
plugins:
security:
# Sandbox configuration
sandbox:
enabled: true
memory_limit_mb: 100
cpu_limit_percent: 50
network_access: deny
filesystem_access: restricted
# Plugin signing
signing:
enabled: true
trusted_keys:
- "mockforge-official"
- "enterprise-customer-key"
# Resource limits
limits:
max_plugins_per_tenant: 10
max_plugin_memory_mb: 50
max_plugin_timeout_secs: 30
Advanced Orchestration
Complex scenario orchestration with conditional logic and dependencies.
Orchestration Configuration
# mockforge.yaml
orchestration:
name: advanced-chaos-scenario
description: Comprehensive chaos test with ML detection and multi-tenancy
# Tenant context
tenant_id: production-tenant
# Enable advanced features
features:
anomaly_detection: true
chaos_mesh_integration: true
plugin_execution: true
time_travel: true
# Complex step orchestration
steps:
# Step 1: Baseline measurement
- name: collect_baseline
type: custom
plugin: metrics-collector
config:
duration: 5m
metrics:
- response_time_ms
- error_rate
- request_throughput
# Step 2: Time travel setup
- name: setup_time_travel
type: time_travel
config:
enabled: true
initial_time: "2024-01-01T00:00:00Z"
# Step 3: Chaos Mesh pod kill
- name: pod_chaos
type: chaos_mesh
experiment: pod-kill-test
wait_for_completion: true
depends_on: ["collect_baseline"]
# Step 4: Monitor for anomalies
- name: detect_anomalies
type: ml_detection
metrics:
- response_time_ms
- error_rate
alert_threshold: high
depends_on: ["pod_chaos"]
# Step 5: Custom fault injection
- name: database_fault
type: plugin
plugin: database-fault-injector
config:
fault_type: slow_query
latency_ms: 1000
duration: 2m
depends_on: ["detect_anomalies"]
# Step 6: Network chaos
- name: network_latency
type: chaos_mesh
experiment: network-latency-test
depends_on: ["database_fault"]
# Step 7: Final analysis
- name: analyze_results
type: custom
plugin: prometheus-exporter
config:
export_metrics: true
generate_report: true
depends_on: ["network_latency"]
# Conditional execution
conditions:
- name: high_load_detected
expression: "metrics.request_throughput > 1000"
actions:
- skip_step: "network_latency"
- enable_step: "load_shedding"
- name: anomaly_critical
expression: "anomaly.severity == 'critical'"
actions:
- abort_orchestration: true
- send_alert: "critical_anomaly"
# Assertions and validations
assertions:
- metric: response_time_ms
operator: less_than
value: 1000
severity: high
- metric: error_rate
operator: less_than
value: 0.05
severity: critical
- metric: anomaly_count
operator: equals
value: 0
severity: medium
# Cleanup configuration
cleanup:
- delete_chaos_mesh_experiments: true
- export_metrics: true
- send_notifications: true
- reset_time_travel: true
Observability Integration
Advanced observability with Prometheus, OpenTelemetry, and alerting.
Prometheus Integration
# mockforge.yaml
observability:
prometheus:
enabled: true
port: 9090
path: /metrics
# Custom metrics
custom_metrics:
- name: mockforge_scenario_duration
type: histogram
description: "Time spent executing scenarios"
labels: ["scenario_name", "tenant_id"]
- name: mockforge_anomaly_detected
type: counter
description: "Number of anomalies detected"
labels: ["severity", "metric_name"]
opentelemetry:
enabled: true
endpoint: http://otel-collector:4317
# Tracing configuration
tracing:
service_name: mockforge
service_version: "1.0.0"
sample_rate: 0.1
# Metrics configuration
metrics:
export_interval: 30s
resource_attributes:
service.name: mockforge
service.version: "1.0.0"
Alerting Configuration
observability:
alerts:
- name: anomaly_detected
condition: "anomaly.severity >= 'high'"
channels:
- slack
- email
- webhook
cooldown: 5m
- name: quota_exceeded
condition: "tenant.usage >= tenant.quota * 0.9"
channels:
- email
cooldown: 1h
- name: service_degradation
condition: "metrics.response_time_p95 > 2000"
channels:
- slack
- pager_duty
cooldown: 10m
# Alert channels configuration
channels:
slack:
webhook_url: "${SLACK_WEBHOOK_URL}"
channel: "#alerts"
username: "MockForge Alert"
email:
smtp_server: "smtp.company.com"
smtp_port: 587
username: "${SMTP_USERNAME}"
password: "${SMTP_PASSWORD}"
from: "alerts@mockforge.company.com"
to: ["devops@company.com", "engineering@company.com"]
webhook:
url: "https://alert-manager.company.com/webhook"
headers:
Authorization: "Bearer ${WEBHOOK_TOKEN}"
method: POST
Security and Encryption
Advanced security features for enterprise deployments.
Encryption Configuration
# mockforge.yaml
security:
encryption:
enabled: true
# Key management
keys:
default:
algorithm: AES-256-GCM
key_rotation_days: 30
sensitive:
algorithm: AES-256-GCM
hsm_integration: true
# Data encryption
data_encryption:
fixtures: true
logs: true
configuration: false
# TLS configuration
tls:
enabled: true
certificate_file: /etc/ssl/mockforge.crt
private_key_file: /etc/ssl/mockforge.key
client_auth: optional
Authentication and Authorization
security:
auth:
# JWT configuration
jwt:
enabled: true
secret: "${JWT_SECRET}"
issuer: "mockforge"
audience: "mockforge-users"
algorithms: ["HS256", "RS256"]
# OAuth2 integration
oauth2:
enabled: true
provider: keycloak
client_id: "${OAUTH2_CLIENT_ID}"
client_secret: "${OAUTH2_CLIENT_SECRET}"
token_url: "https://auth.company.com/token"
userinfo_url: "https://auth.company.com/userinfo"
# Role-based access control
rbac:
enabled: true
roles:
admin:
permissions:
- "scenarios:*"
- "tenants:*"
- "system:*"
developer:
permissions:
- "scenarios:read"
- "scenarios:execute"
- "fixtures:*"
viewer:
permissions:
- "scenarios:read"
- "fixtures:read"
- "metrics:read"
Performance Tuning
Advanced performance configuration for high-throughput scenarios.
Resource Limits
# mockforge.yaml
performance:
# Thread pool configuration
thread_pool:
http_workers: 16
background_workers: 4
max_blocking_threads: 512
# Memory management
memory:
max_heap_size_mb: 2048
gc_threshold_mb: 1024
cache_size_mb: 512
# Connection pooling
connections:
max_http_connections: 1000
connection_timeout_secs: 30
keep_alive_secs: 300
# Request processing
requests:
max_concurrent_requests: 10000
request_timeout_secs: 60
buffer_size_kb: 64
Caching Configuration
performance:
caching:
# Response caching
responses:
enabled: true
max_size_mb: 100
ttl_secs: 300
compression: true
# Template caching
templates:
enabled: true
max_entries: 1000
ttl_secs: 3600
# Plugin caching
plugins:
enabled: true
max_instances: 10
preload: ["metrics-collector", "template-renderer"]
Monitoring and Profiling
performance:
monitoring:
# Performance metrics
metrics:
enabled: true
interval_secs: 30
export_format: prometheus
# Profiling
profiling:
enabled: true
sample_rate: 1000 # 1000 Hz
max_stack_depth: 64
# Health checks
health_checks:
enabled: true
interval_secs: 60
failure_threshold: 3
Environment Variables
Advanced configuration through environment variables:
# Traffic shaping
MOCKFORGE_TRAFFIC_SHAPING_ENABLED=true
MOCKFORGE_BANDWIDTH_MAX_BYTES_PER_SEC=1024000
MOCKFORGE_PACKET_LOSS_RATE=0.05
# Time travel
MOCKFORGE_TIME_TRAVEL_ENABLED=true
MOCKFORGE_VIRTUAL_TIME_SCALE=1.0
# Anomaly detection
MOCKFORGE_ANOMALY_DETECTION_ENABLED=true
MOCKFORGE_ANOMALY_SENSITIVITY=0.7
# Chaos Mesh
MOCKFORGE_CHAOS_MESH_ENABLED=true
MOCKFORGE_CHAOS_MESH_NAMESPACE=chaos-testing
# Multi-tenancy
MOCKFORGE_MULTI_TENANCY_ENABLED=true
MOCKFORGE_DEFAULT_TENANT_PLAN=professional
# Plugins
MOCKFORGE_PLUGINS_ENABLED=true
MOCKFORGE_PLUGIN_AUTO_DISCOVER=true
# Observability
MOCKFORGE_PROMETHEUS_ENABLED=true
MOCKFORGE_OPENTELEMETRY_ENABLED=true
# Security
MOCKFORGE_ENCRYPTION_ENABLED=true
MOCKFORGE_JWT_ENABLED=true
MOCKFORGE_TLS_ENABLED=true
# Performance
MOCKFORGE_MAX_CONCURRENT_REQUESTS=10000
MOCKFORGE_CACHE_ENABLED=true
MOCKFORGE_PROFILING_ENABLED=true
Best Practices
Configuration Management
- Version Control: Keep all configuration files in version control
- Environment Separation: Use different configurations for dev/staging/prod
- Secrets Management: Never commit secrets to version control
- Validation: Always validate configurations before deployment
Security
- Principle of Least Privilege: Grant minimal required permissions
- Network Security: Use firewalls and network policies
- Audit Logging: Enable comprehensive audit logging
- Regular Updates: Keep MockForge and dependencies updated
Performance
- Resource Monitoring: Monitor resource usage continuously
- Load Testing: Test configurations under load
- Caching Strategy: Configure appropriate caching for your use case
- Scalability Planning: Plan for growth and scale accordingly
Troubleshooting
- Debug Logging: Enable debug logging for troubleshooting
- Metrics Collection: Use observability tools for monitoring
- Configuration Validation: Validate configurations regularly
- Incremental Changes: Make configuration changes incrementally
This comprehensive guide covers MockForge’s advanced configuration options for enterprise-grade API mocking and chaos engineering scenarios.
Building from Source
This guide covers building MockForge from source code, including prerequisites, build processes, and troubleshooting common build issues.
Prerequisites
Before building MockForge, ensure you have the required development tools installed.
System Requirements
- Rust: Version 1.70.0 or later
- Cargo: Included with Rust
- Git: For cloning the repository
- C/C++ Compiler: For native dependencies
Platform-Specific Requirements
Linux (Ubuntu/Debian)
# Install build essentials
sudo apt update
sudo apt install build-essential pkg-config libssl-dev
# Install Rust (if not already installed)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
macOS
# Install Xcode command line tools
xcode-select --install
# Install Homebrew (optional, for additional tools)
# /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
Windows
# Install Visual Studio Build Tools
# Download from: https://visualstudio.microsoft.com/visual-cpp-build-tools/
# Install Rust
# Download from: https://rustup.rs/
# Or use winget: winget install --id Rustlang.Rustup
Rust Setup Verification
# Verify Rust installation
rustc --version
cargo --version
# Update to latest stable
rustup update stable
Cloning the Repository
# Clone the repository
git clone https://github.com/SaaSy-Solutions/mockforge.git
cd mockforge
# Initialize submodules (if any)
git submodule update --init --recursive
Build Process
Basic Build
# Build all crates in debug mode (default)
cargo build
# Build in release mode for production
cargo build --release
# Build specific crate
cargo build -p mockforge-cli
Build Outputs
After building, binaries are available in:
# Debug builds
target/debug/mockforge-cli
# Release builds
target/release/mockforge-cli
Build Features
MockForge supports conditional compilation features:
# Build with all features enabled
cargo build --all-features
# Build with specific features
cargo build --features "grpc,websocket"
# List available features
cargo metadata --format-version 1 | jq '.packages[] | select(.name == "mockforge-cli") | .features'
Development Workflow
Development Builds
# Quick development builds
cargo build
# Run tests during development
cargo test
# Run specific tests
cargo test --package mockforge-core --lib
Watch Mode Development
# Install cargo-watch for automatic rebuilds
cargo install cargo-watch
# Watch for changes and rebuild
cargo watch -x build
# Watch and run tests
cargo watch -x test
# Watch and run specific binary
cargo watch -x "run --bin mockforge-cli -- --help"
IDE Setup
VS Code
Install recommended extensions:
- rust-lang.rust-analyzer
- ms-vscode.vscode-json
- redhat.vscode-yaml
IntelliJ/CLion
Install Rust plugin through marketplace.
Debugging
# Build with debug symbols
cargo build
# Run with debugger
rust-gdb target/debug/mockforge-cli
# Or use lldb on macOS
rust-lldb target/debug/mockforge-cli
Advanced Build Options
Cross-Compilation
# Install cross-compilation targets
rustup target add x86_64-unknown-linux-musl
rustup target add aarch64-unknown-linux-gnu
# Build for different architectures
cargo build --target x86_64-unknown-linux-musl
cargo build --target aarch64-unknown-linux-gnu
Custom Linker
# Use mold linker for faster linking (Linux)
sudo apt install mold
export RUSTFLAGS="-C link-arg=-fuse-ld=mold"
cargo build
Build Caching
# Use sccache for faster rebuilds
cargo install sccache
export RUSTC_WRAPPER=sccache
cargo build
Testing
Running Tests
# Run all tests
cargo test
# Run tests with output
cargo test -- --nocapture
# Run specific test
cargo test test_name
# Run tests for specific package
cargo test -p mockforge-core
# Run integration tests
cargo test --test integration
# Run with release optimizations
cargo test --release
Test Coverage
# Install cargo-tarpaulin
cargo install cargo-tarpaulin
# Generate coverage report
cargo tarpaulin --out Html
# Open coverage report
open tarpaulin-report.html
Benchmarking
# Run benchmarks
cargo bench
# Run specific benchmark
cargo bench benchmark_name
Code Quality
Linting
# Run clippy lints
cargo clippy
# Run with pedantic mode
cargo clippy -- -W clippy::pedantic
# Auto-fix some issues
cargo clippy --fix
Formatting
# Check code formatting
cargo fmt --check
# Auto-format code
cargo fmt
Security Auditing
# Install cargo-audit
cargo install cargo-audit
# Audit dependencies for security vulnerabilities
cargo audit
Documentation
Building Documentation
# Build API documentation
cargo doc
# Open documentation in browser
cargo doc --open
# Build documentation with private items
cargo doc --document-private-items
# Build for specific package
cargo doc -p mockforge-core
Building mdBook
# Install mdbook
cargo install mdbook
# Build the documentation
mdbook build
# Serve documentation locally
mdbook serve
Packaging and Distribution
Creating Releases
# Create a release build
cargo build --release
# Strip debug symbols (Linux/macOS)
strip target/release/mockforge-cli
# Create distribution archive
tar -czf mockforge-v0.1.0-x86_64-linux.tar.gz \
-C target/release mockforge-cli
# Create Debian package
cargo install cargo-deb
cargo deb
Docker Builds
# Build Docker image
docker build -t mockforge .
# Build with buildkit for faster builds
DOCKER_BUILDKIT=1 docker build -t mockforge .
# Multi-stage build for smaller images
docker build -f Dockerfile.multi-stage -t mockforge .
Troubleshooting Build Issues
Common Problems
Compilation Errors
Problem: error[E0432]: unresolved import
Solution: Check that dependencies are properly specified in Cargo.toml
# Update dependencies
cargo update
# Clean and rebuild
cargo clean
cargo build
Linker Errors
Problem: undefined reference to...
Solution: Install system dependencies
# Ubuntu/Debian
sudo apt install libssl-dev pkg-config
# macOS
brew install openssl pkg-config
Out of Memory
Problem: fatal error: Killed signal terminated program cc1
Solution: Increase available memory or reduce parallelism
# Reduce parallel jobs
cargo build --jobs 1
# Or cap build parallelism via an environment variable
export CARGO_BUILD_JOBS=2
Slow Builds
Solutions:
# Use incremental compilation
export CARGO_INCREMENTAL=1
# Use faster linker
export RUSTFLAGS="-C link-arg=-fuse-ld=mold"
# Use build cache
cargo install sccache
export RUSTC_WRAPPER=sccache
Platform-Specific Issues
Windows
# Install Windows SDK if missing
# Download from: https://developer.microsoft.com/en-us/windows/downloads/windows-sdk/
# Use different target for static linking
cargo build --target x86_64-pc-windows-msvc
macOS
# Install missing headers
open /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg
# Or reinstall command line tools
sudo rm -rf /Library/Developer/CommandLineTools
xcode-select --install
Linux
# Install additional development libraries
sudo apt install libclang-dev llvm-dev
# For cross-compilation
sudo apt install gcc-aarch64-linux-gnu
Network Issues
# Clear cargo cache
cargo clean
rm -rf ~/.cargo/registry/cache
rm -rf ~/.cargo/git/checkouts
# Use different registry
export CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse
Dependency Conflicts
# Update Cargo.lock
cargo update
# Resolve conflicts
cargo update -p package-name
# Use cargo-tree to visualize dependencies
cargo install cargo-tree
cargo tree
Performance Optimization
Release Builds
# Optimized release build
cargo build --release
# With Link-Time Optimization (LTO)
export RUSTFLAGS="-C opt-level=3 -C lto=fat -C codegen-units=1"
cargo build --release
Profile-Guided Optimization (PGO)
# Build with instrumentation
export RUSTFLAGS="-Cprofile-generate=/tmp/pgo-data"
cargo build --release
# Run instrumented binary with representative workload
./target/release/mockforge-cli serve --spec examples/openapi-demo.json &
sleep 10
curl -s http://localhost:3000/users > /dev/null
pkill mockforge-cli
# Build optimized version
export RUSTFLAGS="-Cprofile-use=/tmp/pgo-data"
cargo build --release
Contributing to the Build System
Adding New Dependencies
# Add to workspace Cargo.toml
[workspace.dependencies]
new-dependency = "1.0"
# Use in crate Cargo.toml
[dependencies]
new-dependency = { workspace = true }
Adding Build Scripts
// build.rs
fn main() {
    // Rerun the build script whenever proto files change
    println!("cargo:rerun-if-changed=proto/");
    // Generate code from the service definition
    tonic_build::compile_protos("proto/service.proto").unwrap();
}
Custom Build Profiles
# In Cargo.toml
[profile.release]
opt-level = 3
lto = true
codegen-units = 1
panic = "abort"
[profile.dev]
opt-level = 0
debug = true
overflow-checks = true
This comprehensive build guide ensures developers can successfully compile, test, and contribute to MockForge across different platforms and development environments.
Testing Guide
This guide covers MockForge’s comprehensive testing strategy, including unit tests, integration tests, end-to-end tests, and testing best practices.
Testing Overview
MockForge employs a multi-layered testing approach to ensure code quality and prevent regressions:
- Unit Tests: Individual functions and modules
- Integration Tests: Component interactions
- End-to-End Tests: Full system workflows
- Performance Tests: Load and performance validation
- Security Tests: Vulnerability and access control testing
Unit Testing
Running Unit Tests
# Run all unit tests
cargo test --lib
# Run tests for specific crate
cargo test -p mockforge-core
# Run specific test function
cargo test test_template_rendering
# Run tests matching pattern
cargo test template
# Run tests with output
cargo test -- --nocapture
Writing Unit Tests
Basic Test Structure
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_basic_functionality() {
        // Arrange
        let input = "test input";
        let expected = "expected output";

        // Act
        let result = process_input(input);

        // Assert
        assert_eq!(result, expected);
    }

    #[test]
    fn test_error_conditions() {
        // Test error cases
        let result = process_input("");
        assert!(result.is_err());
    }
}
Async Tests
#[cfg(test)]
mod async_tests {
    #[tokio::test]
    async fn test_async_operation() {
        let result = async_operation().await;
        assert!(result.is_ok());
    }

    #[tokio::test]
    async fn test_concurrent_operations() {
        let (result1, result2) = tokio::join!(
            async_operation(),
            another_async_operation()
        );
        assert!(result1.is_ok());
        assert!(result2.is_ok());
    }
}
Integration Testing
Component Integration Tests
#[cfg(test)]
mod integration_tests {
    use mockforge_core::config::MockForgeConfig;
    use mockforge_http::HttpServer;

    #[tokio::test]
    async fn test_http_server_integration() {
        // Start test server
        let config = test_config();
        let server = HttpServer::new(config);
        let addr = server.local_addr();
        tokio::spawn(async move {
            server.serve().await.unwrap();
        });

        // Wait for server to start
        tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;

        // Test HTTP request
        let client = reqwest::Client::new();
        let response = client
            .get(&format!("http://{}/health", addr))
            .send()
            .await
            .unwrap();
        assert_eq!(response.status(), 200);
    }
}
End-to-End Testing
Full System Tests
#[cfg(test)]
mod e2e_tests {
    use std::process::Command;
    use std::thread;
    use std::time::Duration;

    #[test]
    fn test_full_openapi_workflow() {
        // Start MockForge server
        let mut server = Command::new("cargo")
            .args(&[
                "run", "--bin", "mockforge-cli", "serve",
                "--spec", "examples/openapi-demo.json",
                "--http-port", "3000",
            ])
            .spawn()
            .unwrap();

        // Wait for server to start
        thread::sleep(Duration::from_secs(2));

        // Test API endpoints
        test_user_endpoints();
        test_product_endpoints();

        // Stop server
        server.kill().unwrap();
    }
}
Performance Testing
Load Testing
# Using hey for HTTP load testing
hey -n 1000 -c 10 http://localhost:3000/users
# Using wrk for more detailed benchmarking
wrk -t 4 -c 100 -d 30s http://localhost:3000/users
Benchmarking
// In benches/benchmark.rs
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn benchmark_template_rendering(c: &mut Criterion) {
    let engine = TemplateEngine::new();
    c.bench_function("template_render_simple", |b| {
        b.iter(|| {
            engine.render("Hello {{name}}", &Context::from_value("name", "World"))
        })
    });
}

criterion_group!(benches, benchmark_template_rendering);
criterion_main!(benches);
Run benchmarks:
cargo bench
Security Testing
Input Validation Tests
#[cfg(test)]
mod security_tests {
    #[test]
    fn test_sql_injection_prevention() {
        let input = "'; DROP TABLE users; --";
        let result = sanitize_input(input);
        // Ensure dangerous characters are escaped
        assert!(!result.contains("DROP"));
    }

    #[test]
    fn test_template_injection() {
        let engine = TemplateEngine::new();
        let malicious = "{{#exec}}rm -rf /{{/exec}}";
        // Should not execute dangerous commands
        let result = engine.render(malicious, &Context::new());
        assert!(!result.contains("exec"));
    }
}
Continuous Integration
GitHub Actions Testing
# .github/workflows/test.yml
name: Test
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions-rs/toolchain@v1
with:
toolchain: stable
override: true
- name: Cache dependencies
uses: actions/cache@v2
with:
path: |
~/.cargo/registry
~/.cargo/git
target
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- name: Run tests
run: cargo test --verbose
- name: Run clippy
run: cargo clippy -- -D warnings
- name: Check formatting
run: cargo fmt --check
- name: Run security audit
run: cargo audit
This comprehensive testing guide ensures MockForge maintains high code quality and prevents regressions across all components and integration points.
Architecture Overview
MockForge is a modular, Rust-based platform for mocking APIs across HTTP, WebSocket, and gRPC protocols. This document provides a comprehensive overview of the system architecture, design principles, and component interactions.
System Overview
MockForge enables frontend and integration development without live backends by providing realistic API mocking with configurable latency, failure injection, and dynamic response generation. The system is built as a modular workspace of Rust crates that share a core engine for request routing, validation, and data generation.
Key Design Principles
- Modularity: Separated concerns across focused crates
- Extensibility: Plugin architecture for custom functionality
- Performance: Async-first design with efficient resource usage
- Developer Experience: Comprehensive tooling and clear APIs
- Protocol Agnostic: Unified approach across different protocols
High-Level Architecture
graph TB
subgraph "User Interfaces"
CLI[CLI mockforge-cli]
UI[Admin UI v2]
end
subgraph "Core Engine"
Router[Route Registry]
Templates[Template Engine]
Validator[Schema Validator]
Latency[Latency Injector]
Failure[Failure Injector]
Logger[Request Logger]
Plugins[Plugin System]
end
subgraph "Protocol Handlers"
HTTP[HTTP Server<br/>axum]
WS[WebSocket Server<br/>tokio-ws]
GRPC[gRPC Server<br/>tonic]
end
subgraph "Data Layer"
DataGen[Data Generator<br/>faker + RAG]
Workspace[Workspace Manager]
Encryption[Encryption Engine]
end
CLI --> Router
UI --> Router
Router --> HTTP
Router --> WS
Router --> GRPC
HTTP --> Templates
WS --> Templates
GRPC --> Templates
Templates --> Validator
Validator --> Latency
Latency --> Failure
Failure --> Logger
Templates --> DataGen
Templates --> Plugins
Router --> Workspace
Workspace --> Encryption
style CLI fill:#e1f5ff
style UI fill:#e1f5ff
style Router fill:#ffe1e1
style Templates fill:#ffe1e1
style DataGen fill:#e1ffe1
Crate Structure
MockForge is organized as a Cargo workspace with the following crates:
mockforge/
crates/
mockforge-cli/ # Command-line interface
mockforge-core/ # Shared functionality
mockforge-http/ # HTTP REST API mocking
mockforge-ws/ # WebSocket connection mocking
mockforge-grpc/ # gRPC service mocking
mockforge-data/ # Synthetic data generation
mockforge-ui/ # Web-based admin interface
Crate Responsibilities
mockforge-core - Shared Core Engine
The foundation crate providing common functionality used across all protocols:
- Request Routing: Unified route registry and matching logic
- Validation Engine: OpenAPI and schema validation
- Template System: Handlebars-based dynamic content generation
- Latency Injection: Configurable response delays
- Failure Injection: Simulated error conditions
- Record/Replay: Request/response capture and replay
- Logging: Structured request/response logging
- Configuration: Unified configuration management
mockforge-http - HTTP REST API Mocking
HTTP-specific implementation built on axum:
- OpenAPI Integration: Automatic route generation from specifications
- Request Matching: Method, path, query, header, and body matching
- Response Generation: Schema-driven and template-based responses
- Middleware Support: Custom request/response processing
mockforge-ws - WebSocket Connection Mocking
Real-time communication mocking:
- Replay Mode: Scripted message sequences with timing control
- Interactive Mode: Dynamic responses based on client messages
- State Management: Connection-specific state tracking
- Template Support: Dynamic message content generation
mockforge-grpc - gRPC Service Mocking
Protocol buffer-based service mocking:
- Dynamic Proto Discovery: Automatic compilation of .proto files
- Service Reflection: Runtime service discovery and inspection
- Streaming Support: Unary, server, client, and bidirectional streaming
- Schema Validation: Message validation against proto definitions
mockforge-data - Synthetic Data Generation
Advanced data generation capabilities:
- Faker Integration: Realistic fake data generation
- RAG Enhancement: Retrieval-augmented generation for contextual data
- Schema-Driven Generation: Data conforming to JSON Schema/OpenAPI specs
- Template Helpers: Integration with core templating system
mockforge-cli - Command-Line Interface
User-facing command-line tool:
- Server Management: Start/stop mock servers
- Configuration: Load and validate configuration files
- Data Generation: Command-line data generation utilities
- Development Tools: Testing and debugging utilities
mockforge-ui - Admin Web Interface
Browser-based management interface:
- Real-time Monitoring: Live request/response viewing
- Configuration Management: Runtime configuration changes
- Fixture Management: Recorded interaction management
- Performance Metrics: Response times and error rates
Core Engine Architecture
Request Processing Pipeline
All requests follow a unified processing pipeline regardless of protocol:
- Request Reception: Protocol-specific server receives request
- Route Matching: Core routing engine matches request to handler
- Validation: Schema validation if enabled
- Template Processing: Dynamic content generation
- Latency Injection: Artificial delays if configured
- Failure Injection: Error simulation if enabled
- Response Generation: Handler generates response
- Logging: Request/response logging
- Response Delivery: Protocol-specific response sending
sequenceDiagram
participant Client
participant Server as Protocol Server<br/>(HTTP/WS/gRPC)
participant Router as Route Registry
participant Validator
participant Templates
participant Latency
participant Failure
participant Handler
participant Logger
Client->>Server: Incoming Request
Server->>Router: Match Route
Router->>Router: Find Handler
alt Route Found
Router->>Validator: Validate Request
alt Validation Enabled
Validator->>Validator: Check Schema
alt Valid
Validator->>Templates: Process Request
else Invalid
Validator-->>Server: Validation Error
Server-->>Client: 400 Bad Request
end
else Validation Disabled
Validator->>Templates: Process Request
end
Templates->>Templates: Render Template
Templates->>Handler: Generate Response
Handler->>Latency: Apply Delays
Latency->>Failure: Check Failure Rules
alt Should Fail
Failure-->>Server: Simulated Error
Server-->>Client: Error Response
else Success
Failure->>Logger: Log Request/Response
Logger-->>Server: Response Data
Server-->>Client: Success Response
end
else Route Not Found
Router-->>Server: No Match
Server-->>Client: 404 Not Found
end
Route Registry System
The core routing system provides unified route management:
pub struct RouteRegistry {
    routes: HashMap<RouteKey, Vec<RouteHandler>>,
    overrides: Overrides,
    validation_mode: ValidationMode,
}

impl RouteRegistry {
    pub fn register(&mut self, key: RouteKey, handler: RouteHandler);
    pub fn match_route(&self, request: &Request) -> Option<&RouteHandler>;
    pub fn apply_overrides(&mut self, overrides: &Overrides);
}
Template Engine
Handlebars-based templating with custom helpers:
pub struct TemplateEngine {
    registry: handlebars::Handlebars<'static>,
}

impl TemplateEngine {
    pub fn render(&self, template: &str, context: &Context) -> Result<String>;
    pub fn register_helper(&mut self, name: &str, helper: Box<dyn HelperDef>);
}
Built-in helpers include:
- uuid: Generate unique identifiers
- now: Current timestamp
- randInt: Random integers
- request: Access request data
- faker: Synthetic data generation
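Custom helpers can be added through the underlying Handlebars registry. The following standalone sketch uses the handlebars crate's closure-based registration; the upper helper is illustrative, not one of MockForge's built-ins:
use handlebars::{Handlebars, Helper, HelperResult, Context, RenderContext, Output};

fn main() {
    let mut hb = Handlebars::new();
    // Register a custom "upper" helper that uppercases its first argument
    hb.register_helper(
        "upper",
        Box::new(|h: &Helper, _: &Handlebars, _: &Context, _: &mut RenderContext, out: &mut dyn Output| -> HelperResult {
            let value = h.param(0).and_then(|p| p.value().as_str()).unwrap_or("");
            out.write(&value.to_uppercase())?;
            Ok(())
        }),
    );
    let rendered = hb
        .render_template("{{upper name}}", &serde_json::json!({ "name": "world" }))
        .unwrap();
    assert_eq!(rendered, "WORLD");
}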
Plugin System Architecture
MockForge uses a WebAssembly-based plugin system for extensibility:
graph TB
subgraph "Plugin Lifecycle"
Load[Load Plugin WASM]
Init[Initialize Plugin]
Register[Register Hooks]
Execute[Execute Plugin]
Cleanup[Cleanup Resources]
end
subgraph "Plugin Types"
Auth[Authentication<br/>JWT, OAuth2, etc.]
Response[Response Generators<br/>GraphQL, Custom Data]
DataSource[Data Sources<br/>CSV, Database, API]
Template[Template Extensions<br/>Custom Functions]
end
subgraph "Security Sandbox"
Isolate[WASM Isolation]
Limits[Resource Limits<br/>Memory, CPU, Time]
Perms[Permission System]
end
subgraph "Core Integration"
Loader[Plugin Loader]
Registry[Plugin Registry]
API[Plugin API]
end
Load --> Init
Init --> Register
Register --> Execute
Execute --> Cleanup
Auth --> Loader
Response --> Loader
DataSource --> Loader
Template --> Loader
Loader --> Registry
Registry --> API
API --> Isolate
Isolate --> Limits
Limits --> Perms
style Auth fill:#e1f5ff
style Response fill:#e1f5ff
style DataSource fill:#e1f5ff
style Template fill:#e1f5ff
style Isolate fill:#ffe1e1
style Limits fill:#ffe1e1
style Perms fill:#ffe1e1
Plugin Hook Points:
- Request Interceptors: Modify incoming requests
- Response Generators: Create custom response data
- Template Helpers: Add custom template functions
- Authentication Providers: Implement auth schemes
- Data Source Connectors: Connect to external data sources
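To make the response-generator hook point concrete, here is a purely hypothetical guest-side sketch; MockForge's real plugin ABI is WASM-based and defined in the mockforge-plugin crates, so the trait and names below are illustrative only:
// Hypothetical trait for a response-generator hook (not MockForge's actual API)
pub trait ResponseGenerator {
    /// Given the matched route and request body, produce a response body,
    /// or None to fall through to the default handler.
    fn generate(&self, route: &str, request_body: &[u8]) -> Option<Vec<u8>>;
}

struct StaticJson;

impl ResponseGenerator for StaticJson {
    fn generate(&self, route: &str, _request_body: &[u8]) -> Option<Vec<u8>> {
        if route == "/users" {
            Some(br#"[{"id": 1, "name": "Ada"}]"#.to_vec())
        } else {
            None
        }
    }
}

fn main() {
    let generator = StaticJson;
    assert!(generator.generate("/users", b"").is_some());
}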
Security Model:
- WASM sandboxing isolates plugin execution
- Resource limits prevent DoS attacks
- Permission system controls plugin capabilities
- Plugin signature verification (planned)
This architecture provides a solid foundation for API mocking while maintaining extensibility, performance, and developer experience. The modular design allows for independent evolution of each protocol implementation while sharing common infrastructure.
CLI Crate
The mockforge-cli crate provides the primary command-line interface for MockForge, serving as the main entry point for users to interact with the MockForge ecosystem. It orchestrates all MockForge services and provides comprehensive configuration and management capabilities.
Architecture Overview
graph TD
A[CLI Entry Point] --> B[Command Parser]
B --> C{Command Type}
C --> D[Serve Command]
C --> E[Plugin Commands]
C --> F[Workspace Commands]
C --> G[Data Commands]
C --> H[Other Commands]
D --> I[Server Orchestration]
I --> J[HTTP Server]
I --> K[WebSocket Server]
I --> L[gRPC Server]
I --> M[SMTP Server]
I --> N[Admin UI]
I --> O[Metrics Server]
Core Components
Command Structure
The CLI uses clap for argument parsing and command structure. The main Cli struct defines the top-level interface with global options and subcommands.
Main Commands
- serve: Start MockForge servers (HTTP, WebSocket, gRPC, SMTP)
- admin: Start standalone admin UI server
- sync: Bidirectional workspace synchronization daemon
- plugin: Plugin management (install, uninstall, update, list)
- workspace: Multi-tenant workspace management
- data: Synthetic data generation
- generate-tests: Test generation from recorded API interactions
- suggest: AI-powered API specification suggestions
- bench: Load testing against real services
- orchestrate: Chaos experiment orchestration
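A hedged sketch of what this surface looks like with clap's derive API follows; subcommand names mirror the list above, but the real definitions live in mockforge-cli and carry many more options:
use clap::{Parser, Subcommand};

#[derive(Parser)]
#[command(name = "mockforge")]
struct Cli {
    #[command(subcommand)]
    command: Commands,
}

#[derive(Subcommand)]
enum Commands {
    /// Start MockForge servers
    Serve {
        #[arg(long, default_value_t = 3000)]
        http_port: u16,
    },
    /// Start the standalone admin UI server
    Admin,
    /// Manage plugins
    Plugin,
    /// Generate synthetic data
    Data,
}

fn main() {
    let _cli = Cli::parse();
}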
Server Orchestration
The serve command is the most complex, supporting extensive configuration options:
Server Types
- HTTP Server: REST API mocking with OpenAPI support
- WebSocket Server: Real-time messaging simulation
- gRPC Server: Protocol buffer-based service mocking
- SMTP Server: Email service simulation
- Admin UI: Web-based management interface
- Metrics Server: Prometheus metrics endpoint
Configuration Layers
The CLI implements a three-tier configuration precedence system:
- CLI Arguments: Highest precedence, command-line flags
- Configuration File: YAML/JSON config file (optional)
- Environment Variables: Lowest precedence, environment overrides
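Conceptually, resolving a single setting looks like the sketch below (illustrative only; the real merge logic lives in build_server_config_from_cli(), described later on this page, and the 3000 fallback is an assumed default):
// Illustrative three-tier lookup: CLI flag beats config file beats environment
fn effective_http_port(cli: Option<u16>, file: Option<u16>, env: Option<u16>) -> u16 {
    cli.or(file).or(env).unwrap_or(3000)
}

fn main() {
    assert_eq!(effective_http_port(Some(8080), Some(4000), Some(5000)), 8080);
    assert_eq!(effective_http_port(None, None, Some(5000)), 5000);
}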
Advanced Features
- Chaos Engineering: Fault injection, latency simulation, network degradation
- Traffic Shaping: Bandwidth limiting, packet loss simulation
- Observability: OpenTelemetry tracing, Prometheus metrics, API flight recording
- AI Integration: RAG-powered intelligent mocking
- Multi-tenancy: Workspace isolation and management
Key Modules
main.rs
The main entry point that:
- Parses CLI arguments using clap
- Initializes logging and observability
- Routes commands to appropriate handlers
- Manages server lifecycle and graceful shutdown
plugin_commands.rs
Handles plugin ecosystem management:
- Plugin installation from various sources (URLs, Git repos, local paths)
- Plugin validation and security verification
- Cache management for downloaded plugins
- Registry integration (future feature)
workspace_commands.rs
Multi-tenant workspace management:
- CRUD operations for workspaces
- Workspace statistics and monitoring
- Enable/disable workspace functionality
- REST API integration with admin UI
Import Modules
- curl_import.rs: Convert curl commands to MockForge configurations
- postman_import.rs: Import Postman collections
- insomnia_import.rs: Import Insomnia workspaces
- import_utils.rs: Shared utilities for import operations
Configuration Management
Server Configuration Building
The build_server_config_from_cli() function merges configuration from multiple sources:
// Step 1: Load config from file if provided
let mut config = load_config_with_fallback(path)?;

// Step 2: Apply environment variable overrides
config = apply_env_overrides(config);

// Step 3: Apply CLI argument overrides (highest precedence)
config.http.port = serve_args.http_port;
// ... more overrides
Validation
Before starting servers, the CLI performs comprehensive validation:
- Configuration file existence and readability
- OpenAPI spec file validation
- Port availability checking
- Dry-run mode for configuration testing
Server Lifecycle Management
Concurrent Server Startup
All servers are started concurrently using Tokio tasks:
// Start HTTP server
let http_handle = tokio::spawn(async move {
    mockforge_http::serve_router(http_port, http_app)
});

// Start WebSocket server
let ws_handle = tokio::spawn(async move {
    mockforge_ws::start_with_latency(ws_port, None)
});

// Start gRPC server
let grpc_handle = tokio::spawn(async move {
    mockforge_grpc::start(grpc_port)
});
Graceful Shutdown
The CLI implements graceful shutdown using Tokio’s CancellationToken:
let shutdown_token = CancellationToken::new();

// All servers listen for cancellation
tokio::select! {
    result = server_task => { /* handle result */ }
    _ = shutdown_token.cancelled() => { /* cleanup */ }
}
Integration Points
Core Crate Dependencies
The CLI depends on all MockForge service crates:
- mockforge-core: Configuration and shared utilities
- mockforge-http: HTTP server implementation
- mockforge-ws: WebSocket server
- mockforge-grpc: gRPC server
- mockforge-smtp: SMTP server
- mockforge-ui: Admin interface
- mockforge-observability: Metrics and tracing
- mockforge-data: Data generation and RAG
- mockforge-plugin-*: Plugin ecosystem
External Integrations
- OpenTelemetry: Distributed tracing
- Prometheus: Metrics collection
- Jaeger: Trace visualization
- Plugin Registry: Remote plugin distribution
- AI Providers: OpenAI, Anthropic, Ollama for intelligent features
Error Handling
The CLI implements comprehensive error handling:
- User-friendly error messages with suggestions
- Validation errors with specific guidance
- Network error recovery and retry logic
- Graceful degradation when services fail
Testing
The CLI includes integration tests in tests/cli_integration_tests.rs and configuration validation tests in tests/config_validation_tests.rs, ensuring reliability of the command-line interface and configuration parsing.
Future Enhancements
- Plugin Marketplace: Integrated plugin discovery and installation
- Interactive Mode: Shell-like interface for complex workflows
- Configuration Wizards: Guided setup for new users
- Remote Management: Cloud-based MockForge instance management
HTTP Crate
The mockforge-http crate provides comprehensive HTTP/REST API mocking capabilities for MockForge, built on top of the Axum web framework. It integrates OpenAPI specification support, AI-powered response generation, comprehensive management APIs, and advanced middleware for observability and traffic control.
Architecture Overview
graph TD
A[HTTP Server] --> B[Router Builder]
B --> C{Configuration Type}
C --> D[OpenAPI Router]
C --> E[Basic Router]
C --> F[Auth Router]
C --> G[Chain Router]
D --> H[OpenAPI Spec Loader]
H --> I[Route Registry]
I --> J[Validation Middleware]
A --> K[Middleware Stack]
K --> L[Rate Limiting]
K --> M[Request Logging]
K --> N[Metrics Collection]
K --> O[Tracing]
A --> P[Management API]
P --> Q[REST Endpoints]
P --> R[WebSocket Events]
P --> S[SSE Streams]
A --> T[AI Integration]
T --> U[Intelligent Generation]
T --> V[Data Drift]
Core Components
Router Building System
The HTTP crate provides multiple router builders for different use cases:
build_router()
Basic router with optional OpenAPI integration:
- Loads and validates OpenAPI specifications
- Creates route handlers from spec operations
- Applies validation middleware
- Includes health check endpoints
build_router_with_auth()
Router with authentication support:
- Integrates OAuth2 and JWT authentication
- Supports custom auth middleware
- Validates tokens and permissions
build_router_with_chains()
Router with request chaining support:
- Enables multi-step request workflows
- Manages chain execution state
- Provides chain management endpoints
build_router_with_traffic_shaping()
Router with traffic control:
- Bandwidth limiting and packet loss simulation
- Network condition emulation
- Traffic shaping middleware
OpenAPI Integration
Specification Loading
// Load OpenAPI spec from file
let openapi = OpenApiSpec::from_file("api.yaml").await?;

// Create route registry with validation options
let registry = OpenApiRouteRegistry::new_with_options(
    openapi,
    ValidationOptions::enforce(),
);
Route Generation
- Automatic endpoint creation from OpenAPI operations
- Parameter extraction and validation
- Response schema validation
- Error response generation
Validation Modes
- Strict: Full request/response validation
- Lenient: Warnings for validation failures
- Disabled: No validation (performance mode)
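A small sketch of how these modes could shape request handling follows. It is illustrative only: the ValidationMode enum and apply() function are hypothetical, and only ValidationOptions::enforce() appears in the snippets above.
// Hypothetical mapping of the three documented modes onto request handling
enum ValidationMode { Strict, Lenient, Disabled }

fn apply(mode: &ValidationMode, request_is_valid: bool) -> Result<(), String> {
    match (mode, request_is_valid) {
        // Strict: reject invalid requests with a 400-style error
        (ValidationMode::Strict, false) => Err("400 Bad Request".to_string()),
        // Lenient: warn but let the request through
        (ValidationMode::Lenient, false) => {
            eprintln!("warning: request failed schema validation");
            Ok(())
        }
        // Disabled, or any valid request: pass through untouched
        _ => Ok(()),
    }
}

fn main() {
    assert!(apply(&ValidationMode::Strict, false).is_err());
    assert!(apply(&ValidationMode::Lenient, false).is_ok());
}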
Middleware Architecture
Request Processing Pipeline
Request → Rate Limiting → Authentication → Logging → Metrics → Handler → Response
Key Middleware Components
- Rate Limiting: Uses the governor crate for distributed rate limiting
- Request Logging: Comprehensive HTTP request/response logging
- Metrics Collection: Prometheus-compatible metrics
- Tracing: OpenTelemetry integration for distributed tracing
- Traffic Shaping: Bandwidth and latency control
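Since rate limiting builds on the governor crate, a minimal standalone sketch of that crate's direct limiter looks like the following; the actual middleware wiring inside mockforge-http is not shown here:
use std::num::NonZeroU32;
use governor::{Quota, RateLimiter};

fn main() {
    // Allow up to 100 requests per second, checked per call
    let limiter = RateLimiter::direct(Quota::per_second(NonZeroU32::new(100).unwrap()));

    if limiter.check().is_ok() {
        // Within quota: handle the request
    } else {
        // Over quota: respond with 429 Too Many Requests
    }
}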
Management API
REST Endpoints
- GET /__mockforge/health - Health check
- GET /__mockforge/stats - Server statistics
- GET /__mockforge/routes - Route information
- GET /__mockforge/coverage - API coverage metrics
- GET/POST/PUT/DELETE /__mockforge/mocks - Mock management
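These endpoints can be queried with any HTTP client. A minimal reqwest sketch for the health endpoint might look like this (the localhost:3000 address is an assumption about where the server is running):
// Query the management health endpoint
#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let body = reqwest::get("http://localhost:3000/__mockforge/health")
        .await?
        .text()
        .await?;
    println!("health: {}", body);
    Ok(())
}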
WebSocket Integration
- Real-time server events
- Live request monitoring
- Interactive mock configuration
Server-Sent Events (SSE)
- Log streaming to clients
- Real-time metrics updates
- Coverage report streaming
AI-Powered Features
Intelligent Mock Generation
let ai_config = AiResponseConfig {
    enabled: true,
    rag_config: RagConfig {
        provider: "openai".to_string(),
        model: "gpt-4".to_string(),
        api_key: Some(api_key),
    },
    prompt: "Generate realistic user data".to_string(),
};

let response = process_response_with_ai(&ai_config, request_data).await?;
Data Drift Simulation
- Progressive response changes over time
- Realistic data evolution patterns
- Configurable drift parameters
Authentication System
Supported Methods
- OAuth2: Authorization code, client credentials flows
- JWT: Token validation and claims extraction
- API Keys: Header and query parameter validation
- Basic Auth: Username/password authentication
Auth Middleware
let auth_middleware = auth_middleware(auth_state);
app = app.layer(auth_middleware);
Request Chaining
Chain Execution Engine
- Multi-step request workflows
- Conditional execution based on responses
- Chain state management and persistence
- Execution history and debugging
Chain Management API
- POST /__mockforge/chains - Create chains
- GET /__mockforge/chains/{id}/execute - Execute chains
- GET /__mockforge/chains/{id}/history - View execution history
Multi-Tenant Support
Workspace Isolation
- Path-based routing (/workspace1/api/*, /workspace2/api/*)
- Port-based isolation (different ports per tenant)
- Configuration isolation per workspace
Auto-Discovery
- Automatic workspace loading from config directories
- YAML-based workspace definitions
- Dynamic workspace registration
Observability Integration
Metrics Collection
- Request/response counts and timings
- Error rates and status code distribution
- Route coverage and usage statistics
- Performance histograms
Tracing Integration
- Distributed tracing with OpenTelemetry
- Request correlation IDs
- Span tagging for operations
- Jaeger and Zipkin export support
Logging
- Structured JSON logging
- Request/response body logging (configurable)
- Error tracking and correlation
- Log level configuration
Traffic Shaping and Chaos Engineering
Network Simulation
- Bandwidth limiting (bytes per second)
- Packet loss simulation (percentage)
- Latency injection (fixed or random)
- Connection failure simulation
Chaos Scenarios
- Predefined network profiles (3G, 4G, 5G, satellite)
- Custom traffic shaping rules
- Circuit breaker patterns
- Bulkhead isolation
Testing and Validation
Integration Tests
- End-to-end request/response validation
- OpenAPI compliance testing
- Authentication flow testing
- Performance benchmarking
Coverage Analysis
- API endpoint coverage tracking
- Request pattern analysis
- Missing endpoint detection
- Coverage reporting and visualization
Key Data Structures
HttpServerState
Shared state for route information and rate limiting:
pub struct HttpServerState {
    pub routes: Vec<RouteInfo>,
    pub rate_limiter: Option<Arc<GlobalRateLimiter>>,
}
ManagementState
Management API state with server statistics:
pub struct ManagementState {
    pub mocks: Arc<RwLock<Vec<MockConfig>>>,
    pub spec: Option<Arc<OpenApiSpec>>,
    pub request_counter: Arc<RwLock<u64>>,
}
AiResponseHandler
AI-powered response generation:
pub struct AiResponseHandler {
    intelligent_generator: Option<IntelligentMockGenerator>,
    drift_engine: Option<Arc<RwLock<DataDriftEngine>>>,
}
Integration Points
Core Dependencies
- mockforge-core: OpenAPI handling, validation, routing
- mockforge-data: AI generation, data templating
- mockforge-observability: Metrics, logging, tracing
External Integrations
- Axum: Web framework for HTTP handling
- OpenTelemetry: Distributed tracing
- Prometheus: Metrics collection
- OAuth2: Authentication flows
- Governor: Rate limiting
- Reqwest: HTTP client for chaining
Performance Considerations
Startup Optimization
- Lazy OpenAPI spec loading
- Parallel route registry creation
- Cached validation schemas
- Startup time profiling and logging
Runtime Performance
- Efficient middleware pipeline
- Minimal allocations in hot paths
- Async request processing
- Connection pooling for external calls
Memory Management
- Shared state with Arc/RwLock
- Response streaming for large payloads
- Configurable request body limits
- Automatic cleanup of expired sessions
Error Handling
Validation Errors
- Structured error responses
- OpenAPI-compliant error schemas
- Configurable error verbosity
- Error correlation IDs
Recovery Mechanisms
- Graceful degradation on failures
- Fallback responses for AI generation
- Circuit breaker patterns
- Automatic retry logic
Future Enhancements
- GraphQL Integration: Schema-based GraphQL mocking
- WebSocket Mocking: Interactive WebSocket scenarios
- Advanced Caching: Response caching and invalidation
- Load Balancing: Multi-instance coordination
- Plugin Architecture: Extensible middleware system
gRPC Crate
The mockforge-grpc crate provides comprehensive gRPC protocol support for MockForge, featuring dynamic service discovery, runtime protobuf parsing, and HTTP bridge capabilities. It enables automatic mocking of gRPC services without code generation, supporting all streaming patterns and providing rich introspection features.
Architecture Overview
graph TD
A[gRPC Server] --> B[Dynamic Service Discovery]
B --> C[Proto Parser]
C --> D[Service Registry]
D --> E[Dynamic Service Generator]
A --> F[gRPC Reflection]
F --> G[Reflection Proxy]
G --> H[Descriptor Pool]
A --> I[HTTP Bridge]
I --> J[REST API Generator]
J --> K[OpenAPI Spec]
A --> L[Streaming Support]
L --> M[Unary RPC]
L --> N[Server Streaming]
L --> O[Client Streaming]
L --> P[Bidirectional Streaming]
Core Components
Dynamic Service Discovery
The gRPC crate’s core innovation is runtime service discovery and mocking without code generation:
Proto Parser
// Parse protobuf files at runtime
let mut parser = ProtoParser::new();
parser.parse_directory("./proto").await?;

// Extract services and methods
let services = parser.services();
let descriptor_pool = parser.into_pool();
Service Registry
// Create registry with parsed services
let mut registry = ServiceRegistry::with_descriptor_pool(descriptor_pool);

// Register dynamic service implementations
for (name, proto_service) in services {
    let dynamic_service = DynamicGrpcService::new(proto_service, config);
    registry.register(name, dynamic_service);
}
gRPC Reflection
Reflection Proxy
Enables runtime service discovery and method invocation:
let proxy_config = ProxyConfig::default();
let mock_proxy = MockReflectionProxy::new(proxy_config, registry).await?;

// Server supports reflection queries:
// grpcurl -plaintext localhost:50051 list
// grpcurl -plaintext localhost:50051 describe MyService
Descriptor Management
- Descriptor Pool: In-memory protobuf descriptor storage
- Dynamic Resolution: Runtime method and message resolution
- Schema Introspection: Full protobuf schema access
HTTP Bridge
REST API Generation
Automatically converts gRPC services to REST endpoints:
let config = DynamicGrpcConfig {
    enable_http_bridge: true,
    http_bridge_port: 8080,
    generate_openapi: true,
    ..Default::default()
};

// gRPC: MyService/GetUser → HTTP: POST /api/myservice/getuser
OpenAPI Generation
- Automatic OpenAPI 3.0 spec generation from protobuf definitions
- REST endpoint documentation
- Request/response schema documentation
Streaming Support
All Streaming Patterns
The crate supports all four gRPC streaming patterns:
- Unary RPC: Simple request-response
- Server Streaming: Single request, streaming response
- Client Streaming: Streaming request, single response
- Bidirectional Streaming: Streaming in both directions
Streaming Implementation
// Server streaming
async fn list_users(
    &self,
    request: Request<ListUsersRequest>,
) -> Result<Response<Self::ListUsersStream>, Status> {
    // Return stream of User messages
}

// Bidirectional streaming
async fn chat(
    &self,
    request: Request<Streaming<ChatMessage>>,
) -> Result<Response<Self::ChatStream>, Status> {
    // Handle bidirectional message stream
}
Key Modules
dynamic/
Core dynamic service functionality:
proto_parser.rs
- Runtime protobuf file parsing
- Service and method extraction
- Message descriptor generation
service_generator.rs
- Dynamic service implementation generation
- Mock response synthesis
- Streaming method handling
http_bridge/
- REST API conversion logic
- OpenAPI specification generation
- HTTP request/response mapping
reflection/
gRPC reflection protocol implementation:
mock_proxy.rs
- Reflection service implementation
- Dynamic method invocation
- Response generation
client.rs
- Reflection client for service discovery
- Dynamic RPC calls
- Connection pooling
smart_mock_generator.rs
- AI-powered response generation
- Schema-aware data synthesis
- Contextual mock data
registry.rs
Service registration and management:
pub struct GrpcProtoRegistry {
    services: HashMap<String, ProtoService>,
    descriptor_pool: DescriptorPool,
}
Configuration
DynamicGrpcConfig
#[derive(Debug, Clone)]
pub struct DynamicGrpcConfig {
    pub proto_dir: String,                     // Proto file directory
    pub enable_reflection: bool,               // Enable gRPC reflection
    pub excluded_services: Vec<String>,        // Services to skip
    pub http_bridge: Option<HttpBridgeConfig>, // HTTP bridge settings
    pub max_message_size: usize,               // Max message size
}
HTTP Bridge Config
pub struct HttpBridgeConfig {
    pub enabled: bool,          // Enable HTTP bridge
    pub port: u16,              // HTTP server port
    pub generate_openapi: bool, // Generate OpenAPI specs
    pub cors_enabled: bool,     // Enable CORS
}
Advanced Data Synthesis
Intelligent Field Inference
The crate uses field names and types to generate realistic mock data:
message User {
string id = 1; // Generates UUIDs
string email = 2; // Generates email addresses
string phone = 3; // Generates phone numbers
repeated string tags = 4; // Generates string arrays
}
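A toy sketch of the idea is shown below; the function and the hard-coded sample values are illustrative only, not the crate's actual inference code:
// Illustrative only: how name-based inference could map fields to generators
fn infer_mock_value(field_name: &str) -> String {
    let name = field_name.to_lowercase();
    if name.ends_with("id") {
        "550e8400-e29b-41d4-a716-446655440000".to_string() // UUID-like value
    } else if name.contains("email") {
        "jane.doe@example.com".to_string()
    } else if name.contains("phone") {
        "+1-555-0100".to_string()
    } else {
        "lorem".to_string()
    }
}

fn main() {
    assert!(infer_mock_value("email").contains('@'));
    assert!(infer_mock_value("user_id").contains('-'));
}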
Referential Integrity
Maintains relationships between messages:
- Foreign key relationships
- Consistent ID generation
- Cross-message data consistency
Deterministic Seeding
// Reproducible test data
let config = MockConfig {
    seed: Some(42),
    ..Default::default()
};
Performance Features
Connection Pooling
- Efficient gRPC connection management
- Connection reuse and lifecycle management
- Load balancing across connections
Caching
- Descriptor caching for performance
- Response caching for repeated requests
- Schema compilation caching
Async Processing
- Tokio-based async runtime
- Streaming data processing
- Concurrent request handling
Integration Points
Core Dependencies
- mockforge-core: Base mocking functionality
- mockforge-data: Advanced data generation
- mockforge-observability: Metrics and tracing
External Libraries
- Tonic: gRPC framework for Rust
- Prost: Protocol buffer implementation
- Prost-reflect: Runtime protobuf reflection
- Tokio: Async runtime
Observability
Metrics Collection
- Request/response counts
- Method execution times
- Error rates by service/method
- Streaming metrics
Tracing Integration
- OpenTelemetry tracing support
- Distributed tracing across services
- Request correlation IDs
Logging
- Structured logging for all operations
- Debug logging for request/response payloads
- Performance logging
Testing Support
Integration Tests
- End-to-end gRPC testing
- HTTP bridge validation
- Reflection service testing
- Streaming functionality tests
Mock Data Generation
- Deterministic test data
- Schema-compliant mock generation
- Custom data providers
Error Handling
gRPC Status Codes
- Proper gRPC status code mapping
- Detailed error messages
- Error correlation IDs
Recovery Mechanisms
- Connection retry logic
- Graceful degradation
- Fallback responses
Build System
Proto Compilation
The crate uses tonic-prost-build for compile-time proto generation:
// build.rs
fn main() -> Result<(), Box<dyn std::error::Error>> {
    tonic_build::compile_protos("proto/greeter.proto")?;
    Ok(())
}
Feature Flags
- data-faker: Enable advanced data synthesis
- Default features include data-faker for rich mock data
Usage Examples
Basic gRPC Server
use mockforge_grpc::start;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // Auto-discovers services from ./proto directory
    start(50051).await?;
    Ok(())
}
With HTTP Bridge
use mockforge_grpc::{start_with_config, DynamicGrpcConfig};

let config = DynamicGrpcConfig {
    proto_dir: "./proto".to_string(),
    enable_reflection: true,
    http_bridge: Some(HttpBridgeConfig {
        enabled: true,
        port: 8080,
        generate_openapi: true,
        ..Default::default()
    }),
    ..Default::default()
};

start_with_config(50051, None, config).await?;
Client Usage
# List services
grpcurl -plaintext localhost:50051 list
# Describe service
grpcurl -plaintext localhost:50051 describe MyService
# Call method
grpcurl -plaintext -d '{"id": "123"}' localhost:50051 MyService/GetUser
# HTTP bridge
curl -X POST http://localhost:8080/api/myservice/getuser \
-H "Content-Type: application/json" \
-d '{"id": "123"}'
Future Enhancements
- Advanced Streaming: Enhanced bidirectional streaming support
- Service Mesh Integration: Istio and Linkerd integration
- Schema Evolution: Automatic handling of protobuf schema changes
- Load Testing: Built-in gRPC load testing capabilities
- Code Generation: Optional compile-time service generation
WebSocket Crate
The mockforge-ws crate provides comprehensive WebSocket protocol support for MockForge, featuring replay capabilities, proxy functionality, and AI-powered event generation. It enables realistic WebSocket interaction simulation for testing and development.
Architecture Overview
graph TD
A[WebSocket Server] --> B[Connection Handler]
B --> C{Operation Mode}
C --> D[Replay Mode]
C --> E[Proxy Mode]
C --> F[Interactive Mode]
C --> G[AI Event Mode]
D --> H[Replay File Parser]
H --> I[JSONL Processor]
I --> J[Template Expansion]
E --> K[Proxy Handler]
K --> L[Upstream Connection]
L --> M[Message Forwarding]
F --> N[Message Router]
N --> O[Pattern Matching]
O --> P[Response Generation]
G --> Q[AI Event Generator]
Q --> R[LLM Integration]
R --> S[Event Stream]
Core Components
Connection Management
WebSocket Router
The crate provides multiple router configurations for different use cases:
// Basic WebSocket router
let app = router();

// With latency simulation
let latency_injector = LatencyInjector::new(profile, Default::default());
let app = router_with_latency(latency_injector);

// With proxy support
let proxy_handler = WsProxyHandler::new(proxy_config);
let app = router_with_proxy(proxy_handler);
Connection Lifecycle
- Establishment: Connection tracking and metrics collection
- Message Handling: Bidirectional message processing
- Error Handling: Graceful error recovery and logging
- Termination: Connection cleanup and statistics recording
Operational Modes
1. Replay Mode
Scripted message playback from JSONL files:
{"ts":0,"dir":"out","text":"HELLO {{uuid}}","waitFor":"^CLIENT_READY$"}
{"ts":10,"dir":"out","text":"{\"type\":\"welcome\",\"sessionId\":\"{{uuid}}\"}"}
{"ts":20,"dir":"out","text":"{\"data\":{{randInt 1 100}}}","waitFor":"^ACK$"}
Features:
- Timestamp-based message sequencing
- Template expansion ({{uuid}}, {{now}}, {{randInt min max}})
- Conditional waiting with regex/JSONPath patterns
- Deterministic replay for testing
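A hedged sketch of deserializing one replay line with serde follows; the field names mirror the JSONL sample above, but the real ReplayEntry type lives in mockforge-ws and may differ:
use serde::Deserialize;

// Hypothetical shape of a replay entry matching the JSONL sample above
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
struct ReplayEntry {
    ts: u64,
    dir: Option<String>,
    text: String,
    wait_for: Option<String>, // serialized as "waitFor"
}

fn main() {
    let line = r#"{"ts":0,"dir":"out","text":"HELLO {{uuid}}","waitFor":"^CLIENT_READY$"}"#;
    let entry: ReplayEntry = serde_json::from_str(line).unwrap();
    println!("{:?}", entry);
}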
2. Proxy Mode
Forward messages to upstream WebSocket servers:
let proxy_config = WsProxyConfig {
    upstream_url: "wss://api.example.com/ws".to_string(),
    should_proxy: true,
    message_transform: Some(transform_fn),
};
Features:
- Transparent message forwarding
- Optional message transformation
- Connection pooling and reuse
- Error handling and fallback
3. Interactive Mode
Dynamic responses based on client messages:
// Echo mode (default)
while let Some(msg) = socket.recv().await {
    if let Ok(Message::Text(text)) = msg {
        let response = format!("echo: {}", text);
        socket.send(Message::Text(response.into())).await?;
    }
}
Features:
- Pattern-based response matching
- JSONPath query support
- State-aware conversations
- Custom response logic
4. AI Event Mode
LLM-powered event stream generation:
let ai_config = WebSocketAiConfig {
    enabled: true,
    narrative: "Simulate 5 minutes of live stock market trading".to_string(),
    event_count: 20,
    replay: Some(ReplayAugmentationConfig {
        provider: "openai".to_string(),
        model: "gpt-3.5-turbo".to_string(),
        ..Default::default()
    }),
};

let generator = AiEventGenerator::new(ai_config);
generator.stream_events(socket, Some(20)).await;
Message Processing
Template Expansion
Rich templating system for dynamic content:
// UUID generation
"session_{{uuid}}" → "session_550e8400-e29b-41d4-a716-446655440000"

// Timestamp manipulation
"{{now}}"    → "2024-01-15T10:30:00Z"
"{{now+1h}}" → "2024-01-15T11:30:00Z"

// Random values
"{{randInt 1 100}}" → "42"
JSONPath Matching
Sophisticated message pattern matching:
// Wait for specific message types
{"waitFor": "$.type", "text": "Type received: {{$.type}}"}
// Match nested object properties
{"waitFor": "$.user.id", "text": "User {{$.user.name}} authenticated"}
// Array element matching
{"waitFor": "$.items[0].status", "text": "First item status: {{$.items[0].status}}"}
AI Integration
Event Generation
Narrative-driven event stream creation:
pub struct AiEventGenerator {
    engine: Arc<RwLock<ReplayAugmentationEngine>>,
}

impl AiEventGenerator {
    pub async fn stream_events(&self, socket: WebSocket, max_events: Option<usize>) {
        // Generate contextual events based on narrative
        let events = self.engine.write().await.generate_stream().await?;
        // Stream events to client with configurable rate
    }
}
Replay Augmentation
Enhance existing replay files with AI-generated content:
let augmentation_config = ReplayAugmentationConfig {
    narrative: "Add realistic user interactions to chat replay".to_string(),
    augmentation_points: vec!["user_message".to_string()],
    provider: "openai".to_string(),
    model: "gpt-4".to_string(),
};
Observability
Metrics Collection
Comprehensive WebSocket metrics:
let registry = get_global_registry();
registry.record_ws_connection_established();
registry.record_ws_message_received();
registry.record_ws_message_sent();
registry.record_ws_connection_closed(duration, status);
Tracing Integration
Distributed tracing for WebSocket connections:
let span = create_ws_connection_span(&request);
let _guard = span.enter();

// Connection handling with tracing context
record_ws_message_success(&span, message_size);
Logging
Structured logging for connection lifecycle and message flow:
info!("WebSocket connection established from {}", peer_addr);
debug!("Received message: {} bytes", message.len());
error!("WebSocket error: {}", error);
Performance Features
Connection Pooling
Efficient management of upstream connections in proxy mode:
// Connection reuse for proxy operations
let connection = pool.get_connection(upstream_url).await?;
connection.forward_message(message).await?;
Message Buffering
Optimized message processing with buffering:
// Stream processing for large message volumes
while let Some(batch) = message_buffer.next_batch().await {
    for message in batch {
        process_message(message).await?;
    }
}
Rate Limiting
Configurable message rate limits:
let rate_limiter = RateLimiter::new(1000, Duration::from_secs(60)); // 1000 msg/min
if rate_limiter.check_limit().await {
    process_message(message).await?;
}
Configuration
WebSocket Server Config
pub struct WsConfig {
    pub port: u16,
    pub max_connections: usize,
    pub max_message_size: usize,
    pub heartbeat_interval: Duration,
    pub replay_file: Option<PathBuf>,
    pub proxy_config: Option<WsProxyConfig>,
    pub ai_config: Option<WebSocketAiConfig>,
}
Proxy Configuration
pub struct WsProxyConfig {
    pub upstream_url: String,
    pub should_proxy: bool,
    pub message_transform: Option<TransformFn>,
    pub connection_pool_size: usize,
    pub retry_attempts: u32,
}
AI Configuration
pub struct WebSocketAiConfig {
    pub enabled: bool,
    pub narrative: String,
    pub event_count: usize,
    pub events_per_second: f64,
    pub replay: Option<ReplayAugmentationConfig>,
}
Testing Support
Integration Tests
End-to-end WebSocket testing:
#[tokio::test]
async fn test_websocket_replay() -> Result<(), Box<dyn std::error::Error>> {
    // Start WebSocket server with replay file
    let server = TestServer::new(router()).await;

    // Connect test client
    let (mut ws_stream, _) = connect_async(server.url()).await?;

    // Verify replay sequence
    let msg = ws_stream.next().await.unwrap()?;
    assert_eq!(msg, Message::Text("HELLO test-session".into()));
    Ok(())
}
Replay File Validation
Automated validation of replay configurations:
#[test]
fn test_replay_file_parsing() -> Result<(), serde_json::Error> {
    let replay_data = r#"{"ts":0,"text":"hello","waitFor":"ready"}"#;
    let entry: ReplayEntry = serde_json::from_str(replay_data)?;
    assert_eq!(entry.ts, 0);
    assert_eq!(entry.text, "hello");
    Ok(())
}
Error Handling
Connection Errors
Graceful handling of connection failures:
match socket.recv().await {
    Some(Ok(Message::Close(frame))) => {
        info!("Client closed connection: {:?}", frame);
        break;
    }
    Some(Err(e)) => {
        error!("WebSocket error: {}", e);
        record_ws_error();
        break;
    }
    None => break,
    _ => continue,
}
Message Processing Errors
Robust message parsing and transformation:
match serde_json::from_str::<Value>(&text) {
    Ok(json) => process_json_message(json).await,
    Err(e) => {
        warn!("Invalid JSON message: {}", e);
        send_error_response("Invalid JSON format").await?;
    }
}
Usage Examples
Basic WebSocket Server
use mockforge_ws::start_with_latency;
use mockforge_core::LatencyProfile;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Start with default latency profile
    start_with_latency(3001, Some(LatencyProfile::normal())).await?;
    Ok(())
}
Replay Mode
# Set environment variable for replay file
export MOCKFORGE_WS_REPLAY_FILE=./replay.jsonl
# Start server
mockforge serve --ws-port 3001
Proxy Mode
use mockforge_ws::router_with_proxy;
use mockforge_core::{WsProxyConfig, WsProxyHandler};

let proxy_config = WsProxyConfig {
    upstream_url: "wss://api.example.com/ws".to_string(),
    should_proxy: true,
};
let proxy = WsProxyHandler::new(proxy_config);
let app = router_with_proxy(proxy);
AI Event Generation
use mockforge_ws::{AiEventGenerator, WebSocketAiConfig};

let config = WebSocketAiConfig {
    enabled: true,
    narrative: "Simulate real-time chat conversation".to_string(),
    event_count: 50,
    events_per_second: 2.0,
};
let generator = AiEventGenerator::new(config)?;
generator.stream_events_with_rate(socket, None, 2.0).await;
Future Enhancements
- Advanced Pattern Matching: Complex event correlation and state machines
- Load Testing: Built-in WebSocket load testing capabilities
- Recording Mode: Capture live WebSocket interactions for replay
- Clustering: Distributed WebSocket session management
- Protocol Extensions: Support for custom WebSocket subprotocols
CLI Reference
MockForge provides a comprehensive command-line interface for managing mock servers and generating test data. This reference covers all available commands, options, and usage patterns.
Global Options
All MockForge commands support the following global options:
mockforge-cli [OPTIONS] <COMMAND>
- -h, --help: Display help information
Commands
serve - Start Mock Servers
The primary command for starting MockForge’s mock servers with support for HTTP, WebSocket, and gRPC protocols.
mockforge-cli serve [OPTIONS]
Server Options
Port Configuration:
- --http-port <PORT>: HTTP server port (default: 3000)
- --ws-port <PORT>: WebSocket server port (default: 3001)
- --grpc-port <PORT>: gRPC server port (default: 50051)
API Specification:
--spec <PATH>: OpenAPI spec file for HTTP server (JSON or YAML format)
Configuration:
-c, --config <PATH>: Path to configuration file
Admin UI Options
Admin UI Control:
- --admin: Enable admin UI
- --admin-port <PORT>: Admin UI port (default: 9080)
- --admin-embed: Force embedding Admin UI under HTTP server
- --admin-mount-path <PATH>: Explicit mount path for embedded Admin UI (implies --admin-embed)
- --admin-standalone: Force standalone Admin UI on separate port (overrides embed)
- --disable-admin-api: Disable Admin API endpoints (UI loads but API routes are absent)
Validation Options
Request Validation:
- --validation <MODE>: Request validation mode (default: enforce)
  - off: Disable validation
  - warn: Log warnings but allow requests
  - enforce: Reject invalid requests
- --aggregate-errors: Aggregate request validation errors into JSON array
- --validate-responses: Validate responses (warn-only)
- --validation-status <CODE>: Validation error HTTP status code (default: 400)
Response Processing
Template Expansion:
--response-template-expand: Expand templating tokens in responses/examples
Chaos Engineering
Latency Simulation:
--latency-enabled: Enable latency simulation
Failure Injection:
--failures-enabled: Enable failure injection
Examples
Basic HTTP Server:
mockforge-cli serve --spec examples/openapi-demo.json --http-port 3000
Full Multi-Protocol Setup:
mockforge-cli serve \
--spec examples/openapi-demo.json \
--http-port 3000 \
--ws-port 3001 \
--grpc-port 50051 \
--admin \
--admin-port 9080 \
--response-template-expand
Development Configuration:
mockforge-cli serve \
--config demo-config.yaml \
--validation warn \
--response-template-expand \
--latency-enabled
Production Configuration:
mockforge-cli serve \
--config production-config.yaml \
--validation enforce \
--admin-standalone
init - Initialize New Project
Create a new MockForge project with a template configuration file.
mockforge-cli init [OPTIONS] <NAME>
Arguments
- <NAME>: Project name or directory path
  - Use . to initialize in the current directory
  - Use a project name to create a new directory
Options
--no-examples: Skip creating example files (only createmockforge.yaml)
Examples
# Create a new project in a new directory
mockforge-cli init my-mock-api
# Initialize in the current directory
mockforge-cli init .
# Initialize without examples
mockforge-cli init my-project --no-examples
What Gets Created
- mockforge.yaml: Main configuration file with:
  - HTTP, WebSocket, gRPC server configurations
  - Admin UI settings
  - Core features (latency, failures, overrides)
  - Observability configuration
  - Data generation settings
  - Logging configuration
- examples/ directory (unless --no-examples):
  - openapi.json: Sample OpenAPI specification
  - Example data files
config - Configuration Management
Validate and manage MockForge configuration files.
mockforge-cli config <SUBCOMMAND>
Subcommands
validate - Validate Configuration File
Validate a MockForge configuration file for syntax and structure errors.
mockforge-cli config validate [OPTIONS]
Options:
- --config <PATH>: Path to config file to validate
  - If omitted, auto-discovers mockforge.yaml or mockforge.yml in current and parent directories
What Gets Validated:
- YAML syntax and structure
- File existence
- HTTP endpoints count
- Request chains count
- Missing sections (warnings)
Examples:
# Validate config in current directory
mockforge-cli config validate
# Validate specific config file
mockforge-cli config validate --config my-config.yaml
# Validate before starting server
mockforge-cli config validate && mockforge-cli serve
Output Example:
🔍 Validating MockForge configuration...
📄 Checking configuration file: mockforge.yaml
✅ Configuration is valid
📊 Summary:
Found 5 HTTP endpoints
Found 2 chains
⚠️ Warnings:
- No WebSocket configuration found
Common Issues:
- Invalid YAML syntax: Fix indentation, quotes, or structure
- File not found: Check path or run mockforge init
- Missing sections: Add HTTP, admin, or other required sections
Note: Current validation is basic (syntax, structure, counts). For comprehensive field validation, see the Configuration Validation Guide.
data - Generate Synthetic Data
Generate synthetic test data using various templates and schemas.
mockforge-cli data <SUBCOMMAND>
Subcommands
template - Generate from Built-in Templates
Generate data using MockForge’s built-in data generation templates.
mockforge-cli data template [OPTIONS]
Options:
- --count <N>: Number of items to generate (default: 1)
- --format <FORMAT>: Output format (json, yaml, csv)
- --template <NAME>: Template name (user, product, order, etc.)
- --output <PATH>: Output file path
Examples:
# Generate 10 user records as JSON
mockforge-cli data template --template user --count 10 --format json
# Generate product data to file
mockforge-cli data template --template product --count 50 --output products.json
schema - Generate from JSON Schema
Generate data conforming to a JSON Schema specification.
mockforge-cli data schema [OPTIONS] <SCHEMA>
Parameters:
<SCHEMA>: Path to JSON Schema file
Options:
- --count <N>: Number of items to generate (default: 1)
- --format <FORMAT>: Output format (json, yaml)
- --output <PATH>: Output file path
Examples:
# Generate data from user schema
mockforge-cli data schema --count 5 user-schema.json
# Generate and save to file
mockforge-cli data schema --count 100 --output generated-data.json api-schema.json
open-api - Generate from OpenAPI Spec
Generate mock data based on OpenAPI specification schemas.
mockforge-cli data open-api [OPTIONS] <SPEC>
Parameters:
<SPEC>: Path to OpenAPI specification file
Options:
- --endpoint <PATH>: Specific endpoint to generate data for
- --method <METHOD>: HTTP method (get, post, put, delete)
- --count <N>: Number of items to generate (default: 1)
- --format <FORMAT>: Output format (json, yaml)
- --output <PATH>: Output file path
Examples:
# Generate data for all endpoints in OpenAPI spec
mockforge-cli data open-api api-spec.yaml
# Generate data for specific endpoint
mockforge-cli data open-api --endpoint /users --method get --count 20 api-spec.yaml
# Generate POST request body data
mockforge-cli data open-api --endpoint /users --method post api-spec.yaml
admin - Admin UI Server
Start the Admin UI as a standalone server without the main mock servers.
mockforge-cli admin [OPTIONS]
Options
--port <PORT>: Server port (default: 9080)
Examples
# Start admin UI on default port
mockforge-cli admin
# Start admin UI on custom port
mockforge-cli admin --port 9090
sync - Workspace Synchronization Daemon
Start a background daemon that monitors a workspace directory for file changes and automatically syncs them to MockForge workspaces.
mockforge-cli sync [OPTIONS]
Options
Required:
--workspace-dir <PATH>or-w <PATH>: Workspace directory to monitor for changes
Optional:
--config <PATH>or-c <PATH>: Configuration file path for sync settings
How It Works
The sync daemon provides bidirectional synchronization between workspace files and MockForge’s internal workspace storage:
- File Monitoring: Watches for .yaml and .yml files in the workspace directory
- Automatic Import: When files are created or modified, they’re automatically imported into the workspace
- Real-time Updates: Changes are detected and processed immediately
- Visual Feedback: Clear console output shows what’s happening in real-time
File Requirements:
- Only .yaml and .yml files are monitored
- Hidden files (starting with .) are ignored
- Files must be valid MockRequest YAML format
What You’ll See:
- File creation notifications with import status
- File modification notifications with update status
- File deletion notifications (files are not auto-deleted from workspace)
- Error messages if imports fail
- Real-time feedback for all sync operations
Examples
Basic Usage:
# Start sync daemon for a workspace directory
mockforge-cli sync --workspace-dir ./my-workspace
# Use short form
mockforge-cli sync -w ./my-workspace
# With custom config
mockforge-cli sync --workspace-dir /path/to/workspace --config sync-config.yaml
Git Integration:
# Monitor a Git repository directory
mockforge-cli sync --workspace-dir /path/to/git/repo/workspaces
# Changes you make in Git will automatically sync to MockForge
# Perfect for team collaboration via Git
Development Workflow:
# 1. Start the sync daemon in one terminal
mockforge-cli sync --workspace-dir ./workspaces
# 2. In another terminal, edit workspace files
vim ./workspaces/my-request.yaml
# 3. Save the file - it will automatically import to MockForge
# You'll see output like:
# 🔄 Detected 1 file change in workspace 'default'
# 📝 Modified: my-request.yaml
# ✅ Successfully updated
Example Output
When you start the sync daemon, you’ll see:
🔄 Starting MockForge Sync Daemon...
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📁 Workspace directory: ./my-workspace
ℹ️ What the sync daemon does:
• Monitors the workspace directory for .yaml/.yml file changes
• Automatically imports new or modified request files
• Syncs changes bidirectionally between files and workspace
• Skips hidden files (starting with .)
🔍 Monitoring for file changes...
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Sync daemon started successfully!
💡 Press Ctrl+C to stop
📂 Monitoring workspace 'default' in directory: ./my-workspace
When files change, you’ll see:
🔄 Detected 1 file change in workspace 'default'
➕ Created: new-endpoint.yaml
✅ Successfully imported
🔄 Detected 2 file changes in workspace 'default'
📝 Modified: user-api.yaml
✅ Successfully updated
🗑️ Deleted: old-endpoint.yaml
ℹ️ Auto-deletion from workspace is disabled
If errors occur:
🔄 Detected 1 file change in workspace 'default'
📝 Modified: invalid-file.yaml
⚠️ Failed to import: File is not a recognized format (expected MockRequest YAML)
Stopping the Daemon
Press Ctrl+C to gracefully stop the sync daemon:
^C
🛑 Received shutdown signal
⏹️ Stopped monitoring workspace 'default' in directory: ./my-workspace
👋 Sync daemon stopped
Best Practices
Version Control:
# Use sync with Git for team collaboration
cd /path/to/git/repo
mockforge-cli sync --workspace-dir ./workspaces
# Team members can push/pull changes
# The sync daemon will automatically import updates
Development Workflow:
# Keep sync daemon running during development
# Edit files in your favorite editor
# Changes automatically sync to MockForge
# Perfect for file-based workflows
Directory Organization:
# Organize workspace files in subdirectories
workspaces/
├── api-v1/
│ ├── users.yaml
│ └── products.yaml
├── api-v2/
│ └── users.yaml
└── internal/
└── admin.yaml
# All .yaml files will be monitored
mockforge-cli sync --workspace-dir ./workspaces
Troubleshooting
Files not importing:
- Ensure files have a .yaml or .yml extension
- Check that files are valid MockRequest YAML format
- Look for error messages in the console output
- Verify files are not hidden (don’t start with .)
Permission errors:
- Ensure MockForge has read access to the workspace directory
- Check file permissions:
ls -la workspace-dir/
Changes not detected:
- The sync daemon uses filesystem notifications
- Some network filesystems may not support change notifications
- Try editing the file locally rather than over a network mount
Enable debug logging:
RUST_LOG=mockforge_core::sync_watcher=debug mockforge-cli sync --workspace-dir ./workspace
Configuration File Format
MockForge supports YAML configuration files that can be used instead of command-line options.
Basic Configuration Structure
# Server configuration
server:
http_port: 3000
ws_port: 3001
grpc_port: 50051
# API specification
spec: examples/openapi-demo.json
# Admin UI configuration
admin:
enabled: true
port: 9080
embedded: false
mount_path: "/admin"
standalone: true
disable_api: false
# Validation settings
validation:
mode: enforce
aggregate_errors: false
validate_responses: false
status_code: 400
# Response processing
response:
template_expand: true
# Chaos engineering
chaos:
latency_enabled: false
failures_enabled: false
# Protocol-specific settings
grpc:
proto_dir: "proto/"
enable_reflection: true
websocket:
replay_file: "examples/ws-demo.jsonl"
Configuration Precedence
Configuration values are applied in the following order (later sources override earlier ones):
- Default values (compiled into the binary)
- Configuration file (-c/--config option)
- Environment variables
- Command-line arguments (highest priority)
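As a minimal sketch of this rule (names hypothetical), each layer overrides a lower one only when it actually sets a value:
// Highest-priority source that is actually set wins.
fn resolve_port(default: u16, file: Option<u16>, env: Option<u16>, cli: Option<u16>) -> u16 {
    cli.or(env).or(file).unwrap_or(default)
}

fn main() {
    // Config file sets 8080, an env var sets 3100, no CLI flag: the env var wins.
    assert_eq!(resolve_port(3000, Some(8080), Some(3100), None), 3100);
}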
Environment Variables
All configuration options can be set via environment variables using the MOCKFORGE_ prefix:
# Server ports
export MOCKFORGE_HTTP_PORT=3000
export MOCKFORGE_WS_PORT=3001
export MOCKFORGE_GRPC_PORT=50051
# Admin UI
export MOCKFORGE_ADMIN_ENABLED=true
export MOCKFORGE_ADMIN_PORT=9080
export MOCKFORGE_ADMIN_JWT_SECRET=your-secret-key
export MOCKFORGE_ADMIN_SESSION_TIMEOUT=86400
export MOCKFORGE_ADMIN_AUTH_ENABLED=true
# Validation
export MOCKFORGE_VALIDATION_MODE=enforce
export MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
# gRPC settings
export MOCKFORGE_PROTO_DIR=proto/
export MOCKFORGE_GRPC_REFLECTION_ENABLED=true
# WebSocket settings
export MOCKFORGE_WS_REPLAY_FILE=examples/ws-demo.jsonl
# Plugin system
export MOCKFORGE_PLUGINS_ENABLED=true
export MOCKFORGE_PLUGINS_DIRECTORY=~/.mockforge/plugins
export MOCKFORGE_PLUGIN_MEMORY_LIMIT=64
export MOCKFORGE_PLUGIN_CPU_LIMIT=10
export MOCKFORGE_PLUGIN_TIMEOUT=5000
# Encryption
export MOCKFORGE_ENCRYPTION_ENABLED=true
export MOCKFORGE_ENCRYPTION_ALGORITHM=aes-256-gcm
export MOCKFORGE_KEY_STORE_PATH=~/.mockforge/keys
# Synchronization
export MOCKFORGE_SYNC_ENABLED=true
export MOCKFORGE_SYNC_DIRECTORY=./workspace-sync
export MOCKFORGE_SYNC_MODE=bidirectional
export MOCKFORGE_SYNC_WATCH=true
# Data generation
export MOCKFORGE_DATA_RAG_ENABLED=true
export MOCKFORGE_DATA_RAG_PROVIDER=openai
export MOCKFORGE_DATA_RAG_API_KEY=your-api-key
Exit Codes
MockForge uses standard exit codes:
- 0: Success
- 1: General error
- 2: Configuration error
- 3: Validation error
- 4: File I/O error
- 5: Network error
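These codes are convenient for scripting. A sketch of branching on them from a Rust test harness, assuming mockforge-cli is on the PATH (the mapping follows the table above):
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Run a validation pass and branch on the documented exit codes.
    let status = Command::new("mockforge-cli")
        .args(["config", "validate"])
        .status()?;
    match status.code() {
        Some(0) => println!("configuration OK"),
        Some(2) => eprintln!("configuration error"),
        Some(3) => eprintln!("validation error"),
        Some(n) => eprintln!("failed with exit code {}", n),
        None => eprintln!("terminated by a signal"),
    }
    Ok(())
}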
Logging
MockForge provides configurable logging output to help with debugging and monitoring.
Log Levels
- error: Only error messages
- warn: Warnings and errors
- info: General information (default)
- debug: Detailed debugging information
- trace: Very verbose tracing information
Log Configuration
# Set log level via environment variable
export RUST_LOG=mockforge=debug
# Or via configuration file
logging:
level: debug
format: json
Log Output
Logs include structured information about:
- HTTP requests/responses
- WebSocket connections and messages
- gRPC calls and streaming
- Configuration loading
- Template expansion
- Validation errors
Examples
Complete Development Setup
# Start all servers with admin UI
mockforge-cli serve \
--spec examples/openapi-demo.json \
--http-port 3000 \
--ws-port 3001 \
--grpc-port 50051 \
--admin \
--admin-port 9080 \
--response-template-expand \
--validation warn
CI/CD Testing Pipeline
#!/bin/bash
# test-mockforge.sh
# Start MockForge in background
mockforge-cli serve --spec api-spec.yaml --http-port 3000 &
MOCKFORGE_PID=$!
# Wait for server to start
sleep 5
# Run API tests
npm test
# Generate test data
mockforge-cli data open-api --endpoint /users --count 100 api-spec.yaml > test-users.json
# Stop MockForge
kill $MOCKFORGE_PID
Load Testing Setup
#!/bin/bash
# load-test-setup.sh
# Start MockForge with minimal validation for performance
MOCKFORGE_VALIDATION_MODE=off \
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=false \
mockforge-cli serve \
--spec load-test-spec.yaml \
--http-port 3000 \
--validation off
# Now run your load testing tool against localhost:3000
# Example: hey -n 10000 -c 100 http://localhost:3000/api/test
Docker Integration
# Run MockForge in Docker with CLI commands
docker run --rm -v $(pwd)/examples:/examples \
mockforge \
serve --spec /examples/openapi-demo.json --http-port 3000
Troubleshooting
Common Issues
Server won’t start:
# Check if ports are available
lsof -i :3000
lsof -i :3001
# Try different ports
mockforge-cli serve --http-port 3001 --ws-port 3002
Configuration not loading:
# Validate YAML syntax
yamllint config.yaml
# Check file permissions
ls -la config.yaml
OpenAPI spec not found:
# Verify file exists and path is correct
ls -la examples/openapi-demo.json
# Use absolute path
mockforge-cli serve --spec /full/path/to/examples/openapi-demo.json
Template expansion not working:
# Ensure template expansion is enabled
mockforge-cli serve --response-template-expand --spec api-spec.yaml
Debug Mode
Run with debug logging for detailed information:
RUST_LOG=mockforge=debug mockforge-cli serve --spec api-spec.yaml
Health Checks
Test basic functionality:
# HTTP health check
curl http://localhost:3000/health
# WebSocket connection test
websocat ws://localhost:3001/ws
# gRPC service discovery
grpcurl -plaintext localhost:50051 list
This CLI reference provides comprehensive coverage of MockForge’s command-line interface. For programmatic usage, see the Rust API Reference.
Admin UI REST API
This document provides comprehensive documentation for the MockForge Admin UI REST API endpoints.
Overview
The MockForge Admin UI provides a web-based interface for managing and monitoring MockForge servers. The API is organized around the following main areas:
- Dashboard: System overview and real-time metrics
- Server Management: Control and monitor server instances
- Configuration: Update latency, faults, proxy, and validation settings
- Logging: View and filter request logs
- Metrics: Performance monitoring and analytics
- Fixtures: Manage mock data and fixtures
- Environment: Environment variable management
Base URL
All API endpoints are prefixed with /__mockforge/api to avoid conflicts with user-defined routes.
Standalone Mode vs Embedded Mode
The REST API works identically in both standalone and embedded modes:
Standalone Mode (Default):
- Admin UI runs on a separate port (default: 9080)
- REST API endpoints available at http://localhost:9080/__mockforge/api/*
- Main HTTP server runs on port 3000 (or configured port)
- Example:
curl http://localhost:9080/__mockforge/api/mocks
Embedded Mode:
- Admin UI mounted under HTTP server (e.g., /admin)
- REST API endpoints available at http://localhost:3000/__mockforge/api/*
- Same endpoints, different base URL
- Example:
curl http://localhost:3000/__mockforge/api/mocks
Configuration via REST API (JSON over HTTP):
The REST API supports full configuration management via JSON over HTTP, making it suitable for:
- CI/CD pipelines
- Automated testing
- Remote configuration
- Integration with external tools
All endpoints accept and return JSON, following standard REST conventions.
Standalone Mode Examples
Starting MockForge in Standalone Mode:
# Start MockForge with standalone admin UI
mockforge serve --admin --admin-standalone --admin-port 9080
# Or via config file
# admin:
# enabled: true
# port: 9080
# api_enabled: true
Creating a Mock via REST API (Standalone Mode):
# Create a mock using JSON over HTTP
curl -X POST http://localhost:9080/__mockforge/api/mocks \
-H "Content-Type: application/json" \
-d '{
"id": "user-get",
"name": "Get User",
"method": "GET",
"path": "/api/users/{id}",
"response": {
"body": {
"id": "{{request.path.id}}",
"name": "Alice",
"email": "alice@example.com"
},
"headers": {
"Content-Type": "application/json"
}
},
"enabled": true,
"status_code": 200
}'
Updating Configuration via REST API:
# Update latency configuration
curl -X POST http://localhost:9080/__mockforge/api/config/latency \
-H "Content-Type: application/json" \
-d '{
"base_ms": 100,
"jitter_ms": 50
}'
Listing All Mocks:
# Get all configured mocks
curl http://localhost:9080/__mockforge/api/mocks
Using the SDK with Standalone Mode:
use mockforge_sdk::{AdminClient, MockConfigBuilder};
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to standalone admin API
    let client = AdminClient::new("http://localhost:9080");

    // Create a mock using the fluent builder API
    let mock = MockConfigBuilder::new("POST", "/api/users")
        .name("Create User")
        .with_header("Authorization", "Bearer.*")
        .with_query_param("role", "admin")
        .status(201)
        .body(json!({
            "id": "{{uuid}}",
            "name": "{{faker.name}}",
            "created": true
        }))
        .priority(10)
        .build();

    // Create the mock via REST API
    client.create_mock(mock).await?;
    Ok(())
}
Authentication
Currently, the API does not implement authentication. In production deployments, consider adding authentication middleware.
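For example, a minimal token check could be layered onto the admin router with axum middleware. This is a sketch, not a built-in feature; the token value is a placeholder and the function name is arbitrary:
use axum::{extract::Request, http::StatusCode, middleware::Next, response::Response};

// Reject any request that lacks the expected bearer token.
async fn require_token(req: Request, next: Next) -> Result<Response, StatusCode> {
    let authorized = req
        .headers()
        .get("authorization")
        .and_then(|v| v.to_str().ok())
        .map(|v| v == "Bearer change-me")
        .unwrap_or(false);
    if authorized {
        Ok(next.run(req).await)
    } else {
        Err(StatusCode::UNAUTHORIZED)
    }
}

// Attach with: router.layer(axum::middleware::from_fn(require_token));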
Response Format
All API responses follow a consistent format:
{
"success": boolean,
"data": object | array | null,
"error": string | null,
"timestamp": string
}
Success Response
{
"success": true,
"data": { ... },
"error": null,
"timestamp": "2025-09-17T10:30:00Z"
}
Error Response
{
"success": false,
"data": null,
"error": "Error message",
"timestamp": "2025-09-17T10:30:00Z"
}
API Endpoints
Dashboard
GET /__mockforge/dashboard
Get comprehensive dashboard data including system information, server status, routes, and recent logs.
Response:
{
"success": true,
"data": {
"system": {
"version": "0.1.0",
"uptime_seconds": 3600,
"memory_usage_mb": 128,
"cpu_usage_percent": 15.5,
"active_threads": 8,
"total_routes": 25,
"total_fixtures": 150
},
"servers": [
{
"server_type": "HTTP",
"address": "127.0.0.1:3000",
"running": true,
"start_time": "2025-09-17T09:30:00Z",
"uptime_seconds": 3600,
"active_connections": 5,
"total_requests": 1250
}
],
"routes": [
{
"method": "GET",
"path": "/api/users",
"priority": 0,
"has_fixtures": true,
"latency_ms": 45,
"request_count": 125,
"last_request": "2025-09-17T10:25:00Z",
"error_count": 2
}
],
"recent_logs": [
{
"id": "log_1",
"timestamp": "2025-09-17T10:29:00Z",
"method": "GET",
"path": "/api/users",
"status_code": 200,
"response_time_ms": 45,
"client_ip": "127.0.0.1",
"user_agent": "test-agent",
"headers": {},
"response_size_bytes": 1024,
"error_message": null
}
],
"latency_profile": {
"name": "default",
"base_ms": 50,
"jitter_ms": 20,
"tag_overrides": {}
},
"fault_config": {
"enabled": false,
"failure_rate": 0.0,
"status_codes": [500],
"active_failures": 0
},
"proxy_config": {
"enabled": false,
"upstream_url": null,
"timeout_seconds": 30,
"requests_proxied": 0
}
}
}
Health Check
GET /__mockforge/health
Get system health status.
Response:
{
"status": "healthy",
"services": {
"http": "healthy",
"websocket": "healthy",
"grpc": "healthy"
},
"last_check": "2025-09-17T10:30:00Z",
"issues": []
}
Server Management
GET /__mockforge/server-info
Get information about server addresses and configuration.
Response:
{
"success": true,
"data": {
"http_server": "127.0.0.1:3000",
"ws_server": "127.0.0.1:3001",
"grpc_server": "127.0.0.1:50051"
}
}
POST /__mockforge/servers/restart
Initiate server restart.
Request Body:
{
"reason": "Manual restart requested"
}
Response:
{
"success": true,
"data": {
"message": "Server restart initiated. Please wait for completion."
}
}
GET /__mockforge/servers/restart/status
Get restart status.
Response:
{
"success": true,
"data": {
"in_progress": false,
"initiated_at": null,
"reason": null,
"success": null
}
}
Routes
GET /__mockforge/routes
Get information about configured routes (proxied to HTTP server).
Logs
GET /__mockforge/logs
Get request logs with optional filtering.
Query Parameters:
- method (string): Filter by HTTP method
- path (string): Filter by path pattern
- status (number): Filter by status code
- limit (number): Maximum number of results
Examples:
GET /__mockforge/logs?method=GET&limit=50
GET /__mockforge/logs?path=/api/users&status=200
Response:
{
"success": true,
"data": [
{
"id": "log_1",
"timestamp": "2025-09-17T10:29:00Z",
"method": "GET",
"path": "/api/users",
"status_code": 200,
"response_time_ms": 45,
"client_ip": "127.0.0.1",
"user_agent": "test-agent",
"headers": {},
"response_size_bytes": 1024,
"error_message": null
}
]
}
POST /__mockforge/logs/clear
Clear all request logs.
Response:
{
"success": true,
"data": {
"message": "Logs cleared"
}
}
Metrics
GET /__mockforge/metrics
Get performance metrics and analytics.
Response:
{
"success": true,
"data": {
"requests_by_endpoint": {
"GET /api/users": 125,
"POST /api/users": 45
},
"response_time_percentiles": {
"p50": 45,
"p95": 120,
"p99": 250
},
"error_rate_by_endpoint": {
"GET /api/users": 0.02,
"POST /api/users": 0.0
},
"memory_usage_over_time": [
["2025-09-17T10:25:00Z", 120],
["2025-09-17T10:26:00Z", 125]
],
"cpu_usage_over_time": [
["2025-09-17T10:25:00Z", 12.5],
["2025-09-17T10:26:00Z", 15.2]
]
}
}
Configuration
GET /__mockforge/config
Get current configuration settings.
Response:
{
"success": true,
"data": {
"latency": {
"enabled": true,
"base_ms": 50,
"jitter_ms": 20,
"tag_overrides": {}
},
"faults": {
"enabled": false,
"failure_rate": 0.0,
"status_codes": [500, 502, 503]
},
"proxy": {
"enabled": false,
"upstream_url": null,
"timeout_seconds": 30
},
"validation": {
"mode": "enforce",
"aggregate_errors": true,
"validate_responses": false,
"overrides": {}
}
}
}
POST /__mockforge/config/latency
Update latency configuration.
Request Body:
{
"config_type": "latency",
"data": {
"base_ms": 100,
"jitter_ms": 50,
"tag_overrides": {
"auth": 200
}
}
}
POST /__mockforge/config/faults
Update fault injection configuration.
Request Body:
{
"config_type": "faults",
"data": {
"enabled": true,
"failure_rate": 0.1,
"status_codes": [500, 502, 503]
}
}
POST /__mockforge/config/proxy
Update proxy configuration.
Request Body:
{
"config_type": "proxy",
"data": {
"enabled": true,
"upstream_url": "http://api.example.com",
"timeout_seconds": 60
}
}
POST /__mockforge/validation
Update validation settings.
Request Body:
{
"mode": "warn",
"aggregate_errors": false,
"validate_responses": true,
"overrides": {
"GET /health": "off"
}
}
Environment Variables
GET /__mockforge/env
Get relevant environment variables.
Response:
{
"success": true,
"data": {
"MOCKFORGE_LATENCY_ENABLED": "true",
"MOCKFORGE_HTTP_PORT": "3000",
"RUST_LOG": "info"
}
}
POST /__mockforge/env
Update an environment variable (runtime only).
Request Body:
{
"key": "MOCKFORGE_LOG_LEVEL",
"value": "debug"
}
Response:
{
"success": true,
"data": {
"message": "Environment variable MOCKFORGE_LOG_LEVEL updated to 'debug'. Note: This change is not persisted and will be lost on restart."
}
}
Files
POST /__mockforge/files/content
Get file content.
Request Body:
{
"file_path": "config.yaml",
"file_type": "yaml"
}
Response:
{
"success": true,
"data": "http:\n request_validation: \"enforce\"\n aggregate_validation_errors: true\n"
}
POST /__mockforge/files/save
Save file content.
Request Body:
{
"file_path": "config.yaml",
"content": "http:\n port: 9080\n"
}
Response:
{
"success": true,
"data": {
"message": "File saved successfully"
}
}
Fixtures
GET /__mockforge/fixtures
Get all fixtures with metadata.
Response:
{
"success": true,
"data": [
{
"id": "fixture_123",
"protocol": "http",
"method": "GET",
"path": "/api/users",
"saved_at": "2025-09-17T09:00:00Z",
"file_size": 2048,
"file_path": "http/get/api_users_123.json",
"fingerprint": "abc123",
"metadata": { ... }
}
]
}
POST /__mockforge/fixtures/delete
Delete a fixture.
Request Body:
{
"fixture_id": "fixture_123"
}
POST /__mockforge/fixtures/delete-bulk
Delete multiple fixtures.
Request Body:
{
"fixture_ids": ["fixture_123", "fixture_456"]
}
Response:
{
"success": true,
"data": {
"deleted_count": 2,
"total_requested": 2,
"errors": []
}
}
GET /__mockforge/fixtures/download?id=fixture_123
Download a fixture file.
Response: Binary file download
Smoke Tests
GET /__mockforge/smoke
Get smoke test results.
GET /__mockforge/smoke/run
Run smoke tests against fixtures.
Response:
{
"success": true,
"data": {
"message": "Smoke tests started. Check results in the smoke tests section."
}
}
Error Codes
HTTP Status Codes
- 200 OK: Success
- 400 Bad Request: Invalid request parameters
- 404 Not Found: Endpoint or resource not found
- 500 Internal Server Error: Server error
Common Error Messages
"Invalid config type": Configuration update with invalid type"Failed to load fixtures": Error reading fixture files"Path traversal detected": Security violation in file paths"Server restart already in progress": Attempted restart while one is running
Rate Limiting
Currently, no rate limiting is implemented. Consider adding rate limiting for production deployments.
CORS
The API includes CORS middleware allowing cross-origin requests from web applications.
WebSocket Support
The admin UI supports real-time updates through WebSocket connections for live monitoring of metrics and logs.
Examples
Complete Dashboard Fetch
const response = await fetch('/__mockforge/dashboard');
const data = await response.json();
if (data.success) {
console.log('System uptime:', data.data.system.uptime_seconds);
console.log('Active servers:', data.data.servers.length);
}
Update Latency Configuration
const response = await fetch('/__mockforge/config/latency', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
config_type: 'latency',
data: {
base_ms: 100,
jitter_ms: 25
}
})
});
const result = await response.json();
console.log(result.data.message);
Filter Logs
const response = await fetch('/__mockforge/logs?method=GET&status=200&limit=100');
const data = await response.json();
data.data.forEach(log => {
console.log(`${log.method} ${log.path} - ${log.status_code} (${log.response_time_ms}ms)`);
});
Development
Running Tests
# Run all tests
cargo test --package mockforge-ui
# Run integration tests
cargo test --package mockforge-ui --test integration
# Run smoke tests
cargo test --package mockforge-ui --test smoke
Building Documentation
# Generate API documentation
cargo doc --package mockforge-ui --open
Security Considerations
- Input Validation: All inputs should be validated
- Path Traversal: File operations prevent directory traversal
- Rate Limiting: Consider implementing rate limiting
- Authentication: Add authentication for production use
- HTTPS: Use HTTPS in production
- CORS: Properly configure CORS policies
Contributing
When adding new API endpoints:
- Follow the established response format
- Add comprehensive error handling
- Include integration tests
- Update this documentation
- Ensure proper CORS and security measures
Rust API Reference
MockForge provides comprehensive Rust libraries for programmatic usage and extension. This reference covers the main crates and their APIs.
Crate Overview
MockForge consists of several interconnected crates:
- mockforge-cli: Command-line interface and main executable
- mockforge-core: Core functionality shared across protocols
- mockforge-http: HTTP REST API mocking
- mockforge-grpc: gRPC service mocking
- mockforge-ui: Web-based admin interface
Getting Started
Add MockForge to your Cargo.toml:
[dependencies]
mockforge-core = "0.1"
mockforge-http = "0.1"
mockforge-grpc = "0.1"
For development or testing, you might want to use path dependencies:
[dependencies]
mockforge-core = { path = "../mockforge/crates/mockforge-core" }
mockforge-http = { path = "../mockforge/crates/mockforge-http" }
mockforge-grpc = { path = "../mockforge/crates/mockforge-grpc" }
Core Concepts
Configuration System
MockForge uses a hierarchical configuration system that can be built programmatically:
use mockforge_core::config::MockForgeConfig;

let config = MockForgeConfig {
    server: ServerConfig {
        http_port: Some(3000),
        ws_port: Some(3001),
        grpc_port: Some(50051),
    },
    validation: ValidationConfig {
        mode: ValidationMode::Enforce,
        aggregate_errors: false,
    },
    response: ResponseConfig {
        template_expand: true,
    },
    ..Default::default()
};
Template System
MockForge includes a powerful template engine for dynamic content generation:
use mockforge_core::template::{TemplateEngine, Context};

let engine = TemplateEngine::new();
let context = Context::new()
    .with_value("user_id", "12345")
    .with_value("timestamp", "2025-09-12T10:00:00Z");

let result = engine.render("User {{user_id}} logged in at {{timestamp}}", &context)?;
assert_eq!(result, "User 12345 logged in at 2025-09-12T10:00:00Z");
Error Handling
MockForge uses the anyhow crate for error handling:
use anyhow::{Result, Context};

fn start_server(config: &Config) -> Result<()> {
    let server = HttpServer::new(config)
        .context("Failed to create HTTP server")?;
    server.start()
        .context("Failed to start server")?;
    Ok(())
}
HTTP API
Basic HTTP Server
use mockforge_http::{HttpServer, HttpConfig};
use mockforge_core::config::ServerConfig;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create HTTP configuration
    let http_config = HttpConfig {
        spec_path: Some("api-spec.yaml".to_string()),
        validation_mode: ValidationMode::Warn,
        template_expand: true,
    };

    // Start HTTP server
    let mut server = HttpServer::new(http_config);
    server.start(([127, 0, 0, 1], 3000)).await?;

    println!("HTTP server running on http://localhost:3000");
    Ok(())
}
Custom Route Handlers
use mockforge_http::{HttpServer, RouteHandler};
use warp::{Filter, Reply};

struct CustomHandler;

impl RouteHandler for CustomHandler {
    fn handle(&self, path: &str, method: &str) -> Option<Box<dyn Reply>> {
        if path == "/custom" && method == "GET" {
            Some(Box::new(warp::reply::json(&serde_json::json!({
                "message": "Custom response",
                "timestamp": chrono::Utc::now()
            }))))
        } else {
            None
        }
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let handler = CustomHandler;
    let server = HttpServer::with_handler(handler);
    server.start(([127, 0, 0, 1], 3000)).await?;
    Ok(())
}
gRPC API
Basic gRPC Server
use mockforge_grpc::{GrpcServer, GrpcConfig};
use std::path::Path;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Configure proto discovery
    let config = GrpcConfig {
        proto_dir: Path::new("proto/"),
        enable_reflection: true,
        ..Default::default()
    };

    // Start gRPC server
    let server = GrpcServer::new(config);
    server.start("127.0.0.1:50051").await?;

    println!("gRPC server running on 127.0.0.1:50051");
    Ok(())
}
Custom Service Implementation
use mockforge_grpc::{ServiceRegistry, ServiceImplementation};
use prost::Message;
use tonic::{Request, Response, Status};

// Generated from proto file
mod greeter {
    include!("generated/greeter.rs");
}

pub struct GreeterService;

#[tonic::async_trait]
impl greeter::greeter_server::Greeter for GreeterService {
    async fn say_hello(
        &self,
        request: Request<greeter::HelloRequest>,
    ) -> Result<Response<greeter::HelloReply>, Status> {
        let name = request.into_inner().name;
        let reply = greeter::HelloReply {
            message: format!("Hello, {}!", name),
            timestamp: Some(prost_types::Timestamp::from(std::time::SystemTime::now())),
        };
        Ok(Response::new(reply))
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let service = GreeterService {};
    let server = GrpcServer::with_service(service);
    server.start("127.0.0.1:50051").await?;
    Ok(())
}
WebSocket API
Basic WebSocket Server
use mockforge_ws::{WebSocketServer, WebSocketConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = WebSocketConfig {
        port: 3001,
        replay_file: Some("ws-replay.jsonl".to_string()),
        ..Default::default()
    };

    let server = WebSocketServer::new(config);
    server.start().await?;

    println!("WebSocket server running on ws://localhost:3001");
    Ok(())
}
Custom Message Handler
use mockforge_ws::{WebSocketServer, MessageHandler};
use futures_util::{SinkExt, StreamExt};

struct EchoHandler;

impl MessageHandler for EchoHandler {
    async fn handle_message(&self, message: String) -> String {
        format!("Echo: {}", message)
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let handler = EchoHandler {};
    let server = WebSocketServer::with_handler(handler);
    server.start().await?;
    Ok(())
}
This Rust API reference provides the foundation for programmatic usage of MockForge. For protocol-specific details, see the HTTP, gRPC, and WebSocket API documentation.
HTTP Module
The mockforge_http crate provides comprehensive HTTP/REST API mocking capabilities with OpenAPI integration, AI-powered responses, and advanced management features.
Core Functions
build_router
pub async fn build_router(
    spec_path: Option<String>,
    options: Option<ValidationOptions>,
    failure_config: Option<FailureConfig>,
) -> Router

Creates a basic HTTP router with optional OpenAPI specification support.
Parameters:
- spec_path: Optional path to OpenAPI specification file
- options: Optional validation options for request/response validation
- failure_config: Optional failure injection configuration
Returns: Axum Router configured for HTTP mocking
Example:
use mockforge_http::build_router;
use mockforge_core::ValidationOptions;

let router = build_router(
    Some("./api.yaml".to_string()),
    Some(ValidationOptions::enforce()),
    None,
).await;
build_router_with_auth
pub async fn build_router_with_auth(
    spec_path: Option<String>,
    options: Option<ValidationOptions>,
    auth_config: Option<AuthConfig>,
) -> Router

Creates an HTTP router with authentication support.
Parameters:
- spec_path: Optional path to OpenAPI specification file
- options: Optional validation options
- auth_config: Authentication configuration (OAuth2, JWT, API keys)
Returns: Axum Router with authentication middleware
Example:
use mockforge_http::build_router_with_auth;
use mockforge_core::config::AuthConfig;

let auth_config = AuthConfig {
    oauth2: Some(OAuth2Config {
        client_id: "client123".to_string(),
        client_secret: "secret".to_string(),
        ..Default::default()
    }),
    ..Default::default()
};

let router = build_router_with_auth(
    Some("./api.yaml".to_string()),
    None,
    Some(auth_config),
).await;
build_router_with_chains
pub async fn build_router_with_chains(
    spec_path: Option<String>,
    options: Option<ValidationOptions>,
    chain_config: Option<RequestChainingConfig>,
) -> Router

Creates an HTTP router with request chaining support for multi-step workflows.
Parameters:
- spec_path: Optional path to OpenAPI specification file
- options: Optional validation options
- chain_config: Request chaining configuration
Returns: Axum Router with chaining capabilities
build_router_with_multi_tenant
pub async fn build_router_with_multi_tenant(
    spec_path: Option<String>,
    options: Option<ValidationOptions>,
    failure_config: Option<FailureConfig>,
    multi_tenant_config: Option<MultiTenantConfig>,
    route_configs: Option<Vec<RouteConfig>>,
    cors_config: Option<HttpCorsConfig>,
) -> Router

Creates an HTTP router with multi-tenant workspace support.
Parameters:
- spec_path: Optional path to OpenAPI specification file
- options: Optional validation options
- failure_config: Optional failure injection configuration
- multi_tenant_config: Multi-tenant workspace configuration
- route_configs: Custom route configurations
- cors_config: CORS configuration
Returns: Axum Router with multi-tenant support
build_router_with_traffic_shaping
pub async fn build_router_with_traffic_shaping(
    spec_path: Option<String>,
    options: Option<ValidationOptions>,
    traffic_shaper: Option<TrafficShaper>,
    traffic_shaping_enabled: bool,
) -> Router

Creates an HTTP router with traffic shaping capabilities.
Parameters:
- spec_path: Optional path to OpenAPI specification file
- options: Optional validation options
- traffic_shaper: Traffic shaping configuration
- traffic_shaping_enabled: Whether traffic shaping is active
Returns: Axum Router with traffic shaping middleware
Server Functions
serve_router
pub async fn serve_router(
    port: u16,
    app: Router,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>>

Starts the HTTP server on the specified port.
Parameters:
- port: Port number to bind to
- app: Axum router to serve
Returns: Result<(), Error> indicating server startup success
Errors:
- Port binding failures
- Server startup errors
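serve_router pairs naturally with the build_router functions above; a minimal sketch:
use mockforge_http::{build_router, serve_router};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // Build a router from a spec, then bind and serve it on port 3000.
    let app = build_router(Some("./api.yaml".to_string()), None, None).await;
    serve_router(3000, app).await
}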
start
pub async fn start(
    port: u16,
    spec_path: Option<String>,
    options: Option<ValidationOptions>,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>>

Convenience function to build and start an HTTP server.
Parameters:
- port: Port number to bind to
- spec_path: Optional path to OpenAPI specification file
- options: Optional validation options
start_with_auth_and_latency
pub async fn start_with_auth_and_latency(
    port: u16,
    spec_path: Option<String>,
    options: Option<ValidationOptions>,
    auth_config: Option<AuthConfig>,
    latency_profile: Option<LatencyProfile>,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>>

Starts HTTP server with authentication and latency simulation.
Parameters:
- port: Port number to bind to
- spec_path: Optional path to OpenAPI specification file
- options: Optional validation options
- auth_config: Authentication configuration
- latency_profile: Latency injection profile
Management API
management_router
pub fn management_router(state: ManagementState) -> Router
Creates a management API router for server control and monitoring.
Parameters:
state: Management state containing server statistics and configuration
Returns: Axum Router with management endpoints
Endpoints:
- GET /health - Health check
- GET /stats - Server statistics
- GET /routes - Route information
- GET /coverage - API coverage metrics
- GET/POST/PUT/DELETE /mocks - Mock management
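Any HTTP client can consume these endpoints. A sketch fetching server statistics with reqwest (assuming the management router is nested at /__mockforge, as in the "Server with Management API" example later on this page, and that reqwest's json feature is enabled):
use serde_json::Value;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Fetch the stats endpoint from a management router mounted at /__mockforge.
    let stats: Value = reqwest::get("http://localhost:3000/__mockforge/stats")
        .await?
        .json()
        .await?;
    println!("{stats}");
    Ok(())
}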
ws_management_router
pub fn ws_management_router(state: WsManagementState) -> Router
Creates a WebSocket management router for real-time monitoring.
Parameters:
state: WebSocket management state
Returns: Axum Router with WebSocket management endpoints
AI Integration
process_response_with_ai
pub async fn process_response_with_ai(
    response_body: Option<Value>,
    intelligent_config: Option<Value>,
    drift_config: Option<Value>,
) -> Result<Value>

Processes a response body using AI features if configured.
Parameters:
- response_body: Base response body as JSON Value
- intelligent_config: Intelligent mock generation configuration
- drift_config: Data drift simulation configuration
Returns: Result<Value, Error> with processed response
Example:
use mockforge_http::process_response_with_ai;
use serde_json::json;

let config = json!({
    "enabled": true,
    "prompt": "Generate realistic user data"
});

let response = process_response_with_ai(
    Some(json!({"name": "John"})),
    Some(config),
    None,
).await?;
Data Structures
HttpServerState
pub struct HttpServerState {
    pub routes: Vec<RouteInfo>,
    pub rate_limiter: Option<Arc<GlobalRateLimiter>>,
}

Shared state for HTTP server route information and rate limiting.
Fields:
- routes: Vector of route information
- rate_limiter: Optional global rate limiter
Methods:
impl HttpServerState {
    pub fn new() -> Self
    pub fn with_routes(routes: Vec<RouteInfo>) -> Self
    pub fn with_rate_limiter(rate_limiter: Arc<GlobalRateLimiter>) -> Self
}
RouteInfo
pub struct RouteInfo {
    pub method: String,
    pub path: String,
    pub operation_id: Option<String>,
    pub summary: Option<String>,
    pub description: Option<String>,
    pub parameters: Vec<String>,
}

Information about an HTTP route.
Fields:
- method: HTTP method (GET, POST, etc.)
- path: Route path pattern
- operation_id: Optional OpenAPI operation ID
- summary: Optional route summary
- description: Optional route description
- parameters: List of parameter names
ManagementState
pub struct ManagementState {
    pub mocks: Arc<RwLock<Vec<MockConfig>>>,
    pub spec: Option<Arc<OpenApiSpec>>,
    pub spec_path: Option<String>,
    pub port: u16,
    pub start_time: Instant,
    pub request_counter: Arc<RwLock<u64>>,
}

State for the management API.
Fields:
- mocks: Thread-safe vector of mock configurations
- spec: Optional OpenAPI specification
- spec_path: Optional path to spec file
- port: Server port
- start_time: Server startup timestamp
- request_counter: Request counter for statistics
Methods:
impl ManagementState {
    pub fn new(
        spec: Option<Arc<OpenApiSpec>>,
        spec_path: Option<String>,
        port: u16,
    ) -> Self
}
MockConfig
pub struct MockConfig {
    pub id: String,
    pub name: String,
    pub method: String,
    pub path: String,
    pub response: MockResponse,
    pub enabled: bool,
    pub latency_ms: Option<u64>,
    pub status_code: Option<u16>,
}

Configuration for a mock endpoint.
Fields:
- id: Unique mock identifier
- name: Human-readable name
- method: HTTP method
- path: Route path
- response: Mock response configuration
- enabled: Whether mock is active
- latency_ms: Optional latency injection
- status_code: Optional status code override
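Putting the fields together, a mock definition might look like the following sketch (MockResponse is documented next; the endpoint and values are illustrative):
use mockforge_http::{MockConfig, MockResponse};
use serde_json::json;

// A simple GET mock with a small injected latency.
let mock = MockConfig {
    id: "ping".to_string(),
    name: "Ping".to_string(),
    method: "GET".to_string(),
    path: "/api/ping".to_string(),
    response: MockResponse {
        body: json!({"ok": true}),
        headers: None,
    },
    enabled: true,
    latency_ms: Some(25),
    status_code: Some(200),
};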
MockResponse
pub struct MockResponse {
    pub body: Value,
    pub headers: Option<HashMap<String, String>>,
}

Mock response configuration.
Fields:
- body: JSON response body
- headers: Optional HTTP headers
ServerStats
pub struct ServerStats {
    pub uptime_seconds: u64,
    pub total_requests: u64,
    pub active_mocks: usize,
    pub enabled_mocks: usize,
    pub registered_routes: usize,
}

Server statistics.
Fields:
- uptime_seconds: Server uptime in seconds
- total_requests: Total requests processed
- active_mocks: Number of configured mocks
- enabled_mocks: Number of enabled mocks
- registered_routes: Number of registered routes
ServerConfig
pub struct ServerConfig {
    pub version: String,
    pub port: u16,
    pub has_openapi_spec: bool,
    pub spec_path: Option<String>,
}

Server configuration information.
Fields:
- version: MockForge version
- port: Server port
- has_openapi_spec: Whether OpenAPI spec is loaded
- spec_path: Optional path to spec file
AI Types
AiResponseConfig
pub struct AiResponseConfig {
    pub enabled: bool,
    pub rag_config: RagConfig,
    pub prompt: String,
    pub schema: Option<Value>,
}

Configuration for AI-powered response generation.
Fields:
- enabled: Whether AI responses are enabled
- rag_config: RAG (Retrieval-Augmented Generation) configuration
- prompt: AI generation prompt
- schema: Optional response schema
AiResponseHandler
pub struct AiResponseHandler { /* fields omitted */ }

Handler for AI-powered response generation.
Methods:
impl AiResponseHandler {
    pub fn new(
        intelligent_config: Option<IntelligentMockConfig>,
        drift_config: Option<DataDriftConfig>,
    ) -> Result<Self>
    pub fn is_enabled(&self) -> bool
    pub async fn generate_response(&mut self, base_response: Option<Value>) -> Result<Value>
    pub async fn reset_drift(&self)
}
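A sketch of driving the handler through the methods above (both configs are left as None here, so is_enabled() reports false and the enrichment branch is skipped):
use mockforge_http::AiResponseHandler;
use serde_json::json;

// Construct without intelligent-mock or drift configs.
let mut handler = AiResponseHandler::new(None, None)?;
if handler.is_enabled() {
    // Enrich a base response via the configured AI features.
    let enriched = handler.generate_response(Some(json!({"id": 1}))).await?;
    println!("{enriched}");
}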
Coverage Types
CoverageReport
pub struct CoverageReport {
    pub routes: HashMap<String, RouteCoverage>,
    pub total_routes: usize,
    pub covered_routes: usize,
    pub coverage_percentage: f64,
}

API coverage report.
Fields:
- routes: Coverage data per route
- total_routes: Total number of routes
- covered_routes: Number of covered routes
- coverage_percentage: Coverage percentage (0.0-100.0)
RouteCoverage
pub struct RouteCoverage {
    pub method: String,
    pub path: String,
    pub methods: HashMap<String, MethodCoverage>,
    pub total_requests: u64,
    pub covered_methods: usize,
}

Coverage information for a specific route.
Fields:
- method: HTTP method
- path: Route path
- methods: Coverage per HTTP method
- total_requests: Total requests to this route
- covered_methods: Number of methods with coverage
MethodCoverage
pub struct MethodCoverage {
    pub request_count: u64,
    pub response_codes: HashMap<u16, u64>,
    pub last_request: Option<DateTime<Utc>>,
}

Coverage information for a specific HTTP method.
Fields:
- request_count: Number of requests
- response_codes: Response code distribution
- last_request: Timestamp of last request
Coverage Functions
calculate_coverage
pub fn calculate_coverage(
    routes: &[RouteInfo],
    request_logs: &[RequestLogEntry],
) -> CoverageReport

Calculates API coverage from route information and request logs.
Parameters:
- routes: Available routes
- request_logs: Historical request logs
Returns: CoverageReport with coverage statistics
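A sketch of producing a report from routes and logs (the import path for RequestLogEntry is an assumption; substitute wherever the log entries are recorded in your setup):
use mockforge_http::{calculate_coverage, RouteInfo};
use mockforge_core::RequestLogEntry; // assumed location of the log entry type

let routes: Vec<RouteInfo> = vec![/* discovered routes */];
let request_logs: Vec<RequestLogEntry> = vec![/* collected logs */];

let report = calculate_coverage(&routes, &request_logs);
println!("{:.1}% of routes covered", report.coverage_percentage);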
get_coverage_handler
pub async fn get_coverage_handler(State(state): State<HttpServerState>) -> Json<Value>
Axum handler for coverage endpoint.
Returns: JSON response with coverage data
Middleware Functions
collect_http_metrics
pub fn collect_http_metrics(request: &Request, response: &Response, duration: Duration)

Collects HTTP metrics for observability.
Parameters:
- request: HTTP request
- response: HTTP response
- duration: Request processing duration
http_tracing_middleware
pub fn http_tracing_middleware(
    request: Request,
    next: Next,
) -> impl Future<Output = Response>

Middleware for HTTP request tracing.
Parameters:
- request: Incoming HTTP request
- next: Next middleware in chain
Returns: Future resolving to HTTP response
Error Types
All functions return Result<T, Box<dyn std::error::Error + Send + Sync>> for error handling. Common errors include:
- File I/O errors (spec file reading)
- JSON parsing errors
- Server binding errors
- Validation errors
- AI service errors
Constants
- DEFAULT_RATE_LIMIT_RPM: Default requests per minute (1000)
- DEFAULT_RATE_LIMIT_BURST: Default burst size (2000)
Feature Flags
data-faker: Enables rich data generation features
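To opt in, enable the feature in Cargo.toml (version as in the Getting Started section above; this assumes the feature is exposed by the mockforge-http crate):
[dependencies]
mockforge-http = { version = "0.1", features = ["data-faker"] }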
Examples
Basic HTTP Server
use mockforge_http::build_router;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let router = build_router(
        Some("./api.yaml".to_string()),
        None,
        None,
    ).await;

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await?;
    axum::serve(listener, router).await?;
    Ok(())
}
Server with Management API
use mockforge_http::{build_router, management_router, ManagementState};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build main router
    let app = build_router(None, None, None).await;

    // Add management API
    let mgmt_state = ManagementState::new(None, None, 3000);
    let mgmt_router = management_router(mgmt_state);
    let app = app.nest("/__mockforge", mgmt_router);

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await?;
    axum::serve(listener, app).await?;
    Ok(())
}
AI-Powered Responses
use mockforge_http::{AiResponseConfig, process_response_with_ai};
use mockforge_data::RagConfig;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let ai_config = AiResponseConfig {
        enabled: true,
        rag_config: RagConfig {
            provider: "openai".to_string(),
            model: "gpt-3.5-turbo".to_string(),
            api_key: Some("sk-...".to_string()),
            ..Default::default()
        },
        prompt: "Generate realistic user data".to_string(),
        schema: None,
    };

    let response = process_response_with_ai(
        Some(serde_json::json!({"id": 1})),
        Some(serde_json::to_value(ai_config)?),
        None,
    ).await?;

    println!("AI response: {}", response);
    Ok(())
}
gRPC Module
The mockforge_grpc crate provides dynamic gRPC service discovery and mocking with HTTP bridge capabilities.
Core Functions
start
pub async fn start(port: u16) -> Result<(), Box<dyn std::error::Error + Send + Sync>>
Starts a gRPC server with default configuration on the specified port.
Parameters:
port: Port number to bind the gRPC server to
Returns: Result<(), Error> indicating server startup success
Example:
use mockforge_grpc::start;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    start(50051).await?;
    Ok(())
}
start_with_config
pub async fn start_with_config(
    port: u16,
    latency_profile: Option<LatencyProfile>,
    config: DynamicGrpcConfig,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>>
Starts a gRPC server with custom configuration and optional latency simulation.
Parameters:
- port: Port number to bind the gRPC server to
- latency_profile: Optional latency injection profile
- config: Dynamic gRPC configuration
Returns: Result<(), Error> indicating server startup success
Example:
use mockforge_grpc::{start_with_config, DynamicGrpcConfig};
use mockforge_core::LatencyProfile;

let config = DynamicGrpcConfig {
    proto_dir: "./proto".to_string(),
    enable_reflection: true,
    ..Default::default()
};

start_with_config(50051, Some(LatencyProfile::normal()), config).await?;
Configuration Types
DynamicGrpcConfig
pub struct DynamicGrpcConfig {
    pub proto_dir: String,
    pub enable_reflection: bool,
    pub excluded_services: Vec<String>,
    pub http_bridge: Option<HttpBridgeConfig>,
}
Configuration for dynamic gRPC service discovery.
Fields:
- proto_dir: Directory containing .proto files (default: “proto”)
- enable_reflection: Whether to enable gRPC reflection (default: false)
- excluded_services: List of services to exclude from discovery
- http_bridge: Optional HTTP bridge configuration
Methods:
impl DynamicGrpcConfig {
    pub fn default() -> Self
}
Example:
let config = DynamicGrpcConfig {
    proto_dir: "./my-protos".to_string(),
    enable_reflection: true,
    excluded_services: vec!["HealthService".to_string()],
    http_bridge: Some(HttpBridgeConfig {
        enabled: true,
        port: 8080,
        generate_openapi: true,
        cors_enabled: false,
    }),
};
HttpBridgeConfig
pub struct HttpBridgeConfig {
    pub enabled: bool,
    pub port: u16,
    pub generate_openapi: bool,
    pub cors_enabled: bool,
}
Configuration for HTTP bridge functionality.
Fields:
- enabled: Whether HTTP bridge is enabled (default: true)
- port: HTTP server port (default: 8080)
- generate_openapi: Whether to generate OpenAPI specs (default: true)
- cors_enabled: Whether CORS is enabled (default: false)
Methods:
impl HttpBridgeConfig {
    pub fn default() -> Self
}
Service Registry
ServiceRegistry
pub struct ServiceRegistry { /* fields omitted */ }
Registry containing discovered gRPC services.
Methods:
impl ServiceRegistry {
    pub fn new() -> Self
    pub fn with_descriptor_pool(descriptor_pool: DescriptorPool) -> Self
    pub fn descriptor_pool(&self) -> &DescriptorPool
    pub fn register(&mut self, name: String, service: DynamicGrpcService)
    pub fn get(&self, name: &str) -> Option<&Arc<DynamicGrpcService>>
    pub fn service_names(&self) -> Vec<String>
}
Example:
use mockforge_grpc::ServiceRegistry;

let mut registry = ServiceRegistry::new();
registry.register("MyService".to_string(), dynamic_service);
println!("Registered services: {:?}", registry.service_names());
Dynamic Service Types
DynamicGrpcService
pub struct DynamicGrpcService { /* fields omitted */ }
Dynamically generated gRPC service implementation.
Methods:
impl DynamicGrpcService {
    pub fn new(proto_service: ProtoService, config: Option<ServiceConfig>) -> Self
}
ProtoService
pub struct ProtoService {
    pub name: String,
    pub methods: HashMap<String, ProtoMethod>,
    pub package: String,
}
Parsed protobuf service definition.
Fields:
- name: Service name
- methods: Map of method names to method definitions
- package: Protobuf package name
ProtoMethod
pub struct ProtoMethod {
    pub name: String,
    pub input_type: String,
    pub output_type: String,
    pub is_client_streaming: bool,
    pub is_server_streaming: bool,
}
Parsed protobuf method definition.
Fields:
- name: Method name
- input_type: Input message type name
- output_type: Output message type name
- is_client_streaming: Whether the method accepts client streaming
- is_server_streaming: Whether the method returns server streaming
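Together, ProtoService and ProtoMethod carry enough information to enumerate a service. A minimal sketch (import path assumed) that prints each method with its streaming shape:

use mockforge_grpc::ProtoService; // import path assumed

// Sketch: label each method's streaming pattern from the two flags.
fn describe(service: &ProtoService) {
    println!("{}.{}", service.package, service.name);
    for (name, method) in &service.methods {
        let shape = match (method.is_client_streaming, method.is_server_streaming) {
            (false, false) => "unary",
            (false, true) => "server streaming",
            (true, false) => "client streaming",
            (true, true) => "bidirectional",
        };
        println!("  {}: {} -> {} ({})", name, method.input_type, method.output_type, shape);
    }
}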
Mock Response Types
MockResponse
pub enum MockResponse {
    Unary(Value),
    ServerStream(Vec<Value>),
    ClientStream(Value),
    BidiStream(Vec<Value>),
}
Mock response types for different gRPC method patterns.
Variants:
- Unary(Value): Single request-response
- ServerStream(Vec<Value>): Server streaming response
- ClientStream(Value): Client streaming response
- BidiStream(Vec<Value>): Bidirectional streaming
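A sketch of constructing one response per pattern, assuming the variants wrap serde_json::Value as shown above:

use mockforge_grpc::MockResponse; // import path assumed
use serde_json::json;

// One illustrative value per streaming pattern.
let unary = MockResponse::Unary(json!({"ok": true}));
let server_stream = MockResponse::ServerStream(vec![json!({"seq": 1}), json!({"seq": 2})]);
let client_stream = MockResponse::ClientStream(json!({"received": 2}));
let bidi = MockResponse::BidiStream(vec![json!({"echo": "hi"})]);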
Reflection Types
MockReflectionProxy
pub struct MockReflectionProxy { /* fields omitted */ }
Proxy for gRPC reflection protocol.
Methods:
impl MockReflectionProxy {
    pub async fn new(config: ProxyConfig, registry: Arc<ServiceRegistry>) -> Result<Self>
}
ReflectionProxy
pub trait ReflectionProxy {
    fn list_services(&self) -> Vec<String>;
    fn get_service_descriptor(&self, service_name: &str) -> Option<&prost_reflect::ServiceDescriptor>;
    fn get_method_descriptor(&self, service_name: &str, method_name: &str) -> Option<&prost_reflect::MethodDescriptor>;
}
Trait for gRPC reflection functionality.
ProxyConfig
pub struct ProxyConfig {
    pub max_message_size: usize,
    pub connection_timeout: Duration,
    pub request_timeout: Duration,
}
Configuration for reflection proxy.
Fields:
- max_message_size: Maximum message size in bytes (default: 4MB)
- connection_timeout: Connection timeout duration
- request_timeout: Request timeout duration
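For example, a ProxyConfig with explicit limits might be built like this sketch (values are illustrative, not recommended settings):

use std::time::Duration;
use mockforge_grpc::reflection::ProxyConfig; // import path assumed

let proxy_config = ProxyConfig {
    max_message_size: 4 * 1024 * 1024, // 4MB, matching the documented default
    connection_timeout: Duration::from_secs(10),
    request_timeout: Duration::from_secs(30),
};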
Proto Parser
ProtoParser
pub struct ProtoParser { /* fields omitted */ }
Parser for protobuf files.
Methods:
impl ProtoParser {
    pub fn new() -> Self
    pub async fn parse_directory(&mut self, dir: &str) -> Result<()>
    pub fn services(&self) -> &HashMap<String, ProtoService>
    pub fn into_pool(self) -> DescriptorPool
}
Example:
use mockforge_grpc::dynamic::proto_parser::ProtoParser;

let mut parser = ProtoParser::new();
parser.parse_directory("./proto").await?;

let services = parser.services();
println!("Found {} services", services.len());
Discovery Functions
discover_services
pub async fn discover_services(
    config: &DynamicGrpcConfig,
) -> Result<ServiceRegistry, Box<dyn std::error::Error + Send + Sync>>
Discovers and registers gRPC services from proto files.
Parameters:
config: Discovery configuration
Returns: Result<ServiceRegistry, Error> with discovered services
Example:
use mockforge_grpc::{discover_services, DynamicGrpcConfig};

let config = DynamicGrpcConfig {
    proto_dir: "./proto".to_string(),
    ..Default::default()
};

let registry = discover_services(&config).await?;
println!("Discovered services: {:?}", registry.service_names());
Generated Types
Greeter Service
pub mod generated {
    pub mod greeter_server {
        pub trait Greeter: Send + Sync + 'static {
            type SayHelloStreamStream: Stream<Item = Result<HelloReply, Status>> + Send + 'static;
            type ChatStream: Stream<Item = Result<HelloReply, Status>> + Send + 'static;

            async fn say_hello(
                &self,
                request: Request<HelloRequest>,
            ) -> Result<Response<HelloReply>, Status>;

            async fn say_hello_stream(
                &self,
                request: Request<HelloRequest>,
            ) -> Result<Response<Self::SayHelloStreamStream>, Status>;

            async fn say_hello_client_stream(
                &self,
                request: Request<Streaming<HelloRequest>>,
            ) -> Result<Response<HelloReply>, Status>;

            async fn chat(
                &self,
                request: Request<Streaming<HelloRequest>>,
            ) -> Result<Response<Self::ChatStream>, Status>;
        }
    }
}
Generated gRPC service trait with all streaming patterns.
Message Types
HelloRequest
pub struct HelloRequest {
    pub name: String,
}
Request message for greeting service.
Fields:
name: Name to greet
HelloReply
pub struct HelloReply {
    pub message: String,
    pub metadata: Option<HashMap<String, String>>,
    pub items: Vec<String>,
}
Response message for greeting service.
Fields:
- message: Greeting message
- metadata: Optional metadata map
- items: List of items
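A small sketch of constructing these generated messages directly (module path assumed):

use mockforge_grpc::generated::{HelloReply, HelloRequest}; // path assumed

let request = HelloRequest { name: "Ada".to_string() };
let reply = HelloReply {
    message: format!("Hello, {}!", request.name),
    metadata: None,
    items: vec![],
};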
Error Handling
All functions return Result<T, Box<dyn std::error::Error + Send + Sync>>. Common errors include:
- File I/O errors (proto file reading)
- Protobuf parsing errors
- Server binding errors
- Reflection setup errors
- HTTP bridge configuration errors
Constants
DEFAULT_MAX_MESSAGE_SIZE: Default maximum message size (4MB)
Feature Flags
data-faker: Enables advanced data synthesis features
Examples
Basic gRPC Server
use mockforge_grpc::start;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // Starts server on port 50051 with default config.
    // Automatically discovers services from the ./proto directory.
    start(50051).await?;
    Ok(())
}
Server with Reflection
use mockforge_grpc::{start_with_config, DynamicGrpcConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let config = DynamicGrpcConfig {
        proto_dir: "./proto".to_string(),
        enable_reflection: true, // Enable gRPC reflection
        ..Default::default()
    };

    start_with_config(50051, None, config).await?;

    // Now you can use grpcurl:
    //   grpcurl -plaintext localhost:50051 list
    //   grpcurl -plaintext localhost:50051 describe MyService
    Ok(())
}
Server with HTTP Bridge
use mockforge_grpc::{start_with_config, DynamicGrpcConfig};
use mockforge_grpc::dynamic::http_bridge::HttpBridgeConfig;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let config = DynamicGrpcConfig {
        proto_dir: "./proto".to_string(),
        http_bridge: Some(HttpBridgeConfig {
            enabled: true,
            port: 8080,
            generate_openapi: true,
            cors_enabled: false,
        }),
        ..Default::default()
    };

    start_with_config(50051, None, config).await?;

    // gRPC available on localhost:50051
    // REST API available on localhost:8080
    // OpenAPI docs at http://localhost:8080/api/docs
    Ok(())
}
Manual Service Discovery
use mockforge_grpc::{discover_services, DynamicGrpcConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let config = DynamicGrpcConfig {
        proto_dir: "./proto".to_string(),
        excluded_services: vec!["HealthService".to_string()],
        ..Default::default()
    };

    let registry = discover_services(&config).await?;

    println!("Discovered services:");
    for service_name in registry.service_names() {
        println!("  - {}", service_name);
    }

    // Access service descriptors
    if let Some(descriptor) = registry.descriptor_pool().get_service_by_name("MyService") {
        println!("Service methods:");
        for method in descriptor.methods() {
            println!("  - {}", method.name());
        }
    }

    Ok(())
}
Custom Service Implementation
use mockforge_grpc::dynamic::service_generator::DynamicGrpcService;
use mockforge_grpc::dynamic::proto_parser::ProtoParser;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // Parse proto files
    let mut parser = ProtoParser::new();
    parser.parse_directory("./proto").await?;

    // Get a specific service
    if let Some(proto_service) = parser.services().get("MyService") {
        // Create a dynamic service; it will handle all RPC methods
        // with mock responses based on the protobuf definitions.
        let _dynamic_service = DynamicGrpcService::new(proto_service.clone(), None);
    }

    Ok(())
}
Using gRPC Reflection
use mockforge_grpc::{discover_services, DynamicGrpcConfig};
use mockforge_grpc::reflection::{MockReflectionProxy, ProxyConfig};
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let config = DynamicGrpcConfig {
        proto_dir: "./proto".to_string(),
        enable_reflection: true,
        ..Default::default()
    };

    let registry = discover_services(&config).await?;
    let registry_arc = Arc::new(registry);

    let proxy_config = ProxyConfig::default();
    let reflection_proxy = MockReflectionProxy::new(proxy_config, registry_arc).await?;

    // The reflection proxy enables:
    // - Service listing: reflection_proxy.list_services()
    // - Service descriptors: reflection_proxy.get_service_descriptor("MyService")
    // - Method descriptors: reflection_proxy.get_method_descriptor("MyService", "MyMethod")
    Ok(())
}
WebSocket Module
The mockforge_ws crate provides comprehensive WebSocket mocking with replay, proxy, and AI-powered event generation capabilities.
Core Functions
router
pub fn router() -> Router
Creates a basic WebSocket router with echo functionality.
Returns: Axum Router configured for WebSocket connections
Example:
use mockforge_ws::router;

let app = router(); // Routes WebSocket connections to /ws
router_with_latency
pub fn router_with_latency(latency_injector: LatencyInjector) -> Router
Creates a WebSocket router with latency simulation.
Parameters:
latency_injector: Latency injection configuration
Returns: Axum Router with latency simulation
Example:
use mockforge_ws::router_with_latency;
use mockforge_core::{LatencyProfile, latency::LatencyInjector};

let latency = LatencyProfile::slow(); // 300-800ms
let injector = LatencyInjector::new(latency, Default::default());
let app = router_with_latency(injector);
router_with_proxy
pub fn router_with_proxy(proxy_handler: WsProxyHandler) -> Router
Creates a WebSocket router with proxy capabilities.
Parameters:
proxy_handler: WebSocket proxy handler configuration
Returns: Axum Router with proxy functionality
Example:
use mockforge_ws::router_with_proxy;
use mockforge_core::{WsProxyConfig, WsProxyHandler};

let proxy_config = WsProxyConfig {
    upstream_url: "wss://api.example.com/ws".to_string(),
    should_proxy: true,
    ..Default::default()
};

let proxy = WsProxyHandler::new(proxy_config);
let app = router_with_proxy(proxy);
Server Functions
start_with_latency
pub async fn start_with_latency(
    port: u16,
    latency: Option<LatencyProfile>,
) -> Result<(), Box<dyn std::error::Error>>
Starts a WebSocket server with optional latency simulation.
Parameters:
- port: Port number to bind to
- latency: Optional latency profile
Returns: Result<(), Error> indicating server startup success
Example:
use mockforge_ws::start_with_latency;
use mockforge_core::LatencyProfile;

start_with_latency(3001, Some(LatencyProfile::normal())).await?;
AI Event Generation
AiEventGenerator
pub struct AiEventGenerator { /* fields omitted */ }
Generator for AI-powered WebSocket event streams.
Methods:
impl AiEventGenerator {
    pub fn new(config: ReplayAugmentationConfig) -> Result<Self>
    pub async fn stream_events(&self, socket: WebSocket, max_events: Option<usize>)
    pub async fn stream_events_with_rate(
        &self,
        socket: WebSocket,
        max_events: Option<usize>,
        events_per_second: f64,
    )
}
Example:
use mockforge_ws::AiEventGenerator;
use mockforge_data::ReplayAugmentationConfig;

let config = ReplayAugmentationConfig {
    narrative: "Simulate stock market trading".to_string(),
    ..Default::default()
};

let generator = AiEventGenerator::new(config)?;
// stream_events returns (), so there is no Result to propagate here.
generator.stream_events(socket, Some(100)).await;
WebSocketAiConfig
pub struct WebSocketAiConfig {
    pub enabled: bool,
    pub replay: Option<ReplayAugmentationConfig>,
    pub max_events: Option<usize>,
}
Configuration for WebSocket AI features.
Fields:
- enabled: Whether AI features are enabled
- replay: Optional replay augmentation configuration
- max_events: Maximum number of events to generate
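A sketch of enabling AI generation with a capped event count; the narrative text is a placeholder:

use mockforge_ws::WebSocketAiConfig;
use mockforge_data::ReplayAugmentationConfig;

let ai = WebSocketAiConfig {
    enabled: true,
    replay: Some(ReplayAugmentationConfig {
        narrative: "Simulate telemetry from a fleet of sensors".to_string(),
        ..Default::default()
    }),
    max_events: Some(200),
};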
Tracing Functions
create_ws_connection_span
pub fn create_ws_connection_span(request: &Request) -> Span
Creates an OpenTelemetry span for WebSocket connection establishment.
Parameters:
request: HTTP request that initiated the WebSocket connection
Returns: OpenTelemetry Span for connection tracking
create_ws_message_span
pub fn create_ws_message_span(message_size: usize, direction: &str) -> Span
Creates an OpenTelemetry span for WebSocket message processing.
Parameters:
- message_size: Size of the message in bytes
- direction: Message direction (“in” or “out”)
Returns: OpenTelemetry Span for message tracking
record_ws_connection_success
pub fn record_ws_connection_success(span: &Span)
Records successful WebSocket connection establishment.
Parameters:
span: Connection span to record success on
record_ws_message_success
pub fn record_ws_message_success(span: &Span, message_size: usize)
Records successful WebSocket message processing.
Parameters:
- span: Message span to record success on
- message_size: Size of processed message
record_ws_error
pub fn record_ws_error(span: &Span, error: &str)
Records WebSocket error.
Parameters:
- span: Span to record error on
- error: Error description
Template Expansion
Token Expansion Functions
The crate includes internal template expansion functionality for replay files:
fn expand_tokens(text: &str) -> String
Expands template tokens in replay file content.
Supported Tokens:
- {{uuid}}: Generates random UUID
- {{now}}: Current timestamp in RFC3339 format
- {{now+1m}}: Timestamp 1 minute from now
- {{now+1h}}: Timestamp 1 hour from now
- {{randInt min max}}: Random integer between min and max
Example:
let text = "Hello {{uuid}} at {{now}}";
let expanded = expand_tokens(text);
// Result: "Hello 550e8400-e29b-41d4-a716-446655440000 at 2024-01-15T10:30:00Z"
Internal Types
WebSocket Message Handling
The crate uses Axum’s WebSocket types internally:
use axum::extract::ws::{Message, WebSocket, WebSocketUpgrade};
Message Types:
- Message::Text(String): Text message
- Message::Binary(Vec<u8>): Binary message
- Message::Close(Option<CloseFrame>): Connection close
- Message::Ping(Vec<u8>): Ping message
- Message::Pong(Vec<u8>): Pong message
Error Handling
All public functions return Result<T, Box<dyn std::error::Error>>. Common errors include:
- Server binding errors
- WebSocket protocol errors
- File I/O errors (for replay files)
- AI service errors
- Template expansion errors
Constants
- Default WebSocket path: /ws
- Default server port: 3001
Feature Flags
data-faker: Enables rich data generation features
Examples
Basic WebSocket Server
use mockforge_ws::router;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let app = router();

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3001").await?;
    axum::serve(listener, app).await?;
    Ok(())
}
Server with Latency Simulation
use mockforge_ws::start_with_latency;
use mockforge_core::LatencyProfile;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Add 50-200ms latency to all messages
    start_with_latency(3001, Some(LatencyProfile::normal())).await?;
    Ok(())
}
Proxy Server
use mockforge_ws::router_with_proxy;
use mockforge_core::{WsProxyConfig, WsProxyHandler};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let proxy_config = WsProxyConfig {
        upstream_url: "wss://echo.websocket.org".to_string(),
        should_proxy: true,
        message_transform: None,
    };

    let proxy = WsProxyHandler::new(proxy_config);
    let app = router_with_proxy(proxy);

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3001").await?;
    axum::serve(listener, app).await?;
    Ok(())
}
Replay Mode
use mockforge_ws::router;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Set replay file via environment variable
    std::env::set_var("MOCKFORGE_WS_REPLAY_FILE", "./replay.jsonl");
    // Enable template expansion
    std::env::set_var("MOCKFORGE_RESPONSE_TEMPLATE_EXPAND", "1");

    let app = router();

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3001").await?;
    axum::serve(listener, app).await?;
    Ok(())
}
AI Event Generation
use mockforge_ws::AiEventGenerator;
use mockforge_data::ReplayAugmentationConfig;
use axum::extract::ws::WebSocket;

async fn handle_ai_events(socket: WebSocket) {
    let config = ReplayAugmentationConfig {
        narrative: "Simulate a live chat conversation with multiple users".to_string(),
        event_count: 50,
        provider: "openai".to_string(),
        ..Default::default()
    };

    let generator = AiEventGenerator::new(config).expect("valid AI config");
    generator.stream_events_with_rate(socket, None, 2.0).await; // 2 events/sec
}
Custom WebSocket Handler
use axum::{
    extract::ws::{Message, WebSocket, WebSocketUpgrade},
    response::IntoResponse,
};

async fn custom_ws_handler(ws: WebSocketUpgrade) -> impl IntoResponse {
    ws.on_upgrade(|socket| handle_custom_socket(socket))
}

async fn handle_custom_socket(mut socket: WebSocket) {
    while let Some(msg) = socket.recv().await {
        match msg {
            Ok(Message::Text(text)) => {
                // Echo the text message back to the client
                let response = format!("Echo: {}", text);
                if socket.send(Message::Text(response.into())).await.is_err() {
                    break;
                }
            }
            Ok(Message::Close(_)) => break,
            Err(e) => {
                eprintln!("WebSocket error: {}", e);
                break;
            }
            _ => {}
        }
    }
}
Tracing Integration
use mockforge_ws::{
    create_ws_connection_span, create_ws_message_span,
    record_ws_connection_success, record_ws_message_success,
};
use axum::extract::ws::{Message, WebSocket, WebSocketUpgrade};
use axum::response::IntoResponse;

async fn traced_ws_handler(
    ws: WebSocketUpgrade,
    request: axum::http::Request<axum::body::Body>,
) -> impl IntoResponse {
    // Create connection span
    let span = create_ws_connection_span(&request);

    // Record successful connection
    record_ws_connection_success(&span);

    ws.on_upgrade(|socket| handle_socket_with_tracing(socket, span))
}

async fn handle_socket_with_tracing(mut socket: WebSocket, connection_span: tracing::Span) {
    let _guard = connection_span.enter();

    while let Some(msg) = socket.recv().await {
        match msg {
            Ok(Message::Text(text)) => {
                let message_span = create_ws_message_span(text.len(), "in");
                let _msg_guard = message_span.enter();

                // Process message...
                record_ws_message_success(&message_span, text.len());
            }
            // ... other message types
            _ => {}
        }
    }
}
Replay File Format
Replay files use JSON Lines format with the following structure:
{"ts":0,"dir":"out","text":"HELLO {{uuid}}","waitFor":"^CLIENT_READY$"}
{"ts":10,"dir":"out","text":"{\"type\":\"welcome\",\"sessionId\":\"{{uuid}}\"}"}
{"ts":20,"dir":"out","text":"{\"data\":{{randInt 1 100}}}","waitFor":"^ACK$"}
Fields:
- ts: Timestamp offset in milliseconds
- dir: Direction (“in” for received, “out” for sent)
- text: Message content (supports template expansion)
- waitFor: Optional regex pattern to wait for before sending
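For readers generating replay files programmatically, a hypothetical Rust mirror of this entry format (MockForge's internal type may differ):

use serde::Deserialize;

// Hypothetical struct mirroring the documented fields.
#[derive(Debug, Deserialize)]
struct ReplayEntry {
    ts: u64,
    dir: String,
    text: String,
    #[serde(rename = "waitFor")]
    wait_for: Option<String>,
}

// Parse a JSON Lines replay file, one entry per line.
fn parse_replay(jsonl: &str) -> Result<Vec<ReplayEntry>, serde_json::Error> {
    jsonl.lines().map(serde_json::from_str).collect()
}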
Environment Variables
- MOCKFORGE_WS_REPLAY_FILE: Path to replay file for replay mode
- MOCKFORGE_RESPONSE_TEMPLATE_EXPAND: Enable template expansion (“1” or “true”)
Integration with MockForge Core
The WebSocket crate integrates with core MockForge functionality:
- Latency Injection: Uses LatencyInjector for network simulation
- Proxy Handler: Uses WsProxyHandler for upstream forwarding
- Metrics: Integrates with global metrics registry
- Tracing: Uses OpenTelemetry for distributed tracing
- Data Generation: Supports AI-powered content generation
Development Setup
This guide helps contributors get started with MockForge development, including environment setup, development workflow, and project structure.
Prerequisites
Before contributing to MockForge, ensure you have the following installed:
Required Tools
- Rust: Version 1.70.0 or later
- Cargo: Included with Rust
- Git: For version control
- C/C++ Compiler: For native dependencies
- Docker: For containerized development and testing
Recommended Tools
- Visual Studio Code or IntelliJ/CLion with Rust plugins
- cargo-watch for automatic rebuilds
- cargo-edit for dependency management
- cargo-audit for security scanning
- mdbook for documentation development
Environment Setup
1. Install Rust
# Install Rust using rustup
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Add Cargo to PATH
source $HOME/.cargo/env
# Verify installation
rustc --version
cargo --version
2. Clone the Repository
# Clone with SSH (recommended for contributors)
git clone git@github.com:SaaSy-Solutions/mockforge.git
# Or with HTTPS
git clone https://github.com/SaaSy-Solutions/mockforge.git
cd mockforge
# Initialize submodules if any
git submodule update --init --recursive
3. Install Development Tools
# Install cargo-watch for automatic rebuilds
cargo install cargo-watch
# Install cargo-edit for dependency management
cargo install cargo-edit
# Install cargo-audit for security scanning
cargo install cargo-audit
# Install mdbook for documentation
cargo install mdbook mdbook-linkcheck mdbook-toc
# Install additional development tools
cargo install cargo-tarpaulin cargo-udeps cargo-outdated
4. Verify Setup
# Build the project
cargo build
# Run tests
cargo test
# Check code quality
cargo clippy
cargo fmt --check
Development Workflow
Daily Development
1. Create a feature branch:
git checkout -b feature/your-feature-name
2. Make changes with frequent testing:
# Run tests automatically on changes
cargo watch -x test
# Or build automatically
cargo watch -x build
3. Follow code quality standards:
# Format code
cargo fmt
# Lint code
cargo clippy -- -W clippy::pedantic
# Run security audit
cargo audit
4. Write tests for new functionality:
# Add unit tests
cargo test --lib
# Add integration tests
cargo test --test integration
IDE Configuration
Visual Studio Code
1. Install extensions:
- rust-lang.rust-analyzer - Rust language support
- ms-vscode.vscode-json - JSON support
- redhat.vscode-yaml - YAML support
- ms-vscode.vscode-docker - Docker support
2. Recommended settings in .vscode/settings.json:
{
  "rust-analyzer.checkOnSave.command": "clippy",
  "rust-analyzer.cargo.allFeatures": true,
  "editor.formatOnSave": true,
  "editor.codeActionsOnSave": {
    "source.fixAll": "explicit"
  }
}
IntelliJ/CLion
- Install Rust plugin from marketplace
- Enable external linter (clippy)
- Configure code style to match project standards
Pre-commit Setup
Install pre-commit hooks to ensure code quality:
# Install pre-commit if not already installed
pip install pre-commit
# Install hooks
pre-commit install
# Run on all files
pre-commit run --all-files
Project Structure
mockforge/
├── crates/ # Rust crates
│ ├── mockforge-cli/ # Command-line interface
│ ├── mockforge-core/ # Shared core functionality
│ ├── mockforge-http/ # HTTP REST API mocking
│ ├── mockforge-ws/ # WebSocket connection mocking
│ ├── mockforge-grpc/ # gRPC service mocking
│ ├── mockforge-data/ # Synthetic data generation
│ └── mockforge-ui/ # Web-based admin interface
├── docs/ # Technical documentation
├── examples/ # Usage examples
├── book/ # User documentation (mdBook)
│ └── src/
├── fixtures/ # Test fixtures
├── scripts/ # Development scripts
├── tools/ # Development tools
├── Cargo.toml # Workspace configuration
├── Cargo.lock # Dependency lock file
├── Makefile # Development tasks
├── docker-compose.yml # Development environment
└── README.md # Project overview
Development Tasks
Common Make Targets
# Build all crates
make build
# Run tests
make test
# Run integration tests
make test-integration
# Build documentation
make docs
# Serve documentation locally
make docs-serve
# Run linter
make lint
# Format code
make format
# Clean build artifacts
make clean
Custom Development Scripts
Several development scripts are available in the scripts/ directory:
# Update dependencies
./scripts/update-deps.sh
# Generate API documentation
./scripts/gen-docs.sh
# Run performance benchmarks
./scripts/benchmark.sh
# Check for unused dependencies
./scripts/check-deps.sh
Testing Strategy
Unit Tests
# Run unit tests for all crates
cargo test --lib
# Run unit tests for specific crate
cargo test -p mockforge-core
# Run with coverage
cargo tarpaulin --out Html
Integration Tests
# Run integration tests
cargo test --test integration
# Run with verbose output
cargo test --test integration -- --nocapture
End-to-End Tests
# Run E2E tests (requires Docker)
make test-e2e
# Or run manually
./scripts/test-e2e.sh
Docker Development
Development Container
# Build development container
docker build -f Dockerfile.dev -t mockforge-dev .
# Run development environment
docker run -it --rm \
-v $(pwd):/app \
-p 3000:3000 \
-p 3001:3001 \
-p 50051:50051 \
-p 9080:9080 \
mockforge-dev
Testing with Docker
# Run tests in container
docker run --rm -v $(pwd):/app mockforge-dev cargo test
# Build release binaries
docker run --rm -v $(pwd):/app mockforge-dev cargo build --release
Contributing Workflow
1. Choose an Issue
- Check GitHub Issues for open tasks
- Look for issues labeled good first issue or help wanted
- Comment on the issue to indicate you’re working on it
2. Create a Branch
# Create feature branch
git checkout -b feature/issue-number-description
# Or create bugfix branch
git checkout -b bugfix/issue-number-description
3. Make Changes
- Write clear, focused commits
- Follow the code style guide
- Add tests for new functionality
- Update documentation as needed
4. Test Your Changes
# Run full test suite
make test
# Run integration tests
make test-integration
# Test manually if applicable
cargo run -- serve --spec examples/openapi-demo.json
5. Update Documentation
# Update user-facing docs if needed
mdbook build
# Update API docs
cargo doc
# Test documentation links
mdbook test
6. Submit a Pull Request
# Ensure branch is up to date
git fetch origin
git rebase origin/main
# Push your branch
git push origin feature/your-feature
# Create PR on GitHub with:
# - Clear title and description
# - Reference to issue number
# - Screenshots/videos for UI changes
# - Test results
Getting Help
Communication Channels
- GitHub Issues: For bugs, features, and general discussion
- GitHub Discussions: For questions and longer-form discussion
- Discord: Join our community chat for real-time discussion
When to Ask for Help
- Stuck on a technical problem for more than 2 hours
- Unsure about design decisions
- Need clarification on requirements
- Found a potential security issue
Code Review Process
- All PRs require review from at least one maintainer
- CI must pass all checks
- Code coverage should not decrease significantly
- Documentation must be updated for user-facing changes
This setup guide ensures you have everything needed to contribute effectively to MockForge. Happy coding! 🚀
Code Style Guide
This guide outlines the coding standards and style guidelines for MockForge development. Consistent code style improves readability, maintainability, and collaboration.
Rust Code Style
MockForge follows the official Rust style guidelines with some project-specific conventions.
Formatting
Use rustfmt for automatic code formatting:
# Format all code
cargo fmt
# Check formatting without modifying files
cargo fmt --check
Linting
Use clippy for additional code quality checks:
# Run clippy with project settings
cargo clippy
# Run with pedantic mode for stricter checks
cargo clippy -- -W clippy::pedantic
Naming Conventions
Functions and Variables
// Good: snake_case for functions and variables
fn process_user_data(user_id: i32, data: &str) -> Result<User, Error> {
    let processed_data = validate_and_clean(data)?;
    let user_record = create_user_record(user_id, &processed_data)?;
    Ok(user_record)
}

// Bad: camelCase or PascalCase
fn processUserData(userId: i32, data: &str) -> Result<User, Error> {
    let ProcessedData = validate_and_clean(data)?;
    let userRecord = create_user_record(userId, &ProcessedData)?;
    Ok(userRecord)
}
Types and Traits
// Good: PascalCase for types
pub struct HttpServer {
    config: ServerConfig,
    router: Router,
}

pub trait RequestHandler {
    fn handle_request(&self, request: Request) -> Response;
}

// Bad: snake_case for types
pub struct http_server {
    config: ServerConfig,
    router: Router,
}
Constants
// Good: SCREAMING_SNAKE_CASE for constants
const MAX_CONNECTIONS: usize = 1000;
const DEFAULT_TIMEOUT_SECS: u64 = 30;

// Bad: camelCase or PascalCase
const maxConnections: usize = 1000;
const DefaultTimeoutSecs: u64 = 30;
Modules and Files
// Good: snake_case for module names
pub mod request_handler;
pub mod template_engine;

// File: request_handler.rs
// Module: request_handler
Documentation
Function Documentation
/// Processes a user request and returns a response.
///
/// This function handles the complete request processing pipeline:
/// 1. Validates the request data
/// 2. Applies business logic
/// 3. Returns appropriate response
///
/// # Arguments
///
/// * `user_id` - The ID of the user making the request
/// * `request_data` - The request payload as JSON
///
/// # Returns
///
/// Returns a `Result<Response, Error>` where:
/// - `Ok(response)` contains the successful response
/// - `Err(error)` contains details about what went wrong
///
/// # Errors
///
/// This function will return an error if:
/// - The user ID is invalid
/// - The request data is malformed
/// - Database operations fail
///
/// # Examples
///
/// ```rust
/// let user_id = 123;
/// let request_data = r#"{"action": "update_profile"}"#;
/// let response = process_user_request(user_id, request_data)?;
/// assert_eq!(response.status(), 200);
/// ```
pub fn process_user_request(user_id: i32, request_data: &str) -> Result<Response, Error> {
    // Implementation
}
Module Documentation
//! # HTTP Server Module
//!
//! This module provides HTTP server functionality for MockForge,
//! including request routing, middleware support, and response handling.
//!
//! ## Architecture
//!
//! The HTTP server uses axum as the underlying web framework and provides:
//! - OpenAPI specification integration
//! - Template-based response generation
//! - Middleware for logging and validation
//!
//! ## Example
//!
//! ```rust
//! use mockforge_http::HttpServer;
//!
//! let server = HttpServer::new(config);
//! server.serve("127.0.0.1:3000").await?;
//! ```
Error Handling
Custom Error Types
use thiserror::Error;

#[derive(Error, Debug)]
pub enum MockForgeError {
    #[error("Configuration error: {message}")]
    Config { message: String },

    #[error("I/O error: {source}")]
    Io {
        #[from]
        source: std::io::Error,
    },

    #[error("Template rendering error: {message}")]
    Template { message: String },

    #[error("HTTP error: {status} - {message}")]
    Http { status: u16, message: String },
}
Result Types
// Good: Use Result<T, MockForgeError> for fallible operations
pub fn load_config(path: &Path) -> Result<Config, MockForgeError> {
    let content = fs::read_to_string(path)
        .map_err(|e| MockForgeError::Io { source: e })?;

    let config: Config = serde_yaml::from_str(&content)
        .map_err(|e| MockForgeError::Config {
            message: format!("Failed to parse YAML: {}", e),
        })?;

    Ok(config)
}

// Bad: Using Option when you should use Result
pub fn load_config_bad(path: &Path) -> Option<Config> {
    // This loses error information
    None
}
Async Code
Async Function Signatures
// Good: Clear async function signatures
pub async fn process_request(request: Request) -> Result<Response, Error> {
    let data = validate_request(&request).await?;
    let result = process_data(data).await?;
    Ok(create_response(result))
}

// Bad: Unclear async boundaries
pub fn process_request(request: Request) -> impl Future<Output = Result<Response, Error>> {
    async move {
        // Implementation
    }
}
Tokio Usage
use tokio::sync::{Mutex, RwLock};

// Good: Use appropriate synchronization primitives
pub struct SharedState {
    data: RwLock<HashMap<String, String>>,
    counter: Mutex<i64>,
}

impl SharedState {
    pub async fn get_data(&self, key: &str) -> Option<String> {
        let data = self.data.read().await;
        data.get(key).cloned()
    }

    pub async fn increment_counter(&self) -> i64 {
        let mut counter = self.counter.lock().await;
        *counter += 1;
        *counter
    }
}
Testing
Unit Test Structure
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_function_basic_case() {
        // Given
        let input = "test input";
        let expected = "expected output";

        // When
        let result = process_input(input);

        // Then
        assert_eq!(result, expected);
    }

    #[test]
    fn test_function_error_case() {
        // Given
        let input = "";

        // When
        let result = process_input(input);

        // Then
        assert!(result.is_err());
        assert!(matches!(result.unwrap_err(), Error::InvalidInput(_)));
    }

    #[tokio::test]
    async fn test_async_function() {
        // Given
        let client = create_test_client().await;

        // When
        let response = client.make_request().await.unwrap();

        // Then
        assert_eq!(response.status(), 200);
    }
}
Test Organization
// tests/integration_tests.rs
#[cfg(test)]
mod integration_tests {
    use mockforge_core::config::MockForgeConfig;

    #[tokio::test]
    async fn test_full_http_flow() {
        // Test complete request/response cycle
        let server = TestServer::new().await;
        let client = TestClient::new(server.url());

        let response = client.get("/api/users").await;
        assert_eq!(response.status(), 200);
    }
}
Performance Considerations
Memory Management
// Good: Use references when possible
pub fn process_data(data: &str) -> Result<String, Error> {
    // Avoid cloning unless necessary
    if data.is_empty() {
        return Err(Error::EmptyInput);
    }
    Ok(data.to_uppercase())
}

// Good: Use Cow for flexible ownership
use std::borrow::Cow;

pub fn normalize_string<'a>(input: &'a str) -> Cow<'a, str> {
    if input.chars().all(|c| c.is_lowercase()) {
        Cow::Borrowed(input)
    } else {
        Cow::Owned(input.to_lowercase())
    }
}
Zero-Cost Abstractions
// Good: Use iterators for memory efficiency
pub fn find_active_users(users: &[User]) -> impl Iterator<Item = &User> {
    users.iter().filter(|user| user.is_active)
}

// Bad: Collect into Vec unnecessarily
pub fn find_active_users_bad(users: &[User]) -> Vec<&User> {
    users.iter().filter(|user| user.is_active).collect()
}
Project-Specific Conventions
Configuration Handling
// Good: Use builder pattern for complex configuration
#[derive(Debug, Clone)]
pub struct ServerConfig {
    pub host: String,
    pub port: u16,
    pub tls: Option<TlsConfig>,
}

impl Default for ServerConfig {
    fn default() -> Self {
        Self {
            host: "127.0.0.1".to_string(),
            port: 3000,
            tls: None,
        }
    }
}

impl ServerConfig {
    pub fn builder() -> ServerConfigBuilder {
        ServerConfigBuilder::default()
    }
}
Logging
use tracing::{error, info, instrument};

// Good: Use structured logging (fields come before the message)
#[instrument(skip(config))]
pub async fn start_server(config: &ServerConfig) -> Result<(), Error> {
    info!(host = %config.host, port = config.port, "Starting server");

    if let Err(e) = setup_server(config).await {
        error!(error = %e, "Failed to start server");
        return Err(e);
    }

    info!("Server started successfully");
    Ok(())
}
Feature Flags
// Good: Use feature flags for optional functionality
#[cfg(feature = "grpc")]
pub mod grpc {
    // gRPC-specific code
}

#[cfg(feature = "websocket")]
pub mod websocket {
    // WebSocket-specific code
}
Code Review Checklist
Before submitting code for review, ensure:
- Code is formatted with cargo fmt
- No clippy warnings remain
- All tests pass
- Documentation is updated
- No TODO comments left in production code
- Error messages are user-friendly
- Performance considerations are addressed
- Security implications are reviewed
Tools and Automation
Pre-commit Hooks
#!/bin/bash
# .git/hooks/pre-commit
# Format code
cargo fmt --check
if [ $? -ne 0 ]; then
echo "Code is not formatted. Run 'cargo fmt' to fix."
exit 1
fi
# Run clippy
cargo clippy -- -D warnings
if [ $? -ne 0 ]; then
echo "Clippy found issues. Fix them before committing."
exit 1
fi
# Run tests
cargo test
if [ $? -ne 0 ]; then
echo "Tests are failing. Fix them before committing."
exit 1
fi
CI Configuration
# .github/workflows/ci.yml
name: CI
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions-rs/toolchain@v1
with:
toolchain: stable
- name: Check formatting
run: cargo fmt --check
- name: Run clippy
run: cargo clippy -- -D warnings
- name: Run tests
run: cargo test --verbose
- name: Run security audit
run: cargo audit
This style guide ensures MockForge maintains high code quality and consistency across the entire codebase. Following these guidelines makes the code more readable, maintainable, and collaborative.
Testing Guidelines
This guide outlines the testing standards and practices for MockForge contributions. Quality testing ensures code reliability, prevents regressions, and maintains system stability.
Testing Philosophy
Testing Pyramid
MockForge follows a testing pyramid approach with different types of tests serving different purposes:
End-to-End Tests (E2E)
          ↑
Integration Tests
          ↑
Unit Tests (base)
- Unit Tests: Test individual functions and modules in isolation
- Integration Tests: Test component interactions and data flow
- End-to-End Tests: Test complete user workflows and system behavior
Testing Principles
- Test First: Write tests before implementation when possible
- Test Behavior: Test what the code does, not how it does it
- Test Boundaries: Focus on edge cases and error conditions
- Keep Tests Fast: Tests should run quickly to encourage frequent execution
- Make Tests Reliable: Tests should be deterministic and not flaky
Unit Testing Requirements
Test Coverage
All new code must include unit tests with the following minimum coverage:
- Functions: Test all public functions with valid inputs
- Error Cases: Test all error conditions and edge cases
- Branches: Test all conditional branches (if/else, match arms)
- Loops: Test loop boundaries (empty, single item, multiple items)
Test Structure
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_function_name_description() {
        // Given: Set up test data and preconditions
        let input = create_test_input();
        let expected = create_expected_output();

        // When: Execute the function under test
        let result = function_under_test(input);

        // Then: Verify the result matches expectations
        assert_eq!(result, expected);
    }

    #[test]
    fn test_function_name_error_case() {
        // Given: Set up error condition
        let invalid_input = create_invalid_input();

        // When: Execute the function
        let result = function_under_test(invalid_input);

        // Then: Verify error handling
        assert!(result.is_err());
        let error = result.unwrap_err();
        assert!(matches!(error, ExpectedError::Variant));
    }
}
Test Naming Conventions
// Good: Descriptive test names
#[test]
fn test_parse_openapi_spec_validates_required_fields() { ... }

#[test]
fn test_template_engine_handles_missing_variables() { ... }

#[test]
fn test_http_server_rejects_invalid_content_type() { ... }

// Bad: Non-descriptive names
#[test]
fn test_function() { ... }

#[test]
fn test_case_1() { ... }

#[test]
fn test_error() { ... }
Test Data Management
Test Fixtures
// Use shared test fixtures for common data
pub fn sample_openapi_spec() -> &'static str {
    r#"
openapi: 3.0.3
info:
  title: Test API
  version: 1.0.0
paths:
  /users:
    get:
      responses:
        '200':
          description: Success
"#
}

pub fn sample_user_data() -> User {
    User {
        id: "123".to_string(),
        name: "John Doe".to_string(),
        email: "john@example.com".to_string(),
    }
}
Test Utilities
// Create test utilities for common setup
pub struct TestServer {
    server_handle: Option<JoinHandle<()>>,
    base_url: String,
}

impl TestServer {
    pub async fn new() -> Self {
        // Start test server
        // Return configured instance
    }

    pub fn url(&self) -> &str {
        &self.base_url
    }
}

impl Drop for TestServer {
    fn drop(&mut self) {
        // Clean up server
    }
}
Integration Testing Standards
When to Write Integration Tests
Integration tests are required for:
- API Boundaries: HTTP endpoints, gRPC services, WebSocket connections
- Database Operations: Data persistence and retrieval
- External Services: Third-party API integrations
- File I/O: Configuration loading, fixture management
- Component Communication: Cross-crate interactions
Integration Test Structure
#[cfg(test)]
mod integration_tests {
    use mockforge_core::config::MockForgeConfig;

    #[tokio::test]
    async fn test_http_server_startup() {
        // Given: Configure test server
        let config = create_test_config();
        let server = HttpServer::new(config);

        // When: Start the server
        let addr = server.local_addr();
        tokio::spawn(async move {
            server.serve().await.unwrap();
        });

        // Wait for startup
        tokio::time::sleep(Duration::from_millis(100)).await;

        // Then: Verify server is responding
        let client = reqwest::Client::new();
        let response = client
            .get(format!("http://{}/health", addr))
            .send()
            .await
            .unwrap();

        assert_eq!(response.status(), 200);
    }
}
Database Testing
#[cfg(test)]
mod database_tests {
    use sqlx::PgPool;

    #[sqlx::test]
    async fn test_user_creation(pool: PgPool) {
        // Given: Clean database state
        sqlx::query!("DELETE FROM users").execute(&pool).await.unwrap();

        // When: Create a user
        let user_id = create_user(&pool, "test@example.com").await.unwrap();

        // Then: Verify user exists
        let user = sqlx::query!("SELECT * FROM users WHERE id = $1", user_id)
            .fetch_one(&pool)
            .await
            .unwrap();

        assert_eq!(user.email, "test@example.com");
    }
}
End-to-End Testing Requirements
E2E Test Scenarios
E2E tests must cover:
- Happy Path: Complete successful user workflows
- Error Recovery: System behavior under failure conditions
- Data Persistence: State changes across operations
- Performance: Response times and resource usage
- Security: Authentication and authorization flows
E2E Test Implementation
#[cfg(test)]
mod e2e_tests {
    use std::process::Command;
    use std::time::Duration;

    #[test]
    fn test_complete_api_workflow() {
        // Start MockForge server
        let mut server = Command::new("cargo")
            .args(&["run", "--release", "--", "serve", "--spec", "test-api.yaml"])
            .spawn()
            .unwrap();

        // Wait for server startup
        std::thread::sleep(Duration::from_secs(3));

        // Execute complete workflow
        let result = run_workflow_test();
        assert!(result.is_ok());

        // Cleanup
        server.kill().unwrap();
    }
}
Test Quality Standards
Code Coverage Requirements
- Minimum Coverage: 80% overall, 90% for critical paths
- Branch Coverage: All conditional branches must be tested
- Error Path Coverage: All error conditions must be tested
Performance Testing
#[cfg(test)]
mod performance_tests {
    use criterion::Criterion;

    fn benchmark_template_rendering(c: &mut Criterion) {
        let engine = TemplateEngine::new();

        c.bench_function("render_simple_template", |b| {
            b.iter(|| {
                engine.render("Hello {{name}}", &[("name", "World")]);
            })
        });
    }
}
Load Testing
#[cfg(test)]
mod load_tests {
    use tokio::time::{Duration, Instant};

    #[tokio::test]
    async fn test_concurrent_requests() {
        let client = reqwest::Client::new();
        let start = Instant::now();

        // Spawn 100 concurrent requests
        let handles: Vec<_> = (0..100).map(|_| {
            let client = client.clone();
            tokio::spawn(async move {
                client.get("http://localhost:3000/api/users")
                    .send()
                    .await
                    .unwrap()
            })
        }).collect();

        // Wait for all requests to complete
        for handle in handles {
            let response = handle.await.unwrap();
            assert_eq!(response.status(), 200);
        }

        let duration = start.elapsed();
        assert!(duration < Duration::from_secs(5), "Load test took too long: {:?}", duration);
    }
}
Testing Tools and Frameworks
Required Testing Dependencies
[dev-dependencies]
tokio-test = "0.4"
proptest = "1.0" # Property-based testing
criterion = "0.4" # Benchmarking
assert_cmd = "2.0" # CLI testing
predicates = "2.1" # Value assertions
tempfile = "3.0" # Temporary files
Mocking and Stubbing
#[cfg(test)]
mod mock_tests {
    use mockall::{automock, predicate::*};

    // #[automock] generates a MockDatabase type for this trait.
    #[automock]
    trait Database {
        async fn get_user(&self, id: i32) -> Result<User, Error>;
        async fn save_user(&self, user: User) -> Result<(), Error>;
    }

    #[tokio::test]
    async fn test_service_with_mocks() {
        let mut mock_db = MockDatabase::new();
        mock_db
            .expect_get_user()
            .with(eq(123))
            .returning(|_| Ok(User { id: 123, name: "Test".to_string() }));

        let service = UserService::new(mock_db);
        let user = service.get_user(123).await.unwrap();

        assert_eq!(user.name, "Test");
    }
}
Property-Based Testing
#[cfg(test)]
mod property_tests {
    use proptest::prelude::*;

    proptest! {
        #[test]
        fn test_template_rendering_with_random_input(
            input in "\\PC*", // Any printable character except control chars
            name in "[a-zA-Z]{1,10}"
        ) {
            let engine = TemplateEngine::new();
            let context = &[("name", &name)];

            // Should not panic regardless of input
            let _result = engine.render(&input, context);
        }
    }
}
Test Organization and Naming
File Structure
src/
├── lib.rs
├── module.rs
└── module/
├── mod.rs
└── submodule.rs
tests/
├── unit/
│ ├── module_tests.rs
│ └── submodule_tests.rs
├── integration/
│ ├── api_tests.rs
│ └── database_tests.rs
└── e2e/
├── workflow_tests.rs
└── performance_tests.rs
Test Module Organization
// tests/unit/template_tests.rs
#[cfg(test)]
mod template_tests {
    use mockforge_core::templating::TemplateEngine;

    // Unit tests for template functionality
}

// tests/integration/http_tests.rs
#[cfg(test)]
mod http_integration_tests {
    use mockforge_http::HttpServer;

    // Integration tests for HTTP server
}

// tests/e2e/api_workflow_tests.rs
#[cfg(test)]
mod e2e_tests {
    // End-to-end workflow tests
}
CI/CD Integration
GitHub Actions Testing
name: Test
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@stable
- name: Cache dependencies
uses: Swatinem/rust-cache@v2
- name: Check formatting
run: cargo fmt --check
- name: Run clippy
run: cargo clippy -- -D warnings
- name: Run tests
run: cargo test --verbose
- name: Run integration tests
run: cargo test --test integration
- name: Generate coverage
run: |
cargo install cargo-tarpaulin
cargo tarpaulin --out Xml --output-dir coverage
- name: Upload coverage
uses: codecov/codecov-action@v3
with:
file: coverage/cobertura.xml
Test Result Reporting
- name: Run tests with JUnit output
run: |
cargo install cargo2junit
cargo test -- -Z unstable-options --format json | cargo2junit > test-results.xml
- name: Publish test results
uses: EnricoMi/publish-unit-test-result-action@v2
with:
files: test-results.xml
Best Practices
Test Isolation
#[cfg(test)]
mod isolated_tests {
    use tempfile::TempDir;

    #[test]
    fn test_file_operations() {
        // Use temporary directory for isolation
        let temp_dir = TempDir::new().unwrap();
        let file_path = temp_dir.path().join("test.txt");

        // Test file operations
        write_test_file(&file_path);
        assert!(file_path.exists());

        // Cleanup happens automatically
    }
}
Test Data Management
#[cfg(test)]
mod test_data {
    use once_cell::sync::Lazy;

    static TEST_USERS: Lazy<Vec<User>> = Lazy::new(|| {
        vec![
            User { id: 1, name: "Alice".to_string() },
            User { id: 2, name: "Bob".to_string() },
        ]
    });

    #[test]
    fn test_user_operations() {
        let users = TEST_USERS.clone();
        // Use shared test data
    }
}
Asynchronous Testing
#[cfg(test)]
mod async_tests {
    use tokio::time::{timeout, Duration};

    #[tokio::test]
    async fn test_async_operation_with_timeout() {
        let result = timeout(Duration::from_secs(5), async_operation()).await;

        match result {
            Ok(Ok(data)) => assert!(data.is_valid()),
            Ok(Err(e)) => panic!("Operation failed: {}", e),
            Err(_) => panic!("Operation timed out"),
        }
    }

    #[tokio::test]
    async fn test_concurrent_operations() {
        // join! runs both futures concurrently on the same task
        let (result1, result2) = tokio::join!(
            operation1(),
            operation2()
        );

        assert!(result1.is_ok());
        assert!(result2.is_ok());
    }
}
Test Flakiness Prevention
#[cfg(test)]
mod reliable_tests {
    #[test]
    fn test_with_retries() {
        let mut attempts = 0;
        let max_attempts = 3;

        loop {
            attempts += 1;

            match potentially_flaky_operation() {
                Ok(result) => {
                    assert!(result.is_valid());
                    break;
                }
                Err(e) if attempts < max_attempts => {
                    eprintln!("Attempt {} failed: {}, retrying...", attempts, e);
                    std::thread::sleep(Duration::from_millis(100));
                    continue;
                }
                Err(e) => panic!("Operation failed after {} attempts: {}", max_attempts, e),
            }
        }
    }
}
Security Testing
Input Validation Testing
#[cfg(test)]
mod security_tests {
    #[test]
    fn test_sql_injection_prevention() {
        let malicious_input = "'; DROP TABLE users; --";
        let result = sanitize_sql_input(malicious_input);

        assert!(!result.contains("DROP"));
        assert!(!result.contains(";"));
    }

    #[test]
    fn test_xss_prevention() {
        let malicious_input = "<script>alert('xss')</script>";
        let result = sanitize_html_input(malicious_input);

        assert!(!result.contains("<script>"));
        assert!(result.contains("&lt;script&gt;")); // tags are escaped, not dropped
    }

    #[test]
    fn test_path_traversal_prevention() {
        let malicious_input = "../../../etc/passwd";
        let result = validate_file_path(malicious_input);

        assert!(result.is_err());
        assert!(matches!(result.unwrap_err(), ValidationError::PathTraversal));
    }
}
Authentication Testing
#[cfg(test)]
mod auth_tests {
    #[tokio::test]
    async fn test_unauthorized_access() {
        let client = create_test_client();

        let response = client
            .get("/admin/users")
            .send()
            .await
            .unwrap();

        assert_eq!(response.status(), 401);
    }

    #[tokio::test]
    async fn test_authorized_access() {
        let client = create_authenticated_client();

        let response = client
            .get("/admin/users")
            .send()
            .await
            .unwrap();

        assert_eq!(response.status(), 200);
    }
}
This comprehensive testing guide ensures MockForge maintains high quality and reliability through thorough automated testing at all levels.
Release Process
This guide outlines the complete process for releasing new versions of MockForge, from planning through deployment and post-release activities.
Release Planning
Version Numbering
MockForge follows Semantic Versioning (SemVer):
MAJOR.MINOR.PATCH[-PRERELEASE][+BUILD]
Examples:
- 1.0.0 (stable release)
- 1.1.0 (minor release with new features)
- 1.1.1 (patch release with bug fixes)
- 2.0.0-alpha.1 (pre-release)
- 1.0.0+20230912 (build metadata)
When to Increment
- MAJOR (X.0.0): Breaking changes to public API
- MINOR (X.Y.0): New features, backward compatible
- PATCH (X.Y.Z): Bug fixes, backward compatible
Release Types
Major Releases
- Breaking API changes
- Major feature additions
- Architectural changes
- Extended testing period (2-4 weeks beta)
Minor Releases
- New features and enhancements
- Backward compatible API changes
- Standard testing period (1-2 weeks)
Patch Releases
- Critical bug fixes
- Security patches
- Documentation updates
- Minimal testing period (3-5 days)
Pre-releases
- Alpha/Beta/RC versions
- Feature previews
- Breaking change previews
- Limited distribution
Pre-Release Checklist
1. Code Quality Verification
# Run complete test suite
make test
# Run integration tests
make test-integration
# Run E2E tests
make test-e2e
# Check code quality
make lint
make format-check
# Security audit
cargo audit
# Check for unused dependencies
cargo +nightly udeps
# Performance benchmarks
make benchmark
2. Documentation Updates
# Update CHANGELOG.md with release notes
# Update version numbers in documentation
# Build and test documentation
make docs
make docs-serve
# Test documentation links
mdbook test
3. Version Bump
# Update version in Cargo.toml files
# Update version in package metadata
# Update version in documentation
# Example version bump script
#!/bin/bash
NEW_VERSION=$1
# Update workspace Cargo.toml
sed -i "s/^version = .*/version = \"$NEW_VERSION\"/" Cargo.toml
# Update all crate Cargo.toml files
find crates -name "Cargo.toml" -exec sed -i "s/^version = .*/version = \"$NEW_VERSION\"/" {} \;
# Update README and documentation version references
sed -i "s/mockforge [0-9]\+\.[0-9]\+\.[0-9]\+/mockforge $NEW_VERSION/g" README.md
4. Branch Management
# Create release branch
git checkout -b release/v$NEW_VERSION
# Cherry-pick approved commits
# Or merge from develop/main
# Tag the release
git tag -a v$NEW_VERSION -m "Release version $NEW_VERSION"
# Push branch and tag
git push origin release/v$NEW_VERSION
git push origin v$NEW_VERSION
Release Build Process
1. Build Verification
# Clean build
cargo clean
# Build all targets
cargo build --release --all-targets
# Build specific platforms if needed
cargo build --release --target x86_64-unknown-linux-gnu
cargo build --release --target x86_64-apple-darwin
cargo build --release --target x86_64-pc-windows-msvc
# Test release build
./target/release/mockforge-cli --version
2. Binary Distribution
Linux/macOS Packages
# Strip debug symbols
strip target/release/mockforge-cli
# Create distribution archives
VERSION=1.0.0
tar -czf mockforge-v${VERSION}-x86_64-linux.tar.gz \
-C target/release mockforge-cli
tar -czf mockforge-v${VERSION}-x86_64-macos.tar.gz \
-C target/release mockforge-cli
Debian Packages
# Install cargo-deb
cargo install cargo-deb
# Build .deb package
cargo deb
# Test package installation
sudo dpkg -i target/debian/mockforge_*.deb
Docker Images
# Dockerfile.release
FROM rust:1.70-slim AS builder
WORKDIR /app
COPY . .
RUN cargo build --release
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/mockforge-cli /usr/local/bin/mockforge-cli
EXPOSE 3000 3001 50051 9080
CMD ["mockforge-cli", "serve"]
# Build and push Docker image
docker build -f Dockerfile.release -t mockforge:$VERSION .
docker tag mockforge:$VERSION mockforge:latest
docker push mockforge:$VERSION
docker push mockforge:latest
3. Cross-Platform Builds
# Use cross for cross-compilation
cargo install cross
# Build for different architectures
cross build --release --target aarch64-unknown-linux-gnu
cross build --release --target x86_64-unknown-linux-musl
# Create release archives for each platform
for target in x86_64-unknown-linux-gnu aarch64-unknown-linux-gnu x86_64-apple-darwin x86_64-pc-windows-msvc; do
cross build --release --target $target
if [[ $target == *"windows"* ]]; then
zip -j mockforge-$VERSION-$target.zip target/$target/release/mockforge-cli.exe
else
tar -czf mockforge-$VERSION-$target.tar.gz -C target/$target/release mockforge-cli
fi
done
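To let users verify downloads, it is common to publish checksums next to the archives; the sha256 values are also what the Homebrew formula shown later expects. A minimal sketch, assuming the archives built above sit in the working directory:
# Generate a checksum manifest for all release archives
sha256sum mockforge-$VERSION-*.tar.gz mockforge-$VERSION-*.zip > SHA256SUMS
# Consumers can then verify their download against it
sha256sum --check SHA256SUMS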
Release Deployment
1. GitHub Release
# Create GitHub release (manual or automated)
gh release create v$VERSION \
--title "MockForge v$VERSION" \
--notes-file release-notes.md \
--draft
# Upload release assets
gh release upload v$VERSION \
mockforge-v$VERSION-x86_64-linux.tar.gz \
mockforge-v$VERSION-x86_64-macos.tar.gz \
mockforge-v$VERSION-x86_64-windows.zip \
mockforge_${VERSION}_amd64.deb
# Publish release
gh release edit v$VERSION --draft=false
2. Package Registries
Crates.io Publication
# Publish all crates to crates.io
# Note: Must be done in dependency order
# Publish core first
cd crates/mockforge-core
cargo publish
# Then other crates
cd ../mockforge-http
cargo publish
cd ../mockforge-ws
cargo publish
cd ../mockforge-grpc
cargo publish
cd ../mockforge-data
cargo publish
cd ../mockforge-ui
cargo publish
# Finally CLI
cd ../mockforge-cli
cargo publish
Docker Hub
Note: Docker Hub publishing is planned for future releases. The organization and repository need to be set up first.
Once Docker Hub is configured, use these commands:
# Build the Docker image with version tag
docker build -t saasy-solutions/mockforge:$VERSION .
docker tag saasy-solutions/mockforge:$VERSION saasy-solutions/mockforge:latest
# Push to Docker Hub (requires authentication)
docker login
docker push saasy-solutions/mockforge:$VERSION
docker push saasy-solutions/mockforge:latest
For now, users should build the Docker image locally as documented in the Installation Guide.
3. Homebrew (macOS)
# Formula/mockforge.rb
class Mockforge < Formula
desc "Advanced API Mocking Platform"
homepage "https://github.com/SaaSy-Solutions/mockforge"
url "https://github.com/SaaSy-Solutions/mockforge/releases/download/v#{version}/mockforge-v#{version}-x86_64-macos.tar.gz"
sha256 "..."
def install
bin.install "mockforge-cli"
end
test do
system "#{bin}/mockforge-cli", "--version"
end
end
4. Package Managers
APT Repository (Ubuntu/Debian)
# Set up PPA or repository
# Upload .deb packages
# Update package indices
Snapcraft
# snapcraft.yaml
name: mockforge
version: '1.0.0'
summary: Advanced API Mocking Platform
description: |
MockForge is a comprehensive API mocking platform supporting HTTP, WebSocket, and gRPC protocols.
grade: stable
confinement: strict
apps:
mockforge:
command: mockforge-cli
plugs: [network, network-bind]
parts:
mockforge:
plugin: rust
source: .
build-packages: [pkg-config, libssl-dev]
Post-Release Activities
1. Announcement
GitHub Release Notes
## What's New in MockForge v1.0.0
### 🚀 Major Features
- Multi-protocol support (HTTP, WebSocket, gRPC)
- Advanced templating system
- Web-based admin UI
- Comprehensive testing framework
### 🐛 Bug Fixes
- Fixed template rendering performance
- Resolved WebSocket connection stability
- Improved error messages
### 📚 Documentation
- Complete API reference
- Getting started guides
- Troubleshooting documentation
### 🤝 Contributors
Special thanks to all contributors!
### 🔗 Links
- [Documentation](https://docs.mockforge.dev)
- [GitHub Repository](https://github.com/SaaSy-Solutions/mockforge)
- [Issue Tracker](https://github.com/SaaSy-Solutions/mockforge/issues)
Social Media & Community
# Post to social media
# Update Discord/Slack channels
# Send email newsletter
# Update website/blog
2. Monitoring & Support
Release Health Checks
# Monitor installation success
# Check for immediate bug reports
# Monitor CI/CD pipelines
# Track adoption metrics
# Example monitoring script
#!/bin/bash
VERSION=$1
# Check GitHub release downloads
gh release view v$VERSION --json assets -q '.assets[].downloadCount'
# Check crates.io download stats
curl -s "https://crates.io/api/v1/crates/mockforge-cli/downloads" | jq '.versions[0].downloads'
# Monitor error reports
gh issue list --label bug --state open --limit 10
Support Channels
- GitHub Issues: Bug reports and feature requests
- GitHub Discussions: General questions and support
- Discord: Join our community chat - Real-time community support
- Documentation: Self-service troubleshooting
3. Follow-up Releases
Hotfix Process
For critical issues discovered post-release:
# Create hotfix branch from release tag
git checkout -b hotfix/critical-bug-fix v1.0.0
# Apply fix
# Write test
# Update CHANGELOG
# Create patch release
NEW_VERSION=1.0.1
git tag -a v$NEW_VERSION
git push origin v$NEW_VERSION
# Deploy hotfix
4. Analytics & Metrics
Release Metrics
- Download counts across platforms
- Installation success rates
- User adoption and usage patterns
- Performance benchmarks vs previous versions
- Community feedback and sentiment
Continuous Improvement
# Post-release retrospective template
## Release Summary
- Version: v1.0.0
- Release Date: YYYY-MM-DD
- Duration: X weeks
## What Went Well
- [ ] Smooth release process
- [ ] No critical bugs found
- [ ] Good community reception
## Areas for Improvement
- [ ] Documentation could be clearer
- [ ] Testing took longer than expected
- [ ] More platform support needed
## Action Items
- [ ] Improve release documentation
- [ ] Automate more of the process
- [ ] Add more platform builds
Release Automation
GitHub Actions Release Workflow
# .github/workflows/release.yml
name: Release
on:
push:
tags:
- 'v*'
jobs:
release:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set version
run: echo "VERSION=${GITHUB_REF#refs/tags/v}" >> $GITHUB_ENV
- name: Build release binaries
run: |
cargo build --release
strip target/release/mockforge-cli
- name: Create release archives
run: |
tar -czf mockforge-${VERSION}-linux-x64.tar.gz -C target/release mockforge-cli
zip mockforge-${VERSION}-linux-x64.zip target/release/mockforge-cli
- name: Create GitHub release
id: create_release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ github.ref }}
release_name: MockForge ${{ env.VERSION }}
body: |
## What's New
See [CHANGELOG.md](CHANGELOG.md) for details.
## Downloads
- Linux x64: [mockforge-${{ env.VERSION }}-linux-x64.tar.gz](mockforge-${{ env.VERSION }}-linux-x64.tar.gz)
draft: false
prerelease: false
- name: Upload release assets
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./mockforge-${{ env.VERSION }}-linux-x64.tar.gz
asset_name: mockforge-${{ env.VERSION }}-linux-x64.tar.gz
asset_content_type: application/gzip
Automated Publishing
# Publish to crates.io on release
- name: Publish to crates.io
run: cargo publish --token ${{ secrets.CRATES_IO_TOKEN }}
if: startsWith(github.ref, 'refs/tags/')
# Build and push Docker image
- name: Build and push Docker image
uses: docker/build-push-action@v3
with:
context: .
push: true
tags: mockforge/mockforge:${{ env.VERSION }},mockforge/mockforge:latest
Emergency Releases
Security Vulnerabilities
For security issues requiring immediate release:
- Assess Severity: Determine CVSS score and impact
- Develop Fix: Create minimal fix with comprehensive tests
- Bypass Normal Process: Skip extended testing for critical security fixes
- Accelerated Release: 24-48 hour release cycle
- Public Disclosure: Coordinate with security community
Critical Bug Fixes
For show-stopping bugs affecting production:
- Immediate Assessment: Evaluate user impact and severity
- Rapid Development: 1-2 day fix development
- Limited Testing: Focus on regression and critical path tests
- Fast-Track Release: 3-5 day release cycle
This comprehensive release process ensures MockForge releases are reliable, well-tested, and properly distributed across all supported platforms and package managers.
Configuration Schema
MockForge supports comprehensive configuration through YAML files. This schema reference documents all available configuration options, their types, defaults, and usage examples.
Complete Configuration Template
For a fully annotated configuration template with all options documented inline, see config.template.yaml in the repository root.
This template includes:
- Every configuration field with inline documentation
- Default values and valid ranges
- Example configurations for common scenarios
- Comments explaining each option’s purpose
Quick Start
# Initialize a new configuration
mockforge init my-project
# Validate your configuration
mockforge config validate
# Start with validated config
mockforge serve --config mockforge.yaml
See the Configuration Validation Guide for validation best practices.
File Format
Configuration files use YAML format with the following structure:
# Top-level configuration sections
server: # Server port and binding configuration
admin: # Admin UI settings
validation: # Request validation settings
response: # Response processing options
chaos: # Chaos engineering features
grpc: # gRPC-specific settings
websocket: # WebSocket-specific settings
logging: # Logging configuration
Server Configuration
server.http_port (integer, default: 3000)
HTTP server port for REST API endpoints.
server:
http_port: 9080
server.ws_port (integer, default: 3001)
WebSocket server port for real-time connections.
server:
ws_port: 8081
server.grpc_port (integer, default: 50051)
gRPC server port for protocol buffer services.
server:
grpc_port: 9090
server.bind (string, default: “0.0.0.0”)
Network interface to bind servers to.
server:
bind: "127.0.0.1" # Bind to localhost only
Admin UI Configuration
admin.enabled (boolean, default: false)
Enable the web-based admin interface.
admin:
enabled: true
admin.port (integer, default: 9080)
Port for the admin UI server.
admin:
port: 9090
admin.embedded (boolean, default: false)
Embed admin UI under the main HTTP server instead of running standalone.
admin:
embedded: true
admin.mount_path (string, default: “/admin”)
URL path where embedded admin UI is accessible.
admin:
embedded: true
mount_path: "/mockforge-admin"
admin.standalone (boolean, default: true)
Force standalone admin UI server (overrides embedded setting).
admin:
standalone: true
admin.disable_api (boolean, default: false)
Disable admin API endpoints while keeping the UI interface.
admin:
disable_api: false
Validation Configuration
validation.mode (string, default: “enforce”)
Request validation mode. Options: “off”, “warn”, “enforce”
validation:
mode: warn # Log warnings but allow invalid requests
validation.aggregate_errors (boolean, default: false)
Combine multiple validation errors into a single JSON array response.
validation:
aggregate_errors: true
validation.validate_responses (boolean, default: false)
Validate response payloads against OpenAPI schemas (warn-only).
validation:
validate_responses: true
validation.status_code (integer, default: 400)
HTTP status code to return for validation errors.
validation:
status_code: 422 # Use 422 Unprocessable Entity
validation.skip_admin_validation (boolean, default: true)
Skip validation for admin UI routes.
validation:
skip_admin_validation: true
validation.overrides (object)
Per-route validation overrides.
validation:
overrides:
"/api/users": "off" # Disable validation for this route
"/api/admin/**": "warn" # Warning mode for admin routes
Response Configuration
response.template_expand (boolean, default: false)
Enable template variable expansion in responses.
response:
template_expand: true
response.caching (object)
Response caching configuration.
response:
caching:
enabled: true
ttl_seconds: 300
max_size_mb: 100
Chaos Engineering
chaos.latency_enabled (boolean, default: false)
Enable response latency simulation.
chaos:
latency_enabled: true
chaos.latency_min_ms (integer, default: 0)
Minimum response latency in milliseconds.
chaos:
latency_min_ms: 100
chaos.latency_max_ms (integer, default: 1000)
Maximum response latency in milliseconds.
chaos:
latency_max_ms: 2000
chaos.failures_enabled (boolean, default: false)
Enable random failure injection.
chaos:
failures_enabled: true
chaos.failure_rate (float, default: 0.0)
Probability of random failures (0.0 to 1.0).
chaos:
failure_rate: 0.05 # 5% failure rate
chaos.failure_status_codes (array of integers)
HTTP status codes to return for injected failures.
chaos:
failure_status_codes: [500, 502, 503, 504]
gRPC Configuration
grpc.proto_dir (string, default: “proto/”)
Directory containing Protocol Buffer files.
grpc:
proto_dir: "my-protos/"
grpc.enable_reflection (boolean, default: true)
Enable gRPC server reflection for service discovery.
grpc:
enable_reflection: true
grpc.excluded_services (array of strings)
gRPC services to exclude from automatic registration.
grpc:
excluded_services:
- "grpc.reflection.v1alpha.ServerReflection"
grpc.max_message_size (integer, default: 4194304)
Maximum message size in bytes (4MB default).
grpc:
max_message_size: 8388608 # 8MB
grpc.concurrency_limit (integer, default: 32)
Maximum concurrent requests per connection.
grpc:
concurrency_limit: 64
WebSocket Configuration
websocket.replay_file (string)
Path to WebSocket replay file for scripted interactions.
websocket:
replay_file: "examples/ws-demo.jsonl"
websocket.max_connections (integer, default: 1000)
Maximum concurrent WebSocket connections.
websocket:
max_connections: 500
websocket.message_timeout (integer, default: 30000)
Timeout for WebSocket messages in milliseconds.
websocket:
message_timeout: 60000
websocket.heartbeat_interval (integer, default: 30000)
Heartbeat interval for long-running connections.
websocket:
heartbeat_interval: 45000
Logging Configuration
logging.level (string, default: “info”)
Log level. Options: “error”, “warn”, “info”, “debug”, “trace”
logging:
level: debug
logging.format (string, default: “text”)
Log output format. Options: “text”, “json”
logging:
format: json
logging.file (string)
Path to log file (if not specified, logs to stdout).
logging:
file: "/var/log/mockforge.log"
logging.max_size_mb (integer, default: 10)
Maximum log file size in megabytes before rotation.
logging:
max_size_mb: 50
logging.max_files (integer, default: 5)
Maximum number of rotated log files to keep.
logging:
max_files: 10
Complete Configuration Example
# Complete MockForge configuration example
server:
http_port: 3000
ws_port: 3001
grpc_port: 50051
bind: "0.0.0.0"
admin:
enabled: true
port: 9080
embedded: false
standalone: true
validation:
mode: enforce
aggregate_errors: false
validate_responses: false
status_code: 400
response:
template_expand: true
chaos:
latency_enabled: false
failures_enabled: false
grpc:
proto_dir: "proto/"
enable_reflection: true
max_message_size: 4194304
websocket:
replay_file: "examples/ws-demo.jsonl"
max_connections: 1000
logging:
level: info
format: text
Configuration Precedence
Configuration values are applied in order of priority (highest to lowest):
- Command-line arguments - Override all other settings
- Environment variables - Override config file settings
- Configuration file - Default values from YAML file
- Compiled defaults - Built-in fallback values
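As a concrete illustration of this order, suppose the same HTTP port is set at three levels; the command-line flag wins (values here are hypothetical, using the flag and variable names documented in this chapter):
# config.yaml contains: server.http_port: 3000
# The environment variable overrides the file value:
export MOCKFORGE_SERVER_HTTP_PORT=9080
# The CLI flag overrides both, so the server listens on 4000:
mockforge serve --config config.yaml --http-port 4000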
Environment Variable Mapping
All configuration options can be set via environment variables using the MOCKFORGE_ prefix with underscore-separated paths:
# Server configuration
export MOCKFORGE_SERVER_HTTP_PORT=9080
export MOCKFORGE_SERVER_BIND="127.0.0.1"
# Admin UI
export MOCKFORGE_ADMIN_ENABLED=true
export MOCKFORGE_ADMIN_PORT=9090
# Validation
export MOCKFORGE_VALIDATION_MODE=warn
export MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
# Protocol-specific
export MOCKFORGE_GRPC_PROTO_DIR="my-protos/"
export MOCKFORGE_WEBSOCKET_REPLAY_FILE="replay.jsonl"
Validation
MockForge validates configuration files at startup and reports errors clearly:
# Validate configuration without starting server
mockforge-cli validate-config config.yaml
# Check for deprecated options
mockforge-cli validate-config --check-deprecated config.yaml
Hot Reloading
Some configuration options support runtime updates without restart:
- Validation mode changes
- Template expansion toggle
- Admin UI settings
- Logging level adjustments
# Update validation mode at runtime
curl -X POST http://localhost:9080/__mockforge/config \
-H "Content-Type: application/json" \
-d '{"validation": {"mode": "warn"}}'
Best Practices
Development Configuration
# development.yaml
server:
http_port: 3000
ws_port: 3001
admin:
enabled: true
embedded: true
validation:
mode: warn
response:
template_expand: true
logging:
level: debug
Production Configuration
# production.yaml
server:
http_port: 9080
bind: "127.0.0.1"
admin:
enabled: true
standalone: true
port: 9090
validation:
mode: enforce
chaos:
latency_enabled: false
failures_enabled: false
logging:
level: warn
file: "/var/log/mockforge.log"
Testing Configuration
# test.yaml
server:
http_port: 3000
validation:
mode: "off"  # Quoted so YAML does not parse it as boolean false
response:
template_expand: true
logging:
level: debug
Migration Guide
Upgrading from CLI-only Configuration
If migrating from command-line only configuration:
- Create a config.yaml file with your current settings
- Test the configuration with mockforge-cli validate-config
- Gradually move settings from environment variables to the config file
- Update deployment scripts to use the config file (a minimal sketch follows below)
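A minimal sketch of that migration, assuming the deployment currently exports only MOCKFORGE_SERVER_HTTP_PORT and MOCKFORGE_RESPONSE_TEMPLATE_EXPAND (names from the schema above):
# Capture the environment-based settings in a config file
cat > mockforge.yaml <<'EOF'
server:
  http_port: 3000
response:
  template_expand: true
EOF
# Confirm the file parses and is structurally sound
mockforge-cli validate-config mockforge.yaml
# Point deployment scripts at the config file instead of the variables
mockforge serve --config mockforge.yaml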
Version Compatibility
Configuration options may change between versions. Check the changelog for breaking changes and use the validation command to identify deprecated options:
mockforge-cli validate-config --check-deprecated config.yaml
This schema provides comprehensive control over MockForge’s behavior across all protocols and features.
Configuration Validation Guide
MockForge provides configuration validation to help you catch errors before starting the server. This guide explains how to validate your configuration and troubleshoot common issues.
Quick Start
Initialize a New Configuration
# Create a new project with template configuration
mockforge init my-project
# Or initialize in current directory
mockforge init .
This creates:
- mockforge.yaml - Main configuration file
- examples/ - Example OpenAPI spec and data files (unless --no-examples is used)
Validate Configuration
# Validate the current directory's config
mockforge config validate
# Validate a specific config file
mockforge config validate --config ./my-config.yaml
# Auto-discover config in parent directories
mockforge config validate
What Gets Validated
MockForge’s config validate command currently performs these checks:
1. File Existence
- Checks if the config file exists
- Auto-discovers mockforge.yaml or mockforge.yml in current and parent directories
2. YAML Syntax
- Validates YAML syntax and structure
- Reports parsing errors with line numbers
3. Basic Structure
- Counts HTTP endpoints
- Counts request chains
- Warns about missing sections (HTTP, admin, WebSocket, gRPC)
4. Summary Report
✅ Configuration is valid
📊 Summary:
Found 5 HTTP endpoints
Found 2 chains
⚠️ Warnings:
- No WebSocket configuration found
Manual Validation Checklist
Since validation is currently basic, here’s a manual checklist for comprehensive validation:
Required Fields
HTTP Configuration
http:
port: 3000 # ✅ Required
host: "0.0.0.0" # ✅ Required
Admin Configuration
admin:
enabled: true # ✅ Required if using admin UI
port: 9080 # ✅ Required in standalone mode
Common Mistakes
1. Invalid Port Numbers
# ❌ Wrong - port must be 1-65535
http:
port: 70000
# ✅ Correct
http:
port: 3000
2. Invalid File Paths
# ❌ Wrong - file doesn't exist
http:
openapi_spec: "./nonexistent.json"
# ✅ Correct - verify file exists
http:
openapi_spec: "./examples/openapi.json"
Test the path:
ls -la ./examples/openapi.json
3. Invalid Validation Mode
# ❌ Wrong - invalid mode
validation:
mode: "strict"
# ✅ Correct - must be: off, warn, or enforce
validation:
mode: "enforce"
4. Invalid Latency Configuration
# ❌ Wrong - base_ms is too high
core:
default_latency:
base_ms: 100000
# ✅ Correct - reasonable latency
core:
default_latency:
base_ms: 100
jitter_ms: 50
5. Missing Required Fields in Routes
# ❌ Wrong - missing response status
http:
routes:
- path: /test
method: GET
response:
body: "test"
# ✅ Correct - include status code
http:
routes:
- path: /test
method: GET
response:
status: 200
body: "test"
6. Invalid Environment Variable Names
# ❌ Wrong - incorrect prefix
export MOCK_FORGE_HTTP_PORT=3000
# ✅ Correct - use MOCKFORGE_ prefix
export MOCKFORGE_HTTP_PORT=3000
7. Conflicting Mount Path Configuration
# ❌ Wrong - both standalone and embedded
admin:
enabled: true
port: 9080
mount_path: "/admin" # Conflicts with standalone mode
# ✅ Correct - choose one mode
admin:
enabled: true
mount_path: "/admin" # Embedded under HTTP server
# OR
port: 9080 # Standalone mode (no mount_path)
8. Advanced Validation Configuration
# ✅ Complete validation configuration
validation:
mode: enforce # off | warn | enforce
aggregate_errors: true # Combine multiple errors
validate_responses: false # Validate response payloads
status_code: 400 # Error status code (400 or 422)
skip_admin_validation: true # Skip validation for admin routes
# Per-route overrides
overrides:
"GET /health": "off" # Disable validation for health checks
"POST /api/users": "warn" # Warning mode for user creation
"/api/internal/**": "off" # Disable for internal endpoints
Validation Tools
1. YAML Syntax Validator
Use yamllint for syntax validation:
# Install yamllint
pip install yamllint
# Validate YAML syntax
yamllint mockforge.yaml
2. JSON Schema Validation (Future)
MockForge doesn’t currently provide JSON Schema validation, but you can use the template as a reference:
# Copy the complete template
cp config.template.yaml mockforge.yaml
# Edit with your settings, keeping structure intact
3. Test Your Configuration
The best validation is starting the server:
# Try to start the server
mockforge serve --config mockforge.yaml
# Check for error messages in logs
Troubleshooting
Error: “Configuration file not found”
Cause: Config file doesn’t exist or isn’t in expected location
Solution:
# Check current directory
ls -la mockforge.yaml
# Create from template
mockforge init .
# Or specify path explicitly
mockforge serve --config /path/to/config.yaml
Error: “Invalid YAML syntax”
Cause: YAML parsing error (usually indentation or quotes)
Solution:
# Use yamllint to find the exact error
yamllint mockforge.yaml
# Common fixes:
# - Fix indentation (use 2 spaces, not tabs)
# - Quote strings with special characters
# - Match opening/closing brackets and braces
Warning: “No HTTP configuration found”
Cause: Missing http: section
Solution:
# Add minimal HTTP config
http:
port: 3000
host: "0.0.0.0"
Error: “Port already in use”
Cause: Another process is using the configured port
Solution:
# Find what's using the port
lsof -i :3000
# Kill the process or change the port
# Change port in config:
http:
port: 3001 # Use different port
OpenAPI Spec Not Loading
Cause: File path is incorrect or spec is invalid
Solution:
# Verify file exists
ls -la examples/openapi.json
# Validate OpenAPI spec at https://editor.swagger.io/
# Or use swagger-cli:
npm install -g @apidevtools/swagger-cli
swagger-cli validate examples/openapi.json
Best Practices
1. Use Version Control
# Track your config in Git
git add mockforge.yaml
git commit -m "Add MockForge configuration"
2. Environment-Specific Configs
# Create configs for different environments
mockforge.dev.yaml # Development
mockforge.test.yaml # Testing
mockforge.prod.yaml # Production
# Use with:
mockforge serve --config mockforge.dev.yaml
3. Document Custom Settings
http:
port: 3000
# Custom validation override for legacy endpoint
# TODO: Remove when v2 API is live
validation_overrides:
"POST /legacy/users": "off"
4. Start Simple, Add Complexity
# Start with minimal config
http:
port: 3000
openapi_spec: "./api.json"
admin:
enabled: true
# Add features incrementally:
# 1. Template expansion
# 2. Latency simulation
# 3. Failure injection
# 4. Custom plugins
5. Use the Complete Template
# Copy the complete annotated template
cp config.template.yaml mockforge.yaml
# Remove sections you don't need
# Keep comments for reference
Complete Configuration Template
See the complete annotated configuration template for all available options with documentation.
Validation Roadmap
Future versions of MockForge will include:
- JSON Schema Validation: Full schema validation for all fields
- Field Type Checking: Validate types, ranges, and formats
- Cross-Field Validation: Check for conflicts between settings
- External Resource Validation: Verify files, URLs, and connections
- Deprecation Warnings: Warn about deprecated options
- Migration Assistance: Auto-migrate old configs to new formats
Track progress: MockForge Issue #XXX
Getting Help
Configuration not working as expected?
- Run mockforge config validate first
- Check the Configuration Schema Reference
- Review example configurations
- Ask on GitHub Discussions
- Report bugs at GitHub Issues
Pro Tip: Keep a backup of your working configuration before making significant changes. Use cp mockforge.yaml mockforge.yaml.backup before editing.
Supported Formats
MockForge supports various data formats for configuration, specifications, and data exchange. This reference documents all supported formats, their usage, and conversion utilities.
OpenAPI Specifications
JSON Format (Primary)
MockForge primarily supports OpenAPI 3.0+ specifications in JSON format:
{
"openapi": "3.0.3",
"info": {
"title": "User API",
"version": "1.0.0"
},
"paths": {
"/users": {
"get": {
"summary": "List users",
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/User"
}
}
}
}
}
}
}
}
},
"components": {
"schemas": {
"User": {
"type": "object",
"properties": {
"id": {"type": "string"},
"name": {"type": "string"},
"email": {"type": "string"}
}
}
}
}
}
YAML Format (Alternative)
OpenAPI specifications can also be provided in YAML format:
openapi: 3.0.3
info:
title: User API
version: 1.0.0
paths:
/users:
get:
summary: List users
responses:
'200':
description: Success
content:
application/json:
schema:
type: array
items:
$ref: '#/components/schemas/User'
components:
schemas:
User:
type: object
properties:
id:
type: string
name:
type: string
email:
type: string
Conversion Between Formats
# Convert JSON to YAML
node -e "
const fs = require('fs');
const yaml = require('js-yaml');
const spec = JSON.parse(fs.readFileSync('api.json', 'utf8'));
fs.writeFileSync('api.yaml', yaml.dump(spec));
"
# Convert YAML to JSON
node -e "
const fs = require('fs');
const yaml = require('js-yaml');
const spec = yaml.load(fs.readFileSync('api.yaml', 'utf8'));
fs.writeFileSync('api.json', JSON.stringify(spec, null, 2));
"
Protocol Buffers
.proto Files
gRPC services use Protocol Buffer definitions:
syntax = "proto3";
package myapp.user;
service UserService {
rpc GetUser(GetUserRequest) returns (User);
rpc ListUsers(ListUsersRequest) returns (stream User);
rpc CreateUser(CreateUserRequest) returns (User);
}
message GetUserRequest {
string user_id = 1;
}
message User {
string user_id = 1;
string name = 2;
string email = 3;
google.protobuf.Timestamp created_at = 4;
}
message ListUsersRequest {
int32 page_size = 1;
string page_token = 2;
}
message CreateUserRequest {
string name = 1;
string email = 2;
}
Generated Code
MockForge automatically generates Rust code from .proto files:
// Generated code structure
pub mod myapp {
    pub mod user {
        tonic::include_proto!("myapp.user");

        // Generated service trait
        #[tonic::async_trait]
        pub trait UserService: Send + Sync + 'static {
            async fn get_user(
                &self,
                request: tonic::Request<GetUserRequest>,
            ) -> Result<tonic::Response<User>, tonic::Status>;

            async fn list_users(
                &self,
                request: tonic::Request<ListUsersRequest>,
            ) -> Result<tonic::Response<Self::ListUsersStream>, tonic::Status>;
        }
    }
}
WebSocket Replay Files
JSONL Format
WebSocket interactions use JSON Lines format:
{"ts":0,"dir":"out","text":"Welcome to chat!","waitFor":"^HELLO$"}
{"ts":1000,"dir":"out","text":"How can I help you?"}
{"ts":2000,"dir":"out","text":"Please wait while I process your request..."}
{"ts":5000,"dir":"out","text":"Here's your response: ..."}
Extended JSONL with Templates
{"ts":0,"dir":"out","text":"Session {{uuid}} started at {{now}}"}
{"ts":1000,"dir":"out","text":"Connected to server {{server_id}}"}
{"ts":2000,"dir":"out","text":"{{#if authenticated}}Welcome back!{{else}}Please authenticate{{/if}}"}
Binary Message Support
{"ts":0,"dir":"out","text":"iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg==","binary":true}
{"ts":1000,"dir":"out","text":"Image data sent"}
Configuration Files
YAML Configuration
MockForge uses YAML for configuration files:
# Server configuration
server:
http_port: 3000
ws_port: 3001
grpc_port: 50051
# Validation settings
validation:
mode: enforce
aggregate_errors: false
# Response processing
response:
template_expand: true
# Protocol-specific settings
grpc:
proto_dir: "proto/"
enable_reflection: true
websocket:
replay_file: "examples/demo.jsonl"
JSON Configuration (Alternative)
Configuration can also be provided as JSON:
{
"server": {
"http_port": 3000,
"ws_port": 3001,
"grpc_port": 50051
},
"validation": {
"mode": "enforce",
"aggregate_errors": false
},
"response": {
"template_expand": true
},
"grpc": {
"proto_dir": "proto/",
"enable_reflection": true
},
"websocket": {
"replay_file": "examples/demo.jsonl"
}
}
Data Generation Formats
JSON Output
Generated test data in JSON format:
[
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"name": "John Doe",
"email": "john.doe@example.com",
"created_at": "2025-09-12T10:00:00Z"
},
{
"id": "550e8400-e29b-41d4-a716-446655440001",
"name": "Jane Smith",
"email": "jane.smith@example.com",
"created_at": "2025-09-12T11:00:00Z"
}
]
YAML Output
Same data in YAML format:
- id: 550e8400-e29b-41d4-a716-446655440000
name: John Doe
email: john.doe@example.com
created_at: '2025-09-12T10:00:00Z'
- id: 550e8400-e29b-41d4-a716-446655440001
name: Jane Smith
email: jane.smith@example.com
created_at: '2025-09-12T11:00:00Z'
CSV Output
Tabular data in CSV format:
id,name,email,created_at
550e8400-e29b-41d4-a716-446655440000,John Doe,john.doe@example.com,2025-09-12T10:00:00Z
550e8400-e29b-41d4-a716-446655440001,Jane Smith,jane.smith@example.com,2025-09-12T11:00:00Z
Log Formats
Text Format (Default)
Human-readable log output:
2025-09-12T10:00:00Z INFO mockforge::http: Server started on 0.0.0.0:3000
2025-09-12T10:00:01Z INFO mockforge::http: Request: GET /users
2025-09-12T10:00:01Z DEBUG mockforge::template: Template expanded: {{uuid}} -> 550e8400-e29b-41d4-a716-446655440000
2025-09-12T10:00:01Z INFO mockforge::http: Response: 200 OK
JSON Format
Structured JSON logging:
{"timestamp":"2025-09-12T10:00:00Z","level":"INFO","module":"mockforge::http","message":"Server started on 0.0.0.0:3000"}
{"timestamp":"2025-09-12T10:00:01Z","level":"INFO","module":"mockforge::http","message":"Request: GET /users","method":"GET","path":"/users","user_agent":"curl/7.68.0"}
{"timestamp":"2025-09-12T10:00:01Z","level":"DEBUG","module":"mockforge::template","message":"Template expanded","template":"{{uuid}}","result":"550e8400-e29b-41d4-a716-446655440000"}
{"timestamp":"2025-09-12T10:00:01Z","level":"INFO","module":"mockforge::http","message":"Response: 200 OK","status":200,"duration_ms":15}
Template Syntax
Handlebars Templates
MockForge uses Handlebars-style templates:
{{variable}}
{{object.property}}
{{array.[0]}}
{{#if condition}}content{{/if}}
{{#each items}}{{this}}{{/each}}
{{helper arg1 arg2}}
Built-in Helpers
<!-- Data generation -->
{{uuid}} <!-- Random UUID -->
{{now}} <!-- Current timestamp -->
{{now+1h}} <!-- Future timestamp -->
{{randInt 1 100}} <!-- Random integer -->
{{randFloat 0.0 1.0}} <!-- Random float -->
{{randWord}} <!-- Random word -->
{{randSentence}} <!-- Random sentence -->
{{randParagraph}} <!-- Random paragraph -->
<!-- Request context -->
{{request.path.id}} <!-- URL path parameter -->
{{request.query.limit}} <!-- Query parameter -->
{{request.header.auth}} <!-- HTTP header -->
{{request.body.name}} <!-- Request body field -->
<!-- Logic helpers -->
{{#if user.authenticated}}
Welcome back, {{user.name}}!
{{else}}
Please log in.
{{/if}}
{{#each users}}
<li>{{name}} - {{email}}</li>
{{/each}}
Conversion Utilities
Format Conversion Scripts
#!/bin/bash
# convert-format.sh - Convert between supported formats
input_file=$1
output_format=$2
case $output_format in
"yaml")
python3 -c "
import sys, yaml, json
data = json.load(sys.stdin)
yaml.dump(data, sys.stdout, default_flow_style=False)
" < "$input_file"
;;
"json")
python3 -c "
import sys, yaml, json
data = yaml.safe_load(sys.stdin)
json.dump(data, sys.stdout, indent=2)
" < "$input_file"
;;
"xml")
python3 -c "
import sys, json, dicttoxml
data = json.load(sys.stdin)
xml = dicttoxml.dicttoxml(data, custom_root='root', attr_type=False)
print(xml.decode())
" < "$input_file"
;;
*)
echo "Unsupported format: $output_format"
echo "Supported: yaml, json, xml"
exit 1
;;
esac
Validation Scripts
#!/bin/bash
# validate-format.sh - Validate file formats
file=$1
format=$(basename "$file" | sed 's/.*\.//')
case $format in
"json")
python3 -c "
import sys, json
try:
json.load(sys.stdin)
print('✓ Valid JSON')
except Exception as e:
print('✗ Invalid JSON:', e)
sys.exit(1)
" < "$file"
;;
"yaml")
python3 -c "
import sys, yaml
try:
yaml.safe_load(sys.stdin)
print('✓ Valid YAML')
except Exception as e:
print('✗ Invalid YAML:', e)
sys.exit(1)
" < "$file"
;;
"xml")
python3 -c "
import sys, xml.etree.ElementTree as ET
try:
ET.parse(sys.stdin)
print('✓ Valid XML')
except Exception as e:
print('✗ Invalid XML:', e)
sys.exit(1)
" < "$file"
;;
*)
echo "Unsupported format: $format"
exit 1
;;
esac
Best Practices
Choosing the Right Format
| Use Case | Recommended Format | Reason |
|---|---|---|
| API Specifications | OpenAPI YAML | More readable, better for version control |
| Configuration | YAML | Human-readable, supports comments |
| Data Exchange | JSON | Universally supported, compact |
| Logs | JSON | Structured, searchable |
| Templates | Handlebars | Expressive, logic support |
Format Conversion Workflow
# API development workflow
# 1. Design API in YAML (readable)
swagger-editor
# 2. Convert to JSON for tools that require it
./convert-format.sh api.yaml json > api.json
# 3. Validate both formats
./validate-format.sh api.yaml
./validate-format.sh api.json
# 4. Generate documentation
swagger-codegen generate -i api.yaml -l html -o docs/
# 5. Commit YAML version (better diff)
git add api.yaml
Performance Considerations
- JSON: Fastest parsing, smallest size
- YAML: Slower parsing, larger size, better readability
- XML: Slowest parsing, largest size, most verbose
- Binary formats: Fastest for large data, not human-readable
Compatibility Matrix
| Format | MockForge Support | Readability | Tool Support | Size |
|---|---|---|---|---|
| JSON | ✅ Full | Medium | Excellent | Small |
| YAML | ✅ Full | High | Good | Medium |
| XML | ❌ None | Low | Good | Large |
| Protocol Buffers | ✅ gRPC only | Low | Limited | Small |
| JSONL | ✅ WebSocket | Medium | Basic | Medium |
This format reference ensures you can work effectively with all data formats supported by MockForge across different use cases and workflows.
Templating Reference
MockForge supports lightweight templating across HTTP responses, overrides, and (soon) WS/gRPC. This page documents all supported tokens and controls.
Enabling
- Environment: MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true|false (default: false)
- Config: http.response_template_expand: true|false
- CLI: --response-template-expand
- Determinism: MOCKFORGE_FAKE_TOKENS=false disables faker token expansion.
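A quick way to see expansion take effect, assuming a spec whose example bodies contain tokens such as {{uuid}} (the spec path and endpoint here are illustrative):
# Start with expansion enabled
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true mockforge serve --spec api-spec.yaml --http-port 3000
# Tokens in example bodies now come back expanded;
# without the variable, the literal {{uuid}} text would appear instead
curl http://localhost:3000/users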
Time Tokens
- {{now}} — RFC3339 timestamp.
- {{now±Nd|Nh|Nm|Ns}} — offset from now by days/hours/minutes/seconds.
- Examples: {{now+2h}}, {{now-30m}}, {{now+10s}}, {{now-1d}}.
Random Tokens
- {{rand.int}} — random integer in [0, 1_000_000].
- {{rand.float}} — random float in [0, 1).
- {{randInt a b}} / {{rand.int a b}} — random integer between a and b (order-agnostic, negatives allowed).
- Examples: {{randInt 10 99}}, {{randInt -5 5}}.
UUID
- {{uuid}} — UUID v4.
Request Data Access
- {{request.body.field}} — access fields from the request body JSON. Example: {{request.body.name}} extracts the name field from the request body.
- {{request.path.param}} — access path parameters. Example: {{request.path.id}} extracts the id path parameter.
- {{request.query.param}} — access query parameters. Example: {{request.query.limit}} extracts the limit query parameter.
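For example, with expansion enabled and an example body of {"id": "{{request.path.id}}", "limit": "{{request.query.limit}}"} on a hypothetical GET /users/{id} route, the response echoes pieces of the incoming request:
# Returns {"id": "42", "limit": "10"}
curl "http://localhost:3000/users/42?limit=10"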
Faker Tokens
Faker expansions can be disabled via MOCKFORGE_FAKE_TOKENS=false.
- Minimal (always available): {{faker.uuid}}, {{faker.email}}, {{faker.name}}.
- Extended (when the data-faker feature is enabled): {{faker.address}}, {{faker.phone}}, {{faker.company}}, {{faker.url}}, {{faker.ip}}, {{faker.color}}, {{faker.word}}, {{faker.sentence}}, {{faker.paragraph}}.
Where Templating Applies
- HTTP (OpenAPI): media-level example bodies and synthesized responses.
- HTTP Overrides: YAML patches loaded via validation_overrides.
- WS/gRPC: the provider is registered now; expansion hooks will be added as features land.
Status Codes for Validation Errors
MOCKFORGE_VALIDATION_STATUS=400|422 (default: 400). Affects HTTP request validation failures in enforce mode.
Security & Determinism Notes
- Tokens inject random/time-based values; disable faker to reduce variability.
- For deterministic integration tests, set MOCKFORGE_FAKE_TOKENS=false and prefer explicit literals.
Request Chaining
MockForge supports request chaining, which allows you to create complex workflows where requests can depend on responses from previous requests in the chain. This is particularly useful for testing API workflows that require authentication, data flow between endpoints, or multi-step operations.
Overview
Request chaining enables you to:
- Execute requests in a predefined sequence with dependencies
- Reference data from previous responses using template variables
- Extract and store specific values from responses for reuse
- Validate response status codes and content
- Implement parallel execution for independent requests
- Handle complex authentication and authorization flows
Chain Definition
Chains are defined using YAML or JSON configuration files with the following structure:
id: my-chain
name: My Chain
description: A description of what this chain does
config:
enabled: true
maxChainLength: 20
globalTimeoutSecs: 300
enableParallelExecution: false
links:
# Define your requests here
- request:
id: step1
method: POST
url: https://api.example.com/auth/login
headers:
Content-Type: application/json
body:
username: "testuser"
password: "password123"
extract:
token: body.access_token
storeAs: login_response
dependsOn: []
variables:
base_url: https://api.example.com
tags:
- authentication
- workflow
Chain Configuration
The config section controls how the chain behaves:
| Field | Type | Default | Description |
|---|---|---|---|
enabled | boolean | false | Whether this chain is enabled |
maxChainLength | integer | 20 | Maximum number of requests in the chain |
globalTimeoutSecs | integer | 300 | Total timeout for chain execution |
enableParallelExecution | boolean | false | Enable parallel execution of independent requests |
Request Links
Each link in the chain defines a single HTTP request and its behavior:
Request Definition
| Field | Type | Required | Description |
|---|---|---|---|
id | string | Yes | Unique identifier for this request |
method | string | Yes | HTTP method (GET, POST, PUT, DELETE, etc.) |
url | string | Yes | Request URL (supports template variables) |
headers | object | No | Request headers |
body | any | No | Request body (supports template variables) |
dependsOn | array | No | List of request IDs this request depends on |
timeoutSecs | number | No | Individual request timeout |
expectedStatus | array | No | Expected status codes for validation |
Response Processing
| Field | Type | Required | Description |
|---|---|---|---|
extract | object | No | Extract values from response into variables |
storeAs | string | No | Store entire response with this name |
Template Variables
Chain requests support powerful templating that can reference:
Previous Response Data
Use {{chain.<response_name>.<path>}} to reference data from previous responses:
url: https://api.example.com/users/{{chain.login_response.body.user_id}}/posts
headers:
Authorization: "Bearer {{chain.auth_response.body.access_token}}"
Variable Extraction
Extract values from responses into reusable variables:
extract:
user_id: body.user.id
token: body.access_token
storeAs: user_response
Built-in Template Functions
All standard MockForge templating functions are available:
- {{uuid}} - Random UUID
- {{faker.email}} - Fake email address
- {{faker.name}} - Fake name
- {{rand.int}} - Random integer
- {{now}} - Current timestamp
Advanced Features
Dependency Resolution
Requests can depend on other requests using the dependsOn field. MockForge automatically resolves dependencies using topological sorting:
links:
- request:
id: login
method: POST
url: https://api.example.com/auth/login
body:
username: "user"
password: "pass"
storeAs: auth
- request:
id: get_profile
method: GET
url: https://api.example.com/user/profile
headers:
Authorization: "Bearer {{chain.auth.body.token}}"
dependsOn:
- login
Parallel Execution
Enable enableParallelExecution: true to allow independent requests to run simultaneously:
config:
enableParallelExecution: true
links:
- request:
id: get_profile
method: GET
url: https://api.example.com/profile
dependsOn:
- login
- request:
id: get_preferences
method: GET
url: https://api.example.com/preferences
dependsOn:
- login
# These two requests will run in parallel
Response Validation
Validate response status codes and content:
links:
- request:
id: create_user
method: POST
url: https://api.example.com/users
body:
name: "John Doe"
expectedStatus: [201, 202] # Expect 201 or 202 status codes
JSON Path Support
Chain templating supports JSON path syntax for accessing nested data:
Simple Properties
extract:
user_id: body.id
name: body.profile.name
Array Access
extract:
first_user: body.users.[0].name
user_count: body.users.[*] # Get array length
Complex Nesting
url: https://api.example.com/users/{{chain.login_response.body.user.id}}/projects/{{chain.project_response.body.data.[0].id}}
Response Function (New UI Feature)
MockForge also supports a response() function for use in the Admin UI and other editing contexts:
Syntax
response('request_name', 'json_path')
Examples
// Simple usage
response('login', 'body.user_id')
// Complex JSON path
response('user_profile', 'body.data.employee.name')
// Environment variable usage
let userId = response('login', 'body.user_id');
let updateUrl = `/users/${userId}/profile`;
UI Integration
- Autocomplete: Type
response(in any input field in the UI and use Ctrl+Space for autocomplete - Configuration Dialog: Click the blue template tag next to the function to open the configuration dialog
- Request Selection: Choose from available requests in the current chain
- Path Specification: Enter the JSONPath to extract the desired value
Pre/Post Request Scripting
MockForge supports JavaScript scripting for complex request processing and data manipulation in request chains.
Enable Scripting
Add scripting configuration to any request in your chain:
links:
- request:
id: process_data
method: POST
url: https://api.example.com/process
scripting:
pre_script: |
// Execute before request
console.log('Processing request with mockforge context');
console.log('Request URL:', mockforge.request.url);
if (mockforge.variables.skip_processing) {
request.body.skip_processing = true;
}
post_script: |
// Execute after request
console.log('Request completed in', mockforge.response.duration_ms, 'ms');
if (mockforge.response.status === 429) {
throw new Error('Rate limited - retry needed');
}
// Store custom data for next request
setVariable('processed_user_id', mockforge.response.body.user_id);
runtime: javascript
timeout_ms: 5000
Pre-Scripts
Executed before the HTTP request:
// Available context in mockforge object:
mockforge.request // Current request (id, method, url, headers)
mockforge.chain // Previous responses: mockforge.chain.login.body.user_id
mockforge.variables // Chain variables
mockforge.env // Environment variables
// Direct access to functions:
console.log('Starting request processing');
// Modify request before it goes out
if (mockforge.variables.enable_debug) {
request.headers['X-Debug'] = 'true';
request.body.debug_mode = true;
}
// Set variables for this request
setVariable('request_start_time', Date.now());
// Example: Add authentication from previous response
request.headers['Authorization'] = 'Bearer ' + mockforge.chain.login.body.token;
Post-Scripts
Executed after the HTTP response:
// Available context in mockforge object:
mockforge.response // Current response (status, headers, body, duration_ms)
mockforge.request // Original request
mockforge.chain // Previous responses
mockforge.variables // Chain variables
mockforge.env // Environment variables
// Example: Validate response and extract data
if (mockforge.response.status !== 200) {
throw new Error('Request failed with status ' + mockforge.response.status);
}
// Extract and store data for next requests
setVariable('user_profile', mockforge.response.body);
setVariable('session_cookie', mockforge.response.headers['Set-Cookie']);
// Example: Transform response data
if (mockforge.response.body && mockforge.response.body.user) {
mockforge.response.body.processed_user = {
fullName: mockforge.response.body.user.first_name + ' ' + mockforge.response.body.user.last_name,
age: mockforge.response.body.user.age,
isActive: mockforge.response.body.user.status === 'active'
};
}
Built-in Functions
Logging and Diagnostics
console.log('Debug message:', mockforge.request.url);
console.warn('Warning:', mockforge.response.status);
console.error('Error occurred');
Variable Management
// Set a variable for use in next requests
setVariable('api_token', mockforge.response.body.token);
// Access environment variables
const configUrl = mockforge.env['API_CONFIG_URL'];
Data Validation
// Simple assertions
assert(mockforge.response.status === 200, 'Expected status 200');
// Complex validation
if (!mockforge.response.body || !mockforge.response.body.items) {
throw new Error('Response missing required "items" field');
}
if (mockforge.response.body.items.length === 0) {
console.warn('Response contains empty items array');
}
Error Handling
Scripts can throw errors to fail the chain:
if (mockforge.response.status >= 400) {
throw new Error('HTTP ' + mockforge.response.status + ': ' + mockforge.response.body.error);
}
if (mockforge.response.duration_ms > 30000) {
throw new Error('Request took too long: ' + mockforge.response.duration_ms + 'ms');
}
Security and Isolation
- Timeout Protection: Scripts are limited by
timeout_ms(default: 5 seconds) - Sandboxing: Scripts run in isolated JavaScript contexts
- Resource Limits: CPU and memory usage is monitored and limited
- Network Restrictions: Scripts cannot make outbound network calls
- File System Access: Read-only file access through
fs.readFile()function
Best Practices
- Keep Scripts Simple: Break complex logic into smaller, focused scripts
- Validate Inputs: Always check that expected data exists before processing
- Set Appropriate Timeouts: Use shorter timeouts for simple scripts
- Use Environment Variables: Store configuration in environment variables
- Error Handling: Always check for error conditions and fail fast when needed
- Documentation: Comment complex business logic in your scripts
- Testing: Test scripts with various response scenarios
Environment Variables
For multiple uses of the same response value, store it in an environment variable:
// In environment variables
RESPONSE_USER_ID = response('login', 'body.user_id')
// Then use in multiple places
let url1 = `/users/${RESPONSE_USER_ID}`;
let url2 = `/profile/${RESPONSE_USER_ID}`;
Benefits Over Traditional Templates
- Cleaner Syntax: More readable than
{{chain.request_name.body.path}} - Type Safety: JSONPath validation in the UI
- Better UX: Visual configuration through dialogs
- Autocomplete: Intelligent suggestions for request names and paths
Error Handling
Chains provide comprehensive error handling:
- Dependency errors: Missing or invalid dependencies
- Circular dependencies: Automatic detection and prevention
- Timeout errors: Individual and global timeouts
- Status validation: Expected status code validation
- Network errors: Connection and HTTP errors
Chain Management
Chains can be managed programmatically or via configuration files:
Loading Chains
use mockforge_core::RequestChainRegistry;

let registry = RequestChainRegistry::new(chain_config);

// Load from YAML
registry.register_from_yaml(yaml_content).await?;

// Load from JSON
registry.register_from_json(json_content).await?;
Executing Chains
use mockforge_core::ChainExecutionEngine;

let engine = ChainExecutionEngine::new(registry, config);

// Execute a chain
let result = engine.execute_chain("my-chain").await?;
println!("Chain executed in {}ms", result.total_duration_ms);
Complete Example
See the provided examples in the examples/ directory:
- examples/chain-example.yaml - Comprehensive user management workflow
- examples/simple-chain.json - Simple authentication chain
Working With Large Values
MockForge provides several strategies to handle large values efficiently without affecting performance or crashing the user interface. The system automatically hides large text values by default, but extremely large values can still impact performance.
File System Template Functions
MockForge supports the fs.readFile() template function for reading file contents directly into templates. This is particularly useful for including large text content within structured data.
Syntax:
{{fs.readFile "path/to/file.txt"}}
{{fs.readFile('path/to/file.txt')}}
Example usage in request chaining:
links:
- request:
id: upload_large_data
method: POST
url: https://api.example.com/upload
headers:
Content-Type: application/json
body:
metadata:
filename: "large_document.txt"
size: 1048576
content: "{{fs.readFile('/path/to/large/file.txt')}}"
Error handling:
- If the file doesn’t exist: <fs.readFile error: No such file or directory (os error 2)>
- If the path is empty: <fs.readFile: empty path>
Binary File Request Bodies
For truly large binary files (images, videos, documents), MockForge supports binary file request bodies that reference files on disk rather than loading them into memory.
YAML Configuration:
links:
- request:
id: upload_image
method: POST
url: https://api.example.com/upload
body:
type: binary_file
data:
path: "/path/to/image.jpg"
content_type: "image/jpeg"
JSON Configuration:
{
"id": "upload_image",
"method": "POST",
"url": "https://api.example.com/upload",
"body": {
"type": "binary_file",
"data": {
"path": "/path/to/image.jpg",
"content_type": "image/jpeg"
}
}
}
Key Features:
- Path templating: File paths support template expansion (e.g., "{{chain.previous_response.body.file_path}}")
- Content type: Optional content-type header (defaults to none for binary files)
- Memory efficient: Files are read only when the request is executed
- Error handling: Clear error messages for missing files
Performance Best Practices
- Use binary_file for large binary content (images, videos, large documents)
- Use fs.readFile for large text content within structured JSON/XML bodies
- Template file paths to make configurations dynamic
- Validate file paths before running chains to avoid runtime errors
- Consider file size limits based on your system’s memory constraints
Best Practices
- Keep chains focused: Each chain should have a single, clear purpose
- Use meaningful IDs: Choose descriptive names for requests and chains
- Handle dependencies carefully: Ensure dependency chains are logical and avoid cycles
- Validate responses: Use expectedStatus and extract for critical paths
- Use parallel execution: Enable for independent requests to improve performance
- Template effectively: Leverage chain context variables for dynamic content
- Error handling: Plan for failure scenarios in your chains
- Handle large values efficiently: Use fs.readFile() for large text content and binary_file request bodies for large binary files to maintain performance
Limitations
- Maximum chain length is configurable (default: 20 requests)
- Global execution timeout applies to entire chain
- Circular dependencies are automatically prevented
- Parallel execution requires careful dependency management
Fixtures and Smoke Testing
MockForge supports recording and replaying HTTP requests and responses as fixtures, which can be used for smoke testing your APIs.
Recording Fixtures
To record fixtures, enable recording by setting the environment variable:
MOCKFORGE_RECORD_ENABLED=true
By default, all HTTP requests will be recorded. To record only GET requests, set:
MOCKFORGE_RECORD_GET_ONLY=true
Fixtures are saved in the fixtures directory by default. You can change this location with:
MOCKFORGE_FIXTURES_DIR=/path/to/fixtures
Replay Fixtures
To replay recorded fixtures, enable replay by setting the environment variable:
MOCKFORGE_REPLAY_ENABLED=true
When replay is enabled, MockForge will serve recorded responses for matching requests instead of generating new ones.
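A typical record-then-replay cycle using those variables (ports and spec path illustrative):
# 1. Record traffic while serving
MOCKFORGE_RECORD_ENABLED=true MOCKFORGE_FIXTURES_DIR=./fixtures \
mockforge serve --spec api-spec.yaml --http-port 3000
# 2. Exercise the endpoints you want captured
curl http://localhost:3000/api/users
# 3. Restart in replay mode; matching requests now get the recorded responses
MOCKFORGE_REPLAY_ENABLED=true MOCKFORGE_FIXTURES_DIR=./fixtures \
mockforge serve --spec api-spec.yaml --http-port 3000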
Ready-to-Run Fixtures
Fixtures can be marked as “ready-to-run” for smoke testing by adding a metadata field smoke_test with the value true. These fixtures will be listed in the smoke test endpoints.
Example fixture with smoke test metadata:
{
"fingerprint": {
"method": "GET",
"path": "/api/users",
"query_params": {},
"headers": {}
},
"timestamp": "2024-01-15T10:30:00Z",
"status_code": 200,
"response_headers": {
"content-type": "application/json"
},
"response_body": "{\"users\": []}",
"metadata": {
"smoke_test": "true",
"name": "Get Users Endpoint"
}
}
Smoke Testing
MockForge provides endpoints to list and run smoke tests:
- GET /__mockforge/smoke - List available smoke test endpoints
- GET /__mockforge/smoke/run - Run all smoke tests
These endpoints are also available in the Admin UI under the “Smoke Tests” tab.
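The same endpoints can be exercised from the command line (assuming the HTTP server runs on port 3000):
# List fixtures marked as smoke tests
curl http://localhost:3000/__mockforge/smoke
# Run them all and report results
curl http://localhost:3000/__mockforge/smoke/run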
Admin UI Integration
The Admin UI provides a graphical interface for managing fixtures and running smoke tests:
- View all recorded fixtures in the “Fixtures” tab
- Mark fixtures as ready-to-run for smoke testing
- Run smoke tests with a single click
- View smoke test results and status
Configuration
The following environment variables control fixture and smoke test behavior:
Core Settings
- MOCKFORGE_FIXTURES_DIR - Directory where fixtures are stored (default: ./fixtures)
- MOCKFORGE_RECORD_ENABLED - Enable recording of requests (default: false)
- MOCKFORGE_REPLAY_ENABLED - Enable replay of recorded requests (default: false)
Recording Options
- MOCKFORGE_RECORD_GET_ONLY - Record only GET requests (default: false)
- MOCKFORGE_LATENCY_ENABLED - Include latency in recorded fixtures (default: true)
- MOCKFORGE_RESPONSE_TEMPLATE_EXPAND - Expand templates when recording (default: false)
Validation and Testing
- MOCKFORGE_REQUEST_VALIDATION - Validation level during recording (default: enforce)
- MOCKFORGE_RESPONSE_VALIDATION - Validate responses during replay (default: false)
Configuration File Support
You can also configure fixtures through YAML:
# In your configuration file
core:
fixtures:
dir: "./fixtures"
record_enabled: false
replay_enabled: false
record_get_only: false
Troubleshooting
This guide helps you diagnose and resolve common issues with MockForge. If you’re experiencing problems, follow the steps below to identify and fix the issue.
Quick Diagnosis
Check Server Status
First, verify that MockForge is running and accessible:
# Check if processes are running
ps aux | grep mockforge
# Check listening ports
netstat -tlnp | grep -E ":(3000|3001|50051|9080)"
# Test basic connectivity
curl -I http://localhost:3000/health 2>/dev/null || echo "HTTP server not responding"
curl -I http://localhost:9080/health 2>/dev/null || echo "Admin UI not responding"
Check Logs
Enable verbose logging to see detailed information:
# Run with debug logging
RUST_LOG=mockforge=debug mockforge serve --spec api-spec.yaml
# View recent logs
tail -f mockforge.log
# Filter logs by component
grep "ERROR" mockforge.log
grep "WARN" mockforge.log
HTTP API Issues
Server Won’t Start
Symptoms: mockforge serve exits immediately with error
Common causes and solutions:
- Port already in use:
  # Find what's using the port
  lsof -i :3000
  # Kill conflicting process
  kill -9 <PID>
  # Or use different port
  mockforge serve --http-port 3001
- Invalid OpenAPI specification:
  # Validate YAML syntax
  yamllint api-spec.yaml
  # Validate OpenAPI structure
  swagger-cli validate api-spec.yaml
  # Test with minimal spec
  mockforge serve --spec examples/openapi-demo.json
- File permissions:
  # Check file access
  ls -la api-spec.yaml
  # Fix permissions if needed
  chmod 644 api-spec.yaml
404 Errors for Valid Routes
Symptoms: API returns 404 for endpoints that should exist
Possible causes:
- OpenAPI spec not loaded correctly:
  # Check if spec was loaded
  grep "OpenAPI spec loaded" mockforge.log
  # Verify file path
  ls -la api-spec.yaml
- Path matching issues:
  - Ensure paths in spec match request URLs
  - Check for trailing slashes
  - Verify HTTP methods match
- Template expansion disabled:
  # Enable template expansion
  mockforge serve --response-template-expand --spec api-spec.yaml
Template Variables Not Working
Symptoms: {{variable}} appears literally in responses
Solutions:
- Enable template expansion:
  # Via command line
  mockforge serve --response-template-expand --spec api-spec.yaml
  # Via environment variable
  MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true mockforge serve --spec api-spec.yaml
  # Via config file
  printf 'response:\n  template_expand: true\n' > config.yaml
  mockforge serve --config config.yaml --spec api-spec.yaml
- Check template syntax:
  - Use {{variable}} not ${variable}
  - Ensure variables are defined in spec examples
  - Check for typos in variable names
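As a minimal sketch of template usage inside an OpenAPI response example (the field names are illustrative):
responses:
  "200":
    content:
      application/json:
        example:
          id: "{{uuid}}"
          createdAt: "{{now}}"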
Validation Errors
Symptoms: Requests return 400/422 with validation errors
Solutions:
- Adjust validation mode:
  # Disable validation
  mockforge serve --validation off --spec api-spec.yaml
  # Use warning mode
  mockforge serve --validation warn --spec api-spec.yaml
- Fix request format:
  - Ensure Content-Type header matches request body format
  - Verify required fields are present
  - Check parameter formats match OpenAPI spec
WebSocket Issues
Connection Fails
Symptoms: WebSocket client cannot connect
Common causes:
- Wrong port or path:
  # Check WebSocket port
  netstat -tlnp | grep :3001
  # Test connection
  websocat ws://localhost:3001/ws
- Replay file not found:
  # Check file exists
  ls -la ws-replay.jsonl
  # Run without replay file
  mockforge serve --ws-port 3001  # No replay file specified
Messages Not Received
Symptoms: WebSocket connection established but no messages
Solutions:
- Check replay file format:
  # Validate JSONL syntax
  node -e "
  const fs = require('fs');
  const lines = fs.readFileSync('ws-replay.jsonl', 'utf8').split('\n');
  lines.forEach((line, i) => {
    if (line.trim()) {
      try { JSON.parse(line); }
      catch (e) { console.log(\`Line \${i+1}: \${e.message}\`); }
    }
  });
  "
- Verify message timing:
  - Check ts values are in milliseconds
  - Ensure messages have required fields (ts, dir, text)
Interactive Mode Issues
Symptoms: Client messages not triggering responses
Debug steps:
- Check regex patterns:
  # Test regex patterns
  node -e "
  const pattern = '^HELLO';
  const test = 'HELLO world';
  console.log('Match:', test.match(new RegExp(pattern)));
  "
- Verify state management:
  - Check that state variables are properly set
  - Ensure conditional logic is correct
gRPC Issues
Service Not Found
Symptoms: grpcurl list shows no services
Solutions:
- Check proto directory:
  # Verify proto files exist
  find proto/ -name "*.proto"
  # Check directory path
  MOCKFORGE_PROTO_DIR=proto/ mockforge serve --grpc-port 50051
- Compilation errors:
  # Check for proto compilation errors
  cargo build --verbose 2>&1 | grep -i proto
- Reflection disabled:
  # Enable gRPC reflection
  MOCKFORGE_GRPC_REFLECTION_ENABLED=true mockforge serve --grpc-port 50051
Method Calls Fail
Symptoms: gRPC calls return errors
Debug steps:
- Check service definition:
  # List service methods
  grpcurl -plaintext localhost:50051 describe mockforge.user.UserService
- Validate request format:
  # Test with verbose output
  grpcurl -plaintext -v -d '{"user_id": "123"}' localhost:50051 mockforge.user.UserService/GetUser
- Check proto compatibility:
  - Ensure client and server use same proto definitions
  - Verify message field names and types match
Admin UI Issues
UI Not Loading
Symptoms: Browser shows connection error
Solutions:
- Check admin port:
  # Verify port is listening
  curl -I http://localhost:9080 2>/dev/null || echo "Admin UI not accessible"
  # Try different port
  mockforge serve --admin --admin-port 9090
- CORS issues:
  - Admin UI should work from any origin by default
  - Check browser console for CORS errors
- Embedded vs standalone:
  # Force standalone mode
  mockforge serve --admin --admin-standalone
  # Or embedded mode
  mockforge serve --admin --admin-embed
API Endpoints Not Working
Symptoms: UI loads but API calls fail
Solutions:
- Check admin API:
  # Test admin API directly
  curl http://localhost:9080/__mockforge/status
- Enable admin API:
  # Ensure admin API is not disabled
  mockforge serve --admin  # Don't use --disable-admin-api
Configuration Issues
Config File Not Loading
Symptoms: Settings from config file are ignored
Solutions:
- Validate YAML syntax:
  # Check YAML format
  python3 -c "import yaml; yaml.safe_load(open('config.yaml'))"
  # Or use yamllint
  yamllint config.yaml
- Check file path:
  # Use absolute path
  mockforge serve --config /full/path/to/config.yaml
  # Verify file permissions
  ls -la config.yaml
- Environment variable override:
  - Remember that environment variables override config file settings
  - Command-line arguments override both
Environment Variables Not Working
Symptoms: Environment variables are ignored
Common issues:
- Shell not reloaded:
  # Export variable and reload shell
  export MOCKFORGE_HTTP_PORT=3001
  exec $SHELL
- Variable name typos:
  # Check variable is set
  echo $MOCKFORGE_HTTP_PORT
  # List all MockForge variables
  env | grep MOCKFORGE
Performance Issues
High Memory Usage
Symptoms: MockForge consumes excessive memory
Solutions:
- Reduce concurrent connections:
  # Limit connection pool
  MOCKFORGE_MAX_CONNECTIONS=100 mockforge serve
- Disable unnecessary features:
  # Run with minimal features
  mockforge serve --validation off --response-template-expand false
- Monitor resource usage:
  # Check memory usage
  ps aux | grep mockforge
  # Monitor over time
  htop -p $(pgrep mockforge)
Slow Response Times
Symptoms: API responses are slow
Debug steps:
- Enable latency logging:
  RUST_LOG=mockforge=debug mockforge serve --spec api-spec.yaml 2>&1 | grep -i latency
- Check template complexity:
  - Complex templates can slow response generation
  - Consider caching for frequently used templates
- Profile performance:
  # Use cargo flamegraph for profiling
  cargo flamegraph --bin mockforge-cli -- serve --spec api-spec.yaml
Docker Issues
Container Won’t Start
Symptoms: Docker container exits immediately
Solutions:
- Check container logs:
  docker logs <container-id>
  # Run with verbose output
  docker run --rm mockforge mockforge serve --spec api-spec.yaml
- Volume mounting issues:
  # Ensure spec file is accessible
  docker run -v $(pwd)/api-spec.yaml:/app/api-spec.yaml \
    mockforge mockforge serve --spec /app/api-spec.yaml
- Port conflicts:
  # Use different ports
  docker run -p 3001:3000 -p 3002:3001 mockforge
Port Already in Use
Symptoms: Container fails to start with “address already in use” error
Solutions:
# Check what's using the ports
netstat -tlnp | grep :3000
# Or on macOS:
lsof -i :3000
# Use different ports
docker run -p 3001:3000 -p 3002:3001 mockforge
# Or in docker-compose.yml
services:
  mockforge:
    ports:
      - "3001:3000"  # Map host 3001 to container 3000
      - "3002:3001"  # Map host 3002 to container 3001
Permission Issues
Symptoms: Container can’t read/write mounted volumes
Solutions:
# Fix volume permissions (Linux)
sudo chown -R 1000:1000 fixtures/
sudo chown -R 1000:1000 logs/
# Or run container as your user
docker run --user $(id -u):$(id -g) \
-v $(pwd)/fixtures:/app/fixtures \
mockforge
# macOS typically doesn't need permission fixes
Build Issues
Symptoms: Docker build fails or takes too long
Solutions:
# Clear Docker cache
docker system prune -a
# Rebuild without cache
docker build --no-cache -t mockforge .
# Check disk space
df -h
# Remove unused images
docker image prune -a
Container Performance Issues
Symptoms: Slow response times in Docker
Solutions:
- Increase resources (Docker Desktop):
  - Settings → Resources → Memory: Increase to 4GB+
  - Settings → Resources → CPUs: Increase to 2+
- Reduce logging verbosity:
  docker run -e RUST_LOG=info mockforge  # Instead of RUST_LOG=debug
- Use Docker volumes instead of bind mounts for better performance:
  volumes:
    - mockforge-data:/app/data  # Named volume (faster)
    # Instead of:
    # - ./data:/app/data  # Bind mount (slower on macOS/Windows)
Networking Issues
Symptoms: Can’t connect to MockForge from other containers
Solutions:
# Use Docker network
docker network create mockforge-net
docker run --network mockforge-net --name mockforge \
-p 3000:3000 mockforge
# Other containers on same network can access via:
# http://mockforge:3000
In docker-compose.yml:
services:
  mockforge:
    networks:
      - app-network
  frontend:
    environment:
      API_URL: http://mockforge:3000
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
Getting Help
Log Analysis
# Extract error patterns
grep "ERROR" mockforge.log | head -10
# Find recent issues
tail -100 mockforge.log | grep -E "(ERROR|WARN)"
# Count error types
grep "ERROR" mockforge.log | sed 's/.*ERROR //' | sort | uniq -c | sort -nr
Debug Commands
# Full system information
echo "=== System Info ==="
uname -a
echo "=== Rust Version ==="
rustc --version
echo "=== Cargo Version ==="
cargo --version
echo "=== Running Processes ==="
ps aux | grep mockforge
echo "=== Listening Ports ==="
netstat -tlnp | grep -E ":(3000|3001|50051|9080)"
echo "=== Disk Space ==="
df -h
echo "=== Memory Usage ==="
free -h
Community Support
If you can’t resolve the issue:
- Check existing issues: Search GitHub issues for similar problems
- Create a minimal reproduction: Isolate the issue with minimal configuration
- Include debug information: Attach logs, configuration, and system details
- Use descriptive titles: Clearly describe the problem in issue titles
Emergency Stop
If MockForge is causing issues:
# Kill all MockForge processes
pkill -f mockforge
# Kill specific process
kill -9 <mockforge-pid>
# Clean up any leftover files
rm -f mockforge.log
This troubleshooting guide covers the most common issues. For more specific problems, check the logs and consider creating an issue on GitHub with detailed information about your setup and the problem you’re experiencing.
Common Issues & Solutions
This guide addresses the most frequently encountered issues when using MockForge and provides quick solutions.
Server Issues
Port Already in Use
Problem: Error: Address already in use (os error 98)
Solutions:
# Find what's using the port
lsof -i :3000
# On Windows: netstat -ano | findstr :3000
# Kill the process
kill -9 <PID>
# Or use a different port
mockforge serve --spec api.json --http-port 3001
Prevention: Check ports before starting:
# Quick check script
ports=(3000 3001 9080 50051)
for port in "${ports[@]}"; do
  if lsof -i :$port > /dev/null; then
    echo "Port $port is in use"
  fi
done
Server Won’t Start
Problem: MockForge exits immediately or fails silently
Debugging Steps:
- Check configuration
# Validate config file
mockforge config validate --config mockforge.yaml
- Check logs
# Enable verbose logging
RUST_LOG=debug mockforge serve --spec api.json 2>&1 | tee mockforge.log
- Test with minimal config
# Start with just the spec
mockforge serve --spec examples/openapi-demo.json --http-port 3000
- Check file permissions
ls -la api.json mockforge.yaml
chmod 644 api.json mockforge.yaml
Template & Data Issues
Template Variables Not Expanding
Problem: {{uuid}} appears literally in responses instead of generating UUIDs
Solutions:
# Enable template expansion via environment variable
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true mockforge serve --spec api.json
# Or via config file
# mockforge.yaml
http:
  response_template_expand: true
# Or via CLI flag
mockforge serve --spec api.json --response-template-expand
Common Mistake: Forgetting that template expansion is opt-in for security reasons.
Faker Functions Not Working
Problem: {{faker.name}} not generating fake data
Solutions:
- Enable template expansion (see above)
- Check faker function name: Use lowercase, e.g., {{faker.name}} not {{Faker.Name}}
- Install faker if required: Some advanced faker features may require additional setup
Valid faker functions:
- {{faker.name}} - Person name
- {{faker.email}} - Email address
- {{faker.address}} - Street address
- {{faker.phone}} - Phone number
- {{faker.company}} - Company name
See Templating Reference for complete list.
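For instance, a response example can combine several of these (template expansion must be enabled; the field names are illustrative):
{
  "name": "{{faker.name}}",
  "email": "{{faker.email}}",
  "company": "{{faker.company}}"
}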
Invalid Date/Timestamp Format
Problem: {{now}} generates invalid date format
Solutions:
# Use proper format in OpenAPI spec
properties:
  createdAt:
    type: string
    format: date-time  # Important!
    example: "{{now}}"
Alternative: Use custom format
{
"timestamp": "{{now | date:'%Y-%m-%d'}}"
}
OpenAPI Spec Issues
Spec Not Loading
Problem: Error: Failed to parse OpenAPI specification
Solutions:
- Validate spec syntax
# Using swagger-cli
swagger-cli validate api.json
# Or online
# https://editor.swagger.io/
- Check file format
# JSON
cat api.json | jq .
# YAML
yamllint api.yaml
- Check OpenAPI version
{
"openapi": "3.0.3", // Not "3.0" or "swagger": "2.0"
...
}
- Resolve JSON schema references
# Use json-schema-ref-resolver if needed
npm install -g json-schema-ref-resolver
json-schema-ref-resolver api.json > resolved-api.json
404 for Valid Routes
Problem: Endpoints return 404 even though they exist in the spec
Debugging:
- Check path matching
# Verify paths don't have trailing slashes mismatch
# Spec: /users (should match request: GET /users)
curl http://localhost:3000/users # ✅
curl http://localhost:3000/users/ # ❌ May not match
- Check HTTP method
# Ensure method matches spec
# Spec defines GET but you're using POST
curl -X GET http://localhost:3000/users # ✅
curl -X POST http://localhost:3000/users # ❌ May not match
- Enable debug logging
RUST_LOG=mockforge_http=debug mockforge serve --spec api.json
CORS Issues
CORS Errors in Browser
Problem: Access to fetch at 'http://localhost:3000/users' from origin 'http://localhost:3001' has been blocked by CORS policy
Solutions:
# mockforge.yaml
http:
  cors:
    enabled: true
    allowed_origins:
      - "http://localhost:3000"
      - "http://localhost:3001"
      - "http://localhost:5173"  # Vite default
    allowed_methods: ["GET", "POST", "PUT", "DELETE", "PATCH", "OPTIONS"]
    allowed_headers: ["Content-Type", "Authorization"]
Or via environment variable:
MOCKFORGE_CORS_ENABLED=true \
MOCKFORGE_CORS_ALLOWED_ORIGINS="http://localhost:3001,http://localhost:5173" \
mockforge serve --spec api.json
Debugging: Check browser console for exact CORS error message - it will tell you which header is missing.
Validation Issues
Valid Requests Getting Rejected
Problem: Requests return 422/400 even though they look correct
Solutions:
- Check validation mode
# Use 'warn' instead of 'enforce' for development
MOCKFORGE_REQUEST_VALIDATION=warn mockforge serve --spec api.json
- Check Content-Type header
# Ensure Content-Type matches spec
curl -X POST http://localhost:3000/users \
-H "Content-Type: application/json" \
-d '{"name": "John"}'
- Check required fields
# Spec may require fields you're not sending
# Check spec for 'required' array
- Validate request body structure
# Use Admin UI to see exact request received
# Visit http://localhost:9080 to inspect requests
Validation Too Strict
Problem: Validation rejects requests that should be valid
Solutions:
- Temporarily disable validation
mockforge serve --spec api.json --validation off
- Fix spec if it’s incorrect
// Spec might mark optional fields as required
"properties": {
"name": { "type": "string" },
"email": { "type": "string" }
},
"required": [] // Empty array = all optional
WebSocket Issues
Connection Refused
Problem: WebSocket connection fails immediately
Solutions:
- Check WebSocket port
# Verify port is open
netstat -tlnp | grep :3001
- Check replay file exists
# Ensure file path is correct
ls -la ws-replay.jsonl
MOCKFORGE_WS_REPLAY_FILE=./ws-replay.jsonl mockforge serve --ws-port 3001
- Check WebSocket enabled
# Ensure WebSocket server is started
mockforge serve --ws-port 3001 # Explicit port needed
Messages Not Received
Problem: WebSocket connects but no messages arrive
Solutions:
- Check replay file format
# Validate JSONL syntax
cat ws-replay.jsonl | jq -r '.' # Should parse each line as JSON
- Check message timing
// Replay file format
{"ts": 0, "dir": "out", "text": "Welcome"}
{"ts": 1000, "dir": "out", "text": "Next message"}
- Check waitFor patterns
// Ensure regex patterns match
{"waitFor": "^CLIENT_READY$", "text": "Acknowledged"}
Configuration Issues
Config File Not Found
Problem: Error: Configuration file not found
Solutions:
- Use absolute path
mockforge serve --config /full/path/to/mockforge.yaml
- Check file name
# Valid names
mockforge.yaml
mockforge.yml
.mockforge.yaml
.mockforge.yml
mockforge.config.ts
mockforge.config.js
- Check current directory
pwd
ls -la mockforge.yaml
Environment Variables Not Applied
Problem: Environment variables seem to be ignored
Solutions:
- Check variable names
# Correct format: MOCKFORGE_<SECTION>_<OPTION>
MOCKFORGE_HTTP_PORT=3000 # ✅
MOCKFORGE_PORT=3000 # ❌ Wrong
- Check shell reload
# Export and verify
export MOCKFORGE_HTTP_PORT=3000
echo $MOCKFORGE_HTTP_PORT # Should show 3000
# Or use inline
MOCKFORGE_HTTP_PORT=3000 mockforge serve --spec api.json
- Check precedence
# CLI flags override env vars
mockforge serve --spec api.json --http-port 3001
# Even if MOCKFORGE_HTTP_PORT=3000, port will be 3001
Performance Issues
Slow Response Times
Problem: API responses are slow
Solutions:
- Disable template expansion if not needed
# Template expansion adds overhead
mockforge serve --spec api.json # No templates = faster
- Reduce validation overhead
# Validation adds latency
mockforge serve --spec api.json --validation warn # Faster than 'enforce'
- Check response complexity
# Large responses or complex templates slow things down
# Consider simplifying responses for development
- Monitor resource usage
# Check CPU/memory
top -p $(pgrep mockforge)
High Memory Usage
Problem: MockForge consumes too much memory
Solutions:
- Limit connection pool
MOCKFORGE_MAX_CONNECTIONS=100 mockforge serve --spec api.json
- Disable features not needed
# Minimal configuration
mockforge serve --spec api.json \
--validation off \
--response-template-expand false \
--admin false
- Check for memory leaks
# Monitor over time
watch -n 1 'ps aux | grep mockforge | grep -v grep'
Docker Issues
Container Exits Immediately
Problem: Docker container starts then immediately stops
Solutions:
- Check logs
docker logs <container-id>
docker logs -f <container-id> # Follow logs
- Run interactively
docker run -it --rm mockforge mockforge serve --spec api.json
- Check volume mounts
# Ensure spec file is accessible
docker run -v $(pwd)/api.json:/app/api.json \
mockforge mockforge serve --spec /app/api.json
Port Mapping Issues
Problem: Can’t access MockForge from host
Solutions:
# Proper port mapping
docker run -p 3000:3000 -p 9080:9080 mockforge
# Verify ports are exposed
docker port <container-id>
Permission Issues
Problem: Can’t read/write mounted volumes
Solutions:
# Fix permissions
sudo chown -R 1000:1000 ./fixtures ./logs
# Or run as specific user
docker run --user $(id -u):$(id -g) \
-v $(pwd)/fixtures:/app/fixtures \
mockforge
Admin UI Issues
Admin UI Not Loading
Problem: Can’t access http://localhost:9080
Solutions:
- Enable admin UI
mockforge serve --spec api.json --admin --admin-port 9080
- Check port
# Verify port is listening
curl http://localhost:9080
netstat -tlnp | grep :9080
- Try different port
mockforge serve --spec api.json --admin --admin-port 9090
# Access at http://localhost:9090
Admin API Not Working
Problem: Admin UI loads but API calls fail
Solutions:
# Test admin API directly
curl http://localhost:9080/__mockforge/status
# Enable admin API explicitly
mockforge serve --spec api.json --admin --admin-api-enabled
Plugin Issues
Plugin Won’t Load
Problem: Error: Failed to load plugin
Solutions:
- Check plugin format
# Validate WASM file
file plugin.wasm # Should show: WebAssembly
# Check plugin manifest
mockforge plugin validate plugin.wasm
- Check permissions
# Ensure plugin file is readable
chmod 644 plugin.wasm
- Check compatibility
# Plugin may be for different MockForge version
mockforge --version
# Check plugin requirements
Plugin Crashes
Problem: Plugin causes MockForge to crash
Solutions:
- Check plugin logs
RUST_LOG=mockforge_plugin=debug mockforge serve --plugin ./plugin.wasm
- Check resource limits
# plugin.yaml
capabilities:
  resources:
    max_memory_bytes: 67108864  # 64MB
    max_cpu_time_ms: 5000  # 5 seconds
Getting More Help
If none of these solutions work:
- Collect debug information
# System info
uname -a
rustc --version
mockforge --version
# Check logs
RUST_LOG=debug mockforge serve --spec api.json 2>&1 | tee debug.log
# Test with minimal config
mockforge serve --spec examples/openapi-demo.json --http-port 3000
- Search existing issues
  - Check GitHub Issues
  - Search for similar problems
- Create minimal reproduction
  - Create smallest possible config that reproduces issue
  - Include OpenAPI spec (if relevant)
  - Include error logs
- Open GitHub issue
  - Use descriptive title
  - Include system info, version, logs
  - Attach minimal reproduction
See Also:
- Troubleshooting Guide - Detailed diagnostic steps
- FAQ - Common questions and answers
- Configuration Reference - All configuration options
Frequently Asked Questions (FAQ)
Quick answers to common questions about MockForge.
General Questions
What is MockForge?
MockForge is a comprehensive multi-protocol mocking framework for APIs. It allows you to create realistic mock servers for HTTP/REST, gRPC, WebSocket, and GraphQL without writing code. Perfect for frontend development, integration testing, and parallel team development.
Is MockForge free?
Yes, MockForge is completely free and open-source under MIT/Apache-2.0 licenses. There are no premium tiers, paid features, or usage limits.
What protocols does MockForge support?
MockForge supports:
- HTTP/REST: OpenAPI/Swagger-based mocking with full validation
- gRPC: Dynamic service discovery from
.protofiles with HTTP Bridge - WebSocket: Replay mode, interactive mode, and AI event generation
- GraphQL: Schema-based mocking with automatic resolver generation
How does MockForge compare to WireMock, Mockoon, or MockServer?
See our detailed comparison table in the README. Key differentiators:
- Multi-protocol in a single binary
- AI-powered mock generation and data drift
- WASM plugin system for extensibility
- gRPC HTTP Bridge for REST access to gRPC services
- Built-in encryption for sensitive data
- Rust performance with native compilation
- Multi-language SDKs - Native support for 6 languages vs WireMock’s Java-first approach
For detailed ecosystem comparison, see Ecosystem Comparison Guide.
Can I use MockForge in production?
Yes! MockForge is production-ready with:
- Comprehensive test coverage
- Security audits
- Performance benchmarks
- Docker deployment support
- Observability (Prometheus metrics, tracing)
However, it’s primarily designed for development and testing. For production API mocking, ensure proper security configurations.
Getting Started
How do I install MockForge?
Three options:
# 1. From crates.io (requires Rust)
cargo install mockforge-cli
# 2. From source
git clone https://github.com/SaaSy-Solutions/mockforge
cd mockforge && make setup && make install
# 3. Using Docker
docker pull ghcr.io/saasy-solutions/mockforge:latest
See the Installation Guide for details.
What’s the fastest way to get started?
Follow our 5-Minute Tutorial:
1. cargo install mockforge-cli
2. mockforge init my-project
3. mockforge serve --config mockforge.yaml
4. Test with curl
Do I need to know Rust to use MockForge?
No. MockForge is a CLI tool you can use without Rust knowledge. You only need Rust if:
- Building from source
- Developing custom plugins
- Embedding MockForge as a library
What programming languages are supported?
MockForge provides native SDKs for 6 languages:
- Rust - Native SDK with zero-overhead embedding
- Node.js/TypeScript - Full TypeScript support
- Python - Context manager support with type hints
- Go - Idiomatic Go API
- Java - Maven/Gradle integration
- .NET/C# - NuGet package
All SDKs support embedded mock servers in your test suites. See SDK Documentation for examples.
Can I use MockForge from Python/Node.js/Go/etc.?
Yes! MockForge provides native SDKs for multiple languages. You can embed mock servers directly in your test code:
Python:
from mockforge_sdk import MockServer
with MockServer(port=3000) as server:
    server.stub_response('GET', '/api/users/123', {'id': 123})
    # Your test code here
Node.js:
import { MockServer } from '@mockforge/sdk';
const server = await MockServer.start({ port: 3000 });
await server.stubResponse('GET', '/api/users/123', { id: 123 });
Go:
server := mockforge.NewMockServer(mockforge.MockServerConfig{Port: 3000})
server.Start()
defer server.Stop()
See Ecosystem & Use Cases Guide for complete examples in all languages.
How do I create my first mock API?
# 1. Initialize a project
mockforge init my-api
# 2. Edit the generated mockforge.yaml
vim mockforge.yaml
# 3. Start the server
mockforge serve --config mockforge.yaml
# 4. Test it
curl http://localhost:3000/your-endpoint
Or use an existing OpenAPI spec:
mockforge serve --spec your-api.json
Configuration & Setup
How do I configure MockForge?
Three ways (in order of priority):
1. CLI flags: mockforge serve --http-port 3000
2. Environment variables: export MOCKFORGE_HTTP_PORT=3000
3. Config file: mockforge serve --config config.yaml
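A quick sketch of that precedence (CLI flags beat environment variables, which beat the config file):
# The config file may set port 3000 and the env var says 3001,
# but the CLI flag wins: the server listens on 3002
MOCKFORGE_HTTP_PORT=3001 mockforge serve --config config.yaml --http-port 3002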
See the Configuration Guide and Complete Config Template.
Where should I put my configuration file?
MockForge looks for config files in this order:
1. Path specified with --config
2. MOCKFORGE_CONFIG_FILE environment variable
3. ./mockforge.yaml or ./mockforge.yml in current directory
4. Auto-discovered in parent directories
Can I use environment variables for all settings?
Yes! Every config option can be set via environment variables using the MOCKFORGE_ prefix:
export MOCKFORGE_HTTP_PORT=3000
export MOCKFORGE_ADMIN_ENABLED=true
export MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
How do I validate my configuration?
mockforge config validate
mockforge config validate --config my-config.yaml
See the Configuration Validation Guide.
OpenAPI & HTTP Mocking
Can I use Swagger/OpenAPI specs?
Yes! Both OpenAPI 3.0 and Swagger 2.0 are supported:
mockforge serve --spec openapi.json
mockforge serve --spec swagger.yaml
MockForge automatically generates mock endpoints from your specification.
How does request validation work?
Three modes:
- off: No validation (accept all requests)
- warn: Log validation errors but accept requests
- enforce: Reject invalid requests with 400/422
mockforge serve --validation enforce --spec api.json
Why aren’t my template variables working?
Template expansion must be explicitly enabled:
# Via CLI
mockforge serve --response-template-expand
# Via environment
export MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
# Via config
http:
  response_template_expand: true
This is a security feature to prevent accidental template processing.
What template variables are available?
{{uuid}} - Random UUID v4
{{now}} - Current timestamp (ISO 8601)
{{now+2h}} - Timestamp 2 hours from now
{{now-30m}} - Timestamp 30 minutes ago
{{randInt 1 100}} - Random integer 1-100
{{rand.float}} - Random float
{{faker.email}} - Fake email address
{{faker.name}} - Fake person name
{{request.body.field}} - Access request data
{{request.path.id}} - Path parameters
{{request.header.Auth}} - Request headers
See the Templating Reference for complete details.
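For example, a response body can echo request data back (assuming a route with an {id} path parameter and template expansion enabled):
{
  "id": "{{request.path.id}}",
  "requestedAt": "{{now}}"
}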
Can I override specific endpoints?
Yes! Define custom routes in your config that override OpenAPI spec:
http:
  openapi_spec: api.json
  routes:
    - path: /custom/endpoint
      method: GET
      response:
        status: 200
        body: '{"custom": "response"}'
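With that config in place, the override can be checked directly (assuming the default HTTP port 3000):
curl http://localhost:3000/custom/endpoint
# {"custom": "response"}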
gRPC Mocking
Do I need to compile my proto files?
No. MockForge dynamically parses .proto files at runtime. Just:
1. Put .proto files in the ./proto directory
2. Start MockForge: mockforge serve --grpc-port 50051
3. Services are automatically discovered and mocked
How do I access gRPC services via HTTP?
Enable the HTTP Bridge:
grpc:
  dynamic:
    enabled: true
  http_bridge:
    enabled: true
    base_path: "/api"
Now access gRPC services as REST APIs:
# gRPC
grpcurl -d '{"id": "123"}' localhost:50051 UserService/GetUser
# HTTP (via bridge)
curl -X POST http://localhost:8080/api/userservice/getuser \
-d '{"id": "123"}'
Can I use gRPC reflection?
Yes, it’s enabled by default:
# List services
grpcurl -plaintext localhost:50051 list
# Describe a service
grpcurl -plaintext localhost:50051 describe UserService
Does MockForge support gRPC streaming?
Yes, all four streaming modes:
- Unary (single request → single response)
- Server streaming (single request → stream of responses)
- Client streaming (stream of requests → single response)
- Bidirectional streaming (stream ↔ stream)
WebSocket Mocking
How do I create WebSocket replay files?
Use JSON Lines (JSONL) format:
{"ts":0,"dir":"out","text":"Welcome!","waitFor":"^CLIENT_READY$"}
{"ts":100,"dir":"out","text":"{{uuid}}"}
{"ts":200,"dir":"in","text":"ACK"}
- ts: Milliseconds timestamp
- dir: "in" (received) or "out" (sent)
- text: Message content (supports templates)
- waitFor: Optional regex/JSONPath pattern
Can I match JSON messages?
Yes, use JSONPath in waitFor:
{"waitFor": "$.type", "text": "Matched type field"}
{"waitFor": "$.user.id", "text": "Matched user ID"}
See README-websocket-jsonpath.md.
What’s AI event generation?
Generate realistic WebSocket event streams from narrative descriptions:
mockforge serve --ws-ai-enabled \
--ws-ai-narrative "Simulate 5 minutes of stock trading" \
--ws-ai-event-count 20
Perfect for testing real-time features without manually scripting events.
AI Features
Do I need an API key for AI features?
Not necessarily. Three options:
1. Ollama (Free, Local): No API key needed
   ollama pull llama2
   mockforge serve --ai-enabled --rag-provider ollama
2. OpenAI (Paid): ~$0.01 per 1,000 requests
   export MOCKFORGE_RAG_API_KEY=sk-...
   mockforge serve --ai-enabled --rag-provider openai
3. Anthropic, or OpenAI-compatible APIs: Similar to OpenAI
What are AI features used for?
- Intelligent Mock Generation: Generate responses from natural language prompts
- Data Drift Simulation: Realistic data evolution (order status, stock levels, etc.)
- AI Event Streams: Generate WebSocket event sequences from narratives
See AI_DRIVEN_MOCKING.md.
How much does AI cost?
- Ollama: Free (runs locally)
- OpenAI GPT-3.5: ~$0.01 per 1,000 requests
- OpenAI GPT-4: ~$0.10 per 1,000 requests
- Anthropic Claude: Similar to GPT-4
Use Ollama for development, OpenAI for production if needed.
Plugins
How do I install plugins?
# From URL
mockforge plugin install https://example.com/plugin.wasm
# From Git with version
mockforge plugin install https://github.com/user/plugin#v1.0.0
# From local file
mockforge plugin install ./my-plugin.wasm
# List installed
mockforge plugin list
Can I create custom plugins?
Yes! Plugins are written in Rust and compiled to WebAssembly:
1. Use the mockforge-plugin-sdk crate
2. Implement plugin traits
3. Compile to WASM target
4. Install and use
See the Plugin Development Guide and Add a Custom Plugin Tutorial.
Are plugins sandboxed?
Yes. Plugins run in a WebAssembly sandbox with:
- Memory isolation
- CPU/memory limits
- No network access (unless explicitly allowed)
- No file system access (unless explicitly allowed)
Admin UI
How do I access the Admin UI?
Two modes:
Standalone (separate port):
mockforge serve --admin --admin-port 9080
# Access: http://localhost:9080
Embedded (under HTTP server):
mockforge serve --admin-embed --admin-mount-path /admin
# Access: http://localhost:3000/admin
Is authentication available?
Not yet. Role-based authentication (Admin/Viewer) is planned for v1.1. The frontend UI components are built, but backend JWT/OAuth integration is pending.
Currently, the Admin UI is accessible without authentication.
What can I do in the Admin UI?
- View real-time request logs (via Server-Sent Events)
- Monitor performance metrics
- Manage fixtures with drag-and-drop
- Configure latency and fault injection
- Search requests and logs
- View server health and statistics
See Admin UI Walkthrough.
Deployment
Can I run MockForge in Docker?
Yes:
# Using Docker Compose
docker-compose up
# Using Docker directly
docker run -p 3000:3000 -p 9080:9080 mockforge
See DOCKER.md for complete documentation.
How do I deploy to Kubernetes?
Use the Helm chart or create Deployment/Service manifests:
# Using Helm (if available)
helm install mockforge ./charts/mockforge
# Or use kubectl
kubectl apply -f k8s/deployment.yaml
What ports does MockForge use?
Default ports:
- 3000: HTTP server
- 3001: WebSocket server
- 50051: gRPC server
- 4000: GraphQL server
- 9080: Admin UI
- 9090: Prometheus metrics
All ports are configurable.
Performance & Limits
How many requests can MockForge handle?
Typical performance (modern hardware):
- HTTP: 10,000+ req/s
- WebSocket: 1,000+ concurrent connections
- gRPC: 5,000+ req/s
Performance depends on:
- Response complexity
- Template expansion
- Validation enabled
- Hardware specs
See our benchmarks.
Does MockForge scale horizontally?
Yes. Run multiple instances behind a load balancer:
# Instance 1
mockforge serve --http-port 3000
# Instance 2
mockforge serve --http-port 3001
# Load balancer distributes traffic
For stateless mocking (no shared state), this works great.
What are the resource requirements?
Minimal:
- Memory: ~50MB base + ~10MB per 1,000 concurrent connections
- CPU: 1-2 cores sufficient for most workloads
- Disk: ~100MB for binary + storage for logs/fixtures
Troubleshooting
Server won’t start - port already in use
# Find what's using the port
lsof -i :3000
# Use a different port
mockforge serve --http-port 3001
Template variables appear literally in responses
Enable template expansion:
mockforge serve --response-template-expand
Validation rejecting valid requests
Adjust validation mode:
mockforge serve --validation warn # or 'off'
WebSocket connection fails
Check the WebSocket port and replay file:
# Verify port
netstat -tlnp | grep :3001
# Check replay file exists
ls -la ws-replay.jsonl
Admin UI not loading
Verify the admin UI is enabled and port is correct:
mockforge serve --admin --admin-port 9080
curl http://localhost:9080
For more issues, see the Troubleshooting Guide.
Development & Contributing
Can I embed MockForge in my application?
Yes! Use MockForge crates as libraries:
use mockforge_http::build_router;
use mockforge_core::{ValidationOptions, Config};

let router = build_router(
    Some("api.json".to_string()),
    Some(ValidationOptions::enforce()),
    None,
).await;
See the Rust API Documentation.
How do I contribute to MockForge?
- Check CONTRIBUTING.md
- Look for “good first issue” labels
- Fork, make changes, submit PR
- Ensure tests pass:
cargo test - Follow code style:
cargo fmt && cargo clippy
Where can I report bugs?
Report bugs via GitHub Issues. Please include:
- MockForge version
- Operating system
- Configuration file (if applicable)
- Steps to reproduce
- Expected vs actual behavior
- Error logs
Is there a community forum?
- GitHub Discussions: Community Forum
- GitHub Issues: Bug Reports & Feature Requests
- Discord: Join our community chat
Licensing & Commercial Use
What license is MockForge under?
Dual-licensed: MIT OR Apache-2.0
You can choose either license for your use case.
Can I use MockForge commercially?
Yes, absolutely. Both MIT and Apache-2.0 are permissive licenses that allow commercial use without restrictions.
Do I need to open-source my configurations?
No. Your configuration files, mock data, and custom plugins are yours. Only if you modify MockForge source code and distribute it do licensing terms apply.
Can I sell MockForge-based services?
Yes. You can offer:
- Hosted MockForge instances
- Custom plugins
- Support services
- Training/consulting
Use Cases
What use cases does MockForge support?
MockForge supports a wide range of use cases:
- Unit Tests - Embed mock servers directly in test suites across all supported languages
- Integration Tests - Test complex multi-service interactions with stateful mocking
- Service Virtualization - Replace external dependencies with mocks using proxy mode
- Development Environments - Create local development environments without backend dependencies
- Isolating from Flaky Dependencies - Simulate network failures and slow responses
- Simulating APIs That Don’t Exist Yet - Generate mocks from API specifications before implementation
See Ecosystem & Use Cases Guide for detailed examples and code samples.
Can I use MockForge for unit testing?
Yes! MockForge SDKs allow you to embed mock servers directly in your unit tests:
Rust:
let mut server = MockServer::new().port(0).start().await?;
server.stub_response("GET", "/api/users/123", json!({"id": 123})).await?;
Python:
with MockServer(port=0) as server:
    server.stub_response('GET', '/api/users/123', {'id': 123})
No separate server process required. See SDK Documentation for examples.
How do I replace external APIs in my tests?
Use MockForge’s proxy mode with record/replay:
# Record real API interactions
mockforge serve --proxy-enabled \
--proxy-target https://api.external-service.com \
--record-responses ./recordings/
# Replay from recordings
mockforge serve --replay-from ./recordings/
Or use the SDK to programmatically stub responses. See Service Virtualization for details.
Can I simulate network failures and slow responses?
Yes! MockForge provides built-in latency and fault injection:
# Add latency
mockforge serve --latency-mode normal --latency-mean-ms 500
# Inject failures
mockforge serve --failure-rate 0.1 --failure-codes 500,503
Or configure in your SDK:
const server = await MockServer.start({
  latency: { mode: 'normal', meanMs: 500 },
  failures: { enabled: true, failureRate: 0.1 }
});
See Isolating from Flaky Dependencies for examples.
How do I mock an API that doesn’t exist yet?
Generate mocks from API specifications:
# From OpenAPI spec
mockforge serve --spec api-spec.yaml
# From GraphQL schema
mockforge serve --graphql-schema schema.graphql
# From gRPC proto files
mockforge serve --grpc-port 50051 --proto-dir ./proto
All endpoints are automatically available with schema-validated responses. See Simulating APIs That Don’t Exist Yet for details.
What’s Next?
Ready to start? Try our 5-Minute Tutorial!
Need more help?
[Unreleased]
Added
- Nothing yet.
Changed
- Nothing yet.
Deprecated
- Nothing yet.
Removed
- Nothing yet.
Fixed
- Nothing yet.
Security
- Nothing yet.
[0.2.0] - 2025-10-29
Added
- Output control features for MockForge generator with comprehensive configuration options
- Unified spec parser with enhanced validation and error reporting
- Multi-framework client generation with Angular and Svelte support
- Enhanced mock data generation with OpenAPI support
- Configuration file support for mock generation
- Browser mobile proxy mode implementation
- Comprehensive documentation and example workflows
Changed
- Enhanced CLI with progress indicators, error handling, and code quality improvements
- Comprehensive plugin architecture documentation
Fixed
- Remove tests that access private fields in mock data tests
- Fix compilation issues in mockforge-collab and mockforge-ui
- Update mockforge-plugin-core version to 0.1.6 in plugin-sdk
- Enable SQLx offline mode for mockforge-collab publishing
- Add description field to mockforge-analytics
- Add version requirements to all mockforge path dependencies
- Fix publish order dependencies (mockforge-chaos before mockforge-reporting)
- Update Cargo.lock and format client generator tests
[0.1.3] - 2025-10-22
Changes
- docs: prepare release 0.1.3
- docs: update CHANGELOG for 0.1.3 release
- docs: add roadmap completion summary
- feat: add Kubernetes-style health endpoint aliases and dashboard shortcut
- feat: add unified config & profiles with multi-format support
- feat: add capture scrubbing and deterministic replay
- feat: add native GraphQL operation handlers with advanced features
- feat: add programmable WebSocket handlers
- feat: add HTTP scenario switching for OpenAPI response examples
- feat: add mockforge-test crate and integration testing examples
- build: enable publishing for mockforge-ui and mockforge-cli
- build: extend publish script for internal crates
- build: parameterize publish script with workspace version
[0.1.2] - 2025-10-17
Changes
- build: make version update tolerant
- build: manage version references via wrapper
- build: mark example crates as non-publishable
- build: drop publish-order for cargo-release 0.25
- build: centralize release metadata in release.toml
- build: remove per-crate release metadata
- build: fix release metadata field name
- build: move workspace release metadata into Cargo.toml
- build: require execute flag for release wrapper
- build: automate changelog generation during release
- build: add release wrapper with changelog guard
- build: align release tooling with cargo-release 0.25
[0.1.1] - 2025-10-17
Added
- OpenAPI request validation (path/query/header/cookie/body) with deep $ref resolution and composite schemas (oneOf/anyOf/allOf).
- Validation modes: disabled, warn, enforce, with aggregate error reporting and detailed error objects.
- Runtime Admin UI panel to view/toggle validation mode and per-route overrides; Admin API endpoint /__mockforge/validation.
- CLI flags and config options to control validation (including skip_admin_validation and per-route validation_overrides).
- New e2e tests for 2xx/422 request validation and response example expansion across HTTP routes.
- Templating reference docs and examples; WS templating tests and demo update.
- Initial release of MockForge
- HTTP API mocking with OpenAPI support
- gRPC service mocking with Protocol Buffers
- WebSocket connection mocking with replay functionality
- CLI tool for easy local development
- Admin UI for managing mock servers
- Comprehensive documentation with mdBook
- GitHub Actions CI/CD pipeline
- Security audit integration
- Pre-commit hooks for code quality
Changed
- HTTP handlers now perform request validation before routing; invalid requests return 400 with structured details (when enforce).
- Bump jsonschema to 0.33 and adapt validator API; enable draft selection and format checks internally.
- Improve route registry and OpenAPI parameter parsing, including styles/explode and array coercion for query/header/cookie parameters.
Deprecated
- N/A
Removed
- N/A
Fixed
- Resolve admin mount prefix from config and exclude admin routes from validation when configured.
- Various small correctness fixes in OpenAPI schema mapping and parameter handling; clearer error messages.
Security
- N/A