MockForge
MockForge is a comprehensive mocking framework for APIs, gRPC services, and WebSockets. It provides a unified interface for creating, managing, and deploying mock servers across different protocols.
Features
- Multi-Protocol Support: HTTP REST APIs, gRPC services, and WebSocket connections
- Dynamic Response Generation: Create realistic mock responses with configurable latency and failure rates
- Scenario Management: Define complex interaction scenarios with state management
- CLI Tool: Easy-to-use command-line interface for local development
- Admin UI: Web-based interface for managing mock servers
- Extensible Architecture: Plugin system for custom response generators
Quick Start
Installation
cargo install mockforge-cli
Basic Usage
# Start a mock server with an OpenAPI spec
cargo run -p mockforge-cli -- serve --spec examples/openapi-demo.json --http-port 3000
# Add WebSocket support with replay file
MOCKFORGE_WS_REPLAY_FILE=examples/ws-demo.jsonl cargo run -p mockforge-cli -- serve --ws-port 3001
# Full configuration with Admin UI
MOCKFORGE_WS_REPLAY_FILE=examples/ws-demo.jsonl \
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
cargo run -p mockforge-cli -- serve --spec examples/openapi-demo.json --admin --admin-port 8080
# Use configuration file
cargo run -p mockforge-cli -- serve --config demo-config.yaml
Docker
docker run -p 3000:3000 -p 3001:3001 -p 50051:50051 saasy-solutions/mockforge
Documentation Structure
- Getting Started - Installation and basic setup
- HTTP Mocking - REST API mocking guide
- gRPC Mocking - gRPC service mocking
- WebSocket Mocking - WebSocket connection mocking
- Configuration - Advanced configuration options
- API Reference - Complete API documentation
- Contributing - How to contribute to MockForge
- FAQ - Frequently asked questions
Examples
Check out the examples/ directory for sample configurations and use cases.
Community
- GitHub Issues - Report bugs and request features
- GitHub Discussions - Ask questions and share ideas
- Discord - Join our community chat
License
Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE)
- MIT License (LICENSE-MIT)
at your option.
Installation
MockForge can be installed through multiple methods depending on your needs and environment. Choose the installation method that best fits your workflow.
Prerequisites
Before installing MockForge, ensure you have one of the following:
- Rust toolchain (for cargo installation or building from source)
- Docker (for containerized deployment)
- Pre-built binaries (when available)
Method 1: Cargo Install (Recommended)
The easiest way to install MockForge is through Cargo, Rust’s package manager:
cargo install mockforge-cli
This installs the MockForge CLI globally on your system. After installation, you can verify it’s working:
mockforge --version
Updating
To update to the latest version:
cargo install mockforge-cli --force
Method 2: Docker (Containerized)
MockForge is also available as a Docker image, which is ideal for:
- Isolated environments
- CI/CD pipelines
- Systems without Rust installed
Pull from Docker Hub
docker pull saasy-solutions/mockforge
Run with basic configuration
docker run -p 3000:3000 -p 3001:3001 -p 50051:50051 -p 8080:8080 \
-e MOCKFORGE_ADMIN_ENABLED=true \
-e MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
mockforge
Build from source
git clone https://github.com/SaaSy-Solutions/mockforge.git
cd mockforge
docker build -t mockforge .
Method 3: Building from Source
For development or custom builds, you can build MockForge from source:
git clone https://github.com/SaaSy-Solutions/mockforge.git
cd mockforge
cargo build --release
The binary will be available at target/release/mockforge.
To install it system-wide after building:
cargo install --path crates/mockforge-cli
Verification
After installation, verify MockForge is working:
# Check version
mockforge --version
# View help
mockforge --help
# Start with example configuration
mockforge serve --spec examples/openapi-demo.json --http-port 3000
Platform Support
MockForge supports:
- Linux (x86_64, aarch64)
- macOS (x86_64, aarch64)
- Windows (x86_64)
- Docker (any platform with Docker support)
Troubleshooting Installation
Cargo installation fails
If cargo install fails, ensure you have Rust installed:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
Docker permission issues
If Docker commands fail with permission errors:
# Add user to docker group (Linux)
sudo usermod -aG docker $USER
# Log out and back in for changes to take effect
Port conflicts
If default ports (3000, 3001, 8080, 50051) are in use:
# Check what's using the ports
lsof -i :3000
lsof -i :3001
# Kill conflicting processes or use different ports
mockforge serve --http-port 3001 --ws-port 3002 --admin-port 8081
Next Steps
Once installed, proceed to the Quick Start guide to create your first mock server, or read about Basic Concepts to understand how MockForge works.
Quick Start
Get MockForge running in under 5 minutes with this hands-on guide. We’ll create a mock API server and test it with real HTTP requests.
Prerequisites
Ensure MockForge is installed and available in your PATH.
Step 1: Start a Basic HTTP Mock Server
MockForge can serve mock APIs defined in OpenAPI specifications. Let’s use the included example:
# Navigate to the MockForge directory (if building from source)
cd mockforge
# Start the server with the demo OpenAPI spec
mockforge serve --spec examples/openapi-demo.json --http-port 3000
You should see output like:
MockForge v0.1.0 starting...
HTTP server listening on 0.0.0.0:3000
OpenAPI spec loaded from examples/openapi-demo.json
Ready to serve requests at http://localhost:3000
Step 2: Test Your Mock API
Open a new terminal and test the API endpoints:
# Health check endpoint
curl http://localhost:3000/ping
Expected response:
{
"status": "pong",
"timestamp": "2025-09-12T17:20:01.512504405+00:00",
"requestId": "550e8400-e29b-41d4-a716-446655440000"
}
# List users endpoint
curl http://localhost:3000/users
Expected response:
[
{
"id": "550e8400-e29b-41d4-a716-446655440001",
"name": "John Doe",
"email": "john@example.com",
"createdAt": "2025-09-12T17:20:01.512504405+00:00",
"active": true
}
]
# Create a new user
curl -X POST http://localhost:3000/users \
-H "Content-Type: application/json" \
-d '{"name": "Jane Smith", "email": "jane@example.com"}'
# Get user by ID (path parameter)
curl http://localhost:3000/users/123
Step 3: Enable Template Expansion
MockForge supports dynamic content generation. Enable template expansion for more realistic data:
# Stop the current server (Ctrl+C), then restart with templates enabled
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
mockforge serve --spec examples/openapi-demo.json --http-port 3000
Now test the endpoints again - you’ll see different UUIDs and timestamps each time!
Step 4: Add WebSocket Support
MockForge can also mock WebSocket connections. Let’s add WebSocket support to our server:
# Stop the server, then restart with WebSocket support
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
MOCKFORGE_WS_REPLAY_FILE=examples/ws-demo.jsonl \
mockforge serve --spec examples/openapi-demo.json --ws-port 3001 --http-port 3000
Step 5: Test WebSocket Connection
Test the WebSocket endpoint (requires Node.js or a WebSocket client):
# Using Node.js
node -e "
const WebSocket = require('ws');
const ws = new WebSocket('ws://localhost:3001/ws');
ws.on('open', () => {
console.log('Connected! Sending CLIENT_READY...');
ws.send('CLIENT_READY');
});
ws.on('message', (data) => {
console.log('Received:', data.toString());
if (data.toString().includes('ACK')) {
ws.send('ACK');
}
if (data.toString().includes('CONFIRMED')) {
ws.send('CONFIRMED');
}
});
ws.on('close', () => console.log('Connection closed'));
"
Expected WebSocket message flow:
- Send CLIENT_READY
- Receive welcome message with session ID
- Receive data message, respond with ACK
- Receive heartbeat messages
- Receive notification, respond with CONFIRMED
Step 6: Enable Admin UI (Optional)
For a visual interface to manage your mock server:
# Stop the server, then restart with admin UI
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
MOCKFORGE_WS_REPLAY_FILE=examples/ws-demo.jsonl \
mockforge serve --spec examples/openapi-demo.json \
--admin --admin-port 8080 \
--http-port 3000 --ws-port 3001
Access the admin interface at: http://localhost:8080
Step 7: Using Configuration Files
Instead of environment variables, you can use a configuration file:
# Stop the server, then start with config file
mockforge serve --config demo-config.yaml
Step 8: Docker Alternative
If you prefer Docker:
# Build and run with Docker
docker build -t mockforge .
docker run -p 3000:3000 -p 3001:3001 -p 8080:8080 \
-e MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
-e MOCKFORGE_WS_REPLAY_FILE=examples/ws-demo.jsonl \
mockforge
What’s Next?
Congratulations! You now have a fully functional mock server running. Here are some next steps:
- Learn about Basic Concepts to understand how MockForge works
- Explore HTTP Mocking for advanced REST API features
- Try WebSocket Mocking for real-time communication
- Check out the Admin UI for visual management
Troubleshooting
Server won’t start
- Check if ports 3000, 3001, or 8080 are already in use
- Verify the OpenAPI spec file path is correct
- Ensure MockForge is properly installed
Template variables not working
- Make sure MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true is set
- Check that template syntax {{variable}} is used correctly
WebSocket connection fails
- Verify WebSocket port (default 3001) is accessible
- Check that MOCKFORGE_WS_REPLAY_FILE points to a valid replay file
- Ensure the replay file uses the correct JSONL format
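A replay file is line-delimited JSON: one message event per line. The authoritative schema is whatever examples/ws-demo.jsonl uses; the field names below (`ts`, `dir`, `text`) are purely illustrative, not MockForge's documented format:

```jsonl
{"ts": 0, "dir": "out", "text": "HELLO {{uuid}}"}
{"ts": 100, "dir": "in", "text": "CLIENT_READY"}
{"ts": 200, "dir": "out", "text": "DATA"}
```

If connections drop immediately, compare your file line-by-line against the shipped example before assuming a server problem.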
Need help?
- Check the examples README for detailed testing scripts
- Review Configuration Files for advanced setup
- Visit the Troubleshooting guide
Basic Concepts
Understanding MockForge’s core concepts will help you make the most of its capabilities. This guide explains the fundamental ideas behind MockForge’s design and functionality.
Multi-Protocol Architecture
MockForge is designed to mock multiple communication protocols within a single, unified framework:
HTTP/REST APIs
- OpenAPI/Swagger Support: Define API contracts using industry-standard OpenAPI specifications
- Dynamic Response Generation: Generate realistic responses based on request parameters
- Request/Response Matching: Route requests to appropriate mock responses based on HTTP methods, paths, and parameters
WebSocket Connections
- Replay Mode: Simulate scripted message sequences from recorded interactions
- Interactive Mode: Respond dynamically to client messages
- State Management: Maintain connection state across message exchanges
gRPC Services
- Protocol Buffer Integration: Mock services defined with .proto files
- Dynamic Service Discovery: Automatically discover and compile .proto files
- Streaming Support: Handle unary, server streaming, client streaming, and bidirectional streaming
- Reflection Support: Built-in gRPC reflection for service discovery
Response Generation Strategies
MockForge offers multiple approaches to generating mock responses:
1. Static Responses
Define fixed response payloads that are returned for matching requests:
{
"status": "success",
"data": {
"id": 123,
"name": "Example Item"
}
}
2. Template-Based Dynamic Responses
Use template variables for dynamic content generation:
{
"id": "{{uuid}}",
"timestamp": "{{now}}",
"randomValue": "{{randInt 1 100}}",
"userData": "{{request.body}}"
}
3. Scenario-Based Responses
Define complex interaction scenarios with conditional logic and state management.
4. Advanced Data Synthesis (gRPC)
For gRPC services, MockForge provides sophisticated data synthesis capabilities:
- Smart Field Inference: Automatically detects data types from field names (emails, phones, IDs)
- Deterministic Generation: Reproducible test data with seeded randomness
- Relationship Awareness: Maintains referential integrity across related entities
- RAG-Driven Generation: Uses domain knowledge for contextually appropriate data
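To make the first two ideas concrete, here is a minimal Python sketch of name-based field inference with seeded, reproducible output. The function and its heuristics are illustrative, not MockForge's actual generator:

```python
import random

def mock_value(field_name: str, seed: int = 42) -> str:
    """Infer a generator from the field name; same (field, seed) -> same value."""
    # Seeding with a string is deterministic across runs, giving reproducible data.
    rng = random.Random(f"{seed}:{field_name}")
    name = field_name.lower()
    if "email" in name:
        return f"user{rng.randint(1, 999)}@example.com"
    if name == "id" or name.endswith("_id"):
        return f"{rng.getrandbits(32):08x}"  # 8 hex digits, zero-padded
    if "phone" in name:
        return "+1-555-" + "".join(str(rng.randint(0, 9)) for _ in range(4))
    return rng.choice(["alpha", "beta", "gamma"])
```

Relationship awareness and RAG-driven generation build on the same idea but share state across entities, which this sketch omits.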
Template System
MockForge’s template system enables dynamic content generation using Handlebars-style syntax:
Built-in Template Functions
Data Generation
- {{uuid}} - Generate unique UUID v4 identifiers
- {{now}} - Current timestamp in ISO 8601 format
- {{now+1h}} - Future timestamps with offset support
- {{randInt min max}} - Random integers within a range
- {{randFloat min max}} - Random floating-point numbers
Request Data Access
- {{request.body}} - Access complete request body
- {{request.body.field}} - Access specific JSON fields
- {{request.path.param}} - Access URL path parameters
- {{request.query.param}} - Access query string parameters
- {{request.header.name}} - Access HTTP headers
Conditional Logic
- {{#if condition}}content{{/if}} - Conditional content rendering
- {{#each array}}item{{/each}} - Iterate over arrays
Template Expansion Control
Templates are only processed when explicitly enabled:
# Enable template expansion
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
This security feature prevents accidental template processing in production environments.
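As an illustration of these semantics, here is a toy expander in Python covering a few of the functions above, including the opt-in gate that mirrors MOCKFORGE_RESPONSE_TEMPLATE_EXPAND. It is a sketch of the behavior, not MockForge's implementation:

```python
import random
import re
import uuid
from datetime import datetime, timezone

def expand(template: str, enabled: bool = True) -> str:
    """Naive {{...}} expander for illustration only."""
    if not enabled:
        return template  # templates disabled: pass through untouched

    def replace(match: re.Match) -> str:
        expr = match.group(1).strip()
        if expr == "uuid":
            return str(uuid.uuid4())
        if expr == "now":
            return datetime.now(timezone.utc).isoformat()
        if expr.startswith("randInt"):
            _, lo, hi = expr.split()
            return str(random.randint(int(lo), int(hi)))
        return match.group(0)  # unknown tokens are left as-is

    return re.sub(r"\{\{(.*?)\}\}", replace, template)
```

Each call yields fresh values, which is why enabling expansion makes repeated requests return different UUIDs and timestamps.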
Configuration Hierarchy
MockForge supports multiple configuration methods with clear precedence:
1. Command Line Arguments (Highest Priority)
mockforge serve --http-port 3000 --ws-port 3001 --spec api.json
2. Environment Variables
MOCKFORGE_HTTP_PORT=3000
MOCKFORGE_WS_PORT=3001
MOCKFORGE_OPENAPI_SPEC=api.json
3. Configuration Files (Lowest Priority)
# config.yaml
server:
http_port: 3000
ws_port: 3001
spec: api.json
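This precedence can be sketched as a dictionary merge applied lowest-priority first, so later updates win (key names illustrative):

```python
def effective_config(cli: dict, env: dict, file: dict) -> dict:
    """Merge config sources: CLI args beat env vars, which beat file values."""
    merged = dict(file)   # lowest priority: configuration file
    merged.update(env)    # environment variables override the file
    merged.update(cli)    # command-line arguments override everything
    return merged
```

A value set only in the file survives unless an env var or CLI flag overrides that same key.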
Server Modes
Development Mode
- Template Expansion: Enabled by default for dynamic content
- Verbose Logging: Detailed request/response logging
- Admin UI: Enabled for visual server management
- CORS: Permissive cross-origin requests
Production Mode
- Template Expansion: Disabled by default for security
- Minimal Logging: Essential information only
- Performance Optimized: Reduced overhead for high-throughput scenarios
Request Matching
MockForge uses a sophisticated matching system to route requests to appropriate responses:
HTTP Request Matching
- Method Matching: GET, POST, PUT, DELETE, PATCH
- Path Matching: Exact path or parameterized routes
- Query Parameter Matching: Optional query string conditions
- Header Matching: Conditional responses based on request headers
- Body Matching: Match against request payload structure
Priority Order
- Most specific match first (method + path + query + headers + body)
- Fall back to less specific matches
- Default response for unmatched requests
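The "most specific first" rule can be sketched as scoring each candidate by how many request attributes it constrains and taking the best match. The rule shape here is hypothetical, not MockForge's internal representation:

```python
def pick_response(rules: list) -> dict:
    """Return the rule that constrains the most request attributes."""
    def specificity(rule: dict) -> int:
        return sum(1 for k in ("method", "path", "query", "headers", "body")
                   if rule.get(k))
    # max() keeps the first rule on ties, so order still matters for equals.
    return max(rules, key=specificity)
```

A catch-all default rule constrains nothing, scores zero, and is only chosen when no more specific rule applies.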
State Management
For complex scenarios, MockForge supports maintaining state across requests:
Session State
- Connection-specific data persists across WebSocket messages
- HTTP session cookies maintain state between requests
- Scenario progression tracks interaction flow
Global State
- Shared data accessible across all connections
- Configuration updates applied dynamically
- Metrics and counters maintained server-wide
Extensibility
MockForge is designed for extension through multiple mechanisms:
Custom Response Generators
Implement custom logic for generating complex responses based on business rules.
Plugin System
Extend functionality through compiled plugins for specialized use cases.
Configuration Extensions
Add custom configuration options for domain-specific requirements.
Security Considerations
Template Injection Prevention
- Templates are disabled by default in production
- Explicit opt-in required for template processing
- Input validation prevents malicious template injection
Access Control
- Configurable CORS policies
- Request rate limiting options
- Authentication simulation support
Data Privacy
- Request/response logging controls
- Sensitive data masking capabilities
- Compliance-friendly configuration options
Performance Characteristics
Throughput
- HTTP APIs: 10,000+ requests/second (depending on response complexity)
- WebSocket: 1,000+ concurrent connections
- Memory Usage: Minimal overhead per connection
Scalability
- Horizontal Scaling: Multiple instances behind load balancer
- Resource Efficiency: Low CPU and memory footprint
- Concurrent Users: Support for thousands of simultaneous connections
Integration Patterns
MockForge works well in various development and testing scenarios:
API Development
- Contract-First Development: Mock APIs before implementation
- Parallel Development: Frontend and backend teams work independently
- Integration Testing: Validate API contracts between services
Microservices Testing
- Service Virtualization: Mock dependent services during testing
- Chaos Engineering: Simulate service failures and latency
- Load Testing: Generate realistic traffic patterns
CI/CD Pipelines
- Automated Testing: Mock external dependencies in test environments
- Deployment Validation: Verify application behavior with mock services
- Performance Benchmarking: Consistent test conditions across environments
This foundation will help you understand how to effectively use MockForge for your specific use case. The following guides provide detailed instructions for configuring and using each protocol and feature.
HTTP Mocking
MockForge provides comprehensive HTTP API mocking capabilities with OpenAPI specification support, dynamic response generation, and advanced request matching. This guide covers everything you need to create realistic REST API mocks.
OpenAPI Integration
MockForge uses OpenAPI (formerly Swagger) specifications as the foundation for HTTP API mocking. This industry-standard approach ensures your mocks accurately reflect real API contracts.
Loading OpenAPI Specifications
# Load from JSON file
mockforge serve --spec api-spec.json --http-port 3000
# Load from YAML file
mockforge serve --spec api-spec.yaml --http-port 3000
# Load from URL
mockforge serve --spec https://api.example.com/openapi.json --http-port 3000
OpenAPI Specification Structure
MockForge supports OpenAPI 3.0+ specifications with the following key components:
- Paths: API endpoint definitions
- Methods: HTTP verbs (GET, POST, PUT, DELETE, PATCH)
- Parameters: Path, query, and header parameters
- Request Bodies: JSON/XML payload schemas
- Responses: Status codes and response schemas
- Components: Reusable schemas and examples
Example OpenAPI Specification
openapi: 3.0.3
info:
title: User Management API
version: 1.0.0
paths:
/users:
get:
summary: List users
parameters:
- name: limit
in: query
schema:
type: integer
default: 10
responses:
'200':
description: Successful response
content:
application/json:
schema:
type: array
items:
$ref: '#/components/schemas/User'
post:
summary: Create user
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/UserInput'
responses:
'201':
description: User created
content:
application/json:
schema:
$ref: '#/components/schemas/User'
/users/{id}:
get:
summary: Get user by ID
parameters:
- name: id
in: path
required: true
schema:
type: string
responses:
'200':
description: User found
content:
application/json:
schema:
$ref: '#/components/schemas/User'
'404':
description: User not found
components:
schemas:
User:
type: object
properties:
id:
type: string
format: uuid
name:
type: string
email:
type: string
format: email
createdAt:
type: string
format: date-time
UserInput:
type: object
required:
- name
- email
properties:
name:
type: string
email:
type: string
Dynamic Response Generation
MockForge generates realistic responses automatically based on your OpenAPI schemas, with support for dynamic data through templates.
Automatic Response Generation
For basic use cases, MockForge can generate responses directly from your OpenAPI schemas:
# Start server with automatic response generation
mockforge serve --spec api-spec.json --http-port 3000
This generates:
- UUIDs for ID fields
- Random data for string/number fields
- Current timestamps for date-time fields
- Valid email addresses for email fields
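As a sketch of that behavior, a minimal schema-driven sampler might look like this in Python. It is illustrative only; MockForge's generator covers far more of OpenAPI:

```python
import random
import uuid
from datetime import datetime, timezone

def sample_from_schema(schema: dict):
    """Generate a mock value from a (simplified) OpenAPI schema fragment."""
    t = schema.get("type")
    if t == "string":
        fmt = schema.get("format")
        if fmt == "uuid":
            return str(uuid.uuid4())
        if fmt == "email":
            return f"user{random.randint(1, 999)}@example.com"
        if fmt == "date-time":
            return datetime.now(timezone.utc).isoformat()
        return f"string-{random.randint(1, 999)}"
    if t == "integer":
        return random.randint(0, 100)
    if t == "array":
        return [sample_from_schema(schema["items"]) for _ in range(2)]
    if t == "object":
        return {k: sample_from_schema(v)
                for k, v in schema.get("properties", {}).items()}
    return None
```

Feeding it the User schema from the spec above would yield an object with a fresh UUID, a plausible email, and a current timestamp.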
Template-Enhanced Responses
For more control, use MockForge’s template system in your OpenAPI examples:
paths:
/users:
get:
responses:
'200':
description: List of users
content:
application/json:
example:
users:
- id: "{{uuid}}"
name: "John Doe"
email: "john@example.com"
createdAt: "{{now}}"
lastLogin: "{{now-1d}}"
- id: "{{uuid}}"
name: "Jane Smith"
email: "jane@example.com"
createdAt: "{{now-7d}}"
lastLogin: "{{now-2h}}"
Template Functions
Data Generation Templates
- {{uuid}} - Generate unique UUID
- {{now}} - Current timestamp
- {{now+1h}} - Future timestamp
- {{now-1d}} - Past timestamp
- {{randInt 1 100}} - Random integer
- {{randFloat 0.0 1.0}} - Random float
Request Data Templates
- {{request.path.id}} - Access path parameters
- {{request.query.limit}} - Access query parameters
- {{request.header.Authorization}} - Access headers
- {{request.body.name}} - Access request body fields
Request Matching and Routing
MockForge uses sophisticated matching to route requests to appropriate responses.
Matching Priority
- Exact Path + Method Match
- Parameterized Path Match (e.g., /users/{id})
- Query Parameter Conditions
- Header-Based Conditions
- Request Body Matching
- Default Response (catch-all)
Path Parameter Handling
/users/{id}:
get:
parameters:
- name: id
in: path
required: true
schema:
type: string
responses:
'200':
content:
application/json:
example:
id: "{{request.path.id}}"
name: "User {{request.path.id}}"
retrievedAt: "{{now}}"
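The {id} capture used above can be sketched with a small matcher: translate each {param} into a named regex group and match the concrete path against it (illustrative, not the actual router):

```python
import re

def match_path(template: str, path: str):
    """Match a concrete path against /users/{id}-style templates.

    Returns a dict of captured parameters, or None when the path
    does not match the template.
    """
    # {id} -> (?P<id>[^/]+): a parameter matches one path segment, never '/'
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", template)
    m = re.fullmatch(pattern, path)
    return m.groupdict() if m else None
```

With the route above, a request to /users/123 yields {"id": "123"}, which is what {{request.path.id}} resolves to in the example response.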
Query Parameter Filtering
/users:
get:
parameters:
- name: status
in: query
schema:
type: string
enum: [active, inactive]
- name: limit
in: query
schema:
type: integer
default: 10
responses:
'200':
content:
application/json:
example: "{{#if (eq request.query.status 'active')}}active_users{{else}}all_users{{/if}}"
Response Scenarios
MockForge supports multiple response scenarios for testing different conditions.
Success Responses
responses:
'200':
description: Success
content:
application/json:
example:
status: "success"
data: { ... }
Error Responses
responses:
'400':
description: Bad Request
content:
application/json:
example:
error: "INVALID_INPUT"
message: "The provided input is invalid"
'404':
description: Not Found
content:
application/json:
example:
error: "NOT_FOUND"
message: "Resource not found"
'500':
description: Internal Server Error
content:
application/json:
example:
error: "INTERNAL_ERROR"
message: "An unexpected error occurred"
Conditional Responses
Use templates to return different responses based on request data:
responses:
'200':
content:
application/json:
example: |
{{#if (eq request.query.format 'detailed')}}
{
"id": "{{uuid}}",
"name": "Detailed User",
"email": "user@example.com",
"profile": {
"bio": "Detailed user profile",
"preferences": { ... }
}
}
{{else}}
{
"id": "{{uuid}}",
"name": "Basic User",
"email": "user@example.com"
}
{{/if}}
Advanced Features
Response Latency Simulation
# Add random latency (100-500ms)
MOCKFORGE_LATENCY_ENABLED=true \
MOCKFORGE_LATENCY_MIN_MS=100 \
MOCKFORGE_LATENCY_MAX_MS=500 \
mockforge serve --spec api-spec.json
Failure Injection
# Enable random failures (10% chance)
MOCKFORGE_FAILURES_ENABLED=true \
MOCKFORGE_FAILURE_RATE=0.1 \
mockforge serve --spec api-spec.json
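The effect of MOCKFORGE_FAILURE_RATE can be sketched as a per-request coin flip; the function below is an illustration of the idea, seeded so its behavior is repeatable, not MockForge's implementation:

```python
import random

def maybe_fail(rng: random.Random, failure_rate: float) -> int:
    """Return an HTTP status: 500 on an injected failure, else 200."""
    return 500 if rng.random() < failure_rate else 200
```

Over many requests the observed failure fraction converges on the configured rate, which is what makes this useful for testing client retry and error-handling paths.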
Request/Response Recording
# Record all HTTP interactions
MOCKFORGE_RECORD_ENABLED=true \
mockforge serve --spec api-spec.json
Response Replay
# Replay recorded responses
MOCKFORGE_REPLAY_ENABLED=true \
mockforge serve --spec api-spec.json
Testing Your Mocks
Manual Testing with curl
# Test GET endpoint
curl http://localhost:3000/users
# Test POST endpoint
curl -X POST http://localhost:3000/users \
-H "Content-Type: application/json" \
-d '{"name": "Test User", "email": "test@example.com"}'
# Test path parameters
curl http://localhost:3000/users/123
# Test query parameters
curl "http://localhost:3000/users?limit=5&status=active"
# Test error scenarios
curl http://localhost:3000/users/999 # Should return 404
Automated Testing
#!/bin/bash
# test-api.sh
BASE_URL="http://localhost:3000"
echo "Testing User API..."
# Test user creation
USER_RESPONSE=$(curl -s -X POST $BASE_URL/users \
-H "Content-Type: application/json" \
-d '{"name": "Test User", "email": "test@example.com"}')
echo "Created user: $USER_RESPONSE"
# Extract user ID (assuming response contains id)
USER_ID=$(echo $USER_RESPONSE | jq -r '.id')
# Test user retrieval
RETRIEVED_USER=$(curl -s $BASE_URL/users/$USER_ID)
echo "Retrieved user: $RETRIEVED_USER"
# Test user listing
USER_LIST=$(curl -s $BASE_URL/users)
echo "User list: $USER_LIST"
echo "API tests completed!"
Best Practices
OpenAPI Specification Tips
- Use descriptive operation IDs for better organization
- Include examples in your OpenAPI spec for consistent responses
- Define reusable components for common schemas
- Use appropriate HTTP status codes for different scenarios
- Document all parameters clearly
Template Usage Guidelines
- Enable templates only when needed for security
- Use meaningful template variables for maintainability
- Test template expansion thoroughly
- Avoid complex logic in templates - keep it simple
Response Design Principles
- Match real API behavior as closely as possible
- Include appropriate error responses for testing
- Use consistent data formats across endpoints
- Consider pagination for list endpoints
- Include metadata like timestamps and request IDs
Performance Considerations
- Use static responses when dynamic data isn’t needed
- Limit template complexity to maintain response times
- Configure appropriate timeouts for your use case
- Monitor memory usage with large response payloads
Troubleshooting
Common Issues
Templates not expanding: Ensure MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
OpenAPI spec not loading: Check file path and JSON/YAML syntax
Wrong response returned: Verify request matching rules and parameter handling
Performance issues: Reduce template complexity or use static responses
Port conflicts: Change default ports with the --http-port option
For more advanced HTTP mocking features, see the following guides:
- OpenAPI Integration - Advanced OpenAPI features
- Custom Responses - Complex response scenarios
- Dynamic Data - Advanced templating techniques
OpenAPI Integration
Custom Responses
Dynamic Data
gRPC Mocking
MockForge provides comprehensive gRPC service mocking with dynamic Protocol Buffer discovery, streaming support, and flexible service registration. This enables testing of gRPC-based microservices and APIs with realistic mock responses.
Overview
MockForge’s gRPC mocking system offers:
- Dynamic Proto Discovery: Automatically discovers and compiles .proto files from configurable directories
- Flexible Service Registration: Register and mock any gRPC service without hardcoding
- Streaming Support: Full support for unary, server streaming, client streaming, and bidirectional streaming
- Reflection Support: Built-in gRPC reflection for service discovery and testing
- Template Integration: Use MockForge’s template system for dynamic response generation
- Advanced Data Synthesis: Intelligent mock data generation with deterministic seeding, relationship awareness, and RAG-driven domain knowledge
Quick Start
Basic gRPC Server
Start a gRPC mock server with default configuration:
# Start with default proto directory (proto/)
mockforge serve --grpc-port 50051
With Custom Proto Directory
# Specify custom proto directory
MOCKFORGE_PROTO_DIR=my-protos mockforge serve --grpc-port 50051
Complete Example
# Start MockForge with HTTP, WebSocket, and gRPC support
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true \
MOCKFORGE_WS_REPLAY_FILE=examples/ws-demo.jsonl \
MOCKFORGE_PROTO_DIR=examples/grpc-protos \
mockforge serve \
--spec examples/openapi-demo.json \
--http-port 3000 \
--ws-port 3001 \
--grpc-port 50051 \
--admin --admin-port 8080
Proto File Setup
Directory Structure
MockForge automatically discovers .proto files in a configurable directory:
your-project/
├── proto/ # Default proto directory
│ ├── user_service.proto # Will be discovered
│ ├── payment.proto # Will be discovered
│ └── subdir/
│ └── analytics.proto # Will be discovered (recursive)
└── examples/
└── grpc-protos/ # Custom proto directory
└── service.proto
Sample Proto File
syntax = "proto3";
package mockforge.user;
service UserService {
rpc GetUser(GetUserRequest) returns (UserResponse);
rpc ListUsers(ListUsersRequest) returns (stream UserResponse);
rpc CreateUser(stream CreateUserRequest) returns (UserResponse);
rpc Chat(stream ChatMessage) returns (stream ChatMessage);
}
message GetUserRequest {
string user_id = 1;
}
message UserResponse {
string user_id = 1;
string name = 2;
string email = 3;
int64 created_at = 4;
Status status = 5;
}
message ListUsersRequest {
int32 limit = 1;
string filter = 2;
}
message CreateUserRequest {
string name = 1;
string email = 2;
}
message ChatMessage {
string user_id = 1;
string content = 2;
int64 timestamp = 3;
}
enum Status {
UNKNOWN = 0;
ACTIVE = 1;
INACTIVE = 2;
SUSPENDED = 3;
}
Dynamic Response Generation
MockForge generates responses automatically based on your proto message schemas, with support for templates and custom logic.
Automatic Response Generation
For basic use cases, MockForge generates responses from proto schemas:
- Strings: Random realistic values
- Integers: Random numbers in appropriate ranges
- Timestamps: Current time or future dates
- Enums: Random valid enum values
- Messages: Nested objects with generated data
- Repeated fields: Arrays with multiple generated items
Template-Enhanced Responses
Use MockForge templates in proto comments for custom responses:
message UserResponse {
string user_id = 1; // {{uuid}}
string name = 2; // {{request.user_id == "123" ? "John Doe" : "Jane Smith"}}
string email = 3; // {{name | replace(" ", ".") | lower}}@example.com
int64 created_at = 4; // {{now}}
Status status = 5; // ACTIVE
}
Request Context Access
Access request data in templates:
message UserResponse {
string user_id = 1; // {{request.user_id}}
string requested_by = 2; // {{request.metadata.user_id}}
string message = 3; // User {{request.user_id}} was retrieved
}
Testing gRPC Services
Using gRPC CLI Tools
grpcurl (Recommended)
# Install grpcurl
go install github.com/fullstorydev/grpcurl/cmd/grpcurl@latest
# List available services
grpcurl -plaintext localhost:50051 list
# Call a unary method
grpcurl -plaintext -d '{"user_id": "123"}' \
localhost:50051 mockforge.user.UserService/GetUser
# Call a server streaming method
grpcurl -plaintext -d '{"limit": 5}' \
localhost:50051 mockforge.user.UserService/ListUsers
# Call a client streaming method
echo '{"name": "Alice", "email": "alice@example.com"}' | \
grpcurl -plaintext -d @ \
localhost:50051 mockforge.user.UserService/CreateUser
grpcui (Web Interface)
# Install grpcui
go install github.com/fullstorydev/grpcui/cmd/grpcui@latest
# Start web interface
grpcui -plaintext localhost:50051
# grpcui prints a local URL (e.g. http://localhost:2633) - open it in your browser
Programmatic Testing
Node.js with grpc-js
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');
const packageDefinition = protoLoader.loadSync(
'proto/user_service.proto',
{
keepCase: true,
longs: String,
enums: String,
defaults: true,
oneofs: true
}
);
const protoDescriptor = grpc.loadPackageDefinition(packageDefinition);
const client = new protoDescriptor.mockforge.user.UserService(
'localhost:50051',
grpc.credentials.createInsecure()
);
// Unary call
client.GetUser({ user_id: '123' }, (error, response) => {
if (error) {
console.error('Error:', error);
} else {
console.log('Response:', response);
}
});
// Server streaming
const stream = client.ListUsers({ limit: 5 });
stream.on('data', (response) => {
console.log('User:', response);
});
stream.on('end', () => {
console.log('Stream ended');
});
Python with grpcio
import grpc
from user_service_pb2 import GetUserRequest, ListUsersRequest
from user_service_pb2_grpc import UserServiceStub
channel = grpc.insecure_channel('localhost:50051')
stub = UserServiceStub(channel)
# Unary call
request = GetUserRequest(user_id='123')
response = stub.GetUser(request)
print(f"User: {response.name}, Email: {response.email}")
# Streaming
for user in stub.ListUsers(ListUsersRequest(limit=5)):
print(f"User: {user.name}")
Advanced Configuration
Custom Response Mappings
Create custom response logic by implementing service handlers:
use mockforge_grpc::{ServiceRegistry, ServiceImplementation};
use prost::Message;
use std::collections::HashMap;

struct CustomUserService {
    user_data: HashMap<String, UserResponse>,
}

impl ServiceImplementation for CustomUserService {
    fn handle_unary(&self, method: &str, request: &[u8]) -> Vec<u8> {
        match method {
            "GetUser" => {
                let req: GetUserRequest = Message::decode(request).unwrap();
                let response = self
                    .user_data
                    .get(&req.user_id)
                    .cloned()
                    .unwrap_or_else(|| UserResponse {
                        user_id: req.user_id,
                        name: "Unknown User".to_string(),
                        email: "unknown@example.com".to_string(),
                        created_at: std::time::SystemTime::now()
                            .duration_since(std::time::UNIX_EPOCH)
                            .unwrap()
                            .as_secs() as i64,
                        status: Status::Unknown as i32,
                    });
                let mut buf = Vec::new();
                response.encode(&mut buf).unwrap();
                buf
            }
            _ => Vec::new(),
        }
    }
}
Environment Variables
# Proto file configuration
MOCKFORGE_PROTO_DIR=proto/ # Directory containing .proto files
MOCKFORGE_GRPC_PORT=50051 # gRPC server port
# Service behavior
MOCKFORGE_GRPC_LATENCY_ENABLED=true # Enable response latency
MOCKFORGE_GRPC_LATENCY_MIN_MS=10 # Minimum latency
MOCKFORGE_GRPC_LATENCY_MAX_MS=100 # Maximum latency
# Reflection settings
MOCKFORGE_GRPC_REFLECTION_ENABLED=true # Enable gRPC reflection
Configuration File
grpc:
  port: 50051
  proto_dir: "proto/"
  enable_reflection: true
  latency:
    enabled: true
    min_ms: 10
    max_ms: 100
  services:
    - name: "mockforge.user.UserService"
      implementation: "dynamic"
    - name: "custom.Service"
      implementation: "custom_handler"
Streaming Support
MockForge supports all gRPC streaming patterns:
Unary (Request → Response)
rpc GetUser(GetUserRequest) returns (UserResponse);
Standard request-response pattern used for simple operations.
Server Streaming (Request → Stream of Responses)
rpc ListUsers(ListUsersRequest) returns (stream UserResponse);
Single request that returns multiple responses over time.
Client Streaming (Stream of Requests → Response)
rpc CreateUsers(stream CreateUserRequest) returns (UserSummary);
Multiple requests sent as a stream, single response returned.
Bidirectional Streaming (Stream ↔ Stream)
rpc Chat(stream ChatMessage) returns (stream ChatMessage);
Both client and server can send messages independently.
Error Handling
gRPC Status Codes
MockForge supports all standard gRPC status codes:
// In proto comments for custom error responses
rpc GetUser(GetUserRequest) returns (UserResponse);
// @error NOT_FOUND User not found
// @error INVALID_ARGUMENT Invalid user ID format
// @error INTERNAL Server error occurred
Custom Error Responses
use prost::Message;

// Custom error handling
fn handle_unary(&self, method: &str, request: &[u8]) -> Result<Vec<u8>, tonic::Status> {
    match method {
        "GetUser" => {
            let req: GetUserRequest = Message::decode(request)
                .map_err(|_| tonic::Status::invalid_argument("Malformed request payload"))?;
            if !is_valid_user_id(&req.user_id) {
                return Err(tonic::Status::invalid_argument("Invalid user ID"));
            }
            match self.get_user(&req.user_id) {
                Some(user) => {
                    let mut buf = Vec::new();
                    user.encode(&mut buf)
                        .map_err(|_| tonic::Status::internal("Failed to encode response"))?;
                    Ok(buf)
                }
                None => Err(tonic::Status::not_found("User not found")),
            }
        }
        _ => Err(tonic::Status::unimplemented("Method not implemented")),
    }
}
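On the client side, these statuses usually feed a retry decision: some codes signal transient conditions worth retrying, while others indicate a caller bug. A rough, framework-agnostic sketch in Python (the status sets and the `should_retry` helper are illustrative, not part of MockForge or grpcio):

```python
# Status names mirror the standard gRPC codes. Retrying only makes
# sense for transient conditions and idempotent operations.
RETRYABLE_STATUSES = {"UNAVAILABLE", "DEADLINE_EXCEEDED", "RESOURCE_EXHAUSTED"}

# These indicate a caller or server bug -- retrying will not help.
NON_RETRYABLE_STATUSES = {"INVALID_ARGUMENT", "NOT_FOUND", "UNIMPLEMENTED", "INTERNAL"}

def should_retry(status_code: str, attempt: int, max_attempts: int = 3) -> bool:
    """Decide whether to retry a failed call based on its gRPC status code."""
    if attempt >= max_attempts:
        return False
    return status_code in RETRYABLE_STATUSES
```

A mock server that injects `UNAVAILABLE` or `DEADLINE_EXCEEDED` failures is a convenient way to exercise exactly this branch of your client code.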
Integration Patterns
Microservices Testing
# Start multiple gRPC services
MOCKFORGE_PROTO_DIR=user-proto mockforge serve --grpc-port 50051 &
MOCKFORGE_PROTO_DIR=payment-proto mockforge serve --grpc-port 50052 &
MOCKFORGE_PROTO_DIR=inventory-proto mockforge serve --grpc-port 50053 &
# Test service communication
grpcurl -plaintext localhost:50051 mockforge.user.UserService/GetUser \
-d '{"user_id": "123"}'
Load Testing
# Quick load test with a shell loop (hey speaks HTTP, not gRPC)
for i in $(seq 1 100); do
  grpcurl -plaintext -d '{"user_id": "123"}' \
    localhost:50051 mockforge.user.UserService/GetUser > /dev/null &
done
wait
# Advanced load testing with ghz
ghz --insecure \
--proto proto/user_service.proto \
--call mockforge.user.UserService.GetUser \
--data '{"user_id": "123"}' \
--concurrency 10 \
--total 1000 \
localhost:50051
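If neither tool fits, the same fan-out pattern is easy to sketch in Python with a thread pool. Here `call` stands in for whatever client invocation you use (a grpcurl subprocess, a grpcio stub call, etc.); the function itself is illustrative, not a MockForge API:

```python
from concurrent.futures import ThreadPoolExecutor

def run_load_test(call, total=1000, concurrency=10):
    """Fire `total` invocations of `call` across `concurrency` workers.

    `call` is any zero-argument callable; returns (successes, failures).
    """
    successes = failures = 0
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(call) for _ in range(total)]
        for future in futures:
            try:
                future.result()
                successes += 1
            except Exception:
                failures += 1
    return successes, failures
```

Pair this with MockForge's latency settings (`MOCKFORGE_GRPC_LATENCY_MIN_MS`/`MAX_MS`) to see how your client behaves as response times grow under load.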
CI/CD Integration
# .github/workflows/test.yml
name: gRPC Tests
on: [push, pull_request]
jobs:
  grpc-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Rust
        uses: actions-rust-lang/setup-rust-toolchain@v1
      - name: Start MockForge
        run: |
          cargo run --bin mockforge-cli -- serve --grpc-port 50051 &
          sleep 5
      - name: Run gRPC Tests
        run: |
          go install github.com/fullstorydev/grpcurl/cmd/grpcurl@latest
          grpcurl -plaintext localhost:50051 list
          # Add your test commands here
Best Practices
Proto File Organization
- Clear Package Names: Use descriptive package names that reflect service domains
- Consistent Naming: Follow protobuf naming conventions
- Versioning: Include version information in package names when appropriate
- Documentation: Add comments to proto files for better API documentation
Service Design
- Appropriate Streaming: Choose the right streaming pattern for your use case
- Error Handling: Define clear error conditions and status codes
- Pagination: Implement pagination for large result sets
- Backwards Compatibility: Design for evolution and backwards compatibility
Testing Strategies
- Unit Tests: Test individual service methods
- Integration Tests: Test service interactions
- Load Tests: Verify performance under load
- Chaos Tests: Test failure scenarios and recovery
Performance Optimization
- Response Caching: Cache frequently requested data
- Connection Pooling: Reuse gRPC connections
- Async Processing: Use async operations for I/O bound tasks
- Memory Management: Monitor and optimize memory usage
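As an illustration of the caching bullet, a mock server can memoize responses keyed by method and request payload. A minimal TTL-cache sketch (stdlib only; the class name, key shape, and TTL are illustrative, not MockForge internals):

```python
import time

class ResponseCache:
    """Tiny TTL cache mapping (method, request) keys to cached responses."""

    def __init__(self, ttl_seconds=30.0):
        self.ttl = ttl_seconds
        self._store = {}  # (method, request_key) -> (expires_at, response)

    def get(self, method, request_key):
        entry = self._store.get((method, request_key))
        if entry is None:
            return None
        expires_at, response = entry
        if time.monotonic() > expires_at:
            # Entry is stale: evict it and report a miss.
            del self._store[(method, request_key)]
            return None
        return response

    def put(self, method, request_key, response):
        self._store[(method, request_key)] = (time.monotonic() + self.ttl, response)
```

The key point is that the cache key must include the request payload (or a digest of it), otherwise two different requests to the same method would collide.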
Troubleshooting
Common Issues
Proto files not found: Check the MOCKFORGE_PROTO_DIR environment variable and directory permissions
Service not available: Verify proto compilation succeeded and service names match
Connection refused: Ensure gRPC port is accessible and not blocked by firewall
Template errors: Check template syntax and available context variables
Debug Commands
# Check proto compilation
cargo build --verbose
# List available services
grpcurl -plaintext localhost:50051 list
# Check service methods
grpcurl -plaintext localhost:50051 describe mockforge.user.UserService
# Test with verbose output
grpcurl -plaintext -v -d '{"user_id": "123"}' \
localhost:50051 mockforge.user.UserService/GetUser
Log Analysis
# View gRPC logs
tail -f mockforge.log | grep -i grpc
# Count requests by service
grep "grpc.*call" mockforge.log | cut -d' ' -f5 | sort | uniq -c
# Monitor errors
grep -i "grpc.*error" mockforge.log
For detailed implementation guides, see:
- Protocol Buffers - Working with .proto files
- Streaming - Advanced streaming patterns
- Advanced Data Synthesis - Intelligent data generation with RAG and validation
Protocol Buffers
Protocol Buffers (protobuf) are the interface definition language used by gRPC services. MockForge provides comprehensive support for working with protobuf files, including automatic discovery, compilation, and dynamic service generation.
Understanding Proto Files
Basic Structure
A .proto file defines the service interface and message formats:
syntax = "proto3";

package myapp.user;

import "google/protobuf/timestamp.proto";
import "google/protobuf/empty.proto";

// Service definition
service UserService {
  rpc GetUser(GetUserRequest) returns (User);
  rpc ListUsers(ListUsersRequest) returns (stream User);
  rpc CreateUser(CreateUserRequest) returns (User);
  rpc UpdateUser(UpdateUserRequest) returns (User);
  rpc DeleteUser(DeleteUserRequest) returns (google.protobuf.Empty);
}

// Message definitions
message GetUserRequest {
  string user_id = 1;
}

message User {
  string user_id = 1;
  string email = 2;
  string name = 3;
  google.protobuf.Timestamp created_at = 4;
  google.protobuf.Timestamp updated_at = 5;
  UserStatus status = 6;
  repeated string roles = 7;
}

message ListUsersRequest {
  int32 page_size = 1;
  string page_token = 2;
  string filter = 3;
}

message CreateUserRequest {
  string email = 1;
  string name = 2;
  repeated string roles = 3;
}

message UpdateUserRequest {
  string user_id = 1;
  string email = 2;
  string name = 3;
  repeated string roles = 4;
}

message DeleteUserRequest {
  string user_id = 1;
}

enum UserStatus {
  UNKNOWN = 0;
  ACTIVE = 1;
  INACTIVE = 2;
  SUSPENDED = 3;
}
Key Components
Syntax Declaration
syntax = "proto3";
Declares the protobuf version. MockForge supports proto3.
Package Declaration
package myapp.user;
Defines the namespace for the service and messages.
Imports
import "google/protobuf/timestamp.proto";
Imports common protobuf types and other proto files.
Service Definition
service UserService {
  rpc GetUser(GetUserRequest) returns (User);
  // ... more methods
}
Defines the RPC methods available in the service.
Message Definitions
message User {
  string user_id = 1;
  string email = 2;
  // ... more fields
}
Defines the structure of data exchanged between client and server.
Enum Definitions
enum UserStatus {
  UNKNOWN = 0;
  ACTIVE = 1;
  // ... more values
}
Defines enumerated types with named constants.
Field Types
Scalar Types
| Proto Type | Go Type | Java Type | C++ Type | Notes |
|---|---|---|---|---|
| double | float64 | double | double | |
| float | float32 | float | float | |
| int32 | int32 | int | int32 | Uses variable-length encoding |
| int64 | int64 | long | int64 | Uses variable-length encoding |
| uint32 | uint32 | int | uint32 | Uses variable-length encoding |
| uint64 | uint64 | long | uint64 | Uses variable-length encoding |
| sint32 | int32 | int | int32 | Uses zigzag encoding |
| sint64 | int64 | long | int64 | Uses zigzag encoding |
| fixed32 | uint32 | int | uint32 | Always 4 bytes |
| fixed64 | uint64 | long | uint64 | Always 8 bytes |
| sfixed32 | int32 | int | int32 | Always 4 bytes |
| sfixed64 | int64 | long | int64 | Always 8 bytes |
| bool | bool | boolean | bool | |
| string | string | String | string | UTF-8 encoded |
| bytes | []byte | ByteString | string | |
Repeated Fields
message SearchResponse {
  repeated Result results = 1;
}
Creates an array/list of the specified type.
Nested Messages
message Address {
  string street = 1;
  string city = 2;
  string country = 3;
}

message Person {
  string name = 1;
  Address address = 2;
}
Messages can contain other messages as fields.
Oneof Fields
message Person {
  string name = 1;
  oneof contact_info {
    string email = 2;
    string phone = 3;
  }
}
Only one of the specified fields can be set at a time.
Maps
message Config {
  map<string, string> settings = 1;
}
Creates a key-value map structure.
Service Patterns
Unary RPC
service Calculator {
  rpc Add(AddRequest) returns (AddResponse);
}
Standard request-response pattern.
Server Streaming
service NotificationService {
  rpc Subscribe(SubscribeRequest) returns (stream Notification);
}
Server sends multiple responses for a single request.
Client Streaming
service UploadService {
  rpc Upload(stream UploadChunk) returns (UploadResponse);
}
Client sends multiple requests, server responds once.
Bidirectional Streaming
service ChatService {
  rpc Chat(stream ChatMessage) returns (stream ChatMessage);
}
Both client and server can send messages independently.
Proto File Organization
Directory Structure
proto/
├── user/
│ ├── v1/
│ │ ├── user.proto
│ │ └── user_service.proto
│ └── v2/
│ ├── user.proto
│ └── user_service.proto
├── payment/
│ ├── payment.proto
│ └── payment_service.proto
└── common/
├── types.proto
└── errors.proto
Versioning
// user/v1/user.proto
syntax = "proto3";

package myapp.user.v1;

// Version-specific message
message User {
  string id = 1;
  string name = 2;
  string email = 3;
}

// user/v2/user.proto
syntax = "proto3";

package myapp.user.v2;

// Extended version with new fields
message User {
  string id = 1;
  string name = 2;
  string email = 3;
  string phone = 4;         // New field
  repeated string tags = 5; // New field
}
MockForge Integration
Automatic Discovery
MockForge automatically discovers .proto files in the configured directory:
# Default proto directory
mockforge serve --grpc-port 50051
# Custom proto directory
MOCKFORGE_PROTO_DIR=my-protos mockforge serve --grpc-port 50051
Service Registration
MockForge automatically registers all discovered services:
# List available services
grpcurl -plaintext localhost:50051 list
# Output:
# grpc.reflection.v1alpha.ServerReflection
# myapp.user.UserService
# myapp.payment.PaymentService
Dynamic Response Generation
MockForge generates responses based on proto message schemas:
message UserResponse {
  string user_id = 1;   // Generates UUID
  string name = 2;      // Generates random name
  string email = 3;     // Generates valid email
  int64 created_at = 4; // Generates timestamp
  UserStatus status = 5; // Random enum value
}
Template Support
Use MockForge templates for custom responses:
message UserResponse {
  string user_id = 1;   // {{uuid}}
  string name = 2;      // {{request.user_id == "123" ? "John Doe" : "Jane Smith"}}
  string email = 3;     // {{name | replace(" ", ".") | lower}}@example.com
  int64 created_at = 4; // {{now}}
  UserStatus status = 5; // ACTIVE
}
Best Practices
Naming Conventions
- Packages: Use lowercase with dots (e.g., myapp.user.v1)
- Services: Use PascalCase with a “Service” suffix (e.g., UserService)
- Messages: Use PascalCase (e.g., UserProfile)
- Fields: Use snake_case (e.g., user_id, created_at)
- Enums: Use PascalCase for the type name, SCREAMING_SNAKE_CASE for values
Field Numbering
- Reserve numbers: Don’t reuse field numbers from deleted fields
- Start from 1: Field numbers begin at 1 (0 is invalid); numbers 19000–19999 are reserved by the protobuf wire format
- Gap for extensions: Leave gaps for future extensions
- Document reservations: Comment reserved field numbers
message User {
  string user_id = 1;
  string name = 2;
  string email = 3;
  // reserved 4, 5, 6; // Reserved for future use
  int64 created_at = 7;
}
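These numbering rules are mechanically checkable: valid field numbers run from 1 to 536,870,911 (2^29 − 1), and 19000–19999 are reserved for the protobuf wire format. A quick validation helper, for illustration (not a MockForge API):

```python
# Field-number constraints from the proto3 language spec.
MAX_FIELD_NUMBER = 536_870_911          # 2^29 - 1
RESERVED_RANGE = range(19_000, 20_000)  # reserved for the wire format

def is_valid_field_number(n, reserved=frozenset()):
    """Check a field number against spec limits and message-local reservations.

    `reserved` holds numbers the message itself reserves (deleted fields).
    """
    if n < 1 or n > MAX_FIELD_NUMBER:
        return False
    if n in RESERVED_RANGE:
        return False
    return n not in reserved
```

A check like this in a pre-commit hook catches reused field numbers before they silently corrupt decoded messages.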
Import Organization
- Standard imports: Import well-known protobuf types first
- Local imports: Import project-specific proto files
- Relative paths: Use relative paths for local imports
syntax = "proto3";
import "google/protobuf/timestamp.proto";
import "google/protobuf/empty.proto";
import "common/types.proto";
import "user/profile.proto";
package myapp.user;
Documentation
- Service comments: Document what each service does
- Method comments: Explain each RPC method
- Field comments: Describe field purposes and constraints
- Enum comments: Document enum value meanings
// User management service
service UserService {
  // Get a user by ID
  rpc GetUser(GetUserRequest) returns (User);

  // List users with pagination
  rpc ListUsers(ListUsersRequest) returns (ListUsersResponse);
}

message User {
  string user_id = 1;    // Unique identifier for the user
  string email = 2;      // User's email address (must be valid)
  UserStatus status = 3; // Current account status
}

enum UserStatus {
  UNKNOWN = 0;   // Default value
  ACTIVE = 1;    // Account is active
  INACTIVE = 2;  // Account is deactivated
  SUSPENDED = 3; // Account is temporarily suspended
}
Migration and Evolution
Adding Fields
// Original
message User {
  string user_id = 1;
  string name = 2;
}

// Extended (backwards compatible)
message User {
  string user_id = 1;
  string name = 2;
  string email = 3; // New field
  bool active = 4;  // New field
}
Reserved Fields
message User {
  reserved 5, 6, 7;     // Reserved for future use
  reserved "old_field"; // Reserved field name

  string user_id = 1;
  string name = 2;
  string email = 3;
}
Versioning Strategy
- Package versioning: Include version in package name
- Service evolution: Extend services with new methods
- Deprecation notices: Mark deprecated fields
- Breaking changes: Create new service versions
Validation
Proto File Validation
# Validate proto syntax
protoc --proto_path=. --error_format=json myproto.proto
# Generate descriptors
protoc --proto_path=. --descriptor_set_out=descriptor.pb myproto.proto
MockForge Integration Testing
# Test proto compilation
MOCKFORGE_PROTO_DIR=my-protos cargo build
# Verify service discovery
mockforge serve --grpc-port 50051 &
sleep 2
grpcurl -plaintext localhost:50051 list
Cross-Language Compatibility
# Generate code for multiple languages
protoc --proto_path=. \
--go_out=. \
--java_out=. \
--python_out=. \
--cpp_out=. \
myproto.proto
Troubleshooting
Common Proto Issues
Import resolution: Ensure all imported proto files are available in the proto path
Field conflicts: Check for duplicate field numbers or names within messages
Circular imports: Avoid circular dependencies between proto files
Syntax errors: Use protoc to validate proto file syntax
MockForge-Specific Issues
Services not discovered: Check proto directory configuration and file permissions
Invalid responses: Verify proto message definitions match expected schemas
Compilation failures: Check for proto syntax errors and missing dependencies
Template errors: Ensure template variables are properly escaped in proto comments
Debug Commands
# Check proto file discovery
find proto/ -name "*.proto" -type f
# Validate proto files
for file in $(find proto/ -name "*.proto"); do
echo "Validating $file..."
protoc --proto_path=. --error_format=json "$file" > /dev/null
done
# Test service compilation
MOCKFORGE_PROTO_DIR=proto/ cargo check -p mockforge-grpc
# Inspect generated code
cargo doc --open --package mockforge-grpc
Protocol Buffers provide a robust foundation for gRPC service definitions. By following these guidelines and leveraging MockForge’s dynamic discovery capabilities, you can create well-structured, maintainable, and testable gRPC services.
Streaming
gRPC supports four fundamental communication patterns, with three involving streaming. MockForge provides comprehensive support for all streaming patterns, enabling realistic testing of real-time and batch data scenarios.
Streaming Patterns
Unary (Request → Response)
Standard request-response pattern - one message in, one message out.
Server Streaming (Request → Stream of Responses)
Single request initiates a stream of responses from server to client.
Client Streaming (Stream of Requests → Response)
Client sends multiple messages, server responds once with aggregated result.
Bidirectional Streaming (Stream ↔ Stream)
Both client and server can send messages independently and simultaneously.
Server Streaming
Basic Server Streaming
service NotificationService {
  rpc Subscribe(SubscribeRequest) returns (stream Notification);
}

message SubscribeRequest {
  repeated string topics = 1;
  SubscriptionType type = 2;
}

message Notification {
  string topic = 1;
  string message = 2;
  google.protobuf.Timestamp timestamp = 3;
  Severity severity = 4;
}

enum SubscriptionType {
  REALTIME = 0;
  BATCH = 1;
}

enum Severity {
  INFO = 0;
  WARNING = 1;
  ERROR = 2;
  CRITICAL = 3;
}
MockForge Configuration
Server streaming generates multiple responses based on configuration:
// Basic server streaming - fixed number of responses
{"ts":0,"dir":"out","text":"{\"topic\":\"system\",\"message\":\"Connected\",\"severity\":\"INFO\"}"}
{"ts":1000,"dir":"out","text":"{\"topic\":\"user\",\"message\":\"New user registered\",\"severity\":\"INFO\"}"}
{"ts":2000,"dir":"out","text":"{\"topic\":\"payment\",\"message\":\"Payment processed\",\"severity\":\"INFO\"}"}
{"ts":3000,"dir":"out","text":"{\"topic\":\"system\",\"message\":\"Maintenance scheduled\",\"severity\":\"WARNING\"}"}
Dynamic Server Streaming
// Template-based dynamic responses
{"ts":0,"dir":"out","text":"{\"topic\":\"{{request.topics[0]}}\",\"message\":\"Subscribed to {{request.topics.length}} topics\",\"timestamp\":\"{{now}}\"}"}
{"ts":1000,"dir":"out","text":"{\"topic\":\"{{randFromArray request.topics}}\",\"message\":\"{{randParagraph}}\",\"timestamp\":\"{{now}}\"}"}
{"ts":2000,"dir":"out","text":"{\"topic\":\"{{randFromArray request.topics}}\",\"message\":\"{{randSentence}}\",\"timestamp\":\"{{now}}\"}"}
{"ts":5000,"dir":"out","text":"{\"topic\":\"system\",\"message\":\"Stream ending\",\"timestamp\":\"{{now}}\"}"}
Testing Server Streaming
Using grpcurl
# Test server streaming
grpcurl -plaintext -d '{"topics": ["user", "payment"], "type": "REALTIME"}' \
localhost:50051 myapp.NotificationService/Subscribe
Using Node.js
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

const packageDefinition = protoLoader.loadSync('proto/notification.proto');
const proto = grpc.loadPackageDefinition(packageDefinition);

const client = new proto.myapp.NotificationService(
  'localhost:50051',
  grpc.credentials.createInsecure()
);

const call = client.Subscribe({
  topics: ['user', 'payment'],
  type: 'REALTIME'
});

call.on('data', (notification) => {
  console.log('Notification:', notification);
});
call.on('end', () => {
  console.log('Stream ended');
});
call.on('error', (error) => {
  console.error('Error:', error);
});
Client Streaming
Basic Client Streaming
service UploadService {
  rpc UploadFile(stream FileChunk) returns (UploadResponse);
}

message FileChunk {
  bytes data = 1;
  int32 sequence = 2;
  bool is_last = 3;
}

message UploadResponse {
  string file_id = 1;
  int64 total_size = 2;
  string checksum = 3;
  UploadStatus status = 4;
}

enum UploadStatus {
  SUCCESS = 0;
  FAILED = 1;
  PARTIAL = 2;
}
MockForge Configuration
Client streaming processes multiple incoming messages and returns a single response:
// Client streaming - processes multiple chunks
{"ts":0,"dir":"in","text":".*","response":"{\"file_id\":\"{{uuid}}\",\"total_size\":1024,\"status\":\"SUCCESS\"}"}
Advanced Client Streaming
// Process chunks and maintain state
{"ts":0,"dir":"in","text":"{\"sequence\":0}","response":"Chunk 0 received","state":"uploading","chunks":1}
{"ts":0,"dir":"in","text":"{\"sequence\":1}","response":"Chunk 1 received","chunks":"{{request.ws.state.chunks + 1}}"}
{"ts":0,"dir":"in","text":"{\"is_last\":true}","response":"{\"file_id\":\"{{uuid}}\",\"total_size\":\"{{request.ws.state.chunks * 1024}}\",\"status\":\"SUCCESS\"}"}
Testing Client Streaming
Using grpcurl
# Stream multiple messages in a single call
# (grpcurl reads consecutive JSON messages from stdin with -d @)
grpcurl -plaintext -d @ localhost:50051 myapp.UploadService/UploadFile <<'EOF'
{"data": "chunk1", "sequence": 0}
{"data": "chunk2", "sequence": 1}
{"data": "chunk3", "sequence": 2, "is_last": true}
EOF
Using Python
import grpc

from upload_pb2 import FileChunk
from upload_pb2_grpc import UploadServiceStub

def generate_chunks():
    # Simulate file chunks
    chunks = [
        b"chunk1",
        b"chunk2",
        b"chunk3"
    ]
    for i, chunk in enumerate(chunks):
        yield FileChunk(
            data=chunk,
            sequence=i,
            is_last=(i == len(chunks) - 1)
        )

channel = grpc.insecure_channel('localhost:50051')
stub = UploadServiceStub(channel)
response = stub.UploadFile(generate_chunks())
print(f"Upload result: {response}")
Bidirectional Streaming
Basic Bidirectional Streaming
service ChatService {
  rpc Chat(stream ChatMessage) returns (stream ChatMessage);
}

message ChatMessage {
  string user_id = 1;
  string content = 2;
  MessageType type = 3;
  google.protobuf.Timestamp timestamp = 4;
}

enum MessageType {
  TEXT = 0;
  JOIN = 1;
  LEAVE = 2;
  SYSTEM = 3;
}
MockForge Configuration
Bidirectional streaming handles both incoming and outgoing messages:
// Welcome message on connection
{"ts":0,"dir":"out","text":"{\"user_id\":\"system\",\"content\":\"Welcome to chat!\",\"type\":\"SYSTEM\"}"}
// Handle join messages
{"ts":0,"dir":"in","text":"{\"type\":\"JOIN\"}","response":"{\"user_id\":\"system\",\"content\":\"{{request.ws.message.user_id}} joined the chat\",\"type\":\"SYSTEM\"}"}
// Handle text messages
{"ts":0,"dir":"in","text":"{\"type\":\"TEXT\"}","response":"{\"user_id\":\"{{request.ws.message.user_id}}\",\"content\":\"{{request.ws.message.content}}\",\"type\":\"TEXT\"}"}
// Handle leave messages
{"ts":0,"dir":"in","text":"{\"type\":\"LEAVE\"}","response":"{\"user_id\":\"system\",\"content\":\"{{request.ws.message.user_id}} left the chat\",\"type\":\"SYSTEM\"}"}
// Periodic system messages
{"ts":30000,"dir":"out","text":"{\"user_id\":\"system\",\"content\":\"Server uptime: {{randInt 1 24}} hours\",\"type\":\"SYSTEM\"}"}
Advanced Bidirectional Patterns
// State-aware responses
{"ts":0,"dir":"in","text":".*","condition":"{{!request.ws.state.authenticated}}","response":"Please authenticate first"}
{"ts":0,"dir":"in","text":"AUTH","response":"Authenticated","state":"authenticated"}
{"ts":0,"dir":"in","text":".*","condition":"{{request.ws.state.authenticated}}","response":"{{request.ws.message}}"}
{"ts":0,"dir":"in","text":"HELP","response":"Available commands: MSG, QUIT, STATUS"}
{"ts":0,"dir":"in","text":"STATUS","response":"Connected users: {{randInt 1 50}}"}
{"ts":0,"dir":"in","text":"QUIT","response":"Goodbye!","close":true}
Testing Bidirectional Streaming
Using Node.js
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

const packageDefinition = protoLoader.loadSync('proto/chat.proto');
const proto = grpc.loadPackageDefinition(packageDefinition);

const client = new proto.myapp.ChatService(
  'localhost:50051',
  grpc.credentials.createInsecure()
);

const call = client.Chat();

// Handle incoming messages
call.on('data', (message) => {
  console.log('Received:', message);
});

// Send join message
call.write({
  user_id: 'user123',
  content: 'Joined chat',
  type: 'JOIN'
});

// Send messages periodically
const timer = setInterval(() => {
  call.write({
    user_id: 'user123',
    content: 'Hello from client',
    type: 'TEXT'
  });
}, 2000);

// Handle stream end
call.on('end', () => {
  console.log('Stream ended');
});

// Close after 30 seconds
setTimeout(() => {
  clearInterval(timer);
  call.write({
    user_id: 'user123',
    content: 'Leaving chat',
    type: 'LEAVE'
  });
  call.end();
}, 30000);
Streaming Configuration
Environment Variables
# Streaming behavior
MOCKFORGE_GRPC_STREAM_TIMEOUT=30000 # Stream timeout in ms
MOCKFORGE_GRPC_MAX_STREAM_MESSAGES=1000 # Max messages per stream
MOCKFORGE_GRPC_STREAM_BUFFER_SIZE=1024 # Buffer size for streaming
# Response timing
MOCKFORGE_GRPC_LATENCY_MIN_MS=10 # Minimum response latency
MOCKFORGE_GRPC_LATENCY_MAX_MS=100 # Maximum response latency
Stream Control Templates
// Conditional streaming
{"ts":0,"dir":"out","text":"Starting stream","condition":"{{request.stream_enabled}}"}
{"ts":1000,"dir":"out","text":"Stream data","condition":"{{request.ws.state.active}}"}
{"ts":0,"dir":"out","text":"Stream ended","condition":"{{request.ws.message.type === 'END'}}","close":true}
// Dynamic intervals
{"ts":"{{randInt 1000 5000}}","dir":"out","text":"Random interval message"}
{"ts":"{{request.interval || 2000}}","dir":"out","text":"Custom interval message"}
Performance Considerations
Memory Management
// Limit message history
{"ts":0,"dir":"in","text":".*","condition":"{{(request.ws.state.messageCount || 0) < 100}}","response":"Message received","messageCount":"{{(request.ws.state.messageCount || 0) + 1}}"}
{"ts":0,"dir":"in","text":".*","condition":"{{(request.ws.state.messageCount || 0) >= 100}}","response":"Message limit reached"}
Connection Limits
// Global connection tracking (requires custom implementation)
{"ts":0,"dir":"out","text":"Connection {{request.ws.connectionId}} established"}
{"ts":300000,"dir":"out","text":"Connection timeout","close":true}
Load Balancing
// Simulate load balancer behavior
{"ts":"{{randInt 100 1000}}","dir":"out","text":"Response from server {{randInt 1 3}}"}
{"ts":"{{randInt 2000 5000}}","dir":"out","text":"Health check from server {{randInt 1 3}}"}
Error Handling in Streams
Stream Errors
// Handle invalid messages
{"ts":0,"dir":"in","text":"","response":"Empty message not allowed"}
{"ts":0,"dir":"in","text":".{500,}","response":"Message too long (max 500 chars)"}
// Simulate network errors
{"ts":5000,"dir":"out","text":"Network error occurred","error":true,"close":true}
Recovery Patterns
// Automatic reconnection
{"ts":0,"dir":"out","text":"Connection lost, attempting reconnect..."}
{"ts":2000,"dir":"out","text":"Reconnected successfully"}
{"ts":100,"dir":"out","text":"Resuming stream from message {{request.ws.state.lastMessageId}}"}
Testing Strategies
Unit Testing Streams
// test-streaming.js
const { expect } = require('chai');

describe('gRPC Streaming', () => {
  it('should handle server streaming', (done) => {
    const call = client.subscribeNotifications({ topics: ['test'] });
    let messageCount = 0;

    call.on('data', (notification) => {
      messageCount++;
      expect(notification).to.have.property('topic');
      expect(notification).to.have.property('message');
    });

    call.on('end', () => {
      expect(messageCount).to.be.greaterThan(0);
      done();
    });

    // End test after 5 seconds
    setTimeout(() => call.cancel(), 5000);
  });

  it('should handle client streaming', (done) => {
    const call = client.uploadFile((error, response) => {
      expect(error).to.be.null;
      expect(response).to.have.property('file_id');
      expect(response.status).to.equal('SUCCESS');
      done();
    });

    // Send test chunks
    call.write({ data: Buffer.from('test'), sequence: 0 });
    call.write({ data: Buffer.from('data'), sequence: 1, is_last: true });
    call.end();
  });
});
Load Testing
#!/bin/bash
# load-test-streams.sh
CONCURRENT_STREAMS=10
DURATION=60
echo "Load testing $CONCURRENT_STREAMS concurrent streams for ${DURATION}s"
for i in $(seq 1 $CONCURRENT_STREAMS); do
  node stream-client.js &
done
# Wait for test duration
sleep $DURATION
# Kill all clients
pkill -f stream-client.js
echo "Load test completed"
Best Practices
Stream Design
- Appropriate Patterns: Choose the right streaming pattern for your use case
- Message Size: Keep individual messages reasonably sized
- Heartbeat Messages: Include periodic keepalive messages for long-running streams
- Error Recovery: Implement proper error handling and recovery mechanisms
Performance Optimization
- Buffering: Use appropriate buffer sizes for your throughput requirements
- Compression: Enable compression for large message streams
- Connection Reuse: Reuse connections when possible
- Resource Limits: Set appropriate limits on concurrent streams and message rates
Monitoring and Debugging
- Stream Metrics: Monitor stream duration, message counts, and error rates
- Logging: Enable detailed logging for debugging streaming issues
- Tracing: Implement request tracing across stream messages
- Health Checks: Regular health checks for long-running streams
Client Compatibility
- Protocol Versions: Ensure compatibility with different gRPC versions
- Language Support: Test with multiple client language implementations
- Network Conditions: Test under various network conditions (latency, packet loss)
- Browser Support: Consider WebSocket fallback for web clients
Troubleshooting
Common Streaming Issues
Stream doesn’t start: Check proto file definitions and service registration
Messages not received: Verify message encoding and template syntax
Stream hangs: Check for proper stream termination and timeout settings
Performance degradation: Monitor resource usage and adjust buffer sizes
Client disconnects: Implement proper heartbeat and reconnection logic
Debug Commands
# Monitor active streams
grpcurl -plaintext localhost:50051 list
# Inspect open connections on the gRPC port
netstat -tnp | grep :50051
# View stream logs
tail -f mockforge.log | grep -E "(stream|grpc)"
# Test basic connectivity
grpcurl -plaintext localhost:50051 grpc.reflection.v1alpha.ServerReflection/ServerReflectionInfo
Performance Profiling
# Profile gRPC performance
cargo flamegraph --bin mockforge-cli -- serve --grpc-port 50051
# Monitor system resources
htop -p $(pgrep mockforge)
# Network monitoring
iftop -i lo
Streaming patterns enable powerful real-time communication scenarios. MockForge’s comprehensive streaming support allows you to create sophisticated mock environments that accurately simulate production streaming services for thorough testing and development.
Advanced Data Synthesis
MockForge provides sophisticated data synthesis capabilities that go beyond simple random data generation. The advanced data synthesis system combines intelligent field inference, deterministic seeding, relationship-aware generation, and cross-endpoint validation to create realistic, coherent, and reproducible test data.
Overview
The advanced data synthesis system consists of four main components:
- Smart Mock Generator - Intelligent field-based mock data generation with deterministic seeding
- Schema Graph Extraction - Automatic discovery of relationships from protobuf schemas
- RAG-Driven Synthesis - Domain-aware data generation using Retrieval-Augmented Generation
- Validation Framework - Cross-endpoint consistency and integrity validation
These components work together to provide enterprise-grade test data generation that maintains referential integrity across your entire gRPC service ecosystem.
Smart Mock Generator
The Smart Mock Generator provides intelligent mock data generation based on field names, types, and patterns. It automatically detects the intent behind field names and generates appropriate realistic data.
Field Name Intelligence
The generator automatically infers appropriate data types based on field names:
| Field Pattern | Generated Data Type | Example Values |
|---|---|---|
| `email`, `email_address` | Realistic email addresses | `user@example.com`, `alice.smith@company.org` |
| `phone`, `mobile`, `phone_number` | Formatted phone numbers | `+1-555-0123`, `(555) 123-4567` |
| `id`, `user_id`, `order_id` | Sequential or UUID-based IDs | `user_001`, `550e8400-e29b-41d4-a716-446655440000` |
| `name`, `first_name`, `last_name` | Realistic names | `John Doe`, `Alice`, `Johnson` |
| `created_at`, `updated_at`, `timestamp` | ISO timestamps | `2023-10-15T14:30:00Z` |
| `latitude`, `longitude` | Geographic coordinates | `40.7128`, `-74.0060` |
| `url`, `website` | Valid URLs | `https://example.com` |
| `token`, `api_key` | Security tokens | `sk_live_4eC39HqLyjWDarjtT1zdp7dc` |
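As an illustration of how this kind of inference can work, the sketch below maps field-name patterns to data kinds. This is a simplified model for explanation only; the names and behavior are assumptions, not MockForge's actual implementation.

```javascript
// Hypothetical sketch of field-name-based inference (not MockForge internals).
// Patterns are checked in order; the first match wins.
const FIELD_PATTERNS = [
  { regex: /email/i, kind: 'email' },
  { regex: /phone|mobile/i, kind: 'phone' },
  { regex: /(^|_)id$/i, kind: 'id' },
  { regex: /_at$|timestamp/i, kind: 'timestamp' },
  { regex: /latitude|longitude/i, kind: 'coordinate' },
  { regex: /url|website/i, kind: 'url' },
];

function inferFieldKind(fieldName) {
  const hit = FIELD_PATTERNS.find((p) => p.regex.test(fieldName));
  return hit ? hit.kind : 'string'; // fall back to a plain string
}
```

For example, `inferFieldKind('created_at')` resolves to a timestamp kind, while an unrecognized name like `bio` falls back to a generic string.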
Deterministic Generation
For reproducible test fixtures, the Smart Mock Generator supports deterministic seeding:
use mockforge_grpc::reflection::smart_mock_generator::{SmartMockGenerator, SmartMockConfig};
// Create a deterministic generator with a fixed seed
let mut generator = SmartMockGenerator::new_with_seed(
    SmartMockConfig::default(),
    12345, // seed value
);
// Generate reproducible data
let uuid1 = generator.generate_uuid();
let email = generator.generate_random_string(10);
// Reset to regenerate the same data
generator.reset();
let uuid2 = generator.generate_uuid(); // Same as uuid1
This ensures that your tests produce consistent results across different runs and environments.
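The reproducibility guarantee rests on a seeded PRNG: the same seed always yields the same sequence of values. The sketch below demonstrates the principle with the well-known mulberry32 generator (an illustrative choice, not necessarily the algorithm MockForge uses internally):

```javascript
// mulberry32: a tiny seeded PRNG, used here only to illustrate
// why fixed seeds make generated data reproducible.
function mulberry32(seed) {
  let a = seed | 0;
  return function () {
    a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // float in [0, 1)
  };
}

// Two generators seeded identically produce identical sequences,
// which is exactly what reproducible fixtures rely on.
const g1 = mulberry32(12345);
const g2 = mulberry32(12345);
console.log(g1() === g2()); // true
```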
Schema Graph Extraction
The schema graph extraction system analyzes your protobuf definitions to automatically discover relationships and foreign key patterns between entities.
Foreign Key Detection
The system uses naming conventions to detect foreign key relationships:
message Order {
string id = 1;
string user_id = 2; // → Detected as foreign key to User
string customer_ref = 3; // → Detected as reference to Customer
int64 timestamp = 4;
}
message User {
string id = 1; // → Detected as primary key
string name = 2;
string email = 3;
}
Common Foreign Key Patterns:
- `user_id` → references the `User` entity
- `orderId` → references the `Order` entity
- `customer_ref` → references the `Customer` entity
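Convention-based detection of this sort amounts to a suffix check on field names. The following sketch shows the idea (illustrative only; the function name and exact conventions are assumptions, not MockForge internals):

```javascript
// Hypothetical sketch: derive a referenced entity name from a field name
// using common foreign-key suffix conventions.
function detectReference(fieldName) {
  // user_id / userId -> User, customer_ref -> Customer
  const m = fieldName.match(/^(.+?)(?:_id|Id|_ref|Ref)$/);
  if (!m) return null; // not a foreign-key-shaped name
  const base = m[1];
  return base.charAt(0).toUpperCase() + base.slice(1); // entity name
}
```

Note that a bare `id` field does not match, which is what lets it be treated as a primary key instead of a reference.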
Relationship Types
The system identifies various relationship types:
- Foreign Key: Direct ID references (`user_id` → `User`)
- Embedded: Nested message types within other messages
- One-to-Many: Repeated field relationships
- Composition: Ownership relationships between entities
RAG-Driven Data Synthesis
RAG (Retrieval-Augmented Generation) enables context-aware data generation using domain knowledge from documentation, examples, and business rules.
Configuration
grpc:
data_synthesis:
rag:
enabled: true
api_endpoint: "https://api.openai.com/v1/chat/completions"
model: "gpt-3.5-turbo"
embedding_model: "text-embedding-ada-002"
similarity_threshold: 0.7
max_documents: 5
context_sources:
- id: "user_docs"
type: "documentation"
path: "./docs/user_guide.md"
weight: 1.0
- id: "examples"
type: "examples"
path: "./examples/sample_data.json"
weight: 0.8
Business Rule Extraction
The RAG system automatically extracts business rules from your documentation:
- Email Validation: “Email fields must follow valid email format”
- Phone Formatting: “Phone numbers should be in international format”
- ID Requirements: “User IDs must be alphanumeric and 8 characters long”
- Relationship Constraints: “Orders must reference valid existing users”
Domain-Aware Generation
Instead of generic random data, RAG generates contextually appropriate values:
message User {
string role = 1; // Context: "admin", "user", "moderator"
string department = 2; // Context: "engineering", "marketing", "sales"
string location = 3; // Context: "San Francisco", "New York", "London"
}
Cross-Endpoint Validation
The validation framework ensures data coherence across different endpoints and validates referential integrity.
Validation Rules
The framework supports multiple types of validation rules:
Built-in Validations:
- Foreign key existence validation
- Field format validation (email, phone, URL)
- Range validation for numeric fields
- Unique constraint validation
Custom Validation Rules:
grpc:
data_synthesis:
validation:
enabled: true
strict_mode: false
custom_rules:
- name: "email_format"
applies_to: ["User", "Customer"]
fields: ["email"]
type: "format"
pattern: "^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$"
error: "Invalid email format"
- name: "age_range"
applies_to: ["User"]
fields: ["age"]
type: "range"
min: 0
max: 120
error: "Age must be between 0 and 120"
Referential Integrity
The validator automatically checks that:
- Foreign key references point to existing entities
- Required relationships are satisfied
- Cross-service data dependencies are maintained
- Business constraints are enforced
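A foreign-key existence check of the kind listed above can be sketched as follows. The data shapes here are hypothetical and chosen for illustration; they do not reflect MockForge's actual API:

```javascript
// Sketch: verify that every foreign-key reference points at an existing entity.
// entities: { User: [{ id: 'u1' }, ...], ... }
// refs: [{ from: 'Order', field: 'user_id', to: 'User', value: 'u1' }, ...]
function checkForeignKeys(entities, refs) {
  const errors = [];
  for (const ref of refs) {
    const targets = entities[ref.to] || [];
    if (!targets.some((e) => e.id === ref.value)) {
      errors.push(`${ref.from}.${ref.field} -> ${ref.to}(${ref.value}) does not exist`);
    }
  }
  return errors; // empty array means referential integrity holds
}
```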
Configuration
Environment Variables
# Enable advanced data synthesis
MOCKFORGE_DATA_SYNTHESIS_ENABLED=true
# Deterministic generation
MOCKFORGE_DATA_SYNTHESIS_SEED=12345
MOCKFORGE_DATA_SYNTHESIS_DETERMINISTIC=true
# RAG configuration
MOCKFORGE_RAG_ENABLED=true
MOCKFORGE_RAG_API_KEY=your-api-key
MOCKFORGE_RAG_MODEL=gpt-3.5-turbo
# Validation settings
MOCKFORGE_VALIDATION_ENABLED=true
MOCKFORGE_VALIDATION_STRICT_MODE=false
Configuration File
grpc:
port: 50051
proto_dir: "proto/"
data_synthesis:
enabled: true
smart_generator:
field_inference: true
use_faker: true
deterministic: true
seed: 42
max_depth: 5
rag:
enabled: true
api_endpoint: "https://api.openai.com/v1/chat/completions"
api_key: "${RAG_API_KEY}"
model: "gpt-3.5-turbo"
embedding_model: "text-embedding-ada-002"
similarity_threshold: 0.7
max_context_length: 2000
cache_contexts: true
validation:
enabled: true
strict_mode: false
max_validation_depth: 3
cache_results: true
schema_extraction:
extract_relationships: true
detect_foreign_keys: true
confidence_threshold: 0.8
Example Usage
Basic Smart Generation
# Start MockForge with advanced data synthesis
MOCKFORGE_DATA_SYNTHESIS_ENABLED=true \
MOCKFORGE_DATA_SYNTHESIS_SEED=12345 \
mockforge serve --grpc-port 50051
With RAG Enhancement
# Start with RAG-powered domain awareness
MOCKFORGE_DATA_SYNTHESIS_ENABLED=true \
MOCKFORGE_RAG_ENABLED=true \
MOCKFORGE_RAG_API_KEY=your-api-key \
MOCKFORGE_VALIDATION_ENABLED=true \
mockforge serve --grpc-port 50051
Testing Deterministic Generation
# Generate data twice with same seed - should be identical
grpcurl -plaintext -d '{"user_id": "123"}' \
localhost:50051 com.example.UserService/GetUser
# Reset and call again - will generate same response
grpcurl -plaintext -d '{"user_id": "123"}' \
localhost:50051 com.example.UserService/GetUser
Best Practices
Deterministic Testing
- Use fixed seeds in CI/CD pipelines for reproducible tests
- Reset generators between test cases for consistency
- Document seed values used in critical test scenarios
Schema Design for Synthesis
- Use consistent naming conventions for foreign keys (`user_id`, `customer_ref`)
- Add comments to proto files describing business rules
- Consider field naming that indicates data type (`email_address` vs `contact`)
RAG Integration
- Provide high-quality domain documentation as context sources
- Use specific, actionable descriptions in documentation
- Monitor API costs and implement appropriate caching
Validation Strategy
- Start with lenient validation and gradually add stricter rules
- Use warnings for potential issues, errors for critical problems
- Provide helpful error messages with suggested fixes
Advanced Scenarios
Multi-Service Data Coherence
When mocking multiple related gRPC services, ensure data coherence:
# Start user service
MOCKFORGE_DATA_SYNTHESIS_SEED=100 \
mockforge serve --grpc-port 50051 --proto-dir user-proto &
# Start order service with same seed for consistency
MOCKFORGE_DATA_SYNTHESIS_SEED=100 \
mockforge serve --grpc-port 50052 --proto-dir order-proto &
Custom Field Overrides
Override specific fields with custom values:
grpc:
data_synthesis:
field_overrides:
"admin_email": "admin@company.com"
"api_version": "v2.1"
"environment": "testing"
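Conceptually, overrides are applied after a record is generated, replacing any generated value whose field name appears in the override map. A minimal sketch of that merge step (hypothetical, not MockForge's code):

```javascript
// Sketch: replace generated values with configured overrides by field name.
function applyOverrides(generated, overrides) {
  return Object.fromEntries(
    Object.entries(generated).map(([key, value]) =>
      [key, key in overrides ? overrides[key] : value]
    )
  );
}
```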
Business Rule Templates
Define reusable business rule templates:
grpc:
data_synthesis:
rule_templates:
- name: "financial_data"
applies_to: ["Invoice", "Payment", "Transaction"]
rules:
- field_pattern: "*_amount"
type: "range"
min: 0.01
max: 10000.00
- field_pattern: "*_currency"
type: "enum"
values: ["USD", "EUR", "GBP"]
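Patterns like `*_amount` are glob-style, so matching them against field names amounts to translating the glob into a regular expression. A sketch under that assumption (the helper names are illustrative):

```javascript
// Sketch: convert a glob like "*_amount" to an anchored regex.
function globToRegex(glob) {
  // Escape regex metacharacters, then turn '*' into '.*'
  const escaped = glob.replace(/[.+?^${}()|[\]\\]/g, '\\$&');
  return new RegExp('^' + escaped.replace(/\*/g, '.*') + '$');
}

// Does a rule's field_pattern apply to a given field name?
function ruleApplies(rule, fieldName) {
  return globToRegex(rule.field_pattern).test(fieldName);
}
```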
Troubleshooting
Common Issues
Generated data not realistic enough
- Enable RAG synthesis with domain documentation
- Check field naming conventions for better inference
- Add custom business rules for specific constraints
Non-deterministic behavior
- Ensure `deterministic: true` and provide a `seed` value
- Reset generators between test runs
- Check for external randomness sources
Validation failures
- Review foreign key naming conventions
- Ensure referenced entities are generated before referencing ones
- Check custom validation rule patterns
RAG not working
- Verify API credentials and endpoints
- Check context source file paths and permissions
- Monitor API rate limits and error responses
Debug Commands
# Test data synthesis configuration
mockforge validate-config
# Show detected schema relationships
mockforge analyze-schema --proto-dir proto/
# Test deterministic generation
MOCKFORGE_DATA_SYNTHESIS_DEBUG=true \
mockforge serve --grpc-port 50051
Advanced data synthesis transforms MockForge from a simple mocking tool into a comprehensive test data management platform, enabling realistic, consistent, and validated test scenarios across your entire service architecture.
WebSocket Mocking
MockForge provides comprehensive WebSocket connection mocking with support for both scripted replay scenarios and interactive real-time communication. This enables testing of WebSocket-based applications, real-time APIs, and event-driven systems.
WebSocket Mocking Modes
MockForge supports two primary WebSocket mocking approaches:
1. Replay Mode (Scripted)
Pre-recorded message sequences that play back on schedule, simulating server behavior with precise timing control.
2. Interactive Mode (Real-time)
Dynamic responses based on client messages, enabling complex interactive scenarios and stateful communication.
Configuration
Basic WebSocket Setup
# Start MockForge with WebSocket support
mockforge serve --ws-port 3001 --ws-replay-file ws-scenario.jsonl
Environment Variables
# WebSocket configuration
MOCKFORGE_WS_ENABLED=true # Enable WebSocket support (default: false)
MOCKFORGE_WS_PORT=3001 # WebSocket server port
MOCKFORGE_WS_BIND=0.0.0.0 # Bind address
MOCKFORGE_WS_REPLAY_FILE=path/to/file.jsonl # Path to replay file
MOCKFORGE_WS_PATH=/ws # WebSocket endpoint path (default: /ws)
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true # Enable template processing
Command Line Options
mockforge serve \
--ws-port 3001 \
--ws-replay-file examples/ws-demo.jsonl \
--ws-path /websocket
Replay Mode
Replay mode uses JSONL-formatted files to define scripted message sequences with precise timing control.
Replay File Format
Each line in the replay file is a JSON object with the following structure:
{
"ts": 0,
"dir": "out",
"text": "Hello, client!",
"waitFor": "^CLIENT_READY$"
}
Field Definitions
- `ts` (number, required): Timestamp offset in milliseconds from connection start
- `dir` (string, required): Message direction: `"out"` for messages sent from server to client, `"in"` for expected messages from the client (used for validation)
- `text` (string, required): Message content (supports templates)
- `waitFor` (string, optional): Regular expression to wait for before proceeding
Basic Replay Example
{"ts":0,"dir":"out","text":"Welcome to MockForge WebSocket server","waitFor":"^HELLO$"}
{"ts":1000,"dir":"out","text":"Connection established"}
{"ts":2000,"dir":"out","text":"Sending data: 42"}
{"ts":3000,"dir":"out","text":"Goodbye"}
Advanced Replay Features
Template Support
{"ts":0,"dir":"out","text":"Session {{uuid}} started at {{now}}"}
{"ts":1000,"dir":"out","text":"Random value: {{randInt 1 100}}"}
{"ts":2000,"dir":"out","text":"Future event at {{now+5m}}"}
Interactive Elements
{"ts":0,"dir":"out","text":"Please authenticate","waitFor":"^AUTH .+$"}
{"ts":100,"dir":"out","text":"Authentication successful"}
{"ts":200,"dir":"out","text":"Choose option (A/B/C)","waitFor":"^(A|B|C)$"}
Complex Message Structures
{"ts":0,"dir":"out","text":"{\"type\":\"welcome\",\"user\":{\"id\":\"{{uuid}}\",\"name\":\"John\"}}"}
{"ts":1000,"dir":"out","text":"{\"type\":\"data\",\"payload\":{\"items\":[{\"id\":1,\"value\":\"{{randInt 10 99}}\"},{\"id\":2,\"value\":\"{{randInt 100 999}}\"}]}}"}
Replay File Management
Creating Replay Files
# Record from live WebSocket connection
# (Feature in development - manual creation for now)
# Create from application logs
# Extract WebSocket messages and convert to JSONL format
# Generate programmatically
node -e "
const fs = require('fs');
const messages = [
{ts: 0, dir: 'out', text: 'HELLO', waitFor: '^HI$'},
{ts: 1000, dir: 'out', text: 'DATA: 42'}
];
fs.writeFileSync('replay.jsonl', messages.map(JSON.stringify).join('\n'));
"
Validation
# Validate replay file syntax
node -e "
const fs = require('fs');
const lines = fs.readFileSync('replay.jsonl', 'utf8').split('\n');
lines.forEach((line, i) => {
if (line.trim()) {
try {
const msg = JSON.parse(line);
if (typeof msg.ts !== 'number' || !msg.dir || !msg.text) {
console.log(\`Line \${i+1}: Missing required fields\`);
}
} catch (e) {
console.log(\`Line \${i+1}: Invalid JSON\`);
}
}
});
console.log('Validation complete');
"
Interactive Mode
Interactive mode enables dynamic responses based on client messages, supporting complex conversational patterns and state management.
Basic Interactive Setup
{"ts":0,"dir":"out","text":"What is your name?","waitFor":"^NAME .+$"}
{"ts":100,"dir":"out","text":"Hello {{request.ws.lastMessage.match(/^NAME (.+)$/)[1]}}!"}
State Management
{"ts":0,"dir":"out","text":"Welcome! Type 'START' to begin","waitFor":"^START$"}
{"ts":100,"dir":"out","text":"Game started. Score: 0","state":"playing"}
{"ts":200,"dir":"out","text":"Choose: ROCK/PAPER/SCISSORS","waitFor":"^(ROCK|PAPER|SCISSORS)$"}
{"ts":300,"dir":"out","text":"You chose {{request.ws.lastMessage}}. I chose ROCK. You win!","waitFor":"^PLAY_AGAIN$"}
Conditional Logic
{"ts":0,"dir":"out","text":"Enter command","waitFor":".+","condition":"{{request.ws.message.length > 0}}"}
{"ts":100,"dir":"out","text":"Processing: {{request.ws.message}}"}
{"ts":200,"dir":"out","text":"Command completed"}
Testing WebSocket Connections
Using WebSocket Clients
Node.js Client
const WebSocket = require('ws');
const ws = new WebSocket('ws://localhost:3001/ws');
ws.on('open', () => {
console.log('Connected to MockForge WebSocket');
ws.send('CLIENT_READY');
});
ws.on('message', (data) => {
const message = data.toString();
console.log('Received:', message);
// Auto-respond to common prompts
if (message.includes('ACK')) {
ws.send('ACK');
}
if (message.includes('CONFIRMED')) {
ws.send('CONFIRMED');
}
if (message.includes('AUTH')) {
ws.send('AUTH token123');
}
});
ws.on('close', () => {
console.log('Connection closed');
});
ws.on('error', (err) => {
console.error('WebSocket error:', err);
});
Browser JavaScript
const ws = new WebSocket('ws://localhost:3001/ws');
ws.onopen = () => {
console.log('Connected');
ws.send('CLIENT_READY');
};
ws.onmessage = (event) => {
console.log('Received:', event.data);
// Handle server messages
};
ws.onclose = () => {
console.log('Connection closed');
};
Command Line Tools
# Using websocat
websocat ws://localhost:3001/ws
# Using curl (WebSocket support experimental)
curl --include \
--no-buffer \
--header "Connection: Upgrade" \
--header "Upgrade: websocket" \
--header "Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==" \
--header "Sec-WebSocket-Version: 13" \
ws://localhost:3001/ws
Automated Testing
#!/bin/bash
# test-websocket.sh
echo "Testing WebSocket connection..."
# Test with Node.js
node -e "
const WebSocket = require('ws');
const ws = new WebSocket('ws://localhost:3001/ws');
ws.on('open', () => {
console.log('✓ Connection established');
ws.send('CLIENT_READY');
});
ws.on('message', (data) => {
console.log('✓ Message received:', data.toString());
ws.close();
});
ws.on('close', () => {
console.log('✓ Connection closed successfully');
process.exit(0);
});
ws.on('error', (err) => {
console.error('✗ WebSocket error:', err);
process.exit(1);
});
// Timeout after 10 seconds
setTimeout(() => {
console.error('✗ Test timeout');
process.exit(1);
}, 10000);
"
Advanced Features
Connection Pooling
# Support multiple concurrent connections
MOCKFORGE_WS_MAX_CONNECTIONS=100
MOCKFORGE_WS_CONNECTION_TIMEOUT=30000
Message Filtering
{"ts":0,"dir":"in","text":".*","filter":"{{request.ws.message.startsWith('VALID_')}}"}
{"ts":100,"dir":"out","text":"Valid message received"}
Error Simulation
{"ts":0,"dir":"out","text":"Error occurred","error":"true","code":1006}
{"ts":100,"dir":"out","text":"Connection will close","close":"true"}
Binary Message Support
{"ts":0,"dir":"out","text":"AQIDBAU=","binary":"true"}
{"ts":1000,"dir":"out","text":"Binary data sent"}
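Binary payloads are carried as base64 text in the `text` field. Decoding the `AQIDBAU=` example above with Node shows the underlying bytes:

```javascript
// Base64-encoded binary replay payloads decode to raw bytes on the wire.
const bytes = Buffer.from('AQIDBAU=', 'base64');
console.log([...bytes]); // [ 1, 2, 3, 4, 5 ]
```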
Integration Patterns
Real-time Applications
- Chat Applications: Mock user conversations and bot responses
- Live Updates: Simulate real-time data feeds and notifications
- Gaming: Mock multiplayer game state and player interactions
API Testing
- WebSocket APIs: Test GraphQL subscriptions and real-time queries
- Event Streams: Mock server-sent events and push notifications
- Live Dashboards: Simulate real-time metrics and monitoring data
Development Workflows
- Frontend Development: Mock WebSocket backends during UI development
- Integration Testing: Test WebSocket handling in microservices
- Load Testing: Simulate thousands of concurrent WebSocket connections
Best Practices
Replay File Organization
- Modular Files: Break complex scenarios into smaller, focused replay files
- Version Control: Keep replay files in Git for collaboration
- Documentation: Comment complex scenarios with clear descriptions
- Validation: Always validate replay files before deployment
Performance Considerations
- Message Volume: Limit concurrent connections based on system resources
- Memory Usage: Monitor memory usage with large replay files
- Timing Accuracy: Consider system clock precision for time-sensitive scenarios
- Connection Limits: Set appropriate connection pool sizes
Security Considerations
- Input Validation: Validate all client messages in interactive mode
- Rate Limiting: Implement connection rate limits for production
- Authentication: Mock authentication handshakes appropriately
- Data Sanitization: Avoid exposing sensitive data in replay files
Debugging Tips
- Verbose Logging: Enable detailed WebSocket logging for troubleshooting
- Connection Monitoring: Track connection lifecycle and message flow
- Replay Debugging: Step through replay files manually
- Client Compatibility: Test with multiple WebSocket client libraries
Troubleshooting
Common Issues
Connection fails: Check that WebSocket port is not blocked by firewall
Messages not received: Verify replay file path and JSONL format
Templates not expanding: Ensure MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
Timing issues: Check system clock and timestamp calculations
Debug Commands
# Check WebSocket port
netstat -tlnp | grep :3001
# Monitor connections
ss -tlnp | grep :3001
# Test basic connectivity
curl -I http://localhost:3001/health # If HTTP health endpoint exists
Log Analysis
# View WebSocket logs
tail -f mockforge.log | grep -i websocket
# Count connections
grep "WebSocket connection" mockforge.log | wc -l
# Find errors
grep -i "websocket.*error" mockforge.log
For detailed implementation guides, see:
- Replay Mode - Advanced scripted scenarios
- Interactive Mode - Dynamic real-time communication
Replay Mode
Replay mode provides precise, scripted WebSocket message sequences that execute on a predetermined schedule. This mode is ideal for testing deterministic scenarios, reproducing specific interaction patterns, and validating client behavior against known server responses.
Core Concepts
Message Timeline
Replay files define a sequence of messages that execute based on timestamps relative to connection establishment. Each message has a precise timing offset ensuring consistent playback.
Deterministic Execution
Replay scenarios execute identically each time, making them perfect for:
- Automated testing
- Regression testing
- Client behavior validation
- Demo environments
Replay File Structure
JSONL Format
Replay files use JSON Lines format where each line contains a complete JSON object representing a single message or directive.
{"ts":0,"dir":"out","text":"Welcome message"}
{"ts":1000,"dir":"out","text":"Data update","waitFor":"^ACK$"}
{"ts":2000,"dir":"out","text":"Connection closing"}
Message Object Schema
interface ReplayMessage {
ts: number; // Timestamp offset in milliseconds
dir: "out" | "in"; // Message direction
text: string; // Message content
waitFor?: string; // Optional regex pattern to wait for
binary?: boolean; // Binary message flag
close?: boolean; // Close connection after this message
error?: boolean; // Send as error frame
}
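A simplified view of playback converts messages in this schema into per-message send delays. The sketch below ignores `waitFor` gating (which blocks until the client's message matches the pattern) and handles timing only; it is an illustration, not the actual engine:

```javascript
// Sketch: turn parsed replay messages into an ordered send plan.
// Only "out" messages are scheduled; "in" entries are validation expectations.
function buildSendPlan(messages) {
  return messages
    .filter((m) => m.dir === 'out')
    .sort((a, b) => a.ts - b.ts)
    .map((m, i, arr) => ({
      text: m.text,
      // Delay relative to the previous send (first delay is from connection start)
      delayMs: i === 0 ? m.ts : m.ts - arr[i - 1].ts,
    }));
}
```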
Basic Replay Examples
Simple Chat Simulation
{"ts":0,"dir":"out","text":"Chat server connected. Welcome!"}
{"ts":500,"dir":"out","text":"Type 'hello' to start chatting","waitFor":"^hello$"}
{"ts":100,"dir":"out","text":"Hello! How can I help you today?"}
{"ts":2000,"dir":"out","text":"Are you still there?","waitFor":".*"}
{"ts":500,"dir":"out","text":"Thanks for chatting! Goodbye."}
API Status Monitoring
{"ts":0,"dir":"out","text":"{\"type\":\"status\",\"message\":\"Monitor connected\"}"}
{"ts":1000,"dir":"out","text":"{\"type\":\"metrics\",\"cpu\":45,\"memory\":67}"}
{"ts":2000,"dir":"out","text":"{\"type\":\"metrics\",\"cpu\":42,\"memory\":68}"}
{"ts":3000,"dir":"out","text":"{\"type\":\"metrics\",\"cpu\":47,\"memory\":66}"}
{"ts":4000,"dir":"out","text":"{\"type\":\"alert\",\"level\":\"warning\",\"message\":\"High CPU usage\"}"}
Game State Synchronization
{"ts":0,"dir":"out","text":"{\"action\":\"game_start\",\"player_id\":\"{{uuid}}\",\"game_id\":\"{{uuid}}\"}"}
{"ts":1000,"dir":"out","text":"{\"action\":\"state_update\",\"position\":{\"x\":10,\"y\":20},\"score\":0}"}
{"ts":2000,"dir":"out","text":"{\"action\":\"enemy_spawn\",\"enemy_id\":\"{{uuid}}\",\"position\":{\"x\":50,\"y\":30}}"}
{"ts":1500,"dir":"out","text":"{\"action\":\"powerup\",\"type\":\"speed\",\"position\":{\"x\":25,\"y\":15}}"}
{"ts":3000,"dir":"out","text":"{\"action\":\"game_over\",\"final_score\":1250,\"reason\":\"timeout\"}"}
Advanced Replay Techniques
Conditional Branching
While replay mode is inherently linear, you can simulate branching using multiple replay files and external logic:
// File: login-success.jsonl
{"ts":0,"dir":"out","text":"Login successful","waitFor":"^ready$"}
{"ts":100,"dir":"out","text":"Welcome to your dashboard"}
// File: login-failed.jsonl
{"ts":0,"dir":"out","text":"Invalid credentials"}
{"ts":500,"dir":"out","text":"Connection will close","close":true}
Template Integration
{"ts":0,"dir":"out","text":"Session {{uuid}} established at {{now}}"}
{"ts":1000,"dir":"out","text":"Your lucky number is: {{randInt 1 100}}"}
{"ts":2000,"dir":"out","text":"Next maintenance window: {{now+24h}}"}
{"ts":3000,"dir":"out","text":"Server load: {{randInt 20 80}}%"}
Binary Message Support
{"ts":0,"dir":"out","text":"iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg==","binary":true}
{"ts":1000,"dir":"out","text":"Image sent successfully"}
Error Simulation
{"ts":0,"dir":"out","text":"Connection established"}
{"ts":5000,"dir":"out","text":"Internal server error","error":true}
{"ts":1000,"dir":"out","text":"Attempting reconnection..."}
{"ts":2000,"dir":"out","text":"Reconnection failed","close":true}
Creating Replay Files
Manual Creation
# Create a new replay file
cat > chat-replay.jsonl << 'EOF'
{"ts":0,"dir":"out","text":"Welcome to support chat!"}
{"ts":1000,"dir":"out","text":"How can I help you today?","waitFor":".*"}
{"ts":500,"dir":"out","text":"Thanks for your question. Let me check..."}
{"ts":2000,"dir":"out","text":"I found the solution! Here's what you need to do:"}
{"ts":1000,"dir":"out","text":"1. Go to settings\n2. Click preferences\n3. Enable feature X"}
{"ts":3000,"dir":"out","text":"Does this solve your issue?","waitFor":"^(yes|no)$"}
{"ts":500,"dir":"out","text":"Great! Glad I could help. Have a nice day!"}
EOF
From Application Logs
#!/bin/bash
# extract-websocket-logs.sh
# Extract WebSocket messages from application logs
grep "WEBSOCKET_MSG" app.log | \
# Parse log entries and convert to JSONL
gawk '{  # three-argument match() requires GNU awk
# Extract timestamp, direction, and message
match($0, /([0-9]+).*dir=([^ ]*).*msg=(.*)/, arr)
printf "{\"ts\":%d,\"dir\":\"%s\",\"text\":\"%s\"}\n", arr[1], arr[2], arr[3]
}' > replay-from-logs.jsonl
Programmatic Generation
// generate-replay.js
const fs = require('fs');
function generateHeartbeatReplay(interval = 30000, duration = 300000) {
const messages = [];
const messageCount = duration / interval;
for (let i = 0; i < messageCount; i++) {
messages.push({
ts: i * interval,
dir: "out",
text: JSON.stringify({
type: "heartbeat",
timestamp: `{{now+${i * interval}ms}}`,
sequence: i + 1
})
});
}
fs.writeFileSync('heartbeat-replay.jsonl',
messages.map(JSON.stringify).join('\n'));
}
generateHeartbeatReplay();
# generate-replay.py
import json
import random
def generate_data_stream(count=100, interval=1000):
messages = []
for i in range(count):
messages.append({
"ts": i * interval,
"dir": "out",
"text": json.dumps({
"type": "data_point",
"id": f"{{{{uuid}}}}",
"value": random.randint(1, 100),
"timestamp": f"{{{{now+{i * interval}ms}}}}"
})
})
return messages
# Write to file
with open('data-stream-replay.jsonl', 'w') as f:
for msg in generate_data_stream():
f.write(json.dumps(msg) + '\n')
Validation and Testing
Replay File Validation
# Validate JSONL syntax
node -e "
const fs = require('fs');
const lines = fs.readFileSync('replay.jsonl', 'utf8').split('\n');
let valid = true;
lines.forEach((line, i) => {
if (line.trim()) {
try {
const msg = JSON.parse(line);
if (msg.ts === undefined || !msg.dir || !msg.text) {
console.log(\`Line \${i+1}: Missing required fields\`);
valid = false;
}
if (typeof msg.ts !== 'number' || msg.ts < 0) {
console.log(\`Line \${i+1}: Invalid timestamp\`);
valid = false;
}
if (!['in', 'out'].includes(msg.dir)) {
console.log(\`Line \${i+1}: Invalid direction\`);
valid = false;
}
} catch (e) {
console.log(\`Line \${i+1}: Invalid JSON - \${e.message}\`);
valid = false;
}
}
});
console.log(valid ? '✓ Replay file is valid' : '✗ Replay file has errors');
"
Timing Analysis
# Analyze replay timing
node -e "
const fs = require('fs');
const messages = fs.readFileSync('replay.jsonl', 'utf8')
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
const timings = messages.map((msg, i) => ({
index: i + 1,
ts: msg.ts,
interval: i > 0 ? msg.ts - messages[i-1].ts : 0
}));
console.log('Timing Analysis:');
timings.forEach(t => {
console.log(\`Message \${t.index}: \${t.ts}ms (interval: \${t.interval}ms)\`);
});
const totalDuration = Math.max(...messages.map(m => m.ts));
console.log(\`Total duration: \${totalDuration}ms (\${(totalDuration/1000).toFixed(1)}s)\`);
"
Functional Testing
#!/bin/bash
# test-replay.sh
REPLAY_FILE=$1
WS_URL="ws://localhost:3001/ws"
echo "Testing replay file: $REPLAY_FILE"
# Validate file exists and is readable
if [ ! -f "$REPLAY_FILE" ]; then
echo "✗ Replay file not found"
exit 1
fi
# Basic syntax check
if ! node -e "
const fs = require('fs');
const content = fs.readFileSync('$REPLAY_FILE', 'utf8');
const lines = content.split('\n').filter(l => l.trim());
lines.forEach((line, i) => {
try {
JSON.parse(line);
} catch (e) {
console.error(\`Line \${i+1}: \${e.message}\`);
process.exit(1);
}
});
console.log(\`✓ Valid JSONL: \${lines.length} messages\`);
"; then
echo "✗ Syntax validation failed"
exit 1
fi
echo "✓ Replay file validation passed"
echo "Ready to test with: mockforge serve --ws-replay-file $REPLAY_FILE"
Best Practices
File Organization
- Descriptive Names: Use clear, descriptive filenames such as `user-authentication-flow.jsonl`, `real-time-data-stream.jsonl`, `error-handling-scenarios.jsonl`
- Modular Scenarios: Break complex interactions into focused files, e.g. `login-flow.jsonl`, `main-interaction.jsonl`, `logout-flow.jsonl`
- Version Control: Keep replay files in Git with meaningful commit messages
Performance Optimization
- Message Batching: Group related messages with minimal intervals
- Memory Management: Monitor memory usage with large replay files
- Connection Limits: Consider concurrent connection impact
Maintenance
- Regular Updates: Keep replay files synchronized with application changes
- Documentation: Comment complex scenarios inline
- Versioning: Tag replay files with application versions
Debugging
- Verbose Logging: Enable detailed WebSocket logging during development
- Step-through Testing: Test replay files incrementally
- Timing Verification: Validate message timing against expectations
Common Patterns
Authentication Flow
{"ts":0,"dir":"out","text":"Please authenticate","waitFor":"^AUTH .+$"}
{"ts":100,"dir":"out","text":"Authenticating..."}
{"ts":500,"dir":"out","text":"Authentication successful"}
{"ts":200,"dir":"out","text":"Welcome back, user!"}
Streaming Data
{"ts":0,"dir":"out","text":"{\"type\":\"stream_start\",\"stream_id\":\"{{uuid}}\"}"}
{"ts":100,"dir":"out","text":"{\"type\":\"data\",\"value\":{{randInt 1 100}}}"}
{"ts":100,"dir":"out","text":"{\"type\":\"data\",\"value\":{{randInt 1 100}}}"}
{"ts":100,"dir":"out","text":"{\"type\":\"data\",\"value\":{{randInt 1 100}}}"}
{"ts":5000,"dir":"out","text":"{\"type\":\"stream_end\",\"total_messages\":3}"}
Error Recovery
{"ts":0,"dir":"out","text":"System operational"}
{"ts":30000,"dir":"out","text":"Warning: High load detected"}
{"ts":10000,"dir":"out","text":"Error: Service unavailable","error":true}
{"ts":5000,"dir":"out","text":"Attempting recovery..."}
{"ts":10000,"dir":"out","text":"Recovery successful"}
{"ts":1000,"dir":"out","text":"System back to normal"}
Integration with CI/CD
Automated Testing
# .github/workflows/test.yml
name: WebSocket Tests
on: [push, pull_request]
jobs:
websocket-test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '18'
- name: Install dependencies
run: npm install ws
- name: Start MockForge
run: |
cargo install mockforge-cli
mockforge serve --ws-replay-file examples/ws-demo.jsonl &
sleep 2
- name: Run WebSocket tests
run: node test-websocket.js
Performance Benchmarking
#!/bin/bash
# benchmark-replay.sh
CONCURRENT_CONNECTIONS=100
DURATION=60
echo "Benchmarking WebSocket replay with $CONCURRENT_CONNECTIONS connections for ${DURATION}s"
# Start MockForge
mockforge serve --ws-replay-file benchmark-replay.jsonl &
SERVER_PID=$!
sleep 2
# Run benchmark
node benchmark-websocket.js $CONCURRENT_CONNECTIONS $DURATION
# Cleanup
kill $SERVER_PID
This comprehensive approach to replay mode ensures reliable, deterministic WebSocket testing scenarios that can be easily created, validated, and maintained as part of your testing infrastructure.
Interactive Mode
Interactive mode enables dynamic, real-time WebSocket communication where MockForge responds intelligently to client messages. Unlike replay mode’s predetermined sequences, interactive mode supports complex conversational patterns, state management, and adaptive responses based on client input.
Core Concepts
Dynamic Response Logic
Interactive mode evaluates client messages and generates contextually appropriate responses using conditional logic, pattern matching, and state tracking.
State Management
Connections maintain state across messages, enabling complex interactions like authentication flows, game mechanics, and multi-step processes.
Message Processing Pipeline
- Receive client message
- Parse and validate input
- Evaluate conditions and state
- Generate appropriate response
- Update connection state
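The pipeline above can be sketched as a small first-match dispatch loop. This is illustrative only: the rule shape mirrors the JSONL fields used in this chapter, not MockForge's internal API, and `processMessage` is a hypothetical helper.

```javascript
// Minimal sketch of the interactive-mode pipeline: match a rule's regex,
// check its condition against connection state, build a response, and
// update state. First matching rule wins.
function processMessage(rules, state, message) {
  for (const rule of rules) {
    // 1-2. Parse input and test the rule's pattern
    if (!new RegExp(rule.text).test(message)) continue;
    // 3. Evaluate any condition against connection state
    if (rule.condition && !rule.condition(state)) continue;
    // 4. Generate the response from state + message
    const response = rule.response(state, message);
    // 5. Update connection state
    const nextState = rule.state ? { ...state, phase: rule.state } : state;
    return { response, state: nextState };
  }
  return { response: null, state };
}
```

For example, an echo server with a quit command is two rules: a specific `^QUIT$` rule followed by a `.*` catch-all.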
Basic Interactive Setup
Simple Echo Server
{"ts":0,"dir":"out","text":"Echo server ready. Send me a message!"}
{"ts":0,"dir":"in","text":".*","response":"You said: {{request.ws.message}}"}
Command Processor
{"ts":0,"dir":"out","text":"Available commands: HELP, TIME, ECHO <message>, QUIT"}
{"ts":0,"dir":"in","text":"^HELP$","response":"Commands: HELP, TIME, ECHO <msg>, QUIT"}
{"ts":0,"dir":"in","text":"^TIME$","response":"Current time: {{now}}"}
{"ts":0,"dir":"in","text":"^ECHO (.+)$","response":"Echo: {{request.ws.message.match(/^ECHO (.+)$/)[1]}}"}
{"ts":0,"dir":"in","text":"^QUIT$","response":"Goodbye!","close":true}
Advanced Interactive Patterns
Authentication Flow
{"ts":0,"dir":"out","text":"Welcome! Please login with: LOGIN <username> <password>"}
{"ts":0,"dir":"in","text":"^LOGIN (\\w+) (\\w+)$","response":"Authenticating {{request.ws.message.match(/^LOGIN (\\w+) (\\w+)$/)[1]}}...","state":"authenticating"}
{"ts":1000,"dir":"out","text":"Login successful! Welcome, {{request.ws.state.username}}!","condition":"{{request.ws.state.authenticating}}"}
{"ts":0,"dir":"out","text":"Login failed. Try again.","condition":"{{!request.ws.state.authenticating}}"}
State-Based Conversations
{"ts":0,"dir":"out","text":"Welcome to the survey bot. What's your name?","state":"awaiting_name"}
{"ts":0,"dir":"in","text":".+","response":"Nice to meet you, {{request.ws.message}}! How old are you?","state":"awaiting_age","condition":"{{request.ws.state.awaiting_name}}"}
{"ts":0,"dir":"in","text":"^\\d+$","response":"Thanks! You're {{request.ws.message}} years old. Survey complete!","state":"complete","condition":"{{request.ws.state.awaiting_age}}"}
{"ts":0,"dir":"in","text":".*","response":"Please enter a valid age (numbers only).","condition":"{{request.ws.state.awaiting_age}}"}
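The survey flow above is effectively a three-state machine. It can help to sketch the same logic as an explicit transition table before encoding it in JSONL rules (the state names below match the example; the helper itself is hypothetical):

```javascript
// The survey bot as an explicit state machine: each state maps an input
// check to a reply and the next state.
const surveyStates = {
  awaiting_name: (msg) =>
    msg.length > 0
      ? { reply: `Nice to meet you, ${msg}! How old are you?`, next: 'awaiting_age' }
      : { reply: "What's your name?", next: 'awaiting_name' },
  awaiting_age: (msg) =>
    /^\d+$/.test(msg)
      ? { reply: `Thanks! You're ${msg} years old. Survey complete!`, next: 'complete' }
      : { reply: 'Please enter a valid age (numbers only).', next: 'awaiting_age' },
};

function stepSurvey(state, msg) {
  return surveyStates[state](msg);
}
```

Writing the table first makes it obvious which conditions each JSONL rule needs so that, say, the age validator never fires while the bot is still waiting for a name.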
Game Mechanics
{"ts":0,"dir":"out","text":"Welcome to Number Guessing Game! I'm thinking of a number between 1-100.","state":"playing","game":{"target":42,"attempts":0}}
{"ts":0,"dir":"in","text":"^GUESS (\\d+)$","condition":"{{request.ws.state.playing}}","response":"{{#if (eq (parseInt request.ws.message.match(/^GUESS (\\d+)$/)[1]) request.ws.state.game.target)}}You won in {{request.ws.state.game.attempts + 1}} attempts!{{else}}{{#if (gt (parseInt request.ws.message.match(/^GUESS (\\d+)$/)[1]) request.ws.state.game.target)}}Too high!{{else}}Too low!{{/if}} Try again.{{/if}}","state":"{{#if (eq (parseInt request.ws.message.match(/^GUESS (\\d+)$/)[1]) request.ws.state.game.target)}}won{{else}}playing{{/if}}","game":{"target":"{{request.ws.state.game.target}}","attempts":"{{request.ws.state.game.attempts + 1}}"}}
Message Processing Syntax
Input Patterns
Interactive mode uses regex patterns to match client messages:
// Exact match
{"dir":"in","text":"hello","response":"Hi there!"}
// Case-insensitive match
{"dir":"in","text":"(?i)hello","response":"Hi there!"}
// Pattern with capture groups
{"dir":"in","text":"^NAME (.+)$","response":"Hello, {{request.ws.message.match(/^NAME (.+)$/)[1]}}!"}
// Optional elements
{"dir":"in","text":"^(HELP|help|\\?)$","response":"Available commands: ..."}
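Because the `text` field is an ordinary regex string, patterns can be verified outside MockForge before they go into a scenario file. A quick Node check (note that JavaScript's `RegExp` does not support inline `(?i)` flags, so case-insensitive patterns need the `i` flag there; also remember JSONL doubles backslashes, so `"^NAME (.+)$"` in a file arrives as the regex `^NAME (.+)$` at runtime):

```javascript
// Test an input pattern against a sample message; returns the full match
// plus capture groups, or null when the pattern does not match.
function matchRule(pattern, message) {
  const m = message.match(new RegExp(pattern));
  return m ? Array.from(m) : null;
}
```

Keeping a small script of such checks alongside the scenario file makes pattern regressions visible before a full end-to-end run.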
Response Templates
Responses support the full MockForge template system:
{"dir":"in","text":".*","response":"Message received at {{now}}: {{request.ws.message}} (length: {{request.ws.message.length}})"}
Conditions
Use template conditions to control when rules apply:
{"dir":"in","text":".*","condition":"{{request.ws.state.authenticated}}","response":"Welcome back!"}
{"dir":"in","text":".*","condition":"{{!request.ws.state.authenticated}}","response":"Please authenticate first."}
State Updates
Modify connection state based on interactions:
// Set simple state
{"dir":"in","text":"START","response":"Starting...","state":"active"}
// Update complex state
{"dir":"in","text":"SCORE","response":"Current score: {{request.ws.state.score}}","state":"playing","score":"{{request.ws.state.score + 10}}"}
Advanced Features
Multi-Message Conversations
// Step 1: Greeting
{"ts":0,"dir":"out","text":"Hello! What's your favorite color?"}
{"ts":0,"dir":"in","text":".+","response":"{{request.ws.message}} is a great choice! What's your favorite food?","state":"asked_color","color":"{{request.ws.message}}","next":"food"}
// Step 2: Follow-up
{"ts":0,"dir":"out","text":"Based on your preferences, I recommend: ...","condition":"{{request.ws.state.next === 'complete'}}"}
{"ts":0,"dir":"in","text":".+","condition":"{{request.ws.state.next === 'food'}}","response":"Perfect! You like {{request.ws.state.color}} and {{request.ws.message}}. Here's a recommendation...","state":"complete"}
Error Handling
{"ts":0,"dir":"out","text":"Enter a command:"}
{"ts":0,"dir":"in","text":"","response":"Empty input not allowed. Try again."}
{"ts":0,"dir":"in","text":"^.{100,}$","response":"Input too long (max 99 characters). Please shorten."}
{"ts":0,"dir":"in","text":"^INVALID.*","response":"Unknown command. Type HELP for available commands."}
{"ts":0,"dir":"in","text":".*","response":"Processing: {{request.ws.message}}"}
Rate Limiting
{"ts":0,"dir":"in","text":".*","condition":"{{request.ws.state.messageCount < 10}}","response":"Message {{request.ws.state.messageCount + 1}}: {{request.ws.message}}","messageCount":"{{request.ws.state.messageCount + 1}}"}
{"ts":0,"dir":"in","text":".*","condition":"{{request.ws.state.messageCount >= 10}}","response":"Rate limit exceeded. Please wait."}
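The counter logic behind those two rules can be sketched in isolation (the threshold of 10 is the one assumed above; `rateLimited` is an illustrative helper, not MockForge's implementation):

```javascript
// Rate-limit check mirroring the two rules above: below the threshold the
// message is echoed with its sequence number; at or above it, it is rejected.
function rateLimited(state, message, limit = 10) {
  const count = state.messageCount || 0;
  if (count < limit) {
    return {
      response: `Message ${count + 1}: ${message}`,
      state: { ...state, messageCount: count + 1 },
    };
  }
  return { response: 'Rate limit exceeded. Please wait.', state };
}
```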
Session Management
// Initialize session
{"ts":0,"dir":"out","text":"Session started: {{uuid}}","sessionId":"{{uuid}}","startTime":"{{now}}","messageCount":0}
// Track activity
{"ts":0,"dir":"in","text":".*","response":"Received","messageCount":"{{request.ws.state.messageCount + 1}}","lastActivity":"{{now}}","condition":"{{request.ws.state.active}}"}
Template Functions for Interactive Mode
Message Analysis
// Message properties
{"dir":"in","text":".*","response":"Length: {{request.ws.message.length}}, Uppercase: {{request.ws.message.toUpperCase()}}"}
State Queries
// Check state existence
{"condition":"{{request.ws.state.userId}}","response":"Logged in as: {{request.ws.state.userId}}"}
{"condition":"{{!request.ws.state.userId}}","response":"Please log in first."}
// State comparisons
{"condition":"{{request.ws.state.score > 100}}","response":"High score achieved!"}
{"condition":"{{request.ws.state.level === 'expert'}}","response":"Expert mode enabled."}
Time-based Logic
// Session timeout
{"condition":"{{request.ws.state.lastActivity && (now - request.ws.state.lastActivity) > 300000}}","response":"Session expired. Please reconnect.","close":true}
// Time-based greetings
{"response":"{{#if (gte (now.getHours()) 18)}}Good evening!{{else if (gte (now.getHours()) 12)}}Good afternoon!{{else}}Good morning!{{/if}}"}
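The greeting template above reduces to a simple threshold check on the hour of day, which is worth testing on its own since off-by-one boundaries (noon, 6 PM) are easy to get wrong:

```javascript
// Pick a salutation from the hour of day (0-23), matching the template's
// thresholds: >= 18 evening, >= 12 afternoon, otherwise morning.
function greetingFor(hour) {
  if (hour >= 18) return 'Good evening!';
  if (hour >= 12) return 'Good afternoon!';
  return 'Good morning!';
}
```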
Creating Interactive Scenarios
From Scratch
# Create a new interactive scenario
cat > interactive-chat.jsonl << 'EOF'
{"ts":0,"dir":"out","text":"ChatBot: Hello! How can I help you today?"}
{"ts":0,"dir":"in","text":"(?i).*help.*","response":"ChatBot: I can answer questions, tell jokes, or just chat. What would you like?"}
{"ts":0,"dir":"in","text":"(?i).*joke.*","response":"ChatBot: Why did the computer go to the doctor? It had a virus! 😂"}
{"ts":0,"dir":"in","text":"(?i).*bye.*","response":"ChatBot: Goodbye! Have a great day! 👋","close":true}
{"ts":0,"dir":"in","text":".*","response":"ChatBot: I'm not sure how to respond to that. Try asking for help!"}
EOF
From Existing Logs
#!/bin/bash
# convert-logs-to-interactive.sh
# Extract conversation patterns from logs
grep "USER:" chat.log | sed 's/.*USER: //' | sort | uniq > user_patterns.txt
grep "BOT:" chat.log | sed 's/.*BOT: //' | sort | uniq > bot_responses.txt
# Generate interactive rules
paste user_patterns.txt bot_responses.txt | while IFS=$'\t' read -r user bot; do
  echo "{\"dir\":\"in\",\"text\":\"$(echo "$user" | sed 's/[^a-zA-Z0-9]/\\&/g')\",\"response\":\"$bot\"}"
done > interactive-from-logs.jsonl
Testing Interactive Scenarios
#!/bin/bash
# test-interactive.sh
echo "Testing interactive WebSocket scenario..."
# Start MockForge with interactive file
mockforge serve --ws-replay-file interactive-test.jsonl &
SERVER_PID=$!
sleep 2
# Test conversation flow
node -e "
const WebSocket = require('ws');
const ws = new WebSocket('ws://localhost:3001/ws');

const conversation = [
  'Hello',
  'Tell me a joke',
  'What can you do?',
  'Goodbye'
];
let step = 0;

ws.on('open', () => {
  console.log('Connected, starting conversation...');
  ws.send(conversation[step++]);
});

ws.on('message', (data) => {
  const response = data.toString();
  console.log('Bot:', response);
  if (step < conversation.length) {
    setTimeout(() => {
      ws.send(conversation[step++]);
    }, 1000);
  } else {
    ws.close();
  }
});

ws.on('close', () => {
  console.log('Conversation complete');
  process.exit(0);
});

ws.on('error', (err) => {
  console.error('Error:', err);
  process.exit(1);
});
"
# Cleanup
kill $SERVER_PID
Best Practices
Design Principles
- Clear Conversation Flow: Design conversations with clear paths and expectations
- Graceful Error Handling: Provide helpful responses for unexpected input
- State Consistency: Keep state updates predictable and logical
- Performance Awareness: Avoid complex regex or template processing
Pattern Guidelines
- Specific to General: Order patterns from most specific to most general
- Anchored Regex: Use ^ and $ to avoid partial matches
- Case Handling: Consider case sensitivity in user input
- Input Validation: Validate and sanitize user input
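The "specific to general" guideline follows directly from first-match-wins dispatch, and the effect of rule order is easy to demonstrate (illustrative helper, assuming first-match semantics as described in this chapter):

```javascript
// Return the first pattern in the list that matches the message.
// Rule order decides behavior: a leading ".*" catch-all shadows
// every more specific pattern after it.
function firstMatch(patterns, message) {
  for (const p of patterns) {
    if (new RegExp(p).test(message)) return p;
  }
  return null;
}
```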
State Management
- Minimal State: Store only necessary information in connection state
- State Validation: Verify state consistency across interactions
- State Cleanup: Clear state when conversations end
- State Persistence: Consider state requirements for reconnection scenarios
Debugging Interactive Scenarios
- Verbose Logging: Enable detailed WebSocket logging
- State Inspection: Log state changes during conversations
- Pattern Testing: Test regex patterns independently
- Flow Tracing: Track conversation paths through state changes
Common Patterns
Customer Support Chat
{"ts":0,"dir":"out","text":"Welcome to support! How can I help you? (Type your question or 'menu' for options)"}
{"ts":0,"dir":"in","text":"(?i)menu","response":"Options: 1) Password reset 2) Billing 3) Technical issue 4) Other","state":"menu"}
{"ts":0,"dir":"in","text":"(?i).*password.*","response":"I'll help you reset your password. What's your email address?","state":"password_reset","issue":"password"}
{"ts":0,"dir":"in","text":"(?i).*billing.*","response":"For billing questions, please visit our billing portal at billing.example.com","state":"billing"}
{"ts":0,"dir":"in","text":".*","response":"Thanks for your question: '{{request.ws.message}}'. A support agent will respond shortly. Your ticket ID is: {{uuid}}"}
E-commerce Assistant
{"ts":0,"dir":"out","text":"Welcome to our store! What are you looking for?","state":"browsing"}
{"ts":0,"dir":"in","text":"(?i).*shirt.*","response":"We have various shirts: casual, formal, graphic. Which style interests you?","state":"shirt_selection","category":"shirts"}
{"ts":0,"dir":"in","text":"(?i).*size.*","response":"Available sizes: S, M, L, XL. Which size would you like?","state":"size_selection","condition":"{{request.ws.state.category}}"}
{"ts":0,"dir":"in","text":"(?i)(S|M|L|XL)","condition":"{{request.ws.state.size_selection}}","response":"Great! Adding {{request.ws.state.category}} in size {{request.ws.message.toUpperCase()}} to cart. Would you like to checkout or continue shopping?","state":"checkout_ready"}
Game Server
{"ts":0,"dir":"out","text":"Welcome to the game server! Choose your character: WARRIOR, MAGE, ROGUE","state":"character_select"}
{"ts":0,"dir":"in","text":"(?i)^(warrior|mage|rogue)$","response":"Excellent choice! You selected {{request.ws.message.toUpperCase()}}. Your adventure begins now...","state":"playing","character":"{{request.ws.message.toLowerCase()}}","health":100,"level":1}
{"ts":0,"dir":"in","text":"(?i)stats","condition":"{{request.ws.state.playing}}","response":"Character: {{request.ws.state.character}}, Level: {{request.ws.state.level}}, Health: {{request.ws.state.health}}"}
{"ts":0,"dir":"in","text":"(?i)fight","condition":"{{request.ws.state.playing}}","response":"You encounter a monster! Roll for attack... {{randInt 1 20}}! {{#if (gte (randInt 1 20) 10)}}Victory!{{else}}Defeat!{{/if}}"}
Integration Examples
With Testing Frameworks
// test-interactive.js
const WebSocket = require('ws');
class InteractiveWebSocketTester {
  constructor(url) {
    this.url = url;
    this.ws = null;
  }

  async connect() {
    return new Promise((resolve, reject) => {
      this.ws = new WebSocket(this.url);
      this.ws.on('open', () => resolve());
      this.ws.on('error', reject);
    });
  }

  async sendAndExpect(message, expectedResponse) {
    return new Promise((resolve, reject) => {
      const timeout = setTimeout(() => reject(new Error('Timeout')), 5000);
      this.ws.send(message);
      this.ws.once('message', (data) => {
        clearTimeout(timeout);
        const response = data.toString();
        if (response === expectedResponse) {
          resolve(response);
        } else {
          reject(new Error(`Expected "${expectedResponse}", got "${response}"`));
        }
      });
    });
  }

  close() {
    if (this.ws) this.ws.close();
  }
}
module.exports = InteractiveWebSocketTester;
Load Testing Interactive Scenarios
#!/bin/bash
# load-test-interactive.sh
CONCURRENT_USERS=50
DURATION=300
echo "Load testing interactive WebSocket with $CONCURRENT_USERS concurrent users for ${DURATION}s"
# Start MockForge
mockforge serve --ws-replay-file interactive-load-test.jsonl &
SERVER_PID=$!
sleep 2
# Run load test
node load-test-interactive.js $CONCURRENT_USERS $DURATION
# Generate report
echo "Generating performance report..."
node analyze-results.js
# Cleanup
kill $SERVER_PID
Interactive mode transforms MockForge from a simple message player into an intelligent conversation partner, enabling sophisticated testing scenarios that adapt to client behavior and maintain complex interaction state.
Admin UI
MockForge provides a comprehensive web-based Admin UI for managing and monitoring your mock servers. The Admin UI offers real-time insights, configuration management, and debugging tools to make mock server management effortless.
Accessing the Admin UI
The Admin UI is automatically available when you start MockForge with the --admin flag:
# Start with Admin UI enabled
mockforge serve --spec api-spec.json --admin --admin-port 8080 --http-port 3000
The Admin UI will be available at: http://localhost:8080 (default port)
Configuration Options
# Custom admin port
mockforge serve --admin --admin-port 9090
# Disable admin UI (default)
mockforge serve --spec api-spec.json --no-admin
Environment Variables
# Enable/disable admin UI
MOCKFORGE_ADMIN_ENABLED=true
# Set admin UI port
MOCKFORGE_ADMIN_PORT=8080
# Set admin UI bind address (default: 0.0.0.0)
MOCKFORGE_ADMIN_BIND=127.0.0.1
Interface Overview
The Admin UI features a clean, modern interface with the following main sections:
Navigation Tabs
- Dashboard - System overview and real-time metrics
- Routes - API endpoint management and testing
- Fixtures - Recorded request/response management
- Logs - Request/response logging and debugging
- Configuration - Runtime configuration management
- Metrics - Performance monitoring and analytics
- Files - File system access for configuration files
Status Indicators
The header displays real-time system status:
- ● Healthy - All systems operational
- ● Warning - Minor issues detected
- ● Error - Critical issues requiring attention
Dashboard
The Dashboard provides a comprehensive overview of your MockForge instance:
System Status
- Uptime - How long the server has been running
- Memory Usage - Current memory consumption
- CPU Usage - Current CPU utilization
- Active Connections - Number of open connections
Recent Activity
- Latest Requests - Most recent API calls with timestamps
- Response Times - Average response latency
- Error Rate - Percentage of failed requests
Quick Actions
- Restart Server - Gracefully restart the mock server
- Clear Logs - Remove all accumulated logs
- Export Configuration - Download current config as YAML
Routes Management
The Routes tab provides detailed API endpoint management:
Route Listing
- View all configured API routes
- Filter by HTTP method, path pattern, or response status
- Sort by request count, response time, or error rate
Route Details
For each route, view:
- Request Count - Total requests served
- Average Response Time - Performance metrics
- Success/Error Rates - Reliability statistics
- Recent Requests - Last 10 requests with details
Route Testing
- Interactive Tester - Send test requests directly from the UI
- Request Builder - Construct complex requests with headers, query params, and body
- Response Preview - See exactly what would be returned
Route Overrides
- Temporary Overrides - Modify responses without changing configuration
- Conditional Responses - Set up A/B testing scenarios
- Failure Injection - Simulate errors for testing resilience
Fixtures Management
The Fixtures tab manages recorded request/response pairs:
Fixture Browser
- Search and Filter - Find fixtures by endpoint, method, or content
- Categorization - Group fixtures by API version or feature
- Tagging - Add custom tags for organization
Fixture Operations
- View Details - Inspect request/response pairs in detail
- Edit Responses - Modify recorded responses
- Export/Import - Backup and restore fixture collections
- Bulk Operations - Apply changes to multiple fixtures
Recording Controls
- Start/Stop Recording - Control when new fixtures are captured
- Recording Filters - Only record specific endpoints or request types
- Storage Management - Configure fixture retention and cleanup
Logging and Debugging
The Logs tab provides comprehensive request/response monitoring:
Log Viewer
- Real-time Updates - See requests as they happen
- Filtering Options - Filter by endpoint, status code, or time range
- Search Functionality - Find specific requests or responses
Log Details
For each log entry:
- Full Request - Headers, body, and metadata
- Full Response - Status, headers, and body
- Timing Information - Request/response duration
- Error Details - Stack traces and error context
Log Management
- Export Logs - Download logs in various formats
- Log Rotation - Automatic cleanup of old logs
- Log Levels - Adjust verbosity for debugging
Configuration Management
The Configuration tab allows runtime configuration changes:
Current Configuration
- View Active Config - See all current settings
- Configuration Sources - Understand precedence (CLI > Env > File)
- Validation Status - Check configuration validity
Configuration Editor
- Live Editing - Modify settings without restart
- Validation - Real-time syntax and semantic validation
- Change History - Track configuration modifications
Configuration Templates
- Save/Load Templates - Reuse common configurations
- Environment Profiles - Different configs for dev/staging/prod
- Backup/Restore - Version control for configurations
Metrics and Analytics
The Metrics tab provides detailed performance analytics:
Performance Metrics
- Response Time Distribution - P50, P95, P99 latencies
- Throughput - Requests per second over time
- Error Rate Trends - Track reliability over time
Endpoint Analytics
- Top Endpoints - Most frequently called routes
- Slowest Endpoints - Performance bottlenecks
- Error-prone Endpoints - Routes with high failure rates
System Metrics
- Resource Usage - CPU, memory, disk over time
- Connection Pool - Database connection utilization
- Cache Hit Rates - Effectiveness of response caching
File System Access
The Files tab provides access to configuration and data files:
File Browser
- Navigate Directory Structure - Browse the file system
- File Type Detection - Syntax highlighting for different file types
- Quick Access - Bookmarks for frequently used directories
File Editor
- In-browser Editing - Edit configuration files directly
- Syntax Validation - Catch errors before saving
- Version Control Integration - Commit changes with Git
File Operations
- Upload/Download - Transfer files to/from the server
- Backup Operations - Create and restore backups
- Permission Management - Control file access
Advanced Features
Auto-Refresh
- Configurable Intervals - Set refresh rates from 1 second to 5 minutes
- Smart Updates - Only refresh when data has changed
- Background Updates - Continue working while data refreshes
Keyboard Shortcuts
- Navigation - Tab switching with keyboard shortcuts
- Actions - Quick access to common operations
- Search - Global search across all tabs
Themes and Customization
- Light/Dark Mode - Choose your preferred theme
- Layout Options - Customize dashboard layout
- Color Schemes - Personalize the interface
Security Considerations
Access Control
- Authentication - Optional login requirements
- Authorization - Role-based access control
- IP Restrictions - Limit access to specific networks
Data Protection
- Sensitive Data Masking - Hide passwords and tokens in logs
- Encryption - Secure data transmission
- Audit Logging - Track all administrative actions
Troubleshooting
Common Issues
Admin UI not loading: Check that the --admin flag is used and port 8080 is accessible
Slow performance: Reduce auto-refresh interval or disable real-time updates
Missing data: Ensure proper permissions for file system access
Configuration not applying: Some changes may require server restart
Debug Tools
- Network Inspector - Monitor all HTTP requests
- Console Logs - JavaScript debugging information
- Performance Profiler - Identify UI performance bottlenecks
Getting Help
- Built-in Help - Press ? for keyboard shortcuts
- Tooltips - Hover over UI elements for explanations
- Context Help - Right-click for contextual help menus
The Admin UI transforms MockForge from a simple mock server into a powerful development and testing platform, providing the visibility and control needed for professional API mocking workflows.
Environment Variables
MockForge supports extensive configuration through environment variables. This page documents all available environment variables, their purposes, and usage examples.
Core Functionality
Server Control
- MOCKFORGE_LATENCY_ENABLED=true|false (default: true)
  - Enable/disable response latency simulation
  - When disabled, responses are immediate
- MOCKFORGE_FAILURES_ENABLED=true|false (default: false)
  - Enable/disable failure injection
  - When enabled, can simulate HTTP errors and timeouts
- MOCKFORGE_LOG_LEVEL=debug|info|warn|error (default: info)
  - Set the logging verbosity level
  - Available: debug, info, warn, error
Recording and Replay
- MOCKFORGE_RECORD_ENABLED=true|false (default: false)
  - Enable recording of HTTP requests as fixtures
  - Recorded fixtures can be replayed later
- MOCKFORGE_REPLAY_ENABLED=true|false (default: false)
  - Enable replay of recorded fixtures
  - When enabled, serves recorded responses instead of generating new ones
- MOCKFORGE_PROXY_ENABLED=true|false (default: false)
  - Enable proxy mode for forwarding requests
  - Useful for testing against real APIs
HTTP Server Configuration
Server Settings
- MOCKFORGE_HTTP_PORT=3000 (default: 3000)
  - Port for the HTTP server to listen on
- MOCKFORGE_HTTP_HOST=127.0.0.1 (default: 0.0.0.0)
  - Host address for the HTTP server to bind to
- MOCKFORGE_CORS_ENABLED=true|false (default: true)
  - Enable/disable CORS headers in responses
- MOCKFORGE_REQUEST_TIMEOUT_SECS=30 (default: 30)
  - Timeout for HTTP requests in seconds
OpenAPI Integration
MOCKFORGE_HTTP_OPENAPI_SPEC=path/to/spec.json
- Path to OpenAPI specification file
- Enables automatic endpoint generation from OpenAPI spec
Validation and Templating
- MOCKFORGE_REQUEST_VALIDATION=enforce|warn|off (default: enforce)
  - Level of request validation
  - enforce: Reject invalid requests with an error
  - warn: Log warnings but allow requests
  - off: Skip validation entirely
- MOCKFORGE_RESPONSE_VALIDATION=true|false (default: false)
  - Enable validation of generated responses
  - Useful for ensuring response format compliance
- MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true|false (default: false)
  - Enable template expansion in responses
  - Allows use of {{uuid}}, {{now}}, etc. in responses
- MOCKFORGE_AGGREGATE_ERRORS=true|false (default: true)
  - Aggregate multiple validation errors into a single response
  - When enabled, returns all validation errors at once
- MOCKFORGE_VALIDATION_STATUS=400|422 (default: 400)
  - HTTP status code for validation errors
  - 400: Bad Request (general)
  - 422: Unprocessable Entity (validation-specific)
WebSocket Server Configuration
Server Settings
- MOCKFORGE_WS_PORT=3001 (default: 3001)
  - Port for the WebSocket server to listen on
- MOCKFORGE_WS_HOST=127.0.0.1 (default: 0.0.0.0)
  - Host address for the WebSocket server to bind to
- MOCKFORGE_WS_CONNECTION_TIMEOUT_SECS=300 (default: 300)
  - WebSocket connection timeout in seconds
Replay Configuration
MOCKFORGE_WS_REPLAY_FILE=path/to/replay.jsonl
- Path to WebSocket replay file
- Enables scripted WebSocket message sequences
gRPC Server Configuration
Server Settings
- MOCKFORGE_GRPC_PORT=50051 (default: 50051)
  - Port for the gRPC server to listen on
- MOCKFORGE_GRPC_HOST=127.0.0.1 (default: 0.0.0.0)
  - Host address for the gRPC server to bind to
Admin UI Configuration
Server Settings
- MOCKFORGE_ADMIN_ENABLED=true|false (default: false)
  - Enable/disable the Admin UI
  - When enabled, provides a web interface for management
- MOCKFORGE_ADMIN_PORT=8080 (default: 8080)
  - Port for the Admin UI server to listen on
- MOCKFORGE_ADMIN_HOST=127.0.0.1 (default: 127.0.0.1)
  - Host address for the Admin UI server to bind to
UI Configuration
- MOCKFORGE_ADMIN_MOUNT_PATH=/admin (default: none)
  - Mount path for the embedded Admin UI
  - When set, the Admin UI is served under the HTTP server
- MOCKFORGE_ADMIN_API_ENABLED=true|false (default: true)
  - Enable/disable Admin UI API endpoints
  - Controls whether /__mockforge/* endpoints are available
Data Generation Configuration
Faker Control
- MOCKFORGE_RAG_ENABLED=true|false (default: false)
  - Enable Retrieval-Augmented Generation for data
  - Requires additional setup for LLM integration
- MOCKFORGE_FAKE_TOKENS=true|false (default: true)
  - Enable/disable faker token expansion
  - Controls whether {{faker.email}} etc. work
Fixtures and Testing
Fixtures Configuration
- MOCKFORGE_FIXTURES_DIR=path/to/fixtures (default: ./fixtures)
  - Directory where fixtures are stored
  - Used for recording and replaying HTTP requests
- MOCKFORGE_RECORD_GET_ONLY=true|false (default: false)
  - When recording, only record GET requests
  - Reduces fixture file size for read-only APIs
Configuration Files
Configuration Loading
MOCKFORGE_CONFIG_FILE=path/to/config.yaml
- Path to YAML configuration file
- Alternative to environment variables
Usage Examples
Basic HTTP Server with OpenAPI
export MOCKFORGE_HTTP_OPENAPI_SPEC=examples/openapi-demo.json
export MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
export MOCKFORGE_ADMIN_ENABLED=true
cargo run -p mockforge-cli -- serve --http-port 3000 --admin-port 8080
Full WebSocket Support
export MOCKFORGE_WS_REPLAY_FILE=examples/ws-demo.jsonl
export MOCKFORGE_WS_PORT=3001
export MOCKFORGE_HTTP_OPENAPI_SPEC=examples/openapi-demo.json
export MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
cargo run -p mockforge-cli -- serve --admin
Development Setup
export MOCKFORGE_LOG_LEVEL=debug
export MOCKFORGE_LATENCY_ENABLED=false
export MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
export MOCKFORGE_ADMIN_ENABLED=true
export MOCKFORGE_HTTP_OPENAPI_SPEC=examples/openapi-demo.json
cargo run -p mockforge-cli -- serve
Production Setup
export MOCKFORGE_LOG_LEVEL=warn
export MOCKFORGE_LATENCY_ENABLED=true
export MOCKFORGE_FAILURES_ENABLED=false
export MOCKFORGE_REQUEST_VALIDATION=enforce
export MOCKFORGE_ADMIN_ENABLED=false
export MOCKFORGE_HTTP_OPENAPI_SPEC=path/to/production-spec.json
cargo run -p mockforge-cli -- serve --http-port 80
Environment Variable Priority
Environment variables override configuration file settings. CLI flags take precedence over both. The priority order is:
- CLI flags (highest priority)
- Environment variables
- Configuration file settings
- Default values (lowest priority)
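The precedence rules above amount to a simple fallback chain, sketched here as an illustrative helper (not MockForge internals):

```javascript
// Resolve one setting with CLI > environment > config file > default.
// An undefined value at any level falls through to the next one.
function resolveSetting(cliValue, envValue, fileValue, defaultValue) {
  if (cliValue !== undefined) return cliValue;
  if (envValue !== undefined) return envValue;
  if (fileValue !== undefined) return fileValue;
  return defaultValue;
}
```

For example, with `--admin-port 9090` on the CLI and `MOCKFORGE_ADMIN_PORT=8081` in the environment, the CLI value wins.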
Security Considerations
- Be careful with MOCKFORGE_ADMIN_ENABLED=true in production
- Consider setting restrictive host bindings (127.0.0.1) for internal use
- Use MOCKFORGE_FAKE_TOKENS=false for deterministic testing
- Review MOCKFORGE_CORS_ENABLED settings for cross-origin requests
Troubleshooting
Common Issues
- Environment variables not taking effect
  - Check variable names for typos
  - Ensure variables are exported before running the command
  - Use env | grep MOCKFORGE to verify variables are set
- Port conflicts
  - Use different ports via MOCKFORGE_HTTP_PORT, MOCKFORGE_WS_PORT, etc.
  - Check which processes are using ports with netstat -tlnp
- OpenAPI spec not loading
  - Verify the file path in MOCKFORGE_HTTP_OPENAPI_SPEC
  - Ensure JSON/YAML syntax is valid
  - Check file permissions
- Template expansion not working
  - Set MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
  - Verify token syntax (e.g., {{uuid}}, not {uuid})
For more detailed configuration options, see the Configuration Files documentation.
Configuration Files
MockForge supports comprehensive configuration through YAML files as an alternative to environment variables. This page documents the configuration file format, options, and usage.
Configuration File Location
MockForge looks for configuration files in the following order:
- Path specified by the --config CLI flag
- Path specified by the MOCKFORGE_CONFIG_FILE environment variable
- Default location: ./mockforge.yaml or ./mockforge.yml
- No configuration file (uses defaults)
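That lookup order can be expressed as a short resolution function (illustrative sketch, assuming the order listed above; not MockForge's actual loader):

```javascript
// Config file lookup: --config flag first, then MOCKFORGE_CONFIG_FILE,
// then default filenames in the working directory, else no file.
function findConfig(cliPath, env, existingFiles) {
  if (cliPath) return cliPath;
  if (env.MOCKFORGE_CONFIG_FILE) return env.MOCKFORGE_CONFIG_FILE;
  for (const name of ['./mockforge.yaml', './mockforge.yml']) {
    if (existingFiles.includes(name)) return name;
  }
  return null; // fall back to built-in defaults
}
```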
Basic Configuration Structure
# MockForge Configuration Example
# This file demonstrates all available configuration options

# HTTP server configuration
http:
  port: 3000
  host: "0.0.0.0"
  openapi_spec: "examples/openapi-demo.json"
  cors_enabled: true
  request_timeout_secs: 30
  request_validation: "enforce"
  aggregate_validation_errors: true
  validate_responses: false
  response_template_expand: true
  skip_admin_validation: true

# WebSocket server configuration
websocket:
  port: 3001
  host: "0.0.0.0"
  replay_file: "examples/ws-demo.jsonl"
  connection_timeout_secs: 300

# gRPC server configuration
grpc:
  port: 50051
  host: "0.0.0.0"

# Admin UI configuration
admin:
  enabled: true
  port: 8080
  host: "127.0.0.1"
  mount_path: null
  api_enabled: true

# Core MockForge configuration
core:
  latency_enabled: true
  failures_enabled: false

# Logging configuration
logging:
  level: "info"
  json_format: false
  file_path: null
  max_file_size_mb: 10
  max_files: 5

# Data generation configuration
data:
  default_rows: 100
  default_format: "json"
  locale: "en"
HTTP Server Configuration
Basic Settings
http:
  port: 3000                # Server port
  host: "0.0.0.0"           # Bind address (0.0.0.0 for all interfaces)
  cors_enabled: true        # Enable CORS headers
  request_timeout_secs: 30  # Request timeout in seconds
OpenAPI Integration
http:
  openapi_spec: "path/to/spec.json"  # Path to OpenAPI specification
  # Alternative: use a URL
  # openapi_spec: "https://example.com/api-spec.yaml"
Validation and Response Handling
http:
  request_validation: "enforce"      # off|warn|enforce
  aggregate_validation_errors: true  # Combine multiple errors
  validate_responses: false          # Validate generated responses
  response_template_expand: true     # Enable {{uuid}}, {{now}}, etc.
  skip_admin_validation: true        # Skip validation for admin endpoints
Validation Overrides
http:
  validation_overrides:
    "POST /users/{id}": "warn"     # Override validation level per endpoint
    "GET /internal/health": "off"  # Skip validation for specific endpoints
WebSocket Server Configuration
websocket:
  port: 3001                           # Server port
  host: "0.0.0.0"                      # Bind address
  replay_file: "path/to/replay.jsonl"  # WebSocket replay file
  connection_timeout_secs: 300         # Connection timeout in seconds
gRPC Server Configuration
grpc:
  port: 50051       # Server port
  host: "0.0.0.0"   # Bind address
  proto_dir: null   # Directory containing .proto files
  tls: null         # TLS configuration (optional)
Admin UI Configuration
Standalone Mode (Default)
admin:
  enabled: true
  port: 8080
  host: "127.0.0.1"
  api_enabled: true
Embedded Mode
admin:
  enabled: true
  mount_path: "/admin"   # Mount under HTTP server
  api_enabled: true      # Enable API endpoints
  # Note: port/host ignored when mount_path is set
Core Configuration
Latency Simulation
core:
  latency_enabled: true
  default_latency:
    base_ms: 50
    jitter_ms: 20
    distribution: "fixed"   # fixed, normal, or pareto
    # For normal distribution
    # std_dev_ms: 10.0
    # For pareto distribution
    # pareto_shape: 2.0
    min_ms: 10              # Minimum latency
    max_ms: 5000            # Maximum latency (optional)
  # Per-operation overrides
  tag_overrides:
    auth: 100
    payments: 200
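As a rough illustration of what base_ms: 50 with jitter_ms: 20 means, each simulated delay lands near the base value plus some jitter. The uniform-jitter sampling below is an assumption for illustration only; MockForge's exact sampling may differ:

```shell
# Illustrative only: sample five delays as base + uniform jitter in [0, jitter_ms]
awk 'BEGIN {
  srand(7)
  base = 50; jitter = 20
  for (i = 0; i < 5; i++) printf "%.0f ms\n", base + rand() * jitter
}'
```

Every sampled value falls between 50 ms and 70 ms under this reading; min_ms and max_ms then clamp the final result.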
Failure Injection
core:
  failures_enabled: true
  failure_config:
    global_error_rate: 0.05   # 5% global error rate
    # Default status codes for failures
    default_status_codes: [500, 502, 503, 504]
    # Per-tag error rates and status codes
    tag_configs:
      auth:
        error_rate: 0.1       # 10% error rate for auth operations
        status_codes: [401, 403]
        error_message: "Authentication failed"
      payments:
        error_rate: 0.02      # 2% error rate for payments
        status_codes: [402, 503]
        error_message: "Payment processing failed"
    # Tag filtering
    include_tags: []                      # Empty means all tags included
    exclude_tags: ["health", "metrics"]   # Exclude these tags
Proxy Configuration
core:
  proxy:
    upstream_url: "http://api.example.com"
    timeout_seconds: 30
Logging Configuration
logging:
  level: "info"                     # debug|info|warn|error
  json_format: false                # Use JSON format for logs
  file_path: "logs/mockforge.log"   # Optional log file
  max_file_size_mb: 10              # Rotate when file reaches this size
  max_files: 5                      # Keep this many rotated log files
Data Generation Configuration
data:
  default_rows: 100        # Default number of rows to generate
  default_format: "json"   # Default output format
  locale: "en"             # Locale for generated data
  # Custom faker templates
  templates:
    custom_user:
      name: "{{faker.name}}"
      email: "{{faker.email}}"
      department: "{{faker.word}}"
  # RAG (Retrieval-Augmented Generation) configuration
  rag:
    enabled: false
    api_endpoint: null
    api_key: null
    model: null
    context_window: 4000
Advanced Configuration
Request/Response Overrides
# YAML patch overrides for requests/responses
overrides:
  - targets: ["operation:getUser"]   # Target specific operations
    patch:
      - op: add
        path: /metadata/requestId
        value: "{{uuid}}"
      - op: replace
        path: /user/createdAt
        value: "{{now}}"
      - op: add
        path: /user/score
        value: "{{rand.float}}"
  - targets: ["tag:Payments"]        # Target by tags
    patch:
      - op: replace
        path: /payment/status
        value: "FAILED"
Latency Profiles
# External latency profiles file
latency_profiles: "config/latency.yaml"

# Example latency configuration:
# operation:getUser:
#   fixed_ms: 120
#   jitter_ms: 80
#   fail_p: 0.0
#
# tag:Payments:
#   fixed_ms: 200
#   jitter_ms: 300
#   fail_p: 0.05
#   fail_status: 503
Configuration Examples
Development Configuration
# Development setup with debugging and fast responses
http:
  port: 3000
  response_template_expand: true
  request_validation: "warn"

admin:
  enabled: true
  port: 8080

core:
  latency_enabled: false   # Disable latency for faster development

logging:
  level: "debug"
  json_format: false
Testing Configuration
# Testing setup with deterministic responses
http:
  port: 3000
  response_template_expand: false   # Disable random tokens for determinism

core:
  latency_enabled: false

data:
  rag:
    enabled: false   # Disable RAG for consistent test data
Production Configuration
# Production setup with monitoring and reliability
http:
  port: 80
  host: "0.0.0.0"
  request_validation: "enforce"
  cors_enabled: false

admin:
  enabled: false   # Disable admin UI in production

core:
  latency_enabled: true
  failures_enabled: false

logging:
  level: "warn"
  json_format: true
  file_path: "/var/log/mockforge.log"
Configuration File Validation
MockForge validates configuration files at startup. Common issues:
- Invalid YAML syntax - Check indentation and quotes
- Missing required fields - Some fields, such as request_timeout_secs, are required
- Invalid file paths - Ensure OpenAPI spec and replay files exist
- Port conflicts - Choose unique ports for each service
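A quick pre-flight check can catch missing-file issues before startup; the paths below are the example files used throughout this guide:

```shell
# Report any referenced files that are missing before starting the server
for f in demo-config.yaml examples/openapi-demo.json examples/ws-demo.jsonl; do
  [ -e "$f" ] || echo "missing: $f"
done
```

Combine this with a YAML linter (e.g. yamllint) to catch syntax errors as well.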
Configuration Precedence
Configuration values are resolved in this priority order:
- CLI flags (highest priority)
- Environment variables
- Configuration file
- Default values (lowest priority)
This allows you to override specific values without changing your configuration file.
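The resolution rule can be sketched with plain shell parameter expansion; the variable names here are illustrative stand-ins, not MockForge internals:

```shell
# CLI flag > environment variable > config file > default; first set value wins
default_port=3000
config_port=3100                      # as if read from config.yaml
env_port="${MOCKFORGE_HTTP_PORT:-}"   # environment variable, if exported
cli_port=3300                         # as if --http-port 3300 was passed

port="${cli_port:-${env_port:-${config_port:-$default_port}}}"
echo "effective port: $port"
```

Here the CLI value wins; unset cli_port and the environment variable (if any) takes over, then the config file, then the default.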
Hot Reloading
Configuration changes require a server restart to take effect. For development, you can use:
# Watch for changes and auto-restart
cargo watch -x "run -p mockforge-cli -- serve --config config.yaml"
For more information on environment variables, see the Environment Variables documentation.
Advanced Options
Building from Source
This guide covers building MockForge from source code, including prerequisites, build processes, and troubleshooting common build issues.
Prerequisites
Before building MockForge, ensure you have the required development tools installed.
System Requirements
- Rust: Version 1.70.0 or later
- Cargo: Included with Rust
- Git: For cloning the repository
- C/C++ Compiler: For native dependencies
Platform-Specific Requirements
Linux (Ubuntu/Debian)
# Install build essentials
sudo apt update
sudo apt install build-essential pkg-config libssl-dev
# Install Rust (if not already installed)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
macOS
# Install Xcode command line tools
xcode-select --install
# Install Homebrew (optional, for additional tools)
# /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
Windows
# Install Visual Studio Build Tools
# Download from: https://visualstudio.microsoft.com/visual-cpp-build-tools/
# Install Rust
# Download from: https://rustup.rs/
# Or use winget: winget install --id Rustlang.Rustup
Rust Setup Verification
# Verify Rust installation
rustc --version
cargo --version
# Update to latest stable
rustup update stable
Cloning the Repository
# Clone the repository
git clone https://github.com/SaaSy-Solutions/mockforge.git
cd mockforge
# Initialize submodules (if any)
git submodule update --init --recursive
Build Process
Basic Build
# Build all crates in debug mode (default)
cargo build
# Build in release mode for production
cargo build --release
# Build specific crate
cargo build -p mockforge-cli
Build Outputs
After building, binaries are available in:
# Debug builds
target/debug/mockforge-cli
# Release builds
target/release/mockforge-cli
Build Features
MockForge supports conditional compilation features:
# Build with all features enabled
cargo build --all-features
# Build with specific features
cargo build --features "grpc,websocket"
# List available features
cargo metadata --format-version 1 | jq '.packages[] | select(.name == "mockforge-cli") | .features'
Development Workflow
Development Builds
# Quick development builds
cargo build
# Run tests during development
cargo test
# Run specific tests
cargo test --package mockforge-core --lib
Watch Mode Development
# Install cargo-watch for automatic rebuilds
cargo install cargo-watch
# Watch for changes and rebuild
cargo watch -x build
# Watch and run tests
cargo watch -x test
# Watch and run specific binary
cargo watch -x "run --bin mockforge-cli -- --help"
IDE Setup
VS Code
Install recommended extensions:
- rust-lang.rust-analyzer
- ms-vscode.vscode-json
- redhat.vscode-yaml
IntelliJ/CLion
Install Rust plugin through marketplace.
Debugging
# Build with debug symbols
cargo build
# Run with debugger
rust-gdb target/debug/mockforge-cli
# Or use lldb on macOS
rust-lldb target/debug/mockforge-cli
Advanced Build Options
Cross-Compilation
# Install cross-compilation targets
rustup target add x86_64-unknown-linux-musl
rustup target add aarch64-unknown-linux-gnu
# Build for different architectures
cargo build --target x86_64-unknown-linux-musl
cargo build --target aarch64-unknown-linux-gnu
Custom Linker
# Use mold linker for faster linking (Linux)
sudo apt install mold
export RUSTFLAGS="-C link-arg=-fuse-ld=mold"
cargo build
Build Caching
# Use sccache for faster rebuilds
cargo install sccache
export RUSTC_WRAPPER=sccache
cargo build
Testing
Running Tests
# Run all tests
cargo test
# Run tests with output
cargo test -- --nocapture
# Run specific test
cargo test test_name
# Run tests for specific package
cargo test -p mockforge-core
# Run integration tests
cargo test --test integration
# Run with release optimizations
cargo test --release
Test Coverage
# Install cargo-tarpaulin
cargo install cargo-tarpaulin
# Generate coverage report
cargo tarpaulin --out Html
# Open coverage report
open tarpaulin-report.html
Benchmarking
# Run benchmarks
cargo bench
# Run specific benchmark
cargo bench benchmark_name
Code Quality
Linting
# Run clippy lints
cargo clippy
# Run with pedantic mode
cargo clippy -- -W clippy::pedantic
# Auto-fix some issues
cargo clippy --fix
Formatting
# Check code formatting
cargo fmt --check
# Auto-format code
cargo fmt
Security Auditing
# Install cargo-audit
cargo install cargo-audit
# Audit dependencies for security vulnerabilities
cargo audit
Documentation
Building Documentation
# Build API documentation
cargo doc
# Open documentation in browser
cargo doc --open
# Build documentation with private items
cargo doc --document-private-items
# Build for specific package
cargo doc -p mockforge-core
Building mdBook
# Install mdbook
cargo install mdbook
# Build the documentation
mdbook build
# Serve documentation locally
mdbook serve
Packaging and Distribution
Creating Releases
# Create a release build
cargo build --release
# Strip debug symbols (Linux/macOS)
strip target/release/mockforge-cli
# Create distribution archive
tar -czf mockforge-v0.1.0-x86_64-linux.tar.gz \
-C target/release mockforge-cli
# Create Debian package
cargo install cargo-deb
cargo deb
Docker Builds
# Build Docker image
docker build -t mockforge .
# Build with buildkit for faster builds
DOCKER_BUILDKIT=1 docker build -t mockforge .
# Multi-stage build for smaller images
docker build -f Dockerfile.multi-stage -t mockforge .
Troubleshooting Build Issues
Common Problems
Compilation Errors
Problem: error[E0432]: unresolved import
Solution: Check that dependencies are properly specified in Cargo.toml
# Update dependencies
cargo update
# Clean and rebuild
cargo clean
cargo build
Linker Errors
Problem: undefined reference to...
Solution: Install system dependencies
# Ubuntu/Debian
sudo apt install libssl-dev pkg-config
# macOS
brew install openssl pkg-config
Out of Memory
Problem: fatal error: Killed signal terminated program cc1
Solution: Increase available memory or reduce parallelism
# Reduce parallel jobs
cargo build --jobs 1
# Or set memory limits
export CARGO_BUILD_JOBS=2
Slow Builds
Solutions:
# Use incremental compilation
export CARGO_INCREMENTAL=1
# Use faster linker
export RUSTFLAGS="-C link-arg=-fuse-ld=mold"
# Use build cache
cargo install sccache
export RUSTC_WRAPPER=sccache
Platform-Specific Issues
Windows
# Install Windows SDK if missing
# Download from: https://developer.microsoft.com/en-us/windows/downloads/windows-sdk/
# Use different target for static linking
cargo build --target x86_64-pc-windows-msvc
macOS
# Install missing headers
open /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg
# Or reinstall command line tools
sudo rm -rf /Library/Developer/CommandLineTools
xcode-select --install
Linux
# Install additional development libraries
sudo apt install libclang-dev llvm-dev
# For cross-compilation
sudo apt install gcc-aarch64-linux-gnu
Network Issues
# Clear cargo cache
cargo clean
rm -rf ~/.cargo/registry/cache
rm -rf ~/.cargo/git/checkouts
# Use different registry
export CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse
Dependency Conflicts
# Update Cargo.lock
cargo update
# Resolve conflicts
cargo update -p package-name
# Use cargo-tree to visualize dependencies
cargo install cargo-tree
cargo tree
Performance Optimization
Release Builds
# Optimized release build
cargo build --release
# With Link-Time Optimization (LTO)
export RUSTFLAGS="-C opt-level=3 -C lto=fat -C codegen-units=1"
cargo build --release
Profile-Guided Optimization (PGO)
# Build with instrumentation
export RUSTFLAGS="-Cprofile-generate=/tmp/pgo-data"
cargo build --release
# Run instrumented binary with representative workload
./target/release/mockforge-cli serve --spec examples/openapi-demo.json &
sleep 10
curl -s http://localhost:3000/users > /dev/null
pkill mockforge-cli
# Build optimized version
export RUSTFLAGS="-Cprofile-use=/tmp/pgo-data"
cargo build --release
Contributing to the Build System
Adding New Dependencies
# Add to workspace Cargo.toml
[workspace.dependencies]
new-dependency = "1.0"
# Use in crate Cargo.toml
[dependencies]
new-dependency = { workspace = true }
Adding Build Scripts
// build.rs
fn main() {
    // Generate code or check dependencies
    println!("cargo:rerun-if-changed=proto/");
    tonic_build::compile_protos("proto/service.proto").unwrap();
}
Custom Build Profiles
# In Cargo.toml
[profile.release]
opt-level = 3
lto = true
codegen-units = 1
panic = "abort"
[profile.dev]
opt-level = 0
debug = true
overflow-checks = true
This comprehensive build guide ensures developers can successfully compile, test, and contribute to MockForge across different platforms and development environments.
Testing Guide
This guide covers MockForge’s comprehensive testing strategy, including unit tests, integration tests, end-to-end tests, and testing best practices.
Testing Overview
MockForge employs a multi-layered testing approach to ensure code quality and prevent regressions:
- Unit Tests: Individual functions and modules
- Integration Tests: Component interactions
- End-to-End Tests: Full system workflows
- Performance Tests: Load and performance validation
- Security Tests: Vulnerability and access control testing
Unit Testing
Running Unit Tests
# Run all unit tests
cargo test --lib
# Run tests for specific crate
cargo test -p mockforge-core
# Run specific test function
cargo test test_template_rendering
# Run tests matching pattern
cargo test template
# Run tests with output
cargo test -- --nocapture
Writing Unit Tests
Basic Test Structure
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_basic_functionality() {
        // Arrange
        let input = "test input";
        let expected = "expected output";

        // Act
        let result = process_input(input);

        // Assert
        assert_eq!(result, expected);
    }

    #[test]
    fn test_error_conditions() {
        // Test error cases
        let result = process_input("");
        assert!(result.is_err());
    }
}
Async Tests
#[cfg(test)]
mod async_tests {
    #[tokio::test]
    async fn test_async_operation() {
        let result = async_operation().await;
        assert!(result.is_ok());
    }

    #[tokio::test]
    async fn test_concurrent_operations() {
        // tokio::join! runs both futures concurrently on the same task
        let (result1, result2) = tokio::join!(
            async_operation(),
            another_async_operation()
        );
        assert!(result1.is_ok());
        assert!(result2.is_ok());
    }
}
Integration Testing
Component Integration Tests
#[cfg(test)]
mod integration_tests {
    use mockforge_core::config::MockForgeConfig;
    use mockforge_http::HttpServer;

    #[tokio::test]
    async fn test_http_server_integration() {
        // Start test server
        let config = test_config();
        let server = HttpServer::new(config);
        let addr = server.local_addr();
        tokio::spawn(async move {
            server.serve().await.unwrap();
        });

        // Wait for server to start
        tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;

        // Test HTTP request
        let client = reqwest::Client::new();
        let response = client
            .get(&format!("http://{}/health", addr))
            .send()
            .await
            .unwrap();
        assert_eq!(response.status(), 200);
    }
}
End-to-End Testing
Full System Tests
#[cfg(test)]
mod e2e_tests {
    use std::process::Command;
    use std::thread;
    use std::time::Duration;

    #[test]
    fn test_full_openapi_workflow() {
        // Start MockForge server
        let mut server = Command::new("cargo")
            .args(&[
                "run", "--bin", "mockforge-cli", "serve",
                "--spec", "examples/openapi-demo.json",
                "--http-port", "3000",
            ])
            .spawn()
            .unwrap();

        // Wait for server to start
        thread::sleep(Duration::from_secs(2));

        // Test API endpoints
        test_user_endpoints();
        test_product_endpoints();

        // Stop server
        server.kill().unwrap();
    }
}
Performance Testing
Load Testing
# Using hey for HTTP load testing
hey -n 1000 -c 10 http://localhost:3000/users
# Using wrk for more detailed benchmarking
wrk -t 4 -c 100 -d 30s http://localhost:3000/users
Benchmarking
// In benches/benchmark.rs
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn benchmark_template_rendering(c: &mut Criterion) {
    let engine = TemplateEngine::new();
    c.bench_function("template_render_simple", |b| {
        b.iter(|| {
            engine.render("Hello {{name}}", &Context::from_value("name", "World"))
        })
    });
}

criterion_group!(benches, benchmark_template_rendering);
criterion_main!(benches);
Run benchmarks:
cargo bench
Security Testing
Input Validation Tests
#[cfg(test)]
mod security_tests {
    #[test]
    fn test_sql_injection_prevention() {
        let input = "'; DROP TABLE users; --";
        let result = sanitize_input(input);
        // Ensure dangerous characters are escaped
        assert!(!result.contains("DROP"));
    }

    #[test]
    fn test_template_injection() {
        let engine = TemplateEngine::new();
        let malicious = "{{#exec}}rm -rf /{{/exec}}";
        // Should not execute dangerous commands
        let result = engine.render(malicious, &Context::new());
        assert!(!result.contains("exec"));
    }
}
Continuous Integration
GitHub Actions Testing
# .github/workflows/test.yml
name: Test

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: stable
          override: true
      - name: Cache dependencies
        uses: actions/cache@v2
        with:
          path: |
            ~/.cargo/registry
            ~/.cargo/git
            target
          key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
      - name: Run tests
        run: cargo test --verbose
      - name: Run clippy
        run: cargo clippy -- -D warnings
      - name: Check formatting
        run: cargo fmt --check
      - name: Run security audit
        run: |
          cargo install cargo-audit
          cargo audit
This comprehensive testing guide ensures MockForge maintains high code quality and prevents regressions across all components and integration points.
Architecture Overview
MockForge is a modular, Rust-based platform for mocking APIs across HTTP, WebSocket, and gRPC protocols. This document provides a comprehensive overview of the system architecture, design principles, and component interactions.
System Overview
MockForge enables frontend and integration development without live backends by providing realistic API mocking with configurable latency, failure injection, and dynamic response generation. The system is built as a modular workspace of Rust crates that share a core engine for request routing, validation, and data generation.
Key Design Principles
- Modularity: Separated concerns across focused crates
- Extensibility: Plugin architecture for custom functionality
- Performance: Async-first design with efficient resource usage
- Developer Experience: Comprehensive tooling and clear APIs
- Protocol Agnostic: Unified approach across different protocols
High-Level Architecture
+------------------+
| CLI / UI |
+--------+---------+
|
+------------+------------+
| Core Engine (axum) |
+------------+------------+
|
+----------+-------+---------+-----------+
| | | |
HTTP Mock WS Mock gRPC Mock Data Gen
(axum) (tokio-ws) (tonic) (faker+RAG)
Crate Structure
MockForge is organized as a Cargo workspace with the following crates:
mockforge/
crates/
mockforge-cli/ # Command-line interface
mockforge-core/ # Shared functionality
mockforge-http/ # HTTP REST API mocking
mockforge-ws/ # WebSocket connection mocking
mockforge-grpc/ # gRPC service mocking
mockforge-data/ # Synthetic data generation
mockforge-ui/ # Web-based admin interface
Crate Responsibilities
mockforge-core - Shared Core Engine
The foundation crate providing common functionality used across all protocols:
- Request Routing: Unified route registry and matching logic
- Validation Engine: OpenAPI and schema validation
- Template System: Handlebars-based dynamic content generation
- Latency Injection: Configurable response delays
- Failure Injection: Simulated error conditions
- Record/Replay: Request/response capture and replay
- Logging: Structured request/response logging
- Configuration: Unified configuration management
mockforge-http - HTTP REST API Mocking
HTTP-specific implementation built on axum:
- OpenAPI Integration: Automatic route generation from specifications
- Request Matching: Method, path, query, header, and body matching
- Response Generation: Schema-driven and template-based responses
- Middleware Support: Custom request/response processing
mockforge-ws - WebSocket Connection Mocking
Real-time communication mocking:
- Replay Mode: Scripted message sequences with timing control
- Interactive Mode: Dynamic responses based on client messages
- State Management: Connection-specific state tracking
- Template Support: Dynamic message content generation
mockforge-grpc - gRPC Service Mocking
Protocol buffer-based service mocking:
- Dynamic Proto Discovery: Automatic compilation of .proto files
- Service Reflection: Runtime service discovery and inspection
- Streaming Support: Unary, server, client, and bidirectional streaming
- Schema Validation: Message validation against proto definitions
mockforge-data - Synthetic Data Generation
Advanced data generation capabilities:
- Faker Integration: Realistic fake data generation
- RAG Enhancement: Retrieval-augmented generation for contextual data
- Schema-Driven Generation: Data conforming to JSON Schema/OpenAPI specs
- Template Helpers: Integration with core templating system
mockforge-cli - Command-Line Interface
User-facing command-line tool:
- Server Management: Start/stop mock servers
- Configuration: Load and validate configuration files
- Data Generation: Command-line data generation utilities
- Development Tools: Testing and debugging utilities
mockforge-ui - Admin Web Interface
Browser-based management interface:
- Real-time Monitoring: Live request/response viewing
- Configuration Management: Runtime configuration changes
- Fixture Management: Recorded interaction management
- Performance Metrics: Response times and error rates
Core Engine Architecture
Request Processing Pipeline
All requests follow a unified processing pipeline regardless of protocol:
- Request Reception: Protocol-specific server receives request
- Route Matching: Core routing engine matches request to handler
- Validation: Schema validation if enabled
- Template Processing: Dynamic content generation
- Latency Injection: Artificial delays if configured
- Failure Injection: Error simulation if enabled
- Response Generation: Handler generates response
- Logging: Request/response logging
- Response Delivery: Protocol-specific response sending
Route Registry System
The core routing system provides unified route management:
pub struct RouteRegistry {
    routes: HashMap<RouteKey, Vec<RouteHandler>>,
    overrides: Overrides,
    validation_mode: ValidationMode,
}

impl RouteRegistry {
    pub fn register(&mut self, key: RouteKey, handler: RouteHandler);
    pub fn match_route(&self, request: &Request) -> Option<&RouteHandler>;
    pub fn apply_overrides(&mut self, overrides: &Overrides);
}
Template Engine
Handlebars-based templating with custom helpers:
pub struct TemplateEngine {
    registry: handlebars::Handlebars<'static>,
}

impl TemplateEngine {
    pub fn render(&self, template: &str, context: &Context) -> Result<String>;
    pub fn register_helper(&mut self, name: &str, helper: Box<dyn HelperDef>);
}
Built-in helpers include:
- uuid: Generate unique identifiers
- now: Current timestamp
- randInt: Random integers
- request: Access request data
- faker: Synthetic data generation
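A response template combining several of these helpers might look like the following; the field-access syntax for request data (request.path) is an assumption based on common Handlebars conventions:

```json
{
  "id": "{{uuid}}",
  "created_at": "{{now}}",
  "path": "{{request.path}}",
  "name": "{{faker.name}}"
}
```

Each token is replaced per response, so two requests to the same endpoint yield different ids and timestamps.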
This architecture provides a solid foundation for API mocking while maintaining extensibility, performance, and developer experience. The modular design allows for independent evolution of each protocol implementation while sharing common infrastructure.
CLI Crate
HTTP Crate
gRPC Crate
WebSocket Crate
CLI Reference
MockForge provides a comprehensive command-line interface for managing mock servers and generating test data. This reference covers all available commands, options, and usage patterns.
Global Options
All MockForge commands support the following global options:
mockforge-cli [OPTIONS] <COMMAND>
Global Options
- -h, --help: Display help information
Commands
serve - Start Mock Servers
The primary command for starting MockForge’s mock servers with support for HTTP, WebSocket, and gRPC protocols.
mockforge-cli serve [OPTIONS]
Server Options
Port Configuration:
- --http-port <PORT>: HTTP server port (default: 3000)
- --ws-port <PORT>: WebSocket server port (default: 3001)
- --grpc-port <PORT>: gRPC server port (default: 50051)
API Specification:
- --spec <PATH>: Path to OpenAPI specification file (JSON or YAML format)
Configuration:
- -c, --config <PATH>: Path to configuration file
Admin UI Options
Admin UI Control:
- --admin: Enable admin UI
- --admin-port <PORT>: Admin UI port (default: 8080)
- --admin-embed: Force embedding Admin UI under HTTP server
- --admin-mount-path <PATH>: Explicit mount path for embedded Admin UI (implies --admin-embed)
- --admin-standalone: Force standalone Admin UI on separate port (overrides embed)
- --disable-admin-api: Disable Admin API endpoints (UI loads but API routes are absent)
Validation Options
Request Validation:
- --validation <MODE>: Request validation mode (default: enforce)
  - off: Disable validation
  - warn: Log warnings but allow requests
  - enforce: Reject invalid requests
- --aggregate-errors: Aggregate request validation errors into a JSON array
- --validate-responses: Validate responses (warn-only)
- --validation-status <CODE>: Validation error HTTP status code (default: 400)
Response Processing
Template Expansion:
- --response-template-expand: Expand templating tokens in responses/examples
Chaos Engineering
Latency Simulation:
- --latency-enabled: Enable latency simulation
Failure Injection:
- --failures-enabled: Enable failure injection
Examples
Basic HTTP Server:
mockforge-cli serve --spec examples/openapi-demo.json --http-port 3000
Full Multi-Protocol Setup:
mockforge-cli serve \
--spec examples/openapi-demo.json \
--http-port 3000 \
--ws-port 3001 \
--grpc-port 50051 \
--admin \
--admin-port 8080 \
--response-template-expand
Development Configuration:
mockforge-cli serve \
--config demo-config.yaml \
--validation warn \
--response-template-expand \
--latency-enabled
Production Configuration:
mockforge-cli serve \
--config production-config.yaml \
--validation enforce \
--admin-standalone
data - Generate Synthetic Data
Generate synthetic test data using various templates and schemas.
mockforge-cli data <SUBCOMMAND>
Subcommands
template - Generate from Built-in Templates
Generate data using MockForge’s built-in data generation templates.
mockforge-cli data template [OPTIONS]
Options:
- --count <N>: Number of items to generate (default: 1)
- --format <FORMAT>: Output format (json, yaml, csv)
- --template <NAME>: Template name (user, product, order, etc.)
- --output <PATH>: Output file path
Examples:
# Generate 10 user records as JSON
mockforge-cli data template --template user --count 10 --format json
# Generate product data to file
mockforge-cli data template --template product --count 50 --output products.json
schema - Generate from JSON Schema
Generate data conforming to a JSON Schema specification.
mockforge-cli data schema [OPTIONS] <SCHEMA>
Parameters:
- <SCHEMA>: Path to JSON Schema file
Options:
- --count <N>: Number of items to generate (default: 1)
- --format <FORMAT>: Output format (json, yaml)
- --output <PATH>: Output file path
Examples:
# Generate data from user schema
mockforge-cli data schema --count 5 user-schema.json
# Generate and save to file
mockforge-cli data schema --count 100 --output generated-data.json api-schema.json
open-api - Generate from OpenAPI Spec
Generate mock data based on OpenAPI specification schemas.
mockforge-cli data open-api [OPTIONS] <SPEC>
Parameters:
- <SPEC>: Path to OpenAPI specification file
Options:
- --endpoint <PATH>: Specific endpoint to generate data for
- --method <METHOD>: HTTP method (get, post, put, delete)
- --count <N>: Number of items to generate (default: 1)
- --format <FORMAT>: Output format (json, yaml)
- --output <PATH>: Output file path
Examples:
# Generate data for all endpoints in OpenAPI spec
mockforge-cli data open-api api-spec.yaml
# Generate data for specific endpoint
mockforge-cli data open-api --endpoint /users --method get --count 20 api-spec.yaml
# Generate POST request body data
mockforge-cli data open-api --endpoint /users --method post api-spec.yaml
admin - Admin UI Server
Start the Admin UI as a standalone server without the main mock servers.
mockforge-cli admin [OPTIONS]
Options
- --port <PORT>: Server port (default: 8080)
Examples
# Start admin UI on default port
mockforge-cli admin
# Start admin UI on custom port
mockforge-cli admin --port 9090
Configuration File Format
MockForge supports YAML configuration files that can be used instead of command-line options.
Basic Configuration Structure
# Server configuration
server:
  http_port: 3000
  ws_port: 3001
  grpc_port: 50051

# API specification
spec: examples/openapi-demo.json

# Admin UI configuration
admin:
  enabled: true
  port: 8080
  embedded: false
  mount_path: "/admin"
  standalone: true
  disable_api: false

# Validation settings
validation:
  mode: enforce
  aggregate_errors: false
  validate_responses: false
  status_code: 400

# Response processing
response:
  template_expand: true

# Chaos engineering
chaos:
  latency_enabled: false
  failures_enabled: false

# Protocol-specific settings
grpc:
  proto_dir: "proto/"
  enable_reflection: true

websocket:
  replay_file: "examples/ws-demo.jsonl"
Configuration Precedence
Configuration values are applied in the following order (later sources override earlier ones):
- Default values (compiled into the binary)
- Configuration file (-c/--config option)
- Environment variables
- Command-line arguments (highest priority)
Environment Variables
All configuration options can be set via environment variables using the MOCKFORGE_ prefix:
# Server ports
export MOCKFORGE_HTTP_PORT=3000
export MOCKFORGE_WS_PORT=3001
export MOCKFORGE_GRPC_PORT=50051
# Admin UI
export MOCKFORGE_ADMIN_ENABLED=true
export MOCKFORGE_ADMIN_PORT=8080
# Validation
export MOCKFORGE_VALIDATION_MODE=enforce
export MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
# gRPC settings
export MOCKFORGE_PROTO_DIR=proto/
export MOCKFORGE_GRPC_REFLECTION_ENABLED=true
# WebSocket settings
export MOCKFORGE_WS_REPLAY_FILE=examples/ws-demo.jsonl
Exit Codes
MockForge uses standard exit codes:
- 0: Success
- 1: General error
- 2: Configuration error
- 3: Validation error
- 4: File I/O error
- 5: Network error
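In scripts, these codes can be branched on directly; the stub function below stands in for an actual mockforge-cli invocation so the sketch is self-contained:

```shell
# Stand-in for `mockforge-cli serve --config config.yaml`; pretend it exited with 2
run_mockforge() { return 2; }

run_mockforge
case $? in
  0) echo "server started" ;;
  2) echo "configuration error: check config.yaml" ;;
  3) echo "validation error: check your OpenAPI spec" ;;
  *) echo "unexpected failure" ;;
esac
```

Replace the stub with the real command and the case arms map each documented exit code to an actionable message.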
Logging
MockForge provides configurable logging output to help with debugging and monitoring.
Log Levels
- error: Only error messages
- warn: Warnings and errors
- info: General information (default)
- debug: Detailed debugging information
- trace: Very verbose tracing information
Log Configuration
# Set log level via environment variable
export RUST_LOG=mockforge=debug
# Or via configuration file
logging:
  level: debug
  format: json
Log Output
Logs include structured information about:
- HTTP requests/responses
- WebSocket connections and messages
- gRPC calls and streaming
- Configuration loading
- Template expansion
- Validation errors
Examples
Complete Development Setup
# Start all servers with admin UI
mockforge-cli serve \
--spec examples/openapi-demo.json \
--http-port 3000 \
--ws-port 3001 \
--grpc-port 50051 \
--admin \
--admin-port 8080 \
--response-template-expand \
--validation warn
CI/CD Testing Pipeline
#!/bin/bash
# test-mockforge.sh
# Start MockForge in background
mockforge-cli serve --spec api-spec.yaml --http-port 3000 &
MOCKFORGE_PID=$!
# Wait for server to start
sleep 5
# Run API tests
npm test
# Generate test data
mockforge-cli data open-api --endpoint /users --count 100 api-spec.yaml > test-users.json
# Stop MockForge
kill $MOCKFORGE_PID
Load Testing Setup
#!/bin/bash
# load-test-setup.sh
# Start MockForge with minimal validation for performance
MOCKFORGE_VALIDATION_MODE=off \
MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=false \
mockforge-cli serve \
--spec load-test-spec.yaml \
--http-port 3000 \
--validation off
# Now run your load testing tool against localhost:3000
# Example: hey -n 10000 -c 100 http://localhost:3000/api/test
Docker Integration
# Run MockForge in Docker with CLI commands
docker run --rm -v $(pwd)/examples:/examples \
mockforge \
serve --spec /examples/openapi-demo.json --http-port 3000
Troubleshooting
Common Issues
Server won’t start:
# Check if ports are available
lsof -i :3000
lsof -i :3001
# Try different ports
mockforge-cli serve --http-port 3001 --ws-port 3002
Configuration not loading:
# Validate YAML syntax
yamllint config.yaml
# Check file permissions
ls -la config.yaml
OpenAPI spec not found:
# Verify file exists and path is correct
ls -la examples/openapi-demo.json
# Use absolute path
mockforge-cli serve --spec /full/path/to/examples/openapi-demo.json
Template expansion not working:
# Ensure template expansion is enabled
mockforge-cli serve --response-template-expand --spec api-spec.yaml
Debug Mode
Run with debug logging for detailed information:
RUST_LOG=mockforge=debug mockforge-cli serve --spec api-spec.yaml
Health Checks
Test basic functionality:
# HTTP health check
curl http://localhost:3000/health
# WebSocket connection test
websocat ws://localhost:3001/ws
# gRPC service discovery
grpcurl -plaintext localhost:50051 list
This CLI reference provides comprehensive coverage of MockForge’s command-line interface. For programmatic usage, see the Rust API Reference.
Rust API Reference
MockForge provides comprehensive Rust libraries for programmatic usage and extension. This reference covers the main crates and their APIs.
Crate Overview
MockForge consists of several interconnected crates:
- mockforge-cli: Command-line interface and main executable
- mockforge-core: Core functionality shared across protocols
- mockforge-http: HTTP REST API mocking
- mockforge-grpc: gRPC service mocking
- mockforge-ui: Web-based admin interface
Getting Started
Add MockForge to your Cargo.toml:
[dependencies]
mockforge-core = "0.1"
mockforge-http = "0.1"
mockforge-grpc = "0.1"
For development or testing, you might want to use path dependencies:
[dependencies]
mockforge-core = { path = "../mockforge/crates/mockforge-core" }
mockforge-http = { path = "../mockforge/crates/mockforge-http" }
mockforge-grpc = { path = "../mockforge/crates/mockforge-grpc" }
Core Concepts
Configuration System
MockForge uses a hierarchical configuration system that can be built programmatically:
use mockforge_core::config::MockForgeConfig;

let config = MockForgeConfig {
    server: ServerConfig {
        http_port: Some(3000),
        ws_port: Some(3001),
        grpc_port: Some(50051),
    },
    validation: ValidationConfig {
        mode: ValidationMode::Enforce,
        aggregate_errors: false,
    },
    response: ResponseConfig {
        template_expand: true,
    },
    ..Default::default()
};
Template System
MockForge includes a powerful template engine for dynamic content generation:
use mockforge_core::template::{TemplateEngine, Context};

let engine = TemplateEngine::new();
let context = Context::new()
    .with_value("user_id", "12345")
    .with_value("timestamp", "2025-09-12T10:00:00Z");

let result = engine.render("User {{user_id}} logged in at {{timestamp}}", &context)?;
assert_eq!(result, "User 12345 logged in at 2025-09-12T10:00:00Z");
Error Handling
MockForge uses the anyhow crate for error handling:
use anyhow::{Context, Result};

fn start_server(config: &Config) -> Result<()> {
    let server = HttpServer::new(config)
        .context("Failed to create HTTP server")?;
    server.start()
        .context("Failed to start server")?;
    Ok(())
}
HTTP API
Basic HTTP Server
use mockforge_http::{HttpServer, HttpConfig};
use mockforge_core::config::ServerConfig;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create HTTP configuration
    let http_config = HttpConfig {
        spec_path: Some("api-spec.yaml".to_string()),
        validation_mode: ValidationMode::Warn,
        template_expand: true,
    };

    // Start HTTP server
    let mut server = HttpServer::new(http_config);
    server.start(([127, 0, 0, 1], 3000)).await?;

    println!("HTTP server running on http://localhost:3000");
    Ok(())
}
Custom Route Handlers
use mockforge_http::{HttpServer, RouteHandler};
use warp::{Filter, Reply};

struct CustomHandler;

impl RouteHandler for CustomHandler {
    fn handle(&self, path: &str, method: &str) -> Option<Box<dyn Reply>> {
        if path == "/custom" && method == "GET" {
            Some(Box::new(warp::reply::json(&serde_json::json!({
                "message": "Custom response",
                "timestamp": chrono::Utc::now()
            }))))
        } else {
            None
        }
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let handler = CustomHandler;
    let server = HttpServer::with_handler(handler);
    server.start(([127, 0, 0, 1], 3000)).await?;
    Ok(())
}
gRPC API
Basic gRPC Server
use mockforge_grpc::{GrpcServer, GrpcConfig};
use std::path::Path;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Configure proto discovery
    let config = GrpcConfig {
        proto_dir: Path::new("proto/"),
        enable_reflection: true,
        ..Default::default()
    };

    // Start gRPC server
    let server = GrpcServer::new(config);
    server.start("127.0.0.1:50051").await?;

    println!("gRPC server running on 127.0.0.1:50051");
    Ok(())
}
Custom Service Implementation
use mockforge_grpc::{ServiceRegistry, ServiceImplementation};
use prost::Message;
use tonic::{Request, Response, Status};

// Generated from proto file
mod greeter {
    include!("generated/greeter.rs");
}

pub struct GreeterService;

#[tonic::async_trait]
impl greeter::greeter_server::Greeter for GreeterService {
    async fn say_hello(
        &self,
        request: Request<greeter::HelloRequest>,
    ) -> Result<Response<greeter::HelloReply>, Status> {
        let name = request.into_inner().name;
        let reply = greeter::HelloReply {
            message: format!("Hello, {}!", name),
            timestamp: Some(prost_types::Timestamp::from(std::time::SystemTime::now())),
        };
        Ok(Response::new(reply))
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let service = GreeterService {};
    let server = GrpcServer::with_service(service);
    server.start("127.0.0.1:50051").await?;
    Ok(())
}
WebSocket API
Basic WebSocket Server
use mockforge_ws::{WebSocketServer, WebSocketConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = WebSocketConfig {
        port: 3001,
        replay_file: Some("ws-replay.jsonl".to_string()),
        ..Default::default()
    };

    let server = WebSocketServer::new(config);
    server.start().await?;

    println!("WebSocket server running on ws://localhost:3001");
    Ok(())
}
Custom Message Handler
use mockforge_ws::{WebSocketServer, MessageHandler};
use futures_util::{SinkExt, StreamExt};

struct EchoHandler;

impl MessageHandler for EchoHandler {
    async fn handle_message(&self, message: String) -> String {
        format!("Echo: {}", message)
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let handler = EchoHandler {};
    let server = WebSocketServer::with_handler(handler);
    server.start().await?;
    Ok(())
}
This Rust API reference provides the foundation for programmatic usage of MockForge. For protocol-specific details, see the HTTP, gRPC, and WebSocket API documentation.
HTTP Module
gRPC Module
WebSocket Module
Development Setup
This guide helps contributors get started with MockForge development, including environment setup, development workflow, and project structure.
Prerequisites
Before contributing to MockForge, ensure you have the following installed:
Required Tools
- Rust: Version 1.70.0 or later
- Cargo: Included with Rust
- Git: For version control
- C/C++ Compiler: For native dependencies
- Docker: For containerized development and testing
Recommended Tools
- Visual Studio Code or IntelliJ/CLion with Rust plugins
- cargo-watch for automatic rebuilds
- cargo-edit for dependency management
- cargo-audit for security scanning
- mdbook for documentation development
Environment Setup
1. Install Rust
# Install Rust using rustup
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Add Cargo to PATH
source $HOME/.cargo/env
# Verify installation
rustc --version
cargo --version
2. Clone the Repository
# Clone with SSH (recommended for contributors)
git clone git@github.com:SaaSy-Solutions/mockforge.git
# Or with HTTPS
git clone https://github.com/SaaSy-Solutions/mockforge.git
cd mockforge
# Initialize submodules if any
git submodule update --init --recursive
3. Install Development Tools
# Install cargo-watch for automatic rebuilds
cargo install cargo-watch
# Install cargo-edit for dependency management
cargo install cargo-edit
# Install cargo-audit for security scanning
cargo install cargo-audit
# Install mdbook for documentation
cargo install mdbook mdbook-linkcheck mdbook-toc
# Install additional development tools
cargo install cargo-tarpaulin cargo-udeps cargo-outdated
4. Verify Setup
# Build the project
cargo build
# Run tests
cargo test
# Check code quality
cargo clippy
cargo fmt --check
Development Workflow
Daily Development
1. Create a feature branch:
   git checkout -b feature/your-feature-name
2. Make changes with frequent testing:
   # Run tests automatically on changes
   cargo watch -x test
   # Or build automatically
   cargo watch -x build
3. Follow code quality standards:
   # Format code
   cargo fmt
   # Lint code
   cargo clippy -- -W clippy::pedantic
   # Run security audit
   cargo audit
4. Write tests for new functionality:
   # Add unit tests
   cargo test --lib
   # Add integration tests
   cargo test --test integration
IDE Configuration
Visual Studio Code
1. Install extensions:
   - rust-lang.rust-analyzer - Rust language support
   - ms-vscode.vscode-json - JSON support
   - redhat.vscode-yaml - YAML support
   - ms-vscode.vscode-docker - Docker support
2. Recommended settings in .vscode/settings.json:
   {
     "rust-analyzer.checkOnSave.command": "clippy",
     "rust-analyzer.cargo.allFeatures": true,
     "editor.formatOnSave": true,
     "editor.codeActionsOnSave": {
       "source.fixAll": "explicit"
     }
   }
IntelliJ/CLion
- Install Rust plugin from marketplace
- Enable external linter (clippy)
- Configure code style to match project standards
Pre-commit Setup
Install pre-commit hooks to ensure code quality:
# Install pre-commit if not already installed
pip install pre-commit
# Install hooks
pre-commit install
# Run on all files
pre-commit run --all-files
Project Structure
mockforge/
├── crates/ # Rust crates
│ ├── mockforge-cli/ # Command-line interface
│ ├── mockforge-core/ # Shared core functionality
│ ├── mockforge-http/ # HTTP REST API mocking
│ ├── mockforge-ws/ # WebSocket connection mocking
│ ├── mockforge-grpc/ # gRPC service mocking
│ ├── mockforge-data/ # Synthetic data generation
│ └── mockforge-ui/ # Web-based admin interface
├── docs/ # Technical documentation
├── examples/ # Usage examples
├── book/ # User documentation (mdBook)
│ └── src/
├── fixtures/ # Test fixtures
├── scripts/ # Development scripts
├── tools/ # Development tools
├── Cargo.toml # Workspace configuration
├── Cargo.lock # Dependency lock file
├── Makefile # Development tasks
├── docker-compose.yml # Development environment
└── README.md # Project overview
Development Tasks
Common Make Targets
# Build all crates
make build
# Run tests
make test
# Run integration tests
make test-integration
# Build documentation
make docs
# Serve documentation locally
make docs-serve
# Run linter
make lint
# Format code
make format
# Clean build artifacts
make clean
Custom Development Scripts
Several development scripts are available in the scripts/ directory:
# Update dependencies
./scripts/update-deps.sh
# Generate API documentation
./scripts/gen-docs.sh
# Run performance benchmarks
./scripts/benchmark.sh
# Check for unused dependencies
./scripts/check-deps.sh
Testing Strategy
Unit Tests
# Run unit tests for all crates
cargo test --lib
# Run unit tests for specific crate
cargo test -p mockforge-core
# Run with coverage
cargo tarpaulin --out Html
Integration Tests
# Run integration tests
cargo test --test integration
# Run with verbose output
cargo test --test integration -- --nocapture
End-to-End Tests
# Run E2E tests (requires Docker)
make test-e2e
# Or run manually
./scripts/test-e2e.sh
Docker Development
Development Container
# Build development container
docker build -f Dockerfile.dev -t mockforge-dev .
# Run development environment
docker run -it --rm \
-v $(pwd):/app \
-p 3000:3000 \
-p 3001:3001 \
-p 50051:50051 \
-p 8080:8080 \
mockforge-dev
Testing with Docker
# Run tests in container
docker run --rm -v $(pwd):/app mockforge-dev cargo test
# Build release binaries
docker run --rm -v $(pwd):/app mockforge-dev cargo build --release
Contributing Workflow
1. Choose an Issue
- Check GitHub Issues for open tasks
- Look for issues labeled good first issue or help wanted
- Comment on the issue to indicate you're working on it
2. Create a Branch
# Create feature branch
git checkout -b feature/issue-number-description
# Or create bugfix branch
git checkout -b bugfix/issue-number-description
3. Make Changes
- Write clear, focused commits
- Follow the code style guide
- Add tests for new functionality
- Update documentation as needed
4. Test Your Changes
# Run full test suite
make test
# Run integration tests
make test-integration
# Test manually if applicable
cargo run -- serve --spec examples/openapi-demo.json
5. Update Documentation
# Update user-facing docs if needed
mdbook build
# Update API docs
cargo doc
# Test documentation links
mdbook test
6. Submit a Pull Request
# Ensure branch is up to date
git fetch origin
git rebase origin/main
# Push your branch
git push origin feature/your-feature
# Create PR on GitHub with:
# - Clear title and description
# - Reference to issue number
# - Screenshots/videos for UI changes
# - Test results
Getting Help
Communication Channels
- GitHub Issues: For bugs, features, and general discussion
- GitHub Discussions: For questions and longer-form discussion
- Discord/Slack: For real-time chat (if available)
When to Ask for Help
- Stuck on a technical problem for more than 2 hours
- Unsure about design decisions
- Need clarification on requirements
- Found a potential security issue
Code Review Process
- All PRs require review from at least one maintainer
- CI must pass all checks
- Code coverage should not decrease significantly
- Documentation must be updated for user-facing changes
This setup guide ensures you have everything needed to contribute effectively to MockForge. Happy coding! 🚀
Code Style Guide
This guide outlines the coding standards and style guidelines for MockForge development. Consistent code style improves readability, maintainability, and collaboration.
Rust Code Style
MockForge follows the official Rust style guidelines with some project-specific conventions.
Formatting
Use rustfmt for automatic code formatting:
# Format all code
cargo fmt
# Check formatting without modifying files
cargo fmt --check
Linting
Use clippy for additional code quality checks:
# Run clippy with project settings
cargo clippy
# Run with pedantic mode for stricter checks
cargo clippy -- -W clippy::pedantic
Naming Conventions
Functions and Variables
// Good: snake_case for functions and variables
fn process_user_data(user_id: i32, data: &str) -> Result<User, Error> {
    let processed_data = validate_and_clean(data)?;
    let user_record = create_user_record(user_id, &processed_data)?;
    Ok(user_record)
}

// Bad: camelCase or PascalCase
fn processUserData(userId: i32, data: &str) -> Result<User, Error> {
    let ProcessedData = validate_and_clean(data)?;
    let userRecord = create_user_record(userId, &ProcessedData)?;
    Ok(userRecord)
}
Types and Traits
// Good: PascalCase for types
pub struct HttpServer {
    config: ServerConfig,
    router: Router,
}

pub trait RequestHandler {
    fn handle_request(&self, request: Request) -> Response;
}

// Bad: snake_case for types
pub struct http_server {
    config: ServerConfig,
    router: Router,
}
Constants
// Good: SCREAMING_SNAKE_CASE for constants
const MAX_CONNECTIONS: usize = 1000;
const DEFAULT_TIMEOUT_SECS: u64 = 30;

// Bad: camelCase or PascalCase
const maxConnections: usize = 1000;
const DefaultTimeoutSecs: u64 = 30;
Modules and Files
// Good: snake_case for module names
pub mod request_handler;
pub mod template_engine;

// File: request_handler.rs
// Module: request_handler
Documentation
Function Documentation
/// Processes a user request and returns a response.
///
/// This function handles the complete request processing pipeline:
/// 1. Validates the request data
/// 2. Applies business logic
/// 3. Returns appropriate response
///
/// # Arguments
///
/// * `user_id` - The ID of the user making the request
/// * `request_data` - The request payload as JSON
///
/// # Returns
///
/// Returns a `Result<Response, Error>` where:
/// - `Ok(response)` contains the successful response
/// - `Err(error)` contains details about what went wrong
///
/// # Errors
///
/// This function will return an error if:
/// - The user ID is invalid
/// - The request data is malformed
/// - Database operations fail
///
/// # Examples
///
/// ```rust
/// let user_id = 123;
/// let request_data = r#"{"action": "update_profile"}"#;
/// let response = process_user_request(user_id, request_data)?;
/// assert_eq!(response.status(), 200);
/// ```
pub fn process_user_request(user_id: i32, request_data: &str) -> Result<Response, Error> {
    // Implementation
}
Module Documentation
//! # HTTP Server Module
//!
//! This module provides HTTP server functionality for MockForge,
//! including request routing, middleware support, and response handling.
//!
//! ## Architecture
//!
//! The HTTP server uses axum as the underlying web framework and provides:
//! - OpenAPI specification integration
//! - Template-based response generation
//! - Middleware for logging and validation
//!
//! ## Example
//!
//! ```rust
//! use mockforge_http::HttpServer;
//!
//! let server = HttpServer::new(config);
//! server.serve("127.0.0.1:3000").await?;
//! ```
Error Handling
Custom Error Types
use thiserror::Error;

#[derive(Error, Debug)]
pub enum MockForgeError {
    #[error("Configuration error: {message}")]
    Config { message: String },

    #[error("I/O error: {source}")]
    Io {
        #[from]
        source: std::io::Error,
    },

    #[error("Template rendering error: {message}")]
    Template { message: String },

    #[error("HTTP error: {status} - {message}")]
    Http { status: u16, message: String },
}
Result Types
// Good: Use Result<T, MockForgeError> for fallible operations
pub fn load_config(path: &Path) -> Result<Config, MockForgeError> {
    let content = fs::read_to_string(path)
        .map_err(|e| MockForgeError::Io { source: e })?;

    let config: Config = serde_yaml::from_str(&content)
        .map_err(|e| MockForgeError::Config {
            message: format!("Failed to parse YAML: {}", e),
        })?;

    Ok(config)
}

// Bad: Using Option when you should use Result
pub fn load_config_bad(path: &Path) -> Option<Config> {
    // This loses error information
    None
}
Async Code
Async Function Signatures
// Good: Clear async function signatures
pub async fn process_request(request: Request) -> Result<Response, Error> {
    let data = validate_request(&request).await?;
    let result = process_data(data).await?;
    Ok(create_response(result))
}

// Bad: Unclear async boundaries
pub fn process_request(request: Request) -> impl Future<Output = Result<Response, Error>> {
    async move {
        // Implementation
    }
}
Tokio Usage
use tokio::sync::{Mutex, RwLock};

// Good: Use appropriate synchronization primitives
pub struct SharedState {
    data: RwLock<HashMap<String, String>>,
    counter: Mutex<i64>,
}

impl SharedState {
    pub async fn get_data(&self, key: &str) -> Option<String> {
        let data = self.data.read().await;
        data.get(key).cloned()
    }

    pub async fn increment_counter(&self) -> i64 {
        let mut counter = self.counter.lock().await;
        *counter += 1;
        *counter
    }
}
Testing
Unit Test Structure
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_function_basic_case() {
        // Given
        let input = "test input";
        let expected = "expected output";

        // When
        let result = process_input(input);

        // Then
        assert_eq!(result, expected);
    }

    #[test]
    fn test_function_error_case() {
        // Given
        let input = "";

        // When
        let result = process_input(input);

        // Then
        assert!(result.is_err());
        assert!(matches!(result.unwrap_err(), Error::InvalidInput(_)));
    }

    #[tokio::test]
    async fn test_async_function() {
        // Given
        let client = create_test_client().await;

        // When
        let response = client.make_request().await.unwrap();

        // Then
        assert_eq!(response.status(), 200);
    }
}
Test Organization
// tests/integration_tests.rs
#[cfg(test)]
mod integration_tests {
    use mockforge_core::config::MockForgeConfig;

    #[tokio::test]
    async fn test_full_http_flow() {
        // Test complete request/response cycle
        let server = TestServer::new().await;
        let client = TestClient::new(server.url());

        let response = client.get("/api/users").await;
        assert_eq!(response.status(), 200);
    }
}
Performance Considerations
Memory Management
// Good: Use references when possible
pub fn process_data(data: &str) -> Result<String, Error> {
    // Avoid cloning unless necessary
    if data.is_empty() {
        return Err(Error::EmptyInput);
    }
    Ok(data.to_uppercase())
}

// Good: Use Cow for flexible ownership
use std::borrow::Cow;

pub fn normalize_string<'a>(input: &'a str) -> Cow<'a, str> {
    if input.chars().all(|c| c.is_lowercase()) {
        Cow::Borrowed(input)
    } else {
        Cow::Owned(input.to_lowercase())
    }
}
Zero-Cost Abstractions
// Good: Use iterators for memory efficiency
pub fn find_active_users(users: &[User]) -> impl Iterator<Item = &User> {
    users.iter().filter(|user| user.is_active)
}

// Bad: Collect into Vec unnecessarily
pub fn find_active_users_bad(users: &[User]) -> Vec<&User> {
    users.iter().filter(|user| user.is_active).collect()
}
Project-Specific Conventions
Configuration Handling
// Good: Use builder pattern for complex configuration
#[derive(Debug, Clone)]
pub struct ServerConfig {
    pub host: String,
    pub port: u16,
    pub tls: Option<TlsConfig>,
}

impl Default for ServerConfig {
    fn default() -> Self {
        Self {
            host: "127.0.0.1".to_string(),
            port: 3000,
            tls: None,
        }
    }
}

impl ServerConfig {
    pub fn builder() -> ServerConfigBuilder {
        ServerConfigBuilder::default()
    }
}
Logging
use tracing::{info, warn, error, debug, instrument};

// Good: Use structured logging
// Note: in tracing's event macros, fields come before the message.
#[instrument(skip(config))]
pub async fn start_server(config: &ServerConfig) -> Result<(), Error> {
    info!(host = %config.host, port = config.port, "Starting server");

    if let Err(e) = setup_server(config).await {
        error!(error = %e, "Failed to start server");
        return Err(e);
    }

    info!("Server started successfully");
    Ok(())
}
Feature Flags
// Good: Use feature flags for optional functionality
#[cfg(feature = "grpc")]
pub mod grpc {
    // gRPC-specific code
}

#[cfg(feature = "websocket")]
pub mod websocket {
    // WebSocket-specific code
}
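Such cfg gates need matching feature declarations in the crate manifest. A hypothetical Cargo.toml fragment (feature names and defaults assumed for illustration, not taken from MockForge's actual manifest):

```toml
[features]
# Enable both protocols by default; consumers can opt out with default-features = false.
default = ["grpc", "websocket"]
grpc = []       # gates the grpc module above
websocket = []  # gates the websocket module above
```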
Code Review Checklist
Before submitting code for review, ensure:
- Code is formatted with cargo fmt
- No clippy warnings remain
- All tests pass
- Documentation is updated
- No TODO comments left in production code
- Error messages are user-friendly
- Performance considerations are addressed
- Security implications are reviewed
Tools and Automation
Pre-commit Hooks
#!/bin/bash
# .git/hooks/pre-commit
# Format code
cargo fmt --check
if [ $? -ne 0 ]; then
echo "Code is not formatted. Run 'cargo fmt' to fix."
exit 1
fi
# Run clippy
cargo clippy -- -D warnings
if [ $? -ne 0 ]; then
echo "Clippy found issues. Fix them before committing."
exit 1
fi
# Run tests
cargo test
if [ $? -ne 0 ]; then
echo "Tests are failing. Fix them before committing."
exit 1
fi
CI Configuration
# .github/workflows/ci.yml
name: CI
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions-rs/toolchain@v1
with:
toolchain: stable
- name: Check formatting
run: cargo fmt --check
- name: Run clippy
run: cargo clippy -- -D warnings
- name: Run tests
run: cargo test --verbose
- name: Run security audit
run: cargo audit
This style guide ensures MockForge maintains high code quality and consistency across the entire codebase. Following these guidelines makes the code more readable, maintainable, and collaborative.
Testing Guidelines
This guide outlines the testing standards and practices for MockForge contributions. Quality testing ensures code reliability, prevents regressions, and maintains system stability.
Testing Philosophy
Testing Pyramid
MockForge follows a testing pyramid approach with different types of tests serving different purposes:
End-to-End Tests (E2E)
         ↑
  Integration Tests
         ↑
     Unit Tests
       (base)
- Unit Tests: Test individual functions and modules in isolation
- Integration Tests: Test component interactions and data flow
- End-to-End Tests: Test complete user workflows and system behavior
Testing Principles
- Test First: Write tests before implementation when possible
- Test Behavior: Test what the code does, not how it does it
- Test Boundaries: Focus on edge cases and error conditions
- Keep Tests Fast: Tests should run quickly to encourage frequent execution
- Make Tests Reliable: Tests should be deterministic and not flaky
Unit Testing Requirements
Test Coverage
All new code must include unit tests with the following minimum coverage:
- Functions: Test all public functions with valid inputs
- Error Cases: Test all error conditions and edge cases
- Branches: Test all conditional branches (if/else, match arms)
- Loops: Test loop boundaries (empty, single item, multiple items)
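As a minimal, self-contained illustration of the loop-boundary point (the `sum` function is invented for this example, not MockForge code):

```rust
// Hypothetical function under test (not from MockForge).
fn sum(values: &[i32]) -> i32 {
    values.iter().sum()
}

fn main() {
    // Loop boundaries: empty, single item, multiple items.
    assert_eq!(sum(&[]), 0);
    assert_eq!(sum(&[7]), 7);
    assert_eq!(sum(&[1, 2, 3]), 6);
    // Edge case: negative values cancel out.
    assert_eq!(sum(&[-1, 1]), 0);
    println!("all boundary cases pass");
}
```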
Test Structure
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_function_name_description() {
        // Given: Set up test data and preconditions
        let input = create_test_input();
        let expected = create_expected_output();

        // When: Execute the function under test
        let result = function_under_test(input);

        // Then: Verify the result matches expectations
        assert_eq!(result, expected);
    }

    #[test]
    fn test_function_name_error_case() {
        // Given: Set up error condition
        let invalid_input = create_invalid_input();

        // When: Execute the function
        let result = function_under_test(invalid_input);

        // Then: Verify error handling
        assert!(result.is_err());
        let error = result.unwrap_err();
        assert!(matches!(error, ExpectedError::Variant));
    }
}
Test Naming Conventions
// Good: Descriptive test names
#[test]
fn test_parse_openapi_spec_validates_required_fields() { ... }

#[test]
fn test_template_engine_handles_missing_variables() { ... }

#[test]
fn test_http_server_rejects_invalid_content_type() { ... }

// Bad: Non-descriptive names
#[test]
fn test_function() { ... }

#[test]
fn test_case_1() { ... }

#[test]
fn test_error() { ... }
Test Data Management
Test Fixtures
// Use shared test fixtures for common data
pub fn sample_openapi_spec() -> &'static str {
    r#"
openapi: 3.0.3
info:
  title: Test API
  version: 1.0.0
paths:
  /users:
    get:
      responses:
        '200':
          description: Success
"#
}

pub fn sample_user_data() -> User {
    User {
        id: "123".to_string(),
        name: "John Doe".to_string(),
        email: "john@example.com".to_string(),
    }
}
Test Utilities
// Create test utilities for common setup
pub struct TestServer {
    server_handle: Option<JoinHandle<()>>,
    base_url: String,
}

impl TestServer {
    pub async fn new() -> Self {
        // Start test server
        // Return configured instance
    }

    pub fn url(&self) -> &str {
        &self.base_url
    }
}

impl Drop for TestServer {
    fn drop(&mut self) {
        // Clean up server
    }
}
Integration Testing Standards
When to Write Integration Tests
Integration tests are required for:
- API Boundaries: HTTP endpoints, gRPC services, WebSocket connections
- Database Operations: Data persistence and retrieval
- External Services: Third-party API integrations
- File I/O: Configuration loading, fixture management
- Component Communication: Cross-crate interactions
Integration Test Structure
#[cfg(test)]
mod integration_tests {
    use mockforge_core::config::MockForgeConfig;

    #[tokio::test]
    async fn test_http_server_startup() {
        // Given: Configure test server
        let config = create_test_config();
        let server = HttpServer::new(config);

        // When: Start the server
        let addr = server.local_addr();
        tokio::spawn(async move {
            server.serve().await.unwrap();
        });

        // Wait for startup
        tokio::time::sleep(Duration::from_millis(100)).await;

        // Then: Verify server is responding
        let client = reqwest::Client::new();
        let response = client
            .get(format!("http://{}/health", addr))
            .send()
            .await
            .unwrap();
        assert_eq!(response.status(), 200);
    }
}
Database Testing
#[cfg(test)]
mod database_tests {
    use sqlx::PgPool;

    #[sqlx::test]
    async fn test_user_creation(pool: PgPool) {
        // Given: Clean database state
        sqlx::query!("DELETE FROM users").execute(&pool).await.unwrap();

        // When: Create a user
        let user_id = create_user(&pool, "test@example.com").await.unwrap();

        // Then: Verify user exists
        let user = sqlx::query!("SELECT * FROM users WHERE id = $1", user_id)
            .fetch_one(&pool)
            .await
            .unwrap();
        assert_eq!(user.email, "test@example.com");
    }
}
End-to-End Testing Requirements
E2E Test Scenarios
E2E tests must cover:
- Happy Path: Complete successful user workflows
- Error Recovery: System behavior under failure conditions
- Data Persistence: State changes across operations
- Performance: Response times and resource usage
- Security: Authentication and authorization flows
E2E Test Implementation
#[cfg(test)]
mod e2e_tests {
    use std::process::Command;
    use std::time::Duration;

    #[test]
    fn test_complete_api_workflow() {
        // Start MockForge server
        let mut server = Command::new("cargo")
            .args(&["run", "--release", "--", "serve", "--spec", "test-api.yaml"])
            .spawn()
            .unwrap();

        // Wait for server startup
        std::thread::sleep(Duration::from_secs(3));

        // Execute complete workflow
        let result = run_workflow_test();
        assert!(result.is_ok());

        // Cleanup
        server.kill().unwrap();
    }
}
Test Quality Standards
Code Coverage Requirements
- Minimum Coverage: 80% overall, 90% for critical paths
- Branch Coverage: All conditional branches must be tested
- Error Path Coverage: All error conditions must be tested
Performance Testing
#[cfg(test)]
mod performance_tests {
    use criterion::Criterion;

    fn benchmark_template_rendering(c: &mut Criterion) {
        let engine = TemplateEngine::new();
        c.bench_function("render_simple_template", |b| {
            b.iter(|| {
                engine.render("Hello {{name}}", &[("name", "World")]);
            })
        });
    }
}
Load Testing
#[cfg(test)]
mod load_tests {
    use tokio::time::{Duration, Instant};

    #[tokio::test]
    async fn test_concurrent_requests() {
        let client = reqwest::Client::new();
        let start = Instant::now();

        // Spawn 100 concurrent requests
        let handles: Vec<_> = (0..100).map(|_| {
            let client = client.clone();
            tokio::spawn(async move {
                client.get("http://localhost:3000/api/users")
                    .send()
                    .await
                    .unwrap()
            })
        }).collect();

        // Wait for all requests to complete
        for handle in handles {
            let response = handle.await.unwrap();
            assert_eq!(response.status(), 200);
        }

        let duration = start.elapsed();
        assert!(duration < Duration::from_secs(5), "Load test took too long: {:?}", duration);
    }
}
Testing Tools and Frameworks
Required Testing Dependencies
[dev-dependencies]
tokio-test = "0.4"
proptest = "1.0" # Property-based testing
criterion = "0.4" # Benchmarking
assert_cmd = "2.0" # CLI testing
predicates = "2.1" # Value assertions
tempfile = "3.0" # Temporary files
Mocking and Stubbing
#[cfg(test)]
mod mock_tests {
    use mockall::automock;
    use mockall::predicate::eq;

    #[automock]
    trait Database {
        async fn get_user(&self, id: i32) -> Result<User, Error>;
        async fn save_user(&self, user: User) -> Result<(), Error>;
    }

    #[tokio::test]
    async fn test_service_with_mocks() {
        let mut mock_db = MockDatabase::new();
        mock_db
            .expect_get_user()
            .with(eq(123))
            .returning(|_| Ok(User { id: 123, name: "Test".to_string() }));

        let service = UserService::new(mock_db);
        let user = service.get_user(123).await.unwrap();
        assert_eq!(user.name, "Test");
    }
}
Property-Based Testing
#[cfg(test)]
mod property_tests {
    use proptest::prelude::*;

    proptest! {
        #[test]
        fn test_template_rendering_with_random_input(
            input in "\\PC*",          // any printable (non-control) characters
            name in "[a-zA-Z]{1,10}"
        ) {
            let engine = TemplateEngine::new();
            let context = &[("name", &name)];
            // Should not panic regardless of input
            let _result = engine.render(&input, context);
        }
    }
}
Test Organization and Naming
File Structure
src/
├── lib.rs
├── module.rs
└── module/
├── mod.rs
└── submodule.rs
tests/
├── unit/
│ ├── module_tests.rs
│ └── submodule_tests.rs
├── integration/
│ ├── api_tests.rs
│ └── database_tests.rs
└── e2e/
├── workflow_tests.rs
└── performance_tests.rs
Test Module Organization
// tests/unit/template_tests.rs
#[cfg(test)]
mod template_tests {
    use mockforge_core::templating::TemplateEngine;
    // Unit tests for template functionality
}

// tests/integration/http_tests.rs
#[cfg(test)]
mod http_integration_tests {
    use mockforge_http::HttpServer;
    // Integration tests for HTTP server
}

// tests/e2e/api_workflow_tests.rs
#[cfg(test)]
mod e2e_tests {
    // End-to-end workflow tests
}
CI/CD Integration
GitHub Actions Testing
name: Test
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@stable
- name: Cache dependencies
uses: Swatinem/rust-cache@v2
- name: Check formatting
run: cargo fmt --check
- name: Run clippy
run: cargo clippy -- -D warnings
- name: Run tests
run: cargo test --verbose
- name: Run integration tests
run: cargo test --test integration
- name: Generate coverage
run: |
cargo install cargo-tarpaulin
cargo tarpaulin --out Xml --output-dir coverage
- name: Upload coverage
uses: codecov/codecov-action@v3
with:
file: coverage/cobertura.xml
Test Result Reporting
- name: Run tests with JUnit output
run: |
cargo install cargo2junit
cargo +nightly test -- -Z unstable-options --format json | cargo2junit > test-results.xml
- name: Publish test results
uses: EnricoMi/publish-unit-test-result-action@v2
with:
files: test-results.xml
Best Practices
Test Isolation
#[cfg(test)]
mod isolated_tests {
    use tempfile::TempDir;

    #[test]
    fn test_file_operations() {
        // Use temporary directory for isolation
        let temp_dir = TempDir::new().unwrap();
        let file_path = temp_dir.path().join("test.txt");

        // Test file operations
        write_test_file(&file_path);
        assert!(file_path.exists());

        // Cleanup happens automatically when temp_dir is dropped
    }
}
Test Data Management
#[cfg(test)]
mod test_data {
    use once_cell::sync::Lazy;

    static TEST_USERS: Lazy<Vec<User>> = Lazy::new(|| {
        vec![
            User { id: 1, name: "Alice".to_string() },
            User { id: 2, name: "Bob".to_string() },
        ]
    });

    #[test]
    fn test_user_operations() {
        let users = TEST_USERS.clone();
        // Use shared test data
    }
}
Asynchronous Testing
#[cfg(test)]
mod async_tests {
    use tokio::time::{timeout, Duration};

    #[tokio::test]
    async fn test_async_operation_with_timeout() {
        let result = timeout(Duration::from_secs(5), async_operation()).await;
        match result {
            Ok(Ok(data)) => assert!(data.is_valid()),
            Ok(Err(e)) => panic!("Operation failed: {}", e),
            Err(_) => panic!("Operation timed out"),
        }
    }

    #[tokio::test]
    async fn test_concurrent_operations() {
        let (result1, result2) = tokio::join!(
            operation1(),
            operation2()
        );
        assert!(result1.is_ok());
        assert!(result2.is_ok());
    }
}
Test Flakiness Prevention
#[cfg(test)]
mod reliable_tests {
    use std::time::Duration;

    #[test]
    fn test_with_retries() {
        let mut attempts = 0;
        let max_attempts = 3;
        loop {
            attempts += 1;
            match potentially_flaky_operation() {
                Ok(result) => {
                    assert!(result.is_valid());
                    break;
                }
                Err(e) if attempts < max_attempts => {
                    eprintln!("Attempt {} failed: {}, retrying...", attempts, e);
                    std::thread::sleep(Duration::from_millis(100));
                    continue;
                }
                Err(e) => panic!("Operation failed after {} attempts: {}", max_attempts, e),
            }
        }
    }
}
Security Testing
Input Validation Testing
#[cfg(test)]
mod security_tests {
    #[test]
    fn test_sql_injection_prevention() {
        let malicious_input = "'; DROP TABLE users; --";
        let result = sanitize_sql_input(malicious_input);
        assert!(!result.contains("DROP"));
        assert!(!result.contains(";"));
    }

    #[test]
    fn test_xss_prevention() {
        let malicious_input = "<script>alert('xss')</script>";
        let result = sanitize_html_input(malicious_input);
        assert!(!result.contains("<script>"));
        assert!(result.contains("&lt;script&gt;")); // escaped form is preserved
    }

    #[test]
    fn test_path_traversal_prevention() {
        let malicious_input = "../../../etc/passwd";
        let result = validate_file_path(malicious_input);
        assert!(result.is_err());
        assert!(matches!(result.unwrap_err(), ValidationError::PathTraversal));
    }
}
Authentication Testing
#[cfg(test)]
mod auth_tests {
    #[tokio::test]
    async fn test_unauthorized_access() {
        let client = create_test_client();
        let response = client
            .get("/admin/users")
            .send()
            .await
            .unwrap();
        assert_eq!(response.status(), 401);
    }

    #[tokio::test]
    async fn test_authorized_access() {
        let client = create_authenticated_client();
        let response = client
            .get("/admin/users")
            .send()
            .await
            .unwrap();
        assert_eq!(response.status(), 200);
    }
}
This comprehensive testing guide ensures MockForge maintains high quality and reliability through thorough automated testing at all levels.
Release Process
This guide outlines the complete process for releasing new versions of MockForge, from planning through deployment and post-release activities.
Release Planning
Version Numbering
MockForge follows Semantic Versioning (SemVer):
MAJOR.MINOR.PATCH[-PRERELEASE][+BUILD]
Examples:
- 1.0.0 (stable release)
- 1.1.0 (minor release with new features)
- 1.1.1 (patch release with bug fixes)
- 2.0.0-alpha.1 (pre-release)
- 1.0.0+20230912 (build metadata)
When to Increment
- MAJOR (X.0.0): Breaking changes to public API
- MINOR (X.Y.0): New features, backward compatible
- PATCH (X.Y.Z): Bug fixes, backward compatible
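The increment rules above can be made concrete with a small comparison helper. This is an illustrative, std-only sketch (pre-release tags and build metadata are not handled), not code from MockForge:

```rust
// Illustrative helper: parse "MAJOR.MINOR.PATCH" and decide compatibility.
// Shown only to make the SemVer bump rules concrete.
fn parse(v: &str) -> Option<(u64, u64, u64)> {
    let mut it = v.split('.');
    let maj = it.next()?.parse().ok()?;
    let min = it.next()?.parse().ok()?;
    let pat = it.next()?.parse().ok()?;
    Some((maj, min, pat))
}

/// An upgrade is backward compatible when the major version is unchanged
/// (and, for 0.x releases, SemVer makes no compatibility promise at all).
fn backward_compatible(from: &str, to: &str) -> bool {
    match (parse(from), parse(to)) {
        (Some(a), Some(b)) => a.0 == b.0 && a.0 != 0 && b >= a,
        _ => false,
    }
}

fn main() {
    assert!(backward_compatible("1.0.0", "1.1.0"));  // minor: new features, compatible
    assert!(!backward_compatible("1.1.1", "2.0.0")); // major: breaking changes
}
```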
Release Types
Major Releases
- Breaking API changes
- Major feature additions
- Architectural changes
- Extended testing period (2-4 weeks beta)
Minor Releases
- New features and enhancements
- Backward compatible API changes
- Standard testing period (1-2 weeks)
Patch Releases
- Critical bug fixes
- Security patches
- Documentation updates
- Minimal testing period (3-5 days)
Pre-releases
- Alpha/Beta/RC versions
- Feature previews
- Breaking change previews
- Limited distribution
Pre-Release Checklist
1. Code Quality Verification
# Run complete test suite
make test
# Run integration tests
make test-integration
# Run E2E tests
make test-e2e
# Check code quality
make lint
make format-check
# Security audit
cargo audit
# Check for unused dependencies
cargo +nightly udeps
# Performance benchmarks
make benchmark
2. Documentation Updates
# Update CHANGELOG.md with release notes
# Update version numbers in documentation
# Build and test documentation
make docs
make docs-serve
# Test documentation links
mdbook test
3. Version Bump
# Update version in Cargo.toml files
# Update version in package metadata
# Update version in documentation
# Example version bump script
#!/bin/bash
NEW_VERSION=$1
# Update workspace Cargo.toml
sed -i "s/^version = .*/version = \"$NEW_VERSION\"/" Cargo.toml
# Update all crate Cargo.toml files
find crates -name "Cargo.toml" -exec sed -i "s/^version = .*/version = \"$NEW_VERSION\"/" {} \;
# Update README and documentation version references
sed -i "s/mockforge [0-9]\+\.[0-9]\+\.[0-9]\+/mockforge $NEW_VERSION/g" README.md
4. Branch Management
# Create release branch
git checkout -b release/v$NEW_VERSION
# Cherry-pick approved commits
# Or merge from develop/main
# Tag the release
git tag -a v$NEW_VERSION -m "Release version $NEW_VERSION"
# Push branch and tag
git push origin release/v$NEW_VERSION
git push origin v$NEW_VERSION
Release Build Process
1. Build Verification
# Clean build
cargo clean
# Build all targets
cargo build --release --all-targets
# Build specific platforms if needed
cargo build --release --target x86_64-unknown-linux-gnu
cargo build --release --target x86_64-apple-darwin
cargo build --release --target x86_64-pc-windows-msvc
# Test release build
./target/release/mockforge-cli --version
2. Binary Distribution
Linux/macOS Packages
# Strip debug symbols
strip target/release/mockforge-cli
# Create distribution archives
VERSION=1.0.0
tar -czf mockforge-v${VERSION}-x86_64-linux.tar.gz \
-C target/release mockforge-cli
tar -czf mockforge-v${VERSION}-x86_64-macos.tar.gz \
-C target/release mockforge-cli
Debian Packages
# Install cargo-deb
cargo install cargo-deb
# Build .deb package
cargo deb
# Test package installation
sudo dpkg -i target/debian/mockforge_*.deb
Docker Images
# Dockerfile.release
FROM rust:1.70-slim AS builder
WORKDIR /app
COPY . .
RUN cargo build --release
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/mockforge-cli /usr/local/bin/mockforge-cli
EXPOSE 3000 3001 50051 8080
CMD ["mockforge-cli", "serve"]
# Build and push Docker image
docker build -f Dockerfile.release -t mockforge:$VERSION .
docker tag mockforge:$VERSION mockforge:latest
docker push mockforge:$VERSION
docker push mockforge:latest
3. Cross-Platform Builds
# Use cross for cross-compilation
cargo install cross
# Build for different architectures
cross build --release --target aarch64-unknown-linux-gnu
cross build --release --target x86_64-unknown-linux-musl
# Create release archives for each platform
for target in x86_64-unknown-linux-gnu aarch64-unknown-linux-gnu x86_64-apple-darwin x86_64-pc-windows-msvc; do
cross build --release --target $target
if [[ $target == *"windows"* ]]; then
zip -j mockforge-$VERSION-$target.zip target/$target/release/mockforge-cli.exe
else
tar -czf mockforge-$VERSION-$target.tar.gz -C target/$target/release mockforge-cli
fi
done
Release Deployment
1. GitHub Release
# Create GitHub release (manual or automated)
gh release create v$VERSION \
--title "MockForge v$VERSION" \
--notes-file release-notes.md \
--draft
# Upload release assets
gh release upload v$VERSION \
mockforge-v$VERSION-x86_64-linux.tar.gz \
mockforge-v$VERSION-x86_64-macos.tar.gz \
mockforge-v$VERSION-x86_64-windows.zip \
mockforge_${VERSION}_amd64.deb
# Publish release
gh release edit v$VERSION --draft=false
2. Package Registries
Crates.io Publication
# Publish all crates to crates.io
# Note: Must be done in dependency order
# Publish core first
cd crates/mockforge-core
cargo publish
# Then other crates
cd ../mockforge-http
cargo publish
cd ../mockforge-ws
cargo publish
cd ../mockforge-grpc
cargo publish
cd ../mockforge-data
cargo publish
cd ../mockforge-ui
cargo publish
# Finally CLI
cd ../mockforge-cli
cargo publish
Docker Hub
# Tag and push Docker images
docker tag mockforge:$VERSION mockforge/mockforge:$VERSION
docker tag mockforge:$VERSION mockforge/mockforge:latest
docker push mockforge/mockforge:$VERSION
docker push mockforge/mockforge:latest
3. Homebrew (macOS)
# Formula/mockforge.rb
class Mockforge < Formula
desc "Advanced API Mocking Platform"
homepage "https://github.com/SaaSy-Solutions/mockforge"
url "https://github.com/SaaSy-Solutions/mockforge/releases/download/v#{version}/mockforge-v#{version}-x86_64-macos.tar.gz"
sha256 "..."
def install
bin.install "mockforge-cli"
end
test do
system "#{bin}/mockforge-cli", "--version"
end
end
4. Package Managers
APT Repository (Ubuntu/Debian)
# Set up PPA or repository
# Upload .deb packages
# Update package indices
Snapcraft
# snapcraft.yaml
name: mockforge
version: '1.0.0'
summary: Advanced API Mocking Platform
description: |
MockForge is a comprehensive API mocking platform supporting HTTP, WebSocket, and gRPC protocols.
grade: stable
confinement: strict
apps:
mockforge:
command: mockforge-cli
plugs: [network, network-bind]
parts:
mockforge:
plugin: rust
source: .
build-packages: [pkg-config, libssl-dev]
Post-Release Activities
1. Announcement
GitHub Release Notes
## What's New in MockForge v1.0.0
### 🚀 Major Features
- Multi-protocol support (HTTP, WebSocket, gRPC)
- Advanced templating system
- Web-based admin UI
- Comprehensive testing framework
### 🐛 Bug Fixes
- Fixed template rendering performance
- Resolved WebSocket connection stability
- Improved error messages
### 📚 Documentation
- Complete API reference
- Getting started guides
- Troubleshooting documentation
### 🤝 Contributors
Special thanks to all contributors!
### 🔗 Links
- [Documentation](https://docs.mockforge.dev)
- [GitHub Repository](https://github.com/SaaSy-Solutions/mockforge)
- [Issue Tracker](https://github.com/SaaSy-Solutions/mockforge/issues)
Social Media & Community
# Post to social media
# Update Discord/Slack channels
# Send email newsletter
# Update website/blog
2. Monitoring & Support
Release Health Checks
# Monitor installation success
# Check for immediate bug reports
# Monitor CI/CD pipelines
# Track adoption metrics
# Example monitoring script
#!/bin/bash
VERSION=$1
# Check GitHub release downloads
gh release view v$VERSION --json assets -q '.assets[].downloadCount'
# Check crates.io download stats
curl -s "https://crates.io/api/v1/crates/mockforge-cli/downloads" | jq '.versions[0].downloads'
# Monitor error reports
gh issue list --label bug --state open --limit 10
Support Channels
- GitHub Issues: Bug reports and feature requests
- GitHub Discussions: General questions and support
- Discord/Slack: Real-time community support
- Documentation: Self-service troubleshooting
3. Follow-up Releases
Hotfix Process
For critical issues discovered post-release:
# Create hotfix branch from release tag
git checkout -b hotfix/critical-bug-fix v1.0.0
# Apply fix
# Write test
# Update CHANGELOG
# Create patch release
NEW_VERSION=1.0.1
git tag -a v$NEW_VERSION -m "Hotfix release $NEW_VERSION"
git push origin v$NEW_VERSION
# Deploy hotfix
4. Analytics & Metrics
Release Metrics
- Download counts across platforms
- Installation success rates
- User adoption and usage patterns
- Performance benchmarks vs previous versions
- Community feedback and sentiment
Continuous Improvement
# Post-release retrospective template
## Release Summary
- Version: v1.0.0
- Release Date: YYYY-MM-DD
- Duration: X weeks
## What Went Well
- [ ] Smooth release process
- [ ] No critical bugs found
- [ ] Good community reception
## Areas for Improvement
- [ ] Documentation could be clearer
- [ ] Testing took longer than expected
- [ ] More platform support needed
## Action Items
- [ ] Improve release documentation
- [ ] Automate more of the process
- [ ] Add more platform builds
Release Automation
GitHub Actions Release Workflow
# .github/workflows/release.yml
name: Release
on:
push:
tags:
- 'v*'
jobs:
release:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set version
run: echo "VERSION=${GITHUB_REF#refs/tags/v}" >> $GITHUB_ENV
- name: Build release binaries
run: |
cargo build --release
strip target/release/mockforge-cli
- name: Create release archives
run: |
tar -czf mockforge-${VERSION}-linux-x64.tar.gz -C target/release mockforge-cli
zip -j mockforge-${VERSION}-linux-x64.zip target/release/mockforge-cli
- name: Create GitHub release
id: create_release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ github.ref }}
release_name: MockForge ${{ env.VERSION }}
body: |
## What's New
See [CHANGELOG.md](CHANGELOG.md) for details.
## Downloads
- Linux x64: [mockforge-${{ env.VERSION }}-linux-x64.tar.gz](mockforge-${{ env.VERSION }}-linux-x64.tar.gz)
draft: false
prerelease: false
- name: Upload release assets
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./mockforge-${{ env.VERSION }}-linux-x64.tar.gz
asset_name: mockforge-${{ env.VERSION }}-linux-x64.tar.gz
asset_content_type: application/gzip
Automated Publishing
# Publish to crates.io on release
- name: Publish to crates.io
run: cargo publish --token ${{ secrets.CRATES_IO_TOKEN }}
if: startsWith(github.ref, 'refs/tags/')
# Build and push Docker image
- name: Build and push Docker image
uses: docker/build-push-action@v3
with:
context: .
push: true
tags: mockforge/mockforge:${{ env.VERSION }},mockforge/mockforge:latest
Emergency Releases
Security Vulnerabilities
For security issues requiring immediate release:
- Assess Severity: Determine CVSS score and impact
- Develop Fix: Create minimal fix with comprehensive tests
- Bypass Normal Process: Skip extended testing for critical security fixes
- Accelerated Release: 24-48 hour release cycle
- Public Disclosure: Coordinate with security community
Critical Bug Fixes
For show-stopping bugs affecting production:
- Immediate Assessment: Evaluate user impact and severity
- Rapid Development: 1-2 day fix development
- Limited Testing: Focus on regression and critical path tests
- Fast-Track Release: 3-5 day release cycle
This comprehensive release process ensures MockForge releases are reliable, well-tested, and properly distributed across all supported platforms and package managers.
Configuration Schema
MockForge supports comprehensive configuration through YAML files. This schema reference documents all available configuration options, their types, defaults, and usage examples.
File Format
Configuration files use YAML format with the following structure:
# Top-level configuration sections
server: # Server port and binding configuration
admin: # Admin UI settings
validation: # Request validation settings
response: # Response processing options
chaos: # Chaos engineering features
grpc: # gRPC-specific settings
websocket: # WebSocket-specific settings
logging: # Logging configuration
Server Configuration
server.http_port
(integer, default: 3000)
HTTP server port for REST API endpoints.
server:
http_port: 8080
server.ws_port
(integer, default: 3001)
WebSocket server port for real-time connections.
server:
ws_port: 8081
server.grpc_port
(integer, default: 50051)
gRPC server port for protocol buffer services.
server:
grpc_port: 9090
server.bind
(string, default: "0.0.0.0")
Network interface to bind servers to.
server:
bind: "127.0.0.1" # Bind to localhost only
Admin UI Configuration
admin.enabled
(boolean, default: false)
Enable the web-based admin interface.
admin:
enabled: true
admin.port
(integer, default: 8080)
Port for the admin UI server.
admin:
port: 9090
admin.embedded
(boolean, default: false)
Embed admin UI under the main HTTP server instead of running standalone.
admin:
embedded: true
admin.mount_path
(string, default: "/admin")
URL path where embedded admin UI is accessible.
admin:
embedded: true
mount_path: "/mockforge-admin"
admin.standalone
(boolean, default: true)
Force standalone admin UI server (overrides embedded setting).
admin:
standalone: true
admin.disable_api
(boolean, default: false)
Disable admin API endpoints while keeping the UI interface.
admin:
disable_api: false
Validation Configuration
validation.mode
(string, default: "enforce")
Request validation mode. Options: "off", "warn", "enforce"
validation:
mode: warn # Log warnings but allow invalid requests
validation.aggregate_errors
(boolean, default: false)
Combine multiple validation errors into a single JSON array response.
validation:
aggregate_errors: true
validation.validate_responses
(boolean, default: false)
Validate response payloads against OpenAPI schemas (warn-only).
validation:
validate_responses: true
validation.status_code
(integer, default: 400)
HTTP status code to return for validation errors.
validation:
status_code: 422 # Use 422 Unprocessable Entity
validation.skip_admin_validation
(boolean, default: true)
Skip validation for admin UI routes.
validation:
skip_admin_validation: true
validation.overrides
(object)
Per-route validation overrides.
validation:
overrides:
"/api/users": "off" # Disable validation for this route
"/api/admin/**": "warn" # Warning mode for admin routes
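The lookup order for overrides can be pictured as: exact path first, then "/prefix/**" wildcard patterns, then the global mode. The sketch below is a hypothetical std-only illustration; MockForge's actual matcher may differ:

```rust
// Illustrative per-route override lookup: exact match wins, then a
// "/prefix/**" wildcard, then the configured default mode.
fn mode_for<'a>(overrides: &[(&'a str, &'a str)], path: &str, default: &'a str) -> &'a str {
    // Exact-path override wins.
    if let Some(&(_, m)) = overrides.iter().find(|&&(p, _)| p == path) {
        return m;
    }
    // Fall back to "/prefix/**" wildcard overrides.
    for &(pattern, m) in overrides {
        if let Some(prefix) = pattern.strip_suffix("/**") {
            if path.starts_with(prefix) {
                return m;
            }
        }
    }
    default
}

fn main() {
    let overrides = [("/api/users", "off"), ("/api/admin/**", "warn")];
    assert_eq!(mode_for(&overrides, "/api/users", "enforce"), "off");
    assert_eq!(mode_for(&overrides, "/api/admin/keys", "enforce"), "warn");
    assert_eq!(mode_for(&overrides, "/api/orders", "enforce"), "enforce");
}
```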
Response Configuration
response.template_expand
(boolean, default: false)
Enable template variable expansion in responses.
response:
template_expand: true
response.caching
(object)
Response caching configuration.
response:
caching:
enabled: true
ttl_seconds: 300
max_size_mb: 100
Chaos Engineering
chaos.latency_enabled
(boolean, default: false)
Enable response latency simulation.
chaos:
latency_enabled: true
chaos.latency_min_ms
(integer, default: 0)
Minimum response latency in milliseconds.
chaos:
latency_min_ms: 100
chaos.latency_max_ms
(integer, default: 1000)
Maximum response latency in milliseconds.
chaos:
latency_max_ms: 2000
chaos.failures_enabled
(boolean, default: false)
Enable random failure injection.
chaos:
failures_enabled: true
chaos.failure_rate
(float, default: 0.0)
Probability of random failures (0.0 to 1.0).
chaos:
failure_rate: 0.05 # 5% failure rate
chaos.failure_status_codes
(array of integers)
HTTP status codes to return for injected failures.
chaos:
failure_status_codes: [500, 502, 503, 504]
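Conceptually, the chaos settings combine as "draw a latency uniformly from [min, max], and fail with probability failure_rate". The sketch below is illustrative only (it uses a toy deterministic PRNG to stay dependency-free) and is not MockForge's implementation:

```rust
// Illustrative chaos-injection sketch. A linear congruential generator
// stands in for a real RNG so the example stays std-only.
struct Chaos {
    latency_min_ms: u64,
    latency_max_ms: u64,
    failure_rate: f64,
    seed: u64,
}

impl Chaos {
    // LCG step; returns a pseudo-random value in [0, 1).
    fn next(&mut self) -> f64 {
        self.seed = self.seed
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.seed >> 11) as f64 / (1u64 << 53) as f64
    }

    /// Latency uniformly drawn from [latency_min_ms, latency_max_ms].
    fn latency_ms(&mut self) -> u64 {
        let span = self.latency_max_ms - self.latency_min_ms;
        self.latency_min_ms + (self.next() * (span as f64 + 1.0)) as u64
    }

    /// True with probability failure_rate.
    fn inject_failure(&mut self) -> bool {
        self.next() < self.failure_rate
    }
}

fn main() {
    let mut chaos = Chaos { latency_min_ms: 100, latency_max_ms: 2000, failure_rate: 0.05, seed: 42 };
    let d = chaos.latency_ms();
    assert!((100..=2000).contains(&d));
    let _failed = chaos.inject_failure(); // ~5% of calls return true
}
```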
gRPC Configuration
grpc.proto_dir
(string, default: "proto/")
Directory containing Protocol Buffer files.
grpc:
proto_dir: "my-protos/"
grpc.enable_reflection
(boolean, default: true)
Enable gRPC server reflection for service discovery.
grpc:
enable_reflection: true
grpc.excluded_services
(array of strings)
gRPC services to exclude from automatic registration.
grpc:
excluded_services:
- "grpc.reflection.v1alpha.ServerReflection"
grpc.max_message_size
(integer, default: 4194304)
Maximum message size in bytes (4MB default).
grpc:
max_message_size: 8388608 # 8MB
grpc.concurrency_limit
(integer, default: 32)
Maximum concurrent requests per connection.
grpc:
concurrency_limit: 64
WebSocket Configuration
websocket.replay_file
(string)
Path to WebSocket replay file for scripted interactions.
websocket:
replay_file: "examples/ws-demo.jsonl"
websocket.max_connections
(integer, default: 1000)
Maximum concurrent WebSocket connections.
websocket:
max_connections: 500
websocket.message_timeout
(integer, default: 30000)
Timeout for WebSocket messages in milliseconds.
websocket:
message_timeout: 60000
websocket.heartbeat_interval
(integer, default: 30000)
Heartbeat interval for long-running connections.
websocket:
heartbeat_interval: 45000
Logging Configuration
logging.level
(string, default: "info")
Log level. Options: "error", "warn", "info", "debug", "trace"
logging:
level: debug
logging.format
(string, default: "text")
Log output format. Options: "text", "json"
logging:
format: json
logging.file
(string)
Path to log file (if not specified, logs to stdout).
logging:
file: "/var/log/mockforge.log"
logging.max_size_mb
(integer, default: 10)
Maximum log file size in megabytes before rotation.
logging:
max_size_mb: 50
logging.max_files
(integer, default: 5)
Maximum number of rotated log files to keep.
logging:
max_files: 10
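Together, max_size_mb and max_files describe size-based rotation: when the active log exceeds the size limit it becomes the .1 file, older rotations shift up, and anything past the retention limit is dropped. The sketch below only models the resulting file names and is illustrative, not MockForge's rotation code:

```rust
// Illustrative rotation naming: the active log becomes <base>.1, existing
// rotations shift to .2, .3, ..., and names past max_files are dropped.
fn rotate(files: &mut Vec<String>, base: &str, max_files: usize) {
    let mut rotated: Vec<String> = Vec::new();
    rotated.push(format!("{base}.1")); // current file becomes .1
    for i in 2..=files.len() + 1 {
        rotated.push(format!("{base}.{i}")); // existing .N shifts to .N+1
    }
    rotated.truncate(max_files); // drop rotations beyond the retention limit
    *files = rotated;
}

fn main() {
    let mut files = vec!["mockforge.log.1".to_string(), "mockforge.log.2".to_string()];
    rotate(&mut files, "mockforge.log", 3);
    assert_eq!(files, ["mockforge.log.1", "mockforge.log.2", "mockforge.log.3"]);
    rotate(&mut files, "mockforge.log", 3); // retention cap: the would-be .4 is dropped
    assert_eq!(files.len(), 3);
}
```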
Complete Configuration Example
# Complete MockForge configuration example
server:
http_port: 3000
ws_port: 3001
grpc_port: 50051
bind: "0.0.0.0"
admin:
enabled: true
port: 8080
embedded: false
standalone: true
validation:
mode: enforce
aggregate_errors: false
validate_responses: false
status_code: 400
response:
template_expand: true
chaos:
latency_enabled: false
failures_enabled: false
grpc:
proto_dir: "proto/"
enable_reflection: true
max_message_size: 4194304
websocket:
replay_file: "examples/ws-demo.jsonl"
max_connections: 1000
logging:
level: info
format: text
Configuration Precedence
Configuration values are applied in order of priority (highest to lowest):
- Command-line arguments - Override all other settings
- Environment variables - Override config file settings
- Configuration file - Default values from YAML file
- Compiled defaults - Built-in fallback values
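For a single option, the precedence chain reads naturally as a fallback sequence. resolve_http_port below is a hypothetical helper written to illustrate the order, not MockForge's internal API:

```rust
use std::env;

// Illustrative precedence resolution for one option: CLI flag, then
// environment variable, then config-file value, then the compiled default.
fn resolve_http_port(cli: Option<u16>, file: Option<u16>) -> u16 {
    cli.or_else(|| {
            env::var("MOCKFORGE_SERVER_HTTP_PORT")
                .ok()
                .and_then(|v| v.parse().ok())
        })
        .or(file)
        .unwrap_or(3000) // compiled default
}

fn main() {
    // A CLI flag overrides a config-file value...
    assert_eq!(resolve_http_port(Some(9999), Some(8080)), 9999);
    // ...and a file value overrides the compiled default.
    assert_eq!(resolve_http_port(None, Some(8080)), 8080);
}
```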
Environment Variable Mapping
All configuration options can be set via environment variables using the MOCKFORGE_ prefix with underscore-separated paths:
# Server configuration
export MOCKFORGE_SERVER_HTTP_PORT=8080
export MOCKFORGE_SERVER_BIND="127.0.0.1"
# Admin UI
export MOCKFORGE_ADMIN_ENABLED=true
export MOCKFORGE_ADMIN_PORT=9090
# Validation
export MOCKFORGE_VALIDATION_MODE=warn
export MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true
# Protocol-specific
export MOCKFORGE_GRPC_PROTO_DIR="my-protos/"
export MOCKFORGE_WEBSOCKET_REPLAY_FILE="replay.jsonl"
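The mapping rule itself is mechanical, as this small illustrative helper shows (the names it produces match the examples above):

```rust
// Illustrative mapping from a dotted config path to its environment
// variable name under the MOCKFORGE_ prefix.
fn env_key(path: &str) -> String {
    format!("MOCKFORGE_{}", path.replace('.', "_").to_uppercase())
}

fn main() {
    assert_eq!(env_key("server.http_port"), "MOCKFORGE_SERVER_HTTP_PORT");
    assert_eq!(env_key("validation.mode"), "MOCKFORGE_VALIDATION_MODE");
}
```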
Validation
MockForge validates configuration files at startup and reports errors clearly:
# Validate configuration without starting server
mockforge-cli validate-config config.yaml
# Check for deprecated options
mockforge-cli validate-config --check-deprecated config.yaml
Hot Reloading
Some configuration options support runtime updates without restart:
- Validation mode changes
- Template expansion toggle
- Admin UI settings
- Logging level adjustments
# Update validation mode at runtime
curl -X POST http://localhost:8080/__mockforge/config \
-H "Content-Type: application/json" \
-d '{"validation": {"mode": "warn"}}'
Best Practices
Development Configuration
# development.yaml
server:
http_port: 3000
ws_port: 3001
admin:
enabled: true
embedded: true
validation:
mode: warn
response:
template_expand: true
logging:
level: debug
Production Configuration
# production.yaml
server:
http_port: 8080
bind: "127.0.0.1"
admin:
enabled: true
standalone: true
port: 9090
validation:
mode: enforce
chaos:
latency_enabled: false
failures_enabled: false
logging:
level: warn
file: "/var/log/mockforge.log"
Testing Configuration
# test.yaml
server:
http_port: 3000
validation:
mode: "off"  # quoted so YAML does not parse it as boolean false
response:
template_expand: true
logging:
level: debug
Migration Guide
Upgrading from CLI-only Configuration
If migrating from command-line only configuration:
- Create a config.yaml file with your current settings
- Test the configuration with mockforge-cli validate-config
- Gradually move settings from environment variables to the config file
- Update deployment scripts to use the config file
Version Compatibility
Configuration options may change between versions. Check the changelog for breaking changes and use the validation command to identify deprecated options:
mockforge-cli validate-config --check-deprecated config.yaml
This schema provides comprehensive control over MockForge's behavior across all protocols and features.
Supported Formats
MockForge supports various data formats for configuration, specifications, and data exchange. This reference documents all supported formats, their usage, and conversion utilities.
OpenAPI Specifications
JSON Format (Primary)
MockForge primarily supports OpenAPI 3.0+ specifications in JSON format:
{
"openapi": "3.0.3",
"info": {
"title": "User API",
"version": "1.0.0"
},
"paths": {
"/users": {
"get": {
"summary": "List users",
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/User"
}
}
}
}
}
}
}
}
},
"components": {
"schemas": {
"User": {
"type": "object",
"properties": {
"id": {"type": "string"},
"name": {"type": "string"},
"email": {"type": "string"}
}
}
}
}
}
YAML Format (Alternative)
OpenAPI specifications can also be provided in YAML format:
openapi: 3.0.3
info:
title: User API
version: 1.0.0
paths:
/users:
get:
summary: List users
responses:
'200':
description: Success
content:
application/json:
schema:
type: array
items:
$ref: '#/components/schemas/User'
components:
schemas:
User:
type: object
properties:
id:
type: string
name:
type: string
email:
type: string
Conversion Between Formats
# Convert JSON to YAML
node -e "
const fs = require('fs');
const yaml = require('js-yaml');
const spec = JSON.parse(fs.readFileSync('api.json', 'utf8'));
fs.writeFileSync('api.yaml', yaml.dump(spec));
"
# Convert YAML to JSON
node -e "
const fs = require('fs');
const yaml = require('js-yaml');
const spec = yaml.load(fs.readFileSync('api.yaml', 'utf8'));
fs.writeFileSync('api.json', JSON.stringify(spec, null, 2));
"
Protocol Buffers
.proto Files
gRPC services use Protocol Buffer definitions:
syntax = "proto3";
package myapp.user;
service UserService {
rpc GetUser(GetUserRequest) returns (User);
rpc ListUsers(ListUsersRequest) returns (stream User);
rpc CreateUser(CreateUserRequest) returns (User);
}
message GetUserRequest {
string user_id = 1;
}
message User {
string user_id = 1;
string name = 2;
string email = 3;
google.protobuf.Timestamp created_at = 4;
}
message ListUsersRequest {
int32 page_size = 1;
string page_token = 2;
}
message CreateUserRequest {
string name = 1;
string email = 2;
}
Generated Code
MockForge automatically generates Rust code from .proto files:
// Generated code structure
pub mod myapp {
    pub mod user {
        tonic::include_proto!("myapp.user");

        // Generated service trait
        #[tonic::async_trait]
        pub trait UserService: Send + Sync + 'static {
            async fn get_user(
                &self,
                request: tonic::Request<GetUserRequest>,
            ) -> Result<tonic::Response<User>, tonic::Status>;

            async fn list_users(
                &self,
                request: tonic::Request<ListUsersRequest>,
            ) -> Result<tonic::Response<Self::ListUsersStream>, tonic::Status>;
        }
    }
}
WebSocket Replay Files
JSONL Format
WebSocket interactions use JSON Lines format:
{"ts":0,"dir":"out","text":"Welcome to chat!","waitFor":"^HELLO$"}
{"ts":1000,"dir":"out","text":"How can I help you?"}
{"ts":2000,"dir":"out","text":"Please wait while I process your request..."}
{"ts":5000,"dir":"out","text":"Here's your response: ..."}
Extended JSONL with Templates
{"ts":0,"dir":"out","text":"Session {{uuid}} started at {{now}}"}
{"ts":1000,"dir":"out","text":"Connected to server {{server_id}}"}
{"ts":2000,"dir":"out","text":"{{#if authenticated}}Welcome back!{{else}}Please authenticate{{/if}}"}
Binary Message Support
{"ts":0,"dir":"out","text":"iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg==","binary":true}
{"ts":1000,"dir":"out","text":"Image data sent"}
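Assuming each entry's ts is an offset in milliseconds from connection start (as the examples above suggest), a replay engine sends each message after the gap to the previous entry. An illustrative sketch of that scheduling, with JSONL parsing omitted:

```rust
use std::time::Duration;

// Illustrative replay scheduling: ts values are offsets from connection
// start, so the delay before each message is the gap to the previous entry.
fn delays(timestamps_ms: &[u64]) -> Vec<Duration> {
    let mut prev = 0u64;
    timestamps_ms
        .iter()
        .map(|&ts| {
            let d = Duration::from_millis(ts.saturating_sub(prev));
            prev = ts;
            d
        })
        .collect()
}

fn main() {
    // ts values from the replay example above: 0, 1000, 2000, 5000.
    let d = delays(&[0, 1000, 2000, 5000]);
    assert_eq!(d[3], Duration::from_millis(3000)); // 5000 - 2000
}
```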
Configuration Files
YAML Configuration
MockForge uses YAML for configuration files:
# Server configuration
server:
http_port: 3000
ws_port: 3001
grpc_port: 50051
# Validation settings
validation:
mode: enforce
aggregate_errors: false
# Response processing
response:
template_expand: true
# Protocol-specific settings
grpc:
proto_dir: "proto/"
enable_reflection: true
websocket:
replay_file: "examples/demo.jsonl"
JSON Configuration (Alternative)
Configuration can also be provided as JSON:
{
"server": {
"http_port": 3000,
"ws_port": 3001,
"grpc_port": 50051
},
"validation": {
"mode": "enforce",
"aggregate_errors": false
},
"response": {
"template_expand": true
},
"grpc": {
"proto_dir": "proto/",
"enable_reflection": true
},
"websocket": {
"replay_file": "examples/demo.jsonl"
}
}
Data Generation Formats
JSON Output
Generated test data in JSON format:
[
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"name": "John Doe",
"email": "john.doe@example.com",
"created_at": "2025-09-12T10:00:00Z"
},
{
"id": "550e8400-e29b-41d4-a716-446655440001",
"name": "Jane Smith",
"email": "jane.smith@example.com",
"created_at": "2025-09-12T11:00:00Z"
}
]
YAML Output
Same data in YAML format:
- id: 550e8400-e29b-41d4-a716-446655440000
  name: John Doe
  email: john.doe@example.com
  created_at: '2025-09-12T10:00:00Z'
- id: 550e8400-e29b-41d4-a716-446655440001
  name: Jane Smith
  email: jane.smith@example.com
  created_at: '2025-09-12T11:00:00Z'
CSV Output
Tabular data in CSV format:
id,name,email,created_at
550e8400-e29b-41d4-a716-446655440000,John Doe,john.doe@example.com,2025-09-12T10:00:00Z
550e8400-e29b-41d4-a716-446655440001,Jane Smith,jane.smith@example.com,2025-09-12T11:00:00Z
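If you need to produce the CSV yourself from the JSON above, Python's standard `csv` module is enough. A sketch (the `users.json`/`users.csv` filenames are hypothetical):

```shell
# Save the JSON array from above (illustrative filename)
cat > users.json <<'EOF'
[
  {"id": "550e8400-e29b-41d4-a716-446655440000", "name": "John Doe",
   "email": "john.doe@example.com", "created_at": "2025-09-12T10:00:00Z"},
  {"id": "550e8400-e29b-41d4-a716-446655440001", "name": "Jane Smith",
   "email": "jane.smith@example.com", "created_at": "2025-09-12T11:00:00Z"}
]
EOF

# Convert the array of objects to CSV using only the standard library
python3 - <<'EOF'
import csv, json

rows = json.load(open("users.json"))
with open("users.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
EOF

head -1 users.csv   # prints the header row
```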
Log Formats
Text Format (Default)
Human-readable log output:
2025-09-12T10:00:00Z INFO mockforge::http: Server started on 0.0.0.0:3000
2025-09-12T10:00:01Z INFO mockforge::http: Request: GET /users
2025-09-12T10:00:01Z DEBUG mockforge::template: Template expanded: {{uuid}} -> 550e8400-e29b-41d4-a716-446655440000
2025-09-12T10:00:01Z INFO mockforge::http: Response: 200 OK
JSON Format
Structured JSON logging:
{"timestamp":"2025-09-12T10:00:00Z","level":"INFO","module":"mockforge::http","message":"Server started on 0.0.0.0:3000"}
{"timestamp":"2025-09-12T10:00:01Z","level":"INFO","module":"mockforge::http","message":"Request: GET /users","method":"GET","path":"/users","user_agent":"curl/7.68.0"}
{"timestamp":"2025-09-12T10:00:01Z","level":"DEBUG","module":"mockforge::template","message":"Template expanded","template":"{{uuid}}","result":"550e8400-e29b-41d4-a716-446655440000"}
{"timestamp":"2025-09-12T10:00:01Z","level":"INFO","module":"mockforge::http","message":"Response: 200 OK","status":200,"duration_ms":15}
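Because each line is a standalone JSON object, structured logs are easy to query. A sketch using only the standard library (`jq` works just as well if installed); the log content, threshold, and filename are illustrative:

```shell
# Two sample lines in the structured format shown above
cat > mockforge.log <<'EOF'
{"timestamp":"2025-09-12T10:00:00Z","level":"INFO","module":"mockforge::http","message":"Server started on 0.0.0.0:3000"}
{"timestamp":"2025-09-12T10:00:01Z","level":"INFO","module":"mockforge::http","message":"Response: 200 OK","status":200,"duration_ms":15}
EOF

# Print any request slower than 10 ms
python3 - <<'EOF'
import json

for line in open("mockforge.log"):
    entry = json.loads(line)
    if entry.get("duration_ms", 0) > 10:
        print(entry["timestamp"], entry["message"], f'({entry["duration_ms"]} ms)')
EOF
```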
Template Syntax
Handlebars Templates
MockForge uses Handlebars-style templates:
{{variable}}
{{object.property}}
{{array.[0]}}
{{#if condition}}content{{/if}}
{{#each items}}{{this}}{{/each}}
{{helper arg1 arg2}}
Built-in Helpers
<!-- Data generation -->
{{uuid}} <!-- Random UUID -->
{{now}} <!-- Current timestamp -->
{{now+1h}} <!-- Future timestamp -->
{{randInt 1 100}} <!-- Random integer -->
{{randFloat 0.0 1.0}} <!-- Random float -->
{{randWord}} <!-- Random word -->
{{randSentence}} <!-- Random sentence -->
{{randParagraph}} <!-- Random paragraph -->
<!-- Request context -->
{{request.path.id}} <!-- URL path parameter -->
{{request.query.limit}} <!-- Query parameter -->
{{request.header.auth}} <!-- HTTP header -->
{{request.body.name}} <!-- Request body field -->
<!-- Logic helpers -->
{{#if user.authenticated}}
Welcome back, {{user.name}}!
{{else}}
Please log in.
{{/if}}
{{#each users}}
<li>{{name}} - {{email}}</li>
{{/each}}
Conversion Utilities
Format Conversion Scripts
#!/bin/bash
# convert-format.sh - Convert between supported formats

input_file=$1
output_format=$2

case $output_format in
  "yaml")
    python3 -c "
import sys, yaml, json
data = json.load(sys.stdin)
yaml.dump(data, sys.stdout, default_flow_style=False)
" < "$input_file"
    ;;
  "json")
    python3 -c "
import sys, yaml, json
data = yaml.safe_load(sys.stdin)
json.dump(data, sys.stdout, indent=2)
" < "$input_file"
    ;;
  "xml")
    python3 -c "
import sys, json, dicttoxml
data = json.load(sys.stdin)
xml = dicttoxml.dicttoxml(data, custom_root='root', attr_type=False)
print(xml.decode())
" < "$input_file"
    ;;
  *)
    echo "Unsupported format: $output_format"
    echo "Supported: yaml, json, xml"
    exit 1
    ;;
esac
Validation Scripts
#!/bin/bash
# validate-format.sh - Validate file formats

file=$1
format=$(basename "$file" | sed 's/.*\.//')

case $format in
  "json")
    python3 -c "
import sys, json
try:
    json.load(sys.stdin)
    print('✓ Valid JSON')
except Exception as e:
    print('✗ Invalid JSON:', e)
    sys.exit(1)
" < "$file"
    ;;
  "yaml")
    python3 -c "
import sys, yaml
try:
    yaml.safe_load(sys.stdin)
    print('✓ Valid YAML')
except Exception as e:
    print('✗ Invalid YAML:', e)
    sys.exit(1)
" < "$file"
    ;;
  "xml")
    python3 -c "
import sys, xml.etree.ElementTree as ET
try:
    ET.parse(sys.stdin)
    print('✓ Valid XML')
except Exception as e:
    print('✗ Invalid XML:', e)
    sys.exit(1)
" < "$file"
    ;;
  *)
    echo "Unsupported format: $format"
    exit 1
    ;;
esac
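Assuming the two scripts above are saved as `convert-format.sh` and `validate-format.sh` and made executable, usage looks like `./convert-format.sh api.json yaml > api.yaml` followed by `./validate-format.sh api.yaml`. The JSON branch reduces to a one-liner you can run standalone:

```shell
# Create a trivial input file (illustrative)
echo '{"name": "test"}' > sample.json

# The same check the script's "json" branch performs
python3 -c "
import sys, json
try:
    json.load(sys.stdin)
    print('✓ Valid JSON')
except Exception as e:
    print('✗ Invalid JSON:', e)
    sys.exit(1)
" < sample.json
```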
Best Practices
Choosing the Right Format
| Use Case | Recommended Format | Reason |
|---|---|---|
| API Specifications | OpenAPI YAML | More readable, better for version control |
| Configuration | YAML | Human-readable, supports comments |
| Data Exchange | JSON | Universally supported, compact |
| Logs | JSON | Structured, searchable |
| Templates | Handlebars | Expressive, logic support |
Format Conversion Workflow
# API development workflow
# 1. Design API in YAML (readable)
swagger-editor
# 2. Convert to JSON for tools that require it
./convert-format.sh api.yaml json > api.json
# 3. Validate both formats
./validate-format.sh api.yaml
./validate-format.sh api.json
# 4. Generate documentation
swagger-codegen generate -i api.yaml -l html -o docs/
# 5. Commit YAML version (better diff)
git add api.yaml
Performance Considerations
- JSON: Fastest parsing, smallest size
- YAML: Slower parsing, larger size, better readability
- XML: Slowest parsing, largest size, most verbose
- Binary formats: Fastest for large data, not human-readable
Compatibility Matrix
| Format | MockForge Support | Readability | Tool Support | Size |
|---|---|---|---|---|
| JSON | ✅ Full | Medium | Excellent | Small |
| YAML | ✅ Full | High | Good | Medium |
| XML | ❌ None | Low | Good | Large |
| Protocol Buffers | ✅ gRPC only | Low | Limited | Small |
| JSONL | ✅ WebSocket | Medium | Basic | Medium |
This format reference ensures you can work effectively with all data formats supported by MockForge across different use cases and workflows.
Templating Reference
MockForge supports lightweight templating across HTTP responses, overrides, and (soon) WS/gRPC. This page documents all supported tokens and controls.
Enabling
- Environment: `MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true|false` (default: false)
- Config: `http.response_template_expand: true|false`
- CLI: `--response-template-expand`
- Determinism: `MOCKFORGE_FAKE_TOKENS=false` disables faker token expansion.
Time Tokens
- `{{now}}` — RFC3339 timestamp.
- `{{now±Nd|Nh|Nm|Ns}}` — offset from now by days/hours/minutes/seconds.
- Examples: `{{now+2h}}`, `{{now-30m}}`, `{{now+10s}}`, `{{now-1d}}`.
Random Tokens
- `{{rand.int}}` — random integer in [0, 1_000_000].
- `{{rand.float}}` — random float in [0, 1).
- `{{randInt a b}}` / `{{rand.int a b}}` — random integer between a and b (order-agnostic, negatives allowed).
- Examples: `{{randInt 10 99}}`, `{{randInt -5 5}}`.
UUID
- `{{uuid}}` — UUID v4.
Request Data Access
- `{{request.body.field}}` — access fields from the request body JSON. Example: `{{request.body.name}}` extracts the `name` field from the request body.
- `{{request.path.param}}` — access path parameters. Example: `{{request.path.id}}` extracts the `id` path parameter.
- `{{request.query.param}}` — access query parameters. Example: `{{request.query.limit}}` extracts the `limit` query parameter.
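Putting the request tokens together, a sketch of the round trip (this assumes a server running on port 3000 with template expansion enabled, and a spec whose example body echoes these tokens; the `/users/{id}` route and response shape are illustrative):

```shell
# Illustrative: assumes the spec's example body is
# {"id": "{{request.path.id}}", "limit": "{{request.query.limit}}"}
curl "http://localhost:3000/users/42?limit=10"
# With expansion on, the response would echo the request data back:
# {"id": "42", "limit": "10"}
```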
Faker Tokens
Faker expansions can be disabled via `MOCKFORGE_FAKE_TOKENS=false`.

- Minimal (always available): `{{faker.uuid}}`, `{{faker.email}}`, `{{faker.name}}`.
- Extended (when the `data-faker` feature is enabled): `{{faker.address}}`, `{{faker.phone}}`, `{{faker.company}}`, `{{faker.url}}`, `{{faker.ip}}`, `{{faker.color}}`, `{{faker.word}}`, `{{faker.sentence}}`, `{{faker.paragraph}}`.
Where Templating Applies
- HTTP (OpenAPI): media-level `example` bodies and synthesized responses.
- HTTP overrides: YAML patches loaded via `validation_overrides`.
- WS/gRPC: the provider is registered now; expansion hooks will be added as features land.
Status Codes for Validation Errors
`MOCKFORGE_VALIDATION_STATUS=400|422` (default: 400) affects HTTP request validation failures in enforce mode.
Security & Determinism Notes
- Tokens inject random/time-based values; disable faker to reduce variability.
- For deterministic integration tests, set `MOCKFORGE_FAKE_TOKENS=false` and prefer explicit literals.
Fixtures and Smoke Testing
MockForge supports recording and replaying HTTP requests and responses as fixtures, which can be used for smoke testing your APIs.
Recording Fixtures
To record fixtures, enable recording by setting the environment variable:
MOCKFORGE_RECORD_ENABLED=true
By default, all HTTP requests will be recorded. To record only GET requests, set:
MOCKFORGE_RECORD_GET_ONLY=true
Fixtures are saved in the `fixtures` directory by default. You can change this location with:
MOCKFORGE_FIXTURES_DIR=/path/to/fixtures
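Putting the recording variables together, a typical record-then-replay session might look like this sketch (the ports, spec path, and route are illustrative, and `mockforge` is assumed to be on your PATH):

```shell
# Record real traffic into ./fixtures
export MOCKFORGE_RECORD_ENABLED=true
export MOCKFORGE_FIXTURES_DIR=./fixtures
mockforge serve --spec api-spec.yaml --http-port 3000 &

# Drive some traffic so there is something to record
curl http://localhost:3000/users

# Later: serve the recorded responses instead of generating new ones
export MOCKFORGE_RECORD_ENABLED=false
export MOCKFORGE_REPLAY_ENABLED=true
mockforge serve --spec api-spec.yaml --http-port 3000
```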
Replay Fixtures
To replay recorded fixtures, enable replay by setting the environment variable:
MOCKFORGE_REPLAY_ENABLED=true
When replay is enabled, MockForge will serve recorded responses for matching requests instead of generating new ones.
Ready-to-Run Fixtures
Fixtures can be marked as “ready-to-run” for smoke testing by adding a metadata field `smoke_test` with the value `true`. These fixtures will be listed in the smoke test endpoints.
Example fixture with smoke test metadata:
{
  "fingerprint": {
    "method": "GET",
    "path": "/api/users",
    "query_params": {},
    "headers": {}
  },
  "timestamp": "2024-01-15T10:30:00Z",
  "status_code": 200,
  "response_headers": {
    "content-type": "application/json"
  },
  "response_body": "{\"users\": []}",
  "metadata": {
    "smoke_test": "true",
    "name": "Get Users Endpoint"
  }
}
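Before relying on a fixture in CI, it can be worth checking that it parses and carries the smoke-test marker. A standalone sketch using the example above (the filename is illustrative; note that `smoke_test` is the string `"true"` in metadata, as shown in the example):

```shell
mkdir -p fixtures
cat > fixtures/get-users.json <<'EOF'
{
  "fingerprint": {
    "method": "GET",
    "path": "/api/users",
    "query_params": {},
    "headers": {}
  },
  "timestamp": "2024-01-15T10:30:00Z",
  "status_code": 200,
  "response_headers": {
    "content-type": "application/json"
  },
  "response_body": "{\"users\": []}",
  "metadata": {
    "smoke_test": "true",
    "name": "Get Users Endpoint"
  }
}
EOF

# Parse the fixture and confirm it is marked for smoke testing
python3 - <<'EOF'
import json

fx = json.load(open("fixtures/get-users.json"))
assert fx["fingerprint"]["method"] == "GET"
assert fx["metadata"].get("smoke_test") == "true"
print("fixture ok:", fx["metadata"]["name"])
EOF
```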
Smoke Testing
MockForge provides endpoints to list and run smoke tests:
- `GET /__mockforge/smoke` - List available smoke test endpoints
- `GET /__mockforge/smoke/run` - Run all smoke tests
These endpoints are also available in the Admin UI under the “Smoke Tests” tab.
Admin UI Integration
The Admin UI provides a graphical interface for managing fixtures and running smoke tests:
- View all recorded fixtures in the “Fixtures” tab
- Mark fixtures as ready-to-run for smoke testing
- Run smoke tests with a single click
- View smoke test results and status
Configuration
The following environment variables control fixture and smoke test behavior:
Core Settings
- `MOCKFORGE_FIXTURES_DIR` - Directory where fixtures are stored (default: `./fixtures`)
- `MOCKFORGE_RECORD_ENABLED` - Enable recording of requests (default: `false`)
- `MOCKFORGE_REPLAY_ENABLED` - Enable replay of recorded requests (default: `false`)
Recording Options
- `MOCKFORGE_RECORD_GET_ONLY` - Record only GET requests (default: `false`)
- `MOCKFORGE_LATENCY_ENABLED` - Include latency in recorded fixtures (default: `true`)
- `MOCKFORGE_RESPONSE_TEMPLATE_EXPAND` - Expand templates when recording (default: `false`)
Validation and Testing
- `MOCKFORGE_REQUEST_VALIDATION` - Validation level during recording (default: `enforce`)
- `MOCKFORGE_RESPONSE_VALIDATION` - Validate responses during replay (default: `false`)
Configuration File Support
You can also configure fixtures through YAML:
# In your configuration file
core:
  fixtures:
    dir: "./fixtures"
    record_enabled: false
    replay_enabled: false
    record_get_only: false
Troubleshooting
This guide helps you diagnose and resolve common issues with MockForge. If you’re experiencing problems, follow the steps below to identify and fix the issue.
Quick Diagnosis
Check Server Status
First, verify that MockForge is running and accessible:
# Check if processes are running
ps aux | grep mockforge
# Check listening ports
netstat -tlnp | grep -E ":(3000|3001|50051|8080)"
# Test basic connectivity
curl -I http://localhost:3000/health 2>/dev/null || echo "HTTP server not responding"
curl -I http://localhost:8080/health 2>/dev/null || echo "Admin UI not responding"
Check Logs
Enable verbose logging to see detailed information:
# Run with debug logging
RUST_LOG=mockforge=debug mockforge serve --spec api-spec.yaml
# View recent logs
tail -f mockforge.log
# Filter logs by component
grep "ERROR" mockforge.log
grep "WARN" mockforge.log
HTTP API Issues
Server Won’t Start
Symptoms: `mockforge serve` exits immediately with an error
Common causes and solutions:
- Port already in use:

  # Find what's using the port
  lsof -i :3000
  # Kill the conflicting process
  kill -9 <PID>
  # Or use a different port
  mockforge serve --http-port 3001

- Invalid OpenAPI specification:

  # Validate YAML syntax
  yamllint api-spec.yaml
  # Validate OpenAPI structure
  swagger-cli validate api-spec.yaml
  # Test with a minimal spec
  mockforge serve --spec examples/openapi-demo.json

- File permissions:

  # Check file access
  ls -la api-spec.yaml
  # Fix permissions if needed
  chmod 644 api-spec.yaml
404 Errors for Valid Routes
Symptoms: API returns 404 for endpoints that should exist
Possible causes:
- OpenAPI spec not loaded correctly:

  # Check if the spec was loaded
  grep "OpenAPI spec loaded" mockforge.log
  # Verify the file path
  ls -la api-spec.yaml

- Path matching issues:
  - Ensure paths in the spec match request URLs
  - Check for trailing slashes
  - Verify HTTP methods match

- Template expansion disabled:

  # Enable template expansion
  mockforge serve --response-template-expand --spec api-spec.yaml
Template Variables Not Working
Symptoms: `{{variable}}` appears literally in responses
Solutions:
- Enable template expansion:

  # Via command line
  mockforge serve --response-template-expand --spec api-spec.yaml

  # Via environment variable
  MOCKFORGE_RESPONSE_TEMPLATE_EXPAND=true mockforge serve --spec api-spec.yaml

  # Via config file (printf writes a real newline; plain echo would leave a literal "\n")
  printf 'response:\n  template_expand: true\n' > config.yaml
  mockforge serve --config config.yaml --spec api-spec.yaml

- Check template syntax:
  - Use {{variable}}, not ${variable}
  - Ensure variables are defined in spec examples
  - Check for typos in variable names
Validation Errors
Symptoms: Requests return 400/422 with validation errors
Solutions:
- Adjust validation mode:

  # Disable validation
  mockforge serve --validation off --spec api-spec.yaml
  # Use warning mode
  mockforge serve --validation warn --spec api-spec.yaml

- Fix the request format:
  - Ensure the Content-Type header matches the request body format
  - Verify required fields are present
  - Check parameter formats against the OpenAPI spec
WebSocket Issues
Connection Fails
Symptoms: WebSocket client cannot connect
Common causes:
- Wrong port or path:

  # Check the WebSocket port
  netstat -tlnp | grep :3001
  # Test the connection
  websocat ws://localhost:3001/ws

- Replay file not found:

  # Check the file exists
  ls -la ws-replay.jsonl
  # Run without a replay file
  mockforge serve --ws-port 3001  # No replay file specified
Messages Not Received
Symptoms: WebSocket connection established but no messages
Solutions:
- Check the replay file format:

  # Validate JSONL syntax
  node -e "
  const fs = require('fs');
  const lines = fs.readFileSync('ws-replay.jsonl', 'utf8').split('\n');
  lines.forEach((line, i) => {
    if (line.trim()) {
      try { JSON.parse(line); }
      catch (e) { console.log(\`Line \${i+1}: \${e.message}\`); }
    }
  });
  "

- Verify message timing:
  - Check that ts values are in milliseconds
  - Ensure messages have the required fields (ts, dir, text)
Interactive Mode Issues
Symptoms: Client messages not triggering responses
Debug steps:
- Check regex patterns:

  # Test regex patterns
  node -e "
  const pattern = '^HELLO';
  const test = 'HELLO world';
  console.log('Match:', test.match(new RegExp(pattern)));
  "

- Verify state management:
  - Check that state variables are properly set
  - Ensure conditional logic is correct
gRPC Issues
Service Not Found
Symptoms: `grpcurl list` shows no services
Solutions:
- Check the proto directory:

  # Verify proto files exist
  find proto/ -name "*.proto"
  # Check the directory path
  MOCKFORGE_PROTO_DIR=proto/ mockforge serve --grpc-port 50051

- Compilation errors:

  # Check for proto compilation errors
  cargo build --verbose 2>&1 | grep -i proto

- Reflection disabled:

  # Enable gRPC reflection
  MOCKFORGE_GRPC_REFLECTION_ENABLED=true mockforge serve --grpc-port 50051
Method Calls Fail
Symptoms: gRPC calls return errors
Debug steps:
- Check the service definition:

  # List service methods
  grpcurl -plaintext localhost:50051 describe mockforge.user.UserService

- Validate the request format:

  # Test with verbose output
  grpcurl -plaintext -v -d '{"user_id": "123"}' localhost:50051 mockforge.user.UserService/GetUser

- Check proto compatibility:
  - Ensure client and server use the same proto definitions
  - Verify message field names and types match
Admin UI Issues
UI Not Loading
Symptoms: Browser shows connection error
Solutions:
- Check the admin port:

  # Verify the port is listening
  curl -I http://localhost:8080 2>/dev/null || echo "Admin UI not accessible"
  # Try a different port
  mockforge serve --admin --admin-port 9090

- CORS issues:
  - The Admin UI should work from any origin by default
  - Check the browser console for CORS errors

- Embedded vs standalone:

  # Force standalone mode
  mockforge serve --admin --admin-standalone
  # Or embedded mode
  mockforge serve --admin --admin-embed
API Endpoints Not Working
Symptoms: UI loads but API calls fail
Solutions:
- Check the admin API:

  # Test the admin API directly
  curl http://localhost:8080/__mockforge/status

- Enable the admin API:

  # Ensure the admin API is not disabled
  mockforge serve --admin  # Don't use --disable-admin-api
Configuration Issues
Config File Not Loading
Symptoms: Settings from config file are ignored
Solutions:
- Validate YAML syntax:

  # Check the YAML format
  python3 -c "import yaml; yaml.safe_load(open('config.yaml'))"
  # Or use yamllint
  yamllint config.yaml

- Check the file path:

  # Use an absolute path
  mockforge serve --config /full/path/to/config.yaml
  # Verify file permissions
  ls -la config.yaml

- Environment variable override:
  - Remember that environment variables override config file settings
  - Command-line arguments override both
Environment Variables Not Working
Symptoms: Environment variables are ignored
Common issues:
- Shell not reloaded:

  # Export the variable and reload the shell
  export MOCKFORGE_HTTP_PORT=3001
  exec $SHELL

- Variable name typos:

  # Check the variable is set
  echo $MOCKFORGE_HTTP_PORT
  # List all MockForge variables
  env | grep MOCKFORGE
Performance Issues
High Memory Usage
Symptoms: MockForge consumes excessive memory
Solutions:
- Reduce concurrent connections:

  # Limit the connection pool
  MOCKFORGE_MAX_CONNECTIONS=100 mockforge serve

- Disable unnecessary features:

  # Run with minimal features
  mockforge serve --validation off --response-template-expand false

- Monitor resource usage:

  # Check memory usage
  ps aux | grep mockforge
  # Monitor over time
  htop -p $(pgrep mockforge)
Slow Response Times
Symptoms: API responses are slow
Debug steps:
- Enable latency logging:

  RUST_LOG=mockforge=debug mockforge serve --spec api-spec.yaml 2>&1 | grep -i latency

- Check template complexity:
  - Complex templates can slow response generation
  - Consider caching frequently used templates

- Profile performance:

  # Use cargo flamegraph for profiling
  cargo flamegraph --bin mockforge-cli -- serve --spec api-spec.yaml
Docker Issues
Container Won’t Start
Symptoms: Docker container exits immediately
Solutions:
- Check container logs:

  docker logs <container-id>
  # Run with verbose output
  docker run --rm mockforge mockforge serve --spec api-spec.yaml

- Volume mounting issues:

  # Ensure the spec file is accessible
  docker run -v $(pwd)/api-spec.yaml:/app/api-spec.yaml \
    mockforge mockforge serve --spec /app/api-spec.yaml

- Port conflicts:

  # Use different ports
  docker run -p 3001:3000 -p 3002:3001 mockforge
Getting Help
Log Analysis
# Extract error patterns
grep "ERROR" mockforge.log | head -10
# Find recent issues
tail -100 mockforge.log | grep -E "(ERROR|WARN)"
# Count error types
grep "ERROR" mockforge.log | sed 's/.*ERROR //' | sort | uniq -c | sort -nr
Debug Commands
# Full system information
echo "=== System Info ==="
uname -a
echo "=== Rust Version ==="
rustc --version
echo "=== Cargo Version ==="
cargo --version
echo "=== Running Processes ==="
ps aux | grep mockforge
echo "=== Listening Ports ==="
netstat -tlnp | grep -E ":(3000|3001|50051|8080)"
echo "=== Disk Space ==="
df -h
echo "=== Memory Usage ==="
free -h
Community Support
If you can’t resolve the issue:
- Check existing issues: Search GitHub issues for similar problems
- Create a minimal reproduction: Isolate the issue with minimal configuration
- Include debug information: Attach logs, configuration, and system details
- Use descriptive titles: Clearly describe the problem in issue titles
Emergency Stop
If MockForge is causing issues:
# Kill all MockForge processes
pkill -f mockforge
# Kill specific process
kill -9 <mockforge-pid>
# Clean up any leftover files
rm -f mockforge.log
This troubleshooting guide covers the most common issues. For more specific problems, check the logs and consider creating an issue on GitHub with detailed information about your setup and the problem you’re experiencing.
FAQ
Changelog
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
[Unreleased]
Added
- OpenAPI request validation (path/query/header/cookie/body) with deep $ref resolution and composite schemas (oneOf/anyOf/allOf).
- Validation modes: `disabled`, `warn`, `enforce`, with aggregate error reporting and detailed error objects.
- Runtime Admin UI panel to view/toggle validation mode and per-route overrides; Admin API endpoint `/__mockforge/validation`.
- CLI flags and config options to control validation (including `skip_admin_validation` and per-route `validation_overrides`).
- New e2e tests for 2xx/422 request validation and response example expansion across HTTP routes.
- Templating reference docs and examples; WS templating tests and demo update.
- Initial release of MockForge
- HTTP API mocking with OpenAPI support
- gRPC service mocking with Protocol Buffers
- WebSocket connection mocking with replay functionality
- CLI tool for easy local development
- Admin UI for managing mock servers
- Comprehensive documentation with mdBook
- GitHub Actions CI/CD pipeline
- Security audit integration
- Pre-commit hooks for code quality
Changed
- HTTP handlers now perform request validation before routing; invalid requests return 400 with structured details (when `enforce`).
- Bump `jsonschema` to 0.33 and adapt the validator API; enable draft selection and format checks internally.
- Improve route registry and OpenAPI parameter parsing, including styles/explode and array coercion for query/header/cookie parameters.
Deprecated
- N/A
Removed
- N/A
Fixed
- Resolve admin mount prefix from config and exclude admin routes from validation when configured.
- Various small correctness fixes in OpenAPI schema mapping and parameter handling; clearer error messages.
Security
- N/A