Demonstrating microservices architecture with RustAPI using the API Gateway pattern.
📖 Cookbook: RustAPI Deployment
- Rust 1.70+
- Understanding of distributed systems concepts
- Completed auth-api and middleware-chain examples
```
        ┌─────────────────┐
        │   API Gateway   │
        │   (Port 8080)   │
        └────────┬────────┘
                 │
    ┌────────────┴─────────────┐
    │                          │
┌───▼────────────┐     ┌───────▼────────┐
│  User Service  │     │ Order Service  │
│  (Port 8081)   │     │  (Port 8082)   │
└────────────────┘     └────────────────┘
```
API Gateway (port 8080):
- Routes requests to the appropriate service
- Handles authentication & rate limiting
- Aggregates responses from multiple services
- Provides a unified API to clients

User Service (port 8081):
- Manages user data
- Handles user CRUD operations
- Internal service (not exposed to the public)

Order Service (port 8082):
- Manages order data
- Handles order processing
- Internal service (not exposed to the public)
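At its core, the gateway's routing job is a lookup from a public path prefix to an internal service base URL. A minimal stdlib-only sketch of that mapping (the paths and ports follow the diagram above; the function name is illustrative):

```rust
/// Map a public gateway path to the internal backend URL it should be
/// proxied to. Returns None for unknown routes (the gateway would answer 404).
fn route(path: &str) -> Option<String> {
    // Public prefix -> internal service base URL (ports from the diagram).
    let table = [
        ("/api/users/", "http://127.0.0.1:8081/users/"),
        ("/api/orders/", "http://127.0.0.1:8082/orders/"),
    ];
    table.iter().find_map(|(prefix, base)| {
        path.strip_prefix(prefix).map(|rest| format!("{base}{rest}"))
    })
}

fn main() {
    // Known prefixes resolve to backend URLs; anything else is unrouted.
    assert_eq!(
        route("/api/users/1").as_deref(),
        Some("http://127.0.0.1:8081/users/1")
    );
    assert_eq!(route("/metrics"), None);
}
```

A real gateway would also rewrite headers and stream bodies, but the prefix table is the part that changes when you add a service.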
```sh
cargo run -p microservices
```

This starts all three services simultaneously:
- Gateway: http://127.0.0.1:8080
- User Service: http://127.0.0.1:8081
- Order Service: http://127.0.0.1:8082
```sh
# Get user via gateway
curl http://127.0.0.1:8080/api/users/1

# Get order via gateway
curl http://127.0.0.1:8080/api/orders/1

# Direct user service
curl http://127.0.0.1:8081/users/1

# Direct order service
curl http://127.0.0.1:8082/orders/1
```

The gateway uses reqwest to make HTTP calls to backend services:
```rust
// Note: `?` requires a fallible return type, so the handler returns a
// Result; the exact error type depends on the framework's error handling
// (an HTTP status code is used here as a stand-in).
#[rustapi_rs::get("/api/users/{id}")]
async fn proxy_get_user(
    Path(id): Path<u64>,
) -> Result<Json<GatewayResponse>, StatusCode> {
    // In production, reuse one Client instead of building one per request.
    let client = reqwest::Client::new();
    let user: User = client
        .get(format!("http://127.0.0.1:8081/users/{}", id))
        .send()
        .await
        .map_err(|_| StatusCode::BAD_GATEWAY)?
        .json()
        .await
        .map_err(|_| StatusCode::BAD_GATEWAY)?;
    Ok(Json(GatewayResponse {
        service: "user-service".to_string(),
        data: serde_json::to_value(user)
            .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?,
    }))
}
```

For production, implement service discovery:
- Consul — Service registry & health checks
- etcd — Distributed configuration
- Kubernetes — Container orchestration
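Whichever backend you pick, the gateway's view of discovery is just "service name → known instance URLs". A minimal in-process stand-in for that interface (a real deployment would query Consul, etcd, or the Kubernetes API instead; all names here are illustrative):

```rust
use std::collections::HashMap;

/// In-memory stand-in for a service registry: name -> instance URLs.
struct Registry {
    services: HashMap<String, Vec<String>>,
}

impl Registry {
    fn new() -> Self {
        Registry { services: HashMap::new() }
    }

    /// Register one instance under a service name
    /// (Consul calls this service registration).
    fn register(&mut self, name: &str, url: &str) {
        self.services
            .entry(name.to_string())
            .or_default()
            .push(url.to_string());
    }

    /// Look up all known instances for a service; empty if unregistered.
    fn lookup(&self, name: &str) -> &[String] {
        self.services.get(name).map(Vec::as_slice).unwrap_or(&[])
    }
}

fn main() {
    let mut reg = Registry::new();
    reg.register("user-service", "http://127.0.0.1:8081");
    reg.register("order-service", "http://127.0.0.1:8082");
    assert_eq!(reg.lookup("user-service"), ["http://127.0.0.1:8081"]);
    assert!(reg.lookup("payment-service").is_empty());
}
```

Swapping the hard-coded URLs in the gateway handlers for `lookup()` calls is what makes adding or moving instances a registry change rather than a code change.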
- API Gateway — Single entry point for all clients
- Service Proxy — Gateway forwards requests to services
- Response Aggregation — Combine data from multiple services
- Service Isolation — Each service has its own database/state
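Response aggregation means the gateway fans out to several services and merges the results into one payload. A stdlib-only sketch of the merge step (the struct shapes are illustrative; real code would fetch the two halves over HTTP, ideally concurrently, and serialize with serde):

```rust
/// Data returned by the two backend services (illustrative shapes).
struct User { id: u64, name: String }
struct Order { id: u64, user_id: u64, total_cents: u64 }

/// The combined payload the gateway returns to the client.
struct UserWithOrders { user: User, orders: Vec<Order> }

/// Aggregate: keep only the orders belonging to the requested user.
fn aggregate(user: User, all_orders: Vec<Order>) -> UserWithOrders {
    let orders = all_orders
        .into_iter()
        .filter(|o| o.user_id == user.id)
        .collect();
    UserWithOrders { user, orders }
}

fn main() {
    let user = User { id: 1, name: "ada".into() };
    let orders = vec![
        Order { id: 10, user_id: 1, total_cents: 2500 },
        Order { id: 11, user_id: 2, total_cents: 900 },
    ];
    let combined = aggregate(user, orders);
    assert_eq!(combined.orders.len(), 1);
    assert_eq!(combined.orders[0].id, 10);
    assert_eq!(combined.user.name, "ada");
}
```

The client makes one request and gets one document, instead of calling the user and order services itself.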
```rust
// Prevent cascading failures
if service_is_down {
    return fallback_response();
}
```

```rust
// Round-robin across service instances
let user_service_urls = vec![
    "http://user-service-1:8081",
    "http://user-service-2:8081",
];
let url = user_service_urls[request_count % user_service_urls.len()];
```

```rust
// Track requests across services
use opentelemetry::trace::Tracer;
let span = tracer.start("api-gateway");
```

- Istio — Traffic management, security, observability
- Linkerd — Lightweight service mesh for Kubernetes
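The `service_is_down` check above is usually backed by a circuit breaker: after N consecutive failures the gateway stops calling the backend and serves the fallback until the backend recovers. A minimal stdlib sketch (threshold and names are illustrative; production breakers also add a timed half-open state):

```rust
/// Minimal circuit breaker: opens after `threshold` consecutive failures.
struct CircuitBreaker {
    consecutive_failures: u32,
    threshold: u32,
}

impl CircuitBreaker {
    fn new(threshold: u32) -> Self {
        CircuitBreaker { consecutive_failures: 0, threshold }
    }

    /// While open, callers skip the backend and return a fallback response.
    fn is_open(&self) -> bool {
        self.consecutive_failures >= self.threshold
    }

    /// Record the outcome of one backend call.
    fn record(&mut self, success: bool) {
        if success {
            self.consecutive_failures = 0; // any success closes the circuit
        } else {
            self.consecutive_failures += 1;
        }
    }
}

fn main() {
    let mut cb = CircuitBreaker::new(3);
    for _ in 0..3 {
        cb.record(false);
    }
    assert!(cb.is_open());  // three failures: stop calling the backend
    cb.record(true);
    assert!(!cb.is_open()); // a success resets the breaker
}
```

Service meshes like Istio and Linkerd can apply this pattern (plus retries and timeouts) at the proxy layer, so application code stays unchanged.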
✅ Scalability — Scale services independently
✅ Resilience — Failure isolation between services
✅ Flexibility — Use different tech stacks per service
✅ Team autonomy — Teams own their services
- Large teams — Multiple teams working on different features
- Different scaling needs — Some services need more resources
- Technology diversity — Need different languages/frameworks
- Independent deployments — Deploy services separately