Microservices Architecture - 1/3
From Production Infrastructure to Distributed Systems Mastery
You’ve mastered professional CI/CD pipelines that automate the entire software delivery lifecycle with comprehensive testing and deployment automation. You’ve implemented advanced Infrastructure as Code with Terraform, managing complex environments through modules, state management, and collaborative workflows. You’ve established monitoring and alerting that detects issues before customers notice, and built log aggregation and disaster recovery systems that ensure business continuity. Your operations now function as a world-class automated system that enables reliable, fast, and confident software delivery at enterprise scale. But here’s the reality that separates functional operations from distributed systems architecture: perfect operations infrastructure means nothing if your application architecture can’t scale beyond a single team, lacks the fault isolation that prevents one bug from bringing down the entire system, has no clear service boundaries to enable independent deployments, and is missing the distributed patterns that let different teams move at different speeds while maintaining system reliability.
The monolithic architecture nightmare that destroys scaling organizations:
// Your monolithic scaling horror story
// CTO: "We need to scale to handle 10M users and 50 engineering teams"
// The Monolithic Death Spiral
class MonolithicApplication {
constructor() {
this.userService = new UserService();
this.orderService = new OrderService();
this.inventoryService = new InventoryService();
this.paymentService = new PaymentService();
this.notificationService = new NotificationService();
this.analyticsService = new AnalyticsService();
this.shippingService = new ShippingService();
// 47 more services all tightly coupled...
}
handleUserOrder(userId, orderData) {
// Everything happens in one giant transaction
const user = this.userService.getUser(userId);
const inventory = this.inventoryService.checkStock(orderData.items);
const payment = this.paymentService.processPayment(orderData.payment);
const order = this.orderService.createOrder(user, inventory, payment);
const shipping = this.shippingService.scheduleDelivery(order);
this.notificationService.sendConfirmation(user, order);
this.analyticsService.trackPurchase(user, order);
// One service fails = entire request fails
// One team's bug = entire application down
// One deployment = risk to all functionality
}
}
// The Cascading Monolithic Disasters:
// Disaster 1: Team Scaling Nightmare
const developmentChaos = {
problem: "50 developers working on single codebase",
reality: [
"Merge conflicts on every pull request",
"8-hour code review queues",
"Deploy freezes lasting weeks",
"Features blocked by unrelated bugs",
"Teams stepping on each other's code",
],
impact: "Development velocity approaches zero as teams grow",
};
// Disaster 2: Technology Prison
const technologyLock = {
problem: "Entire application locked to Java 8 for 3 years",
cause: "One legacy library preventing language upgrade",
reality: [
"Can't adopt new technologies",
"Security vulnerabilities unfixable",
"Performance improvements impossible",
"Best engineers leave for modern tech stacks",
],
cost: "Competitive disadvantage and talent exodus",
};
// Disaster 3: Single Point of Failure Nightmare
const systemFailure = {
problem: "Memory leak in analytics module crashes entire system",
cascade: [
"Analytics memory leak → JVM crashes",
"JVM crash → all services unavailable",
"Order processing stops → revenue loss",
"User authentication fails → customer lockout",
"Payment processing down → transaction failures",
],
impact: "Complete business halt from single module bug",
};
// Disaster 4: Deployment Russian Roulette
const deploymentTerror = {
problem: "Every deployment risks entire application stability",
reality: [
"Small UI change can break payment processing",
"Database migration affects all services simultaneously",
"Rollback means losing all recent features",
"Deploy windows become monthly events",
"Innovation speed drops to near zero",
],
outcome: "Business agility destroyed by deployment fear",
};
// Disaster 5: Scaling Wall
const scalingLimits = {
problem: "Monolith hits fundamental scaling limits",
constraints: [
"Database becomes bottleneck for all operations",
"Memory usage grows linearly with features",
"Cache invalidation affects entire application",
"Load balancing can't distribute by service type",
"Horizontal scaling limited by shared state",
],
result: "Infrastructure costs explode while performance degrades",
};
// The brutal scaling truth: Perfect DevOps can't save monolithic architecture
The uncomfortable architectural truth: world-class CI/CD pipelines and infrastructure automation can’t overcome the fundamental limitations of a monolith once your system needs to scale beyond a single team, serve different performance requirements per service, enable independent technology choices, and stay available when individual components fail. Professional architecture requires thinking beyond deployment automation to distributed system design patterns.
Real-world monolithic scaling failure consequences:
// What happens when architecture doesn't match organizational scale:
const monolithicFailureImpact = {
organizationalChaos: {
problem:
"50-person engineering team paralyzed by architectural constraints",
cause:
"All teams forced to coordinate deployments in single monolithic codebase",
impact: "Feature delivery drops from daily to monthly releases",
cost: "$5M in missed market opportunities due to slow innovation",
},
technicalDebt: {
problem:
"Critical security vulnerability unfixable due to dependency conflicts",
cause: "Monolithic architecture prevents isolated upgrades",
impact: "System remains vulnerable for 8 months during migration planning",
consequences: "Regulatory fines and security breach losses exceed $10M",
},
competitiveDisadvantage: {
problem: "Startup competitor launches features 10x faster",
cause:
"Monolithic architecture vs competitor's microservices enabling rapid iteration",
impact: "Market share drops 40% in 6 months",
reality: "Perfect infrastructure can't overcome architectural bottlenecks",
},
talentExodus: {
problem: "Senior engineers leave for companies with modern architectures",
cause:
"Frustration with slow development cycles and outdated technology stack",
impact: "Knowledge drain and recruitment difficulties",
prevention:
"Microservices architecture enables technology diversity and team autonomy",
},
// Monolithic architecture becomes the limiting factor regardless of
// how perfect your DevOps and infrastructure practices are
};
Microservices architecture mastery requires understanding:
- Monolith vs Microservices tradeoffs with clear decision criteria for when each approach makes sense
- Service decomposition strategies that break down systems along business domain boundaries
- Inter-service communication patterns using HTTP, messaging, and gRPC for different scenarios
- API gateways and service mesh that handle routing, security, and observability across distributed services
- Data consistency patterns that maintain system integrity without distributed transactions
This article transforms your architecture from single-deployment monoliths into independently scalable microservices that enable organizational growth, technology diversity, and system resilience at enterprise scale.
Monolith vs Microservices: The Architectural Decision That Shapes Your Future
Understanding When Each Architecture Pattern Makes Sense
The monolith vs microservices decision framework:
// Architectural decision matrix for monolith vs microservices
const architecturalDecisionFramework = {
// When Monolith is the RIGHT choice
monolithAdvantages: {
teamSize: "Single team (2-8 people) working on the application",
complexity: "Business domain is simple and well-understood",
timeline: "Need to move fast with limited resources",
deployment: "Simple deployment and operational requirements",
consistency: "Strong consistency requirements across all operations",
benefits: [
"Simple deployment - one artifact to deploy",
"Easy debugging - everything in one process",
"Strong consistency - ACID transactions work naturally",
"Simple testing - integration tests are straightforward",
"Lower operational overhead - one application to monitor",
"Rapid prototyping - quick to get started and iterate",
],
whenToChoose: [
"Startup with small team validating product-market fit",
"Well-defined domain with stable requirements",
"Limited operational expertise for distributed systems",
"Strong consistency requirements (financial applications)",
"Simple CRUD applications with straightforward workflows",
],
},
// When Microservices become NECESSARY
microservicesAdvantages: {
teamSize: "Multiple teams (20+ people) working independently",
complexity: "Complex business domain with distinct bounded contexts",
scale: "Different parts of system have different scaling requirements",
technology: "Different services benefit from different technology stacks",
availability: "Need fault isolation and independent deployments",
benefits: [
"Team autonomy - independent development and deployment",
"Technology diversity - right tool for each job",
"Fault isolation - failures don't cascade across entire system",
"Independent scaling - scale services based on demand",
"Organizational alignment - services match team boundaries",
"Evolutionary architecture - can evolve services independently",
],
whenToChoose: [
"Large organization with multiple autonomous teams",
"Complex domain with clear service boundaries",
"Different performance/scaling requirements per service",
"Need for technology diversity across services",
"High availability requirements with fault tolerance",
],
},
// The critical transition indicators
transitionSignals: {
fromMonolithToMicroservices: [
"Team size exceeds 8-10 people working on same codebase",
"Deployment coordination becomes major bottleneck",
"Different parts of system have vastly different scaling needs",
"Technology choices being constrained by legacy decisions",
"Feature development speed decreasing as codebase grows",
"Different teams stepping on each other's code frequently",
],
fromMicroservicesToMonolith: [
"Distributed system complexity exceeds team's operational capabilities",
"Network latency causing performance problems",
"Data consistency issues causing business logic bugs",
"Operational overhead exceeding development capacity",
"Service boundaries poorly defined causing tight coupling",
],
},
};
// Real-world decision examples
const architecturalDecisions = {
// Netflix: Monolith → Microservices (SUCCESS)
netflix: {
trigger: "Rapid growth from startup to global scale",
problem: "Single monolithic application couldn't handle global traffic",
solution: "Decomposed into hundreds of microservices",
outcome: "Handles billions of requests daily with high availability",
lessons: [
"Invested heavily in tooling and operational expertise",
"Built extensive service mesh and monitoring",
"Gradual migration over several years",
],
},
// Segment: Microservices → Monolith (SUCCESS)
segment: {
trigger: "Operational complexity exceeding team capacity",
problem:
"Early microservices architecture created more problems than it solved",
solution: "Consolidated services back into well-designed monolith",
outcome: "Reduced operational overhead while maintaining performance",
lessons: [
"Microservices aren't automatically better",
"Operational expertise required for distributed systems",
"Team size and complexity must justify architectural complexity",
],
},
// Amazon: Strategic Hybrid (SUCCESS)
amazon: {
approach: "Different architectures for different business units",
strategy: "Monoliths for simple domains, microservices for complex ones",
outcome: "Architecture matches organizational and technical needs",
lessons: [
"No single architecture fits all problems",
"Architecture should evolve with business needs",
"Investment in tooling enables architectural choices",
],
},
};
Professional Monolith Implementation:
// monolith-architecture.js - Well-designed monolithic architecture
// Even when choosing monoliths, structure matters for future evolution
class WellStructuredMonolith {
constructor() {
// Organize by business domains, not technical layers
this.userDomain = new UserDomain();
this.orderDomain = new OrderDomain();
this.inventoryDomain = new InventoryDomain();
this.paymentDomain = new PaymentDomain();
// Clean interfaces between domains
this.eventBus = new InternalEventBus();
this.setupDomainBoundaries();
}
setupDomainBoundaries() {
// Domains communicate through well-defined interfaces
// This makes future microservices extraction easier
this.orderDomain.onOrderCreated((order) => {
this.eventBus.publish("order.created", order);
});
this.eventBus.subscribe("order.created", (order) => {
this.inventoryDomain.reserveItems(order.items);
});
this.eventBus.subscribe("order.created", (order) => {
this.paymentDomain.processPayment(order.paymentInfo);
});
}
}
// Domain-driven design within monolith
class OrderDomain {
constructor() {
this.orderRepository = new OrderRepository();
this.orderValidator = new OrderValidator();
this.eventHandlers = [];
}
createOrder(orderData) {
// Business logic contained within domain
const order = this.orderValidator.validate(orderData);
const savedOrder = this.orderRepository.save(order);
// Publish domain events for other domains
this.publishEvent("order.created", savedOrder);
return savedOrder;
}
onOrderCreated(handler) {
this.eventHandlers.push(handler);
}
publishEvent(eventType, data) {
this.eventHandlers.forEach((handler) => handler(data));
}
}
// Database per domain pattern (even in monolith)
class OrderRepository {
constructor() {
// Each domain can have its own database schema
this.db = new DatabaseConnection("orders_schema");
}
save(order) {
return this.db.orders.insert({
id: order.id,
userId: order.userId,
items: order.items,
totalAmount: order.totalAmount,
status: order.status,
createdAt: new Date(),
});
}
findById(orderId) {
return this.db.orders.findOne({ id: orderId });
}
// No cross-domain database queries
// Use events or service interfaces instead
}
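The `InternalEventBus` used by `WellStructuredMonolith` above is never defined. A minimal in-process sketch (an assumption for illustration, not the article's implementation) could look like this:

```javascript
// InternalEventBus - minimal synchronous in-process pub/sub (illustrative sketch)
class InternalEventBus {
  constructor() {
    this.handlers = new Map(); // eventType -> array of handler functions
  }

  subscribe(eventType, handler) {
    if (!this.handlers.has(eventType)) {
      this.handlers.set(eventType, []);
    }
    this.handlers.get(eventType).push(handler);
  }

  publish(eventType, payload) {
    const subscribers = this.handlers.get(eventType) || [];
    subscribers.forEach((handler) => handler(payload));
    return subscribers.length; // how many handlers received the event
  }
}

// Usage: inventory reacts to order.created without a direct reference to orders
const bus = new InternalEventBus();
const reserved = [];
bus.subscribe("order.created", (order) => reserved.push(...order.items));
bus.publish("order.created", { id: "o-1", items: ["sku-1", "sku-2"] });
// reserved is now ["sku-1", "sku-2"]
```

Because every domain talks to the bus rather than to its peers, extracting a domain into a microservice later mostly means swapping this bus for a real message broker.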
Microservices Architecture Foundation:
// microservices-architecture.js - Professional microservices design patterns
// Service Interface Definition
class ServiceInterface {
constructor(serviceName, version) {
this.serviceName = serviceName;
this.version = version;
this.endpoints = new Map();
}
defineEndpoint(name, method, path, schema) {
this.endpoints.set(name, {
method,
path: `/api/v${this.version}${path}`,
requestSchema: schema.request,
responseSchema: schema.response,
errorSchemas: schema.errors || [],
});
}
getEndpoint(name) {
return this.endpoints.get(name);
}
}
// User Service - Independent microservice
class UserService {
constructor() {
this.interface = new ServiceInterface("user-service", 1);
this.setupEndpoints();
this.userRepository = new UserRepository();
this.eventPublisher = new EventPublisher("user-events");
}
setupEndpoints() {
this.interface.defineEndpoint("createUser", "POST", "/users", {
request: {
type: "object",
properties: {
email: { type: "string", format: "email" },
password: { type: "string", minLength: 8 },
profile: { type: "object" },
},
required: ["email", "password"],
},
response: {
type: "object",
properties: {
id: { type: "string" },
email: { type: "string" },
profile: { type: "object" },
createdAt: { type: "string", format: "date-time" },
},
},
});
this.interface.defineEndpoint("getUser", "GET", "/users/:id", {
request: {
type: "object",
properties: {
id: { type: "string" },
},
},
response: {
type: "object",
properties: {
id: { type: "string" },
email: { type: "string" },
profile: { type: "object" },
},
},
});
}
async createUser(userData) {
try {
// Validate input
const validatedData = this.validateUserData(userData);
// Business logic
const hashedPassword = await this.hashPassword(validatedData.password);
const user = await this.userRepository.create({
...validatedData,
password: hashedPassword,
createdAt: new Date(),
});
// Publish domain event for other services
await this.eventPublisher.publish("user.created", {
userId: user.id,
email: user.email,
createdAt: user.createdAt,
});
// Return public data only
return {
id: user.id,
email: user.email,
profile: user.profile,
createdAt: user.createdAt,
};
} catch (error) {
throw new ServiceError("USER_CREATION_FAILED", error.message);
}
}
async getUser(userId) {
const user = await this.userRepository.findById(userId);
if (!user) {
throw new ServiceError("USER_NOT_FOUND", `User ${userId} not found`);
}
return {
id: user.id,
email: user.email,
profile: user.profile,
};
}
}
// Order Service - Communicates with User Service
class OrderService {
constructor() {
this.interface = new ServiceInterface("order-service", 1);
this.setupEndpoints();
this.orderRepository = new OrderRepository();
this.userServiceClient = new UserServiceClient();
this.inventoryServiceClient = new InventoryServiceClient();
this.eventPublisher = new EventPublisher("order-events");
}
async createOrder(orderData) {
try {
// Validate user exists (inter-service call)
const user = await this.userServiceClient.getUser(orderData.userId);
// Check inventory availability (inter-service call)
const inventoryCheck =
await this.inventoryServiceClient.checkAvailability(orderData.items);
if (!inventoryCheck.available) {
throw new ServiceError(
"INSUFFICIENT_INVENTORY",
"Some items are not available"
);
}
// Create order in this service's domain
const order = await this.orderRepository.create({
userId: user.id,
items: orderData.items,
totalAmount: this.calculateTotal(orderData.items),
status: "pending",
createdAt: new Date(),
});
// Publish event for other services to react
await this.eventPublisher.publish("order.created", {
orderId: order.id,
userId: order.userId,
items: order.items,
totalAmount: order.totalAmount,
});
return order;
} catch (error) {
if (error instanceof ServiceError) throw error;
throw new ServiceError("ORDER_CREATION_FAILED", error.message);
}
}
}
// Service Client for inter-service communication
class UserServiceClient {
constructor() {
this.baseUrl = process.env.USER_SERVICE_URL || "http://user-service:3001";
this.httpClient = new HttpClient({
timeout: 5000,
retries: 3,
circuitBreaker: true,
});
}
async getUser(userId) {
try {
const response = await this.httpClient.get(
`${this.baseUrl}/api/v1/users/${userId}`
);
return response.data;
} catch (error) {
if (error.status === 404) {
throw new ServiceError("USER_NOT_FOUND", `User ${userId} not found`);
}
throw new ServiceError("USER_SERVICE_ERROR", "Failed to fetch user data");
}
}
}
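The `HttpClient` above with `circuitBreaker: true` is assumed rather than shown. The core of a circuit breaker fits in a few lines: after N consecutive failures the breaker opens and rejects calls immediately, then allows a trial call after a cooldown. A minimal sketch (threshold and cooldown values are illustrative):

```javascript
// CircuitBreaker - minimal sketch of the pattern (assumed helper, not a library API)
class CircuitBreaker {
  constructor({ failureThreshold = 3, cooldownMs = 10000 } = {}) {
    this.failureThreshold = failureThreshold;
    this.cooldownMs = cooldownMs;
    this.failures = 0;
    this.openedAt = null; // non-null while the circuit is open
  }

  async call(fn) {
    if (this.openedAt !== null) {
      if (Date.now() - this.openedAt < this.cooldownMs) {
        throw new Error("CIRCUIT_OPEN"); // fail fast instead of hitting the service
      }
      this.openedAt = null; // half-open: allow one trial call through
    }
    try {
      const result = await fn();
      this.failures = 0; // success closes the circuit
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.failureThreshold) {
        this.openedAt = Date.now();
      }
      throw err;
    }
  }
}
```

A client would wrap each outbound request, e.g. `breaker.call(() => httpGet(url))`, so a struggling downstream service stops receiving traffic instead of being hammered by retries.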
// Event-driven communication pattern
class EventPublisher {
constructor(topicPrefix) {
this.topicPrefix = topicPrefix;
this.messageQueue = new MessageQueue();
}
async publish(eventType, eventData) {
const event = {
id: this.generateEventId(),
type: `${this.topicPrefix}.${eventType}`,
data: eventData,
timestamp: new Date().toISOString(),
source: process.env.SERVICE_NAME,
};
await this.messageQueue.publish(event.type, event);
}
}
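`ServiceError` and `MessageQueue` are referenced throughout the listing but never defined. Minimal sketches (assumptions so the examples hang together, not production code) might be:

```javascript
// ServiceError - carries a machine-readable code alongside the message (sketch)
class ServiceError extends Error {
  constructor(code, message) {
    super(message);
    this.name = "ServiceError";
    this.code = code; // e.g. "USER_NOT_FOUND", matched on by callers
  }
}

// MessageQueue - in-memory stand-in for a real broker (Kafka, RabbitMQ, SQS, ...)
class MessageQueue {
  constructor() {
    this.subscribers = new Map(); // topic -> array of handlers
  }

  subscribe(topic, handler) {
    const list = this.subscribers.get(topic) || [];
    list.push(handler);
    this.subscribers.set(topic, list);
  }

  async publish(topic, message) {
    const handlers = this.subscribers.get(topic) || [];
    await Promise.all(handlers.map((h) => h(message)));
  }
}
```

In production the queue is an external broker so publishers and consumers run in separate processes; the in-memory version is only useful for local development and tests.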
Service Decomposition Strategies: Breaking Down the Monolith
Domain-Driven Design for Microservices Boundaries
Strategic approaches to service decomposition:
// Service decomposition strategies based on business domains
const decompositionStrategies = {
// Strategy 1: Domain-Driven Design (DDD)
domainDrivenDecomposition: {
approach: "Identify bounded contexts within business domain",
identifyBoundedContexts: {
ecommerce: {
userManagement: {
responsibilities: [
"User registration",
"Authentication",
"Profile management",
],
entities: ["User", "Profile", "Credentials"],
businessCapabilities: ["Identity verification", "Account management"],
},
catalog: {
responsibilities: ["Product information", "Categories", "Search"],
entities: ["Product", "Category", "ProductVariant"],
businessCapabilities: ["Product discovery", "Content management"],
},
ordering: {
responsibilities: [
"Order processing",
"Order history",
"Order status",
],
entities: ["Order", "OrderItem", "OrderStatus"],
businessCapabilities: ["Order management", "Purchase processing"],
},
inventory: {
responsibilities: ["Stock tracking", "Availability", "Reservations"],
entities: ["InventoryItem", "Stock", "Reservation"],
businessCapabilities: ["Stock management", "Availability checking"],
},
payment: {
responsibilities: ["Payment processing", "Billing", "Refunds"],
entities: ["Payment", "PaymentMethod", "Transaction"],
businessCapabilities: [
"Payment processing",
"Financial transactions",
],
},
},
},
decompositionRules: [
"High cohesion within service - related functionality together",
"Low coupling between services - minimal dependencies",
"Single responsibility - one business capability per service",
"Data ownership - each service owns its data",
"Team alignment - service boundaries match team boundaries",
],
},
// Strategy 2: Data-Driven Decomposition
dataPattern: {
approach: "Analyze data access patterns and relationships",
identifyDataClusters: {
stronglyRelated: [
"User + Profile + Preferences (always accessed together)",
"Order + OrderItems + OrderStatus (lifecycle coupled)",
"Product + ProductVariants + ProductImages (content cluster)",
],
weaklyRelated: [
"User ↔ Order (referenced by ID, not always loaded together)",
"Product ↔ Inventory (different update frequencies)",
"Order ↔ Payment (different transactional contexts)",
],
},
decompositionGuidelines: [
"Group data that's always accessed together",
"Separate data with different update frequencies",
"Isolate data with different consistency requirements",
"Consider query patterns and performance needs",
],
},
// Strategy 3: Team-Driven Decomposition (Conway's Law)
teamStructure: {
approach: "Align service boundaries with team organization",
teamMapping: {
frontendTeam: {
services: ["User Interface Service", "API Gateway"],
responsibilities: ["User experience", "API orchestration"],
},
userExperienceTeam: {
services: ["User Service", "Notification Service"],
responsibilities: ["User lifecycle", "Communications"],
},
commerceTeam: {
services: ["Catalog Service", "Order Service", "Cart Service"],
responsibilities: ["Product management", "Purchase flow"],
},
platformTeam: {
services: ["Payment Service", "Inventory Service", "Shipping Service"],
responsibilities: ["Core business operations", "External integrations"],
},
},
principles: [
"Each team owns end-to-end responsibility for their services",
"Team size should match service complexity",
"Minimize coordination required between teams",
"Clear ownership and accountability per service",
],
},
};
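To make the data-driven analysis above concrete, here is a toy sketch (not from the article, names are illustrative) that counts how often pairs of modules depend on each other. Strongly coupled pairs are candidates for the same service; rarely connected modules are candidates for separate services:

```javascript
// couplingScores - given module -> imported modules, rank cross-module coupling
function couplingScores(imports) {
  const scores = new Map(); // "a|b" -> number of dependency edges between a and b
  for (const [mod, deps] of Object.entries(imports)) {
    for (const dep of deps) {
      const key = [mod, dep].sort().join("|"); // direction-agnostic pair key
      scores.set(key, (scores.get(key) || 0) + 1);
    }
  }
  // Strongest coupling first: likely same-service candidates
  return [...scores.entries()].sort((a, b) => b[1] - a[1]);
}

// Example: orders and order-items reference each other; users barely touch inventory
const moduleImports = {
  orders: ["order-items", "users"],
  "order-items": ["orders"],
  users: ["profiles"],
  profiles: ["users"],
  inventory: ["orders"],
};
console.log(couplingScores(moduleImports)[0][0]); // strongest coupled pair
```

In a real codebase the `imports` map would come from the dependency analysis script in the next section rather than being written by hand.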
Practical Service Decomposition Implementation:
#!/bin/bash
# service-decomposition.sh - Systematic approach to extracting services from monolith
set -euo pipefail
PROJECT_NAME="${PROJECT_NAME:-ecommerce-platform}"
MONOLITH_REPO="${MONOLITH_REPO:-monolith}"
SERVICES_ORG="${SERVICES_ORG:-microservices}"
# Step 1: Analyze monolith to identify service boundaries
analyze_monolith_structure() {
echo "🔍 Analyzing monolith structure for service boundaries..."
# Create analysis directory
mkdir -p analysis/{code-structure,data-flow,team-mapping}
# Analyze code dependencies
echo "📊 Analyzing code dependencies..."
find "$MONOLITH_REPO/src" \( -name "*.js" -o -name "*.ts" \) | xargs grep -l "require\|import" | \
while read file; do
echo "File: $file"
grep -E "(require|import).*from" "$file" || true
echo "---"
done > analysis/code-structure/dependencies.txt
# Analyze database schema relationships
echo "🗄️ Analyzing database relationships..."
if [ -f "$MONOLITH_REPO/database/schema.sql" ]; then
grep -E "(FOREIGN KEY|REFERENCES)" "$MONOLITH_REPO/database/schema.sql" > \
analysis/data-flow/foreign-keys.txt || true
fi
# Analyze API endpoints by domain
echo "🌐 Analyzing API endpoints..."
find "$MONOLITH_REPO" \( -name "*.js" -o -name "*.ts" \) | xargs grep -hE "app\.(get|post|put|delete)\(" | \
sed -E "s/.*app\.(get|post|put|delete)\('([^']*)'.*/\1 \2/" | \
sort -u > analysis/code-structure/endpoints.txt
echo "✅ Analysis complete. Review analysis/ directory for insights."
}
# Step 2: Extract User Service as example
extract_user_service() {
echo "👤 Extracting User Service from monolith..."
local service_name="user-service"
local service_dir="$SERVICES_ORG/$service_name"
# Create service directory structure
mkdir -p "$service_dir"/{src,tests,docker,k8s,docs}
mkdir -p "$service_dir"/src/{controllers,services,models,repositories,middleware}
# Create package.json for microservice
cat > "$service_dir/package.json" << EOF
{
"name": "$service_name",
"version": "1.0.0",
"description": "User management microservice",
"main": "src/index.js",
"scripts": {
"start": "node src/index.js",
"dev": "nodemon src/index.js",
"test": "jest",
"test:watch": "jest --watch",
"build": "docker build -t $service_name .",
"lint": "eslint src/",
"migrate": "knex migrate:latest",
"seed": "knex seed:run"
},
"dependencies": {
"express": "^4.18.2",
"bcrypt": "^5.1.0",
"jsonwebtoken": "^9.0.2",
"joi": "^17.9.2",
"knex": "^2.4.2",
"pg": "^8.11.0",
"redis": "^4.6.7",
"cors": "^2.8.5",
"helmet": "^7.0.0",
"winston": "^3.9.0"
},
"devDependencies": {
"jest": "^29.5.0",
"nodemon": "^2.0.22",
"supertest": "^6.3.3",
"@types/jest": "^29.5.1"
}
}
EOF
# Extract User model from monolith
cat > "$service_dir/src/models/User.js" << 'EOF'
// User.js - Extracted and refined User model
class User {
constructor(data) {
this.id = data.id;
this.email = data.email;
this.passwordHash = data.passwordHash;
this.profile = data.profile || {};
this.isActive = data.isActive !== false;
this.createdAt = data.createdAt || new Date();
this.updatedAt = data.updatedAt || new Date();
}
// Business logic methods
updateProfile(profileData) {
this.profile = { ...this.profile, ...profileData };
this.updatedAt = new Date();
}
deactivate() {
this.isActive = false;
this.updatedAt = new Date();
}
// Validation
static validate(userData) {
const Joi = require('joi');
const schema = Joi.object({
email: Joi.string().email().required(),
password: Joi.string().min(8).required(),
profile: Joi.object({
firstName: Joi.string(),
lastName: Joi.string(),
phone: Joi.string(),
dateOfBirth: Joi.date()
}).optional()
});
return schema.validate(userData);
}
// Serialize for API response
toJSON() {
return {
id: this.id,
email: this.email,
profile: this.profile,
isActive: this.isActive,
createdAt: this.createdAt
// Note: passwordHash intentionally excluded
};
}
}
module.exports = User;
EOF
# Create User Repository (Data Access Layer)
cat > "$service_dir/src/repositories/UserRepository.js" << 'EOF'
// UserRepository.js - Data access layer for User service
class UserRepository {
constructor(database) {
this.db = database;
}
async create(userData) {
const [user] = await this.db('users')
.insert({
email: userData.email,
password_hash: userData.passwordHash,
profile: JSON.stringify(userData.profile || {}),
is_active: true,
created_at: new Date(),
updated_at: new Date()
})
.returning('*');
return this.mapFromDatabase(user);
}
async findById(userId) {
const user = await this.db('users')
.where('id', userId)
.first();
return user ? this.mapFromDatabase(user) : null;
}
async findByEmail(email) {
const user = await this.db('users')
.where('email', email)
.first();
return user ? this.mapFromDatabase(user) : null;
}
async update(userId, updates) {
const [user] = await this.db('users')
.where('id', userId)
.update({
...updates,
updated_at: new Date()
})
.returning('*');
return this.mapFromDatabase(user);
}
async delete(userId) {
return await this.db('users')
.where('id', userId)
.del();
}
// Map database row to domain model
mapFromDatabase(row) {
const User = require('../models/User');
return new User({
id: row.id,
email: row.email,
passwordHash: row.password_hash,
profile: row.profile ? JSON.parse(row.profile) : {},
isActive: row.is_active,
createdAt: row.created_at,
updatedAt: row.updated_at
});
}
}
module.exports = UserRepository;
EOF
# Create User Service (Business Logic Layer)
cat > "$service_dir/src/services/UserService.js" << 'EOF'
// UserService.js - Business logic layer
const bcrypt = require('bcrypt');
const jwt = require('jsonwebtoken');
const User = require('../models/User');
class UserService {
constructor(userRepository, eventPublisher) {
this.userRepository = userRepository;
this.eventPublisher = eventPublisher;
}
async createUser(userData) {
// Validate input
const { error, value } = User.validate(userData);
if (error) {
throw new Error(`Validation error: ${error.details[0].message}`);
}
// Check if user already exists
const existingUser = await this.userRepository.findByEmail(value.email);
if (existingUser) {
throw new Error('User with this email already exists');
}
// Hash password
const passwordHash = await bcrypt.hash(value.password, 12);
// Create user
const user = await this.userRepository.create({
email: value.email,
passwordHash,
profile: value.profile || {}
});
// Publish domain event
await this.eventPublisher.publish('user.created', {
userId: user.id,
email: user.email,
createdAt: user.createdAt
});
return user;
}
async authenticateUser(email, password) {
const user = await this.userRepository.findByEmail(email);
if (!user || !user.isActive) {
throw new Error('Invalid credentials');
}
const isValidPassword = await bcrypt.compare(password, user.passwordHash);
if (!isValidPassword) {
throw new Error('Invalid credentials');
}
// Generate JWT token
const token = jwt.sign(
{ userId: user.id, email: user.email },
process.env.JWT_SECRET,
{ expiresIn: '24h' }
);
// Publish authentication event
await this.eventPublisher.publish('user.authenticated', {
userId: user.id,
email: user.email,
timestamp: new Date()
});
return { user, token };
}
async getUserById(userId) {
const user = await this.userRepository.findById(userId);
if (!user) {
throw new Error('User not found');
}
return user;
}
async updateUserProfile(userId, profileData) {
const user = await this.userRepository.findById(userId);
if (!user) {
throw new Error('User not found');
}
user.updateProfile(profileData);
const updatedUser = await this.userRepository.update(userId, {
profile: JSON.stringify(user.profile)
});
// Publish profile update event
await this.eventPublisher.publish('user.profile.updated', {
userId: user.id,
updatedFields: Object.keys(profileData),
timestamp: new Date()
});
return updatedUser;
}
async deactivateUser(userId) {
const user = await this.userRepository.findById(userId);
if (!user) {
throw new Error('User not found');
}
user.deactivate();
await this.userRepository.update(userId, {
is_active: false
});
// Publish deactivation event
await this.eventPublisher.publish('user.deactivated', {
userId: user.id,
deactivatedAt: new Date()
});
return user;
}
}
module.exports = UserService;
EOF
# Create User Controller (API Layer)
cat > "$service_dir/src/controllers/UserController.js" << 'EOF'
// UserController.js - HTTP API layer
class UserController {
constructor(userService) {
this.userService = userService;
}
async createUser(req, res) {
try {
const user = await this.userService.createUser(req.body);
res.status(201).json({
success: true,
data: user.toJSON(),
message: 'User created successfully'
});
} catch (error) {
res.status(400).json({
success: false,
error: error.message
});
}
}
async login(req, res) {
try {
const { email, password } = req.body;
const { user, token } = await this.userService.authenticateUser(email, password);
res.json({
success: true,
data: {
user: user.toJSON(),
token
},
message: 'Authentication successful'
});
} catch (error) {
res.status(401).json({
success: false,
error: error.message
});
}
}
async getUser(req, res) {
try {
const user = await this.userService.getUserById(req.params.id);
res.json({
success: true,
data: user.toJSON()
});
} catch (error) {
res.status(404).json({
success: false,
error: error.message
});
}
}
async updateProfile(req, res) {
try {
const user = await this.userService.updateUserProfile(
req.params.id,
req.body
);
res.json({
success: true,
data: user.toJSON(),
message: 'Profile updated successfully'
});
} catch (error) {
res.status(400).json({
success: false,
error: error.message
});
}
}
}
module.exports = UserController;
EOF
# Create database migration for User service
mkdir -p "$service_dir/database/migrations"
cat > "$service_dir/database/migrations/001_create_users_table.sql" << 'EOF'
-- 001_create_users_table.sql
CREATE TABLE IF NOT EXISTS users (
id SERIAL PRIMARY KEY,
email VARCHAR(255) UNIQUE NOT NULL,
password_hash VARCHAR(255) NOT NULL,
profile JSONB DEFAULT '{}',
is_active BOOLEAN DEFAULT true,
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
-- Indexes for performance
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_users_active ON users(is_active);
CREATE INDEX idx_users_created_at ON users(created_at);
-- Update timestamp trigger
CREATE OR REPLACE FUNCTION update_updated_at_column()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = CURRENT_TIMESTAMP;
RETURN NEW;
END;
$$ language 'plpgsql';
CREATE TRIGGER update_users_updated_at
BEFORE UPDATE ON users
FOR EACH ROW
EXECUTE FUNCTION update_updated_at_column();
EOF
echo "✅ User Service extracted to $service_dir"
}
# Step 3: Create Order Service with inter-service communication
extract_order_service() {
echo "📦 Extracting Order Service with inter-service communication..."
local service_name="order-service"
local service_dir="$SERVICES_ORG/$service_name"
mkdir -p "$service_dir"/src/{controllers,services,models,repositories,clients}
# Create Order Service with User Service client
cat > "$service_dir/src/services/OrderService.js" << 'EOF'
// OrderService.js - Order service with inter-service communication
class OrderService {
constructor(orderRepository, userServiceClient, inventoryServiceClient, eventPublisher) {
this.orderRepository = orderRepository;
this.userServiceClient = userServiceClient;
this.inventoryServiceClient = inventoryServiceClient;
this.eventPublisher = eventPublisher;
}
async createOrder(orderData) {
try {
// Step 1: Validate user exists (inter-service call)
const user = await this.userServiceClient.getUser(orderData.userId);
if (!user) {
throw new Error('Invalid user ID');
}
// Step 2: Check inventory availability (inter-service call)
const inventoryCheck = await this.inventoryServiceClient.checkAvailability(
orderData.items
);
if (!inventoryCheck.available) {
throw new Error(`Insufficient inventory: ${inventoryCheck.unavailableItems.join(', ')}`);
}
// Step 3: Calculate order total
const totalAmount = orderData.items.reduce((sum, item) => {
return sum + (item.price * item.quantity);
}, 0);
// Step 4: Create order in our database
const order = await this.orderRepository.create({
userId: orderData.userId,
items: orderData.items,
totalAmount,
status: 'pending',
shippingAddress: orderData.shippingAddress
});
// Step 5: Reserve inventory (inter-service call)
await this.inventoryServiceClient.reserveItems(order.id, orderData.items);
// Step 6: Publish domain event for other services
await this.eventPublisher.publish('order.created', {
orderId: order.id,
userId: order.userId,
items: order.items,
totalAmount: order.totalAmount,
createdAt: order.createdAt
});
return order;
} catch (error) {
// NOTE: if inventory was already reserved, release it with a compensating
// call here; a bare rethrow alone cannot guarantee no partial state
throw new Error(`Order creation failed: ${error.message}`);
}
}
}
module.exports = OrderService;
EOF
# Create HTTP client for inter-service communication
cat > "$service_dir/src/clients/UserServiceClient.js" << 'EOF'
// UserServiceClient.js - HTTP client for User Service communication
const axios = require('axios');
class UserServiceClient {
constructor() {
this.baseUrl = process.env.USER_SERVICE_URL || 'http://user-service:3001';
this.timeout = 5000;
this.maxRetries = 3;
}
async getUser(userId) {
try {
const response = await this.makeRequest('GET', `/api/v1/users/${userId}`);
return response.data.data;
} catch (error) {
if (error.response?.status === 404) {
return null;
}
throw new Error(`Failed to fetch user ${userId}: ${error.message}`);
}
}
async makeRequest(method, path, data = null, retryCount = 0) {
try {
const config = {
method,
url: `${this.baseUrl}${path}`,
timeout: this.timeout,
headers: {
'Content-Type': 'application/json',
'X-Service-Name': 'order-service'
}
};
if (data) {
config.data = data;
}
return await axios(config);
} catch (error) {
if (retryCount < this.maxRetries && this.shouldRetry(error)) {
await this.delay(Math.pow(2, retryCount) * 1000);
return this.makeRequest(method, path, data, retryCount + 1);
}
throw error;
}
}
shouldRetry(error) {
// Retry on network errors or 5xx responses
return !error.response || (error.response.status >= 500);
}
delay(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
}
module.exports = UserServiceClient;
EOF
echo "✅ Order Service extracted with inter-service communication"
}
# Command routing
case "${1:-help}" in
"analyze")
analyze_monolith_structure
;;
"extract-user")
extract_user_service
;;
"extract-order")
extract_order_service
;;
"extract-all")
analyze_monolith_structure
extract_user_service
extract_order_service
echo "🎉 Service extraction completed!"
;;
"help"|*)
cat << EOF
Service Decomposition Tool
Usage: $0 <command>
Commands:
analyze Analyze monolith structure for service boundaries
extract-user Extract User Service from monolith
extract-order Extract Order Service with inter-service communication
extract-all Run complete extraction process
Examples:
$0 analyze
$0 extract-user
$0 extract-all
EOF
;;
esac
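Because the extracted service takes its repository and event publisher as injected dependencies, its logic can be exercised in isolation. Below is a minimal, self-contained sketch with in-memory stubs; the stub classes and the standalone `deactivateUser` function are illustrative, not part of the extraction script:

```javascript
// In-memory stand-ins for the repository and event publisher interfaces
class InMemoryUserRepository {
  constructor() { this.users = new Map(); }
  async findById(id) { return this.users.get(id) || null; }
  async update(id, fields) {
    const user = { ...this.users.get(id), ...fields };
    this.users.set(id, user);
    return user;
  }
}

class InMemoryEventPublisher {
  constructor() { this.published = []; }
  async publish(type, payload) { this.published.push({ type, payload }); }
}

// Deactivation flow mirroring UserService.deactivateUser above
async function deactivateUser(userId, repo, publisher) {
  const user = await repo.findById(userId);
  if (!user) throw new Error('User not found');
  await repo.update(userId, { is_active: false });
  await publisher.publish('user.deactivated', {
    userId,
    deactivatedAt: new Date(),
  });
  return repo.findById(userId);
}

const repo = new InMemoryUserRepository();
repo.users.set('u-1', { id: 'u-1', email: 'a@example.com', is_active: true });
const publisher = new InMemoryEventPublisher();

deactivateUser('u-1', repo, publisher).then((user) => {
  console.log(user.is_active, publisher.published[0].type);
});
```

Because the database and broker sit behind these interfaces, the same service logic runs unchanged against PostgreSQL and a real message queue in production.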
Inter-Service Communication: Patterns for Distributed Systems
HTTP, Messaging, and gRPC Communication Strategies
Communication pattern decision matrix:
// Inter-service communication pattern selection guide
const communicationPatterns = {
// Synchronous Communication Patterns
synchronousPatterns: {
restfulHTTP: {
whenToUse: [
"Real-time data queries (user profile, product details)",
"CRUD operations that need immediate response",
"External API integrations",
"Simple request-response interactions",
],
advantages: [
"Simple to understand and implement",
"Great tooling and debugging support",
"Standard HTTP status codes and methods",
"Easy to test and mock",
"Wide language support",
],
disadvantages: [
"Creates tight coupling between services",
"Cascade failures if dependent service is down",
"Latency accumulation in deep call chains",
"No built-in retry or circuit breaker logic",
],
implementationExample: {
scenario: "Order Service calling User Service",
pattern: "RESTful HTTP with retry logic and timeout",
},
},
grpc: {
whenToUse: [
"High-performance service-to-service communication",
"Type-safe contracts between services",
"Streaming data between services",
"Internal service communication (not public APIs)",
],
advantages: [
"High performance with binary protocol",
"Strong typing with Protocol Buffers",
"Built-in load balancing and retry",
"Bi-directional streaming support",
"Code generation for multiple languages",
],
disadvantages: [
"More complex setup than REST",
"Limited browser support",
"Requires shared schema management",
"Learning curve for Protocol Buffers",
],
implementationExample: {
scenario: "Inventory Service providing real-time stock updates",
pattern: "gRPC with streaming for live inventory updates",
},
},
},
// Asynchronous Communication Patterns
asynchronousPatterns: {
messageQueues: {
whenToUse: [
"Order processing workflows",
"Background tasks and job processing",
"Email notifications and alerts",
"Data synchronization between services",
],
advantages: [
"Loose coupling between services",
"Natural resilience to service outages",
"Load leveling and backpressure handling",
"Retry logic and dead letter queues",
"Scalable message processing",
],
patterns: {
pointToPoint: "Direct message from producer to consumer",
publishSubscribe: "One message broadcast to multiple consumers",
requestReply: "Asynchronous request-response pattern",
},
},
eventDriven: {
whenToUse: [
"Domain events (user registered, order completed)",
"Data consistency across services",
"Audit logging and analytics",
"Triggering downstream processes",
],
advantages: [
"Complete decoupling between services",
"Services can evolve independently",
"Natural audit trail of system events",
"Easy to add new consumers without changing producers",
],
eventTypes: {
domainEvents:
"Business-meaningful events (OrderPlaced, UserRegistered)",
integrationEvents: "Technical events for service coordination",
systemEvents: "Infrastructure events (ServiceStarted, HealthCheck)",
},
},
},
};
Professional HTTP Communication Implementation:
// http-communication.js - Robust HTTP inter-service communication
// Circuit Breaker Pattern for Service Resilience
class CircuitBreaker {
constructor(options = {}) {
this.failureThreshold = options.failureThreshold || 5;
this.recoveryTimeout = options.recoveryTimeout || 60000;
this.monitoringPeriod = options.monitoringPeriod || 10000;
this.state = "CLOSED"; // CLOSED, OPEN, HALF_OPEN
this.failureCount = 0;
this.nextAttempt = Date.now();
this.successCount = 0;
}
async execute(operation) {
if (this.state === "OPEN") {
if (Date.now() < this.nextAttempt) {
throw new Error("Circuit breaker is OPEN");
}
this.state = "HALF_OPEN";
this.successCount = 0;
}
try {
const result = await operation();
return this.onSuccess(result);
} catch (error) {
return this.onFailure(error);
}
}
onSuccess(result) {
this.failureCount = 0;
if (this.state === "HALF_OPEN") {
this.successCount++;
if (this.successCount >= 3) {
this.state = "CLOSED";
}
}
return result;
}
onFailure(error) {
this.failureCount++;
// A failed probe while HALF_OPEN reopens the circuit immediately
if (this.state === "HALF_OPEN" || this.failureCount >= this.failureThreshold) {
this.state = "OPEN";
this.nextAttempt = Date.now() + this.recoveryTimeout;
}
throw error;
}
}
// Professional HTTP Client with Resilience Patterns
class ServiceHttpClient {
constructor(serviceName, baseUrl, options = {}) {
this.serviceName = serviceName;
this.baseUrl = baseUrl;
this.timeout = options.timeout || 5000;
this.maxRetries = options.maxRetries || 3;
this.retryDelay = options.retryDelay || 1000;
this.circuitBreaker = new CircuitBreaker({
failureThreshold: options.failureThreshold || 5,
recoveryTimeout: options.recoveryTimeout || 60000,
});
this.metrics = {
requests: 0,
failures: 0,
successes: 0,
averageLatency: 0,
};
}
async request(method, path, data = null, options = {}) {
const startTime = Date.now();
this.metrics.requests++;
try {
const result = await this.circuitBreaker.execute(async () => {
return await this.makeHttpRequest(method, path, data, options);
});
this.metrics.successes++;
this.updateAverageLatency(Date.now() - startTime);
return result;
} catch (error) {
this.metrics.failures++;
throw new ServiceCommunicationError(
this.serviceName,
method,
path,
error.message,
error.status
);
}
}
async makeHttpRequest(method, path, data, options, attempt = 1) {
const axios = require("axios");
const config = {
method,
url: `${this.baseUrl}${path}`,
timeout: this.timeout,
headers: {
"Content-Type": "application/json",
"X-Service-Name": process.env.SERVICE_NAME,
"X-Request-ID": this.generateRequestId(),
"X-Attempt": attempt,
...options.headers,
},
};
if (data) {
config.data = data;
}
try {
const response = await axios(config);
return response.data;
} catch (error) {
if (attempt < this.maxRetries && this.shouldRetry(error)) {
await this.delay(this.retryDelay * Math.pow(2, attempt - 1));
return this.makeHttpRequest(method, path, data, options, attempt + 1);
}
throw error;
}
}
shouldRetry(error) {
// Retry on network errors or 5xx responses
if (!error.response) return true;
const status = error.response.status;
return status >= 500 || status === 408 || status === 429;
}
delay(ms) {
return new Promise((resolve) => setTimeout(resolve, ms));
}
generateRequestId() {
return `${this.serviceName}-${Date.now()}-${Math.random()
.toString(36)
.substr(2, 9)}`;
}
updateAverageLatency(latency) {
this.metrics.averageLatency =
(this.metrics.averageLatency * (this.metrics.successes - 1) + latency) /
this.metrics.successes;
}
getHealthMetrics() {
return {
service: this.serviceName,
baseUrl: this.baseUrl,
circuitBreakerState: this.circuitBreaker.state,
...this.metrics,
successRate: this.metrics.successes / this.metrics.requests,
};
}
}
// Service Communication Error
class ServiceCommunicationError extends Error {
constructor(service, method, path, message, status) {
super(`${service} communication failed: ${method} ${path} - ${message}`);
this.service = service;
this.method = method;
this.path = path;
this.status = status;
this.name = "ServiceCommunicationError";
}
}
// Service Registry for Dynamic Service Discovery
class ServiceRegistry {
constructor() {
this.services = new Map();
this.healthCheckInterval = 30000;
this.startHealthChecking();
}
register(serviceName, instances) {
this.services.set(serviceName, {
instances: instances.map((instance) => ({
...instance,
healthy: true,
lastHealthCheck: Date.now(),
})),
lastUpdated: Date.now(),
});
}
discover(serviceName) {
const service = this.services.get(serviceName);
if (!service) {
throw new Error(`Service ${serviceName} not found in registry`);
}
// Return only healthy instances
const healthyInstances = service.instances.filter(
(instance) => instance.healthy
);
if (healthyInstances.length === 0) {
throw new Error(`No healthy instances available for ${serviceName}`);
}
// Random selection across healthy instances (simple load balancing)
const randomIndex = Math.floor(Math.random() * healthyInstances.length);
return healthyInstances[randomIndex];
}
async startHealthChecking() {
setInterval(async () => {
for (const [serviceName, service] of this.services) {
await this.checkServiceHealth(serviceName, service);
}
}, this.healthCheckInterval);
}
async checkServiceHealth(serviceName, service) {
const axios = require("axios");
for (const instance of service.instances) {
try {
await axios.get(`${instance.url}/health`, { timeout: 5000 });
instance.healthy = true;
instance.lastHealthCheck = Date.now();
} catch (error) {
instance.healthy = false;
console.warn(
`Health check failed for ${serviceName} at ${instance.url}`
);
}
}
}
}
// Service Client Factory
class ServiceClientFactory {
constructor(serviceRegistry) {
this.serviceRegistry = serviceRegistry;
this.clients = new Map();
}
getClient(serviceName, options = {}) {
if (!this.clients.has(serviceName)) {
// NOTE: the instance URL is resolved once and cached; a production
// factory would re-resolve through the registry on instance failure
const serviceInstance = this.serviceRegistry.discover(serviceName);
const client = new ServiceHttpClient(
serviceName,
serviceInstance.url,
options
);
this.clients.set(serviceName, client);
}
return this.clients.get(serviceName);
}
// Convenience methods for common HTTP operations
createUserServiceClient() {
return this.getClient("user-service", {
timeout: 3000,
maxRetries: 2,
});
}
createInventoryServiceClient() {
return this.getClient("inventory-service", {
timeout: 5000,
maxRetries: 3,
failureThreshold: 3,
});
}
createPaymentServiceClient() {
return this.getClient("payment-service", {
timeout: 10000,
maxRetries: 1, // Payment operations should not be retried automatically
failureThreshold: 2,
});
}
}
module.exports = {
ServiceHttpClient,
ServiceCommunicationError,
ServiceRegistry,
ServiceClientFactory,
CircuitBreaker,
};
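To see the breaker's state machine in action, here is a condensed, self-contained version of the CircuitBreaker above, driven through its CLOSED to OPEN transition (the thresholds are arbitrary demo values):

```javascript
// Condensed circuit breaker: opens after `failureThreshold` consecutive
// failures, then rejects calls without invoking the operation.
class CircuitBreaker {
  constructor({ failureThreshold = 3, recoveryTimeout = 60000 } = {}) {
    this.failureThreshold = failureThreshold;
    this.recoveryTimeout = recoveryTimeout;
    this.state = "CLOSED";
    this.failureCount = 0;
    this.nextAttempt = 0;
  }
  async execute(operation) {
    if (this.state === "OPEN") {
      if (Date.now() < this.nextAttempt) throw new Error("Circuit breaker is OPEN");
      this.state = "HALF_OPEN"; // recovery window elapsed: allow a probe
    }
    try {
      const result = await operation();
      this.failureCount = 0;
      this.state = "CLOSED";
      return result;
    } catch (err) {
      this.failureCount++;
      if (this.state === "HALF_OPEN" || this.failureCount >= this.failureThreshold) {
        this.state = "OPEN";
        this.nextAttempt = Date.now() + this.recoveryTimeout;
      }
      throw err;
    }
  }
}

async function demo() {
  const breaker = new CircuitBreaker({ failureThreshold: 3 });
  const failing = async () => { throw new Error("service down"); };
  for (let i = 0; i < 3; i++) {
    try { await breaker.execute(failing); } catch (_) { /* ignored */ }
  }
  console.log(breaker.state); // "OPEN"
  try {
    await breaker.execute(async () => "ok");
  } catch (err) {
    console.log(err.message); // "Circuit breaker is OPEN" - operation never ran
  }
}
demo();
```

The key property: while OPEN, the dependency gets no traffic at all, which gives a struggling downstream service room to recover instead of being hammered by retries.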
Message-Based Communication Implementation:
// message-communication.js - Event-driven communication patterns
// Event Publisher for Domain Events
class EventPublisher {
constructor(messageQueue, serviceName) {
this.messageQueue = messageQueue;
this.serviceName = serviceName;
}
async publishDomainEvent(eventType, eventData, options = {}) {
const event = {
id: this.generateEventId(),
type: eventType,
source: this.serviceName,
data: eventData,
timestamp: new Date().toISOString(),
version: options.version || "1.0",
correlationId: options.correlationId || this.generateCorrelationId(),
metadata: {
service: this.serviceName,
environment: process.env.NODE_ENV,
...options.metadata,
},
};
await this.messageQueue.publish(`events.${eventType}`, event, {
persistent: true,
priority: options.priority || 0,
expiration: options.expiration,
});
console.log(`Published event: ${eventType}`, {
eventId: event.id,
correlationId: event.correlationId,
});
return event;
}
generateEventId() {
return `${this.serviceName}-${Date.now()}-${Math.random()
.toString(36)
.substr(2, 9)}`;
}
generateCorrelationId() {
return `corr-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`;
}
}
// Event Subscriber for Handling Domain Events
class EventSubscriber {
constructor(messageQueue, serviceName) {
this.messageQueue = messageQueue;
this.serviceName = serviceName;
this.handlers = new Map();
this.deadLetterQueue = `dead-letters.${serviceName}`;
}
subscribe(eventType, handler, options = {}) {
this.handlers.set(eventType, {
handler,
options: {
retryAttempts: options.retryAttempts || 3,
retryDelay: options.retryDelay || 1000,
deadLetterAfterRetries: options.deadLetterAfterRetries !== false,
...options,
},
});
// Start consuming messages for this event type
this.startConsuming(eventType);
}
async startConsuming(eventType) {
const queueName = `${this.serviceName}.${eventType}`;
await this.messageQueue.subscribe(queueName, async (message) => {
try {
await this.processEvent(eventType, message);
return true; // Acknowledge message
} catch (error) {
console.error(`Failed to process event ${eventType}:`, error);
return false; // Reject message for retry
}
});
}
async processEvent(eventType, message, attempt = 1) {
const handlerConfig = this.handlers.get(eventType);
if (!handlerConfig) {
console.warn(`No handler found for event type: ${eventType}`);
return;
}
try {
console.log(`Processing event: ${eventType}`, {
eventId: message.id,
attempt,
correlationId: message.correlationId,
});
await handlerConfig.handler(message);
console.log(`Successfully processed event: ${eventType}`, {
eventId: message.id,
correlationId: message.correlationId,
});
} catch (error) {
if (attempt < handlerConfig.options.retryAttempts) {
console.warn(`Retrying event processing: ${eventType}`, {
eventId: message.id,
attempt: attempt + 1,
error: error.message,
});
await this.delay(
handlerConfig.options.retryDelay * Math.pow(2, attempt - 1)
);
return this.processEvent(eventType, message, attempt + 1);
} else if (handlerConfig.options.deadLetterAfterRetries) {
await this.sendToDeadLetter(eventType, message, error);
}
throw error;
}
}
async sendToDeadLetter(eventType, message, error) {
const deadLetterMessage = {
...message,
deadLetterReason: error.message,
deadLetterTimestamp: new Date().toISOString(),
originalEventType: eventType,
processingAttempts: this.handlers.get(eventType).options.retryAttempts,
};
await this.messageQueue.publish(this.deadLetterQueue, deadLetterMessage, {
persistent: true,
});
console.error(`Sent to dead letter queue: ${eventType}`, {
eventId: message.id,
reason: error.message,
});
}
delay(ms) {
return new Promise((resolve) => setTimeout(resolve, ms));
}
}
// Saga Pattern for Distributed Transactions
class SagaOrchestrator {
constructor(eventPublisher, eventSubscriber) {
this.eventPublisher = eventPublisher;
this.eventSubscriber = eventSubscriber;
this.sagas = new Map();
}
defineSaga(sagaName, steps) {
this.sagas.set(sagaName, {
steps,
compensations: steps.map((step) => step.compensation).filter(Boolean),
});
}
async startSaga(sagaName, initialData) {
const saga = this.sagas.get(sagaName);
if (!saga) {
throw new Error(`Saga ${sagaName} not defined`);
}
const sagaInstance = {
id: this.generateSagaId(),
name: sagaName,
status: "STARTED",
currentStep: 0,
data: initialData,
completedSteps: [],
startedAt: new Date(),
};
// Store saga state (in practice, use persistent storage)
this.storeSagaState(sagaInstance);
// Start first step
await this.executeNextStep(sagaInstance);
return sagaInstance;
}
async executeNextStep(sagaInstance) {
const saga = this.sagas.get(sagaInstance.name);
const step = saga.steps[sagaInstance.currentStep];
if (!step) {
// All steps completed
sagaInstance.status = "COMPLETED";
sagaInstance.completedAt = new Date();
this.storeSagaState(sagaInstance);
await this.eventPublisher.publishDomainEvent("saga.completed", {
sagaId: sagaInstance.id,
sagaName: sagaInstance.name,
result: sagaInstance.data,
});
return;
}
try {
// Execute step
const result = await step.execute(sagaInstance.data);
sagaInstance.completedSteps.push({
stepIndex: sagaInstance.currentStep,
stepName: step.name,
result,
completedAt: new Date(),
});
sagaInstance.data = { ...sagaInstance.data, ...result };
sagaInstance.currentStep++;
this.storeSagaState(sagaInstance);
// Execute next step
await this.executeNextStep(sagaInstance);
} catch (error) {
// Step failed, start compensation
sagaInstance.status = "COMPENSATING";
sagaInstance.error = error.message;
await this.startCompensation(sagaInstance);
}
}
async startCompensation(sagaInstance) {
// Execute compensations in reverse order
for (let i = sagaInstance.completedSteps.length - 1; i >= 0; i--) {
const completedStep = sagaInstance.completedSteps[i];
const saga = this.sagas.get(sagaInstance.name);
const step = saga.steps[completedStep.stepIndex];
if (step.compensation) {
try {
await step.compensation(completedStep.result);
console.log(`Compensated step: ${step.name}`);
} catch (compensationError) {
console.error(
`Compensation failed for step: ${step.name}`,
compensationError
);
}
}
}
sagaInstance.status = "COMPENSATED";
sagaInstance.compensatedAt = new Date();
this.storeSagaState(sagaInstance);
await this.eventPublisher.publishDomainEvent("saga.failed", {
sagaId: sagaInstance.id,
sagaName: sagaInstance.name,
error: sagaInstance.error,
compensated: true,
});
}
generateSagaId() {
return `saga-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`;
}
storeSagaState(sagaInstance) {
// In practice, store in database or distributed cache
console.log(`Storing saga state: ${sagaInstance.id}`, {
status: sagaInstance.status,
currentStep: sagaInstance.currentStep,
});
}
}
module.exports = {
EventPublisher,
EventSubscriber,
SagaOrchestrator,
};
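The orchestrator above carries a lot of bookkeeping; the core compensation flow can be sketched in a few lines. This standalone version (step names are illustrative) runs steps in order and, on failure, runs the completed steps' compensations in reverse:

```javascript
// Minimal saga runner: forward execution, reverse compensation on failure
async function runSaga(steps, data) {
  const completed = [];
  try {
    for (const step of steps) {
      const result = await step.execute(data);
      completed.push({ step, result });
      data = { ...data, ...result }; // merge step output into saga state
    }
    return { status: "COMPLETED", data };
  } catch (error) {
    // Undo completed steps in reverse order
    for (const { step, result } of completed.reverse()) {
      if (step.compensation) await step.compensation(result);
    }
    return { status: "COMPENSATED", error: error.message };
  }
}

const log = [];
const steps = [
  {
    name: "reserve-inventory",
    execute: async () => { log.push("reserved"); return { reservationId: "r-1" }; },
    compensation: async ({ reservationId }) => log.push(`released ${reservationId}`),
  },
  {
    name: "charge-payment",
    execute: async () => { throw new Error("card declined"); },
  },
];

runSaga(steps, { orderId: "o-1" }).then((outcome) => {
  console.log(outcome.status, log); // COMPENSATED [ 'reserved', 'released r-1' ]
});
```

This is the order-creation failure case from earlier in concrete form: the payment step fails, so the inventory reservation is released rather than left dangling.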
API Gateways and Service Mesh: Managing Distributed Communication
The Gateway Pattern for Centralized Service Management
Understanding API Gateway architecture patterns:
// API Gateway patterns and responsibilities
const apiGatewayPatterns = {
// Single API Gateway Pattern
singleGateway: {
architecture: "All client requests go through one gateway",
responsibilities: [
"Request routing to appropriate microservices",
"Authentication and authorization",
"Rate limiting and throttling",
"Request/response transformation",
"Caching and response optimization",
"Monitoring and analytics",
"SSL termination and security",
],
advantages: [
"Single entry point simplifies client integration",
"Centralized cross-cutting concerns",
"Consistent security and monitoring",
"Easy to implement basic patterns",
],
disadvantages: [
"Single point of failure risk",
"Can become performance bottleneck",
"Coupling between different client types",
"Scaling challenges as services grow",
],
},
// Backend for Frontend (BFF) Pattern
backendForFrontend: {
architecture: "Separate gateways for different client types",
clientSpecificGateways: {
webAppGateway: {
optimizedFor: "Web application needs",
features: ["Session management", "HTML rendering", "SEO optimization"],
dataAggregation: "Combines multiple service calls for rich web views",
},
mobileAppGateway: {
optimizedFor: "Mobile application constraints",
features: [
"Offline support",
"Data minimization",
"Battery optimization",
],
dataAggregation: "Lightweight payloads for mobile bandwidth",
},
publicApiGateway: {
optimizedFor: "Third-party developer experience",
features: ["API versioning", "Developer portal", "Usage analytics"],
dataAggregation: "Stable, well-documented API contracts",
},
},
advantages: [
"Optimized for specific client needs",
"Independent evolution of different client interfaces",
"Reduced coupling between client types",
"Better performance for each client type",
],
},
// Micro Gateway Pattern
microGateway: {
architecture: "Lightweight gateways per service or domain",
characteristics: [
"Domain-specific routing and policies",
"Deployed alongside services",
"Minimal overhead and high performance",
"Service-specific customization",
],
advantages: [
"High performance with minimal latency",
"Service team ownership and control",
"Natural scalability with services",
"Reduced blast radius of failures",
],
},
};
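As a concrete illustration of the BFF data-aggregation idea, a mobile-facing handler might fan out to two services in parallel and trim the combined payload to what the mobile client actually renders. The fetchers below are stubs standing in for real HTTP calls to the user and order services:

```javascript
// Stub fetchers: a real BFF would call the services over HTTP/gRPC
async function fetchUser(userId) {
  return {
    id: userId,
    email: "a@example.com",
    profile: { firstName: "Ada", bio: "a long bio the mobile app never shows" },
  };
}
async function fetchRecentOrders(userId) {
  return [{ id: "o-1", totalAmount: 42, items: ["..."], status: "shipped" }];
}

async function mobileDashboard(userId) {
  // Fan out in parallel, then shape a minimal payload for mobile bandwidth
  const [user, orders] = await Promise.all([
    fetchUser(userId),
    fetchRecentOrders(userId),
  ]);
  return {
    firstName: user.profile.firstName,
    recentOrders: orders.map(({ id, totalAmount, status }) => ({
      id,
      totalAmount,
      status,
    })),
  };
}

mobileDashboard("u-1").then((payload) => console.log(payload));
```

A web BFF over the same services would instead return the full profile and order line items; each gateway shapes data for its own client without the services changing.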
Professional API Gateway Implementation:
// api-gateway.js - Production-ready API Gateway implementation
const express = require("express");
const httpProxyMiddleware = require("http-proxy-middleware");
const rateLimit = require("express-rate-limit");
const helmet = require("helmet");
const jwt = require("jsonwebtoken");
const redis = require("redis");
class ApiGateway {
constructor(config) {
this.app = express();
this.config = config;
this.serviceRegistry = new Map();
this.rateLimiters = new Map();
this.cache = redis.createClient(config.redis);
// node-redis v4 clients must connect before use
this.cache.connect().catch((err) => console.error("Redis connection error:", err));
this.setupMiddleware();
this.registerServices();
this.setupRoutes();
}
setupMiddleware() {
// Security headers
this.app.use(
helmet({
contentSecurityPolicy: {
directives: {
defaultSrc: ["'self'"],
connectSrc: ["'self'", ...this.config.allowedOrigins],
},
},
})
);
// CORS handling
this.app.use((req, res, next) => {
const origin = req.headers.origin;
if (this.config.allowedOrigins.includes(origin)) {
res.setHeader("Access-Control-Allow-Origin", origin);
}
res.setHeader(
"Access-Control-Allow-Methods",
"GET, POST, PUT, DELETE, OPTIONS"
);
res.setHeader(
"Access-Control-Allow-Headers",
"Content-Type, Authorization"
);
// Short-circuit CORS preflight requests
if (req.method === "OPTIONS") {
return res.sendStatus(204);
}
next();
});
// Request logging and tracing
this.app.use((req, res, next) => {
req.requestId = this.generateRequestId();
req.startTime = Date.now();
console.log(`[${req.requestId}] ${req.method} ${req.url}`, {
userAgent: req.headers["user-agent"],
clientIp: req.ip,
});
next();
});
// Body parsing
this.app.use(express.json({ limit: "10mb" }));
this.app.use(express.urlencoded({ extended: true }));
}
registerServices() {
// Register microservices with the gateway
this.config.services.forEach((service) => {
this.serviceRegistry.set(service.name, {
...service,
healthStatus: "unknown",
lastHealthCheck: 0,
});
// Create service-specific rate limiter
this.rateLimiters.set(
service.name,
rateLimit({
windowMs: service.rateLimit?.windowMs || 15 * 60 * 1000, // 15 minutes
max: service.rateLimit?.max || 100,
message: {
error: "Too many requests",
service: service.name,
resetTime: new Date(
Date.now() + (service.rateLimit?.windowMs || 900000)
),
},
keyGenerator: (req) => {
// Rate limit by user ID if authenticated, otherwise by IP
const userId = req.user?.id;
return userId
? `user:${userId}:${service.name}`
: `ip:${req.ip}:${service.name}`;
},
})
);
});
this.startHealthChecking();
}
setupRoutes() {
// Health check endpoint
this.app.get("/gateway/health", (req, res) => {
const serviceHealth = {};
for (const [serviceName, service] of this.serviceRegistry) {
serviceHealth[serviceName] = {
status: service.healthStatus,
lastCheck: new Date(service.lastHealthCheck),
url: service.url,
};
}
res.json({
gateway: "healthy",
timestamp: new Date(),
services: serviceHealth,
});
});
// Metrics endpoint
this.app.get("/gateway/metrics", (req, res) => {
// In production, this would integrate with Prometheus/metrics system
res.json({
requestCount: this.requestCount || 0,
errorCount: this.errorCount || 0,
averageLatency: this.averageLatency || 0,
services: Array.from(this.serviceRegistry.keys()),
});
});
// Service routes
for (const [serviceName, service] of this.serviceRegistry) {
this.setupServiceRoute(serviceName, service);
}
}
setupServiceRoute(serviceName, service) {
const basePath = service.basePath || `/${serviceName}`;
// Apply authentication middleware if required
if (service.requiresAuth) {
this.app.use(basePath, this.authenticationMiddleware.bind(this));
}
// Apply service-specific rate limiting
const rateLimiter = this.rateLimiters.get(serviceName);
this.app.use(basePath, rateLimiter);
// Apply caching if configured
if (service.caching?.enabled) {
this.app.use(basePath, this.cachingMiddleware(service.caching));
}
// Request transformation middleware
this.app.use(basePath, this.requestTransformationMiddleware(service));
// Proxy to microservice
// http-proxy-middleware v1+ exposes a createProxyMiddleware factory
const proxyMiddleware = httpProxyMiddleware.createProxyMiddleware({
target: service.url,
changeOrigin: true,
pathRewrite: service.pathRewrite || {},
onProxyReq: (proxyReq, req, res) => {
// Add tracing headers
proxyReq.setHeader("X-Request-ID", req.requestId);
proxyReq.setHeader("X-Gateway-Service", serviceName);
proxyReq.setHeader("X-Client-IP", req.ip);
// Add user context if authenticated
if (req.user) {
proxyReq.setHeader("X-User-ID", req.user.id);
proxyReq.setHeader("X-User-Roles", JSON.stringify(req.user.roles));
}
},
onProxyRes: (proxyRes, req, res) => {
// Add CORS headers to service responses
if (req.headers.origin) {
proxyRes.headers["Access-Control-Allow-Origin"] = req.headers.origin;
}
// Add tracing headers
proxyRes.headers["X-Request-ID"] = req.requestId;
proxyRes.headers["X-Service-Name"] = serviceName;
},
onError: (err, req, res) => {
console.error(
`[${req.requestId}] Proxy error for ${serviceName}:`,
err.message
);
res.status(503).json({
error: "Service Unavailable",
message: `${serviceName} is currently unavailable`,
requestId: req.requestId,
timestamp: new Date(),
});
},
});
this.app.use(basePath, proxyMiddleware);
}
authenticationMiddleware(req, res, next) {
const authHeader = req.headers.authorization;
if (!authHeader || !authHeader.startsWith("Bearer ")) {
return res.status(401).json({
error: "Authentication Required",
message: "Bearer token required",
requestId: req.requestId,
});
}
const token = authHeader.substring(7);
try {
const decoded = jwt.verify(token, this.config.jwtSecret);
req.user = decoded;
next();
} catch (error) {
return res.status(401).json({
error: "Invalid Token",
message: "Authentication token is invalid or expired",
requestId: req.requestId,
});
}
}
cachingMiddleware(cacheConfig) {
return async (req, res, next) => {
// Only cache GET requests
if (req.method !== "GET") {
return next();
}
const cacheKey = `cache:${req.originalUrl}:${
req.user?.id || "anonymous"
}`;
try {
const cachedResponse = await this.cache.get(cacheKey);
if (cachedResponse) {
const parsed = JSON.parse(cachedResponse);
res.set(parsed.headers);
res.set("X-Cache", "HIT");
res.set("X-Request-ID", req.requestId);
return res.status(parsed.statusCode).json(parsed.data);
}
} catch (error) {
console.warn(`Cache read error: ${error.message}`);
}
// Intercept res.json to cache successful responses; capture the gateway
// via a closure variable so res.json keeps its own `this` binding
const gateway = this;
const originalJson = res.json.bind(res);
res.json = function (data) {
// Cache successful responses
if (res.statusCode >= 200 && res.statusCode < 300) {
const cacheData = {
statusCode: res.statusCode,
headers: res.getHeaders(),
data,
};
gateway.cache
.setEx(cacheKey, cacheConfig.ttl || 300, JSON.stringify(cacheData)) // node-redis v4 promise API
.catch((error) =>
console.warn(`Cache write error: ${error.message}`)
);
}
res.set("X-Cache", "MISS");
return originalJson(data);
};
next();
};
}
requestTransformationMiddleware(service) {
return (req, res, next) => {
// Apply request transformations if configured
if (service.requestTransform) {
try {
service.requestTransform(req);
} catch (error) {
console.error(
`Request transformation error for ${service.name}:`,
error
);
}
}
// Add service-specific headers
if (service.headers) {
Object.keys(service.headers).forEach((header) => {
req.headers[header] = service.headers[header];
});
}
next();
};
}
async startHealthChecking() {
const healthCheckInterval = 30000; // 30 seconds
setInterval(async () => {
for (const [serviceName, service] of this.serviceRegistry) {
try {
const axios = require("axios");
await axios.get(`${service.url}/health`, { timeout: 5000 });
service.healthStatus = "healthy";
service.lastHealthCheck = Date.now();
} catch (error) {
service.healthStatus = "unhealthy";
service.lastHealthCheck = Date.now();
console.warn(
`Health check failed for ${serviceName}: ${error.message}`
);
}
}
}, healthCheckInterval);
}
generateRequestId() {
return `gw-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`;
}
start() {
const port = this.config.port || 3000;
this.app.listen(port, () => {
console.log(`API Gateway running on port ${port}`);
console.log(
"Registered services:",
Array.from(this.serviceRegistry.keys())
);
});
}
}
module.exports = ApiGateway;
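The caching middleware above relies on monkey-patching `res.json` so the response body can be recorded before it is sent. The pattern is easy to get wrong, so here it is in isolation against a mock response object (the mock is illustrative, not Express's real API):

```javascript
// Standalone sketch of the response-interception pattern used by the
// caching middleware above: res.json is swapped for a wrapper that
// records successful payloads before delegating to the original method.
function instrument(res, store) {
  const originalJson = res.json;
  res.json = function (data) {
    if (res.statusCode >= 200 && res.statusCode < 300) {
      store.push({ statusCode: res.statusCode, data });
    }
    // Delegate to the original method with res as `this`
    return originalJson.call(res, data);
  };
}

// Minimal stand-in for an Express response object
const sent = [];
const cache = [];
const res = {
  statusCode: 200,
  json(data) {
    sent.push(data);
    return this;
  },
};

instrument(res, cache);
res.json({ ok: true });

console.log(JSON.stringify(cache[0]));
// → {"statusCode":200,"data":{"ok":true}}
```

Note that the wrapper must call the original method with the response object as `this`; binding anything else silently breaks Express internals.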
Service Mesh Implementation with Istio:
# service-mesh.yaml - Service mesh configuration for microservices
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
name: control-plane
spec:
components:
pilot:
k8s:
resources:
requests:
cpu: 100m
memory: 128Mi
egressGateways:
- name: istio-egressgateway
enabled: true
k8s:
resources:
requests:
cpu: 100m
memory: 128Mi
ingressGateways:
- name: istio-ingressgateway
enabled: true
k8s:
service:
type: LoadBalancer
resources:
requests:
cpu: 100m
memory: 128Mi
values:
global:
proxy:
resources:
requests:
cpu: 100m
memory: 128Mi
tracer:
zipkin:
address: zipkin.istio-system:9411
---
# Gateway configuration
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: microservices-gateway
namespace: production
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- api.myapp.com
tls:
httpsRedirect: true
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
credentialName: api-tls-secret
hosts:
- api.myapp.com
---
# Virtual Service for traffic routing
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: microservices-routing
namespace: production
spec:
hosts:
- api.myapp.com
gateways:
- microservices-gateway
http:
# User service routing
- match:
- uri:
prefix: /api/v1/users
route:
- destination:
host: user-service.production.svc.cluster.local
port:
number: 3001
# Fault injection for resilience testing: delay 0.1% of requests by 5s
fault:
delay:
percentage:
value: 0.1
fixedDelay: 5s
retries:
attempts: 3
perTryTimeout: 10s
retryOn: 5xx,reset,connect-failure,refused-stream
timeout: 30s
# Order service routing with circuit breaker
- match:
- uri:
prefix: /api/v1/orders
route:
- destination:
host: order-service.production.svc.cluster.local
port:
number: 3002
retries:
attempts: 2
perTryTimeout: 15s
timeout: 45s
# Payment service routing (no retries for safety)
- match:
- uri:
prefix: /api/v1/payments
route:
- destination:
host: payment-service.production.svc.cluster.local
port:
number: 3003
timeout: 60s
---
# Destination Rules for load balancing and circuit breaker
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: user-service-destination
namespace: production
spec:
host: user-service.production.svc.cluster.local
trafficPolicy:
loadBalancer:
simple: LEAST_CONN
connectionPool:
tcp:
maxConnections: 100
http:
http1MaxPendingRequests: 50
http2MaxRequests: 100
maxRequestsPerConnection: 10
maxRetries: 3
outlierDetection:
consecutiveGatewayErrors: 5
consecutive5xxErrors: 5
interval: 30s
baseEjectionTime: 30s
maxEjectionPercent: 50
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: order-service-destination
namespace: production
spec:
host: order-service.production.svc.cluster.local
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
connectionPool:
tcp:
maxConnections: 50
http:
http1MaxPendingRequests: 25
http2MaxRequests: 50
maxRequestsPerConnection: 5
outlierDetection: # Istio's circuit breaking is configured via outlier detection
consecutiveGatewayErrors: 3
consecutive5xxErrors: 3
interval: 10s
baseEjectionTime: 30s
maxEjectionPercent: 80
---
# Service security policies
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
name: default
namespace: production
spec:
mtls:
mode: STRICT
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: user-service-policy
namespace: production
spec:
selector:
matchLabels:
app: user-service
rules:
- from:
- source:
principals: ["cluster.local/ns/production/sa/order-service"]
to:
- operation:
methods: ["GET"]
paths: ["/api/v1/users/*"]
- from:
- source:
principals:
[
"cluster.local/ns/production/sa/istio-ingressgateway-service-account",
]
to:
- operation:
methods: ["GET", "POST", "PUT", "DELETE"]
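The `outlierDetection` blocks above are Istio's circuit breaker: a host that returns too many consecutive errors is ejected from the load-balancing pool for a base ejection time. A toy in-process sketch of the same idea (purely illustrative, not Istio code; the class name and parameters are invented for this example):

```javascript
// Toy circuit breaker mirroring Istio's outlier-detection idea:
// after `threshold` consecutive failures the target is "ejected"
// (calls are rejected immediately) until `ejectionMs` has elapsed.
class CircuitBreaker {
  constructor({ threshold = 5, ejectionMs = 30000 } = {}) {
    this.threshold = threshold;
    this.ejectionMs = ejectionMs;
    this.failures = 0;
    this.ejectedUntil = 0;
  }

  // `now` is injectable to make the behavior easy to test deterministically
  async call(fn, now = Date.now()) {
    if (now < this.ejectedUntil) {
      throw new Error("circuit open");
    }
    try {
      const result = await fn();
      this.failures = 0; // a success resets the consecutive-error count
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.threshold) {
        this.ejectedUntil = now + this.ejectionMs; // eject the target
        this.failures = 0;
      }
      throw err;
    }
  }
}
```

The mesh version is strictly better in practice because ejection happens per upstream host at the sidecar, without any application code, but the failure-counting and ejection-window mechanics are the same.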
Data Consistency in Distributed Systems
From ACID to BASE: Consistency Patterns for Microservices
Understanding distributed data consistency challenges:
// Data consistency patterns in distributed systems
const distributedDataConsistency = {
// Traditional ACID properties (Monolith)
acidProperties: {
atomicity: "All operations in transaction succeed or all fail",
consistency: "Database remains in valid state after transaction",
isolation: "Concurrent transactions don't interfere",
durability: "Committed data survives system failures",
limitations: [
"Requires distributed transactions across services",
"Performance penalties with 2-phase commit",
"Availability issues when services are down",
"Tight coupling between services",
],
},
// BASE properties (Microservices)
baseProperties: {
basicAvailability: "System remains available for operations",
softState: "Data consistency not guaranteed at all times",
eventualConsistency: "System becomes consistent over time",
benefits: [
"High availability and partition tolerance",
"Better performance and scalability",
"Services can operate independently",
"Natural fit for distributed systems",
],
tradeoffs: [
"Temporary inconsistencies possible",
"More complex application logic",
"Requires careful event design",
"Debugging distributed state is harder",
],
},
// Consistency Models
consistencyModels: {
strongConsistency: {
guarantee: "All nodes see the same data simultaneously",
implementation: "Synchronous replication, distributed locks",
useCases: ["Financial transactions", "Inventory management"],
cost: "High latency, reduced availability",
},
eventualConsistency: {
guarantee: "All nodes will converge to same state eventually",
implementation: "Asynchronous replication, conflict resolution",
useCases: ["User profiles", "Product catalogs", "Social feeds"],
cost: "Temporary inconsistencies, complex conflict resolution",
},
causalConsistency: {
guarantee: "Causally related operations are seen in correct order",
implementation: "Vector clocks, operation dependencies",
useCases: ["Collaborative editing", "Message threads"],
cost: "Complex dependency tracking",
},
},
};
// Distributed transaction patterns
const distributedTransactionPatterns = {
// Two-Phase Commit (2PC) - Avoid in microservices
twoPhaseCommit: {
description: "Coordinator ensures all participants commit or abort",
problems: [
"Blocking protocol - coordinator failure stops all",
"High latency due to multiple round trips",
"Tight coupling between services",
"Poor availability characteristics",
],
verdict: "Generally avoid in microservices architecture",
},
// Saga Pattern - Recommended approach
sagaPattern: {
description: "Chain of local transactions with compensating actions",
orchestrationSaga: {
approach: "Central coordinator manages saga workflow",
advantages: [
"Clear workflow logic",
"Easy monitoring",
"Complex routing support",
],
disadvantages: [
"Single point of failure",
"Tight coupling to coordinator",
],
},
choreographySaga: {
approach: "Services coordinate through domain events",
advantages: [
"Loose coupling",
"Natural scalability",
"No single point of failure",
],
disadvantages: ["Complex debugging", "Workflow harder to visualize"],
},
},
// Event Sourcing + CQRS
eventSourcingCQRS: {
description: "Store events, not current state + separate read/write models",
advantages: [
"Perfect audit trail",
"Natural event-driven architecture",
"Can rebuild state from events",
"Optimized read and write paths",
],
complexities: [
"Event versioning and migration",
"Snapshot strategies for performance",
"Query complexity for read models",
"Storage requirements grow over time",
],
},
};
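Eventual consistency, described above, becomes concrete with a last-write-wins register: replicas accept writes independently, disagree for a while, and converge once they exchange state. This is a deliberately simplified sketch; real systems need reliable clocks (or vector clocks) and more careful conflict resolution:

```javascript
// Last-write-wins register: a minimal illustration of eventual
// consistency. Each replica accepts writes locally, tagged with a
// timestamp; merging keeps the newest value, so replicas converge
// once they have exchanged state (anti-entropy).
class LwwRegister {
  constructor() {
    this.value = null;
    this.timestamp = 0;
  }
  write(value, timestamp) {
    // Only accept a write that is newer than what we already hold
    if (timestamp > this.timestamp) {
      this.value = value;
      this.timestamp = timestamp;
    }
  }
  merge(other) {
    this.write(other.value, other.timestamp);
  }
}

const replicaA = new LwwRegister();
const replicaB = new LwwRegister();

// Concurrent writes land on different replicas: temporary inconsistency
replicaA.write("name: Alice", 1);
replicaB.write("name: Alicia", 2);
console.log(replicaA.value === replicaB.value); // → false

// Anti-entropy exchange: both replicas converge to the newest write
replicaA.merge(replicaB);
replicaB.merge(replicaA);
console.log(replicaA.value, replicaB.value);
// → name: Alicia name: Alicia
```

The "soft state" and "eventual consistency" properties from the BASE listing above are exactly the window between the two writes and the post-merge convergence.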
Practical Saga Pattern Implementation:
// saga-implementation.js - Professional saga pattern for order processing
// Order Processing Saga - event-driven coordination (a central saga class
// reacts to domain events and publishes the next command, blending
// choreography-style events with orchestration-style workflow logic)
class OrderProcessingSaga {
constructor(eventPublisher, eventSubscriber) {
this.eventPublisher = eventPublisher;
this.eventSubscriber = eventSubscriber;
this.setupEventHandlers();
}
setupEventHandlers() {
// Subscribe to events from different services
this.eventSubscriber.subscribe(
"order.created",
this.handleOrderCreated.bind(this)
);
this.eventSubscriber.subscribe(
"inventory.reserved",
this.handleInventoryReserved.bind(this)
);
this.eventSubscriber.subscribe(
"inventory.reservation_failed",
this.handleInventoryReservationFailed.bind(this)
);
this.eventSubscriber.subscribe(
"payment.processed",
this.handlePaymentProcessed.bind(this)
);
this.eventSubscriber.subscribe(
"payment.failed",
this.handlePaymentFailed.bind(this)
);
this.eventSubscriber.subscribe(
"shipping.scheduled",
this.handleShippingScheduled.bind(this)
);
this.eventSubscriber.subscribe(
"shipping.failed",
this.handleShippingFailed.bind(this)
);
}
// Step 1: Order created, start saga
async handleOrderCreated(event) {
const { orderId, userId, items, totalAmount } = event.data;
console.log(`Starting order processing saga for order: ${orderId}`);
try {
// Publish event to inventory service to reserve items
await this.eventPublisher.publishDomainEvent(
"inventory.reserve_requested",
{
orderId,
items,
sagaId: this.generateSagaId(),
}
);
} catch (error) {
await this.publishSagaFailed(
"inventory_reservation_request_failed",
orderId,
error.message
);
}
}
// Step 2: Inventory reserved successfully
async handleInventoryReserved(event) {
const { orderId, reservationId } = event.data;
console.log(`Inventory reserved for order: ${orderId}`);
try {
// Get order details to proceed with payment
const orderData = await this.getOrderData(orderId);
// Publish event to payment service
await this.eventPublisher.publishDomainEvent(
"payment.process_requested",
{
orderId,
userId: orderData.userId,
amount: orderData.totalAmount,
reservationId,
}
);
} catch (error) {
// Compensate: Release inventory reservation
await this.compensateInventoryReservation(orderId, reservationId);
await this.publishSagaFailed(
"payment_request_failed",
orderId,
error.message
);
}
}
// Step 2 Compensation: Inventory reservation failed
async handleInventoryReservationFailed(event) {
const { orderId, reason } = event.data;
console.log(
`Inventory reservation failed for order: ${orderId} - ${reason}`
);
// Update order status to failed
await this.updateOrderStatus(
orderId,
"failed",
`Insufficient inventory: ${reason}`
);
// Publish saga completion event
await this.publishSagaFailed("insufficient_inventory", orderId, reason);
}
// Step 3: Payment processed successfully
async handlePaymentProcessed(event) {
const { orderId, paymentId, reservationId } = event.data;
console.log(`Payment processed for order: ${orderId}`);
try {
// Get order details for shipping
const orderData = await this.getOrderData(orderId);
// Publish event to shipping service
await this.eventPublisher.publishDomainEvent(
"shipping.schedule_requested",
{
orderId,
paymentId,
reservationId,
shippingAddress: orderData.shippingAddress,
items: orderData.items,
}
);
} catch (error) {
// Compensate: Refund payment and release inventory
await this.compensatePayment(orderId, paymentId);
await this.compensateInventoryReservation(orderId, reservationId);
await this.publishSagaFailed(
"shipping_request_failed",
orderId,
error.message
);
}
}
// Step 3 Compensation: Payment failed
async handlePaymentFailed(event) {
const { orderId, reservationId, reason } = event.data;
console.log(`Payment failed for order: ${orderId} - ${reason}`);
// Compensate: Release inventory reservation
await this.compensateInventoryReservation(orderId, reservationId);
// Update order status
await this.updateOrderStatus(orderId, "payment_failed", reason);
// Publish saga completion event
await this.publishSagaFailed("payment_failed", orderId, reason);
}
// Step 4: Shipping scheduled successfully (Saga Success)
async handleShippingScheduled(event) {
const { orderId, trackingNumber, estimatedDelivery } = event.data;
console.log(
`Shipping scheduled for order: ${orderId} - Tracking: ${trackingNumber}`
);
// Update order with shipping details
await this.updateOrderStatus(orderId, "shipped", null, {
trackingNumber,
estimatedDelivery,
});
// Publish saga success event
await this.eventPublisher.publishDomainEvent(
"saga.order_processing.completed",
{
orderId,
status: "completed",
trackingNumber,
estimatedDelivery,
completedAt: new Date(),
}
);
// Send confirmation to user
await this.eventPublisher.publishDomainEvent(
"notification.order_confirmation",
{
orderId,
trackingNumber,
estimatedDelivery,
}
);
console.log(
`Order processing saga completed successfully for order: ${orderId}`
);
}
// Step 4 Compensation: Shipping failed
async handleShippingFailed(event) {
const { orderId, paymentId, reservationId, reason } = event.data;
console.log(`Shipping failed for order: ${orderId} - ${reason}`);
// Compensate: Refund payment and release inventory
await this.compensatePayment(orderId, paymentId);
await this.compensateInventoryReservation(orderId, reservationId);
// Update order status
await this.updateOrderStatus(orderId, "shipping_failed", reason);
// Publish saga completion event
await this.publishSagaFailed("shipping_failed", orderId, reason);
}
// Compensation Actions
async compensateInventoryReservation(orderId, reservationId) {
try {
await this.eventPublisher.publishDomainEvent(
"inventory.release_requested",
{
orderId,
reservationId,
reason: "saga_compensation",
}
);
console.log(`Inventory compensation initiated for order: ${orderId}`);
} catch (error) {
console.error(
`Failed to compensate inventory for order: ${orderId}`,
error
);
// In production, this would trigger manual intervention
}
}
async compensatePayment(orderId, paymentId) {
try {
await this.eventPublisher.publishDomainEvent("payment.refund_requested", {
orderId,
paymentId,
reason: "saga_compensation",
});
console.log(`Payment compensation initiated for order: ${orderId}`);
} catch (error) {
console.error(
`Failed to compensate payment for order: ${orderId}`,
error
);
// In production, this would trigger manual intervention
}
}
// Utility Methods
async getOrderData(orderId) {
// In production, this would call the order service or read from event store
return {
userId: "user-123",
totalAmount: 99.99,
shippingAddress: {
/* address data */
},
items: [{ productId: "product-456", quantity: 2 }],
};
}
async updateOrderStatus(
orderId,
status,
failureReason = null,
additionalData = {}
) {
await this.eventPublisher.publishDomainEvent("order.status_updated", {
orderId,
status,
failureReason,
...additionalData,
updatedAt: new Date(),
});
}
async publishSagaFailed(reason, orderId, details) {
await this.eventPublisher.publishDomainEvent(
"saga.order_processing.failed",
{
orderId,
reason,
details,
failedAt: new Date(),
}
);
console.log(
`Order processing saga failed for order: ${orderId} - ${reason}`
);
}
generateSagaId() {
return `saga-order-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
}
}
// Event Sourcing Implementation for Order Aggregate
class OrderAggregate {
constructor(orderId) {
this.orderId = orderId;
this.version = 0;
this.events = [];
this.state = {
status: "initial",
items: [],
totalAmount: 0,
userId: null,
createdAt: null,
};
}
// Command: Create Order
createOrder(userId, items, shippingAddress) {
if (this.state.status !== "initial") {
throw new Error("Order already exists");
}
const totalAmount = items.reduce(
(sum, item) => sum + item.price * item.quantity,
0
);
const event = {
type: "OrderCreated",
data: {
orderId: this.orderId,
userId,
items,
shippingAddress,
totalAmount,
createdAt: new Date(),
},
version: this.version + 1,
};
this.applyEvent(event);
return event;
}
// Command: Update Order Status
updateStatus(newStatus, reason = null, additionalData = {}) {
const event = {
type: "OrderStatusUpdated",
data: {
orderId: this.orderId,
previousStatus: this.state.status,
newStatus,
reason,
...additionalData,
updatedAt: new Date(),
},
version: this.version + 1,
};
this.applyEvent(event);
return event;
}
// Event Application
applyEvent(event) {
switch (event.type) {
case "OrderCreated":
this.state = {
...this.state,
status: "created",
userId: event.data.userId,
items: event.data.items,
totalAmount: event.data.totalAmount,
shippingAddress: event.data.shippingAddress,
createdAt: event.data.createdAt,
};
break;
case "OrderStatusUpdated":
this.state.status = event.data.newStatus;
if (event.data.trackingNumber) {
this.state.trackingNumber = event.data.trackingNumber;
}
if (event.data.estimatedDelivery) {
this.state.estimatedDelivery = event.data.estimatedDelivery;
}
break;
}
this.version = event.version;
this.events.push(event);
}
// Reconstruct aggregate from events
static fromEvents(orderId, events) {
const aggregate = new OrderAggregate(orderId);
events.forEach((event) => {
aggregate.applyEvent(event);
});
// Clear uncommitted events after reconstruction
aggregate.events = [];
return aggregate;
}
// Get uncommitted events
getUncommittedEvents() {
return [...this.events];
}
// Mark events as committed
markEventsAsCommitted() {
this.events = [];
}
}
module.exports = {
OrderProcessingSaga,
OrderAggregate,
};
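The defining property of the event-sourced `OrderAggregate` above is that state rebuilt from the event log alone matches the live aggregate. Below is a condensed, self-contained version of the same mechanics (names and fields simplified for brevity) demonstrating that round trip:

```javascript
// Condensed event-sourcing round trip, mirroring OrderAggregate above:
// commands emit events, applyEvent folds them into state, and
// fromEvents proves the state can be rebuilt from the log alone.
class Aggregate {
  constructor(id) {
    this.id = id;
    this.version = 0;
    this.events = [];
    this.state = { status: "initial", totalAmount: 0 };
  }
  create(items) {
    const totalAmount = items.reduce((s, i) => s + i.price * i.quantity, 0);
    this.apply({ type: "Created", data: { totalAmount }, version: this.version + 1 });
  }
  updateStatus(newStatus) {
    this.apply({ type: "StatusUpdated", data: { newStatus }, version: this.version + 1 });
  }
  apply(event) {
    // Events are the only thing allowed to mutate state
    if (event.type === "Created") {
      this.state = { ...this.state, status: "created", totalAmount: event.data.totalAmount };
    } else if (event.type === "StatusUpdated") {
      this.state.status = event.data.newStatus;
    }
    this.version = event.version;
    this.events.push(event);
  }
  static fromEvents(id, events) {
    const agg = new Aggregate(id);
    events.forEach((e) => agg.apply(e));
    agg.events = []; // replayed events are already committed
    return agg;
  }
}

const order = new Aggregate("order-1");
order.create([{ price: 10, quantity: 2 }]);
order.updateStatus("shipped");

// Rebuild from the event log alone — state matches the live aggregate
const replayed = Aggregate.fromEvents("order-1", order.events);
console.log(JSON.stringify(replayed.state));
// → {"status":"shipped","totalAmount":20}
```

This replay property is what makes the "can rebuild state from events" advantage in the CQRS listing above more than a slogan: the event log is the source of truth, and every read model is just a fold over it.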
This exploration of microservices architecture foundations provides the essential patterns and practices for decomposing monolithic systems into scalable, maintainable distributed architectures. The next article dives deeper into advanced microservices patterns, including service discovery, configuration management, distributed tracing, circuit breakers, and event-driven architecture patterns that enable truly resilient distributed systems.