By Kasun Gunarathna
The microservices architecture allows us to build scalable and maintainable applications by breaking an application down into small, loosely coupled services. Node.js, with its event-driven architecture, is a popular choice for building such services. This article will delve into three crucial aspects of building Node.js microservices: inter-service communication (Kafka), logging (Winston), and distributed tracing (Zipkin). We will provide practical steps to implement each of these components. Let's do it.
Inter-communication in Microservices (Node.js/Kafka)
Microservices need to communicate with each other to function as a cohesive application. There are several techniques and tools available for inter-service communication. Here, we will explore some popular choices:
Transporters for Intercommunication
- Redis
- MQTT
- NATS
- RabbitMQ
- Kafka
- gRPC
Each tool has its own use cases, advantages, and disadvantages. Let’s explore Kafka in detail with an example.
Kafka
Kafka is a distributed streaming platform used for building real-time data pipelines and streaming applications.
Step 1: Set Up Kafka
Install Kafka using Docker:
# docker-compose.yml
version: '3.8'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
    depends_on:
      - zookeeper
Start Kafka:
docker-compose up -d
Step 2: Install Kafka Client Library
Install the kafka-node library for Node.js:
yarn add kafka-node
Step 3: Implement Kafka Producer and Consumer
Create a producer and consumer service using Kafka.
Producer
// kafka_producer.js
const kafka = require('kafka-node');

const Producer = kafka.Producer;
const client = new kafka.KafkaClient({ kafkaHost: 'localhost:9092' });
const producer = new Producer(client);

producer.on('ready', () => {
  console.log('Kafka Producer is connected and ready.');
  producer.send([{ topic: 'test', messages: 'Hello Kafka!' }], (err, data) => {
    if (err) console.error('Error:', err);
    else console.log('Data:', data);
  });
});

producer.on('error', (error) => {
  console.error('Error:', error);
});
Consumer
// kafka_consumer.js
const kafka = require('kafka-node');

const Consumer = kafka.Consumer;
const client = new kafka.KafkaClient({ kafkaHost: 'localhost:9092' });
const consumer = new Consumer(client, [{ topic: 'test', partition: 0 }], { autoCommit: true });

consumer.on('message', (message) => {
  console.log('Received message:', message);
});

consumer.on('error', (error) => {
  console.error('Error:', error);
});
Run both kafka_producer.js and kafka_consumer.js to see the communication in action.
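In real services, the message payload is usually structured JSON rather than a bare string like 'Hello Kafka!'. A minimal sketch of that encode/decode step (the helper names are illustrative, not part of kafka-node):

```javascript
// Kafka carries strings/Buffers on the wire, so structured payloads are
// JSON-encoded by the producer and decoded back by the consumer.
const encodeEvent = (event) => JSON.stringify(event);
const decodeEvent = (raw) => JSON.parse(raw.toString());

const wireMessage = encodeEvent({ type: 'user_logged_in', userId: 42 });
const event = decodeEvent(wireMessage);
console.log(event.type); // user_logged_in
```

The same pair of helpers can be shared by producer and consumer so the two sides never disagree about the payload format.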
Logging (Winston) in Microservices
Logging is essential for monitoring, debugging, and analyzing application behavior. In a microservices architecture, centralized logging is crucial for aggregating logs from multiple services.
Example: Implementing Logging with Winston
Winston is a popular logging library in Node.js. Here’s how to set up Winston for logging in a Node.js microservice.
Step 1: Initialize the Project
yarn init
Step 2: Install Dependencies
yarn add express winston
Step 3: Set Up Winston Logger
Create a logger.js file and configure Winston:
// logger.js
const winston = require('winston');

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  defaultMeta: { service: 'user-service' },
  transports: [
    new winston.transports.File({ filename: 'error.log', level: 'error' }),
    new winston.transports.File({ filename: 'combined.log' }),
  ],
});

if (process.env.NODE_ENV !== 'production') {
  logger.add(new winston.transports.Console({
    format: winston.format.simple(),
  }));
}

module.exports = logger;
Step 4: Create an Express Application with Logging
Create index.js and include logging statements:
// index.js
const express = require('express');
const logger = require('./logger');

const app = express();

app.get('/', (req, res) => {
  logger.info('GET request received at /');
  res.send('Hello, World!');
});

app.listen(3000, () => {
  logger.info('Server started on port 3000');
});
Start the application:
node index.js
Send a request to the server and check the combined.log file for log entries.
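Because the logger uses winston.format.json(), each line in combined.log is a standalone JSON object, which makes the file easy to inspect programmatically. A small sketch (the sample line is illustrative of the configured format):

```javascript
// Each combined.log line is one JSON object; parse it to filter or count
// entries. The sample line below mirrors the format configured in logger.js.
const parseLogLine = (line) => JSON.parse(line);

const sample = '{"level":"info","message":"GET request received at /","service":"user-service"}';
const entry = parseLogLine(sample);
console.log(entry.level, entry.service); // info user-service
```

This line-per-object layout is also what log shippers expect when forwarding to a centralized aggregator.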
Distributed Tracing with OpenTelemetry/Zipkin
Distributed tracing helps track requests as they traverse multiple services, identifying bottlenecks and performance issues. Zipkin is a distributed tracing system that can be easily integrated with Node.js applications.
Example: Setting Up Distributed Tracing with Zipkin
Step 1: Set Up Zipkin with Docker
Create a docker-compose.yml file:
version: '3.8'
services:
zipkin:
image: openzipkin/zipkin
ports:
- 9411:9411
Start Zipkin:
docker-compose up
The Zipkin UI will be available at http://localhost:9411.
Step 2: Install OpenTelemetry Packages
Install the necessary OpenTelemetry packages:
yarn add @opentelemetry/api @opentelemetry/sdk-node @opentelemetry/instrumentation-http @opentelemetry/exporter-zipkin
Step 3: Set Up OpenTelemetry in Node.js
Create tracing.js and configure OpenTelemetry to use Zipkin:
// tracing.js
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { ZipkinExporter } = require('@opentelemetry/exporter-zipkin');
const { HttpInstrumentation } = require('@opentelemetry/instrumentation-http');

const sdk = new NodeSDK({
  traceExporter: new ZipkinExporter({
    url: 'http://localhost:9411/api/v2/spans',
  }),
  instrumentations: [new HttpInstrumentation()],
});

sdk.start();
console.log('Tracing initialized');
Step 4: Import Tracing Configuration
Import the tracing configuration as the first thing in your entry file (index.js):
// index.js
require('./tracing');
const express = require('express');
const logger = require('./logger');

const app = express();

app.get('/', (req, res) => {
  logger.info('GET request received at /');
  res.send('Hello, World!');
});

app.listen(3000, () => {
  logger.info('Server started on port 3000');
});
Step 5: Verify Tracing in Zipkin
Start the application:
node index.js
Send a request to the server and check the Zipkin UI at http://localhost:9411 to see the traces.
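Under the hood, the HTTP instrumentation propagates trace context between services in the W3C traceparent header. A minimal parser sketch to illustrate its shape (this helper is for illustration only, not part of OpenTelemetry's API):

```javascript
// traceparent is `version-traceId-spanId-flags`; every service on a request
// path reports spans with the same traceId, which is how Zipkin groups a
// single end-to-end request into one trace.
const parseTraceparent = (header) => {
  const [version, traceId, spanId, flags] = header.split('-');
  return { version, traceId, spanId, flags };
};

const ctx = parseTraceparent('00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01');
console.log(ctx.traceId); // 4bf92f3577b34da6a3ce929d0e0e4736
```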
Complete Example: User Login and Adding a New Product with Events (Node.js/Kafka/Winston/Zipkin)
Explanation
● User Service: Handles user authentication and publishes a “user_logged_in” event upon successful login.
● Auth Service: Subscribes to “user_logged_in” events and processes them as needed (e.g., logging).
● Product Service: Manages products and publishes a “new_product_added” event upon adding a new product.
● Event Service: Acts as a centralized hub, subscribing to events from both User Service and Product Service, and processing them accordingly.
This example demonstrates how microservices can communicate asynchronously using events via Kafka, ensuring loose coupling and scalability in a distributed system architecture. Each service focuses on specific functionality, and events facilitate communication and coordination between them without direct dependencies.
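Before wiring the services up, it helps to fix an event shape. A hedged sketch of the kind of envelope the services below could publish (field names are illustrative; the actual code in this example sends plain JSON payloads):

```javascript
// An event envelope: type plus payload plus timestamp. Field names are
// illustrative assumptions, not taken from the services below.
const makeEvent = (type, payload) => ({
  type,
  payload,
  timestamp: new Date().toISOString(),
});

const event = makeEvent('user_logged_in', { username: 'user' });
console.log(event.type); // user_logged_in
```

A shared envelope like this gives consumers a stable place to dispatch on (the type field) as the number of event kinds grows.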
1. Setup Kafka
Make sure you have Kafka and Zookeeper running. You can use Docker to set them up:
# docker-compose.yml
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka:2.12-2.2.0
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  zipkin:
    image: openzipkin/zipkin
    ports:
      - "9411:9411"
Start Kafka and Zipkin:
docker-compose up -d
2. Setup Logging with Winston
Install the required packages:
npm install winston kafkajs @opentelemetry/api @opentelemetry/sdk-trace-node @opentelemetry/exporter-zipkin
Create a logger.js file:
// logger.js
const { createLogger, transports, format } = require('winston');

const logger = createLogger({
  level: 'info',
  format: format.combine(
    format.timestamp(),
    format.json()
  ),
  transports: [
    new transports.Console(),
    new transports.File({ filename: 'application.log' })
  ]
});

module.exports = logger;
3. Setup Tracing with Zipkin
Create a tracing.js file:
// tracing.js
const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node');
const { ZipkinExporter } = require('@opentelemetry/exporter-zipkin');
const { BatchSpanProcessor } = require('@opentelemetry/sdk-trace-base');

const provider = new NodeTracerProvider();

const zipkinExporter = new ZipkinExporter({
  serviceName: 'my-service',
  url: 'http://localhost:9411/api/v2/spans'
});

provider.addSpanProcessor(new BatchSpanProcessor(zipkinExporter));
provider.register();
4. Kafka Setup
Create a kafka.js file to configure Kafka producers and consumers:
// kafka.js
const { Kafka } = require('kafkajs');
const logger = require('./logger');

const kafka = new Kafka({
  clientId: 'my-app',
  brokers: ['localhost:9092']
});

const producer = kafka.producer();

const initKafka = async () => {
  await producer.connect();
};

const produceMessage = async (topic, message) => {
  await producer.send({
    topic,
    messages: [{ value: message }]
  });
};

// Each subscription gets its own consumer and consumer group. A kafkajs
// consumer can only call run() once and cannot subscribe to new topics while
// running, and separate groups let several services in this process receive
// the same topic independently.
let subscriptionCount = 0;
const consumeMessages = async (topic, callback) => {
  const consumer = kafka.consumer({ groupId: `my-group-${++subscriptionCount}` });
  await consumer.connect();
  await consumer.subscribe({ topic, fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      logger.info(`Received message: ${message.value.toString()} on topic: ${topic}`);
      callback(message.value.toString());
    }
  });
};

module.exports = { initKafka, produceMessage, consumeMessages };
5. Implement Auth Service
Create auth-service.js for the Auth service:
// auth-service.js
const { produceMessage, consumeMessages } = require('./kafka');
const logger = require('./logger');

const authenticateUser = (username, password) => {
  if (username === 'user' && password === 'password') {
    logger.info('Authentication successful');
    return { success: true, message: 'Authentication successful' };
  } else {
    logger.error('Authentication failed');
    return { success: false, message: 'Authentication failed' };
  }
};

const startAuthService = async () => {
  await consumeMessages('auth-requests', async (message) => {
    const { username, password } = JSON.parse(message);
    const result = authenticateUser(username, password);
    await produceMessage('auth-responses', JSON.stringify(result));
  });
};

module.exports = { startAuthService };
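Since authenticateUser is a pure function, it can be sanity-checked without a Kafka broker. It is re-declared here (minus the logging) so the snippet runs standalone:

```javascript
// Re-declared from auth-service.js, without the logger, so this runs standalone.
const authenticateUser = (username, password) => {
  if (username === 'user' && password === 'password') {
    return { success: true, message: 'Authentication successful' };
  }
  return { success: false, message: 'Authentication failed' };
};

console.log(authenticateUser('user', 'password').success); // true
console.log(authenticateUser('user', 'nope').success);     // false
```

Keeping the authentication logic out of the Kafka-handling code is what makes this kind of isolated check possible.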
6. Implement User Service
Create user-service.js for the User service:
// user-service.js
const { produceMessage, consumeMessages } = require('./kafka');
const logger = require('./logger');

const requestAuthentication = async (username, password) => {
  const message = JSON.stringify({ username, password });
  await produceMessage('auth-requests', message);
};

const startUserService = async () => {
  await consumeMessages('auth-responses', async (message) => {
    const result = JSON.parse(message);
    logger.info(`Authentication result: ${result.message}`);
  });
};

module.exports = { requestAuthentication, startUserService };
7. Implement Product Service
Create product-service.js for the Product service:
// product-service.js
const { produceMessage, consumeMessages } = require('./kafka');
const logger = require('./logger');

const addProduct = async (productName, price) => {
  const message = JSON.stringify({ productName, price });
  await produceMessage('product-additions', message);
  logger.info(`Product added: ${productName}, Price: ${price}`);
};

const startProductService = async () => {
  await consumeMessages('product-additions', async (message) => {
    const { productName, price } = JSON.parse(message);
    logger.info(`Processing product addition: ${productName}, Price: ${price}`);
  });
};

module.exports = { addProduct, startProductService };
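addProduct publishes whatever it is given; in practice you would validate the payload before it reaches the topic. A hedged sketch (the validation rules are assumptions, not part of the article's code):

```javascript
// Illustrative validation before publishing a product-additions event:
// a product needs a non-empty name and a non-negative numeric price.
const isValidProduct = ({ productName, price }) =>
  typeof productName === 'string' && productName.trim().length > 0 &&
  typeof price === 'number' && price >= 0;

console.log(isValidProduct({ productName: 'New Product', price: 100 })); // true
console.log(isValidProduct({ productName: '', price: -5 }));             // false
```

Validating at the producer keeps malformed events off the topic entirely, which is cheaper than having every consumer defend against them.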
8. Implement Event Service
Create events-service.js for the Event service:
// events-service.js
const { produceMessage, consumeMessages } = require('./kafka');
const logger = require('./logger');

const startEventService = async () => {
  await consumeMessages('auth-responses', async (message) => {
    logger.info(`Event: Auth response received - ${message}`);
  });
  await consumeMessages('product-additions', async (message) => {
    logger.info(`Event: Product addition received - ${message}`);
  });
};

module.exports = { startEventService };
9. Main Application
Create app.js to start the services:
// app.js
require('./tracing');
const { initKafka } = require('./kafka');
const { startAuthService } = require('./auth-service');
const { requestAuthentication, startUserService } = require('./user-service');
const { addProduct, startProductService } = require('./product-service');
const { startEventService } = require('./events-service');
const logger = require('./logger');

const start = async () => {
  await initKafka();
  await startAuthService();
  await startUserService();
  await startProductService();
  await startEventService();

  // Simulate user authentication
  await requestAuthentication('user', 'password');

  // Simulate adding a product
  await addProduct('New Product', 100);

  logger.info('Microservices started');
};

start();
10. Run the Application
Ensure Kafka and Zipkin are running. Start the application:
node app.js
Running the Example
1. Start Kafka and Zipkin: ensure both are running locally (docker-compose up -d), or adjust the connection details to match your setup.
2. Start the Application: node app.js starts all four services (User, Auth, Product, and Event) in a single process.
3. Simulate User Login and Add Product: app.js triggers both on startup via requestAuthentication('user', 'password') and addProduct('New Product', 100). If you expose these operations as HTTP endpoints, you can also drive them with Postman or cURL.
4. View Console Output: check the console logs (and application.log) to see the events being processed, and the Zipkin UI at http://localhost:9411 for traces.
Conclusion
Building robust Node.js microservices involves more than just writing code. Effective intercommunication, logging, and tracing are essential for maintaining and scaling your applications. By leveraging Kafka for communication, Winston for logging, and Zipkin for distributed tracing, you can ensure your microservices architecture is efficient, maintainable, and easy to debug. Happy coding!