Recently, the Java developers in our company have been asking the front end to call their various microservices one by one. Isn't that cumbersome?
First, let's talk about why we need to implement microservices:
By adopting a microservices architecture, we can not only enhance the scalability and reliability of the system but also promote efficient collaboration and rapid iteration within the team, thereby better addressing complex business scenarios.
This approach also lets the Java team integrate their services closely with our business.
Here's the code:
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import {
  FastifyAdapter,
  NestFastifyApplication,
} from '@nestjs/platform-fastify';
import { HkAuthMiddleware } from './hk-auth/hk-auth.middleware';
import { HttpService } from '@nestjs/axios';
import fastifyMultipart from '@fastify/multipart';
import cluster from 'cluster';
import * as os from 'os';
import { Transport, MicroserviceOptions } from '@nestjs/microservices';
async function bootstrap() {
  // Create the Fastify-based HTTP application
  const app = await NestFactory.create<NestFastifyApplication>(
    AppModule,
    new FastifyAdapter(),
  );

  // Register the microservice listener (TCP transport)
  app.connectMicroservice<MicroserviceOptions>({
    transport: Transport.TCP,
    options: {
      host: process.env.SCREENSHOT_SERVICE_HOST || 'localhost',
      port: parseInt(process.env.SCREENSHOT_SERVICE_PORT || '4000', 10),
      retryAttempts: 5, // Retry attempts
      retryDelay: 3000, // Retry delay (milliseconds)
    },
  });

  // Start the microservice listener
  await app.startAllMicroservices();

  // Keep the existing configuration
  const httpService = new HttpService();
  const authMiddleware = new HkAuthMiddleware(httpService);
  app.setGlobalPrefix('/api');
  await app.register(fastifyMultipart, {
    limits: {
      fileSize: 200 * 1024 * 1024,
    },
  });
  app.use((req, res, next) => authMiddleware.use(req, res, next));
  app.enableCors();

  // Start the HTTP server
  await app.listen(3000, '0.0.0.0');
}

// Keep the existing cluster mode
if (cluster.isPrimary) {
  const cpuCount = os.cpus().length;
  console.log(`Primary process started, preparing to fork ${cpuCount} worker processes`);
  for (let i = 0; i < cpuCount; i++) {
    cluster.fork();
  }
  cluster.on('exit', (worker) => {
    console.log(`Worker process ${worker.process.pid} exited, restarting...`);
    cluster.fork();
  });
} else {
  bootstrap();
}
A quick note on registering a microservice: you can think of it as opening your own store, where other services come to "buy" what you offer.
The host is your server's IP, and the port is the port you opened for the microservice.
With this in place, the application becomes a hybrid app: each worker process handles both HTTP requests and microservice (TCP) messages at the same time.
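Both this registration and the client we build in the next section read the same two environment variables, so it can be worth pulling them into one small shared config. A minimal sketch, assuming a hypothetical file name and fallback values (not from the original project):

// screenshot-service.config.ts (hypothetical location)
// Keeps the TCP host/port in one place so the server registration in main.ts
// and any ClientProxy pointing at it stay in sync.
export const screenshotServiceConfig = {
  host: process.env.SCREENSHOT_SERVICE_HOST || 'localhost',
  port: parseInt(process.env.SCREENSHOT_SERVICE_PORT || '4000', 10),
};

main.ts and the client below could then both spread this object into their options.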
Establishing TCP communication at the service layer#
At the service layer, we can now connect to the Java developers' microservices. For example, our screenshot feature needs to store a base64-encoded image in Redis, or call a Java service.
import { Injectable } from '@nestjs/common';
import { ClientProxy, ClientProxyFactory, Transport } from '@nestjs/microservices';
import { firstValueFrom } from 'rxjs';

@Injectable()
export class ScreenshotClient {
  private client: ClientProxy;

  constructor() {
    // ClientProxy is abstract, so the instance is built through ClientProxyFactory
    this.client = ClientProxyFactory.create({
      transport: Transport.TCP,
      options: {
        host: process.env.SCREENSHOT_SERVICE_HOST || 'localhost',
        port: parseInt(process.env.SCREENSHOT_SERVICE_PORT || '4000', 10),
      },
    });
  }

  async generateScreenshot(groups: any) {
    try {
      // Send the command and wait for the single response
      const result = await firstValueFrom(
        this.client.send({ cmd: 'generate_screenshot' }, groups),
      );
      if (!result.success) {
        throw new Error(result.error);
      }
      return result.data;
    } catch (error) {
      throw new Error(`Screenshot generation failed: ${error.message}`);
    }
  }
}
The client is the remote service you connect to via IP + port.
send() carries the message command (pattern) you send to the other microservice, plus the payload.
For example: send({ cmd: 'some_command' }, data).
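For completeness, here is roughly how that client might be consumed from a plain HTTP controller. The route name, file path, and module wiring are assumptions for illustration; ScreenshotClient just needs to be listed as a provider in the module:

import { Body, Controller, Post } from '@nestjs/common';
import { ScreenshotClient } from './screenshot.client'; // hypothetical path

@Controller('screenshot')
export class ScreenshotHttpController {
  constructor(private readonly screenshotClient: ScreenshotClient) {}

  // Receives the HTTP request and forwards the payload to the TCP microservice
  @Post('generate')
  async generate(@Body() groups: any) {
    return this.screenshotClient.generateScreenshot(groups);
  }
}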
Of course, you can call different microservices within a single service.
import { Injectable } from '@nestjs/common';
import { ClientProxy, ClientProxyFactory, Transport } from '@nestjs/microservices';

@Injectable()
export class SomeService {
  private redisClient: ClientProxy;
  private rabbitClient: ClientProxy;

  constructor() {
    // Redis microservice client
    this.redisClient = ClientProxyFactory.create({
      transport: Transport.REDIS,
      options: {
        host: '192.168.1.111',
        port: 6379,
      },
    });

    // RabbitMQ microservice client
    this.rabbitClient = ClientProxyFactory.create({
      transport: Transport.RMQ,
      options: {
        urls: ['amqp://192.168.1.222:5672'],
        queue: 'my_queue',
      },
    });
  }
}
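Once both clients exist, a method added to SomeService above can talk to either of them. A rough sketch, where the command name and payload are made up for illustration and firstValueFrom comes from 'rxjs' as before:

  async doSomeWork(payload: any) {
    // send() is request/response: it waits for the remote handler's reply
    const cached = await firstValueFrom(
      this.redisClient.send({ cmd: 'get_cache' }, payload),
    );

    // emit() is fire-and-forget: it publishes an event and does not wait for a reply
    this.rabbitClient.emit('task_created', payload);

    return cached;
  }

In a larger project it is usually cleaner to register these clients with ClientsModule.register(...) and inject them via @Inject(), instead of constructing them by hand in every service.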
Exposing your own microservices#
For example, if a Java developer needs to run several DTO-OTO conversions for a piece of business logic, they can route it through our service:
import { Body, Controller, Post } from '@nestjs/common';
import { MessagePattern, Payload } from '@nestjs/microservices';
import { ScreenshotService } from './screen-shot.service';

@Controller()
export class ScreenshotController {
  constructor(private readonly screenshotService: ScreenshotService) {}

  // Microservice endpoint: handles { cmd: 'generate_screenshot' } messages over TCP
  @MessagePattern({ cmd: 'generate_screenshot' })
  async generateScreenshot(@Payload() data: any) {
    try {
      const result = await this.screenshotService.compositeAllGroups(data);
      return {
        success: true,
        data: result,
      };
    } catch (error) {
      return {
        success: false,
        error: error.message,
      };
    }
  }

  // The original HTTP endpoint, unchanged
  @Post()
  async httpGenerateScreenshot(@Body() data: any) {
    // ... original HTTP handling logic
  }
}
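One small design note: the { success, data, error } object returned by the pattern handler above is the same envelope the ScreenshotClient checks earlier, so it may be worth declaring it once as a shared type. A minimal sketch; the interface name and file are just a suggestion:

// service-response.ts (hypothetical shared file)
// Envelope returned by microservice handlers and unwrapped by their callers.
export interface ServiceResponse<T = unknown> {
  success: boolean;
  data?: T;
  error?: string;
}

The handler above would then return a ServiceResponse, and the client could check result.success with proper typing instead of guessing field names.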
Next time, we will cover Prometheus + Grafana integration for visual performance monitoring.