Oct 4, 2024
Instrumenting Express.js Apps with Prometheus Metrics
In today's microservices-driven world, observability is crucial for maintaining and optimizing application performance. This guide will walk you through instrumenting an Express.js application to expose metrics that can be scraped by Prometheus, laying the groundwork for powerful monitoring and alerting capabilities.
Table of Contents
Understanding Prometheus and Time-Series Metrics
Setting Up the Express.js Application
Instrumenting Express.js with Custom Metrics
Auto-instrumentation with prom-client
Running the Application and Sending Traffic
Observing Metrics
Understanding Prometheus and Time-Series Metrics
Prometheus is an open-source monitoring toolkit that stores metrics as time-series data, meaning each data point is associated with a timestamp. These metrics typically include:
Counters: Cumulative metrics that only increase (e.g., total number of requests)
Gauges: Metrics that can go up and down (e.g., current CPU usage)
Histograms: Metrics that sample observations and count them in configurable buckets (e.g., request durations)
To leverage Prometheus, applications need to be instrumented to expose metrics in a format that Prometheus can scrape. This instrumentation allows us to collect valuable data about our application's performance and behavior, which can later be used for monitoring, alerting, and optimization.
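As a quick illustration of these three metric types (not code from the sample application; the metric names and values here are placeholders chosen for this sketch), this is roughly how they look when defined with the prom-client library for Node.js:

    const client = require('prom-client');

    // Counter: only ever increases, e.g. total requests served.
    const requestsTotal = new client.Counter({
      name: 'app_requests_total',
      help: 'Total number of requests received',
    });
    requestsTotal.inc(); // increment by 1

    // Gauge: can go up and down, e.g. jobs currently queued.
    const queueSize = new client.Gauge({
      name: 'app_queue_size',
      help: 'Current number of queued jobs',
    });
    queueSize.set(42);

    // Histogram: samples observations into configurable buckets, e.g. durations.
    const taskDuration = new client.Histogram({
      name: 'app_task_duration_seconds',
      help: 'Task duration in seconds',
      buckets: [0.1, 0.5, 1, 2, 5],
    });
    taskDuration.observe(0.3); // record a 300 ms observation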
Setting Up the Express.js Application
Let's start by cloning and setting up the sample Express.js application:
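The exact repository URL depends on where you obtained the sample project, so the placeholders below stand in for it:

    git clone <sample-repository-url>
    cd <project-directory>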
Install dependencies:
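Assuming a standard npm-based project with a package.json, install the dependencies from the project directory:

    npm install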
To run the application, use the following command:
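Assuming the project defines the conventional start script in its package.json (an assumption about the sample repo):

    npm start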
The application will start and be available at http://localhost:3000.
Instrumenting Express.js with Custom Metrics
Inspect the source code. The sample application is already instrumented with custom metrics using the prom-client package:
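The snippet below is a representative sketch of that instrumentation rather than the application's exact source; only the metric name http_request_duration_seconds comes from this guide, while the label names (method, route, status_code), the bucket boundaries, and the middleware structure are assumptions commonly used with this kind of metric:

    const express = require('express');
    const client = require('prom-client');

    const app = express();
    const register = client.register;

    // Custom histogram for HTTP request durations (labels are assumed).
    const httpRequestDuration = new client.Histogram({
      name: 'http_request_duration_seconds',
      help: 'Duration of HTTP requests in seconds',
      labelNames: ['method', 'route', 'status_code'],
      buckets: [0.01, 0.05, 0.1, 0.5, 1, 2, 5],
    });

    // Middleware that times every request and records the observation.
    app.use((req, res, next) => {
      const end = httpRequestDuration.startTimer();
      res.on('finish', () => {
        end({
          method: req.method,
          route: req.route ? req.route.path : req.path,
          status_code: res.statusCode,
        });
      });
      next();
    });

    // Expose all registered metrics in the Prometheus text format.
    app.get('/metrics', async (req, res) => {
      res.set('Content-Type', register.contentType);
      res.end(await register.metrics());
    });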
The http_request_duration_seconds metric is a Histogram that measures the duration of HTTP requests. It is deemed custom because we explicitly define it using the prom-client package; it is not automatically generated or collected by the library itself. The package provides the tools (Histogram, Counter, Gauge) to create these metrics, but it's up to us to define them, give them names, descriptions, and labels, and then update them appropriately in our code.
Auto-instrumentation with prom-client
In addition to our custom metrics, the prom-client library provides a wide range of auto-instrumented default metrics out of the box. These metrics are collected automatically and don't require explicit instrumentation code for each one. We enable them with the following line:
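The sample app's exact call may include options (such as a metric prefix or a custom registry) that are omitted in this sketch:

    // Enable collection of the default Node.js and process metrics.
    client.collectDefaultMetrics();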
Some of these auto-instrumented metrics include:
System metrics:
process_cpu_user_seconds_total: A Counter for total user CPU time spent.
process_cpu_system_seconds_total: A Counter for total system CPU time spent.
process_cpu_seconds_total: A Counter for total CPU time spent.
process_resident_memory_bytes: A Gauge for resident memory size.
Node.js specific metrics:
nodejs_eventloop_lag_seconds: A Gauge for event loop lag.
nodejs_active_handles_total: A Gauge for the total number of active handles.
nodejs_active_requests_total: A Gauge for the total number of active requests.
Garbage collection metrics:
nodejs_gc_duration_seconds: A Histogram for garbage collection duration by kind.
These auto-instrumented metrics provide valuable insights into our application's performance and resource utilization without requiring additional code.
Running the Application and Sending Traffic
Now that the application is set up and running, let's send some traffic to it using Postman:
Open Postman and import the provided collection express-metrics-postman-collection.json.
Use the various requests in the collection to interact with the API (equivalent curl commands are sketched after this list):
Send GET requests to the root endpoint.
Create, retrieve, update, and delete items using the /items endpoints.
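If you prefer the command line, a few curl requests along these lines exercise the same endpoints; the HTTP verbs, item IDs, and request bodies here are illustrative assumptions about the sample API rather than its documented contract:

    curl http://localhost:3000/
    curl http://localhost:3000/items
    curl -X POST -H "Content-Type: application/json" -d '{"name":"example"}' http://localhost:3000/items
    curl -X PUT -H "Content-Type: application/json" -d '{"name":"updated"}' http://localhost:3000/items/1
    curl -X DELETE http://localhost:3000/items/1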
Observing Metrics
After sending traffic to the application, let's observe the metrics:
In your browser or using a tool like curl, navigate to http://localhost:3000/metrics.
You'll see a list of all the metrics our application is tracking, including both custom and auto-instrumented metrics. Let's break down some key metrics (a sample curl call with illustrative output follows this list):
The custom http_request_duration_seconds histogram provides information about HTTP request durations.
The auto-instrumented process_cpu_user_seconds_total gives us insight into our application's CPU usage.
The auto-instrumented process_resident_memory_bytes provides information about memory usage.
The auto-instrumented nodejs_eventloop_lag_seconds shows the lag of the event loop.
The auto-instrumented nodejs_active_handles_total shows the number of active handles in our application.
The auto-instrumented nodejs_gc_duration_seconds provides information about garbage collection durations.
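For example, fetching the endpoint with curl returns metrics in the Prometheus text exposition format. The sample values below are purely illustrative, not actual output from the sample app:

    curl http://localhost:3000/metrics

    # HELP http_request_duration_seconds Duration of HTTP requests in seconds
    # TYPE http_request_duration_seconds histogram
    http_request_duration_seconds_bucket{le="0.1",method="GET",route="/items",status_code="200"} 12
    http_request_duration_seconds_sum{method="GET",route="/items",status_code="200"} 0.423
    http_request_duration_seconds_count{method="GET",route="/items",status_code="200"} 14
    # HELP process_cpu_user_seconds_total Total user CPU time spent in seconds.
    # TYPE process_cpu_user_seconds_total counter
    process_cpu_user_seconds_total 0.61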
These metrics, both custom and auto-instrumented, are now exposed in a format that Prometheus can understand and scrape. In future steps, you can set up Prometheus to scrape these metrics at regular intervals, storing them for analysis and visualization. This data can then be used with tools like Grafana to create insightful dashboards, or with Prometheus's built-in alerting to notify you of potential issues before they become critical problems.
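As a pointer toward that next step, a minimal Prometheus scrape configuration might look like the following; the job name and scrape interval are arbitrary choices for this sketch, and the target assumes Prometheus can reach the app at localhost:3000:

    # prometheus.yml (sketch)
    scrape_configs:
      - job_name: 'express-app'
        scrape_interval: 15s
        static_configs:
          - targets: ['localhost:3000']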
By instrumenting your Express.js application with Prometheus metrics, you've taken a significant step towards improving your application's observability. This will allow you to monitor performance, track usage patterns, and quickly identify and resolve issues as they arise.