Oct 2, 2024
Instrumenting FastAPI Apps with Prometheus Metrics
In today's microservices-driven world, observability is crucial for maintaining and optimizing application performance. This guide will walk you through instrumenting a FastAPI application to expose metrics that can be scraped by Prometheus, laying the groundwork for powerful monitoring and alerting capabilities.
Table of Contents
Understanding Prometheus and Time-Series Metrics
Setting Up the FastAPI Application
Instrumenting FastAPI with Custom Metrics
Running the Application and Sending Traffic
Observing Metrics
Cleanup
Understanding Prometheus and Time-Series Metrics
Prometheus is an open-source monitoring toolkit that stores metrics as time-series data, meaning each data point is associated with a timestamp. Its core metric types include:
Counters: Cumulative metrics that only increase (e.g., total number of requests)
Gauges: Metrics that can go up and down (e.g., current CPU usage)
Histograms: Metrics that sample observations and count them in configurable buckets (e.g., request durations)
To leverage Prometheus, applications need to be instrumented to expose metrics in a format that Prometheus can scrape. This instrumentation allows us to collect valuable data about our application's performance and behavior, which can later be used for monitoring, alerting, and optimization.
Setting Up the FastAPI Application
Let's start by cloning and setting up the sample FastAPI application. The commands below are a typical setup sequence; adjust directory and file names to match the repository.
Create a virtual environment:
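```bash
# create a virtual environment in a directory named "venv"
python3 -m venv venv
```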
Activate the virtual environment (macOS and Linux):
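```bash
source venv/bin/activate
```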
Upgrade pip:
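```bash
pip install --upgrade pip
```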
Install the required packages:
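```bash
# assumes the repository provides a requirements.txt listing fastapi,
# uvicorn, prometheus-client, and (for the system metrics) psutil
pip install -r requirements.txt
```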
To run the application, use the following command:
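```bash
# assumes the FastAPI app object is named "app" in main.py;
# adjust the module path if the repository differs
uvicorn main:app --reload
```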
The application will start and be available at http://localhost:8000.
Instrumenting FastAPI with Custom Metrics
Inspect the application code. The application is already instrumented with custom metrics using the prometheus_client library. Let's examine the key parts:
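The definitions below are a representative sketch reconstructed from the metric names used throughout this guide; the exact label sets on REQUEST_LATENCY and REQUEST_IN_PROGRESS are assumptions, so the sample app may differ slightly:

```python
from prometheus_client import Counter, Gauge, Histogram

# Counter: total HTTP requests, labeled by method, status, and path
REQUEST_COUNT = Counter(
    "http_request_total",
    "Total number of HTTP requests",
    ["method", "status", "path"],
)

# Histogram: request duration in seconds (label set is an assumption)
REQUEST_LATENCY = Histogram(
    "http_request_duration_seconds",
    "Duration of HTTP requests in seconds",
    ["method", "status", "path"],
)

# Gauge: requests currently being processed (label set is an assumption)
REQUEST_IN_PROGRESS = Gauge(
    "http_requests_in_progress",
    "Number of HTTP requests in progress",
    ["method", "path"],
)

# Gauges: process-level resource usage
CPU_USAGE = Gauge("process_cpu_usage", "Current CPU usage in percent")
MEMORY_USAGE = Gauge("process_memory_usage_bytes", "Current memory usage in bytes")
```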
It's important to note that all of these metrics are custom-defined:
Custom HTTP metrics:
REQUEST_COUNT: A Counter to track the total number of HTTP requests.
REQUEST_LATENCY: A Histogram to measure the duration of HTTP requests.
REQUEST_IN_PROGRESS: A Gauge to monitor the number of ongoing HTTP requests.
Custom system metrics:
CPU_USAGE: A Gauge to track CPU usage.
MEMORY_USAGE: A Gauge to monitor memory usage.
These metrics are all custom because we're explicitly defining them using the Prometheus client library. They're not automatically generated or collected by the library itself. The prometheus_client library provides the tools (Counter, Histogram, Gauge) to create these metrics, but it's up to us to define them, give them names, descriptions, and labels, and then update them appropriately in our code.
Updating Custom Metrics
To collect and update these custom metrics, we use a middleware in our FastAPI application:
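A minimal sketch of such a middleware, building on the metric definitions above (the sample app's actual code may differ slightly):

```python
import time

from fastapi import FastAPI, Request

app = FastAPI()

@app.middleware("http")
async def track_metrics(request: Request, call_next):
    method = request.method
    path = request.url.path

    # Count the request as in progress for its entire lifetime
    REQUEST_IN_PROGRESS.labels(method=method, path=path).inc()
    start_time = time.time()
    try:
        response = await call_next(request)
    finally:
        # Always decrement at the end, even if the handler raised
        REQUEST_IN_PROGRESS.labels(method=method, path=path).dec()

    duration = time.time() - start_time
    status = str(response.status_code)

    # Record the completed request and its latency
    REQUEST_COUNT.labels(method=method, status=status, path=path).inc()
    REQUEST_LATENCY.labels(method=method, status=status, path=path).observe(duration)
    return response
```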
Let's break down how each custom metric is updated:
REQUEST_IN_PROGRESS: We increment this Gauge at the start of each request and decrement it at the end. This gives us a real-time count of ongoing requests.
REQUEST_COUNT: We increment this Counter at the end of each request, categorizing it by method, status, and path.
REQUEST_LATENCY: We use the observe method of this Histogram to record the duration of each request.
For the system metrics (CPU_USAGE and MEMORY_USAGE), we update them separately:
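One common approach, sketched here under the assumption that psutil is used to gather system stats, is to refresh the gauges inside the /metrics handler itself:

```python
import psutil
from fastapi.responses import Response
from prometheus_client import CONTENT_TYPE_LATEST, generate_latest

@app.get("/metrics")
async def metrics():
    # Refresh the system gauges right before rendering the scrape response
    CPU_USAGE.set(psutil.cpu_percent())
    MEMORY_USAGE.set(psutil.Process().memory_info().rss)
    return Response(content=generate_latest(), media_type=CONTENT_TYPE_LATEST)
```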
Here, we're updating the system metrics just before we generate the metrics response. This ensures that we're providing the most up-to-date system information each time Prometheus scrapes our metrics endpoint.
By implementing our metrics this way, we have full control over what we're measuring and when we're updating those measurements. This level of customization allows us to tailor our monitoring precisely to our application's needs.
Running the Application and Sending Traffic
Now that the application is set up and running, let's send some traffic to it using Postman:
Open Postman and import the provided collection fastapi-metrics-postman-collection.json.
Use the various requests in the collection to interact with the API:
Send GET requests to the root endpoint
Create, retrieve, update, and delete items using the /items endpoints
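If you'd rather use the command line, a couple of equivalent requests (the POST body here is illustrative; the actual routes and item schema are defined in the app and in the Postman collection):

```bash
curl http://localhost:8000/
curl -X POST http://localhost:8000/items \
  -H "Content-Type: application/json" \
  -d '{"name": "example"}'
```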
Observing Metrics
After sending traffic to the application, let's observe the metrics:
In your browser, or with a tool like curl, open http://localhost:8000/metrics.
You'll see a list of all the metrics our application is tracking. Let's break down some key metrics:
The http_request_total counter shows how many requests of each type we've received.
The http_request_duration_seconds histogram provides information about request durations.
The http_requests_in_progress gauge shows how many requests are currently being processed.
process_cpu_usage and process_memory_usage_bytes give us insight into our application's resource usage.
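The endpoint returns the Prometheus text exposition format. A trimmed excerpt looks something like this (values are illustrative):

```
# HELP http_request_total Total number of HTTP requests
# TYPE http_request_total counter
http_request_total{method="GET",status="200",path="/"} 7.0
# HELP http_request_duration_seconds Duration of HTTP requests in seconds
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{method="GET",status="200",path="/",le="0.005"} 6.0
http_request_duration_seconds_bucket{method="GET",status="200",path="/",le="+Inf"} 7.0
http_request_duration_seconds_sum{method="GET",status="200",path="/"} 0.014
http_request_duration_seconds_count{method="GET",status="200",path="/"} 7.0
# HELP http_requests_in_progress Number of HTTP requests in progress
# TYPE http_requests_in_progress gauge
http_requests_in_progress{method="GET",path="/"} 0.0
# HELP process_cpu_usage Current CPU usage in percent
# TYPE process_cpu_usage gauge
process_cpu_usage 3.1
```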
These metrics are now exposed in a format that Prometheus can understand and scrape. In future steps, you can set up Prometheus to scrape these metrics at regular intervals, storing them for analysis and visualization. This data can then be used with tools like Grafana to create insightful dashboards, or with Prometheus's built-in alerting to notify you of potential issues before they become critical problems.
Cleanup
When you're done working on the project:
Deactivate the virtual environment:
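```bash
deactivate
```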
Optionally, remove the virtual environment directory:
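```bash
# removes the "venv" directory created earlier
rm -rf venv
```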