Oct 8, 2024

Prometheus & Grafana: Docker Compose Monitoring Tutorial

This tutorial will guide you through running Prometheus and Grafana in a Docker Compose environment to monitor application metrics. You'll learn how to set up a monitoring stack where:

  • An application exposes metrics in a Prometheus-compatible format

  • Prometheus scrapes these metrics at regular intervals

  • Grafana creates visualizations by querying the Prometheus data

Types of Metrics

  • Counter Metrics: Values that only increase (e.g., total HTTP requests)

  • Gauge Metrics: Values that can go up or down (e.g., CPU usage, memory usage)

  • Histogram Metrics: Measurements grouped into buckets (e.g., most requests take 0-5ms, some take 5-10ms, and very few take longer)
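
To make these concrete, here is a minimal, illustrative sketch of the three metric types using Python's prometheus_client library (the metric names below are made up for the example; the tutorial's application defines its own):

from prometheus_client import Counter, Gauge, Histogram

# Counter: only ever increases (it resets only when the process restarts).
REQUESTS = Counter("demo_http_requests_total", "Total HTTP requests", ["method", "path"])

# Gauge: can go up and down.
IN_PROGRESS = Gauge("demo_requests_in_progress", "Requests currently being handled")

# Histogram: observations are counted into cumulative buckets.
LATENCY = Histogram(
    "demo_request_duration_seconds",
    "Request duration in seconds",
    buckets=(0.005, 0.01, 0.025, 0.05, 0.1),
)

REQUESTS.labels(method="GET", path="/").inc()  # count one request
IN_PROGRESS.set(3)                             # set the gauge to an absolute value
LATENCY.observe(0.007)                         # record a 7 ms request

Prometheus simply reads whatever current values these objects hold each time it scrapes the application's metrics endpoint.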

How It Works

  1. Applications are instrumented using libraries specific to their framework (Spring Boot, Flask, FastAPI, Node.js) to expose metrics at a designated endpoint (commonly /metrics)

  2. Prometheus is configured to discover and scrape these metrics endpoints at regular intervals (typically every 15 seconds)

  3. The metric data is stored in Prometheus's time-series database

  4. Grafana queries this data using PromQL (Prometheus Query Language) to create real-time visualizations and dashboards

With this foundation, let's proceed with setting up our monitoring stack…

Table of Contents

  1. Setting Up the Monitoring Stack

  2. Exploring the Application Metrics

  3. Exploring Prometheus

  4. Setting Up Grafana

  5. Conclusion

1. Setting Up the Monitoring Stack

Start by cloning the repository:

git clone https://github.com/rslim087a/prometheus-docker-compose.git

cd prometheus-docker-compose

Run the Docker Compose command to start the stack:

docker-compose up
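
(To run the stack in the background instead, add the detached flag: docker-compose up -d.)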

This command will download the necessary Docker images and start the services defined in the docker-compose.yaml file. Let's break down what's happening:

a) FastAPI Application:
fastapi-app:
  image: rslim087/fastapi-prometheus:latest
  ports:
    - "8000:8000"
  networks:
    - monitoring

This service starts a FastAPI application that's been instrumented with Prometheus metrics. The application is accessible at http://localhost:8000. It exposes a /metrics endpoint that Prometheus will scrape to collect performance data.

b) Prometheus:
prometheus:
  image: prom/prometheus:v2.37.0
  volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml
  ports:
    - "9090:9090"
  networks:
    - monitoring

Prometheus is started and configured to scrape metrics from our FastAPI application. The prometheus.yml file in the repository is mounted into the container, providing the scraping configuration:

scrape_configs:
  - job_name: 'fastapi-app'
    static_configs:
      - targets: ['fastapi-app:8000']
    metrics_path: '/metrics'
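
Note that the scrape target is fastapi-app:8000 rather than localhost:8000: Prometheus runs in its own container, and on the shared Docker network the Compose service name fastapi-app resolves to the application container.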

c) Grafana:
grafana:
  image: grafana/grafana:9.0.0
  environment:
    - GF_SECURITY_ADMIN_PASSWORD=admin
  ports:
    - "3000:3000"
  networks:
    - monitoring

Grafana is started and will be available at http://localhost:3000. You can log in with the username "admin" and the password "admin". Grafana is used to create dashboards and visualizations based on the metrics collected by Prometheus.

All these services are connected through a Docker network named monitoring, allowing them to communicate with each other using their service names as hostnames.

Putting it all together:

After running docker-compose up, you'll have a fully functional monitoring stack:

  1. The FastAPI application running and exposing metrics.

  2. Prometheus collecting these metrics at regular intervals.

  3. Grafana ready to be configured to visualize the collected metrics.

2. Exploring the Application Metrics

Before we dive into Prometheus, let's take a look at the metrics our application is exposing. These are the raw metrics that Prometheus will scrape and store.

  1. Open your web browser and navigate to http://localhost:8000/metrics

  2. You should see a page with text content that looks something like this:

# TYPE http_request_total counter
http_request_total{method="GET",path="/metrics",status="200"} 55.0
# HELP http_request_created Total HTTP Requests
# TYPE http_request_created gauge
http_request_created{method="GET",path="/metrics",status="200"} 1.7284304204223764e+09
# HELP http_request_duration_seconds HTTP Request Duration
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{le="0.005",method="GET",path="/metrics",status="200"} 14.0
http_request_duration_seconds_bucket{le="0.01",method="GET",path="/metrics",status="200"} 40.0
http_request_duration_seconds_bucket{le="0.025",method="GET",path="/metrics",status="200"} 55.0
http_request_duration_seconds_bucket{le="0.05",method="GET",path="/metrics",status="200"} 55.0
...
http_request_duration_seconds_count{method="GET",path="/metrics",status="200"} 55.0
http_request_duration_seconds_sum{method="GET",path="/metrics",status="200"} 0.4152390956878662
# HELP http_request_duration_seconds_created HTTP Request Duration
# TYPE http_request_duration_seconds_created gauge
http_request_duration_seconds_created{method="GET",path="/metrics",status="200"} 1.7284304204224129e+09
# HELP http_requests_in_progress HTTP Requests in progress
# TYPE http_requests_in_progress gauge
http_requests_in_progress{method="GET",path="/metrics"} 1.0
# HELP process_cpu_usage Current CPU usage in percent
# TYPE process_cpu_usage gauge
process_cpu_usage 0.9
# HELP process_memory_usage_bytes Current memory usage in bytes
# TYPE process_memory_usage_bytes gauge

This output displays various metrics being exposed by the containerized application. Prometheus will scrape these exposed metrics every 15 seconds, as configured in our prometheus.yml, storing them in its time-series database for subsequent querying and analysis.
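
As a quick worked example of reading the histogram above: bucket counts are cumulative, so le="0.01" = 40 and le="0.005" = 14 means that 40 − 14 = 26 requests took between 5 ms and 10 ms, and the average request duration is the sum divided by the count, roughly 0.415 s / 55 ≈ 7.5 ms.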

Side Note: Prior to being containerized, this Python application was instrumented to expose Prometheus metrics. All major web frameworks, including Spring Boot, Flask, FastAPI, and Node.js, have Prometheus client libraries available to expose metrics in a format that Prometheus can understand and scrape.

To get a sense of what that instrumentation looks like, see the sketch below.
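
As an illustration only (this is not the exact code inside the rslim087/fastapi-prometheus image), a FastAPI application can expose a /metrics endpoint with prometheus_client roughly like this:

from fastapi import FastAPI, Request
from prometheus_client import Counter, make_asgi_app

app = FastAPI()

# Illustrative counter; the real image defines its own metrics.
HTTP_REQUESTS = Counter("http_request", "Total HTTP Requests", ["method", "path", "status"])

@app.middleware("http")
async def count_requests(request: Request, call_next):
    # Count every handled request, labeled by method, path, and status code.
    response = await call_next(request)
    HTTP_REQUESTS.labels(
        method=request.method,
        path=request.url.path,
        status=str(response.status_code),
    ).inc()
    return response

# Mount prometheus_client's ASGI app so GET /metrics serves the exposition format.
app.mount("/metrics", make_asgi_app())

Running the app with uvicorn and visiting /metrics would then return output in the same exposition format shown above.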

3. Exploring Prometheus

Once the stack is running, you can access the Prometheus UI at http://localhost:9090.

First, check if Prometheus is successfully scraping your FastAPI application:

Go to Status > Targets in the Prometheus UI. You should see fastapi-app listed with the state "UP".

Now, let's query for a metric exposed by our FastAPI application. In the expression box on the Graph page, enter http_request_total and click Execute.

You should see a result showing the total number of HTTP requests, broken down by method, path, and status code. If you see this data, it confirms that Prometheus is successfully scraping metrics from our FastAPI application and storing them in its database.
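
If you prefer to verify this from code instead of the UI, Prometheus exposes the same data over its HTTP API. Here is a small illustrative sketch using Python's requests library (assuming the stack from this tutorial is running on localhost):

import requests

# Instant query against Prometheus's HTTP API (same query as in the UI).
resp = requests.get(
    "http://localhost:9090/api/v1/query",
    params={"query": "http_request_total"},
)
resp.raise_for_status()

# Each result carries the series' label set and its latest sample value.
for series in resp.json()["data"]["result"]:
    labels = series["metric"]
    value = series["value"][1]  # value is a [timestamp, value] pair
    print(labels.get("method"), labels.get("path"), labels.get("status"), value)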

With this confirmation, we're ready to set up Grafana to query this data and create visualizations.

4. Setting Up Grafana

Access Grafana at http://localhost:3000. The default login is admin/admin.

Setting up the Data Source

In order for Grafana to query the Prometheus data, we need to set up Prometheus as a data source:

  1. Click on Settings (Gear Icon)

  2. Go to Configuration > Data Sources.

  3. Click "Add data source" and select Prometheus.

  4. Set the URL to http://prometheus:9090.

    • We use prometheus:9090 instead of localhost:9090 because Grafana and Prometheus are on the same Docker network, and prometheus resolves to the Prometheus container's IP.

  5. Click "Save & Test" to ensure the connection is working.

Import a Dashboard

Go to Dashboards > Import and paste the JSON from grafana-dashboard.json:

Each panel in the dashboard uses a PromQL query to visualize metrics from your FastAPI application.
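
For example (these are illustrative queries, not necessarily the exact expressions in the provided JSON), a request-rate panel could use rate(http_request_total[5m]), and a latency panel could use histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m])) to chart the 95th-percentile request duration.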

Keep Going!

You've set up Prometheus and Grafana using Docker Compose. Ready to scale it up?

Next Lesson: Deploy the Prometheus Monitoring Stack in Kubernetes.

Kubernetes Training

If you found these guides helpful, check out The Complete Kubernetes Training course.

Let’s keep in touch

Subscribe to the mailing list and receive the latest updates
