Apr 4, 2025

Loki + Grafana + Promtail: Quickstart with Docker Compose

In this tutorial, we'll set up a complete logging system using Grafana Loki, a horizontally scalable, highly available log aggregation system. This stack consists of three main components:

  • Loki: A log aggregation system that efficiently stores and indexes log data

  • Promtail: A log collector that ships logs from containers to Loki

  • Grafana: A visualization platform that allows you to query and visualize logs

We'll also deploy a sample Node.js payment service and a load generator to create real-world log data for exploration.

Architectural Overview

Before diving into the setup, let's understand how these components work together:

  1. Log Generation: Our payment service generates structured JSON logs as it processes payment transactions

  2. Collection: Promtail discovers and collects logs from Docker containers

  3. Storage: Loki receives, indexes, and stores the logs efficiently

  4. Visualization: Grafana connects to Loki as a data source, allowing us to query logs with LogQL and create dashboards

Prerequisites

  • Docker and Docker Compose installed on your system

  • Basic understanding of containerization and logging concepts

  • Git to clone the repository

Project Structure

The complete project is available at github.com/rslim087a/loki-grafana-docker-compose-quickstart-tutorial. Here's what you'll find:

├── docker-compose.yml
├── loki-config.yml
├── promtail-config.yml
├── grafana/
│   ├── provisioning/
│   └── dashboards/
├── payment-service/
└── load-generator/
Step 1: Clone the Repository

Start by cloning the repository to your local machine:

git clone https://github.com/rslim087a/loki-grafana-docker-compose-quickstart-tutorial.git
cd loki-grafana-docker-compose-quickstart-tutorial

Step 2: Understanding the Components

Docker Compose Configuration

The docker-compose.yml file defines our entire stack. Let's examine each service:

services:
  payment-service:
    build:
      context: ./payment-service
      dockerfile: Dockerfile
    container_name: payment-service
    ports:
      - "3000:3000"
    environment:
      - PORT=3000
      - SERVICE_NAME=payment-processor
      - LOKI_URL=http://loki:3100/loki/api/v1/push
      - NODE_ENV=production
      - LOG_LEVEL=info
    networks:
      - loki-network
    depends_on:
      - loki
    restart: unless-stopped
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

The payment-service is our sample application that will generate logs.

  load-generator:
    build:
      context: ./load-generator
      dockerfile: Dockerfile
    container_name: load-generator
    environment:
      - PAYMENT_SERVICE_URL=http://payment-service:3000
      - REQUESTS_PER_MINUTE=30
      - RUN_FOREVER=true
      - DURATION_MINUTES=30
    networks:
      - loki-network
    depends_on:
      - payment-service
    restart: unless-stopped

The load-generator will create traffic to our payment service, generating logs for us to analyze.

  loki:
    image: grafana/loki:2.9.0
    container_name: loki
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/config.yml
    volumes:
      - ./loki-config.yml:/etc/loki/config.yml
      - loki-data:/loki
    networks:
      - loki-network
    restart: unless-stopped

The loki service is our log aggregation system.

  promtail:
    image: grafana/promtail:2.9.0
    container_name: promtail
    user: "0:0"  # Run as root to ensure access
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./promtail-config.yml:/etc/promtail/config.yml
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
    command: -config.file=/etc/promtail/config.yml
    networks:
      - loki-network
    depends_on:
      - loki
    restart: unless-stopped

The promtail service collects logs from containers and forwards them to Loki.

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3001:3000"
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=admin
      - GF_USERS_ALLOW_SIGN_UP=false
    volumes:
      - grafana-data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning
      - ./grafana/dashboards:/var/lib/grafana/dashboards
    networks:
      - loki-network
    depends_on:
      - loki
    restart: unless-stopped

The grafana service provides visualization for our logs.
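The service definitions above reference the loki-network network and the loki-data and grafana-data volumes, so the compose file also needs top-level stanzas declaring them. A minimal sketch (the names are taken directly from the services above; the bridge driver is Compose's default):

```yaml
# Top-level definitions referenced by the services
networks:
  loki-network:
    driver: bridge

volumes:
  loki-data:      # persists Loki's chunks and rules across restarts
  grafana-data:   # persists Grafana's dashboards, users, and settings
```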

Loki Configuration

The loki-config.yml file configures Loki's behavior:

auth_enabled: false
server:
  http_listen_port: 3100
common:
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

This configuration:

  • Disables authentication for simplicity

  • Sets up filesystem storage for logs

  • Runs a single replica with an in-memory ring, suitable for a local single-node setup

Promtail Configuration

The promtail-config.yml defines how logs are collected:

server:
  http_listen_port: 9080
  grpc_listen_port: 0
positions:
  filename: /tmp/positions.yaml
clients:
  - url: http://loki:3100/loki/api/v1/push
scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
    relabel_configs:
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container'

This configuration:

  • Sets up a connection to Loki

  • Configures Docker service discovery to automatically find containers

  • Extracts labels like container name to help organize logs
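The relabel rule's effect is easy to see in isolation: Docker service discovery reports container names with a leading slash, and the anchored regex /(.*)  captures everything after it, which becomes the container label in Loki. A quick Python sketch of the same match:

```python
import re

# Promtail applies the (fully anchored) relabel regex to the discovered
# container name, which Docker reports with a leading slash.
raw_name = "/payment-service"          # value of __meta_docker_container_name
match = re.fullmatch(r"/(.*)", raw_name)
container_label = match.group(1)       # becomes the 'container' label in Loki
print(container_label)                 # payment-service
```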

Grafana Provisioning

The ./grafana/provisioning directory contains configuration for automatically setting up Grafana:

  • datasources/loki.yml: Configures Loki as a data source

  • dashboards/dashboards.yml: Sets up dashboard provisioning

Step 3: Understanding the Payment Service

Our payment service is a simple Node.js application that simulates payment processing. It:

  1. Receives payment requests via API

  2. Processes the payments with simulated success/failure scenarios

  3. Logs detailed information about each transaction

  4. Provides endpoints for payment status and refunds

Key features for logging:

  • Structured JSON logging with request IDs for tracing

  • Detailed transaction metadata

  • Response time tracking

  • Error logging with context
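To make the later LogQL examples concrete, here is a sketch of what one structured log line from such a service might look like. The field names here are illustrative, not the payment service's exact schema:

```python
import json
import time
import uuid

# A hypothetical structured log entry in the style the payment service emits;
# the real service's field names may differ.
entry = {
    "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    "level": "info",
    "service": "payment-processor",
    "request_id": str(uuid.uuid4()),   # lets you trace one request across lines
    "event": "payment_processed",
    "method": "POST",
    "path": "/payments",
    "response_time": 42,               # milliseconds
}
line = json.dumps(entry)
print(line)
```

Because every field is a JSON key, Loki's | json pipeline stage can later turn each one into a queryable label.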

Step 4: Start the Stack

Let's bring up the entire stack with Docker Compose:

docker compose up -d

This command will:

  1. Build the payment service and load generator images

  2. Pull the Loki, Promtail, and Grafana images

  3. Create the necessary networks and volumes

  4. Start all services in detached mode

Step 5: Verify the Services

Let's make sure everything is running correctly:

docker compose ps

You should see all services running:

  • payment-service

  • load-generator

  • loki

  • promtail

  • grafana

Check the logs to make sure there are no startup issues:

docker compose logs
Step 6: Check if Logs are Flowing

Let's verify our payment service is generating logs correctly:

docker compose logs payment-service

You should see structured JSON logs from the payment service. Each entry carries a rich JSON structure containing:

  • Timestamps

  • Trace and transaction IDs for request tracking

  • HTTP method and path information

  • Event type categorization

  • Detailed contextual data

Step 7: Access Grafana

Now, open your browser and navigate to:

http://localhost:3001

Log in with the credentials:

  • Username: admin

  • Password: admin

Step 8: Verify Loki Datasource

In Grafana:

  1. Go to Configuration → Data Sources

  2. You should see Loki already configured (this was done via provisioning)

  3. Click on Loki and click "Test" to verify the connection

Step 9: Explore the Logs

Let's start exploring our logs:

  1. Click on "Explore" in the Grafana sidebar

  2. Make sure Loki is selected as the data source

  3. Enter the following LogQL query to see all logs from the payment service:

     {container="payment-service"}
  4. Click "Run Query"

You should see logs streaming in from the payment service.

Step 10: Understanding LogQL

LogQL is Loki's query language, similar to PromQL. Here are some useful queries:

View all error logs:

{container="payment-service"} |= "error"

Filter for payment processing events:

{container="payment-service"} |= "payment"

Extract fields from JSON logs:

{container="payment-service"} | json

Calculate request counts over time:

sum(count_over_time({container="payment-service"} |~ "Incoming request" [1m]))

Step 11: The Payment Service Dashboard

Grafana is pre-configured with a dashboard for our payment service:

  1. Go to Dashboards → Browse

  2. Look for "Payment Service Dashboard"

  3. Open the dashboard

The dashboard provides several visualizations:

  • Requests Per Minute: A time series showing the rate of incoming requests

  • Response Count Per Minute: A chart showing how many responses are being generated

  • Actual Response Times: A table showing detailed response times for individual requests

  • Total Payments: A count of all payment attempts

  • Successful Payments: A count of successful payment transactions

  • Failed Payments: A count of failed payment transactions

  • Payment Timeouts: A count of payment gateway timeouts

  • Payment Event Types: A bar chart showing the distribution of different event types

  • Request Methods: A pie chart showing the breakdown of HTTP methods used

Each panel uses LogQL queries to extract and visualize data from our logs.

Step 12: Understanding the Dashboard Queries

Let's examine a few of the queries used in the dashboard:

Requests Per Minute:

sum(count_over_time({container="payment-service"} |~ "Incoming request" [1m]))

This counts log lines containing "Incoming request" and aggregates them by minute.

Response Count Per Minute:

sum by (container) (count_over_time({container="payment-service"} | json | response_time > 0 [1m]))

This parses JSON logs, filters for entries with a response_time field, and counts them by minute.
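What the | json | response_time > 0 pipeline does can be sketched in plain Python: parse each log line as JSON and keep only the lines whose response_time field is positive. The sample lines below are illustrative, not real service output:

```python
import json

# Sketch of Loki's `| json | response_time > 0` pipeline: parse each line
# as JSON, keep lines with a positive response_time field.
lines = [
    '{"event": "request_received", "path": "/payments"}',
    '{"event": "response_sent", "path": "/payments", "response_time": 87}',
    '{"event": "response_sent", "path": "/refunds", "response_time": 12}',
]

def has_positive_response_time(line: str) -> bool:
    fields = json.loads(line)
    return fields.get("response_time", 0) > 0

matched = [l for l in lines if has_positive_response_time(l)]
print(len(matched))  # 2
```

count_over_time then counts the surviving lines per one-minute window, and sum by (container) aggregates those counts per container label.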

Actual Response Times:

This extracts specific fields from JSON logs and formats them for display in a table.

Step 13: Creating a Custom Query

Let's create our own custom visualization:

  1. Click "Add panel" on the dashboard

  2. In the query editor, enter:

    sum by (method, path) (count_over_time({container="payment-service"} | json | status_code >= 400 [10m]))
  3. This shows error counts by API endpoint over 10-minute windows

  4. Change the visualization type to "Bar chart"

  5. Set a title like "Errors by Endpoint"

  6. Save the panel

Step 14: Real-time Monitoring

One of the powerful features of this stack is real-time monitoring:

  1. In the dashboard, change the time range to "Last 5 minutes"

  2. Enable auto-refresh (5s)

You'll now see the dashboard updating in real-time as new logs are generated.

Step 15: Viewing Structured Data

Our payment service logs are structured in JSON format, which makes them powerful for querying:

  1. Go to Explore

  2. Enter the query:

     {container="payment-service"} | json
  3. Expand one of the log entries

You'll see that all the JSON fields are extracted and available for filtering and analysis.

Step 16: Shutting Down the Stack

When you're done exploring, you can shut down the stack:

docker compose down

To completely clean up, including volumes:

docker compose down -v

Understanding LogQL More Deeply

LogQL is a powerful query language that combines log filtering with metric extraction. It has two stages:

  1. Log Stream Selection: Filters the logs (e.g., {container="payment-service"})

  2. Log Pipeline: Processes the log content (e.g., | json | status_code >= 400)

Common pipeline operations:

  • | json: Parse JSON logs

  • | logfmt: Parse logfmt logs

  • | pattern: Match and extract using patterns

  • | line_format: Reformat log lines

  • | label_format: Manipulate labels

For metrics, you can use functions like:

  • rate(): Calculate per-second rate

  • count_over_time(): Count log lines over a time period

  • sum(): Sum values across series

  • avg(): Calculate averages

  • max(): Find maximum values

  • by (...): Group aggregation results by labels (used with sum, avg, and the like)
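The semantics of count_over_time can be sketched with plain Python: bucket timestamped log lines into fixed windows and count each bucket, which is what count_over_time({...}[1m]) computes per stream:

```python
from collections import Counter

# count_over_time({...}[1m]) sketched in Python: count log lines per
# one-minute window. Timestamps are seconds since epoch (illustrative data).
log_timestamps = [0, 10, 50, 65, 70, 130]

per_minute = Counter(ts // 60 for ts in log_timestamps)
print(sorted(per_minute.items()))  # [(0, 3), (1, 2), (2, 1)]
```

rate() is the same idea divided by the window length, giving a per-second rate instead of a raw count.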

Conclusion

In this tutorial, we've set up a complete logging stack with Loki, Promtail, and Grafana. We've deployed a sample payment service and load generator to create realistic logs, and explored how to:

  1. Collect container logs with Promtail

  2. Store and index them efficiently with Loki

  3. Query and visualize logs with Grafana and LogQL

  4. Build dashboards for monitoring our application

This setup provides a solid foundation for logging in containerized environments, with excellent performance and scalability. As your applications grow, you can extend this stack with additional features like alerting, log retention policies, and distributed tracing.

Next Steps

  • Add alerting based on log patterns

  • Integrate with Prometheus for metrics

  • Set up more detailed dashboards for different service components

  • Configure log rotation and retention policies

  • Explore multi-tenant setups for larger environments

Happy logging!
