Apr 4, 2025
Loki + Grafana + Promtail: Quickstart with Docker Compose
In this tutorial, we'll set up a complete logging system using Grafana Loki, a horizontally scalable, highly available log aggregation system. This stack consists of three main components:
Loki: A log aggregation system that efficiently stores and indexes log data
Promtail: A log collector that ships logs from containers to Loki
Grafana: A visualization platform that allows you to query and visualize logs
We'll also deploy a sample Node.js payment service and a load generator to create real-world log data for exploration.
Architectural Overview
Before diving into the setup, let's understand how these components work together:
Log Generation: Our payment service generates structured JSON logs as it processes payment transactions
Collection: Promtail discovers and collects logs from Docker containers
Storage: Loki receives, indexes, and stores the logs efficiently
Visualization: Grafana connects to Loki as a data source, allowing us to query logs with LogQL and create dashboards
Prerequisites
Docker and Docker Compose installed on your system
Basic understanding of containerization and logging concepts
Git to clone the repository
Project Structure
The complete project is available at github.com/rslim087a/loki-grafana-docker-compose-quickstart-tutorial. Here's what you'll find:
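The layout looks roughly like this (reconstructed from the files discussed below; the directory names for the two apps are assumptions):

```
.
├── docker-compose.yml
├── loki-config.yml
├── promtail-config.yml
├── grafana/
│   └── provisioning/
│       ├── datasources/loki.yml
│       └── dashboards/dashboards.yml
├── payment-service/     # Node.js sample app
└── load-generator/      # traffic generator
```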
Step 1: Clone the Repository
Start by cloning the repository to your local machine:
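```bash
git clone https://github.com/rslim087a/loki-grafana-docker-compose-quickstart-tutorial.git
cd loki-grafana-docker-compose-quickstart-tutorial
```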
Step 2: Understanding the Components
Docker Compose Configuration
The `docker-compose.yml` file defines our entire stack. Let's examine each service:
The `payment-service` is our sample application that will generate logs.
The `load-generator` will create traffic to our payment service, generating logs for us to analyze.
The `loki` service is our log aggregation system.
The `promtail` service collects logs from containers and forwards them to Loki.
The `grafana` service provides visualization for our logs.
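Putting those together, the compose file has roughly this shape (a sketch; the repository's actual file will differ in details such as build contexts, ports, and volumes):

```yaml
services:
  payment-service:
    build: ./payment-service        # assumed build context

  load-generator:
    build: ./load-generator         # assumed build context
    depends_on:
      - payment-service

  loki:
    image: grafana/loki:latest
    volumes:
      - ./loki-config.yml:/etc/loki/local-config.yaml
    command: -config.file=/etc/loki/local-config.yaml
    ports:
      - "3100:3100"

  promtail:
    image: grafana/promtail:latest
    volumes:
      - ./promtail-config.yml:/etc/promtail/config.yml
      - /var/run/docker.sock:/var/run/docker.sock   # required for Docker service discovery
    command: -config.file=/etc/promtail/config.yml

  grafana:
    image: grafana/grafana:latest
    volumes:
      - ./grafana/provisioning:/etc/grafana/provisioning
    ports:
      - "3000:3000"
```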
Loki Configuration
The `loki-config.yml` file configures Loki's behavior:
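```yaml
# Sketch of a minimal single-node Loki config; the repo's actual file may differ.
auth_enabled: false          # no authentication, for simplicity

server:
  http_listen_port: 3100

common:
  path_prefix: /loki
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory
  storage:
    filesystem:              # store chunks and rules on local disk
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules

schema_config:
  configs:
    - from: 2024-01-01
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

limits_config:
  retention_period: 168h     # keep logs for 7 days
```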
This configuration:
Disables authentication for simplicity
Sets up filesystem storage for logs
Configures basic retention and limits
Promtail Configuration
The `promtail-config.yml` file defines how logs are collected:
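```yaml
# Sketch of a Promtail config using Docker service discovery; the repo's file may differ.
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml   # where Promtail remembers how far it has read

clients:
  - url: http://loki:3100/loki/api/v1/push   # push logs to Loki

scrape_configs:
  - job_name: docker
    docker_sd_configs:                        # auto-discover running containers
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
    relabel_configs:
      - source_labels: ["__meta_docker_container_name"]
        regex: "/(.*)"                        # strip the leading slash
        target_label: container               # becomes the `container` label in Loki
```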
This configuration:
Sets up a connection to Loki
Configures Docker service discovery to automatically find containers
Extracts labels like container name to help organize logs
Grafana Provisioning
The `./grafana/provisioning` directory contains configuration for automatically setting up Grafana:
`datasources/loki.yml`: Configures Loki as a data source
`dashboards/dashboards.yml`: Sets up dashboard provisioning
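The data source file follows Grafana's standard provisioning schema, roughly like this (a sketch; the repo's file may differ):

```yaml
# datasources/loki.yml -- illustrative sketch
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100   # Loki's HTTP port on the compose network
    isDefault: true
```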
Step 3: Understanding the Payment Service
Our payment service is a simple Node.js application that simulates payment processing. It:
Receives payment requests via API
Processes the payments with simulated success/failure scenarios
Logs detailed information about each transaction
Provides endpoints for payment status and refunds
Key features for logging:
Structured JSON logging with request IDs for tracing
Detailed transaction metadata
Response time tracking
Error logging with context
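The service's exact code isn't reproduced in this post, but the logging pattern looks roughly like this (a hypothetical sketch; function and field names are illustrative):

```javascript
// Hypothetical sketch of the payment service's structured logging.
const { randomUUID } = require("crypto");

function logEvent(event, fields) {
  // One JSON object per line, so Promtail/Loki can parse it later with `| json`
  console.log(JSON.stringify({
    timestamp: new Date().toISOString(),
    event,
    ...fields,
  }));
}

// Example: an incoming payment request, tagged with a trace ID
logEvent("payment_initiated", {
  trace_id: randomUUID(),
  method: "POST",
  path: "/api/payments",
  amount: 49.99,
});
```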
Step 4: Start the Stack
Let's bring up the entire stack with Docker Compose:
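```bash
# use `docker-compose` instead if you have the legacy standalone binary
docker compose up -d --build
```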
This command will:
Build the payment service and load generator images
Pull the Loki, Promtail, and Grafana images
Create the necessary networks and volumes
Start all services in detached mode
Step 5: Verify the Services
Let's make sure everything is running correctly:
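```bash
docker compose ps
```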
You should see all services running:
payment-service
load-generator
loki
promtail
grafana
Check the logs to make sure there are no startup issues:
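```bash
docker compose logs   # add a service name to narrow the output
```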
Step 6: Check if Logs are Flowing
Let's verify our payment service is generating logs correctly:
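```bash
docker compose logs -f payment-service   # -f follows the stream; Ctrl+C to stop
```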
You should see structured JSON logs from the payment service. A representative entry looks like this (field names here illustrate the shape described below; the exact keys come from the service's logger):
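```json
{
  "timestamp": "2025-04-04T12:00:00.000Z",
  "level": "info",
  "event": "payment_processed",
  "trace_id": "9f3a4b2c-1d5e-4f6a-8b7c-0d1e2f3a4b5c",
  "transaction_id": "txn_10293",
  "method": "POST",
  "path": "/api/payments",
  "response_time": 142,
  "status_code": 200
}
```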
Notice the rich JSON structure containing:
Timestamps
Trace and transaction IDs for request tracking
HTTP method and path information
Event type categorization
Detailed contextual data
Step 7: Access Grafana
Now, open your browser and navigate to:
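```
http://localhost:3000
```

(3000 is Grafana's default port; adjust if the compose file maps it differently.)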
Login with the credentials:
Username: `admin`
Password: `admin`
Step 8: Verify Loki Datasource
In Grafana:
Go to Configuration → Data Sources
You should see Loki already configured (this was done via provisioning)
Click on Loki and click "Test" to verify the connection
Step 9: Explore the Logs
Let's start exploring our logs:
Click on "Explore" in the Grafana sidebar
Make sure Loki is selected as the data source
Enter the following LogQL query to see all logs from the payment service:
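```logql
{container="payment-service"}
```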
Click "Run Query"
You should see logs streaming in from the payment service.
Step 10: Understanding LogQL
LogQL is Loki's query language, similar to PromQL. Here are some useful queries (illustrative; adjust label and field names to match your logs):
View all error logs:
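```logql
{container="payment-service"} |= "error"
```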
Filter for payment processing events:
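```logql
{container="payment-service"} | json | event=~"payment.*"
```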
Extract fields from JSON logs:
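```logql
{container="payment-service"} | json
```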
Calculate request counts over time:
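```logql
count_over_time({container="payment-service"}[1m])
```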
Step 11: The Payment Service Dashboard
Grafana is pre-configured with a dashboard for our payment service:
Go to Dashboards → Browse
Look for "Payment Service Dashboard"
Open the dashboard
The dashboard provides several visualizations:
Requests Per Minute: A time series showing the rate of incoming requests
Response Count Per Minute: A chart showing how many responses are being generated
Actual Response Times: A table showing detailed response times for individual requests
Total Payments: A count of all payment attempts
Successful Payments: A count of successful payment transactions
Failed Payments: A count of failed payment transactions
Payment Timeouts: A count of payment gateway timeouts
Payment Event Types: A bar chart showing the distribution of different event types
Request Methods: A pie chart showing the breakdown of HTTP methods used
Each panel uses LogQL queries to extract and visualize data from our logs.
Step 12: Understanding the Dashboard Queries
Let's examine a few of the queries used in the dashboard. The expressions below are reconstructed from the descriptions that follow, so the dashboard's exact queries may differ slightly:
Requests Per Minute:
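```logql
sum(count_over_time({container="payment-service"} |= "Incoming request" [1m]))
```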
This counts log lines containing "Incoming request" and aggregates them by minute.
Response Count Per Minute:
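```logql
sum(count_over_time({container="payment-service"} | json | response_time != "" [1m]))
```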
This parses JSON logs, filters for entries with a response_time field, and counts them by minute.
Actual Response Times:
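```logql
{container="payment-service"} | json | response_time != "" | line_format "{{.method}} {{.path}} -> {{.response_time}}ms"
```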
This extracts specific fields from JSON logs and formats them for display in a table.
Step 13: Creating a Custom Query
Let's create our own custom visualization:
Click "Add panel" on the dashboard
In the query editor, enter:
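```logql
sum by (path) (count_over_time({container="payment-service"} | json | level="error" [10m]))
```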
This shows error counts by API endpoint over 10-minute windows (the `level` and `path` fields assume the service's JSON log structure)
Change the visualization type to "Bar chart"
Set a title like "Errors by Endpoint"
Save the panel
Step 14: Real-time Monitoring
One of the powerful features of this stack is real-time monitoring:
In the dashboard, change the time range to "Last 5 minutes"
Enable auto-refresh (5s)
You'll now see the dashboard updating in real-time as new logs are generated.
Step 15: Viewing Structured Data
Our payment service logs are structured in JSON format, which makes them powerful for querying:
Go to Explore
Enter the query:
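```logql
{container="payment-service"} | json
```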
Expand one of the log entries
You'll see that all the JSON fields are extracted and available for filtering and analysis.
Step 16: Shutting Down the Stack
When you're done exploring, you can shut down the stack:
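```bash
docker compose down
```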
To completely clean up, including volumes:
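```bash
docker compose down -v
```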
Understanding LogQL More Deeply
LogQL is a powerful query language that combines log filtering with metric extraction. It has two stages:
Log Stream Selection: Filters the logs (e.g., `{container="payment-service"}`)
Log Pipeline: Processes the log content (e.g., `| json | status_code >= 400`)
Common pipeline operations:
`| json`: Parse JSON logs
`| logfmt`: Parse logfmt logs
`| pattern`: Match and extract using patterns
`| line_format`: Reformat log lines
`| label_format`: Manipulate labels
For metrics, you can use functions like:
`rate()`: Calculate per-second rate
`count_over_time()`: Count log lines over a time period
`sum()`: Sum values across series
`avg()`: Calculate averages
`max()`: Find maximum values
`by()`: Group results by labels
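Putting the two stages together, a typical metric query combines a stream selector, a pipeline, and an aggregation. For example, failed-request rate by endpoint (reusing the `status_code` filter from above; `path` is an assumed JSON field):

```logql
sum by (path) (rate({container="payment-service"} | json | status_code >= 400 [5m]))
```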
Conclusion
In this tutorial, we've set up a complete logging stack with Loki, Promtail, and Grafana. We've deployed a sample payment service and load generator to create realistic logs, and explored how to:
Collect container logs with Promtail
Store and index them efficiently with Loki
Query and visualize logs with Grafana and LogQL
Build dashboards for monitoring our application
This setup provides a solid foundation for logging in containerized environments, with excellent performance and scalability. As your applications grow, you can extend this stack with additional features like alerting, log retention policies, and distributed tracing.
Next Steps
Add alerting based on log patterns
Integrate with Prometheus for metrics
Set up more detailed dashboards for different service components
Configure log rotation and retention policies
Explore multi-tenant setups for larger environments
Happy logging!