Apr 25, 2025

Kubernetes, Loki, Prometheus & Grafana: Logging & Monitoring

In this article, you'll learn how to set up a complete monitoring solution in Kubernetes using Loki for log storage, Prometheus for metrics collection, and Grafana for visualization. This unified approach allows you to quickly identify issues using metrics and diagnose them using the corresponding logs - all within a single dashboard.

Why Loki Over Elasticsearch?

Before we dive in, let's address why enterprises are starting to abandon Elasticsearch for Loki. The key difference lies in how these systems handle log storage:

  • Loki only indexes the metadata while compressing the actual log content

  • Elasticsearch indexes everything, consuming significantly more resources

For enterprises storing millions of logs, Loki provides a more efficient and scalable solution.

The Power of Combined Monitoring

The real power comes from combining Prometheus metrics with Loki logs in a Grafana dashboard:

  1. Metrics show you when something is wrong (like error spikes or unusual patterns)

  2. Logs reveal specific events that happened at that moment in time

  3. Grafana unifies both in a single dashboard

Prerequisites

  • A Kubernetes cluster (minikube, kind, or a cloud provider)

  • Helm installed

  • kubectl configured to access your cluster

All project files are available in the GitHub repository.

Setup Steps

1. Create the Monitoring Namespace
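
Everything in this guide is installed into a dedicated monitoring namespace:

kubectl create namespace monitoring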

2. Install Loki

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

helm install loki grafana/loki \
  --namespace monitoring \
  --version 6.29.0 \
  --values loki-values.yaml

The custom values file (referred to here as loki-values.yaml; the full file is in the repository) configures Loki for single-binary deployment mode, which is perfect for development:

deploymentMode: SingleBinary
loki:
  auth_enabled: false
  storage:
    type: filesystem   # local filesystem storage, the usual choice for a single-binary dev setup
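
Once the release is installed, you can check that the Loki pods are running before moving on (the label selector assumes the chart's standard Kubernetes labels):

kubectl get pods --namespace monitoring -l app.kubernetes.io/instance=loki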

3. Install Kube-Prometheus-Stack

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --version 45.7.1 \
  --values prometheus-values.yaml

The values file (referred to here as prometheus-values.yaml) automatically configures Grafana to use Loki as a data source:

grafana:
  additionalDataSources:
    - name: Loki
      type: loki
      url: http://loki-gateway.monitoring.svc.cluster.local
      access: proxy

4. Deploy the Demo Environment

Apply the demo resources to set up log generation, metrics export, and log shipping (the demo/ path below is a placeholder; use the manifest files from the repository):

kubectl apply -f demo/

This creates:

  • A log generator that produces JSON-formatted logs

  • A metrics exporter that generates metrics based on those logs

  • Promtail to ship logs to Loki

  • ServiceMonitor for Prometheus to scrape metrics
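
For reference, the ServiceMonitor part of that setup looks roughly like the sketch below (names, labels, and the port are illustrative; the actual manifests live in the repository). The release: prometheus label matters because kube-prometheus-stack, by default, only discovers ServiceMonitors carrying the Helm release label:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: metrics-exporter            # illustrative name
  namespace: monitoring
  labels:
    release: prometheus             # lets the kube-prometheus-stack Prometheus discover it
spec:
  selector:
    matchLabels:
      app: metrics-exporter         # must match the labels on the exporter's Service
  endpoints:
    - port: metrics                 # named Service port that serves /metrics
      interval: 15s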

5. Access Grafana

Get the Grafana password:

kubectl get secret --namespace monitoring prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Port-forward to access Grafana:

kubectl port-forward --namespace monitoring svc/prometheus-grafana 3000:80

Access Grafana at http://localhost:3000 with:

  • Username: admin

  • Password: (from the command above)

Creating the Unified Dashboard

Step 1: Add Metrics Visualization

  1. Create a new dashboard

  2. Add a visualization with Prometheus as data source

  3. Use PromQL to query metrics:

sum by(method) (rate(http_requests_total{method=~"$method_filter"}[5m]))

This shows request rates by HTTP method.

Step 2: Add Logs Panel

  1. Add another visualization with Loki as data source

  2. Use LogQL to query logs:
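
The stream selector below is illustrative (the app label name depends on the demo manifests); the two variables are the same ones the metrics panel uses:

{app="log-generator", http_method=~"$method_filter"} |= "$filter"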

Step 3: Add Variables for Synchronization

Create variables to synchronize both panels:

  1. method_filter: Query Loki for available HTTP methods

  2. filter: Text box for searching through logs

Both panels now update together when you change filters.
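
As an example, method_filter can be a query-type variable against the Loki data source that lists the values of the http_method label (Grafana exposes this as a label-values query), while filter is a plain text box variable with an empty default so the |= "$filter" expression matches everything until you type a search term:

label_values(http_method)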

Understanding the Architecture

How Loki Indexes Logs

Loki only indexes labels (metadata), not the entire log content. In our setup:

  1. Promtail reads logs and extracts labels like http_method and http_status

  2. These labels become indexed for fast querying

  3. The remaining log content is compressed

This is configured in the Promtail ConfigMap:

pipeline_stages:
  - json:
      expressions:
        http_method: method
        http_status: status
  - labels:
      http_method:
      http_status:
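
With these labels in place, a selector such as {http_status="500"} is answered from the index, and only the chunks for the matching streams are scanned when you add a text filter (the "timeout" string below is just an example):

{http_method="POST", http_status="500"} |= "timeout"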

Using the Dashboard

The final dashboard shows:

  • Request rates by method (metrics)

  • Status code distribution (metrics)

  • Response time percentiles (metrics)

  • Raw logs filtered by method and status (logs)
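
If you want to recreate the other metric panels, queries along these lines work. The http_requests_total counter is the one used earlier; the duration histogram name is an assumption about what the demo exporter exposes:

sum by(status) (rate(http_requests_total[5m]))

histogram_quantile(0.95, sum by(le) (rate(http_request_duration_seconds_bucket[5m])))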

When you see anomalies in metrics (like a sudden drop in requests), you can:

  1. Note the exact time of the anomaly

  2. Look at the corresponding logs

  3. Filter by error codes or specific paths

  4. Diagnose the root cause
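
For example, to zero in on server errors during the anomaly window, you can narrow the logs panel (or an Explore query) to 5xx responses; the app label is again an assumption from the demo setup:

{app="log-generator", http_status=~"5.."}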

Clean Up

To remove all resources (the demo/ path is the same placeholder used in step 4):

helm uninstall loki --namespace monitoring
helm uninstall prometheus --namespace monitoring
kubectl delete -f demo/

Conclusion

This setup demonstrates why combining Prometheus metrics with Loki logs in a unified Grafana dashboard is becoming the industry standard. You get:

  • Efficient log storage with Loki

  • Powerful metrics tracking with Prometheus

  • Unified visualization in Grafana

  • Quick issue identification and diagnosis

The complete project files, including the dashboard JSON, are available in our GitHub repository.

Let’s keep in touch

Subscribe to the mailing list and receive the latest updates