
Logs Without Correlation Are Noise
When a service fails at 2 AM, you want to open one interface, select a time range, and see log lines from every container that was active — Caddy, Nextcloud, the database, the auth service — all aligned on the same timeline as your Prometheus metrics. Loki makes this possible without the operational overhead of Elasticsearch. It does not index log content, only labels, which keeps ingestion fast and storage small. The trade-off is that full-text queries require scanning chunks rather than looking up an index — acceptable for a home lab with modest log volumes.
Understand Loki's Architecture and Label Model
Loki stores logs in compressed chunks. Each chunk is associated with a set of labels — key-value pairs like container="nextcloud" or host="pi" — and a time range. Queries select chunks by label and then filter within them by content. Because Loki indexes only these labels, keeping label cardinality low matters: every distinct combination of label values creates a separate stream, so high-cardinality labels such as user IDs or request paths multiply the number of streams and chunks and slow everything down. Labels should identify where a log came from, not what it says.
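The label model is easiest to see in the payload shape of Loki's push API: a stream is just a label set plus timestamped lines (timestamps are nanosecond strings). The container and host values here are illustrative:

```json
{
  "streams": [
    {
      "stream": { "container": "nextcloud", "host": "pi" },
      "values": [
        ["1700000000000000000", "GET /status.php 200"],
        ["1700000001000000000", "GET /remote.php/dav 207"]
      ]
    }
  ]
}
```

Everything inside "stream" is indexed; everything inside "values" is only scanned at query time.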
Install Promtail or Grafana Alloy as the Log Shipper
Promtail is Loki's traditional log shipping agent. Grafana Alloy is its more capable successor, also able to ship metrics and traces. Add either as a service in your docker-compose.yml. Promtail needs two mounts: the Docker socket (to discover running containers and their labels) and /var/lib/docker/containers (read-only, so it can tail the JSON log files Docker writes for each container).
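A minimal Promtail service sketch for docker-compose.yml, assuming your stack already defines a loki service and that the Promtail config lives at ./promtail-config.yml (both names are illustrative):

```yaml
services:
  promtail:
    image: grafana/promtail:2.9.0
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro            # container discovery
      - /var/lib/docker/containers:/var/lib/docker/containers:ro # Docker's JSON log files
      - ./promtail-config.yml:/etc/promtail/config.yml:ro
    command: -config.file=/etc/promtail/config.yml
    restart: unless-stopped
```

The config file itself defines the Loki push URL and the scrape/relabel rules that turn container metadata into labels.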
Configure the Docker Logging Driver
An alternative to Promtail scraping Docker's log files is configuring Docker to push logs directly to Loki via the Loki Docker logging driver plugin. Install the plugin with docker plugin install grafana/loki-docker-driver. In /etc/docker/daemon.json, set "log-driver" to "loki" and point "loki-url" at your Loki push endpoint, then restart the Docker daemon. The driver applies only to containers created after the restart, so existing containers must be recreated to pick it up.
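A daemon.json sketch, assuming Loki listens on localhost:3100 (adjust the URL for your setup); max-size and max-file keep Docker's local log copies bounded:

```json
{
  "log-driver": "loki",
  "log-opts": {
    "loki-url": "http://localhost:3100/loki/api/v1/push",
    "loki-retries": "2",
    "max-size": "10m",
    "max-file": "3"
  }
}
```

One caution: if Loki is itself a container in the same stack, the driver can block container output when Loki is down, which is why many home labs prefer the Promtail approach.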
Query Logs with LogQL
LogQL is Loki's query language. A query starts with a log stream selector in curly braces: {container="nextcloud"} returns all logs from the Nextcloud container. Add a filter with |=: {container="nextcloud"} |= "error" returns only lines containing the word "error". Filters chain left to right; != excludes lines, and |~ matches a regular expression instead of a literal substring.
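A few query sketches building on that pattern (container names are illustrative):

```
{container="nextcloud"} |= "error" != "healthcheck"   # errors, minus healthcheck noise
{container="nextcloud"} |~ "timeout|refused"          # regex filter
sum(rate({container="nextcloud"} |= "error" [5m]))    # error lines per second, graphable
```

The last form is a metric query: it turns a log stream into a time series you can graph and alert on alongside Prometheus metrics.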
Parse Structured Logs with the JSON Pipeline
Many modern applications emit logs as JSON objects rather than plain text lines. LogQL can parse these inline. Add | json to a query to automatically extract JSON fields from each log line and make them available as labels for further filtering. For example, appending | json | level="error" keeps only lines whose parsed level field equals "error", wherever that field sits in the message.
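Assuming log lines shaped like {"level":"error","msg":"upload failed","user":"anna"}, a sketch that filters on one extracted field and reformats the output to another:

```
{container="nextcloud"} | json | level="error" | line_format "{{.msg}}"
```

line_format rewrites each displayed line from the extracted fields, which is useful when the raw JSON is too noisy to read in a dashboard panel.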
Correlate Logs and Metrics in Grafana
In Grafana, you can link a log panel directly to a metrics panel. When you click a data point on a Prometheus graph — say, the spike in error rate at 3:17 AM — Grafana can automatically open a Loki log panel filtered to the same time range and the same container, so the log lines surrounding the spike are one click away instead of a separate search.
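The prerequisite for this workflow is that Grafana knows about the Loki datasource. A minimal provisioning sketch, assuming Loki runs as a compose service named loki (file path follows Grafana's standard provisioning layout):

```yaml
# provisioning/datasources/loki.yml
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
```

With both Prometheus and Loki provisioned, Grafana's Explore view can split the screen and keep the two queries locked to the same time range.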
What Comes Next
Logs show what your applications said — error messages, request paths, stack traces. But sometimes the failure is not in the application layer at all. It is the application making a system call that returns an unexpected error, or blocking on a file descriptor, or talking to a network socket that never responds. For that level of visibility, you need to intercept system calls directly using strace and eBPF.


