Grafana Loki is a log aggregation system developed by the team at Grafana Labs. It is used in conjunction with Grafana to provide convenient log visualization.
Unlike the ELK stack, Loki does not index the content of log lines; it indexes only a small set of metadata labels (e.g., pod name or namespace) and stores the raw log lines alongside them. This makes it faster and less resource-intensive.
Loki is most commonly used together with Promtail — an agent that collects logs from nodes and sends them to Loki. You can then view those logs directly in Grafana using LogQL queries.
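A LogQL query combines a stream selector built from labels with optional text filters. For example (illustrative; the available labels depend on how your log agent labels the streams):
{namespace="kube-system"} |= "error"
This selects all log streams labeled with the kube-system namespace and keeps only the lines containing "error".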
To install Loki, add the add-on to your cluster and review its configuration values. Here's a brief explanation of each configuration block:
test_pod:
  enabled: true
  image: bats/bats:1.8.2
Launches a test pod with the bats utility to verify that Loki is available and receiving logs. Used only for automatic validation after installation.
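If the add-on is installed as a Helm release, you can also trigger this check manually. The release name and namespace below are assumptions based on the rest of this guide:
helm test loki-stack -n loki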
loki:
  enabled: true
  isDefault: true
The main component, Loki itself, which is responsible for receiving and storing logs.
Configurable options include:
url: the internal Loki service address (used by other components).
readinessProbe and livenessProbe: health checks for the Loki pod.
datasource: settings for connecting Loki as a data source in Grafana. You can specify datasource.uid if you want to link it explicitly.
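Put together, the loki block might look like the sketch below. The url, probe paths, and uid values are illustrative assumptions; check your chart version for the exact keys and defaults.
loki:
  enabled: true
  isDefault: true
  url: http://loki-stack:3100   # assumed in-cluster service address
  readinessProbe:
    httpGet:
      path: /ready              # Loki's readiness endpoint
      port: http-metrics
  livenessProbe:
    httpGet:
      path: /ready
      port: http-metrics
  datasource:
    uid: loki                   # hypothetical uid for explicit linking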
promtail:
  enabled: true
  config:
    clients:
      - url: http://<release-name>:3100/loki/api/v1/push
Promtail is the default agent that collects logs from cluster nodes (from /var/log/containers/*.log) and sends them to Loki.
If you're using alternative log shippers like Filebeat or Fluent Bit, you can disable Promtail:
fluent-bit:
  enabled: false
filebeat:
  enabled: false
  filebeatConfig: ...
logstash:
  enabled: false
  filters: ...
  outputs: ...
These are optional alternatives in case Promtail isn’t suitable—for example, if you're using Fluent Operator. They're disabled by default.
grafana:
  enabled: false
Enables installation of Grafana. Set to true if you want to view logs through a web UI. If Grafana is already deployed in your cluster, you can leave this disabled and manually configure Loki as a data source.
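If you keep Grafana disabled here and use an existing Grafana instead, you can register Loki through a standard data source provisioning file. A minimal sketch, assuming the release is named loki-stack and runs in the loki namespace:
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki-stack.loki.svc.cluster.local:3100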
prometheus:
  enabled: false
  isDefault: false
  url: ...
Lets you add Prometheus as a data source in Grafana. This does not install Prometheus itself—it only registers the connection. Enable if Prometheus is already running in your cluster and you want to use it alongside Loki.
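For example, assuming Prometheus already runs in the monitoring namespace behind a service named prometheus-server (adjust to your setup), the block could look like this:
prometheus:
  enabled: true
  isDefault: false
  url: http://prometheus-server.monitoring.svc.cluster.local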
proxy:
  http_proxy: ""
  https_proxy: ""
  no_proxy: ""
Proxy settings in case you need to connect to external services.
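For example, to route outbound requests through a corporate proxy while keeping in-cluster traffic direct (the proxy address below is hypothetical):
proxy:
  http_proxy: "http://proxy.corp.local:3128"
  https_proxy: "http://proxy.corp.local:3128"
  no_proxy: "localhost,127.0.0.1,.svc,.cluster.local"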
Let’s walk through an example setup using Loki and Promtail.
We'll launch a pod that writes logs to standard output. Kubelet will save those logs to /var/log/containers/. Promtail (running as a DaemonSet) reads those logs and pushes them to Loki. Then, Grafana can be used to explore them via the web interface.
Step 1. Enable Grafana
Install the Loki addon from the cluster management panel. In the configuration window, leave all parameters as-is, but change:
grafana:
  enabled: false
to:
grafana:
  enabled: true
This will deploy Grafana along with Loki.
Step 2. Install NGINX Ingress
Install the NGINX Ingress add-on, which is required to access the Grafana interface via domain name.
Step 3. Verify pod status
Run:
kubectl get pods -n loki
Expected output:
NAME READY STATUS RESTARTS AGE
loki-stack-0 1/1 Running 0 3h5m
loki-stack-grafana-878d56dc6-s28sq 2/2 Running 0 3h5m
loki-stack-promtail-5wtlr 1/1 Running 0 3h5m
loki-stack-promtail-hl6pl 1/1 Running 0 3h5m
Also verify the Ingress controller is running:
kubectl get pods -n ingress-nginx
And check Grafana Loki services:
kubectl get svc -n loki
Example output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
loki-stack ClusterIP 10.111.63.222 <none> 3100/TCP 3h7m
loki-stack-grafana ClusterIP 10.104.154.158 <none> 80/TCP 3h7m
loki-stack-headless ClusterIP None <none> 3100/TCP 3h7m
loki-stack-memberlist ClusterIP None <none> 7946/TCP 3h7m
Step 4. Configure Ingress
Now let's configure NGINX Ingress to expose Grafana by domain. To do this, we'll create a LoadBalancer service described in the loadbalancer.yaml manifest:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
  type: LoadBalancer
Apply it:
kubectl apply -f loadbalancer.yaml
Create an Ingress rule in the grafana-ingress.yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: loki
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: grafana.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: loki-stack-grafana
                port:
                  number: 80
Replace grafana.example.com with your actual domain name. Make sure the domain's A record points to the LoadBalancer's external IP.
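You can find that external IP with:
kubectl get svc -n ingress-nginx ingress-nginx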
Apply the manifest:
kubectl apply -f grafana-ingress.yaml
Step 5. Generate logs
Create a pod that continuously logs messages:
kubectl run logger --image=busybox --restart=Never -- sh -c 'while true; do echo "hello from logger"; sleep 5; done'
Verify it's running:
kubectl get pods
The logger pod should be in the Running state.
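You can also confirm the messages are being written by reading the pod's output directly:
kubectl logs logger --tail=5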
Step 6. Open Grafana
Open the domain configured in your Ingress (e.g., https://grafana.example.com). You'll be greeted with the Grafana login screen.
Use the username admin. Retrieve the password with:
kubectl get secret -n loki loki-stack-grafana -o jsonpath="{.data.admin-password}" | base64 --decode
Step 7. View logs
In the Grafana UI, go to the Explore section:
Select Loki as the data source. In Label filters, enter pod=logger, and click Run query.
Scroll down—you should see log entries like "hello from logger".
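Equivalently, you can type the LogQL query directly in the query field:
{pod="logger"} |= "hello from logger"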