PostgreSQL is a popular relational database management system (RDBMS) that provides high-availability features like streaming replication, logical replication, and failover solutions. Deploying PostgreSQL on Kubernetes allows organizations to build resilient systems that ensure minimal downtime and data availability.
With Kubernetes StatefulSets, you can scale a PostgreSQL deployment in response to demand.
To get started, make sure you have the following:
Kubernetes Cluster (Cloud or Local): You can set up a Kubernetes cluster on Hostman in no time. To follow this tutorial with a local Kubernetes cluster, you can use one of these tools: k3s, minikube, microk8s, or kind.
Kubectl: kubectl allows users to interact with a Kubernetes cluster. It needs a configuration YAML file, which contains the cluster details and is usually provided by your cloud provider.
From the Hostman control panel, you can simply download this configuration file with a click of a button, as shown in the screenshot below.
To connect, you need to set the KUBECONFIG environment variable accordingly.
export KUBECONFIG=/absolute/path/to/file/k8s-cluster-config.yaml
Helm: You need the Helm CLI to install Helm charts. Helm version 3 is required.
Helm is a package manager for Kubernetes, much like apt for Ubuntu and Debian. Instead of manually creating multiple YAML files for Pods, Services, Persistent Volumes, Secrets, and so on, a Helm chart reduces deployment to a single command (e.g., helm install), streamlining the process.
To add the Bitnami PostgreSQL Helm repo, run this command:
helm repo add bitnami https://charts.bitnami.com/bitnami
To sync your local Helm repository with the remote one:
helm repo update
PostgreSQL requires persistent storage to ensure that data is preserved even if a pod crashes or is rescheduled.
A Persistent Volume Claim (PVC) requests storage space from the Kubernetes cluster; Kubernetes then looks at the available Persistent Volumes (PVs) and binds a suitable one to the claim, whether it is backed by local disk or cloud storage.
Create a file named postgres-local-pv.yaml with the following YAML manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgresql-local-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  hostPath:
    path: /mnt/data/postgresql
This manifest creates a PersistentVolume backed by a local directory (/mnt/data/postgresql) on a specific node. If that node goes down or becomes unavailable, the data stored in the PV becomes inaccessible, which is a critical risk in production. Therefore, it’s highly recommended to use cloud-native storage solutions instead of hostPath to ensure reliability, scalability, and data protection.
This PV has a reclaim policy of Retain, ensuring that it is not deleted when it is no longer in use by a PVC.
You can set storageClassName to ceph-storage, glusterfs, portworx-sc, or openebs-standard based on your needs.
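In a cloud environment you would typically define such a class with a CSI provisioner instead of the manual class used here. The sketch below assumes the AWS EBS CSI driver (provisioner ebs.csi.aws.com); the class name fast-ssd is a hypothetical example, and you would substitute your own provider's driver and parameters:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                  # hypothetical name for this example
provisioner: ebs.csi.aws.com      # AWS EBS CSI driver; swap in your provider's CSI driver
parameters:
  type: gp3                       # EBS volume type (provider-specific parameter)
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
```

With a provisioner-backed class like this, you no longer need to create PVs by hand: the PVC alone triggers dynamic provisioning.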
Create a file named postgres-local-pvc.yaml with this text:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql-local-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: manual
The ReadWriteOnce access mode means the volume can be mounted read-write by a single node at a time. You might think that replacing it with ReadWriteMany would make your application highly available; this isn’t the case. ReadWriteMany (RWX) allows multiple pods to access the same PersistentVolume simultaneously, which for a database like PostgreSQL can lead to race conditions, data corruption, or an inconsistent state.
Apply these manifests with kubectl to create the new resources:
kubectl apply -f postgres-local-pv.yaml
kubectl apply -f postgres-local-pvc.yaml
Run the following command to install the Helm chart:
helm install tutorial-db bitnami/postgresql --set auth.username=bhuwan \
  --set auth.password="AeSeigh2gieshe" \
  --set auth.database=k8s-tutorial \
  --set auth.postgresPassword="Ze4hahshez6dop9vaing" \
  --set primary.persistence.existingClaim=postgresql-local-pvc \
  --set volumePermissions.enabled=true
After a couple of minutes, verify that everything worked with this command:
kubectl get all
The following command runs a temporary PostgreSQL client pod. The pod connects to the database named k8s-tutorial, using the username bhuwan and the password from the environment variable $POSTGRES_PASSWORD.
export POSTGRES_PASSWORD=$(kubectl get secret --namespace default tutorial-db-postgresql -o jsonpath="{.data.password}" | base64 -d)
kubectl run tutorial-db-postgresql-client --rm --tty -i --restart='Never' \
--image docker.io/bitnami/postgresql:17.2.0-debian-12-r6 \
--env="PGPASSWORD=$POSTGRES_PASSWORD" \
--command -- psql --host tutorial-db-postgresql \
-U bhuwan -d k8s-tutorial -p 5432
After the session ends, the pod is deleted automatically thanks to the --rm flag.
A quick reminder: if you changed the Helm release name, username, or database name, adjust the above commands accordingly.
A StatefulSet is the best Kubernetes resource for deploying stateful applications like PostgreSQL. With it, every PostgreSQL pod gets its own stable network identity and persistent volume.
Note: you’ll be reusing the previously created Persistent Volume Claim (PVC) and Persistent Volume (PV), so clean up and recreate those resources first.
helm delete tutorial-db
kubectl delete pvc postgresql-local-pvc
kubectl delete pv postgresql-local-pv
kubectl apply -f postgres-local-pv.yaml -f postgres-local-pvc.yaml
Create a file named postgres-statefulset.yaml with the following text:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-statefulset
  labels:
    app: postgres
spec:
  serviceName: "postgresql-headless-svc"
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:17.2
          envFrom:
            - secretRef:
                name: postgresql-secret
          ports:
            - containerPort: 5432
              name: postgresdb
          volumeMounts:
            - name: pv-data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: pv-data
          persistentVolumeClaim:
            claimName: postgresql-local-pvc
Before applying these changes, create a new Secret for handling sensitive details like passwords with kubectl.
kubectl create secret generic postgresql-secret --from-literal=POSTGRES_PASSWORD=Ze4hahshez6dop9vaing
kubectl apply -f postgres-statefulset.yaml
If the pod gets stuck in the Pending state, try creating a StorageClass with the following manifest:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
To investigate any further issues with the pod, you can use the command:
kubectl describe pod postgres-statefulset-0
This command will report any issues related to scheduling the pod to a node, mounting volumes, or resource constraints.
Databases like PostgreSQL are typically accessed internally by other services or applications within the cluster, so it's better to create a headless Service for it.
Create a file called postgres-service.yaml and include the following YAML manifest:
apiVersion: v1
kind: Service
metadata:
  name: postgresql-headless-svc
spec:
  type: ClusterIP
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
  clusterIP: None
Finally, you can test the connection with kubectl run.
kubectl run tutorial-db-postgresql-client --rm --tty -i --restart='Never' \
--image docker.io/bitnami/postgresql:17.2.0-debian-12-r6 \
--env="PGPASSWORD=Ze4hahshez6dop9vaing" \
--command -- psql --host postgres-statefulset-0.postgresql-headless-svc \
-U postgres -p 5432
To scale up a StatefulSet, simply pass the desired number of replicas with the --replicas flag.
kubectl scale statefulset postgres-statefulset --replicas=3
To reach individual replicas, use the headless service: for instance, the hostname postgres-statefulset-1.postgresql-headless-svc addresses pod 1. Note that the manifest above shares a single ReadWriteOnce PVC across all replicas, which only works while every pod schedules to the same node.
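To give each replica its own storage, a StatefulSet can declare volumeClaimTemplates instead of a shared volumes entry. The sketch below reuses the volume name, storage class, and size from the earlier manifests; it replaces the volumes: block at the bottom of the StatefulSet spec and would still require PostgreSQL-level replication between pods to keep the copies in sync:

```yaml
# Inside the StatefulSet spec, in place of the shared `volumes:` entry:
  volumeClaimTemplates:
    - metadata:
        name: pv-data                # matches the volumeMounts name in the container
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: manual     # or a dynamically provisioned class
        resources:
          requests:
            storage: 5Gi
```

With this template, Kubernetes creates one PVC per pod (pv-data-postgres-statefulset-0, pv-data-postgres-statefulset-1, and so on), so each replica mounts its own volume.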
For handling backups, you can use a Kubernetes CronJob with the pg_dump utility provided by PostgreSQL.
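A minimal sketch of such a CronJob, reusing the postgresql-secret and headless-service hostname from this tutorial; the backup PVC name (postgres-backup-pvc) and the daily schedule are assumptions you would adapt:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
spec:
  schedule: "0 2 * * *"              # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: pg-dump
              image: postgres:17.2
              env:
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: postgresql-secret
                      key: POSTGRES_PASSWORD
              command:
                - /bin/sh
                - -c
                - pg_dump -h postgres-statefulset-0.postgresql-headless-svc -U postgres -d postgres -f /backup/db-$(date +%Y%m%d).sql
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: postgres-backup-pvc   # hypothetical PVC dedicated to backups
```

Writing the dump to a separate PVC keeps backups alive even if the database volume is lost.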
Throughout the tutorial, handling passwords via a Kubernetes Secret and using a StatefulSet instead of a Deployment were good moves. To make this deployment even more secure, reliable, and highly available, here are some ideas:
Set Resource Requests and Limits: Set appropriate CPU and memory requests and limits to avoid over-provisioning and under-provisioning.
Backups: Use Kubernetes CronJobs to regularly back up your PostgreSQL data. Consider implementing Volume Snapshots as well.
Monitoring and Logging: You can use tools like Prometheus and Grafana to collect and visualize PostgreSQL metrics, such as query performance, disk usage, and replication status.
Use Pod Disruption Budgets (PDBs): If too many PostgreSQL pods are disrupted at once (e.g., during a rolling update), it can lead to database unavailability or replication issues.
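As a sketch of the last point, a PodDisruptionBudget matching the app: postgres label used in this tutorial could look like this (minAvailable: 1 is an example value, meaningful once you run more than one replica):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: postgres-pdb
spec:
  minAvailable: 1        # keep at least one PostgreSQL pod running during voluntary disruptions
  selector:
    matchLabels:
      app: postgres
```

During node drains or rolling maintenance, Kubernetes will refuse evictions that would drop the matching pods below this threshold.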
A Helm chart is the recommended approach for complex, production deployments. Helm provides automated release versioning while hiding the complexity of configuring individual Kubernetes components. Using the helm template command, you can even render the chart locally and make the necessary adjustments to its YAML manifests.
Kubernetes provides scalability, flexibility, and ease of automation for PostgreSQL databases. By leveraging Kubernetes features like StatefulSets, PVCs, PDBs, and secrets management, you can ensure that your PostgreSQL database is tuned for the production environment.