
Vault is a secrets management system by HashiCorp that allows you to securely store and access tokens, passwords, certificates, and other sensitive data.

In Kubernetes, Vault serves as an external secrets store and provides centralized access management.

Installation

You can install Vault into the cluster through the Hostman control panel. To do this:

  1. Go to the Kubernetes section and click on your cluster.

  2. In the Addons tab, click Vault.

  3. In the Configuration window, you can modify the installation parameters:

    • Switch to advanced installation mode;

    • Edit the configuration manually or upload your own values.yaml file;

  4. Once parameters are set, click Install and wait for the installation to finish.

The default configuration assumes that the add-on will run in dev mode. This is a simplified mode in which Vault automatically initializes itself, requires no storage configuration, and uses a preconfigured root token. The dev mode is suitable only for testing and development; it is not secure for production.
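
For reference, dev mode corresponds to values along these lines (a sketch; the add-on's exact defaults may differ):

server:
  dev:
    enabled: true          # auto-initialized, in-memory storage
    devRootToken: "root"   # preconfigured root token, for testing only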

Verifying the Installation

After the installation is complete, make sure that the Vault components have started successfully. Run the following command:

kubectl get pods -n vault

You should see a list of pods, including:

  • vault-0: the main Vault pod;

  • vault-agent-injector-xxx: the service responsible for automatically injecting secrets into pods (see the example annotations after this list);

  • Additional pods (for example, vault-1, vault-2) if Vault is installed in HA mode.
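
As an illustration of what the injector does, enabling injection for a workload typically comes down to pod annotations roughly like the following (the role name and secret path are hypothetical examples):

spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "my-app"                                         # hypothetical Vault role
        vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/my-app/db"  # hypothetical secret path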

If the configuration includes the option ui = true (enabled by default), you can access the Vault web interface.

  1. Forward the port using the following command:

kubectl port-forward -n vault svc/vault 8200:8200

  2. Open a browser and go to:

http://localhost:8200

In dev mode, Vault is already initialized, and you can log in using the token specified in the configuration. By default, the value is:

devRootToken: "root"
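
With the port forwarded, you can also log in from the CLI using the same token (assuming the vault binary is installed on your machine and the port-forward from the previous step is still running):

export VAULT_ADDR=http://127.0.0.1:8200
vault login root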


HA Mode

HA (High Availability) mode allows you to deploy multiple Vault instances with distributed data storage. This provides:

  • Fault tolerance: if one instance fails, the cluster continues to operate
  • Centralized storage
  • Scalability

One of the pods becomes the leader, while the others run in standby mode. Read/write requests are handled only by the leader, but if it becomes unavailable, control automatically passes to one of the standby pods.
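
Once the HA cluster is initialized and unsealed (see below), you can check which role a replica holds, for example:

kubectl exec -n vault vault-0 -- vault status

The HA Mode field in the output reports whether the replica is the active node (leader) or in standby.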

For this mode to work, we recommend installing the CSI-S3 add-on.

To enable HA mode, specify the following parameters in the add-on configuration:

server.dev.enabled: false          # disable dev mode  
server.standalone.enabled: false   # disable standalone mode  
server.ha.enabled: true            # enable HA mode  
server.ha.replicas: 3              # specify number of replicas  
server.ha.raft.enabled: true       # use built-in Raft storage  
server.dataStorage.storageClass: csi-s3   # connect S3 storage via CSI

Example configuration:
global:
  enabled: true
  namespace: ""
  imagePullSecrets: []
  tlsDisable: true
  externalVaultAddr: ""
  openshift: false
  psp:
    enable: false
    annotations: |
      seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default,runtime/default
      apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
      seccomp.security.alpha.kubernetes.io/defaultProfileName:  runtime/default
      apparmor.security.beta.kubernetes.io/defaultProfileName:  runtime/default
  serverTelemetry:
    prometheusOperator: false

injector:
  enabled: true
  replicas: 1
  port: 8080
  leaderElector:
    enabled: true
  metrics:
    enabled: false
  externalVaultAddr: ""

  image:
    repository: "hashicorp/vault-k8s"
    tag: "1.7.0"
    pullPolicy: IfNotPresent

  agentImage:
    repository: "hashicorp/vault"
    tag: "1.20.4"

  agentDefaults:
    cpuLimit: "500m"
    cpuRequest: "250m"
    memLimit: "128Mi"
    memRequest: "64Mi"
    template: "map"
    templateConfig:
      exitOnRetryFailure: true
      staticSecretRenderInterval: ""

  livenessProbe:
    failureThreshold: 2
    initialDelaySeconds: 5
    periodSeconds: 2
    successThreshold: 1
    timeoutSeconds: 5

  readinessProbe:
    failureThreshold: 2
    initialDelaySeconds: 5
    periodSeconds: 2
    successThreshold: 1
    timeoutSeconds: 5

  startupProbe:
    failureThreshold: 12
    initialDelaySeconds: 5
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 5

  authPath: "auth/kubernetes"
  logLevel: "info"
  logFormat: "standard"
  revokeOnShutdown: false

  webhook:
    failurePolicy: Ignore
    matchPolicy: Exact
    timeoutSeconds: 30
    namespaceSelector: {}
    objectSelector: |
      matchExpressions:
      - key: app.kubernetes.io/name
        operator: NotIn
        values:
        - {{ template "vault.name" . }}-agent-injector

    annotations: {}
  failurePolicy: Ignore
  namespaceSelector: {}
  objectSelector: {}
  webhookAnnotations: {}

  certs:
    secretName: null
    caBundle: ""
    certName: tls.crt
    keyName: tls.key
  securityContext:
    pod: {}
    container: {}
  resources: {}
  extraEnvironmentVars: {}
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: {{ template "vault.name" . }}-agent-injector
              app.kubernetes.io/instance: "{{ .Release.Name }}"
              component: webhook
          topologyKey: kubernetes.io/hostname
  topologySpreadConstraints: []
  tolerations: []
  nodeSelector: {}
  priorityClassName: ""
  annotations: {}
  extraLabels: {}
  hostNetwork: false
  service:
    annotations: {}
  serviceAccount:
    annotations: {}
  podDisruptionBudget: {}
  strategy: {}

server:
  enabled: true
  enterpriseLicense:
    secretName: ""
    secretKey: "license"

  image:
    repository: "hashicorp/vault"
    tag: "1.20.4"
    pullPolicy: IfNotPresent

  updateStrategyType: "OnDelete"

  logLevel: "info"
  logFormat: "standard"

  resources: {}

  ingress:
    enabled: false
    labels: {}
    annotations: {}
    ingressClassName: ""
    pathType: Prefix
    activeService: true
    hosts:
      - host: chart-example.local
        paths: []
    extraPaths: []
    tls: []

  hostAliases: []

  route:
    enabled: false
    activeService: true
    labels: {}
    annotations: {}
    host: chart-example.local
    tls:
      termination: passthrough

  authDelegator:
    enabled: true

  extraInitContainers: null

  extraContainers: null

  shareProcessNamespace: false

  extraArgs: ""

  extraPorts: null

  readinessProbe:
    enabled: true
    port: 8200
    failureThreshold: 2
    initialDelaySeconds: 5
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 3

  livenessProbe:
    enabled: false
    execCommand: []
    path: "/v1/sys/health?standbyok=true"
    port: 8200
    failureThreshold: 2
    initialDelaySeconds: 60
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 3

  terminationGracePeriodSeconds: 10

  preStopSleepSeconds: 5

  preStop: []

  postStart: []

  extraEnvironmentVars: {}

  extraSecretEnvironmentVars: []

  extraVolumes: []

  volumes: null

  volumeMounts: null

  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: {{ template "vault.name" . }}
              app.kubernetes.io/instance: "{{ .Release.Name }}"
              component: server
          topologyKey: kubernetes.io/hostname

  topologySpreadConstraints: []

  tolerations: []

  nodeSelector: {}

  networkPolicy:
    enabled: false
    egress: []

    ingress:
      - from:
        - namespaceSelector: {}
        ports:
        - port: 8200
          protocol: TCP
        - port: 8201
          protocol: TCP

  priorityClassName: ""

  extraLabels: {}

  annotations: {}

  includeConfigAnnotation: false

  service:
    enabled: true
    active:
      enabled: true
      annotations: {}
    standby:
      enabled: true
      annotations: {}
    instanceSelector:
      enabled: true
    ipFamilyPolicy: ""
    ipFamilies: []
    publishNotReadyAddresses: true
    externalTrafficPolicy: Cluster
    port: 8200
    targetPort: 8200
    annotations: {}

  dataStorage:
    enabled: true
    size: 1Gi
    mountPath: "/vault/data"
    storageClass: "csi-s3"
    accessMode: ReadWriteOnce
    annotations: {}
    labels: {}

  persistentVolumeClaimRetentionPolicy: {}

  auditStorage:
    enabled: false
    size: 1Gi
    mountPath: "/vault/audit"
    storageClass: "csi-s3"
    accessMode: ReadWriteOnce
    annotations: {}
    labels: {}

  dev:
    enabled: false
    devRootToken: "root"

  standalone:
    enabled: false
    config: |-
      ui = true
      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      storage "file" {
        path = "/vault/data"
      }

  ha:
    enabled: true
    replicas: 3
    apiAddr: null
    clusterAddr: null
    raft:
      enabled: true
      setNodeId: false
      config: |
        ui = true
        listener "tcp" {
          tls_disable = 1
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }
        storage "raft" {
          path = "/vault/data"
        }
        service_registration "kubernetes" {}

    disruptionBudget:
      enabled: true
      maxUnavailable: null

  serviceAccount:
    create: true
    name: ""
    createSecret: false
    annotations: {}
    extraLabels: {}
    serviceDiscovery:
      enabled: true

  statefulSet:
    annotations: {}
    securityContext:
      pod: {}
      container: {}

  hostNetwork: false

ui:
  enabled: true
  publishNotReadyAddresses: true
  activeVaultPodOnly: false
  serviceType: "ClusterIP"
  serviceNodePort: null
  externalPort: 8200
  targetPort: 8200
  serviceIPFamilyPolicy: ""
  serviceIPFamilies: []
  externalTrafficPolicy: Cluster
  annotations: {}

csi:
  enabled: true
  image:
    repository: "hashicorp/vault-csi-provider"
    tag: "1.5.1"
    pullPolicy: IfNotPresent
  volumes: null
  volumeMounts: null
  resources: {}
  hmacSecretName: ""
  hostNetwork: false
  daemonSet:
    updateStrategy:
      type: RollingUpdate
      maxUnavailable: ""
    annotations: {}
    providersDir: "/var/run/secrets-store-csi-providers"
    kubeletRootDir: "/var/lib/kubelet"
    extraLabels: {}
    securityContext:
      pod: {}
      container: {}
  pod:
    annotations: {}
    tolerations: []
    nodeSelector: {}
    affinity: {}
    extraLabels: {}
  agent:
    enabled: true
    extraArgs: []
    image:
      repository: "hashicorp/vault"
      tag: "1.20.4"
      pullPolicy: IfNotPresent
    logFormat: standard
    logLevel: info
    resources: {}
    securityContext:
      container:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        readOnlyRootFilesystem: true
        runAsNonRoot: true
        runAsUser: 100
        runAsGroup: 1000

  priorityClassName: ""
  serviceAccount:
    annotations: {}
    extraLabels: {}
  readinessProbe:
    failureThreshold: 2
    initialDelaySeconds: 5
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 3
  livenessProbe:
    failureThreshold: 2
    initialDelaySeconds: 5
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 3
  logLevel: "info"
  debug: false
  extraArgs: []

serverTelemetry:
  serviceMonitor:
    enabled: false
    selectors: {}
    interval: 30s
    scrapeTimeout: 10s
    tlsConfig: {}
    authorization: {}
    metricRelabelings: []
  prometheusRules:
      enabled: false
      selectors: {}
      rules: []

When Vault is installed in HA mode, it is not initialized automatically. You must initialize it manually using the CLI.

To initialize Vault, connect to the main pod (vault-0):

kubectl exec -it vault-0 -n vault -- /bin/sh

Then execute the following commands:

export VAULT_ADDR=http://127.0.0.1:8200
export VAULT_CLIENT_TIMEOUT=300s
vault operator init

You will receive several Unseal Keys and an Initial Root Token. Store this information in a safe place—without it, you will not be able to restore access to Vault.

Example output:

Unseal Key 1: 4ErPXwe87rjULP6yz7h3XZ8Dr/nhTyMrVLiIsQ8s5ksX
Unseal Key 2: IVk3hipR5D/yR5ngi1LJaaxRwarEWjR/hjC8DFwXuNYb
Unseal Key 3: qBCx+7B+wiehep0yArs7nVT73SyMYXh+AH3jCXTCs80H
Unseal Key 4: CQm+0tOTS9wZQWYJJU8Roo2tMCGS+dZt7eXMDLjU5gX+
Unseal Key 5: KTvyD+vhEXPNQgcQJQe69Gu/sjkhhl/ScGZNnmmN64xC

Initial Root Token: hvs.uKO8ZtmUgARVtrLhzBlQV4tA

By default, five key shares are generated, and at least three of them are required to unseal Vault.
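
If you need a different number of key shares or a different threshold, vault operator init accepts the corresponding flags, for example:

vault operator init -key-shares=5 -key-threshold=3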

After initialization, Vault is in a sealed state. To start the cluster, it needs to be unsealed.

To do this:

  1. Connect to each pod in turn (vault-0, vault-1, vault-2).

  2. Run the following commands, providing any three of the obtained keys:

vault operator unseal <UNSEAL_KEY_1>
vault operator unseal <UNSEAL_KEY_2>
vault operator unseal <UNSEAL_KEY_3>

  3. Check the status:

vault status

If everything was successful, the Sealed field will have the value false.
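
For convenience, the per-pod unseal steps can also be run from outside the pods, roughly like this (a sketch for a three-replica setup, with your own keys substituted in):

for pod in vault-0 vault-1 vault-2; do
  kubectl exec -n vault "$pod" -- vault operator unseal <UNSEAL_KEY_1>
  kubectl exec -n vault "$pod" -- vault operator unseal <UNSEAL_KEY_2>
  kubectl exec -n vault "$pod" -- vault operator unseal <UNSEAL_KEY_3>
done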
