
Encrypting Secrets in HashiCorp Terraform

Hostman Team
Technical writer
Terraform
25.08.2025
Reading time: 10 min

Encrypting private data in HashiCorp Terraform is an important aspect of working with this tool. There are methods that allow securely storing sensitive data in encrypted form and transferring it in a safe way when needed.

In this article, we will look at how to securely store secrets in Terraform, and encryption methods that help improve security when working with cloud infrastructure. 

Why You Should Not Store Sensitive Information as Plain Text

Terraform users sometimes face the need to handle sensitive information—for example, API keys or a user’s login and password for a database.

Here is an example of code for creating a database in Hostman (a more detailed guide on working with Terraform and Hostman is available on GitHub):

resource "hm_db_mysql_8" "my_db" {
    name = "mysql_8_database"

    # Username and password
    login    = <To be decided>
    password = <To be decided>

    preset_id = data.hm_db_preset.example-db-preset.id
}

To make this code work, the login and password arguments must contain actual credentials. Our task is to handle these credentials properly and prevent their accidental disclosure.

The simplest option is to assign literal values directly:

resource "hm_db_mysql_8" "my_db" {
    name = "mysql_8_database"

    # Username and password
    login    = "root"
    password = "admin"

    preset_id = data.hm_db_preset.example-db-preset.id
}

But this is a bad practice that harms the security of the whole system—even if you use a private Git repository to store the project. Anyone with access to the version control system would also have access to the secrets.

Additionally, many tools that access the repository, such as Jenkins, CircleCI, or GitLab, keep a local copy of the repo before building the code. If sensitive information is stored as plain text, other programs on your computer may also access it, and therefore gain access to the secrets.

In general, storing secrets as plain text greatly simplifies access to them, which could be exploited by attackers. This problem is relevant not only to Terraform but to any tool. Therefore, the main rule of secret encryption in Infrastructure as Code is: never store sensitive information as plain text.
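One simple safeguard, regardless of which encryption method you choose, is to keep files that may contain secrets out of version control. A minimal .gitignore for a Terraform project might look like this (the entries are common conventions; adjust them to your layout):

```gitignore
# Terraform state can contain secrets in plain text
*.tfstate
*.tfstate.backup

# Variable definition files often hold credentials
*.tfvars
*.tfvars.json

# Local provider cache and crash logs
.terraform/
crash.log
```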

Risks of unprotected information:

  • Compliance violations. Many industries work with sensitive information, such as personal data, which is regulated by law. Mishandling sensitive data can result in heavy fines and reputational damage.
  • Unauthorized access to data. Improperly protected data may be accessed by people without the proper authorization.
  • Security breaches. Exposed data can be used by attackers to steal, alter, or even fully compromise entire systems.

The terraform.tfstate File

Terraform has a flaw that reduces system security. Each time Terraform is used to deploy infrastructure, it saves a large amount of information about it, including database connection parameters, inside the terraform.tfstate file—in plain text. This file is stored in the same directory where the apply command is run.

This means that even if you use one of the encryption methods described in this article, sensitive data will still appear in plain text in the state file.

This problem has been known for several years, but there is still no universal solution. There are temporary fixes that remove secrets from state files, but they are not reliable and may break compatibility after updates.

Therefore, at present, regardless of the encryption method, the most important aspect of data security is securing the state file. It is not recommended to keep it locally or in a repo. Instead, use storage systems that support encryption, such as Amazon S3 with access control.
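For example, the state can be kept in an S3 bucket with server-side encryption enabled. A minimal backend sketch (the bucket name, key, and region are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket  = "my-terraform-state"     # placeholder bucket name
    key     = "prod/terraform.tfstate"
    region  = "us-east-1"
    encrypt = true                     # encrypt the state object at rest
  }
}
```

Combined with bucket access policies, this keeps the state file off developer machines and out of the repository.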

Environment Variables for Key Encryption

Terraform supports reading environment variables, and this can be used for storing keys securely.

First, create a file variables.tf with variables:

variable "username" {
    description = "Username"
    type        = string
    sensitive   = true
}

variable "password" {
    description = "User password"
    type        = string
    sensitive   = true
}

The type parameter sets the variable type, while sensitive = true marks it as sensitive so its value won’t be shown in logs, including during plan and apply.
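The sensitive flag also propagates: if you expose such a variable through an output, Terraform requires the output itself to be marked sensitive, otherwise plan and apply will fail with an error. A short sketch:

```hcl
output "db_user" {
  value     = var.username
  sensitive = true  # required because the value derives from a sensitive variable
}
```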

Now replace credentials in your resources with variables:

resource "hm_db_mysql_8" "my_db" {
    name = "mysql_8_database"

    # Username and password
    login    = var.username
    password = var.password

    preset_id = data.hm_db_preset.example-db-preset.id
}

Then set the values via terminal. Prefix each variable name with TF_VAR_:

export TF_VAR_username="root"
export TF_VAR_password="admin"

When you run terraform apply, Terraform will use these environment variable values.

Important: Bash saves executed commands in its history. To keep logins and passwords out of it, set the environment variable HISTCONTROL=ignorespace; any command that starts with a space will then not be saved:

export HISTCONTROL=ignorespace

Encrypting Sensitive Data in Terraform with GPG

Using environment variables prevents secrets from appearing in Terraform code, but it doesn’t fully solve the problem; it only shifts it from Terraform to the OS, where the data must also be protected.

A popular approach is to store sensitive data in GPG-encrypted files. You can use plain GPG, or a password manager that uses the same logic—for example, Pass.

Pass Password Manager

In Pass, secrets are stored as GPG-encrypted files organized into a directory hierarchy. They can be copied between devices and managed with standard terminal commands.

To install Pass on Ubuntu:

sudo apt update
sudo apt install pass

You’ll also need GPG. Install and generate a key:

sudo apt install gpg
gpg --full-generate-key

Choose the key type (e.g., “RSA and RSA”) and size, and fill in your details. Once the key is generated, copy its ID from the gpg output and initialize Pass with it:

pass init <GPG-key>

Encrypt credentials:

pass insert username
pass insert password

To access secrets from the command line, use the pass command and the secret’s name:

pass username

Enter the passphrase you specified when generating the GPG key, and the secret will be displayed in the console as plain text.

Now, to use the encrypted data, set them as environment variables:

export TF_VAR_username=$(pass username)
export TF_VAR_password=$(pass password)

Pros and cons of using environment variables for secrets:

Pros:

  • Secrets remain outside the code, which means they are not stored in the repository.
  • Easy to use: you don’t need advanced qualifications to start working with it.
  • Environment variables can be integrated with many password managers, as in the example with Pass.
  • Suitable for test runs: dummy values can easily be set as environment variables.

Cons:

  • The infrastructure is not fully described in Terraform code, which complicates its maintenance and reduces readability.
  • Requires additional steps to set up and use.
  • Since all work with secrets takes place outside of Terraform, Terraform’s built-in security mechanisms do not apply to them.

Encrypting Secrets with HashiCorp Vault

Vault is an open-source external storage system for sensitive information. Like Terraform, it was developed by HashiCorp.

Vault allows you to implement centralized encrypted storage for secrets. Here are its key features:

  • Storage of sensitive information. The core function of Vault is encryption and storage of secrets such as passwords, API keys, certificates, tokens, and other data.
  • User authentication. To access secrets, a user or application must authenticate. The tool supports several methods: from the classic login-password combination to integration with other authentication systems.
  • Access control. Vault allows fine-grained configuration of which users and applications can access certain secrets. This is achieved through access policies that define permissions for viewing and performing operations on secrets.
  • Audit logging. The tool makes it possible to track who accessed the data in the storage and when.

Installation

Vault is an external storage system, and interaction with it is carried out over the network. Therefore, it can be installed either on a local device (accessed via localhost) or on a remote server.

In this material, we’ll show how to install it on Ubuntu.

Update package indexes and install GPG:

sudo apt update && sudo apt install gpg

Add the HashiCorp GPG key and repository:

wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

Install Vault:

sudo apt update && sudo apt install vault

To make sure the installation was successful, check the version of the software:

vault version

If installation was successful, the terminal will display the installed version of the key storage system.

Configuration

After installation, the tool must be configured. We will use it in server mode. Start it with:

vault server -dev

Important: here we are running the server in development mode, which stores all data, including keys, in RAM; when the server restarts, all data is lost. In production, use the standard server mode instead. Development mode is suitable for learning purposes, as in this article.

During execution, the terminal will display details of the process and, afterwards, the URL where the server is running and a token for authorization. You can even connect to it through a browser by entering the server’s URL in the address bar.

For further work with the server in development mode, create an environment variable with its URL. If you installed Vault on your local machine, the command will look like this:

export VAULT_ADDR='http://127.0.0.1:8200'

Check the storage status as follows:

vault status

The command will return information about the storage: seal status, software version, and more.

To store sensitive data, we’ll use a key-value store. First, enable it:

vault secrets enable -path=db_data kv

Then, add secrets into the storage:

vault kv put db_data/secret_tf username=root password=admin

You can check the result directly in your browser.

Using HashiCorp Vault for Storing Secrets in Terraform

In the main Terraform file, specify Vault as a provider:

terraform {
     required_providers {
          vault = {
               source  = "hashicorp/vault"
               version = "3.23.0"
          }
          hm = {
               source = "hostman-cloud/hostman"
          }
     }
     required_version = ">= 0.13"
}

In the Terraform configuration file, also configure the Vault provider with the server address and token. The token is shown as a placeholder here; in practice, prefer passing it via the VAULT_TOKEN environment variable rather than hardcoding it:

provider "vault" {
  address = "http://127.0.0.1:8200"
  token   = "Vault Token"
}

Then, implement a method for reading data from the storage:

data "vault_generic_secret" "secret_credentials" {
     path = "db_data/secret_tf"
}

Now you can use the retrieved secrets in a resource, for example when creating a MySQL database:

resource "hm_db_mysql_8" "my_db" {
     name = "mysql_8_database"

     # Username and password
     login    = data.vault_generic_secret.secret_credentials.data["username"]
     password = data.vault_generic_secret.secret_credentials.data["password"]

     preset_id = data.hm_db_preset.example-db-preset.id
}

How to Automate Secret Encryption in Terraform

Secret encryption in Terraform can be automated to ensure a scalable and secure process for managing sensitive data. Below are some methods of automation:

  • Scripts. They can be used to automatically pass secrets into Terraform. This can be implemented using GPG or OpenSSL.

  • CI/CD tools. Many CI/CD tools, such as GitLab CI/CD or Jenkins, have built-in encryption that can be used automatically together with Terraform.
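As an illustration of the scripting approach, here is a minimal Python sketch (the function and secret names are hypothetical) that injects secrets into Terraform through TF_VAR_ environment variables, so they never appear in .tf files or in shell history:

```python
import os
import subprocess

def build_tf_env(secrets):
    """Return a copy of the environment with each secret exposed
    as a TF_VAR_-prefixed variable that Terraform can read."""
    env = dict(os.environ)
    for name, value in secrets.items():
        env[f"TF_VAR_{name}"] = value
    return env

def terraform_apply(secrets):
    """Run `terraform apply` with the secrets injected via the environment."""
    subprocess.run(["terraform", "apply", "-auto-approve"],
                   env=build_tf_env(secrets), check=True)

# Example usage (values would come from `pass`, Vault, or a CI secret store):
# terraform_apply({"username": "root", "password": "admin"})
```

In a real pipeline, the secrets dictionary would be populated from an encrypted source (GPG, Pass, Vault, or the CI system's credential store) rather than written inline.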

Conclusion

In this article, we examined the issue of handling sensitive information, discussed the risks of storing it in an unprotected form, explained why it is important to carefully protect the state file, and provided examples of encrypting secrets in Terraform using HashiCorp Vault, environment variables, and GPG.

Terraform
25.08.2025
Reading time: 10 min

Similar

Terraform

How to Deploy Jenkins CI/CD Pipelines with Terraform

Modern software development requires fast and high-quality delivery of new features and fixes. CI/CD (Continuous Integration and Delivery) automation allows you to: Reduce time-to-market: Developers can quickly validate code and deliver it to the production environment. Improve code quality: Automated testing detects errors at early stages, reducing the cost of fixing them. Increase reliability and stability: CI/CD pipelines minimize the human factor, ensuring process consistency. Ensure flexibility and scalability: It becomes easier to adapt to changing requirements and increase workloads. Benefits of Using Jenkins and Terraform Together Jenkins and Terraform are a powerful combination of tools for CI/CD automation: Jenkins: Provides flexible pipeline configuration options. Integrates with many tools and version control systems (Git, GitHub, Bitbucket). Supports pipeline customization through Jenkinsfile. Terraform: Simplifies infrastructure creation and management through declarative code. Supports multi-cloud environments, making it a universal deployment solution. Allows easy tracking of infrastructure changes through its state management system. By using Jenkins and Terraform together, you can: Automate infrastructure deployment: Terraform creates the environment for running the application. Optimize CI/CD pipelines: Jenkins handles build, test, and deploy using the resources provisioned by Terraform. Reduce time and effort in infrastructure management: The entire process becomes manageable through code (IaC). Deployment Registration and Initial Setup in Hostman Go to the Hostman website and create an account. Create a new project to separate CI/CD resources from other projects. Configure access keys. In the API section, create an API key for working with Terraform. Save the key in a secure place, as it will be needed for configuration. Installing and Configuring Terraform Download Terraform from the HashiCorp website. 
Install it on your system: For Windows: add the path to the Terraform binary to environment variables. For macOS/Linux: move the terraform file to /usr/local/bin/. Verify installation: terraform --version Creating Hostman Infrastructure Next, we will use Terraform to create the necessary infrastructure. We recommend using at least 2 CPUs, 4 GB RAM for a basic Jenkins setup, and 15–40 GB disk for temporary files and artifacts. Create the configuration file provider.tf: terraform { required_providers { hm = { source = "hostman-cloud/hostman" } } required_version = ">= 0.13" } provider "hm" { token = "your_API_key" } Describe infrastructure for Jenkins deployment in main.tf: data "hm_configurator" "example_configurator" { location = "us-2" } data "hm_os" "example_os" { name = "ubuntu" version = "22.04" } resource "hm_ssh_key" "jenkins_key" { name = "jenkins-ssh-key" body = file("~/.ssh/id_rsa.pub") } resource "hm_vpc" "jenkins_vpc" { name = "jenkins-vpc" description = "VPC for Jenkins infrastructure" subnet_v4 = "192.168.0.0/24" location = "us-2" } resource "hm_server" "jenkins_server" { name = "Jenkins-Server" os_id = data.hm_os.example_os.id ssh_keys_ids = [hm_ssh_key.jenkins_key.id] configuration { configurator_id = data.hm_configurator.example_configurator.id disk = 15360 cpu = 2 ram = 4096 } local_network { id = hm_vpc.jenkins_vpc.id ip = "192.168.0.10" # Static IP within the VPC subnet mode = "dnat_and_snat" } connection { type = "ssh" user = "root" private_key = file("~/.ssh/id_rsa") host = self.networks[0].ips[0].ip # Correct way to get IP timeout = "10m" } provisioner "remote-exec" { inline = [ "apt-get update -y", "apt-get install -y ca-certificates software-properties-common", "add-apt-repository -y ppa:openjdk-r/ppa", "apt-get update -y", "apt-get install -y openjdk-17-jdk", "curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null", "echo deb 
[signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/ | tee /etc/apt/sources.list.d/jenkins.list > /dev/null", "apt-get update -y", "apt-get install -y jenkins", "update-alternatives --set java /usr/lib/jvm/java-17-openjdk-amd64/bin/java", "sed -i 's|JAVA_HOME=.*|JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64|' /etc/default/jenkins", "systemctl daemon-reload", "systemctl start jenkins", "ufw allow 8080" ] } } output "jenkins_url" { value = "http://${hm_server.jenkins_server.networks[0].ips[0].ip}:8080" } Install the Hostman Terraform Provider: terraform init Check the connection to Hostman: terraform plan After successful initialization, you should see the following message: Apply the changes: terraform apply Confirm execution by typing yes. When you go to your Hostman project, you will see a new server being created. Take note of the resources—they match exactly what we specified in the main.tf configuration. After the server is created, go to the IP address with port 8080, where Jenkins should now be running. This address is also visible in the control panel, on the server Dashboard. You will see Jenkins’ initial setup screen: Configuration Breakdown In this section, we will go through each part of the code in detail. Data Sources (Fetching Infrastructure Information) Fetching the configurator ID for the selected region. Needed to define available hardware configurations (CPU/RAM/Disk): data "hm_configurator" "example_configurator" { location = "us-2" } Fetching the OS image ID for Ubuntu 22.04. 
data "hm_os" "example_os" { name = "ubuntu" version = "22.04" } Resources (Creating Infrastructure) Creating an SSH key for server access: resource "hm_ssh_key" "jenkins_key" {   name = "jenkins-ssh-key"   body = file("~/.ssh/id_rsa.pub") } Creating a private network (VPC) for Jenkins isolation: resource "hm_vpc" "jenkins_vpc" {   name        = "jenkins-vpc"   description = "VPC for Jenkins infrastructure"   subnet_v4 = "192.168.0.0/24"   location = "us-2" } Server Configuration Main server parameters: name: Server name os_id: OS image ID ssh_keys_ids: SSH key binding resource "hm_server" "jenkins_server" {   name     = "Jenkins-Server"   os_id    = data.hm_os.example_os.id   ssh_keys_ids = [hm_ssh_key.jenkins_key.id] Server characteristics: configurator_id: Link to configurator disk: Disk size (15 GB) cpu: Number of cores ram: Memory size (4 GB) configuration {   configurator_id = data.hm_configurator.example_configurator.id   disk = 15360    cpu  = 2        ram  = 4096   } Private network settings: id: ID of created VPC ip: Static IP within subnet mode: NAT mode local_network {   id = hm_vpc.jenkins_vpc.id   ip = "192.168.0.10"   mode = "dnat_and_snat" } Connection Settings SSH connection parameters for provisioning: connection {   type        = "ssh"   user        = "root"   private_key = file("~/.ssh/id_rsa")   host        = self.networks[0].ips[0].ip   timeout     = "10m" } Provisioning (Software Installation) provisioner "remote-exec" { inline = [ # Package update "apt-get update -y", # Installing dependencies "apt-get install -y ca-certificates software-properties-common", # Adding Java repository "add-apt-repository -y ppa:openjdk-r/ppa", # Installing Java 17 "apt-get install -y openjdk-17-jdk", # Adding Jenkins repository "curl -fsSL ...", "echo deb ...", # Installing Jenkins "apt-get install -y jenkins", # Java configuration "update-alternatives --set java ...", "sed -i 's|JAVA_HOME=.*|...|' /etc/default/jenkins", # Starting service "systemctl 
daemon-reload", "systemctl start jenkins", # Opening port "ufw allow 8080" ] } Output Information Displaying the URL for accessing Jenkins: output "jenkins_url" {   value = "http://${hm_server.jenkins_server.networks[0].ips[0].ip}:8080" } Potential Issues with Jenkins and Terraform Secret and Confidential Data Management Problem: Storing API keys, SSH keys, and other secrets may be unsafe if added as plain text. Solution: Use secret managers such as HashiCorp Vault, or Jenkins integration with credential management systems. In Terraform, specify secrets via environment variables or remote storage. Terraform State Management Issues Problem: The state file (terraform.tfstate) may be corrupted or overwritten if multiple users access it simultaneously. Solution: Configure a remote backend (e.g., object storage) to store the state. Enable state locking to prevent simultaneous command execution. Complexity of Terraform and Jenkins Integration Problem: Integration can be difficult for beginners due to differences in configuration and infrastructure management. Solution: Use the Terraform plugin for Jenkins to simplify setup. Create detailed documentation and use Jenkinsfile templates for repeatable processes. Jenkins and Terraform Updates Problem: New versions of Jenkins and Terraform may be incompatible with existing configurations. Solution: Test updates in local or test environments. Regularly update Jenkins plugins to maintain compatibility. Hostman Limitations and Workarounds Hostman API Limitations Problem: Not all actions or resources may be available through the API. Solution: Use a combination of Terraform and the Hostman interface. Leave operations unavailable via API for manual execution or scripts in Python/Go. Resource Limitations Problem: Limits on the number of servers, storage, or networking resources may cause scaling delays. Solution: Optimize existing resource usage. Plan workload ahead, increasing limits through Hostman support. 
Network Infrastructure Scalability Problem: Deploying large applications may be difficult due to complex network configurations (e.g., VPC). Solution: Break infrastructure into Terraform modules. Use pre-prepared network architecture templates. Lack of Deep Integrations Problem: Hostman may not provide full integrations with external services, such as GitHub Actions or third-party monitoring. Solution: Configure webhook notifications for integration with Jenkins or other CI/CD systems. Use external APIs of third-party services for custom integrations. Debugging Recommendations Terraform Debugging Use the terraform plan option to validate configuration before applying changes. For detailed analysis, add the -debug flag:terraform apply -debug Verify provider settings (e.g., token and API keys). Jenkins Pipeline Debugging Enable logging in Jenkins: specify --verbose or --debug in Terraform and Docker build commands. Configure artifact archiving in Jenkins to save error logs. Split steps in Jenkinsfile to isolate problematic stages. Monitoring and Notifications Install monitoring plugins in Jenkins (e.g., Slack or Email Extension) for failure notifications. Use external monitoring (e.g., Zabbix or Prometheus) to monitor server status in Hostman. Backups of Terraform State and Jenkins Data Configure regular backups of the state file (terraform.tfstate) to remote storage. Create backups of Jenkins configuration and job data. Troubleshooting Common Errors Jenkins server connection error: Check SSH keys and firewall rules. Terraform resource creation error: Verify API key validity and available quotas. Pipeline hang: Ensure Docker containers are correctly downloaded and running. Efficient work with Jenkins, Terraform, and Hostman requires awareness of possible issues and systematic resolution. Use the recommendations above to prevent errors, optimize resources, and simplify debugging of your CI/CD processes. 
Recommendations for Beginners and Professionals For Beginners: Start simple: Learn basic Terraform commands such as init, plan, apply. Configure Jenkins using pre-built images and plugins. Use documentation and templates: Explore Hostman API docs and Terraform module examples. Use Jenkinsfile templates to learn pipeline writing faster. Test locally: Run a local Jenkins instance and test small Terraform configurations before moving to the cloud. Experiment: Start with small projects to understand CI/CD and infrastructure management principles. Reach out to the Hostman community or support if issues arise. For Professionals: Optimize processes: Create custom Terraform modules for reuse across projects. Configure Jenkins for parallel builds to reduce pipeline execution time. Focus on security: Configure secure storage of keys and sensitive data using Vault or Jenkins built-in tools. Protect terraform.tfstate by using remote storage with HTTPS access. Develop multi-cloud approach: Integrate Hostman with other platforms to create backup pipelines for critical applications. Use Terraform to manage cross-cloud infrastructure. Adopt new tools: Explore containerization and orchestration with Kubernetes and Docker. Integrate Jenkins with monitoring systems such as Prometheus or Grafana for performance analysis. Jenkins and Terraform combined with Hostman open wide opportunities for developers and DevOps engineers. Beginners can easily learn these tools by starting with simple projects, while professionals can implement complex CI/CD pipelines with multi-cloud infrastructure. This approach not only accelerates development but also helps create a scalable, secure, and fault-tolerant environment for modern applications. 
Resources for Learning Terraform Documentation: Terraform Official Website Terraform Module Registry Jenkins Guides: Official Jenkins Documentation Jenkins Plugins Article: How to Automate Jenkins Setup with Docker Hostman: Hostman API Documentation Hostman Terraform Provider Documentation Hostman user guides Useful Tools: VS Code for working with Terraform and Jenkins HashiCorp Vault for secure token storage Communities: DevOps on Reddit Stack Overflow: Terraform
20 August 2025 · 12 min to read
Terraform

Using Variables in Terraform: Guide and Examples

Terraform is a popular software tool for DevOps engineers and system administrators, primarily designed for creating and managing infrastructure in the cloud. Its main feature is the ability to automate all processes related to infrastructure deployment. In Terraform, there is a set of core elements used to describe infrastructure. These include providers, resources, data sources, modules, expressions, and variables. We have already touched on variables in our article on Managing Private IP Addresses with Terraform and discussed their use in configurations. Variables in Terraform are special elements that allow users to store and pass values into different aspects of modules without modifying the code of the main configuration file. They provide flexibility in managing infrastructure settings and parameters, making it easier to configure and maintain. In this guide, we will focus on Terraform variables and explain how to use them in your configuration. Declaring Variables You can think of variables as containers where users store information (such as the deployment region, instance types, passwords, or access keys). You define their values once, using CLI parameters or environment variables, and can then use them throughout your configuration. To use Terraform variables, you first need to declare them. This is usually done in the variables.tf file using the variable block. The syntax for declaring variables looks like this: variable "variable_name" {   list_of_arguments } Each variable must have a unique name. This name is used to assign a value from outside and to reference it within a module. The name can be anything, but it must not conflict with meta-arguments such as version, providers, locals, etc. Arguments for variables are optional, but you should not avoid them, as they allow you to set additional parameters. The main arguments include: type — specifies the type of data allowed for the variable. 
We will discuss possible types in detail in the section “Variable Type Restrictions”. description — adds a description explaining the purpose and usage of the variable. default — sets a default value for the variable. validation — defines custom validation rules. sensitive — marks the variable as confidential in output. nullable — accepts two values (true or false) and specifies whether the variable can take a null value. We’ll go over some of these arguments in detail in the next sections. Variable Type Restrictions As mentioned above, you can restrict the type of data that a variable can accept using the type argument. Terraform supports the following data types: number — numeric values (integers, floats, etc.); string — a Unicode string for storing text; bool — Boolean values (true or false); map or object — key-value pairs enclosed in curly braces {}; list or tuple — ordered sequences of values, enclosed in square brackets []. Example of specifying a variable type: variable "region" {   type = string } Variable Description Since input variables in a module are part of its user interface, you can briefly describe their purpose using the optional description argument. Example: variable "region" {   type        = string   description = "Specifies the server region" } Descriptions help developers and other users better understand the role of a variable and the type of values it expects. Custom Validation Rules In Terraform, you can define custom validation rules for a variable using the validation argument. Each validation must contain two required arguments: condition — an expression that returns true if the value is valid and false otherwise; error_message — the message displayed to the user if condition returns false. 
Example: variable "email" { type = string description = "Email address" validation { condition = can(regex("^\\S+@\\S+\\.\\S+$", var.email)) error_message = "Invalid email address format" } } In this example, we validate the email variable against a regular expression for correct email formatting. If validation fails, the user will see the message “Invalid email address format.” Variable Confidentiality When the sensitive argument is set, Terraform treats the variable in a special way to prevent accidental exposure of sensitive data in plan or apply output. Example: variable "user" { type = object({ name = string role = string }) sensitive = true } resource "example_resource" "example1" { name = var.user.name role = var.user.role } Any resources or other Terraform elements associated with a sensitive variable also become sensitive. As a result, sensitive values will be hidden in the output. Assigning Values to Root Module Variables After declaring variables in the root module, you can assign values to them in several ways: Command Line You can pass values to variables using the -var parameter when running terraform plan or terraform apply. Example: terraform apply -var="variable1=value1" -var="variable2=value2" There is no limit to how many -var parameters you can use in one command. Variable Definition Files You can also specify variable values in a special file that must end with .tfvars or .tfvars.json. Example .tfvars file: variable1 = "value1" variable2 = "value2" This is how to use the .tfvars file: terraform apply -var-file="filename.tfvars" Environment Variables Another method is to use environment variables with the TF_VAR_ prefix. Example: export TF_VAR_variable1=value1 export TF_VAR_variable2=value2 terraform apply Conclusion In this guide, we explored Terraform variables, their declaration syntax, the main arguments they support, and the methods for assigning them values. 
Correct use of variables will help you create a more flexible and secure infrastructure with Terraform.
20 August 2025 · 5 min to read
Terraform

Increasing Boot Disk Size via Terraform: A Complete Guide

Terraform is one of the most effective tools for working with the IaC (Infrastructure as Code) model. This open-source software makes it much easier to deploy and manage infrastructure both in local environments and in the cloud. In this article, we will look at how to increase the size of a virtual machine’s boot disk using Terraform, in different ways and across different environments.

How to Create Boot Disks in Terraform

Terraform offers a number of tools for working with virtual machine disks. However, before you can increase a disk size, you first need to create it.

When creating a virtual machine, you can immediately specify the disk size using the size parameter. For example, to create a 100 GB disk, add the line size = 1024 * 100 to your Terraform configuration. The configuration file has a .tf extension and is located in the root directory.

For example, if you want to create a virtual machine in Hostman and set its hard disk size to 100 GB, create a hm_server resource in the .tf configuration file (for an additional disk, you would create hm_server_disk) and specify the size in its configuration block:

resource "hm_server" "my-server" {
  name  = "My Server Disk"
  os_id = data.hm_os.os.id

  configuration {
    configurator_id = data.hm_configurator.configurator.id
    size            = 1024 * 100
  }
}

In this example, we set the disk size to 100 GB. The size is always specified in megabytes, in increments of 5120 MB (5 GB). Naturally, you can adjust parameter values according to your project’s needs.

To create an additional disk and attach it to a virtual machine, use the source_server_id parameter in the disk block:

resource "hm_server_disk" "my-server-disk" {
  disk {
    size             = 1024 * 10
    source_server_id = hm_server.my-server.id
  }
}

Note that the id value will be automatically assigned to the resource after it is successfully created.
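The examples above reference data sources for the OS image and the configurator (data.hm_os.os and data.hm_configurator.configurator) without declaring them. A minimal sketch of what such declarations might look like is shown below; the filter attribute names here are assumptions for illustration, so check the Hostman provider documentation on GitHub for the exact schema:

```hcl
# Hypothetical lookups; attribute names are assumptions,
# consult the Hostman provider docs for the real schema.
data "hm_os" "os" {
  name    = "ubuntu" # assumed filter attribute
  version = "22.04"  # assumed filter attribute
}

data "hm_configurator" "configurator" {
  location = "us-2" # assumed filter attribute
}
```

With these data blocks in place, the resource examples in this section can resolve data.hm_os.os.id and data.hm_configurator.configurator.id at plan time.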
How to Increase the Boot Disk Size of a Virtual Machine in Terraform

Changing the size of a Hostman disk in Terraform is straightforward. Here’s an example for our newly created system disk:

configuration {
  configurator_id = data.hm_configurator.configurator.id
  size            = 1024 * 200
}

Now the disk size will be 200 GB, and after a reboot the filesystem will expand as well. Just remember to apply the updated configuration to your infrastructure with:

terraform apply

You can only increase disk sizes (not decrease them), but you can also delete existing disks and add new ones.

Increasing Disk Size with Terraform in GCP

Google Cloud Platform (GCP) is a cloud platform offering many tools and services for developing, deploying, and managing applications in the cloud. The instructions given above will work for GCP as well; you just need to use the google_compute_disk resource instead of hm_server. Example:

resource "google_compute_disk" "mydisk" {
  name = "my-new-vm"
  type = "pd-ssd"
  size = 100
}

Note that in GCP the size is specified in gigabytes, and disk types use names such as pd-standard and pd-ssd.

GCP also allows you to create a disk via Terraform whose size can be changed later without having to create a new disk and copy data from the old one. You can do this by adding an image reference:

resource "google_compute_disk" "mydisk" {
  name  = "my-new-vm"
  image = data.google_compute_image.my_image.self_link
}

Here we added a line with the image data. This enables changing the disk parameters later without relying on initialization parameters, which are intended for recreating the disk rather than modifying it.
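Because shrinking is not supported, and because some attribute changes can force Terraform to destroy and recreate a disk, it can be worth guarding important disks against accidental deletion. A sketch using Terraform’s standard lifecycle meta-argument (the resource names follow the earlier Hostman examples):

```hcl
resource "hm_server_disk" "my-server-disk" {
  disk {
    size             = 1024 * 200
    source_server_id = hm_server.my-server.id
  }

  # Refuse to apply any plan that would destroy this disk,
  # e.g. a change that forces destroy-and-recreate.
  lifecycle {
    prevent_destroy = true
  }
}
```

With prevent_destroy set, terraform apply fails with an error instead of deleting the disk, so a size increase goes through while a destructive change is blocked until you deliberately remove the guard.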
Step 1 — Create a Terraform Resource

The code will be similar to the previous examples, but with EBS-specific values and some additional parameters:

resource "aws_ebs_volume" "mydisk" {
  availability_zone = "eu-north-1a"
  size              = 200
  type              = "gp3"

  tags = {
    Name      = "mydisk"
    Role      = "db"
    Terraform = "true"
    FS        = "xfs"
  }
}

Then import the volume (the volume ID below is an example):

terraform import aws_ebs_volume.mydisk vol-13579ace02468bdf1

If the import is successful, you’ll get a confirmation message. Now you can change the volume size by replacing 200 with 500 in the code:

size = 500

Then run:

terraform apply -target=aws_ebs_volume.mydisk

After that, you should see a message confirming the volume size change.

Step 2 — Get the Instance and Its IP Address

Look up the instance the volume is attached to and output its identifier:

data "aws_instance" "mydisk" {
  filter {
    name   = "block-device-mapping.volume-id"
    values = [aws_ebs_volume.mydisk.id]
  }
}

output "instance_id" {
  value = data.aws_instance.mydisk.id
}

Update the configuration with:

terraform apply
terraform refresh

This should produce output like:

instance_id = i-13579ace02468bdf1

Next, get the device name under which the volume appears inside the instance:

locals {
  mount_point = data.aws_instance.mydisk.ebs_block_device.0.device_name
}

Step 3 — Run the Script

To make the OS recognize and use the entire expanded disk size, create a resource like this:

resource "null_resource" "expand_disk" {
  connection {
    type        = "ssh"
    user        = "username"
    private_key = file("ssh-key-here")
    host        = data.aws_instance.mydisk.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo lsblk",
      "sudo xfs_growfs ${local.mount_point}",
    ]
  }
}

Finally, run:

terraform apply -target=null_resource.expand_disk

This way, you can increase the size of EBS volumes in Terraform without creating a new volume, which is not always convenient. Note that EBS volumes can only be grown, not shrunk; to reduce a volume’s size, you would have to create a new, smaller volume and migrate the data.
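If the instance itself is managed by Terraform, there is a simpler route for the boot volume specifically: grow it directly through the root_block_device block of aws_instance. A sketch (the AMI ID and names here are placeholders):

```hcl
resource "aws_instance" "mydb" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  root_block_device {
    volume_size = 200   # GB; increase this value and re-run terraform apply
    volume_type = "gp3"
  }
}
```

Increasing volume_size here is applied as an in-place update to the existing root volume; you still need to grow the filesystem inside the guest afterwards, as in Step 3 above.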
Conclusion

We’ve learned how to create disks in Terraform and increase their size across different environments, using Terraform configuration files and scripts.
20 August 2025 · 5 min to read
