
Terraform for DevOps: Best Practices

Hostman Team
Technical writer
Terraform
27.03.2025
Reading time: 10 min

Terraform is one of the most popular Infrastructure as Code (IaC) tools that allows you to manage infrastructure using various cloud providers. It follows a declarative approach, meaning you describe the desired infrastructure rather than writing step-by-step instructions for its creation—Terraform then automatically provisions it.

This tutorial outlines best practices for efficient Terraform development.

File and Folder Structure

Maintaining a clear and organized file structure is crucial when working on a large project with complex infrastructure. A consistent folder and file structure improves project maintainability.

A project example:

-- PROJECT-DIRECTORY/
   -- modules/
      -- <service1-name>/
         -- main.tf
         -- variables.tf
         -- outputs.tf
         -- provider.tf
         -- README.md
      -- <service2-name>/
         -- main.tf
         -- variables.tf
         -- outputs.tf
         -- provider.tf
         -- README.md
      -- ...other modules...
   -- environments/
      -- dev/
         -- backend.tf
         -- main.tf
         -- outputs.tf
         -- variables.tf
         -- terraform.tfvars
      -- qa/
         -- backend.tf
         -- main.tf
         -- outputs.tf
         -- variables.tf
         -- terraform.tfvars
      -- stage/
         -- backend.tf
         -- main.tf
         -- outputs.tf
         -- variables.tf
         -- terraform.tfvars
      -- prod/
         -- backend.tf
         -- main.tf
         -- outputs.tf
         -- variables.tf
         -- terraform.tfvars

Separating Configuration Files

Keeping all the code in a single main.tf file is a bad idea. Instead, split it into separate files based on their purpose:

  • main.tf – Calls modules and data sources to create resources.

  • variables.tf – Defines variables used in main.tf.

  • outputs.tf – Specifies the outputs of resources created in main.tf.

  • versions.tf – Defines Terraform and provider version requirements.

  • terraform.tfvars – Contains variable values.
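As an illustration, a minimal versions.tf might pin the Terraform version and the providers a configuration uses (the provider and version numbers below are placeholders, not a recommendation):

```hcl
# versions.tf -- pins Terraform itself and every provider this configuration uses.
terraform {
  required_version = ">= 1.5.0"   # placeholder minimum version

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"          # allow patch/minor updates within 5.x only
    }
  }
}
```

Pinning versions here keeps runs reproducible across machines and CI.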

Module Structure

The standard module structure is described in the Terraform documentation.

Follow these best practices:

  • Group resources based on their purpose, e.g., vps.tf, s3.tf, load_balancer.tf.

  • Avoid creating a separate file for each resource unless necessary.

  • Include a README.md file for each module with a clear description of its purpose.

Directory Separation for Applications and Environments

  • To independently manage infrastructure for different applications, place resources for each application in its own directory.

  • Store shared resources (e.g., networks) in a dedicated common resources directory.

  • Use separate directories for each environment (dev, qa, stage, production).

  • Use modules to share common code across all environments.

  • Each environment directory should contain:

    • backend.tf – Defines the Terraform backend state configuration.

    • main.tf – Contains the infrastructure description.
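Following this layout, an environment's main.tf can consist almost entirely of calls into the shared modules/ tree. A minimal sketch (the module name and inputs are hypothetical):

```hcl
# environments/dev/main.tf -- composes shared modules; no resources defined inline.
module "network" {
  source = "../../modules/network"   # hypothetical shared module

  environment = "dev"
  cidr_block  = var.cidr_block       # value supplied by terraform.tfvars
}
```

The same module call is repeated in qa/, stage/, and prod/ with different variable values, so the environments differ only in data, not in code.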

Static Files and Templates

  • Static files should be stored in the files/ directory.

  • Template files should be placed in the templates/ directory.

General Code Structuring Guidelines

  • Place count or for_each as the first argument within a resource or data source block, followed by a new line. Always list tags, depends_on, and lifecycle as the last arguments in a consistent order, separated by a single blank line.

  • Keep resource modules simple—avoid unnecessary complexity.

  • Avoid hardcoding values—use variables or data sources instead.
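The ordering rules above can be sketched in a single resource block (the resource type and arguments are illustrative only):

```hcl
resource "aws_instance" "web" {
  count = var.instance_count          # count/for_each first, followed by a blank line

  ami           = data.aws_ami.ubuntu.id   # from a data source, not hardcoded
  instance_type = var.instance_type        # from a variable, not hardcoded

  tags = {                            # tags, depends_on, lifecycle always last
    Name = "web-${count.index}"
  }

  depends_on = [aws_internet_gateway.main]  # illustrative dependency

  lifecycle {
    create_before_destroy = true
  }
}
```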

Naming, Code Style, and File Formatting

Terraform code should be written in a way that other developers can easily understand. Consistent naming conventions improve code readability and maintainability.

Key Naming Rules

  • Use underscore (_) instead of hyphen (-) to separate multiple words in names.

  • Use only lowercase letters and numbers.

  • Use singular nouns for resource names.

  • Do not repeat the resource type in its name:

    • Good example: resource "aws_route_table" "public" {}

    • Bad example: resource "aws_route_table" "public_route_table" {}

  • Differentiate resources of the same type using descriptive names (e.g., primary, secondary, public, private).

Variable Management Rules

  • Declare all variables in the variables.tf file.

  • Use descriptive names that reflect the variable's purpose.

  • Provide meaningful descriptions for all variables—this helps generate module documentation and provides context for new developers.

  • Organize variable keys in the following order:

    • description

    • type

    • default

    • validation

  • Provide default values whenever possible:

    • Specify a default if a variable has environment-independent values (e.g., disk size).

    • If a variable has environment-specific values, do not set a default.

  • Use plural names for variables of type list(...) or map(...).

  • Prefer simple types (number, string, list(...), map(...), any) over object(), unless strict key constraints are needed.

  • For boolean variables, use positive names (e.g., enable_external_access).

  • For numeric input/local/output variables, include measurement units in the name (e.g., ram_size_gb).

  • Use binary unit prefixes (kibi, mebi, gibi) for storage sizes rather than decimal ones (kilo, mega, giga).
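Putting several of these rules together, a variable declaration might look like this (the name, default, and bounds are illustrative):

```hcl
variable "ram_size_gb" {
  description = "Amount of RAM per instance, in gibibytes."  # description first
  type        = number                                       # then type
  default     = 4                                            # then default

  validation {                                               # validation last
    condition     = var.ram_size_gb >= 1 && var.ram_size_gb <= 64
    error_message = "ram_size_gb must be between 1 and 64."
  }
}
```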

Output Value Rules

  • Organize all outputs in the outputs.tf file.

  • Output all useful values that other modules may need.

  • Provide meaningful descriptions for all output values.

  • Name outputs based on their contents, following this structure: {name}_{type}_{attribute}

  • Document outputs in README.md.

  • Use tools like terraform-docs to auto-generate documentation when committing code.
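Applying the {name}_{type}_{attribute} pattern, an output for the instance above might look like this (resource and attribute names are illustrative):

```hcl
# outputs.tf
output "web_instance_public_ip" {            # {name}_{type}_{attribute}
  description = "Public IP address of the web instance."
  value       = aws_instance.web[0].public_ip
}
```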

Using Built-in Formatting Tools

  • Use the terraform fmt command to format Terraform files according to its canonical style.

  • All Terraform files must comply with terraform fmt standards to ensure consistency.

Best Security Practices in Terraform

Terraform interacts with cloud infrastructure using sensitive data such as API keys. To protect your infrastructure, follow these security best practices:

Secure the Terraform State File

  • Never store the state file locally or in version control (e.g., Git). Instead, use Terraform Remote State (e.g., AWS S3, Azure Storage). The state file contains sensitive values in plain text, posing a security risk.

  • Add Terraform state files to .gitignore to prevent accidental commits.

  • Encrypt state files as an extra security measure.

  • Regularly back up state files in case of corruption or accidental loss.

  • Use one state file per environment (e.g., dev, staging, production).

Enable State Locking

Multiple developers running Terraform commands simultaneously can lead to state corruption and data loss.

To prevent conflicts, enable state locking, which ensures only one user modifies the state at a time.

Note that not all backends support built-in locking. Azure Blob Storage supports locking natively, while AWS S3 supports locking when used with DynamoDB.
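For example, an S3 backend with encryption at rest and DynamoDB-based locking could be configured in backend.tf like this (the bucket, key, region, and table names are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"       # placeholder bucket name
    key            = "prod/terraform.tfstate"   # one state file per environment
    region         = "us-east-1"
    encrypt        = true                       # encrypt state at rest
    dynamodb_table = "terraform-state-lock"     # enables state locking
  }
}
```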

Keep Secrets Out of the State File

Terraform stores secrets in plain text within the state file. Avoid storing secrets directly in Terraform configuration.

Instead, use secret management tools (AWS Secrets Manager, HashiCorp Vault, Azure Key Vault) and reference secrets using data sources rather than hardcoding them.
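As a sketch, a database password can be read from AWS Secrets Manager through a data source instead of being written into the configuration (the secret name and resource are placeholders, and other required arguments are omitted for brevity):

```hcl
data "aws_secretsmanager_secret_version" "db" {
  secret_id = "prod/db-password"   # placeholder secret name
}

resource "aws_db_instance" "main" {
  # ...engine, instance class, and storage arguments omitted for brevity...
  password = data.aws_secretsmanager_secret_version.db.secret_string
}
```

Note that the retrieved value still ends up in the state file, which is another reason to encrypt and restrict access to state.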

Minimize the Blast Radius

The blast radius refers to the potential impact of failure in your infrastructure. To reduce risk:

  • Deploy smaller, isolated infrastructure components.

  • Manage critical resources separately from less essential ones.

  • Implement least privilege access for Terraform execution roles.

Perform Security Audits

Run security checks after every terraform apply. Use security auditing tools like InSpec or Serverspec. These tools ensure that infrastructure remains in a secure state.

Use the sensitive Flag for Variables

Terraform configurations often contain sensitive inputs like passwords, API tokens, and private keys.

To prevent accidental exposure, mark variables as sensitive:

variable "db_password" {
  description = "Database admin password"
  type        = string
  sensitive   = true
}

Terraform will hide these values in the console and logs.

However, sensitive does not encrypt the data—other precautions are still necessary.

Use .tfvars Files for Variable Definitions

Instead of defining many variables manually, store them in .tfvars files:

db_password = "super_secret_password"

Pass the file during execution:

terraform apply -var-file="secrets.tfvars"

Terraform automatically loads terraform.tfvars and any *.auto.tfvars files in the working directory; all other .tfvars files must be passed explicitly with -var-file.

Keep secret-bearing .tfvars files out of version control (add them to .gitignore) and pass them locally with -var-file.

Using Modules in Terraform

Modules are designed for code reuse and help organize infrastructure as code effectively.

Use Common Modules

Use official Terraform modules whenever possible. There is no need to reinvent a module that already exists.

Each module should focus on a single aspect of infrastructure, such as creating database instances.

Tag Module Versions

Sometimes, breaking changes are required in modules. Use version tags so that users can lock their configurations to a specific version and avoid unexpected issues.
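Consumers can then pin their configuration to a released version. Shown with the public AWS VPC registry module as an example (the module inputs are illustrative):

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"   # lock to the 5.x series; review the changelog before a major upgrade

  name = "main"
  cidr = "10.0.0.0/16"
}
```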

Avoid Declaring Providers and Remote State in Shared Modules

Shared modules should not declare providers or remote state. Instead, define providers and remote state configuration in the root modules.

Expose Outputs for All Resources

Variables and outputs help define dependencies between modules and resources. Users cannot properly integrate your module into their Terraform configurations without output values.

Include at least one output that references each resource in a shared module.

Use Submodules for Complex Logic

Submodules help break down complex Terraform logic into smaller, reusable units. This approach reduces code duplication for shared resources.

  • Store submodules in modules/<module-name>/.

  • Consider modules as private unless the module documentation states otherwise.

Minimize Resources in Each Root Module

Avoid large root configurations that contain too many resources in a single directory and state file.

Smaller modules make infrastructure easier to manage and faster to deploy.

Additional Recommendations

Version Control

  • Store infrastructure code in version control just like application code to maintain history and allow easy rollbacks.

  • Follow a branching strategy such as GitFlow.

  • Use separate branches for environment-specific root configurations, if necessary.

Best Practices for Testing

Static Analysis

  • Validate configuration syntax and structure before deploying resources using linters and dry-run tools.

  • Use terraform validate and tools like tflint, config-lint, Checkov, Terrascan, tfsec, Deepsource.

Integration Testing

  • Test modules in isolation to ensure correctness.

  • Use testing frameworks like Terratest, Kitchen-Terraform, or InSpec.

Plan Before Applying

  • Always check the output of terraform validate and terraform plan before applying changes to an environment.

Keep Terraform Up to Date

Stay on the latest Terraform version whenever a major release occurs.

Run terraform -v to check for updates.

Protect Stateful Resources

Enable deletion protection for stateful resources like databases to prevent accidental removal.
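One way to guard a database is to combine a provider-level deletion flag with Terraform's own lifecycle block (the resource type and arguments are illustrative):

```hcl
resource "aws_db_instance" "main" {
  # ...engine and sizing arguments omitted for brevity...
  deletion_protection = true   # provider-side guard against API-level deletion

  lifecycle {
    prevent_destroy = true     # Terraform refuses to plan a destroy of this resource
  }
}
```

With prevent_destroy set, terraform destroy (or a plan that would replace the resource) fails with an error instead of deleting the database.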

Use the self Variable

The self variable helps when values are unknown before deployment.

Example: If you need the IP address of an instance, but it’s only available after deployment, you can use self to reference it dynamically.
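Concretely, inside a connection or provisioner block, self refers to the resource being created, so attributes that exist only after deployment can be referenced (the resource type and AMI lookup are illustrative):

```hcl
resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"

  provisioner "remote-exec" {
    connection {
      type = "ssh"
      user = "ubuntu"
      host = self.public_ip   # IP is known only after the instance exists
    }
    inline = ["echo connected"]
  }
}
```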

Use Terraform Workspaces

Workspaces allow managing multiple instances of the same Terraform configuration, each with its own state.

Useful for handling multiple environments (e.g., dev, staging, production) using the same Terraform codebase.

For example, here is how you could use workspaces to manage the dev and prod environments:

terraform workspace new dev
terraform apply
terraform workspace new prod
terraform apply
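Inside the configuration, the active workspace name is available as terraform.workspace, which can drive per-environment values (the lookup map and counts below are illustrative):

```hcl
locals {
  # Per-environment instance counts; illustrative values.
  instance_counts = {
    dev  = 1
    prod = 3
  }

  # Pick the count for the active workspace, defaulting to 1 for unknown workspaces.
  instance_count = lookup(local.instance_counts, terraform.workspace, 1)
}
```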
Terraform
27.03.2025
Reading time: 10 min

Similar

Terraform

Encrypting Secrets in HashiCorp Terraform

Encrypting private data in HashiCorp Terraform is an important aspect of working with this tool. There are methods that allow securely storing sensitive data in encrypted form and transferring it in a safe way when needed. In this article, we will look at how to securely store secrets in Terraform, and encryption methods that help improve security when working with cloud infrastructure.  Why You Should Not Store Sensitive Information as Plain Text Terraform users sometimes face the need to handle sensitive information—for example, API keys or a user’s login and password for a database. Here is an example of code for creating a database in Hostman (a more detailed guide on working with Terraform and Hostman is available on GitHub): resource "hm_db_mysql_8" "my_db" { name = "mysql_8_database" # Username and password login = <To be decided> password = <To be decided> preset_id = data.hm_db_preset.example-db-preset.id } To make this code work, the variables username and password must contain actual credentials. Our task is to properly handle these credentials and prevent their accidental disclosure. The simplest option is to immediately assign text values to these variables: resource "hm_db_mysql_8" "my_db" { name = "mysql_8_database" # Username and password login = "root" password = "admin" preset_id = data.hm_db_preset.example-db-preset.id } But this is a bad practice that harms the security of the whole system—even if you use a private Git repository to store the project. Anyone with access to the version control system would also have access to the secrets. Additionally, many tools that access the repository, such as Jenkins, CircleCI, or GitLab, keep a local copy of the repo before building the code. If sensitive information is stored as plain text, other programs on your computer may also access it, and therefore gain access to the secrets. In general, storing secrets as plain text greatly simplifies access to them, which could be exploited by attackers. 
This problem is relevant not only to Terraform but to any tool. Therefore, the main rule of secret encryption in Infrastructure as Code is: never store sensitive information as plain text. Risks of unprotected information: Compliance violations. Many industries work with sensitive information, such as personal data, which is regulated by law. Mishandling sensitive data can result in heavy fines and reputational damage. Unauthorized access to data. Improperly protected data may be accessed by people without the proper authorization. Security breaches. Exposed data can be used by attackers to steal, alter, or even fully compromise entire systems. The terraform.tfstate File Terraform has a flaw that reduces system security. Each time Terraform is used to deploy infrastructure, it saves a large amount of information about it, including database connection parameters, inside the terraform.tfstate file—in plain text. This file is stored in the same directory where the apply command is run. This means that even if you use one of the encryption methods described in this article, sensitive data will still appear in plain text in the state file. This problem has been known for several years, but there is still no universal solution. There are temporary fixes that remove secrets from state files, but they are not reliable and may break compatibility after updates. Therefore, at present, regardless of the encryption method, the most important aspect of data security is securing the state file. It is not recommended to keep it locally or in a repoÑŽ Instead, use storage systems that support encryption, such as Amazon S3 with access control. Environment Variables for Key Encryption Terraform supports reading environment variables, and this can be used for storing keys securely. 
First, create a file variables.tf with variables: variable "username" { description = "Username" type = string sensitive = true } variable "password" { description = "User password" type = string sensitive = true } The type parameter sets the variable type, while sensitive = true marks it as sensitive so its value won’t be shown in logs, including during plan and apply. Now replace credentials in your resources with variables: resource "hm_db_mysql_8" "my_db" { name = "mysql_8_database" # Username and password login = var.username password = var.password preset_id = data.hm_db_preset.example-db-preset.id } Then set the values via terminal. Prefix each variable name with TF_VAR_: export TF_VAR_username="root" export TF_VAR_password="admin" When you run terraform apply, Terraform will use these environment variable values. Important: Bash commands are saved in history. To prevent passwords and logins from being stored, set the environment variable HISTCONTROL=ignorespace. Then, any command starting with a space won’t be saved: export HISTCONTROL=ignorespace Encrypting Sensitive Data in Terraform with GPG Using environment variables prevents secrets from appearing in Terraform code, but it doesn’t fully solve the problem, it only shifts it from Terraform to the OS, where the data must also be protected. A popular approach is to store sensitive data in GPG-encrypted files. You can use plain GPG, or a password manager that uses the same logic—for example, Pass. Pass Password Manager In Pass, secrets are stored as GPG-encrypted files organized into a directory hierarchy. They can be copied between devices and managed with standard terminal commands. To install Pass on Ubuntu: sudo apt update sudo apt install pass You’ll also need GPG. Install and generate a key: sudo apt install gpg gpg --full-generate-key Choose the key type (e.g., “RSA and RSA”) and size, and provide details. 
Once generated, copy the GPG key and initialize Pass: pass init <GPG-key> Encrypt credentials: pass insert username pass insert password To access secrets from the command line, use the pass command and the secret’s name: pass username Enter the passphrase you specified when generating the GPG key, and the secret will be displayed in the console as plain text. Now, to use the encrypted data, set them as environment variables: export TF_VAR_username=$(pass username) export TF_VAR_password=$(pass password) Pros and cons of using environment variables for secrets: Pros Cons Secrets remain outside the code, which means they are not stored in the repository. The infrastructure is not fully described in Terraform code, which complicates its maintenance and reduces readability. Easy to use: you don’t need advanced qualifications to start working with it. Requires additional steps to work with this solution. Environment variables can be integrated with many password managers, such as in the example with Pass. Since all work with secrets takes place outside of Terraform, Terraform’s built-in security mechanisms do not apply to them. Suitable for test runs: dummy values can easily be set as environment variables.   Encrypting Secrets with HashiCorp Vault Vault is an open-source external storage system for sensitive information. Like Terraform, it was developed by HashiCorp. Vault allows you to implement centralized encrypted storage for secrets. Here are its key features: Storage of sensitive information. The core function of Vault is encryption and storage of secrets such as passwords, API keys, certificates, tokens, and other data. User authentication. To access secrets, a user or application must authenticate. The tool supports several methods: from the classic login-password combination to integration with other authentication systems. Access control. Vault allows fine-grained configuration of which users and applications can access certain secrets. 
This is achieved through access policies that define permissions for viewing and performing operations on secrets. Audit logging. The tool makes it possible to track who accessed the data in the storage and when. Installation Vault is an external storage system, and interaction with it is carried out over the network. Therefore, it can be installed either on a local device (accessed via localhost) or on a remote server. In this material, we’ll show how to install it on Ubuntu. Update package indexes and install GPG: sudo apt update && sudo apt install gpg Download the GPG key: wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list Install Vault: sudo apt update && sudo apt install vault To make sure the installation was successful, check the version of the software: vault version If installation was successful, the terminal will display the latest version of the key storage system. Configuration After installation, the tool must be configured. We will use it in server mode. Start it with: vault server -dev Important: here we are running the server in development mode. This means that in this mode it stores all data, including keys, in RAM. When the server restarts, all data is lost. Therefore, in production it is better to use the standard server mode. Development mode is suitable for learning purposes, like in this material. During execution, the terminal will display details of the process and, afterwards, the URL where the server is running and a token for authorization. You can even connect to it through a browser by entering the server’s URL in the address bar. For further work with the server in development mode, create an environment variable with its URL. 
If you installed Vault on your local machine, the command will look like this: export VAULT_ADDR='http://127.0.0.1:8200' Check the storage status as follows: vault status The command will return information about it: creation date, software version, and more. To store sensitive data, we’ll use a key-value store. First, enable it: vault secrets enable -path=db_data kv Then, add secrets into the storage: vault kv put db_data/secret_tf username=root password=admin You can check the result directly in your browser. Using HashiCorp Vault for Storing Secrets in Terraform In the main Terraform file, specify Vault as a provider: terraform { required_providers { vault = { source = "hashicorp/vault" version = "3.23.0" } hm = { source = "hostman-cloud/hostman" } } required_version = ">= 0.13" } In the Terraform configuration file, you also need to declare Vault as a provider: provider "vault" { address = "http://127.0.0.1:8200" token = "Vault Token" } Then, implement a method for reading data from the storage: data "vault_generic_secret" "secret_credentials" { path = "db_data/secret_tf" } Now you can use the retrieved secrets in a resource, for example when creating a MySQL database: resource "hm_db_mysql_8" "my_db" { name = "mysql_8_database" # Username and password login = data.vault_generic_secret.secret_credentials.data["username"] password = data.vault_generic_secret.secret_credentials.data["password"] preset_id = data.hm_db_preset.example-db-preset.id } How to Automate Secret Encryption in Terraform Secret encryption in Terraform can be automated to ensure a scalable and secure process for managing sensitive data. Below are some methods of automation: Scripts. They can be used to automatically pass secrets into Terraform. This can be implemented using GPG or OpenSSL. CI/CD tools. Many CI/CD tools, such as GitLab CI/CD or Jenkins, have built-in encryption that can be used automatically together with Terraform. 
Conclusion In this article, we examined the issue of handling sensitive information, discussed the risks of storing it in an unprotected form, explained why it is important to carefully protect the state file, and provided examples of encrypting secrets in Terraform using HashiCorp Vault, environment variables, and GPG.
25 August 2025 · 10 min to read
Terraform

How to Deploy Jenkins CI/CD Pipelines with Terraform

Modern software development requires fast and high-quality delivery of new features and fixes. CI/CD (Continuous Integration and Delivery) automation allows you to: Reduce time-to-market: Developers can quickly validate code and deliver it to the production environment. Improve code quality: Automated testing detects errors at early stages, reducing the cost of fixing them. Increase reliability and stability: CI/CD pipelines minimize the human factor, ensuring process consistency. Ensure flexibility and scalability: It becomes easier to adapt to changing requirements and increase workloads. Benefits of Using Jenkins and Terraform Together Jenkins and Terraform are a powerful combination of tools for CI/CD automation: Jenkins: Provides flexible pipeline configuration options. Integrates with many tools and version control systems (Git, GitHub, Bitbucket). Supports pipeline customization through Jenkinsfile. Terraform: Simplifies infrastructure creation and management through declarative code. Supports multi-cloud environments, making it a universal deployment solution. Allows easy tracking of infrastructure changes through its state management system. By using Jenkins and Terraform together, you can: Automate infrastructure deployment: Terraform creates the environment for running the application. Optimize CI/CD pipelines: Jenkins handles build, test, and deploy using the resources provisioned by Terraform. Reduce time and effort in infrastructure management: The entire process becomes manageable through code (IaC). Deployment Registration and Initial Setup in Hostman Go to the Hostman website and create an account. Create a new project to separate CI/CD resources from other projects. Configure access keys. In the API section, create an API key for working with Terraform. Save the key in a secure place, as it will be needed for configuration. Installing and Configuring Terraform Download Terraform from the HashiCorp website. 
Install it on your system: For Windows: add the path to the Terraform binary to environment variables. For macOS/Linux: move the terraform file to /usr/local/bin/. Verify installation: terraform --version Creating Hostman Infrastructure Next, we will use Terraform to create the necessary infrastructure. We recommend using at least 2 CPUs, 4 GB RAM for a basic Jenkins setup, and 15–40 GB disk for temporary files and artifacts. Create the configuration file provider.tf: terraform { required_providers { hm = { source = "hostman-cloud/hostman" } } required_version = ">= 0.13" } provider "hm" { token = "your_API_key" } Describe infrastructure for Jenkins deployment in main.tf: data "hm_configurator" "example_configurator" { location = "us-2" } data "hm_os" "example_os" { name = "ubuntu" version = "22.04" } resource "hm_ssh_key" "jenkins_key" { name = "jenkins-ssh-key" body = file("~/.ssh/id_rsa.pub") } resource "hm_vpc" "jenkins_vpc" { name = "jenkins-vpc" description = "VPC for Jenkins infrastructure" subnet_v4 = "192.168.0.0/24" location = "us-2" } resource "hm_server" "jenkins_server" { name = "Jenkins-Server" os_id = data.hm_os.example_os.id ssh_keys_ids = [hm_ssh_key.jenkins_key.id] configuration { configurator_id = data.hm_configurator.example_configurator.id disk = 15360 cpu = 2 ram = 4096 } local_network { id = hm_vpc.jenkins_vpc.id ip = "192.168.0.10" # Static IP within the VPC subnet mode = "dnat_and_snat" } connection { type = "ssh" user = "root" private_key = file("~/.ssh/id_rsa") host = self.networks[0].ips[0].ip # Correct way to get IP timeout = "10m" } provisioner "remote-exec" { inline = [ "apt-get update -y", "apt-get install -y ca-certificates software-properties-common", "add-apt-repository -y ppa:openjdk-r/ppa", "apt-get update -y", "apt-get install -y openjdk-17-jdk", "curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null", "echo deb 
[signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/ | tee /etc/apt/sources.list.d/jenkins.list > /dev/null", "apt-get update -y", "apt-get install -y jenkins", "update-alternatives --set java /usr/lib/jvm/java-17-openjdk-amd64/bin/java", "sed -i 's|JAVA_HOME=.*|JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64|' /etc/default/jenkins", "systemctl daemon-reload", "systemctl start jenkins", "ufw allow 8080" ] } } output "jenkins_url" { value = "http://${hm_server.jenkins_server.networks[0].ips[0].ip}:8080" } Install the Hostman Terraform Provider: terraform init Check the connection to Hostman: terraform plan After successful initialization, you should see the following message: Apply the changes: terraform apply Confirm execution by typing yes. When you go to your Hostman project, you will see a new server being created. Take note of the resources—they match exactly what we specified in the main.tf configuration. After the server is created, go to the IP address with port 8080, where Jenkins should now be running. This address is also visible in the control panel, on the server Dashboard. You will see Jenkins’ initial setup screen: Configuration Breakdown In this section, we will go through each part of the code in detail. Data Sources (Fetching Infrastructure Information) Fetching the configurator ID for the selected region. Needed to define available hardware configurations (CPU/RAM/Disk): data "hm_configurator" "example_configurator" { location = "us-2" } Fetching the OS image ID for Ubuntu 22.04. 
data "hm_os" "example_os" { name = "ubuntu" version = "22.04" } Resources (Creating Infrastructure) Creating an SSH key for server access: resource "hm_ssh_key" "jenkins_key" {   name = "jenkins-ssh-key"   body = file("~/.ssh/id_rsa.pub") } Creating a private network (VPC) for Jenkins isolation: resource "hm_vpc" "jenkins_vpc" {   name        = "jenkins-vpc"   description = "VPC for Jenkins infrastructure"   subnet_v4 = "192.168.0.0/24"   location = "us-2" } Server Configuration Main server parameters: name: Server name os_id: OS image ID ssh_keys_ids: SSH key binding resource "hm_server" "jenkins_server" {   name     = "Jenkins-Server"   os_id    = data.hm_os.example_os.id   ssh_keys_ids = [hm_ssh_key.jenkins_key.id] Server characteristics: configurator_id: Link to configurator disk: Disk size (15 GB) cpu: Number of cores ram: Memory size (4 GB) configuration {   configurator_id = data.hm_configurator.example_configurator.id   disk = 15360    cpu  = 2        ram  = 4096   } Private network settings: id: ID of created VPC ip: Static IP within subnet mode: NAT mode local_network {   id = hm_vpc.jenkins_vpc.id   ip = "192.168.0.10"   mode = "dnat_and_snat" } Connection Settings SSH connection parameters for provisioning: connection {   type        = "ssh"   user        = "root"   private_key = file("~/.ssh/id_rsa")   host        = self.networks[0].ips[0].ip   timeout     = "10m" } Provisioning (Software Installation) provisioner "remote-exec" { inline = [ # Package update "apt-get update -y", # Installing dependencies "apt-get install -y ca-certificates software-properties-common", # Adding Java repository "add-apt-repository -y ppa:openjdk-r/ppa", # Installing Java 17 "apt-get install -y openjdk-17-jdk", # Adding Jenkins repository "curl -fsSL ...", "echo deb ...", # Installing Jenkins "apt-get install -y jenkins", # Java configuration "update-alternatives --set java ...", "sed -i 's|JAVA_HOME=.*|...|' /etc/default/jenkins", # Starting service "systemctl 
daemon-reload", "systemctl start jenkins", # Opening port "ufw allow 8080" ] } Output Information Displaying the URL for accessing Jenkins: output "jenkins_url" {   value = "http://${hm_server.jenkins_server.networks[0].ips[0].ip}:8080" } Potential Issues with Jenkins and Terraform Secret and Confidential Data Management Problem: Storing API keys, SSH keys, and other secrets may be unsafe if added as plain text. Solution: Use secret managers such as HashiCorp Vault, or Jenkins integration with credential management systems. In Terraform, specify secrets via environment variables or remote storage. Terraform State Management Issues Problem: The state file (terraform.tfstate) may be corrupted or overwritten if multiple users access it simultaneously. Solution: Configure a remote backend (e.g., object storage) to store the state. Enable state locking to prevent simultaneous command execution. Complexity of Terraform and Jenkins Integration Problem: Integration can be difficult for beginners due to differences in configuration and infrastructure management. Solution: Use the Terraform plugin for Jenkins to simplify setup. Create detailed documentation and use Jenkinsfile templates for repeatable processes. Jenkins and Terraform Updates Problem: New versions of Jenkins and Terraform may be incompatible with existing configurations. Solution: Test updates in local or test environments. Regularly update Jenkins plugins to maintain compatibility. Hostman Limitations and Workarounds Hostman API Limitations Problem: Not all actions or resources may be available through the API. Solution: Use a combination of Terraform and the Hostman interface. Leave operations unavailable via API for manual execution or scripts in Python/Go. Resource Limitations Problem: Limits on the number of servers, storage, or networking resources may cause scaling delays. Solution: Optimize existing resource usage. Plan workload ahead, increasing limits through Hostman support. 
Network Infrastructure Scalability

Problem: deploying large applications may be difficult due to complex network configurations (e.g., VPCs).
Solution: break the infrastructure into Terraform modules and use pre-built network architecture templates.

Lack of Deep Integrations

Problem: Hostman may not provide full integrations with external services such as GitHub Actions or third-party monitoring.
Solution: configure webhook notifications to integrate with Jenkins or other CI/CD systems, and use the external APIs of third-party services for custom integrations.

Debugging Recommendations

Terraform Debugging

Run terraform plan to validate the configuration before applying changes.
For detailed analysis, enable debug logging via the TF_LOG environment variable: TF_LOG=DEBUG terraform apply
Verify provider settings (e.g., the token and API keys).

Jenkins Pipeline Debugging

Enable verbose logging in pipeline build steps where the tools support it (for example, TF_LOG for Terraform and debug flags for Docker builds).
Configure artifact archiving in Jenkins to preserve error logs.
Split Jenkinsfile steps to isolate problematic stages.

Monitoring and Notifications

Install monitoring plugins in Jenkins (e.g., Slack or Email Extension) for failure notifications.
Use external monitoring (e.g., Zabbix or Prometheus) to track server status in Hostman.

Backups of Terraform State and Jenkins Data

Configure regular backups of the state file (terraform.tfstate) to remote storage.
Back up Jenkins configuration and job data.

Troubleshooting Common Errors

Jenkins server connection error: check SSH keys and firewall rules.
Terraform resource creation error: verify API key validity and available quotas.
Pipeline hang: ensure Docker containers are correctly pulled and running.

Efficient work with Jenkins, Terraform, and Hostman requires awareness of possible issues and a systematic approach to resolving them. Use the recommendations above to prevent errors, optimize resources, and simplify debugging of your CI/CD processes.
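The modular approach recommended for complex network configurations could look roughly like this — the module path, variable names, and internals are illustrative, not part of the Hostman provider:

```hcl
# environments/dev/main.tf -- hypothetical call to a reusable network module
module "jenkins_network" {
  source = "../../modules/network"

  vpc_name    = "jenkins-vpc"
  subnet_cidr = "192.168.0.0/24"
  location    = "us-2"
}

# modules/network/outputs.tf -- expose the VPC ID to callers
output "vpc_id" {
  value = hm_vpc.this.id
}
```

Callers can then reference module.jenkins_network.vpc_id instead of hard-coding resource IDs, which keeps each environment's configuration small and consistent.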
Recommendations for Beginners and Professionals

For Beginners:

Start simple: learn the basic Terraform commands (init, plan, apply) and configure Jenkins using pre-built images and plugins.
Use documentation and templates: explore the Hostman API docs and Terraform module examples, and use Jenkinsfile templates to learn pipeline writing faster.
Test locally: run a local Jenkins instance and test small Terraform configurations before moving to the cloud.
Experiment: start with small projects to understand CI/CD and infrastructure management principles, and reach out to the Hostman community or support if issues arise.

For Professionals:

Optimize processes: create custom Terraform modules for reuse across projects, and configure Jenkins for parallel builds to reduce pipeline execution time.
Focus on security: store keys and sensitive data securely using Vault or Jenkins' built-in tools, and protect terraform.tfstate by keeping it in remote storage accessed over HTTPS.
Develop a multi-cloud approach: integrate Hostman with other platforms to create backup pipelines for critical applications, and use Terraform to manage cross-cloud infrastructure.
Adopt new tools: explore containerization and orchestration with Docker and Kubernetes, and integrate Jenkins with monitoring systems such as Prometheus or Grafana for performance analysis.

Jenkins and Terraform combined with Hostman open up broad opportunities for developers and DevOps engineers. Beginners can pick up these tools by starting with simple projects, while professionals can build complex CI/CD pipelines with multi-cloud infrastructure. This approach not only accelerates development but also helps create a scalable, secure, and fault-tolerant environment for modern applications.
Resources for Learning

Terraform Documentation: Terraform Official Website; Terraform Module Registry
Jenkins Guides: Official Jenkins Documentation; Jenkins Plugins; the article "How to Automate Jenkins Setup with Docker"
Hostman: Hostman API Documentation; Hostman Terraform Provider Documentation; Hostman user guides
Useful Tools: VS Code for working with Terraform and Jenkins; HashiCorp Vault for secure token storage
Communities: DevOps on Reddit; Stack Overflow: Terraform
Using Variables in Terraform: Guide and Examples

Terraform is a popular tool among DevOps engineers and system administrators, primarily designed for creating and managing cloud infrastructure. Its main strength is the ability to automate every process related to infrastructure deployment.

Terraform describes infrastructure with a set of core elements: providers, resources, data sources, modules, expressions, and variables. We have already touched on variables in our article on Managing Private IP Addresses with Terraform, where we discussed their use in configurations.

Variables in Terraform are special elements that let users store and pass values into different parts of a module without modifying the main configuration file. They make infrastructure settings and parameters flexible, which simplifies configuration and maintenance.

In this guide, we will focus on Terraform variables and explain how to use them in your configuration.

Declaring Variables

You can think of variables as containers in which users store information (such as the deployment region, instance types, passwords, or access keys). You define their values once, via CLI parameters or environment variables, and can then use them throughout your configuration.

To use Terraform variables, you first need to declare them. This is usually done in the variables.tf file using a variable block. The declaration syntax looks like this:

variable "variable_name" {
  list_of_arguments
}

Each variable must have a unique name. The name is used to assign a value from outside and to reference the variable within a module. It can be almost anything, but it must not conflict with meta-arguments such as version, providers, or locals.

Arguments are optional, but worth using, since they let you set additional parameters. The main arguments are:

type — restricts the type of data the variable accepts. We will cover the possible types in the section "Variable Type Restrictions".
description — adds a description explaining the purpose and usage of the variable.
default — sets a default value for the variable.
validation — defines custom validation rules.
sensitive — marks the variable as confidential in output.
nullable — accepts true or false and specifies whether the variable can take a null value.

We'll go over some of these arguments in detail in the next sections.

Variable Type Restrictions

As mentioned above, you can restrict the type of data a variable accepts using the type argument. Terraform supports the following types:

number — numeric values (integers, floats, etc.)
string — a Unicode string for storing text
bool — Boolean values (true or false)
map or object — key-value pairs enclosed in curly braces {}
list or tuple — ordered sequences of values enclosed in square brackets []

Example of specifying a variable type:

variable "region" {
  type = string
}

Variable Description

Since a module's input variables are part of its user interface, you can briefly describe their purpose with the optional description argument. Example:

variable "region" {
  type        = string
  description = "Specifies the server region"
}

Descriptions help developers and other users better understand the role of a variable and the type of values it expects.

Custom Validation Rules

In Terraform, you can define custom validation rules for a variable using the validation block. Each validation must contain two required arguments:

condition — an expression that returns true if the value is valid and false otherwise
error_message — the message displayed to the user if condition returns false
Example:

variable "email" {
  type        = string
  description = "Email address"

  validation {
    condition     = can(regex("^\\S+@\\S+\\.\\S+$", var.email))
    error_message = "Invalid email address format"
  }
}

In this example, we validate the email variable against a regular expression for correct email formatting. If validation fails, the user sees the message "Invalid email address format."

Variable Confidentiality

When the sensitive argument is set, Terraform treats the variable in a special way to prevent accidental exposure of sensitive data in plan or apply output. Example:

variable "user" {
  type = object({
    name = string
    role = string
  })
  sensitive = true
}

resource "example_resource" "example1" {
  name = var.user.name
  role = var.user.role
}

Any resources or other Terraform elements associated with a sensitive variable also become sensitive, so their values are hidden in the output.

Assigning Values to Root Module Variables

After declaring variables in the root module, you can assign values to them in several ways.

Command Line

You can pass values with the -var parameter when running terraform plan or terraform apply. Example:

terraform apply -var="variable1=value1" -var="variable2=value2"

There is no limit on how many -var parameters you can use in one command.

Variable Definition Files

You can also specify variable values in a file whose name ends in .tfvars or .tfvars.json. Example .tfvars file:

variable1 = "value1"
variable2 = "value2"

Apply it with:

terraform apply -var-file="filename.tfvars"

Environment Variables

Another method is to use environment variables with the TF_VAR_ prefix:

export TF_VAR_variable1=value1
export TF_VAR_variable2=value2
terraform apply

Conclusion

In this guide, we explored Terraform variables, their declaration syntax, the main arguments they support, and the methods for assigning them values.
Correct use of variables will help you create a more flexible and secure infrastructure with Terraform.
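Putting these pieces together, a single declaration in variables.tf can combine type, description, default, and validation — the variable name and allowed values below are only an illustration:

```hcl
variable "environment" {
  type        = string
  description = "Deployment environment"
  default     = "dev"

  validation {
    # contains() returns true only if the value is one of the listed options.
    condition     = contains(["dev", "qa", "stage", "prod"], var.environment)
    error_message = "environment must be one of: dev, qa, stage, prod."
  }
}
```

Elsewhere in the configuration it is referenced as var.environment, and the default can be overridden with -var, a .tfvars file, or the TF_VAR_environment environment variable.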
