
Adding a Load Balancer Rule Using Terraform

Updated on 25 August 2025

Terraform is an infrastructure-as-code tool created by HashiCorp. It allows you to automate the creation and modification of infrastructure resources using the declarative configuration language HCL.

Terraform capabilities:

  • Managing infrastructure with a single tool
  • Version control and reusable configurations
  • Ability to track the current state of infrastructure

In this guide, we will look at how to manage load balancer rules using Terraform.

Why Do You Need a Load Balancer?

A network load balancer allows you to evenly distribute incoming traffic across multiple servers to improve the availability and reliability of your service. It is an indispensable tool when implementing horizontal scaling of services.

Horizontal scaling is the process of adding additional nodes or machines to an existing infrastructure to increase its capacity and handle a larger volume of traffic.

Key benefits provided by a load balancer:

  • Load balancing for TCP sessions to ensure optimal server resource usage
  • Support for various protocols: HTTP, HTTP2, HTTPS, TCP
  • Application server fault tolerance to prevent service downtime and maintain availability
  • Ability to configure traffic routing rules to direct requests to specific server ports
  • Support for SSL configuration and redirection to HTTPS

Creating a Network Load Balancer

First, we define the requirements for the load balancer:

  • We have two servers, each hosting a website on port 80
  • When accessing the load balancer on port 80, requests should be routed to one of our servers
  • Algorithm: Round Robin
  • No additional parameters
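To make the Round Robin choice concrete, here is a minimal, provider-independent sketch of how a balancer cycles through its backends. The server addresses are illustrative; the actual rotation is handled by the load balancer itself:

```python
from itertools import cycle

# Illustrative backend pool; the real balancer holds your servers' IPs.
backends = ["203.0.113.10:80", "203.0.113.20:80"]
rotation = cycle(backends)

def next_backend() -> str:
    """Return the next backend in round-robin order."""
    return next(rotation)

# Four consecutive "requests" alternate between the two servers.
for _ in range(4):
    print(next_backend())
```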

Before we can add a rule, we need to create the load balancer itself and configure balancing across our servers. For this guide, we prepared two servers in advance.

Next, we create a Terraform project. Refer to the provider's setup guide for details on installing and configuring the Hostman provider.

First, create a folder where the configurations will be stored:

mkdir hostman-lb
cd hostman-lb

The file structure looks like this:

├── hostman-lb
│   ├── main.tf
│   ├── variables.tf

In the variables.tf file, we specify variables for the provider token and IP addresses for load balancing:

variable "hm_token" {
  type      = string
  sensitive = true
}

variable "lb-ips" {
  type = set(string)
}

In the main.tf file, we add the provider information:

terraform {
  required_providers {
    hm = {
      source = "hostman-cloud/hostman"
    }
  }
  required_version = ">= 0.13"
}


provider "hm" {
  token = var.hm_token
}

When describing a Terraform configuration, we use two main types of entities: data sources (data) and resources.

  • Data sources (data) read information about objects that already exist on the provider's side, such as tariff presets and projects.
  • Resources are the infrastructure elements Terraform creates and manages, such as servers, databases, and load balancers.

In main.tf, we select a load balancer preset by filtering on performance and price:

data "hm_lb_preset" "lb-preset" {
  requests_per_second = "10K"
  price_filter {
    from = 100
    to = 200
  }
}

If you use projects in the control panel, you can specify the project where the resources will be created. For example, we add a project named "Docs":

data "hm_projects" "docs" {
  name = "Docs"
}

The resource we will create is called hm_lb. When creating it, you can specify many optional parameters, such as health check settings and load balancing algorithm. We'll take the simplest setup:

resource "hm_lb" "load-balancer" {
  name = "load-balancer"
  algo = "roundrobin"

  project_id = data.hm_projects.docs.id
  preset_id  = data.hm_lb_preset.lb-preset.id

  health_check {
    proto = "tcp"
    port  = 80
  }

  ips = var.lb-ips
}
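Optionally, you can expose attributes of the created balancer so they are printed after terraform apply. The snippet below is a hypothetical outputs.tf; the ip attribute name is an assumption, so check the provider documentation for the exact attribute exported by hm_lb:

```hcl
# outputs.tf — hypothetical; the "ip" attribute name is an assumption.
output "lb_ip" {
  value = hm_lb.load-balancer.ip
}
```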

Useful parameters of the hm_lb resource include:

  • ips — list of IP addresses for balancing
  • is_keepalive — flag indicating whether to maintain TCP connection with the server
  • is_ssl — automatic redirect to HTTPS
  • is_sticky — maintain user session on a single backend server

Add a .tfvars file to the project with values for the variables declared in variables.tf:

hm_token = "<insert API key for provider here>"
lb-ips = [ "<IP for first server>", "<IP for second server>" ]

Next, initialize the project and check the configuration:

terraform init
terraform validate

If everything is correct, you should see a success message.

To see which resources will be created, run:

terraform plan -var-file=.tfvars

The -var-file=.tfvars flag allows Terraform to use the variable values defined in that file.

Variables can also be defined in other ways:

  • Using the -var flag, e.g., -var="hm_token=243453452345235456643"
  • Using environment variables:
export TF_VAR_hm_token=243453452345235456643

Adding Rules to the Load Balancer

A rule in the load balancer refers to port forwarding configuration. For any rule, you must specify:

  • The port clients access on the load balancer
  • The port on the target servers to which traffic will be forwarded

Add a new resource to the configuration in main.tf:

resource "hm_lb_rule" "lb-rule" {
  lb_id          = hm_lb.load-balancer.id
  balancer_proto = "http"
  balancer_port  = 80
  server_proto   = "http"
  server_port    = 80
}

Run the commands again:

terraform validate
terraform plan -var-file=.tfvars
terraform apply -var-file=.tfvars

Confirm by typing yes.

In the control panel, you will see the created resources:

  • Load Balancer
  • Rule

Open the load balancer's address in a browser and refresh the page: with Round Robin, consecutive requests alternate between the first and second servers specified in the variables.
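The same check can be made from the command line; the address below is a placeholder for your load balancer's IP:

```shell
# Two consecutive requests; with Round Robin they should hit different servers.
curl http://<load balancer IP>/
curl http://<load balancer IP>/
```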

If a second rule is needed, for example, a new application runs on port 81, simply add another resource block:

resource "hm_lb_rule" "lb-second-rule" {
  lb_id          = hm_lb.load-balancer.id
  balancer_proto = "http"
  balancer_port  = 81
  server_proto   = "http"
  server_port    = 81
}

Run:

terraform plan -var-file=.tfvars

You will see that one load balancer and one rule already exist, and a new rule will be added.
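With the first rule already applied, only the new rule appears in the plan. The summary line of the (abridged) output looks like this:

```
Plan: 1 to add, 0 to change, 0 to destroy.
```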

Then run:

terraform apply -var-file=.tfvars

Agree to apply the configuration when prompted.

Check the control panel: two rules are now configured. Visit the load balancer address to verify that everything works correctly.

Conclusion

In this guide, we learned how to create network load balancers and rules using Terraform. Besides Terraform, load balancers can also be managed via API.

