Terraform is an infrastructure-as-code tool created by HashiCorp. It allows you to automate the creation and modification of infrastructure resources using the declarative configuration language HCL.
In this guide, we will look at how to manage load balancer rules using Terraform.
A network load balancer allows you to evenly distribute incoming traffic across multiple servers to improve the availability and reliability of your service. It is an indispensable tool when implementing horizontal scaling of services.
Horizontal scaling is the process of adding additional nodes or machines to an existing infrastructure to increase its capacity and handle a larger volume of traffic.
Key benefits provided by a load balancer include even traffic distribution across backends, higher availability (failed servers are taken out of rotation), and easier horizontal scaling.
First, we define the requirements for the load balancer; we will use them later when choosing a preset.
Before we can add a rule, we need to create the load balancer itself and configure balancing across servers. For this guide, we prepared two servers to balance traffic between.
Next, we create a Terraform project. Check this guide for more details on installing and configuring the provider.
First, create a folder where the configurations will be stored:
mkdir hostman-lb
cd hostman-lb
The file structure looks like this:
├── hostman-lb
│   ├── main.tf
│   └── variables.tf
In the variables.tf file, we specify variables for the provider token and the IP addresses used for load balancing:
variable "hm_token" {
  type      = string
  sensitive = true
}

variable "lb-ips" {
  type = set(string)
}
In the main.tf file, we add the provider information:
terraform {
  required_providers {
    hm = {
      source = "hostman-cloud/hostman"
    }
  }
  required_version = ">= 0.13"
}

provider "hm" {
  token = var.hm_token
}
When describing a Terraform configuration, we use two main types of entities: data sources (data) and resources (resource).
In main.tf, we specify a preset for the load balancer configuration:
data "hm_lb_preset" "lb-preset" {
  requests_per_second = "10K"

  price_filter {
    from = 100
    to   = 200
  }
}
If you use projects in the control panel, you can specify the project where the resources will be created. For example, we add a project named "Docs":
data "hm_projects" "docs" {
  name = "Docs"
}
The resource we will create is called hm_lb. When creating it, you can specify many optional parameters, such as health check settings and the load balancing algorithm. We'll use the simplest setup:
resource "hm_lb" "load-balancer" {
  name       = "load-balancer"
  algo       = "roundrobin"
  project_id = data.hm_projects.docs.id
  preset_id  = data.hm_lb_preset.lb-preset.id

  health_check {
    proto = "tcp"
    port  = 80
  }

  ips = var.lb-ips
}
Other parameters we can specify include:

- ips — list of IP addresses for balancing
- is_keepalive — flag indicating whether to maintain the TCP connection with the server
- is_ssl — automatic redirect to HTTPS
- is_sticky — keep a user session on a single backend server

We also added a .tfvars file to the project with values for the variables from variables.tf:
hm_token = "<insert API key for provider here>"
lb-ips = [ "<IP for first server>", "<IP for second server>" ]
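Optionally, you can expose the balancer's address after terraform apply with an output block. This is a sketch, not part of the original setup: the attribute name ip is an assumption, so check the hostman-cloud provider documentation for the exact attributes the hm_lb resource exports.

```hcl
# Hypothetical output: assumes the hm_lb resource exports an "ip" attribute.
output "lb_ip" {
  value = hm_lb.load-balancer.ip
}
```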
Next, execute:
terraform validate
If everything is correct, Terraform prints a success message ("Success! The configuration is valid.").
To see which resources will be created, run:
terraform plan -var-file=.tfvars
The -var-file=.tfvars flag tells Terraform to use the variable values defined in that file.
Variables can also be defined in other ways:

- With the -var flag, e.g., terraform plan -var="hm_token=243453452345235456643"
- Through an environment variable: export TF_VAR_hm_token=243453452345235456643
A rule in the load balancer is a port forwarding configuration. For any rule, you must specify the protocol and port on which the balancer accepts traffic, and the protocol and port on which the backend servers receive it.
Add a new resource to the configuration in main.tf:
resource "hm_lb_rule" "lb-rule" {
  lb_id          = hm_lb.load-balancer.id
  balancer_proto = "http"
  balancer_port  = 80
  server_proto   = "http"
  server_port    = 80
}
Run the commands again:
terraform validate
terraform plan -var-file=.tfvars
terraform apply -var-file=.tfvars
Confirm by typing yes.
In the control panel, you will see the created resources:
Visit the load balancer address (twice, to reach different servers).
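You can do the same check from the command line. With round-robin balancing, two consecutive requests should be served by different backends; the address below is a placeholder for your balancer's IP, not a real endpoint:

```shell
# Replace 203.0.113.10 with your load balancer's address.
curl http://203.0.113.10/
curl http://203.0.113.10/
```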
If a second rule is needed, for example, a new application runs on port 81, simply add another resource block:
resource "hm_lb_rule" "lb-second-rule" {
  lb_id          = hm_lb.load-balancer.id
  balancer_proto = "http"
  balancer_port  = 81
  server_proto   = "http"
  server_port    = 81
}
Run:
terraform plan -var-file=.tfvars
You will see that one load balancer and one rule already exist, and a new rule will be added.
Then run:
terraform apply -var-file=.tfvars
Agree to apply the configuration when prompted.
Check the control panel—now two rules are configured. Visit the load balancer address to verify everything works correctly.
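If you expect many similar rules, you can generate them with Terraform's for_each meta-argument instead of repeating resource blocks. This is a sketch, not part of the original guide; the port set is illustrative:

```hcl
# One hm_lb_rule per port; here HTTP traffic on ports 80 and 81 is
# forwarded to the same port on the backend servers.
resource "hm_lb_rule" "lb-rules" {
  for_each = toset(["80", "81"])

  lb_id          = hm_lb.load-balancer.id
  balancer_proto = "http"
  balancer_port  = tonumber(each.value)
  server_proto   = "http"
  server_port    = tonumber(each.value)
}
```

Note that for_each requires a set of strings or a map, which is why the ports are quoted and converted back with tonumber.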
In this guide, we learned how to create network load balancers and rules using Terraform. Besides Terraform, load balancers can also be managed via API.