Terraform is one of the most effective tools for working with the IaC (Infrastructure as Code) model. This open-source software makes it much easier to deploy and manage infrastructure both in local environments and in the cloud.
In this article, we will look at how to increase the size of a virtual machine’s boot disk using Terraform, in different ways and across different environments.
Terraform offers a number of tools for working with virtual machine disks. However, before you can increase a disk size, you first need to create it.
When creating a virtual machine, you can immediately specify the disk size using the size parameter. For example, to create a disk with a size of 100 GB, add the line size = 1024 * 100 to your Terraform configuration.
The configuration file has a .tf extension and is located in the root directory. For example, if you want to create a virtual machine in Hostman and set its hard disk size to 100 GB, then in the .tf configuration file you need to create an hm_server resource (for an additional disk, you would create hm_server_disk) and specify this parameter in its configuration block.
Example for a server and its system disk:
resource "hm_server" "my-server" {
name = "My Server Disk"
os_id = data.hm_os.os.id
configuration {
configurator_id = data.hm_configurator.configurator.id
size = 1024 * 100
}
}
In this example, we set the disk size to 100 GB. The size is always specified in megabytes, with increments of 5120 MB (5 GB). Naturally, you can adjust parameter values according to your project’s needs.
To attach the disk to a virtual machine, use the source_server_id parameter in the disk block:
resource "hm_server_disk" "my-server-disk" {
disk {
size = 1024 * 10
source_server_id = hm_server.my-server.id
}
}
Note that the id value will be assigned to the resource automatically after it is successfully created.
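If you want to see that identifier once the resource exists, you can expose it with a standard Terraform output block (a minimal sketch based on the example above; the output name disk_id is arbitrary):

output "disk_id" {
  value = hm_server_disk.my-server-disk.id
}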
Changing the size of a Hostman disk in Terraform is straightforward.
Here’s an example for our newly created system disk:
configuration {
  configurator_id = data.hm_configurator.configurator.id
  size            = 1024 * 200
}
Now the disk size will be 200 GB, and after rebooting, the filesystem will also expand.
The main thing is not to forget to apply the updated Terraform configuration using:
terraform apply
so that the changes are propagated to your infrastructure.
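If you want to preview the resize before applying it, terraform plan will show the pending change without modifying anything:

terraform plan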
You can only increase disk sizes (not decrease), but you can also delete existing disks and add new ones.
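For example, adding another additional disk is simply another hm_server_disk resource block (a sketch based on the example above; the resource name my-second-disk and the 20 GB size are illustrative):

resource "hm_server_disk" "my-second-disk" {
  disk {
    size             = 1024 * 20
    source_server_id = hm_server.my-server.id
  }
}

Removing a disk works the other way around: delete its resource block and run terraform apply again.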
Google Cloud Platform (GCP) is a cloud platform offering many tools and services for developing, deploying, and managing applications in the cloud.
The instructions given above will work for GCP as well; you mainly need to replace hm_server with google_compute_disk in the first line of the resource (the disk arguments themselves also differ slightly, as the example below shows).
Example:
resource "google_compute" "mydisk" {
name = "my_new_VM"
type = "ssd"
size = 100
}
However, GCP also allows you to create a disk via Terraform whose size can be changed later without having to create a new disk and copy data from the old one.
You can do this by adding the following lines:
resource "google_compute" "mydisk" {
name = "my_new_VM"
image = data.google_compute_image.my_image.self_link
}
Here we added a line with the image data and defined the disk as a standalone resource. This allows the disk's parameters to be changed in place, rather than describing the disk through an instance's initialization parameters, which are meant for creating (and recreating) the disk rather than modifying it.
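The my_image data source referenced above is not defined in this example; a minimal sketch of what it could look like (the Debian image family here is purely illustrative) is:

data "google_compute_image" "my_image" {
  family  = "debian-12"
  project = "debian-cloud"
}

A standalone disk like this can then be attached to an instance with a separate google_compute_attached_disk resource, so that the size can later be changed on the disk resource without touching the instance definition (a sketch; google_compute_instance.my_vm is assumed to be defined elsewhere in your configuration):

resource "google_compute_attached_disk" "mydisk_attachment" {
  disk     = google_compute_disk.mydisk.id
  instance = google_compute_instance.my_vm.id
}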
EBS (Amazon Elastic Block Store) is a data storage service in Amazon Web Services (AWS). It provides block storage that can be used for virtual machines and data storage.
EBS volumes can be resized, and Terraform greatly simplifies this task. With Terraform, you can change an EBS disk size in just three steps.
The code will be similar to the previous examples, but with EBS-specific values and some additional parameters:
resource "aws_ebs_volume" "mydisk" {
zone = "europe-north1-a"
size = 200
type = "ssd"
tags {
Name = "mydisk"
Role = "db"
Terraform = "true"
FS = "xfs"
}
}
Then import the volume (the volume ID here is just an example):
terraform import aws_ebs_volume.mydisk vol-13579ace02468bdf1
If the import is successful, you’ll get a confirmation message.
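To double-check how the imported volume differs from your configuration before changing anything, you can run a targeted plan:

terraform plan -target=aws_ebs_volume.mydisk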
Now you can change the volume size by replacing 200 with 500 in the code:
size = 500
Then run:
terraform apply -target=aws_ebs_volume.mydisk
After that, you should see a message confirming the volume size change.
Look up the instance the volume is attached to and retrieve its identifier:
data "aws_instance" "mysql" {
filter {
name = "block-device-mapping.volume-id"
values = ["${aws_ebs_volume.mydisk.id}"]
}
}
output "instance_id" {
value = "${data.aws_instance.mydisk.id}"
}
Update the configuration with:
terraform apply
terraform refresh
This should produce output like:
instance_id = i-13579ace02468bdf1
Next, get the mount point pointing to our volume inside the instance:
locals {
  mount_point = "${data.aws_instance.mydisk.ebs_block_device.0.device_name}"
}
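If you want to verify which device name was picked up, you can expose this local value through an output (an optional, purely illustrative addition):

output "mount_point" {
  value = "${local.mount_point}"
}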
To make the OS recognize and use the entire expanded disk, run the resize command on the instance via a remote-exec provisioner:
resource "null_resource" "expand_disk" {
connection {
type = "ssh"
user = "username"
private_key = "${file("ssh-key-here")}"
host = "${data.aws_instance.mydisk.xxx.xxx.xxx.xxx}"
}
provisioner "remote-exec" {
inline = [
"sudo lsblk",
"sudo xfs_growfs ${local.mount_point}",
]
}
}
Note: data.aws_instance.mydisk.public_ip resolves to the public IP address of the instance the volume is attached to; if you prefer, you can specify that IP address directly in the host argument.
Finally, run:
terraform apply -target=null_resource.expand_disk
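To confirm that the filesystem now spans the entire volume, you can SSH into the instance and check the reported size manually (an optional check outside of Terraform):

df -h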
This way, you can increase the size of EBS volumes in Terraform without creating a new volume and copying the data over, which is not always convenient. (Keep in mind that EBS volumes can only be grown, not shrunk.)
We’ve learned how to create disks in Terraform and increase their size using Terraform itself, as well as configuration files and scripts in different environments.