
Cloud Managed MySQL

Ready-made clusters, zero administration, and pay-as-you-go hourly billing.
Contact Sales
No Downtime
We provide 99.9% uptime under our SLA and host servers exclusively in Tier IV data centers that meet international security standards.
Two-click Launch
Launch your database directly from Hostman's modern control panel, where all settings and services are available in one place.
Real Savings
Use the database with hourly billing and pay only for what you use. No hidden charges and no forced add-ons.
Convenient Scaling
Is your project growing? Add resources in a few clicks, and Hostman will provide as much capacity as your service requires.

Pricing

MySQL (New York)

CPU           RAM      NVMe       Bandwidth    Price
1 x 3 GHz     1 GB     20 GB      200 Mbps     $4/mo
2 x 3 GHz     2 GB     60 GB      200 Mbps     $9/mo
2 x 3 GHz     4 GB     80 GB      200 Mbps     $18/mo
4 x 3 GHz     8 GB     160 GB     200 Mbps     $36/mo
6 x 3 GHz     16 GB    320 GB     200 Mbps     $72/mo
8 x 3 GHz     32 GB    640 GB     200 Mbps     $114/mo
16 x 3 GHz    64 GB    1280 GB    200 Mbps     $288/mo
Anup k.
Associate Cloud Engineer
5.0 out of 5

"Hostman Comprehensive Review of Simplicity and Potential"

It been few years that I have been working on Cloud and most of the cloud service...
Mansur H.
Security Researcher
5.0 out of 5

"A perfect fit for everything cloud services!"

Hostman's seemless integration, user-friendly interface and its robust features (backups, etc) makes it much easier...
Adedeji E.
DevOps Engineer
5.0 out of 5

"Superb User Experience"

For me, Hostman is exceptional because of it's flexibility and user-friendliness. The platform's ability to offer dedicated computing resources acr...
Yudhistira H.
Mid-Market(51-1000 emp.)
5.0 out of 5

"Streamlined Cloud Excellence!"

What I like best about Hostman is their exceptional speed of deployment, scalability, and robust security features. Their...
Mohammad Waqas S.
Biotechnologist and programmer
5.0 out of 5

"Seamless and easy to use Hosting Solution for Web Applications"

From the moment I signed up, the process has been seamless and straightforward...
Mohana R.
Senior Software Engineer
5.0 out of 5

"Availing Different DB Engine Services Provided by Hostman is Convenient for my Organization usecases"

Hostman manages the cloud operations...
Faizan A.
5.0 out of 5

"Hostman is a great fit for me"

Hostman is a great fit for me. What do you like best about Hostman? It was very easy to deploy my application and create database, I didn't have
Adam M.
5.0 out of 5

"Perfect website"

This website is extremely user friendly and easy to use. I had no problems so didn't have to contact customer support. Really good website and would recommend to others.
Anup K.
4.0 out of 5

"Simplifying Cloud Deployment with Strengths and Areas for Growth"

What I like best about Hostman is its unwavering commitment to simplicity...
Naila J.
5.0 out of 5

"Streamlined Deployment with Room for Improvement"

Hostman impresses with its user-friendly interface and seamless deployment process, simplifying web application hosting...

Trusted by 500+ companies and developers worldwide

One panel to rule them all

Easily control your database, pricing plan, and additional services
through the intuitive Hostman management console.
Easy setup and management
Ready-to-deploy cloud database solutions come pre-configured. Choose your setup, launch your database, and begin managing your data with ease.
Saves time and resources
Forget about configuring hardware and software or manual database management—our service has it all covered for you.
Security
Deploy databases on an isolated network to maintain private access solely through your own infrastructure.
Hostman management console showing statistics for the last hour

Everything is ready to deploy
your MySQL database to our
cloud — up and running
in seconds!
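
For a developer, a managed instance behaves like any other MySQL server: you receive a host, port, and credentials from the control panel and connect from your application. The Go sketch below illustrates this; the DSN values (host, user, password, and database name) are placeholder examples, not real credentials.

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/go-sql-driver/mysql" // MySQL driver registered for database/sql
)

func main() {
    // Placeholder DSN: substitute the credentials shown in your control panel.
    dsn := "app_user:app_password@tcp(db.example.com:3306)/app_db?parseTime=true"

    db, err := sql.Open("mysql", dsn)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Ping verifies that the connection and credentials actually work.
    if err := db.Ping(); err != nil {
        log.Fatal(err)
    }

    var version string
    if err := db.QueryRow("SELECT VERSION()").Scan(&version); err != nil {
        log.Fatal(err)
    }
    fmt.Println("Connected to MySQL", version)
}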

Databases for all tastes

MySQL

The world's most popular open-source relational database management system, maintained by Oracle.

PostgreSQL

An object-relational database management system, supported on most UNIX-like platforms.

Redis

A high-performance in-memory key-value database, often used for caching.

MongoDB

A document-oriented database management system that stores data as JSON-like documents and supports JSON-style queries.

OpenSearch

A search and analytics suite for monitoring applications and event logs.

ClickHouse

A columnar analytical database that supports real-time queries over large volumes of structured data.

Kafka

An open-source distributed event streaming and messaging system, known for high throughput and low latency.

RabbitMQ

A message broker based on the AMQP standard.

Code locally, launch worldwide

Our infrastructure is ISO/IEC 27001 certified and hosted in Tier 3 data
centers across the US, Europe, and Asia.
Hostman's Locations
🇺🇸 San Francisco
🇺🇸 San Jose
🇺🇸 Texas
🇺🇸 New York
🇳🇱 Amsterdam
🇳🇬 Lagos
🇩🇪 Frankfurt
🇵🇱 Gdansk
🇦🇪 Dubai
🇸🇬 Singapore

Latest News

Go

Variables in Go

Variables are named values stored in specific areas of memory and used during program execution. Go (also known as Golang) is a statically typed programming language. This means that once a variable is declared, its type is fixed and cannot be changed. Variables can have various types, each with its own purpose and characteristics.Go provides several basic data types, which form the foundation of the language's logic: Integer Floating-point String Boolean Additionally, Go supports composite data types: Arrays Slices Structures Maps There are also several auxiliary types: Pointers Interfaces Besides these, Go (similar to C++) includes a Standard Library (std) containing many predefined types. You can find more detailed information about variable types in Go in a separate article. For instructions on installing Go on Linux, Windows, or macOS, refer to the Hostman guides.  All the code examples in this tutorial were tested using Go version 1.21.3. Compiling and Running Code All the code examples in this guide are run in separate files with the .go extension. First, create a new file: sudo nano example.go Next, fill it with code inside the main() function, including any necessary modules: package main import "fmt" func main() { // start of example var number int = 10 fmt.Println(number) // end of example } Then run the file: go run example.go Declaring a Variable There are different ways to declare a variable in Go before using it—ranging from a full form, explicitly specifying the parameters (or multiple parameters) of the variable, to a shorthand form that uses automatic type inference and initialization. The choice of declaration method depends on the context. However, it’s generally recommended to use the most concise and automatic form whenever possible, as this reduces the likelihood of programmer errors by shifting some of the responsibility to the language's interpreter. Using the var Keyword The most explicit way to declare a variable in Golang is by using the var keyword, followed by the variable name, type, and value: var some_variable int = 5 However, if the variable is initialized with a value, you can omit the explicit type: var some_variable = 5 You can also declare a variable without assigning a value, but in this case, you must specify the type: var some_variable intsome_variable = 5 In all of these examples: var — the keyword for declaring a variable some_variable — the variable's name int — the variable's type 5 — the variable's value For example, this is how you can declare string variables: var some_name string = "John" The following declaration will result in an error: // ERROR: no value or type specified during declarationvar some_namesome_name = "John" It’s important to note that type inference is only possible during the initial declaration of the variable when the interpreter allocates the appropriate amount of memory for its value. Short Form := Despite Go's strict static typing, it allows variables to be declared in a more concise form without explicitly specifying their parameters: some_variable := 5 In this case, the interpreter understands that it needs to automatically infer the variable type based on the assigned value. 
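
As a short runnable sketch following the example.go pattern above, the program below declares several variables with := and uses the %T verb to print the types the compiler infers for them:

package main

import "fmt"

func main() {
    name := "John"  // inferred as string
    age := 50       // inferred as int
    height := 190.5 // inferred as float64
    fired := false  // inferred as bool

    // %T prints the concrete type chosen by the compiler for each variable.
    fmt.Printf("%T %T %T %T\n", name, age, height, fired)
    fmt.Println(name, age, height, fired)
}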
However, this shorthand declaration is only allowed inside a function (including main()); it cannot be used outside a function: package main // ERROR: short form declaration outside of a function some_variable := 5 func main() { // OK: short form declaration inside a function other_variable := 10 } It’s important to understand the distinction between declaring a variable (with initialization) and assigning a value to it: package main func main() { some_variable := 5 // this is declaration and initialization (colon is present) some_variable = 50 // this is assignment (no colon) other_variable = 7 // ERROR: this is assignment (no colon) to an undeclared variable } For example, you can declare (and initialize) several variables sequentially: age := 50 // variable of type int name := "John" // variable of type string occupation := "Just a guy" // variable of type string height := 190.5 // variable of type float32 You cannot use the := operator together with the var keyword. Doing so will result in an error: var someVariable int := 5 // ERRORvar someVariable := 5 // ERROR Excluding the var keyword but still explicitly specifying the type will still result in an error: someVariable int := 5 // ERROR Multiple Variables In Go, you can declare multiple variables in one line or block. For example, you can use the var keyword with a single type for all declared variables: var width, height, depth int = 100, 200, 300 You can also separate the declaration of variables and their assignment: var width, height, depth intwidth, height, depth = 100, 200, 300 If the variable types differ, the interpreter can automatically infer their types: var name, age, fired = "John", 50, false Similarly, you can use the short form for multiple variables: name, age, fired := "John", 50, false In this case, there is no var keyword, nor are the types of the variables specified. Another way to declare multiple variables is by using a block: var ( name string = "John" age int = 50 height float64 = 190 fired bool = false ) By the way, you can format block declarations using spaces in such a way that names, types, and values align in columns, improving code readability: var ( name string = "John" age int = 50 height float64 = 190.5 fired bool = false ) The block declaration has no particular utility significance. It’s just syntactic sugar that: Improves code readability by grouping important variables in one place. Improves code cleanliness by avoiding repeated use of the var keyword for each variable. Improves code maintainability by simplifying the search and modification of variable parameters. Thus, block declaration is justified only when you need to group several key variables, simplifying their visual perception in a code editor. No Initialization In Go, it is possible to create a variable without initializing it. 
In this case, the variable is assigned a zero value corresponding to the specified type: For int, float32, float64: 0, 0.0, 0.0 For bool: false For string: "" For pointers: nil We can demonstrate this behavior of Go regarding variable declaration and initialization in the following script: package main import "fmt" func main() { // Integer var numberInt int fmt.Println("Integer:", numberInt) // Floating-point number var numberFloat float32 fmt.Println("Floating-point number:", numberFloat) // String var text string fmt.Println("String:", text) // Boolean var condition bool fmt.Println("Boolean:", condition) // Array var array [5]int fmt.Println("Array:", array) // Slice var cut []int fmt.Println("Slice:", cut) // Struct type S struct { name string size int address string } var structure S fmt.Println("Struct:", structure) // Map var dictionary map[int]int fmt.Println("Map:", dictionary) // Pointer var pointer *int fmt.Println("Pointer:", pointer) } The console output will be as follows: Integer: 0 Floating-point number: 0 String: Boolean: false Array: [0 0 0 0 0] Slice: [] Struct: { 0} Map: map[] Pointer: <nil> As you can see, variables of different types are automatically initialized with zero (or empty) values wherever possible. Naming Conventions In Golang, variable names can either start with a Latin letter or an underscore (_): onething := 123 // OK Onething := 123 // OK _onething := 123 // OK __onething := 123 // OK 1thing := 123 // ERROR Additionally, variable names have a functional feature: names starting with an uppercase letter are visible in other packages, while names starting with a lowercase letter are not. There are also several universal naming conventions across programming languages, including Go: Snake Case Camel Case Pascal Case Kebab Case (not supported in Go) Snake Case In Snake Case, the variable name looks like this: some_random_variable := 123 // lowercaseSOME_RANDOM_VARIABLE := 123 // uppercase Camel Case In Camel Case, the variable name looks like this: someRandomVariable := 12 Pascal Case In Pascal Case, the variable name looks like this: SomeRandomVariable := 123 Kebab Case In Kebab Case, the variable name looks like this: // ERRORsome-random-variable := 123 // lowercaseSOME-RANDOM-VARIABLE := 123 // uppercase However, Go doesn't support the Kebab Case style due to the hyphen character, which is reserved for the subtraction operation. Example: Declaring Multiple Variables Let’s further explore all the aforementioned ways of declaring variables in Golang in this script example: package main import "fmt" func main() { // Explicit declaration with type specification var age int = 50 fmt.Println("Age:", age) // Explicit declaration with type inference var height = 190.5 fmt.Println("Height:", height) // Short declaration name := "John" fmt.Println("Name:", name) // Explicit declaration of multiple variables var width, depth int = 100, 200 fmt.Println("Width:", width, "Depth:", depth) // Explicit declaration without initialization var distance int fmt.Println("Distance:", distance) // Block declaration of multiple variables var ( occupation string = "Welder" category float32 = 3.4 license bool ) fmt.Println("Occupation:", occupation, "Category:", category, "License:", license) } The result of running this code will be the following output in the console: Age: 50 Height: 190.5 Name: John Width: 100 Depth: 200 Distance: 0 Occupation: Welder Category: 3.4 License: false The var keyword is required for explicit variable declaration, especially in the global scope. 
The := operator is used for short variable declarations, particularly within functions. The block () syntax is used for readable declaration of multiple variables. It's important to remember that Go emphasizes minimalism and concise syntax. Therefore, the most compact form of notation should be used wherever possible. This reduces errors and issues while maintaining the cleanliness and readability of the code. Variable Initialization Typically, when a variable is declared, it is manually initialized with a specific value. The initialization of different types has syntactic differences. Number Numerical variables are initialized by assigning a numerical value, which is syntactically simple: // int var someNumber int = 5 // float32 otherNumber := 10.0 A number can be initialized with another number: // int var someNumber int = 5 var otherNumber int = someNumber // int oneMoreNumber := someNumber String String variables are initialized by assigning a sequence of characters enclosed in double quotes: // stringvar someString string = "Some programmer was here" A string can also be initialized with another string: // string var someString string = "Some programmer was here" var otherString string = someString // string oneMoreString := someString Boolean Initializing boolean variables is similar to initializing numeric and string variables, except that the value used is the keyword true or false: // boolvar someBool bool = true Similarly, boolean variables can be initialized with other boolean variables: // bool var someBool bool = true var otherBool bool = someBool // bool oneMoreBool := someBool Array There are several ways to initialize an array. The simplest one is through sequential access to the elements: // array var languages [3]string languages[0] = "Golang" languages[1] = "Python" languages[2] = "Rust" A more complex method is using a composite literal. A composite literal is a compact syntax for initializing any composite (struct-like) type, which avoids assigning each element individually. Thus, the array can be initialized in one step: var languages = [3]string{"Golang", "Python", "Rust"} Or using the shorthand form: languages := [3]string{"Golang", "Python", "Rust"} You can also partially initialize array elements: // array size 5, but only 3 elements initialized languages := [5]string{"Golang", "Python", "Rust"} languages[3] = "Java" languages[4] = "C++" To make the initialization of a large array more readable, you can format it like this: languages := [5]string{ "Golang", "Python", "Rust", "Java", "C++", // the comma at the end is REQUIRED } By the way, an array can be initialized with another array, copying all of its elements: languages := [3]string{"Golang", "Python", "Rust"}otherLanguages := languages It’s important to understand that copying an array also occurs when it is passed to a function: package main import "fmt" func change(languages [5]string) { for i := range languages { languages[i] = "[" + languages[i] + "]" } } func main() { languages := [5]string{ "Golang", "Python", "Rust", "Java", "C++", } change(languages) fmt.Println(languages) } The output in the console will be: [Golang Python Rust Java C++] Thus, only the copy of the array inside the change() function was modified, not the original array from the main() function. 
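
For contrast, a slice behaves differently when passed to a function: the slice header is copied, but it still refers to the same backing array, so changes made inside the function are visible to the caller. A short sketch:

package main

import "fmt"

func change(languages []string) {
    for i := range languages {
        languages[i] = "[" + languages[i] + "]"
    }
}

func main() {
    languages := []string{"Golang", "Python", "Rust"}
    change(languages)
    // Unlike the array example above, the original slice is modified.
    fmt.Println(languages) // prints: [[Golang] [Python] [Rust]]
}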
However, explicit initialization of an array with another array is possible only if both arrays have the same length and type: languages := [3]string{"Golang", "Python", "Rust"} var otherLanguages [3]string = languages // OK var oneMoreLanguages [4]string = languages // ERROR Additionally, in Go, you can create arrays from an arbitrary number of other arrays. You can initialize elements of such arrays both sequentially: var matrix [2][2]string matrix[0][0] = "a" matrix[0][1] = "b" matrix[1][0] = "c" matrix[1][1] = "d" Or using a composite literal: var matrix = [2][2][2]string{{{"a", "b"}, {"c", "d"}}, {{"e", "f"}, {"g", "h"}}} As shown, the second option takes up less space, but the syntax is more complex. Slice A slice is initialized the same way as an array: var languages = []string{"Golang", "Python", "Rust"} However, unlike an array, a slice can be initialized with another slice of arbitrary length: var languages = []string{"Golang", "Python", "Rust"}var otherLanguages []string = languages Map Maps are initialized using a composite literal with the type of the key and value specified. The content is listed using commas and separated by a colon: var languages = map[string]string{"first": "Golang", "second": "Python", "third": "Rust"} You can also use the shorthand declaration and a more readable initialization format: languages := map[string]string{ "first": "Golang", "second": "Python", "third": "Rust", // the comma at the end is MANDATORY } However, initializing a map with another map does not copy the elements; instead, it makes them shared: package main import "fmt" func main() { languages := map[string]string{"first": "Golang", "second": "Python", "third": "Rust"} otherLanguages := languages fmt.Println(languages) fmt.Println(otherLanguages) otherLanguages["first"] = "C++" fmt.Println(languages) fmt.Println(otherLanguages) delete(otherLanguages, "second") fmt.Println(languages) fmt.Println(otherLanguages) } The console output of this example will be: map[first:Golang second:Python third:Rust] map[first:Golang second:Python third:Rust] map[first:C++ second:Python third:Rust] map[first:C++ second:Python third:Rust] map[first:C++ third:Rust] map[first:C++ third:Rust] Pointer Pointers can only be initialized with the address of a variable of the same type: var variable int = 15var pointer *int = &variable The ampersand (&) symbol is used to get the address of any variable: package main import "fmt" func main() { var variable int = 15 var pointer *int = &variable fmt.Println(pointer) } The console output of this example will look something like: 0xc000104040 You can also use shorthand notation to initialize pointers: variable := 15pointer := &variable To access the value stored at the address of a pointer, you need to dereference it using the asterisk (*): package main import "fmt" func main() { var variable int = 15 var pointer *int = &variable fmt.Println(*pointer) } In this case, the console output will show: 15 Thus, you can assign new values to a variable located at the address of the pointer: package main import "fmt" func main() { var variable int = 15 var pointer *int = &variable *pointer = 5 fmt.Println(*pointer) } The console will display: 5 Finally, a pointer can be initialized with an anonymous object in memory. This is done using the new() function, which returns the address of the allocated memory: variable := new(int)*variable = 15 You don't need to manually delete the allocated memory — the garbage collector automatically handles this. 
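
For clarity, here is a short runnable sketch of new(), following the same pattern as the earlier examples:

package main

import "fmt"

func main() {
    // new(int) allocates a zero-valued int and returns a pointer to it.
    variable := new(int)
    fmt.Println(*variable) // prints: 0

    *variable = 15
    fmt.Println(*variable) // prints: 15
}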
Structure A structure can be initialized either with explicitly specified values in order: type something struct { first string second int } var structure something = something{"John", 15} Or with explicitly specified values by key names: type something struct { first string second int } var structure something = something{second: 15, first: "John"} Alternatively, you can choose not to specify any values, which will automatically initialize all fields to their zero values: package main import "fmt" type something struct { first string second int } func main() { var structure something = something{} fmt.Println(structure) structure.first = "John" structure.second = 15 fmt.Println(structure) } In this case, the console output will be: { 0} {John 15} Branching Based on Variables Variables play a central role in branching. Different parts of the program's code are executed based on their values (conditions). if/else The most basic conditional construct is created using the if/else statements. Here's the simplest condition: a := 5 b := 10 if a < b { fmt.Println("A is less than B") } For example, you can use a simple condition to check a pointer: var pointer *int if pointer == nil { fmt.Println("No address") } A more complex form would look like this: a := 10 b := 5 if a < b { fmt.Println("A is less than B") } else { fmt.Println("A is greater than B") } You can create even more complex constructs by combining else and if: a := 10 b := 5 if a < b { fmt.Println("A is less than B") } else if a > b { fmt.Println("A is greater than B") } else { fmt.Println("A is equal to B") } Multiple if/else expressions can be used: a := 12 if a < 5 { fmt.Println("A is less than 5") } else if a < 10 { fmt.Println("A is less than 10") } else if a < 20 { fmt.Println("A is less than 20") } else { fmt.Println("A is in superposition") } switch Another way to branch is using the switch construct, where possible values of a variable are defined, and actions are performed if there's a match: a := 1 switch a { case 0: fmt.Println("A is 0") case 1: fmt.Println("A is 1") case 2: fmt.Println("A is 2") } The default section can be used to define an action that runs if no match occurs: a := 3 switch a { case 0: fmt.Println("A is 0") case 1: fmt.Println("A is 1") case 2: fmt.Println("A is 2") default: fmt.Println("A is in superposition") } You can also combine multiple possible matches into one section: a := 1 switch a { case 0, 1, 2: fmt.Println("A is either 0, 1, or 2") default: fmt.Println("A is in superposition") } Useful Functions Go has many utility functions for working with variables. In this guide, we'll cover just the basic ones. Environment Variables Go provides special system functions that allow you to set and get environment variables: package main import ( "fmt" "os" ) func main() { os.Setenv("SOMEVAR", "1") // Set an environment variable fmt.Println("SOMEVAR:", os.Getenv("SOMEVAR")) // Read an environment variable } Time Variables Often, the program logic requires measuring time. Go has a corresponding tool for this — the time type. Time is a broad topic by itself. To learn more about the time package, you can check the official documentation. 
This guide will show how to get the current time in different formats: package main import ( "fmt" "time" ) func main() { fmt.Println("Current time:", time.Now()) fmt.Println("Current time (UTC):", time.Now().UTC()) fmt.Println("Current time (Unix):", time.Now().Unix()) } The console output will look something like this: Current time: 2009-11-10 23:00:00 +0000 UTC m=+0.000000001 Current time (UTC): 2009-11-10 23:00:00 +0000 UTC Current time (Unix): 1257894000 You can also specify specific time parameters: package main import ( "fmt" "time" ) func main() { timeNow := time.Now() fmt.Println("Full time:", timeNow) fmt.Println("Year:", timeNow.Year()) fmt.Println("Month:", timeNow.Month()) fmt.Println("Day:", timeNow.Day()) fmt.Println("Hour:", timeNow.Hour()) fmt.Println("Minutes:", timeNow.Minute()) fmt.Println("Seconds:", timeNow.Second()) } In this case, the console output will be: Full time: 2024-11-15 23:46:09.157929822 +0000 UTC m=+0.000031801 Year: 2024 Month: November Day: 15 Hour: 23 Minutes: 23 Seconds: 9 Adding and Removing Elements from a Slice You can add elements to slices: var languages = []string{"Golang", "Python", "Rust"} languages = append(languages, "Java", "C++") fmt.Println(languages) This will append "Java" and "C++" to the languages slice. You can also remove elements from slices: var languages = []string{"Golang", "Python", "Rust"} // Remove the 2nd element (index 1) n := 1 languages = append(languages[:n], languages[n+1:]...) fmt.Println(languages) In this example, the second element is removed from the languages slice using slice operators, which create a new sequence from parts of the original slice. Here’s an example of slicing a sequence: package main import "fmt" func main() { var sequence = []string{"One", "Two", "Three", "Four", "Five"} newSequence := sequence[1:4] // Elements from index 1 to 3 become the new slice fmt.Println(newSequence) } The output in the console will be: [Two Three Four] Checking the Type of a Variable You can check the type of a variable using the TypeOf() function from the reflect package: package main import ( "fmt" "reflect" // Package to determine the type ) func main() { variableString := "string" variableInt := 5 variableFloat64 := 1.5 variableBool := true fmt.Println(reflect.TypeOf(variableString)) fmt.Println(reflect.TypeOf(variableInt)) fmt.Println(reflect.TypeOf(variableFloat64)) fmt.Println(reflect.TypeOf(variableBool)) } The console output for this example will be: string int float64 bool Variables in Strings Often, you need to insert a variable into a string. There are several ways to do this: package main import "fmt" func main() { // METHOD 1 stringPre := "human-readable" stringEnd1 := fmt.Sprintf("This is a %s string", stringPre) fmt.Println(stringEnd1) // METHOD 2 stringEnd2 := "This is " + stringPre + " string" fmt.Println(stringEnd2) } The output in the console will be: This is a human-readable stringThis is a human-readable string You can also combine numeric variables with strings: package main import "fmt" func main() { name := "John" age := 50 fmt.Printf("Hi, my name is %v and I'm %v years old.\n", name, age) } The output will be: Hi, my name is John and I'm 50 years old. Conclusion Like in most other programming languages, variables in Go are essential for storing data. Since data types differ from each other, Golang variables have several basic types, each having a specific representation in the computer's memory. In this guide, we only covered the basic ways to work with variables. 
You can find more detailed and comprehensive information about types and their specifics in the official Golang documentation. The official Go package catalog (pkg.go.dev) also documents the standard library and many useful third-party modules you can import into your project. Check out our app platform to deploy Go applications (such as Beego and Gin).
27 January 2025 · 19 min to read
Linux

Linux Permissions Explained

In Linux, permissions are extremely valuable in dealing with access to folders as well as files. It makes sure proper authority over which one can deal with them. Effectively handling these privileges is fundamental for enhancing system file management and security. These privileges give groups or users the ability for reading, executing, or modifying, directories and their content. These rules safeguard data and restrict access, particularly in environments with more than one user. Each folder or file comes with particular rights that represent what users can accomplish. This article will demonstrate the basis of permissions, point out access, understanding, and changing privileges for them, and manage folders and their content. Basis of Permissions For beginners, the directories or file privileges can be challenging. It involves the concepts of types and groups as below:  Types  In Linux, each folder or file holds three kinds of permissions, each serving a particular purpose: Read (r): It indicates the permission to view the file’s content or enlist the items inside the folder. Write (w): It allows modifications to the particular file or addition and deletion of files inside the directory. Execute (x): It permits the file execution as the program or getting the folder content. Groups They are classified into three groups, each serving a particular role: User (Owner): It indicates the user who has the ownership rights of the folder or file. Group: It indicates a bunch of users having shared access rights. Others: It includes those who are not owners or participants of the desired group. Permission Formats Privileges are visualized in two formats: symbolic and octal. The symbolic employs symbols to mean rights, r indicates reading, w refers to writing, and x is utilized for the execution purpose. In contrast, the octal utilizes numbers, where 4 means reading, 2 stands for writing, and 1 signifies execution. Linux Display Permissions  Linux offers several methods to examine privileges. Individuals can employ a terminal for detailed information or go through the file manager's properties option for a graphical visualization. Using GUI This approach is the most straightforward for evaluating rights of permissions. It permits individuals to display them through the file manager's properties. To employ this method, hit the right-click on the desired folder and click Properties: Next, navigate to Permissions for viewing the permissions given to the particular directory and its content: In the figure, readers can see and adjust privileges for directories and their content, defining what the group is permitted to do, such as modifying, accessing, or deleting them. Additionally, it provides security context info and offers the choice to implement these privileges to all enclosed files: Through the ls Command You can employ the ls command along with -l, followed by the specified folder or file, to analyze its stats, including privileges: ls -l <file_or_directory_name> It retrieves thorough entries, including file privileges and a variety of properties. For instance, the below one retrieves the privilege attributes of the Downloads: ls -l Downloads In the output, the starting part indicates the permissions for all files or folders. For instance, -rw-rw-r-- describes the file as having reading and writing rights for the group as well as the owner. Also, reading-only privileges for others. 
drwxrwxr-x demonstrates the particular folder possessing the privileges of reading, writing, and executing for the group and owner. Also, reading and executing privileges for others. The next section describes the number of hard links to a particular file or folder. The next section shows (e.g., linuxuser) the owner. The next part shows the group which is corresponding with the directory or file. The fifth part describes the file's size in bytes. Next you see the most recent modification date and time, and finally the seventh section shows the file or folder’s name. Through the namei Utility In Linux, namei is an effective utility that shows the individual sections of a file or folder path along with their rights: namei -l /path/to/file Now, employ the namei -l to visualize comprehensive details about the Downloads folder: namei -l Downloads In this outcome, f: Downloads relates to the last entry in the folder, e.g. Downloads. The d signifies that it is a directory. The rwxr-xr-x means that the linuxuser has the right to read, write, and execute. However, both the linuxuser owner and the group have the capacity to read and execute privileges. It confirms that the linuxuser group as well as a user have the owners' rights of the particular folder. Through the stat Command This utility retrieves comprehensive info about the particular folder and its content, e.g. files: stat fileName  Let’s employ it to retrieve the comprehensive statistics of the Downloads: stat Downloads It retrieves the size of the file, rights, and a lot more: Modifying Permissions Editing file and folder rights are effective for system privacy purposes. Linux provides two main methods to revise privileges: symbolic and absolute mode. Symbolic Mode In this mode, individuals adjust permissions by adding (+), deleting (-), or setting (=) specific rights for the owner, group, or others. For making these modifications, the chmod is utilized.  Let's check out the permissions for the hostmanData file: ls -l hostmanData For adding execution access for the file’s owner, utilize the chmod utility as below: chmod u+x hostmanData Next, verify the updated privileges by running: ls -l hostmanData Absolute Mode In this method, rights are given through octal synonyms. There, every digit is related to reading, writing, and executing access for the user, group, and others. For instance, the code line allows full privileges to the owner and gives reading and executing access to the group and others: chmod 755 hostmanData Modifying Owner Rights The chown utility permits individuals to alter the folder's ownership and content. It allocates the new group or owner to maintain access control. Modifying Ownership We can alter the owner status of a particular directory or file via the chown. For instance, to alter the privileges of the hostmanData file to anees, employ the below code line: sudo chown anees hostmanData Next, confirm the changes via the following code line: ls -l hostmanData Modifying Group Ownership For updating the owner of a group of files, you can employ the below syntax: sudo chown :users hostmanData The above line updates the group of the hostmanData from linuxuser to users: Other Permissions  Linux permits individuals the appropriate way to handle advanced or complex operations via the below utilities: setuid: It allowed the file to execute with the authority of the owner compared to the user when implemented to the particular executable file. 
setgid: When applied to an executable file, it makes the file run with the privileges of the group that owns the file rather than the caller's group. Sticky Bit: It ensures that only a file's owner can rename or delete that file inside a shared directory. Final Words In Linux, permissions are essential for controlling access to files and directories and play a key role in system security and administration. In this article, we covered the basics of permissions, how to view and modify them, and how to change ownership. We also looked at the special permission bits used for more advanced scenarios. With a solid understanding of these concepts, you can secure a Linux system and manage access with ease.
27 January 2025 · 6 min to read
Linux

Linux Navigation and File Management

Navigating and manipulating files in Linux is a core concept for fluently operating the system. The terminal acts as your powerful control center, allowing you to traverse through directories, fetch files, and perform modifications. Whether you work on a local Linux installation or access a remote system via a secured SSH session, you can rely on basic commands to streamline your tasks.  This tutorial will use practical examples and precise instructions to demonstrate Linux file navigation and manipulation. Exploring Linux File Setup Linux structures its files and directories in a hierarchical layout resembling a tree, where the root directory (/) serves as the foundation. Everything, including folders and files, branches out from this root. Below is a concise synopsis of some key directories: Root (/) It is the main folder where everything begins. Every other directory and file is kept inside it. /home This folder contains personal files for every user. For instance, if your username is hostman, files will be kept in /home/hostman. /var This folder keeps frequently updated files, including system logs and temporary data. Logs can be located in /var/log. /etc Linux preserves configuration files and system settings, including networking and account setup inside the etc directory. File Navigation and Manipulation in Linux Navigating and organizing files involves traversing the file system to handle files and directories. Linux commands make these tasks efficient, let you automate them, and provide you precise control, especially in non-graphical environments. Checking Current Directory The pwd command retrieves your current address in the file system. It's useful for identifying your position when navigating through complex directory structures: pwd The outcome indicates that the active directory is anees located inside the system’s home directory: Traversing Directories The cd utility lets us traverse distinct directories. Entering the command below will take us to the root directory: cd / Let’s employ cd with the tilde symbol ~ or cd without any option to traverse back to the Home Directory: cd ~ Execute cd with the - sign to toggle between current and previous directories: cd - It switches us from hostmanExamples to the previous folder, i.e., hostman: Similarly, running cd with a precise path lets us access a particular file/directory: cd Desktop/hostman/hostmanExamples Retrieving Directories Data ls is a Linux utility that retrieves the folder’s data, such as files, links, and sub-folders. You can utilize distinct flags with ls to exhibit additional details like access rights, sizes, and last modified timestamps. Let’s utilize ls without any flag to retrieve the folder’s data from the recent path: ls Utilizing ls with a precise path retrieves the details of that directory: ls Desktop/hostman/ To get invisible files, utilize ls with -a flag: ls -a To demonstrate precise information, run ls followed by the -l flag: ls -l Returning Folder Structure The tree command retrieves folders and files in a hierarchical tree format. To do that, utilize the syntax: tree /directory_path Let’s retrieve the directory structure in a tree format, with each file’s size depicted in a human-understandable structure: tree -h Hidden Linux Files These files or folders initiate with a dot (.) and are invisible by default. They typically keep configuration settings or important data for applications, such as .bashrc for customizing terminal behavior or .ssh for managing secure keys. 
They can be viewed by utilizing the ls -a command or by enabling the "show hidden files" from the file managers.  File Administration Let’s study the below-listed Linux sections to handle files and directories effortlessly. File Creation Files are made in Linux via the touch command. It offers a simple method to make blank files. If the file is already present, touch revises the DateTime of the last change instead of generating a new file: touch hostmanExample.txt Folder Creation The mkdir utility in Linux allows you to create a directory at the desired path: mkdir hostmanFolder It constructs a hostmanFolder in the current directory: Re-labeling and Relocating Files mv lets you alter the name of files or transfer them to a distinct location. To re-title a file, write the existing name followed by the desired one: mv hostmanExample.txt hostman1.txt This alters the name of hostmanExample.txt to hostman1.txt: Likewise, users can transfer a file by specifying its name and the destination directory: mv hostman1.txt /hostmanFolder Cloning Files/Directories To duplicate a file/folder, execute cp with the source file and the destination path: cp graphqlFile.txt hostmanFolder To duplicate a directory along with its contents, employ the -r (recursive) flag with cp: cp -r graphql-examples hostmanFolder It clones the complete graphql-examples folder with all its data to the hostmanFolder: Deleting Files and Directories The rm command removes files and directories eternally without sending them to the bin/trash, so use it cautiously by providing the file name: rm graphqlFile.txt Likewise, to trash a directory and all of its data, employ the -r flag: rm -r hostmanFolder For extended safety, utilize the -i flag, which asks for verification before deleting any item: rm -ri graphql-examples Specify y (for "yes") and press Enter to approve the removal of each file or directory. If you input n, the file or directory will not be deleted. File Compression Zipping files is vital for sharing data, and organizing files and storage space. To accomplish this, Linux utilizes tools like gzip and bzip2.  gzip is a widely employed tool for compressing files. It decreases the file size while retaining the original data. Let’s employ gzip to compress the hostmanExample.txt file: gzip hostmanExample.txt The command compresses the original file (overrides the actual file): Similarly, users can execute gzip with -d flag to decompress a compressed file: gzip -d hostmanExample.txt.gz Users can employ the -k flag to compress a file without overriding the original one: gzip -k hostmanExample.txt File Archiving File archiving refers to the method of grouping multiple files and directories together into one unified archive file. This practice makes it easier to share, compress, and manage large data by consolidating various items into a single, organized package. 
For this purpose, we can utilize the tar command: tar -cvf hostman.tar hostmanExample.txt hostman1.txt hostmanDir/ By default, it makes an archive without compression, which contains hostmanExample.txt and hostman1.txt files and a hostmanDir directory: To assemble a compressed archive, we can define the compression format such as z for gzip and j for bzip2: tar -czvf hostman1.tar.gz hostmanExample.txt hostman1.txt hostmanDir/ Similarly, users can extract the archived data by executing the tar command with the -x flag: tar -xvf hostman.tar To extract a gzip or bzip2-compressed archives, use the -z or -j flags respectively: tar -xzvf hostman1.tar.gz Navigating with Shortkeys Shortcut keys can save time and make command-line navigation more efficient: Use cd - to switch to the previous directory. Run cd .. to steer one directory up. Execute cd ~ to return to the home folder. Press Ctrl + A to leap to the left-most (beginning) of the command line. Press Ctrl + E to leap to the right-most (end) of the command line. Hit Tab to auto-complete file or folder names. Gaining proficiency in these fundamental shortcut keys will simplify and enhance your experience with Linux file management. Conclusion In this write-up, we wrapped the essential techniques for navigating and handling files in Linux. We examined how to switch between directories and depict their data. We also examined the creation, deletion, renaming, and relocation of files and directories. Additionally, we explored invisible files, their functionality, and methods to handle them. Mastering these core skills will make working with Linux more easily and effectively.
27 January 2025 · 7 min to read
PostgreSQL

How to Install PostgreSQL on Ubuntu

PostgreSQL is a well-known relational database management system that provides high-availability features. These are renowned for their functionalities, such as support for complex statements, views, triggers, and foreign keys. PostgreSQL is installed on Ubuntu to provide a secure as well as flexible DB infrastructure. By installing it on the Linux distribution, you can enable and deal with the vast data in an efficient and secure manner. This infrastructure is designed to deal with different workloads, from single-machine apps to large internet-facing apps with multiple concurrent users. In this tutorial, we will walk through installing and setting up PostgreSQL on Ubuntu. Installing PostgreSQL on Ubuntu Ubuntu comes with the PostgreSQL package in its default repositories. You can install it on a Linux system following the given steps. It downloads the stable package. This is important for DB analysts, administrators, and industries that require a dependable DB solution for managing a large volume of data.  Let’s move forward into each step of installing PostgreSQL with detailed explanations on Ubuntu. Step 1: Refresh Ubuntu Repository First, refresh the Ubuntu package indexes. It is useful for maintaining system stability and security: sudo apt update Step 2: PostgreSQL Installation  PostgreSQL is included in Ubuntu's default repositories, making installation straightforward. To install it with some additional tools, execute the below command: sudo apt install -y postgresql postgresql-contrib Here: postgresql: The core PostgreSQL DB server. postgresql-contrib: Additional utilities and extensions useful for DB. Step 3: Start and Enable Services It is essential to enable the PostgreSQL service after installation and ensure it starts automatically upon bootup. The installed package utilizes the systemd daemon, which deals with the DB server runtime operations.  Run the commands below to make sure the DB server automatically initializes when the server boots up. Let’s initialize the service: sudo systemctl start postgresql And enable it so that it launches automatically when the machine boots up: sudo systemctl enable postgresql Step 4: Verify Installation To verify if the installed package is operating properly, check out its status through the below: sudo systemctl status postgresql In the figure, you can see an active status. It represents the services that are correctly running. Step 5: Access Shell Finally, switch to the PostgreSQL user account and launch the CLI to interact with database operations. Here, the -i flag provides an interactive login shell: sudo -i -u postgres The above display gives access to a user account. Once you're in the user environment, launch the CLI through the given command: psql As you can see in the above screenshot, this command launchs the CLI, where users can perform multiple operations by executing commands. Step 6: Basic Database Commands The creation of a database and a new user for any application is a good practice rather than using the root. In this way, a particular user can access the shell interface and run basic statements on the particular database. Here are general commands with thorough explanations and examples: Creation of the New Database To prevent conflicts, ensure data integrity and control access privileges for different applications or users, you need to create a new database. For creation, use the given command within the shell: CREATE DATABASE hostmandb; The screenshot shows that a database has been created. 
Creation of the New User Create a customized user and set a strong password: CREATE USER hostman_user WITH ENCRYPTED PASSWORD 'Qwer1234'; We have created a hostman_user with the password Qwer1234. Give Privileges to the Created User You can grant hostman_user complete privileges to the hostmandb through the given command. This allows controlled accessibility to database objects, enhancing data integrity as well as security: GRANT ALL PRIVILEGES ON DATABASE hostmandb TO hostman_user; Exit Shell For closing the shell, you can utilize the \q option as below: \q The figure shows that the shell has been exited. Finally, you can exit the user environment by executing the given command: exit The output confirms that you have returned to the main system user account. Access a Database as the Particular User You can connect to the particular database from the specific user account through the -U flag: psql -U hostman_user -d hostmandb -h 127.0.0.1 -p 5432 Here: -U hostman_user: Indicates the user. -d hostmandb: Indicates the database name. -h 127.0.0.1: Indicates the host (local machine). -p 5432: Indicates the PostgreSQL default port. In the given output, you can see that the user hostman_user has accessed the hostmandb console. Listing of All Databases View all databases and check whether the hostmandb database is available or not: \l In the figure, you can see that the database hostmandb is listed. Switch/Connect to the Particular Database To switch to a specific database, such as hostmandb, within PostgreSQL, use the \c command: \c hostmandb The above output confirms that the postgres user has successfully connected to the hostmandb database. Creation of the Particular Table To create a customized table, use the below statement. In our example, we create a hostmantb table: CREATE TABLE hostmantb(    id SERIAL PRIMARY KEY,    name VARCHAR(100),    age INTEGER,    department_id INTEGER); Here: id SERIAL PRIMARY KEY: It creates an auto-incrementing primary key column with the name id. name VARCHAR(100): It creates a name column that can store up to 100 characters. age INTEGER: It creates an age column that stores integer values. department_id INTEGER: This line creates a department_id column that stores integer values. The outcome of the above screenshot confirms that the table hostmantb is successfully created with specific attributes. List All Tables in the Particular Database For listing all tables in the hostmandb database, use the \dt command: \dt It displays all tables as well as confirms that the hostmantb is listed above. Overview a Table's Structure You can view the structure of the created table through the \d command. Let’s display the structure of hostmantb table: \d hostmantb In the above figure, you can see the complete structure of hostmantb table with id, name, age, and department_id having specific types. Input Entries into a Particular Table To input entries into a hostmantb table, use the below statement. This way, we input the values Peter, 35, and 1 to the columns name, age, and department_id, respectively: INSERT INTO hostmantb(name, age, department_id) VALUES ('Peter', 35, 1); The output confirms that we have successfully input data into the hostmantb table. Select All Data from a Particular Table You can select all data from the hostmantb table through the below statement: SELECT * FROM hostmantb; Modify Data in Particular Table For modifying data in hostmantb, you can utilize the below statement. 
For example, set the age column to 40 for the row where the name column equals Peter: UPDATE hostmantb SET age = 40 WHERE name = 'Peter'; The output confirms that the age column is updated. Remove Data from the Particular Table In this section, you can remove data from the specific table through the DELETE statement. It deletes all entries from the hostmantb table where the value in the name column is Peter: DELETE FROM hostmantb WHERE name = 'Peter'; In the screenshot, one entry from the hostmantb table is removed. Exit CLI You can easily exit the CLI via the \q utility: \q This tutorial has given you thorough guidelines for every step involved in installing as well as setting PostgreSQL on Ubuntu.  Conclusion By installing PostgreSQL on Ubuntu, you can optimize the DB infrastructure. PostgreSQL is often chosen for its freely available nature, which allows for customization, the system's stability and security features. In addition, PostgreSQL's supporters' help and comprehensive guide make it simpler to enhance operations and troubleshoot problems.  This combination makes sure that the DB system is both powerful as well as adaptable to several needs. In this tutorial, there are a lot of fundamental DB commands to get you started with database administration. Therefore, you will be able to create databases, and users and perform basic tasks.
24 January 2025 · 7 min to read
MySQL

How to Secure MySQL Server

MySQL is the best choice for relational database management, but its widespread use also makes it an appealing target for potential attackers. This tutorial outlines essential steps to fortify your MySQL server, protecting your data from unauthorized access and activities that are malicious. Prerequisites Prior to diving into security configurations, ensure the prerequisites mentioned below: MySQL is Installed and Running: Consider reading the official MySQL documentation if needed.  Access to Administrator Privileges: sudo or root access is required for many steps. Familiarity with MySQL Commands: Basic command-line knowledge will be helpful. Understanding Threat Landscape MySQL databases face various threats, including:   Unauthorized Logins: Weak credentials or misconfigurations can allow attackers to access your data.   SQL Injection Attacks: Unvalidated inputs in applications can expose your database to manipulation.   Insufficient Network Security: Open ports and weak firewalls leave the server vulnerable.   Proactively securing the database reduces the likelihood of such incidents. Step 1: Update the Server and MySQL As new updates provide new security features and also fixes bugs, it is highly important that your system stays updated because if will fix known vulnerabilities and leverage latest security enhancements.  Utilize the commands mentioned below to update both the operating system and MySQL. sudo apt updatesudo apt upgrade -y Verify the current MySQL version: mysql --version Keep an eye on official release notes to understand updates and security patches. Step 2: Secure Initial Configuration   The mysql_secure_installation script is an essential tool for strengthening the security of your MySQL server's. It streamlines sensitive tasks such as: Setting a root password Removing anonymous user accounts Disabling remote root logins Deleting the test database To run the script: sudo mysql_secure_installation Follow the prompts to implement the recommended security measures. Step 3: Configure User Authentication   Effective user management is essential for database security.   Use Unique Users for Applications Create separate users for each application, and assign only the permissions they need. For example:   CREATE USER 'app_user'@'localhost' IDENTIFIED BY 'StrongPassword123!';GRANT SELECT, INSERT ON app_database.* TO 'app_user'@'localhost';FLUSH PRIVILEGES; This ensures that a breach in one application doesn’t compromise the entire database.   Enable Authentication Plugins MySQL provides plugins like caching_sha2_password for secure authentication. To enable it:   ALTER USER 'root'@'localhost' IDENTIFIED WITH 'caching_sha2_password' BY 'SecurePassword!'; Step 4: Enforce Password Policies   If passwords are strong then the possibility of brute force attacks may not be of any help for the hackers. Install the password validation plugin if it isn’t already enabled: INSTALL PLUGIN validate_password SONAME 'validate_password.so'; Configure password strength rules in the MySQL configuration file: validate_password_policy=STRONGvalidate_password_length=12 These settings enforce strong, hard-to-guess passwords. Step 5: Restrict Remote Access   Unnecessary remote access exposes the database to risks. Disable Remote Root Access By default, MySQL allows root logins from remote systems. 
Disable this feature by editing the MySQL configuration file: sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf Set the bind address to localhost: bind-address = 127.0.0.1 Restart the MySQL service:   sudo systemctl restart mysql Use Firewalls for Remote Connections If remote access is necessary then configure firewalls. This will only allow traffic from trusted IPs. sudo ufw allow from <trusted_ip> to any port 3306sudo ufw enable Step 6: Secure Data Transmission   Encryption will make sure that data transferred between the client and server is safe from eavesdropping. Enable SSL/TLS Certificates for MySQL are generated automatically at the time of installation.  Add the following lines to /etc/mysql/mysql.conf.d/mysqld.cnf:   [mysqld]ssl-ca=/var/lib/mysql/ca.pemssl-cert=/var/lib/mysql/server-cert.pemssl-key=/var/lib/mysql/server-key.pemrequire_secure_transport = ON Restart MySQL to apply changes. Use tools like OpenSSL to verify encrypted connections. Step 7: Audit and Monitor Database Activity   Monitoring user activity can help detect unauthorized actions. Enable Logs Activate general logs and error logs in the configuration file:   general_log=1log_error=/var/log/mysql/error.log Analyze Access Patterns Periodically review logs to identify anomalies or suspicious activity. Use automated tools like Percona Monitoring and Management for advanced analytics. Step 8: Protect Against SQL Injection   SQL injection is a common attack vector for web-based MySQL applications. Mitigate this risk by: Using Parameterized Queries: This prevents attackers from injecting malicious SQL commands Validating User Inputs: Sanitize and validate inputs at the application level. Implementing a Web Application Firewall (WAF): Tools like ModSecurity can block malicious queries. Step 9: Regular Backups and Disaster Recovery   Prepare for the unexpected by setting up automated backups. Full Database Backups Use mysqldump to create complete backups:   mysqldump -u root -p --all-databases > backup.sql Incremental Backups For large databases, consider incremental backups using tools like Percona XtraBackup. Step 10: Disable Unused Features   Reducing the database’s attack surface is crucial.   Disable Unused Plugins Run the following command to list active plugins:   SHOW PLUGINS; Unload any unnecessary plugins UNINSTALL PLUGIN plugin_name; Remove Sample Databases Delete test databases that come pre-installed with MySQL:   DROP DATABASE test; Step 11: Use Security Updates and Best Practices   Regularly update MySQL to address vulnerabilities and ensure you’re using the most secure version. Stay connected with the MySQL Community for updates, news, and security alerts. Advance Practices: Segmentation Isolate your database server from other services using network segmentation to reduce the risk of unauthorized access. Intrusion Detection Systems (IDS). Use tools like Snort to monitor for unusual activity and detect potential intrusions in real-time. Two-Factor Authentication (2FA). Enable 2FA for administrative accounts to add an extra layer of protection against unauthorized access. Conclusion Securing a MySQL server requires a comprehensive approach, combining careful configuration, continuous monitoring, and proactive management. If you utilize these practices then your database can be safeguarded very effectively, protecting sensitive data as well as mitigating the chance of unauthorized access. You must evaluate and update the security measures regularly so that you can stay prepared for emerging threats.
24 January 2025 · 6 min to read
Kubernetes

How to Deploy PostgreSQL on Kubernetes

PostgreSQL is a popular relational database management system (RDBMS) that provides high-availability features like streaming replication, logical replication, and failover solutions. Deploying PostgreSQL on Kubernetes allows organizations to build resilient systems that ensure minimal downtime and data availability. With Kubernetes StatefulSets, you can scale PostgreSQL deployment in response to demand. Kubernetes Environment Setup To get started, make sure you have the following: Kubernetes Cluster (Cloud or Local):  You can set up a Kubernetes cluster on Hostman within no time. To follow this tutorial with a local Kubernetes cluster, you can use one of these tools: k3s, minikube, microk8s, kind. Kubectl: Kubectl allows users to interact with a Kubernetes cluster. The kubectl needs a configuration YAML file which contains cluster details and is usually provided by your cloud provider.  From the Hostman control panel, you can simply download this configuration file with a click of a button as indicated in the below screenshot. To connect, you need to set KUBECONFIG environment variable accordingly. export KUBECONFIG=/absolute/path/to/file/k8s-cluster-config.yaml Helm: You need Helm CLI to install Helm charts. Helm version 3 is required. Deploy PostgreSQL Using a Helm Chart Helm is a package manager for Kubernetes just like apt for Ubuntu and Debian. Instead of manually creating multiple YAML files for Pods, Services, Persistent Volumes, Secrets, etc., the Helm chart simplifies this to a single command (e.g., helm install), streamlining the deployment process. Step 1: Add helm repository To add the Bitnami PostgreSQL Helm repo, run this command: helm repo add bitnami https://charts.bitnami.com/bitnami To sync your local Helm repository with the remote one: helm repo update Step 2: Manage Data Persistence PostgreSQL requires persistent storage to ensure that data is preserved even if a pod crashes or is rescheduled. When a Persistent Volume Claim (PVC) is combined with a Persistent Volume (PV), Kubernetes can allocate a desired chunk of storage either in disk or cloud storage. PVC requests the Kubernetes cluster for storage space. Kubernetes then looks at the available PVs and assigns one to it. Create a file named postgres-local-pv.yaml with the YAML manifest: apiVersion: v1 kind: PersistentVolume metadata: name: postgresql-local-pv spec: capacity: storage: 5Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: manual hostPath: path: /mnt/data/postgresql This manifest creates a PersistentVolume backed by a local directory (/mnt/data/postgresql) on a specific node. This means if the node goes down or becomes unavailable, the data stored in that PV will be inaccessible, which is a critical risk in production. Therefore, it’s highly recommended to use cloud-native storage solutions instead of hostPath to ensure reliability, scalability and data protection. This PV has a reclaim policy of Retain, ensuring that it is not deleted when no longer in use by a PVC. You can set storageClassName to ceph-storage, glusterfs, portworx-sc, or openebs-standard based on your needs. Create a file named postgres-local-pvc.yaml with this text: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: postgresql-local-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi storageClassName: manual The ReadWriteOnce config means the volume can be read-write by a single node at a time. 
You might think, replacing it with ReadWriteMany will make your application highly available. This isn’t the case. ReadWriteMany (RWX) access mode allows multiple pods to access the same PersistentVolume simultaneously, this can indeed create serious issues leading to potential race conditions, data corruption, or inconsistent state. Apply these manifests using kubectl and create new resources. kubectl apply -f postgres-local-pv.yamlkubectl apply -f postgres-local-pvc.yaml Step 3: Install PostgreSQL Helm Chart Run the following command to install the Helm chart. helm install tutorial-db bitnami/postgresql --set auth.username=bhuwan \ --set auth.password=”AeSeigh2gieshe” \ --set auth.database=k8s-tutorial \ --set auth.postgresPassword=”Ze4hahshez6dop9vaing” \ --set primary.persistence.existingClaim=postgresql-local-pvc \ --set volumePermissions.enabled=true After a couple of minutes, verify if things have worked successfully with this command: kubectl get all Step 4: Test and Connect The following command runs a temporary PostgreSQL client pod. The pod connects to the database named k8s-tutorial, using the username bhuwan and the password from the environment variable $POSTGRES_PASSWORD. export POSTGRES_PASSWORD=$(kubectl get secret --namespace default tutorial-db-postgresql -o jsonpath="{.data.password}" | base64 -d) kubectl run tutorial-db-postgresql-client --rm --tty -i --restart='Never' \ --image docker.io/bitnami/postgresql:17.2.0-debian-12-r6 \ --env="PGPASSWORD=$POSTGRES_PASSWORD" \ --command -- psql --host tutorial-db-postgresql \ -U bhuwan -d k8s-tutorial -p 5432 After the session ends, the pod will be deleted automatically due to the --rm flag. A quick reminder, if you have changed the Helm chart release name, users, or database name, adjust the above commands accordingly. Deploy Postgres on Kubernetes from scratch A StatefulSet is the best Kubernetes resource for deploying stateful applications like PostgreSQL. This way, every PostgreSQL pod gets its own stable network identities and persistent volumes. Note: you’ll be using a previously created Persistent Volume Claim (PVC) and Persistent Volume(PV). So, do some cleanup and recreate those resources. helm delete tutorial-db kubectl delete pvc postgresql-local-pvc kubectl delete pv postgresql-local-pv kubectl apply -f postgres-local-pv.yaml -f postgres-local-pvc.yaml Create a file named postgres-statefulset.yaml with the following text: apiVersion: apps/v1 kind: StatefulSet metadata: name: postgres-statefulset labels: app: postgres spec: serviceName: "postgresql-headless-svc" replicas: 1 selector: matchLabels: app: postgres template: metadata: labels: app: postgres spec: containers: - name: postgres image: postgres:17.2 envFrom: - secretRef: name: postgresql-secret ports: - containerPort: 5432 name: postgresdb volumeMounts: - name: pv-data mountPath: /var/lib/postgresql/db volumes: - name: pv-data persistentVolumeClaim: claimName: postgresql-local-pvc Before you can apply these changes, create a new Secret for handling sensitive details like passwords with kubectl. kubectl create secret generic postgresql-secret --from-literal=POSTGRES_PASSWORD=Ze4hahshez6dop9vaing kubectl apply -f postgres-statefulset.yaml If the pod gets stuck with Pending state, you can try creating a StorageClass with the following manifest. 
kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: manual provisioner: kubernetes.io/no-provisioner volumeBindingMode: WaitForFirstConsumer To investigate any further issues with the pod, you can use the command: kubectl describe pod postgres-statefulset-0 This command will report any issues related to scheduling the pod to a node, mounting volumes, or resource constraints. Databases like PostgreSQL are typically accessed internally by other services or applications within the cluster, so it's better to create a Headless service for it. Create a file called postgres-service.yaml and include the following YAML manifest: apiVersion: v1 kind: Service metadata: name: postgresql-headless-svc spec: type: ClusterIP selector: app: postgres ports: - port: 5432 targetPort: 5432 clusterIP: None Finally, you can test the connection with kubectl run. kubectl run tutorial-db-postgresql-client --rm --tty -i --restart='Never' \ --image docker.io/bitnami/postgresql:17.2.0-debian-12-r6 \ --env="PGPASSWORD=Ze4hahshez6dop9vaing" \ --command -- psql --host postgres-statefulset-0.postgresql-headless-svc \ -U postgres -p 5432 Scale, Replication, and Backup To scale up a Statefulset, simply pass the number of replicas with --replicas flag.  kubectl scale statefulset postgres-statefulset --replicas=3  To reach replicas, you can make use of headless service. For instance, with hostname postgres-statefulset-1.postgresql-headless-svc you can send requests to pod 1. For handling backups, you can use CronJob with the pg_dump utility provided by PostgreSQL. Best Practices Throughout the tutorial, the decision to handle passwords via Kubernetes Secret, using StatefulSet instead of Deployment was a good move. To make this deployment even more secure, reliable, and highly available, here are some ideas: Set Resource Requests and Limits: Set appropriate CPU and memory requests and limits to avoid over-provisioning and under-provisioning. Backups: Use Kubernetes CronJobs to regularly back up your PostgreSQL data. Consider implementing Volume Snapshots as well. Monitoring and Log Postgresql: You can use tools like Prometheus and Grafana to collect and visualize PostgreSQL metrics, such as query performance, disk usage, and replication status. Use Pod Disruption Budgets (PDBs): If too many PostgreSQL pods are disrupted at once (e.g., during a rolling update), it can lead to database unavailability or replication issues. Conclusion Helm chart is the recommended way of complex and production deployment. Helm provides an automated version manager alongside hiding the complexities of configuring individual Kubernetes components. Using the Helm template command, you can even render the Helm chart locally and make necessary adjustments with its YAML Kubernetes manifests. Kubernetes provides scalability, flexibility, and ease of automation for PostgreSQL databases. By leveraging Kubernetes features like StatefulSets, PVCs, PDBs, and secrets management, you can ensure that your PostgreSQL database is tuned for the production environment.
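The backup section above mentions running pg_dump from a Kubernetes CronJob without showing one, so here is a minimal sketch. It reuses the postgresql-secret and the postgres-statefulset-0.postgresql-headless-svc address from this tutorial; the schedule and the emptyDir destination are placeholders, and in practice you would write the dump to a PVC or object storage:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
spec:
  schedule: "0 2 * * *"   # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: pg-dump
              image: postgres:17.2
              envFrom:
                - secretRef:
                    name: postgresql-secret   # provides POSTGRES_PASSWORD
              command: ["/bin/sh", "-c"]
              args:
                - PGPASSWORD="$POSTGRES_PASSWORD" pg_dump -h postgres-statefulset-0.postgresql-headless-svc -U postgres postgres > /backups/backup-$(date +%F).sql
              volumeMounts:
                - name: backup-volume
                  mountPath: /backups
          volumes:
            - name: backup-volume
              emptyDir: {}   # replace with durable storage in production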
24 January 2025 · 8 min to read
Node.js

How to Update Node.js Version

Node.js stands as a robust JavaScript runtime environment powered by Chrome's V8 engine. With its capabilities, developers can construct scalable network applications with simplicity. Renowned for its event-driven, non-blocking architecture, it’s perfect for creating real-time applications. Regularly refreshing your tools ensures access to the newest features, security fixes, and performance gains. Updating Node.js regularly is crucial for the stability and security of projects, regardless of their scale. This all-inclusive guide will navigate you through diverse methods to update Node.js, covering everything from package managers to hands-on manual installations. Method 1: Via NVM Node Version Manager, abbreviated as NVM, is extensively employed for seamless handling of various Node.js versions. Its flexibility and user-friendly interface make it particularly popular among developers. This tool facilitates easy switching between node versions, perfect for projects that demand particular Node.js versions. Why Use NVM? Flexibility: Easily shift from one node version to another. Convenience: Handle installations, updates, and management of various versions effortlessly. Isolation: Isolates different versions to minimize conflicts. Step-by-Step Guide Adhere to these guidelines to set up and utilize NVM for node version management. Install NVM Initiate a terminal session and input: curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash Then configure NVM in your shell profile like Bash: export NVM_DIR="$HOME/.nvm" [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion Next, confirm the NVM installation with: nvm --version Update Version List available versions first, then proceed with installation: nvm ls-remote Subsequently, install the latest release via: nvm install node Then, set the newly installed version as the default: nvm alias default node Validate the installation through: node -v Update npm To guarantee superior performance and safety, keep npm up-to-date alongside node: npm install -g npm Lastly, validate the updated npm version for confirmation via: npm -v Switching Node Versions First, list out the installed versions through: nvm ls Next, switch to another version: nvm use <version> Insert the required version number in place of <version>. Method 2: Via Node.js Binary Packages Direct installation from Node.js official binaries is hassle-free and secures the specific release straight from the source. Why Use Binary Packages? Simplicity: Achieve direct installation without using additional tools. Consistency: Backed by the official Node.js development team. Reliability: Guarantees the exact release comes from the official source. Step-by-Step Guide Adopt these guidelines to update Node.js through binary packages. Obtain the Binary Package Access the Node.js website and grab the binary package compatible for your particular OS. Install Node.js Finish the installation by adhering to the guidelines specific to your OS: Windows: Run the installer and finish by following the on-screen steps. macOS: Execute the .pkg file and continue via the setup process. Linux: Unpack the tarball and transfer the contents to /usr/local. 
tar -xvf node-v<version>-linux-x64.tar.xz sudo cp -r node-v<version>-linux-x64/bin /usr/local/ Next, access the .bashrc file: nano ~/.bashrc Then, insert the lines below: export PATH="/usr/local/bin:$PATH" Save the file and use source to update settings: source ~/.bashrc Verify Installation Validate the installation via: node -v Check npm release: npm -v Method 3: Via Package Managers Homebrew and apt facilitate Node.js installation and updates by handling dependency and version management. Why Use Package Managers? Ease of Use: Simplifies installation and updates. Automation: Handles dependencies and version management. Integration: Easily integrates with other software and systems. Step-by-Step Guide Apply these procedures to upgrade Node.js via package managers. Via Homebrew (macOS) Employ the instructions on the Homebrew website if not already installed. Enter the subsequent command: brew install node For upgrading the existing installation: brew upgrade node Validate the current installation: node -v Via apt (Ubuntu/Debian) Include the node’s PPA in your system setup: curl -fsSL https://deb.nodesource.com/setup_23.x | sudo -E bash - Update 23.x to the desired release. Enter the subsequent command: sudo apt install nodejs Check that the node release is correct: node -v Method 4: Via Node.js Version Manager (n) The n package offers an efficient method for managing Node.js versions, serving as an alternative to NVM. This tool transforms the way you install, switch, and maintain different node versions on your system. Why Use n? Efficiency: Fast and lightweight. Simplicity: User-friendly and requires minimal commands. Control: Complete control over the versions set up. Step-by-Step Guide Here's how to configure and use the n package for managing node versions. Install n Run npm to globally install the n package: npm install -g n Install or Update Node.js Install the latest node release: sudo n latest Install a specific release: sudo n <version> Change <version> to the required release number. Verify Installation Confirm version: node -v Method 5: Manual Installation Manual installation suits advanced users seeking complete control over the setup. Why Use Manual Installation? Control: Complete authority over the installation workflow. Customization: Customize the build and installation settings. Step-by-Step Guide Adhere to these guidelines for manual installation: Install Dependencies Install essential dependencies first via: sudo apt install build-essential gcc g++ make Download Source Code Head over to the official website and obtain the source code. wget https://nodejs.org/download/release/v23.6.0/node-v23.6.0.tar.gz Build and Install Extract the source code: tar -xvf node-v<version>.tar.gz Replace <version> with the version number. Navigate to the extracted directory: cd node-v<version> Configure and compile the source code: ./configure make If you encounter dependency errors while running the above command, use your default package manager to install them. Install the tool: sudo make install Verify Installation Confirm version: node -v Additional Resources For in-depth information on Node.js updates and management, consider these resources: Documentation: Comprehensive resource for all Node.js things. NVM GitHub Repository: Extensive support and information for NVM. n GitHub Repository: Information on applying the n package for node version control. Release Notes: Remain informed about the newest enhancements and updates. 
Best Practices Regular Checks: Make it a habit to regularly check for node updates to ensure your environment is always up-to-date. Backup: Always create backups of your projects and important data before updating to safeguard against any loss during the upgrade. Testing: Upon completing the update, thoroughly test your applications to verify they function correctly with the new node release. Documentation: Keep your project documentation in sync with the latest Node.js release and note any changes from the update. Fixing Common Problems Running into problems while updating Node.js? Discover frequent issues and effective fixes: Issue 1: Version Not Changing  In case the node version stays the same after updating, attempt the following: Clear npm cache: npm cache clean -f Reinstall node using the desired method. Issue 2: Permission Errors  If permission problems occur, use sudo for elevated command execution: sudo npm install -g n Issue 3: Dependency Conflicts  Sometimes, updating the application can result in conflicts with dependencies in your current projects.  To resolve this: Utilize nvm to pick the necessary release for each project. Ensure dependencies in your project are updated to align with the new Node.js release. Conclusion There are various methods available to update Node.js, each suited to specific needs and preferences. Whether it's NVM, binary packages, package managers, the n package, or manual installation, you can keep your dev environment up-to-date with the newest features and security fixes. Frequent updates are always a key factor to maintaining top performance and safety. Consistently update Node.js to benefit from the latest features, security enhancements, and performance boosts for robust and secure applications. Using this in-depth guide, you can expertly handle and update your node installations according to your specific needs and preferences. In addition, check out our platform as a service to deploy various Node.js frameworks, such as Express, Fastify, Hapi and Nest.
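One convenient way to apply the per-project approach from Issue 3 is to pin the Node.js version in an .nvmrc file at the project root; nvm reads this file automatically. A small sketch, with the version number only as an example:

echo "22" > .nvmrc   # record the version this project is tested against
nvm install          # installs the version from .nvmrc if it is missing
nvm use              # switches the current shell to that version
node -v              # confirm the active version

Team members and CI jobs can then run nvm use in the project directory and get the same runtime without coordinating versions manually.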
23 January 2025 · 7 min to read
Linux

How to Rename Files in Linux

Visualize yourself as a Linux expert, skillfully navigating files and directories. One day, you find yourself needing to alter the names of numerous files. Perhaps you're organizing documents, changing photos names from a vacation, or managing code files for a project. Renaming each file manually seems daunting and time-consuming. What do you do? The Linux environment offers various strong tools to make this task easy and effective. Whether dealing with a single file or a directory full of them, the system offers various ways to change files names quickly and easily. Here, we'll explore a range of ways to rename files in the Linux environment.  Method 1: Via the mv Command Changing file names in Linux is usually accomplished via the mv command, which is both simple and widely adopted. Besides changing file names, it can also be employed to move files. The primary syntax is: mv [options] source target Where: source is the existing name or path of the file or directory you aim to rename or move. target refers to the updated name or destination path for the file or directory. Changing a Filename with mv Adhere to the following steps to change a filename with mv: Launch your terminal application. Enter the directory where the file you wish to change is located: cd /path/to/directory Employ mv to change the filename: mv oldfilename newfilename Update oldfilename to match the current name and newfilename to reflect the new name. Check the directory files to ensure their names are changed: ls Other Options To prevent existing files from being overwritten, apply: mv -n oldfilename newfilename For transferring files to another directory while modifying their names, utilize: mv oldfilename /newpath/newfilename To change directories name, apply: mv olddirectory newdirectory Method 2: Via the rename Command For bulk files, rename surpasses mv in functionality. It can change multiple filenames in a single command and accommodates complex patterns with regular expressions. Below is the standard format for employing the command: rename [options] 's/oldpattern/newpattern/' files Where: 's/oldpattern/newpattern/': A substitution pattern where oldpattern is the text you want to replace, and newpattern is the text you want to substitute in. files: The files you want to apply the rename operation to. rename Installation on Linux Some Linux distributions don't come with this utility pre-installed. Employ the package manager for installation. On Debian/Ubuntu: sudo apt install rename On CentOS/RHEL: sudo yum install prename Changing a Filename with rename Launch the terminal and go to the target folder: cd /path/to/directory Next, run rename with a regex pattern to adjust multiple file names: rename 's/oldpattern/newpattern/' * Replace oldpattern with the pattern you want to modify and newpattern with the updated pattern. To update all .txt filenames to .md in a directory, utilize: rename 's/\.txt$/\.md/' *.txt Additional rename Options Start filename with a prefixed text: rename 's/^/prefix_/' * Append a suffix to the filenames: rename 's/$/_suffix/' * Real-time filename display while renaming: rename -v 's/oldpattern/newpattern/' * Update the filename even if the target file already exists: rename -f 's/oldpattern/newpattern/' * Previews the actions without executing any modifications: rename -n 's/oldpattern/newpattern/' * Method 3: Via Bash Script To perform more advanced file name changes, consider using a bash script. 
This technique enables sophisticated file name changes and automates frequent renaming operations. Open your terminal and create a new script file to start writing a bash script: nano rename_files.sh Proceed by adding this code to the script file: #!/bin/bashfor file in *.txt; domv "$file" "${file%.txt}.md"done This script changes all .txt filenames to .md files. Save the file and grant it executable permissions: sudo chmod +x rename_files.sh Run the script to change filenames: ./rename_files.sh Method 4: Via the find Command with mv find and mv together offer a precise way to update multiple file names based on detailed conditions. Using this method, you can pinpoint specific files based on criteria like name patterns, size, and modification date. Further commands can be combined to create powerful file modification operations. Use this template to update file names with find and mv: find . -name "oldpattern" -exec mv {} newpattern \; Additional Options Change the file names larger than 1MB: find . -size +1M -exec mv {} newname \; Modify file names in the last 7 days: find . -mtime -7 -exec mv {} newname \; Method 5: Via the mmv Command The mmv command is a powerful tool designed to simplify batch renaming of files through its advanced pattern matching capabilities. This command allows you to change multiple filenames at once by specifying patterns and replacement strings. It makes it ideal for handling large numbers of files that follow a specific naming convention.  The syntax is: mmv [options] source target Changing a Filename with mmv Get mmv ready for use by installing it through the default package manager: sudo apt install mmv Utilize mmv alongside patterns for effective filename modification: mmv oldpattern newpattern Additional mmv Options Utilize this command to add a prefix to every file in a directory: mmv '*' 'prefix#1' Exhibit the names of files as they get modified: mmv -v '*.txt' 'prefix_#1.txt' Method 6: Via GUI For those who favor a graphical interface, various Linux desktop environments offer tools for effortless file name changes. This approach is especially beneficial for users who aren't as familiar with command-line tasks. Follow this procedure to change file names through a graphical tool: Launch your file manager application. The name and appearance may vary depending on your desktop environment (e.g., Nautilus for GNOME, Dolphin for KDE, Thunar for XFCE). Open the file manager/explorer and head to the folder with the files. Right-click the file you plan to edit and pick "Rename" or "Edit Name" from the contextual menu that appears. Type the new name, then press Enter or select "Rename" to apply the update. Bulk file name change procedures may differ somewhat based on your file manager: Hold the Ctrl key and click on each file you want to change to select them. Select "Rename" by right-clicking on any of the files you've picked. Confirm the updates and check that the files are adjusted as desired. Best Practices for File Naming Conventions Consistent file naming conventions can significantly improve the ease of managing files and enhance overall organization. This section outlines best practices for naming files. Use Descriptive Names Choose names that are clear and descriptive, highlighting the file's content, purpose, or creation date. For example, replace doc1.txt with project_report_Jan2025.txt. 
Avoid Special Characters Refrain from including special characters (such as !, @, #, $, %, ^, &, and *) in filenames, since they can cause complications in file management and scripts. Use Underscores or Hyphens Replace spaces with underscores (_) or hyphens (-) in filenames to ensure they work seamlessly across different systems and scripts. For instance, use project_report_Jan2025.txt instead of project report Jan 2025.txt. Guidelines for Changing Filename in Linux Backup First: Back up your files first before performing extensive name changes. Test Changes: Test the changes on a handful of files first. Careful Use of Wildcards: Use wildcards carefully to prevent unintentional file modifications. Conclusion There are several approaches to changing file names in Linux, each tailored to different user preferences. Single-file tasks suit mv; for bulk operations, choose rename or mmv. Advanced customization can be achieved with Bash scripts and command combinations, whereas GUI tools present a more user-friendly choice. This in-depth guide will ensure you’re capable of executing any filename changing task smoothly in a Linux environment.  By mastering these tools and techniques, you can significantly enhance your efficiency and productivity when managing files in Linux. Understanding how to use these commands not only saves time but also reduces the risk of errors that can occur with manual renaming. In addition, Hostman provides Linux VPS web hosting services to empower your applications. 
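As a small illustration of the "test changes first" guideline, you can preview a bulk rename with a shell loop before applying it. This sketch adds a vacation_ prefix to .jpg files; the prefix and extension are only examples:

# Preview: print the commands without executing them
for f in *.jpg; do
  echo mv -n "$f" "vacation_$f"
done

# Apply once the preview looks right; -n refuses to overwrite existing files
for f in *.jpg; do
  mv -n "$f" "vacation_$f"
done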
23 January 2025 · 7 min to read
Docker

Converting a Container to a Virtual Machine

A tricky question often asked during technical interviews for a DevOps engineer position is: "What is the difference between a container and a virtual machine?" Most candidates get confused when answering this question, and some interviewers themselves don’t fully understand what kind of answer they want to hear. To clearly understand the differences and never have to revisit this question, we will show you how to convert a container into a virtual machine and run it in the Hostman cloud. The process described in this article will help better understand the key differences between containers and virtual machines and demonstrate each approach's practical application. This article will be especially useful for working with systems requiring a specific environment. We will perform all further actions in a Linux OS environment and use a virtual machine based on the KVM hypervisor created with VirtualBox to prepare the necessary image. You can also use other providers such as VMware, QEMU, or virt-manager. Configuration of Our Future Virtual Machine Let’s start this exciting journey by creating a container. For this, we will use Docker. If it is not installed yet, install it using the command below (before that, you may need to update the list of available packages with sudo apt update): sudo apt install docker.io -y Create a container based on the minimal Alpine image and attach to its shell: sudo docker run --name test -it alpine sh Install the necessary programs using the apk package manager that you plan to use in the future virtual machine. You don’t necessarily have to limit yourself to packages from the standard Alpine repository — you can also add other repositories or, if needed, download or compile packages directly in the container. apk add tmux busybox-extras openssh-client openssh-server iptables dhclient ppp socat tcpdump vim openrc mkinitfs grub grub-bios Here’s a list of minimally required packages: tmux — a console multiplexer. It will be useful for saving user sessions and the context of running processes in case of a network disconnect. busybox-extras — an extended version of BusyBox that includes additional utilities but remains a compact distribution of standard tools. openssh-client and openssh-server — OpenSSH client and server, necessary for setting up remote connections. iptables — a utility for configuring IP packet filtering rules. dhclient — a DHCP client for automating network configuration. ppp — a package for implementing the Point-to-Point Protocol. socat — a program for creating tunnels, similar to netcat, with encryption support and an interactive shell. tcpdump — a utility for capturing traffic. Useful for debugging network issues. vim — a console text editor with rich customization options. It is popular among experienced Linux users. openrc — an initialization system based on dependency management that works with SysVinit. It’s a key component needed to convert a container into a virtual machine, as containers do not have it by default. mkinitfs — a package for generating initramfs, allowing you to build necessary drivers and modules that are loaded during the initial system initialization. grub and grub-bios — OS bootloader. In this case, we are specifically interested in creating a bootloader for BIOS-based systems using an MBR partition table. Set the root password: export PASSWORD=<your secret password>  echo "root:$PASSWORD" | chpasswd   Create a user. 
You will need it for remote SSH access later: export USERNAME=<username>  adduser -s /bin/sh $USERNAME   Set the SUID bit on the executable file busybox. This is necessary so that the user can execute commands with superuser privileges: chmod u+s /bin/busybox   Create a script to be executed during system initialization: cat <<EOF > /etc/local.d/init.start #!/bin/sh dmesg -n 1 mount -o remount,rw / ifconfig lo 127.0.0.1 netmask 255.0.0.0 dhclient eth0 # ifconfig eth0 172.16.0.200 netmask 255.255.255.0 # route add -net default gw 172.16.0.1 busybox-extras telnetd EOF Let’s go through the script line by line: dmesg -n 1 — Displays critical messages from the Linux kernel's message buffer so that potential issues can be detected during startup. mount -o remount,rw / — Remounts the root file system (/) with the rw (read-write) flag. This allows modifications to the file system after boot. ifconfig lo 127.0.0.1 netmask 255.0.0.0 — Configures the loopback interface (lo) with IP address 127.0.0.1 and subnet mask 255.0.0.0. This ensures internal network communication on the machine. dhclient eth0 — Runs the DHCP client for the eth0 interface to automatically obtain IP address settings and other network parameters from a DHCP server. # ifconfig eth0 172.16.0.200 netmask 255.255.255.0 — This line is commented out, but if uncommented, it will assign a static IP address 172.16.0.200 and subnet mask 255.255.255.0 to the eth0 interface. We included this line in the script in case a static network configuration is needed. # route add -net default gw 172.16.0.1 — This line is also commented out, but if uncommented, it will add a default route with gateway 172.16.0.1. This determines how packets will be routed outside the local network. busybox-extras telnetd — Starts the Telnet server. Please note that using the Telnet protocol in production environments is not recommended due to the lack of encryption for data transmission. Make the script executable: chmod +x /etc/local.d/init.start Add the script to the autostart: rc-update add local Add the OpenSSH server daemon to the autostart. This will allow you to connect to the cloud server via SSH later: rc-update add sshd default Set the default DNS server: echo nameserver 8.8.8.8 > /etc/resolv.conf Exit the terminal using the exit command or the keyboard shortcut CTRL+D. The next step is to save the container's file system to the host as an archive, which can also be done using Docker. In my case, the final artifact is only 75 megabytes in size. sudo docker export test > test.tar Transforming a Docker Image into a Virtual Machine Image Containers are a Linux-specific technology since they don't have their own kernel and instead rely on abstractions of the host's Linux kernel to provide isolation and resource management. The key abstractions include: namespaces: isolation for USER, TIME, PID, NET, MOUNT, UTS, IPC, CGROUP namespaces. cgroups: limitations on resources like CPU, RAM, and I/O. capabilities: a set of capabilities for executing specific privileged operations without superuser rights. These kernel components make Docker and other container technologies closely tied to Linux, meaning they can't natively run on other operating systems like Windows, macOS, or BSD. For running Docker on Windows, macOS, or BSD, there is Docker Desktop, which provides a virtual machine with a minimal Linux-based operating system kernel. 
Docker Engine is installed and running inside this virtual machine, enabling users to manage containers and images in their usual environment. Since we need a full operating system and not just a container, we will require our own kernel. Create the image file we will work with: truncate -s 200M test.img Use fdisk to create a partition on the test.img image: echo -e "n\np\n1\n\n\nw" | fdisk test.img n — create a new partition p — specify that this will be a primary partition 1 — the partition number \n\n — use default values for the start and end sectors w — write changes Associate the test.img file with the /dev/loop3 device, starting from an offset of 2048 blocks (1 MB): sudo losetup -o $[2048*512] /dev/loop3 test.img Note that /dev/loop3 may already be in use. You can check used devices with: losetup -l Format the partition linked to /dev/loop3 as EXT4: sudo mkfs.ext4 /dev/loop3 Mount the partition at /mnt: sudo mount /dev/loop3 /mnt Extract the Docker image (test.tar) into the /mnt directory: sudo tar xvf test.tar -C /mnt Create the /mnt/boot directory to store the bootloader and kernel files: sudo mkdir -pv /mnt/boot Download the Linux kernel source code: wget https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-6.8.9.tar.xz Extract the Linux kernel source code in the current directory: tar xf linux-6.8.9.tar.xz Install the necessary packages for building the Linux kernel: sudo apt install git fakeroot build-essential ncurses-dev xz-utils libssl-dev bc flex libelf-dev bison -y Navigate to the kernel source directory and create the default configuration file: cd linux-6.8.9make defconfig Add necessary configuration options to the .config file: echo -e "CONFIG_BRIDGE=y\nCONFIG_TUN=y\nCONFIG_PPP=y\nCONFIG_PPP_ASYNC=y\nCONFIG_PPP_DEFLATE=y" >> .config CONFIG_BRIDGE=y — Enables network bridge support, allowing multiple network interfaces to be combined into one. CONFIG_TUN=y — Enables support for virtual network interfaces like TUN/TAP, useful for VPN setups. CONFIG_PPP=y — Enables support for the Point-to-Point Protocol (PPP). CONFIG_PPP_ASYNC=y — Enables asynchronous PPP for serial ports. CONFIG_PPP_DEFLATE=y — Enables PPP data compression using the DEFLATE algorithm. Prepare the source code for building: make prepare -j4 Create the necessary scripts, build the compressed kernel image (bzImage) and the kernel modules: make scripts -j4make bzImage -j4make modules -j4 Install the built kernel and modules into the /mnt/boot directory (which contains the virtual machine image filesystem): sudo make INSTALL_PATH=/mnt/boot installsudo make INSTALL_MOD_PATH=/mnt modules_install Install the GRUB bootloader into the /mnt/boot directory. Make sure you're in the directory containing the test.img file: sudo grub-install --target=i386-pc --boot-directory=/mnt/boot/test.img --modules='part_msdos' Bind-mount the host system’s /proc, /sys, and /dev directories to the /mnt directory. This is necessary for creating the initramfs: sudo mount --bind /proc /mnt/proc/sudo mount --bind /sys /mnt/sys/sudo mount --bind /dev /mnt/dev/ Change root (chroot) into the /mnt filesystem using a shell: sudo chroot /mnt /bin/sh Generate the initial RAM filesystem (initramfs) for the kernel version you are working with: mkinitfs -k -o /boot/initrd.img-6.8.9 6.8.9 Generate the GRUB bootloader configuration file: grub-mkconfig -o /boot/grub/grub.cfg By completing these steps, you will have created a small virtual machine image with a fully working Linux kernel, a bootloader (GRUB), and an initramfs. 
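Before moving on to local verification, it is worth leaving the chroot and detaching everything that was mounted for the build; otherwise the image file may still be held open. A short cleanup sketch, assuming the same mount points and loop device used above (the loop device number may differ on your system):

exit                                        # leave the chroot shell
sudo umount /mnt/dev /mnt/sys /mnt/proc     # release the bind mounts
sudo umount /mnt                            # unmount the image partition
sudo losetup -d /dev/loop3                  # detach the loop device from test.img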
Local Verification of the Built Image For local verification, it’s most convenient to use QEMU. This package is available for Windows, macOS, and Linux. Install it by following the instructions for your OS on the official website. Convert the test.img to the qcow2 format. This will reduce the size of the final image from 200 MB to 134 MB. qemu-img convert test.img -O qcow2 test.qcow2 Run the image using QEMU. qemu-system-x86_64 -hda test.qcow2 If all steps were completed correctly, the initialization process will be successful, and an interactive menu for entering the login and password will appear. To check the version of the installed kernel, use the uname -a command, which will output the necessary information. Creating a Virtual Machine in Hostman Go to the Cloud Servers section and start creating a new server. Select the prepared and tested image as the server’s base. To do this, first add it to the list of available images. Supported formats include: iso, qcow2, vmdk, vhd, vhdx, vdi, raw, img. Upload the image in one of the available ways: from your computer or by link. Note that after uploading, the image will also be available via URL. Continue with the creation of the cloud server and specify the other parameters of its configuration. Since the image is minimal, it can be run even on the smallest configuration. Once the cloud server is created, go to the Console tab and verify whether the virtual machine was successfully created from the image. The virtual machine has been created and works correctly. Since we added the OpenSSH daemon to the autostart in advance, it is now possible to establish a full remote connection to the server using the username, IP address, and password. Conclusion To turn a container into a full-fledged lightweight virtual machine, we sequentially added key components: the OpenRC initialization system, GRUB bootloader, Linux kernel, and initramfs. This process highlighted the importance of each component in the overall virtual machine architecture and demonstrated the practical differences from container environments. As a result of this experiment, we realized the importance of understanding the architecture and functions of each component to successfully create images for specific needs and to manage virtual machines more effectively from a resource perspective. The image built in this article is quite minimal since it is a Proof-of-Concept, but one can go even further. For example, you could use a special guide to minimize the kernel and explore minimal Linux distributions such as Tiny Core Linux or SliTaz. On the other hand, if your choice is to add functionality by increasing the image size, we strongly recommend checking out the Gentoo Wiki. This resource offers extensive information on fine-tuning the system.
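As a small addition to the local verification step, QEMU's user-mode networking can forward a host port to the guest's SSH port, letting you test the remote login before uploading the image. This is only a sketch; the host port 2222 is arbitrary:

qemu-system-x86_64 -hda test.qcow2 -nic user,hostfwd=tcp::2222-:22

# then, from another terminal on the host:
ssh -p 2222 <username>@127.0.0.1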
22 January 2025 · 11 min to read
Docker

How to Create and Optimize Docker Images

In today's environment, most companies actively use the Docker containerization system in their projects, especially when working with microservice applications. Docker allows you to quickly deploy any applications, whether monolithic or cloud-native. Despite the simplicity of working with Docker, it's important to understand some nuances of creating your own images. In this article, we will explore how to work with Docker images and optimize them using two different applications as examples. Prerequisites To work with the Docker containerization system, we will need: A cloud server or a virtual machine with any pre-installed Linux distribution. We will be using Ubuntu 22.04. Docker installed. See our installation guide.  You can also use a pre-configured image with Docker. To do this, go to the Cloud servers section in your Hostman control panel, click Create server, and select Docker in the Marketplace tab. Working with Docker Images Docker images are created by other users and stored in registries—special repositories for images. Registries can be public or private. Public repositories are available to all users without requiring authentication. Private registries, however, can only be accessed by users with appropriate login credentials. Companies widely use private repositories to store their own images during software development. By default, Docker uses the public registry Docker Hub, which any user can use to publish their own images or download images created by others. When a user runs a command such as docker run, the Docker daemon will, by default, contact its standard registry. If necessary, you can change the registry to another one. To create custom Docker images, a Dockerfile is used—a text file containing instructions for building an image. These instructions use 18 specially reserved keywords. The most common types of instructions include the following: FROM specifies the base image. Every image starts with a base image. A base image refers to a Linux distribution, such as Ubuntu, Debian, Oracle Linux, Alpine, etc. There are also many images with various pre-installed software, such as Nginx, Grafana, Prometheus, MySQL, and others. However, even when using an image with pre-installed software, some Linux OS distribution will always be specified inside. WORKDIR creates a directory inside the image. Its functionality is similar to the mkdir utility used to create directories in Linux distributions. It can be used multiple times in one image. COPY copies files and directories from the host system into the image. It is used to copy configuration files and application source code files. ADD is similar to the COPY instruction, but in addition to copying files, ADD allows downloading files from remote sources and extracting .tar archives. RUN executes commands inside the image. With RUN, you can perform any actions that a user can perform in a Bash shell, including creating files, installing packages, starting services, etc. CMD specifies the command that will be executed when the container is started. Example: Creating an Image As an example, we will create an image with a simple Python program. 
Create a project directory and move into it: mkdir python-calculator && cd python-calculator Create a file console_calculator.py with the following content: print("*" * 10, "Calculator", "*" * 10) print("To exit from program type q") try: while True: arithmetic_operators = input("Choose arithmetic operation (+ - * /):\n") if arithmetic_operators == "q": break if arithmetic_operators in ("+", "-", "*", "/"): first_number = float(input("First number is:\n")) second_number = float(input("Second number is:\n")) print("The result is:") if arithmetic_operators == "+": print("%.2f" % (first_number + second_number)) elif arithmetic_operators == "-": print("%.2f" % (first_number - second_number)) elif arithmetic_operators == "*": print("%.2f" % (first_number * second_number)) elif arithmetic_operators == "/": if second_number != 0: print("%.2f" % (first_number / second_number)) else: print("You can't divide by zero!") else: print("Invalid symbol!") except (KeyboardInterrupt, EOFError) as e: print(e) Create a new Dockerfile with the following content: FROM python:3.10-alpine WORKDIR /app COPY console_calculator.py . CMD ["python3","console_calculator.py"] For the base image, we will use python:3.10, which is based on a lightweight Linux distribution called Alpine. We will discuss the use of Alpine in more detail in the next chapter. Inside the image, we will create a directory app, where the project file will be located. The container will be launched using the command "python3", "console_calculator.py". To build the image, the docker build command is used. Each image must also be assigned a tag. A tag is a unique identifier that can be assigned to an image. The tag is specified using the -t flag: docker build -t python-console-calculator:01 . The period at the end of the command indicates that the Dockerfile is located in the current directory. You can display the list of created images using: docker images To launch the container, use:  docker run --rm -it python-console-calculator:01 Let's test the functionality of the program by performing a few simple arithmetic operations: To exit the program, you need to press the q key. Since we specified the --rm flag when starting the container, the container will be automatically removed. You can also run the container in daemon mode, i.e., in the background. To do this, include the -d flag when starting the container: docker run -dit python-console-calculator:01 After that, the container will appear in the list of running containers: When starting the container in the background to access our script, you need to use docker exec, which executes a command inside the container. First, you need to start a shell (bash or sh), then manually run the script inside the container. To do this, use the docker exec command, passing the sh command as an argument to open the shell inside the container (where 4f1b8b26c607 is the unique container ID displayed in the CONTAINER ID column of the docker ps output): docker exec -it 4f1b8b26c607 sh Then, run the script manually: python console_calculator.py To remove a running container, you need to use the docker rm command and pass the container's ID or name. You also need to use the -f flag, which will force the removal of a running container: docker rm -f 186e8f43ca60 Optimizing Docker Images When creating Docker images, there is one main rule: finished images should be compact and occupy as little space as possible. Additionally, the smaller the image, the faster it is built. 
This can play a key role when using CI/CD methods or when releasing software in the Time to Market model. Proper Selection of the Base Image As the first recommendation, it's important to choose the base image wisely. For example, instead of using various Linux distribution images like Ubuntu, Oracle Linux, Rocky Linux, and many others, you can directly choose an image that already comes with the required programming language, framework, or other necessary technology. Examples of such images include: node for working with the Node.js platform A pre-built image with Nginx ibmjava for working with the Java programming language postgres for working with the PostgreSQL databases redis for working with the NoSQL Redis Using a specific image instead of an operating system image has the following advantages: There is no need to install the main tool (programming language, framework, etc.), so the image won't be "cluttered" with unnecessary packages, preventing an increase in size. Images that come with pre-installed software (like Nginx, Redis, PostgreSQL, Grafana, etc.) are always created by the developers of the software themselves. This means that users do not need to configure the program to run it (except in cases where it needs to be integrated with their service). Let's consider this recommendation with a practical example. We will use a simple Python program that prints "Hello from Python!".  First, we will build an image using debian as the base image. Create and navigate to the directory where the project files will be stored: mkdir dockerfile-python && cd dockerfile-python Create the test.py file with the following content: print("Hello from Python!") Next, create a Dockerfile with the following content: FROM debian:latest COPY test.py . RUN apt update RUN apt -y install python3 CMD ["python3", "test.py"] To run Python programs, you also need to install the Python interpreter. Then, build the image: docker build -t python-debian:01 . Let’s check the Docker image size:  docker images The image takes up 185MB, which is quite a lot for an application that just prints a single line to the terminal. Now, let's choose the correct base image, which is based on the Alpine distribution. Another feature of base images is that for many images, there are special versions in the form of slim and alpine images, which are even smaller. Let's look at the example of the official Python 3.10 image. The python:3.10 image takes up a whole 1 GB, whereas the slim version is much smaller—127 MB. And the alpine image is only 50 MB. Slim images are images that contain the minimum set of packages necessary to run a finished application. These images lack most packages and libraries. Slim images are created from both regular Linux distributions (such as Ubuntu or Debian) and Alpine-based distributions. Alpine images are images that use the Alpine distribution as the operating system— a lightweight Linux distribution that takes up about 5 MB of disk space (without the kernel). It differs from other Linux distributions in that it uses a package manager called apk, lacks the system initialization system, and has fewer pre-installed programs. When using both slim and Alpine images, it is essential to thoroughly test your application, as the required packages or libraries might be missing in such distributions. Now, let's test our application using the Python image with Alpine. Return to the previously used Dockerfile and replace the base image from debian to the python:alpine3.19 image. 
You should also remove the two RUN instructions, as there will be no need to install the Python interpreter: FROM python:alpine3.19 COPY test.py . CMD ["python3", "test.py"] Use a new tag to build the image: List all the Docker images. Check the image size and compare with the previous one:  Since we chose the correct base image with Python already preinstalled, the image size was reduced from 185 MB to 43.8 MB. Reducing the Number of Layers Docker images are based on the concept of layers. A layer represents a change made to the image's file system. These changes include copying/creating directories and files or installing packages. It is recommended to use as few layers as possible in the image. Among all Dockerfile instructions, only the FROM, COPY, ADD, and RUN instructions create layers that increase the final image size. All other instructions create temporary intermediate images and do not directly increase the image size. Let's take the previously used Dockerfile and modify it according to new requirements. Suppose we need to install additional packages using the apt package manager: FROM debian:latest COPY test.py . RUN apt update RUN apt -y install python3 htop net-tools mc gcc CMD ["python3", "test.py"] Build the image: docker build -t python-non-optimize:01 . Check the size of the created Docker image: docker images The image size was 570 MB. However, we can reduce the size by using fewer layers. Previously, our Dockerfile contained two RUN instructions, which created two layers. We can reduce the image size by combining the apt update and apt install commands using the && symbol, which in Bash means that the next command will only run if the first one completes successfully. Another important point is to remove cache files left in the image after package installation using the apt package manager (this also applies to other package managers such as yum/dnf and apk). The cache must be removed. For distributions using apt, the cache of installed programs is stored in the /var/lib/apt/lists directory. Therefore, we will add a command to delete all files in that directory within the RUN instruction without creating a new layer: FROM debian:latest COPY test.py . RUN apt update && apt -y install python3 htop net-tools mc gcc && rm -rf /var/lib/apt/lists/* CMD ["python3", "test.py"] Build the image: docker build -t python-optimize:03 . And check the size: The image size was reduced from the initial 570 MB to the current 551 MB. Using Multi-Stage Builds Another significant way to reduce the size of the created image is by using multi-stage builds. These builds, which involve two or more base images, allow us to separate the build environment from the runtime environment, effectively removing unnecessary files and dependencies from the final image. These unnecessary files might include libraries or development dependencies that are only needed during the build process. Let’s explore this approach with a practical example using the Node.js platform. Node.js should be installed beforehand, following our guide. We will first build the application image without multi-stage builds to evaluate the difference in size. 
Create a directory for the project: mkdir node-app && cd node-app Initialize a new Node.js application: npm init -y Install the express library: npm install express Create an index.js file with the content: const express = require('express'); const app = express(); const PORT = process.env.PORT || 3000; app.get('/', (req, res) => { res.send('Hello, World!'); }); app.listen(PORT, () => { console.log(Server is running on port${PORT}); }); Create Dockerfile with this content: FROM node:14-alpine WORKDIR /app COPY package*.json ./ RUN npm install COPY index.js . EXPOSE 3000 CMD ["npm", "start"] Build the image: docker build -t node-app:01 . Check the size: docker images The image size was 124 MB. Now let's rewrite the Dockerfile to use two images, transforming it into the following form: FROM node:14 AS builder WORKDIR /app COPY package*.json ./ RUN npm install COPY index.js . FROM gcr.io/distroless/base-debian10 AS production WORKDIR /app COPY --from=builder /app . EXPOSE 3000 CMD ["npm", "start"] Build the image: docker build -t node-app:02 . List the Docker images and check the size: docker images As a result, the image size was drastically reduced—from 124 MB to 21.5 MB. Conclusion In this article, we created our own Docker image and explored various ways to run it. We also paid significant attention to optimizing Docker images. Through optimization, we can greatly reduce the image size, which allows for faster image builds.
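A convenient way to see where an image's size comes from, and to check that layer-reduction changes actually helped, is the docker history command, which lists every layer together with the instruction that created it and its size. For example, for one of the tags built above:

docker history python-optimize:03

Each row corresponds to a layer; the SIZE column shows how much the associated RUN, COPY, or ADD instruction added to the image.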
22 January 2025 · 12 min to read

Answers to Your Questions

What is MySQL in the cloud and how does it differ from traditional installations?

MySQL in the cloud is the same familiar DBMS, chosen for its APIs for all popular development languages and its broad support for popular CMSs. Unlike a traditional installation, a cloud database saves you the cost of hardware, database setup, and administration: in Hostman, all of this is already done for you.

How do I get started with MySQL on your cloud service?

After registering in the Hostman control panel, you will be able to create and launch a DBMS in a few clicks. No special knowledge is required for this.

Which versions of MySQL are supported on your cloud platform?

We support the most widely used and stable versions: MySQL 5.7 and MySQL 8.

What are the performance characteristics of MySQL in the cloud, including allocated resources and data access speed?

Our MySQL databases (like all our other DBMSs) run only on high-performance server hardware: the latest generations of Intel and AMD processors and ultra-fast NVMe disks. Data transfer speeds range from 100 to 200 Mbps, and up to 1 Gbps within a private network.

How is the security of MySQL ensured on your cloud service, including data encryption and authentication mechanisms?

We provide 99.9% SLA reliability. We place servers exclusively in the most reliable Tier IV data centers that meet all international security standards:

  • ISO: data center design standards,
  • PCI DSS: payment data processing standards,
  • GDPR: European Union standards for the protection of personal data.

In addition, only authorized professionals (or only you) can access your database. Access can be easily revoked if the specialist's role changes. User management takes place directly in the modern Hostman control panel.

What database management tools are available for MySQL on your cloud platform?

You can use any familiar web interface for database management, such as Adminer or phpMyAdmin, but it is most convenient to do it right in the Hostman control panel.

In the Hostman control panel you can:

  • track load and resource consumption graphs,
  • add users and manage their access rights,
  • customize editing settings,
  • connect extensions and increase the functionality of the database,
  • create backups, manage IP addresses, change the tariff and so on.
Can I scale resources for my MySQL database, and if so, what options are available?

Yes. Add resources with ease right in the control panel (you always pay for them on an hourly basis), and Hostman will provide as much capacity as you need. To scale your resources down, contact our friendly support staff and we will handle it promptly.

Do you have questions,
comments, or concerns?

Our professionals are available to assist you at any moment,
whether you need help or are just unsure of where to start.
Email us
Hostman's Support