
Cloud Server

Deploy your cloud server in minutes and experience the freedom to scale your infrastructure effortlessly. A fast, secure, and flexible cloud server solution designed to meet your unique needs without the constraints of traditional servers.
Contact Sales
Blazing 3.3 GHz Processors & NVMe Disks
Experience unparalleled speed with processors optimized for demanding applications, combined with ultra-fast NVMe disks for quick data retrieval.
200 Mbps Channels, Unlimited Traffic
Enjoy stable, high-speed connectivity with unthrottled traffic, ensuring smooth performance even during peak usage periods.
24/7 Monitoring & Support
Stay worry-free with round-the-clock monitoring and professional support, ensuring your systems are always operational.
Cost-Effective Management
Our cloud server solutions are designed to deliver maximum value for your money, offering flexible pricing without compromising on performance.

Cloud server pricing

We offer various cloud server plans, tailored to your exact needs.
Get the best performance at a price that fits your budget.
New York

1 x 3 GHz CPU, 1 GB RAM, 25 GB NVMe, 200 Mbps bandwidth, public IP: $4/mo
1 x 3 GHz CPU, 2 GB RAM, 40 GB NVMe, 200 Mbps bandwidth, public IP: $5/mo
2 x 3 GHz CPU, 2 GB RAM, 60 GB NVMe, 200 Mbps bandwidth, public IP: $6/mo
2 x 3 GHz CPU, 4 GB RAM, 80 GB NVMe, 200 Mbps bandwidth, public IP: $8/mo
4 x 3 GHz CPU, 8 GB RAM, 160 GB NVMe, 200 Mbps bandwidth, public IP: $17/mo
8 x 3 GHz CPU, 16 GB RAM, 320 GB NVMe, 200 Mbps bandwidth, public IP: $37/mo

Deploy any software in seconds

Select the desired OS or App and install it in one click.
OS Distributions
Pre-installed Apps
Custom Images
Ubuntu
Debian
CentOS

Hostman's commitment to simplicity and budget-friendly solutions

Configuration: 1 CPU, 1 GB RAM, 25 GB SSD

Price: Hostman $4 | DigitalOcean $6 | Google Cloud $6.88 | AWS $7.59 | Vultr $5
Tech support: Hostman Free | DigitalOcean $24/mo | Google Cloud $29/mo + 3% of monthly charges | AWS $29/mo or 3% of monthly charges | Vultr Free
Backups: Hostman from $0.07/GB | DigitalOcean 20% or 30% higher base daily/weekly fee | Google Cloud $0.03/GB per mo | AWS $0.05/GB per mo | Vultr 20% higher base monthly/hourly fee
Bandwidth: Hostman Free | DigitalOcean $0.01 per GB | Google Cloud $0.01 per GB | AWS $0.09/GB for the first 10 TB/mo | Vultr $0.01 per GB
Avg. support response time: Hostman <15 min | DigitalOcean <24 hours | Google Cloud <4 hours | AWS <12 hours | Vultr <12 hours

What is a cloud server?

A cloud server is a virtualized computing resource hosted in the cloud, designed to deliver powerful performance without the need for physical hardware. It is built on a network of connected virtual machines, which enables flexible resource allocation, instant scalability, and high availability. Unlike traditional on-premises servers, a cloud-based server allows users to adjust resources dynamically, making it ideal for handling fluctuating workloads or unpredictable traffic spikes. Whether you're running an e-commerce store, a SaaS platform, or any other application, a cloud web server provides the adaptability necessary to grow with your business.

Cloud servers solve a wide range of challenges, from reducing infrastructure costs to improving uptime and reliability. By leveraging the cloud, businesses can avoid the upfront investment and maintenance costs associated with physical servers. Additionally, a cloud server system allows users to deploy applications quickly, scale resources in real time, and manage data more efficiently. The key benefits for clients include operational flexibility, cost savings, and the ability to respond quickly to changing demands.

Ready to buy a cloud server?

1 CPU / 1GB RAM / 25GB NVMe / 200 Mbps / $2/mo.

Efficient tools to streamline your work

See all Products

Backups, Snapshots

Protect your data with regular backups and snapshots, ensuring you never lose crucial information.

Firewall

Enhance your security measures with our robust firewall protection, safeguarding your infrastructure against potential threats.

Load Balancer

Ensure optimal performance and scalability by evenly distributing traffic across multiple servers with our load balancer feature.

Private Networks

Establish secure and isolated connections between your servers with private networks, shielding sensitive data and enhancing network efficiency.

Trusted by 500+ companies and developers worldwide

One panel to rule them all

Easily control your database, pricing plan, and additional services
through the intuitive Hostman management console.
Project management
Group multiple cloud servers and databases into a single project, eliminating confusion and simplifying management.
Software marketplace
24 ready-made software bundles for any task: frameworks, e-commerce platforms, and analytics tools.
Mobile responsive
Get the optimal user experience across all devices with our mobile-responsive design.

Code locally, launch worldwide

Our servers, certified with ISO/IEC 27001, are located in Tier 3 data
centers across the US, Europe, and Asia.
🇺🇸 San Francisco
🇺🇸 San Jose
🇺🇸 Texas
🇺🇸 New York
🇳🇱 Amsterdam
🇳🇬 Lagos
🇩🇪 Frankfurt
🇵🇱 Gdansk
🇦🇪 Dubai
🇸🇬 Singapore
Anup k.
Associate Cloud Engineer
5.0 out of 5

"Hostman Comprehensive Review of Simplicity and Potential"

It been few years that I have been working on Cloud and most of the cloud service...
Mansur H.
Security Researcher
5.0 out of 5

"A perfect fit for everything cloud services!"

Hostman's seemless integration, user-friendly interface and its robust features (backups, etc) makes it much easier...
Adedeji E.
DevOps Engineer
5.0 out of 5

"Superb User Experience"

For me, Hostman is exceptional because of it's flexibility and user-friendliness. The platform's ability to offer dedicated computing resources acr...
Yudhistira H.
Mid-Market(51-1000 emp.)
5.0 out of 5

"Streamlined Cloud Excellence!"

What I like best about Hostman is their exceptional speed of deployment, scalability, and robust security features. Their...
Mohammad Waqas S.
Biotechnologist and programmer
5.0 out of 5

"Seamless and easy to use Hosting Solution for Web Applications"

From the moment I signed up, the process has been seamless and straightforward...
Mohana R.
Senior Software Engineer
5.0 out of 5

"Availing Different DB Engine Services Provided by Hostman is Convenient for my Organization usecases"

Hostman manages the cloud operations...
Faizan A.
5.0 out of 5

"Hostman is a great fit for me"

Hostman is a great fit for me. What do you like best about Hostman? It was very easy to deploy my application and create database, I didn't have
Adam M.
5.0 out of 5

"Perfect website"

This website is extremely user friendly and easy to use. I had no problems so didn't have to contact customer support. Really good website and would recommend to others.
Anup K.
4.0 out of 5

"Simplifying Cloud Deployment with Strengths and Areas for Growth"

What I like best about Hostman is its unwavering commitment to simplicity...
Naila J.
5.0 out of 5

"Streamlined Deployment with Room for Improvement"

Hostman impresses with its user-friendly interface and seamless deployment process, simplifying web application hosting...

More cloud services from Hostman

See all Products

Latest News

Go

How to Use Templates in Go

Go (Golang) comes with a powerful, versatile templating system that allows for dynamic output generation, whether it's textual information (such as an email, document, or simply a console command) or entire web pages. Template processing in Go is based on two primary packages, each serving its own purpose: text/template html/template It’s important to note that both packages have an identical interface; however, the second (html/template) automatically protects HTML output from certain types of attacks, such as injections. Converting a Go template into final output is done by applying the template to the appropriate data structure. The input text for Go templates can be in any format and encoded in UTF-8. Template Entities A template is generally associated with a specific data structure (e.g., a struct) whose data will be inserted into the template. Thus, any template formally consists of three basic types of entities that "extract" the necessary variables and insert them into the output: ActionsThese are text fragments enclosed in curly braces {{ }}, where calculations or data substitutions take place. Actions make the content within the template dynamic by inserting the appropriate data. Actions can include simple variable substitutions, as well as loops or function executions that contribute to the final output. They directly control how the final result will appear. ConditionsConditions are the classic if-else constructions used within the template. Conditions allow you to include or exclude entire blocks of text from the final output, significantly enhancing the flexibility and capability of template-based content generation. LoopsLoops allow you to iterate over a collection of data, outputting multiple similar blocks but with different key information. These are useful when you need to generate repeated elements based on a list or array. Managing Templates in Go In Go, there are three most commonly used functions for managing templates: New: Creates a new template, which must be defined later. Parse: Analyzes the provided string containing the template text, and then returns a ready-to-use template. Execute: Executes the parsed template, applying the provided data structure, and writes the result to a specified variable. Additionally, there is the ParseFiles function, which allows you to process entire files containing the template's content rather than just a string. The following code demonstrates how to use these basic template functions in a simple scenario: package main // In addition to the template package, we import "os", which provides a platform-independent interface for interacting with the operating system. In this case, we'll use it to output the result of the template execution to the console. import ( "os" "text/template" ) // Define a struct whose data will be inserted into the template. 
type Person struct { Name string Age int } func main() { some_person := Person{"Alex", 32} // Instance of the previously defined struct some_template := "This is {{ .Name }} and he is {{ .Age }} years old" // Template text with embedded actions inside curly braces // Create a new template and parse its content, preparing it for further use ready_template, err := template.New("test").Parse(some_template) // Check for errors (nil means no error, similar to null in C) if err != nil { panic(err) // Stop execution and print the error } // Execute the template and print the result to the console err = ready_template.Execute(os.Stdout, some_person) // OUTPUT: This is Alex and he is 32 years old // Check for errors again if err != nil { panic(err) // Stop execution and print the error } } You can reuse a template "compiled" using the Parse function but with data from a different structure. For example, you could continue the main function from the above code like this: // Continuing from the previous code ... another_person := Person{"Max", 27} // Create another instance of the struct err = ready_template.Execute(os.Stdout, another_person) } In this case, the template will be reused and applied to the new instance (another_person), producing different output based on the new data. Note that inside a template, variables from the structure that was passed during execution are referenced within double curly braces {{ }}. When referring to these variables, the structure name is implicitly omitted and only the variable name is used, prefixed with a dot. For example: This is {{ .Name }} and he is {{ .Age }} years old You can also directly access the data passed during execution. For instance, the following code demonstrates how to pass simple text directly into the template: package main import ( "os" "text/template" ) func main() { some_template := "Here we have {{ . }}" ready_template, err := template.New("test").Parse(some_template) if err != nil { panic(err) } ready_template.Execute(os.Stdout, "no data, only text") // OUTPUT: Here we have no data, only text } In this example, the template simply inserts whatever value was passed to it (in this case, "no data, only text") without referencing a structure or fields within a structure. Template Syntax Features Static Text In the simplest case, a template can simply output static text without using any additional data: import ( "os" "text/template" ) ... some_template := "Just regular text" ready_template, err := template.New("test").Parse(some_template) if err != nil { panic(err) } ready_template.Execute(os.Stdout, "no data") // OUTPUT: Just regular text Static Text Inside Actions (Curly Braces) You can combine regular static text with additional data within curly braces: import ( "os" "text/template" ) ... some_template := "Not just regular text with {{ \"additional\" }} data" // Don't forget to escape the double quotes ready_template, err := template.New("test").Parse(some_template) if err != nil { panic(err) } ready_template.Execute(os.Stdout, "no data") // OUTPUT: Not just regular text with additional data Trimming Whitespace Markers You can use trimming markers before and after the curly braces to remove spaces: ... some_template := "Not just regular text with {{- \"additional\" -}} data" ... 
ready_template.Execute(os.Stdout, "no data") // OUTPUT: Not just regular text withadditionaldata // The output above isn't a typo — the spaces around "additional" have been removed Trimming markers remove not only a single space but multiple spaces on both sides of the text produced by the code inside the curly braces — both from the inside and outside. Numbers in Templates Unlike text, numbers are automatically inserted into the output without needing quotes: ... some_template := "Maybe this code was written by {{ 5 }} people." ... ready_template.Execute(os.Stdout, "no data") // OUTPUT: Maybe this code was written by 5 people. Similarly, trimming markers can be used with numbers as well: ... some_template := "Maybe this code was written by {{- 5 }} people." ... ready_template.Execute(os.Stdout, "no data") // OUTPUT: Maybe this code was written by5 people. Template Variables Golang allows you to define special variables that are only available within the template itself. Like in Go, a variable is defined by specifying its name and value, and then it is used. To define an internal variable, use $: package main import ( "os" "text/template" ) func main() { some_template := "First, we define a variable {{- $some_variable :=`Hello, I'm a variable` }}, then we use it: \"{{ $some_variable }}\"" ready_template, err := template.New("test").Parse(some_template) if err != nil { panic(err) } ready_template.Execute(os.Stdout, "no data") // OUTPUT: First, we define a variable, then we use it: "Hello, I'm a variable" } Note that to access the variable, we use $ because this variable is not related to any Go data structure, but is defined within the template itself. Conditional Expressions Go templates allow branching based on logic using the standard if/else conditional operators found in most programming languages: package main import ( "os" "text/template" ) func main() { some_template := "{{ if eq . `hello` -}} Hello! {{ else -}} Goodbye! {{ end }}" // We use a trimming marker after each condition to remove the leading space in the output ready_template, err := template.New("test").Parse(some_template) if err != nil { panic(err) } ready_template.Execute(os.Stdout, "hello") // OUTPUT: Hello! } In this example, the eq function is used (which stands for "equal") to compare the value passed to the template (accessed via the dot) with the string hello. Also, note that every conditional block is terminated with the end keyword. You can actually simplify the code by skipping the string comparison and directly passing a boolean variable, which makes the code more concise: package main import ( "os" "text/template" ) func main() { some_template := "{{ if . -}} Hello! {{ else -}} Goodbye! {{ end }}" ready_template, err := template.New("test").Parse(some_template) if err != nil { panic(err) } ready_template.Execute(os.Stdout, false) // OUTPUT: Goodbye! } Loops Templates are commonly used to output multiple similar data items, where the number of items changes from one output to another. This is where loops come in handy: package main import ( "os" "text/template" ) func main() { some_list := []string{"First", "Second", "Third"} some_template := "Let's count in order: {{ range .}}{{.}}, {{ end }}" ready_template, err := template.New("test").Parse(some_template) if err != nil { panic(err) } ready_template.Execute(os.Stdout, some_list) // OUTPUT: Let's count in order: First, Second, Third, } In this example, there's one issue—the last item in the list results in an extra comma followed by a space. 
To fix this, you can modify the code to check if the item is the last one in the list, ensuring that there is no comma and space after the last item: package main import ( "os" "text/template" ) func main() { some_list := []string{"First", "Second", "Third"} some_template := "Let's count in order: {{ range $index, $element := .}}{{ if $index }}, {{ end }}{{$element}}{{ end }}" ready_template, err := template.New("test").Parse(some_template) if err != nil { panic(err) } ready_template.Execute(os.Stdout, some_list) // OUTPUT: Let's count in order: First, Second, Third } In this modified example, two new variables are introduced—$index and $element—which are updated on each iteration of the loop. A comma and space are printed before each element, but only if the index ($index) is not zero. This ensures that the comma is not added before the first element. Template Functions Within Go templates, you can define and call custom functions that perform various operations on the passed arguments. However, before using them in a template, they need to be explicitly declared and registered. Here is an example: package main import ( "os" "text/template" ) func manipulate(first_arg, second_arg int) int { return first_arg + second_arg } func main() { some_list := []int{1, 2, 3} some_template := "Adding index and element in order: {{ range $index, $element := .}}{{ if $index }}, {{ end }}{{$index}} + {{$element}} = {{ do_manipulation $index $element }}{{ end }}" ready_template, err := template.New("test").Funcs(template.FuncMap{"do_manipulation": manipulate}).Parse(some_template) if err != nil { panic(err) } ready_template.Execute(os.Stdout, some_list) // OUTPUT: Adding index and element in order: 0 + 1 = 1, 1 + 2 = 3, 2 + 3 = 5 } In this example, we intentionally renamed the Go function manipulate inside the template to do_manipulation. This is possible due to Go's flexibility. However, you could also use the original function name by simply registering it like this: ready_template, err := template.New("test").Funcs(template.FuncMap{"manipulate": manipulate}).Parse(some_template) This allows the custom function manipulate (or do_manipulation if renamed) to be used within the template for performing operations like addition on the index and element. Working with HTML Templates in Go As mentioned earlier, Go has an additional package for working with HTML templates: html/template. Unlike the standard text/template, this package protects applications from cross-site scripting (XSS) attacks, as Go ensures that data is safely rendered without allowing malicious content. Here’s how to import the necessary packages: import ( "html/template" "net/http" ) The net/http package is required to start an HTTP server on your local machine, which is necessary for testing the next example. HTML Template File It's best practice to store the template in a separate file. In this case, we'll create a file with the .html extension, although you can use any extension you prefer in your projects — Go does not impose any restrictions. We'll call the file index.html: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Document</title> </head> <body> <h1>{{ .Title }}</h1> <p> {{ .Text }} </p> </body> </html> Notice that we’ve specified two variables: Title and Text. Their values will be passed from a Go structure into the template. 
Minimal Go Code to Serve HTML Template Now let’s write the minimal Go code to run an HTTP server and send the rendered template result as a response to any request to the server: package main import ( "os" "html/template" "net/http" "log" ) // Declare the structure to store data for generating the template type Content struct { Title string Text string } // Function to handle HTTP requests to the server func generateResponse(writer http.ResponseWriter, request *http.Request) { if request.Method == "GET" { some_template, _ := template.ParseFiles("index.html") // Parse the template file some_content := Content{ Title: "This is the Title", Text: "This is the text content", } err := some_template.Execute(writer, some_content) // Execute the template, writing the output to the response writer if err != nil { panic(err) } } } func main() { // Start the HTTP server and use the generateResponse function to handle requests http.HandleFunc("/", generateResponse) err := http.ListenAndServe("localhost:8080", nil) if err != nil { log.Fatalln("Something went wrong:", err) } } Conclusion The Go programming language provides built-in support for creating dynamic content or rendering customizable output through templates. On our app platform you can deploy Golang apps, such as Beego and Gin.  This article covered the basic template functions that allow you to manage data dynamically, altering it according to a defined pattern — the template description itself. The implementation involves a few usage options: text/template html/template Remember that every template goes through three stages of formation, each of which is handled by a corresponding function: New: Creating the template. Parse: Analyzing (parsing) the template. Execute: Executing the template. This stage can be repeated indefinitely. You can refer to the official Go documentation on the text/template and html/template packages for more detailed information on the available functions and ways to use them.
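The article states that html/template automatically escapes output to protect against injection but never shows the difference in practice. Here is a minimal side-by-side sketch (not from the original article; template.Must simply panics on a parse error, shortening the error handling shown earlier):

package main

import (
    htmltpl "html/template"
    "os"
    texttpl "text/template"
)

func main() {
    payload := `<script>alert("xss")</script>`

    // text/template inserts the value verbatim
    t := texttpl.Must(texttpl.New("t").Parse("Comment: {{ . }}\n"))
    t.Execute(os.Stdout, payload)
    // OUTPUT: Comment: <script>alert("xss")</script>

    // html/template escapes the same value, neutralizing the injected markup
    h := htmltpl.Must(htmltpl.New("h").Parse("Comment: {{ . }}\n"))
    h.Execute(os.Stdout, payload)
    // OUTPUT: Comment: &lt;script&gt;alert(&#34;xss&#34;)&lt;/script&gt;
}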
05 December 2024 · 13 min to read
Go

How to Use the Cobra Package in Go

A Command-Line Interface (CLI) is a type of application that runs exclusively in the command-line terminal. Typically, such programs are used to manage various tools related to developing and maintaining network infrastructure. The interaction process is simple: The user types the name of the CLI application, the command name, parameters, and sometimes additional flags into the terminal. The CLI application performs the requested action and sends a text response to the terminal. CLI applications may seem outdated due to the lack of a graphical user interface (GUI), but they are still considered the most versatile, fast, and convenient way for system administration. Creating CLIs with Go and Cobra To create a CLI in Go, you can use a special package called Cobra, which is developed by third-party developers. It is built on top of the flag package (a command-line flag parser) from Go's standard library and provides a higher level of abstraction. Cobra is a complete CLI platform for the Go language, consisting of two main components: A library for creating modern CLI applications. A CLI tool for quickly building applications based on standard (for Cobra) command handler files. Cobra was originally developed by one of the Go team members, Steve Francia (spf13), for the Hugo project — a special framework for building websites. Over time, Cobra became one of the most popular packages in the Go community. Features of Cobra Cobra offers several simple features for building modern command-line interfaces. Additionally, Cobra includes a high-level controller to help organize the code for the CLI application being developed. Cobra implements: A command hierarchy Powerful argument and flag parsing Flag hierarchy (global and local) Subcommand checking POSIX compliance Automatic help generation for commands and flags In fact, large projects such as Kubernetes, Hugo, and CockroachDB are built on Go and use the Cobra package to handle commands. CLI commands follow a fairly standard pattern: {application} {command code} [arguments] [--flags and their parameters] For example, commands in real projects might look like this: kubectl get all -n kube-system Or like this: etcdctl put first second Cobra Architecture The working entities in Cobra can be divided into three types — each represents the structure of commands in the console terminal: Commands: These specify specific actions that need to be performed, much like in any classic CLI application. Arguments (Args): These are items or entities passed to a command, which the command works with and returns the result. Flags: Short modifiers for commands (i.e., specific actions) that make certain adjustments to the execution and affect the final result of the CLI application's operation. A Little About POSIX Compatibility The POSIX standard defines a pattern (scheme) for organizing arguments and flags that CLI applications should follow. This is the classic format that most developers are familiar with — numerous Linux utility programs (such as ls, cp, useradd) and third-party applications follow this convention. It is important to remember that the command scheme is strictly formalized in the standard and looks as follows: application_name [-a] [-b] [-c argument] [-d|-e] Each application may have multiple versions of the same option — long and short forms. There is a clear rule that the short version must consist of only one character. Step 1. Environment Setup Checking Go First, check whether the Go compiler is installed on your system. 
You can do this by running the version query command: go version If Go is installed, the console will display the Go version along with the operating system’s short name. Creating the Project Directory Next, create a separate directory for our Cobra project: mkdir CobraProject After that, navigate into it: cd CobraProject Golang has some peculiarities in its module system, which is necessary for connecting packages. Therefore, you need to initialize the project directory with a special command: go mod init CobraProject This will turn the directory into a full-fledged Go module, and the console will display a message about the creation of the module named CobraProject. Step 2. Installing the Cobra Package Downloading the Package from the Official Repository Starting from Go 1.18, Go includes a special command go install, which automatically installs remote modules. Therefore, we will use it to download the Cobra package from the official GitHub repository: go install github.com/spf13/cobra-cli@latest Note that with the @latest tag we are installing the latest release. Initializing the CLI After installation, the executable file cobra-cli will be available in the terminal. We will use this tool to initialize the Cobra project in our working directory — at this point, you should already be in that directory: cobra-cli init Once executed, this will create several files in your working directory, containing the standard Cobra package code along with the project name CobraProject. The file structure will look like this: CobraProject/ cmd/ root.go main.go go.mod go.sum The main.go file is the entry point for the CLI application. Its default content will look something like this: package main import ( "CobraProject/cmd" // the path may vary depending on the location of the working directory ) func main() { cmd.Execute() } All commands are placed as separate files in the /cmd directory.The root.go file is the root command handler — essentially the base command for any command-line interface. For example, consider the following command: go get URL Here, go is the root command, which is handled by root.go, and get is a subcommand, whose handler is placed in a file different from root.go. Building the CLI To build the CLI application, you use the same command as for building any regular Go binary project: go build By default, the executable file will appear in the project’s working directory. To make the built CLI application usable, you also need to install it: go install After this, the CLI application will be available for execution directly from the terminal. To use it, simply type the project name in the console: CobraProject If everything is set up correctly, the standard output for the command without any parameters will appear in the console. Of course, you can modify this standard output later in the root.go file. Step 3. Creating a Function for the Command Each command entered in the terminal calls a corresponding Go function, which executes the logic for that command. Any parameters and flags specified in the terminal are passed into the function. As a simple example, we will implement a small function that displays the time in the current time zone. To do this, we will use the time package. After initializing the CLI, the cmd directory should have been created in your working directory. 
Let's go to it: cd cmd Now, let's create a file that will contain our function: touch timefunc.go The code inside the file will look like this: package cmd // specify the name of our package import "time" // import the standard Go time package func getTimeFromZone(zone string) (string, error) { loc, err := time.LoadLocation(zone) // get the current location // check for error if err != nil { return "", err // return an empty result with error details } timeNow := time.Now().In(loc) // get the current time based on the location return timeNow.Format(time.RFC1123), nil // return the formatted result without error details } As you can see, the function returns two values: the result and any error data. You can use it in the CLI to retrieve the time for a specified time zone. Step 4. Adding a Command to the CLI Now that the functional part of our application is ready, we can "register" the command in the CLI application for external access. There is a separate add command for this: cobra-cli add timefromzone After running this command, a timefromzone.go file will appear in the cmd folder with the standard code inside. In this same folder, you will also find the root.go file, responsible for the "root" command processing, i.e., the command without any parameters. It’s easy to guess that the handlers for console commands are formed in the file system as separate Go source files. Let’s open the newly created file and populate it with the following code: package cmd import ( "fmt" "log" "github.com/spf13/cobra" ) var timefromzoneCmd = &cobra.Command{ Use: "timefromzone", Short: "Returns the time from a given geographical zone", Long: `This command returns the time from a specified geographical zone. It accepts only one argument — the zone for which the time is required. The result is returned in the RFC1123 format.`, Args: cobra.ExactArgs(1), Run: func(cmd *cobra.Command, args []string) { timefromzone := args[0] timeNow, err := getTimeFromZone(timefromzone) if err != nil { log.Fatalln("Invalid time zone") } fmt.Println(timeNow) }, } func init() { rootCmd.AddCommand(timefromzoneCmd) // add the new command to the root command } Let’s break down what each field means in the command definition: Use: The name under which the command will be available in the terminal. Short: A brief description of the command, which will be displayed to the user in the console. Long: A full description of the command, which will be shown to the user in the console. Args: The exact number of arguments required for the command to function. Run: The handler function where we call and process the previously created getTimeFromZone function. In some cases, you could simplify the code by writing the logic directly inside the command handler function, like this: import "time" var timefromzoneCmd = &cobra.Command{ Use: "timefromzone", Short: "Returns the time from a given geographical zone", Long: `This command returns the time from a specified geographical zone. It accepts only one argument — the zone for which the time is required. The result is returned in RFC1123 format.`, Args: cobra.ExactArgs(1), Run: func(cmd *cobra.Command, args []string) { zone := args[0] loc, err := time.LoadLocation(zone) if err != nil { log.Fatalln("Invalid time zone") } fmt.Println(time.Now().In(loc).Format(time.RFC1123)) }, } In this case, we directly implemented the logic for retrieving the time inside the Run function. If the time zone is invalid, an error message is logged. 
Once the command is added, we just need to reinstall our CLI application: go install Now, we can use the application from the terminal by specifying the command name and passing the time zone code as an argument: CobraProject timefromzone Europe/Nicosia The console output will look something like this: Sun, 10 Nov 2024 12:41:06 Europe/Nicosia You can find a complete list of time zones and their codes in Wikipedia. Step 5. Adding Flags to CLI Typically, when running command-line applications, you can specify flags in addition to parameters. Flags are options that modify the behavior of a specific command. They are easily recognized by the preceding hyphen (or double hyphen). The inclusion of flags in a CLI application adds variability and flexibility to the command behavior. Without flags, you would have to create many complex functions with a lot of redundant code. In this sense, flags help standardize the application. Cobra has two types of flags: Local flags: These only apply to the specific command. Persistent flags: These can apply  to all commands and subcommands. Let’s return to the timefromzone.go file and modify the initialization function to add a flag. The flag will specify the desired time format. Here’s how you can add the flag to your command: func init() { rootCmd.AddCommand(timefromzoneCmd) // Add the defined command to the root command timefromzoneCmd.Flags().String("format", "", "Outputs the time in the yyyy-mm-dd format") // Add a flag to the command } This adds a flag named --format, which specifies the time format. Here is the complete updated file with flag handling: package cmd import ( "fmt" "time" "github.com/spf13/cobra" ) var timefromzoneCmd = &cobra.Command{ Use: "timefromzone", Short: "Returns the time from a given geographical zone", Long: `This command returns the time from a specified geographical zone. It accepts only one argument — the zone for which the time is required. The result is returned in RFC1123 format.`, Args: cobra.ExactArgs(1), Run: func(cmd *cobra.Command, args []string) { var date string zone := args[0] loc, _ := time.LoadLocation(zone) // Load the location from the zone argument fla, _ := cmd.Flags().GetString("format") // Get the value of the 'format' flag if fla != "" { date = time.Now().In(loc).Format(fla) // If flag is provided, use custom format } else { date = time.Now().In(loc).Format(time.RFC1123) // Default format } fmt.Printf("Current time in timezone %v: %v\n", loc, date) }, } func init() { rootCmd.AddCommand(timefromzoneCmd) // Add the timefromzone command to the root command timefromzoneCmd.Flags().String("format", "", "Outputs the time in the yyyy-mm-dd format") // Add the 'format' flag } Now, let's reinstall the updated CLI application: go install To use the new flag, run the command with the --format flag, like this: CobraProject timefromzone Europe/Nicosia --format 2006-01-02 The output will be formatted according to the flag, like this: Current time in timezone Europe/Nicosia: 2024-11-10 Here, the --format flag explicitly tells the command to display the time in the yyyy-mm-dd format, and the result will reflect this format. Conclusion The Cobra package for the Go programming language is an excellent solution that helps developers abstract away the complexities of low-level command-line parsing functions provided by the standard library. Cobra is a kind of framework for CLI applications that alleviates the "headache" developers face when working with the command-line terminal, allowing them to focus more on business logic. 
Each command is represented as a separate file in the /cmd directory, and you can modify it using flags. This is convenient because you can explicitly build a hierarchy of commands and control the process of handling them by editing hook functions like init or run. This feature gives Cobra CLI applications a more structured layout and less clutter, forming a solid framework. It’s important to note that third-party developers created Cobra, so it is hosted in a separate GitHub repository and is not part of the Go standard library. Additionally, on the official Cobra website, you can find installation instructions and details about using the command-line parser. On our cloud app platform you can deploy Golang apps, such as Beego and Gin.
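The tutorial distinguishes local flags from persistent flags but only demonstrates a local one. Below is a minimal, self-contained sketch (a separate toy program, not part of CobraProject; the --verbose flag name is just an example) showing how a persistent flag declared on the root command is inherited by a subcommand:

package main

import (
    "fmt"

    "github.com/spf13/cobra"
)

func main() {
    rootCmd := &cobra.Command{Use: "demo"}
    // Persistent flags are visible to the root command and every subcommand,
    // unlike timefromzoneCmd.Flags(), which is local to that single command.
    rootCmd.PersistentFlags().BoolP("verbose", "v", false, "enable verbose output")

    helloCmd := &cobra.Command{
        Use: "hello",
        Run: func(cmd *cobra.Command, args []string) {
            // Flags() exposes local flags plus persistent flags inherited from parents.
            verbose, _ := cmd.Flags().GetBool("verbose")
            if verbose {
                fmt.Println("verbose mode on")
            }
            fmt.Println("hello")
        },
    }
    rootCmd.AddCommand(helloCmd)
    rootCmd.Execute()
}

Running go run . hello --verbose would then print both lines, while go run . hello prints only "hello".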
05 December 2024 · 12 min to read
Docker

Removing Docker Images, Containers, Volumes, and Networks

Docker is software for quickly deploying applications through containerization. However, with its active use, many objects accumulate, consuming valuable host resources: images, containers, volumes, and networks. You can remove these objects through Docker Desktop, but it is much more convenient, especially when dealing with a significant number of objects, to use command-line tools. In this article, you will find tips for working with Docker and learn how to remove various objects using both the Docker Desktop client and command-line tools. Removing Containers To interact with containers and change their current state, including removing them, go to the "Containers/Apps" tab in the Docker Desktop web interface, select the desired object, and apply the chosen action: Now, let's look at how to remove these objects using command-line tools. To remove containers, use the docker container rm command or simply docker rm. For clarity, we will use docker container rm with the following syntax: docker container rm [removal options] [object ID] Options: --force or -f: Force removal of the container (e.g., if running). --link or -l: Remove the specified link (e.g., between two objects)*. --volume or -v: Remove anonymous volumes associated with the container. Containers are isolated from each other. One way to link them is via network ports. Using the --link flag will remove this network link in Docker. There are two commands in the command-line arsenal for obtaining information about containers: docker ps and docker container ls. They have the same options and syntax: Options: --all or -a: Display all containers. By default, only running containers are displayed. --filter or -f: Filter based on a set of flags. --format: Format the output. You can display only the necessary information. --last or -n: Show the last n containers. --latest or -l: Show the most recent container. --no-trunc: Do not truncate the output. --quiet or -q: Display only the container IDs. --size or -s: Display the total size. Using these parameters, you can create a list of containers you wish to remove, then pass the container IDs to the docker container rm command. For example, to create a list of containers with the status created or exited, run this command to get such objects: docker ps -a -f status=created -f status=exited Now, pass the result to the removal command: docker container rm $(docker ps -a -f status=created -f status=exited -q) To remove running containers, you must first stop them. Of course, you can use the --force option, but this may lead to data corruption with the application's working data. It is always better to first stop the containers with the docker stop command. To remove all containers in Docker, you can simply use these two commands: docker stop $(docker ps -a -q)docker container rm $(docker ps -a -q) There is a separate command to remove all stopped containers: docker container prune. Removing Docker Images Like containers, you can also remove Docker images within the client application. To do this, go to the "Images" tab: To delete an image, click "Clean up…" in the upper right corner and select the images. If an image is currently in use, Docker will not allow you to delete it. Now, let's move on to the command-line tools. There are two commands for removing Docker images: docker rmi and docker image rm. They are identical and work in much the same way as docker rm. Here's their syntax: docker rmi [remove options] [image IDs] Options: --force or -f: Forcefully remove the image. 
--no-prune: Do not remove untagged parent images. To find the image IDs, we use the following command: docker images [options] [REPOSITORY:[TAG]] Options: --all or -a: Show all images. By default, intermediate images are hidden. --digests: Show digests. --filter or -f: Filter by flags. --format: Format the output. --no-trunc: Do not truncate the output. --quiet or -q: Show only the image IDs. The application of these commands is the same as in the previous section. First, we query the list of images we want and use it as input for the docker rmi command. For example, to remove images that are not associated with any containers, we can use the dangling=true flag. It is important to note that we will get untagged images. docker images --filter dangling=true After checking the list, we can safely remove it: docker rmi $(docker images --filter dangling=true -q) To remove all unused images, use the docker image prune command. Removing Volumes A volume is a file system located outside the containers and stored on the host machine. To free up disk space occupied by volumes, go to the "Volumes" section, and in the top-right corner, select the corresponding icon: To delete volumes from the command line, use the docker volume rm command with the following syntax: docker volume rm [options] [volume names] This command is not very flexible with options and only provides the --force or -f flag for forced removal. You can only remove volumes if they are not associated with running containers. Forced removal of volumes linked to active containers is not recommended, as it may lead to data corruption. To list volume names, use the docker volume ls command with the following syntax: docker volume ls [options] Again, Docker is limited on options here, with only three available: --filter or -f: Filter by flags. --format: Format the output. --quiet or -q: Show only the volume names. Volumes exist independently of containers, and after their deletion, they remain in the host's file system as unlinked volumes. Let's try deleting such volumes. Use the dangling=true flag for this purpose: docker volume ls -f dangling=true Now, pass the results to the command for deletion: docker volume rm $(docker volume ls -f dangling=true -q) Alternatively, you can use another command to remove all unused volumes: docker volume prune. However, before using this command, check the list to ensure it includes only the volumes you want to remove. If you need to remove an unnamed volume, you can delete it with its associated container. For this, add the -v flag when using docker rm. Removing Networks To remove networks, you need to use the docker network rm command with the following syntax: docker network rm [network names/IDs] This command does not have any options. You can pass either names or IDs of the networks. To find the names and IDs of the networks, use the docker network ls command: docker network ls [options] This command has four available options: --filter or -f: Filter by flags. --format: Format the output. --no-trunc: Do not truncate the output. --quiet or -q: Show only IDs. Before deleting a network, you must remove any objects (containers) that are using it. To check which containers are using a specific network, use the following command: docker ps -f network=[network ID] Afterward, you can proceed to delete the network. 
For example, to delete networks with the driver=bridge value, use the following commands: docker network ls -f driver=bridgedocker network rm $(docker network ls -f driver=bridge -q) Cleaning Up Docker from All Objects Sometimes, you might need to remove everything and reinstall Docker to return an application to its initial state. Instead of deleting Docker entirely, you can execute a series of commands to clean up all objects and work with a fresh environment: Stop and remove containers: docker stop $(docker ps -a -q)docker rm $(docker ps -a -q) Remove images: docker rmi $(docker images -a -q) Remove volumes: docker volume rm $(docker volume ls -a -q) Remove networks: docker network rm $(docker network ls -a -q)  
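In addition to the per-object prune commands above, Docker also provides a single cleanup command; a brief sketch (exact behavior depends on your Docker version, so check docker system prune --help first):

# Remove stopped containers, unused networks, dangling images, and build cache
docker system prune

# Also remove all unused images, not just dangling ones
docker system prune -a

# Include unused volumes as well (recent Docker versions only remove anonymous volumes here)
docker system prune -a --volumes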
05 December 2024 · 6 min to read
Docker

How to Install Docker on Ubuntu 22.04

Docker is a free, open-source tool for application containerization. Containers are isolated environments similar to virtual machines (VMs), but they are more lightweight and portable across platforms, requiring fewer system resources. Docker uses OS-level virtualization, leveraging features built into the Linux kernel. This guide walks through installing Docker on Ubuntu 22.04 but also applies to other Ubuntu versions. Additionally, we’ll download Docker Compose, a tool essential for managing multiple containers efficiently. For this guide, we will use a Hostman cloud server. System Requirements According to Docker's documentation, the following 64-bit Ubuntu versions are supported: Ubuntu Oracular 24.10 Ubuntu Noble 24.04 (LTS) Ubuntu Jammy 22.04 (LTS) Ubuntu Focal 20.04 (LTS) Docker works on most popular architectures. The resource requirements for your device will depend on your intended use and how comfortably you want to work with Docker. The scale of applications you plan to deploy in containers will largely dictate the system needs. Some sources recommend a minimum of 2 GB of RAM. Additionally, a stable internet connection is required. Installing Docker on Ubuntu 22.04 Installing Docker on Ubuntu 22.04 involves executing a series of terminal commands. Below is a step-by-step guide with explanations. The steps are also applicable to server versions of Ubuntu. 1. Update Package Indexes The default repository may not always contain the latest software releases. Therefore, we will download Docker from its official repository to ensure the latest version. First, update the package indexes: sudo apt update 2. Install Additional Packages To install Docker, you’ll need to download four additional packages: curl: Required for interacting with web resources. software-properties-common: Enables software management via scripts. ca-certificates: Contains information about certification authorities. apt-transport-https: Necessary for data transfer over the HTTPS protocol. Download these packages with the following command: sudo apt install curl software-properties-common ca-certificates apt-transport-https -y The -y flag automatically answers "Yes" to all terminal prompts. 3. Import the GPG Key The GPG key is required to verify software signatures. It is needed to add Docker's repository to the local list. Import the GPG key with the following command: wget -O- https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor | sudo tee /etc/apt/keyrings/docker.gpg > /dev/null During the import process, the terminal may display a warning before confirming the successful execution of the command. 4. Add Docker Repository Add the repository for your version of Ubuntu, named "Jammy." For other versions, use their respective code names listed in the "System Requirements" section. Run the following command: echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null During execution, the terminal will prompt you to confirm the operation. Press Enter. 5. Update Package Indexes Again After making these changes, update the package indexes once more using the familiar command: sudo apt update 6. Verify the Repository Ensure that the installation will proceed from the correct repository by running the following command: apt-cache policy docker-ce Output example: The output may vary depending on the latest Docker releases. 
The key point is to confirm that the installation will be performed from Docker's official repository. 7. Installing Docker After configuring the repositories, proceed with the Docker installation: sudo apt install docker-ce -y The installation process will begin immediately. To confirm a successful installation, check Docker's status in the system: sudo systemctl status docker Output example: The output should indicate that the Docker service is active and running. Installing Docker Compose Docker Compose is a Docker tool designed for managing multiple containers. It is commonly used in projects where many containers must work together as a unified system. Managing this process manually can be challenging. Instead, you describe the entire configuration in a single YAML file containing the settings and configurations for all containers and their applications. There are several ways to install Docker Compose. If you need the latest version, make sure to use manual installation and installation via the Git version control system. Installation via apt-get If having the latest version is not critical for you, Docker Compose can be installed directly from the Ubuntu repository. Run the following command: sudo apt-get install docker-compose Installing via Git First, install Git: sudo apt-get install git Verify the installation by checking the Git version: git --version The output should show the Git version. Next, clone the Docker Compose repository. Navigate to the Docker Compose GitHub page and copy the repository URL. Run the following command to clone the repository: git clone https://github.com/docker/compose.git The cloning process will begin, and the repository will be downloaded from GitHub. Manual Installation Go to the Docker Compose GitHub repository and locate the latest release version under the Latest tag. At the time of writing, the Latest version of Docker Compose is v2.31.0. Let's download it: sudo curl -L "https://github.com/docker/compose/releases/download/v2.31.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose In this command, the parameters $(uname -s) and $(uname -m) automatically account for the system characteristics and architecture. After the download finishes, change the file's permissions: sudo chmod +x /usr/local/bin/docker-compose Conclusion In this guide, we covered the installation of Docker on Ubuntu 22.04, along with several ways to install Docker Compose. You can order a cloud server at Hostman for your experiments and practice.
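As a quick smoke test after installation (not part of the original steps), you can verify both tools from the terminal; hello-world is Docker's official test image:

# Check the installed versions
docker --version
docker-compose --version        # the standalone binary installed above
docker compose version          # only if the Compose v2 plugin is installed instead

# Run a disposable container to confirm the daemon works end to end
sudo docker run --rm hello-world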
04 December 2024 · 5 min to read
SQL

SQL Basics Cheat Sheet

SQL is a globally operated Query Language that interacts with the databases. It assists us in finding, editing, and handling data effectively. A cheat sheet makes learning easier by giving a quick way to memorize important commands. In this tutorial, we'll go over primary SQL commands, clauses, joins, transactions, and much more to assist you in administering databases instantly and easily. To demonstrate these concepts, we will implement each command in MySQL. SQL Data Types A data type determines the kind of values that can be preserved in a column, outlined below with their explanations: INT: It keeps integers. CHAR(n): It saves a static-size string consisting of n characters. VARCHAR(n): It keeps a variable-length string, comprising a max of n characters. TEXT: It enables the storage of extensive text or strings. DATE: It lets us store dates. DATETIME: It saves dates & times. FLOAT: It stores floating-point digits. BOOLEAN: Postgres and MySQL offer BOOLEAN for storing true or false entries. In contrast, SQL Server utilizes BIT for this purpose. Basic SQL Commands Commands let us create tables, delete, and insert or edit records. For example: CREATE: Generates new databases and other objects. SHOW: Displays a list of all accessible databases and other objects. Postgres doesn’t offer SHOW; however, the equivalent functionality can be obtained in psql by utilizing meta-commands like \l, \dt, \dn, etc. USE: Switches the database. Postgres uses \c meta-command, instead. INSERT: Appends new entries into a designated table. SELECT: Displays information from the stated table(s). UPDATE: Ugrades existing entries in a table. DELETE: Removes desired or all rows. DROP: Permanently drops a database or other objects. Example 1: Create Database Let’s generate a database called hostman_info: CREATE DATABASE hostman_info; Now execute SHOW to justify the database creation: SHOW DATABASES; Now utilize hostman_info by employing the USE command: USE hostman_info; The screenshot below demonstrates that we have successfully established a connection with the hostman_info: Example 2: Create Table The below-stated query demonstrates the table creation with various data types: CREATE TABLE hostman_team ( Hostman_EID INT AUTO_INCREMENT PRIMARY KEY, Hostman_FName VARCHAR(30), Hostman_LName VARCHAR(30), Hostman_DOB DATE, Hostman_ESalary FLOAT, Hostman_EStatus BOOLEAN ); It constructs a new hostman_team table with the requested columns, which can be confirmed with this command: SHOW TABLES; Example 3: Insert Rows Once a table is formed, we can append new entries to the hostman_team table: INSERT INTO hostman_team (Hostman_FName, Hostman_LName, Hostman_DOB, Hostman_ESalary, Hostman_EStatus)VALUES ('Anees', 'Asghar', '1995-01-01', 60000, TRUE); Similarly, users can insert as many records as necessary with a single INSERT statement. In this scenario, each entry to be appended must be separated by a comma. 
INSERT INTO hostman_team (Hostman_FName, Hostman_LName, Hostman_DOB, Hostman_ESalary, Hostman_EStatus) VALUES ('Joe', 'Root', '1990-01-15', 65000, TRUE),    ('Steve', 'Smith', '1980-03-12', 70000, FALSE); Example 4: Fetch Records Next, execute SELECT to display data from hostman_team: SELECT * FROM hostman_team; Similarly, we can extract only the preferred columns by defining their names: SELECT Hostman_FName, Hostman_LName, Hostman_ESalary FROM hostman_team; Example 5: Update Table SQL gives another helpful statement called UPDATE that assists us in editing existing records: UPDATE hostman_teamSET Hostman_ESalary = 62000WHERE Hostman_EID = 1; To edit entries as per defined criteria, we can utilize UPDATE with the WHERE clause: UPDATE hostman_teamSET Hostman_ESalary = 75000WHERE Hostman_ESalary >= 65000; Example 6: Delete Data If a particular entry is no longer relevant, we can remove it: DELETE FROM hostman_teamWHERE Hostman_EID = 3; To clear all entries of hostman_team, utilize the subsequent query: DELETE FROM hostman_team; SQL SELECT Queries SQL presents various SELECT queries that let us collect data in different ways, including filtering, arranging, and limiting results according to our requirements: DISTINCT: It fetches distinct values while deleting duplicates. WHERE: Obtain the entries according to predetermined criteria. ORDER BY: It gives a certain order to the resultant table. LIMIT: It applies restrictions to the entries to be fetched. This would extract distinct firstNames from Hostman_team: SELECT DISTINCT Hostman_FName FROM hostman_team; Similarly, the subsequent query extracts entries from Hostman_team with EmpID 2 or above and then sorts them in descending sequence to exhibit only the topmost entry: SELECT * FROM hostman_team WHERE Hostman_EID >= 2 ORDER BY Hostman_EID DESC LIMIT 1; SQL Joins SQL comes up with distinct kinds of JOIN that let us merge rows from several tables using related columns. Let’s create a couple of tables titled Hostman_depts and Hostman_staff with the following structure: CREATE TABLE Hostman_depts ( HDptID INT AUTO_INCREMENT PRIMARY KEY, HDptName VARCHAR(255), HDptLocation VARCHAR(255) ); CREATE TABLE Hostman_staff ( HStaffID INT AUTO_INCREMENT PRIMARY KEY, HFirstName VARCHAR(255), HLastName VARCHAR(255), HEmail VARCHAR(255), HPhoneNumber VARCHAR(20), HHireDate DATE, HDptID INT, FOREIGN KEY (HDptID) REFERENCES Hostman_depts(HDptID) ON DELETE CASCADE ON UPDATE CASCADE ); The above query creates the hostman_staff table with a foreign key HDptID linking to the hostman_depts table. After creating the table, we insert some records in these tables, which are populated with the following query: SELECT * FROM Hostman_depts;SELECT * FROM Hostman_staff; INNER JOIN It fetches rows that have related records in both target tables: SELECT HStaffID, HFirstName, HLastName, HEmail, HDptName, HDptLocation FROM Hostman_staff INNER JOIN Hostman_depts ON Hostman_staff.HDptID = Hostman_depts.HDptID; We combine records where the HDptID in the Hostman_staff table corresponds to the HDptID in the Hostman_depts table: LEFT JOIN It fetches all data from the left table with associated entries from the right table. If unmatched, NULLs fill the right table's columns. 
The query below displays all staff members with their respective departments and locations:
SELECT HStaffID, HFirstName, HLastName, HEmail, HDptName, HDptLocation FROM Hostman_staff LEFT JOIN Hostman_depts ON Hostman_staff.HDptID = Hostman_depts.HDptID;
Every record of Hostman_staff is returned, even if there is no related match in the Hostman_depts table.
RIGHT JOIN
It returns all details from the right table and associated entries from the left table. If unmatched, NULL will be displayed for the left table's columns:
SELECT HStaffID, HFirstName, HLastName, HEmail, HDptName, HDptLocation FROM Hostman_staff RIGHT JOIN Hostman_depts ON Hostman_staff.HDptID = Hostman_depts.HDptID;
It displays all departments and the staff members assigned to them, with NULL entries when no staff member is linked to a department.
FULL JOIN
It returns all rows from both tables, with associated records where available. The resulting table holds NULL values for missing matches:
SELECT HStaffID, HFirstName, HLastName, HEmail, HDptName, HDptLocation FROM Hostman_staff FULL JOIN Hostman_depts ON Hostman_staff.HDptID = Hostman_depts.HDptID;
It shows all staff members and all departments, even when a department has no staff members allocated to it.
Note: Some SQL implementations, including MySQL, do not directly support FULL OUTER JOIN. In that case, we can combine LEFT and RIGHT JOIN with UNION to accomplish similar functionality:
SELECT HStaffID, HFirstName, HLastName, HEmail, HDptName, HDptLocation FROM Hostman_staff LEFT JOIN Hostman_depts ON Hostman_staff.HDptID = Hostman_depts.HDptID
UNION
SELECT HStaffID, HFirstName, HLastName, HEmail, HDptName, HDptLocation FROM Hostman_staff RIGHT JOIN Hostman_depts ON Hostman_staff.HDptID = Hostman_depts.HDptID;
Aggregate Functions
SQL offers several aggregate functions that perform computations over many rows and return a single result:
COUNT: Counts the total number of records.
SUM: Computes the sum of the targeted column.
AVG: Calculates the column average.
MIN: Extracts the column's minimum value.
MAX: Finds the column's maximum value.
Let's invoke the aggregate functions to demonstrate how they work in practice:
SELECT COUNT(*) AS TotalStaff, MIN(HHireDate) AS EarliestHireDate, MAX(HHireDate) AS LatestHireDate FROM Hostman_staff;
The result shows TotalStaff, EarliestHireDate, and LatestHireDate.
Grouping and Filtering in SQL
SQL contains several clauses for grouping and filtering a table's data, as illustrated below.
GROUP BY
It combines rows with identical values in the targeted columns into a single summary row:
SELECT HDptID, COUNT(*) AS TotalStaff FROM Hostman_staff GROUP BY HDptID;
The staff members are grouped by HDptID, showing the total staff in each department.
HAVING
It filters groups according to an aggregate condition. Unlike WHERE, which filters rows before aggregation, HAVING filters groups after aggregation:
SELECT HDptID, COUNT(*) AS TotalStaff FROM Hostman_staff GROUP BY HDptID HAVING COUNT(*) > 3;
It groups staff by HDptID, counts the staff members in each department, and shows only departments with more than 3 staff members.
Aliases and Subqueries
SQL aliases shorten table and column names, while subqueries let us return data by embedding one query within another.
Aliases
Aliases are temporary names assigned to tables or columns to make queries easier to read:
SELECT HFirstName AS FN, HLastName AS LN, HDptID AS DID FROM Hostman_staff AS HS;
Subqueries
Subqueries are queries embedded inside another query; the outer query acts on the results they return:
SELECT HFirstName, HLastName, HDptID FROM Hostman_staff WHERE HDptID = ( SELECT HDptID FROM Hostman_staff GROUP BY HDptID ORDER BY COUNT(*) DESC LIMIT 1 );
It fetches the staff members who belong to the department with the highest number of staff.
Indexes
Indexes boost the data retrieval rate but consume extra storage and require additional maintenance. Let's create an index titled idx_HFirstName on the HFirstName column of Hostman_staff:
CREATE INDEX idx_HFirstName ON Hostman_staff (HFirstName);
To remove an index in MySQL, specify its name and table:
DROP INDEX idx_HFirstName ON Hostman_staff;
Constraints in SQL
Constraints impose limitations on table content to maintain accuracy and consistency:
PRIMARY KEY: Uniquely identifies every row.
FOREIGN KEY: Maintains referential integrity between tables.
NOT NULL: Rejects NULL entries.
UNIQUE: Accepts only distinct entries.
CHECK: Applies a validation check to the data.
Let's construct a Hostman_orders table with the columns Hostman_OID, Hostman_ODate, Hostman_EID, and more:
CREATE TABLE Hostman_orders (
  Hostman_OID INT AUTO_INCREMENT PRIMARY KEY,
  Hostman_ODate DATE NOT NULL,
  Hostman_EID INT NOT NULL,
  Hostman_OrderAmount DECIMAL(10, 2) CHECK (Hostman_OrderAmount > 0),
  Hostman_Status VARCHAR(50) DEFAULT 'Pending',
  HDptID INT,
  FOREIGN KEY (HDptID) REFERENCES Hostman_depts(HDptID),
  FOREIGN KEY (Hostman_EID) REFERENCES Hostman_staff(HStaffID),
  CHECK (Hostman_ODate >= '2020-01-01')
);
Hostman_OID is set as the primary key, ensuring unique identification for each order. Hostman_ODate must hold a date on or after January 1, 2020. Hostman_EID must reference a valid HStaffID from the Hostman_staff table via a foreign key constraint, and HDptID references a valid HDptID from the Hostman_depts table through another foreign key constraint. Additionally, Hostman_OrderAmount has a check constraint to ensure it holds a value greater than 0, and Hostman_Status has a default value of Pending.
Data Modifying Statements
ALTER TABLE enables modifications to a table's structure. These changes can involve adding, editing, or deleting columns:
ALTER TABLE Hostman_staff ADD HStaffEmail VARCHAR(100);
It appends a column titled HStaffEmail to the Hostman_staff table. To change the type of the HStaffEmail column, we employ the MODIFY clause:
ALTER TABLE Hostman_staff MODIFY COLUMN HStaffEmail TEXT;
To delete HStaffEmail from Hostman_staff, we employ the DROP clause:
ALTER TABLE Hostman_staff DROP COLUMN HStaffEmail;
SQL Transactions
SQL transactions make sure that multiple operations are carried out as one cohesive action to keep data accurate and consistent (a short worked example follows the conclusion below):
COMMIT: Finalizes and keeps any modifications made during the current transaction.
ROLLBACK: Cancels any modifications applied throughout the ongoing transaction, reversing all alterations.
SAVEPOINT: Designates a precise point within a transaction to which it can return if needed.
ROLLBACK TO: Undoes modifications back to the chosen savepoint if a problem emerges.
Conclusion
In this cheat sheet, we've gone over the core SQL concepts for successfully managing data in databases. Grasping fundamental SQL principles is vital for manipulating and engaging with databases effectively.
We've also illustrated advanced concepts like transaction control, joins, aggregate functions, and SQL constraints that can help you manage data more accurately.
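As promised above, here is a minimal MySQL sketch of the transaction commands from the cheat sheet. It reuses the hostman_team table from the earlier examples; the salary values are arbitrary and only serve to show COMMIT, SAVEPOINT, and ROLLBACK TO in action.
START TRANSACTION;
-- First change: give employee 1 a raise
UPDATE hostman_team SET Hostman_ESalary = 64000 WHERE Hostman_EID = 1;
SAVEPOINT after_first_raise;
-- Second change: give employee 2 a raise
UPDATE hostman_team SET Hostman_ESalary = 80000 WHERE Hostman_EID = 2;
-- Undo only the second change, keeping the first one
ROLLBACK TO after_first_raise;
-- Persist everything still pending in the transaction
COMMIT;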
04 December 2024 · 11 min to read
Microservices

Popular Message Brokers in Microservice Architecture: NATS, Kafka, and RabbitMQ

Anyone who encounters the term "microservices architecture" for the first time may wonder what it is and how it works. Simply put, microservice architecture is a software development approach where an application is divided into many small, independent modules (microservices). Each module performs its specific function and operates independently of the others. To communicate and interact with each other, these modules need an intermediary that will facilitate the transmission and translation of messages. In the world of microservices, message brokers serve this role — software components that provide communication and consistency between individual services. In this article, we will take a closer look at popular message brokers, understand their purpose, and learn which broker is best suited for different situations. Why Do You Need a Message Broker in Microservice Architecture? Microservice architecture, where an application is broken down into small independent services, offers several advantages that contribute to flexibility, scalability, and fault tolerance in the process of application creation and maintenance. In such an architecture, ensuring successful interaction and data exchange between independent microservices is crucial. This is where message brokers come into play. Let's explore a few key reasons why a message broker is needed: Helps Microservices Communicate: Without a broker, each microservice would have to establish a direct connection with every other service, leading to unnecessary complexity and chaos. Protects Against Data Loss: If a microservice "crashes" or stops functioning, the broker will hold the messages until the recipient is ready to process them, ensuring system resilience in the event of temporary failures. Increases System Flexibility: If we need to add a new microservice or remove an old one, the broker makes this change easier by tracking all messages and determining where they should be routed. Enables Asynchronous Communication Patterns: A message broker allows the implementation of design patterns such as "message queue" or "publish-subscribe." This means microservices can send information without worrying about who will receive it and when, adding flexibility and parallelism to operations. Helps with Load Distribution: Message brokers can distribute messages evenly between services, ensuring load balancing and smooth data flow. Today, there are many different message brokers available in the market, such as Apache Kafka, RabbitMQ, NATS (NATS Messaging System), ActiveMQ, Redis Pub/Sub, Amazon SNS, Google Cloud Pub/Sub, Microsoft Azure Service Bus, and others. Let’s look at three of the most popular message brokers: Kafka, NATS, and RabbitMQ. Apache Kafka Apache Kafka is a high-performance message broker designed for data exchange in distributed systems. Created at LinkedIn and later becoming an open project under the Apache Software Foundation, Kafka provides a reliable and resilient mechanism for real-time message transmission between different system components. Data Organization in Apache Kafka Topics and Partitions: In Apache Kafka, data is organized into topics. A topic is a logical category that represents a stream of messages. For instance, a topic could be created for events of a particular type. Topics allow efficient organization of data streams. Each topic is divided into several partitions. Partitions are used for the physical distribution of data within a topic. This enables parallel processing of messages, enhancing system performance. 
Producers and Consumers: Producers are responsible for sending messages to topics. They create data or events and publish them to specific Kafka topics. Consumers, on the other hand, subscribe to topics and process the incoming messages. They can read data from one or more partitions. Offsets: Each message within a topic has a unique identifier called an offset. The offset is a numerical value that indicates the position of a message within a partition. This ensures data durability, as the system remembers the last offset processed by each consumer. In case of a failure or restart, a consumer can resume processing from the saved offset, preventing message duplication or data loss. For example, imagine a topic called "logs" with three partitions. The producer writes server logs to this topic. Consumers subscribe to different partitions, processing logs asynchronously. The offsets for each consumer track the progress of data processing, ensuring accuracy and recovery in case of failures. This data structure in Kafka provides flexibility, scalability, and resilience in message exchange across distributed systems. Additionally, Kafka is a distributed system consisting of multiple brokers. Brokers work in a cluster, ensuring high availability, fault tolerance, and distributed data processing. A typical Kafka cluster includes several brokers, each performing its function in the system, handling data, managing partitions, and ensuring overall performance. Advantages and Features of Apache Kafka High Performance Due to its distributed architecture and the use of multiple replicas for each partition, Apache Kafka can easily handle millions of messages per second. This makes it an essential tool for working with stream data, especially when dealing with large volumes of information. Kafka’s high throughput ensures it can support demanding applications, such as real-time analytics or large-scale event processing. Guaranteed Message Delivery When a producer sends a message, Kafka guarantees its delivery. This is achieved through atomic operations, acknowledgments, replication, and a leader-follower structure within the system. These features ensure a high level of confidence in the durability and integrity of transmitted messages, even in the event of network or system failures. Scalability and Flexibility Kafka’s dynamic data distribution across a cluster of brokers allows it to scale effortlessly, ensuring an even load distribution and optimal resource management as data volumes grow. The ability to create multiple topics and partitions enhances the flexibility in stream management, enabling companies to organize data based on the specific needs of their applications. Fault Tolerance and Replication Kafka implements a data replication mechanism between brokers. Each partition of a topic has multiple replicas distributed across different brokers in the cluster. When data is written to a topic, it is replicated to other brokers. This replication ensures the system’s fault tolerance. In case one broker fails, other brokers holding the replica data remain available, guaranteeing continuous operation even in unforeseen situations. Wide Adoption Large companies such as LinkedIn, Uber, and Airbnb use Apache Kafka to manage real-time data streams. Kafka’s application in these organizations demonstrates its effectiveness in handling high workloads and meeting stringent data processing requirements. 
Ecosystem and Integration Kafka's ecosystem includes a variety of tools and libraries, with notable components like Kafka Streams and Kafka Connect. These components provide powerful capabilities for stream processing, data analysis, and integration with other systems. Kafka Streams enables real-time stream processing directly within Kafka, while Kafka Connect facilitates data synchronization between Kafka and external systems like databases or file systems. RabbitMQ RabbitMQ is a highly reliable, open-source message broker designed to ensure stable asynchronous communication between different components within a system. The AMQP (Advanced Message Queuing Protocol) enables reliable and flexible communication between applications. This makes RabbitMQ a popular choice for integrating and decoupling services in distributed systems. Data Organization in RabbitMQ Queues and Exchanges: Queues in RabbitMQ are specialized storage areas for temporarily holding messages. Producers send messages to specific queues, where they are held until consumers retrieve and process them. Exchanges act as message routers. They decide which queue(s) the message should be sent to based on routing rules and the type of exchange used. Producers and Consumers: Producers send messages either directly to a queue or to an exchange. The producer may specify a routing key to indicate the desired destination queue. Consumers listen to queues and retrieve messages for further processing. Message Flow in RabbitMQ A producer generates a message and sends it to an exchange. The producer can also specify a routing key, a label the exchange uses to route the message to the correct queue. Once the exchange receives the message, it applies routing rules (based on the exchange type and routing key) to decide which queue(s) the message will be sent to. Each queue is associated with an exchange and waits for messages to arrive for processing. Consumers subscribe to queues and process messages from them. Types of Exchanges in RabbitMQ Direct Exchange: Routes messages to queues based on an exact match between the routing key and the queue’s binding key. Example: A producer might send a message with the routing key "error," and the direct exchange will route it to the queue specifically bound to the "error" routing key. Fanout Exchange: Routes messages to all queues that are bound to the exchange, ignoring the routing key. It is often used when the same message needs to be broadcasted to multiple consumers. Example: A broadcast message to all consumers, regardless of the specific routing criteria. Topic Exchange: Routes messages to queues based on wildcard patterns in the routing key. This allows for more flexible routing based on specific message attributes. Example: A routing key might be "stock.usd.nyse" and the exchange could route the message to queues bound with patterns like "stock.*.nyse" (all stocks in the NYSE). Headers Exchange: Routes messages based on the headers of the message (such as content type or priority) rather than the routing key. This type of exchange provides more fine-grained control over message routing. Example: A message might include a header like "priority: high," and the exchange will route it to the appropriate queue based on the header value. Advantages and Features of RabbitMQ Routing Flexibility RabbitMQ allows highly configurable message routing via exchanges and queues. For instance, with a topic exchange, you can route messages to multiple queues based on patterns in the message’s routing key. 
This flexibility makes RabbitMQ ideal for various use cases, such as order management systems or event-driven systems, where different types of messages may need to be sent to different consumers based on their content. Support for Multiple Data Exchange Protocols One of RabbitMQ’s standout features is its support for a wide range of protocols. Primarily, it uses AMQP (Advanced Message Queuing Protocol), a standardized protocol that ensures smooth communication between system components. Additionally, RabbitMQ supports HTTP/HTTPS and other popular protocols like STOMP and MQTT. This makes it versatile for various application requirements and communication needs. High Availability and Replication Similar to Kafka, RabbitMQ ensures high availability and data redundancy through data replication. This means that messages are replicated across different nodes in the cluster, so even if one broker fails, the data remains accessible. This reduces the risk of message loss, especially in critical systems where reliability is key. High Performance RabbitMQ is built to handle large volumes of messages efficiently. It can process a high throughput of messages per second, which makes it suitable for high-load environments. Whether you're handling user notifications or event streams, RabbitMQ can scale to meet the demands of high-performance applications. Integration with a Wide Range of Languages and Platforms RabbitMQ provides official client libraries for several popular programming languages, including Java, Python, .NET (C#), Ruby, JavaScript, Go, and many others. This ensures seamless integration with a wide variety of technologies, making it easier to implement in diverse development ecosystems. Whether you're working with web applications, mobile backends, or microservices, RabbitMQ can be incorporated into your stack effectively. NATS NATS is a lightweight, high-performance message broker designed for simplicity and fast asynchronous communication in distributed systems. Data Organization in NATS Topics (Subjects): In NATS, data is organized into topics (referred to as subjects), which are named channels for message transmission. Topics are hierarchical and can be structured with segments separated by dots (e.g., service1.logs.info), allowing for organized and flexible message routing. Publish/Subscribe Model: NATS operates on a publish/subscribe (pub/sub) model. Publishers send messages to topics, and subscribers listen to those topics to receive messages. This decouples producers and consumers, facilitating scalable and efficient messaging. Advantages and Features of NATS Simplicity and Performance NATS is optimized for simplicity and high-speed message delivery. The pub/sub model allows publishers to send messages to topics, and all subscribers to that topic will instantly receive the message. The minimal overhead ensures that messages are transmitted with low latency, making NATS ideal for high-performance applications. Statelessness One of NATS's core features is its stateless nature. It doesn't store information about previous messages or track the state of subscribers. This simplifies scalability since there is no need for complex state synchronization, and you can add new nodes with minimal overhead. No Default Queues Unlike other brokers like RabbitMQ or Kafka, NATS does not use queues by default. This makes it particularly well-suited for scenarios where the timeliness of messages is more important than their durability or retention. 
This setup eliminates the need for queue management and configuration. Reliable Delivery Protocol NATS offers a reliable "at-most-once delivery" protocol, ensuring that messages are delivered to recipients at most once. While it does not guarantee message persistence, this is sufficient for use cases where quick, reliable delivery is needed without the complexity of guaranteed delivery or storage of past messages. These features make NATS a great choice for applications requiring fast, simple, and scalable communication with minimal overhead, ideal for microservices, IoT, and real-time systems. Which Broker to Choose? The choice of a message broker largely depends on the data volume and your project's performance requirements. Each of the brokers discussed offers unique capabilities tailored to specific data processing needs. Apache Kafka: Real-Time Data Stream Processing Apache Kafka might be the ideal choice if your project handles huge data streams, especially in real time. Its architecture, designed for stream processing, ensures high performance and scalability, making it well-suited for applications that need to process large amounts of data in real time. Use Case Example: A financial market analytics system, where real-time transaction processing and data storage for auditing are crucial. In Hostman, we offer a pre-configured and ready-to-use Kafka service in the cloud. RabbitMQ: Flexible Routing and Diverse Patterns If your project requires flexible message routing and support for various interaction patterns, RabbitMQ is a better fit. With its variety of exchanges and customizable routing types, RabbitMQ provides extensive capabilities for creating complex message exchange scenarios. Use Case Example: An order management system in e-commerce, where asynchronous processing of orders and customer notifications are key. If you need an efficient messaging solution between components in your system, consider using managed databases (including RabbitMQ) in Hostman. We offer a reliable and scalable cloud solution for managing message exchange and data across different systems. NATS: Lightweight and Fast Asynchronous Communication NATS offers an optimal solution for projects focused on lightweight and fast asynchronous communication in distributed systems. Due to its simplicity and high performance, NATS is the perfect choice for scenarios where message exchange must be as fast as possible and have optimal resource usage. Use Case Example: An IoT monitoring system that requires fast and reliable event transmission from sensors to a server for further processing. Conclusion In this article, we reviewed three key message brokers: Apache Kafka, RabbitMQ, and NATS. Each of them has unique features that make them suitable for different tasks. Choosing the right broker is a decision based on the specific needs of your project. To make the right choice, assess your requirements, prioritize your goals, and carefully evaluate each broker in the context of your objectives. We hope this guide helps you make an informed decision and successfully implement a message broker in your project.
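To make the queueing concepts above more tangible, here is a minimal publisher sketch in Python. It assumes a RabbitMQ broker running on localhost and the third-party pika client library; the queue name and message body are illustrative and not prescribed by the article.
import pika

# Connect to a RabbitMQ broker assumed to be running on localhost
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Declare the queue (idempotent: created only if it does not already exist)
channel.queue_declare(queue="orders")

# Publish through the default exchange; the routing key matches the queue name
channel.basic_publish(exchange="", routing_key="orders", body="order #42 created")

connection.close()
A consumer would do the mirror image: declare the same queue and register a callback with channel.basic_consume, which is how the producer/consumer decoupling described above looks in practice.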
03 December 2024 · 14 min to read
Servers

How to Protect a Server from DDoS Attacks

A DDoS attack (Distributed Denial of Service) aims to overwhelm a network with excessive traffic, reducing its performance or causing a complete outage. This is reflected in the term "denial-of-service" (refusal of service). The frequency and intensity of DDoS attacks have been rising rapidly. A report by Cloudflare noted that in 2021, the number of attacks grew by one-third compared to 2020, with a peak in activity observed in December. The duration of a DDoS attack can vary. According to research by Securelist: 94.95% of attacks end within four hours. 3.27% last between 5 to 9 hours. 1.05% persist for 10 to 19 hours. Only 0.73% of all attacks extend beyond 20 hours. Effective Tools for Protecting a Server from DDoS Attacks If you don't want to rely on vendors' solutions, paid services, or proprietary software, you can use the following tools to defend against DDoS attacks: IPTables. A powerful firewall tool available in Linux systems that allows precise control over incoming and outgoing traffic. CSF (ConfigServer Security and Firewall). A robust security tool that simplifies managing firewall rules and provides additional protection mechanisms. Nginx Modules. Modules specifically designed for mitigating DDoS attacks, such as limiting the number of requests per IP or delaying excessive requests. Software Filters. Tools or scripts that analyze and filter traffic to block malicious or excessive requests, helping to maintain service availability. IPTables. Blocking Bots by IP Address The IPTables tool helps protect a server from basic DDoS attacks. Its primary function is to filter incoming traffic through special tables. The resource owner can add custom tables. Each table contains a set of rules that govern the tool's behavior in specific situations. By default, there are only two response options: ACCEPT (allow access) and REJECT (block access). In IPTables, it is possible to limit the number of connections.  If a single IP address exceeds the allowed number of connections, the tool will block access for that IP. You can extend the tool's functionality with additional criteria: Limit: Sets a limit on the number of packet connections within a chosen time period. Hashlimit: Works similarly to Limit, but applies to groups of hosts, subnets, and ports. Mark: Used to mark packets, limit traffic, and filter. Connlimit: Limits the number of simultaneous connections for a single IP address or subnet. IPRange: Defines a range of IP addresses that are not considered as a subnet by the tool. Additionally, IPTables can use criteria such as Owner, State, TOS, TTL, and Unclean Match to set personalized configurations, effectively protecting the resource from DDoS attacks. The ipset kernel module allows you to create a list of addresses that exceed the specified connection limit. The ipset timeout parameter sets a time limit for the created list, which is enough to ride out a DDoS attack. By default, IPTables settings return to their basic configuration after a system reboot. To save the settings, you can use additional utilities (such as iptables-save or iptables-persistent), but it is recommended to start with the default options to avoid saving incorrect settings that could block server access for everyone. ConfigServer Security and Firewall While IPTables is a convenient and effective tool, it can be quite complex to configure. You’ll need to learn how to manage it and write additional scripts, and if something goes wrong, your resource may end up being a "closed club" for just a few users. 
CSF (ConfigServer Security and Firewall) is a "turnkey" configurator, meaning you only need to set the correct parameters and not worry about the server's security.
Installing the Server Firewall
The preliminary installation steps involve downloading two additional components required to run CSF: the Perl interpreter and the libwww library. The next step is to install ConfigServer Security and Firewall itself. Since the tool is not available in the official repository, you'll need to download the ready-made archive directly:
cd /usr/src
wget https://download.configserver.com/csf.tgz
After downloading, extract the archive, change into the extracted directory, and run the installation script. Once installed successfully, you can proceed with configuring CSF.
Configuring the Server Firewall
By default, the settings in ConfigServer Security and Firewall are active for 5 minutes, after which any changes are reset. This test mode is useful for conducting experiments and spotting errors in the applied configuration. To switch to live mode, change the Testing value to 0. Proper configuration of CSF ensures reliable protection against DDoS attacks. Here are some essential CSF settings and commands:
Specify incoming ports: TCP_IN = "22,23,25,36,75,87"
Specify outgoing ports: TCP_OUT = "22,23,25,36,75,87"
Configure email notifications for SSH connections: LF_SSH_EMAIL_ALERT = "1"
Add an IP address to the exception list (useful for server management teams): csf -a 192.168.0.7
Block a specific IP address from connecting to the server: csf -d 192.168.0.6
Nginx Modules
How can you protect your server from DDoS attacks using simpler methods? Use Nginx modules like limit_conn and limit_req. The limit_conn module limits the maximum number of connections to the server, while the limit_req module limits the number of requests within a specified time frame. For example, to limit each client to 30 simultaneous connections and 3 requests per second, first define the shared-memory zones:
limit_conn_zone $binary_remote_addr zone=perip:30m;
limit_req_zone $binary_remote_addr zone=dynamic:30m rate=3r/s;
This configuration allows only 3 requests per second per client; additional requests are queued. The burst parameter controls the queue size: if burst is set to 7, the module will queue up to 7 requests once the rate limit is exceeded, while any further requests are rejected with an error. A complete sketch that applies these zones with the limit_conn and limit_req directives appears at the end of this article.
Software Filter
Server protection against DDoS attacks can also be achieved using web applications. A traffic filter relies on JavaScript checks, which simple bots cannot execute, effectively redirecting attack traffic to a placeholder page. The operation of the filter is simple: the configuration defines conditions for blocking bots, and when a visitor meets those conditions, they are redirected to a placeholder page instead of the requested page. The filter can also specify the reason for the redirection.
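As noted in the Nginx Modules section, the limit_conn_zone and limit_req_zone directives only define shared-memory zones; they take effect once applied with limit_conn and limit_req inside a server or location block. Here is a minimal sketch; the zone names and sizes mirror the example above, while the port and location are illustrative:
http {
    limit_conn_zone $binary_remote_addr zone=perip:30m;
    limit_req_zone  $binary_remote_addr zone=dynamic:30m rate=3r/s;

    server {
        listen 80;
        location / {
            # at most 30 simultaneous connections per client IP
            limit_conn perip 30;
            # 3 requests per second, with up to 7 extra requests queued
            limit_req zone=dynamic burst=7;
        }
    }
}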
03 December 2024 · 6 min to read
Servers

How to Protect a Server: 6 Practical Methods

Any IT infrastructure requires robust protection. While information security is a vast topic, there are basic steps that can safeguard against attacks from amateur hackers and bots. This article outlines six straightforward methods to protect your server effectively.
Tools and Methods of Protection
Securing a server from breaches involves a combination of measures. These can be categorized into the following areas:
Securing communication channels used for system administration and operation.
Implementing multi-layered security for the system.
Restricting access to infrastructure resources.
Monitoring and auditing system activities.
Backing up data.
Timely updates or rollbacks of software.
Antivirus protection for servers.
Below, we detail six practical methods to achieve a robust security level against amateur attackers and bots.
Privilege Restriction
When managing access to resources, follow the principle of least privilege: users and processes should only have the minimal permissions necessary to perform their tasks. This is particularly important for databases and operating systems. This approach not only prevents unauthorized external access but also mitigates risks from internal threats.
Separate Accounts for Administrators: Create individual accounts for each admin. Use non-privileged accounts for operations that don't require elevated permissions.
Active Directory: In environments using Microsoft Active Directory, regularly audit and configure group policies. Mismanagement of these policies can lead to severe security breaches, especially if exploited by a malicious admin or hacker.
Minimize Root Usage in Unix Systems: Avoid working as the root user. Instead, disable the root account and use the sudo program for tasks requiring elevated permissions. To customize sudo behavior, modify the /etc/sudoers file using the visudo command. Below are two useful directives for monitoring sudo activity.
By default, sudo logs to syslog. To store logs in a separate file for better clarity, add the following to /etc/sudoers:
Defaults log_host, log_year, logfile="/var/log/sudo.log"
The next directive additionally records each command's input and output (stdin, stdout, stderr) into /var/log/sudo-io, the default I/O log directory:
Defaults log_input, log_output
For a deeper dive into managing the sudoers file, check this guide.
Mandatory Access Control (MAC)
This recommendation focuses on Linux systems and builds upon the principle of access control. Many Linux administrators rely solely on discretionary access control (DAC) mechanisms, which are basic and always active by default. However, several Linux distributions include mandatory access control (MAC) mechanisms, such as AppArmor in Ubuntu and SELinux in RHEL-based systems. While MAC requires more complex configuration of the OS and services, it allows for granular access control to filesystem objects, significantly enhancing the server's security.
Remote Administration of Operating Systems
When remotely administering an operating system, always use secure protocols:
For Windows, use RDP (Remote Desktop Protocol).
For Linux, use SSH (Secure Shell).
Although these protocols are robust, additional measures can further strengthen security. For RDP, you can block connections from accounts with blank passwords. You can configure this via Local Security Policy under the setting: Accounts: Limit local account use of blank passwords to console logon only. RDP sessions can be protected with the secure TLS transport protocol, which will be discussed later.
By default, SSH user authentication relies on passwords. Switching to SSH key-based authentication provides stronger protection, as a long key is far more difficult to brute-force than a password. Additionally, key-based authentication eliminates the need to enter a password during login, since the public key is stored on the server and the private key stays on your machine. Setting up keys requires only a few simple steps.
Generate a key pair on your local machine:
ssh-keygen -t rsa
Copy the public key to the remote server:
ssh-copy-id username@remote_address
If key-based authentication is not an option, consider implementing Fail2ban. This tool monitors failed login attempts and blocks the IP addresses of attackers after a specified number of failed attempts. Additionally, changing default ports can help reduce the likelihood of automated attacks:
Default SSH port 22/tcp → Choose a non-standard port.
Default RDP port 3389/tcp → Use a custom port.
Firewall Configuration
A robust security system is layered. Relying solely on access control mechanisms is insufficient; it is more logical to manage network connections before they reach your services. This is where firewalls come in. A firewall provides network-level access control to segments of the infrastructure. The firewall decides which traffic to permit through the perimeter based on a specific set of allow rules. Any traffic that does not match these rules is blocked. In Linux, the firewall is integrated into the kernel (via netfilter), and you can manage it using a frontend tool such as nftables, iptables, ufw, or firewalld.
The first step in configuring a firewall is to close unused ports and keep only those that are intended for external access. For instance, a web server typically requires ports 80 (HTTP) and 443 (HTTPS) to remain open. While an open port itself is not inherently dangerous (the risk lies in the program behind the port), it is still better to eliminate unnecessary exposure.
In addition to securing the external perimeter, firewalls can segment infrastructure and control traffic between these segments. If you have public-facing services, consider isolating them from internal resources by using a DMZ (Demilitarized Zone). Additionally, it's worth exploring Intrusion Detection and Prevention Systems (IDS/IPS). These solutions work on the opposite principle: they block security threats while allowing all other traffic through. Hostman offers a cloud firewall that provides cutting-edge defense for your server.
Virtual Private Networks (VPNs)
Up until now, we have focused on protecting a single server. Let's now consider securing multiple servers. The primary purpose of a Virtual Private Network (VPN) is to provide secure connectivity between organizational branches. Essentially, a VPN creates a logical network over an existing network (e.g., the Internet). Its security is ensured through cryptographic methods, so the protection of connections does not depend on the underlying network's security.
There are many protocols available for VPNs, and the choice depends on the size of the organization, network architecture, and required security level. PPTP (Point-to-Point Tunneling Protocol) is a simple option for a small business or home network, as it is widely supported on routers and mobile devices. However, its encryption methods are outdated. For high-security needs and site-to-site connections, protocols like IPsec are suitable. For site-to-host connections, options like WireGuard are more appropriate.
WireGuard and similar protocols provide advanced security but require more intricate configuration compared to PPTP. TLS and Public Key Infrastructure (PKI) Many application-layer protocols, such as HTTP, FTP, and SMTP, were developed in an era when networks were limited to academic institutions and military organizations long before the invention of the web. These protocols transmit data in plaintext. To ensure the security of a website, web control panels, internal services, or email, you should use TLS. TLS (Transport Layer Security) is a protocol designed to secure data transmission over an untrusted network. While the term SSL (e.g., SSL certificates, OpenSSL package) is often mentioned alongside TLS, it’s important to note that the modern versions of the protocol are TLS 1.2 and TLS 1.3. Earlier versions of TLS and its predecessor, SSL, are now considered obsolete. TLS provides privacy, data integrity, and resource authentication. Authentication is achieved through digital signatures and the Public Key Infrastructure (PKI). PKI functions as follows: the server's authenticity is verified using an SSL certificate, which is signed by a Certificate Authority (CA). The CA’s certificate is, in turn, signed by a higher-level CA, continuing up the chain. The root CA certificates are self-signed, meaning their trust is implicitly assumed. TLS can also be used with Virtual Private Networks (VPNs), such as setting up client authentication using SSL certificates or a TLS handshake. In this case, it would be necessary to organize your own PKI within the local network, including a CA server, as well as the keys and certificates for network nodes. The Dangers of Attackers The level of threat depends on the type of attack. Cyberattacks can be broadly categorized into two main types. Breaching the Security Perimeter This type of attack involves gaining unauthorized access to the account of an authenticated user of a service or system, such as a database. Breaches of privileged accounts pose significant risks because attackers gain the ability to view sensitive information and modify system parameters. The most critical type of breach involves gaining unauthorized access to the superuser account of the operating system, potentially compromising a significant portion of the infrastructure. Disabling Systems This category of attacks aims to disrupt system operations rather than steal data, but it is no less dangerous. The most prominent example is a DoS (Denial of Service) or DDoS (Distributed Denial of Service) attack. These attacks overload the server with a flood of requests, causing it to fail and become unresponsive to legitimate users. In some cases, a DoS attack serves as a precursor to other forms of cyberattacks. The results of cyberattacks often include data breaches, financial losses, and reputational damage. For this reason, even the most basic level of security should be implemented when establishing an IT infrastructure.
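To make the firewall hardening advice above concrete, here is a minimal sketch using ufw, one of the frontends mentioned in the Firewall Configuration section. It assumes a web server that should expose only HTTP, HTTPS, and SSH moved to a non-standard port (2222 is an arbitrary choice):
# Deny all inbound traffic by default, allow outbound
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Open only the services that must be reachable from outside
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 2222/tcp   # SSH relocated from the default port 22

# Enable the firewall and review the resulting rules
sudo ufw enable
sudo ufw status verbose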
02 December 2024 · 8 min to read
Servers

Load Testing: Objectives, Tasks, and Procedure

This article explores the features and benefits of load testing a web server, discussing why it is important and how to perform it correctly. What Is Load Testing? Load testing is the process of evaluating the performance and reliability of a web server using specialized tools designed to simulate real-world server loads. These tools emulate the activity of a specified number of users and document the resulting load on the server. The collected data is then analyzed to assess the performance of hardware resources, communication channels, and server software. Why Use Load Tests: Objectives of Testing Most websites and applications are created to generate revenue, or profitability is set as one of the project goals. The performance of the server—its ability to handle the planned number of simultaneous users—is a key success factor. If a server cannot handle a surge in visitors, it results in decreased traffic, negatively impacting the website's behavioral metrics. As a result, the site's ranking in search engine results drops, reducing organic traffic and leading to a decline in sales and advertising revenue. Such failures can be equally disastrous for web applications used by thousands of people. The primary goal of load testing is to evaluate server capacity under extreme conditions, pushing it to its operational limits. This helps determine whether additional resources are needed or if existing ones are sufficient for stable operation. The outcome includes mitigating the risk of site or application downtime and achieving significant cost savings in the long run. Step-by-Step Guide to Load Testing a Server Let’s break down the entire process into sequential steps: Preparation for Testing. Before conducting load testing, start with functional testing to ensure the chosen tools and configurations are correct. Define Objectives. Typical objectives include identifying the server’s performance limits and detecting system bottlenecks. Specify Requirements. Clearly define the requirements, such as: 90% of users must be served within a maximum of 10 seconds each. Develop Scenarios. Create scenarios based on typical user behavior on the website, application, or service. Choose Tools. Select software that best aligns with the testing goals. Configure Tools. Set the load levels and write scripts to simulate user behavior. Execute Testing. Gradually increase the load while documenting critical thresholds. Analyze Results. Process the collected data, draw conclusions, and prepare recommendations for improving system performance. Objectives and Requirements The type and scale of the load, as well as the metrics to monitor, depend on the specific objectives. Common tasks include: Determining the server’s performance limits. Checking configuration reliability. Monitoring backups. Identifying problematic areas in the system. Regarding requirements, they often define user service times as percentages. It’s important to avoid aiming for 100% of users to be served within a strict timeframe, as a buffer (typically around 10%) is necessary. This allows the system to handle unexpected events without failures. User Scenarios User scenarios depend on how users interact with the site. For example, a typical scenario for an online store might include: Logging in. Searching for and selecting a product. Viewing the product details. Adding the product to the cart. Proceeding to the cart. Initiating the checkout process. Filling in form fields. Confirming and paying for the purchase. 
The exact flow depends on the functionality of the site or application. After modeling one or more typical scenarios, identify the most resource-intensive pages and select tools to simulate the load on these critical points. Tools for Load Testing If the objectives allow, it is reasonable to use free and open-source tools for testing. One of the most popular options is Apache JMeter, a highly configurable cross-platform software that supports all web protocols. JMeter makes it easy to develop scripts that simulate user actions on a website or application. Once the scripts are created, we can set the load levels and proceed with the testing process.  However, JMeter is not the only tool for load testing. Other options include WAPT, NeoLoad, Siege, Gobench, WRK, Curl-loader, Tsung, and many more. Each of these tools has unique features. Before choosing one, review their descriptions, study available information, and consider user reviews and forums. Load Testing After defining typical scenarios and selecting appropriate tools, the testing process begins. Most scenarios involve gradually increasing the load. The number of concurrent threads or users increases until response times rise. This marks the first critical threshold, often referred to as the degradation point. The second threshold, known as the sub-critical point, occurs when response times exceed acceptable limits. The system can still process requests at this stage, but response times hit the SLA (Service Level Agreement) threshold. Beyond this point, delays accumulate rapidly, causing the system to reach the critical point. The critical point, or failure point, occurs when the server's resources are exhausted—either CPU power or memory runs out. At this stage, the server crashes, signaling the end of testing and the start of data analysis. Analysis of Load Testing Results Testers analyze the collected data to identify bottlenecks. Sometimes, you can resolve the issues by adjusting configurations or refining the code. In other cases, a specific service within the project may cause delays, requiring targeted optimization. This might involve configuration adjustments or scaling the service. For high user volumes, the most common issue is hardware overload. Typically, addressing this requires upgrading the infrastructure—for example, adding RAM or switching to a more powerful processor. Conclusion Load testing a server is an essential procedure for anyone looking to avoid failures in a growing website, service, or application. Practical experience shows that proper configuration adjustments or code optimization can significantly enhance server performance. However, to achieve these improvements, it’s critical to identify system bottlenecks, which is precisely the purpose of load testing.
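As a starting point with the tools mentioned above, a single command is often enough for a first rough load test. Here is a minimal sketch with wrk; the URL, thread count, connection count, and duration are placeholders to adapt to your own scenario, and the JMeter plan file name is hypothetical:
# 4 threads, 100 open connections, sustained for 60 seconds
wrk -t4 -c100 -d60s https://example.com/

# A JMeter test plan built in the GUI can later be run headless
jmeter -n -t load_test_plan.jmx -l results.jtl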
02 December 2024 · 5 min to read
Ubuntu

How to Configure an Additional IP as an Alias in Ubuntu

In the world of network administration, setting up additional IP addresses on a single network interface is a common task. The technique used for this is IP aliasing: it lets one device respond to several IP addresses on a single network interface. Any Ubuntu administrator should be familiar with applying these settings to ensure robust network administration. This guide details how to add an extra IP address as an alias on both Ubuntu 24.04 and Ubuntu 22.04.
Prerequisites
Before configuring additional IP addresses on an Ubuntu system, make sure the following are in place:
A system running either Ubuntu 24.04 or Ubuntu 22.04
Admin access to the system (sudo privileges)
Basic knowledge of command-line interface operations
An additional IP address assigned by a network administrator or ISP
Network interface name information (e.g., eth0, ens3)
A mistake in the network configuration can cut off access to the server, so it is wise to keep a backup of the configuration files before proceeding with the changes.
Configuring an Additional IP Address in Ubuntu 24.04
Ubuntu 24.04, the latest long-term support release, uses Netplan for network configuration. Netplan is a utility for configuring networking on Linux systems, and the same configuration also applies to Ubuntu 22.04. Here's how to add an additional IP address.
Check the Network Interface
First, identify the network interface that will carry the new address by running the following command:
ip addr show
The output of this command displays all the interfaces. Find the name of the interface (e.g., ens3, eth0) currently in use.
Edit the Netplan Configuration File
Netplan configuration files are normally found in the /etc/netplan/ directory. The file name may differ, but most end with a .yaml extension. To change the file, use a text editor with root privileges:
sudo nano /etc/netplan/50-cloud-init.yaml
Insert the New IP Address
In the YAML file, add the new IP address under the addresses section of the appropriate network interface. The configuration may look like this:
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses:
        - "195.133.93.70/24"
        - "166.1.227.189/24"   # New IP address
        - "192.168.1.2/24"     # Private IP address
      nameservers:
        addresses:
          - "1.1.1.1"
          - "1.0.0.1"
      dhcp4: false
      dhcp6: false
      routes:
        - to: "0.0.0.0/0"
          via: "195.133.93.1"
Apply the Changes
After saving your edits, apply the new configuration by running this command:
sudo netplan apply
Validate the Configuration
After completing the steps above, repeat the ip addr show command to confirm that the new IP address is in place. The output should now also include the new IP address.
Additional Considerations
Persistent Configuration
Netplan settings are persistent and survive a reboot. Still, it is a good idea to reboot the system once and verify that the configuration comes up as expected.
Firewall Configuration
When adding a new IP address, you may need to update the firewall rules. Ubuntu traditionally uses UFW (Uncomplicated Firewall), so add the appropriate UFW rules to avoid blocking traffic to the new IP.
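For example, if the new alias from the configuration above (166.1.227.189) should accept web traffic, a UFW rule scoped to that address might look like this; the port and protocol are illustrative:
sudo ufw allow proto tcp from any to 166.1.227.189 port 80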
Network Services
If the system runs services bound to specific IP addresses, update their configurations so that they recognize and use the new IP address as well.
IPv6 Considerations
The examples above use IPv4. If you need IPv6 addresses, the procedure is essentially the same; only the address format differs. Netplan supports both IPv4 and IPv6 configurations.
Troubleshooting
If issues emerge during configuration, try the following:
Check for syntax errors in the YAML file with the command: sudo netplan --debug generate
Ensure that no other device on the network uses the same IP address.
Verify that the subnet mask and the gateway are set correctly.
Check the system log for error messages: journalctl -xe
Advanced IP Aliasing Techniques
Advanced IP aliasing plays a key role in network management: virtual interfaces make it possible to run several logical interfaces on one physical network interface, each with its own IP and network settings.
Dynamic IP Aliasing
In some cases, network administrators need to implement dynamic IP aliasing. With the help of scripts, IP aliases can be added or removed in response to certain conditions or events. For example, a script can add an IP alias whenever a particular service starts and remove it every time the service stops (a minimal sketch appears at the end of this article).
IP Aliasing in Containerized Environments
With the widespread use of containerization, IP aliasing also helps control the network configuration of Docker containers and other containerized applications. In such cases, IP aliases are often employed to expose multiple services of a container at different IP addresses or to help containers communicate with one another.
Docker Network Aliases
In Docker, network aliases can be used to allow multiple containers to respond to the same DNS name on a custom network. Among other things, this is indispensable in microservices architectures where service discovery is a very important issue.
Security Implications of IP Aliasing
Although IP aliasing has many advantages, security also deserves attention: every additional IP address enlarges the system's potential attack surface. Network administrators should make sure that:
The firewall configuration covers all IP aliases.
Intrusion Detection Systems (IDS) monitor the traffic of all IP addresses.
The use of and need for each IP alias is reviewed periodically.
Appropriate security tools are enabled for services bound to specific IP aliases.
Conclusion
Adding a new IP address as an alias in Ubuntu is a straightforward process, and the Netplan utility helps greatly. Whether you are using Ubuntu 24.04 or 22.04, the steps remain the same: edit the Netplan configuration file, add the new IP address, and apply the changes. A system with multiple IP addresses on a single network interface can serve several roles on the network, and the ability to respond to several IP addresses on one interface is useful in many networking situations. Through these steps, you can extend your Ubuntu machine's networking capabilities quickly and effectively.
The sequence is always the same: back up the existing configuration, make the changes, and test thoroughly afterwards. With these skills, a network administrator or IT technician can effectively manage and optimize an Ubuntu-powered network infrastructure to meet diverse networking requirements.
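The dynamic aliasing approach mentioned earlier can be as simple as a short script around the ip utility. Here is a minimal sketch; the address and interface name are the illustrative values used in this guide, and unlike the Netplan configuration, changes made this way do not survive a reboot:
# Add the alias while a particular service is running...
sudo ip addr add 166.1.227.189/24 dev eth0

# ...and remove it again when the service stops
sudo ip addr del 166.1.227.189/24 dev eth0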
29 November 2024 · 6 min to read

Tailored cloud server
solutions for every need

General-purpose cloud servers for web hosting

Ideal for websites, content management systems, and basic business applications, cloud web servers provide balanced performance with reliable uptime. These servers are designed to handle moderate traffic while keeping your website or application responsive.

High-performance servers for cloud computing


For businesses needing powerful resources for tasks like AI, machine learning, or data analysis, our high-performance cloud servers are built to process large datasets efficiently. Equipped with 3.3 GHz processors and high-speed NVMe storage, they ensure smooth execution of even the most demanding applications.

Storage-optimized cloud servers for data-driven operations

Need to store and retrieve large amounts of data? Our cloud data servers offer vast capacity with superior read/write speeds. These servers are perfect for databases, large-scale backups, or big data management, ensuring fast access to your data when you need it.

Memory-Optimized Servers for Heavy Workloads


These servers are built for applications that require high memory capacity, such as in-memory databases or real-time analytics. With enhanced memory resources, they ensure smooth handling of large datasets, making them ideal for businesses with memory-intensive operations.

In-depth answers to your questions

Which operating systems are supported on your cloud servers?

Choose popular server operating systems and deploy them in one click: from Ubuntu to CentOS. Licensed operating systems are available directly in the control panel.

How can I get started with a cloud server? Is there a straightforward registration process?

Register with Hostman and choose the plan that suits your needs and requirements. You can always add processing power and purchase additional services if needed.

You don't need a development team to get started with a cloud server - you'll do everything yourself in a convenient control panel. Even a person with no technical background can easily work with it.

What is the minimum and maximum resource allocation (CPU, RAM, storage) available for cloud servers?

The starter package includes a 1-core 1×1.28 GHz CPU, 1 GB RAM, a 15 GB fast NVMe SSD, a dedicated IP address, and 200 Mbps bandwidth. For demanding users, there is a powerful 8×3.3 GHz server with 16 GB RAM, a 160 GB fast NVMe SSD, a dedicated IP address, and 200 Mbps bandwidth. Alternatively, you can always configure an even more powerful server yourself.

What scaling options are available for cloud servers?

You can easily add power, bandwidth, and channel width with just a few clicks directly in the control panel. With Hostman, you can enhance all the important characteristics of your server with hourly billing.

How much does a cloud server cost, and what is the pricing structure like?

Cloud server plans start at $4 per month, and you can also configure a custom server to match your exact resource needs. Billing is hourly, so you only pay for the capacity you actually use, and you can add resources at any time directly in the control panel.

Is there a trial or testing period available for cloud servers before purchasing?

Contact the friendly Hostman support team, and they will set up a convenient way for you to test-drive our cloud servers and will transfer your current projects to the cloud for free.

What security measures and data protection are in place for cloud servers?

Cloud servers are hosted in a Tier III data center with a high level of reliability. Hostman guarantees 99.99% availability according to the SLA, with downtime not exceeding 52 minutes per year. Additionally, data is backed up for extra security, and the communication channel is protected against DDoS attacks.

What level of support is provided for cloud servers?

Hostman support is always available, 7 days a week, around the clock. We respond to phone calls within a minute and chat inquiries within 15 minutes. Your questions will always be handled by knowledgeable staff with sufficient authority and technical background.

Can I install my own software on a cloud server?

Yes, absolutely! You can deploy any software, operating systems, and images you desire on your server. Everything is ready for self-configuration.

What backup and data recovery methods are available for cloud servers?

Hostman takes care of the security of your data and backs up important information. Additionally, you can utilize the automatic backup service for extra safety and reliability.

Is there a guaranteed Service Level Agreement (SLA) for cloud server availability?

Hostman guarantees a 99.99% level of virtual server availability according to the SLA (Service Level Agreement).

Which data center locations are available for hosting cloud servers?

Our servers are located in modern Tier III data centers in the European Union and the United States.

Can I create and manage multiple cloud servers under a single account?

Certainly. You can launch multiple cloud servers and other services (such as a managed database or a VPS server) within a single account.

What is the deployment time for cloud servers after ordering?

With Hostman, you'll get a service that is easy and quick to manage on your own. New cloud servers can be launched almost instantly from the control panel, and the necessary software can be installed within minutes.

What monitoring and notification capabilities are offered for cloud servers?

Hostman specialists monitor the technical condition of servers and software around the clock. You won't have to worry about server availability — it will simply work, always.

Can I modify the specifications of my cloud server (e.g., increase RAM) after creation?

You can easily configure your server by adding resources directly in the control panel. And if you need to switch to lower-tier plans, you can rely on Hostman support — our specialists will handle everything for you.

Do you have questions,
comments, or concerns?

Our professionals are available to assist you at any moment,
whether you need help or are just unsure of where to start.
Email us
Hostman's Support