
Cloud Server

Deploy your cloud server in minutes and experience the freedom to scale your infrastructure effortlessly. A fast, secure, and flexible cloud server solution designed to meet your unique needs without the constraints of traditional servers.
Contact Sales
Blazing 3.3 GHz Processors & NVMe Disks
Experience unparalleled speed with processors optimized for demanding applications, combined with ultra-fast NVMe disks for quick data retrieval.
200 Mbps Channels, Unlimited Traffic
Enjoy stable, high-speed connectivity with unthrottled traffic, ensuring smooth performance even during peak usage periods.
24/7 Monitoring & Support
Stay worry-free with round-the-clock monitoring and professional support, ensuring your systems are always operational.
Cost-Effective Management
Our cloud server solutions are designed to deliver maximum value for your money, offering flexible pricing without compromising on performance.

Cloud server pricing

High-performance cloud servers with pay-as-you-go pricing. Powered by Intel Xeon Gold and AMD EPYC processors, NVMe SSD storage, and 200 Mbps connectivity. Hosted on enterprise-grade Supermicro, Dell, and SuperCloud hardware in certified data centers (ISO 27001, SSAE 16).
New York

| CPU | RAM | NVMe Storage | Bandwidth | Public IP | Price |
|---|---|---|---|---|---|
| 1 x 3 GHz | 1 GB | 25 GB | 200 Mbps | Included | $4/mo |
| 1 x 3 GHz | 2 GB | 40 GB | 200 Mbps | Included | $5/mo |
| 2 x 3 GHz | 2 GB | 60 GB | 200 Mbps | Included | $6/mo |
| 2 x 3 GHz | 4 GB | 80 GB | 200 Mbps | Included | $8/mo |
| 4 x 3 GHz | 8 GB | 160 GB | 200 Mbps | Included | $17/mo |
| 8 x 3 GHz | 16 GB | 320 GB | 200 Mbps | Included | $37/mo |

Deploy any software in seconds

Select the desired OS or App and install it in one click.
OS Distributions
Pre-installed Apps
Custom Images
Ubuntu
Debian
CentOS

Hostman's commitment to simplicity and budget-friendly solutions

Configuration: 1 CPU, 1 GB RAM, 25 GB SSD

| | Hostman | DigitalOcean | Google Cloud | AWS | Vultr |
|---|---|---|---|---|---|
| Price | $4 | $6 | $6.88 | $7.59 | $5 |
| Tech support | Free | $24/mo | $29/mo + 3% of monthly charges | $29/mo or 3% of monthly charges | Free |
| Backups | from $0.07/GB | 20% or 30% higher base daily/weekly fee | $0.03/GB per mo | $0.05/GB per mo | 20% higher base monthly/hourly fee |
| Bandwidth | Free | $0.01 per GB | $0.01 per GB | $0.09/GB first 10 TB/mo | $0.01 per GB |
| Live chat support | | | | | |
| Avg. support response time | <15 min | <24 hours | <4 hours | <12 hours | <12 hours |

What is a cloud server?

A cloud server is a virtualized computing resource hosted in the cloud, designed to deliver powerful performance without the need for physical hardware. It is built on a network of connected physical hosts running virtual machines, which enables flexible resource allocation, instant scalability, and high availability. Unlike traditional on-premises servers, a cloud-based server allows users to adjust resources dynamically, making it ideal for handling fluctuating workloads or unpredictable traffic spikes. Whether you're running an e-commerce store, a SaaS platform, or any other application, a cloud web server provides the adaptability necessary to grow with your business.

Cloud servers solve a wide range of challenges, from reducing infrastructure costs to improving uptime and reliability. By leveraging the cloud, businesses can avoid the upfront investment and maintenance costs associated with physical servers. Additionally, a cloud server system allows users to deploy applications quickly, scale resources in real time, and manage data more efficiently. The key benefits for clients include operational flexibility, cost savings, and the ability to respond quickly to changing demands.

Ready to buy a cloud server?

1 CPU / 1 GB RAM / 25 GB NVMe / 200 Mbps / $2/mo.

Efficient tools to streamline your work

See all Products

Backups, Snapshots

Protect your data with regular backups and snapshots, ensuring you never lose crucial information.

Firewall

Enhance your security measures with our robust firewall protection, safeguarding your infrastructure against potential threats.

Load Balancer

Ensure optimal performance and scalability by evenly distributing traffic across multiple servers with our load balancer feature.

Private Networks

Establish secure and isolated connections between your servers with private networks, shielding sensitive data and enhancing network efficiency.

Trusted by 500+ companies and developers worldwide

Recognized as a Top Cloud Hosting Provider by HostAdvice

Hostman review

One panel to rule them all

Easily control your database, pricing plan, and additional services
through the intuitive Hostman management console.
Project management
Group your cloud servers and databases into a single project, eliminating confusion and simplifying management.
Software marketplace
24 ready-made software stacks for any task: frameworks, e-commerce platforms, and analytics tools.
Mobile responsive
Get the optimal user experience across all devices with our mobile-responsive design.
Hostman Cloud

Code locally, launch worldwide

Our servers, certified with ISO/IEC 27001, are located in Tier 3 data
centers across the US, Europe, and Asia.
Hostman's Locations
🇺🇸 San Francisco
🇺🇸 San Jose
🇺🇸 Texas
🇺🇸 New York
🇳🇱 Amsterdam
🇳🇬 Lagos
🇩🇪 Frankfurt
🇵🇱 Gdansk
🇦🇪 Dubai
🇸🇬 Singapore
Anup k.
Associate Cloud Engineer
5.0 out of 5

"Hostman Comprehensive Review of Simplicity and Potential"

It been few years that I have been working on Cloud and most of the cloud service...
Mansur H.
Security Researcher
5.0 out of 5

"A perfect fit for everything cloud services!"

Hostman's seemless integration, user-friendly interface and its robust features (backups, etc) makes it much easier...
Adedeji E.
DevOps Engineer
5.0 out of 5

"Superb User Experience"

For me, Hostman is exceptional because of it's flexibility and user-friendliness. The platform's ability to offer dedicated computing resources acr...
Yudhistira H.
Mid-Market(51-1000 emp.)
5.0 out of 5

"Streamlined Cloud Excellence!"

What I like best about Hostman is their exceptional speed of deployment, scalability, and robust security features. Their...
Mohammad Waqas S.
Biotechnologist and programmer
5.0 out of 5

"Seamless and easy to use Hosting Solution for Web Applications"

From the moment I signed up, the process has been seamless and straightforward...
Mohana R.
Senior Software Engineer
5.0 out of 5

"Availing Different DB Engine Services Provided by Hostman is Convenient for my Organization usecases"

Hostman manages the cloud operations...
Faizan A.
5.0 out of 5

"Hostman is a great fit for me"

Hostman is a great fit for me. What do you like best about Hostman? It was very easy to deploy my application and create database, I didn't have
Adam M.
5.0 out of 5

"Perfect website"

This website is extremely user friendly and easy to use. I had no problems so didn't have to contact customer support. Really good website and would recommend to others.
Anup K.
4.0 out of 5

"Simplifying Cloud Deployment with Strengths and Areas for Growth"

What I like best about Hostman is its unwavering commitment to simplicity...
Naila J.
5.0 out of 5

"Streamlined Deployment with Room for Improvement"

Hostman impresses with its user-friendly interface and seamless deployment process, simplifying web application hosting...

More cloud services from Hostman

See all Products

Latest News

Linux

How to Use SSH Keys for Authentication

Many cloud applications are built on the popular SSH protocol—it is widely used for managing network infrastructure, transferring files, and executing remote commands. SSH stands for Secure Socket Shell, meaning it provides a shell (command-line interface) around the connection between multiple remote hosts, ensuring that the connection is secure (encrypted and authenticated). SSH connections are available on all popular operating systems, including Linux, Ubuntu, Windows, and Debian. The protocol establishes an encrypted communication channel within an unprotected network by using a pair of public and private keys. Keys: The Foundation of SSH SSH operates on a client-server model. This means the user has an SSH client (a terminal in Linux or a graphical application in Windows), while the server side runs a daemon, which accepts incoming connections from clients. In practice, an SSH channel enables remote terminal management of a server. In other words, after a successful connection, everything entered in the local console is executed directly on the remote server. The SSH protocol uses a pair of keys for encrypting and decrypting information: public key and private key. These keys are mathematically linked. The public key is shared openly, resides on the server, and is used to encrypt data. The private key is confidential, resides on the client, and is used to decrypt data. Of course, keys are not generated manually but with special tools—keygens. These utilities generate new keys using encryption algorithms fundamental to SSH technology. More About How SSH Works Exchange of Public Keys SSH relies on symmetric encryption, meaning two hosts wishing to communicate securely generate a unique session key derived from the public and private data of each host. For example, host A generates a public and private key pair. The public key is sent to host B. Host B does the same, sending its public key to host A. Using the Diffie-Hellman algorithm, host A can create a key by combining its private key with the public key of host B. Likewise, host B can create an identical key by combining its private key with the public key of host A. This results in both hosts independently generating the same symmetric encryption key, which is then used for secure communication. Hence, the term symmetric encryption. Message Verification To verify messages, hosts use a hash function that outputs a fixed-length string based on the following data: The symmetric encryption key The packet number The encrypted message text The result of hashing these elements is called an HMAC (Hash-based Message Authentication Code). The client generates an HMAC and sends it to the server. The server then creates its own HMAC using the same data and compares it to the client's HMAC. If they match, the verification is successful, ensuring that the message is authentic and hasn't been tampered with. Host Authentication Establishing a secure connection is only part of the process. The next step is authenticating the user connecting to the remote host, as the user may not have permission to execute commands. There are several authentication methods: Password Authentication: The user sends an encrypted password to the server. If the password is correct, the server allows the user to execute commands. Certificate-Based Authentication: The user initially provides the server with a password and the public part of a certificate. Once authenticated, the session continues without requiring repeated password entries for subsequent interactions. 
These methods ensure that only authorized users can access the remote system while maintaining secure communication. Encryption Algorithms A key factor in the robustness of SSH is that decrypting the symmetric key is only possible with the private key, not the public key, even though the symmetric key is derived from both. Achieving this property requires specific encryption algorithms. There are three primary classes of such algorithms: RSA, DSA, and algorithms based on elliptic curves, each with distinct characteristics: RSA: Developed in 1978, RSA is based on integer factorization. Since factoring large semiprime numbers (products of two large primes) is computationally difficult, the security of RSA depends on the size of the chosen factors. The key length ranges from 1024 to 16384 bits. DSA: DSA (Digital Signature Algorithm) is based on discrete logarithms and modular exponentiation. While similar to RSA, it uses a different mathematical approach to link public and private keys. DSA key length is limited to 1024 bits. ECDSA and EdDSA: These algorithms are based on elliptic curves, unlike DSA, which uses modular exponentiation. They assume that no efficient solution exists for the discrete logarithm problem on elliptic curves. Although the keys are shorter, they provide the same level of security. Key Generation Each operating system has its own utilities for quickly generating SSH keys. In Unix-like systems, the command to generate a key pair is: ssh-keygen -t rsa Here, the type of encryption algorithm is specified using the -t flag. Other supported types include: dsa ecdsa ed25519 You can also specify the key length with the -b flag. However, be cautious, as the security of the connection depends on the key length: ssh-keygen -b 2048 -t rsa After entering the command, the terminal will prompt you to specify a file path and name for storing the generated keys. You can accept the default path by pressing Enter, which will create standard file names: id_rsa (private key) and id_rsa.pub (public key). Thus, the public key will be stored in a file with a .pub extension, while the private key will be stored in a file without an extension. Next, the command will prompt you to enter a passphrase. While not mandatory (it is unrelated to the SSH protocol itself), using a passphrase is recommended to prevent unauthorized use of the key by a third-party user on the local Linux system. Note that if a passphrase is used, you must enter it each time you establish the connection. To change the passphrase later, you can use: ssh-keygen -p Or, you can specify all parameters at once with a single command: ssh-keygen -p old_password -N new_password -f path_to_files For Windows, there are two main approaches: Using ssh-keygen from OpenSSH: The OpenSSH client provides the same ssh-keygen command as Linux, following the same steps. Using PuTTY: PuTTY is a graphical application that allows users to generate public and private keys with the press of a button. Installing the Client and Server Components The primary tool for an SSH connection on Linux platforms (both client and server) is OpenSSH. While it is typically pre-installed on most operating systems, there may be situations (such as with Ubuntu) where manual installation is necessary. The general command for installing SSH, followed by entering the superuser password, is: sudo apt-get install ssh However, in some operating systems, SSH may be divided into separate components for the client and server. 
For the Client To check whether the SSH client is installed on your local machine, simply run the following command in the terminal: ssh If SSH is supported, the terminal will display a description of the command. If nothing appears, you’ll need to install the client manually: sudo apt-get install openssh-client You will be prompted to enter the superuser password during installation. Once completed, SSH connectivity will be available. For the Server Similarly, the server-side part of the OpenSSH toolkit is required on the remote host. To check if the SSH server is available on your remote host, try connecting locally via SSH: ssh localhost If the SSH daemon is running, you will see a message indicating a successful connection. If not, you’ll need to install the SSH server: sudo apt-get install openssh-server As with the client, the terminal will prompt you to enter the superuser password. After installation, you can check whether SSH is active by running: sudo service ssh status Once connected, you can modify SSH settings as needed by editing the configuration file: ./ssh/sshd_config For example, you might want to change the default port to a custom one. Don’t forget that after making changes to the configuration, you must manually restart the SSH service to apply the updates: sudo service ssh restart Copying an SSH Key to the Server On Hostman, you can easily add SSH keys to your servers using the control panel. Using a Special Copy Command After generating a public SSH key, it can be used as an authorized key on a server. This allows quick connections without the need to repeatedly enter a password. The most common way to copy the key is by using the ssh-copy-id command: ssh-copy-id -i ~/.ssh/id_rsa.pub name@server_address This command assumes you used the default paths and filenames during key generation. If not, simply replace ~/.ssh/id_rsa.pub with your custom path and filename. Replace name with the username on the remote server. Replace server_address with the host address. If the usernames on both the client and server are the same, you can shorten the command: ssh-copy-id -i ~/.ssh/id_rsa.pub server_address If you set a passphrase during the SSH key creation, the terminal will prompt you to enter it. Otherwise, the key will be copied immediately. In some cases, the server may be configured to use a non-standard port (the default is 22). If that’s the case, specify the port using the -p flag: ssh-copy-id -i ~/.ssh/id_rsa.pub -p 8129 name@server_address Semi-Manual Copying There are operating systems where the ssh-copy-id command may not be supported, even though SSH connections to the server are possible. In such cases, the copying process can be done manually using a series of commands: ssh name@server_address 'mkdir -pm 700 ~/.ssh; echo ' $(cat ~/.ssh/id_rsa.pub) ' >> ~/.ssh/authorized_keys; chmod 600 ~/.ssh/authorized_keys' This sequence of commands does the following: Creates a special .ssh directory on the server (if it doesn’t already exist) with the correct permissions (700) for reading and writing. Creates or appends to the authorized_keys file, which stores the public keys of all authorized users. The public key from the local file (id_rsa.pub) will be added to it. Sets appropriate permissions (600) on the authorized_keys file to ensure it can only be read and written by the owner. If the authorized_keys file already exists, it will simply be appended with the new key. 
Once this is done, future connections to the server can be made using the same SSH command, but now the authentication will use the public key added to authorized_keys: ssh name@server_address Manual Copying Some hosting platforms offer server management through alternative interfaces, such as a web-based control panel. In these cases, there is usually an option to manually add a public key to the server. The web interface might even simulate a terminal for interacting with the server. Regardless of the method, the remote host must contain a file named ~/.ssh/authorized_keys, which lists all authorized public keys. Simply copy the client’s public key (found in ~/.ssh/id_rsa.pub by default) into this file. If the key pair was generated using a graphical application (typically PuTTY on Windows), you should copy the public key directly from the application and add it to the existing content in authorized_keys. Connecting to a Server To connect to a remote server on a Linux operating system, enter the following command in the terminal: ssh name@server_address Alternatively, if the local username is identical to the remote username, you can shorten the command to: ssh server_address The system will then prompt you to enter the password. Type it and press Enter. Note that the terminal will not display the password as you type it. Just like with the ssh-copy-id command, you can explicitly specify the port when connecting to a remote server: ssh client@server_address -p 8129 Once connected, you will have control over the remote machine via the terminal; any command you enter will be executed on the server side. Conclusion Today, SSH is one of the most widely used protocols in development and system administration. Therefore, having a basic understanding of its operation is crucial. This article aimed to provide an overview of SSH connections, briefly explain the encryption algorithms (RSA, DSA, ECDSA, and EdDSA), and demonstrate how public and private key pairs can be used to establish secure connections with a personal server, ensuring that exchanged messages remain inaccessible to third parties. We covered the primary commands for UNIX-like operating systems that allow users to generate key pairs and grant clients SSH access by copying the public key to the server, enabling secure connections.
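One convenience the article does not cover is the SSH client configuration file, which saves you from retyping usernames, ports, and key paths on every connection. The snippet below is a minimal sketch, not part of the article's setup: the alias myserver, the address 203.0.113.10, and the user deploy are placeholder values.

# Create or extend the SSH client config with a host alias (permissions matter to OpenSSH)
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host myserver
    HostName 203.0.113.10
    User deploy
    Port 22
    IdentityFile ~/.ssh/id_rsa
EOF
chmod 600 ~/.ssh/config

# Equivalent to: ssh -i ~/.ssh/id_rsa -p 22 deploy@203.0.113.10
ssh myserver

Tools that ride on top of SSH, such as scp, rsync, and git, pick up the same alias automatically.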
30 January 2025 · 10 min to read
Docker

How to Automate Jenkins Setup with Docker

In the modern software development world, Continuous Integration and Continuous Delivery (CI/CD) have become an integral part of the development process. Jenkins, one of the leading CI/CD tools, helps automate application build, testing, and deployment. However, setting up and managing Jenkins can be time-consuming and complex, especially in large projects with many developers and diverse requirements. Docker, containerization, and container orchestration have come to the rescue, offering more efficient and scalable solutions for deploying applications and infrastructure. Docker allows developers to package applications and their dependencies into containers, which can be easily transported and run on any system with Docker installed. Benefits of Using Docker for Automating Jenkins Setup Simplified Installation and Setup: Using Docker to deploy Jenkins eliminates many challenges associated with installing dependencies and setting up the environment. You only need to run a few commands to get a fully functional Jenkins server. Repeatability: With Docker, you can be confident that your environment will always be the same, regardless of where it runs. This eliminates problems associated with different configurations across different servers. Environment Isolation: Docker provides isolation of applications and their dependencies, avoiding conflicts between different projects and services. Scalability: Using Docker and orchestration tools such as Docker Compose or Kubernetes allows Jenkins to be easily scaled by adding or removing agents as needed. Fast Deployment and Recovery: In case of failure or the need for an upgrade, Docker allows you to quickly deploy a new Jenkins container, minimizing downtime and ensuring business continuity. In this article, we will discuss how to automate the setup and deployment of Jenkins using Docker. We will cover all the stages, from creating a Docker file and setting up Docker Compose to integrating Jenkins Configuration as Code (JCasC) for automatic Jenkins configuration. As a result, you'll have a complete understanding of the process and a ready-made solution for automating Jenkins in your projects. Prerequisites Before you begin setting up Jenkins with Docker, you need to ensure that you have all the necessary tools and software. In this section, we will discuss the requirements for successfully automating Jenkins and how to install the necessary components. Installing Docker and Docker Compose Docker can be installed on various operating systems, including Linux, macOS, and Windows. Below are the steps for installing Docker on the most popular platforms: Linux (Ubuntu) Update the package list with the command: sudo apt update Install packages for HTTPS support: sudo apt install apt-transport-https ca-certificates curl software-properties-common Add the official Docker GPG key: curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - Add the Docker repository to APT sources: sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" Install Docker: sudo apt install docker-ce Verify Docker is running: sudo systemctl status docker macOS Download and install Docker Desktop from the official website: Docker Desktop for Mac. Follow the on-screen instructions to complete the installation. Windows Download and install Docker Desktop from the official website: Docker Desktop for Windows. Follow the on-screen instructions to complete the installation. 
Docker Compose is typically installed along with Docker Desktop on macOS and Windows. For Linux, it requires separate installation: Download the latest version of Docker Compose: sudo curl -L "https://github.com/docker/compose/releases/download/$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep -Po '"tag_name": "\K.*?(?=")')/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose Make the downloaded file executable: sudo chmod +x /usr/local/bin/docker-compose Verify the installation: docker-compose --version Docker Hub is a cloud-based repository where you can find and store Docker images. The official Jenkins Docker image is available on Docker Hub and provides a ready-to-use Jenkins server. Go to the Docker Hub website. In the search bar, type Jenkins. Select the official image jenkins/jenkins. The official image is regularly updated and maintained by the community, ensuring a stable and secure environment. Creating a Dockerfile for Jenkins In this chapter, we will explore how to create a Dockerfile for Jenkins that will be used to build a Docker image. We will also discuss how to add configurations and plugins to this image to meet the specific requirements of your project. Structure of a Dockerfile A Dockerfile is a text document containing all the commands that a user could call on the command line to build an image. In each Dockerfile, instructions are used to define a step in the image-building process. The key commands include: FROM: Specifies the base image to create a new image from. RUN: Executes a command in the container. COPY or ADD: Copies files or directories into the container. CMD or ENTRYPOINT: Defines the command that will be executed when the container starts. Basic Dockerfile for Jenkins Let’s start by creating a simple Dockerfile for Jenkins. This file will use the official Jenkins image as the base and add a few necessary plugins. Create a new file named Dockerfile in your project directory. Add the following code: FROM jenkins/jenkins:lts RUN jenkins-plugin-cli --plugins workflow-aggregator git EXPOSE 8080 EXPOSE 50000 This basic Dockerfile installs two plugins: workflow-aggregator and git. It also exposes ports 8080 (for the web interface) and 50000 (for connecting Jenkins agents). Adding Configurations and Plugins For more complex configurations, we can add additional steps to the Dockerfile. For example, we can configure Jenkins to automatically use a specific configuration file or add scripts for pre-configuration. Create a jenkins_home directory to store custom configurations. 
Inside the new directory, create a custom_config.xml file with the required configurations: <?xml version='1.0' encoding='UTF-8'?> <hudson> <numExecutors>2</numExecutors> <mode>NORMAL</mode> <useSecurity>false</useSecurity> <disableRememberMe>false</disableRememberMe> <label></label> <primaryView>All</primaryView> <slaveAgentPort>50000</slaveAgentPort> <securityRealm class='hudson.security.SecurityRealm$None'/> <authorizationStrategy class='hudson.security.AuthorizationStrategy$Unsecured'/> </hudson> Update the Dockerfile as follows: FROM jenkins/jenkins:lts RUN jenkins-plugin-cli --plugins workflow-aggregator git docker-workflow COPY jenkins_home/custom_config.xml /var/jenkins_home/config.xml COPY scripts/init.groovy.d /usr/share/jenkins/ref/init.groovy.d/ EXPOSE 8080 EXPOSE 50000 In this example, we are installing additional plugins, copying the custom configuration file into Jenkins, and adding scripts to the init.groovy.d directory for automatic initialization of Jenkins during its first startup. Docker Compose Setup Docker Compose allows you to define your application's infrastructure as code using YAML files. This simplifies the configuration and deployment process, making it repeatable and easier to manage. Key benefits of using Docker Compose: Ease of Use: Create and manage multi-container applications with a single YAML file. Scalability: Easily scale services by adding or removing containers as needed. Convenience for Testing: Ability to run isolated environments for development and testing. Example of docker-compose.yml for Jenkins Let’s create a docker-compose.yml file to deploy Jenkins along with associated services such as a database and Jenkins agent. Create a docker-compose.yml file in your project directory. Add the following code to the file: version: '3.8' services: jenkins: image: jenkins/jenkins:lts container_name: jenkins-server ports: - "8080:8080" - "50000:50000" volumes: - jenkins_home:/var/jenkins_home networks: - jenkins-network jenkins-agent: image: jenkins/inbound-agent container_name: jenkins-agent environment: - JENKINS_URL=http://jenkins-server:8080 - JENKINS_AGENT_NAME=agent - JENKINS_AGENT_WORKDIR=/home/jenkins/agent volumes: - agent_workdir:/home/jenkins/agent depends_on: - jenkins networks: - jenkins-network volumes: jenkins_home: agent_workdir: networks: jenkins-network: This file defines two services: jenkins: The service uses the official Jenkins image. Ports 8080 and 50000 are forwarded for access to the Jenkins web interface and communication with agents. The /var/jenkins_home directory is mounted on the external volume jenkins_home to persist data across container restarts. jenkins-agent: The service uses the Jenkins inbound-agent image. The agent connects to the Jenkins server via the URL specified in the JENKINS_URL environment variable. The agent's working directory is mounted on an external volume agent_workdir. Once you create the docker-compose.yml file, you can start all services with a single command: Navigate to the directory that contains your docker-compose.yml. Run the following command to start all services: docker-compose up -d The -d flag runs the containers in the background. After executing this command, Docker Compose will create and start containers for all services defined in the file. You can now check the status of the running containers using the following command: docker-compose ps If everything went well, you should see only the jenkins-server container in the output. Now, let’s set up the Jenkins server and agent. 
Open a browser and go to http://localhost:8080/. During the first startup, you will see the following message: To retrieve the password, run this command: docker exec -it jenkins-server cat /var/jenkins_home/secrets/initialAdminPassword Copy the password and paste it into the Unlock Jenkins form. This will open a new window with the initial setup. Select Install suggested plugins. After the installation is complete, fill out the form to create an admin user. Accept the default URL and finish the setup. Then, go to Manage Jenkins → Manage Nodes. Click New Node, provide a name for the new node (e.g., "agent"), and select Permanent Agent. Fill in the remaining fields as shown in the screenshot. After creating the agent, a window will open with a command containing the secret for the agent connection. Copy the secret and add it to your docker-compose.yml: environment: - JENKINS_URL=http://jenkins-server:8080 - JENKINS_AGENT_NAME=agent - JENKINS_AGENT_WORKDIR=/home/jenkins/agent - JENKINS_SECRET=<your-secret-here> # Insert the secret here To restart the services, use the following commands and verify that the jenkins-agent container has started: docker-compose downdocker-compose up -d Configuring Jenkins with Code (JCasC) Jenkins Configuration as Code (JCasC) is an approach that allows you to describe the entire Jenkins configuration in a YAML file. It simplifies the automation, maintenance, and portability of Jenkins settings. In this chapter, we will explore how to set up JCasC for automatic Jenkins configuration when the container starts. JCasC allows you to describe Jenkins configuration in a single YAML file, which provides the following benefits: Automation: A fully automated Jenkins setup process, eliminating the need for manual configuration. Manageability: Easier management of configurations using version control systems. Documentation: Clear and easily readable documentation of Jenkins configuration. Example of a Jenkins Configuration File First, create the configuration file. Create a file named jenkins.yaml in your project directory. Add the following configuration to the file: jenkins: systemMessage: "Welcome to Jenkins configured as code!" securityRealm: local: allowsSignup: false users: - id: "admin" password: "${JENKINS_ADMIN_PASSWORD}" authorizationStrategy: loggedInUsersCanDoAnything: allowAnonymousRead: false tools: jdk: installations: - name: "OpenJDK 11" home: "/usr/lib/jvm/java-11-openjdk" jobs: - script: > pipeline { agent any stages { stage('Build') { steps { echo 'Building...' } } stage('Test') { steps { echo 'Testing...' } } stage('Deploy') { steps { echo 'Deploying...' } } } } This configuration file defines: System message in the systemMessage block. This string will appear on the Jenkins homepage and can be used to inform users of important information or changes. Local user database and administrator account in the securityRealm block. The field allowsSignup: false disables self-registration of new users. Then, a user with the ID admin is defined, with the password set by the environment variable ${JENKINS_ADMIN_PASSWORD}. Authorization strategy in the authorizationStrategy block. The policy loggedInUsersCanDoAnything allows authenticated users to perform any action, while allowAnonymousRead: false prevents anonymous users from accessing the system. JDK installation in the tools block. In this example, a JDK named OpenJDK 11 is specified with the location /usr/lib/jvm/java-11-openjdk. Pipeline example in the jobs block. 
This pipeline includes three stages: Build, Test, and Deploy, each containing one step that outputs a corresponding message to the console. Integrating JCasC with Docker and Docker Compose Next, we need to integrate our jenkins.yaml configuration file with Docker and Docker Compose so that this configuration is automatically applied when the Jenkins container starts. Update the Dockerfile to copy the configuration file into the container and install the JCasC plugin: FROM jenkins/jenkins:lts RUN jenkins-plugin-cli --plugins configuration-as-code COPY jenkins.yaml /var/jenkins_home/jenkins.yaml EXPOSE 8080 EXPOSE 50000 Update the docker-compose.yml to set environment variables and mount the configuration file. Add the following code in the volumes block: - ./jenkins.yaml:/var/jenkins_home/jenkins.yaml After the volumes block, add a new environment block (if you haven't defined it earlier): environment: - JENKINS_ADMIN_PASSWORD=admin_password Build the new Jenkins image with the JCasC configuration: docker-compose build Run the containers: docker-compose up -d After the containers start, go to your browser at http://localhost:8080 and log in with the administrator account. You should see the system message and the Jenkins configuration applied according to your jenkins.yaml file. A few important notes: The YAML files docker-compose.yml and jenkins.yaml might seem similar at first glance but serve completely different purposes. The file in Docker Compose describes the services and containers needed to run Jenkins and its environment, while the file in JCasC describes the Jenkins configuration itself, including plugin installation, user settings, security, system settings, and jobs. The .yml and .yaml extensions are variations of the same YAML file format. They are interchangeable and supported by various tools and libraries for working with YAML. The choice of format depends largely on historical community preferences; in Docker documentation, you will more often encounter examples with the .yml extension, while in JCasC documentation, .yaml is more common. The pipeline example provided below only outputs messages at each stage with no useful payload. This example is for demonstrating structure and basic concepts, but it does not prevent Jenkins from successfully applying the configuration. We will not dive into more complex and practical structures. jenkins.yaml describes the static configuration and is not intended to define the details of a specific CI/CD process for a particular project. For that purpose, you can use the Jenkinsfile, which offers flexibility for defining specific CI/CD steps and integrating with version control systems. We will discuss this in more detail in the next chapter. Key Concepts of Jobs in JCasC Jobs are a section of the configuration file that allows you to define and configure build tasks using code. This block includes the following: Description of Build Tasks: This section describes all aspects of a job, including its type, stages, triggers, and execution steps. Types of Jobs: There are different types of jobs in Jenkins, such as freestyle projects, pipelines, and multiconfiguration projects. In JCasC, pipelines are typically used because they provide a more flexible and powerful approach to automation. Declarative Syntax: Pipelines are usually described using declarative syntax, simplifying understanding and editing. Example Breakdown: pipeline: The main block that defines the pipeline job. 
agent any: Specifies that the pipeline can run on any available Jenkins agent. stages: The block that contains the pipeline stages. A stage is a step in the process. Additional Features: Triggers: You can add triggers to make the job run automatically under certain conditions, such as on a schedule or when a commit is made to a repository: triggers { cron('H 4/* 0 0 1-5') } Post-Conditions: You can add post-conditions to execute steps after the pipeline finishes, such as sending notifications or archiving artifacts. Parameters: You can define parameters for a job to make it configurable at runtime: parameters { string(name: 'BRANCH_NAME', defaultValue: 'main', description: 'Branch to build') } Automating Jenkins Deployment in Docker with JCasC Using Scripts for Automatic Deployment Use Bash scripts to automate the installation, updating, and running Jenkins containers. Leverage Jenkins Configuration as Code (JCasC) to automate Jenkins configuration. Script Examples Script for Deploying Jenkins in Docker: #!/bin/bash # Jenkins Parameters JENKINS_IMAGE="jenkins/jenkins:lts" CONTAINER_NAME="jenkins-server" JENKINS_PORT="8080" JENKINS_AGENT_PORT="50000" VOLUME_NAME="jenkins_home" CONFIG_DIR="$(pwd)/jenkins_configuration" # Create a volume to store Jenkins data docker volume create $VOLUME_NAME # Run Jenkins container with JCasC docker run -d \ --name $CONTAINER_NAME \ -p $JENKINS_PORT:8080 \ -p $JENKINS_AGENT_PORT:50000 \ -v $VOLUME_NAME:/var/jenkins_home \ -v $CONFIG_DIR:/var/jenkins_home/casc_configs \ -e CASC_JENKINS_CONFIG=/var/jenkins_home/casc_configs \ $JENKINS_IMAGE The JCasC configuration file jenkins.yaml was discussed earlier. Setting Up a CI/CD Pipeline for Jenkins Updates To set up a CI/CD pipeline, follow these steps: Open Jenkins and go to the home page. Click on Create Item. Enter a name for the new item, select Pipeline, and click OK. If this section is missing, you need to install the plugin in Jenkins. Go to Manage Jenkins → Manage Plugins. In the Available Plugins tab, search for Pipeline and install the Pipeline plugin. Similarly, install the Git Push plugin. After installation, go back to Create Item. Select Pipeline, and under Definition, choose Pipeline script from SCM. Select Git as the SCM. Add the URL of your repository; if it's private, add the credentials. In the Branch Specifier field, specify the branch that contains the Jenkinsfile (e.g., */main). Note that the Jenkinsfile should be created without an extension. If it's located in a subdirectory, specify it in the Script Path field. Click Save. Example of a Jenkinsfile pipeline { agent any environment { JENKINS_CONTAINER_NAME = 'new-jenkins-server' JENKINS_IMAGE = 'jenkins/jenkins:lts' JENKINS_PORT = '8080' JENKINS_VOLUME = 'jenkins_home' } stages { stage('Setup Docker') { steps { script { // Install Docker on the server if it's not installed sh ''' if ! [ -x "$(command -v docker)" ]; then curl -fsSL https://get.docker.com -o get-docker.sh sh get-docker.sh fi ''' } } } stage('Pull Jenkins Docker Image') { steps { script { // Pull the latest Jenkins image sh "docker pull ${JENKINS_IMAGE}" } } } stage('Cleanup Old Jenkins Container') { steps { script { // Stop and remove the old container if it exists def existingContainer = sh(script: "docker ps -a -q -f name=${JENKINS_CONTAINER_NAME}", returnStdout: true).trim() if (existingContainer) { echo "Stopping and removing existing container ${JENKINS_CONTAINER_NAME}..." 
sh "docker stop ${existingContainer} || true" sh "docker rm -f ${existingContainer} || true" } else { echo "No existing container with name ${JENKINS_CONTAINER_NAME} found." } } } } stage('Run Jenkins Container') { steps { script { // Run Jenkins container with port binding and volume mounting sh ''' docker run -d --name ${JENKINS_CONTAINER_NAME} \ -p ${JENKINS_PORT}:8080 \ -p 50000:50000 \ -v ${JENKINS_VOLUME}:/var/jenkins_home \ ${JENKINS_IMAGE} ''' } } } stage('Configure Jenkins (Optional)') { steps { script { // Additional Jenkins configuration through Groovy scripts or REST API sh ''' # Example script for performing initial Jenkins setup curl -X POST http://localhost:${JENKINS_PORT}/scriptText --data-urlencode 'script=println("Jenkins is running!")' ''' } } } } post { always { echo "Jenkins setup and deployment process completed." } } } On the page of your new pipeline, click Build Now. Go to Console Output. In case of a successful completion, you should see the following output. For this pipeline, we used the following files.  Dockerfile: FROM jenkins/jenkins:lts USER root RUN apt-get update && apt-get install -y docker.io docker-compose.yml: version: '3.7' services: jenkins: build: . ports: - "8081:8080" - "50001:50000" volumes: - jenkins_home:/var/jenkins_home - /var/run/docker.sock:/var/run/docker.sock environment: - JAVA_OPTS=-Djenkins.install.runSetupWizard=false networks: - jenkins-network volumes: jenkins_home: networks: jenkins-network: Ports 8081 and 50001 are used here so that the newly deployed Jenkins can occupy ports 8080 and 50000, respectively. This means that the main Jenkins, from which the pipeline is running, is currently located at http://localhost:8081/. One way to check if Jenkins has been deployed is to go to http://localhost:8080/, as we specified this in the pipeline. Since this is a new image, a welcome message with authentication will appear on the homepage. Conclusion Automating the deployment, updates, and backups of Jenkins is crucial for ensuring the reliability and security of CI/CD processes. Using modern tools enhances this process with a variety of useful features and resources. If you're further interested in exploring Jenkins capabilities, we recommend the following useful resources that can assist with automating deployments: Official Jenkins website Jenkins Configuration as Code documentation Pipeline Syntax
30 January 2025 · 19 min to read
R

How to Find Standard Deviation in R

Standard deviation is a statistical technique that shows to what extent the values of the studied feature deviate on average from the mean. We use it to determine whether the units in our sample or population are similar with respect to the studied feature, or whether they differ significantly from each other. If you want to learn how to find standard deviation in R, or just learn what standard deviation is, read on. This guide offers a detailed explanation of calculating standard deviation in R, covering various methods and practical examples to help you analyze data efficiently.

The Mathematics Behind Standard Deviation

Standard deviation is a measure defining the average variation of individual values of a statistical feature from the arithmetic mean. It has an intuitive interpretation as a measure of the variability of a distribution. If the calculation were based on raw (signed) distances from the mean, the sum would always be 0, which is not a useful result; that is why the deviations are squared.

The mathematical formula of the standard deviation is:

σ = √( Σ(xᵢ − μ)² / N )

where Σ represents the sum, xᵢ is each observation, μ is the mean of the data, and N is the total number of observations. It is usually abbreviated as SD.

The smaller the standard deviation, the closer the values are to the average, which shows that the data is more consistent. To properly judge whether the SD is small or large, it's important to know the range of the scale being used.

The Significance of Standard Deviation

The standard deviation is very helpful when comparing the variability between two data sets of similar size and average. Using the simple average alone often does not support deeper analysis. What good is knowing the average salary in a company if we do not know the variability of the salaries? Do all employees get exactly the same? Or is the manager inflating the average? To dig deeper and get to the underlying truth, we have to calculate the standard deviation.

Similarly, standard deviation is also helpful for assessing risk when making investment decisions. If, on the stock exchange, one company brought an average annual profit of 4% and another an average annual profit of 5%, it does not mean that it is automatically better to choose the second company. Setting aside both fundamental and technical analysis of a specific company, as well as the broader macroeconomic conditions, it's valuable to focus on the fluctuations in the quotations themselves. If the stock value of the first company fluctuated by only a few percent during the year while the other fluctuated by several dozen percent, then the investment in the first company was much less risky. To compare different rates of return and check their riskiness, you can use the standard deviation.

Different Ways to Find Standard Deviation in R

To perform any kind of analysis, we first need some data. In R, you can enter data manually by defining a vector or import it from external sources, such as an Excel or CSV file. Let's create a vector with six values:

data <- c(4, 8, 6, 5, 3, 7)

Alternatively, datasets can be imported using the read.csv() function, which loads data from a CSV file into R.
Here's an example of importing data:

# Read a CSV file into a data frame
data <- read.csv("datafile.csv")

# Install the 'readxl' package
install.packages("readxl")
# Load the library
library(readxl)
# Read an Excel file into a data frame
data_excel <- read_excel("datafile.xlsx", sheet = 1)

Finding Sample Standard Deviation in R

A quick and easy way to find the standard deviation of a sample is the sd() function, one of R's built-in functions. It takes a data sample, often in the form of a vector, as input and returns the standard deviation. For example, to measure the SD of the vector created earlier:

sd(data)

Output:

[1] 1.870829

If your sample has missing or null values, just set the parameter na.rm = TRUE in the sd() function and the missing values will not be included in the analysis:

standard_deviation <- sd(data, na.rm = TRUE)

Finding Population Standard Deviation in R

To calculate the population standard deviation, we first find the mean, subtract it from each observation in the dataset, and square the results. Once we have the squared differences, we find their average to get the variance. Finally, taking the square root of the variance gives us the population SD. Here is the R code to manually compute the population standard deviation:

mean_data <- mean(data)
squared_differences <- (data - mean_data)^2
mean_squared_diff <- mean(squared_differences)
standard_deviation_manual <- sqrt(mean_squared_diff)
print(standard_deviation_manual)

Grouped Standard Deviation in R

Let's say you are analyzing the grades of students across different subjects in a school. The categorical variable here is "subject," and you want to know not only the average grade for each subject but also the variation in grades. This helps us understand whether certain subjects have a wide or a uniform range of grades. To determine the standard deviation for each category in a dataset containing categorical variables, you can use the dplyr package. The group_by() function segments the data by the categorical variable, and summarise() then calculates the SD for each distinct group. Before moving to the calculation, we install the dplyr package:

install.packages("dplyr")

Following our earlier example, let's take a dataset which contains grades of students across different subjects:

library(dplyr)

# Example data frame with subjects and grades
data <- data.frame(
  Subject = c('Math', 'Math', 'Math', 'History', 'History', 'History'),
  grade = c(85, 90, 78, 88, 92, 85)
)

# Calculate standard deviation for each subject
grouped_sd <- data %>%
  group_by(Subject) %>%
  summarise(Standard_Deviation = sd(grade))

print(grouped_sd)

Output:

# A tibble: 2 × 2
  Subject Standard_Deviation
  <chr>                <dbl>
1 History           3.511885
2 Math              6.027714

Finding Column-Wise Standard Deviation

In R, there are several ways to find the column-wise standard deviation. To find the SD of specific columns, you can use the apply() function together with sd(). A more efficient way is to use the summarise() or summarise_all() functions of the dplyr package.

Example using apply():

data_frame <- data.frame(A = c(1, 2, 3), B = c(4, 5, 6))
apply(data_frame, 2, sd)

Example using dplyr:

library(dplyr)
data_frame %>% summarise(across(everything(), sd))

Weighted Standard Deviation

Now imagine that you are the manager of a sports league where one team has 5 players while others have 50 players.
If you calculate the SD of scores across the entire league and treat all teams equally, the 5-player teams would contribute just as much to the calculation as the 50-player teams, even though they have far fewer players. Such an analysis would be misleading, so we need a measure like the weighted standard deviation, which accounts for weights based on the size of the teams, ensuring that teams with more players contribute proportionally to the overall variability.

The formula for calculating the weighted standard deviation is as follows:

σw = √( Σ wᵢ(xᵢ − μw)² / Σ wᵢ )

where wᵢ represents the weight for each data point, xᵢ denotes each data point, and μw is the weighted mean, calculated as:

μw = Σ wᵢxᵢ / Σ wᵢ

Though R does not have a built-in function for measuring the weighted standard deviation, it can be computed manually.

Manually Find Weighted Standard Deviation

Let's say we have test grade data with corresponding weights, and we want to measure the weighted standard deviation:

# Example data with grades and weights
grades <- c(85, 90, 78, 88, 92, 85)
weights <- c(0.2, 0.3, 0.1, 0.15, 0.1, 0.15)

# Calculate the weighted mean
weighted_mean <- sum(grades * weights) / sum(weights)

# Calculate the squared differences from the weighted mean
squared_differences <- (grades - weighted_mean)^2

# Calculate the weighted variance
weighted_variance <- sum(weights * squared_differences) / sum(weights)

# Calculate the weighted standard deviation
weighted_sd <- sqrt(weighted_variance)
print(weighted_sd)

Output:

[1] 3.853245

Conclusion

Standard deviation is quite easy to calculate, despite the sums and roots in the formula, and even easier to interpret. If you want to make friends with statistics or data science, then, like it or not, you also have to make friends with standard deviation and how to measure it in R.
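If you prefer running these snippets from a shell instead of an interactive R session, Rscript can execute them directly. This is a small illustration only, assuming R is installed and on your PATH; the file name weighted_sd.R is an arbitrary example, not a file from this article.

# Evaluate an expression inline: sample SD of the example vector
Rscript -e 'x <- c(4, 8, 6, 5, 3, 7); cat("sample sd:", sd(x), "\n")'

# Or run a saved script, e.g. one containing the weighted SD calculation above
Rscript weighted_sd.R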
30 January 2025 · 7 min to read
Kubernetes

Kubernetes Requests and Limits

When working with the Kubernetes containerization platform, it is important to control resource usage for cluster objects such as pods. The requests and limits parameters allow you to configure resource consumption limits, such as how many resources a pod can use in a Kubernetes cluster. This article will explore the use of requests and limits in Kubernetes through practical examples. Prerequisites To work with requests and limits in a Kubernetes cluster, we need: A Kubernetes cluster (you can create one in the Hostman control panel). For testing purposes, a cluster with two nodes will suffice. The cluster can also be deployed manually by renting the necessary number of cloud or dedicated (physical) servers, setting up the operating system, and installing the required packages. Lens or kubectl for connecting to and managing your Kubernetes clusters. Connecting to a Kubernetes Cluster Using Lens First, go to the cluster management page in your Hostman panel. Download the Kubernetes cluster configuration file (the kubeconfig file). Once Lens is installed on your system, launch the program, and from the left menu, go to the Catalog (app) section: Select Clusters and click the blue plus button at the bottom right. Choose the directory where you downloaded the Kubernetes configuration file by clicking the Sync button at the bottom right. After this, our cluster will appear in the list of available clusters. Click on the cluster's name to open its dashboard: What are Requests and Limits in Kubernetes First, let's understand what requests and limits are in Kubernetes. Requests are a mechanism in Kubernetes that is responsible for allocating physical resources, such as memory and CPU cores, to the container being launched. In simple terms, requests in Kubernetes are the minimum system requirements for an application to function properly. Limits are a mechanism in Kubernetes that limits the physical resources (memory and CPU cores) allocated to the container being launched. In other words, limits in Kubernetes are the maximum values for physical resources, ensuring that the launched application cannot consume more resources than specified in the limits. The container can only use resources up to the limit specified in the Limits. The request and limit mechanisms apply only to objects of type pod and are defined in the pod configuration files, including deployment, StatefulSet, and ReplicaSet files. Requests are added in the containers block using the resources parameter. In the resources section, you need to add the requests block, which consists of two values: cpu (CPU resource request) and memory (memory resource request). The syntax for requests is as follows: containers: ... resources: requests: cpu: "1.0" memory: "150Mi" In this example, for the container to be launched on a selected node in the cluster, at least one free CPU core and 150 megabytes of memory must be available. Limits are set in the same way. For example: containers: ... resources: limits: cpu: "2.0" memory: "500Mi" In this example, the container cannot use more than two CPU cores and no more than 500 megabytes of memory. The units of measurement for requests and limits are as follows: CPU — in millicores (milli-cores) RAM — in bytes For CPU resources, cores are used. For example, if we need to allocate one physical CPU core to a container, the manifest should specify 1.0. To allocate half a core, specify 0.5. 
A core can be logically divided into millicores, so you can allocate, for example, 100m, which means one-thousandth of a core (1 full CPU core contains 1000 millicores). For RAM, we specify values in bytes. You can use numbers with the suffixes E, P, T, G, M, k. For example, if a container needs to be allocated 1 gigabyte of memory, you should specify 1G. In megabytes, it would be 1024M, in kilobytes, it would be 1048576k, and so on. The requests and limits parameters are optional; however, it is important to note that if both parameters are not set, the container will be able to run on any available node in the cluster regardless of the free resources and will consume as many resources as are physically available on each node. Essentially, the cluster will allocate excess resources. This practice can negatively affect the stability of the entire cluster, as it significantly increases the risk of errors such as OOM (Out of Memory) and OutOfCPU (lack of CPU resources). To prevent these errors, Kubernetes introduced the request and limit mechanisms. Practical Use of Requests and Limits in Kubernetes Let's look at the practical use of requests and limits. First, we will deploy a deployment file with an Nginx image where we will set only the requests. In the configuration below, to launch a pod with a container, the node must have at least 100 millicores of CPU (1/1000 of a CPU core) and 150 megabytes of free memory: apiVersion: apps/v1 kind: Deployment metadata: name: nginx-test-deployment namespace: ns-for-nginx labels: app: nginx-test spec: selector: matchLabels: app: nginx-test template: metadata: labels: app: nginx-test spec: containers: - name: nginx-test image: nginx:1.25 resources: requests: cpu: "100m" memory: "150Mi" Before deploying the deployment, let's create a new namespace named ns-for-nginx: kubectl create ns ns-for-nginx After creating the namespace, we will deploy the deployment file using the following command: kubectl apply -f nginx-test-deployment.yml Now, let's check if the deployment was successfully created: kubectl get deployments -A Also, check the status of the pod: kubectl get po -n ns-for-nginx The deployment file and the pod have been successfully launched. To ensure that the minimum resource request was set for the Nginx pod, we will use the kubectl describe pod command (where nginx-test-deployment-786d6fcb57-7kddf is the name of the running pod): kubectl describe pod nginx-test-deployment-786d6fcb57-7kddf -n ns-for-nginx In the output of this command, you can find the requests block, which contains the previously set minimum requirements for our container to run: In the example above, we created a deployment that sets only the minimum required resources for deployment. 
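Before adding limits, it can also help to check how much allocatable capacity the cluster nodes actually have and what the test pod is consuming. The commands below are a small sketch rather than part of the article's walkthrough: <node-name> is a placeholder, and kubectl top requires the metrics-server addon, which is not covered here.

# Show nodes and the resources already allocated on one of them
kubectl get nodes
kubectl describe node <node-name> | grep -A 8 "Allocated resources"

# Requires metrics-server: current CPU/memory usage of the Nginx test pods
kubectl top pod -n ns-for-nginx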
Now, let's add limits so the container can use at most 1 full CPU core and 1 gigabyte of RAM by creating a new deployment file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test-deployment-2
  namespace: ns-for-nginx
  labels:
    app: nginx-test2
spec:
  selector:
    matchLabels:
      app: nginx-test2
  template:
    metadata:
      labels:
        app: nginx-test2
    spec:
      containers:
      - name: nginx-test2
        image: nginx:1.25
        resources:
          requests:
            cpu: "100m"
            memory: "150Mi"
          limits:
            cpu: "1.0"
            memory: "1G"

Let's create the deployment in the cluster:

kubectl apply -f nginx-test-deployment2.yml

Using the kubectl describe command, let's verify that both requests and limits have been applied (where nginx-test-deployment-2-6d5df6c95c-brw8n is the name of the pod):

kubectl describe pod nginx-test-deployment-2-6d5df6c95c-brw8n -n ns-for-nginx

In the output of this command, both requests and limits are now set for the container. With these values, the container will only be scheduled on a node with at least 150 megabytes of free memory and 100 millicores of CPU, and it will not be allowed to consume more than 1 gigabyte of RAM and 1 CPU core.

Using ResourceQuota

In addition to assigning resources to each container manually, Kubernetes provides a way to set quotas for specific namespaces in the cluster. The ResourceQuota mechanism limits aggregate resource usage, such as CPU and memory, within a particular namespace. The practical use of ResourceQuota looks like this:

Create a new namespace where the quota will apply:

kubectl create ns ns-for-resource-quota

Create a ResourceQuota object:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-quota-test
  namespace: ns-for-resource-quota
spec:
  hard:
    pods: "2"
    requests.cpu: "0.5"
    requests.memory: "800Mi"
    limits.cpu: "1"
    limits.memory: "1G"

In this example, the following restrictions apply to all objects created in the ns-for-resource-quota namespace:

A maximum of 2 pods can be created.
The combined CPU requests of all pods cannot exceed 0.5 of a core (500 millicores).
The combined memory requests of all pods cannot exceed 800 Mi.
The combined CPU limits of all pods cannot exceed 1 core.
The combined memory limits of all pods cannot exceed 1 GB.

Apply the configuration file:

kubectl apply -f test-resource-quota.yaml

Check the properties of the ResourceQuota object:

kubectl get resourcequota resource-quota-test -n ns-for-resource-quota

As you can see, the resource quotas have been set. Also verify the output of the kubectl describe ns command:

kubectl describe ns ns-for-resource-quota

The previously created namespace ns-for-resource-quota now shows the corresponding resource quotas.

Here is an example of an Nginx deployment in that namespace:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-with-quota
  namespace: ns-for-resource-quota
  labels:
    app: nginx-with-quota
spec:
  selector:
    matchLabels:
      app: nginx-with-quota
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx-with-quota
    spec:
      containers:
      - name: nginx
        image: nginx:1.22.1
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 100m
            memory: 100Mi

Here we define 3 replicas of the Nginx pod to test the quota mechanism, set minimum resource requests for the containers, and apply limits so the containers cannot exceed the defined resources. Apply the configuration file and check the result:

kubectl apply -f nginx-deployment-with-quota.yaml
kubectl get all -n ns-for-resource-quota

As a result, only two of the three replicas of the pod will be successfully created.
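To see exactly why the third replica is missing, one option is to inspect the events in the namespace; the quota rejection is recorded against the ReplicaSet. The field selector below assumes the standard FailedCreate event reason:

kubectl get events -n ns-for-resource-quota --field-selector reason=FailedCreate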
The events will show an error indicating that the resource quota for pod creation has been exceeded: in this case, we tried to create more pods than the quota allows. The two pods that fit within the quota, however, start successfully.

Conclusion

Requests and limits are critical mechanisms in Kubernetes that allow for flexible resource allocation and control within the cluster, preventing unexpected errors in running applications and ensuring the stability of the cluster itself. We offer an affordable Kubernetes hosting platform, with transparent and scalable pricing for all workloads.
29 January 2025 · 9 min to read
Python

How to Update Python

As software evolves, so does the need to keep your programming environment up to date. Python, known for its versatility and widespread application, frequently sees new version releases. These updates bring new features, performance enhancements, and crucial security patches for developers and organizations that depend on Python. Keeping Python current means better performance and stronger security. We'll explore different methods for updating Python, suited to your needs.

Prerequisites

Before starting, ensure you have:

Administrative access to your cloud server.
Reliable internet access.

Updating Python

Several methods are available to update Python on a cloud server. Here are four effective ways to do it.

Method 1: Via Package Manager

Using a package manager makes updating Python a quick and effortless task. This approach is simple and fast, especially for users who are familiar with package management systems.

Step 1: Find the Current Python Version

Begin by checking the Python version on your server:

python --version

or for Python 3:

python3 --version

Step 2: Update the Package Repository

Make sure your package index is updated so you receive the latest version data:

sudo apt update

Step 3: Upgrade Python

Then use the package manager to upgrade Python to the newest version available in the repository:

sudo apt install --only-upgrade python3

This brings your Python installation up to the latest version provided by your package repository.

Method 2: Building Python from Source

Compiling Python from source lets you customize the build process and apply specific optimizations. This method is especially useful for developers who need a Python build tailored to their requirements. Follow these instructions:

Step 1: Install Dependencies

Install the packages required for the build process:

sudo apt install build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev pkg-config libffi-dev wget

Step 2: Download the Python Source Code

Next, get the desired Python source code from the official website, or download it directly using wget:

wget https://www.python.org/ftp/python/3.13.1/Python-3.13.1.tgz

Substitute 3.13.1 with your preferred Python version number.

Step 3: Extract the Package

Once downloaded, extract the tarball:

tar -xf Python-<latest-version>.tgz

Step 4: Set Up and Compile Python

Enter the extracted folder and configure the installation:

cd Python-<latest-version>
./configure --enable-optimizations

Once done, compile Python with make:

make -j $(nproc)

Note: The command above uses all available CPU cores to speed up the build process. On a machine with limited resources, such as a single CPU core and 1 GB of RAM, limit the number of parallel jobs to reduce memory usage. For example:

make -j1

Step 5: Install Python

After compilation, install Python with:

sudo make install

Note: You can use make altinstall instead of make install. This prevents interruptions to system tools and applications that rely on the default Python version.
However, extra steps are needed:

Verify the installed location:

ls /usr/local/bin/python3.13

Use the update-alternatives system to register and switch between multiple Python versions:

sudo update-alternatives --install /usr/bin/python3 python3 /usr/local/bin/python3.13 1
sudo update-alternatives --config python3

Step 6: Validate the Python Installation

Close and reopen the terminal, then check the newly installed version:

python3 --version

Method 3: Via Pyenv

Pyenv is a go-to solution for maintaining different Python versions on the same system. It offers a versatile way to install and switch between various Python versions. To update Python through Pyenv, use the following instructions.

Step 1: Install Dependencies

First, set up the dependencies needed for compiling Python:

sudo apt install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev git

Step 2: Install Pyenv

Next, use curl to download and run the Pyenv installer:

curl https://pyenv.run | bash

Step 3: Update the Shell Configuration

After that, update your shell configuration so that Pyenv is available (add these lines to your ~/.bashrc, or run them in the current session):

export PYENV_ROOT="$HOME/.pyenv"
[[ -d $PYENV_ROOT/bin ]] && export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init - bash)"

Step 4: Install a Recent Python

Once the installation completes, display all available Python versions:

pyenv install --list

Then install the version you want:

pyenv install <latest-version>

Set the newly installed version as the system-wide default:

pyenv global <latest-version>

Step 5: Verify the Installation

Confirm the new Python version:

python --version

Method 4: Via Anaconda

Anaconda provides a full-featured distribution of Python and R aimed at data science and computational applications. It simplifies package handling and deployment, offering an accessible and efficient framework for developers. Here are the steps:

Step 1: Fetch the Anaconda Installer

Download the Anaconda installer script directly from the official site:

wget https://repo.anaconda.com/archive/Anaconda3-<latest-version>-Linux-x86_64.sh

Replace <latest-version> with the desired version number. For example:

wget https://repo.anaconda.com/archive/Anaconda3-2024.06-1-Linux-x86_64.sh

Step 2: Run the Installer

Run the installer script with bash:

bash Anaconda3-<latest-version>-Linux-x86_64.sh

Follow the prompts to finalize the installation.

Step 3: Initialize Anaconda

Activate Anaconda in your shell by reloading your shell configuration:

source ~/.bashrc

Step 4: Update Anaconda

Update the conda package manager:

conda update conda

To move to a specific Python version within Anaconda, install it explicitly:

conda install python=<version>

Step 5: Verify the Installation

Check the Python version used by your Anaconda environment:

python --version

Additional Tips for Maintaining Your Python Environment

Listed below are some key practices to keep your Python environment running smoothly and efficiently:

Regular Updates and Maintenance

To maintain optimal performance and security, keep your Python environment updated frequently. It's recommended to check for updates periodically and apply them as needed.

Using Virtual Environments

It's a good idea to use virtual environments when working with Python. They let you set up separate environments for each project, so dependencies and versions stay isolated.
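For instance, here is a minimal sketch of that workflow using the built-in venv module; the requests package installed here is only an example:

python3 -m venv .venv
source .venv/bin/activate
pip install requests
deactivate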
Tools like venv and virtualenv can help manage these environments efficiently. Backup and Version Control It's always a good idea to maintain backups of your important projects and configurations. Git helps you record changes, work with teammates, and switch back to older versions when needed. Troubleshooting Common Issues Listed here are frequent problems you may face and ways to solve them: Dependency Conflicts Sometimes, upgrading Python or installing new packages can lead to dependency conflicts. To resolve these conflicts, consider using tools like pipenv or poetry that manage dependencies and virtual environments. Path Issues After upgrading Python, you might encounter issues with the PATH environment variable. Ensure that your system recognizes the correct Python version by updating the PATH variable in your shell configuration file (e.g., .bashrc, .zshrc). Security Considerations Ensuring the protection of your Python environment is essential. Follow these recommendations to maintain a secure environment: Stick to trusted sources when downloading packages. Use PIP's hash-checking mode to confirm package integrity. Always review the code and documentation before incorporating a new package. Stay informed with security updates and advisories from the Python ecosystem and package maintainers. Keep PIP and your packages updated regularly to ensure protection with the newest security fixes and improvements. FAQs Q1: What's the recommended approach to updating Python on a cloud server? A: The best method depends on your requirements. For a straightforward update, using a package manager is ideal. For customization, building from source is recommended. Pyenv is great for managing multiple versions, while Anaconda is tailored for data science needs. Q2: How frequently should I update my Python environment? A: Periodically review for updates and implement them to ensure top performance and robust security. Q3: What should I do if I encounter issues after updating Python? A: Refer to the troubleshooting section for common issues. Check the PATH variable for accuracy, and use virtual environments to solve any dependency conflicts. Conclusion Updating Python on a cloud server can be accomplished through various methods depending on your preferences and requirements. Whether using a package manager, compiling from source, managing versions with Pyenv, or leveraging Anaconda, each approach has its benefits. By following this comprehensive guide, you can ensure your Python environment remains current, secure, and equipped with the latest features. Regularly updating Python is essential to leverage new functionalities and maintain the security of your applications.
29 January 2025 · 8 min to read
Linux

How to Download Files with cURL

Downloading content from remote servers is a regular task for both administrators and developers. Although there are numerous tools for this job, cURL stands out for its adaptability and simplicity. It’s a command-line utility that supports protocols such as HTTP, HTTPS, FTP, and SFTP, making it crucial for automation, scripting, and efficient file transfers. You can run cURL directly on your computer to fetch files. You can also include it in scripts to streamline data handling, thereby minimizing manual effort and mistakes. This guide demonstrates various ways to download files with cURL. By following these examples, you’ll learn how to deal with redirects, rename files, and monitor download progress. By the end, you should be able to use cURL confidently for tasks on servers or in cloud setups. Basic cURL Command for File Download The curl command works with multiple protocols, but it’s primarily used with HTTP and HTTPS to connect to web servers. It can also interact with FTP or SFTP servers when needed. By default, cURL retrieves a resource from a specified URL and displays it on your terminal (standard output). This is often useful for previewing file contents without saving them, particularly if you’re checking a small text file. Example: To view the content of a text file hosted at https://example.com/file.txt, run: curl https://example.com/file.txt For short text documents, this approach is fine. However, large or binary files can flood the screen with unreadable data, so you’ll usually want to save them instead. Saving Remote Files Often, the main goal is to store the downloaded file on your local machine rather than see it in the terminal. cURL simplifies this with the -O (capital O) option, which preserves the file’s original remote name. curl -O https://example.com/file.txt This retrieves file.txt and saves it in the current directory under the same name. This approach is quick and retains the existing filename, which might be helpful if the file name is significant. Choosing a Different File Name Sometimes, renaming the downloaded file is important to avoid collisions or to create a clear naming scheme. In this case, use the -o (lowercase o) option: curl -o myfile.txt https://example.com/file.txt Here, cURL downloads the remote file file.txt but stores it locally as myfile.txt. This helps keep files organized or prevents accidental overwriting. It’s particularly valuable in scripts that need descriptive file names. Following Redirects When requesting a file, servers might instruct your client to go to a different URL. Understanding and handling redirects is critical for successful downloads. Why Redirects Matter Redirects are commonly used for reorganized websites, relocated files, or mirror links. Without redirect support, cURL stops after receiving an initial “moved” response, and you won’t get the file. Using -L or --location To tell cURL to follow a redirect chain until it reaches the final target, use -L (or --location): curl -L -O https://example.com/redirected-file.jpg This allows cURL to fetch the correct file even if its original URL points elsewhere. If you omit -L, cURL will simply print the redirect message and end, which is problematic for sites with multiple redirects. Downloading Multiple Files cURL can also handle multiple file downloads at once, saving you from running the command repeatedly. 
Using Curly Braces and Patterns If filenames share a pattern, curly braces {} let you specify each name succinctly: curl -O https://example.com/files/{file1.jpg,file2.jpg,file3.jpg} cURL grabs each file in sequence, making it handy for scripted workflows. Using Ranges For a series of numbered or alphabetically labeled files, specify a range in brackets: curl -O https://example.com/files/file[1-5].jpg cURL automatically iterates through files file1.jpg to file5.jpg. This is great for consistently named sequences of files. Chaining Multiple Downloads If you have different URLs for each file, you can chain them together: curl -O https://example1.com/file1.jpg -O https://example2.com/file2.jpg This approach downloads file1.jpg from the first site and file2.jpg from the second without needing multiple commands. Rate Limiting and Timeouts In certain situations, you may want to control the speed of downloads or prevent cURL from waiting too long for an unresponsive server. Bandwidth Control To keep your network from being overwhelmed or to simulate slow conditions, limit the download rate with --limit-rate: curl --limit-rate 2M -O https://example.com/bigfile.zip 2M stands for 2 megabytes per second. You can also use K for kilobytes or G for gigabytes. Timeouts If a server is too slow, you may want cURL to stop after a set time. The --max-time flag does exactly that: curl --max-time 60 -O https://example.com/file.iso Here, cURL quits after 60 seconds, which is beneficial for scripts that need prompt failures. Silent and Verbose Modes cURL can adjust its output to show minimal information or extensive details. Silent Downloads For batch tasks or cron jobs where you don’t need progress bars, include -s (or --silent): curl -s -O https://example.com/file.jpg This hides progress and errors, which is useful for cleaner logs. However, troubleshooting is harder if there’s a silent failure. Verbose Mode In contrast, -v (or --verbose) prints out detailed request and response information: curl -v https://example.com Verbose output is invaluable when debugging issues like invalid SSL certificates or incorrect redirects. Authentication and Security Some downloads require credentials, or you might need a secure connection. HTTP/FTP Authentication When a server requires a username and password, use -u: curl -u username:password -O https://example.com/protected/file.jpg Directly embedding credentials can be risky, as they might appear in logs or process lists. Consider environment variables or .netrc files for more secure handling. HTTPS and Certificates By default, cURL verifies SSL certificates. If the certificate is invalid, cURL blocks the transfer. You can bypass this check with -k or --insecure, though it introduces security risks. Whenever possible, use a trusted certificate authority so that connections remain authenticated. Using a Proxy In some environments, traffic must route through a proxy server before reaching the target. Downloading Through a Proxy Use the -x or --proxy option to specify the proxy: curl -x http://proxy_host:proxy_port -O https://example.com/file.jpg Replace proxy_host and proxy_port with the relevant details. cURL forwards the request to the proxy, which then retrieves the file on your behalf. 
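If your environment already defines a proxy, you don't have to repeat -x on every call, since cURL also honors the standard proxy environment variables. A minimal sketch with the same placeholder host and port:

export https_proxy=http://proxy_host:proxy_port
curl -O https://example.com/file.jpg

Every HTTPS request cURL makes in that shell session now goes through the proxy until the variable is unset.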
Proxy Authentication If your proxy requires credentials, embed them in the URL: curl -x https://proxy.example.com:8080 -U myuser:mypassword -O https://example.com/file.jpg Again, storing sensitive data in plain text can be dangerous, so environment variables or configuration files offer more secure solutions. Monitoring Download Progress Tracking download progress is crucial for large files or slower links. Default Progress Meter By default, cURL shows a progress meter, including total size, transfer speed, and estimated finish time. For example: % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current                                 Dload  Upload   Total   Spent    Left  Speed100  1256  100  1256    0     0   2243      0 --:--:-- --:--:-- --:--:--  2246 This readout helps you gauge how much remains and if the transfer rate is acceptable. Compact Progress Bar If you want fewer details, add -#: curl -# -O https://example.com/largefile.iso A simpler bar shows the overall progress as a percentage. It’s easier on the eyes but lacks deeper stats like current speed. Capturing Progress in Scripts When using cURL within scripts, you might want to record progress data. cURL typically sends progress info to stderr, so you can redirect it: curl -# -O https://example.com/largefile.iso 2>progress.log Here, progress.log contains the status updates, which you can parse or store for later review. Conclusion cURL shines as a flexible command-line tool for downloading files in multiple protocols and environments. Whether you need to handle complex redirects, rename files on the fly, or throttle bandwidth, cURL has you covered. By mastering its core flags and modes, you’ll be able to integrate cURL seamlessly into your daily workflow for scripting, automation, and more efficient file transfers.
29 January 2025 · 7 min to read
Docker

Docker Exec: How to Use It to Run Commands in a Container

Docker is an effective and versatile environment for building, running, and deploying applications inside containers. One of its key utilities is docker exec, which lets you run commands inside a particular container and helps you maintain reliable, compact containers. During development or installation, you often need to inspect configurations, examine the current state of an application, or debug problems, and docker exec offers exactly that: an environment where commands can be run in dockerized apps. This tutorial will cover docker exec, complete with possible use cases and explanations.

Prerequisites

You must meet certain prerequisites before beginning the article:

Installation: Verify that Docker is already installed. If not, check our tutorial to install it.
Permissions: The user account should have the privileges required to run the commands.
Running Container: A container needs to be accessible and running. You can determine its ID or name with docker ps.
General Concepts: You should be familiar with the core concepts of Docker. Familiarity with Linux systems and Docker basics will help in troubleshooting any issues during configuration.

These requirements are necessary before beginning the setup.

Basic Introduction

docker exec gives you greater control over your applications, along with better isolation and security. It helps users monitor, manage, and debug running apps inside a particular container. Explore its features to boost your productivity and automate workflows. With it, you can run commands directly in a container: open interactive sessions, execute shell commands, and even run scripts. This significantly improves your workflow by enabling interaction with the live application; you can investigate issues and adjust configurations without a full container restart, which improves efficiency.

General Syntax

The general syntax is:

docker exec [OPTIONS] CONTAINER CODE [ARG...]

OPTIONS: Flags that customize the behaviour of the command. Several options are listed below:
-i: Keeps STDIN open even if not attached.
-t: Allocates a pseudo-TTY.
-u USER: Specifies the user to run the command as.
-w WORKDIR: Specifies the working directory for the command.
CONTAINER: The ID or name of the container where the command is executed.
CODE: The script or command you want to run inside the container.
ARG: Additional parameters passed to CODE.

How to Use Docker Exec to Run Commands in a Container

With this utility, you can run programs, check logs, and perform other admin operations inside a running container by accessing its CLI. It is beneficial for effective management since it increases adaptability and gives you more control over dockerized apps.

Testing with a Sample Container

Before running the commands below, you should have at least one container that is currently operational. If you do not have one yet, start a container with a name of your choice; in our case, we use mynginx:

docker run -d --name mynginx nginx

Finding the Active Container ID

Before beginning, you need to know the ID or name of the running container.
Let’s run the below command to obtain the info on all dockerized apps that are currently operational: docker ps In the figure, the operational instance ID is b51dc8e05c77 and the name is mynginx.  Working With a Particular Directory In this first example, you can run the command in the particular directory of the operational container. To achieve this, the --workdir or -w option is used by mentioning the folder name. Look at a use case where the pwd is run within the operational container mynginx: docker exec --workdir /tmp mynginx pwd Here: docker exec: It is the core command to run the command within the operational container. --workdir /tmp: This OPTION indicates our working directory. mynginx: It indicates the CONTAINER name.  pwd: It indicates the executed CODE within the container. In the figure, the pwd executes within the particular mynginx instance and allocates the working directory to /tmp.  Single Command Execution  In this example, execute a single command. For this, first mention the container name or ID, and afterwards, the particular command that you are required to execute. Here, mynginx is the name of the operational container, and the echo "Hello, Hostman Users!" is the command: docker exec mynginx echo "Hello, Hostman Users!" In the figure, there is an execution of the echo "Hello, Hostman Users!" command within mynginx. Several Commands Execution  You can execute several commands in a single line statement by splitting them with semicolon. Let’s look at the below statement: docker exec mynginx /bin/bash ls; free -m; df -h; In the result, ls shows the content inside of the mentioned folder, free -m shows the system memory and df -h disk space usage. It permits you to analyze the memory state, filesystem, and other info in one statement. Enabling the Shell Through Name You can enable the shell within the dockerized app. It permits an interface for the file system as well as script execution. Here, the -it option activates interactive mode and assigns the interface: docker exec -it mynginx /bin/bash The figure enables the bash shell interface within mynginx. But, /bin/bash is not guaranteed to be present in every image of Docker. Therefore, other shells like sh can also be enabled. Now, input exit and press ENTER to close the interface: exit To launch other shells like sh (which is a symbolic link to bash or another shell), use /bin/sh in the below statement line: docker exec -it mynginx /bin/sh In the figure, the code line launches the shell interface, which is operational.  Enabling the Shell Through ID In this particular use case, enable the session through the b51dc8e05c77 container ID inside the Docker app. Furthermore, you have the ability to interact with the interface as though you directly logged in via the -it flag. The -t indicates the assignment of pseudo-TTY, and the -i opens the STDIN. Both are beneficial for analysis, debugging, as well as managerial operations: docker exec -it b51dc8e05c77 bash Furthermore, you can analyse the information of the current folder inside the particular shell (in a detailed format), e.g., file size, owner, group, number of links, modification date, and file permissions: ls -l It gives detailed information on each file as well as the folder that assists you in knowing their attributes and managing them effectively. Working As a Particular User You can execute a command as the specific user through the -u option. It is beneficial when you are permitted to work with specific privileges. 
It runs the command in the operational container through the particular user and group: docker exec -u <user>:<group> <container_id> <command>  For instance, the whoami runs as the www-data in the mynginx container: docker exec -u www-data mynginx whoami In the figure, www-data verifies that the particular command is executed successfully with the correct user permissions and within the expected interface.  Enabling a Non-Interactive Shell Sometimes, users prefer not to have any interaction. For such circumstances, they can execute the command without any argument: docker exec mynginx tail /etc/passwd  The last 10 lines of the passwd file have been shown. This passwd file is stored in the /etc/passwd folder containing the user information. It helps you monitor the user account information, permitting you to quickly check for troubleshooting or update issues.  Working With a Single Environment Variable You may need to pass environment variables to the command that is run in the operational container. To achieve this, use the -e option as below: docker exec -e MY_VAR=value mynginx printenv MY_VAR In the figure, the printenv MY_VAR is successfully executed in mynginx when the MY_VAR is set to value correctly. Working With Multiple Environment Variables You can set more than one variable through the -e flag.  docker exec -e TEST=john -e ENVIRONMENT=prod mynginx env The figure confirms that the two variables TEST and ENVIRONMENT have been set to john and prod in the mynginx. Working With the Detached Mode You can run commands in the detached mode through the -d flag. Therefore, it runs in the background: docker exec -d mynginx sleep 500 The figure confirms that the mynginx is executing the sleep 500 command. Working With the Privileged Mode Here, the --privileged flag permits you to execute the command, such as mount, with elevated privileges in the running container: docker exec --privileged mynginx mount In the figure, mount permits the system to create a mount point with the particular permissions in the mynginx. More Information on docker exec The --help option shows the manual with a list of available options with concise explanations.  docker exec --help Final Words docker exec is an effective utility for controlling and interacting with active containers. It is helpful for operations like monitoring, managing, and debugging apps without interfering with their functionality. It permits you to run code, launch shells, customize several configuration aspects, and also set environment variables. Once you become familiar with the usage of this utility, you can manage containers easily. It makes your operations much smoother for creating and deploying apps.
29 January 2025 · 8 min to read
Kubernetes

Kubernetes Cluster Health Checks

The Kubernetes containerization platform is a complex system consisting of many different components and internal API objects totaling over 50. When issues arise with the cluster, it is important to know how to troubleshoot them. There are many different health checks available for a Kubernetes cluster and its components — let's go over them today. Connecting to a Kubernetes Cluster with kubectl To connect to a Kubernetes cluster using the kubectl command-line utility, you need a kubeconfig configuration file that contains the settings for connecting to the cluster. By default, this file is located in the hidden .kube directory in the user's home directory. The configuration file is located on the master node at /etc/kubernetes/admin.conf. To copy the configuration file to the user's home directory, you need to run the following command: sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config When using cloud-based Kubernetes clusters, you can download the file from the cluster's management panel. For example, on Hostman: After copying the file to the user's home directory, you need to export the environment variable so that the kubectl utility can locate the configuration file. To do this, run: export KUBECONFIG=$HOME/.kube/config Now, the kubectl command will automatically connect to the cluster, and all commands will be applied to the cluster specified in the exported configuration file. If you're using a kubeconfig file downloaded from the cluster's management panel, you can use the following command: export KUBECONFIG=/home/user/Downloads/config.yaml Where /home/user/Downloads/ is the full path to the config.yaml file. Checking the Client and Server Versions of Kubernetes Although this check might seem not so obvious, it plays a fundamental role in starting the troubleshooting process. The reason is that for Kubernetes to function stably, the client and server versions of Kubernetes need to be identical to avoid unexpected issues. This is mentioned in the official kubectl installation documentation. To check the client and server version of your Kubernetes cluster, run the following command: kubectl version In the output of the command, pay attention to the Client Version and Server Version lines. If the client and server versions differ (as in the screenshot above), the following warning will appear in the command output: WARNING: version difference between client (1.31) and server (1.29) exceeds the supported minor version skew of +/-1. Retrieving Basic Cluster Information During cluster health checks, it might be useful to know the IP address or domain name of the control plane component, as well as the address of the embedded Kubernetes DNS server — CoreDNS. To do this, use the following command: kubectl cluster-info For the most detailed information about the cluster, you can obtain a cluster dump using the command: kubectl cluster-info dump Note that this command produces a huge amount of data. For further use and analysis, it's a good idea to save the data to a separate file. To do this, redirect the output to a file: kubectl cluster-info dump > cluster_dump.txt Retrieving All Available Cluster API Objects If you need to get a list of all the API objects available in the cluster, run the following command: kubectl api-resources Once you know the names of the cluster objects, you can perform various actions on them — from listing the existing objects to editing and deleting them. 
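For example, once you know an object's name from that list, you can inspect its schema straight from the command line with kubectl explain, which is handy while troubleshooting manifests; the field path below is just one illustration:

kubectl explain pod.spec.containers.resources

This prints the documentation for the selected field, including its child fields and their types.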
Instead of using the full object name, you can use its abbreviation (listed in the SHORTNAMES column), though abbreviations are not supported for all objects. Cluster Health Check Let's go over how to check the health of various components within a Kubernetes cluster. Nodes Health Check Checking Node Status Start by checking the status of the cluster nodes. To do this, use the following command: kubectl get nodes In the output, pay attention to the STATUS column. Each node should display a Ready status. Viewing Detailed Information If any node shows a NotReady status, you can view more detailed information about that node to understand the cause. To do this, use the command: kubectl describe node <node_name> In particular, pay attention to the Conditions and Events sections, which show all events on the node. These messages can help determine the cause of the node's unavailability. Additionally, the Conditions section displays the status of the following node components: NetworkUnavailable — Shows the status of the network configuration. If there are no network issues, the status will be False. If there are network issues, it will be True. MemoryPressure — Displays the status of memory usage on the node. If sufficient memory is available, the status will be False; if memory is running low, the status will be True. DiskPressure — Displays the status of available disk space on the node. If enough space is available, the status will be False. If disk space is low, the status will be True. PIDPressure — Shows the status of process "overload." If there are only a few processes running, the status will be False. If there are many processes running, the status will be True. Ready — Displays the overall health of the node. If the node is healthy and ready to run pods, the status will be True. If any issues are found (e.g., memory or network problems), the status will be False. Monitoring Resource Usage In Kubernetes, you can track resource consumption for both cluster nodes and pod-type objects, as well as containers. To display resource usage consumed by the cluster nodes, use the command: kubectl top node The top node command shows how much CPU and memory each node is consuming. The values are displayed in millicores for CPU and bytes for memory, and also as percentages. To display resource usage of all pods across all namespaces running in the cluster, use the command: kubectl top pod -A If you need to display resource usage for pods in a specific namespace, specify the namespace with the -n flag: kubectl top pod -n kube-system To view the resource consumption specifically by containers running in pods, use the --containers option: kubectl top pod --containers -A Viewing Events in the Cluster To view all events within the cluster, use the following command: kubectl get events It will display all events, regardless of the type of object in the cluster. The following columns are used: LAST SEEN — The time when the event occurred, displayed in seconds, minutes, hours, days, or months. TYPE — Indicates the event's status, which is akin to the severity level. The supported statuses are: Normal, Warning, and Error. REASON — Represents the cause of the event. For example, Starting indicates that an object in the cluster was started, and Pulling means that an image for a container was pulled. OBJECT — The cluster object that triggered the event, including nodes in the cluster (e.g., during initialization). MESSAGE — Displays the detailed message of the event, which can be useful for troubleshooting. 
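If you only need those columns in a compact form, you can build your own view with custom-columns; the JSONPath expressions below reference standard fields of the Event object:

kubectl get events -A -o custom-columns=TIME:.lastTimestamp,TYPE:.type,REASON:.reason,OBJECT:.involvedObject.name,MESSAGE:.message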
To narrow down the list of events, you can use a specific namespace: kubectl get events -n kube-system For more detailed event output, use the wide option: kubectl get events -o wide The wide option adds additional columns of information, including: SUBOBJECT — Displays the subobject related to the event (e.g., container, volume, secret). SOURCE — The source of the event, which could be components like kubelet, node-controller, etc. FIRST SEEN — The timestamp when the event was first recorded in the cluster. COUNT — The number of times the event has been repeated since it was first seen in the cluster. NAME — The name of the object (e.g., pod, secret) associated with the event. To view events in real-time, use the -w flag: kubectl get events -w The get events command also supports filtering via the --field-selector option, where you specify a field from the get events output. For example, to display all events with a Warning type in the cluster: kubectl get events --field-selector type=Warning -A Additionally, filtering by timestamps is supported. To display events in the order they first occurred, use the .metadata.creationTimestamp parameter: kubectl get events --sort-by='.metadata.creationTimestamp' Monitoring Kubernetes API Server The API server is a critical component and the "brain" of Kubernetes, that processes all requests to the cluster. The API server should always be available to respond to requests. To check its status, you can use special API endpoints: livez and readyz. To check the live status of the API server, use the following command: kubectl get --raw '/livez?verbose' To check the readiness status of the API server, use the following command: kubectl get --raw '/readyz?verbose' If both the livez and readyz requests return an ok status, it means the API server is running and ready to handle requests. Kubernetes Cluster Components To quickly display the status of all cluster components, use the following command: kubectl get componentstatuses If the STATUS and MESSAGE columns show Healthy and ok, it means the components are running successfully. If any component encounters an error or failure, the STATUS column will display Unhealthy, and the MESSAGE column will provide an error message. Container Runtime As is known, Kubernetes itself does not run pods with containers. Instead, it uses an external component called the Container Runtime Interface (CRI), or simply the container runtime. It’s important to ensure that the container runtime environment is functioning correctly. At the time of writing, Kubernetes supports the following container runtimes: containerd CRI-O It’s worth mentioning that Docker's container runtime is no longer supported starting from Kubernetes version 1.24. CRI-O First, check the status of the container runtime. To do this, on the node where the error appears, run the following command: systemctl status crio In the Active line, the status should show active (running). If it shows failed, further investigation is needed in the crio information messages and log files. To display the basic information about crio, including the latest error messages, use the command: crictl info If an error occurs while using crio, the message parameter will show a detailed description. crio also logs all its activities to log files, typically found in the /var/log/crio/pods directory. Additionally, you can use the journalctl logs. 
To display all logs for the crio unit, run: journalctl -u crio Containerd As with crio, start by checking the status of the container runtime. On the node where the error appears, run: systemctl status containerd In the Active line, the status should show active (running). If the status shows failed, you can get more detailed information by using the built-in status command, which will display all events, including errors: containerd status Alternatively, you can view the logs using journalctl. To display logs for the containerd unit, run: journalctl -u containerd You can also check the configuration file parameters for containerd using two commands (the output is usually quite large): containerd config default — Displays the default configuration file. Use this if no changes have been made to the file. If errors occur, this file can be used for rollback. containerd config dump — Displays the current configuration file, which may have been modified. Pods Health Check Kubernetes operates with pods, the smallest software units in the cluster, where containers with applications are run. The status of pods should always be READY. To display a list of all pods in the cluster and their statuses, use the following command: kubectl get po -A To display pods in a specific namespace, use the -n flag followed by the namespace name: kubectl get po -n kube-system For more detailed information about a pod, including any possible errors, use the kubectl describe pod command, which provides the most detailed information about the pod: kubectl describe pod coredns-6997b8f8bd-b5dq6 -n kube-system All events related to the pod, including errors, are displayed in the Events section. Getting Information About Objects with kubectl describe The kubectl describe command is a powerful tool for finding detailed information about an object, including searching for and viewing various errors. You can apply this command to all Kubernetes objects that are listed in the output of the kubectl api-resources command. Deployment files are widely used when deploying applications in a Kubernetes cluster. They allow you to control the state of service deployments, including scaling application replicas. To display the statuses of all available deployments in the Kubernetes cluster, use the command: kubectl get deployments -A It is important that the columns READY, UP-TO-DATE, and AVAILABLE display the same number of pods as specified in the deployment file. If the READY column shows 0 or fewer pods than specified, the pod with the application will not be started. To find the error's cause, use the describe command with the type of object, in this case, deployment: kubectl describe deployment coredns -n kube-system Just like when using describe for pods, all events, including errors, are displayed in the Conditions section. Conclusion Checking the health of a Kubernetes cluster is an important step in troubleshooting and resolving issues. Kubernetes consists of many different components, each with its own verification algorithm. It is important to know what and how to check to identify and fix errors quickly.
28 January 2025 · 11 min to read
Go

Working with Date and Time in Go Using the time Package

Go (Golang), like many other programming languages, has a built-in time package that provides special types and methods for working with dates and times. You can find comprehensive information about the time package in the official documentation. This guide will cover the basic aspects of working with time in Go.  All the examples shown were run on a cloud server provided by Hostman, using the Ubuntu 22.04 operating system and Go version 1.21.3. It is assumed that you are already familiar with the basics of Go and know how to run scripts using the appropriate interpreter command: go run script.go Parsing, Formatting, and Creating Dates Before getting started with time manipulation, it's important to understand a key feature of time formatting in Go. In most programming languages, date and time formats are specified using special symbols, which are replaced by values representing day, month, year, hour, minute, and second. However, Go approaches this differently. Instead of special symbols, it uses default date and time values represented by an increasing sequence of numbers: 01-02-03-04-05-06 This sequence of numbers represents: 1st month of the year (January) 2nd day of the month 3rd hour in 12-hour format (p.m.) 4th minute in 12-hour format (p.m.) 5th second in 12-hour format (p.m.) 6th year of the 21st century Thus, this results in the following time format: January 2nd, 3:04:05 PM, 2006 Or in another form: 02.01.2006 03:04:05 PM It is important to remember that this value is nothing more than a regular increasing sequence of numbers without any special significance. Therefore, this date and time act as a predefined layout for working with any explicitly specified date and time values. For example, here’s an abstract (not Go-specific) pseudocode example: currentTime = time.now() console.write("Current date: ", currentTime.format("%D.%M.%Y")) console.write("Current time: ", currentTime.format("%H:%M")) console.write("Current date and time: ", currentTime.format("%D.%M.%Y %H:%M")) In our pseudo-console, this would produce the following pseudo-output: Current date: 26.11.2024 Current time: 14:05 Current date and time: 26.11.2024 14:05 This is how date and time formatting works in most programming languages. In Go, however, the pseudocode would look like this: currentTime = time.now() console.write("Current date: ", currentTime.format("02.01.2006")) console.write("Current time: ", currentTime.format("03:04")) console.write("Current date and time: ", currentTime.format("02.01.2006 03:04")) The console output would be similar: Current date: 26.11.2024 Current time: 14:05 Current date and time: 26.11.2024 14:05 Here, the standard template values for date and time are automatically replaced with the actual date and time values. Additionally, template values have certain variations. For instance, you can specify the month 01 as Jan. Thanks to this approach, Go allows templates to be defined in a more intuitive and human-readable way. Parsing Working with time in Go starts by explicitly specifying it. 
This can be done using the time parsing function: package main import ( "fmt" // package for console I/O "time" // package for working with time "reflect" // package for determining variable types ) func main() { timeLayout := "2006-01-02" // time layout template timeValue := "2024-11-16" // time value to be parsed timeVariable, err := time.Parse(timeLayout, timeValue) // parsing time value using the template if err != nil { panic(err) // handling possible parsing errors } fmt.Println(timeVariable) // output the parsed time variable to the console fmt.Println(reflect.TypeOf(timeVariable)) // output the type of the time variable } When you run the script, the terminal will display the following output: 2024-11-16 00:00:00 +0000 UTC  time.Time Note that after parsing, a variable of type time.Time is created. This variable stores the parsed time value in its internal format. In the example shown, the time layout and value could be replaced with another equivalent format. func main() { timeLayout := "2006-Jan-02" timeValue:= "2024-Nov-16" ... The final result would remain the same. During parsing, an additional parameter can be specified to set the time zone, also known as the time offset or time zone: package main import ( "fmt" "time" ) func main() { // Local timeLocation, err := time.LoadLocation("Local") if err != nil { panic(err) } timeVariable, err := time.ParseInLocation("2006-01-02 15:04", "2024-11-16 07:45", timeLocation) if err != nil { panic(err) } fmt.Println("Local: ", timeVariable) // Asia/Bangkok timeLocation, err = time.LoadLocation("Asia/Bangkok") if err != nil { panic(err) } timeVariable, err = time.ParseInLocation("2006-01-02 15:04", "2024-11-16 07:45", timeLocation) if err != nil { panic(err) } fmt.Println("Asia/Bangkok: ", timeVariable) // Europe/Nicosia timeLocation, err = time.LoadLocation("Europe/Nicosia") if err != nil { panic(err) } timeVariable, err = time.ParseInLocation("2006-01-02 15:04", "2024-11-16 07:45", timeLocation) if err != nil { panic(err) } fmt.Println("Europe/Nicosia: ", timeVariable) } The console output of this script will be as follows: Local: 2024-11-16 07:45:00 +0000 UTC Asia/Bangkok: 2024-11-16 07:45:00 +0700 +07 Europe/Nicosia: 2024-11-16 07:45:00 +0300 EET Instead of explicitly creating a time zone variable, you can use a predefined constant: package main import ( "fmt" "time" ) func main() { // time.LoadLocation("Local") timeLocation, err := time.LoadLocation("Local") if err != nil { panic(err) } timeVariable, err := time.ParseInLocation("2006-01-02 15:04", "2024-11-16 07:45", timeLocation) if err != nil { panic(err) } fmt.Println(timeVariable) // time.Local timeVariable, err = time.ParseInLocation("2006-01-02 15:04", "2024-11-16 07:45", time.Local) if err != nil { panic(err) } fmt.Println(timeVariable) } In this case, the complete date and time values in both variants will be identical. 2024-11-16 07:45:00 +0000 UTC2024-11-16 07:45:00 +0000 UTC You can find a complete list of available time zones in the so-called Time Zone Database (tz database). Time zone identifiers are specified using two region names separated by a slash. For example: Europe/Nicosia Asia/Dubai US/Alaska Formatting We can format an already created time variable to represent its value as a specific text string. Thus, a variable of type time.Time has built-in methods for converting date and time into a string type. 
package main import ( "fmt" "time" ) func main() { timeLayout := "2006-01-02 15:04:05" timeValue := "2024-11-15 12:45:20" timeVariable, err := time.Parse(timeLayout, timeValue) if err != nil { panic(err) } fmt.Print("\r", "DATE", "\r\n") fmt.Println(timeVariable.Format("2006-01-02")) fmt.Println(timeVariable.Format("01/02/06")) fmt.Println(timeVariable.Format("01/02/2006")) fmt.Println(timeVariable.Format("20060102")) fmt.Println(timeVariable.Format("010206")) fmt.Println(timeVariable.Format("January 02, 2006")) fmt.Println(timeVariable.Format("02 January 2006")) fmt.Println(timeVariable.Format("02-Jan-2006")) fmt.Println(timeVariable.Format("Jan-02-06")) fmt.Println(timeVariable.Format("Jan-02-2006")) fmt.Println(timeVariable.Format("06")) fmt.Println(timeVariable.Format("Mon")) fmt.Println(timeVariable.Format("Monday")) fmt.Println(timeVariable.Format("Jan-06")) fmt.Print("\r", "TIME", "\r\n") fmt.Println(timeVariable.Format("15:04")) fmt.Println(timeVariable.Format("15:04:05")) fmt.Println(timeVariable.Format("3:04 PM")) fmt.Println(timeVariable.Format("03:04:05 PM")) fmt.Print("\r", "DATE and TIME", "\r\n") fmt.Println(timeVariable.Format("2006-01-02T15:04:05")) fmt.Println(timeVariable.Format("2 Jan 2006 15:04:05")) fmt.Println(timeVariable.Format("2 Jan 2006 15:04")) fmt.Println(timeVariable.Format("Mon, 2 Jan 2006 15:04:05 MST")) fmt.Print("\r", "PREDEFINED FORMATS", "\r\n") fmt.Println(timeVariable.Format(time.RFC1123)) // predefined format fmt.Println(timeVariable.Format(time.Kitchen)) // predefined format fmt.Println(timeVariable.Format(time.Stamp)) // predefined format fmt.Println(timeVariable.Format(time.DateOnly)) // predefined format } Running this script will output various possible date and time formats in the terminal: DATE 2024-11-15 11/15/24 11/15/2024 20241115 111524 November 15, 2024 15 November 2024 15-Nov-2024 Nov-15-24 Nov-15-2024 24 Fri Friday Nov-24 TIME 12:45 12:45:20 12:45 PM 12:45:20 PM DATE and TIME 2024-11-15T12:45:20 15 Nov 2024 12:45:20 15 Nov 2024 12:45 Fri, 15 Nov 2024 12:45:20 UTC PREDEFINED FORMATS Fri, 15 Nov 2024 12:45:20 UTC 12:45PM Nov 15 12:45:20 2024-11-15 Pay attention to the last few formats, which are predefined as constant values. These constants provide commonly used date and time formats in a convenient, ready-to-use form. You can find a complete list of these constants in the official documentation. time.Layout 01/02 03:04:05PM '06 -0700 time.ANSIC Mon Jan _2 15:04:05 2006 time.UnixDate Mon Jan _2 15:04:05 MST 2006 time.RubyDate Mon Jan 02 15:04:05 -0700 2006 time.RFC822 02 Jan 06 15:04 MST time.RFC822Z 02 Jan 06 15:04 -0700 time.RFC850 Monday, 02-Jan-06 15:04:05 MST time.RFC1123 Mon, 02 Jan 2006 15:04:05 MST time.RFC1123Z Mon, 02 Jan 2006 15:04:05 -0700 time.RFC3339 2006-01-02T15:04:05Z07:00 time.RFC3339Nano 2006-01-02T15:04:05.999999999Z07:00 time.Kitchen 3:04PM time.Stamp Jan _2 15:04:05 time.StampMilli Jan _2 15:04:05.000 time.StampMicro Jan _2 15:04:05.000000 time.StampNano Jan _2 15:04:05.000000000 time.DateTime 2006-01-02 15:04:05 time.DateOnly 2006-01-02 time.TimeOnly 15:04:05 Another common method to format date and time in Go is by converting it to Unix time.  
package main import ( "fmt" "time" "reflect" ) func main() { timeVariable := time.Unix(350, 50) // set Unix time to 350 seconds and 50 nanoseconds from January 1, 1970, 00:00:00 fmt.Println("Time:", timeVariable) // display time in UTC format timeUnix := timeVariable.Unix() timeUnixNano := timeVariable.UnixNano() fmt.Println("Time (UNIX, seconds):", timeUnix) // display time in Unix format (seconds) fmt.Println("Time (UNIX, nanoseconds):", timeUnixNano) // display time in Unix format (nanoseconds) fmt.Println("Time (type):", reflect.TypeOf(timeUnix)) // display the variable type for Unix time } After running this script, the following output will appear in the terminal: Time: 1970-01-01 00:05:50.00000005 +0000 UTC Time (UNIX, seconds): 350 Time (UNIX, nanoseconds): 350000000050 Time (type): int64 Note that the variable created to store the Unix time value is of type int64, not time.Time. Thus, by using formatting, you can perform conversions between string-based time and Unix time and vice versa: package main import ( "fmt" "time" ) func main() { timeString, _ := time.Parse("2006-01-02 15:04:05", "2024-11-15 12:45:20") fmt.Println(timeString.Unix()) timeUnix := time.Unix(12345, 50) fmt.Println(timeUnix.Format("2006-01-02 15:04:05")) } The console output of this script will display the results of conversions to and from Unix time: 17316747201970-01-01 03:25:45 Creation In Go, there is a more straightforward way to create a time.Time variable by explicitly setting the date and time parameters: package main import ( "fmt" "time" ) func main() { timeLocation, _ := time.LoadLocation("Europe/Vienna") // year, month, day, hour, minute, second, nanosecond, time zone timeVariable := time.Date(2024, 11, 20, 12, 30, 45, 50, timeLocation) fmt.Print(timeVariable) } After running this script, the following output will appear in the terminal: 2024-11-20 12:30:45.00000005 +0100 CET Current Date and Time In addition to manually setting arbitrary dates and times, you can set the current date and time: package main import ( "fmt" "time" "reflect" ) func main() { timeNow := time.Now() fmt.Println(timeNow) fmt.Println(timeNow.Format(time.DateTime)) fmt.Println(timeNow.Unix()) fmt.Println(reflect.TypeOf(timeNow)) } After running this script, the following output will appear in the terminal: 2024-11-27 17:08:18.195495127 +0000 UTC m=+0.000035621 2024-11-27 17:08:18 1732727298 time.Time As you can see, the time.Now() function creates the familiar time.Time variable, whose values can be formatted arbitrarily. Extracting Parameters The time.Time variable consists of several parameters that together form the date and time: Year Month Day Weekday Hour Minute Second Nanosecond Time zone Go provides a set of methods to extract and modify each of these parameters. 
Extracting Parameters

The time.Time variable consists of several parameters that together form the date and time:

Year
Month
Day
Weekday
Hour
Minute
Second
Nanosecond
Time zone

Go provides a set of methods to extract and modify each of these parameters. Most often, you will need to retrieve specific parameters from an already created time variable:

package main

import (
    "fmt"
    "reflect"
    "time"
)

func main() {
    timeLayout := "2006-01-02 15:04:05"
    timeValue := "2024-11-15 12:45:20"
    timeVariable, _ := time.Parse(timeLayout, timeValue)

    fmt.Println("Year:", timeVariable.Year())
    fmt.Println("Month:", timeVariable.Month())
    fmt.Println("Day:", timeVariable.Day())
    fmt.Println("Weekday:", timeVariable.Weekday())
    fmt.Println("Hour:", timeVariable.Hour())
    fmt.Println("Minute:", timeVariable.Minute())
    fmt.Println("Second:", timeVariable.Second())
    fmt.Println("Nanosecond:", timeVariable.Nanosecond())
    fmt.Println("Time zone:", timeVariable.Location())
    fmt.Println("")

    fmt.Println("Year (type):", reflect.TypeOf(timeVariable.Year()))
    fmt.Println("Month (type):", reflect.TypeOf(timeVariable.Month()))
    fmt.Println("Day (type):", reflect.TypeOf(timeVariable.Day()))
    fmt.Println("Weekday (type):", reflect.TypeOf(timeVariable.Weekday()))
    fmt.Println("Hour (type):", reflect.TypeOf(timeVariable.Hour()))
    fmt.Println("Minute (type):", reflect.TypeOf(timeVariable.Minute()))
    fmt.Println("Second (type):", reflect.TypeOf(timeVariable.Second()))
    fmt.Println("Nanosecond (type):", reflect.TypeOf(timeVariable.Nanosecond()))
    fmt.Println("Time zone (type):", reflect.TypeOf(timeVariable.Location()))
}

The console output of this script will be:

Year: 2024
Month: November
Day: 15
Weekday: Friday
Hour: 12
Minute: 45
Second: 20
Nanosecond: 0
Time zone: UTC

Year (type): int
Month (type): time.Month
Day (type): int
Weekday (type): time.Weekday
Hour (type): int
Minute (type): int
Second (type): int
Nanosecond (type): int
Time zone (type): *time.Location

Thus, you can individually retrieve specific information about the date and time without needing to format the output before displaying it in the console. Note the types of the retrieved values: all of them have the int type except for a few:

Month (time.Month)
Weekday (time.Weekday)
Time zone (*time.Location)

The last one (time zone) is a pointer.

Modification, Addition, and Subtraction

Modification

You cannot change the parameters of date and time directly in an already created time.Time variable. However, you can recreate the variable with updated values, thus changing the existing date and time:

package main

import (
    "fmt"
    "time"
)

func main() {
    timeVariable := time.Now()
    fmt.Println(timeVariable)

    // year, month, day, hour, minute, second, nanosecond, time zone
    timeChanged := time.Date(
        timeVariable.Year(),
        timeVariable.Month(),
        timeVariable.Day(),
        timeVariable.Hour()+14,
        timeVariable.Minute(),
        timeVariable.Second(),
        timeVariable.Nanosecond(),
        timeVariable.Location(),
    )
    fmt.Println(timeChanged)
}

When running this script, the following output will appear:

2024-11-28 14:35:05.287957345 +0000 UTC m=+0.000039131
2024-11-29 04:35:05.287957345 +0000 UTC

In this example, 14 hours were added to the current time. This way, you can selectively update the time values in an existing time.Time variable.
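The rebuild above calls each getter separately. As a shorter alternative, the Date() and Clock() methods return several parameters at once; a minimal sketch (the example date is arbitrary):

package main

import (
    "fmt"
    "time"
)

func main() {
    timeVariable := time.Date(2024, 11, 20, 12, 30, 45, 0, time.UTC)

    // Date() returns the year, month, and day in a single call.
    year, month, day := timeVariable.Date()
    fmt.Println(year, month, day) // 2024 November 20

    // Clock() returns the hour, minute, and second in a single call.
    hour, minute, second := timeVariable.Clock()
    fmt.Println(hour, minute, second) // 12 30 45
}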
Change by Time Zone

Sometimes, it is necessary to determine what the specified date and time will be in a different time zone. For this, Go provides a special method:

package main

import (
    "fmt"
    "time"
)

func main() {
    locationFirst, _ := time.LoadLocation("Europe/Nicosia")
    timeFirst := time.Date(2000, 1, 1, 0, 0, 0, 0, locationFirst)
    fmt.Println("Time (Europe/Nicosia)", timeFirst)

    locationSecond, _ := time.LoadLocation("America/Chicago")
    timeSecond := timeFirst.In(locationSecond) // changing the time zone and converting the date and time based on it
    fmt.Println("Time (America/Chicago)", timeSecond)
}

The result of running the script will produce the following console output:

Time (Europe/Nicosia) 2000-01-01 00:00:00 +0200 EET
Time (America/Chicago) 1999-12-31 16:00:00 -0600 CST

Thus, we obtain new date and time values, updated according to the newly specified time zone.

Addition and Subtraction

Go does not have separate methods for date and time addition. Instead, you can add time intervals to an already created time.Time variable:

package main

import (
    "fmt"
    "time"
)

func main() {
    // current time
    timeVariable := time.Now()
    fmt.Println(timeVariable)

    // adding 5 days (24 hours * 5 days = 120 hours)
    timeChanged := timeVariable.Add(120 * time.Hour)
    fmt.Println(timeChanged)

    // subtracting 65 days (24 hours * 65 days = 1560 hours)
    timeChanged = timeVariable.Add(-1560 * time.Hour)
    fmt.Println(timeChanged)
}

Running this script will give the following output:

2024-12-05 08:42:01.927334604 +0000 UTC m=+0.000035141
2024-12-10 08:42:01.927334604 +0000 UTC m=+432000.000035141
2024-10-01 08:42:01.927334604 +0000 UTC m=-5615999.999964859

Note that when subtracting a sufficient number of days from the time.Time variable, the month changes as well. Also, the time.Hour value actually has a special type, time.Duration:

package main

import (
    "fmt"
    "reflect"
    "time"
)

func main() {
    fmt.Println(reflect.TypeOf(time.Hour))
    fmt.Println(reflect.TypeOf(120 * time.Hour))
}

The output after running the script will be:

time.Duration
time.Duration

However, modifying the date and time by adding or subtracting a large number of hours is not very readable. In some cases, it is better to use more advanced methods for changing the time:

package main

import (
    "fmt"
    "time"
)

func main() {
    timeVariable := time.Now()
    fmt.Println(timeVariable)

    // year, month, day
    timeChanged := timeVariable.AddDate(3, 2, 1)
    fmt.Println(timeChanged)

    // day
    timeChanged = timeChanged.AddDate(0, 0, 15)
    fmt.Println(timeChanged)

    // year, month
    timeChanged = timeChanged.AddDate(5, 1, 0)
    fmt.Println(timeChanged)

    // -year, -day
    timeChanged = timeChanged.AddDate(-2, 0, -10)
    fmt.Println(timeChanged)
}

After running this script, the output will look like this:

2024-11-28 17:51:45.769245873 +0000 UTC m=+0.000024921
2028-01-29 17:51:45.769245873 +0000 UTC
2028-02-13 17:51:45.769245873 +0000 UTC
2033-03-13 17:51:45.769245873 +0000 UTC
2031-03-03 17:51:45.769245873 +0000 UTC
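One subtlety worth keeping in mind: AddDate normalizes its result the same way time.Date does, so arithmetic near the end of a month can land in the next month. A small illustration (the date below is an arbitrary example):

package main

import (
    "fmt"
    "time"
)

func main() {
    // October 31 plus one month would be November 31, which does not exist,
    // so the result is normalized to December 1.
    timeVariable := time.Date(2024, 10, 31, 0, 0, 0, 0, time.UTC)
    fmt.Println(timeVariable.AddDate(0, 1, 0)) // 2024-12-01 00:00:00 +0000 UTC
}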
Subtraction

Unlike addition, Go has specialized methods for subtracting one time.Time variable from another.

package main

import (
    "fmt"
    "reflect"
    "time"
)

func main() {
    timeFirst := time.Date(2024, 6, 14, 0, 0, 0, 0, time.Local)
    timeSecond := time.Date(2010, 3, 26, 0, 0, 0, 0, time.Local)

    timeDeltaSub := timeFirst.Sub(timeSecond) // timeFirst - timeSecond
    timeDeltaSince := time.Since(timeFirst)   // time.Now() - timeFirst
    timeDeltaUntil := time.Until(timeFirst)   // timeFirst - time.Now()

    fmt.Println("timeFirst - timeSecond =", timeDeltaSub)
    fmt.Println("time.Now() - timeFirst =", timeDeltaSince)
    fmt.Println("timeFirst - time.Now() =", timeDeltaUntil)
    fmt.Println("")

    fmt.Println(reflect.TypeOf(timeDeltaSub))
    fmt.Println(reflect.TypeOf(timeDeltaSince))
    fmt.Println(reflect.TypeOf(timeDeltaUntil))
}

Console output:

timeFirst - timeSecond = 124656h0m0s
time.Now() - timeFirst = 4029h37m55.577746026s
timeFirst - time.Now() = -4029h37m55.577746176s

time.Duration
time.Duration
time.Duration

As you can see, the result of the subtraction is the familiar time.Duration type. In fact, the main function for finding the difference is the time.Time.Sub() method, and the other two are just shorthands built on top of it:

package main

import (
    "fmt"
    "time"
)

func main() {
    timeVariable := time.Date(2024, 6, 14, 0, 0, 0, 0, time.Local)

    fmt.Println(time.Now().Sub(timeVariable))
    fmt.Println(time.Since(timeVariable))
    fmt.Println("")

    fmt.Println(timeVariable.Sub(time.Now()))
    fmt.Println(time.Until(timeVariable))
}

Console output:

4046h10m53.144212707s
4046h10m53.144254987s

-4046h10m53.144261117s
-4046h10m53.144267597s

You can see that the results of these functions are practically identical:

time.Since(timeVariable) is equivalent to time.Now().Sub(timeVariable)
time.Until(timeVariable) is equivalent to timeVariable.Sub(time.Now())
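A common practical use of these functions is measuring how long a piece of code takes: capture time.Now() before the work and call time.Since() afterwards. A minimal sketch (the time.Sleep call simply stands in for real work):

package main

import (
    "fmt"
    "time"
)

func main() {
    start := time.Now()

    // Simulate some work to measure.
    time.Sleep(150 * time.Millisecond)

    elapsed := time.Since(start) // a time.Duration
    fmt.Println("Elapsed:", elapsed)
}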
Time Durations

Individual time intervals (durations) in the time package are represented by the special time.Duration type. Unlike time.Time, a duration stores not a full date and time but a time interval. With durations, you can perform some basic operations that modify their time parameters.

Parsing Durations

A duration can be defined explicitly by parsing a string containing time parameters:

package main

import (
    "fmt"
    "time"
)

func main() {
    // hours, minutes, seconds
    durationHMS, _ := time.ParseDuration("4h30m20s")
    fmt.Println("Duration (HMS):", durationHMS)

    // minutes, seconds
    durationMS, _ := time.ParseDuration("6m15s")
    fmt.Println("Duration (MS):", durationMS)

    // hours, minutes
    durationHM, _ := time.ParseDuration("2h45m")
    fmt.Println("Duration (HM):", durationHM)

    // hours, seconds
    durationHS, _ := time.ParseDuration("2h10s")
    fmt.Println("Duration (HS):", durationHS)

    // hours, minutes, seconds, milliseconds, microseconds, nanoseconds
    durationFULL, _ := time.ParseDuration("6h50m40s30ms4µs3ns")
    fmt.Println("Full Duration:", durationFULL)
}

Output of the script:

Duration (HMS): 4h30m20s
Duration (MS): 6m15s
Duration (HM): 2h45m0s
Duration (HS): 2h0m10s
Full Duration: 6h50m40.030004003s

Note the last duration, which contains all possible time parameters in decreasing order of magnitude: hours, minutes, seconds, milliseconds, microseconds, and nanoseconds. During parsing, each parameter is specified using the following keywords:

Hours — h
Minutes — m
Seconds — s
Milliseconds — ms
Microseconds — µs
Nanoseconds — ns

Moreover, the order in which the duration parameters are specified does not affect the result:

package main

import (
    "fmt"
    "time"
)

func main() {
    duration, _ := time.ParseDuration("7ms20s4h30m")
    fmt.Println("Duration:", duration)
}

Terminal output:

Duration: 4h30m20.007s

Formatting Durations

In Go, we can represent the same duration in different units of measurement:

package main

import (
    "fmt"
    "reflect"
    "time"
)

func main() {
    duration, _ := time.ParseDuration("4h30m20s")
    fmt.Println("Duration:", duration)
    fmt.Println("")

    fmt.Println("In hours:", duration.Hours())
    fmt.Println("In minutes:", duration.Minutes())
    fmt.Println("In seconds:", duration.Seconds())
    fmt.Println("In milliseconds:", duration.Milliseconds())
    fmt.Println("In microseconds:", duration.Microseconds())
    fmt.Println("In nanoseconds:", duration.Nanoseconds())
    fmt.Println("")

    fmt.Println(reflect.TypeOf(duration.Hours()))
    fmt.Println(reflect.TypeOf(duration.Minutes()))
    fmt.Println(reflect.TypeOf(duration.Seconds()))
    fmt.Println(reflect.TypeOf(duration.Milliseconds()))
    fmt.Println(reflect.TypeOf(duration.Microseconds()))
    fmt.Println(reflect.TypeOf(duration.Nanoseconds()))
}

Output of the script:

Duration: 4h30m20s

In hours: 4.5055555555555555
In minutes: 270.3333333333333
In seconds: 16220
In milliseconds: 16220000
In microseconds: 16220000000
In nanoseconds: 16220000000000

float64
float64
float64
int64
int64
int64

As you can see, the values for hours, minutes, and seconds are of type float64, while the rest are of type int64.

Conclusion

This guide covered the basic functions for working with dates and times in the Go programming language, all of which are part of the built-in time package. Thus, Go allows you to:

Format dates and times
Convert dates and times
Set time zones
Extract specific date and time parameters
Set specific date and time parameters
Add and subtract dates and times
Execute code based on specific time settings

For more detailed information on working with the time package, refer to the official Go documentation. In addition, you can deploy Go applications (such as Beego and Gin) on our app platform.
28 January 2025 · 19 min to read
Linux

How to Extract or Unzip .tar.gz Files in Linux

Exploring the Linux landscape often means dealing with several file formats, especially compressed ones like .tar.gz. This format is popular because it combines multiple documents and folders into one compressed archive. Whether you're obtaining software packages, organizing project backups, or overseeing data storage, knowing how to work with this format is essential.

Throughout this guide, we will examine various strategies for unpacking .tar.gz archives in Linux. From the versatile tar command to the more straightforward gzip and gunzip commands, we'll cover everything. We'll also look at complementary tools and graphical interfaces for those who prefer a more visual approach.

Why Choose .tar.gz?

Listed below are a few key reasons why you might opt for this format:

Space Efficiency: The combination of tar and gzip compresses large amounts of data efficiently, making better use of disk space.
Simplified Data Management: Merging several documents and directories into a single archive makes data easier to manage and storage better organized.
Easy Distribution: This widely adopted format ensures seamless transfers between systems without compatibility hurdles.
Preservation of Metadata: The tar utility maintains file permissions and timestamps, making it ideal for backups and system migrations.

Creating a .tar.gz File

Before jumping into extraction, it's helpful to know how to create an archive. This makes it easier to combine and compress many documents into one neat, smaller package. Here is the standard syntax for creation:

tar -czf archive-name.tar.gz file1 file2 directory1

Where:

c: Creates a new archive.
z: Compresses the archive with gzip.
f: Assigns a specific name to the archive.

For instance, to compress report1.txt, report2.txt, and the directory projects into a file called backup.tar.gz, run:

tar -czf backup.tar.gz report1.txt report2.txt projects

For verification, list the directory contents via:

ls

Examining .tar.gz Content

To examine the items without extracting them, use a command that lists every compressed item. This is particularly handy for verifying contents before unpacking. To list .tar.gz content:

tar -ztvf archive-name.tar.gz

For instance, to list the contents of backup.tar.gz:

tar -ztvf backup.tar.gz

Extracting .tar.gz in Linux

Linux offers a variety of extraction methods for these archives, each with its own advantages. Here are comprehensive instructions for using various commands and tools.

Method 1: Via the tar Utility

The tar command is a powerful and flexible utility designed to manage archives, offering functions to create, extract, and display their contents. It is your primary tool for handling .tar.gz files efficiently.

Basic Extraction

To unpack a .tar.gz archive directly into the current directory, run:

tar -xvzf archive-name.tar.gz

Where:

x: Extracts the archive's contents.
v: Verbose mode, which displays each file as it is unpacked.
z: Decompresses the data.
f: Specifies the archive name.

To unpack backup.tar.gz, run:

tar -xvzf backup.tar.gz

Extracting to a Specific Directory

To place the unpacked files in a different location, use the -C option to indicate the target directory. This is handy when you need your extracted files neatly arranged in a designated location.
To unpack the archive into a chosen directory, run:

tar -xvzf archive-name.tar.gz -C /path/to/destination

For instance, to unpack backup.tar.gz into the Documents folder:

tar -xvzf backup.tar.gz -C /home/user/Documents

Extracting Specific Content

To retrieve certain items from the archive, simply provide their names. This lets you pinpoint and extract just the data you need. Here's the format:

tar -xvzf archive-name.tar.gz file1 file2

For example, to retrieve report1.txt and report2.txt from backup.tar.gz:

tar -xvzf backup.tar.gz report1.txt report2.txt

Extracting Contents with a Specific Extension

To retrieve items with a particular extension, the --wildcards option proves quite useful. It lets you filter and extract files based on their names or extensions. Here's the syntax:

tar -xvzf archive-name.tar.gz --wildcards '*.txt'

For instance, to extract all .txt files from backup.tar.gz:

tar -xvzf backup.tar.gz --wildcards '*.txt'

Method 2: Via the gzip Utility

gzip is a tool primarily used for compressing data, but it can also decompress archives with the -d option. This method is straightforward and effective for handling .gz files. To unzip a .gz file, run:

gzip -d archive-name.tar.gz

For instance, to decompress backup.tar.gz:

gzip -d backup.tar.gz

After decompressing, extract the contents via:

tar -xf archive-name.tar

For instance:

tar -xf backup.tar

Method 3: Via the gunzip Utility

gunzip is a tool specifically designed for decompressing .gz files; it is effectively an alias for gzip -d. This command is simple to use and directly addresses the need to decompress .gz files. To decompress, run:

gunzip archive-name.tar.gz

For example:

gunzip backup.tar.gz

After decompressing, unpack the contents with:

tar -xf archive-name.tar

For example:

tar -xf backup.tar

Method 4: Via a GUI

For users who favor a graphical interface, various Linux desktop environments include file managers equipped with extraction tools. This method is user-friendly and ideal for beginners.

Extracting Contents to the Current Directory

Find the .tar.gz file in your file manager.
Right-click on it and choose "Extract."

Extracting Contents to a Specific Directory

Locate the .tar.gz file in your file manager.
Right-click on it and select "Extract to…".
Choose the destination directory.

Handling Large Archives with Parallel Decompression

When handling massive archives, pigz (a parallel implementation of gzip) can significantly speed up decompression by using several CPU cores. Here's how to use it.

Install pigz (on Debian or Ubuntu) via:

sudo apt install pigz

To decompress a .gz file with pigz, run:

pigz -d archive-name.tar.gz

After decompression, extract the resulting .tar file with:

tar -xf archive-name.tar

Utilizing Compression with Encryption

For added security, you can encrypt your .tar.gz archive. GPG (GNU Privacy Guard) can be used to encrypt files, ensuring that sensitive information remains protected during storage and transfer.

Encrypting an Archive

For encryption, use GPG with the following command:

gpg -c archive-name.tar.gz

Decrypting an Archive

To decrypt an encrypted archive, run:

gpg -d archive-name.tar.gz.gpg > archive-name.tar.gz

Tips for Content Extraction in Linux

Back Up Important Files: Always create backups before unpacking multiple files to avoid data loss.
Check Permissions: Ensure you have the required permissions to write files in the destination directory.
Use Wildcards Carefully: Be cautious with wildcards to avoid extracting unintended files.
Troubleshooting Frequent Issues with Extraction

Here are a few common extraction difficulties and ways to address them.

Corrupted Archives

If an archive is corrupted, try the --ignore-zeros option to extract what is still readable:

tar -xvzf archive-name.tar.gz --ignore-zeros

Insufficient Permissions

Confirm that you have the proper permissions to access and modify the files. Use sudo if required:

sudo tar -xvzf archive-name.tar.gz -C /path/to/destination

Disk Space Issues

Make sure you have enough free disk space to unpack the archive. Check disk usage with:

df -h

Conclusion

Unpacking .tar.gz files in Linux is a simple task, with multiple methods to suit different user preferences. Whether you use tar, gzip, gunzip, or a GUI, Linux equips you with efficient tools to handle compressed data seamlessly.

Whether you're handling software packages, arranging backups, or managing data storage, mastering the creation and extraction of these files keeps your workflow streamlined and makes data management a breeze.
28 January 2025 · 7 min to read

Tailored cloud server
solutions for every need

General-purpose cloud servers for web hosting

Ideal for websites, content management systems, and basic business applications, cloud web servers provide balanced performance with reliable uptime. These servers are designed to handle moderate traffic while keeping your website or application responsive.

High-performance servers for cloud computing


For businesses needing powerful resources for tasks like AI, machine learning, or data analysis, our high-performance cloud servers are built to process large datasets efficiently. Equipped with 3.3 GHz processors and high-speed NVMe storage, they ensure smooth execution of even the most demanding applications.

Storage-optimized cloud servers for data-driven operations

Need to store and retrieve large amounts of data? Our cloud data servers offer vast capacity with superior read/write speeds. These servers are perfect for databases, large-scale backups, or big data management, ensuring fast access to your data when you need it.

Memory-optimized servers for heavy workloads


These servers are built for applications that require high memory capacity, such as in-memory databases or real-time analytics. With enhanced memory resources, they ensure smooth handling of large datasets, making them ideal for businesses with memory-intensive operations.

In-depth answers to your questions

Which operating systems are supported on your cloud servers?

Choose popular server operating systems and deploy them in one click: from Ubuntu to CentOS. Licensed operating systems are available directly in the control panel.

How can I get started with a cloud server? Is there a straightforward registration process?

Register with Hostman and choose the plan that suits your needs. You can always add processing power and purchase additional services later if needed.

You don't need a development team to get started: you can do everything yourself in a convenient control panel. Even a person with no technical background can easily work with it.

What is the minimum and maximum resource allocation (CPU, RAM, storage) available for cloud servers?

The starter package includes a 1-core 1.28 GHz CPU, 1 GB RAM, a 15 GB fast NVMe SSD, a dedicated IP address, and a 200 Mbps channel. For demanding users, there is a powerful 8×3.3 GHz server with 16 GB RAM, a 160 GB fast NVMe SSD, a dedicated IP address, and a 200 Mbps channel. Alternatively, you can always configure an even more powerful server yourself.

What scaling options are available for cloud servers?

You can easily add power, bandwidth, and channel width with just a few clicks directly in the control panel. With Hostman, you can enhance all the important characteristics of your server with hourly billing.

How much does a cloud server cost, and what is the pricing structure like?

Cloud servers are billed hourly based on the configuration you choose, so you only pay for the resources you actually use. You can add capacity, bandwidth, and channel width at any time with a few clicks right in the control panel, and the billing adjusts accordingly.

Is there a trial or testing period available for cloud servers before purchasing?

Contact the friendly Hostman support team, and they will offer you comfortable conditions for test-driving our cloud server — and will transfer your current projects to the cloud for free.

What security measures and data protection are in place for cloud servers?

Cloud servers are hosted in a Tier III data center with a high level of reliability. Hostman guarantees 99.99% availability according to the SLA, with downtime not exceeding 52 minutes per year. Additionally, data is backed up for extra security, and the communication channel is protected against DDoS attacks.

What level of support is provided for cloud servers?

Hostman support is always available, 7 days a week, around the clock. We respond to phone calls within a minute and chat inquiries within 15 minutes. Your questions will always be handled by knowledgeable staff with sufficient authority and technical background.

Can I install my own software on a cloud server?

Yes, absolutely! You can deploy any software, operating systems, and images you desire on your server. Everything is ready for self-configuration.

What backup and data recovery methods are available for cloud servers?

Hostman takes care of the security of your data and backs up important information. Additionally, you can utilize the automatic backup service for extra safety and reliability.

Is there a guaranteed Service Level Agreement (SLA) for cloud server availability?

Hostman guarantees a 99.99% level of virtual server availability according to the SLA (Service Level Agreement).

Which data center locations are available for hosting cloud servers?

Our servers are located in modern Tier III data centers in the European Union and the United States.

Can I create and manage multiple cloud servers under a single account?

Certainly, you can launch multiple cloud servers and other services (such as managed databases or VPS servers) within a single account.

What is the deployment time for cloud servers after ordering?

With Hostman, you'll get a service that is easy and quick to manage on your own. New cloud servers can be launched almost instantly from the control panel, and the necessary software can be installed within minutes.

What monitoring and notification capabilities are offered for cloud servers?

Hostman specialists monitor the technical condition of servers and software around the clock. You won't have to worry about server availability — it will simply work, always.

Can I modify the specifications of my cloud server (e.g., increase RAM) after creation?

You can easily configure your server by adding resources directly in the control panel. And if you need to switch to lower-tier plans, you can rely on Hostman support — our specialists will handle everything for you.

Do you have questions,
comments, or concerns?

Our professionals are available to assist you at any moment,
whether you need help or are just unsure of where to start.
Email us
Hostman's Support