Cloud Service Provider
for Developers and Teams

We make it simple to get started in the cloud and scale up as you grow —
whether you have one virtual machine or ten thousand.
99.9% Uptime
Our cloud service provides the ultimate in server dependability and stability.
Money-back Guarantee
Experience our high-speed cloud services without any risk, assured by our money-back guarantee.
Easy to Deploy
Manage your services with ease using our intuitive control panel, where deploying software is a matter of minutes.
Reliable and Available
Select from 6 datacenter regions around the world based on latency or deploy across regions for redundancy.

Robust cloud services for every demand

See all Products

Hostman's commitment to simplicity
and budget-friendly solutions

Plan sizes: 1 CPU, 2 CPU, 4 CPU, 8 CPU. The comparison below uses the entry configuration: 1 CPU, 1 GB RAM, 25 GB SSD.

|                            | Hostman       | DigitalOcean                            | Google Cloud                   | AWS                             | Vultr                              |
|----------------------------|---------------|-----------------------------------------|--------------------------------|---------------------------------|------------------------------------|
| Price                      | $4            | $6                                      | $6.88                          | $7.59                           | $5                                 |
| Tech support               | Free          | $24/mo                                  | $29/mo + 3% of monthly charges | $29/mo or 3% of monthly charges | Free                               |
| Backups                    | From $0.07/GB | 20% or 30% higher base daily/weekly fee | $0.03/GB per mo                | $0.05/GB per mo                 | 20% higher base monthly/hourly fee |
| Bandwidth                  | Free          | $0.01 per GB                            | $0.01 per GB                   | $0.09/GB for the first 10 TB/mo | $0.01 per GB                       |
| Live chat support          |               |                                         |                                |                                 |                                    |
| Avg. support response time | <15 min       | <24 hours                               | <4 hours                       | <12 hours                       | <12 hours                          |
Anup k.
Associate Cloud Engineer
5.0 out of 5

"Hostman Comprehensive Review of Simplicity and Potential"

It been few years that I have been working on Cloud and most of the cloud service...
Mansur H.
Security Researcher
5.0 out of 5

"A perfect fit for everything cloud services!"

Hostman's seemless integration, user-friendly interface and its robust features (backups, etc) makes it much easier...
Adedeji E.
DevOps Engineer
5.0 out of 5

"Superb User Experience"

For me, Hostman is exceptional because of it's flexibility and user-friendliness. The platform's ability to offer dedicated computing resources acr...
Yudhistira H.
Mid-Market(51-1000 emp.)
5.0 out of 5

"Streamlined Cloud Excellence!"

What I like best about Hostman is their exceptional speed of deployment, scalability, and robust security features. Their...
Mohammad Waqas S.
Biotechnologist and programmer
5.0 out of 5

"Seamless and easy to use Hosting Solution for Web Applications"

From the moment I signed up, the process has been seamless and straightforward...
Mohana R.
Senior Software Engineer
5.0 out of 5

"Availing Different DB Engine Services Provided by Hostman is Convenient for my Organization usecases"

Hostman manages the cloud operations...
Faizan A.
5.0 out of 5

"Hostman is a great fit for me"

Hostman is a great fit for me. What do you like best about Hostman? It was very easy to deploy my application and create database, I didn't have
Adam M.
5.0 out of 5

"Perfect website"

This website is extremely user friendly and easy to use. I had no problems so didn't have to contact customer support. Really good website and would recommend to others.
Anup K.
4.0 out of 5

"Simplifying Cloud Deployment with Strengths and Areas for Growth"

What I like best about Hostman is its unwavering commitment to simplicity...
Naila J.
5.0 out of 5

"Streamlined Deployment with Room for Improvement"

Hostman impresses with its user-friendly interface and seamless deployment process, simplifying web application hosting...

Trusted by 500+ companies and developers worldwide

Recognized as a Top Cloud Hosting Provider by HostAdvice

Hostman review

Deploy a cloud server
in just a few clicks

Set up your cloud servers at Hostman in minutes and without setup fees. Customize them for your business with a quick selection of region, IP range, and other details, ensuring seamless integration and data flow.

Code locally, launch worldwide

Our servers, certified with ISO/IEC 27001, are located in Tier 3 data centers across the US, Europe, and Asia.
Hostman's Locations
🇺🇸 San Francisco
🇺🇸 San Jose
🇺🇸 Texas
🇺🇸 New York
🇳🇱 Amsterdam
🇳🇬 Lagos
🇩🇪 Frankfurt
🇵🇱 Gdansk
🇦🇪 Dubai
🇸🇬 Singapore

Latest News

Node.js

How to Update Node.js Version

Node.js is a robust JavaScript runtime environment powered by Chrome's V8 engine. It lets developers build scalable network applications with ease, and its event-driven, non-blocking architecture makes it ideal for real-time applications. Updating Node.js regularly is crucial for the stability and security of projects of any scale: each release brings new features, security fixes, and performance gains. This guide walks through several methods to update Node.js, from package managers to manual installation.

Method 1: Via NVM

Node Version Manager (NVM) is widely used for managing multiple Node.js versions. Its flexibility and simple interface make it popular among developers, and it allows easy switching between Node versions, which is ideal for projects that require specific releases.

Why Use NVM?

Flexibility: Easily switch from one Node version to another.
Convenience: Install, update, and manage multiple versions effortlessly.
Isolation: Keeps versions separate to minimize conflicts.

Step-by-Step Guide

Install NVM. Open a terminal and run:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash

Then configure NVM in your shell profile (e.g., ~/.bashrc):

export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This loads nvm bash_completion

Confirm the NVM installation with:

nvm --version

Update Node. List the available versions first:

nvm ls-remote

Install the latest release:

nvm install node

Set the newly installed version as the default:

nvm alias default node

Validate the installation:

node -v

Update npm. To guarantee performance and safety, keep npm up to date alongside Node:

npm install -g npm

Validate the updated npm version:

npm -v

Switching Node Versions. List the installed versions:

nvm ls

Switch to another version, inserting the required version number in place of <version>:

nvm use <version>

Method 2: Via Node.js Binary Packages

Installing directly from the official Node.js binaries is hassle-free and guarantees the specific release straight from the source.

Why Use Binary Packages?

Simplicity: Direct installation without additional tools.
Consistency: Backed by the official Node.js development team.
Reliability: Guarantees the exact release comes from the official source.

Step-by-Step Guide

Obtain the binary package. Visit the Node.js website and download the binary package compatible with your OS. Then install Node.js following the instructions for your OS:

Windows: Run the installer and follow the on-screen steps.
macOS: Open the .pkg file and continue through the setup process.
Linux: Unpack the tarball and copy the contents to /usr/local:

tar -xvf node-v<version>-linux-x64.tar.xz
sudo cp -r node-v<version>-linux-x64/bin /usr/local/

Next, open the .bashrc file:

nano ~/.bashrc

Then insert the line below:

export PATH="/usr/local/bin:$PATH"

Save the file and reload the settings with source:

source ~/.bashrc

Verify the installation:

node -v

Check the npm release:

npm -v

Method 3: Via Package Managers

Homebrew and apt simplify Node.js installation and updates by handling dependency and version management.

Why Use Package Managers?

Ease of Use: Simplifies installation and updates.
Automation: Handles dependencies and version management.
Integration: Integrates easily with other software and systems.

Step-by-Step Guide

Via Homebrew (macOS). Follow the instructions on the Homebrew website if it is not already installed, then run:

brew install node

To upgrade an existing installation:

brew upgrade node

Validate the installation:

node -v

Via apt (Ubuntu/Debian). Add the NodeSource PPA to your system, replacing 23.x with the desired release:

curl -fsSL https://deb.nodesource.com/setup_23.x | sudo -E bash -

Install Node.js:

sudo apt install nodejs

Check that the Node release is correct:

node -v

Method 4: Via Node.js Version Manager (n)

The n package offers an efficient alternative to NVM for managing Node.js versions: installing, switching, and maintaining different releases on one system.

Why Use n?

Efficiency: Fast and lightweight.
Simplicity: User-friendly and requires minimal commands.
Control: Complete control over the installed versions.

Step-by-Step Guide

Install the n package globally with npm:

npm install -g n

Install the latest Node release:

sudo n latest

Or install a specific release, changing <version> to the required release number:

sudo n <version>

Confirm the version:

node -v

Method 5: Manual Installation

Manual installation suits advanced users who want complete control over the setup.

Why Use Manual Installation?

Control: Complete authority over the installation workflow.
Customization: Customize the build and installation settings.

Step-by-Step Guide

Install the essential build dependencies:

sudo apt install build-essential gcc g++ make

Download the source code from the official website:

wget https://nodejs.org/download/release/v23.6.0/node-v23.6.0.tar.gz

Extract the source code, replacing <version> with the version number:

tar -xvf node-v<version>.tar.gz

Navigate to the extracted directory:

cd node-v<version>

Configure and compile the source code:

./configure
make

If you encounter dependency errors while running these commands, install the missing packages with your default package manager. Then install Node:

sudo make install

Confirm the version:

node -v

Additional Resources

For in-depth information on Node.js updates and management, consider these resources:

Documentation: Comprehensive resource for all things Node.js.
NVM GitHub Repository: Extensive support and information for NVM.
n GitHub Repository: Information on using the n package for Node version control.
Release Notes: Stay informed about the newest enhancements and updates.

Best Practices

Regular Checks: Regularly check for Node updates so your environment stays current.
Backup: Back up your projects and important data before updating to safeguard against loss during the upgrade.
Testing: After updating, test your applications thoroughly to verify they work correctly with the new Node release.
Documentation: Keep project documentation in sync with the Node.js release in use and note any changes from the update.

Fixing Common Problems

Running into problems while updating Node.js? Here are frequent issues and effective fixes.

Issue 1: Version Not Changing. If the Node version stays the same after updating, clear the npm cache and then reinstall Node using your preferred method:

npm cache clean -f

Issue 2: Permission Errors. If permission problems occur, run the command with sudo:

sudo npm install -g n

Issue 3: Dependency Conflicts. Updating Node can conflict with dependencies in your current projects. To resolve this, use nvm to pick the necessary release for each project (a concrete sketch follows the conclusion), and update your project's dependencies to align with the new Node.js release.

Conclusion

There are various methods to update Node.js, each suited to specific needs and preferences: NVM, binary packages, package managers, the n package, or manual installation. Whichever you choose, frequent updates keep your environment secure and performant, giving you the latest features, security enhancements, and performance boosts. In addition, check out our platform as a service to deploy various Node.js frameworks, such as Express, Fastify, Hapi, and Nest.
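One practical way to apply the fix for Issue 3 is to pin a Node version per project with a .nvmrc file. A minimal sketch, assuming nvm from Method 1 is installed (the version number here is just an example):

echo "20.11.0" > .nvmrc    # pin the project's Node version (example value)
nvm install                # reads .nvmrc and installs that version if missing
nvm use                    # switches the current shell to the pinned version
node -v                    # should print v20.11.0

With this file committed, anyone working on the project gets the same Node version by running nvm install and nvm use in the project root.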
23 January 2025 · 7 min to read
Linux

How to Rename Files in Linux

Picture yourself as a Linux expert, skillfully navigating files and directories. One day you need to rename a large number of files: organizing documents, renaming vacation photos, or managing code files for a project. Renaming each file manually seems daunting and time-consuming. What do you do? Linux offers several powerful tools that make this task quick and easy, whether you are dealing with a single file or a directory full of them. Here, we'll explore a range of ways to rename files in Linux.

Method 1: Via the mv Command

Renaming files in Linux is usually done with the mv command, which is both simple and widely adopted. Besides renaming, it can also move files. The primary syntax is:

mv [options] source target

Where:

source is the existing name or path of the file or directory you aim to rename or move.
target is the new name or destination path for the file or directory.

Changing a Filename with mv

Launch your terminal application and enter the directory containing the file:

cd /path/to/directory

Use mv to rename the file, replacing oldfilename with the current name and newfilename with the new one:

mv oldfilename newfilename

List the directory contents to confirm the change:

ls

Other Options

To prevent existing files from being overwritten:

mv -n oldfilename newfilename

To move a file to another directory while renaming it:

mv oldfilename /newpath/newfilename

To rename a directory:

mv olddirectory newdirectory

Method 2: Via the rename Command

For bulk renaming, rename surpasses mv in functionality. It can change multiple filenames in a single command and supports complex patterns through regular expressions. The standard format is:

rename [options] 's/oldpattern/newpattern/' files

Where:

's/oldpattern/newpattern/' is a substitution pattern: oldpattern is the text to replace, and newpattern is the text to substitute in.
files are the files to apply the rename operation to.

rename Installation on Linux

Some Linux distributions don't ship this utility pre-installed; use the package manager to install it.

On Debian/Ubuntu:

sudo apt install rename

On CentOS/RHEL:

sudo yum install prename

Changing a Filename with rename

Launch the terminal and go to the target folder:

cd /path/to/directory

Run rename with a regex pattern to adjust multiple filenames, replacing oldpattern with the pattern you want to change and newpattern with the replacement:

rename 's/oldpattern/newpattern/' *

For example, to change all .txt filenames to .md in a directory:

rename 's/\.txt$/\.md/' *.txt

Additional rename Options

Add a prefix to filenames:

rename 's/^/prefix_/' *

Append a suffix to filenames:

rename 's/$/_suffix/' *

Display each filename as it is renamed:

rename -v 's/oldpattern/newpattern/' *

Rename even if the target file already exists:

rename -f 's/oldpattern/newpattern/' *

Preview the actions without making any changes:

rename -n 's/oldpattern/newpattern/' *

Method 3: Via Bash Script

For more advanced renaming, consider a bash script. This technique enables sophisticated filename changes and automates frequent renaming operations. Open your terminal and create a new script file:

nano rename_files.sh

Add this code to the script file:

#!/bin/bash
for file in *.txt; do
  mv "$file" "${file%.txt}.md"
done

This script changes all .txt filenames to .md. Save the file and make it executable:

chmod +x rename_files.sh

Run the script:

./rename_files.sh

Method 4: Via the find Command with mv

Together, find and mv offer a precise way to rename multiple files based on detailed conditions such as name patterns, size, or modification date, and they can be combined with further commands for powerful file operations. Use this template:

find . -name "oldpattern" -exec mv {} newpattern \;

Additional Options

Rename files larger than 1 MB:

find . -size +1M -exec mv {} newname \;

Rename files modified in the last 7 days:

find . -mtime -7 -exec mv {} newname \;

Method 5: Via the mmv Command

The mmv command simplifies batch renaming through advanced pattern matching. It changes multiple filenames at once using patterns and replacement strings, which makes it ideal for large numbers of files that follow a specific naming convention. The syntax is:

mmv [options] source target

Changing a Filename with mmv

Install mmv through the default package manager:

sudo apt install mmv

Use mmv with patterns to modify filenames:

mmv oldpattern newpattern

Additional mmv Options

Add a prefix to every file in a directory:

mmv '*' 'prefix#1'

Show the names of files as they are modified:

mmv -v '*.txt' 'prefix_#1.txt'

Method 6: Via GUI

For those who prefer a graphical interface, most Linux desktop environments include tools for renaming files. This approach is especially helpful for users less familiar with the command line.

Launch your file manager. The name and appearance vary by desktop environment (e.g., Nautilus for GNOME, Dolphin for KDE, Thunar for XFCE).
Navigate to the folder containing the files.
Right-click the file you plan to edit and pick "Rename" or "Edit Name" from the context menu.
Type the new name, then press Enter or select "Rename" to apply the change.

Bulk renaming may differ somewhat between file managers:

Hold the Ctrl key and click each file you want to rename.
Right-click any selected file and choose "Rename".
Confirm the updates and check that the files are renamed as desired.

Best Practices for File Naming Conventions

Consistent naming conventions make files significantly easier to manage and keep organized. This section outlines best practices for naming files.

Use Descriptive Names

Choose names that are clear and descriptive, reflecting the file's content, purpose, or creation date. For example, replace doc1.txt with project_report_Jan2025.txt.

Avoid Special Characters

Avoid special characters (such as !, @, #, $, %, ^, &, and *) in filenames, since they can cause complications in file management and scripts.

Use Underscores or Hyphens

Replace spaces with underscores (_) or hyphens (-) so filenames work seamlessly across different systems and scripts. For instance, use project_report_Jan2025.txt instead of project report Jan 2025.txt.

Guidelines for Changing Filenames in Linux

Backup First: Back up your files before performing extensive renames.
Test Changes: Test the changes on a handful of files first (a dry-run sketch follows the conclusion).
Careful Use of Wildcards: Use wildcards carefully to prevent unintended modifications.

Conclusion

There are several approaches to renaming files in Linux, each suited to different preferences. Single-file tasks suit mv; for bulk operations, choose rename or mmv. Bash scripts and command combinations allow advanced customization, while GUI tools offer a more user-friendly option. Mastering these tools saves time and reduces the risk of errors that come with manual renaming. In addition, Hostman provides Linux VPS web hosting services to empower your applications.
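Here is the kind of dry run suggested under "Test Changes": a minimal bash sketch (the .txt-to-.md conversion is just an example) that prints each rename before performing it:

DRY_RUN=1                      # set to 0 once the printed plan looks right
for file in *.txt; do
  new="${file%.txt}.md"
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "would rename: $file -> $new"
  else
    mv -n -- "$file" "$new"    # -n: never overwrite an existing file
  fi
done

Reviewing the printed plan before flipping DRY_RUN to 0 catches pattern mistakes before any file is touched.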
23 January 2025 · 7 min to read
Docker

Converting a Container to a Virtual Machine

A tricky question often asked during technical interviews for a DevOps engineer position is: "What is the difference between a container and a virtual machine?" Most candidates get confused answering it, and some interviewers themselves don't fully know what answer they want to hear. To understand the differences clearly and never have to revisit this question, we will show how to convert a container into a virtual machine and run it in the Hostman cloud. The process described in this article will help you better understand the key differences between containers and virtual machines and demonstrates each approach's practical application. It will be especially useful for working with systems that require a specific environment. We will perform all further actions in a Linux environment and use a virtual machine based on the KVM hypervisor created with VirtualBox to prepare the image; you can also use other providers such as VMware, QEMU, or virt-manager.

Configuration of Our Future Virtual Machine

Let's start by creating a container with Docker. If Docker is not installed yet, install it with the command below (you may first need to update the list of available packages with sudo apt update):

sudo apt install docker.io -y

Create a container based on the minimal Alpine image and attach to its shell:

sudo docker run --name test -it alpine sh

Using the apk package manager, install the programs you plan to use in the future virtual machine. You aren't limited to packages from the standard Alpine repository; you can add other repositories or, if needed, download or compile packages directly in the container.

apk add tmux busybox-extras openssh-client openssh-server iptables dhclient ppp socat tcpdump vim openrc mkinitfs grub grub-bios

Here's the list of minimally required packages:

tmux: a console multiplexer, useful for preserving user sessions and the context of running processes if the network disconnects.
busybox-extras: an extended version of BusyBox that includes additional utilities while remaining a compact distribution of standard tools.
openssh-client and openssh-server: OpenSSH client and server, necessary for remote connections.
iptables: a utility for configuring IP packet filtering rules.
dhclient: a DHCP client for automating network configuration.
ppp: a package implementing the Point-to-Point Protocol.
socat: a program for creating tunnels, similar to netcat, with encryption support and an interactive shell.
tcpdump: a utility for capturing traffic, useful for debugging network issues.
vim: a console text editor with rich customization options, popular among experienced Linux users.
openrc: an initialization system based on dependency management that works with SysVinit. This is the key component for converting a container into a virtual machine, as containers do not have an init system by default.
mkinitfs: a package for generating the initramfs, allowing you to include drivers and modules that load during initial system initialization.
grub and grub-bios: the OS bootloader. Here we specifically want a bootloader for BIOS-based systems using an MBR partition table.

Set the root password:

export PASSWORD=<your secret password>
echo "root:$PASSWORD" | chpasswd

Create a user; you will need it for remote SSH access later:

export USERNAME=<username>
adduser -s /bin/sh $USERNAME

Set the SUID bit on the busybox executable so the user can run commands with superuser privileges:

chmod u+s /bin/busybox

Create a script to be executed during system initialization:

cat <<EOF > /etc/local.d/init.start
#!/bin/sh
dmesg -n 1
mount -o remount,rw /
ifconfig lo 127.0.0.1 netmask 255.0.0.0
dhclient eth0
# ifconfig eth0 172.16.0.200 netmask 255.255.255.0
# route add -net default gw 172.16.0.1
busybox-extras telnetd
EOF

Let's go through the script line by line:

dmesg -n 1: displays only critical messages from the Linux kernel's message buffer so potential issues can be spotted during startup.
mount -o remount,rw /: remounts the root file system (/) with the rw (read-write) flag, allowing modifications after boot.
ifconfig lo 127.0.0.1 netmask 255.0.0.0: configures the loopback interface (lo) with IP address 127.0.0.1 and subnet mask 255.0.0.0, enabling internal network communication on the machine.
dhclient eth0: runs the DHCP client on the eth0 interface to automatically obtain an IP address and other network parameters from a DHCP server.
# ifconfig eth0 172.16.0.200 netmask 255.255.255.0: commented out, but if uncommented it assigns the static IP address 172.16.0.200 with subnet mask 255.255.255.0 to eth0. We included this line in case a static network configuration is needed.
# route add -net default gw 172.16.0.1: also commented out; if uncommented it adds a default route via gateway 172.16.0.1, determining how packets are routed outside the local network.
busybox-extras telnetd: starts the Telnet server. Note that using Telnet in production is not recommended because it transmits data unencrypted.

Make the script executable:

chmod +x /etc/local.d/init.start

Add the script to autostart:

rc-update add local

Add the OpenSSH server daemon to autostart so you can later connect to the cloud server via SSH:

rc-update add sshd default

Set the default DNS server:

echo nameserver 8.8.8.8 > /etc/resolv.conf

Exit the container shell using the exit command or CTRL+D. The next step is to save the container's file system to the host as an archive, which can also be done with Docker. In this case the final artifact is only 75 megabytes:

sudo docker export test > test.tar

Transforming a Docker Image into a Virtual Machine Image

Containers are a Linux-specific technology: they have no kernel of their own and instead rely on abstractions of the host's Linux kernel for isolation and resource management. The key abstractions are:

namespaces: isolation via the USER, TIME, PID, NET, MOUNT, UTS, IPC, and CGROUP namespaces.
cgroups: limits on resources like CPU, RAM, and I/O.
capabilities: a set of capabilities for executing specific privileged operations without full superuser rights.

These kernel components tie Docker and other container technologies closely to Linux, meaning they can't natively run on other operating systems like Windows, macOS, or BSD. For those platforms there is Docker Desktop, which provides a virtual machine with a minimal Linux-based kernel; Docker Engine is installed and running inside this virtual machine, letting users manage containers and images in their usual environment.

Since we need a full operating system and not just a container, we will need our own kernel.

Create the image file we will work with:

truncate -s 200M test.img

Use fdisk to create a partition on the test.img image:

echo -e "n\np\n1\n\n\nw" | fdisk test.img

n: create a new partition
p: make it a primary partition
1: the partition number
\n\n: use default values for the start and end sectors
w: write changes

Associate the test.img file with the /dev/loop3 device, starting at an offset of 2048 sectors (1 MB):

sudo losetup -o $((2048*512)) /dev/loop3 test.img

Note that /dev/loop3 may already be in use. You can check used devices with:

losetup -l

Format the partition linked to /dev/loop3 as EXT4:

sudo mkfs.ext4 /dev/loop3

Mount the partition at /mnt:

sudo mount /dev/loop3 /mnt

Extract the Docker image (test.tar) into the /mnt directory:

sudo tar xvf test.tar -C /mnt

Create the /mnt/boot directory to store the bootloader and kernel files:

sudo mkdir -pv /mnt/boot

Download the Linux kernel source code:

wget https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-6.8.9.tar.xz

Extract it in the current directory:

tar xf linux-6.8.9.tar.xz

Install the packages needed to build the kernel:

sudo apt install git fakeroot build-essential ncurses-dev xz-utils libssl-dev bc flex libelf-dev bison -y

Navigate to the kernel source directory and create the default configuration file:

cd linux-6.8.9
make defconfig

Add the necessary configuration options to the .config file:

echo -e "CONFIG_BRIDGE=y\nCONFIG_TUN=y\nCONFIG_PPP=y\nCONFIG_PPP_ASYNC=y\nCONFIG_PPP_DEFLATE=y" >> .config

CONFIG_BRIDGE=y: enables network bridge support, allowing multiple network interfaces to be combined into one.
CONFIG_TUN=y: enables virtual network interfaces (TUN/TAP), useful for VPN setups.
CONFIG_PPP=y: enables the Point-to-Point Protocol (PPP).
CONFIG_PPP_ASYNC=y: enables asynchronous PPP over serial ports.
CONFIG_PPP_DEFLATE=y: enables PPP data compression using the DEFLATE algorithm.

Prepare the source code for building:

make prepare -j4

Create the necessary scripts, then build the compressed kernel image (bzImage) and the kernel modules:

make scripts -j4
make bzImage -j4
make modules -j4

Install the built kernel and modules into the virtual machine image's filesystem under /mnt:

sudo make INSTALL_PATH=/mnt/boot install
sudo make INSTALL_MOD_PATH=/mnt modules_install

Install the GRUB bootloader into the /mnt/boot directory. Make sure you're in the directory containing the test.img file:

sudo grub-install --target=i386-pc --boot-directory=/mnt/boot --modules='part_msdos' test.img

Bind-mount the host system's /proc, /sys, and /dev directories into /mnt. This is necessary for creating the initramfs:

sudo mount --bind /proc /mnt/proc/
sudo mount --bind /sys /mnt/sys/
sudo mount --bind /dev /mnt/dev/

Change root (chroot) into the /mnt filesystem using a shell:

sudo chroot /mnt /bin/sh

Generate the initial RAM filesystem (initramfs) for the kernel version you are working with:

mkinitfs -k -o /boot/initrd.img-6.8.9 6.8.9

Generate the GRUB bootloader configuration file:

grub-mkconfig -o /boot/grub/grub.cfg

By completing these steps, you will have created a small virtual machine image with a fully working Linux kernel, a bootloader (GRUB), and an initramfs.
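Before converting or booting the image, leave the chroot and release the mounts and the loop device. The article does not show this teardown, so here is a minimal sketch, assuming the mount points and /dev/loop3 used above:

exit                                       # leave the chroot shell
sudo umount /mnt/dev /mnt/sys /mnt/proc    # release the bind mounts
sudo umount /mnt                           # unmount the image's root filesystem
sudo losetup -d /dev/loop3                 # detach the loop device

Releasing everything first ensures the filesystem inside test.img is in a consistent state for the conversion step that follows.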
Local Verification of the Built Image

For local verification, it's most convenient to use QEMU; the package is available for Windows, macOS, and Linux. Install it by following the instructions for your OS on the official website.

Convert test.img to the qcow2 format. This reduces the final image size from 200 MB to 134 MB:

qemu-img convert test.img -O qcow2 test.qcow2

Run the image using QEMU:

qemu-system-x86_64 -hda test.qcow2

If all steps were completed correctly, initialization will succeed and an interactive login prompt will appear. To check the version of the installed kernel, use the uname -a command.

Creating a Virtual Machine in Hostman

Go to the Cloud Servers section and start creating a new server. Select the prepared and tested image as the server's base; to do this, first add it to the list of available images. Supported formats include iso, qcow2, vmdk, vhd, vhdx, vdi, raw, and img. Upload the image in one of the available ways: from your computer or by link. Note that after uploading, the image will also be available via URL. Continue creating the cloud server and specify the rest of its configuration; since the image is minimal, it can run even on the smallest configuration. Once the cloud server is created, go to the Console tab and verify that the virtual machine was successfully created from the image.

The virtual machine works correctly, and since we added the OpenSSH daemon to autostart in advance, it is now possible to establish a full remote connection to the server using the username, IP address, and password.

Conclusion

To turn a container into a full-fledged lightweight virtual machine, we sequentially added the key components: the OpenRC init system, the GRUB bootloader, the Linux kernel, and an initramfs. This process highlighted the role of each component in the overall virtual machine architecture and demonstrated the practical differences from container environments. The experiment also showed how understanding the architecture and function of each component helps you build images for specific needs and manage virtual machines more efficiently in terms of resources. The image built in this article is quite minimal, since it is a Proof-of-Concept, but you can go further: minimize the kernel and explore minimal Linux distributions such as Tiny Core Linux or SliTaz. On the other hand, if you choose to add functionality at the cost of image size, we strongly recommend the Gentoo Wiki, which offers extensive information on fine-tuning the system.
22 January 2025 · 11 min to read
Docker

How to Create and Optimize Docker Images

In today's environment, most companies actively use Docker containerization in their projects, especially for microservice applications. Docker allows you to quickly deploy any application, whether monolithic or cloud-native. Despite its simplicity, it's important to understand some nuances of creating your own images. In this article, we will explore how to work with Docker images and optimize them, using two different applications as examples.

Prerequisites

To work with Docker, we will need:

A cloud server or a virtual machine with any pre-installed Linux distribution. We will be using Ubuntu 22.04.
Docker installed. See our installation guide. You can also use a pre-configured image with Docker: go to the Cloud servers section in your Hostman control panel, click Create server, and select Docker in the Marketplace tab.

Working with Docker Images

Docker images are created by users and stored in registries, special repositories for images. Registries can be public or private. Public repositories are available to all users without authentication; private registries can only be accessed by users with appropriate login credentials. Companies widely use private repositories to store their own images during software development. By default, Docker uses the public registry Docker Hub, where any user can publish their own images or download images created by others. When a user runs a command such as docker run, the Docker daemon contacts this standard registry by default; if necessary, you can switch to another registry.

To create custom Docker images, you use a Dockerfile: a text file containing instructions for building an image. These instructions use 18 reserved keywords. The most common instructions include:

FROM specifies the base image. Every image starts from a base image, typically a Linux distribution such as Ubuntu, Debian, Oracle Linux, or Alpine. There are also many images with pre-installed software, such as Nginx, Grafana, Prometheus, or MySQL; even then, some Linux distribution is always underneath.
WORKDIR creates a directory inside the image, similar to the mkdir utility in Linux. It can be used multiple times in one image.
COPY copies files and directories from the host system into the image. It is used for configuration files and application source code.
ADD is similar to COPY, but also allows downloading files from remote sources and extracting .tar archives.
RUN executes commands inside the image during the build. With RUN you can do anything a user can do in a Bash shell: create files, install packages, start services, and so on.
CMD specifies the command executed when the container starts.

Example: Creating an Image

As an example, we will create an image with a simple Python program. Create a project directory and move into it:

mkdir python-calculator && cd python-calculator

Create a file console_calculator.py with the following content:

print("*" * 10, "Calculator", "*" * 10)
print("To exit from program type q")

try:
    while True:
        arithmetic_operators = input("Choose arithmetic operation (+ - * /):\n")
        if arithmetic_operators == "q":
            break
        if arithmetic_operators in ("+", "-", "*", "/"):
            first_number = float(input("First number is:\n"))
            second_number = float(input("Second number is:\n"))
            print("The result is:")
            if arithmetic_operators == "+":
                print("%.2f" % (first_number + second_number))
            elif arithmetic_operators == "-":
                print("%.2f" % (first_number - second_number))
            elif arithmetic_operators == "*":
                print("%.2f" % (first_number * second_number))
            elif arithmetic_operators == "/":
                if second_number != 0:
                    print("%.2f" % (first_number / second_number))
                else:
                    print("You can't divide by zero!")
        else:
            print("Invalid symbol!")
except (KeyboardInterrupt, EOFError) as e:
    print(e)

Create a new Dockerfile with the following content:

FROM python:3.10-alpine
WORKDIR /app
COPY console_calculator.py .
CMD ["python3", "console_calculator.py"]

For the base image, we use python:3.10 on Alpine, a lightweight Linux distribution (we will discuss Alpine in more detail in the next chapter). Inside the image, we create an app directory for the project file. The container launches by running python3 console_calculator.py.

To build the image, use the docker build command. Each image must also be assigned a tag, a unique identifier specified with the -t flag:

docker build -t python-console-calculator:01 .

The period at the end of the command indicates that the Dockerfile is in the current directory.

List the created images with:

docker images

To launch the container, use:

docker run --rm -it python-console-calculator:01

Let's test the program by performing a few simple arithmetic operations. To exit the program, press the q key. Since we specified the --rm flag when starting the container, the container will be removed automatically after it stops.

You can also run the container in the background (daemon mode) by including the -d flag:

docker run -dit python-console-calculator:01

After that, the container will appear in the list of running containers. To reach the script in a background container, use docker exec, which executes a command inside the container: first start a shell (bash or sh), then run the script manually. Pass sh as the argument (where 4f1b8b26c607 is the unique container ID shown in the CONTAINER ID column of the docker ps output):

docker exec -it 4f1b8b26c607 sh

Then, run the script manually:

python console_calculator.py

To remove a running container, use docker rm with the container's ID or name plus the -f flag, which forces removal of a running container:

docker rm -f 186e8f43ca60

Optimizing Docker Images

When creating Docker images, there is one main rule: finished images should be compact and occupy as little space as possible. Smaller images also build faster, which can be decisive when using CI/CD methods or releasing software in the Time to Market model.

Proper Selection of the Base Image

The first recommendation is to choose the base image wisely. Instead of a bare Linux distribution image (Ubuntu, Oracle Linux, Rocky Linux, and others), you can often pick an image that already ships with the required programming language, framework, or other technology. Examples of such images include:

node for working with the Node.js platform
A pre-built image with Nginx
ibmjava for working with the Java programming language
postgres for working with PostgreSQL databases
redis for working with the NoSQL Redis store

Using a specific image instead of an operating system image has the following advantages:

There is no need to install the main tool (programming language, framework, etc.), so the image isn't cluttered with unnecessary packages and doesn't grow in size.
Images with pre-installed software (Nginx, Redis, PostgreSQL, Grafana, etc.) are created by the developers of that software, so users don't need to configure the program just to run it (except when integrating it with their own services).

Let's see this recommendation in practice with a simple Python program that prints "Hello from Python!". First, we will build the image on a debian base. Create and enter the project directory:

mkdir dockerfile-python && cd dockerfile-python

Create the test.py file with the following content:

print("Hello from Python!")

Next, create a Dockerfile with the following content (to run Python programs, we also need to install the Python interpreter):

FROM debian:latest
COPY test.py .
RUN apt update
RUN apt -y install python3
CMD ["python3", "test.py"]

Build the image:

docker build -t python-debian:01 .

Check the image size:

docker images

The image takes up 185 MB, which is quite a lot for an application that prints a single line to the terminal.

Now let's choose a better base image, one based on the Alpine distribution. Many base images also have special slim and alpine versions that are even smaller. Take the official Python 3.10 image: python:3.10 weighs a full 1 GB, the slim version is much smaller at 127 MB, and the alpine image is only 50 MB.

Slim images contain the minimum set of packages necessary to run a finished application; most extra packages and libraries are absent. They are built from both regular Linux distributions (such as Ubuntu or Debian) and Alpine.

Alpine images use the Alpine distribution as the operating system: a lightweight Linux distribution that takes about 5 MB of disk space (without the kernel). It differs from other distributions in using the apk package manager, lacking the usual system init, and shipping fewer pre-installed programs.

When using slim or Alpine images, test your application thoroughly, as the packages or libraries it needs might be missing from such distributions.

Now let's test our application using the Alpine-based Python image. Return to the previously used Dockerfile and replace the debian base image with python:alpine3.19. Also remove the two RUN instructions, since there is no need to install the Python interpreter:

FROM python:alpine3.19
COPY test.py .
CMD ["python3", "test.py"]

Build the image with a new tag, list the Docker images, and compare the size with the previous build. Because we chose a base image with Python preinstalled, the image size drops from 185 MB to 43.8 MB.

Reducing the Number of Layers

Docker images are based on the concept of layers. A layer represents a change to the image's file system: copying or creating directories and files, or installing packages. Use as few layers as possible. Of all Dockerfile instructions, only FROM, COPY, ADD, and RUN create layers that increase the final image size; all other instructions create temporary intermediate images and do not directly add to it.

Let's modify the earlier Dockerfile to install additional packages with apt:

FROM debian:latest
COPY test.py .
RUN apt update
RUN apt -y install python3 htop net-tools mc gcc
CMD ["python3", "test.py"]

Build the image:

docker build -t python-non-optimize:01 .

Check the size of the created image:

docker images

The image comes to 570 MB. We can reduce that by using fewer layers. The Dockerfile contains two RUN instructions, which create two layers; combine apt update and apt install with the && operator, which in Bash runs the next command only if the previous one succeeded.

Another important point is to remove the cache files left in the image after package installation (this applies to apt as well as other package managers such as yum/dnf and apk). For apt-based distributions, the package cache lives in /var/lib/apt/lists. We add a command that deletes those files to the same RUN instruction, without creating a new layer:

FROM debian:latest
COPY test.py .
RUN apt update && apt -y install python3 htop net-tools mc gcc && rm -rf /var/lib/apt/lists/*
CMD ["python3", "test.py"]

Build the image:

docker build -t python-optimize:03 .

And check the size: it drops from the initial 570 MB to 551 MB.

Using Multi-Stage Builds

Another significant way to reduce image size is multi-stage builds. Using two or more base images, they separate the build environment from the runtime environment, removing unnecessary files and dependencies from the final image, such as libraries or development dependencies needed only during the build.

Let's explore this approach with a Node.js application. Node.js should be installed beforehand, following our guide. We will first build the image without a multi-stage build to measure the difference in size. Create a directory for the project:

mkdir node-app && cd node-app

Initialize a new Node.js application:

npm init -y

Install the express library:

npm install express

Create an index.js file with this content:

const express = require('express');
const app = express();
const PORT = process.env.PORT || 3000;

app.get('/', (req, res) => {
  res.send('Hello, World!');
});

app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});

Create a Dockerfile with this content:

FROM node:14-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY index.js .
EXPOSE 3000
CMD ["npm", "start"]

Build the image:

docker build -t node-app:01 .

Check the size:

docker images

The image size is 124 MB. Now let's rewrite the Dockerfile to use two images:

FROM node:14 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY index.js .

FROM gcr.io/distroless/base-debian10 AS production
WORKDIR /app
COPY --from=builder /app .
EXPOSE 3000
CMD ["npm", "start"]

Build the image:

docker build -t node-app:02 .

List the Docker images and check the size:

docker images

As a result, the image size drops drastically, from 124 MB to 21.5 MB.

Conclusion

In this article, we created our own Docker image and explored various ways to run it, paying particular attention to optimization. Optimizing images greatly reduces their size, which in turn speeds up builds.
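One further easy win, not covered above: a .dockerignore file keeps build-context junk and local secrets away from COPY instructions and shrinks the context sent to the Docker daemon, which speeds up builds. A minimal sketch for the Node.js project above (the file list is illustrative):

# .dockerignore: paths excluded from the build context
# node_modules is reinstalled inside the image by npm install
node_modules
npm-debug.log
.git
# keep local environment files and secrets out of the image
.env

Place the file next to the Dockerfile; docker build picks it up automatically.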
22 January 2025 · 12 min to read
JavaScript

How to Use Functions in JavaScript

Functions are fundamental building blocks in JavaScript programming. They encapsulate reusable chunks of logic for specific tasks and help organize code effectively, making it modular and easier to maintain. Functions reduce repetition, improve readability, and promote better debugging practices.

JavaScript functions come in different types depending on use case and structure:

Named Functions: Have explicit names; ideal for debugging.
Anonymous Functions: Usually used as arguments to other functions.
Arrow Functions: Streamlined syntax introduced in ES6 for cleaner, more concise code.

How Functions Work

In JavaScript, functions are objects of the Function type, which allows them to have methods and properties. When invoked, a function creates a new execution context comprising:

Variable Environment: Holds function arguments and local variables.
Scope Chain: Ensures access to outer variables within nested scopes.
this Binding: The object that initiated the function call, which influences behavior in different contexts.

Structure of a Function

Functions in JavaScript typically consist of a few components:

Declaration Keyword: The function keyword or an arrow (=>) starts the syntax.
Name: Functions may or may not have names, depending on their type.
Parameters: Input variables passed into the function.
Body: The instructions executed when the function runs.

Example:

function calculateSum(a, b) {
  return a + b;
}

Specialized Function Types

Anonymous Functions

Anonymous functions have no declared name and are usually used for temporary tasks, such as callbacks. Example:

setTimeout(function() {
  console.log("Executing an anonymous function.");
}, 1000);

Arrow Functions

Arrow functions have a simpler syntax and, in some cases, help you avoid this-binding complexities. Example:

const multiply = (x, y) => x * y;
console.log(multiply(4, 5)); // Output: 20

Immediately Invoked Function Expressions (IIFE)

These execute immediately after being defined. IIFEs are useful for initializing variables without polluting the global scope. Example:

(function() {
  console.log("IIFE executed immediately!");
})();

Real-Life Applications

Below are a few real-life applications.

Handling Events

Functions are central to managing events and making web pages interactive. Example:

document.getElementById("button").addEventListener("click", function() {
  alert("Button clicked!");
});

Fetching Data through an API

Functions simplify retrieving data from external APIs and processing it. Example:

fetch('https://api.example.com/data')
  .then(response => response.json())
  .then(data => console.log(data));

Manipulating Data Structures

Array methods like map, filter, and reduce use functions to enable efficient, concise data processing. Example:

const numbers = [1, 2, 3, 4, 5];
const squares = numbers.map(num => num * num);
console.log(squares); // Output: [1, 4, 9, 16, 25]

Advanced Function Concepts

Delving into Closures

A closure is a function that retains access to variables from its parent scope, even after the parent function has finished running. Example:

function counter() {
  let count = 0;
  return function() {
    count++;
    return count;
  };
}

const increment = counter();
console.log(increment()); // Output: 1
console.log(increment()); // Output: 2

Closures are widely used to maintain private state and in functional programming paradigms.

Recursive Functions

Recursion enables a function to call itself for repetitive problem-solving. It's particularly effective for operations like calculating factorials, generating Fibonacci numbers, or traversing data structures like trees. Example:

function factorial(n) {
  if (n === 0) return 1;
  return n * factorial(n - 1);
}
console.log(factorial(5)); // Output: 120

Callback Functions

Callbacks allow asynchronous execution, making them indispensable in event-driven programming. Example:

function processUserInput(callback) {
  const name = prompt("Enter your name:");
  callback(name);
}
processUserInput(name => alert(`Hello, ${name}!`));

Higher-Order Functions

A key feature of JavaScript that significantly enhances its versatility is higher-order functions: functions that accept other functions as arguments, return a function, or both. They form the foundation of functional programming, enabling concise and expressive code. Example:

const withLogging = (fn) => (...args) => {
  console.log(`Arguments: ${args}`);
  const result = fn(...args);
  console.log(`Result: ${result}`);
  return result;
};

const add = (a, b) => a + b;
const loggedAdd = withLogging(add);
loggedAdd(3, 4); // Logs: Arguments: 3,4; Result: 7

This pattern is widely used in libraries like Lodash or RxJS for functional utilities and reactive programming.

Memoization

Memoization is an optimization technique that caches the results of function calls to avoid redundant computation, especially for heavy functions repeatedly called with the same input. Example:

const memoize = (fn) => {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) {
      cache.set(key, fn(...args));
    }
    return cache.get(key);
  };
};

const expensiveCalculation = (num) => num ** 10;
const memoizedCalculation = memoize(expensiveCalculation);
console.log(memoizedCalculation(2)); // Computed and cached
console.log(memoizedCalculation(2)); // Retrieved from cache

Performance Optimization with Functions: Debouncing and Throttling

Event-driven programming often requires preventing excessive function executions. Debouncing ensures a function executes only after a delay has passed without new calls, while throttling ensures it executes at most once per interval.

Example: Debouncing

const debounce = (fn, delay) => {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
};

const handleResize = debounce(() => console.log("Resized!"), 500);
window.addEventListener("resize", handleResize);

Example: Throttling

const throttle = (fn, limit) => {
  let inThrottle;
  return (...args) => {
    if (!inThrottle) {
      fn(...args);
      inThrottle = true;
      setTimeout(() => (inThrottle = false), limit);
    }
  };
};

const handleScroll = throttle(() => console.log("Scrolling!"), 200);
window.addEventListener("scroll", handleScroll);

Mastering Function Scope

Functions operate within a scope that determines variable accessibility:

Global Scope: Variables accessible throughout the program.
Local Scope: Variables exclusive to a specific function.

Hoisting in JavaScript

JavaScript hoists function declarations, which allows them to be called before they are defined. Example:

console.log(greet()); // Output: Hello!

function greet() {
  return "Hello!";
}

Techniques for Better Performance

Limit Global Variables: Keep variable declarations as localized as possible.
Decrease Redundancy: Reuse logic effectively with modular functions.
Debouncing and Throttling: Optimize event-driven functions to prevent excessive execution.

Debugging Functions

Efficient debugging includes:

Using console.log() to trace execution flow.
Leveraging browser developer tools for debugging.
Writing unit tests to validate logic and identify issues, as sketched below.

Conclusion

Functions in JavaScript are vital for building scalable, maintainable applications. Mastering them requires understanding their various types, use cases, and performance techniques. Whether you handle events, fetch API data, or build complex algorithms, functions let developers write efficient, reusable code. In addition, on our cloud application platform you can find frontend apps, such as React, Angular, Vue and more.
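To make the unit-testing point concrete: Node's built-in assert module is enough to check a small function from the shell, with no framework. A minimal sketch (the add function is a stand-in for your own logic):

node -e '
const assert = require("assert");
const add = (a, b) => a + b;         // function under test (placeholder)
assert.strictEqual(add(2, 3), 5);    // passes silently
assert.strictEqual(add(-1, 1), 0);   // throws an AssertionError on failure
console.log("all assertions passed");
'

A failing assertion throws and prints a stack trace, which makes broken logic easy to spot before it reaches the browser.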
21 January 2025 · 7 min to read
Firewall

How to Install CSF (ConfigServer Security & Firewall) on Ubuntu 22.04

ConfigServer Security & Firewall (CSF) is a highly regarded tool for securing Linux servers against a wide range of cyberattacks. Its robust functionality and simple interface make it a strong choice for system administrators. Whether you're managing a small server or a large network, this tool provides an effective defense mechanism that is easy to deploy and manage. This manual walks through the installation and configuration of CSF on Ubuntu 22.04 LTS to maximize server protection and performance.

Advantages of Using the CSF Firewall on Ubuntu 22.04 LTS

This versatile security solution offers a variety of benefits tailored for Linux servers.

User-Friendly Management: With an easy-to-edit configuration file and simple command-line utilities, CSF lets even novice administrators quickly implement server security measures.

Powerful Security Features:

Port Restrictions: Secure sensitive services like MySQL (port 3306) and SSH (port 22) by allowing or denying specific traffic.
DDoS Attack Mitigation: Protection against high-traffic denial-of-service attempts.
Failed Login Notifications: Automatically blocks IPs after repeated failed login attempts to protect against brute-force attacks.
GeoIP Filtering: Restricts access from specific geographic regions for enhanced security (see the sketch after this list).

Effortless Updates: Integrated with the system's package manager, CSF updates seamlessly, ensuring the firewall always carries the latest security patches.

Low Resource Consumption: Optimized for performance, CSF works without placing unnecessary strain on system resources.

Custom Alerts and Logs: Receive real-time notifications for security events and review detailed logs to monitor server activity.
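GeoIP filtering, for example, is driven by country-code lists in csf.conf. A minimal sketch (the country codes below are placeholders, and recent CSF versions additionally require a MaxMind license key, set via MM_LICENSE_KEY, before the GeoIP database works):

# In /etc/csf/csf.conf: deny all traffic from the listed ISO country codes
CC_DENY = "CN,KP"

# Optionally, allow the listed countries through ports that are otherwise filtered
CC_ALLOW = "US,DE"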
Step-by-Step Guide to Install CSF on Ubuntu 22.04 LTS

Below is a detailed manual explaining how to install CSF on Ubuntu 22.04 LTS.

Step 1: Update the System

First, update the system's package repository to make sure you have the latest software:

sudo apt update && sudo apt upgrade -y

Step 2: Install Dependencies

CSF relies on several dependencies to function properly. Install them with:

sudo apt install -y perl libwww-perl liblwp-protocol-https-perl iptables nano

Essential dependencies:

Perl: A programming language; many of the CSF scripts and configuration tools are written in Perl. Installing it ensures the system can run the scripts that manage firewall operations.
libwww-perl and liblwp-protocol-https-perl: These libraries handle HTTP and HTTPS requests. CSF uses them to fetch updates, download remote blocklists, and securely manage real-time threat data feeds over HTTPS, keeping the firewall current with new security information.
iptables: The foundation of Linux firewall functionality. It lets CSF define and enforce traffic-filtering rules, block specific ports, and restrict connectivity by IP address at the kernel level.
nano: Optional, but included to simplify editing configuration files directly from the terminal.

Step 3: Download and Install CSF

The package is available from the official CSF website. To download it, run:

cd /usr/src
sudo wget https://download.configserver.com/csf.tgz

Extract the files:

sudo tar -xvzf csf.tgz

Go to the extracted directory:

cd csf

And execute the installer:

sudo sh install.sh

To confirm the installation, check the CSF version:

sudo csf -v

The CSF version will appear on the screen.

Step 4: Configure CSF Firewall Settings

CSF needs to be configured according to your needs. There are two ways to configure it: through the terminal and through the GUI.

By Terminal

We will edit csf.conf, located at /etc/csf/csf.conf. Open it with:

sudo nano /etc/csf/csf.conf

Make the following changes for basic firewall protection:

Testing mode (TESTING = "1") periodically clears the firewall rules so you cannot lock yourself out during configuration. Keep it enabled until you have verified all settings, then disable it to activate full protection:

TESTING = "0"

Allow the MySQL port: if you need MySQL access, update csf.conf as follows:

TCP_IN = "22,3306"

After modifying the configuration, apply the changes:

sudo csf -r
sudo systemctl restart csf

By GUI

CSF has a built-in GUI mode. It needs to be enabled through csf.conf and a few commands. Here is the procedure.

1. Install the prerequisites:

sudo apt-get install libio-socket-ssl-perl libcrypt-ssleay-perl \
    libnet-libidn-perl libio-socket-inet6-perl libsocket6-perl

2. Edit csf.conf to enable UI mode, allow the UI port, and update the UI username and password. The default credentials must be changed, or the UI will not work. Open the file:

sudo nano /etc/csf/csf.conf

Find UI = "0" and change it to "1" to enable UI mode. Then find UI_PORT and set it to 7171; this is the port through which you will interact with the GUI. Make sure the port number is always greater than 1024. Also edit UI_USER and UI_PASS, replacing the default values. Finally, add the same port to TCP_IN and TCP_OUT.

3. Add your IP address to the ui.allow file:

echo "your_IP_address" | sudo tee -a /etc/csf/ui/ui.allow

Paste your own IP address into this command. If you don't know it, search for "What's my IP" in Google, copy the address into the command, and press Enter.

4. Apply the changes:

sudo csf -r
sudo systemctl restart csf
sudo systemctl restart lfd

Your UI is now set up. To connect to the GUI, type the server's IP address along with the port you set (7171) in your browser. In my case it was:

195.133.93.222:7171

195.133.93.222: the public IP address of the server
7171: the port I set

Now you have the GUI.
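For reference, after these edits the relevant csf.conf lines might look like this (a sketch: the port is the example value used above, and the credentials are placeholders you must replace):

# /etc/csf/csf.conf: UI settings
UI = "1"
UI_PORT = "7171"
UI_USER = "adminuser"         # must be changed from the default
UI_PASS = "a_strong_password" # must be changed from the default
TCP_IN = "22,3306,7171"       # append 7171 to your existing list
# ...and append 7171 to TCP_OUT the same way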
Additional CSF Firewall Commands

With the tool installed, here are additional commands that are highly useful for firewall management.

To block an IP address:

sudo csf -d <IP_ADDRESS>

To allow an IP address:

sudo csf -a <IP_ADDRESS>

To view the status:

sudo csf -l

To restart:

sudo csf -r

Setting Up Alerts in CSF

Activating notifications allows server administrators to get timely updates on important events, such as IP blocks, failed login attempts, and other security incidents. These are important for quickly detecting and addressing security risks. Below is how to set up notifications via email.

Updating the Configuration File

Open the tool's main configuration file with a text editor like Nano:

sudo nano /etc/csf/csf.conf

Search for the line starting with LF_EMAIL_ALERT. This setting determines whether CSF sends email notifications:

LF_EMAIL_ALERT = "0"

Change the value from 0 to 1 to enable email notifications:

LF_EMAIL_ALERT = "1"

Find the LF_ALERT_TO option, which defines the recipient address for alerts, and add the preferred email:

LF_ALERT_TO = "[email protected]"

Define the sender address using the LF_ALERT_FROM option. Notifications will be sent from this address:

LF_ALERT_FROM = "[email protected]"

Press CTRL + O to save the changes and CTRL + X to exit Nano. Then restart CSF and LFD:

sudo csf -r
sudo systemctl restart lfd

Customizing Alert Types

You can customize which types of events trigger email notifications. A few common options in the config:

Login Failures: Controlled by LF_ALERT_LOGIN.
Blocked IP Alerts: Enabled by LF_EMAIL_ALERT.
Excessive Resource Usage: Configured via LF_ALERT_RESOURCE.

For example, to enable login failure notifications, set:

LF_ALERT_LOGIN = "1"

Benefits of Email Notifications

Real-Time Monitoring: Immediate awareness of suspicious activities or potential threats.
Quick Response: Reduces the time between detecting and mitigating security issues.
Audit Trail: Email alerts provide a record of important security events.

With notifications enabled, CSF becomes an even more proactive tool for managing server security.

Example: Configuring CSF for WordPress

Here's how to configure CSF for a server hosting WordPress (WP), MySQL, and Redis.

1. Open the CSF configuration file:

sudo nano /etc/csf/csf.conf

2. Allow ports 80 (HTTP) and 443 (HTTPS), which are required for serving the WordPress site. Find the line that starts with TCP_IN and modify it as follows:

TCP_IN = "22,80,443,3306,6379"

Explanation:

22: SSH access
80: HTTP for WordPress
443: HTTPS for WordPress
3306: MySQL port
6379: Redis port

3. Add a custom rule to limit access to Redis (port 6379) to the internal network only. Open the csf.allow file and add:

192.168.1.0/24 6379 # Internal network access to Redis

Replace 192.168.1.0/24 with your internal network's CIDR notation.

4. Restrict MySQL connectivity. Allow MySQL access from the internal network by adding to csf.allow:

192.168.1.0/24 3306 # Internal network access to MySQL

Allow MySQL access from an external developer IP. In the same csf.allow file, add:

203.0.113.5 3306 # Developer IP access to MySQL

Replace 203.0.113.5 with the developer's external IP address.

5. Restrict SSH access.
To allow SSH access only from a specific subnet, add the subnet to csf.allow:

192.168.1.0/24 # SSH access from the specific subnet

Also, explicitly deny SSH access from all other IPs in csf.deny:

ALL 22 # Block SSH for all except explicitly allowed IPs

6. Apply the changes by restarting CSF and LFD:

sudo csf -r
sudo systemctl restart lfd

7. Verify that the ports are correctly opened:

sudo csf -l

Check connectivity for a specific IP with:

sudo csf -g <IP_ADDRESS>

Conclusion

Using ConfigServer Security & Firewall on Ubuntu 22.04 LTS significantly improves server security. With advanced features like port management, DDoS protection, and real-time alerts, it provides a comprehensive solution for safeguarding Linux servers. To learn more about its options and settings, check the official website. By following this guide, you'll establish a robust firewall capable of defending against modern cyber threats while maintaining optimal server performance. In addition, you can install Ubuntu on our cloud VPS server.
21 January 2025 · 9 min to read
CSS

How to Create Scrolling on Your Website Using CSS

Do you want to add parallax and scrolling effects to your page but don't know where to start? You don't need to jump straight into JavaScript; simpler options are available. In this article, we'll explain how to achieve this using CSS alone and clarify which browsers might not support these effects.

Smooth Scrolling

To create simple scrolling from one part of the page to another, use the CSS property scroll-behavior. Here are its main values:

auto: The default behavior; the browser jumps to the target instantly.
smooth: Enables smooth, animated transitions.
instant: Jumps to the target immediately, without animation.

Note that not every browser supports this property: Internet Explorer and Safari versions earlier than 15.4 ignore it and simply jump to the target.

Here's what the CSS looks like with the smooth value applied to an example p selector:

p {
  scroll-behavior: smooth;
}

Next, we'll demonstrate how to create a smooth transition from one part of a site to another using the smooth value. In this example, the user will see the transition when clicking a hyperlink that leads to another section of the page.

Step 1. Create two blocks with links to each other:

<html>
<head></head>
<body>
<h1>Smooth Scrolling</h1>
<div class="main" id="block1">
<h2>Block 1</h2>
<p>Click the hyperlink to see smooth scrolling.</p>
<a href="#block2">Click here to go to Block 2.</a>
</div>
<div class="main" id="block2">
<h2>Block 2</h2>
<a href="#block1">Click here to go to Block 1.</a>
</div>
</body>
</html>

Step 2. Add smooth scrolling using CSS inside the <head> tag. The code includes the scroll-behavior property with the value smooth, and sets colors (pink and yellow) and heights for the two text sections:

<head>
<style>
html {
  scroll-behavior: smooth;
}
#block1 {
  height: 1000px;
  background-color: #ffa3f6;
}
#block2 {
  height: 1000px;
  background-color: #fffc9c;
}
</style>
</head>

Thanks to the smooth value, smooth scrolling is implemented on the web page.

If You Need Parallax

Parallax is an effect where background elements move faster or slower than objects in the foreground. For example, as a user scrolls a page, the background image may move at one speed while the text moves at another. In this case, we'll make the background image not move at all during scrolling.

Step 1. Write the HTML code, which includes a large purple text block:

<html>
<head></head>
<body>
<h1>Scroll Further Down the Page</h1>
<div class="paral"></div>
<div style="height:550px;background-color:#bf89e0;font-size:50px">
This text is included simply to demonstrate the effect. Try scrolling up and down. The text block will move, but the image will remain stationary.
</div>
<div class="paral"></div>
</body>
</html>

Step 2. Add the CSS code. It includes a link to an image that becomes the background via the background-attachment: fixed property:

<style>
body, html {
  height: 90%;
}
.paral {
  background-position: center;
  background-attachment: fixed;
  background-size: cover;
  background-image: url('https://images.unsplash.com/photo-1519681393784-d120267933ba?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=2070&q=80');
  height: 90%;
}
</style>

As a result, the parallax effect is implemented on the page.

Note: To remove this effect, simply replace background-attachment: fixed with background-attachment: scroll in the code. The image will then move along with the text.

Additional Examples

To better understand these CSS properties, let's explore other scrolling effects for a website.

Example 1

Here's a step-by-step guide to achieving a multi-layered scrolling effect.
In this case, multiple objects on the site will move at different speeds during scrolling.

Step 1. Write two sections in the following way:

<html>
<head></head>
<body>
<section class="block1">
<h1>Block 1</h1>
</section>
<section class="block2">
<h1>Block 2</h1>
</section>
</body>
</html>

Step 2. Add CSS code inside the <head> tag. First, define the parameters and select a background from a free photo stock for Block 2:

<head>
<style>
section {
  min-height: 90vh;
  width: 100%;
  transform-style: inherit;
  position: relative;
}
.block1 {
  z-index: 2;
  background: #ff9012;
}
.block2::before {
  background: url(https://images.unsplash.com/photo-1536308037887-165852797016?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=826&q=80) top center;
  content: '';
  top: 0;
  left: 0;
  right: 0;
  bottom: 0;
  position: absolute;
  display: block;
  transform: translateZ(-0.5px) scale(2);
  z-index: -2;
  background-size: cover;
}
</style>
</head>

Step 3. Specify the parameters for the headings:

<head>
<style>
h1 {
  font-size: 3.7rem;
  position: absolute;
  padding: 0.8rem;
  background: #fffcfc;
  transform: translateZ(-1px) scale(1) translate(-25%, -25%);
  top: 49%;
  left: 49%;
  text-align: center;
}
.block1 h1 {
  z-index: 2;
  transform: translate(-49%, -49%);
}
.block2 h1 {
  z-index: 2;
  transform: translateZ(-0.4px) scale(1.2) translate(-39%, -39%);
}
</style>
</head>

Step 4. Define the parameters for the other elements:

<head>
<style>
*, *::before, *::after, :root {
  box-sizing: border-box;
  margin: 0;
  padding: 0;
}
html {
  height: 95%;
  overflow: hidden;
}
body {
  height: 95%;
  overflow-x: hidden;
  overflow-y: scroll;
  perspective: 0.8px;
  transform-style: preserve-3d;
  font-size: 40%;
  font-family: 'PT Astra Sans';
}
</style>
</head>

As a result, the website will have three moving objects. Here they are, ranked from slowest to fastest:

The background image of Block 2.
The "Block 2" heading.
Block 1 with the orange background.

This is how multi-layered scrolling looks in action.

Example 2

Let's look at how to fix an element on a webpage during scrolling. For instance, hostman.com has such pinned elements.

Step 1. Write HTML with two text blocks like this:

<html>
<head></head>
<body>
<h1>Fixed Element</h1>
<div class="extra"></div>
<div class="wrap">
<div class="elem">
Element
</div>
</div>
</body>
</html>

Step 2. Add the following CSS inside the <head> tag:

<head>
<style>
body {
  font-family: Times New Roman;
}
h1 {
  text-align: justify;
}
.wrap {
  background-color: #52ff83ab;
  width: 90%;
  height: 2000px;
  margin: 30px;
}
.elem {
  background: #6052ff;
  width: 150px;
  height: 150px;
  color: #fcfcfc;
  align-items: center;
  justify-content: center;
  display: flex;
  position: fixed;
}
</style>
</head>

With the position: fixed property, the element stays visible in the same spot of the viewport while the page scrolls.

Example 3

This time, let's create horizontal scrolling.

Step 1. Create four text blocks like this:

<html>
<head></head>
<body>
<div id="container">
<div id="container2">
<div class="type one"><div>One</div></div>
<div class="type two"><div>Two</div></div>
<div class="type three"><div>Three</div></div>
<div class="type back"><div>Four</div></div>
</div>
</div>
</body>
</html>

Step 2. Add the following CSS inside the <head> tag.
This code defines the size and color of the text blocks:

<head>
<style>
body {
  font-family: PT Astra Sans;
  margin: 0;
}
#container .type {
  position: relative;
  display: inline-block;
  width: 150vw;
  height: 99vh;
}
#container .type > div {
  position: relative;
  width: 99px;
  height: 99px;
  color: #080808;
  line-height: 0.8;
  top: 48%;
  left: 48%;
  font-weight: bold;
  font-size: 96px;
}
#container {
  position: absolute;
  overflow-x: scroll;
  overflow-y: scroll;
  transform: rotate(270deg) translateX(-100%);
  transform-origin: top left;
  background-color: #ccc;
  width: 99vh;
  height: 99vw;
}
#container2 {
  transform: rotate(90deg) translateY(-98vh);
  transform-origin: top left;
  white-space: nowrap;
}
.one {
  background-color: #00ff48;
}
.two {
  background-color: #ff00d5;
}
.three {
  background-color: #f00;
}
.back {
  background-color: #fff71c;
}
</style>
</head>

This creates a horizontally scrolling page divided into several sections with text.

Conclusion

Creating scrolling effects is a straightforward process. With just CSS properties, you can implement unique transitions between sections of a website during scrolling. The code may be lengthy in some cases, such as multi-layered scrolling where each section moves at a different speed. However, this is still simpler and more efficient than using JavaScript for similar effects. Check out our reliable and high-performance WordPress hosting solutions for your websites.
20 January 2025 · 8 min to read
CSS

How to Create a Sticky Footer Using CSS

You can create many different effects for a website with HTML and CSS. One of the simplest examples is fixing an element in any part of the page during scrolling. In this article, we will explain what a footer is and how to fix it without JavaScript. We will also look at HTML footer examples.

What is a Footer?

A footer is an element located at the bottom of a website. Sometimes this element is fixed on the page so that it remains visible during scrolling. The effect is similar to, for example, the chat box on the hostman.com website: when scrolling, it always stays in place.

Next, let's look at a few clear examples of how to make an HTML footer stick on a page.

Example 1

We will create a simple site footer as a colored block with a single word.

Step 1. Write HTML code with a large number of paragraphs to demonstrate the fixed element during scrolling:

<html>
<head>
<style>
</style>
</head>
<body>
<h1>A simple example of a sticky footer</h1>
<p>This text is additional</p>
<p>This text is additional</p>
<p>This text is additional</p>
<p>This text is additional</p>
<p>This text is additional</p>
<p>This text is additional</p>
<p>This text is additional</p>
<p>This text is additional</p>
<p>This text is additional</p>
<p>This text is additional</p>
<p>This text is additional</p>
<p>This text is additional</p>
<p>This text is additional</p>
<p>This text is additional</p>
<p>This text is additional</p>
<p>This text is additional</p>
<p>This text is additional</p>
<p>This text is additional</p>
<p>This text is additional</p>
<p>This text is additional</p>
<p>This text is additional</p>
<div class="footer">
<p>Footer</p>
</div>
</body>
</html>

Step 2. Using CSS, we will select the font for the text on the page, set the background color, and configure the element's positioning:

<style>
body {
  font-family: PT Astra Sans;
  background-color: #e9e9f0;
}
.footer {
  background-color: #2e34e5;
  position: fixed;
  right: 0;
  bottom: 0;
  text-align: center;
  width: 99%;
  font-size: 19px;
  font-weight: bold;
  color: #fafaff;
}
</style>

As a result, we'll have a blue footer on the site that always stays at the bottom during scrolling. This text-containing element will span the width of the window.

Example 2

How can you create a site footer with links to external resources? We'll explain in the second example: the links will point to images from a free stock photo site.

Step 1. Add links to three different images so that we can include them in the HTML footer:

<html>
<head>
<style>
</style>
</head>
<body>
<h2>An example of a sticky footer with links to images</h2>
<div class="wrapper">
<div class="footer">
<p class="text">
<a href="https://images.unsplash.com/photo-1483728642387-6c3bdd6c93e5?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=876&q=80" class="menu">Pic1</a>
<a href="https://images.unsplash.com/photo-1570092178365-4cd46034bb19?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1032&q=80" class="menu">Pic2</a>
<a href="https://images.unsplash.com/photo-1604231751678-3f2b03928c10?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=774&q=80" class="menu">Pic3</a>
</p>
</div>
</div>
</body>
</html>

Step 2.
Using CSS, we will configure the site dimensions and the footer with three links:

<style>
.wrapper {
  height: 1111px;
}
.footer {
  background-color: #aac5fa;
  position: fixed;
  right: 0;
  bottom: 0;
  width: 99%;
  height: 101px;
}
.text {
  float: none;
  font-size: 29px;
  word-spacing: 40px;
  padding-left: 29px;
}
</style>

Step 3. Let's make the text size and background color change when hovering over the links:

<style>
a:active {
  text-decoration: none;
}
a:link {
  text-decoration: none;
}
a.menu:hover {
  font-size: 29px;
  background-color: #8eff8c;
}
</style>

Example 3

Now let's look at how to create an HTML site footer with image links to two popular websites. We won't need the <footer> tag.

Step 1. Write the code with a large paragraph, add icons for Facebook and Twitter, and include links to their websites:

<html>
<head>
<style></style>
</head>
<body>
<div id="wrapper">
This text is simply included to show how the footer will move during scrolling. This text is simply included to show how the footer will move during scrolling. This text is simply included to show how the footer will move during scrolling. This text is simply included to show how the footer will move during scrolling. This text is simply included to show how the footer will move during scrolling. This text is simply included to show how the footer will move during scrolling. This text is simply included to show how the footer will move during scrolling. This text is simply included to show how the footer will move during scrolling. This text is simply included to show how the footer will move during scrolling. This text is simply included to show how the footer will move during scrolling. This text is simply included to show how the footer will move during scrolling. This text is simply included to show how the footer will move during scrolling. This text is simply included to show how the footer will move during scrolling. This text is simply included to show how the footer will move during scrolling. This text is simply included to show how the footer will move during scrolling. This text is simply included to show how the footer will move during scrolling.
</div>
<div id="footer">
<div class="all-symbols">
<a href="https://facebook.com/"><img src="https://cdn-icons-png.flaticon.com/256/20/20837.png" class="symbol" alt="FB icon"></a>
<a href="https://x.com/"><img src="https://cdn-icons-png.flaticon.com/256/5969/5969020.png" class="symbol" alt="TW icon"></a>
</div>
</div>
</body>
</html>

Step 2. Use CSS to stick the footer at the bottom of the page and configure the colors and other parameters. The page background will be gray, and the background of the element with icons will be blue:

<style>
#wrapper {
  margin: 344px auto 433px;
  width: 399px;
}
body {
  background-color: #e2e1eb;
}
#footer {
  background-color: #301fed;
  position: fixed;
  left: 0;
  bottom: 0;
  height: 70px;
  padding-left: 29px;
  color: #f3f2ff;
  width: 99%;
}
.all-symbols {
  float: left;
  margin-top: 19px;
  padding-left: 199px;
}
.symbol {
  color: #fffcfc;
  margin-top: 6px;
  margin-left: 29px;
  height: 29px;
}
</style>

Step 3. Now make the background color turn white when hovering over the icons:

<style>
.symbol:hover {
  padding: 2px;
  background-color: #fffcfc;
}
</style>

As a result, we will have a page where the footer contains Facebook and Twitter icons that link to their respective sites.

Example 4

Let's now look at how to make an HTML footer stick at the bottom of the page with an unusual effect.
The block will be hidden beneath the page; to reveal it, users must interact with another element. In this case, that element will be a red dashed square.

Step 1. Add many paragraphs to the HTML code. This makes it easier to test the footer's fixation on the page:

<style>
</style>
<h1>To find the footer, hover your cursor over the red square.</h1>
<p>A simple paragraph is for example here.</p>
<p>A simple paragraph is for example here.</p>
<p>A simple paragraph is for example here.</p>
<p>A simple paragraph is for example here.</p>
<p>A simple paragraph is for example here.</p>
<p>A simple paragraph is for example here.</p>
<p>A simple paragraph is for example here.</p>
<p>A simple paragraph is for example here.</p>
<p>A simple paragraph is for example here.</p>
<p>A simple paragraph is for example here.</p>
<p>A simple paragraph is for example here.</p>
<p>A simple paragraph is for example here.</p>
<div class="wrapper">
<div class="footer">
<div id="button"></div>
<div id="block">
<div id="text">
<div class="footer1">
<h1>Hidden footer</h1>
</div>
</div>
</div>
</div>
</div>

Step 2. Next, add the following CSS code to define the background, footer, and other parameters. We'll use the PT Astra Sans font; the background will be light blue:

<style>
.wrapper {
  height: 1111px;
}
body {
  background-color: #8f85ff;
  font-family: PT Astra Sans;
  text-align: center;
  color: #e8e6fc;
}
.footer #block {
  position: relative;
  margin-top: 2px;
  width: 99.99%;
  height: 99%;
  background: #120f2b;
}
.footer #text {
  position: relative;
  right: 201px;
  width: 123px;
  margin: 14px auto;
  top: -51px;
}
.footer1 {
  float: left;
  width: 499px;
}
.footer h1 {
  color: #ffc9c9;
  font-family: PT Astra Sans;
  margin-top: 69px;
  margin-left: 39px;
}
</style>

Step 3. Create the red square that reveals the hidden block. Using the dashed value, give the square a dashed border so it stands out even more:

<style>
…
.footer #button {
  width: 50px;
  height: 50px;
  border: #d41542 8px dashed;
  margin: -3px auto;
}
.footer #button:hover {
  width: 50px;
  height: 50px;
  border: #d41542 8px dashed;
}
.footer {
  position: fixed;
  right: 0;
  bottom: 0;
  width: 99%;
  height: 2em;
  overflow: hidden;
  transition: all 2s ease;
}
.footer:hover {
  transition: all 2s ease;
  height: 9em;
}
…
</style>

The result is a page where the hidden CSS footer smoothly appears and disappears at any scrolling point. We can apply this effect to any part of the page. The footer only appears when the cursor hovers over the bottom of the window.

Conclusion

We reviewed four ways to stick an HTML footer on a site. The following CSS properties and values were particularly useful:

position: fixed;
right: 0;
bottom: 0;

Instead of text and links, you can include a site search form or a request for clients to leave their contact information in the footer. However, large blocks like these may interfere with page viewing. In such cases, the method from Example 4 can help: create a small element that hides and reveals the HTML footer. Check out our reliable and high-performance WordPress hosting solutions for your websites.
20 January 2025 · 9 min to read
Docker

Using Traefik in Docker as a Reverse Proxy for Docker Containers

Docker containers allow for quick and easy deployment of services and applications. However, as the number of deployed applications grows, and when multiple instances of a single service are required (especially relevant for a microservices architecture), we must distribute network traffic. For this purpose, you can use Traefik, a modern open-source reverse proxy designed specifically to work with Docker containers. In this guide, we will configure Traefik as a reverse proxy for several applications running in Docker containers.

Prerequisites

To use Traefik, the following are required:

A cloud server or a virtual machine with any pre-installed Linux distribution. We will be using Ubuntu 22.04.
Docker and Docker Compose installed. See our installation guide. You can also use a pre-configured image with Docker: go to the Cloud servers section in your Hostman control panel, click Create server, and select Docker in the Marketplace tab.

In this guide, we will use two containers with the Nginx web server. Each container will display a specific message when accessed by its domain name. We will cover the creation of these containers further below.

Configuring Traefik

Let's start by setting up Traefik.

Create a directory for storing the configuration files and navigate into it:

mkdir ~/test-traefik && cd ~/test-traefik

Inside the project's root directory, create three subdirectories: one for the Traefik configuration file and two for the configuration files of the applications that will use Traefik:

mkdir traefik app1 app2

Create the main Traefik configuration file, traefik.yml, in the traefik directory:

nano traefik/traefik.yml

Insert the following code into the file:

entryPoints:
  web:
    address: ":80"

providers:
  docker:
    exposedByDefault: false

api:
  dashboard: true
  insecure: true

Let's look closer at the parameters.

entryPoints define the ports and protocols through which Traefik will accept requests. They specify on which port and IP address the service will listen for traffic.

web: a unique name for the entry point, which can be referenced in routes. In this example, we use the name web.
address: ":80": indicates that the entry point will listen for traffic on port 80 (HTTP) across all available network interfaces on the system.

providers specify the sources of information about which routes and services should be used (e.g., Docker, Kubernetes, files, etc.).

docker: enables the Docker provider. When it is used, Traefik automatically detects running containers and routes traffic to them.
exposedByDefault: false: disables automatic exposure of all Docker containers as services. This makes the configuration more secure: only containers explicitly enabled through labels (traefik.enable=true) will be routed, that is, will accept and handle traffic.

The api section contains settings for the administrative API and Traefik's built-in monitoring web interface.

dashboard: true: enables Traefik's web-based monitoring dashboard, which lets you track active routes, entry points, and services. The dashboard is not mandatory and can be disabled by setting this to false.
insecure: true: allows access to the monitoring dashboard over HTTP. This is convenient for testing and getting familiar with the system but is unsafe in a production environment. To ensure secure access to the dashboard via HTTPS, set this to false.
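If you later want Traefik to accept HTTPS traffic as well, you could declare a second entry point. A minimal sketch (the name websecure is conventional but arbitrary, and TLS certificate configuration is omitted; it would still need to be set up separately):

entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"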
Preparing Configuration Files for Applications

Now, let's prepare the configuration files for the applications that will use Traefik as a reverse proxy. We will deploy two Nginx containers, each displaying a specific message when accessed via its address.

Create the Nginx configuration file for the first application:

nano app1/default.conf

Contents:

server {
    listen 80;
    server_name app1.test.com;

    location / {
        root /usr/share/nginx/html;
        index index.html;
    }
}

For the server name, we specify the local domain app1.test.com. You can use either an IP address or a domain name. If you don't have a global domain name, you can use any name that is accessible only at the local level. Additionally, you will need to add the chosen domain to the /etc/hosts file (explained later).

Next, create the html directory where the index.html file for the first application will be stored:

mkdir app1/html

Write the message "Welcome to App 1" into the index.html file using output redirection:

echo "<h1>Welcome to App 1</h1>" > app1/html/index.html

Repeat the same steps for the second application, using its own values:

nano app2/default.conf

Contents:

server {
    listen 80;
    server_name app2.test.com;

    location / {
        root /usr/share/nginx/html;
        index index.html;
    }
}

Set the local domain name for the second application as app2.test.com.

Create the html directory for the second application:

mkdir app2/html

Write the message "Welcome to App 2" into the index.html file:

echo "<h1>Welcome to App 2</h1>" > app2/html/index.html

Since we used local domain names, they need to be registered in the system. Open the hosts file in any text editor:

nano /etc/hosts

Add the following entries:

127.0.0.1 app1.test.com
127.0.0.1 app2.test.com

The final project structure should look like this:

test-traefik/
├── app1/
│   ├── default.conf
│   └── html/
│       └── index.html
├── app2/
│   ├── default.conf
│   └── html/
│       └── index.html
└── traefik/
    └── traefik.yml

Launching Traefik and Applications

Now let's launch Traefik and the applications. Create a docker-compose.yml file in the root project directory (test-traefik):

nano docker-compose.yml

Insert the following configuration:

version: "3.9"

services:
  traefik:
    image: traefik:v2.10
    container_name: traefik
    restart: always
    command:
      - "--configFile=/etc/traefik/traefik.yml"
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - "./traefik/traefik.yml:/etc/traefik/traefik.yml"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"

  app1:
    image: nginx:1.26-alpine
    container_name: nginx-app1
    restart: always
    volumes:
      - "./app1/default.conf:/etc/nginx/conf.d/default.conf"
      - "./app1/html:/usr/share/nginx/html"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app1.rule=Host(`app1.test.com`)"
      - "traefik.http.services.app1.loadbalancer.server.port=80"

  app2:
    image: nginx:1.26-alpine
    container_name: nginx-app2
    restart: always
    volumes:
      - "./app2/default.conf:/etc/nginx/conf.d/default.conf"
      - "./app2/html:/usr/share/nginx/html"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app2.rule=Host(`app2.test.com`)"
      - "traefik.http.services.app2.loadbalancer.server.port=80"

Use the following command to launch the containers:

docker compose up -d

If you installed the standalone docker-compose utility rather than the plugin, the command is:

docker-compose up -d

Check the status of the running containers:

docker ps

All containers should have the status Up.
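Optionally, you can also ask Traefik's API which routers it has registered; a quick check (assuming the API is exposed insecurely on port 8080, as configured above):

curl http://localhost:8080/api/http/routers

The response is a JSON array that should list the app1 and app2 routers detected from the container labels.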
Let's verify that the running Nginx containers can handle traffic. To do this, send a request to each domain name using the curl utility.

For the first application:

curl -i app1.test.com

For the second application:

curl -i app2.test.com

As you can see, both services return the messages we specified earlier.

Next, check the Traefik monitoring dashboard. Open a browser and go to the server's IP address on port 8080. In the Routers section, you will see the previously defined routers app1.test.com and app2.test.com.

Conclusion

Today, we explored Traefik's functionality using two Nginx services as an example. With Traefik, you can easily proxy applications running in Docker containers.
17 January 2025 · 7 min to read
HTML

How to Add Images in Markdown

When visiting any website, you've likely noticed that images make pages more engaging and visually appealing. If content were limited to text alone, it would look dull and monotonous. Visual elements help users better understand and remember information while also making the interface more user-friendly.

Markdown is a simple and user-friendly markup language used to create formatted text with minimal effort. It's widely used for writing documentation, articles, and blog posts. Markdown also allows you to add images to your text, which play a crucial role in visualizing content, making it more comprehensible and memorable.

Key Methods for Adding Images

There are two primary methods for adding images in Markdown: local images and external links.

Local Images

To insert images stored locally, it's essential to specify the file path correctly. It's recommended to store images either in the same directory as the Markdown file or at the same hierarchical level.

If the image is in the same directory as the .md file, simply provide the file name:

![Computer](computer.png)

If the image is in a subdirectory (e.g., /img) within the project folder, specify the path as follows:

![Computer](img/computer.png)

The text in square brackets ([Computer]) is the alternative text (alt text). This text appears if the image fails to load and helps screen readers describe the image for visually impaired users. The image path is enclosed in parentheses. Ensure the path is correct to avoid issues with image display after uploading to a server.

External Images

To insert an image hosted on the internet, use its URL:

![Image Description](https://site/photo.png)

Advantages of using external images:

Saves repository space: You don't need to store large image files locally.
Easy content management: Convenient when images are updated frequently.

Disadvantages:

Dependency on the external source: If the image is removed or the URL changes, the image will no longer display.

Image Size

Standard Markdown has no built-in support for controlling image size (except on platforms like GitHub where this feature has been added), but you can use HTML for this purpose:

<img src="/img/computer.png" alt="Computer" width="500" height="300">

Enhanced Formatting

Enhanced formatting helps draw attention and makes the content more accessible and easier to read.

Image Caption

Captions for images are important as they provide additional information to the reader:

![Computer](/img/computer.png "Text below the image")

Clickable Image

To create a clickable image that links to another resource, wrap the image syntax in square brackets and follow it with a link:

[![Computer](/img/computer.png)](https://site)

Effective Alt Text

Alt text should describe the content of the image and be clear for all users.

Bad alt text:

![Computer](/images/picture.jpg)

Good alt text:

![The first computer ever built](/img/computer.png)

Why is Alt Text Important?

Accessibility: Users with visual impairments use screen readers that read the alt text aloud.
SEO: Search engines index alt text, helping your content to be found through search queries.

Tips for Working with Images

Use images with the smallest file size possible to speed up page loading.
Optimize images before uploading to avoid large file sizes and long loading times.
Ensure that the alt text is unique and accurate.
Use relevant file names that include keywords. For example, instead of img123.png, use computer-setup.png.
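Note that in most renderers the quoted title from the Image Caption example above appears as a hover tooltip rather than a visible caption. For a true visible caption, you can mix HTML into Markdown; a sketch with placeholder paths and text:

<figure>
  <img src="img/computer.png" alt="The first computer ever built" width="500">
  <figcaption>Figure 1: The first computer ever built.</figcaption>
</figure>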
Comparison of Methods for Inserting Images

There are various methods to insert images, each with its own pros and cons. Below is a comparison table.

Method | Advantages | Disadvantages
Markdown syntax | Simple and fast insertion | Less flexibility in customization
HTML markup | Full control over style and size | More complex syntax
Combination of Markdown and HTML | Combines simplicity and flexibility | Requires basic HTML knowledge

Conclusion

Now you know how to insert images in Markdown, control their size, add captions, and make content more accessible using alt text. Images make the text more visual and help readers better comprehend the information. Check out our reliable and high-performance WordPress hosting solutions for your WordPress websites.
17 January 2025 · 4 min to read

Answers to Your Questions

What is Hostman used for, and what services do you offer?

Hostman is a cloud platform where developers and tech teams can host their solutions: websites, e-commerce stores, web services, applications, games, and more. With Hostman, you have the freedom to choose services, reserve as many resources as you need, and manage them through a user-friendly interface.

Currently, we offer ready-to-go solutions for launching cloud servers and databases, as well as a platform for testing any applications.

  • Cloud Servers. Your dedicated computing resources on servers in Poland and the Netherlands. Soon, we'll also be in the USA, Singapore, Egypt, and Nigeria. We offer 25+ ready-made setups with pre-installed environments and software for analytics systems, gaming, e-commerce, streaming, and websites of any complexity.

  • Cloud Databases. Instant setup for any popular database management system (DBMS), including MySQL, PostgreSQL, MongoDB, Redis, Apache Kafka, and OpenSearch.

  • Apps. Connect your GitHub, GitLab, or Bitbucket and test your websites, services, and applications. No matter the framework - React, Angular, Vue, Next.js, Ember, etc. - chances are, we support it on our app platform.

Can I trust Hostman to handle my sensitive data and cloud-based applications?

Your data's security is our top priority. Only you will have access to whatever you host with Hostman.

Additionally, we house our servers in Tier IV data centers, representing the pinnacle of reliability available today. All of our data centers comply with international standards: 

  • ISO: Data center design standards

  • PCI DSS: Payment data processing standards

  • GDPR: EU standards for personal data protection

What are the benefits of using Hostman as my cloud service provider?

User-Friendly. With Hostman, you're in control. Manage your services, infrastructure, and pricing structures all within an intuitive dashboard. Cloud computing has never been this convenient.

Great Uptime. Experience peace of mind with 99.99% SLA uptime. Your projects stay live, with no interruptions or unpleasant surprises.

Around-the-Clock Support. Our experts are ready to assist and consult at any hour. Encountered a hurdle that requires our intervention? Please don't hesitate to reach out. We're here to help you through every step of the process.

How does pricing work for your cloud services?

At Hostman, you pay only for the resources you genuinely use, down to the hour. No hidden fees, no restrictions.

Pricing starts as low as $4 per month, providing you with a single-core processor at 3.2 GHz, 1 GB of RAM, and 25 GB of persistent storage. On the higher end, we offer plans up to $75 per month, which gives you access to 8 cores, 16 GB of RAM, and 320 GB of persistent storage.

For a detailed look at all our pricing tiers, please refer to our comprehensive pricing page.

Do you provide 24/7 customer support for any issues or inquiries?

Yes, our technical specialists are available 24/7, providing continuous support via chat, email, phone, and WhatsApp. We strive to respond to inquiries within minutes, ensuring you're never left stranded. Feel free to reach out for any issue — we're here to assist.

Can I easily scale my resources with Hostman's cloud services?

With Hostman, you can scale your servers instantly and effortlessly, allowing for configuration upsizing or downsizing, and bandwidth adjustments.

Please note: While server disk space can technically only be increased, you have the flexibility to create a new server with less disk space at any time, transfer your project, and delete the old server.

What security measures do you have in place to protect my data in the cloud?

Hostman ensures 99.99% reliability per SLA, guaranteeing server downtime of no more than 52 minutes over a year. Additionally, we house our servers exclusively in Tier IV data centers, which comply with all international security standards.

How can I get started with Hostman's cloud services for my business?

Just sign up and select the solution that fits your needs. We have ready-made setups for almost any project: a vast marketplace for ordering servers with pre-installed software, fixed plans, a flexible configurator, and even resources for custom requests.

If you need any assistance, reach out to our support team. Our specialists are always happy to help, advise on the right solution, and migrate your services to the cloud — for free.

Is there a guaranteed Service Level Agreement (SLA) for VPS server uptime?

Hostman guarantees a 99.98% server availability level according to the SLA.

Do you have questions,
comments, or concerns?

Our professionals are available to assist you at any moment,
whether you need help or are just unsure of where to start.
Email us
Hostman's Support