
Cloud Service Provider
for Developers and Teams

We make it simple to get started in the cloud and scale up as you grow —
whether you have one virtual machine or ten thousand.
99.9% Uptime
Our cloud service provides the ultimate in server dependability and stability
Money-back Guarantee
Experience our high-speed cloud services without any risk, assured by our money-back guarantee
Easy to Deploy
Manage your services with ease using our intuitive control panel, where deploying software is a matter of minutes
Reliable and Available
Select from 6 datacenter regions around the world based on latency or deploy across regions for redundancy

Robust cloud services for every demand

See all Products

Cloud Servers

Cutting-edge hardware for cloud solutions: powerful Intel and AMD processors, ultra-fast NVMe disks

Databases

We provide cloud databases ready to store everything you have, with the most popular engines available: MySQL, Redis, Kafka, and more

App Platform

Just link your repo, pick a project to deploy, and Hostman will have it running in the cloud with just a couple of clicks from the dashboard

S3 Storage

A universal object storage compatible with the S3 protocol

Firewall

Multi-layered protection from vulnerability scanning, DDoS, and cyber-attacks

Kubernetes

Automate the management of containerized applications, from deployment and scaling to monitoring and error handling

Managed Backups

Our server and application backup feature supports both on-demand and scheduled backups, as well as one-click data restoration

Images

Create images for backup free of charge or deploy your own in the Hostman cloud

Hostman's commitment to simplicity
and budget-friendly solutions

Compare plans: 1 CPU / 2 CPU / 4 CPU / 8 CPU

Configuration: 1 CPU, 1 GB RAM, 25 GB SSD (Hostman vs. DigitalOcean, Google Cloud, AWS, Vultr)

Price: Hostman $4 | DigitalOcean $6 | Google Cloud $6.88 | AWS $7.59 | Vultr $5
Tech support: Hostman Free | DigitalOcean $24/mo | Google Cloud $29/mo + 3% of monthly charges | AWS $29/mo or 3% of monthly charges | Vultr Free
Backups: Hostman from $0.07/GB | DigitalOcean 20% or 30% higher base daily/weekly fee | Google Cloud $0.03/GB per mo | AWS $0.05/GB per mo | Vultr 20% higher base monthly/hourly fee
Bandwidth: Hostman Free | DigitalOcean $0.01 per GB | Google Cloud $0.01 per GB | AWS $0.09/GB for the first 10 TB per mo | Vultr $0.01 per GB
Live chat support
Avg. support response time: Hostman <15 min | DigitalOcean <24 hours | Google Cloud <4 hours | AWS <12 hours | Vultr <12 hours
Anup k.
Associate Cloud Engineer
5.0 out of 5

"Hostman Comprehensive Review of Simplicity and Potential"

It been few years that I have been working on Cloud and most of the cloud service...
Mansur H.
Security Researcher
5.0 out of 5

"A perfect fit for everything cloud services!"

Hostman's seemless integration, user-friendly interface and its robust features (backups, etc) makes it much easier...
Adedeji E.
DevOps Engineer
5.0 out of 5

"Superb User Experience"

For me, Hostman is exceptional because of it's flexibility and user-friendliness. The platform's ability to offer dedicated computing resources acr...
Yudhistira H.
Mid-Market(51-1000 emp.)
5.0 out of 5

"Streamlined Cloud Excellence!"

What I like best about Hostman is their exceptional speed of deployment, scalability, and robust security features. Their...
Mohammad Waqas S.
Biotechnologist and programmer
5.0 out of 5

"Seamless and easy to use Hosting Solution for Web Applications"

From the moment I signed up, the process has been seamless and straightforward...
Mohana R.
Senior Software Engineer
5.0 out of 5

"Availing Different DB Engine Services Provided by Hostman is Convenient for my Organization usecases"

Hostman manages the cloud operations...
Faizan A.
5.0 out of 5

"Hostman is a great fit for me"

Hostman is a great fit for me. What do you like best about Hostman? It was very easy to deploy my application and create database, I didn't have
Adam M.
5.0 out of 5

"Perfect website"

This website is extremely user friendly and easy to use. I had no problems so didn't have to contact customer support. Really good website and would recommend to others.
Anup K.
4.0 out of 5

"Simplifying Cloud Deployment with Strengths and Areas for Growth"

What I like best about Hostman is its unwavering commitment to simplicity...
Naila J.
5.0 out of 5

"Streamlined Deployment with Room for Improvement"

Hostman impresses with its user-friendly interface and seamless deployment process, simplifying web application hosting...

Trusted by 500+ companies and developers worldwide

Deploy a cloud server
in just a few clicks

Set up your cloud servers at Hostman swiftly and without any fees, customizing them for your business with a quick selection of region, IP range, and details—ensuring seamless integration and data flow

Code locally, launch worldwide

Our servers, certified with ISO/IEC 27001, are located in Tier 3 data
centers across the US, Europe, and Asia
🇺🇸 San Francisco
🇺🇸 San Jose
🇺🇸 Texas
🇺🇸 New York
🇳🇱 Amsterdam
🇳🇬 Lagos
🇩🇪 Frankfurt
🇵🇱 Gdansk
🇦🇪 Dubai
🇸🇬 Singapore

Latest News

Tutorials

How to Install TensorFlow

If you're serious about diving into machine learning, TensorFlow will become your reliable companion. Developed by Google, this library provides everything necessary for training neural networks. However, before you start, you'll need to install it, and the installation methods for TensorFlow vary significantly depending on your environment.

Installing TensorFlow in Different Environments

As a general guideline, choose a directory path for the library that avoids spaces or special symbols; using only Latin characters and numbers is best, with underscores as a last resort. This helps prevent errors or crashes during loading.

How to Install TensorFlow in Python

Installing TensorFlow in Python can be either straightforward or a bit more involved. The simple approach works well if you've installed Python specifically for TensorFlow. In this case, just enter the following command in a terminal or command prompt, using the pip package manager, which has been bundled with Python since version 3.4:

pip install tensorflow

To remove any package, simply replace install with uninstall.

However, this installation method may not be ideal if you're already using Python for other purposes, as it will install TensorFlow dependencies that might alter the versions of other packages you need. To avoid this, first install the following package:

pip install virtualenv

Now, let's create a virtual environment (the directory path is an example; replace it with your desired path):

cd ~
mkdir mymlproject
virtualenv ~/mymlproject/tensorflow

Next, activate the virtual environment:

source ~/mymlproject/tensorflow/bin/activate

If activation is successful, you'll see the name of the virtual environment in the command line prompt in parentheses: (tensorflow). We're almost there! Now, install TensorFlow itself with this simple command:

pip install tensorflow

That's it! You can now start working on optimizing your neural network. When you're finished, exit the virtual environment by entering:

deactivate

How to Install TensorFlow on Windows

Installing TensorFlow on Windows is also quite straightforward. You can even choose which version to install: CPU or GPU. The CPU version means computations will be processed using the power of the central processor. In contrast, the GPU version allows computations to be offloaded to a graphics processor.

To install the CPU version, use the following command:

pip install tensorflow

For the GPU version, use:

pip install tensorflow-gpu

Keep in mind that for most machine learning tasks, the CPU version is usually sufficient, while the GPU version is mainly beneficial for training deep learning models (multi-layer neural networks with complex structures).

How to Install TensorFlow in Anaconda

Installing TensorFlow in Anaconda is a slightly more complex process. Let's assume you already have Anaconda installed. First, search for "Anaconda Prompt" in the Start menu and open the application, or go to Anaconda Navigator and select CMD.exe Prompt. In the opened window, enter the following command (the path will, of course, be your own):

(base) C:\> conda create -n tensorflow

When prompted by the system to download and install all packages (there may be many, which is normal), respond by pressing Y. Next, activate the virtual environment by entering the following in the console:

conda activate tensorflow

If everything is done correctly, you will see the environment name change from (base) to (tensorflow).
Now, proceed to install TensorFlow: conda install tensorflow Again, press Y and wait for the library and all its dependencies to download. That’s it; you can now get to work. How to Install TensorFlow in PyCharm In PyCharm, this is done without any code through the menu for creating a virtual environment. In the PyCharm menu under "New environment using," select Virtualenv, and below that, specify the path where this environment will be available. Next, in the left menu, click on "Project Interpreter," find TensorFlow (you can use the built-in search), click on "Install Package" at the bottom, and wait for the installation to finish. To verify that the installation was successful, return to the previous "Project Interpreter" menu: both TensorFlow and all its dependencies should be listed there. Conclusion TensorFlow is a library developed by Google that contains everything necessary for training neural networks. In this guide, we explored various methods for installing TensorFlow in different environments.
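Whichever route you take, a quick sanity check from Python confirms that the library loads and can run a basic operation. This is a minimal sketch of our own (not part of the original walkthrough), assuming a TensorFlow 2.x installation:

import tensorflow as tf

# Print the installed version to confirm the package is importable
print(tf.__version__)

# Run a tiny computation to verify the runtime works
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(tf.reduce_sum(x))  # expected: tf.Tensor(10.0, shape=(), dtype=float32)

# List visible GPUs (an empty list is normal for the CPU-only build)
print(tf.config.list_physical_devices("GPU"))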
06 November 2024 · 4 min to read
Redis

How to Manage Redis Keys and Databases

Redis is a NoSQL, open-source database management system that stores data in memory as key-value pairs, where each key is a unique identifier for its associated values. A single Redis instance can host multiple databases, each capable of storing various data types.

Advantages of Redis

High processing speed: It can handle up to 110,000 SET and 81,000 GET operations per second.
Support for advanced data types: Besides strings, Redis databases can include lists, sets (including sorted sets), and hashes.
Atomicity: Operations are atomic, so each client receives consistent data regardless of the number of simultaneous requests.
Versatility: Redis is effective for caching, message queuing, and storing short-term information (e.g., during web sessions).
Ease of setup and use: Redis's simplicity makes it popular for development with Python, PHP, Java, and more.

In this article, we'll cover some basic commands for managing keys and databases in Redis. Each example is independent and does not need to be executed in sequence, so they can be reviewed individually. We'll execute all commands on a server running Ubuntu 22.04 with Redis version 7.0.12, using the redis-cli utility. However, these commands are compatible with other interfaces (such as Redli) and with cloud-based Redis databases.

Redis Data Types

Before diving into commands, let's look at the data types Redis supports. Redis keys are binary-coded strings with a maximum length of 512 MB, serving as identifiers for the associated values.

String

Strings are simple byte sequences with no restrictions on special characters, so they can hold nearly any content: text, integers, floating-point numbers, images, videos, and other media files, with a maximum size of 512 MB. Example:

redis 127.0.0.1:6379> SET name "educative"
OK
redis 127.0.0.1:6379> GET name
"educative"

In this example, name is the key, and educative is the string value stored in the database.

List

Lists in Redis are ordered sequences of strings, sorted in insertion order. The system can efficiently handle lists with both small (500) and large (50,000) volumes of interconnected entries, making it ideal for processing large datasets. Example of creating lists:

LPUSH mylist x   # list is now "x"
LPUSH mylist y   # list is now "y","x"
RPUSH mylist z   # list is now "y","x","z" (using RPUSH this time)

Set

Sets are similar to lists but unordered, and duplicates are not allowed. This storage method is useful when uniqueness is important but sequence order does not matter.

Sorted Set

In sorted sets, each element is associated with a score that determines the set's ordering, so users can choose the sorting criteria. If two elements have the same score, Redis orders them lexicographically.

Hash

The hash data type stores field-value pairs. A Redis hash can contain millions of objects within a compact structure, making it suitable for use cases involving large numbers of entries or accounts in a single database. Example usage:

HMSET user:1000 username antirez password P1pp0 age 34
HGETALL user:1000
HSET user:1000 password 12345
HGETALL user:1000

Managing Redis Databases

By default, Redis includes 16 isolated databases, numbered from 0 to 15, with isolation ensuring that commands affect only the selected database, not others. By default, Redis connects to database 0. You can switch databases using the SELECT command after connecting.
For instance, to switch to database 10: select 10 The selected database will then be indicated in the console prompt like this: 127.0.0.1:6379[10]˃ If you’re working in the default database 0, the prompt won’t show a database number: 127.0.0.1:6379˃ Copying Data Between Databases The swapdb command allows you to swap data between two databases, fully replacing the contents of the target database with those of the source database. For example, to swap databases 1 and 2: swapdb 1 2 If the operation is successful, Redis will display OK. Changes are applied immediately. Moving Keys Between Redis Instances The migrate command transfers a key from one Redis instance to another, removing it from the source instance. This command includes the following parameters in order: The target database’s host or IP address The target database’s port number The name of the key to be transferred The target database number (0–15) A timeout in milliseconds (maximum idle time allowed) For example: migrate 123.45.4.134 6379 key_1 6 8000 You can add one of the following options to the end of the migrate command: COPY: Retains the key in the source database while copying it to the target database. REPLACE: If the key already exists in the target database, it will be overwritten by the migrating key. KEYS: Instead of specifying a single key, you can provide a pattern after keys to transfer all matching keys, following patterns as outlined in the documentation. This enables bulk data transfers that match the specified pattern. Managing Keys Below are examples of several basic Redis commands for working with keys. To rename a key, use rename: rename old_key_name new_key_name The randomkey command is used to return a random key from the database: randomkey The output will display the key. The type command allows you to output the data type. The output will indicate one of the possible options: string, list, hash, set, zset, stream, or none — if such a key does not exist in the database. type key_name The move command transfers a key between databases within a single Redis instance (unlike migrate, which moves them to a different Redis instance). The command includes the key name and the target database file. For example, we will transfer data to database 6: move key_name 6 Upon successful execution, the output will show OK. Deleting Keys To delete one or more keys, use del: del key_name_1 key_name_2 If successful, the output will show (integer) 1. If something goes wrong, you will see (integer) 0. The unlink command is functionally similar to del but has some nuances. del temporarily blocks the client to free the memory occupied by a key. If this takes very little time (in the case of a small object), the blocking will likely go unnoticed. However, if the key is associated with a large number of objects, the deletion will take a considerable amount of time. During that time, any other operations will be impossible. Unlike del, the unlink command will first assess the cost of freeing the memory occupied by the key. If the costs are insignificant, unlink will behave like del, temporarily blocking the client. If the memory release requires significant resources, the deletion will occur asynchronously: unlink works in the background and gradually frees memory without blocking the client: unlink key_name In most cases, it is preferable to use unlink, as the ability to delete keys asynchronously and reduce errors due to blocking is a significant advantage. 
One of the following commands — flushdb or flushall — is used for bulk deletion of keys. Be very careful; this procedure occurs with no possibility of recovery (applicable to one or several databases).

To delete all keys in the current database, use:

flushdb

To remove all keys across all databases on the Redis platform, use:

flushall

Both commands have an asynchronous deletion mode; add the async option to enable this. In this case, they will behave like unlink, gradually cleaning up memory in the background while other operations continue.

Backup

To create a backup of the current Redis database, you can use:

save

As a result, a snapshot of the current information is exported to a .rdb file. It is important to note that save blocks all other clients accessing the database during its execution. Therefore, the official documentation recommends running the save command only in a testing environment. Instead, it is suggested to use bgsave. This command tells Redis to fork: the parent process continues to serve clients, while the child process writes out the database backup. If changes are made during the execution of the bgsave command, they will not be included in the snapshot.

bgsave

You can also configure automatic regular snapshots that occur when a certain number of changes have been made to the database. This creates a "save point." By default, the following settings for save points are specified in the redis.conf configuration file:

/etc/redis/redis.conf
...
save 900 1
save 300 10
save 60 10000
...
dbfilename "nextfile.rdb"
...

According to this configuration, Redis will dump a snapshot of the database to the file specified in the dbfilename line every 900 seconds if at least 1 key has changed, every 300 seconds if 10 or more keys have changed, and every 60 seconds if 10,000 or more keys have changed.

Another command for creating a backup is shutdown. It will block every client connected to the database, perform a save, and close the connection. It is important to consider that this command operates similarly to save, meaning: a snapshot will be created only if a save point is configured, and while clients are blocked during the shutdown command, the necessary data may become unavailable to users or applications. It should be used only in a testing environment and when you are fully confident that you can safely block all server clients.

shutdown

If a save point is not configured but you want to create a snapshot, add save to the command:

shutdown save

You can also bypass creating a backup if a save point is configured, but you need to turn off the server without saving. For this, use:

shutdown nosave

Additionally, if the AOF (Append Only File) function is enabled, the shutdown command flushes the append-only file (whose existing content never changes; all new data is added to the end) before shutting down. AOF logs all write operations to the storage in a .aof file. The AOF and RDB modes can be enabled simultaneously, and using both methods is an effective backup strategy.

The append-only file will gradually grow significantly, so it is recommended to enable file rewriting based on certain variables; these settings should be specified in the redis.conf file. Another method for rewriting the append-only file is to execute the command:

bgrewriteaof

This creates a concise list of the commands needed to rebuild the database to its current state.
bgrewriteaof operates in the background, but other background processes must be fully completed before it can be executed.
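The key and database operations shown above with redis-cli can also be scripted. Below is a minimal sketch using the Python redis-py client (our own illustration; the article itself only uses redis-cli), assuming a local Redis on the default port:

import redis

# Connect to database 0 on a local Redis instance
r = redis.Redis(host="localhost", port=6379, db=0, decode_responses=True)

# String: set and get a key
r.set("name", "educative")
print(r.get("name"))               # educative

# List: push elements and read them back
r.lpush("mylist", "x")
r.lpush("mylist", "y")
r.rpush("mylist", "z")
print(r.lrange("mylist", 0, -1))   # ['y', 'x', 'z']

# Hash: store field-value pairs
r.hset("user:1000", mapping={"username": "antirez", "age": 34})
print(r.hgetall("user:1000"))

# Key management: rename, check the type, delete asynchronously
r.rename("name", "first_key")
print(r.type("first_key"))         # string
r.unlink("first_key")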
06 November 2024 · 9 min to read
Network

iptables: Overview and Practical Use

The iptables utility allows you to manage the network firewall in Linux distributions. iptables is a popular command-line utility for interacting with the built-in Linux kernel firewall called Netfilter, which has been included in the Linux kernel since version 2.4.  In this article, we will examine how iptables works and go through practical usage examples. Installing iptables As mentioned, iptables is included in nearly all Linux distributions, from the most common (Ubuntu, Debian, RHEL) to distributions like openSUSE, Arch Linux, Gentoo, and others. First, let's check if iptables is already installed on your server by displaying its version with the command: iptables --version If this command returns a version number, iptables is already installed on the system. However, if you see the message iptables: command not found, you’ll need to install it manually. Below are instructions for installing iptables using package managers across various Linux distributions. Alternatively, you can compile and install iptables from the source code. APT For APT-based distributions (Ubuntu/Debian/Linux Mint/Kali Linux), use the command: apt -y install iptables RPM For RPM-based distributions (CentOS, Fedora, Red Hat Enterprise Linux, ALT Linux), use one of the following commands: For the YUM package manager: yum -y install iptables For the DNF package manager: dnf -y install iptables Pacman For Pacman-based distributions (Arch Linux, ArchLabs, Manjaro), use the command: pacman -S iptables All commands must be run as the root user or as a regular user with sudo privileges. How iptables Works iptables operates using a system of rules. These rules control incoming and outgoing traffic, organized into chains that either allow or block traffic. A more detailed breakdown of how iptables works is as follows: Network packets pass through one or more chains. As a network packet moves through a chain, each rule in that chain is applied to it. During this process, the packet is checked against specified criteria. If it does not meet a criterion, a specific action is applied to it. These actions can include allowing or blocking traffic, among other operations. Key iptables Terminology While working with iptables, you may encounter the following terms: Chain: A sequence or set of rules that determine how traffic will be handled. Rules: Defined actions that contain criteria and a target or goal. Module: An added feature that provides extra options for iptables, allowing for more extensive and complex traffic filtering rules. Table: An abstraction in iptables that stores chains of rules. iptables includes the following tables: Security, Raw, NAT, Filter, and Mangle. Each table has a specific function, described below. iptables Tables Filter Table The Filter table is the default table, using three chains: OUTPUT, FORWARD, and INPUT. INPUT: Controls incoming connections. For instance, this might manage incoming SSH connections. FORWARD: Manages incoming connections not directed to the local device, typically used on a router. OUTPUT: Controls outgoing connections, such as navigating to a website using a browser. NAT Table The NAT (Network Address Translation) table includes three chains: PREROUTING, POSTROUTING, and OUTPUT. PREROUTING: Determines the destination IP address of a packet. POSTROUTING: Alters the source IP address. OUTPUT: Changes the target address of outgoing packets. Mangle Table The Mangle table is used to modify packet IP headers. 
Raw Table The Raw table provides a mechanism for marking packets to bypass connection tracking. Security Table The Security table enables interaction with various OS security mechanisms, such as SELinux. iptables Rules The rules in iptables are designed to control incoming and outgoing network traffic. Rules can also be used to configure port forwarding and create protocol-specific rules. Each rule is made up of criteria and a target. The criteria of a rule are matched, and the specified actions are applied to the target object. If a packet doesn’t match a rule’s criteria, the next rule is processed. The decisions made by iptables are called actions. Below is a list of key actions for handling connections: ACCEPT: Opens (allows) the connection. DROP: Closes the connection without sending a response to the client. QUEUE: Sends the packet to a queue for further processing by an external application. RETURN: Returns the packet to the previous rule, stopping the processing of the current rule. REJECT: Blocks the connection and sends an error message in response. DENY: Drops the incoming connection without sending a response. ESTABLISHED: Marks an already established connection, as the session has already received at least one packet Practical Application of iptables Let's look at using iptables in practice. All the commands below will work on any Linux distribution. iptables commands must be run as the root user or a regular user with sudo privileges. To display the current iptables configuration (including all existing rules), use the command: iptables --list For a more detailed output, which includes the number and size of processed packets in the INPUT, FORWARD, and OUTPUT chains, along with IP addresses and port numbers in numeric format, use: iptables --line-numbers -L -v -n You can also specify a specific chain to display rules for just that chain, such as: iptables -L INPUTiptables -L FORWARDiptables -L OUTPUT Initially, iptables does not create or store any rule chains, so the output of these commands may be empty. Blocking IP Addresses To block a specific IP address, add a rule to the INPUT chain and specify the appropriate table. In the command below, the table is explicitly set. If the -t option is omitted, the rule is added to the default Filter table. For example, to block the IP address 10.0.36.126: iptables -t filter -A INPUT -s 10.0.36.126 -j REJECT This command uses the following options: -t: Specifies the table for the rule. -A: Adds the rule to the specified chain, in this case, the INPUT chain. -s: Specifies the source IP address to which the action applies. -j: Specifies the action to take; here, traffic is rejected (action REJECT). To block an entire subnet, specify it with the -s option: iptables -A INPUT -s 10.0.36.0/24 -j REJECT Or, you can specify the subnet mask in full format: iptables -A INPUT -s 10.0.36.0/255.255.255.0 -j REJECT To block outgoing traffic to a specific IP address, use the OUTPUT chain and the -d option: iptables -A OUTPUT -d 10.0.36.126 -j REJECT Blocking Ports Ports can be blocked by specifying them directly. This is done with the --dport option, which designates the port of the service. Instead of a port number, you can use the service name. You must specify the protocol as well. 
For example, to block SSH connections from host 10.0.36.126 using the TCP protocol: iptables -A INPUT -p tcp --dport ssh -s 10.0.36.126 -j REJECT For the UDP protocol, use: iptables -A INPUT -p udp --dport ssh -s 10.0.36.126 -j REJECT Alternatively, to block SSH connections from 10.0.36.126 using the SSH service port (22), use: iptables -A INPUT -p tcp --dport 22 -s 10.0.36.126 -j REJECT To block SSH connections from any IP address over TCP: iptables -A INPUT -p tcp --dport ssh -j DROP Allowing an IP Address To allow traffic from a specific IP address, use the ACCEPT action. In the example below, all traffic from the IP address 10.0.36.126 is allowed: iptables -A INPUT -s 10.0.36.126 -j ACCEPT To allow traffic from a specific range of IP addresses, for example, from 10.0.36.126 to 10.0.36.156, use the iprange module and the --src-range option: iptables -A INPUT -m iprange --src-range 10.0.36.126-10.0.36.156 -j ACCEPT Here: iprange: A module for working with IP address ranges. --src-range: Specifies the source IP address range. To perform the reverse operation (allowing all traffic from the server to a specific IP range from 10.0.36.126 to 10.0.36.156), use the --dst-range option: iptables -A OUTPUT -m iprange --dst-range 10.0.36.126-10.0.36.156 -j ACCEPT --dst-range: Specifies the destination IP address range. Opening Ports To open a port, specify the protocol using the -p option. Supported protocols include tcp, udp, etc. A full list of supported protocols can be found in /etc/protocols: cat /etc/protocols Specify the port using the --dport option. You can use either numeric values or service names. The ACCEPT action is used to open ports. To open port 22 for TCP traffic from IP address 10.0.36.126: iptables -A INPUT -p tcp --dport 22 -s 10.0.36.126 -j ACCEPT To open multiple ports at once, use the multiport module and the --dports option, listing the ports separated by commas. For example, to open ports 22, 80, and 443 over TCP from IP address 10.0.36.126: iptables -A INPUT -p tcp -m multiport --dports 22,80,443 -s 10.0.36.126 -j ACCEPT multiport: A module for managing multiple ports simultaneously. --dports: Specifies multiple ports, unlike --dport, which supports only a single port. Blocking ICMP Traffic One commonly used feature in iptables is blocking ICMP traffic, often generated by the ping utility. To block incoming ICMP traffic, use the following command: iptables -A INPUT -j DROP -p icmp --icmp-type echo-request This command will prevent the ping command from receiving a response without displaying an error message. If you want to display an error message like "Destination Port Unreachable," replace the DROP action with REJECT: iptables -A INPUT -j REJECT -p icmp --icmp-type echo-request Allowing ICMP Traffic To allow previously blocked ICMP traffic, run the following command: iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT However, it’s important to note that if ICMP traffic was previously blocked with this command: iptables -A INPUT -j DROP -p icmp --icmp-type echo-request and then allowed with: iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT the ICMP traffic will still be blocked, as the drop rule will be the first rule in the INPUT chain. Blocking Traffic by MAC Address In addition to IP addresses, traffic can be blocked based on the device’s MAC address. 
Below is an example to block traffic from a device with the MAC address 00:0c:29:ed:a9:60: iptables -A INPUT -m mac --mac-source 00:0c:29:ed:a9:60 -j DROP mac: A module for working with device MAC addresses. mac-source: Specifies the MAC address of the device. Allowing Traffic by MAC Address To allow traffic from a specific MAC address, use this command: iptables -A INPUT -m mac --mac-source 00:0c:29:ed:a9:60 -j ACCEPT Blocking traffic by MAC address with iptables will only work if the devices are on the same network segment. For broader use cases, blocking traffic by IP address is generally more effective. Allowing Traffic on the Loopback Interface Traffic on the loopback interface can also be controlled. To allow incoming traffic on the loopback interface, use: iptables -A INPUT -i lo -j ACCEPT For outgoing traffic on the loopback interface, the command is: iptables -A OUTPUT -o lo -j ACCEPT Restricting Network Access by Schedule One of the useful features of iptables is the ability to temporarily allow or restrict traffic to specific services or ports based on a schedule. For example, let’s say we want to allow incoming SSH access only on weekdays, Monday through Friday, from 9 AM to 6 PM. The command would look like this: iptables -A INPUT -p tcp --dport 22 -m time --timestart 09:00 --timestop 18:00 --weekdays Mon,Tue,Wed,Thu,Fri -j ACCEPT time: Module for working with time-based rules. timestart: Specifies the start time for the rule. timestop: Specifies the end time for the rule. weekdays: Specifies the days of the week when the rule will be active, separated by commas. Supported values are: Mon, Tue, Wed, Thu, Fri, Sat, Sun, or numbers 1 to 7. Saving iptables Rules By default, user-created iptables rules are not saved automatically. This means that the rules are cleared after a server reboot or shutdown. To save the rules, install the iptables-persistent package with the following command: apt -y install iptables-persistent During the installation, two dialog boxes will appear, allowing you to save the current rules to /etc/iptables/rules.v4 for IPv4 and /etc/iptables/rules.v6 for IPv6. To manually save all rules for the IPv4 protocol, use: iptables-save > /etc/iptables/rules.v4 For IPv6 rules, use: ip6tables-save > /etc/iptables/rules.v6 This method has a significant advantage: saved rules can be restored from the file, which is helpful, for example, when transferring rules to another host. To restore previously saved rules, run: iptables-restore < /etc/iptables/rules.v4 If executing this command on a different host, transfer the rule file first and then execute the restore command. Deleting Rules in iptables You can delete rules in iptables using several methods. Deleting a Specific Rule One way to delete a rule is to target a specific rule in a chain using its line number. To display the rule numbers, use: iptables -L --line-numbers For example, in the INPUT chain, we might see two rules that open ports 80 and 443 over TCP for IP addresses 10.0.36.126 (rule number 1) and 10.0.36.127 (rule number 2). To delete rule number 2, use: iptables -D INPUT 2 Then, display the list of all current rules to verify: iptables -L --line-numbers Rule number 2 should now be removed successfully. Deleting All Rules in a Specific Chain You can also delete all rules in a specific chain at once. 
For example, to clear all rules in the OUTPUT chain: iptables -F OUTPUT Deleting All Rules To delete all rules across all chains, simply run: iptables -F Use caution with this command, as it will remove all existing rules, including potentially essential ones. Conclusion In summary, iptables is a powerful tool for managing the built-in firewall in Linux-based operating systems. Its extensive features and modular support allow flexible configuration for controlling network traffic. For more detailed information on iptables, consult the official documentation or use the man iptables command in Linux-based systems.
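Rules like the ones in this article are usually entered interactively, but they can also be applied from a provisioning script. The sketch below is a hypothetical Python example of our own that simply shells out to iptables; it assumes the script runs as root and that iptables-persistent is installed so the rules can be saved:

import subprocess

def run(cmd):
    # Execute a command and fail loudly if iptables rejects it
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Block a single address, allow SSH from a trusted subnet, then persist the rules
run(["iptables", "-A", "INPUT", "-s", "10.0.36.126", "-j", "REJECT"])
run(["iptables", "-A", "INPUT", "-p", "tcp", "--dport", "22",
     "-s", "10.0.36.0/24", "-j", "ACCEPT"])

# Save the current rule set so it survives a reboot (iptables-persistent path)
with open("/etc/iptables/rules.v4", "w") as f:
    subprocess.run(["iptables-save"], stdout=f, check=True)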
05 November 2024 · 11 min to read
Network

How to Use tcpdump to Capture and Analyze Network Traffic

Sometimes, troubleshooting network issues requires capturing network traffic. tcpdump is a network traffic analyzer, or "sniffer," that allows you to intercept and analyze network traffic passing through the utility. This tool provides a rich set of options and various filters, making it versatile for different purposes. tcpdump is entirely console-based, meaning it doesn’t have a graphical interface, so it can be run on servers without GUI support. The first version of tcpdump was released back in 1988. Since then, it has been actively maintained, with new versions released every year. This article will cover various scenarios for using tcpdump. Prerequisites To follow this tutorial, you will need:  A cloud server or virtual machine with a Linux OS installed. Any Linux distribution will work. Access to the root user or a user with sudo privileges. Installing tcpdump We will install tcpdump on Ubuntu 22.04. The tcpdump package is available in the OS’s official repository. First, update the package index: sudo apt update Next, install the utility: sudo apt -y install tcpdump Confirm that the installation was successful by checking the tcpdump version: tcpdump --version Note that further use of the utility requires running it as the root user or a user with sudo privileges. Running tcpdump Without Parameters If you run tcpdump without any parameters, it will start capturing all traffic on all available interfaces in the system and display the data on the screen (stdout): tcpdump To stop the program, press Ctrl + C. After each run, tcpdump provides the following information: packets captured — shows the number of packets captured (packets that were received and processed by tcpdump). packets received by filter — shows the number of packets captured using filters. packets dropped by kernel — shows the number of packets dropped by the OS kernel. By default, tcpdump does not save its output. We will discuss saving the output to a file later in the article. tcpdump Output Format Let's analyze the output of a captured packet using the TCP protocol as an example. By default, tcpdump displays the following data for each capture: 09:33:57.063196 IP nexus-test.com.ssh > 192.168.111.1.50653: Flags [P.], seq 27376:27440, ack 321, win 521, length 64 The parameter descriptions are provided in the table below. Parameter Description 09:33:57.063196 Timestamp. Uses the format hours:minutes:seconds.fraction, where "fraction" represents seconds from midnight. In this example, the packet was captured at 9:33:57.063196. IP Protocol used. nexus-test.com.ssh Domain name (or IP address) and port of the source host. Here, ssh is shown instead of port number 22. To display addresses and protocols as numbers, run tcpdump with the -n option. 192.168.111.1.50653 Domain name (or IP address) and port of the destination host. Flags [P.] ACK flag(s) used to indicate the connection state. Multiple values are possible. In this example, P is used, indicating the PUSH flag for processing packets immediately rather than buffering them. seq 27376:27440 Sequence number of data in the packet. Shows the data range as bytes 27376 through 27440 in the packet. ack 321 Acknowledgment of the received packet. win 521 Window size in bytes, showing the available buffer space for receiving data. length 64 Packet length in bytes, indicating the payload size as the difference between the first and last sequence bytes. Practical Use of tcpdump Let’s move on to practical applications of tcpdump with examples. 
Displaying a List of Network Interfaces To list all network interfaces available in the system for traffic capture, use: tcpdump -D Capturing Traffic from a Specific Network Interface By default, tcpdump captures traffic from all available interfaces. To capture traffic from a specific network interface (e.g., ens33), use: tcpdump -i ens33 Disabling IP Address to Hostname Resolution By default, tcpdump converts IP addresses to hostnames and replaces port numbers with service names. To prevent tcpdump from converting IP addresses to hostnames, add the -n option: tcpdump -n To disable both IP-to-hostname and port-to-service name conversions, use the -nn option: tcpdump -nn Capturing a Specific Number of Packets By default, tcpdump captures an unlimited number of packets. To capture a specified number of packets, for example, 4 packets, use the -c option: tcpdump -c 4 Adding Date Information tcpdump does not display the date of packet capture by default. To include the date in the output, use the -tttt option. The date will appear at the beginning of each line in the format year:month:day: tcpdump -tttt Packet Filtering in tcpdump tcpdump has extensive filters that allow capturing only the desired packets. Here are some key filters. Filtering by Port To capture traffic on a specific port, use the port option. For example, to capture traffic on port 80 directed towards the destination, you can specify dst: tcpdump dst -n port 80 You can also specify a range of ports: tcpdump -n portrange 80-443 Filtering by Protocol tcpdump supports filtering by protocols. Supported protocol values include: ether, fddi, tr, wlan, ppp, slip, link, ip, arp, rarp, tcp, udp, icmp, and ipv6. Examples for capturing specific protocols are: tcpdump icmp tcpdump tcp tcpdump arp tcpdump udp Filtering by Packet Size tcpdump allows capturing packets of a specified size using two options: less — captures packets smaller than the specified number of bytes. greater — captures packets larger than the specified number of bytes. Here are some examples: Capture traffic with packets that are no more than 43 bytes in size: tcpdump less 43 Capture traffic with packets larger than 43 bytes: tcpdump greater 43 Note that the packet size includes header size: an Ethernet header without CRC occupies 14 bytes, an IPv4 header occupies 20 bytes, and an ICMP header occupies 8 bytes. Filtering by MAC Address To filter by MAC address, use the ether host option. For example, to capture any traffic sent to or from a specified MAC address (e.g., 00:0c:29:c7:00:3f), use: tcpdump ether host 00:0c:29:c7:00:3f Filtering by Source or Destination Address You can filter traffic using the IP address or hostname of the source or destination. To capture traffic originating from a specific host, use the src option: tcpdump -nn src 192.168.36.132 To capture traffic directed to a specific host, use the dst option: tcpdump -nn dst 192.168.36.132 Using Logical Operators in tcpdump tcpdump supports various logical operators, allowing you to combine options. The following operators are supported: and or && — logical "AND." Combines multiple conditions and shows results matching all conditions. or or || — logical "OR." Combines multiple conditions and shows results matching at least one condition. not or ! — logical "NOT." Excludes specified conditions, showing results that do not match the given condition. 
Here are examples of using logical operators: Capture packets sent from the host 192.168.36.132 and only those listening on port 22: tcpdump -nn src 192.168.36.132 and port 22 Capture packets on all available interfaces that are listening on either port 22 or port 80: tcpdump -nn port 22 or port 80 Capture all packets except ICMP packets: tcpdump -nn not icmp Saving Output to a File As previously mentioned, tcpdump does not save its output to a file by default. To save captured data to a file, use the -w option, specifying the filename with a .pcap extension: tcpdump -nn src 192.168.36.132 -w results.pcap While saving to a file, results will not display in the terminal. To stop capturing packets, press CTRL + C. To read the data saved in the file, use the -r option, followed by the filename where the tcpdump results were saved: tcpdump -r results.pcap Conclusion tcpdump is a powerful command-line tool for analyzing networks and identifying issues. The utility supports a wide array of options, enabling users to filter for specific packet information.
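If you need to automate a capture, for example as part of a troubleshooting script, the same flags can be passed from code. Here is a minimal Python sketch (our own illustration, assuming tcpdump is installed and the script runs with sufficient privileges):

import subprocess

INTERFACE = "ens33"        # capture interface; adjust to your system
PCAP_FILE = "results.pcap"

# Capture 100 packets on port 22 or 80 and write them to a .pcap file
subprocess.run(
    ["tcpdump", "-nn", "-i", INTERFACE, "-c", "100",
     "-w", PCAP_FILE, "port", "22", "or", "port", "80"],
    check=True,
)

# Read the saved capture back for a quick look
subprocess.run(["tcpdump", "-nn", "-r", PCAP_FILE], check=True)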
05 November 2024 · 7 min to read
Terraform

Installing Terraform: Step-by-Step Guide

Terraform is an infrastructure management tool from HashiCorp that enables the deployment and management of infrastructure across various cloud platforms. Its main advantage is its ability to automate the creation and management of infrastructure, making it an essential tool for DevOps engineers and system administrators. This guide covers the steps to install Terraform on both Windows and Ubuntu. What Terraform Is and How It Works Terraform automates infrastructure deployment and management in cloud environments. It manages a variety of resources—such as virtual machines, networks, and data storage—through a single tool. Terraform uses configuration files written in HashiCorp Configuration Language (HCL) to describe the infrastructure to be created. These HCL files let users specify the infrastructure setup, including all resources required. For example, in a configuration file, you can describe a virtual machine with specific attributes, a database, or any other service you want to provision. When you run Terraform, it reads these configuration files and creates the resources described within them. Terraform considers dependencies between resources and creates them in the correct order. If the user changes the Terraform configuration—for instance, by adding a new resource or modifying an existing one—Terraform identifies the necessary adjustments to bring the infrastructure to the desired state as described in the configuration files. One of Terraform’s main benefits is its ability to manage infrastructure across different cloud environments with a single tool. This flexibility simplifies moving infrastructure between cloud platforms and streamlines infrastructure management overall. Installing Terraform on Windows There are several ways to install Terraform on Windows, including: Using a package manager like Chocolatey Manual installation Here’s a guide for both methods. Installation Using Chocolatey Chocolatey is a package manager for Windows that allows software installation, updates, and management from the command line. If Chocolatey is not yet installed, follow the guide on its official website. Once installed, you can install software from the command line using choco install. Here’s the syntax: choco install <package_name> To install Terraform with Chocolatey: Open the command prompt as an administrator from the Start menu. Run the installation command: choco install terraform After the installation, you can verify it with terraform -v: C:\Windows\system32>terraform -vTerraform v1.3.6on windows_amd64 Manual Installation One drawback of using a package manager is the possibility of downloading an outdated version. For the latest version, consider installing Terraform manually. To do this: Visit the HashiCorp website and download the appropriate version for your system. Extract the contents to a preferred folder, such as C:\Terraform. Note that the Terraform command won’t work directly from the command line without the full path unless added to your system path. C:\Windows\system32>terraform -v 'terraform' is not recognized as an internal or external command, operable program or batch file. C:\Windows\system32>C:\Terraform\terraform -v Terraform v1.3.6 on windows_amd64 To make Terraform accessible directly from the command line, add it to the system PATH: Open Control Panel and go to System and Security. In the System section, click on Advanced System Settings. In the System Properties window, go to the Advanced tab. Click Environment Variables. 
In the System Variables section, find the PATH variable and click Edit. Click New in the Edit Environment Variable window and add the path to the folder where you extracted Terraform (e.g., C:\Terraform). Click OK. Verify the installation in the command prompt: terraform -v Terraform v1.3.6on windows_amd64 Installing Terraform on Ubuntu To install Terraform on Ubuntu, follow these steps: Step 1: Open the terminal and update the package list: sudo apt update Step 2: Install necessary packages for downloading and installation: sudo apt install wget unzip Step 3: Navigate to the directory where you want to install Terraform, for example: cd ~ Step 4: Download the latest version from HashiCorp’s website: wget https://releases.hashicorp.com/terraform/0.x.x/terraform_0.x.x_linux_amd64.zip Replace 0.x.x with the desired version. A list of available versions can be found on the Terraform release page. Step 5: Unzip the downloaded archive: unzip terraform_0.x.x_linux_amd64.zip Step 6: Move the extracted file to /usr/local/bin to make it accessible system-wide: sudo mv terraform /usr/local/bin/ Step 7: Verify that Terraform is installed and accessible by checking its version: terraform -v You should see the Terraform version you installed. Terraform is now ready for use. Advantages of Terraform Terraform offers several advantages for infrastructure management: Automation: Infrastructure is described in configuration files, simplifying deployment and management. Unified Configuration Language: Terraform uses a single configuration language, making it possible to manage infrastructure across different cloud environments with one tool. Dependency Management: Terraform allows you to define dependencies between resources, helping manage the order in which resources are created or destroyed. Rollback Capability: Terraform tracks infrastructure changes, enabling you to roll back updates if needed. Conclusion Terraform is a powerful cloud infrastructure management tool. It lets you create, modify, and delete resources across multiple cloud providers using a single configuration language, which offers convenience, speed, and consistency in infrastructure management. Additionally, you can back up configurations and manage changes through version control tools.
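Once the binary is on the PATH, Terraform is normally driven from the command line, but the same init/plan/apply workflow can be wrapped in a short script, which is handy in CI. A hedged Python sketch of our own (it assumes terraform is installed as shown above and that a hypothetical ./infra directory contains your .tf files):

import subprocess

WORKDIR = "./infra"   # hypothetical directory containing your .tf configuration

def terraform(*args):
    # Run a terraform subcommand inside the configuration directory
    subprocess.run(["terraform", *args], cwd=WORKDIR, check=True)

terraform("init")                   # download providers and set up the backend
terraform("plan", "-out=tfplan")    # preview the changes and save the plan
terraform("apply", "tfplan")        # applying a saved plan does not prompt for approval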
02 November 2024 · 5 min to read
MongoDB

How to Work with MongoDB Queries

MongoDB is not a traditional relational database management system. This is reflected in many aspects: the principles of database design, the format of stored data, the approach to scaling, and the query language. The primary language for creating queries in relational databases is SQL, but MongoDB uses its own query syntax. In this guide, we'll discuss how to create queries and which commands are used.

Structure of MongoDB

A MongoDB database consists of collections, similar to tables in relational databases. Each collection has its own unique name. Unlike table-based databases, collections do not have a strict data structure with a fixed number of columns and data types. Collections store documents, which are objects that resemble JSON format. An example of an employee document might look like this:

{ "name": "Mary", "surname": "Jameson", "age": 27, "salary": "100000", "department": "marketing", "date of birth": "15.02.1997" }

Data in documents can be represented in various data types. In this example, most of the values are described as strings.

Database and Collection-Level Queries

We will be writing and composing queries in MongoDB Compass. For more details on installing MongoDB on Ubuntu, refer to our tutorial. After installation, three databases will be available on the local server. To display them, we use the show databases command:

show databases
# Output:
admin   40.00 KiB
config  60.00 KiB
local   40.00 KiB

A shorter version, show dbs, can also be used:

show dbs
# Output:
admin   40.00 KiB
config  60.00 KiB
local   40.00 KiB

To work with a specific database, use the use command:

use testdb

Output: 'switched to db testdb'

The testdb database does not exist on our server. If the terminal cannot find the specified database when executing the use command, it will create a new one with that name. Use the db.createCollection("collection_name") command to create collections. Let's create a collection in the testdb database:

db.createCollection('cloud')
{ ok: 1 }

To list collections:

show collections
cloud

To delete a collection, use the drop() command:

db.cloud.drop()
true

List collections again to confirm deletion:

show collections

To delete the entire database while working with it, use the db.dropDatabase() command:

db.dropDatabase()

Output: { ok: 1, dropped: 'testdb' }

Adding and Deleting Documents

Let's restore the database and create a collection in it:

use testdb
# Output: 'switched to db testdb'

db.createCollection('employees')
# Output: { ok: 1 }

Then, we'll add the first document:

db.employees.insert({ name: "Mary", surname: "Jameson", age: 27, salary: 100000, department: "marketing", date_of_birth: "15.02.1997" })

Output:

'DeprecationWarning: Collection.insert() is deprecated. Use insertOne, insertMany, or bulkWrite.'
{ acknowledged: true, insertedIds: { '0': ObjectId("637c9cbd7025c2523a76fe64") } }

After executing insert, the terminal suggests using the insertOne, insertMany, or bulkWrite methods instead, as insert is now deprecated.
insertOne - Adding a Single Document The insertOne method adds one document to the collection: db.employees.insertOne({ name: "James", surname: "Johns", age: 37, salary: 150000, department: "QA", date_of_birth: "12.06.1987" }) Output: { acknowledged: true,  insertedId: ObjectId("637ca6127025c2523a76fe65") } insertMany - Adding Multiple Documents The insertMany method adds an array of documents to the collection: db.employees.insertMany( [{ name: "Andrew", surname: "Stuart", age: 21, salary: 12000, department: "Tech Support", date_of_birth: "15.10.2003" }, { name: "Natalie", surname: "Richardson", age: 45, salary: 200000, department: "HR", date_of_birth: "6.05.1979" }] ) Output: { acknowledged: true, insertedIds: { '0': ObjectId("637ca7817025c2523a76fe66"), '1': ObjectId("637ca7817025c2523a76fe67") } } bulkWrite - Performing Multiple Operations The bulkWrite method allows you to perform multiple operations, including inserting, deleting, and updating documents: db.employees.bulkWrite([{ insertOne: { document: { name: "Michael", surname: "Smith", age: 32, salary: 20000, department: "Tech Support", date_of_birth: "10.01.1992" } } }]) Output: { acknowledged: true, insertedCount: 1, insertedIds: { '0': ObjectId("637cafaa7025c2523a76fe68") }, matchedCount: 0, modifiedCount: 0, deletedCount: 0, upsertedCount: 0, upsertedIds: {} } Using these methods—insertOne, insertMany, and bulkWrite—can improve efficiency and provide more control over document management in MongoDB. Document Search The find() query is MongoDB's equivalent of SQL's SELECT. This query searches and displays documents in a collection. To start, let’s output all documents in the employees collection: db.employees.find() Output: [ { "_id": ObjectId("637c9cbd7025c2523a76fe64"), "name": "Mary", "surname": "Jameson", "age": 27, "salary": 100000, "department": "marketing", "date_of_birth": "15.02.1997" }, { "_id": ObjectId("637ca6127025c2523a76fe65"), "name": "James", "surname": "Johns", "age": 37, "salary": 150000, "department": "QA", "date_of_birth": "12.06.1987" }, { "_id": ObjectId("637ca7817025c2523a76fe66"), "name": "Andrew", "surname": "Stuart", "age": 21, "salary": 12000, "department": "Tech Support", "date_of_birth": "15.10.2003" }, { "_id": ObjectId("637ca7817025c2523a76fe67"), "name": "Natalie", "surname": "Richardson", "age": 45, "salary": 200000, "department": "HR", "date_of_birth": "6.05.1979" } ] Search by Criteria To find a document with specific parameters, pass them as arguments to the find() query. For example, let’s find an employee with a salary of 50000: db.employees.find({ salary: 50000 }) Output: [] In this case, no employees have a salary of 50000, so the output is empty. 
If there are multiple parameters, list them separated by commas: db.employees.find({ salary: 12000, name: "Andrew" }) Output: [ { "_id": ObjectId("637ca7817025c2523a76fe66"), "name": "Andrew", "surname": "Stuart", "age": 21, "salary": 12000, "department": "Tech Support", "date_of_birth": "15.10.2003" } ] Find with OR Condition To set an OR condition in the MongoDB query, use $or: db.employees.find({ $or: [{ salary: 50000 }, { name: "Natalie" }] }) Output: [ { "_id": ObjectId("637ca7817025c2523a76fe67"), "name": "Natalie", "surname": "Richardson", "age": 45, "salary": 200000, "department": "HR", "date_of_birth": "6.05.1979" } ] Search with Comparison The following comparison operators are used: $lt — less than $lte — less than or equal to $gt — greater than $gte — greater than or equal to $ne — not equal For example, let’s find employees with a salary greater than 100000 and under the age of 30: db.employees.find({ salary: { $gte: 100000 }, age: { $lt: 30 } }) Output: [ { "_id": ObjectId("637c9cbd7025c2523a76fe64"), "name": "Mary", "surname": "Jameson", "age": 27, "salary": 100000, "department": "marketing", "date_of_birth": "15.02.1997" } ] Sorting The sort() method sorts documents based on a given parameter and takes a number: 1 for ascending order or -1 for descending order. Let’s sort employees by age: db.employees.find().sort({ age: 1 }) Output: [ { "_id": ObjectId("637ca7817025c2523a76fe66"), "name": "Andrew", "surname": "Stuart", "age": 21, "salary": 12000, "department": "Tech Support", "date_of_birth": "15.10.2003" }, { "_id": ObjectId("637c9cbd7025c2523a76fe64"), "name": "Mary", "surname": "Jameson", "age": 27, "salary": 100000, "department": "marketing", "date_of_birth": "15.02.1997" }, { "_id": ObjectId("637ca6127025c2523a76fe65"), "name": "James", "surname": "Johns", "age": 37, "salary": 150000, "department": "QA", "date_of_birth": "12.06.1987" }, { "_id": ObjectId("637ca7817025c2523a76fe67"), "name": "Natalie", "surname": "Richardson", "age": 45, "salary": 200000, "department": "HR", "date_of_birth": "6.05.1979" } ] Limiting Results To limit the number of documents returned, use the limit() method: db.employees.find().sort({ age: 1 }).limit(2) Output: [ { "_id": ObjectId("637ca7817025c2523a76fe66"), "name": "Andrew", "surname": "Stuart", "age": 21, "salary": 12000, "department": "Tech Support", "date_of_birth": "15.10.2003" }, { "_id": ObjectId("637c9cbd7025c2523a76fe64"), "name": "Mary", "surname": "Jameson", "age": 27, "salary": 100000, "department": "marketing", "date_of_birth": "15.02.1997" } ] Document Updates To update documents in a MongoDB database, use a query with the update() command. It takes two parameters: the first specifies which documents to update, and the second indicates which fields to change and their new values. Here’s an example: db.employees.update({ name: 'Mary' }, { $set: { name: 'Anna', age: 51, date_of_birth: '15.11.1972' } }) Then, to verify the update, we can search for documents with the new name: db.employees.find({ name: 'Anna' }) Output: [ { "_id": ObjectId("637c9cbd7025c2523a76fe64"), "name": "Anna", "surname": "Jameson", "age": 51, "salary": 100000, "department": "marketing", "date_of_birth": "15.11.1972" } ] Renaming Fields To rename fields, use the $rename operator. In this case, let’s rename name to first_name: db.employees.updateMany({}, { $rename: { name: 'first_name' } }) Now, all documents with the name field will have it replaced by first_name. 
Document Deletion MongoDB provides two functions for deleting documents: deleteOne() and deleteMany(). deleteOne() The deleteOne() function deletes the first document that matches the criteria. For example, let’s delete an employee named "Natalie": db.employees.deleteOne({ first_name: 'Natalie' }) Output: { acknowledged: true, deletedCount: 1 } deleteMany() The deleteMany() function deletes all documents that match the criteria. Let’s delete all employees in the Tech Support department: db.employees.deleteMany({ department: 'Tech Support' }) Output: { acknowledged: true, deletedCount: 2 } After these deletions, only two documents should remain in the employees collection. Let’s check: db.employees.find() Final Output: [ { "_id": ObjectId("637c9cbd7025c2523a76fe64"), "first_name": "Anna", "surname": "Jameson", "age": 51, "salary": 100000, "department": "marketing", "date_of_birth": "15.11.1972" }, { "_id": ObjectId("637ca6127025c2523a76fe65"), "first_name": "James", "surname": "Johns", "age": 37, "salary": 150000, "department": "QA", "date_of_birth": "12.06.1987" } ] In summary, these operations—update, $rename, deleteOne, and deleteMany—allow you to manage document updates and deletions efficiently in MongoDB. MongoDB Query Optimization To improve the speed of your queries, consider the following tips: Create indexes for frequently used queries. Limit the number of returned documents. MongoDB retrieves all matching documents by default, so use the limit() method if you only need part of the result. Return only necessary fields from documents. You can enhance query performance by specifying only the required fields in the search result. Use more "selective" queries. For example, checking by _id will return no more than one document. Aim to use parameters in MongoDB queries that most accurately describe the documents you need. Conclusion MongoDB provides a straightforward query syntax that enables efficient document management within databases. In this material, we covered basic MongoDB queries that encompass primary document management tasks. We used a database hosted locally, but you can also host a database in the cloud using Hostman Managed Databases. Cloud databases are easier to administer and scale and can be managed and created using a web interface.
02 November 2024 · 10 min to read
Virtualization

Installing and Configuring Hyper-V Server 2019

Hyper-V is a hypervisor that provides hardware virtualization. Each virtual machine operates on virtual hardware. Hyper-V allows you to create virtual hard disks, virtual switches, and other devices that can be added to virtual machines. Hyper-V Installation The installation involves activating the corresponding role in Windows Server 2019 Datacenter or Standard. Launch the Server Manager. Select Add Roles and Features. Choose Hyper-V from the menu. Click Add Features to add all necessary Microsoft Hyper-V components. Next, configure the roles. This is done in three sections. First, Virtual Switches: Select a network interface. It will serve as the virtual switch for the hypervisor. Do not use the primary interface for this, as you will need it for physical access to the server. The next section is Migration. You can leave the default settings as they suit our purposes. On the Default Stores tab, also leave the standard parameters. You can change the directories where the virtual hard disk and VM configuration files are stored at any time in the hypervisor settings. You have prepared the configuration for installation. To start it, click on Install. Hyper-V Configuration The next step in working with Hyper-V is configuration. But first, check the server configuration. Launch the Run window (press Win+R) and execute the command sconfig. In response, a window displaying the configuration will appear. There is a lot of information, so pay attention to these key parameters: Domain/Workgroup — specified when creating the server. Computer Name — also set in the previous step. Configure Remote Management — enabled by default. This feature will be useful later for management through special utilities or the PowerShell console. Remote Desktop — an option that allows you to manage the machine remotely, enabled by default. Network Settings — network parameters of the virtual switch. You specified these when configuring the server while working with the Virtual Switches section. You do not need to change or configure anything further if you created virtualization servers according to the previous section. You can proceed to remotely manage the server. Remote Management You can manage the server in two ways. The most convenient method is through the Windows Admin Center (WAC) web interface. You can download it for free from the Microsoft website: Windows Admin Center Overview. When installing WAC, use the default settings; there is no need to change any configurations. After installation, you will gain access to the web interface, which should be opened by the name or IP address of the remote host. Another management method is through the PowerShell console. You enabled this capability when you installed several components along with the hypervisor. To verify that everything is working, launch the PowerShell console and execute: Get-Command –Module Hyper-V | Measure-Object In response, you will receive the number of cmdlets designed to manage the infrastructure. These cmdlets duplicate the capabilities of WAC. The difference is that with the web interface, you click buttons, while in PowerShell, you input commands. For example, you can check network settings through the console with: Get-NetIPConfiguration The same information is displayed when connecting to the server through WAC. Another example is checking for IPv6 support. 
Execute the following command in PowerShell:

Get-NetAdapterBinding -InterfaceDescription "Hyper-V Virtual Ethernet Adapter" | Where-Object -Property DisplayName -Match IPv6 | Format-Table -AutoSize

If the Enabled field shows True, then IPv6 support is enabled on the Hyper-V Server. You can try to disable it using the command:

Disable-NetAdapterBinding -InterfaceDescription "Hyper-V Virtual Ethernet Adapter" -ComponentID ms_tcpip6

After executing this cmdlet, there will be no output; it simply quietly disables IPv6 support. You can check the status with another command:

Get-NetAdapterBinding

All of this can also be done using Windows Admin Center. The choice between WAC and PowerShell depends on the tasks you need to accomplish. For example, WAC is good for quickly changing settings, while PowerShell cmdlets are convenient for automation. Suppose setting up a virtual machine is a routine task. It can be tedious to configure another host through WAC each time. In that case, you can write a script executing the required commands through PowerShell.

Firewall Configuration

To manage the Windows Server 2019 firewall, you can use Advanced Firewall rules. It’s more convenient to configure them through PowerShell. To see the list of available cmdlets for managing the firewall, execute:

Get-Command -Noun *Firewall* -Module NetSecurity

Set the rules for remote access. To do this, execute the following commands sequentially in PowerShell:

Enable-NetFireWallRule -DisplayName "Windows Management Instrumentation (DCOM-In)"
Enable-NetFireWallRule -DisplayGroup "Remote Event Log Management"
Enable-NetFireWallRule -DisplayGroup "Remote Service Management"
Enable-NetFireWallRule -DisplayGroup "Remote Volume Management"
Enable-NetFireWallRule -DisplayGroup "Windows Defender Firewall Remote Management"
Enable-NetFireWallRule -DisplayGroup "Remote Scheduled Tasks Management"

You can conveniently check the status through the Windows Admin Center web interface. Launch it, connect to the server, and go to the Firewall section. The rules are divided into three tabs: General, Inbound, and Outbound.

Creating Disk Storage

You have already created disk storage. Remember the Default Stores section when installing Microsoft Hyper-V Server? In that section, you needed to specify a folder where all the data would be stored. If you left the default settings, that’s okay. They can be changed at any time through the web interface. The Hyper-V settings are located in the Settings section. Here, the same two folders that were available when creating the server can be modified. The first directory is the Virtual Hard Disk Path. This is the path to the folder that serves as the virtual hard disk. The second directory is the Virtual Machines Path. This is where the configuration files for the virtual machines are stored. You can specify any directories to use for storing data and configuration files.

Configuring Virtual Machines

Hyper-V virtualization is needed to create virtual hosts on a single physical server. Let’s finally move on to this part. Download a trial virtual machine for Hyper-V from the official Microsoft website. In it, you will find a trial version of the OS, Visual Studio, an included Linux subsystem with pre-installed Ubuntu, developer mode enabled, and a Windows terminal: Microsoft Developer Downloads. The Windows 11 virtual machine can be created through the Windows Admin Center in just a few clicks. Launch the WAC web interface. Go to the Virtual Machines section. Create a new virtual machine.
Specify the path to the extracted files of the trial image. Add the hard disk image. Click Create to start the virtual machine creation process. You can also start the VM through WAC. Creating Backups For security and data preservation, it is necessary to create backups of virtual machines. Typically, the built-in utility wbadmin is used for this purpose. However, the default installation of Hyper-V does not add it to the server. To fix this, add a new role — Windows Server Backup. Open the Server Management Console. Click Add Roles and Features. Select Windows Server Backup from the list. After adding the role, the wbadmin utility will be available on the server. To check that it is working and to get a list of all virtual machines, execute the command: wbadmin get virtualmachines The command will return a list of virtual machines. Note the host name for which you want to create a backup. You need to add it to the command syntax: wbadmin start backup -backuptarget:D: -hyperv:"Windows11" -quiet Replace "Windows11" with the name of your virtual machine. This command will initiate the backup creation process. After completing the process, a log will be saved in the backup folder. You can also manage backups on Microsoft Hyper-V through the Windows Server Backup application. Open the Server Management Console. In the Tools menu, select Windows Server Backup. Open the Action tab and start a one-time backup — Backup Once. Select Different Options. Choose the Full Server configuration to back up all data on the server, including applications and even the system state. Specify a remote shared folder as the destination. Enter the address of the remote shared folder. Enable inheritance (the Inherit option) if you want the backup to be accessible to everyone who has access to the remote shared folder. Provide the credentials of a user with write permissions to the shared network directory. Confirm the backup configuration. After the process is complete, open the remote shared folder you specified as the destination for the backup using File Explorer. You should see a directory named WindowsImageBackup containing the backup data. Scheduled Backups The utility for backups on MS Hyper-V Server can also be used to create backups on a schedule. In the Windows Server Backup application, select Backup Schedule. Specify that you need to back up the entire server. Configure the frequency of the backups. For example, you can set it to back up every day at 11:00 PM. Choose the destination; it can be a separate disk or a remote shared folder. Save the configuration. You can flexibly configure the scheduled backup settings: change the time, store backups on separate disks, or transfer them to a network folder. The key is to ensure that the data is easy to restore. Windows Server Backup provides all the necessary tools for this.
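If you later want to script these backups rather than click through Windows Server Backup, the wbadmin syntax shown above can be wrapped in a short PowerShell loop. This is only a sketch, assuming the Hyper-V PowerShell module is installed and that D: is your backup volume; adjust both to your environment:

# Back up every VM registered on this host, one at a time,
# reusing the wbadmin parameters demonstrated above.
foreach ($vm in Get-VM) {
    wbadmin start backup -backuptarget:D: -hyperv:"$($vm.Name)" -quiet
}

Get-VM comes with the Hyper-V module and returns one object per virtual machine; the loop simply substitutes each VM's name into the -hyperv parameter.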
01 November 2024 · 8 min to read
Linux

Bash Regular Expressions

One of the core principles of Unix systems is the extensive use of text data: configuration files, as well as input and output data in *nix systems, are often organized as plain text. Regular expressions are a powerful tool for manipulating text data. This guide delves into the intricacies of using regular expressions in Bash, helping you fully harness the power of the command line and scripts in Linux.

What Are Regular Expressions?

Regular expressions are specially formatted strings used to search for character patterns in text. They resemble shell wildcards in some ways, but their capabilities are much broader. Many text-processing utilities in Linux and programming languages include a regular expression engine. However, different programs and languages often employ different regular expression dialects. This article focuses on the POSIX standard to which most Linux utilities adhere.

The grep Utility

The grep program is the primary tool for working with regular expressions. grep reads data from standard input, searches for matches to a specified pattern, and outputs all matching lines. grep is typically pre-installed on most distributions. You can try the commands in a virtual machine or a VPS to practice using regular expressions. The syntax of grep is as follows:

grep [options] regular_expression [file...]

The simplest use case for grep is finding lines that contain a fixed substring. In the example below, grep outputs all lines that contain the sequence nologin:

grep nologin /etc/passwd

Output:

daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
games:x:5:60:games:/usr/games:/usr/sbin/nologin
...

grep has many options, which are detailed in the documentation. Here are some useful options for working with regular expressions:

-v — Inverts the match criteria. With this option, grep outputs lines that do not contain matches:

ls /bin | grep -v zip

Output:

411toppm 7z 7za 7zr ...

-i — Ignores case.

-o — Outputs only the matches, not the entire lines:

ls /bin | grep -o zip

Output:

zip zip zip zip ...

-w — Searches for lines containing whole words matching the pattern.

ls /bin | grep -w zip

Output:

gpg-zip
zip

For comparison, the same command without the -w option also includes lines where the pattern appears as a substring within a word.

ls /bin | grep zip

Output:

bunzip2 bzip2 bzip2recover funzip

Basic Regular Expressions (BRE)

As previously mentioned, there are multiple dialects of regular expressions. The POSIX standard defines two main types of implementations: Basic Regular Expressions (BRE), which are supported by almost all POSIX-compliant programs, and Extended Regular Expressions (ERE), which allow for more complex patterns but aren't supported by all utilities. We'll start by exploring the features of BRE.

Metacharacters and Literals

We've already encountered simple regular expressions. For example, the expression “zip” represents a string with the following criteria: it must contain at least three characters; it includes the characters “z”, “i”, and “p” in that exact order; and there are no other characters in between. Characters that match themselves (like “z”, “i”, and “p”) are called literals. Another category is metacharacters, which are used to define various search criteria. Metacharacters in BRE include:

^ $ . [ ] * \ -

To use a metacharacter as a literal, you need to escape it with a backslash (\).
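For example, escaping the dot lets you match a literal period instead of "any character." A small illustrative sketch; the echoed strings are made up for the demonstration:

echo "archive.zip archive0zip" | grep -o 'archive.zip'

Output:

archive.zip
archive0zip

echo "archive.zip archive0zip" | grep -o 'archive\.zip'

Output:

archive.zip

In the first command the unescaped dot also matches the digit 0; in the second, the escaped \. matches only a real period.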
Note that some metacharacters have special meanings in the shell, so enclose it in quotes when passing a regular expression as a command argument.

Any Character

The dot (.) metacharacter matches any character in that position. For example:

ls /bin | grep '.zip'

Output:

bunzip2 bzip2 bzip2recover funzip gpg-zip gunzip gzip mzip p7zip pbzip2 preunzip prezip prezip-bin streamzip unzip unzipsfx

One important detail: the zip program itself isn’t included in the output because the dot (.) metacharacter increases the required match length to four characters.

Anchors

The caret (^) and dollar sign ($) in regular expressions serve as anchors. This means that, when included, a match can only occur at the start of a line (^) or at the end ($).

ls /bin | grep '^zip'

Output:

zip zipcloak zipdetails zipgrep …

ls /bin | grep 'zip$'

Output:

funzip gpg-zip gunzip ...

ls /bin | grep '^zip$'

Output:

zip

The regular expression ^$ matches empty lines.

Character Sets

Besides matching any character in a given position (.), regular expressions allow for matching a character from a specific set. This is done with square brackets. The following example searches for strings matching bzip or gzip:

ls /bin | grep '[bg]zip'

Output:

bzip2
bzip2recover
gzip

All metacharacters lose their special meaning within square brackets, except two. If a caret (^) is placed immediately after the opening bracket, the characters in the set are treated as excluded from that position. For example:

ls /bin | grep '[^bg]zip'

Output:

bunzip2 funzip gpg-zip gunzip mzip p7zip preunzip prezip prezip-bin streamzip unzip unzipsfx

With negation, we get a list of filenames containing zip but preceded by any character other than b or g. Note that zip is not included here; the negation requires the presence of some character in that position. The caret serves as a negation only if it appears immediately after the opening bracket; otherwise, it loses its special meaning. Using a hyphen (-), you can specify character ranges. This lets you match a range of characters or even multiple ranges. For instance, to find all filenames that start with a letter or a number:

ls ~ | grep '^[A-Za-z0-9]'

Output:

backup bin Books Desktop docker Documents Downloads GNS3 ...

POSIX Character Classes

When using character ranges, one challenge is that ranges can be interpreted differently based on locale settings. For instance, the range [A-Z] may sometimes be interpreted lexicographically, potentially excluding lowercase a. To address this, the POSIX standard provides several classes that represent various character sets. Some of these classes include:

[:alnum:] — Alphanumeric characters; equivalent to [A-Za-z0-9] in ASCII.
[:alpha:] — Alphabetic characters; equivalent to [A-Za-z] in ASCII.
[:digit:] — Digits from 0 to 9.
[:lower:] and [:upper:] — Lowercase and uppercase letters, respectively.
[:space:] — Whitespace characters, including space, tab, carriage return, newline, vertical tab, and form feed.

Character classes don’t provide an easy way to express partial ranges, like [A-M]. Here’s an example of using a character class:

ls ~ | grep '[[:upper:]].*'

Output:

Books Desktop Documents Downloads GNS3 GOG Games Learning Music ...

Extended Regular Expressions (ERE)

Most POSIX-compliant applications and those using BRE (such as grep and the stream editor sed) support the features discussed above. The POSIX ERE standard allows for more expressive regular expressions, though not all programs support it.
The egrep program traditionally supported the ERE dialect, but the GNU version of grep also supports ERE when run with the -E option. In ERE, the set of metacharacters is expanded to include:

( ) { } ? + |

Alternation

Alternation allows for a match with one of multiple expressions. Similar to square brackets that allow a character to match one of several characters, alternation allows for matching one of multiple strings or regular expressions. Alternation is represented by the pipe (|):

echo "AAA" | grep -E 'AAA|BBB'

Output:

AAA

echo "BBB" | grep -E 'AAA|BBB'

Output:

BBB

echo "CCC" | grep -E 'AAA|BBB'

Output: (no match)

Grouping

You can group elements of regular expressions and treat them as a single unit using parentheses. The following expression matches filenames starting with bz, gz, or zip. Without the parentheses, the regular expression would change meaning to match filenames starting with bz or containing gz or zip.

ls /bin | grep -E '^(bz|gz|zip)'

Output:

bzcat bzgrep bzip2 bzip2recover bzless bzmore gzexe gzip zip zipdetails zipgrep zipinfo zipsplit

Quantifiers

Quantifiers specify the number of times an element can occur. ERE supports several quantifiers:

? — Matches the preceding element zero or one time, meaning the previous element is optional:

echo "tet" | grep -E 'tes?t'

Output:

tet

echo "test" | grep -E 'tes?t'

Output:

test

echo "tesst" | grep -E 'tes?t'

Output: (no match)

* — Matches the preceding element zero or more times. Unlike ?, this element can appear any number of times:

echo "tet" | grep -E 'tes*t'

Output:

tet

echo "test" | grep -E 'tes*t'

Output:

test

echo "tesst" | grep -E 'tes*t'

Output:

tesst

+ — Similar to *, but requires at least one match of the preceding element:

echo "tet" | grep -E 'tes+t'

Output: (no match)

echo "test" | grep -E 'tes+t'

Output:

test

echo "tesst" | grep -E 'tes+t'

Output:

tesst

In ERE, the special metacharacters { and } allow you to specify minimum and maximum match counts for the preceding element in four possible ways:

{n} — Matches if the preceding element occurs exactly n times.
{n,m} — Matches if the preceding element occurs at least n and at most m times.
{n,} — Matches if the preceding element occurs n or more times.
{,m} — Matches if the preceding element occurs no more than m times.

Example:

echo "tet" | grep -E "tes{1,3}t"

Output: (no match)

echo "test" | grep -E "tes{1,3}t"

Output:

test

echo "tesst" | grep -E "tes{1,3}t"

Output:

tesst

echo "tessst" | grep -E "tes{1,3}t"

Output:

tessst

echo "tesssst" | grep -E "tes{1,3}t"

Output: (no match)

Only the lines where s appears one, two, or three times match the pattern.

Regular Expressions in Practice

To conclude, let’s look at a couple of practical examples of how regular expressions can be applied.

Validating Phone Numbers

Suppose we have a list of phone numbers where the correct format is (nnn) nnn-nnnn. Out of a list of 10 numbers, three are incorrectly formatted.

cat phonenumbers.txt

Output:

(185) 136-1035
(95) 213-1874
(37) 207-2639
(285) 227-1602
(275) 298-1043
(107) 204-2197
(799) 240-1839
(218) 750-7390
(114) 776-2276
(7012) 219-3089

The task is to identify the incorrect numbers. We can use the following command:

grep -Ev '^\([0-9]{3}\) [0-9]{3}-[0-9]{4}$' phonenumbers.txt

Output:

(95) 213-1874
(37) 207-2639
(7012) 219-3089

Here, we used the -v option to invert the match and output only lines that do not match the specified format. Since parentheses are considered metacharacters in ERE, we escaped them with backslashes to treat them as literals.
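As a related sketch reusing the same phonenumbers.txt, you can combine -E with the -o option covered earlier to extract just the area codes of the correctly formatted numbers:

grep -Eo '^\([0-9]{3}\)' phonenumbers.txt

Output:

(185)
(285)
(275)
(107)
(799)
(218)
(114)

Only lines that begin with a three-digit code in parentheses produce a match, so the three malformed numbers are skipped automatically.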
Finding Improper File Names The find command supports checking paths with regular expressions. It’s important to note that, unlike grep which matches parts of lines, find requires the whole path to match. Suppose we want to identify files and directories containing spaces or potentially problematic characters. find . -regex '.*[^-_./0-9a-zA-Z].*' The .* sequences at the beginning and end represent any number of any characters, which is necessary because find expects the entire path to match. Inside the square brackets, we use negation to exclude valid filename characters, meaning any file or directory name containing characters other than hyphens, underscores, digits, or Latin letters will appear in the output. Conclusion This article has covered a few practical examples of Bash regular expressions. Creating complex regular expressions might seem challenging at first. But over time, you’ll gain experience and skill in using regular expressions for searches across various applications that support them.
01 November 2024 · 10 min to read
Git

How to Use the Git Rebase Command

In Git, managing code history is important for tracking changes. For this purpose, git supports several commands, such as commit, log, diff, branch, merge, revert, and rebase. The git rebase command, in particular, is useful for keeping branch histories clean by allowing developers to reapply commits from one branch to another. In this article, we’ll discuss what git rebase is, how it differs from the git merge command, and how to use it to maintain a structured, linear commit history that’s easier to read and review.

Understanding Git Rebase: What Is It?

The git rebase command allows us to move, combine, reorder, edit, or remove commits. Moreover, it simplifies the project history by moving the commits of one branch onto the base of another branch. Rebase in git is especially useful when integrating changes into a feature branch, resulting in a streamlined history without unnecessary merge commits.

Git Rebase vs. Git Merge: What’s the Difference?

Both merge and rebase commands are used to combine branches, but they differ in how the commit history looks after one branch is added to another. Here’s a comparison to understand when to use rebase versus merge:

Git Merge: It combines the histories of both branches and creates a merge commit, marking the point where they joined. This commit retains the complete history of both branches.

Git Rebase: It applies changes from one branch to another and rewrites the history as though all work was done linearly.

Git Merge maintains separate histories for each branch, while Git Rebase linearizes the history, making it appear as if all work was done in a straight line. When using git merge, the focus is on merging feature branches, whereas git rebase is used to rewrite and clean up the commit history for better organization and readability.

Basic Syntax and Options for Git Rebase

The git rebase command allows users to transfer commits from the current branch to the base of another branch. The basic syntax of the git rebase command is shown below:

git rebase <target-branch>

Users can use different options with the git rebase command, which are listed below:

git rebase master: This command replays the commits of your current branch on top of the master branch, so your branch incorporates the latest changes from master.

-i or --interactive: This option opens an editor to reorder, combine, or modify commits interactively.

--onto <newbase>: This option enables us to set a new base commit for the rebase. We can use it to move several commits to a different branch or commit.

--skip: This option skips a commit if there's a conflict during rebase. It tells Git to ignore that commit and continue with the rebase.

--no-verify: This option bypasses the pre-rebase hook, so the rebase starts without running that check.

--autosquash: It automatically applies the fixup or squash flags to commits. This is helpful for cleaning up commit history during an interactive rebase.

These git rebase options should be used carefully, as they can change the commit history of the repository. It is recommended to back up your code before running the rebase command in Git. This way, users can restore the original code if anything goes wrong.

How to Perform an Interactive Rebase

Interactive rebasing enables users to reorder, combine, or edit commit messages. This practice gives users precise control over their history.
Go through the following steps to perform an interactive rebase: Step 1: Switch to the feature branch Users can use the git checkout command to navigate to a different branch in a Git repository: git checkout <feature-branch> This command changes the user's current working branch to the specified <feature-branch>. After switching, any subsequent Git operations, including rebase, will be performed in the context of that branch. Step 2: Start interactive rebase Users can run the rebase command with the -i option to perform an interactive rebase: git rebase -i <target-branch> When a user runs this command, it opens the default text editor. The user will see a list of commits from the current branch that are not present in <target-branch>. Each commit comes with actions to choose from, such as: pick: Keep the commit as it is. edit: Stop and allow changes to the commit (like the message or the files). squash: Combine this commit with the one before it. drop: Removes a commit.  After the user makes the desired changes and saves the file, Git will continue the rebase based on the selected choices. Handling Merge Conflicts During Rebase When rebasing, conflicts can occur if the same line of code is modified in both branches. In that case, Git pauses the rebase process, allowing users to resolve conflicts. Follow the steps below to resolve the merge conflicts during the rebase: Step 1: Identify Conflicting Files Run the git status command to see where the problem/conflict lies in a Git repository: git status This command displays a list of files that have conflicts, marked as unmerged. Step 2: Edit the Conflicted Files When there are conflicts during a Git operation, like a merge or rebase, Git marks the conflicting parts in the files with special markers: <<<<<<< HEAD: It shows the user's changes (from the current branch). =======: It separates the user's changes from the other branch's changes. >>>>>>> <branch-name>: It shows the end of the conflicting section and shows the name of the branch with the conflicting changes. To resolve the conflicts, users should open the files in a text editor and decide which changes to keep. They can choose to: Keep their changes. Keep the changes from the other branch. Combine both sets of changes. After making the edits, it's important to remove the conflict markers to clean up the code and make sure it works properly. Step 3: Stage the Resolved Files Once conflicts have been resolved, the next step is to stage the resolved files. This is done using the following command: git add <file-name> Replace <file-name> with the file’s name that was edited. If multiple files are resolved, they can be added simultaneously or individually. Step 4: Continue the Rebase After staging the resolved files, users can continue the rebase process with the command: git rebase --continue How to Abort, Skip, or Continue a Rebase Users can manage the rebase process by executing the git rebase command with the abort, skip, and continue options. Aborting the Rebase Run the git rebase command with the --abort option to cancel the ongoing rebase and return the branch to its original state: git rebase --abort Skipping the Rebase Similarly, if a user runs into unresolved conflicts during a rebase, he can execute the git rebase command with the --skip option to omit the problematic commit: git rebase --skip Continuing the Rebase If we encounter conflicts while rebasing, we need to resolve them first. 
After fixing the issues, we can run the rebase command with the --continue option to continue the rebasing process: git rebase --continue Common Mistakes Users can encounter several issues during Git rebase, such as merge conflicts, uncommitted changes, aborted rebase attempts, etc. Here are some common mistakes that users may face while rebasing: Merge Conflicts Users can face merging conflicts when changes in the rebased branch overlap with the base branch. These conflicts require manual resolution. Use the git add <filename> command to mark conflicts as resolved. Then, continue with the git rebase --continue command. Uncommitted Changes If you have uncommitted changes in your working directory, Git won't allow a rebase. In that case, commit or stash your changes with git stash before starting the rebase. Rebasing Shared Branches Rebasing the shared branches can create confusion and conflicts. To avoid this issue, users can rebase the branches that they own or are not currently used by anyone else. Complex History A branch with a complicated commit history can make the rebase process error-prone. In such cases, consider using git merge instead or simplify the history before rebasing. Incorrect Rebase Sequence Specifying the wrong base commit can lead to unexpected changes. Therefore, it is recommended to always double-check that you are rebasing onto the correct branch. Apart from this, the git rebase command has several disadvantages, including increased complexity compared to merging, especially with complex commit histories. It can lead to lost commits if the wrong branch is rebased or if conflicts are unresolved. Additionally, rebasing alters the commit history in public repositories, which makes collaboration difficult. Conclusion In Git, the rebase command helps maintain a clean and readable commit history. However, it requires careful usage due to certain challenges. Therefore, before making significant changes to a branch’s commit history, it’s important to carefully consider the risks and benefits of using the git rebase command.
30 October 2024 · 8 min to read
Python

The Walrus Operator in Python

The first question newcomers often ask about the walrus operator in Python is: why such a strange name? The answer lies in its appearance. Look at the Python walrus operator: :=. Doesn't it resemble a walrus lounging on a beach, with the symbols representing its "eyes" and "tusks"? That's how it earned the name. How the Walrus Operator Works Introduced in Python 3.8, the walrus operator allows you to assign a value to a variable while returning that value in a single expression. Here's a simple example: print(apples = 7) This would result in an error because print expects an expression, not an assignment. But with the walrus operator: print(apples := 7) The output will be 7. This one-liner assigns the value 7 to apples and returns it simultaneously, making the code compact and clear. Practical Examples Let’s look at a few examples of how to use the walrus operator in Python. Consider a program where users input phrases. The program stops if the user presses Enter. In earlier versions of Python, you'd write it like this: expression = input('Enter something or just press Enter: ') while expression != '': print('Great!') expression = input('Enter something or just press Enter: ') print('Bored? Okay, goodbye.') This works, but we can simplify it using the walrus operator, reducing the code from five lines to three: while (expression := input('Enter something or just press Enter: ')) != '': print('Great!') print('Bored? Okay, goodbye.') Here, the walrus operator allows us to assign the user input to expression directly inside the while loop, eliminating redundancy. Key Features of the Walrus Operator: The walrus operator only assigns values within other expressions, such as loops or conditions. It helps reduce code length while maintaining clarity, making your scripts more efficient and easier to read. Now let's look at another example of the walrus operator within a conditional expression, demonstrating its versatility in Python's modern syntax. Using the Walrus Operator with Conditional Constructs Let’s write a phrase, assign it to a variable, and then find a word in this phrase using a condition: phrase = 'But all sorts of things and weather must be taken in together to make up a year and a sphere...' word = phrase.find('things') if word != -1: print(phrase[word:]) The expression [word:] allows us to get the following output: things and weather must be taken in together to make up a year and a sphere... Now let's shorten the code using the walrus operator. Instead of: word = phrase.find('things') if word != -1: print(phrase[word:]) we can write: if (word := phrase.find('things')) != -1: print(phrase[word:]) In this case, we saved a little in volume but also reduced the number of lines. Note that, despite the reduced time for writing the code, the walrus operator doesn’t always simplify reading it. However, in many cases, it’s just a matter of habit, so with practice, you'll learn to read code with "walruses" easily. Using the Walrus Operator with Numeric Expressions Lastly, let’s look at an example from another area where using the walrus operator helps optimize program performance: numerical operations. 
We will write a simple program to perform exponentiation:

def pow(number, power):
    print('Calling pow')
    result = 1
    while power:
        result *= number
        power -= 1
    return result

Now, let’s enter the following in the interpreter:

>>> [pow(number, 2) for number in range(3) if pow(number, 2) % 2 == 0]

We get the following output:

Calling pow
Calling pow
Calling pow
Calling pow
Calling pow
[0, 4]

Now, let's rewrite the input in the interpreter using the walrus operator:

>>> [p for number in range(3) if (p := pow(number, 2)) % 2 == 0]

Output:

Calling pow
Calling pow
Calling pow
[0, 4]

As we can see, the code hasn’t shrunk significantly, but the number of function calls has nearly been halved, meaning the program will run faster! Conclusion In conclusion, the walrus operator (:=) introduced in Python 3.8 streamlines code by allowing assignment and value retrieval in a single expression. This operator enhances readability and efficiency, particularly in loops and conditional statements. Through practical examples, we’ve seen how it reduces line counts and minimizes redundant function calls, leading to faster execution. With practice, developers can master the walrus operator, making their code cleaner and more concise.
23 October 2024 · 4 min to read

Answers to Your Questions

What is Hostman used for, and what services do you offer?

Hostman is a cloud platform where developers and tech teams can host their solutions: websites, e-commerce stores, web services, applications, games, and more. With Hostman, you have the freedom to choose services, reserve as many resources as you need, and manage them through a user-friendly interface.

Currently, we offer ready-to-go solutions for launching cloud servers and databases, as well as a platform for testing any applications.

 

  • Cloud Servers. Your dedicated computing resources on servers in Poland and the Netherlands. Soon, we'll also be in the USA, Singapore, Egypt, and Nigeria. We offer 25+ ready-made setups with pre-installed environments and software for analytics systems, gaming, e-commerce, streaming, and websites of any complexity.

  • Databases. Instant setup for any popular database management system (DBMS), including MySQL, PostgreSQL, MongoDB, Redis, Apache Kafka, and OpenSearch.

  • Apps. Connect your Github, Gitlab, or Bitbucket and test your websites, services, and applications. No matter the framework - React, Angular, Vue, Next.js, Ember, etc. - chances are, we support it.

Can I have confidence in Hostman to handle my sensitive data and cloud-based applications?

Your data's security is our top priority. Only you will have access to whatever you host with Hostman.

Additionally, we house our servers in Tier IV data centers, representing the pinnacle of reliability available today. Furthermore, all data centers comply with international standards: 

  • ISO: Data center design standards

  • PCI DSS: Payment data processing standards

  • GDPR: EU standards for personal data protection

What are the benefits of using Hostman as my cloud service provider?

User-Friendly. With Hostman, you're in control. Manage your services, infrastructure, and pricing structures all within an intuitive dashboard. Cloud computing has never been this convenient.

 

Great Uptime: Experience peace of mind with 99.99% SLA uptime. Your projects stay live, with no interruptions or unpleasant surprises.

 

Around-the-Clock Support. Our experts are ready to assist and consult at any hour. Encountered a hurdle that requires our intervention? Please don't hesitate to reach out. We're here to help you through every step of the process.

 

How does pricing work for your cloud services?

At Hostman, you pay only for the resources you genuinely use, down to the hour. No hidden fees, no restrictions.

Pricing starts as low as $4 per month, providing you with a single-core processor at 3.2 GHz, 1 GB of RAM, and 25 GB of persistent storage. On the higher end, we offer plans up to $75 per month, which gives you access to 8 cores, 16 GB of RAM, and 320 GB of persistent storage.

For a detailed look at all our pricing tiers, please refer to our comprehensive pricing page.

Do you provide 24/7 customer support for any issues or inquiries?

Yes, our technical specialists are available 24/7, providing continuous support via chat, email, and phone. We strive to respond to inquiries within minutes, ensuring you're never left stranded. Feel free to reach out for any issue — we're here to assist.

Can I easily scale my resources with Hostman's cloud services?

With Hostman, you can scale your servers instantly and effortlessly, allowing for configuration upsizing or downsizing, and bandwidth adjustments.

Please note: While server disk space can technically only be increased, you have the flexibility to create a new server with less disk space at any time, transfer your project, and delete the old server.

What security measures do you have in place to protect my data in the cloud?

Hostman ensures 99.99% reliability per SLA, guaranteeing server downtime of no more than 52 minutes over a year. Additionally, we house our servers exclusively in Tier IV data centers, which comply with all international security standards.

 

How can I get started with Hostman's cloud services for my business?

Just sign up and select the solution that fits your needs. We have ready-made setups for almost any project: a vast marketplace for ordering servers with pre-installed software, set plans, a flexible configurator, and even resources for custom requests.

If you need any assistance, reach out to our support team. Our specialists are always happy to help, advise on the right solution, and migrate your services to the cloud — for free.

Do you have questions,
comments, or concerns?

Our professionals are available to assist you at any moment,
whether you need help or are just unsure of where to start
Email us