
Cloud Service Provider
for Developers and Teams

We make it simple to get started in the cloud and scale up as you grow,
whether you have one virtual machine or ten thousand.
99.9% Uptime
Our cloud service provides the ultimate in server dependability and stability
Money-back Guarantee
Experience our high-speed cloud services without any risk, assured by our money-back guarantee
Easy to Deploy
Manage your services with ease using our intuitive control panel, where deploying software is a matter of minutes
Reliable and Available
Select from 6 datacenter regions around the world based on latency or deploy across regions for redundancy

Robust cloud services for every demand

See all Products

Cloud Servers

Cutting-edge hardware for cloud solutions: powerful Intel and AMD processors, ultra-fast NVMe disks

Databases

We provide a cloud database ready to store everything you have. The most popular engines are on deck: MySQL, Redis, Kafka, and more

App Platform

Link your repo, pick a project to deploy, and Hostman will have it running in the cloud with a couple of clicks from the dashboard

S3 Storage

A universal object storage compatible with the S3 protocol

Firewall

Multi-layered protection from vulnerability scanning, DDoS, and cyber-attacks

Kubernetes

Automate the management of containerized applications, from deployment and scaling to monitoring and error handling

Managed Backups

Our server and application backup feature allows for both on-demand and scheduled backup and one-click data restoration

Images

Create images for backup free of charge or deploy your own in the Hostman cloud

Hostman's commitment to simplicity
and budget-friendly solutions

Configuration shown: 1 CPU, 1 GB RAM, 25 GB SSD (plans with 2, 4, and 8 CPUs are also available)

| | Hostman | DigitalOcean | Google Cloud | AWS | Vultr |
| --- | --- | --- | --- | --- | --- |
| Price | $4 | $6 | $6.88 | $7.59 | $5 |
| Tech support | Free | $24/mo | $29/mo + 3% of monthly charges | $29/mo or 3% of monthly charges | Free |
| Backups | from $0.07/GB | 20% or 30% higher base daily/weekly fee | $0.03/GB per mo | $0.05/GB per mo | 20% higher base monthly/hourly fee |
| Bandwidth | Free | $0.01 per GB | $0.01 per GB | $0.09/GB for first 10 TB/mo | $0.01 per GB |
| Live chat support | | | | | |
| Avg. support response time | <15 min | <24 hours | <4 hours | <12 hours | <12 hours |

Anup k.
Associate Cloud Engineer
5.0 out of 5

"Hostman Comprehensive Review of Simplicity and Potential"

It has been a few years that I have been working in the cloud, and most of the cloud service...
Mansur H.
Security Researcher
5.0 out of 5

"A perfect fit for everything cloud services!"

Hostman's seamless integration, user-friendly interface, and robust features (backups, etc.) make it much easier...
Adedeji E.
DevOps Engineer
5.0 out of 5

"Superb User Experience"

For me, Hostman is exceptional because of its flexibility and user-friendliness. The platform's ability to offer dedicated computing resources acr...
Yudhistira H.
Mid-Market(51-1000 emp.)
5.0 out of 5

"Streamlined Cloud Excellence!"

What I like best about Hostman is their exceptional speed of deployment, scalability, and robust security features. Their...
Mohammad Waqas S.
Biotechnologist and programmer
5.0 out of 5

"Seamless and easy to use Hosting Solution for Web Applications"

From the moment I signed up, the process has been seamless and straightforward...
Mohana R.
Senior Software Engineer
5.0 out of 5

"Availing Different DB Engine Services Provided by Hostman is Convenient for my Organization usecases"

Hostman manages the cloud operations...
Faizan A.
5.0 out of 5

"Hostman is a great fit for me"

Hostman is a great fit for me. What do you like best about Hostman? It was very easy to deploy my application and create a database; I didn't have
Adam M.
5.0 out of 5

"Perfect website"

This website is extremely user friendly and easy to use. I had no problems so didn't have to contact customer support. Really good website and would recommend to others.
Anup K.
4.0 out of 5

"Simplifying Cloud Deployment with Strengths and Areas for Growth"

What I like best about Hostman is its unwavering commitment to simplicity...
Naila J.
5.0 out of 5

"Streamlined Deployment with Room for Improvement"

Hostman impresses with its user-friendly interface and seamless deployment process, simplifying web application hosting...

Trusted by 500+ companies and developers worldwide

Deploy a cloud server
in just a few clicks

Set up your cloud servers at Hostman swiftly and without any fees, customizing them for your business with a quick selection of region, IP range, and details, ensuring seamless integration and data flow.

Code locally, launch worldwide

Our servers, certified with ISO/IEC 27001, are located in Tier 3 data
centers across the US, Europe, the Middle East, Africa, and Asia
🇺🇸 San Francisco
🇺🇸 San Jose
🇺🇸 Texas
🇺🇸 New York
🇳🇱 Amsterdam
🇳🇬 Lagos
🇩🇪 Frankfurt
🇵🇱 Gdansk
🇦🇪 Dubai
🇸🇬 Singapore

Latest News

Docker

How to Install Nextcloud with Docker

Nextcloud is an open-source software for creating and using your own cloud storage. It allows users to store data, synchronize it between devices, and share files through a user-friendly interface. This solution is ideal for those prioritizing privacy and security over public cloud services. Nextcloud offers a range of features, including file management, calendars, contacts, and integration with other services and applications. When deploying Nextcloud, Docker provides a convenient and efficient way to install and manage the application. Docker uses containerization technology, simplifying deployment and configuration and ensuring scalability and portability. Combining Docker with Docker Compose allows you to automate and standardize the deployment process, making it accessible even to users with minimal technical expertise. In this guide, we'll walk you through installing Nextcloud using Docker Compose, configuring Nginx as a reverse proxy, and obtaining an SSL certificate with Certbot to secure your connection. Installing Docker and Docker Compose Docker is a powerful tool for developers that makes deploying and running applications in containers easy. Docker Compose simplifies orchestration of multi-container applications using YAML configuration files, which streamline the setup and management of complex applications. Download the installation script by running the command: curl -fsSL https://get.docker.com -o get-docker.sh This script automates the Docker installation process for various Linux distributions. Run the installation script: sudo sh ./get-docker.sh This command installs both Docker and Docker Compose. You can add the --dry-run option to preview the actions without executing them. After the script completes, verify that Docker and Docker Compose are installed correctly by using the following commands: docker -vdocker compose version These commands should display the installed versions, confirming successful installation. Preparing to Install Nextcloud Creating a Working Directory In Linux, third-party applications are often installed in the /opt directory. Navigate to this directory with the command: cd /opt Create a folder named mynextcloud in the /opt directory, which will serve as the working directory for your Nextcloud instance: mkdir mynextcloud Configuring the docker-compose.yml File After creating the directory, navigate into it: cd mynextcloud We will define the Docker Compose configuration in the docker-compose.yml file. To edit this file, use a text editor such as nano or vim: nano docker-compose.yml In the docker-compose.yml file, you should include the following content: version: '2' volumes: mynextcloud: db: services: db: image: mariadb:10.6 restart: unless-stopped command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW volumes: - db:/var/lib/mysql environment: - MYSQL_ROOT_PASSWORD=RootPass - MYSQL_PASSWORD=NextPass - MYSQL_DATABASE=nextclouddb - MYSQL_USER=nextclouduser app: image: nextcloud restart: unless-stopped ports: - 8081:80 links: - db volumes: - mynextcloud:/var/www/html environment: - MYSQL_PASSWORD=NextPass - MYSQL_DATABASE=nextclouddb - MYSQL_USER=nextclouduser - MYSQL_HOST=db Parameters in this file: version: '2': Specifies the version of Docker Compose being used. Version 2 is known for its simplicity and stability. volumes: Defines two named volumes: mynextcloud for app data and db for database storage. services: db: image: Uses the MariaDB 10.6 image. 
restart: Automatically restarts the service unless manually stopped. volumes: Binds the db volume to /var/lib/mysql in the container for persistent database storage. environment: Sets environment variables like passwords, database name, and user credentials. app: image: Uses the Nextcloud image. ports: Maps port 8081 on the host to port 80 inside the container, allowing access to Nextcloud through port 8081. links: Links the app container to the db container for database interaction. volumes: Binds the mynextcloud volume to /var/www/html for storing Nextcloud files. environment: Configures database-related environment variables, linking the Nextcloud app to the database. This configuration sets up your application and database environment. Now, we can move on to launching and configuring Nextcloud. Running and Configuring Nextcloud Once the docker-compose.yml configuration is ready, you can start the project. Run the following commands in the mynextcloud directory to download the necessary images and start the containers: docker compose pulldocker compose up The docker compose pull command will download the required Nextcloud and MariaDB images. The docker compose up command will launch the containers based on your configuration. The initial setup may take a while. When it’s complete, you will see messages like: nextcloud-app-1  | New nextcloud instancenextcloud-app-1  | Initializing finished After the initial configuration, you can access Nextcloud through your browser. Enter http://server-ip:8081 into the browser’s address bar. You will be prompted to create an administrator account by providing your desired username and password. During the initial configuration, you can also choose additional apps to install. Stopping and Restarting Containers in Detached Mode After verifying that Nextcloud is running correctly through the web interface, you can restart the containers in detached mode to keep them running in the background. If the containers are still running in interactive mode (after executing docker compose up without the -d flag), stop them by pressing Ctrl+C in the terminal. To restart the containers in detached mode, use the command: docker compose up -d The -d flag stands for "detached mode," which allows the containers to run in the background independently of your terminal session. Now the containers are running in the background. If you have a domain ready, you can proceed with configuring the server as a reverse proxy. Setting up Nginx as a Reverse Proxy Installation Nginx is often chosen as a reverse proxy due to its performance and flexibility. You can install it by running the command: sudo apt install nginx Configuring Nginx Create a configuration file for your domain (e.g., nextcloud-test.com). 
Use a text editor to create the file in the /etc/nginx/sites-available directory: sudo nano /etc/nginx/sites-available/nextcloud-test.com Add the following directives to the file: server { listen 80; server_name nextcloud-test.com; location / { proxy_pass http://localhost:8081; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; add_header Strict-Transport-Security "max-age=15552000; includeSubDomains" always; } location ^~ /.well-known { location = /.well-known/carddav { return 301 /remote.php/dav/; } location = /.well-known/caldav { return 301 /remote.php/dav/; } location /.well-known/acme-challenge { try_files $uri $uri/ =404; } location /.well-known/pki-validation { try_files $uri $uri/ =404; } return 301 /index.php$request_uri; } } This configuration sets up the web server to proxy requests to Nextcloud running on port 8081, with headers for security and proxying. Key Configuration Details Basic Configuration: server { listen 80; server_name nextcloud-test.com; location / { proxy_pass http://localhost:8081; ... } } This block configures the server to listen on port 80 (standard HTTP) and handle requests directed to nextcloud-test.com. Requests are proxied to the Docker container running Nextcloud on port 8081. Proxy Settings: proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; These headers ensure that the original request information (like the client’s IP address and request protocol) is passed on to the application, which is important for proper functionality and security. HSTS (HTTP Strict Transport Security): add_header Strict-Transport-Security "max-age=15552000; includeSubDomains" always; This header enforces security by instructing browsers only to use HTTPS when accessing your site for the next 180 days. Well-Known URI Settings: location ^~ /.well-known { ... } This block handles special requests to .well-known URIs, used for service discovery (e.g., CalDAV, CardDAV) and domain ownership verification (e.g., for SSL certificates). Enabling the Nginx Configuration Create a symbolic link to the configuration file from the /etc/nginx/sites-enabled/ directory: sudo ln -s /etc/nginx/sites-available/nextcloud-test.com /etc/nginx/sites-enabled/ Now restart Nginx to apply the new configuration: sudo systemctl restart nginx At this point, your web server is configured as a reverse proxy for the Nextcloud application, and you can access it via your domain (note that you might initially see an "Access through untrusted domain" error, which we’ll fix later). Configuring SSL Certificates with Certbot Installing Certbot Certbot is a tool from the Electronic Frontier Foundation (EFF) used for obtaining and managing SSL certificates from Let's Encrypt. It automates the process, enhancing your website's security by encrypting the data exchanged between the server and its users. To install Certbot and the Nginx plugin, use the following command: sudo apt install certbot python3-certbot-nginx Obtaining and Installing the SSL Certificate To obtain an SSL certificate for your domain and configure the web server to use it, run the command: sudo certbot --non-interactive -m [email protected] --agree-tos --no-eff-email --nginx -d nextcloud-test.com In this command: --non-interactive: Runs Certbot without interactive prompts. 
-m [email protected]: Specifies the admin email for notifications. --agree-tos: Automatically agrees to Let's Encrypt’s terms of service. --no-eff-email: Opts out of EFF-related emails. --nginx: Uses the Nginx plugin to automatically configure SSL. -d nextcloud-test.com: Specifies the domain for which the certificate is issued. Certbot will automatically update the Nginx configuration to use the SSL certificate, including setting up HTTP-to-HTTPS redirection. After Certbot completes the process, restart Nginx to apply the changes: sudo systemctl restart nginx Now, your Nextcloud instance is secured with an SSL certificate, and all communication between the server and clients will be encrypted. Fixing the "Access through Untrusted Domain" Error When accessing Nextcloud through your domain, you may encounter an "Access through untrusted domain" error. This occurs because the initial configuration was done using the server’s IP address. Since our application is running inside a container, you can either use docker exec or modify the Docker volume directly. We’ll use the latter method since we created Docker volumes earlier in the docker-compose.yml file. First, list your Docker volumes: docker volume ls Find the volume named mynextcloud_mynextcloud. To access the volume, run: docker volume inspect mynextcloud_mynextcloud Look for the Mountpoint value to find the path to the volume. Change to that directory: cd /var/lib/docker/volumes/mynextcloud_mynextcloud/_data Navigate to the config directory and open the config.php file for editing: cd confignano config.php In the file, update the following lines: Change overwrite.cli.url from http://server_ip:8081 to https://your_domain. In the trusted_domains section, replace server_ip:8081 with your domain. Add the line 'overwriteprotocol' => 'https' after overwrite.cli.url to ensure all resources load via HTTPS. Save the changes (in Nano, use Ctrl+O, then Ctrl+X to exit). After saving the changes in config.php, you should be able to access the application through your domain without encountering the "untrusted domain" error. Conclusion Following these steps, you’ll have a fully functional, secure Nextcloud instance running in a containerized environment.
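For easier copying, here is the same docker-compose.yml described in this guide, reassembled as a single block; the passwords, database name, and user are the example values from above and should be replaced with your own:

```yaml
version: '2'

volumes:
  mynextcloud:
  db:

services:
  db:
    image: mariadb:10.6
    restart: unless-stopped
    command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=RootPass
      - MYSQL_PASSWORD=NextPass
      - MYSQL_DATABASE=nextclouddb
      - MYSQL_USER=nextclouduser

  app:
    image: nextcloud
    restart: unless-stopped
    ports:
      - 8081:80
    links:
      - db
    volumes:
      - mynextcloud:/var/www/html
    environment:
      - MYSQL_PASSWORD=NextPass
      - MYSQL_DATABASE=nextclouddb
      - MYSQL_USER=nextclouduser
      - MYSQL_HOST=db
```

Save it as /opt/mynextcloud/docker-compose.yml and start the stack in the background with docker compose up -d.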
27 September 2024 · 10 min to read
Linux

How to Use the grep Command in Linux

The grep command is built into many Linux distributions. It runs a utility that searches either for a specific file containing the specified text or for a specific line within a file containing the given characters. The name "grep" stands for "global regular expression print." Some developers casually say "to grep" something, meaning searching for a specific regular expression in a large set of files. The command can accept directories with files to search and the text output of other commands, filtering it accordingly. In this article, we will take a detailed look at using the grep command: We will break down the grep command syntax; Test the functionality of regular expressions; Try various options while using the command; Perform searches both within a single file and across entire directories; Learn how to include and exclude specific files from the search. Command Syntax The command is structured as follows: grep [flags] pattern [<path to directory or file>] First, specify the flags to configure the search and output behavior. Next, provide a regular expression, which is used to search for text. As the last argument, enter the path to a file or a directory where the search will be performed. If a directory is specified, the search is performed recursively. Instead of files and directories, you can also pass the output of another command as input: another_command | grep [flags] pattern This helps filter out the most important information from less relevant data during the output from other programs. Regular expressions are the core of the grep command. They are essential for creating search patterns. Regular expressions have two levels—Basic Regular Expressions (BRE) and Extended Regular Expressions (ERE). To enable the latter, you need to use the -E flag. The nuances of using the grep utility are best understood through practical examples. We will sequentially review the main methods of searching for strings within files. Creating Text Files Before running any searches, let’s prepare the environment by setting up a few text files that we’ll use with the grep utility. Directory for Files First, we’ll create a separate folder to hold the files where we’ll search for matches. Create a directory: mkdir files Then navigate into it: cd files Text Files Let’s create a couple of files with some text: nano english.txt This file will contain an excerpt from Jane Austen’s Pride and Prejudice along with some additional text to demonstrate the search commands: However little known the feelings or views of such a man may be on his first entering a neighbourhood, this truth is so well fixed in the minds of the surrounding families, that he is considered as the rightful property of some one or other of their daughters.The surrounding was quite overwhelmingWalking and talking became the main activities of the evening Additionally, let’s create another text file named sample.txt: nano sample.txt Add the following content: Line 1: This is the first line. Line 2: Here we see the second line ending with something interesting. Line 3: Another normal line follows here. Line 4: This line is captivating and worth noting. Line 5: The pattern we seek is right here, at the ending. Line 6: Yet another normal line to keep the flow. Line 7: Ending this line with something worth checking. Line 8: A concluding thought here. Line 9: This line does not end as the others. Line 10: Just a regular line here. 
File with Code Next, let’s add a file that contains some simple JavaScript code: nano code Here’s the content: const number1 = 2; const number2 = 4; const sum = number1 + number2; console.log('The sum of ' + number1 + ' and ' + number2 + ' is ' + sum); Listing Created Files Finally, let’s check the created files: ls The console should display: code  english.txt  sample.txt Perfect! These are the files we’ll use to test the functionality of the grep command. Simple Match Let's try to find all instances of the word "the" in the first file: grep 'the' english.txt The console will display the found elements, with all occurrences of "the" highlighted in red. However, there’s an issue—grep also highlighted parts of words like "other" and "their," which are not standalone articles. To find only the article "the," we can use the -w flag. This flag ensures that the search looks for whole words only, without matching subsets of characters within other words: grep -w 'the' english.txt Now the terminal will highlight only those instances of "the" that are not part of another word. End of Line We can make the regular expression more complex by adding a special operator. For example, we can find lines that end with a specific set of characters: grep 'ing$' english.txt The console will display only those lines that contain the specified matches, with them highlighted in red. This approach helps refine searches, especially when focusing on precise patterns within text. Search Flags Searching with Extended Regular Expressions (-E) You can activate extended regular expressions by specifying the -E flag. The extended mode adds several new symbols, making the search even more flexible. +The preceding character repeats one or more times. ?The preceding character repeats zero or more times. {n, m}The preceding character repeats between n and m times. |A separator that combines different patterns. Here’s a small example of using extended regular expressions: grep -E '[a-z]+ing$' ./* This command specifies that the string should end with "ing," which must be preceded by one or more lowercase letters. The output would be something like: ./english.txt:The surrounding was quite overwhelming../english.txt:Walking and talking became the main activities of the evening. Regular expressions, the foundation of the grep utility, are a versatile formal language used across various programming languages and operating systems. Therefore, this guide covers only a portion of their capabilities. Line Number (-n) The -n flag can be used to display line numbers alongside the found matches: grep -n 'ing$' english.txt The output will be: 4:The surrounding was quite overwhelming.5:Walking and talking became the main activities of the evening. Case-Insensitive Search (-i) The -i flag allows you to search for matches without considering the case of the characters: grep -i 'the' english.txt The output will be: However little known the feelings or views of such a man may be on his first entering a neighbourhood, this truth is so well fixed in the minds of the surrounding families, that he is considered as the rightful property of some one or other of their daughters. The surrounding was quite overwhelming. Walking and talking became the main activities of the evening. 
If we didn’t use this flag, we would only find the matches with the exact case: grep 'the' english.txt However little known the feelings or views of such a man may be on his first entering a neighbourhood, this truth is so well fixed in the minds of the surrounding families, that he is considered as the rightful property of some one or other of their daughters. Walking and talking became the main activities of the evening. This shows how adjusting flags can refine your search results with grep. Search for Whole Words (-w) Sometimes, you need to find only whole words rather than partial matches of specific characters. For this, the -w flag is used. We can modify the previous search by using both the -i and -w flags simultaneously: grep -iw 'the' english.txt The output will contain lines with full matches of the word "the" in any case: However little known the feelings or views of such a man may be on his first entering a neighbourhood, this truth is so well fixed in the minds of the surrounding families, that he is considered as the rightful property of some one or other of their daughters. The surrounding was quite overwhelming. Walking and talking became the main activities of the evening. Inverted Search (-v) You can invert the search results, which means it will display only those lines that do not contain the specified matches: grep -v 'the' english.txt For clarity, you can include line numbers: grep -vn 'the' english.txt The console output will be: 4:The surrounding was quite overwhelming. As you can see, lines containing the word "the" are excluded from the results. The line "The surrounding was quite overwhelming." is included because grep -v 'the' performs a case-sensitive search by default. Since the search pattern 'the' is in lowercase, it does not match the uppercase "The" at the beginning of the sentence. As a result, this line is not excluded from the output.   To exclude lines with any case of "the," you would need to use the -i flag along with -v:   grep -vin 'the' english.txt   This command would then exclude lines containing "The" as well. Multiple Regular Expressions (-e) You can use multiple regular expressions in a single search by specifying each pattern after the -e flag: grep -e 'ing$' -e 'surround' ./* This command is equivalent to running the two searches sequentially: grep 'ing$' ./*grep 'surround' ./* The combined output will include matches from both patterns. Recursive Search (-r) Let’s move up one level to the root directory: cd Now, let’s perform a recursive search in the root directory: grep -r 'ing$' ./ The grep command will find matches in the directory one level down—in the folder containing text files. The output will be as follows: ./files/english.txt:The surrounding was quite overwhelming../files/english.txt:Walking and talking became the main activities of the evening. Note the file path in the results; it now includes the subdirectory's name. Let’s navigate back to the folder with the files: cd files Extended Output (-A, -B, -C) In some cases, it’s important to extract not only the line with the matching pattern but also the lines surrounding it. This helps to understand the context better. After Match Lines (-A) Using the -A flag, you can specify the number of lines to display AFTER the line with the found match. For example, let's display one line after each match of lines ending with "ending": grep -A1 'ending' sample.txt The output will be: Line 2: Here we see the second line ending with something interesting. 
Line 3: Another normal line follows here. -- Line 5: The pattern we seek is right here, at the ending. Line 6: Yet another normal line to keep the flow. Before Match Lines (-B) Using the -B flag, you can specify the number of lines to display BEFORE the line with the found match: grep -B1 'ending' sample.txt The output will be: Line 1: This is the first line. Line 2: Here we see the second line ending with something interesting. -- Line 4: This line is captivating and worth noting. Line 5: The pattern we seek is right here, at the ending. Context Lines (-C) Using the -C flag, you can specify the number of lines to display both BEFORE and AFTER the line with the found match: grep -C1 'ending' sample.txt The output will be: Line 1: This is the first line. Line 2: Here we see the second line ending with something interesting. Line 3: Another normal line follows here. Line 4: This line is captivating and worth noting. Line 5: The pattern we seek is right here, at the ending. Line 6: Yet another normal line to keep the flow. Output Only the Count of Matching Lines (-c) The -c flag allows you to display only the number of matches instead of showing each matching line: grep -c 'ing$' ./* The console output will be: ./code:0./english.txt:2./sample.txt:4 As you can see, even the absence of matches is displayed in the terminal. In this case, there are three matches in the english.txt file and three in the sample.txt file, while no matches are found in code. Limited Output (-m) You can limit the output to a specific number of matching lines using the -m flag. The number of lines is specified immediately after the flag without a space: grep -m1 'ing$' ./* Instead of displaying all matches, the console will show only the first occurrence: ./english.txt:The surrounding was quite overwhelming../sample.txt:Line 2: Here we see the second line ending with something interesting. This allows you to shorten the output, displaying only the specified number of matches, which can be useful when working with large datasets. Searching in Multiple Files Searching in Directories To search across multiple directories, you can specify a pattern that includes the possible paths of the files you're looking for: grep 'su' ./* The terminal will display combined output with matching lines from multiple files: ./code:const sum = number1 + number2; ./code:console.log('The sum of ' + number1 + ' and ' + number2 + ' is ' + sum); ./english.txt:However little known the feelings or views of such a man may be on his first entering a neighbourhood, ./english.txt:this truth is so well fixed in the minds of the surrounding families, ./english.txt:The surrounding was quite overwhelming. Notice that when searching in a directory, the console output includes the file path for each matching line, distinguishing it from searches within a single file. Including and Excluding Files When searching in directories, you can include or exclude specific files using the --include and --exclude flags. For example, you can exclude the English text file from the previous search: grep --exclude 'english.txt' 'su' ./* The terminal will then display: ./code:const sum = number1 + number2;./code:console.log('The sum of ' + number1 + ' and ' + number2 + ' is ' + sum); You could achieve the same result by including only the code file in the search: grep --include 'code' 'su' ./* It’s important to understand that the file names used in --include and --exclude are also treated as regular expressions. 
For instance, you can do the following: grep --include '*s*1' ' ' ./* This command searches for a space character only in files that contain the letter "s" and end with the digit "1" in their names. Excluding Directories In addition to excluding files, you can exclude entire directories from your search. First, let’s move up one level: cd Now perform a recursive search in the current directory while excluding specific folders using the --exclude-dir option: grep --exclude-dir='files' -R 'su' ./* In this case, the folder named files will be excluded from the search results. Let’s navigate back to the folder with the files: cd files Conclusion In most UNIX-like systems, the grep command provides powerful capabilities for searching text within the file system. Additionally, grep is well-suited for use within Linux pipelines, enabling it to process external files and the output of other console commands. This flexibility is achieved through using regular expressions and various configurable search flags. By combining all the features of this utility, you can tackle a wide range of search tasks. In many ways, grep is like a "Swiss Army knife" for finding information in Linux-based operating systems.
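As a closing illustration of the pipeline usage mentioned at the beginning of the article, here is a short sketch that combines grep with other commands and with the flags covered above; the file paths are only placeholders:

```bash
# Filter another command's output: keep lines that mention "error" in any case
dmesg | grep -i 'error'

# Invert + extended regex: drop comments and blank lines from a config file
grep -vE '^\s*(#|$)' /etc/ssh/sshd_config

# Count matches per file, searching the directory recursively
grep -rc 'ing$' ./files
```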
27 September 2024 · 12 min to read
Linux

How to Install and Use ripgrep

ripgrep (often abbreviated as rg) is a modern, fast, and powerful command-line search tool that can recursively search your files like grep, but with added efficiency and features. It is designed to search code repositories while ignoring files and directories specified in .gitignore or other similar configuration files. This makes ripgrep highly efficient for developers working in large codebases. This tutorial will cover: Installing ripgrep on Linux Basic syntax and commands for ripgrep Common use cases and examples Advanced features Comparison with other search tools like grep Troubleshooting and best practices By the end, you’ll have a solid understanding of how to use ripgrep effectively. Installing ripgrep on Linux Installing ripgrep is straightforward on most Linux distributions. You can install it using your package manager or by downloading the binary. To install ripgrep on Ubuntu, follow these steps: 1. Update your package list: sudo apt update 2. Install ripgrep: sudo apt install ripgrep fzf 3. To check your installed ripgrep version, use: rg --version Basic Syntax and Commands for ripgrep The syntax for ripgrep is similar to grep, but ripgrep provides faster performance and more powerful features out-of-the-box. Basic Syntax The basic structure of a ripgrep command looks like this: rg [OPTIONS] PATTERN [PATH] Where: PATTERN is the string or regular expression you want to search for. [PATH] is optional and specifies the directory or file to search in. If omitted, ripgrep searches the current directory. Searching with Specific File Extensions If you want to search within files of a specific extension (e.g., .py files), you can run: rg "function" *.py Recursive Search with File Extensions When using file extensions directly in the search pattern (e.g., *.py), ripgrep does not perform a recursive search through subdirectories. To search recursively and filter by file type, use the --type option instead: rg --type py "function" This ensures that the search is conducted across all relevant files in the directory tree. Searching for Regular Expressions ripgrep supports searching using regular expressions. For example: rg '\d{4}-\d{2}-\d{2}' This searches for dates in the format YYYY-MM-DD. Common Use Cases and Examples of ripgrep Case-Insensitive Search You can make your search case-insensitive using the -i option: rg -i "error" This will match "error", "Error", or "ERROR" in your files. Searching with File Type ripgrep allows searching within specific file types using the --type option. To search only Python files: rg --type py "import" Excluding Directories To exclude certain directories from your search, use the --glob option. For example, to exclude the node_modules folder: rg "config" --glob '!node_modules/*' Searching Compressed Files ripgrep can search through compressed files without needing to extract them first. It supports formats like .gzip, .xz, .lz4, .bzip2, .lzma, and .zstd. To search within compressed files, use the --search-zip or -z option. Here's an example: rg 'ERST' -z demo.gz Advanced Features of ripgrep ripgrep offers advanced features to enhance search results by including additional context around matched lines. Here's a quick overview of these features: Before and After Context:  Use -B [number] to include lines before the match. Use -A [number] to include lines after the match. Example: rg "EXT4-fs \(sda3\)" /var/log/syslog.demo -B 1 -A 2 Combined Context: Use -C [number] to include lines both before and after the match. 
Example: rg "EXT4-fs \(sda3\)" /var/log/syslog -C 1 -B 1 -A 2 provides more control by allowing you to specify different numbers of lines before and after the match. -C 2 provides a combined context with the same number of lines before and after the match, useful for seeing the surrounding context without having to specify separate options. Comparing ripgrep with Other Search Tools ripgrep vs grep ripgrep is faster than grep, especially for large codebases, because it skips over ignored files like .gitignore automatically. grep is more universally available but lacks many features that ripgrep provides out of the box. ripgrep vs ag (The Silver Searcher) ripgrep is often compared to ag because both tools are optimized for searching codebases. However, ripgrep tends to be faster and has better support for file globbing and regular expressions. Troubleshooting and Best Practices for Using ripgrep Handling Large Files If you experience memory issues while searching large files, consider using the --max-filesize option: rg "search-term" --max-filesize 10M This limits the search to files under 10MB. Excluding Certain File Types If you want to exclude certain file types globally, you can create a .ripgreprc configuration file in your home directory: --glob '!*.log'--glob '!*.tmp' This will exclude .log and .tmp files from all searches. Conclusion This tutorial has covered the installation of ripgrep, its basic commands, advanced features, and comparisons with other tools. With its speed and efficiency, ripgrep is an excellent choice for developers looking to enhance their search capabilities in large codebases.
27 September 2024 · 4 min to read
Python

Python Sets and Set Operations

A set in Python is an unordered collection of unique elements. It is one of the fundamental data types in Python, offering flexibility in how data is stored and accessed. Unlike lists or tuples, sets do not allow duplicate elements, making them an ideal choice for handling unique values. Sets are often used in situations where operations such as membership testing, union, intersection, and difference are frequently performed. This tutorial will cover the basics of Python sets, how to create them, and how to use Python set operations effectively. By the end, you’ll understand how to leverage sets in your Python projects for optimal performance and readability. Why Use Sets in Python? Sets ensure that there are no duplicate values. They are useful for membership tests, eliminating duplicates, and performing set operations (union, intersection, etc.). The operations on sets in Python are optimized for performance. Creating Sets in Python In Python, sets are created using curly braces {} or the set() constructor. If you use curly braces, you can define a set directly with its elements, while the set() constructor can be used to create an empty set or a set from an iterable. Example 1: Creating a Set Using Curly Braces fruits = {'apple', 'banana', 'cherry'}print(fruits) Example 2: Creating an Empty Set empty_set = set()print(empty_set) Using {} without any elements creates an empty dictionary, not a set. To create an empty set, always use set(). Basic Set Operations Python sets support various operations that allow developers to handle collections of data efficiently. Below are some of the most commonly used set operations. Adding Elements to a Set To add an element to a set, use the add() method. If the element already exists in the set, the set remains unchanged. fruits = {'apple', 'banana'}fruits.add('orange')print(fruits)  # Output: {'apple', 'banana', 'orange'} Removing Elements from a Set You can remove elements using the remove() or discard() methods. The difference between the two is that remove() raises an error if the element is not found, while discard() does not. fruits.remove('banana') print(fruits) # Output: {'apple', 'orange'} # Using discard() to remove a non-existent element fruits.discard('grape') # No error is raised Set Union The union operation combines elements from two sets. The result contains all unique elements from both sets. set_a = {1, 2, 3} set_b = {3, 4, 5} union_set = set_a.union(set_b) print(union_set) # Output: {1, 2, 3, 4, 5} Set Intersection Intersection returns only the elements that are present in both sets. set_a = {1, 2, 3} set_b = {2, 3, 4} intersection_set = set_a.intersection(set_b) print(intersection_set) # Output: {2, 3} Set Difference The difference operation returns elements that are in one set but not in the other. set_a = {1, 2, 3} set_b = {2, 3, 4} difference_set = set_a.difference(set_b) print(difference_set) # Output: {1} Advanced Set Methods Python sets provide several advanced methods that make them powerful tools for handling collections of data. issubset() The issubset() method checks if all elements of one set are present in another set. Example: # Define two sets set_a = {1, 2, 3} set_b = {1, 2, 3, 4, 5} # Check if set_a is a subset of set_b result = set_a.issubset(set_b) # Print the result print(result) # Output: True In this example: set_a contains {1, 2, 3}, and all these elements are present in set_b, which contains {1, 2, 3, 4, 5}. Since set_a is fully contained within set_b, issubset() returns True. 
issuperset() The issuperset() method checks if a set contains all elements of another set. set_a = {1, 2, 3, 4} set_b = {1, 2} print(set_a.issuperset(set_b)) # Output: True Symmetric Difference The symmetric difference returns all elements that are in either of the sets but not in both. set_a = {1, 2, 3} set_b = {3, 4, 5} symmetric_diff = set_a.symmetric_difference(set_b) print(symmetric_diff) # Output: {1, 2, 4, 5} Use Cases of Sets in Python Removing Duplicates from a List One of the simplest use cases for sets is to remove duplicate items from a list. my_list = [1, 2, 2, 3, 4, 4, 5] unique_set = set(my_list) unique_list = list(unique_set) print(unique_list) # Output: [1, 2, 3, 4, 5] Membership Testing Sets are highly optimized for membership testing, i.e., checking if an element is in the set. my_set = {'apple', 'banana', 'cherry'} print('banana' in my_set) # Output: True Mathematical Set Operations Sets can be used to perform complex mathematical operations such as union, intersection, and difference. Example: # Define two sets set_a = {1, 2, 3, 4} set_b = {3, 4, 5, 6} # 1. Union: Elements from both sets union_result = set_a.union(set_b) print(f"Union: {union_result}") # Output: {1, 2, 3, 4, 5, 6} # 2. Intersection: Common elements between both sets intersection_result = set_a.intersection(set_b) print(f"Intersection: {intersection_result}") # Output: {3, 4} # 3. Difference: Elements in set_a but not in set_b difference_result = set_a.difference(set_b) print(f"Difference: {difference_result}") # Output: {1, 2} # 4. Symmetric Difference: Elements that are in either set, but not both sym_diff_result = set_a.symmetric_difference(set_b) print(f"Symmetric Difference: {sym_diff_result}") # Output: {1, 2, 5, 6} In this example: Union gives {1, 2, 3, 4, 5, 6}. Intersection gives {3, 4}. Difference gives {1, 2} (elements only in set_a). Symmetric Difference gives {1, 2, 5, 6} (elements unique to each set). Best Practices for Working with Sets Use Sets for Unique Elements: Since sets automatically remove duplicates, use them when you need a collection of unique values. Avoid Using Sets for Ordered Data: Sets do not maintain the order of elements. If order is important, consider using a list or tuple. Leverage Set Operations: Use built-in set operations like union, intersection, and difference to simplify code that deals with data comparisons. Conclusion Python sets provide a powerful, easy-to-use tool for managing collections of unique elements. Whether you're performing membership tests, eliminating duplicates, or conducting set operations, Python sets are a must-have in any developer’s toolbox. Understanding and utilizing set operations will enhance your ability to write clean, efficient, and maintainable Python code. By following the steps and best practices outlined in this guide, you can confidently use sets in your Python projects.
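For reference, the same operations can also be written with operators instead of method calls; the following is a small, self-contained sketch:

```python
set_a = {1, 2, 3, 4}
set_b = {3, 4, 5, 6}

# Operator equivalents of the methods shown above
print(set_a | set_b)   # union: {1, 2, 3, 4, 5, 6}
print(set_a & set_b)   # intersection: {3, 4}
print(set_a - set_b)   # difference: {1, 2}
print(set_a ^ set_b)   # symmetric difference: {1, 2, 5, 6}

# Subset / superset checks via comparison operators
print({1, 2} <= set_a)   # True: {1, 2} is a subset of set_a
print(set_a >= {1, 2})   # True: set_a is a superset of {1, 2}
```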
27 September 2024 · 5 min to read
Linux

Installing and Configuring cloud-init in Linux

cloud-init is a free and open-source package designed for configuring Linux-based virtual machines during their startup. In a traditional (home) environment, we would install systems from a CD or USB drive and manually configure them via a standard installer. However, in a cloud environment, we may need to configure systems regularly and frequently create, delete, and restart instances. In such cases, manual configuration becomes impractical and unfeasible. cloud-init automates the configuration process and standardizes the setup of virtual machines. What Is cloud-init The main task of cloud-init is to process input metadata (such as user data) and configure the virtual machine before it starts. This allows us to pre-configure servers, install software, prepare working directories, and create users with specific permissions. Cloud-init and Hostman Cloud Servers Hostman cloud servers support working with cloud-init scripts through the control panel. Hostman’s documentation includes a brief guide on using cloud-init scripts directly on their cloud servers. Essentially, Hostman offers a text editor for cloud-init scripts accessible via a web browser, allowing users to pass configuration data directly to the utility before the system starts. Installing Cloud-init There are several ways to get a Linux OS with cloud-init: Use a specialized Linux OS image with pre-installed cloud-init (we’ll mention some key examples below). Use pre-built distributions from cloud providers (most cloud platforms support cloud-init, though the setup processes may vary). Build a custom OS image using HashiCorp Packer. Manually install the cloud-init package. Cloud-init Images Ubuntu: The most common cloud-init image is Ubuntu 22.04 Cloud Images, officially created by Canonical for public cloud use. These images are optimized and tailored for cloud tasks. Debian: Similarly, Debian Cloud offers specialized cloud images for Debian users. Alma Linux: Another distribution designed for cloud deployment is Alma Linux Cloud. VMware: VMware’s Photon image, built for cloud environments, also comes with pre-installed cloud-init. Alternatively, you can install cloud-init manually. Installation via APT In most Linux distributions, cloud-init is installed like any other package and includes three systemd services located in the /lib/systemd/system/ directory: cloud-init.service cloud-config.service cloud-final.service Additionally, there are two more auxiliary systemd services: cloud-init-local.service cloud-init-hotplugd.service Before installing, it's best to update the list of available repositories: sudo apt update Then, download the cloud-init package via APT: sudo apt install cloud-init In some Linux images, cloud-init may already be installed by default. If so, the system will notify you after running the install command. cloud-init also supports additional modules that expand configuration capabilities. The full list of modules is available in the official documentation. Running cloud-init Since cloud-init operates as a service, it starts immediately after the systemd utility starts, i.e., when the physical machine starts and before the system connects to the network. This allows for pre-configuring network settings, gateways, DNS addresses, etc. Cloud-init Workflow There are three main stages in cloud-init’s workflow, during which the system is configured. 
Each stage triggers specific cloud-init services: Before networking (init): Initial setup before the network starts, including system settings, network configurations, and disk preparation. cloud-init-local.service cloud-init.service After networking (config): Network is available, so updates and required packages are installed. cloud-config.service Final stage (final): Final configurations, such as user creation and permission assignments, are applied. cloud-final.service cloud-init-hotplugd.service Cloud-init Modules cloud-init offers additional modules that enhance system configuration. These modules run in sequence at various stages. Depending on the specific use case, they can be triggered during any of the three stages. Module execution is managed through three lists in the configuration file: cloud_init_modules: Modules run during the initialization (init) stage before the network starts. cloud_config_modules: Modules run during the configuration (cloud) stage after the network is up. cloud_final_modules: Modules run during the final stage. In more detail, cloud-init’s stages can be broken down into five steps: systemd checks if cloud-init needs to run during system boot. cloud-init starts, locates local data sources, and applies the configurations. At this stage, the network is configured. During the initial setup, cloud-init processes user data and runs the modules listed under cloud_init_modules in the configuration file. During the configuration phase, cloud-init runs the modules listed under cloud_config_modules. In the final stage, cloud-init runs the modules from cloud_final_modules, installing the specified packages. You can find more details on the cloud-init workflow in the official documentation. Each module also has an additional parameter that specifies how often the module runs during system configuration: per instance: The module runs each time a new system instance (clone or snapshot) boots. per once: The module runs only once during the initial system boot. per always: The module runs at every system startup. Cloud-init Configuration In public (AWS, GCP, Azure, Hostman) or private clouds (OpenStack, CloudStack), a service usually provides the virtual machine with environment data. cloud-init uses these data in a specific order: User data (user-data): Configurations and directives defined in the cloud.cfg file. These may include files to run, packages to install, and shell scripts. Typically, user-data configure specific virtual machine instances. Metadata (meta-data): Environment information, such as the server name or instance ID, used after user-data. Vendor data (vendor-data): Information from cloud service providers, used for default settings, applied after metadata. Metadata is often available at a URL like http://localhost/latest/meta-data/, and user data at http://localhost/latest/user-data/. Cloud-init Scripts When the system boots, cloud-init first checks the YAML configuration files with the scripts and then executes the instructions. YAML is a format for data serialization that looks like markup but is not. The primary YAML configuration file for cloud-init is located at /etc/cloud/cloud.cfg. This file serves as the main configuration script, with directives and parameters for specific cloud-init modules. You can write scripts as YAML files (using #cloud-config) or as shell scripts (using #!/bin/sh). 
Here’s a simple example of a cloud-init script setting a hostname: #cloud-config hostname: my-host fqdn: my-address.com manage_etc_hosts: true In this example: #cloud-config: indicates that the instructions are for cloud-init in YAML format. hostname: sets the short hostname. fqdn: sets the fully qualified domain name. manage_etc_hosts: allows cloud-init to manage the /etc/hosts file. If this option is set to false, cloud-init won’t overwrite manual changes to /etc/hosts on reboot. Cloud-init Script Examples Cloud-init configuration using YAML should start with #cloud-config. Users and Groups When a virtual machine starts, you can predefine users with the users directive: #cloud-config users: - name: userOne gecos: This is the first user groups: sudo shell: sh system: true - name: userTwo gecos: This is the second user groups: sudo shell: /bin/bash system: false expiredate: '2030-01-02' As shown, each new user entry begins with a dash, and parameters are specified in a "key: value" format. These parameters mean: name: User account name gecos: Brief info about the user groups: Groups the user belongs to shell: Default shell for the user, here set to the simplest sh. system: If true, the account will be a system account without a home directory. expiredate: The user's expiration date in the "YYYY-MM-DD" format. Changing User Passwords Another simple directive is chpasswd, used to reset an existing user's password. Example configuration: #cloud-config chpasswd: list: | userOne:passOne userTwo:passTwo userThree:passThree expire: false This sets a list of users and their new passwords. The | symbol indicates a multi-line entry. The expire parameter defines whether the password will need to be changed after expiration. Updating the Repository List cloud-config has a directive for updating the available package list: package_update. It's the declarative equivalent of running  sudo apt update  By default, it's set to true, meaning cloud-init will always update the package list unless explicitly disabled: #cloud-config package_update: false Installing Specific Packages For updating or installing specific packages, use the packages directive: #cloud-config packages: - nginx - nodejs Running Commands The runcmd directive allows you to execute console commands through cloud-config. Simply pass a list of commands that cloud-init will run in sequence: #cloud-config runcmd: - echo 'This is a string command!' >> /somefile.txt - [ sh, -c, "echo 'This is a list command!' >> /somefile.txt" ] Here, two types of commands are used: As a simple string. As a YAML list specifying the executable and its arguments. Another similar directive is bootcmd. While runcmd runs commands only on the system's first boot, bootcmd runs commands on every boot: #cloud-config bootcmd: - echo 'Command that runs at every system boot!' Creating and Running a Script You can combine runcmd with the write_files directive to create and run a script: #cloud-config write_files: - path: /run/scripts/somescript.sh content: | #!/bin/bash echo 'This script just executed!' permissions: '0755' runcmd: - [ sh, "/run/scripts/somescript.sh" ] The permissions parameter (set to 0755) means the script is readable and executable by all, but only writable by the owner. Overriding Module Execution You can override the list of modules to be executed at specific configuration stages. 
For example, the default cloud_config_modules list might look like this: #cloud-config cloud_config_modules: - emit_upstart - snap - ssh-import-id - locale - set-passwords - grub-dpkg - apt-pipelining - apt-configure - ubuntu-advantage - ntp - timezone - disable-ec2-metadata - runcmd - byobu Remember, there are three stages: cloud_init_modules cloud_config_modules cloud_final_modules If you remove runcmd, for example, the commands within it won’t execute. Updating Repositories and Installing Packages via Shell Script cloud-init configurations can also consist purely of shell scripts. In this case, the script starts with #!/bin/sh instead of #cloud-config: #!/bin/sh apt update apt -y install nodejs apt -y install nginx The -y flag automatically answers "yes" to any prompts during installation. Conclusion In this guide, we covered the theoretical and practical aspects of using cloud-init: How cloud-init works. How to interact with cloud-init for system configuration. Writing scripts in YAML or shell format. Example configurations. cloud-init runs before the system boots, ensuring that the instance follows the desired configuration (network, directories, packages, updates). cloud-init uses modules for specific configuration tasks, and the system configuration is done in phases: init (before networking) config (after networking) final (last stage) More detailed information is available in the official documentation maintained by Canonical, the primary developer of Ubuntu.
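Putting the directives above together, a complete user-data file for a first boot might look like the sketch below; the hostname, user name, and package list are placeholders, not values required by cloud-init:

```yaml
#cloud-config
hostname: web-01
manage_etc_hosts: true
package_update: true
packages:
  - nginx
  - git
users:
  - name: deploy
    gecos: Deployment user
    groups: sudo
    shell: /bin/bash
runcmd:
  - systemctl enable --now nginx
  - echo 'First boot finished' >> /var/log/first-boot.log
```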
26 September 2024 · 10 min to read
Docker

Configuring External Docker Registries

When working with Docker, users deal with images which are executable files that contain everything needed to run an application, including the app's source code, libraries, etc. These images are stored in specialized repositories known as registries, which can be either private or public. The most well-known public registry is Docker Hub, where you can find many official images like Nginx, PostgreSQL, Alpine, Ubuntu, Node, and MongoDB. Users can register on Docker Hub and store their images, with access to three private repositories and one public repository by default. Docker Hub is the default registry used by Docker to pull images. This guide will cover changing Docker's default registry to another one. Using External Docker Registries A simple way to use external registries is to leverage third-party registries offered by companies like Google and Amazon. Below is a list of public registries you can use: Owner Registry URL Google https://mirror.gcr.io Amazon https://public.ecr.aws Red Hat https://quay.io https://registry.access.redhat.com https://registry.redhat.io Using unknown external Docker registries may pose security risks, so proceed with caution. Follow the steps below to switch the default Docker Hub registry to another one. Linux Configuration Open the daemon.json file using any text editor. If Docker is installed normally (not in rootless mode), the file is located in /etc/docker. If the file doesn’t exist, the command will create it: nano /etc/docker/daemon.json For Docker in rootless mode, the file is located at ~/.config/docker in the user's home directory. Again, the command will create the file if it doesn't exist: nano ~/.config/docker/daemon.json Add the following parameter to set a new default registry (https://mirror.gcr.io in this example): {  "registry-mirrors": ["https://mirror.gcr.io"]} Save and exit the file. Restart the Docker service to apply the changes: systemctl reload docker Now, when you pull an image, Docker will use the newly specified registry. For example, pull the Alpine image from Google's registry: docker pull mirror.gcr.io/alpine You can also specify a tag. For instance, pull Nginx version 1.25.2: docker pull mirror.gcr.io/nginx:1.25.2 Windows Configuration (Docker Desktop) Open the daemon.json file located at: C:\Users\<your_username>\.docker\daemon.json Add the registry-mirrors parameter: {  "registry-mirrors": ["https://mirror.gcr.io"]} Save the file, then restart Docker. Right-click the Docker icon in the system tray and select "Restart." Alternatively, you can configure the registry via Docker Desktop’s UI. Go to the Docker Engine section and add: {  "registry-mirrors": ["https://mirror.gcr.io"]} Click Apply & Restart to save the changes and restart Docker. After restarting, Docker will use the new registry for image pulls. For example, download a curl image: docker pull mirror.gcr.io/curlimages/curl To pull a specific version, specify the tag. For example: docker pull mirror.gcr.io/node:21-alpine Using Nexus as a Docker Registry You can also use Nexus to manage Docker images. Nexus supports proxy repositories, which cache images pulled from external registries like Docker Hub. This allows Nexus to act as a caching proxy repository for Docker images, which can be useful if external registries are unavailable. Setting up a Proxy Repository in Nexus Log in to Nexus using an administrator or a user with repository creation rights. Go to Server Administration and Configuration and navigate to Repositories. 
Using Nexus as a Docker Registry

You can also use Nexus to manage Docker images. Nexus supports proxy repositories, which cache images pulled from external registries such as Docker Hub. This lets Nexus act as a caching proxy for Docker images, which is useful if external registries become unavailable.

Setting up a Proxy Repository in Nexus

Log in to Nexus as an administrator or a user with repository creation rights. Go to Server Administration and Configuration and navigate to Repositories. Click Create repository and choose the docker (proxy) type. Fill out the required fields:

  • Name: give the repository a unique name.
  • Online: make sure this checkbox is checked so the repository accepts incoming requests.
  • HTTP/HTTPS connector ports: if Nexus is behind a reverse proxy (such as Nginx), you don't need to assign a dedicated port; otherwise, assign a unique HTTP or HTTPS port to the repository.
  • Allow anonymous docker pull: if checked, you won't need to authenticate with docker login; if not checked, you'll need to log in before pulling images.
  • Remote storage: specify the URL of the external registry (e.g., https://registry-1.docker.io for Docker Hub).

Once the repository is created, log in to the Nexus registry (if authentication is required):

docker login <nexus_registry_address>

To pull an image, use the following format:

docker pull <nexus_registry_address>/image_name:tag

For example, to pull a Python image with the 3.8.19-alpine tag:

docker pull nexus-repo.com/python:3.8.19-alpine

Avoid using the latest tag: it changes over time and may pull in bugs or vulnerabilities you haven't reviewed.

Conclusion

This article reviewed several methods for pulling and storing Docker images. Third-party Docker registries can be helpful when the default registry is unavailable. If you don't trust external registries, you can always set up your own private or public registry.
26 September 2024 · 4 min to read
Git

How to Use the Git Reset Command

Today, it's hard to imagine the work of a programmer or IT professional without version control. Among the various SCM tools, Git stands out, having quickly gained popularity and become the de facto standard among version control systems. Git lets you easily track changes to project files, manage branches, collaborate, and centrally store code and other files.

One of Git's strengths is its flexible ability to undo or remove changes. One such way to undo changes is the git reset command, which supports three different modes. In this tutorial, we'll explore how to undo changes using git reset and its modes through practical examples.

Prerequisites

We'll focus on practical use of the git reset command, so you need Git installed beforehand. We'll use a Linux-based operating system, specifically Ubuntu 22.04, but any Linux distribution will work, as Git is available in nearly all modern package managers. In most distributions, Git comes pre-installed, though the version may not always be the latest.

On Ubuntu-based systems, you can install Git from the official repository with:

add-apt-repository ppa:git-core/ppa && apt -y install git

On other Debian-based distributions (Debian, Linux Mint, Kali Linux, etc.):

apt -y install git

On RHEL-based distributions (RedHat, CentOS, Fedora, Oracle Linux), the installation command depends on the package manager.

For yum:

yum -y install git

For dnf:

dnf -y install git

After installation, verify the Git version:

git --version

What is git reset?

The git reset command is used to undo local changes. Technically speaking, git reset moves the HEAD pointer to a previous commit in the repository. HEAD is a pointer to the current branch and points to the latest commit in that branch.

The git reset command operates on three key elements: the working directory, the HEAD pointer, and the index. These elements are often referred to as "trees" in Git, as they are structured using nodes and pointers. We'll go into detail about each of them below.

It's worth noting that Git-based web services like GitHub, GitLab, and Bitbucket let you undo actions through their web interface. However, they typically use a safer alternative, git revert, which preserves the entire project history, unlike git reset, which can permanently remove commits.

The Working Directory

The working directory is where files are stored and tracked by Git. When you run git reset, Git knows which directory is being tracked thanks to the hidden .git folder created when you initialize a repository with git init. Here's how the working directory works in practice.

Create a new directory and navigate into it:

mkdir new_project && cd new_project

Initialize a new Git repository:

git init

Once you initialize the repository, a hidden .git folder containing Git configuration files is created in the project's root directory.

The HEAD Pointer

HEAD points to the current branch and the latest commit in that branch. Every time you switch branches with git checkout, HEAD updates to point to the latest commit of the new branch. Here's a practical example.

Create a new file:

touch new1.txt

Add the file to the repository:

git add new1.txt

Commit the file:

git commit -m "Initial commit"

To see where HEAD is pointing, use the git cat-file command:

git cat-file -p HEAD

Since there's only one commit, HEAD points to it.

Now, let's modify the file and commit it again. Modify the file:

echo "This is a test file" > new1.txt

Stage the file:

git add new1.txt

Commit the changes:

git commit -m "Added content to new1.txt"

Check the HEAD pointer again:

git cat-file -p HEAD

As you can see, HEAD now points to the new, latest commit.
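If you just need the commit hash that HEAD currently points to, git rev-parse and git log offer a shorter check. A quick sketch:

# Print the full hash of the commit HEAD points to
git rev-parse HEAD

# Show the latest commit in compact one-line form
git log --oneline -1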
The Index

The index (or "staging area") is where files go after being added with git add. Think of it as a pre-commit area: files in the index are tracked by Git but not yet part of a commit. You can remove or modify files in the index before committing them.

Create a new file:

touch new2.txt

Add it to the index:

git add new2.txt

Check the status:

git status

The file is now in the staging area but not yet committed.

Git Reset Modes

The git reset command supports three modes: soft, mixed, and hard.

Soft Mode

The soft mode undoes the last commit but keeps the changes in the index, so you can modify and recommit them.

Create a new file:

touch new3.txt

Add it to the index:

git add new3.txt

Commit the file:

git commit -m "Added new3.txt"

Running git log at this point shows this commit at the top of the history. To undo the last commit:

git reset --soft HEAD~1

The commit is undone, but the file remains in the index.

Mixed Mode

The mixed mode is the default for git reset. It undoes the commit and resets the index but leaves the working directory untouched.

Create three new files:

touch new{1..3}.txt

Add and commit them:

git add new1.txt new2.txt new3.txt
git commit -m "Added three files"

Now undo the commit:

git reset HEAD~1

The files remain, but the last commit is removed.

Hard Mode

The hard mode deletes the commit, resets the index, and removes the files from the working directory. This is the most destructive option.

Create and commit a file:

touch readme.md
git add readme.md
git commit -m "Added readme.md"

To remove the commit and the file:

git reset --hard HEAD~1

The file and the commit are permanently deleted.

Resetting to an Earlier Commit

You can also reset to a specific commit using its hash:

git reset --hard <commit-hash>

This resets the repository to that specific commit.

Conclusion

In this tutorial, we explored the git reset command and its modes: soft, mixed, and hard. While git reset is a powerful tool for undoing local changes, it's essential to understand each mode's impact, especially the risks of hard mode, to avoid irreversible data loss.
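One final practical note: if a hard reset removes commits you still need, the reflog usually lets you get them back (uncommitted working-directory changes, however, are not recoverable this way). A minimal sketch, assuming the lost state is still listed in the reflog:

# List recent HEAD movements, including resets
git reflog

# Suppose the entry just before the reset is HEAD@{1};
# move the branch back to that state
git reset --hard HEAD@{1}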
26 September 2024 · 5 min to read
Git

Git Checkout: How to Work with Branches

The checkout command in the Git version control system is responsible for switching between different branches in a repository. Each switch updates the files in the working directory to match the data stored in the selected branch. Every subsequent commit is then added to the active branch chosen with the checkout command.

This guide covers various ways to use the git checkout command and related commands (such as git branch, git reflog, and git remote show) that let you work with both local and remote branches.

Creating a Repository

First, let's prepare a directory for a test Git project:

mkdir project

Then navigate to it:

cd project

Finally, initialize the Git repository:

git init

Creating a File and Committing

To understand how branch switching affects the working directory (and the repository as a whole), we'll create a basic project source file with trivial content:

sudo nano file_m

The content of the file will be:

file in master

Let's check the contents of the working directory:

ls

There is only one file:

file_m

Now let's stage the changes:

git add file_m

Then commit them:

git commit -m "First commit"

Throughout this guide, we'll observe how working with branches impacts the contents of the working directory, particularly the files we create or edit.

Creating a New Branch

Let's assume we want to introduce a new feature into our project but are unsure of its necessity or effectiveness. Essentially, we want to test a hypothesis while keeping the ability to revert to the stable version of the project. To do this, Git allows us to create separate branches and switch between them, so we can test the project both with and without the feature.

First, let's check which branch we are currently on:

git branch

The console output shows the active branch, master, highlighted and marked with an asterisk:

* master

We committed the previous changes to this branch, which means the file_m file belongs to it. Now we'll create a separate branch for the new feature using the same git branch command, this time with a new branch name:

git branch feature1

It's important to note that git branch does not automatically switch to the newly created branch. We can confirm this by listing the branches again:

git branch

The list now includes the feature1 branch, but the active branch (marked with an asterisk) is still master:

  feature1
* master

Now we have multiple branches to switch between.
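To see at a glance which commit each branch points to, you can add the -v flag to git branch. A small sketch (the hash and message will differ in your repository; at this point both branches still point to the same first commit):

git branch -v
# Example output:
#   feature1  24c65ff First commit
# * master    24c65ff First commit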
Switching to an Existing Branch

To manually switch to an existing branch, use the checkout command with the branch name:

git checkout feature1

The console confirms the successful switch:

Switched to branch 'feature1'

Let's check the list of existing branches again:

git branch

As you can see, the active branch is now feature1:

* feature1
  master

Let's check the working directory again:

ls

It still contains the same file that was "inherited" from the master branch:

file_m

Since the feature1 branch is meant for modifying the project, we'll create another file:

sudo nano file_f1

Its content will be:

file in feature1

Let's stage the changes:

git add file_f1

And commit them:

git commit -m "Commit from feature1"

Now check the working directory again:

ls

There are now two files:

file_m  file_f1

Now let's switch back to the main branch:

git checkout master

After this, the working directory will only contain the original file:

file_m

Each time we switch between branches, the files in the working directory are updated to reflect the commits that exist in the active branch.

Switching to a New Branch

Let's assume we want to add another feature to our project, which means creating a new branch. First, make sure we're on the master branch:

git checkout master

Now attempt to switch to a branch that hasn't been created yet, feature2:

git checkout feature2

As expected, you'll receive an error:

error: pathspec 'feature2' did not match any file(s) known to git

However, git checkout can create a new branch and switch to it in one step using the -b flag:

git checkout -b feature2

The console confirms the successful switch:

Switched to a new branch 'feature2'

In essence, git checkout with the -b flag is equivalent to running the following two commands:

git branch feature2
git checkout feature2

Recheck the list of existing branches:

git branch

Now we have the feature2 branch, which became active immediately upon creation:

  feature1
* feature2
  master

The new branch is based on the branch (its working directory and commit history) that was active when it was created. Since we switched to master before creating feature2, the working directory should contain only file_m, not file_f1.

Deleting a Branch

You cannot delete a branch that is currently active:

git branch -d feature2

The -d flag requests deletion of the specified branch. The console displays an error message:

error: Cannot delete branch 'feature2' checked out at '/root/project'

So, first switch to another branch:

git checkout master

Then proceed with the deletion:

git branch -d feature2

This time, the console confirms that the branch was deleted:

Deleted branch feature2 (was 24c65ff).

The list of existing branches now looks like this:

  feature1
* master
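Note that -d only deletes branches that are fully merged; for a branch with unmerged commits Git refuses and suggests the force flag. A small sketch using a hypothetical branch named experiment (not part of this guide's project):

git branch -d experiment   # refuses if 'experiment' has unmerged commits
git branch -D experiment   # force-deletes it regardless (unmerged commits may be lost)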
Creating a Branch from Another Branch

Git allows you to specify which branch to base a new branch on without switching to it first. Let's first make sure we're currently on the master branch:

git checkout master

At this point, the special HEAD pointer points to the active master branch, which in turn points to the latest commit of this branch. Previously, we created the feature2 branch from the active master branch. This time, we'll create feature2 from the feature1 branch (instead of master) without explicitly switching to it; we'll stay on master:

git checkout -b feature2 feature1

Now the active branch is feature2. Let's check the contents of the working directory:

ls

As you can see, the state of the directory matches feature1, not master:

file_m  file_f1

We can also look at the commit history:

git log

The feature2 branch contains both the commit from master and the commit from feature1:

commit fb1b1616c85c258f647df4137df535df5ac17d6c (HEAD -> feature2, feature1)
Author: root <[email protected]>
Date:   Tue Feb 13 02:18:02 2024 +0100

    Commit from feature1

commit 24c65ffab574a5e478061034137298ca2ce33c94 (master)
Author: root <[email protected]>
Date:   Mon Feb 12 11:30:56 2024 +0100

    First commit

Resetting a Branch to Another Branch

In addition to creating a branch from another, the checkout command can reset an existing branch to match the state of another branch. For example, we can reset the feature2 branch to match the state of master:

git checkout -B feature2 master

Note the use of the -B flag instead of -b. The console shows the following message:

Reset branch 'feature2'

Check the working directory:

ls

Only one file remains:

file_m

The list of "inherited" commits in the feature2 branch now matches the commits of the master branch:

git log

The console shows only one commit, the very first one:

commit 24c65ffab574a5e478061034137298ca2ce33c94 (HEAD -> feature2, master)
Author: root <[email protected]>
Date:   Mon Feb 12 11:30:56 2024 +0100

    First commit

Viewing Checkout History

Switching branches is not just a read operation; it changes the repository and creates a new record in the checkout history. Git has a dedicated command to display the full history of branch switches:

git reflog

The history of operations is displayed from bottom to top, with the most recent switches at the top:

fb1b161 (HEAD -> feature2, feature1) HEAD@{1}: checkout: moving from master to feature2
24c65ff (master) HEAD@{2}: checkout: moving from feature1 to master
fb1b161 (HEAD -> feature2, feature1) HEAD@{3}: commit: Added the first feature
24c65ff (master) HEAD@{4}: checkout: moving from master to feature1
24c65ff (master) HEAD@{5}: checkout: moving from feature2 to master
24c65ff (master) HEAD@{6}: checkout: moving from feature1 to feature2
24c65ff (master) HEAD@{7}: checkout: moving from master to feature1
24c65ff (master) HEAD@{8}: commit (initial): First commit
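A related convenience when hopping between branches: git checkout - returns you to the previously checked-out branch. A quick sketch:

git checkout master     # move to master
git checkout feature1   # move to feature1
git checkout -          # jump back to master (the previous branch)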
Switching to a Remote Branch

Adding a Remote Repository

Suppose we have a remote GitHub repository we are working with over HTTPS:

git remote add repository_remote https://github.com/USER/REPOSITORY.git

Alternatively, we could access it via SSH:

git remote add repository_remote git@github.com:USER/REPOSITORY.git

In this case, an SSH key needs to be generated beforehand:

ssh-keygen -t rsa -b 4096 -C "GITHUB_ACCOUNT_EMAIL"

The public key (the .pub file, stored in the ~/.ssh directory) is then added to the GitHub account settings under SSH Keys.

In our case, the remote repository will be Nginx:

git remote add repository_remote https://github.com/nginx/nginx

Fetching Files from a Remote Branch

After adding the remote repository, we can list all of its branches:

git remote show repository_remote

Before switching to a remote branch, we first need to retrieve detailed information about the remote repository, including its branches and tags:

git fetch repository_remote

You can also fetch from all remote repositories at once:

git fetch --all

Now we can switch directly to a remote branch and retrieve its files into the working directory:

git checkout branches/stable-0.5

In older Git versions, it was necessary to specify the remote repository explicitly:

git checkout repository_remote/branches/stable-0.5

Now, if you run:

git branch

you will see the remote branch listed as active:

* branches/stable-0.5
  feature2
  feature1
  master

Check the state of the working directory:

ls

It now contains the following directories:

auto  conf  contrib  docs  misc  src

You can delete such a branch just like any other local one. First, switch to a different branch:

git checkout master

Then delete the branch:

git branch -D branches/stable-0.5

Now the branch list looks like this:

  feature2
  feature1
* master

Switching to a Specific Commit

Just like switching branches, you can switch to a specific commit. However, it's important to understand the difference between commits and branches: branches diverge from the project's timeline without disrupting the sequence of changes, while commits are progress points that capture the state of the project at particular moments.

Let's first switch to the latest branch we created:

git checkout feature2

To switch to a specific commit, provide the commit hash (ID) instead of the branch name:

git checkout fb1b1616c85c258f647df4137df535df5ac17d6c

To find the hash, use:

git log

In our case, the commit history looks like this (only the hashes may differ):

commit fb1b1616c85c258f647df4137df535df5ac17d6c (HEAD -> feature2, feature1)
Author: root <[email protected]>
Date:   Tue Feb 13 02:18:02 2024 +0100

    Commit from feature1

commit 24c65ffab574a5e478061034137298ca2ce33c94 (master)
Author: root <[email protected]>
Date:   Mon Feb 12 11:30:56 2024 +0100

    First commit

After switching to a commit, you can check which branch is currently active:

git branch

The list of branches now looks like this:

* (HEAD detached at fb1b1616c)
  feature2
  feature1
  master

This results in a "detached HEAD" state: any subsequent commits won't belong to any existing branch. This mode is risky, because without a branch in the HEAD pointer those commits can easily be lost. For this reason, it's common to "wrap" the chosen commit in a new branch before continuing to modify the project. Switching to a specific commit is usually done to review changes made at a particular stage of development.

Difference Between checkout and switch

In later Git versions (2.23 and above), there's another command for working with branches: switch. The two commands are quite similar, but switch is more specialized. git switch is a newer command focused purely on branch operations, while git checkout is an older command that also handles "peripheral" tasks, such as creating new branches while switching or resetting the working directory to the state of a specific commit. git checkout has a more universal (and less standardized) syntax, which can make it seem more complex and error-prone compared to git switch.
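For comparison, here are git switch and git restore equivalents of the checkout operations used in this guide (available in Git 2.23 and newer); a brief sketch:

git switch feature1              # same as: git checkout feature1
git switch -c feature2 feature1  # same as: git checkout -b feature2 feature1
git switch --detach fb1b161      # same as: git checkout fb1b161 (detached HEAD)
git restore file_m               # discard working-directory changes to file_m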
Conclusion

In this guide, we've covered the git checkout command, primarily used for switching between different branches in a repository. Here's a complete list of what the checkout command can do:

  • Switch between existing local branches.
  • Create new local branches.
  • Create new local branches based on other branches.
  • Reset existing local branches to the state of other branches.
  • Switch between existing remote branches (and download their files into the working directory).
  • Switch to a specific commit from a local or remote branch.

After switching to another branch, commands like git add and git commit typically follow to stage changes and update the repository state within that branch. Always be cautious: switching branches after making changes in the working directory without committing can result in data loss. For more information on working with Git, refer to the official documentation.
26 September 2024 · 11 min to read
VPN

How to Set Up WireGuard VPN

WireGuard VPN is an open-source project that makes it easy to set up encrypted tunnels for secure networking.

WireGuard VPN pros:

  • Minimal latency and maximum throughput.
  • Easy installation and configuration.

WireGuard VPN cons:

  • Requires additional software on client devices (though this isn't a major issue, since it supports all platforms, and many modern routers ship with WireGuard support built in).

There are many guides and tutorials on installing and setting up WireGuard VPN. The official website provides detailed instructions, but this guide shows simple ways to start using WireGuard with examples, focusing on practical steps rather than theory.

Setting Up a WireGuard Server via Hostman Marketplace

The easiest way to install WireGuard VPN on a cloud server is to use the Hostman Marketplace. In the control panel, go to Cloud Servers > Create > Marketplace > Network > WireGuard GUI. Choose a location (e.g., Netherlands), select the minimal configuration, and click Order. Creating the virtual machine and installing the software takes around 5 minutes; once it's ready, you'll receive an email confirmation.

WireGuard Configuration and Connection

Follow the link in the email to access the interface and log in using your password. Add new WireGuard clients to connect Android and Windows devices. There are two ways to connect a client device to the server:

  • QR code: convenient for mobile devices.
  • Config file: easier for PC setups.

In the interface, you'll see buttons to generate a QR code or download the configuration file.

Android Setup

Download the official WireGuard app from Google Play. Open the app, scan the QR code from the web interface, and tap "Connect." To confirm the connection, check your IP address on whatismyipaddress.com. If it shows the server's IP, you're successfully connected.

Windows Setup

Download the WireGuard Windows client from the official site, then download the WireGuard configuration file from the web interface. Open the client, add a tunnel, select the file, and click "Connect." That's it!

There are more advanced configuration options, but this basic setup should be enough for most users. WireGuard tends to be a "set it and forget it" solution; it works reliably after the initial setup.

Speed Testing

To check the server connection speed, install the Speedtest CLI tool:

curl -s https://packagecloud.io/install/repositories/ookla/speedtest-cli/script.deb.sh | sudo bash

sudo apt-get install speedtest

I got a speed of 194 Mbps, which is excellent.
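You can also verify on the server itself that the tunnel is up and clients are exchanging traffic by querying the WireGuard interface (assuming the wg command-line tool is available where the tunnel runs):

wg show
# Lists the interface, its public key, the listening port,
# and each connected peer with its latest handshake and transferred bytes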
Setting Up a WireGuard Server Using Docker Compose

While the WebGUI and the one-click Hostman Marketplace setup are easy, you may want more control over the configuration. Since I prefer working with Docker, I'll use it to install the same WireGuard with a web interface.

Start with a clean system: Cloud Servers > Create > Select Ubuntu 22.04. After creation, connect to the server, update packages, and install Docker:

apt update && apt upgrade -y

curl -fsSL https://get.docker.com -o get-docker.sh

sudo sh get-docker.sh

Create a new docker-compose.yml file:

nano docker-compose.yml

Add the following configuration:

version: '3.8'
services:
  wireguard:
    image: weejewel/wg-easy:7
    environment:
      WG_HOST: 'your-server-ip'      # Hostname or IP address
      PASSWORD: 'MegaSuperPass@42'   # Web GUI password
    volumes:
      - ./wireguard:/etc/wireguard
    ports:
      - 51820:51820/udp
      - 51821:51821/tcp
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0
      - net.ipv4.ip_forward=1
      - net.ipv4.conf.all.src_valid_mark=1
    restart: always

Replace your-server-ip with your actual server IP address and set a password. Save the file and start the service:

docker compose up -d

You can now access the web interface at http://your-server-ip:51821. The project used here is called wg-easy, and you can explore additional settings in its repository.

Additional Configuration Options

In the Docker Compose file, you can adjust the following settings:

  • PASSWORD: password for the WebGUI.
  • WG_HOST: hostname or IP address.
  • WG_DEVICE: the Ethernet device to use for WireGuard traffic.
  • WG_PORT: the public UDP port (default: 51820).
  • WG_MTU: the MTU used by clients (otherwise the server's default MTU is used).
  • WG_PERSISTENT_KEEPALIVE: time in seconds to keep connections alive; if set to 0, no keep-alive is sent.
  • WG_DEFAULT_ADDRESS: the address range for clients.
  • WG_DEFAULT_DNS: DNS server.
  • WG_ALLOWED_IPS: the IP ranges that clients route through the tunnel.

This setup gives you more control over the configuration, restart management, and the ability to run additional services in Docker containers if needed.

Accessing Local Resources

One common issue with VPNs is losing access to local network resources, because all traffic is routed through the tunnel by default. To solve this, modify the AllowedIPs setting. By default, it's set to 0.0.0.0/0, which sends all traffic through the VPN. To retain access to local resources, replace it with a list of ranges that should go through the VPN, leaving local/private ranges out so they bypass the tunnel.

Add the following environment variable in your docker-compose.yml and restart the container:

environment:
  WG_ALLOWED_IPS: '0.0.0.0/5, 8.0.0.0/7, 11.0.0.0/8, 12.0.0.0/6, 16.0.0.0/4, 32.0.0.0/3, 64.0.0.0/2, 128.0.0.0/3, 160.0.0.0/5, 168.0.0.0/6, 172.0.0.0/12, 172.32.0.0/11, 172.64.0.0/10, 172.128.0.0/9, 173.0.0.0/8, 174.0.0.0/7, 176.0.0.0/4, 192.0.0.0/9, 192.128.0.0/11, 192.160.0.0/13, 192.169.0.0/16, 192.170.0.0/15, 192.172.0.0/14, 192.176.0.0/12, 192.192.0.0/10, 193.0.0.0/8, 194.0.0.0/7, 196.0.0.0/6, 200.0.0.0/5, 208.0.0.0/4, 8.8.8.8/32'

Alternatively, edit the client's configuration file:

[Peer]
PublicKey = PublicKey
PresharedKey = PresharedKey
AllowedIPs = 0.0.0.0/5, 8.0.0.0/7, 11.0.0.0/8, 12.0.0.0/6, 16.0.0.0/4, 32.0.0.0/3, 64.0.0.0/2, 128.0.0.0/3, 160.0.0.0/5, 168.0.0.0/6, 172.0.0.0/12, 172.32.0.0/11, 172.64.0.0/10, 172.128.0.0/9, 173.0.0.0/8, 174.0.0.0/7, 176.0.0.0/4, 192.0.0.0/9, 192.128.0.0/11, 192.160.0.0/13, 192.169.0.0/16, 192.170.0.0/15, 192.172.0.0/14, 192.176.0.0/12, 192.192.0.0/10, 193.0.0.0/8, 194.0.0.0/7, 196.0.0.0/6, 200.0.0.0/5, 208.0.0.0/4, 8.8.8.8/32
Endpoint = Endpoint
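If you go the docker-compose.yml route, the change is applied by recreating the container; a minimal sketch:

# Recreate the wireguard container with the updated environment
docker compose up -d

# Follow the container logs to confirm it started cleanly
docker compose logs -f wireguard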
Conclusion

WireGuard VPN is one of the easiest and most convenient services for secure networking. I've worked with PPTP, SSTP, L2TP/IPsec, and others, each with its pros and cons. For now, WireGuard covers all my needs without any hassle. The project is actively developing, with more devices supporting WireGuard and third-party teams creating additional UIs for easier configuration, such as the NetMaker project.
25 September 2024 · 6 min to read
Ubuntu

How to Install and Configure VNC on Ubuntu

Various protocols are used to organize remote access to computers and servers. For Windows, the native protocol is RDP, while for Unix/Linux we mostly use SSH. However, there is another option: VNC. This guide covers installing a VNC server, specifically the TightVNC implementation, on Ubuntu 22.04, and explains how to connect to it.

What is VNC?

VNC (Virtual Network Computing) is a system for remote access to computers and servers based on the RFB (Remote FrameBuffer) protocol. Over a network connection, it transmits keyboard input and mouse movements from one machine to another. VNC is a platform-independent, cross-platform solution. VNC consists of a server and a client: the server provides access to the device's screen, and the client displays the server's screen. We will use TightVNC, which is open source, optimized for slow connections, and widely supported by third-party VNC client programs.

VNC vs. RDP

While VNC and RDP both provide remote access, there are key differences. RDP is a proprietary protocol developed by Microsoft for Windows, while VNC is cross-platform, running on Windows, Linux/Unix, and macOS, and is open source and free. RDP transmits a video stream, displaying the remote desktop after the connection is initiated, whereas VNC sends pixel data directly. RDP includes built-in encryption and authentication integrated with Windows, while VNC requires additional security configuration. RDP also supports device forwarding, file transfers, and peripheral access (e.g., USB drives and printers), while VNC focuses primarily on remote desktop functionality.

Prerequisites

To install and configure VNC, you'll need:

  • A VPS running Ubuntu 22.04.
  • A VNC client program installed on any operating system, as VNC is cross-platform. Some client programs are listed in the "Connecting to the VNC Server" section.

Installing TightVNC and Xfce

First, we'll install the TightVNC server and the Xfce desktop environment, which is lightweight and optimized for TightVNC. Run the following commands as root or as a user with sudo privileges.

Update the package list and install the required packages:

apt update && apt -y install xfce4 xfce4-goodies tightvncserver

If you are using UFW, iptables, or another firewall tool, open port 5901 for VNC connections.

For UFW:

ufw allow 5901

You can also temporarily disable UFW for testing:

systemctl stop ufw

For iptables, allow incoming connections on port 5901:

iptables -I INPUT -p tcp --dport 5901 -j ACCEPT

And allow outgoing connections on port 5901:

iptables -I OUTPUT -p tcp --sport 5901 -j ACCEPT
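Note that iptables rules added this way do not survive a reboot by default. One common way to persist them on Ubuntu (an optional step, not required for this guide) is the iptables-persistent package:

apt -y install iptables-persistent

# Save the current rule set so it is restored on boot
netfilter-persistent save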
Configuring the TightVNC Server

Once TightVNC is installed, we need to configure it. Set the password for accessing the remote host by running the vncserver command:

vncserver

The password should be between 6 and 8 characters; if it's longer, TightVNC will truncate it to 8 characters. You will also be prompted to set an optional view-only password, which allows users to view the remote screen without controlling it. To set this password, type y and provide a password; if you don't need this feature, enter n.

After running vncserver, you'll see output similar to:

Creating default startup script /root/.vnc/xstartup
Starting applications specified in /root/.vnc/xstartup
Log file is /root/.vnc/[hostname]:1.log

Stop the VNC server to configure it further:

vncserver -kill :1

Back up the default configuration file before editing it:

cp ~/.vnc/xstartup ~/.vnc/xstartup.bak

Open the configuration file in a text editor:

nano /root/.vnc/xstartup

Add the following line to the end of the file:

startxfce4

Save the changes and exit. Restart the VNC server:

vncserver

Managing TightVNC with systemd

We'll create a systemd service to manage TightVNC more easily. Create a new unit file:

nano /etc/systemd/system/vncserver.service

Add the following content:

[Unit]
Description=TightVNC server
After=syslog.target network.target

[Service]
Type=forking
User=root
PAMName=login
PIDFile=/root/.vnc/%H:1.pid
ExecStartPre=-/usr/bin/vncserver -kill :1 > /dev/null 2>&1
ExecStart=/usr/bin/vncserver
ExecStop=/usr/bin/vncserver -kill :1

[Install]
WantedBy=multi-user.target

Reload the systemd daemon:

systemctl daemon-reload

Enable the service to start on boot and start it now:

systemctl enable --now vncserver

Check the VNC server status:

systemctl status vncserver

If the status shows "active (running)," the server is running successfully.

Connecting to the VNC Server

There are various VNC client programs, both free and paid. Examples include UltraVNC and TightVNC Viewer for Windows, Remmina for Linux, and RealVNC for macOS.

For example, to connect using TightVNC Viewer on Windows, enter the server's IP address and port in the format:

IP_address::port

Note: TightVNC uses :: to separate the IP and port, whereas other programs may use a single :. When prompted, enter the password you set earlier. Once authenticated, the remote desktop will appear.

TightVNC Viewer allows saving sessions for quick connections: click the save icon, provide a name, and save the file with a .vnc extension. You can also save the password for easier future access.

For increased security, it's recommended to use SSH tunnels when connecting over VNC (see the example after the conclusion).

Conclusion

VNC is a convenient system for remote access, often used for technical support or server maintenance. This guide walked through installing and configuring TightVNC on an Ubuntu server and connecting to it from a remote machine. With these simple setup steps, you can have a VNC server running in no time.
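As mentioned above, an SSH tunnel is the usual way to protect a VNC session, since TightVNC does not encrypt session traffic itself. A minimal sketch, assuming the server is reachable over SSH as root (replace your_server_ip with the server's address):

# On the client machine: forward local port 5901 to the VNC server's display :1
ssh -L 5901:localhost:5901 root@your_server_ip

# Then point the VNC client at localhost instead of the public IP,
# e.g. in TightVNC Viewer: localhost::5901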
25 September 2024 · 5 min to read

Answers to Your Questions

What is Hostman used for, and what services do you offer?

Hostman is a cloud platform where developers and tech teams can host their solutions: websites, e-commerce stores, web services, applications, games, and more. With Hostman, you have the freedom to choose services, reserve as many resources as you need, and manage them through a user-friendly interface.

Currently, we offer ready-to-go solutions for launching cloud servers and databases, as well as a platform for testing any applications.


  • Cloud Servers. Your dedicated computing resources on servers in Poland and the Netherlands. Soon, we'll also be in the USA, Singapore, Egypt, and Nigeria. We offer 25+ ready-made setups with pre-installed environments and software for analytics systems, gaming, e-commerce, streaming, and websites of any complexity.

  • Databases. Instant setup for any popular database management system (DBMS), including MySQL, PostgreSQL, MongoDB, Redis, Apache Kafka, and OpenSearch.

  • Apps. Connect your GitHub, GitLab, or Bitbucket and test your websites, services, and applications. No matter the framework - React, Angular, Vue, Next.js, Ember, etc. - chances are, we support it.

Can I have confidence in Hostman to handle my sensitive data and cloud-based applications?

Your data's security is our top priority. Only you will have access to whatever you host with Hostman.

Additionally, we house our servers in Tier IV data centers, representing the pinnacle of reliability available today. Furthermore, all data centers comply with international standards: 

  • ISO: Data center design standards

  • PCI DSS: Payment data processing standards

  • GDPR: EU standards for personal data protection

What are the benefits of using Hostman as my cloud service provider?

User-Friendly. With Hostman, you're in control. Manage your services, infrastructure, and pricing structures all within an intuitive dashboard. Cloud computing has never been this convenient.


Great Uptime: Experience peace of mind with 99.99% SLA uptime. Your projects stay live, with no interruptions or unpleasant surprises.


Around-the-Clock Support. Our experts are ready to assist and consult at any hour. Encountered a hurdle that requires our intervention? Please don't hesitate to reach out. We're here to help you through every step of the process.


How does pricing work for your cloud services?

At Hostman, you pay only for the resources you genuinely use, down to the hour. No hidden fees, no restrictions.

Pricing starts as low as $4 per month, providing you with a single-core processor at 3.2 GHz, 1 GB of RAM, and 25 GB of persistent storage. On the higher end, we offer plans up to $75 per month, which gives you access to 8 cores, 16 GB of RAM, and 320 GB of persistent storage.

For a detailed look at all our pricing tiers, please refer to our comprehensive pricing page.

Do you provide 24/7 customer support for any issues or inquiries?

Yes, our technical specialists are available 24/7, providing continuous support via chat, email, and phone. We strive to respond to inquiries within minutes, ensuring you're never left stranded. Feel free to reach out for any issue — we're here to assist.

Can I easily scale my resources with Hostman's cloud services?

With Hostman, you can scale your servers instantly and effortlessly, allowing for configuration upsizing or downsizing, and bandwidth adjustments.

Please note: server disk space can only be increased. However, you can create a new server with less disk space at any time, transfer your project to it, and delete the old server.

What security measures do you have in place to protect my data in the cloud?

Hostman ensures 99.99% reliability per SLA, guaranteeing server downtime of no more than 52 minutes over a year. Additionally, we house our servers exclusively in Tier IV data centers, which comply with all international security standards.


How can I get started with Hostman's cloud services for my business?

Just sign up and select the solution that fits your needs. We have ready-made setups for almost any project: a vast marketplace for ordering servers with pre-installed software, set plans, a flexible configurator, and even resources for custom requests.

If you need any assistance, reach out to our support team. Our specialists are always happy to help, advise on the right solution, and migrate your services to the cloud — for free.

Do you have questions,
comments, or concerns?

Our professionals are available to assist you at any moment,
whether you need help or are just unsure of where to start
Email us