
Cloud Server

Deploy your cloud server in minutes and experience the freedom to scale your
infrastructure effortlessly. A fast, secure, and flexible cloud server solution
designed to meet your unique needs without the constraints of traditional
servers.
Contact Sales
Blazing 3.3 GHz Processors & NVMe Disks
Experience unparalleled speed with processors optimized for demanding applications, combined with ultra-fast NVMe disks for quick data retrieval.
200 Mbps Channels, Unlimited Traffic
Enjoy stable, high-speed connectivity with unthrottled traffic, ensuring smooth performance even during peak usage periods.
24/7 Monitoring & Support
Stay worry-free with round-the-clock monitoring and professional support, ensuring your systems are always operational.
Cost-Effective Management
Our cloud server solutions are designed to deliver maximum value for your money, offering flexible pricing without compromising on performance.

Cloud server pricing

We offer various cloud server plans, tailored to your exact needs.
Get the best performance at a price that fits your budget.
New York

CPU 1 x 3 GHz, RAM 1 GB, NVMe 25 GB, Bandwidth 200 Mbps, Public IP: $4/mo
CPU 1 x 3 GHz, RAM 2 GB, NVMe 40 GB, Bandwidth 200 Mbps, Public IP: $5/mo
CPU 2 x 3 GHz, RAM 2 GB, NVMe 60 GB, Bandwidth 200 Mbps, Public IP: $6/mo
CPU 2 x 3 GHz, RAM 4 GB, NVMe 80 GB, Bandwidth 200 Mbps, Public IP: $8/mo
CPU 4 x 3 GHz, RAM 8 GB, NVMe 160 GB, Bandwidth 200 Mbps, Public IP: $17/mo
CPU 8 x 3 GHz, RAM 16 GB, NVMe 320 GB, Bandwidth 200 Mbps, Public IP: $37/mo

Deploy any software in seconds

Select the desired OS or App and install it in one click.
OS Distributions
Pre-installed Apps
Custom Images
Ubuntu
Debian
CentOS

Hostman's commitment to simplicity and budget-friendly solutions

Configuration: 1 CPU, 1 GB RAM, 25 GB SSD

Price: Hostman $4; DigitalOcean $6; Google Cloud $6.88; AWS $7.59; Vultr $5
Tech support: Hostman Free; DigitalOcean $24/mo; Google Cloud $29/mo + 3% of monthly charges; AWS $29/mo or 3% of monthly charges; Vultr Free
Backups: Hostman from $0.07/GB; DigitalOcean 20% or 30% higher base daily/weekly fee; Google Cloud $0.03/GB per mo; AWS $0.05/GB per mo; Vultr 20% higher base monthly/hourly fee
Bandwidth: Hostman Free; DigitalOcean $0.01 per GB; Google Cloud $0.01 per GB; AWS $0.09/GB for the first 10 TB/mo; Vultr $0.01 per GB
Avg. support response time: Hostman <15 min; DigitalOcean <24 hours; Google Cloud <4 hours; AWS <12 hours; Vultr <12 hours

What is a cloud server?

A cloud server is a virtualized computing resource hosted in the cloud, designed to deliver powerful performance without the need for physical hardware. It is built on a network of connected virtual machines, which enables flexible resource allocation, instant scalability, and high availability. Unlike traditional on-premises servers, a cloud-based server allows users to adjust resources dynamically, making it ideal for handling fluctuating workloads or unpredictable traffic spikes. Whether you're running an e-commerce store, a SaaS platform, or any application, a cloud web server provides the adaptability necessary to grow with your business.

Cloud servers solve a wide range of challenges, from reducing infrastructure costs to improving uptime and reliability. By leveraging the cloud, businesses can avoid the upfront investment and maintenance costs associated with physical servers. Additionally, a cloud server system allows users to deploy applications quickly, scale resources in real time, and manage data more efficiently. The key benefits for clients include operational flexibility, cost savings, and the ability to respond quickly to changing demands.

Ready to buy a cloud server?

1 CPU / 1 GB RAM / 25 GB NVMe / 200 Mbps / $2/mo.

Efficient tools for convenient work

See all Products

Backups, Snapshots

Protect your data with regular backups and snapshots, ensuring you never lose crucial information.

Firewall

Enhance your security measures with our robust firewall protection, safeguarding your infrastructure against potential threats.

Load Balancer

Ensure optimal performance and scalability by evenly distributing traffic across multiple servers with our load balancer feature.

Private Networks

Establish secure and isolated connections between your servers with private networks, shielding sensitive data and enhancing network efficiency.

Trusted by 500+ companies and developers worldwide

One panel to rule them all

Easily control your database, pricing plan, and additional services
through the intuitive Hostman management console.
Project management
Group your cloud servers and databases into a single, organized project, eliminating confusion and simplifying management.
Software marketplace
24 ready-made stacks for any task: frameworks, e-commerce, analytics tools.
Mobile responsive
Get the optimal user experience across all devices with our mobile-responsive design.

Code locally, launch worldwide

Our servers, certified with ISO/IEC 27001, are located in Tier 3 data
centers across the US, Europe, and Asia.
🇺🇸 San Francisco
🇺🇸 San Jose
🇺🇸 Texas
🇺🇸 New York
🇳🇱 Amsterdam
🇳🇬 Lagos
🇩🇪 Frankfurt
🇵🇱 Gdansk
🇦🇪 Dubai
🇸🇬 Singapore
Anup k.
Associate Cloud Engineer
5.0 out of 5

"Hostman Comprehensive Review of Simplicity and Potential"

It has been a few years that I have been working on cloud, and most of the cloud service...
Mansur H.
Security Researcher
5.0 out of 5

"A perfect fit for everything cloud services!"

Hostman's seamless integration, user-friendly interface, and robust features (backups, etc.) make it much easier...
Adedeji E.
DevOps Engineer
5.0 out of 5

"Superb User Experience"

For me, Hostman is exceptional because of its flexibility and user-friendliness. The platform's ability to offer dedicated computing resources acr...
Yudhistira H.
Mid-Market(51-1000 emp.)
5.0 out of 5

"Streamlined Cloud Excellence!"

What I like best about Hostman is their exceptional speed of deployment, scalability, and robust security features. Their...
Mohammad Waqas S.
Biotechnologist and programmer
5.0 out of 5

"Seamless and easy to use Hosting Solution for Web Applications"

From the moment I signed up, the process has been seamless and straightforward...
Mohana R.
Senior Software Engineer
5.0 out of 5

"Availing Different DB Engine Services Provided by Hostman is Convenient for my Organization usecases"

Hostman manages the cloud operations...
Faizan A.
5.0 out of 5

"Hostman is a great fit for me"

Hostman is a great fit for me. What do you like best about Hostman? It was very easy to deploy my application and create database, I didn't have
Adam M.
5.0 out of 5

"Perfect website"

This website is extremely user friendly and easy to use. I had no problems so didn't have to contact customer support. Really good website and would recommend to others.
Anup K.
4.0 out of 5

"Simplifying Cloud Deployment with Strengths and Areas for Growth"

What I like best about Hostman is its unwavering commitment to simplicity...
Naila J.
5.0 out of 5

"Streamlined Deployment with Room for Improvement"

Hostman impresses with its user-friendly interface and seamless deployment process, simplifying web application hosting...

More cloud services from Hostman

See all Products

Latest News

Ubuntu

How to Configure an Additional IP as an Alias in Ubuntu

Setting up additional IP addresses on a single network interface is a common task in network administration. The technique that makes this possible is IP aliasing: a way for one device to respond to several IP addresses on a single network interface. Ubuntu users should be familiar with modifying and applying these settings to ensure robust network administration. This guide details how to add an extra IP address as an alias in both Ubuntu 24.04 and Ubuntu 22.04.

Prerequisites

Before manipulating IP addresses on the same network interface, you will need:

A system running either Ubuntu 24.04 or Ubuntu 22.04
Admin access to the system (sudo privileges)
Basic knowledge of command-line interface operations
An additional IP address assigned by a network administrator or ISP
The network interface name (e.g., eth0, ens3)

Mistakes in network configuration can cut off access to the machine and make troubleshooting much harder, so it is wise to keep a backup of the configuration files before proceeding with the changes.

Configuring an Additional IP Address in Ubuntu 24.04

Ubuntu 24.04, the latest long-term support release, uses Netplan for network configuration; the same steps apply to Ubuntu 22.04 and later. Netplan is a utility for configuring networking on Linux systems. Here is how to add an additional IP address.

Check the Network Interface

First, identify the network interface that will carry the new address:

ip addr show

The output lists all interfaces. Note the name of the interface currently in use (e.g., ens3, eth0).

Edit the Netplan Configuration File

Netplan configuration files normally live in the /etc/netplan/ directory. The file name may differ, but most end with a .yaml extension. Open the file in a text editor with root privileges:

sudo nano /etc/netplan/50-cloud-init.yaml

Insert the New IP Address

In the YAML file, add the new IP address under the addresses section of the appropriate network interface. The configuration may look like this:

network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses:
        - "195.133.93.70/24"
        - "166.1.227.189/24"   # New IP address
        - "192.168.1.2/24"     # Private IP address
      nameservers:
        addresses:
          - "1.1.1.1"
          - "1.0.0.1"
      dhcp4: false
      dhcp6: false
      routes:
        - to: "0.0.0.0/0"
          via: "195.133.93.1"

Apply the Changes

After saving your edits, apply the new configuration:

sudo netplan apply

Validate the Configuration

Run ip addr show again to confirm that the new IP address is in place; the output should now include it.

Additional Considerations

Persistent Configuration. Netplan settings are persistent and survive a reboot. Still, it is a good idea to reboot the system once to verify that the configuration comes up correctly after a restart.

Firewall Configuration. When adding a new IP address, you may need to update the firewall rules. Ubuntu traditionally uses UFW (Uncomplicated Firewall). To avoid blocking traffic to the new IP, you will have to add new UFW rules for it.
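For example, to keep SSH reachable on the new alias from the configuration above, rules along these lines could be added (adjust the address and port to your setup):

# allow SSH to the new alias (example address from the YAML above)
sudo ufw allow in to 166.1.227.189 port 22 proto tcp
# review the resulting rule set
sudo ufw status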
Network Services. If the system runs services bound to specific IP addresses, update their configurations so they recognize and use the new IP address as well.

IPv6 Considerations. The examples above use IPv4. The procedure for IPv6 addresses is much the same; only the address format differs. Netplan supports both IPv4 and IPv6 configurations.

Troubleshooting

If issues emerge during configuration:

Check for syntax errors in the YAML file with sudo netplan --debug generate.
Ensure no other device on the network uses the same IP address.
Verify that the subnet mask and gateway are set correctly.
Check the system log for error messages: journalctl -xe.

Advanced IP Aliasing Techniques

Advanced IP aliasing plays a key role in improving network management: virtual interfaces make it possible to run several logical interfaces on one physical network interface, each with its own IP and network settings.

Dynamic IP Aliasing

In some cases, network administrators need to implement dynamic IP aliasing. With scripts, IP aliases can be added or removed in response to certain conditions or events. For example, a script can add an IP alias whenever a particular service starts and remove it whenever the service stops, as sketched below.
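A minimal sketch of such script logic with the iproute2 tools (the interface name and address are placeholders; unlike Netplan changes, these do not persist across reboots):

# bring the alias up when the service starts
sudo ip addr add 166.1.227.189/24 dev eth0
# ...service runs...
# drop the alias when the service stops
sudo ip addr del 166.1.227.189/24 dev eth0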
IP Aliasing in Containerized Environments

With the popularity of containerization, IP aliasing is also used to manage the network configuration of Docker containers and other containerized applications. IP aliases are often employed to expose multiple services of a container at different IP addresses, or to help containers communicate with one another.

Docker Network Aliases

In Docker, network aliases allow multiple containers to respond to the same DNS name on a custom network. Among other things, this is indispensable in microservices architectures, where service discovery is a central concern.

Security Implications of IP Aliasing

Although IP aliasing has many advantages, security also deserves attention: the more IP addresses a system answers to, the larger its attack surface. Network administrators should ensure that:

Firewall rules cover all IP aliases.
Intrusion Detection Systems (IDS) monitor traffic on all IP addresses.
The use of, and need for, each IP alias is reviewed periodically.
Appropriate security tooling is enabled for services bound to specific IP aliases.

Conclusion

Adding a new IP address as an alias in Ubuntu is an efficient process thanks to Netplan. Whether you are using Ubuntu 24.04 or 22.04, the steps are the same: edit the Netplan configuration file, add the new IP address, and apply the changes. A machine with multiple IP addresses on a single network interface can serve several roles on the network, and the ability to respond to several IP addresses on one interface is useful in many networking situations. The sequence is always the same: back up the existing configuration first, then make the changes, then test thoroughly afterwards. With these skills, a network infrastructure manager or IT technician can effectively manage and optimize an Ubuntu-powered network infrastructure to meet diverse networking requirements.
29 November 2024 · 6 min to read
Node.js

How to Handle Asynchronous Tasks with Node.js and BullMQ

Handling asynchronous tasks efficiently is crucial in Node.js applications, especially for time-intensive operations like sending emails, processing images, or performing complex calculations. Without proper management, these tasks can block the event loop, leading to poor performance and a subpar user experience. This is where BullMQ comes into play.

BullMQ is a powerful Node.js package that offers a reliable and scalable queuing system backed by Redis. It lets developers move heavy operations into a background queue, keeping the main application responsive. With BullMQ you can manage async queues, schedule jobs, and easily monitor their progress.

This tutorial shows how to manage asynchronous tasks with Node.js and BullMQ. The process involves setting up a project folder, performing a time-intensive task without BullMQ, and then improving the application by using BullMQ to run tasks in the background.

Prerequisites

Before you begin, ensure you have:

A Linux server.
Node.js installed on the server.
Redis installed on the server, as BullMQ depends on Redis for managing queues.

Setting Up the Project Directory

Create a New Directory

Open your terminal, go to the location of your project, create a fresh folder, and navigate into it:

mkdir bullmq-demo && cd bullmq-demo

Initialize a New Node.js Project

Initialize the project with npm. This generates a package.json file with default settings:

npm init -y

Install Required Dependencies

Install the packages the application needs:

npm install express bullmq ioredis

Here's what each package does:

express: A fast Node.js web framework commonly used for building servers.
bullmq: The queue library used in this guide.
ioredis: A Redis client for Node.js that BullMQ uses to connect to Redis.

Create the Main Application File

Create an index.js file as the entry point of the application:

touch index.js

Alternatively, you can create this file in your code editor.

Set Up a Basic Express Server

Add this code to index.js:

const express = require('express');
const app = express();
const port = 3000;

app.use(express.json());

app.listen(port, () => {
  console.log(`Server is running on port ${port}`);
});

This starts an Express app on port 3000 with JSON middleware enabled.

Verify the Server Setup

Start the server:

node index.js

The console should print that the server is running on port 3000. Open your browser and go to http://your_server_ip:3000 or http://localhost:3000. You will see a "Cannot GET /" message, which is expected, since no routes are set up yet. When ready to proceed, terminate the server with Ctrl + C.
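If you prefer the terminal to a browser, the same check works with curl; with no routes defined, Express responds with its default 404 message:

curl http://localhost:3000/
# Expected output: Cannot GET /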
Implementing a Time-Intensive Task Without BullMQ

This section adds a route to the Express app that performs a time-consuming task synchronously, demonstrating how such tasks block the event loop and hurt the application's performance.

Define a Time-Intensive Function

In index.js, create a function that simulates a computationally intensive task:

// Function to simulate a heavy computation
function heavyComputation() {
  const start = Date.now();
  // Run a loop for 5 seconds
  while (Date.now() - start < 5000) {
    // Perform a CPU-intensive task
    Math.sqrt(Math.random());
  }
}

The function loops for about five seconds, performing math operations to mimic a CPU-heavy task.

Create a Route to Handle the Task

Add a route to the Express application that calls the heavyComputation function:

app.get('/heavy-task', (req, res) => {
  heavyComputation();
  res.send('Heavy computation finished');
});

This route receives GET requests at the /heavy-task endpoint. On each request, it performs the heavy computation and then responds.

Start the Server

Restart the server:

node index.js

Confirm the server is running before moving on.

Test the Heavy Task Route

In your browser, open http://your_server_ip:3000/heavy-task or http://localhost:3000/heavy-task. The page should display "Heavy computation finished". Note that the response takes approximately five seconds; the delay is the result of executing the intensive computation synchronously.

Observe Blocking Behavior

While a /heavy-task request is in flight, open a new browser tab and go to http://your_server_ip:3000/. The response is not immediate: the server cannot handle the request until the heavy computation from the previous step finishes. The time-consuming task is blocking the Node.js event loop, stopping the server from processing other incoming requests. While the server runs a long task synchronously, it cannot respond to anything else. This kind of blocking results in a poor user experience, particularly in apps that need to be highly responsive.

Executing Time-Intensive Tasks Asynchronously with BullMQ

The previous section showed how synchronous execution of time-consuming operations can severely affect your application's performance by blocking the event loop. This section adds a high-performance asynchronous queue to the application using BullMQ.

Modify index.js to Use BullMQ

Import BullMQ and ioredis

At the top of index.js, add:

const { Queue, Worker } = require('bullmq');
const Redis = require('ioredis');

Create a Redis Connection

Next, set up a connection to Redis:

const connection = new Redis();

By default, Redis runs on the localhost interface on port 6379.
To connect to a remote Redis server or a different port, pass the appropriate host address and port number:

const connection = new Redis({
  host: '127.0.0.1',
  port: 6379,
  maxRetriesPerRequest: null,
});

Initialize a BullMQ Queue

Create a new queue called heavyTaskQueue:

const heavyTaskQueue = new Queue('heavyTaskQueue', { connection });

Add a Route to Enqueue Tasks

Change the /heavy-task route to add a job to the queue instead of running the task immediately:

app.get('/heavy-task', async (req, res) => {
  await heavyTaskQueue.add('heavyComputation', {});
  res.send('Heavy computation job added to the queue');
});

Now a request to /heavy-task is handled asynchronously: the application responds right away, and the lengthy job runs in the background.

Remove the Worker Code from index.js

The worker must be implemented in a separate file so that it does not share a process with the Express server. That way, when a worker runs the heavyComputation function, it won't interfere with the main application's event loop. The index.js file is now structured as follows:

const express = require('express');
const app = express();
const port = 3000;

app.use(express.json());

const { Queue } = require('bullmq');
const Redis = require('ioredis');

const connection = new Redis({
  host: '127.0.0.1',
  port: 6379,
  maxRetriesPerRequest: null,
});

const heavyTaskQueue = new Queue('heavyTaskQueue', { connection });

app.get('/heavy-task', async (req, res) => {
  await heavyTaskQueue.add('heavyComputation', {});
  res.send('Heavy computation job added to the queue');
});

app.listen(port, () => {
  console.log(`Server is running on port ${port}`);
});

Create a Separate Worker File

Create a new file named worker.js to hold the worker code that processes tasks from the queue:

touch worker.js

Add the worker code to worker.js:

const { Worker } = require('bullmq');
const Redis = require('ioredis');

const connection = new Redis({
  host: '127.0.0.1',
  port: 6379,
  maxRetriesPerRequest: null,
});

// Function to simulate a heavy computation
function heavyComputation() {
  const start = Date.now();
  // Run a loop for 5 seconds
  while (Date.now() - start < 5000) {
    // Perform a CPU-intensive task
    Math.sqrt(Math.random());
  }
}

const worker = new Worker(
  'heavyTaskQueue',
  async job => {
    // Time-intensive task here
    heavyComputation();
    console.log('Heavy computation completed');
  },
  { connection }
);

worker.on('completed', job => {
  console.log(`Job ${job.id} has completed`);
});

worker.on('failed', (job, err) => {
  console.log(`Job ${job.id} has failed with error ${err.message}`);
});

Run the Worker in a Separate Process

Run worker.js as an independent Node.js process.

Start the Worker Process

Open a new terminal window or tab, navigate to your project folder, and run:

node worker.js

Start the Express Server

Start the Express server in your original terminal window:

node index.js

Test the Application with BullMQ

Open http://your_server_ip:3000/heavy-task or http://localhost:3000/heavy-task in your browser. The message "Heavy computation job added to the queue" should appear immediately. The rapid response shows that the main thread is no longer blocked, while the worker processes the job in the background.
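Because the job itself is CPU-bound, a single worker process handles one such job at a time. If jobs arrive faster than one worker can process them, one straightforward approach is to start several independent worker processes, each consuming from the same queue; a minimal sketch:

# each worker process connects to the same Redis-backed queue
node worker.js &
node worker.js &

BullMQ workers also accept a concurrency option, but for CPU-heavy jobs separate processes scale better, since in-process concurrency still shares a single event loop.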
Adding a Dashboard to Monitor BullMQ Queues

Monitoring your application's queues and jobs is essential for making sure they work properly and for troubleshooting. Bull Board provides a visual interface for overseeing BullMQ queues. This section explains how to add the dashboard to the application.

Install Bull Board

Install the @bull-board/express package with npm:

npm install @bull-board/express

Set Up Bull Board in Your Application

Import Bull Board Modules

Add the following imports at the top of index.js:

const { createBullBoard } = require('@bull-board/api');
const { BullMQAdapter } = require('@bull-board/api/bullMQAdapter');
const { ExpressAdapter } = require('@bull-board/express');

Create an Express Adapter for the Dashboard

Initialize the Express adapter:

const serverAdapter = new ExpressAdapter();
serverAdapter.setBasePath('/admin/queues');

Set Up Bull Board with Your Queue

Create the Bull Board instance and pass in your queue:

createBullBoard({
  queues: [new BullMQAdapter(heavyTaskQueue)],
  serverAdapter: serverAdapter,
});

Use the Dashboard in Your Express App

Mount the dashboard at /admin/queues:

app.use('/admin/queues', serverAdapter.getRouter());

Make sure this line comes after the queue setup. The final index.js file looks like this:

// Import Express and initialize the app
const express = require('express');
const app = express();
const port = 3000;

app.use(express.json());

// Import BullMQ and Redis
const { Queue } = require('bullmq');
const Redis = require('ioredis');

// Redis connection
const connection = new Redis({
  host: '127.0.0.1',
  port: 6379,
  maxRetriesPerRequest: null,
});

// Initialize the queue
const heavyTaskQueue = new Queue('heavyTaskQueue', { connection });

// Route that adds a job to the queue
app.get('/heavy-task', async (req, res) => {
  await heavyTaskQueue.add('heavyComputation', {});
  res.send('Heavy computation job added to the queue');
});

// Import Bull Board and set up the dashboard
const { createBullBoard } = require('@bull-board/api');
const { BullMQAdapter } = require('@bull-board/api/bullMQAdapter');
const { ExpressAdapter } = require('@bull-board/express');

const serverAdapter = new ExpressAdapter();
serverAdapter.setBasePath('/admin/queues');

createBullBoard({
  queues: [new BullMQAdapter(heavyTaskQueue)],
  serverAdapter: serverAdapter,
});

app.use('/admin/queues', serverAdapter.getRouter());

// Start the server
app.listen(port, () => {
  console.log(`Server is running on port ${port}`);
});

Access the Dashboard

Restart your server:

node index.js

Open your browser and go to http://your_server_ip:3000/admin/queues.

Explore the dashboard:

Queue Overview: See the list of queues and their status.
Jobs List: View active, completed, failed, and delayed jobs.
Job Details: Click on a job to see its data, logs, and, if it failed, a stack trace.

By integrating Bull Board, you can easily manage your BullMQ queues: seeing queues and jobs on the dashboard in real time makes it much easier to monitor progress and identify issues.

Conclusion

You have now learned how to use BullMQ with Node.js to manage asynchronous processes. Moving time-consuming operations into a separate queue has made the application more responsive and efficient.
Your Node.js app is now much better equipped to handle heavy workloads thanks to its use of queues.
28 November 2024 · 11 min to read
Servers

Proxmox Backup Server (PBS): Integration with Proxmox VE and Basic Operations

Proxmox Backup Server (PBS) is a Debian-based solution that makes backups simple. With it, you can back up virtual machines, containers, and the contents of physical hosts. PBS is installed on bare metal, and all the necessary tools are bundled in a single distribution. Proxmox Backup Server is optimized for the Proxmox Virtual Environment platform. With this combination, you can:

Safely back up and replicate data.
Manage backups through both a graphical interface and the command line.

Proxmox Backup Server is free software.

Key Features

Data loss or corruption due to accidental deletion, ransomware, or other dangers can occur at any time, so regular backups of critical data are essential. PBS creates backups that take up minimal space, allow for instant recovery, and save working time through simplified management.

User Roles and Group Permissions

Proxmox Backup protects data from unauthorized access. A range of access control options ensures that users get only the level of access they need. For example, marketers don't need access to accounting reports, and accountants don't need to see backups of the main product's code. For convenience, several authentication domains are available: OpenID Connect, Linux PAM, or a separate authentication server. The administrator defines precisely what each user is allowed to do and what is prohibited.

Easy Management

PBS comes with a graphical interface through which the administrator manages the server. For advanced users familiar with the Unix shell, Proxmox provides a command-line interface for specialized or highly complex tasks. Additionally, Proxmox Backup Server exposes a RESTful API with JSON as the main data format. The entire API is formally defined by a schema, which makes integration with third-party management tools fast and easy.

Reliable Encryption

It's not enough to have access to backups; you also need confidence that the information has not been compromised. PBS securely encrypts backups, which guarantees security even on less-trusted hosts, such as rented servers. No one except the owner can decrypt and read the stored information.

Granular Recovery

Why restore all data when you can restore only what's needed? To reduce overhead, Proxmox Backup Server comes with a snapshot catalog for navigation. You can quickly explore the contents of an archive and instantly recover individual objects.

System Requirements

CPU: A 64-bit AMD or Intel processor with at least 4 cores.
Memory: At least 4 GB for the system, file system cache, and daemons. It is recommended to add at least 1 GB of memory for each terabyte of disk space.
Storage: At least 32 GB of free space. The documentation suggests using hardware RAID and solid-state drives (SSDs) for backup storage.

Server Installation

To store backups, you need a dedicated server on which to install Proxmox Backup Server. You can manage the setup through either a graphical interface or the command line, whichever suits you best. The easiest way to install the backup system is from a disk image (ISO file). The distribution includes all the components needed for full functionality:

Installation wizard
Operating system with all dependencies
Proxmox Linux kernel with ZFS support
Tools to manage backups and other resources
Management interface

Installation from the disk image is very simple.
If you have ever installed an operating system, you will have no trouble. The installation wizard helps partition the disk and configure basic settings such as time zone, language, and network access. During installation, all the packages that turn a regular Debian system into a backup management system are added.

PBS takes over the entire server: during installation, all other data on it will be deleted, leaving a server dedicated to a single task, managing backups. Running backups on a separate server is also a security consideration: you retain access to your backups even if other parts of the distributed system stop working.

Installation on Debian

Suppose you already have a server running Debian. In that case, there is no need to reinstall the OS; you can simply add the missing packages on top of the standard setup. Enter the following commands on the Debian command line:

apt-get update
apt-get install proxmox-backup-server

This installs the packages in a minimal configuration. If you want the same set as the ISO installer provides, run:

apt-get update
apt-get install proxmox-backup

This installs the full configuration, including the ZFS-capable kernel and a set of useful tools, which is essentially the same as using the disk image.

After installation, you can immediately connect to the Proxmox web interface through a browser over HTTPS on port 8007, for example at https://<ip-or-dns-name>:8007.

You can also install the Proxmox Backup Client separately. To do so, configure the APT client repository and run:

apt-get update
apt-get install proxmox-backup-client

These are the standard installation steps. If you need a custom configuration, such as DHCP-based networking, refer to the documentation for further guidance.
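Once the client is installed, a host's filesystem can be backed up to a datastore with a command along these lines; the user (user1@pbs) and datastore (store2) here are the ones created in the next section, and the address placeholder follows the same convention as above:

proxmox-backup-client backup root.pxar:/ --repository user1@pbs@<ip-or-dns-name>:store2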
Adding a Server to Proxmox VE

Before backing up a server, you need to perform some preliminary configuration.

Create a User

In Proxmox, configuration is done through an easy-to-use interface. Let's create the first user:

Open the Configuration tab, then Access Control.
Click Add.
Add a new user, for example user1@pbs. The "pbs" part is mandatory; if it's omitted, an error message about incorrect credentials will appear.

Create a Storage

The next step is to create repositories, which let you distribute data according to your own criteria. For example, you can keep incremental PostgreSQL backups in one datastore and Ubuntu system backups in another. To do this:

Go to Administration, then Storage / Disks.
Select a disk and initialize it by clicking Initialize Disk with GPT.
Go to Directory, click Create: Directory, and create a directory for storing data. Specify the name of the datastore and the absolute path to the directory. If you check Add as Datastore, the new storage is immediately connected as a datastore object.

Storage configuration is now complete; all that remains is to assign access rights to the repository:

Click on the name of the created datastore, go to Permissions, and click Add, then User Permission.
Select the desired user and role, then click Add to confirm.

At this point, the preliminary setup is complete.

Save the Fingerprint

By default, PBS uses a self-signed SSL certificate. You must save its fingerprint to establish trusted connections between the client and the server; without it, you won't be able to connect. This is one of the security mechanisms. Go to Administration, then Shell, and capture the server's fingerprint with:

proxmox-backup-manager cert info | grep Fingerprint

This returns a string containing the unique fingerprint, which you will later use to establish a connection with the backup server.

Add a Server

You can add storage directly from the Proxmox VE web interface (Datacenter, Storage, Add) or manually via the console. Let's explore the second option, as it provides more flexibility. You need to define the new storage with the pbs type on your Proxmox VE node. In the following example, store2 is the storage name and the server address is localhost, connecting as user1@pbs.

Add the storage:

pvesm add pbs store2 --server localhost --datastore store2

Set the username and password for access:

pvesm set store2 --username user1@pbs --password <secret>

If you don't want to enter the password as plain text, pass the --password parameter without an argument; the program will then prompt for the password when you run the command.

If your backup server uses a self-signed certificate, add the certificate's fingerprint, obtained earlier with proxmox-backup-manager cert info | grep Fingerprint, to the configuration:

pvesm set store2 --fingerprint 64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe

After --fingerprint, paste the fingerprint you obtained. Check the status of the storage:

pvesm status --storage store2

In the web interface, the storage now appears among the locations available for virtual machine and container backups, along with usage statistics. It's time to create your first backup.

Backup and Recovery

Suppose you have an LXC container running Ubuntu. To back it up:

Open the Backup section.
Select the desired Storage.
Click Backup now.
Choose the type of backup.

On the PBS server, you can view information about the completed backup task. To verify that the backup works, delete the Ubuntu container and then perform a recovery:

In the PVE web interface, go to Storage.
Open the Content tab.
Select the backup file.
For recovery, choose the location and a new identifier (by default, the same as when the backup was created), and set a read-data limit to avoid overloading the virtualization server's input channel.
Click Restore and start the container.

Thanks to the fast backup and recovery process in Proxmox, you can also easily migrate a virtual machine. Backing up a virtual machine is no different from backing up a container, and the recovery process is the same: specify the desired backup and the deployment location, and decide whether to start the machine immediately after the procedure completes. If you need only individual files rather than the entire backup, you can recover them through the PBS web interface.

Conclusion

By setting up backups with Proxmox, you can be confident that virtual machines and containers won't be lost in the event of a storage failure, and you can restore them with minimal effort. All that is required is to mount a new host, add the data storage, and start the recovery process.
21 November 2024 · 9 min to read
Servers

How to Use Nessus for Vulnerability Scanning on Ubuntu 22.04

Nessus is one of the most popular and widely used vulnerability scanners worldwide. Developed by Tenable, Inc., Nessus provides a comprehensive solution for identifying vulnerabilities, allowing organizations and individuals to detect and address potential security threats in their network infrastructure. With Nessus, you can conduct in-depth security analysis, from simple vulnerability detection to complex compliance checks.

Versions of Nessus: Essentials, Professional, and Expert

Nessus Essentials. A free version intended for home users and those new to the security field. It provides basic scanning and vulnerability detection features.
Nessus Professional. A paid version designed for security professionals and large organizations. It offers advanced features like large network scanning, integration with other security systems, and additional analysis and reporting tools.
Nessus Expert. A premium version that includes all Professional features, plus additional capabilities such as cloud scanning support, integration with security incident management systems, and further customization options.

Nessus Vulnerability Scanning Features

Vulnerability Detection. Nessus detects vulnerabilities across different systems and applications based on its extensive vulnerability database.
Compliance Checks. Nessus performs checks to ensure compliance with various security standards and regulations.
Integration with Other Systems. It can integrate with incident management systems, log management systems, and other security tools.
Cloud Server Scanning. Nessus Expert offers scanning capabilities for cloud environments such as AWS, Azure, and Google Cloud.
Data Visualization. Nessus includes dashboards and reports for visualizing scan results.
Regular Updates. Nessus continuously updates its vulnerability database to keep up with emerging threats.
Flexible Configuration. It provides customization options to tailor the scanning process to specific environments.

Installing Nessus

You can install Nessus on Ubuntu in two ways: as a Docker container or as a .deb package. Here's a step-by-step guide for both methods.

Installing Nessus on Ubuntu via Docker

Preparation

First, ensure that Docker is installed on your system. If Docker isn't installed, follow this guide to install Docker on Ubuntu 22.04.

Download the Nessus Image

Download the latest Nessus image from Docker Hub:

docker pull tenable/nessus:latest-ubuntu

The download may take around 10 minutes.

Create and Start the Container

Once the image is downloaded, create and start the container:

docker run --name "nessus_hostman" -d -p 8834:8834 tenable/nessus:latest-ubuntu

Here:

--name "nessus_hostman" sets the container's name.
-d runs the container in detached (background) mode.
-p 8834:8834 maps port 8834 of the container to port 8834 on the host, making Nessus accessible at localhost:8834.
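While the container starts, you can follow its logs to see how initialization is progressing (the container name comes from the docker run command above):

docker logs -f nessus_hostman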
If you need to restart the container after stopping it, use:

docker start nessus_hostman

Installing Nessus on Ubuntu as a .deb Package

Download the Installation Package

Download the installer for Ubuntu:

curl --request GET \
  --url 'https://www.tenable.com/downloads/api/v2/pages/nessus/files/Nessus-10.6.1-ubuntu1404_amd64.deb' \
  --output 'Nessus-10.6.1-ubuntu1404_amd64.deb'

Install Nessus

With the installation file in your current directory, install Nessus with dpkg:

sudo dpkg -i ./Nessus-10.6.1-ubuntu1404_amd64.deb

Start the Nessus Service

After installing, start the nessusd service:

sudo systemctl start nessusd.service

Verify the Nessus Service

Check that nessusd is active and running without errors:

sudo systemctl status nessusd

You should see the status: Active: active (running).

Accessing Nessus in a Browser

Access Nessus by opening a browser and navigating to:

https://localhost:8834/

Port 8834 is the default port for Nessus. Most browsers show a security warning when accessing Nessus; it's safe to proceed by clicking Advanced and continuing to the site.

Initial Setup of Nessus

Navigate to the setup page. After starting the container, open your browser and go to https://localhost:8834. You'll see a loading screen while the necessary components are downloaded.

Register on the Tenable website. While Nessus is downloading components, register on the Tenable website to obtain an activation code. The code will be sent to the email address you provide.

Use the setup wizard. Once the components are downloaded, the setup wizard launches:

Click Continue.
Select Nessus Essentials.
Enter the activation code sent to your email.
Create a user account by entering a username and password.

Complete the installation. Wait for the setup to finish and for all plugins to load. You can follow the status updates at https://localhost:8834/#/settings/about/events. After this, the Nessus installation is fully set up and ready to use.

Setting Up the beeBox Server

In this guide, we'll use the beeBox virtual machine to demonstrate Nessus's capabilities. If you're scanning your own server, skip this step.

After successfully installing and configuring Nessus, it's time to test it in action. For that, we need a target system to scan for vulnerabilities. We'll use a virtual machine called beeBox, which is based on bWAPP (a deliberately "buggy" web application). Designed with known vulnerabilities, beeBox is perfect for security professionals, developers, and students to practice identifying and mitigating security threats. beeBox includes the following vulnerabilities:

Injection (HTML, SQL, LDAP, SMTP, etc.)
Broken Authentication & Session Management
Cross-Site Scripting (XSS)
Insecure Direct Object References
Security Misconfiguration
Sensitive Data Exposure
Missing Function Level Access Control
Cross-Site Request Forgery (CSRF)
Using Components with Known Vulnerabilities
Unvalidated Redirects & Forwards
XML External Entity (XXE) Attacks
Server-Side Request Forgery (SSRF)

These make beeBox ideal for showcasing Nessus's scanning capabilities.

Installing beeBox on VirtualBox

We'll go through the installation process using VirtualBox 7.0; steps may vary slightly for other VirtualBox versions.

Download the beeBox image. Download the beeBox virtual machine image (the bee-box_v1.6.7z file) and extract it.

Create a new virtual machine. Open VirtualBox, click New, and in the Name and Operating System section:

Enter a name for the virtual machine.
Set the OS type to Linux.
Choose Oracle Linux (64-bit) as the version.

Configure hardware. Allocate 1024 MB of RAM and 1 CPU to the virtual machine.

Select a hard disk. In the Hard Disk section:

Choose Use an Existing Virtual Hard Disk File.
Click Add and select the path to the bee-box.vmdk file you extracted earlier.

Configure network settings. Before starting the VM:

Go to Settings > Network.
Change Attached to from NAT to Bridged Adapter so the VM is on the same network as your primary machine.

Start the virtual machine. Click Start to launch beeBox.

Set the keyboard layout. Once the desktop loads:

Click on USA in the top menu.
Select Keyboard Preferences, go to the Layouts tab, and set Keyboard model to IBM Rapid Access II.

Retrieve the IP address. Open a terminal in beeBox and run ip a to find the virtual machine's IP address. You can then access the beeBox application from your main machine using this IP, confirming its accessibility.

Scanning with Nessus

Nessus General Settings

Before scanning for vulnerabilities, it's essential to understand the Nessus interface and configuration options. The main screen is divided into two primary tabs: Scans and Settings. First, let's take a closer look at the Settings tab.

About:

Overview: General information about your Nessus installation, including the version, license details, and other key information.
License Utilization: Displays all IP addresses that have been scanned. The free version can scan up to 16 hosts; hosts not scanned in the last 90 days are automatically released from the license.
Software Update: Lets you set up automatic updates or start an update manually.
Encryption Password: Lets you set a password for encrypting Nessus data. If set, this password is crucial for data recovery; the data is inaccessible without it.
Events: Shows the update history and other important events.

Advanced Settings: Additional Nessus configuration. We won't cover each option in detail here; specifics for each setting are on the official website.

Proxy Server: If your network requires a proxy server for internet access or to reach target servers, configure the proxy settings here.

SMTP Server: Lets you configure an SMTP server so that Nessus can send scan-result notifications and other alerts via email.

Running a Basic Scan

Now let's move to the Scans tab. Accurate scan parameters are essential for efficiency and accuracy in detecting vulnerabilities.

Initiate a new scan. On the main screen, click New Scan to open the scan creation wizard.

Select the scan type. For this example, we'll choose Basic Network Scan.

General settings:

General: Enter a name and description for the scan, choose a folder for the results, and specify the target IP address (e.g., the IP of the beeBox virtual machine).
Schedule: Set up a scan schedule if desired (optional).
Notifications: Add email addresses to receive notifications about scan results. For this to work, configure the SMTP server in the settings.

Detailed settings:

Discovery: Select the type of port scan: common ports (the 4,700 most commonly used ports), all ports, or Custom for detailed port-scan settings. For this example, we'll select common ports.
Assessment: Choose the vulnerability detection method. We'll use Scan for all web vulnerabilities to speed up the scan. Custom options are also available, and details for each setting are provided in the documentation.
Report: Set report generation parameters if needed (we'll leave this unchanged for the example).
Advanced: Configure scan speed settings. In manual settings mode, you can enable or disable debugging for plugins. For this example, we'll set Default. You can find more information in the docs.

Additional Settings

Above the primary settings are two tabs: Credentials and Plugins.

Credentials: Lets you provide credentials for accessing services on the target host (useful for finding vulnerabilities that require non-privileged access).
Plugins: Displays the list of plugins that will be used during the scan. With other scan types, such as advanced scans, you can enable or disable specific plugins.

Click Save to save your scan setup, then return to the main screen. Click Launch to start the scan. The scan is now underway; you can monitor its progress by clicking on the scan in the Scans tab.

Viewing Scan Results in Nessus

After a scan completes, analyze the results by navigating to the specific scan. The main section of the results page contains a table with detailed information on detected vulnerabilities:

Severity: The threat level based on the CVSS (Common Vulnerability Scoring System) metric.
CVSS: The CVSSv2 score, indicating the risk level of the vulnerability.
VPR: An alternative risk metric by Tenable, providing an additional risk assessment.
Name: The name of the detected vulnerability.
Family: The category or group the vulnerability belongs to.
Count: The number of instances of this vulnerability.

Note that some vulnerabilities may be grouped as Mixed. To change this grouping, go to Settings > Advanced and set Use Mixed Vulnerability Groups to No.

On the left side of the table, you'll find information about the target host, along with a chart showing the distribution of vulnerabilities by severity level. To explore a specific vulnerability in detail, click on its name. For example, let's look at the Drupal Database Abstraction API SQLi vulnerability:

Vulnerability Description: A brief description of the issue and the software version in which it was patched.
Detection Details: Reports on vulnerability detection and recommended mitigation methods.
Technical Details: The SQL query that was used to identify the vulnerability.

In the left panel, you can find:

Plugin Information: A description of the plugin that detected the vulnerability.
VPR and CVSS Ratings: The severity ratings of the vulnerability according to the different metrics.
Exploitation Data: Information about the potential for exploiting the vulnerability.
References: Useful links to resources like exploit-db, nist.gov, and others, where you can learn more about the vulnerability.

Conclusion

This guide covered Nessus's installation, configuration, and use for vulnerability scanning. Nessus is a powerful automated tool, but its effectiveness relies on accurate configuration. Remember that network and system security require a comprehensive approach; automated tools work best alongside ongoing security education and layered defense strategies for reliable protection.
20 November 2024 · 11 min to read
PHP

Installing and Switching PHP Versions on Ubuntu: A Step-by-Step Guide

PHP is a scripting language commonly used for developing web applications. It allows developers to create dynamic websites that adapt their pages to specific users: such pages are not stored on the server in ready-made form but are generated after a user request. PHP is a server-side language, meaning scripts written in PHP run on the server, not on the user's computer.

There are many versions of PHP. With each new version, the language becomes more powerful and flexible, offering developers more opportunities to create modern web applications. However, not all websites upgrade, or are ready to upgrade, to the latest PHP version, and many remain on older versions. Switching between versions is therefore an essential task for many web developers: some want to take advantage of new features, while others need to fix bugs and improve the security of existing applications. In this article, we will go over how to install PHP on Ubuntu and how to manage different PHP versions.

How to Install PHP on the Server

To install PHP on Ubuntu Server, follow these steps:

Connect to the server via SSH.

Update the package list:

sudo apt update

Install the required dependencies:

sudo apt install build-essential libssl-dev

Download the source archive from the official website, replacing <version> with the desired version:

curl -L -O https://www.php.net/distributions/php-<version>.tar.gz

Extract the downloaded file:

tar xzf php-<version>.tar.gz

Navigate to the extracted directory:

cd php-<version>

Configure the build:

./configure

Build PHP:

make

Install PHP:

sudo make install

After completing these steps, PHP is installed on your server. The next step is to install a web server to work with PHP. Configuration may involve specifying the PHP module in the web server configuration file and setting up how .php files are handled. Finally, restart the web server. For example, to restart Apache:

sudo service apache2 restart

How to Check the PHP Version

There are two easy ways to find out which version of PHP a website is running: use the terminal, or create a script with phpinfo() in the website's root directory.

Check the PHP Version via Terminal

Run this command in the terminal:

php -v

You will get output similar to:

PHP 8.3.13 (cli) (built: Oct 30 2024 11:27:41) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.3.13, Copyright (c) Zend Technologies
    with Zend OPcache v8.3.13, Copyright (c), by Zend Technologies

Check the PHP Version with phpinfo()

Create a file named phpinfo.php with the following content:

<?php
phpinfo();
?>

Save the file in the root directory of your website (where index.html or index.php is located). Open it in your browser at:

http://your_website_address/phpinfo.php

You will see a page with detailed information about the PHP configuration. After finding out the PHP version, be sure to delete the phpinfo.php file: it exposes server configuration details that attackers could exploit.

How to Manage PHP Versions

To switch between installed PHP versions on Ubuntu, follow these steps.

Check which PHP versions are installed:

dpkg --list | grep php

Install the php-switch package, which makes changing PHP versions easy:

sudo apt-get install -y php-switch

Switch to the desired PHP version using the php-switch command. For example, to switch to PHP 8.2, run:

php-switch 8.2
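If the php-switch helper is not available on your system, Ubuntu's built-in update-alternatives mechanism can switch the default CLI binary between co-installed versions; a minimal sketch, assuming PHP 8.2 is among the installed packages:

# list the registered PHP alternatives
sudo update-alternatives --list php
# point the php command at PHP 8.2
sudo update-alternatives --set php /usr/bin/php8.2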
Verify which PHP version is currently active:

php -v

Some scripts and extensions may only work with certain PHP versions. Before switching, make sure that all the scripts and extensions you use support the new version; otherwise, the website may become inaccessible or malfunction.

Troubleshooting

If PHP scripts are not being processed on your server, first check that the web server itself works. Open a browser and go to the website where PHP scripts are failing. If the page opens but the PHP script output is not displayed, the problem may lie with PHP. Here are some steps to troubleshoot the issue.

Check the PHP Service Status

Run the following command, using your PHP version (e.g., PHP 8.3):

sudo service php8.3-fpm status

If the service is running, the output indicates active (running). If it is not running, start it:

sudo service php8.3-fpm start

Check the PHP Log Files

View the tail of the PHP-FPM log:

tail /var/log/php8.3-fpm.log

This displays the last few lines of the PHP log file, which may help identify the issue.

Check the PHP Configuration

Open the php.ini file in a text editor and ensure the display_errors option is set to On. This allows PHP errors to be displayed on your website pages.

Check for Script Errors

Open the PHP scripts in a text editor and look for syntax errors or other issues that could prevent them from working properly.

Check for Web Server Restrictions

Check the web server configuration for restrictions that might affect the execution of PHP scripts. For example, the .htaccess file may prevent scripts from running in certain directories.

Test the Script on Another Server

If the script works on another server, the issue is likely related to the configuration of the current server.
20 November 2024 · 5 min to read
Servers

Setting Up NTP on a Server: A Step-by-Step Guide

NTP (Network Time Protocol) is used to synchronize system time with a reference time provided by special servers. This article will cover how to configure NTP on various operating systems and devices, starting with a comprehensive guide on setting up an NTP server on Linux.

Configuring an NTP Server on Linux

We'll demonstrate synchronization setup using Ubuntu, but this guide also applies to Debian and most Linux-based systems. We've divided the instructions into three parts: the first covers installing the NTP server, the second explains synchronizing NTP clients, and the third covers advanced synchronization settings. To follow this guide, you will need:
A cloud server with Ubuntu installed
A root user or a user with sudo privileges
nano or any other text editor installed

Installing the NTP Server

These steps will guide you through installing and preparing the NTP server for further configuration.

Update the repository index to ensure you can download the latest software versions:
sudo apt-get update

Install the NTP server:
sudo apt-get install ntp

Confirm the installation by choosing Y if prompted (Y/N), and wait until the software is downloaded and installed.

Verify the installation:
sntp --version

The output should display the version number and the build time.

Switch to the nearest server pool. The server should receive accurate time by default, but it's better to connect to a server pool closest to your location for extra reliability. To do this, edit the ntp.conf file located at /etc/ntp.conf. Open it with nano (you need sudo privileges):
sudo nano /etc/ntp.conf

You'll see four lines listing the default server pools. Replace them with pools local to you (for example, for the USA, you can use the US zone servers listed on the NTP Pool Project website). After replacing the lines, save and close ntp.conf by pressing Ctrl+O and then Ctrl+X.

Restart the server:
sudo service ntp restart

Check the server status:
sudo service ntp status

The output should indicate Active (running) on one of the first lines, along with the server start time.

Configure the firewall. To allow client access to the server, open UDP port 123 using UFW:
sudo ufw allow from any to any port 123 proto udp

The installation is complete, and the server is running; now you can proceed with further configuration.

Configuring NTP Client Synchronization

The following steps will allow client systems to synchronize with our NTP server, which will serve as their primary time source.

Install the ntpdate Utility

Install ntpdate, which we will use to check synchronization with the server:
sudo apt-get install ntpdate

Specify the IP Address and Hostname

To configure the server's IP and hostname, edit the hosts file located at /etc/hosts:
sudo nano /etc/hosts

Add the relevant data in the third line from the top (the address below is just an example; replace it with the actual IP of your NTP server):
192.168.154.142 ntp-server

Press Ctrl+X to exit and save the changes by pressing Y. Alternatively, if you have a DNS server, you can configure this mapping there.

Verify Client Synchronization with the Server

To check if synchronization is active between the server and the client, enter:
sudo ntpdate ntp-server

The output will show the time offset. A difference of a few milliseconds is normal, so you can ignore small values.

Disable the timesyncd Service

This service synchronizes the local system time, but we don't need it since our clients will sync with the NTP server.
Disable it with:
sudo timedatectl set-ntp off

Install NTP on the Client System

Install NTP on the client with this command:
sudo apt-get install ntp

Set Your NTP Server as the Primary Reference

To ensure clients sync specifically with your server, open the ntp.conf file and add the following line:
server NTP-server-host prefer iburst

The prefer directive marks the server as preferred, and iburst sends multiple requests to the server for higher synchronization accuracy. Save the changes by pressing Ctrl+X and confirming with Y.

Restart the Server

Restart the NTP service:
sudo service ntp restart

Check the Synchronization Queue

Finally, check the synchronization status by entering:
ntpq -p

This command displays the list of servers in the synchronization queue, including your NTP server as the designated source.

Advanced Synchronization Options

Now that we've set up the NTP server and synchronized client machines, we'll revisit the ntp.conf file (located at /etc/ntp.conf), which contains additional configurations to ensure robust synchronization with external sources.

Preferred Server

Mark the most reliable server or server pool with the prefer directive we've used before. For example:
server 1.north-america.pool.ntp.org prefer

The server directive indicates a specific server, while pool can be used to specify a pool of servers. Don't forget the line server 127.127.1.0 at the end of the pool list, which falls back to the system clock if the connection is lost.

Security Settings

Make sure the following line is included in ntp.conf:
restrict default kod notrap nomodify nopeer noquery

The default keyword applies these settings as defaults for all restrict commands:
kod (Kiss-o'-Death) limits the rate of requests.
notrap blocks the acceptance of control commands.
nomodify restricts commands that might alter the server state.
nopeer prohibits synchronization with external hosts.
noquery blocks query requests.

For IPv4, use -4 before default, and for IPv6, use -6. Here's an example of using some of these options. The following line allows nodes in a specific network to synchronize while preventing them from sending control or state-altering commands:
restrict 192.168.0.0 mask 255.255.255.0 nomodify notrap

The following lines are required for the server to communicate with itself:
restrict 127.0.0.1
restrict ::1

Finally, remember to restart the server after making these changes.

Verifying NTP Operation

To check if NTP is functioning correctly, use the command ntpq -p. In its output, the first column shows the synchronization server's address, followed by its parent server, the stratum level (st column), and the server type (t column). The next three columns show the last synchronization time, the sync interval, and the reliability status. The final two columns display the time difference between the synchronized server and the reference server, as well as the offset. Pay attention to the symbols in the first column, which appear before the IP address: a + indicates a reliable server for synchronization, and a - means the opposite. An * marks the server currently chosen for synchronization. Occasionally, an x will appear, which means the server is unavailable.

Checking if the Server Provides Accurate Time

To ensure the server is distributing the correct time, run the ntpdate command from another system, specifying the IP address of the NTP server you want to verify.
The output should look something like this:
adjust time server (IP address here) offset 0.012319 sec

The number represents the time offset. Here, an offset of about 0.01 seconds (12 milliseconds) is perfectly acceptable. Now that we've completed the Linux setup, let's look at configuring the NTP protocol on Windows.

Configuring an NTP Server on Windows Server

To install and configure an NTP server on Windows Server, you'll need to make some changes in the registry and run commands in the command prompt. Before proceeding with the configuration, you must start the service. This is done by modifying the following registry entry:
HKLM\System\CurrentControlSet\services\W32Time\TimeProviders\NtpServer

In this section, find the Enabled entry on the right and set it to 1 so that the Data column displays:
0x00000001 (1)

Next, open cmd with administrator privileges and enter the command to restart the time service:
net stop w32time && net start w32time

To verify that NTP is enabled, use the following command:
w32tm /query /configuration

You'll get a long listing; check the NtpServer <Local> block. In the Enabled line, the value should be 1. Now, open UDP port 123 in the firewall for proper client servicing, and then proceed with the configuration. Return to the registry and look for the entry:
HKLM\System\CurrentControlSet\services\W32Time\Parameters

This section contains many parameters, but the main one is Type, which can take one of four values:
NoSync: no synchronization.
NTP: synchronization with external servers specified in the NtpServer registry entry (the default for standalone machines).
NT5DS: synchronization according to the domain hierarchy (the default for machines in a domain).
AllSync: synchronization with all available servers.

Now, go back to the registry and configure the values under the NtpServer section. Most likely, only the Microsoft server is listed. You can add others, paying attention to the flag at the end:
0x1, SpecialInterval: standard mode recommended by Microsoft.
0x2, UseAsFallbackOnly: use this server as a fallback.
0x4, SymmetricActive: the main mode for NTP servers.
0x8, Client: used when synchronization issues occur.

The last thing you need to do is set the synchronization interval in the section:
W32Time\TimeProviders\NtpClient

The parameter is SpecialPollInterval, where you should set the desired value (in seconds). By default, it's set to one week. If you want more frequent synchronization, set:
86400 for 1 day.
21600 for 6 hours.
3600 for 1 hour.

The last value is optimal in terms of system load and acceptable precision when frequent synchronization is required.

Configuring an NTP Server on Cisco Devices

On Cisco devices, the process is simple and quick:

Enter configuration mode:
conf t

Set the time zone:
clock timezone <timezone> <offset>
For example: clock timezone CST -6

Specify the source interface for NTP packets:
ntp source <interface>

If you want to make this device the primary time server for other machines on the network, use:
ntp master 2
The stratum number should be 2 or greater.

Use the command ntp update-calendar to update the hardware clock.

Add the names or IP addresses of the NTP servers to synchronize with:
ntp server <address>

To check the configuration or troubleshoot, use the show command.
It will be useful for checking the time (show clock), NTP status (show ntp status), and associations (show ntp associations). Configuring an NTP Server on MikroTik Routers We will configure the NTP server using SNTP: In Winbox, go to System – SNTP Client. Find the SNTP Client section and enable it by checking the Enabled box. In the Server DNS Names field below, enter the IP addresses of the NTP servers. To check if everything is working, go to System – Clock. Set the time zone by choosing it from the dropdown list or check the Time Zone Autodetect box, and the time zone will be set automatically. The synchronization interval can be seen in the Poll Interval field in the SNTP Client menu. Below, you will find the last synchronization time in the Last Update field. That’s it! Now you’ve learned how to configure NTP on different operating systems and devices.
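To recap the Linux portion of this guide, here is a minimal /etc/ntp.conf sketch reflecting the settings discussed above. The pool hostnames are illustrative examples; substitute the pools for your region and your own LAN subnet:

# illustrative /etc/ntp.conf fragment — replace pool names with ones near you
pool 0.north-america.pool.ntp.org iburst
pool 1.north-america.pool.ntp.org iburst
server 127.127.1.0                  # local clock as a fallback if the network drops
restrict default kod notrap nomodify nopeer noquery
restrict 127.0.0.1                  # let the daemon talk to itself
restrict ::1
restrict 192.168.0.0 mask 255.255.255.0 nomodify notrap   # allow LAN clients to sync

After editing, restart the service (sudo service ntp restart) and verify with ntpq -p as described above.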
19 November 2024 · 9 min to read
MySQL

How to Find and Delete Duplicate Rows in MySQL with GROUP BY and HAVING Clauses

Duplicate entries may inadvertently accumulate in databases, which are crucial for storing vast amounts of structured data. These duplicates can appear for a number of reasons, including system errors, data migration mistakes, or repeated user submissions. A database with duplicate entries may experience inconsistencies, sluggish performance, and erroneous reporting. Using the GROUP BY and HAVING clauses, as well as an alternative strategy that makes use of temporary tables, we will discuss two efficient methods for locating and removing duplicate rows in MySQL. With these techniques, you can be sure that your data will always be accurate, clean, and well-organized. Duplicate rows in MySQL tables can clutter your data, resulting in inaccurate analytics and needless storage. Locating and eliminating them is a crucial database maintenance task. This is a detailed guide on how to identify and remove duplicate rows. If two or more rows have identical values in the relevant columns, they are considered duplicates. For instance, rows that have the same values in both the userName and userEmail columns of a userDetails table may be considered duplicates.

Benefits of Removing Duplicate Data

Duplicate entries can slow down query performance, take up extra storage space, and produce misleading results in reports and analytics. Keeping databases clean improves the accuracy and speed of data processing, which is particularly crucial for databases that serve critical applications or are growing.

Requirements

Before starting, make sure you have:
Access to a MySQL database, or MySQL installed on your computer.
A basic understanding of general database concepts and SQL queries.
Access to a MySQL client or command-line interface for executing SQL commands.

To gain practical experience, you can create a sample database and table containing duplicate records so that you can test and understand the techniques for eliminating them.

Creating a Test Database

Launch the MySQL command-line tool:

mysql -u your_username -p

After entering your MySQL credentials, create a new database called test_dev_db:

CREATE DATABASE test_dev_db;

Then, switch to this newly created database:

USE test_dev_db;

Create the userDetails table and add several rows, including duplicates, with the CREATE TABLE and INSERT queries below:

CREATE TABLE userDetails (
  userId INT AUTO_INCREMENT PRIMARY KEY,
  userName VARCHAR(100),
  userEmail VARCHAR(100)
);

INSERT INTO userDetails (userName, userEmail) VALUES
('Alisha', '[email protected]'),
('Bobita', '[email protected]'),
('Alisha', '[email protected]'),
('Alisha', '[email protected]');

Using GROUP BY and HAVING to Locate Duplicates

Grouping rows according to duplicate-defining columns and using HAVING to filter groups with more than one record is the simplest method for finding duplicates. Now that you have duplicate data, you can use SQL to determine which rows contain duplicate entries. MySQL's GROUP BY and HAVING clauses make this process easier by enabling you to count instances of each distinct value. An example of a table structure is the userDetails table, which contains the columns userId, userName, and userEmail. The GROUP BY clause is useful for counting occurrences and identifying duplicates because it groups records according to specified column values.
The HAVING clause filters the groups produced by GROUP BY, which lets us single out the groups that contain duplicate entries.

Table userDetails Structure

userId  userName  userEmail
1       Alisha    [email protected]
2       Bobita    [email protected]
3       Alisha    [email protected]
4       Alisha    [email protected]

In the table above, records with identical userName and userEmail values are considered duplicates.

Finding Duplicates

Query to find the duplicate entries:

SELECT userName, userEmail, COUNT(*) AS count
FROM userDetails
GROUP BY userName, userEmail
HAVING count > 1;

The query above groups rows by userName and userEmail, counts the entries within each group, and eliminates groups with a single entry (no duplicates).

Explanation:
SELECT userName, userEmail, COUNT(*) AS count: Retrieves each distinct combination of userName and userEmail along with how many times it occurs.
GROUP BY userName, userEmail: Groups records by userName and userEmail.
COUNT(*): Tallies the rows in each group.
HAVING count > 1: Identifies recurring entries by keeping only groups with more than one record.

This query will return groups of duplicate records based on the selected columns:

userName  userEmail            count
Alisha    [email protected]    3

Eliminating Duplicate Rows

After finding duplicates, you may need to eliminate some records while keeping the unique ones. Joining the table to itself and removing rows with higher userId values is one effective method that preserves the lowest userId for every duplicate group. Use the following SQL query to remove duplicate rows while keeping the lowest userId entry:

DELETE u1 FROM userDetails u1
JOIN userDetails u2
  ON u1.userName = u2.userName
 AND u1.userEmail = u2.userEmail
 AND u1.userId > u2.userId;

Explanation:
u1 & u2: Aliases for the userDetails table to enable a self-join.
ON u1.userName = u2.userName AND u1.userEmail = u2.userEmail: Matches rows with identical userName and userEmail values.
AND u1.userId > u2.userId: Deletes rows with higher userId values, keeping only the row with the smallest userId in each group.

Because this action cannot be undone, it is advised that you back up your data before beginning the deletion procedure.

Confirming Duplicate Removal

To confirm that all duplicates have been removed, repeat the identification query:

SELECT userName, userEmail, COUNT(*) AS count
FROM userDetails
GROUP BY userName, userEmail
HAVING count > 1;

If this query returns no rows, all duplicates have been successfully eliminated.

Benefits of Employing GROUP BY and HAVING

The GROUP BY and HAVING clauses are vital instruments for aggregating data and filtering grouped results. They are especially useful for detecting and handling duplicate entries and for condensing large datasets. Their primary benefits include:
Efficient identification of duplicates.
Data aggregation and summarization.
Precise filtering of aggregated results.
Versatility across multiple scenarios.
Compatibility and simplicity.
Enhanced query readability.
Support for complex aggregations.

In short, GROUP BY and HAVING are essential for data aggregation, duplicate detection, and result filtering. Their effectiveness, ease of use, and adaptability make them crucial for database management and data analysis, allowing users to derive insights and handle data proficiently across a variety of applications.
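As a side note, on MySQL 8.0 and newer you can reach the same result with a window function instead of a self-join. This sketch is not part of the original walkthrough; it assumes the same userDetails columns, and the inner derived table is required because MySQL cannot otherwise delete from a table it is selecting from in the same statement:

-- MySQL 8.0+ sketch: rank rows inside each (userName, userEmail) group,
-- then delete everything ranked after the first occurrence.
DELETE FROM userDetails
WHERE userId IN (
  SELECT userId FROM (
    SELECT userId,
           ROW_NUMBER() OVER (
             PARTITION BY userName, userEmail
             ORDER BY userId
           ) AS rn
    FROM userDetails
  ) AS ranked
  WHERE rn > 1          -- rn = 1 is the row we keep (smallest userId)
);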
Identifying Duplicates Using a Temporary Table

When dealing with large datasets, it can be easier and more efficient to isolate duplicates in a temporary table before deleting them.

Creating the Table

Create a temporary table to store duplicate groups according to the duplicate-defining columns (e.g., userName and userEmail):

CREATE TEMPORARY TABLE temp_view_duplicates AS
SELECT userName, userEmail, MIN(userId) AS minuid
FROM userDetails
GROUP BY userName, userEmail
HAVING COUNT(*) > 1;

Explanation:
CREATE TEMPORARY TABLE temp_view_duplicates AS: Creates a temporary table named temp_view_duplicates.
SELECT userName, userEmail, MIN(userId) AS minuid: For each duplicate group, records the smallest userId.
GROUP BY userName, userEmail: Groups rows by userName and userEmail.
HAVING COUNT(*) > 1: Keeps only groups with more than one row, i.e., the duplicates.

This temporary table will now contain one representative row per duplicate group (the row with the smallest userId).

Deleting Duplicates from the Main Table

Now that temp_view_duplicates holds one representative row per duplicate group, we can use it to remove duplicates while keeping only the rows with the smallest userId. Note that MySQL does not allow a TEMPORARY table to be referenced more than once in the same statement, so we join it a single time rather than using two subqueries:

DELETE ud
FROM userDetails ud
JOIN temp_view_duplicates t
  ON ud.userName = t.userName
 AND ud.userEmail = t.userEmail
WHERE ud.userId <> t.minuid;

Explanation:
JOIN temp_view_duplicates t ON ...: Targets only the duplicate groups identified in temp_view_duplicates.
WHERE ud.userId <> t.minuid: Ensures that only the redundant rows (those with higher userId values) are deleted.

Verifying Results

To confirm that duplicates have been removed, query the userDetails table:

SELECT * FROM userDetails;

Only unique rows should remain. Temporary tables (CREATE TEMPORARY TABLE) are automatically dropped when the session ends, so they don't persist beyond the current session. When making extensive deletions, consider using a transaction so you can safely commit or roll back changes as necessary; a short sketch of this follows at the end of the article.

Key Advantages of Using a Temporary Table

Lower Complexity: By isolating duplicates, the removal process is simpler and clearer.
Enhanced Efficiency: It's faster for large datasets, as it avoids repeated self-joins.
Improved Readability: Using a temporary table makes the process more modular and easier to understand.

Conclusion

Eliminating duplicate records is essential for maintaining a well-organized database, improving performance, and ensuring accurate reporting. This guide presented two approaches:
Direct method with GROUP BY and HAVING clauses: ideal for smaller datasets, using a self-join to delete duplicates.
Temporary table approach: more efficient for larger datasets, leveraging temporary storage to streamline deletion.

Choose the method that best fits your data size and complexity to keep your database clean and efficient.
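As promised, here is a minimal sketch of wrapping the self-join delete in a transaction so the result can be inspected before it becomes permanent. It assumes the same userDetails table and an InnoDB storage engine (transactions do not apply to MyISAM tables):

-- wrap the destructive step in a transaction so it can be reviewed first
START TRANSACTION;

DELETE u1 FROM userDetails u1
JOIN userDetails u2
  ON u1.userName  = u2.userName
 AND u1.userEmail = u2.userEmail
 AND u1.userId    > u2.userId;

-- sanity check: this should report zero remaining duplicate groups
SELECT userName, userEmail, COUNT(*) AS cnt
FROM userDetails
GROUP BY userName, userEmail
HAVING cnt > 1;

COMMIT;    -- or ROLLBACK; if the check above looks wrong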
19 November 2024 · 8 min to read
Servers

iSCSI Protocol: How It Works and What It’s Used For

iSCSI, or Internet Small Computer System Interface, is a protocol for data storage that enables SCSI commands to be run over a network connection, typically Ethernet. In this article, we’ll look at how it works, its features and advantages, and explain how to configure the iSCSI protocol. How iSCSI Works To understand how iSCSI functions, let’s look at its structure in more detail. The main components are initiators and targets. This terminology is straightforward: initiators are hosts that initiate an iSCSI connection, while targets are hosts that accept these connections. Thus, storage devices serve as targets to which the initiator hosts connect. The connection is established over TCP/IP, with iSCSI handling the SCSI commands and data organization, assembling them into packets. These packets are then transferred over a point-to-point connection between the local and remote hosts. iSCSI processes the packets it receives, separating out the SCSI commands, making the OS perceive the storage as a local device, which can be formatted and managed as usual. Authentication and Data Transmission In iSCSI, initiators and targets are identified using special names: IQN (iSCSI Qualified Name) and EUI (Extended Unique Identifier), the latter used with IPv6 protocol. Example of IQN: iqn.2003-02.com.site.iscsi:name23. Here, 2003-02 represents the year and month the domain site.com was registered. Domain names in IQN appear in reverse order. Lastly, name23 is the unique name assigned to the iSCSI host. Example of EUI: eui.fe9947fff075cee0. This is a hexadecimal value in IEEE format. The upper 24 bits identify a specific network or company (such as a provider), while the remaining 40 bits uniquely identify the host within that network. Each session involves two phases. The first phase is authentication over TCP. After successful authentication, the second phase is data exchange between the initiator host and the storage device, conducted over a single connection, eliminating the need to track requests in parallel. When the data transfer is complete, the connection is closed using an iSCSI logout command. Error Handling and Security To address data loss, iSCSI includes mechanisms for data recovery, such as PDU packet retransmission, connection recovery, and session restart, while canceling any unprocessed commands. Data exchange security is ensured through the CHAP protocol, which doesn’t directly transmit confidential information (like passwords) but uses a hash comparison. Additionally, all packets are encrypted and integrity-checked using IPsec protocols integrated into iSCSI. Types of iSCSI Implementations There are three main types of iSCSI implementations: Host CPU Processing: Processing is handled by the initiator host's CPU. TCP/IP Offload with Shared Load: Most packets are processed by the storage device, while the initiator host handles certain exceptions. Full TCP/IP Offload: All data packets are processed entirely by the storage device. Additionally, iSCSI can be extended for RDMA (Remote Direct Memory Access) to allow direct remote memory access. The advantage of RDMA is that it transfers data without consuming system resources on network nodes, achieving high data exchange speeds. In the case of iSCSI, SCSI buffer memory is used to connect to storage, eliminating the need for intermediate data copies and reducing CPU load. This iSCSI variation is known as iSER (iSCSI Extension for RDMA). 
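The initiator/target model described above can be made concrete with a short sketch of connecting a Linux initiator to a target using the open-iscsi tools. The portal address is a placeholder, the IQN reuses the illustrative name from this article, and your distribution's package name may differ:

# install the initiator tools (Debian/Ubuntu package name shown)
sudo apt install open-iscsi

# discover targets offered by the storage host (placeholder address)
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# log in to a discovered target (illustrative IQN from the example above)
sudo iscsiadm -m node -T iqn.2003-02.com.site.iscsi:name23 -p 192.168.1.50 --login

# the LUN now appears as a local block device, e.g. /dev/sdb — check with:
lsblk

Once logged in, the OS treats the remote LUN like any local disk, which is exactly the transparency the protocol is designed to provide.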
Advantages of iSCSI

iSCSI provides not only cost-effectiveness and improved performance but also offers:
Simplified Network Storage: Since iSCSI operates over Gigabit Ethernet devices, network storage becomes easier to set up and manage.
Ease of Support: iSCSI uses the same principles as TCP/IP, so IT specialists don't need additional training.
Network Equipment Compatibility: iSCSI is based on the TCP/IP network model, so almost any storage-related network equipment is compatible within an iSCSI environment.

Differences Between iSCSI SAN and FC SAN

In discussions comparing these two protocols, iSCSI SAN (Storage Area Network) and FC SAN (Fibre Channel SAN) are often seen as competitors. Let's look at the key differences. iSCSI SAN is a more cost-effective solution than FC SAN. iSCSI offers high data transfer performance and doesn't require additional specialized hardware; it operates on existing network equipment, although dedicated network adapters are recommended for maximum performance. By contrast, FC SAN requires additional hardware such as switches and host bus adapters. To illustrate, here is a summary of the key differences between the protocols:

Operation on an existing network: possible with iSCSI SAN, not possible with FC SAN.
Data transfer speed: 1 to 100 Gbps for iSCSI SAN, 2 to 32 Gbps for FC SAN.
Setup on existing equipment: yes for iSCSI SAN, no for FC SAN.
Data flow control: iSCSI SAN has no packet retransmission protection, while FC SAN is reliable.
Network isolation: no for iSCSI SAN, yes for FC SAN.

Conclusion

As the comparison shows, each protocol has its strengths, so the choice depends on the requirements of your storage system. In short, iSCSI is ideal when cost efficiency, ease of setup, and straightforward protocol management are priorities. On the other hand, FC offers low latency, easier scalability, and is better suited for more complex storage networks.
18 November 2024 · 5 min to read
Nginx

How to Set Up Load Balancing with Nginx

Modern applications can handle many requests simultaneously, and even under heavy load, they must return correct information to users. There are different ways to scale applications:

Vertical Scaling: add more RAM or CPU power by renting or purchasing a more powerful server. This is easy during the early stages of the application's development, but it has drawbacks, such as cost and the limitations of modern hardware.

Horizontal Scaling: add more instances of the application. Set up a second server, deploy the same application on it, and distribute traffic between these instances.

Horizontal scaling can be cheaper and less restrictive in terms of hardware: you can simply add more instances of the application. However, we now need to distribute user requests between the different instances of the application. Load balancing is the process of distributing application requests (network traffic) across multiple devices. A load balancer is a middleware program between the user and a group of applications. The general logic is as follows:
The user accesses the website through a specific domain, which hides the IP address of the load balancer.
Based on its configuration, the load balancer determines which application instance should handle the user's traffic.
The user receives a response from the appropriate application instance.

Load Balancing Advantages

Improved Application Availability: Load balancers can detect server failures. If one of the servers goes down, the load balancer can automatically redirect traffic to another server, ensuring uninterrupted service for users.

Scalability: One of the main tasks of a load balancer is to distribute traffic across multiple instances of the application. This enables horizontal scaling by adding more application instances, increasing the overall system performance.

Enhanced Security: Load balancers can include security features such as traffic monitoring, request filtering, and routing through firewalls and other mechanisms, which help improve the application's security.

Using Nginx for Network Traffic Load Balancing

Quite a few applications can act as a load balancer, but one of the most popular is Nginx. Nginx is a versatile web server known for its high performance, low resource consumption, and wide range of capabilities. Nginx can be used as:
A web server
A reverse proxy and load balancer
A mail proxy server
And much more.

You can learn more about Nginx's capabilities on its website. Now, let's move on to the practical setup.

Installing Nginx on Ubuntu

Nginx can be installed on all popular Linux distributions, including Ubuntu, CentOS, and others. In this article, we will be using Ubuntu. To install Nginx, use the following commands:

sudo apt update
sudo apt install nginx

To verify that the installation was successful, run:

systemctl status nginx

The output should show active (running). The Nginx configuration files are located in the /etc/nginx/sites-available/ directory, including the default file that we will use for writing our configuration.
Example Nginx Configuration

First, we need to install nano:

sudo apt install nano

Now, open the default configuration file:

cd /etc/nginx/sites-available/
sudo nano default

Place the following configuration inside:

upstream application {
    server 10.2.2.11; # IP addresses of the servers to distribute requests between
    server 10.2.2.12;
    server 10.2.2.13;
}

server {
    listen 80; # Nginx will open on this port

    location / {
        # Specify where to redirect traffic from Nginx
        proxy_pass http://application;
    }
}

Setting Up Load Balancing in Nginx

To configure load balancing in Nginx, you need to define two blocks in the configuration:

upstream: defines the server addresses between which the network traffic will be distributed. Here, you specify the IP addresses, ports, and, if necessary, load balancing methods. We will discuss these methods later.

server: defines how Nginx will receive requests. Usually, this includes the port, domain name, and other parameters. The proxy_pass path specifies where the requests should be forwarded; it refers to the upstream block mentioned earlier.

In this way, Nginx is used not only as a load balancer but also as a reverse proxy. A reverse proxy is a server that sits between the client and backend application instances. It forwards requests from clients to the backend and can provide additional features such as SSL certificates, logging, and more.

Load Balancing Methods

Round Robin

There are several methods for load balancing. By default, Nginx uses the Round Robin algorithm, which is quite simple. For example, if we have three applications (1, 2, and 3), the load balancer will send the first request to the first application, the second request to the second application, the third request to the third application, and then continue the cycle, sending the next request to the first one again. Let's look at an example. I have deployed two applications and configured load balancing with Nginx for them:

upstream application {
    server 172.25.208.1:5002; # first
    server 172.25.208.1:5001; # second
}

In practice it works like this: the first request goes to the first server, the second request goes to the second server, then traffic goes back to the first server, and so on. However, this algorithm has a limitation: backend instances may sit idle simply because they are waiting for their turn.

Round Robin with Weights

To avoid idle servers, we can use numerical priorities. Each server gets a weight, which determines how much traffic will be directed to that specific application instance. This way, we ensure that more powerful servers receive more traffic. In Nginx, the priority is specified with the weight parameter:

upstream application {
    server 10.2.2.11 weight=5;
    server 10.2.2.12 weight=3;
    server 10.2.2.13 weight=1;
}

With this configuration, the server at address 10.2.2.11 will receive the most traffic because it has the highest weight. This approach is more reliable than the standard Round Robin, but it still has a drawback: we can manually assign weights based on server power, but requests can still differ in execution time. Some requests might be complex and slow, while others are fast and lightweight.

upstream application {
    server 172.25.208.1:5002 weight=3; # first
    server 172.25.208.1:5001 weight=1; # second
}

Least Connections

What if we move away from Round Robin? Instead of simply distributing requests in order, we can base the distribution on certain parameters, such as the number of active connections to the server.
The Least Connections algorithm ensures an even distribution of load between application instances by considering the number of active connections to each server. To configure it, simply add least_conn; in the upstream block:

upstream application {
    least_conn;
    server 10.2.2.11;
    …
}

Let's return to our example. To test how this algorithm works, I wrote a script that sends 500 requests concurrently and checks which application each request is directed to. Running it shows the requests being spread across both instances according to their active connection counts. Additionally, this algorithm can be used together with weights for the addresses, similar to Round Robin. In this case, the weights indicate the relative number of connections to each address: with weights of 1 and 5, the address with a weight of 5 will receive five times more connections than the address with a weight of 1. Here's an example of such a configuration:

upstream application {
    least_conn;
    server 10.2.2.11 weight=5;
    …
}

upstream loadbalancer {
    least_conn;
    server 172.25.208.1:5002 weight=3; # first
    server 172.25.208.1:5001 weight=1; # second
}

In the script's output for this configuration, the number of requests to the first server is exactly three times higher than to the second.

IP Hash

This method works based on the client's IP address. It guarantees that all requests from a specific address will be routed to the same instance of the application. The algorithm calculates a hash of the client's and server's addresses and uses this result as a unique key for load balancing. This approach can be useful in blue-green deployment scenarios, where we update each backend version sequentially. We can direct all requests to the backend with the old version, then update the new one and direct part of the traffic to it. If everything works well, we can direct all users to the new backend version and update the old one. Example configuration:

upstream app {
    ip_hash;
    server 10.2.2.11;
    …
}

With this configuration, in our example, all requests will now go to the same application instance.

Error Handling

When configuring a load balancer, it's also important to detect server failures and, if necessary, stop directing traffic to "down" application instances. To allow the load balancer to mark a server address as unavailable, you must define two additional parameters in the upstream block: fail_timeout and max_fails.

fail_timeout: specifies the time window during which a certain number of connection errors must occur for the server address in the upstream block to be marked as unavailable.
max_fails: sets the number of connection errors allowed before the server is considered "down."

Example configuration:

upstream application {
    server 10.2.0.11 max_fails=2 fail_timeout=30s;
    …
}

Now, let's see how this works in practice: if we "take down" one of the test backends with this configuration in place, the first backend instance is marked as unavailable and Nginx redirects traffic only to the second server.

Comparative Table of Traffic Distribution Algorithms

Round Robin. Pros: simple and lightweight algorithm; evenly distributes load across applications; scales well. Cons: does not account for server performance differences or the current load of applications.

Weighted Round Robin. Pros: allows setting different weights for servers based on performance. Cons: does not account for the current load of applications; manual weight configuration may be required.
Least Connections. Pros: distributes load to the applications with the fewest active connections. Cons: can lead to uneven load distribution when there are many slow clients.

Weighted Least Connections. Pros: takes server performance into account by focusing on active connections; distributes load according to weights and connection count. Cons: manual weight configuration may be required.

IP Hash. Pros: ties the client to a specific IP address, ensuring session persistence on the same server. Cons: does not account for the current load of applications or server performance; can result in uneven load distribution when many clients share the same IP.

Conclusion

In this article, we explored the topic of load balancing. We learned about the different load balancing methods available in Nginx and demonstrated them with examples; a sketch combining them follows below.
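Pulling the pieces from this article together, a complete configuration combining Least Connections, weights, and failure detection might look like the following sketch (the IP addresses and weights are illustrative):

upstream application {
    least_conn;                                   # pick the least-busy backend
    server 10.2.2.11 weight=5 max_fails=2 fail_timeout=30s;
    server 10.2.2.12 weight=3 max_fails=2 fail_timeout=30s;
    server 10.2.2.13 weight=1 max_fails=2 fail_timeout=30s;
}

server {
    listen 80;

    location / {
        proxy_pass http://application;            # forward traffic to the pool
    }
}

After editing, test the configuration with nginx -t and reload the service to apply it.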
18 November 2024 · 9 min to read
Servers

Sentry: Error Tracking and Monitoring

Sentry is a platform for error logging and application monitoring. The data we receive in Sentry contains comprehensive information about the context in which an issue occurred, making it easier to reproduce the problem, trace the root cause, and resolve the error. It's a valuable tool for developers, testers, and DevOps professionals. This open-source project can be deployed on a private or cloud server. Originally, Sentry was a web interface for displaying traces and exceptions in an organized way, grouping them by type. Over time, it has grown, adding new features, capabilities, and integrations. It's impossible to fully showcase everything it can do in a single article; even a brief video overview could take up to three hours.

Official Website
Documentation
GitHub

Why Use Sentry When We Have Logging?

Reviewing logs to understand what's happening with a service is helpful. When logs from all services are centralized in one place, like Elastic, OpenSearch, or Loki, it's even better. However, you can analyze errors and exceptions faster, more conveniently, and in greater detail with Sentry. There are situations when log analysis alone does not clarify an issue, and Sentry comes to the rescue. Consider cases where a user of your service fails to log in, buy a product, or perform some other action and leaves without submitting a support ticket. Such issues are extremely difficult to identify through logs alone. Even if a support ticket is submitted, analyzing, identifying, and reproducing such specific errors can be costly:
What device and browser were used?
What function triggered the error, and why?
What specific error occurred?
What data was on the front end, and what was sent to the backend?

Sentry's standout feature is the way it provides detailed contextual information about errors in an accessible format, enabling faster response and improved development. As the project developers put it on their website, "Your code will tell you more than what logs reveal. Sentry's full-stack monitoring shows a more complete picture of what's happening in your service's code, helping identify issues before they lead to downtime."

How It Works

In your application code, you set up a DSN (URL) for your Sentry platform, which serves as the destination for reports (errors, exceptions, and logs). You can also customize, extend, or mask the data being sent as needed. Sentry supports JavaScript, Node, Python, PHP, Ruby, Java, and other programming languages. When creating a project, you can choose among various project types, such as a basic Python project as well as the Django, Flask, and FastAPI frameworks; these framework integrations offer richer, more detailed data in the reports they submit.

Usage Options

Sentry offers two main usage options:
Self-hosted (deployed on your own server)
Cloud-based (includes a limited free version and paid plans with monthly billing)

The Developer version is a free cloud plan suitable for getting acquainted with Sentry. For anyone interested in Sentry, we recommend at least trying the free cloud version, as it's a good introduction. However, the self-hosted option is often preferable, since the cloud version can delay error reports by 1 to 5 minutes, which may be inconvenient.

Self-Hosted Version Installation

Now, let's move on to the technical part. To deploy Sentry self-hosted, we need the getsentry/self-hosted repository. The platform will be set up using Docker Compose.
System Requirements

Docker 19.03.6+
Docker Compose 2.19.0+
4 CPU cores
16 GB RAM
20 GB free disk space

We'll be using a VPS from Hostman with Ubuntu 22.04.

System Setup

Update Dependencies

First, we need to update the system packages:

apt update && apt upgrade -y

Install Required Packages

Docker. The Docker version available in the repository is 24.0.7, so we'll install it with:

apt install docker.io

Docker Compose. The version offered by apt is 1.29.2-1, which does not meet the requirement, so we need to install it manually. We'll get the latest version directly from the official repository:

VERSION=$(curl --silent https://api.github.com/repos/docker/compose/releases/latest | grep -Po '"tag_name": "\K.*\d')
DESTINATION=/usr/bin/docker-compose
sudo curl -L https://github.com/docker/compose/releases/download/${VERSION}/docker-compose-$(uname -s)-$(uname -m) -o $DESTINATION
sudo chmod 755 $DESTINATION

Verify Docker Compose Installation

To ensure everything is correctly installed, check the version of Docker Compose:

docker-compose --version

Output:

Docker Compose version v2.20.3

Once these steps are completed, you can proceed with deploying Sentry using Docker Compose.

Installation

The Sentry developers have simplified the installation process with a script. Here's how to set it up.

Clone the Repository and Check Out the Release

First, clone the repository and check out the release tag:

git clone https://github.com/getsentry/self-hosted.git
cd self-hosted
git checkout 24.10.0

Run the Installation Script

Start the installation process by running the script with the following flags:

./install.sh --skip-user-prompt --no-report-self-hosted-issues

Flags explanation:
--skip-user-prompt: Skips the prompt for creating a user (we'll create the user manually, which can be simpler).
--no-report-self-hosted-issues: Skips the prompt to send anonymous data to the Sentry developers from your host (this helps the developers improve the product, but it uses some resources; decide for yourself whether to enable it).

The script will check the system requirements and download the Docker images (docker pull).

Start Sentry

Once the setup is complete, you'll see a message with the command to run Sentry:

You're all done! Run the following command to get Sentry running:
docker-compose up -d

Run the command to start Sentry:

docker-compose up -d

The Sentry web interface will now be available at your host's IP address on port 9000. Before your first login, edit the ./sentry/config.yml configuration file and set the line:

system.url-prefix: 'http://server_IP:9000'

Then restart the containers:

docker-compose restart

Create a User

We skipped user creation during the installation, so let's create the user manually. Run:

docker-compose run --rm web createuser

Enter your email and password, and answer whether you want to give the user superuser privileges. Upon first login, you'll see an initial setup screen where you can specify:
The URL for your Sentry instance.
Email server settings for sending emails.
Whether to allow other users to self-register.

At this point, Sentry is ready to use. You can read more about the configuration here.

Configuration Files

Sentry's main configuration files include:

.env
./sentry/config.yml
./sentry/sentry.conf.py

By default, 42 containers are launched, and we can customize settings in the configuration files. Currently, it is not possible to reduce the number of containers due to the complex architecture of the system. You can modify the .env file to disable some features.
For example, to disable the collection of private statistics, add this line to .env:

SENTRY_BEACON=False

You can also change the event retention period. By default, it is set to 90 days:

SENTRY_EVENT_RETENTION_DAYS=90

Database and Caching

Project data and user accounts are stored in PostgreSQL. If needed, you can easily configure your own database and Redis in the configuration files.

HTTPS Proxy Setup

To access the web interface securely, you need to set up an HTTPS reverse proxy. The Sentry documentation does not prescribe a particular reverse proxy, so you can choose any that fits your needs. After configuring your reverse proxy, you will need to update system.url-prefix in the config.yml file and adjust the SSL/TLS settings in sentry/sentry.conf.py.

Project Setup and Integration with Sentry

To set up and connect your first project with Sentry, follow these steps.

Create a New Project

In the Sentry web interface, click Add New Project and choose your platform. After creating the project, Sentry will generate a unique DSN (Data Source Name), which you'll need to use in your application to send events to Sentry.

Configure the traces_sample_rate

Pay attention to the traces_sample_rate setting. It controls the percentage of events that are sent to Sentry. The default value is 1.0, which sends 100% of all events:

traces_sample_rate=1.0  # 100% of events will be sent

If you set it to 0.25, only 25% of events will be sent, which can be useful to avoid overwhelming the platform with too many similar errors. You can adjust this value depending on your needs. You can read more about additional parameters of the sentry_sdk in the official documentation.

Example Code with a Custom Exception

Here's an example script that integrates Sentry with a custom exception and function:

import sentry_sdk

sentry_sdk.init(
    dsn="http://[email protected]:9000/3",  # DSN from project creation
    traces_sample_rate=1.0,    # Send 100% of events
    environment="production",  # Set the runtime environment
    release="my-app-1.0.0",    # Specify the app version
    send_default_pii=True,     # Send Personally Identifiable Information (PII)
)

class MyException(Exception):
    pass

def my_function(user, email):
    raise MyException(f"User {user} ({email}) encountered an error.")

def create_user():
    print("Creating a user...")
    my_function('James', '[email protected]')

if __name__ == "__main__":
    sentry_sdk.capture_message("Just a simple message")  # Send a test message to Sentry
    create_user()  # Simulate the error

Run the Script

Run the Python script:

python main.py

This script will:
Initialize Sentry with your project's DSN.
Capture a custom exception when calling my_function.
Send an example message to Sentry.

Check the Results in Sentry

After running the script, you should see the following in Sentry:
The Just a simple message message will appear in the event stream.
The MyException raised in my_function will be captured as an error, and the details of the exception will be logged.

You can also view the captured exception, including the user information (user and email) and any other data you choose to send (such as stack traces, environment, etc.). In Sentry, the tags displayed in the error reports include important contextual information that can help diagnose issues. These tags often show:

Environment: the runtime environment of the application, such as "production", "development", or "staging". It helps you understand which environment the error occurred in.
Release Version: the version of your application that was running when the error occurred. This is particularly useful for identifying issues that might be specific to certain releases or versions of the application.

Hostname: the name of the server or machine where the error happened. This can be helpful when working in distributed systems or multi-server environments, as it shows the exact server where the issue occurred.

These tags appear in the error reports, providing valuable context about the circumstances surrounding the issue. For example, the stack trace might show which functions were involved in the error, and these tags can give you additional information, such as which version of the app was running and on which server, making it easier to trace and resolve issues. Sentry automatically adds these contextual tags, but you can also customize them by passing additional information when you capture errors, such as environment, release version, or user-related data; a short sketch of this follows after the conclusion.

Conclusion

In this article, we discussed Sentry and how it can help track errors and monitor applications. We hope it has sparked your interest enough to explore the documentation or try out Sentry. Despite being a comprehensive platform, Sentry is easy to install and configure. The key is to carefully manage errors, group events, and use flexible configurations to avoid chaos. When set up properly, Sentry becomes a powerful and efficient tool for development teams, offering valuable insights into application behavior and performance.
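As mentioned above, you can attach your own context to events. A minimal sketch using the Python SDK's public helpers; the tag names and values here are purely illustrative, and sentry_sdk.init(...) is assumed to have been called as in the earlier example:

import sentry_sdk

# attach custom context that will appear on subsequent events
sentry_sdk.set_tag("payment_provider", "stripe")   # illustrative tag name/value
sentry_sdk.set_user({"id": "42", "email": "user@example.com"})
sentry_sdk.set_extra("cart_size", 3)               # arbitrary extra data

try:
    raise RuntimeError("checkout failed")          # simulated error
except RuntimeError as err:
    sentry_sdk.capture_exception(err)              # reported with the context above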
15 November 2024 · 10 min to read

Tailored cloud server
solutions for every need

General-purpose cloud servers for web hosting

Ideal for websites, content management systems, and basic business applications, cloud web servers provide balanced performance with reliable uptime. These servers are designed to handle moderate traffic while keeping your website or application responsive.

High-performance servers for cloud computing


For businesses needing powerful resources for tasks like AI, machine learning, or data analysis, our high-performance cloud servers are built to process large datasets efficiently. Equipped with 3.3 GHz processors and high-speed NVMe storage, they ensure smooth execution of even the most demanding applications.

Storage-optimized cloud servers for data-driven operations

Need to store and retrieve large amounts of data? Our cloud data servers offer vast capacity with superior read/write speeds. These servers are perfect for databases, large-scale backups, or big data management, ensuring fast access to your data when you need it.

Memory-optimized servers for heavy workloads


These servers are built for applications that require high memory capacity, such as in-memory databases or real-time analytics. With enhanced memory resources, they ensure smooth handling of large datasets, making them ideal for businesses with memory-intensive operations.

In-depth answers to your questions

Which operating systems are supported on your cloud servers?

Choose popular server operating systems and deploy them in one click: from Ubuntu to CentOS. Licensed operating systems are available directly in the control panel.

How can I get started with a cloud server? Is there a straightforward registration process?

Register with Hostman and choose the plan that suits your needs and requirements. You can always add processing power and purchase additional services if needed.

You don't need a development team to get started: you'll do everything yourself in a convenient control panel. Even a person with no technical background can easily work with it.

What is the minimum and maximum resource allocation (CPU, RAM, storage) available for cloud servers?

The starter package includes a 1-core 1.28 GHz CPU, 1 GB RAM, a 15 GB fast NVMe SSD, a dedicated IP address, and a 200 Mbps channel. For demanding users, there is a powerful 8×3.3 GHz server with 16 GB RAM, a 160 GB fast NVMe SSD, a dedicated IP address, and a 200 Mbps channel. Alternatively, you can always get an even more powerful server by configuring it yourself.

What scaling options are available for cloud servers?

You can easily add power, bandwidth, and channel width with just a few clicks directly in the control panel. With Hostman, you can enhance all the important characteristics of your server with hourly billing.

How much does a cloud server cost, and what is the pricing structure like?

Pricing depends on the configuration you choose, and billing is hourly, so you pay only for the resources you actually use. You can adjust capacity, bandwidth, and channel width at any time with a few clicks right in the control panel.

Is there a trial or testing period available for cloud servers before purchasing?

Contact the friendly Hostman support team, and they will offer you comfortable conditions for test-driving our cloud server — and will transfer your current projects to the cloud for free.

What security measures and data protection are in place for cloud servers?

Cloud servers are hosted in a Tier III data center with a high level of reliability. Hostman guarantees 99.99% availability according to the SLA, with downtime not exceeding 52 minutes per year. Additionally, data is backed up for extra security, and the communication channel is protected against DDoS attacks.

What level of support is provided for cloud servers?

Hostman support is always available, 7 days a week, around the clock. We respond to phone calls within a minute and chat inquiries within 15 minutes. Your questions will always be handled by knowledgeable staff with sufficient authority and technical background.

Can I install my own software on a cloud server?

Yes, absolutely! You can deploy any software, operating systems, and images you desire on your server. Everything is ready for self-configuration.

What backup and data recovery methods are available for cloud servers?

Hostman takes care of the security of your data and backs up important information. Additionally, you can utilize the automatic backup service for extra safety and reliability.

Is there a guaranteed Service Level Agreement (SLA) for cloud server availability?

Hostman guarantees a 99.99% level of virtual server availability according to the SLA (Service Level Agreement).

Which data center locations are available for hosting cloud servers?

Our servers are located in modern Tier III data centers in the European Union and the United States.

Can I create and manage multiple cloud servers under a single account?

Certainly, you can launch multiple cloud servers and other services (such as databases) within a single account.

What is the deployment time for cloud servers after ordering?

With Hostman, you'll get a service that is easy and quick to manage on your own. New cloud servers can be launched almost instantly from the control panel, and the necessary software can be installed within minutes.

What monitoring and notification capabilities are offered for cloud servers?

Hostman specialists monitor the technical condition of servers and software around the clock. You won't have to worry about server availability — it will simply work, always.

Can I modify the specifications of my cloud server (e.g., increase RAM) after creation?

You can easily configure your server by adding resources directly in the control panel. And if you need to switch to lower-tier plans, you can rely on Hostman support — our specialists will handle everything for you.

Do you have questions,
comments, or concerns?

Our professionals are available to assist you at any moment,
whether you need help or are just unsure of where to start.
Email us
Hostman's Support