Cloud Server

Deploy your cloud server in minutes and experience the freedom to scale your infrastructure effortlessly. A fast, secure, and flexible cloud server solution designed to meet your unique needs without the constraints of traditional servers.
Blazing 3.3 GHz Processors & NVMe Disks
Experience unparalleled speed with processors optimized for demanding applications, combined with ultra-fast NVMe disks for quick data retrieval.
200 Mbps Channels, Unlimited Traffic
Enjoy stable, high-speed connectivity with unthrottled traffic, ensuring smooth performance even during peak usage periods.
24/7 Monitoring & Support
Stay worry-free with round-the-clock monitoring and professional support, ensuring your systems are always operational.
Cost-Effective Management
Our cloud server solutions are designed to deliver maximum value for your money, offering flexible pricing without compromising on performance.

Cloud server pricing

High-performance cloud servers with pay-as-you-go pricing. Powered by Intel Xeon Gold and AMD EPYC processors, NVMe SSD storage, and 200 Mbps connectivity. Hosted on enterprise-grade Supermicro, Dell, and SuperCloud hardware in certified data centers (ISO 27001, SSAE 16).
New York
1 x 3 GHz CPU / 1 GB RAM / 25 GB NVMe / 200 Mbps Bandwidth / Public IP / $4/mo
1 x 3 GHz CPU / 2 GB RAM / 40 GB NVMe / 200 Mbps Bandwidth / Public IP / $5/mo
2 x 3 GHz CPU / 2 GB RAM / 60 GB NVMe / 200 Mbps Bandwidth / Public IP / $6/mo
2 x 3 GHz CPU / 4 GB RAM / 80 GB NVMe / 200 Mbps Bandwidth / Public IP / $8/mo
4 x 3 GHz CPU / 8 GB RAM / 160 GB NVMe / 200 Mbps Bandwidth / Public IP / $17/mo
8 x 3 GHz CPU / 16 GB RAM / 320 GB NVMe / 200 Mbps Bandwidth / Public IP / $37/mo

Deploy any software in seconds

Select the desired OS or App and install it in one click.
OS Distributions
Pre-installed Apps
Custom Images
Ubuntu
Debian
CentOS

Hostman's commitment to simplicity
and budget-friendly solutions

Configuration: 1 CPU, 1 GB RAM, 25 GB SSD
Compared providers: Hostman / DigitalOcean / Google Cloud / AWS / Vultr
Price: $4 / $6 / $6.88 / $7.59 / $5
Tech support: Free / $24/mo / $29/mo + 3% of monthly charges / $29/mo or 3% of monthly charges / Free
Backups: from $0.07/GB / 20% or 30% higher base daily/weekly fee / $0.03/GB per mo / $0.05/GB per mo / 20% higher base monthly/hourly fee
Bandwidth: Free / $0.01 per GB / $0.01 per GB / $0.09/GB for the first 10 TB per mo / $0.01 per GB
Avg. support response time: <15 min / <24 hours / <4 hours / <12 hours / <12 hours

What is a cloud server?

A cloud server is a virtualized computing resource hosted in the cloud, designed to deliver powerful performance without the need to own or maintain physical hardware. It runs as a virtual machine on a network of connected physical hosts, which enables flexible resource allocation, instant scalability, and high availability. Unlike traditional on-premises servers, a cloud-based server allows users to adjust resources dynamically, making it ideal for handling fluctuating workloads or unpredictable traffic spikes. Whether you're running an e-commerce store, a SaaS platform, or any other application, a cloud web server provides the adaptability necessary to grow with your business.

Cloud servers solve a wide range of challenges, from reducing infrastructure costs to improving uptime and reliability. By leveraging the cloud, businesses can avoid the upfront investment and maintenance costs associated with physical servers. Additionally, a cloud server system allows users to deploy applications quickly, scale resources in real-time, and manage data more efficiently. The key benefits for clients include operational flexibility, cost savings, and the ability to respond quickly to changing demands.

Ready to buy a cloud server?

1 CPU / 1 GB RAM / 25 GB NVMe / 200 Mbps / $2/mo.

Efficient tools for your convenient work

See all Products

Backups, Snapshots

Protect your data with regular backups and snapshots, ensuring you never lose crucial information.

Firewall

Enhance your security measures with our robust firewall protection, safeguarding your infrastructure against potential threats.

Load Balancer

Ensure optimal performance and scalability by evenly distributing traffic across multiple servers with our load balancer feature.

Private Networks

Establish secure and isolated connections between your servers with private networks, shielding sensitive data and enhancing network efficiency.

Trusted by 500+ companies and developers worldwide

One panel to rule them all

Easily control your database, pricing plan, and additional services
through the intuitive Hostman management console.
Project management
Group your cloud servers and databases into a single project, eliminating confusion and simplifying management.
Software marketplace
24 ready-made app stacks for any task: frameworks, e-commerce, analytics tools.
Mobile responsive
Get the optimal user experience across all devices with our mobile-responsive design.
Hostman Cloud

Code locally, launch worldwide

Our servers, certified with ISO/IEC 27001, are located in Tier 3 data
centers across the US, Europe, and Asia.
🇺🇸 San Francisco
🇺🇸 San Jose
🇺🇸 Texas
🇺🇸 New York
🇳🇱 Amsterdam
🇳🇬 Lagos
🇩🇪 Frankfurt
🇵🇱 Gdansk
🇦🇪 Dubai
🇸🇬 Singapore
Anup k.
Associate Cloud Engineer
5.0 out of 5

"Hostman Comprehensive Review of Simplicity and Potential"

It been few years that I have been working on Cloud and most of the cloud service...
Mansur H.
Security Researcher
5.0 out of 5

"A perfect fit for everything cloud services!"

Hostman's seemless integration, user-friendly interface and its robust features (backups, etc) makes it much easier...
Adedeji E.
DevOps Engineer
5.0 out of 5

"Superb User Experience"

For me, Hostman is exceptional because of it's flexibility and user-friendliness. The platform's ability to offer dedicated computing resources acr...
Yudhistira H.
Mid-Market(51-1000 emp.)
5.0 out of 5

"Streamlined Cloud Excellence!"

What I like best about Hostman is their exceptional speed of deployment, scalability, and robust security features. Their...
Mohammad Waqas S.
Biotechnologist and programmer
5.0 out of 5

"Seamless and easy to use Hosting Solution for Web Applications"

From the moment I signed up, the process has been seamless and straightforward...
Mohana R.
Senior Software Engineer
5.0 out of 5

"Availing Different DB Engine Services Provided by Hostman is Convenient for my Organization usecases"

Hostman manages the cloud operations...
Faizan A.
5.0 out of 5

"Hostman is a great fit for me"

Hostman is a great fit for me. What do you like best about Hostman? It was very easy to deploy my application and create database, I didn't have
Adam M.
5.0 out of 5

"Perfect website"

This website is extremely user friendly and easy to use. I had no problems so didn't have to contact customer support. Really good website and would recommend to others.
Anup K.
4.0 out of 5

"Simplifying Cloud Deployment with Strengths and Areas for Growth"

What I like best about Hostman is its unwavering commitment to simplicity...
Naila J.
5.0 out of 5

"Streamlined Deployment with Room for Improvement"

Hostman impresses with its user-friendly interface and seamless deployment process, simplifying web application hosting...

More cloud services from Hostman

See all Products

Latest News

Redis

How to Cache Node.js Applications with Redis

Caching is the process of storing copies of files in a cache — a temporary storage that is much faster to access than other available storage methods in the system. When developing Node.js applications, caching becomes highly relevant because database queries can take significantly longer than fetching data from temporary storage. For example, there is no need to reload the HTML markup of a webpage for every user request to the server — this would add several (sometimes dozens of) milliseconds to the response time. Storing the page (or JSON data for rendering in a SPA application) is much more efficient in the cache. In simple terms, caching is about optimization. This article will explore how to cache application data in a Node.js application using Redis with the Express framework. What is Redis? Redis (Remote Dictionary Server) is an open-source, in-memory database with simple "key-value" data structures. The terminology may vary. Some refer to Redis as a database, others as a caching tool, or something else. The key point is that Redis stores data in RAM instead of a hard drive, which results in higher performance. This is why Redis is referred to as an "in-memory" database. Although the data is kept in RAM, it is periodically saved to a hard drive in the form of snapshots. Redis is often used together with relational DBMSs, such as managed PostgreSQL. Installing Redis Server The installation process for Redis differs depending on the operating system, and you can find detailed instructions for each system on the official website. This article focuses on Ubuntu or Debian. Therefore, we will install the latest version of Redis from the official APT (Advanced Packaging Tool) repository — packages.redis.io: sudo apt update sudo apt install redis Once this is done, the Redis server is ready to use.  For Windows, you need to download the installer from the official GitHub repository. After installation, start the Redis server with the following CLI command: redis-cli For macOS, you can install Redis using the Homebrew package manager: brew install redis Once installed, start the server with: redis-server Node.js Project Configuration Before we dive into how to interact with Redis through a Node.js application, let's first create a separate working directory and navigate to it: mkdir node_redis cd node_redis As usual, let's create a package.json configuration file with a minimal set of data: { "name": "node_redis", "version": "1.0.0", "description": "Simple example of using Redis by Hostman", "main": "index.js", "license": "MIT", "dependencies": { "express": "latest", "axios": "latest", "redis": "latest" } } Note the specified dependencies. For this project, we will need the latest versions of the Express framework and the official Redis client for Node.js from NPM. This is a separate library that provides a high-level API (classes and functions) for interacting with a Redis server. The Axios module will help parse the JSON data the remote server will return in response to API requests. To install these dependencies, we will use the NPM package manager. If you don't have it yet, install it with the following command: sudo apt install npm You can read a separate guide on how to install the latest version of Node.js on Ubuntu. Since the app will use the async/await syntax, the minimum required version of Node.js is 8. 
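Before moving on, it may help to confirm that suitable versions are installed; the async/await syntax used below needs Node.js 8 or newer:
node -v
npm -v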
Now, once all dependencies are specified, they can be installed: npm install Express Application Without Caching In this example, the application will use a fake API from JSONPlaceholder, specifically created for such purposes. We will send a request to the URL https://jsonplaceholder.typicode.com/posts/1 and receive mock data in JSON format: { "userId": 1, "id": 1, "title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit", "body": "quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto" } Subsequent loading of data from the cache (instead of making repeated requests to the remote server) will increase the speed of the application. However, we will first implement the process of handling user requests without caching and add it later. Let's first create and edit our index.js file. The script will use modern JavaScript (ES6) syntax with async/await operators whenever possible: const express = require("express"); // import the Express framework const axios = require("axios"); // import the Axios module for working with JSON data const app = express(); // create an instance of the app // create an async function to request data from the remote server using axios async function getRemoteData() { const information = await axios.get(`https://jsonplaceholder.typicode.com/posts/1`); // send a request to the remote API console.log("There was a request to a remote server"); // log the informational message to the console return information.data; // return the raw JSON data } // create an async function to handle user requests async function onRequest(req, res) { let results = await getRemoteData(); // call the previously created function to get data from the remote server if(results.length === 0) throw "API error"; // handle empty responses with an error res.send(results); // respond to the user's request with the raw JSON data } app.get('/', onRequest); // attach the previously created function to the GET request hook app.listen(8080); // start listening for incoming requests on the default HTTP server port Now, you can run the script, open localhost in your browser, and see the raw JSON data displayed on the web page: node index.js Each request to the local server will, in turn, trigger a request to the remote server. For example, if you refresh the page three times in the browser, the message "There was a request to a remote server" will be printed three times in the terminal of the running Node.js server. But why? From a rational perspective, this is unnecessary. The data retrieved the first time should be cached to reduce the number of operations and user wait times. This is relevant only when the data is expected to remain static for a certain period — in other words, you can only cache data that doesn't change frequently. Express Application with Caching Let's modify the previous example so our application "learns" to cache data. To do this, we'll first connect the Redis client — add a new line at the beginning of the index.js: const redis = require("redis");  Now, naturally, we need to connect to the Redis server we started earlier, and only after that can we set and get keys. 
Let's add a few more lines of code: (async () => { client = redis.createClient(); client.on("error", (error) => console.log('Something went wrong', error)); // set up an error handler for Redis connection issues await client.connect(); // connect to the Redis server })(); Note that the connection to the Redis server is done in an anonymous self-invoking asynchronous function. This ensures that all pre-configuration steps are executed sequentially. Additionally, the connect function returns a promise, which can be handled using then/catch or inside an async function. In our example, the caching logic will be as follows: if the API request to the remote server is made for the first time, we cache the obtained data. If the data has been previously retrieved, it will be available in the cache — we fetch it and send it to the user. Let's modify the onRequest function (middleware) to implement caching: async function onRequest(req, res) { let results; // declare the variable for the result const cacheData = await client.get("post"); // try to get the "post" key from Redis database if (cacheData) { results = JSON.parse(cacheData); // parse the data from a raw string format into a structure } else { results = await getRemoteData(); // call the function to get data from the remote server if (results.length === 0) throw "API error"; // handle empty result with an error await client.set("post", JSON.stringify(results)); // cache the obtained data } res.send(results); // respond to the request with JSON data } Notice that the get function returns null if no value is saved for the given key in Redis. If this happens, an API request is made to the remote server. If data exists in the cache, it is retrieved and sent to the user. The set function is responsible for caching — it stores the given value under a specified key so we can retrieve it later with get. 
The full code of the application at this stage looks like this: const express = require("express"); // import Express framework const axios = require("axios"); // import Axios module for working with JSON data const redis = require("redis"); // import Redis client const app = express(); // create an instance of the application // Connect to Redis server (async () => { client = redis.createClient(); client.on("error", (error) => console.log('Something went wrong', error)); // set up an error handler for Redis connection issues await client.connect(); // connect to the Redis server })(); // create an asynchronous function to request data from the remote server using axios async function getRemoteData() { const information = await axios.get(`https://jsonplaceholder.typicode.com/posts/1`); // send a request to the remote server with the API console.log("There was a request to a remote server"); // log an informational message to the console return information.data; // return the obtained JSON data in raw form } // create an asynchronous function to handle user requests async function onRequest(req, res) { let results; // declare the variable for the result const cacheData = await client.get("post"); // attempt to retrieve the "post" variable from the Redis database if (cacheData) { results = JSON.parse(cacheData); // parse the data from a raw string into a structured format } else { results = await getRemoteData(); // call the function to fetch data from the remote server if (results.length === 0) throw "API error"; // handle empty result with an error await client.set("post", JSON.stringify(results)); // cache the obtained data } res.send(results); // respond with the JSON data } // run the HTTP server with the necessary configurations app.get('/', onRequest); // associate the created function with the GET request hook app.listen(8080); // start handling incoming requests on the standard HTTP server port Setting Cache Expiration We should periodically update the data stored in the cache to prevent it from becoming outdated. In real-world projects, APIs often provide additional information about how frequently cached data should be updated. This information is used to set a timeout — the duration for which the data in the cache remains valid. Once this time expires, the application makes a new request to obtain fresh data. In our case, we will take a simpler approach that is commonly used in practice. We will set a constant cache expiration time of 60 seconds. After this period, the application will make another request to the remote server for fresh data. It’s important to note that cache expiration is handled by Redis. This can be achieved by providing additional parameters when using the set function. To implement this, we will modify the set function call to include an additional structure as the third argument. Thus, the line: await client.set("post", JSON.stringify(results)); // cache the received data Will be updated to: await client.set("post", JSON.stringify(results), { EX: 60, NX: true }); // cache the received data with expiration In this case, we updated the previous line of code by adding the EX parameter, which is the cache expiration time in seconds. The NX parameter ensures that the key is only set if it does not already exist in the Redis database. This last parameter is important because re-setting the key would update the cache timeout without it, preventing it from fully expiring. 
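As a quick sanity check (assuming Redis is running locally on the default port and using the post key from the example above), you can inspect the cached value and its remaining lifetime with redis-cli:
redis-cli GET post
redis-cli TTL post
GET prints the cached JSON string, or (nil) if nothing is cached; TTL shows the remaining lifetime in seconds and returns -2 once the key has expired and been removed.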
Now, the Redis database will store the value of the post key for 60 seconds and then delete it. This means that every minute, the cacheData variable in our app will receive a null value, triggering an API request to the remote server and re-caching the obtained result. Conclusion This article demonstrated how in-memory storage can serve as a "mediator" between processing and storing data on solid-state drives. All of this is a form of caching that reduces unnecessary computational (and network) operations, thereby improving the application's performance and reducing the server's load. As shown, you can quickly set up such storage using Redis with a Node.js client. In our case, we used a mock API that returned trivial JSON data. In one scenario, the data was requested every time, while in the other, it was cached — sometimes with an expiration time. The examples provided are just the basics. As always, you can find more information on using Redis in the official documentation. The same applies to the documentation for Express and Axios.
26 December 2024 · 11 min to read
Python

Mastering Python For Loops: An Essential Guide

Loops are a significant aspect of programming languages. Python for loop is a simple and easy way to repeat actions which are in sequence on each item. Whether it is to process characters in string, iterate through the list, or generate numeric ranges, any type of repetitive task can be done easily and efficiently. The following guide walks through their usage with syntax, examples, and day-to-day applications. A Python for loop simplifies iteration by automatically traversing elements within collections like lists, tuples, dictionaries, strings, or ranges. Instead of relying on a manual index like in some other languages, Python loops directly interact with the elements of the collection, making them more intuitive and there is a lower possibility of errors. Breaking down the flow of a for loop can help in understanding its mechanics. Consider this sequence of steps: Start -> Initialize -> Condition Check -> Execute Block -> Increment -> Repeat -> End Structure and syntax This section discusses structure and syntax of for loops by performing a few simple examples.  Structure Below is representation of the simple structure of a for loop in Python: for variable in iterable: # Code block to execute variable: Temporary variable that represents every element of the sequence. iterable: Collection to iterate over (e.g., a list or range). Code block: Indented block of code executed for every iteration. Example: fruits = ["apple", "banana", "cherry"] for fruit in fruits: print(fruit) Output: apple banana cherry Utilizing range() for Numerical Loops When numeric values need to be generated in a sequence, the range() function proves invaluable. It offers a convenient method to produce numbers within a defined range, with the option to skip values using a specified step. Syntax: range(start, stop, step) start: Beginning of sequence (default is 0). stop: Endpoint, excluded from the range. step: Increment or decrement applied between consecutive values (default is 1). Example: for i in range(1, 6): print(i) Output: 1 2 3 4 5 Use Cases and Practical Examples of Python For Loops Dealing with Strings Strings can be easily iterated using a for loop, making it useful for tasks like counting characters or modifying text. Example: text = "Python" for char in text: print(char) Output: P y t h o n Combining Nested For Loops In the scenario of dealing with nested structures which include multidimensional lists or grids, nested for loops are a handy solution. A loop within another loop ensures that every element is addressed at each hierarchy level. Example: matrix = [[1, 2], [3, 4], [5, 6]] for row in matrix: for item in row: print(item) Output: 1 2 3 4 5 6 Dealing with Dictionaries Dictionaries are easily looped through by utilizing a for loop in Python. You can iterate over values, keys, or both by using for loops. Example: student_scores = {"Alice": 85, "Bob": 78, "Charlie": 92} # Looping through keys for student in student_scores: print(student) # Looping through values for score in student_scores.values(): print(score) # Looping through keys and values for student, score in student_scores.items(): print(f"{student}: {score}") This makes working with dictionaries simple and efficient, whether you need just the keys, the values, or both in a single loop. Controlling Loop Flow with break and continue Another method to further refine a for loop is by utilizing the statements break and continue: Break: In this scenario, a condition must be satisfied so that the loop can exit prematurely. 
Continue: It will skip current iteration and proceed to next. Example demonstrating break: for num in range(10): if num == 5: break print(num) Output: 0 1 2 3 4 Example demonstrating continue: for num in range(10): if num % 2 == 0: continue print(num) Output: 1 3 5 7 9 Summation of Values in List Here’s an example of using for loops to sum numbers in a list. numbers = [10, 20, 30, 40] total = 0 for num in numbers: total += num print("Total:", total) Output: Total: 100   Creating Multiplication Tables With the help of nested for loops, complete multiplication table which showcases the product of two numbers in a structured format can be generated. for i in range(1, 6): for j in range(1, 6): print(f"{i} x {j} = {i * j}") print() Output: 1 x 1 = 1 1 x 2 = 2 ... Reading Files Line by Line Reading a file line by line with a for loop is memory efficient, as it processes the file without loading it entirely into memory, reducing computational power. Example: with open("example.txt", "r") as file: for line in file: print(line.strip()) # Strips leading/trailing whitespaces Here, the for loop in Python will iterate through each line in the file, and will print each one after removing extra spaces. The method is memory efficient and works well for large text files. Enhancing the Readability of Your Code Python's for loop syntax is efficient, simple, and enhances code readability by allowing focus on the task rather than access mechanics, reducing errors. Example: # Without a for loop print(“1”) print(“2”) print(“3”) # With a for loop numbers = [1, 2, 3] for number in numbers: print(number) Notice how the second method is more straightforward and readable. Complex Data Structures For loops are flexible enough to handle more advanced collections like sets, dictionaries, and even custom objects. The iteration is seamless over these structures due to for loops and there is no need for any additional logic. Example: # Iterating Through a Dictionary student_scores = {"Alice": 85, "Bob": 78, "Charlie": 92} # Access keys for student in student_scores: print(student) # Access values for score in student_scores.values(): print(score) # Access both keys and values for student, score in student_scores.items(): print(f"{student}: {score}") The above example shows the easiness of extracting specific elements as well as combinations of those elements. For Loops in Real-Life Programming For loops aren’t just theoretical; they play an important role in handling real-world processes like processing files, analyzing data, and automating repetitive actions. Example: # Reading Lines from a File with open("example.txt", "r") as file: for line in file: print(line.strip()) In case one has to work with large datasets stored in text files then this approach is much practical. Using Enumerate for Indexed Iteration Enumerate is best suited for when the index and value, both, of each element are needed. Writing extra code to manage counters is not required anymore. Its much time efficient. Example: # Enumerating Elements fruits = ["apple", "banana", "cherry"] for index, fruit in enumerate(fruits): print(f"{index}: {fruit}") This method is concise and reduces the chance of errors. Making Loops Error-Proof By adding error-handling mechanisms, you can be sure that your loops are resilient and have ability to handle unexpected scenarios gracefully. 
Example: # Handling Errors in Loops numbers = [10, 20, "a", 30] for num in numbers: try: print(num * 2) except TypeError: print(f"Skipping invalid type: {num}") This approach works great when one has to deal with unpredictable data. Other Iteration Techniques While for loops are versatile, some tasks might benefit from alternative approaches like list comprehensions or generator expressions. These are often more concise and better suited for specific scenarios. Example: # Using List Comprehension # Traditional for loop squares = [] for num in range(5): squares.append(num ** 2) print(squares) # List comprehension squares = [num ** 2 for num in range(5)] print(squares) Both approaches achieve the same result, but list comprehensions are much compact. Performance Tips for For Loops Although for loops have been more practical for huge amount of queries, large-scale operations might require faster alternatives like NumPy which are best for numerical data. Example: # Using for loop large_list = list(range(1000000)) squared = [num ** 2 for num in large_list] # Using NumPy (faster) import numpy as np large_array = np.array(large_list) squared = large_array ** 2 This comparison highlights that libraries actually significantly boost performance. Summary For loops in Python are proven to be highly advantageous and versatile when it comes to handling repetitive tasks across various data structures. From simple iterations to complex nested loops, understanding their potential unlocks efficiency in programming. Practice these examples and experiment with your own to master this essential concept. If you want to build a web service using Python, you can rent a cloud server at competitive prices with Hostman.
25 December 2024 · 7 min to read
SQL

How To Use Nested Queries in SQL

Nested queries, usually referred to as subqueries, are a fundamental feature of SQL that empower users To execute advanced data retrieval and analysis. By embedding one query within another, tasks that might otherwise be challenging or unfeasible utilizing a unique query can be efficiently executed. This tutorial outlines the concept of nested queries through the use of a sample database and explores their various applications for extracting meaningful insights. Creating the Sample Database To demonstrate the potential of nested searches, assume a database called Company. It consists of two primary tables: Employees and Departments. The Employees table contains information about individual employees, while the Departments table provides data about the departments they are associated with. This structured setup serves as the foundation for demonstrating how several types of nested queries can address specific problems. -- Create the database called Company CREATE DATABASE Company ; USE Company ; -- Create the Departments table CREATE TABLE Departments ( department_id INT PRIMARY KEY, department_name VARCHAR(50), location VARCHAR(50) ); -- Insert data into Departments INSERT INTO Departments VALUES (101, 'Sales', 'New York'), (102, 'HR', 'Chicago'), (103, 'IT', 'San Francisco'); -- Create the Employees table CREATE TABLE Employees ( employee_id INT PRIMARY KEY, name VARCHAR(50), department_id INT, salary DECIMAL(10, 2), hire_date DATE, FOREIGN KEY (department_id) REFERENCES Departments(department_id) ); -- Insert data into Employees INSERT INTO Employees VALUES (1, 'Alice', 101, 60000, '2020-01-15'), (2, 'Bob', 102, 55000, '2018-03-22'), (3, 'Charlie', 101, 70000, '2019-11-01'), (4, 'David', 103, 50000, '2021-06-10'), (5, 'Eve', 102, 45000, '2017-07-19'); The tables should look like this: The Departments table The Employees table Applications of Nested Queries Single-Row Subqueries A frequent scenario for single-row subqueries is extracting employees' wages that surpass the company's overall average. In this scenario, an inner query computes the overall average wage, while an outer query retrieves the employees earning above this benchmark. Such queries are particularly effective for leveraging aggregate functions like AVG, MAX, or MIN. -- Select the name and salary of employees SELECT name, salary FROM Employees -- Where the salary is greater than the average salary of all employees WHERE salary > (SELECT AVG(salary) FROM Employees); Multi-Row Subqueries Another practical application involves listing employees who work in departments based in a specific location, such as New York. Here, the inner query identifies the relevant department IDs, and the outer query selects employees linked to these departments. Multi-row subqueries depend on operators like IN, ANY, or ALL to compare sets of values and reach the desired results.. -- Select the name of employees SELECT name FROM Employees -- Where the department ID is in the list of department IDs from the Departments table WHERE department_id IN (SELECT department_id FROM Departments WHERE location = 'New York'); Correlated Subqueries Correlated subqueries support more dynamic comparisons by tying the inner query to each row of the outer query. For locating employees earning more than the average wage within their respective departments, the inner query computes the department-specific average, and the outer query selects employees based on this criterion. While highly flexible, correlated subqueries could be computationally intensive. 
SELECT name FROM Employees e1 -- Where the salary is greater than the average salary of employees in the same department WHERE salary > (SELECT AVG(salary) FROM Employees e2 WHERE e1.department_id = e2.department_id); Subqueries in the FROM Clause Subqueries can be employed in the FROM clause to generate temporary result sets, often referred to as derived tables. For example, locating departments with average salaries above $50,000 entails calculating department-level salary averages in the inner query and filtering the results in the outer query. This approach is particularly useful for organizing intermediate data before applying further analysis. SELECT department_name, avg_salary FROM (SELECT department_id, AVG(salary) AS avg_salary FROM Employees GROUP BY department_id) AS avg_table -- Join the average salary table with the Departments table on department ID JOIN Departments ON avg_table.department_id = Departments.department_id -- Filter the results to include only departments with an average salary greater than 50,000 WHERE avg_salary > 50000; Data Validation with Nested Queries Nested queries are valuable for validating data integrity. For example, identifying employees associated with non-existent departments involves comparing employee department IDs against a list of valid IDs retrieved by the inner query. This technique helps ensure referential accuracy and detect potential anomalies in the data. SELECT name FROM Employees WHERE department_id NOT IN (SELECT department_id FROM Departments); Conditional Logic with Subqueries Combining subqueries with conditional logic allows for more nuanced insights. For example, to identify departments with at least one employee earning more than $60,000, the inner query selects relevant department IDs based on salary criteria, and the outer query gathers the related department names. This method highlights meaningful relationships between tables through filtering and comparison. SELECT DISTINCT department_name FROM Departments WHERE department_id IN ( -- Select the department ID from the Departments table SELECT department_id FROM Employees WHERE salary > 60000 ); Best Practices for Using Nested Queries Optimize for Performance: When working with huge datasets, evaluate the performance of nested searches and consider indexing commonly used columns to increase efficiency. Simplify Complex Queries: Prevent excessive nesting by leveraging common table expressions (CTEs) or temporary tables, which improve readability and simplify debugging. Validate Inner Queries: Run inner queries independently to ensure they produce the expected results before integrating them into outer queries. Utilize Joins Where Possible: In some scenarios, joins can achieve similar outcomes as nested queries but with better performance. For instance, filtering employees in specific departments can often be implemented using joins. Reduce Correlated Subqueries: Since correlated subqueries execute for each row in the outer query, consider replacing them with joins or CTEs to improve performance. Conclusion Nested queries are a versatile tool in SQL, offering solutions to sophisticated data retrieval challenges through advanced filtering, aggregation, and comparison techniques. Using the Company database as a reference, this discussion has showcased the utility of various types of nested queries in solving real-world problems. By practicing these techniques and adhering to best practices, you can enhance your SQL proficiency and craft efficient, maintainable queries. 
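Since the best practices above recommend CTEs as an alternative to correlated subqueries, here is one possible rewrite of the department-average example using a common table expression. This is a sketch that assumes the same Employees table and a DBMS with CTE support (for example, MySQL 8+ or PostgreSQL):
-- Compute each department's average once, then join it back
WITH dept_avg AS (
    SELECT department_id, AVG(salary) AS avg_salary
    FROM Employees
    GROUP BY department_id
)
SELECT e.name
FROM Employees e
JOIN dept_avg d ON e.department_id = d.department_id
WHERE e.salary > d.avg_salary;
Because the averages are computed once instead of per outer row, this version often performs better on large tables.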
Hostman provides pre-configured and ready-to-use cloud SQL databases.
25 December 2024 · 6 min to read
MySQL

How To Use Triggers in MySQL

SQL triggers are a vital component of many database systems, allowing automated execution of specific actions when predefined events occur. Triggers act as responsive mechanisms within a database, ensuring consistency and enabling automation of repetitive tasks. These event-driven procedures are particularly effective for handling operations triggered by changes such as  INSERT,  UPDATE, or DELETE in a table. By using triggers, database administrators and developers can enforce rules, maintain logs, or even invoke complex processes with minimal manual intervention. Let’s begin by defining an example database for a small online store to understand how triggers work in practice: -- Let’s create a databse called SHOP ; CREATE DATABASE SHOP ; USE SHOP ; -- Now we create the Products table CREATE TABLE Products ( ProductID INT PRIMARY KEY, ProductName VARCHAR(100), Stock INT, Price DECIMAL(10, 2) ); -- Then the StockAudit table CREATE TABLE StockAudit ( AuditID INT AUTO_INCREMENT PRIMARY KEY, ProductID INT, ChangeType VARCHAR(10), QuantityChanged INT, ChangeTimestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP ); Classification of SQL Triggers SQL triggers can be classified based on their scope and timing. Row-level triggers are executed once for every row affected by a database operation, making them adequate for detailed tracking of data changes. For example, when updating inventory quantities for multiple products, a row-level trigger can record changes for each product individually. Conversely, statement-level triggers run once for an entire operation, regardless of how many rows are affected. These are useful for performing global checks or logging summary information. Triggers can also be categorized by their execution timing relative to the triggering event. Before triggers are executed prior to the event, often to validate or modify data before it is written to the database. After triggers execute after the event, making them ideal for tasks such as auditing or enforcing referential integrity. This is an example of a row-level AFTER INSERT trigger which logs new product additions: -- The DELIMITER command is used to change the statement delimiter from ; to // while defining the trigger DELIMITER // CREATE TRIGGER LogNewProduct AFTER INSERT ON Products FOR EACH ROW BEGIN INSERT INTO StockAudit (ProductID, ChangeType, QuantityChanged) VALUES (NEW.ProductID, 'ADD', NEW.Stock); END; // DELIMITER ; How Triggers Operate in a Database Triggers are defined by specifying the event they respond to, the table they act upon, and the SQL statements they execute. When a trigger’s event occurs, the database automatically invokes it, running the associated logic seamlessly. This behavior eliminates the necessity for external application code to maintain consistency. For instance, consider a scenario where we need to prevent negative stock levels in our inventory. We can achieve this with a BEFORE UPDATE trigger that validates the updated stock value: DELIMITER // -- Trigger to prevent negative stock values CREATE TRIGGER PreventNegativeStock BEFORE UPDATE ON Products FOR EACH ROW BEGIN -- Check if the new stock value is less than 0 IF NEW.Stock < 0 THEN -- Raise an error if the stock value is negative SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Stock cannot be negative'; END IF; END; // DELIMITER ; This guarantees that no changes violating the business rules are applied to the database. 
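To see the PreventNegativeStock trigger in action, you can attempt an update that would push stock below zero. The product row here is hypothetical and inserted only for the test:
-- Insert a sample product, then try to set its stock to a negative value
INSERT INTO Products VALUES (1, 'Keyboard', 10, 25.00);
UPDATE Products SET Stock = -5 WHERE ProductID = 1;
The UPDATE is rejected with the custom message defined in the trigger, something like: ERROR 1644 (45000): Stock cannot be negative.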
Practical Advantages of Using Triggers Triggers offer numerous advantages, such as enforcing business logic directly within the database layer. This ensures that data integrity is preserved across all applications accessing the database, reducing the need for repetitive coding. By centralizing critical logic, triggers simplify maintenance and enhance consistency. For example, a trigger can automate logging of stock adjustments, saving developers from implementing this functionality in multiple application layers. Consider this AFTER UPDATE trigger: DELIMITER // -- Trigger to log stock adjustments after an update on the Products table CREATE TRIGGER LogStockAdjustment AFTER UPDATE ON Products FOR EACH ROW BEGIN -- Insert a record into the StockAudit table with the product ID, change type, and quantity changed INSERT INTO StockAudit (ProductID, ChangeType, QuantityChanged) VALUES (OLD.ProductID, 'ADJUST', NEW.Stock - OLD.Stock); END; // DELIMITER ; This trigger automatically records every stock change, streamlining audit processes and ensuring compliance. Challenges and Considerations While triggers are powerful, they are not without challenges. Debugging triggers can be tricky since they operate at the database level and their effects may not be immediately visible. For example, a misconfigured trigger might inadvertently cause cascading changes or conflicts with other triggers, complicating issue resolution. Performance is another critical consideration. Triggers that are not well designed can slow down database operations, especially if they include resource-intensive logic or are triggered frequently. For instance, a trigger performing complex calculations on large datasets can bottleneck critical operations like order processing or stock updates. To mitigate these challenges, it is advisable to: Keep trigger logic concise and efficient. Use triggers sparingly and only for tasks best handled within the database. Test triggers extensively in controlled environments before deployment. Real-World Example: Cascading Triggers Cascading triggers can ensure data integrity across related tables. Consider a database with Orders and OrderDetails tables. When an order is deleted, it is essential to remove all associated details: DELIMITER // -- Trigger to cascade delete order details after a delete on the Orders table CREATE TRIGGER CascadeDeleteOrderDetails AFTER DELETE ON Orders FOR EACH ROW BEGIN -- Delete the corresponding records from the OrderDetails table DELETE FROM OrderDetails WHERE OrderID = OLD.OrderID; END; // DELIMITER ; This ensures that orphaned records are automatically removed, maintaining database consistency without manual intervention. However, cascading triggers require careful documentation to avoid unintended interactions. Optimizing Trigger Performance To prevent performance bottlenecks, triggers should handle minimal logic and avoid intensive operations. For tasks requiring significant processing, consider using scheduled jobs or batch processes instead. For example, instead of recalculating inventory levels on every update, a nightly job could summarize stock levels for reporting purposes. 
Here's a simplified trigger that avoids complex calculations: DELIMITER // -- Trigger to log stock changes after an update on the Products table CREATE TRIGGER SimpleStockLog AFTER UPDATE ON Products FOR EACH ROW BEGIN -- Check if the new stock value is different from the old stock value IF NEW.Stock <> OLD.Stock THEN -- Insert a record into the StockAudit table with the product ID, change type, and quantity changed INSERT INTO StockAudit (ProductID, ChangeType, QuantityChanged) VALUES (NEW.ProductID, 'UPDATE', NEW.Stock - OLD.Stock); END IF; END; // DELIMITER ; Conditional Logic and Business Rules Conditional logic within triggers enables dynamic enforcement of business rules. For example, a trigger can adjust discounts based on stock availability. Because MySQL does not allow a trigger to issue an UPDATE against the same table that fired it, the adjustment is made in a BEFORE UPDATE trigger by setting NEW.Discount directly (this assumes the Products table has a Discount column): DELIMITER // -- Trigger to adjust the discount based on stock levels before an update on the Products table CREATE TRIGGER AdjustDiscount BEFORE UPDATE ON Products FOR EACH ROW BEGIN -- Check if the new stock value is greater than 100 IF NEW.Stock > 100 THEN -- Set the discount to 10 if the stock is greater than 100 SET NEW.Discount = 10; ELSE -- Set the discount to 0 if the stock is 100 or less SET NEW.Discount = 0; END IF; END; // DELIMITER ; This dynamic adjustment ensures that promotions align with inventory levels. Conclusion SQL triggers are indispensable for automating tasks, enforcing rules, and maintaining data integrity within a database. While they offer significant benefits, their design and implementation require careful consideration to avoid performance issues and unintended consequences. By adhering to best practices, such as keeping triggers simple, testing thoroughly, and documenting dependencies, developers can harness their full potential. Properly implemented triggers can elevate database management, making operations more efficient and reliable. Hostman provides pre-configured and ready-to-use cloud databases, including cloud MySQL.
24 December 2024 · 7 min to read
Linux

How To Use SSHFS to Mount Remote File Systems Over SSH

SSHFS is a Linux tool for mounting remote folders over SSH. It allows users to manage network-shared files just like local ones. This tool is secure and efficient, providing seamless management of network shared folders across different environments. Required Setup Before you start, ensure you have: Root or sudo permissions. An external server with SSH enabled. An SSH service working on your local machine. Using SSHFS Step 1: Install SSHFS First, install SSHFS on your local system through the package manager. This tool installation on the other system is not needed. On Ubuntu/Debian: sudo apt install sshfs On CentOS/RHEL: sudo yum install sshfs Step 2: Create a Connection Point Set up a folder in your home or any desired location. This will act as the connection point for the network shared directory. sudo mkdir remote_dir Step 3: Attach a Directory Attach the linked folder to the local computer for seamless access. Use the below-given command to perform remote filesystem mounting: sudo sshfs -o [options] user@host:/remote_path /local_mount Substitute user with your real remote server’s username, host with the IP address or hostname of the server, and /remote_path with the directory path you want to connect. The [options] include: allow_other: Grants access to other local machine users for accessing the mounted folder. reconnect: Automatically re-establishes the connection in case it drops. IdentityFile=/loc/of/private_key: Specify the location where SSH private key is stored. idmap=user: Aligns the ID of remote user to the local user ID. default_permissions: Applies the remote file system's default permissions. To connect the linux home folder from 192.X.X.X to /home/ubuntu/remote_dir, utilize: sudo sshfs -o allow_other,default_permissions [email protected]:/home/linux/ /home/ubuntu/remote_dir/ To employ an SSH key found at /home/ubuntu/.ssh/id_rsa, use: sudo sshfs -o allow_other,default_permissions,IdentityFile=/home/ubuntu/.ssh/id_rsa [email protected]:/home/linux/ /home/ubuntu/remote_dir/ Type 'yes' to accept the server’s fingerprint and add it to known hosts. Enter the password for authentication. Use the key if set up. After verification, the folder will be linked to the local path. Step 4: Verification Create a new folder or file in the attached directory and verify its existence on the external server. If the folder or file appears in the external server's directory, the operation is successful. This ensures changes in your local directory are mirrored on the external system. If you experience the "Permission denied" error when trying to create or modify an existing file, follow these instructions to resolve it: Run the ls -l command to view the current permission of files or directory. Execute the chmod command to modify the permissions. sudo chmod 644 /path/to/file_or_directory If the file or directory is owned by another person, run the chown command to change the ownership. 
sudo chown your_username  /path/to/file_or_directory Step 5: Unmounting Once finished, simply unmount the folder: sudo umount /remote_directory Additional Options Below are some additional things you can also do:  Auto-Mounting at Boot To automatically connect remote filesystem at startup, utilize these steps: Step 1: Edit fstab Access the /etc/fstab file with elevated privileges: sudo nano /etc/fstab Step 2: Add Entry Append an entry to the end of the file: user@remote_host:/remote/directory /local/mount/point fuse.sshfs noauto,x-systemd.automount,_netdev,users,idmap=user,allow_other,reconnect 0 0 Example: [email protected]:/home/linux/ /home/ubuntu/remote_dir fuse.sshfs noauto,x-systemd.automount,_netdev,users,idmap=user,allow_other,reconnect 0 0 Where: noauto: Automatically stops the mount from happening at boot. x-systemd.automount: Uses systemd to dynamically connect the filesystem upon access. _netdev: Indicates that network access is required for the process. users: Grant non-root users the ability to mount and unmount. idmap=user: Associates external user with local one. allow_other: Permits another person from retrieving the connected directory. reconnect: Ensures automatic reconnection in case connection drops. Step 3: Create a Connection Point Make sure the local mount point directory exists, and if not, create it: sudo mkdir -p /home/ubuntu/remote_dir Step 4: Testing Test the connectivity: sudo mount -a This command initiates the connection of all filesystems listed in /etc/fstab. If no errors arise, the process is successful. Utilizing SSHFS Without SFTP SSHFS usually utilizes SFTP for transferring. To bypass this, run: sshfs -o sftp_server=/usr/lib/openssh/sftp-server user@host:/remote_directory ~/remote_mount Configuration File To save commonly used options, create a .sshfs_config file in your home location. This will allow you to store and easily apply your preferred settings. nano ~/.sshfs_config Add your options and connect via the configuration file. sshfs -F ~/.sshfs_config username@remote_host:/remote/directory ~/remote_mount Resolving Typical Problems Below are some common problems and solutions. Connectivity Problems To ensure seamless connectivity, make certain that SSH service is configured in the correct way on both your local and external systems. Also, check that the service port is open and that your firewall settings allow access, which is crucial for maintaining an uninterrupted connection. Performance Issues For better performance, use the -o direct_io and -o cache=yes options. sshfs -o direct_io -o cache=yes user@host:/remote_directory ~/remote_mount Connection Reset by Peer Cause: The external SSH server may be down, or there could be network instability. Solution: Verify that the SSH server is operational on the external machine. Ensure a stable network connection for consistent communication. Permission Denied Cause: The user lacks the required permissions to access the network-shared folder. Solution: Confirm that you have the correct permissions. Proper access rights are essential for successful connection. Running SSHFS on Windows To utilize SSHFS for Windows, follow these instructions: Download and set up SSHFS-Win from this location. Right-click on This PC and go with the option Map network drive from the context menu: Choose an available drive letter from the menu. In the Folder field, input the command as follows: \\sshfs\user@host\remote_path Click Finish to complete the mapping process. 
Enter your credentials and provide the required username and password (or SSH key, if configured). Once connected, access the directory via Windows Explorer. SSHFS-Win also supports additional connection prefixes for different use cases: sshfs: Integrates the remote home directory locally. sshfs.r: Links to the remote server's root directory. sshfs.k: Uses [local-user]/.ssh/id_rsa to map the remote home directory. sshfs.kr: Utilizes a locally stored SSH key to access the root directory. When finished, right-click the network drive and choose Disconnect to detach the directory. Conclusion SSHFS provides an efficient and secure method for mounting remote file systems over SSH. This guide helps you set up and use this tool to improve file management on Linux systems. Whether performing the SSHFS mount as root, avoiding SFTP, or utilizing configuration files, this tool offers flexibility and control for various scenarios.
24 December 2024 · 6 min to read
MongoDB

How to Use Indexes in MongoDB

A MongoDB index is a special data structure that stores the value of the indexed field together with a reference to the document's location. Indexing allows MongoDB to find and retrieve data quickly, reducing query execution time. Imagine looking up a word in a dictionary: instead of flipping through every page, you jump straight to the section for the word's first letter, greatly narrowing your search. MongoDB indexing is essential for fast document retrieval, sorting, and filtering. Without it, databases become slow, especially with large collections, and response times increase. Well-chosen MongoDB indexing strategies therefore play a crucial role in reducing application response times. How do MongoDB Indexes Work? MongoDB uses a B-tree index, organizing entries in sorted order for efficient insertion, deletion, and search. Creating an index adds a structure with document keys and links to the corresponding documents. Key Terminologies for MongoDB Indexing Index Key: The field or fields of a MongoDB document that the index is built on. Index Direction: The index direction, commonly referred to as index order, determines whether the field is sorted in ascending (1) or descending (-1) order. Prerequisites To proceed with this tutorial, you will need to: Have a MongoDB installation or a free MongoDB Atlas account. This tutorial assumes some familiarity with mongosh; you should know how to switch between databases and query collections. Download the test database: wget https://raw.githubusercontent.com/ozlerhakan/mongodb-json-files/refs/heads/master/datasets/companies.json Some parts of the tutorial use the Airbnb review database; import it as well. If you have a local installation of MongoDB, you can import these datasets with the mongoimport utility:
mongoimport --collection="companies" --file='companies.json' --db hostman-tutorial
mongoimport --collection="reviews" --file='reviews.csv' --type csv --db hostman-tutorial --headerline
How to manage MongoDB Indexes To show indexes in MongoDB: db.reviews.getIndexes() To create an index in MongoDB, call createIndex on the collection and pass the field name. db.reviews.createIndex({ reviewer_name: 1 }) To drop the index in MongoDB: db.reviews.dropIndex("reviewer_name_1") MongoDB Indexing Strategies Whether an index on a single field or on a combination of fields is more efficient depends on the scenario, as does the kind of information the field stores. Here is a list of different techniques. Single Field Index A single field index is useful when MongoDB frequently needs to query data by one particular field. It is not a viable option if you need to support searching across multiple fields. In the reviews dataset, it might be interesting to list only the properties reviewed by a particular person. db.reviews.createIndex({ reviewer_name: 1 }) To verify that creating the index has benefitted the query: db.reviews.find({ reviewer_name: "Kristen" }).explain("executionStats") The executionTimeMillis drops drastically from 31 ms to 1 ms, and totalDocsExamined falls from 24,752 to just 47, thanks to MongoDB indexing. To retrieve the comments of multiple reviewers, use the MongoDB $in operator. db.reviews.find( { reviewer_name: { $in: ["Christopher", "Altay", "Kristen"] } }, { reviewer_name: 1, comments: 1 } ) Compound Index What if a database frequently needs to query by three different fields? That's where a compound index comes to the rescue.
db.companies.createIndex({ category_code: 1, number_of_employees: 1, founded_year: 1 }) Now, let's verify how the compound index improves our query using explain('executionStats'). db.companies.find({ category_code: "enterprise", number_of_employees: { $gte: 500, $lte: 1000 }, founded_year: { $gte: 1990 } }).explain("executionStats") Remember, if you have hundreds of compound indexes, they can cause a significant drop in the write performance of the database because of their high resource usage. Multikey Index What if the MongoDB field that needs to be indexed is an array? For example, a quick database inspection with the following command reveals that relationships is an array field. db.companies.find().limit(1) The multikey index really shines here. It would be interesting to filter for the people who still hold their positions, so let's create a multikey index on the is_past field. db.companies.createIndex({ "relationships.is_past": 1 }) Text Index For full-text search in MongoDB, use a text index, like in the Airbnb review database sample. db.reviews.createIndex({ comments: "text" }) [ { v: 2, key: { _id: 1 }, name: '_id_' }, { v: 2, key: { _fts: 'text', _ftsx: 1 }, name: 'comments_text', weights: { comments: 1 }, default_language: 'english', language_override: 'language', textIndexVersion: 3 } ] Now, let's search for a large-bedroom apartment. db.reviews.find({ $text: { $search: "large bedroom" } }).limit(20) If you ever need to implement sorting, MongoDB provides the sort function and textScore metadata for searching. db.reviews.find({ $text: { $search: "large bedroom" } }) .sort({ score: { $meta: "textScore" } }) While creating a text index in MongoDB, if the key is an array, it will index and search across each element of the array. Hash Index MongoDB hashes the contents of the indexed field and stores the hash in the index. If you're using the MongoDB sharding feature, a hash index can make it more performant. db.users.createIndex({ password: "hashed" }) db.users.find({ password: "very-long-hash" }) While the hash index is great, there are a few limitations: for instance, you can't use range queries like $gte, $lte, $gt. Sparse Index Whenever you come across a lot of null or missing values in a MongoDB collection, a sparse index is worth considering. Sparse indexes are easier to maintain and can significantly improve query performance. Let's create an index for the documents that have a phone number field. db.customers.createIndex({ phone: 1 }, { sparse: true }) Consider that out of 1 million customers only 20% provided their phone number. The sparse index will then only cover those 0.2 million records. Isn't this great? Mongoose Indexing Mongoose is to MongoDB in Node.js roughly what SQLAlchemy is to relational databases in Python: it makes working with MongoDB databases a lot easier in Node.js applications. Here are two different approaches. Index with Mongoose Schema: A Mongoose schema determines the structure of a collection, and Mongoose provides an index method to create a new index on the desired schema. Every Mongoose schema is tied to a model. const mongoose = require('mongoose'); const reviewSchema = new mongoose.Schema({ property: String, comment: String }); reviewSchema.index({ comment: 'text' }); const Review = mongoose.model('Review', reviewSchema); Index with MongoDB Collection: The second strategy is to retrieve the collection, the Mongoose way, and then set up an index.
Index with the MongoDB Collection: The second strategy is to retrieve the collection the Mongoose way and then set up an index on it directly:

const mongoose = require('mongoose');

mongoose.connect('mongodb://localhost:27017/hostman-mongodb-tutorial', {
  useNewUrlParser: true,
  useUnifiedTopology: true
});

mongoose.connection.once('open', function() {
  const reviewsCollection = mongoose.connection.collection('reviews');
  reviewsCollection.createIndex({ email: 1 }, (err, result) => {
    if (err) {
      console.error('Error creating index:', err);
    } else {
      console.log('Index created successfully:', result);
    }
  });
});

Exploring MongoDB's Internal Index Intersection for Complex Queries

Index intersection is a technique in which MongoDB combines multiple indexes to satisfy a complex query. The benefit is improved read performance without the long-term growth in index size that a dedicated compound index would add. Consider the scenario:

db.reviews.createIndex({ listing_id: 1 })
db.reviews.createIndex({ reviewer_id: 1 })

Perform the following query:

db.reviews.find({ listing_id: 2992450, reviewer_id: 16827297 })

With these two indexes in place, the query can use index intersection, but only if MongoDB's query optimizer finds it more efficient than the alternatives.

MongoDB Indexing Tradeoffs

Indexing undoubtedly improves application response time, but don't overdo it: too many indexes become hard to maintain as data grows. Here are a few pointers:

Write Performance: Indexing requires additional disk I/O and CPU resources. For every insert, update, and delete, MongoDB has to perform extra work to keep each index current. Think carefully!
Data Consistency: Index maintenance is another critical tradeoff. Indexes must be maintained regularly to ensure data consistency and prevent corruption.
Index Size: Larger indexes can provide faster query performance but require more disk space.

Conclusion

MongoDB indexes are a great way to improve query times for document retrieval, and they are crucial for high-availability setups. Understanding how indexing works, its tradeoffs, and the challenges it can bring for a maintenance team will help you get the most out of it. At Hostman, you can deploy a MongoDB cloud database in a few seconds and start working in no time.
24 December 2024 · 8 min to read
Ubuntu

How To Add Swap Space on Ubuntu 22.04

Managing resources efficiently is vital for maintaining the performance and stability of the OS. This article outlines the methods of adding swap space on Ubuntu 22.04 to help users boost their platform's capacity to carry out memory-intensive activities. Swap space acts as a virtual extension of physical memory (RAM), allowing the system to offload inactive processes when RAM is fully utilized. While Ubuntu 22.04 is highly efficient in memory management, adding or increasing the paging area can be a practical solution for environments with limited RAM or when running resource-heavy applications. This article provides a step-by-step approach to creating, configuring, and optimizing swap space, ensuring a smooth and efficient setup tailored to your needs.

Prerequisites

Before adding swap space on Ubuntu 22.04, make sure the following prerequisites are satisfied to avoid potential issues:

Administrative Privileges: You must have root or sudo access to run the commands that create and configure swap space.

Available Disk Space: Confirm that the instance has enough free disk storage to allocate for the desired swap size. Check it with:

df -h

Current Swap Status: Determine whether swap space already exists and decide whether to expand it. Verify with:

sudo swapon --show

Performance Needs Assessment: Determine the required swap capacity according to your current resources and workload. A common rule is to have at least the same amount as the RAM size, but this may vary depending on your use case.

What is Swap

A crucial part of Linux memory management, swap space is intended to improve system performance and stability by extending the system's usable memory beyond the physical RAM. When RAM is fully utilized, the OS frees up memory for running processes by offloading idle or seldom-used data to the paging area. This procedure lets the system handle resource-intensive tasks more effectively and keeps applications from crashing because of memory shortages.

Depending on your needs, swap can be implemented in Ubuntu as a file or as a separate disk partition. Swap is useful, but it cannot take the place of enough RAM: because disk storage has slower read and write rates than physical memory, over-reliance on swap can result in performance loss. Optimizing system performance requires understanding how swap works and configuring it properly, especially for tasks like running applications on hosts with limited RAM, operating virtual machines, or compiling huge codebases.

Swap Advantages

Swap space is an important part of Linux memory management because it provides a number of benefits:

Prevents System Crashes
Supports Memory-Intensive Applications
Enhances Multitasking: Balances memory use by offloading less important operations, giving smoother multitasking without sacrificing speed on systems that run numerous processes at once.
Provides Flexibility: Swap space can be added or resized dynamically, which makes it easy to adapt to changing system requirements without repartitioning the disk.
Extends Uptime: It is a short-term fix that increases stability and prolongs uptime under high load when adding physical memory is not immediately practical.
Facilitates Hibernation: Swap is crucial for systems that use the hibernate feature, since it stores the contents of RAM when the system is powered off, enabling a smooth restart.
Supports Low-Memory Systems: On devices with little physical memory, swap guarantees that critical operations keep running even when memory is tight.

Swap is essential for increasing overall system resilience and flexibility, especially in resource-constrained contexts, even though it cannot replace physical RAM and should not be over-relied upon.
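A quick, optional way to see whether a system is actually leaning on swap is to watch the si and so columns of vmstat (pages swapped in and out per second); consistently non-zero values under normal load usually mean the machine needs more RAM rather than more swap:

free -h
vmstat 1 5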
Swap Disadvantages

Although swap space has several benefits for memory management, there are a few significant drawbacks to take into account when setting it up:

Slower Performance Compared to RAM
Increased Disk Wear
Latency in Resource-Intensive Tasks: When the system relies heavily on swap, tasks that require high memory bandwidth, such as video editing or large-scale data analysis, may experience significant delays due to slower data transfer rates.
Limited Effectiveness in Low-RAM Scenarios: While swap can extend memory, it is not a substitute for adequate RAM. On systems with extremely low physical memory, relying on swap may not be enough to handle modern applications efficiently.
Hibernation Dependency: If the swap space is insufficient, hibernation may fail, as it requires swap to store the contents of RAM. Misconfigured swap sizes can lead to system errors during hibernation attempts.
Additional Storage Allocation: Allocating swap space reduces the storage available for other purposes. For systems with limited disk capacity, dedicating a portion to swap may not be feasible.
Complexity in Configuration: Optimizing swappiness and related settings requires careful planning and monitoring. Poor configuration may lead to either underutilization or excessive reliance, both of which hurt system performance.

How to Add Swap Space by Creating a Swap File

Creating a swap file on Ubuntu 22.04 is a simple procedure that can help boost system performance, particularly on systems with low RAM. Here is a thorough, step-by-step guide.

Make sure no swap space is already enabled before making a new file:

sudo swapon --show

Based on the RAM capacity and usage needs, choose the swap file's size. A typical rule of thumb is:

For systems with less than 2 GB of RAM, swap size equals RAM size × 2.
For systems with more than 2 GB of RAM, swap size equals RAM size.

(See the short script sketch at the end of this section for one way to compute this automatically.)

Choose the location of the file, which is usually the root directory, and adjust the size to your needs. To create the file, use the fallocate command:

sudo fallocate -l 4G /swapfile

If fallocate is unavailable or returns an error, use the dd command instead:

sudo dd if=/dev/zero of=/swapfile bs=1M count=4096

bs=1M: Sets the block size to 1 megabyte.
count=4096: Creates a 4 GB file (4096 × 1 MB).

Set the permissions so that only root can read or write the file, preventing unauthorized access:

sudo chmod 600 /swapfile

Format the file as swap space and then activate it:

sudo mkswap /swapfile
sudo swapon /swapfile

To verify that the swap has been added, run:

sudo swapon --show
free -h
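If you would rather compute the swap size from the rule of thumb above than hard-code 4G, here is a minimal shell sketch (an illustration only; round the result or cap it to fit your available disk space):

ram_mb=$(free -m | awk '/^Mem:/ {print $2}')
if [ "$ram_mb" -lt 2048 ]; then
  swap_mb=$((ram_mb * 2))    # less than 2 GB of RAM: swap = 2 x RAM
else
  swap_mb=$ram_mb            # 2 GB of RAM or more: swap = RAM
fi
sudo fallocate -l "${swap_mb}M" /swapfile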
Add the swap file to the /etc/fstab file to guarantee it stays active after a reboot. Back up the fstab file before editing:

sudo cp /etc/fstab /etc/fstab.bak

Add the swap record to fstab:

echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

Validate with:

cat /etc/fstab

Configuring Swappiness (Optional)

Swappiness controls how aggressively the kernel uses swap space. The default value is 60; swap usage rises with higher values and falls with lower values. Check the current swappiness value:

cat /proc/sys/vm/swappiness

Use the sysctl utility to modify the swappiness temporarily. The following command lowers the value from 60 to 40:

sudo sysctl vm.swappiness=40

To make the change permanent, run these commands:

echo 'vm.swappiness=40' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

Modify Cache Pressure (Optional)

Cache pressure regulates the kernel's tendency to reclaim memory used for caching; lower values make it less aggressive. For example, to set vfs_cache_pressure to 40, run:

echo 'vm.vfs_cache_pressure=40' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

Finally, verify that the swap file is operational and set up properly:

sudo swapon --show
free -h

Increasing Swap Space with a Swap File

To resize the system's swap file, take the following steps.

Temporarily disable the swap file:

sudo swapoff /swapfile

Change the swap file to the preferred size, replacing 8G with your desired new size.

Using the fallocate command:

sudo fallocate -l 8G /swapfile

Using the dd command:

sudo dd if=/dev/zero of=/swapfile bs=1M count=8192

Reinitialize the swap file for the new size:

sudo mkswap /swapfile

Activate the resized swap file:

sudo swapon /swapfile

Validate that the swap space has been updated from 4 GB to 8 GB:

sudo swapon --show
free -h

Conclusion

To sum up, creating a swap file on Ubuntu is a simple procedure that can greatly improve system stability, especially when working with memory-demanding applications or when physical RAM is limited. Without the need for intricate partitioning, users can quickly increase their system's virtual memory by following the instructions to create, format, and activate a swap file. The swap space will also remain active across reboots if the swap file is made permanent via the /etc/fstab file. Memory management can be further tuned by adjusting variables like swappiness. All things considered, a swap file is a practical and adaptable way to enhance Ubuntu system efficiency and stability. You can install Ubuntu on a VPS on Hostman.
23 December 2024 · 8 min to read
Minecraft

How to Create Your Own Minecraft Gaming Server

Minecraft is a popular game that allows players to create and control their own gaming worlds. Setting up a server enables you to customize the environment, manage player limits, and configure gameplay settings to suit your preferences. This guide covers setting up a Minecraft Java server on a Linux system.

Requirements

The official wiki recommends the following specs for running a Minecraft server.

Minimum Specifications:

Supported OS: Windows 7 or newer, macOS Mojave (version 10.14.5 or later), or any Linux distribution.
Processor: Intel Core i3-3210, AMD A8-7600, Apple M1 chip, or an equivalent processor (may function on older first-gen i3 processors but isn't officially recommended).
Graphics Card: Intel HD Graphics, AMD Radeon R5, or comparable integrated GPUs.
Memory: At least 2 GB of RAM.

Recommended Specifications:

Supported OS: Windows 10 or newer, macOS Mojave (version 10.14.5 or later), or Linux.
Processor: Intel Core i5-4690, AMD A10-7800, Apple M1 chip, or a similar CPU.
Graphics Card: Dedicated GPUs like the NVIDIA GeForce 700 series or AMD Radeon RX 200 series.
Memory: 4 GB of RAM or more for better performance.

For this guide, we'll use a Hostman Ubuntu cloud machine configured with the following specifications: a dual-core 3 GHz CPU, 4 GB of RAM, 80 GB of NVMe storage, and a 200 Mbps bandwidth connection.

Preparing the Server

First, we'll install tools like screen to run the executable in the background, allowing us to continue using the terminal. We'll also set up a non-root user to run the server securely. Additional security measures will be covered later. Let's proceed to the setup.

Updating and Installing Tools

Let's begin by updating the server and installing essential tools. Update the package list and upgrade existing packages:

sudo apt update && sudo apt upgrade -y

Next, install essential tools like net-tools and screen:

sudo apt install net-tools screen -y

net-tools, which includes netstat, helps inspect network connections, while screen lets the Minecraft server run in the background. Install the latest Java Development Kit to proceed:

sudo apt install openjdk-21-jdk

With these steps done, we can move on to the setup.

Create a User

First, add a new user called minecraft (feel free to choose any name you like):

sudo useradd -r -U -d /usr/local/minecraft/server/ -s /sbin/nologin minecraft

The -s option in the useradd command specifies the login shell for the new user. By setting it to /sbin/nologin, we prevent the user from logging in interactively. This ensures the account is only used to run the Minecraft server and reduces the attack surface.

Create a Directory for the Executable Files

Next, create the directory structure to store the Minecraft server files:

sudo mkdir -p /usr/local/minecraft/server/Java

The -p flag ensures that any missing parent directories are created automatically. This prepares a dedicated location to house all installation-related files.

Assign Directory Ownership

Assign ownership of the directory to the minecraft user and group:

sudo chown -R minecraft: /usr/local/minecraft/server/

The -R flag applies ownership recursively to all files and subdirectories, so the minecraft user can manage all server-related files without access issues.
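Before downloading anything, a quick optional check confirms that the user and directory look the way we expect (the exact output will vary slightly by system):

id minecraft
ls -ld /usr/local/minecraft/server/Java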
Download and Prepare the Minecraft Server

We start by switching to the minecraft user with an interactive shell to perform the setup tasks securely:

sudo su - minecraft -s /bin/bash

Next, navigate to the server directory where all the Minecraft files will be stored:

cd /usr/local/minecraft/server/Java

Finally, download the Minecraft server .jar file from Mojang's official servers:

wget https://piston-data.mojang.com/v1/objects/4707d00eb834b446575d89a61a11b5d548d8c001/server.jar

This places the Minecraft executable in the correct location, ready for further configuration. We recommend updating the URL if a newer server version is available.

Run the Minecraft Server

Start the Minecraft server using the following command:

java -Xmx1024M -Xms1024M -jar server.jar nogui

This command allocates 1 GB of memory to the server (-Xmx for the maximum and -Xms for the initial allocation). The nogui option disables the graphical interface, making it more efficient for a server environment by reducing resource usage.

When the server is started for the first time, it exits with a message requiring acceptance of the Minecraft End-user License Agreement (EULA). During this process, several files are created in the server directory:

ls
eula.txt  libraries  logs  server.jar  server.properties

To accept the EULA, update the eula.txt file by replacing false with TRUE using the following command:

sed -i 's/\bfalse\b/TRUE/' eula.txt

This edits the file in place, eliminating the need to open an editor like Vim or Nano. With the EULA accepted, the server can now be launched. Use the screen command to run the server in the background:

screen -S mc_Java_server -dm java -Xmx1024M -Xms1024M -jar server.jar nogui

This creates a session named mc_Java_server, which allows detaching from the terminal while keeping the server active. Great! The Minecraft service is now running and accessible at your IP address on port 25565.

Testing the Server

To connect in multiplayer mode, open Minecraft and add the new address. Once your client has established a connection, you can join the server.

Use online tools like mcsrvstat.us to check whether the server is online. They also display details such as player count, Minecraft version, and debug information.

Now that the Minecraft server is set up, let's improve security and resource management.

Best Tips for Securing and Optimizing Your Installation

With the Minecraft server set up, focus on securing it and optimizing resource management. Implementing these tips will help ensure smooth performance while protecting against potential vulnerabilities.

Secure with Firewall

Set up a firewall to control traffic and block unauthorized access. Allow only necessary ports, such as 25565 for Minecraft, using tools like ufw or iptables:

sudo ufw allow ssh
sudo ufw allow 25565
sudo ufw enable

Block all other incoming traffic unless it is explicitly required for other services. A firewall protects the server from external threats by ensuring that only expected traffic can reach it.

Limit Access with a Whitelist

Activate the server whitelist to restrict access to specific players. In server.properties, set:

white-list=true

Then add trusted players from the server console:

whitelist add <player_username>

This ensures that only approved players can access your machine, reducing the risk of griefing or malicious activities.
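If you prefer not to edit server.properties by hand, the same whitelist settings can be applied from the running console. Reattach to the screen session created earlier (detach again with Ctrl+A, then D):

screen -r mc_Java_server
# at the server console:
whitelist on
whitelist reload
whitelist list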
Use a Dedicated User

Always run the Minecraft server under a non-root user account, like the minecraft user we created earlier. This limits the server process's permissions, ensuring it cannot harm the underlying system even if it is compromised.

Specify and Monitor RAM Usage

Optimize memory usage to match the server's workload. Allocate a specific amount of RAM to the server using the -Xmx and -Xms flags in the Java command. For example, to allocate 2 GB of RAM:

java -Xmx2048M -Xms2048M -jar server.jar nogui

You can monitor RAM usage with tools like htop to ensure the server runs smoothly without exhausting system resources.

Limit Player Slots and Connections

Restrict the maximum number of players in server.properties to match your server's capacity:

max-players=10

Add Plugins

Plugins are a great way to expand your Minecraft server's functionality with features like anti-griefing tools, economy systems, or mini-games. Here are some recommended plugins:

WorldGuard: Protect specific areas from unwanted changes.
CoreProtect: Log and roll back player actions.
NoCheatPlus: Detect and prevent cheats or exploits.

Refer to our tutorial for plugin installation, from downloading plugins to placing them in the correct directory. These enhancements will improve security and the user experience.

Conclusion

This article covered the steps to install a Minecraft gaming server on a remote Ubuntu machine, along with best practices for maintaining it and providing a fantastic player experience. Consider our ready-to-run Minecraft servers available at Hostman Marketplace for a hassle-free setup.
23 December 2024 · 7 min to read
PostgreSQL

How to Set Up Physical Streaming Replication with PostgreSQL on Ubuntu

Streaming replication is a common method for horizontally scaling relational databases. It involves one or more copies of the same database cluster operating on different devices. The primary database cluster handles both read and write operations, while the replicas are read-only. We can also use streaming replication to provide high availability: if the primary database cluster or server fails unexpectedly, the replicas can continue handling read operations, or one of them can be promoted to become the new primary cluster.

PostgreSQL, a popular relational database, supports both logical and physical replication:

Logical replication streams high-level changes from the primary cluster to the replicas, allowing you to replicate changes to a single database or table.
Physical replication streams changes from the Write-Ahead Log (WAL) files, copying the entire cluster's state rather than specific areas. This method ensures that all changes to the primary cluster are replicated.

This guide will help you set up physical streaming replication with PostgreSQL on Ubuntu 22.04 across two separate devices, each running a PostgreSQL 17 cluster. One device will host the primary cluster, and the other will serve as the replica. Hostman offers a cloud PostgreSQL for your projects.

Prerequisites

To follow this tutorial, you will need:

Two separate devices running Ubuntu 22.04: one will act as the primary server and the other as the replica.
Firewall settings that allow HTTP/HTTPS traffic and traffic on port 5432 (the default port for PostgreSQL 17).
PostgreSQL 17 installed and running on both servers.

Step 1: Configuring the Primary Database to Accept Connections

The first step is to configure the primary database to allow connections from the replica(s). By default, PostgreSQL only accepts connections from localhost (127.0.0.1). To change this behavior, you need to modify the listen_addresses configuration parameter on the primary database.

On the primary server, open the PostgreSQL configuration file postgresql.conf, located in the /etc/postgresql/17/main/ directory:

sudo nano /etc/postgresql/17/main/postgresql.conf

Once the file is open, find the listen_addresses variable and change its value from localhost to the IP address of the primary server. Remove the # symbol at the beginning of the line as well:

listen_addresses = 'your_primary_IP_address'

Save the changes and exit the file. The primary database is now ready to accept connections from other devices using the specified IP address. Next, you need to create a user role with the appropriate permissions that the replica will use to connect to the primary database.

Step 2: Creating a Replication Role with Permissions

Next, create a dedicated role in the primary database with permissions for replication. The replica will use this role to connect to the primary database. Creating a specific role for replication is crucial for security, as the replica will only have permission to copy data, not modify it.

Connect to the database cluster: Log in as the postgres user by running:

sudo -u postgres psql

Create a replication role: Use the CREATE ROLE command to set up a role for replication:

CREATE ROLE test WITH REPLICATION PASSWORD 'testpassword' LOGIN;

This will output:

CREATE ROLE

We have now created the test role with the password testpassword, which has replication permissions for the database cluster.
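As an optional sanity check before moving on, you can confirm that the role exists and carries the replication attribute:

SELECT rolname, rolreplication FROM pg_roles WHERE rolname = 'test';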
Configure access for replication: PostgreSQL has a special pseudo-database, replication, which replicas use to connect. To allow access, edit the pg_hba.conf file. Exit the PostgreSQL prompt by typing:

\q

Then open the configuration file using nano or your preferred editor:

sudo nano /etc/postgresql/17/main/pg_hba.conf

Add a rule for the replica: Append the following line to the end of the pg_hba.conf file:

host  replication   test  your-replica-IP/32  md5

host: Enables non-local connections over plain or SSL-encrypted TCP/IP sockets.
replication: Specifies the special pseudo-database used for replication.
test: Refers to the previously created replication role.
your-replica-IP/32: Restricts access to the specific IP address of your replica.
md5: Sets the authentication method to password-based.

If you plan to create multiple replicas, repeat this step for each additional replica, specifying its IP address.

Restart the primary database cluster: To apply these changes, restart the primary cluster:

sudo systemctl restart postgresql@17-main

If the primary cluster restarts successfully, it is properly configured and ready to stream data once the replica connects. Next, proceed with configuring the replica cluster.

Step 3: Backing Up the Primary Cluster to the Replica

During the setup of physical replication with PostgreSQL, you need to take a physical backup of the primary cluster's data directory into the replica's data directory. Before doing this, you must clear the replica's data directory of all existing files. On Ubuntu, the default data directory for PostgreSQL is /var/lib/postgresql/17/main/. To find the data directory, you can run the following command on the replica database:

SHOW data_directory;

Once you locate the data directory, run the following command to clear all files:

sudo -u postgres rm -r /var/lib/postgresql/17/main/*

Since the files in the default data directory are owned by the postgres user, you need to run the command as postgres using sudo -u postgres.

Note: If a file in the directory is corrupted and the command does not work (this is very rare), you can remove the main directory entirely and recreate it with the correct permissions:

sudo -u postgres rm -r /var/lib/postgresql/17/main
sudo -u postgres mkdir /var/lib/postgresql/17/main
sudo -u postgres chmod 700 /var/lib/postgresql/17/main

Now that the replica's data directory is cleared, you can physically back up the primary server's data files. PostgreSQL provides a useful utility called pg_basebackup to simplify this process. It even allows you to put the server into standby mode using the -R option.

Run the following pg_basebackup command on the replica:

sudo -u postgres pg_basebackup -h primary-ip-addr -p 5432 -U test -D /var/lib/postgresql/17/main/ -Fp -Xs -R

-h: Specifies the remote host. Enter the IP address of your primary server.
-p: Specifies the port number for connecting to the primary server. By default, PostgreSQL uses port 5432.
-U: Specifies the user role to connect to the primary cluster with (the role created in the previous step).
-D: Specifies the backup's destination directory, which is your replica's cleared data directory.
-Fp: Ensures the backup is output in plain format (instead of a tar file).
-Xs: Streams the contents of the WAL files during the backup from the primary database.
-R: Creates a file named standby.signal in the replica's data directory, signaling that the replica should operate in standby mode. It also adds the connection information for the primary server to the postgresql.auto.conf file. This configuration file is read each time the standard postgresql.conf is read, but the values in the .auto.conf file override those in the regular configuration file.
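For reference, the connection setting that -R writes into postgresql.auto.conf on the replica looks roughly like this (a sketch only; pg_basebackup fills in the exact connection string it used, and the options vary slightly between PostgreSQL versions):

-- postgresql.auto.conf (managed by pg_basebackup / ALTER SYSTEM)
-- primary_conninfo = 'host=primary-ip-addr port=5432 user=test ...'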
When you run this command, you will be prompted to enter the password for the replication role created earlier. The time required to copy all the files depends on the size of your primary database cluster.

At this point, your replica has all the data files from the primary server that it needs to begin replication. Next, you need to configure the replica to start in standby mode and proceed with replication.

Step 4: Restarting and Testing the Clusters

After successfully creating a backup of the primary cluster's data files on the replica, you need to restart the replica database cluster and switch it to standby mode. To restart the replica, run the following command:

sudo systemctl restart postgresql@17-main

Once the replica has restarted in standby mode, it should automatically connect to the primary database cluster on the other machine. To check whether the replica is connected and receiving the stream from the primary server, connect to the primary database cluster:

sudo -u postgres psql

Then query the pg_stat_replication view on the primary cluster:

SELECT client_addr, state FROM pg_stat_replication;

The output should look something like this:

  client_addr    |   state
-----------------+-----------
 your_replica_IP | streaming

If you see this result, streaming replication from the primary server to the replica is set up correctly.

Conclusion

You now have two Ubuntu 22.04 servers with PostgreSQL 17 clusters, and streaming replication is configured between them. Any changes made in the primary database cluster will be reflected in the replica cluster. You can add more replicas if your databases need to handle higher traffic. To learn more about physical streaming replication, including how to configure synchronous replication to prevent the loss of critical data, refer to the official PostgreSQL documentation.
20 December 2024 · 8 min to read
PostgreSQL

Managing PostgreSQL Extensions

PostgreSQL offers a vast array of extensions designed to simplify solving complex and non-standard tasks. They allow you to enhance the capabilities of your database management system, bypass certain limitations, and streamline analysts' work. There are two types of extensions:

Bundled extensions: These come with PostgreSQL (e.g., in the contrib package).
Custom extensions: These are created by users based on their specific needs. Once downloaded and installed, custom functions work just like standard ones.

Extensions enable the handling of temporal, spatial, and other data types. Remember: if you can't find a ready-made solution, you can always create one yourself.

If you're using a cloud PostgreSQL database on Hostman, you can easily install several popular extensions directly from the control panel. Simply open your database page, navigate to Configuration → Modify, and enable the extensions you need.

Installation and Management of Extensions

Since the contents of the contrib package differ between PostgreSQL versions, start by checking which functions are available in your version.

Viewing the List of Standard Extensions

To list the extensions available for installation, the default version, the version of the installed extension, and a brief description of their functions, run the following command:

SELECT * FROM pg_available_extensions;

Note: Some features are only accessible with a superuser account (postgres) or an account with installation privileges.

Installing Extensions

Let's break down the command used to install any PostgreSQL extension:

CREATE EXTENSION IF NOT EXISTS extension_name
WITH SCHEMA schema_name
VERSION version
CASCADE;

The command includes optional but helpful parameters that you can use during installation:

IF NOT EXISTS: Checks whether an extension with the specified name already exists.
WITH SCHEMA: Specifies the schema where the extension will be installed. If not provided, it will be installed in the current schema.
VERSION: Specifies the version to install. If not specified, the latest version will be installed.
CASCADE: Automatically installs all additional extensions required for proper functioning.

Important: Some extensions also require entries in the PostgreSQL configuration file and a server restart after running this command.

Updating Extensions

A new version is almost always better than the old one, right? Developers refine the code, fix bugs, and introduce new features, so updating extensions is important and beneficial. To upgrade an extension to a specific version, use the following command:

ALTER EXTENSION extension_name
UPDATE TO version;

If we omit the version parameter, the latest version will be installed.

Removing Extensions

Sometimes an extension is no longer needed, and you might want to free up memory for better use. You can remove an extension with the following command:

DROP EXTENSION IF EXISTS extension_name
CASCADE | RESTRICT;

Additional parameters:

IF EXISTS: Checks whether the extension exists before attempting to remove it.
CASCADE: Automatically removes all objects that depend on the extension.
RESTRICT: Prevents removal if other objects depend on the extension.

Top Most Useful Extensions for PostgreSQL

pg_stat_statements

The pg_stat_statements extension helps identify which queries place a heavy load on the system, how often they are executed, and how long they take. This information is crucial for evaluating database performance, identifying bottlenecks, and optimizing processes. Given the large size of many databases, query execution time must be efficient, and this extension provides the metrics to assess and improve performance.
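Note that pg_stat_statements has to be preloaded before CREATE EXTENSION will do anything useful. On a self-managed server the usual steps look roughly like this (on Hostman's managed PostgreSQL you would enable it from the control panel instead):

-- in postgresql.conf, then restart the server:
--   shared_preload_libraries = 'pg_stat_statements'

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- confirm it is collecting data:
SELECT count(*) FROM pg_stat_statements;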
Example Usage

The following query shows the SQL statement (query), its total execution time in minutes (total_min), its average execution time in milliseconds (avg_ms), and the number of times it was called (calls):

SELECT query,
       (total_exec_time / 1000 / 60) AS total_min,
       mean_exec_time AS avg_ms,
       calls
FROM pg_stat_statements
ORDER BY total_min DESC
LIMIT 10;

pgcrypto

If you are interested in protecting data in PostgreSQL, the pgcrypto extension is essential. It offers cryptographic functions for securing data such as passwords. By default, it supports the following hashing algorithms: md5, sha1, sha224, sha256, sha384, and sha512. You can expand the list of supported algorithms by configuring the extension to use OpenSSL.

btree_gist

You need the btree_gist extension to combine different types of PostgreSQL indexes (B-tree and GiST). It is especially useful for databases containing spatial data, such as city or store coordinates.

B-tree Index: The default index type in PostgreSQL. It can index any sortable data, such as numbers and dates. B-tree is efficient and versatile but unsuitable for unsortable data.
GiST Index: Handles any type of data, including geospatial data.

Key feature: In addition to the typical search operators of B-tree indexes, btree_gist also supports the PostgreSQL "not equal" operator (<>).

timescaledb

Time-series data tracks changes over time, such as application requests, sales volumes, or weather temperatures. While specialized databases like InfluxDB or ClickHouse are designed for time-series data, they may not handle other data types effectively. In such cases, the timescaledb extension offers a convenient alternative: it enables the storage and processing of time-series data directly in PostgreSQL. To use it:

Download the appropriate version from the official website.
Follow the installation instructions.
Add the extension with the CREATE EXTENSION command.

hstore

The hstore extension allows PostgreSQL to store key-value pairs in a single data field, similar to the mapping structures found in object-oriented programming languages like Python. With hstore, you can store grouped data without adding extra database columns. For example, in a bookstore database, a single column could hold attributes such as the number of pages, genre, and illustration details for each book.

Example usage. Create a table with an hstore column:

CREATE TABLE books (
    id serial PRIMARY KEY,
    name varchar,
    attributes hstore
);

Insert data into the table:

INSERT INTO books (name, attributes) VALUES (
    'Harry Potter and the Philosopher''s Stone',
    'author => "J. K. Rowling", pages => 223, series => "Harry Potter"'
);

Query books in the "Harry Potter" series:

SELECT name, attributes->'author' AS author
FROM books
WHERE attributes->'series' = 'Harry Potter';

To display all attributes of the matching books:

SELECT * FROM books WHERE attributes->'series' = 'Harry Potter';

Conclusion

PostgreSQL extensions significantly enhance database capabilities, enabling efficient handling of time-series data, encryption, indexing, key-value storage, and performance analysis. We went over a few popular tools like timescaledb, pgcrypto, and hstore; however, PostgreSQL supports many more extensions, offering solutions for a wide variety of use cases.
20 December 2024 · 6 min to read

Tailored cloud server
solutions for every need

General-purpose cloud servers for web hosting

Ideal for websites, content management systems, and basic business applications, cloud web servers provide balanced performance with reliable uptime. These servers are designed to handle moderate traffic while keeping your website or application responsive.

High-performance servers for cloud computing


For businesses needing powerful resources for tasks like AI, machine learning, or data analysis, our high-performance cloud servers are built to process large datasets efficiently. Equipped with 3.3 GHz processors and high-speed NVMe storage, they ensure smooth execution of even the most demanding applications.

Storage-optimized cloud servers for data-driven operations

Need to store and retrieve large amounts of data? Our cloud data servers offer vast capacity with superior read/write speeds. These servers are perfect for databases, large-scale backups, or big data management, ensuring fast access to your data when you need it.

Memory-Optimized Servers for Heavy Workloads


These servers are built for applications that require high memory capacity, such as in-memory databases or real-time analytics. With enhanced memory resources, they ensure smooth handling of large datasets, making them ideal for businesses with memory-intensive operations.

In-depth answers to your questions

Which operating systems are supported on your cloud servers?

Choose popular server operating systems and deploy them in one click: from Ubuntu to CentOS. Licensed operating systems are available directly in the control panel.

How can I get started with a cloud server? Is there a straightforward registration process?

Register with Hostman and choose the plan that suits your needs and requirements. You can always add processing power and purchase additional services later if needed.

You don't need a development team to get started: you'll do everything yourself in a convenient control panel. Even a person with no technical background can easily work with it.

What is the minimum and maximum resource allocation (CPU, RAM, storage) available for cloud servers?

The starter package includes a single-core 1.28 GHz CPU, 1 GB RAM, 15 GB of fast NVMe SSD storage, a dedicated IP address, and a 200 Mbps channel. For demanding users, there is a powerful 8 × 3.3 GHz server with 16 GB RAM, 160 GB of fast NVMe SSD storage, a dedicated IP address, and a 200 Mbps channel. Alternatively, you can always get an even more powerful server by configuring it yourself.

What scaling options are available for cloud servers?

You can easily add power, bandwidth, and channel width with just a few clicks directly in the control panel. With Hostman, you can enhance all the important characteristics of your server with hourly billing.

How much does a cloud server cost, and what is the pricing structure like?

Cloud servers are billed hourly, so the price depends on the configuration you choose and you only pay for the resources you actually use. You can add capacity, bandwidth, and channel width with a few clicks right in the control panel, and the cost adjusts accordingly.

Is there a trial or testing period available for cloud servers before purchasing?

Contact the friendly Hostman support team, and they will offer you comfortable conditions for test-driving our cloud server — and will transfer your current projects to the cloud for free.

What security measures and data protection are in place for cloud servers?

Cloud servers are hosted in a Tier III data center with a high level of reliability. Hostman guarantees 99.99% availability according to the SLA, with downtime not exceeding 52 minutes per year. Additionally, data is backed up for extra security, and the communication channel is protected against DDoS attacks.

What level of support is provided for cloud servers?

Hostman support is always available, 7 days a week, around the clock. We respond to phone calls within a minute and chat inquiries within 15 minutes. Your questions will always be handled by knowledgeable staff with sufficient authority and technical background.

Can I install my own software on a cloud server?

Yes, absolutely! You can deploy any software, operating systems, and images you desire on your server. Everything is ready for self-configuration.

What backup and data recovery methods are available for cloud servers?

Hostman takes care of the security of your data and backs up important information. Additionally, you can utilize the automatic backup service for extra safety and reliability.

Is there a guaranteed Service Level Agreement (SLA) for cloud server availability?

Hostman guarantees a 99.99% level of virtual server availability according to the SLA (Service Level Agreement).

Which data center locations are available for hosting cloud servers?

Our servers are located in a modern Tier III data center in the European Union and the United States.

Can I create and manage multiple cloud servers under a single account?

Certainly, you can launch multiple cloud servers and other services (such as a managed database or a VPS) within a single account.

What is the deployment time for cloud servers after ordering?

With Hostman, you'll get a service that is easy and quick to manage on your own. New cloud servers can be launched almost instantly from the control panel, and the necessary software can be installed within minutes.

What monitoring and notification capabilities are offered for cloud servers?

Hostman specialists monitor the technical condition of servers and software around the clock. You won't have to worry about server availability — it will simply work, always.

Can I modify the specifications of my cloud server (e.g., increase RAM) after creation?

You can easily configure your server by adding resources directly in the control panel. And if you need to switch to lower-tier plans, you can rely on Hostman support — our specialists will handle everything for you.

Do you have questions,
comments, or concerns?

Our professionals are available to assist you at any moment,
whether you need help or are just unsure of where to start.
Email us
Hostman's Support