
Cloud Server

Deploy your cloud server in minutes and experience the freedom to scale your
infrastructure effortlessly. A fast, secure, and flexible cloud server solution
designed to meet your unique needs without the constraints of traditional
servers.
Contact Sales
Blazing 3.3 GHz Processors & NVMe Disks
Experience unparalleled speed with processors optimized for demanding applications, combined with ultra-fast NVMe disks for quick data retrieval.
200 Mbps Channels, Unlimited Traffic
Enjoy stable, high-speed connectivity with unthrottled traffic, ensuring smooth performance even during peak usage periods.
24/7 Monitoring & Support
Stay worry-free with round-the-clock monitoring and professional support, ensuring your systems are always operational.
Cost-Effective Management
Our cloud server solutions are designed to deliver maximum value for your money, offering flexible pricing without compromising on performance.

Cloud server pricing

We offer various cloud server plans, tailored to your exact needs.
Get the best performance at a price that fits your budget.
New York
1 x 3 GHz CPU, 1 GB RAM, 25 GB NVMe, 200 Mbps bandwidth, public IP: $4/mo
1 x 3 GHz CPU, 2 GB RAM, 40 GB NVMe, 200 Mbps bandwidth, public IP: $5/mo
2 x 3 GHz CPU, 2 GB RAM, 60 GB NVMe, 200 Mbps bandwidth, public IP: $6/mo
2 x 3 GHz CPU, 4 GB RAM, 80 GB NVMe, 200 Mbps bandwidth, public IP: $8/mo
4 x 3 GHz CPU, 8 GB RAM, 160 GB NVMe, 200 Mbps bandwidth, public IP: $17/mo
8 x 3 GHz CPU, 16 GB RAM, 320 GB NVMe, 200 Mbps bandwidth, public IP: $37/mo

Deploy any software in seconds

Select the desired OS or App and install it in one click.
OS Distributions
Pre-installed Apps
Custom Images
Ubuntu
Debian
CentOS

Hostman's commitment to simplicity and budget-friendly solutions

Configuration: 1 CPU, 1 GB RAM, 25 GB SSD

Price: Hostman $4 | DigitalOcean $6 | Google Cloud $6.88 | AWS $7.59 | Vultr $5
Tech support: Hostman Free | DigitalOcean $24/mo | Google Cloud $29/mo + 3% of monthly charges | AWS $29/mo or 3% of monthly charges | Vultr Free
Backups: Hostman from $0.07/GB | DigitalOcean 20% or 30% higher base daily/weekly fee | Google Cloud $0.03/GB per mo | AWS $0.05/GB per mo | Vultr 20% higher base monthly/hourly fee
Bandwidth: Hostman Free | DigitalOcean $0.01 per GB | Google Cloud $0.01 per GB | AWS $0.09/GB for the first 10 TB/mo | Vultr $0.01 per GB
Avg. support response time: Hostman <15 min | DigitalOcean <24 hours | Google Cloud <4 hours | AWS <12 hours | Vultr <12 hours

What is a cloud server?

A cloud server is a virtualized computing resource hosted in the cloud, designed to deliver powerful performance without the need for physical hardware. It is built on a network of connected virtual machines, which enables flexible resource allocation, instant scalability, and high availability. Unlike traditional on-premises servers, a cloud-based server allows users to adjust resources dynamically, making it ideal for handling fluctuating workloads or unpredictable traffic spikes. Whether you're running an e-commerce store, a SaaS platform, or any other application, a cloud web server provides the adaptability necessary to grow with your business.

Cloud servers solve a wide range of challenges, from reducing infrastructure costs to improving uptime and reliability. By leveraging the cloud, businesses can avoid the upfront investment and maintenance costs associated with physical servers. Additionally, a cloud server system allows users to deploy applications quickly, scale resources in real time, and manage data more efficiently. The key benefits for clients include operational flexibility, cost savings, and the ability to respond quickly to changing demands.

Ready to buy a cloud server?

1 CPU / 1 GB RAM / 25 GB NVMe / 200 Mbps / $2/mo.

Efficient tools for your convenient work

See all Products

Backups, Snapshots

Protect your data with regular backups and snapshots, ensuring you never lose crucial information.

Firewall

Enhance your security measures with our robust firewall protection, safeguarding your infrastructure against potential threats.

Load Balancer

Ensure optimal performance and scalability by evenly distributing traffic across multiple servers with our load balancer feature.

Private Networks

Establish secure and isolated connections between your servers with private networks, shielding sensitive data and enhancing network efficiency.

Trusted by 500+ companies and developers worldwide

One panel to rule them all

Easily control your database, pricing plan, and additional services
through the intuitive Hostman management console.
Project management
Group your multiple cloud servers and databases into a single project, eliminating confusion and simplifying management.
Software marketplace
24 ready-made images for any task: frameworks, e-commerce, analytics tools.
Mobile responsive
Get the optimal user experience across all devices with our mobile-responsive design.
Hostman Cloud

Code locally, launch worldwide

Our infrastructure is ISO/IEC 27001 certified, with servers located in Tier 3
data centers across the US, Europe, and Asia.
🇺🇸 San Francisco
🇺🇸 San Jose
🇺🇸 Texas
🇺🇸 New York
🇳🇱 Amsterdam
🇳🇬 Lagos
🇩🇪 Frankfurt
🇵🇱 Gdansk
🇦🇪 Dubai
🇸🇬 Singapore
Anup k.
Associate Cloud Engineer
5.0 out of 5

"Hostman Comprehensive Review of Simplicity and Potential"

It has been a few years that I have been working on cloud, and most of the cloud services...
Mansur H.
Security Researcher
5.0 out of 5

"A perfect fit for everything cloud services!"

Hostman's seamless integration, user-friendly interface, and robust features (backups, etc.) make it much easier...
Adedeji E.
DevOps Engineer
5.0 out of 5

"Superb User Experience"

For me, Hostman is exceptional because of its flexibility and user-friendliness. The platform's ability to offer dedicated computing resources acr...
Yudhistira H.
Mid-Market (51-1000 emp.)
5.0 out of 5

"Streamlined Cloud Excellence!"

What I like best about Hostman is their exceptional speed of deployment, scalability, and robust security features. Their...
Mohammad Waqas S.
Biotechnologist and programmer
5.0 out of 5

"Seamless and easy to use Hosting Solution for Web Applications"

From the moment I signed up, the process has been seamless and straightforward...
Mohana R.
Senior Software Engineer
5.0 out of 5

"Availing Different DB Engine Services Provided by Hostman is Convenient for my Organization usecases"

Hostman manages the cloud operations...
Faizan A.
5.0 out of 5

"Hostman is a great fit for me"

It was very easy to deploy my application and create a database; I didn't have...
Adam M.
5.0 out of 5

"Perfect website"

This website is extremely user friendly and easy to use. I had no problems so didn't have to contact customer support. Really good website and would recommend to others.
Anup K.
4.0 out of 5

"Simplifying Cloud Deployment with Strengths and Areas for Growth"

What I like best about Hostman is its unwavering commitment to simplicity...
Naila J.
5.0 out of 5

"Streamlined Deployment with Room for Improvement"

Hostman impresses with its user-friendly interface and seamless deployment process, simplifying web application hosting...

More cloud services from Hostman

See all Products

Latest News

PostgreSQL

How to Set Up Physical Streaming Replication with PostgreSQL on Ubuntu

Streaming replication is a common method for horizontally scaling relational databases. It involves one or more copies of the same database cluster operating on different devices. The primary database cluster handles both read and write operations, while the replicas are read-only. We can also use streaming replication for high availability: if the primary database cluster or server fails unexpectedly, the replicas can continue handling read operations, or one of them can be promoted to become the new primary cluster.

PostgreSQL, a popular relational database, supports both logical and physical replication:

Logical replication streams high-level changes from the primary cluster to replicas, allowing you to replicate changes to a single database or table.
Physical replication streams changes from the Write-Ahead Log (WAL) files, copying the entire cluster's state rather than specific areas. This method ensures that all changes to the primary cluster are replicated.

This guide will help you set up physical streaming replication with PostgreSQL on Ubuntu 22.04 across two separate devices, each running a PostgreSQL 17 cluster. One device will host the primary cluster, and the other will serve as the replica. Hostman also offers a managed cloud PostgreSQL service for your projects.

Prerequisites

To follow this tutorial, you will need:

Two separate devices running Ubuntu 22.04: one will act as the primary server and the other as the replica.
Firewall settings that allow HTTP/HTTPS traffic and traffic on port 5432 (the default port for PostgreSQL).
PostgreSQL 17 installed and running on both servers.

Step 1: Configuring the Primary Database to Accept Connections

The first step is to configure the primary database to allow connections from the replica(s). By default, PostgreSQL only accepts connections from localhost (127.0.0.1). To change this behavior, modify the listen_addresses configuration parameter in the primary database.

On the primary server, open the PostgreSQL configuration file postgresql.conf, located in the /etc/postgresql/17/main/ directory:

sudo nano /etc/postgresql/17/main/postgresql.conf

Once the file is open, find the listen_addresses variable and change its value from localhost to the IP address of the primary server. Remove the # symbol at the beginning of the line as well:

listen_addresses = 'your_primary_IP_address'

Save the changes and exit the file. The primary database is now ready to accept connections from other devices using the specified IP address. Next, you need to create a user role with the appropriate permissions that the replica will use to connect to the primary database.

Step 2: Creating a Replication Role with Permissions

Next, create a dedicated role in the primary database with permissions for replication. The replica will use this role to connect to the primary database. Creating a specific role for replication is crucial for security, as the replica will only have permission to copy data, not modify it.

Log in as the postgres user to connect to the database cluster:

sudo -u postgres psql

Use the CREATE ROLE command to set up a role for replication:

CREATE ROLE test WITH REPLICATION PASSWORD 'testpassword' LOGIN;

This will output:

CREATE ROLE

We have now created the test role with the password testpassword, which has replication permissions for the database cluster.
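Before moving on, you can optionally confirm from the same psql session that the new role really carries the replication attribute. This is a small sketch that queries the standard pg_roles catalog:

SELECT rolname, rolreplication FROM pg_roles WHERE rolname = 'test';
-- the rolreplication column should show t for the test role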
Configure access for replication. PostgreSQL has a special pseudo-database, replication, which replicas use to connect. To allow access to it, edit the pg_hba.conf file. Exit the PostgreSQL prompt by typing:

\q

Then open the configuration file using nano or your preferred editor:

sudo nano /etc/postgresql/17/main/pg_hba.conf

Add a rule for the replica. Append the following line to the end of the pg_hba.conf file:

host  replication  test  your-replica-IP/32  md5

host: Enables non-local connections over plain or SSL-encrypted TCP/IP sockets.
replication: Specifies the special pseudo-database used for replication.
test: Refers to the previously created replication role.
your-replica-IP/32: Restricts access to the specific IP address of your replica.
md5: Sets the authentication method to password-based.

If you plan to create multiple replicas, repeat this step for each additional replica, specifying its IP address.

Restart the primary database cluster. To apply these changes, restart the primary cluster:

sudo systemctl restart postgresql@17-main

If the primary cluster restarts successfully, it is properly configured and ready to stream data once the replica connects. Next, proceed with configuring the replica cluster.

Step 3: Backing Up the Primary Cluster to the Replica

During the setup of physical replication, you need to make a physical backup of the primary cluster's data directory in the replica's data directory. Before doing this, you must clear the replica's data directory of all existing files. On Ubuntu, the default data directory for PostgreSQL is /var/lib/postgresql/17/main/. To find the data directory, you can run the following command on the replica database:

SHOW data_directory;

Once you locate the data directory, run the following command to clear all files:

sudo -u postgres rm -r /var/lib/postgresql/17/main/*

Since the files in the default data directory are owned by the postgres user, you need to run the command as postgres using sudo -u postgres.

Note: If a file in the directory is corrupted and the command does not work (this is very rare), you can remove the main directory entirely and recreate it with the correct permissions:

sudo -u postgres rm -r /var/lib/postgresql/17/main
sudo -u postgres mkdir /var/lib/postgresql/17/main
sudo -u postgres chmod 700 /var/lib/postgresql/17/main

Now that the replica's data directory is cleared, you can physically back up the primary server's data files. PostgreSQL provides a useful utility called pg_basebackup to simplify this process. It even allows you to put the server into standby mode with the -R option.

Run the following pg_basebackup command on the replica:

sudo -u postgres pg_basebackup -h primary-ip-addr -p 5432 -U test -D /var/lib/postgresql/17/main/ -Fp -Xs -R

-h: Specifies the remote host. Enter the IP address of your primary server.
-p: Specifies the port number for connecting to the primary server. By default, PostgreSQL uses port 5432.
-U: Specifies the user role to connect to the primary cluster (the role created in the previous step).
-D: Specifies the backup's destination directory, which is your replica's cleared data directory.
-Fp: Ensures the backup is output in plain format (instead of a tar file).
-Xs: Streams the contents of the WAL files during the backup from the primary database.
-R: Creates a file named standby.signal in the replica's data directory, signaling that the replica should operate in standby mode.
It also adds the connection information for the primary server to the postgresql.auto.conf file. This configuration file is read each time the standard postgresql.conf is read, but the values in the .auto.conf file override those in the regular configuration file. When you run this command, you will be prompted to enter the password for the replication role created earlier. The time required to copy all the files depends on the size of your primary database cluster.

At this point, your replica has all the necessary data files from the primary server to begin replication. Next, you need to configure the replica to start in standby mode and proceed with replication.

Step 4: Restarting and Testing Clusters

After successfully creating a backup of the primary cluster's data files on the replica, restart the replica database cluster to switch it to standby mode:

sudo systemctl restart postgresql@17-main

Once the replica has restarted in standby mode, it should automatically connect to the primary database cluster on the other machine. To check whether the replica is connected and receiving the stream from the primary server, connect to the primary database cluster:

sudo -u postgres psql

Next, query the pg_stat_replication view on the primary cluster:

SELECT client_addr, state FROM pg_stat_replication;

The output should look something like this:

  client_addr    |   state
-----------------+-----------
 your_replica_IP | streaming

If you see this result, streaming replication from the primary server to the replica is correctly set up.

Conclusion

You now have two Ubuntu 22.04 servers with PostgreSQL 17 clusters, and streaming replication is configured between them. Any changes made in the primary database cluster will be reflected in the replica cluster. You can add more replicas if your databases need to handle higher traffic. To learn more about physical streaming replication, including how to configure synchronous replication to prevent the loss of critical data, refer to the official PostgreSQL documentation.
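As an optional complement to the check above, you can also verify the stream from the replica side and, if the primary ever fails, promote the standby. This sketch relies on the standard pg_stat_wal_receiver view and the built-in pg_promote() function (available since PostgreSQL 12); run both in psql on the replica:

SELECT status, sender_host FROM pg_stat_wal_receiver;
-- expected: status = 'streaming', sender_host = your primary's IP

SELECT pg_promote();
-- only during a failover: turns this standby into a read-write primary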
20 December 2024 · 8 min to read
PostgreSQL

Managing PostgreSQL Extensions

PostgreSQL offers a vast array of extensions designed to simplify solving complex and non-standard tasks. They allow you to enhance the capabilities of your database management system, bypass certain limitations, and streamline analysts' work.

There are two types of extensions:

Bundled extensions: These come with PostgreSQL (e.g., in the contrib package).
Custom extensions: These are created by users based on their specific needs. Once downloaded and installed, custom functions work just like standard ones.

Extensions enable the handling of temporal, spatial, and other data types. Remember: if you can't find a ready-made solution, you can always create one yourself. If you're using a cloud PostgreSQL database on Hostman, you can easily install several popular extensions directly from the control panel. Simply open your database page, navigate to Configuration → Modify, and enable the extensions you need.

Installation and Management of Extensions

Since the contents of the contrib package differ for each PostgreSQL version, start by checking which functions are available in your version.

Viewing the List of Standard Extensions

To list the extensions available for installation, the default version, the version of the installed extension, and a brief description of their functions, run the following command:

SELECT * FROM pg_available_extensions;

Note: Some features are only accessible with a superuser account (postgres) or an account with installation privileges.

Installing Extensions

Let's break down the command used to install any PostgreSQL extension:

CREATE EXTENSION IF NOT EXISTS extension_name
WITH SCHEMA schema_name
VERSION version
CASCADE;

The command includes optional but helpful parameters that you can use during installation:

IF NOT EXISTS: Checks if an extension with the specified name already exists.
WITH SCHEMA: Specifies the schema where the extension will be installed. If not provided, it will be installed in the current schema.
VERSION: Specifies the version to install. If not specified, the latest version will be installed.
CASCADE: Automatically installs all additional extensions required for proper functioning.

Important: Some extensions also require specific entries in the PostgreSQL configuration file, followed by a server restart, before they work.

Updating Extensions

A new version is almost always better than the old one, right? Developers refine the code, fix bugs, and introduce new features, making it important to update extensions. To upgrade an extension to a specific version, use the following command:

ALTER EXTENSION extension_name
UPDATE TO version;

If we omit the version parameter, the latest version will be installed.

Removing Extensions

Sometimes an extension is no longer needed, and you might want to free up memory for better use. You can remove an extension with the following command:

DROP EXTENSION IF EXISTS extension_name
CASCADE | RESTRICT;

Additional parameters:

IF EXISTS: Checks whether the extension exists before attempting to remove it.
CASCADE: Automatically removes all objects that depend on the extension.
RESTRICT: Prevents removal if other objects depend on the extension.

Top Most Useful Extensions for PostgreSQL

pg_stat_statements

The pg_stat_statements extension helps identify queries that place a heavy load on the system, how often they are executed, and how long they take. This information is crucial for evaluating database performance, identifying bottlenecks, and optimizing processes.
Given the large size of many databases, query execution time must be efficient. This extension provides the metrics to assess and improve performance.

Example usage. The following query shows the SQL text (query), its total execution time in minutes (total_min), average execution time in milliseconds (avg_ms), and the number of times it was called (calls):

SELECT query,
       (total_exec_time / 1000 / 60) AS total_min,
       mean_exec_time AS avg_ms,
       calls
FROM pg_stat_statements
ORDER BY 1 DESC
LIMIT 10;

pgcrypto

If you're interested in encrypting data in PostgreSQL, the pgcrypto extension is essential. It offers cryptographic functions for protecting data, such as passwords. By default, it supports the following hashing algorithms: md5, sha1, sha224, sha256, sha384, and sha512. You can expand the list of supported algorithms by configuring the extension to use OpenSSL in its settings.

btree_gist

You need the btree_gist extension to leverage different types of PostgreSQL database indexes (B-tree and GiST). It is especially useful for databases containing spatial data, such as city or store coordinates.

B-tree index: The default index type in PostgreSQL. It can index any sortable data, such as numbers and dates. B-tree is efficient and versatile but unsuitable for unsortable data.
GiST index: Handles any type of data, including geospatial data.

Key feature: In addition to the typical search operators for B-tree indexes, btree_gist also supports the PostgreSQL "not equal" operator (<>).

timescaledb

Time-series data tracks changes over time, such as application requests, sales volumes, or weather temperatures. While specialized databases like InfluxDB or ClickHouse are designed for time-series data, they may not handle other data types effectively. In such cases, the timescaledb extension offers a convenient alternative: it enables the storage and processing of time-series data directly in PostgreSQL. To use it:

Download the appropriate version from the official website.
Follow the installation instructions.
Add the extension with the CREATE EXTENSION command.

hstore

The hstore extension allows PostgreSQL to store key-value pairs in a single data field, similar to data structures found in object-oriented programming languages like Python. With hstore, you can store grouped data without requiring additional database columns. For example, in a bookstore database, a single column could hold attributes such as the number of pages, genre, and illustration details for each book.

Example usage. Create a table with an hstore column:

CREATE TABLE books (
    id serial PRIMARY KEY,
    name varchar,
    attributes hstore
);

Insert data into the table:

INSERT INTO books (name, attributes) VALUES (
    'Harry Potter and the Philosopher''s Stone',
    'author => "J. K. Rowling", pages => 223, series => "Harry Potter"'
);

Query the names and authors of books in the "Harry Potter" series:

SELECT name, attributes->'author' AS author
FROM books
WHERE attributes->'series' = 'Harry Potter';

To display all attributes for the matching books:

SELECT * FROM books WHERE attributes->'series' = 'Harry Potter';

Conclusion

PostgreSQL extensions significantly enhance database capabilities, enabling efficient handling of time-series data, encryption, indexing, key-value storage, and performance analysis. We went over a few popular tools like timescaledb, pgcrypto, and hstore; however, PostgreSQL supports many more extensions, offering solutions for a variety of use cases.
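Note that pg_stat_statements is one of the extensions that needs a configuration entry before CREATE EXTENSION will take effect. Here is a minimal setup sketch; the file path assumes the Ubuntu packaging used elsewhere in this article, so adjust the version number to yours:

# 1. In /etc/postgresql/17/main/postgresql.conf, preload the module:
shared_preload_libraries = 'pg_stat_statements'

# 2. Restart the server so the setting takes effect:
sudo systemctl restart postgresql

# 3. In psql, create the extension and confirm it is collecting data:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
SELECT count(*) FROM pg_stat_statements;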
20 December 2024 · 6 min to read
MongoDB

How To Use the MongoDB Shell

MongoDB Shell (mongosh) is a JavaScript and Node.js REPL (Read-Eval-Print Loop) environment for working with MongoDB deployments. It is an important utility for querying data, administering databases, and running various operations, and it provides a convenient interface for controlling databases, whether they run on a remote server or locally. This article provides step-by-step guidelines for using the MongoDB Shell.

Step 1: Installation of MongoDB Shell

Before starting, confirm that the shell is installed on your system. If not, download the appropriate version from the official site and follow the installation instructions for your OS.

Windows

Visit the official page, pick the Windows version, and start downloading.

macOS

On the official page, choose the macOS version and begin the download. Alternatively, use Homebrew by executing the commands:

brew tap mongodb/brew
brew install mongosh

Linux

Follow the guidelines provided on the website for your Linux distribution. For example, on a Debian-based operating system, follow these steps:

Add the GPG key for the MongoDB repository:

curl -fsSL https://pgp.mongodb.com/server-7.0.asc | sudo gpg -o /usr/share/keyrings/mongodb-server-7.0.gpg --dearmor

Add the MongoDB repository to your system:

echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-7.0.list

Update the package list:

sudo apt update

Install MongoDB:

sudo apt install mongodb-org -y

Upon successful installation, verify the installed version:

mongod --version

Once the installation is done, enable, start, and check the MongoDB service:

sudo systemctl enable mongod
sudo systemctl start mongod
sudo systemctl status mongod

Step 2: Establish a Connection to an Instance

Once the installation is done, you can connect to a MongoDB instance. For a local instance, simply enter mongosh; by default, it connects to the instance running at port 27017 on localhost:

mongosh

The following connection-string syntax is used to connect to a remote server:

mongodb+srv://<username>:<password>@<cluster-url>/<db>?retryWrites=true&w=majority

Replace <username>, <password>, <cluster-url>, and <db> with your actual credentials and connection details.

Step 3: Navigating the Shell

Once connected, you can work with databases. When the shell starts, it greets you with a message. The database assigned on a freshly launched shell instance is called test; it is safe to use for experimenting. Useful operations are given below.

Exploring all databases

First, run show dbs, which lists every database that is available:

show dbs

Accessing the current database

Most commands operate on a database or on a collection contained within a database. The object db represents the currently selected database:

db

Switching to a database

Enter the use command with a database name, e.g., new_db, to create or switch to the specified database:

use new_db

Replace new_db with the actual name of your database.

Inserting a document

Insert a document with a name field into the student_data collection in the current database:

db.student_data.insertOne({name: 'School has 500 students'})

You can also insert multiple documents into student_data, each containing a name field with specified values.
This allows for batch insertion of data into the collection:

db.student_data.insertMany([
  {name: 'School has 500 students'},
  {name: 'School has 600 students'}
])

Viewing collections

Once the data is inserted, display the existing collections:

show collections

When you are done, leave the shell by executing:

exit

These are the basic shell operations.

Step 4: Operations Using CRUD

When working with the shell, CRUD (Create, Read, Update, Delete) operations are essential. Let's perform some basic ones.

Creating/inserting data

To insert new information into a collection, employ the insertOne function. Let's create a new document with the required information, including name, age, and city:

db.collection.insertOne({ name: "Harry", age: 45, city: "Sydney" })

Finding/reading data

You can query documents in a collection with the find function. For instance, this retrieves all entries with an age greater than 25:

db.collection.find({ age: { $gt: 25 } })  // condition where age is greater than 25

Modifying data

Use the updateOne or updateMany functions to change existing documents. For instance, this updates the age of the document with the name Harry to 50:

db.collection.updateOne({ name: "Harry" }, { $set: { age: 50 } })

Deleting data

Use the deleteOne or deleteMany methods to eliminate entries from a collection. This command removes one document whose name field has the value John:

db.collection.deleteOne({ name: "John" })

Step 5: Advanced Usage

In this step, we carry out complex operations via the shell's advanced functionality. By offering debugging, profiling, and performance-optimization tools, the shell helps you find bottlenecks and optimize your setup.

Aggregation pipeline

With the help of pipelines, you can process records and get computed results. Pipelines let you build complex analytics and transformations that extract insightful information from raw data. To execute a pipeline of stages, use the aggregate function. Here's a basic code example:

db.collection.aggregate([
  { $match: { status: "A" } },
  { $group: { _id: "$city", total: { $sum: "$amount" } } },
  { $sort: { total: -1 } }
])

In this script, the pipeline filters documents with status A, groups them by city, sums the amounts, and sorts the results in descending order. The aggregation pipeline is a powerful tool for data analysis and transformation. It is composed of several stages, each of which transforms the stream of documents:

$project: Selects or excludes specific fields.
$match: Applies a query condition to filter documents.
$group: Merges input by a field and calculates aggregations.
$sort: Arranges entries according to a given field.
$limit: Restricts the number of records.
$skip: Skips a specified number of records.

MapReduce

MapReduce is an effective paradigm for computing over huge datasets.
To execute MapReduce jobs, use the mapReduce command:

// Sample collection data
db.collection.insertMany([
  { name: "Harry", age: 25, salary: 5000 },
  { name: "Buttler", age: 30, salary: 6000 },
  { name: "Morgan", age: 35, salary: 7000 }
]);

// Map function
var mapFunction = function() { emit(this.age, this.salary); };

// Reduce function
var reduceFunction = function(keyAge, valuesSalaries) { return Array.sum(valuesSalaries); };

// Run MapReduce
db.runCommand({
  mapreduce: "collection",
  map: mapFunction,
  reduce: reduceFunction,
  out: "results"
});

// Output the results
db.results.find().forEach(printjson);

In this example, the mapFunction emits the age as the key and the salary as the value. The reduceFunction sums the salaries for each age group. The results are stored in a new collection called results, and the final output is printed using db.results.find().forEach(printjson).

Projection

Use the projection operator to specify which fields should be included or excluded from the result set. This lets you view specific fields while excluding others. Let's project only the name and age fields:

db.collection.find({}, { name: 1, age: 1 })

Sorting

Sort results with the sort function. This retrieves every document in the collection and sorts them in descending order by age, showing the highest age values first:

db.collection.find().sort({ age: -1 })

Limiting

Limit the number of results via the limit function. For instance, obtain only the first three documents in the collection. This is useful for previewing a small subset of data without fetching the entire list:

db.collection.find().limit(3)

Skipping

Skipping entries is done via the skip function, for instance skipping the first two documents in the collection. This is useful for paginating results:

db.collection.find().skip(2)

Step 6: Scripting with the Shell

You can write scripts for task automation within the shell. To do so, save the script in a .js file and run it with mongosh. This is useful for executing repetitive tasks efficiently, e.g., data seeding or batch updates:

mongosh script.js

Conclusion

With the help of the interactive MongoDB Shell interface, you can conduct routine management tasks: writing, reading, and manipulating data, querying existing collections, adding new documents, and carrying out administrative work. From simple CRUD tasks to complex aggregations and scripting, the shell efficiently supports a wide variety of activities. This tutorial has covered installation, configuration, and the managerial tasks for working with databases and their collections. At Hostman, you can deploy a MongoDB cloud database in a few seconds and start working in no time.
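To make Step 6 concrete, here is a minimal script.js sketch. The database and collection names reuse the new_db/student_data examples above, and the script selects the database with db.getSiblingDB() because the use helper is interactive-shell syntax rather than JavaScript:

// script.js: seed a collection and print its contents
const appDb = db.getSiblingDB("new_db");                   // pick the target database from code
appDb.student_data.insertOne({ name: "School has 700 students" });
appDb.student_data.find().forEach(doc => printjson(doc));  // printjson is provided by mongosh

Run it with mongosh script.js, adding your connection string first if the server is remote.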
19 December 2024 · 8 min to read
Python

How to Use f-strings in Python

Python f-strings, introduced in Python 3.6, revolutionized how developers format strings. They are precise, readable, and highly powerful, which makes them a preferred method for string interpolation in Python. This manual covers everything you need to know about Python f-strings, from basic usage all the way to advanced formatting techniques.

Introduction

String formatting is an integral part of programming in Python, helping developers dynamically include data within strings. Among the different ways of formatting strings, f-strings (also known as formatted string literals) stand out because of their simplicity and performance benefits. This manual provides a detailed explanation of f-strings, covering their syntax, features, and practical applications. By the end, you will be able to use f-strings effectively, making your code more efficient and more readable.

F-strings are used for inserting expressions or variables, as well as function outputs, directly into strings. To create an f-string, prefix the string with an uppercase or lowercase f. After that, whatever is placed in curly braces {} is evaluated and printed as if it were part of the string.

Benefits of f-strings

Using f-strings offers numerous benefits, including:

Enhanced readability: variables and expressions embed seamlessly into strings.
Improved performance: f-strings are faster than other formatting techniques such as str.format() or %-based formatting.
Flexibility: they support nested formatting, function calls, and calculations.

f-string Syntax

The syntax is very straightforward:

f"string with {expression}"

Basic syntax example:

name = "Alice"
age = 30
greeting = f"My name is {name}, and I am {age} years old."
print(greeting)

Output:

My name is Alice, and I am 30 years old.

Core Features of f-strings

Let's dive into the essential features of f-strings.

Using expressions inside f-strings

F-strings not only embed variables; they also allow the inclusion of any valid Python expression within the curly braces.

Example:

x = 10
y = 20
result = f"The sum of {x} and {y} is {x + y}."
print(result)

Output:

The sum of 10 and 20 is 30.

Formatting numbers in f-strings

Formatted string literals provide an elegant way to display numbers with specific formatting options, like rounding, padding, or converting values to percentages.

Example:

pi = 3.14159
formatted_pi = f"Value of pi: {pi:.2f}"
print(formatted_pi)

Output:

Value of pi: 3.14

Escaping curly braces

In some cases, curly braces are needed in the output text itself. To display literal curly braces in an f-string, use double braces {{ and }}.

Example:

template = f"Use {{braces}} to include special characters."
print(template)

Output:

Use {braces} to include special characters.

Multiline f-strings

F-strings can span multiple lines, making them useful for constructing large text blocks while maintaining readability.

Example:

title = "Python f-strings"
description = "powerful, fast, and easy to use"
message = f"""
Title: {title}
Description: f-strings are {description}.
"""
print(message)

Output:

Title: Python f-strings
Description: f-strings are powerful, fast, and easy to use.
Nesting and combining f-strings

F-strings can contain other f-strings or be combined with traditional strings. This capability is helpful for dynamic and complex outputs.

Example:

name = "Bob"
info = f"{name.upper()}: {f'Name has {len(name)} characters'}"
print(info)

Output:

BOB: Name has 3 characters

Handling lists and dictionaries

With f-strings, you can directly access elements from lists or keys in dictionaries.

Example with lists:

items = ["Python", "JavaScript", "C++"]
favorite = f"My favorite programming language is {items[0]}."
print(favorite)

Output:

My favorite programming language is Python.

Example with dictionaries:

data = {"name": "Eve", "role": "Developer"}
message = f"{data['name']} works as a {data['role']}."
print(message)

Output:

Eve works as a Developer.

f-strings vs. Other String Methods

Now, let's compare f-strings with other string-formatting methods in Python.

% formatting vs. f-strings

The % operator, an older method, uses placeholders like %s for strings and %d for integers. While functional, it can be cumbersome and error-prone.

Example:

# % formatting
name = "Alice"
age = 25
print("Hello, %s. You are %d years old." % (name, age))

# Equivalent f-string
print(f"Hello, {name}. You are {age} years old.")

Comparison: % formatting requires tuples and placeholder matching, increasing complexity. Python f-strings embed variables directly, making the code simpler and easier to read.

str.format() vs. f-strings

The str.format() method introduced named placeholders, improving readability over % formatting. However, it still requires method calls, which can feel verbose.

Example:

# str.format()
print("Hello, {}. You are {} years old.".format(name, age))

# Equivalent f-string
print(f"Hello, {name}. You are {age} years old.")

Advanced example with named placeholders:

# str.format() with named placeholders
print("Hello, {name}. You are {age} years old.".format(name=name, age=age))

# Equivalent f-string
print(f"Hello, {name}. You are {age} years old.")

Comparison: str.format() improves over % formatting but can still feel clunky. F-strings streamline the process, especially for dynamic expressions.

String concatenation vs. f-strings

String concatenation combines strings using the + operator. While straightforward, it becomes inefficient for more complex formatting needs.

Example:

# String concatenation
print("Hello, " + name + ". You are " + str(age) + " years old.")

# Equivalent f-string
print(f"Hello, {name}. You are {age} years old.")

Comparison: concatenation requires explicit type conversion, increasing verbosity. Python f-strings handle formatting and type conversion automatically.

Advanced example including expressions:

# String concatenation
years_later = 5
print(name + " will be " + str(age + years_later) + " in " + str(years_later) + " years.")

# Equivalent f-string
print(f"{name} will be {age + years_later} in {years_later} years.")

Practical Applications of f-strings in Python

Whether you're crafting dynamic SQL queries, improving logging efficiency, or processing data for analytics, f-strings in Python simplify your workflow and enhance code readability.

Generating dynamic SQL queries

In applications involving databases, f-strings work well for constructing dynamic SQL queries by embedding variables directly into the query string.
Example:

# Generating SQL queries using f-strings
table_name = "users"
condition = "age > 30"
sql_query = f"SELECT * FROM {table_name} WHERE {condition};"
print(sql_query)
# Output: SELECT * FROM users WHERE age > 30;

By embedding variables into the SQL query string, f-strings reduce the risk of syntax errors and keep the code intuitive.

Enhancing logging statements

Logging plays an important role in debugging and monitoring applications. Python f-strings simplify logging statements, especially when dynamic data is included.

Example:

# Logging with f-strings
username = "Alice"
action = "logged in"
print(f"User {username} has {action} at 10:30 AM.")
# Output: User Alice has logged in at 10:30 AM.

Working with data for analytics

F-strings make it possible to format strings dynamically based on variable content, enabling efficient and concise data manipulation.

Example:

# Processing analytics data
metric = "conversion rate"
value = 7.5
print(f"The {metric} has increased to {value}%.")
# Output: The conversion rate has increased to 7.5%.

Crafting dynamic file paths

Automated file handling often involves dynamically generating file paths. Python f-strings make this process straightforward.

Example:

# Generating dynamic file paths
directory = "/data/exports"
filename = "report_2024.csv"
path = f"{directory}/{filename}"
print(path)
# Output: /data/exports/report_2024.csv

Dynamic web content generation

In web development, HTML or JSON content can be generated dynamically with f-strings.

Example:

# Dynamic HTML generation
title = "Welcome"
content = "This is a demo of Python f-strings in action."
html = f"<h1>{title}</h1><p>{content}</p>"
print(html)
# Output: <h1>Welcome</h1><p>This is a demo of Python f-strings in action.</p>

Automating report titles

In reporting or analytics, titles often need to reflect the data being processed. F-strings in Python automate this with ease.

Example:

# Automating report titles
report_date = "December 2024"
report_title = f"Sales Report - {report_date}"
print(report_title)
# Output: Sales Report - December 2024

Advanced Formatting Features

F-strings can handle alignment, width specifications, and date and time formatting for cleaner outputs.

Example:

# Aligning text
for name, score in [("Alice", 92), ("Bob", 87)]:
    print(f"{name:<10} | {score:>5}")

# Formatting dates
from datetime import datetime
now = datetime.now()
print(f"Current time: {now:%Y-%m-%d %H:%M:%S}")

Debugging Made Easier with f-strings

F-strings can make error messages more informative by embedding relevant expressions or variables.

Example:

value = 42
try:
    assert value > 50, f"Value {value} is not greater than 50."
except AssertionError as e:
    print(e)

Common Errors to Avoid

When using f-strings, a few common pitfalls include:

Forgetting to prefix the string with f: this results in a plain string without any formatting.
Incompatible Python versions: ensure Python 3.6 or newer is installed, as f-strings are not supported in earlier versions.

Conclusion

F-strings are a robust and versatile tool for string formatting in Python. Whether you need to include variables, perform calculations, or debug your code, f-strings simplify these tasks with cleaner syntax and better performance. If you want to build a web service using Python, you can rent a cloud server at competitive prices with Hostman.
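One more debugging convenience worth knowing, though it requires Python 3.8 rather than the 3.6 baseline assumed above, is the = specifier, which echoes the expression itself along with its value:

# The '=' specifier prints both the expression and its result
value = 42
print(f"{value=}")         # value=42
print(f"{value * 2 = }")   # value * 2 = 84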
19 December 2024 · 9 min to read
PostgreSQL

How to Change the PostgreSQL Data Directory on Ubuntu

Databases tend to grow beyond their original filesystem over time. If they share the same partition as the operating system, it could potentially lead to I/O conflicts. Devices like network block storage or RAID arrays provide redundancy and enhance scalability. Regardless of your goals, whether increasing space or optimizing performance, this guide will assist you in moving the PostgreSQL data directory.

Prerequisites

To follow this guide, you will need:

An Ubuntu 22.04 VPS server with a non-privileged user account and sudo privileges.
PostgreSQL installed on your server.

In this guide, we will move the data to a block storage device mounted at /mnt/volume_01. The method described here is universal and will help relocate the data directory to another location in any basic storage.

Step 1. Move the PostgreSQL Data Directory

First, check the current location of the data directory by starting an interactive PostgreSQL session. Here, psql is the command to enter the interactive monitor, and -u postgres instructs sudo to run psql as the system user postgres:

sudo -u postgres psql

In the PostgreSQL command line, enter this command to show the current directory:

SHOW data_directory;

By default, the directory is /var/lib/postgresql/xx/main, where xx is your PostgreSQL version. Exit the PostgreSQL prompt by typing \q and pressing ENTER.

Before making any changes to the directory, stop PostgreSQL to avoid compromising data integrity:

sudo systemctl stop postgresql

You won't be able to check the service status directly through systemctl after stopping it. To ensure the service has been stopped, run the following command:

sudo systemctl status postgresql

The last line of the output will confirm that PostgreSQL has indeed stopped.

To copy the directory to the new location, use the rsync command. You can add flags: -a preserves the permissions and other attributes of the directory, while -v ensures detailed output so you can track progress. To replicate the original directory structure in the new location, run rsync from the postgresql directory. Creating this postgresql directory at the mount point and preserving ownership by the PostgreSQL user will prevent permission issues during future updates.

Note: If tab completion is enabled, make sure the directory doesn't have a trailing slash. Otherwise, rsync will only copy the directory's contents to the mount point, not the directory itself.

Strictly speaking, the versioned directory (e.g., 16) is unnecessary since the location is explicitly defined in the postgresql.conf file. However, it is recommended to follow the project's conventions, especially if you later need to run multiple versions of PostgreSQL:

sudo rsync -av /var/lib/postgresql /mnt/volume_01

Once the copy is complete, rename the original folder with a .bak extension, and don't delete it until the move is complete. This ensures that nothing gets mixed up due to directories with the same name:

sudo mv /var/lib/postgresql/16/main /var/lib/postgresql/16/main.bak

Now, we can configure PostgreSQL to access the data directory in the new location.

Step 2. Point to the New Location of the Directory

By default, the value of data_directory is set to /var/lib/postgresql/16/main in the file /etc/postgresql/16/main/postgresql.conf. You need to edit this file to point to the new directory:

sudo nano /etc/postgresql/16/main/postgresql.conf

Now, locate the line starting with data_directory and change the path to point to the new location.
The updated directive will look something like this:

# /etc/postgresql/16/main/postgresql.conf
...
data_directory = '/mnt/volume_01/postgresql/16/main'
...

Save the file and close it by pressing CTRL+X, then Y, and finally ENTER. No further configuration is needed for PostgreSQL in the new directory. The only thing left to do at this point is restart the PostgreSQL service and verify that it correctly points to the new data directory.

Step 3. Restart PostgreSQL

After changing the data_directory directive in the postgresql.conf file, start the PostgreSQL server using systemctl:

sudo systemctl start postgresql

Check the server status:

sudo systemctl status postgresql

If the service started correctly, the end of the output will confirm it. Finally, to ensure that the new directory is being used, open the PostgreSQL command line:

sudo -u postgres psql

Check the value of the data directory again:

SHOW data_directory;

The output will confirm that PostgreSQL is using the new data directory location. After verifying that everything is working properly, ensure you have access to your database and can interact with its data without issues. Once you are confident that the existing data is intact, you can delete the backup directory:

sudo rm -Rf /var/lib/postgresql/16/main.bak

You have now successfully moved the PostgreSQL data directory to a new location.

Conclusion

If you followed the instructions correctly, your database directory is now in a new location, and you're closer to being able to scale your storage. Congrats!
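Two optional sanity checks can make this move safer; both use standard tools (-n is rsync's dry-run flag), and the paths below match the ones used in this guide. A small sketch:

# Confirm the new volume has enough free space (before copying):
df -h /mnt/volume_01

# Compare source and copy without transferring anything (run before renaming the original):
sudo rsync -avn --checksum /var/lib/postgresql/ /mnt/volume_01/postgresql/
# An empty file list means the two trees match.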
18 December 2024 · 4 min to read
Python

How to Add Elements to an Array in Python

In Python, inserting items into arrays is a frequent task. Arrays hold data of a single type and can be initialized with lists, the array module, or through NumPy. Although Python lacks a native array type, both the array module and the NumPy library offer flexible options for managing arrays. Each approach provides unique methods for inserting elements, based on specific needs. Functions such as append() and extend() allow us to add items to built-in arrays. List comprehension is helpful for generating new arrays. For more complex tasks, NumPy offers tools like append(), concatenate(), and insert() to add elements, particularly when dealing with numerical or structured data. In this tutorial, we will demonstrate all available techniques for inserting elements into an array in Python.

Adding Values to Python's Built-in Arrays

Python provides different methods for inserting values into its built-in arrays. These functions allow us to add items at the start, end, or a specific position of the array. Let's go through the following methods to understand how they work and which one fits your needs.

Method 1: array.append()

append() is a useful array method that lets us insert a single value at the last index of the target array. It modifies the original array:

from array import array

AuthorsIDs = array('i', [12, 110, 13])
print("Original Array: ")
print(AuthorsIDs)
print("Modified Array: ")
AuthorsIDs.append(140)
print(AuthorsIDs)

Initially, the AuthorsIDs array has 12, 110, and 13 as its elements. Next, we invoke append() on the AuthorsIDs array to insert 140 at the last position. Here, we utilize i to assign signed integers to AuthorsIDs. Similarly, users can specify type codes like f, u, or d to assign float, Unicode, and double-type data to an array.

Method 2: array.extend()

The array module offers another useful function, extend(), that lets us add numerous items at the end of an array. It expands the actual array:

from array import array

AuthorsIDs = array('i', [12, 110, 13])
print("Original Array: ")
print(AuthorsIDs)
AuthorsIDs.extend([19, 105, 16])
print("Modified Array: ")
print(AuthorsIDs)

This time, we extend AuthorsIDs with a sub-array of three items.

Method 3: array.insert()

insert() is a built-in array function that lets us add a value at an index of our choice and shift the subsequent entries accordingly. It accepts two arguments: the index at which the value will be placed and the value to be added:

from array import array

AuthorsIDs = array('i', [12, 110, 13])
print("Original Array: ")
print(AuthorsIDs)
AuthorsIDs.insert(2, 55)
print("Modified Array: ")
print(AuthorsIDs)

Here, we add 55 at index 2, the third position of AuthorsIDs.

Method 4: List comprehension

List comprehension lets us integrate new values with existing ones to create an updated array. It doesn't alter the actual array; instead, it generates a new array based on the given logic:

from array import array

AuthorsIDs = array('i', [12, 110, 13])
print("Original Array: ")
print(AuthorsIDs)
newIDs = [14, 51]
AuthorsIDs = array('i', [x for x in AuthorsIDs] + newIDs)
print("Modified Array: ")
print(AuthorsIDs)

The newIDs are successfully merged with AuthorsIDs through list comprehension.

Method 5: Plus operator

The plus operator + joins the provided arrays.
It enables us to add one or more values to the target array:

from array import array

AuthorsIDs = array('i', [12, 110, 13])
print("Original Array: ")
print(AuthorsIDs)
newIDs = array('i', [14, 51, 72])
totalIDs = AuthorsIDs + newIDs
print("Modified Array: ")
print(totalIDs)

The + operator successfully integrates the AuthorsIDs and newIDs arrays while preserving the initial ones.

Add Elements to a NumPy Array

NumPy is a commonly used Python library in data science and numerical computing. It aids in handling arrays and executing arithmetic operations. Various functions, including append(), concatenate(), and insert(), can be employed to add values to NumPy arrays.

Method 1: numpy.append()

The append() method of the numpy module adds elements at the end of an array and returns a new array. It lets us add one or more values to a NumPy array. Let's import the NumPy library and invoke append() to add the desired elements at the end of AuthorsIDs:

import numpy as npy

AuthorsIDs = npy.array([12, 110, 13])
print("Original Array: ")
print(AuthorsIDs)
updatedIDs = npy.append(AuthorsIDs, [140, 31])
print("Modified Array: ")
print(updatedIDs)

It successfully appends 140 and 31 at the end of AuthorsIDs.

Method 2: numpy.concatenate()

NumPy offers a very useful function named concatenate() that merges multiple NumPy arrays. Let's invoke the concatenate() function to integrate AuthorsIDs with the newIDs array:

import numpy as npy

AuthorsIDs = npy.array([12, 110, 13])
newIDs = npy.array([101, 1, 31])
concatenatedIDs = npy.concatenate((AuthorsIDs, newIDs))
print("Modified Array: ")
print(concatenatedIDs)

We store the concatenated values in a new array named concatenatedIDs.

Method 3: numpy.insert()

The numpy.insert() function provides the flexibility to place one or more values at any given index of the target array:

import numpy as npy

AuthorsIDs = npy.array([1, 103, 41])
print("Original Array: ")
print(AuthorsIDs)
newIDs = npy.insert(AuthorsIDs, 1, 102)
print("Modified Array: ")
print(newIDs)

It successfully inserts 102 at index 1 of AuthorsIDs.

Best Practices

When managing arrays in Python, the append() method is used to insert a single value at the final index of the array. To include multiple elements, you can employ extend() or the + operator. Additionally, the insert() method enables adding elements at specific positions within the array, making it versatile for various use cases. In contrast, NumPy arrays offer more specialized functions for managing data efficiently: numpy.append() is used for appending data, numpy.concatenate() merges multiple arrays, and numpy.insert() can be used for precise insertions. NumPy functions are generally preferred for tasks involving large datasets or numerical computations due to their better performance and scalability.

Conclusion

In this tutorial, we demonstrated the distinct built-in and NumPy functions for appending elements to Python arrays. Users can utilize several methods to append values to Python arrays, based on the array type and specific use case. For built-in arrays, append(), extend(), and insert() allow easy modifications, while list comprehension and the + operator provide additional flexibility for merging arrays. When working with NumPy arrays, append(), concatenate(), and insert() offer advanced functionality, especially for quantitative and data science tasks. For larger datasets or more complex operations, you should prefer NumPy due to its efficiency and performance.
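As a closing illustration, one detail worth seeing in action is the single-type constraint mentioned at the start of this article: a typed array rejects values that do not match its type code. A minimal sketch:

from array import array

ids = array('i', [1, 2, 3])  # 'i' = signed integer type code
ids.append(4)                # OK: an int matches the type code

try:
    ids.append(2.5)          # a float does not fit an 'i' array
except TypeError as err:
    print(err)               # e.g. 'integer argument expected, got float'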
If you want to build a web service using Python, you can rent a cloud server at competitive prices with Hostman. 
18 December 2024 · 6 min to read
PostgreSQL

How to Delete a PostgreSQL Database

This guide explains how to delete PostgreSQL databases using two methods: the SQL command DROP DATABASE and the dropdb utility. Both methods are effective, but dropdb offers slightly more options for managing the deletion process.

Method 1: DROP DATABASE

The first way to delete a PostgreSQL database is to use the SQL command:

DROP DATABASE <database_name>;

This command deletes the directory containing the database's information and previously stored records. Note the following:

Only the database owner can execute DROP DATABASE.
If someone is connected to the database at the time of execution, the deletion will not proceed.

Let's walk through deleting a database step by step.

Step 1: Connect to PostgreSQL. Access PostgreSQL through the terminal:

sudo -i -u postgres psql

Step 2: Create a test database:

CREATE DATABASE example_db;

Step 3: Verify the database exists. List all databases to confirm we have successfully created the new one:

\l

This will display a table listing all databases on the server.

Step 4: Delete the database:

DROP DATABASE example_db;

Step 5: Verify deletion. List all databases again to ensure example_db is no longer in the list:

\l

The table should no longer include the deleted database.

IF EXISTS

The IF EXISTS option allows you to check if a database exists before attempting to delete it. The full syntax is as follows:

DROP DATABASE IF EXISTS <database_name>;

If the database exists, the command will delete it. If no database with that name exists, an informational message will be displayed instead of an error.

First, create a sample database:

CREATE DATABASE hostman_db;

Then, delete the newly created database using the IF EXISTS option:

DROP DATABASE IF EXISTS hostman_db;

If the database exists, the result will be the same as using DROP DATABASE. The hostman_db database will no longer be available. If you run the DROP DATABASE IF EXISTS command again with the same database name:

DROP DATABASE IF EXISTS hostman_db;

the terminal will display an informational message indicating that the entity doesn't exist. By contrast, using DROP DATABASE without the IF EXISTS option will result in a less informative error message:

DROP DATABASE hostman_db;

WITH (FORCE)

The WITH (FORCE) option is available in PostgreSQL version 13 and above. It forcibly closes any open sessions and deletes a database that is currently in use:

DROP DATABASE <database_name> WITH (FORCE);

Method 2: dropdb

The dropdb command-line utility is essentially a wrapper around the DROP DATABASE SQL command. The two methods are identical in terms of functionality, but dropdb offers additional features that make deleting a PostgreSQL database more convenient and informative. Its basic syntax is:

dropdb <connection parameters> <options> <database_name>

Flags and options:

-e, --echo: Displays all the commands actually executed on the server while using the dropdb utility.
-f, --force: Terminates all current connections before deleting the database.
-i, --interactive: Prompts for confirmation before deleting the database.
-V, --version: Displays the version of the dropdb utility.
--if-exists: Provides an informative message instead of an error if the target database does not exist.
-?, --help: Displays the help menu with information about available options.
-h <host>, --host=<host>: Specifies the host where the database is running.
-p <port>, --port=<port>: Specifies the port number the server is listening to.
Method 2: dropdb

The dropdb command-line utility is essentially a wrapper around the DROP DATABASE SQL command. The two methods are functionally identical, but dropdb offers additional options that make deleting a PostgreSQL database more convenient and informative. Its basic syntax is:

dropdb <connection parameters> <options> <database_name>

Flags and options:

-e, --echo: Displays the commands actually executed on the server while dropdb runs.
-f, --force: Terminates all current connections before deleting the database.
-i, --interactive: Prompts for confirmation before deleting the database.
-V, --version: Displays the version of the dropdb utility.
--if-exists: Prints an informative message instead of an error if the target database does not exist.
-?, --help: Displays the help menu with information about available options.
-h <host>, --host=<host>: Specifies the host where the database is running.
-p <port>, --port=<port>: Specifies the port number the server is listening on.
-U <username>, --username=<username>: Connects to the database as the specified user.
-w, --no-password: Never issues a password prompt. Useful for batch jobs and scripts.
-W, --password: Forces a password prompt before connecting.
--maintenance-db=<database_name>: Specifies the name of the maintenance database to connect to.

If you maintain cloud databases and periodically need to delete unused ones, you will want the process to be both safe and transparent to all users. The dropdb utility handles both requirements. For example, consider the following command:

dropdb -i -e example_db

It will first prompt you to confirm the deletion of example_db, then display the commands executed during the operation.

Conclusion

To delete a database, you must be its owner and have terminal access with sudo privileges. The choice of method is up to you: there is no fundamental difference between DROP DATABASE and the dropdb utility. It comes down to convenience and the specific scenario in which you need to delete a database in PostgreSQL. Hostman offers a high-performance cloud PostgreSQL database solution to meet your needs.
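As a closing example, the flags above combine naturally in unattended scripts. A minimal sketch, assuming a server on localhost and a postgres role that can authenticate without an interactive prompt (for example, via peer authentication or a .pgpass file):

# Drop a database non-interactively; --if-exists keeps the script
# from failing if the database has already been removed
dropdb --if-exists -e -h 127.0.0.1 -p 5432 -U postgres -w example_db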
17 December 2024 · 4 min to read
PostgreSQL

User Management in PostgreSQL

PostgreSQL is an open-source database management system based on the SQL query language. System administrators often choose PostgreSQL for its free usage, high performance, and compatibility with almost any hardware platform. You can also easily create a PostgreSQL cloud database at Hostman.

This guide explains how to manage users in PostgreSQL, including creating and deleting users, setting access permissions, and practical use cases (e.g., creating backups). The steps described here are performed in the PostgreSQL shell.

To begin, switch to the postgres user account:

su - postgres

If you encounter a permissions error, escalate privileges using:

sudo su

or simply:

su

Once logged in as the postgres user, start the PostgreSQL shell:

psql -U postgres template1

Here, template1 is a default example database. Replace it with the name of your database as needed. The session will run under the postgres account.

Before managing users, you can view a list of all PostgreSQL users:

SELECT * FROM pg_user;

Creating a New PostgreSQL User

To create a new user account in PostgreSQL, you'll define the user with a password and assign the necessary privileges. Additionally, you'll need to configure access in the pg_hba.conf file.

Step 1: Create a User

Create a new user with a password using the PostgreSQL shell:

CREATE USER hostman_user WITH PASSWORD 'myPassword';

Alternatively, use the Linux command line:

createuser -P hostman_user

Step 2: Grant Database Permissions

Assign the user access rights to a specific database:

GRANT ALL PRIVILEGES ON DATABASE "database1" TO hostman_user;

Connect to the database:

\c database1

Grant the user permissions for all tables in the default schema, public:

GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO "hostman_user";

The default schema is public, but users can switch to and use other schemas. For a specific table, assign permissions with:

GRANT ALL PRIVILEGES ON TABLE public.table1 TO "hostman_user";

Once done, exit the shell:

\q

Step 3: Configure the pg_hba.conf File

To enable connections for the new user, configure the pg_hba.conf file. Open the file for editing; the path depends on your operating system and PostgreSQL version.

Debian/Ubuntu:

nano /etc/postgresql/16/main/pg_hba.conf

CentOS:

vi /var/lib/pgsql/16/data/pg_hba.conf

Make sure to specify your PostgreSQL version instead of 16 if it differs.

Add the following line to allow hostman_user to connect from localhost using password authentication:

# IPv4 local connections:
host    all             hostman_user           127.0.0.1/32            md5

Place this line above the default configuration line:

host    all             all             127.0.0.1/32            ident

Restart the PostgreSQL service to apply the changes:

systemctl restart postgresql

Step 4: Test the Connection

Verify that the new user can connect to the database:

psql -U hostman_user template1 -h 127.0.0.1

Configuring Database Access via a Group in PostgreSQL

To manage user permissions efficiently, start by creating a group role:

CREATE ROLE "myRole" NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;

Assign the user hostman_user to the newly created group:

GRANT "myRole" TO hostman_user;

Connect to the database:

\c database1

Then, assign permissions to the group myRole for all tables in the public schema:

GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO GROUP "myRole";
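To confirm that the grants took effect, you can connect as the new user and list table privileges. A quick sketch, assuming the database1 database and the password-based pg_hba.conf entry configured above ('myPassword' is the example password set earlier):

# Connect as hostman_user and show access privileges for tables in the database
PGPASSWORD=myPassword psql -U hostman_user -h 127.0.0.1 -d database1 -c '\dp'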
Editing User Properties

You can easily modify a user's properties, such as their password. For example, to change the password of the postgres user:

ALTER USER postgres PASSWORD 'password';

From the Linux command line, you can achieve the same with:

sudo -u postgres psql -U postgres -d postgres -c "ALTER USER postgres PASSWORD 'password'"

Removing Users and Groups

To remove a user:

DROP USER hostman_user;

Instead of removing the user entirely, you can restrict their permissions:

REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA public FROM "hostman_user";

Assigning Specific Privileges

PostgreSQL allows fine-grained control over permissions. For instance, instead of granting all privileges, you can allow specific operations:

GRANT SELECT, UPDATE, INSERT ON ALL TABLES IN SCHEMA public TO "hostman_user";

This example permits hostman_user to read, edit, and add data but prohibits deletion. To assign permissions for a specific table:

GRANT ALL PRIVILEGES ON table_users TO "hostman_user";

This setup provides flexibility in granting tailored access to users or groups while maintaining database security and integrity.

Creating an Account for Database Backup in PostgreSQL

We recommend using an account with minimal privileges to perform database backups securely. Follow these steps to create a read-only PostgreSQL user for backups.

Create a user with a password for backup purposes:

CREATE USER bkpuser WITH PASSWORD 'bkppasswd';

Here, bkpuser is the username and bkppasswd is the password. Replace them with your desired values.

Enable the backup user to connect to the database:

GRANT CONNECT ON DATABASE database TO bkpuser;

Connect to the database:

\c database

Grant the read-only privileges needed for backup operations:

GRANT SELECT ON ALL TABLES IN SCHEMA public TO bkpuser;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO bkpuser;

The example above uses the public schema. Replace it with another schema if required. For detailed instructions on backing up and restoring PostgreSQL databases using pg_dump, check out this comprehensive guide.

Summary

This guide explains how to create PostgreSQL users via the console and configure their permissions. Tools like pgAdmin can simplify these tasks for users who prefer graphical interfaces. To experiment with user management without risking production databases, consider testing on cloud servers like Hostman's.
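As a final practical sketch, with the grants above in place the backup account can run pg_dump; this assumes a database literally named database and password authentication on localhost:

# Dump the database as the low-privilege backup user;
# pg_dump only needs read access, which is all bkpuser has
PGPASSWORD=bkppasswd pg_dump -U bkpuser -h 127.0.0.1 database > database_backup.sql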
17 December 2024 · 5 min to read
PostgreSQL

How to Safely Store Passwords Using PostgreSQL

PostgreSQL is a free, open-source object-relational database. Object-relational databases differ from regular relational ones: data is still stored in tables with columns that are linked to each other, but PostgreSQL operates according to ACID standards (Atomicity, Consistency, Isolation, and Durability), ensuring data accuracy through consistency and atomicity of operations within the tables. Changes are made sequentially, allowing for the immediate detection of failures while writing values.

PostgreSQL supports Multi-Version Concurrency Control (MVCC), a database feature that creates copies of records during modifications, protecting against loss and conflicts during simultaneous read or write operations. The indexing system in PostgreSQL is more complex and operates faster, using trees and various types of indexing, such as partial, hash, and expression indexing. The syntax of PostgreSQL is similar to MySQL's, but PostgreSQL supports additional subquery features such as LIMIT and ALL. Furthermore, PostgreSQL is compatible with a large number of programming languages. The most common ones are:

- C/C++
- Delphi
- Erlang
- Go
- Java
- JavaScript
- JSON (native since version 9.2)
- Lisp
- .NET
- Python
- R
- Tcl

Projects involving user registration and authentication must store passwords on the server side, usually encrypted. Databases are the most suitable tool for this. In this article, we will discuss how to properly store passwords in a database (writing and reading them) using PostgreSQL while following all security measures.

Why Protect Credentials

Before discussing storing passwords in databases, we should note that no serious project should store credentials in plaintext, meaning unencrypted, human-readable text. Data is always encrypted. Always. Here are some brief reasons why:

Developer Breach. The developer's servers, or the application host's servers, can be hacked, for example through an SQL injection that retrieves a password string from the database. Unencrypted data will immediately fall into the hands of the attackers and be compromised. From that point on, we can only guess what happens next and what the consequences might be.

User Carelessness. Users almost always ignore security recommendations: they don't use password managers and disregard password rules across different services. A password leak in one application can lead to the compromise of accounts in other services.

Reputation and Trust. Users will easily accuse the service provider (the application developer) of dishonesty if employees can read passwords, even if no illegal actions were ever taken. That would be a blow to the reputation of the company or project.

Password Hashing

First, it's important to note that when dealing with passwords, what is done is not encryption but hashing. It's crucial to understand that if something is encrypted, it can always be decrypted: encrypted information is the same information, just represented differently. Hashing works differently. A hash is completely new, unique information derived from some input data, in our case from a password. The key point is that recovering the original data from a hash is impossible (theoretically possible, but practically infeasible). In short, hashing is a one-way operation.
Here are some other, less obvious drawbacks of encryption that have made hashing the preferred method over time:

- Unlike a hash, encrypted output has a variable length, which isn't ideal for storage in a database or for sending server (or client) packets.
- Generating an encryption key takes more computational time than generating a hash.
- Encryption requires key management: the keys must be stored somewhere and kept secure to prevent unauthorized access.

What Does a Hash Look Like?

A hash is essentially a string of random characters: a set of symbols with no inherent meaning. The algorithm that generates such a string is called a hash function.

Currently, hashing can only be broken by brute force. This method is quite crude and only works on initially weak passwords: hackers simply try a long list of common passwords based on a dictionary. Each password is hashed and sent to the targeted server for authentication attempts, and this continues until a match is found. So, no magic is involved.

At the same time, hashing itself is a computationally expensive task; deriving a hash from the original data (the password) takes time. Some hash functions are deliberately slow (for example, through repeated hashing) specifically to increase the time needed for generation. In that case, a dictionary brute-force attack takes more time, giving the security team or the user an advantage.

Hashing in PostgreSQL

Built-in Extension pgcrypto

PostgreSQL ships with an extension specifically designed for password hashing, so it does not need to be downloaded separately. To activate it, run the following command:

CREATE EXTENSION pgcrypto;

This command loads the extension into your current database. Essentially, it executes the extension script that adds new SQL objects, such as functions, data types, operators, and indexing methods.

Adding Salt with gen_salt()

To make the hash even more secure, a salt is added during the hashing operation. The issue is that a hash function always generates the same value for the same input. This characteristic leads to several problems:

- Two identical passwords for two different users will have the same hash, whereas it would be better if they were different.
- Hackers use so-called rainbow tables: pre-hashed dictionaries of commonly used passwords that avoid unnecessary hash calculations during a brute-force attack.

The solution is simple: use not only the password but also an additional piece of text, the salt, as input during hashing. A salt is a pseudorandom string that ensures the uniqueness of the resulting hash. PostgreSQL has a gen_salt() function that takes the cryptographic algorithm as an argument:

- md5 (MD5)
- des (DES)
- xdes (Extended DES)
- bf (Blowfish)

For example, here's how you can get a salt using the fairly popular MD5:

SELECT gen_salt('md5');

In reality, many developers no longer recommend MD5, as it is considered insecure.
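If you want something stronger than MD5, the Blowfish (bf) algorithm is generally the recommended choice. A minimal sketch generating a bf salt with an explicit cost factor, run from the shell via psql (the postgres role is a placeholder):

# Generate a Blowfish salt with cost factor 10;
# higher cost factors make brute-force attacks slower
psql -U postgres -c "SELECT gen_salt('bf', 10);"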
Now that we've covered salt, let's look at the hashing itself.

Password Hashing with the crypt() Function

Whenever a user creates a new password or changes an existing one, PostgreSQL needs to store its hash. The hash is generated using the built-in crypt() function, which takes two arguments:

- the password string
- the salt string

Thus, generating the hash along with the salt looks as follows:

SELECT crypt('password', gen_salt('md5'));

By the way, if for any reason you don't want the hash to be unique, you can simply pass a constant value as the second argument:

SELECT crypt('password', 'nothing');

Verifying a Password Against a Previously Created Hash

Interestingly, verifying a password is done with the same hashing function; only the arguments differ. For example, to check whether the password "password" matches its hash, you would run:

SELECT crypt('password', hash);

Notice that the salt is not specified here, only the password and the hash. The output of this command will be the same hash that was created earlier and provided as the second argument. However, if the password is incorrect, the hash will differ:

SELECT crypt('another_password', hash);

To reiterate: calling crypt with the password "password" and the hash of this password (hash) returns the same value as hash. In any other case, the output differs.

How to Use PostgreSQL Hashing in Practice?

Creating a Table for Passwords

In a real project, credentials are stored in tables and are read or written as needed. Therefore, we will create a table accounts with three columns: identifier, login, and password hash:

CREATE TABLE accounts (identifier SERIAL, login VARCHAR(10), password VARCHAR(100));

Next, let's populate the table with some example credentials:

INSERT INTO accounts (login, password)
VALUES ('login_1', crypt('some_password', gen_salt('md5')));

This is roughly how you can store a password in a PostgreSQL database. The login field is also specified; it is typically the user's email or phone number.

Updating a Password in the Table

Whenever a user changes their password, the data in the table needs to be updated. In the most basic case, the query to store the hash of the new password looks as follows:

UPDATE accounts
SET password = crypt('new_password', gen_salt('md5'))
WHERE login = 'login_1';

Verifying the Entered Password Against the Saved One

During user authentication, the user's credentials are retrieved from the database and the password hashes are compared (here, 'entered_password' stands for the password the user typed in):

SELECT (password = crypt('entered_password', password)) AS password_match
FROM accounts
WHERE login = 'login_1';

If password_match equals t (true), the passwords match. If it equals f (false), the passwords are different.

By the way, the gen_salt function takes an additional argument: the number of iterations. This works only with the xdes and bf algorithms:

- For xdes, the iteration count can be any odd number between 1 and 16777215. The default is 725.
- For bf, the iteration count can be any integer between 4 and 31. The default is 6.

For example, here's how to set the number of iterations for Extended DES:

SELECT crypt('password', gen_salt('xdes', 963));

Of course, the number of iterations affects hashing time and increases password security. Choosing the right number is a balance between speed and security: a high value reduces convenience for both users and developers, while a low value reduces resistance to brute force. On average, server hardware is expected to compute between 15 and 100 hashes per second.
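The whole flow can also be exercised from the shell. A minimal sketch, assuming a local database named appdb with the pgcrypto extension installed and the accounts table created above (bf is used here instead of md5, per the earlier recommendation):

# Store a password hash for a new account
psql -U postgres -d appdb -c "INSERT INTO accounts (login, password) VALUES ('login_2', crypt('some_password', gen_salt('bf', 8)));"

# Verify a login attempt: returns t on a match, f otherwise
psql -U postgres -d appdb -c "SELECT (password = crypt('some_password', password)) AS password_match FROM accounts WHERE login = 'login_2';"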
Hashing on the Client Side or in the Server-Side Application

One approach to prevent sending the password from the client to the server in plain text is to hash it on the application side. This does complicate the client-server communication mechanism, but it is used in certain cases. Most web resources, however, use HTTPS encryption, which allows transmitting sensitive data in "plain" form over a secure connection.

Another option is to hash the password at the server-side application level (not on the client), rather than in the database itself. In this case, the pre-hashed password is placed in the database table just like any other value.

Many popular programming languages provide ready-made modules or libraries designed for password hashing. These are not just hash functions but higher-level constructs that let you work directly with password hashes and authentication. One such module is the bcrypt package, based on the Blowfish algorithm. The interface may vary from language to language, but the functionality remains the same. Here is a simple example of using bcrypt in Python:

import bcrypt

# Generate a hash before sending it to the DB
def generate_hash(password):
    # Convert string to bytes
    password_bytes = password.encode("utf-8")
    # Generate salt
    password_salt = bcrypt.gensalt()
    # Generate hash
    hash_bytes = bcrypt.hashpw(password_bytes, password_salt)
    # Convert bytes back to string
    hash_str = hash_bytes.decode("utf-8")
    return hash_str

# Authenticate by checking the password against the hash
def authenticate(password, hash):
    # Convert both password and hash from string to bytes
    password_bytes = password.encode("utf-8")
    hash_bytes = hash.encode("utf-8")
    # The salt is recovered from the hash itself,
    # so it is not needed separately during verification
    result = bcrypt.checkpw(password_bytes, hash_bytes)
    return result

Subsequently, API calls are added to send the generated hash to the database or read it back during authentication. Each programming language has its own libraries for working with hashes, so depending on which language you use, study the relevant documentation, standards, or open-source libraries that simplify working with passwords.

And most importantly: do not reinvent the wheel. Both PostgreSQL's built-in functions (extensions) and time-tested external libraries were created by experienced developers who have gone through numerous iterations of bug fixes and vulnerability patches. There is no sense in creating your own cryptographic "mashups," naively thinking they would be a better solution. Doing so will most likely lead to numerous internal issues and increase the chances of being hacked.

Conclusion

Storing passwords on the server side requires responsibility from the application's developer and system administrator, and it therefore carries certain reputational risks in the event of failures or attacks. Authentication is a separate IT domain, and building a reliable authentication system requires both experience and time. This is why there has been a growing trend toward "outsourcing" authentication: more and more services rely on external authentication systems whose developers specialize in security rather than business logic. It is a form of division of labor. For example, there are protocols (standards) like OpenID and OAuth 2.0; the latter is used in the Google API for user authentication, so anyone can integrate Google sign-in into their application or online service.
This is also beneficial for users: they can log in with their familiar email instead of creating numerous credentials, which always carry the risk of being lost. However, password-based authentication remains a reliable (albeit conservative) method, so securely storing user passwords in the database is essential to such an implementation.
16 December 2024 · 11 min to read
Linux

How to Use the find Command in Linux

One of the most effective tools for locating files and directories according to a number of criteria is the Linux find command. Learning to use this tool can save a lot of time and effort, whether you're a system administrator resolving file system problems or a casual Linux user searching for lost files. This tutorial covers find from its fundamental syntax to advanced use cases. By the end, you'll be able to use the tool effectively and confidently.

Why Use the find Command?

The find program helps you locate files and directories by a number of attributes, such as name, type, size, permissions, and modification date. It provides far more control and precision than graphical search tools. Here's what makes it special:

- Works recursively within directories.
- Supports complex filtering options.
- Executes actions on found files (like deleting, moving, or editing them).
- Handles large datasets efficiently.

Let's dive into its practical applications.

Basics of find

This is the general syntax for find:

find [starting_point] [expression]

[starting_point]: The directory where the search starts. Use . for the current directory, / for the entire filesystem, or specify a particular path.

[expression]: Defines what to search for. This can include file names, types, permissions, sizes, and other attributes.

A Simple Example

To find a file named notes.txt in your current directory and its subdirectories, run:

find . -name "notes.txt"

Let's break this down:

- . : the search starts in the current directory.
- -name : search by file name.
- "notes.txt" : the target file.

Searching by File Name

You can search for files using -name or -iname (case-insensitive).

Case-sensitive search:

find /home -name "project.txt"

Case-insensitive search:

find /home -iname "project.txt"

Partial matches: use wildcards (*) to find files matching a pattern:

find /var/log -name "*.log"

This command locates all .log files in /var/log.

Exploring File Types

Linux treats everything as a file, but find lets you filter by type using the -type option:

- Regular files: -type f
- Directories: -type d
- Symbolic links: -type l
- Sockets: -type s
- Character devices: -type c
- Block devices: -type b

Example, finding directories only:

find /etc -type d

Filtering by Time

Your search can be narrowed down based on when files were accessed, modified, or changed.

Modified time (-mtime). Find files changed in the past 7 days:

find . -mtime -7

Files modified exactly 7 days ago:

find . -mtime 7

Access time (-atime). Locate files accessed in the last 3 days:

find /tmp -atime -3

Change time (-ctime). Use -ctime to find files whose metadata changed, such as permissions or ownership:

find /var/www -ctime 5

Searching by Size

The -size flag allows you to search for files of a specific size. Units of size:

- k: kilobytes
- M: megabytes
- G: gigabytes

Exact size. Find files that are exactly 1 MB:

find /var -size 1M

Greater or smaller files. Larger than 100 MB:

find /home -size +100M

Smaller than 500 KB:

find /data -size -500k

Combining Conditions

Use logical operators to combine several search criteria:

- AND (-a): the default behavior.
- OR (-o): must be specified explicitly.
- NOT (!): excludes matches.

To find .txt files larger than 1 MB:

find . -name "*.txt" -a -size +1M

To exclude directories named backup:

find /data -type d ! -name "backup"
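When precedence matters, tests can also be grouped with escaped parentheses. A small sketch combining OR and AND (the path and patterns are example values):

# Find .php or .html files larger than 100 KB under /var/www
find /var/www \( -name "*.php" -o -name "*.html" \) -a -size +100k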
Taking Action on Located Files

find can do more than just search; it can also run operations on the files it finds. To do this, use the -exec or -ok flags.

Delete files:

find /tmp -name "*.tmp" -exec rm {} \;

- {} : placeholder for each located file.
- \; : marks the end of the -exec command.

For safer deletion, prompt for confirmation:

find /tmp -name "*.tmp" -ok rm {} \;

Move or copy files:

find /home/user/docs -name "*.pdf" -exec mv {} /home/user/backup/ \;

Run custom commands. You can run virtually any command on the located files. For instance, compress all .log files:

find /var/log -name "*.log" -exec gzip {} \;

Working with Permissions

Use these options to search files by ownership or permissions.

File permissions (-perm). Find files with exact permissions:

find /etc -perm 644

Locate files writable by others:

find /data -perm -o=w

User and group ownership. Files owned by root:

find / -user root

Files owned by group admin:

find / -group admin

Avoiding Errors and Boosting Efficiency

Ignoring errors. If you lack permissions for certain directories, suppress error messages with 2>/dev/null:

find / -name "config.yaml" 2>/dev/null

Optimizing searches. To improve performance, limit the search depth with -maxdepth:

find . -maxdepth 2 -name "*.sh"

Use -mindepth to start searching from a certain depth:

find . -mindepth 2 -name "*.txt"

Real-World Use Cases

Housekeeping logs:

find /var/log -name "*.log" -mtime +30 -exec rm {} \;

Archiving old files. Move files unused for over a year to an archive directory:

find /projects -atime +365 -exec mv {} /archive/ \;

Security audits. Find world-writable files, which may pose security risks:

find / -perm -o=w

Backup automation. Copy all .docx files to a backup directory:

find /documents -name "*.docx" -exec cp {} /backup/ \;

Combining find with Logical Operators

Logical operators such as -and, -or, and -not can be used to build more complex search expressions. These operators allow you to search for files that fulfill several conditions at once.

Find files with specific extensions:

find . \( -name "*.jpg" -or -name "*.png" \)

This searches for files that are either .jpg or .png.

Exclude certain files:

find /var/log -type f -not -name "*.gz"

This excludes .gz files from the search results.

Find files modified in the last week and owned by a user:

find . -mtime -7 -and -user alice

Using find for Security Audits

The find command is a valuable tool for identifying security vulnerabilities, such as world-writable files or files with unsafe permissions.

Find world-writable files:

find / -type f -perm /o=w

This identifies files that are writable by any user.

Locate SUID/SGID files:

find / -perm /4000 -o -perm /2000

This finds files with the SUID or SGID bit set, which can sometimes pose security risks.

Managing Large Data Sets

When managing large file systems, searching efficiently is key. Here are a few tips for optimizing find usage.

Limit results with -print and head:

find /data -type f -name "*.csv" -print | head -n 10

This command quickly previews the first 10 results.

Search in parallel. Use find with xargs for parallel processing:

find /large_dir -type f -print0 | xargs -0 -P4 -I{} echo "Processing {}"

This processes files in parallel using four processes (-P4).

Managing Special Characters in File Names

Files with spaces, newlines, or other special characters in their names can cause issues with find. To avoid problems, use -print0 together with xargs -0.

Safely delete files whose names may contain special characters:

find . -name "*.bak" -print0 | xargs -0 rm
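As an end-to-end illustration of these pieces working together, here is a small cleanup sketch; the /var/log path and the 7- and 30-day windows are example values:

#!/bin/bash
# Compress logs older than 7 days, then delete compressed logs older than 30 days.
# -print0 with xargs -0 keeps file names containing spaces or newlines safe,
# and -r skips the command entirely when nothing matches.
find /var/log -type f -name "*.log" -mtime +7 -print0 | xargs -0 -r gzip
find /var/log -type f -name "*.log.gz" -mtime +30 -print0 | xargs -0 -r rm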
Conclusion

The find command is a powerful utility that can transform how you interact with a Linux system. It lets you handle files with accuracy and flexibility, from basic searches to intricate workflows. Practice the examples in this guide and experiment with different options, and you will soon be using the tool to its full potential. Whatever your level of experience as an administrator, find is an essential part of your Linux toolbox. You can also try our reliable Linux VPS hosting for your projects.
16 December 2024 · 6 min to read

Tailored cloud server
solutions for every need

General-purpose cloud servers for web hosting

Ideal for websites, content management systems, and basic business applications, cloud web servers provide balanced performance with reliable uptime. These servers are designed to handle moderate traffic while keeping your website or application responsive.

High-performance servers for cloud computing


For businesses needing powerful resources for tasks like AI, machine learning, or data analysis, our high-performance cloud servers are built to process large datasets efficiently. Equipped with 3.3 GHz processors and high-speed NVMe storage, they ensure smooth execution of even the most demanding applications.

Storage-optimized cloud servers for data-driven operations

Need to store and retrieve large amounts of data? Our cloud data servers offer vast capacity with superior read/write speeds. These servers are perfect for databases, large-scale backups, or big data management, ensuring fast access to your data when you need it.

Memory-Optimized Servers for Heavy Workloads


These servers are built for applications that require high memory capacity, such as in-memory databases or real-time analytics. With enhanced memory resources, they ensure smooth handling of large datasets, making them ideal for businesses with memory-intensive operations.

In-depth answers to your questions

Which operating systems are supported on your cloud servers?

Choose popular server operating systems and deploy them in one click: from Ubuntu to CentOS. Licensed operating systems are available directly in the control panel.

How can I get started with a cloud server? Is there a straightforward registration process?

Register with Hostman and choose the plan that suits your needs and requirements. You can always add processing power and purchase additional services if needed.

You don't need a development team to get started: you'll do everything yourself in a convenient control panel. Even a person with no technical background can work with it easily.

What is the minimum and maximum resource allocation (CPU, RAM, storage) available for cloud servers?

The starter package includes a single-core 1.28 GHz CPU, 1 GB RAM, 15 GB of fast NVMe storage, a dedicated IP address, and a 200 Mbps channel. For demanding users, there is a powerful 8×3.3 GHz server with 16 GB RAM, 160 GB of fast NVMe storage, a dedicated IP address, and a 200 Mbps channel. Alternatively, you can always build an even more powerful server by configuring it yourself.

What scaling options are available for cloud servers?

You can easily add power, bandwidth, and channel width with just a few clicks directly in the control panel. With Hostman, you can enhance all the important characteristics of your server with hourly billing.

How much does a cloud server cost, and what is the pricing structure like?

Cloud server pricing depends on the configuration you choose: the more CPU, RAM, and storage you add, the higher the price. Billing is hourly, so you pay only for the resources you actually use, and you can adjust your configuration at any time in the control panel.

Is there a trial or testing period available for cloud servers before purchasing?

Contact the friendly Hostman support team, and they will offer you comfortable conditions for test-driving our cloud server and will even transfer your current projects to the cloud for free.

What security measures and data protection are in place for cloud servers?

Cloud servers are hosted in a Tier III data center with a high level of reliability. Hostman guarantees 99.99% availability according to the SLA, with downtime not exceeding 52 minutes per year. Additionally, data is backed up for extra security, and the communication channel is protected against DDoS attacks.

What level of support is provided for cloud servers?

Hostman support is always available, 7 days a week, around the clock. We respond to phone calls within a minute and chat inquiries within 15 minutes. Your questions will always be handled by knowledgeable staff with sufficient authority and technical background.

Can I install my own software on a cloud server?

Yes, absolutely! You can deploy any software, operating systems, and images you desire on your server. Everything is ready for self-configuration.

What backup and data recovery methods are available for cloud servers?

Hostman takes care of the security of your data and backs up important information. Additionally, you can utilize the automatic backup service for extra safety and reliability.

Is there a guaranteed Service Level Agreement (SLA) for cloud server availability?

Hostman guarantees a 99.99% level of virtual server availability according to the SLA (Service Level Agreement).

Which data center locations are available for hosting cloud servers?

Our servers are located in a modern Tier III data center in the European Union and the United States.

Can I create and manage multiple cloud servers under a single account?

Certainly, you can launch multiple cloud servers and other services (such as managed databases or VPS servers) within a single account.

What is the deployment time for cloud servers after ordering?

With Hostman, you'll get a service that is easy and quick to manage on your own. New cloud servers can be launched almost instantly from the control panel, and the necessary software can be installed within minutes.

What monitoring and notification capabilities are offered for cloud servers?

Hostman specialists monitor the technical condition of servers and software around the clock. You won't have to worry about server availability — it will simply work, always.

Can I modify the specifications of my cloud server (e.g., increase RAM) after creation?

You can easily configure your server by adding resources directly in the control panel. And if you need to switch to lower-tier plans, you can rely on Hostman support — our specialists will handle everything for you.

Do you have questions,
comments, or concerns?

Our professionals are available to assist you at any moment,
whether you need help or are just unsure of where to start.
Email us
Hostman's Support