
Cloud Database

Unlock the power of flexible, secure, and scalable cloud database solutions for your business.
Contact Sales
Advanced Scalability
With Hostman Cloud Database, you can effortlessly scale your databases up or down to match your workload demands. Our cloud-based database solutions ensure that you can handle sudden traffic spikes without compromising performance, providing seamless growth for your applications.
Enhanced Security
Security is a top priority at Hostman. Our cloud database security measures include advanced encryption, regular security audits, and stringent access controls. Your data is protected against unauthorized access and potential threats, ensuring peace of mind.
Easy Integration
Integrate additional resources and services with ease using Hostman Cloud Database. Whether you need to expand storage, add new applications, or connect to third-party services, our platform supports seamless integration, enabling you to enhance your capabilities effortlessly.
In-House Tech Support
Enjoy 24/7 technical support with Hostman Cloud Database. Our dedicated support team is always available to assist you with any issues or questions, ensuring that your database operations run without interruptions.

Tailored database solutions for every need

Versatile capabilities and distributions.

MySQL

Streamline app development with our fully managed MySQL environments, designed for optimal performance and scalability.

PostgreSQL

Unlock the power of PostgreSQL. We manage the details: you harness its advanced capabilities for your data-driven solutions.

Redis

Accelerate with managed Redis. Blazing-fast data handling, zero management overhead — all in your control.

MongoDB

Flexible, dynamic MongoDB management lets you focus on innovation while we handle the data agility your app needs.

OpenSearch

Managed OpenSearch powers your insights. We handle the complexity, you enjoy lightning-fast, scalable search capabilities.

ClickHouse

Instant analytics with managed ClickHouse. Fast, reliable, and maintenance-free — query at the speed of thought.

Kafka

Effortless data streaming with Kafka. Our management means reliable, scalable, real-time processing for your applications.

RabbitMQ

Seamless messaging with RabbitMQ. Let us manage the queues while you build responsive, interconnected app features.

What is Cloud Database?

A cloud database is a flexible, scalable solution that allows you to store and manage data in a cloud environment. It eliminates the need for physical hardware, offering seamless integration, automated scaling, and strong security measures. With Hostman Cloud Database, you can easily adapt to changing workloads, ensuring your applications run smoothly, even during traffic spikes.

Get started with Hostman
cloud database platform

Don’t let your database slow you down. Choose Hostman
for reliable, scalable cloud database solutions that
grow with your business.

Transparent, predictable pricing for your needs

MySQL
New York
CPU           RAM      NVMe       Bandwidth    Price
1 x 3 GHz     1 GB     20 GB      200 Mbps     $4/mo
2 x 3 GHz     2 GB     60 GB      200 Mbps     $9/mo
2 x 3 GHz     4 GB     80 GB      200 Mbps     $18/mo
4 x 3 GHz     8 GB     160 GB     200 Mbps     $36/mo
6 x 3 GHz     16 GB    320 GB     200 Mbps     $72/mo
8 x 3 GHz     32 GB    640 GB     200 Mbps     $114/mo
16 x 3 GHz    64 GB    1280 GB    200 Mbps     $288/mo

One panel to rule them all

Easily control your database, pricing plan, and additional services
through the intuitive Hostman management console.
Easy set up and management
Ready-to-deploy cloud database solutions come pre-configured. Choose your setup, launch your database, and begin managing your data with ease.
Saves time and resources
Forget about configuring hardware and software or manual database management—our service has it all covered for you.
Security
Deploy databases on an isolated network to maintain private access solely through your own infrastructure.
Hostman Cloud
Anup k.
Associate Cloud Engineer
5.0 out of 5

"Hostman Comprehensive Review of Simplicity and Potential"

It has been a few years that I have been working on cloud, and most of the cloud services...
Mansur H.
Security Researcher
5.0 out of 5

"A perfect fit for everything cloud services!"

Hostman's seamless integration, user-friendly interface, and robust features (backups, etc.) make it much easier...
Adedeji E.
DevOps Engineer
5.0 out of 5

"Superb User Experience"

For me, Hostman is exceptional because of its flexibility and user-friendliness. The platform's ability to offer dedicated computing resources acr...
Yudhistira H.
Mid-Market(51-1000 emp.)
5.0 out of 5

"Streamlined Cloud Excellence!"

What I like best about Hostman is their exceptional speed of deployment, scalability, and robust security features. Their...
Mohammad Waqas S.
Biotechnologist and programmer
5.0 out of 5

"Seamless and easy to use Hosting Solution for Web Applications"

From the moment I signed up, the process has been seamless and straightforward...
Mohana R.
Senior Software Engineer
5.0 out of 5

"Availing Different DB Engine Services Provided by Hostman is Convenient for my Organization usecases"

Hostman manages the cloud operations...
Faizan A.
5.0 out of 5

"Hostman is a great fit for me"

Hostman is a great fit for me. It was very easy to deploy my application and create a database; I didn't have...
Adam M.
5.0 out of 5

"Perfect website"

This website is extremely user friendly and easy to use. I had no problems so didn't have to contact customer support. Really good website and would recommend to others.
Anup K.
4.0 out of 5

"Simplifying Cloud Deployment with Strengths and Areas for Growth"

What I like best about Hostman is its unwavering commitment to simplicity...
Naila J.
5.0 out of 5

"Streamlined Deployment with Room for Improvement"

Hostman impresses with its user-friendly interface and seamless deployment process, simplifying web application hosting...

Compare Hostman Cloud Database with leading providers

Discover how Hostman stands out against other top cloud database providers in terms of pricing, support, and features.
Price: Hostman from $4/mo; DigitalOcean $6; Google Cloud $6.88; Vultr $5
Tech support: Hostman free; DigitalOcean $24/mo; Google Cloud $29/mo + 3% of monthly charges; Vultr $29/mo or 3% of monthly charges
Backups: Hostman from $0.07/GB; DigitalOcean 20% or 30% higher base daily/weekly fee; Google Cloud $0.03/GB per month; Vultr 20% higher base monthly/hourly fee
Bandwidth: Hostman free; DigitalOcean 1 TB free, then $0.01/GiB of additional transfer; Google Cloud $0.01 per GB; Vultr $0.09/GB for the first 10 TB/mo
Live chat support
Avg. support response time: Hostman <15 min; DigitalOcean <24 hours; Google Cloud <4 hours; Vultr <12 hours

Ready to get started?

Sign up and discover how easy cloud database
management can be with Hostman.

Start turning your ideas into solutions with Hostman products and services

See all Products

Trusted by 500+ companies and developers worldwide

Global network of Hostman's data centers

Our servers, certified with ISO/IEC 27001, are located in Tier 3 data centers across the US, Europe, and Asia.
🇺🇸 San Francisco
🇺🇸 San Jose
🇺🇸 Texas
🇺🇸 New York
🇳🇱 Amsterdam
🇳🇬 Lagos
🇩🇪 Frankfurt
🇵🇱 Gdansk
🇦🇪 Dubai
🇸🇬 Singapore

Explore more of Hostman cloud databases

SQL

SQL Constraints

When working with SQL tables, you'll often need to set constraints on the data stored in a specific table. For instance, if you have a table with employee data, it's logical that some fields should not contain null values. You can apply such a constraint to the SQL values with a simple command. You can also require that entered values be unique or that data be checked against certain conditions. In this article, we'll look at how to do this and cover all possible types of constraints, but first, let's start with some terminology.

What Are SQL Constraints?

An SQL constraint is a rule that we apply to fields in SQL, determining which values are allowed and which are not. After adding a constraint, the program will check whether it's possible to insert, update, or delete data in the table based on the user-defined constraints. If not, the operation will not be executed, and the program will return an error. Now, let's explore all possible types of constraints in SQL databases, and for clarity, we'll provide examples that could be practically useful for you.

Adding SQL Constraints

You can create SQL constraints using the following commands: PRIMARY KEY, FOREIGN KEY, UNIQUE, CHECK, and NOT NULL.

NOT NULL Constraint

The NOT NULL constraint ensures that a column must have a value for every record, meaning the value cannot be null. This prevents empty values from being stored in the column. Let's create a table with a column that has this constraint:

CREATE TABLE Countries (Country VARCHAR(46) NOT NULL, Capital VARCHAR(46));

Here, we allow the name of the country's capital to be omitted, but the name of the country must always be provided. Let's try to add a record that violates this rule:

INSERT INTO Countries VALUES (null, 'Madrid');

It will result in this error:

Column 'Country' cannot be null

However, this record will not trigger an error, because we did not prohibit leaving the capital name (Capital) column empty:

INSERT INTO Countries VALUES ('Spain', null);

The NOT NULL constraint can be useful for columns with contact information when we need to require the user to enter their email address or phone number, for example. Such mandatory fields often use the NOT NULL constraint to ensure that the user enters a specific value:

CREATE TABLE Subscribers (SubscriberName VARCHAR(46) NOT NULL, SubscriberContact VARCHAR(46) NOT NULL);

In this case, we require users to enter their name and email address, setting a 46-character limit for each field in the table. Specifying character limits for certain fields can also be useful to prevent the addition of invalid data. This is often done for efficiency, so the database size is not unnecessarily inflated.

UNIQUE Constraint

This constraint's name fully reflects its essence. The UNIQUE constraint ensures that no two values in the specified column will be the same. Let's take a look at a table where UNIQUE is used:

CREATE TABLE Workers1 (WorkerName VARCHAR(46) NOT NULL, WorkerDate DATE, WorkerContact BIGINT UNIQUE);

We created an employee table, where we will add the employee's name (the field cannot be empty, as we have already set the familiar NOT NULL constraint), the hire date (in date format, indicated by the DATE data type), and the phone number (stored as BIGINT so that a full-length number fits). The phone number must be unique, as indicated by the UNIQUE constraint.
Let's insert the following data into our table:

INSERT INTO Workers1 VALUES ('John Smith', DATE '2018-05-10', 375123456789);

Now, when trying to insert a row with the same phone number:

INSERT INTO Workers1 VALUES ('James Smith', DATE '2020-06-11', 375123456789);

the program will throw an error:

Duplicate entry 375123456789 for key 'uniqueconstraint.WorkerContact'

The UNIQUE constraint is ideal for columns that should not contain duplicate values. For example, each of us has a unique social security number. Therefore, if a table contains a column for SSNs, it should use the UNIQUE constraint. This is necessary to avoid two people having the same number, whether inserted by mistake or intentionally.

CHECK Constraint

The CHECK constraint is used to validate values based on a specific condition. Consider the following example:

CREATE TABLE Customers1 (CustomerName1 VARCHAR(46), CustomerName2 VARCHAR(46), CustomerEmail VARCHAR(56), CustomerAge INTEGER CHECK (CustomerAge > 17));

We have set an age constraint where the age must be greater than 17. Now, let's see what happens when a customer enters the following data:

INSERT INTO Customers1 VALUES ('John', 'Smith', '[email protected]', 17);

This is what we'll see:

Check constraint 'checkconstraint_chk_1' is violated

The CHECK constraint can be used to implement custom rules. For example, if a table should only store data for adults, we could use the CHECK constraint for the CustomerAge column (CustomerAge > 17, as shown in the example above). Another example: if the table should only store data for Cyprus citizens, we could use CHECK for a new column CustomerCountry: CHECK (CustomerCountry = 'Cyprus').

PRIMARY KEY Constraint

PRIMARY KEY is one of the table key constraints in SQL, specifically the primary key. It is used to create an identifier that corresponds to each row in the table. A table can have only one PRIMARY KEY, since it serves as the row identifier. Therefore, each PRIMARY KEY value must be unique, and null values are not allowed in the column defined by the PRIMARY KEY. To better understand this, let's look at the following example:

CREATE TABLE Workers2 (id INTEGER PRIMARY KEY, WorkerName1 VARCHAR(46), WorkerName2 VARCHAR(46), WorkerAge INTEGER CHECK (WorkerAge > 17));

As we can see, the PRIMARY KEY constraint allows us to assign an employee ID, so that each record can be referenced using a unique numeric key. Also, notice the familiar CHECK constraint in the age column.

FOREIGN KEY Constraint

A FOREIGN KEY constraint creates a reference to a PRIMARY KEY from another table. A column with a FOREIGN KEY points to the PRIMARY KEY column of another table, linking the current table to it through this constraint. To better understand what this key does, let's look at an example of a FOREIGN KEY constraint linked to the PRIMARY KEY from the previously created table:

CREATE TABLE WorkersTaxes (WorkerTax INTEGER, Worker_id INTEGER, FOREIGN KEY (Worker_id) REFERENCES Workers2(id));

We needed to create a table for calculating workers' taxes. To link this table (WorkersTaxes) with the workers' table (Workers2), we used a FOREIGN KEY reference, which identifies workers based on the PRIMARY KEY from the Workers2 table. This way, we achieved data consistency, and now each employee can easily be identified in both tables by the linked keys.
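To make the effect of the FOREIGN KEY constraint concrete, here is a brief illustrative sketch (an addition to the article's examples, not part of the original; the exact error text varies by DBMS). An insert that references an existing worker succeeds, while one that references a missing id is rejected:

-- First give Workers2 a row to reference.
INSERT INTO Workers2 VALUES (1, 'John', 'Smith', 30);

-- Allowed: Worker_id = 1 exists in Workers2.
INSERT INTO WorkersTaxes (WorkerTax, Worker_id) VALUES (13, 1);

-- Rejected: no row in Workers2 has id = 99, so the FOREIGN KEY constraint blocks the
-- insert (MySQL reports "Cannot add or update a child row: a foreign key constraint fails").
INSERT INTO WorkersTaxes (WorkerTax, Worker_id) VALUES (13, 99);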
Other Constraints

We should also note that the SQL standard sometimes includes DEFAULT as a constraint; however, DEFAULT does not restrict the type of data entered, so technically it is not considered a constraint. Nevertheless, it is important to mention here because it enables a crucial function: setting default values when users do not input data. This can be useful, for example, to avoid potential errors when data is not entered. Let's consider the following example:

CREATE TABLE Customers2 (CustomerName1 VARCHAR(46) NOT NULL, CustomerName2 VARCHAR(46) NOT NULL, CustomerAge INTEGER DEFAULT 18);

Now, if the customer does not provide an age, it will automatically be set to 18. And we required the customer to enter their first and last name using the already familiar NOT NULL constraint.

We hope you now understand how to use each SQL constraint and the benefits they offer. Good luck with your work!
13 December 2024 · 6 min to read
SQL

How to Select Data in SQL

In the modern world, where information is becoming an increasingly valuable resource, databases (DBs) remain an integral part of any information system, and the ability to retrieve data from them with maximum efficiency becomes a decisive factor in successfully working with these systems. SQL (Structured Query Language) is a specialized programming language for managing records stored in relational databases. Within SQL, there are many operators and methods that allow developers to retrieve the required information from a DB.

This article is a practical guide for those who want to learn how to select data from an SQL table. In this guide, we will explore the syntax of the SELECT statement, learn how to filter data using WHERE, and examine how to aggregate data using GROUP BY and HAVING.

Basics of the SELECT Statement

SQL, being an incredibly flexible language for managing data, offers many tools for working with information stored in databases. One of the most important and widely used tools is the SELECT statement, which allows users to retrieve information from a DB. This statement allows us to select specific columns from a table and apply various operations to the data. The syntax of the SELECT statement is simple and easy to understand. It begins with the keyword SELECT, followed by a list of columns from which we will retrieve data and the table name from which we will extract the data. Here's what it looks like in practice:

SELECT field1, field2
FROM data_table;

In this example, field1 and field2 are the specific columns we want to retrieve, and data_table is the name of the table from which we want to fetch the data. You can use the SELECT statement in many different ways. For example, if the task is to select all columns from a particular table, we can use the * symbol, which serves as a wildcard for all columns:

SELECT * FROM StaffMembers;

This query will return all the data contained in the StaffMembers table. In addition, we can use SELECT to retrieve only unique values from a specific column, excluding duplicate entries, which is especially useful when analyzing data:

SELECT DISTINCT DivisionID
FROM StaffMembers;

In this example, the query returns a list of unique DivisionID values from the StaffMembers table, removing all duplicate entries.

The SELECT statement also allows the use of various aggregation functions, such as COUNT, SUM, AVG, and others. These functions are key for performing aggregate operations that help analyze large volumes of data to obtain totals, averages, or other types of aggregate statistics. For example, we can use the COUNT function to count the number of rows in a table:

SELECT COUNT(StaffID)
FROM StaffMembers;

This query will return the total number of employees. Similarly, we can use other aggregation functions to calculate sums, averages, and other aggregate statistics for the data.

Another useful operator is ORDER BY, which orders the results of a query according to specific criteria. This operator allows us to sort data either in ascending (ASC) or descending (DESC) order. If we do not specify the sort order explicitly, ascending order will be used by default. Here's how it looks in practice:

SELECT *
FROM StaffMembers
ORDER BY Surname DESC;

In this example, the query results will be presented in sorted order, where the data will be sorted by employees' surnames in reverse alphabetical order, from the last name in the alphabet to the first.
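Picking up the aggregate functions mentioned above (SUM, AVG and the rest), here is a short illustrative sketch against the same StaffMembers table; the Wage column is borrowed from the WHERE examples later in this article, so treat the exact column names as assumptions:

-- Average, total, lowest, and highest wage across all employees.
SELECT AVG(Wage) AS avg_wage,
       SUM(Wage) AS total_wage,
       MIN(Wage) AS min_wage,
       MAX(Wage) AS max_wage
FROM StaffMembers;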
The SELECT statement plays an important role in SQL as it determines which specific data will be included in the query results. It can be used in conjunction with other operators, so let’s move on to the next key SQL query component—the WHERE clause, which allows us to set specific conditions for data selection. Using WHERE to Filter Data The WHERE clause in SQL provides data filtering based on specified conditions, allowing you to retrieve, update, or delete only the data that meets certain criteria. Without the WHERE clause, we would be forced to extract all data from the table and then manually filter it, for example, in an application, to perform specific tasks. This would be highly inefficient, especially for large databases. The WHERE clause can be used with various operators such as equality (=), inequality (<>), greater than (>), less than (<), greater than or equal to (>=), less than or equal to (<=), as well as more specialized operators like BETWEEN, which allows you to specify a range of values, LIKE, designed for pattern matching, and IN, which allows you to select data from a specific set. Let’s look at some examples of using WHERE to filter data. Using the WHERE clause with the equality operator (=): SELECT *FROM StaffMembers WHERE StaffID = 123456; In this case, the equality operator selects the record where the employee ID matches the number 123456. This is the simple equivalent of the equality operator in mathematics. Using WHERE with the greater than (>) or less than (<) operators: SELECT *FROM StaffMembers WHERE Wage > 60000; Here, the > operator is used, which filters out unnecessary data and returns information about employees whose wage exceeds 60,000. This operator can be useful when searching for records with values above or below a certain threshold. Using WHERE with BETWEEN: SELECT *FROM StaffMembers WHERE Wage BETWEEN 60000 AND 80000; The BETWEEN operator allows you to select records that fall within a specified range of values. In this case, it selects all employees whose salary is between 60,000 and 80,000, inclusive. This is useful when you need to extract a specific range of values. Using WHERE with LIKE and Wildcard Characters: SELECT *FROM StaffMembers WHERE StaffName LIKE '%ja%'; The LIKE operator is used for pattern matching. In SQL databases, two wildcard characters can represent patterns: % replaces zero or more characters, and _ replaces exactly one character. In this specific example, the query returns all records from the StaffMembers table where the staff names contain "ja." This approach is often used when the exact value is not known, or when multiple matches need to be found. These are just a few examples of the possibilities with WHERE in SQL. The variety of combinations and operators makes it a powerful tool when working with data. Next, we will look at the AND, OR, and NOT operators, which are commonly used together with WHERE to create more complex queries for databases. Using the AND, OR, and NOT Operators AND, OR, and NOT are key logical operators in SQL. They are used to combine or invert conditions in SQL operators such as WHERE, HAVING, and others. The AND operator is used to create a query that returns true only when both conditions being compared are true. Let’s consider an example: SELECT * FROM StaffMembers WHERE Wage > 60000 and ExperienceYears > 3; In this case, the AND operator links two selection criteria: a wage greater than 60,000 and more than 3 years of experience. 
The result of this query will be records from the table that satisfy both conditions simultaneously. The OR operator returns true if at least one of the conditions is true: SELECT * FROM StaffMembers WHERE Division = 'Production' OR Division = 'Advertising'; Here, the OR operator connects two selection conditions. The query will return records from the StaffMembers table where the employee belongs to either the 'Production' or 'Advertising' department. The NOT operator inverts the logical value of a condition, returning true if the condition is false, and false if the condition is true: SELECT * FROM StaffMembers WHERE NOT (Division = 'HR'); In this query, the NOT operator inverts the condition Division = 'HR'. The query will return all rows from the StaffMembers table where the department is not 'HR'. This allows you to create queries that exclude certain categories of data. These operators can be combined in any way to create complex conditions. For example: SELECT * FROM StaffMembers WHERE (Division = 'Production' OR Division = 'Advertising') AND ExperienceYears > 5; Here, the AND and OR operators are combined to create a more complex selection condition. The query will return only those records from the StaffMembers table where the department is either 'Production' or 'Advertising' and the employee has more than five years of experience. Aggregating Data with GROUP BY and HAVING In SQL, GROUP BY and HAVING are often used together to aggregate data and calculate various statistical measures based on grouping data by predefined criteria. Let’s take a closer look at the GROUP BY operator. It is used to group rows in the result set by the values of a specific column or group of columns.  After the grouping, we can use aggregation functions like COUNT, SUM, AVG, and others to calculate statistical data for each individual group. Example: SELECT ClientID, COUNT(PurchaseID)FROM PurchasesGROUP BY ClientID; In this example, we count the total number of purchases (PurchaseID) made by each client (ClientID). The HAVING operator is similar to WHERE, but the key difference is that HAVING is applied after the grouping has been done using GROUP BY. The main purpose of HAVING is to filter groups based on already computed aggregate values. This allows us to display only those groups that meet the criteria we set. For instance, after performing the aggregation, we can filter to only groups meeting a certain threshold. Example: SELECT ClientID, COUNT(PurchaseID)FROM PurchasesGROUP BY ClientIDHAVING COUNT(PurchaseID) > 3; In this example, we only see clients (ClientID) whose total number of orders exceeds three. Note that HAVING is applied in SQL queries exclusively after using GROUP BY. You cannot use HAVING without first grouping the data using GROUP BY. In general, the order of operations in SQL looks like this: FROM: Specify the data source. WHERE: Filter data before grouping. GROUP BY: Group rows into sets based on column values. HAVING: Filter groups after they’ve been created. SELECT: Specify which columns will appear in the query result. ORDER BY: Sort the results in the desired order. This sequence reflects the logic of query processing in SQL. The filtering conditions through WHERE are applied before grouping, which helps reduce the volume of data being processed. Conditions defined in HAVING apply to already formed data groups, allowing for more detailed analysis. The GROUP BY and HAVING operators are essential tools for data aggregation in SQL.  
Their use provides extensive data analysis capabilities, allowing statistical data collection and the identification of patterns, trends, and relationships within the data.

Using JOIN to Combine Tables

Often, developers need to select data from two SQL tables. To accomplish this, the JOIN operator is used, allowing data from two or more sources to be combined based on matching values in specific columns. Tables in a database usually have linking columns that correlate with keys in other tables, thus enabling the linking of data. This allows for automatic synchronization of changes across related tables, which is an invaluable advantage when working with large databases where information is split across multiple tables. The structure of a query using JOIN looks like this:

SELECT dataField(s)
FROM tableA
JOIN tableB
ON tableA.dataField = tableB.dataField;

In this case, JOIN is used to combine two tables (tableA and tableB). The join is performed based on a common column (dataField). Additionally, the query includes the selection of specific columns (dataField(s)) that the developer wants to display in the final result. It is important to note that in SQL, there are different types of table joins, including:

INNER JOIN: This allows us to retrieve only those rows that have matching records in both tables, meaning where the join condition is met:

SELECT Purchases.PurchaseID, Clients.ClientName
FROM Purchases
INNER JOIN Clients
ON Purchases.ClientID = Clients.ClientID;

LEFT (OUTER) JOIN: This is used when we need to retrieve all rows from the left table (the one specified first in the query), and the matching rows from the right table. If there are no matching rows in the right table, the results for those rows will contain NULL values:

SELECT Clients.ClientName, Purchases.PurchaseID
FROM Clients
LEFT JOIN Purchases
ON Clients.ClientID = Purchases.ClientID;

RIGHT (OUTER) JOIN: This works similarly to the LEFT JOIN, but in reverse. Here, we get all the records from the right table, supplemented with matching data from the left table. If no matches are found for records from the right table, NULL will be placed in the columns for the left table:

SELECT Clients.ClientName, Purchases.PurchaseID
FROM Clients
RIGHT JOIN Purchases
ON Clients.ClientID = Purchases.ClientID;

FULL (OUTER) JOIN: This type of join gives us all rows from both tables that have corresponding records. In other words, it combines the LEFT and RIGHT JOINs. If there are rows in the first table with no matching rows in the second table, the columns from the second table will contain NULL for those rows. Similarly, if records from the second table do not have matches in the first table, the columns from the first table will contain NULL for those rows:

SELECT Clients.ClientName, Purchases.PurchaseID
FROM Clients
FULL OUTER JOIN Purchases
ON Clients.ClientID = Purchases.ClientID;

It is worth noting that although the FULL (OUTER) JOIN is a standard SQL feature, not all SQL systems support it. For example, MySQL does not have built-in support for FULL (OUTER) JOIN, but you can emulate it using a combination of LEFT JOIN and UNION:

SELECT Clients.ClientName, Purchases.PurchaseID
FROM Clients
LEFT JOIN Purchases ON Clients.ClientID = Purchases.ClientID
UNION
SELECT Clients.ClientName, Purchases.PurchaseID
FROM Purchases
LEFT JOIN Clients ON Clients.ClientID = Purchases.ClientID
WHERE Clients.ClientID IS NULL;

This query first performs a left outer join, attaching records from the Purchases table to the Clients table.
Then, it joins records from Clients to Purchases that were not selected in the first query (i.e., those where ClientID is NULL). Finally, it combines the results of these two queries. In this section, we discussed different types of JOIN in SQL. Each of these joins provides flexibility in managing which data from related tables we want to see in the result set. Conclusion In this guide, we explored the use of SQL operators such as SELECT, WHERE, ORDER BY, JOIN, GROUP BY, and HAVING through practical examples. These operators offer users extensive capabilities for processing information, enabling complex analytical queries and extracting maximum value from stored data. We hope that you now have a clear understanding of how to use SQL to extract data from a database!
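As a closing illustration (not part of the original article), the operators covered above can be combined in a single query. This sketch reuses the Clients and Purchases tables from the JOIN section and treats the filter values as placeholders:

-- Clients whose names start with 'A', with at least two purchases, most active first.
SELECT Clients.ClientName, COUNT(Purchases.PurchaseID) AS purchase_count
FROM Clients
LEFT JOIN Purchases ON Clients.ClientID = Purchases.ClientID
WHERE Clients.ClientName LIKE 'A%'          -- filter rows before grouping
GROUP BY Clients.ClientName                 -- one group per client
HAVING COUNT(Purchases.PurchaseID) > 1      -- keep only groups with two or more purchases
ORDER BY purchase_count DESC;               -- sort the remaining groups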
13 December 2024 · 12 min to read
SQL

How to Insert Data into SQL Databases

The data insertion operation in relational databases is one of the most essential tasks. In this article, we will explain how to perform this operation using relational database management systems (DBMS) that work with the SQL language. We will use MySQL, the most popular SQL-based DBMS. As an example, we will consider two tables: one for leading European football clubs (including their national affiliation, year of establishment, number of national championships won, domestic cups, and European trophies), and another for some leading countries worldwide (with their capitals and population in millions). Initially, the tables (let's call them Clubs and Countries) will be empty, containing only column names. Our task is to populate them using various SQL commands so that they appear as follows: Club Country Year Champs Cups Eurocups Real Madrid Spain 1902 35 19 21 Barcelona Spain 1899 26 31 18 Milan Italy 1899 19 5 14 Juventus Italy 1897 36 14 8 Bavaria Germany 1900 32 20 10 The Countries table: Country Capital Population Russia Moscow 147 USA Washington 336 China Beijing 1427 India Delhi 1435 Brazil Brasilia 218 So, first, we need to create a database and two tables.  Connect to the MySQL server (replace xxx.xxx.xxx.xxx with the appropriate IP address): mysql -u root -h xxx.xxx.xxx.xxx -p Then enter the following command: CREATE DATABASE TestDB; To verify that the new database has been successfully created, use the command: SHOW DATABASES; Our database, TestDB, should appear in the list. Next, we need to grant users access to this database. For example, if we already have a user test_user, we can grant them access using the command: GRANT ALL PRIVILEGES ON TestDB.* TO 'test_user'@'%' WITH GRANT OPTION; Now, we can proceed to create the tables. Let’s start with the table for clubs: CREATE TABLE Clubs ( Club VARCHAR(64) NOT NULL, Country VARCHAR(32), Year INT, Champs INT, Cups INT, Eurocups INT ); This means that for the first two columns, we specified string values, with the length of the data in each cell not exceeding 64 and 32 characters, respectively. Additionally, cells in the Club column cannot be empty when adding data (NOT NULL). For the remaining four columns, we designated integer values (INT). Now, following the same pattern, we create the second table: CREATE TABLE Countries ( Country VARCHAR(32) NOT NULL, Capital VARCHAR(32), Population INT ); That’s it! Our tables are created and ready to be populated. The INSERT INTO Statement The INSERT INTO statement allows you to insert data into an SQL table. However, the data is inserted in the column order, so you must know the exact sequence of columns in the table. Let’s insert data into the first rows of our tables: INSERT INTO Clubs VALUES("Real Madrid", "Spain", 1902, 35, 19, 21); INSERT INTO Countries VALUES("Russia", "Moscow", 147); Note that we must specify values for all columns. For instance, the following entries would result in errors or misaligned values (e.g., we omitted the club’s founding year and the country’s capital): INSERT INTO Clubs VALUES("Real Madrid", "Spain", 35, 19, 21); INSERT INTO Countries VALUES("Russia", 147); The INSERT INTO Statement with Column List This method is more reliable as it prevents errors when skipping columns. 
However, you must specify the column names explicitly: INSERT INTO Clubs(Club, Country, Champs, Cups, Eurocups) VALUES("Barcelona", "Spain", 26, 31, 18); INSERT INTO Countries(Country, Capital) VALUES("USA", "Washington"); In these examples, we skipped the club's founding year (the Year column) in the first case and the population (the Population column) in the second. No errors occur because these fields simply remain empty and can be filled later. The INSERT INTO Statement for Bulk Insertion Adding data row by row is not always convenient. Therefore, let’s look at how to insert data into multiple rows simultaneously. We can do it with the following command: INSERT INTO Clubs(Club, Country, Eurocups) VALUES ("Real Madrid", "Spain", 21), ("Barcelona", "Spain", 18), ("Milan", "Italy", 14), ("Juventus", "Italy", 8), ("Bavaria", "Germany", 10); INSERT INTO Countries(Country, Population) VALUES ("Russia", 147), ("USA", 336), ("China", 1427), ("India", 1435), ("Brazil", 218); In the first table, we populated the columns with the names of the clubs, their national affiliation, and the number of European trophies won, leaving the remaining columns empty. In the second table, we omitted the countries' capitals.  As we can see, there are some syntax differences, and the commands are split across multiple lines. The SET Statement In combination with INSERT INTO, the SET statement allows you to insert a single record into a table: INSERT INTO Clubs SET Club="Milan", Country="Italy", Year=1899, Champs=19, Cups=5, Eurocups=14; INSERT INTO Countries SET Country="China", Capital="Beijing", Population=1427; However, this is also its drawback, as it does not allow us to insert multiple records into an SQL table at once. For that purpose, the previous method is more suitable. How to Insert Data from Another SQL Table Suppose we have other tables with the same columns, and we want to add their data to our tables. This can be done using the SELECT statement in combination with the familiar INSERT: INSERT INTO Clubs(Club, Country, Year, Champs, Cups, Eurocups) SELECT Club, Country, Year, Champs, Cups, Eurocups FROM Clubs2; INSERT INTO Countries(Country, Capital, Population) SELECT Country, Capital, Population FROM Countries2; The IGNORE Clause for Avoiding Errors We can use the IGNORE clause to prevent MySQL from halting when trying to insert invalid values. For example, if we set a uniqueness constraint on the Club column to ensure that each club name is unique, the clause helps avoid interruptions. In MySQL, the first value in a table is always treated as unique, so manually adding such a constraint may not be necessary. For instance, if we already have a row like this: Real Madrid Spain 1902 35 19 21 This command will result in an error: INSERT INTO Clubs VALUES("Real Madrid", "Spain", 1902, 35, 19, 21); We already have a row with the value Real Madrid in the first column. However, duplicates are likely to occur when copying data from multiple tables. To ensure the program ignores these duplicates without generating errors, we use the IGNORE clause: INSERT IGNORE Clubs(Club, Country, Year, Champs, Cups, Eurocups) VALUES("Real Madrid", "Spain", 1902, 35, 19, 21); The same is true for the second example. We already have the line:  Russia Moscow 147 Running this query: INSERT INTO Countries VALUES("Russia", "Moscow", 147); — will result in an error. 
That’s why we should use IGNORE: INSERT IGNORE Countries(Country, Capital, Population) VALUES("Russia", "Moscow", 147); The program will simply ignore the duplicate row and continue execution without throwing an error. The LOAD DATA Statement for Importing from a Text File Suppose we have an empty Clubs table with the appropriate columns, and we need to populate it with information from a text file. The LOAD DATA statement allows us to do this. However, you first need to prepare the text file with the data. Open your file (let's say Clubs.txt) in a text editor and format the data as follows, separating values with a tab character: 'Real Madrid' 'Spain' '1902' '35' '19' '21' 'Barcelona' 'Spain' '1899' '26' '31' '18' 'Milan' 'Italy' '1899' '19' '5' '14' 'Juventus' 'Italy' '1897' '36' '14' '8' 'Bavaria' 'Germany' '1900' '32' '20' '10' This SQL query will insert data in the table, placing them in the right columns. But what if we don’t have values for specific columns? Suppose we don’t know when the clubs were established. The record will look like this: 'Real Madrid' 'Spain' '\N' '35' '19' '21' 'Barcelona' 'Spain' '\N' '26' '31' '18' 'Milan' 'Italy' '\N' '19' '5' '14' 'Juventus' 'Italy' '\N' '36' '14' '8' 'Bavaria' 'Germany' '\N' '32' '20' '10' \N means that the cell in the table will remain empty. Now, we just need to load the data into SQL, but first, let's enable working with local files like this: set global local_infile=true; exit mysql --local_infile=1 -u test_user -h xxx.xxx.xxx.xxx -p The Linux command to load data is: LOAD DATA LOCAL INFILE '/your_directory/Clubs.txt' INTO TABLE Clubs; And in Windows, it is done like this: LOAD DATA LOCAL INFILE '/your_directory/Clubs.txt'' INTO TABLE Clubs LINES TERMINATED BY '\r\n'; However, sometimes the system might not respond to these instructions. In that case, you will need to enable working with local files in MySQL, which is explained in detail in the official documentation. To summarize, the value of unsigned int in the MYSQL_OPT_LOCAL_INFILE option, related to the mysql_options() settings, must be non-zero. That’s all for now! Now you know how to insert data into SQL databases using various statements. Just choose the most appropriate statement for the specific case, and you’ll avoid issues when copying data.
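One related option the article does not cover, mentioned here as a hedged aside: MySQL also supports INSERT ... ON DUPLICATE KEY UPDATE, which updates the existing row instead of silently skipping it the way INSERT IGNORE does. The sketch below assumes a UNIQUE or PRIMARY KEY on the Club column, as discussed in the IGNORE section:

-- If a row with the unique key 'Real Madrid' already exists, refresh its trophy
-- counts instead of raising a duplicate-key error.
INSERT INTO Clubs (Club, Country, Year, Champs, Cups, Eurocups)
VALUES ('Real Madrid', 'Spain', 1902, 36, 20, 21)
ON DUPLICATE KEY UPDATE
  Champs = VALUES(Champs), Cups = VALUES(Cups), Eurocups = VALUES(Eurocups);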
12 December 2024 · 8 min to read
MySQL

The UPDATE Command: How to Modify Records in a MySQL Table

Updating data in databases is a critical task when working with MySQL. It involves modifying the values of existing records in a table. Updates can range from modifying fields in a group of rows (or even all rows in a table) to adjusting a specific field in a single row. Understanding the syntax for updating data is essential for effectively working with both local and cloud databases. The key command for modifying records in a MySQL database table is UPDATE. Updates occur sequentially, from the first row to the last. Depending on the type of update, there are two syntax options for the UPDATE statement in MySQL. Syntax for Updating a Single Table UPDATE [LOW_PRIORITY] [IGNORE] table_reference SET assignment_list WHERE where_condition ORDER BY ... LIMIT row_count; Parameters: Required: SET assignment_list: Specifies which columns to modify and how (assignment_list is the list of columns and their new values). Optional: LOW_PRIORITY: If specified, the UPDATE is delayed until no other user is reading data from the table. IGNORE: Ensures the UPDATE continues even if errors occur. Rows with duplicate values in unique key columns are not updated. WHERE where_condition: Specifies the conditions for selecting rows to update. If omitted, all rows in the table will be updated. ORDER BY: Determines the order in which rows are updated. LIMIT row_count: Limits the number of rows updated (row_count specifies the number of rows). This count applies to rows matching the WHERE condition, regardless of whether they are actually modified. Syntax for Updating Multiple Tables UPDATE [LOW_PRIORITY] [IGNORE] table_references SET assignment_list WHERE where_condition; Parameters: table_references: Specifies the tables to update. Changes are applied as defined in assignment_list. ORDER BY and LIMIT are not allowed when updating multiple tables. Other optional parameters (LOW_PRIORITY, IGNORE, WHERE) behave the same as for a single-table update. Note that when updating multiple tables, there is no guarantee that updates will occur in a specific order. Creating a Test Database Let’s create a database for a bookstore that sells rare and antique books from around the world. The table will have four tables: author, genre, book, and sales. CREATE TABLE author ( id INT PRIMARY KEY AUTO_INCREMENT, author_name VARCHAR(50) NOT NULL ); INSERT INTO author (author_name) VALUES ('Leo Tolstoy'), ('Franz Kafka'), ('Nikolai Gogol'), ('William Shakespeare'), ('Homer'); CREATE TABLE genre ( id INT PRIMARY KEY AUTO_INCREMENT, genre_name VARCHAR(30) NOT NULL ); INSERT INTO genre (genre_name) VALUES ('Realist novel'), ('Dystopian novel'), ('Picaresque novel'), ('Epic poetry'); CREATE TABLE book ( book_id INT PRIMARY KEY AUTO_INCREMENT, title VARCHAR(50), author_id INT NOT NULL, genre_id INT, price DECIMAL(8,2) NOT NULL, amount INT DEFAULT 0, FOREIGN KEY (author_id) REFERENCES author (id), FOREIGN KEY (genre_id) REFERENCES genre (id) ); INSERT INTO book (title, author_id, genre_id, price, amount) VALUES ('Anna Karenina', 1, 1, 650.00, 15), ('The Castle', 2, 2, 570.20, 6), ('Dead Souls', 3, 3, 480.00, 2), ('Iliad', 5, 4, 518.99, 4), ('Odyssey', 5, 4, 518.99, 7); CREATE TABLE sales ( id INT PRIMARY KEY AUTO_INCREMENT, book_id INT NOT NULL, count INT NOT NULL, cost DECIMAL(8,2) NOT NULL, FOREIGN KEY (book_id) REFERENCES book (book_id) ); We will get the following tables. Table: book Here, we have the columns: book_id: Unique book identifier. title: The book's title. author_id: Author identifier (foreign key to author). 
genre_id: Genre identifier (foreign key to genre). price: Price of the book per unit. amount: Number of books in stock. The genre table will have the following content:  id genre_name 1 Realist novel 2 Dystopian novel 3 Picaresque novel 4 Epic poetry And our author table will look like this: id author_name 1 Leo Tolstoy 2 Franz Kafka 3 Nikolai Gogol 4 William Shakespeare 5 Homer Table: sales The columns here are: id: Unique identifier for the transaction. book_id: Unique book identifier (foreign key to book). count: Number of books sold. cost: Total cost of the purchased books. Data Update Operations Now, having created a sample database, we will demonstrate the execution of various data update operations using the UPDATE statement and other commands in MySQL. 1. Updating All Rows If we omit the WHERE clause in an UPDATE statement, all rows in the table will be updated. For example, suppose there is a promotion in a bookstore where all books are priced at a fixed rate of 500. The query would look like this: UPDATE bookSET price = 500; Resulting Table: If we try to assign a value that is already in a column, MySQL will not update it. If we want to assign a NULL value to a column defined as NOT NULL will, the query will return an error:  Column 'name_column' cannot be null Using the IGNORE parameter forces the value to default: 0 for numeric types, "" for string types, default dates (e.g., 0000 for YEAR, 0000-00-00 00:00:00 for DATETIME). 2. Updating Rows with a Condition Updating all rows is rare; typically, updates are performed based on specific conditions. For instance, to apply a discount on books with fewer than 5 copies in stock: UPDATE book SET price = 300WHERE amount < 5; Resulting Table: 3. Updating Values Using Expressions Columns can be updated using expressions instead of static values. For example, we can apply a 15% discount on Russian classics (author IDs 1 and 3): UPDATE book SET price = price * 0.85WHERE author_id IN (1, 3); Resulting Table: Updates are executed from left to right. For example, the query below increments the amount by 1 and then doubles it: UPDATE book SET amount = amount + 1, amount = amount * 2; Resulting Table: 4. Updating with DEFAULT We can update column values to their default values (DEFAULT), which are defined during the creation or modification of the table. To find out the default values used in our table, we can execute the following query: DESC book; Table Structure: Next, we reset the values of the amount column to its default value. Since the default value for amount is 0, all rows will now have amount set to 0: UPDATE book SET amount = DEFAULT; Resulting Table: 5. Updating Multiple Columns We can update multiple columns in a single query. For example, let's change the price and amount values for rows where book_id < 4: UPDATE book SET price = price * 0.9, amount = amount - 1 WHERE book_id < 4; Resulting Table: 6. Using LIMIT The LIMIT clause allows us to restrict the number of rows to be updated. For instance, we update only the first row where genre_id = 4: UPDATE book SET price = 100 WHERE genre_id = 4 LIMIT 1; The table contains two rows with genre_id equal to 4, but since we specified LIMIT 1, only one will be updated. Resulting Table: Note: The LIMIT N parameter does not guarantee that exactly N rows will be updated—it processes the first N rows matching the WHERE condition, regardless of whether the rows are updated or not. 7. Updating Multiple Tables In MySQL, it is possible to update multiple tables simultaneously. 
For example, update the amount in the book table and set the author_name to "-" in the author table for specific conditions: UPDATE book, author SET amount = amount + 3, author.author_name = '-' WHERE book.book_id = 4 AND author.id = 4; Resulting book Table: While the author table will look like this: id author_name 1 Leo Tolstoy 2 Franz Kafka 3 Nikolai Gogol 4 William Shakespeare 5 Homer 8. Updating Tables with a Join (INNER JOIN) While performing updates, we can also join tables using the INNER JOIN command. UPDATE book b INNER JOIN author a ON b.author_id = a.idSET b.title = CONCAT(b.title, ' (', a.author_name,')'); Specifying INNER is not mandatory, as this type of join is used by default. We can rewrite the query as follows and get the same result: UPDATE book, author a SET b.title = CONCAT(b.title, ' (', a.author_name,')')WHERE b.author_id = a.id; 9. Updating Tables with a Join (LEFT JOIN) We can also use a LEFT JOIN. In this case, we must specify LEFT JOIN in the query.  For example, we can update the stock quantity of books after a purchase. Let's add two rows to the sales table: INSERT INTO sales (book_id, count, cost) VALUES (1, 3, (SELECT price FROM book WHERE book_id = 1)*3), (3, 1, (SELECT price FROM book WHERE book_id = 3)*1); The store sold 3 copies of 'Anna Karenina' and 1 copy of 'Dead Souls'. Let's execute the query: UPDATE book LEFT JOIN sales on book.book_id = sales.book_idSET amount = amount - countWHERE sales.book_id is not NULL; We can see that now we have less copies of these books: If we try not to use LEFT JOIN, we will encounter the error: Out of range value for column 'amount' at row 3 This happens because amount cannot be negative. Alternatively, if we add IGNORE, the result will be: As we can see, we reduced the quantity in all rows by three books, which is not what we wanted. 10. Updating with CASE, IF, IFNULL, COALESCE When updating a table, it is also possible to use conditional operators such as CASE, IF, and others.The CASE function evaluates a set of conditions and, depending on the result, returns one of the possible outcomes. The syntax for using CASE and WHEN in an UPDATE statement in MySQL is as follows: UPDATE book SET price = CASE genre_id WHEN 1 THEN 100 WHEN 2 THEN 150 ELSE price END; In this case, if the book has genre 1, we set the price to 100, and if the genre is 2, the price is set to 150. The IF function returns one of two values depending on the result of a conditional expression. If the book has genre 4, we decrease its price by 200; otherwise, we leave the price unchanged. UPDATE bookSET price = IF (genre_id = 4, price-200, price); The result: The IFNULL function checks the value of an expression – if it is NULL, a specified value is returned; otherwise, the expression itself is returned. Let's assume that one of the amount values is NULL: Let's check all the values in the amount column, and if any NULL values are found, we will replace them with 0: UPDATE bookSET amount = IFNULL(amount, 0); Resulting Table: The COALESCE function is quite similar to IFNULL. The main difference is that this function can accept multiple values (two or more). Like IFNULL, it returns the first value that is not NULL. To see how this works, let's create a table like this: id col1 col2 col3 col4 1   val12 val13 val14 2     val23 val24 3       val34 And run the query: UPDATE test_tableSET col4 = COALESCE(col1, col2, col3, 'no value'); We will get: id col1 col2 col3 col4 1   val12 val13 val12 2     val23 val23 3       no value 11. 
Updating with Sorting

Sorting can help when updating a field with a unique key. If we want to shift our id values by 1, updating the first row would result in two rows having id = 2, which will cause an error. However, if we add ORDER BY and start updating from the end, the query will execute successfully:

UPDATE book
SET book_id = book_id + 1
ORDER BY book_id DESC;

After this, every book_id is shifted up by one.

12. Updating Based on Data from Other Tables

In MySQL, when working with UPDATE, it is possible to use nested SELECT and FROM commands in the WHERE condition. In the following example, we first retrieve the id of the 'Epic poetry' genre, then use the retrieved value to select the rows for updating the table.

UPDATE book
SET amount = 0
WHERE genre_id = (
  SELECT id
  FROM genre
  WHERE genre_name = 'Epic poetry'
);

Alternatively, we can select the values that need to be changed using the query:

UPDATE book
SET price = (SELECT MIN(cost) FROM sales)
WHERE amount < 5;

We are updating the price values of all books whose stock quantity is less than 5, setting the price to the minimum selling amount. The minimum selling amount is 480.

It is not possible to update the table by selecting values from the same table in a subquery. However, there is a trick we can use: we can join the table with itself:

UPDATE book AS book_1
INNER JOIN (
  SELECT genre_id, MIN(amount) AS min_amount
  FROM book
  GROUP BY genre_id
) AS book_2
ON book_1.genre_id = book_2.genre_id
SET book_1.amount = book_2.min_amount;

In this case, the subquery creates a temporary table for the join and closes it before the UPDATE begins execution. The subquery finds the minimum quantity of books for each genre, which is then used to update the amount column. In our table, only genre 4 has more than one row, and the values in both rows are replaced with the minimum value for that genre, which is 4.

Another option is using SELECT FROM SELECT:

UPDATE book AS book_1
SET book_1.price = (
  SELECT MIN(price) AS min_price
  FROM (SELECT price FROM book) AS book_2
);

In this case, a temporary table is also created. However, only a single value is assigned to all rows.

Conclusion

We have covered the features of using the UPDATE statement in MySQL in as much detail as possible and walked through simple scenarios with practical examples.
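As a final aside (a suggestion, not something from the original article): before running an UPDATE with a non-trivial WHERE clause, it helps to preview the affected rows with a SELECT that uses the same condition. Using the 15% discount example from earlier:

-- Preview which rows will change and what the new price would be.
SELECT book_id, title, price, price * 0.85 AS new_price
FROM book
WHERE author_id IN (1, 3);

-- Apply the change once the preview looks right.
UPDATE book
SET price = price * 0.85
WHERE author_id IN (1, 3);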
12 December 2024 · 11 min to read
PostgreSQL

Logical Replication in PostgreSQL

When deploying applications, having more than one copy of the database is always beneficial. After creating copies, it is essential to ensure they are all synchronized. The process of synchronizing database copies is called replication. Logical replication in PostgreSQL refers to the synchronization of copies without being tied to a specific physical data representation on a disk. It is independent of processor architecture, platform, or database system version. Synchronization is performed based on a replication identifier, which is typically the primary key. Logical replication uses a publish-and-subscribe model. Replication Process In general, the replication process consists of the following steps: Creating one or more publications on the publisher node. Subscribing one or more subscribers to one or more publications. Copying a snapshot of the publisher's database to the subscriber. This step is also known as the table synchronization phase. It is possible to create multiple table synchronization workers to reduce the time required for this phase. However, there can only be one synchronization process for each table. Sending the subsequent changes made on the publisher node to the subscriber node. These changes are applied in the commit order to ensure transactional consistency. The subscriber node fetches changes as they occur in the publisher's database in real-time, ensuring that the subscriber and publisher databases remain synchronized. This mechanism ensures up-to-date data consistency across the replicated databases. Logical Replication in Practice Suppose you want to set up logical replication on a single host. To achieve this, use different ports—for example, the publisher will operate on port 5432, and the subscriber on port 5431. Edit the Configuration File Start by editing the PostgreSQL configuration file: sudo nano /etc/postgresql/10/main/postgresql.conf Uncomment the wal_level parameter and set it to logical. It should look like this: wal_level = logical Save and close the configuration file, then restart PostgreSQL: sudo systemctl restart postgresql Export Global Objects On the master, execute the following command for the main database: pg_dumpall --database=postgres --host=192.168.1.2 --no-password --globals-only --no-privileges | psql The pg_dumpall command exports databases in script format. The --database parameter specifies the database used for connecting and exporting global objects and locating other databases. By default, it uses the postgres database. The --globals-only parameter ensures only global objects are exported, excluding the database contents. For detailed information, consult the PostgreSQL documentation. Export Schema on the Replica On the replica, run: pg_dump --dbname=db_name --host=192.168.1.2 --no-password --create --schema-only | psql Prepare Data for Testing Create a test table with two columns: CREATE TABLE table1(x int primary key, y int); The x column will store the primary key. The y column will store integer values. Insert a sample row: INSERT INTO table1 VALUES(10, 11); At this point, the table contains a single row where the primary key is 10 and the value is 11. This minimal dataset is enough to verify synchronization. Create a Publication on the Master Create a publication that replicates the desired table: CREATE PUBLICATION my_publication FOR TABLE table1; The FOR TABLE parameter allows you to specify which tables to replicate. You can limit the changes to be published or include additional tables later. 
To create a publication for all existing and future tables, use the ALL TABLES parameter. For more details, refer to the PostgreSQL documentation. The publication named my_publication is ready. Now it’s time to create a subscription on port 5431. Recreate the Table on the Subscriber Node On the subscriber, create the same table structure as on the publisher: CREATE TABLE table1(x int primary key, y int); Create a Subscription on the Replica Create a subscription named my_subscription: CREATE SUBSCRIPTION my_subscription CONNECTION 'host=localhost port=5432 dbname=postgres' PUBLICATION my_publication; Verify Synchronization Query the table on the subscriber: SELECT * FROM table1; This command will display the rows synchronized from the publisher. Initially, it should return the row added earlier (11 with the primary key 10). How It Works The CREATE SUBSCRIPTION command creates a subscription for the current database, which begins receiving logical changes from the publication my_publication. Upon execution, a logical replication worker is created to fetch changes from the publisher. On the publisher side, a walsender process starts to read the WAL (Write-Ahead Log), decode changes, and send them to the subscriber. To test the synchronization, add additional rows on the publisher: INSERT INTO table1 VALUES(20, 21), (30, 31); Verify that the subscriber displays these rows: SELECT * FROM table1; If you have multiple servers, additional configuration is required. Allow Connections on the Publisher On the main server, edit the configuration file to listen on the private IP address: sudo nano /etc/postgresql/10/main/postgresql.conf Locate the listen_addresses parameter and modify it to include the private IP address of the master: listen_addresses = 'localhost, MASTER_PRIVATE_IP' Configure Access Control Edit the pg_hba.conf file on the publisher to allow incoming connections from the replica: sudo nano /etc/postgresql/10/main/pg_hba.conf Add the following line, replacing REPLICA_PRIVATE_IP with the actual private IP address of the replica: host replication postgres REPLICA_PRIVATE_IP/32 md5 Look for the comment: # If you want to allow non-local connections, you need to add more. Add your new rule below this line. Firewall Configuration On the publisher, allow traffic from the replica to port 5432: sudo ufw allow from REPLICA_PRIVATE_IP to any port 5432 Apply Changes Restart PostgreSQL to apply all changes: sudo systemctl restart postgresql Troubleshooting Issues If replication doesn’t seem to work, check the PostgreSQL logs on the replica for possible errors. The log file is typically located at: /var/log/postgresql/postgresql-10-main.log. Common Issues and Solutions: Private Network Not Enabled. Ensure both servers are in the same private network or correctly configured for cross-network access. Incorrect IP Address Configuration. Verify that the server is listening on the correct private network IP address. wal_level Not Set to logical. Double-check the wal_level parameter in the PostgreSQL configuration. Firewall Blocking Connections. Confirm that the firewall is not blocking incoming connections on the required port (e.g., 5432). Mismatch in Table or Field Names. Ensure that table and column names match the publisher and subscriber exactly. Table Not Included in the Publication. Verify that the table is added to the publication on the publisher. After addressing these issues, replication should resume automatically. 
If not, drop the existing subscription and recreate it: DROP SUBSCRIPTION my_subscription; Physical Replication Overview PostgreSQL supports two types of replication: logical (discussed above) and physical replication. Here's a brief overview of physical replication. Key Features of Physical Replication: Introduced in PostgreSQL 9.0. Physical replication synchronizes databases at the file level. Block-Level Synchronization. Changes are tracked using precise block addresses and replicated byte-by-byte. Write-Ahead Log (WAL). Changes from the master are transmitted via WAL and applied on the standby server. Limitations of Physical Replication: No Partial Database Replication: You cannot replicate only a portion of the database. High Overhead: All changes are transmitted, potentially increasing network load. Platform Restrictions: Physical replication requires identical server platforms, including CPU architecture (e.g., Windows to Windows or Linux to Linux). Version Compatibility: Databases on different PostgreSQL versions cannot synchronize. Conclusion This guide covered setting up and managing logical replication in PostgreSQL, including troubleshooting common issues. We also briefly touched on physical replication, highlighting its characteristics and limitations. For simplified database management, consider cloud database services like Hostman, which offers managed PostgreSQL and other database solutions to streamline deployment and scaling.
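When troubleshooting, it can also help to inspect the replication state directly from SQL. The queries below are a minimal sketch using PostgreSQL's built-in views:
-- On the publisher: list publications and the walsender processes serving subscribers
SELECT * FROM pg_publication;
SELECT application_name, state FROM pg_stat_replication;
-- On the subscriber: check the subscription worker and the last received WAL position
SELECT subname, received_lsn, last_msg_receipt_time FROM pg_stat_subscription;
If pg_stat_subscription returns no rows, the subscription worker is not running, which usually points back to one of the issues listed above.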
11 December 2024 · 7 min to read
Go

For Loops in the Go Programming Language

A loop is a block of code that runs until a specified condition is met or a required number of repetitions is completed. Loops are convenient for solving tasks where a program needs to repeat the same actions multiple times. For example, imagine you have a list of directors. You need to extract each director's last name and display it on the screen. Instead of manually accessing each element of the list, it's easier to use a loop. A loop will iterate through the list and display each last name on the screen. Loops in Go In Go, there are only for loops. There are no while or do while loops like in some other languages. Similar concepts are implemented using the same for loop. This design choice makes the code more readable. Developers don't have to decide on a looping strategy — if you need to repeat actions, there's for, which can be used in various ways. Let's explore how to create loops in Golang to solve specific tasks. ForClause The structure of a ForClause is simple. It consists of a condition and a body. The code inside the body executes if the condition is evaluated as true. for i := 0; i < 6; i++ { fmt.Println(i) } Here: i := 0 is the initializer. It sets the starting value of the loop. i < 6 is the condition. If it is evaluated as true, the code inside the loop is executed. fmt.Println(i) sequentially prints numbers from 0 to 5. i++ is the post-operation that increments i by 1 after each iteration. The code starts with i = 0. Since 0 < 6, the condition is true, and 0 is printed. Then, i++ increments i by 1, making i = 1. The loop continues as long as i < 6. When i becomes 6, the condition i < 6 is false, and the loop stops. The number 6 is not printed. Output: 0 1 2 3 4 5 You don't have to start at zero or stop at a fixed value. The for loop in Go allows you to adjust the logic as needed. for i := 100; i < 150; i = i + 10 { fmt.Println(i) } Output: 100 110 120 130 140 If you modify the condition slightly, you can include the number 150: for i := 100; i <= 150; i = i + 10 { fmt.Println(i) } Output: 100 110 120 130 140 150 You can also iterate in reverse, from the end to the beginning, by modifying the condition and the post-operation. for i := 50; i > 0; i -= 10 { fmt.Println(i) } Here, the loop starts with i = 50. On each iteration, it checks if i > 0. If the condition is true, it subtracts 10 from the current value of i. Output: 50 40 30 20 10 Note that 0 is not printed because the condition requires i > 0. Loop with a Condition If you remove the initializer and post-operator from the syntax, you get a simple construct that works based on a condition. The loop declaration in this case looks like this: i := 0 for i < 6 { fmt.Println(i) i++ } If you are familiar with other programming languages, you might recognize this as similar to a while loop. In this example, i is defined outside the loop. The for loop only has a condition, which keeps the loop running while i is less than 6. Note that the increment operation (i++), previously specified as a post-operator, is now inside the body. Sometimes, the number of iterations is unknown in advance. You can't specify a condition for ending the loop in such cases. To avoid infinite loops, Go supports the break keyword. Here's a simple example: func main() { i := 0 for { fmt.Println("Hello") if i == 5 { break } i++ } } Initially, i = 0. The loop runs indefinitely, printing "Hello" each time. However, when i reaches 5, the break statement is executed, and the program stops. RangeClause Go also provides another type of loop — the RangeClause. 
It is similar to ForClause, but it returns two values by default: the index of an element and its value. package main import "fmt" func main() { words := []string{"host", "man", "hostman", "cloud"} for i, word := range words { fmt.Println(i, word) } } Output: 0 host 1 man 2 hostman 3 cloud To omit the index, use an underscore _ as a placeholder: package main import "fmt" func main() { words := []string{"host", "man", "hostman", "cloud"} for _, word := range words { fmt.Println(word) } } Output: host man hostman cloud You can also use range to add elements to a list: package main import "fmt" func main() { words := []string{"host", "man", "hostman", "cloud"} for range words { words = append(words, "great") } fmt.Printf("%q\n", words) } Output: ["host" "man" "hostman" "cloud" "great" "great" "great" "great"] In this example, the word "great" is added for each element in the original length of the words slice. Suppose you have a slice of 10 zeros and need to populate it with numbers from 0 to 9: package main import "fmt" func main() { integers := make([]int, 10) fmt.Println(integers) for i := range integers { integers[i] = i } fmt.Println(integers) } [0 0 0 0 0 0 0 0 0 0] [0 1 2 3 4 5 6 7 8 9] You can use range to iterate over each character in a string: package main import "fmt" func main() { hostman := "Hostman" for _, letter := range hostman { fmt.Printf("%c\n", letter) } } Output: H o s t m a n This allows you to process each character in a string individually. Nested Constructs A for loop can be created inside another construct, making it nested. We can represent its syntax as: for { [Action] for { [Action] } } First, the outer loop starts running. It executes and then triggers the inner loop. After the inner loop finishes, the program returns to the outer loop. This process repeats as long as the given condition holds or until the program encounters a break statement. There is also a risk of creating an infinite loop, which even the powerful resources of Hostman wouldn’t handle, as the program would never terminate. To avoid this, always ensure the condition is properly checked or use the break operator. Here’s a simple example to demonstrate nested loops: package main import "fmt" func main() { numList := []int{1, 2} alphaList := []string{"a", "b", "c"} for _, i := range numList { fmt.Println(i) for _, letter := range alphaList { fmt.Println(letter) } } } Output: 1 a b c 2 a b c This example clearly demonstrates the order of operations: The first value from numList (1) is printed. The inner loop executes, printing each value from alphaList (a, b, c). The program returns to the outer loop and prints the next value from numList (2). The inner loop runs again, printing the values of alphaList (a, b, c) a second time. Conclusion Using for loops in Go is straightforward. Depending on the task, you can choose one of the three main forms of for or combine them to create nested constructs. You can control the loop's behavior by modifying the condition, initializer, and post-operator or by using break and continue statements. Nested loops provide flexibility and power but should be used carefully to avoid infinite loops or excessive computational overhead. You can deploy Go applications (such as Beego and Gin) on our app platform.
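The conclusion mentions the continue statement, which was not shown above. Here is a minimal sketch of how continue skips the rest of an iteration:
package main

import "fmt"

func main() {
	// Print only the odd numbers from 0 to 9.
	// continue skips the rest of the loop body for even values of i.
	for i := 0; i < 10; i++ {
		if i%2 == 0 {
			continue
		}
		fmt.Println(i)
	}
}
Output: 1 3 5 7 9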
11 December 2024 · 6 min to read
Go

How to Install Go on Windows

Go, or Golang, is a high-performance, multithreaded programming language developed by Google in 2007 and released in 2009. To this day, Golang continues to gain popularity.  The Go programming language supports many operating systems, making it a versatile choice for development across various platforms. In this guide, we will walk through the step-by-step process of installing Golang on Windows. Installing Go on Windows Go supports Windows 7 and newer versions. Ensure that you have a supported version of the OS installed. In this guide, we will use Windows 11. You will also need an administrator account to configure environment variables. To install Golang on Windows: Download the installer for the latest version of Microsoft Windows from the official Go website. If needed, you can select any other available version of the language instead of the latest one. Once the file has finished downloading, run it and follow the installation wizard's instructions. If necessary, you can change the file location. This will be useful when configuring environment variables. After the installation, check if Golang was successfully installed on your system. To do this, open the terminal (Win + R → cmd) and run the following command: go version The output should show the version of Go you just installed. For example: To update Golang to a newer version on Windows, you must uninstall the old version and follow the instructions to install the new one. Now, let's move on to setting up environment variables so that Go works properly. Setting Up Environment Variables Setting up environment variables is an important step in installing Go on Windows, as it allows the operating system to determine where the necessary Go files and directories are located. For Go to work correctly, two environment variables are required: GOPATH points to where Go stores downloaded and compiled packages. PATH allows the system to find Go executable files without specifying their full paths. GOPATH First, let's set up the GOPATH environment variable. For this, you need to organize a workspace where Go files and projects will be stored. In this guide, we will create a workspace at C:\GoProject. We will also add two directories to this folder: bin – for storing executable files (binary files). Go creates an executable file and places it in this directory when you compile your project. src – for storing Go source files. All .go files will be placed here. After creating the workspace, we will set the GOPATH environment variable. To do this, go to the Control Panel → System and Security → System and click on Advanced System Settings. There is also an easier way to access system properties: open the Run window (Win + R) and enter: sysdm.cpl Click on Environment Variables, then click the New button under the User Variables section. Here, you need to fill in two fields: the variable name and its value. In the Variable name field, enter GOPATH, and in the Variable value field, enter the path to the workspace you created earlier (in our case, C:\GoProject). Click OK twice to save the changes. To verify the creation of the system variable, open the Run window (Win + R) and enter the string: %GOPATH% If everything was done correctly, your workspace will open. PATH The PATH environment variable should have been automatically added after we installed Go. To check this, go to the Control Panel → System and Security → System and click on Advanced System Settings. In the window that opens, you need to find PATH among the system variables. 
To view its values, double-click on it. In the new window, there should be an entry that holds the path to the Go bin folder. In our case, it is C:\Program Files\Go\bin. If your value does not match what was specified during the Go installation, change it to the correct one using the Edit button. At this point, the installation of Golang on Windows and the setup of environment variables is complete. Now we can check its functionality by writing and running our first program. Verifying Installation To check the functionality of the newly installed Golang on Windows: Create a test file with the .go extension in the workspace (C:\GoProject\src). For example, ExampleProgram.go. Add the following simple code:
package main

import "fmt"

func main() {
    fmt.Println("Hello, Go has been successfully installed into your system!")
}
The program should display a message confirming that Go has been successfully installed on your system. To compile and run the program, enter the following command in the terminal: go run %GOPATH%/src/ExampleProgram.go The program compiles and runs, displaying the specified text on the screen. Conclusion Installing Go on Windows is a straightforward process, involving downloading the installer, setting up environment variables, and verifying the installation. Once Go is properly configured, you can easily start developing applications. With support for multiple operating systems, Go remains a powerful and versatile language, ideal for cross-platform development. On our app platform you can deploy Golang apps, such as Beego and Gin.
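As an additional verification step, the standard go env command prints the values Go actually uses, which is a quick way to confirm the variables configured above:
go env GOPATH GOROOT
It should print your workspace path (C:\GoProject in this guide) followed by the Go installation directory (C:\Program Files\Go by default).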
10 December 2024 · 5 min to read
Go

Type Conversion in Go

Go is a statically typed programming language, meaning that data types are tied to variables. If you declare a variable as int to store numerical values, you cannot store a string in it. This rule works in the reverse direction as well. Static typing protects developers from errors where the program expects one data type and gets another. However, this strict binding can be limiting when performing certain operations. Go provides type conversion (or type casting) to overcome this limitation. This formal process allows developers to convert integer values to floating-point numbers, strings to numbers, and vice versa. This article will help you understand how to perform such conversions. Data Types in Go The basic types in Go are as follows: bool — Boolean values: true or false string — Strings int, int8, int16, int32, int64 — Signed integer types uint, uint8, uint16, uint32, uint64, uintptr — Unsigned integer types byte — Alias for uint8 rune — Alias for int32 float32, float64 — Floating-point numbers complex64, complex128 — Complex numbers The types int, uint, and uintptr have a width of 32 bits in 32-bit systems and 64 bits in 64-bit systems. When you need an integer value, you should use int unless you have a specific reason for using a sized or unsigned integer type. Go does not have a char data type. The language uses byte and rune to represent character values. byte represents ASCII characters, while rune represents a broader set of Unicode characters encoded in UTF-8. To define characters in Go, you enclose them in single quotes like this: 'a'. The default type for character values is rune. If you do not explicitly declare the type when assigning a character value, Go will infer the type as rune: var firstLetter = 'A' // Type inferred as `rune` You can explicitly declare a byte variable like this: var lastLetter byte = 'Z' Both byte and rune are integer types. For example, a byte with the value 'a' is converted to the integer 97. Similarly, a rune with the Unicode value '♥' is converted to the corresponding Unicode code point U+2665, where U+ indicates Unicode, and the numbers are in hexadecimal, which is essentially an integer. Here's an example: package main import "fmt" func main() { var myByte byte = 'a' var myRune rune = '♥' fmt.Printf("%c = %d and %c = %U\n", myByte, myByte, myRune, myRune) } Output: a = 97 and ♥ = U+2665 When you need to convert from int to string or vice versa, you essentially take the type initially assigned to a variable and convert it to another type. As mentioned earlier, Go strictly formalizes these actions. The examples in this article will help you understand the basics of such conversions. Number Conversion in Go Converting numeric types can be useful when solving various tasks. For example, we decided to add a calculator to the website. It should perform only one operation: division. The main requirement is that the result be accurate down to the last digit. However, when dividing two integer variables, the result may be inaccurate. For example: package main import "fmt" func main() { var first int = 15 var second int = 6 var result = first / second fmt.Println(result) } Output: 2 After executing this code, you get 2. The program outputs the nearest integer quotient, but this is far from the precise division you need. Such a calculator is not useful. To improve the accuracy, you need to cast both variables to float. 
Here's how you can do it: package main import "fmt" func main() { var first int = 15 var second int = 6 var result = float64(first) / float64(second) fmt.Println(result) } Output: 2.5 Now the output will be precise — 2.5. It was quite easy to achieve by simply wrapping the variables with the float64() or float32() conversion functions. Now the calculator works as expected. Product metrics are not a concern, as the feature is technically implemented correctly. You can also divide numbers without explicitly converting them to float. When you use floating-point numbers, other types are automatically cast to float. Try this code: package main import "fmt" func main() { a := 5.0 / 2 fmt.Println(a) } Output: 2.5 Even though you didn't explicitly use the float64() or float32() wrapper in the code, Go's compiler automatically recognizes that 5.0 is a floating-point number and performs the division with floating-point precision. The result is displayed as a floating-point number. In the first example with division, you explicitly cast the integers to float using the float64() function. Here's another example of converting from int64 to float64: package main import "fmt" func main() { var x int64 = 57 var y float64 = float64(x) fmt.Printf("%.2f\n", y) } Output: 57.00 The two zeros after the decimal point appear because we added the %.2f\n format specifier. Instead of 2, you could specify any other number, depending on how many decimal places you want to display. You can also convert from float to int. Here's an example: package main import "fmt" func main() { var f float64 = 409.8 var i int = int(f) fmt.Printf("f = %.3f\n", f) fmt.Printf("i = %d\n", i) } Output:
f = 409.800
i = 409
In this example, the program prints f = 409.800 with three decimal places. In the second print statement, the float is first converted to int, and the decimal part is discarded. Note that Go does not perform rounding, so the result is 409 without any rounding to the nearest integer. Strings Conversion in Go In Golang, we can convert a number to a string using the method strconv.Itoa. This method is part of the strconv package in the language's standard library. Run this code: package main import ( "fmt" "strconv" ) func main() { a := strconv.Itoa(12) fmt.Printf("%q\n", a) } The result should be the string "12". The quotes in the output indicate that this is no longer a number. In practice, such string-to-number and number-to-string conversions are often used to display useful information to users. For example, if you're building an online store, you can host it at Hostman, implement the core business logic, and fill it with products. After some time, the product manager suggests improving the user profile. The user should see the amount they have spent and how much more they need to spend to reach the next level. To do this, you need to display a message in the user profile that consists of a simple text and a set of digits. Try running this code: package main import ( "fmt" ) func main() { user := "Alex" sum := 50 fmt.Println("Congratulations, " + user + "! You have already spent " + sum + " dollars.") } The result will be an error message. You cannot concatenate a string and a number. The solution to this problem is to convert the data in Go. Let's fix the code by converting the variable sum to a string: package main import ( "fmt" "strconv" ) func main() { user := "Alex" sum := 50 fmt.Println("Congratulations, " + user + "! 
You have already spent " + strconv.Itoa(sum) + " dollars.") } Now, there will be no error, and the output will display the correct message with the proper set of digits. Of course, this is a simplified example. In real projects, the logic is much more complex and challenging. However, knowing the basic operations helps avoid a large number of errors. This is especially important when working with complex systems. Let's go back to our example. The product manager comes again and says that customers want to see the exact total amount of their purchases in their profile, down to the pennies. An integer value won't work here. As you already understood from the examples above, all digits after the decimal point are simply discarded. To make sure the total purchase amount in the user profile is displayed correctly, we will convert not an int, but a float to a string. For this task, there is a method fmt.Sprint, which is part of the fmt package. package main import ( "fmt" ) func main() { fmt.Println(fmt.Sprint(421.034)) f := 5524.53 fmt.Println(fmt.Sprint(f)) } To verify that the conversion was successful, concatenate the total with the string. For example: package main import ( "fmt" ) func main() { f := 5524.53 fmt.Println("Alex spent " + fmt.Sprint(f) + " dollars.") } There is no error now, and the information message correctly displays the floating-point number. Customers can see how much money they've spent in your store, with all expenses accounted for down to the penny. A common reverse task is to convert a string into numbers. For example, you have a form where the user enters their age or any other numeric values. The entered data is saved in the string format. Let's try working with this data— for instance, performing a subtraction: package main import ( "fmt" ) func main() { lines_yesterday := "50" lines_today := "108" lines_more := lines_today - lines_yesterday fmt.Println(lines_more) } The result of running this code will be an error message, as subtraction cannot be applied to string values. To perform mathematical operations on data stored as strings, you need to convert them to int or float. The choice of method depends on the type you will convert the string to. If you are working with integers, use the strconv.Atoi method. For floating-point numbers, use the strconv.ParseFloat method. package main import ( "fmt" "log" "strconv" ) func main() { lines_yesterday := "50" lines_today := "108" yesterday, err := strconv.Atoi(lines_yesterday) if err != nil { log.Fatal(err) } today, err := strconv.Atoi(lines_today) if err != nil { log.Fatal(err) } lines_more := today - yesterday fmt.Println(lines_more) } In this example, you use the if operator to check whether the conversion was successful. If an error occurs, the program will terminate, and the error information will be saved in the log. If the conversion is successful, the output will give you the correct result: 108 - 50 = 58. If you try to convert a string that does not contain a numerical value in the same way, you will receive an error message: strconv.Atoi: parsing "not a number": invalid syntax Try running this code: package main import ( "fmt" "strconv" ) func main() { a := "not a number" b, err := strconv.Atoi(a) fmt.Println(b) fmt.Println(err) } The code from the example above will fail because you are trying to convert a string whose value is not a number into a numeric type. Strings can also be converted to byte slices and back using the []byte() and string() constructs.  
package main import ( "fmt" ) func main() { a := "hostman" b := []byte(a) c := string(b) fmt.Println(a) fmt.Println(b) fmt.Println(c) } In this function, you save the string to variable a, then convert the same string into a byte slice and save it to variable b, then turn the byte slice into a string and save the result to variable c. The output will be like this:
hostman
[104 111 115 116 109 97 110]
hostman
This simple example shows that you can easily convert strings to byte slices and back. Conclusion In this article, we only covered the basics. We looked at the available data types and how to perform type conversion in Go. If you want to learn more, explore the language documentation or at least the "A Tour of Go" tutorial — it's an interactive introduction to Go divided into three sections. The first section covers basic syntax and data structures, the second discusses methods and interfaces, and the third introduces Go's concurrency primitives. Each section concludes with several exercises so you can practice what you've learned. In addition, you can deploy Go applications (such as Beego and Gin) on our app platform.
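One conversion mentioned above but not demonstrated is strconv.ParseFloat, which parses a string that holds a floating-point value. Here is a minimal sketch, assuming a hypothetical price string received from a form:
package main

import (
	"fmt"
	"log"
	"strconv"
)

func main() {
	price := "5524.53" // a string value, e.g. read from a form

	// ParseFloat converts the string to a float64;
	// the second argument (64) is the bit size of the result.
	total, err := strconv.ParseFloat(price, 64)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(total + 10.0) // arithmetic now works: prints 5534.53
}
As with strconv.Atoi, always check the returned error, because the input string may not contain a valid number.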
10 December 2024 · 10 min to read
PHP

How to Install PHP and PHP-FPM on Ubuntu 24.04

In this guide, we will describe installing PHP and PHP-FPM on Ubuntu 24.04. PHP, which stands for Hypertext Preprocessor, is a widely used, open-source language, mainly for web development. PHP-FPM (FastCGI Process Manager) is a PHP FastCGI implementation that is extremely useful for high-traffic websites. At the end of this guide, you should be ready to go with PHP running on your server. Prerequisites Before we start, please confirm you have the following: Ubuntu 24.04 LTS installed on the server A user account with sudo access A basic understanding of command-line operations A reliable internet connection for downloading software packages To ensure that your system is up to date, run the following commands:
sudo apt update
sudo apt upgrade
Install Apache Install the Apache web server using the following command: sudo apt install apache2 Install PHP Let's begin with installing the PHP package on the Ubuntu 24.04 server. First, open a terminal on your Ubuntu system. The following command installs PHP together with common modules: sudo apt install php That command installs the core PHP package, the command-line interface, and common libraries. Make sure the installation works: php -v Install PHP Extensions PHP extensions add specific functionality to your PHP installation. Start by installing extensions: sudo apt install php-mysql php-gd php-curl php-mbstring php-xml php-zip Short description: php-mysql: Allows MySQL database connection php-gd: Adds ability to manipulate images php-curl: Makes it possible to communicate with other servers php-mbstring: Provides multibyte string support php-xml: Enables XML support php-zip: Enables ZIP support Additional extensions can be installed as you see fit for your projects. You can search for them using: apt-cache search php- Install and Configure PHP-FPM PHP-FPM is essential when dealing with high-traffic websites. To install and configure it: Install the package: sudo apt install php-fpm Start the PHP-FPM service. Depending on the installed PHP version, the version number may differ. sudo systemctl start php8.3-fpm Enable PHP-FPM to start on boot: sudo systemctl enable php8.3-fpm Verify PHP-FPM is working: systemctl status php8.3-fpm This will output a status that says "active (running)" if everything is working as expected. Test PHP and PHP-FPM To ensure that PHP and PHP-FPM are both running with no problems, create a test file and serve it via the web server. Let's say it uses Apache in this example: Generate PHP Info File. To show PHP settings using the phpinfo() function, do the following:
mkdir -p /var/www/html
echo "<?php phpinfo(); ?>" | sudo tee /var/www/html/info.php
Set Up Apache for PHP-FPM. Ensure Apache is configured for PHP-FPM by first finding the Apache configuration file (usually /etc/apache2/sites-available/000-default.conf) and then inserting:
<FilesMatch \.php$>
    SetHandler "proxy:unix:/var/run/php/php8.3-fpm.sock|fcgi://localhost/"
</FilesMatch>
Remember to adjust the PHP version and socket path to match your server's settings. Activate PHP and PHP-FPM. Enable PHP and PHP-FPM following these instructions:
sudo apt install libapache2-mod-php
sudo a2enmod proxy_fcgi setenvif
Restart Apache. Apply the changes by restarting the Apache server: sudo systemctl restart apache2 Access PHP Info Page. First open your web browser and go to: http://your_server_ip/info.php Replace your_server_ip with the server IP address or domain. You can see details of your PHP installation. 
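Note: the info.php page exposes detailed information about your server, so once you have confirmed that everything works, it is a good idea to remove it:
sudo rm /var/www/html/info.php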
Install Multiple PHP Versions Some projects may require different PHP versions, as each application can depend on different functionality. Here is how to manage multiple PHP versions on Ubuntu 24.04. First, add the PHP repository:
sudo apt install software-properties-common
sudo add-apt-repository ppa:ondrej/php && sudo apt update
Install the PHP versions you need: sudo apt install php8.1 php8.1-fpm Switch from one PHP version to another: sudo update-alternatives --set php /usr/bin/php8.1 If you are using multiple PHP versions, ensure that your web server is pointing to the appropriate PHP-FPM socket. Securing PHP and PHP-FPM: Best Practices As a web developer, you know the importance of building web applications on PHP and PHP-FPM that are safe and robust. In this section, we will introduce a number of security steps that you should adopt when using PHP and PHP-FPM. 1. Keep PHP and PHP-FPM Updated PHP and PHP-FPM should be up to date. Regular updates patch known vulnerabilities and provide overall security improvements. Check for updates regularly and update the system as soon as updates are available. 2. Configure PHP Securely To configure PHP securely, start by disabling unnecessary and potentially dangerous functions, such as exec, shell_exec, and system, in the PHP configuration file (php.ini). Use the open_basedir directive to restrict PHP's access to specific directories, preventing unauthorized access to sensitive files. Set display_errors to Off in production to avoid exposing error messages that could provide insights to attackers. Limit file upload sizes and execution times to reduce the risk of resource exhaustion attacks. Besides, ensure that PHP runs under a dedicated, restricted user account with minimal permissions to prevent privilege escalation. Regularly update PHP to the latest stable version to patch vulnerabilities and improve security. 3. Use Safe Error Reporting Locating and correcting code bugs is convenient in a development environment. In a production environment, hide PHP errors by setting the display_errors directive to Off and the log_errors directive to On: PHP will stop showing errors to users, while the server logs them in a safe location. 4. Implement Input Validation Input validation is crucial when writing your software. Validate and sanitize all user input, and use prepared statements with parameterized queries so that user-supplied data can never change the structure of your SQL. 5. Secure PHP-FPM Configuration Run PHP-FPM under a dedicated user account with minimal rights. Furthermore, access to the PHP-FPM socket or port should be limited to the web server. 6. Enable Open_basedir Use the open_basedir directive to restrict file access to the specified directories. If a request tries to reach a file outside of them, PHP will refuse to open it. 7. Use HTTPS Serve your application over HTTPS only; encrypting traffic protects credentials and session data from interception. Conclusion With this guide, you've successfully set up PHP and PHP-FPM on Ubuntu 24.04. Your server is now configured for dynamic web applications. 
To maintain security and performance, remember to keep the system and packages regularly updated.
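To tie the hardening advice above to concrete settings, here is an illustrative sketch of php.ini values; the path assumes the default PHP 8.3 FPM layout on Ubuntu 24.04, and the values should be adapted to your application:
; assumed path: /etc/php/8.3/fpm/php.ini
display_errors = Off
log_errors = On
expose_php = Off
disable_functions = exec,shell_exec,system,passthru,popen
open_basedir = /var/www/html
upload_max_filesize = 10M
max_execution_time = 30
After editing, apply the changes with sudo systemctl restart php8.3-fpm.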
09 December 2024 · 6 min to read
Mail

How to Configure Postfix Using External SMTP Servers

Postfix is a widely used tool for routing and delivering emails. Known for its adaptability, reliability, and easy setup, it's essential to email systems. It ensures smooth message delivery and allows administrators to manage email traffic efficiently. To set up Postfix, you will need to install the software, configure it with an external SMTP server, and verify that mail is delivered. Follow these guidelines for a seamless setup. Before moving to the main process, ensure you have: sudo privileges or root access on a Linux server An external SMTP server (like Gmail) Installing Postfix Employ the instructions below to install Postfix across several Linux distros: On Debian-based Linux Distros (like Ubuntu) sudo apt install postfix On Red Hat-based Linux Distros (like CentOS) sudo yum install postfix On Fedora sudo dnf install postfix On Arch Linux sudo pacman -S postfix During installation, you will see a setup window that asks for basic settings. After finalizing them, complete the installation. Configuring Postfix Correctly configuring Postfix is crucial for successful email delivery. This involves updating configuration files, activating authentication, and setting how mail is processed and delivered. Here's the process: Step 1: Configuration File Modification The main.cf (Postfix configuration) file contains the principal settings. To tweak them, open the file using: sudo nano /etc/postfix/main.cf Note: By default, new servers have ports 465 and 587 blocked. To unblock these ports, reach out to technical support. Step 2: Configuration with an External SMTP Server Set up the relay host and enable security protocols by adding the following lines to the file:
relayhost = [smtp.example.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
smtp_tls_note_starttls_offer = yes
Here: The first line configures the Postfix relay host. It sets the SMTP server and port (587 for TLS); if you're using Gmail, replace smtp.example.com with smtp.gmail.com. The second line enables SASL authentication. The third line points to the file containing your SMTP credentials (the sasl_passwd map created in the next step). The fourth line prevents anonymous connections. The fifth line makes Postfix use TLS encryption when relaying mail. The sixth line logs remote servers that offer STARTTLS when encryption is not already in use. Save the file once you've adjusted the necessary settings. Step 3: Construct the SASL Credentials File Create a SASL password file with your SMTP credentials: sudo nano /etc/postfix/sasl_passwd Insert the credentials in the following format within the file: [smtp.example.com]:587 your_email@example.com:password Substitute [smtp.example.com] with your chosen server (e.g., smtp.gmail.com). Swap out your_email@example.com and password with your real email address and corresponding password. Produce an app-specific password in Gmail by accessing the App Passwords section of your account settings. Step 4: Protect the SASL Credentials Once done, protect your credentials with the following commands:
sudo chmod 600 /etc/postfix/sasl_passwd
sudo postmap /etc/postfix/sasl_passwd
The first command restricts access to the credentials file, permitting read access solely to the root user. Postfix will authenticate using the hash database file (sasl_passwd.db) generated by the second command. 
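Optionally, before moving on to the restart, you can confirm that the lookup table resolves correctly; postmap -q queries a single key from the map:
sudo postmap -q "[smtp.example.com]:587" /etc/postfix/sasl_passwd
If the command prints the email:password pair you entered, the map was built correctly.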
Step 5: Restart Postfix Restart to apply the changes: sudo systemctl restart postfix Note: If you encounter an error like "fatal: the Postfix mail system is not running," double-check that the server is configured correctly and that all steps have been followed exactly. Testing the SMTP Server Now that everything has been configured, you can send mail. Before sending, install mailutils on your server using: sudo apt install mailutils After installation, check the configuration by sending a test mail in the format below: echo "Test email from Postfix" | mail -s "Test Postfix" recipient@example.com The first part is the text that becomes the mail body. The second is the pipe symbol (|), which directs the echo command's output straight into the mail command. The third is the mail command, which sets the email's subject when used with the -s option. The last part is the email address of the test message's recipient. To make sure everything is functioning and that the test mail was delivered correctly, look into the mail logs using: sudo tail -f /var/log/mail.log This log file provides a snapshot of recent activity. If the log shows that the test mail was delivered successfully, your setup is complete. Note: If you experience difficulties receiving messages in Gmail, follow the guidelines below: Enter your current login details to access Gmail through a web browser. Locate the gear icon at the top right, click it and select "See all settings." In the Gmail menu, pick "Forwarding and POP/IMAP." Go to "IMAP access" and activate "Enable IMAP." Continue scrolling down, and hit "Save Changes." By adhering to these guidelines, you'll activate IMAP in Gmail and improve message delivery. Setting Up Email Forwarding Email forwarding configuration ensures seamless redirection of incoming mail from one address to another, guaranteeing you never miss a message. This functionality is useful for centralizing email management or directing system alerts to an external email address. Take the following steps to configure forwarding: Step 1: Modify Aliases File Begin by editing the aliases file, which can be accessed using: sudo nano /etc/aliases To specify forwarding addresses, they must be listed in the aliases file. For instance: root:    your_email@example.com This entry ensures that mail destined for root is passed along to your_email@example.com. You can add extra forwarding rules as required. Step 2: Refresh the Aliases Database Refresh the aliases database to apply the modification using: sudo newaliases Step 3: Restart Postfix Lastly, restart again via: sudo systemctl restart postfix By sticking to these instructions, you can smoothly establish email forwarding, guaranteeing that mail intended for certain addresses is quickly forwarded to the selected account. Enabling SMTP Encryption Encrypting SMTP is a must to preserve the security and privacy of emails as they travel over the internet. Activating Transport Layer Security (TLS) strengthens the integrity of the communication path between the server and the mail client. Adhere to the instructions below to enable encryption: Step 1: Install Certbot First, the Certbot program needs to be installed to get a free TLS certificate from Let's Encrypt. 
The process of obtaining and renewing these certificates is made easier by Certbot, which can be installed on Ubuntu using: sudo apt install certbot Step 2: Allow HTTP Traffic Next, update the firewall settings to enable HTTP traffic on port 80 using the command below: sudo ufw allow 80 Step 3: Obtain a TLS Certificate Proceed by employing Certbot to acquire a TLS certificate for your domain. To achieve this, swap out your_domain with your real domain name in the command below: sudo certbot certonly --standalone --rsa-key-size 4096 --agree-tos --preferred-challenges http -d your_domain This command directs Certbot to: Use a 4096-bit RSA key to enhance security. Run a temporary standalone web server to carry out domain verification. Conduct the verification process through port 80. Adhere to the on-screen instructions and add your email address when prompted. After the process is finalized, Certbot will securely place your TLS certificate and private key within the /etc/letsencrypt/live/your_domain directory. Step 4: Postfix Configuration for TLS With the certificate in hand, update the settings to use it by opening the configuration file via: sudo nano /etc/postfix/main.cf Find the TLS parameters section and update it to include these lines:
# TLS parameters
smtpd_tls_cert_file=/etc/letsencrypt/live/your_domain/fullchain.pem
smtpd_tls_key_file=/etc/letsencrypt/live/your_domain/privkey.pem
smtpd_tls_security_level=may
smtp_tls_CApath=/etc/ssl/certs
smtp_tls_security_level=may
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
Replace your_domain with your real domain name so that Postfix can use the TLS certificate to safeguard email exchanges. Step 5: Restart Postfix To apply the modified settings, restart again using: sudo systemctl restart postfix After that, send a test mail and check the recipient's mailbox. The message should arrive almost instantly; keep in mind that unencrypted mail is more likely to be flagged as spam by email providers. Following these guidelines can help you send mail safely and reduce the likelihood that email providers will mark it as spam. Conclusion Setting up Postfix through external SMTP servers is a simple process that enhances your server's email capabilities. This guide has thoroughly covered the installation, configuration, and testing phases of Postfix, including the setup of email forwarding and the activation of SMTP encryption. By adhering to these steps, users can ensure their mails are delivered securely.
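If you want a final sanity check after completing the guide, the commands below are a minimal sketch; both ship with a standard Postfix installation:
postconf relayhost smtp_sasl_auth_enable smtp_tls_security_level  # print the effective parameter values
mailq  # list messages still waiting in the queue
An empty queue and parameter values matching the configuration above indicate that the relay setup is in place.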
09 December 2024 · 8 min to read

Answers to Your Questions

How secure are cloud databases?

Hostman Cloud Databases emphasize security through advanced measures like encryption, regular security audits, and stringent access controls. Our robust cloud database security ensures that your data is protected against threats and unauthorized access, giving you peace of mind.

What types of databases are supported in your cloud database offering?

Various database types are supported by Hostman Cloud Database, including NoSQL databases like MongoDB and Redis, as well as relational databases like MySQL, PostgreSQL, and many more. You can choose any database for your unique requirements thanks to our flexible platform.

Can I easily scale my database resources in your cloud environment?

Yes, scaling with Hostman Cloud Databases is simple and effortless. Your database resources can be quickly scaled up or down to meet the demands of your workload, guaranteeing peak performance during traffic surges without sacrificing productivity.

What backup and recovery options are available for cloud databases?

Hostman Cloud Database offers reliable backup and recovery options. Point-in-time recovery, automated backups, and regularly scheduled snapshots keep your data safe and allow it to be promptly restored in the event of an emergency or failure.

How does your pricing model work for cloud databases?

Our cloud database services operate on a pay-as-you-go model. You only pay for the storage and processing power that you actually use.

Is there any downtime during database maintenance?

Hostman Cloud Database's continuous updates and automatic failover reduce downtime during upgrades or maintenance. Even during maintenance, your databases remain accessible and functional thanks to our cloud-based database solutions.

Can I migrate my existing databases to your cloud platform?

Absolutely. You can migrate your existing databases to Hostman Cloud Database with ease. Our migration tools and professional support team make it possible to transfer databases to our cloud platform with minimal downtime and disruption.

What level of support do you provide for managing and troubleshooting database issues?

Hostman provides extensive help with managing and troubleshooting database issues. Our technical support staff is available around the clock to assist with any inquiries or problems and ensure seamless database operations.

Can I integrate third-party tools or applications with your cloud databases?

Yes, you can quickly connect new resources and services to enhance the usefulness of your database with the aid of our flexible platform.

How do you handle data encryption and data privacy in your cloud database environment?

Hostman uses extensive encryption mechanisms to guarantee data confidentiality and privacy. Data is secured both in transit and at rest, ensuring compliance with data protection rules and protecting it against unauthorized access.

What monitoring and performance tuning options are available for cloud databases?

Advanced monitoring and performance optimization features are available with Hostman Cloud Database. Real-time monitoring, performance analysis, and automated alerting tools are all included in our platform, which enables you to maximize database performance and promptly handle any problems.

Take your database
to the next level

Contact us for a personalized consultation. Our team is here
to help you find the perfect solution for your business and
support you every step of the way.
Email us
Hostman's Support