
Setting up replication in MySQL

Hostman Team
Technical writer
MySQL
19.06.2024
Reading time: 19 min

When working with databases, having multiple copies of your data can be beneficial. This provides redundancy in case one of the servers fails, as well as improves availability, scalability, and overall performance of the database. The practice of synchronizing data across multiple separate databases is called replication.

MySQL is a relational database management system and the most popular open-source relational database today. It has several built-in replication features that allow you to maintain multiple copies of your data.

In this guide, we will set up two servers with MySQL: the original database, the "source", and its "replica". We will also look at how MySQL performs replication.

Note: Historically, this type of replication was called "master-slave". In a post published in July 2020, the MySQL team acknowledged the negative connotation of this term and announced plans to update the program and accompanying documentation to use more inclusive language.

However, this process is not yet complete. While the documentation and most commands in MySQL 8 have been updated to refer to the servers as the "source" and its "replicas", the old terminology still appears in places. This guide uses the newer terms and commands wherever possible, but in some cases referring to the master and slave concepts is unavoidable.

Prerequisites

  • Two cloud servers running Ubuntu 20.04. Both should have a non-root user with sudo privileges and a firewall configured using the UFW utility.

  • MySQL installed on each server. In this guide, we will use the 8.0.25 version.

On Hostman, you can install hosted MySQL on your cloud server with one click.

The process described here involves designating MySQL on one server as the original database, the "source", and then setting up MySQL on another server as its copy, the "replica".

This guide also describes transferring existing data from the source to the replica. This process involves creating a snapshot of the source database and transferring the resulting file to the replica. For this, we recommend setting up SSH keys on the source server and then ensuring that the source's public key is copied to the replica.

How Replication Works in MySQL

In MySQL, replication involves the source database recording every data change in one or more databases into a special file known as the binary log. Once the replica starts, two parallel threads are created. The first, called the IO thread, connects to the source MySQL instance and reads events from the binary log line by line, then copies them to a local file on the replica server called the relay log. The second thread, called the SQL thread, reads events from the relay log and applies them to the replica as quickly as possible.
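
Once binary logging is enabled on the source (we will do this in Step 2), you can inspect the binary log yourself from the MySQL shell. These are standard MySQL statements; the log file name below is only an example and will differ on your server:

SHOW BINARY LOGS;
SHOW BINLOG EVENTS IN 'mysql-bin.000001' LIMIT 5;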

Recent versions of MySQL support two methods of data replication. The difference between them lies in how the replicas track which database events from the source they have already processed.

The traditional replication method is called position-based binary log file replication. If you set up a MySQL replica using this method, you must provide it with a set of coordinates from the binary log. These consist of the name of the log file on the source that the replica should read from and a specific position in that log. This position represents the first event in the source database that the replica should copy.

Since replicas receive a copy of the entire binary log from the source database, without the correct coordinates, they will start copying every recorded database event. This can cause issues if you want to replicate data only from a certain point in time or duplicate only a subset of the source data.

Position-based binary log file replication is suitable for many cases but can become cumbersome in more complex systems. This led to the development of a new MySQL replication method, sometimes called transaction-based replication. This method involves creating a Global Transaction Identifier (GTID) for each transaction, or isolated piece of work, that the source MySQL instance performs.

The mechanism of this replication is similar to position-based binary log file replication: each time a transaction occurs in the source, MySQL assigns and records a GTID for it in the binary log along with the transaction itself. The GTID and transaction are then sent to the replicas for processing.

Transaction-based replication has several advantages over the traditional method. For example, both the source and its replicas keep a record of processed GTIDs, so if either of them encounters a transaction whose GTID it has already processed, it skips that transaction. This helps ensure consistency between the source and its replicas. Additionally, with transaction-based replication, replicas do not need to know the binary log coordinates of the next database event to process, so starting new replicas or changing a replica's position in the replication chain becomes much simpler.
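
For reference, transaction-based replication is typically enabled by turning on GTIDs on both servers and letting the replica position itself automatically instead of supplying binary log coordinates. Below is a minimal sketch of the relevant settings; it is shown only for comparison and is not used in this guide:

/etc/mysql/mysql.conf.d/mysqld.cnf
...
gtid_mode                    = ON
enforce_gtid_consistency     = ON
...

With GTIDs enabled, the replica would then be pointed at the source with auto-positioning rather than a log file name and position:

CHANGE REPLICATION SOURCE TO
SOURCE_HOST='source_server_ip',
SOURCE_USER='replica_user',
SOURCE_PASSWORD='password',
SOURCE_AUTO_POSITION=1;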

Keep in mind that this is just an overview of how MySQL handles replication; MySQL provides many settings that you can adapt to your needs. In this guide, we set up position-based binary log file replication. If you want to set up a different replication environment, check the official MySQL documentation.

Step 1 — Configuring the Source Server's Firewall for MySQL Replication

The firewalls on both servers should already be configured using UFW. This helps protect both servers; however, at the moment the source's firewall will also block any connection attempts from the MySQL replica.

To change this, you need to enable a new rule in UFW that will allow connections from the replica through the source's firewall. You can do this by running the following command on the source server. This command allows any connections originating from the replica's IP address (in this example, replica_server_ip) to the default MySQL port, 3306:

sudo ufw allow from replica_server_ip to any port 3306

Be sure to replace replica_server_ip with the actual IP address of your replica server.
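
You can confirm that the rule has been added by listing the active firewall rules:

sudo ufw status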

After this, you don't need to change anything in the replica's firewall since this server will not receive any incoming connections, and UFW does not block outgoing connections to the source MySQL server. Now, let's move on to configuring the source.

Step 2 — Configuring the Source Database for MySQL Replication

To get your source MySQL database to start synchronizing data, you need to make a few changes to its configuration.

In Ubuntu 20.04, the default MySQL server configuration file is named mysqld.cnf and is located in the /etc/mysql/mysql.conf.d/ directory. Open this file on the source server in any text editor. Here, we will use nano:

sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf

Find the bind-address directive in the file. By default, it looks like this:

/etc/mysql/mysql.conf.d/mysqld.cnf
...
bind-address        = 127.0.0.1
...

127.0.0.1 is the IPv4 address representing localhost. If you specify this value for the bind-address directive, MySQL will listen for connections only on the localhost address. In other words, this MySQL instance will only accept connections originating from the server it is installed on.

Remember that you are turning another MySQL instance into a replica of this one, so the replica needs to be able to read all new data written to the source. Therefore, you need to configure the source MySQL server to listen for connections on an address accessible to the replica, such as the source's public IP address.

Replace 127.0.0.1 with the source server's IP address. After this, the bind-address directive will look like this, but with your server's IP address instead of source_server_ip:

/etc/mysql/mysql.conf.d/mysqld.cnf
...
bind-address        = source_server_ip
...

Then, find the server-id directive, which defines the identifier MySQL uses to distinguish servers within the replication setup. Each server in the setup, including the source and all its replicas, must have its own unique server-id value. By default, this directive is commented out and looks like this:

/etc/mysql/mysql.conf.d/mysqld.cnf
...
# server-id         = 1

Uncomment this line by removing the # sign. You can choose any number as the value for this directive, but remember that the number must be unique and cannot match other server-id values in your replication group. For simplicity, in the following example, this value remains the default (1):

/etc/mysql/mysql.conf.d/mysqld.cnf
...
server-id         = 1
...

Find the log_bin directive under the server-id line. It specifies the name and location of the MySQL binary log.

By default, this directive is commented out, so binary logging is disabled. To know when and how to start replication, your replica server needs to read the source's binary log, so uncomment this line to enable logging on the source. After this, it will look like this:

/etc/mysql/mysql.conf.d/mysqld.cnf
...
log_bin                   = /var/log/mysql/mysql-bin.log
...

Finally, scroll down to the end of the file to find the commented binlog_do_db directive:

/etc/mysql/mysql.conf.d/mysqld.cnf
...
# binlog_do_db      = include_database_name

Remove the # sign to uncomment this line, and replace include_database_name with the name of the database you want to replicate. In this example, the binlog_do_db directive points to a database named db. If you already have an existing database on the source that you want to replicate, use its name instead of db:

/etc/mysql/mysql.conf.d/mysqld.cnf
...
binlog_do_db      = db

Note: If you are replicating more than one database, you can add another binlog_do_db directive for each database you need to add to the group. In this guide, we will continue with replicating just one database, but if you have several, it will look something like this:

/etc/mysql/mysql.conf.d/mysqld.cnf
...
binlog_do_db      = db
binlog_do_db      = db_1
binlog_do_db      = db_2

Alternatively, you can specify which MySQL databases should not be duplicated by adding a binlog_ignore_db directive for each of them:

/etc/mysql/mysql.conf.d/mysqld.cnf
...
binlog_ignore_db      = db_to_ignore
...

After making these changes, save and close the file. If you used nano, press CTRL + X, Y, and then ENTER.

Restart MySQL by running the following command:

sudo systemctl restart mysql
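
Optionally, you can confirm that MySQL restarted cleanly and is now listening on the address you configured. Assuming the default port 3306, the following commands check the service status and the listening sockets:

sudo systemctl status mysql
sudo ss -tlnp | grep 3306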

Now this MySQL instance is ready to act as the source database, which your MySQL server will replicate. However, before configuring the replica, we need to perform a few more steps on the source to ensure that replication works correctly. To begin with, we need to create a special MySQL user account that will perform all the replication-related actions.

Step 3 — Creating a User Account for MySQL Replication

Each replica in a MySQL environment connects to the source database using a username and password. Replicas can connect using any MySQL user that exists in the source database and has the appropriate privileges, but in this guide, we will create a special user account for this purpose.

Start by opening the MySQL shell:

sudo mysql

Note: If you have set up a password-authenticated user account, you can connect to MySQL using the following command:

mysql -u username -p

Replace username with your user name and enter the password when prompted.

Keep in mind that some operations in this guide, including those performed on the replica server, require elevated privileges. For this reason, connecting with administrator privileges may be more convenient. If you still want to use a MySQL user with limited rights, they must at least be granted the CREATE USER, RELOAD, REPLICATION CLIENT, REPLICATION SLAVE, and REPLICATION_SLAVE_ADMIN privileges.
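
For illustration, granting that minimal set of privileges to a hypothetical limited account named admin_user connecting from localhost could look like this. REPLICATION_SLAVE_ADMIN is a dynamic privilege, so it is granted in a separate statement here:

GRANT CREATE USER, RELOAD, REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'admin_user'@'localhost';
GRANT REPLICATION_SLAVE_ADMIN ON *.* TO 'admin_user'@'localhost';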

Create a new MySQL user. In this example, a user named replica_user is created, but you can use any name you prefer. Be sure to replace replica_server_ip with the public IP address of your replica server and replace password with a strong password of your own:

CREATE USER 'replica_user'@'replica_server_ip' IDENTIFIED WITH mysql_native_password BY 'password';

Note that this command specifies that replica_user will use the mysql_native_password plugin for authentication. You can instead use the default mechanism in MySQL, caching_sha2_password, but this will require setting up an encrypted connection between the source and the replica. This type may be suitable for production environments, but setting up encrypted connections is beyond the scope of this guide. If you wish, you can find instructions for setting up a replication environment with encrypted connections in the MySQL documentation.

After creating the new user, grant them the appropriate privileges. At a minimum, such a user should have the REPLICATION SLAVE permissions:

GRANT REPLICATION SLAVE ON *.* TO 'replica_user'@'replica_server_ip';

Next, it is recommended to run the FLUSH PRIVILEGES command. This frees any memory the server cached as a result of the preceding CREATE USER and GRANT statements:

FLUSH PRIVILEGES;
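
If you want to double-check the new account, you can list its privileges:

SHOW GRANTS FOR 'replica_user'@'replica_server_ip';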

With this, we have finished setting up the replication user in the source MySQL. However, do not exit the MySQL shell yet. You will need it in the next step to obtain important information about the binary log of the source database, so leave it open for now.

Step 4 — Obtaining Binary Log Coordinates from the Source Database

As mentioned in the "How Replication Works in MySQL" section, MySQL performs replication by line-by-line copying of database events from the source's binary log and executing each event on the replica. When using position-based binary log file replication, you must provide the replica with a set of coordinates detailing the name of the source binary log and the specific position within that file. The replica then uses these coordinates to determine the point in the log file from which it should start copying database events and track which events it has already processed.

In this step, we will obtain the current coordinates of the source database's binary log so that the replicas start copying data from the latest point in the log. To avoid other users changing the data while you obtain the coordinates, you will need to lock the database against writes. This will not take long, but it will briefly pause write operations on your database.

Execute the following command in the open MySQL shell on the source server. It will close all open tables in each database in the source instance and lock them:

FLUSH TABLES WITH READ LOCK;

Then execute the following command to output information about the current state of the source's (master's) binary log:

SHOW MASTER STATUS;
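
The output will look something like the following; the file name and position here are example values, and yours will differ:

+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      899 | db           |                  |                   |
+------------------+----------+--------------+------------------+-------------------+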

The File and Position values in this output are the coordinates from which the replica will start copying database events. Write them down; you will need them later when you start the replication.

Your next steps depend on whether there are any existing data in the source database that you want to transfer to your replicas. Proceed to the appropriate subsection.

If Your Source Has No Data to Transfer

If you have just installed this MySQL or there is no existing data that you want to transfer to the replica, you can unlock the tables at this point:

UNLOCK TABLES;

You can create the database you plan to replicate while you still have the MySQL shell open. As in the example from Step 2, the following command creates a database named db:

CREATE DATABASE db;

After that, close the MySQL shell:

exit

Now you can proceed to the next step.

If Your Source Has Data to Transfer

You can transfer existing data from the source database to the replica by creating a snapshot (copy) of the database with the mysqldump utility. However, your database is still locked, and the lock is tied to your current session: if you run further queries in the same window or close it, the tables will unlock automatically. Once the tables are unlocked, data in the database may change again, which could lead to discrepancies between your snapshot and the binary log coordinates you just obtained.

Therefore, you should open a new terminal window or tab on your local computer. This will allow you to create a database snapshot without unlocking MySQL.

In the new window or tab, open another SSH session to the server hosting the source MySQL instance:

ssh username@source_server_ip

Then, in the new tab or window, export your database using mysqldump. In this example, a dump file named db.sql is created from the database named db. Make sure to use the name of your database. Also, remember to run this command in the bash shell, not in the MySQL shell:

sudo mysqldump -u root db > db.sql

Now you can close this window or tab and return to the original window where the MySQL shell should still be open. In the MySQL command line, unlock the databases to make them writable again:

UNLOCK TABLES;

Exit the MySQL shell:

exit

You can now send the snapshot to your replica server. Assuming you have already set up SSH keys on the source server and added the source's public key to the authorized_keys file of your replica, you can securely send the snapshot using the scp command:

scp db.sql username@replica_server_ip:/tmp/

Remember to replace username with the name of the Ubuntu administrative user created on the replica server and change replica_server_ip to the IP address of your replica server. Also, note that this command places the snapshot in the replica server's /tmp directory.

After sending the snapshot to the replica server, connect to it via SSH:

ssh username@replica_server_ip

Open the MySQL shell:

sudo mysql

Create the database that you will copy from the source:

CREATE DATABASE db;

You do not need to create tables or load any sample data into this database. Everything will be filled in automatically when you import the database using the snapshot you just created. Instead, exit the MySQL shell:

exit

Import the snapshot:

sudo mysql db < /tmp/db.sql
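
You can optionally verify the import from the command line without reopening the interactive shell. This assumes the database is named db, as in the earlier examples:

sudo mysql -e "SHOW TABLES FROM db;"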

Your replica now contains all the existing data from the source database. Let's move on to configuring the replica server so that replication can start.

Step 5 — Configuring the Replica

Now we need to change the configuration of the replica, similar to how you changed the source database. Open the MySQL configuration file, mysqld.cnf, on the replica server:

sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf

As mentioned earlier, each MySQL instance in the environment must have a unique server-id value. Find the server-id directive on the replica, uncomment it, and change its value to any positive integer that differs from the source's:

/etc/mysql/mysql.conf.d/mysqld.cnf
server-id           = 2

Next, update the log_bin and binlog_do_db values to match the values you set in the source server's configuration file:

/etc/mysql/mysql.conf.d/mysqld.cnf
...
log_bin             = /var/log/mysql/mysql-bin.log
...
binlog_do_db        = db
...

Finally, add the relay-log directive, defining the location of the replica's relay log file. Include this line at the end of the configuration file:

/etc/mysql/mysql.conf.d/mysqld.cnf
...
relay-log           = /var/log/mysql/mysql-relay-bin.log

After making these changes, save and close the file. Then restart MySQL on the replica to apply the new settings:

sudo systemctl restart mysql

After restarting the MySQL service, the replication is ready to start.

Step 6 — Starting and Testing Replication

By this point, both of your MySQL instances are fully configured to begin master-slave replication. To start it, open the MySQL shell on the replica server:

sudo mysql

In the command line, execute the command below; it sets several MySQL replication parameters simultaneously. After running this command and starting replication, the server will attempt to connect to the IP address in SOURCE_HOST using the login and password (SOURCE_USER and SOURCE_PASSWORD, respectively). It will also look for the binary log named SOURCE_LOG_FILE and begin reading it from position SOURCE_LOG_POS.

Make sure to replace source_server_ip with the IP address of the source server. Similarly, replica_user and password should match the replication user you created in Step 3; mysql-bin.000001 and 899 should reflect the binary log coordinates obtained in Step 4.

You can enter this command in a text editor before running it on the server to make it easier to replace all the necessary information:

CHANGE REPLICATION SOURCE TO 
SOURCE_HOST='source_server_ip', 
SOURCE_USER='replica_user', 
SOURCE_PASSWORD='password', 
SOURCE_LOG_FILE='mysql-bin.000001', 
SOURCE_LOG_POS=899;

Now activate the replica server:

START REPLICA;

If you entered all the information correctly, this instance will start replicating all changes made to the db database on the source.

You can get all the details about the current status of the replica with the following command. The \G modifier displays the output vertically rather than as a table, which makes it easier to read. Note that \G itself terminates the statement, so no semicolon is needed:

SHOW REPLICA STATUS\G

This command outputs a lot of useful information that can be used during debugging and troubleshooting.
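
In a healthy setup, both replication threads should be reported as running. An abbreviated example of the relevant fields (the values shown are illustrative):

Replica_IO_State: Waiting for source to send event
Replica_IO_Running: Yes
Replica_SQL_Running: Yes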

Note: If the replica has connection issues or replication unexpectedly stops, an event in the source's binary log might be blocking replication. In such cases, you can use the SQL_SLAVE_SKIP_COUNTER variable to skip a certain number of events following the binary log position specified in the previous command. This variable can only be changed while replication is stopped, so stop the replica first; for example, to skip only the first event:

STOP REPLICA;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;

Then start the replica again:

START REPLICA;

If you ever need to stop replication, run this command on the replica server:

STOP REPLICA;

Your replica now copies data from the source. All changes made to the source database will be reflected in the MySQL replica instance. To test, let's create a test table and verify the successful execution of replication.

Open the MySQL shell on the source server:

sudo mysql

Select the database for replication:

USE db;

Create a table in it. In this example, we have a table named example_table with one column example_column:

CREATE TABLE example_table (
example_column varchar(30)
);

If desired, you can add some data:

INSERT INTO example_table VALUES 
('First row'), 
('Second row'), 
('Third row');

After this, return to the MySQL shell on your replica server and select the copied database:

USE db;

Then run the SHOW TABLES statement to display a list of all tables in the selected database:

SHOW TABLES;

If MySQL replication is working correctly, you will see the newly created table in the list.

Additionally, if you added data to the table on the source, you can verify that it was replicated by running the following query:

SELECT * FROM example_table;

In SQL, the asterisk (*) means "all columns". Essentially, this query tells MySQL to display every column from example_table. If MySQL replication is working correctly, the query will return the data you added on the source.
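
If everything is working, the output will look something like this:

+----------------+
| example_column |
+----------------+
| First row      |
| Second row     |
| Third row      |
+----------------+
3 rows in set (0.00 sec)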

If none of these operations output the table or data from the source, there might be an error somewhere in the configuration. To find the problem, you can try running the SHOW REPLICA STATUS\G operation. Additionally, you can refer to the MySQL documentation for troubleshooting replication.

Conclusion

With this guide, we have set up a position-based binary log replication environment in MySQL with one source and one replica. But remember, this is just one way to set up replication in MySQL. MySQL offers several different master-slave synchronization options that you can choose and configure according to your needs. There are also several third-party tools, such as Galera Cluster, which can be used to extend the built-in MySQL replication features.
