
How to Find and Delete Duplicate Rows in MySQL with GROUP BY and HAVING Clauses

Shahid Ali
Technical writer
MySQL
19.11.2024
Reading time: 8 min

Duplicate entries may inadvertently accumulate in databases, which are crucial for storing vast amounts of structured data. These duplicates could show up for a number of reasons, including system errors, data migration mistakes, or repeated user submissions. A database with duplicate entries may experience irregularities, sluggish performance, and erroneous reporting. Using the GROUP BY and HAVING clauses, as well as a different strategy that makes use of temporary tables, we will discuss two efficient methods for locating and removing duplicate rows in MySQL. With these techniques, you can be sure that your data will always be accurate, clean, and well-organized.

Database duplication in MySQL tables can clog your data, resulting in inaccurate analytics and needless storage. Locating and eliminating them is a crucial database upkeep task. This is a detailed guide on how to identify and remove duplicate rows.

If two or more columns in a row have identical values, it is called a duplicate row. For instance, rows that have the same values in both the userName and userEmail columns of a userDetails table may be considered duplicates.

Benefits of Removing Duplicate Data

Duplicate entries can slow down query performance, take up extra storage space, and produce misleading results in reports and analytics. Keeping databases clean improves the accuracy and speed of data processing, which is particularly important for growing databases and for databases that back critical applications.

Requirements

Before starting, make sure you have:

  • MySQL installed on your computer, or access to a MySQL database.
  • A basic understanding of SQL queries and general database concepts.
  • Access to a MySQL client or command-line interface for executing SQL commands.

To gain practical experience, you can create a sample database and table that contains duplicate records so that you can test and comprehend the techniques for eliminating them.

Creating a Test Database

Launch the MySQL command-line client to create a test database.

mysql -u your_username -p

Create a new database called test_dev_db after entering your MySQL credentials.

CREATE DATABASE test_dev_db;

Then, switch to this newly created database:

USE test_dev_db;

Create the userDetails table and populate it with several rows, including duplicates, using the CREATE TABLE and INSERT statements below.

CREATE TABLE userDetails (
userId INT AUTO_INCREMENT PRIMARY KEY,
userName VARCHAR(100),
userEmail VARCHAR(100)
);
INSERT INTO userDetails (userName, userEmail) VALUES
('Alisha', 'alisha@user.com'),
('Bobita', 'bobita@user.com'),
('Alisha', 'alisha@user.com'),
('Alisha', 'alisha@user.com');

Using GROUP BY and HAVING to Locate Duplicates

Grouping rows according to duplicate-defining columns and using HAVING to filter groups with more than one record is the simplest method for finding duplicates.

Now that you have duplicate data, you can use SQL to determine which rows contain duplicate entries. MySQL's GROUP BY and HAVING clauses make this process easier by enabling you to count instances of each distinct value. An example of a table structure is the userDetails table, which contains the columns userId, userName, and userEmail. The GROUP BY clause is useful for counting occurrences and identifying duplicates because it groups records according to specified column values. The HAVING clause then filters the groups formed by GROUP BY, which makes it possible to keep only the groups that contain duplicate entries.

Table userDetails Structure

userId | userName | userEmail
-------|----------|----------------
1      | Alisha   | alisha@user.com
2      | Bobita   | bobita@user.com
3      | Alisha   | alisha@user.com
4      | Alisha   | alisha@user.com

In the above table userDetails, records with identical userName and userEmail values are considered duplicates.

Finding Duplicates

Query to find the duplicate entries:

SELECT userName, userEmail, COUNT(*) as count
FROM userDetails
GROUP BY userName, userEmail
HAVING count > 1;

This query groups rows by userName and userEmail, counts the entries within each group, and filters out groups with only a single entry (no duplicates).

Explanation:

  • SELECT userName, userEmail, COUNT(*) as count: Retrieves each distinct userName/userEmail combination together with its number of occurrences.
  • GROUP BY userName, userEmail: Groups records by userName and userEmail.
  • COUNT(*): Counts the rows in each group.
  • HAVING count > 1: Keeps only groups with more than one record, which identifies the duplicates.

This query will return groups of duplicate records based on the selected columns.

userName | userEmail       | count
---------|-----------------|------
Alisha   | alisha@user.com | 3
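The whole detection step can be reproduced in a few lines. This is a minimal sketch using Python's built-in sqlite3 module as a stand-in for MySQL (the GROUP BY / HAVING logic is identical); the table and column names follow the example above.

```python
import sqlite3

# In-memory database standing in for MySQL; same schema as the article.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE userDetails (
    userId INTEGER PRIMARY KEY AUTOINCREMENT,
    userName TEXT,
    userEmail TEXT
)""")
conn.executemany(
    "INSERT INTO userDetails (userName, userEmail) VALUES (?, ?)",
    [("Alisha", "alisha@user.com"), ("Bobita", "bobita@user.com"),
     ("Alisha", "alisha@user.com"), ("Alisha", "alisha@user.com")],
)

# Same GROUP BY / HAVING query as in the article.
dupes = conn.execute("""
    SELECT userName, userEmail, COUNT(*) AS count
    FROM userDetails
    GROUP BY userName, userEmail
    HAVING count > 1
""").fetchall()
print(dupes)  # [('Alisha', 'alisha@user.com', 3)]
```

Running the same query against the MySQL table from the setup section returns the same single duplicate group.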

Before you delete any rows, make sure you have a reliable backup—check out the MySQL Backup tutorial to learn how to dump your entire database and compress it so you can restore your data if needed.

Eliminating Duplicate Rows

After finding duplicates, you may need to eliminate some records while keeping the unique ones. Joining the table to itself and removing rows with higher userId values is one effective method that preserves the lowest userId for every duplicate.

Use the SQL query to remove duplicate rows while keeping the lowest userId entry.

DELETE u1
FROM userDetails u1
JOIN userDetails u2
  ON u1.userName = u2.userName
 AND u1.userEmail = u2.userEmail
 AND u1.userId > u2.userId;

Explanation:

  • u1 & u2: Aliases for the userDetails table to enable a self-join.
  • ON u1.userName = u2.userName AND u1.userEmail = u2.userEmail: Matches rows with identical userName and userEmail values.
  • AND u1.userId > u2.userId: Deletes only the rows with higher userId values, keeping the row with the smallest userId in each group.
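The keep-the-lowest-userId rule can be sketched end to end in Python with sqlite3 standing in for MySQL. Note one assumption: SQLite does not support MySQL's multi-table DELETE ... JOIN syntax, so the sketch expresses the same rule as an equivalent subquery.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE userDetails (userId INTEGER PRIMARY KEY, userName TEXT, userEmail TEXT)"
)
conn.executemany("INSERT INTO userDetails VALUES (?, ?, ?)", [
    (1, "Alisha", "alisha@user.com"),
    (2, "Bobita", "bobita@user.com"),
    (3, "Alisha", "alisha@user.com"),
    (4, "Alisha", "alisha@user.com"),
])

# SQLite has no DELETE ... JOIN, so express the same rule as a subquery:
# drop every row whose userId is not the smallest in its (userName, userEmail) group.
conn.execute("""
    DELETE FROM userDetails
    WHERE userId NOT IN (
        SELECT MIN(userId) FROM userDetails GROUP BY userName, userEmail
    )
""")
rows = conn.execute(
    "SELECT userId, userName FROM userDetails ORDER BY userId"
).fetchall()
print(rows)  # [(1, 'Alisha'), (2, 'Bobita')]
```

Only the first occurrence of each duplicate group survives, matching the self-join DELETE above.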

Because this action cannot be undone, it is advised that you backup your data before beginning the deletion procedure.

Confirming Duplicate Removal

To confirm that all duplicates have been removed, re-run the identification query from earlier.

SELECT userName, userEmail, COUNT(*) as count
FROM userDetails
GROUP BY userName, userEmail
HAVING count > 1;

All duplicates have been successfully eliminated if this query yields no rows.

Benefits of Employing GROUP BY and HAVING

The GROUP BY and HAVING clauses serve as vital instruments for the aggregation of data and the filtration of grouped outcomes. These functionalities are especially useful for detecting and handling duplicate entries or for condensing extensive datasets. Below are the primary benefits of employing these clauses.

  • Efficient Identification of Duplicates
  • Data Aggregation and Summarization
  • Filtering Aggregated Results with Precision
  • Versatility Across Multiple Scenarios
  • Compatibility and Simplicity
  • Enhanced Query Readability
  • Support for Complex Aggregations

The GROUP BY and HAVING clauses serve as essential instruments for data aggregation, identifying duplicates, and filtering results. Their effectiveness, ease of use, and adaptability render them crucial for database management and data analysis activities, allowing users to derive insights and handle data proficiently across a variety of applications.

Identifying Duplicates Using a Temporary Table

When dealing with large datasets, it can be easier and more efficient to separate duplicates using a temporary table before deleting them.

Creating the Table

Create a temporary table that stores one representative row per duplicate group, keyed by the duplicate-defining columns (e.g., userName and userEmail).

CREATE TEMPORARY TABLE temp_view_duplicates AS
SELECT userName, userEmail, MIN(userId) AS minuid
FROM userDetails
GROUP BY userName, userEmail
HAVING COUNT(*) > 1;

Explanation:

  • CREATE TEMPORARY TABLE temp_view_duplicates AS: Creates a temporary table named temp_view_duplicates.
  • SELECT userName, userEmail, MIN(userId) AS minuid: Groups duplicates by userName and userEmail, keeping only the row with the smallest userId.
  • GROUP BY userName, userEmail: Groups rows by userName, userEmail.
  • HAVING COUNT(*) > 1: Filters only groups with more than one row, identifying duplicates.

This temporary table will now contain one representative row per duplicate group (the row with the smallest id).

Deleting Duplicates from the Main Table

Now that we have a list of unique rows with duplicates in the temp_view_duplicates table, we can use the temporary table to remove duplicates while keeping only the rows with the smallest userId.

Use the following DELETE command:

DELETE FROM userDetails
WHERE (userName, userEmail) IN (
SELECT userName, userEmail FROM temp_view_duplicates
) AND userId NOT IN (
SELECT minuid FROM temp_view_duplicates
);

Explanation:

  • WHERE (userName, userEmail) IN: Targets only the duplicate groups identified in temp_view_duplicates.
  • AND userId NOT IN (SELECT minuid FROM temp_view_duplicates): Ensures that only duplicate rows (those with higher userId values) are deleted.
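The two-step temporary-table approach can be sketched as follows, again using Python's sqlite3 as a stand-in for MySQL (SQLite also supports CREATE TEMPORARY TABLE and, in recent versions, the row-value IN comparison used here).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE userDetails (userId INTEGER PRIMARY KEY, userName TEXT, userEmail TEXT)"
)
conn.executemany("INSERT INTO userDetails VALUES (?, ?, ?)", [
    (1, "Alisha", "alisha@user.com"),
    (2, "Bobita", "bobita@user.com"),
    (3, "Alisha", "alisha@user.com"),
    (4, "Alisha", "alisha@user.com"),
])

# Step 1: stage one representative row per duplicate group.
conn.execute("""
    CREATE TEMPORARY TABLE temp_view_duplicates AS
    SELECT userName, userEmail, MIN(userId) AS minuid
    FROM userDetails
    GROUP BY userName, userEmail
    HAVING COUNT(*) > 1
""")

# Step 2: delete everything in a duplicate group except the staged minimum userId.
conn.execute("""
    DELETE FROM userDetails
    WHERE (userName, userEmail) IN (
        SELECT userName, userEmail FROM temp_view_duplicates
    ) AND userId NOT IN (SELECT minuid FROM temp_view_duplicates)
""")
remaining = conn.execute(
    "SELECT userId, userName, userEmail FROM userDetails ORDER BY userId"
).fetchall()
print(remaining)  # [(1, 'Alisha', 'alisha@user.com'), (2, 'Bobita', 'bobita@user.com')]
```

Because the subqueries read from the temporary table rather than from userDetails itself, the DELETE is straightforward and avoids MySQL's restriction on modifying a table that the same statement selects from.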

Verifying Results

To confirm that duplicates have been removed, query the userDetails table:

SELECT * FROM userDetails;

Only unique rows should remain.

Temporary tables (CREATE TEMPORARY TABLE) are automatically dropped when the session ends, so they don’t persist beyond the current session.

When making extensive deletions, think about utilizing a transaction to safely commit or undo changes as necessary.
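The transaction advice can be sketched like this, with sqlite3 standing in for MySQL (MySQL's START TRANSACTION / COMMIT / ROLLBACK behave the same way). The rowcount threshold is a hypothetical sanity check chosen for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE userDetails (userId INTEGER PRIMARY KEY, userName TEXT, userEmail TEXT)"
)
conn.executemany("INSERT INTO userDetails VALUES (?, ?, ?)", [
    (1, "Alisha", "alisha@user.com"),
    (2, "Bobita", "bobita@user.com"),
    (3, "Alisha", "alisha@user.com"),
])

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        cur = conn.execute("""
            DELETE FROM userDetails
            WHERE userId NOT IN (
                SELECT MIN(userId) FROM userDetails GROUP BY userName, userEmail
            )
        """)
        # Hypothetical safety check: abort if far more rows vanish than expected.
        if cur.rowcount > 2:
            raise RuntimeError("more deletions than expected, rolling back")
except RuntimeError:
    pass  # the transaction was rolled back automatically

remaining = conn.execute("SELECT COUNT(*) FROM userDetails").fetchone()[0]
print(remaining)  # 2
```

Here only one duplicate row is deleted, the check passes, and the transaction commits; had the check failed, the table would have been restored untouched.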

Key Advantages of Using a Temporary Table

  • Lower Complexity: By isolating duplicates, the removal process is simpler and clearer.
  • Enhanced Efficiency: It's faster for large datasets, as it avoids repeated joins.
  • Improved Readability: Using a temporary table makes the process more modular and easier to understand.

Conclusion

Eliminating duplicate records is essential for maintaining a well-organized database, improving performance, and ensuring accurate reporting. This guide presented two approaches:

  • Direct Method with GROUP BY and HAVING Clauses: Ideal for small datasets, using self-joins to delete duplicates.
  • Temporary Table Approach: More efficient for larger datasets, leveraging temporary storage to streamline deletion.

Choose the method that best fits your data size and complexity to keep your database clean and efficient.

MySQL
19.11.2024
Reading time: 8 min

Similar

MySQL

How to Import and Export Databases in MySQL or MariaDB

Database management is a crucial aspect of Linux server and web application administration. Importing and exporting databases are essential tasks for DevOps and system administrators. At a minimum, developers should know how to back up databases and transfer them between servers. This guide explains how to import and export database dumps in MySQL or MariaDB on a Linux server (using Ubuntu as an example). Working with Databases MySQL and MariaDB are popular relational database management systems (RDBMS) used for storing data in large applications. MariaDB is a fork of MySQL developed by its original creators due to licensing concerns following Oracle's acquisition of MySQL. Both MariaDB and MySQL share identical or similar APIs and operating mechanisms. Creating a Database Connect to MySQL or MariaDB with root privileges: For MySQL: mysql -u root -p   For MariaDB: mariadb -u root -p   Create a database (if it doesn't already exist): CREATE DATABASE IF NOT EXISTS <database_name>; Viewing Databases To see the list of available databases: SHOW DATABASES; Switching Databases To switch to a specific database: USE <database_name>; Viewing Tables To list all tables in the selected database: SHOW TABLES; Common SQL Commands Creating a table: CREATE TABLE IF NOT EXISTS users (  user_id INT NOT NULL PRIMARY KEY AUTO_INCREMENT,  username VARCHAR(100) NOT NULL); This creates a table named users with fields user_id and username. Inserting data into the table: INSERT INTO users (username) VALUES ('John Doe'); This adds a new row to the users table. Selecting all rows from the table: SELECT * FROM users; Monitoring MySQL/MariaDB status To check the server's global status and statistics: SHOW GLOBAL STATUS; Exporting Databases Exporting data from MySQL or MariaDB can be efficiently done using the mysqldump CLI utility or third-party tools like phpMyAdmin. 
The mysqldump utility allows you to save your database as an SQL dump, which contains the necessary commands for creating columns and populating them with data. This dump file can be easily managed, imported, or transferred. You will need: A database you want to export. User credentials with at least read access to the database. In the terminal, run the following command: mysqldump -u <username> -p<password> <database_name> > db_dump.SQL Where: -p<password>: Password for the database user (you can omit the password and simply use -p to prompt for it manually). db_dump.SQL: The name of the output dump file. <username>: The privileged user with read access. <database_name>: The name of the database you are exporting. When exporting over the internet, refer to our Creating an SSH Tunnel for MySQL tutorial for detailed steps on setting up local port forwarding, adjusting firewall rules, and using key-based authentication to secure your data and credentials in transit. To create dumps from a remote server, add the -h flag: mysqldump -h <ip-address> -u <username> -p<password> <database_name> > db_dump.SQL If the MySQL server uses a non-standard port, specify it with the -P flag: mysqldump -h <ip-address> -P <port> -u <username> -p<password> <database_name> > db_dump.SQL While the default export format is SQL, mysqldump also supports exporting data as CSV, XML, and other formats by configuring additional parameters. The SQL dump typically includes: Information about the RDBMS (MySQL or MariaDB) Commands for creating the required tables and their columns Data to populate those columns By default, it provides a comprehensive snapshot of the database structure and contents, making it an essential tool for database backups and migrations. Importing Data into MySQL or MariaDB To import a database dump, you don’t need mysqldump; a direct call to MySQL will suffice. 
Run the following command in your terminal: mysql -u <username> -p<password> <new_database_name> < db_dump.SQL Where: -p<password>: The user's password (use -p without the password to be prompted manually). db_dump.SQL: The dump file containing your database data. <username>: A privileged user with write access. <new_database_name>: The name of the target database to which you are importing the dump. If the process completes successfully, there will be no output. If any errors or warnings occur, MySQL or MariaDB will display them. You can check if the import was successful with these commands: SHOW DATABASES;  -- Lists all databasesUSE <database_name>;  -- Selects the target databaseSHOW TABLES;  -- Lists all tables within the selected database By executing these commands, you can confirm that the database structure and data have been imported correctly. Creating a systemd Service for Backup Suppose you want to automate the database backup (export) process. In this guide, we will create a service-timer that will trigger a script for backing up data. A Timer is a mechanism used to schedule the execution of a specific service at a given time or through certain intervals. 
Follow these steps to set it up: First, connect to the server and create the directory for backup scripts: mkdir -p /usr/bin/backup_scripts  # Create directory for backup scripts Create and open the file /usr/bin/backup_scripts/mysql_backup.sh in any text editor (e.g., nano): nano /usr/bin/backup_scripts/mysql_backup.sh Inside the file, add the following script: TIMESTAMP=$(date +'%F') # Get the current date BACKUP_DIR='<path_to_backup_directory>' MYSQL_USER='<username>' MYSQL_PASSWORD='<password>' DATABASE_NAME='<database_name>' mkdir -p "$BACKUP_DIR/$TIMESTAMP" # Create directory for this date mysqldump -u $MYSQL_USER -p$MYSQL_PASSWORD $DATABASE_NAME > "$BACKUP_DIR/$TIMESTAMP/$DATABASE_NAME-$TIMESTAMP.sql" # Create dump find "$BACKUP_DIR" -type d -mtime +28 -exec rm -rf {} \; # Delete backups older than 28 days Replace the placeholder variables with the actual backup directory path, MySQL user credentials, and the database name. Grant execution permissions to the script: chmod +x /usr/bin/backup_scripts/mysql_backup.sh Create the service file /etc/systemd/system/mysql-backup.service: sudo nano /etc/systemd/system/mysql-backup.service Add the following content: [Unit] Description=MySQL Database Backup Service [Service] Type=oneshot ExecStart=/usr/bin/backup_scripts/mysql_backup.sh [Install] WantedBy=multi-user.target Create the timer file: sudo nano /etc/systemd/system/mysql-backup.timer Add this content to schedule the backup: [Unit] Description=Run MySQL Backup Service Weekly [Timer] OnCalendar=weekly Persistent=true [Install] WantedBy=timers.target Reload the systemd configuration, enable the timer for autostart, and start it: systemctl daemon-reload  # Reload systemd configurationsystemctl enable mysql-backup.timer  # Enable timer to start automaticallysystemctl start mysql-backup.timer  # Start the timer Check the status of the timer and ensure it is working: systemctl status mysql-backup.timersystemctl list-timers  # Lists active timers Now, your system will 
automatically create a backup of the specified database every week. Export and Import via phpMyAdmin You can perform database imports and exports not only through command-line utilities but also through the phpMyAdmin web interface. This method is typically more convenient when the dump size is small (less than 1GB). This section will cover the basic process of importing and exporting databases. Export To export a database: Go to the phpMyAdmin interface and select the desired database from the left-hand panel. Click on the Export tab. Choose export method: Quick Export: Select this if you want a basic export with default settings. Custom Export: Choose this for more specific export options, such as selecting certain tables, formats, or compression methods. Click Export. To export specific tables: Click on the database name in the left sidebar to view its tables. Select the tables you want to export by checking the boxes next to their names. At the bottom of the page, choose Export from the list of actions. On the next page, verify the export format and options, then click Go to save the dump to your local machine. Import The process of importing a database is very similar to exporting. Follow these steps: Open phpMyAdmin and navigate to the database you want to import into. If the database doesn't exist, create it by clicking Create Database from the left sidebar. Inside the database, click on the Import tab. Click the Choose File button to browse for and select the SQL dump file from your device. Choose the necessary options like file encoding, format (typically default settings are fine), and other options as needed. Click Go to start the import process. Once the process is complete, you will see the imported files and their contents listed in the left sidebar. You can also modify them if needed. Conclusion The choice of method depends on your needs: phpMyAdmin is ideal for manually creating backups or for quick access to a database. 
It’s user-friendly and simple for small-scale tasks. Using command-line utilities would be more efficient and flexible for automation or if you’re transferring infrastructure. Frequently Asked Questions How do I export a MySQL or MariaDB database? Use mysqldump just like in MySQL: mysqldump -u user db > file.sql. How do I import a database dump in MariaDB? Run: mysql -u user dbname < dump.sql. Can I use the same dump file for both MySQL and MariaDB? Yup — standard SQL dumps work seamlessly between the two. How do I perform a MySQL import database operation from a dump file? Use the command mysql -u username -p database_name < backup.sql to import a database from a SQL dump file.
16 June 2025 · 8 min to read
MySQL

How to Create a MySQL Database Dump

MySQL is the most popular relational database management system that performs various operations with tables, such as adding, deleting, searching, sorting, and outputting data based on user queries. It's important to understand that MySQL controls databases but is not itself a database. Therefore, MySQL and the database are separate entities: MySQL is a program that operates on information. The database is the information recorded on a hard disk. Based on this architecture, MySQL supports exporting information — creating a database dump. This functionality allows several useful operations: Database Backup: Unexpected situations when using cloud (or local) servers can lead not only to system failures but also to data loss. Therefore, it’s important to regularly create database dumps, which can be stored on other secure storage devices. Database Transfer from One Server to Another: Manually copying database elements may be challenging or impossible when migrating from one server to another. A dump makes it possible to transfer data quickly. A database dump is essentially a sequential set of SQL instructions that create an exact copy of the original database, including both its structure and content. This guide will cover the primary methods for creating a database dump and importing it back into MySQL to restore data. Preparing a Test Database We will create a cloud database to test the examples in this guide. If you already have a MySQL database where you can test the dump creation function, you can skip this step. In the Hostman panel, we will create a MySQL 8 database, leaving all other parameters as default. You can connect to the cloud database via a terminal. The necessary command can be copied from the control panel. 
Let's connect to our database: mysql -u USER -p'PASSWORD' -h HOST -P 3306 -D DATABASE For example, a real connection command might look like this: mysql -u gen_user -p'sU}NEyx#<2P~\e' -h 91.206.179.29 -P 3306 -D default_db Next, we need to create a simple table consisting of three columns: CREATE TABLE People ( id INT, name VARCHAR(255) NOT NULL, bord DATE ); And populate it with some values: INSERT INTO People VALUES (120, 'Natalie', NOW()), (121, 'Meredith', NOW()), (122, 'James', NOW()); This fills the new database so that we can later create a dump from it. By the way, on the Users tab of the database management page, there are buttons that open interfaces for graphical MySQL database administration tools — phpMyAdmin and Adminer. Before running your first mysqldump command, consider securing your remote connections by following the steps in our Creating an SSH Tunnel for MySQL guide—learn how to wrap mysqldump inside an SSH port-forward so your dump credentials and data stay encrypted in transit. Method 1: Console Terminal A more traditional but less interactive way to create a MySQL database dump is by using the appropriate command in a console terminal. To do this, you need to connect to MySQL via an SSH connection and then enter the dump creation command: mysqldump -u USER -p'PASSWORD' -h ADDRESS -P PORT DATABASE > FILE Let's break down each of the specified parameters: USER: The username used to authenticate in MySQL. PASSWORD: The password for the user to authenticate in MySQL. ADDRESS: The IP address of the remote MySQL server. PORT: The port of the remote MySQL server. DATABASE: The name of the database to be dumped. FILE: The name of the file where the database dump will be saved on the local machine. There are two possible ways to create a dump via the console: Local MySQL: The dump is created from a database located on a local MySQL server. In this case, we don’t need to specify the ADDRESS and PORT parameters. 
Remote MySQL: The dump is created from a database located on a remote MySQL server. In this case, you need to specify ADDRESS and PORT. Local MySQL dump command example: mysqldump -u admin -p'qwerty123' default_db > just_dump.sql Remote MySQL dump command example: mysqldump -u admin -p'qwerty123' -h 91.206.179.29 -P 3306 default_db > just_dump.sql In both cases, for security reasons, you can omit the explicit password specification — this way, the system will prompt you to enter the password manually: mysqldump -u admin -p default_db > just_dump.sqlmysqldump -u admin -p -h 91.206.179.29 -P 3306 default_db > just_dump.sql Warnings and Errors After executing the command, several warnings and errors may appear in the console output. Let’s break down each message in detail. Password Security Warning The first warning from MySQL notifies you about the insecurity of using a password as an explicit parameter: mysqldump: [Warning] Using a password on the command line interface can be insecure. To suppress this warning, use the -p flag without specifying the password directly. Global Transaction Identifier (GTID) Warning The next warning concerns the inclusion of the Global Transaction Identifier (GTID) in the resulting dump and suggests disabling it with the --set-gtid-purged=OFF flag: Warning: A partial dump from a server that has GTIDs will by default include the GTIDs of all transactions, even those that changed suppressed parts of the database. If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete dump, pass --all-databases --triggers --routines --events. GTID (Global Transaction Identifier) is a unique 128-bit identifier associated with each transaction, which improves overall data consistency. Disabling GTID may lead to data inconsistency (for example, due to duplication of certain SQL statements). 
Data Dump Consistency Warning Another GTID-related warning indicates that the dump operation is not atomic: Warning: A dump from a server that has GTIDs enabled will by default include the GTIDs of all transactions, even those that were executed during its extraction and might not be represented in the dumped data. This might result in an inconsistent data dump.In order to ensure a consistent backup of the database, pass --single-transaction or --lock-all-tables or --master-data. This means that database changes performed by other applications during the dump creation may be missing, leading to data inconsistency. To avoid this issue, use one of the following flags: --single-transaction to create the dump within a single transaction. --lock-all-tables to block any other operations on the database during the dump. Access Denied Error You might encounter an error preventing the dump creation due to insufficient privileges: mysqldump: Error: 'Access denied; you need (at least one of) the PROCESS privilege(s) for this operation' when trying to dump tablespaces Even if the user specified in the command has all database privileges, they may lack the global PROCESS privilege. To grant this privilege, execute the following command: GRANT PROCESS ON *.* TO 'admin'@'localhost'; However, this is not the best solution from a security perspective. Instead of granting global privileges, it's better to use the --no-tablespaces flag during the dump command execution. With all the additional flags, the dump creation command will look like this: mysqldump -u admin -p'qwerty123' -h 91.206.179.29 -P 3306 default_db --no-tablespaces --set-gtid-purged=OFF --single-transaction > just_dump.sql In this case, only one harmless warning will remain about explicitly specifying the password: mysqldump: [Warning] Using a password on the command line interface can be insecure. 
Non-existent Database Error If you accidentally specify the name of a non-existent database, an unclear error will appear denying access to the database for the specified user: ERROR 1044 (42000): Access denied for user 'admin'@'%' to database 'default_db' This can cause confusion, so always double-check the accuracy of the database name specified in the command. Dump File After successfully executing the dump command, you can check the file system using: ls You should see the corresponding database dump file: just_dump.sql  resize.log  snap Although you can open this file in any text editor, its size may be quite large, especially if the original database contained a lot of information: cat just_dump.sql At the beginning of the file, there is information about the created dump, followed by SQL instructions: -- MySQL dump 10.13 Distrib 8.0.40, for Linux (x86_64) -- -- Host: 37.220.80.65 Database: default_db -- ------------------------------------------------------ -- Server version 8.0.22-13 /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */; /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */; /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */; ... /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */; /*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */; -- Dump completed on 2025-01-19 5:33:16 The output file doesn't have to be saved in the current directory; you can specify any other directory: mysqldump -u admin -p'qwerty123' -h 91.206.179.29 -P 3306 default_db --no-tablespaces --set-gtid-purged=OFF --single-transaction > /tmp/just_dump.sql In this case, we create the dump file just_dump.sql in the /tmp directory. Dumping Multiple Databases In real projects, MySQL often handles multiple databases. 
You can use a special flag to dump all existing databases: mysqldump -u admin -p'qwerty123' -h 91.206.179.29 -P 3306 --all-databases --no-tablespaces --set-gtid-purged=OFF --single-transaction > just_dump.sql This command differs from the previous one only in that the --all-databases flag is specified instead of a specific database name. Alternatively, you can list the databases you want to include in the dump: mysqldump -u admin -p'qwerty123' -h 91.206.179.29 -P 3306 db_first db_second db_third --no-tablespaces --set-gtid-purged=OFF --single-transaction > just_dump.sql Structure Without Data You can create a dump containing only the database structure (table schemas) without any data by using the --no-data flag: mysqldump -u admin -p'qwerty123' -h 91.206.179.29 -P 3306 default_db --no-data --no-tablespaces --set-gtid-purged=OFF --single-transaction > just_dump.sql Specific Tables Instead of dumping an entire MySQL database, you can dump only specific tables by listing their names after the database name: mysqldump -u admin -p'qwerty123' -h 91.206.179.29 -P 3306 default_db table1 table2 --no-data --no-tablespaces --set-gtid-purged=OFF --single-transaction > just_dump.sql On the other hand, you can dump a database excluding specific tables using the --ignore-table parameter: mysqldump -u admin -p'qwerty123' -h 91.206.179.29 -P 3306 default_db --ignore-table=default_db.logs --no-data --no-tablespaces --set-gtid-purged=OFF --single-transaction > just_dump.sql Note that the table name must always be preceded by the database name and separated by a dot. 
To exclude multiple tables, list each one with the --ignore-table option: mysqldump -u admin -p'qwerty123' -h 91.206.179.29 -P 3306 default_db --ignore-table=default_db.table1 --ignore-table=default_db.table2 --no-data --no-tablespaces --set-gtid-purged=OFF --single-transaction > just_dump.sql Character Encoding In some cases, it may be necessary to explicitly specify the character encoding for the dump: mysqldump -u admin -p'qwerty123' -h 91.206.179.29 -P 3306 default_db --no-tablespaces --set-gtid-purged=OFF --single-transaction --default-character-set=utf8 > just_dump.sql Typically, UTF-8 is the preferred character encoding. Archiving the Dump Sometimes it’s useful to compress the dump immediately after creation. You can do this by piping the mysqldump output into gzip, then saving the compressed archive: mysqldump -u admin -p'qwerty123' -h 91.206.179.29 -P 3306 default_db --no-tablespaces --set-gtid-purged=OFF --single-transaction | gzip > just_dump.sql.gz If you check the current directory with the ls command, you’ll see the compressed dump: just_dump.sql  just_dump.sql.gz  resize.log  snap Restoring Data A database dump is usually created to restore data in the future, for example, in case of data loss or server migration. To load the database dump into MySQL and restore data, use the following command: mysql -u admin -p'qwerty123' -h 91.206.179.29 -P 3306 default_db < just_dump.sql If the dump file size is too large, MySQL may have a default limit that prevents loading it. To adjust the maximum allowed dump size, you can use the --max_allowed_packet flag: mysql -u admin -p'qwerty123' -h 91.206.179.29 -P 3306 default_db --max_allowed_packet=64M < just_dump.sql In this example, the maximum allowed dump size is set to 64 MB. 
Method 2: Using phpMyAdmin

If you're using phpMyAdmin, you can create a database dump through the graphical interface without manually executing any commands; phpMyAdmin handles everything for you.

Log in to phpMyAdmin. Open the phpMyAdmin interface and sign in with your credentials.

Select the database. In the left sidebar, choose the database you want to export. This opens a page listing the tables in the selected database.

Configure the export. Click the Export button to open a dedicated page for configuring the database export (dump). You can also reach the export page from the phpMyAdmin home page, but it may not show all databases available for export; it's better to navigate to the specific database first and then click Export. Note that phpMyAdmin can export only databases that contain tables; empty databases cannot be exported.

There are two export options in phpMyAdmin:

Quick Export. Creates the dump using the default export settings.
Custom Export. Lets you configure the export manually, for example by excluding specific tables, changing the character encoding, or adjusting format options.

phpMyAdmin supports exporting to various formats beyond SQL, such as PDF, JSON, CSV, and YAML. Its dump configuration options are more user-friendly and visually intuitive than command-line flags.

Start the export. Once you've configured all the export parameters, scroll down and click the Export button. The dump file will be generated and downloaded through your browser.

Method 3: Using Adminer

Creating a database dump in Adminer is very similar to phpMyAdmin; in fact, Adminer's graphical interface is even simpler.
Log in to Adminer. Start by logging into Adminer, then open the export page by clicking the Export link in the left sidebar.

Configure the export. Adminer has no predefined export types, so all configuration options are shown immediately. You can select specific tables to include in the dump. The dump can either be saved (in a chosen format or as a GZIP archive) or opened in a new window so you can copy the SQL statements manually.

Conclusion

The native way to create a MySQL database dump, without any additional tools, is the mysqldump command with the appropriate parameters. An alternative is to use visual database management tools with graphical interfaces. Utilities like phpMyAdmin and Adminer simplify database interactions by providing a user-friendly, interactive environment. This is particularly useful for those who are not well-versed in SQL syntax, turning tasks such as creating a dump into a series of simple mouse clicks.

Frequently Asked Questions

What is the command to create a MySQL dump?
Just run: mysqldump -u username -p database_name > backup.sql. Simple and effective.

How do I compress a MySQL dump file?
Pipe it through gzip: mysqldump ... | gzip > backup.sql.gz. This saves space and upload time.

Is mysqldump suitable for large databases?
It works, but for very large datasets, check out tools like mydumper or solutions from Percona.
16 June 2025 · 13 min to read
MySQL

MySQL Data Types: Overview, Usage Examples & Best Practices

When setting up a database in MySQL, the user must assign a name and a data type to each column in a table. This tells the system exactly what kind of data will be stored and how it will be processed, and it determines how much memory is required. MySQL offers several main classes of data types, including numeric, character, and date and time, among others. Each class contains several distinct types. It's important to define them correctly at the table-creation stage to avoid issues once the full database structure is complete. This article describes the main MySQL data types, their structure, valid values, and recommended usage.

Numeric Types

MySQL supports all standard SQL numeric data types, which are used to store numbers. This article covers:

Integers;
Floating-point numbers;
Fixed-point numbers.

Each of these is described in more detail below.

Integers

Integer types store signed or unsigned whole numbers. Signed types can hold both positive and negative values, while unsigned types hold only non-negative values, effectively doubling the positive range. There are five main integer types: TINYINT, SMALLINT, MEDIUMINT, INT, and BIGINT. The primary difference is the range of values they can store; the larger the range, the more memory is required. Selecting the right type helps allocate memory efficiently when creating tables.

Type       Memory (bytes)   Signed Range      Unsigned Range
TINYINT    1                -2⁷ to 2⁷-1       0 to 2⁸-1
SMALLINT   2                -2¹⁵ to 2¹⁵-1     0 to 2¹⁶-1
MEDIUMINT  3                -2²³ to 2²³-1     0 to 2²⁴-1
INT        4                -2³¹ to 2³¹-1     0 to 2³²-1
BIGINT     8                -2⁶³ to 2⁶³-1     0 to 2⁶⁴-1

Specifying a display width (e.g., MEDIUMINT(6)) does not affect the range. It is purely cosmetic and is used by some tools to pad shorter numbers with spaces. You can also use the ZEROFILL attribute to pad with zeros instead of spaces; for example, SMALLINT(8) ZEROFILL displays 256 as 00000256.

Floating-Point Numbers

Floating-point types store approximate numeric values.
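Both halves of this section can be checked numerically. The integer ranges in the table above follow directly from the byte sizes (an n-byte signed type spans −2^(8n−1) to 2^(8n−1)−1), and the "approximate" nature of floating-point values shows up in any IEEE-754 implementation, such as Python's float. A small sketch:

```python
# Integer ranges derived from byte size, matching the table above.
SIZES = {"TINYINT": 1, "SMALLINT": 2, "MEDIUMINT": 3, "INT": 4, "BIGINT": 8}

def int_range(n_bytes, unsigned=False):
    """Return the (min, max) range of an n-byte MySQL integer type."""
    bits = 8 * n_bytes
    if unsigned:
        return (0, 2**bits - 1)
    return (-2**(bits - 1), 2**(bits - 1) - 1)

print(int_range(SIZES["TINYINT"]))    # (-128, 127)
print(int_range(SIZES["INT"], True))  # (0, 4294967295)

# Floating-point values are approximate: the classic representation error.
print(0.1 + 0.2 == 0.3)               # False (binary floats cannot store 0.1 exactly)
```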
MySQL lets you define floating-point precision as:

FLOAT(p)

where p is the precision, from 0 to 53. If p is less than 25, single-precision FLOAT is used; otherwise, double-precision DOUBLE is used.

Type    Memory (bytes)   Value Range
FLOAT   4                ~±3.4028 × 10³⁸
DOUBLE  8                ~±1.7976 × 10³⁰⁸

Fixed-Point Numbers

Fixed-point types store exact values with a specified precision. The DECIMAL and NUMERIC types are used for this, typically in financial calculations where rounding errors are unacceptable. They are defined as:

DECIMAL(M, D)

M: total number of digits (up to 65; default is 10).
D: number of digits after the decimal point (0 to 30; default is 0).

Example: DECIMAL(6, 3) stores values from -999.999 to 999.999. DECIMAL and NUMERIC are functionally equivalent.

Character Types

Designed for storing short texts, the character data types in MySQL are CHAR and VARCHAR. They are similar to each other; the main difference is how the data is stored and retrieved.

CHAR stores a fixed-length string (from 0 to 2⁸−1 characters), defined at table creation. If you insert a string shorter than the specified length, the remaining characters are padded with spaces.

VARCHAR stores a variable-length string (from 0 to 2¹⁶−1 characters), also defined at table creation. Unlike CHAR, it stores only the characters actually inserted, plus 1 or 2 bytes to record the string's length.

If the input string exceeds the defined length for either CHAR or VARCHAR, it is truncated to fit. If CHAR and VARCHAR columns are combined during table creation, MySQL may convert the CHAR column to VARCHAR.
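The fixed- versus variable-length behavior can be mimicked with two small Python helpers. This is an illustration of the truncation and padding rules described above, not MySQL's actual storage code:

```python
def char_store(value, length=5):
    """CHAR(n): truncate to n characters, then right-pad with spaces to exactly n."""
    return value[:length].ljust(length)

def varchar_store(value, length=5):
    """VARCHAR(n): truncate to n characters and store only what remains."""
    return value[:length]

print(repr(char_store("A")))           # 'A    '
print(repr(varchar_store("Example")))  # 'Examp'
```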
To illustrate the difference, here's how strings of various lengths are stored as CHAR(5) and VARCHAR(5):

String     CHAR(5)    VARCHAR(5)
'A'        'A    '    'A'
'Hello'    'Hello'    'Hello'
'Example'  'Examp'    'Examp'

Text and Binary Types

The TEXT data type in MySQL stores large text-based data, while the BLOB type is designed for binary data such as images, compiled code, or audio. The two types are similar in many ways; the main difference lies in how data is stored and processed:

For BLOB, sorting and comparisons are case-sensitive.
For TEXT, sorting and comparisons are case-insensitive.

MySQL cannot index the full length of TEXT or BLOB fields and does not support sorting on full-field indexes for these types. If a string exceeds the maximum allowed by a TEXT or BLOB type, the input is truncated to fit.

Below are the variants of the TEXT type, with the required storage size and the maximum number of characters allowed:

Type        Max Size (bytes)   Max Characters
TINYTEXT    2⁸−1               Up to 2⁸−1
TEXT        2¹⁶−1              Up to 2¹⁶−1
MEDIUMTEXT  2²⁴−1              Up to 2²⁴−1
LONGTEXT    2³²−1              Up to 2³²−1

The BLOB types follow the same structure and size limits as their TEXT counterparts:

Type        Max Size (bytes)   Max Bytes
TINYBLOB    2⁸−1               Up to 2⁸−1
BLOB        2¹⁶−1              Up to 2¹⁶−1
MEDIUMBLOB  2²⁴−1              Up to 2²⁴−1
LONGBLOB    2³²−1              Up to 2³²−1

Date and Time

MySQL provides several data types for storing date and time information. The table below summarizes each type, its memory usage, and key details:

Type      Memory (bytes)  Description
DATE      3               Stores only the date, in the format 'YYYY-MM-DD'. Separators can be any non-numeric character, not just dashes. Range: January 1, 1000 to December 31, 9999.
DATETIME  8               Stores both date and time, in the format 'YYYY-MM-DD HH:MM:SS'. Range: 00:00:00 on January 1, 1000 to 23:59:59 on December 31, 9999.
TIMESTAMP  4              Stores date and time in the same format as DATETIME but uses half the memory. Range: 00:00:01 on January 1, 1970 to 03:14:07 on January 19, 2038.
TIME       3              Stores only time, in 'HH:MM:SS' format, or 'HHH:MM:SS' for large hour values. Range: −838:59:59 to 838:59:59.
YEAR       1              Stores only the year, in 'YYYY' format. Range: 1901 to 2155, plus 0000.

Before you tackle semi-structured data with JSON, our How to Create Tables in MySQL guide shows you exactly how to declare each column (numeric, text, date, or JSON) so your schema is solid from day one.

JSON

MySQL supports a native JSON (JavaScript Object Notation) data type, introduced in version 5.7.8. It offers the following advantages over storing JSON as plain text:

Automatic validation: MySQL ensures that stored data is valid JSON.
Optimized storage format: JSON data is internally converted to a binary representation for faster access and querying.

The memory required for JSON storage is roughly equivalent to LONGTEXT.

Composite Types

MySQL provides special string data types that store values from a predefined, fixed set: ENUM and SET.

ENUM stores a single value from the defined list and requires up to 2 bytes of memory. SET can store up to 64 values simultaneously (one bit per member) and uses up to 8 bytes of memory.

For example, if a user defines the following list for ENUM:

ENUM('a', 's', 'd')

then the column can contain only one of the values 'a', 's', or 'd'. If the same list is defined for SET:

SET('a', 's', 'd')

then the column can contain any combination of the listed values: 'a', 's', 'd', 'a,s', 'a,d', 's,d', 'a,s,d', or the empty set ('').

If a user inserts a value like ('a', 's', 'a') into a SET column, MySQL automatically deduplicates the entries and sorts them in declaration order, so the stored value is 'a,s'.

Conclusion

This article covered the most commonly used MySQL data types for defining table columns.
With this knowledge, users can select the appropriate types and design their database structure effectively. For more detailed and less frequently used data types, refer to the official MySQL documentation.

You can use Hostman cloud databases to practice working with MySQL and its different data types. Key advantages of using MySQL in the cloud include:

Database maintenance and configuration handled by the service provider;
Reliability, high availability, and fault tolerance;
Automatic backups;
Fast horizontal and vertical scaling;
Continuous operation through cluster replication;
Database operation on the computational resources of a cloud platform.
06 June 2025 · 7 min to read
