
How to Set Up Backup with Bacula
Hostman Team
Technical writer
Linux
18.07.2025
Reading time: 14 min

Bacula is a cross-platform, open-source, client-server backup solution that lets you back up files, directories, databases, mail server data (Postfix, Exim, Sendmail, Dovecot), system images, and entire operating systems.

In this guide, we’ll walk you through the process of installing and configuring Bacula on Linux, as well as creating backups and restoring user data.

To get started with Bacula, you’ll need a server or virtual machine running any Linux distribution. In this tutorial, we’ll be using a cloud server from Hostman with Debian 12, but any other Linux server will also work.

Bacula Architecture

Bacula’s architecture consists of the following components:

Director (Bacula Director)

The core component responsible for managing all backup, restore, and verification operations. The Director schedules jobs, sends commands to other components, and writes information to the database.

Storage Daemon (Bacula Storage)

Handles communication with storage devices such as disks, cloud storage, etc. The Storage Daemon receives data from the File Daemon and writes it to the configured storage medium.

File Daemon (Bacula File)

The agent installed on client machines to perform the actual backup operations.

Catalog

A database (MySQL, PostgreSQL, or SQLite) used by Bacula to store information about completed jobs, such as backup metadata, file lists, and restore history.

Console (Bacula Console, bconsole)

A command-line utility for interacting with Bacula. The Console allows administrators to control the Director via a CLI. GUI tools such as Bacula Web and Baculum are also available.

Monitor (Optional)

A component for monitoring the Bacula system status. It tracks job statuses, daemon states, and storage device conditions.

Creating Test Data for Backup

Let’s create some test files to use in our backup.

Create a test directory and navigate into it:

mkdir /root/test_backups && cd /root/test_backups

Now create six sequential files:

touch file{1..6}.txt
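
The brace expansion above creates file1.txt through file6.txt in one command. As a quick sanity check, you can count the files it produced (a sketch using a scratch directory in /tmp so it can run anywhere; the tutorial itself uses /root/test_backups):

```shell
# Recreate the test files in a scratch directory and count them;
# under bash, file{1..6}.txt expands to file1.txt ... file6.txt
mkdir -p /tmp/test_backups
cd /tmp/test_backups
touch file{1..6}.txt
ls file*.txt | wc -l   # prints 6 under bash, where brace expansion applies
```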

Also, create a directory in advance for storing restored files:

mkdir /root/restored-files

Installing Bacula

In this tutorial, we will install all Bacula components on a single server. However, Bacula also supports a distributed setup where components such as the Director, Storage Daemon, Client, and database can be installed on separate servers. This decentralized setup is suitable for backing up multiple systems without overloading a single server.

We'll be using Debian 12 and installing PostgreSQL (version 15) as the backend database.

Update the package index and install Bacula (server and client components):

apt update && apt -y install bacula-server bacula-client

PostgreSQL 15 will also be installed during this process.

During installation:

  • When prompted with: “Configure database for bacula-director-pgsql with dbconfig-common?”, press ENTER.


  • When asked to choose the database host, select localhost, since we are installing everything on one server.


  • When prompted with: “PostgreSQL application password for bacula-director-pgsql”, set a password for the Bacula database. 


Do not leave this field empty, or a random password will be generated.

  • Re-enter the password when asked to confirm.


The installation will then continue normally.

After the installation is complete, verify the status of Bacula components and PostgreSQL.

Check the status of the Bacula Director:

systemctl status bacula-director

Check the Storage Daemon:

systemctl status bacula-sd

Check the File Daemon:

systemctl status bacula-fd

Check PostgreSQL:

systemctl status postgresql

If all components display a status of active, then Bacula has been successfully installed and is running.
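
Rather than running four separate status commands, the checks can be collapsed into a small loop (a sketch assuming a systemd-based distribution such as Debian 12):

```shell
# Report the state of every Bacula component plus PostgreSQL;
# "active" means the daemon is up, anything else needs attention
for svc in bacula-director bacula-sd bacula-fd postgresql; do
  state=$(systemctl is-active "$svc" 2>/dev/null || true)
  echo "$svc: ${state:-unknown}"
done
```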

Bacula Configuration

Bacula is configured by editing the configuration files of the program components. By default, all Bacula configuration files are located in the /etc/bacula directory.


Next, we will configure each Bacula component individually.

Configuring Bacula Director

Using any text editor, open the bacula-dir.conf configuration file for editing:

nano /etc/bacula/bacula-dir.conf

Let’s start with the Director block, which sets the main configuration parameters for the Director component:

Director {
  Name = 4142939-bi08079-dir
  DIRport = 9101
  QueryFile = "/etc/bacula/scripts/query.sql"
  WorkingDirectory = "/var/lib/bacula"
  PidDirectory = "/run/bacula"
  Maximum Concurrent Jobs = 20
  Password = "ohzb29XNWSFISd6qN6fG2urERzxOl9w68"
  Messages = Daemon
  DirAddress = 127.0.0.1
}

Explanation of parameters:

  • Name: The name of the Director component. This is a unique identifier used to connect with other components like the File Daemon and Storage Daemon. By default, it includes the server's hostname and the -dir suffix. Example: 4142939-bi08079-dir.

  • DIRport: The port that Bacula Director listens to for incoming connections from the management console (bconsole). Default is 9101.

  • QueryFile: Path to the SQL script file used to run queries on the database. It contains predefined SQL queries for job management, verification, data restoration, etc. Default: /etc/bacula/scripts/query.sql.

  • WorkingDirectory: The working directory where Bacula Director temporarily saves files during job execution.

  • PidDirectory: The directory where the Director saves its PID file (process identifier). This is used to track if the process is running.

  • Maximum Concurrent Jobs: The maximum number of jobs that can run simultaneously. The default is 20.

  • Password: Password used for authenticating the management console (bconsole) with the Director. Must match the one specified in the console’s configuration.

  • Messages: Specifies the name of the message resource that determines how messages (errors, warnings, events) are handled. Common values: Daemon, Standard, Custom.

  • DirAddress: The IP address the Director listens on. This can be 127.0.0.1 for local connections or an external IP.

Catalog Configuration

If you installed everything on one server as described above, dbconfig-common has already configured the connection to the local PostgreSQL instance, and the database settings don’t need changes. But if you’re deploying the database separately (recommended for production), you must specify the address, username, and password in the Catalog block:

Catalog {
  Name = MyCatalog
  dbname = "bacula"; DB Address = "localhost"; dbuser = "bacula"; dbpassword = "StrongPassword4747563"
}


Explanation of parameters:

  • dbname: The name of the database used by Bacula (default is bacula). The database must already exist (when deployed separately).

  • DB Address: Host address where the DBMS is deployed. Use IP or a domain name. For local setup: localhost or 127.0.0.1.

  • dbuser: The user Bacula will use to connect to the database.

  • dbpassword: Password for the specified database user. Must be preconfigured.
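
To confirm that the catalog database actually exists on the local PostgreSQL instance, you can list the databases with psql (a sketch; assumes the default bacula database name and a local postgres superuser, and prints a notice if psql is unavailable):

```shell
# List local databases and look for the "bacula" catalog
if command -v psql >/dev/null 2>&1; then
  sudo -u postgres psql -lqt | cut -d '|' -f 1 | grep -w bacula \
    || echo "bacula database not found"
else
  echo "psql not found"
fi
```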

Restore Job Configuration

Locate the Job block named RestoreFiles, responsible for file restoration. Set the Where parameter to specify the directory where restored files will be saved. Earlier, we created /root/restored-files, which we’ll use here:

Job {
  Name = "RestoreFiles"
  Type = Restore
  Client=4244027-bi08079-fd
  Storage = File1
  # The FileSet and Pool directives are not used by Restore Jobs
  # but must not be removed
  FileSet="Full Set"
  Pool = File
  Messages = Standard
  Where = /root/restored-files
}


Backup Schedule Configuration

Next, we set up the Schedule block that defines when backups are created.

We create:

  • A full backup on the first Monday of each month at 00:01.
  • A differential backup on Sundays from the 2nd to the 5th week of the month at 23:05.
  • An incremental backup every day at 23:00:

Schedule {
  Name = "WeeklyCycle"
  Run = Full 1st mon at 00:01
  Run = Differential 2nd-5th sun at 23:05
  Run = Incremental mon-sun at 23:00
}


FileSet Configuration

Now, we specify which files and directories will be backed up. This is defined in the FileSet block. Earlier we created /root/test_backups with six files. We’ll specify that path:

FileSet {
  Name = "Full Set"
  Include {
    Options {
      signature = MD5
    }
    File = /root/test_backups
  }
}

Explanation of parameters:

  • Name: The name of the FileSet block, used for identification in configuration.
  • Options: Settings that apply to all files listed under Include.
  • signature = MD5: Specifies the checksum algorithm used to verify file integrity. MD5 generates a 128-bit hash to track file changes.
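
To see why a checksum catches changes, compare hashes before and after editing a file (a standalone sketch using the md5sum utility; Bacula computes this kind of signature internally and stores it in the Catalog):

```shell
# Any change to a file's content produces a completely different hash
printf 'hello' > /tmp/sig-demo.txt
md5sum /tmp/sig-demo.txt    # hash of "hello"
printf 'hello!' > /tmp/sig-demo.txt
md5sum /tmp/sig-demo.txt    # a different hash entirely
```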


Exclude Configuration (Optional)

The Exclude block is used to specify files or directories that should not be backed up. This block is placed inside the FileSet definition and acts on files included via Include.

Exclude {
    File = /var/lib/bacula
    ...
}

Pool Configuration

The Pool block defines a group of volumes (storage units) used for backup. Pools help manage how data is stored, rotated, and deleted.

Pool {
  Name = Default
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 7 days
  Maximum Volume Bytes = 10G
  Maximum Volumes = 2
}

Explanation of parameters:

  • Name: The pool's name, here it's Default.
  • Pool Type: Defines the pool's function:
    • Backup: Regular backups.
    • Archive: Long-term storage.
    • Cloning: Data duplication.
  • Recycle: Indicates whether volumes can be reused once they're no longer needed (yes or no).
  • AutoPrune: Enables automatic cleanup of expired volumes.
  • Volume Retention: How long (in days) to retain data on a volume. After 7 days, the volume becomes eligible for reuse.
  • Maximum Volume Bytes: The max size for a volume. If it exceeds 10 GB, a new volume is created (if allowed).
  • Maximum Volumes: Limits the number of volumes in the pool. Here, it's 2. Older volumes are recycled when the limit is hit (if Recycle = yes).
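
These two limits together cap the pool’s total size: Maximum Volumes × Maximum Volume Bytes. An illustrative calculation (not Bacula output):

```shell
# 2 volumes x 10 GB each: the pool can hold at most 20 GB of backup
# data before Bacula must recycle an old volume (Recycle = yes)
max_volumes=2
max_volume_gb=10
echo "pool capacity: $(( max_volumes * max_volume_gb )) GB"
```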

Validating Configuration and Restarting Bacula

After making all changes, check the bacula-dir.conf file for syntax errors:

/usr/sbin/bacula-dir -t -c /etc/bacula/bacula-dir.conf

If the command output is empty, there are no syntax errors. If there are errors, the output will specify the line number and error description.

Restart the Bacula Director service:

systemctl restart bacula-director

Configuring Bacula Storage

The next step is configuring Bacula Storage, where the backup files will be stored.

Using any text editor, open the configuration file bacula-sd.conf for editing:

nano /etc/bacula/bacula-sd.conf

We'll start with the Storage block, which defines the storage daemon responsible for physically saving backup files:

Storage {
  Name = 4149195-bi08079-sd
  SDPort = 9103
  WorkingDirectory = "/var/lib/bacula"
  Pid Directory = "/run/bacula"
  Plugin Directory = "/usr/lib/bacula"
  Maximum Concurrent Jobs = 20
  SDAddress = 127.0.0.1
}

Here’s what each parameter means:

  • Name: Name of the storage daemon instance, used to identify it uniquely.
  • SDPort: Port number the Storage Daemon listens on. The default is 9103.
  • WorkingDirectory: Working directory for temporary files. Default: /var/lib/bacula.
  • Pid Directory: Directory to store the PID file (process ID) for the storage daemon. Default: /run/bacula.
  • Plugin Directory: Path where Bacula’s plugins for the storage daemon are located. These plugins can provide extra features such as encryption or cloud integration.
  • Maximum Concurrent Jobs: Maximum number of jobs the storage daemon can handle simultaneously.
  • SDAddress: IP address or domain name the Storage Daemon listens on. Since in our case the Storage Daemon runs on the same server as the Director, we use the loopback address 127.0.0.1.

The next block to configure is Device, which defines the storage device where backups will be written.

The device can be physical (e.g., a tape drive) or logical (e.g., a directory on disk). For testing, one Device block will suffice. By default, bacula-sd.conf may contain more than one Device block, including a Virtual Autochanger — a mechanism that emulates a physical autochanger (used for managing tapes or other media). It lets you manage multiple virtual volumes (typically as disk files) just like real tapes in a tape library.

Locate the Autochanger block and remove the FileChgr1-Dev2 value from the Device parameter:

Autochanger {
  Name = FileChgr1
  Device = FileChgr1-Dev1
  Changer Command = ""
  Changer Device = /dev/null
}


Next, in the Device block below, set the Archive Device parameter to the full path of the directory where backup files will be stored (/srv/backup; we’ll create this directory in a moment):

Device {
  Name = FileChgr1-Dev1
  Media Type = File1
  Archive Device = /srv/backup
  LabelMedia = yes;                   
  Random Access = Yes;
  AutomaticMount = yes;               
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 5
}


Any blocks referencing FileChgr2 and FileChgr1-Dev2 should be deleted.


Explanation of the parameters:

  • Autochanger Block:
    • Name: Identifier for the autochanger (you can have multiple).
    • Device: Name of the device linked to this autochanger—must match the Device block name.
    • Changer Command: Script or command used to manage the changer. An empty value ("") means none is used—suitable for virtual changers or simple setups.
    • Changer Device: Refers to the device tied to the autochanger, typically for physical devices.
  • Device Block:
    • Name: Identifier for the device.
    • Media Type: Media type associated with the device. Must match the Pool block media type.
    • Archive Device: Full path to the device or directory for storing backups; /srv/backup in this case.
    • LabelMedia: Whether Bacula should auto-label new media.
    • Random Access: Whether random access is supported.
    • AutomaticMount: Whether to auto-mount the device when used.
    • RemovableMedia: Specifies if the media is removable.
    • AlwaysOpen: Whether the device should always stay open.
    • Maximum Concurrent Jobs: Maximum number of simultaneous jobs using this device.

Since we previously specified the directory for backup storage, create it:

mkdir -p /srv/backup

Set the ownership to the bacula user:

chown bacula:bacula /srv/backup

Next, check the config file for syntax errors:

/usr/sbin/bacula-sd -t -c /etc/bacula/bacula-sd.conf

If there are no syntax errors, the output will be empty. Otherwise, it will indicate the line number and description of any error.

Restart the storage daemon:

systemctl restart bacula-sd

Creating a Backup

Backups in Bacula are created using the bconsole command-line tool. Launch the utility:

bconsole

If it connects to the Director component successfully, it will display 1000 OK.

Before running a backup, you can check the status of all components by entering the command:

status

This will display a numbered list of Bacula components whose status you can check. To view the status of all of them at once, select the All option (6 in our case).


To initiate a backup, enter the command:

run


From the list, choose the BackupClient1 option (your client name might differ based on previous config), by typing 1.

After selecting the option, you’ll see detailed info about the backup operation.

You’ll then be prompted with three choices:

  • yes — start the backup process;
  • mod — modify parameters before starting;
  • no — cancel the backup.


If you enter mod, you’ll be able to edit up to 9 parameters.


To proceed with the backup, type yes.
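
The interactive dialog above can also be skipped entirely: bconsole accepts commands on standard input, which is handy for scripts and cron jobs. A sketch (the job name BackupClient1 comes from the default bacula-dir.conf; adjust it to match yours):

```shell
# Start the backup job non-interactively; the trailing "yes" skips the
# confirmation prompt. Prints a notice if bconsole is not installed.
if command -v bconsole >/dev/null 2>&1; then
  echo "run job=BackupClient1 yes" | bconsole
else
  echo "bconsole not found"
fi
```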

To view all backup and restore jobs and their statuses:

list jobs


In our case, a backup with Job ID 1 was created:

list jobid=1


If the status is T, the backup was successful.

Possible statuses in the "Terminated Jobs" column:

  • T (Success) — Job completed successfully.
  • E (Error) — Job ended with an error.
  • A (Canceled) — Job was canceled by the user.
  • F (Fatal) — Job ended due to a critical error.
  • R (Running) — Job is still in progress; once it finishes, it moves to one of the terminated statuses above.

You can also monitor backup activity and errors via the log file:

cat /var/log/bacula/bacula.log

Once the backup finishes, the volume file (in our case, Vol-0001) will appear in the storage directory we configured, /srv/backup. You can inspect it:

file /srv/backup/Vol-0001


Restoring Files from Backup

Earlier, we backed up the /root/test_backups directory, which contained six .txt files. Suppose these files were lost or deleted. Let’s restore them:

Launch the Bacula console:

bconsole

Start the restore process:

restore

You’ll see 12 available restore options.


We’ll use option 3. Type 3.


Earlier we used Job ID 1 for our backup. Enter 1. 


You’ll enter a file selection mode. Since our files were in the /root/test_backups directory, navigate there.


All previously saved files should be visible.

To restore the whole directory, go up one level:

cd ..

Then mark the whole test_backups folder:

mark test_backups/


Finish selection:

done

The system will display a final summary showing which data will be restored and the target directory (in our case: /root/restored-files).


To start the restore, enter yes.


Finally, verify that the files have been successfully restored.
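
A quick way to check is to search the restore target for the test files (a sketch; note that Bacula typically recreates the original path beneath the Where directory, so the files may sit under /root/restored-files/root/test_backups):

```shell
# List whatever was restored under the target directory;
# prints a notice if the directory does not exist yet
find /root/restored-files -name 'file*.txt' 2>/dev/null \
  || echo "restore directory not found"
```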


Conclusion

We’ve now reviewed the installation and configuration of Bacula, a client-server backup solution. Bacula isn’t limited to backing up regular files—thanks to its plugin support, it can also handle backups of virtual machines, OS images, and more.


Similar

Linux

Linux cp Command

Linux has an unlimited set of commands to perform assigned tasks. The Linux cp command is the primary tool and the basis for copying and managing files and directories in this operating system. This function is designed to duplicate files or directories in the same or different location. Armed with this functionality, users have advanced capabilities: from creating backup copies to moving files between directories. Linux cp command is simple to learn You can find all the necessary information covered in this tutorial. You will discover how the Linux cp command and cp directory work, as well as its grammatical structures, crucial hints, parameters, settings, and recommended practices. Readers will learn the tricks of the cp command, which will help them become more proficient. And if you’re looking for a reliable, high-performance, and budget-friendly solution for your workflows, Hostman has you covered with virtual servers with NVMe storage, Linux VPS Hosting options, including Debian VPS, Ubuntu VPS, and VPS CentOS. The core of the cp command in Linux The functionality of the command allows users to control the creation of copies. One feature offers overwriting existing files, another is responsible for recursively copying a directory with its entire entities, and the third protects the first data for repeating backups. This command demonstrates more features for specific purposes and user experience during the process. A key benefit of the cp command is its exceptional accuracy in duplicating files and directories. You can be absolutely sure that the duplicated files are identical to the original ones with all its interior. Therefore, the user can replicate the original file without any changes. The cp command in Linux inherently tells the user a destination directory for storing copies in a specific repository. The command's precision makes it indispensable for both novice and advanced users. 
Linux cp syntax This command consists of the following parameters: source file or directory and destination directory. The basic syntax of the Linux cp command is as follows: cp [...file/directory-sources] [destination] Here [file/directory-sources] specifies the files or directories sources to copy, while the [destination] specifies the location to copy the file to. There are the letter flags to specify the way of creation a replica of files and directories: -a leaves the first file attributes the same; -r recursively replicates directories and their interior entities; -v shows copied files in detail; -i requires consent to overwrite the file; -u rewrites new or missing files in the destination directory; -f forcibly copies without user consent; -s makes a symbolic link instead of a file replica; -ra recreates an exact duplicate of a file or directory without changing attributes; -rf updates or changes a file or directory with the original name in the same place; -pv (if installed) monitors and shows the time required to complete copying large folders. How to copy files with the cp command To make a file copy, apply the cp command in Linux as follows: cp ./DirectoryA_1/README.txt ./DirectoryA_2 where ./DirectoryA_1/README.txt is the source file, and ./DirectoryA_2 is the destination. The cp command was originally designed to interact with files. To replicate directories, you must use the -r flag to command that the directory with all its interior entities to be copied recursively. Therefore, you should write cp -r before the directory sources in Linux as follows: cp -r ./DirectoryA_1/Folder/ ./DirectoryA_2 The cp -r command in Linux will recursively duplicate the Folder directory in ./DirectoryA_1/ as well as all contents in the Folder directory. 
For instance, if you need to replicate the whole file contents in DirectoryA_1 with the .txt extension, try following command: cp ./DirectoryA_1/*.txt ./DirectoryA_2 where ./DirectoryA_1/*.txt matches files with the .txt extension in their names, and the cp command duplicates all those data to the destination. Best practices of the cp Linux command To duplicate one unit of information via the Linux cp command, write down the file name and destination directory. For instance, to replicate a file named example.txt to the 'Documents' directory, try the following command: cp example.txt Documents/ The action leads to creating a file duplicate in the 'Documents' directory with the original name. To copy multiple files at once, utilize the cp command in Linux, specifying the file names separated by a space. For instance, to duplicate three files named 'file1.txt', 'file2.txt', and 'file3.txt' to the 'Documents' directory, try the following command: cp file1.txt file2.txt file3.txt Documents/ To replicate a directory with all its interior entities, apply the -r that means cp recursive feature in Linux. For instance, to duplicate a directory named 'Pictures' to the 'Documents' directory, try the following command: cp -r Pictures Documents/ The action leads to creating a copy of the 'Pictures' directory with all its interior contents in the 'Documents' directory. To replicate a folder in Linux, you should utilize the -r flag. For instance, to duplicate a folder named 'Pictures' from the existing directory to a folder named 'Photos' in the home directory, try the following command: cp -r Pictures/ ~/Photos/ The destination folder will be created automatically if none exists. The files in the destination folder will be combined with the core of the source folder if one already exists. The cp -a feature in Linux leaves unchanged the initial file attributes while copying. Therefore, the duplicates will have the same parameters as their originals. 
For instance, to replicate a file named 'example.txt' to the 'Documents' directory while leaving unchanged its attributes, try the following command: cp -a example.txt Documents/ The Linux cp -v function showcases the progress of the duplication. At the same time the user can copy large files while monitoring the process. For instance, to replicate a file named 'largefile.zip' to the 'Downloads' directory while watching the progress, try the following command: cp -v largefile.zip Downloads/ The -i option requires the consent before overwriting an initial file. to protect against an accidental file rewriting. For instance, to duplicate a file named 'example.txt' to the 'Documents' directory, if a file with the identical name already exists, the cp command will require the consent before rewriting the original file. Initially, the Linux cp command copies a file or a directory to a default location. The system allows the user to specify any other location for the duplicate file or directory. For instance, to replicate a file named 'example.txt' from the 'Documents' directory to the 'Downloads' directory, try the following command: cp Documents/example.txt Downloads/ The cp -ra function in Linux is designed to carry out the copying process of directories with all their contents inside. The -r flag gives an order to repeat all the files and directories within an existing location, while the -a flag keeps the initial attributes preserved. Therefore, it is possible to make an exact duplicate of a directory without changing attributes. For instance, if you apply the command cp -ra /home/user1/documents /home/user2, it will replicate the 'documents' directory with all its entities inside in the 'user2' directory. The new folder will show the identical attributes as the initial item. The cp -rf feature in Linux is similar to the previous -ra option. 
The difference between these two functions is that the -f flag rewrites the given files or directories in the destination without requiring consent. Therefore, it is possible to update or replace an item with the identical name in the place of destination. For instance, if you apply the command cp -rf /home/user1/documents /home/user2, and there is already a 'documents' directory in the 'user2' directory, it will be overwritten with the contents of the 'documents' directory from the 'user1' directory. Be careful while utilizing the -rf function. Incorrect use of it leads to data loss. Check up twice the destination folder to avoid unwanted rewriting items. It is simpler to work with files and directories when you use Linux's cp -r capability with the -a and -f settings. Whereas the -rf particle modifies or replaces files and directories, the -ra particle precisely copies a directory and everything within it. You can learn how to handle stuff in this operating system by properly applying these differences. If you want to monitor and control the process of item duplication, which is not possible with other parameters of the cp command, use the -pv utility. To install the pv utility on Debian/Ubuntu you need to open the terminal and run the following command:  apt-get install pv After the installation is complete, verify it by running the following command in the terminal pv --version To install the pv utility on CentOS/Fedora, you need to connect the EPEL repository, which contains additional software packages unavailable in the default repositories. Run in the terminal: yum install epel-release Then run the following command in the terminal:  yum install pv  After the installation is complete, verify it by running the following command in the terminal:  pv --version To use this particle with the cp command, you should utilize | symbol. You can use the ~ symbol to indicate the root directory if the full path needs to be specified. 
For instance, to replicate a folder named 'Documents' from the root directory to a folder named 'Backup' in the home directory, try the following action: cp -r Documents/ ~/Backup/ | pv Example of executed Linux cp command Conclusion The cp command, although not an inherently difficult tool to learn, nevertheless provides basic knowledge of using the Linux operating system in terms of managing files and directories. In this tutorial, we tried to show the capabilities of the cp command in Linux from all sides, demonstrating best practices and useful tips of its various parameters. With new knowledge, you will be able to improve your skills in interacting with files and directories in Linux. The extreme accuracy of the copying process and additional options allow you to solve a wide range of problems. Multifunctionality helps users choose the file management mode and complete tasks efficiently. The command is a prime example of the many capabilities of this operating system, including the cp with progress feature in Linux. Altogether they unlock a potential of the system for novice and advanced users. Frequently Asked Questions (FAQ) How to copy files from one directory to another in Linux?  Use the cp command followed by the source path and then the destination path. Syntax: cp [source_file] [destination_directory] Example: cp /home/user/downloads/photo.jpg /home/user/pictures/ What are the most common cp command options? -r (Recursive): Essential for copying directories. It copies the folder and every file inside it. -i (Interactive): Prompts you for confirmation before overwriting an existing file. Highly recommended for beginners. -v (Verbose): Prints the name of each file as it is copied, so you can see the progress. -p (Preserve): Preserves the original file attributes like modification time, access time, and ownership modes. How do I copy a directory (folder)?  You must use the -r (recursive) flag. 
If you try to copy a folder without it, Linux will give you an error saying the source is a directory. Command: cp -r source_folder/ destination_folder/ How do I copy multiple files at once?  You can list multiple source files before the destination directory, or use wildcards. List: cp file1.txt file2.txt /backup/ Wildcard: cp *.jpg /home/user/images/ (Copies all JPG files). How do I prevent cp from overwriting existing files?  Use the -n (no clobber) flag. This tells Linux to silently skip any files that already exist in the destination folder, rather than replacing them. cp -n file.txt /backup/ What is the difference between cp -u and cp -n? -n never overwrites. -u (Update) only overwrites if the source file is newer than the destination file, or if the destination file is missing. This is useful for syncing folders.
22 January 2026 · 10 min to read
Linux

Using the ps aux Command in Linux

Effective system administration in Linux requires constant awareness of running processes. Whether diagnosing performance bottlenecks, identifying unauthorized tasks, or ensuring critical services remain operational, the ps aux command is an indispensable tool.  This guide provides a comprehensive exploration of ps aux, from foundational concepts to advanced filtering techniques, equipping you to extract actionable insights from process data. And if you’re looking for a reliable, high-performance, and budget-friendly solution for your workflows, Hostman has you covered with Linux VPS Hosting options, including Debian VPS, Ubuntu VPS, and VPS CentOS. Prerequisites To follow the tutorial: Deploy a Linux cloud server instance at Hostman SSH into the server instance Understanding Processes in Linux Before we explore the ps aux command, let's take a moment to understand what processes are in the context of a Linux system. What are Processes? A process represents an active program or service running on your Linux system. Each time you execute a command, launch an application, or initiate a background service, you create a process. Linux assigns a unique identifier, called a Process ID (PID), to each process. This PID allows the system to track and manage individual processes effectively. Why are Processes Grouped in Linux? Linux employs a hierarchical structure to organize processes. This structure resembles a family tree, where the initial process, init (or systemd), acts as the parent or ancestor. All other processes descend from this initial process, forming a parent-child relationship. This hierarchy facilitates efficient process management and resource allocation. The ps Command The ps (process status) command provides a static snapshot of active processes at the moment of execution. Unlike dynamic tools such as top or htop, which update in real-time, ps is ideal for scripting, logging, or analyzing processes at a specific point in time. 
The ps aux syntax merges three key options:

a: Displays processes from all users, not just the current user.
u: Formats output with user-oriented details like CPU and memory usage.
x: Includes processes without an attached terminal, such as daemons and background services.

This combination offers unparalleled visibility into system activity, making it a go-to tool for troubleshooting and analysis.

Decoding the ps aux Output

Executing ps aux generates a table with 11 columns, each providing critical insights into process behavior. Below is a detailed explanation of these columns.

USER

This column identifies the process owner. Entries range from standard users to system accounts like root, mysql, or www-data. Monitoring this field helps detect unauthorized processes or identify which users consume excessive resources.

PID

The Process ID (PID) is a unique numerical identifier assigned to each task. Administrators use PIDs to manage processes, for example terminating a misbehaving application with kill [PID] or adjusting its priority using renice.

%CPU and %MEM

These columns display the percentage of CPU and RAM resources consumed by the process. Values above 50% in either column often indicate performance bottlenecks. For instance, a database process consuming 80% CPU might signal inefficient queries or insufficient hardware capacity.

VSZ and RSS

VSZ (Virtual Memory Size) denotes the total virtual memory allocated to the process, including memory swapped to disk. RSS (Resident Set Size), on the other hand, represents the physical memory actively used by the process. A process with a high VSZ but low RSS might reserve memory without actively utilizing it, which is common in applications that preallocate resources.

TTY

This field shows the terminal associated with the process. A ? indicates no terminal linkage, which is typical for background services like cron or systemd-managed tasks.
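If you only need a subset of these columns, ps can print them selectively. A small sketch using the -o output-format option (column names as documented in the procps ps man page):

```shell
# Print only the columns discussed above, limited to your own processes.
# -u restricts output to the given user; -o selects the columns to show.
ps -o user,pid,%cpu,%mem,vsz,rss,tty,stat,start,time,comm -u "$(whoami)"
```

This is handy in scripts, where parsing a fixed, explicitly chosen set of columns is more robust than slicing the full ps aux table.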
STAT

The STAT column reveals process states through a primary character plus optional attributes.

Primary states:

R: Running or ready to execute.
S: Sleeping, waiting for an event or signal.
I: Idle kernel thread.
D: Uninterruptible sleep (usually tied to I/O operations).
Z: Zombie, a terminated process awaiting removal by its parent.

Key attributes:

s: Session leader.
N: Low priority.
<: High priority.

For example, a STAT value of Ss denotes a sleeping session leader, while I< indicates an idle kernel thread with high priority.

START and TIME

START indicates the time or date the process began, which is useful for identifying long-running tasks. TIME represents the cumulative CPU time consumed since launch. A process running for days with minimal TIME is likely idle.

COMMAND

This column displays the command or application that initiated the process. It helps identify the purpose of a task, for example /usr/bin/python3 for a Python script or /usr/sbin/nginx for an Nginx web server.

Advanced Process Filtering Techniques

While ps aux provides a wealth of data, its output can be overwhelming on busy systems. Below are methods to refine and analyze results effectively.

Isolating Specific Processes

To focus on a particular service, such as SSH, pipe the output to grep:

ps aux | grep sshd

Example output:

root 579 0.0 0.5 15436 5512 ? Ss 2024 9:35 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
root 2090997 0.0 0.8 17456 8788 ? Ss 11:26 0:00 sshd: root@pts/0
root 2092718 0.0 0.1 4024 1960 pts/0 S+ 12:19 0:00 grep --color=auto sshd

This filters lines containing sshd, revealing all SSH-related processes. To exclude the grep command itself from the results, use a regular expression:

ps aux | grep "[s]shd"

Example output:

root 579 0.0 0.5 15436 5512 ? Ss 2024 9:35 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
root 2090997 0.0 0.8 17456 8788 ?
Ss 11:26 0:00 sshd: root@pts/0

Sorting by Resource Consumption

Identify CPU-intensive processes by sorting the output in descending order:

ps aux --sort=-%cpu | head -n 10

Example output:

USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
mysql 1734280 0.4 36.4 1325172 357284 ? Ssl Jan30 87:39 /usr/sbin/mysqld
redis 1424968 0.3 0.6 136648 6240 ? Ssl Jan18 112:25 /usr/bin/redis-server 127.0.0.1:6379
root 1 0.0 0.6 165832 6824 ? Ss 2024 5:51 /lib/systemd/systemd --system --deserialize 45
root 2 0.0 0.0 0 0 ? S 2024 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? I< 2024 0:00 [rcu_gp]
root 4 0.0 0.0 0 0 ? I< 2024 0:00 [rcu_par_gp]
root 5 0.0 0.0 0 0 ? I< 2024 0:00 [slub_flushwq]
root 6 0.0 0.0 0 0 ? I< 2024 0:00 [netns]
root 8 0.0 0.0 0 0 ? I< 2024 0:00 [kworker/0:0H-events_highpri]

Similarly, you can sort by memory usage to detect potential leaks:

ps aux --sort=-%mem | head -n 10

Example output:

USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
mysql 1734280 0.4 36.4 1325172 357284 ? Ssl Jan30 87:39 /usr/sbin/mysqld
root 330 0.0 4.4 269016 43900 ? S<s 2024 22:43 /lib/systemd/systemd-journald
root 368 0.0 2.7 289316 27100 ? SLsl 2024 8:19 /sbin/multipathd -d -s
root 1548462 0.0 2.5 1914688 25488 ? Ssl Jan23 2:08 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root 1317247 0.0 1.8 1801036 17760 ? Ssl Jan14 22:24 /usr/bin/containerd
root 556 0.0 1.2 30104 11956 ? Ss 2024 0:00 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
root 635 0.0 1.1 107224 11092 ? Ssl 2024 0:00 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal
root 2090997 0.0 0.8 17456 8788 ? Ss 11:26 0:00 sshd: root@pts/0
root 2091033 0.0 0.8 9936 8480 pts/0 Ss 11:26 0:00 bash --rcfile /dev/fd/63

Real-Time Monitoring

Combine ps aux with the watch command to refresh output every 2 seconds:

watch -n 2 "ps aux --sort=-%cpu"

This provides a dynamic view of CPU usage trends.
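The STAT codes described earlier can also be summarized at a glance. This sketch tallies processes by the first character of their state field:

```shell
# Count processes per primary STAT state (R, S, I, D, Z, ...).
# NR > 1 skips the header row; $8 is the STAT column in ps aux output.
ps aux | awk 'NR > 1 {print substr($8, 1, 1)}' | sort | uniq -c | sort -rn
```

On a typical idle server most processes show S or I; a growing count of D or Z states is worth investigating.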
Zombie Process Detection

Zombie processes, though largely harmless, clutter the process list. Locate them with:

ps aux | grep 'Z'

Persistent zombies often indicate issues with parent processes failing to clean up child tasks.

Practical Use Cases

Now, let’s explore some common use cases of the ps aux command in Linux.

Diagnosing High CPU Usage

Execute this command to list processes by CPU consumption:

ps aux --sort=-%cpu

Identify the culprit, for example a malfunctioning script using 95% CPU. If it is unresponsive, terminate the process gracefully with:

kill [PID]

Or forcibly with:

kill -9 [PID]

Detecting Memory Leaks

Sort processes by memory usage:

ps aux --sort=-%mem

Investigate tasks with abnormally high %MEM values. Restart the offending service or escalate to developers for code optimization.

Auditing User Activity

List all processes owned by a specific user (e.g., jenkins):

ps aux | grep ^jenkins

This helps enforce resource quotas or investigate suspicious activity.

Best Practices for Process Management

Let’s now take a quick look at some best practices to keep in mind when managing Linux processes.

Graceful termination: Prefer kill [PID] over kill -9 to allow processes to clean up resources.

Log snapshots: Periodically save process lists for audits:

ps aux > /var/log/process_audit_$(date +%F).log

Contextual analysis: A high %CPU value might be normal for a video encoder but alarming for a text editor, so it is essential to consider the context when making an analysis.

Common Pitfalls to Avoid

Here are some pitfalls to look out for when using ps aux in Linux:

Misinterpreting VSZ: High virtual memory usage doesn’t always indicate a problem; it includes swapped-out data.
Overlooking zombies: While mostly benign, recurring zombies warrant investigating parent processes.
Terminating critical services: Always verify the COMMAND field before using kill to avoid disrupting essential services.
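The log-snapshot practice above can be wrapped in a small script suitable for a cron job. A sketch; the log directory is an assumption, so adjust LOG_DIR to your environment:

```shell
#!/bin/sh
# Save a dated, CPU-sorted snapshot of the process table for later audits.
# LOG_DIR defaults to /var/log; override it, e.g. LOG_DIR=/tmp ./snapshot.sh
LOG_DIR=${LOG_DIR:-/var/log}
ps aux --sort=-%cpu > "$LOG_DIR/process_audit_$(date +%F).log"
```

Note that --sort is a GNU procps option; on BSD-style systems the flag differs.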
Conclusion

The ps aux command is a cornerstone of Linux system administration, offering deep insights into process behavior and resource utilization. By mastering its output interpretation, filtering techniques, and real-world applications, you can diagnose performance issues, optimize resource allocation, and maintain system stability.

Did you know? Hostman prepared an Object Storage for your project to save all necessary info for your server. Start using now!

For further exploration, consult the ps manual (man ps) or integrate process monitoring into automated scripts for proactive system management.

Frequently Asked Questions (FAQ)

What is the ps aux command in Linux?

It is the most common command to view a snapshot of all running processes on the system. The flags break down as follows:

a: Shows processes for all users, not just the current user.
u: Displays the process's user/owner and provides detailed resource usage (CPU, RAM).
x: Shows processes not attached to a terminal (background daemons).

Why do we use the ps command in Linux?

We use it to monitor system health and troubleshoot performance. It helps you identify which applications are consuming the most CPU or memory, find the Process ID (PID) needed to stop a frozen program, and verify if background services are running correctly.

How do you use the ps aux command to find zombie processes?

Zombie processes (defunct) appear with a Z in the STAT column. You can filter for them specifically by running:

ps aux | grep 'Z'

Alternatively, to get a cleaner list excluding the grep command itself:

ps aux | awk '$8=="Z" {print $0}'

How do I sort the output by memory or CPU usage?

By default, ps aux does not sort by usage. You can use the --sort option:

Sort by memory: ps aux --sort=-%mem
Sort by CPU: ps aux --sort=-%cpu

(The minus sign sorts in descending order.)

What do the VSZ and RSS columns mean?

VSZ (Virtual Memory Size): The total virtual memory available to the process (including swap and shared libraries).
RSS (Resident Set Size): The actual physical RAM the process is currently using.

RSS is usually the more important number for checking memory usage.

How do I kill a process I found using ps aux?

First, locate the PID (Process ID) in the second column of the output. Then run:

sudo kill [PID]

If the process refuses to close, you can force kill it with sudo kill -9 [PID].
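The graceful-then-forceful termination advice above can be captured in a short helper. A sketch, assuming the PID is passed as the first argument:

```shell
#!/bin/sh
# Usage: ./kill-gently.sh PID
# Ask the process to exit (SIGTERM), wait, then force (SIGKILL) if needed.
pid=$1
kill "$pid" 2>/dev/null            # graceful: lets the process clean up
sleep 5
if kill -0 "$pid" 2>/dev/null; then   # kill -0 only checks existence
    kill -9 "$pid"                 # forceful: last resort
fi
```

The five-second grace period is an arbitrary choice; tune it to how long your services need to shut down cleanly.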
22 January 2026 · 10 min to read
Linux

How to Create a Text File in Linux Terminal

In Linux, you can access and edit text files using a text editor that is designed to work with plain text. These files are not specifically coded or formatted. There are several different ways to create a file in Linux, and the command line (terminal) is most likely the fastest. This is a crucial skill for any user, but especially for server administrators, who need to quickly create text files, scripts, or configuration files for their jobs. Let's proceed to the guide on four standard techniques for creating a text file in the terminal. And if you’re looking for a reliable, high-performance, and budget-friendly solution for your workflows, Hostman has you covered with Linux VPS Hosting options, including Debian VPS, Ubuntu VPS, and VPS CentOS.

Prerequisites for File Creation in Linux

Ensure these prerequisites are met before generating files in a Linux environment using the command-line interface:

Access to a functional Linux system: You must either have a Linux-based operating system installed on your computer or secure access to a Linux server via the SSH (Secure Shell) protocol.

Operational terminal interface: Confirm that your terminal application is accessible and fully operational. The terminal serves as your primary gateway to executing commands.

Adequate user permissions: Verify you can create files within the chosen directory. You may need to use sudo (for directories with access restrictions) to escalate privileges.

Fundamental command proficiency: You must get familiar with essential commands, such as touch for file creation, echo for printing text, cat for viewing file contents, and text editors like nano, vim, or vi for editing files directly.

Text editing utilities: Ensure your system includes text editing tools: nano for command-line simplicity, vim for advanced configurations, or graphical options like gedit for user-friendly navigation.
Directory management expertise: Develop familiarity with directory navigation commands like cd for changing the working directory and ls for listing directory contents. This knowledge streamlines your workflow and avoids potential errors.

Using the touch Command

Generally, we use the touch command to create empty files and change timestamps. It will create an empty file if one doesn't exist already. To create a text file in the current directory with the touch command:

Open your terminal emulator.

Type the command:

touch filename.txt

Replace "filename" with the name you picked for the file. If a file with the same name already exists, its access and modification timestamps will be updated without affecting the content of the file. If not, a blank file with the specified name will be generated.

Press Enter. If the command is successful, there will be no output.

Use the ls command to list the directory contents and verify file creation.

Using the echo Command Redirection

The echo command is widely used to display text on the terminal. But its capabilities go beyond that; it may also be used to write content to a file or create an empty file. For this, combine the echo command with the redirection operator > (or >> to append) and the desired filename. A text file can be created by redirecting the output of the echo command to a file. See how it works:

Open your terminal emulator.

Type the command:

echo "Your text content here" > filename.txt

Replace the text in double quotations (do not delete them) with yours to add it to the file. The command will overwrite an existing file, if there is one. Otherwise, it will create a new one.

Press Enter.
To verify that the file has been created and contains the desired content, use the cat command to display the content.

Using the cat Command Redirection

In Linux, the cat command is mostly used to concatenate and show file contents. It can, however, also be used to generate a text document by redirecting the standard output of cat to a file.

Open your terminal emulator.

Type the following command:

cat > filename.txt

Replace filename.txt with the name for your text file. This command instructs cat to receive input from the terminal and redirect it into filename.txt.

Press Enter. The terminal will be waiting for input.

Enter the text you want in the file, pressing Enter after each line.

Press Ctrl + D when you are done. This signals the end of input to cat and saves the content.

Run the cat command to check that the file has been created and contains the desired content.

Using printf for Advanced File Creation

The printf utility is a powerful alternative to echo, offering enhanced formatting options for structuring text. It allows users to create files with precisely formatted content.

Open the terminal.

Use printf to define the text layout, incorporating formatting elements like newlines (\n) or tabs (\t), and redirect the output to a file using the > operator. Example:

printf "First Line\nSecond Line\nIndented\tThird Line\n" > formatted_file.txt

Run the cat command to inspect the file's content and ensure the formatting matches expectations.

Append without overwriting: To add content to an existing file without overwriting its current data, replace > with the append operator >>:

printf "Additional content here.\n" >> formatted_file.txt

Using a Text Editor

You can also create new files in Linux text editors.
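Before turning to editors, note that the interactive cat > file approach shown earlier also works non-interactively inside scripts via a here-document, which feeds several lines at once:

```shell
#!/bin/sh
# Create a multi-line file without interactive input using a here-document.
# Quoting 'EOF' prevents variable expansion inside the block.
cat > notes.txt << 'EOF'
line one
line two
EOF
cat notes.txt   # verify the content
```

The file name notes.txt is just an example; the delimiter word (EOF here) can be any token not appearing in the content.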
There is always at least one integrated command-line text editor in your Linux distribution, but you can choose and install a different one according to your preferences, for example Vim, Nano, or Emacs. Each of them has its own features and advantages.

Vim

vim, which stands for "Vi IMproved," is a very flexible and adaptable text editor. It is well known for its modal editing, which provides distinct modes for various functions like text entry, navigation, and editing. It supports split windows, multiple buffers, syntax highlighting, and a large selection of plugins for extra features. To create a text file using vim, follow the steps below:

Open vim with the desired filename as an argument.

Press i to switch to Insert mode.

Start typing and editing filename.txt.

To save and exit, press Esc to return to command mode, type :wq (write and quit), and press Enter.

Nano

nano is ideal for short adjustments and straightforward text files because it is lightweight and requires little setup. It provides support for basic text manipulation functions, search and replace, and syntax highlighting. To create a text file using nano, follow the steps below:

Run nano with the desired filename as an argument. It will open a new buffer for editing the file filename.txt.

Start typing and editing filename.txt.

To save and exit, press Ctrl + O to write the file, confirm the filename, and then press Ctrl + X to exit nano.

Emacs

emacs is a powerful and flexible text editor that supports syntax highlighting, multiple buffers, split windows, and integration with external tools and programming languages. To create a text file using emacs, follow the steps below:

Open emacs with the desired filename as an argument.

Start typing and editing filename.txt.
To save and exit, press Ctrl + X followed by Ctrl + S to save the file, and then Ctrl + X followed by Ctrl + C to exit emacs.

Note: If a message states "vim: command not found", "nano: command not found", or "emacs: command not found", it typically means that the vim, nano, or emacs text editor is not installed on the system, or it's not included in the PATH environment variable, which is a list of directories where the operating system looks for executable files. To resolve this, install the text editor first using the appropriate command:

apt-get install vim
apt-get install nano
apt-get install emacs

Gedit

Gedit is an intuitive text editor that supports working with plain text and has syntax highlighting for programming languages. A straightforward graphical interface makes it usable for various tasks, from quick edits to complex document preparation.

Open the Gedit application: Launch Gedit either through the applications menu or by executing the following command in the terminal:

gedit example.txt

Gedit will create a new file if the specified one does not exist.

Input your text: Type or paste your desired content into the editor.

Save the file: Save your work with Ctrl + S or select File > Save. If creating a new file, specify a filename and a location.

Verify: Return to the terminal and confirm the file exists with the ls command or review its content with cat.

Linux File Creation Recommendations

Ensure you have sufficient permissions to create files in the target directory. If they are insufficient, consider working in a directory where you have full rights, or elevate privileges with sudo.

Check whether a file with an identical name is already present before using the > operator, as the command will overwrite existing content. To prevent data loss, opt for the append operator >>.

Familiarize yourself with printf, echo, and text editors like vim or nano.
These tools will help you reduce errors when working with files in Linux, as well as boost productivity. Use printf for creating files requiring structured content, such as configuration files or scripts with precise formatting needs.

Conclusion

Now you have acquainted yourself with the fundamental skill of creating a file in Linux using the terminal! The Linux command line offers several fast and efficient methods to create and manage text files. Apply different techniques to meet different requirements using the touch, echo, cat, and printf commands, or text editors like vim, nano, gedit, or emacs. Users can select the method that best meets their requirements, such as creating empty files, appending text, or significantly modifying material. In summary, any of these methods enables Linux users to easily and quickly handle text files straight from the command line.

Frequently Asked Questions (FAQ)

How do I create an empty text file in Linux?

The standard command is touch. Simply run:

touch filename.txt

This creates a blank file immediately.

How do I create a file and add content at the same time?

You can use the echo command with the redirection operator (>):

echo "Hello World" > filename.txt

This creates the file and puts "Hello World" inside it.

How do I create and open a file for editing?

Use a terminal text editor like nano or vi. When you run:

nano filename.txt

Linux will open a blank editor screen. Once you type your text and save (Ctrl + O in nano), the file is created on your disk.

What is the fastest way to create a file?

The redirection symbol alone is the quickest method for creating an empty file:

> filename.txt

This tells the shell to redirect "nothing" into a new file, creating it instantly.

How do I create a large file for testing?

Use the fallocate command. For example, to create a 1 GB file instantly:

fallocate -l 1G bigfile.img

How do I view the content of a text file?

Use the cat command to print the text to your terminal:

cat filename.txt

For longer files, use less filename.txt to scroll through pages.
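The techniques from this article can be exercised together in one short, self-contained script (the file names here are arbitrary examples):

```shell
#!/bin/sh
# Create files three ways in a throwaway directory, then verify the results.
tmp=$(mktemp -d)
touch "$tmp/empty.txt"                                # empty file via touch
echo "hello" > "$tmp/greeting.txt"                    # one line via echo
printf "col1\tcol2\nval1\tval2\n" > "$tmp/table.txt"  # formatted via printf
ls "$tmp"                    # list what was created
cat "$tmp/greeting.txt"      # inspect one of the files
rm -rf "$tmp"                # clean up
```

Running the whole sequence in a mktemp directory keeps the experiment from cluttering (or overwriting) anything in your working directory.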
21 January 2026 · 10 min to read
