
Using the ps aux Command in Linux

Emmanuel Oyibo
Technical writer
Linux
18.02.2025
Reading time: 9 min

Effective system administration in Linux requires constant awareness of running processes. Whether diagnosing performance bottlenecks, identifying unauthorized tasks, or ensuring critical services remain operational, the ps aux command is an indispensable tool. 

This guide provides a comprehensive exploration of ps aux, from foundational concepts to advanced filtering techniques, equipping you to extract actionable insights from process data.

Prerequisites

To follow this tutorial, you will need:

  • Access to a Linux system (a local machine or a remote server).
  • A terminal and basic familiarity with running commands.
  • Optionally, sudo privileges for managing processes owned by other users.

Understanding Processes in Linux

Before we explore the ps aux command, let's take a moment to understand what processes are in the context of a Linux system.

What are Processes?

A process represents an active program or service running on your Linux system. Each time you execute a command, launch an application, or initiate a background service, you create a process.

Linux assigns a unique identifier, called a Process ID (PID), to each process. This PID allows the system to track and manage individual processes effectively.
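As a quick illustration, every running shell has its own PID, exposed through the special $$ variable, and PID 1 always belongs to the initial process:

```shell
# Print the current shell's PID via the special $$ variable
echo "Current shell PID: $$"

# Show which command runs as PID 1 (the ancestor of all other processes)
ps -p 1 -o comm=
```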

Why are Processes Grouped in Linux?

Linux employs a hierarchical structure to organize processes. This structure resembles a family tree, where the initial process, init (or systemd), acts as the parent or ancestor.

All other processes descend from this initial process, forming a parent-child relationship. This hierarchy facilitates efficient process management and resource allocation.
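This parent-child structure can be inspected directly; for example, ps's forest view draws the tree (pstree, where installed, offers a similar picture):

```shell
# Render processes as an ASCII tree showing parent-child relationships
ps -e --forest -o pid,ppid,comm | head -n 15
```

Each indented entry is a child of the process above it, with PPID naming the parent's PID.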

The ps Command

The ps (process status) command provides a static snapshot of active processes at the moment of execution. Unlike dynamic tools such as top or htop, which update in real-time, ps is ideal for scripting, logging, or analyzing processes at a specific point in time.

The ps aux syntax merges three key options:

  • a: Displays processes from all users, not just the current user.
  • u: Formats output with user-oriented details like CPU and memory usage.
  • x: Includes processes without an attached terminal, such as daemons and background services.

This combination offers unparalleled visibility into system activity, making it a go-to tool for troubleshooting and analysis.
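A quick way to see the effect of the three options is to compare line counts on a typical multi-process system:

```shell
# Plain ps: only processes attached to the current terminal session
ps | wc -l

# ps aux: every user's processes, with or without a terminal
ps aux | wc -l
```

The second count is normally far larger, since it includes every daemon and background service.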

Decoding the ps aux Output

Executing ps aux generates a table with 11 columns, each providing critical insights into process behavior. Below is a detailed explanation of these columns:


USER

This column identifies the process owner. Entries range from standard users to system accounts like root, mysql, or www-data. Monitoring this field helps detect unauthorized processes or identify which users consume excessive resources.

PID

The Process ID (PID) is a unique numerical identifier assigned to each task. Administrators use PIDs to manage processes—for example, terminating a misbehaving application with kill [PID] or adjusting its priority using renice.
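A safe way to practice this workflow is with a throwaway background process; in this sketch, sleep stands in for a real misbehaving task:

```shell
# Start a harmless background process to experiment with
sleep 300 &
pid=$!
echo "Started background sleep with PID $pid"

# Lower its priority (a higher nice value means a lower priority)
renice 10 -p "$pid"

# Terminate it gracefully with SIGTERM
kill "$pid"
```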

%CPU and %MEM

These columns display the percentage of CPU and RAM resources consumed by the process. Values above 50% in either column often indicate performance bottlenecks. For instance, a database process consuming 80% CPU might signal inefficient queries or insufficient hardware capacity.

VSZ and RSS

VSZ (Virtual Memory Size) denotes the total virtual memory allocated to the process, including memory swapped to disk.

On the other hand, RSS (Resident Set Size) represents the physical memory actively used by the process.

A process with a high VSZ but low RSS might reserve memory without actively utilizing it, which is common in applications that preallocate resources.
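Both columns are reported in kilobytes; comparing them for a single process, such as the current shell, makes the distinction concrete:

```shell
# VSZ and RSS (in kilobytes) for the current shell
ps -o pid,vsz,rss,comm -p $$

# Total resident memory across all processes, converted to megabytes
ps aux | awk 'NR > 1 { sum += $6 } END { printf "Total RSS: %.1f MB\n", sum / 1024 }'
```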

TTY

This field shows the terminal associated with the process. A ? indicates no terminal linkage, which is typical for background services like cron or systemd-managed tasks.

STAT

The STAT column reveals process states through a primary character + optional attributes:

  1. Primary States:

    • R: Running or ready to execute.
    • S: Sleeping, waiting for an event or signal.
    • I: Idle kernel thread.
    • D: Uninterruptible sleep (usually tied to I/O operations).
    • Z: Zombie—a terminated process awaiting removal by its parent.
  2. Key Attributes:

    • s: Session leader.
    • N: Low priority.
    • <: High priority.

For example, a STAT value of Ss denotes a sleeping session leader, while I< indicates an idle kernel thread with high priority.

START and TIME

START indicates the time or date the process began, which is useful for identifying long-running tasks.

TIME represents the cumulative CPU time consumed since launch. A process running for days with minimal TIME is likely idle.
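ps can print elapsed wall-clock time (etime) next to consumed CPU time, which makes this comparison explicit:

```shell
# ELAPSED is wall-clock age; TIME is CPU time actually consumed.
# A large ELAPSED with a tiny TIME indicates a mostly idle process.
ps -eo pid,etime,time,comm | head -n 10
```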

COMMAND

This column displays the command or application that initiated the process. It helps identify the purpose of a task—for example, /usr/bin/python3 for a Python script or /usr/sbin/nginx for an Nginx web server.

Advanced Process Filtering Techniques

While ps aux provides a wealth of data, its output can be overwhelming on busy systems. Below are methods to refine and analyze results effectively.

Isolating Specific Processes

To focus on a particular service—such as SSH—pipe the output to grep:

ps aux | grep sshd

Example output:

root         579  0.0  0.5  15436  5512 ?        Ss    2024   9:35 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
root     2090997  0.0  0.8  17456  8788 ?        Ss   11:26   0:00 sshd: root@pts/0
root     2092718  0.0  0.1   4024  1960 pts/0    S+   12:19   0:00 grep --color=auto sshd

This filters lines containing sshd, revealing all SSH-related processes. To exclude the grep command itself from the results, use a bracket expression: the pattern [s]shd still matches sshd, but no longer matches the literal string [s]shd that appears in grep's own command line.

ps aux | grep "[s]shd" 

Example output:

root         579  0.0  0.5  15436  5512 ?        Ss    2024   9:35 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
root     2090997  0.0  0.8  17456  8788 ?        Ss   11:26   0:00 sshd: root@pts/0
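An alternative to the bracket trick is pgrep, which never matches its own process; the -a flag prints each match's full command line:

```shell
# List all sshd processes with their full command lines
pgrep -a sshd

# Restrict matches to processes owned by a specific user
pgrep -a -u root sshd
```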

Sorting by Resource Consumption

Identify CPU-intensive processes by sorting the output in descending order:

ps aux --sort=-%cpu | head -n 10

Example output:


USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
mysql    1734280  0.4 36.4 1325172 357284 ?      Ssl  Jan30  87:39 /usr/sbin/mysqld
redis    1424968  0.3  0.6 136648  6240 ?        Ssl  Jan18 112:25 /usr/bin/redis-server 127.0.0.1:6379
root           1  0.0  0.6 165832  6824 ?        Ss    2024   5:51 /lib/systemd/systemd --system --deserialize 45
root           2  0.0  0.0      0     0 ?        S     2024   0:00 [kthreadd]
root           3  0.0  0.0      0     0 ?        I<    2024   0:00 [rcu_gp]
root           4  0.0  0.0      0     0 ?        I<    2024   0:00 [rcu_par_gp]
root           5  0.0  0.0      0     0 ?        I<    2024   0:00 [slub_flushwq]
root           6  0.0  0.0      0     0 ?        I<    2024   0:00 [netns]
root           8  0.0  0.0      0     0 ?        I<    2024   0:00 [kworker/0:0H-events_highpri]

Similarly, you can sort by memory usage to detect potential leaks:

ps aux --sort=-%mem | head -n 10

Example output:

USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
mysql    1734280  0.4 36.4 1325172 357284 ?      Ssl  Jan30  87:39 /usr/sbin/mysqld
root         330  0.0  4.4 269016 43900 ?        S<s   2024  22:43 /lib/systemd/systemd-journald
root         368  0.0  2.7 289316 27100 ?        SLsl  2024   8:19 /sbin/multipathd -d -s
root     1548462  0.0  2.5 1914688 25488 ?       Ssl  Jan23   2:08 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root     1317247  0.0  1.8 1801036 17760 ?       Ssl  Jan14  22:24 /usr/bin/containerd
root         556  0.0  1.2  30104 11956 ?        Ss    2024   0:00 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
root         635  0.0  1.1 107224 11092 ?        Ssl   2024   0:00 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal
root     2090997  0.0  0.8  17456  8788 ?        Ss   11:26   0:00 sshd: root@pts/0
root     2091033  0.0  0.8   9936  8480 pts/0    Ss   11:26   0:00 bash --rcfile /dev/fd/63

Real-Time Monitoring

Combine ps aux with the watch command to refresh output every 2 seconds:

watch -n 2 "ps aux --sort=-%cpu"

This provides a dynamic view of CPU usage trends.

Zombie Process Detection

Zombie processes, though largely harmless, clutter the process list. A naive ps aux | grep 'Z' matches any line containing the letter Z anywhere, including in the COMMAND column, so filter on the STAT column instead:

ps aux | awk '$8 ~ /^Z/'

Persistent zombies often indicate issues with parent processes failing to clean up child tasks.
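To trace a zombie back to its negligent parent, print the PPID column alongside the state; this sketch lists any zombies together with their parents:

```shell
# List zombies (STAT starting with Z) with their parent PIDs;
# the parent is the process failing to reap its children
ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/ { print "zombie", $1, "parent", $2, $4 }'
```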

Practical Use Cases

Now, let’s explore some common use cases of the ps aux command in Linux:

Diagnosing High CPU Usage

Follow the below steps:

  1. Execute this command to list processes by CPU consumption:

ps aux --sort=-%cpu

  2. Identify the culprit—for example, a malfunctioning script using 95% CPU.
  3. If unresponsive, terminate the process gracefully with:

kill [PID]

Or forcibly with:

kill -9 [PID]
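The first two steps can be combined into a one-liner that prints only the top consumer's PID and command, so you can inspect it before issuing any kill:

```shell
# PID and command of the single most CPU-hungry process
# (row 1 is the header, so the top entry is row 2)
ps aux --sort=-%cpu | awk 'NR == 2 { print $2, $11 }'
```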

Detecting Memory Leaks

Simply do the following:

  1. Sort processes by memory usage:

ps aux --sort=-%mem

  2. Investigate tasks with abnormally high %MEM values.
  3. Restart the offending service or escalate to developers for code optimization.

Auditing User Activity

List all processes owned by a specific user (e.g., jenkins). The ^ anchor ties the match to the start of the line, where the USER column appears:

ps aux | grep ^jenkins

This helps enforce resource quotas or investigate suspicious activity.
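ps can also filter by user natively, which avoids false positives from grep matching the username elsewhere on a line (root is used here as an account that exists on every system):

```shell
# Processes owned by a single account, with user-oriented columns
ps -u root -o user,pid,%cpu,%mem,start,comm
```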

Best Practices for Process Management

Let’s now take a quick look at some best practices to keep in mind when managing Linux processes:

  • Graceful Termination: Prefer kill [PID] over kill -9 to allow processes to clean up resources.

  • Log Snapshots: Periodically save process lists for audits:

ps aux > /var/log/process_audit_$(date +%F).log

  • Contextual Analysis: A high %CPU value might be normal for a video encoder but alarming for a text editor, so always weigh resource figures against the expected workload.

Common Pitfalls to Avoid

Here are some pitfalls to look out for when using ps aux in Linux:

  • Misinterpreting VSZ: High virtual memory usage doesn’t always indicate a problem—it includes swapped-out data.
  • Overlooking Zombies: While mostly benign, recurring zombies warrant investigating parent processes.
  • Terminating Critical Services: Always verify the COMMAND field before using kill to avoid disrupting essential services.

Conclusion

The ps aux command is a cornerstone of Linux system administration, offering deep insight into process behavior and resource utilization. By mastering its output, filtering techniques, and real-world applications, you can diagnose performance issues, optimize resource allocation, and maintain system stability.

For further exploration, consult the ps manual (man ps) or integrate process monitoring into automated scripts for proactive system management.

Linux
18.02.2025
Reading time: 9 min

Similar

Linux

Monitoring Linux Server Activity with Falco

Falco is a security tool that allows you to record security events on Linux servers based on rules. It was previously developed by Sysdig and later handed over to Cloud Native Computing Foundation. This guide shows how to install Falco on Linux servers, write rules to detect malicious events executed by processes or users and eventually compares it with Linux Auditd. Prerequisites To follow this guide, you'll need access to a Debian Linux or CentOS Stream 9 server. Alternatively, you could spin up a virtual server using Hostman. The Hostman website has instructions on how to launch a virtual server. Brief Overview of Linux System Calls  In Linux, the user-space is reserved for user-facing services like web browsers, text editors, etc, whilst the kernel space is reserved for the privileged services. Services provided within the kernel space include memory management, process scheduling, file system management, etc. In the context of system calls, when a user executes the cd command, the “chdir system call’’ is invoked via the chdir() wrapper function within the glibc library to change the current working directory and returns the result to the user-space program. Usually, the name of the wrapper function is the same as the invoked system call. The GNU C Library, also known as glibc, contains system functions, acting as a wrapper around the actual function provided by the Linux kernel, allowing applications to access system functionality or make system calls through a standardized C interface. For detailed information on how Linux systems calls work and roles/tasks of glibc wrapper functions, check Linux man page. What is Falco? Falco provides runtime security across hosts, containers, Kubernetes, and other cloud native environments. It relies on both default and custom rules to detect events as malicious on Linux hosts, Kubernetes applications, etc. and associates event data with contextual metadata to deliver meaningful real-time alerts to the SIEM team. 
Falco relies on different sources to gather events data. It natively supports Linux system call source by default. However, it’s possible to extend Falco capabilities to support other event sources like Kubernetes audit logs, AWS Cloudtrail, KeyCloak Admin/User events via the plugin system. The plugin system consists of shared libraries that allows Falco to include or add new event sources, include new fields that extract information from events, etc. As at the time of writing this guide, some of the following plugins are: K8saudit: Monitors and detects Kubernetes cluster events. Cloudtrail: Tracks events from Cloudtrail logs. Kafka: Records events from Kafka topics. Keycloak: Detects Keycloak user/admin events. Check their website for a complete list of currently supported plugins. In order to consume events at the kernel source, the following drivers are currently supported: eBPF probe modern eBPF probe kernel module Using Modern eBPF Probe eBPF means “extended Berkeley Packet Filter”. It enables us to run isolated programs within the Linux kernel space in order to extend the capabilities of the kernel without loading additional kernel modules. They are programs that execute when specific hook points are triggered or an event takes place. eBPF probe is embedded into the userspace application and works out of the box, regardless of the kernel release. To use the modern eBPF probe, set the engine.kind parameter inside the /etc/falco/falco.yaml file to modern_ebpf to activate this feature. There is no need to install other dependencies such as clang or llvm if you want to use modern eBPF. Installing Falco This section shows how to install Falco on Linux Debian and CentOS servers. Running Falco on Debian Step 1: Import Falco GPG key. curl -fsSL https://falco.org/repo/falcosecurity-packages.asc | \sudo gpg --dearmor -o /usr/share/keyrings/falco-archive-keyring.gpg Step 2: Setup the apt repository. 
sudo bash -c 'cat << EOF > /etc/apt/sources.list.d/falcosecurity.listdeb [signed-by=/usr/share/keyrings/falco-archive-keyring.gpg] https://download.falco.org/packages/deb stable mainEOF' Step 3: Install the apt-transport-https package. sudo apt install apt-transport-https Step 4: Update the apt repository. sudo apt update -y Step 5: Install Falco. sudo apt install -y falco Running Falco on CentOS Stream 9 Step 1: Import the Falco GPG key. rpm --import https://falco.org/repo/falcosecurity-packages.asc Step 2: Set up the yum repository. curl -s -o /etc/yum.repos.d/falcosecurity.repo https://falco.org/repo/falcosecurity-rpm.repo Step 3: Update the yum repository. yum update -y Step 4: Install Falco. yum install -y falco Step 5: Execute the command to test whether Falco is successfully installed. falco Managing Falco with systemd In production, it's recommended to manage Falco using Systemd because it provides a centralized way to control and automate service restart instead of manually managing Falco. Systemd is the init process that starts required system services at boot time. Use the following instructions to manually configure Systemd with Falco. Step 1: Execute the following command to search for Falco services. systemctl list-units "falco*" Step 2: Use these commands to enable, start and check the status of falco-modern-bpf.service. The systemctl enable command ensures Falco starts at boot time systemctl enable falco-modern-bpf.service This command starts the service: systemctl start falco-modern-bpf.service And this is how you check if the service is running: systemctl status falco-modern-bpf.service Step 3: Execute the command systemctl list-units | grep falco to search for active related services The screenshot shows that both services are active. The latter is responsible for performing rules updates. If you don't want falcoctl to perform automatic rules update, use the command below to mask it. 
systemctl mask falcoctl-artifact-follow.service It prevents falcoctl service from being enabled automatically once an aliased falco service is enabled. Check this page for further information on using Systemd to manage Falco. Configuring Falco Settings This section shows how to configure some settings in the Falco configuration file located at /etc/falco/falco.yaml. watch_config_files: This key can be assigned true or false values. The true value ensures that anytime changes are made to the rules or configuration file, it automatically reloads itself to apply the updated configuration settings. rules_files: This key determines which rule files or directories are loaded first based on the values assigned to it. The example below ensures that rules in the /etc/falco/rules.d folder are checked first. rules_files:  - /etc/falco/rules.d  - /etc/falco/falco_rules.yaml - /etc/falco/falco_rules.local.yaml output_channel: Falco supports the following output channels. Syslog standard output http endpoint or webhook file output grpc service You can enable one of these channels to determine where alerts and log messages are sent to. Writing Falco Rules Basically, a rule is made up of an event and specific condition. Example of an event is a filesystem activity such as when a user accesses a file in the etc directory. Another example of an event is when someone or a service decides to connect or transfer a file to a remote host. Conditions are pragmatic expressions that define the exact details Falco should look for. It involves inspecting process arguments, network addresses, etc. Rules are written in YAML, and have a variety of required and optional keys. They are loaded at startup. Following is the structure of a rule in Falco. rule: This key defines the name of the rule, e.g. rule: Unauthorised File Access. desc: The key desc means description. It describes the purpose of the rule, e.g. Detecting unauthorized access to files in the /etc folder by regular users. 
condition: This key informs Falco to trigger an alert when a specific event takes place, e.g. condition: open_read and fd.name startswith /etc.  output: The message that will be shown in the notification. priority: This key defines the priority level of the rule. Priority levels include WARNING, ERROR, DEBUG, NOTICE, EMERGENCY, INFORMATIONAL, CRITICAL, and ALERT. tags: This key is used to categorize rules, e.g. ["Sensitive_Files", and "Unauthorized_Users"]. For detailed information on Falco rules, check Falco’s website. The following are rules to detect specific filesystem access and outbound network connection. Creating a Rule for Filesystem Activity Use the following steps to create a custom Falco rule. Navigate to the path /etc/falco/rules.d using the cd command. cd /etc/falco/rules.d Create a custom rule file using the following command. touch custom_rules.yaml Open and edit the custom_rules.yaml file using vim or any other text editor. vim custom_rules.yaml Then copy and paste the following into the file custom_rules.yaml. - rule: reading sensitive file desc: Detects when a user reads /etc/ folder condition: open_read and fd.name startswith /etc/ output: “suspicious file read detected file=%fd.name accessed by user=%user.name” priority: WARNING tags: [network, filesystem] Start Falco in the background. falco & To stop the background process falco from running forever, use the following command to search for process ID. pgrep falco Then use the kill command to terminate it by specifying the pid. kill -9 process-pid Now test the rule we just created to check whether Falco would alert us when a user opens or accesses the file /etc/passwd. cat /etc/passwd Creating a Rule for Detecting Outbound Connection Use the following to create a rule to monitor network connection. Navigate to the folder /etc/falco/rules.d using the command: cd /etc/falco/rules.d Use a text editor like vim to create a new file for custom rules. 
vim custom.yaml Copy and paste the following rule into the file custom.yaml to flag outbound connections to other hosts. - rule: "Suspicious outbound connection" desc: detect outbound connection to other hosts condition: outbound and evt.type = connect and fd.sip != 8.8.8.8 output: "Suspicious outbound connection detected destination=%fd.sip" priority: WARNING tags: [network, exfiltration] Make sure you execute the falco command before testing the preceding rule via the command: ping -c 1 blacklisted_IPaddress We'll receive a warning: Comparison Between Falco and Linux Audit Framework. Auditd is a part of the Linux auditing framework. It is responsible for writing audit records to the disk. Both tools are useful in detecting events registered as malicious via rules. In addition, both tools rely on system calls as their native event source. However, there are differences between these tools: Auditd does not have multiple event sources as compared to Falco. Auditd does not allow users to customize event output but Falco allows. Conclusion  Falco is useful in detecting events defined as malicious via rules. These define whether events are malicious or not. However, it's worth noting that the folder /etc/falco/ should be restricted to privileged users and also be monitored by Falco otherwise anyone can tweak rules in the file to avoid detection.
19 March 2025 · 9 min to read
Mail

How to Send Email in Linux from the Command Line with Sendmail and Mailx

For those managing servers or working on automation tasks, knowing how to send emails from the Linux terminal is essential. It offers complete control over email functions and eliminates the need for complex mail programs. This is useful in scenarios where speed and simplicity matter most. Common tools such as sendmail and mailx are frequently used for sending messages, checking SMTP settings, automating alerts, and integrating with scripts. They are straightforward yet effective, making them perfect for tasks like informing teams about server updates, automating reports, or testing email setups. This guide is designed for users looking to manage their email directly from the terminal. It covers the installation of essential tools and delves into more advanced tasks, such as sending attachments and configuring email tools. Why Choose Command-Line Email Tools? Two commonly used tools, sendmail and mailx, are reliable options for mail transmission in Linux. They come with a certain set of benefits: Efficiency: Traditional email software can be slow and resource-intensive. These tools enable quick and lightweight email sending directly from the terminal. Automation: They integrate smoothly with shell scripts, cron processes, and system monitoring tools. Automating mail alerts and notifications for repeated actions is possible via these Linux mail tools. Troubleshooting SMTP Problems: Debugging SMTP setups becomes more manageable. These commands provide visibility into message delivery, ensuring mail logs and errors are easier to inspect. Flexibility: Whether it’s sending alerts or generating automated reports, command-line tools like sendmail and mailx offer versatility across a range of tasks. Prerequisites  Before utilizing these Linux mail command line tools, ensure you have terminal access. Root privileges may be required in some cases, especially for configuring each mail command on Linux discussed in this guide. 
Setting Up a SMTP Server SMTP servers are essential for sending emails. These servers fall into two categories: External and Local SMTP servers. External SMTP Servers It refers to a mail server hosted by a third-party provider. These servers are utilized to deliver emails over the internet to recipients who are not part of your local network. They are built to manage global mail delivery while ensuring proper authentication, encryption, and spam prevention. Examples  Gmail  Address: smtp.gmail.com Port: 587 (with TLS) or 465 (with SSL) Outlook  Address: smtp.office365.com Port: 587 These servers need appropriate authentication methods (such as a username, password, or app-specific passwords) and encryption (like TLS or SSL) to ensure secure communication. Note: We’ve already provided a guide for setting up external SMTP servers. The command to send emails through Postfix remains the same as mentioned in this article. Simply configure the SMTP settings using our guide, and replace the email address with Gmail or any other preferred provider for proper email delivery. Local SMTP Servers This server functions solely within a private network or system. It is perfect for: Sending emails between users on the same network or domain (e.g., tom@office.local to jerry@office.local). Local testing and development tasks. Internal communication within an organization. Does not need internet access to operate, as they manage mail delivery internally. 
Setting Up a Local SMTP Server Here are the procedures to set up a local SMTP server using Postfix: Install Postfix via: sudo apt install postfix Modify the Postfix configuration file: sudo nano /etc/postfix/main.cf Update or confirm these key settings: myhostname = mail.office.local mydomain = office.local myorigin = $mydomain inet_interfaces = loopback-only local_recipient_maps = proxy:unix:passwd.byname mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain Save and exit the file after doing changes, then restart Postfix: sudo systemctl restart postfix To create email addresses like linux@office.local and hostman@office.local, set up user accounts on the server: sudo adduser linuxsudo adduser hostman Overview of sendmail sendmail is a prominent mail transfer agent (MTA) in Linux. It works flawlessly with SMTP servers for mail delivery and allows emails to be sent and routed from local systems or scripts.  Installing sendmail  Before sending emails, you must install the Linux sendmail tool. Execute the commands below based on your distribution: For Debian/Ubuntu sudo apt install sendmail For CentOS/Red Hat sudo yum install sendmail Starting and Enabling Service Once installed, make sure sendmail is running and configured to start at boot: sudo systemctl start sendmailsudo systemctl enable sendmail Testing the Configuration Check the sendmail is set up correctly by executing: echo "Testing sendmail setup" | sendmail -v your-email@example.com Verify email by executing the mail command: mail Note: Install mailutils package in case the mail command is not working. sudo apt install mailutils Or utilize the cat command: cat /var/mail/user Editing the Configuration File To customize settings for sendmail, modify the configuration file located at /etc/mail/sendmail.mc: sudo nano /etc/mail/sendmail.mc Make the required changes to fit your server. 
For example, if you want to define the domain name for your server, you can add or modify the following line: define(`confDOMAIN_NAME', `your_domain.com')dnl Here, replace your_domain with your actual domain name. Then rebuild the configuration file: sudo m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf If a "permission denied" error occurs, use: sudo sh -c "m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf" Finally, restart the service: sudo systemctl restart sendmail Sending Email Via sendmail With sendmail, you can easily deliver emails, customize subjects, and even add attachments using external tools. Let’s go over the process to send emails: Basic Example To send an email with sendmail, use the below-given instructions: First, create a file to hold the message: nano email.txt Add any content to the file, for example: Subject: Test Email from HostmanThis is a test email sent using sendmail on Linux. Deliver the file's contents: sendmail recipient@example.com < email.txt The contents of email.txt will be sent to the designated recipient. For verification, apply: mail Adding Attachments  sendmail by itself doesn’t support attachments. You’ll need to utilize uuencode or similar tools to include files. First, install sharutils for uuencode: sudo apt install sharutils Here’s how to attach a file: ( echo "Subject: Email with attachment"; uuencode file.txt file.txt ) | sendmail recipient@example.com In the above sendmail example we send an email with file.txt attached. To verify, apply the Linux command mail: mail Overview of mailx  The mailx Linux command is a simple and effective terminal application for managing emails. It is included in the mailutils package found in most Linux distributions. 
Installing mailx  Install mailutils package on your system to utilize the mailx command on Linux: For Debian/Ubuntu systems sudo apt install mailutils For Red Hat-based systems sudo yum install mailx Sending Email with mailx This is a simple example demonstrating the use of mailx. Include a subject line and message in your email: echo "This is the body of the email" | mailx -s "Test Email from Mailx" recipient@example.com Utilize the Linux mail command for verification: Example with Attachments Use the -A flag with the mailx command to send emails from Linux with attachments: echo "Please find the attached document" | mailx -s "Email with Attachment" -A email.txt recipient@example.com This sends email.txt as an attachment to the recipient. Conclusion Sending email from the Linux command line is an effective method for automating communication tasks, troubleshooting servers, or testing configurations. Using tools such as sendmail and mailx, you can manage everything from simple messages to more complex setups with attachments. This guide has provided detailed instructions to help you begin without difficulty. Utilize these Linux email commands to improve your workflow. If you face any issues, feel free to refer back to this tutorial.
18 March 2025 · 7 min to read
Linux

How to Compress Files in Linux Using tar Command

The tar command basically functions to put together all files and directories into one archive without altering their structure. The approach simplifies organization, creation of the backup, and the transfer of files. Once packaged, you can compress these archives by using multiple ways such as using gzip, bzip2, or xz, which help optimize storage and enhance transfer speeds. Modern Linux distributions come with updated versions of tar, enabling seamless integration with compression tools like gzip for more efficient data handling. This makes tar a valuable asset for users managing large datasets, as it supports both file consolidation and compression in a single command. Thanks to its flexibility, tar is widely used across different Linux environments. It not only facilitates backup creation but also streamlines software distribution and the management of the important data. With an array of choices available, all users can customize archives according to their requirements, whether by excluding particular directories or files, preserving permissions, or securing sensitive data. For anyone dealing with extensive information or complex storage requirements, learning everything about the tar command is crucial. This all makes it an important utility to learn for Linux users. Understand the Syntax of tar  The tar command is fundamentally divided into four distinct parts: tar (keyword) -flags (options), used to execute a specific action name of the archive path to the desired file or directory It would be written as follows: tar -flags (archive_name) (path) Archiving Files and Directories tar used with the flag -cvf has the power to essentially archive the files and also the directories. For a File: tar -cvf collectionX.tar snake.txt For a Directory: tar -cvf DRcollection.tar newDir/ This would essentially archive the file snake.txt to collectionX.tar and the directory newDir to DRcollection.tar respectively.  
To archive multiple files or directories at once, list them all on the command line.

For multiple files:

tar -cvf collectionX.tar snake.txt panther.txt Tiger.txt

For multiple directories:

tar -cvf DRcollection.tar newDir1/ newDir2/ newDir3/

Compressing Files and Directories

Use tar with the -czvf flags to compress files and directories with gzip.

For a file:

tar -czvf collectionX.tar.gz snake.txt

For a directory:

tar -czvf DRcollection.tar.gz newDir/

Here -c creates the archive, -z applies gzip compression, -v (verbose) prints each file as it is processed, and -f sets the archive name. Add .gz after .tar in the archive name when using gzip compression.

For multiple files:

tar -czvf collectionX.tar.gz snake.txt panther.txt Tiger.txt

For multiple directories:

tar -czvf DRcollection.tar.gz newDir1/ newDir2/ newDir3/

To create .tar.bz2 archives, use -cjf, where -j applies bzip2 compression.

For a file (with bz2):

tar -cjf collectionX.tar.bz2 snake.txt

For a directory (with bz2):

tar -cjf DRcollection.tar.bz2 newDir/

To create .tar.xz archives, use -cJf, where -J applies xz compression.

For a file (with xz):

tar -cJf collectionX.tar.xz file1.txt

For a directory (with xz):

tar -cJf DRcollection.tar.xz newDir/

Extracting Compressed .tar Files

Suppose arch1.tar.gz, arch1.tar.bz2, and arch1.tar.xz are three compressed archives. The -x flag performs extraction.

Extract .tar.gz:

tar -xvzf arch1.tar.gz

Extract .tar.bz2:

tar -xvjf arch1.tar.bz2

Extract .tar.xz:

tar -xvJf arch1.tar.xz

Extracting Specific Files Using Wildcards

To extract only a specific type of file from an archive:

tar -xvf arch1.tar --wildcards '*.sh'

This extracts only the files with the .sh extension.
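A quick round trip shows the wildcard filter in action; the archive and file names here are illustrative. Note that --wildcards is a GNU tar option (BSD tar on macOS applies pattern matching by default):

```shell
#!/bin/sh
# Sketch: archive mixed files, then extract only the *.sh ones.
set -e
workdir=$(mktemp -d)
cd "$workdir"

echo 'echo hi' > run.sh
echo 'data'    > data.txt
tar -cf arch1.tar run.sh data.txt

# Extract into a separate directory so the result is easy to inspect;
# --wildcards must appear before the pattern it applies to.
mkdir out
tar -xf arch1.tar -C out/ --wildcards '*.sh'

ls out/   # contains run.sh but not data.txt
```

Extracting into a dedicated directory, rather than the current one, makes it easy to confirm that only the matching files came out.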
The --wildcards option enables pattern matching, so the pattern *.sh matches only files ending in .sh.

Extracting to a Specific Directory

To extract an entire archive into a specific directory:

tar -xvf arch1.tar -C ./destinationDir/pathDir/

The -C option switches to the specified directory before extraction, so the archive's contents land there.

Managing .tar Archives

Check Contents Without Extracting

To see what an archive contains without decompressing it, use commands like these:

tar -tzf arch1.tar.gz
tar -tjf arch1.tar.bz2
tar -tJf arch1.tar.xz

The -t flag lists what is inside a compressed archive without performing any extraction.

Appending Files to an Existing Archive

To append a new file to an archive:

tar -rvf arch1.tar new.sh

This adds new.sh to arch1.tar. That is how you append a file to an existing archive.

Removing a Specific File from an Archive

To delete a file from an archive without extracting it, use --delete:

tar --delete -f arch1.tar new.sh

This removes new.sh from arch1.tar without extraction. Note that --delete works only on uncompressed archives, not on compressed files.

Comparing Archive Contents with the Current Directory

To compare the contents of an archive against the current working directory, use:

tar --diff -f arch1.tar

The --diff option reports any differences between the files in arch1.tar and those in the present working directory.

Troubleshooting Common .tar Errors

"tar: Removing leading '/' from member names"

This warning appears when an archive is created from absolute paths:

tar -cvf arch1.tar /home/user/file.txt

Solution: use -P (--absolute-names) to preserve absolute paths:

tar -cvPf arch1.tar /home/user/file.txt

"tar: Error opening archive: Unrecognized archive format"

This error occurs when the archive is corrupt or the wrong decompression command is used.
Solution: verify the file type:

file arch1.tar.gz

Then use the matching decompression command:

tar -xvzf arch1.tar.gz   # for .tar.gz
tar -xvjf arch1.tar.bz2  # for .tar.bz2
tar -xvJf arch1.tar.xz   # for .tar.xz

If corruption is suspected, test the archive's integrity:

gzip -t arch1.tar.gz
bzip2 -tv arch1.tar.bz2

Conclusion

The tar utility is an essential tool for archiving, compression, and extraction, and a core component of Linux storage management. Flag combinations such as -czvf and -xvzf determine how files are stored and retrieved, giving users full control over data compression. Because tar supports multiple compression tools, including gzip, bzip2, and xz, users can balance speed against compression ratio to suit their specific needs.

For IT professionals, developers, and everyday Linux users, mastering tar is invaluable. Whether the goal is managing backups, distributing data effectively, or optimizing storage, tar remains one of the most capable archiving tools available. Choosing the right flags and commands can significantly streamline workflows, automate tasks, and make large datasets manageable.
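As a recap, a minimal backup-and-restore round trip, using a hypothetical project directory, might look like this:

```shell
#!/bin/sh
# Sketch: compress a directory, verify the archive, then restore it elsewhere.
set -e
workdir=$(mktemp -d)
cd "$workdir"

mkdir -p project
echo "config" > project/app.conf

# Create a gzip-compressed backup
tar -czf backup.tar.gz project/

# Verify integrity and list contents before trusting the backup
gzip -t backup.tar.gz
tar -tzf backup.tar.gz

# Restore into a separate location
mkdir restore
tar -xzf backup.tar.gz -C restore/

diff project/app.conf restore/project/app.conf
```

Verifying the archive with gzip -t and listing it with -t before restoring is a cheap habit that catches corrupt or incomplete backups early.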