Installing and Configuring cloud-init in Linux
Hostman Team
Technical writer
Linux
26.09.2024
Reading time: 10 min

cloud-init is a free and open-source package designed for configuring Linux-based virtual machines during their startup.

In a traditional (home) environment, we would install the system from a CD or USB drive and configure it manually through a standard installer. In a cloud environment, however, we may need to create, delete, and restart instances frequently. In such cases, manual configuration becomes impractical.

cloud-init automates the configuration process and standardizes the setup of virtual machines.

What Is cloud-init

The main task of cloud-init is to process input metadata (such as user data) and configure the virtual machine before it starts. This allows us to pre-configure servers, install software, prepare working directories, and create users with specific permissions.

Cloud-init and Hostman Cloud Servers

Hostman cloud servers support working with cloud-init scripts through the control panel. Hostman’s documentation includes a brief guide on using cloud-init scripts directly on their cloud servers. Essentially, Hostman offers a text editor for cloud-init scripts accessible via a web browser, allowing users to pass configuration data directly to the utility before the system starts.

Installing Cloud-init

There are several ways to get a Linux OS with cloud-init:

  • Use a specialized Linux OS image with pre-installed cloud-init (we’ll mention some key examples below).

  • Use pre-built distributions from cloud providers (most cloud platforms support cloud-init, though the setup processes may vary).

  • Build a custom OS image using HashiCorp Packer.

  • Manually install the cloud-init package.

Cloud-init Images

  • Ubuntu: The most common cloud-init image is Ubuntu 22.04 Cloud Images, officially created by Canonical for public cloud use. These images are optimized and tailored for cloud tasks.

  • Debian: Similarly, Debian Cloud offers specialized cloud images for Debian users.

  • Alma Linux: Another distribution designed for cloud deployment is Alma Linux Cloud.

  • VMware: VMware’s Photon image, built for cloud environments, also comes with pre-installed cloud-init.

Alternatively, you can install cloud-init manually.

Installation via APT

In most Linux distributions, cloud-init is installed like any other package and includes three systemd services located in the /lib/systemd/system/ directory:

  • cloud-init.service

  • cloud-config.service

  • cloud-final.service

Additionally, there are two more auxiliary systemd services:

  • cloud-init-local.service

  • cloud-init-hotplugd.service

Before installing, it's best to update the list of available repositories:

sudo apt update

Then, download the cloud-init package via APT:

sudo apt install cloud-init

In some Linux images, cloud-init may already be installed by default. If so, the system will notify you after running the install command.
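To confirm that cloud-init is present and registered with systemd, you can check the package version and the service state (standard commands; the exact output varies by distribution):

cloud-init --version
systemctl status cloud-init.service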

cloud-init also supports additional modules that expand configuration capabilities. The full list of modules is available in the official documentation.

Running cloud-init

Since cloud-init operates as a set of systemd services, it starts early in the boot process, before the system connects to the network. This makes it possible to pre-configure network settings, gateways, DNS addresses, and so on.
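On a running system, you can check what cloud-init did during boot. The status subcommand reports the current state, and two standard log files record each stage in detail:

cloud-init status --long
less /var/log/cloud-init.log
less /var/log/cloud-init-output.log

The first log is the detailed stage-by-stage record; the second captures the output of the commands cloud-init ran.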

Cloud-init Workflow

There are three main stages in cloud-init’s workflow, during which the system is configured. Each stage triggers specific cloud-init services:

  1. Before networking (init): Initial setup before the network starts, including system settings, network configurations, and disk preparation.

    • cloud-init-local.service

    • cloud-init.service

  2. After networking (config): Network is available, so updates and required packages are installed.

    • cloud-config.service

  3. Final stage (final): Final configurations, such as user creation and permission assignments, are applied.

    • cloud-final.service

    • cloud-init-hotplugd.service
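To see how these stages played out on a particular boot, and how long each one took, modern cloud-init releases provide an analyze subcommand:

cloud-init analyze show
cloud-init analyze blame

The first command prints a per-stage timeline of the last boot; the second sorts stages and modules by the time they consumed.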

Cloud-init Modules

cloud-init offers additional modules that enhance system configuration. These modules run in sequence at various stages. Depending on the specific use case, they can be triggered during any of the three stages. Module execution is managed through three lists in the configuration file:

  • cloud_init_modules: Modules run during the initialization (init) stage before the network starts.

  • cloud_config_modules: Modules run during the configuration (cloud) stage after the network is up.

  • cloud_final_modules: Modules run during the final stage.

In more detail, cloud-init’s stages can be broken down into five steps:

  1. systemd checks if cloud-init needs to run during system boot.

  2. cloud-init starts, locates local data sources, and applies the configurations. At this stage, the network is configured.

  3. During the initial setup, cloud-init processes user data and runs the modules listed under cloud_init_modules in the configuration file.

  4. During the configuration phase, cloud-init runs the modules listed under cloud_config_modules.

  5. In the final stage, cloud-init runs the modules from cloud_final_modules, installing the specified packages.

You can find more details on the cloud-init workflow in the official documentation.

Each module also has an additional parameter that specifies how often the module runs during system configuration:

  • per instance: The module runs once for each new instance (for example, on the first boot of a clone or snapshot).

  • per once: The module runs only once, on the very first boot of the image.

  • per always: The module runs at every system startup.
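For debugging, you can also run a single module by hand with an explicit frequency, bypassing the normal schedule. A hedged example; cc_runcmd is one of the standard module names, but available names can vary between releases:

sudo cloud-init single --name cc_runcmd --frequency always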

Cloud-init Configuration

In public clouds (AWS, GCP, Azure, Hostman) or private clouds (OpenStack, CloudStack), a service usually provides the virtual machine with environment data. cloud-init consumes this data in a specific order:

  • User data (user-data): Configuration and directives supplied when the instance is created, typically in cloud-config format. These may include files to run, packages to install, and shell scripts. User data usually configures a specific virtual machine instance.

  • Metadata (meta-data): Environment information, such as the server name or instance ID, processed after user data.

  • Vendor data (vendor-data): Information from the cloud service provider, used for default settings and applied after metadata.

On many platforms, metadata is served from a link-local metadata service at a URL like http://169.254.169.254/latest/meta-data/, and user data at http://169.254.169.254/latest/user-data/ (the exact endpoint depends on the provider).
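You can inspect this data from inside a booted instance. The curl example below assumes the EC2-style endpoint mentioned above, while cloud-init query reads the locally cached copy of the instance data:

curl http://169.254.169.254/latest/meta-data/
cloud-init query --all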

Cloud-init Scripts

When the system boots, cloud-init first reads the YAML configuration files containing the scripts and then executes their instructions. YAML is a human-readable data serialization format; it looks like a markup language, but is not one.

The primary YAML configuration file for cloud-init is located at /etc/cloud/cloud.cfg. This file serves as the main configuration script, with directives and parameters for specific cloud-init modules.

You can write scripts as YAML files (using #cloud-config) or as shell scripts (using #!/bin/sh).

Here’s a simple example of a cloud-init script setting a hostname:

#cloud-config
hostname: my-host
fqdn: my-address.com
manage_etc_hosts: true

In this example:

  • #cloud-config: indicates that the instructions are for cloud-init in YAML format.

  • hostname: sets the short hostname.

  • fqdn: sets the fully qualified domain name.

  • manage_etc_hosts: allows cloud-init to manage the /etc/hosts file.

If this option is set to false, cloud-init won’t overwrite manual changes to /etc/hosts on reboot.
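Before attaching a configuration like this to an instance, it is worth validating it. Recent cloud-init releases include a schema checker (older versions expose it as cloud-init devel schema); the file name below is just an example:

cloud-init schema --config-file my-user-data.yaml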

Cloud-init Script Examples

Cloud-init configuration using YAML should start with #cloud-config.

Users and Groups

When a virtual machine starts, you can predefine users with the users directive:

#cloud-config
users:
  - name: userOne
    gecos: This is the first user
    groups: sudo
    shell: /bin/sh
    system: true

  - name: userTwo
    gecos: This is the second user
    groups: sudo
    shell: /bin/bash
    system: false
    expiredate: '2030-01-02'

As shown, each new user entry begins with a dash, and parameters are specified in a "key: value" format.

These parameters mean:

  • name: User account name

  • gecos: Brief info about the user

  • groups: Groups the user belongs to

  • shell: Default login shell for the user, here set to the minimal /bin/sh.

  • system: If true, the account will be a system account without a home directory.

  • expiredate: The user's expiration date in the "YYYY-MM-DD" format.
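In practice, the users directive is often combined with SSH keys and sudo rules so that the account is immediately usable. A minimal sketch; the user name and the public key below are placeholders:

#cloud-config
users:
  - name: deploy
    gecos: Deployment account
    groups: sudo
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    lock_passwd: true
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3Nza... deploy@example.com

Here lock_passwd: true disables password login, so the account is accessible only with the listed SSH key.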

Changing User Passwords

Another simple directive is chpasswd, used to reset an existing user's password. Example configuration:

#cloud-config
chpasswd:
  list: |
    userOne:passOne
    userTwo:passTwo
    userThree:passThree
  expire: false

This sets a list of users and their new passwords. The | symbol indicates a multi-line value. The expire parameter defines whether users must change their password at first login.
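Storing plain-text passwords in user data is risky, since metadata may be readable from inside the instance. The list also accepts pre-hashed values; the hash below is a truncated placeholder, which you could generate with a tool such as openssl passwd -6:

#cloud-config
chpasswd:
  list: |
    userOne:$6$SaltSalt$3xAMPl3PLACEHOLDERh4sh...
  expire: true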

Updating the Repository List

cloud-config has a directive for updating the available package list: package_update. It is the declarative equivalent of running:

sudo apt update

By default, it's set to true, meaning cloud-init will always update the package list unless explicitly disabled:

#cloud-config
package_update: false

Installing Specific Packages

For updating or installing specific packages, use the packages directive:

#cloud-config
packages:
  - nginx
  - nodejs
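The packages directive combines naturally with package_update and package_upgrade, and a package can be pinned to a version by giving a [name, version] pair, provided that exact version exists in the configured repositories. A sketch with an assumed version string:

#cloud-config
package_update: true
package_upgrade: true
packages:
  - nginx
  - [nodejs, '18.19.0-1nodesource1']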

Running Commands

The runcmd directive allows you to execute console commands through cloud-config. Simply pass a list of commands that cloud-init will run in sequence:

#cloud-config
runcmd:
  - echo 'This is a string command!' >> /somefile.txt
  - [ sh, -c, "echo 'This is a list command!' >> /somefile.txt" ]

Here, two types of commands are used:

  1. As a simple string.

  2. As a YAML list specifying the executable and its arguments.

Another similar directive is bootcmd. While runcmd runs commands only on the system's first boot, bootcmd runs commands on every boot:

#cloud-config
bootcmd:
  - echo 'Command that runs at every system boot!'

Creating and Running a Script

You can combine runcmd with the write_files directive to create and run a script:

#cloud-config
write_files:
  - path: /run/scripts/somescript.sh
    content: |
      #!/bin/bash
      echo 'This script just executed!'
    permissions: '0755'
runcmd:
  - [ sh, "/run/scripts/somescript.sh" ]

The permissions parameter (set to 0755) means the script is readable and executable by all, but only writable by the owner.
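write_files supports a few more useful parameters, such as owner. A short sketch that drops a message-of-the-day file; the path and text are arbitrary examples:

#cloud-config
write_files:
  - path: /etc/motd
    owner: root:root
    permissions: '0644'
    content: |
      This server is managed by cloud-init. Manual changes may be overwritten.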

Overriding Module Execution

You can override the list of modules to be executed at specific configuration stages. For example, the default cloud_config_modules list might look like this:

#cloud-config
cloud_config_modules:
  - emit_upstart
  - snap
  - ssh-import-id
  - locale
  - set-passwords
  - grub-dpkg
  - apt-pipelining
  - apt-configure
  - ubuntu-advantage
  - ntp
  - timezone
  - disable-ec2-metadata
  - runcmd
  - byobu

Remember, there are three stages:

  • cloud_init_modules

  • cloud_config_modules

  • cloud_final_modules

If you remove runcmd, for example, the commands within it won’t execute.

Updating Repositories and Installing Packages via Shell Script

cloud-init configurations can also consist purely of shell scripts. In this case, the script starts with #!/bin/sh instead of #cloud-config:

#!/bin/sh
apt update
apt -y install nodejs
apt -y install nginx

The -y flag automatically answers "yes" to any prompts during installation.
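YAML and shell user data can also be combined into a single payload: cloud-init accepts MIME multi-part input and ships a helper to build it. The file names below are assumptions:

cloud-init devel make-mime -a config.yaml:cloud-config -a setup.sh:x-shellscript > user-data.mime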

Conclusion

In this guide, we covered the theoretical and practical aspects of using cloud-init:

  • How cloud-init works.

  • How to interact with cloud-init for system configuration.

  • Writing scripts in YAML or shell format.

  • Example configurations.

cloud-init runs early in the boot process, ensuring that the instance comes up with the desired configuration (network, directories, packages, updates). cloud-init uses modules for specific configuration tasks, and the system configuration is done in phases:

  • init (before networking)

  • config (after networking)

  • final (last stage)

More detailed information is available in the official documentation maintained by Canonical, the primary developer of Ubuntu.
