
User Management in Linux
Hostman Team
Technical writer
Linux
11.07.2024
Reading time: 9 min

Modern operating systems (including Windows, Unix/Linux, and macOS) are multi-user. This means that multiple users can be created and managed separately without interfering with other users or programs. You can also edit users: change their names, groups they belong to, etc. In this article, we will discuss user management in Linux using the Ubuntu distribution.

This user management guide will work for any cloud server or virtual machine with a pre-installed Linux operating system.

In this article, we use Ubuntu version 22.04, but you can use any other distribution, including Debian, CentOS, etc.

All commands are executed from the root account. Regular users cannot create, delete, or modify user and group information without sudo privileges.

Where User Information is Stored

In Linux systems, whether a user was created manually or as a result of installing some software, their information is stored in the /etc/passwd file. To show all users of a Linux system, run:

cat /etc/passwd

The structure of the passwd file is as follows:

username:password:user_id:group_id:additional_info:home_directory_path:shell_path

For example:

gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin

Explanation of the fields in the passwd file:

  • Username — used, for example, for SSH login or when adding to a specific user group. The username length can be 1 to 32 characters.

  • Password — in modern Linux distributions, passwords are not stored in plain text. Instead, one of the following symbols is used:

    • x — means the password is stored in encrypted form with "salt" (a random set of characters added to the password hash to complicate the password-cracking process). The encrypted password itself is stored in the /etc/shadow file.

  • * — means password login is disabled, so the account cannot be used to log in with a password (common for system accounts).

    • If the field is empty, the account has no password.

  • User ID (UID) — each created user in a Linux system is assigned a unique number (identifier) in the form of a positive number.

    • The number 0 always belongs to the root user. Low UIDs (typically 1 to 999 on modern distributions, 1 to 499 on some older ones) are reserved for system accounts (pseudo-users created when installing certain programs, such as PostgreSQL or Nginx). Regular users are usually numbered starting from 1000 (or 500 on older systems). The user ID can be changed (except for the root user).

  • Group ID (GID) — when a user is created, a group with the same name is also created. Groups allow you to assign permissions to objects in the system (files, directories, etc.) to several users at once. A user must be in at least one group. Like users, groups are assigned unique numerical IDs.

  • Additional Information — you can add additional information for users, such as full name, address, phone number.

  • Home Directory Path — each user in the Linux system has their own home directory. By default, it is located in the /home directory, but you can choose another one, such as /var or /bin. For example, when installing PostgreSQL, a user named postgres is automatically created, with a home directory at /var/lib/postgresql.

  • Shell Path and Login Prohibition — a user may have a command shell for executing commands (usually bash or sh). Also, you can prohibit a user from logging into the server by using /bin/false or /usr/sbin/nologin.
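The fields described above are easy to extract with standard text tools. As an illustration (the UID cutoff of 1000 is just the usual Ubuntu/Debian default, not a universal rule), the following one-liner lists regular user accounts with their UIDs and shells:

```shell
# List non-system accounts: username, UID, and login shell.
# UID 1000 is the usual starting point for regular users on Ubuntu;
# 65534 is the special "nobody" account and is excluded.
awk -F: '$3 >= 1000 && $3 < 65534 { printf "%-16s uid=%-6s shell=%s\n", $1, $3, $7 }' /etc/passwd
```

Adjust the cutoff to match your distribution's UID_MIN (see /etc/login.defs).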

Creating Users with useradd

To create users in Linux systems, you can use a low-level utility called useradd.

For example, to create a user named hostman:

useradd hostman

If we look at the contents of the /etc/passwd file, at the end of the list we will see information about the created user:

cat /etc/passwd


However, it is important to note that by default, i.e., without any options, useradd creates neither a password nor a home directory for the user. Although a home directory path appears in the /etc/passwd entry, the directory itself is not created; the utility merely records the path.

To set a password for the user, run the passwd command. Let's set a password for our user hostman:

passwd hostman


You will need to enter the password twice. 

Note that setting a password is mandatory. Without it, the new user will not be able to log in.

When using the useradd command, you can set a password during the user creation stage using the -p option, but the password must be pre-encrypted. The openssl utility (present by default in most Linux distributions) can be used for this. Let's create a user named new-user and set the password StrongPassword123$ for them:

useradd -m -p "$(openssl passwd -1 'StrongPassword123$')" new-user

Check the /etc/passwd file:

cat /etc/passwd


You can see that new-user exists and has a password, which is indicated by the symbol x.

This method is not recommended, however: the password appears on the command line in plain text and remains in the shell history.
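If you do need to pre-hash a password, a somewhat safer sketch (assuming OpenSSL 1.1.1 or newer, which adds the -6 flag) is to generate a SHA-512 hash with the password read from standard input, so it at least never appears on the useradd command line:

```shell
# Generate a SHA-512 crypt hash; -stdin reads the password from standard input.
# -6 selects SHA-512, which is stronger than the MD5 hash produced by -1.
HASH=$(echo 'StrongPassword123$' | openssl passwd -6 -stdin)
echo "$HASH"

# The hash could then be passed to useradd (root privileges required):
# useradd -m -p "$HASH" new-user
```

For interactive use, running passwd after creating the account remains the simplest and safest approach.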

Although useradd by itself does little more than create the account, it has options for setting the user's home directory and login shell.

To create a user and set their home directory, use the -m and -d options, followed by the full path to the intended directory:

useradd -m -d /home/hostman hostman

If you do not specify the directory manually, the user's home directory will be created in the /home directory.

To set the user's shell, use the -s option, followed by the shell:

useradd -s /usr/bin/bash hostman

Creating Users with adduser

In the previous section, we used the useradd utility to create users. However, the useradd manual for Debian-based systems advises using adduser instead of useradd.

The adduser command, like useradd, creates users in the system but works at a higher level and is interactive. Technically, adduser is a Perl script that uses useradd to create users.

When using adduser, you specify the username, and the utility will automatically:

  • create a group with the same name for the user,
  • create a home directory in the /home directory,
  • prompt for a password,
  • fill in information such as full name (Full Name), room number (Room Number), work phone (Work Phone), home phone (Home Phone), and any other information you consider necessary (Other). If you do not want to fill in this information, just press Enter until you reach the final question: "Is the information correct?". You need to enter Y (yes) or n (no).

For example, to create a user named testuser this way, run adduser testuser and answer the prompts.

Using adduser significantly simplifies creating new users, as you do not have to use additional commands and options.

Deleting Users

To delete a user on Linux, use the userdel command. Let's delete a user named test1:

userdel test1

By default, the userdel command does not delete the user's home directory. To delete both the user and their home directory, use the -r option:

userdel -r test1
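userdel refuses to remove an account that is currently logged in, so it can be useful to check for running processes first. A small sketch (test1 is just the example username from above):

```shell
# Check whether the user still owns any running processes before deletion.
if pgrep -u test1 > /dev/null 2>&1; then
    echo "test1 still has running processes; stop them first"
else
    echo "no processes owned by test1; safe to run: userdel -r test1"
fi
```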

There is another way to delete a user account on Linux — using the deluser command. The deluser utility is available only in Debian-based distributions. To delete the user testuser using deluser, execute the following command:

deluser testuser


Just like userdel, the deluser command does not delete the user's home directory or their mail spool in /var/mail, where email messages are stored. To remove these directories along with the user, use the --remove-home option:

deluser --remove-home testuser

Managing User Passwords

As mentioned earlier, modern Linux distributions do not store passwords in plain text. Instead, the system stores only password hashes, which are kept in a separate text file, /etc/shadow.


The structure of the shadow file is as follows:

username:encrypted_password:last_password_change_date:minimum_days_between_password_changes:maximum_days_before_password_change:days_before_password_expiry:days_after_password_expiry_until_account_is_disabled:account_expiry_date

For example:

hostman:$y$j9T$csuJ6HDxYMO4UA0WABfwj1$dZOda.QqT7XS/1Ut3UJIVIG3kJTzMli/Rn0Ku/Vwdv/:19912:0:99999:7:::

Explanation of the fields in the shadow file:

  • Username is the username for which the password is stored.

  • Password is the hashed password. The prefix indicates the hashing algorithm used, for example $y$ (yescrypt), $1$ (MD5), $2y$ (bcrypt), or $6$ (SHA-512).

  • Last Password Change Date is the date of the last password change, expressed as the number of days since January 1, 1970.

  • Minimum Days Between Password Changes is the minimum number of days between password changes.

  • Maximum Days Before Password Change is the maximum number of days before the user must change the password.

  • Days Before Password Expiry is the number of days before the password expires during which the system warns the user that the password needs to be changed.

  • Days After Password Expiry Until Account is Disabled is the number of days after the password expires until the account is disabled.

  • Account Expiry Date is the date the account will be disabled, counted in days from January 1, 1970.
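The day counts in /etc/shadow are easy to convert to calendar dates. For example, using GNU date (present by default on Ubuntu) with the value 19912 from the sample entry above:

```shell
# Field 3 of /etc/shadow is a day count since the Unix epoch (1970-01-01).
days=19912
date -u -d "1970-01-01 +${days} days" +%Y-%m-%d
# → 2024-07-08
```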

To change a user's password, use the passwd command. The root user can change the password for any user, but a regular user can only change their own password.

For example, to change the password for the user hostman, run the command as the root user:

passwd hostman

Enter the new password for the user twice, and the password will be updated.

To force the user to change their password at the next login, use the passwd command with the -e option:

passwd -e hostman

This command sets the password expiration date to the current date, so the user will be prompted to change their password the next time they log in.

Managing User Groups

Groups in Linux are necessary for organizing users. Each user must be in at least one group. When you create a user, a group with the same name is created by default.

To view all groups in the system, use the getent command:

getent group

It will show the content of the /etc/group file. The structure of the file is as follows:

group_name:password:group_id:group_members

For example:

hostman:x:1001:

Explanation of the fields in the group file:

  • Group Name — the name of the group.

  • Password — the group password; the symbol x means that the password, if one is set, is stored in /etc/gshadow (group passwords are rarely used in practice).

  • Group ID — the unique identifier of the group.

  • Group Members — the list of users who are members of the group.
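Instead of listing the whole file, getent can look up a single group, and the colon-separated fields can be split with cut. For example, for the root group (present on every Linux system):

```shell
# Look up one group; the output has the same four-field structure.
getent group root

# Extract individual fields: field 3 is the GID, field 4 is the member list.
getent group root | cut -d: -f3
getent group root | cut -d: -f4
```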

To create a group, you can use groupadd. Let’s create a group named group1:

groupadd group1

Groups can be renamed using groupmod. For example, to rename group1 to newgrp:

groupmod -n newgrp group1

To add a user to a group in Linux, use the usermod command with the -aG option, where -a means "append" and -G specifies the group (without -a, the user would be removed from any supplementary groups not listed). The new membership takes effect at the user's next login:

usermod -aG sudo hostman

To add several users at once, you can use:

gpasswd -M user1,user2 newgrp

To check a user’s group, use the groups command and the username, for example:

groups hostman
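The id command gives a more detailed view than groups: it prints the UID, primary GID, and all supplementary groups, with both names and numeric IDs. For example, for root:

```shell
# Full identity information for a user.
id root          # e.g. uid=0(root) gid=0(root) groups=0(root)

# Numeric values only:
id -u root       # UID
id -g root       # primary GID
id -Gn root      # names of all groups the user belongs to
```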


To remove a user from a group, use the gpasswd command with the -d option:

gpasswd -d hostman sudo


Another option is to use deluser, which can also remove users from groups. For example, to remove user1 from newgrp:

deluser user1 newgrp

Conclusion

Managing users and groups in Linux is a fundamental task for system administrators. This article covered the basic commands and utilities for creating, deleting, and managing users and groups in a Linux system. Understanding these concepts and tools is essential for maintaining a secure and efficient operating environment.


Similar

Linux

How to Set and Manage Linux Environment Variables

Environment variables are a core component of Linux, enabling users and system processes to store configuration values dynamically. They influence everything from command execution to system-wide settings, making them essential for both interactive sessions and automated processes.   This guide explores setting and managing env variables, troubleshooting issues, securing sensitive data, and optimizing Linux systems through effective variable management.  What Are Environment Variables? An environment variable is a named value stored in system memory, affecting how processes behave. For example, when executing a command, the system references the PATH variable to locate the correct executable.  To display all active variables:   printenv To retrieve a specific value:   echo $HOME There are two primary types of environment variables:   Session-based: Exists only in the current terminal session.   Persistent: Stored in configuration files to remain active after logout or reboot.   Understanding how these variables function ensures smooth system operation and script automation.   Setting Environment Variables A temporary variable can be assigned using:   export TEMP_VAR="Temporary Value" To check if it was set:   echo $TEMP_VAR This variable exists only in the current session. Once the shell is closed, it disappears.   To create a persistent variable, append it to ~/.bashrc:   echo 'export PERSISTENT_VAR="Stored Across Sessions"' >> ~/.bashrc && source ~/.bashrc For system-wide settings, define variables in /etc/environment:   echo 'GLOBAL_VAR="Available to All Users"' | sudo tee -a /etc/environment A reboot or reloading the file ensures the changes take effect.   Using Environment Variables in Shell Scripts Scripts frequently rely on environment variables to pass configurations dynamically. Understanding their scope and behavior is key to avoiding unintended side effects. 
Defining and Accessing Variables in Scripts A variable can be defined within a script as follows: #!/bin/bash MY_VAR="Hello, World" echo $MY_VAR However, this variable exists only inside the script. Running it does not make MY_VAR available in the parent shell. Exporting Variables for External Use To make a variable accessible to child processes: export GLOBAL_VAR="Persistent Value" ./my_script.sh Inside my_script.sh, the variable will be available. Without export, the script would not inherit the variable. Sourcing a Script to Modify the Current Shell Environment A script cannot modify the parent shell’s environment unless explicitly sourced: source my_script.sh or . my_script.sh This method is useful for setting persistent configurations dynamically. Preventing Variable Collisions in Scripts If multiple scripts use generic names like TEMP or DATA, unexpected conflicts can occur. Prefixing variables uniquely avoids this issue: APP1_TEMP="value"APP2_TEMP="value" Alternatively, using local variables within functions prevents global namespace pollution: my_function() { local TEMP_VALUE="Temporary Data" echo $TEMP_VALUE } By keeping variables properly scoped, script behavior remains predictable. Managing Environment Variables in Cron Jobs Cron jobs run on predefined schedules but operate in a minimal shell environment, often missing user-defined env variables. This leads to common issues when using variables inside cron scripts. Unlike interactive shells, cron jobs do not load user profiles like ~/.bashrc or /etc/profile. Running a job that depends on custom variables may fail unless explicitly defined. To check cron’s default environment: env -i /bin/bash -c 'printenv' This command runs a clean shell without inherited variables, mimicking cron behavior. 
Defining Variables in Crontab To ensure a variable is available in a cron job, define it within the crontab file: MY_VAR="Scheduled Value" * * * * * echo $MY_VAR >> /tmp/output.log This approach works for simple variables but does not support complex setups. Using an Environment File in Cron Jobs For multiple variables, store them in a file and source it in the cron job: * * * * * source /home/user/my_env_vars.sh && /home/user/my_script.sh Alternatively, define the BASH_ENV variable to ensure the script loads the environment: BASH_ENV="/home/user/my_env_vars.sh" * * * * * /home/user/my_script.sh This method ensures a consistent environment for all scheduled tasks. Ensuring PATH Is Correctly Set Many cron job failures occur due to an incomplete PATH. To prevent this, define it explicitly: PATH="/usr/local/bin:/usr/bin:/bin" * * * * * /home/user/my_script.sh This avoids issues where commands work in a terminal but fail when run as a cron job. Debugging Environment Variable Issues If an env variable does not behave as expected, diagnosing the issue requires structured debugging.   Confirming a Variable Is Set To check if a variable exists:   printenv VAR_NAME If no output appears, the variable is unset or not exported.   To verify if it exists only in the shell:   set | grep VAR_NAME Checking If a Variable Is Exported A variable may be available in the current shell but not inherited by subprocesses. To check:   bash -c 'echo $VAR_NAME' If the output is empty, ensure the variable is exported:   export VAR_NAME="Value" Identifying Conflicting Variables If a variable does not behave as expected, another definition may be overriding it. To locate all instances:   grep VAR_NAME ~/.bashrc ~/.profile /etc/environment If multiple definitions exist, remove any unintended ones.   
Testing Variables in a Clean Environment To check if an issue is related to the shell configuration, launch a clean session and then, try setting and printing the variable again  env -i bash --noprofile --norc Using Environment Variables in Systemd Services Many Linux services run as systemd units. Instead of defining env variables in shell scripts, systemd provides a structured way to manage them.   To set env variables for a systemd service, create a drop-in configuration file:   sudo mkdir -p /etc/systemd/system/my_service.service.d && echo -e "[Service]\nEnvironment=\"MY_VAR=MyValue\"" | sudo tee /etc/systemd/system/my_service.service.d/env.conf Add the content and modify ExecStart as needed: [Unit] Description=My Custom Service After=network.target [Service] ExecStart=/path/to/your/executable Restart=always Environment="MY_VAR=MyValue" [Install] WantedBy=multi-user.target Reload systemd and restart the service:   sudo systemctl daemon-reload sudo systemctl restart my_service To confirm that the variable was applied:   sudo systemctl show my_service | grep MY_VAR This method ensures that variables persist across reboots without modifying global shell configuration files.   Environment Variables and Performance Optimization Env variables affect performance by controlling system behavior at a fundamental level. Configuring them effectively can enhance efficiency and streamline resource management. Optimizing Command Execution Paths The PATH variable determines where the system looks for executables. If it contains unnecessary directories, command execution slows down. To optimize:   export PATH="/usr/local/bin:/usr/bin:/bin" Avoid overly long PATH values by removing redundant or unused paths.   Reducing Memory Usage with LD_LIBRARY_PATH Dynamic linking relies on the LD_LIBRARY_PATH variable. Setting unnecessary paths increases lookup time for shared libraries. 
Optimize by ensuring only required paths are included:   export LD_LIBRARY_PATH="/usr/local/lib:/usr/lib" Adjusting Process Priorities with OOM_SCORE_ADJ The OOM_SCORE_ADJ variable influences how the Out-of-Memory (OOM) killer prioritizes processes for termination. For critical processes, reduce their likelihood of being killed: echo -1000 | sudo tee /proc/$(pgrep my_critical_process)/oom_score_adj Conversely, assign a high value to non-essential processes to allow them to be terminated first: echo 500 | sudo tee /proc/$(pgrep background_task)/oom_score_adj These optimizations improve system responsiveness under heavy loads. Linux Hardening Through Environment Variable Management  Improper use of env variables can expose sensitive information and create privilege escalation risks. Careful handling of these variables is essential, and using well-structured practices reduces potential risks. Restricting Access to Configuration Files Ensure that files storing env variables have restricted permissions: chmod 600 ~/.bashrc ~/.profile For system-wide files: sudo chmod 644 /etc/environment Preventing Path Manipulation Attacks Malicious users can modify PATH to execute unintended commands. Instead of relying on cp file /backup/ always use absolute paths in scripts: /usr/bin/cp file /backup/ Clearing Sensitive Variables When handling credentials, remove them immediately after use: unset API_KEY This prevents them from being accessed through process listings or logs. Wrapping Up Linux environment variables are essential for configuring system behavior, optimizing performance, and ensuring security. By understanding how they work across different shells, debugging issues efficiently, and applying security best practices, users can create a more robust and efficient system. The inclusion of systemd service configurations, performance optimizations, containerized environments, and Linux hardening techniques ensures a well-rounded approach to env variable management. 
Mastering these topics provides greater control over Linux-based systems and enhances both usability and security.
03 April 2025 · 8 min to read
Linux

Monitoring Linux Server Activity with Falco

Falco is a security tool that allows you to record security events on Linux servers based on rules. It was previously developed by Sysdig and later handed over to Cloud Native Computing Foundation. This guide shows how to install Falco on Linux servers, write rules to detect malicious events executed by processes or users and eventually compares it with Linux Auditd. Prerequisites To follow this guide, you'll need access to a Debian Linux or CentOS Stream 9 server. Alternatively, you could spin up a virtual server using Hostman. The Hostman website has instructions on how to launch a virtual server. Brief Overview of Linux System Calls  In Linux, the user-space is reserved for user-facing services like web browsers, text editors, etc, whilst the kernel space is reserved for the privileged services. Services provided within the kernel space include memory management, process scheduling, file system management, etc. In the context of system calls, when a user executes the cd command, the “chdir system call’’ is invoked via the chdir() wrapper function within the glibc library to change the current working directory and returns the result to the user-space program. Usually, the name of the wrapper function is the same as the invoked system call. The GNU C Library, also known as glibc, contains system functions, acting as a wrapper around the actual function provided by the Linux kernel, allowing applications to access system functionality or make system calls through a standardized C interface. For detailed information on how Linux systems calls work and roles/tasks of glibc wrapper functions, check Linux man page. What is Falco? Falco provides runtime security across hosts, containers, Kubernetes, and other cloud native environments. It relies on both default and custom rules to detect events as malicious on Linux hosts, Kubernetes applications, etc. and associates event data with contextual metadata to deliver meaningful real-time alerts to the SIEM team. 
Falco relies on different sources to gather events data. It natively supports Linux system call source by default. However, it’s possible to extend Falco capabilities to support other event sources like Kubernetes audit logs, AWS Cloudtrail, KeyCloak Admin/User events via the plugin system. The plugin system consists of shared libraries that allows Falco to include or add new event sources, include new fields that extract information from events, etc. As at the time of writing this guide, some of the following plugins are: K8saudit: Monitors and detects Kubernetes cluster events. Cloudtrail: Tracks events from Cloudtrail logs. Kafka: Records events from Kafka topics. Keycloak: Detects Keycloak user/admin events. Check their website for a complete list of currently supported plugins. In order to consume events at the kernel source, the following drivers are currently supported: eBPF probe modern eBPF probe kernel module Using Modern eBPF Probe eBPF means “extended Berkeley Packet Filter”. It enables us to run isolated programs within the Linux kernel space in order to extend the capabilities of the kernel without loading additional kernel modules. They are programs that execute when specific hook points are triggered or an event takes place. eBPF probe is embedded into the userspace application and works out of the box, regardless of the kernel release. To use the modern eBPF probe, set the engine.kind parameter inside the /etc/falco/falco.yaml file to modern_ebpf to activate this feature. There is no need to install other dependencies such as clang or llvm if you want to use modern eBPF. Installing Falco This section shows how to install Falco on Linux Debian and CentOS servers. Running Falco on Debian Step 1: Import Falco GPG key. curl -fsSL https://falco.org/repo/falcosecurity-packages.asc | \sudo gpg --dearmor -o /usr/share/keyrings/falco-archive-keyring.gpg Step 2: Setup the apt repository. 
sudo bash -c 'cat << EOF > /etc/apt/sources.list.d/falcosecurity.listdeb [signed-by=/usr/share/keyrings/falco-archive-keyring.gpg] https://download.falco.org/packages/deb stable mainEOF' Step 3: Install the apt-transport-https package. sudo apt install apt-transport-https Step 4: Update the apt repository. sudo apt update -y Step 5: Install Falco. sudo apt install -y falco Running Falco on CentOS Stream 9 Step 1: Import the Falco GPG key. rpm --import https://falco.org/repo/falcosecurity-packages.asc Step 2: Set up the yum repository. curl -s -o /etc/yum.repos.d/falcosecurity.repo https://falco.org/repo/falcosecurity-rpm.repo Step 3: Update the yum repository. yum update -y Step 4: Install Falco. yum install -y falco Step 5: Execute the command to test whether Falco is successfully installed. falco Managing Falco with systemd In production, it's recommended to manage Falco using Systemd because it provides a centralized way to control and automate service restart instead of manually managing Falco. Systemd is the init process that starts required system services at boot time. Use the following instructions to manually configure Systemd with Falco. Step 1: Execute the following command to search for Falco services. systemctl list-units "falco*" Step 2: Use these commands to enable, start and check the status of falco-modern-bpf.service. The systemctl enable command ensures Falco starts at boot time systemctl enable falco-modern-bpf.service This command starts the service: systemctl start falco-modern-bpf.service And this is how you check if the service is running: systemctl status falco-modern-bpf.service Step 3: Execute the command systemctl list-units | grep falco to search for active related services The screenshot shows that both services are active. The latter is responsible for performing rules updates. If you don't want falcoctl to perform automatic rules update, use the command below to mask it. 
systemctl mask falcoctl-artifact-follow.service It prevents falcoctl service from being enabled automatically once an aliased falco service is enabled. Check this page for further information on using Systemd to manage Falco. Configuring Falco Settings This section shows how to configure some settings in the Falco configuration file located at /etc/falco/falco.yaml. watch_config_files: This key can be assigned true or false values. The true value ensures that anytime changes are made to the rules or configuration file, it automatically reloads itself to apply the updated configuration settings. rules_files: This key determines which rule files or directories are loaded first based on the values assigned to it. The example below ensures that rules in the /etc/falco/rules.d folder are checked first. rules_files:  - /etc/falco/rules.d  - /etc/falco/falco_rules.yaml - /etc/falco/falco_rules.local.yaml output_channel: Falco supports the following output channels. Syslog standard output http endpoint or webhook file output grpc service You can enable one of these channels to determine where alerts and log messages are sent to. Writing Falco Rules Basically, a rule is made up of an event and specific condition. Example of an event is a filesystem activity such as when a user accesses a file in the etc directory. Another example of an event is when someone or a service decides to connect or transfer a file to a remote host. Conditions are pragmatic expressions that define the exact details Falco should look for. It involves inspecting process arguments, network addresses, etc. Rules are written in YAML, and have a variety of required and optional keys. They are loaded at startup. Following is the structure of a rule in Falco. rule: This key defines the name of the rule, e.g. rule: Unauthorised File Access. desc: The key desc means description. It describes the purpose of the rule, e.g. Detecting unauthorized access to files in the /etc folder by regular users. 
condition: This key informs Falco to trigger an alert when a specific event takes place, e.g. condition: open_read and fd.name startswith /etc.  output: The message that will be shown in the notification. priority: This key defines the priority level of the rule. Priority levels include WARNING, ERROR, DEBUG, NOTICE, EMERGENCY, INFORMATIONAL, CRITICAL, and ALERT. tags: This key is used to categorize rules, e.g. ["Sensitive_Files", and "Unauthorized_Users"]. For detailed information on Falco rules, check Falco’s website. The following are rules to detect specific filesystem access and outbound network connection. Creating a Rule for Filesystem Activity Use the following steps to create a custom Falco rule. Navigate to the path /etc/falco/rules.d using the cd command. cd /etc/falco/rules.d Create a custom rule file using the following command. touch custom_rules.yaml Open and edit the custom_rules.yaml file using vim or any other text editor. vim custom_rules.yaml Then copy and paste the following into the file custom_rules.yaml. - rule: reading sensitive file desc: Detects when a user reads /etc/ folder condition: open_read and fd.name startswith /etc/ output: “suspicious file read detected file=%fd.name accessed by user=%user.name” priority: WARNING tags: [network, filesystem] Start Falco in the background. falco & To stop the background process falco from running forever, use the following command to search for process ID. pgrep falco Then use the kill command to terminate it by specifying the pid. kill -9 process-pid Now test the rule we just created to check whether Falco would alert us when a user opens or accesses the file /etc/passwd. cat /etc/passwd Creating a Rule for Detecting Outbound Connection Use the following to create a rule to monitor network connection. Navigate to the folder /etc/falco/rules.d using the command: cd /etc/falco/rules.d Use a text editor like vim to create a new file for custom rules. 
vim custom.yaml Copy and paste the following rule into the file custom.yaml to flag outbound connections to other hosts. - rule: "Suspicious outbound connection" desc: detect outbound connection to other hosts condition: outbound and evt.type = connect and fd.sip != 8.8.8.8 output: "Suspicious outbound connection detected destination=%fd.sip" priority: WARNING tags: [network, exfiltration] Make sure you execute the falco command before testing the preceding rule via the command: ping -c 1 blacklisted_IPaddress We'll receive a warning: Comparison Between Falco and Linux Audit Framework. Auditd is a part of the Linux auditing framework. It is responsible for writing audit records to the disk. Both tools are useful in detecting events registered as malicious via rules. In addition, both tools rely on system calls as their native event source. However, there are differences between these tools: Auditd does not have multiple event sources as compared to Falco. Auditd does not allow users to customize event output but Falco allows. Conclusion  Falco is useful in detecting events defined as malicious via rules. These define whether events are malicious or not. However, it's worth noting that the folder /etc/falco/ should be restricted to privileged users and also be monitored by Falco otherwise anyone can tweak rules in the file to avoid detection.
19 March 2025 · 9 min to read

How to Send Email in Linux from the Command Line with Sendmail and Mailx

For those managing servers or working on automation tasks, knowing how to send emails from the Linux terminal is essential. It offers complete control over email functions and eliminates the need for complex mail programs, which is useful in scenarios where speed and simplicity matter most.

Common tools such as sendmail and mailx are frequently used for sending messages, checking SMTP settings, automating alerts, and integrating with scripts. They are straightforward yet effective, making them perfect for tasks like informing teams about server updates, automating reports, or testing email setups.

This guide is designed for users looking to manage their email directly from the terminal. It covers the installation of essential tools and delves into more advanced tasks, such as sending attachments and configuring email tools.

Why Choose Command-Line Email Tools?

Two commonly used tools, sendmail and mailx, are reliable options for mail transmission in Linux. They come with a certain set of benefits:

- Efficiency: Traditional email software can be slow and resource-intensive. These tools enable quick and lightweight email sending directly from the terminal.
- Automation: They integrate smoothly with shell scripts, cron jobs, and system monitoring tools, making it possible to automate mail alerts and notifications for recurring events.
- Troubleshooting SMTP problems: Debugging SMTP setups becomes more manageable. These commands provide visibility into message delivery, making mail logs and errors easier to inspect.
- Flexibility: Whether it's sending alerts or generating automated reports, command-line tools like sendmail and mailx offer versatility across a range of tasks.

Prerequisites

Before using these Linux mail command-line tools, ensure you have terminal access. Root privileges may be required in some cases, especially for configuring the mail commands discussed in this guide.
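The automation point above can be sketched as a small monitoring script. This is an illustrative example only: the 90% threshold, the script path in the crontab comment, and the recipient address admin@example.com are assumptions, and the mailx call is guarded so the script still runs on hosts where no mail tool is installed yet.

```shell
#!/bin/sh
# Email an alert when the root filesystem crosses a usage threshold.
THRESHOLD=90   # illustrative value

# Extract the usage percentage of / as a bare number, e.g. "42".
USAGE=$(df -P / | awk 'NR==2 {gsub(/%/, ""); print $5}')

if [ "$USAGE" -ge "$THRESHOLD" ] && command -v mailx >/dev/null 2>&1; then
    echo "Root filesystem is at ${USAGE}% on $(hostname)" \
        | mailx -s "Disk usage alert" admin@example.com
fi

# A crontab entry could run this every 15 minutes, e.g.:
# */15 * * * * /usr/local/bin/disk_alert.sh
```

Paired with cron, a script like this turns the mail tools covered below into an unattended alerting pipeline.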
Setting Up an SMTP Server

SMTP servers are essential for sending emails. These servers fall into two categories: external and local SMTP servers.

External SMTP Servers

An external SMTP server is a mail server hosted by a third-party provider. These servers are used to deliver emails over the internet to recipients who are not part of your local network. They are built to manage global mail delivery while ensuring proper authentication, encryption, and spam prevention.

Examples:

- Gmail: address smtp.gmail.com, port 587 (with TLS) or 465 (with SSL)
- Outlook: address smtp.office365.com, port 587

These servers need appropriate authentication methods (such as a username, password, or app-specific passwords) and encryption (like TLS or SSL) to ensure secure communication.

Note: We've already provided a guide for setting up external SMTP servers. The command to send emails through Postfix remains the same as mentioned in this article. Simply configure the SMTP settings using our guide, and replace the email address with Gmail or any other preferred provider for proper email delivery.

Local SMTP Servers

A local SMTP server functions solely within a private network or system. It is perfect for:

- Sending emails between users on the same network or domain (e.g., tom@office.local to jerry@office.local).
- Local testing and development tasks.
- Internal communication within an organization.

It does not need internet access to operate, as it manages mail delivery internally.
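To make the SMTP exchange concrete, the sketch below builds the raw dialogue an SMTP client sends to a server. The office.local addresses mirror the hypothetical examples above; once a local server such as Postfix is listening, the commented pipeline shows one way you could deliver the session to it.

```shell
# The raw commands an SMTP client issues, stored here for inspection.
# To actually send it to a local server once one is running, convert
# line endings to CRLF and pipe it in, for example:
#   printf '%s\n' "$SMTP_SESSION" | sed 's/$/\r/' | nc localhost 25
SMTP_SESSION='HELO office.local
MAIL FROM:<tom@office.local>
RCPT TO:<jerry@office.local>
DATA
Subject: test via raw SMTP

Hello from a raw SMTP session.
.
QUIT'

echo "$SMTP_SESSION"
```

Seeing the protocol spelled out this way makes it easier to interpret the SMTP logs and errors that tools like sendmail report.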
Setting Up a Local SMTP Server

Here is the procedure to set up a local SMTP server using Postfix.

Install Postfix:

sudo apt install postfix

Modify the Postfix configuration file:

sudo nano /etc/postfix/main.cf

Update or confirm these key settings:

myhostname = mail.office.local
mydomain = office.local
myorigin = $mydomain
inet_interfaces = loopback-only
local_recipient_maps = proxy:unix:passwd.byname
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain

Save and exit the file after making your changes, then restart Postfix:

sudo systemctl restart postfix

To create email addresses like linux@office.local and hostman@office.local, set up user accounts on the server:

sudo adduser linux
sudo adduser hostman

Overview of sendmail

sendmail is a prominent mail transfer agent (MTA) in Linux. It works with SMTP servers for mail delivery and allows emails to be sent and routed from local systems or scripts.

Installing sendmail

Before sending emails, you must install sendmail. Execute the command below for your distribution.

For Debian/Ubuntu:

sudo apt install sendmail

For CentOS/Red Hat:

sudo yum install sendmail

Starting and Enabling the Service

Once installed, make sure sendmail is running and configured to start at boot:

sudo systemctl start sendmail
sudo systemctl enable sendmail

Testing the Configuration

Check that sendmail is set up correctly by executing:

echo "Testing sendmail setup" | sendmail -v your-email@example.com

Verify the email by executing the mail command:

mail

Note: Install the mailutils package if the mail command is not available:

sudo apt install mailutils

Or use the cat command:

cat /var/mail/user

Editing the Configuration File

To customize sendmail settings, modify the configuration file located at /etc/mail/sendmail.mc:

sudo nano /etc/mail/sendmail.mc

Make the required changes to fit your server.
For example, if you want to define the domain name for your server, you can add or modify the following line:

define(`confDOMAIN_NAME', `your_domain.com')dnl

Here, replace your_domain.com with your actual domain name. Then rebuild the configuration file:

sudo m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf

If a "permission denied" error occurs, use:

sudo sh -c "m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf"

Finally, restart the service:

sudo systemctl restart sendmail

Sending Email via sendmail

With sendmail, you can deliver emails, customize subjects, and even add attachments using external tools. Let's go over the process.

Basic Example

To send an email with sendmail, follow these steps.

First, create a file to hold the message:

nano email.txt

Add the content, for example:

Subject: Test Email from Hostman
This is a test email sent using sendmail on Linux.

Deliver the file's contents:

sendmail recipient@example.com < email.txt

The contents of email.txt will be sent to the designated recipient. For verification, run:

mail

Adding Attachments

sendmail by itself doesn't support attachments; you'll need a tool like uuencode to include files. First, install sharutils, which provides uuencode:

sudo apt install sharutils

Here's how to attach a file:

( echo "Subject: Email with attachment"; uuencode file.txt file.txt ) | sendmail recipient@example.com

In this sendmail example, we send an email with file.txt attached. To verify, run the mail command:

mail

Overview of mailx

The mailx command is a simple and effective terminal application for managing emails. It is included in the mailutils package found in most Linux distributions.
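Before installing anything, you can check from a script whether a mail client is already present. The helper below is a small sketch; the function name have is our own convention, not part of any package.

```shell
# Tiny helper: does a command exist on this system?
have() { command -v "$1" >/dev/null 2>&1; }

if have mailx; then
    echo "mailx is already installed at $(command -v mailx)"
else
    echo "mailx not found; install the mailutils (or mailx) package"
fi
```

The same check is handy at the top of automation scripts, so they fail with a clear message instead of a cryptic "command not found" halfway through.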
Installing mailx

Install the mailutils package on your system to use the mailx command.

For Debian/Ubuntu systems:

sudo apt install mailutils

For Red Hat-based systems:

sudo yum install mailx

Sending Email with mailx

This is a simple example demonstrating the use of mailx. Include a subject line and message in your email:

echo "This is the body of the email" | mailx -s "Test Email from Mailx" recipient@example.com

Use the mail command for verification:

mail

Example with Attachments

Use the -A flag with the mailx command to send emails with attachments:

echo "Please find the attached document" | mailx -s "Email with Attachment" -A email.txt recipient@example.com

This sends email.txt as an attachment to the recipient.

Conclusion

Sending email from the Linux command line is an effective method for automating communication tasks, troubleshooting servers, or testing configurations. Using tools such as sendmail and mailx, you can manage everything from simple messages to more complex setups with attachments. This guide has provided detailed instructions to help you begin without difficulty. Use these Linux email commands to improve your workflow, and if you face any issues, feel free to refer back to this tutorial.
18 March 2025 · 7 min to read
