How To Use Journalctl to View and Manipulate Systemd Logs

Adnene Mabrouk
Technical writer
Linux
30.07.2024
Reading time: 5 min

Systemd, the init system and service manager for Linux, introduced a centralized logging system called the Journal. This system collects and stores logs in a binary format, which can be accessed and managed using the journalctl command. Unlike traditional log files scattered across /var/log, systemd's journal offers a structured approach to logging, providing richer metadata and more powerful querying capabilities.

Syntax and Basic Usage

The journalctl command is the primary interface for accessing system logs stored in the systemd journal. Its basic syntax is straightforward:

journalctl [OPTIONS] [MATCHES...]

Run without any options, journalctl displays every entry stored in the journal, starting with the oldest. Here are some essential options for narrowing the output:

  • -b: Show logs from the current boot; add an offset (for example, -b -1) or a boot ID to select an earlier boot.

  • -u UNIT: Show logs for a specific systemd unit.

  • -p PRIORITY: Show logs with a specific priority level.

  • -n NUM: Limit the number of log lines displayed.
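
For a quick, illustrative look at these options (the unit names and output will of course differ from system to system):

journalctl -b
journalctl -n 50
journalctl -p warning
journalctl -u sshd.service -n 20

These show, in order, all messages from the current boot, the 50 most recent entries, messages of warning priority or more severe, and the last 20 entries for the sshd unit.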

Filtering and Querying Logs

One of the strengths of journalctl lies in its powerful filtering capabilities. You can filter logs based on various criteria such as time, priority, systemd unit, and more:

journalctl --since "2023-01-01 00:00:00" --until "2023-01-01 12:00:00"
journalctl -p err -b
journalctl -u nginx.service

These commands respectively filter system logs by time range, by error priority level from the current boot, and by the nginx systemd unit.
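
Filters can also be combined. For example, assuming an nginx.service unit is present, the following narrows the output to warnings and errors logged by nginx since yesterday:

journalctl -u nginx.service -p warning --since yesterday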

Viewing Logs for Specific Units

To inspect logs for a specific systemd service, use the -u option followed by the unit name:

journalctl -u sshd.service

This command displays logs specifically for the sshd service, making it easier to pinpoint issues related to that particular unit.
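
The -u option combines naturally with other flags. For instance, to follow the sshd unit in real time, starting from its 100 most recent entries:

journalctl -u sshd.service -n 100 -f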

Advanced Features and Options

Beyond basic filtering, journalctl offers advanced features to refine your log analysis:

  • Output Formatting: Customize the output using options like --output to display logs in different formats (e.g., json, short, verbose).

  • Follow Mode: Watch logs in real-time as new entries are added (-f or --follow).

  • Data Export: Export logs to a file for further analysis or sharing.

These features empower administrators and developers to efficiently monitor system behavior and diagnose issues promptly.
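
As an illustrative sketch of these features (using nginx.service as an example unit and nginx-today.log as an arbitrary output file name):

journalctl -u nginx.service -o json-pretty
journalctl -f
journalctl -u nginx.service --since today > nginx-today.log

The first command prints nginx entries as formatted JSON, the second follows all new journal entries as they arrive, and the third redirects today's nginx entries to a plain-text file for sharing or offline analysis.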

Persistent Logging

On many distributions, systemd's journal stores logs only in volatile memory (under /run/log/journal), meaning they are lost upon reboot. To ensure logs persist across reboots, configure persistent logging by creating the directory /var/log/journal, which systemd-journald will then use to store logs on disk:

sudo mkdir -p /var/log/journal
sudo systemd-tmpfiles --create --prefix /var/log/journal
sudo systemctl restart systemd-journald

With this setup, logs will be retained across system reboots, making it easier to track long-term issues and historical data.
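
As a simple check (the boot IDs shown will be specific to your machine), you can list the boots the journal now retains and see how much disk space it occupies:

journalctl --list-boots
journalctl --disk-usage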

Log Rotation and Compression

The journal can consume a significant amount of disk space over time. Systemd provides built-in log rotation and compression to manage disk usage efficiently. You can configure these settings in the /etc/systemd/journald.conf file. Key parameters include:

  • SystemMaxUse: The maximum disk space the journal may use.

  • SystemKeepFree: The amount of disk space that should remain free.

  • SystemMaxFileSize: The maximum size of individual journal files.

  • Compress: Whether to compress archived journal files.

Adjust these parameters according to your system's requirements to balance log retention and disk usage.
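
An illustrative configuration might look like the following; the values here are only examples, so pick limits that match your disk capacity:

[Journal]
SystemMaxUse=500M
SystemKeepFree=1G
SystemMaxFileSize=100M
Compress=yes

After editing /etc/systemd/journald.conf, restart the journal daemon with sudo systemctl restart systemd-journald for the new limits to take effect. Existing journals can also be trimmed immediately, for example with journalctl --vacuum-size=500M.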

Security and Access Control

The systemd journal provides mechanisms to control who can read the logs. By default, only users in the systemd-journal group can access the full logs. You can add a user to this group using:

sudo usermod -aG systemd-journal <username>

This ensures that sensitive log data is only accessible to authorized personnel. Additionally, the journal supports Forward Secure Sealing (FSS), which cryptographically seals log files so that any later tampering can be detected, a useful safeguard in environments with stringent data protection requirements.
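
A minimal sketch of enabling and checking sealing (it requires persistent logging, and the verification key it prints must be stored safely off the machine):

sudo journalctl --setup-keys
journalctl --verify

The first command generates the sealing key and a verification key; the second checks the journal files for internal consistency (pass the verification key with --verify-key to also check the seals).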

Analyzing Boot Performance

journalctl can also help analyze system boot performance, identifying potential bottlenecks and issues during startup. Using the -b option with -1, -2, etc., you can view logs from previous boot sessions:

journalctl -b -1

To further break down the boot process, use the systemd-analyze command, which complements journalctl by providing a detailed timeline of the boot sequence.
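
A few commonly used systemd-analyze invocations (available on most systemd-based distributions) are:

systemd-analyze
systemd-analyze blame
systemd-analyze critical-chain

These report the overall time spent in firmware, loader, kernel, and userspace; list units ordered by how long they took to initialize; and show the chain of units on the critical path of the boot, respectively.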

Integrating with Monitoring Tools

For comprehensive system monitoring, journalctl can be integrated with various monitoring and alerting tools like Nagios, Zabbix, and Prometheus. These integrations allow for real-time log analysis and alerting, ensuring that critical issues are detected and addressed promptly. Tools like fluentd or Logstash can be used to forward journal logs to a centralized logging system, facilitating easier analysis and correlation with other logs and metrics.
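
One deliberately simple sketch of such forwarding is to stream journal entries as JSON and pipe them to whatever shipper you use (log-shipper below is a placeholder, not a real command):

journalctl -o json -f | log-shipper

In practice, dedicated integrations such as fluentd's systemd input plugin read the journal directly and are generally preferable for production setups.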

Conclusion

Mastering journalctl is crucial for effective system administration and troubleshooting on modern Linux distributions using systemd. Its centralized and structured approach to logging, coupled with robust filtering and querying capabilities, streamlines the process of diagnosing problems and monitoring system health. By utilizing the techniques and commands outlined in this guide, you can gain deeper insights into your system's operation, ensuring smooth and reliable performance over time. Whether you're an experienced sysadmin or a Linux enthusiast, journalctl remains an indispensable tool in your toolkit for managing systemd logs effectively.

You can buy a Linux VPS for your projects on Hostman.


