
Sentry: Error Tracking and Monitoring

Hostman Team
Technical writer
Servers
15.11.2024
Reading time: 10 min

Sentry is a platform for error logging and application monitoring. The data Sentry receives contains comprehensive information about the context in which an issue occurred, making the problem easier to reproduce, trace to its root cause, and resolve. It's a valuable tool for developers, testers, and DevOps professionals. This open-source project can be deployed on a private or cloud server.

Originally, Sentry was a web interface that displayed traces and exceptions in an organized way, grouping them by type. Over time, it has grown, adding new features, capabilities, and integrations. It's impossible to fully showcase everything it can do in a single article, and even a brief video overview could take up to three hours.

Why Use Sentry When We Have Logging?

Reviewing logs to understand what's happening with a service is helpful. When logs from all services are centralized in one place, like Elastic, OpenSearch, or Loki, it’s even better. However, you can analyze errors and exceptions faster, more conveniently, and with greater detail in Sentry. There are situations when log analysis alone does not clarify an issue, and Sentry comes to the rescue.

Consider cases where a user of your service fails to log in, buy a product, or perform some other action and leaves without submitting a support ticket. Such issues are extremely difficult to identify through logs alone. Even if a support ticket is submitted, analyzing, identifying, and reproducing such specific errors can be costly:

  • What device and browser were used?
  • What function triggered the error, and why? What specific error occurred?
  • What data was on the front end, and what was sent to the backend?

Sentry’s standout feature is the way it provides detailed contextual information about errors in an accessible format, enabling faster response and improved development.

As the project developers claim on their website, “Your code will tell you more than what logs reveal. Sentry’s full-stack monitoring shows a more complete picture of what's happening in your service’s code, helping identify issues before they lead to downtime.”

How It Works

In your application code, you set up a DSN (URL) for your Sentry platform, which serves as the destination for reports (errors, exceptions, and logs). You can also customize, extend, or mask the data being sent as needed.
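As a sketch of how the outgoing data can be masked, the Python SDK accepts a `before_send` callback that runs on every event before it leaves the application. The `scrub_event` function and the field names below are illustrative assumptions for this example; only the callback mechanism itself is part of the SDK.

```python
# A before_send callback can modify or drop events before they are sent
# to Sentry. The keys scrubbed here are illustrative, not an official list.
SENSITIVE_KEYS = {"password", "token", "credit_card"}

def scrub_event(event, hint):
    """Mask sensitive values in the event's extra data before sending."""
    extra = event.get("extra", {})
    for key in list(extra):
        if key.lower() in SENSITIVE_KEYS:
            extra[key] = "[filtered]"
    return event  # returning None instead would drop the event entirely

# Hooked up at initialization (requires the sentry-sdk package):
# import sentry_sdk
# sentry_sdk.init(dsn="http://<key>@sentry.mydomain.com:9000/3",
#                 before_send=scrub_event)
```

The same hook can also enrich events or filter out whole classes of errors you don't want to store.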

Sentry supports JavaScript, Node, Python, PHP, Ruby, Java, and other programming languages.

Image2

In the setup screenshot, you can see various project types, such as a basic Python project as well as Django, Flask, and FastAPI frameworks. These frameworks offer enhanced and more detailed data configurations for report submission.

Usage Options

Sentry offers two main usage options:

  • Self-hosted (deployed on your own server)
  • Cloud-based (includes a limited free version and paid plans with monthly billing)

The Developer version is a free cloud plan suitable for getting acquainted with Sentry.

For anyone interested in Sentry, we recommend at least trying the free cloud version, as it’s a good introduction. However, a self-hosted option is ideal since the cloud version can experience error reporting delays of 1 to 5 minutes, which may be inconvenient.

Self-Hosted Version Installation

Now, let's move on to the technical part. To deploy Sentry self-hosted, we need the getsentry/self-hosted repository. The platform will be set up using Docker Compose.

System Requirements

  • Docker 19.03.6+
  • Docker Compose 2.19.0+
  • 4 CPU cores
  • 16 GB RAM
  • 20 GB free disk space

We’ll be using a VPS from Hostman with Ubuntu 22.04.

System Setup

  1. Update Dependencies

First, we need to update the system packages:

apt update && apt upgrade -y
  2. Install Required Packages

Docker

The Docker version available in the repository is 24.0.7, which meets the requirement, so we'll install it with:

apt install docker.io

Docker Compose

The version offered by apt is 1.29.2-1, which does not meet the version requirement, so we need to install Docker Compose manually. We'll get the latest version directly from the official repository:

VERSION=$(curl --silent https://api.github.com/repos/docker/compose/releases/latest | grep -Po '"tag_name": "\K.*\d')
DESTINATION=/usr/bin/docker-compose
sudo curl -L https://github.com/docker/compose/releases/download/${VERSION}/docker-compose-$(uname -s)-$(uname -m) -o $DESTINATION
sudo chmod 755 $DESTINATION
  3. Verify Docker Compose Installation

To ensure everything is correctly installed, check the version of Docker Compose:

docker-compose --version

Output:

Docker Compose version v2.20.3

Once these steps are completed, you can proceed with deploying Sentry using Docker Compose.

Installation

The Sentry developers have simplified the installation process with a script. Here's how to set it up:

  1. Clone the Repository and Check Out the Release Branch

First, clone the repository and checkout the release branch:

git clone https://github.com/getsentry/self-hosted.git
cd self-hosted
git checkout 24.10.0
  2. Run the Installation Script

Start the installation process by running the script with the following flags:

./install.sh --skip-user-prompt --no-report-self-hosted-issues

Flags explanation:

  • --skip-user-prompt: Skips the prompt for creating a user (we’ll create the user manually, which can be simpler).
  • --no-report-self-hosted-issues: Skips the prompt to send anonymous data to the Sentry developers from your host (this helps developers improve the product, but it uses some resources; decide if you want this enabled).

The script will check system requirements and download the Docker images (docker pull).

  3. Start Sentry

Once the setup is complete, you’ll see a message with the command to run Sentry:

You're all done! Run the following command to get Sentry running:
docker-compose up -d

Run the command to start Sentry:

docker-compose up -d

The Sentry web interface will now be available at your host's IP address on port 9000.

Before your first login, edit the ./sentry/config.yml configuration file and set the line:

system.url-prefix: 'http://server_IP:9000'

And restart the containers:

docker-compose restart
  4. Create a User

We skipped the user creation during the installation, so let’s create the user manually. Run:

docker-compose run --rm web createuser

Enter your email, password, and answer whether you want to give the user superuser privileges.

Upon first login, you’ll see an initial setup screen where you can specify:

  • The URL for your Sentry instance.
  • Email server settings for sending emails.
  • Whether to allow other users to self-register.

At this point, Sentry is ready to use. You can read more about the configuration in the official self-hosted documentation.

Configuration Files

Sentry’s main configuration files include:

.env
./sentry/config.yml
./sentry/sentry.conf.py

By default, 42 containers are launched, and we can customize settings in the configuration files.

Currently, it is not possible to reduce the number of containers due to the complex architecture of the system. 

You can modify the .env file to disable some features.

For example, to disable the collection of private statistics, add this line to .env:

SENTRY_BEACON=False

You can also change the event retention period. By default, it is set to 90 days:

SENTRY_EVENT_RETENTION_DAYS=90

Database and Caching

Project data and user accounts are stored in PostgreSQL. If needed, you can easily configure your own database and Redis in the configuration files.
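For orientation, database settings for the self-hosted build live in sentry/sentry.conf.py and follow Django's `DATABASES` format. The host, credentials, and database name below are placeholders for this sketch, not defaults to copy:

```python
# Fragment of sentry/sentry.conf.py -- Django-style database settings.
# Host, credentials, and database name here are placeholder values.
DATABASES = {
    "default": {
        "ENGINE": "sentry.db.postgres",
        "NAME": "sentry",
        "USER": "sentry",
        "PASSWORD": "change-me",
        "HOST": "postgres",
        "PORT": "5432",
    }
}
```

Pointing this at an external PostgreSQL instance lets you back up project data independently of the containers.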

HTTPS Proxy Setup

To access the web interface securely, you need to set up an HTTPS reverse proxy. The Sentry documentation does not specify a particular reverse proxy, but you can choose any that fits your needs.

After configuring your reverse proxy, you will need to update the system.url-prefix in the config.yml file and adjust the SSL/TLS settings in sentry/sentry.conf.py.
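Since the documentation leaves the choice of reverse proxy open, here is one possible minimal Nginx sketch. The hostname and certificate paths are placeholders for this example:

```nginx
# Illustrative HTTPS reverse proxy in front of Sentry on port 9000.
# sentry.mydomain.com and the certificate paths are placeholder values.
server {
    listen 443 ssl;
    server_name sentry.mydomain.com;

    ssl_certificate     /etc/ssl/certs/sentry.crt;
    ssl_certificate_key /etc/ssl/private/sentry.key;

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

With a proxy like this in place, system.url-prefix should point at the HTTPS hostname rather than the bare IP and port.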

Project Setup and Integration with Sentry

To set up and connect your first project with Sentry, follow these steps:

  1. Create a New Project
  • In the Sentry web interface, click Add New Project and choose your platform.

Image2

  • After creating the project, Sentry will generate a unique DSN (Data Source Name), which you'll need to use in your application to send events to Sentry.

Image3

  2. Configure the traces_sample_rate

Pay attention to the traces_sample_rate setting. It controls the share of performance traces (transactions) sent to Sentry; error events are sampled separately via sample_rate. The default value is 1.0, which sends 100% of traces.

traces_sample_rate=1.0  # 100% of events will be sent

If you set it to 0.25, only 25% of traces will be sent, which can be useful to avoid overwhelming the platform with too much similar data. You can adjust this value depending on your needs.
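Instead of one fixed rate, the SDK also accepts a `traces_sampler` callback that decides a rate per event based on its sampling context. The context keys used below follow the SDK's documented `sampling_context` shape; the URL patterns are illustrative assumptions:

```python
# A traces_sampler callback returns a sampling rate per transaction.
# The route prefixes checked here are illustrative examples.
def traces_sampler(sampling_context):
    name = sampling_context.get("transaction_context", {}).get("name", "")
    if name.startswith("/healthcheck"):
        return 0.0   # drop noisy health checks entirely
    if name.startswith("/api/"):
        return 0.25  # sample 25% of API traffic
    return 1.0       # keep everything else

# Passed at initialization (requires the sentry-sdk package):
# sentry_sdk.init(dsn="...", traces_sampler=traces_sampler)
```

This gives you finer control than a global rate when some endpoints are far noisier than others.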

You can read more about additional parameters of the sentry_sdk in the official documentation.

  3. Example Code with Custom Exception

Here’s an example script that integrates Sentry with a custom exception and function:

import sentry_sdk

sentry_sdk.init(
    dsn="http://979bc0c738a5e4d8b4709e50247035c7@sentry.mydomain.com:9000/3",  # DSN from project creation
    traces_sample_rate=1.0,  # Send 100% of events
    environment="production",  # Set the runtime environment
    release="my-app-1.0.0",  # Specify the app version
    send_default_pii=True,  # Send Personally Identifiable Information (PII)
)

class MyException(Exception):
    pass

def my_function(user, email):
    raise MyException(f"User {user} ({email}) encountered an error.")

def create_user():
    print("Creating a user...")
    my_function('James', 'james@mydomain.com')

if __name__ == "__main__":
    sentry_sdk.capture_message("Just a simple message")  # Send a test message to Sentry
    create_user()  # Simulate the error
  4. Run the Script

Run the Python script:

python main.py

This script will:

  • Initialize Sentry with your project’s DSN.
  • Capture a custom exception when calling my_function.
  • Send an example message to Sentry.
  5. Check Results in Sentry

After running the script, you should see the following in Sentry:

  • The "Just a simple message" message will appear in the event stream.
  • The MyException that is raised in my_function will be captured as an error, and the details of the exception will be logged.

You can also view the captured exception, including the user information (user and email) and any other data you choose to send (such as stack traces, environment, etc.).

Image1

In Sentry, the tags displayed in the error reports include important contextual information that can help diagnose issues. These tags often show:

  • Environment: Indicates the runtime environment of the application, such as "production", "development", or "staging", so you know which environment the error occurred in.
  • Release Version: The version of your application that was running when the error occurred. This is particularly useful for identifying issues that might be specific to certain releases or versions of the application.
  • Hostname: The name of the server or machine where the error happened. This can be helpful when working in distributed systems or multiple server environments, as it shows the exact server where the issue occurred.

These tags appear in the error reports, providing valuable context about the circumstances surrounding the issue. For example, the stack trace might show which functions were involved in the error, and these tags can give you additional information, such as which version of the app was running and on which server, making it easier to trace and resolve issues.

Sentry automatically adds these contextual tags, but you can also customize them by passing additional information when you capture errors, such as environment, release version, or user-related data.
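As one way to customize those tags, a `before_send`-style callback can stamp default tags onto every outgoing event without overwriting tags set elsewhere. The tag names below are illustrative assumptions, not Sentry defaults:

```python
# Sketch: attach default custom tags to every event via a before_send-style
# callback. The tag names and values here are illustrative.
DEFAULT_TAGS = {"team": "payments", "region": "eu-west"}

def add_custom_tags(event, hint):
    tags = event.setdefault("tags", {})
    for key, value in DEFAULT_TAGS.items():
        tags.setdefault(key, value)  # don't overwrite tags set elsewhere
    return event

# Individual tags can also be set at runtime with the SDK call:
# sentry_sdk.set_tag("payment_provider", "stripe")
```

Consistent custom tags make it much easier to filter and group events across services in the Sentry UI.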

Conclusion

In this article, we discussed Sentry and how it can help track errors and monitor applications. We hope it has sparked your interest enough to explore the documentation or try out Sentry.

Despite being a comprehensive platform, Sentry is easy to install and configure. The key is to carefully manage errors and group events and use flexible configurations to avoid chaos. When set up properly, Sentry becomes a powerful and efficient tool for development teams, offering valuable insights into application behavior and performance.


Similar

Servers

Server Hardening

Server hardening is the process of improving security by reducing vulnerabilities and protecting against potential threats. There are several types of hardening: Physical: A method of protection based on the use of physical means, such as access control systems (ACS), video surveillance, safes, motion detectors, and protective enclosures. Hardware: Protection implemented at the hardware level. This includes trusted platform modules (TPM), hardware security modules (HSM, such as Yubikey), and biometric scanners (such as Apple Touch ID or Face ID). Hardware protection measures also include firmware integrity control mechanisms and hardware firewalls. Software: A type of hardening that utilizes software tools and security policies. This involves access restriction, encryption, data integrity control, monitoring anomalous activity, and other measures to secure digital information. We provide these examples of physical and hardware hardening to give a full understanding of security mechanisms for different domains. In this article, we will focus on software protection aspects, as Hostman has already ensured hardware and physical security. Most attacks are financially motivated, as they require high competence and significant time investments. Therefore, it is important to clearly understand what you are protecting and what losses may arise from an attack. Perhaps you need continuous high availability for a public resource, such as a package mirror or container images, and you plan to protect your resource for this purpose. There can be many variations. First, you need to create a threat model, which will consist of the following points: Value: Personal and public data, logs, equipment, infrastructure. Possible Threats: Infrastructure compromise, extortion, system outages. Potential Attackers: Hacktivists, insider threats, competitors, hackers. Attack Methods: Physical access, malicious devices, software hacks, phishing/vishing, supply chain attacks. 
Protection Measures: Periodic software updates, encryption, access control, monitoring, hardening—what we will focus on in this article. Creating a threat model is a non-trivial but crucial task because it defines the overall “flow” for cybersecurity efforts. After you create the threat model, you might need to perform revisions and clarifications depending on changes in business processes or other related parameters. While creating the threat model, you can use STRIDE, a methodology for categorizing threats (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege), and DREAD, a risk assessment model (Damage, Reproducibility, Exploitability, Affected Users, Discoverability). For a more formalized approach, you can also refer to ISO/IEC 27005 or NIST 800-30 standards. There will always be risks that can threaten both large companies and individual users who recently ordered a server to host a simple web application. The losses and criticality may vary, but from a technical perspective, the most common threats are: DoS/DDoS: Denial of service or infrastructure failure, resulting in financial and/or reputational losses. Supply Chain Attack: For example, infecting an artifact repository, such as a Container Registry: JFrog Artifactory, Sonatype Nexus. Full System Compromise: Includes establishing footholds and horizontal movement within the infrastructure. Using your server as a launchpad for complex technological attacks on other resources. If this leads to serious consequences, you will likely spend many hours in court and incur significant financial costs. Gaining advantages by modifying system resources, bypassing authentication, or altering the logic of the entire application. This can lead to reputational and/or financial losses. Some of these attacks can be cut off early or significantly complicated for potential attackers if the server is properly configured. 
Hardening is not a one-time procedure; it is an ongoing process that requires continuous monitoring and adaptation to new threats. The main goal of this article is to equip readers with server hardening techniques. However, in the context of this article, we will discuss a more relevant and practical example—server protection. After ordering a server, we would normally perform the initial setup. This is typically done by system administrators or DevOps specialists. In larger organizations, other technical experts (SecOps, NetOps, or simply Ops) may get involved, but in smaller setups, the same person who writes the code usually handles these tasks. This is when the most interesting misconfigurations can arise. Some people configure manually: creating users, groups, setting network configurations, installing the required software; others write and reuse playbooks—automated scripts. In this article, we will go over the following server hardening checklist: Countering port scanning Configuring the Nginx web server Protecting remote connections via SSH Setting up Port Knocking Configuring Linux kernel parameters Hardening container environments If you later require automation, you can easily write your own playbook, as you will already know whether specific security configurations are necessary. Countering Port Scanning Various types of attackers, from botnet networks to APT (Advanced Persistent Threat) groups, use port scanners and other device discovery systems (such as shodan.io, search.censys.io, zoomeye.ai, etc.) that are available on the internet to search for interesting hosts for further exploitation and extortion. One popular network scanner is Nmap. It allows determining "live" hosts in a network and the services running on them through a variety of scanning methods. Nmap also includes the Nmap Script Engine, which offers both out-of-the-box functionality and the possibility to add custom scripts. 
To scan resources using Nmap, an attacker would execute a command like: nmap -sC -sV -p- -vv --min-rate 10000 $IP Where: $IP is the IP address or range of IP addresses to scan. -sC enables the script engine. -sV detects service versions. -vv (from "double verbose") enables detailed output. --min-rate 10000 is a parameter defining how many requests are sent in one go. In this case, an aggressive mode (10,000 units) is selected. Additionally, the rate modes can be adjusted separately with the flag -T (Aggressive, Insane, Normal, Paranoid, Polite, Sneaky). Example of a scan result is shown below. From this information, we can see that three services are running: SSH on port 22 Web service on port 80 Web service on port 8080 The tool also provides software versions and more detailed information, including HTTP status codes, port status (in this case, "open"), and TTL values, which help to determine if the service is in a container or if there is additional routing that changes the TTL. Thus, an attacker can use a port scanner or search engine results to find your resource and attempt to attack based on the gathered information. To prevent this, we need to break the attacker's pattern and confuse them. Specifically, we can make it so that they cannot identify which port is open and what service is running on it. This can be achieved by opening all ports: 2^16 - 1 = 65535. By "opening," we mean configuring incoming connections so that all connection attempts to TCP ports are redirected to port 4444, on which the portspoof utility dynamically responds with random signatures of various services from the Nmap fingerprint database. To implement this, install the portspoof utility. 
Clone the appropriate repository with the source code and build it: git clone https://github.com/drk1wi/portspoof.gitcd portspoof./configure && make && sudo make install Note that you may need to install dependencies for building the utility: sudo apt install gcc g++ make Grant execution rights and run the automatic configuration script with the specified network interface. This script will configure the firewall correctly and set up portspoof to work with signatures that mask ports under other services. sudo chmod +x $HOME/portspoof/system_files/init.d/portspoof.shsudo $HOME/portspoof/system_files/init.d/portspoof.sh start $NETWORK_INTERFACE Where $NETWORK_INTERFACE is your network interface (in our case, eth0). To stop the utility, run the command: sudo $HOME/portspoof/system_files/init.d/portspoof.sh stop eth0 Repeating the scan using Nmap or any other similar program, which works based on banner checking of running services, will now look like this: Image source: drk1wi.github.io There is another trick that, while less effective as it does not create believable service banners, allows you to avoid additional utilities like portspoof. First, configure the firewall so that after the configuration, you can still access the server via SSH (port 22) and not disrupt the operation of existing legitimate services. sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 -j RETURN Then, initiate the process of redirecting all TCP traffic to port 5555: sudo iptables -t nat -A PREROUTING -i eth0 -p tcp -m conntrack --ctstate NEW -j REDIRECT --to-ports 5555 Now, create a process that generates pseudo-random noise on port 5555 using NetCat: nc -lp 5555 < /dev/urandom These techniques significantly slow down the scan because the scanner will require much more time to analyze each of the 65,535 "services." Now, the primary task of securing the server is complete! 
Configuring the Nginx Web Server Nmap alone is not sufficient for a comprehensive analysis of a web application. In addition to alternatives like naabu from Project Discovery and rustscan, there are advanced active reconnaissance tools. These not only perform standard port scanning but specialize in subdomain enumeration, directory brute-forcing, HTTP parameter testing (such as dirbuster, gobuster, ffuf), and identifying and exploiting vulnerabilities in popular CMS platforms (wpscan, joomscan) and specific attacks (sqlmap for SQL injections, tplmap for SSTI). These scanners work by searching for endpoints of an application, utilizing techniques like brute-forcing, searching through HTML pages, or connected JavaScript files. During their operation, millions of iterations occur comparing the response with the expected output to identify potential vulnerabilities and expose the service to exploitation. To protect web applications from such scanners, we suggest configuring the web server. In this example, we’ll configure Nginx, as it is one of the most popular web servers. In most configurations, Nginx proxies and exposes an application running on the server or within a cluster. This setup allows for rich configuration options. To enhance security, we can add HTTP Security Headers and the lightweight and powerful ChaCha20 encryption protocol for devices that lack hardware encryption support (such as mobile phones). Additionally, rate limiting may be necessary to prevent DoS and DDoS attacks. HTTP headers like Server and X-Powered-By reveal information about the web server and technologies used, which can help an attacker determine potential attack vectors.We need to remove these headers. 
To do this, install the Nginx extras collection: sudo apt install nginx-extras Then, configure the Nginx settings in /etc/nginx/nginx.conf: server_tokens off;more_clear_headers Server;more_clear_headers 'X-Powered-By'; Also, add headers to mitigate Cross-Site Scripting (XSS) attack surface: add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;add_header X-XSS-Protection "1; mode=block"; And protect against Clickjacking: add_header X-Frame-Options "SAMEORIGIN"; You can slow down automated attacks by setting request rate limits from a single IP address. Do this only if you are confident it won't impact service availability or functionality. A sample configuration might look like this: http { limit_req_zone $binary_remote_addr zone=req_zone:10m rate=10r/s; server { location /api/ { limit_req zone=req_zone burst=20 nodelay; } } } This configuration limits requests to 10 per second from a single IP, with a burst buffer of 20 requests. To protect traffic from MITM (Man-in-the-Middle) attacks and ensure high performance, enable TLS 1.3 and configure strong ciphers: ssl_protocols TLSv1.2 TLSv1.3; ssl_ciphers "TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256"; ssl_prefer_server_ciphers on; You can also implement additional web application protection using a WAF (Web Application Firewall). Some free solutions include: BunkerWeb — Lightweight, popular, and effective WAF. ModSecurity — A powerful Nginx module with flexible rules. To perform basic configuration of ModSecurity, you can install it like this: sudo apt install libnginx-mod-security2 Then, enable ModSecurity in the Nginx configuration: server { modsecurity on; modsecurity_rules_file /etc/nginx/modsecurity.conf; } Use Security Headers to analyze HTTP headers and identify possible configuration errors. When configuring any infrastructure components, it's important to follow best practices. 
For instance, to create secure Nginx configurations, you can use an online generator, which allows you to easily generate optimal base settings for Nginx, including ciphers, OCSP Stapling, logging, and other parameters. Protecting Remote Connections via SSH If your server is still secured only by a password, this is a quite insecure configuration. Even complex passwords can eventually be compromised, especially when outdated or vulnerable versions of SSH are in use, allowing brute force attacks without restrictions, such as in CVE-2020-1616. Below is a table showing how long it might take to crack a password based on its complexity Image source: security.org It’s recommended to disable password authentication and set up authentication using private and public keys. Generate a SSH key pair (public and private keys) on your workstation: ssh-keygen -t ed25519 -C $EMAIL Where $EMAIL is your email address, and -t ed25519 specifies the key type based on elliptic curve cryptography (using the Curve25519 curve). This provides high performance, compact key sizes (256 bits), and resistance to side-channel attacks. Copy the public key to the server. Read your public key from the workstation and save it to the authorized_keys file on the server, located at $HOME/.ssh/authorized_keys (where $HOME is the home directory of the user on the server you are connecting to). You can manually add the key or use the ssh-copy-id utility, which will prompt for the password. ssh-copy-id user@$IP Alternatively, you can add the key directly through your Hostman panel. Go to the Cloud servers → SSH Keys section and click Add SSH key.   Enter your key and give it a name. Once added, you can upload this key to a specific virtual machine or add it directly during server creation in the 6. Authorization section. 
To further secure SSH connections, adjust the SSH server configuration file at /etc/ssh/sshd_config by applying the following settings: PermitRootLogin no — Prevents login as the root user. PermitEmptyPasswords no — Disallows the use of empty passwords. X11Forwarding no — Disables forwarding of graphical applications. AllowUsers $USERS — Defines a list of users allowed to log in via SSH. Separate usernames with spaces. PasswordAuthentication no — Disables password authentication. PubkeyAuthentication yes — Enables public and private key authentication. HostbasedAuthentication no — Disables host-based authentication. PermitUserEnvironment no — Disallows changing environment variables to limit exploitation through variables like LD_PRELOAD. After adjusting the configuration file, restart the OpenSSH daemon: systemctl restart sshd Finally, after making these changes, you can conduct a security audit using a service like ssh-audit or this website designed for SSH security checks. This will help ensure your configuration is secure and appropriately hardened. Configuring Port Knocking SSH is a relatively secure protocol, as it was developed by the OpenBSD team, which prides itself on creating an OS focused on security and data integrity. However, even in such widely used and serious software, software vulnerabilities occasionally surface. Some of these vulnerabilities allow attackers to perform user enumeration. Although these issues are typically patched promptly, it doesn't eliminate the fact that recent critical vulnerabilities, like regreSSHion, have allowed for Remote Code Execution (RCE). Although this particular exploit requires special conditions, it highlights the importance of protecting your server's data. One way to further secure SSH is to hide the SSH port from unnecessary visibility. Changing the SSH port seems pointless because, after the first scan by an attacker, they will quickly detect the new port. 
A more effective strategy is to use Port Knocking, a method of security where a "key" (port knocking sequence) is used to open the port for a short period, allowing authentication. Install knockd using your package manager: sudo apt install knockd -y Configure knockd by editing the /etc/knockd.conf file to set the port knocking sequence and the corresponding actions. For example: [options] UseSyslog [openSSH] sequence = 7000,8000,9000 seq_timeout = 5 command = /usr/sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT tcpflags = syn [closeSSH] sequence = 9000,8000,7000 seq_timeout = 5 command = /usr/sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT tcpflags = syn sequence: The port sequence that needs to be "knocked" (accessed) in the correct order. seq_timeout: The maximum time allowed to send the sequence (in seconds). command: The command to be executed once the sequence is received correctly. It typically opens or closes the SSH port (or another service). %IP%: The client IP address that sent the sequence (the one "knocking"). tcpflags: The SYN flag is used to filter out other types of packets. Start and enable knockd to run at boot: sudo systemctl enable --now knockd Use knock or nmap to send the correct port knocking sequence: Example command with nmap: nmap -Pn --max-retries 0 -p 7000,8000,9000 $IP Example command with knock: knock $IP 7000 8000 9000 Where $IP is the IP address of the server you're trying to connect to. If everything is configured correctly, once the correct sequence of port knocks is received, the SSH port (port 22) will temporarily open. At this point, you can proceed with the standard SSH authentication process. This technique isn't limited to just SSH; you can configure port knocking for other services if needed (e.g., HTTP, FTP, or any custom service). 
Port knocking adds an extra layer of security by obscuring the SSH service from the general public and only allowing access to authorized clients who know the correct sequence. Configuring Linux Kernel Parameters In today's insecure world, one of the common types of attack is Living off the Land (LOTL). This is when legitimate tools and resources are used to exploit and escalate privileges on the compromised system. One such tool that attackers frequently leverage is the ability to view kernel system events and message buffers. This technique is even used by advanced persistent threats (APTs). It is important to secure your Linux kernel configurations to mitigate the risk of such exploits. Below are some recommended settings that can enhance the security of your system. To enable ASLR (Address Space Layout Randomization), set these parameters: kernel.randomize_va_space = 2: Randomizes the memory spaces for applications to prevent attackers from knowing where specific processes will run.. kernel.kptr_restrict = 2: Restricts user-space applications from obtaining kernel pointer information. Also, disable system request (SysRq) functionality: kernel.sysrq = 0 And restrict access to kernel message buffer (dmesg): kernel.dmesg_restrict = 1 With this configuration, an attacker will not know a program's memory address and won't be able to infiltrate any important process for exploitation purposes. They will also be unable to view the kernel message buffer (dmesg) or send debugging requests (sysrq), which will further complicate their interaction with the system. Hardening Container Environments In modern architectures, container environments are an essential part of the infrastructure, offering significant advantages for developers, DevOps engineers, and system administrators. However, securing these environments is crucial to protect against potential threats and ensure the integrity of your systems. 
To protect container environments, it's essential to adopt secure development practices and integrate DevSecOps alongside existing DevOps methodologies. This also includes forming resilient patterns and building strong security behaviors from an information security perspective. Use minimal images, such as Google Distroless, and Software Composition Analysis (SCA) tools to check the security of your images. You can use the following methods to analyze the security of an image.

Docker Scout and Docker SBOM

Docker Scout checks images for known vulnerabilities, and Docker SBOM generates a list of the artifacts that make up an image. Install both as plugins for Docker.

Create a directory for Docker plugins (if it doesn't exist):

mkdir -pv $HOME/.docker/cli-plugins

Install Docker Scout:

curl -sSfL https://raw.githubusercontent.com/docker/scout-cli/main/install.sh | sh -s --

Install Docker SBOM:

curl -sSfL https://raw.githubusercontent.com/docker/sbom-cli-plugin/main/install.sh | sh -s --

To check for vulnerabilities in an image using Docker Scout:

docker scout cves gradle

To generate an SBOM using Docker SBOM (which internally uses Syft):

docker sbom $IMAGE_NAME

where $IMAGE_NAME is the name of the container image you wish to analyze. To save the SBOM in JSON format for further analysis:

docker sbom alpine:latest --format syft-json --output sbom.txt

sbom.txt will be the file containing the generated SBOM.

Container Scanning with Trivy

Trivy is a powerful security scanner for container images that helps identify vulnerabilities and misconfigurations. Install Trivy using the following script:

curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sudo sh -s -- -b /usr/local/bin v0.59.1

Run a security scan for a container image:

trivy image $IMAGE_NAME

where $IMAGE_NAME is the name of the image you want to analyze.
For detailed output in JSON format, use:

trivy -q image --ignore-unfixed --format json --list-all-pkgs $IMAGE_NAME

Even the minimal practices listed in this section provide a fairly solid level of container security.

Conclusion

Using the techniques outlined in this article, you can significantly complicate or even prevent a compromise by increasing the effort and uncertainty an attacker faces. However, keep in mind that these hardening measures should be balanced with system usability to avoid creating unnecessary difficulties for legitimate users.
19 March 2025 · 18 min to read
Linux

How to Use SSH Keys for Authentication

Many cloud applications are built on the popular SSH protocol, which is widely used for managing network infrastructure, transferring files, and executing remote commands. SSH stands for Secure Shell: it provides a shell (command-line interface) over a connection between remote hosts and ensures that the connection is secure (encrypted and authenticated). SSH is available on all popular operating systems, including Windows and Linux distributions such as Ubuntu and Debian. The protocol establishes an encrypted communication channel within an unprotected network by using a pair of public and private keys.

Keys: The Foundation of SSH

SSH operates on a client-server model. The user runs an SSH client (a terminal in Linux or a graphical application in Windows), while the server side runs a daemon that accepts incoming connections from clients. In practice, an SSH channel enables remote terminal management of a server: after a successful connection, everything entered in the local console is executed directly on the remote server.

The SSH protocol uses a pair of keys for encrypting and decrypting information: a public key and a private key. These keys are mathematically linked. The public key is shared openly, resides on the server, and is used to encrypt data. The private key is confidential, resides on the client, and is used to decrypt data.

Of course, keys are not generated manually but with special tools: key generators (keygens). These utilities generate new keys using the encryption algorithms fundamental to SSH technology.

More About How SSH Works

Exchange of Public Keys

SSH uses symmetric encryption for the session: the two hosts generate a shared session key derived from the public and private data of each host. For example, host A generates a public and private key pair and sends the public key to host B. Host B does the same, sending its public key to host A.
Using the Diffie-Hellman algorithm, host A creates a key by combining its private key with the public key of host B. Likewise, host B creates an identical key by combining its private key with the public key of host A. Both hosts thus independently generate the same symmetric encryption key, which is then used for secure communication; hence the term symmetric encryption.

Message Verification

To verify messages, hosts use a hash function that produces a fixed-length string from the following data:

- The symmetric encryption key
- The packet number
- The encrypted message text

The result of hashing these elements is called an HMAC (Hash-based Message Authentication Code). The client generates an HMAC and sends it to the server. The server then creates its own HMAC from the same data and compares it to the client's HMAC. If they match, the verification is successful, ensuring that the message is authentic and has not been tampered with.

Host Authentication

Establishing a secure connection is only part of the process. The next step is authenticating the user connecting to the remote host, as the user may not have permission to execute commands. There are several authentication methods:

- Password authentication: the user sends an encrypted password to the server. If the password is correct, the server allows the user to execute commands.
- Certificate-based authentication: the user initially provides the server with a password and the public part of a certificate. Once authenticated, subsequent interactions in the session proceed without repeated password entries.

These methods ensure that only authorized users can access the remote system while maintaining secure communication.

Encryption Algorithms

A key factor in the robustness of SSH is that decrypting the symmetric key is possible only with the private key, not the public key, even though the symmetric key is derived from both.
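Before turning to the specific algorithms, the key exchange and message verification described above can be illustrated with a short, deliberately insecure Python sketch. The tiny numbers and the message text are arbitrary illustrative values; real implementations use very large primes or elliptic curves:

```python
import hashlib
import hmac

# Toy Diffie-Hellman with deliberately tiny, insecure parameters.
p, g = 23, 5                  # public modulus and generator (illustrative)
a_private, b_private = 6, 15  # each host's secret value

a_public = pow(g, a_private, p)  # host A sends this to host B
b_public = pow(g, b_private, p)  # host B sends this to host A

# Each side combines its own private value with the other's public value.
shared_a = pow(b_public, a_private, p)
shared_b = pow(a_public, b_private, p)
assert shared_a == shared_b  # both hosts derived the same session key

# Use the shared key to authenticate a message with an HMAC.
session_key = str(shared_a).encode()
message = b"packet 1: ls -l"
client_mac = hmac.new(session_key, message, hashlib.sha256).hexdigest()
server_mac = hmac.new(session_key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(client_mac, server_mac))  # True: message verified
```

An eavesdropper who sees only p, g, and the two public values cannot feasibly compute the shared key when the parameters are large; that is the property the next section's algorithms provide.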
Achieving this property requires specific encryption algorithms. There are three primary families of such algorithms: RSA, DSA, and algorithms based on elliptic curves, each with distinct characteristics:

- RSA: Developed in 1978, RSA is based on integer factorization. Since factoring large semiprime numbers (products of two large primes) is computationally difficult, the security of RSA depends on the size of the chosen factors. The key length ranges from 1024 to 16384 bits.
- DSA: DSA (Digital Signature Algorithm) is based on discrete logarithms and modular exponentiation. While similar to RSA in purpose, it uses a different mathematical approach to link the public and private keys. DSA key length is limited to 1024 bits.
- ECDSA and EdDSA: These algorithms are based on elliptic curves rather than modular exponentiation over integers. Their security rests on the assumption that no efficient algorithm exists for the discrete logarithm problem on elliptic curves. Although the keys are shorter, they provide the same level of security.

Key Generation

Each operating system has its own utilities for quickly generating SSH keys. In Unix-like systems, the command to generate a key pair is:

ssh-keygen -t rsa

The -t flag specifies the encryption algorithm. Other supported types include:

- dsa
- ecdsa
- ed25519

You can also specify the key length with the -b flag. Be cautious, as the security of the connection depends on the key length:

ssh-keygen -b 2048 -t rsa

After entering the command, the terminal will prompt you to specify a file path and name for storing the generated keys. You can accept the default path by pressing Enter, which will create the standard file names id_rsa (private key) and id_rsa.pub (public key). The public key is stored in the file with the .pub extension, and the private key in the file without an extension. Next, the command will prompt you to enter a passphrase.
While not mandatory (the passphrase is unrelated to the SSH protocol itself), using a passphrase is recommended to prevent unauthorized use of the key by another user on the local system. Note that if a passphrase is set, you must enter it each time you establish a connection.

To change the passphrase later, you can use:

ssh-keygen -p

Or specify all parameters at once with a single command:

ssh-keygen -p -P old_password -N new_password -f path_to_files

For Windows, there are two main approaches:

- ssh-keygen from OpenSSH: the OpenSSH client for Windows provides the same ssh-keygen command as Linux, following the same steps.
- PuTTY: a graphical application whose bundled key generator lets users create public and private key pairs at the press of a button.

Installing the Client and Server Components

The primary tool for SSH connections on Linux (both client and server) is OpenSSH. While it is typically pre-installed, there are situations (such as minimal Ubuntu installations) where manual installation is necessary. The general command for installing SSH, followed by entering the superuser password, is:

sudo apt-get install ssh

However, in some distributions SSH is split into separate client and server packages.

For the Client

To check whether the SSH client is installed on your local machine, simply run:

ssh

If the client is present, the terminal will display a usage description. If nothing appears, install it manually:

sudo apt-get install openssh-client

You will be prompted to enter the superuser password during installation. Once completed, SSH connectivity will be available.

For the Server

Similarly, the server-side part of the OpenSSH toolkit is required on the remote host.
To check if an SSH server is running on the host, try connecting locally via SSH:

ssh localhost

If the SSH daemon is running, you will see a message indicating a successful connection. If not, install the SSH server:

sudo apt-get install openssh-server

As with the client, the terminal will prompt you to enter the superuser password. After installation, check whether the service is active:

sudo service ssh status

You can modify SSH settings as needed by editing the configuration file /etc/ssh/sshd_config, for example, to change the default port to a custom one. After making changes to the configuration, you must restart the SSH service to apply them:

sudo service ssh restart

Copying an SSH Key to the Server

On Hostman, you can easily add SSH keys to your servers using the control panel.

Using a Special Copy Command

After generating a public SSH key, it can be installed as an authorized key on a server. This allows quick connections without the need to repeatedly enter a password. The most common way to copy the key is the ssh-copy-id command:

ssh-copy-id -i ~/.ssh/id_rsa.pub name@server_address

This command assumes you used the default paths and file names during key generation; if not, replace ~/.ssh/id_rsa.pub with your custom path and file name. Replace name with your username on the remote server and server_address with the host address.

If the usernames on the client and server are the same, you can shorten the command:

ssh-copy-id -i ~/.ssh/id_rsa.pub server_address

If you set a passphrase during key creation, the terminal will prompt you for it; otherwise, the key is copied immediately. In some cases, the server may be configured to use a non-standard port (the default is 22).
If that's the case, specify the port using the -p flag:

ssh-copy-id -i ~/.ssh/id_rsa.pub -p 8129 name@server_address

Semi-Manual Copying

On some operating systems the ssh-copy-id command may not be available, even though SSH connections to the server work. In such cases, copy the key manually with a series of commands:

ssh name@server_address 'mkdir -pm 700 ~/.ssh; echo '$(cat ~/.ssh/id_rsa.pub)' >> ~/.ssh/authorized_keys; chmod 600 ~/.ssh/authorized_keys'

This sequence does the following:

- Creates the .ssh directory on the server (if it doesn't already exist) with the correct permissions (700) for reading and writing by the owner.
- Creates or appends to the authorized_keys file, which stores the public keys of all authorized users, adding the public key from the local id_rsa.pub file.
- Sets permissions (600) on the authorized_keys file so that it can only be read and written by the owner.

If the authorized_keys file already exists, the new key is simply appended. Once this is done, future connections to the server use the public key added to authorized_keys:

ssh name@server_address

Manual Copying

Some hosting platforms offer server management through alternative interfaces, such as a web-based control panel; these usually provide an option to add a public key manually, and the web interface may even simulate a terminal for interacting with the server. Regardless of the method, the remote host must contain the file ~/.ssh/authorized_keys, which lists all authorized public keys. Simply copy the client's public key (found in ~/.ssh/id_rsa.pub by default) into this file.

If the key pair was generated with a graphical application (typically PuTTY on Windows), copy the public key directly from the application and add it to the existing content of authorized_keys.
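The semi-manual steps above (create ~/.ssh with mode 700, append the key, set authorized_keys to mode 600) can also be scripted. Below is a minimal Python sketch that performs the same file operations; the function name and the sample key string are invented for illustration, and the demonstration targets a temporary directory rather than a real home directory:

```python
import tempfile
from pathlib import Path

def install_public_key(public_key: str, home: Path) -> Path:
    """Append a public key to home/.ssh/authorized_keys with safe permissions."""
    ssh_dir = home / ".ssh"
    ssh_dir.mkdir(mode=0o700, exist_ok=True)   # like: mkdir -pm 700 ~/.ssh
    auth_file = ssh_dir / "authorized_keys"
    with auth_file.open("a") as f:             # append, creating if missing
        f.write(public_key.rstrip("\n") + "\n")
    auth_file.chmod(0o600)                     # like: chmod 600 authorized_keys
    return auth_file

# Demonstration against a temporary directory instead of a real $HOME.
with tempfile.TemporaryDirectory() as tmp:
    path = install_public_key("ssh-rsa EXAMPLEKEY user@laptop", Path(tmp))
    print(path.read_text().strip())  # the key line that was appended
```

In practice, this logic would run on the server (or be wrapped in the ssh one-liner shown above); the point is simply that the directory and file permissions matter as much as the key itself, since sshd refuses keys with overly permissive files.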
Connecting to a Server

To connect to a remote server from a Linux machine, enter the following command in the terminal:

ssh name@server_address

Alternatively, if the local username is identical to the remote username, you can shorten the command to:

ssh server_address

The system will then prompt you to enter the password. Type it and press Enter. Note that the terminal will not display the password as you type it. Just as with ssh-copy-id, you can explicitly specify the port when connecting:

ssh client@server_address -p 8129

Once connected, you control the remote machine via the terminal: any command you enter is executed on the server side.

Conclusion

Today, SSH is one of the most widely used protocols in development and system administration, so a basic understanding of its operation is essential. This article provided an overview of SSH connections, briefly explained the encryption algorithms involved (RSA, DSA, ECDSA, and EdDSA), and demonstrated how public and private key pairs establish secure connections to a personal server, keeping exchanged messages inaccessible to third parties. We also covered the primary commands for Unix-like operating systems that generate key pairs and grant clients SSH access by copying the public key to the server.
30 January 2025 · 10 min to read
Servers

How to Protect a Server from DDoS Attacks

A DDoS attack (Distributed Denial of Service) aims to overwhelm a network with excessive traffic, degrading its performance or causing a complete outage; hence the term "denial of service."

The frequency and intensity of DDoS attacks have been rising rapidly. A Cloudflare report noted that in 2021 the number of attacks grew by one-third compared to 2020, with peak activity observed in December. The duration of a DDoS attack can vary. According to research by Securelist:

- 94.95% of attacks end within four hours.
- 3.27% last between 5 and 9 hours.
- 1.05% persist for 10 to 19 hours.
- Only 0.73% of all attacks extend beyond 20 hours.

Effective Tools for Protecting a Server from DDoS Attacks

If you don't want to rely on vendor solutions, paid services, or proprietary software, you can use the following tools to defend against DDoS attacks:

- IPTables: a powerful firewall tool in Linux systems that allows precise control over incoming and outgoing traffic.
- CSF (ConfigServer Security and Firewall): a robust security tool that simplifies managing firewall rules and provides additional protection mechanisms.
- Nginx modules: modules designed for mitigating DDoS attacks, such as limiting the number of requests per IP or delaying excessive requests.
- Software filters: tools or scripts that analyze and filter traffic to block malicious or excessive requests, helping maintain service availability.

IPTables: Blocking Bots by IP Address

IPTables helps protect a server from basic DDoS attacks. Its primary function is to filter incoming traffic through tables of rules, and the resource owner can add custom tables. Each table contains a set of rules that govern the tool's behavior in specific situations. By default, there are two main response options: ACCEPT (allow access) and REJECT (block access). IPTables also makes it possible to limit the number of connections.
If a single IP address exceeds the allowed number of connections, the tool blocks access for that IP. You can extend the tool's functionality with additional criteria:

- limit: caps the number of packets within a chosen time period.
- hashlimit: works like limit, but applies to groups of hosts, subnets, and ports.
- mark: marks packets for traffic limiting and filtering.
- connlimit: limits the number of simultaneous connections for a single IP address or subnet.
- iprange: defines a range of IP addresses that the tool does not treat as a subnet.

Additionally, IPTables supports criteria such as owner, state, TOS, TTL, and unclean match for personalized configurations that effectively protect a resource from DDoS attacks.

The ipset kernel module lets you maintain a list of addresses that exceed the specified connection limit, and its timeout parameter expires entries after a set time, which is usually enough to ride out a DDoS attack.

By default, IPTables settings return to their basic configuration after a system reboot. To save the settings, you can use additional utilities (such as iptables-save or iptables-persistent), but it is recommended to start with the default behavior to avoid permanently saving incorrect rules that could block server access for everyone.

ConfigServer Security and Firewall

While IPTables is a convenient and effective tool, it can be quite complex to configure: you need to learn how to manage it and possibly write additional scripts, and if something goes wrong, your resource may end up a "closed club" for just a few users. CSF (ConfigServer Security and Firewall) is a "turnkey" configurator: you only need to set the correct parameters, and it takes care of the server's security.

Installing the Server Firewall

The preliminary installation steps involve downloading two additional components required to run CSF: the Perl interpreter and the libwww library.
The next step is to install ConfigServer Security and Firewall itself. Since the tool is not available in the official repositories, download the ready-made archive directly:

cd /usr/src
wget https://download.configserver.com/csf.tgz

After downloading, extract the archive, move into the extracted directory, and run the installation process. Once installed successfully, you can proceed with configuring CSF.

Configuring the Server Firewall

By default, ConfigServer Security and Firewall runs in testing mode: settings remain active for 5 minutes, after which any changes are reset. This test format is useful for conducting experiments and spotting errors in the applied configuration. To switch to live mode, change the TESTING value to 0.

Proper configuration of CSF ensures reliable protection against DDoS attacks. Here are some essential settings and commands:

- Specify allowed incoming ports: TCP_IN = "22,23,25,36,75,87"
- Specify allowed outgoing ports: TCP_OUT = "22,23,25,36,75,87"
- Enable email notifications for SSH connections: LF_SSH_EMAIL_ALERT = "1"
- Add an IP address to the exception list (useful for server management teams): csf -a 192.168.0.7
- Block a specific IP address from connecting to the server: csf -d 192.168.0.6

Nginx Modules

How can you protect your server from DDoS attacks using simpler methods? Use the Nginx modules limit_conn and limit_req. The limit_conn module limits the maximum number of simultaneous connections to the server, while limit_req limits the number of requests within a specified time frame. For example, to limit each client to 30 simultaneous connections and 3 requests per second, the configuration looks as follows (the zone directives allocate shared memory for tracking client addresses in the http block, while limit_conn and limit_req apply the limits in a server or location block):

limit_conn_zone $binary_remote_addr zone=perip:30m;
limit_req_zone $binary_remote_addr zone=dynamic:30m rate=3r/s;

limit_conn perip 30;
limit_req zone=dynamic burst=7;

This configuration allows only 3 requests per second per client. Additional requests are queued, and the burst parameter controls the queue size.
For example, with burst set to 7, the module queues up to 7 requests that arrive above the rate limit, while any further requests are rejected with an error.

Software Filter

Server protection against DDoS attacks can also be achieved within the web application itself. A traffic filter relies on JavaScript, which most bots cannot execute, effectively redirecting attack traffic to a placeholder page. The operation of the filter is simple: the configuration defines conditions for identifying bots, and when a visitor matches those conditions, they are redirected to a placeholder page instead of the requested page. The filter can also state the reason for the redirection.
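The decision logic of such a filter can be sketched in a few lines. In the hypothetical example below, clients that successfully executed the JavaScript challenge carry a cookie; the cookie name js_ok, the placeholder URL, and the reason string are all invented for illustration (a real filter would use a signed, expiring token rather than a plain cookie):

```python
# Sketch of a JS-challenge traffic filter (all names are illustrative).
PLACEHOLDER = "/challenge.html"

def route(path: str, cookies: dict) -> str:
    """Return the page to serve: the requested path for verified clients,
    or the placeholder page (with a reason) for suspected bots."""
    if cookies.get("js_ok") == "1":  # set by JavaScript most bots never run
        return path
    return PLACEHOLDER + "?reason=js-check-failed"

print(route("/shop", {"js_ok": "1"}))  # /shop
print(route("/shop", {}))              # /challenge.html?reason=js-check-failed
```

The same pattern generalizes to other signals (request rate, header anomalies); the filter only decides between the real page and the placeholder, so it stays cheap even under heavy load.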
03 December 2024 · 6 min to read
