
Sentry: Error Tracking and Monitoring

Hostman Team
Technical writer
Servers
15.11.2024
Reading time: 10 min

Sentry is a platform for error logging and application monitoring. Each event Sentry receives carries comprehensive information about the context in which an issue occurred, making it easier to reproduce the problem, trace its root cause, and resolve it. It's a valuable tool for developers, testers, and DevOps professionals. This open-source project can be deployed on a private or cloud server.

Originally, Sentry was a web interface for displaying traces and exceptions in an organized way, grouping them by type. Over time, it has grown, adding new features, capabilities, and integrations. It's impossible to fully showcase everything it can do in a single article, and even a brief video overview could take up to three hours.

Why Use Sentry When We Have Logging?

Reviewing logs to understand what's happening with a service is helpful. When logs from all services are centralized in one place, like Elastic, OpenSearch, or Loki, it’s even better. However, you can analyze errors and exceptions faster, more conveniently, and with greater detail in Sentry. There are situations when log analysis alone does not clarify an issue, and Sentry comes to the rescue.

Consider cases where a user of your service fails to log in, buy a product, or perform some other action and leaves without submitting a support ticket. Such issues are extremely difficult to identify through logs alone. Even if a support ticket is submitted, analyzing, identifying, and reproducing such specific errors can be costly:

  • What device and browser were used?
  • What function triggered the error, and why? What specific error occurred?
  • What data was on the front end, and what was sent to the backend?

Sentry’s standout feature is the way it provides detailed contextual information about errors in an accessible format, enabling faster response and improved development.

As the project developers claim on their website, “Your code will tell you more than what logs reveal. Sentry’s full-stack monitoring shows a more complete picture of what's happening in your service’s code, helping identify issues before they lead to downtime.”

How It Works

In your application code, you set up a DSN (URL) for your Sentry platform, which serves as the destination for reports (errors, exceptions, and logs). You can also customize, extend, or mask the data being sent as needed.
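Masking is typically done with a before_send hook passed to sentry_sdk.init: the hook receives each event as a dictionary and can modify it, or drop it entirely by returning None. Here is a minimal pure-Python sketch (the field and logger names are illustrative, not prescribed by Sentry):

```python
# Sketch of a before_send hook that masks data prior to submission.
# The "user"/"email" structure follows the Sentry event payload, but
# treat the specific fields below as illustrative placeholders.

def scrub_event(event, hint):
    """Mask PII and drop noisy events before they are sent to Sentry."""
    user = event.get("user") or {}
    if "email" in user:
        user["email"] = "[redacted]"
        event["user"] = user
    # Returning None drops the event entirely.
    if event.get("logger") == "noisy.healthcheck":
        return None
    return event
```

The hook would then be registered once at startup, e.g. sentry_sdk.init(dsn="...", before_send=scrub_event).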

Sentry supports JavaScript, Node, Python, PHP, Ruby, Java, and other programming languages.

Image2

In the setup screenshot, you can see various project types, such as a basic Python project as well as Django, Flask, and FastAPI frameworks. These frameworks offer enhanced and more detailed data configurations for report submission.

Usage Options

Sentry offers two main usage options:

  • Self-hosted (deployed on your own server)
  • Cloud-based (includes a limited free version and paid plans with monthly billing)

The Developer version is a free cloud plan suitable for getting acquainted with Sentry.

For anyone interested in Sentry, we recommend at least trying the free cloud version, as it’s a good introduction. However, a self-hosted option is ideal since the cloud version can experience error reporting delays of 1 to 5 minutes, which may be inconvenient.

Self-Hosted Version Installation

Now, let's move on to the technical part. To deploy Sentry self-hosted, we need the getsentry/self-hosted repository. The platform will be set up using Docker Compose.

System Requirements

  • Docker 19.03.6+
  • Docker Compose 2.19.0+
  • 4 CPU cores
  • 16 GB RAM
  • 20 GB free disk space

We’ll be using a VPS from Hostman with Ubuntu 22.04.

System Setup

  1. Update Dependencies

First, we need to update the system packages:

apt update && apt upgrade -y
  2. Install Required Packages

Docker

The Docker version available in the repository is 24.0.7, which meets the requirement, so we can install it with:

apt install docker.io

Docker Compose

The version offered by apt is 1.29.2-1, which does not meet the version requirement, so we need to install it manually. We'll get the latest version directly from the official repository:

VERSION=$(curl --silent https://api.github.com/repos/docker/compose/releases/latest | grep -Po '"tag_name": "\K.*\d')
DESTINATION=/usr/bin/docker-compose
sudo curl -L https://github.com/docker/compose/releases/download/${VERSION}/docker-compose-$(uname -s)-$(uname -m) -o $DESTINATION
sudo chmod 755 $DESTINATION
  3. Verify Docker Compose Installation

To ensure everything is correctly installed, check the version of Docker Compose:

docker-compose --version

Output:

Docker Compose version v2.20.3

Once these steps are completed, you can proceed with deploying Sentry using Docker Compose.

Installation

The Sentry developers have simplified the installation process with a script. Here's how to set it up:

  1. Clone the Repository and Check Out the Release Branch

First, clone the repository and checkout the release branch:

git clone https://github.com/getsentry/self-hosted.git
cd self-hosted
git checkout 24.10.0
  2. Run the Installation Script

Start the installation process by running the script with the following flags:

./install.sh --skip-user-prompt --no-report-self-hosted-issues

Flags explanation:

  • --skip-user-prompt: Skips the prompt for creating a user (we’ll create the user manually, which can be simpler).
  • --no-report-self-hosted-issues: Skips the prompt to send anonymous data to the Sentry developers from your host (this helps developers improve the product, but it uses some resources; decide if you want this enabled).

The script will check system requirements and download the Docker images (docker pull).

  3. Start Sentry

Once the setup is complete, you’ll see a message with the command to run Sentry:

You're all done! Run the following command to get Sentry running:
docker-compose up -d

Run the command to start Sentry:

docker-compose up -d

The Sentry web interface will now be available at your host's IP address on port 9000.

Before your first login, edit the ./sentry/config.yml configuration file and set the system.url-prefix line:

system.url-prefix: 'http://server_IP:9000'

And restart the containers:

docker-compose restart
  4. Create a User

We skipped the user creation during the installation, so let’s create the user manually. Run:

docker-compose run --rm web createuser

Enter your email, password, and answer whether you want to give the user superuser privileges.

Upon first login, you’ll see an initial setup screen where you can specify:

  • The URL for your Sentry instance.
  • Email server settings for sending emails.
  • Whether to allow other users to self-register.

At this point, Sentry is ready to use. You can read more about the configuration options in the official documentation.

Configuration Files

Sentry’s main configuration files include:

.env
./sentry/config.yml
./sentry/sentry.conf.py

By default, 42 containers are launched, and we can customize settings in the configuration files.

Currently, it is not possible to reduce the number of containers due to the complex architecture of the system. 

You can modify the .env file to disable some features.

For example, to disable the collection of private statistics, add this line to .env:

SENTRY_BEACON=False

You can also change the event retention period. By default, it is set to 90 days:

SENTRY_EVENT_RETENTION_DAYS=90

Database and Caching

Project data and user accounts are stored in PostgreSQL. If needed, you can easily configure your own database and Redis in the configuration files.
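As an illustration, an external PostgreSQL instance could be pointed to in ./sentry/sentry.conf.py. Sentry uses Django-style DATABASES settings; the host and credentials below are placeholders, a sketch rather than a ready configuration:

```python
# Fragment for ./sentry/sentry.conf.py (placeholders, adjust to your setup).
DATABASES = {
    "default": {
        "ENGINE": "sentry.db.postgres",  # Sentry's PostgreSQL backend
        "NAME": "sentry",
        "USER": "sentry",
        "PASSWORD": "secret",            # placeholder credential
        "HOST": "db.internal",           # placeholder external host
        "PORT": "5432",
    }
}
```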

HTTPS Proxy Setup

To access the web interface securely, you need to set up an HTTPS reverse proxy. The Sentry documentation does not specify a particular reverse proxy, but you can choose any that fits your needs.

After configuring your reverse proxy, you will need to update the system.url-prefix in the config.yml file and adjust the SSL/TLS settings in sentry/sentry.conf.py.
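For example, a minimal Nginx server block in front of Sentry could look like this (a sketch: the domain and certificate paths are placeholders, and you may want to add HTTP-to-HTTPS redirection and tuned timeouts):

```nginx
# Minimal HTTPS reverse proxy in front of Sentry (placeholders throughout).
server {
    listen 443 ssl;
    server_name sentry.example.com;

    ssl_certificate     /etc/ssl/certs/sentry.crt;
    ssl_certificate_key /etc/ssl/private/sentry.key;

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With such a proxy in place, system.url-prefix should be updated to the HTTPS URL you serve.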

Project Setup and Integration with Sentry

To set up and connect your first project with Sentry, follow these steps:

  1. Create a New Project
  • In the Sentry web interface, click Add New Project and choose your platform.

Image2

  • After creating the project, Sentry will generate a unique DSN (Data Source Name), which you'll need to use in your application to send events to Sentry.

Image3

  2. Configure the traces_sample_rate

Pay attention to the traces_sample_rate setting. It controls the share of transactions (performance traces) sent to Sentry. The default value is 1.0, which sends 100% of them.

traces_sample_rate=1.0  # 100% of transactions will be sent

If you set it to 0.25, only 25% of transactions will be sent, which can be useful to avoid overwhelming the platform with too much similar data. You can adjust this value depending on your needs.
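Beyond a fixed rate, the SDK also accepts a traces_sampler callable that decides the rate per transaction. A pure-Python sketch of such a function (the transaction names are hypothetical; it would be passed as sentry_sdk.init(traces_sampler=traces_sampler)):

```python
def traces_sampler(sampling_context):
    """Return a sampling rate between 0.0 and 1.0 for each transaction."""
    name = sampling_context.get("transaction_context", {}).get("name", "")
    if name == "/health":
        return 0.0   # never trace health checks
    if name.startswith("/checkout"):
        return 1.0   # always trace critical flows
    return 0.25      # sample everything else at 25%
```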

You can read more about additional parameters of the sentry_sdk in the official documentation.

  3. Example Code with Custom Exception

Here’s an example script that integrates Sentry with a custom exception and function:

import sentry_sdk

sentry_sdk.init(
    dsn="http://public_key@server_IP:9000/3",  # DSN from project creation
    traces_sample_rate=1.0,  # Send 100% of transactions
    environment="production",  # Set the runtime environment
    release="my-app-1.0.0",  # Specify the app version
    send_default_pii=True,  # Send Personally Identifiable Information (PII)
)

class MyException(Exception):
    pass

def my_function(user, email):
    raise MyException(f"User {user} ({email}) encountered an error.")

def create_user():
    print("Creating a user...")
    my_function('James', 'james@example.com')

if __name__ == "__main__":
    sentry_sdk.capture_message("Just a simple message")  # Send a test message to Sentry
    create_user()  # Simulate the error
  4. Run the Script

Run the Python script:

python main.py

This script will:

  • Initialize Sentry with your project’s DSN.
  • Capture a custom exception when calling my_function.
  • Send an example message to Sentry.
  5. Check Results in Sentry

After running the script, you should see the following in Sentry:

  • The message Just a simple message will appear in the event stream.
  • The MyException that is raised in my_function will be captured as an error, and the details of the exception will be logged.

You can also view the captured exception, including the user information (user and email) and any other data you choose to send (such as stack traces, environment, etc.).

Image1

In Sentry, the tags displayed in the error reports include important contextual information that can help diagnose issues. These tags often show:

  • Environment: This indicates the runtime environment of the application, such as "production", "development", or "staging". It helps you understand which environment the error occurred in.
  • Release Version: The version of your application that was running when the error occurred. This is particularly useful for identifying issues that might be specific to certain releases or versions of the application.
  • Hostname: The name of the server or machine where the error happened. This can be helpful when working in distributed systems or multiple server environments, as it shows the exact server where the issue occurred.

These tags appear in the error reports, providing valuable context about the circumstances surrounding the issue. For example, the stack trace might show which functions were involved in the error, and these tags can give you additional information, such as which version of the app was running and on which server, making it easier to trace and resolve issues.

Sentry automatically adds these contextual tags, but you can also customize them by passing additional information when you capture errors, such as environment, release version, or user-related data.

Conclusion

In this article, we discussed Sentry and how it can help track errors and monitor applications. We hope it has sparked your interest enough to explore the documentation or try out Sentry.

Despite being a comprehensive platform, Sentry is easy to install and configure. The key is to manage errors carefully, group events sensibly, and use the flexible configuration options to avoid chaos. When set up properly, Sentry becomes a powerful and efficient tool for development teams, offering valuable insights into application behavior and performance.
