iSCSI Protocol: How It Works and What It’s Used For
Hostman Team
Technical writer
Servers
18.11.2024
Reading time: 5 min

iSCSI, or Internet Small Computer System Interface, is a protocol for data storage that enables SCSI commands to be run over a network connection, typically Ethernet. In this article, we’ll look at how it works, its features and advantages, and explain how to configure the iSCSI protocol.

How iSCSI Works

To understand how iSCSI functions, let’s look at its structure in more detail. The main components are initiators and targets. This terminology is straightforward: initiators are hosts that initiate an iSCSI connection, while targets are hosts that accept these connections. Thus, storage devices serve as targets to which the initiator hosts connect.
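
To make this concrete, here is a minimal sketch of exposing a block device as an iSCSI target on a Linux host using the LIO target stack and the targetcli utility. The device /dev/sdb, the target IQN, and the initiator IQN below are placeholders; your distribution or storage appliance may use a different tool.

# Create a block backstore from a local disk and publish it as a target (all names are placeholders)
sudo targetcli /backstores/block create name=disk0 dev=/dev/sdb
sudo targetcli /iscsi create iqn.2003-02.com.site.iscsi:target1
# Map the backstore to a LUN inside the target's default portal group
sudo targetcli /iscsi/iqn.2003-02.com.site.iscsi:target1/tpg1/luns create /backstores/block/disk0
# Allow a specific initiator (identified by its IQN) to connect
sudo targetcli /iscsi/iqn.2003-02.com.site.iscsi:target1/tpg1/acls create iqn.2003-02.com.site.iscsi:name23
sudo targetcli saveconfig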

The connection is established over TCP/IP, with iSCSI handling the SCSI commands and data organization, assembling them into packets. These packets are then transferred over a point-to-point connection between the local and remote hosts. iSCSI processes the packets it receives, separating out the SCSI commands, making the OS perceive the storage as a local device, which can be formatted and managed as usual.
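
On the initiator side, a Linux host with the open-iscsi package would typically discover the target portal and log in roughly as follows; the portal address 192.0.2.10 and the target IQN are placeholders for this sketch.

sudo apt install open-iscsi                               # Debian/Ubuntu; other distributions use their own package manager
sudo iscsiadm -m discovery -t sendtargets -p 192.0.2.10   # ask the portal which targets it exposes
sudo iscsiadm -m node -T iqn.2003-02.com.site.iscsi:target1 -p 192.0.2.10 --login
lsblk                                                     # the remote LUN now shows up as a local block device, e.g. /dev/sdc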

Authentication and Data Transmission

In iSCSI, initiators and targets are identified by special names: IQN (iSCSI Qualified Name) and EUI (Extended Unique Identifier), the latter based on the IEEE EUI-64 format.

  • Example of IQN: iqn.2003-02.com.site.iscsi:name23. Here, 2003-02 is the year and month in which the domain site.com was registered; domain names in an IQN appear in reverse order (com.site), and name23 is the unique name assigned to the iSCSI host.

  • Example of EUI: eui.fe9947fff075cee0. This is a hexadecimal value in the IEEE EUI-64 format. The upper 24 bits are the company identifier assigned by the IEEE (for example, to a vendor or provider), while the remaining 40 bits uniquely identify the host within that organization.
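
On a Linux initiator, the host's own IQN is stored in a small text file and can be inspected or changed there; the name below is a placeholder matching the example above.

cat /etc/iscsi/initiatorname.iscsi
# InitiatorName=iqn.2003-02.com.site.iscsi:name23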

Each session involves two phases. The first is the login phase, carried out over TCP, during which the initiator authenticates and session parameters are negotiated. After a successful login, the second phase is data exchange between the initiator host and the storage device, conducted over a single connection, so requests do not need to be tracked across parallel connections. When the data transfer is complete, the connection is closed with an iSCSI logout command.
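
For instance, on a Linux initiator you can list active sessions and close one with the logout command (the target IQN and portal address are placeholders):

sudo iscsiadm -m session                                   # list active iSCSI sessions
sudo iscsiadm -m node -T iqn.2003-02.com.site.iscsi:target1 -p 192.0.2.10 --logout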

Error Handling and Security

To address data loss, iSCSI includes recovery mechanisms such as retransmission of PDUs (protocol data units), connection recovery, and session restart, with any unprocessed commands canceled. Data exchange is secured with the CHAP protocol, which never transmits confidential information (such as passwords) directly but instead compares hashes. In addition, traffic can be encrypted and integrity-checked with IPsec, which operates at the IP layer beneath iSCSI.
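
As a rough sketch, enabling CHAP on a Linux initiator with open-iscsi comes down to a few lines in /etc/iscsi/iscsid.conf; the username and secret below are placeholders and must match the credentials configured on the target.

# /etc/iscsi/iscsid.conf: CHAP settings for normal (session) login
node.session.auth.authmethod = CHAP
node.session.auth.username = initiator_user
node.session.auth.password = initiator_secret_12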

Types of iSCSI Implementations

There are three main types of iSCSI implementations:

  1. Host CPU Processing: All iSCSI and TCP/IP processing is handled by the initiator host's CPU (a software initiator).

  2. TCP/IP Offload with Shared Load: Most TCP/IP processing is offloaded to the network adapter, while the host CPU handles the iSCSI layer and exception cases.

  3. Full TCP/IP Offload: A dedicated iSCSI host bus adapter processes both the TCP/IP stack and the iSCSI protocol in hardware.

Additionally, iSCSI can be extended with RDMA (Remote Direct Memory Access), which allows one host to read and write the memory of another directly. The advantage of RDMA is that data moves between hosts without consuming CPU resources on the network nodes, achieving high data exchange speeds. In the case of iSCSI, the SCSI buffer memory is used directly for the connection to storage, eliminating intermediate data copies and reducing CPU load. This iSCSI variant is known as iSER (iSCSI Extensions for RDMA).
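
If both hosts have RDMA-capable adapters, a Linux initiator can switch an existing node record from plain TCP to the iSER transport. This is only a sketch: the exact steps depend on the RDMA drivers in use, and the IQN and portal address are placeholders.

# Switch the transport for an already-discovered target from tcp to iser, then log in again
sudo iscsiadm -m node -T iqn.2003-02.com.site.iscsi:target1 -p 192.0.2.10 -o update -n iface.transport_name -v iser
sudo iscsiadm -m node -T iqn.2003-02.com.site.iscsi:target1 -p 192.0.2.10 --login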

Advantages of iSCSI

Beyond cost-effectiveness and solid performance, iSCSI also offers:

  • Simplified Network Storage: Because iSCSI runs over standard Gigabit (and faster) Ethernet, network storage is easy to set up and manage.
  • Ease of Support: iSCSI follows the same principles as TCP/IP, so IT specialists need little additional training.
  • Network Equipment Compatibility: Since iSCSI is based on the TCP/IP network model, almost any storage-related network equipment can be used in an iSCSI environment.

Differences Between iSCSI SAN and FC SAN

In discussions comparing these two protocols, iSCSI SAN (Storage Area Network) and FC SAN (Fibre Channel SAN) are often seen as competitors. Let’s look at the key differences.

iSCSI SAN is a more cost-effective solution than FC SAN. iSCSI offers high data transfer performance and doesn’t require specialized hardware, since it runs on existing network equipment. However, for maximum performance, dedicated iSCSI network adapters or HBAs with offload support are recommended. By contrast, FC SAN requires additional hardware such as Fibre Channel switches and host bus adapters.

To illustrate, here is a table summarizing key differences between the protocols:

Feature                        | iSCSI SAN                            | Fibre Channel SAN
Operation on existing network  | Possible                             | Not possible
Data transfer speed            | 1 to 100 Gbps                        | 2 to 32 Gbps
Setup on existing equipment    | Yes                                  | No
Data flow control              | No packet retransmission protection  | Reliable
Network isolation              | No                                   | Yes

Conclusion

As shown in the comparison table, each protocol has its strengths, so the choice depends on the requirements of your storage system. In short, iSCSI is ideal when cost efficiency, ease of setup, and straightforward protocol management are priorities. On the other hand, FC offers low latency, easier scalability, and is better suited for more complex storage networks.


