
iptables: Overview and Practical Use

Hostman Team
Technical writer
Network
05.11.2024
Reading time: 11 min

iptables is a popular command-line utility for managing the firewall in Linux distributions. It interacts with Netfilter, the packet-filtering framework built into the Linux kernel since version 2.4.

In this article, we will examine how iptables works and go through practical usage examples.

Installing iptables

iptables ships with nearly all Linux distributions, from the most common (Ubuntu, Debian, RHEL) to distributions like openSUSE, Arch Linux, Gentoo, and others. First, let's check whether iptables is already installed on your cloud server by displaying its version with the command:

iptables --version

If this command returns a version number, iptables is already installed on the system. However, if you see the message iptables: command not found, you’ll need to install it manually. Below are instructions for installing iptables using package managers across various Linux distributions. Alternatively, you can compile and install iptables from the source code.

APT

For APT-based distributions (Ubuntu/Debian/Linux Mint/Kali Linux), use the command:

apt -y install iptables

RPM

For RPM-based distributions (CentOS, Fedora, Red Hat Enterprise Linux, ALT Linux), use one of the following commands:

For the YUM package manager:

yum -y install iptables

For the DNF package manager:

dnf -y install iptables

Pacman

For Pacman-based distributions (Arch Linux, ArchLabs, Manjaro), use the command:

pacman -S iptables

All commands must be run as the root user or as a regular user with sudo privileges.

How iptables Works

iptables operates on a system of rules that control incoming and outgoing traffic. The rules are organized into chains, and each rule either allows, blocks, or otherwise processes matching traffic.

A more detailed breakdown of how iptables works is as follows:

  • Network packets pass through one or more chains.
  • As a network packet moves through a chain, it is checked against each rule's criteria in turn. If the packet matches a rule's criteria, that rule's action is applied to it; these actions can include allowing or blocking traffic, among other operations. If it does not match, the packet moves on to the next rule.

Key iptables Terminology

While working with iptables, you may encounter the following terms:

  • Chain: A sequence or set of rules that determine how traffic will be handled.
  • Rule: A combination of criteria that packets are matched against and a target (action) applied to packets that match.
  • Module: An added feature that provides extra options for iptables, allowing for more extensive and complex traffic filtering rules.
  • Table: An abstraction in iptables that stores chains of rules. iptables includes the following tables: Security, Raw, NAT, Filter, and Mangle. Each table has a specific function, described below.

iptables Tables

Filter Table

The Filter table is the default table. It uses three chains: INPUT, FORWARD, and OUTPUT.

  • INPUT: Controls incoming connections. For instance, this might manage incoming SSH connections.
  • FORWARD: Manages incoming connections not directed to the local device, typically used on a router.
  • OUTPUT: Controls outgoing connections, such as navigating to a website using a browser.

NAT Table

The NAT (Network Address Translation) table includes three chains: PREROUTING, POSTROUTING, and OUTPUT.

  • PREROUTING: Changes the destination IP address of a packet before routing (DNAT).
  • POSTROUTING: Changes the source IP address of a packet after routing (SNAT).
  • OUTPUT: Changes the destination address of locally generated outgoing packets.

Mangle Table

The Mangle table is used to modify packet IP headers.

Raw Table

The Raw table provides a mechanism for marking packets to bypass connection tracking.

Security Table

The Security table enables interaction with various OS security mechanisms, such as SELinux.
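Each of the tables above can be inspected individually with the -t option (when -t is omitted, the Filter table is assumed). For example:

```shell
# List the rules in the NAT table, with numeric addresses and packet counters
iptables -t nat -L -n -v

# List the rules in the Mangle table
iptables -t mangle -L -n -v
```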

iptables Rules

The rules in iptables are designed to control incoming and outgoing network traffic. Rules can also be used to configure port forwarding and create protocol-specific rules.

Each rule is made up of criteria and a target. When a packet matches a rule's criteria, the target's action is applied to it; if the packet doesn't match, the next rule is processed. The decisions iptables applies to packets are called actions (targets). Below is a list of key actions for handling connections:

  • ACCEPT: Opens (allows) the connection.
  • DROP: Closes the connection without sending a response to the client.
  • QUEUE: Sends the packet to a queue for further processing by an external application.
  • RETURN: Stops processing the current chain and returns the packet to the chain that called it (in a built-in chain, the chain's default policy is applied instead).
  • REJECT: Blocks the connection and sends an error message in response.
  • DENY: Not a valid iptables target; it existed in the older ipchains tool. To drop a connection without sending a response, use DROP.
  • ESTABLISHED: Not a target but a connection state, matched with the state/conntrack modules. It designates an already established connection, i.e., a session that has already seen at least one packet.
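As a sketch of how targets and connection states combine (run as root), the following sets a restrictive default policy on the INPUT chain while still allowing reply traffic for connections the host initiated. Note the order: add the ACCEPT rule before switching the policy, or an SSH session to the server may be cut off.

```shell
# Allow packets belonging to connections that are already established
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Set the default policy of the INPUT chain: drop anything no rule accepts
iptables -P INPUT DROP
```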

Practical Application of iptables

Let's look at using iptables in practice. All the commands below will work on any Linux distribution. iptables commands must be run as the root user or a regular user with sudo privileges.

To display the current iptables configuration (including all existing rules), use the command:

iptables --list


For a more detailed output, which includes the number and size of processed packets in the INPUT, FORWARD, and OUTPUT chains, along with IP addresses and port numbers in numeric format, use:

iptables --line-numbers -L -v -n


You can also specify a specific chain to display rules for just that chain, such as:

iptables -L INPUT
iptables -L FORWARD
iptables -L OUTPUT

Initially, iptables does not create or store any rule chains, so the output of these commands may be empty.

Blocking IP Addresses

To block a specific IP address, add a rule to the INPUT chain and specify the appropriate table. In the command below, the table is explicitly set. If the -t option is omitted, the rule is added to the default Filter table. For example, to block the IP address 10.0.36.126:

iptables -t filter -A INPUT -s 10.0.36.126 -j REJECT

This command uses the following options:

  • -t: Specifies the table for the rule.
  • -A: Adds the rule to the specified chain, in this case, the INPUT chain.
  • -s: Specifies the source IP address to which the action applies.
  • -j: Specifies the action to take; here, traffic is rejected (action REJECT).

To block an entire subnet, specify it with the -s option:

iptables -A INPUT -s 10.0.36.0/24 -j REJECT

Or, you can specify the subnet mask in full format:

iptables -A INPUT -s 10.0.36.0/255.255.255.0 -j REJECT

To block outgoing traffic to a specific IP address, use the OUTPUT chain and the -d option:

iptables -A OUTPUT -d 10.0.36.126 -j REJECT

Blocking Ports

Ports can be blocked by specifying them directly with the --dport option, which designates the destination port of the service. Instead of a port number, you can use a service name from /etc/services. You must also specify the protocol with the -p option. For example, to block SSH connections from host 10.0.36.126 using the TCP protocol:

iptables -A INPUT -p tcp --dport ssh -s 10.0.36.126 -j REJECT

For the UDP protocol, use:

iptables -A INPUT -p udp --dport ssh -s 10.0.36.126 -j REJECT

Alternatively, to block SSH connections from 10.0.36.126 using the SSH service port (22), use:

iptables -A INPUT -p tcp --dport 22 -s 10.0.36.126 -j REJECT

To block SSH connections from any IP address over TCP:

iptables -A INPUT -p tcp --dport ssh -j DROP

Allowing an IP Address

To allow traffic from a specific IP address, use the ACCEPT action. In the example below, all traffic from the IP address 10.0.36.126 is allowed:

iptables -A INPUT -s 10.0.36.126 -j ACCEPT

To allow traffic from a specific range of IP addresses, for example, from 10.0.36.126 to 10.0.36.156, use the iprange module and the --src-range option:

iptables -A INPUT -m iprange --src-range 10.0.36.126-10.0.36.156 -j ACCEPT

Here:

  • iprange: A module for working with IP address ranges.
  • --src-range: Specifies the source IP address range.

To perform the reverse operation (allowing all traffic from the server to a specific IP range from 10.0.36.126 to 10.0.36.156), use the --dst-range option:

iptables -A OUTPUT -m iprange --dst-range 10.0.36.126-10.0.36.156 -j ACCEPT
  • --dst-range: Specifies the destination IP address range.

Opening Ports

To open a port, specify the protocol using the -p option. Supported protocols include tcp, udp, etc. A full list of supported protocols can be found in /etc/protocols:

cat /etc/protocols

Specify the port using the --dport option. You can use either numeric values or service names. The ACCEPT action is used to open ports.

To open port 22 for TCP traffic from IP address 10.0.36.126:

iptables -A INPUT -p tcp --dport 22 -s 10.0.36.126 -j ACCEPT

To open multiple ports at once, use the multiport module and the --dports option, listing the ports separated by commas. For example, to open ports 22, 80, and 443 over TCP from IP address 10.0.36.126:

iptables -A INPUT -p tcp -m multiport --dports 22,80,443 -s 10.0.36.126 -j ACCEPT
  • multiport: A module for managing multiple ports simultaneously.
  • --dports: Specifies multiple ports, unlike --dport, which supports only a single port.

Blocking ICMP Traffic

One commonly used feature in iptables is blocking ICMP traffic, often generated by the ping utility. To block incoming ICMP traffic, use the following command:

iptables -A INPUT -j DROP -p icmp --icmp-type echo-request


With this rule in place, the ping command receives no response and eventually times out without displaying an error message. If you want the sender to see an error such as "Destination Port Unreachable" instead, replace the DROP action with REJECT:

iptables -A INPUT -j REJECT -p icmp --icmp-type echo-request


Allowing ICMP Traffic

To allow previously blocked ICMP traffic, run the following command:

iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT

However, it’s important to note that if ICMP traffic was previously blocked with this command:

iptables -A INPUT -j DROP -p icmp --icmp-type echo-request

and then allowed with:

iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT

the ICMP traffic will still be blocked, because -A appends the ACCEPT rule to the end of the chain, and the earlier DROP rule in the INPUT chain is matched first.
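To make the allow rule take effect, insert it at the top of the chain with -I instead of appending it with -A, or delete the DROP rule first:

```shell
# Insert the ACCEPT rule at position 1, ahead of the existing DROP rule
iptables -I INPUT 1 -p icmp --icmp-type echo-request -j ACCEPT

# Alternatively, delete the DROP rule by repeating its specification with -D
iptables -D INPUT -p icmp --icmp-type echo-request -j DROP
```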

Blocking Traffic by MAC Address

In addition to IP addresses, traffic can be blocked based on the device’s MAC address. Below is an example to block traffic from a device with the MAC address 00:0c:29:ed:a9:60:

iptables -A INPUT -m mac --mac-source 00:0c:29:ed:a9:60 -j DROP
  • mac: A module for working with device MAC addresses.
  • mac-source: Specifies the MAC address of the device.

Allowing Traffic by MAC Address

To allow traffic from a specific MAC address, use this command:

iptables -A INPUT -m mac --mac-source 00:0c:29:ed:a9:60 -j ACCEPT

Blocking traffic by MAC address with iptables will only work if the devices are on the same network segment. For broader use cases, blocking traffic by IP address is generally more effective.

Allowing Traffic on the Loopback Interface

Traffic on the loopback interface can also be controlled. To allow incoming traffic on the loopback interface, use:

iptables -A INPUT -i lo -j ACCEPT

For outgoing traffic on the loopback interface, the command is:

iptables -A OUTPUT -o lo -j ACCEPT

Restricting Network Access by Schedule

One of the useful features of iptables is the ability to temporarily allow or restrict traffic to specific services or ports based on a schedule. For example, let’s say we want to allow incoming SSH access only on weekdays, Monday through Friday, from 9 AM to 6 PM. The command would look like this:

iptables -A INPUT -p tcp --dport 22 -m time --timestart 09:00 --timestop 18:00 --weekdays Mon,Tue,Wed,Thu,Fri -j ACCEPT
  • time: Module for working with time-based rules.
  • timestart: Specifies the start time for the rule.
  • timestop: Specifies the end time for the rule.
  • weekdays: Specifies the days of the week when the rule will be active, separated by commas. Supported values are: Mon, Tue, Wed, Thu, Fri, Sat, Sun, or numbers 1 to 7.
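Note that an ACCEPT rule by itself only restricts anything if later rules (or the chain's default policy) block the traffic. To actually limit SSH to the schedule, follow the time-based ACCEPT with a matching REJECT or DROP rule, as in this sketch:

```shell
# Allow SSH on weekdays from 9 AM to 6 PM. By default the time module
# interprets times as UTC; add --kerneltz to use the kernel's local time zone.
iptables -A INPUT -p tcp --dport 22 -m time --timestart 09:00 --timestop 18:00 --weekdays Mon,Tue,Wed,Thu,Fri -j ACCEPT

# Reject SSH at all other times (appended after, so it matches only
# when the time-based ACCEPT rule above did not)
iptables -A INPUT -p tcp --dport 22 -j REJECT
```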

Saving iptables Rules

By default, user-created iptables rules are not saved automatically, so they are cleared after a server reboot or shutdown. To persist them on Debian/Ubuntu-based systems, install the iptables-persistent package with the following command:

apt -y install iptables-persistent

During the installation, two dialog boxes will appear, allowing you to save the current rules to /etc/iptables/rules.v4 for IPv4 and /etc/iptables/rules.v6 for IPv6.
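After you change rules later, the saved files can be refreshed with the netfilter-persistent helper that ships with the package:

```shell
# Re-save the current IPv4 and IPv6 rule sets to /etc/iptables/
netfilter-persistent save

# Reload the saved rules (this also happens automatically at boot)
netfilter-persistent reload
```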

To manually save all rules for the IPv4 protocol, use:

iptables-save > /etc/iptables/rules.v4

For IPv6 rules, use:

ip6tables-save > /etc/iptables/rules.v6

This method has a significant advantage: saved rules can be restored from the file, which is helpful, for example, when transferring rules to another host. To restore previously saved rules, run:

iptables-restore < /etc/iptables/rules.v4

If executing this command on a different host, transfer the rule file first and then execute the restore command.
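As a sketch (the host name here is hypothetical), transferring the rules to another machine might look like:

```shell
# Copy the saved rules to another host (replace host2 with the real name)
scp /etc/iptables/rules.v4 root@host2:/etc/iptables/rules.v4

# Then, on host2, load the transferred rules
iptables-restore < /etc/iptables/rules.v4
```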

Deleting Rules in iptables

You can delete rules in iptables using several methods.

Deleting a Specific Rule

One way to delete a rule is to target a specific rule in a chain using its line number. To display the rule numbers, use:

iptables -L --line-numbers


For example, in the INPUT chain, we might see two rules that open ports 80 and 443 over TCP for IP addresses 10.0.36.126 (rule number 1) and 10.0.36.127 (rule number 2). To delete rule number 2, use:

iptables -D INPUT 2

Then, display the list of all current rules to verify:

iptables -L --line-numbers

Rule number 2 should now be removed successfully.


Deleting All Rules in a Specific Chain

You can also delete all rules in a specific chain at once. For example, to clear all rules in the OUTPUT chain:

iptables -F OUTPUT

Deleting All Rules

To delete all rules across all chains, simply run:

iptables -F

Use caution with this command, as it will remove all existing rules, including potentially essential ones.
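Also note that if the default chain policies were changed (for example, to DROP), flushing the rules alone will not restore access. A full reset also resets the policies and removes user-defined chains:

```shell
# Reset default policies to ACCEPT so that flushing cannot lock you out
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT

# Flush all rules in all chains of the default (Filter) table
iptables -F

# Delete any user-defined chains
iptables -X
```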

Conclusion

In summary, iptables is a powerful tool for managing the built-in firewall in Linux-based operating systems. Its extensive features and modular support allow flexible configuration for controlling network traffic.

For more detailed information on iptables, consult the official documentation or use the man iptables command in Linux-based systems.


Similar

API

How to Secure an API: Methods and Best Practices

APIs are the bridges between programs in the modern internet. When you order a taxi, the app communicates with the server via an API. When you buy something online, the payment system checks your card through a banking API. These invisible connections handle billions of operations every day. However, an unsecured API is an open gateway for attackers. Real statistics show the scale of the problem: 99% of organizations reported at least one API-related incident in the past year. The total number of API attacks in Q3 2024 exceeded 271 million, which is 85% more than attacks on regular websites. Most companies provide unrestricted access to half of their APIs, often without realizing it. The good news is that 90% of attacks can be blocked with simple security measures. Most attackers rely on the assumption that the API is completely unprotected. Basic security strategies filter out attackers. From this guide, you will get five practical steps to secure an API that can be implemented within a week. No complex theory—only what really works in production. After reading, you will have a secure API capable of withstanding most attacks. Step One: Authentication Authentication answers a simple question: “Who is this?” Imagine an API as an office building with a security guard at the entrance. Without checking IDs, anyone can enter: employees, couriers, or thieves. Similarly, an API without authentication is available to anyone on the internet. Anyone can send a request and access your data. Why authentication is important: Protect confidential data: Your API likely handles information that should not be publicly accessible: user profiles, purchase history, medical records. Without authentication, this data becomes public. Track request sources: When something goes wrong, you need to know where the problem originated. Authentication ties each request to a specific client, making incident investigation and blocking attackers easier. 
API Keys — Simple and Reliable An API key works like an office pass. Each application is issued a unique card that must be presented for each entry. How it works: The server generates a random string of 32–64 characters. The key is issued to the client application once. The application sends the key with every request. The server verifies the key in the database. Pros: Easy to implement in a few hours Simple to block a specific key Good for internal integrations Cons: Database load for each verification Difficult to manage with thousands of clients Risk of key leakage from client code JWT Tokens — Modern Standard JWT (JSON Web Token) is like a passport with built-in protection against forgery. The token contains user information and does not require constant server verification. Token structure: Header — encryption algorithm Payload — user ID, role, permissions Signature — prevents tampering When to use: Microservices architecture High-load systems Mobile applications Pros: High performance—no database queries needed Token contains all necessary information Supported by all modern frameworks Cons: Difficult to revoke before expiration Compromise of the secret key is critical Token can become large if overloaded with data OAuth 2.0 — For External Integrations OAuth 2.0 solves the problem of secure access to someone else’s data without sharing passwords. It is like a power of attorney—you allow an application to act on your behalf within limited scopes. 
Participants: User — data owner Application — requests access Authorization server — verifies and issues permissions API — provides data according to the token Typical scenarios: “Sign in with Google” in mobile apps Posting to social media on behalf of a user Banking apps accessing account data How to Choose the Right Method Let’s look at the characteristics of each method: Criterion API Keys JWT Tokens OAuth 2.0 Complexity Low Medium High Setup Time 2 hours 8 hours 2 days For MVP Ideal Possible Overkill Number of Clients Up to 100 Thousands Any number External Integrations Limited Poor Ideal Stage Recommendations: Prototype (0–1,000 users): Start with API keys. They protect against accidental access and give time to understand usage patterns. Growth (1,000–100,000 users): Move to JWT tokens. They reduce database load and provide more flexibility. Scale (100,000+ users): Add OAuth 2.0 for integrations with major platforms. Start with API keys, even if you plan something more complex. A working simple security system is better than a planned perfect one. Transition to other methods gradually without breaking existing integrations. Remember: An API without authentication is a critical vulnerability that must be addressed first. Step Two: Authorization Authentication shows who the user is. Now you need to decide what they are allowed to do. Authorization is like an office access system: everyone has an entry card, but only IT can enter the server room, and accountants can access the document archive. Without proper authorization, authentication is meaningless. An attacker may gain legitimate access to the API but view other people’s data or perform prohibited operations. 
Role System Three basic roles for any API: Admin Full access to all functions User and settings management View system analytics and logs Critical operations: delete data, change configuration User Work only with own data Create and edit personal content Standard operations: profile, orders, files Access to publicly available information Guest View public information only Product catalogs, news, reference data No editing or creation operations Limited functionality without registration Grant users only the permissions critical for their tasks. When in doubt, deny. Adding permissions is easier than fixing abuse consequences. Additional roles as the system grows: Moderator — manage user content Manager — access analytics and reports Support — view user data for issue resolution Partner — limited access for external integrations Data Access Control It’s not enough to check the user’s role. You must ensure they can work only with the data they are allowed to. A user with the “User” role should edit only their posts, orders, and profile. Example access rules: Users can edit only their profile Orders are visible to the buyer, manager, and admin Financial reports are accessible only to management and accounting System logs are viewable only by administrators Access Rights Matrix: Resource Guest User Moderator Admin Public Content Read Read Read + Moderation Full Access Own Profile - Read + Write - Full Access Other Profiles - - Read Full Access System Settings - - - Full Access Critical operations require additional checks, even for admins: User deletion — confirmation via email Changing system settings — two-factor authentication Bulk operations — additional password or token Access to financial data — separate permissions and audit Common Authorization Mistakes Checking only on the frontend: JavaScript can be bypassed or modified. Attackers can send requests directly to the API, bypassing the interface. Always check permissions on the server. 
Overly broad access rights: “All users can edit all data” is a common early mistake. As the system grows, this leads to accidental changes and abuse. Start with strict restrictions. Forgotten test accounts: Test accounts often remain in production with elevated permissions. Regularly audit users and remove inactive accounts. Lack of change auditing: Who changed what and when in critical data? Without logging admin actions, incident investigation is impossible. Checking authorization only once: User permissions can change during a session. Employee dismissal, account blocking, or role changes should immediately reflect in API access. Mixing authentication and authorization: “If the user is logged in, they can do everything” is a dangerous logic. Authentication and authorization are separate steps; each can result in denial. Proper authorization balances security and usability. Too strict rules frustrate users; too lax rules create security holes. Start with simple roles, increase complexity as needed, but never skip permission checks. Step Three: HTTPS and Encryption Imagine sending an important letter through the mail. HTTP is like an open postcard that any mail carrier can read. HTTPS is a sealed envelope with a personal stamp that only the recipient can open. All data between the client and the API travels through dozens of intermediate servers on the internet. Without encryption, any of these servers can eavesdrop and steal confidential information. Why HTTP is Unsafe What an attacker can see when intercepting HTTP traffic: API keys and access tokens in plain text User passwords during login Credit card numbers and payment information Personal information: addresses, phone numbers, medical records Contents of messages and documents 19% of all successful cyberattacks are man-in-the-middle attacks, a significant portion of which involve open networks (usually HTTP) or incorrect encryption configuration. 
Public Wi-Fi networks, corporate networks with careless administrators, ISPs in countries with strict censorship, and rogue access points with names like “Free WiFi” are particularly vulnerable. Setting Up HTTPS Obtaining SSL Certificates An SSL certificate is a digital document that verifies the authenticity of your server. Without it, browsers display a warning about an insecure connection. Free options: Let’s Encrypt — issues certificates for 90 days with automatic renewal Cloudflare — free SSL for websites using their CDN Hosting providers — many include SSL in basic plans Paid SSL certificates are used where a particularly high level of trust is required, for example for large companies, financial and medical organizations, or when an Extended Validation (EV) certificate is needed to confirm the legal identity of the site owner. Enforcing HTTP to HTTPS Redirection Simply enabling HTTPS is not enough—you must prevent the use of HTTP. Configure automatic redirection of all requests to the secure version. Check configuration: Open your API in a browser. It should show a green padlock. Try the HTTP version. It should automatically redirect to HTTPS. Use SSL Labs test to verify configuration. Security Headers (HSTS) HTTP Strict Transport Security forces browsers to use HTTPS only for your domain. Add the header to all API responses: Strict-Transport-Security: max-age=31536000; includeSubDomains This means: “For the next year, communicate with us only via HTTPS, including all subdomains.” Additional Encryption HTTPS protects data in transit, but in the database it is stored in plain text. Critical information requires additional encryption. 
Must encrypt: User passwords — use bcrypt, not MD5 API keys — store hashes, not raw value Credit card numbers — if processing payments Medical data — per HIPAA or equivalent regulations Recommended encryption: Personal data: phone numbers, addresses, birth dates Confidential user documents Internal tokens and application secrets Critical system settings The hardest part of encryption is secure key storage. Encryption keys must not be stored alongside encrypted data. Rotate encryption keys periodically. If a key is compromised, all data encrypted with it becomes vulnerable. HTTPS is the minimum requirement for any API in 2025. Users do not trust unencrypted connections, search engines rank them lower, and laws in many countries explicitly require encryption of personal data. Step Four: Data Validation Users can send anything to your API: abc instead of a number, a script with malicious code instead of an email, or a 5 GB file instead of an avatar. Validation is quality control at the system’s entry point. Golden rule: Never trust incoming data. Even if the data comes from your own application, it may have been altered in transit or generated by a malicious program. Three Validation Rules Rule 1: Check Data Types Age must be a number, not a string. Email must be text, not an array. Dates must be in the correct format, not random characters. Rule 2: Limit Field Length Unlimited fields cause numerous problems. Attackers can overload the server with huge strings or fill the entire database with a single request. Rule 3: Validate Data Format Even if the data type is correct, the content may be invalid. An email without @ is not valid, and a phone number with letters cannot be called. Injection Protection SQL injection is one of the most dangerous attacks. An attacker inserts SQL commands into normal form fields. If your code directly inserts user input into SQL queries, the attacker can take control of the database. Example: A search field for users. 
A legitimate user enters “John,” but an attacker enters: '; DROP TABLE users; --. If the code directly inserts this into a query: SELECT * FROM users WHERE name = ''; DROP TABLE users; -- Result: the users table is deleted. Safe approach: Queries and data are sent separately. The database automatically escapes special characters. Malicious code becomes harmless text. File Validation Size limits: One large file can fill the server disk. Set reasonable limits for each operation. File type checking: Users may upload executable files with viruses or scripts. Allow only safe formats. Check more than the extension: Attackers can rename virus.exe to photo.jpg. Check the actual file type by content, not just by name. Quarantine files: Store uploaded files in separate storage with no execution rights. Scan with an antivirus before making them available to others. Data validation is your first line of defense against most attacks. Spending time on thorough input validation prevents 70% of security issues. Remember: it’s better to reject a legitimate request than to allow a malicious one. Step Five: Rate Limiting Rate Limiting is a system to control the request speed to your API. Like a subway turnstile letting people through one at a time, the rate limiter controls the flow of requests from each client. Without limits, a single user could overwhelm your server with thousands of requests per second, making the API unavailable to others. This is especially critical in the age of automated attacks and bots. Why Limit Request Rates DDoS protection: Distributed denial-of-service attacks occur when thousands of computers bombard your server simultaneously. Rate Limiting automatically blocks sources with abnormally high traffic. Prevent abuse: Not all attacks are malicious. A developer may accidentally run a script in an infinite loop. A buggy mobile app may send requests every millisecond. Rate Limiting protects against these incidents. 
Fair resource distribution: One user should not monopolize the API to the detriment of others. Limits ensure all clients have equal access. Cost control: Each request consumes CPU, memory, and database resources. Rate Limiting helps forecast load and plan capacity. Defining Limits Not all requests place the same load on the server. Simple reads are fast; report generation may take minutes. Light operations (100–1,000 requests/hour): Fetch user profile List items in catalog Check order status Ping and healthcheck endpoints Medium operations (10–100 requests/hour): Create a new post or comment Upload images Send notifications Search the database Heavy operations (1–10 requests/hour): Generate complex reports Bulk export of data External API calls Limits may vary depending on circumstances: more requests during daytime, fewer at night; weekends may have different limits; during overload, limits may temporarily decrease, etc. When a user reaches the limit, they must understand what is happening and what to do next. Good API response when limit is exceeded: HTTP Status: 429 Too Many Requests { "error": "rate_limit_exceeded", "message": "Request limit exceeded. Please try again in 60 seconds.", "current_limit": 1000, "requests_made": 1000, "reset_time": "2025-07-27T22:15:00Z", "retry_after": 60 } Bad response: HTTP Status: 500 Internal Server Error { "error": "Something went wrong" } Rate Limiting is not an obstacle for users but a protection of service quality. Properly configured limits are invisible to honest clients but effectively block abuse. Start with conservative limits and adjust based on actual usage statistics. Conclusion Securing an API is not a one-time task at launch but a continuous process that evolves with your project. Cyber threats evolve daily, but basic security strategies remain unchanged. 80% of attacks can be blocked with 20% of effort. These 20% are the basic measures from this guide: HTTPS, authentication, data validation, and rate limiting. 
Do not chase perfect protection until you have implemented the fundamentals.
22 August 2025 · 14 min to read
Linux

How to Use Telnet Command on Linux

The telnet command is a great and handy Linux network service communication utility. From remote server and system port scans, to debugging network connections, telnet offers easy text-based interaction with a remote host. In this step by step guide, you can see how to install, configure, and utilize telnet in Linux. We shall also discuss its various options and features so that you can have a complete idea. What is Telnet? telnet, or "Telecommunication Network," is a remote network protocol on another computer over the Transmission Control Protocol (TCP). telnet provides the ability to directly specify the remote host on a particular port so that commands may be sent and output directly read in real time. telnet is employed primarily for: Testing Open Ports: Determine if a server has an open port. Accessing Services: Get direct access to the web, e-mail, or other networked services. Troubleshooting Network Issues: Fix network connectivity issues or port not available issues. Installing Telnet on Linux telnet is not pre-installed on most modern Linux distributions. Installation depends on your system type. For Ubuntu/Debian-Based Systems An Ubuntu or any Debian-based Linux user can install telnet with the apt package manager: sudo apt install telnet For Red Hat/CentOS-Based Systems telnet can be installed on RedHat, CentOS, or Fedora by using the yum or dnf package managers: sudo yum install telnet For newer versions: sudo dnf install telnet Understanding the Telnet Command Syntax The telnet command syntax is simple: telnet [hostname/IP] [port] Where: [hostname/IP]: Specifies the hostname or IP address of the remote host. [port]: Specifies the port number you want to connect to. It can be omitted, and the default port (23) is used.  telnet establishes one direct connection to services on specific ports, like HTTP (port 80), SMTP (port 25), or FTP (port 21). 
Different Options Available for the Telnet Command

The telnet command offers several options that extend its functionality:

-4: Forces telnet to use IPv4 only when establishing a connection.
-6: Forces telnet to use IPv6 only when connecting.
-8: Allows transfer of 8-bit data through telnet.
-E: Disables the telnet escape character, disallowing escape sequences during the session.
-K: Prevents telnet from automatically passing credentials (e.g., a Kerberos ticket) to the remote host.
-L: Specifies an 8-bit data path on output.
-X atype: Disables the specified authentication type (e.g., KERBEROS_V4) for the session.
-a: Attempts automatic login, sending the current user's login name.
-d: Enables debugging mode, providing detailed information about the connection process.
-e char: Changes the telnet escape character.
-l user: Specifies the username for the login attempt.
-n tracefile: Writes session activity to the specified trace file for debugging or logging.
-b addr: Binds telnet to a specific local interface or address when connecting.
-r: Uses an rlogin-like user interface.

Using Telnet: Practical Applications

telnet provides diagnostic and testing capabilities for networks. Some of these include:

Test Open Ports

telnet is often used to verify whether a specific port on a server is open. To check port 80, run:

telnet example.com 80

If the port is open, telnet connects and you may see a blank screen awaiting input, which is a good indication that the port is listening. If the port is firewalled or closed, you will get an error such as "Connection refused."

Interact with SMTP Servers

telnet can debug email servers by sending raw SMTP commands.
To connect to an SMTP server on port 25:

telnet mail.example.com 25

Once connected, you can type SMTP commands such as HELO, MAIL FROM, and RCPT TO directly to communicate with the server.

Send HTTP Requests

telnet enables manual HTTP requests for debugging web servers. For example:

telnet example.com 80

After connecting, type:

GET / HTTP/1.1
Host: example.com

Press Enter twice to send the request, and the server's response will appear.

Connect Using IPv4

If the server supports both IPv4 and IPv6, you can force the connection to use IPv4:

telnet -4 example.com 80

This ensures compatibility with IPv4-only networks.

Debugging a MySQL Server

telnet can connect to a MySQL database server to check whether its port is open (3306 by default):

telnet database.example.com 3306

Replace database.example.com with the MySQL server address. If the connection succeeds, telnet displays a protocol-specific greeting from the MySQL server.

Security Considerations When Using Telnet

Although telnet is handy, it is fundamentally insecure: it sends all data, including passwords, in cleartext. Consequently:

Don't use telnet over untrusted networks: use a secure, private network whenever possible.
Use alternatives: prefer SSH (Secure Shell) for encrypted communication.
Restrict access: disable telnet on your servers if you do not use it.

By understanding these risks, you can take precautions to secure your systems.

Exploring Advanced Telnet Use Cases

telnet's utility extends to a variety of specialized scenarios:

Monitoring services: interactively query protocols like IMAP or POP3 to diagnose email problems.
IoT device management: telnet can serve as a direct interface to IoT devices that use text-based protocols.
Educational use: it is an excellent tool for studying network protocols and server responses.
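The manual HTTP request shown above can also be sketched in Python: the bytes sent over the socket are exactly what you would type into a telnet session. To keep the example self-contained and offline, it spins up a local stdlib web server standing in for example.com:

```python
import socket
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Tiny local web server playing the role of example.com.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

# The same text you would type into the telnet session, with CRLF line
# endings and a blank line ("press Enter twice") terminating the request.
request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"

with socket.create_connection((host, port)) as conn:
    conn.sendall(request.encode())
    response = b""
    while chunk := conn.recv(4096):
        response += chunk

print(response.splitlines()[0].decode())  # status line, e.g. "... 200 OK"
server.shutdown()
```

The first line of the response is the HTTP status line; everything after the blank line is the body, just as you would see it scroll by in a telnet session.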
Troubleshooting Common Telnet Issues

Despite its simplicity, telnet may run into issues such as:

Connection refused: usually means the target port is firewalled or closed.
Time-out errors: may reflect network delays or routing issues.
Permission denied: check user privileges and port availability.

Regularly reviewing server configurations and network settings helps resolve these issues.

Exploring Telnet Alternatives

If telnet's lack of encryption is a security risk for your systems, several alternatives offer comparable functionality with added security:

SSH (Secure Shell): the most common telnet substitute, providing encrypted communication, tunneling, and strong authentication. Use the ssh command to connect to remote servers securely.
Netcat (nc): a full-featured tool for network debugging, port scanning, and connection testing that handles both TCP and UDP.
OpenSSL s_client: openssl s_client can test SSL/TLS-enabled services on particular ports.

Conclusion

telnet on Linux is a simple and convenient tool for network diagnostics and debugging. As long as you understand its security limitations and configure it carefully, telnet remains useful for testing, debugging, and communicating with network services. This guide gives you a working setup that balances convenience with responsible caution.
24 July 2025 · 6 min to read

How to Open a Port on Linux

Opening ports in Linux is an important task that allows specific services or applications to exchange data over the network. Ports act as communication gateways, allowing access to authorized services while blocking unauthorized connections. Managing ports is key to secure access, smooth application functionality, and reliable performance.

Understanding Ports and Their Purpose

Ports are the logical endpoints of network communication through which devices send and receive data. HTTP uses port 80, HTTPS uses port 443, and SSH uses port 22. An open port has a service listening for incoming traffic on it; a closed port blocks communication through that gateway. Maintaining availability and security requires proper management of open ports in Linux.

Check Existing Open Ports on Linux

Before opening a port, check which ports are currently active. Several Linux commands can do this.

netstat

To display open ports, run:

netstat -tuln

The netstat utility provides a real-time view of active network connections, displaying all listening endpoints. The -tuln flags limit the output to TCP and UDP ports without resolving hostnames.

Note: if netstat isn't installed, install it with:

sudo apt install net-tools

ss

The ss utility can also check ports:

ss -tuln

Compared to netstat, ss is more modern and faster. It shows the ports in use as well as socket information.

nmap

For a more detailed analysis of open ports, use:

nmap localhost

The nmap utility scans the given host (localhost in this case) for open ports. This is useful for finding ports exposed to public networks.

Note: you can install nmap with:

sudo apt install nmap

Opening Ports on Linux

Opening access through a chosen port requires modifying the firewall. Linux provides several tools for this, including iptables, ufw, and firewalld.
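Conceptually, a scanner like nmap just attempts a TCP connection to each port and records which attempts succeed. A minimal sketch of that idea in Python (scanning only a tiny local range, and opening one port itself so the scan has something to find):

```python
import socket

def scan(host: str, ports: range, timeout: float = 0.5) -> list:
    """Return the subset of `ports` accepting TCP connections — a tiny nmap."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports

# Open one port ourselves so the scan has a known hit.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))      # OS picks a free ephemeral port
listener.listen(1)
port = listener.getsockname()[1]

print(scan("127.0.0.1", range(port - 2, port + 3)))  # includes our port
listener.close()
```

Real nmap is far more capable (service detection, UDP, timing control), but the connect-scan core is this loop.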
Here are the methods to open ports with these utilities.

Method 1: Via iptables

iptables is a robust, lower-level firewall utility that grants fine-grained control over network traffic. To open a port with iptables, take these steps:

Add a Rule to Allow Traffic on a Specific Port

Enable HTTP access on port 8080 with this command:

sudo iptables -A INPUT -p tcp --dport 8080 -j ACCEPT

sudo: executes the command as superuser.
iptables: invokes the firewall utility.
-A INPUT: appends a rule to the INPUT chain, which controls incoming traffic.
-p tcp: restricts the rule to TCP traffic.
--dport 8080: applies the rule to port 8080.
-j ACCEPT: accepts incoming traffic that matches the rule.

This permits incoming TCP traffic on port 8080. However, iptables changes are volatile and will be lost after a reboot.

Note: iptables with persistence support can be installed using:

sudo apt install iptables iptables-persistent

Save the Configuration

To make the rule permanent so it survives a system restart, save the iptables rules:

sudo netfilter-persistent save

This command stores the current iptables or nftables rules so that they persist across reboots.

Reload Changes

Reload the firewall configuration as needed with:

sudo netfilter-persistent reload

Method 2: Via UFW

UFW (Uncomplicated Firewall) is a minimal front end for managing iptables rules. It lets you open ports with simple commands. Here is how:

Enable UFW

First, ensure the ufw firewall is active:

sudo ufw enable

This allows UFW to manage the firewall settings.

Note: UFW can be installed with:

sudo apt install ufw

Allow Traffic on a Specific Port

For instance, to open port 22 for SSH, use:

sudo ufw allow 22/tcp

sudo: grants superuser privileges.
ufw allow: adds a rule permitting traffic.
22/tcp: opens port 22 and restricts the rule to the TCP protocol.

This permits access on port 22, enabling remote SSH connections.
Verify the Firewall Status

To ensure the port is accessible and the rule is active, execute:

sudo ufw status

The status command displays all active rules, including the allowed ports.

Method 3: Via Firewalld

firewalld is a dynamic firewall daemon available on many Linux distributions. It makes customizing firewall rules simpler than using iptables directly. Here's how to enable port access via firewalld:

Add a Permanent Rule for the Desired Port

To enable HTTPS access on port 443, run:

sudo firewall-cmd --permanent --add-port=443/tcp

firewall-cmd: invokes the firewalld command-line client.
--permanent: keeps the rule active after the firewall reloads or the system reboots.
--add-port=443/tcp: opens port 443 for incoming TCP traffic.

Note: install firewalld with:

sudo apt install firewalld

Once installed, enable and start it:

sudo systemctl enable firewalld
sudo systemctl start firewalld

Reload the Firewall

Apply the newly defined policy:

sudo firewall-cmd --reload

Reloading applies recent policy changes without rebooting.

Verification

Check whether the port was opened successfully:

sudo firewall-cmd --list-all

The --list-all command shows the full active configuration, helping you confirm that port 443 is open.

Testing the Newly Opened Port

Always check that a newly opened port accepts incoming connections. Here's how:

Using telnet

Test the port with:

telnet localhost port_number

A successful connection means the port is open and responsive.

Using nmap

Scan the host to verify that the specified port is accessible:

nmap -p port_number localhost

The -p flag specifies the port to scan.

Using curl

Check HTTP service availability:

curl localhost:port_number

A successful response confirms the service is running on the opened port.

Troubleshooting Common Issues

Opening ports may occasionally fail due to configuration errors or conflicting software settings.
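The curl check above can be reproduced programmatically. As a sketch using only the Python standard library, the example below starts a local HTTP server on an ephemeral port (standing in for the service you just opened a port for) and fetches it the way curl would:

```python
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Stand-in for the service listening on the newly opened port.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Equivalent of: curl localhost:<port>
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    status = resp.status
    print(status)   # 200 means the service answered on the opened port

server.shutdown()
```

If the port were closed or firewalled, urlopen would instead raise a URLError, the programmatic analogue of curl's "Connection refused".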
Follow these tips:

Verify firewall rules: run iptables -L or ufw status to review firewall restrictions and permissions.
Check service status: confirm the relevant service is active with systemctl status <service-name>.

Opening Specific Ports Based on Protocol

Understanding the protocol a service uses helps configure ports correctly. For instance, web traffic typically uses TCP (Transmission Control Protocol) for reliable communication, while some gaming and monitoring services require UDP (User Datagram Protocol) for faster packet transmission.

Opening a TCP Port

To open port 3306 for MySQL traffic:

sudo ufw allow 3306/tcp

This explicitly permits TCP traffic through port 3306, ensuring reliable communication for database queries.

Opening a UDP Port

To open port 161 for SNMP (Simple Network Management Protocol), run:

sudo ufw allow 161/udp

UDP provides faster, connectionless communication, ideal for monitoring tools like SNMP.

Managing Port Accessibility

Once a port is opened, controlling who can reach it ensures security and prevents unauthorized access.

Restricting Access to Specific IPs

To limit port access to a specific IP address (e.g., 192.168.1.100):

sudo ufw allow from 192.168.1.100 to any port 22

This allows SSH access on port 22 only from the specified IP address, enhancing security.

Closing Ports

To revoke access to port 80:

sudo ufw deny 80/tcp

This denies incoming traffic on port 80, effectively closing it for HTTP services.

Conclusion

Managing open ports in Linux is a key step in optimizing network functionality and deploying services effectively. With utilities such as iptables, ufw, or firewalld, you can control traffic securely for your applications. Always test and debug to confirm that each port is open and working as expected. From web servers and SSH access to other network services, port-management skills ensure smooth operations and better security.
01 July 2025 · 7 min to read
