How to Use tcpdump to Capture and Analyze Network Traffic
Hostman Team
Technical writer
Network
05.11.2024
Reading time: 7 min

Sometimes, troubleshooting network issues requires capturing network traffic. tcpdump is a network traffic analyzer, or "sniffer," that intercepts and analyzes the traffic passing through a system's network interfaces. It provides a rich set of options and filters, making it versatile for many purposes. tcpdump is entirely console-based, with no graphical interface, so it can run on servers without GUI support. The first version of tcpdump was released back in 1988, and it has been actively maintained ever since, with new versions released every year.

This article will cover various scenarios for using tcpdump.

Prerequisites

To follow this tutorial, you will need: 

  • A cloud server or virtual machine with a Linux OS installed. Any Linux distribution will work.
  • Access to the root user or a user with sudo privileges.

Installing tcpdump

We will install tcpdump on Ubuntu 22.04. The tcpdump package is available in the OS’s official repository. First, update the package index:

sudo apt update

Next, install the utility:

sudo apt -y install tcpdump

Confirm that the installation was successful by checking the tcpdump version:

tcpdump --version

Note that further use of the utility requires running it as the root user or a user with sudo privileges.
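Because every capture command below needs elevated privileges, a wrapper script around tcpdump can check for root up front and fail early. A minimal sketch (the message text is arbitrary):

```shell
# Fail early if tcpdump would be run without root privileges.
if [ "$(id -u)" -ne 0 ]; then
    echo "run this script with sudo" >&2
else
    echo "running as root"
fi
```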

Running tcpdump Without Parameters

If you run tcpdump without any parameters, it will start capturing all traffic on all available interfaces in the system and display the data on the screen (stdout):

tcpdump


To stop the program, press Ctrl + C.

After each run, tcpdump provides the following information:

  • packets captured — shows the number of packets captured (packets that were received and processed by tcpdump).

  • packets received by filter — shows the number of packets that matched the capture filter.

  • packets dropped by kernel — shows the number of packets dropped by the OS kernel.


By default, tcpdump does not save its output. We will discuss saving the output to a file later in the article.
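If you script around tcpdump, the exit summary can be parsed with standard text tools. A sketch, with the summary hardcoded for illustration (a live tcpdump prints these counters to stderr when it exits):

```shell
# Extract the "packets captured" counter from a tcpdump exit summary.
summary='12 packets captured
15 packets received by filter
0 packets dropped by kernel'
captured=$(printf '%s\n' "$summary" | awk '/packets captured/ { print $1 }')
echo "captured=$captured"   # prints: captured=12
```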

tcpdump Output Format

Let's analyze the output of a captured packet using the TCP protocol as an example. By default, tcpdump displays the following data for each capture:

09:33:57.063196 IP nexus-test.com.ssh > 192.168.111.1.50653: Flags [P.], seq 27376:27440, ack 321, win 521, length 64

The parameter descriptions are provided below.

  • 09:33:57.063196 — Timestamp in the format hours:minutes:seconds.fraction, where the fraction is the sub-second part (microseconds). In this example, the packet was captured at 9:33:57.063196.
  • IP — Protocol used.
  • nexus-test.com.ssh — Domain name (or IP address) and port of the source host. Here, ssh is shown instead of port number 22. To display addresses and ports as numbers, run tcpdump with the -n option.
  • 192.168.111.1.50653 — Domain name (or IP address) and port of the destination host.
  • Flags [P.] — TCP flags indicating the connection state. Multiple values are possible. Here P stands for PUSH, asking the receiver to deliver the data to the application immediately rather than buffer it, and the dot represents ACK.
  • seq 27376:27440 — Sequence numbers: this packet carries bytes 27376 through 27440 of the stream.
  • ack 321 — Acknowledgment number: the next byte this side expects to receive.
  • win 521 — Receive window size, the buffer space available for incoming data.
  • length 64 — Payload length in bytes, i.e., the difference between the last and first sequence numbers.
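Because this output format is stable, individual fields are easy to pull out with awk. A sketch that extracts the timestamp, source, and destination from the sample line above (hardcoded here; normally you would pipe live tcpdump output in):

```shell
# Split a tcpdump TCP line on whitespace and print selected fields.
# Field 5 is the destination, which carries a trailing colon we strip off.
echo '09:33:57.063196 IP nexus-test.com.ssh > 192.168.111.1.50653: Flags [P.], seq 27376:27440, ack 321, win 521, length 64' |
awk '{ sub(":$", "", $5); print "time=" $1, "src=" $3, "dst=" $5 }'
```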

Practical Use of tcpdump

Let’s move on to practical applications of tcpdump with examples.

Displaying a List of Network Interfaces

To list all network interfaces available in the system for traffic capture, use:

tcpdump -D


Capturing Traffic from a Specific Network Interface

By default, tcpdump captures traffic from all available interfaces. To capture traffic from a specific network interface (e.g., ens33), use:

tcpdump -i ens33


Disabling IP Address to Hostname Resolution

By default, tcpdump converts IP addresses to hostnames and replaces port numbers with service names. To prevent tcpdump from converting IP addresses to hostnames, add the -n option:

tcpdump -n


To disable both IP-to-hostname and port-to-service name conversions, use the -nn option:

tcpdump -nn

Capturing a Specific Number of Packets

By default, tcpdump captures an unlimited number of packets. To capture a specified number of packets, for example, 4 packets, use the -c option:

tcpdump -c 4


Adding Date Information

tcpdump does not display the date of packet capture by default. To include the date in the output, use the -tttt option. The date will appear at the beginning of each line in the format year-month-day:

tcpdump -tttt

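The leading date field makes it easy to filter a capture log by day with standard tools. A sketch with two hardcoded sample lines standing in for real -tttt output:

```shell
# Keep only packets captured on 2024-11-05; with -tttt the date is field 1.
printf '%s\n' \
  '2024-11-05 09:33:57.063196 IP 10.0.0.1.22 > 10.0.0.2.50653: Flags [P.], length 64' \
  '2024-11-06 10:01:02.000001 IP 10.0.0.2.50653 > 10.0.0.1.22: Flags [.], length 0' |
awk '$1 == "2024-11-05"'
```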

Packet Filtering in tcpdump

tcpdump has extensive filters that allow capturing only the desired packets. Here are some key filters.

Filtering by Port

To capture traffic on a specific port, use the port primitive. For example, to capture traffic whose destination is port 80, combine it with dst:

tcpdump -n dst port 80


You can also specify a range of ports:

tcpdump -n portrange 80-443


Filtering by Protocol

tcpdump supports filtering by protocols. Supported protocol values include: ether, fddi, tr, wlan, ppp, slip, link, ip, arp, rarp, tcp, udp, icmp, and ipv6. Examples for capturing specific protocols are:

tcpdump icmp


tcpdump tcp


tcpdump arp


tcpdump udp


Filtering by Packet Size

tcpdump allows capturing packets of a specified size using two options:

  • less — captures packets whose length is less than or equal to the specified number of bytes.
  • greater — captures packets whose length is greater than or equal to the specified number of bytes.

Here are some examples:

Capture traffic with packets that are no more than 43 bytes in size:

tcpdump less 43


Capture traffic with packets that are 43 bytes or larger:

tcpdump greater 43


Note that the packet size includes header size: an Ethernet header without CRC occupies 14 bytes, an IPv4 header occupies 20 bytes, and an ICMP header occupies 8 bytes.
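The 43-byte threshold used in the examples above can be derived from those header sizes; a quick sketch of the arithmetic:

```shell
# A minimal ICMP packet on Ethernet: 14 (Ethernet, no CRC) + 20 (IPv4)
# + 8 (ICMP header) = 42 bytes, so "less 43" still matches an empty ping.
eth=14; ipv4=20; icmp=8
echo "minimum ICMP packet: $((eth + ipv4 + icmp)) bytes"   # prints: minimum ICMP packet: 42 bytes
```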

Filtering by MAC Address

To filter by MAC address, use the ether host option. For example, to capture any traffic sent to or from a specified MAC address (e.g., 00:0c:29:c7:00:3f), use:

tcpdump ether host 00:0c:29:c7:00:3f


Filtering by Source or Destination Address

You can filter traffic using the IP address or hostname of the source or destination.

To capture traffic originating from a specific host, use the src option:

tcpdump -nn src 192.168.36.132


To capture traffic directed to a specific host, use the dst option:

tcpdump -nn dst 192.168.36.132


Using Logical Operators in tcpdump

tcpdump supports various logical operators, allowing you to combine options. The following operators are supported:

  • and or && — logical "AND." Combines multiple conditions and shows results matching all conditions.
  • or or || — logical "OR." Combines multiple conditions and shows results matching at least one condition.
  • not or ! — logical "NOT." Excludes specified conditions, showing results that do not match the given condition.

Here are examples of using logical operators:

Capture packets sent from the host 192.168.36.132 that use port 22:

tcpdump -nn src 192.168.36.132 and port 22


Capture packets on all available interfaces that use either port 22 or port 80:

tcpdump -nn port 22 or port 80


Capture all packets except ICMP packets:

tcpdump -nn not icmp

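When combining operators, parentheses can group conditions, but parentheses and ! are special characters to the shell, so it is safest to wrap the whole filter expression in single quotes. A sketch that only builds and prints such a command line (the address and ports are placeholders):

```shell
# Quote the filter so the shell passes "(", ")" and "!" through to tcpdump.
filter='src 192.168.36.132 and (port 22 or port 80)'
echo "tcpdump -nn '$filter'"
```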

Saving Output to a File

As previously mentioned, tcpdump does not save its output to a file by default. To save captured data to a file, use the -w option, specifying the filename with a .pcap extension:

tcpdump -nn src 192.168.36.132 -w results.pcap


While saving to a file, results are not displayed in the terminal. To stop capturing packets, press Ctrl + C.
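The -w option writes raw packets in the pcap format rather than text, which is why the file is unreadable with cat but opens in tools like Wireshark. As an illustration of the format, classic pcap files start with a fixed 4-byte magic number; this sketch writes that magic to a scratch file and reads it back (no real capture is involved):

```shell
# Classic pcap magic is 0xa1b2c3d4, stored little-endian as d4 c3 b2 a1
# (octal escapes \324\303\262\241); readers check it before parsing.
printf '\324\303\262\241' > /tmp/demo.pcap
magic=$(od -An -tx1 -N4 /tmp/demo.pcap | tr -d ' \n')
echo "magic=$magic"   # prints: magic=d4c3b2a1
```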

To read the data saved in the file, use the -r option, followed by the filename where the tcpdump results were saved:

tcpdump -r results.pcap


Conclusion

tcpdump is a powerful command-line tool for analyzing networks and identifying issues. The utility supports a wide array of options, enabling users to filter for specific packet information.


Similar

VPN

Installing and Configuring Wireproxy

Wireproxy is a WireGuard client that acts as a SOCKS5/HTTP proxy server or tunnel. It is particularly useful when you need to connect to certain websites through a WireGuard peer but do not want or cannot configure a new network interface for various reasons. In this article, we will cover how to create a SOCKS5 proxy using Wireproxy, as well as how to connect to it via the FoxyProxy extension for the Firefox browser. Main reasons why Wireproxy might be the preferred choice: Using WireGuard as a traffic proxy. No need for administrator privileges to modify WireGuard settings. Wireproxy provides full isolation from the device’s network interfaces, allowing it to be used without administrative configuration. Key Features of Wireproxy Static TCP routing for both client and server. SOCKS5/HTTP proxy support (currently only CONNECT is supported). Developers are working on additional features, including UDP support in SOCKS5 and static UDP routing. Installing Wireproxy Wireproxy supports multiple operating systems, including Linux, macOS, and Windows. There are two main installation methods: Building the project from source using Go. Downloading a precompiled version for your platform. Building from source ensures the latest code, while a precompiled version offers stability and convenience. Installing the Precompiled Version (Windows) Go to the GitHub releases page and download the archive for your operating system. For Windows, download wireproxy_windows_amd64.tar.gz. Extract the archive and place wireproxy.exe in a convenient location, e.g., create a wireproxy folder on your desktop. Open the Windows Command Prompt or PowerShell and navigate to the folder using: cd Desktop\wireproxy Verify the utility works correctly: wireproxy.exe -v Building from Source Using Go (Linux) Prerequisites Ensure Go version 1.20 or higher is installed: go version If Go is not installed, use this Ubuntu 22.04 installation guide. 
Build process Clone the Wireproxy repository: git clone https://github.com/octeep/wireproxy cd wireproxy Run the build process: make After the build completes, verify: ./wireproxy -v Configuring Wireproxy After installing Wireproxy, the next step is configuring the utility. You need a WireGuard configuration file. You can create a new server and set up WireGuard manually, e.g., following this Hostman guide. Alternatively, use the Marketplace section when creating a server and select Wireguard-GUI. A typical WireGuard configuration file looks like this: [Interface] PrivateKey = [Your_Private_Key] Address = 10.0.0.2/32 DNS = 8.8.8.8 [Peer] PublicKey = [Server_Public_Key] Endpoint = [Server_IP:Port] AllowedIPs = 0.0.0.0/0 PersistentKeepalive = 20 Place the WireGuard configuration file in the wireproxy folder you created earlier. In this example, the file is named wg.conf. Creating the Wireproxy Configuration In the wireproxy directory, create wp.conf for the SOCKS5 proxy configuration: WGConfig = ./wg.conf [Socks5] BindAddress = 127.0.0.1:25344 Username = hostman Password = hostman WGConfig specifies the path to your WireGuard config. BindAddress defines the local proxy address and port. Username and Password are optional login credentials for the proxy. Testing the Configuration Linux: ./wireproxy -c wp.conf -n Windows: wireproxy.exe -c wp.conf -n This checks that the configuration is correct without starting the proxy. Running Wireproxy Linux: ./wireproxy -c wp.conf Windows: wireproxy.exe -c wp.conf For background execution, use the -d flag: Linux: ./wireproxy -c wp.conf -d Windows: wireproxy.exe -c wp.conf -d Connecting to Wireproxy via Browser Extension To use Wireproxy in a browser, specialized proxy management extensions can be used. In this example, we will configure FoxyProxy in Firefox, though similar steps apply to other browsers, e.g., Chrome with Proxy SwitchyOmega. 
Installing and Configuring FoxyProxy in Firefox Install FoxyProxy from FoxyProxy for Firefox. Click the FoxyProxy icon and select Options to open settings. Click Add to create a new proxy configuration. Set Proxy Type to SOCKS5. Enter 127.0.0.1 as Proxy IP and 25344 as Port. If a username and password were set in Wireproxy, enter them in Username and Password. Click Save to store the configuration. Click the FoxyProxy icon again and select the newly created configuration to connect to the proxy. Visit any IP check service online to confirm that the IP address has changed. This verifies that your traffic is routed through Wireproxy. FoxyProxy supports patterns to apply proxy usage only to specific sites. Open the FoxyProxy menu and select Options. Click Patterns in your existing connection. Enable patterns by clicking the FoxyProxy icon and selecting Use Enable Proxies By Patterns and Order. After this, the proxy will only be used for websites specified in your patterns. Conclusion In this article, we covered the installation and configuration of Wireproxy, a tool for creating SOCKS5/HTTP proxies via WireGuard. Wireproxy’s standout feature is its ability to operate in user space, simplifying setup and usage, especially for users without administrative privileges. We also demonstrated integrating Wireproxy with browser extensions for convenient proxy management.
25 August 2025 · 5 min to read
API

How to Secure an API: Methods and Best Practices

APIs are the bridges between programs in the modern internet. When you order a taxi, the app communicates with the server via an API. When you buy something online, the payment system checks your card through a banking API. These invisible connections handle billions of operations every day. However, an unsecured API is an open gateway for attackers. Real statistics show the scale of the problem: 99% of organizations reported at least one API-related incident in the past year. The total number of API attacks in Q3 2024 exceeded 271 million, which is 85% more than attacks on regular websites. Most companies provide unrestricted access to half of their APIs, often without realizing it. The good news is that 90% of attacks can be blocked with simple security measures. Most attackers rely on the assumption that the API is completely unprotected. Basic security strategies filter out attackers. From this guide, you will get five practical steps to secure an API that can be implemented within a week. No complex theory—only what really works in production. After reading, you will have a secure API capable of withstanding most attacks. Step One: Authentication Authentication answers a simple question: “Who is this?” Imagine an API as an office building with a security guard at the entrance. Without checking IDs, anyone can enter: employees, couriers, or thieves. Similarly, an API without authentication is available to anyone on the internet. Anyone can send a request and access your data. Why authentication is important: Protect confidential data: Your API likely handles information that should not be publicly accessible: user profiles, purchase history, medical records. Without authentication, this data becomes public. Track request sources: When something goes wrong, you need to know where the problem originated. Authentication ties each request to a specific client, making incident investigation and blocking attackers easier. 
API Keys — Simple and Reliable An API key works like an office pass. Each application is issued a unique card that must be presented for each entry. How it works: The server generates a random string of 32–64 characters. The key is issued to the client application once. The application sends the key with every request. The server verifies the key in the database. Pros: Easy to implement in a few hours Simple to block a specific key Good for internal integrations Cons: Database load for each verification Difficult to manage with thousands of clients Risk of key leakage from client code JWT Tokens — Modern Standard JWT (JSON Web Token) is like a passport with built-in protection against forgery. The token contains user information and does not require constant server verification. Token structure: Header — encryption algorithm Payload — user ID, role, permissions Signature — prevents tampering When to use: Microservices architecture High-load systems Mobile applications Pros: High performance—no database queries needed Token contains all necessary information Supported by all modern frameworks Cons: Difficult to revoke before expiration Compromise of the secret key is critical Token can become large if overloaded with data OAuth 2.0 — For External Integrations OAuth 2.0 solves the problem of secure access to someone else’s data without sharing passwords. It is like a power of attorney—you allow an application to act on your behalf within limited scopes. 
Participants: User — data owner Application — requests access Authorization server — verifies and issues permissions API — provides data according to the token Typical scenarios: “Sign in with Google” in mobile apps Posting to social media on behalf of a user Banking apps accessing account data How to Choose the Right Method Let’s look at the characteristics of each method: Criterion API Keys JWT Tokens OAuth 2.0 Complexity Low Medium High Setup Time 2 hours 8 hours 2 days For MVP Ideal Possible Overkill Number of Clients Up to 100 Thousands Any number External Integrations Limited Poor Ideal Stage Recommendations: Prototype (0–1,000 users): Start with API keys. They protect against accidental access and give time to understand usage patterns. Growth (1,000–100,000 users): Move to JWT tokens. They reduce database load and provide more flexibility. Scale (100,000+ users): Add OAuth 2.0 for integrations with major platforms. Start with API keys, even if you plan something more complex. A working simple security system is better than a planned perfect one. Transition to other methods gradually without breaking existing integrations. Remember: An API without authentication is a critical vulnerability that must be addressed first. Step Two: Authorization Authentication shows who the user is. Now you need to decide what they are allowed to do. Authorization is like an office access system: everyone has an entry card, but only IT can enter the server room, and accountants can access the document archive. Without proper authorization, authentication is meaningless. An attacker may gain legitimate access to the API but view other people’s data or perform prohibited operations. 
Role System Three basic roles for any API: Admin Full access to all functions User and settings management View system analytics and logs Critical operations: delete data, change configuration User Work only with own data Create and edit personal content Standard operations: profile, orders, files Access to publicly available information Guest View public information only Product catalogs, news, reference data No editing or creation operations Limited functionality without registration Grant users only the permissions critical for their tasks. When in doubt, deny. Adding permissions is easier than fixing abuse consequences. Additional roles as the system grows: Moderator — manage user content Manager — access analytics and reports Support — view user data for issue resolution Partner — limited access for external integrations Data Access Control It’s not enough to check the user’s role. You must ensure they can work only with the data they are allowed to. A user with the “User” role should edit only their posts, orders, and profile. Example access rules: Users can edit only their profile Orders are visible to the buyer, manager, and admin Financial reports are accessible only to management and accounting System logs are viewable only by administrators Access Rights Matrix: Resource Guest User Moderator Admin Public Content Read Read Read + Moderation Full Access Own Profile - Read + Write - Full Access Other Profiles - - Read Full Access System Settings - - - Full Access Critical operations require additional checks, even for admins: User deletion — confirmation via email Changing system settings — two-factor authentication Bulk operations — additional password or token Access to financial data — separate permissions and audit Common Authorization Mistakes Checking only on the frontend: JavaScript can be bypassed or modified. Attackers can send requests directly to the API, bypassing the interface. Always check permissions on the server. 
Overly broad access rights: “All users can edit all data” is a common early mistake. As the system grows, this leads to accidental changes and abuse. Start with strict restrictions. Forgotten test accounts: Test accounts often remain in production with elevated permissions. Regularly audit users and remove inactive accounts. Lack of change auditing: Who changed what and when in critical data? Without logging admin actions, incident investigation is impossible. Checking authorization only once: User permissions can change during a session. Employee dismissal, account blocking, or role changes should immediately reflect in API access. Mixing authentication and authorization: “If the user is logged in, they can do everything” is a dangerous logic. Authentication and authorization are separate steps; each can result in denial. Proper authorization balances security and usability. Too strict rules frustrate users; too lax rules create security holes. Start with simple roles, increase complexity as needed, but never skip permission checks. Step Three: HTTPS and Encryption Imagine sending an important letter through the mail. HTTP is like an open postcard that any mail carrier can read. HTTPS is a sealed envelope with a personal stamp that only the recipient can open. All data between the client and the API travels through dozens of intermediate servers on the internet. Without encryption, any of these servers can eavesdrop and steal confidential information. Why HTTP is Unsafe What an attacker can see when intercepting HTTP traffic: API keys and access tokens in plain text User passwords during login Credit card numbers and payment information Personal information: addresses, phone numbers, medical records Contents of messages and documents 19% of all successful cyberattacks are man-in-the-middle attacks, a significant portion of which involve open networks (usually HTTP) or incorrect encryption configuration. 
Public Wi-Fi networks, corporate networks with careless administrators, ISPs in countries with strict censorship, and rogue access points with names like “Free WiFi” are particularly vulnerable. Setting Up HTTPS Obtaining SSL Certificates An SSL certificate is a digital document that verifies the authenticity of your server. Without it, browsers display a warning about an insecure connection. Free options: Let’s Encrypt — issues certificates for 90 days with automatic renewal Cloudflare — free SSL for websites using their CDN Hosting providers — many include SSL in basic plans Paid SSL certificates are used where a particularly high level of trust is required, for example for large companies, financial and medical organizations, or when an Extended Validation (EV) certificate is needed to confirm the legal identity of the site owner. Enforcing HTTP to HTTPS Redirection Simply enabling HTTPS is not enough—you must prevent the use of HTTP. Configure automatic redirection of all requests to the secure version. Check configuration: Open your API in a browser. It should show a green padlock. Try the HTTP version. It should automatically redirect to HTTPS. Use SSL Labs test to verify configuration. Security Headers (HSTS) HTTP Strict Transport Security forces browsers to use HTTPS only for your domain. Add the header to all API responses: Strict-Transport-Security: max-age=31536000; includeSubDomains This means: “For the next year, communicate with us only via HTTPS, including all subdomains.” Additional Encryption HTTPS protects data in transit, but in the database it is stored in plain text. Critical information requires additional encryption. 
Must encrypt: User passwords — use bcrypt, not MD5 API keys — store hashes, not raw value Credit card numbers — if processing payments Medical data — per HIPAA or equivalent regulations Recommended encryption: Personal data: phone numbers, addresses, birth dates Confidential user documents Internal tokens and application secrets Critical system settings The hardest part of encryption is secure key storage. Encryption keys must not be stored alongside encrypted data. Rotate encryption keys periodically. If a key is compromised, all data encrypted with it becomes vulnerable. HTTPS is the minimum requirement for any API in 2025. Users do not trust unencrypted connections, search engines rank them lower, and laws in many countries explicitly require encryption of personal data. Step Four: Data Validation Users can send anything to your API: abc instead of a number, a script with malicious code instead of an email, or a 5 GB file instead of an avatar. Validation is quality control at the system’s entry point. Golden rule: Never trust incoming data. Even if the data comes from your own application, it may have been altered in transit or generated by a malicious program. Three Validation Rules Rule 1: Check Data Types Age must be a number, not a string. Email must be text, not an array. Dates must be in the correct format, not random characters. Rule 2: Limit Field Length Unlimited fields cause numerous problems. Attackers can overload the server with huge strings or fill the entire database with a single request. Rule 3: Validate Data Format Even if the data type is correct, the content may be invalid. An email without @ is not valid, and a phone number with letters cannot be called. Injection Protection SQL injection is one of the most dangerous attacks. An attacker inserts SQL commands into normal form fields. If your code directly inserts user input into SQL queries, the attacker can take control of the database. Example: A search field for users. 
A legitimate user enters “John,” but an attacker enters: '; DROP TABLE users; --. If the code directly inserts this into a query: SELECT * FROM users WHERE name = ''; DROP TABLE users; -- Result: the users table is deleted. Safe approach: Queries and data are sent separately. The database automatically escapes special characters. Malicious code becomes harmless text. File Validation Size limits: One large file can fill the server disk. Set reasonable limits for each operation. File type checking: Users may upload executable files with viruses or scripts. Allow only safe formats. Check more than the extension: Attackers can rename virus.exe to photo.jpg. Check the actual file type by content, not just by name. Quarantine files: Store uploaded files in separate storage with no execution rights. Scan with an antivirus before making them available to others. Data validation is your first line of defense against most attacks. Spending time on thorough input validation prevents 70% of security issues. Remember: it’s better to reject a legitimate request than to allow a malicious one. Step Five: Rate Limiting Rate Limiting is a system to control the request speed to your API. Like a subway turnstile letting people through one at a time, the rate limiter controls the flow of requests from each client. Without limits, a single user could overwhelm your server with thousands of requests per second, making the API unavailable to others. This is especially critical in the age of automated attacks and bots. Why Limit Request Rates DDoS protection: Distributed denial-of-service attacks occur when thousands of computers bombard your server simultaneously. Rate Limiting automatically blocks sources with abnormally high traffic. Prevent abuse: Not all attacks are malicious. A developer may accidentally run a script in an infinite loop. A buggy mobile app may send requests every millisecond. Rate Limiting protects against these incidents. 
Fair resource distribution: One user should not monopolize the API to the detriment of others. Limits ensure all clients have equal access. Cost control: Each request consumes CPU, memory, and database resources. Rate Limiting helps forecast load and plan capacity. Defining Limits Not all requests place the same load on the server. Simple reads are fast; report generation may take minutes. Light operations (100–1,000 requests/hour): Fetch user profile List items in catalog Check order status Ping and healthcheck endpoints Medium operations (10–100 requests/hour): Create a new post or comment Upload images Send notifications Search the database Heavy operations (1–10 requests/hour): Generate complex reports Bulk export of data External API calls Limits may vary depending on circumstances: more requests during daytime, fewer at night; weekends may have different limits; during overload, limits may temporarily decrease, etc. When a user reaches the limit, they must understand what is happening and what to do next. Good API response when limit is exceeded: HTTP Status: 429 Too Many Requests { "error": "rate_limit_exceeded", "message": "Request limit exceeded. Please try again in 60 seconds.", "current_limit": 1000, "requests_made": 1000, "reset_time": "2025-07-27T22:15:00Z", "retry_after": 60 } Bad response: HTTP Status: 500 Internal Server Error { "error": "Something went wrong" } Rate Limiting is not an obstacle for users but a protection of service quality. Properly configured limits are invisible to honest clients but effectively block abuse. Start with conservative limits and adjust based on actual usage statistics. Conclusion Securing an API is not a one-time task at launch but a continuous process that evolves with your project. Cyber threats evolve daily, but basic security strategies remain unchanged. 80% of attacks can be blocked with 20% of effort. These 20% are the basic measures from this guide: HTTPS, authentication, data validation, and rate limiting. 
Do not chase perfect protection until you have implemented the fundamentals.
22 August 2025 · 14 min to read
Linux

How to Use Telnet Command on Linux

The telnet command is a great and handy Linux network service communication utility. From remote server and system port scans, to debugging network connections, telnet offers easy text-based interaction with a remote host. In this step by step guide, you can see how to install, configure, and utilize telnet in Linux. We shall also discuss its various options and features so that you can have a complete idea. What is Telnet? telnet, or "Telecommunication Network," is a remote network protocol on another computer over the Transmission Control Protocol (TCP). telnet provides the ability to directly specify the remote host on a particular port so that commands may be sent and output directly read in real time. telnet is employed primarily for: Testing Open Ports: Determine if a server has an open port. Accessing Services: Get direct access to the web, e-mail, or other networked services. Troubleshooting Network Issues: Fix network connectivity issues or port not available issues. Installing Telnet on Linux telnet is not pre-installed on most modern Linux distributions. Installation depends on your system type. For Ubuntu/Debian-Based Systems An Ubuntu or any Debian-based Linux user can install telnet with the apt package manager: sudo apt install telnet For Red Hat/CentOS-Based Systems telnet can be installed on RedHat, CentOS, or Fedora by using the yum or dnf package managers: sudo yum install telnet For newer versions: sudo dnf install telnet Understanding the Telnet Command Syntax The telnet command syntax is simple: telnet [hostname/IP] [port] Where: [hostname/IP]: Specifies the hostname or IP address of the remote host. [port]: Specifies the port number you want to connect to. It can be omitted, and the default port (23) is used.  telnet establishes one direct connection to services on specific ports, like HTTP (port 80), SMTP (port 25), or FTP (port 21). 
Different Options Available for the Telnet Command

The telnet command offers several options that extend its functionality:

- -4 — force telnet to use IPv4 only when establishing a connection.

- -6 — force telnet to use IPv6 only when connecting.

- -8 — allow 8-bit data transfer through telnet.

- -E — disable the escape character, so no escape sequences are recognized during the session.

- -K — disable automatic login to the remote host (e.g., prevent passing a Kerberos ticket).

- -L — use an 8-bit data path on output.

- -X atype — disable the specified authentication type (e.g., KERBEROS_V4) for the session.

- -a — attempt automatic login, sending the current user's login name to the remote system.

- -d — enable debugging mode, printing detailed information about the connection process.

- -e char — set a different escape character for the session.

- -l user — specify the username for the login attempt.

- -n tracefile — write session activity to the specified trace file for debugging or logging.

- -b addr — bind telnet to a specific local interface or address when connecting.

- -r — use an rlogin-style user interface.

Using Telnet: Practical Applications

telnet provides diagnostic and testing capabilities for networks. Some of these include:

Test Open Ports

telnet is often used to verify whether a given port on a server is open. To check port 80, run:

telnet example.com 80

If the port is open, telnet connects, and you may see a blank screen awaiting input: a good indication that the port is listening. If the port is firewalled or closed, you will get an error message such as "Connection refused."

Interact with SMTP Servers

telnet can help debug email servers by sending raw SMTP commands.
To connect to an SMTP server on port 25:

telnet mail.example.com 25

Once connected, you can type SMTP commands such as HELO, MAIL FROM, and RCPT TO directly to communicate with the server.

Send HTTP Requests

telnet enables manual HTTP requests for debugging web servers. For example:

telnet example.com 80

After connecting, type:

GET / HTTP/1.1
Host: example.com

Press Enter twice to send the request, and the server's response will appear.

Connect Using IPv4

If the server supports both IPv4 and IPv6, you can force the connection to use IPv4:

telnet -4 example.com 80

This ensures compatibility with IPv4-only networks.

Debugging a MySQL Server

telnet can connect to a MySQL database server to check whether its port (3306 by default) is open:

telnet database.example.com 3306

Replace database.example.com with your MySQL server address. If the connection succeeds, telnet displays a protocol-specific greeting from the MySQL server.

Security Considerations When Using Telnet

Although telnet is a handy utility, it is fundamentally insecure: it sends all data, including passwords, in cleartext. Consequently:

- Don't use telnet over untrusted networks — use a secure, private network whenever possible.

- Use alternatives — prefer SSH (Secure Shell) for encrypted communication.

- Restrict access — disable telnet on your servers if you do not use it.

By understanding these risks, you can take precautions to secure your systems.

Exploring Advanced Telnet Use Cases

telnet’s utility extends to a variety of specialized scenarios:

- Monitoring services — interactively query protocols like IMAP or POP3 to diagnose email issues.

- IoT device management — telnet can serve as a direct interface to IoT devices that use text-based protocols.

- Educational use — it is an excellent tool for studying network protocols and server responses.
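The manual HTTP request shown earlier in this article can also be generated non-interactively. This sketch only prints the raw request bytes; HTTP headers use CRLF line endings, and a blank line marks the end of the headers. The example.com host is the same placeholder used above:

```shell
# Build the raw HTTP/1.1 request from the manual steps above.
# printf expands \r\n, giving the CRLF line endings HTTP expects;
# the final blank line tells the server the headers are finished.
host="example.com"
printf 'GET / HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n' "$host"

# To actually send it, the output can be piped into telnet, e.g.:
#   printf 'GET / HTTP/1.1\r\nHost: example.com\r\n\r\n' | telnet example.com 80
```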
Troubleshooting Common Telnet Issues

Despite its simplicity, telnet may run into issues such as:

- Connection refused — the target port is usually firewalled or closed.

- Time-out errors — may indicate network delay or routing problems.

- Permission denied — check user privileges and port availability.

Regularly reviewing server configurations and network settings helps resolve these issues.

Exploring Telnet Alternatives

If telnet's lack of encryption is a security risk for your systems, several alternatives offer comparable functionality with added security and features:

- SSH (Secure Shell) — the most common telnet substitute, providing encrypted communication, tunneling, and strong authentication. Use the ssh command to connect to remote servers securely.

- Netcat (nc) — a full-featured networking tool for debugging, port scanning, and connection testing that handles both TCP and UDP.

- OpenSSL s_client — can be used to test SSL/TLS connections on specific ports.

Conclusion

telnet on Linux is a simple and convenient tool for network diagnostics and debugging. As long as you understand its security limitations and configure your systems appropriately, telnet remains a convenient tool for testing and communicating with network services. With this guide, you have a working setup that balances convenience with responsible caution.
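In the same spirit as the alternatives above, the open-port test can be approximated in plain bash using its /dev/tcp pseudo-device when neither telnet nor nc is available. This is a bash-specific sketch, not a telnet feature, and the two-second timeout is an arbitrary choice:

```shell
# Bash-only sketch: test whether a TCP port accepts connections,
# similar in effect to "telnet host port" but usable in scripts.
check_port() {
  host=$1
  port=$2
  # /dev/tcp/HOST/PORT is interpreted by bash itself; the timeout
  # keeps the check from hanging on filtered (dropped) ports.
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

check_port 127.0.0.1 9   # port 9 (discard) is usually closed on modern systems
```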
24 July 2025 · 6 min to read
