
A Complete Guide to the nslookup Command in Linux and Windows
Shahid Ali
Technical writer
Network DNS
18.10.2024
Reading time: 4 min

The nslookup command is a widely used tool for querying Domain Name System (DNS) records. It helps network administrators troubleshoot DNS-related issues by allowing them to perform a range of lookups, from finding IP addresses associated with domain names to querying specific DNS servers. This tutorial will guide you through the basics of using nslookup on both Linux and Windows platforms.

In this tutorial, you will learn:

  • Basic syntax and options of nslookup
  • How to perform simple DNS queries
  • Retrieving mail exchange (MX) records
  • Performing reverse DNS lookups
  • Querying specific DNS servers
  • Using non-interactive mode

By the end of this tutorial, you will be familiar with the most common and useful nslookup commands for effective DNS troubleshooting.

Basic Syntax and Options for nslookup

The basic syntax for the nslookup command is straightforward:

nslookup [options] [domain]

Here is a breakdown of the commonly used options:

  • No parameters: Opens an interactive mode where you can enter multiple queries
  • [domain]: Performs a DNS lookup for the specified domain name
  • -type=[record_type]: Specifies the type of DNS record to query (e.g., A, MX, AAAA)
  • [server]: Specifies a DNS server to query instead of the default system server

For example:

nslookup example.com

This command performs a DNS lookup for "example.com" using your default DNS server.
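Running nslookup with no arguments opens interactive mode, where you can issue several queries in one session. A brief illustrative session (the > is nslookup's prompt; the parenthetical annotations are explanations, not part of the input):

```
$ nslookup
> example.com        (look up the A record for example.com)
> set type=MX        (switch the query type to MX records)
> example.com        (same domain, now returns mail servers)
> server 1.1.1.1     (switch to Cloudflare's resolver)
> exit
```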

Common Options for nslookup

  • -query=A: Query the IPv4 address (the default record type)
  • -query=MX: Retrieve mail exchange records
  • -query=AAAA: Query for IPv6 addresses
  • -timeout=[seconds]: Set a timeout for the response
  • -debug: Show detailed information about the query process

Note that -query= and -type= are interchangeable; both select the record type to look up.

How to Perform a Simple DNS Query

One of the most common uses of nslookup is to resolve domain names to IP addresses.

Step-by-Step Guide to Performing a Simple DNS Query

  1. Open the terminal or command prompt.
  2. Type the nslookup command followed by the domain name:
nslookup google.com

Output:

In this example, the DNS server at 8.8.8.8 (Google's public DNS server) returned the IP address 142.250.65.238 for google.com.
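For reference, a typical response looks like the following (reconstructed here from the values above; the exact addresses you see will vary by location and time):

```
Server:		8.8.8.8
Address:	8.8.8.8#53

Non-authoritative answer:
Name:	google.com
Address: 142.250.65.238
```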

Using nslookup to Retrieve MX Records

The mail exchange (MX) records for a domain indicate which mail servers are responsible for receiving emails on behalf of that domain. To retrieve the MX records using nslookup:

Use the -query=MX option (equivalent to -type=MX) to specify that you want to retrieve MX records.

    nslookup -query=MX gmail.com


The output will list the MX records, including the mail servers and their priority:

Server:		8.8.8.8
Address:	8.8.8.8#53

Non-authoritative answer:
gmail.com	mail exchanger = 20 alt2.gmail-smtp-in.l.google.com.
gmail.com	mail exchanger = 10 alt1.gmail-smtp-in.l.google.com.

In this case, the mail servers for gmail.com are listed along with their priorities. The lower the number, the higher the priority.
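When scripting, it is often handy to order the mail servers by priority. A minimal POSIX-shell sketch, run here against the sample output above (a live check would instead capture mx_output from nslookup -query=MX gmail.com, which requires network access):

```shell
# Sample lines copied from the nslookup output above; in live use:
#   mx_output="$(nslookup -query=MX gmail.com)"
mx_output='gmail.com   mail exchanger = 20 alt2.gmail-smtp-in.l.google.com.
gmail.com   mail exchanger = 10 alt1.gmail-smtp-in.l.google.com.'

# Keep only "mail exchanger" lines, print "priority host", and sort
# numerically so the highest-priority (lowest-numbered) server comes first.
sorted="$(printf '%s\n' "$mx_output" \
  | awk '/mail exchanger/ {print $(NF-1), $NF}' \
  | sort -n)"
echo "$sorted"
```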

Performing Reverse DNS Lookups

A reverse DNS lookup translates an IP address back to its associated domain name. This is useful for identifying the domain that corresponds to a given IP address.

To perform a reverse DNS lookup, input the IP address into the nslookup command:

nslookup 142.250.65.238

The output should display the domain name associated with the IP:


Non-authoritative answer:
238.65.250.142.in-addr.arpa     name = lga25s73-in-f14.1e100.net.

In this example, the IP 142.250.65.238 resolves back to lga25s73-in-f14.1e100.net, which is part of Google's infrastructure.
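Under the hood, a reverse lookup queries a PTR record for a special name built by reversing the IP's octets and appending .in-addr.arpa, which is visible in the output above. A small shell sketch of that transformation:

```shell
# Build the in-addr.arpa name that a reverse (PTR) lookup queries:
# the four octets are reversed and ".in-addr.arpa" is appended.
reverse_name() {
  printf '%s\n' "$1" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa"}'
}

reverse_name 142.250.65.238   # prints 238.65.250.142.in-addr.arpa
```

The same name can also be queried explicitly with nslookup -type=PTR 238.65.250.142.in-addr.arpa.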

Querying Specific DNS Servers

By default, nslookup uses the system's configured DNS server to perform queries. However, you can specify a different DNS server if needed.

To query a specific DNS server, append the server's IP address to the command:

nslookup example.com 1.1.1.1


The command will query the 1.1.1.1 DNS server (Cloudflare's DNS) for the domain example.com:

Server:		1.1.1.1
Address:	1.1.1.1#53

Non-authoritative answer:
Name:		example.com
Address:	93.184.215.14

This allows you to test DNS resolution from different servers.

Using Non-Interactive Mode in nslookup

In non-interactive mode, nslookup answers a single query and exits without entering its interactive shell. This is useful when scripting or automating tasks.

To use nslookup non-interactively, simply pass the domain name and the server (optional) in one command:

nslookup example.com 8.8.8.8


The response will be printed directly, without entering the interactive shell:

Server:		8.8.8.8
Address:	8.8.8.8#53

Non-authoritative answer:
Name:		example.com
Address:	93.184.215.14

This method is efficient when you need to quickly query DNS records without additional input.
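In scripts you usually want just the resolved address rather than the full report. A minimal sketch, run here against the sample output above (live use would instead capture out from nslookup example.com 8.8.8.8):

```shell
# Sample output copied from the run above; in live use:
#   out="$(nslookup example.com 8.8.8.8)"
out='Server:		8.8.8.8
Address:	8.8.8.8#53

Non-authoritative answer:
Name:		example.com
Address:	93.184.215.14'

# The first "Address:" line is the DNS server itself; the last one belongs
# to the answer section, so keep the final match.
ip="$(printf '%s\n' "$out" | awk '/^Address/ {a=$2} END {print a}')"
echo "$ip"
```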

Conclusion

The nslookup command is a powerful and flexible tool for performing DNS queries. Whether you're troubleshooting domain resolution, retrieving MX records, or performing reverse lookups, nslookup is an essential command for network administrators. By mastering the options and syntax, you can use nslookup effectively on both Linux and Windows systems.

To recap, here’s what we covered in this tutorial:

  • Performing simple DNS queries
  • Retrieving MX records
  • Conducting reverse DNS lookups
  • Querying specific DNS servers
  • Using non-interactive mode
