What Is a Virtual Server?
Hostman Team
Technical writer
Infrastructure

Let’s talk about virtual servers: powerful computers whose "hardware" is shared between the many users who want to host their own site or application.

We will dig into how these servers work, what they are capable of, how they differ from regular servers, and how to choose the best one.

The idea behind a virtual server is the same as the one behind an ordinary physical server: it is a place in a data center somewhere in the world where webmasters and developers store the files of their websites and applications.

In general, a server is a PC working 24/7 that holds all the data necessary to maintain a website or another project that needs to be accessible to users around the world.

The main distinctive feature of virtual servers lies in their implementation. They rely on so-called virtualization technology, which makes it possible to emulate many computers on one physical machine. That way we have one powerful PC with room to create many virtual ones within it, so hosting providers (who maintain servers in data centers) don’t have to buy more hardware to extend the service to new users.

How do virtual servers work?

As we mentioned earlier, at the core of virtual servers sits a technology called virtualization. It comes in various types that differ in technical details but perform mainly the same tasks.

A virtual server is produced by a complex program (a hypervisor) that imitates a full-fledged machine, complete with BIOS and other low-level components. In practice, it gives users fully functional "hardware" that they can use as their own computer. But the "hardware" is not hardware in the literal sense: it is real equipment virtualized into a PC and shared between many webmasters and developers using the same hosting provider.
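
Out of curiosity, you can check whether your own Linux machine exposes the hardware virtualization extensions that hypervisors such as KVM rely on. Here is a minimal Python sketch (an illustration only, not any provider’s tooling):

import os

def virtualization_support():
    # Rough check: look for the CPU flags that hypervisors need.
    try:
        with open("/proc/cpuinfo") as f:
            flags = f.read()
    except FileNotFoundError:
        return "not a Linux system (or /proc is unavailable)"

    if "vmx" in flags:        # Intel VT-x
        cpu = "Intel VT-x supported"
    elif "svm" in flags:      # AMD-V
        cpu = "AMD-V supported"
    else:
        return "no hardware virtualization extensions found"

    # /dev/kvm appears once the KVM kernel modules are loaded.
    kvm = "KVM device present" if os.path.exists("/dev/kvm") else "KVM modules not loaded"
    return f"{cpu}; {kvm}"

print(virtualization_support())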

What are virtual servers used for?

Like any server, virtual servers are used to store data from different projects such as:

  • Informational platforms and online stores (most of them need a database, which also requires a server).

  • Databases with private information used inside a company, making it possible to share data internally while keeping it hidden from the outside.

  • Platforms for testing software within a team or individually (when the local machine is not powerful enough).

  • Setups that are made to work with complex systems like Odoo.

  • Game servers (like the ones used to host personal Minecraft worlds) and mail servers (for full control over sent and received email).

  • CCTV systems (to store many gigabytes of recorded video).

  • And of course personal cloud storage. You can use a virtual server as a remote hard disk for images, videos, audio files, etc. (see the sketch after this list).
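
As a toy illustration of that last point, the following sketch turns a directory on a server into browsable remote storage using Python’s built-in http.server module. The path is a hypothetical placeholder, and for real use you would add authentication and HTTPS or use dedicated software:

import functools
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the contents of /home/user/storage (hypothetical path) on port 8000.
handler = functools.partial(SimpleHTTPRequestHandler, directory="/home/user/storage")
HTTPServer(("0.0.0.0", 8000), handler).serve_forever()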

And yes, even virtualized hardware can handle everything listed above, even when the server is used to its maximum.

What are the benefits of virtual servers?

Talking about the advantages of virtual servers… 

  1. One of the main benefits of virtual servers is that they are not as pricey as real physical servers; logically, a virtual PC costs less than a tangible one. This matters because servers cost a lot of money over the long term, especially when a site or application is gaining popularity.

  2. Virtualization brings independence from the physical world. Users have something like an image of a computer that can be seamlessly transported to another hardware platform. Even if the hardware fails, it takes a matter of minutes to relaunch your "PC" on another physical server.

  3. The hosting provider will take care of your virtual server, doing routine work like monitoring system health and preventing failures. There’s no need to hire a separate administration team.

  4. It is a computer with everything you need, such as a firewall, a real IP address, etc. (the snippet below shows a quick way to check which IP your server uses).
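
On the subject of the real IP address: here is a small Python sketch that shows which address a server uses for outbound traffic. On a VPS with its own public IP it prints that address (behind NAT it prints the private one). Connecting a UDP socket sends no packets; it only asks the kernel which local address the route would use:

import socket

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.connect(("8.8.8.8", 80))    # no packets are actually sent
    print(s.getsockname()[0])     # the server's outbound IP address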

Disadvantages of virtual servers

There are some shortcomings too…

  1. The performance of a virtual server will be worse than that of the same hardware configuration running directly on physical hardware. Users of a virtual server get only part of the machine’s resources; other webmasters and developers get the rest.

  2. Even though you have access to many parts of the OS, you cannot interact directly with the actual hard disk or CPU of the host machine. That’s why some functions might be unsupported or inaccessible.

  3. Usually, hosting providers revoke some administrative permissions from users of a virtual server, so you may lose the ability to edit certain system files or low-level components.

VPS and VDS

We have two abbreviations: VPS and VDS. The first stands for Virtual Private Server and the second for Virtual Dedicated Server. In general, both refer to the same technology and describe a way to rent and use a server, but some users see a slight difference between them. So, which one is better?

You might stumble upon the opinion that a VPS is a server that runs on OpenVZ technology, while a VDS runs on KVM.

OpenVZ is an OS-level virtualization layer installed on top of the Linux kernel; every virtual server functions as a copy of that Linux system. You get a lot of virtual PCs, but all of them are based on the one shared kernel. That brings shortfalls: you cannot install an OS other than Linux, you cannot change the filesystem (ext4 only), software components like PPTP and OpenVPN are restricted, and there is no real privacy (the host machine’s administrator has access to your data). On the other hand, virtual private servers with OpenVZ are ordinarily cheaper.

KVM is full virtualization implemented by a dedicated program called a hypervisor. It creates an isolated virtual machine that behaves as your own fully functional PC. This approach brings many privileges: you choose which OS to install and which filesystem to use, and you can even control the BIOS and interact with low-level components such as sockets and the kernel. But the most important part is security: only the renter has access to a KVM server. A virtual dedicated server with this technology will be more expensive.
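
If you are not sure which of the two technologies your rented server uses, a rough check from inside the guest is possible. This sketch relies on two common heuristics (OpenVZ containers expose /proc/vz, while KVM/QEMU guests usually report their hypervisor via DMI), so treat it as an approximation rather than a definitive test:

import os

def guess_virtualization():
    # OpenVZ containers see /proc/vz; only the host node also has /proc/bc.
    if os.path.exists("/proc/vz") and not os.path.exists("/proc/bc"):
        return "looks like an OpenVZ container"
    # KVM/QEMU guests usually identify the hypervisor in the DMI data.
    try:
        with open("/sys/class/dmi/id/sys_vendor") as f:
            vendor = f.read().strip()
        if "QEMU" in vendor or "KVM" in vendor:
            return "looks like a KVM virtual machine"
    except OSError:
        pass
    return "unknown (the systemd-detect-virt utility gives a more thorough answer)"

print(guess_virtualization())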

Windows-based virtual servers

You can rent a virtual server with Windows Server preinstalled. It will certainly be a KVM-based one, with almost unrestricted access to any component of your virtual PC.

We would recommend this type of VDS to those who want to work, or already work, with Microsoft software:

  • You are used to applications like Outlook and Office and want to keep using them while building an online working environment for your team or yourself.

  • You work with a team that relies heavily on Microsoft’s ecosystem and is used to working with Windows applications only.

  • You want to set up a remote working space with a graphical interface.

Also, a virtual server for Windows is a great place to collaboratively develop products with Microsoft’s proprietary technologies like .NET, or using specialized applications like Microsoft Visual Studio.

To create a virtual server with Windows, either rent an "empty" VPS and install Windows manually, as you would on a regular PC, or choose a plan with Windows preinstalled on your host’s website.

Linux-based virtual servers

A Linux-based virtual server can use either of the two technologies, OpenVZ or KVM. You choose.

We would recommend a virtual server with Linux to those who don’t really need any Microsoft software and want a functional, high-performing platform:

  • Those who want more control over the system they use.

  • Those who want to save on renting an expensive, overpowered server by using a lightweight Linux system without a graphical interface and other "resource hogs".

  • Those who would like to use a VDS to develop or host projects built with web technologies such as Node.js, JavaScript, etc.

Furthermore, Linux is a safer place to store different kinds of data.

To create a Linux virtual server, you usually just need to buy a VPS, and that’s it. Ubuntu (a Linux distribution) is the most common OS preinstalled on servers, so chances are you won’t spend any time installing or reinstalling an OS. A quick way to confirm what your new server is running is shown below.
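
For instance, here is a quick sketch that reads the standard /etc/os-release file to confirm which distribution a fresh server is running:

# Print the human-readable name of the installed Linux distribution.
with open("/etc/os-release") as f:
    info = dict(line.rstrip().split("=", 1) for line in f if "=" in line)

print(info.get("PRETTY_NAME", "unknown").strip('"'))  # e.g. "Ubuntu 22.04 LTS"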

Virtual machine vs virtual server

Both are great tools to develop and test software products but in different ways.

A virtual machine is a virtual PC inside your PC. It is installed locally via a hypervisor, using the virtualization support built into your CPU and OS. Basically, it is similar to a VDS, but you are the host: it uses your machine’s resources, and you decide how much of them the virtual server should take.

Why might you want to use a virtual machine instead of a virtual server? For example:

  1. You have an exceptionally powerful computer, so a VM would be a reliable platform to develop and test your applications.

  2. You want to save money on renting a VDS.

  3. You have a poor internet connection, so a local VM does its job faster anyway.

  4. You are going to work with confidential data that shouldn’t be stored somewhere on the web.

If that’s not you, a VDS might be a more reliable platform to work with.

Physical servers vs virtual servers

This is fairly straightforward. A physical server is a regular PC that sits somewhere in a data center and, in theory, never turns off.

Is there a big difference between virtual and physical servers? Not really. Generally, you can use a VDS to do everything you can do on a dedicated server, with almost no drawbacks, because, as we pointed out earlier, KVM technology gives VDS users access even to things like the BIOS.

The only reason you might want to go with a dedicated server is performance. It will be fast enough to deploy complex, resource-intensive projects like game worlds, where it is absolutely necessary to keep things fast (in terms of CPU, RAM, and internet connection).

Are there free virtual servers?

Yes, but we wouldn’t recommend using them. Moreover, we would recommend avoiding them.

Hosting your project on a free server seems like a great opportunity: nothing to give and a lot to get. But that’s not really true.

Free virtual server hosts will negatively affect your app or website because their hardware and software are usually quite slow. Such providers have no incentive to offer adequate loading and operating speeds.

Free servers give you only third-level domains, so you can forget about good SEO scores.

The host will severely limit the amount of space for your files, and of course you will never have real control over the server.

A free server is free for you but not for the provider, so don’t be fooled by the "price": the provider will definitely try to make money off you. For example, they might put ads on your site or in your app without your consent, or secretly sell your confidential data to advertisers.

When using a free server, you should be prepared to lose all of your content at any moment without warning. So, as you can see, the price is high.

How to choose a virtual server?

When choosing a virtual server, consider four key criteria:

Linux or Windows

We discussed this above, so reread that part and decide which OS you want (or need) to use on your VDS.

Hardware

Hosting providers publish the performance level of each plan, so you can choose a VDS based on that information without hesitation. For small apps and sites you don’t need a super-powerful PC, but you should definitely consider an option with SSD storage.

Geolocation

The closer the server is to the users of an app or site, the faster it works for them. Try to choose a location that will be fast enough for everyone; a simple latency check like the one below can help.
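
One simple way to compare locations is to time a TCP connection to test hosts in different regions. The hostnames below are hypothetical placeholders; substitute the real test endpoints your candidate providers publish:

import socket
import time

CANDIDATES = {
    "Amsterdam": ("speedtest-ams.example.com", 443),  # hypothetical hosts
    "New York":  ("speedtest-nyc.example.com", 443),
    "Singapore": ("speedtest-sgp.example.com", 443),
}

for region, (host, port) in CANDIDATES.items():
    start = time.monotonic()
    try:
        # Time how long the TCP handshake takes; a rough latency proxy.
        with socket.create_connection((host, port), timeout=3):
            rtt_ms = (time.monotonic() - start) * 1000
        print(f"{region:10s} {rtt_ms:6.1f} ms")
    except OSError as err:
        print(f"{region:10s} unreachable ({err})")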

Control Panel

Besides the command line, you will sometimes use the control panel to interact with the server, so it should be user-friendly and functional enough to fulfil your needs.

Best virtual servers

You can find thousands of hosts around the web, but there are some big names worth considering first. For example, DigitalOcean: one of the most modern and reliable providers, quite popular and relatively inexpensive. You might also consider the IBM platform and rent a VDS there.

If you don’t really need to control your server but want to host an app or website in a few clicks with the power and quality of Microsoft’s and Amazon’s ecosystems, you might want to consider Hostman as your provider.

It makes managing any web project or application a breeze, so you can concentrate on the creative part of your work while delegating all routine tasks to Hostman’s professional administrators.

You can try it with a free 7-day trial. Create your virtual server here.

