
IPS/IDS. Systems for Detecting and Preventing Cyber Attacks

Hostman Team
Technical writer
Infrastructure

The sphere of information security (infosec), the branch of the industry dedicated to protecting our digital data, is constantly developing new products and mechanisms to protect users from hackers and cybercriminals of every sort. Traditional tools like firewalls and anti-malware software no longer work as effectively as expected, even when it comes to protecting a small company that doesn't have much data, because hackers are becoming ever more inventive at getting around them.

In this article we will discuss IPS/IDS solutions, one of the most effective ways to protect modern network infrastructure. Hostman uses this mechanism to protect its users from all types of cyber attacks.

What is wrong with business network security?

Corporate networks are usually designed with special endpoints that connect several different networks, which can be private or public. The main job of developers and administrators is to make these networks as secure as possible without sacrificing accessibility for regular users outside the corporate network. Nowadays, cyber attacks can be so complex that even the most powerful security systems have a hard time preventing unauthorized access. And attacks become even harder to detect once hackers get past traditional infosec solutions like firewalls and antiviruses: malware can send the server data that looks "normal" to anti-malware software because it is professionally disguised. That's why modern companies implement an IDS (Intrusion Detection System) and an IPS (Intrusion Prevention System) to protect themselves.

What are IDS and IPS?

IPS/IDS is a combination of software and hardware tools that prevents cybercriminals from accessing a secured network. These systems automatically detect intrusions, block them, and notify administrators of attempts to gain unauthorized access to the server. In terms of technical implementation, IDS and IPS are quite similar, but they solve different problems, so you need to be able to distinguish between them. IDS stands for Intrusion Detection System; IPS stands for Intrusion Prevention System. The first detects attacks and unexpected traffic; the second prevents them.

How are IPS/IDS different from firewalls?

Less experienced administrators might not understand why they need an IDS or IPS when firewalls seemingly do the same thing by filtering user traffic. But this is a misconception.

The main function of a firewall is to control access at the network level. A firewall uses certain procedures to determine which computers can communicate with other devices connected to the same network. It admits traffic described as safe in its list of rules (prerequisites defined by admins or developers) and restricts all other connections that don't match those rules. IPS/IDS work the other way around: they block potential threats and admit everything else that does not look suspicious.
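The difference between the two admission models can be sketched in a few lines of Python. The rules, addresses, and function names below are made up for illustration; they are not a real firewall or IPS rule language:

```python
ALLOW_RULES = {("10.0.0.5", 443), ("10.0.0.5", 22)}   # firewall: allowed (dst_ip, dst_port) pairs
THREAT_RULES = {("198.51.100.7", None)}               # IPS: known-bad (src_ip, port), None = any port

def firewall_admits(dst_ip: str, dst_port: int) -> bool:
    """Default-deny: traffic passes only if an explicit allow rule matches."""
    return (dst_ip, dst_port) in ALLOW_RULES

def ips_admits(src_ip: str, dst_port: int) -> bool:
    """Default-allow: traffic passes unless a known-threat rule matches."""
    return not any(ip == src_ip and (port is None or port == dst_port)
                   for ip, port in THREAT_RULES)

firewall_admits("10.0.0.5", 80)    # False: no rule allows port 80
ips_admits("192.0.2.1", 80)        # True: no threat rule matches
```

Note the opposite defaults: the firewall rejects anything it has not been told to allow, while the IPS accepts anything it has not been told to block.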

There are both conceptual and technical distinctions between the two systems. Firewalls are great at protecting the server from threats at OSI layers 2-4 but almost useless at higher layers because they lack the necessary software capabilities. This is the main reason why firewalls are usually only used to control session parameters like connection status, port numbers, and IP addresses. IDS and IPS are useful at higher layers because they can analyze not only request headers but also the payload itself. Moreover, they can decompress data passed to the server to inspect it more thoroughly, so there is less chance of a virus getting into the secured system.

Intrusion Detection System (IDS)

IDS is not a wholly new way to protect servers. Tools of this kind were invented about 30 years ago and were used in an operating system called SINIX (a UNIX-type OS for hardware made by Siemens) to prevent terminal users from accessing mainframe resources.

Functions of IDS

So we know what an IDS is, but what can it do? An IDS is a system for detecting cyber attacks. It continuously scans traffic, registers any suspicious requests, and notifies administrators of potential threats. An IDS usually monitors both traffic and logs. The system searches for any sign of malware and reports it as soon as possible, sending messages to the console, an email address, or via SMS.
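The notification step can be sketched as follows. The `notify` helper and its event format are hypothetical; only the console channel is implemented here, while real IDS products ship configurable alert channels, and email or SMS delivery would plug in via smtplib or a messaging gateway:

```python
import logging

logging.basicConfig(format="%(levelname)s ids: %(message)s")
log = logging.getLogger("ids")

def notify(event: dict, channels=("console",)) -> str:
    """Fan an IDS alert out to the configured channels.

    Only the console channel is implemented in this sketch; email
    or SMS delivery would plug in the same way.
    """
    message = f"possible {event['type']} from {event['src']}"
    if "console" in channels:
        log.warning(message)
    return message

notify({"type": "port scan", "src": "203.0.113.9"})
```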

An IDS can register different types of attacks (DDoS, botnet C&C, P2P, SQL injections, attacks on IMAP, POP3, and VoIP services, etc.), detect attempts to escalate user privileges, gain unauthorized access to secured data, or open certain ports. It can also detect different types of malware (including trojans).

You should remember that an IDS is not a tool that controls the network itself. It only helps to find threats and is better described as a utility that shows infosec specialists how secure the network is at any given moment and what can be done about it. If you've ever used tools like Wireshark, you'll understand how an IDS works; the difference is the context (Wireshark inspects a different kind of connection).

Classification of IDS

There are different detection systems: hardware and software, open-source and proprietary. We will look closer at two classifications that should be considered when choosing a security system.

The first is classification by the type of analyzed traffic:

  • Protocol-based (PIDS)
  • Application protocol-based (APIDS)

A PIDS monitors communication protocols between hosts. An APIDS analyzes application-layer protocols; examples of such tools include PHPIDS, GreenSQL-FW, and Mod_Security.

The second is classification by the position of the IDS in the network infrastructure:

  • Host-based (HIDS)
  • Network-based (NIDS)

A HIDS monitors a single host, while a NIDS monitors the whole network segment it is installed on. Network-based IDS are more versatile because they use deep packet inspection to analyze all the traffic passing through the segment. But this kind of IDS consumes more resources than other types, so you should be prepared to sacrifice some hardware capacity to use it.

There are other types of IDS as well. For example, a VMIDS runs on virtual machines, so administrators and developers don't have to install the whole software stack on a separate device.

How does IDS detect intrusions?

Usually methods of detecting intrusions are divided into two groups:

  • Detecting abuse (signature IDS)
  • Detecting anomalies

Detecting abuses

Signature-based IDS detect abuse by analyzing traffic and logs. Simply put, this method relies on descriptions of attacks that already exist. The security system processes the traffic and tries to find patterns that may signal an attempt to attack the server. Most IDS use this method, and it has a huge advantage: it actually tells the administrator why the IDS reacted to particular requests. It is possible to manually check the signatures and find out why the IDS decided that a given chunk of traffic was dangerous. The main caveat is that the signature database quickly becomes outdated; the administrator has to update it as often as possible.
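Signature matching can be illustrated with a couple of toy regex rules. The patterns below are simplified for illustration; real systems such as Snort or Suricata ship thousands of vendor-maintained rules in a much richer rule language:

```python
import re

# Hypothetical signatures for illustration only.
SIGNATURE_DB = {
    "sql_injection": re.compile(r"union\s+select|or\s+1\s*=\s*1", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./"),
}

def match_signatures(payload: str) -> list:
    """Return the name of every known attack pattern found in the payload."""
    return [name for name, rule in SIGNATURE_DB.items() if rule.search(payload)]

match_signatures("GET /q?id=1 UNION SELECT password FROM users")  # ['sql_injection']
```

Because each hit names the rule that fired, the administrator can look up exactly why a request was flagged, which is the transparency advantage described above.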

Detecting anomalies

This method works in reverse. We know what normal application behavior and harmless traffic look like, and the security system uses this data to detect any unexpected requests or actions. This kind of IDS was also invented about 30 years ago; nowadays, engineers use machine learning to make it more reliable and proactive.

There is one big disadvantage to such IDS: they must first be trained on traffic that is considered normal. This takes time and means administrators run them ineffectively on some networks and nodes in the meantime. And when something does go wrong and the IDS reacts, administrators get little data to act on (the complexity of analyzing the logs grows exponentially with the number of indicators used to define "normal" traffic).

Still, there are problems that anomaly-detecting IDS solve better than any other security system. For example, this type of IDS has no rival in detecting and preventing DDoS attacks.
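A crude version of anomaly detection is a standard-deviation check on some traffic indicator, here requests per second. The numbers and helper names are invented for illustration; real systems track many indicators at once and increasingly rely on machine-learning models:

```python
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn what 'normal' looks like from a training window of requests/sec."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

normal_rps = [98, 102, 101, 99, 100, 103, 97, 100]  # observed "normal" traffic
baseline = fit_baseline(normal_rps)

is_anomalous(101, baseline)   # False: within the learned range
is_anomalous(5000, baseline)  # True: a plausible DDoS spike
```

The sketch also shows the weakness described above: the quality of the baseline depends entirely on how representative the training window was.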

Intrusion Prevention System (IPS)

While an IDS can only notify you about threats, an IPS can take action. With an IDS alone, administrators have to reconfigure the firewall manually so attackers can't access secured data. That takes time, and a server usually has to react to threats in real time to avoid consequences. That's why we need an IPS: it does the job and can block suspicious connections when necessary.

Different types of IPS and their capabilities

In fact, IPS and IDS can be the same device configured differently depending on the issues that administrators and developers want to solve. An IPS itself can be described as a hybrid of an IDS and a firewall, because the same technologies underlie both security systems.

The classifications are mostly the same: professionals divide IPS into two main categories, HIPS (host-based) and NIPS (network-based). A NIPS prevents cyberattacks by embedding itself inline in the network channel and filtering all the traffic passing through. In most cases, such an IPS has an external interface that receives traffic and an internal interface that passes legitimate traffic on.

On top of that, some specialists divide IPS into:

  • those which monitor traffic and compare it to any signatures that are already known to be dangerous;
  • and those which try to find suspicious traffic based on protocol-analyzing systems.

The second approach makes it possible to protect the network from previously unknown classes of attacks.

An IPS may react to attacks in different ways. The most common form of protection is reconfiguring network hardware so it can no longer be used in a harmful way. Other methods include blocking specific users or hosts and disconnecting users or applications via TCP RST packets or the firewall.
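As a sketch, an IPS reacting to a detected attacker on Linux might insert a firewall drop rule. The command built below is standard iptables, but the wrapper function is hypothetical, and actual enforcement depends on the platform (nftables, pf, switch ACLs, TCP RST injection, and so on):

```python
import subprocess

def block_source(ip: str, dry_run: bool = True):
    """React to a detected attack by inserting a firewall DROP rule for the source IP."""
    cmd = ["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"]
    if dry_run:
        return " ".join(cmd)          # show what would be executed
    subprocess.run(cmd, check=True)   # requires root privileges
    return None

block_source("198.51.100.7")  # 'iptables -I INPUT -s 198.51.100.7 -j DROP'
```

The `dry_run` default keeps the sketch safe to run; a real IPS applies the rule immediately and usually logs and expires it automatically.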

Conclusion

In this final part of the article, we will pinpoint specific tasks that can be performed using an IPS or IDS and outline the main requirements administrators and developers should consider when choosing security systems for their projects.

Security systems have two main functions. First, they build a database of threat events that infosec specialists can use as evidence when investigating an incident such as a data breach or cyber attack (for example, when a cybercriminal uses company resources over a period of time). Second, they monitor any potentially dangerous activity in the analyzed network. Consequently, there are two requirements:

  • completeness of the database of existing exploits and vulnerabilities (the signatures used to detect cyberattacks);
  • and reliability of the whole system, so it won't shut down and stop gathering useful information.

A system used to prevent hackers from accessing secured data tries to normalize traffic by blocking attacks and minimizing the damage caused by cybercriminals. The requirements for an IPS are a bit different. First, such a system has to be as reliable as possible: there should be no reason for it to stop working and fail to secure the server, since a malfunctioning IPS can force the whole infrastructure to shut down active connections. Second, it has to produce a minimal number of false positives.

If you want to implement a modern, multifunctional security system for your server, consider setting up a UTM (Unified Threat Management) device. It is a piece of hardware that includes all the protection components of IDS and IPS and works as a firewall, proxy server, anti-spam filter, content filter, and anti-malware software.

