Before answering the key questions—which approach should be used for service communication, what is the difference between REST and RPC, and whether there is a clear winner in the REST vs. RPC debate—let's take a deeper look at both approaches.
However, before we begin, let’s clarify some terms—API, REST, RPC, HTTP, and more.
An API is a set of tools and rules that allow applications to communicate with each other. Imagine an information service, a software library, or an application as a "black box" whose internal details are hidden. The API serves as a set of controls and indicators that enable interaction with this black box.
HTTP is a protocol for transferring hypertext. As a protocol, it operates at the OSI model's application layer (Layer 7). HTTP is widely used for delivering web pages, transferring files, streaming media, and facilitating communication between information systems via open APIs.
REST is an architectural style (not a protocol, standard, or technology) for designing distributed systems. It defines constraints that make web services scalable, simple, and maintainable. The term "representational state transfer" refers to the idea that a client interacts with resources by transferring their representations. We’ll explore this concept in more detail below.
RPC is a technology that allows a client to execute computations on a server by calling a function or procedure remotely, passing parameters, and receiving results. It works as if the function were a part of the local code.
The idea of offloading computations from a low-power client to a high-performance server dates back decades. The first adopters of RPC were databases, which were then known as data banks or even knowledge bases.
Over time, RPC evolved into a flexible and powerful technology. Companies like Sybase, Sun Microsystems, Microsoft, and others played a key role in shaping the concept.
When monolithic architectures began shifting to multi-tiered architectures, RPC adapted well to the new paradigms. It also inspired the development of various industrial standards and protocols.
We will now examine two architectural solutions that use RPC-based technologies: CORBA and web services.
CORBA, or Common Object Request Broker Architecture, is a generalized architecture of object request brokers and perhaps the most comprehensive architectural specification for building distributed systems. It emerged at the turn of the 1990s and gained widespread adoption over that decade. CORBA's biggest advantage over other distributed architectures was that heterogeneous (that is, diverse) elements implementing the standards of this specification could coexist in the network, executing computations and exchanging results. It became possible to combine different ecosystems: Java, C/C++, and even Erlang.
While a highly flexible and efficient architecture, CORBA is nevertheless quite complex internally, containing numerous descriptions and agreements, and, frankly, it is a significant headache for developers integrating their own (or a new) ecosystem into this architectural paradigm.
The second major obstacle to using CORBA is its network stack. It operates over TCP and is quite complex: some CORBA implementations use the standard TCP ports defined and reserved for CORBA, while others use arbitrary ones, and the specification does not regulate this in any way. All of this contradicts corporate network security policies, and it makes using CORBA on the Internet very inconvenient, if not outright impossible.
The workhorse of most information systems is the HTTP protocol. It uses two clearly defined TCP ports: 80 and 443. CORBA, on the other hand, requires four different TCP ports for its protocols, each with its own timing characteristics and features. Therefore, CORBA is suitable in cases where integration into an existing information system architecture built with CORBA is required. However, developing a new information system using this architectural solution is probably not advisable, as more efficient and simpler mechanisms exist today.
Given all of CORBA's shortcomings, a standard was developed in the late 1990s that laid the foundation for so-called web services. Unlike CORBA, web services used an already existing, highly reliable, and simple protocol—HTTP—and fully relied on its architectural conventions. Each service had its own unique URL (Uniform Resource Locator) and a set of methods that were also based on HTTP conventions. Machine- and architecture-independent formats such as XML or JSON were used as data carriers. In particular, some web service implementations use SOAP (Simple Object Access Protocol), an XML-based messaging protocol. The new solution was significantly more convenient than the cumbersome CORBA, used the simple and reliable HTTP protocol, and was essentially independent of the technologies, deployment mechanisms, and scaling aspects of information systems.
However, the new technology quickly became burdened with standards, rules, specifications, and other necessary but very tedious attributes of the Enterprise world. SOAP is a successful solution because XML, which underlies it, is a structured, machine-independent, user-defined data exchange language. XML already includes validation, data structure descriptions, and much more. But XML also has a downside. XML is an extremely verbose language overloaded with auxiliary elements. These include attributes, tags, namespaces, different brackets, quotation marks, and more. A large portion of SOAP packets consists of this auxiliary information. When scaled to millions of calls, this results in significant overhead due to all this informational noise. There is little that can be done to fix this issue, as it stems from the use of XML namespaces and the extremely detailed semantic definitions of the SOAP specification. Using less "noisy" data formats, such as JSON (in the JSON-RPC specification), introduces other risks, such as inconsistencies in data descriptions and the lack of structure definitions.
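To feel the difference in volume, compare the same call serialized by Python's standard xmlrpc module (whose envelope is even leaner than SOAP's) with a minimal JSON-RPC-style equivalent. This is a quick sketch; the method name and payload are invented purely for illustration:

import json
import xmlrpc.client

# The same call: once as an XML-RPC request, once as a JSON-RPC-style payload
params = ({"user": "alice", "active": True, "score": 42},)
xml_payload = xmlrpc.client.dumps(params, methodname="update_user")
json_payload = json.dumps({"method": "update_user", "params": list(params)})

print(len(xml_payload), "bytes of XML vs", len(json_payload), "bytes of JSON")

Even without SOAP's namespaces and envelopes, the XML version comes out several times larger; with them, the overhead only grows.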
Since web services are one implementation of the RPC concept, they function as a synchronous data exchange channel. Synchronous transmission is inconvenient, does not scale well, and can easily overload a system.
RPC may seem an outdated concept that is best avoided in modern realities to prevent various problems and design errors. However, we have deliberately spent so much time discussing past technologies. If we take the best aspects of CORBA, wrap them in modern architectural solutions, and, like web services, run them over reliable network protocols, we get…
gRPC is an open framework developed and implemented by Google. It is conceptually very similar to CORBA, but unlike CORBA, it runs on top of the standard HTTP/2 protocol. This version of the popular transport protocol was significantly reworked, expanded, and improved compared to its predecessors, providing efficient, low-latency message transmission. Where CORBA uses its own Interface Definition Language (IDL) to describe interfaces, gRPC uses Protocol Buffers, a modern interface definition language and serialization mechanism, for the same purpose. Like CORBA, the gRPC environment is heterogeneous, allowing different ecosystems to interact effectively. Protocol Buffers uses its own binary wire format for serializing and deserializing objects, which is much more compact than JSON or XML while remaining machine-independent.
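To give a sense of what this looks like, here is a minimal, hypothetical Protocol Buffers service definition (all names are illustrative). From such a .proto file, the protoc compiler generates client and server stubs for each target ecosystem:

syntax = "proto3";

// Hypothetical service: a remote parity check, mirroring the
// xmlrpc example used later in this article.
service Parity {
  rpc IsEven (Number) returns (Answer);
}

message Number {
  int64 value = 1;
}

message Answer {
  bool even = 1;
}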
Today, gRPC has gradually replaced everything possible in the internal communication of microservices and is beginning to take over areas where web services and REST once dominated. Some bold developers are even experimenting with integrating gRPC into the front end. This is because gRPC was very well designed—it is reliable and fast and allows information systems to be built from heterogeneous nodes and components, much like the great CORBA once did.
However, let’s assume I do not need cross-ecosystem interaction; I program only in Python/Golang/Java/(insert your language), and I want tools for distributed computing. Should I use gRPC, which, by the way, requires some time to master, or is there something that can help me "immediately and at low cost"?
We are in luck. Today, RPC packages and service libraries are available in almost every programming ecosystem, such as:
Python — the xmlrpc package
Go — the net/rpc package
Java — java.rmi (Remote Method Invocation)
Haskell — WAI, xmlrpc
Erlang — built-in OTP tools for distributed computing and clustering
JSON-RPC — libraries available in nearly every other ecosystem
Each of the aforementioned packages within its language ecosystem allows you to connect components together.
To illustrate this with code, let's take a simple example from the documentation of the xmlrpc module in Python's standard library.
RPC server code:
from xmlrpc.server import SimpleXMLRPCServer

# An ordinary Python function that we will expose over RPC
def is_even(n):
    return n % 2 == 0

server = SimpleXMLRPCServer(("localhost", 8000))
print("Listening on port 8000...")
# Register the function under the name remote clients will use
server.register_function(is_even, "is_even")
server.serve_forever()
RPC client code:
import xmlrpc.client

# ServerProxy makes remote functions callable as if they were local
with xmlrpc.client.ServerProxy("http://localhost:8000/") as proxy:
    print("3 is even: %s" % str(proxy.is_even(3)))
    print("100 is even: %s" % str(proxy.is_even(100)))
As we can see, on the client side everything looks very clear and simple, as if the is_even function were part of the client's own code. Everything is also quite simple and understandable on the server side: we define a function and then register it in the context of the server process responsible for RPC. It is important to note that the function we "expose" for external access is a regular Python function. It can just as easily be used locally in the server-side code, passing parameters to it and receiving its return value. The concept of RPC is simple, elegant, and flexible: to call a function "on the other side," you only need to change the transport from local in-process calls to some network communication protocol and ensure bidirectional translation of parameters and results.
So what is wrong with RPC, and why did we end up with REST as well? The first and perhaps most serious reason is that RPC requires a layer that describes the nature of the data, interfaces, functions, and return values. In CORBA, this is IDL; in gRPC, it is ProtoBuf. Even the slightest change requires resynchronizing all definitions and interfaces on every participant.
The second point, perhaps, stems from the very concept of a "function"—it is a black box that takes arguments as input and returns some value. A function does not describe or characterize itself in any way; the only way to understand what it does is by calling it and getting some result. Accordingly, as mentioned above, we need a description to determine the nature and order of computations.
REST, as already mentioned at the beginning of this article, stands for REpresentational State Transfer, an architectural style built around transferring representational state. It is important to clarify the meaning of the term "representational": it means self-descriptive, representing itself. Consequently, the state transferred between exchange participants requires no additional agreements, descriptions, or definitions—everything necessary is, so to speak, clear without words and contained in the message itself.
The term REST was introduced by Roy Fielding, one of the authors of HTTP, in 2000, in his dissertation "Architectural Styles and the Design of Network-based Software Architectures." He provided the theoretical basis for the way clients and servers interact on a global network, abstracting it and calling it "representational state transfer." Roy Fielding developed a concept for building distributed applications in which each request (REST request) from a client to a server already contains all the necessary information about the desired server response (the desired representational state), and the server is not required to store information about the client's state ("client session").
So, how does this work? In a REST API, each service and each unit of information is designated by its own URL. Data can thus be retrieved simply by accessing that URL on the server. URLs in REST are structured as follows:
/object/ — directs us to a list of objects
/object/id — directs us to a single object with the specified ID, or returns a 404 response if no such object is found

Thus, the very structure of the URL reflects the nature of the server's response: in the first case, a list of objects; in the second, a single object.
But that is not all. REST, as mentioned above, uses HTTP as its transport. And in HTTP, one of the key parameters that define the nature of the data returned by the server is the method.
By using HTTP methods, we can define another set of self-descriptive states:
GET /object/ — returns a list of objects
GET /object/id — returns the object with the specified ID, or 404
POST /object/ — creates a new object or returns an error (most often with code 400 or another)
PUT /object/id — edits the object with the specified ID or returns an error
DELETE /object/id — deletes the object with the specified ID or returns an error

Some servers ignore the semantics of the PUT and DELETE methods; in that case, POST /object/id with a request body (the object's data) is used for editing, and the same POST request with an empty body is used for deleting an object.
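To make these semantics concrete, here is a minimal sketch of such an interface using only Python's standard library. An in-memory dictionary stands in for real storage, and the port and all names are illustrative:

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

# In-memory stand-in for a real data store; keys are object IDs
objects = {"1": {"name": "first"}, "2": {"name": "second"}}
next_id = 3

class ObjectHandler(BaseHTTPRequestHandler):
    def _send(self, code, payload=None):
        body = json.dumps(payload).encode() if payload is not None else b""
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def _read_body(self):
        length = int(self.headers.get("Content-Length", 0))
        return json.loads(self.rfile.read(length)) if length else None

    def _object_id(self):
        # Returns (id or None, path_is_valid)
        parts = [p for p in self.path.split("/") if p]
        if parts[:1] != ["object"] or len(parts) > 2:
            return None, False
        return (parts[1] if len(parts) == 2 else None), True

    def do_GET(self):
        obj_id, ok = self._object_id()
        if not ok:
            return self._send(404, {"error": "unknown path"})
        if obj_id is None:                      # GET /object/  -> list
            return self._send(200, list(objects.values()))
        if obj_id in objects:                   # GET /object/id -> one object
            return self._send(200, objects[obj_id])
        self._send(404, {"error": "not found"})

    def do_POST(self):                          # POST /object/ -> create
        global next_id
        obj_id, ok = self._object_id()
        if not ok or obj_id is not None:
            return self._send(400, {"error": "POST is allowed on /object/ only"})
        objects[str(next_id)] = self._read_body() or {}
        next_id += 1
        self._send(201, {"id": str(next_id - 1)})

    def do_PUT(self):                           # PUT /object/id -> update
        obj_id, ok = self._object_id()
        if ok and obj_id in objects:
            objects[obj_id] = self._read_body() or {}
            return self._send(200, objects[obj_id])
        self._send(404, {"error": "not found"})

    def do_DELETE(self):                        # DELETE /object/id -> delete
        obj_id, ok = self._object_id()
        if ok and obj_id is not None and objects.pop(obj_id, None) is not None:
            return self._send(204)
        self._send(404, {"error": "not found"})

HTTPServer(("localhost", 8080), ObjectHandler).serve_forever()

Note that the handler keeps no client session between requests: each request carries everything needed to process it.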
Thus, instead of the endless variety of operations that RPC allows, REST gives us a minimal set of operations on data. So, where is the advantage here?
As mentioned above, REST is an architectural solution, not a technology. This means that REST does not impose any special requirements on participants in such a network, as is the case with gRPC, CORBA, or SOAP. It is only necessary to maintain the semantics of a self-defining state and a unified data transmission protocol. As a result, REST networks can combine the incompatible—a powerful cluster with load balancers, databases, and a simple "smart" light bulb with a microcontroller that is controlled via REST. Thus, REST is an extremely flexible architecture with virtually zero costs to ensure interoperability.
However, to guarantee such an impressive result, REST introduces a number of restrictions, which is why they are referred to as architectural constraints.
Let’s briefly list each of them:

Client–server — the user interface is separated from data storage, so clients and servers can evolve independently
Statelessness — each request contains everything the server needs; no client session is stored between requests
Cacheability — responses explicitly declare themselves as cacheable or non-cacheable
Uniform interface — resources are identified and manipulated in one standardized way across the whole system
Layered system — a client cannot tell whether it is talking to the end server or to an intermediary such as a proxy or load balancer
Code on demand (optional) — the server may extend client functionality by sending executable code
By following the above constraints, we can build highly efficient systems, which is confirmed by our modern experience with distributed systems and web services.
One of the most popular patterns implemented using REST is CRUD. This acronym is formed from the first letters of the operations Create, Read, Update, and Delete—the four basic operations sufficient for working with any data entity. More complex operations, known as use cases, can utilize CRUD REST API to access data entities. Use cases can also follow the prescriptions and constraints of REST; in this case, we call our information system RESTful. In such a system, REST conventions are used everywhere, and any expansion of the system also follows these conventions. This is a very pragmatic yet highly flexible approach: a unified architecture reduces system complexity, and as system complexity decreases, the percentage of errors also goes down.
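On the client side, the same four CRUD operations need nothing beyond plain HTTP. Here is a sketch against the server shown earlier; the URL and field names are, again, illustrative:

import json
import urllib.request

BASE = "http://localhost:8080/object/"  # the sketch server shown earlier

def call(method, url, payload=None):
    # One helper for all four CRUD verbs
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(url, data=data, method=method,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
        return json.loads(body) if body else None

created = call("POST", BASE, {"name": "third"})         # Create
print(call("GET", BASE))                                # Read (the whole list)
call("PUT", BASE + created["id"], {"name": "renamed"})  # Update
call("DELETE", BASE + created["id"])                    # Delete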
The concept of REST API is so popular that it exists in almost every programming language ecosystem. REST is built into Django and Laravel. In Go, you can use the Gin Gonic package or build your own RESTful system using only standard library packages. For Erlang, the erf library can be used, while in Elixir, REST API is already integrated into the Phoenix framework. REST, as an architecture, does not impose any restrictions on programming environments, frameworks, or anything else—it simply declares to services: "Just speak REST, and everything will work out fine."
Let’s try to answer the question we posed at the very beginning. As you may have realized from this rather extensive article, each approach has its clear advantages and very specific disadvantages. In this matter, the best option is a golden mean.
For critical services that process huge amounts of data, stability is the top priority—both in code, where data definition errors are simply unacceptable, and in infrastructure, where a faster system response time is always better. For such areas, the concept of RPC in its modern implementation—gRPC—is undoubtedly more convenient.
However, where business logic and complex multi-level interactions reside, REST, with its strict and deliberately limited means of expression, becomes the preferable choice. The best strategy is to apply both approaches wisely and flexibly, allowing your information system to benefit from the strengths of each concept (or architectural solution).
When discussing pure RPC and REST, we have deliberately abstracted from infrastructure, programming languages, machines, memory, processors, and other technical details. However, in real-world business, these aspects are equally important.
Most often, REST API and RPC API are deployed either in containers (Docker, Podman, and similar technologies) or on so-called VPS (Virtual Private Servers). Less frequently, they run on dedicated or rented hardware. Infrastructure-as-a-Service (IaaS) is a convenient and relatively inexpensive way to manage projects.
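For example, a small service like the REST sketch above could be containerized with a minimal, hypothetical Dockerfile (file names are illustrative):

FROM python:3.12-slim
WORKDIR /app
# The stdlib-only REST server from the sketch above
COPY server.py .
EXPOSE 8080
CMD ["python", "server.py"]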
Hostman’s networking services provide an ideal solution for this. Here, you can precisely calculate the expected load and plan your expenses accordingly. The VPC (Virtual Private Cloud) from Hostman allows containers and VPS to be interconnected while ensuring that all traffic within this network remains completely isolated from the Internet.
An ideal solution for RPC, REST, or…? The decision is, of course, yours to make. But as for how to deploy everything and ensure the uninterrupted operation of your services—Hostman has you covered.