
Microservices and the UNIX Philosophy: Similarities and Differences

Hostman Team
Technical writer
Microservices
17.02.2025
Reading time: 23 min

Microservices are a relatively new concept in software architecture. This approach focuses on breaking down complex monolithic applications into small, isolated modules. Each of these modules, or microservices, performs only one specific task.

Interestingly, UNIX developers began applying similar principles when creating their operating system long ago. One of these principles states: "A program should do one thing and do it well." This principle suggests that a program should be limited to the required set of functions and do nothing more, while performing its task flawlessly. This closely resembles the concept of a microservice, which is also designed to perform only one specific task.

But are all the principles of microservices and the UNIX philosophy truly similar? Let's explore this further, starting with the more classic approach — the UNIX philosophy.

The UNIX Philosophy

The history of UNIX began in 1969 when Ken Thompson and Dennis Ritchie started developing the operating system at Bell Labs. Ken Thompson, known as one of the creators of UNIX, made a significant contribution not only to the system itself but also to its philosophy.

In 1973, UNIX was rewritten in C, a language Dennis Ritchie developed from the earlier B. The rewrite brought hardware independence along with features C programmers still rely on today, such as distinct variable types (int, char, float, etc.) and mandatory declarations (static typing). Throughout the 1970s and 1980s, UNIX evolved, spreading through academic circles and commercial organizations and gradually forming its key principles.

UNIX became a revolutionary project that changed the approach to operating system development. Its creators aimed for simplicity and elegance in design, which is reflected in the system's philosophy. The UNIX philosophy, with its emphasis on modularity and efficiency, became the foundation for many modern software development approaches.

Key UNIX principles, formed during its evolution, have significantly influenced the future of development. Principles like "Do one thing and do it well" have become fundamental for many modern software design methodologies.

The UNIX philosophy evolved alongside the UNIX system itself, gradually crystallizing into a set of clear principles. Over the years, many formulations of these ideas emerged, but their essence remained unchanged. Today, we’ll look at these key principles in their modern understanding:

  1. Write programs that do one thing and do it well.

  2. Write programs to work together.

  3. Write programs that handle text streams, because that is a universal interface.

Although we're discussing the principles and philosophy of UNIX, for practical examples, we’ll use Linux, specifically Debian. This choice is due to Debian's free availability, ease of access (including on the Hostman platform), and its status as a classic example of a Linux system.

While Linux is not a direct descendant of UNIX, it inherits all its principles. Most of the commands and concepts discussed are applicable to both UNIX and Linux. It’s also worth noting that the popular Ubuntu distribution is derived from Debian, highlighting the latter's importance in the Linux world.

Principle 1: Write Programs That Do One Thing and Do It Well

If you've worked with Linux systems, you're probably familiar with the cat program (short for concatenate). Although it is invoked like any other command from bash (the shell), it is actually a standalone program written in C, compiled, and usually installed at /usr/bin/cat. Its source code is publicly available online as part of the GNU coreutils project.

Example of using cat:

$ cat /etc/passwd
  • $ is the command line prompt displayed by the terminal and isn't typed by the user.

  • cat is the program itself. We don't need to specify the full path because the $PATH variable lists the directories where bash searches for commands by default (in this case, /usr/bin/).

  • /etc/passwd is a text file in Linux systems that contains user information.

The result of this command will be a list of system users, similar to:

root:x:0:0:root:/root:/bin/bash
alice:x:1000:1000:Alice Smith,,,:/home/alice:/bin/bash
bob:x:1001:1001:Bob Johnson,,,:/home/bob:/bin/zsh
mysql:x:112:120:MySQL Server,,,:/nonexistent:/bin/false
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin

Each line contains the username, UID, GID, full name, home directory, and other parameters, separated by colons.

You can experiment with cat and other commands by:

  • Setting up a VPS on Hostman (with affordable hourly billing).
  • Using a virtual machine or Docker to run Linux locally.
  • Using WSL (Windows Subsystem for Linux) on Windows.
  • Directly in your system if you're already using Linux.

cat has many additional options, for example:

  • Displaying line numbers: -n or --number.
  • Showing hidden characters, such as tabs and line breaks: -A or --show-all.
  • Compressing multiple blank lines into one: -s or --squeeze-blank.

You can explore other options by running cat --help or reading the full documentation with man cat.
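
For instance, a quick way to try these options (assuming a small text file named notes.txt exists in the current directory; the file name is only an illustration):

cat -n notes.txt    # number every output line
cat -A notes.txt    # make tabs (^I) and line endings ($) visible
cat -s notes.txt    # collapse runs of blank lines into one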

One important feature of cat is its ability to concatenate the contents of multiple files. This follows directly from its main function: cat reads the specified files sequentially and sends their contents to standard output (stdout). This is where its name comes from — concatenate means "to link" or "to chain together." Displaying a single file on the screen is simply the special case of sending one file's contents to stdout, which the terminal then renders.

Example of using cat to concatenate files:

cat /etc/hostname /etc/hosts

The output might look something like this:

myserver
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback

In this example, cat first displays the contents of /etc/hostname, followed immediately by the contents of /etc/hosts.

In this way, cat adheres to the first principle of UNIX: "Do one thing and do it well." It specializes in reading files and sending their contents to stdout without unnecessary complexity, maintaining the modular and efficient approach that defines the UNIX philosophy.

Principle 2: Write Programs to Work Together

This principle showcases one of UNIX's key features — the ability to combine simple tools to solve complex tasks. Thanks to Doug McIlroy, who introduced the concept of pipes in UNIX, commands can be chained together. The output of one command can be the input for another. Let's look at an example:

cat /etc/passwd | grep user

grep is another program (command) that filters a text stream and returns only the lines containing the specified text — in this case, user. Here, cat outputs all the lines of the /etc/passwd file. grep then checks each line; if the text user is present, the program outputs that line.

The output would be the line corresponding to the user account:

user:x:1000:1000:,,,:/home/user:/bin/bash

We can extend this process by adding more commands:

cat /etc/passwd | grep user | awk -F: '{print $6}'

This command outputs the home directory of the user.

awk is a small programming language for processing text data; here, the awk program acts as an interpreter for it. In this example, the -F: option tells awk to split each line into columns using a colon (:) as the delimiter (by default, awk splits on whitespace), and '{print $6}' prints only the sixth column, which is the home directory:

/home/user

In UNIX, commands can be chained into long pipelines to perform complex operations. This modular approach allows each tool to focus on a specific task, working seamlessly together to achieve sophisticated results.
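
For example, a slightly longer (hypothetical) pipeline that counts which login shells the accounts on the system use, built entirely from small single-purpose tools:

cut -d: -f7 /etc/passwd | sort | uniq -c | sort -rn

Here cut extracts the seventh colon-separated column (the login shell), sort groups identical lines together, uniq -c counts each group, and the final sort -rn orders the result by count.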

Principle 3: Write Programs That Handle Text Streams, Because This Is a Universal Interface

This principle emphasizes the importance of using text format for data exchange between programs. Text data is easily readable by both humans and machines, making it a universal interface.

In UNIX, most configuration files, logs, and command outputs are in text format. This allows users to easily view, edit, and process data using standard text tools.
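
For example, counting failed SSH login attempts takes nothing more than the standard text tools (the log path shown is typical for Debian-based systems and is used here only as an illustration):

grep "Failed password" /var/log/auth.log | wc -l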

Example of text stream processing:

du -h /var/log | sort -rh | head -n 5 | awk '{print $2 " - " $1}'

This complex command uses several programs that exchange data through text streams:

  1. du -h /var/log — Displays the sizes of files and directories in /var/log in a human-readable format.
  2. sort -rh — Sorts the output by size in reverse order.
  3. head -n 5 — Selects only the top five lines.
  4. awk '{print $2 " - " $1}' — Reformats the output, displaying only the first two columns in reverse order and adding a dash between them.

Example output:

/var/log/syslog - 200M
/var/log/auth.log - 150M
/var/log/kern.log - 100M
/var/log/dpkg.log - 50M
/var/log/faillog - 10M

Using text streams provides flexibility and universality when working with data. It allows users to easily combine different tools and build complex information processing systems while maintaining simplicity and efficiency in interactions between components.

This approach to text-based communication and modular design is at the heart of the UNIX philosophy, influencing many modern development practices, including the design of microservices.

Microservices

The term "microservices" was first mentioned at a software architecture conference in Venice in 2011. Since then, microservices have become an integral part of modern software architecture. By 2015, industry giants like Netflix and Amazon had successfully implemented this approach. Since then, the popularity of microservices in IT has been steadily growing.

To appreciate what microservices change, it helps to start with the architecture they typically replace. A monolith is short for "monolithic architecture." In this type of architecture, all of the project's code is typically stored in a single Git repository that every developer works on, and the application is built and deployed as a single unit. This means that even a small change in one function requires redeploying the entire application.

  • Monolithic architecture is characterized by tight coupling between components, which makes independent scaling and updating challenging.

  • As the project grows, maintenance becomes more difficult, and build and test times increase.

  • Introducing new technologies is also harder because changing one part might affect the whole system.

Despite these drawbacks, monolithic architecture can be effective for small projects or in the early stages of development due to its simplicity and cohesiveness. Notably, some exceptions like Stack Overflow and Etsy successfully use monolithic architectures even at large scales.

Microservices replace monolithic architecture when a project grows so large that it becomes difficult to manage. In a monolith, every system deployment (uploading the project to the production server) requires coordination among all developers, and testing and building take a lot of time.

Microservices break the project down into modules, each performing a specific task. The principles of microservice architecture include:

  • Service independence: Each module operates independently.
  • Decentralized data management: Each module manages its own data.
  • Infrastructure automation: Deployment and scaling are automated.

For example, a user service is connected only to the user database and handles functions related solely to it, such as adding or updating users. Payment or analytics functions are managed by other microservices, which may have their own separate databases.
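
As a rough sketch of such a boundary, here is what querying a hypothetical user service over HTTP might look like; the hostname, port, endpoint, and JSON fields are illustrative and not part of any real system:

curl -s http://user-service.internal:8080/users/42

A response might look like:

{"id": 42, "name": "Alice Smith", "email": "alice@example.com"}

The payment and analytics services never reach into the user database directly; when they need user data, they call an endpoint like this one.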

Over time, services become more complex as checks, validations, and new features are added. Each module can be assigned to a separate team and exposes only its interface to the rest of the system, which resembles encapsulation in OOP (Object-Oriented Programming).

The independence of microservices allows teams to work autonomously, speeding up the development and deployment of new features. External developers only need to understand the interfaces without delving into internal details. This also accelerates testing and building.

Microservices allow the use of different programming languages, such as:

  • Go for multithreaded operations
  • JavaScript for rapid prototyping
  • Rust where high performance is required

As mentioned earlier, each microservice can have its own database, for example:

  • Metrics are stored in InfluxDB
  • Users are managed in PostgreSQL
  • Logs are saved in MongoDB

Essentially, a microservice can function solely as an abstraction layer over a database.

An important advantage of microservices is the ease of horizontal scaling. This allows the system's capacity to be increased by adding new servers, which is typically cheaper and more efficient than vertical scaling (enhancing the power of individual servers). This approach provides flexibility and cost-efficiency as the load grows. We'll discuss this in more detail in one of the following sections.

Despite its advantages, the microservice architecture makes projects more complex. New challenges arise, such as ensuring reliable communication between components, securing data during transmission, and complicating the deployment process.

These challenges fall on the shoulders of DevOps specialists. They develop and implement strategies for effectively managing a distributed system, including monitoring, logging, and deployment automation.

They also implement CI (Continuous Integration) and CD (Continuous Delivery) practices. Although solving these problems requires additional resources, it provides the flexibility and scalability needed for the system's long-term growth.

Modern Tools for Working with Microservices

The evolution of microservice architecture is closely linked to the development of tools for creating, deploying, and managing distributed systems. Containerization and container orchestration have become key technologies in this field.

Containerization and Docker

Containerization is an OS-level virtualization method that runs applications in isolated environments (containers) while sharing the host operating system's kernel. Launched in 2013, Docker became synonymous with containerization, revolutionizing the way applications are developed and deployed.

Docker allows you to package an application with all its dependencies into a standardized unit of software — a container. Containers typically host individual microservices, making them ideal for microservice architecture. They are lightweight, start quickly, and ensure consistent runtime environments from development to production.
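
As a minimal sketch (assuming the project directory already contains a Dockerfile; the image name and port are hypothetical), packaging and running one microservice looks like this:

docker build -t user-service:1.0 .             # build an image from the Dockerfile in the current directory
docker run -d -p 8080:8080 user-service:1.0    # start a container in the background and map port 8080 to the host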

The standardization of containers led to the creation of the Open Container Initiative (OCI) in 2015, ensuring compatibility between different containerization tools.

Container Orchestration and Kubernetes

With the growing popularity of containers, there arose a need for tools to manage large numbers of containers in distributed environments. This led to the concept of container orchestration.

Initially developed by Google and released in 2014, Kubernetes has become the de facto standard for container orchestration. It is a platform for automating the deployment, scaling, and management of containerized applications.

Key features of Kubernetes:

  • Automatic scaling of containers based on load.
  • Load balancing between containers.
  • Self-healing capabilities for failed containers or nodes.
  • Configuration and secret management.
  • Zero-downtime deployments for updates.

Kubernetes allows the creation of clusters — groups of computers working as a unified system. This makes it ideal for microservice architecture, enabling efficient management of the lifecycle of numerous, distributed microservices.
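
A hedged sketch of these capabilities using standard kubectl commands; the deployment name, image, and thresholds are hypothetical:

kubectl create deployment user-service --image=registry.example.com/user-service:1.2
kubectl scale deployment user-service --replicas=3
kubectl autoscale deployment user-service --min=2 --max=10 --cpu-percent=80
kubectl set image deployment/user-service user-service=registry.example.com/user-service:1.3

The first command creates a deployment, the second scales it manually, the third enables automatic scaling based on CPU load, and the last performs a rolling update to a new image version without downtime.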

Modern Tools for Microservices Development

Modern microservice development relies on a variety of tools and services that simplify the creation, deployment, and management of distributed systems. Cloud providers like Hostman offer comprehensive solutions for working with microservices:

  • Cloud Servers and VDS/VPS: Hostman provides virtual machines with pay-as-you-go billing, making them perfect for flexible microservices scaling.

  • Kubernetes: Managed Kubernetes clusters in Hostman enable efficient container orchestration, automating deployment, scaling, and management of microservices.

  • Cloud Databases: The Database-as-a-Service solution simplifies data management in microservice architectures, ensuring high availability and scalability.

  • S3 Object Storage: This service provides reliable storage for large volumes of data, which is often required in microservice applications.

  • Load Balancers: Load distribution between servers is critical for maintaining microservices' performance and fault tolerance.

  • App Platform: This service simplifies the deployment of applications from repositories, streamlining the deployment process for microservices.

These tools and services allow developers to create reliable, scalable, and secure microservice applications. They provide the necessary infrastructure and management tools, enabling teams to focus on developing business logic rather than solving infrastructure challenges.

Microservices vs Monolithic Architecture

Monolithic Architecture

Monolithic architecture has several advantages, especially in the early stages of development:

  • Simplicity in Development: All the code is located in a single repository, simplifying the development and debugging processes.

  • Unified Code Base: All developers work on the same code base, which fosters a better understanding of the project as a whole.

  • Simplified Deployment: A monolith is deployed as a single application, streamlining the deployment process.

  • Ease of Testing: Integration testing is easier since all components are within one application.

  • Performance: In some cases, a monolith can be more performant due to the lack of network overhead between components.

Microservice Architecture

However, as a project grows, microservice architecture begins to showcase its advantages:

  • Scalability: Each microservice can be scaled independently, optimizing resource usage.

  • Flexibility in Technology Choice: Different microservices can use the most suitable technology stack for their needs.

  • Independent Deployment: Services can be updated and deployed independently of each other, speeding up the development and release of new features.

  • Fault Isolation: Issues in one microservice do not affect the entire system.

  • Ease of Understanding and Maintenance: Each microservice is smaller and simpler than a monolith, making it easier to understand and maintain.

Comparison Table

| Aspect | Monolith | Microservices |
|---|---|---|
| Development | Easier in early stages | More complex but more flexible as the project grows |
| Deployment | Simple but requires full updates | More complex but allows for partial updates |
| Scalability | Vertical, entire application | Horizontal, individual services |
| Reliability | One failure can affect the whole system | Failures are isolated within individual services |
| Tech stack | Unified for the whole application | Can vary across different services |
| Performance | Potentially higher for small applications | Can be optimized for large systems |
| Team collaboration | Entire team works on one codebase | Teams can work on separate services |

Choosing between monolithic and microservice architecture depends on the project's size, requirements for flexibility, and scalability.

  • Monolithic Architecture is often preferable for smaller projects or MVPs (Minimum Viable Products) due to its simplicity and ease of deployment.

  • Microservices are better suited for large, complex systems with high demands for scalability and flexibility.

The decision should be based on the specific needs and long-term goals of the project.

Example of Microservices Architecture Development

Let's look at an example of creating a system for autonomous vehicle management. In this system, a cluster of interconnected onboard computers automatically distributes containers across its nodes, optimizing resource usage and ensuring fault tolerance.

For instance:

  • Computer Vision Containers will run on a computer with a powerful GPU, which is necessary for fast visual data processing.

  • Vehicle Monitoring Services require a reliable CPU and can, therefore, be placed on less powerful but stable hardware.

  • Other Microservices will be evenly distributed across the cluster.

This architecture creates a fault-tolerant system. If one node (a separate unit in the distributed network) fails, the microservices can automatically move to another computer within the vehicle. Essentially, this replicates cloud architecture on local devices (on-premise), ensuring system continuity even with partial hardware failure.

On each node of this cluster, containers are launched—isolated microservices performing their specific tasks. This ensures flexibility in resource distribution and system management, optimizing the operation of each component of the autonomous vehicle.
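
If such a cluster were managed by Kubernetes, the fault-tolerance scenario can be sketched with standard commands (the node name is hypothetical):

kubectl get pods -o wide                     # see which node each microservice is running on
kubectl drain gpu-node --ignore-daemonsets   # take the failing node out of service
kubectl get pods -o wide                     # the affected pods are rescheduled onto the remaining nodes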

Key Microservices in the System

  1. Motion Control Service: Controls steering, braking, and acceleration.
  2. Computer Vision Service: Processes camera data, recognizing objects and road situations.
  3. Navigation Service: Plans the route and makes driving decisions.
  4. Communication Service: Ensures communication with other vehicles and infrastructure.
  5. State Monitoring Service: Tracks the technical condition of the vehicle.

Each microservice operates autonomously and interacts with others through well-defined interfaces, providing several benefits:

  • Independent Updates

For example, the Computer Vision Microservice is a critical module responsible for recognizing road signs, lane markings, other road users, and obstacles. Its accuracy directly impacts the safety of the driver, passengers, and other road users.

With a microservices architecture, development and improvement can focus solely on this module without affecting other system components. A specialized team of computer vision and machine learning experts can work exclusively on enhancing this module.

  • Simplified Deployment

Imagine the team has developed a new machine learning model that significantly improves sign recognition under poor visibility conditions, such as fog or heavy rain. After thorough testing, only this specific module needs to be updated.

Moreover, the update can be deployed "Over the Air" (OTA) since only one microservice is updated, and the amount of data transferred is relatively small.

If the entire system image had to be downloaded and reinstalled just to update the computer vision module, the vehicle would need a high-speed connection and considerable time to fetch and apply the full update.

  • Scalability and Fault Tolerance

Microservices architecture also enables easy scalability of individual system components. For example, if the new recognition model demands more computing power, the GPU capacity for the computer vision module can be increased without affecting other modules.

Additionally, this architecture enhances the system's fault tolerance. If the computer vision module fails during an update or operation, it won't lead to a total system crash. Other microservices can continue functioning, possibly using a previous version of the computer vision module or with limited functionality.

  • Flexibility in Technology Choice

Each microservice can be implemented using the most suitable technologies for its tasks. For the computer vision module, specialized machine learning libraries such as TensorFlow or PyTorch, optimized for GPU operations, can be used.

Meanwhile, other modules—like the communication module—can be built using different programming languages and technologies better suited for their tasks.

The microservices architecture provides the flexibility, scalability, and efficiency needed to develop and maintain complex systems like autonomous vehicle management. It allows continuous improvement of individual components without risking the integrity of the entire system.

Comparison of Microservices and the UNIX Philosophy

Despite the decades that separate the concepts of UNIX and microservices, parallels can be drawn between them. Comparing microservices with UNIX reveals both common principles and unique features of each approach. Both strive for modularity and specialization of components. Microservices, like UNIX utilities, often perform a single specific task, whether it's managing users, handling access control, or serving as an abstraction layer over a database. However, microservices are typically more complex and tend to accumulate additional features over time.

The interaction of components is implemented differently: UNIX uses native stdin and stdout redirection through pipes, while microservices require specific protocols (REST, RPC) with clearly documented interfaces. This complicates communication between services compared to the simplicity of the UNIX approach.

However, both approaches often rely on a text-based format for data exchange. In microservices, this is typically JSON or YAML, aligning with the principle of text streams in UNIX.
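
To make the parallel concrete, compare the pipeline from the Principle 2 example with a roughly equivalent, purely hypothetical microservice call; the service URL and JSON field are illustrative, and jq plays the role that awk plays in the pipeline:

cat /etc/passwd | grep user | awk -F: '{print $6}'                         # UNIX: compose tools through a text stream
curl -s http://user-service.internal/api/users/user | jq -r '.home_dir'    # microservices: compose services over HTTP and JSON

Both lines compose small, specialized components; the first does it through a text stream on one machine, the second over HTTP and JSON between services.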

These similarities and differences demonstrate the evolution of modularity and component interaction ideas in software development. Despite the time gap between their emergence, UNIX and microservices share many key concepts, highlighting the universality of certain development principles.

Similarities

Modularity and single responsibility:

  • UNIX: Utilities perform one task and do it well.

  • Microservices: Each service is responsible for a specific function (user management, access, caching).

Component Interaction:

  • UNIX: Utilities work together through pipelines.

  • Microservices: Services interact via APIs.

Text-based data format:

  • UNIX: Uses text streams for data exchange.

  • Microservices: Often use text formats (JSON, YAML) for data exchange.

Differences

Component Complexity:

  • UNIX: Utilities are usually simple and perform a minimal set of functions.

  • Microservices: Can be more complex and accumulate additional features.

Interaction Mechanism:

  • UNIX: Native stdin and stdout redirection through pipes.

  • Microservices: Require data transfer protocols (REST, RPC) with clearly defined interfaces.

Execution Context:

  • UNIX: Typically runs on a single computer with minimal delays.

  • Microservices: Can be distributed across different servers and data centers.

Goals and Application:

  • UNIX: Focused on the stability and reliability of the operating system.

  • Microservices: Focus on business logic and application flexibility.

Development and Deployment Complexity:

  • UNIX: Relatively simple development and installation of utilities.

  • Microservices: Require complex infrastructure for development, testing, and deployment.

Conclusion

We have analyzed the UNIX philosophy and microservices architecture, identifying both similarities and differences between these approaches to software development. Despite being separated by decades, both approaches demonstrate remarkable unity in key principles.

The main similarities we found include:

  • Modularity: Both UNIX and microservices aim to divide functionality into small, manageable components.

  • Specialization: Both approaches support the idea that each component should perform one task and do it well.

  • Interaction: Both UNIX and microservices emphasize effective communication between components.

However, we also identified significant differences:

  • Context of application: UNIX is designed to work on a single computer, while microservices are intended for distributed systems.

  • Complexity: Microservices are generally more complex to develop and deploy compared to UNIX utilities.

  • Flexibility: Microservices provide greater flexibility in choosing technologies and scaling individual components.

These similarities and differences are not coincidental. They reflect the evolution of software development principles in response to changing needs and technological advancements. The UNIX philosophy, created in the era of mainframes, laid the groundwork for the modular approach that today finds new expression in microservices, meeting the demands of the cloud computing and distributed systems era.

The principles underlying the UNIX philosophy and microservices architecture are also reflected in other software development methodologies. Object-Oriented Programming (OOP), with its concept of encapsulation and the SOLID principles, emphasizes the importance of modularity and specialization. The Single Responsibility Principle (SRP) from SOLID resonates with UNIX’s idea of “doing one thing well.” Design patterns such as facade, adapter, and singleton promote the creation of modular and efficiently interacting components. Functional programming, with its focus on pure functions and immutability, also shares the idea of creating small, well-defined components, which aligns with the principles of both UNIX and microservices architecture.

For modern developers and architects, understanding these principles is critically important. It allows us to learn from the time-tested ideas of UNIX, adapting them to the contemporary demands of scalability and flexibility that microservices provide.

Looking to the future, we can expect further development of both approaches. We will likely see new tools and practices that will simplify the development and deployment of microservices, making them accessible to a wider range of projects. At the same time, the principles of UNIX are likely to remain relevant, continuing to influence the design of operating systems and development tools.


Similar

Microservices

Sending and Applying Git Patches via Email – No GitHub Needed

Git today is the most widespread and popular version control system. Probably 99% of all current projects use Git, from the Linux Kernel to simple JavaScript libraries consisting of just one file and one function. The Linux Kernel is a huge and very complex project. It involves a large number of programmers worldwide. Coordinating changes in this project would be simply impossible without an effective solution that allows this entire community to work independently of one another. Now, this seems like a simple and obvious solution. However, the path to it was long and thorny. A Brief Retrospective 1998 was an important year for Linux. Large vendors took notice of the project, and more and more developers joined. At that time, the project followed a fairly simple model for changes: developers would send their patches to Linus Torvalds, who decided whether to include the code or not. Torvalds liked this model because it gave him control over all changes. The patch mechanism was used back when code trees were small and computers were very large. A patch literally was a set of instructions on punch cards telling what and how to replace in a stack of these media to get a new program version. Punch tapes were literally cut into pieces and glued together in a specific way to introduce changes to the program code of that time.   In general terms, a set of patches is a set of instructions that allow editing (semi- or fully automatically) the source program to get a new version. A patch set is always smaller than the full code version. This turned patches into a convenient interface for transferring changes and collaborative programming. Problems arose when the developer community began to grow. Linus Torvalds became a "bottleneck"; the number of patches grew, and the time to review them increased. Developers began using the CVS version control system to ease collaboration. Of course, this went against Torvalds' original policy on Linux kernel changes. He disliked the existence of parallel project branches with their own workflow. On the other hand, developers felt frustrated sending patches to Torvalds, who physically could not review, accept, request fixes, or reject them in a timely manner. Developers complained they had to send multiple emails to get the "benevolent dictator's" attention. The Emergence of Git The solution was to use a decentralized proprietary version control system called BitKeeper. The project used this software for a long time, but eventually, relations between the company developing BitKeeper and the Linux kernel developers soured. There was an amusing paradox: Linux Kernel is an open and free product licensed under the GNU General Public License (GPL). The main GPL principle is that anyone can freely use, distribute, and modify software released under this license, but all modifications must also be released under GPL. BitKeeper, however, was a fully closed proprietary commercial product owned entirely by its company.   Thus, the open and free project used a closed, non-free technology for coordinating development and versioning. Sooner or later, this fragile balance was going to break — and it did. This made using BitKeeper impossible. Torvalds rejected using Subversion and proposed Monotone instead. However, Monotone was unbearably slow. Eventually, Torvalds began writing his own version control system from scratch in C. Thus, Git was born. The new VCS was far from perfect but was positively received by the developer community and quickly gained the necessary tools. 
The new version control system rapidly gained popularity, and GitHub turned Git into the dominant solution for source code management in both open and commercial projects. Dominant... Indeed, any project, whether small or large (with thousands of contributors), is likely to be registered and hosted on GitHub. Even projects that don't use Git internally (like FreeBSD or OpenBSD) have read-only copies on GitHub. GitHub or Not GitHub? New developers (and not only them) tend to believe that without GitHub, project development and management are impossible. So, when you join a project as a developer (freelancer or FOSS contributor), you’ll be added to the team on this platform. Even if there are only two, three, or four of you... Even if the project consists of just a few dozen source files. GitHub everywhere. Is this good? It’s hard to answer simply yes or no. Certainly, GitHub has many useful tools; it’s convenient, fast, and reliable. Developers feel comfortable there, like in well-worn jeans. However, one should not forget that it’s a paid service managed by the well-known corporation Microsoft. Like any commercial product, GitHub is primarily focused on profit. If, for some reason, your project starts to interfere with that (damaging the platform’s image, etc.), your access will be instantly cut off. Recall the disputes GitHub had with the YouTube Downloader team, whose repositories were blocked, closed, and deleted simply because the RIAA demanded that GitHub restrict access to allegedly copyright-infringing software. This caused some (not a small number) teams to leave GitHub and switch to alternatives like GitLab or Gitea. In summary, setting aside moral and legal aspects, we see a contradiction: Git was designed as a decentralized version control system (unlike Subversion, for example), yet GitHub, which uses Git, enforces centralized management. Moreover, the developer effectively owns nothing; everything belongs to the "managing company." Is there life outside comfort? Can you use this great VCS without a third-party service? Can you accept patches without GitHub and send them to your team for review? Despite GitHub’s strong influence, Git’s architecture remains almost unchanged — it’s still a decentralized version control system. Git imposes absolutely no requirements on the exchange environment. You can use ordinary files (transfer them any way you want, even by copying to external media), upload patches to an FTP server, use SSH, or even Git’s built-in exchange protocol. This is very convenient. Recall the start of this article: Linus Torvalds accepted patches without GitHub (which didn’t exist then) by email and posted results on FTP servers. Sending Patches by Email Now, let's get to the main topic. Suppose we are a small, brave team that wants to be independent from anyone or anything. We have some money to buy a domain, VPS, and corporate email to exchange information and, of course, send and receive patches by email. Let's list tasks to build the necessary infrastructure for our project: Buy a domain. Buy corporate email and link it to our domain. Create mailboxes. Is it mandatory to buy a domain and corporate email? Not at all! You can use free mailboxes without a domain or purchase a domain later when needed. Everything depends on project requirements. However, from the early stages, the project may need a website, messaging (email), file exchange, and deployment infrastructure. You can buy these separately or combine them under one account for your project.  
Suppose we are developing a web app and need infrastructure. After buying a domain and setting up DNS, we register as many mailboxes as needed. After creating mailboxes, we must configure access to them in mail clients and Git. Setting Up Git to Send and Receive Patches via Email It all starts with installing a special Git extension package called git-email. This is done using the package manager of your operating system or its distribution. For example: Fedora: sudo dnf install git-email Ubuntu / Debian: sudo apt-get install git-email On Windows, git-email is included in the standard Git installation package. Next step — configuration. In your OS terminal, run: git config --global --edit This will open your favorite terminal (or other) text editor, where you need to add the following lines to your Git configuration (the example uses test credentials; you should use your own!): [user] name = Maria Ortega email = zerozero@hostman-example.com [sendemail] smtpserver = smtp.hostman.com smtpuser = zerozero@hostman.site smtpencryption = ssl smtpserverport = 465 The parameter smtpencryption can be set to either ssl or tls. The second mode uses STARTTLS to initiate communication over an encrypted channel, while the first mode encrypts the connection immediately after it is established. The choice of mode and port depends on your email provider’s requirements. The [user] section is mandatory. Here, you identify yourself, and this information will appear in all patches and commits made by you. For stricter identification of patches and commits, Git supports signing sent information with GPG keys — but that’s another story. Now that we’ve set up Git to send patches via email let’s try it out. First, we need to clone a copy of the current working repository version. There are various ways to do this, which we’ll discuss at the end of the article. After cloning, make some changes to your project. Create a file named log_stderr.go: package main import ( "fmt" "time" "os" ) func logStderr(message string, args ...interface{}) { x := time.Now() fmt.Fprint(os.Stderr, x.Format(time.RFC822)) fmt.Fprint(os.Stderr, " - ") fmt.Fprintf(os.Stderr, message, args...) } Stage and commit the changes: git add log_stderr.go git commit -m "log into stderr func" Now send your patch to the project lead for review: git send-email --to="project-boss@hostman-example.com" HEAD^ The --to argument can accept multiple addresses separated by commas. This way, you can send your patch to all project members. You can also use --cc (carbon copy) to send the patch to additional email addresses separated by commas. This is useful when you want to send patches for review to the entire team or specific interested parties. To avoid specifying recipients every time on the command line, you can add them to your Git config: git config sendemail.to "project-boss@hostman-example.com" git config sendemail.cc "user1@email.tld","user2@email.tld",…,"userN@email.tld" After that, just run: git send-email HEAD^ …And your patch will be sent to the configured addresses. In this example, we sent the current changes from our working copy (HEAD^). You can send any changes, for example, two commits before the current one, or by commit hash. More details are in the Git documentation. Git will generate the patch and try to send it via the SMTP server specified in the config. If the SMTP server requires authentication, you’ll need to enter your password. If you send many patches, this can be tedious. 
You can save the password in the config, but note it will be stored unencrypted: git config --global sendemail.smtpPass 'your password' A better option might be to configure Git to cache your password for some time: git config --global credential.helper 'cache --timeout 3600' More advanced solutions can use password managers and the git-credential extension, but we won’t cover that here. Receiving and Integrating Patches Your team members receive your patch as a plain text email message, and they can review it — and, imagine that, reject your changes with requests to “fix” or “rewrite.” This is natural and the core of collaborative software development. The freedom and manual patch management are what attract developers to create their own information exchange solutions. What if You Are Asked to Fix Your Patch? Suppose developers ask to reduce calls to the Fprintf function and add a logging severity level. The updated code will look like this: package main import ( "fmt" "time" "os" ) type LogSeverity string const ( ERR LogSeverity = "ERROR" WARN LogSeverity = "WARN" INFO LogSeverity = "INFO" DEBUG LogSeverity = "DEBUG" ) func LogStderr(message string, severity LogSeverity, args ...interface{}) { x := time.Now() fmt.Fprintf(os.Stderr, "%s - %s - ", x.Format(time.RFC822), severity) fmt.Fprintf(os.Stderr, message, args...) fmt.Fprint(os.Stderr, "\n") } Since we’re fixing our previous patch and haven’t released any newer patches, we can simply amend the current commit: git commit -a --amend Now send the patch again, remembering we already configured the recipients: git send-email --annotate -v2 HEAD^ The -v2 flag means this is the second version of the patch. If you need another fix, use -v3, and so on. The --annotate flag allows you to add comments to your email message. Git will open a text editor showing something like: Subject: [PATCH v2] Logging function to stderr --- Added log level, reduced fmt.Fprintf calls Add your notes, save, and close the editor; the patch will then be sent again to the recipients. Always add annotations to your patches — it makes life easier for both you and your colleagues. Typing --annotate every time can get tedious, so you can automate it: git config --global sendemail.annotate yes How to Receive and Apply Patches? Receiving patches is a bit trickier. Git sends specially formatted patches in plain text email messages. There can be many such patches, and Git does not restrict the transport method (email, FTP, etc.), so it doesn’t handle how to receive patches — that’s up to the developer. Just use your mail client’s capabilities. After receiving approved annotated patches, save one or more email messages containing patches in an mbox file (Unix mailbox format). This format stores one or more email messages in a single file. Then run: git am <path_to_patches.mbox> All patches will be incorporated into your working copy. You can continue working and impressing your team. Email-based Git workflows can be as simple or sophisticated as you want. The main thing is that it suits the team and does not create unnecessary inconvenience. It seems there is nothing simpler, neater, or more elegant than working with Git over email. However, there is one major problem: distributing the working copy to new developers joining the project. If the project is large and has a rich history, the repository size might be many megabytes or even gigabytes. Sending that over email is impossible — it’s simply not designed for that. 
How to Provide a Newcomer with the Entire Project History? Git has an interesting feature called a bundle. It’s a snapshot of the working copy or the entire repository in a binary format of Git changes. Bundles are much more compact than a set of text patches; history and data inside the bundle are compressed, and the format allows transmitting both text and binary data. Project leads or other responsible persons can upload the current project bundle to a file-sharing service — for example, an FTP server or an S3-compatible object storage like Hostman. The newcomer downloads the project bundle and clones it: git clone project.bundle <new_place> Now <new_place> contains a new working copy ready to work with email patches. However, to be honest, bundles are somewhat of an alternative to the patch email exchange workflow described above. Collaborative work using bundles is a different story.
07 July 2025 · 12 min to read
Microservices

REST API vs RPC API: Which One to Use for Service Communication?

Before answering the key questions—which approach should be used for service communication, what is the difference between REST and RPC, and whether there is a clear winner in the REST vs. RPC debate—let's take a deeper look at both approaches. However, before we begin, let’s clarify some terms—API, REST, RPC, HTTP, and more. An API is a set of tools and rules that allow applications to communicate with each other. Imagine an information service, a software library, or an application as a "black box" whose internal details are hidden. The API serves as a set of controls and indicators that enable interaction with this black box. HTTP is a protocol for transferring hypertext. As a protocol, it operates at the OSI model's application layer (Layer 7). HTTP is widely used for delivering web pages, transferring files, streaming media, and facilitating communication between information systems via open APIs. REST is an architectural style (not a protocol, standard, or technology) for designing distributed systems. It defines constraints that make web services scalable, simple, and maintainable. The term "representational state transfer" refers to the idea that a client interacts with resources by transferring their representations. We’ll explore this concept in more detail below. RPC is a technology that allows a client to execute computations on a server by calling a function or procedure remotely, passing parameters, and receiving results. It works as if the function were a part of the local code. RPC The idea of offloading computations from a low-power client to a high-performance server dates back decades. The first adopters of RPC were databases, which were then known as data banks or even knowledge bases. Over time, RPC evolved into a flexible and powerful technology. Companies like Sybase, Sun Microsystems, Microsoft, and others played a key role in shaping the concept. When monolithic architectures began shifting to multi-tiered architectures, RPC adapted well to the new paradigms. It also inspired the development of various industrial standards and protocols. We will now examine two architectural solutions that use RPC-based technologies: CORBA and web services. CORBA CORBA — or Common Object Request Broker Architecture, a generalized architecture of object request brokers. This is perhaps the most comprehensive architectural specification for building distributed systems. It emerged in the 1980s and gained widespread adoption in the 1990s. The biggest advantage of CORBA compared to other distributed architectures was that heterogeneous (or diverse) elements that implemented the standards of this architectural specification could be present in the network for computation execution and result exchange. It became possible to combine different ecosystems: Java, C/C++, and even Erlang. While a highly flexible and efficient architecture, CORBA is nevertheless quite complex internally, containing numerous descriptions and agreements, and, to be honest, it represents a significant headache for developers who are integrating their (or a new) ecosystem into this architectural paradigm. The second major obstacle to using CORBA is its network stack. It operates over the TCP protocol and is quite complex; some CORBA implementations use standard TCP ports (defined and reserved for CORBA), while others use arbitrary ones, and it is not regulated in any way. All of this contradicts corporate network security policies. 
Additionally, it makes the use of CORBA on the Internet very inconvenient and even impossible. The workhorse of most information systems is the HTTP protocol. It uses two clearly defined TCP ports: 80 and 443. CORBA, on the other hand, requires four different TCP ports for its protocols, each with its own timing characteristics and features. Therefore, CORBA is suitable in cases where integration into an existing information system architecture built with CORBA is required. However, developing a new information system using this architectural solution is probably not advisable, as more efficient and simpler mechanisms exist today. Web Services, SOAP Given all CORBA's shortcomings, a standard was developed in the late 1990s that laid the foundation for so-called web services. Unlike CORBA, web services used an already existing, highly reliable, and simple protocol—HTTP—and fully relied on its architectural conventions. Each service had its own unique URL (Universal Resource Locator) and a set of methods that were also based on HTTP conventions. Machine- and architecture-independent formats such as XML or JSON were used as data carriers.  In particular, some web service implementations use a format called SOAP (Simple Object Access Protocol), which is based on XML. The new solution was significantly more convenient than the cumbersome CORBA, used the simple and reliable HTTP protocol, and was essentially independent of the technologies, deployment mechanisms, and scaling aspects of information systems. However, the new technology quickly became burdened with standards, rules, specifications, and other necessary but very tedious attributes of the Enterprise world. SOAP is a successful solution because XML, which underlies it, is a structured, machine-independent, user-defined data exchange language. XML already includes validation, data structure descriptions, and much more. But XML also has a downside. XML is an extremely verbose language overloaded with auxiliary elements. These include attributes, tags, namespaces, different brackets, quotation marks, and more. A large portion of SOAP packets consists of this auxiliary information. When scaled to millions of calls, this results in significant overhead due to all this informational noise. There is little that can be done to fix this issue, as it stems from the use of XML namespaces and the extremely detailed semantic definitions of the SOAP specification. Using less "noisy" data formats, such as JSON (in the JSON-RPC specification), introduces other risks, such as inconsistencies in data descriptions and the lack of structure definitions. Since web services are one implementation of the RPC concept, they function as a synchronous data exchange channel. Synchronous transmission is inconvenient, does not scale well, and can easily overload a system. RPC may seem an outdated concept that is best avoided in modern realities to prevent various problems and design errors. However, we have deliberately spent so much time discussing past technologies. If we take the best aspects of CORBA, wrap them in modern architectural solutions, and, like web services, run them over reliable network protocols, we get… gRPC gRPC is an open framework developed and implemented by Google. It is very similar to CORBA, but unlike CORBA, it runs on top of the standard HTTP/2 protocol. 
This version of the popular transport protocol has been significantly reworked, expanded, and improved compared to previous versions, providing efficient low-latency message transmission. CORBA uses its own Interface Definition Language (IDL) for interface descriptions. In gRPC, a modern framework called Protocol Buffers serves the same purpose. Like CORBA, the gRPC environment is heterogeneous, allowing different ecosystems to interact effectively. ProtoBuf uses its own transport format (serialization and deserialization of objects), which is much more compact than JSON and XML while remaining machine-independent. Today, gRPC has gradually replaced everything possible in the internal communication of microservices and is beginning to take over areas where web services and REST once dominated. Some bold developers are even experimenting with integrating gRPC into the front end. This is because gRPC was very well designed—it is reliable and fast and allows information systems to be built from heterogeneous nodes and components, much like the great CORBA once did. However, let’s assume I do not need cross-ecosystem interaction; I program only in Python/Golang/Java/(insert your language), and I want tools for distributed computing. Should I use gRPC, which, by the way, requires some time to master, or is there something that can help me "immediately and at low cost"? We are in luck. Today, RPC packages and service libraries are available in almost every programming ecosystem, such as: Python — xmlrpc package Go — net/rpc package Java — java.rmi (Remote Method Invocation) Haskell — WAI, xmlrpc, built-in OTP tools for distributed computing and clustering JavaScript — JSON-RPC Each of the aforementioned packages within its language ecosystem allows you to connect components together. To illustrate this with code, let's take a simple example from the documentation of the xmlrpc module in Python's standard library. RPC server code: from xmlrpc.server import SimpleXMLRPCServer def is_even(n): return n % 2 == 0 server = SimpleXMLRPCServer(("localhost", 8000)) print("Listening on port 8000...") server.register_function(is_even, "is_even") server.serve_forever() RPC client code: import xmlrpc.client with xmlrpc.client.ServerProxy("http://localhost:8000/") as proxy: print("3 is even: %s" % str(proxy.is_even(3))) print("100 is even: %s" % str(proxy.is_even(100))) As we can see, on the client side, everything looks very clear and simple, as if the is_even function is part of the client's own code. Everything is also quite simple and understandable on the server side: we define a function and then register it in the context of the server process responsible for RPC. It is important to note that the function we "expose" for external access is a regular function written in Python. It can easily be used locally in the server-side code, passing parameters to it and receiving the value it returns. The concept of RPC is very simple, elegant, and flexible: to call a function "on the other side," you only need to change the transport from local calls within a process to some network communication protocol and ensure bidirectional translation of parameters and results. REST So what is wrong with RPC, and why did we end up with REST as well? The first and perhaps the most serious reason is that RPC must have a layer that describes the nature of the data, interfaces, functions, and return calls. In CORBA, this is IDL; in gRPC, it is ProtoBuf. 
Even the slightest change requires synchronization of all definitions and interfaces. The second point, perhaps, stems from the very concept of a "function"—it is a black box that takes arguments as input and returns some value. A function does not describe or characterize itself in any way; the only way to understand what it does is by calling it and getting some result. Accordingly, as mentioned above, we need a description to determine the nature and order of computations. REST, as already mentioned at the beginning of this article, stands for REpresentational State Transfer, a protocol for transmitting representational state. It is important to clarify the meaning of the term "representational"—it means "self-descriptive," representing itself. Consequently, a certain state that is transferred between exchange participants does not require additional agreements, descriptions, or definitions—everything necessary, so to speak, is clear without words and is contained in the message itself. The term REST was introduced by Roy Fielding, one of the authors of HTTP, in 2000, in his dissertation "Architectural Styles and the Design of Network-based Software Architectures." He provided the theoretical basis for the way clients and servers interact on a global network, abstracting it and calling it "representational state transfer." Roy Fielding developed a concept for building distributed applications in which each request (REST request) from a client to a server already contains all the necessary information about the desired server response (the desired representational state), and the server is not required to store information about the client's state ("client session"). So, how does this work? In REST API, each service, each unit of information is designated by its URL. Thus, data can be retrieved simply by accessing this URL on the server. The URL in REST is structured as follows: /object/ — directs us to a list of objects /object/id — directs us to a single object with the specified ID or returns a 404 response if such an object is not found Thus, the very nature of defining a URL represents the nature of the server's response: in the first case—a list of objects, in the second—a single object. But that is not all. REST, as mentioned above, uses HTTP as its transport. And in HTTP, one of the key parameters that define the nature of the data returned by the server is the method. By using HTTP methods, we can define another set of self-descriptive states: GET /object/ — returns a list of objects GET /object/id — returns an object with the specified ID or 404 POST /object/ — creates a new object or returns an error (most often an error with code 400 or another) PUT /object/id — edits an object with the specified ID or returns errors DELETE /object/id — deletes an object with the specified ID or returns errors Some servers ignore the semantics of the PUT and DELETE methods; in this case, the POST /object/id method is used with a request body (object data) for editing or the same POST request with an empty body for deleting an object. Thus, instead of the variety of choices that REST provides us, we get a minimal set of operations on data. So, where is the advantage here? As mentioned above, REST is an architectural solution, not a technology. This means that REST does not impose any special requirements on participants in such a network, as is the case with gRPC, CORBA, or SOAP. It is only necessary to maintain the semantics of a self-defining state and a unified data transmission protocol. 
As a result, REST networks can combine the seemingly incompatible: a powerful cluster with load balancers and databases, and a simple "smart" light bulb with a microcontroller controlled via REST. REST is thus an extremely flexible architecture with virtually zero cost of ensuring interoperability. However, to guarantee such an impressive result, REST introduces a number of restrictions (which is why they are also called architectural constraints). Let's briefly list them:

Client-server architecture. REST networks must be based on the client-server model. Separating the client's interface concerns from the server's concerns improves the portability of client code, while simplifying the server side improves scalability.

Statelessness. The server should not store any special information about the client between calls; traditional web sessions are not acceptable here. The server must obtain everything it needs to know about the client's state from the request itself.

Caching. Server responses can be cached, which helps improve system performance. If caching is applied, the server must ensure that the client still receives up-to-date information.

Uniform interface. This covers the unified way of writing object URLs, already discussed above, and the semantics of the HTTP methods. It also implies that the transport data format is interpreted identically by both server and client. Typically this is JSON, but combined options are possible, for example JSON and CBOR (the data type is described in the Content-Type header).

Scalability and layers. The client should make no assumptions about how the server is structured internally. This allows the system to scale flexibly and to use caches, load balancers, and much more.

By following these constraints, we can build highly efficient systems, as our modern experience with distributed systems and web services confirms. One of the most popular patterns implemented on top of REST is CRUD. The acronym is formed from the first letters of the operations Create, Read, Update, and Delete: the four basic operations sufficient for working with any data entity. More complex operations, known as use cases, can use a CRUD REST API to access data entities. Use cases can also follow the prescriptions and constraints of REST; in that case we call the information system RESTful. In such a system, REST conventions are used everywhere, and any extension of the system follows the same conventions. This is a pragmatic yet highly flexible approach: a unified architecture reduces system complexity, and as complexity decreases, so does the percentage of errors.

The concept of a REST API is so popular that it exists in almost every programming language ecosystem. REST support is built into Django and Laravel. In Go, you can use the Gin Gonic package or build your own RESTful system using only the standard library. For Erlang, the erf library can be used, while in Elixir a REST API is already integrated into the Phoenix framework. REST, as an architecture, imposes no restrictions on programming environments, frameworks, or anything else; it simply tells services: "Just speak REST, and everything will work out fine."

Let's now try to answer the question we posed at the very beginning. As you may have realized from this rather extensive article, each approach has clear advantages and very specific disadvantages. In this matter, the best option is a golden mean.
For critical services that process huge amounts of data, stability is the top priority, both in code, where data-definition errors are simply unacceptable, and in infrastructure, where faster system response time is always better. For such areas, the RPC concept in its modern implementation, gRPC, is undoubtedly more convenient. However, where business logic and complex multi-level interactions live, REST, with its strict and deliberately limited means of expression, becomes the preferable choice. The best strategy is to apply both approaches wisely and flexibly, letting your information system benefit from the strengths of each concept (or architectural solution).

When discussing pure RPC and REST, we deliberately abstracted away from infrastructure, programming languages, machines, memory, processors, and other technical details. In real-world projects, however, these aspects matter just as much. Most often, REST and RPC APIs are deployed either in containers (Docker, Podman, and similar technologies) or on so-called VPS (Virtual Private Servers); less frequently, they run on dedicated or rented hardware.

Infrastructure-as-a-Service (IaaS) is a convenient and relatively inexpensive way to run such projects, and Hostman's networking services provide a fitting solution: you can estimate the expected load precisely and plan your expenses accordingly. The VPC (Virtual Private Cloud) from Hostman allows containers and VPS instances to be interconnected while keeping all traffic within this network completely isolated from the Internet. An ideal solution for RPC, REST, or…? The decision is, of course, yours to make. But as for how to deploy everything and ensure the uninterrupted operation of your services, Hostman has you covered.
01 April 2025 · 15 min to read
Microservices

Developing an HTTP Client in Go: From Installation to First Requests

Using APIs to communicate with external services is becoming more and more crucial when creating applications. With APIs, applications can transmit and receive data across a network and communicate with each other. One of the most popular standards for creating and using APIs is REST (Representational State Transfer), which is based on the HTTP protocol.

Go has established itself as a powerful programming language for web development thanks to its performance, simplicity, and built-in support for network protocols. One of the key tasks Go developers often face is creating HTTP clients to interact with third-party REST APIs.

In this article, we will help developers who are new to Go and REST APIs build their first HTTP client. We will start with the basics and progress to more advanced topics, such as sending different types of HTTP requests, handling responses, and automating requests. We will also explore practical examples and best practices to help you create secure and reliable HTTP clients.

Setting Up the Environment

First, let's set up our working environment. We need to install the Go tools, configure a development environment, and initialize a new project.

Installing the Go Compiler

Go supports all major operating systems: Windows, Linux, and macOS. We'll briefly show the installation process for each of them.

Let's start with Windows. Follow these steps:

1. Go to the official Go website.

2. Download the installation package for your operating system (either the 32-bit or 64-bit version).

3. Run the downloaded file and follow the installation wizard's instructions.

4. Verify that the installation was successful by checking the Go version:

go version

For macOS, you can either download and run the installer or use a package manager such as Homebrew or MacPorts:

brew install go

Or:

sudo port install go

For Linux distributions, use a package manager:

Ubuntu: sudo snap install go --classic

Debian: sudo apt-get install golang-go

CentOS/AlmaLinux: sudo dnf install golang

Arch Linux: sudo pacman -S go

Configuring an IDE or Text Editor

You don't strictly need an IDE (integrated development environment), as Go provides a flexible set of command-line tools for building applications. However, an IDE or a text editor with Go support can make development more convenient and efficient. Some popular options:

Visual Studio Code (VSCode): A lightweight yet powerful editor with excellent Go support through extensions. This is the editor we use in this article.

Vim/Neovim: Highly customizable editors with Go plugins such as vim-go.

Emacs: A powerful and customizable text editor with Go support available through various packages and extensions.

If you decide to use VSCode, install the official "Go" extension from the Go development team to enable autocompletion, debugging, and other useful features. To do this:

1. Open VSCode.

2. Go to the Extensions tab or press Ctrl+Shift+X.

3. Search for the Go extension and install it.

Initializing a New Project

Now that your development environment is ready, let's create a new Go project for our HTTP client.

Create and navigate to your project directory:

mkdir httpclient && cd httpclient

Initialize a new Go module:

go mod init httpclient

After running this command, a go.mod file should appear; it stores information about the module and its dependencies.
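For reference, the generated go.mod is tiny. For this project it should look roughly like the following; the go directive will reflect whichever toolchain version you installed (1.22 here is just an example):

module httpclient

go 1.22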
Create and open the main project file using VSCode:

code main.go

If everything is set up correctly, the previous commands should complete without errors. Open the main.go file in your editor and add the following code:

package main

import (
	"fmt"
)

func main() {
	fmt.Println("Hello, HTTP Client in Go!")
}

Run the program to verify everything is working correctly:

go run main.go

If you have followed the steps correctly, you should see the message:

Hello, HTTP Client in Go!

Now you have a fully set up Go development environment and an initialized project. In the next chapters, we will start building a full-fledged HTTP client, sending requests to an API, and handling responses.

Sending HTTP Requests in Go

In this section, you will learn how to send different HTTP requests (GET, POST, PUT, DELETE) using Go's standard net/http library. We will start with basic methods and gradually move on to more complex scenarios.

Creating and Configuring an HTTP Client: GET and POST Requests

Before sending requests, you need to create an instance of an HTTP client. In Go, this is done using the http.Client{} struct. For this example, we will use JSONPlaceholder, a free test API that provides basic resources accessible via HTTP methods. Such APIs are an excellent solution for testing and understanding how different requests work: no special tokens, registration, or authentication are required, and you can run all the code on your local machine to see how it works in practice.

The GET method is used to retrieve data. Here is how it can be implemented in Go using a custom HTTP client. In your main.go file, add the following code:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"

	"httpclient/client"
)

func main() {
	// Initialize a custom HTTP client
	httpClient := client.NewHTTPClient(&http.Client{
		Timeout: 10 * time.Second,
	})

	ctx := context.Background()

	// Fetch an existing blog post using the custom HTTP client
	blogPost, _, err := httpClient.GetBlogPost(ctx, 1)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}

	fmt.Println("Blog Post:")
	fmt.Printf(" ID: %d\n", blogPost.ID)
	fmt.Printf(" Title: %s\n", blogPost.Title)
	fmt.Printf(" Body: %s\n", blogPost.Body)
	fmt.Printf(" User ID: %d\n", blogPost.UserID)

	// Attempt to fetch a non-existing post
	blogPost, _, err = httpClient.GetBlogPost(ctx, -1)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	fmt.Println("Blog Post:", blogPost)
}

Now, create a client.go file inside the client subdirectory and add the following code:

package client

import (
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strings"
)

const (
	defaultBaseURL = "https://jsonplaceholder.typicode.com/"
)

type HTTPClient struct {
	client  *http.Client
	BaseURL *url.URL
}

// Initialize a new HTTP client
func NewHTTPClient(baseClient *http.Client) *HTTPClient {
	if baseClient == nil {
		baseClient = &http.Client{}
	}
	baseURL, _ := url.Parse(defaultBaseURL)

	return &HTTPClient{
		client:  baseClient,
		BaseURL: baseURL,
	}
}

// Create a new HTTP request
func (c *HTTPClient) NewRequest(method, urlStr string, body any) (*http.Request, error) {
	if !strings.HasSuffix(c.BaseURL.Path, "/") {
		return nil, fmt.Errorf("BaseURL must have a trailing slash, but %q does not", c.BaseURL)
	}

	u, err := c.BaseURL.Parse(urlStr)
	if err != nil {
		return nil, err
	}

	var buf io.ReadWriter
	if body != nil {
		buf = &bytes.Buffer{}
		err := json.NewEncoder(buf).Encode(body)
		if err != nil {
			return nil, err
		}
	}

	req, err := http.NewRequest(method, u.String(), buf)
	if err != nil {
		return nil, err
	}

	if body != nil {
		req.Header.Set("Content-Type", "application/json")
	}

	return req, nil
}

// Execute the HTTP request
func (c *HTTPClient) Do(ctx context.Context, req *http.Request, v any) (*http.Response, error) {
	if ctx == nil {
		return nil, errors.New("context must be non-nil")
	}
	req = req.WithContext(ctx)

	resp, err := c.client.Do(req)
	if err != nil {
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		default:
		}
		return nil, err
	}
	defer resp.Body.Close()

	err = CheckResponse(resp)
	if err != nil {
		return resp, err
	}

	switch v := v.(type) {
	case nil:
	case io.Writer:
		_, err = io.Copy(v, resp.Body)
	default:
		decErr := json.NewDecoder(resp.Body).Decode(v)
		if decErr == io.EOF {
			decErr = nil // Ignore EOF errors caused by empty response body
		}
		if decErr != nil {
			err = decErr
		}
	}

	return resp, err
}

// Check if the HTTP response indicates an error
func CheckResponse(resp *http.Response) error {
	if c := resp.StatusCode; 200 <= c && c <= 299 {
		return nil
	}
	return fmt.Errorf("%s %s: %s", resp.Request.Method, resp.Request.URL, resp.Status)
}

// BlogPost represents a blog post entity
type BlogPost struct {
	ID     int64  `json:"id"`
	Title  string `json:"title"`
	Body   string `json:"body"`
	UserID int64  `json:"userId"`
}

// Fetch a blog post by ID
func (c *HTTPClient) GetBlogPost(ctx context.Context, id int64) (*BlogPost, *http.Response, error) {
	u := fmt.Sprintf("posts/%d", id)
	req, err := c.NewRequest(http.MethodGet, u, nil)
	if err != nil {
		return nil, nil, err
	}

	b := new(BlogPost)
	resp, err := c.Do(ctx, req, b)
	if err != nil {
		return nil, nil, err
	}
	defer resp.Body.Close()

	return b, resp, nil
}

main.go: Contains the application's entry point, initializes the HTTP client, and performs basic operations.

client.go: Handles the HTTP client logic, defining its structure, initialization functions, and request methods.

This modular approach allows for easy reuse in other projects and makes it possible to test the client independently of the main application.

The problem with http.DefaultClient is that it is a global variable, meaning any changes to it affect the entire program, which creates security and stability risks. Besides, http.DefaultClient lacks flexible configuration options, such as setting timeouts, TLS settings, proxies, or cookie management. By initializing our own HTTP client with http.Client{} and custom settings, we avoid these issues and gain greater flexibility and security in our application.

The POST method is used to send data to a server. In Go, there are two ways to send a POST request:

Post() — Used for sending data in various formats (JSON, XML, binary). Features: requires explicitly setting the Content-Type header (e.g., application/json); data is sent as a byte array ([]byte); allows custom request headers.

PostForm() — Optimized for submitting HTML form data (application/x-www-form-urlencoded). Features: automatically sets the Content-Type header; accepts data as a url.Values structure (similar to map[string][]string); simplifies working with form parameters (login, registration, search).

To send POST requests, we need to add functions that send data to the server. Below, we will implement two types of POST requests:

CreateBlogPost: sends JSON data.

PostForm: sends form-encoded data.
Copy the following function into your client.go file:

func (c *HTTPClient) CreateBlogPost(ctx context.Context, input *BlogPost) (*BlogPost, *http.Response, error) {
	req, err := c.NewRequest(http.MethodPost, "posts/", input)
	if err != nil {
		return nil, nil, err
	}

	b := new(BlogPost)
	resp, err := c.Do(ctx, req, b)
	if err != nil {
		return nil, nil, err
	}
	defer resp.Body.Close()

	return b, resp, nil
}

Then add the PostForm function to client.go as well:

func (c *HTTPClient) PostForm(myUrl string, formData map[string]string) (string, error) {
	form := url.Values{}
	for key, value := range formData {
		form.Set(key, value)
	}

	resp, err := c.client.PostForm(myUrl, form)
	if err != nil {
		return "", fmt.Errorf("error making POST form request: %w", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", fmt.Errorf("error reading response body: %w", err)
	}

	return string(body), nil
}

Don't forget that the net/url package must be imported in client.go (it is already present in the import block shown earlier).

Now, modify your main.go file to call the CreateBlogPost function:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"

	"httpclient/client"
)

func main() {
	// Initialize a custom HTTP client
	httpClient := client.NewHTTPClient(&http.Client{
		Timeout: 10 * time.Second,
	})

	ctx := context.Background()

	input := &client.BlogPost{
		Title:  "foo",
		Body:   "bar",
		UserID: 1,
	}

	// Create a new blog post using the custom HTTP client
	blogPost, _, err := httpClient.CreateBlogPost(ctx, input)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}

	fmt.Println("Created Blog Post:")
	fmt.Printf(" ID: %d\n", blogPost.ID)
	fmt.Printf(" Title: %s\n", blogPost.Title)
	fmt.Printf(" Body: %s\n", blogPost.Body)
	fmt.Printf(" User ID: %d\n", blogPost.UserID)
}

After running the program (go run .), you should see the created blog post printed to the console.

Working with Other Request Types (PUT, DELETE, etc.)

Similarly to GET and POST, you can send other HTTP requests.

PUT is used to completely replace a resource or create it if it does not exist.

DELETE is used to remove a resource at the specified URL.

To work with PUT and DELETE, use a universal approach based on http.NewRequest.
Add the following functions to client.go:

func (c *HTTPClient) PutJSON(myUrl string, jsonData []byte) (string, error) {
	req, err := http.NewRequest(http.MethodPut, myUrl, bytes.NewBuffer(jsonData))
	if err != nil {
		return "", fmt.Errorf("error creating PUT request: %w", err)
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := c.client.Do(req)
	if err != nil {
		return "", fmt.Errorf("error making PUT request: %w", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", fmt.Errorf("error reading response body: %w", err)
	}

	return string(body), nil
}

func (c *HTTPClient) Delete(myUrl string) (string, error) {
	req, err := http.NewRequest(http.MethodDelete, myUrl, nil)
	if err != nil {
		return "", fmt.Errorf("error creating DELETE request: %w", err)
	}

	resp, err := c.client.Do(req)
	if err != nil {
		return "", fmt.Errorf("error making DELETE request: %w", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", fmt.Errorf("error reading response body: %w", err)
	}

	return string(body), nil
}

Modify your main.go file to call these new functions:

package main

import (
	"fmt"
	"net/http"
	"time"

	"httpclient/client"
)

func main() {
	httpClient := client.NewHTTPClient(&http.Client{
		Timeout: 10 * time.Second,
	})

	// Example PUT request
	jsonToPut := []byte(`{"id": 1, "title": "foo", "body": "bar", "userId": 1}`)
	putResp, err := httpClient.PutJSON("https://jsonplaceholder.typicode.com/posts/1", jsonToPut)
	if err != nil {
		fmt.Println("Error:", err)
	} else {
		fmt.Println("PUT Response:", putResp)
	}

	// Example DELETE request
	deleteResp, err := httpClient.Delete("https://jsonplaceholder.typicode.com/posts/1")
	if err != nil {
		fmt.Println("Error:", err)
	} else {
		fmt.Println("DELETE Response:", deleteResp)
	}
}

After running the program (go run .), you should see the PUT and DELETE responses printed to the console.

For more complex scenarios, you can configure:

Client timeouts

Retry logic

Custom authorization headers

This section has covered how to create and configure an HTTP client and send different types of HTTP requests. Now you can move on to more advanced REST API interactions.

Interacting with REST API in Go

Now that we understand how to send HTTP requests in Go, let's explore how to interact with a REST API. We will:

Create data models to handle API responses

Convert received data into structured objects

Demonstrate an example of usage

We will start by sending a request to retrieve a post and processing the received response.

Creating Data Models for API Responses

In Go, API responses are typically processed using structs. Defining structs to store data allows us to handle API responses more conveniently and safely. Here is an example of a Post struct:

package main

type Post struct {
	UserID int    `json:"userId"`
	ID     int    `json:"id"`
	Title  string `json:"title"`
	Body   string `json:"body"`
}

This struct matches the JSON format returned by the API. The fields are marked with JSON tags to ensure correct data conversion.

Converting API Responses into Structured Data

Now, let's send a GET request to the API and convert the response into a Go struct.
Here is the full main.go implementation:

package main

import (
	"fmt"
	"net/http"
	"time"

	"httpclient/client"
)

type Post struct {
	UserID int    `json:"userId"`
	ID     int    `json:"id"`
	Title  string `json:"title"`
	Body   string `json:"body"`
}

func main() {
	// Initialize HTTP client
	httpClient := client.NewHTTPClient(&http.Client{
		Timeout: 10 * time.Second,
	})

	// Fetch post data
	post, err := httpClient.GetBlogPost(1)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}

	// Print post details
	fmt.Printf("Post ID: %d\n", post.ID)
	fmt.Printf("User ID: %d\n", post.UserID)
	fmt.Printf("Title: %s\n", post.Title)
	fmt.Printf("Body: %s\n", post.Body)
}

Modify the GetBlogPost function in client.go:

func (c *HTTPClient) GetBlogPost(postID int) (*Post, error) {
	resp, err := c.Client.Get(fmt.Sprintf("https://jsonplaceholder.typicode.com/posts/%d", postID))
	if err != nil {
		return nil, fmt.Errorf("error making GET request: %w", err)
	}
	defer resp.Body.Close()

	var post Post
	err = json.NewDecoder(resp.Body).Decode(&post)
	if err != nil {
		return nil, fmt.Errorf("error decoding response body: %w", err)
	}

	return &post, nil
}

In this example, we:

Initialize the HTTP client

Send a GET request

Retrieve post data

Convert the JSON response into a Post struct

Print the post details

After running the program (go run .), you should see the post details printed to the console.

Handling API Responses in Go

In this chapter, we will explore how to process responses from a REST API in Go. We will cover topics such as checking HTTP status codes, handling response bodies, and managing and logging HTTP errors.

Checking HTTP Status Codes

An HTTP status code indicates the result of an HTTP request. It helps determine whether an operation was successful or if an error occurred. Two of the most common HTTP status codes are:

200 (OK) indicates that the request was successful.

404 (Not Found) means the requested resource does not exist.

The main.go file:

package main

import (
	"fmt"
	"net/http"
)

type Post struct {
	UserID int    `json:"userId"`
	ID     int    `json:"id"`
	Title  string `json:"title"`
	Body   string `json:"body"`
}

func main() {
	httpClient := NewHTTPClient()

	// GET request
	response, err := httpClient.Get("https://jsonplaceholder.typicode.com/posts/1")
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	defer response.Body.Close()

	if response.StatusCode != http.StatusOK {
		fmt.Printf("Error: Received non-200 response code: %d\n", response.StatusCode)
		return
	}

	fmt.Printf("Received a successful response. Status code: %d\n", response.StatusCode)
}

In the client.go file, we will define a simple Get() method:

func (c *HTTPClient) Get(url string) (*http.Response, error) {
	resp, err := c.Client.Get(url)
	if err != nil {
		return nil, fmt.Errorf("error making GET request: %w", err)
	}
	return resp, nil
}

In this example, we send a GET request and check the response status code. Depending on whether the request succeeds, you will see different output messages.

Processing the Response Body (XML)

Once we have checked the HTTP status code, we can move on to processing the response body. Most APIs return data in JSON format, but some may use XML or other formats. Previously, we demonstrated handling JSON responses; here, we will cover XML processing instead.
Since JSONPlaceholder does not support XML, we will use a different public API in main.go that can work with XML:

package main

import (
	"encoding/xml"
	"fmt"
)

type Post struct {
	UserID int    `json:"userId"`
	ID     int    `json:"id"`
	Title  string `json:"title"`
	Body   string `json:"body"`
}

type Response struct {
	XMLName xml.Name `xml:"objects"`
	Objects []Object `xml:"object"`
}

type Object struct {
	ID        int    `xml:"id"`
	Name      string `xml:"name"`
	Email     string `xml:"email"`
	Avatar    string `xml:"avatar"`
	CreatedAt string `xml:"created-at"`
	UpdatedAt string `xml:"updated-at"`
}

func main() {
	httpClient := NewHTTPClient()

	var response Response
	err := httpClient.GetXML("https://thetestrequest.com/authors.xml", &response)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}

	for _, obj := range response.Objects {
		fmt.Printf("ID: %d, Name: %s, Email: %s, Avatar: %s, CreatedAt: %s, UpdatedAt: %s\n",
			obj.ID, obj.Name, obj.Email, obj.Avatar, obj.CreatedAt, obj.UpdatedAt)
	}
}

In client.go, we'll define a new function for a GET request that returns XML:

func (c *HTTPClient) GetXML(url string, v any) error {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return fmt.Errorf("error creating GET request: %w", err)
	}

	resp, err := c.Client.Do(req)
	if err != nil {
		return fmt.Errorf("error making GET request: %w", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("received non-200 response code: %d", resp.StatusCode)
	}

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return fmt.Errorf("error reading response body: %w", err)
	}

	err = xml.Unmarshal(body, v)
	if err != nil {
		return fmt.Errorf("error unmarshalling XML response: %w", err)
	}

	return nil
}

In this example, we:

Read the response body.

Convert the XML response into our predefined structure.

Print the formatted data to the console for better readability.

After running the code, you will see the parsed author records printed to the console.

To learn more about JSON and XML, their key differences, and best use cases, check out our article: "JSON vs. XML: Comparing Popular Data Exchange Formats."

Handling HTTP Errors and Logging

Proper error handling is a critical part of integrating with an API. Let's break it down into three key failure points:

Request Sending Errors — Occur due to network issues, incorrect URLs, or an unreachable server.

Response Reading Errors — Even a successful 200 OK status does not always guarantee valid data.

Data Conversion Errors — A common issue when working with JSON/XML responses.

Proper error handling matters because it prevents application crashes and simplifies debugging when something goes wrong with API communication. We will implement error logging using the following code:

package main

import (
	"fmt"
	"log"
	"os"
)

type Post struct {
	UserID int    `json:"userId"`
	ID     int    `json:"id"`
	Title  string `json:"title"`
	Body   string `json:"body"`
}

func main() {
	if err := run(); err != nil {
		log.Printf("Error: %v", err)
		os.Exit(1)
	}
}

func run() error {
	client := NewHTTPClient()

	post, err := client.GetBlogPost(1)
	if err != nil {
		return fmt.Errorf("error occurred while getting post: %w", err)
	}

	fmt.Printf("ID: %d\nUser ID: %d\nTitle: %s\nBody: %s\n", post.ID, post.UserID, post.Title, post.Body)
	return nil
}

In this example, we use the log package to log errors: the log.Printf call outputs the error message, and os.Exit(1) terminates the program with a non-zero status. The result of the code execution will remain the same as before, since these requests produce no errors, but you can change the variables to see the error messages.
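As a small extension of this approach, the standard log package can also be pointed at a file instead of the console with log.SetOutput. The following is a minimal sketch; the file name http-client.log is an arbitrary choice for illustration:

package main

import (
	"log"
	"os"
)

func main() {
	// Open (or create) a log file and send all subsequent log output there.
	f, err := os.OpenFile("http-client.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		log.Fatalf("cannot open log file: %v", err)
	}
	defer f.Close()

	log.SetOutput(f)
	log.Printf("Error: %v", "example error message") // written to http-client.log, not the console
}

Every later log.Printf call in the program will then be written to that file, which makes it easier to inspect failures after the fact.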
Automating HTTP Requests

In this chapter, we will explore how to automate sending multiple HTTP requests. We will look at different approaches: using loops, using goroutines for parallel requests, and handling requests and responses asynchronously.

Using Loops to Send Multiple Requests

To send multiple HTTP requests, we can use loops:

package main

import (
	"fmt"
	"log"
)

type Post struct {
	UserID int    `json:"userId"`
	ID     int    `json:"id"`
	Title  string `json:"title"`
	Body   string `json:"body"`
}

func main() {
	client := NewHTTPClient()

	for i := 1; i <= 5; i++ {
		post, err := client.GetBlogPost(i)
		if err != nil {
			log.Printf("Error getting post %d: %v", i, err)
			continue
		}
		fmt.Printf("Request to post %d returned:\nID: %d \n%s \n\n", i, post.ID, post.Title)
	}
}

We use the for loop to send requests to different URLs and then print each request's number, post ID, and title to the console. After execution, you will see one block of output per post.

Using Goroutines for Parallel HTTP Requests

Go provides built-in capabilities for parallel task execution through goroutines. This allows sending multiple requests simultaneously, significantly speeding up the program's execution.

package main

import (
	"fmt"
	"log"
	"sync"
)

type Post struct {
	UserID int    `json:"userId"`
	ID     int    `json:"id"`
	Title  string `json:"title"`
	Body   string `json:"body"`
}

// fetchPost handles fetching a post using the GetBlogPost method and outputs the result.
func fetchPost(client *HTTPClient, postID int, wg *sync.WaitGroup) {
	defer wg.Done()

	post, err := client.GetBlogPost(postID)
	if err != nil {
		log.Printf("Error getting post %d: %v", postID, err)
		return
	}
	fmt.Printf("Request to post %d returned:\nID: %d\nUser ID: %d\nTitle: %s\nBody: %s\n\n",
		postID, post.ID, post.UserID, post.Title, post.Body)
}

func main() {
	client := NewHTTPClient()

	var wg sync.WaitGroup
	postIDs := []int{1, 2, 3, 4, 5}

	for _, postID := range postIDs {
		wg.Add(1)
		go fetchPost(client, postID, &wg)
	}

	wg.Wait()
}

In this example, we create the fetchPost function, which sends a request and prints the result. sync.WaitGroup is used to wait for the completion of all goroutines. Run this code and compare the execution speed with the previous solution. The output may vary between runs due to the asynchronous nature of the program.

Example of Asynchronous Request and Response Handling

Asynchronous processing allows sending requests and processing responses as they arrive. Let's look at an example that uses a channel to transmit results:

package main

import (
	"fmt"
	"log"
	"sync"
)

type Post struct {
	UserID int    `json:"userId"`
	ID     int    `json:"id"`
	Title  string `json:"title"`
	Body   string `json:"body"`
}

type Result struct {
	PostID int
	Post   *Post
	Err    error
}

// fetchPost handles fetching a post through the GetBlogPost method and sends the result to the channel.
func fetchPost(client *HTTPClient, postID int, results chan<- Result, wg *sync.WaitGroup) {
	defer wg.Done()

	post, err := client.GetBlogPost(postID)
	results <- Result{PostID: postID, Post: post, Err: err}
}

func main() {
	client := NewHTTPClient()

	var wg sync.WaitGroup
	postIDs := []int{1, 2, 3, 4, 5}
	results := make(chan Result, len(postIDs))

	// Launch goroutines for parallel request execution
	for _, postID := range postIDs {
		wg.Add(1)
		go fetchPost(client, postID, results, &wg)
	}

	// Close the channel after all goroutines finish
	go func() {
		wg.Wait()
		close(results)
	}()

	// Process results as they arrive
	for result := range results {
		if result.Err != nil {
			log.Printf("Error fetching post %d: %v\n", result.PostID, result.Err)
			continue
		}
		fmt.Printf("Request to post %d returned:\nID: %d\nUser ID: %d\nTitle: %s\nBody: %s\n\n",
			result.PostID, result.Post.ID, result.Post.UserID, result.Post.Title, result.Post.Body)
	}
}

In this example, we introduce a new Result structure to store request results and use the results channel to pass them from the goroutines to the main function.

At first glance, the last two approaches might seem very similar, and to some extent they are, but there are still differences:

Result Handling: In the asynchronous approach with channels, results are processed in the main goroutine as they arrive, while in the approach without channels, results are processed inside the worker goroutines.

Synchronization: Channels provide a built-in mechanism for safely passing data between goroutines, whereas the approach without channels relies on sync.WaitGroup alone.

Resource Usage: Asynchronous processing with channels may use resources more effectively. In the first approach, all tasks run in parallel, but the results may be processed less efficiently.

Because results are processed as they arrive from the channel, the order of posts may differ each time you rerun the code.

Advanced Features and Tips

The guide above is enough to write your first HTTP client. However, if you plan to go further, you will want to explore advanced features and best practices. This chapter covers third-party libraries, debugging and optimization techniques, and security considerations.

Using Third-Party Libraries for Working with APIs

The Go standard library provides basic functionality for working with HTTP requests, but sometimes it is more convenient to use third-party libraries that offer advanced features and simplify the process. One such library is go-resty. To install it, use the following command:

go get -u github.com/go-resty/resty/v2

Some of the advantages of go-resty include:

Automatic serialization (the process of converting data structures) and deserialization.

Session management (cookie support) and retries for failed requests.

Asynchronous requests.

Flexible configuration of timeouts, headers, parameters, and other options.

Built-in debugging features, including logging.

Testing tools such as mocking.

Here is an example of sending GET and POST requests using the go-resty library:

package main

import (
	"fmt"
	"log"

	"github.com/go-resty/resty/v2"
)

func main() {
	client := resty.New()

	// GET request
	resp, err := client.R().
		SetQueryParam("userId", "1").
Get("https://jsonplaceholder.typicode.com/posts") if err != nil { log.Fatalf("Error on GET request: %v", err) } fmt.Println("GET Response Info:") fmt.Println("Status Code:", resp.StatusCode()) fmt.Println("Body:", resp.String()) // POST request post := map[string]any{ "userId": 1, "title": "foo", "body": "bar", } resp, err = client.R(). SetHeader("Content-Type", "application/json"). SetBody(post). Post("https://jsonplaceholder.typicode.com/posts") if err != nil { log.Fatalf("Error on POST request: %v", err) } fmt.Println("POST Response Info:") fmt.Println("Status Code:", resp.StatusCode()) fmt.Println("Body:", resp.String()) } The library significantly simplifies working with HTTP requests and provides many useful features. Debugging and optimization are crucial aspects of development, so let's look at some examples. Logging Requests and Responses For debugging purposes, it's helpful to log requests and responses. We can do this using the library we installed earlier: client := resty.New(). SetDebug(true) Also, use http.Transport to manage the number of open connections: client := resty.New() transport := &http.Transport{ MaxIdleConns: 10, IdleConnTimeout: 30 * time.Second, DisableKeepAlives: false, } client.SetTransport(transport) client.SetTimeout(10 * time.Second) Best Practices for Developing Secure and Reliable HTTP Clients An example of a secure and reliable HTTP client using go-resty: Error Handling: Resty automatically handles errors, simplifying response checks. Using TLS: Resty supports custom transport settings to enable TLS. Secure methods for storing and transmitting authentication tokens: package main import ( "crypto/tls" "fmt" "log" "net/http" "github.com/go-resty/resty/v2" ) func main() { // Create client with configured TLS client := resty.New() // Configure security transport layer client.SetTransport(&http.Transport{ // Using standard TLS configuration TLSClientConfig: &tls.Config{ // Additional configuration parameters can be set here MinVersion: tls.VersionTLS12, // Example: minimum TLS version 1.2 }, }) token := "your_auth_token_here" // Sending GET request with error handling and TLS verification resp, err := client.R(). SetHeader("Authorization", "Bearer "+token). Get("https://jsonplaceholder.typicode.com/posts/1") if err != nil { log.Fatalf("Error: %v", err) } if resp.StatusCode() != http.StatusOK { log.Fatalf("Non-200 response: %d", resp.StatusCode()) } // Handle response body fmt.Printf("Response: %s\n", resp.String()) } Using the SetHeader method to set the "Authorization" header with a bearer token is a standard and secure practice, provided other security aspects are followed: Proper and secure storage of tokens. On the client side, this could be a secure container protected from unauthorized access. Transmitting tokens through secure channels, such as HTTPS. Minimizing token lifespan and regularly updating tokens. Using time-limited tokens and periodic rotation increases overall security. Additional recommendations for reliable HTTP clients: Timeouts: client.SetTimeout(15 * time.Second) Retries: client.R().SetRetryCount(3).Get("https://jsonplaceholder.typicode.com/posts/1") Logging Requests and Responses: client.SetDebug(true) Using go-resty significantly simplifies the process of creating an HTTP client in Go. The library provides extensive capabilities and features for flexible configuration according to your needs. 
Additionally, go-resty handles more complex requests such as these file uploads and multipart forms, as well as fully custom requests, and it manages headers automatically with minimal code and effort.

Conclusion

Developing HTTP clients in Go is an essential skill for any developer working with web services and APIs. In this article, we covered the key aspects of creating an HTTP client, from the basics to the more advanced features of the language.

For further study and a deeper understanding of the topic, we recommend the following resources:

Official Go documentation

net/http package documentation

GitHub repository for go-resty
13 March 2025 · 27 min to read
