
Developing an HTTP Client in Go: From Installation to First Requests

Hostman Team
Technical writer
13.03.2025

Using APIs to communicate with external services is becoming more and more crucial when creating applications. With APIs, applications can transmit and receive data across a network and communicate with each other. One of the most popular standards for creating and using APIs is REST (Representational State Transfer), which is based on the HTTP protocol.

Go has established itself as a powerful programming language for web development due to its performance, simplicity, and built-in support for network protocols. One of the key tasks that Go developers often need to solve is creating HTTP clients to interact with third-party REST APIs.

In this article, we will help developers who are new to Go and REST APIs build their first HTTP client. We will start with the basics and progress to more advanced topics, such as sending different types of HTTP requests, handling responses, and automating requests. Additionally, we will explore practical examples and best practices to help you create secure and reliable HTTP clients.

Setting Up the Environment

First, let’s set up our working environment. We need to install Go tools, configure a development environment, and initialize a new project.

Installing the Go Compiler

Go supports all major operating systems: Windows, Linux, and macOS. We’ll briefly show the installation process for each of them, starting with Windows.

Follow these steps:

  1. Go to the official Go website.

  2. Download the installation package for your operating system (either 32-bit or 64-bit version).

  3. Run the downloaded file and follow the installation wizard's instructions.

Verify the installation was successful by checking the Go version.

go version


For macOS, you can either download and run the installer or use a package manager like Homebrew or MacPorts:

brew install go

Or:

sudo port install go

For Linux distributions, use a package manager:

Ubuntu:

sudo snap install go --classic

Debian:

sudo apt-get install golang-go

CentOS/AlmaLinux: 

sudo dnf install golang

Arch Linux:

sudo pacman -S go

Configuring an IDE or Text Editor

You don’t strictly need an IDE (integrated development environment), since Go ships with a flexible set of command-line tools for building applications.

However, an IDE or a text editor with Go support can still enhance your development experience, making it more convenient and efficient.

Below are some popular options:

  • Visual Studio Code (VSCode): A lightweight yet powerful editor with excellent Go support through extensions. This is the editor we use in this article.

  • Vim/Neovim: Highly customizable editors with plugin support for Go, such as vim-go.

  • Emacs: A powerful and customizable text editor widely used for text editing, with Go support available through various packages and extensions.

If you decide to use VSCode, install the official "Go" extension from the Go development team to enable autocomplete, debugging, and other useful features. To do this:

  1. Open VSCode.

  2. Go to the Extensions tab or press Ctrl+Shift+X.

  3. Search for the Go extension and install it.


Initializing a New Project

Now that your development environment is ready, let's create a new Go project to develop our HTTP client.

Create and navigate to your project directory:

mkdir httpclient && cd httpclient

Initialize a new Go module:

go mod init httpclient

After running this command, a go.mod file should appear, which will store information about the module and its dependencies.
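At this point, go.mod contains little more than the module path and a Go version directive; the version line below is only an example and will match whatever toolchain you installed:

module httpclient

go 1.21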

Create and open the main project file using VSCode:

code main.go

If everything is set up correctly, these commands complete without errors and VSCode opens the (still empty) main.go file.

Open the main.go file in your editor and add the following code:

package main

import (
    "fmt"
)

func main() {
    fmt.Println("Hello, HTTP Client in Go!")
}

Run the program to verify everything is working correctly:

go run main.go

If you have followed the steps correctly, you should see the message:

Hello, HTTP Client in Go!

Now, you have a fully set up Go development environment and an initialized project. In the next chapters, we will start building a full-fledged HTTP client, sending requests to an API, and handling responses.

Sending HTTP Requests in Go

In this section, you will learn how to send different HTTP requests (GET, POST, PUT, DELETE) using Go’s standard net/http library. We will start with basic methods and gradually move on to more complex scenarios.

Creating and Configuring an HTTP Client: GET and POST Requests

Before sending requests, you need to create an instance of an HTTP client. In Go, this is done using the http.Client{} struct.

For this example, we will use JSONPlaceholder, a free test API that provides basic resources accessible via HTTP methods. Such APIs are an excellent solution for testing and understanding how different requests work. No special tokens, registration, or authentication are required — you can run all the code on your local machine to see how it works in practice.

The GET method is used to retrieve data. Instead of calling http.Get() directly, we will build a small reusable client around http.NewRequest and http.Client.Do, which gives us control over timeouts, headers, and error handling.

In your main.go file, add the following code:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"

	"httpclient/client"
)

func main() {
	// Initialize a custom HTTP client
	httpClient := client.NewHTTPClient(&http.Client{
		Timeout: 10 * time.Second,
	})

	ctx := context.Background()

	// Fetch an existing blog post using the custom HTTP client
	blogPost, _, err := httpClient.GetBlogPost(ctx, 1)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}

	fmt.Println("Blog Post:")
	fmt.Printf("  ID: %d\n", blogPost.ID)
	fmt.Printf("  Title: %s\n", blogPost.Title)
	fmt.Printf("  Body: %s\n", blogPost.Body)
	fmt.Printf("  User ID: %d\n", blogPost.UserID)

	// Attempt to fetch a non-existing post
	blogPost, _, err = httpClient.GetBlogPost(ctx, -1)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}

	fmt.Println("Blog Post:", blogPost)
}

Now, create a client subdirectory, add a client.go file inside it, and add the following code:

package client

import (
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strings"
)

const (
	defaultBaseURL = "https://jsonplaceholder.typicode.com/"
)

type HTTPClient struct {
	client  *http.Client
	BaseURL *url.URL
}

// Initialize a new HTTP client
func NewHTTPClient(baseClient *http.Client) *HTTPClient {
	if baseClient == nil {
		baseClient = &http.Client{}
	}

	baseURL, _ := url.Parse(defaultBaseURL)

	return &HTTPClient{
		client:  baseClient,
		BaseURL: baseURL,
	}
}

// Create a new HTTP request
func (c *HTTPClient) NewRequest(method, urlStr string, body any) (*http.Request, error) {
	if !strings.HasSuffix(c.BaseURL.Path, "/") {
		return nil, fmt.Errorf("BaseURL must have a trailing slash, but %q does not", c.BaseURL)
	}

	u, err := c.BaseURL.Parse(urlStr)
	if err != nil {
		return nil, err
	}

	var buf io.ReadWriter
	if body != nil {
		buf = &bytes.Buffer{}
		err := json.NewEncoder(buf).Encode(body)
		if err != nil {
			return nil, err
		}
	}

	req, err := http.NewRequest(method, u.String(), buf)
	if err != nil {
		return nil, err
	}

	if body != nil {
		req.Header.Set("Content-Type", "application/json")
	}

	return req, nil
}

// Execute the HTTP request
func (c *HTTPClient) Do(ctx context.Context, req *http.Request, v any) (*http.Response, error) {
	if ctx == nil {
		return nil, errors.New("context must be non-nil")
	}

	req = req.WithContext(ctx)

	resp, err := c.client.Do(req)
	if err != nil {
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		default:
		}

		return nil, err
	}
	defer resp.Body.Close()

	err = CheckResponse(resp)
	if err != nil {
		return resp, err
	}

	switch v := v.(type) {
	case nil:
	case io.Writer:
		_, err = io.Copy(v, resp.Body)
	default:
		decErr := json.NewDecoder(resp.Body).Decode(v)
		if decErr == io.EOF {
			decErr = nil // Ignore EOF errors caused by empty response body
		}
		if decErr != nil {
			err = decErr
		}
	}

	return resp, err
}

// Check if the HTTP response indicates an error
func CheckResponse(resp *http.Response) error {
	if c := resp.StatusCode; 200 <= c && c <= 299 {
		return nil
	}

	return fmt.Errorf("%s %s: %s", resp.Request.Method, resp.Request.URL, resp.Status)
}

// BlogPost represents a blog post entity
type BlogPost struct {
	ID     int64  `json:"id"`
	Title  string `json:"title"`
	Body   string `json:"body"`
	UserID int64  `json:"userId"`
}

// Fetch a blog post by ID
func (c *HTTPClient) GetBlogPost(ctx context.Context, id int64) (*BlogPost, *http.Response, error) {
	u := fmt.Sprintf("posts/%d", id)

	req, err := c.NewRequest(http.MethodGet, u, nil)
	if err != nil {
		return nil, nil, err
	}

	b := new(BlogPost)
	resp, err := c.Do(ctx, req, b)
	if err != nil {
		return nil, nil, err
	}
	defer resp.Body.Close()

	return b, resp, nil
}
The project is now split into two files:

  • main.go: Contains the application's entry point, initializes the HTTP client, and performs basic operations.

  • client.go: Handles the HTTP client logic, defining its structure, initialization functions, and request methods. This modular approach allows for easy reuse in other projects and makes testing the client independent of the main application.

The problem with http.DefaultClient is that it is a global variable, meaning any changes to it affect the entire program, which creates security and stability risks. In addition, http.DefaultClient lacks flexible configuration options, such as setting timeouts, TLS settings, proxies, or cookie management.

By initializing our own HTTP client with http.Client{} and custom settings, we avoid these issues and ensure greater flexibility and security in our application.
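For illustration, here is a minimal sketch of wiring a custom-configured http.Client into the constructor from client.go; the timeout and transport values are placeholders, not recommendations:

package main

import (
	"net/http"
	"time"

	"httpclient/client"
)

func main() {
	// A dedicated client instead of the shared http.DefaultClient.
	customClient := &http.Client{
		Timeout: 15 * time.Second,
		Transport: &http.Transport{
			MaxIdleConnsPerHost: 10,
			IdleConnTimeout:     30 * time.Second,
		},
	}

	_ = client.NewHTTPClient(customClient)
}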

The POST method is used to send data to a server. The standard library offers two convenience helpers (a short sketch of both follows the list):

  1. http.Post() — used for sending a request body in any format (JSON, XML, binary). Features:

    • Requires explicitly passing the Content-Type (e.g., application/json).

    • The body is passed as an io.Reader (for example, a bytes.Buffer wrapping your []byte data).

    • Other headers cannot be set through this helper; for those, build the request with http.NewRequest.

  2. http.PostForm() — optimized for submitting HTML form data (application/x-www-form-urlencoded). Features:

    • Automatically sets the Content-Type header.

    • Accepts data as a url.Values structure (similar to map[string][]string).

    • Simplifies working with form parameters (login, registration, search).
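For reference, here is a minimal sketch of calling these two helpers directly against the JSONPlaceholder API; the request bodies are illustrative:

package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

func main() {
	// http.Post: the Content-Type and the body reader are passed explicitly.
	resp, err := http.Post(
		"https://jsonplaceholder.typicode.com/posts",
		"application/json",
		strings.NewReader(`{"title":"foo","body":"bar","userId":1}`),
	)
	if err == nil {
		fmt.Println("POST status:", resp.Status)
		resp.Body.Close()
	}

	// http.PostForm: the Content-Type is set to
	// application/x-www-form-urlencoded automatically.
	resp, err = http.PostForm(
		"https://jsonplaceholder.typicode.com/posts",
		url.Values{"title": {"foo"}, "body": {"bar"}},
	)
	if err == nil {
		fmt.Println("PostForm status:", resp.Status)
		resp.Body.Close()
	}
}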

To send POST requests, we need to add functions that allow us to send data to a server. Below, we will implement two types of POST requests:

  • CreateBlogPost: Sends JSON data.

  • PostForm: Sends form-encoded data.

Copy the following function into your client.go file:

func (c *HTTPClient) CreateBlogPost(ctx context.Context, input *BlogPost) (*BlogPost, *http.Response, error) {
	req, err := c.NewRequest(http.MethodPost, "posts/", input)
	if err != nil {
		return nil, nil, err
	}

	b := new(BlogPost)
	resp, err := c.Do(ctx, req, b)
	if err != nil {
		return nil, nil, err
	}
	defer resp.Body.Close()

	return b, resp, nil
}

Next, add the PostForm helper to client.go as well:

func (c *HTTPClient) PostForm(myUrl string, formData map[string]string) (string, error) {
	form := url.Values{}
	for key, value := range formData {
		form.Set(key, value)
	}

	resp, err := c.client.PostForm(myUrl, form)
	if err != nil {
		return "", fmt.Errorf("error making POST form request: %w", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", fmt.Errorf("error reading response body: %w", err)
	}

	return string(body), nil
}

Don’t forget to import the net/url package in client.go.

Now, modify your main.go file to call the CreateBlogPost function:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"

	"httpclient/client"
)

func main() {
	// Initialize a custom HTTP client
	httpClient := client.NewHTTPClient(&http.Client{
		Timeout: 10 * time.Second,
	})

	ctx := context.Background()

	input := &client.BlogPost{
		Title:  "foo",
		Body:   "bar",
		UserID: 1,
	}

	// Create a new blog post using the custom HTTP client
	blogPost, _, err := httpClient.CreateBlogPost(ctx, input)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}

	fmt.Println("Created Blog Post:")
	fmt.Printf("  ID: %d\n", blogPost.ID)
	fmt.Printf("  Title: %s\n", blogPost.Title)
	fmt.Printf("  Body: %s\n", blogPost.Body)
	fmt.Printf("  User ID: %d\n", blogPost.UserID)
}
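The PostForm helper is not called in the main.go above. If you want to exercise it, a minimal sketch looks like this; JSONPlaceholder accepts form-encoded POSTs to /posts and returns at least a generated id in the response body:

package main

import (
	"fmt"
	"net/http"
	"time"

	"httpclient/client"
)

func main() {
	httpClient := client.NewHTTPClient(&http.Client{Timeout: 10 * time.Second})

	// Send form-encoded data; the field values here are placeholders.
	body, err := httpClient.PostForm(
		"https://jsonplaceholder.typicode.com/posts",
		map[string]string{"title": "foo", "body": "bar", "userId": "1"},
	)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	fmt.Println("Form POST response:", body)
}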

After running the program (go run .), you should see the created post's details, including the id assigned by JSONPlaceholder, printed to the console.

Working with Other Request Types (PUT, DELETE, etc.)

Similarly to GET and POST, you can send other HTTP requests. 

  • PUT is used to completely replace a resource or create it if it does not exist.

  • DELETE is used to remove a resource at the specified URL.

To work with PUT and DELETE, use a universal approach with http.NewRequest.

Add the following functions to client.go:

func (c *HTTPClient) PutJSON(myUrl string, jsonData []byte) (string, error) {
	req, err := http.NewRequest(http.MethodPut, myUrl, bytes.NewBuffer(jsonData))
	if err != nil {
		return "", fmt.Errorf("error creating PUT request: %w", err)
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := c.client.Do(req)
	if err != nil {
		return "", fmt.Errorf("error making PUT request: %w", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", fmt.Errorf("error reading response body: %w", err)
	}

	return string(body), nil
}

func (c *HTTPClient) Delete(myUrl string) (string, error) {
	req, err := http.NewRequest(http.MethodDelete, myUrl, nil)
	if err != nil {
		return "", fmt.Errorf("error creating DELETE request: %w", err)
	}

	resp, err := c.client.Do(req)
	if err != nil {
		return "", fmt.Errorf("error making DELETE request: %w", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", fmt.Errorf("error reading response body: %w", err)
	}

	return string(body), nil
}

Modify your main.go file to call these new functions:

package main

import (
	"fmt"
	"net/http"
	"time"

	"httpclient/client"
)

func main() {
	httpClient := client.NewHTTPClient(&http.Client{
		Timeout: 10 * time.Second,
	})

	// Example PUT request
	jsonToPut := []byte(`{"id": 1, "title": "foo", "body": "bar", "userId": 1}`)
	putResp, err := httpClient.PutJSON("https://jsonplaceholder.typicode.com/posts/1", jsonToPut)
	if err != nil {
		fmt.Println("Error:", err)
	} else {
		fmt.Println("PUT Response:", putResp)
	}

	// Example DELETE request
	deleteResp, err := httpClient.Delete("https://jsonplaceholder.typicode.com/posts/1")
	if err != nil {
		fmt.Println("Error:", err)
	} else {
		fmt.Println("DELETE Response:", deleteResp)
	}
}

After running the program (go run .), the PUT response echoes the updated resource and the DELETE response is an empty JSON object ({}).

For more complex scenarios, you can also configure the following (a combined sketch follows this list):

  • Client timeouts
  • Retry logic
  • Custom authorization headers
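Here is a minimal sketch combining all three on top of the client from client.go. The token value and retry count are placeholders, you will need to add "time" to client.go's imports, and the loop retries on both transport errors and non-2xx responses:

func (c *HTTPClient) GetWithRetry(ctx context.Context, urlStr, token string, attempts int) (*http.Response, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		req, err := c.NewRequest(http.MethodGet, urlStr, nil)
		if err != nil {
			return nil, err
		}
		// The token is a placeholder for whatever credential your API expects.
		req.Header.Set("Authorization", "Bearer "+token)

		// c.Do applies the timeout configured on the underlying http.Client.
		resp, err := c.Do(ctx, req, nil)
		if err == nil {
			return resp, nil
		}
		lastErr = err
		time.Sleep(time.Duration(i+1) * time.Second) // simple linear backoff between attempts
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}

You would call it with a relative path, for example GetWithRetry(ctx, "posts/1", token, 3).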

This section has covered how to create and configure an HTTP client and send different types of HTTP requests. Now, you can move on to more advanced REST API interactions.

Interacting with REST API in Go

Now that we understand how to send HTTP requests in Go, let's explore how to interact with a REST API. We will:

  • Create data models to handle API responses

  • Convert received data into structured objects

  • Demonstrate an example of usage

We will start by sending a request to retrieve a list of posts and processing the received response.

Creating Data Models for API Responses

In Go, API responses are typically processed using structs. Defining structs to store data allows us to handle API responses more conveniently and safely.

Here is an example of a Post struct:

package main

type Post struct {
    UserID int    `json:"userId"`
    ID     int    `json:"id"`
    Title  string `json:"title"`
    Body   string `json:"body"`
}

This struct matches the JSON format returned by the API.

The attributes are marked with JSON tags to ensure correct data conversion.
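To see the tags in action without any network calls, here is a small standalone sketch that unmarshals a JSON literal into the struct:

package main

import (
	"encoding/json"
	"fmt"
)

type Post struct {
	UserID int    `json:"userId"`
	ID     int    `json:"id"`
	Title  string `json:"title"`
	Body   string `json:"body"`
}

func main() {
	// The field tags map the lowercase JSON keys onto the exported Go fields.
	raw := []byte(`{"userId":1,"id":1,"title":"foo","body":"bar"}`)

	var p Post
	if err := json.Unmarshal(raw, &p); err != nil {
		fmt.Println("decode error:", err)
		return
	}
	fmt.Printf("%+v\n", p)
}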

Converting API Responses into Structured Data

Now, let's send a GET request to the API and convert the response into a Go struct.

Here is the full main.go implementation:

package main

import (
	"fmt"
	"net/http"
	"time"

	"httpclient/client"
)

type Post struct {
	UserID int    `json:"userId"`
	ID     int    `json:"id"`
	Title  string `json:"title"`
	Body   string `json:"body"`
}

func main() {
	// Initialize HTTP client
	httpClient := client.NewHTTPClient(&http.Client{
		Timeout: 10 * time.Second,
	})

	// Fetch post data
	post, err := httpClient.GetBlogPost(1)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}

	// Print post details
	fmt.Printf("Post ID: %d\n", post.ID)
	fmt.Printf("User ID: %d\n", post.UserID)
	fmt.Printf("Title: %s\n", post.Title)
	fmt.Printf("Body: %s\n", post.Body)
}

Modify the GetBlogPost function in client.go:

func (c *HTTPClient) GetBlogPost(postID int) (*Post, error) {
	resp, err := c.Client.Get(fmt.Sprintf("https://jsonplaceholder.typicode.com/posts/%d", postID))
	if err != nil {
		return nil, fmt.Errorf("error making GET request: %w", err)
	}
	defer resp.Body.Close()

	var post Post
	err = json.NewDecoder(resp.Body).Decode(&post)
	if err != nil {
		return nil, fmt.Errorf("error decoding response body: %w", err)
	}

	return &post, nil
}
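Note that this version references an exported Client field, and the main functions in the following sections call NewHTTPClient() with no arguments. From here on, the snippets assume a simplified client defined alongside the Post struct (the field and constructor shapes are inferred from how the snippets use them, so treat the sketch below as an assumption); encoding/json must also be imported wherever this version of GetBlogPost lives:

package main

import (
	"net/http"
	"time"
)

// Simplified client assumed by the remaining examples: an exported Client
// field and a constructor that takes no arguments.
type HTTPClient struct {
	Client *http.Client
}

func NewHTTPClient() *HTTPClient {
	return &HTTPClient{
		Client: &http.Client{Timeout: 10 * time.Second},
	}
}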

In this example, we:

  1. Initialize the HTTP client

  2. Send a GET request

  3. Retrieve post data

  4. Convert the JSON response into a Post struct

  5. Print the post details

After running the program (go run .), the post's ID, user ID, title, and body are printed to the console.

Handling API Responses in Go

In this chapter, we will explore how to process responses from a REST API in Go.

We will cover topics such as checking HTTP status codes, handling response bodies, and managing and logging HTTP errors.

Checking HTTP Status Codes

An HTTP status code indicates the result of an HTTP request. It helps determine whether an operation was successful or if an error occurred.

Two of the most common HTTP status codes are:

  • 200 (OK) indicates that the request was successful.

  • 404 (Not Found) means the requested resource does not exist.

The main.go file:

package main

import (
	"fmt"
	"net/http"
)

type Post struct {
	UserID int    `json:"userId"`
	ID     int    `json:"id"`
	Title  string `json:"title"`
	Body   string `json:"body"`
}

func main() {
	httpClient := NewHTTPClient()

	// GET request
	response, err := httpClient.Get("https://jsonplaceholder.typicode.com/posts/1")
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	defer response.Body.Close()

	if response.StatusCode != http.StatusOK {
		fmt.Printf("Error: Received non-200 response code: %d\n", response.StatusCode)
		return
	}

	fmt.Printf("Received a successful response. Status code: %d\n", response.StatusCode)
}

In the client.go file, we will define a simple Get() method:

func (c *HTTPClient) Get(url string) (*http.Response, error) {
	resp, err := c.Client.Get(url)
	if err != nil {
		return nil, fmt.Errorf("error making GET request: %w", err)
	}
	return resp, nil
}

In this example, we send a GET request and check the response status code. Depending on whether the request is successful or not, you will see different output messages.


Processing the Response Body (XML)

Once we have checked the HTTP status code, we can move on to processing the response body. Most APIs return data in JSON format, but some may use XML or other formats. Previously, we demonstrated handling JSON responses. Here, we will cover XML processing instead.

Since JSONPlaceholder does not support XML, we will use a different public API in main.go that can work with XML:

package main

import (
	"encoding/xml"
	"fmt"
)

type Post struct {
	UserID int    `json:"userId"`
	ID     int    `json:"id"`
	Title  string `json:"title"`
	Body   string `json:"body"`
}

type Response struct {
    XMLName xml.Name `xml:"objects"`
    Objects []Object `xml:"object"`
}

type Object struct {
    ID        int    `xml:"id"`
    Name      string `xml:"name"`
    Email     string `xml:"email"`
    Avatar    string `xml:"avatar"`
    CreatedAt string `xml:"created-at"`
    UpdatedAt string `xml:"updated-at"`
}

func main() {
    httpClient := NewHTTPClient()

    var response Response

    err := httpClient.GetXML("https://thetestrequest.com/authors.xml", &response)
    if err != nil {
        fmt.Println("Error:", err)
        return
    }

    for _, obj := range response.Objects {
        fmt.Printf("ID: %d, Name: %s, Email: %s, Avatar: %s, CreatedAt: %s, UpdatedAt: %s\n",
            obj.ID, obj.Name, obj.Email, obj.Avatar, obj.CreatedAt, obj.UpdatedAt)
    }
}

In client.go, we’ll define a new function that performs a GET request and decodes the XML response (this requires importing encoding/xml and io there):

func (c *HTTPClient) GetXML(url string, v any) error {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return fmt.Errorf("error creating GET request: %w", err)
	}

	resp, err := c.Client.Do(req)
	if err != nil {
		return fmt.Errorf("error making GET request: %w", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("received non-200 response code: %d", resp.StatusCode)
	}

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return fmt.Errorf("error reading response body: %w", err)
	}

	err = xml.Unmarshal(body, v)
	if err != nil {
		return fmt.Errorf("error unmarshalling XML response: %w", err)
	}

	return nil
}

In this example, we:

  • Read the response body.

  • Convert the XML response into our predefined structure.

  • Print the formatted data to the console for better readability.

After running the code, each author record from the XML feed is printed on its own line.

To learn more about JSON and XML, their key differences, and best use cases, check out our article: "JSON vs. XML: Comparing Popular Data Exchange Formats."

Handling HTTP Errors and Logging

Proper error handling is a critical part of integrating with an API. Let's break it down into three key failure points:

  • Request Sending Errors — Occur due to network issues, incorrect URLs, or an unreachable server.

  • Response Reading Errors — Even a successful 200 OK status does not always guarantee valid data.

  • Data Conversion Errors — A common issue when working with JSON/XML responses.

Proper error handling is important as it prevents application crashes and simplifies debugging when something goes wrong with API communication.
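Before adding logging, here is a small standalone sketch showing where each of these three failure points surfaces when calling JSONPlaceholder directly:

package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	resp, err := http.Get("https://jsonplaceholder.typicode.com/posts/1")
	if err != nil {
		// 1. Request sending errors: DNS failures, refused connections, timeouts.
		var urlErr *url.Error
		if errors.As(err, &urlErr) {
			fmt.Println("transport error:", urlErr)
		}
		return
	}
	defer resp.Body.Close()

	// 2. A response arrived, but the status code may still signal a problem.
	if resp.StatusCode != http.StatusOK {
		fmt.Println("unexpected status:", resp.Status)
		return
	}

	// 3. Data conversion errors: the body may not be the JSON we expect.
	var post struct {
		ID    int    `json:"id"`
		Title string `json:"title"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&post); err != nil {
		fmt.Println("decode error:", err)
		return
	}
	fmt.Printf("post %d: %s\n", post.ID, post.Title)
}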

We will implement error logging using the following code:

package main

import (
	"fmt"
	"log"
	"os"
)

type Post struct {
	UserID int    `json:"userId"`
	ID     int    `json:"id"`
	Title  string `json:"title"`
	Body   string `json:"body"`
}

func main() {
	if err := run(); err != nil {
		log.Printf("Error: %v", err)
		os.Exit(1)
	}
}

func run() error {
	client := NewHTTPClient()

	post, err := client.GetBlogPost(1)
	if err != nil {
		return fmt.Errorf("error occurred while getting post: %w", err)
	}

	fmt.Printf("ID: %d\nUser ID: %d\nTitle: %s\nBody: %s\n", post.ID, post.UserID, post.Title, post.Body)

	return nil
}

In this example, we use the standard log package: log.Printf writes the error message to standard error with a timestamp. The output will be the same as before, since these requests succeed, but you can change the post ID or URL to see the error path.

Automating HTTP Requests

In this chapter, we will explore the possibility of automating the sending of multiple HTTP requests. We will look at different approaches, including using loops, utilizing goroutines for parallel requests, and asynchronous handling of requests and responses.

Using Loops to Send Multiple Requests

To send multiple HTTP requests, we can use loops:

package main

import (
	"fmt"
	"log"
)

type Post struct {
	UserID int    `json:"userId"`
	ID     int    `json:"id"`
	Title  string `json:"title"`
	Body   string `json:"body"`
}

func main() {
	client := NewHTTPClient()

	for i := 1; i <= 5; i++ {
		post, err := client.GetBlogPost(i)
		if err != nil {
			log.Printf("Error getting post %d: %v", i, err)
			continue
		}

		fmt.Printf("Request to post %d returned:\nID: %d \n%s \n\n",
			i, post.ID, post.Title)
	}
}

We use a for loop to request posts with IDs 1 through 5 and print each post's number, ID, and title to the console.

Using Goroutines for Parallel HTTP Requests

Go provides built-in capabilities for parallel task execution through goroutines. This allows sending multiple requests simultaneously, significantly speeding up the program's execution.

package main

import (
	"fmt"
	"log"
	"sync"
)

type Post struct {
	UserID int    `json:"userId"`
	ID     int    `json:"id"`
	Title  string `json:"title"`
	Body   string `json:"body"`
}

// fetchPost handles fetching a post using the GetBlogPost method and outputs the result.
func fetchPost(client *HTTPClient, postID int, wg *sync.WaitGroup) {
	defer wg.Done()

	post, err := client.GetBlogPost(postID)
	if err != nil {
		log.Printf("Error getting post %d: %v", postID, err)
		return
	}

	fmt.Printf("Request to post %d returned:\nID: %d\nUser ID: %d\nTitle: %s\nBody: %s\n\n",
		postID, post.ID, post.UserID, post.Title, post.Body)
}

func main() {
	client := NewHTTPClient()
	var wg sync.WaitGroup

	postIDs := []int{1, 2, 3, 4, 5}

	for _, postID := range postIDs {
		wg.Add(1)
		go fetchPost(client, postID, &wg)
	}

	wg.Wait()
}

In this example, we create the fetchPost function, which sends a request and prints the result, while sync.WaitGroup is used to wait for all goroutines to finish. Run this code and compare the execution speed with the previous solution; because the requests run concurrently, the order of the output may vary between runs.

Example of Asynchronous Request and Response Handling

Asynchronous processing allows sending requests and processing responses as they arrive. Let's look at an example using a channel to transmit results:

package main

import (
	"fmt"
	"log"
	"sync"
)

type Post struct {
	UserID int    `json:"userId"`
	ID     int    `json:"id"`
	Title  string `json:"title"`
	Body   string `json:"body"`
}

type Result struct {
	PostID int
	Post   *Post
	Err    error
}

// fetchPost handles fetching a post through the GetBlogPost method and sends the result to the channel.
func fetchPost(client *HTTPClient, postID int, results chan<- Result, wg *sync.WaitGroup) {
	defer wg.Done()

	post, err := client.GetBlogPost(postID)
	results <- Result{PostID: postID, Post: post, Err: err}
}

func main() {
	client := NewHTTPClient()
	var wg sync.WaitGroup

	postIDs := []int{1, 2, 3, 4, 5}
	results := make(chan Result, len(postIDs))

	// Launch goroutines for parallel request execution
	for _, postID := range postIDs {
		wg.Add(1)
		go fetchPost(client, postID, results, &wg)
	}

	// Function to close the channel after all goroutines finish
	go func() {
		wg.Wait()
		close(results)
	}()

	// Process results as they arrive
	for result := range results {
		if result.Err != nil {
			log.Printf("Error fetching post %d: %v\n", result.PostID, result.Err)
			continue
		}
		fmt.Printf("Request to post %d returned:\nID: %d\nUser ID: %d\nTitle: %s\nBody: %s\n\n",
			result.PostID, result.Post.ID, result.Post.UserID, result.Post.Title, result.Post.Body)
	}
}

In this example, we introduce a Result structure to store each request's result and use the results channel to pass results from the goroutines to the main function. At first glance, the last two approaches look very similar, and to some extent they are, but there are still differences:

  • Result Handling: In the asynchronous approach with channels, results are processed in the main goroutine as they arrive, while in the approach without channels, each goroutine processes (and prints) its own result.
  • Synchronization: Channels provide a built-in, safe way to pass data between goroutines; without them, only sync.WaitGroup coordinates the work and any shared output must be handled carefully.
  • Resource Usage: Collecting results through a channel makes it easy to aggregate or throttle processing in one place, whereas in the first approach the output of parallel goroutines simply interleaves (a concurrency-limiting sketch follows this list).
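If you want to keep parallelism but cap how many requests are in flight at once, one common pattern is a buffered channel used as a semaphore; the limit below is an arbitrary example:

package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	// At most maxParallel requests run at the same time.
	const maxParallel = 2
	sem := make(chan struct{}, maxParallel)
	var wg sync.WaitGroup

	for id := 1; id <= 5; id++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it when done

			url := fmt.Sprintf("https://jsonplaceholder.typicode.com/posts/%d", id)
			resp, err := http.Get(url)
			if err != nil {
				fmt.Println("error:", err)
				return
			}
			resp.Body.Close()
			fmt.Println(url, "->", resp.Status)
		}(id)
	}

	wg.Wait()
}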

Because results are processed as they arrive from the channel, the order of posts may differ each time you rerun the code.

Advanced Features and Tips

The guide above is enough to write your first HTTP client. However, if you plan to go further, it is worth exploring advanced features and development best practices. This chapter covers third-party libraries, debugging and optimization techniques, and security considerations.

Using Third-Party Libraries for Working with APIs

The Go standard library provides basic functionality for working with HTTP requests, but sometimes it's more convenient to use third-party libraries that offer advanced features and simplify the process. One such library is go-resty.

To install the library, use the following command:

go get -u github.com/go-resty/resty/v2

Some of the advantages of go-resty include:

  • Automatic serialization and deserialization (converting Go structs to JSON and back).
  • Session management (cookie support) and retries for failed requests.
  • Asynchronous requests.
  • Flexible configuration of timeouts, headers, parameters, and other options.
  • Built-in debugging features, including logging.
  • Testing tools such as mocking.

Here is an example for sending GET and POST requests using the go-resty library:

package main

import (
	"fmt"
	"log"

	"github.com/go-resty/resty/v2"
)

func main() {
	client := resty.New()

	// GET request
	resp, err := client.R().
		SetQueryParam("userId", "1").
		Get("https://jsonplaceholder.typicode.com/posts")
	if err != nil {
		log.Fatalf("Error on GET request: %v", err)
	}
	fmt.Println("GET Response Info:")
	fmt.Println("Status Code:", resp.StatusCode())
	fmt.Println("Body:", resp.String())

	// POST request
	post := map[string]any{
		"userId": 1,
		"title":  "foo",
		"body":   "bar",
	}
	resp, err = client.R().
		SetHeader("Content-Type", "application/json").
		SetBody(post).
		Post("https://jsonplaceholder.typicode.com/posts")
	if err != nil {
		log.Fatalf("Error on POST request: %v", err)
	}
	fmt.Println("POST Response Info:")
	fmt.Println("Status Code:", resp.StatusCode())
	fmt.Println("Body:", resp.String())
}

The library significantly simplifies working with HTTP requests and provides many useful features. Debugging and optimization are crucial aspects of development, so let's look at some examples.

Logging Requests and Responses

For debugging purposes, it's helpful to log requests and responses. We can do this using the library we installed earlier:

client := resty.New().
    SetDebug(true)

Also, use http.Transport to manage the number of open connections:

client := resty.New()
transport := &http.Transport{
    MaxIdleConns:       10,
    IdleConnTimeout:    30 * time.Second,
    DisableKeepAlives:  false,
}

client.SetTransport(transport)
client.SetTimeout(10 * time.Second)

Best Practices for Developing Secure and Reliable HTTP Clients

An example of a secure and reliable HTTP client using go-resty:

  • Error Handling: Resty returns transport errors from each call and exposes the status code via resp.StatusCode(), which simplifies response checks.
  • Using TLS: Resty lets you supply a custom transport to enable and tune TLS.
  • Secure storage and transmission of authentication tokens, illustrated in the example below:

package main

import (
  "crypto/tls"
  "fmt"
  "log"
  "net/http"

  "github.com/go-resty/resty/v2"
)

func main() {
  // Create client with configured TLS
  client := resty.New()

  // Configure security transport layer
  client.SetTransport(&http.Transport{
    // Using standard TLS configuration
    TLSClientConfig: &tls.Config{
      // Additional configuration parameters can be set here
      MinVersion: tls.VersionTLS12, // Example: minimum TLS version 1.2
    },
  })

  token := "your_auth_token_here"

  // Sending GET request with error handling and TLS verification
  resp, err := client.R().
    SetHeader("Authorization", "Bearer "+token).
    Get("https://jsonplaceholder.typicode.com/posts/1")
  if err != nil {
    log.Fatalf("Error: %v", err)
  }

  if resp.StatusCode() != http.StatusOK {
    log.Fatalf("Non-200 response: %d", resp.StatusCode())
  }

  // Handle response body
  fmt.Printf("Response: %s\n", resp.String())
}

Using the SetHeader method to set the "Authorization" header with a bearer token is a standard and secure practice, provided other security aspects are followed:

  • Proper and secure storage of tokens. On the client side, this could be a secure container protected from unauthorized access.
  • Transmitting tokens through secure channels, such as HTTPS.
  • Minimizing token lifespan and regularly updating tokens. Using time-limited tokens and periodic rotation increases overall security.

Additional recommendations for reliable HTTP clients (a combined sketch follows this list):

  • Timeouts:

client.SetTimeout(15 * time.Second)
  • Retries (configured on the client in go-resty v2):

client.SetRetryCount(3)
  • Logging Requests and Responses:

client.SetDebug(true)
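Putting these together, a combined sketch of one resty client with a timeout, retries, and debug logging might look like this; the values are illustrative:

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/go-resty/resty/v2"
)

func main() {
	// One client configured with a timeout, retries, and debug logging.
	client := resty.New().
		SetTimeout(15 * time.Second).
		SetRetryCount(3).
		SetRetryWaitTime(2 * time.Second).
		SetDebug(true)

	resp, err := client.R().Get("https://jsonplaceholder.typicode.com/posts/1")
	if err != nil {
		log.Fatalf("Error: %v", err)
	}
	fmt.Println("Status:", resp.StatusCode())
}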

Using go-resty significantly simplifies the process of creating an HTTP client in Go. The library provides extensive capabilities and features for flexible configuration according to your needs. Additionally, go-resty allows you to handle more complex requests, such as file uploads, multipart forms, or custom requests, and it automatically manages headers with minimal code and effort.

Conclusion

Developing HTTP clients in Go is an essential skill for any developer working with web services and APIs. In this article, we covered the key aspects of creating an HTTP client, from setting up the environment and sending basic requests to handling responses, automating requests, and hardening the client with timeouts, retries, and TLS.

For further study and a deeper understanding of the topic, we recommend the official documentation for the net/http package and the go-resty project.

Microservices
13.03.2025
Reading time: 27 min

Similar

Microservices

Designing the Ideal Microservice: Part 2

This is the second part of the article "Designing the Ideal Microservice." In the first part, we discussed technical aspects related to design principles, communication between microservices, and key patterns. In this part, we will explore security, automation, and monitoring and discuss the future of microservices and emerging trends. Security and Reliability of Microservices Security and reliability are critical aspects of microservices design. In this chapter, we examine the main approaches and best practices that help secure microservices against threats and enhance their resilience to failures. Authentication and Authorization To ensure microservices security, you should implement reliable authentication and authorization mechanisms: OAuth2 and OpenID Connect: Standards for authentication and authorization that provide secure access to resources. Advantages: Support for various scenarios, widespread use, and strong library and framework support. Disadvantages: Complexity in configuration and token management. JSON Web Tokens (JWT): A token format used for transferring information between parties as JSON. Advantages: Compact, flexible data inclusion and broad support. Disadvantages: Limited token lifespan and the need for secure storage and transmission. API Gateway: A centralized node through which all requests to microservices pass, allowing it to handle authentication and authorization. Advantages: Centralized security management, reducing complexity for microservices. Disadvantages: Potential bottleneck, complex configuration, and management. Data Encryption Data encryption is crucial for protecting information in a microservices architecture. Data Encryption in Transit: Using HTTPS and TLS protocols to secure data transmission between microservices. Advantages: Protection against interception and man-in-the-middle attacks. Disadvantages: Slightly increased latency and resource consumption. Data Encryption at Rest: Encrypting data stored in databases and other storage solutions. Advantages: Protection against unauthorized access in case of storage compromise. Disadvantages: Increased complexity in key management and cryptographic operations. Threat Detection and Prevention To protect microservices from internal and external threats, monitoring and incident response mechanisms are essential: Web Application Firewalls (WAF): Filters and monitors HTTP requests to web applications. Advantages: Protection against threats like SQL injection and XSS. Disadvantages: Potential false positives and complex configuration. Intrusion Detection and Prevention Systems (IDS/IPS): Monitor and analyze traffic to detect and prevent suspicious activity. Advantages: Anomaly and attack detection, protecting networks and applications. Disadvantages: High resource requirements and constant tuning and updating needs. Reliability and Fault Tolerance Ensuring reliability and fault tolerance is critical for the successful operation of microservices. Here are the main approaches and patterns: Circuit Breaker Pattern: Protects against cascading failures by blocking calls to an unstable microservice. Advantages: Increases system resilience and prevents overloads. Disadvantages: Complex state management and the need for monitoring. Retry Pattern: Repeats failed requests after a certain interval. Advantages: Improves reliability by automatically recovering from temporary failures. Disadvantages: Can increase system load and requires careful timeout and interval management. 
Bulkhead Pattern: Isolates system components to prevent cascading failures. Advantages: Fault isolation and improved system resilience. Disadvantages: Increased management complexity and potential resource duplication. Service Mesh: An infrastructure layer that manages communication between microservices, including load balancing, routing, and monitoring. Advantages: Improved observability, automated communication management, enhanced security. Disadvantages: Complex configuration and maintenance, and additional resource requirements. Monitoring and Logging Monitoring and logging enable timely issue detection and response, as well as performance analysis and microservices behavior insights. Centralized Logging: Collects and stores logs from all microservices in one place. Advantages: Simplifies error analysis and event correlation. Disadvantages: Requires effective log volume and logging system performance management. Metrics Monitoring: Collects and analyzes performance and health metrics of microservices. Advantages: Early issue detection and resolution, performance improvement. Disadvantages: Requires setup and management of the monitoring system. Alerting: Configures notifications for rapid incident and anomaly response. Advantages: Quick issue detection and resolution, minimizing downtime. Disadvantages: Potential for false alarms and the need for precise threshold configuration. Ensuring the security and reliability of microservices requires a comprehensive approach, including authentication and authorization, data encryption, threat detection and prevention, and fault tolerance patterns and monitoring. Testing and Debugging Microservices Testing and debugging microservices are crucial aspects of their successful development and operation. Microservice architecture offers numerous advantages, such as modularity and component independence, but also presents challenges in testing and debugging due to the distributed nature of the system. In this chapter, we will explore the main approaches and tools for testing and debugging microservices. Types of Testing Effective microservice testing requires various types of tests to validate functionality, performance, and reliability. Unit Testing: Testing individual modules or functions of a microservice in isolation from other components. Advantages: Fast execution, ease of debugging, early detection of errors during development. Disadvantages: Limited verification of interactions between components. Integration Testing: Testing the interaction between microservices and other system components. Advantages: Detection of integration issues and verification of interaction correctness. Disadvantages: More complex and time-consuming execution, requiring test environment setup. Contract Testing: Testing microservice APIs against predefined contracts between services. Advantages: Ensures consistency between services and prevents issues during updates. Disadvantages: Requires prior contract definition and can face discrepancies with changes. End-to-End (E2E) Testing: Testing the entire system, including all microservices and external components. Advantages: Comprehensive system validation and detection of complex bugs. Disadvantages: Long test execution times and complex test environment setup and maintenance. Load Testing: Testing system performance under high load conditions. Advantages: Identifies bottlenecks and verifies resilience under high loads. Disadvantages: Requires significant resources and complex result analysis. 
Tools for Testing Microservices A wide range of tools is used for testing microservices, each addressing specific tasks. JUnit: A framework for unit testing in Java. Advantages: Widely used with integration support for various tools. Disadvantages: Limited to Java usage. Mockito: A library for creating mock objects in tests. Advantages: Easy to use with support for different testing frameworks. Disadvantages: Limited to Java usage. Postman: A tool for API testing that allows sending requests and validating responses. Advantages: User-friendly interface and automation support. Disadvantages: Primarily for API testing with limited integration testing capabilities. JMeter: A tool for load testing. Advantages: Extensive testing scenario creation and protocol support. Disadvantages: Complex setup and result analysis. Docker: A platform for application containerization used to create test environments. Advantages: Easy creation of isolated environments and CI/CD support. Disadvantages: Requires learning and configuring Docker. Debugging Microservices Debugging microservices can be challenging due to their distributed nature and multiple component interactions. Here are the main approaches and tools for debugging microservices: Logging: Recording events and errors in logs for later analysis. Advantages: Detailed analysis of microservices behavior and error detection. Disadvantages: Log volume management and complex data analysis. Distributed Tracing: Tracking requests across all microservices involved in processing. Advantages: Visibility of the entire request processing chain. Disadvantages: Complex setup and integration, requiring additional monitoring. Profiling Tools: Analyzing microservice performance, identifying bottlenecks, and optimizing code. Advantages: Detailed performance analysis and optimization potential. Disadvantages: High resource requirements and extensive data analysis. Using API Gateway Services: Monitoring and managing traffic between microservices via API Gateway. Advantages: Centralized management and monitoring with traffic analysis. Disadvantages: Potential bottleneck and complex setup and maintenance. CI/CD and Test Automation Test automation and CI/CD (Continuous Integration/Continuous Deployment) processes increase the efficiency of microservice development and deployment. Jenkins: A tool for CI/CD automation. Advantages: Extensive configuration options and plugin support. Disadvantages: Complex setup and management, requiring additional resources. GitLab CI/CD: An automation tool integrated with the GitLab version control system. Advantages: GitLab integration, easy setup, and powerful automation features. Disadvantages: Limited usage with other version control systems. Travis CI: A cloud service for CI/CD automation. Advantages: Easy to use with GitHub integration. Disadvantages: Limited free builds and dependency on cloud infrastructure. CircleCI: A cloud service for CI/CD automation. Advantages: Fast build execution and integration with various version control systems. Disadvantages: Limitations in the free version and cloud infrastructure dependency. Testing and debugging microservices require careful preparation and the use of different approaches and tools. Test automation and the implementation of CI/CD processes enhance microservice quality and reliability while reducing development and deployment time. In the next chapter, we will explore the management and monitoring aspects of microservices, which also play a key role in their successful operation. 
Management and Monitoring of Microservices Managing and monitoring microservices are essential aspects that ensure their stable operation. In this chapter, we will take a closer look at the key approaches and tools for managing and monitoring microservices, which help maintain high performance and reliability of the system. Managing Microservices Managing microservices requires various methodologies and tools to automate and optimize deployment, configuration, and administration processes. Container Orchestration Kubernetes: A platform for automating deployment, scaling, and managing containerized applications. Advantages: Automation of container management, high scalability, and support for various infrastructures. Disadvantages: Complex setup and management with a steep learning curve. Docker Swarm: A container orchestration tool integrated with Docker. Advantages: Ease of setup and integration with Docker. Disadvantages: Limited functionality compared to Kubernetes and lower popularity. Configuration Management Systems Ansible: A tool for automating configuration management and application deployment. Advantages: Ease of use and flexibility for various automation tasks. Disadvantages: Limited capabilities for complex orchestration scenarios. Chef: A platform for automating configuration management, deployment, and infrastructure management. Advantages: Extensive automation capabilities and support for complex scenarios. Disadvantages: Complex setup and usage with a steep learning curve. Puppet: A tool for configuration management and infrastructure automation. Advantages: High scalability and support for various platforms. Disadvantages: Complex setup and administration requiring specialized training. Monitoring Microservices Monitoring microservices is crucial for tracking their status, performance, and identifying issues at early stages. Here are the main tools and approaches for monitoring microservices. Monitoring Systems Prometheus: A monitoring and alerting system with time-series support. Advantages: High performance, flexible configuration, and integration with Kubernetes. Disadvantages: Complex setup and management, requiring additional components for full functionality (e.g., Grafana for visualization). Grafana: A platform for metrics visualization and monitoring. Advantages: Advanced visualization options and integration with various data sources. Disadvantages: Requires configuration and integration with monitoring systems. ELK Stack (Elasticsearch, Logstash, Kibana): A suite of tools for collecting, analyzing, and visualizing logs and metrics. Advantages: Comprehensive data analysis and visualization, supporting scalability. Disadvantages: Complex setup and management with high resource requirements. Incident Detection and Response PagerDuty: A platform for incident management and alerting. Advantages: Efficient incident management and integration with various monitoring systems. Disadvantages: High cost with the need for configuration and integration. Opsgenie: A service for incident management and alerting. Advantages: Flexible alert configuration and integration with multiple monitoring tools. Disadvantages: Requires training and configuration. Distributed Tracing Jaeger: A distributed tracing platform designed to track requests across microservices. Advantages: In-depth performance analysis and microservices dependency mapping, integrating Kubernetes and other tools. Disadvantages: Complex setup and management requiring specialized training. 
Zipkin: A distributed tracing tool for collecting latency data in distributed systems. Advantages: Easy integration and support for various programming languages. Disadvantages: Limited functionality compared to Jaeger with a need for configuration and management. Metrics and Alerting Effective microservices monitoring requires defining and tracking key metrics and configuring alerting for quick incident response. Performance Metrics CPU and Memory Usage: Resource consumption by microservices. Response Time: Latency between request and response. Throughput: Number of processed requests per unit time. Reliability Metrics Uptime: The time during which the microservice is available. Failure Rate: Number of microservice failures over a specified period. Mean Time to Recovery (MTTR): Average time required to restore a microservice after failure. Alert Configuration Threshold Values: Setting thresholds for key metrics that trigger alerts when exceeded. Notification Integration: Setting up notifications via email, SMS, messaging apps, or incident management systems. Alert Management: Grouping and prioritizing alerts to minimize false positives and ensure rapid response to critical incidents. Configuration and Secrets Management Managing microservice configurations and secrets requires secure and flexible tools and approaches. HashiCorp Vault: A tool for secrets management and data protection. Advantages: High security and support for various types of secrets. Disadvantages: Complex setup and management requiring specialized training. Kubernetes Secrets: A mechanism for managing secrets in Kubernetes. Advantages: Seamless integration with Kubernetes and ease of use. Disadvantages: Limited functionality compared to specialized tools. Configuration Files and Management Systems: Using configuration files and management systems for centralized microservices configuration. Advantages: Flexibility and scalability with compatibility with various tools. Disadvantages: Requires versioning and security management for configuration files. Effective management and monitoring of microservices are key to their successful operation. Organizations can leverage modern tools and approaches to maintain high performance and reliability, respond promptly to incidents, and optimize configuration and secrets management. Future Trends and Development of Microservice Architecture Microservice architecture continues to evolve, offering new opportunities and addressing emerging challenges. In this chapter, we will explore the key trends and future prospects of microservices that will help you stay at the forefront of technology and adapt to changing business requirements. Serverless and Functions as a Service (FaaS) Serverless and FaaS represent the next step in the evolution of microservice architecture, offering greater abstraction from infrastructure and simplifying development. Serverless: A computing model where developers write code and cloud providers manage servers and infrastructure. Advantages: Reduced infrastructure management costs, on-demand scalability, and pay-as-you-go pricing. Disadvantages: Execution time limitations for functions and dependency on cloud providers. FaaS: A subset of serverless, where code is deployed as individual functions that run in response to events. Advantages: Rapid deployment, automatic scaling, and high flexibility. Disadvantages: Limited capabilities for complex tasks and potential issues with cold starts. 
Multi-Cloud and Hybrid Architectures Multi-cloud and hybrid architectures are becoming increasingly popular as companies seek to avoid vendor lock-in and leverage the benefits of different cloud platforms. Multi-Cloud Strategies: Using multiple cloud providers to host different parts of the infrastructure. Advantages: Resilience to outages from a single provider, cost optimization, and flexibility in choosing the best services. Disadvantages: Complexity in management and integration, and the need for compatibility compliance. Hybrid Architectures: Integrating on-premises infrastructure with public cloud services. Advantages: Resource management flexibility, enhanced security, and compliance with regulatory requirements. Disadvantages: Complex integration requiring additional management tools. Service Mesh Service Mesh is becoming an essential part of the microservice infrastructure, providing traffic management, security, and observability at the network interaction level. Istio: One of the most popular platforms for building service meshes. Advantages: Flexible configuration, comprehensive monitoring and security features, and integration with Kubernetes. Disadvantages: Complex setup and management with high resource requirements. Linkerd: A lightweight alternative to Istio for building service meshes. Advantages: Ease of use, high performance, and Kubernetes integration. Disadvantages: Fewer features compared to Istio and less configuration flexibility. DevOps and GitOps DevOps and GitOps continue to evolve, offering new approaches to managing and automating microservice development and operations processes. DevOps: A professional approach to software development that brings together developers and operations teams for improved collaboration and process automation. Advantages: Enhanced development quality and speed, reduced time-to-market, and CI/CD automation. Disadvantages: Requires changes in organizational culture and training in new tools. GitOps: An approach to managing infrastructure and applications using Git as the single source of truth. Advantages: Increased transparency and repeatability, simplified configuration management, and automated deployments. Disadvantages: Dependency on Git and the need to configure and manage GitOps tools. Artificial Intelligence and Machine Learning (AI/ML) Integrating AI/ML into microservice architecture opens up new opportunities for automation and process optimization. AI/ML for Monitoring and Failure Prediction: Using machine learning algorithms to analyze metrics and logs to predict failures and automate responses. Advantages: Increased system resilience and proactive incident management. Disadvantages: Complex model setup and the need for large datasets for training. Automated CI/CD with AI/ML: Applying AI/ML to optimize deployment and testing processes. Advantages: Faster development cycles, improved testing quality, and automated error detection and correction. Disadvantages: Complex integration and the need for specialized expertise. Future of Microservice Architecture The future of microservice architecture is tied to continuous evolution and adaptation to new challenges and requirements. Implementing new technologies, improving automation, and integrating with various platforms and services will enable the creation of more flexible, scalable, and reliable systems capable of efficiently solving modern business challenges. 
Conclusion Designing the perfect microservice is a multifaceted process that requires a careful approach to every aspect of development, deployment, and operation. Throughout this article, we have explored the key principles and practices that help build high-quality, flexible, and scalable microservices. For successful design and operation of microservices, consider the following recommendations: Focus on Business Value: Develop microservices that clearly align with business objectives and help achieve specific outcomes. Invest in Automation: Automating deployment, testing, and monitoring processes significantly reduces time and costs for microservice management. Ensure Security: Data security and secure communication between microservices should be prioritized throughout development and operation. Continuous Learning and Adaptation: Technologies and practices are constantly evolving. Stay updated with new trends and apply them to enhance your architecture. Maintain Documentation: Good documentation simplifies maintenance, improves team collaboration, and enhances the quality of the final project.
19 February 2025 · 18 min to read
Microservices

Designing the Ideal Microservice: Part 1

Microservices, as one of the key elements of modern software development, offer flexible mechanisms for building resilient, reliable, and scalable systems. Unlike traditional monolithic applications, microservice architecture segments the system into a collection of small, independent services. Each is designed to perform a specific task and can be developed, deployed, and scaled independently of other services. A microservice is a standalone, modular, and easily deployable software component that performs a clearly defined business function. These services interact through defined interfaces, often using lightweight protocols such as HTTP/REST or messaging systems.

The shift to microservice architecture is driven by various factors related to the diversification of business processes and technological evolution:

Scalability: By breaking down the system into microservices, individual components can be scaled separately, simplifying resource management and ensuring efficient load distribution.
Development Flexibility: Different teams can work on isolated services in parallel, speeding up the implementation of new features and significantly reducing the development cycle.
System Resilience: A failure in one microservice does not lead to a complete system outage, greatly enhancing reliability and stability.
Independent Deployment: The ability to deploy and update microservices individually reduces downtime and accelerates the release of updates.
Technological Independence: Each microservice can be developed using different programming languages and technologies, allowing teams to choose the best tools for solving specific tasks.

Microservice architecture offers numerous advantages:

Improved Development Organization: Microservices encourage forming small, autonomous teams, each responsible for specific services, fostering organization and accountability.
Simplified Maintenance and Evolution: The limited codebase of each microservice simplifies maintenance and testing, reducing technical debt.
Faster Change Implementation: The independence of microservices facilitates the rapid deployment of new features and fixes without needing a full application rollout.
Increased Reliability: Error localization and the ability to roll back changes at the individual service level enhance overall system reliability.

In summary, this approach to software development can improve system flexibility, scalability, and reliability. In the following chapters, we will delve deeper into microservice design principles, communication patterns, state and data management, as well as security, reliability, and testing aspects.

Microservice Design Principles

Designing the ideal microservice requires understanding and applying key principles that ensure the system's resilience, scalability, and ease of maintenance. This chapter explores the fundamental principles on which microservice architecture is built.

Separation of Concerns

Each microservice is responsible solely for its own functionality and ensures its execution. This simplifies development, testing, and maintenance while enhancing flexibility and resistance to change. Applying this principle promotes the creation of small, autonomous components that can be easily replaced or updated without impacting the overall application.

Independent Deployment and Scalability

Microservices should be designed to be deployed and scaled independently. This enables efficient resource management and enhances system resilience. Independent deployment also accelerates the release of new features and bug fixes, minimizing downtime.

Loose Coupling

Microservices should interact with each other as minimally as possible so that changes in one do not require modifications or interventions in others. This can be achieved through well-defined interfaces and communication standards such as HTTP/REST or gRPC. Low coupling simplifies the development, testing, and deployment processes, enhancing the system's flexibility, resilience, and adaptability.

Fault Tolerance

Microservice architecture should include mechanisms that ensure system resilience during failures. Each microservice must be able to handle errors and recover without impacting overall functionality. This is achieved through patterns like Circuit Breaker, Retry, Bulkhead, and Timeouts, which help minimize failure consequences and maintain system continuity even during partial component outages.

Decentralized Data Management

Each microservice should autonomously manage its data, avoiding the use of a shared database across all services. This prevents bottlenecks and scaling issues, simplifies data management, and improves consistency. This approach supports independent deployment and updates of services.

Automation and Continuous Integration (CI/CD)

Successful microservice design and deployment require automation and continuous integration processes. This includes automated testing, building, deployment, and monitoring of microservices. Utilizing CI/CD pipelines enables rapid delivery of updates and bug fixes, minimizing human error and enhancing system reliability.

Backward Compatibility

Changes in microservices should maintain backward compatibility to avoid disrupting other autonomous components of the system. This is achieved through API versioning and careful dependency management between services. Ensuring backward compatibility allows microservices to be updated without simultaneously updating all dependent components.

Contract-Driven Development

A contract-driven approach allows us to minimize dependencies and ensure microservice independence. It means that the interfaces and interaction contracts between microservices are defined in advance and form the basis of their implementation. This approach ensures clear communication and simplifies testing and integration within the overall system.

Applying these principles enables the creation of resilient, flexible, and easily scalable microservices that effectively address business needs and adapt smoothly to change. In the following chapters, we will delve into practical applications of these principles and explore tools and technologies that can help implement them.
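The Fault Tolerance principle above mentions Retry and Timeouts among the mechanisms that keep a call to an unhealthy dependency from stalling the whole system. As a rough illustration (an addition to this text, not part of the original article), here is a minimal Go sketch that wraps an HTTP call in a per-attempt timeout and a small retry loop with linear backoff; the service URL, attempt count, and timeout values are placeholders.

package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// callWithRetry issues up to `attempts` GET requests to url, each limited by
// perTryTimeout, and returns the first successful HTTP status code.
func callWithRetry(url string, attempts int, perTryTimeout time.Duration) (int, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		status, err := func() (int, error) {
			// Per-attempt timeout so one slow call cannot block the caller forever.
			ctx, cancel := context.WithTimeout(context.Background(), perTryTimeout)
			defer cancel()

			req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
			if err != nil {
				return 0, err
			}
			resp, err := http.DefaultClient.Do(req)
			if err != nil {
				return 0, err
			}
			defer resp.Body.Close()
			io.Copy(io.Discard, resp.Body) // drain the body so the connection can be reused

			if resp.StatusCode >= 500 {
				return resp.StatusCode, fmt.Errorf("server error: %s", resp.Status)
			}
			return resp.StatusCode, nil
		}()
		if err == nil {
			return status, nil
		}
		lastErr = err
		// Simple linear backoff between attempts.
		time.Sleep(time.Duration(i+1) * 200 * time.Millisecond)
	}
	return 0, fmt.Errorf("all %d attempts failed, last error: %w", attempts, lastErr)
}

func main() {
	// Hypothetical downstream microservice endpoint.
	status, err := callWithRetry("http://inventory-service:8080/health", 3, 2*time.Second)
	if err != nil {
		fmt.Println("giving up:", err)
		return
	}
	fmt.Println("downstream responded with status", status)
}

In a production service you would typically also cap the total elapsed time and avoid retrying non-idempotent requests.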
Defining Microservice Boundaries

One of the key aspects of designing microservices is correctly defining their boundaries. These boundaries determine a microservice's functionality and how it will interact with other services. In this chapter, we explore the main methods and approaches for defining these boundaries.

Business Logic and Domain Models

Defining microservice boundaries should begin with analyzing business logic and domain models. Business logic describes the system's tasks and functions, while domain models represent the main entities and their interactions within business processes. The separation of microservices should reflect the structure and logic of business processes, ensuring clarity and consistency in implementation.

Business Process Analysis: Understand the key business processes and tasks the system must perform.
Domain Segmentation: Divide the business logic into distinct domain areas, each responsible for a specific part of the functionality.
Identifying Key Entities: Determine the main entities and their relationships within each domain area.

Event storming is a powerful method for visualizing and analyzing business processes by identifying events occurring in the system and determining their impact on different domain areas. This approach helps to better understand how systems interact with each other and to define natural microservice boundaries.

Event Identification: Identify all significant events occurring in business processes.
Defining Commands: Determine the commands that trigger these events.
Event-Command Relationships: Visualize how events and commands relate to each other and how they affect different parts of the system.
Grouping Events and Commands: Organize events and commands into logically related groups representing individual microservices.

Examples of Microservice Segmentation

Let's look at several examples of successful microservice segmentation in real-world projects to better understand how to apply these principles in practice.

Example 1. E-commerce Platform

Product Catalog: A microservice responsible for managing product listings, descriptions, and categories.
Shopping Cart: A microservice handling the user's shopping cart management.
Orders: A microservice responsible for order creation, processing, and tracking.
Payments: A microservice managing user payments.
Users: A microservice managing user accounts and profiles.

Example 2. Project Management System

Projects: A microservice for creating and managing projects.
Tasks: A microservice for creating, assigning, and tracking tasks within projects.
Users: A microservice for managing users and their roles in projects.
Reports: A microservice responsible for generating reports on projects and tasks.

Example 3. Social Network

User Profiles: A microservice managing user profiles and personal information.
News Feed: A microservice for generating and displaying users' news feeds.
Messaging: A microservice for sending and receiving messages between users.
Notifications: A microservice for sending notifications to users about various events.

To successfully define microservice boundaries, use these key evaluation criteria:

Data Cohesion: The data managed by a microservice should be logically consistent and belong to a single domain area.
Independent Deployment: A microservice should be deployable and updatable independently of other microservices.
Load Handling and Scalability: A microservice should be able to scale and handle the load associated with its functions.
Dependency Management: Interactions between microservices should be minimal and clearly defined through APIs or messaging.
Technological Compatibility: A microservice can use different technologies suitable for its tasks but should still easily integrate into the overall system.

Correctly defining microservice boundaries is a crucial step in design that significantly impacts their performance, scalability, and ease of maintenance.

Communication Between Microservices

Communication between microservices is a key aspect of their successful operation. Choosing the right communication patterns and protocols impacts the overall system's performance and seamless functioning. In this chapter, we explore how microservices communicate with each other and look at the main approaches to communication between microservices, along with their advantages and disadvantages.
Types of Communication

Communication between microservices can be either synchronous or asynchronous, depending on system requirements and the nature of interactions.

Synchronous Communication: One microservice sends a request to another and waits for a response. This is commonly implemented using HTTP/REST or gRPC.
Advantages: Simplicity of implementation and predictable interactions.
Disadvantages: High interdependence between microservices, potential delays, and decreased performance due to waiting for responses.

Asynchronous Communication: One microservice sends a message to another without waiting for an immediate response. This is typically achieved using messaging systems such as RabbitMQ, Kafka, or AMQP.
Advantages: High independence, improved scalability, and increased resilience.
Disadvantages: Complexity of implementation and debugging, as well as the need to ensure data consistency.

Protocols and Technologies

Choosing the right protocols and technologies for microservice communication depends on various factors, including performance requirements, reliability, and scalability.

HTTP/REST: The most widely used protocol for synchronous communication, leveraging REST principles for building APIs.
Advantages: Simplicity, broad support, and suitability for web services.
Disadvantages: Network dependency and potential scalability issues under heavy load.

gRPC: A high-performance framework developed by Google, utilizing Protocol Buffers for data serialization.
Advantages: High performance, support for streaming, and strong typing.
Disadvantages: More complex setup and debugging compared to REST.

GraphQL: A query language for APIs that allows clients to request only the data they need.
Advantages: Flexible queries and the ability to fetch data from multiple sources in a single request.
Disadvantages: Added complexity in implementation and maintenance.

Message Queues (RabbitMQ, Kafka): Messaging systems enabling asynchronous communication between microservices.
Advantages: High independence of microservices, scalability, and fault tolerance.
Disadvantages: Complex setup and maintenance, along with the need to handle duplicate messages.

Communication Patterns

Various communication patterns are used to coordinate interactions between microservices, each addressing specific challenges with its own set of advantages.

API Gateway: A central entry point through which all requests to microservices are routed.
Advantages: Simplified request management, centralized security and authentication, and reduced complexity for clients.
Disadvantages: Potential bottleneck and management complexity in large systems.

Service Mesh: An infrastructure layer managing communication between microservices, providing features such as service discovery, load balancing, encryption, and traffic management.
Advantages: Enhanced observability, security, and automated communication management.
Disadvantages: Complex setup and maintenance, as well as additional resource requirements.

Event-Driven Architecture: An architectural style where microservices exchange events to coordinate actions.
Advantages: Scalability and support for asynchronous interactions.
Disadvantages: Complexity in debugging and the need to ensure data consistency.

Backends for Frontends (BFF): A pattern where a separate backend is created for each client type (e.g., mobile app, web app), aggregating data from various microservices.
Advantages: Optimized interactions for different clients and reduced complexity on the client side.
Disadvantages: Increased number of backends, potentially complicating maintenance.

Communication between microservices is fundamental to their successful operation. Selecting the right protocols and interaction patterns ensures the system's performance, reliability, and scalability.
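To make the synchronous HTTP/REST interaction described in this chapter concrete, below is a small self-contained Go sketch (our illustration, not from the original article): an "orders" service exposes a REST endpoint, and a client calls it synchronously and decodes the JSON response. The port, route, and Order fields are invented for the example.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

// Order is the contract shared by the two services in this example.
type Order struct {
	ID     int    `json:"id"`
	Status string `json:"status"`
}

func main() {
	// "Orders" microservice: a minimal REST endpoint.
	http.HandleFunc("/orders/42", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(Order{ID: 42, Status: "shipped"})
	})
	go func() {
		log.Fatal(http.ListenAndServe(":8081", nil))
	}()
	time.Sleep(200 * time.Millisecond) // give the server a moment to start

	// "Client" microservice: synchronous call, waits for the response.
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:8081/orders/42")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var order Order
	if err := json.NewDecoder(resp.Body).Decode(&order); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("order %d has status %q\n", order.ID, order.Status)
}

The client blocks until the response arrives (or the two-second timeout fires), which is exactly the coupling trade-off discussed above for synchronous communication.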
Microservices Design Patterns

Effective microservices design requires various patterns and templates that help solve common issues and enhance system performance. In this chapter, we explore the main design patterns for microservices.

Database per Service Pattern

Each microservice has its own dedicated database, ensuring data independence and minimizing dependencies between components.
Advantages:
Data independence and isolation
Simplified scaling and deployment of microservices
Enhanced performance by optimizing for the specific needs of each microservice
Disadvantages:
Complexity in synchronizing data across microservices
Potential for data duplication

Aggregator Pattern

An aggregator microservice collects data from multiple microservices and provides a consolidated response to the client.
Advantages:
Simplified client interactions with the system
Reduced number of requests from client to server
Disadvantages:
Potential bottleneck and single point of failure
Increased complexity of the aggregator's logic

Producer-Consumer Pattern

One microservice (producer) generates data or events, which are then processed by another microservice (consumer).
Advantages:
Asynchronous data processing
Improved scalability and fault tolerance
Disadvantages:
Challenges in ensuring data consistency
Need for effective message queue management

Saga Pattern

Saga is a pattern for managing distributed transactions by breaking them down into a sequence of local transactions, each completing a part of the overall task.
Advantages:
Ensures data consistency without the need for distributed transactions
Enhanced fault tolerance
Disadvantages:
Complex implementation and management
Possibility of partial data processing in case of failures

Circuit Breaker Pattern

A pattern designed to protect the system from temporary failures and overloads by blocking calls to an unstable microservice until it recovers.
Advantages:
Protection against cascading failures
Improved fault resilience
Disadvantages:
Need for state management of the Circuit Breaker
Potential temporary service unavailability

Sidecar Pattern

This pattern involves offloading additional functionalities such as logging, monitoring, or security into a separate container (sidecar) that operates alongside the primary microservice.
Advantages:
Simplified core logic of the microservice
Enhanced modularity and code reuse
Disadvantages:
Increased deployment and container management complexity
Need for synchronization between the main microservice and the sidecar

API Gateway Pattern

A centralized entry point through which all requests to microservices are routed. An API Gateway can perform routing, authentication, data aggregation, and monitoring functions.
Advantages:
Centralized security and authentication management
Simplified client interaction with microservices
Potential for caching and load balancing
Disadvantages:
Possible bottleneck and single point of failure
Increased complexity in developing and maintaining the API Gateway

Bulkhead Pattern

This pattern isolates the system into separate groups (bulkheads) to prevent a failure in one group from impacting others.
Advantages:
Improved fault resilience
Isolation of failures
Disadvantages:
Complexity in managing isolated groups
Potential resource duplication

Applying these design patterns significantly contributes to building microservices systems with high performance, fault tolerance, and ease of maintenance.
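As a rough sketch of the Circuit Breaker pattern described above (added here for illustration, not part of the original article, and deliberately simplified: a single failure counter and no half-open state), the Go breaker below stops calling a failing dependency once a failure threshold is reached and only allows calls again after a cooldown period.

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// Breaker is a minimal circuit breaker: after maxFailures consecutive
// failures it rejects calls until the cooldown period has passed.
type Breaker struct {
	mu          sync.Mutex
	failures    int
	maxFailures int
	cooldown    time.Duration
	openedAt    time.Time
}

var ErrOpen = errors.New("circuit breaker is open")

func NewBreaker(maxFailures int, cooldown time.Duration) *Breaker {
	return &Breaker{maxFailures: maxFailures, cooldown: cooldown}
}

// Call runs fn through the breaker.
func (b *Breaker) Call(fn func() error) error {
	b.mu.Lock()
	if b.failures >= b.maxFailures && time.Since(b.openedAt) < b.cooldown {
		b.mu.Unlock()
		return ErrOpen // fail fast instead of hammering the unhealthy service
	}
	b.mu.Unlock()

	err := fn()

	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.failures++
		if b.failures == b.maxFailures {
			b.openedAt = time.Now()
		}
		return err
	}
	b.failures = 0 // a success closes the breaker again
	return nil
}

func main() {
	breaker := NewBreaker(3, 5*time.Second)
	flakyCall := func() error { return errors.New("downstream unavailable") }

	for i := 1; i <= 5; i++ {
		err := breaker.Call(flakyCall)
		fmt.Printf("attempt %d: %v\n", i, err)
	}
}

Production implementations usually add a half-open state that lets a single trial request through after the cooldown; libraries such as sony/gobreaker provide this behavior out of the box.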
State and Data Management in Microservices Architecture

Managing state and data is a critical aspect of microservices design. An improper approach can lead to performance issues, consistency problems, and reduced reliability. In this chapter, we explore key strategies and patterns for managing state and data within a microservices architecture.

Decentralized Data Management

One of the core principles of microservices architecture is decentralized data management. Ideally, each microservice should use its own database, avoiding reliance on a shared database. This approach eliminates bottlenecks and improves system scalability.
Advantages:
Data Independence and Isolation: Enhances microservices' autonomy, reliability, and fault tolerance.
Performance Optimization: Customizing each database to the specific needs of a microservice boosts performance.
Simplified Deployment and Scaling: Each microservice can be deployed and scaled independently.
Disadvantages:
Complex Data Synchronization: Requires mechanisms to ensure data consistency across microservices.
Data Duplication: Potential data redundancy across microservices, increasing storage requirements.

Data Management Patterns

To address consistency, availability, and performance challenges, several data management patterns can be employed.

Event Sourcing Pattern. This pattern saves events that represent state changes in an event log. The current state is reconstructed by replaying these events.
Advantages: Complete history of changes, enabling state recovery at any point in time.
Disadvantages: Complex implementation. Increased data storage requirements.

CQRS (Command Query Responsibility Segregation) Pattern. Separates state-changing operations (commands) from read operations (queries) into different data models.
Advantages: Optimized read and write performance. Flexibility to use different data storage solutions for commands and queries.
Disadvantages: More complex architecture. Need for synchronization between the data models.

Saga Pattern. Manages distributed transactions by breaking them into a sequence of local transactions, each addressing a part of the task.
Advantages: Ensures data consistency without global transactions.
Disadvantages: Complex implementation and management. Potential for incomplete data processing in case of failures.

State Management

Microservices can be either stateless or stateful. State management involves various techniques for data storage and processing.

Stateless Microservices. Do not maintain state between requests, relying on external data sources.
Advantages: Simple implementation and scaling.
Disadvantages: Dependency on the performance of external data sources.

Stateful Microservices. Maintain state between requests using internal data storage.
Advantages: Enhanced performance through local data storage.
Disadvantages: Complex state management and consistency maintenance.

Data Consistency Mechanisms

To minimize data inconsistencies in microservices architecture, various mechanisms are used.

BASE (Basically Available, Soft state, Eventually consistent). Allows temporary inconsistencies that are eventually resolved.
Advantages: High availability and performance.
Disadvantages: Temporary data inconsistencies.

Event-Driven Model. Microservices exchange events to notify each other about state changes.
Advantages: Asynchronous communication.
Disadvantages: Complex event management and delivery assurance.

Two-Phase Commit. A protocol for performing distributed transactions to ensure data consistency across microservices.
Advantages: Ensures data consistency.
Disadvantages: Complex implementation. Reduced performance due to coordination overhead.

Caching

Caching data can significantly improve microservices performance by reducing database load and speeding up request processing.

Local Caching. Data is cached locally within a microservice.
Advantages: High performance and low latency.
Disadvantages: Potential data inconsistency across microservices.

Distributed Caching. Data is cached in a distributed cache accessible to all microservices.
Advantages: Consistent data and reduced database load.
Disadvantages: Increased data access latency. Complex distributed cache management.

Effective state and data management in microservices architecture requires careful planning and the use of appropriate patterns and mechanisms.

Conclusion

In the first part, we covered the fundamental principles of microservices architecture design and key patterns. In the second part, we will discuss security aspects, automation, monitoring, and the future of microservices and emerging trends.
19 February 2025 · 18 min to read
Microservices

Database Connection in Python, Go, and JavaScript

Databases are an essential part of almost any project today. Database interactions are especially familiar to system and database administrators, DevOps/SRE professionals, and software developers. While administrators typically deploy one or multiple database instances and configure the necessary connection parameters for applications, developers need to connect directly to the database within their code. This article explores how to connect to databases using different programming languages.

Prerequisites

We will provide examples for connecting to MySQL, PostgreSQL, Redis, MongoDB, and ClickHouse databases using Python, Go, and JavaScript. To follow this guide, you will need:

A database deployed on a server or in the cloud.
Installed environments for Python, Go, and JavaScript, depending on your application programming language.
Additionally for Python: pip installed.
Additionally for JavaScript: Node.js and npm installed.

Database Connection in Python

MySQL and Python

For connecting to MySQL databases, we can use a Python driver called MySQL Connector. Install the driver using pip:

pip install mysql-connector-python

Initialize a new connection. Import the mysql.connector library and the Error class to handle specific connection errors. Create a function named create_connection that receives the database address (host_name), user name (user_name), and user password (user_password) and establishes the connection:

import mysql.connector
from mysql.connector import Error

def create_connection(host_name, user_name, user_password):
    connection = None
    try:
        connection = mysql.connector.connect(
            host=host_name,
            user=user_name,
            password=user_password
        )
        print("Successfully connected to MySQL Server!")
    except Error as e:
        print(f"The error '{e}' occurred")
    return connection

def execute_query(connection, query):
    cursor = connection.cursor()
    try:
        cursor.execute(query)
        connection.commit()
        print("Query executed successfully")
    except Error as e:
        print(f"The error '{e}' occurred")

connection = create_connection("91.206.179.29", "gen_user", "m-EE6Wm}z@wCKe")

Run the script. If everything works correctly, you will see the "Successfully connected to MySQL Server!" message. If any errors occur, the console will display the error code and description.

Create a new table. Select the database by assigning its name to connection.database; note that the database should already exist. To create a table, initialize a variable create_table_query containing the SQL CREATE TABLE query. For data insertion, initialize another variable insert_data_query with the SQL INSERT INTO query. To execute each query, use the execute_query function, which takes the database connection and the variable containing the SQL query:

connection.database = 'test_db'

create_table_query = """
CREATE TABLE IF NOT EXISTS users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    age INT NOT NULL
)
"""
execute_query(connection, create_table_query)

insert_data_query = """
INSERT INTO users (name, age)
VALUES
('Alice', 30),
('Bob', 25)
"""
execute_query(connection, insert_data_query)

if connection.is_connected():
    connection.close()
    print("Connection closed")

Run the script.

PostgreSQL and Python

Python offers several plugins for connecting to PostgreSQL, but the most popular one is psycopg2, which we will use here. Psycopg2 is one of the most frequently used Python plugins for PostgreSQL connections. One of its key advantages is its support for multithreading, which allows you to maintain the database connection across multiple threads.

Install psycopg2 using pip (if not already installed):

pip install psycopg2-binary

Connect to PostgreSQL. Import the psycopg2 package and create a function create_new_conn that uses a try block. Establish the connection with the psycopg2.connect function, which requires the database name, user name, password, and database address as input. To initialize the connection, call the create_new_conn() function. Here is the full code example for connecting to a database:

import psycopg2
from psycopg2 import OperationalError

def create_new_conn():
    conn_to_postgres = None
    while not conn_to_postgres:
        try:
            conn_to_postgres = psycopg2.connect(
                dbname="default_db",
                user="gen_user",
                password="PasswordForDefautUser9893#",
                host="91.206.179.128"
            )
            print("The connection to PostgreSQL has been successfully established!")
        except OperationalError as e:
            print(e)
    return conn_to_postgres

conn_to_postgres = create_new_conn()

Run the script:

python3 connect_to_postgres.py

If successful, you will see the "The connection to PostgreSQL has been successfully established!" message.

Next, create a table named books, which will have three columns. Use the cursor class for SQL expressions, such as creating database objects. If the query involves adding or modifying data, you must call the conn_to_postgres.commit() function afterward to apply the changes.

import psycopg2
from psycopg2 import OperationalError

def create_new_conn():
    conn_to_postgres = None
    while not conn_to_postgres:
        try:
            conn_to_postgres = psycopg2.connect(
                dbname="default_db",
                user="gen_user",
                password="PasswordForDefautUser9893#",
                host="91.206.179.128"
            )
        except OperationalError as e:
            print(e)
    return conn_to_postgres

conn_to_postgres = create_new_conn()

cursor = conn_to_postgres.cursor()
cursor.execute("""
    CREATE TABLE books (
        book_id INT PRIMARY KEY NOT NULL,
        book_name VARCHAR(255) NOT NULL,
        book_author VARCHAR(255) NOT NULL
    )
""")
conn_to_postgres.commit()
print("Table Created successfully")

Run the script:

python3 create_table.py

Now, let's run INSERT INTO to add a new row:

cursor.execute("""
    INSERT INTO books (book_id, book_name, book_author)
    VALUES (1, 'Long Walk to Freedom', 'Nelson_Mandela')
""")

The full code is below:

import psycopg2
from psycopg2 import OperationalError

def create_new_conn():
    conn_to_postgres = None
    while not conn_to_postgres:
        try:
            conn_to_postgres = psycopg2.connect(
                dbname="default_db",
                user="gen_user",
                password="PasswordForDefautUser9893#",
                host="91.206.179.128"
            )
        except OperationalError as e:
            print(e)
    return conn_to_postgres

conn_to_postgres = create_new_conn()

cursor = conn_to_postgres.cursor()
cursor.execute("""
    INSERT INTO books (book_id, book_name, book_author)
    VALUES (1, 'Long Walk to Freedom', 'Nelson_Mandela')
""")
conn_to_postgres.commit()
conn_to_postgres.close()
print("Data inserted successfully")

Run the script:

python3 insert-data.py

Redis and Python

Redis belongs to the class of NoSQL databases, where data is stored in memory rather than on hard drives. It uses a key-value format for data storage. Redis has a wide range of applications, from data storage and caching to serving as a message broker. We will use the redis-py (or simply redis) library for connecting to Redis.

Install the Redis library using pip:

pip install redis

Connect to a Redis instance. Use a try block for the connection and the redis.StrictRedis class, where you provide the Redis address, port, and user password:

import redis

try:
    connect_to_redis_server = redis.StrictRedis(
        host='91.206.179.128',
        port=6379,
        password='PasswordForRedis6379')
    print(connect_to_redis_server)
    connect_to_redis_server.ping()
    print('Successfully connected to Redis Server!')
except Exception as ex:
    print('Error:', ex)
    exit('Failed to connect to Redis server.')

Run the script:

python3 connect_to_redis.py

If successful, you will see a message like "Successfully connected to Redis Server!".

Unlike relational databases, Redis stores data in a key-value format. The key uniquely identifies the corresponding value. Use the set method to create a new record. The example below creates a record with the key City and the value Berlin:

print('Create new record:', connect_to_redis_server.set("City", "Berlin"))

Use the get method to retrieve the value associated with a key:

print('Print record using record key:', connect_to_redis_server.get("City"))

Use the delete method to remove a record by its key:

print('Delete record with key:', connect_to_redis_server.delete("City"))

The complete code fragment is below.

import redis

try:
    connect_to_redis_server = redis.StrictRedis(
        host='91.206.179.128',
        port=6379,
        password='PasswordForRedis6379')
    print('New record created:', connect_to_redis_server.set("City", "Berlin"))
    print('Print created record using record key:', connect_to_redis_server.get("City"))
    print('Delete created record with key:', connect_to_redis_server.delete("City"))
except Exception as ex:
    print('Error:', ex)

MongoDB and Python

MongoDB is another widely used NoSQL database that belongs to the document-oriented category. Data is organized as JSON-like documents. To connect to a MongoDB database with Python, the recommended library is PyMongo, which provides a synchronous API.

Install the PyMongo plugin:

pip3 install pymongo

Connect to the MongoDB server using the following Python code. Import the pymongo module and use the MongoClient class to specify the database server address. To establish a connection to the MongoDB server, use a try block for error handling:

import pymongo

connect_to_mongo = pymongo.MongoClient("mongodb://91.206.179.29:27017/")
first_db = connect_to_mongo["mongo-db1"]

try:
    first_db.command("serverStatus")
except Exception as e:
    print(e)
else:
    print("Successfully connected to MongoDB Server!")

connect_to_mongo.close()

Run:

python3 connect_mongodb.py

If the connection is successfully established, the script will return the message: "Successfully connected to MongoDB Server!"

Add data to MongoDB. To add data, you need to create a dictionary. Let's create a dictionary named record1, containing three keys:

record1 = {
    "name": "Alex",
    "age": 25,
    "location": "London"
}

To insert the dictionary data, use the insert_one method:

insertrecord = collection1.insert_one(record1)

The full script looks like this:

import pymongo

connect_to_mongo = pymongo.MongoClient("mongodb://91.206.179.29:27017/")
db1 = connect_to_mongo["newdb"]
collection1 = db1["userdata"]

record1 = {
    "name": "Alex",
    "age": 25,
    "location": "London"
}

insertrecord = collection1.insert_one(record1)
print(insertrecord)

Run the script:

python3 connect_mongodb.py

ClickHouse and Python

ClickHouse is a columnar NoSQL database where data is stored in columns rather than rows. It is widely used for handling analytical queries.

Install the ClickHouse driver for Python. There is a dedicated plugin for ClickHouse called clickhouse-driver. Install the driver using the pip package manager:

pip install clickhouse-driver

Connect to ClickHouse. To initialize a connection with ClickHouse, you need to import the Client class from the clickhouse_driver library. To execute SQL queries, use the client.execute function. You also need to specify the engine; for more details on supported engines in ClickHouse, refer to the official documentation. We'll use the default engine, MergeTree. Next, create a new table called Users and insert two columns with data. To list the data to be added to the table, use the tuple data type. After executing the necessary queries, make sure to close the connection to the database using the client.disconnect() method. The final code will look like this:

from clickhouse_driver import Client

client = Client(host='91.206.179.128', user='root', password='P@$$w0rd123', port=9000)

client.execute('''
    CREATE TABLE IF NOT EXISTS Users (
        id UInt32,
        name String
    ) ENGINE = MergeTree()
    ORDER BY id
''')

data = [
    (1, 'Alice'),
    (2, 'Mary')
]
client.execute('INSERT INTO Users (id, name) VALUES', data)

result = client.execute('SELECT * FROM Users')
for row in result:
    print(row)

client.disconnect()

Database Connection in Go

Go is one of the youngest programming languages, developed in 2009 by Google. It is widely used in developing microservice architectures and network utilities. For example, services like Docker and Kubernetes are written in Go. Go supports integrating all popular databases, including PostgreSQL, Redis, MongoDB, MySQL, ClickHouse, etc.

MySQL and Go

For working with MySQL databases in Go, use the go-sql-driver/mysql driver.

Create a new directory for storing project files and navigate into it:

mkdir mysql-connect && cd mysql-connect

Create a go.mod file to store the dependencies:

go mod init golang-connect-mysql

Download the MySQL driver using the go get command:

go get -u github.com/go-sql-driver/mysql

Create a new file named main.go. Specify the database connection details in the dsn variable:

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	dsn := "root:password@tcp(localhost:3306)/testdb"
	db, err := sql.Open("mysql", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("Successfully connected to the database!")

	query := "INSERT INTO users (name, age) VALUES (?, ?)"
	result, err := db.Exec(query, "Alex", 25)
	if err != nil {
		log.Fatal(err)
	}

	lastInsertID, err := result.LastInsertId()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Inserted data with ID: %d\n", lastInsertID)
}
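The INSERT query above assumes that the users table already exists in testdb. If it does not, you can create it from Go as well; the short sketch below is our addition (not from the original article) and reuses the same DSN, creating the table with db.Exec:

package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	db, err := sql.Open("mysql", "root:password@tcp(localhost:3306)/testdb")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// The INSERT in the previous example expects this table to exist.
	_, err = db.Exec(`CREATE TABLE IF NOT EXISTS users (
		id INT AUTO_INCREMENT PRIMARY KEY,
		name VARCHAR(100) NOT NULL,
		age INT NOT NULL
	)`)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("Table users is ready")
}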
PostgreSQL and Go

To connect to PostgreSQL, use the pq driver. Before installing the driver, let's prepare our environment.

Create a new directory for storing the project files and navigate into it:

mkdir postgres-connect && cd postgres-connect

Since we will be working with dependencies, we need to create a go.mod file to store them:

go mod init golang-connect-postgres

Download the pq driver using the go get command:

go get github.com/lib/pq

Create a new file named main.go. In addition to importing the pq library, it is necessary to add the database/sql package, as Go does not ship official database drivers by default; database/sql provides general, driver-independent interfaces for working with databases. It is also important to note the underscore (blank identifier) when importing the pq module:

_ "github.com/lib/pq"

The blank identifier is used to avoid the "unused import" error: in this case, we only need the driver to register itself with database/sql. The fmt package is required to output data to the standard output stream, for example, to the console. To open a connection to the database, the sql.Open function is used, which takes the connection string (connStr) and the driver name (postgres). The connection string specifies the username, database name, password, and host address:

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	connStr := "user=golang dbname=db_for_golang password=Golanguserfordb0206$ host=47.45.249.146 sslmode=disable"
	db, err := sql.Open("postgres", connStr)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	err = db.Ping()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Successfully connected to PostgreSQL!")
}

Compile and run:

go run main.go

If everything works correctly, the terminal will display the message Successfully connected to PostgreSQL!

Now, let's look at an example of how to insert data into a table. First, we need to create a table in the database. When using Hostman cloud databases, you can copy the PostgreSQL connection string displayed in the "Connections" section of the Hostman web interface. Make sure that the postgresql-client utility is installed on your device beforehand.

Enter the psql shell and connect to the previously created database:

\c db_for_golang

Create a table named Cities with three fields — city_id, city_name, and city_population:

CREATE TABLE Cities (
    city_id INT PRIMARY KEY,
    city_name VARCHAR(45) NOT NULL,
    city_population INT NOT NULL);

Grant full privileges on the created table to the user:

GRANT ALL PRIVILEGES ON TABLE cities TO golang;

The db.Prepare function is used to prepare the statement: it specifies the insertion query in advance. To insert data, use the stmt.Exec function. In Go, it's common to use plain SQL without the ORM (Object-Relational Mapping) approach.

stmt, err := db.Prepare("INSERT INTO Cities(city_id, city_name, city_population) VALUES($1, $2, $3)")
if err != nil {
	log.Fatal(err)
}
defer stmt.Close()

_, err = stmt.Exec(1, "Toronto", 279435)
if err != nil {
	log.Fatal(err)
}

fmt.Println("Data inserted successfully!")

If all works correctly, you will see: Data inserted successfully!

Redis and Go

To connect to Redis, you need to use the go-redis driver.

Create a new directory:

mkdir connect-to-redis && cd connect-to-redis

Prepare the dependency file:

go mod init golang-connect-redis

And tidy the dependencies:

go mod tidy

Download the go-redis module:

go get github.com/go-redis/redis/v8

To connect to Redis, use the redis.Options struct to specify the address and port of the Redis server. Since Redis does not use authentication by default, you can leave the Password field empty and use the default database (database 0):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/go-redis/redis/v8"
)

func main() {
	rdb := redis.NewClient(&redis.Options{
		Addr:     "91.206.179.128:6379",
		Password: "",
		DB:       0,
	})

	ctx := context.Background()
	_, err := rdb.Ping(ctx).Result()
	if err != nil {
		log.Fatalf("Couldn't connect to Redis: %v", err)
	}
	fmt.Println("Successfully connected to Redis!")
}

You should see the message «Successfully connected to Redis!»
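The Python and JavaScript sections of this article also write and read a key, while the Go example above only checks the connection. Below is a short additional sketch (not part of the original article) that uses the same go-redis/v8 client to set, read, and delete a key; the key name and value are arbitrary.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/go-redis/redis/v8"
)

func main() {
	rdb := redis.NewClient(&redis.Options{
		Addr:     "91.206.179.128:6379",
		Password: "",
		DB:       0,
	})
	ctx := context.Background()

	// Create (or overwrite) a key; 0 means the key never expires.
	if err := rdb.Set(ctx, "City", "Berlin", 0).Err(); err != nil {
		log.Fatalf("Couldn't set key: %v", err)
	}

	// Read the value back by its key.
	value, err := rdb.Get(ctx, "City").Result()
	if err != nil {
		log.Fatalf("Couldn't get key: %v", err)
	}
	fmt.Println("City =", value)

	// Remove the key when it is no longer needed.
	if err := rdb.Del(ctx, "City").Err(); err != nil {
		log.Fatalf("Couldn't delete key: %v", err)
	}
}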
MongoDB and Go

To work with MongoDB, we'll use the mongo driver.

Create a new directory to store the project structure:

mkdir connect-to-mongodb && cd connect-to-mongodb

Initialize the dependency file:

go mod init golang-connect-mongodb

Download the mongo library:

go get go.mongodb.org/mongo-driver/mongo

Connect to MongoDB using the options.Client().ApplyURI method. It takes a connection string such as mongodb://91.206.179.29:27017, where 91.206.179.29 is the MongoDB server address and 27017 is the port for connecting to MongoDB. The options.Client().ApplyURI string is used only for specifying connection data. To check the connection status, you can use another function, client.Ping, which shows the success or failure of the connection:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	clientOptions := options.Client().ApplyURI("mongodb://91.206.179.29:27017")
	client, err := mongo.Connect(context.TODO(), clientOptions)
	if err != nil {
		log.Fatalf("Couldn't connect to MongoDB server: %v", err)
	}
	fmt.Println("Successfully connected to MongoDB!")

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	err = client.Ping(ctx, nil)
	if err != nil {
		log.Fatalf("Could not ping MongoDB server: %v", err)
	}
	fmt.Println("Ping MongoDB server successfully!")
}

You should see the messages: Successfully connected to MongoDB! and Ping MongoDB server successfully!

MongoDB uses collections to store data. You can create collections using the .Collection function. Below, we will create a database called first-database and a collection called first-collection. The collection will receive a new document containing three keys: user-name, user-age, and user-email.

collection := client.Database("first-database").Collection("first-collection")

document := map[string]interface{}{
	"user-name":  "Alice",
	"user-age":   25,
	"user-email": "alice@corporate.com",
}

insertResult, err := collection.InsertOne(ctx, document)
if err != nil {
	log.Fatalf("Couldn't insert new document: %v", err)
}
fmt.Printf("Inserted new document with ID: %v\n", insertResult.InsertedID)

if err := client.Disconnect(ctx); err != nil {
	log.Fatalf("Could not disconnect from MongoDB: %v", err)
}
fmt.Println("Disconnected from MongoDB!")

If successful, you will see the Inserted new document message with the document ID.

ClickHouse and Go

To work with ClickHouse, use the clickhouse-go driver.

Create a new directory to store the project files and navigate to it:

mkdir clickhouse-connect && cd clickhouse-connect

Create a go.mod file to store the dependencies:

go mod init golang-connect-clickhouse

Download the ClickHouse driver using the command:

go get github.com/ClickHouse/clickhouse-go/v2

Create a new file named main.go, where you will specify the connection data to ClickHouse. Note the blank-identifier import: the package only needs to register its database/sql driver.

package main

import (
	"database/sql"
	"log"

	_ "github.com/ClickHouse/clickhouse-go/v2"
)

func main() {
	dsn := "tcp://localhost:9000?username=user1&password=PasswordForuser175465&database=new_db"
	db, err := sql.Open("clickhouse", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	log.Println("Connected to ClickHouse!")
}
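The connection example above only pings the server. As a rough continuation (our addition, not from the original article; it reuses the DSN from the snippet above, which you may need to adjust for your driver version), the sketch below creates a MergeTree table, inserts a row with literal values, and reads it back through the standard database/sql interface.

package main

import (
	"database/sql"
	"log"

	_ "github.com/ClickHouse/clickhouse-go/v2"
)

func main() {
	dsn := "tcp://localhost:9000?username=user1&password=PasswordForuser175465&database=new_db"
	db, err := sql.Open("clickhouse", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Create a simple MergeTree table if it does not exist yet.
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS visits (id UInt32, url String) ENGINE = MergeTree() ORDER BY id`); err != nil {
		log.Fatal(err)
	}

	// Insert a single row using literal values.
	if _, err := db.Exec(`INSERT INTO visits (id, url) VALUES (1, 'https://example.com')`); err != nil {
		log.Fatal(err)
	}

	// Read the rows back.
	rows, err := db.Query(`SELECT id, url FROM visits`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var id uint32
		var url string
		if err := rows.Scan(&id, &url); err != nil {
			log.Fatal(err)
		}
		log.Printf("id=%d url=%s", id, url)
	}
}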
Database Connection in JavaScript

In JavaScript, all connections to external services are made using the Node.js platform. Make sure that you have Node.js and the npm package manager installed on your device.

MySQL and JavaScript

To work with MySQL, use the mysql2 driver.

Create a directory where we will store the project files:

mkdir js-mysql-connect && cd js-mysql-connect

Initialize the project:

npm init -y

Install the mysql2 library:

npm install mysql2

Use the following code to connect to MySQL:

const mysql = require('mysql2');

const connection_to_mysql = mysql.createConnection({
  host: 'localhost',
  user: 'root',
  password: 'PasswordForRoot74463',
  database: 'db1',
});

connection_to_mysql.connect((err) => {
  if (err) {
    console.error('Error connecting to MySQL:', err.message);
    return;
  }
  console.log('Successfully connected to MySQL Server!');

  connection_to_mysql.end((endErr) => {
    if (endErr) {
      console.error('Error closing the connection_to_mysql:', endErr.message);
    } else {
      console.log('Connection closed.');
    }
  });
});

PostgreSQL and JavaScript

Connecting to PostgreSQL is done using the pg library.

Create a directory where we will store the project files:

mkdir js-postgres-connect && cd js-postgres-connect

Initialize the project:

npm init -y

Install the pg library:

npm install pg

To connect to PostgreSQL, first import the pg library. Then, create a constant where you specify the database address, username, password, database name, and port, and pass the connection data to the new pg.Client class. We will create a table called cities and add two records into it. To do this, we will use the queryDatabase function, which contains the SQL queries.

const pg = require('pg');

const config = {
  host: '91.206.179.29',
  user: 'gen_user',
  password: 'PasswordForGenUser56467$',
  database: 'default_db',
  port: 5432,
};

const client = new pg.Client(config);

client.connect(err => {
  if (err) throw err;
  else { queryDatabase(); }
});

function queryDatabase() {
  const query = `
    DROP TABLE IF EXISTS cities;
    CREATE TABLE cities (id serial PRIMARY KEY, name VARCHAR(80), population INTEGER);
    INSERT INTO cities (name, population) VALUES ('Berlin', 3645000);
    INSERT INTO cities (name, population) VALUES ('Paris', 2161000);
  `;
  client
    .query(query)
    .then(() => {
      console.log('Table created successfully!');
      client.end(console.log('Closed client connection'));
    })
    .catch(err => console.log(err))
    .then(() => {
      console.log('Finished execution, exiting now');
      process.exit();
    });
}

Use this command to run the code:

node connect-to-postgres.js

Redis and JavaScript

To work with Redis, use the ioredis library.

Create a directory to store the project files:

mkdir js-redis-connect && cd js-redis-connect

Initialize the project:

npm init -y

Install the ioredis library:

npm install ioredis

To connect to Redis, import the ioredis library. Then create a constant named redis and specify the Redis server address. Inserting data, i.e., creating key-value objects, is done using an asynchronous function named setData, which takes two values — key and value — corresponding to the data format of the Redis system.

const Redis = require('ioredis');

const redis = new Redis({
  host: '91.206.179.29',
  port: 6379,
  password: 'UY+p8e?Kxmqqfa',
});

async function setData(key, value) {
  try {
    await redis.set(key, value);
    console.log('Data successfully set');
  } catch (error) {
    console.error('Error setting data:', error);
  }
}

async function getData(key) {
  try {
    const value = await redis.get(key);
    console.log('Data retrieved');
    return value;
  } catch (error) {
    console.error('Error getting data:', error);
  }
}

(async () => {
  await redis.select(1);
  await setData('user', 'alex');
  await getData('user');
  redis.disconnect();
})();

Run:

node connect-to-redis.js

MongoDB and JavaScript

To work with MongoDB, use the mongodb driver.

Create a directory for storing the project files:

mkdir js-mongodb-connect && cd js-mongodb-connect

Initialize the project:

npm init -y

Install the mongodb library:

npm install mongodb

To connect to MongoDB, import the mongodb library. Specify the database address in the constant uri and pass the address into the MongoClient class.

const { MongoClient } = require('mongodb');

const uri = "mongodb://91.206.179.29:27017";
const client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true });

async function connectToDatabase() {
  try {
    await client.connect();
    console.log("Successfully connected to MongoDB!");

    const database = client.db("myDatabase");
    const collection = database.collection("myCollection");

    const documents = await collection.find({}).toArray();
    console.log("Documents found:", documents);
  } catch (error) {
    console.error("Error connecting to MongoDB:", error);
  } finally {
    await client.close();
    console.log("Connection closed.");
  }
}

connectToDatabase();

ClickHouse and JavaScript

To work with ClickHouse, use the @clickhouse/client driver.

Create a directory where we will store the project files:

mkdir js-clickhouse-connect && cd js-clickhouse-connect

Initialize the project:

npm init -y

Install the @clickhouse/client library:

npm install @clickhouse/client

To connect to ClickHouse, use the code below, where we set the connection details and execute a simple SQL query that returns the first 10 records from the system table system.tables:

const { createClient } = require('@clickhouse/client');

const client = createClient({
  host: 'http://localhost:8123',
  username: 'default',
  password: 'PasswordforDefaultUser45435',
  database: 'default',
});

async function connectAndQuery() {
  try {
    const rows = await client.query({
      query: 'SELECT * FROM system.tables LIMIT 10',
      format: 'JSON',
    }).then((result) => result.json());

    console.log('Successfully connected to ClickHouse Server!');
    console.log('Query results:', rows);
  } catch (error) {
    console.error('Error connecting to ClickHouse or running the query:', error);
  } finally {
    console.log('Done.');
  }
}

connectAndQuery();

Conclusion

In today's article, we thoroughly explored how to connect to PostgreSQL, Redis, MongoDB, MySQL, and ClickHouse databases using Python, Go, and JavaScript. These languages can be used to create both web applications and microservices that utilize databases in their operation.
18 February 2025 · 23 min to read
