
How to Install and Set Up Redis on Windows, Ubuntu, and CentOS

Hostman Team
Technical writer
Redis
20.03.2025
Reading time: 4 min

Redis is an in-memory database management system that stores data as key-value pairs: each unique key in the database maps to a specific value.

To help you integrate Redis into your projects, we have prepared installation guides for three operating systems: Windows, Ubuntu, and CentOS. 

After installation, we will configure Redis to make it ready for use.

Installing Redis on Windows

You will need WSL2 (Windows Subsystem for Linux) to install Redis on Windows. Refer to the WSL2 installation guide on the Microsoft website.

Once the installation and setup of WSL is complete, you will have a Linux system (by default, Ubuntu) running on your machine via WSL. 

Now you can proceed to the Installing Redis on Ubuntu 22.04 section below and follow the instructions.

Installing Redis on Ubuntu 22.04

We will install Redis on Ubuntu using the official repository. This guide is also applicable to Debian installations.

Before installing Redis, update the package lists to ensure you get the latest available version:

sudo apt update

Now, install Redis using the following command:

sudo apt install redis-server -y

Start the Redis service and enable it to start automatically on system boot:

sudo systemctl start redis
sudo systemctl enable redis

To check if Redis is running correctly, use:

sudo systemctl status redis

If Redis is active, the output will confirm that the service is running.


Installing Redis on CentOS 9

Now, let's install Redis on CentOS 9.

Run the following command:

sudo dnf install redis

Start the Redis service:

sudo systemctl start redis

Enable Redis to start automatically on system boot:

sudo systemctl enable redis

Configuring Redis

Redis is configured by editing its configuration file, redis.conf, whose format is the same across platforms. In this guide, we will set up Redis on Ubuntu and connect to it remotely.

To set up basic configuration, we need to modify two key parameters:

  1. Enable remote connections
  2. Set up a password

Here are the steps to achieve that.

Step 1: Generate a secure password:

openssl rand 25 | openssl base64 -A

Example output:

/37DQhAt5MBq/34Lj24Ppn5LI/UZksAZJQ==

Since attackers can attempt up to 150,000 passwords per second, it is crucial to choose a strong password. Copy the generated password and proceed to the configuration file.
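The generation step above can be combined into a single snippet that stores the password in a variable and sanity-checks it. The 36-character length is a consequence of base64-encoding 25 random bytes (24 bytes encode to 32 characters, and the remaining byte adds 4 more, including padding):

```shell
# Generate the password and keep it in a variable for the configuration step
PASS=$(openssl rand 25 | openssl base64 -A)
echo "$PASS"

# 25 random bytes always base64-encode to exactly 36 characters
echo "${#PASS}"
```

Keeping the password in a shell variable avoids retyping it when you paste it into redis.conf.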

Step 2: Open the configuration file:

sudo nano /etc/redis/redis.conf

Modify configuration settings. Find the line:

bind 127.0.0.1 ::1

Comment it out by adding # at the beginning:

# bind 127.0.0.1 ::1

Find the line:

protected-mode yes

Change it to:

protected-mode no

Note: disabling protected mode allows connections from other hosts, so be sure to set a password (as shown below) and restrict access to port 6379 with a firewall.

In the Security section, locate the commented-out line:

# requirepass foobared

Add the following line below it, replacing the value with your generated password:

requirepass /37DQhAt5MBq/34Lj24Ppn5LI/UZksAZJQ==

Save and close the file.
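If you prefer to make these edits non-interactively, the same three changes can be scripted with sed. The sketch below runs against a small sample copy of the relevant lines; on a real server you would point sed at /etc/redis/redis.conf instead, and YOUR_GENERATED_PASSWORD is a placeholder for the password you generated in Step 1:

```shell
# Sample of the three relevant lines from redis.conf
cat > redis-sample.conf <<'EOF'
bind 127.0.0.1 ::1
protected-mode yes
# requirepass foobared
EOF

# 1. Comment out the bind line
sed -i 's/^bind 127.0.0.1 ::1/# bind 127.0.0.1 ::1/' redis-sample.conf

# 2. Disable protected mode
sed -i 's/^protected-mode yes/protected-mode no/' redis-sample.conf

# 3. Add requirepass directly below the commented-out example
sed -i 's|^# requirepass foobared|&\nrequirepass YOUR_GENERATED_PASSWORD|' redis-sample.conf

cat redis-sample.conf
```

The `&` in the last substitution re-inserts the matched line, so the original commented-out example is preserved above the new requirepass directive.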

Step 3: Restart Redis to apply changes:

sudo systemctl restart redis.service

Step 4: Check which network interfaces Redis is listening on:

sudo netstat -lnp | grep redis
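On newer distributions the netstat utility (part of net-tools) may not be installed by default. The ss command from the iproute2 package shows the same information; this is an assumed equivalent rather than a command from the original guide:

```shell
# Alternative to netstat on systems without net-tools
sudo ss -lnpt | grep redis

# With the bind line commented out, Redis should be listening on 0.0.0.0:6379
```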


Step 5: Connect remotely to your Redis instance. On your local machine, open Terminal or Command Prompt and use redis-cli to connect:

redis-cli -h 192.168.43.37 -p 6379 -a /37DQhAt5MBq/34Lj24Ppn5LI/UZksAZJQ==

Step 6: Test the connection by running the ping command:

192.168.43.37:6379> ping

# Output:
PONG

Set and retrieve a key-value pair:

192.168.43.37:6379> set key test

# Output:
OK
192.168.43.37:6379> get key

# Output:
"test"

Redis Database as a Service (DBaaS)

A Database as a Service (DBaaS) is a cloud-hosted database solution. Using Redis as a DBaaS provides several advantages:

  • Reduced maintenance – no need to manually update or manage the database.
  • Scalability – easily scale the database as needed.
  • Quick setup – ready to use from the start.
  • High availability – ensures uninterrupted operation.

With Hostman, you can set up a cloud Redis database in just a few minutes.


Similar

Redis

Redis Message Broker: How It Works and How to Use It

Originally, Redis was developed as a database and cache to temporarily store data in memory. However, with the release of Redis 2.0, the creators introduced the PUBLISH/SUBSCRIBE feature, allowing Redis to function as a message broker. Today, Redis is widely used to support chat applications and messaging services, as well as for message transmission using the pub/sub model. In this article, we will explore how a message broker works and how it can be used. Why Use a Message Broker? A message broker  is a tool that simplifies information exchange between systems and applications, even if they run on different languages and platforms. When there are only a few participants, they can exchange data directly. However, as the number of participants grows, the need for greater interactivity arises, making direct exchange inefficient. In such cases, a message broker manages the process and acts as an intermediary between the sender and the recipient. Message brokers are especially useful for asynchronous interactions between microservices. Asynchronous communication does not require real-time responses. A good example is email, where users can send a message and continue working on other tasks. Key Components of a Message Broker Producer – Responsible for sending messages to the broker, which then delivers them to subscribers. Consumer – Receives and processes messages from the message broker. Queue – A model where senders place messages in a queue, and receivers retrieve them. Each message is processed by only one recipient. Topic – A model where a producer sends messages to a specific topic, and all subscribers to that topic receive the messages. Messaging Models Using a Message Broker There are several ways to implement message exchange. Here, we will examine three main models. Point-to-Point Messaging In the P2P model, each message is sent to a specific queue and can be processed by only one recipient. 
The message broker ensures that undelivered messages remain in the queue until they are processed. The key advantage of the P2P model is that each transaction is processed only once. A common use case for P2P messaging is transaction processing, where reliability and security are critical. Publish/Subscribe In the pub/sub pattern, messages are distributed among all users subscribed to a common topic. This is useful for implementing notification mechanisms or distributing independent tasks. Redis supports different message types, including strings, hashes, lists, and sets, making it suitable for transmitting various types of data. Some simple examples of this model in action include Telegram channels and online chats. Additionally, Redis can be used to share event information between different application instances. This mechanism helps track database changes and user activity. Hybrid Messaging Model This model combines both P2P and Pub/Sub. Messages can be sent either to a specific queue or to a topic-based channel, allowing all subscribers to receive them simultaneously. How to Use Redis as a Message Broker To use Redis as a message broker, you first need to download and install it on your server. Once Redis is installed, start it using the redis-server command and verify that the installation is correct by running redis-cli. Finally, check the configuration file to ensure that the Redis Pub/Sub module is enabled. If it is not working for some reason, add the following line to the configuration file: loadmodule /usr/lib/redis/modules/redisearch.so Creating a Queue Using Lists Redis provides a convenient data structure called lists, which we can use to create simple queues. Redis Lists support basic operations and can be locked, making them efficient for building high-performance chat systems, comment feeds, news feeds in social networks, and real-time server communication systems. 
To create a message queue, use the following commands: RPUSH <queue_name> <message>RPUSH <queue_name> <message2> This creates a queue named queue_name and adds two messages, message and message2. The RPUSH command adds new elements to the end of the list. To add elements to the beginning of the list, use LPUSH instead. Queue Management Commands LPOP <queue_name> – Retrieves and removes the first element from the queue. RPOP <queue_name> – Retrieves and removes the last element from the queue. BLPOP <queue_name> <timeout> – Blocking retrieval and removal of an element from the beginning of the queue. The timeout (in seconds) is specified as the last argument. BRPOP <queue_name> <timeout> – Blocking retrieval and removal of an element from the end of the queue. BRPOPLPUSH <source_queue_name> <destination_queue_name> <timeout> – Blocking retrieval and transfer of an element from the end of one list to the beginning of another. LLEN <queue_name> – Retrieves the number of elements in the queue. LRANGE <queue_name> <start> <stop> – Retrieves a list of messages sent to the channel. The start and stop parameters define the range of indices. LINDEX <queue_name> <index> – Retrieves an element from the queue by index. These commands allow for simple message queue creation (LPUSH/RPUSH, LPOP/RPOP, BLPOP/BRPOP, BRPOPLPUSH) and queue monitoring (LLEN, LINDEX). Reliable Queues in Redis Reliable queues ensure that messages are delivered in the correct order. This can be implemented using the BLMOVE command, which transfers elements between lists while maintaining their order and blocking if the list is empty. If the source queue does not fully process messages, BLMOVE moves the message to a backup destination queue for further processing. Syntax of BLMOVE: BLMOVE source destination LEFT|RIGHT LEFT|RIGHT timeout source – The name of the source list from which the element is extracted. destination – The name of the destination list to which the element will be moved. 
LEFT|RIGHT (first instance) – Specifies the direction of element extraction from the source queue. LEFT|RIGHT (second instance) – Specifies how the operation blocks if the list is empty. timeout – The blocking time in seconds (0 for infinite blocking). Implementing the Pub/Sub Mechanism in Redis In Redis Pub/Sub, messages sent through channels are called push messages. To subscribe to a channel, use the SUBSCRIBE command with the channel name: SUBSCRIBE hostman_channel To publish a push message to the channel, use the PUBLISH command: PUBLISH hostman_channel "Welcome to our channel" Similarly, you can unsubscribe from a channel using the UNSUBSCRIBE or PUNSUBSCRIBE commands. Creating a Stream-Based Queue A stream-based queue offers extended capabilities, including grouping, callbacks, and message consumption using an iterator. To create a message stream, use the following command: XADD name_stream * message1 _message2 _message3 Here, XADD creates a new message in the stream and adds it to the end of the stream. To read messages from the stream: XREAD COUNT 3 STREAMS name_stream 1 Where: XREAD reads messages from one or multiple streams. COUNT 3 specifies the number of messages to read from the stream. 1 defines the starting message index. Additional Stream Queue Management Commands XGROUP – Creates a new stream group or updates an existing one. XREADGROUP – Reads messages from a specified stream group. XDEL – Deletes messages from the stream. XPENDING – Retrieves information about unprocessed messages in a stream group. Advantages of Using Redis as a Message Broker Support for Various Data Structures – Redis supports lists, strings (up to 512MB), hashes, bitmaps, HyperLogLogs, and more. High Performance – Redis enables millions of operations per second, as it stores data in server memory rather than disk storage like traditional databases. 
Data Persistence – Even if the server fails, Redis ensures data safety through snapshot backups (RDB) and append-only file (AOF) mechanisms stored in non-volatile memory. Flexibility and Ease of Use – Redis supports multiple programming languages, including C, Java, Python, and Ruby, and comes with built-in data structures. While Redis provides low latency, support for multiple protocols, and diverse data formats, it is not a fully reliable message broker. It does not guarantee 100% message delivery in case of node failures. Alternatives such as RabbitMQ, Apache Kafka, and ActiveMQ may be more suitable for applications requiring stronger message durability.
25 March 2025 · 7 min to read
Redis

How to Implement a Queue in Redis

Redis is a database that stores and processes data primarily in RAM. Because of this, Redis is often used to handle fast, short-lived data. One common use case for Redis is caching. However, another powerful use is message queue processing, where Redis acts as a message broker like Apache Kafka or RabbitMQ. That’s how a message broker works: Service 1 (e.g., a backend microservice) sends a message to the broker. Service 2 (or multiple services: Service 3, 4, 5, etc.) retrieves the message and processes it. All messages are buffered, forming a message queue. This allows multiple services to send messages simultaneously while others retrieve and process them over time. Methods for Implementing a Queue in Redis Redis provides several built-in tools for implementing a message queue. Each method has its own advantages and limitations: 1. Pub/Sub (Publish/Subscribe) A service publishes a message to a queue. Only services subscribed to the queue can receive the message. If no subscribers are listening, the message is lost. 2. List (FIFO Queue: First In, First Out) The first message sent is the first message received. Each message is processed by only one subscriber. 3. Stream (Guaranteed Delivery) Works like Pub/Sub, but ensures message delivery. If no service is available to process the message, it remains in the queue until it is handled. Creating a Redis Database in Hostman For this guide, we will use Hostman cloud databases. Log in to Hostman and navigate to the Databases section. Click Create database. Select Redis as the database type. Configure your database settings. For a test project, choose the minimal configuration to reduce costs. Click Order to create the database. The database will take a few moments to initialize and become active. Once setup is complete, your Redis instance will be ready for message queue processing. Queue Implementation Let's go through the process of implementing a queue in Redis step by step. 
This guide uses a Hostman cloud server running Ubuntu 22.04. Step 1. Update the system before configuring the server for a Python application: sudo apt updatesudo apt upgrade Step 2: Install Python. First, check if you already have Python: python --version The console output should look something like this: Python 3.10.12 If Python is not installed, use the APT package manager to install it: sudo apt install -y python3 The -y flag automatically answers "yes" to any prompts during installation. Step 3: Install the Python virtual environment: sudo apt install python3-venv -y Step 4: Create a working directory for the Python project: mkdir my-hostman-project Navigate into the directory: cd my-hostman-project Step 5. Create a virtual Python environment in the working directory: python -m venv venv Now check the directory contents: ls If everything is set up correctly, a virtual environment folder should appear: venv Activate the environment: source ./venv/bin/activate Step 6: Install the Pip package manager sudo apt install python3-pip -y To verify the installation, check the Pip version: pip -V The console output should look something like this: pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10) Step 7: Now install the Python module for working with Redis: pip install redis Later, we will import this module into the Python application. Writing a Python Application Let's explore the basic ways to create a queue using Pub/Sub, List, and Stream. 
Queue Using Pub/Sub In the working directory, create a handler file that will read the message queue: sudo nano consumerPS.py The Python code inside the file is as follows: import redis import time connection = redis.Redis( host="IP", # specify the Redis server's IP address password="PASSWORD", # specify the Redis server's root password port=6379, # standard port for connecting to Redis without SSL db=0, decode_responses=True # automatically decodes Redis server responses into readable format ) queue = connection.pubsub() # create a Pub/Sub queue queue.subscribe("channelFirst", "channelSecond") # subscribe to the specified channels # infinite loop for processing the message queue while True: time.sleep(0.01) msg = queue.get_message() # retrieve a message if msg: # check if the message is empty if not isinstance(msg["data"], int): # check the type of data in the "data" field (msg is a dictionary) print(msg["data"]) # print the message to the console First, the script connects to the remote Redis server and then creates a Pub/Sub queue. Note that when connecting to Redis, you must specify the remote host address and root password.  This example uses a non-SSL connection, so port 6379 is specified. The queue subscribes to two channels: channelFirst and channelSecond. Inside an infinite loop, the script periodically checks for new messages. If there is one, the console displays it.  
Now, create a sender file that will publish messages to the queue: sudo nano producerPS.py Its contents should be as follows: import redis # similar connection to the remote Redis server connection = redis.Redis( host="IP", password="PASSWORD", port=6379, db=0, decode_responses=True ) connection.publish('channelFirst', 'This message was sent to the first channel') # send a message to the first channel connection.publish('channelSecond', 'This message was sent to the second channel') # send a message to the second channel First, the script connects to the remote Redis server in the same way as consumerPS.py. Then, two messages are sent over the open connection—one to the first channel and another to the second. Now, we can execute the scripts to verify that the example works. First, run the message handler in an open terminal window: python consumerPS.py Next, open a second terminal and activate the virtual environment: source ./venv/bin/activate Then, start the message sender: python producerPS.py As a result, the first terminal will display the following output: This message was sent to the first channel This message was sent to the second channel Queue Using List Now, let's implement a similar queue but using the List entity. First, create a handler file: sudo nano consumerList.py Write the following code in the file: import redis import random import time connection = redis.Redis( host="IP", password="PASSWORD", port=6379, db=0, decode_responses=True ) len = connection.llen("listQueue") # get the size of the message queue list # read messages from the list until the list size becomes zero while connection.llen("listQueue") != 0: msg = connection.rpop("listQueue") # read the message, which is a dictionary data type if msg: print(msg) # print the message to the console Note that in this example, we extract and remove the message "from the right" instead of "from the left." In other words, instead of using the lpop function, we use rpop. 
Now, create the sender file: sudo nano producerList.py Its contents will be as follows: import redis import random connection = redis.Redis( host="IP", password="PASSWORD", port=6379, db=0, decode_responses=True ) # send 3 messages at once for i in range(0,3): connection.lpush("listQueue", "Message â„–" + str(random.randint(0, 100))) # add a message with a unique number to the list It is important to note that the messages are added to the list from right to left. For example, if 3 messages were sent: Message â„–1Message â„–2Message â„–3 After that, the list will look like this: [ Message â„–3, Message â„–2, Message â„–1 ] Therefore, if the message handler code uses the rpop function, the messages will be processed in the order they were sent. If lpop is used, they will be processed in reverse order. The same applies to sending messages using the rpush function instead of lpush. Run the sender script to fill the message queue list: python producerList.py Then, process the messages: python consumerList.py The console should display output similar to this (only the message numbers will differ): Message â„–94Message â„–96Message â„–24 Queue Using Stream Another useful tool for implementing a queue is "Streams." There are several basic commands for managing streams: XADD: Adds a new entry to the stream. XREAD: Reads one or more entries starting from a specified position and moving forward in time. XRANGE: Returns a range of entries between two provided record IDs. XLEN: Returns the length of the stream. 
Create the message sender file: sudo nano producerStream.py The code inside should be: import redis import random connection = redis.Redis( host="IP", password="PASSWORD", port=6379, db=0, decode_responses=True ) # send 3 messages at once for i in range(0,3): connection.xadd("queueStream", { "data":"Message â„–" + str(random.randint(0, 100))}) # add a message with a unique number to the queue (dictionary type) print("Queue length: " + str(connection.xlen("queueStream"))) # print the queue size to the console In this example, we send 3 messages with a unique number to the stream via a for loop. After sending, the terminal will display the size of the stream. Now, implement the message handler functionality: sudo nano consumerStream.py The code inside should be: import redis import random connection = redis.Redis( host="IP", password="PASSWORD", port=6379, db=0, decode_responses=True ) len = connection.xlen("queueStream") # get the length of the stream if len > 0: messages = connection.xread(count=len, streams={"queueStream":0}) # get the entire list of messages in the stream # iterate over the list of messages for msg in messages: print(msg) # print the message to the console First, we extract the message queue from the stream and then process it sequentially in a for loop. Run the written scripts in the following order: Start the message sender: python producerStream.py The console should display: Queue length: 3 Then, process the queue messages: python consumerStream.py The console output will look something like this: ['queueStream', [('1711712995031-0', {'data': 'Message â„–74'}), ('1711712995033-0', {'data': 'Message â„–54'})]] From this output, you can notice that each message has a unique identifier automatically assigned by Redis. However, this example has one drawback — each time we read the entire stream. 
Let's improve the code in consumerStream.py so that each new script run reads only new messages from the stream: import redis import random connection = redis.Redis( host="IP", password="PASSWORD", port=6379, db=0, decode_responses=True ) # create a Redis variable to store the ID of the last message (if this variable does not already exist) if connection.get("last") == None: connection.set("last", 0) len = connection.xlen("queueStream") # get the length of the stream if len > 0: messages = connection.xread(count=len, block=1000, streams={"queueStream":connection.get("last")}) # pass the last message ID as an argument (or 0) print(connection.get("last")) # print the last message ID (or 0) # iterate over the list of new messages for msg in messages: print(msg) # print the message to the console connection.set("last", msg[-1][-1][0]) # set the ID of the last read message as the value for the "last" variable Now, each new request to Redis will print only fresh messages to the console. Working with Redis streams is somewhat more complex than working with lists or subscribers. For a full understanding of streams when integrating this type of queue into your project, it's best to familiarize yourself with the official Redis examples. Conclusion This guide demonstrated several basic ways to create a queue in Redis: Pub/Sub, List, and Stream. The examples shown are minimal implementations that perform the logic of a message queue. A real project will require the logic to be more complex to meet developer criteria and solve specific tasks. For instance, you can wrap the message queue functionality in classes and objects or implement it as a separate internal library for the project. Each specific project will require further unique development of this implementation to solve its tasks. To learn more about the Redis commands designed for working with different message queue tools, refer to the official Redis documentation: Pub/Sub List Stream
07 March 2025 · 10 min to read
Redis

How to Cache Node.js Applications with Redis

Caching is the process of storing copies of files in a cache — a temporary storage that is much faster to access than other available storage methods in the system. When developing Node.js applications, caching becomes highly relevant because database queries can take significantly longer than fetching data from temporary storage. For example, there is no need to reload the HTML markup of a webpage for every user request to the server — this would add several (sometimes dozens of) milliseconds to the response time. Storing the page (or JSON data for rendering in a SPA application) is much more efficient in the cache. In simple terms, caching is about optimization. This article will explore how to cache application data in a Node.js application using Redis with the Express framework. What is Redis? Redis (Remote Dictionary Server) is an open-source, in-memory database with simple "key-value" data structures. The terminology may vary. Some refer to Redis as a database, others as a caching tool, or something else. The key point is that Redis stores data in RAM instead of a hard drive, which results in higher performance. This is why Redis is referred to as an "in-memory" database. Although the data is kept in RAM, it is periodically saved to a hard drive in the form of snapshots. Redis is often used together with relational DBMSs, such as managed PostgreSQL. Installing Redis Server The installation process for Redis differs depending on the operating system, and you can find detailed instructions for each system on the official website. This article focuses on Ubuntu or Debian. Therefore, we will install the latest version of Redis from the official APT (Advanced Packaging Tool) repository — packages.redis.io: sudo apt update sudo apt install redis Once this is done, the Redis server is ready to use.  For Windows, you need to download the installer from the official GitHub repository. 
After installation, start the Redis server with the following CLI command: redis-cli For macOS, you can install Redis using the Homebrew package manager: brew install redis Once installed, start the server with: redis-server Node.js Project Configuration Before we dive into how to interact with Redis through a Node.js application, let's first create a separate working directory and navigate to it: mkdir node_redis cd node_redis As usual, let's create a package.json configuration file with a minimal set of data: { "name": "node_redis", "version": "1.0.0", "description": "Simple example of using Redis by Hostman", "main": "index.js", "license": "MIT", "dependencies": { "express": "latest", "axios": "latest", "redis": "latest" } } Note the specified dependencies. For this project, we will need the latest versions of the Express framework and the official Redis client for Node.js from NPM. This is a separate library that provides a high-level API (classes and functions) for interacting with a Redis server. The Axios module will help parse the JSON data the remote server will return in response to API requests. To install these dependencies, we will use the NPM package manager. If you don't have it yet, install it with the following command: sudo apt install npm You can read a separate guide on how to install the latest version of Node.js on Ubuntu. Since the app will use the async/await syntax, the minimum required version of Node.js is 8. Now, once all dependencies are specified, they can be installed: npm install Express Application Without Caching In this example, the application will use a fake API from JSONPlaceholder, specifically created for such purposes. 
We will send a request to the URL https://jsonplaceholder.typicode.com/posts/1 and receive mock data in JSON format: { "userId": 1, "id": 1, "title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit", "body": "quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto" } Subsequent loading of data from the cache (instead of making repeated requests to the remote server) will increase the speed of the application. However, we will first implement the process of handling user requests without caching and add it later. Let's first create and edit our index.js file. The script will use modern JavaScript (ES6) syntax with async/await operators whenever possible: const express = require("express"); // import the Express framework const axios = require("axios"); // import the Axios module for working with JSON data const app = express(); // create an instance of the app // create an async function to request data from the remote server using axios async function getRemoteData() { const information = await axios.get(`https://jsonplaceholder.typicode.com/posts/1`); // send a request to the remote API console.log("There was a request to a remote server"); // log the informational message to the console return information.data; // return the raw JSON data } // create an async function to handle user requests async function onRequest(req, res) { let results = await getRemoteData(); // call the previously created function to get data from the remote server if(results.length === 0) throw "API error"; // handle empty responses with an error res.send(results); // respond to the user's request with the raw JSON data } app.get('/', onRequest); // attach the previously created function to the GET request hook app.listen(8080); // start listening for incoming requests on the default HTTP server port Now, you can run the script, open localhost in your 
browser, and see the raw JSON data displayed on the web page: node index.js Each request to the local server will, in turn, trigger a request to the remote server. For example, if you refresh the page three times in the browser, the message "There was a request to a remote server" will be printed three times in the terminal of the running Node.js server. But why? From a rational perspective, this is unnecessary. The data retrieved the first time should be cached to reduce the number of operations and user wait times. This is relevant only when the data is expected to remain static for a certain period — in other words, you can only cache data that doesn't change frequently. Express Application with Caching Let's modify the previous example so our application "learns" to cache data. To do this, we'll first connect the Redis client — add a new line at the beginning of the index.js: const redis = require("redis");  Now, naturally, we need to connect to the Redis server we started earlier, and only after that can we set and get keys. Let's add a few more lines of code: (async () => { client = redis.createClient(); client.on("error", (error) => console.log('Something went wrong', error)); // set up an error handler for Redis connection issues await client.connect(); // connect to the Redis server })(); Note that the connection to the Redis server is done in an anonymous self-invoking asynchronous function. This ensures that all pre-configuration steps are executed sequentially. Additionally, the connect function returns a promise, which can be handled using then/catch or inside an async function. In our example, the caching logic will be as follows: if the API request to the remote server is made for the first time, we cache the obtained data. If the data has been previously retrieved, it will be available in the cache — we fetch it and send it to the user. 
Let's modify the onRequest function (middleware) to implement caching:

```javascript
async function onRequest(req, res) {
  let results; // declare the variable for the result
  const cacheData = await client.get("post"); // try to get the "post" key from the Redis database
  if (cacheData) {
    results = JSON.parse(cacheData); // parse the data from a raw string into a structured format
  } else {
    results = await getRemoteData(); // call the function to get data from the remote server
    if (results.length === 0) throw "API error"; // handle an empty result with an error
    await client.set("post", JSON.stringify(results)); // cache the obtained data
  }
  res.send(results); // respond to the request with JSON data
}
```

Notice that the get function returns null if no value is stored for the given key in Redis. When that happens, an API request is made to the remote server. If data exists in the cache, it is retrieved and sent to the user. The set function is responsible for caching: it stores the given value under the specified key so we can retrieve it later with get.
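Redis stores values as plain strings, which is why the code above wraps the object in JSON.stringify before calling set, and runs JSON.parse on whatever get returns. A minimal sketch of that round trip, using a hypothetical post object in place of the real API response:

```javascript
// Hypothetical post object standing in for the API response.
const post = { userId: 1, id: 1, title: "example post" };

// What client.set("post", ...) actually stores: a string.
const serialized = JSON.stringify(post);

// What we do with the string returned by client.get("post").
const restored = JSON.parse(serialized);

console.log(typeof serialized);         // "string"
console.log(restored.id === post.id);   // true
```

Forgetting either half of this round trip is a common mistake: set would reject a raw object, and res.send would otherwise return an unparsed string to the client.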
The full code of the application at this stage looks like this:

```javascript
const express = require("express"); // import the Express framework
const axios = require("axios"); // import the Axios module for working with JSON data
const redis = require("redis"); // import the Redis client

const app = express(); // create an instance of the application

let client; // Redis client handle, assigned during connection

// Connect to the Redis server
(async () => {
  client = redis.createClient();
  client.on("error", (error) => console.log("Something went wrong", error)); // set up an error handler for Redis connection issues
  await client.connect(); // connect to the Redis server
})();

// create an asynchronous function to request data from the remote server using axios
async function getRemoteData() {
  const information = await axios.get("https://jsonplaceholder.typicode.com/posts/1"); // send a request to the remote API
  console.log("There was a request to a remote server"); // log an informational message to the console
  return information.data; // return the obtained JSON data in raw form
}

// create an asynchronous function to handle user requests
async function onRequest(req, res) {
  let results; // declare the variable for the result
  const cacheData = await client.get("post"); // attempt to retrieve the "post" key from the Redis database
  if (cacheData) {
    results = JSON.parse(cacheData); // parse the data from a raw string into a structured format
  } else {
    results = await getRemoteData(); // call the function to fetch data from the remote server
    if (results.length === 0) throw "API error"; // handle an empty result with an error
    await client.set("post", JSON.stringify(results)); // cache the obtained data
  }
  res.send(results); // respond with the JSON data
}

// run the HTTP server with the necessary configuration
app.get("/", onRequest); // associate the handler with GET requests for the root path
app.listen(8080); // start handling incoming requests on port 8080
```

Setting Cache Expiration

We should periodically update the data
stored in the cache to prevent it from becoming outdated. In real-world projects, APIs often provide additional information about how frequently cached data should be refreshed. This information is used to set a timeout: the duration for which the cached data remains valid. Once this time expires, the application makes a new request to obtain fresh data.

In our case, we will take a simpler approach that is also common in practice: a constant cache expiration time of 60 seconds. After this period, the application will make another request to the remote server for fresh data.

It's important to note that cache expiration is handled by Redis itself. We enable it by passing additional parameters to the set function: an options object as the third argument. Thus, the line:

```javascript
await client.set("post", JSON.stringify(results)); // cache the received data
```

becomes:

```javascript
await client.set("post", JSON.stringify(results), {
  EX: 60,   // cache expiration time in seconds
  NX: true, // only set the key if it does not already exist
}); // cache the received data with expiration
```

Here, the EX parameter is the cache expiration time in seconds, and the NX parameter ensures the key is set only if it does not already exist in the Redis database. The NX parameter matters because without it, re-setting the key would reset the expiration timeout and prevent the key from ever fully expiring.

Now the Redis database stores the value of the post key for 60 seconds and then deletes it. This means that every minute, the cacheData variable in our app receives a null value, which triggers an API request to the remote server and re-caches the obtained result.

Conclusion

This article demonstrated how in-memory storage can serve as a "mediator" between processing and storing data on solid-state drives.
All of this is a form of caching, which reduces unnecessary computational (and network) operations, thereby improving the application's performance and reducing the load on the server. As shown, you can quickly set up such storage using Redis with a Node.js client.

In our case, we used a mock API that returned trivial JSON data. In one scenario, the data was requested every time; in the other, it was cached, sometimes with an expiration time.

The examples provided cover just the basics. As always, you can find more information on using Redis in the official documentation. The same applies to the documentation for Express and Axios.
26 December 2024 · 11 min to read
