
Cloud Database

Unlock the power of flexible, secure, and scalable cloud database solutions for your business.
Contact Sales
Advanced Scalability
With Hostman Cloud Database, you can effortlessly scale your databases up or down to match your workload demands. Our cloud-based database solutions ensure that you can handle sudden traffic spikes without compromising performance, providing seamless growth for your applications.
Enhanced Security
Security is a top priority at Hostman. Our cloud database security measures include advanced encryption, regular security audits, and stringent access controls. Your data is protected against unauthorized access and potential threats, ensuring peace of mind.
Easy Integration
Integrate additional resources and services with ease using Hostman Cloud Database. Whether you need to expand storage, add new applications, or connect to third-party services, our platform supports seamless integration, enabling you to enhance your capabilities effortlessly.
In-House Tech Support
Enjoy 24/7 technical support with Hostman Cloud Database. Our dedicated support team is always available to assist you with any issues or questions, ensuring that your database operations run without interruptions.

Tailored database solutions for every need

Versatile capabilities and distributions.

MySQL

Streamline app development with our fully managed MySQL environments, designed for optimal performance and scalability.

PostgreSQL

Unlock the power of PostgreSQL. We manage the details: you harness its advanced capabilities for your data-driven solutions.

Redis

Accelerate with managed Redis. Blazing-fast data handling, zero management overhead — all in your control.

MongoDB

Flexible, dynamic MongoDB management lets you focus on innovation while we handle the data agility your app needs.

OpenSearch

Managed OpenSearch powers your insights. We handle the complexity, you enjoy lightning-fast, scalable search capabilities.

ClickHouse

Instant analytics with managed ClickHouse. Fast, reliable, and maintenance-free — query at the speed of thought.

Kafka

Effortless data streaming with Kafka. Our management means reliable, scalable, real-time processing for your applications.

RabbitMQ

Seamless messaging with RabbitMQ. Let us manage the queues while you build responsive, interconnected app features.

What is Cloud Database?

A cloud database is a flexible, scalable solution that allows you to store and manage data in a cloud environment. It eliminates the need for physical hardware, offering seamless integration, automated scaling, and strong security measures. With Hostman Cloud Database, you can easily adapt to changing workloads, ensuring your applications run smoothly, even during traffic spikes.
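For illustration, here is a minimal sketch of how an application might connect to a managed cloud MySQL instance from Node.js. The hostname, credentials, and the mysql2 client library are assumptions for the example rather than details from this page; substitute the connection parameters shown in your own control panel.

// Minimal sketch: connecting to a hypothetical managed MySQL instance.
// The host, user, password, and database below are placeholders.
const mysql = require('mysql2/promise'); // assumes `npm install mysql2`

async function main() {
  const connection = await mysql.createConnection({
    host: 'db-example.hostman.cloud', // hypothetical endpoint from your panel
    port: 3306,
    user: 'app_user',
    password: process.env.DB_PASSWORD, // keep secrets out of source code
    database: 'app_db',
  });

  // Run a simple query to verify the connection works.
  const [rows] = await connection.query('SELECT NOW() AS server_time');
  console.log(rows[0].server_time);

  await connection.end();
}

main().catch(console.error);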

Get started with the Hostman cloud database platform

Don’t let your database slow you down. Choose Hostman for reliable, scalable cloud database solutions that grow with your business.

Transparent, predictable pricing for your needs

MySQL
New York
1 x 3 GHz CPU, 1 GB RAM, 20 GB NVMe, 200 Mbps bandwidth: $4/mo
2 x 3 GHz CPU, 2 GB RAM, 60 GB NVMe, 200 Mbps bandwidth: $9/mo
2 x 3 GHz CPU, 4 GB RAM, 80 GB NVMe, 200 Mbps bandwidth: $18/mo
4 x 3 GHz CPU, 8 GB RAM, 160 GB NVMe, 200 Mbps bandwidth: $36/mo
6 x 3 GHz CPU, 16 GB RAM, 320 GB NVMe, 200 Mbps bandwidth: $72/mo
8 x 3 GHz CPU, 32 GB RAM, 640 GB NVMe, 200 Mbps bandwidth: $114/mo
16 x 3 GHz CPU, 64 GB RAM, 1280 GB NVMe, 200 Mbps bandwidth: $288/mo

One panel to rule them all

Easily control your database, pricing plan, and additional services through the intuitive Hostman management console.
Easy set up and management
Ready-to-deploy cloud database solutions come pre-configured. Choose your setup, launch your database, and begin managing your data with ease.
Saves time and resources
Forget about configuring hardware and software or manual database management—our service has it all covered for you.
Security
Deploy databases on an isolated network to maintain private access solely through your own infrastructure.
Hostman management console: statistics for the past hour
Anup K.
Associate Cloud Engineer
5.0 out of 5

"Hostman Comprehensive Review of Simplicity and Potential"

It's been a few years that I have been working on Cloud and most of the cloud service...
Mansur H.
Security Researcher
5.0 out of 5

"A perfect fit for everything cloud services!"

Hostman's seamless integration, user-friendly interface and its robust features (backups, etc.) make it much easier...
Adedeji E.
DevOps Engineer
5.0 out of 5

"Superb User Experience"

For me, Hostman is exceptional because of its flexibility and user-friendliness. The platform's ability to offer dedicated computing resources acr...
Yudhistira H.
Mid-Market(51-1000 emp.)
5.0 out of 5

"Streamlined Cloud Excellence!"

What I like best about Hostman is their exceptional speed of deployment, scalability, and robust security features. Their...
Mohammad Waqas S.
Biotechnologist and programmer
5.0 out of 5

"Seamless and easy to use Hosting Solution for Web Applications"

From the moment I signed up, the process has been seamless and straightforward...
Mohana R.
Senior Software Engineer
5.0 out of 5

"Availing Different DB Engine Services Provided by Hostman is Convenient for my Organization usecases"

Hostman manages the cloud operations...
Faizan A.
5.0 out of 5

"Hostman is a great fit for me"

Hostman is a great fit for me. What do you like best about Hostman? It was very easy to deploy my application and create database, I didn't have
Adam M.
5.0 out of 5

"Perfect website"

This website is extremely user friendly and easy to use. I had no problems so didn't have to contact customer support. Really good website and would recommend to others.
Anup K.
4.0 out of 5

"Simplifying Cloud Deployment with Strengths and Areas for Growth"

What I like best about Hostman is its unwavering commitment to simplicity...
Naila J.
5.0 out of 5

"Streamlined Deployment with Room for Improvement"

Hostman impresses with its user-friendly interface and seamless deployment process, simplifying web application hosting...

Compare Hostman Cloud Database with leading providers

Discover how Hostman stands out against other top cloud database providers in terms of pricing, support, and features.
Providers compared: Hostman, DigitalOcean, Google Cloud, Vultr
Price: Hostman from $4/mo; DigitalOcean $6; Google Cloud $6.88; Vultr $5
Tech support: Hostman free; DigitalOcean $24/mo; Google Cloud $29/mo + 3% of monthly charges; Vultr $29/mo or 3% of monthly charges
Backups: Hostman from $0.07/GB; DigitalOcean 20% or 30% higher base daily/weekly fee; Google Cloud $0.03/GB per month; Vultr 20% higher base monthly/hourly fee
Bandwidth: Hostman free; DigitalOcean 1 TB free, then $0.01/GiB of additional transfer; Google Cloud $0.01 per GB; Vultr $0.09/GB for the first 10 TB/mo
Live chat support
Avg. support response time: Hostman <15 min; DigitalOcean <24 hours; Google Cloud <4 hours; Vultr <12 hours

Ready to get started?

Sign up and discover how easy cloud database management can be with Hostman.

Start turning your ideas into solutions with Hostman products and services

See all Products

Trusted by 500+ companies and developers worldwide

Global network of Hostman's data centers

Our servers, certified with ISO/IEC 27001, are located in Tier 3 data centers across the US, Europe, and Asia.
Hostman's Locations
🇺🇸 San Francisco
🇺🇸 San Jose
🇺🇸 Texas
🇺🇸 New York
🇳🇱 Amsterdam
🇳🇬 Lagos
🇩🇪 Frankfurt
🇵🇱 Gdansk
🇦🇪 Dubai
🇸🇬 Singapore

Explore more of Hostman cloud databases

React

Optimizing Server Requests With React Hooks

In the world of modern web applications, efficient server request management is becoming an increasingly important task. The increased volume of requests can lead to slower performance, poor system responsiveness, and resource overuse. Developers aim to create applications that not only provide high-quality service to users but also make efficient use of available resources.

React Hooks are functions from the React library that allow interaction with functional components, including managing state, context, lifecycle, and other aspects. With the introduction of React Hooks, developers gained an effective and versatile tool for configuring server interactions, allowing them to determine the timing and frequency of network requests accurately.

In this React Hooks tutorial, we will thoroughly explore how the use of this tool contributes to the optimization of server requests. We will analyze various methods for managing application state that help reduce server load and enhance the user experience. By the end, you will learn how to integrate these methods into your React applications to create faster and more responsive interfaces.

To follow all the steps outlined in this guide, you will need the following skills and tools:

- Basic knowledge of JavaScript and the ability to create a React application from scratch
- Understanding of the basics of React Hooks
- Skills in performing server requests in JavaScript
- A working development environment
- A code editor

Don’t worry if you're unfamiliar with server request optimization — this guide is intended for developers of all skill levels. If some concepts are new to you, you may need to conduct additional research and experiments and invest extra time in learning new material.

Creating a New React Project

Before diving into React application development, let's begin with the fundamental step—creating a new project. This initial stage lays the foundation for all subsequent development, and setting up the project correctly can greatly simplify the process of building and maintaining your code in the future. React, as one of the most popular libraries for building user interfaces, offers a variety of tools and templates to streamline and simplify the early stages of development. By leveraging modern tools and approaches such as Create React App (CRA), you can quickly create a stable, ready-to-use base that allows you to focus on writing functionality rather than configuring the environment.

Before starting work on your project, ensure that your computer has all the necessary components for working with React, specifically Node.js and npm. Otherwise, download them from the official website. After installing these tools, open your terminal or command prompt, navigate to the directory where you want to create your application, and follow the instructions below based on the tool you prefer to work with: Create React App or Vite.

Create React App

Run the following command to initialize a new React project:

npx create-react-app hostman-app

Replace hostman-app with the actual name of your project. This command will download all the necessary dependencies and create a boilerplate project ready for development. If the application is successfully created, the console will display output similar to the following:

Success! Created hostman-app at your_file_path/hostman-app
Inside that directory, you can run several commands:

  npm start
    Starts the development server.

  npm run build
    Bundles the app into static files for production.

  npm test
    Starts the test runner.
  npm run eject
    Removes this tool and copies build dependencies, configuration files and scripts into the app directory. If you do this, you can’t go back!

We suggest that you begin by typing:

  cd hostman-app
  npm start

Happy hacking!

This will create a new directory with the name of your project containing all the necessary files. After the setup process is complete, navigate to the newly created directory using the following command:

cd hostman-app

Then, run the following command as instructed in the welcome message to start the development server:

npm start

If everything is done correctly, the server will start, and you should see the following output on your screen:

Compiled successfully!
You can now view hostman-app in the browser.
http://localhost:3000
Note that the development build is not optimized. To create a production build, use npm run build.

This will open your new React application in the default browser. You should see the React startup screen. You can now begin working with hooks in the App.js file, located at /hostman-app/src/.

Vite

To initialize a new React project with Vite, run the following command:

npm create vite@latest hostman-app

Replace hostman-app with the actual name of your project. This command will download all the necessary dependencies and create a boilerplate project ready for development. When prompted to install Vite, confirm by entering y. Select the React type of application.

After the setup process is complete, navigate to the newly created directory using the command:

cd hostman-app

Then execute the following commands to start the development server:

npm install
npm run dev

If everything is done correctly, the server will start, and the address of your application will be displayed, such as:

http://localhost:5173

Copy the address from the console and paste it into your browser's address bar. You should see the React welcome screen. You can now begin working with hooks in the App.jsx file, located at /hostman-app/src/.

Synchronizing Components with the useEffect Hook

Developing modern web applications with React requires special attention to managing side effects. These effects can include sending server requests, subscribing to data updates, modifying document headers, setting timers, and other actions beyond merely displaying information. The useEffect hook in React provides developers with a powerful tool to control these side effects in functional components, improving performance and making processes more transparent.

One of the key and widely used applications of useEffect is making server requests and subsequently updating the component's state. An example of using the useEffect hook for server requests involves calling a function that performs the request inside this hook. The function can use the Fetch API or the Axios library to make the request and then update the component's state using a state setter from useState.

Below is an example of using the useEffect hook to fetch data from the JSON Placeholder API and update the component's state. Navigate to the App.js or App.jsx file inside the src folder of your project.
Delete the default code and replace it with the following example:

import React, { useEffect, useState } from 'react';

function MyComponent() {
  const [data, setData] = useState([]);

  useEffect(() => {
    async function fetchData() {
      const response = await fetch('https://jsonplaceholder.typicode.com/posts');
      const data = await response.json();
      setData(data);
    }
    fetchData();
  }, []);

  return (
    <div>
      {data.map((item) => (
        <div key={item.id}>
          <h2>{item.title}</h2>
          <p>{item.body}</p>
        </div>
      ))}
    </div>
  );
}

export default MyComponent;

After importing standard hooks and declaring the functional component, we use the useState hook to create the data state and the setData function, which will be used to update this state. The initial state is an empty array since we expect a list of data. The useEffect hook performs an asynchronous API request during the component's initial render using the fetchData function. Using the .map method, we render each element from the updated data state as separate components. Each element includes a title and body. We assign item.id as a unique key for each <div> to enable React to identify and manage these components in the DOM efficiently. Finally, the MyComponent component is exported so it can be used in other parts of the application.

If you refresh the browser or the application, you should see the result of the request displayed based on the provided code.

An important aspect is minimizing unnecessary re-renders, which can negatively impact performance. Proper error handling when performing server requests is also critically important to prevent component failures. You can implement error handling by adding a try-catch block inside the fetchData function and using a setError state setter to update the component's state with an error message. This way, the application can display an error message to the user if something goes wrong, as shown in the sketch below.
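As a rough illustration of the error handling just described, here is one possible sketch (not code from the original article): an error state is added with useState, and the request is wrapped in try...catch so the component renders a message instead of failing when the fetch throws.

import React, { useEffect, useState } from 'react';

function MyComponent() {
  const [data, setData] = useState([]);
  const [error, setError] = useState(null); // holds an error message, if any

  useEffect(() => {
    async function fetchData() {
      try {
        const response = await fetch('https://jsonplaceholder.typicode.com/posts');
        if (!response.ok) {
          throw new Error(`Request failed with status ${response.status}`);
        }
        setData(await response.json());
      } catch (err) {
        // Store the message so the component can show it instead of crashing
        setError(err.message);
      }
    }
    fetchData();
  }, []);

  if (error) {
    return <p>Something went wrong: {error}</p>;
  }

  return (
    <div>
      {data.map((item) => (
        <div key={item.id}>
          <h2>{item.title}</h2>
          <p>{item.body}</p>
        </div>
      ))}
    </div>
  );
}

export default MyComponent;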
Optimizing Server Request Performance with the useMemo Hook

The useMemo hook in React is a performance optimization tool that allows developers to memoize data, storing the result of computations for reuse without repeating the process itself. useMemo returns a cached value that is recalculated only when specified dependencies change. This prevents costly computations on every component render.

One effective way to use the useMemo hook in the context of server requests is to memoize values derived from data fetched in a useEffect hook. To achieve this, call useMemo at the top level of the component, passing a function that computes the derived value as the first argument and a dependency array as the second. The dependency array should include all props or state variables that affect the calculation of the memoized data.

Below is an example of using the useMemo hook to memoize data from the JSON Placeholder API and update the component's state. Replace the code in the App.js / App.jsx file with the provided snippet.

import { useEffect, useState, useMemo } from 'react';

function MyComponent({ postId }) {
  const [data, setData] = useState({});

  useEffect(() => {
    async function fetchData() {
      const response = await fetch(`https://jsonplaceholder.typicode.com/posts/1`);
      const data = await response.json();
      setData(data);
    }
    fetchData();
  }, [postId]);

  const title = useMemo(() => data.title, [data]);

  return (
    <div>
      <h2>{title}</h2>
    </div>
  );
}

export default MyComponent;

First, we import the useEffect, useState, and useMemo hooks to manage the component's state. We use useState to create the data state and the setData function to update it. The initial state is an empty object, which will later hold the post information fetched from the server. Using the fetchData function, we make an API request, passing the postId parameter in the dependency array. This ensures that the effect is executed only when postId changes. Within the component, we use the useMemo hook to memoize the title by passing a function that returns data.title as the first argument and [data] as the second argument, so the memoized value updates only when the data state changes. The subsequent steps are similar to the previous useEffect example.

It is important to note that useMemo is not always necessary. You should use it only when the component depends on frequently changing props or state and when the value computation is expensive. Improper use of useMemo can lead to memory leaks and other performance issues.

Managing Server Request State with the useReducer Hook

The useReducer hook in React is similar to the useState hook but provides a more structured and predictable way of managing state. Instead of updating the state directly, useReducer allows you to dispatch actions that describe the state update and use a reducer function to update the state based on the dispatched action.

One of the key benefits of using useReducer for managing server requests is improved organization of logic. Rather than spreading server request handling logic throughout the component, you can encapsulate it within the reducer. This makes the component's code cleaner, more readable, and easier to maintain.

To try this approach using the useReducer hook for managing data fetched from the JSON Placeholder API and updating the component's state, replace the code in the App.js / App.jsx file with the provided snippet.

import { useReducer } from 'react';

const initialState = {
  data: [],
  loading: false,
  error: ''
};

const reducer = (state, action) => {
  switch (action.type) {
    case 'FETCH_DATA_REQUEST':
      return { ...state, loading: true };
    case 'FETCH_DATA_SUCCESS':
      return { ...state, data: action.payload, loading: false };
    case 'FETCH_DATA_FAILURE':
      return { ...state, error: action.payload, loading: false };
    default:
      return state;
  }
};

function MyComponent() {
  const [state, dispatch] = useReducer(reducer, initialState);

  const fetchData = async () => {
    dispatch({ type: 'FETCH_DATA_REQUEST' });
    try {
      const response = await fetch('https://jsonplaceholder.typicode.com/posts');
      const data = await response.json();
      dispatch({ type: 'FETCH_DATA_SUCCESS', payload: data });
    } catch (error) {
      dispatch({ type: 'FETCH_DATA_FAILURE', payload: error.message });
    }
  };

  return (
    <div>
      {state.loading ? (
        <p>Loading...</p>
      ) : state.error ? (
        <p>{state.error}</p>
      ) : (
        <div>
          {state.data.map((item) => (
            <div key={item.id}>
              <h2>{item.title}</h2>
              <p>{item.body}</p>
            </div>
          ))}
        </div>
      )}
      <button onClick={fetchData}>Load data</button>
    </div>
  );
}

export default MyComponent;

In the code snippet above, we call the useReducer hook with a reducer function and an initial state passed as arguments. Initially, the component's state is set up as follows:

- An empty array for storing data
- A loading variable set to false
- An empty string for displaying error messages

Clicking the "Load data" button triggers the fetchData function. This function dispatches actions based on the result of the request: either a successful response or an error. Additionally, the useReducer hook allows for more effective management of complex states.
Using actions and reducers to update the state simplifies handling how different actions affect various parts of the state, making it easier to add new features and debug issues in the application.

Conclusion

This guide has covered the basics of optimizing server requests using React Hooks. Optimizing server requests is essential for improving the performance and usability of your web application. In this article, we explored key techniques for request optimization:

- Caching results with useMemo
- Centralized state management of requests with useReducer
- Efficient use of useEffect for performing asynchronous operations dependent on your component's state parameters

Understanding and applying these optimization methods reduces response time, decreases server load, and enhances the overall user experience. Mastering these techniques will help you build more efficient and responsive applications ready to handle various loads and usage scenarios.

Once you’ve mastered the basic hooks, several advanced concepts are worth exploring for more sophisticated state and logic management in React applications. Here are some additional hooks to consider:

- useContext. This hook allows access to a context created using React.createContext. It enables you to share information between components in a hierarchy without passing props at every level.
- useCallback. This hook provides a memoized version of a callback function, which updates only when dependencies change. It's a tool for improving application performance (see the short sketch at the end of this article).
- useRef. This hook returns an object with a .current property, useful for persisting values across renders without triggering re-renders.
- useImperativeHandle. Used with React.forwardRef to customize the instance value assigned to a parent component when using refs.
- useLayoutEffect. Similar to useEffect, but it runs synchronously after all DOM changes. It's helpful when you need to measure and modify the DOM immediately after rendering.

These hooks provide powerful tools for managing component state and behavior in React, each with unique use cases and benefits. Exploring and incorporating them into your projects can help you create highly dynamic, efficient, and maintainable applications. In addition, you can deploy React applications on our platform as a service.
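To make the useCallback item above more concrete, here is a minimal sketch. It is an illustration added to this overview rather than code from the original article: the handler passed to a child component is memoized so the child does not receive a new function reference on every render.

import React, { useCallback, useState } from 'react';

// The child only re-renders when its props actually change.
const RefreshButton = React.memo(function RefreshButton({ onRefresh }) {
  return <button onClick={onRefresh}>Refresh</button>;
});

function Posts() {
  const [posts, setPosts] = useState([]);

  // Without useCallback, a new loadPosts function would be created on every
  // render, defeating React.memo on the child component.
  const loadPosts = useCallback(async () => {
    const response = await fetch('https://jsonplaceholder.typicode.com/posts');
    setPosts(await response.json());
  }, []);

  return (
    <div>
      <RefreshButton onRefresh={loadPosts} />
      {posts.map((post) => (
        <p key={post.id}>{post.title}</p>
      ))}
    </div>
  );
}

export default Posts;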
31 January 2025 · 13 min to read
Node.js

Difference Between Polling and Webhook in Telegram Bots

When developing Telegram bots using Node.js, there are two main methods for receiving user messages: Polling and Webhook. Both serve the purpose of handling incoming requests, but each has its unique features, making them suitable for different scenarios.

What is Polling?

Polling is a method of fetching updates from the Telegram server by periodically sending requests. The bot sends requests at specific time intervals to check for new messages or events. There are two types of polling: Long Polling and Short Polling.

Long Polling

In Long Polling, the bot sends a request to the server and waits for a response. If there are no new messages, the server holds the request open until a new message arrives or the timeout period ends. Once the bot receives a response, it immediately sends a new request.

Here’s an example where the bot is configured to poll the Telegram server every 3 seconds, with a timeout of 10 seconds:

const TelegramBot = require('node-telegram-bot-api');

const token = 'TOKEN';

// Create a bot instance with Long Polling enabled
const bot = new TelegramBot(token, {
  polling: {
    interval: 3000, // Interval between requests (3 seconds)
    autoStart: true, // Automatically start polling
    params: {
      timeout: 10 // Request timeout (10 seconds)
    }
  }
});

bot.on('message', (msg) => {
  const chatId = msg.chat.id;
  const text = msg.text;

  // Respond to the received message
  bot.sendMessage(chatId, `You wrote: ${text}`);
});

bot.onText(/\/start/, (msg) => {
  const chatId = msg.chat.id;
  bot.sendMessage(chatId, 'Hello! I am a bot using Long Polling.');
});

Short Polling

In Short Polling, the bot sends requests to the server at short intervals, regardless of whether new messages are available. This method is less efficient because it generates more network requests and consumes more resources. In this case, the bot constantly requests updates from the server without keeping the connection open for a long time. This can lead to high network usage, especially with heavy traffic.

Here’s an example of a bot using Short Polling:

const TelegramBot = require('node-telegram-bot-api');

const token = 'TOKEN';

// Create a bot instance with Short Polling enabled
const bot = new TelegramBot(token, { polling: true });

bot.on('message', (msg) => {
  const chatId = msg.chat.id;
  const text = msg.text;
  bot.sendMessage(chatId, `You wrote: ${text}`);
});

bot.onText(/\/start/, (msg) => {
  const chatId = msg.chat.id;
  bot.sendMessage(chatId, 'Hello! I am a bot using Short Polling.');
});

What is Webhook?

Webhook is a method that allows a bot to receive updates automatically. Instead of periodically polling the Telegram server, the bot provides Telegram with a URL where POST requests will be sent whenever new updates arrive. This approach helps to use resources more efficiently and minimizes latency.

In the following example, the bot receives requests from Telegram via Webhook, eliminating the need for frequent server polling. This reduces server load and ensures instant message handling.
const TelegramBot = require('node-telegram-bot-api');
const express = require('express');
const bodyParser = require('body-parser');

const token = 'TOKEN';

// Your server URL
const url = 'https://your-server.com';
const port = 3000;

// Create a bot instance without automatic polling
const bot = new TelegramBot(token, { webHook: true });

// Set the Webhook URL for your server
bot.setWebHook(`${url}/bot${token}`);

// Configure the Express server
const app = express();
app.use(bodyParser.json());

// Request handler for incoming updates from Telegram
app.post(`/bot${token}`, (req, res) => {
  bot.processUpdate(req.body);
  res.sendStatus(200);
});

bot.on('message', (msg) => {
  const chatId = msg.chat.id;
  bot.sendMessage(chatId, `You wrote: ${msg.text}`);
});

// Start the server
app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});

To run the code and start the bot, install the required libraries:

npm install node-telegram-bot-api express

Server Setup

We need to set up a server to work with Webhook. We'll use Hostman for this.

Step 1: Set Up a Cloud Server

Log in to your Hostman control panel and start by creating a new project. Next, create a cloud server. During the server creation process, select the Marketplace tab and choose Node.js. When the server starts, Node.js will automatically be installed. Choose the nearest region with the lowest ping. You can choose the configuration according to your needs, but for testing purposes, the minimum configuration will suffice. In the Network settings, make sure to assign a public IP. Leave the Authorization and Cloud-init settings unchanged. In the server's information, specify the server name and description, and select the project created earlier. Once all settings are configured, click on the Order button. The server will start, and you will receive a free domain.

Step 2: Install an SSL Certificate

Since Telegram's API only works with HTTPS, you need to install an SSL certificate. For this, you will need a registered domain name. To set up the web server and install the certificate, execute the following commands sequentially.

Update available package lists:

sudo apt update

Create and open the Nginx configuration file:

sudo nano /etc/nginx/sites-available/your_domain

Inside this file, add the following configuration:

server {
    listen 80;
    server_name your_domain;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Replace your_domain with your actual domain name in this file and throughout the console.

Create a symbolic link to the file:

sudo ln -s /etc/nginx/sites-available/your_domain /etc/nginx/sites-enabled/

Restart Nginx:

sudo systemctl restart nginx

Install certbot to create SSL certificates:

sudo apt install certbot python3-certbot-nginx

Use certbot to configure the SSL certificate:

sudo certbot --nginx -d your_domain

Replace your_domain with your actual domain name.

Examples of Using Polling and Webhook

Before choosing a method for receiving messages, it is important to consider the characteristics of each approach and its applicability in different situations.

Polling:

- Local Development: When developing and testing a bot on a local machine, using Long Polling allows for easy updates without the need to set up a server.
- Small Projects: If you are creating a bot for a small group of users or for personal use, and you do not have strict requirements for response time, Polling will be sufficient.
- Low Traffic Projects: If your bot is not expecting a large number of messages, using Short Polling can be appropriate as it is simple to implement.

Webhook:

- Production Applications: For bots working in a production environment where immediate responses to events are important, Webhook is the preferred choice. For example, bots that handle payments or respond to user queries in real time should use Webhook to ensure high performance.
- High Traffic Systems: If you're developing a bot that will serve a large number of users, Webhook will be more efficient since it reduces server load by eliminating continuous requests.
- Systems with Long Operations: If your bot performs long operations (such as generating reports or processing data), Webhook can be used to notify users once these operations are complete.

Comparison of Polling and Webhook

To better understand the differences between the two approaches, here is a comparison of their characteristics:

- Method of data retrieval: Polling sends periodic requests to the Telegram server; Webhook receives updates automatically at a specified URL.
- Setup: Polling is simple to set up and requires no additional resources; Webhook requires an HTTPS server and an SSL certificate.
- Response speed: Polling may have slight delays due to polling intervals; Webhook provides near-instant message reception.
- Resource usage: Polling continuously requests updates, taxing the server; Webhook is more resource-efficient since updates arrive automatically.
- Infrastructure requirements: Polling does not require a public server; Webhook requires a public HTTPS server.
- Reliability: Polling does not depend on the availability of an external server; Webhook can be unavailable if there are issues with the HTTPS server.
- Local environment: Polling can be used locally for testing; Webhook is difficult to use without public access.

Conclusion

The choice between Polling and Webhook depends on the specific needs of your project. Polling is a simple and quick way to develop, especially in the early stages, while Webhook offers more efficient message processing for production environments.
31 January 2025 · 7 min to read
Node.js

How to Create a Telegram Bot Using Node.js

Telegram bots have become an integral part of this messenger: every day, hundreds of thousands of people use them—and for good reason. Telegram bots are easy for users to interact with, and developers can quickly and comfortably create them thanks to the constantly evolving Telegram API, which aims to improve daily.

The main idea behind Telegram bots is task automation and extending the messenger’s functionality. Bots can serve as simple assistants performing commands or as complex systems with full-fledged business logic. From sending out news updates to implementing intricate game mechanics—the possibilities for building bots are nearly limitless.

With Node.js, you can implement almost any functionality for a Telegram bot, thanks to its ecosystem of libraries and frameworks. Node.js, as a platform with asynchronous request handling, is ideal for building bots that need to work in real time and interact with thousands of users simultaneously. Here are some capabilities that can be implemented:

- Basic functionality: responding to commands, inline bots, buttons
- Integration with external services: APIs and databases, webhooks
- Notifications: sending scheduled notifications or alerts when certain events occur, or automatically sending news updates from sources every N seconds
- Analytics: collecting various statistics

Creating a Telegram Bot

First, you need to create a bot within Telegram. Use the official BotFather bot to register your bot.

Click the "Start" button (or, if you’ve already interacted with the bot, send the command /start). In BotFather’s response, find and select the /newbot command. BotFather will ask you to provide a bot name and then a username. The username must end with the word bot. For example, if your bot’s name is Tetris, the username should be one of the following:

- TetrisBot
- Tetris_bot
- Tetrisbot
- Tetris_Bot

If everything is entered correctly, your bot will be created. BotFather will also give you a unique bot token, which you must keep private.

Development

We will create a bot that sends various quizzes in the form of Telegram polls. The quiz topics will be school subjects. The bot will have two commands: one for sending questions and another for selecting quiz topics.

Preparing the Environment

Before starting development, ensure that Node.js and npm are installed on your PC. You can download Node.js from the official website, and npm will be installed automatically along with Node.js. If you are using Linux, you can install npm by following this guide.

Once Node.js is installed, you can begin developing the bot. First, create a new private repository on GitHub and select Node under the Add .gitignore section. Now, clone this repository to your PC using the terminal. If you want the project to be on your desktop, enter:

cd Desktop

Then enter:

git clone https://github.com/username/School-Quiz

Replace username with your actual GitHub username. You can also replace School-Quiz with any other project name.

After cloning the repository, without closing the terminal, enter:

cd School-Quiz

Replace School-Quiz with the actual name of the folder where your project was cloned from GitHub.

To initialize the project, run the following command:

npm init

You will be prompted to enter the package name, version, description, default entry file, test command, Git repository, keywords, author, and license. You can press "Enter" to accept the default values.

Now, let’s install the library that will be used to write the bot’s code.
Enter the following command in the terminal (ensuring that you are in the project folder):

npm install node-telegram-bot-api

Writing Code for the Quiz

After the installation is complete, you can start writing the code. Open the package.json file and find the scripts section. Inside it, above the test command, add the following line:

"start": "node index.js",

This allows you to start the project by simply entering npm start in the terminal instead of typing node followed by the file name.

Now, create a file called index.js and add the following code:

const TelegramBot = require('node-telegram-bot-api');
const fs = require('fs');

// Replace 'TOKEN' with the actual token provided by BotFather
const bot = new TelegramBot('TOKEN', { polling: true });

const ADMIN_ID = '1402655980';
let awaitingSupportMessage = {}; // Stores information about users waiting for support

// Stores selected topics for users
let userTopics = {};

// Topics and their respective question files
const topics = {
  math: { name: 'Math', file: 'questions/math.json' },
  spanish: { name: 'Spanish', file: 'questions/spanish.json' },
  history: { name: 'History', file: 'questions/history.json' }
};

// Function to retrieve questions based on selected topics
function getQuestionsByTopics(userId) {
  const selectedTopics = userTopics[userId] || Object.keys(topics);
  let allQuestions = [];
  selectedTopics.forEach(topic => {
    const questions = JSON.parse(fs.readFileSync(topics[topic].file, 'utf8'));
    allQuestions = allQuestions.concat(questions);
  });
  return allQuestions;
}

function getRandomQuestion(userId) {
  const questions = getQuestionsByTopics(userId);
  const randomIndex = Math.floor(Math.random() * questions.length);
  return questions[randomIndex];
}

bot.onText(/\/quiz/, (msg) => {
  const chatId = msg.chat.id;
  const userId = msg.from.id;

  // Retrieve a random question
  const questionData = getRandomQuestion(userId);

  // Send the poll as a quiz
  bot.sendPoll(
    chatId,
    questionData.question, // The question text
    questionData.options, // Answer options
    {
      type: 'quiz', // Quiz type
      correct_option_id: questionData.correct_option_id, // Correct answer
      is_anonymous: false // The quiz won't be anonymous
    }
  ).then(pollMessage => {
    // Handle poll results
    bot.on('poll_answer', (answer) => {
      if (answer.poll_id === pollMessage.poll.id) {
        const selectedOption = answer.option_ids[0];
        // Check if the answer is correct
        if (selectedOption !== questionData.correct_option_id) {
          bot.sendMessage(chatId, questionData.explanation);
        }
      }
    });
  });
});

bot.onText(/\/settopic/, (msg) => {
  const chatId = msg.chat.id;
  const userId = msg.from.id;

  const keyboard = Object.keys(topics).map(topicKey => ({
    text: `${(userTopics[userId] || []).includes(topicKey) ? '✅ ' : ''}${topics[topicKey].name}`,
    callback_data: topicKey
  }));

  bot.sendMessage(chatId, 'Select the topics for questions:', {
    reply_markup: {
      inline_keyboard: [keyboard]
    }
  });
});

// Topic selection handler
bot.on('callback_query', (callbackQuery) => {
  const message = callbackQuery.message;
  const userId = callbackQuery.from.id;
  const topicKey = callbackQuery.data;

  // Initialize selected topics for the user if they don't exist
  if (!userTopics[userId]) {
    userTopics[userId] = Object.keys(topics);
  }

  // Add or remove the selected topic
  if (userTopics[userId].includes(topicKey)) {
    userTopics[userId] = userTopics[userId].filter(t => t !== topicKey);
  } else {
    userTopics[userId].push(topicKey);
  }

  // Update the message with buttons
  const keyboard = Object.keys(topics).map(topicKey => ({
    text: `${userTopics[userId].includes(topicKey) ? '✅ ' : ''}${topics[topicKey].name}`,
    callback_data: topicKey
  }));

  bot.editMessageReplyMarkup({
    inline_keyboard: [keyboard]
  }, {
    chat_id: message.chat.id,
    message_id: message.message_id
  });
});

bot.onText(/\/start/, (msg) => {
  const chatId = msg.chat.id;
  bot.sendMessage(chatId, "Hello! Type /quiz to start a quiz. Use /settopic to choose topics.");
});

console.log('Bot is running.');
Quiz Questions Files

Now, create a folder named questions inside your project. Within this folder, create three JSON files.

spanish.json

[
  {
    "question": "How do you say 'I' in Spanish?",
    "options": ["Yo", "Tú", "Nosotros"],
    "correct_option_id": 0,
    "explanation": "The correct answer is: Yo."
  },
  {
    "question": "What does the verb 'correr' mean?",
    "options": ["to run", "to walk", "to stand"],
    "correct_option_id": 0,
    "explanation": "The correct answer is: to run."
  },
  {
    "question": "How do you say 'she' in Spanish?",
    "options": ["Tú", "Ella", "Vosotros"],
    "correct_option_id": 1,
    "explanation": "The correct answer is: Ella."
  }
]

history.json

[
  {
    "question": "In which year did World War II begin?",
    "options": ["1939", "1941", "1914"],
    "correct_option_id": 0,
    "explanation": "The correct answer is: 1939."
  },
  {
    "question": "Who was the first president of the United States?",
    "options": ["Abraham Lincoln", "George Washington", "Franklin Roosevelt"],
    "correct_option_id": 1,
    "explanation": "The correct answer is: George Washington."
  },
  {
    "question": "Which country was the first to send a human into space?",
    "options": ["USA", "USSR", "China"],
    "correct_option_id": 1,
    "explanation": "The correct answer is: USSR."
  }
]

math.json

[
  {
    "question": "What is 2 + 2?",
    "options": ["3", "4", "5"],
    "correct_option_id": 1,
    "explanation": "The correct answer is: 4."
  },
  {
    "question": "What is 5 * 5?",
    "options": ["10", "20", "25"],
    "correct_option_id": 2,
    "explanation": "The correct answer is: 25."
  },
  {
    "question": "What is 10 / 2?",
    "options": ["4", "5", "6"],
    "correct_option_id": 1,
    "explanation": "The correct answer is: 5."
  }
]

Each JSON file contains the question, answer options, the index of the correct answer, and an explanation that will be sent if the user selects the wrong answer.

Telegram Stars

Recently, Telegram introduced an internal currency called Telegram Stars, along with an API update allowing bots to support donations in Stars. Let’s add a /donate command to the index.js file. When users send this command, the bot will generate a payment invoice. Add the following code inside index.js:

bot.onText(/\/donate/, (msg) => {
  const chatId = msg.chat.id;
  bot.sendInvoice(chatId,
    'Donation',
    'Support the project with a donation',
    'unique_payload',
    '', // Empty provider_token for Stars Payments
    'XTR', // Currency "XTR"
    [{ label: 'Donation', amount: 1 }] // Amount: 1 Star
  );
});

Support Command

Let’s add another command called /support. This command allows a large number of users to contact you without creating multiple unnecessary chats. Users will be able to send text, photos, and videos, and the bot will forward these messages directly to the admin (in this case, you). Place the following code inside index.js. At the beginning of the file, add:

const ADMIN_ID = 'ID';
let awaitingSupportMessage = {}; // Stores information about users waiting for support

The ADMIN_ID tells the bot where to forward the user’s message. To find your ID, you can use the Get My ID bot by simply sending the /start command to it.
At the end of the file, add the following code:

bot.onText(/\/support/, (msg) => {
  const chatId = msg.chat.id;
  const userId = msg.from.id;

  // Inform the user that we are waiting for their message
  bot.sendMessage(chatId, "Please send your message in a single message, including text, photos, or videos!");

  // Mark the user as currently composing a support message
  awaitingSupportMessage[userId] = true;
});

Handling All Messages

This section processes all incoming messages and checks if they are part of a support request. Add the following code to handle different types of user content:

bot.on('message', (msg) => {
  const userId = msg.from.id;

  // Check if the user is sending a message after the /support command
  if (awaitingSupportMessage[userId]) {
    const chatId = msg.chat.id;
    const caption = msg.caption || ''; // Include caption if present

    // Check the type of message and forward the corresponding content to the admin
    if (msg.text) {
      // If the message contains text
      bot.sendMessage(ADMIN_ID, `New support request from @${msg.from.username || msg.from.first_name} (ID: ${userId}):\n\n${msg.text}`);
    } else if (msg.photo) {
      // If the message contains a photo
      const photo = msg.photo[msg.photo.length - 1].file_id; // Select the highest resolution photo
      bot.sendPhoto(ADMIN_ID, photo, {
        caption: `New support request from @${msg.from.username || msg.from.first_name} (ID: ${userId})\n\n${caption}`
      });
    } else if (msg.video) {
      // If the message contains a video
      const video = msg.video.file_id;
      bot.sendVideo(ADMIN_ID, video, {
        caption: `New support request from @${msg.from.username || msg.from.first_name} (ID: ${userId})\n\n${caption}`
      });
    } else {
      // If the message type is unsupported
      bot.sendMessage(msg.chat.id, "Sorry, this type of message is not supported.");
    }

    // Confirm to the user that their message has been sent
    bot.sendMessage(chatId, "Your message has been sent. The administrator will contact you soon.");

    // Remove the user from the list of those composing a support message
    delete awaitingSupportMessage[userId];
  }
});

Deployment on a Server

For our bot to operate continuously, we must upload and run it on a server. For deployment, we will use Hostman cloud servers.

Uploading to GitHub

Before launching the bot on the server, you first need to upload the project files to GitHub. Run the following commands in the console in sequence.

Add all changes in the current directory to the next commit:

git add .

Create a commit with the message "first commit", recording all changes added with git add:

git commit -m "first commit"

Push the changes to GitHub:

git push

Server Setup

Go to your Hostman control panel and:

- Create a New Project (optional): Specify an icon, a name, a description, and add users if necessary.
- Create a Cloud Server: Either from your project or from the Cloud servers page, start creating a new cloud server.
- Select the Region: Choose the region that is closest to you or where the lowest ping is available.
- Go to the Marketplace tab in the second step and select Node.js. Set the Ubuntu version to the latest one. This ensures that Node.js will already be installed on the server when it starts, so you won’t need to install it manually.
- Choose Configuration: Select the configuration according to your needs. For running the project, the minimum configuration is sufficient. If the project requires more resources in the future, you can upgrade the server without disrupting its operation.
- Network Settings: Ensure that you assign a public IP for the server. Configure any additional services as needed.
- Authorization and Cloud-init: In the Authorization step, you can add your SSH key to the server. However, it’s optional, and you can leave these settings as they are.
- Server Information: Provide the server’s name and description, and select the project to which you want to add the server.

Once everything is set up, click the Order button. After a short while, the server will be up and running, and you can proceed with the next steps.

Launching the Bot

After creating the server, go to the Dashboard tab, copy the Root password, and open the Console tab. Enter the username root and press Enter. Next, paste the password you copied and press Enter again. When typing or pasting the password, it will not be visible! If everything is correct, you will see a welcome message.

Now, run the following command to get the latest updates:

sudo apt-get update

Create a new folder where you will place the bot. Enter these commands in sequence:

cd /
sudo mkdir Bot
cd Bot

You can replace the folder name "Bot" with any other name you choose.

To ensure Git is installed on the server (it is usually pre-installed by default), check the version using:

git --version

Next, set up global Git settings to link it to your GitHub profile:

git config --global user.name "your GitHub username"
git config --global user.email "email used during registration"

After this, clone the repository by entering the following command with your repository URL:

git clone https://github.com/username/School-Quiz

During cloning, you will be prompted to enter your username and then your password. If you have two-factor authentication (2FA) enabled on your GitHub account, entering your regular password will result in an error saying the password is incorrect. To clone a repository with 2FA enabled, you need to create a personal access token:

- Click your profile picture in the top-right corner and select “Settings”.
- In the left-hand menu, click “Developer settings”.
- Under the “Personal access tokens” section, select “Tokens (classic)” and click “Generate new token”.
- Set token parameters: In the “Note” field, provide a description for the token. Set the expiration date for the token in the “Expiration” field. Under “Select scopes”, choose the necessary permissions for the token. For example, to work with repositories, select repo.
- Click “Generate token”.
- Copy the generated token and store it in a secure place. Note that you won’t be able to view the token again after closing the page.

Once you have the personal access token, use it instead of your password when prompted during the repository cloning process.

Navigate to your project folder using the following command:

cd School-Quiz

Replace School-Quiz with the actual name of your project.

To install the project dependencies, run:

npm install

Once the packages are installed, you can start the project by running:

npm start

In the console, you should see the message “Bot is running”. However, there is one issue: if you restart the server or close the console, the bot will stop working! To ensure the bot runs continuously and automatically starts after a server reboot, you need to install a process manager like pm2.

Install pm2 globally using the following command:

sudo npm install pm2 -g

Next, start the Node.js server using pm2:

sudo pm2 start index.js --name "bot-quiz" --watch

In this example, the process is named bot-quiz, but you can use any name you prefer.
Set up automatic startup on server reboot:

sudo pm2 startup

Save all the changes made:

sudo pm2 save

Conclusion

In this guide, we covered the entire process of creating a Telegram bot using Node.js, from registering the bot via BotFather to deploying the finished solution on a server.
31 January 2025 · 15 min to read
Debian

How to Install and Configure VNC on Debian

The term Virtual Network Computing (VNC) refers to a system for remote access to a computer’s desktop environment. It allows users to interact with the interface, access files on storage, run applications, and modify operating system settings. A similar approach is used for managing virtual machines rented from providers like Hostman.

This guide will walk you through setting up a VNC server on a VPS/VDS running Debian, with a secure connection established over SSH. For this example, we’ll use the TightVNC utility, known for its reliable performance even over low-speed connections and seamless file transfers in both directions (to and from the server).

Technical Requirements

Before starting, ensure you have a prepared Debian server, either in the cloud or locally. Apart from having the operating system ready, it's recommended to configure both a root user and a sudo user (the former without privileges and the latter with them). Additionally, you must allow SSH connections through the firewall.

You will need the following:

- A machine running Windows or macOS.
- A pre-installed VNC client such as TightVNC, RealVNC, or UltraVNC on Windows, or Screen Sharing on macOS. Alternatively, if you are using another Linux machine, you can install a VNC client such as Vinagre, KRDC, RealVNC, or TightVNC.

Installing the VNC Server and Desktop Environment

By default, a Debian server doesn’t have a graphical interface for easier management, nor does it include a remote management tool. Therefore, the first step is to install both. In this example, we’ll use the Xfce desktop environment and TightVNC, both of which are available in Debian’s official repository.

Update the Package List

First, update the list of available packages on the host system by running:

sudo apt update

Install the Xfce Desktop Environment

Next, install the Xfce desktop environment along with additional utilities:

sudo apt install xfce4 xfce4-goodies

During the installation, the system will prompt you to select a keyboard layout from the provided list. Choose the desired option and press Enter to continue. Once the installation is completed, proceed to install the VNC server.

Install the TightVNC Server

Use the following command to install TightVNC:

sudo apt install tightvncserver

After the installation, you need to configure TightVNC by setting a security password and generating configuration files where connection parameters will be stored.

Initial VNC Configuration

Run the following command to start configuring the VNC server:

vncserver

The program will prompt you to set a password for connecting to the remote system:

You will require a password to access your desktops.
Password:
Verify:

The password must be between 6 and 8 characters long. If a longer password is entered, it will be automatically truncated. Additionally, you can set up a view-only mode, where the connected user can only observe the desktop without being able to control the keyboard or mouse. This mode is useful for demonstrations.

After entering both passwords, the utility will generate a configuration file:

Would you like to enter a view-only password (y/n)? n
xauth:  file /home/username/.Xauthority does not exist

New 'X' desktop is your_hostname:1

Creating default startup script /home/username/.vnc/xstartup
Starting applications specified in /home/username/.vnc/xstartup
Log file is /home/username/.vnc/your_hostname:1.log

Configuring the VNC Server

The VNC server needs to be configured so that it knows what commands to execute upon startup—for example, specifying the desktop environment to be launched when a connection is established. These startup instructions are located in the xstartup file, which resides in the .vnc subdirectory of the home directory. This file is automatically created when you launch the vncserver for the first time. In this guide, we’ll modify the configuration to launch the Xfce graphical interface upon startup.

By default, VNC communicates with remote hosts using port 5901, also known as the display port for "display 1". Additional instances can be started on ports 5902, 5903, etc.

Stop the VNC Server

Before configuring VNC on Debian, stop the currently running instance with the following command:

vncserver -kill :1

The output will look something like this:

Killing Xtightvnc process ID 17648

Back Up the Original Configuration File

It’s a good practice to create a backup of the original xstartup file, so you can easily revert the settings if anything goes wrong:

mv ~/.vnc/xstartup ~/.vnc/xstartup.bak

Create and Edit a New xstartup File

Now, generate a new xstartup file and open it for editing using a text editor (in this case, nano):

nano ~/.vnc/xstartup

The commands you add to this file will be automatically executed when the VNC server starts or restarts. Add the following lines to launch the Xfce desktop environment:

#!/bin/bash
xrdb $HOME/.Xresources
startxfce4 &

Here:

- The first line specifies that the script should be executed using the bash shell.
- The line with xrdb loads the .Xresources file, which defines terminal colors, cursor themes, font rendering, and other desktop appearance settings.
- The line startxfce4 & launches the Xfce graphical interface.

Make the xstartup File Executable

After editing the configuration file, make it executable by running:

sudo chmod +x ~/.vnc/xstartup

Restart the VNC Server

Finally, restart the VNC server:

vncserver

You’ll see the following output on the screen:

New 'X' desktop is your_hostname:1
Starting applications specified in /home/username/.vnc/xstartup
Log file is /home/username/.vnc/your_hostname:1.log

Configuring the VNC Desktop

By default, TightVNC establishes a connection without encryption. However, for our purposes, we require a secure tunnel using the SSH protocol. This involves creating a secure connection on the client side, which forwards data to localhost for handling by the VNC utility. You can achieve this by running the following command in the terminal (Linux or macOS):

ssh -L 5901:127.0.0.1:5901 -C -N -l user your_server_ip

- The -L option specifies port forwarding. The default configuration uses port 5901 on both the remote and local hosts.
- The -C option enables compression, which reduces the size of data sent between the client and server.
- The -N option tells the SSH protocol that no remote commands will be executed and that it is only being used for port forwarding.
- The -l option specifies the username for the remote connection.

In the above command, replace user with the username (typically a non-privileged user) and your_server_ip with the actual IP address of the remote host.
If you are using Windows, you can create the SSH tunnel using PuTTY, a popular SSH client with a graphical interface. In PuTTY, you need to: Enter the IP address of the remote host. Configure port forwarding by adding localhost:5901 as the new port for data redirection. Save the session settings and initiate the connection. Once you initiate the connection, the system will prompt you to enter the password you set during the initial VNC server configuration. The tunnel will only be activated after successful user authentication. Once connected, you will see the Xfce graphical interface as configured in the .Xresources file. You can finalize the desktop setup by selecting "Use default configuration" in the menu. To end the SSH session, press the key combination Ctrl+C. This will close the tunnel and terminate the remote session. Running VNC as a System Service In the final step, we will configure VNC Server as a system service on Debian, enabling you to start, stop, and restart it just like other system services. This ensures that the utility starts automatically with the server. To do this, we'll edit the configuration file /etc/systemd/system/[email protected]: sudo nano /etc/systemd/system/[email protected] The @ symbol is used as an argument to modify the service parameters. It is applied when you need to specify the display port used by the VNC utility. Add the following lines to the file (replace user, group, workingdirectory, and username with your own values): [Unit]Description=Start TightVNC server at startupAfter=syslog.target network.target[Service]Type=forkingUser=usernameGroup=usernameWorkingDirectory=/home/usernamePIDFile=/home/username/.vnc/%H:%i.pidExecStartPre=-/usr/bin/vncserver -kill :%i > /dev/null 2>&1ExecStart=/usr/bin/vncserver -depth 24 -geometry 1280x800 :%iExecStop=/usr/bin/vncserver -kill :%i[Install]WantedBy=multi-user.target The ExecStartPre command allows you to stop the VNC server if it is already running. The ExecStart command will restart the server and set the resolution to 1280x800 with 24-bit color. After editing the file, apply the changes and inform the system about the new file: sudo systemctl daemon-reload Next, enable the service: sudo systemctl enable [email protected] The 1 after the @ represents the display number where the service should be activated. It will always be "1" unless you change the default configuration, but you can specify another number if needed. Now, stop the active instance of the VNC server and start the new service: vncserver -kill :1sudo systemctl start vncserver@1 You can check if the VNC server is running with: sudo systemctl status vncserver@1 The result will look like this: [email protected] - Start TightVNC server at startup   Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: enabled)   Active: active (running) since Wed 2018-09-05 16:47:40 UTC; 3s ago  Process: 4977 ExecStart=/usr/bin/vncserver -depth 24 -geometry 1280x800 :1 (code=exited, status=0/SUCCESS)  Process: 4971 ExecStartPre=/usr/bin/vncserver -kill :1 > /dev/null 2>&1 (code=exited, status=0/SUCCESS)  Main PID: 4987 (Xtightvnc)... After these steps, the VNC server will be available after the system restarts. Now, initiate the SSH tunnel again: ssh -L 5901:127.0.0.1:5901 -C -N -l username your_server_ip This command will create a connection using the client application that forwards the connection from localhost:5901 to your local machine. Conclusion We have completed configuring and launching a secure VNC server on a Debian system. 
Now you can perform all the usual desktop operations remotely: installing and removing software, configuring applications, managing files, browsing the web, and so on.
31 January 2025 · 9 min to read
Linux

How to Use SSH Keys for Authentication

Many cloud applications are built on the popular SSH protocol—it is widely used for managing network infrastructure, transferring files, and executing remote commands. SSH stands for Secure Socket Shell, meaning it provides a shell (command-line interface) around the connection between multiple remote hosts, ensuring that the connection is secure (encrypted and authenticated). SSH connections are available on all popular operating systems, including Linux, Ubuntu, Windows, and Debian. The protocol establishes an encrypted communication channel within an unprotected network by using a pair of public and private keys. Keys: The Foundation of SSH SSH operates on a client-server model. This means the user has an SSH client (a terminal in Linux or a graphical application in Windows), while the server side runs a daemon, which accepts incoming connections from clients. In practice, an SSH channel enables remote terminal management of a server. In other words, after a successful connection, everything entered in the local console is executed directly on the remote server. The SSH protocol uses a pair of keys for encrypting and decrypting information: public key and private key. These keys are mathematically linked. The public key is shared openly, resides on the server, and is used to encrypt data. The private key is confidential, resides on the client, and is used to decrypt data. Of course, keys are not generated manually but with special tools—keygens. These utilities generate new keys using encryption algorithms fundamental to SSH technology. More About How SSH Works Exchange of Public Keys SSH relies on symmetric encryption, meaning two hosts wishing to communicate securely generate a unique session key derived from the public and private data of each host. For example, host A generates a public and private key pair. The public key is sent to host B. Host B does the same, sending its public key to host A. Using the Diffie-Hellman algorithm, host A can create a key by combining its private key with the public key of host B. Likewise, host B can create an identical key by combining its private key with the public key of host A. This results in both hosts independently generating the same symmetric encryption key, which is then used for secure communication. Hence, the term symmetric encryption. Message Verification To verify messages, hosts use a hash function that outputs a fixed-length string based on the following data: The symmetric encryption key The packet number The encrypted message text The result of hashing these elements is called an HMAC (Hash-based Message Authentication Code). The client generates an HMAC and sends it to the server. The server then creates its own HMAC using the same data and compares it to the client's HMAC. If they match, the verification is successful, ensuring that the message is authentic and hasn't been tampered with. Host Authentication Establishing a secure connection is only part of the process. The next step is authenticating the user connecting to the remote host, as the user may not have permission to execute commands. There are several authentication methods: Password Authentication: The user sends an encrypted password to the server. If the password is correct, the server allows the user to execute commands. Certificate-Based Authentication: The user initially provides the server with a password and the public part of a certificate. Once authenticated, the session continues without requiring repeated password entries for subsequent interactions. 
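As a side note, once key-based authentication is confirmed to work, administrators often disable password logins on the server entirely. A minimal fragment of the OpenSSH server configuration (typically /etc/ssh/sshd_config), shown purely as an illustration:
PubkeyAuthentication yes
PasswordAuthentication no
Remember to reload the SSH service after editing the file.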
These methods ensure that only authorized users can access the remote system while maintaining secure communication. Encryption Algorithms A key factor in the robustness of SSH is that decrypting the symmetric key is only possible with the private key, not the public key, even though the symmetric key is derived from both. Achieving this property requires specific encryption algorithms. There are three primary classes of such algorithms: RSA, DSA, and algorithms based on elliptic curves, each with distinct characteristics: RSA: Developed in 1978, RSA is based on integer factorization. Since factoring large semiprime numbers (products of two large primes) is computationally difficult, the security of RSA depends on the size of the chosen factors. The key length ranges from 1024 to 16384 bits. DSA: DSA (Digital Signature Algorithm) is based on discrete logarithms and modular exponentiation. While similar to RSA, it uses a different mathematical approach to link public and private keys. DSA key length is limited to 1024 bits. ECDSA and EdDSA: These algorithms are based on elliptic curves, unlike DSA, which uses modular exponentiation. They assume that no efficient solution exists for the discrete logarithm problem on elliptic curves. Although the keys are shorter, they provide the same level of security. Key Generation Each operating system has its own utilities for quickly generating SSH keys. In Unix-like systems, the command to generate a key pair is: ssh-keygen -t rsa Here, the type of encryption algorithm is specified using the -t flag. Other supported types include: dsa ecdsa ed25519 You can also specify the key length with the -b flag. However, be cautious, as the security of the connection depends on the key length: ssh-keygen -b 2048 -t rsa After entering the command, the terminal will prompt you to specify a file path and name for storing the generated keys. You can accept the default path by pressing Enter, which will create standard file names: id_rsa (private key) and id_rsa.pub (public key). Thus, the public key will be stored in a file with a .pub extension, while the private key will be stored in a file without an extension. Next, the command will prompt you to enter a passphrase. While not mandatory (it is unrelated to the SSH protocol itself), using a passphrase is recommended to prevent unauthorized use of the key by a third-party user on the local Linux system. Note that if a passphrase is used, you must enter it each time you establish the connection. To change the passphrase later, you can use: ssh-keygen -p Or, you can specify all parameters at once with a single command: ssh-keygen -p old_password -N new_password -f path_to_files For Windows, there are two main approaches: Using ssh-keygen from OpenSSH: The OpenSSH client provides the same ssh-keygen command as Linux, following the same steps. Using PuTTY: PuTTY is a graphical application that allows users to generate public and private keys with the press of a button. Installing the Client and Server Components The primary tool for an SSH connection on Linux platforms (both client and server) is OpenSSH. While it is typically pre-installed on most operating systems, there may be situations (such as with Ubuntu) where manual installation is necessary. The general command for installing SSH, followed by entering the superuser password, is: sudo apt-get install ssh However, in some operating systems, SSH may be divided into separate components for the client and server. 
For the Client To check whether the SSH client is installed on your local machine, simply run the following command in the terminal: ssh If SSH is supported, the terminal will display a description of the command. If nothing appears, you’ll need to install the client manually: sudo apt-get install openssh-client You will be prompted to enter the superuser password during installation. Once completed, SSH connectivity will be available. For the Server Similarly, the server-side part of the OpenSSH toolkit is required on the remote host. To check if the SSH server is available on your remote host, try connecting locally via SSH: ssh localhost If the SSH daemon is running, you will see a message indicating a successful connection. If not, you’ll need to install the SSH server: sudo apt-get install openssh-server As with the client, the terminal will prompt you to enter the superuser password. After installation, you can check whether SSH is active by running: sudo service ssh status Once connected, you can modify SSH settings as needed by editing the configuration file: ./ssh/sshd_config For example, you might want to change the default port to a custom one. Don’t forget that after making changes to the configuration, you must manually restart the SSH service to apply the updates: sudo service ssh restart Copying an SSH Key to the Server On Hostman, you can easily add SSH keys to your servers using the control panel. Using a Special Copy Command After generating a public SSH key, it can be used as an authorized key on a server. This allows quick connections without the need to repeatedly enter a password. The most common way to copy the key is by using the ssh-copy-id command: ssh-copy-id -i ~/.ssh/id_rsa.pub name@server_address This command assumes you used the default paths and filenames during key generation. If not, simply replace ~/.ssh/id_rsa.pub with your custom path and filename. Replace name with the username on the remote server. Replace server_address with the host address. If the usernames on both the client and server are the same, you can shorten the command: ssh-copy-id -i ~/.ssh/id_rsa.pub server_address If you set a passphrase during the SSH key creation, the terminal will prompt you to enter it. Otherwise, the key will be copied immediately. In some cases, the server may be configured to use a non-standard port (the default is 22). If that’s the case, specify the port using the -p flag: ssh-copy-id -i ~/.ssh/id_rsa.pub -p 8129 name@server_address Semi-Manual Copying There are operating systems where the ssh-copy-id command may not be supported, even though SSH connections to the server are possible. In such cases, the copying process can be done manually using a series of commands: ssh name@server_address 'mkdir -pm 700 ~/.ssh; echo ' $(cat ~/.ssh/id_rsa.pub) ' >> ~/.ssh/authorized_keys; chmod 600 ~/.ssh/authorized_keys' This sequence of commands does the following: Creates a special .ssh directory on the server (if it doesn’t already exist) with the correct permissions (700) for reading and writing. Creates or appends to the authorized_keys file, which stores the public keys of all authorized users. The public key from the local file (id_rsa.pub) will be added to it. Sets appropriate permissions (600) on the authorized_keys file to ensure it can only be read and written by the owner. If the authorized_keys file already exists, it will simply be appended with the new key. 
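An equivalent one-liner, shown here only as a sketch that assumes the default key path and filename, pipes the public key over SSH and appends it on the server:
cat ~/.ssh/id_rsa.pub | ssh name@server_address "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"
The result is the same: the key ends up in ~/.ssh/authorized_keys with the correct permissions.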
Once this is done, future connections to the server can be made using the same SSH command, but now the authentication will use the public key added to authorized_keys: ssh name@server_address Manual Copying Some hosting platforms offer server management through alternative interfaces, such as a web-based control panel. In these cases, there is usually an option to manually add a public key to the server. The web interface might even simulate a terminal for interacting with the server. Regardless of the method, the remote host must contain a file named ~/.ssh/authorized_keys, which lists all authorized public keys. Simply copy the client’s public key (found in ~/.ssh/id_rsa.pub by default) into this file. If the key pair was generated using a graphical application (typically PuTTY on Windows), you should copy the public key directly from the application and add it to the existing content in authorized_keys. Connecting to a Server To connect to a remote server on a Linux operating system, enter the following command in the terminal: ssh name@server_address Alternatively, if the local username is identical to the remote username, you can shorten the command to: ssh server_address The system will then prompt you to enter the password. Type it and press Enter. Note that the terminal will not display the password as you type it. Just like with the ssh-copy-id command, you can explicitly specify the port when connecting to a remote server: ssh client@server_address -p 8129 Once connected, you will have control over the remote machine via the terminal; any command you enter will be executed on the server side. Conclusion Today, SSH is one of the most widely used protocols in development and system administration. Therefore, having a basic understanding of its operation is crucial. This article aimed to provide an overview of SSH connections, briefly explain the encryption algorithms (RSA, DSA, ECDSA, and EdDSA), and demonstrate how public and private key pairs can be used to establish secure connections with a personal server, ensuring that exchanged messages remain inaccessible to third parties. We covered the primary commands for UNIX-like operating systems that allow users to generate key pairs and grant clients SSH access by copying the public key to the server, enabling secure connections.
30 January 2025 · 10 min to read
Docker

How to Automate Jenkins Setup with Docker

In the modern software development world, Continuous Integration and Continuous Delivery (CI/CD) have become an integral part of the development process. Jenkins, one of the leading CI/CD tools, helps automate application build, testing, and deployment. However, setting up and managing Jenkins can be time-consuming and complex, especially in large projects with many developers and diverse requirements. Docker, containerization, and container orchestration have come to the rescue, offering more efficient and scalable solutions for deploying applications and infrastructure. Docker allows developers to package applications and their dependencies into containers, which can be easily transported and run on any system with Docker installed. Benefits of Using Docker for Automating Jenkins Setup Simplified Installation and Setup: Using Docker to deploy Jenkins eliminates many challenges associated with installing dependencies and setting up the environment. You only need to run a few commands to get a fully functional Jenkins server. Repeatability: With Docker, you can be confident that your environment will always be the same, regardless of where it runs. This eliminates problems associated with different configurations across different servers. Environment Isolation: Docker provides isolation of applications and their dependencies, avoiding conflicts between different projects and services. Scalability: Using Docker and orchestration tools such as Docker Compose or Kubernetes allows Jenkins to be easily scaled by adding or removing agents as needed. Fast Deployment and Recovery: In case of failure or the need for an upgrade, Docker allows you to quickly deploy a new Jenkins container, minimizing downtime and ensuring business continuity. In this article, we will discuss how to automate the setup and deployment of Jenkins using Docker. We will cover all the stages, from creating a Docker file and setting up Docker Compose to integrating Jenkins Configuration as Code (JCasC) for automatic Jenkins configuration. As a result, you'll have a complete understanding of the process and a ready-made solution for automating Jenkins in your projects. Prerequisites Before you begin setting up Jenkins with Docker, you need to ensure that you have all the necessary tools and software. In this section, we will discuss the requirements for successfully automating Jenkins and how to install the necessary components. Installing Docker and Docker Compose Docker can be installed on various operating systems, including Linux, macOS, and Windows. Below are the steps for installing Docker on the most popular platforms: Linux (Ubuntu) Update the package list with the command: sudo apt update Install packages for HTTPS support: sudo apt install apt-transport-https ca-certificates curl software-properties-common Add the official Docker GPG key: curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - Add the Docker repository to APT sources: sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" Install Docker: sudo apt install docker-ce Verify Docker is running: sudo systemctl status docker macOS Download and install Docker Desktop from the official website: Docker Desktop for Mac. Follow the on-screen instructions to complete the installation. Windows Download and install Docker Desktop from the official website: Docker Desktop for Windows. Follow the on-screen instructions to complete the installation. 
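Regardless of the platform, it is worth confirming that Docker itself works before moving on. A quick optional check:
docker --version
docker run --rm hello-world
The hello-world container prints a short confirmation message and is removed automatically thanks to the --rm flag.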
Docker Compose is typically installed along with Docker Desktop on macOS and Windows. For Linux, it requires separate installation: Download the latest version of Docker Compose: sudo curl -L "https://github.com/docker/compose/releases/download/$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep -Po '"tag_name": "\K.*?(?=")')/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose Make the downloaded file executable: sudo chmod +x /usr/local/bin/docker-compose Verify the installation: docker-compose --version Docker Hub is a cloud-based repository where you can find and store Docker images. The official Jenkins Docker image is available on Docker Hub and provides a ready-to-use Jenkins server. Go to the Docker Hub website. In the search bar, type Jenkins. Select the official image jenkins/jenkins. The official image is regularly updated and maintained by the community, ensuring a stable and secure environment. Creating a Dockerfile for Jenkins In this chapter, we will explore how to create a Dockerfile for Jenkins that will be used to build a Docker image. We will also discuss how to add configurations and plugins to this image to meet the specific requirements of your project. Structure of a Dockerfile A Dockerfile is a text document containing all the commands that a user could call on the command line to build an image. In each Dockerfile, instructions are used to define a step in the image-building process. The key commands include: FROM: Specifies the base image to create a new image from. RUN: Executes a command in the container. COPY or ADD: Copies files or directories into the container. CMD or ENTRYPOINT: Defines the command that will be executed when the container starts. Basic Dockerfile for Jenkins Let’s start by creating a simple Dockerfile for Jenkins. This file will use the official Jenkins image as the base and add a few necessary plugins. Create a new file named Dockerfile in your project directory. Add the following code: FROM jenkins/jenkins:lts RUN jenkins-plugin-cli --plugins workflow-aggregator git EXPOSE 8080 EXPOSE 50000 This basic Dockerfile installs two plugins: workflow-aggregator and git. It also exposes ports 8080 (for the web interface) and 50000 (for connecting Jenkins agents). Adding Configurations and Plugins For more complex configurations, we can add additional steps to the Dockerfile. For example, we can configure Jenkins to automatically use a specific configuration file or add scripts for pre-configuration. Create a jenkins_home directory to store custom configurations. 
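For example, from the project directory that contains the Dockerfile (the scripts/init.groovy.d folder is referenced by the updated Dockerfile shown below):
mkdir -p jenkins_home
mkdir -p scripts/init.groovy.d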
Inside the new directory, create a custom_config.xml file with the required configurations: <?xml version='1.0' encoding='UTF-8'?> <hudson> <numExecutors>2</numExecutors> <mode>NORMAL</mode> <useSecurity>false</useSecurity> <disableRememberMe>false</disableRememberMe> <label></label> <primaryView>All</primaryView> <slaveAgentPort>50000</slaveAgentPort> <securityRealm class='hudson.security.SecurityRealm$None'/> <authorizationStrategy class='hudson.security.AuthorizationStrategy$Unsecured'/> </hudson> Update the Dockerfile as follows: FROM jenkins/jenkins:lts RUN jenkins-plugin-cli --plugins workflow-aggregator git docker-workflow COPY jenkins_home/custom_config.xml /var/jenkins_home/config.xml COPY scripts/init.groovy.d /usr/share/jenkins/ref/init.groovy.d/ EXPOSE 8080 EXPOSE 50000 In this example, we are installing additional plugins, copying the custom configuration file into Jenkins, and adding scripts to the init.groovy.d directory for automatic initialization of Jenkins during its first startup. Docker Compose Setup Docker Compose allows you to define your application's infrastructure as code using YAML files. This simplifies the configuration and deployment process, making it repeatable and easier to manage. Key benefits of using Docker Compose: Ease of Use: Create and manage multi-container applications with a single YAML file. Scalability: Easily scale services by adding or removing containers as needed. Convenience for Testing: Ability to run isolated environments for development and testing. Example of docker-compose.yml for Jenkins Let’s create a docker-compose.yml file to deploy Jenkins along with associated services such as a database and Jenkins agent. Create a docker-compose.yml file in your project directory. Add the following code to the file: version: '3.8' services: jenkins: image: jenkins/jenkins:lts container_name: jenkins-server ports: - "8080:8080" - "50000:50000" volumes: - jenkins_home:/var/jenkins_home networks: - jenkins-network jenkins-agent: image: jenkins/inbound-agent container_name: jenkins-agent environment: - JENKINS_URL=http://jenkins-server:8080 - JENKINS_AGENT_NAME=agent - JENKINS_AGENT_WORKDIR=/home/jenkins/agent volumes: - agent_workdir:/home/jenkins/agent depends_on: - jenkins networks: - jenkins-network volumes: jenkins_home: agent_workdir: networks: jenkins-network: This file defines two services: jenkins: The service uses the official Jenkins image. Ports 8080 and 50000 are forwarded for access to the Jenkins web interface and communication with agents. The /var/jenkins_home directory is mounted on the external volume jenkins_home to persist data across container restarts. jenkins-agent: The service uses the Jenkins inbound-agent image. The agent connects to the Jenkins server via the URL specified in the JENKINS_URL environment variable. The agent's working directory is mounted on an external volume agent_workdir. Once you create the docker-compose.yml file, you can start all services with a single command: Navigate to the directory that contains your docker-compose.yml. Run the following command to start all services: docker-compose up -d The -d flag runs the containers in the background. After executing this command, Docker Compose will create and start containers for all services defined in the file. You can now check the status of the running containers using the following command: docker-compose ps If everything went well, you should see only the jenkins-server container in the output. Now, let’s set up the Jenkins server and agent. 
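A quick troubleshooting note before moving on: if a container is missing from the docker-compose ps output or keeps restarting, its logs usually explain why. The service names below are the ones defined in the docker-compose.yml above:
docker-compose logs jenkins
docker-compose logs jenkins-agent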
Open a browser and go to http://localhost:8080/. During the first startup, you will see the following message: To retrieve the password, run this command: docker exec -it jenkins-server cat /var/jenkins_home/secrets/initialAdminPassword Copy the password and paste it into the Unlock Jenkins form. This will open a new window with the initial setup. Select Install suggested plugins. After the installation is complete, fill out the form to create an admin user. Accept the default URL and finish the setup. Then, go to Manage Jenkins → Manage Nodes. Click New Node, provide a name for the new node (e.g., "agent"), and select Permanent Agent. Fill in the remaining fields as shown in the screenshot. After creating the agent, a window will open with a command containing the secret for the agent connection. Copy the secret and add it to your docker-compose.yml: environment: - JENKINS_URL=http://jenkins-server:8080 - JENKINS_AGENT_NAME=agent - JENKINS_AGENT_WORKDIR=/home/jenkins/agent - JENKINS_SECRET=<your-secret-here> # Insert the secret here To restart the services, use the following commands and verify that the jenkins-agent container has started: docker-compose downdocker-compose up -d Configuring Jenkins with Code (JCasC) Jenkins Configuration as Code (JCasC) is an approach that allows you to describe the entire Jenkins configuration in a YAML file. It simplifies the automation, maintenance, and portability of Jenkins settings. In this chapter, we will explore how to set up JCasC for automatic Jenkins configuration when the container starts. JCasC allows you to describe Jenkins configuration in a single YAML file, which provides the following benefits: Automation: A fully automated Jenkins setup process, eliminating the need for manual configuration. Manageability: Easier management of configurations using version control systems. Documentation: Clear and easily readable documentation of Jenkins configuration. Example of a Jenkins Configuration File First, create the configuration file. Create a file named jenkins.yaml in your project directory. Add the following configuration to the file: jenkins: systemMessage: "Welcome to Jenkins configured as code!" securityRealm: local: allowsSignup: false users: - id: "admin" password: "${JENKINS_ADMIN_PASSWORD}" authorizationStrategy: loggedInUsersCanDoAnything: allowAnonymousRead: false tools: jdk: installations: - name: "OpenJDK 11" home: "/usr/lib/jvm/java-11-openjdk" jobs: - script: > pipeline { agent any stages { stage('Build') { steps { echo 'Building...' } } stage('Test') { steps { echo 'Testing...' } } stage('Deploy') { steps { echo 'Deploying...' } } } } This configuration file defines: System message in the systemMessage block. This string will appear on the Jenkins homepage and can be used to inform users of important information or changes. Local user database and administrator account in the securityRealm block. The field allowsSignup: false disables self-registration of new users. Then, a user with the ID admin is defined, with the password set by the environment variable ${JENKINS_ADMIN_PASSWORD}. Authorization strategy in the authorizationStrategy block. The policy loggedInUsersCanDoAnything allows authenticated users to perform any action, while allowAnonymousRead: false prevents anonymous users from accessing the system. JDK installation in the tools block. In this example, a JDK named OpenJDK 11 is specified with the location /usr/lib/jvm/java-11-openjdk. Pipeline example in the jobs block. 
This pipeline includes three stages: Build, Test, and Deploy, each containing one step that outputs a corresponding message to the console. Integrating JCasC with Docker and Docker Compose Next, we need to integrate our jenkins.yaml configuration file with Docker and Docker Compose so that this configuration is automatically applied when the Jenkins container starts. Update the Dockerfile to copy the configuration file into the container and install the JCasC plugin: FROM jenkins/jenkins:lts RUN jenkins-plugin-cli --plugins configuration-as-code COPY jenkins.yaml /var/jenkins_home/jenkins.yaml EXPOSE 8080 EXPOSE 50000 Update the docker-compose.yml to set environment variables and mount the configuration file. Add the following code in the volumes block: - ./jenkins.yaml:/var/jenkins_home/jenkins.yaml After the volumes block, add a new environment block (if you haven't defined it earlier): environment: - JENKINS_ADMIN_PASSWORD=admin_password Build the new Jenkins image with the JCasC configuration: docker-compose build Run the containers: docker-compose up -d After the containers start, go to your browser at http://localhost:8080 and log in with the administrator account. You should see the system message and the Jenkins configuration applied according to your jenkins.yaml file. A few important notes: The YAML files docker-compose.yml and jenkins.yaml might seem similar at first glance but serve completely different purposes. The file in Docker Compose describes the services and containers needed to run Jenkins and its environment, while the file in JCasC describes the Jenkins configuration itself, including plugin installation, user settings, security, system settings, and jobs. The .yml and .yaml extensions are variations of the same YAML file format. They are interchangeable and supported by various tools and libraries for working with YAML. The choice of format depends largely on historical community preferences; in Docker documentation, you will more often encounter examples with the .yml extension, while in JCasC documentation, .yaml is more common. The pipeline example provided below only outputs messages at each stage with no useful payload. This example is for demonstrating structure and basic concepts, but it does not prevent Jenkins from successfully applying the configuration. We will not dive into more complex and practical structures. jenkins.yaml describes the static configuration and is not intended to define the details of a specific CI/CD process for a particular project. For that purpose, you can use the Jenkinsfile, which offers flexibility for defining specific CI/CD steps and integrating with version control systems. We will discuss this in more detail in the next chapter. Key Concepts of Jobs in JCasC Jobs are a section of the configuration file that allows you to define and configure build tasks using code. This block includes the following: Description of Build Tasks: This section describes all aspects of a job, including its type, stages, triggers, and execution steps. Types of Jobs: There are different types of jobs in Jenkins, such as freestyle projects, pipelines, and multiconfiguration projects. In JCasC, pipelines are typically used because they provide a more flexible and powerful approach to automation. Declarative Syntax: Pipelines are usually described using declarative syntax, simplifying understanding and editing. Example Breakdown: pipeline: The main block that defines the pipeline job. 
agent any: Specifies that the pipeline can run on any available Jenkins agent. stages: The block that contains the pipeline stages. A stage is a step in the process. Additional Features: Triggers: You can add triggers to make the job run automatically under certain conditions, such as on a schedule or when a commit is made to a repository: triggers { cron('H 4/* 0 0 1-5') } Post-Conditions: You can add post-conditions to execute steps after the pipeline finishes, such as sending notifications or archiving artifacts. Parameters: You can define parameters for a job to make it configurable at runtime: parameters { string(name: 'BRANCH_NAME', defaultValue: 'main', description: 'Branch to build') } Automating Jenkins Deployment in Docker with JCasC Using Scripts for Automatic Deployment Use Bash scripts to automate the installation, updating, and running Jenkins containers. Leverage Jenkins Configuration as Code (JCasC) to automate Jenkins configuration. Script Examples Script for Deploying Jenkins in Docker: #!/bin/bash # Jenkins Parameters JENKINS_IMAGE="jenkins/jenkins:lts" CONTAINER_NAME="jenkins-server" JENKINS_PORT="8080" JENKINS_AGENT_PORT="50000" VOLUME_NAME="jenkins_home" CONFIG_DIR="$(pwd)/jenkins_configuration" # Create a volume to store Jenkins data docker volume create $VOLUME_NAME # Run Jenkins container with JCasC docker run -d \ --name $CONTAINER_NAME \ -p $JENKINS_PORT:8080 \ -p $JENKINS_AGENT_PORT:50000 \ -v $VOLUME_NAME:/var/jenkins_home \ -v $CONFIG_DIR:/var/jenkins_home/casc_configs \ -e CASC_JENKINS_CONFIG=/var/jenkins_home/casc_configs \ $JENKINS_IMAGE The JCasC configuration file jenkins.yaml was discussed earlier. Setting Up a CI/CD Pipeline for Jenkins Updates To set up a CI/CD pipeline, follow these steps: Open Jenkins and go to the home page. Click on Create Item. Enter a name for the new item, select Pipeline, and click OK. If this section is missing, you need to install the plugin in Jenkins. Go to Manage Jenkins → Manage Plugins. In the Available Plugins tab, search for Pipeline and install the Pipeline plugin. Similarly, install the Git Push plugin. After installation, go back to Create Item. Select Pipeline, and under Definition, choose Pipeline script from SCM. Select Git as the SCM. Add the URL of your repository; if it's private, add the credentials. In the Branch Specifier field, specify the branch that contains the Jenkinsfile (e.g., */main). Note that the Jenkinsfile should be created without an extension. If it's located in a subdirectory, specify it in the Script Path field. Click Save. Example of a Jenkinsfile pipeline { agent any environment { JENKINS_CONTAINER_NAME = 'new-jenkins-server' JENKINS_IMAGE = 'jenkins/jenkins:lts' JENKINS_PORT = '8080' JENKINS_VOLUME = 'jenkins_home' } stages { stage('Setup Docker') { steps { script { // Install Docker on the server if it's not installed sh ''' if ! [ -x "$(command -v docker)" ]; then curl -fsSL https://get.docker.com -o get-docker.sh sh get-docker.sh fi ''' } } } stage('Pull Jenkins Docker Image') { steps { script { // Pull the latest Jenkins image sh "docker pull ${JENKINS_IMAGE}" } } } stage('Cleanup Old Jenkins Container') { steps { script { // Stop and remove the old container if it exists def existingContainer = sh(script: "docker ps -a -q -f name=${JENKINS_CONTAINER_NAME}", returnStdout: true).trim() if (existingContainer) { echo "Stopping and removing existing container ${JENKINS_CONTAINER_NAME}..." 
sh "docker stop ${existingContainer} || true" sh "docker rm -f ${existingContainer} || true" } else { echo "No existing container with name ${JENKINS_CONTAINER_NAME} found." } } } } stage('Run Jenkins Container') { steps { script { // Run Jenkins container with port binding and volume mounting sh ''' docker run -d --name ${JENKINS_CONTAINER_NAME} \ -p ${JENKINS_PORT}:8080 \ -p 50000:50000 \ -v ${JENKINS_VOLUME}:/var/jenkins_home \ ${JENKINS_IMAGE} ''' } } } stage('Configure Jenkins (Optional)') { steps { script { // Additional Jenkins configuration through Groovy scripts or REST API sh ''' # Example script for performing initial Jenkins setup curl -X POST http://localhost:${JENKINS_PORT}/scriptText --data-urlencode 'script=println("Jenkins is running!")' ''' } } } } post { always { echo "Jenkins setup and deployment process completed." } } } On the page of your new pipeline, click Build Now. Go to Console Output. In case of a successful completion, you should see the following output. For this pipeline, we used the following files.  Dockerfile: FROM jenkins/jenkins:lts USER root RUN apt-get update && apt-get install -y docker.io docker-compose.yml: version: '3.7' services: jenkins: build: . ports: - "8081:8080" - "50001:50000" volumes: - jenkins_home:/var/jenkins_home - /var/run/docker.sock:/var/run/docker.sock environment: - JAVA_OPTS=-Djenkins.install.runSetupWizard=false networks: - jenkins-network volumes: jenkins_home: networks: jenkins-network: Ports 8081 and 50001 are used here so that the newly deployed Jenkins can occupy ports 8080 and 50000, respectively. This means that the main Jenkins, from which the pipeline is running, is currently located at http://localhost:8081/. One way to check if Jenkins has been deployed is to go to http://localhost:8080/, as we specified this in the pipeline. Since this is a new image, a welcome message with authentication will appear on the homepage. Conclusion Automating the deployment, updates, and backups of Jenkins is crucial for ensuring the reliability and security of CI/CD processes. Using modern tools enhances this process with a variety of useful features and resources. If you're further interested in exploring Jenkins capabilities, we recommend the following useful resources that can assist with automating deployments: Official Jenkins website Jenkins Configuration as Code documentation Pipeline Syntax
30 January 2025 · 19 min to read
R

How to Find Standard Deviation in R

Standard deviation is a statistical measure that shows to what extent the values of the studied feature deviate on average from the mean. We use it to determine whether the units in our sample or population are similar with respect to the studied feature, or whether they differ significantly from each other. If you want to learn how to find standard deviation in R, or simply what standard deviation is, read on. This guide offers a detailed explanation of calculating standard deviation in R, covering several methods and practical examples to help you analyze data efficiently.
The Mathematics Behind Standard Deviation
Standard deviation is a measure of the average variation of individual values of a statistical feature from the arithmetic mean, which gives it an intuitive interpretation as a measure of the variability of a distribution. If we simply summed the signed deviations from the mean, the total would always be 0, which is why the deviations are squared before averaging.
The formula for the (population) standard deviation is:
σ = √( Σ(xᵢ − μ)² / N )
where Σ denotes a sum, xᵢ is each observation, μ is the mean of the data, and N is the total number of observations. Standard deviation is usually abbreviated as SD.
The smaller the standard deviation, the closer the values are to the average, which shows that the data is more consistent. To properly judge whether the SD is small or large, it is important to know the range of the scale being used.
The Significance of Standard Deviation
The standard deviation is very helpful when comparing the variability of two data sets of similar size and average. The simple average alone often does not support deeper analysis. What good is knowing the average salary in a company if we do not know how much salaries vary? Do all employees earn exactly the same? Or is the manager inflating the average? To get to the underlying truth, we have to calculate the standard deviation.
Similarly, standard deviation helps quantify risk when making investment decisions. If one listed company brought an average annual profit of 4% and another an average annual profit of 5%, it does not automatically mean the second company is the better choice. Setting aside fundamental and technical analysis of a specific company, as well as broader macroeconomic conditions, it is valuable to look at the fluctuations of the quotes themselves. If the stock price of the first company fluctuated by only a few percent during the year while the second fluctuated by several dozen percent, the investment in the first company was clearly much less risky. The standard deviation lets you compare different rates of return and check how risky they are.
Different Ways to Find Standard Deviation in R
To perform any kind of analysis, we first need data. In R, you can enter data manually by defining a vector or import it from an external source such as an Excel or CSV file. Let's create a vector with six values:
data <- c(4, 8, 6, 5, 3, 7)
Alternatively, datasets can be imported using the read.csv() function, which loads data from a CSV file into R.
Here's an example of importing data: # Read a CSV file into a data frame data <-read.csv("datafile.csv") # Install the 'readxl' package install.packages("readxl") # Load the library library(readxl) # Read an Excel file into a data frame data_excel <- read_excel("datafile.xlsx", sheet = 1) Finding Sample Standard Deviation in R A quick and easy way to standard deviation of a sample is through the sd() function which is one of the built-in function in R. It takes a data sample, often in the form of a vector, as input and returns the standard deviation. For example, to measure the SD of the vector created earlier: sd(data) Output: [1] 1.870829 If your sample has missing or null values, then you just need to set the parameter na.rm=TRUE in the sd() function and the missing value will not be included in the analysis: standard_deviation <- sd(data, na.rm = TRUE) Finding Population Standard Deviation in R To calculate the population standard deviation, we will first find the mean and subtract it from each observation in the dataset and square the results. Once we have the squared differences, we just have to find their average to find the variance. Finally, taking the square root of the variance will give us the population SD. Here is the R code to manually compute population standard deviation: mean_data <- mean(data) squared_differences <- (data - mean_data)^2 mean_squared_diff <- mean(squared_differences) standard_deviation_manual <- sqrt(mean_squared_diff) print(standard_deviation_manual) Grouped Standard Deviation in R Let's say you are analyzing the grades of students across different subjects in a school. The categorical variable here is “subject,” and you want to know not only the average grade for each subject but also the variation in grades. This will help us understand if certain subject have a wide or uniform range of grades. To determine the standard deviation for each category in a dataset containing categorical variables, one can utilize the dplyr package. The group_by() function facilitates the segmentation of the data by the categorical variable, and summarise() then calculates the SD for each distinct group. Before moving to calculation, we will install the dplyr package: install.packages("dplyr") Following our earlier example, let’s take a dataset which contains grades of students across different subjects: library(dplyr) # Example data frame with class and grades data <- data.frame( Subject = c('Math', 'Math', 'Math', 'History', 'History', 'History'), grade = c(85, 90, 78, 88, 92, 85) ) # Calculate standard deviation for each class grouped_sd <- data %>% group_by(Subject) %>% summarise(Standard_Deviation = sd(grade)) print(grouped_sd) Output: # A tibble: 2 × 2 Subject Standard_Deviation <chr> <dbl> 1 History 3.511885 2 Math 6.027714 Finding Column-Wise Standard Deviation  In R, there are a number of different ways to find column-wise standard deviation. To find the SD of specific columns, you can use apply the sd() function. A more efficient way is to use the summarise() or summarise_all() functions of the dplyr package. Example using apply(): data_frame <- data.frame(A = c(1, 2, 3), B = c(4, 5, 6)) apply(data_frame, 2, sd) Example using dplyr: library(dplyr) data_frame %>% summarise(across(everything(), sd)) Weighted Standard Deviation Now imagine that you are a manager of a sports league where a team has 5 players while others have 50 players. 
If you calculate the SD of scores across the entire league and treat all teams equally, the 5-player teams would contribute just as much to the calculation as the 50-player teams, even though they have far fewer players. Such an analysis would be misleading, so we need a measure like the weighted standard deviation, which weights each observation (here, by team size) so that teams with more players contribute proportionally to the overall variability.
The formula for the weighted standard deviation is:
SDw = √( Σ wᵢ(xᵢ − μw)² / Σ wᵢ )
where:
wᵢ is the weight for each data point,
xᵢ is each data point,
μw is the weighted mean, calculated as:
μw = Σ wᵢxᵢ / Σ wᵢ
Although R does not have a built-in function for the weighted standard deviation, it can be computed manually.
Manually Find Weighted Standard Deviation
Let's say we have test grades with corresponding weights, and we want to measure the weighted standard deviation:
# Example data with grades and weights
grades <- c(85, 90, 78, 88, 92, 85)
weights <- c(0.2, 0.3, 0.1, 0.15, 0.1, 0.15)
# Calculate the weighted mean
weighted_mean <- sum(grades * weights) / sum(weights)
# Calculate the squared differences from the weighted mean
squared_differences <- (grades - weighted_mean)^2
# Calculate the weighted variance
weighted_variance <- sum(weights * squared_differences) / sum(weights)
# Calculate the weighted standard deviation
weighted_sd <- sqrt(weighted_variance)
print(weighted_sd)
Output:
[1] 3.853245
Conclusion
Standard deviation is quite easy to calculate, despite the sums and square roots in the formula, and even easier to interpret. If you want to get comfortable with statistics or data science, you will, like it or not, also have to get comfortable with standard deviation and with measuring it in R.
30 January 2025 · 7 min to read
Kubernetes

Kubernetes Requests and Limits

When working with the Kubernetes containerization platform, it is important to control resource usage for cluster objects such as pods. The requests and limits parameters allow you to configure resource consumption limits, such as how many resources a pod can use in a Kubernetes cluster. This article will explore the use of requests and limits in Kubernetes through practical examples. Prerequisites To work with requests and limits in a Kubernetes cluster, we need: A Kubernetes cluster (you can create one in the Hostman control panel). For testing purposes, a cluster with two nodes will suffice. The cluster can also be deployed manually by renting the necessary number of cloud or dedicated (physical) servers, setting up the operating system, and installing the required packages. Lens or kubectl for connecting to and managing your Kubernetes clusters. Connecting to a Kubernetes Cluster Using Lens First, go to the cluster management page in your Hostman panel. Download the Kubernetes cluster configuration file (the kubeconfig file). Once Lens is installed on your system, launch the program, and from the left menu, go to the Catalog (app) section: Select Clusters and click the blue plus button at the bottom right. Choose the directory where you downloaded the Kubernetes configuration file by clicking the Sync button at the bottom right. After this, our cluster will appear in the list of available clusters. Click on the cluster's name to open its dashboard: What are Requests and Limits in Kubernetes First, let's understand what requests and limits are in Kubernetes. Requests are a mechanism in Kubernetes that is responsible for allocating physical resources, such as memory and CPU cores, to the container being launched. In simple terms, requests in Kubernetes are the minimum system requirements for an application to function properly. Limits are a mechanism in Kubernetes that limits the physical resources (memory and CPU cores) allocated to the container being launched. In other words, limits in Kubernetes are the maximum values for physical resources, ensuring that the launched application cannot consume more resources than specified in the limits. The container can only use resources up to the limit specified in the Limits. The request and limit mechanisms apply only to objects of type pod and are defined in the pod configuration files, including deployment, StatefulSet, and ReplicaSet files. Requests are added in the containers block using the resources parameter. In the resources section, you need to add the requests block, which consists of two values: cpu (CPU resource request) and memory (memory resource request). The syntax for requests is as follows: containers: ... resources: requests: cpu: "1.0" memory: "150Mi" In this example, for the container to be launched on a selected node in the cluster, at least one free CPU core and 150 megabytes of memory must be available. Limits are set in the same way. For example: containers: ... resources: limits: cpu: "2.0" memory: "500Mi" In this example, the container cannot use more than two CPU cores and no more than 500 megabytes of memory. The units of measurement for requests and limits are as follows: CPU — in millicores (milli-cores) RAM — in bytes For CPU resources, cores are used. For example, if we need to allocate one physical CPU core to a container, the manifest should specify 1.0. To allocate half a core, specify 0.5. 
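For instance, the two notations in this purely illustrative fragment request the same amount of CPU:
resources:
  requests:
    cpu: "0.5"    # half a core; the same amount can be written as "500m"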
A core can be logically divided into millicores, so you can allocate, for example, 100m, which means one-thousandth of a core (1 full CPU core contains 1000 millicores). For RAM, we specify values in bytes. You can use numbers with the suffixes E, P, T, G, M, k. For example, if a container needs to be allocated 1 gigabyte of memory, you should specify 1G. In megabytes, it would be 1024M, in kilobytes, it would be 1048576k, and so on. The requests and limits parameters are optional; however, it is important to note that if both parameters are not set, the container will be able to run on any available node in the cluster regardless of the free resources and will consume as many resources as are physically available on each node. Essentially, the cluster will allocate excess resources. This practice can negatively affect the stability of the entire cluster, as it significantly increases the risk of errors such as OOM (Out of Memory) and OutOfCPU (lack of CPU resources). To prevent these errors, Kubernetes introduced the request and limit mechanisms. Practical Use of Requests and Limits in Kubernetes Let's look at the practical use of requests and limits. First, we will deploy a deployment file with an Nginx image where we will set only the requests. In the configuration below, to launch a pod with a container, the node must have at least 100 millicores of CPU (1/1000 of a CPU core) and 150 megabytes of free memory: apiVersion: apps/v1 kind: Deployment metadata: name: nginx-test-deployment namespace: ns-for-nginx labels: app: nginx-test spec: selector: matchLabels: app: nginx-test template: metadata: labels: app: nginx-test spec: containers: - name: nginx-test image: nginx:1.25 resources: requests: cpu: "100m" memory: "150Mi" Before deploying the deployment, let's create a new namespace named ns-for-nginx: kubectl create ns ns-for-nginx After creating the namespace, we will deploy the deployment file using the following command: kubectl apply -f nginx-test-deployment.yml Now, let's check if the deployment was successfully created: kubectl get deployments -A Also, check the status of the pod: kubectl get po -n ns-for-nginx The deployment file and the pod have been successfully launched. To ensure that the minimum resource request was set for the Nginx pod, we will use the kubectl describe pod command (where nginx-test-deployment-786d6fcb57-7kddf is the name of the running pod): kubectl describe pod nginx-test-deployment-786d6fcb57-7kddf -n ns-for-nginx In the output of this command, you can find the requests block, which contains the previously set minimum requirements for our container to run: In the example above, we created a deployment that sets only the minimum required resources for deployment. 
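Before choosing request values, it can help to see how much CPU and memory the cluster nodes actually have available. One way to check (a standard kubectl command, shown here as a sketch) is:
kubectl describe nodes | grep -A 5 "Allocated resources"
The Allocated resources section of each node shows how much of its capacity is already requested by running pods.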
Now, let's add limits for the container to run with 1 full CPU core and 1 gigabyte of RAM by creating a new deployment file: apiVersion: apps/v1 kind: Deployment metadata: name: nginx-test-deployment-2 namespace: ns-for-nginx labels: app: nginx-test2 spec: selector: matchLabels: app: nginx-test2 template: metadata: labels: app: nginx-test2 spec: containers: - name: nginx-test2 image: nginx:1.25 resources: requests: cpu: "100m" memory: "150Mi" limits: cpu: "1.0" memory: "1G" Let's create the deployment in the cluster: kubectl apply -f nginx-test-deployment2.yml Using the kubectl describe command, let's verify that both requests and limits have been applied (where nginx-test-deployment-2-6d5df6c95c-brw8n is the name of the pod): kubectl describe pod nginx-test-deployment-2-6d5df6c95c-brw8n -n ns-for-nginx In the screenshot above, both requests and limits have been set for the container. With these quotas, the container will be scheduled on a node with at least 150 megabytes of RAM and 100 milli-CPU. At the same time, the container will not be allowed to consume more than 1 gigabyte of RAM and 1 CPU core. Using ResourceQuota In addition to manually assigning resources for each container, Kubernetes provides a way to allocate quotas to specific namespaces in the cluster. The ResourceQuota mechanism allows setting resource usage limits within a particular namespace. ResourceQuota is intended to limit resources such as CPU and memory. The practical use of ResourceQuota looks like this: Create a new namespace with quota settings: kubectl create ns ns-for-resource-quota Create a ResourceQuota object: apiVersion: v1 kind: ResourceQuota metadata: name: resource-quota-test namespace: ns-for-resource-quota spec: hard: pods: "2" requests.cpu: "0.5" requests.memory: "800Mi" limits.cpu: "1" limits.memory: "1G" In this example, for all objects created in the ns-for-resource-quota namespace, the following limits will apply: A maximum of 2 pods can be created. The minimum CPU resources required for starting the pods is 0.5 milliCPU. The minimum memory required for starting the pods is 800MB. CPU limits are set to 1 core (no more can be allocated). Memory limits are set to 1GB (no more can be allocated). Apply the configuration file: kubectl apply -f test-resource-quota.yaml Check the properties of the ResourceQuota object: kubectl get resourcequota resource-quota-test -n ns-for-resource-quota As you can see, resource quotas have been set. Also, verify the output of the kubectl describe ns command: kubectl describe ns ns-for-resource-quota The previously created namespace ns-for-resource-quota will have the corresponding resource quotas. Example of an Nginx pod with the following configuration: apiVersion: apps/v1 kind: Deployment metadata: name: nginx-with-quota namespace: ns-for-resource-quota labels: app: nginx-with-quota spec: selector: matchLabels: app: nginx-with-quota replicas: 3 template: metadata: labels: app: nginx-with-quota spec: containers: - name: nginx image: nginx:1.22.1 resources: requests: cpu: 100m memory: 100Mi limits: cpu: 100m memory: 100Mi Here we define 3 replicas of the Nginx pod to test the quota mechanism. We also set minimum resource requests for the containers and apply limits to ensure the containers don't exceed the defined resources. Apply the configuration file: kubectl apply -f nginx-deployment-with-quota.yaml kubectl get all -n ns-for-resource-quota As a result, only two of the three replicas of the pod will be successfully created. 
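To see exactly why the third replica was not created, you can list the recent events in the namespace (illustrative command):
kubectl get events -n ns-for-resource-quota --sort-by=.lastTimestamp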
The deployment will report an error indicating that the resource quota for pod creation has been exceeded (in this case, we are trying to create more pods than the quota allows). However, the other two Nginx pods start successfully.

Conclusion

Requests and limits are critical mechanisms in Kubernetes that allow for flexible resource allocation and control within the cluster, preventing unexpected errors in running applications and ensuring the stability of the cluster itself.

We offer an affordable Kubernetes hosting platform, with transparent and scalable pricing for all workloads.
29 January 2025 · 9 min to read
Python

How to Update Python

As software evolves, so does the need to keep your programming environment up to date. Python, known for its versatility and widespread use, sees frequent new releases that bring new features, performance improvements, and crucial security patches. Keeping Python current gives you better performance and stronger security. Below we explore several methods for updating Python so you can pick the one that suits your needs.

Prerequisites

Before starting, ensure you have:

Administrative access to your cloud server.
Reliable internet access.

Updating Python

Several methods are available to update Python on a cloud server. Here are four effective ways to do it.

Method 1: Via Package Manager

Using a package manager makes updating Python quick and effortless, especially if you are already familiar with package management systems.

Step 1: Check the Current Python Version

Begin by checking the Python version installed on your server:

python --version

or, for Python 3:

python3 --version

Step 2: Update the Package Repository

Make sure your package index is up to date so you receive the latest version information:

sudo apt update

Step 3: Upgrade Python

Then use the package manager to upgrade Python to the newest packaged release:

sudo apt install --only-upgrade python3

This brings your Python installation up to the latest version provided by your package repository.

Method 2: Building Python from Source

Compiling Python from source lets you customize the build process and apply specific optimizations. This method is especially useful for developers who need a Python build tailored to their requirements.

Step 1: Install Dependencies

Install the dependencies required for the build process:

sudo apt install build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev pkg-config libffi-dev wget

Step 2: Download the Python Source Code

Get the latest Python source code from the official website, or download it directly with wget:

wget https://www.python.org/ftp/python/3.13.1/Python-3.13.1.tgz

Substitute 3.13.1 with your preferred Python version.

Step 3: Extract the Package

Once downloaded, extract the tarball:

tar -xf Python-<latest-version>.tgz

Step 4: Configure and Compile Python

Enter the extracted folder and configure the build:

cd Python-<latest-version>
./configure --enable-optimizations

Then compile Python with make:

make -j $(nproc)

Note: The command above uses all available CPU cores to speed up the build. On a machine with limited resources, such as a single CPU core and 1 GB of RAM, limit the number of parallel jobs to reduce memory usage. For example:

make -j1

Step 5: Install Python

After compilation, install Python:

sudo make install

Note: You can use make altinstall instead of make install. This prevents the new build from replacing the default python3 binary that your system tools and applications may depend on. However, extra steps are then needed.

Verify the installed location:

ls /usr/local/bin/python3.13

Use the update-alternatives system to manage and switch between multiple Python versions:

sudo update-alternatives --install /usr/bin/python3 python3 /usr/local/bin/python3.13 1
sudo update-alternatives --config python3
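To double-check which interpreter the python3 alternative currently points to, you can list the registered candidates and the active selection:

# Show every registered python3 alternative, its priority, and which one is currently selected
update-alternatives --display python3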
Step 6: Validate the Python Installation

Close the terminal, open it again, and check the newly installed version:

python3 --version

Method 3: Via Pyenv

Pyenv is a go-to solution for maintaining several Python versions on the same system. It offers a flexible way to install and switch between versions. To update Python with Pyenv, follow these steps.

Step 1: Install Dependencies

First, install the dependencies needed to compile Python:

sudo apt install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev git

Step 2: Install Pyenv

Next, use curl to download and install Pyenv:

curl https://pyenv.run | bash

Step 3: Update the Shell Configuration

After that, reload the shell configuration:

export PYENV_ROOT="$HOME/.pyenv"
[[ -d $PYENV_ROOT/bin ]] && export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init - bash)"

Step 4: Install a Recent Python Version

Once Pyenv is installed, list all available Python versions:

pyenv install --list

Then install the version you want:

pyenv install <latest-version>

Set the newly installed version as the system-wide default:

pyenv global <latest-version>

Step 5: Verify the Installation

Confirm the new Python version:

python --version

Method 4: Via Anaconda

Anaconda supplies a full-featured distribution of Python and R aimed at data science and computational applications. It simplifies package handling and deployment, providing an accessible and efficient framework for developers. Here are the steps:

Step 1: Fetch the Anaconda Installer

Download the Anaconda installer script from the official site:

wget https://repo.anaconda.com/archive/Anaconda3-<latest-version>-Linux-x86_64.sh

Replace <latest-version> with the desired version number. For example:

wget https://repo.anaconda.com/archive/Anaconda3-2024.06-1-Linux-x86_64.sh

Step 2: Run the Installer

Run the installer script with bash:

bash Anaconda3-<latest-version>-Linux-x86_64.sh

Follow the prompts to complete the installation.

Step 3: Initialize Anaconda

Initialize Anaconda by incorporating it into your shell configuration:

source ~/.bashrc

Step 4: Update Anaconda

Make sure the conda package manager itself is up to date:

conda update conda

Then install or update Python to a specific version:

conda install python=<version>

Step 5: Verify the Installation

Check which Python version your Anaconda environment is now using:

python --version

Additional Tips for Maintaining Your Python Environment

Here are some key practices to keep your Python environment running smoothly and efficiently.

Regular Updates and Maintenance

To maintain optimal performance and security, keep your Python environment updated. Check for updates periodically and apply them as needed.

Using Virtual Environments

It's a good idea to use virtual environments when working with Python. They let you set up a separate environment for each project, so dependencies and versions stay isolated. Tools like venv and virtualenv make these environments easy to manage.
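As a quick illustration (the directory and package names here are just placeholders), creating and using a virtual environment with the built-in venv module looks like this:

# Create an isolated environment in the .venv directory of your project
python3 -m venv .venv

# Activate it for the current shell session
source .venv/bin/activate

# Packages installed now stay inside .venv and do not affect the system Python
pip install requests

# Leave the environment when you are done
deactivate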
Backup and Version Control

Always keep backups of your important projects and configurations. Git lets you record changes, collaborate with teammates, and roll back to earlier versions when needed.

Troubleshooting Common Issues

Here are some frequent problems you may face and ways to solve them.

Dependency Conflicts

Sometimes upgrading Python or installing new packages leads to dependency conflicts. To resolve them, consider tools like pipenv or poetry, which manage dependencies and virtual environments together.

Path Issues

After upgrading Python, you might run into problems with the PATH environment variable. Make sure your system picks up the correct Python version by updating PATH in your shell configuration file (e.g., .bashrc, .zshrc).

Security Considerations

Protecting your Python environment is essential. Follow these recommendations to keep it secure:

Stick to trusted sources when downloading packages.
Use pip's hash-checking mode to confirm package integrity.
Review the code and documentation before adopting a new package.
Stay informed about security updates and advisories from the Python ecosystem and package maintainers.
Keep pip and your packages updated regularly so you benefit from the newest security fixes and improvements.

FAQs

Q1: What's the recommended approach to updating Python on a cloud server?

A: The best method depends on your requirements. For a straightforward update, use the package manager. For customization, build from source. Pyenv is great for managing multiple versions, while Anaconda is tailored to data science needs.

Q2: How frequently should I update my Python environment?

A: Check for updates periodically and apply them to maintain top performance and robust security.

Q3: What should I do if I encounter issues after updating Python?

A: Refer to the troubleshooting section above for common issues. Check that the PATH variable is correct, and use virtual environments to isolate and resolve dependency conflicts.

Conclusion

Updating Python on a cloud server can be accomplished in several ways depending on your preferences and requirements. Whether you use a package manager, compile from source, manage versions with Pyenv, or rely on Anaconda, each approach has its benefits. By following this guide, you can keep your Python environment current, secure, and equipped with the latest features. Regularly updating Python is essential to take advantage of new functionality and maintain the security of your applications.
29 January 2025 · 8 min to read
Linux

How to Download Files with cURL

Downloading content from remote servers is a routine task for both administrators and developers. Although there are many tools for the job, cURL stands out for its adaptability and simplicity. It is a command-line utility that supports protocols such as HTTP, HTTPS, FTP, and SFTP, making it invaluable for automation, scripting, and efficient file transfers.

You can run cURL directly on your computer to fetch files, or include it in scripts to streamline data handling and minimize manual effort and mistakes. This guide demonstrates various ways to download files with cURL. By following the examples, you'll learn how to deal with redirects, rename files, and monitor download progress. By the end, you should be able to use cURL confidently on servers or in cloud setups.

Basic cURL Command for File Download

The curl command works with multiple protocols, but it is primarily used with HTTP and HTTPS to connect to web servers; it can also interact with FTP or SFTP servers when needed. By default, cURL retrieves a resource from the specified URL and writes it to your terminal (standard output). This is useful for previewing file contents without saving them, particularly for small text files.

Example: to view the content of a text file hosted at https://example.com/file.txt, run:

curl https://example.com/file.txt

For short text documents this approach is fine. However, large or binary files can flood the screen with unreadable data, so you'll usually want to save them instead.

Saving Remote Files

Often the main goal is to store the downloaded file on your local machine rather than print it to the terminal. cURL makes this simple with the -O (capital O) option, which preserves the file's original remote name:

curl -O https://example.com/file.txt

This retrieves file.txt and saves it in the current directory under the same name. It is quick and keeps the existing filename, which is helpful when the name matters.

Choosing a Different File Name

Sometimes you'll want to rename the downloaded file to avoid collisions or to follow a clear naming scheme. In that case, use the -o (lowercase o) option:

curl -o myfile.txt https://example.com/file.txt

Here cURL downloads the remote file file.txt but stores it locally as myfile.txt. This keeps files organized and prevents accidental overwriting, which is particularly valuable in scripts that need descriptive file names.

Following Redirects

When you request a file, the server may instruct your client to fetch it from a different URL. Understanding and handling redirects is critical for successful downloads.

Why Redirects Matter

Redirects are commonly used for reorganized websites, relocated files, or mirror links. Without redirect support, cURL stops after receiving the initial "moved" response, and you won't get the file.

Using -L or --location

To tell cURL to follow the redirect chain until it reaches the final target, use -L (or --location):

curl -L -O https://example.com/redirected-file.jpg

This lets cURL fetch the correct file even if the original URL points elsewhere. If you omit -L, cURL simply prints the redirect response and exits, which is a problem for sites with multiple redirects.
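As a small combined sketch (the URL and file name are just examples), you can follow redirects, pick a local file name, and ask cURL to report the final URL it actually downloaded from using --write-out:

# Follow redirects (-L), save under a custom name (-o), and print the effective URL after all redirects
curl -L -o report.pdf -w 'Downloaded from: %{url_effective}\n' https://example.com/latest-report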
Downloading Multiple Files

cURL can also handle several downloads at once, saving you from running the command repeatedly.

Using Curly Braces and Patterns

If filenames share a pattern, curly braces {} let you list each name succinctly:

curl -O https://example.com/files/{file1.jpg,file2.jpg,file3.jpg}

cURL grabs each file in sequence, which is handy for scripted workflows.

Using Ranges

For a series of numbered or alphabetically labeled files, specify a range in brackets:

curl -O https://example.com/files/file[1-5].jpg

cURL automatically iterates from file1.jpg to file5.jpg. This is great for consistently named sequences of files.

Chaining Multiple Downloads

If the files live at different URLs, you can chain them together:

curl -O https://example1.com/file1.jpg -O https://example2.com/file2.jpg

This downloads file1.jpg from the first site and file2.jpg from the second without needing multiple commands.

Rate Limiting and Timeouts

In some situations you may want to control the download speed or stop cURL from waiting too long for an unresponsive server.

Bandwidth Control

To keep your network from being overwhelmed, or to simulate slow conditions, limit the download rate with --limit-rate:

curl --limit-rate 2M -O https://example.com/bigfile.zip

Here 2M stands for 2 megabytes per second. You can also use K for kilobytes or G for gigabytes.

Timeouts

If a server is too slow, you may want cURL to give up after a set time. The --max-time flag does exactly that:

curl --max-time 60 -O https://example.com/file.iso

Here cURL quits after 60 seconds, which is useful in scripts that need to fail promptly.

Silent and Verbose Modes

cURL can show minimal information or extensive detail, depending on what you need.

Silent Downloads

For batch tasks or cron jobs where you don't need progress bars, include -s (or --silent):

curl -s -O https://example.com/file.jpg

This hides progress and error output, which keeps logs clean; the trade-off is that silent failures are harder to troubleshoot.

Verbose Mode

In contrast, -v (or --verbose) prints detailed request and response information:

curl -v https://example.com

Verbose output is invaluable when debugging issues such as invalid SSL certificates or unexpected redirects.

Authentication and Security

Some downloads require credentials, or you may need a secure connection.

HTTP/FTP Authentication

When a server requires a username and password, use -u:

curl -u username:password -O https://example.com/protected/file.jpg

Embedding credentials directly on the command line is risky, as they may appear in logs or process lists. Consider environment variables or a .netrc file for more secure handling.

HTTPS and Certificates

By default, cURL verifies SSL certificates and blocks the transfer if the certificate is invalid. You can bypass this check with -k (or --insecure), but doing so introduces security risks. Whenever possible, use a certificate from a trusted certificate authority so that connections remain authenticated.

Using a Proxy

In some environments, traffic must pass through a proxy server before reaching the target.

Downloading Through a Proxy

Use the -x or --proxy option to specify the proxy:

curl -x http://proxy_host:proxy_port -O https://example.com/file.jpg

Replace proxy_host and proxy_port with the relevant details. cURL forwards the request to the proxy, which then retrieves the file on your behalf.
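Besides the -x flag, cURL also honors the standard proxy environment variables, which is convenient when every command in a session should use the same proxy. A minimal sketch with placeholder proxy details:

# Route subsequent curl requests through the proxy without repeating -x
export http_proxy="http://proxy_host:proxy_port"
export https_proxy="http://proxy_host:proxy_port"

# This download now goes through the proxy
curl -O https://example.com/file.jpg

# Unset the variables to return to a direct connection
unset http_proxy https_proxy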
Proxy Authentication

If your proxy requires credentials, supply them with -U (or --proxy-user):

curl -x https://proxy.example.com:8080 -U myuser:mypassword -O https://example.com/file.jpg

Again, storing sensitive data in plain text can be dangerous, so environment variables or configuration files offer more secure alternatives.

Monitoring Download Progress

Tracking download progress is useful for large files or slower links.

Default Progress Meter

By default, cURL shows a progress meter that includes the total size, transfer speed, and estimated time remaining. For example:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1256  100  1256    0     0   2243      0 --:--:-- --:--:-- --:--:--  2246

This readout helps you gauge how much remains and whether the transfer rate is acceptable.

Compact Progress Bar

If you want fewer details, add -#:

curl -# -O https://example.com/largefile.iso

A simpler bar shows the overall progress as a percentage. It's easier on the eyes but omits stats like the current speed.

Capturing Progress in Scripts

When using cURL in scripts, you may want to record progress data. cURL sends progress information to stderr, so you can redirect it:

curl -# -O https://example.com/largefile.iso 2>progress.log

Here progress.log captures the status updates, which you can parse or store for later review.

Conclusion

cURL is a flexible command-line tool for downloading files across multiple protocols and environments. Whether you need to handle complex redirects, rename files on the fly, or throttle bandwidth, cURL has you covered. By mastering its core flags and modes, you can integrate it seamlessly into your daily workflow for scripting, automation, and more efficient file transfers.
29 January 2025 · 7 min to read

Answers to Your Questions

How secure are cloud databases?

Hostman Cloud Databases emphasize security through advanced measures like encryption, regular security audits, and stringent access controls. Our robust cloud database security ensures that your data is protected against threats and unauthorized access, giving you peace of mind.

What types of databases are supported in your cloud database offering?

Hostman Cloud Database supports a wide range of database types, including relational databases such as MySQL and PostgreSQL and NoSQL databases such as MongoDB and Redis, among others. Our flexible platform lets you choose the right database for your unique requirements.

Can I easily scale my database resources in your cloud environment?

Yes, scaling with Hostman Cloud Databases is simple and effortless. You can quickly scale your database resources up or down to meet the demands of your workload, maintaining peak performance even during traffic surges.

What backup and recovery options are available for cloud databases?

Hostman Cloud Database offers reliable backup and recovery options. Automated backups, regularly scheduled snapshots, and point-in-time recovery ensure that your data is safe and can be promptly restored in the event of a failure.

How does your pricing model work for cloud databases?

Our cloud database services operate on a pay-as-you-go model. You only pay for the storage and processing power that you actually use.

Is there any downtime during database maintenance?

Hostman Cloud Database's continuous updates and automatic failover reduce downtime during upgrades or maintenance. Even while maintenance is underway, your databases remain accessible and functional thanks to our cloud-based database solutions.

Can I migrate my existing databases to your cloud platform?

Absolutely. You can migrate your existing databases to Hostman Cloud Database with ease: our migration tools and professional support team help you transfer databases to our cloud platform with minimal downtime and disruption.

What level of support do you provide for managing and troubleshooting database issues?

Hostman provides extensive help with managing and troubleshooting database issues. Our technical support staff is available around the clock to assist with any inquiries or problems and keep your database operations running smoothly.

Can I integrate third-party tools or applications with your cloud databases?

Yes. Our flexible platform lets you quickly connect third-party tools, applications, and additional services to extend the functionality of your database.

How do you handle data encryption and data privacy in your cloud database environment?

Hostman uses robust encryption mechanisms to guarantee data confidentiality and privacy. Data is encrypted both in transit and at rest, protecting it against unauthorized access and helping you stay compliant with data protection regulations.

What monitoring and performance tuning options are available for cloud databases?

Hostman Cloud Database includes advanced monitoring and performance optimization features. Real-time monitoring, performance analysis, and automated alerting tools help you maximize database performance and promptly address any problems.

Take your database
to the next level

Contact us for a personalized consultation. Our team is here
to help you find the perfect solution for your business and
support you every step of the way.
Email us
Hostman's Support