
Docker Complete Guide: All You Need to Know About Docker and Docker Containers

Hostman Team
Technical writer
Infrastructure

Want to know what a Docker container is? In this guide, we'll cover everything about Docker: what a Docker container is, how to run containers, and the difference between Docker images and containers.

If you want to really understand the thought process behind Docker, there’s no better source than the person who created it - Solomon Hykes, founder and CTO of dotCloud. Although this YouTube introduction was uploaded 7+ years ago, it is still perfectly relevant.

That said, you might not have 47 minutes to spare on watching the full presentation, especially since it’s pretty technical and might require multiple viewings. That’s why we’ve created this quick guide for you.

What is Docker?

Docker is a lightweight, open-source virtualization tool.

Here’s the description on Docker’s official website:

“Docker takes away repetitive, mundane configuration tasks and is used throughout the development lifecycle for fast, easy and portable application development - desktop and cloud. Docker’s comprehensive end to end platform includes UIs, CLIs, APIs and security that are engineered to work together across the entire application delivery lifecycle.”

Architecture

Docker runs at the operating system level. It automates the deployment of applications in Linux containers, and allows you to package an application with all the necessary dependency structures (code, runtime, libraries, environment variables, configuration files) into a container.

In his presentation, Solomon breaks down the unique proposition that distinguishes Docker from other solutions out there:

"A lot of [container] tools use containers as miniature servers… just like a VM [virtual machine] but way faster…. We [Docker] use containers as a unit of software delivery."

What are Docker containers and what are they used for?

A container is an isolated environment whose processes do not interfere with operations outside of it.


A container uses only a portion of the operating system and shares the host's kernel. This significantly reduces the consumption of system resources, because only the application and its data are allocated to the container, rather than an entire guest operating system as in the case of a virtual machine.

This makes Docker containers particularly suited to rapid application deployment, ease of testing, maintenance, and troubleshooting, while enhancing security.

One of the practical benefits of Docker containers is that they simplify the work of large development teams. Tasks can be easily split up and implemented in different programming languages.

Common use cases for Docker include:

  • Automating the packaging and deployment of applications

  • Creating lightweight, private PaaS environments

  • Automating testing and continuous integration/deployment

  • Deploying and scaling web apps, databases and backend services

How does a container work?

There are five basic tenets of Docker containers:

  1. The lifespan of a container is tied to the lifespan of the process contained within it.

  2. Inside the container, this process has pid = 1, which means it is the parent process that starts before all other processes.

  3. Alongside the process with pid = 1, you can spawn as many other processes as you wish (within the limitations of the OS). Killing (restarting) the process with pid = 1 stops the container. (see item 1)

  4. Inside the container, you will see the usual FHS-compliant directory layout. This layout matches that of the distribution the container image is based on.

  5. The data created inside the container stays in the container's writable layer and is not saved anywhere else. The host OS has access to this layer, but deleting the container discards all changes. For this reason, persistent data is not stored inside containers but is moved out to the host OS, for example with volumes or bind mounts.
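
A minimal sketch of these tenets in action, assuming Docker is installed and using /tmp/appdata as an arbitrary example directory on the host:

```shell
# Tenet 2: the container's main process runs with PID 1
docker run --rm ubuntu bash -c 'echo $$'   # prints 1

# Tenet 5: without a mount, data is discarded with the container
docker run --name scratch ubuntu bash -c 'echo hello > /data.txt'
docker rm scratch                          # /data.txt is gone along with the container

# With a bind mount, the data lands on the host and survives
mkdir -p /tmp/appdata
docker run --rm -v /tmp/appdata:/data ubuntu bash -c 'echo hello > /data/file.txt'
cat /tmp/appdata/file.txt                  # the file remains on the host
```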

How to create a Docker container

In this guide, we’ll be showing you how to install Docker on Ubuntu 20.04, one of the most popular Linux distributions.

We recommend using the official Docker packages for installation; you can find all the necessary details in Docker's official documentation. No complicated configuration is required at this stage. Once installation has finished, start the service, check its status, and set it to start at boot:

sudo apt-get update

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io

sudo systemctl start docker

sudo systemctl enable docker

sudo systemctl status docker


Next, run a test image to check that Docker is working correctly:

docker run hello-world

You should see the following message:

“Hello from Docker! This message shows that your installation appears to be working correctly."


How to name Docker containers

To avoid having to recognize and remember container IDs, you can assign a unique name to each container using the --name command-line option when creating it. See the example below:

docker run --name myname ubuntu cat /etc/debian_version

Once you’ve done this, you can start using the container (execute the start, stop, remove, top, stats commands) by referring to it by name, for example:

docker start myname – starts the container

docker stats myname – displays resource usage statistics

docker top myname – displays the processes running in the container

How to start and restart Docker containers?

To start the container in the background (daemon mode), use the -d option:

docker run -td ubuntu bash

To reconnect to a running container, you need to know its ID or name. Run docker ps to find it, then pass the ID or name to the attach command:

docker attach CONTAINER

How to stop a Docker container

A container is usually terminated automatically after the completion of the process. However, there might be instances where you want to terminate the container yourself. The stop command performs a "soft" shutdown of the container, by default allowing 10 seconds for all processes to terminate:

docker container stop CONTAINER

If you want immediate termination, you can execute the kill command. However, in most situations, using stop is preferable.

docker container kill CONTAINER

Here’s the command for immediate termination of all running containers:

docker container kill $(docker ps -q)

How to delete a container

To delete a Docker container, use the following command:

docker container rm CONTAINER

Here’s how to remove all Docker containers that are not running:

docker container rm $(docker ps -a -q)

How to run Docker containers

To interactively connect to the shell of the container and run commands, as in a regular Linux system, you need to create a container with the following settings:

docker container run -it ubuntu bash

The structure of this command is as follows:

  • -i starts an interactive session.

  • -t allocates a pseudo-TTY and keeps the standard input and output streams attached.

  • ubuntu is the image used to create the container.

  • bash is the command run inside the Ubuntu container.

After starting the container with the above settings, you are dropped into a shell inside the container. Use the exit command to leave the session and return to your host's terminal. This interrupts all container processes and stops the container:

exit

If you are interactively connected to a container and need to log out without interrupting the session, you can exit the console and return to your host's terminal by pressing Ctrl + P and then Ctrl + Q.
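
As an alternative to attach, docker exec opens an additional shell in a running container; exiting that shell does not stop the container. A short sketch (the name mycontainer is arbitrary):

```shell
docker run -dt --name mycontainer ubuntu bash   # start a container in the background
docker exec -it mycontainer bash                # open an extra interactive shell
# typing `exit` here closes only this shell; the container keeps running
docker stop mycontainer                         # stop it explicitly when done
```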

There are several services that help users deploy Docker containers with just a few clicks. Hostman has swiftly earned a reputation as a go-to option for deploying from GitHub, Bitbucket, or GitLab repositories.

How to SSH into a Docker container?

For SSH authentication when connecting remotely (for example, with rsync), the two main methods are a login-password pair (the password is typed at the console) and key-based authorization (a private-public key pair is created, and the public key is copied to the remote server). The first method cannot be used in scripts run by cron, so for automation you need to set up passwordless, key-based login.

Generating keys for the current user:

ssh-keygen -t rsa

After entering this command you’ll be asked some questions. You can just agree with all the default options.

Copy the key to the remote server (you will enter the remote server's password once):

ssh-copy-id -i ~/.ssh/id_rsa user@ssh-server

Checking the ability to log in:

ssh user@ssh-server

How to connect to a running container?

If you have multiple Docker containers running and want to choose which one to work with, you will need to list them by using the ls command. In addition to displaying a list of containers, this command also displays useful information about them. The command without any settings displays a list of running containers:

docker container ls


The -a setting tells the command to list all containers, not just running ones, while the -s option displays the size of each container:

docker container ls -a


The inspect command displays a lot of useful information about the container:

docker container inspect CONTAINER

To display container logs, run the logs command:

docker container logs CONTAINER
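
For live troubleshooting, the logs command accepts flags to stream and trim output (CONTAINER stands for your container's name or ID):

```shell
docker container logs -f --tail 100 CONTAINER   # follow new output, starting from the last 100 lines
docker container logs --since 10m CONTAINER     # show only entries from the last ten minutes
```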

What is the difference between a Docker container and a Docker image?

Docker works with the following fundamental objects:

  • A container is a running application environment. When a container is launched from an image containing the necessary configuration data, a new writable layer is created on top of that image. If you save the changes, they are stored as a new image layer while the old layers remain unchanged.

  • An image is a static snapshot of a container's filesystem and configuration. Image layers are immutable: all changes happen in the topmost writable layer and can be preserved only by creating a new image. Each image depends on one or more parent layers.
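
The layering described above can be observed with docker commit and docker history. This is a sketch; myimage:v1 is an arbitrary tag chosen for the example:

```shell
# Change something in a container started from the ubuntu image
docker run --name builder ubuntu bash -c 'echo hi > /greeting.txt'

# Freeze the container's writable layer into a new image;
# the original ubuntu image remains unchanged
docker commit builder myimage:v1

# Inspect the stack of layers the new image is built from
docker history myimage:v1
```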

Why are Docker images so valuable?

Docker images are valuable because they provide the ready-made environment from which containers are created. To get started with Docker, you first need to download an image from Docker Hub to your machine.
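
For example, to pull the official ubuntu image from Docker Hub and confirm it is available locally:

```shell
docker pull ubuntu    # download the image from Docker Hub
docker image ls       # list the images stored locally
```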

What benefits do containers offer?

When working with application containerization in Big Data, the following advantages of this technology are most significant:

  1. Standardization: because containers are built on open standards, they run on all major Linux distributions, Windows, and other popular operating systems.

  2. Portability: a container is independent of the resources and architecture of the physical host it runs on, so a container image moves easily from one environment to another, supporting a continuous DevOps pipeline from development and testing to deployment (CI/CD).

  3. Isolation: the application in a container runs in an isolated environment and does not interfere with the memory, processes, or filesystem of other containers or the host OS. This guarantees process isolation inside the container and provides a degree of security.

  4. Reusability: all the components required to run the application are packaged into one image that can be run any number of times.

  5. Fast deployment: creating and launching a container takes considerably less time than provisioning a virtual machine instance or setting up a full working environment.

  6. Increased productivity: packaging each microservice of a complex system into a separate container owned by a single developer makes it possible to parallelize work without mutual dependencies and conflicts.

  7. Simplified monitoring: versioned container images make it possible to track updates and prevent synchronization issues.

Summary

Supporting Big Data applications (yours or others’) that no longer fit in your head doesn’t have to be a nightmare.

With this handy guide to Docker, you’ll be able to cold-start an application on a new machine, with just a few clicks, in under a minute.

More importantly, with the reassurance that all of your data is safe, you’ll finally be able to focus exclusively on writing useful code instead of wasting time and energy on server-related troubleshooting.

Infrastructure

Similar

Infrastructure

Best Midjourney Alternatives in 2025

Midjourney is one of the most popular AI networks for image generation. The service has established itself as a leader in the field of generative AI. However, the existence of a paid subscription and access limitations (for example, the requirement to use Discord or lack of support in certain regions) increasingly prompts users to consider alternatives. We have compiled the best services that can replace Midjourney,  from simple tools to professional solutions. Why Are Users Looking for a Midjourney Alternative? Midjourney is a powerful tool, but it has its drawbacks: Paid Access: Since March 2023, Midjourney has fully switched to a paid model, with a minimum subscription of $10 per month, which may be expensive for beginner users. Usage Limitations: A Discord account is required, and for users in some countries, access is restricted due to regional limitations. Complex Interface: Beginners may find it difficult to navigate working through the Discord bot. Fortunately, there are many apps like Midjourney that offer similar functionality and more user-friendly interfaces. We will review seven of the best Midjourney alternatives. For all the AI networks considered, we will generate an image using the following prompt: “Generate an image of the Swiss Alps.” Free Alternatives First, let’s look at Midjourney alternatives that can be used for free. Playground AI Playground AI is an AI network that works on modern generative models, including Stable Diffusion XL, and allows generating images from text prompts or editing existing images. A unique feature of Playground AI is the ability not only to generate an image from scratch but also to refine it within the same interface. Users can correct individual details, replace elements (for example, hands), perform upscaling to increase detail, or draw additional parts of the image on a special working field (canvas) with a seamless continuation of the image. Using the free plan, users can generate up to 5 images every 3 hours. 
Advantages: Work with a library of ready-made images and prompts, and the ability to copy and refine other users’ creations. Built-in canvas tool for extending and editing images while maintaining stylistic consistency. Support for multiple models. Image generated by Playground AI using the prompt “Generate an image of the Swiss Alps” Bing Image Creator Bing Image Creator is an image generation tool from Microsoft, based on the latest version of OpenAI’s DALL·E model. The service works using a diffusion architecture: the AI network analyzes the text prompt and synthesizes a unique image considering specified styles, details, emotions, backgrounds, and objects. Users can describe the desired image in any language, and the AI interprets the prompt to generate multiple options for selection. Advantages: Completely free. Multiple image generation models to choose from. Integration with Microsoft ecosystem: Microsoft Copilot, Bing, Bing Chat, Microsoft Edge. Built-in content filtering and internal security algorithms to prevent illegal or inappropriate image generation. Image generated by Bing Image Creator using the prompt “Generate an image of the Swiss Alps” Paid Alternatives Among the paid Midjourney alternatives, the following stand out. Leonardo AI Leonardo AI functions as a cloud platform for AI-based image generation. Its main function is creating high-quality visual materials from text descriptions. Leonardo AI uses modern image generation algorithms similar to diffusion models, with additional innovative tools to improve quality and flexibility. Users can select from multiple artistic styles and genres, and also use the Image2Image feature to upload a reference image for more precise control. Users can adjust the “weight” of the generated image to balance between strict adherence to the reference and creative interpretation of the text. Advantages: Free access with a limit (up to 150 tokens per day). Ability to train custom AI models. 
Wide choice of styles and customization tools. Support for generating textures and 3D objects. Convenient prompt handling: a built-in prompt generator helps beginners formulate queries, while experienced users can optimize prompts for better results. Image generated by Leonardo AI using the prompt “Generate an image of the Swiss Alps” Stable Diffusion Stable Diffusion is a modern text-to-image generation model that uses diffusion model technology. Developed by Stability AI in collaboration with researchers from LMU Munich and other organizations, the model was released in 2022 and quickly gained popularity due to its openness and high efficiency. Stable Diffusion can be accessed through many services, including DreamStudio, Stable Diffusion Online, Tensor.Art, and InvokeAI. Advantages: Multiple interfaces available. Flexible settings (Negative Prompt, aspect ratio, generation steps, fine-tuning, service integration, inpainting for parts of an image, outpainting for backgrounds). Numerous custom models (anime, realism, fantasy). Possibility of local deployment on powerful PCs. Open-source code. Unlike many proprietary models (DALL-E, Midjourney), Stable Diffusion can be run, trained, and modified locally. Image generated by Stable Diffusion using the prompt “Generate an image of the Swiss Alps” NightCafe NightCafe is an online platform for generating images from text prompts and images. It uses multiple advanced algorithms and generation models, such as VQGAN+CLIP, DALL·E 2, Stable Diffusion, Neural Style Transfer, and Clip-Guided Diffusion. Users input a text prompt or upload an image, and the AI transforms it into a unique artistic work. Various styles, effects, resolution and detail settings, as well as editing and upscaling options, are available. Advantages: Numerous options for customizing generated images, suitable for digital art, NFTs, and other purposes. 
Built-in functionality for modifying existing images via text prompts, scaling without quality loss, and object removal. Free access with limited generations. Support for multiple styles and algorithms. User-friendly interface. Image generated by NightCafe using the prompt “Generate an image of the Swiss Alps” Artbreeder Artbreeder operates using generative adversarial networks (GANs). The main principle is creating new images by “crossing” or blending two or more images (“parents”), with fine control over parameters (“genes”) that determine various image traits. Users can interactively control the resulting image with sliders, adjusting characteristics like age, facial expression, body type, hair color, level of detail, and other visual elements. Advantages: Interactive blending allows combining different images to create unique compositions, such as portraits, landscapes, or anime styles. Detailed manual adjustments of each image parameter (brightness, contrast, facial features, accessories, etc.) allow for highly refined results. Image generated by Artbreeder using the prompt “Generate an image of the Swiss Alps” Ideogram  Ideogram is a generative AI model specialized in creating images containing text. It uses advanced deep learning and diffusion algorithms. Unlike many other AI visualization tools, Ideogram can generate clear, readable text within images, making it especially useful for designing logos, posters, advertisements, and other tasks where combining graphics and text is important. Advantages: Free generations with selectable styles. Support for integrating readable and harmonious text into images—convenient for designers, marketing teams, and social media specialists. Built-in social platform with user profiles, sharing capabilities, and community interaction. 
Image generated by Ideogram using the prompt “Generate an image of the Swiss Alps” Conclusion The choice of a Midjourney alternative depends on your goals and preferences: if you need the highest-quality image generation, consider Ideogram or Stable Diffusion 3. For free solutions, Leonardo AI and Playground AI are suitable, and if speed and simplicity are priorities, Bing Image Creator from Microsoft is a good option. Each service has its own advantages, whether it is accessibility, detail quality, or flexibility of settings. It’s worth trying several options to find the best tool for your needs.
11 September 2025 · 7 min to read
Infrastructure

Google AI Studio: Full Guide to Google’s AI Tools

Google AI Studio is a web platform from Google for working with neural networks. At the core of the service is the family of advanced multimodal generative models, Gemini, which can handle text, images, video, and other types of data simultaneously. The platform allows you to prototype applications, answer questions, generate code, and create images and video content. Everything runs directly in the browser—no installation is required. The main feature of Google AI Studio is versatility. Everything you need is in one place and works in the browser: you visit the site, write a query, and within seconds get results. The service allows users to efficiently leverage the power of Google Gemini for rapid idea testing, working with code or text. Additionally, Google AI Studio can be used not only for answering questions but also as a starting point for future projects. The platform provides all the necessary tools, and Google does not claim ownership of the generated content. You have access not only to a standard chat with generative AI but also to specialized models for generating media content, music, and applications. Let’s go through each in detail. Chat This is the primary workspace in Google AI Studio, where you work with prompts and configure the logic and behavior of your model. Chat Options At the top, there are tools for working with the chat itself. System Instruction The main configuration block, which defines the “personality,” role, goal, and limitations for the model. It is processed first and serves as a permanent context for the entire dialogue. The system instruction is the foundation of your chatbot. The field accepts text input. For maximum effectiveness, follow these principles: define the role (clearly state what the model is), define the task (explain exactly what the model should do), set the output format, establish constraints (prevent the model from going beyond its role). 
Example instruction: "You are a Senior developer who helps other developers understand project code. You provide advice and explain the logic of the code. I am a Junior who will ask for your help. Respond in a way I can understand, point out mistakes and gaps in the code with comments. Do not fully rewrite the code I send you—give advice instead." Show conversation with/without markdown formatting Displays text with or without markdown formatting. Get SDK Provides quick access to API code by copying chat settings into code. All model parameters from the site are automatically included. Share prompt Used to send a link to your dialogue with the AI. You must save the prompt before sharing. Save prompt Saves the prompt to your Google Drive. Compare mode A special interface that allows you to run the same prompt on different language models (or different versions of the same model) simultaneously and instantly see their responses side by side. It’s like parallel execution with a visual comparison. Clear chat Deletes all messages in the chat. Model Parameters In this window, you select the neural network and configure its behavior. Model Select the base language model. AI Studio provides the following options: Gemini 2.5 Pro: a “thinking” model capable of reasoning about complex coding, math, and STEM problems, analyzing large datasets, codebases, and documents using long context. Gemini 2.5 Flash: the best model in terms of price-to-performance, suitable for large-scale processing, low-latency tasks, high-volume reasoning, and agentic scenarios. Gemini 2.5 Flash-Lite: optimized for cost-efficiency and high throughput. Other available models include Gemini 2.0, Gemma 3, and LearnLM 2.0. More details about Gemini Pro, Flash, Flash-Lite, and others can be found in the official guide. Temperature: Controls the degree of randomness and creativity in the model’s responses. Higher values produce more diverse and unexpected answers, usually less precise. 
Lower values make responses more conservative and predictable. Media resolution: Refers to the level of detail in input media (images and video) that the model processes. Higher resolution allows Gemini to “see” and analyze more details, but requires more tokens for analysis. Thinking mode: Switches the model into a reasoning mode. The AI decomposes tasks and formulates instructions rather than outputting a result immediately. Set thinking budget: Limits the maximum number of tokens for the reasoning mode. Structured output: Allows developers and users to receive AI responses in predefined formats like JSON. You can specify the desired output format manually or via a visual editor. Grounding with Google Search: Enables Gemini to access Google Search in real-time for the most relevant and up-to-date information. Responses are based on search results rather than internal knowledge, reducing “hallucinations.” URL Context: Enhances grounding by allowing users to direct Gemini to specific URLs for context, rather than relying on general search. Stop sequences: Allows up to 5 sequences where the model will immediately stop generating text. Stream The Stream mode is an interactive interface for continuous dialogue with Gemini models. Supports microphone, webcam, and screen sharing. The AI can “see” and “hear” what you provide. Turn coverage: Configures whether the AI continuously considers all input or only during speech, simulating natural conversation including interruptions and interjections. Affective dialog: Enables AI to recognize emotions in your speech and respond accordingly. Proactive audio: When enabled, AI filters out background noise and irrelevant conversations, responding only when appropriate. Generate Media This section on the left panel provides interfaces for generating media: speech, images, music, and video. Gemini Speech Generator Converts text into audio with flexible settings. 
Use for video voice-overs, audio guides, podcasts, or virtual character dialogues. Tools include Raw Structure (scenario definition), Script Builder, Style Instructions, Add Dialog, Mode (monologue/dialogue), Model Settings, and Voice Settings. Main tools on the control panel: Raw Structure: Defines the scenario—how the request to the model for speech generation will be constructed. Script Builder: Instruction for dialogue with the ability to write lines and pronunciation style for each speaker. Style Instructions: Set the emotional tone and speech pace (for example: friendly, formal, energetic). Add Dialog: Add new lines and speakers. Mode: Choice between monologue and dialogue (up to 2 participants). Model Settings: Adjust model parameters, for example, temperature, which affects the creativity and unpredictability of speech. Voice Settings: Select a voice, adjust speed, pauses, pitch, and other parameters for each speaker. Image Generation A tool for generating images from a text description (prompt). Three models are available: Imagen 4 Imagen 4 Ultra Imagen 3 Imagen 4 and Imagen 4 Ultra can generate only one image at a time, while Imagen 3 can generate up to four images at once. To generate, enter a prompt for the image and specify the aspect ratio.  Music Generation A tool for interactive real-time music creation based on the Lyria RealTime model. The main feature is that you define the sound you want to hear and adjust its proportion. The more you turn up the regulator, the more intense the sound will be in the final track. You can specify the musical instrument, genre, and mood. The music updates in real time. Video Generation A tool for video generation based on Veo 2 and Veo 3 models (API only). Video length up to 8 seconds, 720p quality, 24 frames per second. Supports two resolutions—16:9 and 9:16. Video generation from an image: Upload a file and write a prompt. The resulting video will start from your image. 
Negative prompt support: Allows specifying what should not appear in the frame. This helps fine-tune the neural network’s output. App Generation Google AI Studio instantly transforms high-level concepts into working prototypes. To do this, go to the Build section. Describe the desired application in the prompt field and click Run. AI Studio will analyze this request and suggest a basic architecture, including necessary API calls, data structures, and interaction logic. This saves the developer from routine setup work on the initial project and allows focusing on unique functionality. The app generation feature relies on an extensive template library. Conclusion Google AI Studio has proven itself as a versatile platform for generative AI. It combines Gemini chat, multimodal text, image, audio, video generation, and app prototyping tools in one interface. The platform is invaluable for both developers and general users. Even the free tier of Google AI Studio covers most tasks—from content generation to MVP prototyping. Recent additions include Thinking Mode, Proactive Audio, and Gemini 2.5 Flash, signaling impressive future prospects.
10 September 2025 · 8 min to read
Infrastructure

Cloud vs Dedicated Server for E-commerce

If your online store is growing, sooner or later a key infrastructure question arises: cloud or dedicated server? Which one can be launched faster, which will survive peak loads without “crashes,” and how much will it cost with backups and administration? In this article, we will examine the key differences between the cloud and a dedicated server, ways of calculating the total cost of ownership (TCO), and typical bottlenecks in e-commerce: the database, cache, and static files. Cloud and Dedicated Server: Main Differences Let’s draw a simple analogy. The cloud is like a room in a hotel: you can move in quickly, request another room if necessary, cleaning and maintenance are included.  A dedicated server is like owning a house: it is completely yours, no one shares resources, but you need to take care of it yourself. How the Cloud Works You create a cloud server with the necessary configuration and can quickly change its parameters: increase memory, disk space, or add another server for web applications. Usually, this is accompanied by a flexible payment system—for example, in Hostman it is hourly.  The advantages are quick launch, scalability, convenient backups and snapshots. The disadvantages are that with excessive resources it is easy to overpay, and with round-the-clock high load, the cost may be higher than that of a dedicated server. How a Dedicated Server Works This is renting a physical server in a data center. The resources are entirely yours: CPU, memory, disks—without any “neighbors.”  The advantages are stable performance and a favorable price with constant heavy traffic. The disadvantages are slower scaling (waiting for an upgrade or migration), service downtime during failures may last longer, and administration of the server and organization of backups are entirely the responsibility of the client. What’s More Important for a Small Store You can launch an online store in the cloud today, in mere hours. 
When renting a dedicated server, allow time for its preparation: engineers need to assemble and test the configuration, especially if you ordered a custom one. This usually takes a couple of days.

In the cloud, resources can be increased in a few clicks. On a dedicated server, scaling takes longer: you need to coordinate changes with engineers, wait for components, and install them in the data center. In some cases it may require migration to a new server.

The cloud offers many ready-made tools and automation. A dedicated server, as a rule, requires more manual configuration and the regular involvement of an engineer.

As for money: if you have 20–300 orders per day and traffic fluctuates, the cloud is usually more cost-effective and well suited to the task. If order volume is consistently high around the clock, without sharp spikes, a dedicated server is the more dependable choice.

In short: if you are just starting out, choose the cloud. When the load becomes consistently high, you can consider a dedicated server.

Key Criteria When Choosing Infrastructure for an Online Store

Let’s look at the key criteria to consider when choosing between a cloud and a dedicated server.

Speed of launch

It is important for a business to launch in hours, not days. A cloud server and database are ready in minutes. A dedicated server takes longer to prepare: on average about an hour, and up to several days for a custom configuration.

Expenses

Expenses in a small project can be calculated as the sum of these items:

- Infrastructure: server, disks, traffic, IP addresses, domains, CDN.
- Reliability: backups and storing copies separately.
- Administration: updates, monitoring, on-call duty.
- Downtime: how much one hour of downtime costs (lost revenue plus reputation).

Peak loads

Sometimes stores run sales, order promotions from bloggers, or simply hit the busy season. In the cloud, you can scale horizontally by adding another VM, or vertically by adding more vCPU and RAM.
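The expense items above can be turned into a rough monthly TCO estimate. The sketch below is illustrative only: all figures are made-up placeholders, not real provider rates, and the function name is our own.

```python
# Illustrative monthly TCO estimate for a small store.
# All prices below are made-up placeholders, not real provider rates.

def monthly_tco(infrastructure, backups, admin_hours, admin_rate,
                expected_downtime_hours, revenue_per_hour):
    """Sum the four expense items from the article: infrastructure,
    reliability (backups), administration, and the expected cost
    of downtime (lost revenue)."""
    administration = admin_hours * admin_rate
    downtime = expected_downtime_hours * revenue_per_hour
    return infrastructure + backups + administration + downtime

cloud = monthly_tco(infrastructure=40, backups=5, admin_hours=2,
                    admin_rate=30, expected_downtime_hours=0.5,
                    revenue_per_hour=100)
dedicated = monthly_tco(infrastructure=120, backups=10, admin_hours=6,
                        admin_rate=30, expected_downtime_hours=1,
                        revenue_per_hour=100)
print(cloud, dedicated)  # plug in your own numbers to compare
```

The point of the exercise is that administration hours and downtime often dominate the hardware price, which is why a "cheaper" dedicated server can cost more in practice.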
To speed up the loading of images and static files, you can connect a CDN; this is equally available in the cloud and on a dedicated server. With a dedicated server, you either pay for all the reserve capacity year-round or request the installation of additional modules, which again can take hours or days depending on component availability.

Reliability and recovery

There are two main parameters to consider when planning recovery:

- RTO: how long the project can take to recover after downtime (goal: up to an hour).
- RPO: how much data you are prepared to lose during recovery (goal: up to 15 minutes, meaning that after the system is restored, you lose at most the data created in the 15 minutes before the failure).

Check that backups run on a schedule, that copies are stored outside the production server, and that the system can recover automatically if production goes down.

Security

At a minimum, serve the site over an SSL certificate, enable multi-factor authentication in the control panel for administrators, and create a private network between the web server and the database.

Performance

The usual bottlenecks in e-commerce are the database, cache, and images. To avoid problems when scaling, put images and videos in object storage and keep the database as a separate service, preferably with data replication. Monitor the response times of the cart and checkout pages: this is where sales most often fail when pages respond slowly.

Growth and flexibility

We recommend starting with a simple and reliable scheme: one cloud server, one separate database (DBaaS), and object storage for media. If you plan a sale, add another cloud server and a load balancer to distribute user traffic, then return to the original scheme afterwards. Flexibility here may matter more than a “perfect” architecture at the start.
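A useful way to reason about RPO is that your worst-case data loss equals the gap between two backups. Here is a minimal sketch of that check, using the goal values from the text (15 minutes RPO, one hour RTO); the function and schedule are hypothetical.

```python
# Sanity-check a backup schedule against the RPO/RTO goals
# mentioned in the text: RPO <= 15 min, RTO <= 60 min.

def meets_goals(backup_interval_min, restore_time_min,
                rpo_goal_min=15, rto_goal_min=60):
    """Worst-case data loss is the interval between two backups (RPO);
    restore_time_min is how long a full recovery takes (RTO)."""
    return (backup_interval_min <= rpo_goal_min
            and restore_time_min <= rto_goal_min)

print(meets_goals(backup_interval_min=15, restore_time_min=45))    # passes
print(meets_goals(backup_interval_min=1440, restore_time_min=45))  # daily-only backups fail the RPO goal
```

This is why the article recommends incremental backup points during the day on top of the daily full backup: a single nightly dump alone gives an RPO of up to 24 hours.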
Team competencies

If the team has no system administrator, or no developer who can perform sysadmin tasks, choose simple solutions: ready-made CMS images, managed databases, automatic backups, and built-in monitoring. The less manual work, the fewer the risks.

Building Reliable Infrastructure

For a small store, a simple logic works: start with a minimal but healthy architecture, increase capacity quickly during sales, and just as quickly return to normal mode.

Start with a clean cloud server on Ubuntu LTS, set up access via SSH keys, and disable password login. At the firewall level, leave only ports 80 and 443 open and block the rest. An alternative is to use a control panel (cPanel, FastPanel, etc.), where the stack is deployed out of the box and administration is done through a convenient graphical interface.

Place the database separately and connect it to the application through a private network. This way it will not be accessible from the internet, and latency will be lower. Create a separate database user with minimal rights for the site, enable daily backups, and store them outside the production environment.

For sessions and cache, use Redis: it will reduce the load on the database and speed up product cards, search, and order processing.

Move media files to object storage: a CMS can easily be configured so that new uploads go to S3. On top of this, connect a CDN for images, JS, and CSS; this provides a stable response speed for users in any region and takes a significant load off the web servers. Do not forget the Cache-Control and ETag headers: they let users’ browsers keep static files in the local cache longer, which speeds up page loading and reduces server load.

Backups are part of the daily routine. For the database, make a daily full backup and several incremental points during the day, keep copies for at least 30 days, and place them in another project or storage.
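To make the Cache-Control and ETag advice concrete, here is a minimal sketch of what those headers look like for a static asset. In a real deployment you would set them in the web server or CDN configuration rather than in application code; the helper below is purely illustrative.

```python
# Illustrative only: what Cache-Control / ETag headers for a static
# file might look like. Real setups configure this in the web server
# or CDN, not in Python.
import hashlib

def static_headers(content: bytes, max_age_days: int = 30) -> dict:
    """A long-lived public cache lifetime plus a content-derived ETag,
    so browsers can revalidate cheaply with If-None-Match."""
    etag = hashlib.md5(content).hexdigest()
    return {
        "Cache-Control": f"public, max-age={max_age_days * 86400}",
        "ETag": f'"{etag}"',
    }

headers = static_headers(b"body { color: black; }")
print(headers["Cache-Control"])  # public, max-age=2592000
```

With headers like these, a returning visitor's browser serves images, JS, and CSS from its local cache for up to 30 days and, when the lifetime expires, asks the server only "has this ETag changed?" instead of re-downloading the file.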
Protect files and media with S3 versioning and weekly server snapshots. Once a quarter, perform a recovery “from scratch” on a clean machine to verify your RTO and RPO.

Monitoring lets you reduce risks and prevent losses before failures occur. Watch the response time of the cart and checkout pages, CPU load, and free disk space. Tie threshold values to your traffic: if response time degrades while CPU stays high, get ready to scale.

A sales campaign should be prepared as carefully as a separate release. A day or two before launch, make a snapshot and bring up a second machine, enable the load balancer, and check that sessions live in Redis so carts are not lost. Prepare the CDN in advance by warming it with the most visited pages, product cards, and search results. Increase database resources ahead of time and check the indexes on fields used for filtering and sorting. After the campaign ends, shut down the additional servers.

Approach security without excessive measures, but consistently and systematically. In the store’s admin panel, enable multi-factor authentication and roles; on servers, prohibit SSH passwords, restrict access by IP, and use fail2ban against password brute force.

To avoid overpaying, cost the infrastructure by role: server, database, S3 storage, CDN, snapshots, and admin hours. Launch additional capacity only on peak days, and under normal load plan the infrastructure around basic needs. Evaluate the cost of downtime: if it is higher than the cost of an additional server for a week, reserving resources for a promotion is economically justified.

Migration from a dedicated server to cloud hosting is safe if done in two phases. Prepare a copy of the infrastructure, place media files in S3 storage, and run the site on a test domain with regular database synchronization. On migration day, freeze changes, make the final dump, lower the DNS TTL, and switch DNS.
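The "degrading response time plus high CPU" scaling signal can be sketched as a simple check. The threshold values below are examples to be tuned to your own traffic, not recommendations.

```python
# Toy monitoring rule: scale out when checkout latency degrades
# while CPU stays high. Thresholds are illustrative examples.

def should_scale(checkout_ms_p95, cpu_percent,
                 ms_threshold=800, cpu_threshold=80):
    """High latency alone may be a slow query; high CPU alone may be
    harmless. The combination suggests the server is saturated."""
    return checkout_ms_p95 > ms_threshold and cpu_percent > cpu_threshold

print(should_scale(1200, 92))  # slow checkout + busy CPU: scale
print(should_scale(300, 92))   # CPU is busy but users are fine: hold
```

In practice a rule like this lives in your monitoring system's alerting configuration rather than in application code, but the logic is the same: alert on the combination, not on a single metric.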
After switching, monitor metrics and logs, and keep the previous production environment in read-only mode for a day in case emergency access is needed.

If you need sizing guidelines, think in terms of load:

- Up to 100 orders per day: a server with 2 vCPU and 4–8 GB of memory, a separate database with 1–2 vCPU and 2–4 GB, a 60–120 GB SSD, and S3 plus CDN with Redis is usually enough.
- 100–500 orders per day: two cloud servers behind a load balancer, a database with 2–4 vCPU and 8–16 GB, and, if necessary, a read replica.
- Stable peak loads: 2–3 cloud servers with 4–8 vCPU and 16 GB, a database with 4–8 vCPU and 32 GB, replication, and a mandatory CDN.

These are starting points; further decisions are dictated by metrics.

Conclusion

There is no single correct answer here. The choice between a cloud and a dedicated server depends on traffic, the frequency of peaks, team competencies, and how much one hour of downtime costs you. It is important not to guess but to rely on numbers, and to understand how quickly you can add capacity and recover after a failure.

If the store is small or growing, it is reasonable to start with the cloud: one server for the application, a separate database, and object storage for media. Such a scheme can be launched in an evening, handles sales without long downtime, and does not force you to pay for reserve capacity all year. The main thing is to enable backups immediately, configure a private network between the server and the database, and have a scaling plan ready for sale days.

When traffic becomes steadily high around the clock and the requirements for performance and integrations tighten, it makes sense to consider a dedicated server or a hybrid.
A combination often works well: the frontend application and static files remain in the cloud for flexibility, while the heavy database or specific services move to dedicated hardware. The decision should be driven not by preference but by TCO, RTO/RPO, and load metrics.
09 September 2025 · 10 min to read
