
Docker Container Storage and Registries: How to Store, Manage, and Secure Your Images

Hostman Team
Technical writer
Infrastructure

Docker containerization offers many benefits, one of which is image layering, which enables fast container creation. However, containers have limitations: persistent data needs careful planning, because everything written inside a container is lost when the container is destroyed. In this article, we'll look at how to solve this problem using Docker's native solution, Docker Volumes, which provides persistent storage for containers.

What Happens to Data Written Inside a Container

To begin, let’s open a shell inside a container using the following command:

docker run -it --rm busybox

Now let’s try writing some data to the container:

echo "Hostman" > /tmp/data
cat /tmp/data

Hostman

We can see that the data is written, but where exactly? If you're familiar with Docker, you know that images are structured like onions: layers stacked on top of each other, with each layer written once and read-only from then on. When a container is created, Docker adds one more thin, writable layer on top of the image, and that layer is where all writes made inside the container end up.
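You can watch this writable layer in action. From a second terminal, docker diff lists the files a running container has added or changed on top of its image (replace container_name with the name or ID reported by docker ps); for the write above, the output looks roughly like this:

docker diff container_name

C /tmp
A /tmp/data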

Since container lifespans are limited, all data disappears once the container is gone. This can be a serious problem if the container holds valuable information. To solve this, Docker provides a solution called Docker Volumes. Let’s look at what it is and how it works.

Docker Volumes

Docker Volumes provide developers with persistent storage for containers. This tool decouples data from the container’s lifecycle, allowing access to container data at any time. As a result, data written inside containers remains available even after the container is destroyed, and it can be reused by other containers. This is a useful solution for sharing data between Docker containers and also enables new containers to connect to the existing storage.

How Docker Volumes Work

A directory is created on the host and mounted into one or more containers. This directory is independent of the Docker image layer structure, so it is not subject to the read-only restriction of image layers: containers that mount it can write to it freely, and the data stays on the host.

To create a volume, use the following command:

docker volume create

Now, let’s check its location using:

docker volume inspect volume_name

If you created the volume without a name, Docker assigns it a long alphanumeric identifier. In response, Docker displays the time the volume was created and other metadata, including the Mountpoint: the path on the host where the volume's data is stored. To view the data stored in the volume, simply open that directory on the host.
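On a typical Linux host, the inspect output looks roughly like this (the name, timestamp, and paths will differ on your machine):

[
    {
        "CreatedAt": "2025-01-15T10:24:31Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/volume_name/_data",
        "Name": "volume_name",
        "Options": null,
        "Scope": "local"
    }
]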

There are also other ways to create a Docker Volume. For example, the -v option can be added directly during container startup, allowing you to create a volume on the fly:

docker run -it --rm -v newdata:/data busybox

Let’s break down what’s happening here:

The -v argument uses a colon-separated syntax: the volume name comes first (here we chose the very creative name newdata), followed by the mount path inside the container. Now you can write data to this path, for example:

echo "Cloud" > /data/cloud

Data written this way can easily be found at the mount path.

As seen in the example above, the volume name is not arbitrary: it matches the name we provided with -v. Docker can also generate random volume names, which are guaranteed to be unique on a given host. If you assign names manually, make sure they are unique as well.
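As a quick illustration of a generated name: if you omit the name before the colon, Docker creates an anonymous volume and names it itself (--rm is intentionally left out here, because containers started with --rm also delete their anonymous volumes on exit):

docker run -it -v /data busybox

The generated name, a long hexadecimal string, will show up in the volume list used in the next step.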

Now, run the command:

docker volume ls

If the volume appears in the list, it means any number of other containers can use it. To test this, you can run:

docker run -it --rm -v newdata:/data busybox

Then write something to the volume. Next, start another container using the exact same command and you’ll see that the data is still there and accessible — meaning it can be reused.
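A compact way to verify this without an interactive shell is to pass the commands directly to the containers (the file name test is arbitrary):

docker run --rm -v newdata:/data busybox sh -c 'echo "Hostman" > /data/test'
docker run --rm -v newdata:/data busybox cat /data/test

Hostman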

Docker Volumes in Practice

Now let’s take a look at how Docker Volumes can be used in practice. Suppose we're developing an application to collect specific types of data — let’s say football statistics. We gather this data and plan to use it later for analysis — for example, to assess players’ transfer market values or for betting predictions. Let’s call our application FootballStats.

Preserving Data After Container Removal

Obviously, if we don’t use Docker Volumes, all the collected statistics will simply be lost as soon as the container that stored them is destroyed. Therefore, we need to store the data in volumes so it can be reused later. To do this, we use the familiar -v option: 

-v footballstats:/dir/footballstats

This stores the match statistics in the /dir/footballstats directory inside the container, backed by the footballstats volume rather than the container's writable layer, so the data outlives the container.
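Put together, the launch command might look like this, where footballstats-app is a placeholder for the collector's image name:

docker run -d --name footballstats -v footballstats:/dir/footballstats footballstats-app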

Sharing Data

Suppose the FootballStats container has already gathered a certain amount of data, and now it's time to analyze it. For instance, we might want to find out how a particular team performed in the latest national championship or how a specific player did — goals, assists, cards, etc.

To do this, we can mount our volume into a new container, which we’ll call FootballStats-Analytics. The key advantage of this setup is that the new container can read the data without interfering with the original FootballStats container’s ongoing data collection. At the same time, analysis of the incoming data can be performed using defined parameters and algorithms.

The analysis results can be stored wherever convenient: in the existing volume or, if needed, in a new one.
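A sketch of such a setup, assuming a hypothetical footballstats-analytics image: the :ro suffix mounts the statistics volume read-only, so the analytics container cannot modify the collected data, while its own results go to a separate volume:

docker run -d --name footballstats-analytics -v footballstats:/dir/footballstats:ro -v analytics:/dir/analytics footballstats-analytics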

Other Types of Mounts

In addition to standard volumes, Docker supports other types of mounts designed for specialized tasks:

Bind Mount

Bind mounts are used to attach an existing path on the host to a container. This is useful for passing in configuration files, datasets, or static website assets. To specify directories for mounting into the container, use the -v option with the syntax <host path>:<container path>, or the more explicit --mount option with type=bind.
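For example, to make a host directory with configuration files available inside a container (the paths are purely illustrative), either form works:

docker run -it --rm -v /srv/app/config:/etc/app busybox
docker run -it --rm --mount type=bind,source=/srv/app/config,target=/etc/app busybox

Note that --mount requires the host path to exist in advance, while -v creates it if it is missing.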

Tmpfs Mount

Tmpfs mounts serve the opposite purpose of regular Docker Volumes: they do not persist data after the container is destroyed. This can be useful for developers who perform extensive logging, where continuously writing temporary data to disk can significantly degrade system performance. The --tmpfs option creates temporary in-memory directories, avoiding constant writes to the file system.
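A minimal sketch: the directory below lives only in memory and vanishes together with the container:

docker run -it --rm --tmpfs /app/logs busybox

Mount options such as a size limit can be appended after a colon, for example --tmpfs /app/logs:rw,size=65536k.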

Drivers

Docker volume drivers are a powerful tool that enables flexible volume management. They let you specify various storage options, the most important being the storage location, which can be local or remote, even outside the physical or virtual infrastructure of your provider. This ensures that data can survive not only the destruction of the container but even the shutdown of the host itself.
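For example, the built-in local driver can mount an NFS share as a volume, so the data lives on a remote machine (the address and export path below are placeholders):

docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.100,rw \
  --opt device=:/exports/footballstats \
  footballstats-nfs

Third-party volume plugins follow the same pattern and add support for other backends, such as cloud block storage.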

Conclusion

So, we’ve learned how to create and manage persistent storage using Docker Volumes. For more details on working with container storage in Docker, refer to the platform’s official documentation.
