
How to Install and Configure SSH on an Ubuntu Server

Hostman Team
Technical writer
Ubuntu
24.11.2023
Reading time: 10 min

Secure Shell (SSH) is a network protocol for secure client-server communication. Each interaction is encrypted. It allows you to securely manage the server, transfer files, and perform other tasks. 

For example, you have ordered a cloud server on Hostman and want to manage it from your laptop. To do this, you only need to set up SSH access. Through a secure connection, you will be able to perform all necessary administration actions.

For successful configuration, you need to: 

  1. Install the SSH server components on your server. The openssh-server package will cover that.

  2. Have the SSH client on your local machine from which you will connect to the remote host. 

    For this purpose, the openssh-client package is usually used. It's pre-installed in most Linux and BSD distributions and also in the latest Windows versions. On older versions of Windows, you'll need to install additional utilities. One of the most popular solutions is PuTTY.

Enabling SSH

By default, remote access over SSH is not available on a fresh Ubuntu system because the server component is not installed. However, installing SSH on Ubuntu is very easy.

Start the console of the server where you need to configure SSH. 

Update the package index:

sudo apt update

Install the software:

sudo apt install openssh-server

Both operations require superuser rights, which you get with sudo.

On Ubuntu, the OpenSSH service starts automatically after installation, but you can check its status using the command:

sudo systemctl status ssh

The output should indicate that the service is running and allowed to start on system boot: 

ssh.service - OpenBSD Secure Shell server
   Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2022-03-21 12:34:00 CEST; 1m ago

This means that the installation was successful. To return to the command prompt, press the q key.

If the service is not active, start it manually with the command:

sudo systemctl enable --now ssh

Ubuntu comes with a firewall configuration tool called UFW. If you have a firewall enabled on your system, be sure to open the SSH port:

sudo ufw allow ssh

Now you can connect to your Ubuntu system via SSH from any remote computer.


Creating an SSH key pair

To make the connection even more secure and authentication more convenient, use an SSH key pair: a public key and a private key. The public key is stored on the server, and the private key on the user's computer.

Let's see how to create keys in different operating systems. Let's start with Ubuntu.

To generate a new RSA key pair (recent OpenSSH releases create 3072-bit RSA keys by default), open a terminal and run the command below:

ssh-keygen -t rsa

A prompt will appear asking you where to save the keys. If you press Enter, the system will save the key pair in the default .ssh subdirectory of the home folder. You can also specify an alternate path where you want to save the key pair. However, it is recommended to use the default directory. It makes further management much easier.

If you have already created a key pair on the client computer, the system will prompt you to overwrite it. The choice is entirely up to you, but be careful. If you choose to overwrite it, you will not be able to use the previous key pair to log in to the server. It will be deleted. Fixing the conflict is easy; just specify a unique name for each new pair. The storage folder can remain the same.

You will also be prompted to enter a passphrase to add an extra layer of security that prevents unauthorized users from accessing the host. Press Enter if you do not want to use it.

To verify that the keys have been created, run the command:

ls -l ~/.ssh/id_*.pub

The terminal will display a list of keys.
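As a sketch, the whole generation can also be done non-interactively. The scratch directory below stands in for ~/.ssh, and -N "" sets an empty passphrase, which you would normally avoid on a real key:

```shell
# Generate a key pair non-interactively into a scratch directory.
# In real use, omit -f and -N to get the interactive prompts described above.
KEYDIR="$(mktemp -d)"
ssh-keygen -t rsa -b 2048 -f "$KEYDIR/id_rsa" -N "" -q
ls -l "$KEYDIR"/id_rsa*   # private key plus the .pub public key
```

The -f flag is also how you give each pair a unique name if you want to keep several keys in the same folder.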

Similarly, you can generate a pair on macOS or newer Windows versions.

If you're using an older Windows OS, you'll need to download the PuTTY utility suite, which includes the PuTTYgen application. To create an SSH key pair, run PuTTYgen and move your mouse over the blank area to generate randomness. You can also select a folder to store the keys and add a passphrase for extra protection.

Adding the SSH key to the server

The private key is stored on the computer. You should never transfer it to anyone. But you need to transmit the public part to the server.

If you have password access to the host, you can transfer the public key using ssh-copy-id. Example command:

ssh-copy-id hostman@203.0.113.10

Replace hostman with your username and 203.0.113.10 with your server's IP address. Enter the password when prompted; the public key will then be copied to the host.
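Under the hood, ssh-copy-id essentially appends your public key to ~/.ssh/authorized_keys on the server. A minimal sketch of that manual step, run here against a temporary directory instead of a real server (the key content is a placeholder):

```shell
# Placeholder key line; in practice this is the contents of ~/.ssh/id_rsa.pub
PUBKEY="ssh-rsa AAAAB3NzaC1example hostman@laptop"
SSH_DIR="$(mktemp -d)/.ssh"   # stands in for ~/.ssh on the real server
mkdir -p "$SSH_DIR" && chmod 700 "$SSH_DIR"
printf '%s\n' "$PUBKEY" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
```

The permission bits matter: sshd refuses keys in an authorized_keys file that is writable by other users.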

To connect to the server using the SSH keys, run the command:

ssh hostman@203.0.113.10

Replace hostman with your username and 203.0.113.10 with your server's IP address. If you have not set a passphrase, you will log in without further authentication: the server checks that the private and public halves of the key match and establishes the connection.

Configuring SSH

You can configure your Ubuntu server through the /etc/ssh/sshd_config file. Before making changes, create a backup copy so you can quickly restore the original settings if something goes wrong.

To make a copy, run the command:

sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.factory-defaults

The /etc/ssh/sshd_config.factory-defaults will store the default settings. You will be editing the /etc/ssh/sshd_config file.

Disabling password authentication

SSH password authentication on the Ubuntu Server isn't bad. But if you create long, complex passwords, you can be tempted to store them insecurely. Using encryption keys to authenticate the connection is a more secure alternative. In this case, the password may be unnecessary and you can disable it.

Before proceeding, keep the following in mind:

Disabling password authentication increases the likelihood of being locked out of your server. You can be locked out if you lose your private key or break the ~/.ssh/authorized_keys file.

If you are locked out, you can no longer access any application files.

You should only disable password authentication if you are very familiar with the key authentication mechanism and understand the potential consequences of losing access to your server.

To disable password authentication, connect to the server as root and edit the sshd_config file. Change the PasswordAuthentication parameter from yes to no.
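The edit can also be scripted with sed. The sketch below works on a throwaway copy so it is safe to run anywhere; against a real server you would target /etc/ssh/sshd_config:

```shell
# Work on a throwaway copy; on a real server the target is /etc/ssh/sshd_config
CFG="$(mktemp)"
echo '#PasswordAuthentication yes' > "$CFG"
# Replace the line whether or not it is commented out
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' "$CFG"
grep PasswordAuthentication "$CFG"   # prints: PasswordAuthentication no
```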

Then restart the SSH service by running the following command:

sudo systemctl restart sshd

After that, you will no longer be able to use passwords for authentication. You will only be able to connect using Linux SSH keys.

Disabling root access

To improve security on your remote Ubuntu system, consider disabling root user login via SSH.

To do this, edit the configuration file:

sudo vi /etc/ssh/sshd_config

Change the PermitRootLogin value to no.

Another option is allowing the root user to log in using any authentication mechanism other than a password. To do this, set the PermitRootLogin parameter to prohibit-password.

This configuration lets you log in as the root user with a private key. The main thing is to ensure that you have copied the public key to the system before restarting the SSH service.

To apply the updated configuration, restart the service:

sudo systemctl restart sshd

Changing the default port

By default, the SSH server uses port 22. To increase security, you can set it to any other value. We recommend using ports from the upper range, from 50000 to 65000. It is also preferable to pick numbers in which all digits are different, for example, 56713.

Open the configuration file:

sudo vi /etc/ssh/sshd_config

Uncomment the line Port 22. Instead of 22, specify another number, for example, Port 56713. Save the changes and close the file.
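The same change can be made with sed. The sketch below runs against a throwaway copy; on a real server you would edit /etc/ssh/sshd_config and also open the new port in UFW (for example, sudo ufw allow 56713/tcp) before restarting:

```shell
# Throwaway copy; on a real server edit /etc/ssh/sshd_config instead
CFG="$(mktemp)"
echo '#Port 22' > "$CFG"
# Uncomment the line and set the new port in one step
sed -i 's/^#\?Port .*/Port 56713/' "$CFG"
grep '^Port' "$CFG"   # prints: Port 56713
```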

To apply the configuration, restart the service:

sudo systemctl restart sshd

After a successful restart, verify that the connection is now on a different port:

ssh -p 56713 user@server_ip

Remember to restart the service after each change. Otherwise, SSH connections will follow the old rules.

Configuring tunneling

Tunneling is a method of transmitting unencrypted traffic or data over an encrypted channel. In addition to file transfers, tunneling can also be used to access internal network services through firewalls and to create a VPN.

There are three types of tunneling (forwarding):

  • local,

  • remote,

  • dynamic.

To configure some of them, you will need to edit the SSH configuration file.

Local forwarding

Local forwarding forwards a port from the client computer to the remote server, which then redirects the connection to a port on a target machine.

The SSH client listens for connections on the given local port. When it receives a connection, it tunnels the traffic over the encrypted channel to the remote host. The host then connects to the target computer on the configured port.

Mostly, local forwarding is used to connect externally to a service from an internal network. For example, this is how you can configure access to a database. It is also used for remote file sharing.

The -L argument is used for local forwarding. For example:

ssh hostman@server.example -L 8080:server1.example:3000 

Now open a browser on the local computer. You can use localhost:8080 to reach the remote application instead of accessing it at server1.example:3000.
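The command is easy to template. The hostnames and ports below are the hypothetical ones from the example above:

```shell
LOCAL_PORT=8080                   # port opened on your own machine
TARGET=server1.example:3000       # service reachable from the SSH host
SSH_HOST=hostman@server.example   # the intermediate SSH server
CMD="ssh -L ${LOCAL_PORT}:${TARGET} ${SSH_HOST}"
echo "$CMD"   # prints: ssh -L 8080:server1.example:3000 hostman@server.example
```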

Remote forwarding

Remote forwarding allows you to connect from a remote computer to a service running on the local computer. By default, sshd binds remote-forwarded ports only to the loopback interface, so making them reachable from outside requires some additional configuration of the Ubuntu server.

Open the configuration file:

sudo vi /etc/ssh/sshd_config 

Set the GatewayPorts parameter to Yes.

Save the changes and restart the service:

sudo systemctl restart sshd

Use the -R argument to configure forwarding. Example command:

ssh -R 8080:127.0.0.1:3000 -N -f user@remote.host

After running this command, the remote host will listen on port 8080 and redirect all traffic arriving there to port 3000 on the local computer.

Remote forwarding is mainly used to give someone outside access to an internal service.

Dynamic forwarding

Local and remote forwarding methods allow you to tunnel and communicate with a single port. With dynamic forwarding, you can tunnel and communicate with multiple ports.

Dynamic tunneling creates a socket on the local computer that works like a SOCKS proxy server, listening on port 1080 by default. When an application connects to this port, the connection is forwarded through the SSH tunnel to the remote machine and from there to its final destination, with ports assigned dynamically.

The -D argument is used to configure dynamic tunneling. Example command:

ssh -D 9090 -N -f user@remote.host

Once you have set up tunneling, you can configure your applications to use it, for example by setting the SOCKS proxy in your browser's connection settings. You'll need to configure forwarding separately for each application whose traffic you want to tunnel.
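For command-line tools, the proxy can often be passed directly. A sketch assuming the dynamic tunnel above is listening on local port 9090 (the URL is a placeholder):

```shell
# socks5h:// asks the client to resolve DNS through the tunnel as well
SOCKS_PROXY="socks5h://127.0.0.1:9090"
# Hypothetical request routed through the tunnel:
echo "curl --proxy $SOCKS_PROXY http://internal.example/"
```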

Disabling SSH

To disable the OpenSSH server, stop and disable the SSH service by running the command:

sudo systemctl disable --now ssh

To start the service back up, run the command:

sudo systemctl enable --now ssh

The enable command in Ubuntu does not reinstall the software, so you don't have to reconfigure anything. It simply starts up the previously installed and configured service.

Conclusion

In this article, we have covered the basics of using SSH on an Ubuntu machine. Now you know how to install the necessary software to set up a secure connection, configure it, route the tunnel, and even disable the service when it is not in use.

Connecting via SSH in Ubuntu is a common task, so you'll definitely need this knowledge. If not in development and administration, then for personal purposes, such as establishing a secure connection between devices in a local network.

Ubuntu
24.11.2023
Reading time: 10 min

Similar

Ubuntu

Installing and Configuring Samba on Ubuntu 22.04

Let’s look at the process of installing Samba software on a cloud server with the Ubuntu 22.04 operating system. This guide is also suitable for installing Samba on Debian. Let’s start with a brief description of this software. What is Samba Samba is a software package developed to provide compatibility and interaction between UNIX-like systems and Windows. The software has been distributed under a free license for over 30 years. Samba ensures seamless integration of servers and PCs running UNIX into an AD (Active Directory) system. This software can be used as a controller and as a standard component of a domain. Thus, users can flexibly configure cloud file storages. Samba provides extensive functionality for managing file and database access rights by assigning specific user groups. Creating a New Server Go to the control panel and create a new server.  Select the Ubuntu 22.04 image and then the minimum server configuration.  After creating the server, connect to it via SSH, and you can begin configuration. Adding a User This is simple — enter the command: sudo useradd -p new_server_pass new_server_user Instead of new_server_pass and new_server_user, you can use any password and any username. Enter your own data instead of the example ones. Note that we immediately set the password, which was possible thanks to the -p command. Installing Samba on Ubuntu For convenience, we have broken the installation process into separate steps. Step 1. Preparation To start the installation process, use the following command: sudo apt install samba -y Now you need to remember the system name of the service. In most cases, it is smbd. Therefore, if you want to call the service, use this name. 
First, let’s configure autostart, which is done with the command: sudo systemctl enable smbd Now start it using the familiar command: sudo systemctl start smbd Then check the system status using: sudo systemctl status smbd To stop Samba, use: sudo systemctl stop smbd To restart the service, enter: sudo systemctl restart smbd If you want Samba to no longer start automatically, use the command: sudo systemctl disable smbd The reload command is used to refresh the configuration. The following command will forcibly open port 445, as well as 137–139. To allow them in the ufw firewall, use: sudo ufw allow Samba Step 2. Configuring Anonymous Access Suppose you have some remote server located outside your cloud. Network security rules require that you never open direct access to it through its IP. You can only do this through a tunnel, which is already set up. Typically, servers with granted access have the address 10.8.0.1, and this is the address we will use further. To share data and grant anonymous access to it, first open the configuration file. It is located here: /etc/samba/smb.conf. We recommend making a backup of the clean file — this will help you quickly restore the original program state without needing to reinstall. Now remove all comments, leaving only the code, and enter the command testparm to ensure the program works properly. In the shared folder settings, enter the following parameters: [share]     comment = share     path = /data/public_share     public = yes     writable = yes     read only = no     guest ok = yes Also, make sure that the following four fields (mask and mode) have matching numeric values (for example, 0777). 
Regarding the specific lines: [share] — the name of the shared folder, which will be visible to everyone connecting to your server; comment — a comment that can be anything; path — the path to the data storage folder; public — gives permission for public access: if you do not want users to view the folder contents, set this to no; writable — determines whether data can be written to the folder; read only — specifies that the folder is read-only: to allow users to create new files, set it to no; guest ok — determines whether guests can access the folder. Thus, the folder name and path may differ depending on what values you specify for the shared folder. The comment can also be anything, and for the last four parameters, values are set as yes or no. Now restart the program and check if you can connect to the server from Windows. Step 3. Configuring Access by User Credentials To create access by login and password, you first need to create a new directory and configure permissions. In the configuration file, set all parameters to no (see above), except writable: in this line, the value should be yes, meaning that writing in the folder should be enabled. Use the mkdir command to create a new directory, then create a user with useradd someone (where someone can be any username) and set a password for them with the command passwd. For example: passwd something Now, with the command below, add the new user and try to log in: if everything is configured correctly, you will have access to the folder. sudo smbpasswd -a someone Step 4. Configuring Group Access Configuring group access is necessary when you need to create restricted access for specific user groups. 
In smb.conf, after the line guest ok, additionally specify the following lines (all usernames here are generated simply for example): valid users = admin, mary_smith, jane_jameson, maria ortega, nathalie_brown write list = admin, nathalie_brown In the valid users line, list the users who are granted access to the directory. And in the write list, list those who can modify data in the folder. In addition, after the force directory mode line, add another line with the following value: inherit owner = yes This enables inheritance of created objects. Now save the settings and restart the service, after which the new settings should take effect. Step 5. Connecting to Samba from Windows and Linux For quick connection to Samba from Windows, press Ctrl+E and enter the path. Note that you need to use \\ to indicate the network path to the resource. And to avoid reconnecting to the server each time, you can choose the option to connect the resource as a drive, if your security policy allows it. In the new window, specify the drive letter and fill in the required data. For connecting to Samba from Linux, you use the cifs utilities, which are installed with the command: sudo apt install cifs-utils -y Next, the resource is mounted and connected. This is done with: sudo mount.cifs //10.8.0.1/our_share /share The path and resource name can be anything. You can also perform automatic mounting using the configuration file fstab with its own settings. Step 6. Configuring the Network Trash Bin This operation is needed to avoid accidental permanent deletion of files. 
For this, create the following directory: [Recycle]     comment = Trash for temporary file storage     path = /directory/recycle     public = yes     browseable = yes     writable = yes     vfs objects = recycle     recycle:repository = .recycle/%U     recycle:keeptree = Yes     recycle:touch = Yes     recycle:versions = Yes     recycle:maxsize = 0     recycle:exclude = *.tmp, ~$*     recycle:exclude_dir = /tmp Now, let’s review line by line what these parameters mean: vfs objects = recycle — indicates use of the corresponding subsystem; repository — the path for storing deleted data; keeptree — whether to keep the directory tree after deletion; touch — whether to change the timestamps of files when they are moved to the trash; versions — whether to assign a version number if files with identical names are deleted; maxsize — the maximum size of a file placed in the trash. A value of 0 disables limits; exclude — which file types to exclude; exclude_dir — which directories to exclude. Conclusion That’s it — now you know how to install Samba on an Ubuntu cloud server and configure it for your own needs.
04 July 2025 · 7 min to read
Ubuntu

Deleting a User in Ubuntu 22.04

A server administrator often has to work with user accounts — adding, deleting, and configuring access modes. Removing outdated user accounts is one security measure that can significantly reduce the number of vulnerabilities in the system. The Linux utilities deluser and userdel are used for deletion. However, before proceeding directly to deleting a user, we must take certain steps. In this article, we will explore how to delete a user in Ubuntu without compromising the system. At the same time, we will preserve the ability to access the user’s home directory files after deletion. In this article, we will work with the user hostman, which was created beforehand. This article will primarily focus on removing an Ubuntu user via the terminal, but we will also provide instructions for deleting a user account through the graphical interface. Please note that you will need superuser privileges to work with user accounts. The instructions will be suitable for any cloud server running Ubuntu OS. Checking the User Account First, you need to check whether the user is currently logged into the system. This will affect further steps: if the user is currently authorized on the server, you will need to terminate their connection and change the password. Check the list of users authorized in the system using the who utility or its alias w: who If you see that the user hostman is authorized, you need to check which processes are running under this user. This is a necessary step because if background operations are being performed, Ubuntu 22.04 will not allow us to delete the user. Check with the ps utility: sudo ps -u hostman As a result, you might see a response like this:    PID TTY          TIME CMD 1297129 pts/2    00:00:00 bash 1297443 pts/2    00:00:00 htop For testing purposes, we launched the htop utility under the hostman account, which is running in the background. Blocking Access Before stopping the user’s processes, you need to block their access to the system. 
You can do this by changing their password. User passwords are stored in the system in encrypted form in the /etc/shadow file. This file is readable only by the root user, and in addition to password hashes, it contains their expiration information. There is a special utility that allows you to remove a user’s password in Ubuntu — passwd. To restrict access, we will use the passwd utility with the -l (or --lock) flag, which puts the utility into lock mode: sudo passwd -l hostman As a result, the utility will add an exclamation mark at the beginning of the encrypted password string. That is all that is needed to prevent the user from logging in again since the hashes will no longer match. Killing Processes In Ubuntu, you cannot delete a user via the console if any processes are running under their name. To terminate a process, you can use one of the following commands: kill — deletes a process by its identifier. You can determine the IDs of the hostman user processes with: top -U hostman or ps -u hostman pkill — deletes a process by its name. For example, if the user hostman has launched the top process, you can terminate it with: sudo pkill top killall — deletes all processes, including child processes. Often, a process will launch many so-called subprocesses; stopping them by name or identifier can be complex and time-consuming. We will use the last command to reliably kill all user processes: sudo killall -9 -u hostman The -9 flag means the processes will receive a SIGKILL signal. This means the process will be forcibly terminated, since this signal cannot be ignored or blocked. Essentially, it is equivalent to a “force quit” of a non-responding program in graphical operating systems. After completing the user’s processes, they will no longer be authorized in the system. You can verify this using the who command. Since we locked the login in the previous step, the hostman user will not be able to log in again. 
Optional — Archiving the Home Directory Quite often, when deleting a Linux user account, you may need to keep its home directory, which might contain important files required either by the user or by the organization you are serving as an administrator. The built-in Ubuntu utilities allow you to remove a user while keeping their home directory. However, this is not recommended for two reasons: Disk Space — the user’s home directory may contain a large amount of data. It is irrational and excessive to store data from all outdated accounts on the main work disk. Over time, you might run out of space for new users. Data Relevance — it is good practice to keep the /home directory containing only the directories corresponding to active user accounts. Keeping this list in order helps with administration. We will use the tar utility to archive the home directory of the hostman user: sudo tar -cvjf /mnt/nobackup/hostman.homedir.tar.gz /home/hostman Let’s go over the arguments and flags: -c — creates the resulting .tar archive file -v — enables verbose mode, showing debugging information and listing archived files -z — creates a compressed .gz archive -f — indicates that the first argument will be used as the archive name The first argument is the final location of the archive. In our example, we place the archive with the user’s home directory on the nobackup disk, which, as the name implies, is not subject to backup. The second argument is the path to the directory from which the archive is created. Stopping Scheduled Jobs Before deleting a user in Ubuntu, it is recommended to stop all cron scheduler tasks launched by that user. You can do this with the crontab command. We will launch it under the hostman user with the -u flag and switch it to delete mode with the -r flag: sudo crontab -r -u hostman Now you can be sure that after deleting the user account, no unknown scripts will be executed for which no one is responsible. 
Deleting the User Once all the previous steps have been completed, it is time to proceed with the main task: deleting the Ubuntu user. There are two ways to do this: the deluser and userdel utilities. To delete the user account, we will use the deluser utility. Running it without parameters will delete the user account but leave their home directory and other user files intact. You can use the following flags: --remove-home — as the name suggests, deletes the user’s home directory --remove-all-files — deletes all system files belonging to the user, including the home directory --backup — creates an archive of the home directory and mail files and places it in the root directory. To specify a folder for saving the archive, use the --backup-to flag. As you can see from the parameter descriptions above, manually archiving the user’s home directory is not strictly necessary — deluser can do everything for you. In addition, with deluser you can remove a user from a group in Ubuntu or delete the group itself: sudo deluser hostman administrators The command above removes the user hostman from the administrators group. Let’s proceed with the complete deletion of the user and the hostman group without preserving the home directory: sudo deluser --remove-home hostman Deleting the User via Graphical Interface The entire article above is about how to delete a user in the Ubuntu terminal. But if you have a system with a graphical interface, you can delete a user in just a few simple steps. Open the Users section in System Settings. To switch to superuser mode, click the Unlock button. After that, the Delete User button will become active. When you click it, a dialog box will appear, offering to delete the user’s files, specifically those in the home directory. Conclusion Deleting a user in Ubuntu is not difficult; you just need to use the deluser utility with the required parameters. 
However, in this article, we described several steps that will help you safely delete a user account while preserving the system’s stability.
04 July 2025 · 7 min to read
Wordpress

How to Install WordPress with Nginx and Let’s Encrypt SSL on Ubuntu

WordPress is a simple, popular, open-source, and free CMS (content management system) for creating modern websites. Today, WordPress powers nearly half of the websites worldwide. Hostman offers Wordpress cloud hosting with quick load times, robust security, and simplified management.  However, having just a content management system is not enough. Modern websites require an SSL certificate, which provides encryption and allows using a secure HTTPS connection. This short guide will show how to install WordPress on a cloud server, perform initial CMS configuration, and add an SSL certificate to the completed site, enabling users to access the website via HTTPS. The Nginx web server will receive user requests and then proxied to WordPress for processing and generating response content. A few additional components are also needed: a MySQL database, which serves as the primary data storage in WordPress, and PHP, which WordPress is written in. This technology stack is known as LEMP: Linux, Nginx, MySQL, PHP. Step 1. Creating the Server First, you will need a cloud server with Ubuntu 22.04 installed. Go to the Hostman control panel. Select the Cloud servers tab on the left side of the control panel. Click the Create button. You’ll need to configure a range of parameters that ultimately determine the server rental cost. The most important of these parameters are: The operating system distribution and its version (in our case, Ubuntu 22.04). Data center location. Physical configuration. Server information. Once all the data is filled in, click the Order button. Upon completion of the server setup, you can view the IP address of the cloud server in the Dashboard tab, and also copy the command for connecting to the server via SSH along with the root password: Next, open a terminal in your local operating system and connect via SSH with password authentication: ssh root@server_ip Replace server_ip with the IP address of your cloud server. 
You will then be prompted to enter the password, which you can either type manually or paste from the clipboard. After connecting, the terminal will display information about the operating system. Now you can create a user with sudo priviliges or keep using root. Step 2. Updating the System Before beginning the WordPress installation, it’s important to update the list of repositories available through the APT package manager: sudo apt update -y It’s also a good idea to upgrade already installed packages to their latest versions: sudo apt upgrade -y Now, we can move on to downloading and installing the technology stack components required for running WordPress. Step 3. Installing PHP Let's download and install the PHP interpreter. First, add a specialized repository that provides up-to-date versions of PHP: sudo add-apt-repository ppa:ondrej/php In this guide, we are using PHP version 8.3 in FPM mode (FastCGI Process Manager), along with an additional module to enable PHP’s interaction with MySQL: sudo apt install php8.3-fpm php-mysql -y The -y flag automatically answers “yes” to any prompts during the installation process. To verify that PHP is now installed on the system, you can check its version: php -v The console output should look like this: PHP 8.3.13 (cli) (built: Oct 30 2024 11:27:41) (NTS)Copyright (c) The PHP GroupZend Engine v4.3.13, Copyright (c) Zend Technologies    with Zend OPcache v8.3.13, Copyright (c), by Zend Technologies You can also check the status of the FPM service: sudo systemctl status php8.3-fpm In the console output, you should see a green status indicator: Active: active (running) Step 4. Installing MySQL The MySQL database is an essential component of WordPress, as it stores all site and user information for the CMS. 
Installation We’ll install the MySQL server package: sudo apt install mysql-server -y To verify the installation, check the database version: mysql --version If successful, the console output will look something like this: mysql  Ver 8.0.39-0ubuntu0.22.04.1 for Linux on x86_64 ((Ubuntu)) Also, ensure that the MySQL server is currently running by checking the database service status: sudo systemctl status mysql The console output should display a green status indicator: Active: active (running) MySQL Security This step is optional in this guide, but it’s worth mentioning. After installing MySQL, you can configure the database’s security settings: mysql_secure_installation This command will prompt a series of questions in the terminal to help you configure the appropriate level of MySQL security. Creating a Database Next, prepare a dedicated database specifically for WordPress. First, log in to MySQL: mysql Then, execute the following SQL command to create a database: CREATE DATABASE wordpress_database; You’ll also need a dedicated user for accessing this database: CREATE USER 'wordpress_user'@'localhost' IDENTIFIED BY 'wordpress_password'; Grant this user the necessary access permissions: GRANT ALL PRIVILEGES ON wordpress_database.* TO 'wordpress_user'@'localhost'; Finally, exit MySQL: quit Step 5. Downloading and Configuring Nginx The Nginx web server will handle incoming HTTP requests from users and proxy them to PHP via the FastCGI interface. Download and Installation We’ll download and install the Nginx web server using APT: sudo apt install nginx -y Next, verify that Nginx is indeed running as a service: systemctl status nginx In the console output, you should see a green status indicator: Active: active (running) You can also check if the web server is functioning correctly by making an HTTP request through a browser. Enter the IP address of the remote server in the address bar, where you are installing Nginx. 
For example:

http://166.1.227.189

If everything is set up correctly, Nginx will display its default welcome page.

For good measure, let's add Nginx to the system's startup list (though this is typically done automatically during installation):

sudo systemctl enable nginx

Now you can proceed to make adjustments to the web server configuration.

Configuration

In this example, we'll slightly modify the default Nginx configuration. For this, we need a text editor; we will use nano:

sudo apt install nano

Now open the configuration file:

sudo nano /etc/nginx/sites-enabled/default

If you remove all the comments, the basic configuration will look like this:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }
}

To this configuration, we'll add the ability to proxy requests to PHP through FastCGI:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;

    # added index.php to index files
    index index.html index.htm index.nginx-debian.html index.php;

    # specify the domain name to obtain an SSL certificate later
    server_name mydomain.com www.mydomain.com;

    location / {
        # try_files $uri $uri/ =404;
        # direct root requests to /index.php
        try_files $uri $uri/ /index.php?$args;
    }

    # forward all .php requests to PHP via FastCGI
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php8.3-fpm.sock;
    }
}

Note that the server_name parameter should contain your domain name, and your DNS settings should include an A record pointing that domain to the server where Nginx is configured.
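One optional adjustment worth considering at this point: Nginx limits the request body to 1 MB by default, which is too small for many media uploads through the WordPress admin panel. As a hedged sketch (the 64m value is an arbitrary example, and PHP's own upload_max_filesize and post_max_size limits would need to be raised to match), you can add one directive to the server block:

```nginx
server {
    # ...existing directives from the configuration above...

    # Allow larger media uploads through the WordPress admin panel;
    # Nginx's default client_max_body_size is 1 MB. 64m is an example value.
    client_max_body_size 64m;
}
```

If you add this, remember to re-test and reload Nginx afterwards, just as with any other configuration change.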
Now, let's check the configuration syntax for errors:

sudo nginx -t

If everything is correct, you'll see a confirmation message in the console:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Then reload the Nginx service to apply the new configuration:

sudo systemctl reload nginx

Step 6. Installing an SSL Certificate

To obtain an SSL certificate from Let's Encrypt, we'll use a special utility called Certbot. In this guide, Certbot will automate several tasks:

  1. Request the SSL certificate.

  2. Create an additional Nginx configuration file.

  3. Edit the existing Nginx configuration file (which currently describes the HTTP server setup).

  4. Restart Nginx to apply the changes.

Obtaining the Certificate

Like other packages, install Certbot via APT:

sudo apt install certbot
sudo apt install python3-certbot-nginx

The first command installs Certbot, and the second adds a Python module for Certbot's integration with Nginx. Alternatively, you can install python3-certbot-nginx directly, which will automatically include Certbot as a dependency:

sudo apt install python3-certbot-nginx -y

Now, let's initiate the process to obtain and install the SSL certificate:

sudo certbot --nginx

First, Certbot will prompt you to register with Let's Encrypt. You'll need to provide an email address, agree to the Terms of Service, and optionally opt in to email updates (you may decline this if desired). Then enter the list of domain names, separated by commas or spaces, for which the certificate should be issued.
Specify the exact domain names that are listed in the Nginx configuration file under the server_name directive:

mydomain.com www.mydomain.com

After the certificate is issued, Certbot will automatically configure it by adding the necessary SSL settings to the Nginx configuration file:

listen 443 ssl; # managed by Certbot

# RSA certificate
ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem; # managed by Certbot

include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot

# Redirect non-https traffic to https
if ($scheme != "https") {
    return 301 https://$host$request_uri;
} # managed by Certbot

So the complete Nginx configuration file will look as follows:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    listen 443 ssl; # managed by Certbot

    # RSA certificate
    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem; # managed by Certbot

    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html index.php;

    server_name mydomain.com www.mydomain.com;

    # Redirect non-https traffic to https
    if ($scheme != "https") {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    location / {
        # try_files $uri $uri/ =404;
        # direct root requests to /index.php
        try_files $uri $uri/ /index.php?$args;
    }

    # forward all .php requests to PHP via FastCGI
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php8.3-fpm.sock;
    }
}

Automatic Certificate Renewal

Let's Encrypt certificates expire every 90 days, so they need to be renewed regularly. Instead of manually renewing them, you can set up an automated task. For this purpose, we'll use Crontab, a scheduling tool in Unix-based systems that uses a specific syntax to define when commands should run.
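Before setting up the renewal schedule, you can check a certificate's expiration date yourself with openssl. The sketch below creates a throwaway self-signed certificate purely to demonstrate the command; on your server you would point the second command at the real file instead, e.g. /etc/letsencrypt/live/mydomain.com/fullchain.pem (mydomain.com being the placeholder domain used throughout this guide):

```shell
# Create a short-lived self-signed certificate purely for demonstration
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=mydomain.com" \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 90

# Print the certificate's expiration date; the same command works on
# /etc/letsencrypt/live/<domain>/fullchain.pem
openssl x509 -enddate -noout -in /tmp/demo.crt
# prints something like: notAfter=Sep  3 12:00:00 2025 GMT
```

Certbot performs this kind of check for you and only renews certificates that are close to expiring, which is why the daily cron job below is safe to run.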
Install Crontab:

sudo apt install cron

And enable it:

sudo systemctl enable cron

Now open the root user's Crontab file (certificate renewal requires superuser rights to access /etc/letsencrypt):

sudo crontab -e

Add the following line to schedule the Certbot renewal command:

0 12 * * * /usr/bin/certbot renew --quiet

In this configuration:

- The command runs at noon (12:00) every day.
- Certbot will check the certificate's expiration status and renew it if necessary.
- The --quiet flag ensures that Certbot runs silently without generating output.

Step 7. Downloading WordPress

In this guide, we'll use WordPress version 6.5.3, which can be downloaded from the official website:

wget https://wordpress.org/wordpress-6.5.3.tar.gz

Once downloaded, unpack the WordPress archive:

tar -xvf wordpress-*.tar.gz

After unpacking, you can delete the archive file:

rm wordpress-*.tar.gz

This will create a wordpress folder containing the WordPress files. Most core files are organized in the wp-content, wp-includes, and wp-admin directories. The main entry point for WordPress is index.php.

Moving WordPress Files to the Web Server Directory

You need to copy all files from the wordpress folder to the web server's root directory (/var/www/html/) so that Nginx can serve the PHP-generated content in response to user HTTP requests.

Clear the existing web server directory (it currently contains only the default Nginx welcome page, which we no longer need):

sudo rm /var/www/html/*

Copy the WordPress files to the web server directory:

sudo cp -R wordpress/* /var/www/html/

The -R flag enables recursive copying of files and folders.

Set ownership and permissions. Ensure that Nginx can access and modify these files by setting the www-data user and group as owners of the WordPress directory, with appropriate permissions:

sudo chown -R www-data:www-data /var/www/html/
sudo chmod -R 755 /var/www/html/

This allows Nginx to read, write, and modify WordPress files as needed, avoiding permission errors during the WordPress installation process.
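The mode 755 grants the owner (www-data) read, write, and execute rights, while the group and everyone else get read and execute only. A small self-contained sketch, using a throwaway directory rather than the real web root, shows what that looks like:

```shell
# Throwaway demo directory standing in for /var/www/html
mkdir -p /tmp/wp-demo
touch /tmp/wp-demo/index.php
chmod -R 755 /tmp/wp-demo

# Show octal permissions: owner rwx (7), group r-x (5), others r-x (5)
stat -c '%a %n' /tmp/wp-demo/index.php
# → 755 /tmp/wp-demo/index.php
```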
Step 8. Configuring WordPress

WordPress configuration is managed through an intuitive web-based admin panel. No programming knowledge is necessary, though familiarity with languages like JavaScript, PHP, HTML, and CSS can be helpful for creating or customizing themes and plugins.

Accessing the Admin Panel

Open a web browser and go to the website using the domain specified in the Nginx configuration, such as:

https://mydomain.com

If all components were correctly set up, you should be redirected to WordPress's initial configuration page:

https://mydomain.com/wp-admin/setup-config.php

  1. Select Language. Choose your preferred language and click Continue.

  2. Database Configuration. WordPress will prompt you to enter database details. Click Let's go! and provide the following information:

     Database Name: wordpress_database (from the previous setup)
     Username: wordpress_user
     Password: wordpress_password
     Database Host: localhost
     Table Prefix: wp_ (or leave as default)

     Click Submit. If the credentials are correct, WordPress will confirm access to the database.

  3. Run Installation. Click Run the installation. WordPress will then guide you to enter site and admin details: Site Title, Admin Username, Admin Password, Admin Email, and an option to discourage search engine indexing (recommended for development/testing sites).

  4. Install WordPress. Click Install WordPress. After installation, you'll be prompted to log in with the admin username and password you created.

Accessing the Dashboard

Once logged in, you'll see the WordPress Dashboard, which contains customizable widgets. The main menu on the left gives access to core WordPress functions, including:

- Posts and Pages for content creation
- Comments for moderating discussions
- Media for managing images and files
- Themes and Plugins for design and functionality
- Users for managing site members and roles

Your WordPress site is now fully configured, and you can begin customizing and adding content as needed.
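For reference, the database details you enter in the setup wizard are written by WordPress into the wp-config.php file in the web root (/var/www/html/wp-config.php). A sketch of the relevant fragment, using the placeholder values from this guide (WordPress generates the full file itself, including unique authentication salts, so there is normally no need to edit it by hand):

```php
// Database settings written by the setup wizard (values from this guide)
define( 'DB_NAME', 'wordpress_database' );
define( 'DB_USER', 'wordpress_user' );
define( 'DB_PASSWORD', 'wordpress_password' );
define( 'DB_HOST', 'localhost' );

// Table prefix chosen during setup
$table_prefix = 'wp_';
```

Knowing where these values live is useful if you later change the database password or move the site to another server.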
Conclusion

This guide showed how to install WordPress along with all its dependencies, and how to connect a domain and add an SSL certificate from Let's Encrypt to an already functioning website, enabling secure HTTPS connections with the remote server.

The key dependencies required for WordPress to function are:

- PHP: the scripting language WordPress is written in.
- MySQL: the database system WordPress uses to store content and user data.
- Nginx (or Apache in other setups): the web server that first processes user requests.

For more detailed information on managing site content through the WordPress admin panel, as well as creating custom themes and plugins, refer to the official WordPress documentation.

Frequently Asked Questions

How do I install WordPress on Ubuntu?
First set up Nginx, PHP, and MySQL. Then either download WordPress manually or use a deployment script.

How do I enable HTTPS with Let's Encrypt?
Use Certbot to generate a certificate, then automate renewal with a simple cron job.

Is Nginx better than Apache for WordPress?
For performance and memory efficiency, yes. Nginx handles high traffic with fewer resources.
16 June 2025 · 13 min to read
