
Cloud Service Provider
for Developers and Teams

We make it simple to get started in the cloud and scale up as you grow —
whether you have one virtual machine or ten thousand.
By signing up you agree to the Terms of Service and Privacy Policy
99.9% Uptime
Our cloud service provides the ultimate in server dependability and stability
Money-back Guarantee
Experience our high-speed cloud services without any risk, assured by our money-back guarantee
Easy to Deploy
Manage your services with ease using our intuitive control panel, where deploying software is a matter of minutes
Reliable and Available
Select from 6 datacenter regions around the world based on latency or deploy across regions for redundancy

Robust cloud services for every demand

See all Products

Cloud Servers

Cutting-edge hardware for cloud solutions: powerful Intel and AMD processors, ultra-fast NVMe disks

Databases

We provide a cloud database ready to store everything you have. The best DBMSs are on deck: MySQL, Redis, Kafka, and more

App Platform

Just link your repo, pick a project to deploy, and Hostman will have it running in the cloud with just a couple of clicks from the dashboard

S3 Storage

A universal object storage compatible with the S3 protocol

Firewall

Multi-layered protection from vulnerability scanning, DDoS, and cyber-attacks

Kubernetes

Automate the management of containerized applications, from deployment and scaling to monitoring and error handling

Managed Backups

Our server and application backup feature allows for both on-demand and scheduled backup and one-click data restoration

Images

Create images for backup free of charge or deploy your own in the Hostman cloud

Hostman's commitment to simplicity
and budget-friendly solutions

Configuration: 1 CPU, 1 GB RAM, 25 GB SSD

                            Hostman | DigitalOcean | Google Cloud | AWS | Vultr
Price                       $4 | $6 | $6.88 | $7.59 | $5
Tech support                Free | $24/mo | $29/mo + 3% of monthly charges | $29/mo or 3% of monthly charges | Free
Backups                     from $0.07/GB | 20% or 30% higher base daily/weekly fee | $0.03/GB per mo | $0.05/GB per mo | 20% higher base monthly/hourly fee
Bandwidth                   Free | $0.01 per GB | $0.01 per GB | $0.09/GB for first 10 TB/mo | $0.01 per GB
Live chat support
Avg. support response time  <15 min | <24 hours | <4 hours | <12 hours | <12 hours
Anup k.
Associate Cloud Engineer
5.0 out of 5

"Hostman Comprehensive Review of Simplicity and Potential"

It been few years that I have been working on Cloud and most of the cloud service...
Mansur H.
Security Researcher
5.0 out of 5

"A perfect fit for everything cloud services!"

Hostman's seemless integration, user-friendly interface and its robust features (backups, etc) makes it much easier...
Adedeji E.
DevOps Engineer
5.0 out of 5

"Superb User Experience"

For me, Hostman is exceptional because of it's flexibility and user-friendliness. The platform's ability to offer dedicated computing resources acr...
Yudhistira H.
Mid-Market(51-1000 emp.)
5.0 out of 5

"Streamlined Cloud Excellence!"

What I like best about Hostman is their exceptional speed of deployment, scalability, and robust security features. Their...
Mohammad Waqas S.
Biotechnologist and programmer
5.0 out of 5

"Seamless and easy to use Hosting Solution for Web Applications"

From the moment I signed up, the process has been seamless and straightforward...
Mohana R.
Senior Software Engineer
5.0 out of 5

"Availing Different DB Engine Services Provided by Hostman is Convenient for my Organization usecases"

Hostman manages the cloud operations...
Faizan A.
5.0 out of 5

"Hostman is a great fit for me"

Hostman is a great fit for me. What do you like best about Hostman? It was very easy to deploy my application and create database, I didn't have
Adam M.
5.0 out of 5

"Perfect website"

This website is extremely user friendly and easy to use. I had no problems so didn't have to contact customer support. Really good website and would recommend to others.
Anup K.
4.0 out of 5

"Simplifying Cloud Deployment with Strengths and Areas for Growth"

What I like best about Hostman is its unwavering commitment to simplicity...
Naila J.
5.0 out of 5

"Streamlined Deployment with Room for Improvement"

Hostman impresses with its user-friendly interface and seamless deployment process, simplifying web application hosting...

Trusted by 500+ companies and developers worldwide

Deploy a cloud server
in just a few clicks

Set up your cloud servers at Hostman swiftly and without any fees, customizing them for your business with a quick selection of region, IP range, and details, ensuring seamless integration and data flow

Code locally, launch worldwide

Our servers, certified with ISO/IEC 27001, are located in Tier 3 data
centers across the US, Europe, and Asia
🇺🇸 San Francisco
🇺🇸 San Jose
🇺🇸 Texas
🇺🇸 New York
🇳🇱 Amsterdam
🇳🇬 Lagos
🇩🇪 Frankfurt
🇵🇱 Gdansk
🇦🇪 Dubai
🇸🇬 Singapore

Latest News

Mail

How to Set Up Your Own Email Server

A mail server is a system that manages the correct delivery of emails from the sender to the recipient. For instance, when you send an email via Gmail, you’re using Google’s mail server. A mail client, on the other hand, is a program used to read, send, and store emails. Examples of mail clients include Microsoft Outlook, Thunderbird, and others.

Protocols for Receiving and Sending Emails

SMTP (Simple Mail Transfer Protocol)

The outgoing mail server uses SMTP, which stands for "Simple Mail Transfer Protocol." Its primary job is to relay messages between the sender and recipient. SMTP has two main functions: verifying the sender's configuration and allowing the sending device to transmit the email, and sending the message and receiving the response code. SMTP servers use ports 25 (unencrypted) and 465 (encrypted) for sending emails.

POP3 (Post Office Protocol)

POP3 is a protocol for receiving emails. It allows a mail client to connect to the server and download the email to the local device, making it available for offline access. However, the email is typically removed from the remote server after downloading (though there's an option to keep a copy on the server). POP3 operates on ports 110 (unencrypted) and 995 (SSL/TLS encrypted).

IMAP (Internet Message Access Protocol)

Like POP3, IMAP is used to receive emails, but it allows you to manage messages directly on the server without downloading them locally. This is useful for accessing your email from multiple devices. IMAP uses ports 143 (unencrypted) and 993 (SSL/TLS encrypted).

Why Set Up Your Own Email Server?

The most common answer is: “To use your own domain in your email address.” While this is true, it's not the full picture. You can use your custom domain without setting up a dedicated mail server. Many services allow you to connect your domain to their mail servers. All you need to do is buy a domain and link it to their servers in the settings.
The main advantage of this approach is that they already have a configured SMTP server. This means you don't need to deal with complicated tasks like configuring forwarding, setting up anti-spam filters, or managing other advanced settings. You simply need to correctly set the NS records (Name Server records), which is much easier than setting up your own mail server.

However, using third-party services comes with limitations:
Limited control: you won’t have full control over how your email operates.
Sending limits: every company has its own limits on how many emails you can send. If you send a large volume of emails (e.g., for corporate use or newsletters), these limits might be restrictive.

If you decide to create your own mail server, you’ll need to manage:
Email quotas and sending limits.
Backups to avoid losing data.
Avoiding spam blacklists.
Setting up access permissions if multiple people are using the same mail system.

Creating and maintaining your own email server gives you more control but also requires a higher level of technical expertise and responsibility.

Setting Up an Email Server

In this guide, we'll explore how to create your own email server. For this purpose, we'll use a Hostman cloud server, selecting Ubuntu 20.04 as the operating system. The mail server we configure will support encryption, have anti-spam protection, and include a web-based admin panel for managing emails.

Preparing the Server

The first step is to switch to the superuser (root):

sudo su

Before installing the necessary software, update the server packages:

apt update && apt upgrade

Next, check whether the server's hostname matches the domain of your email:

hostname

If the response shows something different from your desired server name, use the following command:

hostnamectl set-hostname mail.hostname.com

Here, replace mail.hostname.com with your actual hostname.
For the mail server and schedulers to work correctly, we need to configure the time zone by installing the time synchronization package chrony:

apt install chrony
timedatectl set-timezone Europe/Athens

You can choose the appropriate timezone using the command:

timedatectl list-timezones

Enable and start the chrony service:

systemctl enable --now chrony

Next, we need to open the necessary ports for the mail server using the iptables utility. The required ports are:
25, 465, 587: SMTP (for sending mail)
110, 995: POP3 (for receiving mail)
143, 993: IMAP (for receiving mail)
80, 443: HTTP (for web services)

For each pair, the first port is for standard connections and the second is for secure connections; 587 is the SMTP submission port for authenticated clients. Use iptables to open these ports:

iptables -I INPUT 1 -p tcp --match multiport --dports 25,110,143,465,587,993,995,80,443 -j ACCEPT

Since iptables rules are only stored for the current session and will reset after a reboot, we need to make them persistent (this requires the iptables-persistent package):

netfilter-persistent save

Now, with the server properly set up, we can proceed to install the necessary software to create a fully functional mail server.

Installing and Configuring Postfix

Postfix is an open-source mail transfer agent (MTA). It has a modular architecture, which means it doesn't require running as the root user. Let's install Postfix and the postfix-mysql package for database integration:

apt install postfix postfix-mysql

During installation, select Internet Site when prompted. This assumes you have access to edit DNS records and can specify an FQDN (Fully Qualified Domain Name). In the next window, keep the server name as is and continue.

After installation, create a new user account to handle mail operations. First, create a group called vmail using addgroup:

addgroup --gid 1080 vmail

Next, create the vmail user and assign the home directory to /home/mail:

adduser --home /home/mail --gid 1080 --uid 1080 vmail

Here, 1080 is the group ID (GID) and user ID (UID).
If 1080 is already taken, you can choose a different value. Verify that the /home/mail directory belongs to the vmail user and group:

ls -l /home

Configuring Postfix

With the user created, proceed to configure Postfix by editing the main configuration file /etc/postfix/main.cf:

nano /etc/postfix/main.cf

Make the following edits:

# Domains we accept mail for
mydestination = localhost.$mydomain, localhost, localhost.localdomain
# Postfix protocol
inet_protocols = ipv4
# Path to the public certificate
smtpd_tls_cert_file = /etc/ssl/mail/public.pem
# Path to the private certificate
smtpd_tls_key_file = /etc/ssl/mail/private.key

Then, add additional options required for Postfix to function correctly:

# Mail storage location
virtual_mailbox_base = /home/mail
# Path to alias maps
virtual_alias_maps = proxy:mysql:/etc/postfix/mysql_virtual_alias_maps.cf
# Domain storage format
virtual_mailbox_domains = proxy:mysql:/etc/postfix/mysql_virtual_domains_maps.cf
# Mailbox storage format
virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql_virtual_mailbox_maps.cf
# Minimum virtual user ID
virtual_minimum_uid = 1080
# UID for the main user handling mail
virtual_uid_maps = static:1080
# GID for the group handling mail
virtual_gid_maps = static:1080
# Register Dovecot as the mail delivery agent
virtual_transport = dovecot
# Enable secure authentication
smtpd_sasl_auth_enable = yes
smtpd_sasl_exceptions_networks = $mynetworks
smtpd_sasl_security_options = noanonymous
broken_sasl_auth_clients = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
# Enable TLS encryption for outgoing SMTP connections
smtp_use_tls = yes
# Enable TLS support for incoming connections
smtpd_use_tls = yes
smtpd_tls_auth_only = yes
smtpd_helo_required = yes

Now, create the files referenced in the Postfix configuration.
Start with the alias maps:

nano /etc/postfix/mysql_virtual_alias_maps.cf

Add the following content:

user = postfix
password = postfixPa$$w0rd
hosts = localhost
dbname = postfix
query = SELECT goto FROM alias WHERE address='%s' AND active = '1'

Similarly, configure the domain maps:

nano /etc/postfix/mysql_virtual_domains_maps.cf

Add:

user = postfix
password = postfixPa$$w0rd
hosts = localhost
dbname = postfix
query = SELECT domain FROM domain WHERE domain='%s'

Finally, configure the mailbox maps:

nano /etc/postfix/mysql_virtual_mailbox_maps.cf

Add:

user = postfix
password = postfixPa$$w0rd
hosts = localhost
dbname = postfix
query = SELECT CONCAT(domain,'/',maildir) FROM mailbox WHERE username='%s' AND active = '1'

Edit the master.cf file:

nano /etc/postfix/master.cf

Add the following settings:

submission inet n - n - - smtpd
  -o smtpd_tls_security_level=may
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_sasl_type=dovecot
  -o smtpd_sasl_path=/var/spool/postfix/private/auth
  -o smtpd_sasl_security_options=noanonymous
  -o smtpd_sasl_local_domain=$myhostname
smtps inet n - n - - smtpd
  -o syslog_name=postfix/smtps
  -o smtpd_tls_wrappermode=yes
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
dovecot unix - n n - - pipe
  flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient}

Generating Security Certificates

To ensure secure email transactions, generate security certificates using openssl. First, create the directory where the certificates will be stored (as specified in the main.cf file):

mkdir -p /etc/ssl/mail

Generate the certificates:

openssl req -new -x509 -days 1000 -nodes -out /etc/ssl/mail/public.pem -keyout /etc/ssl/mail/private.key -subj "/C=CY/ST=Limassol/L=Limassol/O=Global Security/OU=IT Department/CN=mail.devnullhost.com"

Finally, enable and restart Postfix:

systemctl enable postfix && systemctl restart postfix

Installing and Configuring Dovecot

Dovecot is an open-source IMAP and POP3 server.
We'll install it along with the required modules for database integration:

apt install dovecot-imapd dovecot-pop3d dovecot-mysql

Next, we configure the mail storage method. Open the Dovecot configuration file:

nano /etc/dovecot/conf.d/10-mail.conf

In the file, specify the directory structure for storing mail. We'll use a hierarchy of domain → user:

mail_location = maildir:/home/mail/%d/%u/

In the same file, configure the authentication method:

service auth {
  unix_listener /var/spool/postfix/private/auth {
    mode = 0666
    user = postfix
    group = postfix
  }
  unix_listener auth-userdb {
    mode = 0600
    user = vmail
    group = vmail
  }
}

service stats {
  unix_listener stats-reader {
    user = vmail
    group = vmail
    mode = 0660
  }
  unix_listener stats-writer {
    user = vmail
    group = vmail
    mode = 0660
  }
}

Edit the Dovecot authentication configuration file:

nano /etc/dovecot/conf.d/10-auth.conf

Replace the line !include auth-system.conf.ext with !include auth-sql.conf.ext, indicating that SQL-based authentication should be used.

Next, configure SSL encryption in Dovecot:

nano /etc/dovecot/conf.d/10-ssl.conf

In this file, add the following:

ssl = required
ssl_cert = </etc/ssl/mail/public.pem
ssl_key = </etc/ssl/mail/private.key

When users connect for the first time, we want their mailboxes to be created automatically. To enable this, open the following file:

nano /etc/dovecot/conf.d/15-lda.conf

Add the following line:

lda_mailbox_autocreate = yes

Now, configure Dovecot to connect to the database.
Open the SQL configuration file:

nano /etc/dovecot/dovecot-sql.conf.ext

Add the following lines to configure MySQL:

driver = mysql
connect = host=localhost dbname=postfix user=postfix password=postfixPa$$w0rd
default_pass_scheme = MD5-CRYPT
password_query = SELECT password FROM mailbox WHERE username = '%u'
user_query = SELECT CONCAT('/home/mail/',LCASE(`domain`),'/',LCASE(`maildir`)), 1080 AS uid, 1080 AS gid FROM mailbox WHERE username = '%u'

Open the main Dovecot configuration file to set up the server interface:

nano /etc/dovecot/dovecot.conf

In this file, add the following line to listen on all available network interfaces:

listen = *

Finally, enable and restart Dovecot to apply the configuration:

systemctl enable dovecot && systemctl restart dovecot

Installing and Configuring PostfixAdmin

For PostfixAdmin to work correctly, you need a configured web server, PHP, and a MySQL database (either a LAMP or LEMP stack). In this guide, we will skip the web server setup and go directly to installing PostfixAdmin.
First, install the necessary PHP extensions:

apt install php-mysql php-mbstring php-imap

Download PostfixAdmin to the web server's root directory using wget:

wget https://sourceforge.net/projects/postfixadmin/files/latest/download -O postfixadmin.tar.gz

Create a directory for PostfixAdmin and extract the archive contents:

mkdir -p /var/www/html/postfixadmin && tar -C /var/www/html/postfixadmin -xvf postfixadmin.tar.gz --strip-components 1

Then, create a directory for storing template caches:

mkdir /var/www/html/postfixadmin/templates_c

Set the correct permissions for the web server to access the PostfixAdmin directory:

chown -R www-data:www-data /var/www/html/postfixadmin

Create the database and a user for PostfixAdmin:

mysql -u root
CREATE DATABASE postfix DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON postfix.* TO 'postfix'@'localhost' IDENTIFIED BY 'postfixPa$$w0rd';
exit;

In the local configuration file of PostfixAdmin, add the database configuration:

nano /var/www/html/postfixadmin/config.local.php

The file contents should be:

<?php
$CONF['configured'] = true;
$CONF['default_language'] = 'en';
$CONF['database_password'] = 'postfixPa$$w0rd';
$CONF['emailcheck_resolve_domain']='NO';
?>

Open the PostfixAdmin setup page in your browser by going to /postfixadmin/public/setup.php. You will be prompted to generate a password hash for authentication. Enter the password and click the button. A message containing the hash will appear below the form. Copy this hash and insert it into the config.local.php file:

nano /var/www/html/postfixadmin/config.local.php

Refresh the /postfixadmin/public/setup.php page and log in with the password you used to generate the hash. If everything is configured correctly, you should see a configuration check page. At the bottom of this page, there is a form to create an admin account. After successfully creating the admin, go to /postfixadmin/public/login.php and log in with the credentials you just set up.
You will be redirected to the PostfixAdmin administration panel.

Creating a Mailbox in PostfixAdmin

In your browser, go to /postfixadmin/public/. In the top menu, choose "Domain List → New Domain." Then, in the "Overview → Create Mailbox" section, enter the details for a test email account. You can now test the connection using email clients. Use the following connection parameters:

Server: your server's hostname
IMAP: port 143, STARTTLS
POP3: port 110, STARTTLS
SMTP: port 25, STARTTLS
Login and password: the credentials you specified when creating the mailbox

Conclusion

In this guide, we explored what a self-hosted email server is and the technologies and software used to configure it. The main advantage of having your own mail server is the ability to fully customize it: create unlimited mailboxes, assign aliases, and manage users. All these features can be implemented by renting a cloud server from Hostman and configuring the server following this guide.
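The certificate step earlier in this guide can be sanity-checked offline. The sketch below regenerates a self-signed certificate the same way the guide does, but with an illustrative subject (mail.example.com) and local file paths instead of /etc/ssl/mail, and then inspects the result:

```shell
# Generate a self-signed certificate as in the guide; the subject fields and
# output paths here are illustrative, not the values used on a real server
openssl req -new -x509 -days 1000 -nodes \
  -out public.pem -keyout private.key \
  -subj "/C=CY/ST=Limassol/L=Limassol/O=Example/CN=mail.example.com"

# Print the subject and validity period to confirm what was generated
openssl x509 -in public.pem -noout -subject -dates
```

The same -subject check can be run against /etc/ssl/mail/public.pem on the server to confirm that Postfix and Dovecot are pointed at the expected certificate.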
17 October 2024 · 12 min to read
SQLite

A Beginner’s Guide to SQLite

SQLite is an in-process library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. The source code for SQLite is publicly available, modifiable, and free. SQLite is known for its speed, minimalism, and reliability. You can install it on a VPS on Hostman.

How to Install SQLite

The SQLite library is precompiled and available for download and installation from the official website. If necessary, you can also compile the source code yourself. You can use a simple command-line tool called sqlite3 to write and execute queries on SQLite databases. There are also many free (e.g., SQLiteStudio) and commercial graphical interface tools for managing SQLite databases.

Installing and Running SQLite on Windows

Go to the SQLite download page and download the files necessary for running SQLite on Windows, including sqlite3. Create a new folder on your computer, for example, C:\sqlite. Extract the contents of the downloaded file into the C:\sqlite folder. You should see three programs:
sqlite3.exe
sqlite3_analyzer.exe
sqldiff.exe

Open the command prompt, navigate to the folder containing sqlite3.exe, and run the file. You can specify the name of the database:

C:\>cd C:\sqlite
C:\sqlite>sqlite3 <SQLite database name>

If a file with that name doesn’t exist, it will be created automatically. If no database file name is specified on the command line, a temporary database will be created, which will automatically be deleted when you exit sqlite3.

Windows users can also double-click the sqlite3.exe icon to open a terminal window with SQLite running. However, since double-clicking launches sqlite3 without arguments, no database file will be specified, and a temporary database will be used, which will be deleted at the end of the session.
Installing and Running SQLite on Linux

Here’s how to install SQLite on Linux, using Ubuntu as an example.

First, update the package list:

sudo apt update

Then install sqlite3:

sudo apt install sqlite3

To confirm the installation, check the version:

sqlite3 --version

If successful, you will see something like:

3.46.1 2024-08-27 11:03:15 3bfa9cc97da10589251b342961df8f5f68c7399fa117345eeb516bee837balt1

How to Use SQLite

Now let's proceed to working with SQLite.

Creating a Database

There are several ways to create a database in SQLite.

Using the sqlite3 command. When launching sqlite3, you can specify the database name:

sqlite3 my_first_db.db

If the database my_first_db.db exists, it will open; if not, it will be created and automatically deleted upon exiting sqlite3 if no queries were made to the database. To ensure the database is saved to disk, you can run an empty query by typing ; and pressing Enter.

After working with the database, changes can be saved using the SQLite .save command with the database name:

.save my_first_db.db

Or with the full path to the database:

.save C:/sqlite/my_first_db.db

Be careful when using the .save command, as it will overwrite any existing file with the same name without asking for confirmation.

Using the .open command. You can also create a database in SQLite using the .open command:

.open my_first_db.db

As in the first case, if the database exists, it will open; otherwise, it will be created. Unlike the first method, the new database won’t disappear when exiting sqlite3, but changes must still be saved using the .save command as shown above.

Creating a temporary database. If you run sqlite3 without any arguments, it will create a temporary database that will be deleted when the session ends.
However, this temporary database can be saved to disk using the .save command:

sqlite3
SQLite version 3.46.1 2024-08-27 11:03:15
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> ...many SQL commands omitted...
sqlite> .save db1.db

Creating a Table

In SQLite, information is stored in tables, which you can create with the CREATE TABLE statement. This command must include the table name and the names of the fields (columns). It can also include data types, field descriptions (e.g., primary key), and default values. For example, let’s create a table to store details about different dog breeds:

CREATE TABLE dog_params (
    id integer PRIMARY KEY,
    dog_breed text,
    speed_km_per_h integer,
    weight_kg integer
);

In this table, the id column is marked as the primary key, meaning it will be the index column, and an integer value will be automatically generated for it.

Inserting Records into a Table

To insert new records into a table, use the INSERT INTO SQL command, where you specify the table and columns to insert the new values. The structure of the query is:

INSERT INTO table_name (column1, column2)
VALUES (value1, value2);

If the number of values matches the number of columns in the table, you can omit the column names. Columns not listed will be filled with default values (if specified in the CREATE TABLE statement) or NULL if no default value was provided. For example:

INSERT INTO dog_params (dog_breed, speed_km_per_h, weight_kg)
VALUES ("Greyhound", 72, 29);

Alternatively, you can insert all fields, including manually specifying the id:

INSERT INTO dog_params VALUES (2, "Jack Russell Terrier", 61, 5);
INSERT INTO dog_params VALUES (3, "Dalmatian", 59, 24);

In the first case, the id was automatically generated because it's the primary key, while in the second case, we specified the id manually.
You can also insert multiple records at once, and the id will be automatically generated:

INSERT INTO dog_params (dog_breed, speed_km_per_h, weight_kg)
VALUES ("Borzoi", 58, 39), ("Standard Poodle", 48, 27);

Viewing Tables

To view the entire contents of a table, use the SELECT query:

SELECT * FROM dog_params;

The result will look like this:

1|Greyhound|72|29
2|Jack Russell Terrier|61|5
3|Dalmatian|59|24
4|Borzoi|58|39
5|Standard Poodle|48|27

You can use the WHERE clause to filter rows that meet certain conditions. For example, let's display dog breeds with speeds less than 60 km/h:

SELECT * FROM dog_params WHERE speed_km_per_h < 60;

The output will be:

3|Dalmatian|59|24
4|Borzoi|58|39
5|Standard Poodle|48|27

Modifying Records in the Table

You can modify a table using the ALTER TABLE command along with additional clauses:
Rename the table: RENAME TO
Add a column: ADD COLUMN
Rename a column: RENAME COLUMN
Delete a column: DROP COLUMN

For example, let's add a new column for the height of the dog at the withers:

ALTER TABLE dog_params ADD COLUMN height_cm integer;

To update existing records in the table, use the UPDATE query. This allows you to change the value of a single cell in one or more rows. For instance, let's add height values for the dogs in our table:

UPDATE dog_params SET height_cm=71 WHERE id=1;
UPDATE dog_params SET height_cm=28 WHERE id=2;
UPDATE dog_params SET height_cm=53 WHERE id=3;
UPDATE dog_params SET height_cm=69 WHERE id=4;
UPDATE dog_params SET height_cm=61 WHERE id=5;

The final table will look like this:

SELECT * FROM dog_params;

1|Greyhound|72|29|71
2|Jack Russell Terrier|61|5|28
3|Dalmatian|59|24|53
4|Borzoi|58|39|69
5|Standard Poodle|48|27|61

For more information, we recommend checking the SQLite documentation.

How to Use SQLiteStudio

You can work with SQLite databases from the command line or with graphical interface tools like SQLiteStudio. SQLiteStudio is free, portable, intuitive, and cross-platform.
It provides many essential functions for working with SQLite databases, such as importing and exporting data in various formats, including CSV, XML, and JSON. You can download the SQLiteStudio installer or portable version from the official website: https://sqlitestudio.pl. Then, extract (or install) the downloaded file into a folder, for example, C:\sqlite\gui\, and launch the program. 
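The command-line walkthrough above can also be replayed non-interactively by piping SQL into sqlite3. This sketch assumes the sqlite3 CLI is installed and uses a throwaway database file (dogs.db) with a shortened version of the article's example data:

```shell
# Recreate a smaller version of the article's example table
# in a throwaway database file
rm -f dogs.db
sqlite3 dogs.db <<'SQL'
CREATE TABLE dog_params (
    id integer PRIMARY KEY,
    dog_breed text,
    speed_km_per_h integer,
    weight_kg integer
);
INSERT INTO dog_params (dog_breed, speed_km_per_h, weight_kg)
VALUES ('Greyhound', 72, 29), ('Dalmatian', 59, 24);
-- Filter, as in the WHERE example above
SELECT * FROM dog_params WHERE speed_km_per_h < 60;
SQL
```

The SELECT should print 2|Dalmatian|59|24, matching the pipe-separated output format shown earlier in the article.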
17 October 2024 · 7 min to read
Linux

How to Use the diff Command in Linux

The diff command in Linux is a powerful tool that allows users to compare files and directories. With its help, you can identify differences between files and support tasks like code review, configuration management, and version control. This tutorial explains what the diff command is, the options it accepts, and practical examples of its use.

Introduction

The diff command is used in Linux to compare the content of two files line by line. When executed, it analyzes the two files and outputs the differences in a specific format. The output shows which lines need to be added, deleted, or changed to make the files identical.

Basic Syntax and Options for diff

The basic syntax for the Linux diff command is:

diff [options] file1 file2

Here, diff is the command itself, [options] are optional flags that modify its behavior, and file1 and file2 are the two files to compare. The following options are commonly used with diff:

-a: Process every file as a text file and perform a line-by-line comparison.
-b: Ignore differences in the amount of white space.
-c: Show differences with a few lines of context around them.
-d: Use a different algorithm to pinpoint a more concise set of changes.
-e: Output an ed script.
-E: Ignore changes due to tab expansion.
--binary: Compare files in binary mode.
-i: Ignore case differences in file contents.
-l: Paginate the output through pr.
-N: Treat absent files as empty.
-q: Report only whether the files differ.
-s: Report when files are identical.
-u: Display output in a unified format, showing differences more compactly.
-w: Ignore all white space.

For more details and to explore more options, open the diff manual:

man diff

Comparing Two Text Files Using diff

There are two common ways to compare files on Linux with diff.
Basic Comparison of Two Text Files

The basic way to use diff in Linux is to compare two files line by line and display their differences. To compare two text files, file1.txt and file2.txt, run:

diff file1.txt file2.txt

This command will output the differences between file1.txt and file2.txt.

Display Differences in a Unified Format

For a more readable format, use the -u option. It produces a unified format that includes a few lines of context around the differences, making the changes easier to understand:

diff -u file1.txt file2.txt

The unified format output includes line numbers, context lines, and change indicators. Lines starting with - indicate deletions, lines starting with + indicate additions, and lines starting with a space are unchanged context lines.

Using diff for Directory Comparisons

The diff command can also compare directories using the -r option. For example:

diff -r dir1 dir2

This command recursively compares all files and subdirectories within dir1 and dir2.

Understanding diff Output and Symbols

The diff output uses specific symbols to indicate changes:

---: Denotes the first file.
+++: Denotes the second file.
@@ -1,4 +1,4 @@: Part of the unified diff format; it gives context about where the changes happen. @@ indicates the start of a change hunk, -1,4 means the hunk starts at line 1 in the first file and spans 4 lines, and +1,4 means the hunk starts at line 1 in the second file and spans 4 lines.
<: Lines that exist in the first file but not in the second. Such lines must be removed from the first file to match the second file exactly.
>: Lines that are in the second file but not in the first. These lines should be added to the first file to make it identical to the second file.
-: This marker shows lines that have been deleted from the first file.
+: This marker indicates lines that have been inserted into the second file.

Let’s look at an example to make it clearer. Suppose there are two files, file1.txt and file2.txt.

Contents of file1.txt:

apple
banana
cherry
date

Contents of file2.txt:

apple
banana
date
raspberry

Running the command diff file1.txt file2.txt will produce the following output:

3d2
< cherry
4a4
> raspberry

Here’s how to interpret this output:

3d2: Line 3 in file1.txt (cherry) needs to be deleted to match file2.txt. The d stands for "delete".
< cherry: cherry is present in file1.txt but not in file2.txt.
4a4: After line 4 in file1.txt, "raspberry" needs to be added to match file2.txt. The a stands for "add".
> raspberry: raspberry is present in file2.txt but not in file1.txt.

Creating Patch Files with diff

To create a patch file, use the -u (unified) option, which provides a more readable format by showing a few lines of context around the changes. Redirect the output to a file, typically with a .patch extension. For example:

diff -u file1.txt file2.txt > changes.patch

diff -u compares file1.txt and file2.txt and generates a unified diff; > redirects the output to a file named changes.patch. To apply the patch, use the patch command:

patch file1.txt < changes.patch

Using diff with Various Output Formats

diff also supports multiple output formats; here are a few examples.

Unified Format

This format gives a snapshot of the changes with a few lines of context before and after each change. It's great for quickly seeing what was added or removed.

diff -u file1.txt file2.txt

Context Format

This format shows more surrounding lines for each change, giving a bigger picture of where the changes happened.

diff -c file1.txt file2.txt

Side-by-Side Format

This format places the two files next to each other, making it easy to compare them line by line.
diff -y file1.txt file2.txt

Brief Format

This format reports whether the files differ but does not show the actual changes.

diff -q file1.txt file2.txt

Practical Examples of Using diff

Here are some practical examples of using the diff command in Linux.

Ignoring Case Differences

When comparing files, sometimes the case of the letters might differ while the content is essentially the same. Use the -i option to ignore case differences. For example:

diff -i file3.txt file4.txt

In this example, diff will treat "Hello" and "hello" as identical, ignoring the case difference.

Ignoring White Space

White space differences, such as extra spaces or tabs, can be ignored using the -w option. This is useful when formatting changes have been made but the content remains the same. For example:

diff -w file1.txt file2.txt

Here, diff ignores all white space, treating "Hello   World" and "Hello World" as identical.

Comparing Binary Files

diff can also be run on binary files using the --binary option, which reads and writes the files in binary mode. This is helpful when you need to check whether two binary files are identical; note that for differing binary files, diff only reports that they differ rather than showing the changed bytes. For example:

diff --binary file1.bin file2.bin

Ignoring Blank Lines

To ignore blank lines when comparing files, use the -B option, which is useful when blank lines have been added or removed.

diff -B file1.txt file2.txt

Conclusion

diff is a versatile Linux command for comparing files and directories. By understanding its syntax, options, and output formats, users can efficiently identify differences and manage changes. Whether for code reviews, configuration management, or version control, the diff command is an essential part of any Linux user's toolkit.
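As a runnable recap of the unified-diff and patch workflow described above, here is a short shell session; the file contents mirror the apple/banana example from earlier, and the file names are illustrative:

```shell
# Create the two versions of the file from the earlier example
printf 'apple\nbanana\ncherry\ndate\n'    > file1.txt
printf 'apple\nbanana\ndate\nraspberry\n' > file2.txt

# Save the unified diff as a patch file.
# diff exits with status 1 when the inputs differ, so tolerate that in scripts.
diff -u file1.txt file2.txt > changes.patch || true

# Apply the patch, bringing file1.txt in line with file2.txt
patch file1.txt < changes.patch

# The files are now identical: -q prints nothing and diff exits with status 0
diff -q file1.txt file2.txt && echo "files now match"
```

Because diff signals "same" versus "different" through its exit status (0 for identical, 1 for different, 2 for trouble), scripts usually test the exit code rather than parse the output.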
17 October 2024 · 7 min to read
Windows

RDP Protocol: What It Is, What It's Used For, and How It Works

Remote desktop connection to a host can be achieved in various ways, such as using third-party applications like UltraVNC. However, the most common tool for using the Remote Desktop Protocol (RDP) is Microsoft's built-in utility called Remote Desktop. In this article, we'll explore the history of RDP, how it works, and how data is protected during transmission. History of RDP The Remote Desktop Protocol (RDP) was created by Microsoft and initially developed as one of the primary methods for remote access to computers or servers. It was also intended to enable weak local machines to connect to more powerful servers for tasks like complex calculations. Today, RDP is primarily a tool for setting up remote workplaces. Key milestones in the development of RDP: 1998: RDP was first used in Windows NT 4.0 Terminal Server Edition. It continues to be supported in all Windows operating systems, up to Windows 11 and Windows Server 2022. RDP remains a default remote access tool in Windows, despite the existence of other technologies like VNC. There are clients for nearly all major operating systems, including Linux, FreeBSD, macOS, iOS, Android, and more. The current version is RDP 10, which includes features like auto-resizing and enhanced graphics compression using the H.264/AVC codec. General Characteristics of RDP Supported color depths: 32-bit and lower (8, 15, 16, and 24-bit). Data protection: 128-bit encryption using the RC4 algorithm. Other features: Audio redirection, printer and port forwarding, clipboard sharing between the local and remote computers. Citrix Systems played a key role in the early development of remote desktop technology. In the early 1990s, they developed the WinFrame system based on Windows NT 3.51. Citrix eventually collaborated with Microsoft, and in 1997, Microsoft acquired certain rights to Citrix's technology. As a result, Citrix retained rights to the ICA protocol, while Microsoft developed RDP based on the ITU T.120 standard. 
Citrix and Microsoft remain competitors, with Citrix focusing on high-performance systems and Microsoft leading in the mid-range server market with Terminal Services. Both companies continue to expand their product capabilities to reach more users. Advantages of Terminal Services: Easy application setup for clients. Centralized session management. Functionality tied to active Terminal Services licenses. Network Setup Using Terminal Services Microsoft products allow the use of the RDP protocol in two different modes: for connecting to application servers (Terminal Server Mode) or for managing system settings (Remote Administration Mode). Let's look at both options: Remote Administration Mode All versions of Microsoft Windows support this mode. The only difference is the number of remote connections allowed simultaneously. In desktop versions, only one connection is supported: a local login or a remote host connection. In server versions, the options are broader: two network connections and one local client. Terminal Server Mode This mode is only available on Windows Server editions. The advantage of this mode is that there are no limits on the number of connections—you can have as many as needed. However, you must purchase licenses and configure the system properly to use this mode. The server can be set up as either a dedicated node or a terminal server. Without these steps, access cannot be granted. How RDP Works The Remote Desktop Protocol is an application-layer TCP protocol. Initially, the computers establish a connection. After confirming the connection, the system initiates the RDP session at the transport layer. Once this process is successfully completed, the terminal server begins transmitting the desktop image and allows input from the keyboard and mouse. The display can either be a rendered image or graphical primitives. The system supports the simultaneous use of multiple virtual channels within a single physical connection. 
This is necessary to enable the following functions: Printing to a printer or data exchange via a serial port. Clipboard functionality and other operations with the disk subsystem. Use of audio playback and recording systems. The parameters for the virtual channels are set at the start, during the initial connection with the host. Two security options are available when initiating a connection: integrated Standard RDP Security and optional Enhanced RDP Security. Let's look at their functionality in more detail and the features of each approach. Standard RDP Security This security approach involves authentication, data encryption, and integrity monitoring through modules integrated into RDP. The encryption uses the RC4 algorithm with a key length between 40-168 bits (depending on the Windows version). When establishing a connection, the system generates a pair of keys to encrypt the information exchanged between the client and the server. The authentication process works as follows: The system generates a pair of RSA keys. It then creates a public key certificate (Proprietary Certificate). This certificate is signed with the RSA key embedded in Windows. The client receives the Proprietary Certificate to access the terminal server. After verifying the certificate, the server's public key is sent to the client. Providing the account and password triggers these actions only upon successful verification. All transmitted data undergoes integrity checks using a Message Authentication Code (MAC) algorithm, based on MD5 and SHA1. The system can be switched to use 3DES encryption, which was introduced in Windows 2003 Server to meet FIPS standards. Enhanced RDP Security The second security option involves using external modules like TLS 1.0 and CredSSP. The TLS protocol was introduced in Windows Server starting with the 2003 release and is used when the client machine supports RDP. 
Before connecting, it's recommended to check the utility version (RDP 6.0 or higher is required). Users can choose between generating their own certificate or using an existing one from the Windows system. CredSSP (Credential Security Support Provider) is a combined solution that integrates TLS, Kerberos, and NTLM technologies. The advantages of this approach include: Login authorization is verified before completing the RDP connection, saving server resources when dealing with a high number of requests. Encryption and account identification are handled via TLS standards. Single sign-on to the host is supported using Kerberos or NTLM. CredSSP can be selected by enabling the "Use Network Level Authentication" option, available in all operating systems starting with Windows Vista/2008 Server. Terminal Services Licensing  A popular method for accessing Terminal Services is through a "thin client." Its main feature is creating and activating a license server, without which the system will not function (although connections through a "thick client" or to a standard desktop are possible, but only in a remote mode). Licenses are issued in two modes. Without activation, the user is granted a temporary license for the duration of the current session or for a limited period. Once activated, the user is issued a "permanent" license on the server. This process involves issuing a digital certificate confirming the connected workstation's legitimacy. Types of Client Licenses: Temporary Terminal Server CAL: A temporary license limited by a specified period. Device Terminal Server CAL: Tied to a specific device. User Terminal Server CAL: Linked to the specific user defined in the settings. External Terminal Server Connector: Designed for external connections. Let's look at these options in more detail. Time-Based License Upon their first connection to the terminal server, a time-based license is issued to the user. 
Its validity period is 90 days, regardless of the client's activity. If a second connection is successfully established, the server will attempt to assign a permanent license to the client machine. Licenses must be available in the server's storage for this to happen. Device-Based License This option is designed for a client working from a specific physical device. The license is valid for a period ranging from 52 to 89 days. If 7 or fewer days remain before expiration, the server will attempt to replace it with a permanent license upon any connection attempt. If the device is changed, the cycle will restart from the beginning. User-Based License The user-based license is the most flexible option, as it allows users to work from any device, creating convenient conditions for deploying remote workspaces. A notable feature of this mode is that Terminal Services does not have a built-in connection counter. As a result, when additional clients connect, the number of available licenses remains unchanged. This does not comply with Microsoft's licensing rules, but it is a known fact. If you need to issue both device and user licenses simultaneously, the server must be configured for device-based licensing. In a mixed setup, there will also be no counter for tracking different client connections, and limitations can only be enforced by linking them to specific hardware. External User License This is a special type of license for granting rights to external users. It does not limit the number of clients, regardless of the volume specified, because, under the EULA agreement, a dedicated server must be allocated for such users. This restricts external clients' access to corporate sessions. This type of license is not very popular due to its high cost. RDP Connection on macOS Microsoft has released a Remote Desktop utility for macOS, which you can download from the App Store. 
The connection process is similar to that on Windows: first, you enter the server's IP address or domain, and then, when prompted, enter the username and password of the user who has remote access rights. After launching, you may receive a warning about an untrusted certificate. To resolve this, simply click "Show Certificate" and select the "Always trust..." option. From then on, the system will connect without asking again.

Remote Desktop Connection on Ubuntu

When connecting from Windows to a remote host running Ubuntu, the Remote Desktop utility is used, just like on Windows. However, you need to prepare the Ubuntu server by installing support for the technology:

sudo apt install xrdp

After installation, start the service and verify that it is running:

sudo systemctl start xrdp
sudo systemctl status xrdp

If you're using Linux as the local machine, the Remmina utility will be useful. You will need to switch the protocol from VNC to RDP. Despite the different systems, the user experience will feel seamless. Other Linux clients include GNOME Connections, Vinagre, Xfreerdp, and Rdesktop. To install the Remmina package:

sudo apt-add-repository ppa:remmina-ppa-team/remmina-next
sudo apt-get update
sudo apt-get install remmina remmina-plugin-rdp libfreerdp-plugins-standard

Conclusion

We've completed our overview of the RDP protocol, covering what it is and how it works. One final note is that a license server can be assigned one of two roles:

Domain or Workgroup License Server – operates within a domain or workgroup.
Entire Enterprise License Server – operates within an organization.

The choice affects how the license server is discovered. In the first case, the system searches through Active Directory, while in the second, it uses NetBIOS broadcast requests.
16 October 2024 · 9 min to read
Laravel

Migrations and Seeders in Laravel: Database Setup

Migrations and seeders are two tools in the PHP framework Laravel that simplify working with databases and solve issues related to inconsistency. In Laravel, migrations function similarly to version control for databases. They allow the development team to define and share a common database schema, ensuring all team members are aware of any changes. Seeders in Laravel are tools used to populate the database with test data. There are several seeder classes available that allow for control over the seeding process. In this article, you will learn how to get started with Laravel and create migrations and seeders.

Installing and Setting Up Laravel

In this tutorial, we'll use Docker to run a Laravel application. Laravel provides a built-in command-line interface for Docker called Laravel Sail, which is included in the Laravel package by default. We will install all necessary software on Windows 10. This guide consolidates instructions from various sources, including Microsoft, Docker, and Laravel. First, we will install WSL (Windows Subsystem for Linux), then Docker, and finally Laravel. The instructions below are specifically for Windows 10. For other operating systems, refer to the official Laravel documentation.

Installing WSL

WSL (Windows Subsystem for Linux) is a compatibility layer for running Linux applications on Windows. If you have Windows 10 version 2004 or later, you can install WSL by running this command in PowerShell as an administrator:

wsl --install

By default, the Ubuntu distribution will be used. We need WSL 2, so let's switch to this version. This process will take a few minutes:

wsl --set-version Ubuntu 2

To check the version, use the command:

wsl.exe -l -v

Example output:

PS C:\WINDOWS\system32> wsl.exe -l -v
  NAME      STATE           VERSION
* Ubuntu    Stopped         2

To check your Windows version, press Win+R and type winver. A window will appear with your system information. If your version is lower than 2004, the wsl --install command will not work.
Here's a list of steps for older versions (you'll need at least version 1903 with build 18362): Enable the Windows Subsystem for Linux: Open PowerShell as an administrator and run: dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart Enable the Virtual Machine feature: Run this command in the same PowerShell window: dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart Restart your computer to complete the installation. Download and install the Linux kernel update package from this link. Set WSL 2 as the default version: After rebooting, open PowerShell again as an administrator and run: wsl --set-default-version 2 Install a Linux distribution: Choose your preferred Linux distribution, download it from the Microsoft Store, and install it. After installation, you can access your Linux terminal from the search menu. Installing Docker Go to docker.com, click Get Started, and download Docker Desktop for Windows. During the installation process, make sure to select the option Use WSL 2 instead of HYPER-V. Open Docker, click the gear icon on the top panel to enter the settings menu. In the General tab, you can adjust the basic settings. Here, you can enable WSL 2 and disable unnecessary features like telemetry. After setting these options, move to the Resources tab and enable integration with Ubuntu. After this, click Apply & Restart. Docker will reboot with the new settings. Setting up Laravel Let's create a new Laravel application. Open the Ubuntu terminal and run the following command: curl -s https://laravel.build/example-app | bash Here, example-app is the directory for the new Laravel application. On the first run, the command will download the necessary images, which may take some time. Once the download completes, Laravel Sail will be available. 
Navigate to the project directory:

cd example-app

Next, create the Laravel Sail containers:

./vendor/bin/sail up

The container creation process may take a while, but subsequent Sail launches will be faster. Sail will start the Laravel web application with informational resources that are accessible on localhost. If you encounter the error:

The stream or file "/var/www/html/storage/logs/laravel.log" could not be opened in append mode: Failed to open stream: Permission denied

you need to change the file permissions for docker.sock using this command:

sudo chmod 666 /var/run/docker.sock

Working with Migrations

As mentioned earlier, migrations are similar to version control. They allow users to create and modify the database schema or roll back changes. Migrations do not define the content of tables (except for constraints). In a Laravel project, migration files are stored in the ./database/migrations/ directory. When you create a Laravel Sail application, it comes with several migrations by default. Let's run these migrations and check the results. Open the Ubuntu terminal and enter the following commands:

./vendor/bin/sail up -d   # Start the application in the background
./vendor/bin/sail artisan migrate

Output:

Migration table created successfully.
Migrating: 2014_10_12_000000_create_users_table
Migrated: 2014_10_12_000000_create_users_table (1,846.42ms)
Migrating: 2014_10_12_100000_create_password_resets_table
Migrated: 2014_10_12_100000_create_password_resets_table (1,965.53ms)
Migrating: 2019_08_19_000000_create_failed_jobs_table
Migrated: 2019_08_19_000000_create_failed_jobs_table (2,196.68ms)
Migrating: 2019_12_14_000001_create_personal_access_tokens_table
Migrated: 2019_12_14_000001_create_personal_access_tokens_table (3,325.95ms)

Migrations in Laravel are executed in the order they are created. This allows Laravel to implement the rollback mechanism correctly. Running migrations in an ad hoc manner can lead to data loss in the database.
Next, let's check whether the corresponding tables were created in the database. We can connect to the database using HeidiSQL. The connection parameters can be found in the environment configuration file .env: DB_CONNECTION=mysql DB_HOST=mysql DB_PORT=3306 DB_DATABASE=example_app DB_USERNAME=sail DB_PASSWORD=password You can also set different parameters here. Once connected to the database, you will see its current state. Now, let's create a new migration and see how it works. Creating Migrations First, let's create a migration class. To do this, execute the make:migration command in the Ubuntu terminal: ./vendor/bin/sail artisan make:migration example_migr Output: Created Migration: 2024_06_02_005033_example_migr The name of the migration consists of two parts: a timestamp and a name chosen by the user. The timestamp helps to run migrations in the correct order. Now, open the created file in any editor of your choice: <?php use Illuminate\Database\Migrations\Migration; use Illuminate\Database\Schema\Blueprint; use Illuminate\Support\Facades\Schema; return new class extends Migration { /** * Run the migration * * @return void */ public function up() { // } /** * Rollback the migration * * @return void */ public function down() { // } }; Migrations use two methods: up() and down(). In the up() method, you describe the database changes that need to be executed when the migration is run. The down() method undoes the changes made by up(). Now let's write the migration code to create a table called items with the following columns: id — primary key name — name of the item cost — price of the item count — quantity of the item in stock Here is the migration code for this table: <?php use Illuminate\Database\Migrations\Migration; use Illuminate\Database\Schema\Blueprint; use Illuminate\Support\Facades\Schema; return new class extends Migration { /** * Run the migration. 
 * @return void
 */
public function up()
{
    Schema::create('items', function (Blueprint $table) {
        $table->id();
        $table->string('name');
        $table->integer('count');
        $table->integer('cost');
    });
}

/**
 * Rollback the migration
 *
 * @return void
 */
public function down()
{
    Schema::dropIfExists('items');
}
};

Schema::create creates a new table. Schema::dropIfExists deletes the table if it exists. Save the changes to the file and run the migration again through the terminal:

./vendor/bin/sail artisan migrate

Output:

Migrating: 2024_06_02_005033_example_migr
Migrated:  2024_06_02_005033_example_migr (658.56ms)

Rolling Back Migrations

There are several commands to roll back migrations:

migrate:rollback — rolls back the last migrate command. You can also roll back a specific number of migrations using the --step=[number] option.
migrate:reset — rolls back all migrations.
migrate:refresh — rolls back all migrations and then runs migrate.
migrate:fresh — deletes all tables from the database and then runs migrate.

Seeders

At this point, the database only contains empty tables. They can be filled using seeders. Seeders are necessary for populating the database with dummy or test data. You can create a seeder using the make:seeder command:

./vendor/bin/sail artisan make:seeder ExampleSeeder

Output:

Seeder created successfully.

All seeders are located in the database/seeders directory. Navigate to it and open ExampleSeeder in your editor:

<?php

namespace Database\Seeders;

use Illuminate\Database\Console\Seeds\WithoutModelEvents;
use Illuminate\Database\Seeder;

class ExampleSeeder extends Seeder
{
    /**
     * Run the database seeds.
     *
     * @return void
     */
    public function run()
    {
        //
    }
}

In the run() method, you need to place the code for generating data.
Let's add 10 random records to the items table: <?php namespace Database\Seeders; use Illuminate\Database\Console\Seeds\WithoutModelEvents; use Illuminate\Support\Facades\DB; use Illuminate\Database\Seeder; use Illuminate\Support\Str; class ExampleSeeder extends Seeder { /** * Run the database seeds. * * @return void */ public function run() { for ($i = 1; $i <= 10; $i++) { DB::table('items')->insert([ 'id' => $i, 'name' => Str::random(5), 'cost' => rand(5, 20), 'count' => rand(0, 10), ]); } } } Using the insert() command, we add a record to the items table. However, new seeder classes do not run automatically. You need to add its call to the main DatabaseSeeder class: public function run(){    $this->call(ExampleSeeder::class);} After that, we can run the data seeding: ./vendor/bin/sail artisan db:seed Output: Seeding: Database\Seeders\ExampleSeederSeeded: Database\Seeders\ExampleSeeder (841.68ms)Database seeding completed successfully. And here is the result in the database: Conclusion We have explored tools in the Laravel framework, such as migrations and seeders, which simplify working with databases. You can always rent a server for any experiments on Hostman.
16 October 2024 · 9 min to read
JavaScript

How to Use JavaScript Array map()

Arrays play a crucial role in JavaScript programming. They let you store and manipulate collections of data with ease. One powerful method for working with arrays is the map() function. The map() method creates a new array by applying a given function to each element of the original array. Simply put, map() takes your array, processes each item through a function you provide, and returns a new array with the transformed items. It doesn't change the original array, which helps prevent unintended side effects in your code.  In this guide, we'll explore the map() function, how it works, and how you can use it in your JavaScript projects. Basic Syntax and Parameters of map() The map() function is a built-in method in JavaScript that you can use on arrays. It creates a new array by applying a function to each element of the original array. Below is the syntax: let newArray = originalArray.map(function(currentValue, index, array) { // return element to newArray }); Or, using arrow function syntax: let newArray = originalArray.map((currentValue, index, array) => { // return element to newArray }); Let's break down the parameters: currentValue: The current element being processed in the array. index (optional): The index of the current element. array (optional): The original array to which map() is being applied. However, you must provide a callback function that tells map() how to transform each element. This function runs once for every item in the array. Importantly, map() does not modify the original array; instead, it returns a new array with the transformed elements. For example, if you have an array of numbers and want to add 1 to each number, you can use map() like this: let numbers = [1, 2, 3, 4, 5]; let incrementedNumbers = numbers.map(number => number + 1); console.log(incrementedNumbers); Output: [ 2, 3, 4, 5, 6 ] In this example, the map() function takes each number in the numbers array, adds 1 to it, and returns a new array called incrementedNumbers. 
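Because it is easy to gloss over, here is a minimal, self-contained check of the claim above that map() does not modify the original array (variable names are illustrative):

```javascript
// map() returns a brand-new array; the source array is left untouched
const numbers = [1, 2, 3];
const plusOne = numbers.map(n => n + 1);

console.log(numbers);             // [ 1, 2, 3 ] — unchanged
console.log(plusOne);             // [ 2, 3, 4 ]
console.log(numbers === plusOne); // false — a distinct array object
```

This non-mutating behavior is what makes it safe to keep both the original data and the transformed copy around at the same time.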
Using map() to Transform Array Elements The map() method is helpful when you need to transform each element in an array. It applies a function to every item and returns a new array with the transformed elements. Let's explore how you can use map() to manipulate array data. Let’s say you have an array of numbers, and want to double each number. Instead of writing a loop, you can achieve this with map() in a more concise way. let numbers = [1, 2, 3, 4, 5]; let doubled = numbers.map(number => number * 2); console.log(doubled); Output: [ 2, 4, 6, 8, 10 ] In this example, numbers.map(number => number * 2) takes each element in the numbers array, multiplies it by two, and stores the result in a new array called doubled. The original numbers array remains unchanged. You can also transform arrays of strings. Let's say you have an array of names and you want to convert them all to uppercase letters. let names = ['alice', 'bob', 'charlie']; let upperNames = names.map(name => name.toUpperCase()); console.log(upperNames); Output: [ 'ALICE', 'BOB', 'CHARLIE' ] Here, map() applies the toUpperCase() method to each string in the names array, resulting in a new array upperNames with all names in uppercase. Another practical use is when working with arrays of objects. For example, you might have an array of user objects and want to extract a specific property from each one. let users = [ { name: 'Alice', age: 25 }, { name: 'Bob', age: 30 }, { name: 'Charlie', age: 35 } ]; let userNames = users.map(user => user.name); console.log(userNames); Output: [ 'Alice', 'Bob', 'Charlie' ] In this case, map() extracts the name property from each user object and creates a new array userNames. Practical Examples of map() The map() function really shines when working with real-world data. Let's see some practical examples to see how map() can simplify your JavaScript code. 
Example 1: Converting Temperatures

Let's assume you have an array of temperatures in Celsius and want to convert them to Fahrenheit. You can use map() to apply the conversion formula to each element.

let celsius = [0, 20, 30, 40];
let fahrenheit = celsius.map(temp => temp * 9/5 + 32);
console.log(fahrenheit);

Output: [ 32, 68, 86, 104 ]

In this example, map() takes each temperature in the celsius array, converts it to Fahrenheit, and returns a new array called fahrenheit.

Example 2: Extracting Usernames from Emails

Imagine you have an array of email addresses and you want to extract the usernames.

let emails = ['alice@example.com', 'bob@example.com', 'charlie@example.com'];
let usernames = emails.map(email => email.split('@')[0]);
console.log(usernames);

Output: [ 'alice', 'bob', 'charlie' ]

Here, map() processes each email address, splits it at the @ symbol, and extracts the username part.

Example 3: Applying Discounts to Products

If you have an array of product objects and need to apply a discount to each product's price, map() makes this task easy.

let products = [
  { name: 'Laptop', price: 1000 },
  { name: 'Phone', price: 500 },
  { name: 'Tablet', price: 750 }
];

let discountedProducts = products.map(product => {
  return {
    name: product.name,
    price: product.price * 0.9 // Apply a 10% discount
  };
});

console.log(discountedProducts);

Output: [ { name: 'Laptop', price: 900 }, { name: 'Phone', price: 450 }, { name: 'Tablet', price: 675 } ]

In this case, map() creates a new array discountedProducts with the discounted prices, leaving the original products array unchanged.

Example 4: Formatting Dates

If you have an array of date strings and want to format them into a more readable form:
let dates = ['2023-01-01', '2023-02-14', '2023-12-25'];
let formattedDates = dates.map(date => {
  let options = { year: 'numeric', month: 'long', day: 'numeric' };
  return new Date(date).toLocaleDateString(undefined, options);
});

console.log(formattedDates);

Output: [ 'January 1, 2023', 'February 14, 2023', 'December 25, 2023' ]

Here, map() converts each date string into a Date object and formats it using toLocaleDateString().

Example 5: Generating HTML Elements

If you're working with the DOM, you can use map() to create an array of HTML elements.

let fruits = ['apple', 'banana', 'cherry'];
let fruitListItems = fruits.map(fruit => `<li>${fruit}</li>`);
console.log(fruitListItems);

Output: [ '<li>apple</li>', '<li>banana</li>', '<li>cherry</li>' ]

You can then join these items and insert them into your HTML.

let fruits = ['apple', 'banana', 'cherry'];
let fruitListItems = fruits.map(fruit => `<li>${fruit}</li>`);
let fruitList = `<ul>${fruitListItems.join('')}</ul>`;
console.log(fruitList);

Output: <ul><li>apple</li><li>banana</li><li>cherry</li></ul>

This demonstrates how map() can help generate dynamic content.

Chaining map() with Other Array Methods

The map() function becomes even more powerful when you chain it with other array methods like filter(), reduce(), or sort(). You can perform complex data transformations easily when you combine these methods.

Filtering and Mapping

Let's say you have an array of numbers and you want to double only the even numbers.

let numbers = [1, 2, 3, 4, 5, 6];
let doubledEvens = numbers
  .filter(number => number % 2 === 0)
  .map(number => number * 2);

console.log(doubledEvens);

Output: [ 4, 8, 12 ]

In this example, filter() first selects the even numbers from the numbers array. Then, map() doubles each of those numbers.

Mapping and Reducing

Imagine you have an array of objects representing products and want to calculate the total price of all products after applying a discount.
let products = [
  { name: 'Laptop', price: 1000 },
  { name: 'Phone', price: 500 },
  { name: 'Tablet', price: 750 }
];

let totalDiscountedPrice = products
  .map(product => product.price * 0.9) // Apply a 10% discount
  .reduce((total, price) => total + price, 0);

console.log(totalDiscountedPrice);

Output: 2025

Here, map() creates a new array with the discounted prices. Then, reduce() sums up the prices to get the total.

Sorting Mapped Values

Imagine you have an array of strings and want to convert them to uppercase and then sort them alphabetically.

let fruits = ['banana', 'apple', 'cherry'];
let sortedFruits = fruits
  .map(fruit => fruit.toUpperCase())
  .sort();

console.log(sortedFruits);

Output: [ 'APPLE', 'BANANA', 'CHERRY' ]

In this case, map() transforms each fruit name to uppercase, and sort() arranges them alphabetically.

Complex Data Transformation

Let's assume you have an array of user objects, and you want to get a sorted list of active users' usernames.

let users = [
  { username: 'alice', active: true },
  { username: 'bob', active: false },
  { username: 'charlie', active: true }
];

let activeUsernames = users
  .filter(user => user.active)
  .map(user => user.username)
  .sort();

console.log(activeUsernames);

Output: [ 'alice', 'charlie' ]

Here, you filter out inactive users, extract their usernames with map(), and sort the usernames alphabetically.

Handling Edge Cases with map()

While the map() function is powerful, handling edge cases is important to prevent unexpected behavior. Let's explore some common scenarios and how to manage them.

Dealing with Empty Arrays

If you apply map() to an empty array, it simply returns another empty array. There's no need to add extra checks for this case.

let emptyArray = [];
let result = emptyArray.map(item => item * 2);
console.log(result);

Output: [ ]

Handling Undefined or Null Elements

When your array contains undefined or null values, map() will pass them to your callback function.
You need to ensure your function can handle these values without throwing errors.

let values = [1, null, 3, undefined, 5];
let processedValues = values.map(value => {
  if (value === null || value === undefined) {
    return 0; // Assign a default value
  }
  return value * 2;
});
console.log(processedValues);

Output:

[ 2, 0, 6, 0, 10 ]

In this example, we check for null or undefined and assign a default value of 0 before performing the calculation.

Avoiding Holes in Arrays

Arrays can have holes if elements are deleted or uninitialized. The map() function skips these holes, which might lead to unexpected results.

let sparseArray = [1, , 3, , 5]; // Note the missing elements
let doubled = sparseArray.map(number => number * 2);
console.log(doubled);

Output:

[ 2, <1 empty item>, 6, <1 empty item>, 10 ]

To handle this, you can use the Array.from() method to create an array without holes before mapping.

let sparseArray = [1, , 3, , 5]; // Note the missing elements
let denseArray = Array.from(sparseArray);
let doubledDense = denseArray.map(number => number * 2);
console.log(doubledDense);

Output:

[ 2, NaN, 6, NaN, 10 ]

Now, the missing elements are converted to undefined, and you can handle them accordingly (for example, with the same kind of null/undefined check shown above).

Managing Asynchronous Operations

The map() function doesn't support asynchronous operations natively. To perform asynchronous tasks, consider using Promise.all() with map().

let urls = ['url1', 'url2', 'url3'];
let fetchPromises = urls.map(url => fetch(url));

Promise.all(fetchPromises)
  .then(responses => {
    // Handle responses
  })
  .catch(error => {
    // Handle errors
  });

Here, map() creates an array of promises, and Promise.all() waits for all of them to resolve.

Using map() with Callback Functions

The map() method relies on a callback function to process each element in an array. Let's explore different ways to use map() with callback functions.
Using Named Functions You can pass a named function to map() for better readability, especially when the transformation logic is complex. function doubleNumber(number) { return number * 2; } let numbers = [1, 2, 3, 4, 5]; let doubledNumbers = numbers.map(doubleNumber); console.log(doubledNumbers); Output: [ 2, 4, 6, 8, 10 ] In this example, the doubleNumber function is passed as the callback to map(). Each element in the numbers array is doubled, resulting in a new array doubledNumbers. Using Anonymous Functions Anonymous functions can be used directly within map() for simple transformations. let numbers = [1, 2, 3, 4, 5]; let squaredNumbers = numbers.map(function(number) { return number * number; }); console.log(squaredNumbers); Output: [ 1, 4, 9, 16, 25 ] Here, an anonymous function computes the square of each number in the array. Using Arrow Functions Arrow functions offer a concise syntax and are commonly used with map(). let numbers = [1, 2, 3, 4, 5]; let tripledNumbers = numbers.map(number => number * 3); console.log(tripledNumbers); Output: [ 3, 6, 9, 12, 15 ] The arrow function number => number * 3 triples each number in the array. Using Callback Functions with Multiple Parameters The callback function can accept additional parameters like index and array, providing more context during the mapping process. let numbers = [10, 20, 30, 40, 50]; let adjustedNumbers = numbers.map((number, index) => number - index * 2); console.log(adjustedNumbers); Output: [ 10, 18, 26, 34, 42 ] In this example, each number is adjusted based on its index in the array. Using thisArg with map() The map() method accepts an optional second argument called thisArg, which sets the value of this inside the callback function. 
let multiplier = { factor: 2 }; let numbers = [1, 2, 3]; let scaledNumbers = numbers.map(function(number) { return number * this.factor; }, multiplier); console.log(scaledNumbers); Output: [ 2, 4, 6 ] By passing multiplier as the thisArg, the callback function can access this.factor to scale each number. Performance Considerations for map() When using map(), it's important to consider performance. Since map() creates a new array, it consumes additional memory. Therefore, with very large arrays, this extra memory usage can affect your application's efficiency. Furthermore, while map() offers clean and readable code, traditional loops like for or while loops might execute faster in performance-critical situations. If speed is crucial, consider using a loop instead of map(). Moreover, keep your callback functions simple. Complex operations inside the map() callback can slow down processing. To maintain optimal performance, aim for straightforward transformations. Finally, use map() only when you need the new array it returns. If you're performing actions without needing the resulting array, methods like .forEach() are more appropriate. This avoids unnecessary memory allocation and enhances performance. Conclusion The JavaScript map() function is a powerful tool for array manipulation. By applying a function to each element, it creates a new array with transformed values, making tasks like data conversion and extraction straightforward. Understanding how to use map() effectively allows you to write cleaner, more efficient code.
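To illustrate the last performance point above — reach for map() only when you actually use the array it returns — here is a minimal sketch contrasting map() with forEach() (the sample data is invented for illustration):

```javascript
const numbers = [1, 2, 3];

// map() allocates and returns a new array — appropriate when you need that array.
const doubled = numbers.map(n => n * 2);
console.log(doubled); // [ 2, 4, 6 ]

// forEach() returns undefined — appropriate for side effects only,
// avoiding an unnecessary array allocation.
let sum = 0;
numbers.forEach(n => { sum += n; });
console.log(sum); // 6
```

Using forEach() for the summation makes the intent explicit: no new array is needed, only the accumulated side effect.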
16 October 2024 · 12 min to read
Flask

How to Handle Incoming Request Data in Flask

Websites where you watch videos, read news or chat with friends operate on a client-server architecture. The client (e.g., a browser) sends a request to the server, and based on that request, the server responds—usually by sending an HTML document back. When we look under the hood of how websites work, request handling follows a general logic: Accept the request. Choose a request handler. Execute the handler. Send a response back to the client. A framework is a developer tool used to implement this logic. Flask is a microframework for building websites using Python. The term "micro" means that Flask provides a small, lightweight set of tools that are still sufficient for creating most websites. Flask is great for building APIs or websites hosted on cloud servers. In this article, we'll create a Flask application and explore how to process incoming requests. Getting Started We'll use PyCharm + pipenv on Windows 10. There are plenty of tutorials online for installing this setup on other operating systems, but here’s how to install it on Windows 10. Install Python. Visit the official website www.python.org/downloads, choose the Python version for your OS, and follow the installer instructions. Install pipenv. Pipenv is a package for managing virtual environments in Python. To install it, open the Windows command prompt and run this command: py -m pip install pipenv Install PyCharm. PyCharm is an IDE (Integrated Development Environment) for Python. Download and install it from the official site www.jetbrains.com/pycharm, choosing either the Community or Professional version. Creating a Project In PyCharm, click "File" → "New Project" and specify the following: Project directory Virtual environment settings Path to the Python interpreter Path to pipenv executable After setting everything up, click "Create." 
Installing Flask

After creating the project, open the terminal at the bottom of PyCharm and run:

pip install flask

If everything is set up correctly, Flask will install. If you encounter an error like "Execution of scripts is disabled," open PowerShell as an administrator and run this command:

Set-ExecutionPolicy RemoteSigned

Then type [A] to allow the script to run.

Working with URLs

Once the necessary components are installed, let's start interacting with Flask. First, import Flask:

from flask import Flask

We'll write a small Flask application to demonstrate working with URL requests:

from flask import Flask

app = Flask(__name__)

@app.route("/hostman")
def hostman():
    return "This is the /hostman page"

@app.route("/blog")
def blog():
    return "This is the /blog page"

@app.route("/cloud")
def cloud():
    return "This is the /cloud page"

if __name__ == "__main__":
    app.run()

Explanation:

First, we import Flask.
We then create an instance of the Flask class, app.
We use the .route decorator to assign functions to specific URLs.
Finally, we run the application using app.run().

Running the Application

In the terminal, you will see the local address of your site (127.0.0.1:5000). Navigating to that address will return a 404 error, because we haven't defined a handler for the root URL (/). However, if you visit:

127.0.0.1:5000/hostman
127.0.0.1:5000/blog
127.0.0.1:5000/cloud

you'll see responses for these routes. Flask automatically logs each request in the console.

Handling GET Requests

You can pass various parameters to the server via a GET request.
For instance, consider this URL:

hostman.com/blog?framework=flask&language=python&version=3.10

Here, the parameters are passed as key-value pairs after the question mark (?):

framework=flask
language=python
version=3.10

Let's write a function, get_example(), to extract these parameters in Flask:

from flask import request

@app.route("/get_example")
def get_example():
    framework = request.args.get('framework')
    language = request.args.get('language')
    version = request.args.get('version')
    return f"language = {language}; framework = {framework}; version = {version}"

This code:

Imports the request object.
Uses the .args.get() method to extract query parameters.
Returns the extracted values in the response.

If you go to the URL:

http://127.0.0.1:5000/get_example?language=python&framework=flask&version=3.10

the application will correctly process the GET request and display the extracted parameters.

Handling POST Requests

By default, .route() handles only GET requests. Flask can also handle POST requests. Let's add a simple login form that accepts a username and password:

@app.route('/authorization', methods=['GET', 'POST'])
def authorization():
    if request.method == 'POST':
        login = request.form.get('Login')
        password = request.form.get('Password')
        if login == "admin" and password == "admin":
            return "Correct"
        else:
            return "Incorrect"
    return '''
        <form method="POST">
            <div><label>Login: <input type="text" name="Login"></label></div>
            <div><label>Password: <input type="text" name="Password"></label></div>
            <input type="submit" value="Enter">
        </form>'''

Explanation:

The decorator handles both GET and POST requests.
On a GET request, a form for login and password is displayed.
On a POST request (when the form is submitted), the credentials are checked. If they are correct, the route returns "Correct"; otherwise, "Incorrect."

Visit the URL:

127.0.0.1:5000/authorization

If you enter "admin" as both login and password, you will see "Correct." Otherwise, you'll see "Incorrect."
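Besides form fields, clients frequently send JSON bodies. As a hedged sketch (the /api/echo route and its behavior are invented for this example, not part of the article's app), Flask's request.get_json() parses such bodies:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/api/echo', methods=['POST'])
def echo():
    # silent=True makes get_json() return None instead of raising
    # an error when the body is missing or is not valid JSON
    data = request.get_json(silent=True)
    if data is None:
        return jsonify(error="Expected a JSON body"), 400
    # Echo the parsed payload back along with a field count
    return jsonify(received=data, fields=len(data))
```

Run it with app.run() as in the earlier examples, then POST to 127.0.0.1:5000/api/echo with a Content-Type: application/json header; malformed bodies get a 400 response instead of a server error.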
Full Code Example

from flask import Flask, request

app = Flask(__name__)

@app.route("/hostman")
def hostman():
    return "This is the /hostman page"

@app.route("/blog")
def blog():
    return "This is the /blog page"

@app.route("/cloud")
def cloud():
    return "This is the /cloud page"

@app.route("/get_example")
def get_example():
    framework = request.args.get('framework')
    language = request.args.get('language')
    version = request.args.get('version')
    return f"language = {language}; framework = {framework}; version = {version}"

@app.route('/authorization', methods=['GET', 'POST'])
def authorization():
    if request.method == 'POST':
        login = request.form.get('Login')
        password = request.form.get('Password')
        if login == "admin" and password == "admin":
            return "Correct"
        else:
            return "Incorrect"
    return '''
        <form method="POST">
            <div><label>Login: <input type="text" name="Login"></label></div>
            <div><label>Password: <input type="text" name="Password"></label></div>
            <input type="submit" value="Enter">
        </form>'''

if __name__ == "__main__":
    app.run()

Conclusion

This article explored how to handle GET and POST requests in Flask. However, it's important to remember that security is critical when handling client data. Always validate and sanitize input to prevent malicious attacks. Prevention is the best way to protect your server from unwanted consequences.
15 October 2024 · 6 min to read
Python

Web Scraping with Python and Beautiful Soup

Beautiful Soup is a Python library that allows you to extract data from web documents. The library provides a simple and intuitive interface for parsing and processing web documents, making it useful for both web scraping and data analysis. In this article, we'll describe how to use Beautiful Soup in Python and explore several essential functions of this library, including applying search filters, using built-in functions, and modifying the DOM tree. But first, let's cover some key terminology.

What is the DOM Tree?

The DOM (Document Object Model) is a hierarchical structure that represents the content of an HTML document as specific objects called nodes. The DOM tree consists of nodes, each with its own type. For example, elements might be tags like <div>, <p>, or <a>, while text nodes contain textual information displayed on the page. The DOM tree allows access to the document's content, enabling developers to manipulate elements, attributes, and page content using programming languages. For example, the DOM lets you modify element content, add new elements, or remove existing ones.

Installing Beautiful Soup

To install Beautiful Soup for Python 3, open the command line or terminal on your computer and enter the following command:

pip install beautifulsoup4

On Windows, you can open the command line by clicking the "Start" button, typing cmd in the search bar, and pressing Enter. On macOS and Linux, open the terminal from the applications menu or press Ctrl + Alt + T. After that, type the command pip install beautifulsoup4 and press Enter to begin the installation process.

If you encounter an error like "pip" is not recognized as an internal or external command… on Windows, it may mean that the path to the pip executable is not set in the PATH variable. To fix this:

Open Control Panel.
Choose "System and Security" or "System" depending on your Windows version.
Click on "Advanced system settings."
Select "Environment Variables."
Find the PATH variable and click "Edit."
Add the path to the Scripts folder where the pip executable is located (usually C:\PythonXX\Scripts, where XX is your Python version). Add it at the end of the line, separated by a semicolon. Be careful not to alter any other paths! After making these changes, click "OK" and close all windows. Now, rerun the pip install beautifulsoup4 command, and it should work.

Searching Elements in the DOM

find_all() is the standard method in Beautiful Soup used to search for elements on a web page that match certain criteria. find_all() searches for all elements that meet the given parameters and returns them as a list of objects. For example, if you want to find all H2 tags in an HTML document, first specify the path to the document:

from bs4 import BeautifulSoup

with open("C:/Files/my_wiki_example.html") as fp:
    soup = BeautifulSoup(fp, 'html.parser')

Now, perform the search:

for heading in soup.find_all('h2'):
    print(heading.text)

If there are headings, you will see output like:

Chapter 1
Chapter 2
Chapter 3
Chapter 4

If Beautiful Soup doesn't find anything, it will return an empty list.

You can also search using regular expressions. Let's try to print all headings from the page using:

import re

for heading in soup.find_all(re.compile('^h[1-6]')):
    print(heading.name + ' ' + heading.text.strip())

This will return:

h1 My First Wiki Doc
h2 Chapter 1
h2 Chapter 2
h2 Chapter 3
h2 Chapter 4

As we see, find_all() accepts two arguments: the tag name and a dictionary of attributes. You can use either argument or both together for searching. You can also list tags directly:

for heading in soup.find_all(['h1', 'h2', 'h3']):
    print(heading.name + ' ' + heading.text.strip())

You'll get:

h1 My First Wiki Doc
h2 Chapter 1
h2 Chapter 2
h2 Chapter 3
h2 Chapter 4

If you want to count the total number of elements in the document, you can use:

len(soup.find_all(True))

To solve more complex searches, you can write a custom function.
Here's a simple example:

def big_lists(tag):
    return len(tag.contents) > 5 and tag.name == 'ul'

len(soup.find_all(big_lists))

This code searches for unordered lists (<ul>) with more than five items and shows that one such list is found.

Filtering Elements

To filter elements using find_all(), you can search by specific attributes. Let's search by class names and filter the results:

len(soup.find_all(class_='table'))      # Finds tables
18
len(soup.find_all(class_='image'))      # Finds images
12
len(soup.find_all(class_='references')) # Finds references
5
len(soup.find_all(class_='redirect'))   # Finds redirects
3

If you want to limit the number of results, for instance, to get only the first three tables:

soup.find_all(class_='table', limit=3)

The output might look like:

<span class='table' id='Table1'>Table 1</span>
<span class='table' id='Table2'>Table 2</span>
<span class='table' id='Table3'>Table 3</span>

To filter direct descendants, you can use:

len(soup.html.find_all('meta', recursive=False))

The recursive=False stops global searching and limits it to direct child elements.

Alternatively, you can use find(). For instance, to find an H2 header:

soup.find('h2')

Modifying the DOM

Beautiful Soup also allows modifying the content of web documents. To change the text of an H1 heading:

title = soup.find('h1')
title.string = 'New Title'

For confirmation, you can check:

soup.find('h1')
# Output: <h1>New Title</h1>

You can also add a new attribute to the first paragraph:

paragraph = soup.find('p')
paragraph['data-id'] = '12345'

To delete a paragraph:

paragraph = soup.find('p')
paragraph.extract()

Use prettify() and print() to check the modified document:

print(soup.prettify())

Other useful methods include:

append() to add content to an element,
insert() to insert content inside an element,
clear() to clear an element's content without removing the tag,
decompose() to completely remove an element and its content from the DOM,
replace_with() to replace an element with another.
These methods allow flexible modifications to web documents. With a bit of practice, you’ll be able to easily find elements in HTML documents and modify their content with no trouble.
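Putting the search and modification methods together, here is a minimal, self-contained sketch (the sample HTML is invented for illustration):

```python
from bs4 import BeautifulSoup

html = """
<html><body>
  <h1>Old Title</h1>
  <ul><li>one</li><li>two</li></ul>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")

# Change the heading text in place
soup.find("h1").string = "New Title"

# Build a new <li> tag and append it to the list
new_item = soup.new_tag("li")
new_item.string = "three"
soup.find("ul").append(new_item)

print(soup.find("h1").text)                     # New Title
print([li.text for li in soup.find_all("li")])  # ['one', 'two', 'three']
```

Because the document is parsed into a tree, every change made through the soup object is reflected when you serialize it again with str(soup) or soup.prettify().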
15 October 2024 · 5 min to read
JavaScript

How to Use the Date Object in JavaScript

JavaScript provides a built-in Date object that simplifies working with dates and times. This tutorial will guide developers through the essential aspects of the Date object, including its creation, retrieval of date and time information, formatting, manipulation, and handling of time zones.

Introduction to JavaScript Date Object

The Date object in JavaScript represents datetime values, enabling developers to manage temporal data with ease. It is essential for tasks such as scheduling, time tracking, and logging. The Date object helps:

Represent specific date and time values, such as "2022-07-25T14:30:00.000Z".
Perform operations like retrieving, formatting, and manipulating dates and times.
Simplify date and time calculations across different time zones.

Understanding the basics of the Date object will help in managing time-sensitive data efficiently in web applications.

Creating a Date Object

Creating a Date object in JavaScript is straightforward. Here are the two common ways to achieve it:

Using the current time
Using a timestamp

Creating a Date Object for the Current Time

The simplest way to create a Date object is by instantiating the current date and time:

const currentDate = new Date();
console.log(currentDate);

This creates an object representing the current date and time in the user's local timezone.

Creating a Date Object Using a Timestamp

You can also create a Date object from a specific timestamp (milliseconds since January 1, 1970):

const timestamp = 1643723400000;
const dateObject = new Date(timestamp);
console.log(dateObject);

This is useful for manipulating dates stored in timestamp format. Now that we know how to create a date object, let's see how to retrieve date and time information.

Retrieving JS Date and Time Information

The Date object provides methods for extracting various components of a date, such as the year, month, and hour. Key methods include:

getDate(): Returns the day of the month.
getFullYear(): Returns the four-digit year.
getMonth(): Returns the month (0-11, where 0 represents January).
getHours(): Returns the hour (0-23).
getMinutes(): Returns the minutes (0-59).
getSeconds(): Returns the seconds (0-59).

For example, to retrieve the current date and format it as MM/DD/YYYY HH:MM:ss:

function formatDate(dateObject) {
  const year = dateObject.getFullYear();
  const month = dateObject.getMonth() + 1; // Months are zero-indexed
  const day = dateObject.getDate();
  const hours = dateObject.getHours();
  const minutes = dateObject.getMinutes();
  const seconds = dateObject.getSeconds();
  return `${month}/${day}/${year} ${hours}:${minutes}:${seconds}`;
}

console.log(formatDate(new Date()));

Here is the result when executing the function. There are, however, other methods we can use to format datetime into readable formats.

Formatting Date and Time

JavaScript provides methods to format datetime values into human-readable strings. This allows developers to convert a Date object to a string or vice versa.

Formatting a Date as a String

To output a Date object as a human-readable string, use methods like toDateString() and toTimeString():

const currentDate = new Date();

const dateString = currentDate.toDateString();
console.log(dateString);

const timeString = currentDate.toTimeString();
console.log(timeString);

Here is the output:

Converting Strings to Date Objects

Developers can also convert readable strings into Date objects using the Date constructor:

const dateFromString = new Date("October 10, 2024");
console.log(dateFromString);

However, it's better to use recommended formats.

Recommended Formats

To avoid errors when working with date strings, it is advisable to use reliable formats:

ISO 8601 Format (Recommended): The safest and most reliable format is the ISO 8601 date format: YYYY-MM-DDTHH:mm:ss.sssZ. If only the date part is provided, the time defaults to midnight (00:00:00).
const date = new Date("2024-10-10T14:48:00Z");
console.log(date);

RFC 2822 Format: Another accepted format is the RFC 2822 format commonly used in email headers: Day, DD Mon YYYY HH:mm:ss GMT.

const date = new Date("Wed, 10 Oct 2024 14:48:00 GMT");
console.log(date);

We now know how to format datetime values using the Date object. Let's see how to manipulate date values for simple scheduling and calculations.

Manipulating Date Values

Date manipulation is essential for tasks like scheduling and calculating deadlines. JavaScript provides setter methods for modifying specific components of a Date object.

Modifying Date Components

Developers can modify specific components of a Date object using setter methods. Note that months are zero-indexed:

let date = new Date();
date.setFullYear(2025);
date.setMonth(5);    // Set month to June
date.setDate(15);    // Set day to 15th
date.setHours(10);   // Set hour to 10 AM
date.setMinutes(30); // Set minutes to 30
date.setSeconds(45); // Set seconds to 45
console.log(date);

Adding or Subtracting Days

Developers can easily add or subtract days using setDate():

let date = new Date();
date.setDate(date.getDate() + 5); // Add 5 days
console.log(date);

Date arithmetic can also be accomplished using timestamps (milliseconds since January 1, 1970):

let now = new Date();
let oneDayInMs = 24 * 60 * 60 * 1000;
let tomorrow = new Date(now.getTime() + oneDayInMs);
console.log(tomorrow);

Comparing Date Objects

Date objects can be compared using their timestamps:

let date1 = new Date('2024-10-10');
let date2 = new Date('2024-12-25');

console.log(date1 > date2); // false (October 10 is earlier than December 25)
console.log(date1 < date2); // true
console.log(date1.getTime() === date2.getTime()); // false

Now that we know how to manipulate date values for calculations, let's see how we can handle dates with time zones.

Working with Time Zones

The Date object is timezone-agnostic, meaning it doesn't have a built-in concept of time zones.
However, JavaScript's Date object handles dates in local time (the system time zone) and UTC. When creating Date objects, it is essential to be aware of time zone conversions, especially when performing operations across different regions.

Local Time vs. UTC

JavaScript can work with local time and UTC. A Date object created without any explicit time zone information represents local time, reflecting the time zone of the environment in which the JavaScript is executed. For example, a Date object created on a machine in Paris will reflect the Central European Time zone. UTC is the time standard not affected by time zones or Daylight Saving Time (DST). Using Coordinated Universal Time (UTC) ensures consistency and avoids ambiguity when working with dates and times across different time zones, simplifying time calculations, logging, and user experience management in applications that serve users in multiple regions.

Creating Date Objects in UTC

To create a Date object in UTC, use the ISO 8601 format:

const utcDate = new Date("2024-10-10T14:30:00Z");
console.log(utcDate);

Converting Local Time to UTC

To retrieve UTC date components, use getUTCDate(), getUTCMonth(), etc.:

const localDate = new Date();
console.log(localDate.getUTCDate(), localDate.getUTCMonth() + 1, localDate.getUTCFullYear());

Converting UTC to Local Time

Similarly, to convert a UTC date to local time, you can use the local equivalent methods:

const utcDate = new Date("2024-10-10T14:30:00Z"); // UTC date
const localDay = utcDate.getDate();
const localMonth = utcDate.getMonth() + 1; // Months are zero-indexed
const localYear = utcDate.getFullYear();
console.log(`Local Date: ${localMonth}/${localDay}/${localYear}`);

Being mindful of time zones when working with dates in JavaScript is essential for ensuring accurate datetime representation, especially in applications that require coordination across different regions. Let's learn more about common Date object methods.
Common Date Object Methods JavaScript provides several static and instance methods that simplify working with dates. Here are some key methods: Date.now(): Returns the current timestamp in milliseconds since January 1, 1970. Date.parse(): Parses a date string and returns the number of milliseconds since the Unix Epoch (January 1, 1970). If the string cannot be parsed, it returns NaN. Date.UTC(): Creates a Date object from UTC values. Date.toString(): Returns a string representation of the Date object in a readable format. valueOf(): Returns the primitive value of the Date object. These methods provide essential functionality for working with dates in JavaScript, enabling developers to efficiently manage and manipulate date values in their applications.  Conclusion The JavaScript Date object is an essential tool for managing datetime in web development. From creating dates to formatting and performing date arithmetic, mastering this object will enable developers to handle time-sensitive data efficiently, regardless of time zone or locale. By using built-in methods and libraries like Moment.js, date-fns, or Day.js, developers can ensure their applications deliver a smooth user experience when working with dates.
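The static methods listed above can be exercised in a few lines (the sample dates are arbitrary):

```javascript
// Date.now(): current timestamp in milliseconds since January 1, 1970
const ts = Date.now();
console.log(typeof ts); // number

// Date.parse(): milliseconds since the Unix Epoch for a parseable string
const parsed = Date.parse("2024-10-10T00:00:00Z");

// Date.UTC(): timestamp built from UTC components (months are zero-indexed)
const fromUtc = Date.UTC(2024, 9, 10); // October 10, 2024, 00:00:00 UTC
console.log(fromUtc === parsed); // true

// An unparseable string yields NaN
console.log(Number.isNaN(Date.parse("not a date"))); // true
```

Because both Date.parse() and Date.UTC() return plain millisecond timestamps, their results can be compared directly or passed to the Date constructor.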
15 October 2024 · 8 min to read
Linux

Writing Scripts in Linux Bash

Bash (Bourne-Again SHell) is a command interpreter in UNIX-like operating systems that allows for task automation at the command line level. Bash scripts are files containing a sequence of commands that can be executed by the Bash interpreter. Bash scripts can be used to automate repetitive tasks. For example, if you need to generate and send a report via email every day, you can write a bash script that performs these actions automatically. This saves a lot of time and reduces the likelihood of errors. In this article, we will cover the basic concepts and tools for writing Bash scripts in Linux. Bash Script Syntax Bash scripts can be written in any text editor and must have executable permissions. Let’s consider some of the most popular editors: Nano is a simple text editor that comes with most Linux distributions. It has an intuitive interface and useful features like syntax highlighting. Vim is one of the most popular text editors for Linux, though it may seem complicated for beginners. Vim offers many features to speed up coding, such as syntax highlighting, autocompletion, and macros. Emacs is another popular text editor for Linux. It also has many features that can simplify the coding process. One of its main features is the ability to run the Bash interpreter inside the editor, allowing you to test scripts without exiting the editor. At the beginning of each script, there must be a line called a shebang, which tells the operating system which interpreter to use to execute the script. The shebang should start with a hash symbol (#) followed by an exclamation mark (!), and then the path to the interpreter. To use the Bash interpreter, the shebang will look like this: #!/bin/bash While writing the script, you can also leave comments that start with a hash symbol and continue until the end of the line. Comments will not be executed by the interpreter and are used to describe the functionality of the script. 
For example: # This is a comment Below, we will write our first script. Suppose we want to create a script in Linux that greets the user and displays the current date and time on the screen. To do this, create a file named greeting.sh in any directory on your computer and add the following code: #!/bin/bash echo "Hello, $USER!" echo "Today is $(date)" The first line indicates that this is a Bash script. The next line, echo "Hello $USER!", outputs a greeting with the current user's name. $USER is a system variable that contains the name of the current user. The third line, echo "Today is $(date)", displays the current date and time. $(date) is used to call the date command, which returns the current date and time in the system's format. When creating a Bash script, it’s important to ensure the file is executable. To do this, you need to change the file permissions. We’ll cover this and how to run the script in the next chapter. Running Scripts To run a script in Linux, it must have executable permissions. To make a file executable, you can use the chmod command (short for "change mode"). This command allows you to change the access permissions of files and directories in Linux. The syntax for the chmod command is as follows: chmod [options] access_rights file where access_rights is a special code that sets the access permissions for a file or directory, and file is the path to the file or directory whose permissions you want to change. To make a file executable, you need to add the execute (x) permission to its access rights. For example, to make the greeting.sh file executable, use the following command: chmod +x greeting.sh This command will add execute permissions for the current user. Now, we can run the Bash script in Linux by invoking it from the terminal: ./greeting.sh The result of running the script is shown below. Command Line Parameters Command line parameters allow you to pass arguments to Linux scripts when they are run. 
Command line parameters can be accessed in the script as $1, $2, $3, etc., where $1 is the first parameter, $2 is the second parameter, and so on. Let's rewrite the script from the previous chapter to greet the user using a command-line argument:

#!/bin/bash
echo "Hello $1!"

Then run the script, passing the $USER argument:

./greeting.sh $USER

The result is shown below.

Additionally, you can use special command line parameters:

$0 — the name of the script (i.e., the name of the file that was run)
$# — the number of passed parameters
$* or $@ — a list of all passed parameters (as a single string or array, respectively)
$? — the return code of the last executed command

For example, to display the number of passed parameters, you can use the following code:

#!/bin/bash
echo "Hello $1!"
echo "Number of passed parameters: $#"

The result of running the script is shown below.

Variables

Variables in Bash are used to store data, such as strings and numbers. They can be explicitly defined by assigning a value or implicitly defined through automatic assignment during certain operations. To create a variable in Bash, you need to assign it a value using an equal sign (=). For example:

company="Hostman"

Note that there should be no spaces between the variable name, the equal sign, and the value. You can retrieve the value of a variable by specifying its name after the echo command and the $ sign. For example:

echo $company

It's also possible to assign a variable value through user input using the read command. For example, the following script prompts the user for their name and stores it in a variable:

#!/bin/bash
echo "What is your name?"
read name
echo "Hello, $name!"

The result of this script is shown below.

In Bash, there are several special variables that are automatically defined and filled by the system. For example, the $HOME variable contains the path to the user's home directory, while $PWD contains the path to the current working directory.
Additionally, there are environment variables defined by the system that can be used in scripts. For example, $PATH contains the list of directories where Bash looks for executable files.

Variables can also be used to pass values between different commands and scripts. To make a variable visible to child processes, such as another script you run, use the export command:

```
export variable_name
```

Conditional Operators

Conditional operators allow you to execute a specific set of actions depending on whether a condition is true or false. In Bash scripts, conditions are written in brackets and passed to the if command. The syntax of the if operator looks like this:

```
if [ condition ]
then
  commands to execute if the condition is true
fi
```

Here, the square brackets contain the condition to check. If the condition is true, the commands between then and fi are executed.

For example, let's write a Linux script, evenodd.sh, that checks whether the number entered by the user is even or odd:

```
#!/bin/bash
echo "Enter a number: "
read n
if (( $n % 2 == 0 ))
then
  echo "The number $n is even"
else
  echo "The number $n is odd"
fi
```

In this example, we use the % operator, which calculates the remainder of dividing by 2. If the remainder is 0, the number is even; otherwise, it's odd. The result of running the script is shown below.

Additionally, there are several comparison operators that can be used in conditional constructions:

- -eq — equal to
- -ne — not equal to
- -gt — greater than
- -lt — less than
- -ge — greater than or equal to
- -le — less than or equal to

For example, to check whether the variable $a is greater than the variable $b, you can write the following:

```
if [ $a -gt $b ]
then
  echo "$a is greater than $b"
fi
```

It is important to put spaces around the comparison operators and inside the brackets in conditional constructions. Without the spaces, Bash treats the whole expression as a single string instead of a comparison operation.
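The comparison operators combine naturally with an else-if branch (elif). A short sketch, using hypothetical values for the variables a and b:

```shell
#!/bin/bash
a=5
b=7

# Compare two integers with -gt / -lt;
# note the mandatory spaces inside the brackets
if [ "$a" -gt "$b" ]; then
    echo "$a is greater than $b"
elif [ "$a" -lt "$b" ]; then
    echo "$a is less than $b"
else
    echo "$a equals $b"
fi
```

With these values the script prints "5 is less than 7"; quoting the variables is a good habit, since an empty or unset variable would otherwise break the comparison.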
In addition to if, Bash scripts also use the case structure. This allows you to check a variable's value against several possible options. We will discuss this in the next chapter.

The Case Construction

The case construction in Bash scripts simplifies conditional logic when a variable must be compared against multiple possible values. The syntax of the case construction is as follows:

```
case variable in
  pattern1) command1 ;;
  pattern2) command2 ;;
  pattern3) command3 ;;
  *) default command ;;
esac
```

where variable is the variable to check; pattern1, pattern2, and pattern3 are the possible values to match; and command1, command2, and command3 are the commands to execute depending on the value of the variable. The * pattern at the end of the list acts as a default handler when none of the other patterns match the variable.

For example, let's look at a script that checks the day of the week and performs the corresponding action:

```
#!/bin/bash
day=$(date +%u)
case $day in
  1) echo "Today is Monday" ;;
  2) echo "Today is Tuesday" ;;
  3) echo "Today is Wednesday" ;;
  4) echo "Today is Thursday" ;;
  5) echo "Today is Friday" ;;
  6) echo "Today is Saturday" ;;
  7) echo "Today is Sunday" ;;
  *) echo "Invalid day of the week" ;;
esac
```

In this example, we define the day variable using the date +%u command, where %u yields the numeric day of the week, from 1 (Monday) to 7 (Sunday). We then compare this variable against the days of the week using the case construction: if its value matches a certain day, we display the corresponding message; if it matches none of the listed days, we display an error message. The result of running the script is shown below.

Loops

Loops in Bash are used to perform repetitive actions. There are two types of loops: for and while. The for loop is used to execute commands for each element in a list.
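One more note on case: its patterns are shell globs, so alternatives (|) and wildcards (*) work as well, not just literal values. A minimal sketch with a hypothetical classify_file function:

```shell
#!/bin/bash
# Classify a file name by its extension using glob patterns in case
classify_file () {
    case "$1" in
        *.sh|*.bash)    echo "shell script" ;;
        *.tar.gz|*.tgz) echo "compressed archive" ;;
        *)              echo "unknown" ;;
    esac
}

classify_file deploy.sh       # prints: shell script
classify_file backup.tar.gz   # prints: compressed archive
classify_file notes.txt       # prints: unknown
```

Expressing the same three-way match with if/elif would require repeated string tests, which is exactly the boilerplate case is designed to avoid.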
The syntax of the for loop is as follows:

```
for variable in list
do
  commands
done
```

Here, the variable takes the value of each element in the list in turn, and for each of them the commands between do and done are executed. Example:

```
#!/bin/bash
for i in {1..10}; do
  echo "Number: $i"
done
```

In this example, i takes values from 1 to 10, and for each of them the echo "Number: $i" command is executed. The result of running this loop will look like this:

The while loop is used to execute commands as long as a condition remains true. The syntax of the while loop is as follows:

```
while [ condition ]
do
  commands
done
```

Here, the condition in square brackets is checked before each iteration of the loop; the commands between do and done are executed as long as it remains true. Example:

```
#!/bin/bash
count=1
while [ $count -le 10 ]; do
  echo "Count: $count"
  count=$((count+1))
done
```

In this example, count increases by 1 after each iteration of the loop. Once count exceeds 10, the condition becomes false and the loop terminates. The result of running this loop will look like this:

Functions

Functions in Bash are used to group commands into logically related blocks. Functions can be called from a script using their name. The syntax of a function is as follows:

```
function_name () {
  commands_and_expressions
}
```

The function name must start with a letter or an underscore and can contain only letters, numbers, and underscores. The parentheses after the function name are left empty: arguments are not declared there, but are accessed inside the function as $1, $2, and so on. The commands and expressions to be executed when the function is called are enclosed in curly braces.

Here's an example of a function that outputs the current time and date:

```
#!/bin/bash
print_date () {
  echo "Today's date: $(date)"
}

print_date # Function call
```

The result of running the script is shown below.

Functions can also accept arguments, which are passed after the function name when calling it.
Here's an example of a function that takes two arguments and outputs their sum:

```
#!/bin/bash
sum_numbers () {
  result=$(( $1 + $2 ))
  echo "The sum of $1 and $2 is $result"
}

sum_numbers 10 20 # Function call
```

In this example, $1 and $2 are variables that contain the values of the first and second arguments, respectively. sum_numbers 10 20 calls the sum_numbers function with the arguments 10 and 20 and outputs the following result:

Functions can also return values using the return keyword. Keep in mind that return is designed for exit statuses, so it can only pass integers from 0 to 255. Let's rewrite the previous example using this new knowledge:

```
#!/bin/bash
sum_numbers () {
  result=$(( $1 + $2 ))
  return $result
}

sum_numbers 12 24 # Function call
echo "The sum of the numbers is $?" # Output
```

Here, the result is stored in the result variable and returned from the function using the return command. The $? variable contains the return code of the function, which in this case is the result of the sum calculation. The result of running the script is shown below.

There is another way to get a result out of a function without using return. Let's slightly modify the previous script:

```
#!/bin/bash
sum_numbers () {
  result=$(( $1 + $2 ))
  echo $result
}

sum=$(sum_numbers 9 11)
echo "The sum of the numbers is $sum" # Output
```

Here, instead of using return and $?, we capture the function's output in the sum variable and then print its value. The result is shown below.

Working with Files and Directories

Bash scripts can be used to perform various operations on files and directories in Linux. For example, to check whether a file exists, you can use the following command:

```
test -e filename
```

If the file exists, the command returns 0; otherwise, it returns a non-zero value. To work with directories in Bash scripts, you can use commands such as cd, mkdir, rmdir, ls, and others.
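The test command supports many more operators than -e; the sketch below shows a few common ones using the equivalent bracket syntax inside if (the names demo_dir and report.txt are hypothetical):

```shell
#!/bin/bash
# Create a directory and a file to test against
mkdir -p demo_dir
touch demo_dir/report.txt

if [ -d demo_dir ]; then          # -d: exists and is a directory
    echo "demo_dir is a directory"
fi
if [ -f demo_dir/report.txt ]; then   # -f: exists and is a regular file
    echo "report.txt is a regular file"
fi
if [ ! -e demo_dir/missing.txt ]; then   # !: negates the test
    echo "missing.txt does not exist"
fi
```

Because [ condition ] and test condition are the same check, these operators drop straight into the if constructions covered earlier.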
Script Debugging

Debugging Bash scripts can be a challenging task, because problems may stem from various factors: syntax errors, incorrect use of variables or functions, and so on. For debugging Bash scripts, you can use the set -x, set -v, and set -e options:

- set -x prints each command before it is executed
- set -v prints each line of the script as it is read
- set -e stops execution of the script as soon as a command fails

Conclusion

Bash scripts are a powerful tool for automating tasks in UNIX-like operating systems. In this article, we covered the basic concepts and tools for writing Bash scripts: syntax, variables, conditional operators, loops, functions, and running scripts. We hope this guide helps you become a more productive and experienced Linux user.
14 October 2024 · 12 min to read
