Downloading content from remote servers is a regular task for both administrators and developers. Although there are numerous tools for this job, cURL stands out for its adaptability and simplicity. It’s a command-line utility that supports protocols such as HTTP, HTTPS, FTP, and SFTP, making it crucial for automation, scripting, and efficient file transfers.
You can run cURL directly on your computer to fetch files. You can also include it in scripts to streamline data handling, thereby minimizing manual effort and mistakes. This guide demonstrates various ways to download files with cURL. By following these examples, you’ll learn how to deal with redirects, rename files, and monitor download progress. By the end, you should be able to use cURL confidently for tasks on servers or in cloud setups.
The curl command works with multiple protocols, but it’s primarily used with HTTP and HTTPS to connect to web servers. It can also interact with FTP or SFTP servers when needed.
By default, cURL retrieves a resource from a specified URL and displays it on your terminal (standard output). This is often useful for previewing file contents without saving them, particularly if you’re checking a small text file.
Example: to view the content of a text file hosted at https://example.com/file.txt, run:
curl https://example.com/file.txt
For short text documents, this approach is fine. However, large or binary files can flood the screen with unreadable data, so you’ll usually want to save them instead.
Often, the main goal is to store the downloaded file on your local machine rather than view it in the terminal. cURL simplifies this with the -O (capital O) option, which preserves the file’s original remote name.
curl -O https://example.com/file.txt
This retrieves file.txt and saves it in the current directory under the same name. This approach is quick and keeps the existing filename, which is helpful when the name itself is meaningful.
Sometimes, renaming the downloaded file is important to avoid collisions or to create a clear naming scheme. In this case, use the -o (lowercase o) option:
curl -o myfile.txt https://example.com/file.txt
Here, cURL downloads the remote file file.txt but stores it locally as myfile.txt. This keeps files organized and prevents accidental overwriting, which is particularly valuable in scripts that need descriptive file names.
When requesting a file, servers might instruct your client to go to a different URL. Understanding and handling redirects is critical for successful downloads.
Redirects are commonly used for reorganized websites, relocated files, or mirror links. Without redirect support, cURL stops after receiving an initial “moved” response, and you won’t get the file.
To tell cURL to follow a redirect chain until it reaches the final target, use -L (or --location):
curl -L -O https://example.com/redirected-file.jpg
This allows cURL to fetch the correct file even if its original URL points elsewhere. If you omit -L, cURL simply prints the initial redirect response and stops, so you never receive the file.
cURL can also handle multiple file downloads at once, saving you from running the command repeatedly.
If filenames share a pattern, curly braces {} let you specify each name succinctly:
curl -O https://example.com/files/{file1.jpg,file2.jpg,file3.jpg}
cURL grabs each file in sequence, making it handy for scripted workflows.
For a series of numbered or alphabetically labeled files, specify a range in brackets:
curl -O https://example.com/files/file[1-5].jpg
cURL automatically iterates from file1.jpg through file5.jpg. This is great for consistently named sequences of files.
If you have different URLs for each file, you can chain them together:
curl -O https://example1.com/file1.jpg -O https://example2.com/file2.jpg
This approach downloads file1.jpg from the first site and file2.jpg from the second without needing separate invocations.
In certain situations, you may want to control the speed of downloads or prevent cURL from waiting too long for an unresponsive server.
To keep your network from being overwhelmed, or to simulate slow conditions, limit the download rate with --limit-rate:
curl --limit-rate 2M -O https://example.com/bigfile.zip
Here, 2M stands for 2 megabytes per second. You can also use K for kilobytes or G for gigabytes.
If a server is too slow, you may want cURL to give up after a set time. The --max-time flag does exactly that:
curl --max-time 60 -O https://example.com/file.iso
Here, cURL quits after 60 seconds, which is beneficial for scripts that need prompt failures.
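In a script, you can act on the timeout explicitly, since cURL exits with status 28 when --max-time is exceeded. The following is a minimal sketch assuming a POSIX shell; the wrapper name and URL handling are illustrative, not part of cURL itself.

```shell
#!/bin/sh
# Illustrative wrapper: download with a deadline and report timeouts
# separately. curl exits with status 28 when --max-time is exceeded.
fetch_with_deadline() {
  url=$1
  curl --max-time 60 -O "$url"
  rc=$?
  if [ "$rc" -eq 28 ]; then
    echo "timed out fetching $url" >&2
  fi
  return "$rc"
}
```

A caller can then distinguish a slow server (exit 28) from, say, a DNS failure, and retry or alert accordingly.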
cURL can adjust its output to show minimal information or extensive details.
For batch tasks or cron jobs where you don’t need progress bars, include -s (or --silent):
curl -s -O https://example.com/file.jpg
This hides progress and errors, which is useful for cleaner logs. However, troubleshooting is harder if there’s a silent failure.
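A common pattern for scripts, sketched below rather than taken from the article, is to combine -s with -S (--show-error), which re-enables error messages, and -f (--fail), which makes cURL return a nonzero exit code on HTTP errors such as 404 instead of saving the error page:

```shell
# -s hides the progress meter, -S keeps error messages visible, and -f
# turns HTTP errors into a nonzero exit status. The URL is the
# article's placeholder, so the fallback branch fires if it is absent.
curl -sS -f -O https://example.com/file.jpg || echo "download failed" >&2
```

This keeps logs quiet on success while still surfacing failures.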
In contrast, -v (or --verbose) prints detailed request and response information:
curl -v https://example.com
Verbose output is invaluable when debugging issues like invalid SSL certificates or incorrect redirects.
Some downloads require credentials, or you might need a secure connection.
When a server requires a username and password, use -u:
curl -u username:password -O https://example.com/protected/file.jpg
Directly embedding credentials can be risky, as they might appear in logs or process lists. Consider environment variables or a .netrc file for more secure handling.
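As a sketch, a .netrc entry for the protected host above might look like this (myuser and mysecret are placeholders; keep the file private with chmod 600 ~/.netrc):

```
machine example.com
login myuser
password mysecret
```

With that file in place, curl -n -O https://example.com/protected/file.jpg makes cURL look up the credentials itself via -n (--netrc), so they never appear on the command line.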
By default, cURL verifies SSL certificates. If a certificate is invalid, cURL aborts the transfer. You can bypass this check with -k or --insecure, though doing so introduces security risks. Whenever possible, use a certificate from a trusted certificate authority so that connections remain authenticated.
In some environments, traffic must route through a proxy server before reaching the target.
Use the -x or --proxy option to specify the proxy:
curl -x http://proxy_host:proxy_port -O https://example.com/file.jpg
Replace proxy_host and proxy_port with the relevant details. cURL sends the request to the proxy, which then retrieves the file on your behalf.
If your proxy requires credentials, pass them with -U (or --proxy-user):
curl -x https://proxy.example.com:8080 -U myuser:mypassword -O https://example.com/file.jpg
Again, storing sensitive data in plain text can be dangerous, so environment variables or configuration files offer more secure solutions.
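The environment-variable approach can be sketched as follows; PROXY_USER and PROXY_PASS are illustrative names, and in practice they would be set outside the script (for example, in the job’s environment) rather than defaulted inline:

```shell
#!/bin/sh
# Illustrative: fall back to placeholder values only if the variables
# are not already set in the environment.
: "${PROXY_USER:=myuser}"
: "${PROXY_PASS:=mypassword}"
# -U passes the proxy credentials; the proxy host is a placeholder, so
# the || branch keeps the sketch from aborting when it is unreachable.
curl -x https://proxy.example.com:8080 -U "$PROXY_USER:$PROXY_PASS" \
  -O https://example.com/file.jpg || echo "proxy download failed" >&2
```

This keeps the secret out of the script body, though it can still be visible in the process list while cURL runs.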
Tracking download progress is crucial for large files or slower links.
By default, cURL shows a progress meter, including total size, transfer speed, and estimated finish time. For example:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1256  100  1256    0     0   2243      0 --:--:-- --:--:-- --:--:--  2246
This readout helps you gauge how much remains and if the transfer rate is acceptable.
If you want fewer details, add -# (or --progress-bar):
curl -# -O https://example.com/largefile.iso
A simpler bar shows the overall progress as a percentage. It’s easier on the eyes but lacks deeper stats like current speed.
When using cURL within scripts, you might want to record progress data. cURL writes progress information to stderr, so you can redirect it:
curl -# -O https://example.com/largefile.iso 2>progress.log
Here, progress.log contains the status updates, which you can parse or store for later review.
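One detail worth knowing when parsing such a log: the -# bar redraws itself in place using carriage returns rather than newlines, so the raw file looks like a single line. The sketch below simulates that with printf standing in for real cURL output:

```shell
# Simulate a few progress-bar updates; curl's -# output overwrites
# itself with carriage returns (\r), so a raw log looks like one line.
printf '  5%%\r 50%%\r100%%\r' > progress.log
# Convert \r to \n to see each update, then take the final state:
tr '\r' '\n' < progress.log | tail -n 1   # prints 100%
```

The same tr pipeline works on a real progress.log captured with 2>progress.log.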
cURL shines as a flexible command-line tool for downloading files in multiple protocols and environments. Whether you need to handle complex redirects, rename files on the fly, or throttle bandwidth, cURL has you covered. By mastering its core flags and modes, you’ll be able to integrate cURL seamlessly into your daily workflow for scripting, automation, and more efficient file transfers.