Curl cookbook


Last updated: 6 April 2020
Get coronavirus/Covid-19 statistics for your country, real-time or historical
Force curl not to show the progress bar
Download a web page via GET request setting Chrome version 74 as the User-Agent
Download a web page via GET request setting Googlebot version 2.1 as the User-Agent
Download a page via https ignoring certificate errors
Download a page using SOCKS5 proxy listening on 127.0.0.1 port 10443
Download a page using SOCKS5 proxy listening on 127.0.0.1 port 10443 and using remote host for hostname resolving
Download a page and report time spent in every step starting with resolving
Make sure Curl follows redirections (Location:) automatically, also using the correct Referer on each redirection
Send GET request with digest authentication
Download a remote file only if it's newer than the local copy
Enable support for compressed encoding in response, as a real browser would do
Verify CORS functionality of a website
Convert a Curl command into a ready-to-compile C source file
Display just the HTTP response code
Download file with SCP protocol
Get external IP address of the machine where curl is installed
Send e-mail via SMTP
Make curl resolve a hostname to the custom IP address you specify without modifying hosts file or using DNS server hacks
Show how many redirects were followed fetching the URL

Get coronavirus/Covid-19 statistics for your country, real-time or historical

Add your country code after the slash, e.g. for Israel "il".

$ curl -L -s covid19.trackercli.com/il
╔══════════════════════════════════════════════════════════════════════╗
║ COVID-19 Tracker CLI v3.1.0 - Israel Update                          ║
╟──────────────────────────────────────────────────────────────────────╢
║ As of 4/6/2020, 6:55:12 AM [Date:4/6/2020]                           ║
╟─────────────╤──────────────╤───────────╤─────────────╤───────────────╢
║ Cases       │ Deaths       │ Recovered │ Active      │ Cases/Million ║
╟─────────────┼──────────────┼───────────┼─────────────┼───────────────╢
║ 8,611       │ 51           │ 585       │ 7,975       │ 995           ║
╟─────────────┼──────────────┼───────────┼─────────────┼───────────────╢
║ Today Cases │ Today Deaths │ Critical  │ Mortality % │ Recovery %    ║
╟─────────────┼──────────────┼───────────┼─────────────┼───────────────╢
║ 181         │ 2            │ 141       │ 0.59        │ 6.79          ║
╟─────────────╧──────────────╧───────────╧─────────────╧───────────────╢
║ Source: https://www.worldometers.info/coronavirus/                   ║
╟──────────────────────────────────────────────────────────────────────╢
║ Code: https://github.com/warengonzaga/covid19-tracker-cli            ║
╚══════════════════════════════════════════════════════════════════════╝

Historical data:

$ curl -L -s covid19.trackercli.com/history/il

Force curl not to show the progress bar

Use the -s option to make curl silent:

curl -o index.html -s https://yurisk.info
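Keep in mind that -s alone also suppresses error messages. Adding -S (--show-error) keeps curl quiet on success but still prints errors on failure:

```shell
# -s hides the progress bar but also hides errors;
# -S (--show-error) restores error messages on stderr while
# keeping successful transfers silent.
curl -sS -o index.html https://yurisk.info
```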

Download a web page via GET request setting Chrome version 74 as the User-Agent

Use -A to set User-Agent.

curl -o Index.html  -A  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36" http://example.com

Resources: https://developers.whatismybrowser.com/useragents/explore/


Download a web page via GET request setting Googlebot version 2.1 as the User-Agent

curl -o Index.html -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"  http://example.com


Download a page via https ignoring certificate errors

Add -k to ignore any SSL certificate warnings/errors.

curl -k -o Index.html https://example.com

Download a page using SOCKS5 proxy listening on 127.0.0.1 port 10443

curl -x socks5://localhost:10443  https://yurisk.info

Download a page using SOCKS5 proxy listening on 127.0.0.1 port 10443 and using remote host for hostname resolving

curl -x socks5h://localhost:10443  https://yurisk.info

The idea here is to tunnel DNS requests to the remote end of the tunnel as well, for example for privacy concerns to prevent DNS leak.

Download a page and report time spent in every step starting with resolving

Source: Stack Overflow

  • Step 1: Put the -w format parameters into a file called, say, curl-params (for convenience, instead of typing them on the command line):
    time_namelookup:  %{time_namelookup}\n
       time_connect:  %{time_connect}\n
    time_appconnect:  %{time_appconnect}\n
   time_pretransfer:  %{time_pretransfer}\n
      time_redirect:  %{time_redirect}\n
 time_starttransfer:  %{time_starttransfer}\n
                    ----------\n
         time_total:  %{time_total}\n
  • Step 2: Run curl, supplying this file curl-params to the -w option:
curl -w "@curl-params" -o /dev/null -s https://example.com
    time_namelookup:  0.062
       time_connect:  0.062
    time_appconnect:  0.239
   time_pretransfer:  0.239
      time_redirect:  0.000
 time_starttransfer:  0.240
                    ----------
         time_total:  0.241
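The same per-step report can be produced without a separate parameters file by passing the format string inline to -w (the labels dns=, connect=, etc. below are my own shorthand; the %{...} write-out variables are the same ones used above):

```shell
# One-liner variant: the -w format string is given directly on the CLI
# instead of via @curl-params.
curl -w 'dns=%{time_namelookup} connect=%{time_connect} tls=%{time_appconnect} ttfb=%{time_starttransfer} total=%{time_total}\n' \
     -o /dev/null -s https://example.com
```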

Make sure Curl follows redirections (Location:) automatically, using the correct Referer on each redirection

curl -L -e ';auto' -o index.html https://example.com

NOTE: All the downloaded pages will be appended to the same output file, here index.html.

Send GET request with digest authentication

curl --digest http://user:pass@example.com/login
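The credentials can also be passed with the --user option instead of being embedded in the URL; the digest handshake is the same:

```shell
# Same request; user:pass moved from the URL to --user.
curl --digest --user user:pass http://example.com/login
```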

Download a remote file only if it's newer than the local copy

curl -z index.html -o index.html https://example.com/index.html 

NOTE: the file to compare/download (here index.html) is compared by timestamp only; curl does no content hashing or any other comparison.

Enable support for compressed encoding in response, as a real browser would do

curl --compressed -o w3.css https://yurisk.info/theme/css/w3.css

Note: this option causes curl to send an Accept-Encoding: gzip header in the request.

Verify CORS functionality of a website

curl -H "Access-Control-Request-Method: GET" -H "Origin: http://localhost" --head https://yurisk.info/2020/03/05/fortiweb-cookbook-content-routing-based-on-url-in-request-configuration/pic1.png

Output:

Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET

Convert a Curl command into a ready-to-compile C source file

curl -o index.html https://yurisk.info --libcurl index.c

The output file index.c will contain the source code to implement the same command using Curl C library:

/********* Sample code generated by the curl command line tool **********
 * All curl_easy_setopt() options are documented at:
 * https://curl.haxx.se/libcurl/c/curl_easy_setopt.html
 ************************************************************************/
#include <curl/curl.h>

int main(int argc, char *argv[])
{
  CURLcode ret;
  CURL *hnd;

  hnd = curl_easy_init();
  curl_easy_setopt(hnd, CURLOPT_BUFFERSIZE, 102400L);
  curl_easy_setopt(hnd, CURLOPT_URL, "https://yurisk.info");
  curl_easy_setopt(hnd, CURLOPT_USERAGENT, "curl/7.66.0");
  curl_easy_setopt(hnd, CURLOPT_MAXREDIRS, 50L);
  curl_easy_setopt(hnd, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_2TLS);
  curl_easy_setopt(hnd, CURLOPT_SSH_KNOWNHOSTS, "/home/yuri/.ssh/known_hosts");
  curl_easy_setopt(hnd, CURLOPT_TCP_KEEPALIVE, 1L);

  /* Here is a list of options the curl code used that cannot get generated
     as source easily. You may select to either not use them or implement
     them yourself.

  CURLOPT_WRITEDATA set to a objectpointer
  CURLOPT_INTERLEAVEDATA set to a objectpointer
  CURLOPT_WRITEFUNCTION set to a functionpointer
  CURLOPT_READDATA set to a objectpointer
  CURLOPT_READFUNCTION set to a functionpointer
  CURLOPT_SEEKDATA set to a objectpointer
  CURLOPT_SEEKFUNCTION set to a functionpointer
  CURLOPT_ERRORBUFFER set to a objectpointer
  CURLOPT_STDERR set to a objectpointer
  CURLOPT_HEADERFUNCTION set to a functionpointer
  CURLOPT_HEADERDATA set to a objectpointer

  */

  ret = curl_easy_perform(hnd);

  curl_easy_cleanup(hnd);
  hnd = NULL;

  return (int)ret;
}
/**** End of sample code ****/

Display just the HTTP response code

curl -w  '%{http_code}' --silent -o /dev/null https://yurisk.info

Output:

200
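A typical use is a quick availability check in a shell script; the URL below is just an example:

```shell
#!/bin/sh
# Minimal availability check built on -w '%{http_code}':
# fetch only the status code, then branch on it.
url="https://yurisk.info"
code=$(curl -w '%{http_code}' --silent -o /dev/null "$url")
if [ "$code" = "200" ]; then
    echo "OK: $url answered with HTTP 200"
else
    echo "WARN: $url answered with HTTP code $code"
fi
```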

Download file with SCP protocol

 curl scp://99.23.5.18:/root/pdf.pdf -o pdf.pdf -u root

Note: curl checks the ~/.ssh/known_hosts file to verify the authenticity of the remote server. If the remote server is not already in known_hosts, curl will refuse to connect. To fix this, connect to the remote server via SSH first - this adds it to known_hosts. Also, curl has to be compiled with support for the libssh2 library.

Get external IP address of the machine where curl is installed

 curl -s http://whatismyip.akamai.com/
87.123.255.103

Send e-mail via SMTP

First, put the message body and From/To/Subject fields in a file:

# cat message.txt
From: Joe Dow <joedow@example.com>
To: Yuri <yuri@yurisk.info>
Subject: Testing curl SMTP sending

Hi, curl can now send e-mails as well!

Now, send the e-mail using the created file and setting e-mail envelope on the CLI:

curl -v  smtp://aspmx.l.google.com/smtp.example.com  --mail-from Joedow@example.com  --mail-rcpt yuri@yurisk.info  --upload-file message.txt

Here:

  • aspmx.l.google.com - the mail server for the recipient domain (curl does NOT look up the MX record itself).
  • smtp.example.com (optional) - the domain curl will use when greeting the mail server (HELO/EHLO).
  • --mail-from - sender address set in the envelope.
  • --mail-rcpt - recipient address set in the envelope.

NOTE: the mail sending is subject to all the anti-spam checks by the receiving mail server, so I recommend running this with the -v option set to see what is going on in real time.

Make curl resolve a hostname to the custom IP address you specify without modifying hosts file or using DNS server hacks

Useful when testing a local copy of a website.
Problem: You want curl to reach a website "example.com" at IP address 127.0.0.1 without changing local hosts file or setting up fake DNS server.
Solution: Use --resolve to specify IP address for a hostname, so curl uses it without querying real DNS servers.

curl -v  --resolve "example.com:80:127.0.0.1" http://example.com
* Added example.com:80:127.0.0.1 to DNS cache
* Hostname example.com was found in DNS cache
*   Trying 127.0.0.1:80...
* Connected to example.com (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: example.com
> User-Agent: curl/7.67.0
> Accept: */*

Show how many redirects were followed fetching the URL

Use the num_redirects write-out variable for that:

 curl -w '%{num_redirects}' -L  -o /dev/null https://cnn.com -s
2
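Several write-out variables can be combined in one format string, for example reporting the redirect count together with the final status code:

```shell
# num_redirects and http_code in a single -w format string:
curl -w 'code=%{http_code} redirects=%{num_redirects}\n' -L -s -o /dev/null https://cnn.com
```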