HTTPS For Mortals

HTTPS, specifically the underlying SSL/TLS, is important to protect the privacy and security of internet users, yet can be opaque when it fails.

SSL problems in your web server, CDN assets, or backend APIs can result in interesting breakage: service connection errors, CDN assets not loading, and general funky behavior for old clients.

In this post, I want to give a bit of background about SSL and how to debug errors with SSL in HTTPS connections. Those versed in SSL can skip to the Examples section for copypasta commands.

Note for the pedantic: I'm referring to what is generally known as TLS in modern days when I say SSL.

How SSL Works

SSL, and its successor TLS, has been around since the 90s. It's used both to provide privacy (that communications between you and a server can't be intercepted) and integrity (that communications between you and a server haven't been tampered with).

Modern SSL does this via Public Key Cryptography. I won't go into gory details about how the math behind it works (mostly because I fully admit I don't completely understand it).

The concept is that a server presents a Public Key (aka an X.509 Certificate) that a client can use to encrypt data sent to it.

To decrypt the data, the server uses a Private Key and a previously agreed upon algorithm (a Cipher Suite). It's critically important that the Private Key be protected, as anybody with the Private Key can impersonate the server.

You configure your web server (nginx, Apache, IIS, Tomcat, your Golang web app, whatever) to use a public and private key to serve content over HTTPS.

How Does Math Keep Baddies Away?

Cryptographically strong math does not, in and of itself, ensure you're talking to who you think you are. I can certainly spin up a web server and, given network access, trick your computer into connecting to my server when you try to connect to the site you actually meant to reach.

The X.509 Certificate (cert) has many fields, one of which is the Subject field. This basically says “This certificate is to be used to encrypt communications to this particular domain name”.

If I present a cert whose Subject says it's for some domain other than the one you're trying to connect to, browsers will show a scary error message (and possibly prevent any further communication to that endpoint; more on that later).

A Subject field in a certificate can also contain a wildcard for a single level of a domain, allowing matching for multiple hosts. This is generally referred to as a Wildcard Cert.

Back to my ill-intentioned example, what if I generated a cert for the very domain I'm impersonating and had my nefarious web server use that?


SSL certificates have the concept of a signature. Certs can be either self-signed or signed by another Certificate Authority (CA).

Any schmuck can generate a self-signed SSL certificate for any given name.
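Here's a sketch of doing exactly that with openssl — the domain name and /tmp paths are placeholders for illustration:

```shell
# Generate a throwaway self-signed certificate for a hypothetical domain.
# -nodes leaves the private key unencrypted -- fine for a demo, not for production.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 30 -subj '/CN=example.com'

# The telltale sign of a self-signed cert: subject and issuer are identical.
openssl x509 -in /tmp/demo.crt -noout -subject -issuer
```

No authority vouched for this cert, so no client will trust it out of the box.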

A certificate is much more useful when it's signed by another authority.

Signing a certificate is done to broker trust. By signing a certificate, an authority says something to the effect of “I assert that this certificate belongs to the organization responsible for this domain name.”

There are various CAs responsible for signing SSL certificates, each with their own validation policies and price tags. For example, some CAs require you to submit a legal document showing ownership of a domain. AWS's ACM has various validation methods available, such as creating DNS records or performing manual validation. In the case of Let's Encrypt, you prove you own a domain by spinning up an endpoint or making a DNS record using ACME.

These strict policies lead to operating system, library, or browser vendors distributing a list of trusted Certificate Authorities. These are stored in your “Trust Store” for your client application. By maintaining a list of trusted organizations and their CA fingerprints, software vendors say “Trust that certificates signed by these organizations are legit.”

On my Fedora Linux install, the trusted CA bundle lives at /etc/pki/tls/certs/ca-bundle.crt. Java has its own location (generally the JVM-provided cacerts file, or a path specified via a system property), and Golang uses a list of locations, based on OS.

Windows validates SSL certificates via its system certificate store, and MacOS uses Keychain along with various Apple libraries for trust.

Organizations can create their own Certificate Authorities and distribute them to their clients. Chances are your work's InfoSec department has its own CA that is used to sign internal services’ certificates.

For security purposes, certificate authorities don't usually sign end-entity certs with these trusted root certs directly. They sign an intermediate (signing) CA's certificate, and use THAT to sign certificates. This forms a chain of trust. A consequence of this is that you must transmit the (signed) certificate of the authority that signed your certificate along with your certificate. This forms what's called a Certificate Chain.
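The trust-brokering step can be sketched locally with openssl — the CA name, domain, and /tmp paths below are all made up for illustration:

```shell
# Sketch: stand up a toy CA and use it to sign a leaf certificate.

# 1. The CA: a self-signed certificate that clients would choose to trust.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key -out /tmp/ca.crt \
  -days 30 -subj '/CN=Demo Root CA'

# 2. The server: a key plus a Certificate Signing Request (CSR) for its domain.
openssl req -newkey rsa:2048 -nodes -keyout /tmp/leaf.key -out /tmp/leaf.csr \
  -subj '/CN=example.com'

# 3. The CA signs the CSR, brokering trust to the leaf.
openssl x509 -req -in /tmp/leaf.csr -CA /tmp/ca.crt -CAkey /tmp/ca.key \
  -CAcreateserial -out /tmp/leaf.crt -days 30

# 4. Anyone who trusts the CA now trusts the leaf.
openssl verify -CAfile /tmp/ca.crt /tmp/leaf.crt
```

The final command succeeds only because we explicitly told openssl to trust our toy CA; a real client would consult its Trust Store instead.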

The chain must terminate at a certificate that your OS or library trusts. That could, in theory, mean a cert is signed by itself; this is valid when that cert is itself a trusted CA (a root).

Going back to my sketchy example, I wouldn't (we hope) be able to get my certificate signed by a trusted CA. Your browser would note that this certificate isn't signed by anything legitimately trusted, and would still throw an error.

On Time

Another outage-causing… erm, important field of SSL certificates is their validity period.

This ensures that certificates aren't permanent, and are valid only for a certain amount of time, kind of like your passport.

If a certificate is not valid at the current time according to your client, SSL libraries will refuse to connect.

Though this merits another post, it is important to monitor when your service's SSL cert expires. This is an easily avoidable type of outage that will wreak havoc among both human users and API users of your service. Getting paged because an SSL cert expired is anathema to me.
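A sketch of the kind of check you could wire into cron or a monitoring system, using openssl's -checkend flag (the cert generated here is a self-contained stand-in for your real one):

```shell
# Generate a short-lived demo cert so the example is self-contained.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/mon.key -out /tmp/mon.crt \
  -days 7 -subj '/CN=example.com' 2>/dev/null

# -checkend N exits 0 if the cert will still be valid N seconds from now.
openssl x509 -in /tmp/mon.crt -noout -checkend 86400 \
  && echo "ok for at least 24 hours"
openssl x509 -in /tmp/mon.crt -noout -checkend 2592000 \
  || echo "expires within 30 days -- renew now"
```

Because the exit code carries the result, this composes cleanly with alerting scripts.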


A consequence of all of this is that certificate subjects have to be very precise. A certificate for one domain is intuitively not valid for a completely different domain. Less intuitively, a certificate for a bare domain is also NOT valid for its subdomains, and vice-versa.

The Subject Alternative Name (SAN) field of certs provides the ability for multiple domains to share a given certificate.

This field is a list, whose maximum length varies by client library implementation (there's no universally agreed-upon limit). It has two important properties:

  1. It should contain the Subject of the cert (technically you populate this field with DNS names, and the RFC says that the Subject must be included in this list.)
  2. It can contain wildcards valid for a specific subdomain, just like a Subject field. Some large sites use a certificate issued for their main domain that's also valid for a wildcard of subdomains via SANs. This presumably reduces the operational burden of having to maintain separate certificates for each subdomain.
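Both properties can be demonstrated locally. Here's a sketch of issuing a cert whose SAN list carries an exact name plus a wildcard (requires OpenSSL 1.1.1+ for -addext; the names are placeholders):

```shell
# Issue a self-signed cert with SANs: the bare domain plus a wildcard.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/san.key -out /tmp/san.crt \
  -days 30 -subj '/CN=example.com' \
  -addext 'subjectAltName=DNS:example.com,DNS:*.example.com'

# Read the SAN list back out of the cert.
openssl x509 -in /tmp/san.crt -noout -ext subjectAltName
```

Note that the Subject's name appears in the SAN list too, per property 1 above.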

Let's look at another, more interesting domain:

% echo '' | openssl s_client -connect 2>&1 | openssl x509 -noout -text | egrep '(Subject:|DNS:|Subject Alternative Name)'                                                                       
        Subject: C = US, ST = California, L = Sunnyvale, O = Oath Inc, CN = *
            X509v3 Subject Alternative Name: 
                DNS:*, DNS:*, DNS:*,,,,,,,,,, DNS:*

We send a \n to openssl s_client, which connects to a socket over SSL and prints the certificate and chain the server provides, closing the connection afterwards. Piping that into openssl x509, we then grep the output for lines containing Subject:, DNS:, or Subject Alternative Name.

Notice that the server presents a wildcard cert for its subject, along with SANs for other domains it fronts (through seemingly branded marketing agreements).

This means that you can attempt to load any of those domains, get pointed to this server, and the connection will be valid and secure.

Socket To Me

The implementation-minded of you might think “OK, I can write a server that accepts HTTPS connections with a keypair and have it listen on a socket. But… it would only work for one domain name and that's not super useful.”

You'd be correct. This was a problem for quite a while with web servers. Virtual IPs, NAT/PAT trickery, or other methods had to be used to allow clients to connect to multiple domains hosted on one server.

Enter Server Name Indication (SNI). This mechanism allows clients to specify which hostname they're trying to connect to before the SSL handshake starts, so a single socket can know which cert to present to the client.

It's important to specify a server name when trying to connect to a server that may front multiple domains or a load balancer. Modern browsers generally do this automatically. The openssl s_client command allows specifying the server name via SNI with the -servername flag:

 -servername val            Set TLS extension servername (SNI) in ClientHello (default)

Beware that some old browsers (specifically any flavor of IE on Windows XP) do not support SNI. My deepest condolences if you have the pleasure of working with this constraint.

HTTPS By Default (HSTS)

To ensure connections always go over HTTPS, you can set the Strict-Transport-Security header on your HTTPS responses, colloquially known as HSTS. This will cause connecting clients to try to connect over HTTPS from then on. You specify a policy that contains a time period to always force HTTPS connections, along with whether or not to include subdomains. The header looks like Strict-Transport-Security: max-age=31536000; includeSubDomains

A word of caution: If you set HSTS and are no longer able to serve valid HTTPS (for example, a cert expires), ALL requests to your domain will fail. This is painful to revoke once in place (requiring poking around in various internal browser settings).

Another consequence of turning HSTS on for all subdomains is that all subdomains now must be served over HTTPS or requests will not proceed. Hope every one of your subdomains has HTTPS enabled…

Organizations can also have their domains added to the HSTS preload list maintained by Chromium, such that browsers will only connect over HTTPS by default, without ever connecting to the service before.


Examples

Enough talk. Let's see some practical examples of how to debug an SSL connection to a server.


My reflex for probing connections over SSL is openssl s_client.

As shown above in the SAN example, openssl s_client will establish an SSL handshake and print the certificate to stdout, along with some information about the cert verification to stderr. You can then pipe that into openssl x509, which has a bunch of flags for printing information about a certificate (see openssl x509 -help). For now, let's just print the metadata of the cert in human-readable form:

% openssl s_client -connect 2>&1 | openssl x509 -noout -text 
        Version: 3 (0x2)
        Serial Number:
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C = US, O = Google Trust Services, CN = GTS CA 1O1
            Not Before: Dec  3 14:47:31 2019 GMT
            Not After : Feb 25 14:47:31 2020 GMT
        Subject: C = US, ST = California, L = Mountain View, O = Google LLC, CN =
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
            X509v3 Subject Key Identifier: 
            X509v3 Authority Key Identifier: 

            Authority Information Access: 
                OCSP - URI:
                CA Issuers - URI:

            X509v3 Subject Alternative Name: 
      , DNS:*

openssl s_client will hang until it gets some sort of input (HTTP requests can be passed here as well, kind of like telnet'ing to a port), so generally I pass in a newline to it.


You can use curl to connect to an https endpoint too! It's a quick way of verifying that a cert is working properly. The -v flag prints out headers and TLS information.

% curl -v
*   Trying
* Connected to ( port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: C=US; ST=California; L=Mountain View; O=Google LLC;
*  start date: Dec  3 14:47:31 2019 GMT
*  expire date: Feb 25 14:47:31 2020 GMT
*  subjectAltName: host "" matched cert's ""
*  issuer: C=US; O=Google Trust Services; CN=GTS CA 1O1
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55e7b15fe180)
> GET / HTTP/2
> Host:
> User-Agent: curl/7.66.0
> Accept: */*
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Connection state changed (MAX_CONCURRENT_STREAMS == 30)!
< HTTP/2 301 
< location:
< content-type: text/html; charset=UTF-8
< x-content-type-options: nosniff
< date: Sat, 28 Dec 2019 03:56:25 GMT
< expires: Sun, 29 Dec 2019 03:56:25 GMT
< server: sffe
< content-length: 226
< x-xss-protection: 0
< cache-control: public, max-age=86400
< age: 61849
< alt-svc: clear
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">


So far we've seen what a request to a properly-configured HTTPS service looks like. Let's take a look at some common errors.

Expired Certificate

When you connect to an https website with a cert that's expired, Firefox will say something to the effect of

Firefox detected an issue and did not continue to the site. The website is either misconfigured or your computer clock is set to the wrong time.

It’s likely the website’s certificate is expired, which prevents Firefox from connecting securely. If you visit this site, attackers could try to steal information like your passwords, emails, or credit card details.

curl will say curl: (60) SSL certificate problem: certificate has expired

This is a sign that your certificate has expired. To verify, let's use s_client to inspect:

%  echo '' | openssl s_client -connect 2>&1 | openssl x509 -noout -text | grep Not 
            Not Before: Jul 14 01:13:21 2019 GMT
            Not After : Oct 12 01:13:21 2019 GMT
% date
Sat 28 Dec 2019 01:33:11 PM PST

…oops. I should get letsencrypt automatic renewal working.

You can also just print the expiration dates with the -dates flag of openssl x509:

% echo '' | openssl s_client -connect -servername 2>&1 | openssl x509 -noout -dates
notBefore=Jul 14 01:13:21 2019 GMT
notAfter=Oct 12 01:13:21 2019 GMT

openssl x509 -enddate will merely output the expiration of the cert, too.

You can see that the cert expired on October 12th (GMT), and thus is no longer valid.

Let's fix that.

Missing Chain

If you've configured SSL but aren't presenting a full cert chain, curl will complain thusly:

% curl
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here:

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

Let's see what openssl says:

% openssl s_client -connect -servername
depth=0 CN =
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 CN =
verify error:num=21:unable to verify the first certificate
verify return:1
Certificate chain
 0 s:/
   i:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
Server certificate

You'll note that the Certificate Chain only shows a cert for the site, signed by the Let's Encrypt authority. It's missing the rest of the chain: the signed Let's Encrypt intermediate certificate itself. That intermediate is necessary because the OS will compare the certificate authority presented in the chain with its trust store, ensuring THAT is signed by a valid, trusted CA.
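You can reproduce this failure mode entirely locally. The sketch below builds a root → intermediate → leaf chain (all names and /tmp paths made up), then shows verification failing without the intermediate and succeeding with it:

```shell
# Intermediates must be marked as CAs; openssl verify checks this.
printf 'basicConstraints=CA:TRUE\n' > /tmp/ca_ext.cnf

# Root CA, intermediate CA, and a leaf signed by the intermediate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/root.key -out /tmp/root.crt \
  -days 30 -subj '/CN=Demo Root'
openssl req -newkey rsa:2048 -nodes -keyout /tmp/int.key -out /tmp/int.csr \
  -subj '/CN=Demo Intermediate'
openssl x509 -req -in /tmp/int.csr -CA /tmp/root.crt -CAkey /tmp/root.key \
  -CAcreateserial -extfile /tmp/ca_ext.cnf -out /tmp/int.crt -days 30
openssl req -newkey rsa:2048 -nodes -keyout /tmp/leaf2.key -out /tmp/leaf2.csr \
  -subj '/CN=example.com'
openssl x509 -req -in /tmp/leaf2.csr -CA /tmp/int.crt -CAkey /tmp/int.key \
  -CAcreateserial -out /tmp/leaf2.crt -days 30

# Leaf alone: fails with "unable to get local issuer certificate".
openssl verify -CAfile /tmp/root.crt /tmp/leaf2.crt || true
# Leaf plus intermediate: the chain is complete and verification succeeds.
openssl verify -CAfile /tmp/root.crt -untrusted /tmp/int.crt /tmp/leaf2.crt
```

The usual server-side fix is the same idea: serve the intermediate(s) alongside your leaf certificate so clients receive the whole chain.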

Let's see what a correct cert chain looks like:

% openssl s_client -connect -servername
depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
verify return:1
depth=0 CN =
verify return:1
Certificate chain
 0 s:/
   i:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
 1 s:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
   i:/O=Digital Signature Trust Co./CN=DST Root CA X3
Server certificate

The DST Root CA is trusted by my OS's trust store, so the chain is thus complete.

Connection Refused

If you see something to the effect of “Connection Refused” or “Timeout”, make sure you're trying to connect to the correct port. AKA, make sure your server is actually listening on port 443 :).

Wrong Subject

Let's force curl to try to connect to a server with the wrong name:

% curl --resolve ""
curl: (51) Unable to communicate securely with peer: requested domain name does not match the server's certificate.

Firefox will say something like

Warning: Potential Security Risk Ahead

Firefox detected a potential security threat and did not continue to this site. If you visit this site, attackers could try to steal information like your passwords, emails, or credit card details.

What can you do about it?

The issue is most likely with the website, and there is nothing you can do to resolve it. You can notify the website’s administrator about the problem.

This implies that the subject name of the cert does not match the domain you requested, or that SNI isn't working as you expect it to.
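You can also exercise hostname matching directly with openssl verify's -verify_hostname flag, using a throwaway self-signed cert (requires OpenSSL 1.1.1+ for -addext; the names are placeholders):

```shell
# A self-signed cert with a SAN for one specific hostname.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/host.key -out /tmp/host.crt \
  -days 30 -subj '/CN=example.com' -addext 'subjectAltName=DNS:example.com'

# A matching name verifies...
openssl verify -CAfile /tmp/host.crt -verify_hostname example.com /tmp/host.crt
# ...while a different name fails with a hostname mismatch.
openssl verify -CAfile /tmp/host.crt -verify_hostname other.example.org /tmp/host.crt || true
```

This is a handy way to test subject/SAN matching without involving a browser.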


I hope this shed a bit of light onto some of the common SSL errors you'll see. openssl commands are somewhat unergonomic, but can be committed to “muscle” memory pretty easily.

This is the first of (hopefully) many guides I hope to write. Thanks for reading, and let me know on Twitter @chris_cmolik if you have any comments, insights, or inaccuracies :)