End-to-end encryption with Let's Encrypt behind an nginx reverse proxy
The goal of this article is to set up a reverse proxy with nginx while maintaining end-to-end encryption between the client and the backend VMs.
The SSL certificates are provided by Let's Encrypt using the certbot tool.
Additionally, the backend VMs are in an internal network and don't have access to the Internet.
Architecture:
For this example, we will use 3 VMs:
vm-proxy: 93.184.216.34, 192.168.10.1/24
vm-back1: 192.168.10.2/24
vm-back2: 192.168.10.3/24
All the VMs run Debian 11, but this should not have any impact other than the way packages are installed.
The network 192.168.10.0/24 is not routed to the Internet. vm-proxy is connected to the Internet with the IP 93.184.216.34.
In addition to nginx, there is an apt proxy on vm-proxy to allow vm-back1 and vm-back2 to install and update packages.
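The apt proxy itself is out of scope here, but for illustration: assuming vm-proxy runs apt-cacher-ng on its default port 3142 (an assumption; any caching proxy would do), the backends only need a one-line apt configuration:

```
# /etc/apt/apt.conf.d/00proxy on vm-back1 and vm-back2
# (hypothetical file name; 3142 is apt-cacher-ng's default port)
Acquire::http::Proxy "http://192.168.10.1:3142";
```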
vm-back1 serves the site t1.example.com
vm-back2 serves the site t2.example.com
vm-proxy serves the site t3.example.com
In theory, the reverse proxy should not serve a website itself, but life is messy sometimes, so I may as well deal with this case.
The backends serve their websites with nginx. Other web servers are out of the scope of this article, but nginx can be used as a local reverse proxy in front of any of them.
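For example, if a backend application listens on a local port (127.0.0.1:3000 is a hypothetical application port), the local TLS endpoint only needs a proxy_pass in its location /. A sketch:

```nginx
# Sketch: nginx as a local reverse proxy in front of another web server
# (127.0.0.1:3000 is a hypothetical application port)
server {
    listen 127.0.0.1:8443 ssl;
    server_name t1.example.com;
    ssl_certificate /etc/nginx/certs/t1.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/t1.example.com.key;
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}
```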
Reverse Proxy configuration
TLS traffic
First, we want to route the TLS traffic to the right backend. To choose the right backend without having to decrypt the HTTP payload, we use the ssl_preread feature of nginx. This feature reads the content of the ClientHello message, which contains the SNI (Server Name Indication) of the server the client wants to contact. The ClientHello is used to initialize a TLS session, so it is not encrypted.
From the SNI, we can choose the right backend and forward the connection using a stream server in nginx:
stream {
    # Proxy requests arriving on the front-end address
    map $ssl_preread_server_name $name_from_front {
        t1.example.com vm-back1;
        t2.example.com vm-back2;
        default        local;
    }
    upstream vm-back1 {
        server 192.168.10.2:443;
    }
    upstream vm-back2 {
        server 192.168.10.3:443;
    }
    upstream local {
        server 127.0.0.1:8443;
    }
    server {
        listen 93.184.216.34:443;
        proxy_pass $name_from_front;
        ssl_preread on;
    }
    # Proxy requests arriving on the back-end address
    map $ssl_preread_server_name $name_from_back {
        acme-v02.api.letsencrypt.org acme;
        r3.o.lencr.org               r3;
        default                      local-back;
    }
    upstream acme {
        server acme-v02.api.letsencrypt.org:443;
    }
    upstream r3 {
        server r3.o.lencr.org:443;
    }
    upstream local-back {
        server 127.0.0.1:9443;
    }
    server {
        listen 192.168.10.1:443;
        proxy_pass $name_from_back;
        ssl_preread on;
    }
}
To allow vm-proxy to serve its own site, any connection with an unknown SNI is forwarded to another port on vm-proxy (here, 127.0.0.1:8443). The websites served by vm-proxy itself must therefore listen on 127.0.0.1:8443.
HTTP proxy
We still have to handle HTTP connections to the proxy. The first thing to do is to redirect HTTP connections to HTTPS with a 302. The next thing is to add an exception that forwards requests to /.well-known/acme-challenge/ to the backend. This path is used by the backend to prove ownership of the domain to Let's Encrypt.
http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }
    server {
        listen 93.184.216.34:80;
        server_name t1.example.com;
        location /.well-known/acme-challenge {
            proxy_pass http://192.168.10.2:80;
            proxy_set_header Host $host;
        }
        location / {
            return 302 https://$host$request_uri;
        }
    }
    server {
        listen 93.184.216.34:80;
        server_name t2.example.com;
        location /.well-known/acme-challenge {
            proxy_pass http://192.168.10.3:80;
            proxy_set_header Host $host;
        }
        location / {
            return 302 https://$host$request_uri;
        }
    }
    # See next section for the config of t3.example.com
}
Backend configuration
For consistency's sake, we will use very similar configurations on the backends. This is probably not optimal, but it makes it easy to chain proxies if, for some reason, we want to extend this hybrid setup.
We add to nginx.conf of vm-back1:
stream {
    map $ssl_preread_server_name $name_from_front {
        default local;
    }
    upstream local {
        server 127.0.0.1:8443;
    }
    server {
        listen 192.168.10.2:443;
        proxy_pass $name_from_front;
        ssl_preread on;
    }
}
We add the same config to the nginx.conf of vm-back2, replacing the IP accordingly.
Then, for the HTTP servers, we need to map /.well-known/acme-challenge/ to a folder and define a TLS endpoint:
http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }
    server {
        listen 80;
        server_name t1.example.com;
        location /.well-known/acme-challenge/ {
            root /var/www/well-known/acme-challenge/;
        }
        location / {
            return 302 https://$host$request_uri;
        }
    }
    server {
        listen 127.0.0.1:8443 ssl http2;
        ssl_certificate /etc/nginx/certs/t1.example.com.crt;
        ssl_certificate_key /etc/nginx/certs/t1.example.com.key;
        server_name t1.example.com;
        location /.well-known/acme-challenge/ {
            root /var/www/well-known/acme-challenge/;
        }
        location / {
            absolute_redirect off;
            default_type text/html;
            return 200 '<h1>Hello</h1>';
        }
    }
}
About absolute_redirect
Nginx normally emits absolute redirects. Because this server listens on port 8443, it would include :8443 in the Location header (for example, a request for /dir would be redirected to https://t1.example.com:8443/dir/). Since port 8443 only listens on localhost, such a request would fail (and besides, non-standard ports in URLs are ugly). The absolute_redirect off; directive tells nginx to emit relative redirects, such as Location: /dir/, instead.
Dummy Certificate
We need to create the folder for the acme challenge: sudo mkdir -p /var/www/well-known/acme-challenge/.well-known/acme-challenge/.
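The doubled path may look like a mistake, but it follows from how nginx resolves files: the served file is the root directive concatenated with the request URI. A quick sketch (the token name is hypothetical):

```shell
# nginx serves $root$uri, so with the root directive above and an ACME
# request URI, the file certbot writes and the file nginx serves match:
root="/var/www/well-known/acme-challenge"
uri="/.well-known/acme-challenge/some-token"   # hypothetical token
echo "${root}${uri}"
# prints /var/www/well-known/acme-challenge/.well-known/acme-challenge/some-token
```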
We also need to create a dummy certificate so nginx can load the configuration without error before the actual certificate is generated:
sudo mkdir /etc/nginx/certs
sudo openssl genrsa -out /etc/nginx/certs/dummy.key 2048
sudo openssl req -new -key /etc/nginx/certs/dummy.key -out /etc/nginx/certs/dummy.csr -subj "/C=BZ/L=AnyWhere/O=AnyOne/CN=dummy"
sudo openssl x509 -req -days 3650 -in /etc/nginx/certs/dummy.csr -signkey /etc/nginx/certs/dummy.key -out /etc/nginx/certs/dummy.crt
sudo ln -s /etc/nginx/certs/dummy.key /etc/nginx/certs/t1.example.com.key
sudo ln -s /etc/nginx/certs/dummy.crt /etc/nginx/certs/t1.example.com.crt
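If you want to sanity-check this recipe without touching /etc/nginx, the same commands can be run in a scratch directory (a sketch; all paths here are temporary):

```shell
# Re-run the dummy-certificate recipe in a temporary directory and
# inspect the result: the certificate should carry the CN=dummy subject
# and share its RSA modulus with the key.
tmp=$(mktemp -d)
openssl genrsa -out "$tmp/dummy.key" 2048
openssl req -new -key "$tmp/dummy.key" -out "$tmp/dummy.csr" \
    -subj "/C=BZ/L=AnyWhere/O=AnyOne/CN=dummy"
openssl x509 -req -days 3650 -in "$tmp/dummy.csr" \
    -signkey "$tmp/dummy.key" -out "$tmp/dummy.crt"
openssl x509 -noout -subject -in "$tmp/dummy.crt"
```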
The same configuration is used for t2.example.com on vm-back2 and t3.example.com on vm-proxy.
Certbot
If all the VMs were connected to the Internet, we could just run certbot now, but vm-back1 and vm-back2 don't have access to the Internet. Without access to the Internet, certbot cannot contact Let's Encrypt to get a certificate. The usual solution to this problem is to use an HTTPS proxy and to set the HTTPS_PROXY environment variable when running certbot.
I believe an HTTP proxy is a little extreme and may weaken the network if misconfigured. Instead, I decided to use nginx as a reverse proxy once again: vm-proxy will forward the TLS connections to acme-v02.api.letsencrypt.org.
We already configured nginx on vm-proxy for that. Now we need to make certbot contact vm-proxy instead of the actual servers from vm-back1 and vm-back2. We have several options here; for instance, we could use a lying DNS. Instead, I decided to edit the /etc/hosts file on vm-back1 and vm-back2 and add these lines:
192.168.10.1 acme-v02.api.letsencrypt.org
192.168.10.1 r3.o.lencr.org
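If you provision the backends with a script, a small helper keeps this idempotent, so re-running it doesn't duplicate lines. The function name and its optional file argument are mine, not part of any standard tool:

```shell
# add_host_override: append "IP hostname" to a hosts(5) file only if the
# hostname is not already present. Third argument defaults to /etc/hosts.
add_host_override() {
    file=${3:-/etc/hosts}
    grep -q "[[:space:]]$2\$" "$file" || echo "$1 $2" >> "$file"
}

# On vm-back1 and vm-back2 (as root):
# add_host_override 192.168.10.1 acme-v02.api.letsencrypt.org
# add_host_override 192.168.10.1 r3.o.lencr.org
```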
To generate a certificate, we run:
sudo certbot certonly \
    --agree-tos \
    --register-unsafely-without-email \
    --domain t1.example.com \
    --non-interactive \
    --webroot \
    --webroot-path /var/www/well-known/acme-challenge \
    --post-hook "systemctl restart nginx"
Then we replace the dummy certificate:
sudo unlink /etc/nginx/certs/t1.example.com.crt
sudo unlink /etc/nginx/certs/t1.example.com.key
sudo ln -s /etc/letsencrypt/live/t1.example.com/fullchain.pem /etc/nginx/certs/t1.example.com.crt
sudo ln -s /etc/letsencrypt/live/t1.example.com/privkey.pem /etc/nginx/certs/t1.example.com.key
Certbot will restart nginx after each renewal thanks to the post-hook, but after this first run you still need to reload nginx manually once the links point to the actual certificate: sudo systemctl reload nginx.