Motivation

Centralizing certificate issuance is step one. Step two is making nodes consume that certificate automatically, without copying files by hand like it’s a 2004 LAN party.

In this guide we’ll pull the wildcard cert for yourdomain.com from a “cert VM”, store it locally on the node, and use it in a docker-compose stack serving HTTPS content.

(Is “issuance” a word? It made sense in my head.)

Prerequisites

A wildcard cert issued for *.yourdomain.com

On the cert VM, the exported certs look something like this:

  • /srv/certs/homelab-fullchain.pem
  • /srv/certs/homelab-privkey.pem

On the node, you want them stored at:

  • /srv/certs/yourdomain.com/fullchain.pem
  • /srv/certs/yourdomain.com/privkey.pem

You will use this for a Docker application served at https://uptime.yourdomain.com

  • We will be using uptime-kuma as the example

The Stack: What You’ll Use

  • Debian VM/LXC - our preferred OS
  • SSH services & key authentication
  • rsync on both the node and the cert VM
  • Bash scripts, because they are the foundation and MVP of *nix automation
  • systemd services and timers

Architecture Overview

Pull Mechanism
[node] → systemd timer → rsync download of SSL → script hook → SSL storage → Caddy reload

Application Layer
[user browser] → [node caddy:443] → [SSL Certificate] → [SSL Docker Application proxy]

Step-by-Step Guide

Create a Dedicated SSH Key on the Node

sudo ssh-keygen -t ed25519 -f /root/.ssh/cert-pull -N ""

Copy the public key to the cert VM:

sudo ssh-copy-id -i /root/.ssh/cert-pull.pub root@cert-vm.yourdomain.com

Yes, it’s root. No, I don’t love it either. We’ll secure access in a second.

You can adapt this strategy to use a specific user if one already exists on both systems; see below.

SSH Key Security "hardening"

On the cert VM, edit ~/.ssh/authorized_keys for the key you just added and force it to allow only a read-only rsync.
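One way to do the forcing is a restricted entry using rrsync, the helper script shipped with the rsync package (a sketch, and note the path is an assumption: it is /usr/bin/rrsync on recent Debian, but ships gzipped under /usr/share/doc/rsync/scripts/ on older releases):

```
restrict,command="/usr/bin/rrsync -ro /srv/certs" ssh-ed25519 AAAA... root@node
```

With rrsync, remote paths become relative to the restricted directory, so the pull source changes from cert-vm:/srv/certs/ to cert-vm:/. The restrict option (OpenSSH 7.2+) also disables port forwarding, agent forwarding, and PTY allocation for this key.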

Better way:

Create a restricted user. It needs a real home directory so sshd can find its authorized_keys, a working shell so SSH can actually spawn rsync, and --group so a matching certpull group exists for the next step:

sudo adduser --system --group --home /home/certpull --shell /bin/sh certpull

Enable read access to /srv/certs:

sudo chgrp -R certpull /srv/certs
sudo chmod -R g+rX /srv/certs

Install the node key for that user:

sudo mkdir -p /home/certpull/.ssh
sudo nano /home/certpull/.ssh/authorized_keys   # paste the node's /root/.ssh/cert-pull.pub here
sudo chown -R certpull:certpull /home/certpull/.ssh
sudo chmod 700 /home/certpull/.ssh
sudo chmod 600 /home/certpull/.ssh/authorized_keys

This mitigates the issue of using a root key, but it does require a bit more setup. Your call.

Pull Certificates using rsync

On the node:

sudo mkdir -p /srv/certs/yourdomain.com
sudo chmod 750 /srv/certs/yourdomain.com

Now you can pull the certs:

sudo rsync -av \
  -e "ssh -i /root/.ssh/cert-pull" \
  certpull@cert-vm.yourdomain.com:/srv/certs/ \
  /srv/certs/yourdomain.com/

Then copy them to the names we want locally:

sudo cp /srv/certs/yourdomain.com/homelab-fullchain.pem /srv/certs/yourdomain.com/fullchain.pem
sudo cp /srv/certs/yourdomain.com/homelab-privkey.pem   /srv/certs/yourdomain.com/privkey.pem
sudo chmod 640 /srv/certs/yourdomain.com/*.pem
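Before pointing anything at these files, it is worth confirming that the cert and private key actually belong together. A minimal sketch using openssl (the helper name `certs_match` and the paths in the usage comment are mine, assuming the layout above):

```shell
#!/bin/sh
# Sanity check: confirm a certificate and a private key belong together
# by comparing the public key each one carries.
certs_match() {
  # Extract the public key from the cert (first cert in the chain) and
  # from the private key; require non-empty output so a missing or
  # unreadable file counts as a failure, not a vacuous match.
  cert_pub=$(openssl x509 -in "$1" -noout -pubkey 2>/dev/null)
  key_pub=$(openssl pkey -in "$2" -pubout 2>/dev/null)
  [ -n "$cert_pub" ] && [ "$cert_pub" = "$key_pub" ]
}

# Usage (run as root, since the key is mode 640):
#   certs_match /srv/certs/yourdomain.com/fullchain.pem \
#               /srv/certs/yourdomain.com/privkey.pem && echo OK
```

Because `openssl x509` reads only the first certificate in the file, this works on a fullchain.pem directly: the leaf cert comes first.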

Automate the Certificate Pull Using systemd

On the node, we create a script:

sudo nano /usr/local/sbin/pull-central-cert.sh

with this:

#!/bin/bash
set -euo pipefail

REMOTE="certpull@cert-vm.yourdomain.com:/srv/certs/"
LOCAL="/srv/certs/yourdomain.com"

mkdir -p "$LOCAL"

# --delete also removes the renamed copies below on each run;
# the cp lines immediately recreate them from the fresh pull.
rsync -a --delete \
  -e "ssh -i /root/.ssh/cert-pull" \
  "$REMOTE" "$LOCAL/"

cp "$LOCAL/homelab-fullchain.pem" "$LOCAL/fullchain.pem"
cp "$LOCAL/homelab-privkey.pem"   "$LOCAL/privkey.pem"
chmod 640 "$LOCAL/"*.pem

# Reload Caddy if it exists (no downtime)
if docker ps --format '{{.Names}}' | grep -q '^caddy$'; then
  docker exec caddy caddy reload --config /etc/caddy/Caddyfile >/dev/null 2>&1 || true
fi

Make it executable:

sudo chmod +x /usr/local/sbin/pull-central-cert.sh

Create a systemd service:

sudo nano /etc/systemd/system/pull-central-cert.service

with this:

[Unit]
Description=Pull wildcard TLS cert from cert VM

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/pull-central-cert.sh

Create the systemd timer:

sudo nano /etc/systemd/system/pull-central-cert.timer

with this:

[Unit]
Description=Daily pull of wildcard TLS cert

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
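If several nodes pull from the same cert VM, you can optionally stagger the daily runs so they don't all hit it at once. RandomizedDelaySec is a standard systemd timer option; the 1h value here is just a suggestion:

```
[Timer]
OnCalendar=daily
RandomizedDelaySec=1h
Persistent=true
```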

Enable the timer and run a first pull:

sudo systemctl daemon-reload
sudo systemctl enable --now pull-central-cert.timer
sudo systemctl start pull-central-cert.service

Practical Example - Uptime-Kuma via Docker Compose Using the Pulled Certificate

We demonstrate the use of this certificate in a simple application install: uptime-kuma + Caddy, with Caddy providing the web front end and SSL termination. On the node, use a predictable folder structure, for example:

/opt/uptime-kuma/
  docker-compose.yml
  Caddyfile

/srv/certs/yourdomain.com/
  fullchain.pem
  privkey.pem

Create your Docker Compose file to resemble something like this:

services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    container_name: uptime-kuma
    restart: unless-stopped
    volumes:
      - ./data:/app/data
    networks:
      - web

  caddy:
    image: caddy:latest
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - /srv/certs/yourdomain.com:/certs:ro
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - web

networks:
  web:

volumes:
  caddy_data:
  caddy_config:

Notice the read-only mount of the node's local SSL certificates for Caddy to use. Create your Caddyfile in the same folder:

uptime.yourdomain.com {
  tls /certs/fullchain.pem /certs/privkey.pem
  reverse_proxy uptime-kuma:3001
}

This is where it all ties in. By now, your node is running a Docker stack with two applications: uptime-kuma and Caddy. Caddy receives the 80/443 requests and, inside the Docker network, proxies the uptime-kuma application. You can verify that the SSL certificate is in use from your computer:

curl -vkI https://uptime.yourdomain.com

This prints the SSL certificate properties on screen for inspection.
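You can also check expiry directly on the node with openssl. A small sketch: `-checkend` exits 0 when the cert will still be valid after the given number of seconds, and the helper name and path below are mine, assuming the layout from this guide:

```shell
#!/bin/sh
# Check whether a certificate remains valid for at least N more seconds.
cert_ok_for() {
  # $1 = seconds, $2 = cert path; exit 0 if still valid after $1 seconds
  openssl x509 -checkend "$1" -noout -in "$2" >/dev/null 2>&1
}

# Usage: warn if the pulled cert expires within the next 14 days.
#   cert_ok_for $((14 * 24 * 3600)) /srv/certs/yourdomain.com/fullchain.pem \
#     || echo "cert expires within 14 days!"
```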

Conclusion

This how-to enables secure, repeatable distribution of wildcard SSL certificates across your #homelab applications/services and cuts the admin time needed to enable HTTPS in a small-scale environment:

  • Certificates stay centralized and are renewed once - Cert VM
  • Nodes pull them automatically - Bash + systemd timers and hooks
  • Services use standard file mounts - Docker Compose
  • Reload happens cleanly, with no manual intervention

This is the kind of automation that actually scales in a one-person homelab: not “enterprise,” just repeatable, without any additional overhead/stacks. The method is extensible to most of the application setups you may have; the POC for Caddy + Uptime Kuma can be extended to Apache2 apps, PHP, etc. It just requires a bit of analysis on your side.

Future Work

Now that I think about it, I should move this shell script to Python; more elegant. #WillDoThisInFuture.
