I set up my first Ghost blog in 2016, on a VPS I'd barely configured, with an SSL certificate I'd manually generated using Let's Encrypt DNS validation. It was janky, it broke constantly, and I loved it. Nearly a decade and one CVE-induced admin key theft later, I'm still self-hosting — but I've learned a few things along the way.
This isn't a purist manifesto about escaping Big Tech. It's a practical account of what I run, what broke, what I'd do differently, and whether it's actually worth the effort.
What I'm Running
My current setup lives on a single VPS — nothing fancy, nothing enterprise. Here's the stack:
| Service | Tool | Why |
|---|---|---|
| Blog | Ghost | Been running it since 2016. Fast, simple, Node.js-based. |
| Reverse Proxy | Nginx | Terminates TLS, routes traffic to Docker containers. |
| SSL | Certbot (Let's Encrypt) | Free TLS certificates. Auto-renew via cron. |
| Containers | Docker + Compose | Every service is containerised. No exceptions. |
| Monitoring | Uptime Kuma | Lightweight, self-hosted uptime monitor with notifications. |
| DNS | Cloudflare | Free tier. CDN + DDoS protection + DNS in one. |
| Backups | rclone + cron | Encrypted backups to object storage. Automated daily. |
Everything runs in Docker. I've been containerising since I wrote about Docker Compose for Hadoop/Spark/Kafka clusters back in 2017. The pattern hasn't changed — if it runs on my server, it runs in a container with a docker-compose.yml that I can recreate from scratch in minutes.
The Docker Compose That Ties It Together
Here's a simplified version of my production compose file. I keep one file per service for simplicity, but the pattern is the same:
```yaml
# ghost/docker-compose.yml
services:
  ghost:
    image: ghost:latest
    container_name: ghost
    ports:
      - "2368:2368"
    volumes:
      - ghost_content:/var/lib/ghost/content
    environment:
      - url=https://davideandreazzini.co.uk
      - database__client=mysql
      - database__connection__host=db
      - database__connection__user=ghost
      - database__connection__password=${DB_PASSWORD}
      - database__connection__database=ghost
    depends_on:
      - db
    restart: unless-stopped

  db:
    image: mysql:8.0
    container_name: ghost_db
    volumes:
      - ghost_db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${DB_ROOT_PASSWORD}
      - MYSQL_DATABASE=ghost
      - MYSQL_USER=ghost
      - MYSQL_PASSWORD=${DB_PASSWORD}
    restart: unless-stopped

volumes:
  ghost_content:
  ghost_db:
```
Every secret lives in a .env file that's never committed to git. The database runs in its own container — not because I need MySQL scalability, but because it makes backups clean: dump the volume, compress it, ship it to object storage.
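For reference, the `.env` that sits next to the compose file looks like this (the values here are placeholders; keep the real file at mode 600 and listed in `.gitignore`):

```shell
# ghost/.env — placeholder values only; chmod 600 and never commit
DB_PASSWORD=replace-with-a-long-random-string
DB_ROOT_PASSWORD=replace-with-a-different-long-random-string
```

Docker Compose reads this file automatically and substitutes `${DB_PASSWORD}` and `${DB_ROOT_PASSWORD}` into the service environment.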
Lessons from the CVE Incident
In April 2026, CVE-2026-26980 hit — and someone extracted my Ghost admin key in seven seconds while I was asleep. I wrote about the details in a separate post, but the infrastructure lessons were the real takeaway:
1. Network-Layer Defence Matters More Than Application Patches
By the time a CVE is published and you've patched, the botnets have already scanned every exposed instance. What actually saved me (partially) was Cloudflare acting as a buffer between the public internet and my origin server. The attacker still got through — but Cloudflare's WAF caught the subsequent enumeration attempts.
The lesson: don't expose services directly to the internet. Put them behind a reverse proxy and a CDN/WAF, even for personal projects.
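One concrete way to enforce that at the firewall, as a sketch: allow inbound 443 only from Cloudflare's published edge ranges, so scanners can't reach the origin directly. This assumes ufw as the host firewall, and `cf_rules` is a hypothetical helper that turns a list of CIDR ranges into reviewable ufw commands rather than applying anything itself:

```shell
# Sketch, assuming ufw. cf_rules reads CIDR ranges (one per line) on stdin
# and prints the matching ufw allow rule for each, so the rules can be
# reviewed before being applied.
cf_rules() {
  while read -r cidr; do
    [ -n "$cidr" ] && echo "ufw allow from $cidr to any port 443 proto tcp"
  done
}

# Cloudflare publishes its current edge ranges at https://www.cloudflare.com/ips-v4:
#   curl -fsS https://www.cloudflare.com/ips-v4 | cf_rules | sudo sh
#   sudo ufw deny 443/tcp   # then drop everything not explicitly allowed
```

The same idea works with iptables or nftables; the point is that the allow-list is generated from Cloudflare's published data, not maintained by hand.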
2. Rotate Secrets Like You Mean It
My admin API key hadn't been rotated since I set up the blog. That's on me. Now I rotate them monthly via a simple script:
```bash
#!/bin/bash
# rotate-ghost-keys.sh
# Run monthly via cron

GHOST_DIR=/opt/ghost
CONFIG_FILE=$GHOST_DIR/config.production.json
BACKUP_DIR=/opt/backups/ghost
DATE=$(date +%Y-%m-%d)

# Backup current config
mkdir -p "$BACKUP_DIR"
cp "$CONFIG_FILE" "$BACKUP_DIR/config-$DATE.json"

# Generate new keys and update config
# (Ghost handles key rotation on restart with environment variables)
docker compose -f "$GHOST_DIR/docker-compose.yml" restart ghost

echo "[$DATE] Ghost keys rotated" >> /var/log/ghost-rotation.log
```
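Scheduling it is one cron entry (a sketch; the path and the time of day are assumptions):

```shell
# /etc/cron.d/ghost-key-rotation — 03:00 on the 1st of every month
0 3 1 * * root /opt/ghost/rotate-ghost-keys.sh
```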
3. Monitoring Isn't Optional
I didn't have any monitoring when the CVE hit. I found out by checking my logs the next morning. Now I run Uptime Kuma — it pings every endpoint every 60 seconds and sends me a Telegram message if anything goes down or responds with an unexpected status code.
```yaml
# uptime-kuma/docker-compose.yml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    ports:
      - "3001:3001"
    volumes:
      - uptime_kuma_data:/app/data
    restart: unless-stopped

volumes:
  uptime_kuma_data:
```
Setup takes 10 minutes. The peace of mind lasts indefinitely.
The Backup Strategy That Saved Me More Than Once
Backups are the one area where I invested real effort, and it's paid off multiple times. Here's the setup:
```bash
#!/bin/bash
# backup-ghost.sh — runs daily via cron at 02:00
set -euo pipefail

DATE=$(date +%Y-%m-%d)
BACKUP_DIR=/opt/backups/ghost/$DATE
S3_BUCKET=s3://my-backups/ghost

mkdir -p "$BACKUP_DIR"

# Dump the MySQL database (a dump is consistent; a raw volume copy might not be)
docker exec ghost_db mysqldump -u root -p"$DB_ROOT_PASSWORD" ghost \
  | gzip > "$BACKUP_DIR/ghost-db-$DATE.sql.gz"

# Copy the Ghost content volume
docker run --rm -v ghost_content:/data -v "$BACKUP_DIR":/backup alpine \
  sh -c "cd /data && tar czf /backup/ghost-content-$DATE.tar.gz ."

# Encrypt both artefacts before upload
for f in "$BACKUP_DIR/ghost-db-$DATE.sql.gz" "$BACKUP_DIR/ghost-content-$DATE.tar.gz"; do
  gpg --symmetric --cipher-algo AES256 --batch --pinentry-mode loopback \
      --passphrase "$BACKUP_KEY" "$f"
done

rclone copy "$BACKUP_DIR/ghost-db-$DATE.sql.gz.gpg" "$S3_BUCKET/db/"
rclone copy "$BACKUP_DIR/ghost-content-$DATE.tar.gz.gpg" "$S3_BUCKET/content/"

# Clean up local backup directories older than 7 days (never the base directory itself)
find /opt/backups/ghost -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +

echo "[$DATE] Ghost backup complete" >> /var/log/ghost-backup.log
```
Three things make this work:
- Database dumps, not volume copies — a SQL dump is consistent. A volume copy might be mid-write.
- Encryption before upload — even though the object storage is private, I encrypt anyway. Defence in depth.
- Automatic cleanup — old backups get deleted locally after 7 days. Object storage keeps them longer.
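The flip side is the restore path, which is worth scripting before you need it. Here's a sketch under the same assumptions as the backup script (container and volume names, `$BACKUP_KEY`, `$DB_ROOT_PASSWORD`); `restore_ghost` is a hypothetical helper, not something Ghost ships:

```shell
# Hypothetical restore helper matching the backup layout above.
restore_ghost() {
  date=$1
  dir=/opt/backups/ghost/$date

  # Decrypt the dump and stream it back into MySQL
  gpg --batch --pinentry-mode loopback --passphrase "$BACKUP_KEY" \
      --decrypt "$dir/ghost-db-$date.sql.gz.gpg" \
    | gunzip \
    | docker exec -i ghost_db mysql -u root -p"$DB_ROOT_PASSWORD" ghost

  # Unpack the content tarball back into the named volume
  docker run --rm -v ghost_content:/data -v "$dir":/backup alpine \
    sh -c "cd /data && tar xzf /backup/ghost-content-$date.tar.gz"
}
# Usage: restore_ghost 2026-04-01   (then restart the ghost container)
```

A backup you've never restored from is a hope, not a backup; I run a test restore against a throwaway container every few months.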
The Nginx Configuration That Handles Everything
All traffic hits Nginx first. It terminates TLS, sets security headers, and routes to the right Docker container:
```nginx
server {
    listen 443 ssl http2;
    server_name davideandreazzini.co.uk;

    ssl_certificate /etc/letsencrypt/live/davideandreazzini.co.uk/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/davideandreazzini.co.uk/privkey.pem;

    # Security headers
    add_header X-Frame-Options DENY always;
    add_header X-Content-Type-Options nosniff always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header Content-Security-Policy "default-src 'self'; script-src 'self' cdn.jsdelivr.net; style-src 'self' 'unsafe-inline'; img-src 'self' images.unsplash.com data:;" always;

    # Ghost blog
    location / {
        proxy_pass http://127.0.0.1:2368;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering off;
    }
}

# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name davideandreazzini.co.uk;
    return 301 https://$server_name$request_uri;
}
```
Certbot's cron job checks twice a day but only replaces a certificate once it's within 30 days of expiry, which works out to a renewal roughly every 60 days. I wrote about the manual DNS verification process back in 2017 — these days, the HTTP challenge is simpler, but the principle is the same.
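The renewal itself is one cron entry (a sketch; the schedule is an assumption, and the deploy hook only fires when a certificate was actually replaced):

```shell
# /etc/cron.d/certbot-renew — a no-op unless a certificate is near expiry
0 3 * * * root certbot renew --quiet --deploy-hook "systemctl reload nginx"
```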
What I'd Do Differently Starting Over
If I were setting up from scratch today, here's what I'd change:
- Use Docker networks instead of exposing ports — my early setup exposed container ports to the host unnecessarily. Now I use internal Docker networks and only expose the Nginx ports.
- Automate everything from day one — my current setup is mostly automated, but it took years of manual steps to get there. I'd write Ansible playbooks from the start.
- Set up monitoring before I need it — I learned this one the hard way. Uptime Kuma goes up before the services do.
- Use a dedicated backup user with minimal permissions — my backup script runs as root. It shouldn't need to.
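The first bullet, sketched as a compose fragment (service and network names here are assumptions): Nginx joins the same Docker network as Ghost and is the only service that publishes host ports.

```yaml
services:
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
      - "443:443"
    networks:
      - web_internal
  ghost:
    image: ghost:latest
    networks:
      - web_internal   # reachable from nginx as http://ghost:2368, invisible to the host

networks:
  web_internal:
```

Inside the network, Nginx would proxy to `http://ghost:2368` by service name rather than `127.0.0.1:2368`, and a port scan of the host would see nothing but 80 and 443.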
Is Self-Hosting Worth It?
Honestly? It depends on what you value.
The costs: I spend maybe 2-3 hours a month on maintenance. I pay for a VPS and object storage, which together cost less than a single SaaS subscription. The real cost is the 3 AM incidents when something breaks and there's no support team to call.
The benefits: I control my data. I understand my infrastructure end to end. I can spin up a new service in minutes. And I've learned more about networking, security, and Linux administration from running my own servers than I ever would from using managed platforms.
For me, the trade-off is worth it. I'm a software engineer who likes understanding how things work, not just that they work. Self-hosting forces you to confront the messy reality of certificates, reverse proxies, DNS propagation, and security hardening — and that knowledge makes you better at everything else you build.
Start small. Containerise one thing. Add monitoring. Add backups. Then iterate. You don't need a rack of servers — just a VPS, Docker, and the willingness to fix things when they break at inconvenient hours.