I’ve spent a lot of time experimenting with hosting APIs. Like most developers, I started with the usual suspects: AWS, Azure, serverless platforms, Kubernetes clusters. All of them are incredibly capable — and for large-scale systems, they make a lot of sense.
But if you’re just trying to run a few production APIs without racking up cloud costs or wrestling with control planes and managed gateways, most of those options are massive overkill.
So I simplified.
For the past year, I’ve been running multiple production-ready APIs on a single $24/month droplet from DigitalOcean. Everything runs in Docker. I get automatic SSL, container redeploys, and zero downtime — with a setup that’s stable, boring, and easy to replicate.
The core idea is simple: one VPS, a docker-compose.yml file, and a small group of containers that handle all the moving parts. Nginx sits at the front as a reverse proxy, routing incoming traffic and terminating TLS. Let’s Encrypt certs are issued automatically by acme-companion, so there’s no manual renewal involved. And I use Watchtower to monitor image updates and trigger redeploys as soon as something changes.
Spinning up a new API is as easy as duplicating a service block in the docker-compose.yml file, pointing a new domain at the droplet, and running docker compose up -d. Within seconds, traffic is routed, SSL is provisioned, and the container is live. Everything’s isolated, containerized, and versioned — and I’ve been surprised how reliable it’s been, even with multiple services running side by side.
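Concretely, a duplicated block might look like this — the image name, domain, and port are placeholders you’d swap for your own:

```yaml
  # Hypothetical second API: replace image, domain, and port with your own.
  another-api:
    image: yourname/another-api:latest   # placeholder image name
    container_name: another-api
    restart: unless-stopped
    environment:
      - VIRTUAL_HOST=api.example.com     # nginx-proxy routes this domain here
      - VIRTUAL_PORT=3000                # port the app listens on in the container
      - LETSENCRYPT_HOST=api.example.com # acme-companion issues a cert for it
    networks:
      - proxy_network
```

Point the new domain’s DNS A record at the droplet’s IP, run docker compose up -d, and nginx-proxy and acme-companion pick the new service up automatically.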
Of course, this isn’t a silver bullet. If you’re planning for high availability, need multi-region failover, or want dynamic scaling, then Kubernetes or a managed platform makes more sense. But for early-stage products, internal tools, or even stable APIs that just need to run — this setup has been rock solid.
What I like most about it is that it gets out of the way. I don’t have to log into a web console, manage IAM roles, or worry about vendor lock-in. It’s fast to set up, predictable to operate, and incredibly cheap for what it gives me.
I’ve used this setup for everything from side projects to client work, and I’d recommend it to anyone looking for a low-maintenance way to host production APIs without the complexity of modern cloud platforms.
It’s not flashy, but it works — and for most use cases, that’s more than enough.
Here’s the full docker-compose.yml:

services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy:latest
    container_name: nginx-proxy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # Read-only Docker socket lets nginx-proxy discover running containers
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx/certs:/etc/nginx/certs:rw
      - ./nginx/vhost:/etc/nginx/vhost.d
      - ./nginx/html:/usr/share/nginx/html
      - ./nginx/dhparam:/etc/nginx/dhparam
      - ./nginx/conf.d:/etc/nginx/conf.d
    environment:
      - ENABLE_IPV6=true
    networks:
      - proxy_network

  acme-companion:
    image: nginxproxy/acme-companion:latest
    container_name: acme-companion
    restart: unless-stopped
    depends_on:
      - nginx-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Shares the cert, vhost, and webroot volumes with nginx-proxy
      - ./nginx/certs:/etc/nginx/certs:rw
      - ./nginx/vhost:/etc/nginx/vhost.d
      - ./nginx/html:/usr/share/nginx/html
    environment:
      - DEFAULT_EMAIL=your@example.com
      - NGINX_PROXY_CONTAINER=nginx-proxy
    networks:
      - proxy_network

  watchtower:
    image: containrrr/watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_CLEANUP=true        # remove old images after updating
      - WATCHTOWER_POLL_INTERVAL=900   # check for new images every 15 minutes

  my-app:
    image: crccheck/hello-world
    container_name: my-app
    restart: unless-stopped
    environment:
      - VIRTUAL_HOST=yourdomain.com      # domain nginx-proxy routes to this container
      - VIRTUAL_PORT=8000                # port the app listens on in the container
      - LETSENCRYPT_HOST=yourdomain.com  # domain acme-companion issues a cert for
    networks:
      - proxy_network

networks:
  proxy_network:
    driver: bridge