We run around 40 Flask apps on one dev machine at any given time. Shipping season, sometimes 60. Every app needs a port. How do you pick?
The "correct" answer is: use a service discovery system. Consul, etcd, whatever. Apps register themselves, a reverse proxy routes by hostname, nobody cares about port numbers. That's the answer that would work if we had a team of five and a real ops budget.
Our answer is: take the SHA-1 of the app's slug, modulo 300, add 8400. Every app binds to exactly one port, every time, computed from its name.
```python
import hashlib

def port_for(slug: str) -> int:
    h = hashlib.sha1(slug.encode()).hexdigest()
    return 8400 + (int(h, 16) % 300)

# Always the same:
port_for("cheesemaking")  # → 8437
port_for("honeybees")     # → 8612
port_for("bookcircle")    # → 8420
```
No registry. No service discovery. No config file. Collisions are possible — with 300 slots, the birthday paradox makes two slugs sharing a port likelier than intuition suggests as the app count grows — but our actual slug list hasn't produced one yet. The same app always binds to the same port, across all environments, forever.
This Is Objectively Not Elegant
I want to be upfront about the tradeoffs. This approach has real downsides:
- Collisions. With only 300 slots, the birthday paradox means two slugs can hash to the same port well before the range fills. When that happens, we'll have to expand the port range or add a salt.
- There's no service health-checking. If an app's port is in use by something else at boot, the app fails to start. You have to notice.
- You can't easily run two copies of the same app on one machine (for blue/green deployments, say). The port is tied to the name.
- It's weird. If you're used to service discovery, this looks like a regression.
These are real tradeoffs. I'm not going to pretend they aren't.
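The first two tradeoffs are at least checkable at deploy time. Here's a minimal sketch of that audit — `find_collisions` and `port_is_free` are illustrative names I'm inventing here, not part of the actual template:

```python
import hashlib
import socket
from collections import defaultdict

def port_for(slug: str) -> int:
    h = hashlib.sha1(slug.encode()).hexdigest()
    return 8400 + (int(h, 16) % 300)

def find_collisions(slugs):
    """Group slugs by computed port; return only ports claimed by 2+ slugs."""
    by_port = defaultdict(list)
    for slug in slugs:
        by_port[port_for(slug)].append(slug)
    return {port: names for port, names in by_port.items() if len(names) > 1}

def port_is_free(port: int) -> bool:
    """Try to bind the port; True means nothing else holds it right now."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind(("127.0.0.1", port))
            return True
        except OSError:
            return False
```

Run `find_collisions` over the full slug list whenever a vertical is added, and `port_is_free` at app boot, and the two "you have to notice" failure modes become loud instead of silent.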
But It Ships
Here's what the deterministic-port choice actually bought us, across the hundred-plus apps we've built on the template:
Zero port-config per vertical. A new vertical's app.py doesn't need to decide on a port. It calls port_for("myslug") and that's it. Every dev on the team (which is one person right now but could be more) knows that running python app.py will Just Work without port collisions.
Predictable bookmarks. localhost:8437 is always cheesemaking. Always. I know that without checking. If I left the dev machine three months ago and come back, cheesemaking is still at 8437. That kind of environmental stability is worth something.
Simple nginx config. The reverse proxy in front of our hosted apps has one location block per app, hardcoded to the deterministic port. No dynamic service discovery. When we deploy a new vertical, the nginx config gets one new block generated from the slug list, and it's guaranteed to match the app's binding.
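That generation step can be plain string templating. A sketch of how it might look — the `location` path pattern and proxy headers here are my assumptions, since the post doesn't show the real config:

```python
import hashlib

def port_for(slug: str) -> int:
    h = hashlib.sha1(slug.encode()).hexdigest()
    return 8400 + (int(h, 16) % 300)

# Hypothetical block template; double braces are literal braces in .format().
BLOCK = """\
# {slug} — generated from the slug list, do not edit by hand
location /{slug}/ {{
    proxy_pass http://127.0.0.1:{port}/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}}
"""

def nginx_blocks(slugs):
    """Render one location block per slug, port derived from the slug."""
    return "\n".join(BLOCK.format(slug=s, port=port_for(s)) for s in slugs)

print(nginx_blocks(["cheesemaking", "honeybees"]))
```

Because both nginx and the app compute the port from the same function of the same name, the proxy target can't disagree with the app's binding.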
No config drift. Service discovery systems fail interestingly when the registry gets out of sync with reality. Our system can't drift because there's nothing to drift — the port is computed from the name, and the name isn't changing.
The Broader Principle
The reason to write this post isn't to convince you to use SHA-1 port hashes. It's to make an argument for a class of choices that share a quality: they don't scale elegantly, but they ship, and shipping is what matters when you're building 220 things.
Other examples from the same family in our stack:
SQLite instead of Postgres. SQLite won't handle 10,000 concurrent writes per second. Neither will any of our apps. SQLite is a file. You can back it up by copying it. You can inspect it with any GUI tool. The self-host user never has to set up a database server. For our scale, there's nothing Postgres gives us that SQLite doesn't, and SQLite gives us operational simplicity we'd have to work to preserve with Postgres.
Systemd instead of Docker/Kubernetes. Each app runs as a systemd user service. One service file per app. systemctl restart cheesemaking restarts cheesemaking. systemctl status honeybees tells you how honeybees is doing. No pods, no deployments, no ingress controllers. If I ever need to scale past what one machine can do, I'll add Docker; I won't start there.
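The same slug-driven generation works for the service files. A hedged sketch — the directory layout and `ExecStart` path are guesses, not from the post:

```python
import hashlib

def port_for(slug: str) -> int:
    h = hashlib.sha1(slug.encode()).hexdigest()
    return 8400 + (int(h, 16) % 300)

UNIT = """\
[Unit]
Description={slug} (deterministic port {port})

[Service]
# Hypothetical paths — adjust to wherever the verticals actually live.
WorkingDirectory=%h/apps/{slug}
ExecStart=%h/apps/{slug}/.venv/bin/python app.py
Restart=on-failure

[Install]
WantedBy=default.target
"""

def unit_for(slug: str) -> str:
    """Render a systemd user-service file for one vertical."""
    return UNIT.format(slug=slug, port=port_for(slug))

print(unit_for("cheesemaking"))
```

Write the output to `~/.config/systemd/user/<slug>.service`, run `systemctl --user daemon-reload`, and each vertical gets its one service file with zero hand-edited config.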
Single-password auth. I wrote about this in the template post. Full user management, OAuth, SSO — all of that is overkill for apps used by one person or a small team. Shipping single-password auth on day one means the app ships on day one. If a specific vertical outgrows it, upgrade to the multi-user module; most never will.
Git push to main as deploy. No feature branches, no review gates, no staging environment promotion. CI catches the worst bugs. If something slips through, I notice in ~10 minutes and roll back with a revert commit. This is terrible practice for a 30-engineer team. It's excellent practice for a family business.
The right stack for a 30-engineer team is usually wrong for a two-person team. The right stack for a two-person team is usually wrong for a 30-engineer team. Build for the team you actually have.
The Unstated Corollary
If we 10x the team, a lot of these choices will have to change. I know that. The port-hashing scheme will need a registry. SQLite will need to graduate to Postgres for some verticals. Single-password auth will need to become proper multi-user. Git-push-to-main will need CI gates.
That's fine. The cost of migrating is lower than the cost of premature abstraction. Every day we're running on simple infrastructure is a day we're shipping features instead of operating a platform.
And frankly: if we ever get to the point where the simple infrastructure is genuinely a problem, that means the business is working, which is the whole goal. I'd rather hit the wall of "we need to move to Postgres because we have too many customers" than the wall of "we never shipped enough because we were busy setting up Kubernetes."
What To Do With This
If you're building a portfolio of small projects, audit your stack for unfashionable choices that save operational complexity. Every time you're about to install something with a steeper learning curve than you need right now, ask: is there a dumber thing that would work for the next 12 months?
The dumber thing usually exists. SQLite over Postgres. Systemd over Kubernetes. Cron over Airflow. SSH keys over LDAP. A hash function over a service registry. These aren't wrong — they're just calibrated for a smaller operation. If you're running a smaller operation, they fit.
And when you do outgrow them, the migration is usually less painful than you think, because the old choices made your system simpler, which makes it easier to understand, which makes it easier to change.
That's the argument. Deterministic ports won't scale. They will ship. That's the trade.