Every time I write about the Dangercorn template — which is Flask + SQLite + Jinja — someone on HackerNews or lobste.rs rolls in to explain that I'm building on toys. FastAPI is faster. Postgres is production-grade. SQLite doesn't do concurrent writes. You should be using async. You should be using a real ORM. You should be using Redis. You should be using Kubernetes.
The commenters are not wrong about the technical properties of those tools. They're wrong about the context we're operating in: 220 vertical SaaS apps, each aimed at a small audience, most hitting under 10,000 requests a day. The stack that matches that workload is nothing like the stack that matches a consumer social app.
Let me walk through why Flask + SQLite is the right answer for our scale, and where it'd stop being right.
The Actual Request Volume
Here are the real numbers from our busiest vertical landing pages over the last 30 days:
- cheesemaking: 1,847 unique visitors, ~4,200 requests.
- honeybees: 1,204 unique visitors, ~2,800 requests.
- childmilestone: 892 unique visitors, ~1,900 requests.
Peak hour for the busiest vertical was 340 requests in an hour, or about 0.1 requests/second average. This is not Twitter. This is not even a moderately active e-commerce store. Flask on a single gunicorn worker can handle two orders of magnitude more than we're throwing at it.
For the app behind the landing page — which is actual SaaS state, not a static landing — the volume is lower. A cheesemaking self-host user might have 4-8 sessions a day, each with 20-30 requests. An app with 50 paying users handles maybe 5,000 requests a day. Flask + SQLite eats this for breakfast.
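That per-day figure is worth sanity-checking. A sketch of the arithmetic, using the conservative end of the session and request ranges above:

```python
# Back-of-envelope capacity math for one vertical with 50 paying users.
# Session and request counts are the rough estimates from the text.
users = 50
sessions_per_user = 5          # low end of the 4-8 sessions a day
requests_per_session = 20      # low end of the 20-30 requests per session
requests_per_day = users * sessions_per_user * requests_per_session
avg_rps = requests_per_day / 86_400   # seconds in a day

print(requests_per_day)   # 5000
print(round(avg_rps, 3))  # 0.058
```

Even the busy end of those ranges (8 sessions, 30 requests) only reaches about 0.14 requests/second averaged over a day.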
SQLite's Actual Write Story
"SQLite doesn't do concurrent writes" is the most-repeated and most-wrong critique. SQLite in WAL mode handles concurrent reads while writes are in-flight, and serializes writes through a single writer at a rate of a few thousand writes per second on modern SSD. That's not a bottleneck for any of our workloads.
The actual limit is long-running transactions. If you hold a write transaction open for 10 seconds, everyone else is waiting. The fix is to not do that. Keep transactions small. Commit often. Read your writes in a separate connection if you need to. Basic database hygiene, same as you'd practice on Postgres.
```python
import sqlite3

DB_PATH = "app.db"  # each vertical points this at its own database file

# Enable WAL, set reasonable defaults
# (from our template's db.py)
def get_db():
    db = sqlite3.connect(DB_PATH, timeout=30.0)
    db.row_factory = sqlite3.Row               # access columns by name
    db.execute("PRAGMA journal_mode=WAL")      # readers don't block the writer
    db.execute("PRAGMA synchronous=NORMAL")    # fsync at checkpoints, not every commit
    db.execute("PRAGMA foreign_keys=ON")       # off by default in SQLite; turn it on
    db.execute("PRAGMA busy_timeout=5000")     # wait up to 5s on a locked database
    return db
```
WAL mode. Sync NORMAL (not FULL — full is paranoid for most workloads). Foreign keys on. 5-second busy timeout. That's it.
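The transaction-hygiene point is easy to show. Using the connection as a context manager keeps each write transaction to one short block (a sketch against an in-memory database; the table and values are invented for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE batches (id INTEGER PRIMARY KEY, note TEXT)")

# Good: the context manager opens a transaction, commits on success,
# and rolls back on exception. The write lock is held only briefly.
with db:
    db.execute("INSERT INTO batches (note) VALUES (?)", ("cheddar, day 3",))

# Bad (don't do this): begin a write, then do slow work -- an API call,
# a big computation -- before committing. That holds the write lock
# for the whole duration and makes every other writer wait.

rows = db.execute("SELECT note FROM batches").fetchall()
print(rows[0][0])  # cheddar, day 3
```

Same habit you'd want on Postgres; SQLite just punishes the lapse sooner.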
The Backup Story
SQLite's backup story is "copy the file." In WAL mode you need to checkpoint first or use the backup API, but it's one command. The entire database is one file. Move it to S3 on a cron. Done.
Postgres's backup story is pg_dump with specific format choices, a regular base backup, WAL archiving for point-in-time recovery, and a restore procedure you should test periodically. All of which is fine, and necessary at scale, and vastly more machinery than any of our verticals need.
Our verticals do nightly sqlite3 app.db ".backup /backups/app-$(date +%Y%m%d).db" and that's the whole system. Restore is: copy the file back. Tested nightly by default.
Why Not Postgres Anyway
The case for Postgres in a small vertical SaaS is: "you'll outgrow SQLite eventually, so start with Postgres." This is future-proofing, and future-proofing is almost always wrong in early-stage product work.
If a vertical grows to needing Postgres — say cheesemaking hits 10,000 paying users on Hosted Pro and the write contention is real — we migrate that one vertical. The cost of migrating one app after it's demonstrated product-market fit is small. The cost of running Postgres in production for 220 apps that never need it is large.
Concretely: a Postgres instance for a small SaaS on a $5 VPS is about 100MB of RAM overhead and ongoing maintenance. Across 220 apps that's 22GB of RAM and a big chunk of ongoing attention. SQLite's overhead is zero — it's a library linked into the app process.
Why Not FastAPI
FastAPI is a great framework. I've used it for other projects. For vertical SaaS it's overpowered in a way that costs us:
- Async everywhere. FastAPI assumes you're writing async handlers. Most of our code is CPU-bound or simple DB-bound; async doesn't help and makes the code harder to read.
- Auto-generated OpenAPI. Useful for public APIs. None of our verticals have public APIs. The machinery runs regardless.
- Pydantic models everywhere. Schema validation is great for JSON APIs. For HTML form handling, it's friction.
Flask fits our workload. Routes return HTML (via Jinja) or small JSON payloads (for HTMX partials). Forms are WTForms when validation matters, plain request.form when it doesn't. No schema generation, no async context manager stack, no ASGI. A request comes in, a function runs, a response goes out. That's the thing.
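That request/response shape in miniature (a sketch, not the template's actual routes; the route names and the batches list are invented, and the Jinja template is inlined to stay self-contained):

```python
from flask import Flask, jsonify, render_template_string

app = Flask(__name__)

# In a real app this would come from SQLite via get_db().
BATCHES = [{"name": "cheddar", "day": 3}, {"name": "gouda", "day": 12}]

@app.route("/batches")
def batches_page():
    # Full HTML page rendered by Jinja
    return render_template_string(
        "<ul>{% for b in batches %}"
        "<li>{{ b.name }}: day {{ b.day }}</li>"
        "{% endfor %}</ul>",
        batches=BATCHES,
    )

@app.route("/api/batches")
def batches_json():
    # Small JSON payload, e.g. for an HTMX partial or a fetch()
    return jsonify(BATCHES)
```

Two routes, two plain functions, no schema layer between the form data and the database.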
Where This Stops Working
I'd switch stacks if any of these were true:
- Sustained write load over ~200 writes/second. SQLite's serialized writer stops being a toy and starts being a ceiling. We're two orders of magnitude below this.
- Multi-region active-active. SQLite on one disk doesn't do distributed. For most of our verticals, one region is fine.
- Real-time features with many-to-many broadcast. Flask + SQLite isn't suited for WebSocket pub/sub at scale. We don't ship features like this.
- 100+ GB single-database scale. SQLite handles this technically, but the backup and migration story gets tedious.
None of our 220 verticals has any of these needs. Most have <1GB of data, <10 writes/second, single-region, request/response HTML. This is SQLite's sweet spot.
The Counterintuitive Piece
The mental shift is: your stack should match your workload, not your aspirations. We are not a unicorn with 10M users. We are 220 small businesses. Stack choices should fit "220 small businesses," not "the one that might become a unicorn."
The stack that would be wrong for Twitter is the correct stack for a cheesemaker's batch tracker. Choose based on the workload in front of you.
Related
- The template walkthrough
- Deterministic ports
- Patterns across 220 database schemas

For spotlights that actually run on this stack: cheesemaking, coachboard, shiftfill.
Repo: github.com/Dangercorn-Enterprises/dangercorn-saas-template.