
Shipping Civic Tech on a $30 Budget

What changes when your entire production stack has to fit on a 4 GB box, survive a public audit, and cost less than lunch.

Filed: 2026-04
Jurisdiction: EU · NGO
Practice: civic tech
Stack: Docker · Nginx · PostgreSQL · GCP
Read: 6 min

The problem

Commercial software shops rarely have to think about the unit cost of their infrastructure. A junior engineer clicks a button, provisions a managed Postgres, attaches it to a managed Kubernetes cluster, and by the end of the week the company is spending four figures a month without anyone having made a decision. The cost is absorbed into a line item called infrastructure that nobody questions because the alternative — thinking about it — is more expensive than the bill.

Civic tech does not have this luxury.

A public-interest project typically runs on a mix of grants, pro-bono labor, and the patience of one or two people who keep showing up. Every euro spent on a managed service is a euro not spent on a developer's hour, a legal consultation, or a user-research session with the actual humans the product is supposed to serve. If the infrastructure budget swallows the project budget, the project does not exist.

So the problem is not "how do we run this cheaply?" The problem is: given that cheaply is the only available mode, what does a responsible production stack look like, and what do we give up to get there?

The constraint

The real constraint is a single small virtual machine. In the version of this I work with most, that means a GCP e2-medium: one shared vCPU, 4 GB of RAM, a few dozen euros a month all-in, including egress and the static IP. Everything the platform needs — web app, API, database, scrapers, queues, reverse proxy, TLS — has to fit.

That's not a hypothetical. It's a literal memory budget, enforced at the Docker level, sketched out on a whiteboard before the first container is built:

Nginx              64 MB    (port 80/443)
PostgreSQL        256 MB    (port 5432)
Web (Next.js)     512 MB    (port 3000)
API (NestJS)      512 MB    (port 4000)
Scraper A         384 MB    (port 8001)
Scraper B         384 MB    (port 8002)
Redis × 2         128 MB each
---------------------------------
Total            ~2.4 GB of 4 GB
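The totals are worth checking by hand. In shell arithmetic, the table works out as:

```shell
# Sanity-check the memory budget from the table above (all values in MB).
total=$((64 + 256 + 512 + 512 + 384 + 384 + 128 + 128))
echo "allocated: ${total} MB"                    # 2368 MB, i.e. ~2.4 GB
echo "headroom:  $((4096 - total)) MB of 4096 MB"
```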

The remaining 1.6 GB is headroom for the kernel, page cache, log shipping, and the occasional build. The rule is that no new service gets added without taking memory from a neighbor. The budget is a contract.
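In Docker Compose terms, the contract can be enforced per container with `mem_limit`. A minimal sketch, with illustrative service names and images; the limits are the table's:

```yaml
# docker-compose.yml (excerpt): the memory budget, enforced per container.
# Service names and images are illustrative; limits match the budget table.
services:
  nginx:
    image: nginx:alpine
    mem_limit: 64m
    restart: unless-stopped
    ports: ["80:80", "443:443"]
  postgres:
    image: postgres:16-alpine
    mem_limit: 256m
    restart: unless-stopped
  web:
    build: ./web
    mem_limit: 512m
    restart: unless-stopped
  api:
    build: ./api
    mem_limit: 512m
    restart: unless-stopped
```

With limits in place, a service that leaks runs into its own ceiling and gets OOM-killed instead of starving its neighbors, and `restart: unless-stopped` brings it back without a human in the loop.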

A second constraint, less visible but more important: everything has to be legible to someone who wasn't there when it was built. A civic-tech project does not have a devops team. The next person to touch the box might be a volunteer who last deployed something in 2019. The stack has to make sense at 11pm on a Sunday after a one-page README.

The approach

Three principles, each a direct response to a class of mistakes I've seen civic-tech projects make.

Principle 1: One box, many containers. No Kubernetes. No managed services except what the cloud provider forces on you. A single hardened VM running Docker Compose, with every service described in one file that a human can read in one sitting. The alternative — spreading services across managed platforms — is faster on the up-slope and catastrophic on the down-slope when the one person who knows how it fits together moves on.

Principle 2: Harden the base, once, properly. The VM is treated like a production server because it is a production server. The first hour of the life of the box is spent on the unglamorous work: kernel updates, UFW firewall with three ports open, SSH key-only access with root login disabled, fail2ban for the brute-forcers, automatic security updates enabled, a static external IP. None of this is clever. All of it is the difference between a box that runs for two years and a box that becomes a warning story.
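That first hour fits in a short runbook. A sketch, assuming Debian/Ubuntu and assuming the three open ports are 22, 80, and 443 — adjust to your distribution before running any of it:

```shell
# First hour of the box, as a runbook (run as root). Assumes Debian/Ubuntu
# and that the three open ports are 22/80/443.
apt-get update && apt-get upgrade -y          # kernel and security updates
apt-get install -y ufw fail2ban unattended-upgrades

ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp && ufw allow 80/tcp && ufw allow 443/tcp
ufw enable

# SSH: keys only, root login disabled.
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl reload ssh

systemctl enable --now fail2ban               # default sshd jail covers the brute-forcers
dpkg-reconfigure -f noninteractive unattended-upgrades
```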

Principle 3: Deploy like you will forget how it works. Every service has a one-command deploy: push to git, SSH in, pull, rebuild the target container, restart. A tiny shell script that does nothing you couldn't type by hand, but captures the order of operations so the volunteer at 11pm on Sunday doesn't have to remember. The script is the documentation.
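The script itself is a few lines. A sketch under the same assumptions as the rest of this piece: it runs on the box, in the repo root, against a Compose file; service names are whatever yours are called.

```shell
#!/usr/bin/env sh
# deploy.sh: nothing you couldn't type by hand, but in the right order.
# Usage: ./deploy.sh <service>    e.g.  ./deploy.sh api
set -eu

SERVICE="${1:?usage: ./deploy.sh <service>}"

git pull --ff-only                       # refuse to deploy a diverged tree
docker compose build "$SERVICE"          # rebuild only the target container
docker compose up -d --no-deps "$SERVICE"
docker image prune -f                    # a 4 GB box has no room for old layers
docker compose ps "$SERVICE"             # show what is now running
```

The `--ff-only` and `--no-deps` flags are the whole point: the script encodes the two mistakes the 11pm volunteer would otherwise make, deploying a diverged branch and restarting services that didn't change.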

The build

The working stack, again abstracted:

  • Nginx as the reverse proxy and TLS termination. Let's Encrypt via certbot, auto-renewing. Routes by host and path to the right upstream container.
  • PostgreSQL for auth and anything transactional. Small enough to run in the same container as the app if you had to, but isolating it is worth the 256 MB.
  • Web and API containers for the user-facing product. Next.js and NestJS in this case, but the pattern works for any pair of frontend and backend processes.
  • Two scraper microservices — Python, FastAPI, BullMQ, each with its own small Redis because the upstream platforms demand isolation. These are the highest-risk services in terms of upstream API changes, so they get their own blast radius.
  • An external managed database for the application's document store. This is the one managed service I allow, because the cost of operating it yourself on a 4 GB box is not the disk space — it's the backup discipline, and the backup discipline is where amateur civic-tech projects go to die.
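The routing layer is a single nginx.conf. An excerpt, sketched with illustrative hostnames; the upstream names resolve over the Compose network, and certbot manages the certificate paths:

```nginx
# nginx.conf (excerpt): route by host and path to the right upstream container.
# Hostnames are illustrative; certbot writes and renews the certificate files.
server {
    listen 443 ssl;
    server_name example.org;

    ssl_certificate     /etc/letsencrypt/live/example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;

    location /api/ {
        proxy_pass http://api:4000/;    # NestJS container
    }
    location / {
        proxy_pass http://web:3000;     # Next.js container
    }
}

server {
    listen 80;
    server_name example.org;
    return 301 https://$host$request_uri;
}
```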

The whole thing sits behind a single static IP, a single domain, a single certificate. When someone asks how to check if production is up, the answer is "curl this URL." When someone asks what it costs, the answer is "about thirty euros a month, including the IP reservation."

The outcome

The platform runs. It has run through kernel updates, Docker upgrades, a full-stack redeploy while I was at a café, and at least one bad push that took it down for about eight minutes. The cost line on the monthly invoice is roughly the same as one round of drinks.

What it cannot do is scale horizontally without a serious rebuild. There is exactly one of everything. A traffic spike past the capacity of the 4 GB box is handled by either a temporary upgrade to the next instance size or by the traffic going away on its own. This is an explicit tradeoff, made with the operators' knowledge.
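The temporary upgrade is itself a three-command operation. A sketch with an illustrative instance name and zone; note that e2 machine types must be stopped before resizing, so this is a few minutes of planned downtime:

```shell
# Temporary vertical scale-up on GCP (instance name and zone are illustrative).
gcloud compute instances stop civic-prod --zone=europe-west1-b
gcloud compute instances set-machine-type civic-prod \
    --machine-type=e2-standard-2 --zone=europe-west1-b
gcloud compute instances start civic-prod --zone=europe-west1-b
```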

What it can do is exist. The project is not held hostage to a managed service's pricing page. The next volunteer can look at one docker-compose.yml, one nginx.conf, one deploy.sh, and understand the entire production environment in half an hour. That legibility is the feature.

Aftermath

The most common question I get about this setup is whether it's responsible. The framing of the question usually presumes that production-grade means managed, and that a single VM is a toy.

My answer: it depends on what you're protecting. If you are protecting uptime against a once-a-year outage, the single-VM pattern is genuinely worse than a managed equivalent. If you are protecting the existence of the project against the month where the grant lapses, it is genuinely better. Civic tech's mortality curve is dominated by projects that ran out of money, not projects whose instances crashed.

The deeper lesson, for anyone building in this corner: constraints are not limitations. They are design inputs. Having to fit a whole stack into 4 GB forces a kind of clarity that a generous budget actively discourages. Every service has to earn its memory. Every dependency has to justify its image size. Every layer has to be something the next person can read.

A budget that feels inadequate will produce a system you can hold in your head. A budget that feels abundant will produce a system that, when something breaks, you cannot. For civic tech — where the next person is always a stranger — the first is the responsible choice.