Setting Up a Postgres, Go Gin, and React Project on Kubernetes
I spent the past few days setting up a Kubernetes project, Finance Dashboard, and this post is the curtain call on a useful side project. The core code lives inside a single file: `finance-dashboard.yaml`. Most of the documentation on how to set it up is in the Finance Dashboard README, so instead I'll briefly walk through some issues I encountered.

The moving parts
- Backend (Go/Gin) + Postgres + Frontend (nginx serving a static React build)
- One namespace: `finance`
- Frontend proxies to the backend via `/api`
Problems I hit (and fixes that stuck)
- DNS flakiness and upstream resolution
  - Problem: nginx tried to resolve `finance-backend` before the Service had endpoints → 502s and crash loops.
  - Fix: Use a ConfigMap-mounted nginx config and avoid `nginx -t` at start. Added a tiny init wait where needed (sketch below). Later I eliminated DNS altogether by co-locating containers.
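For the init wait, an init container that blocks until the backend Service accepts connections is enough. A minimal sketch, assuming a busybox image and the `finance-backend:8080` Service name; these details are illustrative, not necessarily what `finance-dashboard.yaml` ships:

```yaml
# Block Pod startup until the backend Service resolves and accepts
# TCP connections (image and names are assumptions).
initContainers:
  - name: wait-for-backend
    image: busybox:1.36
    command:
      - sh
      - -c
      - |
        until nc -z finance-backend 8080; do
          echo "waiting for finance-backend:8080..."
          sleep 2
        done
```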
- “Pod can’t reach Postgres” whack-a-mole
  - Problem: One-directional connectivity, or `pg_isready` passing while the app still failed.
  - Fixes:
    - Simplified to a single Pod (`finance-stack`) running Postgres + backend (+ frontend). Backend connects to Postgres via `127.0.0.1`, zero CNI/DNS drama.
    - Explicit `?sslmode=disable` in `DATABASE_URL` for local clusters (env sketch below).
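The connection string ends up looking roughly like this; credentials and database name here are placeholders, not the real manifest values:

```yaml
# Backend container env (placeholder credentials). Loopback works because
# Postgres runs in the same Pod; sslmode=disable skips TLS negotiation,
# which local clusters don't need.
env:
  - name: DATABASE_URL
    value: postgres://finance:finance@127.0.0.1:5432/finance?sslmode=disable
```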
- Readiness/Liveness probes fighting real life
  - Problem: `/health` required the DB; probes flipped endpoints to “not ready,” then the nginx upstream broke.
  - Fix: TCP readiness on port 8080 (backend is ready once it's listening) and longer liveness delays (probe sketch below). The app itself handles DB retries/logging.
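On the backend container that looks roughly like this. The timings are illustrative, and I'm using a TCP liveness check here too, on the assumption that the goal is to decouple both probes from the DB:

```yaml
# Readiness: "listening on 8080" is good enough; don't tie it to the DB.
# Liveness: generous delays so a slow start doesn't trigger restart loops.
readinessProbe:
  tcpSocket:
    port: 8080
  periodSeconds: 5
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 15
```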
- Frontend proxy rewriting
  - Problem: `/api/*` requests were getting mangled (trailing slash on `proxy_pass`).
  - Fix: `proxy_pass http://finance-backend:8080;` (no slash), plus config delivered via ConfigMap (sketch below):
    - `/` → static files
    - `/api/*` → backend
    - (Optional) `/health` → backend `/health` for quick checks
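A trimmed sketch of that ConfigMap; the ConfigMap name and file layout are assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: finance-nginx-conf   # assumed name
  namespace: finance
data:
  default.conf: |
    server {
      listen 80;

      # Static React build
      location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html;
      }

      # No trailing slash on proxy_pass: nginx forwards the request URI
      # (including the /api prefix) unchanged instead of rewriting it.
      location /api/ {
        proxy_pass http://finance-backend:8080;
      }

      # Optional: expose the backend health check for quick checks
      location /health {
        proxy_pass http://finance-backend:8080;
      }
    }
```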
- Seeding and demo data
  - Problem: Empty UI during demos, or duplicate categories after multiple runs.
  - Fixes:
    - Categories are unique on `(name, type)` with a dedupe step.
    - Added a `-seed-demo` flag to load realistic transactions/budgets only if none exist (args sketch below).
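Wiring the flag into the manifest is one line on the backend container; the image name is a placeholder:

```yaml
containers:
  - name: backend
    image: finance-backend:dev   # placeholder image
    args: ["-seed-demo"]         # seeds demo data only when tables are empty
```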
The final pattern I used
- One Deployment, three containers (Postgres, backend, frontend/nginx). One Service for the frontend (ClusterIP), one for the backend (ClusterIP).
- Frontend proxies `/api` to `127.0.0.1:8080` when co-located, or to the backend Service if split out later.
- Port-forward the frontend Service in dev (`kubectl port-forward -n finance svc/finance-frontend 8081:80`) and open `http://localhost:8081`.
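Put together, the co-located Deployment is a condensed version of the sketch below; images and credentials are placeholders, and `finance-dashboard.yaml` remains the source of truth:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: finance-stack
  namespace: finance
spec:
  replicas: 1
  selector:
    matchLabels:
      app: finance-stack
  template:
    metadata:
      labels:
        app: finance-stack
    spec:
      containers:
        - name: postgres
          image: postgres:16            # placeholder tag
          ports:
            - containerPort: 5432
        - name: backend
          image: finance-backend:dev    # placeholder image
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_URL
              value: postgres://finance:finance@127.0.0.1:5432/finance?sslmode=disable
        - name: frontend
          image: finance-frontend:dev   # placeholder image
          ports:
            - containerPort: 80
```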
The payoff
Once I stopped pretending this needed to be “production distributed” for a local Minikube demo, everything became boring and reliable. Co-locating Postgres + backend + nginx in one Pod made `finance-dashboard.yaml` simpler, removed all the networking guesswork, and let me focus on the UI and data instead of the cluster.
That's the configuration I'll keep for now. When I go to a real cluster, I can split Services/Deployments again, but now I know exactly which knobs matter. After this project my finances have never looked better. Thank you, Finance Dashboard!