
In 2021, we migrated everything to Kubernetes. We wanted "cloud neutrality," infinite scalability, and the cool factor of running efficient clusters. We hired DevOps engineers, built Helm charts, and configured service meshes.
Four years later, we moved back to a managed Platform-as-a-Service (PaaS). The Kubernetes tax—the complexity, the maintenance, the sheer cognitive load—was bleeding us dry. This is the story of how we realized we weren't Google and didn't need their infrastructure tools.
The Kubernetes Seduction
The pitch for Kubernetes (K8s) is intoxicating. Write once, run anywhere. Efficient bin-packing of resources. Self-healing pods. It sounds like the future of infrastructure.
We bought into it fully. We moved from Heroku to EKS (Amazon's Elastic Kubernetes Service). We told ourselves the cost savings on raw EC2 instances vs. Heroku premiums would pay for the engineering time. We convinced ourselves that "vendor lock-in" was the ultimate evil to avoid.
Initially, it felt powerful. We could spin up complex microservices architectures. We had granular control over network policies. We felt like "real" engineers.
Then the reality of Day 2 operations set in.
The Hidden Tax
Kubernetes is not a platform; it's a platform-building framework. When you adopt K8s, you become a platform engineering company whether you want to or not.
We spent months configuring ingress controllers, certificate managers, monitoring stacks (Prometheus/Grafana), logging aggregators, external DNS syncing, and secrets management. Each of these required ongoing maintenance; whenever an upstream Helm chart shipped a breaking change, our deployments broke with it.
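To give a sense of the breadth, here's a rough sketch of the kind of bootstrap script a cluster like ours needed before it could serve a single request. The chart names and repos are the common community ones, not our exact setup, and versions and values files are omitted:

    #!/usr/bin/env bash
    # Illustrative bootstrap of the "platform" layer we had to own on EKS.
    set -euo pipefail

    # Ingress: route external traffic into the cluster
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx -n ingress --create-namespace

    # TLS certificates
    helm repo add jetstack https://charts.jetstack.io
    helm upgrade --install cert-manager jetstack/cert-manager -n cert-manager --create-namespace --set installCRDs=true

    # Metrics and dashboards (Prometheus + Grafana)
    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm upgrade --install monitoring prometheus-community/kube-prometheus-stack -n monitoring --create-namespace

    # DNS records synced from Ingress/Service resources
    helm repo add external-dns https://kubernetes-sigs.github.io/external-dns
    helm upgrade --install external-dns external-dns/external-dns -n external-dns --create-namespace

    # ...plus log aggregation, secrets management, autoscaling, and upgrades for all of the above.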
Our "savings" on compute were laughable compared to the salary costs of the engineers required to keep the cluster breathing. We needed two full-time DevOps engineers just to maintain the plumbing. That's $300k+ annually to save maybe $20k in hosting premiums.
Upgrades were projects in their own right. Moving from K8s 1.25 to 1.26 wasn't a button click; it was a week of testing for API deprecations and making sure none of our 50 YAML files referenced an API version the new release had removed.
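If you haven't lived through one, the pre-upgrade homework looked roughly like this (the k8s/ path and the pluto invocation are illustrative, though autoscaling/v2beta2 really was removed in 1.26):

    # Before upgrading the control plane, scan manifests for API versions the new release drops.
    # Example: the autoscaling/v2beta2 HorizontalPodAutoscaler is no longer served as of 1.26.
    grep -rn "autoscaling/v2beta2" k8s/          # k8s/ is wherever your YAML lives

    # Tools such as Fairwinds' pluto automate the same check across a repo:
    pluto detect-files -d k8s/ --target-versions k8s=v1.26.0

    # Then repeat the exercise for every Helm chart, operator, and CRD installed above.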
The "Cloud Neutrality" Lie
We moved to K8s to avoid vendor lock-in. The irony? We just traded "Heroku lock-in" for "Kubernetes complexity lock-in."
Moving our cluster from AWS to GCP would have been months of work. The specific storage classes, load balancer annotations, and IAM roles were inextricably tied to AWS. The idea that K8s makes you portable is theoretically true but practically false for most teams.
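A few concrete examples of what "tied to AWS" meant in our manifests. The identifiers are real EKS-specific ones, though the snippet is illustrative rather than copied from our repo:

    # Storage classes name the cloud's CSI driver directly:
    kubectl get storageclass -o jsonpath='{.items[*].provisioner}'
    # -> ebs.csi.aws.com                         (GKE would use pd.csi.storage.gke.io)

    # Load balancer behavior lives in AWS-only Service annotations:
    #   service.beta.kubernetes.io/aws-load-balancer-type: "nlb"

    # Pod permissions come from an EKS-specific ServiceAccount annotation (IRSA):
    #   eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<app-role>

Every one of those would need a GCP equivalent, rewritten and re-tested, before a "portable" cluster actually moved anywhere.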
We realized that worrying about cloud portability is premature optimization for 99% of companies. If we ever grew big enough that AWS pricing was our biggest problem, we'd have the resources to rewrite infrastructure then. Optimizing for it at $5M ARR was madness.
The Developer Experience Nightmare
On a PaaS, a developer pushed code and it ran. On K8s, that same developer had to understand Dockerfiles, readiness probes, liveness probes, resource limits, and affinity rules.
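For a sense of scale, here's a minimal sketch of the boilerplate that stood between "it builds" and "it serves traffic" for one service. The names (web, ghcr.io/acme/web) are made up; the knobs are the ones every developer had to understand:

    # Minimal-but-realistic Deployment for a single service.
    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels: { app: web }
      template:
        metadata:
          labels: { app: web }
        spec:
          affinity:                          # spread replicas across nodes
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 100
                podAffinityTerm:
                  labelSelector:
                    matchLabels: { app: web }
                  topologyKey: kubernetes.io/hostname
          containers:
          - name: web
            image: ghcr.io/acme/web:1.2.3    # hypothetical image
            ports:
            - containerPort: 8080
            resources:                       # get these wrong and you meet the OOMKiller below
              requests: { cpu: 250m, memory: 256Mi }
              limits:   { cpu: "1",  memory: 512Mi }
            readinessProbe:                  # gates traffic to the pod
              httpGet: { path: /healthz, port: 8080 }
              initialDelaySeconds: 5
            livenessProbe:                   # restarts the container when it fails
              httpGet: { path: /healthz, port: 8080 }
              periodSeconds: 10
    EOF

And that's before the Service, Ingress, HorizontalPodAutoscaler, and ConfigMap that usually ride along with it.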
"It works on my machine" became "It works on my machine but crashes in the cluster because of a memory limit causing an OOMKill that doesn't show up in the application logs."
Debugging was torture. Instead of heroku logs --tail, engineers were running kubectl get pods, finding the right hash, running kubectl logs -f, realizing the pod was restarting, checking kubectl describe pod... The friction slowed feature development significantly.
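Side by side, the two workflows looked something like this (pod names are hypothetical):

    # Heroku: one command, streaming logs for the whole app.
    heroku logs --tail

    # Kubernetes: reconstruct the story yourself.
    kubectl get pods -l app=web                      # find the pod (web-7c9f8d6b5-x2k4q or similar)
    kubectl logs -f web-7c9f8d6b5-x2k4q              # tail it... until the container restarts
    kubectl logs --previous web-7c9f8d6b5-x2k4q      # logs from the crashed container, if any
    kubectl describe pod web-7c9f8d6b5-x2k4q         # events, probe failures, scheduling issues
    # An OOMKill shows up here, not in the application logs:
    #   Last State:  Terminated
    #     Reason:    OOMKilled
    #     Exit Code: 137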
We tried to paper over this with complex internal developer platforms (IDPs). We built tools to abstract K8s away. So we were building a PaaS on top of K8s to mimic the PaaS we had left. The absurdity was painful.
The Migration Back
We evaluated modern PaaS options: Vercel, Railway, Render, Fly.io. The landscape had matured significantly since 2021.
We chose a managed service that offered "git push to deploy" simplicity but on our own AWS account (a "Bring Your Own Cloud" PaaS model). This gave us the compliance and credits of AWS with the ergonomics of Heroku.
The migration took three weeks. We deleted thousands of lines of YAML. We disbanded the "Platform Team" and those engineers moved to product engineering, where they were far happier writing features than debugging etcd latency.
The Results
Infrastructure Costs: Actually increased by 15%. Managed services charge a premium. We happily paid it.
Total Ops Costs: Decreased by 70%. We no longer needed dedicated "K8s caretakers."
Deployment Frequency: Doubled. The friction of deployment disappeared.
Uptime: Improved. It turns out managed services are better at keeping databases and load balancers alive than a team of three frantic startup engineers.
Who Needs Kubernetes?
Kubernetes is amazing technology. If you are Netflix. If you are Uber. If you have 500 engineering teams and need to standardize deployment across heterogeneous environments. If your compute bill is $10M/month and efficient bin-packing saves you $3M.
But for a B2B SaaS with 50 engineers? It's a resume-driven development trap. It's complexity you don't need solving problems you don't have.
Conclusion
We stopped playing Google. We accepted that paying a markup for managed infrastructure is the cheapest money a startup can spend. We deploy code, not clusters. And we sleep much better at night.
If you're a startup struggling with K8s complexity, ask yourself: Are you maintaining this infrastructure because it serves the product, or because it makes you feel like a "serious" tech company? The most serious company is the one that ships value, not YAML.
Written by XQA Team