Senior / Staff Software Engineer
I design and build distributed systems that hold up under pressure. 10+ years turning complex problems into reliable, scalable systems — from zero to production.
Mexico
The work
Not a list of capabilities. A set of convictions — things I've chosen to go deep on because they matter, and things I won't compromise on because they should matter to you too.
I. Simplicity is the work
I remove things. Complexity is a debt you pay with your team's attention, your on-call rotation, and your architecture's future. I've learned — the hard way — that doing less, deliberately, produces more durable work than doing everything cleverly.
"What is essential? Everything else — eliminate."
II. Systems fail. Design for it.
I've built services that run at scale inside cloud infrastructure used by hundreds of thousands of people. The work that matters happens at the boundary between what you assumed and what actually occurs in production. I design for that boundary, not around it.
"Obstacles are the way. Build through them."
III. Culture is the product
I've led engineers across 0→1 builds. What I learned is that good architecture is downstream of good communication. When people understand the why, they make better decisions without being asked. That's the environment I work to create.
"You can't delegate understanding."
IV. Craft over output
I don't write code to close tickets. I write code because there's something worth building — and because the act of building it well is its own kind of integrity. The pipeline that handles 5 GB a day reliably, the API that never surprises you, the refactor that makes the next engineer's job easier — that's what I care about leaving behind.
"Do less. Do it better. Do it with intention."
V. Tools are not judgment
I use agentic tools deliberately and without ceremony. They compress time. But in distributed systems, the consequences of wrong assumptions compound — no tool reasons about your failure modes. That judgment stays with me.
"The craftsman never blames his tools."
The tools
Tools are just tools. What matters is knowing when to reach for each one — and when not to.
Current primary stack
Go · Python · React · TypeScript
This is the combination I reach for most today. Go for the backend core, Python with FastAPI when the problem suits it, React and TypeScript when the product needs a real frontend. Everything else on this page extends from here.
Primary language
My tool of choice for backend systems that need to be correct, fast, and boring in the best way. I chose Go because it forces you to be explicit — about errors, about concurrency, about what your code actually does. That discipline matches how I think. I've used it to build services running inside critical infrastructure at cloud scale.
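That explicitness is easier to show than to describe. The sketch below is a generic illustration, not code from any project mentioned here: `lookup`, `fetchAll`, and the keys are all hypothetical stand-ins for a fallible operation like an RPC or a database read. The point is what Go forces into the open: every error is handled at the call site, and the concurrency (goroutines plus a channel) is visible in the code rather than hidden behind a framework.

```go
package main

import (
	"errors"
	"fmt"
)

// lookup is a stand-in for any fallible operation (an RPC, a DB read).
func lookup(key string) (string, error) {
	if key == "" {
		return "", errors.New("empty key") // the failure path is explicit, not thrown
	}
	return "value-for-" + key, nil
}

type result struct {
	key string
	val string
	err error
}

// fetchAll fans out one goroutine per key, collects every result, and
// surfaces the first failure with context instead of swallowing it.
func fetchAll(keys []string) (map[string]string, error) {
	ch := make(chan result, len(keys)) // buffered so no goroutine leaks on early return
	for _, k := range keys {
		go func(k string) {
			v, err := lookup(k)
			ch <- result{key: k, val: v, err: err}
		}(k)
	}
	out := make(map[string]string, len(keys))
	for range keys {
		r := <-ch
		if r.err != nil {
			return nil, fmt.Errorf("lookup %q: %w", r.key, r.err)
		}
		out[r.key] = r.val
	}
	return out, nil
}

func main() {
	vals, err := fetchAll([]string{"a", "b"})
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(len(vals)) // 2
}
```

Nothing here is clever, and that is the argument: the failure modes and the concurrency structure are readable in thirty lines.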
When the UI matters too
I'm a backend engineer who can own the full stack when the problem requires it. React with TypeScript for component-driven UIs. I understand the boundary between what belongs in the client and what belongs in the system — and I don't blur it.
Python done right
Python when the problem rewards it — data work, scripting, internal tooling. FastAPI specifically when I need a backend service that's clean, typed, and fast to build without sacrificing correctness. Async-first and honest about its contracts.
Where systems get interesting
Kafka for high-throughput async pipelines. gRPC for tight service contracts. PostgreSQL when you need to trust your data. MongoDB and ClickHouse when the shape of the problem demands it. I pick the right tool, not the fashionable one.
Infrastructure that disappears
AWS is the environment I've worked in most — SQS for async decoupling, the broader ecosystem for reliability. Good infrastructure should be invisible. If you're thinking about it too much, something is wrong with the design.
Containers & orchestration
Docker for reproducible, portable builds — if it works in the container it works everywhere, and I hold that line. Kubernetes for when you need to run things at scale with real operational control over scheduling, networking, and rollout. ArgoCD for GitOps-driven deployments where the desired state lives in the repo and the cluster converges to it — no manual kubectl surprises, no configuration drift.
Infrastructure as Code
Infrastructure should be version-controlled, reviewable, and reproducible — the same standards I hold for application code. IaC isn't a devops concern separate from engineering; it's part of the system design. Clicking around a cloud console is how you get infrastructure nobody understands six months later.
The edge of the system
Nginx and Traefik for reverse proxying, TLS termination, load balancing, and routing rules. I understand what lives at the edge of a system — and how decisions made there ripple into everything downstream. Getting proxy configuration wrong is one of the fastest ways to introduce subtle reliability problems.
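To make "the edge" concrete, here is a minimal reverse proxy in Go's standard library, included purely as illustration. The backend address and routes are placeholders, and a production edge like Nginx or Traefik also handles TLS termination, retries, and health checking of upstreams; the sketch only shows the two decisions every edge makes: what it forwards and what it answers itself.

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

// newEdge builds a handler that forwards /api/ traffic to one backend
// and answers health checks locally.
func newEdge(backend string) (http.Handler, error) {
	target, err := url.Parse(backend)
	if err != nil {
		return nil, err
	}
	proxy := httputil.NewSingleHostReverseProxy(target)

	mux := http.NewServeMux()
	// Routing rule: everything under /api/ is forwarded downstream.
	mux.Handle("/api/", proxy)
	// The edge answers some requests itself, e.g. liveness probes.
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	return mux, nil
}

func main() {
	handler, err := newEdge("http://127.0.0.1:9000") // placeholder backend
	if err != nil {
		panic(err)
	}
	srv := &http.Server{
		Addr:    ":8080",
		Handler: handler,
		// Missing timeouts at the edge are a classic source of subtle
		// reliability problems: slow clients pin connections forever.
		ReadTimeout:  5 * time.Second,
		WriteTimeout: 10 * time.Second,
	}
	// In a real program: srv.ListenAndServe()
	_ = srv
}
```

The timeouts are the detail worth noticing: most proxy misconfigurations are not wrong routes but unbounded waits that turn one slow upstream into an exhausted edge.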
Authorization & policy
Authorization logic doesn't belong hardcoded in your services. I've worked with policy engines to externalize access control — making it auditable, testable, and decoupled from business logic. When the policy is the product, treating it like first-class code is the only serious approach.
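A toy version of that externalization, under stated assumptions: the rule set and role names below are hypothetical, and real policy engines (OPA and its relatives) generalize this far beyond a slice lookup. What the sketch shows is the shape of the idea: policy is data, not if-statements scattered through handlers, so it can be loaded, reviewed in a PR, and tested in isolation from business logic.

```go
package main

import "fmt"

// Rule is one externalized access-control statement.
type Rule struct {
	Role, Action, Resource string
}

// Policy is the whole rule set: in practice it would be loaded from a
// versioned file or a policy service, not hardcoded next to handlers.
type Policy []Rule

// Allows answers a single question and knows nothing about the business
// logic that asks it. That separation is what makes policy auditable.
func (p Policy) Allows(role, action, resource string) bool {
	for _, r := range p {
		if r.Role == role && r.Action == action && r.Resource == resource {
			return true
		}
	}
	return false // deny by default
}

func main() {
	policy := Policy{
		{Role: "analyst", Action: "read", Resource: "reports"},
		{Role: "admin", Action: "write", Resource: "reports"},
	}
	fmt.Println(policy.Allows("analyst", "read", "reports"))  // true
	fmt.Println(policy.Allows("analyst", "write", "reports")) // false
}
```

Deny-by-default is the one behavior worth copying from this toy into any real system: an unmatched request should never be an allowed request.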
Also explored — not claimed as expertise
A good engineer should be comfortable picking up new languages because understanding patterns is more valuable than memorizing syntax. I've read, explored, and shipped small things in these — enough to reason about their tradeoffs, pair with someone who uses them daily, and not be lost in a codebase. I don't claim expertise. I claim intellectual honesty and the ability to get up to speed fast.
PHP
Widely deployed and pragmatic. I understand its model well enough to work in Laravel-era codebases without slowing a team down.
Ruby
Expressive and opinionated. Rails taught the industry convention over configuration. I understand why people love it.
Elixir
The actor model and fault-tolerant concurrency via OTP genuinely fascinate me. BEAM's "let it crash" philosophy is deeply aligned with how I think about resilience.
C++
Read more than written. Enough to understand what happens below the abstractions — memory, ownership, performance at the metal level.
Challenges conquered
If any of these sound like your team's problems right now — I've been in that room, and I know how to get out of it.
The challenge
Critical backend systems failing silently in a distributed environment — correctness couldn't be assumed.
What I did
Redesigned backend services in Go with explicit failure handling, idempotency guarantees, and recovery paths baked into the architecture — not bolted on. Validated every design assumption through architectural review and testing before shipping.
Outcome
Services that hold up when the environment around them doesn't — at global cloud scale.
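The idempotency pattern at the center of that redesign, reduced to a sketch: the names and the in-memory map are illustrative stand-ins (a real system would back this with a Postgres unique key, Redis SETNX, or similar), but the core move is the same, keying each side effect by message ID so a redelivery or a retry after a crash is a safe no-op.

```go
package main

import (
	"fmt"
	"sync"
)

// Message is a hypothetical unit of work with a stable, unique ID.
type Message struct {
	ID     string
	Amount int
}

// Processor applies each message exactly once per ID.
type Processor struct {
	mu      sync.Mutex
	applied map[string]bool // stand-in for durable dedupe storage
	Total   int             // stand-in for the real business effect
}

func NewProcessor() *Processor {
	return &Processor{applied: make(map[string]bool)}
}

// Handle is safe to call any number of times with the same message:
// duplicates are acknowledged without being applied twice.
func (p *Processor) Handle(m Message) error {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.applied[m.ID] {
		return nil // duplicate delivery: acknowledge, don't re-apply
	}
	p.Total += m.Amount // the actual effect happens at most once
	p.applied[m.ID] = true
	return nil
}

func main() {
	p := NewProcessor()
	p.Handle(Message{ID: "m1", Amount: 10})
	p.Handle(Message{ID: "m1", Amount: 10}) // redelivered: ignored
	p.Handle(Message{ID: "m2", Amount: 5})
	fmt.Println(p.Total) // 15
}
```

This is what "baked into the architecture, not bolted on" means in practice: the handler's contract tolerates at-least-once delivery, so the transport is free to retry without anyone having to be careful.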
The challenge
A real-time communications platform serving 10,000+ users needed backend pipelines that could handle massive, continuous data volume without degrading.
What I did
Designed and built event-driven microservices processing 5+ GB daily — consuming and producing Kafka streams, exposing gRPC APIs, and persisting analytical workloads into ClickHouse. Throughput, reliability, and observability were first-class concerns.
Outcome
Stable, observable pipelines processing 5+ GB/day with no single point of failure.
The challenge
A tightly coupled monolith was creating cascading failures and blocking the team from scaling individual parts of the system independently.
What I did
Led the decomposition into an event-driven architecture using AWS SQS, replacing synchronous dependencies with async message passing. Each service was given a clear boundary, its own failure domain, and the ability to scale without touching anything else.
Outcome
Eliminated cascade failures. Teams could deploy, scale, and reason about services independently.
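The decoupling move above, in miniature. A buffered Go channel stands in for AWS SQS here purely for illustration, and the event and function names are hypothetical; the shape of the change is the point, not the transport. Before, placing an order meant calling the downstream service synchronously, so its outage became your outage. After, the producer only enqueues, and the consumer owns its own failure domain.

```go
package main

import (
	"fmt"
	"sync"
)

// OrderPlaced is a hypothetical event published instead of a direct call.
type OrderPlaced struct{ OrderID string }

// placeOrder no longer invokes fulfillment synchronously: it publishes.
// A full queue surfaces as explicit backpressure, not a cascading hang.
func placeOrder(queue chan<- OrderPlaced, id string) error {
	select {
	case queue <- OrderPlaced{OrderID: id}:
		return nil
	default:
		return fmt.Errorf("queue full: backpressure instead of cascade")
	}
}

// consume drains the queue at its own pace; its failures never propagate
// back into the producer's request path.
func consume(queue <-chan OrderPlaced, handled *[]string, wg *sync.WaitGroup) {
	defer wg.Done()
	for evt := range queue {
		*handled = append(*handled, evt.OrderID)
	}
}

func main() {
	queue := make(chan OrderPlaced, 100) // SQS stand-in for illustration
	var handled []string
	var wg sync.WaitGroup
	wg.Add(1)
	go consume(queue, &handled, &wg)

	placeOrder(queue, "o-1")
	placeOrder(queue, "o-2")
	close(queue)
	wg.Wait()
	fmt.Println(len(handled)) // 2
}
```

With a real queue the same shape buys durability and retries on top of decoupling, but the architectural claim is already visible here: the producer's success no longer depends on the consumer being up.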
The challenge
A fintech lender needed KYC and identity workflows where a bug isn't just a bug — it's a compliance failure.
What I did
Designed backend systems for identity verification with auditability and correctness as hard requirements. Migrated core infrastructure to AWS, replacing a fragile on-prem setup with a reliable, observable foundation built to grow with the business.
Outcome
Compliant, auditable identity flows on a stable AWS infrastructure that scaled with user growth.
Who I am
Engineering is not just something I do. It's a practice — like a discipline or a craft. These aren't traits I put on a résumé. They're things I try to live by.
The foundation
I am an essentialist and a stoic. I believe in doing fewer things — and doing them with everything I have.
Most of what we build is noise. The work I'm proud of is the work where I said no to nine things so that the tenth could be exceptional. That applies to code, to features, to meetings, to entire platforms. The discipline of subtraction is harder than the act of addition — and far more valuable.
Culture above everything
I've seen technically brilliant teams produce mediocre systems because no one trusted each other. Culture isn't a perk — it's the environment where good engineering becomes possible. I invest in it the same way I invest in architecture: deliberately, and for the long term.
Control what you can
Stoicism isn't pessimism — it's preparation. I don't rage against production incidents. I design for them. I don't resent ambiguity. I navigate it. What I can control is the quality of my thinking, the honesty of my communication, and the care I put into the work.
The work is the art
There's no separation between craft and engineering for me. The readable function, the honest error message, the migration that doesn't surprise anyone — these are acts of respect for the people who come after. That attention is what turns engineering into something worth doing.
What you actually get
You get someone who will tell you the truth about the system — even when the truth is uncomfortable.
I don't over-promise timelines. I don't under-communicate risk. I don't ship things I'm not willing to put my name on. If something is wrong with the design, I'll say so early — not after a post-mortem. That's not a personality quirk. That's the only way I know how to do this work seriously.
Let's connect
I'm open to senior / staff engineering roles, especially teams working on hard distributed systems problems. If that sounds like your team — let's talk.