I’ve used computers longer than most people have used their own names. I was around before the GUI, before Windows made everything clickable and cute. Back when everything was a prompt, and curiosity came with a blinking cursor.

Naturally, I gravitated toward the machines. And stayed.

Sysadmin for ten years. Software developer for another ten. Then came DevOps — the hybrid child of both disciplines, born in the YAML mines and raised on CI/CD pipelines. It’s clever. It’s powerful. It’s… solved.

And that’s the problem.


In software development, the job is to take something messy and reduce it. Deconstruct the beast into bite-sized problems, and tackle them one by one. It’s elegant, until you start seeing the same types of problems repeating across domains. And worse: you realize they were already solved — sometimes decades ago.

You don’t feel like a pioneer. You feel like a curator.

Worse still, you begin to carry this clarity outside of code. You see problems in life the same way: deconstruct, solve, repeat. You stop being surprised. It’s a blessing, but also a quiet curse. The magic fades.

Sysadmin work? I can’t go back. The roadmap is public domain at this point: build a pipeline, enforce policies, monitor everything, rotate keys, stay ahead of drift. There’s real money in it — because most companies are always five years behind — but the intellectual puzzles are gone.

Software development? Not unless I’m handed a greenfield and a blank check. Most of the time, it’s glorified data shuffling. And when it’s not, you’re spelunking in legacy codebases that were built by accident and defended by inertia.

Yes, I flirted with the idea of picking up COBOL or FORTRAN — where the money hides in bank vaults and mainframes. But monetizing entropy isn’t a good enough reason to endure it. Life’s too short for linguistic archaeology.

So I looked again. Back to what I enjoyed. I’ve always liked virtualization. I understood machines by abstracting them. From jails to chroot, from LXC to Docker, from containers to orchestration. I followed the thread to Kubernetes, and from Kubernetes to multi-tenant edge networks and ephemeral compute.

I arrived at the truth: most of the infrastructure world has also been tamed.

Our industry poured its demons into YAML, and then called it best practice. If you want to deploy something now, you declare it. YAML is our spellbook, and Kubernetes is our conjurer. And yet, within this hyper-declarative universe, fragility hides in the details.
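The ritual looks something like this — a minimal sketch of a Deployment manifest, with placeholder names and a hypothetical image, not taken from any real cluster:

```yaml
# You declare the desired state; the Kubernetes control loop
# works to make reality match it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app              # placeholder name
spec:
  replicas: 3                 # desired state: three pods, always
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: example/demo:1.0   # hypothetical image
```

You never say how to start the pods. You state that three should exist, and the scheduler decides where; that gap between declaration and placement is precisely where the details hide.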

Which node fails first? Where do you place trust in a zero-trust mesh? Can you survive chaos when the scheduler becomes the bottleneck?

There are still dragons at the edge. Disaster recovery, failover topologies, RTO (recovery time objective), RPO (recovery point objective) — these acronyms become art when you’re optimizing for systems that aren’t allowed to break.

I know all this not just because I built it, but because I’ve watched it break. Over and over.

And now I’m starting to think: maybe it’s time to go on offense.

Because once you’ve mastered how systems are built, you understand how they can be broken. You see the cracks before others even see the wall.

The platform is there. The boredom is just pressure waiting for purpose.

And I’ve always worked best under pressure.