Early in my career, software engineering felt like magic.
I started out in embedded systems, where you’d flash code onto a tiny chip and suddenly your washing machine knew how to run a spin cycle. It was hard not to see it as sorcery. But, of course, the more you learn about how things work, the less magical they seem. Eventually, it’s just bits and bytes. Ones and zeros.
I had the same realization when neural networks became popular. At first, it sounded revolutionary. But underneath all the headlines? It’s just math. A lot of math, sure — but still math. Weighted sums, activation functions, matrix multiplications. Nothing supernatural.
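The "just math" point is easy to make concrete. A single neuron is nothing more than a weighted sum pushed through an activation function. Here's a toy sketch in plain Python (the numbers are made up; real networks just do this millions of times over, in matrix form):

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, plus a bias term
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes the sum into (0, 1)
    return 1 / (1 + math.exp(-z))

# One neuron "firing" on three inputs: no sorcery, just arithmetic
print(neuron([1.0, 0.5, -0.2], [0.4, 0.3, 0.9], bias=0.1))
```

Stack enough of these together and you get a neural network. The scale is impressive; the ingredients are not.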
The marketing layer of software engineering
Somewhere along the way, marketing started playing a bigger role in software engineering. That wasn’t really the case a decade ago. Back then, it was enough to build useful tools. Today, you need to wrap them in a story.
And that’s fine—marketing helps new ideas spread. But it also means there’s more hype to filter through.
Take large language models (LLMs). Fundamentally, they’re just probabilistic models trained on huge datasets. Underneath it all, you’re still working with ones and zeros. Just like always.
These models are designed to predict the next word in a sequence, following statistical patterns in the data they’ve seen. My guess? Their outputs cluster around the statistical middle of that data. Which means most of what they produce will be… average. Sometimes impressive, sometimes mundane, but always pulled toward the center.
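You can see the "predict the next word from statistical patterns" idea in miniature with a bigram model: count which word follows which in a corpus, then pick the most frequent successor. Real LLMs use transformers and vastly more data, but the framing is the same. A toy sketch (the corpus here is invented for illustration):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram statistics)
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the statistically most likely next word
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat", because it follows "the" most often here
```

The model can only remix patterns it has already seen, which is exactly why its output gravitates toward the common case.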
That’s why it can feel like LLMs are progressing toward magic, when really they’re just very good at remixing what already exists.
Garbage in, garbage out—still true
I’ve used these models for a lot of tasks. They’re helpful. They save me time. But the old rule still applies: garbage in, garbage out. Companies often underestimate how much work it takes to produce clean input: the high-quality prompts, structured data, and thoughtful context that lead to useful outputs.
And yes, using LLMs as an enhancer is great. I do it daily. But it’s not world-changing magic. It’s a tool. A powerful one, but still a tool.
Where I land
I’m not anti-AI, and I’m not cynical. I’m just realistic.
Software engineering is still about solving problems with logic and math. LLMs are part of that toolkit now. But they’re not some mystical new force — they’re the same ones and zeros, repackaged in a new (and very marketable) way.
And that’s okay. Just don’t forget what’s behind the curtain.