Why ‘Just Write Better Prompts’ Isn’t the Answer


Every time I challenge the real value of LLMs, I get the same reply:

“You just need to write better prompts.”

It’s become the catch-all answer. If an LLM fails, the blame lands on the human: your prompt wasn’t good enough.
And for a while, I wondered—am I missing something? Is there a magical prompt I’m not writing?

I don’t think so.

Because I can’t see how LLMs replace humans, or even fully replace one human function. And the reason is deceptively simple.

Computers Are Binary. LLMs Are No Exception.

We like to pretend LLMs are these fuzzy, intuitive thinkers. But at the core, they’re just very large, very sophisticated pattern-matching machines built on rigid rules.

Pick the next token. Then the next. Then the next.

It’s still 0s and 1s.
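To make that concrete, here is a minimal sketch of greedy next-token decoding, assuming the Hugging Face transformers library and using "gpt2" purely as a stand-in model. It is not what any production system runs verbatim; it just shows the shape of the computation: score every token, pick one, append it, repeat.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in model; real systems are vastly larger, but the loop has the same shape.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The right architecture for this service is", return_tensors="pt").input_ids

for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits                      # a score for every token in the vocabulary
    next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)   # greedily pick the single most likely token
    input_ids = torch.cat([input_ids, next_id], dim=-1)       # append it and do it all again

print(tokenizer.decode(input_ids[0]))
```

Sampling strategies (temperature, top-p, and so on) vary, but none of them change the basic point: the model only ever chooses the next token.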

“Better prompting” usually means giving the model more context, more constraints, more explicit detail so it can better guess the next word. And while that might work for structured tasks like summaries, code generation, and formatting, there’s a reason it breaks down for real decision-making.

Humans don’t work like that.

We don’t consciously enumerate every piece of context, every tradeoff, every edge case before making a decision. If we did, we’d never finish anything.

We use experience, pattern recognition, gut instinct, and intuition. We also carry unspoken context—things we know but would never think to explain.

The Architectural Decision Problem

Take software architecture.

When I make an architectural decision, I’m not just listing pros and cons into a prompt. The decision is shaped by years of accumulated experience: old failures, tradeoffs that went badly, and the unspoken context I would never think to spell out.

If I tried to craft a “perfect prompt” with every relevant detail, it would not only be impossibly long; it would also mean I had already done the entire decision-making process myself.

By the time an LLM could answer, the work would already be done.

That’s the flaw in the “just prompt better” argument:
if you need to spell out every ounce of context for a model, you don’t need the model.

LLMs Don’t Have Experience — They Simulate It

The acronym itself tells us what’s going on: Large Language Model.

Not an experience model. Not a judgment model.

Just language.
They predict the statistically most probable continuation of text. And yes, with enough scale, that can look magical. But it’s not intuition. There’s no internal representation of regret, tradeoffs, risk tolerance, or the politics of a codebase and a business.

LLMs make confident guesses. Humans make decisions.

What LLMs Are Good At (And That’s Fine)

None of this means LLMs are useless. They’re incredible at structured tasks: summaries, code generation, formatting, and reshaping text you already understand.

They’re tools — powerful tools — but not replacements for engineers, designers, product thinkers, or leaders.

Because to replace a human, you would first need to replace context.

Not the words — the lived experience.

And that’s not a prompting problem.


