Darwin AI · 445 words · 2 min
Software is now non-deterministic. Be the human who's responsible.
Until now, every technology humanity built was deterministic. A thermostat measures temperature; if it’s above a threshold N, it turns on the AC. Software did exactly what you wrote. Engineers were trained to think deterministically. Errors were bugs, not features.
```
DETERMINISTIC                         NON-DETERMINISTIC (LLM)
─────────────                         ─────────────────────────
input ──→ [if x > N] ──→ output       input ──→ [ LLM ] ──→ output
              ↓                                    ↓
            same                                 varies
100 runs:                             100 runs:
all 100 identical                     100 reasonable answers,
                                      none identical
you can prove it works                you have to evaluate it
```
LLMs broke that. A language model can take in a thousand details about a room and the people in it and decide whether to turn on the AC, the way a thoughtful person would. Different inputs, different but reasonable outputs. No two runs identical. No deterministic logic underneath the decision.
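To make the contrast concrete, here is a minimal sketch. The `call_llm` helper is a hypothetical stand-in for whatever chat-completion API you use; the prompt and threshold are illustrative choices, not a specific vendor’s interface.

```python
# Minimal sketch: deterministic control vs. an LLM-driven decision.
# call_llm() is a hypothetical stand-in for any chat-completion API.

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (OpenAI, Anthropic, etc.)."""
    raise NotImplementedError("wire this up to your provider")

def deterministic_ac(temperature: float, threshold: float = 24.0) -> bool:
    """Classic logic: same input, same output, all 100 runs identical."""
    return temperature > threshold

def llm_ac(room_context: str) -> bool:
    """LLM decision: reasonable answers that can vary between runs."""
    prompt = (
        "You control the air conditioning. Given this context, "
        "reply with exactly ON or OFF.\n\n" + room_context
    )
    return call_llm(prompt).strip().upper() == "ON"

# deterministic_ac(26.0) is provably True, every time.
# llm_ac("26°C, two people asleep, windows open, light breeze") may
# reasonably say OFF today and ON under the next model version.
```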
This is the first time in human history that we’ve delegated real decisions — not just micro-decisions, macro ones too — to a system that doesn’t run deterministically. And the system itself is a moving target: same input, new model version, different output.
The implications keep unfolding. We can hand entire categories of decisions to AI: marketing campaigns, budget allocation, product roadmap inputs, support replies, even draft strategy. The economically interesting move is to let the model decide and let other models evaluate the decision from different angles before execution. Multi-agent self-eval is real and works.
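Here is a sketch of that decide-then-evaluate loop, reusing the hypothetical `call_llm` stand-in from above. The evaluator angles and the APPROVE/REJECT protocol are illustrative choices, not a particular framework’s API.

```python
# Sketch of multi-agent self-evaluation: one model proposes, separate
# model passes critique it from different angles, and execution happens
# only on unanimous approval. All names and prompts are illustrative.

EVALUATION_ANGLES = [
    "legal and compliance risk",
    "brand and customer impact",
    "cost and reversibility",
]

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    raise NotImplementedError("wire this up to your provider")

def propose(task: str) -> str:
    """One model decides."""
    return call_llm(f"Propose one concrete decision for: {task}")

def critique(decision: str, angle: str) -> str:
    """Another model pass evaluates the decision from a single angle."""
    return call_llm(
        f"Evaluate this decision strictly for {angle}. "
        f"Reply APPROVE or REJECT, then one sentence of reasoning.\n\n{decision}"
    )

def decide_with_review(task: str) -> str | None:
    """Run the decision past every angle before letting it execute."""
    decision = propose(task)
    verdicts = [critique(decision, angle) for angle in EVALUATION_ANGLES]
    if all(v.strip().upper().startswith("APPROVE") for v in verdicts):
        return decision  # cleared for execution
    return None          # rejected: escalate to the human who owns the outcome
```

The escalation path at the bottom is where the rest of this piece lives: when the evaluators reject, a person with skin in the game makes the call.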
But not every decision should be delegated. The clearest example: firing someone. Don’t delegate that. It’s a profoundly human moment. The model can probably do it more “efficiently” — that’s not the point.
So what’s left for the human in this world?
The answer that keeps emerging is: be responsible. Have skin in the game. The model, when it makes a mistake, says “oops, sorry” and moves on. It doesn’t feel anything. It’s not impacted by being wrong. We are. Customers, partners, employees, regulators — they need someone to hold accountable, and the model can’t be that someone.
The human’s job is to own the outcome. To say I am responsible for what this AI did on my behalf, and if it was wrong, I’ll fix it. That ownership is the part that doesn’t get automated. Without it, AI decision-making is unsafe at any speed; with it, you can let the system run very fast.
If you’re a founder, this changes how you think about org design. Not “who does the work” — “who is responsible for the work the AI did.” Pick the people who can hold that responsibility, and let them ride.