lautaro@wilsf — bash — ~/darwin/non-deterministic-software.md

Darwin AI · 445 words · 2 min

Software is now non-deterministic. Be the human who's responsible.

  • #ai-thoughts
  • #product
  • #responsibility

Until now, every technology humanity built was deterministic. A thermostat measures temperature; if it’s above N, it turns on the AC. Software did exactly what you wrote. Engineers were trained to think deterministically. Errors were bugs, not features.
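The deterministic world the paragraph describes can be shown in a few lines. A minimal sketch (the threshold value and Python itself are my illustration, not the author's):

```python
# A deterministic controller: the same input always yields the same output.
# The 24.0 threshold is an arbitrary example value.

def should_cool(temperature_c: float, threshold_c: float = 24.0) -> bool:
    """Turn on the AC exactly when the temperature exceeds the threshold."""
    return temperature_c > threshold_c

# 100 runs with the same input produce 100 identical answers.
results = [should_cool(26.5) for _ in range(100)]
assert all(results)             # every run says "cool"
assert len(set(results)) == 1   # and all 100 are identical
```

You can prove this works by enumerating cases; that proof is exactly what the non-deterministic column in the diagram below takes away.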

   DETERMINISTIC                       NON-DETERMINISTIC (LLM)
   ─────────────                       ─────────────────────────

   input ──→ [if x > N] ──→ output      input ──→ [  LLM  ] ──→ output
                                                         
                same                                    varies

   100 runs:                            100 runs:
     all 100 identical                    100 reasonable answers,
                                          none identical

   you can prove it works              you have to evaluate it

LLMs broke that. A language model can take in a thousand details about a room and the people in it and decide whether to turn on the AC, the way a thoughtful person would. Different inputs, different but reasonable outputs. No two runs identical. No deterministic logic underneath the decision.

This is the first time in human history we delegate real decisions — not just micro-decisions, macro ones too — to a system that doesn’t run deterministically. And the system itself is a moving target: same input, new model version, different output.

The implications keep unfolding. We can hand entire categories of decisions to AI: marketing campaigns, budget allocation, product roadmap inputs, support replies, even draft strategy. The economically interesting move is to let the model decide and let other models evaluate the decision from different angles before execution. Multi-agent self-eval is real and works.
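The decide-then-evaluate pattern mentioned above can be sketched as a control loop. Everything here is illustrative — the function names, the three review angles, and the deterministic stubs standing in where a real system would call a language model:

```python
# Sketch of multi-agent self-evaluation: one "agent" proposes a decision,
# several evaluator "agents" check it from different angles, and the
# decision executes only if every evaluator approves. In a real system
# each stub below would be an LLM call; here they are deterministic
# placeholders so the control flow is visible.

from typing import Callable, Optional

def propose_decision(context: str) -> str:
    # Placeholder for the deciding model.
    return f"Approve budget increase for: {context}"

def legal_eval(decision: str) -> bool:
    return "budget" in decision      # stand-in for a legal-risk review

def finance_eval(decision: str) -> bool:
    return "increase" in decision    # stand-in for a finance review

def brand_eval(decision: str) -> bool:
    return True                      # stand-in for a brand/tone review

def run_with_self_eval(context: str,
                       evaluators: list[Callable[[str], bool]]) -> Optional[str]:
    decision = propose_decision(context)
    if all(ev(decision) for ev in evaluators):
        return decision   # all angles approve: execute
    return None           # any veto: escalate to the responsible human

decision = run_with_self_eval("Q3 marketing campaign",
                              [legal_eval, finance_eval, brand_eval])
```

The design choice worth noting is the last line of `run_with_self_eval`: a veto does not get auto-resolved by another model; it falls through to a human — which is where the rest of the piece picks up.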

But not every decision should be delegated. The clearest example: firing someone. Don’t delegate that. It’s a profoundly human moment. The model can probably do it more “efficiently” — that’s not the point.

So what’s left for the human in this world?

The answer that keeps emerging is: be responsible. Have skin in the game. The model, when it makes a mistake, says “oops, sorry” and moves on. It doesn’t feel anything. It’s not impacted by being wrong. We are. Customers, partners, employees, regulators — they need someone to hold accountable, and the model can’t be that someone.

The human’s job is to own the outcome. To say “I am responsible for what this AI did on my behalf, and if it was wrong, I’ll fix it.” That ownership is the part that doesn’t get automated. Without it, AI decision-making is unsafe at any speed; with it, you can let the system run very fast.

If you’re a founder, this changes how you think about org design. Not “who does the work” but “who is responsible for the work the AI did.” Pick the people who can hold that responsibility, and let them ride.
