Darwin AI · 319 words · 1 min
Veto anyone on your team who doesn't believe AI keeps getting better
Investors in San Francisco are starting to slice their portfolios in two: companies founded before the GPT launch, and companies founded after. The hypothesis is that the “after” cohort outperforms — faster shipping, faster decisions, teams that fundamentally believe AI rewrites everything.
I don’t know if that hypothesis is true yet. But the underlying intuition matters: a team that doesn’t believe AI keeps improving will accidentally build for a world that won’t exist. They’ll over-engineer for today’s model limits. They’ll cap their roadmap at today’s prices. They’ll architect around assumptions that will be wrong in twelve months.
So we have an internal rule at Darwin: veto any comment that assumes AI doesn’t keep getting better, or that it gets more expensive instead of cheaper.
We veto each other. Junior people veto me. I veto senior people. The rule: if someone’s argument depends on “the model won’t be able to do X” as a permanent claim, that’s a veto. Saying “we have to build this workaround for now because the model can’t do it yet” is fine — that’s tactical. Saying “we have to build this because the model will never do it” is not.
If you let the second kind of statement live, it spreads. Engineers build extra scaffolding. Product managers add escape hatches. Designers add complexity to compensate. The whole org turns into a hedge against a future that’s not coming.
We’ve all been vetoed at this point. It’s healthy. The veto isn’t punitive — it’s a flag that says “you just made a bet against AI capability, please re-examine it.”
Going back to the sailing analogy: the veto rule is what keeps the sail big. A team that lets AI-pessimism in is a team that stops trimming the sail. Eventually the wind comes, and they’re not pointed the right way.
Try it for a week. Veto every “AI won’t be able to” statement. See what happens to your roadmap.