Discussion about this post

Matt Duffy:

I agree that the world is messier than we think. I'd also extend that idea: AI's failure modes are so inhuman that almost any error rate is intolerable, because error detection is very difficult. When humans genuinely don't know how to complete a task correctly, they hesitate, hedge, ask for clarification, and engage in social self-checks that let their collaborators calibrate how much verification is needed. AI's overconfidence and its lack of social accountability signals make that verification much harder. It's not that a non-zero error rate is intolerable per se; rather, the relative difficulty of detecting errors makes them much costlier, because they persist.

Alex Hunt:

Good piece. I really liked Oks' post but agree with you on this: "Oks’ framing is skewed toward inefficiency and irrationality. The problem isn’t primarily that we’re poor at solving our problems – it’s that the problems are genuinely hard."
