Bad Potatoes
Erosion of insight and the problem of objective specification
A consultant for a large CPG company visits a client’s factory floor to assess its processes.
He walks over to a sorting area, where potatoes are moving down a belt along with some random objects: a golf ball, a plastic bottle. The worker is busy removing rotted, misshapen potatoes from the belt and discarding them into a large bin, but lets the other objects go by.
The consultant asks the worker: What are you doing? Why did you let those objects go through?
The worker replies: Sir, I’m just here to pick out the bad potatoes.
This is a real story, and when our friend first regaled us with this anecdote from work, my husband and I laughed. How could this person not understand that part of his remit, in addition to picking out bad potatoes, was to pick out things that weren’t potatoes at all? A golf ball is a pretty bad potato, we joked.
That is, until I had my own embarrassing “bad potato” moment.
My husband started a slow cooker one morning—beef stew maybe, or beef fajitas, I can’t quite remember. He left for the office, and since I typically work from home, it was my job to keep an eye on the slow cooker and give it a stir from time to time. My husband was even kind enough to remind me during the day: “Did you stir?” Yes, I assured him.
But that particular day the slow cooker somehow got turned off, maybe by someone wiping the counter or brushing past in the kitchen. And while I diligently stirred it a few times during the day, I somehow neglected to notice that the thing wasn’t on at all, and the meat wasn’t cooking.
I can give plenty of excuses—I was distracted, thinking about work, and so on—but here’s how I assess what happened in my little dumb brain:
Not having been the person who conceived the project, I did not have as much cognitive investment in the overall outcome.
I focused too narrowly on checking the box of my assigned task, and had some kind of situational blinders. Maybe on some level I registered that the meat wasn’t looking how I expected, but not enough to investigate further.
I failed to generalize my task, “stir the slow cooker,” to the broader objective: ensure the meat is cooked in time for dinner.
I will never live down my “bad potatoes” moment. My husband still teases me, but now we also can’t help noticing “bad potato” scenarios everywhere in our work and life. “That was totally bad potatoes,” we’ll sometimes say.
I’ve come to realize that these situations share a deeper pattern. Somewhere between intention and execution, something is lost.
It’s easy to dismiss “bad potatoes” as stupidity or ineptitude.
(Or maybe this whole essay is just cope for my own stupidity and ineptitude, but bear with me.) But I think it points to a bigger problem: it is hard to give objectives to a system and reliably get the intended outcome. This is the classic problem of objective specification in complex systems, and it shows up across disciplines as mechanism design in economics, control theory in engineering, and reward design in AI.
Lately, I’ve been thinking about a phrase I came across in policy scholar Robert Klitgaard’s 2022 book Prevail: erosion of insight. Klitgaard uses “erosion of insight” to describe how powerful insights often decay into crude versions of themselves. As he writes, “an insight that begins as a strikingly helpful simplification and clarification of the world can become a painfully unhelpful complete view of the world.”
In other words, a good idea starts as a useful shortcut. Over time, the shortcut becomes the rule, and the rule replaces the insight that made it useful in the first place. What began as a helpful simplification or abstraction hardens into a brittle procedure.
The bad potatoes problem is exactly this kind of erosion: what looks like stupidity is actually a gradual loss of the insight behind the task.
In the world of bad potatoes, this erosion happens in three parts: erosion of ownership, erosion of context, and erosion of objective.
Erosion of ownership happens as the objective moves from the person who conceived it to the person executing it. The executor does not have the same cognitive investment in the “why” behind the task, and therefore is less likely to question whether the task is actually accomplishing the intended outcome.
Erosion of context happens when a task becomes detached from the surrounding situation it was meant to operate within. The executor performs the task as written, but does not necessarily incorporate related signals, edge cases, or obvious anomalies that fall just outside the narrow definition of the task.
Erosion of objective happens when the task itself becomes a proxy for the real goal. Over time the proxy becomes the objective. The worker removes bad potatoes, but no longer remembers that the real goal was to ensure that only potatoes go into the bag.
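The gap between the proxy and the real objective is easy to see in code. Here is a minimal sketch—the data shapes and function names are mine, invented for illustration, not anything from the actual factory:

```python
def remove_bad_potatoes(belt):
    # The task as assigned: pick out the rotten potatoes.
    # A golf ball is not a rotten potato, so it sails through.
    return [x for x in belt if not (x["kind"] == "potato" and x["rotten"])]

def keep_only_good_potatoes(belt):
    # The original objective: only sound potatoes reach the bag.
    return [x for x in belt if x["kind"] == "potato" and not x["rotten"]]

belt = [
    {"kind": "potato", "rotten": False},
    {"kind": "potato", "rotten": True},
    {"kind": "golf ball", "rotten": False},
]
```

Both functions faithfully execute a task; only the second one remembers why the task exists.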
In large organizations, fighting these erosions is a constant battle: ensuring context isn’t lost between teams or across knowledge supply chains. I have written about related concepts, like assumption provenance and the bullwhip effect in knowledge work, which are symptoms of this broader erosion of insight.
AI, though, might actually help systems and organizations resist it.
While it is not always easy to specify objectives for an AI system, incorporating AI into workflows creates a useful forcing function:
Forces you to document assumptions the model should consider
Forces you to point to all relevant context
Forces you to specify clear objectives
Forces you to define success criteria in advance instead of assuming shared mental models
Forces you to notice gaps in the specification when the model predictably does the wrong thing
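That forcing function can itself be sketched as a checklist in code. This is a hypothetical spec format—the field names are invented for illustration, not from any real framework—but it shows how handing off a task forces the implicit parts into the open:

```python
# Parts of a task specification that must be explicit before handoff.
REQUIRED = ["objective", "assumptions", "context", "success_criteria"]

def missing_fields(spec):
    # Return the parts of the specification that were left implicit.
    return [f for f in REQUIRED if not spec.get(f)]

slow_cooker_task = {
    "objective": "Meat is fully cooked by dinner",        # the real goal, not the proxy
    "assumptions": ["The cooker stays on all day"],        # now written down
    "context": ["The counter gets wiped; the switch is easy to bump"],
    "success_criteria": ["Meat is tender by dinnertime"],  # agreed in advance
}

proxy_task = {"objective": "Stir the slow cooker"}         # the eroded version
```

Running `missing_fields` on the eroded version surfaces everything a shared mental model used to carry.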
In other words, the problem of bad potatoes is (obviously) not about potatoes. It is about how easily intentions degrade as they move through a system. The larger and more distributed the system becomes, the more likely it is that someone somewhere is faithfully executing the wrong task.
The worker removes the bad potatoes. The stew gets stirred. And yet, somehow, dinner still isn’t ready.

