What are you optimizing for?
It’s increasingly apparent to me that the central question behind most of my clients’ more tactical questions is one about the relationship between Purpose and Progress.
And this central question presents itself when they’ve realized they’ve built themselves a trap. It’s a familiar pattern: clients transition from one state to another. First they have a product or service they’re desperate to get off the ground, or a market they want to wedge into. Next they look to expand (new products, new features, new audiences, new markets…). All the while they’re building the apparatus of growth - metrics, KPIs, data dashboards, and ever more optimization architecture. What this does is increasingly constrain the shape and function of their org to reduce entropy. Efforts at efficiency, across every function, further cement the implicit logic of the system, because these very techniques of efficiency are not value- or content-neutral. Instead, they are purpose-built to rationalize and translate real, tangible stuff into tokens of symbolic exchange.
You’ve essentially built an architecture of criteria, a machine of optimization, that dictates what is legible, because when you thought you were reducing entropy, what you were in fact doing was applying a particularly voracious type of compression. What’s signal to you and your customers is noise to the system. What’s signal to the system is only the effect of value exchange. It can’t participate in anything else; in fact, it must assume the original source of value is provided for, and regenerated, elsewhere. And further, for it to work, that original value is what the system consumes to run. It’s a story as old as finance capitalism and liberalism, and it’s the same mechanism that’s driving Russia to embrace Orthodoxy and post-liberals to abandon libertarianism. Things get worse as you scale if you aren’t finding new sources of value. For a corporation and its customers, it shows up as crappier clothes, tasteless pizza, and unreliable cars. It’s why OpenAI is getting into porn, libido seeming an almost unlimited well of desire to extract from.
At some point, the leaders of any organization that has not fully converted into a financial institution reach a state wherein they just don’t know what to do next. They work within this apparatus and lose the ability to think past it. It was easier when they didn’t know. They don’t recognize a clear opportunity or extension, they struggle for original ideas, and the ground beneath them starts to erode. Maybe their product is getting commoditized, maybe a disruptive new entrant is taking their market share, maybe a new technology is making them obsolete, maybe their customers are disappearing. Maybe all of these simultaneously. They typically switch to triage, look to convert product into financial instruments (like charging a subscription for heated seats in a car), or get into porn and ads (same thing).
What finance capitalism is to product, AI is to workforce
But I think something is different with AI, and it doesn’t much matter how real or hyped the AI revolution is, at least not for the point I’m trying to make here. AI is an accelerant, but not of productivity as you know it.
People are existentially spooked by AI. On top of the usual business state-changes, they now have a new capability with massive implications for their workforce, their processes, even how they understand what a business looks like. And when the AI bot does some activity even partially as well as their own team, the question becomes, “What do I really need in terms of staff?”
When it does something even partially as well as they do, the question looms: “What’s special about this business, about my team, about me, or about humanity?”
And where this is clearest to me is precisely when the AI produces some plausible bullshit. AI doesn’t even need to have good output, because most human output isn’t very good. Whether it’s a bit of illustration, or “artwork”, or a report, or some insights about their business or customers. I think the crisis is not when your AI tools make something incredible. That would be genuinely exciting. It’s when the AI makes something good enough, but actually kind of shitty if you look at it closely enough.
And this is most disconcerting when people realize they’re looking at a pretty good mimic of their own output.
Here you’re not seeing what you are called to be, or what you’re destined to do, but what you’ve been doing, and not noticing, because the doing of it required enough time and investment to occlude or distort its real value. There are a few cognitive and structural biases at work that can no longer do the heavy lifting of producing the sensation of progress where none existed. In other words, so much of the human output I’ve seen is slop, but so many people pretended it wasn’t because it kept people busy and consumed a lot of resources. Both Public Choice theory and mimetic desire explain why this gets exacerbated in organizations. So one day you wake up to a pretty high-fidelity mirror that provides an instantaneous reflection of what a lot of your work looks like. AI slop is just replacing human slop. And slop wasn’t better just because it was made by people.
At this point, where doing business once looked like an exciting, if intractable, set of external challenges to overcome - challenges that animated the workforce - it starts to look more like a lack of imagination, virtue, and capability, set on a path-dependent course toward the commoditization of things and people. That’s when the existential questions start to creep in. It’s hard to get up and fight the good fight when you don’t believe it’s good, or even a fight.
Purpose
So if you’re in a leadership role experiencing any of this, the first question that comes up is one of “purpose”. If you can just unlock the purpose you can get back on track.
Let’s make an assumption to test it out (an admittedly wrong assumption, but it illustrates the point). Assume AI gets much better - so much so that it effectively reproduces your staff, tools, and processes. Assume you can now do whatever you want and whatever you need without the performance and friction of managing people as you have been. You reduce workforce entropy just as you have production entropy. Now what? You’ve achieved a sort of commercial omnipotence (and so have all of your competitors), but what do you do with it? Maybe you can vibe-code a stupid app, but now (without all of the blood, sweat, and tears) you can see in an instant that the app is pretty stupid. You can churn out an artificial sinfluencer campaign to exploit people’s insecurities and vices even faster than before, at the exact same speed as your competitors.
What once felt like good work, because it kept people employed and felt like a hero’s journey, is now exposed. The veil of “productivity” is lifted and you’re a bit like Bill Murray in Groundhog Day. You’re not producing anything.
So you think back to why you started all of this in the first place: solving a real problem. But the particular problem you’d started with (maybe it was making pizzas or building cars) has likely been solved, and you’re a little wiser about where it all leads, and you ask yourself, “Why do I exist now? What should I be doing? What’s my purpose?”
It’s here that you can see the attractiveness of something like Simon Sinek’s “The Why” model. But I’ve not once seen it produce anything decent. For every leader interrogating their “why”, I’ve seen only churn, frustration, and abandonment. And the output, produced just to get the exercise over with, has been under-specified language, kicking the can down the road, and the least inspiring pablum you could imagine. AI/people slop. I think the problem is with the question itself, and that becomes apparent when you see the types of answers people arrive at.
Content-agnostic “goods”: this is when you end up with context-independent nouns that sound like “good things” as long as you leave the content and context out. Something like “Freedom” or “Joy” or “Happiness”. It’s the discourse of political sloganeering: under-specified, vague, and impotent to make critical distinctions. This is the kick-the-can-down-the-road answer.
Content-dependent “goods”: this is when you substitute a product for a purpose and call it done. “We’ll make the best [X].” It’s not terrible, but it brackets out the constitutive causal structure that connects the solution to the needs, context, capabilities, and skills you have. And the problem is exactly the one articulated above: you likely don’t have the apparatus to make the best product, because your product is just a vehicle for something else - the thing you built the apparatus to deliver anyway.
The most typical answers combine the two. Bolt a few adjectives onto a solution and hope for the best (“We’ll build the best [X] in order to provide [Y].”). For example, “We build the tools that help people take back their time.” Or “We will build the best platform to provide human connection.” It works well enough if your real purpose is externally generated anyway; otherwise it’s the cargo cult of answers. The big issue here is that you abdicate the criteria for what “best” means.
Short of becoming a monk, the best answers are typically the ones that bracket purpose altogether (or, more accurately, treat it as an emergent property of practice - a property that is disclosed rather than represented) and instead focus on what you have, what you know, and what you can positively do with it. You assess your skills and capabilities, look around, and see what you can improve. There are two critical conditions to this that I can see:
You concede that “the why” is not representable (outside of deep theology); it’s enacted. You focus instead on “doing better”.
You address head-on what “better” looks like given your context and area of practice. You focus on Progress.
OK, so progress toward what end? (Back to purpose…) Not necessarily. I’ll explain in the next article, but to give a hint: there are multiple models of progress available, and not all of them require a representable purpose (generally it’s the utopian ones that do).