Tacit Knowledge and the SaaSpocalypse
Why the hottest job in tech is the one AI can't replace
When Anthropic released Claude Legal this week, $285 billion in SaaS market cap evaporated in a day. Traders at Jefferies coined it the “SaaSpocalypse.” The thesis is straightforward: if a general-purpose AI can handle contract review, compliance workflows, and legal summaries, why pay for seat-based software licenses?
It’s a fair question. But before the SaaSpocalypse had a name, something was already happening in the labor market that complicates the story. Last November, the Financial Times reported that monthly job listings for “forward deployed engineers” grew more than 800 percent between January and September 2025. These are software engineers who embed physically with customers, working alongside them as they do their actual work. The explosion of AI-generated code has coincided with surging demand for people whose job is, essentially, to go sit in someone else’s office.
This is not a coincidence.
I worked at Palantir between 2010 and 2016, embedded with Fortune 100 companies building data infrastructure. Palantir famously sent engineers into conflict zones; my own deployments were more prosaic, but the principle was the same. More recently I’ve built AI products for radiologists and lawyers, two professions routinely cited as ripe for full automation. I have obvious reasons to find comfort in the idea that embedding with customers can’t be automated. But I think there’s something real in the data, and I think I can explain what it is.
What you learn by showing up
Forward deployment means leaving your office and embedding in the customer’s environment: their conference rooms, Slack channels, and hallway conversations. The point is not primarily to write code. Code is a key output, but it’s downstream of something that doesn’t exist in any database: an understanding of how work actually gets done.
You learn that the org chart is a polite fiction. The real map of influence and trust is invisible, legible only through presence, and it determines which initiatives actually ship and which ones die in committee. You learn which tools people rely on versus which ones they dutifully log into for compliance. You observe the workarounds, the informal protocols, the tribal knowledge passed between colleagues. None of this is written down. Very little of it can be written down.
Tyler Cowen likes to say “context is that which is scarce.” Forward deployed engineers are hunters of scarce context.
Polanyi’s insight
The philosopher Michael Polanyi, a physical chemist before turning to philosophy in his fifties, made a deceptively simple claim, developed in his 1958 Personal Knowledge and later distilled in The Tacit Dimension: “We can know more than we can tell.”
The foundations of expertise, Polanyi argues, are essentially inarticulable. The apprentice watches the master and picks up rules of the art “including those which are not explicitly known to the master himself.” An art which cannot be specified in detail cannot be transmitted by prescription. It can only be passed on by example.
If we take Polanyi seriously, and I think we should, then we can make a precise claim about AI’s limits. A model learns from what has been recorded, so if we cannot write down everything we know, AI cannot learn everything we know. Not because AI isn’t sophisticated enough, but because the knowledge doesn’t exist in a form that can be digitized.
The bitter lesson, revisited
There is a well-known argument in machine learning, Rich Sutton’s “bitter lesson”: simple methods with more compute and data tend to win over clever hand-engineered approaches. Applied here, the objection would be that tacit knowledge is just explicit knowledge we haven’t instrumented yet, and that better recording solves the problem. This deserves real engagement, because it might be right.
But I think better recording fails to solve the problem, for two reasons. The first is that much tacit knowledge isn’t just unspoken but unavailable to introspection. The experienced lawyer doesn’t consciously know which arguments will persuade this particular general counsel; she has absorbed that sense through hundreds of interactions that were never documented, and couldn’t fully articulate it even under interrogation. The knowledge lives in practice, not in any potential dataset.
The second reason is more fundamental. The forward deployed engineer embedded with a customer isn’t passively observing a process that could theoretically be filmed. They’re participating in conversations where real constraints emerge only through trust built over weeks. They’re present for the hallway moment where someone reveals the actual reason a project failed. You cannot train on what was never recorded, and much of what matters most in organizations is never recorded precisely because it requires trust and presence to surface.
Could you build a surveillance apparatus comprehensive enough to capture all of this? Maybe. Brain-computer interfaces might someday access knowledge that even the knower can’t articulate. But we’re a long way from reading the neural correlates of “I trust this person enough to tell them why the project actually failed,” and even further from a society that would accept it. For now, and for any planning horizon that matters to the companies navigating this moment, tacit knowledge remains accessible only through presence.
What automation actually does
Building AI tools for radiologists and lawyers has taught me something that doesn’t fit neatly into either the utopian or dystopian narrative. When you automate the routine work, human attention shifts to harder problems. But the important thing is not that these problems are new. Lawyers have always asked how a provision should be structured for a client’s unusual circumstances. Radiologists have always faced ambiguous imaging. The difference is one of proportion: when AI handles the commodity work, tacit-knowledge-intensive problems go from being 20 percent of someone’s day to being 80 percent of it. The work that remains is disproportionately the work that requires having sat in enough of the client’s meetings to sense unspoken tensions between departments.
This creates a problem for the automation-replaces-everything thesis. Each layer of automation makes the remaining human work more dependent on tacit knowledge, not less. We’re not watching AI slowly close the gap between explicit and tacit knowledge. We’re watching it clarify where the gap lies.
Predictions
If I’m right, in five years the companies winning in legal tech and other vertical software will employ more forward deployed engineers per customer than they do today, not fewer. The proportion of code written by engineers who are embedded with customers, rather than engineers who have never met one, will increase. And the SaaS companies that survive the current repricing will be those that already have deep customer embedding practices, not those with the most features or the best integrations.
If I’m wrong, the forward deployed engineering boom should prove a transitional blip, a brief adjustment period before AI learns to access context without human intermediaries. In that world, we should see general-purpose AI agents successfully handling complex, context-dependent enterprise workflows on their own by 2028 or so. I’d bet against it.
What the SaaSpocalypse actually reveals
The market sell-off reflects a genuine insight about two converging forces. AI commoditizes code generation, turning feature sets from moats into table stakes. Simultaneously, AI empowers companies to build their own internal tools, starting with the tacit knowledge their employees already possess. This changes what external software vendors can offer. The winners will be those that embed deeply enough to access tacit knowledge that rivals what internal teams have, then leverage that understanding to build tools internal teams cannot.
The SaaSpocalypse isn’t the end of software as a valuable human endeavor. It’s a clarification: the value was never in the code.

