The Sorcerer's Apprentice Problem
AI can do the tasks. But who's learning the judgment?
Last week, IBM announced it would triple entry-level hiring in the United States. That alone is notable, given the direction most companies are heading. But the more revealing detail is what IBM did to the jobs. The company’s chief human resources officer, Nickle LaMoreaux, personally rewrote every entry-level job description. Junior developers now spend less time writing routine code, which AI handles, and more time working directly with customers. HR staff intervene when chatbots produce bad answers rather than fielding every question themselves. The jobs aren’t preserved. They’re redesigned.
This is a strange move for a company deeply invested in AI. IBM’s own tools are among the reasons entry-level coding tasks are disappearing. Why would a firm building the automation also triple the workforce that automation ostensibly replaces? The answer matters, because 37 percent of organizations plan to simply replace entry-level roles with AI, according to a Korn Ferry survey of over 1,600 global talent leaders. A recent paper by Stanford researchers Brynjolfsson, Chandar, and Chen, using ADP payroll data covering millions of workers, found that employment for 22-to-25-year-olds in AI-exposed occupations declined roughly 6 percent relative to older workers in the same roles between 2018 and 2024. Entry-level tech hiring at the 15 largest firms fell 25 percent between 2023 and 2024.
These companies believe they’re rationally eliminating work that AI can now perform. I think they’re making a specific mistake, one the political philosopher Michael Oakeshott identified in 1947.
Two kinds of knowledge
In his essay “Rationalism in Politics,” Oakeshott drew a distinction between technical knowledge and practical knowledge. Technical knowledge can be formalized and transmitted as rules: a checklist for client onboarding, a style guide for legal memos, a prioritization framework for deciding what to build next. Practical knowledge cannot. It is absorbed through immersion in a practice. The senior engineer who senses, from the pattern of questions in a client meeting, that the stated problem is not the real problem. The associate who has learned which executives champion new initiatives in planning meetings and which ones actually allocate budget for them. This is not intuition in any mystical sense. It is knowledge that resists formalization, the kind that accumulates through the experience of working inside a practice and cannot be fully captured by a checklist or manual.
Oakeshott’s central argument is that rationalists chronically confuse the relationship between these two kinds of knowledge. They see the checklist and assume it is self-sufficient. His claim is that the dependency runs the other direction:
The knowledge which rationalism identifies as rational is itself really a product of experience and judgment. It consists of rules, methods, or techniques abstracted from practice, tools that, far from being substitutes for experience and judgment, cannot be effectively used in their absence.
Every framework, every process document, every set of best practices is a compressed artifact of someone’s practical knowledge. It captures what that person learned, but not the judgment required to apply it well. Hand the framework to someone without that underlying judgment, and they will follow it literally when the situation demands improvisation, and improvise when they should have followed it literally. Oliver Wendell Holmes Jr. put the legal version of this crisply: “The life of the law has not been logic: it has been experience.” A judge has statutes, precedent, and procedural rules. What she also has, and what no manual provides, is the judgment to know when the letter of the rule serves its purpose and when it defeats it.
Automate the tasks, redesign the apprenticeship
Entry-level work has always been two things at once: a set of tasks to be completed and an apprenticeship in practical knowledge. The junior analyst running routine reports is also learning, without anyone teaching her, which stakeholders actually matter, which data people trust, and how decisions get made as opposed to how the process documentation says they get made. The tasks are useful and worth doing. They are also the vehicle through which the apprenticeship happens.
The correct response to AI is to automate the tasks and redesign the apprenticeship around what remains. This is what IBM appears to be attempting: junior developers still work at the company, but their work has shifted toward customer-facing problem-solving rather than writing boilerplate code. The 37 percent of companies simply replacing entry-level roles are making a different bet. They think they are automating the tasks while preserving everything else. What they are actually doing is eliminating the context in which practical knowledge forms, without building anything to take its place.
Why this matters more than it appears
Practical knowledge is substantially local. Wharton’s Matthew Bidwell studied external hires at an investment bank and found that they are paid roughly 18 percent more than internal promotees, receive lower performance evaluations for their first two years, and have higher exit rates. That two-year ramp-up is the time it takes to acquire the practical knowledge of this organization: its actual decision-making norms, its informal networks, its unwritten rules. You can hire credentials. You cannot hire someone else’s institutional context.
This means companies cutting their junior pipelines are not just losing future mid-level talent. They are creating the conditions for a permanent dependency on outside specialists. If no one inside the firm develops the practical knowledge to evaluate, adjust, or override automated workflows, the organization will eventually need expensive external consultants and system integrators to maintain systems it built but no longer understands. The Bidwell data suggests this is not a hypothetical: every external hire is already, in effect, an expensive partial substitute for the institutional knowledge the organization failed to develop internally.
AI tools will keep getting better. They will handle more situations, more reliably. But someone inside the organization still needs to recognize when the AI’s output doesn’t fit this client, this market, this moment. Someone needs to know when to override the system and when to trust it. That person needs practical knowledge of the domain, and they can only acquire it by working in it. Last December, a mass power outage in San Francisco killed traffic lights across the city. Human drivers adapted, treating dead intersections as four-way stops, reading body language, improvising. Waymo’s autonomous vehicles froze in place, blocking intersections, and the company suspended service. A Carnegie Mellon robotics professor observed: “What if this had been an earthquake?” The AI performed brilliantly within the domain its training covered. At the boundary, it could not improvise.
Immersion, not seniority
Practical knowledge requires immersion in a practice, but the duration and form of immersion vary. Consider software product development. Teams use prioritization frameworks and value-versus-effort analyses to decide what to build next. These tools structure thinking. But they do not make the final call. At some point, judgment takes over.
Steve Jobs in 2007: “Today, Apple is going to reinvent the phone.” Patrick Collison, at 22, launching Stripe: “Most tech companies are building cars. Stripe is building roads.” Mark Zuckerberg in 2021: “The metaverse isn’t a thing a company builds. It’s the next chapter of the internet overall.” These are bets on the future, and some of them, as the metaverse pivot showed, turn out to be wrong. But the capacity to make them, to see what a situation demands before the data confirms it, does not come from a framework. Collison didn’t acquire his judgment through long institutional tenure. He acquired it through deep immersion in a specific practice from an early age: building things on the internet, watching what broke, absorbing the distance between how the payments system was supposed to work and how it actually worked. Practical knowledge is not seniority. It is the residue of participation. For most people, the workplace is where that participation happens, which is why eliminating junior roles matters.
What follows
IBM is not guaranteed to get this right. Tripling entry-level hiring is easy to announce and hard to execute. But the company has identified the correct problem: not whether to automate entry-level tasks, but how to redesign the apprenticeship so that practical knowledge continues to form.
The United States has historically relied on informal apprenticeship: you learned by being junior. Formal registered apprenticeships cover just three-tenths of one percent of the labor force. That worked because entry-level employment was abundant. The interesting institutional question is whether new forms of apprenticeship will emerge organically — through companies like IBM redesigning roles — or whether the informal mechanism breaks down faster than alternatives appear. Some firms will get this right and develop a compounding advantage. Others will discover, as the Bidwell data predicts, that practical knowledge is expensive to acquire after the fact and diluted when acquired from the outside.
In the Goethe poem that gives this essay its title, the apprentice enchants brooms to carry water. The brooms do exactly what they’re told, tirelessly and without error, until the workshop floods. The apprentice panics. He wants to stop the brooms but cannot, because he never learned the master’s deeper knowledge. He tries to break them; each half becomes a new broom, doubling the problem. The sorcerer returns and breaks the spell with a word. The poem ends with his admonition: only a master should invoke powerful spirits. The 37 percent of companies replacing their junior employees with AI are producing very capable brooms. The question they are not asking is who is becoming the master.

