This is such an insightful piece - many thanks for thinking this through and setting it out. Lots to ponder, but one specific thought strikes me. Coming from a Big 4 consulting background, it sounds to me like the path from entry level is quite clear.
Junior staff always did two things: lots of grunt work, which can be automated, and living and working in the client's business, which created an enormous surface area for understanding the real business.
The entry-level job gets reframed as FDEs, but the need for that surface area is even more acute. Engineering the right context depends on seeing and capturing the right signals. Often these never reach the centre, so the senior person who has the experience to ask the right questions still needs to be fed the right information.
My guess is that the firms that have cut back on graduate hiring will regret it in two to three years, when the economy shifts towards this model.
This is a great piece. "As models become more capable, organizations deploy them on harder, more judgment-intensive tasks, and each new class of task requires its own context engineering. Capability without relevant context produces confident but misaligned output." And thank you for the Last Mile shoutout!
The trunks and twigs metaphor nails this. I've been wrestling with exactly this distinction in practice. What goes into my CLAUDE.md (the persistent config file for coding sessions) is trunk knowledge: stable conventions, build commands, architectural constraints. The twigs are the per-task context I load before each implementation, and they rot fast if you don't curate them.
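To make the split concrete, here's a rough sketch of what a trunk-only file looks like (the contents are illustrative, not my actual config):

```markdown
# CLAUDE.md -- trunk knowledge only (illustrative)

## Conventions
- TypeScript strict mode; no `any`
- Tests live beside source files as `*.test.ts`

## Build commands
- `npm run build` -- compile
- `npm test` -- run the test suite

## Architectural constraints
- The API layer never imports from `ui/`
- All database access goes through `src/db/`

<!-- Twig context (per-task notes, investigation output) is loaded
     per session and pruned afterwards, so it never rots in here. -->
```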
Wrote up the practitioner's side of this when the official best-practices guide dropped (https://reading.sh/context-is-the-new-skill-lessons-from-the-claude-code-best-practices-guide-3d27c2b2f1d8?postPublishedType=repub), and the overlap with your context-engineering framing is striking. Their language is blunter ("context is a finite resource with diminishing marginal returns") but it's the same economic insight.
Have you looked at multi-agent patterns as a response to the curation problem? Spinning up isolated subagents for investigation keeps the main session lean. It's a rough version of the "glean" step but it works surprisingly well in practice. Wondering whether that maps to your Channel 2 distinction or if it's something else entirely.
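For concreteness, here's a minimal sketch of the pattern I mean; `call_model` is a hypothetical stand-in for whatever LLM API you use, not a real library call:

```python
# A minimal sketch of the isolated-subagent pattern, assuming a single
# hypothetical LLM entry point `call_model` (not a real library API).

def call_model(system_prompt: str, user_prompt: str) -> str:
    """Hypothetical stand-in for one LLM call; wire up your own client here."""
    raise NotImplementedError

def investigate(question: str, sources: list[str]) -> str:
    """Run one investigation in a fresh, throwaway context.

    The subagent reads the raw sources, but only its distilled
    summary flows back to the caller -- a rough "glean" step.
    """
    joined = "\n\n".join(sources)
    return call_model(
        system_prompt=(
            "Answer the question from the sources. "
            "Reply with a short summary of only the relevant findings."
        ),
        user_prompt=f"Question: {question}\n\nSources:\n{joined}",
    )

def main_session(task: str, open_questions: list[str], sources: list[str]) -> str:
    """Main agent: delegates investigation, keeps its own context lean."""
    # Each question burns a disposable subagent context; the main
    # session grows only by the distilled answers, never the raw sources.
    findings = [investigate(q, sources) for q in open_questions]
    return call_model(
        system_prompt="Complete the task using the curated findings.",
        user_prompt=f"Task: {task}\n\nFindings:\n" + "\n".join(findings),
    )
```

The point is that the subagent's context dies with it; only the summary survives in the main session.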
Love this piece, Chris. I guess the obvious pivot here for white-collar professionals is to get their boots on the ground, work with customers, and get better reps. I work in a mixture of M&A and corporate finance in the RNG space right now, and some of the bigger drivers not captured in our model relate to shifting regulatory winds, trust in counterparties, and feedstock subtleties.
This question of centralization vs. decentralization keeps coming up. It's not hard to see why: it relates to power dynamics, which both motivate those who seek power and worry those who prefer not to. We don't have to look far to find cases where consolidation of power has enabled great harm. That said, consolidation gets a bit of a bad rap, because the total lack of it has never been stable enough to end up anywhere good either. Our historical precedents should really tell us that a balance has worked best.
If you follow libertarians, for example Tyler Cowen, they occasionally imagine an idealized world where AI agents are given total information about our preferences and then go out and act on our behalf with these preferences in a market-oriented way to establish an equilibrium, but without the mess of all that human time. While interesting, this misses something I've long noticed: once preferences have been given to an agent, they are alienated (or legible), and the entire decentralized market becomes unnecessary and inefficient. If there is an equilibrium outcome, then from an information-processing point of view it can be found more efficiently by bringing that data together. To the libertarian mind that's a freakish thought, since it's ultimately authoritarian if implemented that way, even though the two computational routes would follow the same rules, just one with far lower latency.
Of course, there isn't one outcome, since the adversarial capability of each agent and the assumptions that establish the market still shape the result, which undermines the libertarian ideal that this market represents some fair compromise between preferences. Those differentiating factors don't correspond to anything we can relate to fairness, so you might as well debate fairness in a different context and implement it directly, rather than assume it will arise organically from the interaction of personal agents.
Getting past those two debates, I think it is clear that AI overall supports centralization, due to these changes in information processing, information availability, and the ability to integrate information in a way that cuts into our most protected domains. Thus arises the concern that power would become centralized in a few individuals or corporations (or a few individuals who control a few corporations).
It's a reasonable concern for sure. But if you follow the market approach to its terminus, you see there's a resolution. If information processing has advanced in this way, it's not just large corporations that benefit, but the biggest corporation of all: government. The information-processing disadvantage of a non-market-oriented economy disappears. That's led some to suggest a need for good old communist-style expropriation. But in those dynamics lurks another resolution. If government can invest as effectively as markets, it can use a moderate level of taxation and matching performance to slowly accrete all market value: https://substack.com/@norabble/note/c-200989526?r=10qod6
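To sketch the arithmetic behind "slowly accrete" (a toy model of my own, not necessarily the one in the linked note): let private wealth $P_t$ earn return $r$ taxed at rate $\tau$, while government wealth $G_t$ earns the same return untaxed, plus the tax inflow:

$$P_{t+1} = P_t\bigl(1 + r(1-\tau)\bigr), \qquad G_{t+1} = G_t(1+r) + \tau r P_t.$$

For any $\tau > 0$, $P_t$ compounds strictly slower than $G_t$, so the private share $P_t/(P_t+G_t)$ decays toward zero: matching performance plus even moderate taxation is enough.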
The tension I'm feeling with this is the view that context is something we can fully get ourselves around so as to "engineer" it. That is an assumption made by the tools themselves. But humans act within a reality in which we are immersed. "Externalization" - standing outside a problem so as to solve it - works for some things (coding) but not others.
I suspect that the view that we can fully stipulate our context so that a machine acts as we would is basically wrong.
So even if we say that centralization is wrong, decentralization does not solve the problem fully enough for a machine to reliably act in the world on our behalf.
It's not really a question of centralization vs. decentralization; it's a question of computational structure and algorithm. Questions emerge in unpredictable ways and at unpredictable times. Answering them efficiently and correctly may require a small scope of information or a large one. Increasingly, larger scopes will be preferred when they provide incrementally better answers. This means the competitiveness of an enterprise comes down to how it chooses to answer such questions, and how it structures itself to answer them better and better.
This is the same structural question that leads both to Hayek's conclusion and to the fall of the Soviet Union.
I believe Brynjolfsson and Hitzig are right about centralization, but not for the reasons they give.
Independently, I've been building a decomposition of transaction costs into three separate components, including the "synchronization tax," which broadly fits what you're describing as "curation cost." I tie these costs directly to the thermodynamic costs of building overlapping descriptions. You might find the analysis interesting or useful:
https://www.symmetrybroken.com/maintaining-divergence/#the-three-part-decomposition
Fantastic article!
The distributed/centralized information-coordination axis poses a homunculus problem. So what if you've got a giant brain to process feeds from the twigs? What does that brain actually do? It needs to distill signal from noise, evaluate options, combine information, draw from experience, weigh goals, apply policies, and recommend decisions. That's knowledge work, and the central evaluator requires twig-level knowledge to do it well.
Can this be done by some super-advanced, undecipherable AI mind trained through RL or the like? Maybe, but that approach brings a huge risk of technology overhang.
Instead, context engineering becomes a focal task in which we can join domain-level knowledge with larger frameworks like optimization, market dynamics, game theory, and computer science.
Whether distributed or centralized, the key design consideration remains the organization of knowledge in the information architecture.
"Experienced judgment about what matters" is one thing. I'm starting to think the real difficulty is "knowing what the basic approach is when it's not in your context."
The "unmet need" example is a good one. As a 20-year UX research consultant, I thought the basic approach was obvious. (I also am willing to bet you didn't need as many sources as you used, and that the system you built was WAY more expensive than even a team of me's.) But I've learned how rare this is when working with clients. It's outside their wheelhouse, which means that usually they can't integrate or even comprehend the approach. Surely this is true of many other white-collar intellectual disciplines. What's trivially obvious to one discipline is sheer enigma to another.
To put it another way, if you won an hour-long call with Warren Buffett, AMA, would you even know what to ask? Presumably not about your 401k...?