This is excellent of course. Your claim about the research enterprise is roughly the same as I'm making about the higher ed enterprise: AI cannot produce knowledge that depends on particular people in particular places doing particular work under expert supervision. Universities are supposed to be where expert knowledge transfer happens but instead, most schools deliver generalities at scale.
This is a brilliant application of Mokyr’s 'Industrial Enlightenment' to the current AI trajectory. While the discourse usually fixates on compute and model capability, you correctly identify that the bottleneck isn't just intelligence, it's the 'institutional connective tissue.' Your point about the 'exploitation trap' is particularly salient; it suggests that without reforming how we reward scientific risk, AI might actually lead to a more efficient form of intellectual stagnation rather than a revolution.
At the same time, AI will massively scale the appearance of thinking…more people will now produce work that looks thoughtful, coherent, insightful, etc. In that sense, the challenge will be knowing how to recognize what's real.
I wonder if a place like Substack is one of those feedback channels? My feed consists of a lot of practitioners (admittedly marketing focused), CEOs, mid-level managers, etc. working through their thoughts in public.
It doesn't match the scale of AI, but if I were a professional scientist trying to get real-world signal, reading certain Substacks might not be a bad place to start.