8 Comments

Thanks for this. I'm often telling people that "True Understanding" is a red herring. From now on I'll just link them to this.

"Begging the question" is not best used as a substitute for "prompting [or suggesting] the question" in an article that purports to be about philosophy.

Humans (and groups of humans) can do original conceptual science research, including (in the distant past) inventing language and science in their entirety from scratch.

Do you think human brains are doing something that no computer chip could ever possibly do? If so, what?

(My own strong opinion is that it is possible to put human-brain-like algorithms on computer chips, and that it is likely to happen eventually for better or worse, and that those AI algorithms will be different from LLMs in numerous ways, and that those AI algorithms will obviously be perfectly capable of doing fundamental physics research.)

I do believe in the material basis for the human mind. I suspect there are still differences that might prevent us from simulating an intelligence-that-can-discover-new-things on a computer chip. The human mind can be thought of as an algorithm, I agree, but it's also one that exists physically embedded in a world, culture and society, amidst other minds. Can that nature be simulated on a computer chip? Maybe, but I have doubts. Rather than approaching the problem from silicon though, we may get better and better at genetically modifying human intelligence, and I see this as a possibly likelier path to creating an intelligence that can discover, if it's possible at all.

The thought experiment I go back to is, if you gave an AI model access to all human knowledge but only up to 1914, the year before Einstein published the theory of General Relativity, would/could the model put forth that theory (or one similarly groundbreaking)? I'm open to the answer being yes, but I'm not sure!

For "physically embedded in the world"—even if that's true, an AI algorithm on a computer chip could have a robot body.

(For my part, I don't think literal robot bodies are necessary—for example see lifelong quadriplegics like https://en.wikipedia.org/wiki/Christopher_Nolan_(author) , plus there could be virtual bodies in a VR environment—but even if robot bodies WERE necessary, OK fine, whatever, it's obviously perfectly possible to plug a future AI algorithm into a robot body.)

For "culture and society, amidst other minds"—even if that's true, an AI algorithm on a computer chip with a robot body could have 200 or 200,000,000 other similar-but-not-identical AI algorithms on computer chips with corresponding robot bodies to chat with and hang out with.

Anyway, if you want to make claims about the limitations of LLMs, that's fine, and in fact I would probably even agree with such claims. But then you also made claims about "fundamental limits" on "advanced AIs", with no restrictions / caveats. Unless you specify further, an "advanced AI" can be any algorithm whatsoever, including algorithms that don't exist yet and won't be invented for 10000 years. For example, an "advanced AI" can be a simulation of a VR environment with millions of human brains and bodies, with each brain simulated neuron-by-neuron, running on a quadrillion dollars worth of specialized computer chips, churning for a million years, starting in the year 2800, etc. etc. It seems very obvious to me that such an "advanced AI" can invent general relativity (as Einstein did) or conjure up a grammatical language from nothing (e.g. https://en.wikipedia.org/wiki/Nicaraguan_Sign_Language ). Right?

That's a deliberately over-the-top example, obviously. But if you make a claim about "fundamental limits" of "advanced AI", then such a claim would apply to absolutely everything including silly over-the-top examples. And if you didn't mean to make such a broad claim, then you should edit your post to say something more specific than "advanced AI", and/or less aggressive than "fundamental limit".

(For "genetically modifying human intelligence"—I'm arguing against your claim "advanced AI … [is] fundamentally limited", so whether future humans will genetically modify themselves or not is irrelevant, unless you think those future modified humans will stop doing AI research forever at some point for some reason, in which case I’m interested why you think that.)

My blog post here is related, I think: https://www.alignmentforum.org/posts/KJRBb43nDxk6mwLcR/ai-doom-from-an-llm-plateau-ist-perspective

As someone who also agrees with the premise that "superintelligence" is fear-mongering at best, I think this article attacks a strawman. It's true that LLMs and AI trained on prior data have limits - but the existential risk from superintelligence argument doesn't rely on LLMs or similar constructions. The argument is as follows:

- (AGI assumption) Assume it is possible to create an AI that is capable of better-than-human performance on arbitrary tasks (note: this does not have to be based on current approaches to AI!),

- (Superintelligence assumption) Assume further that it is possible to tune or create an AI such that its performance on arbitrary tasks always exceeds that of the best humans,

- Then, given the task of defeating or destroying all humans, the AI will necessarily win, because (from both the AGI and superintelligence assumptions) it will be able to exceed human performance on the subtask of overcoming human resistance.

These are both very strong preconditions for AI existential risk. AGI is not yet possible. Superintelligence is plausible, but no one knows how it might occur. The danger isn't that they will happen, but that, with sufficient R&D, they might.

I hope it also goes to show just how silly and contrived the AI existential risk argument is in this form. Perhaps AGI is achievable in our lifetime, but superintelligence is like a "get-out-of-jail-free" card that AI risk enthusiasts love to wave at any possible defense ("your proposed theoretical defense won't work, because the AI will figure it out - don't ask me how, I'm not smart enough, but the AI will be!"). It requires a prior belief that all tasks are solvable with enough computational power, but there are many games where computational power is not an edge, and it is certainly possible to deny any player (AGI or otherwise) the resources needed to acquire the necessary edge in those games. Anyone claiming otherwise would need to bring some powerful evidence to the contrary.

Wrong. Read the Hacker News thread and educate yourself.

Fair point, but it misses the reality that LLMs are also being trained on "synthetic" (non-human) data: LLM outputs that "compress" non-language concepts into a self-created language. That opens the possibility of creating new language-like token structures out of non-language skill knowledge, like filmmaking, painting, or manipulating the genome, not to mention all the new medications that humans have never thought about. These new grammars and syntaxes are all new ideas under your definition, and would seem to invalidate your argument's assumptions.
