8 Comments

Thanks for this. I'm often telling people that "True Understanding" is a red herring. From now on I'll just link them to this.


"Begging the question" is not best used as a substitute for "prompting [or suggesting] the question" in an article that purports to be about philosophy.


Humans (and groups of humans) can do original conceptual science research, including (in the distant past) inventing language and science entirely from scratch.

Do you think human brains are doing something that no computer chip could ever possibly do? If so, what?

(My own strong opinion is that it is possible to put human-brain-like algorithms on computer chips, that this is likely to happen eventually, for better or worse, that those AI algorithms will differ from LLMs in numerous ways, and that they will obviously be perfectly capable of doing fundamental physics research.)


As someone who also agrees with the premise that "superintelligence" is fear-mongering at best, I think this article attacks a strawman. It's true that LLMs and AI trained on prior data have limits, but the argument for existential risk from superintelligence doesn't rely on LLMs or similar constructions. The argument is as follows:

- (AGI assumption) Assume it is possible to create an AI capable of better-than-human performance on arbitrary tasks (note that this does not have to be based on current approaches to AI!),

- (Superintelligence assumption) Assume further that it is possible to tune or create an AI whose performance on arbitrary tasks always exceeds that of the best humans,

- Then, given the task of defeating or destroying all humans, the AI will necessarily win, because (by both the AGI and superintelligence assumptions) it will exceed human performance on the subtask of overcoming human resistance; a sketch of this in symbols follows below.
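A minimal way to put that argument in symbols (my notation, not the original article's: $T$ is the set of all tasks, $H$ the set of humans, and $\mathrm{perf}_a(t)$ is agent $a$'s performance on task $t$):

```latex
% Superintelligence assumption: the AI strictly outperforms every human on every task.
\forall t \in T,\ \forall h \in H:\quad \mathrm{perf}_{\mathrm{AI}}(t) > \mathrm{perf}_{h}(t)

% Instantiate t with the particular task t* = ``overcome human resistance'',
% which is assumed to be one of the tasks in T. The conclusion follows directly:
t^{*} \in T \;\Longrightarrow\; \mathrm{perf}_{\mathrm{AI}}(t^{*}) > \max_{h \in H} \mathrm{perf}_{h}(t^{*})
```

Everything rests on the universal quantifier over $T$, and on "performance" being something that more capability always improves; that is exactly the premise the rest of this comment disputes.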

These are both very strong preconditions for AI existential risk. AGI is not yet possible. Superintelligence is plausible, but no one knows how it might occur. The danger isn't that these will happen, but that, with sufficient R&D, they might.

I hope it also goes to show just how silly and contrived the AI existential-risk argument is in this form. Perhaps AGI is achievable in our lifetime, but superintelligence is a "get-out-of-jail" card that AI-risk enthusiasts love to wave at any possible defense ("your proposed theoretical defense won't work, because the AI will figure it out; don't ask me how, I'm not smart enough, but the AI will be!"). It requires a prior belief that every task is solvable with enough computational power. But there are many games in which computational power is not an edge, and it is certainly possible to deny any player (AGI or otherwise) the resources needed to acquire the necessary edge in those games. Anyone claiming otherwise would need to bring some powerful evidence to the contrary.


Wrong. Read the Hacker News thread and educate yourself.

Aug 5, 2023 (edited)

Fair point, but it misses the reality that LLMs are also being trained on "synthetic" (non-human) data: LLM outputs that "compress" non-language concepts into a self-created language. That opens the possibility of creating new language-like token structures out of non-language skill knowledge, like filmmaking, painting, or manipulating the genome, not to mention all the new medications that humans have never thought about. These new grammars and syntaxes are all new ideas under your definition, and would seem to invalidate your argument's assumptions.
