What’s a better use of time than a six-month pause on AI Research?


TLDRUpFront: Much has been made of the signatories of a letter requesting that governments enforce a six-month ban on AI research. But there is a better use we could put those six months to for Hinton, Yudkowsky, Pachniewski, and all the other AGI Doomers.

 

FullContextInTheBack: What these men would benefit from more than a six-month pause on AI research is six months of therapy coaching classes on learning that “IQ” and “knowledge” are not all that’s needed to “dominate” life, let alone to live it. And after that, maybe some introductions to calculus, systems science, and Dana Meadows’ Limits to Growth.

I do think it’s important to distinguish “general concerns” about economic and social changes resulting from AI from what I term “existential alarmism,” which is the “humanity’s going to get killed” or “the AI takes over” variety. I wasted my morning listening to Hinton, and I suspect he’s marginally more accepted by some because his nonsense is flavored in liberal rather than neocon idioms, but he and Yudkowsky are selling the same brand of nonsense.

Hinton & Yudkowsky – Blue & Red Flavors of the Same Mental Model

After watching the Hinton talk, I’m sorry: a person who says with a straight face that a species can instantly propagate a mutation across its entire population, and who sees that as a strength and not a risk, does not understand how fragility works in tightly coupled systems.

Likewise, a person who argues with a straight face that we’re not sophisticated enough to handle the technology because we “can’t keep assault rifles out of the hands of teenage boys” shares a mental model with someone who would argue we “can’t build a wall to keep all the Mexicans out” to make the same point. Suitable, perhaps, for a FoxNews or MSNBC talking-head spot, but not a deep thinker about the kinds of real-world systems, interactions, and complexities they claim AI is going to extinguish.
As for Yudkowsky, you just need to hit up his Twitter feed on any given day.
“Superintelligences are vastly more dangerous than nuclear weapons. Reason #1: they’re smarter than you. Reason #2: you can build a nuke, and leave it lying around for 5 years, and come back, and not find an expanding 9.9-light-year-wide sphere of von Neumann bots.”

Fear is not an Argument

Being fearful is not a valid argument. Concern, in isolation from reason, is not a virtue. And we can test whether these concerns are driven by irrational, anxious fear or by reason. Essentially, every AGI Existential/Lethalities argument so far has come down to this: an exponential growth of intelligence, hidden from our awareness, leads to a superpower AGI that, due to poor alignment and a series of hypothetical “and then it might do this” steps, results in the end of humanity.
Now…change one thing in that argument, the “poor alignment,” and the same argument supports a superpower AGI turning life into a paradise utopia for all humans.
And this is how we test whether fear or reason is driving this: do any of these folks accept the possibility that a superpower AGI could unleash humanity into a new age of paradise and ease? There’s no reason they *shouldn’t*, from a logical standpoint, if we accept the premise of their argument. A superpower IQ executing a series of hypothetical jazz-hands actions that results in killing all humans is the same argument as a series of hypothetical jazz-hands actions resulting in paradise for all humans. “Because it’s so much smarter.”
But none of these alarmists are seriously arguing that in three to five years a superpower AGI could make life a utopia for all humans. They’ve all adopted the opposite argument: that we all die. Why? Accepting all their premises except alignment in good faith, and arbitrarily confining the AGI’s alignment to either “good” or “bad” from the perspective of humans, we effectively arrive at a coin-flip probability that we’re all going to die or we’re all going to live in utopia.
But none of them are saying that. If you pressed them, I don’t think they’d accept it as a reasonable possibility. They’d find all sorts of reasons why it’s “harder” for a series of hypothetical jazz hands to result in utopia than in apocalypse. And in doing so they undermine their own argument: for those same reasons, it’s “hard” to kill all the humans that exist as well, but they jazz-hand past that with “but it’s really smart.” In the end, they have to admit they are arbitrarily selecting fear over optimism, because they arrived at this argument not by logic or reason or evidence, but by a series of hypothetical hand waves *guided* by their fears and anxieties.
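To make the coin-flip point concrete, here is a minimal sketch of the symmetry in LaTeX notation. This is my own illustrative framing, with a made-up count of n hypothetical steps per chain and made-up per-step probabilities, not anything the alarmists (or anyone else) actually formalize:

% Illustrative only: treat each hypothetical "and then it might do this"
% step in the doom chain and the utopia chain as equally speculative.
\[
P(\text{doom}) = \prod_{i=1}^{n} p_i, \qquad P(\text{utopia}) = \prod_{i=1}^{n} q_i
\]
% If the only justification offered for each step is "because it's so much
% smarter," there is no principled basis for asserting p_i > q_i; under that
% symmetry P(doom) = P(utopia), which is the coin flip described above.

Any claim that doom is likelier than utopia has to come from arguments about the individual steps, which is exactly the rigor being skipped.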
Beyond everything else, this reasoning from anxiety is a tremendous flaw, and it is not improved by having a series of people say the same thing, any more than the argument from fear against Saddam was improved each time a new neocon made it.

SOURCES:

Hinton Talk: https://www.youtube.com/watch?v=sitHS6UDMJc
Yudkowsky Tweet: https://twitter.com/ESYudkowsky/status/1659159024481488900

DISCUSSION

I thought Yudkowsky was overreacting until I heard Tom Davidson on an 80,000 Hours podcast make a good case for a takeoff within 3 years. My argument: let’s say Microsoft has a parallel internal AI effort that borrows from OpenAI’s progress AND from users’ efforts and results to improve. Its primary goal is the same as MSFT’s: maximize revenue share of global GDP without doing anything technically illegal. This includes funding legislation to decide what is technically illegal. Perhaps rejuvenation, rewilding, and space colonization help. Perhaps a snowball Earth of just bots is optimal. The challenge is we can’t know what it decides, and our input matters exponentially less every month.
If Rumsfeld’s argument for invading Iraq is sloppy and full of holes, then if Cheney, Wolfowitz and Bush all use the same argument – has the argument improved?
A parallel construction in national security to “the more outrageous the claim, the stronger the evidence required” is “the more severe the proposed risk, the more rigorous the reasoning about it.” These two need to ride side by side whenever discussing systemic risk. If a risk is worth discussing, it’s worth discussing rigorously, not sloppily.
The alignment problem isn’t specific to AI; it’s already a problem, and AI is a tool that is amplifying it.
And this is where we shift from AGI Existentialism/Lethalities concerns, which I regularly target, to a more general form of concern about social, economic, and other transitionary upheavals. And why I find those more valid than “omg we’re all going to die!” Because, to your point, someone controlling the alignment of an AGI for self-serving purposes may be bad, but it takes as a premise that the AGI’s alignment *can* be controlled. And at that point, it’s just a technology cycle like any other, with winners and losers, the need for some (but not too much) regulatory oversight, and the question of “aligning” human interests among *humans*…all of which we muddle through in ways similar to how we have in the past. That’s an entirely reasonable and justifiable concern in my book.