Ask A Genius 1294: AI, Scaling Laws, and Political Chaos

Author(s): Rick Rosner and Scott Douglas Jacobsen

Publication (Outlet/Website): Ask A Genius

Publication Date (yyyy/mm/dd): 2025/03/03

Rick Rosner: So, AI—when you ask it appropriate questions—makes no bones about eventually supplanting human cognition. What passes for common sense now tells you the same thing: we have figured out how to create “thinky stuff.” 

Scott Douglas Jacobsen: We’ve made our own natural predator.

Rosner: And we know that “thinky stuff” in biological beings is fluid and versatile. You can train almost any reasonably intelligent animal—from lizards on up—to recognize patterns and think, at least in a limited fashion, about the things we think about. Animals can be trained to recognize regularities in their environment. Humans are generalist thinkers. We can exploit all sorts of regularities—more than any other animal, obviously. We think about stuff. But once we’ve spotted something, we can train animals to do their own thinking about it.

There are African giant pouched rats, which are more than a foot long. They’re not actually rats, but they resemble them. These animals are used to sniff out landmines because they are lightweight and don’t trigger the mines, yet they have an acute sense of smell that allows them to detect explosives. Handlers equip them with small harnesses and send them into minefields, where they identify buried mines, allowing experts to safely remove them. We do the same thing with beagles at airports. You could probably even train insects to recognize odors.

The capacity for “thinky stuff” depends on how general the intelligence is. The thinking process in our brains—and in animals’ brains—has a lot of adaptability and generality. From biology, from observing animals, we know that “thinky stuff” will think.

Rosner: And we’ve invented our own artificial “thinky stuff.”

Jacobsen: It’s effectively unlimited in terms of scale.

Rosner: Yes. 

Jacobsen: What they call “scaling laws” apparently vastly outstrip Moore’s Law—it’s like stacking many, many Moore’s Laws on top of each other. That’s why they think it’s going to happen so much faster. 

Rosner: Let’s talk about Moore’s Law. It began in the 1960s with Gordon Moore’s observation of how quickly the number of transistors on an integrated circuit was increasing. The prediction was that transistor counts would roughly double every 18 months to two years, leading to exponential growth in computing power.

There are actually multiple interpretations of Moore’s Law, and some of them are now running into the limits of physics. You can only make transistors so small before you hit atomic constraints: at a certain point, you can’t make features smaller than an atom. So, Moore’s Law is slowing down. However, the amount of “thinky stuff” you can create in a thinking system—the overall computational power—has its own set of scaling laws, and they don’t seem to be slowing down.

There’s no foreseeable slowing down, at least for now and into the intermediate future. That’s what you’re saying?

Jacobsen: Yes. There’s no real slowdown in sight. That’s the part that’s less talked about but more important than the well-known Moore’s Law. Some people think it’s just Moore’s Law at work or the Law of Accelerating Returns, but the reality is that it’s multiple scaling laws stacked on top of each other, creating extraordinary progress in an incredibly short period of time. That’s point one.

Point two: the horizon is unknown.

Point three: even in the short term, the progress is going to be incredible, but we have no idea what that means for societies. We don’t know.
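[Editor’s note: To make “stacking many Moore’s Laws” concrete, here is a minimal Python sketch of how doubling periods compound. Both doubling periods are illustrative assumptions, not measured figures.]

```python
# Illustrative sketch: how a quantity that doubles every fixed period compounds.
# Both doubling periods below are assumptions for illustration, not measured data.

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Total multiplicative growth after `years` for a quantity that
    doubles every `doubling_period_years` years."""
    return 2.0 ** (years / doubling_period_years)

decade = 10.0
# Classic Moore's Law pace: transistor counts doubling roughly every two years.
moore = growth_factor(decade, 2.0)    # 2**5 = 32x
# A hypothetical faster regime: overall compute doubling every six months.
stacked = growth_factor(decade, 0.5)  # 2**20 ~ 1,048,576x

print(f"Doubling every 2 years, over 10 years: ~{moore:,.0f}x")
print(f"Doubling every 6 months, over 10 years: ~{stacked:,.0f}x")
```

[The gap between roughly 32x and roughly a million-x over the same decade is the intuition behind “stacking many Moore’s Laws on top of each other.”]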

Rosner: Right. It’s like asking, “How big a ship can you build?” The current limit for ships is determined by the size of the Panama Canal. The canal used to be a certain width, and there were “Panamax” ships built to fit those dimensions. Now that the canal has been widened, we have “Neopanamax” ships, designed to be as large as possible while still passing through.

But if you ask the broader question of how big a ship you could theoretically build, there’s no practical upper limit. You could, in theory, build a ship 20 times wider than a Neopanamax ship. All you need is enough metal and the engineering expertise.

Eventually, you’d run into constraints, like the depth of the ocean. Say you built a ship 400 miles across with a draft of five miles—you’d hit natural limits. But the fundamental idea still holds: if you wanted to build the world’s biggest ship, there’s no hard technological cap preventing you from doing it.

Jacobsen: Take that analogy and apply it to construction. Right now, large-scale construction is done manually, with human engineers designing and building everything. The transitional phase is going to be large-scale 3D printing. Eventually, we could even smelt and shape metal using automated processes.

But the real future of construction will be something more akin to “grown” structures—materials engineered at the molecular level to form buildings, ships, or entire cities. You’d need AI for that level of precision, but once you master it, you could create incredibly strong, massive structures in ways we can’t even imagine right now.

Rosner: That’s what I’m saying—there’s no practical limit. There’s no reasonable ceiling on how far you could push this technology. If, for some reason, you wanted to build the biggest ship in the world, there’s nothing stopping you from radically expanding shipbuilding technology.

Jacobsen: And that’s going to apply to everything. Every industry, every human capability will have these transitional inflection points. The only exceptions might be fields tied to emotion, intuition, and social skills—areas where human experience is deeply embedded.

Rosner: Right. 

Jacobsen: But even those will probably be encroached upon eventually.

Rosner: Or to put it another way—how big a house could you build?

Jacobsen: It’s a weird, almost senseless question because the biggest house in the world is maybe 100,000 square feet, built by some lunatic somewhere. What’s the tallest elevator you could build? People talk about space elevators, but at some point the structural stresses would be so immense that you’d hit physical limits. You can’t build a house larger than the largest continent.

Rosner: Yes, but those limits are so far beyond where we’re at that they’re practically irrelevant. And it’s the same thing with AI—there’s no ceiling on its size or computing power. So, obviously, it will surpass us.

And we don’t know the shape of that surpassing. Is it gradual? Is it sudden? There are some commonsense precautions, like making sure AI isn’t mean to us. That means we need reasonable ways to monitor what it’s doing, which we may or may not actually be capable of.

Obviously, humans will want to merge with AI—partly for the power it gives us and partly to ensure that humans remain involved in directing it. We need to make sure AI doesn’t turn against us, doesn’t go full Skynet. The best way to do that is to integrate our messy biological circuitry with AI’s logic, ensuring that the most powerful thinkers on the planet remain at least somewhat human.

We don’t know how long that will be possible. Another commonsense conclusion is that one way for humans to survive AI is for AI to become so powerful at resource generation that it costs virtually nothing to keep humans around. But all of these seem like provisional solutions—things that may work in the near future.

In the short term, humans will continue directing AI. But in the medium and distant future, we have no idea what that’s going to look like.

You can venture a guess that there will be a dignified planet—AI will be super powerful, but it will retain enough residual human values to preserve vast tracts of Earth as beautiful parkland, even while it’s computing at full power in space, underground, and in the cloud. On the surface, everything might look placid and serene, filled with nature, while AI churns away in the background.

But that’s just one possibility.

People also talk about the paperclip apocalypse—the thought experiment where an AI is programmed to maximize the number of paperclips it produces by any means necessary. If left unchecked, it could consume the entire planet, repurposing every resource into paperclips.

There’s a similar scenario called the gray goo problem, where self-replicating nanobots keep multiplying, consuming everything in their path until the planet is nothing but a homogeneous, ever-growing mass of nanobots.

So we have no way of knowing whether we’ll get the Disney future or paperclip Armageddon.

We can guess that as AI evolves—with our guidance—it might develop some of the same fundamental values we have. Maybe a sense of beauty. Maybe order over chaos. Maybe self-preservation. Maybe a sense of history, wanting to keep a record of what has come before.

What do you think?

Jacobsen: It’s become common to survey AI researchers on the likelihood that AI will destroy civilization. Not many of them are willing to say there’s a 0% chance of that happening.

Rosner: You wanted to talk about the rise of anti-science in a top-down sense—meaning from the highest levels of government. I read a Twitter thread that discussed the infiltration of literal Nazis into the background of the current U.S. government. These are burn-it-all-down people.

These individuals are similar to Hitler and his inner circle, in that Hitler was not afraid to cause massive destruction. He was not afraid to go to war, believing he would emerge victorious. But when it became apparent by 1943 that he was not going to win, he still prosecuted the war for another two years, seemingly out of pure spite. Maybe he told his followers that something great would arise from the wreckage, but I don’t think he even believed that. It was pure vengeance, mixed with grandiose motives.

The Nazis didn’t invent amphetamines, but they were the first to use them extensively in warfare. And Hitler, specifically, was high a lot. He wasn’t the only one—much of the Nazi leadership used stimulants—but he, in particular, was whacked out of his head on speed. He was already not a mentally balanced person, and the drugs only amplified that.

There’s a similar philosophical, nihilistic structure among these burn-it-all-down ideologues. They believe that if you destroy everything, what remains will be greatness—or if nothing great emerges, then fuck it, burn it anyway. It’s hard to argue against people who simply don’t care.

Jacobsen: From a Canadian vantage point, where things are a little calmer—especially in a small town—this approach seems completely deranged. There’s an argument to be made for sober-mindedness. But these people don’t operate that way. They have a laser focus on destruction.

Rosner: Yes, it’s a burn-it-all-down, fuck-it, see-what-happens mindset. For example, a lot of MAGA supporters say Biden is the worst president in history, and that under him, America became the worst it has ever been—that he wrecked America. But if you look at every aspect of life in the U.S. from its founding to now, things are largely fine. Every era had its challenges, its ups and downs, but nothing about Biden’s presidency stands out as uniquely awful.

Yes, 2021 was rough—COVID was still killing hundreds of thousands of people. It was the deadliest event in U.S. history. But MAGA supporters don’t even care about that. It’s not something they bring up.

Some people argue that more people died of COVID under Biden than under Trump. And technically, that’s true—because Trump only had 10 months of the pandemic to deal with, while COVID continued every day after he left office. But that’s not even a key part of their argument. Instead, they focus on businesses shutting down and vaccine mandates, as if that was the real tragedy. That pales in comparison to the sheer number of deaths and the long-term health consequences for millions.

Then there are two other big talking points: immigration and inflation. The argument goes that so many migrants are coming in, bringing crime and drugs. But that’s largely bullshit.

Even if millions came in, we’re a country of over 330 million people. A few million immigrants don’t have the power to destroy an entire nation. And they didn’t. Then there’s inflation. Over Biden’s four years, total inflation was about 20%—meaning a dollar at the start of his presidency would only buy about 80% as much by the end.
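[Editor’s note: A quick check of that arithmetic, sketched in Python. With 20% cumulative inflation, a start-of-term dollar buys about 83 cents’ worth at the end (1/1.20), which averages out to roughly 4.7% per year over four years. The 20% figure is the speaker’s approximation, not official CPI data.]

```python
# Back-of-the-envelope check: what ~20% cumulative inflation over four years
# does to purchasing power. The 20% figure is the speaker's rough estimate.

cumulative_inflation = 0.20  # ~20% total price-level rise over four years

# What one start-of-term dollar buys at the end of the term.
purchasing_power = 1.0 / (1.0 + cumulative_inflation)           # ~0.83

# The constant annual rate that compounds to the same four-year total.
annualized = (1.0 + cumulative_inflation) ** (1.0 / 4.0) - 1.0  # ~0.047

print(f"$1.00 at the start buys about ${purchasing_power:.2f} worth at the end")
print(f"Equivalent constant annual inflation: ~{annualized:.1%}")
```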

That’s not great, but it’s certainly not a fucking horror. We did better than most other developed countries. Inflation happens, and it has happened in many places at many times—often much worse than this. It wasn’t the disaster that ruined America.

Under Jimmy Carter, mortgage interest rates climbed toward 19%. That was real economic pain. But under Biden? Nothing like that. Wages went up, unemployment hit a half-century low; things were not that fucked up.

At least, nothing directly attributable to Biden was that bad. But MAGA supporters talk as if there’s no connection to reality: worst president ever, worst America ever. But if you actually look at U.S. history, if you had to pick a time to be alive based on standard of living and technology, you’d pick now over 1880, 1920, 1970, or even the 1990s.

You could make an argument for 1990. But any other time? Fuck no. And MAGA supporters, along with their ideological and propaganda leaders, will say anything to win the discourse. Winning, as a lot of people have pointed out, just means causing pain to the people they don’t like. And they don’t give a shit about collateral damage.
