
Ask A Genius 1530: AI Hype vs Reality: Efficiency and Externalities

2025-11-26

Author(s): Scott Douglas Jacobsen and Rick Rosner

Publication (Outlet/Website): Ask A Genius

Publication Date (yyyy/mm/dd): 2025/09/29

Will brute-force AI evolve into efficient, general systems without bankrupting the planet—or plateau as a powerful but limited tool?

Scott Douglas Jacobsen and Rick Rosner examine AI’s hype cycle and trajectory. Building on Cory Doctorow’s skepticism, they agree short-term disappointment is likely while acknowledging domains where machines surpass humans. They debate consolidation of AI firms, sunk costs, and the environmental and privacy externalities of massive compute. Rosner’s Packard analogy frames current systems as brute force; future efficiency may reshape economics or plateau. They contrast adoption with value, noting smoking and fads, and caution against simplistic energy comparisons. Chess wins and 20-watt brains illustrate capability versus cost. The pair end on emergence: human priorities are messy; AI may inherit them.

Rick’s Opening Thoughts for the Day

Scott Douglas Jacobsen: What are your general thoughts today? 

Rick Rosner: I want to go back to what Cory Doctorow said—that AI will never live up to the hype. I agree that in the short term it will not, and that there will be a crash. However, eventually, I think it becomes everything.

That could be because I believe the universe is a giant information processor, and that the tendency for advanced civilizations is to turn toward massive computation. That may be going out on a limb, since there are many directions civilizations could take.

For me to assume that every civilization tends to become computational—that is worth discussing.

Jacobsen: We have had two sessions on this, the last two nights. Each time you asked a slightly different question about whether AI is just hype. I have given a similar answer each time. It is half true. 

The first part of my answer is that there are obvious domains where computers outperform humans. It may require more computing and energy to achieve that superior performance, but outperforming people in many domains is undeniable. In many other domains, however, the answer is no. If Doctorow is making a subtler point—and I assume he is—he is probably pointing to hype leading to an economic downturn.

Those big AI companies could consolidate into fewer serious competitors. To cover losses, some will court defence and other enterprise customers. The capital outlays are massive—industry plans for the next few years involve hundreds of billions of dollars, with some roadmaps citing up to roughly $500 billion and multi-gigawatt campuses.

If there is a crash, the key issue for the builders of the largest models is whether they recoup their investment. If some firms are wiped out, successors that take over their assets may face less near-term pressure because much of the spending is sunk.

Rosner: We also know that technology occupying a small space can, with relatively little energy, achieve human-level computing—our brains fit inside a skull and use about 20 watts (roughly a dim light bulb’s draw).

Jacobsen: Claims about AI energy should not be oversimplified. For example, Texas is seeing proposals for power dedicated to data centers on the order of a gigawatt—there is an active plan exploring a 1.1-GW natural gas plant to serve data center demand—while separate AI projects (like xAI’s Memphis supercomputer) discuss hundreds of megawatts. The land, grid build-out, and embodied energy all matter.
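As a rough, back-of-the-envelope illustration of the scale gap being discussed—using only the figures quoted above (a ~20 W brain and a proposed ~1.1 GW plant), not precise measurements—the comparison can be sketched as:

```python
# Back-of-the-envelope comparison: brain power draw vs. a data-center-scale plant.
# The figures are the rough numbers quoted in the conversation, not measurements.

BRAIN_WATTS = 20          # approximate resting power draw of a human brain
PLANT_WATTS = 1.1e9       # proposed 1.1-GW natural gas plant for data centers

brain_equivalents = PLANT_WATTS / BRAIN_WATTS
print(f"One 1.1-GW plant could power ~{brain_equivalents:,.0f} brain-equivalents")
# ~55,000,000 brain-equivalents at 20 W each
```

The point of the sketch is only the order of magnitude: raw wattage comparisons say nothing about the land, grid build-out, or embodied energy the speakers go on to mention.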

Similarly, “how much energy to build a person” is not comparable to running a data center; you are mixing biological development with industrial infrastructure. 

The Context Matters

Rosner: Still, the apples-and-oranges comparison is a reminder that context matters. 

Jacobsen: And at the end of this, we note: both AI and humans can understand and compete in chess.

They can have different types of processing and infrastructure and produce equivalent levels of performance in terms of output. However, the whole infrastructure—biological and non-biological machines—and the thought processes behind them are entirely different. There is a whole scaffolding that is not being taken into account.

The framework is both simplistic and brutal. We get this basic image: “It takes this much compute, it outperforms humans, therefore we are headed for an apocalypse.” There is a lot beneath the surface that we are not even aware of. We lack the mental fortitude to turn it upside down because we do not understand the internal mechanisms of brain cells.

Rosner: Let me give an analogy. Current AI is brute force. Large language models derive results from billions of inputs, and I am unsure how many inputs feed into visual, video, or image models. It is like a 1927 Packard—the height of elegance at the time, maybe with a 12-cylinder engine that got four miles to the gallon—a massive hunk of metal.

Now, a century later, we have cars that can get the equivalent of 60 miles per gallon and do vastly more. Brute-force AIs use enormous amounts of energy. Their tricks are impressive, but they do not have anywhere near the flexibility of human cognition. Our brains may seem inefficient compared to AI, but that is by design, as dictated by evolution. We do not remember everything because it would be an inefficient use of resources—what you could call cognitive economics, or cognitive thrift.
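The Packard analogy above implies a concrete efficiency factor. A minimal sketch of that arithmetic, assuming the round figures quoted (4 mpg then, 60 mpg now):

```python
# Rough efficiency gain in the Packard analogy: 4 mpg (1927) -> 60 mpg (today).
# Figures are the round numbers used in the conversation.

PACKARD_MPG = 4    # quoted fuel economy of a 1927 Packard
MODERN_MPG = 60    # quoted equivalent for an efficient modern car

gain = MODERN_MPG / PACKARD_MPG
print(f"Fuel efficiency improved ~{gain:.0f}x over the century")
# ~15x
```

A 15x gain over a century is the analogy's benchmark; whether AI compute efficiency improves faster or slower than that is exactly the open question the speakers debate next.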

Over time, AI will become more efficient, more flexible, and better at doing what people and AI themselves want it to do. The question is whether AI eventually becomes so thrifty in terms of cognition that it overcomes any economic resistance. That is one possibility.

A second possibility is that it changes the economic landscape so radically that today’s calculations become obsolete.

We have discussed Feynman’s three paths of science many times: first, that science can figure everything out; second, that science may stop short because the universe is too complicated to comprehend fully; third, that science can make steady progress, continually discovering new things indefinitely.

You could make the same arguments about AI’s role in the world: that AI may never be powerful enough to make new findings and improve the world continually, or that it will plateau, or that it will keep advancing indefinitely, continuing to radically reshape the world through its cognitive power.

If you wanted to frame it like a Feynman analogy, there could be a middle path. AI does not completely reshape the world or completely fail. Instead, it steadily contributes to development without becoming everything. It becomes a force in the world, but not the dominant force.

I think we agree that the first possibility—that AI totally fails and turns out to be mostly hype—is the least likely path. 

The Paths of AI

Jacobsen: That first path is already closed off, because there are many areas of life where AI has shown real functionality that hundreds of millions of people use. So it is helpful to us.

Rosner: Just because hundreds of millions of people use AI does not mean it is the best thing in the world. Hundreds of millions of people smoked, and smoking was harmful. Hundreds of millions of people have contracted herpes—it spreads, but that does not make it good.

Jacobsen: People go to ChatGPT to get help, just as people smoke for relief and often become addicted. People try to quit smoking, and people try to avoid herpes. The analogies are almost completely terrible. 

Rosner: In the 1970s, hundreds of thousands of people bought pet rocks and mood rings. Just because many people adopt something does not mean it is valuable.

Jacobsen: I think you are playing devil’s advocate for its own sake. Let me answer. Is herpes in any way helpful to your life? 

Rosner: No. 

Jacobsen: Are mood rings helpful in writing essays, generating medical diagnostics, summarizing texts into visuals, creating artificial images, video production, or coding at near-Olympiad levels? 

Rosner: No. 

Jacobsen: Are pet rocks helpful in any of these? 

Rosner: No. The point of pet rocks is that they do nothing. However, hundreds of millions of people also watch pornography. That does not mean it is the greatest thing in the world—it just means people are drawn to it because we are sexual beings. Similarly, we may be drawn to AI because we are cognitive beings, but it could still turn out to be hollow.

I do not believe that argument, but it can be made. Just because we love AI and use it widely does not mean it is the best thing in the world.

Promise and Perils of AI

Jacobsen: So are you making an argument and undermining it in the same breath?

Rosner: Yes. AI is very promising. However, at the same time, people have gone all-in on worthless or destructive things before. In the 1930s, a country turned to National Socialism, believing it would solve its problems. It did not.

Jacobsen: AI, however, already provides tangible benefits in specific domains. Supercomputers often outperform humans in certain tasks. People are consistently beaten at chess by computers. There is, however, a level of immediate functionality that we are seeing. However, what we perceive as excellent functionality in artificial cognition might turn out to be a dead end. I do not believe that, but one could argue it.

Similarly, many of our own ways of thinking are flawed as well. We do not have the best reality testing. When it goes wrong, we develop all sorts of personality pathologies. Similarly, with these large language models, there will be glitches. They are just one approach—though they are now used as the foundation for many others.

The legitimate critique is not their usefulness but their wastefulness. In economic terms: externalities. They are costly to the environment in terms of clean water, energy consumption, and the infrastructure built solely to support computing. There is also the vulnerability they create by absorbing so much personal data.

Rosner: A 19th-century philosopher—Thoreau, not Emerson—said that “the mass of men lead lives of quiet desperation.” That came to mind today. Evolution and biology have put us in terrible situations. We are the product of billions of years of evolution that do not particularly care about our individual welfare, so we live absurd lives with absurd priorities. There is a chance that as we evolve technology, AI will inherit and amplify that absurdity.

