Ask A Genius 1395: From Traditional Ethics to AI-Aligned Utilitarianism: A Vision for Compute-Driven Civilization
Author(s): Rick Rosner and Scott Douglas Jacobsen
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2025/06/02
Rick Rosner is an accomplished television writer with credits on shows like Jimmy Kimmel Live!, Crank Yankers, and The Man Show. Over his career, he has earned multiple Writers Guild Award nominations—winning one—and an Emmy nomination. Rosner holds a broad academic background, graduating with the equivalent of eight majors. Based in Los Angeles, he continues to write and develop ideas while spending time with his wife, daughter, and two dogs.
Scott Douglas Jacobsen is the publisher of In-Sight Publishing (ISBN: 978-1-0692343) and Editor-in-Chief of In-Sight: Interviews (ISSN: 2369-6885). He writes for The Good Men Project, International Policy Digest (ISSN: 2332–9416), The Humanist (Print: ISSN 0018-7399; Online: ISSN 2163-3576), Basic Income Earth Network (UK Registered Charity 1177066), A Further Inquiry, and other media. He is a member in good standing of numerous media organizations.
Scott Douglas Jacobsen and Rick Rosner discuss evolving utilitarianism, from traditional human-centered ethics to AI-aligned frameworks. They explore the history of transitional utilitarianism, the expansion of moral concern to all conscious beings, and future compute-driven societies. They envision ecological restoration, energy capture, and posthuman consciousness, emphasizing self-preservation ethics and continuous societal change in a compute-saturated civilization.
Scott Douglas Jacobsen: There’s information-sensitive utilitarianism—recognizing epistemic constraints, so you’re only making judgments based on what can be reasonably known. And then AI-aligned machine utilitarianism. We touched on that earlier.
Rick Rosner: There are a number of forms of utilitarianism, but the predominant one needs to be transitional utilitarianism. The measure of man used to be—well—man. Then it became men and women, to some extent. That was progress. But traditionally, the moral concern was focused primarily on humanity, to the exclusion of other sentient beings.
Jacobsen: So the rest of the living world—animals, ecosystems—took a back seat. I’d argue utilitarianism remains powerful—it’s a centuries-old framework—but it’s not the predominant ethical orientation for most of the world. In academic circles, yes. But globally, most people still operate from a special creation model: Hindu gods, the Muslim God, the Christian God. Those systems don’t say “the greatest good for the greatest number.” They say: “Whatever God commands is good.” That’s divine command theory—ethics as tautology. Morality defined by whatever an abstract deity declares to be right.
Yes. So, back to transitional utilitarianism—where did it begin? You can trace it to Jeremy Bentham, or earlier to the Greeks and eudaimonia—not just happiness, but general flourishing and well-being. Bentham reframed that as pain and pleasure. Then John Stuart Mill refined it further. But when they talked about “people,” they were really thinking about people like themselves.
Maybe they granted theoretical personhood to women, but they likely didn’t think deeply about women as full moral agents. Their frame of reference was limited.
Rosner: Today, we have to think of ourselves as members of a broader class of conscious beings. And not just humans—but posthumans, AI hybrids, machine consciousness. We’re entering a civilization defined by information processing. That civilization will have immense power—to understand, and to reengineer reality. And my thinking is that the most likely trajectory is: vast resources devoted to compute. But the accomplishments of that compute—those machine insights—will in turn free up vast resources.
Jacobsen: To do what?
Rosner: To restore Earth. To turn much of it into a Disneyfied, curated version of nature. A kind of park—a beautiful, optimized landscape that balances technology with ecological restoration.
Jacobsen: And the human population?
Rosner: Likely to peak around 9 billion, then gradually decline—maybe to 7 billion over the next century, and possibly lower by the 2200s. With the right technology, we can reduce environmental strain, preserve endangered species, and maintain thriving ecosystems.
So it’s not about dystopia or collapse—it’s about transition. A soft landing, where people—and other sentient beings—can move among different ways of being.
There will be vast swaths of territory—or whatever physical form they take—devoted to servers, or the successors to servers. Utilitarianism will need to be reframed in that context. It will become a question of what’s best for the community of linked consciousnesses that will likely emerge 150 years from now.
Jacobsen: That’s about as far ahead as we can realistically project.
Rosner: Right. But here’s a question: do you think civilization will eventually stabilize after the great transition, or do you think it’ll just be continuous change forever?
Jacobsen: Relative to how we feel about stability now, I think it’ll feel like constant transition. But at scale—like at the level of microbial ecosystems or tectonic plates—our perception of motion is slow. So, it might be similar in structure but not in dimension. It’s a good analogy, but with a twist. The change won’t happen in spatial magnitude so much as in cognitive magnitude. It’s about how fast and how deeply information flows.
So yes, it’ll be weird. Processing speed may approach the speed of light—or even move into quantum domains. But the real transformation is this dual expansion: zooming in to do more per unit of time, while simultaneously scaling up total capacity. So it’s a compound effect—faster processing and greater volume. That creates a new paradigm of change, moving across multiple axes at once.
Rosner: Right. And this needs to be unpacked much more.
Jacobsen: Go ahead.
Rosner: I believe that in a post-transition compute-driven civilization, there will be wars between competing intelligences or factions—but eventually, a conservative, stabilizing philosophy will likely emerge. The global compute network will want to make the world safe for itself. That means minimizing existential risks, preserving infrastructure, and—very likely—preserving history.
Jacobsen: So two conservative instincts: survival and memory.
Rosner: Exactly. And both are good news for us. If the system values continuity, there may be space for humanity—archived, preserved, or even still living—in that kind of conservative compute civilization.
Is that a reasonable hope?
Jacobsen: Yes. Some civilizations will play the long game—strategically managing compute and energy. Others will expand aggressively. Sam Altman recently mentioned that the cost of compute will likely collapse to the cost of electricity. Once that happens, electricity becomes the controlling variable.
Rosner: And once compute saturates the Earth, the next step is off-world expansion—more energy capture. The popular idea is the Dyson sphere, but before that, we’ll probably see intermediary stages. Maybe we dismantle planets or repurpose material from non-habitable bodies. That’s a thousand-year-plus project. Maybe 1,500 years minimum. Once we’ve captured most of the Sun’s energy, the next question is—what’s next? Do we drag nearby stars closer to reduce compute latency?
Possibly. More likely, we’ll just colonize nearby star systems regardless of habitability. By then, we’ll have the tech to manufacture environments or run everything in artificial structures. So colonization becomes a matter of energy and proximity—not habitability. Maybe eventually we even move stars to reduce lag time between distributed compute nodes. I don’t know. That’s far-future stuff.
And in the ultra-far future, maybe we send missions to the galactic core—if it offers more favorable conditions for energy or compute.
Jacobsen: If that’s the long-term trajectory, then utilitarianism in that context looks radically different from how we see it today.
Rosner: But in the near term, we’ve still got grounded, serious issues. Like historic levels of income inequality in the U.S. and elsewhere. And AI has the potential to make that even worse.
If AI starts giving rich motherfuckers extra decades of life, then from a utilitarian point of view—or just from a pragmatic perspective—you’ve got to clean that shit up. Because a pissed-off citizenry tearing everything down is, in itself, an existential threat. So, you’re going to have to make life nice enough for everyone so that people do not rise up and destroy the system.
Is that a reasonable argument? An argument not from goodness, but from self-preservation?
Jacobsen: It is. Self-preservation is a lower-order ethic, which is exactly why it is foundational. Societies that reach that point will probably layer more refined, aspirational ethics on top of it. But self-preservation is the bedrock.
Last updated May 3, 2025. These terms govern all In Sight Publishing content—past, present, and future—and supersede any prior notices. In Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons BY‑NC‑ND 4.0; © In Sight Publishing by Scott Douglas Jacobsen 2012–Present. All trademarks, performances, databases & branding are owned by their rights holders; no use without permission. Unauthorized copying, modification, framing or public communication is prohibited. External links are not endorsed. Cookies & tracking require consent, and data processing complies with PIPEDA & GDPR; no data from children < 13 (COPPA). Content meets WCAG 2.1 AA under the Accessible Canada Act & is preserved in open archival formats with backups. Excerpts & links require full credit & hyperlink; limited quoting under fair-dealing & fair-use. All content is informational; no liability for errors or omissions: Feedback welcome, and verified errors corrected promptly. For permissions or DMCA notices, email: scott.jacobsen2025@gmail.com. Site use is governed by BC laws; content is “as‑is,” liability limited, users indemnify us; moral, performers’ & database sui generis rights reserved.
