
Ask A Genius 1394: Swear Words, Utilitarianism, and AI Ethics: A Deep Dive

2025-06-13

Author(s): Rick Rosner and Scott Douglas Jacobsen

Publication (Outlet/Website): Ask A Genius

Publication Date (yyyy/mm/dd): 2025/06/01

Rick Rosner is an accomplished television writer with credits on shows like Jimmy Kimmel Live!, Crank Yankers, and The Man Show. Over his career, he has earned multiple Writers Guild Award nominations—winning one—and an Emmy nomination. Rosner holds a broad academic background, graduating with the equivalent of eight majors. Based in Los Angeles, he continues to write and develop ideas while spending time with his wife, daughter, and two dogs.

Scott Douglas Jacobsen is the publisher of In-Sight Publishing (ISBN: 978-1-0692343) and Editor-in-Chief of In-Sight: Interviews (ISSN: 2369-6885). He writes for The Good Men Project, International Policy Digest (ISSN: 2332-9416), The Humanist (Print: ISSN 0018-7399; Online: ISSN 2163-3576), Basic Income Earth Network (UK Registered Charity 1177066), A Further Inquiry, and other media. He is a member in good standing of numerous media organizations.

Scott Douglas Jacobsen interviews Rick Rosner in a wide-ranging conversation starting with swear words and diving into utilitarianism, longtermism, effective altruism, AI ethics, simulated consciousness, moral uncertainty, and capitalism. Rosner critiques modern frameworks, explores future consciousness, and calls for ethical clarity amid rapid technological change.

Scott Douglas Jacobsen: I should’ve asked you this earlier. Obvious question: what’s your favorite word?

Rick Rosner: Nothing comes immediately to mind.

Jacobsen: What’s your favorite swear word?

Rosner: I guess motherfucker. 

Jacobsen: That was George Carlin’s favorite too, when he did Inside the Actors Studio. It packs the most punch.

Rosner: Cunt is also strong. In the U.S., it hits hard. It’s considered offensive, and it’s often viewed as sexist. It’s not easy to use well. But it’s got impact. 

Jacobsen: Motherfucker has a certain balance to it. Carlin said that. It’s got internal rhythm. It flows. Ready to pivot? Do you want to talk about Kantian, utilitarian, existentialist, or humanist morality?

Rosner: Let’s go with utilitarian.

Jacobsen: I’m in. Okay, so, everyone knows Jeremy Bentham—he’s the founder. Then came refinements by John Stuart Mill. But of course, there are much more modern interpretations now. As a basic framework, it holds up: “the greatest good for the greatest number.”

Rosner: Right. But here’s where it gets interesting. There’s a famous short story by Shirley Jackson—The Lottery. In it, everyone lives pretty well, but that comfort is built on a ritual: every year, one person is chosen at random and horribly sacrificed. The point is to highlight a key flaw in utilitarianism—what if the happiness of the many depends on the suffering of one? The story is almost designed to break utilitarian logic: a society isn’t moral if its comfort requires the total misery of even one person.

Once you start thinking that way, it becomes easy to generate scenarios that undermine the utilitarian ideal. Plus, you run into the problem of defining “good.”

Jacobsen: Right now, you could argue that humanity has it “good.” We’ve got over 8.2 billion people, more than ever. So technically, more people are living in relative comfort than ever before—but also, more people are living in terrible conditions than ever before.

Rosner: And a lot of that “good” is junk food for the soul—mindless entertainment and pornography. Do those things count as good? Do they make us better people?

Jacobsen: So utilitarianism, while useful, has both practical and conceptual limitations. One of those is this: if maximizing good means maximizing numbers, then should we just keep making more people? That’s absurd. So clearly, what we mean by “good” has to be more carefully defined. Happiness, in particular, is highly individualized. What makes me happy won’t necessarily work for you—especially when balancing short-term against long-term wellbeing.

Rosner: We’re not designed to be happy. We’re designed to pursue happiness. Evolutionarily speaking, that means we function best when happiness is just out of reach. That tension keeps us motivated.

Jacobsen: That gets into some newer frameworks. Have you thought about longtermism or effective altruism?

Rosner: I’ve heard of them. 

Jacobsen: What do you think of effective altruism? What do you think of longtermism? Pluses and minuses.

Rosner: I need the idea defined again, just to be clear.

Jacobsen: Longtermism, as outlined by William MacAskill, is the idea that we should extend our utilitarian concern to the far future. Since future generations could vastly outnumber us, their wellbeing deserves significant moral weight. So, the philosophy emphasizes reducing existential risks—like AI misalignment or global biocatastrophes.

Rosner: That makes some sense. So yes—you’ve got AI misalignment, meaning AI could work at cross purposes with human wellbeing. That’s a legitimate concern. But at the same time, AI is going to end up in charge eventually. Humanity will evolve—or be absorbed into—these systems. And we want our descendants to be treated well. Which raises the question: who are our descendants?

Some will be biological humans. But others will be technological—descendants that are merged with AI or entirely machine-based. Within a few hundred years, we’ll likely live in a world of transferable consciousness. The main activity of existence will be information processing. So we’re talking about a world increasingly composed of computation.

We hope that our descendants—both biological and digital—will reengineer the world to make it better, more livable. Maybe even a kind of Disneyfied utopia. Longtermism has value in that it pushes us to take measures now to reduce existential risks—things that could obliterate the future of intelligence, consciousness, and whatever humanity evolves into.

Jacobsen: So you’re saying longtermism isn’t really for humans per se.

Rosner: It’s not. It’s for what humanity becomes. It’s about steering the total trajectory of the future—not preserving the past. There’s often some human chauvinism baked into it—the assumption that humanity, as it currently exists, can and should persist unaltered. That’s not going to be the case, except perhaps for some fringe or isolated segment of humanity. Things are going to get weird.

Jacobsen: Longtermism needs to embrace change. Change is inevitable. And there’s also a paradox in the way AI is treated right now: it’s simultaneously overhyped and underhyped.

Rosner: How so?

Jacobsen: It’s good for the stock market to overhype AI. But we’re still in the early innings. Current AI isn’t that powerful—it’s limited. Future AI, though, will be transformative. So in that sense, people underhype the overarching impact. Yet the specifics are definitely overhyped—like selling language models as if they replicate human cognition. Large language models don’t do language the way humans do, even though they still produce coherent text. So pitching them as human-like cognition is flawed.

Rosner: Current AI is oversold. Future AI is under-conceptualized. People aren’t really thinking deeply enough about what’s coming, even though it’s already on the horizon.

Jacobsen: Okay, so what about the other one—effective altruism?

Rosner: What’s the core idea there?

Jacobsen: It’s a modern utilitarian framework based on evidence and reason. It asks not just “How can I help?” but “How can I help the most?” It weighs three key criteria—scale, neglectedness, and tractability—to determine where your efforts or donations can do the most good.
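[Editor’s note: A minimal sketch of how those three criteria might be scored, assuming a simple multiplicative model. The causes and the 1–10 scores below are hypothetical, for illustration only.]

```python
# A minimal sketch of scoring causes by scale, neglectedness, and
# tractability. The causes and 1-10 scores below are hypothetical.

causes = {
    "pandemic preparedness": {"scale": 9, "neglectedness": 7, "tractability": 5},
    "local beach cleanup":   {"scale": 2, "neglectedness": 3, "tractability": 9},
}

def priority(scores):
    # Multiplying the factors means a near-zero score on any one
    # dimension sinks the whole cause -- all three criteria matter.
    return scores["scale"] * scores["neglectedness"] * scores["tractability"]

for name, scores in sorted(causes.items(), key=lambda kv: -priority(kv[1])):
    print(f"{name}: {priority(scores)}")
# pandemic preparedness: 315
# local beach cleanup: 54
```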

Rosner: That seems reasonable. I don’t disagree with it. It’s a useful upgrade to utilitarian reasoning—pragmatic and structured.

Jacobsen: I’d certainly prefer effective altruism to performative altruism.

Rosner: What do you mean by performative altruism?

Jacobsen: People doing things they think are helpful—but that actually have little to no real impact. 

Rosner: Take recycling, for instance. It turns out that, in practice, a lot of it doesn’t actually get recycled. Carole, for example, doesn’t care much about recycling anymore because she says it all ends up in the same place anyway. And she’s not wrong. There’ve been hidden camera investigations showing waste management crews tossing all the separated bins into the same truck. The intent may be good, but the process fails—and that makes it more symbolic than effective.

Jacobsen: Right. So performative altruism can sometimes be more about easing guilt than making a measurable difference. For example, I read on Twitter that AI systems don’t consume nearly as much energy as people fear. If you’re worried about saving energy, a more impactful action might be cutting meat consumption.

Rosner: That tracks. Producing beef takes an enormous amount of water and energy—far more than running an AI model. So dietary change has a disproportionately large impact. Speaking of effective systems for doing good: think of Superman. Created in 1938, he had a pretty solid setup. His civilian identity as a newspaper reporter let him stay informed—he could find out about disasters almost as quickly as anyone.

While he could theoretically just fly around all day looking for trouble, having a job as a reporter gave him early access to urgent information. It was efficient. Of course, he couldn’t respond to everything. But it was a smart allocation of his attention. If we updated him today to optimize his powers, he’d probably need some kind of command center—with global surveillance, intelligence feeds, and satellite access—all helping him choose where he could do the most good per unit of time.

That’s classic effective altruism. You calculate not just what’s good—but what’s most good, factoring in time, logistics, and opportunity cost. If there’s a bus crash in India and he could save 70 people, but he’s 10,000 miles away, it might be more effective to stay nearby and save 10–12 people repeatedly over the same time span.
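[Editor’s note: The opportunity-cost reasoning above can be made concrete with a toy calculation. Every number here is hypothetical, chosen only to mirror the example in the conversation.]

```python
# Toy opportunity-cost comparison for the example above.
# All numbers are hypothetical.

FLIGHT_SPEED_MPH = 2_000      # assumed cruising speed
DISTANCE_MILES = 10_000       # distance to the distant bus crash
LIVES_AT_CRASH = 70           # one-time rescue at the crash site
LOCAL_LIVES_PER_HOUR = 11     # ~10-12 rescues per hour staying local

travel_hours = 2 * DISTANCE_MILES / FLIGHT_SPEED_MPH  # round trip
lives_forgone_locally = LOCAL_LIVES_PER_HOUR * travel_hours

print(f"Round trip: {travel_hours:.0f} h; local rescues forgone: {lives_forgone_locally:.0f}")
print("Stay local" if lives_forgone_locally > LIVES_AT_CRASH else "Fly to the crash")
# Round trip: 10 h; local rescues forgone: 110 -> Stay local
```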

In the new Superman film by James Gunn—coming out soon—Superman unilaterally intervenes in a war without government approval. It causes a huge backlash because he circumvents national sovereignty. That’s a big shift. Old-school Superman would’ve never done that. But now we’re exploring questions like: What should someone with near-limitless power do? How does that fit within ethical and political frameworks?

Jacobsen: Effective altruism tackles those questions—except in real-world terms. It asks: What does responsibility look like when you have the capacity to help on a large scale? Let’s pivot—what about digital sentience and artificial moral agents? Do you think about expanding the moral circle to include simulated beings?

Rosner: Yes. That ties into the idea of throwaway suffering. In the future, we may have video games or virtual worlds with non-player characters that are artificially conscious—or at least experience simulated suffering.

Jacobsen: And that would pose ethical problems. If we’re creating digital beings capable of experience—pain, joy, desire—then ignoring their welfare becomes morally problematic.

Rosner: I can imagine a future—maybe not in the next 40 years, but perhaps in a century—where people can revisit and relive parts of their own lives. Not just as memory, but as high-fidelity simulations. They could go back and “redo” things they got wrong the first time. And if those simulations involve other sentient agents, the ethics compound. You’d have to think carefully about what suffering you’re reintroducing—or even manufacturing—just to replay a scene from your life.

I can imagine simulations in which the artificial people you’re interacting with possess artificial consciousness. That raises a major ethical question: what rights do these beings have inside simulations?

What are the ethics of simulating the consciousness of, say, a woman you always wanted to be with—who thought you were a creep in real life—but now in the simulation, she’s programmed to desire you? What does that say about consent, autonomy, and simulated coercion?

In the simulation, she’s a construct of your design, compelled to like you. But does she cease to exist when you turn off the game? If she’s conscious—or even partially aware—what obligations do we have to her?

And it’s not just romantic scenarios. What about characters in games who have some level of awareness and die over and over again? Do they experience anything? Does it matter if they might?

It reminds me of Harlan Ellison’s I Have No Mouth, and I Must Scream. It’s more than 50 years old, but still one of the most haunting depictions of artificial consciousness abuse. A supercomputer wipes out humanity, except for five people it keeps alive digitally—solely to torture them forever. They can’t die, because they’re not truly biological anymore. It just finds new ways to make them suffer.

That’s the nightmare scenario. It’s fiction—but it raises real philosophical issues as we approach more complex AI and simulations.

Now, what was the original question?

Jacobsen: It was about moral uncertainty—specifically, the expected value of action under uncertainty. We live in a universe that’s ontologically uncertain and epistemologically constrained. The world itself is incomplete and chaotic, and our methods of understanding—via senses or scientific tools—are limited. So how do we make moral decisions in a landscape defined by uncertainty?
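[Editor’s note: One standard way to formalize decision-making under moral uncertainty is to weight each ethical theory’s verdict by your credence in that theory and choose the action with the highest expected score. The credences and verdicts below are hypothetical.]

```python
# Expected-value reasoning under moral uncertainty: weight each theory's
# verdict by your credence in the theory. All numbers are hypothetical.

credences = {"utilitarian": 0.5, "deontological": 0.3, "virtue": 0.2}

verdicts = {                      # how choiceworthy each action is per theory
    "intervene": {"utilitarian": 10, "deontological": -5, "virtue": 3},
    "abstain":   {"utilitarian": 2,  "deontological": 4,  "virtue": 2},
}

def expected_choiceworthiness(action):
    return sum(credences[t] * verdicts[action][t] for t in credences)

for action in verdicts:
    print(action, expected_choiceworthiness(action))
print("choose:", max(verdicts, key=expected_choiceworthiness))
# intervene 4.1, abstain 2.6 -> choose intervene
```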

Rosner: We do have a pretty good grasp of the world right now, within certain domains. But I’d say the bigger problem isn’t uncertainty per se—it’s what happens as we shift from human to transhuman to posthuman futures. That introduces profound unknowns. Our current frameworks will need major revisions.

Jacobsen: So, in that sense, the uncertainty isn’t just about the present—it’s about the radically unstable nature of what comes next. Everything is on the verge of being upended. Next up: negative utilitarianism or suffering-focused ethics.

So instead of focusing on maximizing happiness, you focus on minimizing suffering. The valence isn’t about what’s good or bad in a binary sense—it’s about what gets emphasized. David Pearce has been a major advocate of this view.

So under this model, increasing happiness is still good—but reducing suffering is even more important. You weight it more heavily. Every moral framework has dials. You adjust how much you value happiness versus suffering. But these need to be grounded in the real world.
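[Editor’s note: The “dial” described here can be written as a single weight on suffering. The weight and the policy numbers below are hypothetical, chosen to show how turning the dial can flip a ranking.]

```python
# A toy "dial" for suffering-focused ethics: suffering counts more than
# an equal amount of happiness. Weight and policy values are hypothetical.

SUFFERING_WEIGHT = 3.0  # 1 unit of suffering offsets 3 units of happiness

def weighted_utility(happiness, suffering, w=SUFFERING_WEIGHT):
    return happiness - w * suffering

policy_a = weighted_utility(happiness=100, suffering=20)  # classic net: 80; weighted: 40
policy_b = weighted_utility(happiness=60,  suffering=0)   # classic net: 60; weighted: 60

# Classic utilitarianism prefers A (80 > 60); with the suffering dial
# turned up, B wins (60 > 40) -- same facts, different emphasis.
print("prefer:", "B" if policy_b > policy_a else "A")
```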

Rosner: So yes, you want to assign meaningful weight to reducing suffering, but you don’t want to become so draconian that you outlaw joy until every last bit of suffering is gone.

Systems need balance. Capitalism does a terrible job of distributing value and well-being, but in many cases, it outperforms alternatives—like communism—which historically made more people miserable.

Market forces tend to generate some happiness—even while causing immense suffering, particularly in the modern U.S., where income inequality is arguably the worst it’s ever been. And yet, people are more entertained and distracted than ever before.

Jacobsen: So you could argue there’s been a net average increase in happiness—at the cost of justice, equality, and sustainability.

Rosner: Which leads to the question: is what we’re doing in America today even capitalism anymore? Or is it oligarchy—where the ultra-rich dominate, hoard power, and effectively shape the system in their favor?

Jacobsen: Looks like oligarchy to me.

Rosner: Agreed. So yes, everything has to be weighted. The real world has different distributions of happiness and misery depending on the region, the culture, and the conditions.

