Ask A Genius 1485: Humanism in the Age of AI: Ethics, Displacement, and Human Flourishing
Author(s): Scott Douglas Jacobsen
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2025/08/12
Scott Douglas Jacobsen and Rick Rosner discuss the intersection of humanism and artificial intelligence, exploring whether AI poses a genuine threat to humanity or offers primarily additive benefits. They examine humanism’s ethical grounding in the Golden Rule and its focus on maximizing human flourishing amid historical challenges like war, disease, and industrial change. AI’s rapid advancements raise questions about displacement—not just of labor, but of human decision-making and influence. They consider AI’s potential priorities, philosophical divergence from humans, and its role in shaping societal values, emphasizing the need to preserve human creativity, empathy, and critical thought in a rapidly evolving world.
Scott Douglas Jacobsen: We are continuing on humanism.
Rick Rosner: So, nobody knows at this point. We are about three years into AI’s large-scale public exposure. In that time, AI’s capabilities have proliferated—not just in large language models, but also in image, graphics, and video generation. Experts’ opinions vary widely: some say “AI is limited, will always be limited, and is not a real threat.” In contrast, others warn that “AI may continually improve its capabilities, eventually surpassing human intelligence, and could act in ways harmful to humanity.”
Now, what is the tie-in to the previous conversation? Humanism values humans within a framework of ethics determined by humans. It is grounded in the Golden Rule: humanism seeks for every person to live the most fulfilled life possible. It helps define what a fulfilled life means and examines forces that work against human flourishing.
Historically, forces counter to humanism have included fascism, war, famine, disease, and—depending on their use—religion and politics, when they serve to increase human suffering without cause. AI now appears to be a new kind of challenge.
Human quality of life has been threatened by technological and industrial change before, for example, during the Industrial Revolution, when dangerous working conditions were common. Harsh environments have persisted across history: London in the nineteenth century, when coal pollution made the air hazardous, or modern-day cities like Beijing, which have faced severe air pollution. These conditions have never supported a fully flourishing life.
However, AI may represent a qualitatively different kind of challenge. It has the potential to outperform humans in many cognitive tasks, which could displace us as the most capable problem-solvers and decision-makers on the planet. That shift could reduce the perceived value of human labour, skills, and even human decision-making authority.
Scott Douglas Jacobsen: So, let us take a deeper dive. We use terms like “apex thinkers,” and I want to unpack that more. On the one hand, framing this as “apex thinkers being displaced” may overemphasize the role of abstract reasoning in human worth. Human value is also tied to many other things: relationships, creativity, care, empathy, art, and the ability to find meaning. I know you are not ignoring those, but I want to be clear that rational thought is only one dimension of humanity.
Rosner: I agree. Our consciousness—our lived human experience—is among the most valuable things we have. Love, family, having children, appreciating nature—all of these matter. However, being “outthought” is the lever technology could use to displace us in many areas of influence. Unlike humans, AI does not feel emotions, form attachments, or have subjective experiences; it processes information without consciousness or empathy.
I mean, maybe that will be part of the displacement, but thinking has been our primary tool for gaining dominion over the planet. I am thinking that AI’s “thinking” will be its tool, as opposed to all the other things we value in ourselves.
Jacobsen: Another question tied to this is the nature of displacement itself: what exactly do we mean by “displacement”?
Also, beyond the general idea of displacement, what will be its specific character? I do not know if we have a clear answer to that. For instance, people still have weddings, but now they record them and take photos. That is an addition for many people—it has not eliminated the practice.
Rosner: When I think about displacement, the first thing that comes to mind is competition for resources. AI is going to need resources. Currently, the primary resource AI requires is energy for computation and the necessary infrastructure, including servers and circuitry. Eventually, it will seek land, orbital resources, or permission to build large-scale, high-impact energy facilities such as cheap fission reactors. I do not know. However, it will have its priorities eventually, and displacement comes when AI’s priorities do not align with human priorities.
Jacobsen: So, we do not yet know the character of that conflict. At this point, projecting forward, we do not even know the proper categories to measure and calculate the type, degree, and timing of potential displacement. We are in a period of uncertainty—even for people who think about this constantly, like Yuval Noah Harari, Max Tegmark, and Nick Bostrom, the theorist who devised the “paperclip maximizer” scenario.
Rosner: Also, AI may eventually have philosophical differences with us. In science fiction, for example, some AIs tell humans, “You need to join us; we will not allow you to remain separate.”
Jacobsen: We can even imagine minor, ironic footnotes to this. I will send you a photo that you might find funny.
Rosner: Go ahead.
Jacobsen: It is a screenshot of Altman and Musk fighting on Twitter, with Grok weighing in. You have to click it to see it. I will read it out: Elon Musk says, “You got 3 million views on your bullshit post, you liar—far more than I have received on many of mine, despite me having many times your follower count.” Altman replies, “Will you sign an affidavit that you have never directed changes to the X algorithm in a way that has hurt your competitors or helped your own companies? I will apologize if so.”
Idia—the handle—asks Grok, “Who is right? Do not be biased.” Grok responds: “Based on verified evidence, Sam Altman is right. Musk’s Apple antitrust claims are weakened by the fact that competitors like DeepSeek and Perplexity have thrived in 2025. Conversely, Musk has a history of directing algorithmic changes on X to boost his posts and various inputs, according to 2023 reports and ongoing probes. Hypocrisy noted.”
So, in terms of how AI will affect our lives, it will be more additive than subtractive or destructive, at least early on.
Rosner: However, I do not know. AI may want to herd humans and direct our thinking.
Jacobsen: Humans want to herd humans and direct our thinking.
Rosner: Yes.
Jacobsen: That is often what a multinational corporation or a significant political movement does. Herding is frequently counter to humanism. In a recent email exchange with a humanist, the issue of complacency came up, along with the problem of the nullifying of critical thought—sometimes through religion, though not always. Complacency can also develop through consumerism, multinational influence, and large-scale advertising.
Rosner: Because corporations bring us many things that are convenient or enjoyable, we tend to overlook the ways they might be exploiting us.
Jacobsen: In many ways, yes. Since we are trying to get at the idea of human flourishing, we need to ask: what does that mean, and how do we optimize it for a person’s talents, temperament, personality, context, and the community they are in, given what is available in their society?
For example, suppose you are in Haiti, poor, and have minimal access to education due to insufficient infrastructure. Working on a cruise ship might be a significant improvement in quality of life compared to your local options. However, if you grew up in Silicon Valley with two PhD parents, completed postdocs at MIT, and now work on generative AI for Amazon, your opportunities—and thus your definition of flourishing—will be vastly different.
Humanism accounts for the fact that there is an objective world, that there are intersubjective social contracts, and that there are relative subjective differences. Those differences do not negate the objective world or our attempts to approximate it through intersubjective social norms, even if those norms are subjectively interpreted.
© In-Sight Publishing by Scott Douglas Jacobsen 2012–Present. Licensed under Creative Commons BY‑NC‑ND 4.0.
