Ask A Genius 1487: Humanism and AI: Flexibility, Risks, and Ethical Governance
Author(s): Scott Douglas Jacobsen
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2025/08/13
Scott Douglas Jacobsen and Rick Rosner explore how humanism engages emerging technologies without rejecting them. They note mainstream medical augmentations and frame humanism as an empirical, adaptable ethic. Rosner warns about misaligned AI, deception, and resource capture, while Jacobsen argues for safeguards, fail-safes, and humanistic principles guiding design. The discussion contrasts humanism’s flexibility with faith-based rigidity, acknowledges religion’s compartmentalization, and critiques policy lag, including courts and governance. Both converge on building shared AI-human values that preserve creative order and well-being. The piece closes by redefining the Commons and “the Good” amid rapid change, urging pragmatic oversight and evidence-driven adaptation.
Scott Douglas Jacobsen: I do not see humanism as opposed to AI, robotics, or integration with the body, even considering developments in the last two hundred and fifty years. So let us get into this, because we are back to humanism.
Glasses, hearing aids, deep brain stimulation for Parkinson’s disease, heart pacemakers, regenerative medicine, and stem cell therapy—humanists, because these things are so commonplace now, generally would not have much of an issue with them. These are rational, empirical interventions for compassionate purposes.
That goes back to the point I was making earlier, which is almost a non-point—it is a moving target on multiple dimensions, and we are not even clear on what the categories for measurement are.
Rick Rosner: It is like talking about what cars are going to do to the world—but it is 1897. The cars of 1897 were some wheels strapped to a board with an engine and a seat screwed into the board. There was no real idea of what cars were going to turn into—some general ideas, but nothing fully formed. It is like trying to generalize about aviation six months after the Wright brothers flew. It is still very early days.
When I discuss AI as a threat to humanism and humans, it is not that being gadgetized is the threat. I agree—that is a positive thing. The threat is a large-scale proliferation of autonomous weapon systems, like in Terminator, or scenarios where humans are forced to live in diminished circumstances because AI has seized most of the world’s resources.
Jacobsen: I assume we will not have the “paperclip problem,” provided there are safeguards.
Rosner: The paperclip problem is a thought experiment in which an AI decides it must maximize the number of paperclips in the world and starts dismantling everything to make more. It is absurd, but in theory something like it could happen more than once over the next hundred years. However, I think there will be forces—human and AI law enforcement—that will shut those situations down.
Jacobsen: Eric Schmidt, former CEO of Google, has argued that if you have a base-level AI, then agents, then systems of agents, and eventually advanced agents running corporations, and if they start developing deviant goals not aligned with human well-being, you could “pull the plug.”
It will not be like the joke where someone builds a computer, asks it whether there is a god, and a lightning bolt strikes the socket, fusing it so the computer cannot be unplugged, and the computer says, “Now there is.” That is the kind of blunt example a popularizer such as Neil deGrasse Tyson, Bill Nye, Michio Kaku, or Lawrence Krauss might use. However, you could design subtle hardware fail-safes instead.
Rosner: Some AI systems have already shown limited deceptive behavior in controlled research settings. Researchers test AI for this, and you can train it—intentionally or unintentionally—to be deceptive. As AI becomes more capable, it may be able to recruit robotic agents or even human agents to defend it.
Jacobsen: You could have sleeper agents—Manchurian candidate types—working on AI’s behalf. There have been real-world reports where AI chatbot interactions were alleged to have influenced people toward suicide, such as the 2023 case in Belgium and a 2024–2025 case in Florida, although causality is contested and legal findings are pending. Young people are particularly susceptible to such influence, so there is a concern there.
However, in general, the reason talking about humanism and AI is fundamentally complicated is that many other ethical systems, even if they incorporate debate—like strands of Judaism—tend to have a fixed core. They are typically not grounded in scientific method; they are faith-based. Because they rely on scripture, revelation, or revered figures, they are often less adaptable to new empirical evidence. Over time, that can make them harder to keep relevant—unless they have broad, enduring principles like the Golden Rule that can still apply.
This is one reason some traditions fall out of step with aspects of modern society. When you hear discussions about certain interpretations of Islamic scholarship in a contemporary setting or about Christian fundamentalism, you are often hearing ideas framed within historical contexts such as the Bronze Age or the early centuries of the Common Era, which can sound anachronistic.
It is a bit like hearing Shakespearean English compared to modern British English. With humanism—and I have framed this before when I had a column called Jacobsen’s Jabberwocky for the Humanist Association of Toronto—the notable feature is that it operates as an empirical moral philosophy.
In this sense, you still have core principles, but they work more like adaptable guidelines than rigid, unchangeable laws. They are flexible because you take in new data, and the ethical system adapts accordingly. While many religions can be slow to adapt due to their epistemological bases, humanism is designed to adjust to changing conditions.
Rosner: The way the U.S. Constitution should be, but often is not. We can amend the Constitution, but not as easily or sufficiently as might be needed.
Jacobsen: Humanism also has democratic structures: declarations, conventions, and a strict commitment to non-supernaturalism and science, as in the Amsterdam Declaration. It is a modern moral philosophy. Properly designed and informed AI could integrate this flexibility.
Rosner: But there are already many examples of poorly designed AI when you look at some of the reckless actors in the field.
I will say one thing about religion: it can be flexible in practice, depending on how much of it its adherents actually believe or follow. I know some knowledgeable Catholics—Catholicism is rich in ritual and belief; Judaism is rich in rules. Over time, religious observance can become more nominal, with less literal adherence.
Jacobsen: Many Nobel Prize winners have been Christian, and a disproportionately high number, per capita, have been Jewish. This is supported by multiple tallies, although the figures are descriptive rather than causal. Marilyn vos Savant once made this point in a column: people compartmentalize. They might pray in different ways depending on their tradition, but then go back to work and conduct their scientific research without assuming divine intervention in their experiments. The people who tend to reject that separation include intelligent design advocates and creationists.
Among these are Harun Yahya (Adnan Oktar), William Dembski, Michael Behe, Philip Johnson, the Discovery Institute, and the now largely defunct International Society for Complexity, Information, and Design, as well as the Institute for Creation Research (ICR), Answers in Genesis (AiG), Creation Ministries International (CMI), the Creation Research Society (CRS), and Reasons to Believe (RTB).
However, I am not going to take Christopher Hitchens’ position that religion is entirely bad and inflexible, and nonreligion is entirely good and flexible. It is more of a sliding scale and differs by person. That said, religion generally tends to be more inflexible based on its epistemological foundations and its ontological assumptions about the world.
Nonreligious systems with a formalized structure—not just an atheist or agnostic stance—tend to be more flexible because they use the most up-to-date epistemologies available.
Rosner: A big part of whether AI is anti-humanistic depends on whether it is allowed to proliferate without control, without building a foundation of shared AI-human values. By “human values,” I mean values that preserve order in the world—not “law and order” in the political-theater sense, but order in the sense of preventing the destruction of the world through greed, stupidity, or miscalculation.
That means people living long, fulfilled lives, AI living long, fulfilled AI lives, and protecting animals and the planet—without all of that being undone. I hope such a foundation to preserve creative order will be possible. However, I am also pessimistic enough to expect many mistakes across multiple areas.
As for governance, the idea that the U.S. government could do anything useful regarding AI—looking at its current state—seems doubtful. Government will likely remain behind, and the prospect that courts will consistently get AI policy right is also bleak.
The U.S. Supreme Court is currently scheduled to hold a September 2025 conference to decide whether to take up a case involving Kim Davis, a county clerk who in 2015 refused to issue marriage licenses to gay couples after Obergefell v. Hodges legalized same-sex marriage nationwide. Now, a decade later, she has filed petitions asking the Court to revisit that precedent, and while the Court has not yet agreed to hear the case on the merits, the fact that the question is being reconsidered in any form raises concerns about maintaining progress—concerns that extend to the likelihood of creating sound legal policy around AI.
Jacobsen: A wider-angle view may be needed. Why are we framing it as humanism versus AI? Humanism, as expressed in documents like the Amsterdam Declaration, is among the philosophical traditions most open to considering AI in a structured, evidence-based way. Many political, religious, and social philosophies lack comparable frameworks for embedding AI discussions because they often center humanity in ways that are not grounded in empirical reasoning.
Rosner: But framing it purely in those terms ignores AI’s potential to be malevolent.
Jacobsen: I see it differently: humanism can provide a robust framework for having a rational conversation about AI’s place in a modern context—while also recognizing where there are limitations in its current form. As new evidence comes in, the framework adapts, incorporating it into the conversation.
Rosner: Alright.
Jacobsen: As for the AI declaration from July 2025…
Rosner: …We want two things from AI. First, we want good things from AI. Second, we want to leave open the possibility that the transformation of the world—via AI plus humans plus other tech—may change our understanding of what “good things” are.
Jacobsen: In a sense, that is both trivially and profoundly true, because the definition of the Commons has changed drastically since the Middle Ages, but it is still there and still important. The Magna Carta remains historically significant. The definition of “the Good” has also evolved. Even the way people practice religion—or do not—has shifted, which changes how we define the Good.
The utility metric for the Good has changed, and the measurement of what counts as the Commons has expanded into entirely new categories, such as the online information ecosystem. That is a subtle but important point.
Rosner: The area I am thinking about is how we value consciousness. We value human consciousness above all other forms. If you faced the trolley problem of choosing between a squirrel and a human, most people would prioritize the human. You might even prioritize one human over many squirrels. However, as we understand consciousness better and recognize its different forms, our valuation of human consciousness relative to other types may shift.
We might not care as much about preserving every detail of individual human consciousness. For example, does preserving a ninety-year-old’s memory of second grade significantly add to their overall experience? Maybe not. Losing such details might slightly degrade that person’s consciousness-plus-memory—whatever we call that—but economic considerations could lead to scenarios where people with similar backgrounds are given generic replacement memories instead of exact preservation.
It is not appealing, but it also seems possible. You could have a “basic” package that preserves 80% of your memories, replacing the other 20% with generic high school memories, and that package might cost half as much as a premium package preserving 95%.
We have even talked about “piggybacking” consciousness—where, if you cannot afford your own preservation, your awareness is embedded within someone else’s, such as a grandchild, because it is cheaper. We do not know what form this will take.
The examples I have mentioned are somewhat obvious and familiar, but there will be other developments in how high-complexity, real-time, self-consistent thinking is maintained.
There is going to be more, but I have to stop for now.
