
Ask A Genius 1484: Humanism, Religion, and AI: Rick Rosner and Scott Douglas Jacobsen on Emerging Ethical Challenges

2025-11-08

Author(s): Scott Douglas Jacobsen

Publication (Outlet/Website): Ask A Genius

Publication Date (yyyy/mm/dd): 2025/08/11

In a thought-provoking dialogue, Rick Rosner and Scott Douglas Jacobsen explore the evolving challenges to humanism, from historical conflicts with religion to the disruptive potential of artificial intelligence. They discuss the Humanists International Minimum Statement on Humanism, the Luxembourg Declaration on AI and Human Values 2025, and the alignment problem between AI and human ethics. Topics range from brain–computer interfaces and technological augmentation to the possibility of extending human cognition and lifespan. Drawing on examples from science fiction and cultural narratives, they consider how humanist values—reason, compassion, dignity, and freedom—can guide humanity through profound technological and societal transformations.

Rick Rosner: I would assume—being very naïve about humanism, or at least uninformed—that one of the major factors hindering humanism in the past has been religion. Religion postulates superior beings with their own religion-based rules, which are not necessarily to the benefit of human beings. Right? I assume religion has been a stumbling block for humanism, along with other forces such as fascism.

Has anything else been detrimental to humanism? I mean, I assume that humanism is the idea that—well, you can read the definition if you want. Go ahead and do that. 

Scott Douglas Jacobsen: But we have thought about this quite thoroughly. We have a minimum definition you must accept to be considered a humanist. So, before you embarrass yourself, here is the minimum statement on humanism.

This is the Humanists International (formerly the International Humanist and Ethical Union, or IHEU) Minimum Statement on Humanism, first adopted in 1996 and still in force today. Humanists International was previously known—as I am hyper-enunciating for the transcript—as the International Humanist and Ethical Union because it was a global umbrella organization for humanist, rationalist, secular, and ethical culture groups. The name was officially changed to Humanists International in 2019.

In 1996, at the IHEU General Assembly in Mexico City, the following statement was unanimously ratified:

Any organisation wishing to become a member of IHEU is now obliged to signify its acceptance of this statement:

Humanism is a democratic and ethical life stance, which affirms that human beings have the right and responsibility to give meaning and shape to their own lives. It stands for the building of a more humane society through an ethic based on human and other natural values in the spirit of reason and free inquiry through human capabilities. It is not theistic, and it does not accept supernatural views of reality.

In that intricate yet straightforward definition, you can see that religion, with its own set of values, has often conflicted with humanism. I take Noam Chomsky’s orientation on religion: you are positing some outside force intervening, often in your favour. For instance, there might be the idea of a divine plan in place, yet you are praying for an intervention in that plan to help you win a spelling bee, improve your finances, find a spouse, or repair your marriage.

Somehow, that outside force is supposed to insert itself into the whole order of things to manipulate circumstances in your favour. That is the supernatural, and humanism rejects it entirely.

One of the simplest demonstrations of how praying for intervention can be absurd is when two sports teams, both made up of devoutly religious people, each pray, “Please, Jesus, let us win the game.” Inevitably, one side is going to be disappointed.

By that binary logic, God is deliberately disappointing one team if He favours the other. So, this kind of thinking breaks down easily in many ways.

Rosner: So, okay. That is well understood and established, even if you do not know a lot about humanism. However, now we have a whole new threat—and I am not sure if it is a threat to humanism, but it certainly is—and that is AI. It is being sold as a huge benefit to humanity.

At the same time, people are very fearful that it will wipe out humanity. Moreover, at the very least, it will impose its limitations. Eventually, we will not be able to—there is an alignment problem with AI, which is that as AI advances, it becomes increasingly challenging to align with human values. Go ahead.

Jacobsen: So, I was in Luxembourg at the General Assembly for Humanists International, the largest umbrella group for humanists in the world. There, in July 2025, we adopted a declaration that addresses your concerns about AI and related issues. It emphasizes that AI must reflect core humanist values: reason, compassion, dignity, freedom, and human flourishing.

The document adopted was the Luxembourg Declaration on Artificial Intelligence and Human Values 2025. It went through an extended international process of discussion and development. The document—now a formal policy—contains ten main points, each with expanded discussion. The bolded subheadings are: human judgment; shared good; democratic governance; transparency and autonomy; protection from harm; shared prosperity; creators and artists; reason, truth, and integrity; future generations; and human freedom and flourishing.

So, your concern is more oriented toward that tenth category—human freedom, and, really, only half of it: human flourishing.

Rosner: I see AI as potentially impinging on almost all of those ten points, if not all of them. Plus, in addition to AI imposing itself, there will come the power to raise other entities to human levels of cognition and feeling.

I am writing a near-future science fiction novel, set mainly in the 2030s, not far in the future. I am using a little hocus-pocus technology to make it more fun, but one aspect is this: when you chip people—I call it “mesh” in the book because it is a mesh that you lay over the surface of your brain—

Jacobsen: I call it “mush” because you are essentially combining everything to some degree. It is a more straightforward concept. 

Rosner: Anyway, when you do something like Neuralink—which is Musk’s product, and I am sure dozens of other companies are working on similar brain-interface technologies—you can communicate directly with the brain.

You do not have to go through a sensory pipeline. You can share whatever computation the brain is doing with external computational devices to transmit information more directly. If you do this, you can make humans more potent in their thinking, but you can also make dogs more powerful. Moreover, you can also link people—link people together.

Moreover, yeah, we have talked about all of this a lot. 

Jacobsen: From a humanist perspective, it is built into humanism that we acknowledge the shared humanity of every person and that they deserve all good things within practical reason.

Rosner: But when other entities rise to human levels of cognition, feeling, and judgment, you have to open the door to those beings too—and that adds to the mess of the future.

Jacobsen: If I had to put it schematically, I would say that religion’s incursions and attacks on humanism come from below—from primitivism, from beliefs in supernatural beings without explanation, or with explanations that themselves rely on the fantastic. However, this is an attack on humanism almost from above—from principles rooted in technology, which can be argued with mathematical precision. Whether that mathematical precision is legitimate or not, it still carries the heft of tech.

Rosner: You cannot effectively ask God to make you win every single one of your softball games—no technology gets God to intervene on your behalf. I mean, there is “prayer technology,” but it does not work. Technology, however, does work. So in a way, it is a more powerful incursion into human society, human values, and human structures.

We can mention the alignment problem with AI, which is that, ideally for humans, we would be able to control AI so thoroughly that what AI wants would never be out of alignment with what humans want. However, that may not be possible. Moreover, even if it were possible, it would require methodical, slow progress to ensure that AI is not getting out of hand. It would also require that the people and companies developing AI are not stupid and greedy, which, demonstrably, is not the case.

Go ahead.

Jacobsen: I will quote a 1991 policy—rather than the 1996 one—that was ratified by the Board of Directors of the IHEU, the former name of Humanists International. It is a shorter, older variation. Based on subsequent updates and debates, there are different orientations, but this is one of the most concise formal policy statements I have come across in our international community:

Humanism is a democratic, non-theistic, and ethical life stance which affirms that human beings have the right and responsibility for giving meaning and shape to their own lives.
It therefore rejects supernatural views of reality.

Full stop.

So, in a sense, you could have something like a pantheist god, but that is equivalent to Spinoza’s or Einstein’s idea that it is simply the laws of nature. There is nothing supernatural there—it is entirely naturalistic. However, religious gods, of the sort we have very little evidence for, generally do not directly interfere with people. AI, however, certainly will. 

Rosner: Moreover, what was the other point I was going to make? I lost it. You talk, and I will try to get it back.

Jacobsen: There is a symmetry between human thought and AI thought right now because the “skim” of human thought in language production—text—is used as the fundamental basis for interaction and representation of machine thought. If that is incorporated at the core, then the character of human thought—our interiority, rather than our physical appearance—will be represented.

In a way, our architecture of mind, averaged over a large number of human interactions, will be represented in these machines. That will deviate as time goes forward, but there is a core alignment.

I once heard an interpretation of the philosophy of The Matrix: the machines did not kill human beings, not simply because they needed energy and used them as batteries (which is probably not even efficient), but because their core programming prevented them from eliminating all people.

So they had to build a system that provided for them. In some sense, it is a warped form of anger at human beings for fighting against them. However, they also cannot kill them, so they incorporate them into their energy base.

Rosner: Alright. The humanist perspective wants people to live their best, most fulfilled lives within the understood human framework of, say, the twentieth century. Just as an example: you are born, you learn, you become actualized, you do what you want creatively, perhaps raise a family, and you try to be fulfilled. Then you get old and die.

Humanism does not have a problem with any of that—that is the way humans have lived for tens of thousands of years. Humanism wants everyone to live their best life, and that includes death as part of the human experience.

Now, with the coming of advanced technology, human patterns of existence are going to be disrupted. We already have tech billionaires spending enormous sums and conducting extensive research to extend their lives, either within their bodies or by attempting to download their connectomes so their consciousness can continue beyond the lifespan of the body.

So, what is humanism doing to prepare for this onslaught of disruption? 

Jacobsen: There is still a chance that people could live their best lives in an AI-driven world. The key is to foster a culture of critical thinking, enabling people to navigate the technology effectively as it becomes available. That is anticipatory worldview-building.

However, that is not new—we have been doing that for fifty, sixty, seventy years, through, for example, science fiction. One of the greatest humanists—president of the American Humanist Association, president of American Mensa, and science fiction writer—was Isaac Asimov.

There is a vibrant tradition of this. One of the most significant humanistic trends I have observed, though not always explicitly described as such, is evident in popular anime like Dragon Ball, where robots and androids are portrayed not only as villains but also as allies—beings with their own distinct ways of doing things, which are respected.

Much Japanese culture reflects that, and anime often presents realistic portrayals of humanoid robots capable of complex, legitimate interactions with the world, without necessarily detracting from human life.

Rosner: In the future, I keep coming back to the movie Her, now about twelve years old. There, the issue is that humans form thoughts much more slowly than AI does, meaning humans may need augmentation to keep up with artificial thinking. However, we may be envisioning this wrong. We do not think everyone needs to be “souped up” because calculators can already help anyone perform advanced calculations without altering the mind. Augmentation may only apply to certain areas and activities of human thought.

Jacobsen: That might be more Blade Runner-realistic. The “mesh” will probably be messy. Some people will be permanently and significantly altered, while others will be only slightly or temporarily altered, depending on the situation. Some people will live largely human-to-human existences, while others will interface constantly with technology.

Some people are happy with just a phone. Others will want contact lenses but not interactive glasses. Some will prefer implants over contacts. You will have all these different tiers. However, in advocating for smaller and smaller increments of integration, you also hit rapid diminishing returns on the benefits.

Rosner: What will determine how this plays out—beyond political forces—will be economic forces: competition for resources and the nature of those resources. In the early days of AI, sheer power—electricity—will likely be a major, perhaps the primary, resource of value, fueling quintillions of calculations. 

And I do not know—how will unaugmented or slightly augmented humans obtain financial resources? How will they get money or power in an AI world? One argument I have made is that humans may turn out to be so cheap to maintain that anything we need can be had, since it will not cost much in the future economy.

Jacobsen: Alright. Sleep well.

Rosner: Thanks. Good night.

