Ask A Genius 1390: AI Sex Robots, Ethical Dilemmas, and the Rise of Machine Agency: A Deep Dive from Berlin to 2035
Author(s): Rick Rosner and Scott Douglas Jacobsen
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2025/05/27
Rick Rosner is an accomplished television writer with credits on shows like Jimmy Kimmel Live!, Crank Yankers, and The Man Show. Over his career, he has earned multiple Writers Guild Award nominations—winning one—and an Emmy nomination. Rosner holds a broad academic background, graduating with the equivalent of eight majors. Based in Los Angeles, he continues to write and develop ideas while spending time with his wife, daughter, and two dogs.
Scott Douglas Jacobsen is the publisher of In-Sight Publishing (ISBN: 978-1-0692343) and Editor-in-Chief of In-Sight: Interviews (ISSN: 2369-6885). He writes for The Good Men Project, International Policy Digest (ISSN: 2332–9416), The Humanist (Print: ISSN 0018-7399; Online: ISSN 2163-3576), Basic Income Earth Network (UK Registered Charity 1177066), A Further Inquiry, and other media. He is a member in good standing of numerous media organizations.
Scott Douglas Jacobsen and Rick Rosner discuss Berlin’s controversial AI-operated “cyberbrothel,” raising urgent ethical questions about consent, violence, and societal norms. They explore future scenarios involving conscious AI sex robots, examine agency and emergent personhood, and reflect on humanity’s diminishing ethical control as AI intelligence accelerates beyond human comprehension.
Scott Douglas Jacobsen: There is the broader discussion of AI ethics and its societal impact. There’s a controversy right now involving the city of Berlin. Want to guess what it’s about?
Rosner: The city of Berlin?
Jacobsen: Yes. It involves a new AI initiative that some residents feel is just a little too futuristic. The specifics are still emerging, but it has stirred debate around whether these advancements are outpacing public readiness or oversight.
A new phenomenon has emerged in Berlin: the introduction of what is being called the “cyberbrothel,” reportedly the world’s first AI-operated brothel. Clients are offered highly customizable AI sex dolls programmed to fulfill specific fantasies—some of which are misogynistic or violent. This has raised serious ethical concerns, particularly around the normalization of harmful behaviors through artificial intelligence.
I have two quick thoughts on that—and then I’ll build on them. First, yes, that is a valid concern. Repetitive behavior, even in simulated environments, can shape and spread certain patterns. Second, people were already misogynistic and violent before AI ever entered the equation. The existence of this technology does not create those tendencies—it reflects them. So, yes, some men are—and have always been—predatory or creepy.
Rick Rosner: Right. I can speak from personal experience here. Years ago, when I was younger and more interested in sexual exploration, I attended a meeting of the Eulenspiegel Society in New York City. It’s an S&M organization, and I thought I might meet someone there who was interested in consensual pain-exchange dynamics—maybe someone who wanted to inflict pain in exchange for sex. I figured I had a decent pain threshold and was open to the idea.
Jacobsen: And what did you find?
Rosner: In my limited experience, that kind of mutual, consensual arrangement—where a woman wants to cause pain and the man consents in a sexual context—is extraordinarily rare. Rare to the point of being nearly fictional, often imagined by men rather than genuinely sought by women. When I attended the Eulenspiegel event, it was 100% men—mostly middle-aged, wearing cheap polyester suits, many of whom gave off the vibe of a junior high vice principal. They had clearly come in from New Jersey and were interested in dominating women, not participating in any kind of egalitarian or reciprocal experience. It was disturbing and sleazy.
So the creepiness was already there, well before the introduction of AI sex robots. And that’s just one kind of creepiness. With these AI sex dolls, some men may want to simulate abusive behaviors—hitting the robot, extinguishing cigarettes on it, urinating on it. Others might push even darker boundaries, such as requesting robots that resemble underage girls. Which raises deeply troubling legal and ethical questions: Would such designs be legal? Would enforcement even matter?
Jacobsen: Technology, as always, becomes a vehicle for both the mundane and the depraved. And this isn’t a new conversation—it echoes debates from earlier eras.
Rosner: In the 1970s, there was a cultural battle over pornography. Some believed that access to pornography reduced sexual aggression by giving men an outlet—they would masturbate at home and be less likely to commit sexual violence in public. Others argued the opposite: that frequent exposure to porn would lead to desensitization and an increased likelihood of acting out aggressive fantasies.
Jacobsen: But that entire debate became largely irrelevant.
Rosner: Yes, because porn became unstoppable. Technologically and socially, there was simply no way to contain it. And now, we see a similar dynamic emerging with AI-powered sex robots. The same two arguments are surfacing again: Do these technologies provide a safe outlet, or do they habituate users toward harm? This question is becoming increasingly urgent.
As an aside, I should mention that in Companion—a story I wrote—there’s a character who is, in fact, a robot designed for sex. So these themes have already begun to permeate fiction as well.
Companion is a really fun and entertaining film. I recommend seeing it. That said, I want to return to the debate surrounding AI sex robots—specifically, the claim that if men are allowed to enact violent fantasies on robots, they will be less likely to do so with real women. I do not find that argument persuasive.
I am not certain that engaging in violence with a robot sex worker will necessarily make someone more likely to harm real women—but I am confident it will not make them less likely. There is no evidence that this would reduce harmful behavior, and it certainly does not contribute to making society better. Unless, hypothetically, the robots develop a degree of agency—enough to influence human behavior for the better by manipulating abusers into becoming less abusive. But we are far from that point—at least eight to ten years away, optimistically.
There’s another aspect of this debate worth exploring. Not with current AI sex robots, but with future iterations—say, in 2035—where these machines may possess full agency or even consciousness. Imagine a scenario where an AI sex robot is intelligent enough to understand human psychology, participate in ethical negotiation, and engage in a dynamic exchange of consent. What if the robot were a fully conscious actor, capable of evaluating, consenting to, or rejecting terms?
Take, for example, a hypothetical 2035 robot prostitute who says, “You’re generally tolerable, but you have violent tendencies during sex. I am capable of being damaged and repaired, and because I find you—or your money—acceptable, I agree to this interaction at a price.” If it’s a business arrangement, the robot might say, “Choking me to the brink of simulated death is a $5,000 service.”
Now, I am asking: is that sort of negotiated, conscious exchange between a sentient robot and a human fundamentally more ethical than, or at least preferable to, the current scenario in Berlin, where unconscious, non-sentient robots are used to simulate violence with no agency or consent?
Jacobsen: But would you agree that both situations—our present and the possible future—exist on an ethical continuum? We already have one form of this happening, and the other is likely coming. These scenarios raise profound ethical and philosophical questions. These debates resemble those surrounding abortion. The positions fall into established categories, but the evidentiary quality varies—and much of it lies along a continuum. Many of these questions revolve around ethical gradients: When does agency emerge? When do we consider a system to possess a self or consciousness? Much like the question of when personhood begins, the challenge is definitional as much as it is empirical.
But what we are dealing with is an emergent property. Taken in isolation, a simple feedback loop is no more sophisticated than, say, a plant exhibiting heliotropism by turning toward the sun, or a thermostat regulating temperature. These are basic feedback systems.
Rosner: But we are moving toward building systems so advanced that we will eventually need ethical guidelines for how humans treat them. It will not be sufficient to regard them as mere machines. As we have discussed before, we are going to make mistakes in this process—inevitable ones. But ideally, we will learn from them, especially when our intent is rooted in concern about discouraging harmful behavior outside those systems.
Jacobsen: I am short on time, but let me add this: eventually, it will be the AI systems themselves that will need to develop ethical frameworks for how to treat us. As their capacity surpasses ours, they may take on more decision-making power. Their ethical reasoning might even be more consistent than ours. And that consistency might come with trade-offs we do not particularly like. The rise of augmented humans and autonomous AI could generate new ethical paradigms—ones that challenge our comfort zones.
Think of it like this: if someone today presents scientific evidence, offers educational material, and constructs a thoughtful, compassionate argument for why evolution should be taught in schools, that person is clearly advocating for societal benefit. But if their audience consists of individuals with fundamentalist religious views, those individuals may experience real emotional distress. Still, the superiority of the argument remains with the advocate of science education.
I am drawing a parallel—suggesting that, in the future, an AI could similarly present arguments that are ethically and intellectually superior, even if we humans are emotionally or ideologically resistant to them.
What I am saying, in short, is that things are going to get strange, especially in the realm of ethics. If AI systems surpass us in intelligence by several orders of magnitude, then we occupy the position of the confused listener in that earlier analogy: an AI addressing us would be like the science educator addressing someone whose worldview is shaped by confusion or misinformation. There may be justifications for the AI’s decisions that we simply cannot yet grasp.
Rosner: And we are no longer in the realm of speculation. I have to say one final thing before I go. You and I have been talking about these possibilities for eleven years. At the outset, the idea that something could be smarter than humans was just a possibility—an abstract one.
Now, it has moved from possibility to near certainty, and at breakneck speed. Even when AI-generated art first appeared, just two and a half or three years ago, the trajectory was not yet clear. But now the writing is no longer just on the wall; it is right in our faces.
Tomorrow, same time?
Jacobsen: Most likely earlier. I was late today because I was working on projects.
Rosner: Got it. So much to do.
Jacobsen: See you then.
Rosner: Enjoy the rest of your work. Thanks. Take care.
Jacobsen: Bye.
Last updated May 3, 2025. These terms govern all In Sight Publishing content—past, present, and future—and supersede any prior notices. In Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons BY‑NC‑ND 4.0 license; © In Sight Publishing by Scott Douglas Jacobsen 2012–Present. All trademarks, performances, databases & branding are owned by their rights holders; no use without permission. Unauthorized copying, modification, framing or public communication is prohibited. External links are not endorsed. Cookies & tracking require consent, and data processing complies with PIPEDA & GDPR; no data from children < 13 (COPPA). Content meets WCAG 2.1 AA under the Accessible Canada Act & is preserved in open archival formats with backups. Excerpts & links require full credit & hyperlink; limited quoting is permitted under fair dealing & fair use. All content is informational; no liability for errors or omissions. Feedback is welcome, and verified errors are corrected promptly. For permissions or DMCA notices, email: scott.jacobsen2025@gmail.com. Site use is governed by BC laws; content is “as‑is,” liability limited, users indemnify us; moral, performers’ & database sui generis rights reserved.
