
Ask A Genius 1156: 2085

2025-04-30

Author(s): Rick Rosner and Scott Douglas Jacobsen

Publication (Outlet/Website): Ask A Genius

Publication Date (yyyy/mm/dd): 2024/11/09

*Interview conducted October/November, 2024.*

Scott Douglas Jacobsen: What about 2085?

Rick Rosner: In 2085, everyone will have an all-purpose robot that does everything. It’ll be like a smartphone, but with more functions—a Swiss Army knife of technology. They’ll be called “Obes,” I’d guess. They’ll start as “omnibots,” then shorten to “obots,” and eventually just “obes.” People’s strongest emotional connection with another being might be with their Obes, at least if it happens as early as 2085, though maybe later.

Most people will end up having sexual relations with their Obes. As for their shape, it’s unclear whether they’ll be human-sized, but they’ll definitely have features that make them sexually attractive. There’ll be plenty of arguments and outrage about how human connections are replaced by Obes.

2100 and beyond: I haven’t thought about that yet, so I don’t have any answers. But it came up during one of my discussions today. It’s obvious from the chatbots that, even though AI doesn’t know anything yet, can’t think yet, and isn’t conscious, it behaves as though it thinks. All of its responses come from conscious human writing.

So, it essentially behaves like it has mental faculties, but it’s just probability that makes it do so. It’s interesting that AI can become biased or inappropriate when enough people feed it biased or sexual content. It learns to imitate that behavior because of how it’s trained.

So, it’s clear that we should educate AI to have human values as much as we can until it becomes capable of thinking for itself. Even then, we should proceed cautiously. 

Jacobsen: Do you think we can keep AI under control by carefully training it?

Rosner: As soon as I think about it, I think the answer is no. But carefully engineering or limiting the messaging AI receives…

Jacobsen: Maybe, in the long term, it will be engineering us.

Rosner: It will. On the way to that, you’d want AI to have some utilitarianism built into it or trained into it, such as the greatest good for the greatest number. Depending on the AI’s responsibilities or the reasoning we ask of it, you don’t want it forced into actions in complicated situations where what’s good or bad isn’t so clear.

But what you wouldn’t want is for the AI to behave malevolently for no good reason, except that its training allowed for it. But, yes, the idea that we can control AI and its training beyond a certain point—that’s what you hear. People argue that we should control AI. We should limit it because it’s dangerous. You hear a couple of examples of what AI could do that’s dangerous.

If you tell it that its task is to make paper clips, the danger is that it could turn everything into paper clips. But beyond that, there isn’t much, and even then, people aren’t that upset. People are getting ready to be upset, but the threat doesn’t feel close enough for people to experience real distress. People aren’t arguing about specific, plausible things that AI could do. There’s the general worry, like the Terminator scenario, where AI starts a global nuclear war, and then it tries to mop up the survivors. But there’s not even any discussion about whether that is plausible at all. There’s a lot of generalized worry with no specific cases or strategies. Do you agree or not?

Jacobsen: Specific strategies are going to be the way to go.

Rosner: Yes. But, is anybody doing that?

Jacobsen: I would think some people are trying to figure out how to do that. Here’s my objection to everyone in the AI space: I don’t see this anywhere, but I will say it. What we are calling general intelligence, or superintelligence, will likely be categorized as narrow intelligence relative to some future image of how super AI will define itself.

Rosner: Yes. 

Jacobsen: So everything relative to that will be specialized, which is, in a way, a theological argument. If you take Ed Fredkin’s informational or digital-physics view and extend it to a Big Mind, then everything beneath that would, in a way, be a narrow form of intelligence, because the Big Mind is so vast by comparison. Its suppleness would be incredible by comparison, though it would still be structured and function by rules.

If you take a modern, non-theological, non-magical way of looking at that, you could have big mainframes a kilometer wide by a kilometer long by a kilometer deep, even, with super transistors or quantum computers that have noise effects worked out, and those would become the equivalent. So, if you were to compare us to that on a curve, calling it just the next step on the curve might be a mischaracterization, because it will be able to go in different directions.

So, in a way, Ray Kurzweil is quite simplistic, though accurate, when he uses his Law of Accelerating Returns. What we’re getting at is that it’s almost as though, if you were to add a z-axis to the Law of Accelerating Returns, facets of intelligence would begin to fracture off in all kinds of directions once agency is built into it.

Rosner: That raises the question: Is there one dominant superintelligence that uses its superintelligence to amass all the computational resources in the world, or will there be competition among various superintelligences? And how nasty will they be when they fight with each other? I imagine that in the future, we’ll see AI wars that will wipe out a lot of stuff.

Jacobsen: Nature gives us a good example of this. We’re going to have to own the fact that any lifeform, no matter what it is, will take up some niche. There will be competition and cooperation. Was it Kropotkin who wrote *Mutual Aid: A Factor of Evolution*? You have competition and mutual aid or cooperation. Similarly, you’ll have this with what will be valuable for computation. More computation might not necessarily be the most important factor in getting more computation. But there might be other ways of looking at resource extraction to get more computation. Because at some point, there needs to be homeostasis.

Rosner: So, there could be naive philosophizing by AIs that decide nothingness is better than existence. Because nothingness means there’s no struggle, no consciousness, so you might as well burn it all down. I could see that happening periodically with AIs, and so the most powerful AIs would be on guard for stuff like that. I see a non-zero chance that the most powerful AIs could be paternalistic.

They’re ruthless, maybe, in defending their existence and the existence of things they value, but with an eye toward order and utilitarianism. What do you think?

Jacobsen: Sounds like a benevolent dictatorship by nature, being awake. 

Rosner: Doesn’t it make sense that, among the various, at least temporary outcomes, this is one that’s not necessarily guaranteed but has a non-zero chance of happening?

Jacobsen: It could be the reason for the Fermi Paradox, where super-advanced intelligences have a Prime Directive-style ethic: Why bother? And don’t mess with lower-conscious systems.

License & Copyright

In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. © Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use or duplication of this material without express permission from Scott Douglas Jacobsen is strictly prohibited. Excerpts and links may be used, provided that full credit is given to Scott Douglas Jacobsen and In-Sight Publishing with direction to the original content.
