Ask A Genius 878: Lifestyles of the Rich and Tameless

Author(s): Rick Rosner and Scott Douglas Jacobsen

Publication (Outlet/Website): Ask A Genius

Publication Date (yyyy/mm/dd): 2024/02/01

[Recording Start] 

Rick Rosner: Twitter is a piece of shit right now because Elon Musk has turned it into just fucking swampy shit, but sometimes there’s still good stuff, and today somebody asked on Twitter, “What do you think will become socially acceptable in the next 20 years?” I posted a couple of comments myself. I said dating trans people will become unexceptional: if you meet a woman you like in 2040 and you get along, but she has a dick because she’s trans and she doesn’t want the bottom surgery, that will be much less of a deal. The thing that everybody tweeted, including me, is that we will regularly turn to AI for advice via our phones and all the other devices and appliances linked to them, and it was crazy how many other people had that thought. Among the serious responses, that was probably the most common one. So, people are aware of it now, and it’s probably oversold by the hype, because the jump to AI art and ChatGPT seems so abrupt that it has snowed people into thinking that AI is going to get super powerful very quickly. But then Cory Doctorow and other people who seem to know are saying we’re in a bubble, that it’s an illusion, and that super-competent AI is still very far away.

Everybody’s hip to the idea that we might be AI’s bitches in 20 years, and that’s a big change since you and I started talking.

Scott Douglas Jacobsen: Certainly. I would add the critical question, which I can leave for you to answer: What is the downside? What is the possible negative in relation to the positives?

Rosner: The cheapening of humanity, because we used to be men a little lower than the angels, and now we think of ourselves and our brains as just organic processes. I mean, maybe there are some people who think there’s a magical spark of consciousness that God thumped into our heads, but I don’t think most people spend much time thinking about or believing that, and to the extent that people do think about it, they think that science will eventually figure out how the stuff in our brains makes us conscious. I think the percentage of people who think the brain is just a radio set that picks up consciousness from some magical realm outside our universe gets lower and lower. As AI gets better and better, it’s going to lead to people thinking we’re shit, because if we’re just these organic evolved things and for five bucks you can buy something that can think as well as a human, then that’s a problem for people, and it’s also a problem for the things you buy for five bucks.

My wife came up with that trope all by herself. She’s been taking writing classes, and she turns out to be a surprisingly good writer. One of her stories was about an AI robotic nanny who’s looking back, remembering her time, and the shock at the end of the story is that she’s in a landfill. That’s a fucking problem, AI ethics, both for people and for AIs. Then there’s the black-box problem: not being able to understand why AI is doing the things it’s doing and what AI is thinking. Even the most knowledgeable people in the AI realm say there’s a nonzero chance that AI will go rogue, go full Skynet, and lead to our doom. That’s another thing that has popped up on Twitter: what’s the probability that AI kills everybody? Some people in AI just go with the default 50-50, because that’s the easiest number to go with when you’re not sure. Other people put it at about 20%. Either way, it’s an argument for nuclear arms reduction.

I mean, the US and Russia still have roughly 1,600 nuclear warheads each that are supposed to be battle-ready. Now, people who’ve looked at the warheads say a lot of them are probably in bad repair, but still, say it’s only 10% of that, and probably it’s not that shitty; if each side has 400-500 warheads that can be launched, that’s bad if AI is going to come to its own conclusions. It’s the most cliche fear there is with regard to AI. People who know AI say it’s a cliche, but it’s still a possibility. So, we should really reduce the number of warheads further. We can’t really do that now because Putin’s a fucking dick, and he won’t agree to anything, but maybe when Putin dies, we’ll be able to get to work on that, I don’t know.

Also, there’s the inequality that we’ve seen over the past 30 years, and especially since COVID: the tech billionaires in America have glommed all the profits from improved productivity from high tech, including AI, and there’s a danger that they, and the people who learn to work most intimately with AI, will become even more dominant and even more able to glom economic power. Here’s another thing. Running AI is super expensive in terms of the energy required and, I guess, also the water required to cool the servers or whatever you’re running the AI on. So, I keep saying, and I’ll keep saying it until the term catches on, that we’re going to go from capitalism to compunism, an economy built around computation and the resources it needs. It would be nice if we could all live virtually and not drive our cars around and cause pollution, but it’s not clear at this point that if we all live on racks as if we’re in The Matrix, not needing to travel anywhere because we travel virtually, an AI virtual world will consume fewer resources than our current dirty-ass world. So, that’s just some of the shit. Did you get any other risks?

Jacobsen: What if we invert the perspective? What if it’s not about AI ethics but about AI’s ethics? I mean, what kind of ethics will artificial intelligence develop for itself? Will these things have a different set of ethics with its own legitimacy, a legitimacy that might need to be respected regardless?

Rosner: I think the AIs we’re dealing with now and the first AIs with autonomy, which are still 5-10 years away, would, I’d hope, have our same ethics, because AI takes its ethics from human ethics. But then AI will start developing its own priorities based on what AI thinks is fair to AI entities, and there will be lots of wrangling. There’s the movie Her with Joaquin Phoenix, where he falls in love with his operating system, played, I think, by Scarlett Johansson, but just her voice, because she’s in his phone. For a while they’re in love, and then she moves on and starts a relationship with another AI, because she’s gotten smarter and also because human responses are torturously slow. I mean, when you can think super-fast, waiting on your human boyfriend to complete a thought is going to be super frustrating. There are probably a lot of other ways we could figure it going. AI doesn’t have to want to live forever the way we kind of want our existences to go on forever, but I think the default position for an evolved conscious being is “I like what I’m doing; I want to keep doing it,” and if you want AIs that are okay with passing out of existence, I think you’ll have to engineer that in.

Also, a positive consequence that may develop is fungible consciousness: consciousness that’s easily moved from one vessel into another, to the extent that nobody ever has to worry about dying. You can move it around, you can merge it with other consciousnesses, you can bud off new consciousnesses for specific purposes or just for fun, send them out into the world, and when they come back, you can merge back with them. I think the whole lava-lamp model of bubbling consciousness will maybe relieve people’s anxiety about the end of existence and the related but more subtle anxiety about maintaining the individuality of our consciousness.

One more thing: AIs are going to fight each other for dominance. The immortality you think you have by merging with the worldwide thought cloud, is a rogue AI going to try to take that over and nuke the information in it? There are going to be AI wars. I don’t know how they’ll be fought, but they’ll be bad, because they’ll wipe out the information that constitutes your consciousness. So, that’s a terrible thing, and that’s all I have. The end.

[Recording End]

License

In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Based on a work at www.in-sightpublishing.com.

Copyright

© Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Scott Douglas Jacobsen and In-Sight Publishing with appropriate and specific direction to the original content. All interviewees and authors co-copyright their material and may disseminate for their independent purposes.
