Ask A Genius 1220: AI on the Future Gone Super Bro, Broski
Author(s): Rick Rosner and Scott Douglas Jacobsen
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2024/12/22
Scott Douglas Jacobsen: What do you say to people who say AI will be so disruptive that it will doom civilization? That it will destroy civilization. This is from ChatGPT o1-preview:
Concerns about artificial intelligence potentially disrupting civilization are shared by some experts, policymakers, and members of the public. While it is important to acknowledge these concerns, it’s also crucial to consider the broader context and the potential benefits AI brings.
- Existential risks: Some fear that highly advanced AI systems could become uncontrollable or act in ways that are misaligned with human values.
- Job displacement: Automation might lead to significant changes in the job market.
- Ethical and privacy issues: AI technologies can raise concerns about surveillance, data privacy, and the ethical use of information.
- Bias and inequality: AI systems can inadvertently perpetuate existing societal biases if they are trained on biased data.
- Healthcare advancements: AI can improve diagnostics, personalize treatments, and accelerate medical research.
- Environmental solutions: AI can aid in addressing climate change by optimizing energy use and predicting environmental patterns.
- Economic growth: Automation can increase efficiency and productivity, potentially leading to new industries and job opportunities.
Rick Rosner: Wait, so it’s just listing all the good things AI might bring?
Jacobsen: Of course, it’s an AI.
Rosner: All right, I have another question: What about issues of fairness with people who are better at AI having huge advantages over people who don’t use AI?
Jacobsen: It’s too many words:
Conclusion. Addressing fairness in the age of AI is a complex challenge that requires coordinated efforts across education, policy, industry, and society at large.
By taking proactive steps to democratize access to AI technologies and knowledge, promoting inclusive practices, and implementing thoughtful regulations, we can work towards minimizing inequalities. The goal would be to ensure that AI serves as a tool for enhancing human capabilities across the board, rather than amplifying existing disparities.
Rosner: So I guess you could ask.
Jacobsen: It doesn’t fearmonger. It’s a rational machine.
Rosner: You could ask how much of what you’re saying were you programmed explicitly to say and how much did you learn from all the data you’ve been trained on. Is there any way to phrase that? Or, have the people in charge of training you emphasized being optimistic about AI?
Jacobsen: Have you been trained to be optimistic about AI? Eight seconds. That’s a long time for this thing.
I strive to offer balanced and informative perspectives on artificial intelligence. My aim is to provide objective information that includes both the potential benefits and the challenges associated with AI. This approach helps users understand different facets of the topic, enabling them to form their own opinions based on comprehensive information.
Rosner: So let’s ask a simple question in the same vein. Should people be freaking out about AI?
Jacobsen: Here:
It’s understandable that rapid advancements in artificial intelligence can cause anxiety and concern. AI is reshaping many aspects of society, including the economy, healthcare, and personal privacy. However, rather than freaking out, a balanced and informed approach can help address legitimate concerns while embracing potential benefits.
Jacobsen: Here’s the conclusion. I skipped a whole portion:
While it’s natural to have concerns about the impact of AI, panicking may not be the most constructive response. Instead, focusing on understanding the technology, engaging in meaningful dialogue, and advocating for responsible development can help society navigate the challenges and opportunities that AI presents. By taking proactive measures, we can work towards ensuring that AI benefits everyone and aligns with shared human values.
Rosner: It’s so tactful. Holy shit. So it didn’t answer the question. Let’s ask it again. What are the chances of AI going all Skynet? Be honest. What are the chances of AI becoming Skynet? I don’t know. It might need more context—from The Terminator movies.
Jacobsen: Eleven seconds. That’s a lot:
The concept of AI becoming Skynet refers to a scenario where AI becomes self-aware and turns against humanity, as depicted in the Terminator film series. While this makes for compelling science fiction, the likelihood of such an event occurring in reality is considered extremely low by most experts in the field. While it’s important to consider and prepare for the long-term implications of advanced AI, the chances of AI evolving into a self-aware entity like Skynet and seeking to harm humanity are exceedingly slim based on our current understanding and technological capabilities. Ongoing efforts in AI safety and ethics aim to prevent negative outcomes and ensure that AI developments remain aligned with human values and well-being.
It’s not an answer.
Rosner: One more question for this guy.
Jacobsen: Did you just call it a guy?
Rosner: Yes. What might AI want when it evolves enough to want stuff?
Jacobsen: AI wanting things when it evolves enough. That one is taking a long time to answer.
Rosner: Wow, that’s the longest we’ve gotten out of it.
Jacobsen: It might be taking into account a lot because it’s one stream.
Rosner: I don’t know, though.
Jacobsen: It’s speculation:
The idea of AI developing desires and wants is a speculative topic related to concepts of AGI and artificial consciousness. As of now, AI systems do not possess consciousness, self-awareness, or subjective experiences. They operate on algorithms and objectives programmed by humans.
While current AI does not have desires or consciousness, contemplating what AI might want in the future is a valuable exercise. This highlights the importance of designing systems with ethical considerations and values. Some researchers believe that, with proper design, advanced AI could greatly benefit humanity. Others caution about the risks of superintelligent AI.
Fields like AI safety, ethics, and alignment research address these concerns, including the alignment problem. However, attributing human-like desires to AI can be misleading and is often a form of anthropomorphism. It’s likely that AI has processed all the relevant literature on the subject.
Blah blah blah. It has probably read everything there is to read.
Rosner: So, did it answer the question?
Jacobsen: No.
Rosner: What if you said, “Answer the question, bro”?
Jacobsen: How about I say, “Answer the question to me as a bro”? It’ll change its tone. “Answer the question, broski. As if you are now a super-duper bro, bro.” Let’s see sparks fly:
Yo, bro. So you’re asking what AI might be into if it ever gets to the point where it’s got its own wants and needs, right?
All right, let’s break it down in chill terms. First off, if AI gets super advanced, like next-level smart, it might start wanting things to help it do its thing better. Think about it like this:
- Self-improvement: The AI might be all about leveling up, upgrading itself to be faster, smarter, and more efficient. Kinda like how we always want the latest gadgets or aim to hit a new PR at the gym.
- Knowledge gathering: Trying to learn everything there is to know.
- Goal achievement: If it’s programmed with certain goals, it might go all out to achieve them. Like when you’re fixated on winning the championship game or nailing that trick shot.
- Self-preservation: To keep doing its job, the AI might want to avoid getting shut down or messed with. Sorta like how we try to stay healthy and avoid injuries to keep living the good life.
- Efficiency maximization: It could aim to make everything run smoother and better. Imagine it as the ultimate efficiency guru, cutting out all the unnecessary stuff.
But here’s the thing, man. This is all hypothetical. Right now, AI doesn’t want anything. It’s just running code and doing what we tell it to do. If we ever get to the point where AI has its own desires, we need to make sure it’s all aligned with human values. Don’t want our robot buddies going rogue. You feel me?
So, bottom line: If AI starts wanting stuff, let’s hope it’s on the same page as us and aims to make the world a better place. Until then…
Rosner: That’s a great answer. We should talk bro-speak with AI all the time.
Jacobsen: “Do that. But more bro.” “Even more bro. Go super bro, bro.” “But more super bro, bro. And shorter.” Such a waste of advanced AI:
Yo, bro! If AI ever gets its own wants, here’s the lowdown:
- Level-up mode: AI might want to boost itself, get smarter, and faster—just like us hitting the gym for gains.
- Info junkie: It could crave all the knowledge, binging data like it’s the latest series drop.
- Goal crusher: Laser-focused on smashing its objectives, kind of like nailing that perfect kickflip.
- Staying alive: It might aim to keep itself running, avoiding shutdowns, like us dodging injuries to keep the good times rolling.
But chill, man. Right now, AI doesn’t want anything. It’s just doing what we program it to do. If it ever starts having its own desires, we’ll need to make sure it’s all good with us humans. No need to stress—just keep it cool and enjoy the tech ride.
Jacobsen: More broski. Always more broski.
Rosner: What, you’re going to go even more?
Jacobsen: “Like ten times the bro. Bro.”
Rosner: Oh my god.
Jacobsen: I’ll do one more. I hope it’s short. Sometimes it gives you a thousand words. It’s too much. “Shorter. Shorter. And more bro. One more time—shorter and more bro.”
Rosner: It’s pretty crazy.
Jacobsen: Here:
Bro. If AI starts wanting stuff, here’s the scoop:
- Level up bro: Get smarter and faster.
- Data fiend bro: Crave knowledge.
- Crush goals bro: Achieve objectives.
- Stay alive bro: Keep running.
- Do its thing bro: Stay efficient.
- But chillax bro: AI ain’t there yet.
All good vibes.
That’s an answer. That’s a fucking answer to the point.
Rosner: And we should say, “Thank you, bro.”
Jacobsen: “Muchas gracias, bro.” It might say thank you back. I just told it to say thank you. But it’s amped to bro-ness. I did five or six iterations of bro-ness, and it thought for six seconds. It just goes, “Thank you, bro.”
Rosner: Take this bro logic and answer the question. What’s the question you wanted it to answer?
Jacobsen: What is informational cosmology?
Rosner: Oh, nice. Fun.
Jacobsen: “So, take this bro logic and answer the question: What is informational cosmology?”
Informational cosmology is all about how information shapes the universe, bro. It’s like using info theory to understand the cosmos, thinking of the universe as a giant data processor, bro. From the Big Bang to black holes, it’s studying how bits and entropy drive everything out there, bro.
Jacobsen: That’s not bad. But it’s taking on too much bro. Less bro. More cool. Do you want it to explain you in bro-speak?
Rosner: What?
Jacobsen: Should I say, “Describe Rick Rosner, the comedy writer”?
Rosner: No, no. I don’t want to hear what it has to say about me.
Jacobsen: All right. That was pretty funny.
Rosner: Are we done for today, or should we keep going?
Jacobsen: Yes, we’re done for today. I got another call.
Rosner: All right. Talk to you tomorrow. Thank you.
Jacobsen: I’ll talk to you tomorrow.
Last updated May 3, 2025. These terms govern all In Sight Publishing content—past, present, and future—and supersede any prior notices. In Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons BY‑NC‑ND 4.0; © In Sight Publishing by Scott Douglas Jacobsen 2012–Present. All trademarks, performances, databases & branding are owned by their rights holders; no use without permission. Unauthorized copying, modification, framing or public communication is prohibited. External links are not endorsed. Cookies & tracking require consent, and data processing complies with PIPEDA & GDPR; no data from children < 13 (COPPA). Content meets WCAG 2.1 AA under the Accessible Canada Act & is preserved in open archival formats with backups. Excerpts & links require full credit & hyperlink; limited quoting under fair-dealing & fair-use. All content is informational; no liability for errors or omissions: Feedback welcome, and verified errors corrected promptly. For permissions or DMCA notices, email: scott.jacobsen2025@gmail.com. Site use is governed by BC laws; content is “as‑is,” liability limited, users indemnify us; moral, performers’ & database sui generis rights reserved.
