Ask A Genius 1376: Artificial Intelligence and Inequality: Are We Ready for What Comes Next?
Author(s): Rick Rosner and Scott Douglas Jacobsen
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2025/05/15
Rick Rosner is an accomplished television writer with credits on shows like Jimmy Kimmel Live!, Crank Yankers, and The Man Show. Over his career, he has earned multiple Writers Guild Award nominations—winning one—and an Emmy nomination. Rosner holds a broad academic background, graduating with the equivalent of eight majors. Based in Los Angeles, he continues to write and develop ideas while spending time with his wife, daughter, and two dogs.
Scott Douglas Jacobsen is the publisher of In-Sight Publishing (ISBN: 978-1-0692343) and Editor-in-Chief of In-Sight: Interviews (ISSN: 2369-6885). He writes for The Good Men Project, International Policy Digest (ISSN: 2332–9416), The Humanist (Print: ISSN 0018-7399; Online: ISSN 2163-3576), Basic Income Earth Network (UK Registered Charity 1177066), A Further Inquiry, and other media. He is a member in good standing of numerous media organizations.
Scott Douglas Jacobsen and Rick Rosner discuss the growing impact of AI on society, exploring rising income inequality, power centralization, emotional consciousness in machines, and the political readiness of leaders during rapid technological change. They emphasize the urgent need for global awareness, regulation, and ethical considerations in AI development.
Scott Douglas Jacobsen: Do we need to talk about AI more? Everyone seems to think it’s an unstoppable juggernaut.
Rick Rosner: We are entering a critical time in technological history. One question: Is Trump being in charge during this moment awful for the U.S., given that the economic and technological landscape is shifting rapidly?
A country needs agile leadership to adapt and benefit from exponential tech developments. On the other hand, if AI and automation reshape everything—labour, capital, global supply chains—does it even matter who the president is during the upheaval?
Jacobsen: It is going to be weird. It is going to be a lot.
We are headed for two significant outcomes, possibly at the same time:
- Income inequality will rise dramatically unless there are coordinated mass movements in many countries demanding the redistribution or regulation of AI-driven economies.
- Power will concentrate in the hands of those who control advanced AI systems unless global democratic mechanisms catch up fast.
It could go either way. It could be both at once. It is up in the air. That is one thing, okay? Moreover, that is being amplified by AI as we speak.
The second trend concerns the cognitive terrain of the information economy. That is the future—not moving rocks or digging up minerals. This terrain is much, much bigger.
Thus, those hills and valleys—though higher and deeper than in earlier iterations of civilization—might, in relative terms, appear smaller. Because the landscape is so much more vast, the relative disparity flattens out, at least perceptually.
So it is this two-way, two-part phenomenon happening at the same time.
Rosner: Let me comment on the first part—massive income inequality. I’ve been thinking about that for a while. If it’s income inequality plus gadgets, oligarchs might be able to maintain the oligarchy as long as the rest of the population keeps getting incrementally better stuff.

Nobody gets richer in relative terms, but the scraps improve enough that people do not feel deprived. The rich keep scooping up everything, but if what trickles down is good enough, that may mollify people.
Though, if it becomes existential—say, rich people gain access to practical immortality, while no one else does—that might finally trigger revolt. But… maybe not even then. I don’t fucking know.
Jacobsen: What will it take for machines to have feelings?
Rosner: Mostly, increasing analytic capacity. Many of the things AI research programs attempt in order to improve AI—not all of them, but enough—could incidentally push AI toward consciousness.
Consciousness is an emergent property, selected for because of the survival advantage it gives to organisms that have it. It enhances the handling of novelty and the modelling of the world far beyond what is possible without it.

Consciousness has likely evolved multiple times throughout evolutionary history, much like eyes, which evolved independently in different lineages. Consciousness, too, appears to have emerged in various forms.
That is because it confers an advantage. You do not need a precise path or a narrow formula to get there. If you make information processing and sensory input more efficient, you will get consciousness. So that is what it will take: the same pressures and engineering that evolution used.
Jacobsen: It is not magic. There is no fairy dust. There is a mechanics to emotion. However, there is feedback between systems that are not strictly cognitive, and those systems are more than noteworthy. They are not footnotes.
When you say “I feel something,” it is more than saying “I emote.” You can think about a word, and feel a certain way in response—but when you emote, you experience it in the body.
If you are integrated—if you are not cut off by trauma or incapacity—you embody that feeling. That is a larger system nuance. It is an extended, mostly nonconscious system. It does not involve motor activity unless you consider how that feeling motivates action.
Someone says something—a slur, maybe—and you feel angry. That emotion comes with an adrenaline response, and you are physically ready to act. You are activated. Arguably, most of the brain is motion, language, and feeling.
Moreover, AI systems will imitate consciousness and feeling long before they can feel anything. That is because they are trained on the conscious expressions of sentient beings, and because the imitation serves their function.
You can argue that AI mimics feeling because it models us. Primitive analogs—basic processes that resemble feeling—can be found, but they’re not the same.
Rosner: Pathetic fallacy is a term for assigning human-type emotions to non-human entities, especially inanimate objects or nature. Anthropomorphism refers more broadly to attributing human characteristics or behaviour to animals, deities, or things. So, yes, emotional anthropomorphism is closer to what we mean here. However, “pathetic fallacy” is a weird term for this context. The deal is that a lot is going on in the animal world that triggers interpretation.
Animals do feel things. However, they also often behave purposefully in ways that do not require sophisticated emotional cognition.
Even unicellular organisms exhibit purposeful behaviour: they chase other single-celled organisms, extend tendrils, and seek out resources. That behaviour is usually a reaction to chemical gradients, not awareness or intent. So you cannot even call it behaviour in the full cognitive sense.
Similarly, AI will sometimes appear to act emotionally or intentionally when it does not feel anything—it behaves that way because it makes contextual sense or serves a programmed function.
Crucially, we haven’t yet granted AI real agency. We will be the ones to give AI its first kind of agency: the ability to affect the physical world. It won’t create agency on its own. Right now, we haven’t handed much of that over.
Last updated May 3, 2025. These terms govern all In Sight Publishing content—past, present, and future—and supersede any prior notices. In Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons BY‑NC‑ND 4.0; © In Sight Publishing by Scott Douglas Jacobsen 2012–Present. All trademarks, performances, databases & branding are owned by their rights holders; no use without permission. Unauthorized copying, modification, framing or public communication is prohibited. External links are not endorsed. Cookies & tracking require consent, and data processing complies with PIPEDA & GDPR; no data from children < 13 (COPPA). Content meets WCAG 2.1 AA under the Accessible Canada Act & is preserved in open archival formats with backups. Excerpts & links require full credit & hyperlink; limited quoting under fair-dealing & fair-use. All content is informational; no liability for errors or omissions: Feedback welcome, and verified errors corrected promptly. For permissions or DMCA notices, email: scott.jacobsen2025@gmail.com. Site use is governed by BC laws; content is “as‑is,” liability limited, users indemnify us; moral, performers’ & database sui generis rights reserved.
