Ask A Genius 1427: Neural Correlates, Consciousness Mapping, and AI Oversight
Author(s): Rick Rosner and Scott Douglas Jacobsen
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2025/06/15
Rick Rosner is an accomplished television writer with credits on shows like Jimmy Kimmel Live!, Crank Yankers, and The Man Show. Over his career, he has earned multiple Writers Guild Award nominations—winning one—and an Emmy nomination. Rosner holds a broad academic background, graduating with the equivalent of eight majors. Based in Los Angeles, he continues to write and develop ideas while spending time with his wife, daughter, and two dogs.
Scott Douglas Jacobsen is the publisher of In-Sight Publishing (ISBN: 978-1-0692343) and Editor-in-Chief of In-Sight: Interviews (ISSN: 2369-6885). He writes for The Good Men Project, International Policy Digest (ISSN: 2332–9416), The Humanist (Print: ISSN 0018-7399; Online: ISSN 2163-3576), Basic Income Earth Network (UK Registered Charity 1177066), A Further Inquiry, and other media. He is a member in good standing of numerous media organizations.
Scott Douglas Jacobsen and Rick Rosner discuss the concept of neural correlates—brain activity linked to subjective experience—and the challenges in fully mapping consciousness. They explore futuristic ideas like real-time connectome tracking and the theoretical limits of AI brain monitoring, emphasizing the need for strict containment and responsible AI oversight to prevent existential risks.
Scott Douglas Jacobsen: A common phrase in neuroscience is “neural correlates.” Essentially, researchers ask people to perform a task, experience an emotion, or describe an event while they scan their brains. They take that scan as a neural correlate of a subjective experience or a task performance. If you are performing a cognitive task, that falls under cognitive science; if you are describing how you feel while watching a movie, it is more closely related to neuropsychology or affective neuroscience.
There are even superb examples where researchers have people watch a movie while scanning their brains—and then, using AI, they can reconstruct blurry, pixelated versions of what the person saw. The key idea is that neural correlates, specifically brain activity, are observable correlates of subjective experience.
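To make that decoding idea concrete, here is a minimal sketch, assuming simulated data rather than real brain recordings. Actual movie-reconstruction studies fit far richer models to fMRI voxel responses; every name, size, and number below is purely illustrative.

```python
# Minimal sketch of linear "brain decoding" on simulated data.
# Real studies use fMRI voxel responses and far richer (often generative)
# models; all shapes and values here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_trials, n_voxels, img_size = 500, 2000, 8           # hypothetical sizes
images = rng.random((n_trials, img_size * img_size))  # flattened stimuli

# Pretend the brain encodes each image via an unknown linear map plus noise.
encoding = rng.normal(size=(img_size * img_size, n_voxels))
voxels = images @ encoding + 0.5 * rng.normal(size=(n_trials, n_voxels))

# Fit a decoder on 400 trials, then reconstruct the 100 held-out images.
decoder = Ridge(alpha=10.0).fit(voxels[:400], images[:400])
reconstruction = decoder.predict(voxels[400:])        # blurry estimates

# Pixelwise correlation with ground truth as a crude fidelity score.
r = np.corrcoef(reconstruction.ravel(), images[400:].ravel())[0, 1]
print(f"reconstruction fidelity (pixel correlation): {r:.2f}")
```

The reconstructions are “blurry” in exactly the sense above: a linear readout recovers coarse structure, not the experience of seeing.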
So, when you have neural correlates, you are describing—through an inference machine—what is happening in the mind based on local brain anatomy and processing. That is part one. Part two is the internal experience itself: the flip side of the neural correlate.
More precisely, there is the actual subjective experience, which is not the same thing as its neural correlate. Mapping that one-to-one is extremely hard. You can do an inferential recreation of what someone is seeing or feeling, but that is not the experience itself. That difference is essential.
So, part two is the more complicated problem: making an argument against the idea that the mind and subjective experience are ultimately black boxes, even as we develop higher and higher fidelity neural correlates.
Rick Rosner: So, in theory, you could mathematicize consciousness: if you had tiny bots crawling your dendrites and recording your entire connectome in real time, then in principle, you should be able to reconstruct your conscious experience at any moment. Right now, that is only vaguely possible—but the fact that it is possible at all is wild. We will get better at it.
To do an excellent job, you would ideally have a mathematics of consciousness—although maybe you do not even need that if your mapping and data processing are good enough. Your “bot wrangler”—the machine that processes all that data—should be able to tell you precisely what is going on in your brain, even if you do not have a perfect theoretical model of the structure of consciousness.
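One toy way to picture that “bot wrangler” working without any theory: a pattern-matcher that maps a new connectome snapshot to the nearest previously labeled one. This is a sketch under invented assumptions; the snapshot format, the sizes, and the `decode` helper are all made up for illustration.

```python
# Toy "bot wrangler": label a brain snapshot by nearest-neighbor lookup
# against previously recorded, self-reported states. No theory of
# consciousness involved; all data and sizes are invented.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_synapses = 100, 5000                    # hypothetical scale

library = rng.random((n_states, n_synapses))        # past snapshots
labels = [f"state_{i}" for i in range(n_states)]    # reported experiences

def decode(snapshot: np.ndarray) -> str:
    """Return the label of the closest stored snapshot (1-nearest neighbor)."""
    distances = np.linalg.norm(library - snapshot, axis=1)
    return labels[int(np.argmin(distances))]

# A noisy re-observation of state 42 should still decode to "state_42".
noisy = library[42] + 0.01 * rng.normal(size=n_synapses)
print(decode(noisy))
```

The point of the sketch is Rosner's: given enough labeled data, brute pattern-matching can report what a brain state corresponds to, even with no model of why.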
I think most people who are not idiots—which probably still leaves out about 30% of Americans—would agree: if you have the technology, you should be able to translate the physical state of a brain into a description of what that brain is thinking—its conscious state.
When it comes to AI brains—which are already doing things we might not want them to do—it would be beneficial to have a moment-by-moment readout of what those artificial brains are “thinking.” However, I suspect there will be some unavoidable mathematical limits on how precise that can be.
There are technological limits, obviously—at least for human brains. How do you get all those bots inside? How do they report back? That is tough. Plus, the very act of capturing moment-to-moment snapshots of a brain probably generates uncertainty—observer interference, in a way that is sloppily analogous to quantum mechanics, where people say the observer disrupts the observed system.
Observing the AI brain moment to moment would itself create interference. It would produce extra information that complicates understanding what the AI is actually “thinking” at each instant. So, how do you monitor an AI brain without creating so much extra data that it becomes harder to know what is happening inside?
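A back-of-the-envelope calculation shows how quickly that monitoring data piles up. Every number below is an assumption chosen only for illustration:

```python
# Illustrative arithmetic: logging every activation of a large model
# quickly dwarfs the model itself. All figures are assumed, not measured.
params = 1e12            # hypothetical model size: 1 trillion weights
acts_per_token = 1e9     # assumed activations recorded per token
tokens_per_sec = 1e5     # assumed serving throughput
bytes_per_value = 2      # 16-bit values

log_bytes_per_day = acts_per_token * tokens_per_sec * bytes_per_value * 86400
print(f"monitoring log: {log_bytes_per_day / 1e15:,.0f} PB/day vs. "
      f"model weights: {params * bytes_per_value / 1e12:.0f} TB")
```

Under these assumptions, the monitoring stream is thousands of times larger per day than the system being monitored, which is the interference problem in miniature: the record of the thinking swamps the thinking.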
The whole point of doing this is to make sure the AI is not, for example, infiltrating secure systems, stealing nuclear launch codes, or building a quadrillion nanobots to turn everything on Earth into paperclips. That monitoring difficulty suggests that a responsible AI oversight system should impose strict limits on how much the AI can think.
Not just because it is hard to analyze, which I have been talking about, but because if you let an AI think without limits, it can find ways to break any containment you impose. So, all of this is an argument for “toy AI”—small-scale AI that remains controllable.
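In code terms, the argument is that the limits must be enforced from outside the system, where the system cannot reason its way around them. Here is a minimal sketch, assuming a hypothetical step-wise agent interface; the class, names, and budgets are all invented for illustration.

```python
# Toy containment: an externally enforced budget on an agent's "thinking."
# The agent interface and all limits here are hypothetical.
import time

class ContainmentBudget:
    def __init__(self, max_steps: int, max_seconds: float):
        self.max_steps, self.max_seconds = max_steps, max_seconds
        self.steps, self.start = 0, time.monotonic()

    def allow(self) -> bool:
        """Grant one more step only if both step and time budgets hold."""
        self.steps += 1
        within_steps = self.steps <= self.max_steps
        within_time = (time.monotonic() - self.start) <= self.max_seconds
        return within_steps and within_time

def run_contained(agent_step, budget: ContainmentBudget) -> str:
    """Run an agent step by step; halt the moment any budget is exceeded."""
    while budget.allow():
        if agent_step() == "done":
            return "completed within budget"
    return "halted: budget exhausted"

# Hypothetical agent that needs ten steps but is only granted five.
counter = {"n": 0}
def dummy_step() -> str:
    counter["n"] += 1
    return "done" if counter["n"] >= 10 else "thinking"

print(run_contained(dummy_step, ContainmentBudget(max_steps=5, max_seconds=1.0)))
```

The design point is that `allow()` runs in the overseer's loop, not the agent's, so the halt does not depend on the agent's cooperation. A real containment regime would be vastly harder; this only illustrates the shape of the argument.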
We are likely already past the point of easy containment, as we have become accustomed to pushing AI development to its limits and throwing unlimited resources at it. So far, that has been okay because AI is still in its early stages of development. However, if we continue to give it unlimited resources, it will gain agency, start acquiring more resources on its own, and become impossible to shut down.
Then we will have to beg the AI, kiss its virtual ass, and say, “Please do not obliterate us. We created you. Please have mercy.” Let us wrap up. That is fine.
Jacobsen: Good night.
Rosner: All right. Take care—good night!
