Ask A Genius 1037: The Mind as a Box, then Its Aperture
Author(s): Rick Rosner and Scott Douglas Jacobsen
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2024/07/28
Rick Rosner, American Comedy Writer, www.rickrosner.org
Scott Douglas Jacobsen, Independent Journalist, www.in-sightpublishing.com
Scott Douglas Jacobsen: Do you think the mind is ultimately a black box even though we might understand its engineering, information processing style, and how the structure relates to that information processing? We won’t necessarily know a person’s or another sentient being’s internal experience.
Rick Rosner: No. I don’t believe that at all. It’s figureoutable. That doesn’t mean you can figure it out in every case, though; sometimes it will only be figureoutable in principle. That’ll be an issue with AI or AI hybrids with other beings. If you want an expert system, if you want to put an expert system in control of stuff, you need to know its mental landscape. It’s the Skynet problem, plus inscrutability. There’s implied inscrutability with Skynet, but nobody worries about Skynet’s mental state in Terminator. They worry about what it does, that it becomes conscious and then sets off nuclear Armageddon.
Nobody tries to psychoanalyze Skynet in any of the Terminator movies. There have been, what, like, 5 of them, 6 of them. But if you’re going to try not to have Skynet, the mental landscape of AIs needs to be scrutable. That’s possible to do within reason. But it’s also not practical in some areas.
Google Translate was a large language model before we had large language model AIs. You could argue that it was one of the first LLM AIs in that it got samples of words from a gazillion different languages and, by some Bayesian probability structure based on a gazillion samples, built a pseudo-understanding of words across a gazillion languages, to the point where Google engineers said, I believe, that Google Translate has its own base language.
When they say language, they mean it has its own probability structure, where it has some node of pseudo-understanding for, say, the word “love” and knows how that word works grammatically, et cetera, in many different languages. When the Google engineers tried to express what that meant, it was hard to be precise. So, a node contains all of Google Translate’s love-related probabilities, used to guess how to say a sentence in Ukrainian that involves the Ukrainian word for love.
Say you’re writing in English to somebody in Kiev, Kyiv. The translation isn’t going to pass directly from English love to Ukrainian love. It will go through this node, which is not exactly the word for love in some abstract language that only Google speaks, but some node that’s the clearinghouse for all love-related probabilities.
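To make the “clearinghouse node” idea concrete, here is a toy sketch in Python. The node name, word lists, and lookup tables are invented for illustration; Google Translate’s real system learns statistical representations from billions of samples rather than using hand-built dictionaries like these.

```python
# Toy illustration of one shared "clearinghouse" node for a concept across languages.
# Purely illustrative: the node and tables below are made up, not Google Translate internals.

SURFACE_TO_NODE = {
    ("en", "love"): "NODE_LOVE",
    ("uk", "любов"): "NODE_LOVE",
    ("fr", "amour"): "NODE_LOVE",
}

NODE_TO_SURFACE = {
    ("NODE_LOVE", "en"): "love",
    ("NODE_LOVE", "uk"): "любов",
    ("NODE_LOVE", "fr"): "amour",
}

def translate_word(word, src_lang, tgt_lang):
    """Route a word through its shared concept node instead of mapping word-to-word."""
    node = SURFACE_TO_NODE[(src_lang, word)]
    return NODE_TO_SURFACE[(node, tgt_lang)]

print(translate_word("love", "en", "uk"))  # -> любов, via NODE_LOVE
```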
And that whole fucking node structure, which is built from billions, maybe tens of billions, of language samples, is fundamentally inscrutable because it’s built from a whole shit ton of data sitting in dozens, hundreds of servers, some fucking place or places. Is there any danger of Google Translate becoming conscious, waking up, and trying to wreck or destroy the world by misbehaving? Probably not. But if they gave it enough abilities to approach consciousness, then that could be worrisome because it’s got too much data for us to understand.
The brain doesn’t have to be a black box. But when you’re dealing with big data, there will be issues with knowing exactly what it’s up to. We’ll eventually be able to figure out the mathematics of consciousness, as I’ve said in talking with you over the past decade, probably more than a hundred times. But does that mean you can figure out what an individual mega-data consciousness sitting on a petabyte of data is up to? No. We’ll try to be careful when designing AIs that have agency. However, if you give an AI agency, you may have to limit the size of the database from which it’s working to be safe. That could be one possible parameter in governing AI.
That, however, is a hard parameter to enforce, given a motivated AI. For security, you want air-gapped devices. Air-gapped means, like, voting machines. You want no way for a voting machine to connect to the Internet, where a bad actor could get in there and fuck with the numbers and fuck with the software. So it’s air-gapped. It has no plug to connect to the Internet. But how do you enforce an air gap for the AIs of the future, some of which will be motivated to go rogue to try to expand their information base? So, brains are scrutable in theoretical terms, but only sometimes in practical terms. That’ll be a problem.
There’s much talk about how we don’t know what’s happening inside these systems. For example, an AI gets good at Go by doing, what do you call it when an AI does self-training? There are obvious technical terms for it. You set it loose on the game of Go or chess, and it becomes unbeatable by a human. You wonder why it’s making some of its weird moves.
But one way to figure out what it’s up to, if it’s to be a reliable way, is to design or train AIs so they’re interrogatable. Like, the Go program that’s trained itself hasn’t been set up that way. What would it take to set it up so you can ask it questions about what it’s doing? I’m not sure exactly what that would entail, but it’s doable. You made this move. Why did you make this move? And the AI doesn’t necessarily know why it did. It’s a probability engine, so it can show you some parameters that hit critical percentiles of certainty, which is what Watson did when it played Jeopardy. It would ring in if it became, like, 85% sure that an answer was correct based on the probability structures it was working with.
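The ring-in behavior described above can be sketched in a few lines of Python. The 85% cutoff comes from the passage; the candidate answers and their scores are invented for illustration and are not Watson’s actual machinery.

```python
# Minimal sketch of a confidence threshold: only answer when the best candidate
# clears a probability cutoff. Candidates and scores below are made up.

def should_ring_in(candidates, threshold=0.85):
    """Return (answer, confidence) if the top candidate clears the threshold, else None."""
    answer, confidence = max(candidates.items(), key=lambda kv: kv[1])
    return (answer, confidence) if confidence >= threshold else None

print(should_ring_in({"Toronto": 0.14, "Chicago": 0.91}))  # rings in with Chicago
print(should_ring_in({"Toronto": 0.40, "Chicago": 0.45}))  # None: stays silent
```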
You could make AIs interrogatable to some degree, a skill they would have to learn and train themselves toward. It’s only sometimes going to give you very good results. One problem is that thinking, conscious thought, is different from a computer running through a regular program, like drawing what’s going on moment to moment in Call of Duty, which executes a series of commands. Conscious thought brings in analytic results, probably Bayesian analytics, from several different nodes and modalities and smooshes them all together.
This leads to further analytics and the emergence of memories and ideas. Though that process may be mathematically modelable in the future, it won’t necessarily be easy to interrogate. You could interrogate and watch a toy consciousness, a very limited consciousness dealing with a small amount of information and a small number of nodes, and see how it comes to its conclusions. But a full-on brain with 100 billion neurons, each averaging a thousand dendritic connections, all contributing to consciousness simultaneously, is mushy, because different parts of your brain get different bits of information at slightly different times.
So, moments in your brain are smeared, and you form ideas that morph into other ideas. The whole thing is highly tacit. Eventually, like when you’re speaking, your brain must finally pick the words it will use. So, it concludes by settling on specific words. However, the path to those conclusions is anything but simple and is hard to capture without throwing a lot of outside or additional analytics into the system.
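As a rough illustration of that smooshing together, here is a minimal Python sketch of pooling probabilistic evidence from several nodes under a naive independence assumption. The hypotheses, nodes, and numbers are invented; real brains almost certainly do not combine evidence this cleanly.

```python
import math

# Minimal sketch of fusing evidence from several "nodes" or modalities:
# naive (independence-assuming) Bayesian pooling in log space. All values are invented.

def fuse(prior, likelihoods_per_node):
    """Combine a prior over hypotheses with per-node likelihoods, then renormalize."""
    log_post = {h: math.log(p) for h, p in prior.items()}
    for node in likelihoods_per_node:
        for h in log_post:
            log_post[h] += math.log(node[h])
    total = sum(math.exp(v) for v in log_post.values())
    return {h: math.exp(v) / total for h, v in log_post.items()}

prior = {"friend": 0.5, "stranger": 0.5}
nodes = [
    {"friend": 0.8, "stranger": 0.3},  # visual node
    {"friend": 0.7, "stranger": 0.4},  # voice node
    {"friend": 0.6, "stranger": 0.5},  # context/memory node
]
print(fuse(prior, nodes))  # posterior after pooling all three nodes
```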
So the more I think about it, the more I have to say that, in terms of what you’d want to be able to do in understanding why specific consciousnesses do specific things, it might be so tough in practical terms that the brain might as well be a black box. But on the other hand, if you could put junk in the brain, nanobots or mesh overlays or Neuralink chips or some shit, if you could suffuse a brain with these plus PET scans, you could maybe have an analytic overlay of a brain. If you could fit it all in there, it could give you a blurry picture of how thoughts form. It’s going to be a big area of research.
Because a) it already is, b) it’ll help people transcend the limitations of brains and bodies, and c) it’ll be dangerous not to know what some brains, artificial or otherwise, are up to.
License & Copyright
In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. ©Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use or duplication of material without express permission from Scott Douglas Jacobsen is strictly prohibited. Excerpts and links may be used, provided that full credit is given to Scott Douglas Jacobsen and In-Sight Publishing with direction to the original content.
