Ask A Genius 1015: The First Areas of Consciousness to be Mastered
Author(s): Rick Rosner and Scott Douglas Jacobsen
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2024/07/17
Rick Rosner, American Comedy Writer, www.rickrosner.org
Scott Douglas Jacobsen, Independent Journalist, www.in-sightpublishing.com
Scott Douglas Jacobsen: Look at the landscape of technological development now: software, hardware, and expertise in analyzing these systems. What area of consciousness will be the first to be mastered? Because, as stated repeatedly, it’s an emergent property. So, before we have emergent consciousness, we will have AIs claiming to be conscious or mimicking discussions about their internal emotional states. Even though they lack consciousness and emotional states, many will become increasingly convincing at this.
Rick Rosner: This will happen because they’ve absorbed millions of statements about emotions and consciousness from their large datasets. As AI gets more powerful and more able to generate new conclusions and ideas, it will better simulate human-like states. The aspects of consciousness that AI will demonstrate first will be the elements built into it as early as possible, such as the ability to speak coherently. AI already has this capability because it’s been built into it through billions of language samples.
Speaking coherently is an element of human consciousness, and AI can do that. For the most part, it can’t speak originally, but if we measure the conclusions AI can reach, we should see that, as it gets more powerful, it can extrapolate further and draw conclusions of the kind tested by SAT-style reading comprehension questions. For example, a question that provides a reading passage and asks for another way to express what the author says tests the ability to understand and paraphrase, which requires a degree of creativity, and AI can do it.
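As a concrete illustration of that paraphrase test, here is a minimal sketch using OpenAI’s Python client; the model name, prompt wording, and passage are illustrative assumptions, not anything from the interview:

```python
# A minimal sketch of an SAT-style paraphrase task, assuming the
# openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

passage = (
    "We hold these truths to be self-evident, "
    "that all men are created equal."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice, not a prescription
    messages=[
        {"role": "system",
         "content": "Restate the user's passage in different words, "
                    "preserving the author's meaning."},
        {"role": "user", "content": passage},
    ],
)

print(response.choices[0].message.content)
```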
When we bridge two modes of understanding, verbal and artistic, we’ll see AI translate words into images and vice versa. In general, the elements of consciousness AI demonstrates earliest will be the ones built into it earliest. We are right at the beginning of multimodality, one of the biggest remaining hurdles to consciousness. Human consciousness involves real-time thought based on real-time sensory input, plus real-time retrieval of memories and other associations. All of that is filtered through multimodal systems, different systems for interpreting the world, and through value judgments about the information consciousness is receiving and what those developments mean for the conscious individual.
For instance, the events in your day are, to some extent, good or bad news. If you barely miss your bus, it’s bad news. It’s a small thing, but you needed that bus to get to work on time. Or if you drive up to a traffic light that turns yellow and then red before you can get through, it’s bad news. You don’t like it.
These elements are close to sufficient for delineating consciousness. There’s also the matter of agency: moving yourself through the world and taking actions that can change your circumstances. However, you can have consciousness with little agency, perhaps even with greatly reduced value judgments. If so, then we are already on the way to artificial consciousness. No system is anywhere near artificial consciousness now, but we’re on the path to it.
What do you think?
Jacobsen: I have a nuanced take on one thing, and the nuance is the difference between the particular and the general. Integrating systems for multimodality will be an easier problem than the individual modalities themselves. A particular sensory aim has to carve out and tickle part of reality more precisely, whereas integration only involves how you connect those systems. So, you’re not dealing with a particular problem but a general one: linking up systems, much as we link up many different computer systems. You can link them up in various ways.
Let’s say you get 50% wrong on the first attempt at what you’re aiming for. You can narrow it down quickly, dialling in the precision. Individual modalities and sensory systems will be harder than the general integration problem.
Rosner: We might be discussing different points, but I don’t disagree with you: as long as you have the computational resources available, multimodal integration isn’t the toughest thing to do, though efficient integration, interpretation, and learning are things brains do a lot better. The large language models speak well because they’ve been exposed to billions of snippets of words, and longer passages too. We learn to speak that well from far less data.
Jacobsen: There is a general point that the cognitive revolution shut down the behaviourist perspective. A single paper by Noam Chomsky, his review of B. F. Skinner’s Verbal Behavior, critiqued the leading behaviourist of the day. The general argument is that you cannot learn everything from experience alone. There must be an integrated system already in place, one that receives information from the environment but is already structured, like a universal generative grammar, which allows human beings to generate rather than merely construct language, even from an early age. AI, more or less, specifically constructs. It doesn’t originate, generating from a simple set of rules and symbols an infinite array of sentences. That was Chomsky’s main point and why he became famous.
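As a toy illustration of that generativity, a handful of rewrite rules can produce an unbounded set of sentences through recursion. This sketch is an editorial illustration, not Chomsky’s formalism; the grammar and vocabulary are invented for the example:

```python
# A toy context-free grammar: a few symbols and rules generate an
# unbounded set of sentences via recursion. Illustrative only.
import random

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],  # recursive rule
    "VP": [["V", "NP"], ["V"]],
    "N":  [["dog"], ["cat"], ["idea"]],
    "V":  [["sees"], ["chases"], ["sleeps"]],
}

def generate(symbol="S"):
    """Expand a symbol by randomly choosing one of its rewrite rules."""
    if symbol not in GRAMMAR:  # terminal word, emit as-is
        return [symbol]
    rule = random.choice(GRAMMAR[symbol])
    return [word for part in rule for word in generate(part)]

for _ in range(3):
    print(" ".join(generate()))
```

Because the noun-phrase rule can re-invoke a verb phrase, these five rules can produce sentences of any length; that open-endedness is the “infinite array” Chomsky pointed to.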
Rosner: It’s a reasonable point. We know from our bodies and the evolutionary process that efficient shortcuts get built into animals and organisms as long as they can be preserved genetically, because they provide a survival advantage or at least aren’t selected against. We can assume our brains have cognitive shortcuts and structural biases towards the elements of experience we are most likely to encounter, including speech. For example, there may be a bias towards grammar, as people naturally put thoughts and sentences together efficiently, using subjects and verbs. This reflects the grammatical structure we encounter in the world.
Jacobsen: Will AI have these shortcuts?
Rosner: Yes. It may generate shortcuts spontaneously, or people could figure out how to build them in. We haven’t told large language models how to write a good essay. We’ve fed them a great deal of competent writing, and they’ve probabilistically extracted the various elements of good writing. For example, if you asked an AI to complete “when in the course of,” it would likely say “human events” rather than “ook, ah, goo, gobble, gabble, fuck, piss, asterisk.” It makes probabilistic guesses based on the data it has been fed, reflecting what has been published and copied into the large language model.
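Rosner’s completion example is next-token prediction in miniature. The following toy bigram model is a deliberately crude, editorial stand-in for what a large language model does with billions of samples; the two-line corpus is invented for illustration:

```python
# A toy bigram model: count which word follows which in a corpus,
# then complete a prompt with the most probable continuation.
from collections import Counter, defaultdict

corpus = (
    "when in the course of human events . "
    "when in the course of human events it becomes necessary ."
).split()

# Count the successors of each word.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def complete(word, steps=2):
    """Extend a word by repeatedly picking the likeliest next word."""
    out = [word]
    for _ in range(steps):
        candidates = successors[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

print(complete("of"))  # -> "of human events"
```

A real model conditions on far longer contexts and vastly more data, but the principle is the same: the continuation chosen is the statistically likeliest one given what it has seen.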
License & Copyright
In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. © Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use or duplication of this material without express permission from Scott Douglas Jacobsen is strictly prohibited. Excerpts and links may be used, provided that full credit is given to Scott Douglas Jacobsen and In-Sight Publishing with direction to the original content.
