Ask A Genius 1050: The Paul Cooijmans Session 2, Extended Thoughts
Author(s): Rick Rosner and Scott Douglas Jacobsen
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2024/07/31
Rick Rosner, American Comedy Writer, www.rickrosner.org
Scott Douglas Jacobsen, Independent Journalist, www.in-sightpublishing.com
Scott Douglas Jacobsen: This is Paul Cooijmans, session two. Go ahead.
Rick Rosner: So I asked an AI, Claude 3.5 Sonnet, what would be some ways to check if AI has become conscious. It responded: “Determining whether AI has consciousness is a complex philosophical and scientific challenge. As of 2024, there is no consensus on how to test for machine consciousness. However, here are some approaches that researchers and philosophers have proposed or might consider.” We can go through them.
Number one, behavioural tests: Turing test variations. We’ve talked about the Turing test and its shortcomings. Everything under behavioural tests is murky. More advanced conversational and problem-solving tasks are still just a Turing test. The final bullet point, testing for self-awareness, such as the mirror test used for animals, is even more garbage. That harkens back to when somebody said humans are conscious because if you show a human a mirror, they can recognize that it’s themselves they’re looking at. That’s just ridiculous. Plus, fricking monkeys can do that. It’s like saying humans are conscious because only humans know how to use language, so anything that uses language is conscious: another garbage, half-assed argument.
All right, number two, neurological analogs: searching for patterns in AI systems similar to those associated with consciousness in human brains. That seems more reasonable. But you need to understand brain architecture and how consciousness lies in the connectome, the 10 billion neurons, each with its thousand dendrites. This encodes memory, and as you light up parts of the brain, you get what feels like consciousness. Informationally, you’ve got combinatorial coding going on in the brain, though not purely combinatorial coding. Words are combinatorial codes, where you have 26 letters. You can’t have each letter stand for a thing in the world because you’ll quickly run out of letters. But if you have combinations of letters to denote things, in the form of words and series of words, then you can name, describe, and code for everything in the world. There’s not a specific place in your brain that lights up when you think of an orange or something that is the colour orange, but there may be a combination of neurons that light up and give you orange-ness. The same neurons that light up to give you orange, in different combinations with other neurons, can give you other things. Combinatorics gives you way more codes from combinations of neurons than from single neurons alone.
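To make that combinatorics point concrete, here is a rough Python sketch, purely illustrative and using a made-up, tiny pool of 40 “neurons” rather than anything brain-scale, comparing one-neuron-per-thing coding with combinatorial coding:

from math import comb

N = 40  # hypothetical tiny pool of neurons, far smaller than a real brain

# One-neuron-per-thing coding: you can only label N distinct things.
single_neuron_codes = N

# Combinatorial coding: any subset of active neurons is a distinct code.
# Even restricting to patterns of exactly 10 active neurons out of 40
# dwarfs the single-neuron scheme.
combinatorial_codes = comb(N, 10)

print(single_neuron_codes)   # 40
print(combinatorial_codes)   # 847660528 distinct 10-of-40 patterns

The numbers are the whole point: forty labels versus hundreds of millions of patterns from the same forty units.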
That’s what I’m getting at here: we’ll learn more about brain architecture and how it helps create consciousness. So if you’ve got a similar AI architecture, coupled with an AI that uses that architecture in a way that seems similar to ours, that’s a decent way to approach the test. Then, under the same heading, neurological analogs: monitor for emergent behaviours that mimic human neural activity. I’m not sure what they mean by that; it’s not a well-formed statement. The behaviours are one thing, and the neural activity is another thing, so that could have been said better by the AI.
Number three, philosophical inquiries: probing the AI’s understanding of qualia, asking it to tell you what it’s seeing, and probably also about its subjective conscious experiences. Eh, I don’t buy that. We talked about that yesterday: AI will claim to be conscious long before it is conscious. The next bullet examines the AI’s ability to contemplate its own existence. That’s a little garbage. Can it recognize itself in a mirror? Yes, but I don’t buy it, except to the extent that it’s a Turing test.
Number four is ethical reasoning: another Turing test assessing AI’s capacity for moral decision-making. The AI will be able to parrot all the morality that’s fed into it. That doesn’t reflect consciousness. If it can come up with new moral principles that haven’t been fed to it, maybe that indicates some creativity, which may indicate consciousness. But how are you going to differentiate between what’s been fed to it and what it comes up with itself? You’re talking about a massive database, and if you’re testing for consciousness, are you going to have the money to look up every fricking thing the AI comes up with and compare it to your 10-billion, 30-billion-sample database? That sounds impractical. If the AI comes up with some wild ethical principles that are way different from anything you’ve ever seen before, that’s a clue. The last bullet point under ethical reasoning evaluates its ability to understand and apply ethical principles in complex scenarios. Yes, it’s a Turing test, but it could be decent.
Number five, creativity and abstraction: testing the AI’s ability to generate truly novel ideas. Yes, that is interesting. It doesn’t necessarily indicate consciousness; AGI (artificial general intelligence), a truly powerful AI, doesn’t have to be conscious. It just has to be powerful. However, some people could argue that to have powerful creativity, it might be efficient to be conscious. Consciousness emerges because it is efficient. So, yes, that might be a good indicator, but not a definitive indicator. The next bullet point assesses AI’s abstract reasoning and symbolic thinking capacity. Yes, another Turing test.
Number six is emotional intelligence. It depends on where that ability comes from. You can train it for that and weight its priorities to help with that. But unless you’re careful, it’s just another Turing test. The next bullet is testing its capacity for empathy. Again, at some point, as with the Turing test, you bail out on deciding whether or not a thing is conscious. You say, “It might as well be.” So, if you’ve developed an emotionally engineered intelligence, it’s as expressive and empathetic as all get out. You might throw your arms up and say, “Whether or not it’s conscious, it certainly feels me. It gets me.” Is it truly getting it? Does it have a true subjectivity, or has it just been trained so hard on getting somebody that it feels better to be with than any human I’ve ever been with? I just don’t give a fuck. Let’s go with this empathy machine.
Number seven, self-modification and growth, involves observing the AI’s ability to alter its code or decision-making processes. I thought AI already did that. I’m not well versed enough in how AI trains itself, but I believe that’s something AI already does and, thus, is not a huge indicator. Another bullet point assesses its capacity for learning and adapting beyond its initial programming. Again, that seems like something today’s dumb AI can do within its limited purview. It learns how to kick ass at video games, at Go, at chess. It doesn’t have to be conscious. The AI doesn’t have to know anything.
Number eight, unpredictability and free will: looking for signs of decision-making that its programming can’t fully explain. You can call that uncanniness; it gives you the creeps because it’s getting up to unexpected stuff, or it disquiets you. But just because it gives you the creeps or you’re afraid it will turn into Skynet, that’s not definitive for consciousness. Assessing its ability to make choices against its training seems more powerful, but it’s still in the same direction.
Nine, integrated information theory: applying measures of information integration to assess consciousness. What it’s getting at here is multimodality, which maybe Max Tegmark is also getting at. But yes, if you can create a mathematical index of how much information sharing is going on and how many nodes the engineered intelligence has, that is a reasonable indicator. So, out of all the ones we’ve gotten through here, I buy this one the most.
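As a rough illustration of what a “mathematical index of information sharing” could look like, here is a toy Python sketch. It is not Tononi’s actual Φ calculation or any published measure; it just simulates a made-up four-node system and computes the mutual information between its two halves as a crude integration number:

from collections import Counter
from math import log2
import random

# Toy system: 4 binary nodes; each step, one node copies a randomly chosen
# node (so the halves share information), with occasional noise.
random.seed(0)
state = [0, 1, 0, 1]
samples = []
for _ in range(5000):
    i, j = random.randrange(4), random.randrange(4)
    state[i] = state[j] if random.random() > 0.1 else random.randrange(2)
    samples.append(tuple(state))

def entropy(counts, n):
    return -sum((c / n) * log2(c / n) for c in counts.values())

n = len(samples)
left = Counter(s[:2] for s in samples)    # nodes 0-1
right = Counter(s[2:] for s in samples)   # nodes 2-3
joint = Counter(samples)

# Mutual information between the two halves: a crude "information sharing" index.
mi = entropy(left, n) + entropy(right, n) - entropy(joint, n)
print(round(mi, 3))  # greater than 0 means the halves are not informationally independent

A real measure for an engineered intelligence would have to do something far more careful over far more nodes, but the basic move, quantifying how much the parts know about each other, is the same.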
The last one is suffering and pain response: investigating whether the AI can experience or understand suffering. “Understand” is doing a lot of work here, because it assumes that to understand something you have to know something, and that knowing equals consciousness, which is circular reasoning. It’s what I was saying about empathy and emotional intelligence: AIs will be able to mimic all that stuff long before they’re conscious. So there you go.
In real terms, we will only definitively know once we get that mathematics of consciousness. And even then, or probably not even then, because AIs will have evolved on too much data and are too black-boxy, we may default, in many situations, to: Does it feel conscious? Or does it feel like it? Or has it been certified by… There was a company called Underwriters Laboratories that used to be fairly omnipresent in American homes. They put little tags on electrical appliances. I assume the underwriters referred to insurance underwriters and that they’d taken the device into their lab and messed with it to make sure it wasn’t dangerous, so it was certified. We’re going to have to do something similar with AIs.
People with the best ideas about measuring and characterizing what’s happening within the information space of engineered intelligence will… This could be a good business if we bring robots and engineered intelligence into our bedrooms and kitchens. There should be a company that certifies that these things are safe and tells you what to look for as signs that they might be becoming unsafe. We’ve got VAERS in the US, the Vaccine Adverse Event Reporting System. It’s a system for tracking any time somebody goes to a doctor or a paramedic after they’ve been vaxxed. Somebody’s supposed to fill out a report on anything from a broken arm to headaches. Then somebody’s supposed to analyze the reports for unusual emerging statistical trends that might indicate the vaccine has side effects. We need an AI database that lets people track when AIs go off the rails.
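Purely as a sketch of what a single report in that kind of VAERS-style AI incident database might contain, here is a minimal Python data structure. The field names and the example product are hypothetical, not any existing standard:

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AIIncidentReport:
    """One report in a hypothetical adverse-event database for AI misbehaviour."""
    reported_at: datetime
    system_name: str        # the product or model involved
    system_version: str
    setting: str            # "home robot", "chatbot", "smart infrastructure", ...
    description: str        # free-text account of what went off the rails
    severity: int           # 1 = odd but harmless, 5 = caused real harm
    reporter_contact: str
    tags: list[str] = field(default_factory=list)  # for later trend analysis

report = AIIncidentReport(
    reported_at=datetime.now(),
    system_name="HomeHelper",   # hypothetical product name
    system_version="2.3",
    setting="kitchen robot",
    description="Ignored a stop command twice before complying.",
    severity=2,
    reporter_contact="owner@example.com",
    tags=["command-refusal"],
)
print(report.severity)

The point is not the exact fields; it’s that reports have to be structured enough that somebody can run the same kind of statistical trend analysis that VAERS is supposed to support.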
So, for regular people in the world interacting with a ton of AIs every day, we’re going to have to rely on certification companies, databases, and data analysis to be reasonably confident that, at the least, the AI isn’t going to harm us. Part of that should be having a picture of whether the AI is thinking, whether it’s conscious, how it feels, whether it’s likely to develop behaviours that violate Asimov’s three laws, and whether it will start valuing its own existence over human existence.
We’ll have many of the same issues with human-facing AIs in the future. Our buddy Chris Cole says that by 2100, there will be a trillion AIs worldwide, but not all of them will always be interfacing with humans: think smart sidewalks that keep track of traffic and monitor whether they’re becoming worn or cracked. Nobody expects a chip in a sidewalk to be a fully formed robot butler. But for the human-facing AIs, we will run into the same problems of trust that we run into with people.
We’ve seen in the US that the American political system wasn’t set up to prevent a psychopath or sociopath (I never know the difference), a con man, from becoming president. A big part of how psychopaths victimize people is that they’re rare enough that people don’t generally have their defences up around new people they meet. When you get divorced (I assume it’s still the case that half of all marriages end in divorce), an element of divorce, or of breakups where people didn’t get married, is that the person turned out not to be the person you thought they were. That’s probably an element in at least 40% of divorces. There are other causes, like a middle-aged guy who wants a hottie so he can get a boner more easily. Maybe his wife is exactly who he thought she was, but this guy wants to get laid with somebody 20 years younger. But I would think that for a plurality of divorces, it’s that either the person isn’t who you thought they were, or they changed.
So, verifying other people’s consciousness is a problem. It has always been a problem. We assume other people are conscious, but the contents of that consciousness are not accessible to us. So we must take their word for it, or observe them over time and hope they’re not faking it. Given the black-boxy nature of AI, I assume there are going to be elements of that: having to trust the people who have evaluated the technology, and our own instincts, and still ending up not knowing for sure what’s going on in your robot girlfriend’s mind, including whether she’s conscious. The end.
License & Copyright
In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. ©Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use or duplication of material without express permission from Scott Douglas Jacobsen strictly prohibited, excerpts and links must use full credit to Scott Douglas Jacobsen and In-Sight Publishing with direction to the original content.
