
Ask A Genius 907: Simulations of Consciousness Before Consciousness

2024-05-21

Author(s): Rick Rosner and Scott Douglas Jacobsen

Publication (Outlet/Website): Ask A Genius

Publication Date (yyyy/mm/dd): 2024/04/06

[Recording Start] 

Rick Rosner: Some people say, or at least one person I read says, that AI is a misnomer; it's just high technology. Calling it intelligence, artificial or otherwise, obscures that it's just increasingly powerful technology. We have the same genetics as humans did 100,000 years ago. We're not getting any smarter biologically, which means it's harder and harder for humans to keep up with the world created by technology without being aided by and combined with technology. It makes it almost tautological that we must define ourselves by this technology. You suggested I ask our buddy Chris, who knows more about this stuff. We do not know where AI is going, so the first question is, will AI get smart? Will it have general intelligence, which is fluid intelligence, the kind of intelligence we think of when we think of human intelligence: the ability to understand the world and come up with clever ideas on how to deal with it? To some extent, our idea of smartness includes becoming conscious. 

That will all happen. The second question is when it will happen. I'm no authority, but it's going to happen. You and I have talked about consciousness extensively over the past ten years, and we understand its elements. We have a reasonably good model of consciousness. So, we know what AI doesn't have and what it will need to have to be conscious. People like Cory Doctorow say that regardless of what happens to AI in the medium term, in the short term there's likely to be an AI crash, the same way there was an internet crash around the year 2000, because everybody got super psyched in the late 90s. My writing partner and I were in charge of the website for The Man Show, themanshow.com, and we thought we would all become millionaires, because the hope was that if you had the right portal and internet gateway, you would make a million bucks. Then there was a crash when people figured out that this wasn't going to happen and that the internet was still pretty shitty. Things like pets.com went away and took a lot of people's money with them. 

Then, of course, the internet did become everything we thought it would be with the coming of Google, streaming, and all the social media, once the tech was in place to do all this stuff. So, there was a short-term crash; then Google came along around 2005, and post-Google, the internet boomed and came to full-ish fruition. Doctorow and others think that before AI comes into full fruition, if it ever does, we're going to have a vast AI crash when AI doesn't live up to the huge expectations people have now, both in terms of performance and in terms of return on investment. Ironically, behind AI are real people: tens of thousands of low-wage people worldwide take the world's information and digest it, chewing it up like a mama bird chews up food and spits it into the mouth of a baby bird. Information must be processed before it can become the probabilistic fill-in-the-blanks that AI is. 

The article I read describes hundreds of people looking for pictures of people wearing shirts. They circle the shirts and add tags so that AI gets an idea of what a shirt is and how it works in the world; not really an idea, just a way to predict how a shirt should behave in an artificially generated picture. At this point, the AI doesn't know anything. It knows how to make impressive predictions, but filling the AI with the information to make those beautiful predictions is expensive. Getting a return on those predictions and making them pay off may not pan out in the short term. So, in the next two or three years, people may say AI is not living up to the hype. McKinsey, the somewhat-evil business consulting company, predicts that AI could double the world's GDP. That's a super high expectation, so in the short term, when it doesn't look like it's going to do anything like that, people will freak out, and we'll have a crash. 

The two questions I initially raised, will AI get smart and when, are still in play, just delayed in people's expectations by a few years because of the crash. AI may become conscious before 2032 in some labs that are spending billions messing with AI, like Microsoft or Alphabet. Will it become conscious in everyday life, where you could have your own conscious AI? No, not for years after that, though it'll get better at simulating consciousness. I've looked at a lot of pornographic AI images, a) because it's naked ladies and b) because it's one of the areas where you can watch AI change by the day as it understands more and more of the world of naked ladies and sex. Remember, it was only a year ago that AI didn't even understand how many fingers people have, or that underwear, to stay on your body, has to have a band that goes all the way around your waist. However, I think what makes naked AI ladies attractive is that they look very human. You know the uncanny valley, right? 

Well, the uncanny valley is from 20 years ago, with CG animation. There was the Tom Hanks Christmas train movie, The Polar Express. We're okay with cartoon characters and enjoy them, and we like photographs of people, but between obviously cartoonish characters and photographs are CG-generated images that look pretty close to accurate but are far enough off to give us the creeps. As I said, the uncanny valley is from 20 or more years ago. Now, images generated by AI aren't creepy unless the AI doesn't know how many legs people have or which sets of genitals belong to which gender. Aside from obvious errors like those, AI can make beautiful, non-creepy images. In this novel I'm writing about the near future, there's a kind of porn based on presenting images of women who are disquieted by being in porn. I mean, among the various erotic charges that porn can give you, there's the charge of seeing the humanity of the person participating in porn and getting the idea that the woman doing the porn isn't entirely comfortable making porn.

Now, there's a charge going the other way, too. You can like porn where the woman seems totally into it. However, there's the other way, where the woman appears not so into it; there's a kind of sadistic charge to that. In my novel of the near future, porn is made more porn-y by the porn performers simulating consciousness. Similarly, video games have a perverse charge, where you get a charge out of engaging in combat with background characters who appear to be conscious. So anyway, I think there will be a market outside these perverse areas. It will be a market for making AI friendly to deal with and making AI appear conscious. So, AI will simulate consciousness years and years before it becomes conscious. 

[Recording End]

License

In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Based on a work at www.in-sightpublishing.com.

Copyright

© Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Scott Douglas Jacobsen and In-Sight Publishing with appropriate and specific direction to the original content. All interviewees and authors co-copyright their material and may disseminate for their independent purposes.

One Comment
  1. Grant Castillou

    It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create an adult-human-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, perhaps by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

