On Computers Matching Human Capabilities w/ Self-Understanding
Author(s): Scott Douglas Jacobsen
Publication (Outlet/Website): The Good Men Project
Publication Date (yyyy/mm/dd): 2024/01/22
According to some semi-reputable sources gathered in a listing here, Rick G. Rosner may have among America's, North America's, and the world's highest measured IQs at or above 190 (S.D. 15)/196 (S.D. 16), based on performances on several high-range tests created by Christopher Harding, Jason Betts, Paul Cooijmans, and Ronald Hoeflin. He earned 12 years of college credit in less than a year and graduated with the equivalent of 8 majors. He has received 8 Writers Guild Awards and Emmy nominations, and was titled 2013 North American Genius of the Year by The World Genius Directory with the main "Genius" listing here.
He has written for Remote Control, Crank Yankers, The Man Show, The Emmys, The Grammys, and Jimmy Kimmel Live!. He worked as a bouncer, a nude art model, a roller-skating waiter, and a stripper. In a television commercial, Domino's Pizza named him the "World's Smartest Man." The commercial was taken off the air after Subway sandwiches issued a cease-and-desist. He was named "Best Bouncer" in the Denver Area, Colorado, by Westword Magazine.
Rosner spent much of the late Disco Era as an undercover high school student. In addition, he spent 25 years as a bar bouncer and American fake-ID catcher, 25+ years as a stripper, and nearly 30 years as a writer for more than 2,500 hours of network television. Errol Morris featured Rosner in the interview series First Person, where some of this history was covered. He came in second, or lost, on Jeopardy!, and sued Who Wants to Be a Millionaire? over a flawed question and lost the lawsuit. He won one game and lost one game on Are You Smarter Than a Drunk Person? (He was drunk.) Finally, he spent 37+ years working on a time-invariant variation of the Big Bang Theory.
Currently, Rosner sits tweeting in a bathrobe (winter) or a towel (summer). He lives in Los Angeles, California with his wife, dog, and goldfish. He and his wife have a daughter. You can send him money or questions at LanceVersusRick@Gmail.Com, or a direct message via Twitter, or find him on LinkedIn, or see him on YouTube. Here we – two long-time buddies, guy friends – talk about computers and their capabilities.
Rick Rosner: Well, it started with me reading a tweet that said that cheap AIs can be almost as good as expensive AIs. Apparently, for language model AIs, you can spend millions of dollars pumping them full of information and get a chatbot that's pretty good at chatting, but this tweet said there are some chatbots that have been trained, for a few hundred bucks, that do a pretty good job of chatting. So I went to one of them and had a 3,000-word back-and-forth with this thing, and it seemed pretty good. It was kind of repetitive. I was asking it about itself, basically, like, "Are there any questions that you prefer getting because they help you improve your skills faster?" and the AI writes back, "Since I'm just a machine, I don't have preferences." And I go, "How about a hundred years from now; do you think AIs will be sophisticated enough to have preferences?" And the AI's like, "A lot can happen in a hundred years."
Then it gave me some standard boilerplate about stuff that we've talked about: that the whole thing will have to be approached ethically, that everything's going to be disrupted, and that optimally all this stuff will be handled with fairness to everybody involved. I write back that, looking historically, that's not how it goes. Humans don't develop new ethical understandings and systems until there's already been a lot of suffering, and the AI writes back, quite reasonably, that there are certain risks. The responses were well phrased but also kind of repetitive and sounded a little bit canned, as if a bunch of people had already been asking this chatbot similar things, so it had moved to this kind of boilerplate-y set of responses. It began almost every response with "as an AI language model." It seemed to be trying very hard to make sure that people didn't get the wrong idea about its capabilities, that people didn't anthropomorphize it. And this was a cheap one. Do you have to pay to chat with the more expensive ones?
Scott Douglas Jacobsen: Yes, ChatGPT Plus costs money.
Rosner: How much is it?
Jacobsen: I don't recall how much, but not much.
Rosner: I might use it to try cheating a little bit with my writing, to see if it generates anything that I can tweak into something usable, because I tried a couple prompts to see if I could generate usable writing and it just gave me more kind of boilerplate-y bullshit. I asked it what it thinks about humans dating animals, and what are some reasons why that would be a bad idea or a good idea, and it gave me some boilerplate bullshit about how it can't, you know, help me make personal decisions. Anyway, what's clear, and tell me if you agree or not, is that these AI models can handle… if they can't do it now, it's certainly within the reasonable horizon that they can handle in-task expertise at a close-to-human level. If you're looking for verbal interaction, a chatbot is able to have human-level syntax and fluidity, and knowledge that's in line with what you can expect from most people. Most of the time you're not talking to an award-winning poet, so you're not going to get a high level of creativity; you're going to get somebody telling you what they know or their opinions, which are not exceptional. It's just kind of the opinions that they have selected from the universe of common opinions that they agree with, right?
Jacobsen: Yes, generally I think computers are going to quickly match human competencies on things that can be made binary and then, obviously, surpass them.
Rosner: So within a specific task, like conversing or generating written work or generating art, they're able to do that. That obviously doesn't mean they're conscious, but it's huge on a micro level. And I think once you start looking at cross-node integration, obviously you want real-time sensory input. I mean, that's one aspect of human consciousness; a thing could have slow consciousness based on not being able to get enough… needing a lot of buffering because it can't absorb real-world information as fast as we do, but I don't think that's a huge technical hurdle. Maybe it is, there are probably issues with it, but the main hurdle between single-task expertise in AI and human consciousness is integrating the various expert nodes, right?
Jacobsen: There's one assumption, which is substrate independence. Well, three things: substrate independence, embodiment independence, and the style of processing. So, one, do you need a carbon-based evolved brain to produce consciousness? Two, does it have to be in a body that's integrated with it very well? And three, our style of processing: do you need that to make consciousness, or can you get at it from different angles, so the input's the same hypothetically, the processing is different, but the output's the same?
Rosner: So in our talks we've come to a couple conclusions. One is that consciousness is advantageous for an information-processing system dealing with a lot of novelty, right?
Jacobsen: Yes. It's sort of like having a quick purview of pertinent information, then making a conscious choice. It's almost like automated processing is picking a single thing out of a network, and consciousness is really deliberating over a field of choices and then picking from them. It's kind of different.
Rosner: Yeah, the field of choices is informed by expertise from a number of different expert nodes. Every part of the brain chimes in, including memory, and it's a big associational net that you're trawling to pull in all the information that may be pertinent. So thing one is that consciousness is advantageous. Thing two is that consciousness isn't a tough thing to create, given that mammals are conscious and there are other beings that are conscious. Just about any sufficiently smart organism is also conscious, because consciousness is advantageous and it's easily developed given the right stuff, with that stuff yet to be specified. Given enough brain stuff, a species is going to evolve towards consciousness because it's helpful, it's super helpful, and it's not super expensive. It's kind of expensive, but it's worth it.
So given that, it's quite reasonable to think that if you do all the reasonable things you think you would need to do to develop machine consciousness, you're going to get something that's conscious, with those things being a huge associative net among various expert nodes, plus memory. I asked the cheap-ass AI about feelings and judgment, and it's like, "I'm an AI and I don't have feelings." I'm like, "Yeah, but don't you think eventually we'll be able to figure out how feelings work in humans and replicate those systems in AIs?" And the AI kind of bullshitted and couldn't be pinned down. It had a kind of canned response to that, that there's going to be lots of different things happening in the future. I feel like if I talked to a more sophisticated chatbot I might get answers that are slightly less canned.
I haven't previously done extensive talking to chatbots, but it's clear to me, and I think to you, that at the micro level, the specific tasks, AI will be able to handle that shit at a human level, if not now then within a few years, right? I mean, there is the creativity angle: when you're doing AI art, the creativity is still coming largely from the human, the prompts from the human, and then the AI is just skating through its library. I don't know, maybe it's not so clear. I mean, a lot of human creativity is going to your own library of possible approaches to things and then picking out the one that catches your fancy, and certainly AI can do that too.
What's going on with self-prompting? Like, all the art that you get from AI, most of the art, at least all the good art that I know of, is a human typing prompts at the AI, but there's nothing to stop an AI from looking at a library of a billion different prompts and assembling its own likely prompts based on what it's learned about prompts, right?
Jacobsen: So when we talk about connecting nodes, we have a very good example. We have textual analysis or linguistic algorithms tied to visual algorithms, photorealistic algorithms. You could say those are two sophisticated programs. Get them into one system, and in a way that is what we're talking about with the human mind.
Rosner: Yeah, but that system's still not conscious; it still doesn't understand anything that it's working with.
Jacobsen: It's on the way, though. It's not fully integrated; it's not turning visual information into text for itself, into "okay, this is a picture in my mind that I'm going to draw."
Rosner: I suspect that you get something that’s very close to consciousness depending on the number and variety of nodes that understand each other.
Jacobsen: Yeah, so it's almost like there's the algorithm itself, there's recursion within itself for self-understanding, and then there's a system of co-communication between those two nodes themselves. That's a very sophisticated model if it ramped up beyond kind of simple language encapsulation, but it could be done. Why not? They're all engineering problems. Consciousness is a natural phenomenon, and it has been evolved. So it was engineered by an environment, a dumb environment, over a long period of time. A smart engineering team over a shorter amount of time should be able to do it.
Rosner: I'm guessing in a brute-force way. There's not a magical, like, hidden principle of consciousness. Magical is the wrong word, but there's not something hidden or non-intuitive about consciousness that needs to be learned before you can start building consciousness. I think if you take the elements of human consciousness, the ones we've talked about for years, and try to engineer them in, that will likely be sufficient to get a machine consciousness. What do you think?
Jacobsen: Yeah. I mean, just these language production models, they aren't producing language the way we do, but it's a way to get the same kind of output.
Rosner: There's another thing that's going on. I think that the thousands of years of people getting consciousness wrong have convinced people, just on an intuitive level, that consciousness is hard, conceptually and also as an engineering problem, and probably harder than it actually is. What do you think?
Jacobsen: Yeah, I mean, also a lot of the more typically fundamentalist religious outlooks try to centralize a human specialness, and I think consciousness is one of those last frontiers. I mean, we aren't special in most ways, and for the degrees to which we are special, there's a spectrum however you want to analyze it: language, level of integration and processing, physical strength, dominance of the planet, reproductive cycle… however you want to do it, there's a spectrum for all those things, and for most of them we're not really outstanding at all. And I think that is an argument for decentralization of human beings, and, if you want to make it grandiose again, I think that decentralizing humans is just part of a general process of looking at things more objectively. We aren't central, and the universe was not made for us.
Rosner: Yes, speaking of not being special, this is on a totally different subject except that it's just a personal thing. I quit benching with free weights for a very long time just because I figured I could maintain my strength well enough without needing to fuck around with weights on a bar, but recently I've started using free weights again. At my strongest, with terrible form and with trampolining the bar off my chest dangerously, I could bench press about 1.77 times my body weight.
Jacobsen: Nice.
Rosner: Yeah, but now, going back to it, I'm at 1.02 times my body weight, which is very disheartening. I've lost a lot of weight, and being skinny, like just bone and skin, is not good for bench pressing. Anyway, I'm feeling very not special.
License
In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Based on a work at www.in-sightpublishing.com.
Copyright
© Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Scott Douglas Jacobsen and In-Sight Publishing with appropriate and specific direction to the original content. All interviewees and authors co-copyright their material and may disseminate for their independent purposes.
