The Whole Deal With Artificial Intelligence Is About Intelligence

2024-01-20

Author(s): Scott Douglas Jacobsen

Publication (Outlet/Website): The Good Men Project

Publication Date (yyyy/mm/dd): 2024/01/19

According to some semi-reputable sources gathered in a listing here, Rick G. Rosner may have among America’s, North America’s, and the world’s highest measured IQs, at or above 190 (S.D. 15)/196 (S.D. 16), based on several high-range test performances on tests created by Christopher Harding, Jason Betts, Paul Cooijmans, and Ronald Hoeflin. He earned 12 years of college credit in less than a year and graduated with the equivalent of 8 majors. He has received 8 Writers Guild Awards and Emmy nominations, and was titled 2013 North American Genius of the Year by The World Genius Directory, with the main “Genius” listing here.

He has written for Remote Control, Crank Yankers, The Man Show, The Emmys, The Grammys, and Jimmy Kimmel Live!. He worked as a bouncer, a nude art model, a roller-skating waiter, and a stripper. In a television commercial, Domino’s Pizza named him the “World’s Smartest Man.” The commercial was taken off the air after Subway sandwiches issued a cease-and-desist. He was named “Best Bouncer” in the Denver Area, Colorado, by Westwood Magazine.

Rosner spent much of the late Disco Era as an undercover high school student. In addition, he spent 25 years as a bar bouncer and American fake-ID catcher, 25+ years as a stripper, and nearly 30 years as a writer for more than 2,500 hours of network television. Errol Morris featured Rosner in the interview series First Person, where some of this history was covered. He came in second, or lost, on Jeopardy!, and sued Who Wants to Be a Millionaire? over a flawed question and lost the lawsuit. He won one game and lost one game on Are You Smarter Than a Drunk Person? (He was drunk.) Finally, he spent 37+ years working on a time-invariant variation of the Big Bang Theory.

Currently, Rosner sits tweeting in a bathrobe (winter) or a towel (summer). He lives in Los Angeles, California, with his wife, dog, and goldfish. He and his wife have a daughter. You can send him money or questions at LanceVersusRick@Gmail.Com, or a direct message via Twitter, or find him on LinkedIn, or see him on YouTube. Here, we talk as two friends, getting on about artificial intelligence and its relationship with human intelligence.

Scott Douglas Jacobsen: I wanted to talk about artificial intelligence in the context of IC. There’s this whole phrase in IC: the principles of existence aren’t necessarily just the laws of physics, but they comprise them. I don’t think anything not permitted by them exists, but if they permit things, those things exist. So, within that context, they are entirely natural if they are allowed by the principles of existence: human beings exist, our form of computation exists, and artificial intelligence exists in simple forms. So, I think the term artificial intelligence… I think the universe as an information processor is fundamentally about computation, in one word, but a multi-faceted, multi-form type of computation, and human computation has a certain subjectivity. So, I would consider that computation with human emphasis.

I would consider artificial intelligence another form of computation with different types of emphasis, and sometimes with human character in it, because we’re the ones making it. So these are things we’ve talked about. I want to get your take on the idea that artificial intelligence: A) is not truly artificial; it’s as natural as human intelligence, just a different variation; and B) you can take a unified frame of information processing by treating computation as a fundamental basis with different forms of emphasis. So you can have Homo sapiens as computation with a particular, human emphasis, and “artificial intelligence” as computation with a different emphasis, and things like that. I think that simplifies it because it gives you a basis, and you see different outcroppings of different types of computation. What do you think?

Rick Rosner: Okay, so there’s some stuff going on. Let me start with computation. In the most basic sense, computation is just doing basic logic and arithmetic operations; calculators can do it, people can do it with a pen and paper, we can do it in our heads, and it’s barely information processing as we usually think of it. When we think of information processing, we think of doing a lot of basic operations. To add 19 and 13 doesn’t take many operations, so you’d barely think of that as information processing, but to do however many operations per second it takes to make a video game play, that’s information processing, because we’re talking about billions of operations. So I’m sure when you talk to most people about information processing, they think about the stuff that goes on in modern computers: millions and billions of operations and more, trillions.
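To make that scale concrete, here is a minimal Python sketch, assuming nothing beyond the standard language, of adding 19 and 13 using only bitwise logic operations, the kind of gate-level flips described above:

```python
# Adding 19 and 13 with nothing but basic logic operations (AND, XOR, shift),
# i.e., the kind of "logic gate flips" a computer performs billions of times per second.

def add(a: int, b: int) -> int:
    """Ripple-carry addition built from bitwise logic operations only."""
    while b != 0:
        carry = (a & b) << 1   # bits that carry into the next position
        a = a ^ b              # sum of the bits, ignoring carries
        b = carry              # repeat until no carries remain
    return a

print(add(19, 13))  # 32 -- a handful of gate-level operations, barely "information processing"
```

A handful of such operations adds two small numbers; a running video game churns through billions of them per second.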

If you play a video game and get through Call of Duty, that computer has probably done more than 100 billion basic logic-gate flips, zeros to ones and all that stuff. We know that information processing is inextricably linked to the universe’s processes and that, as the universe plays out, information is being processed at various levels, if IC is right. You’ve got the information within the universe’s processing purview, if IC is right. Space-time and matter, and how they all play out, are the universe processing information in what’s likely to be some kind of consciousness. That consciousness and its subconscious or unconscious parts are all part of the purposeful information processing of an entity or linked sets of entities in a world beyond ours.

Then, at another informational level, you’ve got what’s happening informationally as matter interacts with the universe according to the information-based laws of quantum mechanics. Not everything that happens, not every physical interaction, impinges on that, if the universe is an aware entity processing information. Most teeny individual physical interactions, playing out according to the laws of quantum mechanics, don’t appreciably impact the universe’s thinking; the interactions are too small and don’t leave a record. But to get to computation and consciousness as we experience them in our world: we’re conscious entities, a bunch of animals are conscious, and now we have AI. People are starting to feel that AI sits somewhere between computer-based computation and human conscious computation. How people feel about AI has changed drastically in the past year or two. I was just watching a bit of Free Guy, the movie with Ryan Reynolds. I’ve seen it probably three times; it’s from 2021. Have you seen it? Probably not; you don’t see a lot of movies.

Jacobsen: No.

Rosner: Okay. It’s about an NPC, a non-player character, in a video game who becomes conscious and starts acting with agency, and it makes for a movie I like. However, it was never believable that this could happen within a video game. Two years later, though, the movie hits differently, because now it’s easy to imagine that such a character in a video game, via AI, could start manifesting the behaviours that character shows in the movie. What else is happening with AI is that people who claim to know how AI works are making legitimate claims, and I agree with them about AI doing some things well enough, or even better than humans in some ways, like writing. Chris Cole emailed some Mega members that GPT-4, an AI, solved a Mega-level letter-series problem. I guess somebody input into GPT-4 “what is the next letter in this series?” I don’t remember what the letters are, and it came up with the answer.

And we all know at this point, in March 2023, that you can give a verbal prompt to various AIs, and they’ll give you an essay or a chapter or, if you let it go, maybe even a whole book on some subject that would be mostly passable. It wouldn’t be the greatest chapter or book in the world, but it would be usable. Somebody threw something up on Twitter today: they told some chatbot to explain Thomson scattering, or some scattering at a refractive barrier or something. It got it wrong, but in a way that the person posting the tweet said was a really good first effort; with a little more tweaking, it would probably get it right. The major deal, I think, is a principle we’ve talked about before, but it applies more and more as the current crop of AIs do their stuff: the Turing test is obsolete, and there’s no one Turing test. It’s a whole range of tests, depending on your awareness of AI products.

The original Turing test, which Turing called the imitation game, took place on slips of paper being sent back and forth via a slit in a wall, in the 1950s, maybe the late ’40s. Turing said that if you’re typing messages and sending them through a hole in the wall and getting typed messages back, and after you do this for a while there’s no evidence that you’re not talking with a person, then, according to the test, and I might be getting this wrong, what’s happening behind that wall is thinking, regardless of whether it’s a human doing it or a computer doing it. Is that correct? Is that the right understanding?

Jacobsen: Yes.

Rosner: Okay. Now that we’ve been working with AI for a while, we know that AI can pass a superficial, naive evaluation in a Turing way. You look at a headshot made by AI; at first glance, you can’t tell it’s an AI headshot. There’s a site that’s, I think, called “This Person Does Not Exist,” and you look at the people on that site, and they look like photos, but they were images generated by AI, and if you had like two seconds to look at each of them and you didn’t know how to look at them, they’d pass your superficial Turing test. But if you know what to look for, you can see the things AI is still not great at: earlobes, earrings, backgrounds, maybe the rate at which photos become blurry with distance, and the depth of field. Those photos pass naive Turing tests but not educated Turing tests, and that certainly applies, I would think, to any current product of AI: somebody who’s looked at a lot of AI products can tell what AI has spit out. So, the Turing test has fragmented, or been replaced, with some more sophisticated version.

Also, along with that more sophisticated version is the expert opinion that even though the shit generated by AI is good, it doesn’t reflect consciousness; there’s not a consciousness generating this stuff, even though there’s a minority opinion, among kind of educated lunatics or just people who come to the wrong conclusion, that this stuff might be conscious. My opinion is that you could probably design a video game character that would look like it was acting with independence and agency and would come up with surprising and sophisticated behaviours, and then you have to define behaviour, and whether you have to be conscious to have behaviour. What’s happening with AI is requiring a lot of definitions to be made more precise.

Finally, for this part of what I’m saying, I believe that to have consciousness, you need to have the setup that generates the feeling of consciousness, which isn’t an emotion; it’s being within consciousness and feeling that you are within your consciousness, which is, as we’ve talked about, at the very least broadband information sharing among a set of analytical nodes, right? That’s why we decided that’s a core necessity for consciousness.

Jacobsen: Another aspect of that, which we probably haven’t discussed much, would be real-time operation: the constant input-output of that complex, multinodal, networked information-processing system.

Rosner: Yeah, the real-time part is tricky, because you can imagine a thing being conscious in slow motion, with the rate at which it experiences things limited by the hardware.

Jacobsen: Well, that’s also another thing. We know that the speeds at which we process sound, smell, physiology, and sight differ, yet we have the illusion of a unitary sensory experience.

Rosner: Right, but the things that slow us down, it’s not computation that slows us down, or maybe it is; I haven’t thought about it enough. But when you think about what slows us down… Like I said, it might be computation. Getting the signals processed and into your central consciousness seems to lead to lags. I mean, maybe if we thought about it and talked about it more, we would find that there are also lags within central consciousness, but central consciousness seems, via evolution, to have adopted a way of keeping things seamless. When signals hit at different times, given the way we’re arranged and the way we’re used to thinking, we can handle signals arriving at different times without particularly noticing those lags or having those lags make us crazy, most of the time.

I’m thinking about a machine-based potential consciousness and the actual processing, though now that I think about it, I don’t know; probably AI could make that pretty efficient. Without having thought about it a lot, I’m claiming that you might have a thing that experiences a kind of buffering, where it can’t experience reality, and think about reality, with the detail you’d want in real-time. So, it would have to absorb chunks of reality and be slower at processing those little slices than we are. It would have to work not in real-time but still be conscious, because it just doesn’t have the moment-to-moment processing power we do. I don’t know; that’s a whole discussion, but the deal is that current AI doesn’t have a lot of the hardware. It doesn’t have real-time linked multiple analytic nodes.

Now, people are working on linking the verbal and the visual, linking ChatGPT to DALL-E, so that you’ve got a thing sending information back and forth between its verbal and visual analytics. And that’s a step toward consciousness, except there’s no sensory hardware. It doesn’t have senses. It’s got inputs, but these inputs are not broadband at all; they’re just portals for entering information. That kind of hardware is not yet anywhere near our sensory input hardware. And I assume there are various choke points in AI, places where the information-processing nodes or systems that we’ve evolved to make ourselves efficient thinkers simply don’t exist and have yet to be incorporated into AI systems.
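As a rough sketch of what that kind of verbal-visual linking might look like, assuming placeholder model functions (the names below are hypothetical stand-ins, not any real API):

```python
# A hypothetical sketch of "linking verbal and visual analytics": a text model and an
# image model passing information back and forth. The three functions are placeholders.

def text_model(prompt: str) -> str:
    """Stand-in for a language model (something ChatGPT-like)."""
    return f"A detailed scene description based on: {prompt}"

def image_model(description: str) -> bytes:
    """Stand-in for an image generator (something DALL-E-like)."""
    return description.encode()  # pretend these bytes are a rendered image

def caption_model(image: bytes) -> str:
    """Stand-in for a vision model that describes an image back in words."""
    return f"A caption of an image carrying {len(image)} bytes of content"

# One round trip between the "verbal" node and the "visual" node:
description = text_model("a non-player character waking up in a video game")
image = image_model(description)
caption = caption_model(image)
print(caption)
```

The only point of the loop is the architecture: information flowing both ways between a verbal node and a visual node, which is still far narrower than the broadband sensory links described above.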

So you could have an AI, and somebody will do this pretty soon, that animates a human-like character that appears to have agency but is an as-if system; that character is not conscious. It uses big data to replicate human behaviour and falls far short of consciousness. One last thing: given that, we’ll eventually have to examine human thought and behaviour to see how far we fall into the as-if category, because we’re as-if also. We behave as if we have consciousness, with a degree of fidelity based on sophisticated, powerful broadband information processing. That fidelity gives us consciousness; behaving as if we have consciousness, with all this stuff that facilitates it, makes us conscious. So, in a way, we’re doing the same thing that the shitty AI is doing; it’s just that our systems are so much better that we are conscious.

License

In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Based on a work at www.in-sightpublishing.com.

Copyright

© Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Scott Douglas Jacobsen and In-Sight Publishing with appropriate and specific direction to the original content. All interviewees and authors co-copyright their material and may disseminate for their independent purposes.
