Ask A Genius 833: Boiling Down AI Talk
Author(s): Scott Douglas Jacobsen and Rick Rosner
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2023/05/08
[Recording Start]
Rick Rosner: Last time we were talking about AI, and I just wanted to boil what we were saying down to its essentials, which is that AI seems to be able to do a rough approximation of in-task information processing at a level that is either comparable to human or close enough to it that you could imagine it getting there within a single task, like language communication or generating art. The AI seems to be capable of doing things on an apparently human level. And then you look at the entirety of what might be necessary for consciousness, which might include different kinds of information: sensory input, memory, judgment connected to feelings about what the conscious being is experiencing. At this point, given what AI has been able to do, even if it is characterized as sophisticated fill-in-the-blank or autofill, it seems to be able to do that to a degree which indicates that the other levels of integrating information don’t seem insurmountable or mysterious.
I’ve said over and over that AI seems to be, or will be, competent at in-task information processing; single-task stuff. Then you can reasonably assume that consciousness consists of that plus another few layers of the integration of information, but none of those further layers seem sufficiently magical or inscrutable that some super-powerful version of AI, one that involves a zillion servers and just burns up a ton of electricity doing calculations, or the AI version of calculations, couldn’t handle them. The first conscious AI setups may take an incredible amount of hardware and power, may run slowly compared to human consciousness, or may be hampered in other ways compared to human consciousness. But none of this seems undoable. Do you agree?
Scott Douglas Jacobsen: Yeah. I mean, I think that one big assumption in a lot of the AI conversations is sort of a magical spell; the idea of this extra-human stuff that makes us conscious. It seems to be a matter of the style of processing, the degree of integration, the suppleness of what we deem human consciousness, which most people have, barring catastrophic events or very advanced age. So I don’t believe in a magical substance that makes something conscious. I believe it’s a matter of degree and style; the tone of the consciousness, to put it in more poetic terms.
Rosner: Yeah, the fleshiness of it, the desire for human contact that goes into the appreciation for beauty. I mean, there’s a human flavor to consciousness because the consciousness we’re most familiar with is human, with all its built-in preferences and biases.
Jacobsen: Yes.
Rosner: Another thing is that until recently we haven’t had the right information to get closer to being right about what consciousness is. For thousands of years, we didn’t have great stuff to work from. So for thousands of years we’ve usually been wrong about what consciousness is, which has given people, I think, a false impression that consciousness is more mysterious and harder to figure out than it actually is.
Jacobsen: I don’t even think there’s a distinction between a hard problem and a soft or easy problem of consciousness. I think it’s a matter of engineering. I think this is something that evolution builds with environmental pressures, and, in those naturalistic terms, I think it’s something about architecture that’s dynamic over time. That’s a very generic way of saying it, but it’s really the style of the information processing that matters, and we’re going to keep talking about it. In my terms, it’s sort of computation with human emphasis.
Rosner: Yeah, like when people imagine AI consciousness, they imagine this kind of dry, emotionless, cruelly calculating consciousness.
Jacobsen: HAL 9000.
Rosner: Yeah.
Jacobsen: Take the Jeopardy bot; it doesn’t have valence, the ability to say ‘I want this’ or ‘this is more salient to whatever drives me’.
Rosner: Yeah, Watson, the Jeopardy bot. Is it 10 years old now, or more?
Jacobsen: Probably.
Rosner: Yeah, and that thing is super primitive now, I think, compared to what you’re getting today in terms of facility of information retrieval.
Jacobsen: Yeah. I mean, there is an argument to be made that you need emotions to sort of limit and direct the information processing, and also to close the gap so you don’t have unending processing about something; just to say, ‘Okay, this is enough. Go do that.’
Rosner: Yeah. Well, certainly you need emotions for whatever consciousness you have to feel like human consciousness. If you judge emotions more abstractly, as you just did, as kind of resource managers, then the artificially conscious entity has objectives and will evaluate the data it’s receiving in light of those objectives. And then you can say the emotions are how it feels about the information it’s getting, which is what we do. Like, if you’re out on a date and you see things that indicate that the girl might be horny for you, you feel good about that. Say you’ve got an artificial consciousness that has been taught to have objectives related to maximizing something, say money via trading on the stock market, or its own security via its ability to make money on the stock market. It’s not an insurmountable problem to have the freaking artificial consciousness feel good and bad about how it’s doing trading on the stock market.
Last week you mentioned this Turing quote, from probably the late 40s or early 50s, where he said something like it’s impossible to think that computational entities, by conversing with each other, won’t eventually be able to do any human task as well as humans, right? There’s that quote?
Jacobsen: Yeah, and he said they will outstrip our feeble powers.
Rosner: Yeah, so he was anticipating the rise of AI when computers couldn’t even do as much as a four-function calculator from 1974. They couldn’t do jack shit.
Jacobsen: He died in 1957 maybe.
Rosner: I think you said 1954. At that time calculators were glorified adding machines. Anyway, we’re now at the very beginning of AI that manifests something close to actual intelligence.
Jacobsen: Yeah, and everyone, or most people, seem to be afraid of artificial intelligence. A lot of leading lights, people with the money, research, and teams to lead this charge, have been warning about it and are scared of it, like, “Oh my God, what are we going to do? Pump the brakes; take a six-month breather…”
Rosner: There’s been a lot of that, and then there’s been a lot of people saying that we were never able to put any fucking genies back in any fucking bottles, and certainly this one’s so far out of the fucking bottle. Some of the AIs that have been freaking people out, like ChatGPT, cost some huge amount, like 50 million bucks or something, to fill with information. There’s a word for it, but basically to educate, to train. It costs a lot of money to pump it full of information in a way that it can work its AI on it; a lot of money and a lot of electricity. But then, a few days ago, I started working with chatbots that only cost like 300 bucks to train somehow. Those things are delivering results that aren’t appreciably shittier than the 50-million-dollar AI chatbot. So yeah, the genie is out of the fucking bottle.
Jacobsen: I’m not scared in the least. I mean, my argument would be in line with Alan Turing’s, and would be even stronger than that of a lot of the people who are saying no, we can’t stop it. I’m taking a different approach and saying this is a good thing and we should encourage it. We should encourage the advancement of artificial intelligence because we live in a knowledge and information era. In other words, we need them.
Rosner: Okay. I was working on a tweet, which I haven’t reached the point of sending yet, that says something like: you can freak out, or you can go with it and hope and trust that a world with these AIs will still have room for you in it. Which might be Pollyanna-ish, but I think it raises other questions. I mean, AIs have objectives now. They’re trained to maximize certain things: to be good at Go, to be good at games, to be good at verbal communication. But they’re not conscious, so they’re not conscious of their objectives, whether they have them or not. We’re entering an era in which you have these entities, whether or not they’re conscious, that will have objectives, whether they originate them themselves or are trained to have them, and in most cases they’ll be trained. But the question then is how you go from this world we live in now to whatever world we’re going to end up in.
Right now, humans have all the money. We make all the decisions, we own everything; anything that’s owned in the world is owned by a human or a human-created body like a corporation. So what gets owned in the future, when there are things that will want to own things themselves? Question one is whether artificial entities will want to own stuff, and I would say yeah. Then you’ve got to ask what they will want to own. There’d be a whole range of shit depending on what they’ve been trained to want and what they train themselves to want.
Humans have had little control over what we want because, as you said, we’re the products of an evolution that has stacked the deck as far as our desires go. We haven’t had much free will in terms of what we want: we want to fuck, we want to survive, we want resources, we want to see beautiful, entertaining things because those things are related to our other evolved objectives. Everything we want is because we evolved to want it, or because we’ve hijacked and perverted what we want; like, there are some guys who like to fuck cars, or there are furries who like to fuck other people in animal costumes.
It’s not like we’ve taken over our own desires and re-engineered them. We have kinks; we’ve taken our basic horniness and just tweaked it a little bit, and not very willfully; it’s just where you ended up, wherever your jerking-off journey takes you. It’s not that you’re reprogramming yourself intentionally. It’s just that you keep wanting the jizz, and what makes you want to jizz is weird for some people. We’re still evolution’s bitch. There will be some of that with artificial entities, but they, or we, as we become integrated with some of them, will possibly have the ability to re-jigger our objectives. It’s not unreasonable to think that artificial entities will want to maximize their resources; that they’ll want to survive.
Now, you can train AIs, or will be able to train AIs, not to have indefinite survival as one of their objectives. They’d be task-oriented, and we could figure out how to engineer survival out of the design; they’d be like fucking salmon: they swim upstream, they lay their eggs or whatever salmon do, and then they die, because that’s their whole deal. But I would think it would be fairly natural for artificial entities to self-determine that they want to survive and that they want to maximize their resources in order to survive. They’d want to get rich because wealth gives them safety and the potential for continued survival. I suspect, within that, that one of the objectives of artificial entities would be to maximize information-processing power. The thing that’s not talked about as much as some of the other aspects of AI is how much electricity, how much energy, it takes to do whatever it is they’re doing; the computation, the information processing. So I assume that in a totally computational future, a currency, a money, will be computational power and resources. Any comments?
Jacobsen: I think our categories of thought around thought and feeling and instinct or intuition, which probably started with Aristotle, will have to change when we start deconstructing the human mind. And I think those will then give us insight into how other intelligences, I won’t call them artificial, but constructed or synthetic, will more closely match human character. These are really old concepts. For instance, people often use the terms emotion and feeling for the same thing, and instinct and drive, or they confuse experiential and physiologically based intuition with divine inspiration; the latter doesn’t really exist, in my opinion.
Rosner: I’ve read one book and I’m reading another about how what we think of as inbuilt, natural emotions are cultural constructs. I mean, it’s easy to argue that for things like love, that love might mean different things across different cultures, and for something like schadenfreude; if it has a bunch of syllables, it’s probably a culturally constructed emotion. But these books argue that, while we have physical reactions, almost anything that we see as a basic emotion is something that’s developed by rubbing against a culture. Intuitively, we feel like we have an inbuilt rainbow of basic emotions, but these brain scientists and sociologists have been finding otherwise. I think on the one extreme you’ve got the Aristotelian categories of thoughts and feelings, and on the other extreme is the idea that it’s all the same shit. It’s all just input; feelings are input from the emotional parts of your brain, and thoughts and memories and all of it are just sets of pulses that develop networks of dendrites.
The more we learn, the more we’ll be able to shift back and forth between ‘it’s all the same shit, it’s just thinking within feedback systems’ and the old categories of thought, and it’ll be similar to shifting back and forth between physics and chemistry.
Jacobsen: It’s probably the feedback that distinguishes emotions from straight thought. Because when people take horse tranquilizers or some of these very heavy psychedelics, their body just decouples and they report experiences of being pure thought. The dorsolateral prefrontal cortex is the last part of the brain to develop, the evolutionarily newest, and it’s the most important part for self-judgment and thinking. And so, if that is so, then that is just a very advanced part of the brain that takes a long time to develop, and it can function in an independent way, without emotion; just thought upon thought upon thought, recursion, recursion, recursion. But I think things like emotions and instincts and drives and physiological needs are kind of networked, and they feed back up into that, and then they come to consciousness, and then we put words and labels on them. So I could very easily see, and this is hypothetical speculation, that people’s diets and environments breed a different internal culture of organisms around and in them, which changes what hormones and neurotransmitters are produced, and in what ratios, throughout their whole development cycle.
And so that can change not just what we mean when we say, okay, you’re from a different culture, you have a different language and different labels, not only a different structure of language for things but different labels for things, but also different feelings and drives toward and about things.
Rosner: Like pain, for instance, is networked into you in a way that feels quite different from other inputs. Pain leads to reflexive actions; pain is hard to fight. If somebody’s pressing a razor blade down into your finger, it’s hard to just keep your finger there; you become very focused on the razor blade. There’s less introspection going on, unless somebody’s razor-blading you every day; then maybe you get used to it and you become better able to think while you’re being razor-bladed. Athletes talk about the loss of self when they’re really in some kind of athletic groove, which is really the loss of self-talk, the loss of the internal narrative. You’re so focused on the sport that’s happening around you that you are distracted from talking to yourself, which some people experience as a transcendent state.
I mean, all this stuff happens based on how things are networked into, or connected to, the rest of the network; both conscious and subconscious.
Jacobsen: Well, think about those Christian monks who would self-flagellate with whips. It hurts, but there was another part of the brain wired up to take that input and feed it into, let’s call it, transcendent pleasure, because they think they’re doing God’s work.
Rosner: They’re tricking their networking into functioning differently; they’re redoing their networks. I don’t want to say they’re short-circuiting them, but they’re figuring out how to change pathways or exploit them, and it’s still fucking around with the overall network of inputs. So, at base, everything is physics, but you can ignore physics and do chemistry when it suits your purposes; it’s a pain in the ass to take everything back to quantum mechanics when you’re just mixing shit in a lab. When you want to do biology, you don’t necessarily need to take it down to subatomic particles for every fucking thing that happens in biology or sociology. So, at base, everything is inputs and networks, but in practical terms you need to talk about what a pain network might look like, what a fear network might look like, what the effect of horniness on your perceptions and behavior might look like. So, not everything has to be taken down to individual little nodes of neurons that are educating each other. Is that reasonable?
Jacobsen: Yes.
Rosner: Okay.
Jacobsen: Let’s call it a wrap today.
Rosner: Okay. Thank you for all the talking.
[Recording End]
License
In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Based on a work at www.in-sightpublishing.com.
Copyright
© Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Scott Douglas Jacobsen and In-Sight Publishing with appropriate and specific direction to the original content. All interviewees and authors co-copyright their material and may disseminate for their independent purposes.
