Ask A Genius 771: More on the Turing Test
Author(s): Scott Douglas Jacobsen and Rick Rosner
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2022/07/27
[Recording Start]
Scott Douglas Jacobsen: This is on the Turing test. Go ahead.
Rick Rosner: So, I mentioned I wanted to talk about the Turing test, and we did get around to it, but you did tell me you read Hawkins's paper on it. You told me to read it, and I didn't get around to doing that, but you said there's a quote in the paper.
Jacobsen: Oh, it's a quote in an actual Alan Turing paper: "The question 'Can machines think?' is too meaningless to deserve discussion."
Rosner: Did he have an analogy?
Jacobsen: It's that if you want to say submarines swim, then, yeah, you call that swimming. If you want to say what machines do is thinking, then, yeah, call that thinking. Something like that.
Rosner: All right. So we know that Alan Turing was maybe the smartest guy of his era with regard to computation, and his life was tragically cut short. He was chemically castrated for being gay and he hated it, so he ate a cyanide-laced apple, killing himself at an early age, but not before he saved Britain in World War II by decoding the German code machines. A brilliant guy who came up with all these principles of computing, including the Turing test, which is this: you're typing back and forth with somebody or something in a room, and the only communication you can have is via typed messages. His test was that if you can't tell whether the thing typing is a person or not, then that thing can think. And he does a very interesting thing, probably not on purpose, but probably because, given the time, the 1950s I guess, when he was talking about computing, he could only think about computers in terms of typed output.
That was just the technology of the time, or the technology of the future that could be easily imagined. But that room with typed messages coming out of it: it's easy to overlook that as an important condition, but it's a super important condition, and we're now seeing it put to the test now that we have machine learning systems that really can be put to the Turing test, but only within limited contexts. I think we talked about these AI art applications where you give the system a prompt. I saw one today where somebody posted on Twitter the prompt 'Pokemon trading cards from the 17th century,' and they freaking looked great, and they looked like they were done by a professional human artist. They were these ancient-looking trading cards with Baroque-looking Pokemon on them.
I think in some contexts you'd be hard-pressed to quickly tell the difference between the work of a professional artist, or let's say an illustrator who's working on a deadline, and the products of these AI art generators. And obviously context is important, because we're generalists. We could draw a 17th-century Pokemon trading card, and you can play chess with us, and you can get us to help you hang a picture, and you can get us to talk with you about what you like in a romantic partner. We have a full set of contexts in which we're used to thinking of humans as thinking beings, whereas all these AIs are getting Turing-tested in their one specific context. And that turns out to be the deal with modern AI systems that generate amazing results: you can have stuff that can at least delay flunking the Turing test for hours and hours.
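A minimal sketch, in Python, of the arrangement Rosner is describing: the hidden respondent is just a callable, the judge sees nothing but the typed transcript, and the one-line judging heuristic is purely a hypothetical stand-in for a human interrogator. None of this is Turing's own formalism; it only makes the typed-messages-in-a-room constraint concrete.

```python
from typing import Callable

def run_turing_test(respond: Callable[[str], str], questions: list[str]) -> list[tuple[str, str]]:
    """Exchange typed messages with a hidden respondent and collect the transcript."""
    transcript = []
    for question in questions:
        answer = respond(question)  # the only channel in or out of the room is typed text
        transcript.append((question, answer))
    return transcript

def judge(transcript: list[tuple[str, str]]) -> str:
    """Stand-in for the human judge, who in reality reads the transcript and guesses."""
    # Toy heuristic: assume anything that asks questions back is human. A real judge
    # would probe for hours, which is exactly what modern systems can now survive.
    return "human" if any("?" in answer for _, answer in transcript) else "machine"

if __name__ == "__main__":
    canned = lambda q: "That's an interesting question. What do you think?"
    print(judge(run_turing_test(canned, ["Do submarines swim?", "What is love?"])))
```

Even this canned respondent fools the toy judge, which is the point: the verdict rests entirely on what comes out of the room as text, within whatever context the judge happens to probe.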
Take the most sophisticated AI-powered chatbots: it used to be that you could see through one of these in five minutes with the right question. It wasn't really hard to boggle one of these systems, but now I think you could probably chat with a sophisticated chatbot via exchanging text messages for two or three hours without necessarily being able to tell whether it's human or AI. And the mechanism, or one of the mechanisms, is just Big Data contextual sampling, which pushes the question of whether this thing is conscious or not kind of down the road, because there are other issues you have to resolve first.
Like, we know that Google Translate translates by accumulating trillions of examples of language in context: somebody says this in German, or somebody says this in English and wants to know how to say it in German. You just compare word strings probabilistically and use those probabilities to build some kind of giant, there's probably an official name for it, but you can call it a Bayesian net: a net of linked probabilities across linked contexts. If somebody is saying this in English, then based on our 300,000 examples that are close to it, they would probably say something like this in German. It's Big Data used probabilistically, and ditto for a chatbot: if somebody said to an AI shrink, "I've been sleeping badly lately," then based on our quarter-million examples of what people say in conversations when somebody says they haven't been able to sleep, a human partner in the conversation would probably say something like this. When you have enough of these Big Data-based contexts, you can beat a Turing test for a long time. Ditto for those AI art generators; they probably have some kind of data set of billions or trillions of pieces of art (I don't know if a trillion pieces even exist in the world, but they have a lot) that are linked to verbal descriptions, and they can pull everything out of context and, along with their library of artistic techniques, generate something that's plausibly enough art to actually be art, even though no human generated it.
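A crude Python sketch of that linked-probabilities idea, assuming a hypothetical handful of prompt/response pairs standing in for the quarter-million examples Rosner mentions: the program answers a new message by finding the stored prompt it most resembles and reusing its reply, with no grasp of what any of the words mean. This is a toy illustration, not how Google Translate or any production chatbot is actually built.

```python
from collections import Counter
import math

# Hypothetical miniature corpus; a real system would draw on millions of examples.
EXAMPLES = [
    ("I haven't been able to sleep lately", "How long has the insomnia been going on?"),
    ("I've been sleeping badly", "That sounds exhausting. What's keeping you up at night?"),
    ("Work has been stressful", "What part of work is weighing on you the most?"),
    ("I feel great today", "That's wonderful to hear. What went well?"),
]

def bag_of_words(text: str) -> Counter:
    """Crude tokenization: lowercase word counts, ignoring punctuation and grammar."""
    return Counter(text.lower().split())

def similarity(a: Counter, b: Counter) -> float:
    """Cosine overlap between two word-count vectors: 0.0 (nothing shared) to 1.0 (identical)."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def reply(message: str) -> str:
    """Reuse the response whose stored prompt looks most like the incoming message."""
    scores = [(similarity(bag_of_words(message), bag_of_words(prompt)), response)
              for prompt, response in EXAMPLES]
    return max(scores)[1]

if __name__ == "__main__":
    # The "AI shrink" has no idea what sleep is; it is matching word statistics.
    print(reply("I've been sleeping really badly lately"))
```

Swap the canned pairs for trillions of scraped examples and a far better similarity measure, and you get something like the systems being described: convincing for a long while, with no claim to knowing what "sleep" means.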
But then you still have to ask the question: do any of these AI systems know what they're doing? Does Google Translate know the meaning of what it's translating? Does Google Translate know the meaning of love? And that pushes you back to the question of what we as humans know of the meaning of love. Do our general experience and our ability to function as thinking beings across general contexts give us additional insight into what love means, or what art is, compared to an AI system that has all these contextual nets? You could argue, and I would argue, that there's a difference in knowledge between a human's knowledge of love and Google Translate's knowledge of love, but can you quantify that difference? Not now. And given that you can't quantify it, there's probably some level of understanding and knowledge at which a bunch of people discussing it, hashing it over, doing calculations across a decade or so, could come up with a reasonable set of measuring sticks for when a level of understanding reaches the level of consciousness. But we can't measure any of this shit yet.
So we can't answer the question. We've just managed to push the question. We've got a bunch of things that can pass Turing tests for quite a while: hours at a time, or dozens of pieces of art at a time, or This Person Does Not Exist, which is a web app that just creates very real-looking human faces for people who don't exist. And so what we've done is that we have the Turing test in action now, but the Turing test's connection to thinking still lives in some gray areas. And we can talk about it more, but that's all I got on it right now.
[Recording End]
License
In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Based on a work at www.in-sightpublishing.com.
Copyright
© Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Scott Douglas Jacobsen and In-Sight Publishing with appropriate and specific direction to the original content. All interviewees and authors co-copyright their material and may disseminate for their independent purposes.
