Ask A Genius 1066: The Chris Cole Session 4, “Pinky and the Brain”
Author(s): Rick Rosner and Scott Douglas Jacobsen
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2024/08/05
Scott Douglas Jacobsen: When I do these sessions with people, I let them know about Ask A Genius. Obviously, it’s named after you. I ask whether they would have any questions for you, and that’s where this comes in: they’re members of those communities, so they’d be the ones I thought would be interested. A follow-up from Chris Cole reads, “Let me ask the question differently: A bulldozer is a machine that is many times stronger than a human being. Nonetheless we don’t worry that it will try to take over the world. An AI is a machine that is many times smarter than a human being. Why should we worry that it will try to take over the world?” So, I am reminded of Pinky and the Brain: to what degree has the archetype of the Brain taken over our concept of AI?
Rick Rosner: Humans have historically misunderstood how brains work, consciousness, and many other concepts. However, I would argue that we have a better understanding now than ever before. Recently, I read an article debunking AI experts’ predictions of doom caused by AI. The article argued that assigning numerical estimates or probabilities to AI causing global harm is difficult because we lack reasonable priors. In Bayesian terms, a prior depends on examples of similar events happening or not happening in the past, and AI development is unprecedented. The article also suggested that many AI experts, perhaps all, may not fully grasp what we will face in the future, even if they understand the current situation.
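To make the priors point concrete, here is a minimal sketch (with hypothetical numbers) of a Beta-Bernoulli model: when there are zero past instances of the event to count, the “probability estimate” collapses to whatever prior the analyst brought with them.

```python
# Beta-Bernoulli sketch with hypothetical numbers: with no observed
# instances of "AI caused global harm" (successes) or "AI era passed
# safely" (failures), the posterior mean is just the prior mean.
def posterior_mean(alpha, beta, successes=0, failures=0):
    """Mean of the Beta(alpha + successes, beta + failures) posterior."""
    return (alpha + successes) / (alpha + beta + successes + failures)

# Two analysts with different priors and the same (empty) evidence:
doomer = posterior_mean(alpha=1.0, beta=9.0)      # prior mean 0.10
optimist = posterior_mean(alpha=1.0, beta=999.0)  # prior mean 0.001

# With zero data, each "estimate" simply restates its prior assumption.
print(doomer, optimist)
```

The arithmetic is the whole argument: absent a base rate of comparable past events, the numbers experts publish are dominated by their starting assumptions rather than by evidence.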
A bulldozer can be turned off and does not exhibit contrary or unpredictable behavior. Contrast that with the Boeing 737 MAX crashes, where an automated flight-control system (MCAS) engaged based on a faulty angle-of-attack sensor signal that misread the plane’s attitude. Boeing failed to instruct pilots on how to identify and disable the system in that situation. Consequently, the pilots fought the automation all the way to the ground at close to 600 miles an hour, resulting in fatal crashes. These incidents were not directly related to AI; they were a combination of corporate negligence, technical failure, and inadequate pilot training. However, they illustrate how computer-related mishaps can lead to catastrophic outcomes.
Consider another example involving nuclear reactors. Chernobyl was a reasonably safe reactor until a poorly planned safety test was conducted in the middle of the night, leading to a meltdown and rendering hundreds of square miles uninhabitable. This disaster was not caused by AI but by human error combined with existing technology. If AI fails, it may not be due to a malevolent AI like Skynet. Instead, it is more likely to involve a series of mishaps in which AI complicates and amplifies existing mechanical or human errors.
In plane crashes, rarely is a single factor responsible. Typically, multiple issues compound to transform a manageable situation into a fatal one. Thus, I could convincingly argue that AI is inherently dangerous because humans and machinery have always posed risks, and AI is now added to that mix. Regardless of whatever new dangers AI may introduce on its own, the combination of AI with other failures will likely lead to significant damage, injury, and death. This does not even touch upon AGI or superintelligent AI, whose arrival and capabilities remain unpredictable.
Current behavior suggests that AI will claim to be conscious long before achieving true consciousness. AI will mimic statements about consciousness and thinking because it has been trained on such data. We know AI can exhibit biased or inappropriate behavior when influenced by users, either due to their biases or for trolling purposes. Therefore, AI is indeed dangerous, potentially in new and more significant ways, but I am not qualified to assign probabilities to these risks.
Rick Rosner, American Comedy Writer, www.rickrosner.org
Scott Douglas Jacobsen, Independent Journalist, www.in-sightpublishing.com
License & Copyright
In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. ©Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use or duplication of material without express permission from Scott Douglas Jacobsen is strictly prohibited. Excerpts and links may be used, provided that full credit is given to Scott Douglas Jacobsen and In-Sight Publishing with direction to the original content.
