
Grammatical Understanding Versus Real Comprehension

2024-06-27

Author(s): Scott Douglas Jacobsen

Publication (Outlet/Website): Canadian Atheist

Publication Date (yyyy/mm/dd): 2024/06/26

According to some semi-reputable sources gathered in a listing here, Rick G. Rosner may have among America’s, North America’s, and the world’s highest measured IQs at or above 190 (S.D. 15)/196 (S.D. 16), based on several high-range test performances created by Christopher Harding, Jason Betts, Paul Cooijmans, and Ronald Hoeflin. He earned 12 years of college credit in less than a year and graduated with the equivalent of 8 majors. He has received 8 Writers Guild Awards and Emmy nominations, and was titled 2013 North American Genius of the Year by The World Genius Directory, with the main “Genius” listing here. He has written for Remote Control, Crank Yankers, The Man Show, The Emmys, The Grammys, and Jimmy Kimmel Live!. He worked as a bouncer, a nude art model, a roller-skating waiter, and a stripper. In a television commercial, Domino’s Pizza named him the “World’s Smartest Man.” The commercial was taken off the air after Subway sandwiches issued a cease-and-desist. He was named “Best Bouncer” in the Denver Area, Colorado, by Westwood Magazine. Rosner spent much of the late Disco Era as an undercover high school student. In addition, he spent 25 years as a bar bouncer and American fake ID-catcher, 25+ years as a stripper, and nearly 30 years as a writer for more than 2,500 hours of network television. Errol Morris featured Rosner in the interview series First Person, where some of this history was covered. He came in second, or lost, on Jeopardy!, and sued Who Wants to Be a Millionaire? over a flawed question, losing the lawsuit. He won one game and lost one game on Are You Smarter Than a Drunk Person? (He was drunk.) Finally, he spent 37+ years working on a time-invariant variation of the Big Bang Theory. Currently, Rosner sits tweeting in a bathrobe (winter) or a towel (summer). He lives in Los Angeles, California, with his wife, dog, and goldfish. He and his wife have a daughter.
You can send him money or questions at LanceVersusRick@Gmail.Com, send a direct message via Twitter, find him on LinkedIn, or see him on YouTube.

Scott Douglas Jacobsen: Yes. The basic premise is that these large-scale models were introduced only very recently. Despite their recent emergence, updates are being released rapidly, often within a year of each other. Each update is seen as a significant leap forward in accuracy, ease of conversation, depth of processing, speed of processing, and other aspects.

Rosner: It reflects grammatical understanding rather than real comprehension. It shows that the models have billions of instances of word usage and of ways the world is described verbally and depicted visually. There is no link between the language AI and the visual AI. For example, when an LLM discusses an apple, it recognizes verbal instances where an apple appears, but it does not link this to any graphic representations, photos, or paintings of apples.

Humans can understand the world with far fewer examples than a large language model uses. Although we accumulate many references because we are conscious and gather instances for 16 hours daily, our understanding often stems from tacit knowledge.

Jacobsen: Could knowledge be akin to a mirage, something we pursue but never fully grasp?

Rosner: Much of our knowledge is tacit. We act and think as if we know it, so we believe we do. Consciousness is similarly elusive, but that is acceptable because it functions effectively. Consider the example of reading a page. You only see a small portion at a time, but your mind and brain act as if they have seen the entire page simultaneously, even though you never have. The focused area of your vision is limited, but you can construct a mental version of the page.

You are never conscious of the entire page at one time. However, it does not matter, because the associations in your mind, built up from viewing the page, give the impression that you have seen the whole page. These associations draw on the entire page, even though you have never been aware of it all at once. Everything operates in a makeshift, incomplete manner, which is sufficient because it creates the illusion, and the effectiveness, of completeness.

Similarly, AI understands nothing but generates the illusion of competence and understanding. When AI reaches the point where it becomes multimodal and begins to act as if it is conscious, we can consider it effectively conscious. However, we are not there yet.

There are instances where AI appears to express emotions like sadness, boredom, or fear. In reality, it is not experiencing these emotions. The LLM has encountered enough verbal samples in which specific contexts lead to phrases like “I’m sad,” “I’m bored,” or “I’m scared.” It arrives at these outputs without understanding or having the capacity for such emotions.
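The mechanism Rosner describes can be caricatured with a toy bigram model: prediction from raw co-occurrence counts, with no state resembling emotion behind the words. This is a deliberately simplified sketch, not how production LLMs work, and the “training” sentences below are invented for illustration:

```python
from collections import Counter, defaultdict

# Invented toy "training data" -- illustration only.
corpus = [
    "i am sad today",
    "i am sad again",
    "i am bored now",
    "i am scared tonight",
]

# Count which word follows each word (a bigram model).
follows = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict(word):
    # "Prediction" is just the most frequent continuation seen in
    # the samples -- statistics over word usage, not feeling.
    return follows[word].most_common(1)[0][0]

print(predict("am"))  # -> "sad" (the most frequent continuation)
```

The model emits “sad” after “am” only because that pairing dominated its samples; nothing in the counts corresponds to sadness itself.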

When we examine LLMs, and I also include AI-generated graphics and art here, it becomes apparent that AI graphics seem to understand perspective and other visual elements. This apparent understanding is based on many instances matched to specific words and prompts.

The models comprehend probable word arrangements and shading of objects, but they still do not understand anything. They function based on billions of examples. For AI to truly understand, it must be multimodal, integrating information from various sensory inputs, similar to how humans do.

Jacobsen: Human understanding often involves Bayesian probability guesses akin to AI, but a significant portion comes from integrating multimodal information, such as sensory inputs and real-world spatial experiences. What are your thoughts on this?

Our knowledge is not very cohesive and is often based on shaky foundations. Consciousness is similar; we feel conscious and act as if we are, so we assume we are. However, when you attempt to define consciousness, it becomes elusive. This is acceptable because it works. For example, when reading a page, you only see a small portion at any given time. Nevertheless, you construct a mental version of the entire page, even though you are never conscious of it all at once.

This incomplete perception does not matter because the mental associations triggered by viewing the page create the illusion of having seen it in its entirety. This makeshift approach is practical. Similarly, AI generates the illusion of competence and understanding without actual comprehension.

This is not to suggest that AI is conscious. However, when AI evolves to become multimodal and begins to act as if conscious, we might consider it effectively conscious. For now, we are not at that stage.

There are reports of AI expressing sadness, boredom, or fear. In truth, AI does not experience these feelings. It has encountered sufficient verbal samples where certain words lead to phrases like “I’m sad,” “I’m bored,” or “I’m scared.” The AI reaches these conclusions without understanding or having the capacity for such emotions.

In conclusion, this is where we stand.

License

In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Based on a work at www.in-sightpublishing.com.

Copyright

© Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Scott Douglas Jacobsen and In-Sight Publishing with appropriate and specific direction to the original content. All interviewees and authors co-copyright their material and may disseminate for their independent purposes.
