
Ask A Genius 1362: The Vector of Intelligence: Trust, AI, Evolution, and Cognitive Direction

2025-06-13

Author(s): Rick Rosner and Scott Douglas Jacobsen

Publication (Outlet/Website): Ask A Genius

Publication Date (yyyy/mm/dd): 2025/05/07

Rick Rosner: So I was thinking about violations of trust, specifically, the treatment of animals and people. The biggest violator of trust is evolution. However, you cannot pin anything on evolution because it is not a being. It is a process without intention or awareness. It operates through natural selection—a statistical, impersonal mechanism. There is no accountability built into it.

That is how things work. However, through evolution, many creatures have emerged with innate drives—like seeking connection, safety, and reproduction—that often go unfulfilled. Some drives do get fulfilled. For example, the species-perpetuating drive—reproduction does occur in sufficient numbers to keep a species going. That is the evolutionary success story. However, again, evolution does not care.

But the drive to continue living—to keep experiencing life—that one ultimately fails for everyone, because death is inevitable. Moreover, again, evolution does not care. There is no built-in justice to any of it. I started wondering about the violation of those drives, especially in animals, and how trust plays into it. I was thinking about meat animals versus pets. Meat animals, especially those raised in industrial agriculture, do not develop expectations of love. Most are raised in conditions that prevent anything close to bonding or trust.

That said, there are exceptions. For example, in programs like 4-H in North America, young people raise animals with great care and even affection, then those animals are eventually slaughtered. Those animals have been treated with apparent kindness and likely form bonds with their caretakers. So when they are killed, it could be seen as a betrayal of that bond. The animal would not understand why someone who treated them well would suddenly cause or allow their death.

That feels like a deeper violation than what happens in factory farms. Animals raised in those conditions experience neglect, fear, and monotony—but perhaps without the same expectation of safety or affection. They suffer, but not necessarily from betrayal. Their world is harsh from the beginning.

So I wonder: is it worse to betray an animal that has known love than to abuse one that has only known suffering? I do not have a clear answer. I have guesses, but nothing sophisticated. Still, it feels like a fundamental question—one worth pausing to consider. Not that it is the most significant ethical question out there, but it might be revealing. That is thing one.

Thing two: I watched a 2023 science fiction film called The Artifice Girl on the flight back. I thought it was called Companion, but I was misremembering. The Artifice Girl is about an AI child developed to trap online predators. One of the characters is an artificial person—her intelligence and emotional complexity evolve, and there is a point where her cognitive abilities can be adjusted. Someone mentioned this film in an interview I saw recently, too.

It is a thoughtful and entertaining film. I liked it more than I expected. If you get the chance, you should watch it. It could have been a huge bummer, but it is pretty fun, especially compared to The Substance, which, yes, it shares some similarities with. However, The Substance was built up to be a huge bummer. Anyway, it got me thinking about cognition in artificial people—simulating or replicating various levels of intelligence.

I remember one line in Companion—at a certain percentage, the artificial person would have the cognitive ability of an Ivy League graduate. That stuck with me. Then, I read that Sam Altman thing you sent me recently—he says that AI already operates at the level of at least an average bright college student. That is the kind of baseline he is working with.

However, with an artificial person, you don't necessarily need actual intelligence; you need the appearance of different levels of cognition. You are simulating people. If you are offering the operator—someone interacting with the artificial person—different "intelligence settings," then the artificial person has to appear to be functioning at those levels: a high school graduate, a college graduate, a postdoc, a bright autodidact.

That is different from being those types of people. You are not replicating their minds—you’re conveying the impression of being that kind of person. Moreover, I imagine that simulating the intelligence of, say, a junior college graduate probably has aspects that are easier and harder than actually constructing the whole mental landscape of such a person.

It is a standard issue in sci-fi. In movies like Blade Runner, one problem is that some artificial beings are designed not to know they are synthetic. Others do know—and they are pissed. However, for those meant to believe they are human, the creators implant fake memories. You can’t fabricate a full 22 years of continuous memory for a being who thinks they’re 22 years old. So you implant a few key memories and build a mental framework where the artificial person doesn’t feel compelled to probe the blank spaces in between.

In Companion, the approach is similar. The film shows how you generate those key moments—anchoring memories—that are supposed to be enough to convince the artificial person they’re fully human. And then you guide them away from introspection or scrutinizing their own experience too deeply. There’s another artificial being in the movie, whose intelligence is probably set a little higher. He eventually figures out he’s not a natural-born person.

That made me think about how people can make themselves smarter than they are, functionally, if they embrace the right attitudes. We’ve talked about this before. Like, in the 1680s, if you overheard a bunch of proto-scientists talking about experimental method and nature in a coffeehouse—because that’s where a lot of the scientific revolution started, with caffeinated people in public debate—and you decided, “Hey, this makes sense,” you’ve effectively increased your intelligence just by choosing to align with a powerful intellectual current.

Even if you were, say, a silversmith—it might not have helped with your craft directly. But embracing scientific thinking could still shape your worldview. It may not have been immediately useful in 1680, but eventually, that mindset permeated more of life. 

Scott Douglas Jacobsen: Over time, it becomes transformative. So embracing the proper attitudes toward knowledge and thinking—toward evidence, doubt, and curiosity—doesn't necessarily change your raw mental capacity but changes how you apply what you have. You take your intellectual profile—your strengths and weaknesses—and direct it with purpose. That direction, that vector, is everything.

It’s like someone with an illness that fragments focus—they may still have the same cognitive architecture, but it’s scattered. Or schizophrenia, where the internal landscape becomes entirely fractured. Then it’s not just about the hills and valleys—it’s about how navigable the map is.

So that’s where you get—where you can justify—the outcomes in these cases where individuals have lopsided mathematical or verbal intelligence. Take Richard Feynman, for example. The guy was doing math in a strip club. He was doing it constantly. His vector—his cognitive direction—was honed toward one thing.

You see pathologies of this in people on the autism spectrum, or cases of savant syndrome, like Kim Peek. But generally speaking, the more important question is not "What's your intelligence?"—even though how we measure intelligence is deeply flawed, especially from a hard scientific standpoint. I was informed of this in an extensive interview with the former international supervisory psychometrician for Mensa.

IQ scores above 145 on a standard deviation of 15, for instance, are widely considered unreliable. That was a stricter standard than even I expected. I had assumed 160 was the point of unreliability. But 145, especially on properly normed, gold-standard, supervised IQ tests, is where the numbers start getting fuzzy. But again, the more important question is: What’s the vector? How are you directing that skill set?
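As an editorial aside, the rarity behind that 145 cutoff can be checked directly. On a scale normed to mean 100 and standard deviation 15, a score of 145 sits three standard deviations above the mean, so only a sliver of any norming sample lands at or beyond it, which is one reason scores up there get statistically fuzzy. The sketch below computes tail probabilities of an idealized normal curve; real tests deviate from a perfect bell curve in the extremes, and these figures are illustrative, not claims about any particular instrument.

```python
# Tail rarity on an idealized IQ scale (mean 100, SD 15).
# Illustrative only: actual tests are not perfectly normal in the
# tails, which is part of why extreme scores are unreliable.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

for score in (130, 145, 160):
    tail = 1 - iq.cdf(score)  # fraction of population at or above this score
    print(f"IQ {score}+: about 1 in {round(1 / tail):,}")
```

Under these assumptions, 145+ corresponds to roughly 1 person in 741, and 160+ to roughly 1 in 31,574; norming samples that thin cannot anchor a precise score.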

I don’t mind if we say the same thing differently. Here’s another example: I did a second interview with Alexis Rockman today. He’s a well-known artist who created the official Earth Day poster this year. He’s worked on several other significant projects.

At the end of the interview, we discussed whether his ability is inborn or cultivated, and he said it's both. And that makes sense—people visualize how to do things all the time, but don't necessarily put it into words. Words are still important, but visual thinking dominates some people.

So in cases like Alexis Rockman or even your friend Lance, these people are cognitively lopsided in a particular way. Their visual and spatial proclivity is far above average, and it has been honed with long-term commitment. The arc of that “vector space” is highly directed. You would call that a profession. Moreover, the output—what we see—is shaped by that professional direction.

Rosner: That makes sense. There are clear ways to conceptualize this. I also think about this in terms of career and social attitudes. Take people today who embrace AI. They likely have a productive attitude toward the world, at least in that domain. That attitude makes them smarter than others in the context of what is coming.

Now, compare that with another group, like the crypto crowd, especially on Twitter. These folks tend to be overconfident. Some might be savvy and ride the pump-and-dumps, sure. However, I read an article today about the Trump meme coin.

It has been out for a while now. Seven hundred sixty-four thousand people have lost money on it. Moreover, only 58 people have made over $10 million each. The article did not mention how many people made small profits, like $3 or $200. However, the imbalance is staggering. That shows the difference between an aligned vector and raw confidence without direction. Confidence can simulate intelligence in the short term, but without that underlying structure, it collapses.

It would be helpful to know those missing numbers, but I assume fewer than three-quarters of a million people lost money on it. I do not know—crypto seems like a sucker's game. Then you have got people who embrace MAGA and QAnon conspiracy theories. That kind of attitude seems like it artificially lowers your intelligence and your effectiveness in the world. You are embracing stupid shit—it is not going to make you smarter.

It is not going to help you, unless you’ve got cronies in the government and your warped attitudes somehow get you appointed to something. Otherwise, it is just going to make you dumber. What do you think?

Jacobsen: I would agree with that.

Rosner: Have we exhausted this topic for now?

Jacobsen: Probably.

Last updated May 3, 2025. These terms govern all In Sight Publishing content—past, present, and future—and supersede any prior notices. In Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons BY‑NC‑ND 4.0 license; © In Sight Publishing by Scott Douglas Jacobsen 2012–Present. All trademarks, performances, databases, and branding are owned by their rights holders; no use without permission. Unauthorized copying, modification, framing, or public communication is prohibited. External links are not endorsed. Cookies and tracking require consent, and data processing complies with PIPEDA and GDPR; no data from children under 13 (COPPA). Content meets WCAG 2.1 AA under the Accessible Canada Act and is preserved in open archival formats with backups. Excerpts and links require full credit and a hyperlink; limited quoting is permitted under fair dealing and fair use. All content is informational; no liability for errors or omissions. Feedback is welcome, and verified errors are corrected promptly. For permissions or DMCA notices, email: scott.jacobsen2025@gmail.com. Site use is governed by BC laws; content is provided "as‑is," liability is limited, and users indemnify us; moral, performers', and database sui generis rights are reserved.
