
Ask A Genius 1526: The Future of Algorithms and AI, From Primitive Mistakes to Digital Concierges

2025-11-26

Author(s): Scott Douglas Jacobsen and Rick Rosner

Publication (Outlet/Website): Ask A Genius

Publication Date (yyyy/mm/dd): 2025/09/26

How will today’s crude recommendation algorithms evolve into AI-powered digital concierges that both empower and manipulate people, shaping future intelligence and autonomy?

Algorithms today are crude, often clumsy systems that drive ads, recommendations, and online shopping results. Rick Rosner and Scott Douglas Jacobsen explore how these imperfect tools—mocked for errors like selling washing machines after one purchase—are evolving into powerful AI-driven “digital concierges.” Such systems could provide personalized, helpful services, even aiding homeless individuals, but also pose risks of manipulation and surveillance, as dramatized in Minority Report. The dialogue contrasts current inefficiencies with looming sophistication, raising ethical questions about autonomy, critical thinking, and whether future generations will depend on technology like hermit crabs rely on fragile shells for protection.

Rick Rosner: Can we discuss the algorithm for a moment?

Scott Douglas Jacobsen: What algorithm?

Rosner: The one people refer to when you’re on your phone, and it suddenly throws up articles related to something you were just talking about in the room—as if it had been listening. People say, “That’s the algorithm.” Or when you’re shopping online, it suggests related products. Or on Netflix, it recommends shows based on what you’ve watched. Everyone calls it “the algorithm.”

Jacobsen: You’re saying everyone calls it that. I’m not denying people use that term; I’m saying I never personally use it that way.

Rosner: Fair enough. In my house, we do. It’s sloppy usage, but let’s talk about it anyway. We know it’s pretty primitive. It makes a lot of dumb mistakes. Really, it’s not one algorithm, but many—one for each service you use.

People make fun of it. Buy one washing machine, and suddenly you get ads for five more washing machines, which makes no sense. We could discuss why it’s so bad and whether it will remain that way.

My favourite recent example: I like searching for bikinis online because the algorithm then serves me lots of pictures of women modelling bikinis. I never buy one, but I like getting those images as spam.

On platforms like AliExpress—similar to Temu, a Chinese e-commerce aggregator—manufacturers post products for global buyers. They flood it with bikinis, swimsuits, and yoga gear. Some of it carries sexually explicit slogans or symbols, like “BBC” (a pornography acronym) or a spade-symbol “Q” (which, in fetish contexts, signals “queen of spades”). “Spade” is also a racist slur, so these items have a disturbing subtext.

I don’t believe American women—or women anywhere—are flocking to buy yoga pants advertising “big black cock.” What likely happened is that the algorithm scraped pornography where women wore garments signalling that fetish. Those images then influenced product listings.

The algorithm seems to assume, “This is just everyday American women.” I doubt it even understands the symbols it pushes onto workout gear or bikinis. It simply scrapes symbols from images—probably from American porn—and mistakes them for retail opportunities.

I browse AliExpress and see what it offers. For example, I like Lego, so it shows me Lego knockoffs. Recently, Chinese manufacturers have even started copying micro-mosaics. It’s fun to watch these aggregators at work.

Back to the algorithm—it can be wildly wrong. One reason is that it costs almost nothing to serve ads. When you shop for something on eBay, the algorithm suggests, “You might also like this.” The cost is negligible, even if the suggestion works less than 10% of the time.

Sometimes eBay’s algorithm offers me a cheaper version of the exact item I’m already viewing—maybe 8% less from a different vendor. That undermines sellers because eBay is effectively undercutting them. One reason the algorithm is flawed is that expectations are low and the cost of mistakes is minimal.
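The low-cost, low-accuracy logic Rosner describes can be sketched as a toy expected-value calculation. Every number below is a hypothetical illustration, not real platform data:

```python
# Toy expected-value calculation: why low-precision recommendations
# can still pay off when serving them costs almost nothing.
# All figures are assumptions for illustration only.

cost_per_impression = 0.0001   # assumed cost to serve one suggestion ($)
hit_rate = 0.08                # assumed: under 10% of suggestions convert
profit_per_hit = 2.50          # assumed margin on a converted suggestion ($)

# Even a sub-10% hit rate yields positive expected value per suggestion
# because the cost of a miss is effectively zero.
expected_value = hit_rate * profit_per_hit - cost_per_impression
print(f"Expected value per suggestion: ${expected_value:.4f}")
```

Under these made-up numbers, every suggestion is worth serving; the platform has no economic incentive to make the recommender much smarter.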

The algorithm is also blamed for influencing the 2016 U.S. presidential election. Cambridge Analytica, a UK firm, was hired by the GOP and used Facebook data to divide voters into buckets—maybe six categories—and then targeted propaganda at each.

It was effective, though maybe less because the buckets were well drawn and more because of the sheer volume of propaganda on Facebook. The algorithm that assigned people to buckets was primitive, but the saturation was overwhelming.
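The mechanism being described—sort people into a handful of crude buckets, then pair each bucket with tailored messaging—can be illustrated with a toy rule-based classifier. The features, categories, and messages below are all hypothetical; nothing here reflects Cambridge Analytica's actual methods or data:

```python
# Toy sketch of "bucketing" users from profile features and pairing
# each bucket with a different message. Purely illustrative; the
# features and categories are invented for this example.

def assign_bucket(profile: dict) -> str:
    """Assign a user to one of a handful of crude buckets."""
    if profile.get("follows_political_pages"):
        return "highly_engaged"
    if profile.get("age", 0) >= 65:
        return "older_audience"
    if profile.get("shares_local_news"):
        return "community_focused"
    return "general"

messages = {
    "highly_engaged": "issue-heavy ads",
    "older_audience": "nostalgia-framed ads",
    "community_focused": "local-angle ads",
    "general": "broad-appeal ads",
}

user = {"age": 70, "shares_local_news": True}
bucket = assign_bucket(user)
print(bucket, "->", messages[bucket])
```

The point of the sketch is how little sophistication the bucketing step needs: a few if-statements suffice, and the real leverage comes from the volume of messages delivered to each bucket.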

Jacobsen: The real question is when the algorithm gets less crude. What happens when we’re immersed in systems that truly know us and deliver sophisticated suggestions? Then you get “agents.” They could be deeply layered, capable of very targeted manipulation. Imagine a cyber-butler, cyber-girlfriend, or cyber-Jiminy Cricket on your shoulder—a digital concierge. It’s like a concierge company, but filtered through one butler just for you.

Rosner: Right. And I think you’re correct—it can take both helpful and insidious forms, often simultaneously. For example, I’ve had some training in what it takes to help homeless people. It requires concierge-level service because every homeless person’s situation is unique. You need a human contact who says, “What’s your deal? Here’s what we can do for you,” and then eases them into a less miserable existence.

A digital concierge for homeless people could be helpful. Imagine giving someone a tablet that says, “Hello, Jim. Here’s what’s available today: food here, showers here, housing applications here, medications here.” Jim might be mentally ill, have substance issues, or just be down on his luck. He might use the suggestions—or he might throw the tablet into traffic. But at sixty dollars a tablet, that’s far cheaper than Jim ending up in the ER eight times a year, which would cost the city sixty thousand dollars. It could be a relatively inexpensive attempt at concierge-level help.
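The cost comparison behind that argument is simple enough to write out. The $60 and $60,000 figures are Rosner's own round numbers from the dialogue:

```python
# Back-of-envelope comparison from the dialogue: one $60 tablet versus
# a city's cost for eight ER visits in a year (~$60,000 total).
tablet_cost = 60
er_cost_per_year = 60_000

# Even if most tablets end up thrown into traffic, the break-even
# threshold is very low: one avoided year of ER visits pays for
# a thousand tablets.
ratio = er_cost_per_year / tablet_cost
print(f"One year of ER visits buys {ratio:.0f} tablets")
```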

For people who aren’t homeless, the same digital concierge would be both helpful and insidious. It would guide them, but also nudge them in the direction vendors want them to go. That’s already obvious and well-documented. The best-known fictional example is Minority Report.

In that film, Tom Cruise runs through a subway station while personalized ads pop up, shouting his name. He’s trying to hide, but the system knows his identity and keeps calling him out.

That’s where algorithms are headed. They’ll improve significantly, very quickly, now that they’re AI-powered. But AI itself is still limited. The question is how quickly it will improve.

Jacobsen: Do you agree with Sam Altman’s general argument—that his kids and future generations will never be more intelligent than even today’s AI, such as GPT-5.5 and its successors? I set aside an editorial from this weekend’s LA Times. The headline sums it up: “The internet made us stupid. AI promises to make it worse.” Written by Christopher Cheschin.

Rosner: As AI use grows, researchers warn that the future of critical thinking doesn’t look good. You mentioned Sam Altman earlier—he said his kids will never be smarter than the AIs of the future. He framed it optimistically—as if that would be a good thing for them.

Jacobsen: Both Altman’s statement and that LA Times editorial point in the same direction. We’ve discussed before the process of domestication from wolves to dogs. Dogs are much less autonomous than coyotes or wolves. They surrendered some independence and critical skills to humans. Dogs don’t really know what’s going on—they rely on us for survival.

I don’t think Altman meant future kids will be stupid. He meant future AIs will be extremely smart. However, the editorial presents a darker argument: future children might be less intelligent, or at least less critical thinkers.

I see future kids more like hermit crabs. At one of the bars where I worked, we had hermit crab races. Every week, I had to look after the crabs. They didn’t fare well in captivity—two or three died each week. Out of their shells, hermit crabs are weak, pathetic, and defenceless.

That’s how I picture future people. With technology—their “shell”—they’ll be formidable. Without it, stripped bare, they’ll be weak and helpless. However, it is rare for people to be separated from their technology.

Last updated May 3, 2025. These terms govern all In-Sight Publishing content—past, present, and future—and supersede any prior notices. In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons BY-NC-ND 4.0 license; © In-Sight Publishing by Scott Douglas Jacobsen 2012–Present. All trademarks, performances, databases & branding are owned by their rights holders; no use without permission. Unauthorized copying, modification, framing or public communication is prohibited. External links are not endorsed. Cookies & tracking require consent, and data processing complies with PIPEDA & GDPR; no data from children under 13 (COPPA). Content meets WCAG 2.1 AA under the Accessible Canada Act & is preserved in open archival formats with backups. Excerpts & links require full credit & hyperlink; limited quoting under fair-dealing & fair-use. All content is informational; no liability for errors or omissions. Feedback welcome, and verified errors corrected promptly. For permissions or DMCA notices, email: scott.jacobsen2025@gmail.com. Site use is governed by BC laws; content is “as-is,” liability limited, users indemnify us; moral, performers’ & database sui generis rights reserved.
