
Fumfer Physics 29: How the Human Mind Measures Time, Space, and Thought

2025-11-03

Author(s): Scott Douglas Jacobsen

Publication (Outlet/Website): Vocal.Media

Publication Date (yyyy/mm/dd): 2025/10/31

In this dialogue, Scott Douglas Jacobsen and Rick Rosner explore the perceptual boundaries of human experience—the limits of what we can truly sense in time and space. Rosner explains that our temporal resolution hovers around a tenth of a second, the scale of reflexes and thought formation, while spatial awareness reaches down to roughly 50 microns, the threshold of the naked eye. They discuss how linguistic processing, births, and deaths occur within similar temporal slices, linking consciousness to the continuous flow of global life. The conversation ultimately frames thought as holographic—relational, dynamic, and resistant to discrete measurement.

Scott Douglas Jacobsen: I’ve got another question. For the scale of space we live in, and for the time perception we have—from the moment a photon hits the retina, travels up, is processed, and becomes consciously registered—what is the smallest magnitude of space or time we can legitimately perceive? Where do the gaps begin?

For instance, take the classic example: you draw a sequence of walking images on a Post-it notepad and flip through them. It tricks the brain into perceiving motion. The same principle applies to film. There are gaps there. So what’s the smallest time perception we have? And what’s the smallest spatial perception?

Rick Rosner: You’re talking about our brains interacting with the world. You’re not talking about the smallest possible units.

Jacobsen: Right. I’m not talking about the Planck length or fundamental limits of physics. I’m talking about perceptual limits—the relationship between a smaller subjective system with finite sensory capacity and the larger objective world.

Rosner: I can speak to that. For most people who aren’t highly trained at detecting fine differences, the just-noticeable difference—the proportional change you can reliably detect, expressed as a Weber fraction—depends on the sense. For lifted weight, it’s about 2 percent; for brightness, around 8 percent; for loudness, roughly 10 percent.

If you give someone two bags of flour and one is about 2 percent heavier, many people will notice. At 5 percent, with only one quick lift, more will notice, but performance still depends on context and experience.

So the threshold sits at a few percent in a lot of cases. When you’re talking about minimum perceptible duration—say, if you showed people flashing lights where one stayed on for half a second and another stayed on for 0.55 seconds, about ten percent longer—people would likely notice that. But when the flashes differ between 0.2 seconds and 0.22 seconds, the failure rate goes up.

If one light stays on for a tenth of a second and another for a ninth, can people still tell? I don’t know. But time perception generally operates within fractions of a second unless people are trained. With training, accuracy improves.
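The just-noticeable-difference idea can be sketched as a quick calculation. This is a minimal illustration in Python, using the approximate Weber fractions quoted above as stand-in values, not authoritative psychophysics data:

```python
# Back-of-the-envelope Weber-fraction check. The fractions below are the
# approximate figures quoted in the conversation (illustrative only).
WEBER_FRACTIONS = {
    "lifted_weight": 0.02,  # ~2% heavier bag of flour is noticeable
    "brightness":    0.08,
    "loudness":      0.10,
    "duration":      0.10,  # 0.5 s vs 0.55 s flashes, per the example
}

def is_noticeable(sense: str, baseline: float, comparison: float) -> bool:
    """Return True if the proportional change meets the Weber fraction."""
    fraction = abs(comparison - baseline) / baseline
    return fraction >= WEBER_FRACTIONS[sense]

# The flash examples from the conversation:
print(is_noticeable("duration", 0.50, 0.55))  # 10% longer -> True
print(is_noticeable("duration", 0.20, 0.21))  # 5% longer  -> False
```

The model treats the threshold as a hard cutoff; real detection is probabilistic, which is exactly why the "failure rate goes up" near the boundary rather than flipping all at once.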

When I was booked on Jeopardy!, I had a year to train because the season ended and they hadn’t brought me on yet. There was a car wash in Santa Monica that had a game: you’d put a dime in, and it would drop the dime at random intervals. If you caught the dime within about a tenth of a second, you got your dime back. If your reaction was slower than that, they kept it. I spent a lot of dimes trying to make my reactions fast for the Jeopardy! buzzer. That game taught me that reflexes operate on roughly a tenth-of-a-second scale.

Then there are thoughts. We’ve talked about this—how long it takes for a thought to form in your brain. The timescale is similar. If you put your hand on a hot stove, the withdrawal is actually a spinal reflex: the signal travels to your spinal cord and loops back down your arm before the pain even registers in your brain. But even a reflex isn’t purely local—there’s nothing at the elbow itself saying, “I’ll pull back without waiting for headquarters.” The signal still has to make a round trip, and that round trip takes on the order of a tenth of a second.

A lot of brain activity occurs on the scale of a third to a tenth of a second. But in a car wreck, when everything feels like it’s slowing down, perception sharpens. You can tell the order of events—the first contact with the other car, the windshield cracking, the airbag deploying, something flying off the other vehicle—even if those events are separated by only a few hundredths of a second.

So in crisis situations, maybe the minimum discernible time difference is around a fiftieth of a second. We know from film and television that we don’t perceive flicker when images are shown at 24 or 30 frames per second, which corresponds to refresh intervals of about 1/24 to 1/30 of a second. Instead of seeing discrete stills, we see smooth motion.

When I was a kid, I think there were some cartoons so cheaply made they ran at only 12 frames per second. I might be wrong—I haven’t checked—but at 12 fps, you could definitely tell. The motion looked choppy, like something wasn’t quite right. So that’s roughly the perceptual scale of time for humans.

Scale of space—so, I work with tiny things: little pieces of glass in micromosaics. I also, well, pick at myself. You know those pore strips people put on their noses? You leave them for a couple of hours, then peel them off, and they pull out the solidified oil from your pores—it looks like a gross little porcupine.

I do that manually. If I don’t have my contacts in, I can see really close up, and I’ll just start squeezing those little things out of my pores. They’re usually no more than a millimeter long, maybe about 0.4 millimeters across, and you can definitely feel them when you roll them between your fingers. You can feel even smaller stuff—probably down to a fifth of a millimeter, maybe even a tenth. You can feel it as it rolls along the ridges of your fingerprints. So you can feel textures down to about 100 microns.

And when you get a hair in your mouth or on your tongue, you can feel it instantly—that’s on the same order of magnitude. You can probably see, with the naked eye, objects down to about 50 microns, roughly a twentieth of a millimeter, maybe slightly less. So, that’s the spatial scale of perception. Have we talked enough about this, or should we move on? 

Jacobsen: Let’s build on it. I looked up how many people die per day. It’s about 169,400 deaths per day worldwide.

Rosner: Wait, that tracks roughly. You should lose about one person in a hundred over a year. With eight billion people, that’s around 80 million deaths annually, which divided by 365 gives about 220,000 deaths per day. So 170,000-something is in the right range.

Jacobsen: So the time from midnight to the first death of the day isn’t even a full second—it’s about half a second.

Rosner: Because there are 86,400 seconds in a day, right?

Jacobsen: Yes. So by the time you get to the first full second of the day, two people have already died somewhere in the world.

Now, when reading a word, the visual cortex detects letter shapes—like the dark lines of an “O”—in roughly 0 to 100 milliseconds. It decodes those shapes into known letter patterns between 100 and 250 milliseconds. Then lexical access—recognizing the word itself—occurs between 250 and 400 milliseconds. Finally, semantic integration, or understanding the word’s meaning in context, happens between 400 and 600 milliseconds. So, for a single word, comprehension takes about half a second. 

Rosner: But fluent readers move their eyes ahead before their brains have completely processed the previous word. Reading is continuous; you don’t pause a half second per word. The words flow together at a steady clip.

Jacobsen: And on the other side of that time scale, globally, there’s a birth roughly every 0.4 seconds. So, if someone starts reading at midnight, by the time they’ve finished two individual words—not a full sentence, not Ulysses—two to three people will have been born, and two people will have died, and a second will have passed.
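These back-of-the-envelope rates are easy to check. Here is a small Python sketch using only the figures quoted in the conversation (169,400 deaths per day, one birth every 0.4 seconds, roughly half a second of comprehension per word); the numbers are the speakers’, not freshly sourced statistics:

```python
# Verify the rates quoted in the conversation.
SECONDS_PER_DAY = 86_400
DEATHS_PER_DAY = 169_400        # figure cited by Jacobsen
BIRTH_INTERVAL_S = 0.4          # one birth roughly every 0.4 seconds
WORD_COMPREHENSION_S = 0.5      # ~half a second per word, per the timeline

death_interval = SECONDS_PER_DAY / DEATHS_PER_DAY
print(f"One death every {death_interval:.2f} s")  # ~0.51 s

# Reading two words takes about a second. In that time:
elapsed = 2 * WORD_COMPREHENSION_S
births = elapsed / BIRTH_INTERVAL_S
deaths = elapsed / death_interval
print(f"In {elapsed:.1f} s: ~{births:.1f} births, ~{deaths:.1f} deaths")
```

The arithmetic lands where the dialogue does: about one death every half second, and roughly two to three births and two deaths in the second it takes to read two words.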

Rosner: I just think that’s pretty remarkable. But it depends on the words. If they’re familiar, recognition is instantaneous. If people see “Coca-Cola,” it’s immediate. If they see “fuck you,” it’s immediate.

Jacobsen: So, in terms of the 0 to 250 millisecond range—the visual cortex decoding stage—there’s nothing unusual there. But for lexical access, it’s likely on the lower end, and semantic integration probably happens closer to 400 milliseconds rather than the upper bound of 600 milliseconds. Fair?

Rosner: Yep.

Jacobsen: All right. So that’s the timeline for linguistic thought. I was just trying to put it into perspective—how many people are born and die every day. It’s staggering. What does that timeline of lexical access and semantic integration tell us about our style of thought in relation to the world? We were just talking about small time intervals and fine sensory registration.

Rosner: We’ve talked about this before. Language—putting names on things—is an enormous leap in efficiency. I can’t explain it perfectly, but it’s so much faster.

When you think in words, you’re not decoding written symbols; the words arrive already formed in your mind. I don’t see every word I say appear before my eyes as I speak. The only exceptions are visual associations: when I say “Coca-Cola,” I picture the logo; when I say “fuck you,” I see the phrase. But generally, the sentences flow without visual imagery.

Words let you pack an incredible number of ideas into your head. You’ve met my dogs, right? At least one of them is kind of an idiot.

Jacobsen: I’d say I met both bodies of dogs, and I met a single dog, but neither of them met me—if you know what I mean.

Rosner: What you’re saying, philosophically, is that dogs aren’t exactly intellectuals. 

Jacobsen: It’s like that Republican joke about Biden—that he doesn’t know what’s going on in his own head because he doesn’t even know he’s there.

Rosner: But seriously, some dogs are smarter. Border collies understand quite a lot. Coyotes too—they’re probably more attuned to their world than house pets are. If a coyote wandered into your house, it would be completely confused, but in its own environment, it behaves with far greater sophistication than a domestic dog.

Still, animal understanding is limited. Part of that’s brain size—a dog’s brain is about the size of a lemon compared to ours. But the bigger factor is language. Language compresses thought. You can store vastly more understanding if you have a coding system that tags complex concepts with short words. Instead of reconstructing an idea every time, you shorthand it.

That’s a massive step forward in efficiency. If linguistic thought weren’t such a powerful evolutionary advantage, we never would have evolved brains capable of it.

But brains evolve a lot. They’ve gotten bigger and more complex across evolutionary history. Being able to think better clearly confers a reproductive advantage—though not in every case. Some organisms, like sea squirts, start out with functional nervous systems. When they’re larvae, they can sense and swim around. But once they attach to a surface and settle into a sessile, barnacle-like existence, they absorb most of that neural machinery. They don’t need it anymore.

For most animals, though, the ability to think is a big deal. And similarly, having words for things—that is, symbolic thought—offers a huge advantage. Anything else?

Jacobsen: I don’t know. I mean, we don’t really know the minimal unit of information for human thought. We understand some of the basic components of brain activity—nerve impulses, neurotransmitters, neuromodulators, neurohormones—and we know how they integrate to produce complex effects. 

We also know that the brain is a massively interconnected network. There are around 86 billion neurons in the average human brain, plus roughly as many glial cells—the old ten-to-one figure hasn’t held up. Some glial cells act as cleanup crews, but others participate in information processing.

Neurons integrate their inputs through summation: a cell fires when the combined excitatory and inhibitory signals arriving at it cross a threshold. So brain activity isn’t binary in the way computers are. It’s probabilistic, dynamic, and context-dependent.

Rosner: So when we talk about a “minimum unit of thought,” it’s not as clear-cut as in computing. A bit—zero or one—is the smallest unit of digital information. But in the brain, information doesn’t work like that.

It’s more like those puzzles in People magazine, where they show two nearly identical pictures and ask you to spot the eight differences. That’s closer to the idea of minimal change in consciousness: what’s the smallest alteration in your mental landscape that you would actually register as different?

But even that is messy. Thought is not discrete; every element of a thought is defined by its relationships to every other element. You can’t isolate one unit cleanly. Consciousness is a network, not a sequence of bits.

Unlike a computer, where a circuit is either in a one state or a zero state, everything in your mind exists only through its relationships with everything else in your mind. It’s much more holographic. That means it isn’t easily defined by discrete units of information.

On the other hand, there should be a quantifiable amount of information in a single thought. When your brain is fully conscious—when you’re looking around, perceiving, remembering, processing—your mind at full capacity has a measurable information bandwidth from moment to moment.

People have tried to estimate that, to calculate the amount of information in a moment of consciousness, which you could loosely call a “thought.” If you can say, “There’s this much information in that moment,” then you’ve effectively assigned a number of informational units to thought.

So, it’s theoretically possible to measure, though prone to error. Much of what we think we’re thinking in a moment is tacit understanding—unspoken, automatic comprehension. Does that mean we can act as if we’ve had a super-complex thought when, in reality, much of it is implicit?

If tacit or implicit information underlies conscious thought, does it occupy fewer informational “units” than explicit knowledge? Or is that a false distinction—that everything we know is tacit and implicit, and nothing truly explicit? I don’t know. Those are some of the problems in trying to quantify thought. Is that reasonable?

Jacobsen: I think so. I could spin that question endlessly, but let’s leave it there for today.

In Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons BY‑NC‑ND 4.0 license; © In Sight Publishing by Scott Douglas Jacobsen 2012–Present. For permissions or DMCA notices, email: scott.jacobsen2025@gmail.com.