Fumfer Physics 23: Why the Universe May Never Face Heat Death
Author(s): Scott Douglas Jacobsen
Publication (Outlet/Website): Vocal.Media
Publication Date (yyyy/mm/dd): 2025/10/15
In this dialogue, Scott Douglas Jacobsen and Rick Rosner explore how Information Cosmology (IC) diverges from the Big Bang model. IC rejects the concept of heat death, arguing that as the universe expands, it would require ever-increasing information to define matter precisely—a paradox that breaks conservation of information. Instead, IC predicts an eventual contraction after vast time scales, with cosmic structures gradually fading as information coherence weakens. The framework posits a universe that behaves like an immense computational system with finite capacity, maintaining equilibrium over immense epochs rather than expanding endlessly toward entropy.
Scott Douglas Jacobsen: What are the key concepts where IC differs from standard Big Bang cosmology frameworks—especially as the universe evolves? For instance, in standard cosmology, the universe ends in heat death.
Rick Rosner: In standard cosmology, heat death is the leading expectation if accelerated expansion persists. In IC, that’s not going to happen because it contradicts the IC idea that the scale of space and the amount of matter are proportional to the amount of information in the universe.
Heat death assumes the universe keeps expanding forever. As that happens, large-scale distances grow. However, this does not make particles like protons more precisely defined: the Planck scale is fixed by fundamental constants, and bound systems (atoms, protons, solar systems, galaxies) do not partake in the Hubble expansion. So there’s no need for “extra information” to keep a proton defined. IC diverges from standard cosmology for other reasons tied to its information-capacity assumptions. So, under IC, you’re not going to have a heat death like that, whereas under standard cosmology, it remains the likely far-future outcome if acceleration continues.
In IC, the “death” of a universe happens when things collapse out of existence—when stars and galaxies run out of energy. That’s similar to the Big Bang model’s heat death in some ways, but in the IC framework, the universe doesn’t expand forever. It eventually contracts.
In an IC universe, as time goes on, the Hubble redshift keeps increasing only imperceptibly on human timescales (the redshift drift is minimal). The universe becomes increasingly disconnected and fragmented. Parts of the cosmos that once shared a common history lose that connection entirely. That shared history is erased, and the universe, in a sense, flees from itself until nothing is meaningfully connected to anything else. Every stellar body, every particle, ends up alone—until everything eventually winks out.
But the IC universe lives much longer—maybe a quintillion times longer, a gazillion times longer—than a Big Bang universe. The Big Bang model describes a universe that is homogeneous in space: wherever you are, space looks roughly the same.
If you’re standing on a planet orbiting a star in a galaxy and you look out, the number and distances of visible stars will vary depending on whether your system is near the galactic center or toward the outskirts. But overall, if someone were placed in a galaxy similar to ours, with Earth about two-thirds of the way out from the center, they’d see roughly the same thing. Space is primarily homogeneous on large scales, with galaxies distributed fairly uniformly apart from the filaments and clusters.
Time, however, is not homogeneous. You can always tell when you are in a Big Bang universe by the size of the universe itself—it changes moment by moment as expansion continues.
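A standard-cosmology relation makes “you can tell when you are” concrete (this is the conventional picture, not anything specific to IC): the scale factor a(t) grows monotonically, and the temperature of the cosmic microwave background tracks it, so measuring that temperature effectively dates the epoch:

$$ T_{\mathrm{CMB}}(t) = \frac{T_0}{a(t)}, \qquad 1 + z = \frac{a(t_0)}{a(t_{\mathrm{emit}})}, $$

with a(t₀) set to 1 today.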
Jacobsen: So there’s an asymmetry between space and time. Space in the IC model exhibits homogeneity, while time shows heterogeneity. Why this asymmetry? Why aren’t they symmetric—or isomorphic in patterning?
Rosner: Because Einstein’s equations don’t allow a stable, matter-filled static universe without special tuning. That bothered him deeply. He wanted a cosmos where everything—stars, planets, all of it—could hang there in equilibrium.
Newton imagined an infinite, static universe, balanced because everything pulled on everything else equally. Even though every mass exerts gravity on every other, he thought the pull from distant matter in all directions would cancel out locally, keeping the universe stable.
But Einstein realized that if space itself participates in gravity—if spacetime curves and responds to mass-energy—then that balance can’t hold generically. His equations show that a universe filled with matter must be dynamical—expanding or contracting. It can’t just sit still in a stable way.
So he introduced the cosmological constant, a fudge factor to hold the universe in place—a kind of cosmic anti-gravity term. Later, when Hubble discovered the expansion of the universe, Einstein called that constant his “biggest blunder.” Ironically, modern cosmology revived it under a new name: dark energy.
So, why is the universe observed to be expanding? Because the observational evidence, beginning with Hubble’s redshifts, shows that it is, and because, for realistic contents, Einstein’s equations naturally yield dynamical solutions (expanding or contracting); ours happens to be expanding.
Under IC, the universe is locally homogeneous. In other words, any large region looks spatially uniform, much as it does in standard cosmology. But if you go far enough from the bright, active areas—what I call the neighbourhood near t₀—you’ll find something different. It’s temporally homogeneous instead. That means the universe looks roughly the same 10 billion years from now, 100 billion, a trillion years in the future—or a trillion years in the past. The specific galaxies that are lit up may differ, but the scale of space remains about the same in IC’s picture.
If the universe is an information-processing system, then like any such system, it has parameters—a size, a capacity. Our brains are a helpful analogy. They have a finite information capacity. If you were to map the information content of your brain at any given time, there’s an upper limit. When you’re asleep, the amount of processed information is lower; when you’re awake, it rises to near the ceiling. The brain can’t process more than a certain amount, no matter how busy it gets.
Similarly, the universe’s size corresponds to the amount of information it’s processing in the IC view. If the universe is a kind of cosmic computer, its scale is tied to its computational load. That doesn’t fluctuate drastically from one “universal moment” to the next.
Now, imagine the mathematical version of consciousness—the “information space” of our minds. It might expand slowly over the years as we learn and form new neural connections. Children’s brains grow and prune dendrites, refining their mental models of the world, gradually increasing their information capacity.
But moment to moment, thought to thought, that capacity hardly changes. Say you have three thoughts per second—that’s about 10,000 per hour, 120,000 per waking day. The amount of information you add between one thought and the next is negligible.
If the universe functions similarly, and if we’re living inside a “thought” that takes, say, 20 billion years to unfold, then the universe’s size stays nearly constant from one cosmic “thought” to the next in IC. If each thought lasts 20 billion years and there are 100 of them, that’s two trillion years of relative stability. If there are 100,000 such thought-cycles, you get roughly two quadrillion years where the universe remains almost the same size.
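The arithmetic behind those figures, taking the 20-billion-year “cosmic thought” purely as Rosner’s illustrative assumption:

$$ 3\ \text{thoughts/s} \times 3600\ \text{s/h} \approx 1.1\times10^{4}\ \text{per hour}, $$
$$ 2\times10^{10}\ \text{yr} \times 10^{2} = 2\times10^{12}\ \text{yr}, \qquad 2\times10^{10}\ \text{yr} \times 10^{5} = 2\times10^{15}\ \text{yr}. $$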
Jacobsen: So, will the eventual mathematical framework for this philosophy of physics be clean or messy?
Rosner: Reasonably clean. At some point, someone will formalize the principles that define what counts as information in the universe’s processing. It’ll be expressible in equations.
Jacobsen: Will those equations fully capture what’s happening?
Rosner: Some will describe things precisely, but most will be approximations—like the thermodynamic equations we already use. Thermodynamics works because it compresses vast statistical behaviour into neat formulas. When you have enough molecules interacting, individual noise becomes insignificant compared to the overall trends.
Thermodynamic equations describe much chaotic activity that gets smoothed out by sheer numbers—so many molecules interacting that the randomness averages into order.
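The statistical reasoning behind that smoothing is standard: for N roughly independent molecules, fluctuations in an extensive quantity grow only as √N, so relative fluctuations shrink as 1/√N, which at laboratory particle counts is vanishingly small:

$$ \frac{\Delta X}{X} \sim \frac{1}{\sqrt{N}}, \qquad N \sim 10^{23} \;\Rightarrow\; \frac{\Delta X}{X} \sim 3\times10^{-12}. $$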
Every physics equation is probably an approximation. Those describing systems with less chaos offer more precision than others. But there will eventually be equations and physical models that describe what information is and how it behaves. Some aspects of that might barely be describable.
Rosner: As the universe expands, the Planck scale itself does not shift, and protons do not become “fuzzier” due to expansion; bound structures are unaffected by the Hubble flow. If you talk about “informational fidelity” in IC, it would have to be defined in terms other than a changing Planck scale—for example, via an information density or capacity notion.
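One existing capacity notion of that kind, offered here only as an illustration and not as part of the IC framework, is the holographic bound, which caps the entropy (and hence the information) inside a region by the area of its boundary in Planck units:

$$ S_{\max} = \frac{k_B\, c^{3} A}{4\, G \hbar} = \frac{k_B\, A}{4\, \ell_P^{2}}. $$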
Jacobsen: So the fidelity of the universe’s informational content is not proportional to a changing Planck scale (which is fixed); the question is whether there’s some other criterion at which the informational structure needed for well-defined spacetime fails. At what point would loss of information become so severe that spacetime itself—this higher-order structure—could no longer remain well-defined?
Rosner: That’s probably a question for tomorrow; the heater just came on, and I’m losing focus. But I think I get what you’re asking—at what point does matter become so diffuse, so fuzzy, that humans couldn’t exist?
Jacobsen: Not just humans, but any organized spacetime structure. Humans would disappear long before spacetime itself collapses.
Rosner: You could imagine a universe with only a hundred particles, maybe fifty. Humans, or anything like us, require a universe with at least around 10⁶⁰ to 10⁷⁰ particles; this is speculative, not a standard threshold. Somewhere around that 10⁶⁰ range, the structure of spacetime might become too coarse to support stable, complex systems. 10⁷⁰ might be the lower limit for a universe capable of sustaining human-like intelligence. That would be a cosmos with roughly one ten-billionth the matter of our current universe, taking the commonly quoted ballpark of about 10⁸⁰ particles today. Could conscious, intelligent beings with brains as complex as ours exist in a universe that small? Maybe. Hard to say. That’s one for further thought.
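For scale, using the commonly quoted ballpark of about 10⁸⁰ particles in today’s observable universe as the reference point:

$$ \frac{10^{80}}{10^{70}} = 10^{10} \quad (\text{about one ten-billionth of the current matter content}), \qquad \frac{10^{80}}{10^{60}} = 10^{20}. $$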
