Ask A Genius 1323: Quantum Entanglement, Informational Cosmology, and the Limits of Computation

2025-06-13

Author(s): Rick Rosner and Scott Douglas Jacobsen

Publication (Outlet/Website): Ask A Genius

Publication Date (yyyy/mm/dd): 2025/03/26

Scott Douglas Jacobsen: What are the fundamental implications of quantum computing and information in informational cosmology?

Rick Rosner: I do not know much about the mathematics of entanglement-based computation. It involves working with highly entangled quantum states, which, in a quantum computer, allow certain classes of problems to be solved more efficiently than on classical machines. The universe itself can be described as a highly entangled quantum information system. However, this does not mean that entanglement is easily exploitable. The entanglement naturally occurring in the universe is not readily accessible for computational use. If we want to harness entanglement, we must engineer systems that create and maintain it for our specific computational purposes. Stephen Hawking once proposed that the universe’s structure might be described using knot theory. Knot theory, a branch of topology, deals with the properties of closed, non-self-intersecting curves in three-dimensional space—essentially, mathematical knots. Have you ever looked into knot theory?
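
To make "engineering systems that create and maintain entanglement" a bit more concrete, here is a minimal state-vector sketch in plain NumPy (an editorial illustration, not something from the conversation and not a quantum-computing framework): two qubits start unentangled, and two standard gates, a Hadamard followed by a CNOT, leave them in a maximally entangled Bell state.

```python
import numpy as np

# Single-qubit basis state and gates in the standard linear-algebra representation.
zero = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)   # flips qubit 1 when qubit 0 is |1>

state = np.kron(zero, zero)          # start in |00>
state = np.kron(H, I2) @ state       # superpose the first qubit
state = CNOT @ state                 # entangle the pair

print(np.round(state.real, 3))       # [0.707 0.    0.    0.707] -> (|00> + |11>)/sqrt(2)

# Entanglement check: tracing out the second qubit leaves the first maximally mixed.
rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
rho_q0 = np.trace(rho, axis1=1, axis2=3)   # partial trace over qubit 1
print(np.round(rho_q0.real, 3))            # [[0.5 0. ] [0.  0.5]]
```

The reduced density matrix coming out proportional to the identity is the standard signature that the pair is entangled: neither qubit has a definite state of its own.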

Jacobsen: Is everything else not-knot theory?

Rosner: Okay, sure. This involves strings, but it is not string theory. Knot theory considers one-dimensional, flexible objects—like strings—that cannot pass through each other. In three-dimensional space, this leads to a classification of possible knots. Hawking once speculated that, at least metaphorically, one could consider the universe’s history as a set of knotted world lines—the paths that particles take through spacetime. When particles interact, their world lines can twist around each other in complex ways, forming a kind of stitching. In this view, the universe is continuously woven together as particles trace their trajectories through four-dimensional spacetime, with time as the dimension along which they all progress. In that sense, the universe is ‘knitted’ over time through entanglements—literally, in this speculative framework. As for the limitations of quantum computing in the context of informational cosmology, I can say that you cannot defeat entropy in a closed system. In open systems—where information or energy can be exchanged—you can locally reverse entropy, but in a closed system, entropy always increases overall.

Entropy is the breakdown of order and specificity. A thermodynamically ordered system is one where, for example, you have heat on one side and coolness on the other—a state of order. It is sorted. When you allow heat to flow or molecules to mix, the system becomes disordered—the sorting is undone through random motion. You cannot overcome that creeping disorder because the energy required to restore order produces more heat, and thus more disorder, than the amount of order you manage to recover. Similarly, at least in the early stages of quantum computing, it takes significant effort to intentionally entangle particles for computational purposes, as opposed to the natural entanglement in the universe. I assume there are constraints—such as conservation laws or energy costs—so that the work done to create entanglement must be compensated somehow, probably by the one doing the work. I am not entirely certain. That might be a simplistic view. Perhaps I did not need to bring entropy into this.
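
As a rough, purely illustrative picture of "sorting undone by random motion," the short simulation below starts every particle on one side of a box, lets particles hop back and forth at random, and tracks the mixing entropy of the left/right split; all numbers are invented for illustration.

```python
import random
import math

random.seed(0)

N = 1000        # particles, all starting in the left half (the "sorted" state)
left = N
STEPS = 5000

def mixing_entropy(n_left, n_total):
    """Shannon entropy (in bits) of the left/right occupancy fractions."""
    p = n_left / n_total
    q = 1 - p
    return -sum(x * math.log2(x) for x in (p, q) if x > 0)

for step in range(1, STEPS + 1):
    # Pick a random particle and let it hop to the other side.
    if random.random() < left / N:
        left -= 1
    else:
        left += 1
    if step % 1000 == 0:
        print(f"step {step:5d}  left fraction {left/N:.3f}  "
              f"entropy {mixing_entropy(left, N):.3f} bits")

# The entropy drifts from 0 bits (fully sorted) toward ~1 bit (fully mixed),
# and random motion essentially never re-sorts the box on its own.
```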

The issue may be that, with our current level of technology, it is difficult to entangle particles. And the entanglement is fragile. First, you must prepare your qubits. I watched a short segment of a presentation recently—it appears that it is now possible to build circuits that form relatively robust qubits. You can maintain their entanglement long enough to complete your computation. So, honestly, much of what I just said might be incorrect. It was probably a bad analogy involving entropy. I could have just said that, given the current state of quantum technology, constructing stable quantum bits is complex. So, I just spent a lot of time saying something quite misguided.

Jacobsen: What about quantum error correction?

Rosner: Essentially, you work with a network of entangled qubits. Now, I might be off on the details, but the idea is that you have multiple qubits in superposed, indeterminate states relative to each other. When you input data, these superpositions allow the system to represent multiple possibilities simultaneously. If you can preserve those superpositions throughout the calculation, the system will behave as if it is performing several computations in parallel. Then, when you input a value, it is as if you are applying multiple operations to that input simultaneously. Your output reflects a combined result from those multiple calculations. This can be extremely efficient for certain tasks, like factoring large numbers—one of the key examples of quantum computing. You could, for example, search for the prime factors of a very large number by effectively testing many possibilities at once. I am speculating here, but it is as though you receive a confirmation signal—like a “ding”—if one of the operations returns a valid factor. That allows you to complete factorization much faster than with classical methods. This is also where interpretations like the many-worlds theory come into play. From that perspective, the computation happens across multiple parallel worlds, each representing a different configuration of your system. So, in theory, you are conducting several versions of computation simultaneously, and at the end, you retrieve a superimposed result that reflects the contribution of each. This is useful for certain problems, like finding optimal paths, though, to be honest, I am probably oversimplifying or misrepresenting the science here. You are asking me about something outside my area of expertise. We should move on to a topic I can address more confidently.
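
The factoring example can be made concrete. In Shor's algorithm, the quantum speedup comes entirely from finding the period r of a^x mod N; the classical sketch below (toy numbers only, with the period found by brute force rather than by a quantum Fourier transform) shows how a known period then yields the factors, the "ding" described above.

```python
import math
import random

def find_period(a, N):
    """Brute-force the period r with a^r = 1 (mod N).
    This is the step a quantum computer would do exponentially faster."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, tries=20, seed=0):
    rng = random.Random(seed)
    for _ in range(tries):
        a = rng.randrange(2, N - 1)
        g = math.gcd(a, N)
        if g > 1:
            return g, N // g                     # lucky guess already shares a factor
        r = find_period(a, N)
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            p = math.gcd(pow(a, r // 2, N) - 1, N)
            if 1 < p < N:
                return p, N // p                 # the "ding": a nontrivial factor
    return None

print(shor_classical(15))   # e.g. (3, 5)
print(shor_classical(21))   # e.g. (3, 7)
```

On real hardware, the period-finding step is where the quantum Fourier transform earns its keep; everything else in the routine is ordinary classical arithmetic.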

Jacobsen: How could you take a non-computing perspective on quantum mechanics that would still be relevant to informational cosmology?

Rosner: I mean, we have discussed this many times. A non-computational perspective suggests that the universe defines itself through the history of interactions among its particles. That is the quantum perspective: quantum mechanics is the mathematics of incomplete information. The universe can only generate a finite amount of information—each quantum event contributes a discrete bit of information. Over time, quantum events cumulatively generate more information, but still only a finite amount per unit of time. Even over long durations, the universe remains incompletely specified because it would take infinite information to define it with infinite precision. Therefore, by definition, the universe operates on incomplete information. But we have talked about this at least a dozen times.

Jacobsen: Does the kind of entropy at work in quantum mechanics differ from sheer randomness?

Rosner: Well, I do not know specifically about entropy in quantum mechanics, but in informational cosmology, what appear to us as random quantum events may reflect something deeper. If the information within our universe is modelling something external to it, then those quantum events may represent the outcomes of processes occurring in the system being modelled. What do you call the information received through the senses? Qualia, right?

If the universe is mind-like, some quantum events producing new information may be qualia. That would mean they are indicators of events happening in another universe—a source beyond our own. This does not violate quantum mechanics. The events appear random because their causes lie outside our observable universe. Aside from entanglement, it has been proven that you cannot have hidden variables that fully determine quantum outcomes. Entanglement is a separate matter. But outside of that, you cannot consult the rest of the universe to predict the value of a quantum event. That information does not exist within our universe. You have these open quantum events, and unlike in a deterministic or clockwork universe, you cannot infer their outcomes based on surrounding information. The information needed to determine them might exist outside the universe if the universe is itself modelling something beyond it. And I have forgotten what your original question was.

Entropy behaves differently at different scales. On a local scale—within a closed system that cannot radiate away waste heat—entropy appears always to increase. That gives the impression of a one-way path from order to thermodynamic disorder. However, when you consider the universe as a whole, that linear progression does not necessarily hold. Entropy can behave differently at the universal level than in subsystems.
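
A side note on the hidden-variables remark above: it presumably refers to Bell-type theorems, which rule out local hidden variables specifically. As a small numerical illustration (added editorially, not part of the conversation), the sketch below evaluates the CHSH combination of correlations for a maximally entangled pair; quantum mechanics reaches about 2.83, while any local hidden-variable model is capped at 2.

```python
import math

# Singlet-state correlator for spin measurements along angles a and b:
# E(a, b) = -cos(a - b).
def E(a, b):
    return -math.cos(a - b)

# Angle choices (radians) that maximize the quantum CHSH value.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(round(S, 3))   # 2.828 = 2*sqrt(2); any local hidden-variable model obeys S <= 2
```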

Jacobsen: Would there be any theoretical form of computation beyond quantum supremacy? I mean, that is what the debate has been about since quantum mechanics was first formulated—whether there exists another framework, technique, or domain in the universe that would allow us to characterize the universe more effectively than quantum theory currently does.

It may not be about finding a completely new type of computation but about reframing how we conceptualize it. For example, think of the way we distinguish between GPUs and CPUs: nature, having had billions of years to experiment, seems to operate simultaneously with multiple layers of processing—short-term, long-term, parallel, and single-path calculation types. So perhaps the next step is contextual computing—where the core computational advantage lies in adaptability to context, not just brute-force superpositioning or exhaustive quantum state searches.

Rosner: I do not think you can get around quantum mechanics in general, but what you suggest sounds like a typology-based approach. If you could develop a classification of situations where outcomes could be probabilistically inferred using a Bayesian framework, then yes, you could say, “Oh, this situation looks like that situation,” and derive predictive value. That seems plausible. For example, if I see a Tesla truck, I immediately become more cautious because I have yet to see a Tesla truck driven by someone who is not at least slightly obnoxious. So, while vehicles may be generally unpredictable, certain vehicles—or the people driving them—introduce context-specific predictability. A Tesla truck might cut across lanes abruptly or roll through a stop sign. Or take a BMW driven by a 22-year-old guy: you brace for someone speeding through a parking garage at 35 miles per hour. These are behavioural patterns that can be contextualized.
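
The driver examples are, in effect, Bayes-rule updates. The sketch below runs one such update with entirely made-up numbers: observing the contextual cue (a Tesla truck) raises the probability assigned to aggressive driving well above its baseline.

```python
# Toy Bayesian update: P(aggressive driver | Tesla truck spotted).
# All probabilities below are invented for illustration, not data.

p_aggressive = 0.10                 # prior: fraction of drivers who drive aggressively
p_truck_given_aggressive = 0.08     # P(drives a Tesla truck | aggressive)
p_truck_given_calm = 0.01           # P(drives a Tesla truck | not aggressive)

p_truck = (p_truck_given_aggressive * p_aggressive
           + p_truck_given_calm * (1 - p_aggressive))

posterior = p_truck_given_aggressive * p_aggressive / p_truck
print(f"P(aggressive | Tesla truck) = {posterior:.2f}")   # ~0.47, up from the 0.10 prior
```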

So, can constraints and parameters be applied to specific scenarios such that outcome prediction becomes more accurate than what quantum mechanical uncertainty alone would suggest? I do not know. Would a fully quantum mechanical model include the context, like the behavioural pattern of a 22-year-old driving a BMW? I have no idea. A complete quantum mechanical description could incorporate such information, though I doubt it in practice.

There are certain strategies to increase your predictive success. For instance, you can configure quantum systems in ways biased toward yielding classical outcomes. That is, you intentionally reduce the uncertainty by building in constraints—effectively imposing determinism. But even then, you are still operating under the laws of quantum mechanics. Whether or not quantum events are inherently random, if you hang a hammer from a string and cut the string, the hammer will fall. That outcome is determined regardless of the underlying quantum indeterminacy.

You can engineer both macro-level events and constrained micro-level events to make outcomes predictable. Yet, the foundational substrate remains quantum. And that is part of why we exist at the scale we do. We are enormous relative to atoms because it takes this much matter and evolutionary complexity to function in a way that yields reliable action. The macroscopic scale acts as a kind of quantum error correction.

Macroscopic objects are more stable because they are large. That, loosely speaking, is what quantum error correction does: the robustness of macro systems shields them from the randomness of quantum events. It is like crossing a street—you do not rely on just one photon to tell you whether the light is red. You wait until enough red photons have hit your retina, and then you are confident the light is red. It is just the macro scale of things. That is how we survive and function.
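
The red-light example can be put in numbers. Assuming, purely for illustration, that each individual photon-level reading errs independently with some fixed probability, aggregating many readings and taking a majority drives the chance of a wrong conclusion down very quickly, which is the spirit of the redundancy analogy (and, loosely, of repetition codes in error correction).

```python
from math import comb

def majority_error(p_single, n):
    """Probability that a majority vote of n noisy readings is wrong,
    when each reading independently errs with probability p_single."""
    return sum(comb(n, k) * p_single**k * (1 - p_single)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.2   # a single photon-level reading is wrong 20% of the time (illustrative)
for n in (1, 11, 101):
    print(f"n = {n:4d}   P(wrong majority) = {majority_error(p, n):.2e}")
# The failure probability collapses as n grows: redundancy buys reliability.
```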

Let’s call it a night.

Jacobsen: No, it’s a day.

Rosner: All right. Thank you. Talk to you tomorrow.

Jacobsen: Thank you. Talk to you tomorrow, too.
