Ask A Genius 1262: Encoding Information and AI
Author(s): Rick Rosner and Scott Douglas Jacobsen
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2025/02/14
Scott Douglas Jacobsen: How many parameters do you think the human brain’s language model works off of? In artificial neural networks, a parameter is typically a single numeric weight. The human brain has roughly 80 to 100 billion neurons, each with thousands of synaptic connections.
Rick Rosner: Suppose you treat each connection as a parameter. In that case, you’re looking at on the order of 10¹⁴ to 10¹⁵ parameters—that’s hundreds of trillions of synapses in total. However, each synapse isn’t just a static value; its strength can vary over time through mechanisms like neurotransmitter release, receptor subtypes, and short- and long-term plasticity, so equating one synapse to one parameter is a gross oversimplification.
Plus, the brain isn’t one giant language model. Even if you assume that 10% of the brain’s capacity is dedicated to language, you’d still land somewhere in the 10¹³ to 10¹⁴ range of potential synaptic parameters. It’s important to note that biological and digital neural networks process information very differently. No one knows the precise parameter count for the brain’s language function. While the entire brain might operate on the order of 10¹⁵ parameters, that isn’t the same as the numeric weights in an LLM, so it’s not something we can pin down too precisely. That was the normal baseline answer. Will you ask a follow-up question?
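The back-of-envelope arithmetic behind these figures can be made explicit. This is a minimal sketch, assuming the commonly cited ~86 billion neurons, an assumed range of 1,000–10,000 synapses per neuron, and the conversation’s assumed 10% language share—rough inputs from the discussion, not measurements.

```python
# Rough estimate of synaptic "parameters" discussed above. All inputs
# are assumptions from the conversation, not measured values.
neurons = 86e9                      # commonly cited human neuron count
synapses_per_neuron = (1e3, 1e4)    # assumed low/high range

total_synapses = tuple(neurons * s for s in synapses_per_neuron)
language_share = tuple(0.10 * t for t in total_synapses)  # assumed 10%

print(f"Whole brain: {total_synapses[0]:.1e} to {total_synapses[1]:.1e} synapses")
print(f"Language (~10%): {language_share[0]:.1e} to {language_share[1]:.1e}")
```

The result lands at roughly 10¹⁴ to 10¹⁵ synapses for the whole brain and 10¹³ to 10¹⁴ for the assumed language share, matching the ranges quoted above.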
Don’t you think that the estimate of 10¹⁴ to 10¹⁵ parameters for the human brain is perhaps an overestimate—that it might take more than one synaptic connection to constitute what would function as a parameter?
Jacobsen: Oh, wow. One synapse does not equal one parameter. Synapses can change their weights in multiple, often nonlinear, ways, and most synapses are not primarily involved in language processing. The brain is not a monolithic neural net, and parameter counts do not tell the whole story. So is 10¹⁴ to 10¹⁵ an overestimate? Yes and no. There is a much longer explanation behind that, but it would take a thousand words to cover all the nuances.
Rosner: One more question. Do you think the brain uses combinatorial coding, where large combinations of neurons firing together embody conceptual units—similar to how words are formed from combinations of letters to represent ideas? In other words, is it the case that concepts like “cat,” “dog,” or even names like “Jennifer Aniston” do not require a single neuron but rather distributed patterns of activation?
Jacobsen: There is evidence in neuroscience for what are known as concept cells, and abstract representations appear to be encoded in regions such as the medial temporal lobe, prefrontal cortex, and parietal cortex. The set of concepts we learn daily—or even moment by moment—changes continuously, and how these conceptual units map onto synaptic parameters is highly complex. Pinning down a simple conversion rate between synapses and conceptual representations is tricky. Conceptual and brain complexity are intertwined, and advanced AI systems, like the brain, are continuous, emergent, and adaptive. One way or another, both the brain and AI encode information in the most momentarily efficient way possible.
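The advantage of combinatorial coding over one-neuron-per-concept schemes can be illustrated with a toy calculation. This sketch counts how many distinct concepts a pool of cells can encode if each concept is a pattern of k co-active cells out of n; the numbers are illustrative, not biological estimates.

```python
import math

def pattern_capacity(n, k):
    """Number of distinct k-of-n activation patterns (binomial coefficient)."""
    return math.comb(n, k)

# "Grandmother cell" scheme: one dedicated cell per concept,
# so 100 cells encode only 100 concepts.
print(pattern_capacity(100, 1))   # 100

# Distributed scheme: 5 co-active cells out of the same 100
# already yield over 75 million distinct codes.
print(pattern_capacity(100, 5))   # 75287520
```

This is why distributed patterns of activation can represent far more concepts than the neuron count alone would suggest, even before synaptic weights enter the picture.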
Rosner: Is that essentially what you’re saying? Are the brain and AI opportunistic in encoding and optimizing to store information as efficiently as possible? I’d say so. We often imagine concepts as standalone entities, like vocabulary cards—“orange,” “la pelota”—but there is an efficiency mechanism at work. Concepts that are rarely used tend to be stored together with their contexts.
In contrast, concepts used frequently are stripped of extraneous context. This suggests that our cognitive system functions as an association engine. That reminds me—we should discuss whether this means that the brain essentially functions as an association engine in the same way that word embeddings and hidden states in AI produce emergent representations.
Jacobsen: That’s an interesting parallel. Both brains and advanced AI systems adopt encoding strategies that are opportunistic and adaptive, meaning they store and process information in whatever manner is most efficient at the time. This process is not fixed but continually adjusts based on usage and context. The brain’s storage of concepts and the emergent representations in AI both reflect a dynamic balance between localist and distributed coding. There is no simple conversion rate between synaptic connections and AI parameters. Both systems have evolved or been designed to optimize in the face of complex demands.
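The “association engine” idea maps cleanly onto how word embeddings behave: a concept becomes a point in a vector space, and associations fall out of geometric proximity. This is a minimal sketch with made-up 3-dimensional vectors; real embedding models use hundreds of dimensions learned from data.

```python
import math

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Hypothetical toy embeddings for illustration only.
cat    = [0.9, 0.8, 0.1]
dog    = [0.8, 0.9, 0.2]
orange = [0.1, 0.2, 0.9]

# Associated concepts sit closer together than unrelated ones.
print(cosine(cat, dog) > cosine(cat, orange))  # True
```

No single vector component “means” cat or dog; the association emerges from the whole pattern, which is the loose parallel to distributed coding in the brain drawn above.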
That reminds me—we should also talk about category theory in mathematics, which deals with abstract structures and relationships. It sounds like “reproductive math” was mentioned somewhere, and although it’s a bit off-topic, it seems relevant to how abstract representations are managed in AI and the brain. We’ve never really discussed category theory in depth, but perhaps we could plan to cover that tomorrow. I’ve got a very packed schedule today, so we’ll need to keep this brief.
Rosner: Agreed. Sometimes, when I get overwhelmed by the complexities of our discussions—whether about AI, the brain, or even the fundamentals of mathematics—I find that it all eventually comes down to the efficiency of encoding information. Just as category theory provides an abstract framework for understanding mathematical structures, the brain and AI systems seem to work by optimizing how they store and retrieve information. We should revisit this topic when time permits.
Last updated May 3, 2025. These terms govern all In Sight Publishing content—past, present, and future—and supersede any prior notices. In Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons BY‑NC‑ND 4.0; © In Sight Publishing by Scott Douglas Jacobsen 2012–Present. All trademarks, performances, databases & branding are owned by their rights holders; no use without permission. Unauthorized copying, modification, framing or public communication is prohibited. External links are not endorsed. Cookies & tracking require consent, and data processing complies with PIPEDA & GDPR; no data from children < 13 (COPPA). Content meets WCAG 2.1 AA under the Accessible Canada Act & is preserved in open archival formats with backups. Excerpts & links require full credit & hyperlink; limited quoting under fair-dealing & fair-use. All content is informational; no liability for errors or omissions: Feedback welcome, and verified errors corrected promptly. For permissions or DMCA notices, email: scott.jacobsen2025@gmail.com. Site use is governed by BC laws; content is “as‑is,” liability limited, users indemnify us; moral, performers’ & database sui generis rights reserved.
