Send Messages, Not Metadata: Kee Jefferys on Session and the Future of Private Messaging
Author(s): Scott Douglas Jacobsen
Publication (Outlet/Website): The Good Men Project
Publication Date (yyyy/mm/dd): 2025/12/26
Kee Jefferys is the technical co-founder of Session, a decentralized, end-to-end encrypted messaging app that minimizes metadata and removes phone-number identities. Working with the Switzerland-based Session Technology Foundation, he leads protocol and network design, focusing on onion-routed communication, community-run nodes, and crypto-economic incentives for privacy infrastructure. Trained as a software engineer and blockchain specialist, Jefferys has become a prominent voice in debates over chat control, secure messaging, and state surveillance. His work centres on building practical tools that journalists, activists, and ordinary users around the world can rely on when privacy is not a luxury but a condition of survival.
In this interview, Scott Douglas Jacobsen speaks with Kee Jefferys about why mainstream encrypted messaging is structurally insecure. Jefferys argues that legal pressure on centralized platforms, partial end-to-end encryption, and heavy metadata logging allow states to map social graphs even without reading content. He explains how phone numbers, SIM registration, IP addresses, and group metadata reliably link accounts to real identities. Session is presented as an alternative: a decentralized, community-run, onion-routed network without phone numbers that minimizes metadata and resists Sybil attacks. Jefferys also outlines future upgrades, including post-quantum cryptography and perfect forward secrecy, grounded in a robust, user-centred privacy-as-a-right philosophy.
Scott Douglas Jacobsen: What was the key moment that convinced you that mainstream encrypted messaging was fundamentally flawed?
Kee Jefferys: There was not a single key moment; instead, there was a series of developments that, taken together, undermined my confidence in mainstream encrypted messaging applications. One prominent example was the arrest of Pavel Durov, the founder and CEO of Telegram, by French authorities in August 2024 at Le Bourget Airport, as part of a criminal investigation into how Telegram was being used for activities such as drug trafficking and other organized crime, allegedly due to insufficient moderation of the platform. Telegram also does not offer end-to-end encryption by default for “standard” chats; only its “secret chats” feature is end-to-end encrypted per conversation. That combination—legal pressure on a key executive and the fact that most user conversations are technically accessible to the service itself—raised concerns about the potential for authorities to gain access to user data.
In addition, I have been troubled by the growing reliance on metadata rather than message content. When messages are end-to-end encrypted and cannot be read, authorities shift their focus to metadata: who talked to whom, when, from which IP addresses, using which phone numbers or devices. Telegram, for example, acknowledges that it may collect metadata such as IP addresses, device information, and username history for up to 12 months. Former NSA and CIA director Michael Hayden once stated, “We kill people based on metadata.” The remark is stark, but it correctly captures the analytical power metadata provides to state-level actors.
Even without message content, if authorities know that one person messaged another at a specific time, along with the IP addresses, phone numbers, and device identifiers involved, they can construct a detailed behavioural picture. From that, they can infer what users are likely discussing and, more importantly, map the social graph—showing who is connected to whom and how—using metadata alone, even when the underlying communication is encrypted.
Jacobsen: I want to examine the concept of metadata further. Many people may know Pavel Durov only indirectly through media coverage of Telegram’s vulnerabilities, especially in wartime contexts, or through unrelated personal news. What exactly is contained directly in metadata, and what additional information can skilled analysts infer from it?
Jefferys: It depends on the application, but most mainstream messaging platforms—WhatsApp, Telegram, and Signal—have historically required a phone number to create an account. That number then becomes the primary identifier. In many countries, mandatory SIM-registration laws require a person to present government-issued identification, such as a national ID card, driver’s license, or passport, to obtain a phone number. As a result, your messaging identity can be indirectly but reliably linked to your legal identity, even if the application never asks for your name.
So, the broader shift I see is twofold: first, recognizing that some “encrypted” platforms are only partially end-to-end encrypted and retain significant control over user data; and second, observing how law enforcement and intelligence agencies increasingly rely on metadata rather than message content. Together, these trends show that focusing solely on whether a message is encrypted is insufficient. Metadata and phone-number-based identity have become the central battlegrounds for privacy.
If an intelligence agency sees one phone number communicating with another on a provider’s servers—and that number is associated with your messaging ID and the messages you send over the service—it can backtrack and determine, “We know this phone number belongs to this real person, because when they registered, they provided verified information or paid for the number with a credit card tied to their identity.” That is the crucial point: if the phone number is linked to everything else sent through the messaging service, it becomes very easy to de-anonymize a user.
If you are not using phone numbers, as with some other messaging applications that rely on an anonymized ID or require an email address to sign up, then those services typically store the user’s IP address. IP addresses are also often tied to a real-world identity because, when you obtain service from an internet provider, you usually pay with a credit card and provide personal information. So you may have an IP address, a phone number, and metadata such as the time a message is sent and the time it is received.
If it is a group message, the service may retain information about the identities of group members—for example, each participant’s phone number or IP address. You can see this type of data across the entire history of an account if you operate the server relaying these messages.
From that, you can build an extremely accurate picture: which user messaged which user at what time, which IP addresses were involved, which phone numbers were associated with the conversation, and, in a group setting, who the other participants were. You can observe that specific individuals message each other every 24 hours, or during a specific three-hour window, during which they exchange one hundred messages. That window might correlate with another real-world event.
You gain a remarkably detailed understanding of what a user is doing on these networks if you act as the central server relaying all messages, even if the contents are fully encrypted.
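The social-graph analysis Jefferys describes can be illustrated with a short sketch. The records below are entirely hypothetical: invented phone numbers and timestamps standing in for the kind of metadata a central relay server would retain. Note that message content never appears anywhere in the analysis.

```python
from collections import defaultdict

# Hypothetical metadata records: (sender, recipient, unix_timestamp).
# All identifiers and times are illustrative, not from any real service.
records = [
    ("+15550001", "+15550002", 1700000000),
    ("+15550001", "+15550002", 1700000060),
    ("+15550002", "+15550003", 1700003600),
    ("+15550001", "+15550002", 1700086400),
]

# Build an undirected social graph: who has ever talked to whom.
graph = defaultdict(set)
for sender, recipient, _ in records:
    graph[sender].add(recipient)
    graph[recipient].add(sender)

# Count messages per pair to surface strong ties; content is never needed.
pair_counts = defaultdict(int)
for sender, recipient, _ in records:
    pair_counts[frozenset((sender, recipient))] += 1

strongest = max(pair_counts, key=pair_counts.get)
print(sorted(strongest), pair_counts[strongest])  # strongest tie and its weight
```

Even this toy version recovers who is connected to whom and which relationships are most active; with timestamps, IP addresses, and device identifiers added, the behavioural picture Jefferys describes follows directly.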
Jacobsen: It seems as though much of the intelligence work focuses on multivariable inputs—timing, individuals involved, correlations with real-world events—but nothing definitively content-based. Is that more or less correct?
Jefferys: Yes. The evidence can become so overwhelming that you arrive at a nearly definitive picture of who participated in a given conversation. I have never worked for an intelligence agency, nor do I intend to, but based on public information and leaks from those agencies, they aim for a high degree of certainty without requiring absolute certainty to act.
Their required level of confidence is often shaped by political considerations, operational priorities, or assessments of the risks to the individuals involved if they are wrong. They may not need absolute certainty; they may only need a high probability that the individuals they have identified are the correct ones.
Jacobsen: What is the Session protocol, and what is a decentralized node network?
Jefferys: Returning to what I described earlier, many existing messaging networks rely on centralized servers. Even when messages are end-to-end encrypted, those messages are relayed through a central server operated by the provider. With Session, the architecture is fundamentally different. As a developer who contributes to Session, I do not have access to any centralized server because Session does not use one to relay messages.
Instead, Session uses a decentralized network of roughly 1,600 nodes operated by community members worldwide. When you send a message through Session, it is routed through these nodes before reaching its destination, where it is stored temporarily. This design mitigates the security issue inherent in centralized systems, where developers or service operators can access metadata associated with user conversations.
In Session, the metadata is effectively distributed across multiple nodes run by unrelated parties. As a result, developers cannot simply access a central server to see, for example, which user sent which message at what time. That is the core of the decentralized model.
It provides additional protections as well. Session does not require a phone number to sign up, eliminating the privacy risks associated with phone-number-based identity verification. Session also uses onion routing, a technique best known from Tor. While commercial VPNs are not identical to onion routing, some of them approximate aspects of it to obscure user IP addresses. In Session, when you send a message, your IP address is not exposed to the node storing it. This removes a substantial amount of metadata that would otherwise be tied to your identity.
So the key components are the decentralized network, the absence of phone-number registration, and the use of onion routing to mitigate metadata leakage—features specifically designed to address the vulnerabilities found in mainstream, centralized messaging applications.
Jacobsen: So something like NordVPN—VPN plus onion?
Jefferys: Yes, conceptually in that direction.
Jacobsen: When you combine those—double VPN or VPN plus onion—how much stronger is the actual security? Is it a multiplier, or is it additive?
Jefferys: If you use a VPN with onion routing, what you are essentially doing is connecting to a VPN server before you connect to the servers that handle onion routing. That VPN server is typically operated by a commercial provider such as NordVPN or Surfshark. You connect to their server, and then their server connects you to the onion-routing network. From there, you usually have three additional hops before your traffic reaches its destination.
What this achieves is primarily the concealment of your IP address from the first hop in the onion-routing chain. Usually, when you use onion routing, your IP address is visible to the first relay node. That can have metadata implications. The relay node does not know the content of your traffic, because it is already encrypted at that stage, but some users worry that their internet service provider (ISP) might detect that they are using Tor simply because they connect directly to that first node.
Adding a VPN changes this. If your ISP is monitoring your activity, it only sees that you are connecting to a VPN, not to Tor. The VPN then sees that you are connecting to Tor, but your ISP does not. The extra hop adds obfuscation, specifically by indicating that you are using Tor or a Tor-like onion network such as Session. It does not dramatically increase privacy in a multiplicative sense; it mainly mitigates what some users perceive as the vulnerability of the first hop in onion-routing systems.
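The who-sees-what property of onion routing can be sketched in a few lines. This is purely pedagogical: real onion routing encrypts each layer with a per-hop key, whereas here the layers are plain nested tuples, and the relay names are invented. The point the sketch makes is the one above: each relay learns only the next hop, never the origin or the final plaintext.

```python
# Toy onion routing: each relay peels exactly one layer and learns only
# the identity of the next hop. A pedagogical sketch, not real crypto.

def wrap(message, relays):
    """Wrap a message in one layer per relay, innermost layer last."""
    packet = ("DELIVER", message)
    for relay in reversed(relays):
        packet = (relay, packet)
    return packet

def route(packet, log):
    """Simulate relays peeling layers; log records what each hop sees."""
    while packet[0] != "DELIVER":
        relay, inner = packet
        log.append((relay, inner[0]))  # a relay sees only the next hop
        packet = inner
    return packet[1]

log = []
plaintext = route(wrap("hello", ["relayA", "relayB", "relayC"]), log)
print(plaintext)  # the message arrives intact at the destination
print(log)        # each relay saw only one step of the path
```

Prepending a VPN in this model simply adds one more entry at the front of the path: the ISP then sees only the VPN, and only the VPN sees the first onion relay.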
Jacobsen: The Session network relies on community-run nodes and crypto-economic incentives. Is the model fragile if the token economy collapses?
Jefferys: Actually, the effect is the opposite of fragility. Consider Tor as a comparison. Tor is entirely run by volunteer operators who receive no financial incentive for maintaining nodes. As a result, anyone can deploy thousands of Tor nodes and potentially participate in enough hops in a user’s path to de-anonymize them. There have been documented cases of malicious actors operating large numbers of Tor nodes. The motives vary, but some operators run exit nodes to sniff unencrypted traffic or manipulate traffic through malicious redirects. Tor has no robust built-in mechanism to prevent malicious parties from flooding the network over time.
Session handles this differently. To operate a Session node, you must stake cryptocurrency. That requirement creates a financial barrier, making it significantly more difficult for a malicious actor to deploy thousands of nodes. The cost increases as one attempts to control more of the network. In practice, this provides a form of Sybil resistance—protection against an attacker who attempts to overwhelm the network by creating large numbers of nodes.
So, rather than introducing fragility, the crypto-economic model strengthens the network’s resilience by ensuring that operating nodes have a clear, nontrivial cost. It discourages large-scale malicious participation and aligns incentives toward stability and genuine contribution. Nodes that are actually serving traffic, responding within the correct timeframes, and running the required services remain active. I am not suggesting that any of this is a perfect solution. If the cryptocurrency used for staking fluctuates in price, that affects the rewards nodes receive when converted into U.S. dollars. But in those situations, some nodes drop off the network, and the reward for each remaining node increases proportionally.
There is always an incentive to stay on the network if other nodes leave, because your share of the rewards increases. The way Session is designed, the cryptocurrency-based model helps mitigate the Sybil-attack problem—where thousands of nodes join the network without operating honestly. The staking requirement significantly reduces the feasibility of such attacks. It also provides incentives to maintain high service quality and enables the enforcement of specific behaviours across the node network.
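The economics of this Sybil resistance can be made concrete with back-of-the-envelope arithmetic. The sketch below assumes roughly 1,600 honest nodes (the figure from earlier in the interview), a purely hypothetical stake requirement per node, and 3-hop paths chosen uniformly at random; an attacker compromises a path only by controlling every hop.

```python
# Illustrative Sybil math. The stake figure is a made-up placeholder,
# not Session's actual staking requirement.

honest_nodes = 1600
stake_per_node = 20_000  # hypothetical stake, in tokens
hops = 3

def attacker_outlook(attacker_nodes):
    """Network share, full-path compromise probability, and capital cost."""
    total = honest_nodes + attacker_nodes
    fraction = attacker_nodes / total
    p_full_path = fraction ** hops   # attacker must control all hops
    cost = attacker_nodes * stake_per_node
    return fraction, p_full_path, cost

for n in (100, 1600, 14400):
    f, p, cost = attacker_outlook(n)
    print(f"{n:>6} attacker nodes: {f:.0%} of network, "
          f"P(all {hops} hops) = {p:.4f}, stake locked = {cost:,} tokens")
```

The cubic falloff is the key design point: even matching the honest network one-for-one (50% of nodes) yields only a 12.5% chance of controlling any given path, and every additional node compounds the attacker's locked capital.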
Jacobsen: Looking ahead to the next iteration—say, 2026—how would you improve Session’s overall functionality or network? It sounds like the system has many moving parts, and I imagine there are several areas for enhancement.
Jefferys: Looking ahead to 2026, one of the significant focus areas from a network perspective is upgrading the onion-routing protocol. At present, we use a completely stateless protocol that imposes a limit on the amount of data that can be transmitted in a single request. For Session users, that means you can only send files up to 10 megabytes. Other messaging applications allow file transfers of 100 megabytes or far more.
On the network side, we are developing a complete reconstruction of the onion-routing protocol using a framework called Session Router, which we have been working on for several years. That protocol is significantly faster, more efficient, and better at utilizing network bandwidth. It can also support multiple data protocols that were previously unsupported.
This upgrade will remove many of the existing limitations on Session, enabling larger file transfers and generally improving message-sending and message-receiving speeds. That is one of the significant advancements we are focused on for the network’s next phase.
Jacobsen: Anonymity can make some people safer. In my industry, I have a journalist in Afghanistan, and we keep them anonymous. That is compelling—it keeps them alive, out of jail, or out of torture. Are there any contexts in which extremist groups could misuse this technology as bad actors?
Jefferys: When we think about who uses Session and how different actors around the world use it, the most critical priority for us is designing Session in ways that minimize misuse. Many of Session’s design principles revolve around consent. For example, when someone sends you a message, and you have never spoken to them before, the message goes into a request folder that is hidden from the main interface. You must manually accept the request before the conversation can begin.
We also reduce the visibility of groups or large communities within Session. On Telegram, for instance, you can search for groups of any kind—beneficial or harmful—and immediately begin messaging large audiences. In Session, you cannot search for groups by keyword. Group discovery happens through out-of-band communication. If someone wants you to join a group, they must invite you or share a link directly. It is not discoverable inside the application. These design choices intentionally reduce avenues for large-scale misuse because we have seen how discoverable groups on other platforms can be exploited.
At the same time, messaging applications are tools that allow people to communicate. No privacy-focused application can—or should—monitor every message to determine whether a user’s intent is good or bad. If you want to protect journalists, whistleblowers, and human rights activists, you cannot surveil their communications. The exact mechanism that protects them could, in theory, also protect a bad actor, but introducing surveillance would undermine the very populations that rely on secure communication to survive.
It is a difficult space to operate in, but Session strikes the right balance: minimizing misuse where possible through design constraints while preserving the fundamental protections that activists, journalists, and vulnerable groups depend on.
Jacobsen: Do you foresee pressure from a state actor to give access to any internal components of Session? I recall that Meta, in at least one case, coordinated with authorities to some degree, limited, but still coordinated. Could you imagine any collaborative efforts like that being directed at Session?
Jefferys: The company that stewards Session is a Swiss foundation, but it does not have access to any user data because it does not operate the decentralized network itself. That network is run by community operators distributed globally. So if an intelligence or law-enforcement agency approaches the foundation and requests data—for example, “We want to see this user’s message history” or “We want all user data from this time period”—the Session Foundation can honestly state that it does not have access to that data.
It is not a matter of refusing to hand it over for legal or political reasons; instead, the foundation does not technically possess the data in the first place. From a design and architectural standpoint, it is difficult for the Session Foundation to interact with agencies seeking user information because the information they desire is not in the foundation’s custody. That is the starting point every time someone asks the foundation for user data.
Jacobsen: You use no phone numbers, no email addresses, and minimal metadata. We have discussed how some information can still be inferred. Are there more sophisticated security protocols you would like to incorporate, but that require more time because the development work is complex?
Jefferys: Yes. One of the protocol suites Session recently announced its intention to implement includes perfect forward secrecy and post-quantum cryptography. Perfect forward secrecy helps minimize the impact of a data breach if a user’s device is compromised and a long-term key is stolen. In a hypothetical scenario where an attacker also manages to scrape encrypted messages from the network—already difficult in Session’s case because it would require large-scale participation in the decentralized network—perfect forward secrecy limits how much historical data becomes readable.
We aim to implement this in 2026, but it is technically challenging due to Session’s decentralized architecture and support for linked devices, including numerous mobile devices and desktops. Most messaging applications either do not support multi-device encryption or do so at the expense of security. Session maintains full end-to-end encryption across all linked devices, which makes adding perfect forward secrecy more complex—but still achievable, and we have initial designs underway.
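The core idea behind forward secrecy can be shown with a minimal symmetric hash ratchet. This is not Session's protocol or any production design; it is a sketch of the underlying principle: each message key is derived from a chain key that is immediately ratcheted forward, so an attacker who steals the current chain key cannot invert the hash to recover keys already used and deleted.

```python
import hashlib

def ratchet(chain_key: bytes):
    """Derive a one-time message key, then advance the chain key.

    Toy construction: two domain-separated SHA-256 calls stand in for a
    proper KDF. Illustrative only, not a production key schedule.
    """
    message_key = hashlib.sha256(chain_key + b"msg").digest()
    next_chain = hashlib.sha256(chain_key + b"chain").digest()
    return message_key, next_chain

chain = hashlib.sha256(b"shared-secret").digest()  # placeholder root secret
keys = []
for _ in range(3):
    mk, chain = ratchet(chain)
    keys.append(mk)

# Capturing `chain` at this point lets an attacker derive FUTURE keys,
# but recovering the three PAST keys would require inverting SHA-256.
print(len(set(keys)) == 3)  # every message key is distinct
```

Production designs such as the Double Ratchet add asymmetric ratcheting on top, so even future keys heal after a compromise; the one-way chain above captures only the backward-secrecy half of the story.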
Post-quantum cryptography is also essential. No publicly known quantum computer currently exists that can break standard elliptic-curve cryptography, but we must prepare for the scenario in which an adversary stores encrypted messages today and decrypts them later once quantum capabilities emerge. This is known as a “store now, decrypt later” attack. Post-quantum schemes have improved significantly in efficiency and performance over the past several years, making them viable for real-world deployment.
We plan to introduce post-quantum cryptography in 2026 as well, to protect users against future quantum-enabled attacks.
Jacobsen: Do you think your model will become the norm, or will it remain just one option among many?
Jefferys: That is an interesting question. Centralized messaging applications will remain. They are much easier to develop and do not carry the technical complexity of a decentralized system like Session. Centralized platforms also benefit from user-experience advantages that come from controlling a single server environment.
So I expect centralized applications to continue. But decentralized options will also increase their presence in the messaging landscape. We are already seeing that shift—Session’s user base is growing, and adoption trends indicate rising interest in decentralized privacy tools.
The messaging space will maintain its current level of fragmentation. Users rarely enjoy hearing that because many already juggle multiple applications: one for work, one for family, one for friends, plus integrated messengers in Instagram, Facebook, LinkedIn, and so on. That paradigm will likely persist. But within the private messaging segment, we will see a shift toward decentralized applications, as they offer the strongest privacy and security guarantees.
People will continue to have one primary messaging app for work, another for social circles, and then a privacy-oriented app for sensitive tasks—sending a password, sharing financial details, or just having a conversation without the feeling that someone is watching over their shoulder.
Jacobsen: Another question. Privacy is not just an optional feature; it is a fundamental human right. Is that a founding philosophy or framework for cooperation within Session?
Jefferys: Too often, we see protocols or applications treat privacy as an afterthought. They build an entirely centralized system, release it, and only when users begin expressing insecurity—or when a data breach occurs, which is increasingly common—do they respond by saying, “We will add more protections now.”
Because the entire system up to that point has been built in a centralized, privacy-unfriendly way, it becomes tough for those applications to retrofit privacy into their networks. They may, for example, add end-to-end encryption, but that does not solve the metadata problem created by a centralized architecture.
Thinking about privacy as a human right—privacy first—and designing everything from that foundation means that when users raise questions about security or privacy, you are not forced to go backwards and retrofit protections. The system was built with that purpose from the start. It resolves many of the issues users face when interacting with social media systems or messaging applications.
Jacobsen: Is this vastly growing out of a hacker ethos?
Jefferys: I think so, to an extent. Hacker communities and the people tinkering with technologies developed many of the protocols on which Session is built. Onion routing, for example, was pioneered through the Tor project, which included early cypherpunks, hackers, and hacktivists. We have adapted that protocol for our network. The same is true for end-to-end encryption, much of which originated from cypherpunk discourse. Cryptocurrency likewise emerged from that cultural and technical environment.
The cypherpunk–hacktivist community laid the foundation for much of what Session does. Session takes those concepts and refines them into something highly usable. Many early tools developed in hacker communities offered strong security but poor usability. By refining those ideas, we can provide an application that feels as simple as a centralized messenger but without the inherent vulnerabilities of centralized systems. That is where Session has been most successful.
Jacobsen: Any quotes to conclude today? Any final thoughts? They can be playful—one of my favourites is, “Wisdom has been chasing you, but it outran you today,” or perhaps, “All that glitters is not gold.”
Jefferys: I don’t have any perfect quotes. I may not have the poetic background to offer something lyrical or memorable.
Jacobsen: Thank you for the opportunity and your time, Kee.
Last updated May 3, 2025. These terms govern all In-Sight Publishing content—past, present, and future—and supersede any prior notices. In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons BY‑NC‑ND 4.0; © In-Sight Publishing by Scott Douglas Jacobsen 2012–Present. All trademarks, performances, databases & branding are owned by their rights holders; no use without permission. Unauthorized copying, modification, framing or public communication is prohibited. External links are not endorsed. Cookies & tracking require consent, and data processing complies with PIPEDA & GDPR; no data from children < 13 (COPPA). Content meets WCAG 2.1 AA under the Accessible Canada Act & is preserved in open archival formats with backups. Excerpts & links require full credit & hyperlink; limited quoting under fair-dealing & fair-use. All content is informational; no liability for errors or omissions: Feedback welcome, and verified errors corrected promptly. For permissions or DMCA notices, email: scott.jacobsen2025@gmail.com. Site use is governed by BC laws; content is “as‑is,” liability limited, users indemnify us; moral, performers’ & database sui generis rights reserved.
