
Scalable, Energy-Efficient Quantum Computing

2025-08-26

Author(s): Scott Douglas Jacobsen

Publication (Outlet/Website): The Good Men Project

Publication Date (yyyy/mm/dd): 2025/05/26

John Levy, Co-founder, CEO, and Chair of SEEQC, a company developing scalable, energy-efficient digital quantum systems, talks about quantum computing. Levy discusses SEEQC’s origins in Hypres and its evolution through IARPA’s C3 program. The company uses superconducting single flux quantum (SFQ) technology to achieve dramatic reductions in power consumption—up to nine orders of magnitude. Levy outlines partnerships with NVIDIA, error correction infrastructure, and SEEQC’s vision for contextual and heterogeneous computing that integrates CPUs, GPUs, and QPUs. With its SEEQC Orange platform and chip-based architecture, SEEQC aims to enable real-time quantum-classical processing and unlock the practical utility of quantum computing at scale.

Scott Douglas Jacobsen: Today, we are here with John Levy. He is the Co-founder, CEO, and Chair of SEEQC (“Seek”), a leading company developing scalable, energy-efficient quantum computers. He has over 35 years of experience at the intersection of technology and finance, previously serving as chairman of Hypres and sitting on the boards of goTenna and BioLite. He is a founding partner of L Capital Partners, where he has led investments in the technology sector and served on the boards of companies such as WiSpry, OnPATH Technologies, and HiGTek.  He earned an A.B. in Psychology and an MBA from Harvard Business School. Two things often come up in the news.

John Levy: By the way, our company’s name is SEEQC—as in, you have an aspiration; you are seeking to do something. It is an acronym.

Jacobsen: Two things about quantum computing often appear in the news—particularly scalability and energy efficiency. People often discuss computation as one barrier to competition, but another is energy. I recall Eric Schmidt, in a recent interview, suggesting that the U.S. may need to partner closely with Canada to access sufficient hydroelectric power for large-scale AI data centers. So these things pop up. Within that context—was that the modus operandi?

Levy: First, thanks for reaching out and doing this. Yes. So, first, SEEQC began as a spinout of Hypres, which had its roots in IBM’s superconducting electronics division. When we were operating as Hypres, around 2014, the Intelligence Advanced Research Projects Activity (IARPA) launched the Cryogenic Computing Complexity (C3) program to explore superconducting computing as a solution to the power and cooling challenges of exascale computing.

They realized that, over time, we would need nuclear power plants to run data centers—which is precisely what is happening. Astonishingly, they foresaw this. Their response was: Let us see if we can develop entirely new kinds of classical computers that are incredibly energy efficient.

That was the idea: exascale computing at orders of magnitude lower power because that is the trajectory we needed to be on.

So we started working. There was a program at IARPA called C3, and we partnered with IBM and Raytheon BBN to build superconducting logic and superconducting memory for energy-efficient classical computers that could scale to exascale. That was the core idea.

We finished that project in early 2017. Following this, we held a strategic planning session at Hypres to further develop the idea.

We realized it was a perfect fit because we were already operating in the superconducting domain (i.e., 4 kelvins and below), and quantum computers—at least those using superconducting qubits—needed to operate at temperatures in the millikelvins.

Our technology could power quantum computers, and we knew how to scale them. We had, and still have, a chip foundry to do it. The core idea is that CMOS chips—designed and manufactured by companies such as TSMC, Intel, AMD, and others—consume excessive power.

Power turns into heat, which must be dissipated. CMOS chips are too slow and prone to noise.

If we could substitute the circuits we were building—based on single flux quantum (SFQ) technology, using Josephson junctions instead of transistors and niobium instead of copper—for conventional CMOS, we could change everything.

We produced these circuits in an entirely different way, and that is how we realized we could build scalable, energy-efficient quantum computers. That is how SEEQC was born.

We spun out. It took us a couple of years, but we officially spun out in 2019. Since then, we have been focused on that.

Now, to make that real, let us consider Google’s November announcement about error correction and what their Willow quantum computer looked like. It was a fantastic piece of engineering—honestly, it could be in an art museum. It is beautiful.

However, here is the reality: every qubit needs five cables. Moreover, to control each qubit using room-temperature electronics, you need 2 to 5 watts.

Because of what we are doing—by building energy-efficient circuits—our circuits operate at three nanowatts. That is nine orders of magnitude—a billion times more energy efficient—not 10%, not 1%, not a fraction of a percent—a billion times more efficient.
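
As a quick sanity check on that claim, here is a minimal calculation using only the figures quoted above (2 to 5 watts per qubit for room-temperature control electronics versus roughly 3 nanowatts for the cryogenic SFQ circuits); this is an illustrative back-of-the-envelope sketch, not a SEEQC specification:

```python
import math

# Figures quoted in the conversation above (per qubit).
room_temp_control_watts = (2.0, 5.0)  # conventional room-temperature control electronics
sfq_control_watts = 3e-9              # roughly 3 nW for on-chip SFQ control

for w in room_temp_control_watts:
    ratio = w / sfq_control_watts
    print(f"{w:.0f} W vs 3 nW -> {ratio:.1e}x ({math.log10(ratio):.1f} orders of magnitude)")

# Prints roughly 6.7e8x to 1.7e9x, i.e. about nine orders of magnitude,
# matching the "a billion times more energy efficient" figure above.
```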

Why? Because our circuits are operating at the same cryogenic temperatures as the qubits. CMOS cannot do this—putting too much power into CMOS would generate heat and destabilize the qubits.

So, this is a technology that we have developed, patented, and brought into practice. Moreover, we are running quantum systems today based on this core technology.

We announced SEEQC Orange, which is now operational. It is digitally controlled using SFQ logic and digitally multiplexed to reduce cabling requirements significantly.

Now, think about what that means for data centers.

It is one thing to demonstrate this on a small circuit. However, imagine you want to build a quantum data center with a million qubits.

We studied the energy budget for building a 100,000-qubit system using conventional methods. The estimate ranged from 10 to 40 megawatts of power.

Using our approach, we estimate just 63 kilowatts of power, a massive reduction.
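
To put those numbers side by side, the sketch below restates the quoted system-level estimates and, for context only, computes the control-electronics share implied by the per-qubit figures mentioned earlier; the split between control power, cooling, and other overheads is not itemized in the interview, so nothing beyond the quoted figures should be read into it.

```python
N_QUBITS = 100_000

# System-level estimates quoted in the conversation (not derived here).
conventional_total_watts = (10e6, 40e6)  # 10-40 MW, conventional approach
seeqc_total_watts = 63e3                 # ~63 kW, SEEQC estimate

# For context only: the control-electronics share implied by the per-qubit figures
# quoted earlier (2-5 W/qubit at room temperature vs ~3 nW/qubit SFQ). The totals
# above also include cooling and other overheads that are not modeled here.
conventional_control = tuple(N_QUBITS * w for w in (2.0, 5.0))  # 200-500 kW
sfq_control = N_QUBITS * 3e-9                                   # ~0.3 mW at the cold stage

print(f"Conventional control electronics alone: {conventional_control[0]/1e3:.0f}-"
      f"{conventional_control[1]/1e3:.0f} kW")
print(f"SFQ control electronics alone: {sfq_control*1e3:.1f} mW")
print(f"Quoted system totals: {conventional_total_watts[0]/1e6:.0f}-"
      f"{conventional_total_watts[1]/1e6:.0f} MW vs {seeqc_total_watts/1e3:.0f} kW")
```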

Jacobsen: So you have provided a real-world demonstration of scaling laws that show gains of nine orders of magnitude in energy efficiency. Sam Altman has said there are no foreseeable limits to the scaling laws—well, this seems like one of those parameters.

Levy: But it is critical to realize that this is only one of many technical hurdles—serious, nuanced engineering problems—that must be overcome. Moreover, you cannot just solve one. You have to solve all of them to build a utility-scale quantum computer.

Cabling is a huge issue, too, right? Do you have five cables per qubit? Or even three? Or two? Not if you are scaling to a million qubits. It is not feasible. It would be prohibitively expensive. The mean time between failures would be extremely short. The physical complexity would be overwhelming—encompassing space, thermal load, reliability, and other factors.
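
A tiny sketch of the cabling arithmetic behind that point, using the cables-per-qubit figures Levy mentions; the multiplexing factor at the end is a purely hypothetical illustration of how shared digital lines change the count, not a SEEQC number.

```python
import math

def cable_count(n_qubits: int, cables_per_qubit: float, mux_factor: int = 1) -> int:
    """Cables needed if each line can be shared across `mux_factor` qubits."""
    return math.ceil(n_qubits * cables_per_qubit / mux_factor)

n = 1_000_000
for cpq in (5, 3, 2):
    print(f"{cpq} cables/qubit, no multiplexing: {cable_count(n, cpq):,} cables")

# Hypothetical digital multiplexing factor, purely for illustration:
print(f"one digital line per 64 qubits: {cable_count(n, 1, mux_factor=64):,} cables")
```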

So you must solve that problem.

How do you do it?

You package chips in a way that allows them to communicate directly—essentially forming a multi-chip module. Think of it like a sandwich: one chip stacked directly on another.

That gives you direct connectivity, reduced latency, and increased speed.

So that is key for things like error correction. Do you want to do error correction? You have to have low latency. You have to solve that. I am telling you—it is like whack-a-mole. However, it is whack-a-mole at a level you cannot even imagine.

Jacobsen: You are also partnering with a major company, NVIDIA. Jensen Huang is known to be a sober and mature individual. Unlike some others, he does not speak off the cuff too often. So, what does this partnership mean for SEEQC?

Levy: Let me explain how it happened—it is a great story. I love this.

So, about three years ago, we were at one of those many quantum computing events. I met Tim Costa and Sam Stanwyck, who run Heterogeneous Compute and Quantum Computing at NVIDIA, respectively.

I often said, “Hey, let us get together—I will share our latest results.” So, I showed them what we were working on.

By the way, I have not mentioned this yet: our chips are fully digital.

Now remember—quantum computing exists in the analog space. People typically control and read out quantum computers using microwave pulses—that is, analog RF signals. Everything is in the analog domain.

However, we are doing it in the digital domain.

So I told them, “Imagine a GPU and a CPU connected to a QPU—chip-to-chip—with the same latency you have with NVLink between CPU and GPU.”

Ultimately, we want it to operate at such low latency and high speed that we can share memory across the QPU, CPU, and GPU. So, instead of having two separate systems connected by Ethernet or PCIe, we would have chip-to-chip-to-chip communication.

Just as NVIDIA’s CPU-GPU superchip is connected internally via NVLink, imagine a system where the QPU, CPU, and GPU operate together as one unit inside a single computing node.

Now, think about the possibilities we have opened up. We are introducing the concept of trustworthy heterogeneous computing. You can combine a quantum algorithm with a classical algorithm or an AI learning model. We are building the infrastructure to allow that.
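
As an illustration of the kind of tight quantum-classical loop such chip-to-chip integration is meant to serve, here is a generic hybrid optimization sketch in Python; the `FakeQPU` stub and the finite-difference update are invented placeholders, not SEEQC's or NVIDIA's software, and the point is only that every iteration is a QPU-to-CPU/GPU round trip whose cost depends on interconnect latency.

```python
import random

class FakeQPU:
    """Stand-in for a QPU: 'measures' a noisy cost for a parameter vector."""
    def run(self, params):
        return sum((p - 0.5) ** 2 for p in params) + random.gauss(0, 1e-3)

def classical_step(qpu, params, lr=0.2, eps=1e-2):
    """Stand-in for the CPU/GPU side: one finite-difference gradient-descent update."""
    base = qpu.run(params)
    grads = []
    for i in range(len(params)):
        shifted = list(params)
        shifted[i] += eps
        grads.append((qpu.run(shifted) - base) / eps)
    return [p - lr * g for p, g in zip(params, grads)]

qpu = FakeQPU()
params = [random.random() for _ in range(4)]
for _ in range(50):
    # Each iteration is a QPU <-> CPU/GPU round trip; chip-to-chip links shrink its cost.
    params = classical_step(qpu, params)
print(f"final cost ~ {qpu.run(params):.4f}")
```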

Moreover, they said, “Yes.” 

Yes, they were in. That is what we have been building together.

Our recent announcement marked the first time we had demonstrated it. Now, we are focused on two main goals:

  1. Reducing latency—getting it down below a microsecond, ideally into the hundreds of nanoseconds, to make it viable for error correction.
  2. Making it bandwidth-efficient—requiring gigabytes rather than terabytes of data transfer, which is much more feasible.

Since we are working in the digital domain, we can optimize for both low latency and high throughput. That is the winning combination.
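
For a rough feel for those two targets, the sketch below estimates raw syndrome traffic for a large machine and how much of it would need to leave the cryostat if digital pre-processing forwards only a fraction of events; the cycle time, bits-per-qubit figure, and forwarding fractions are assumptions chosen for illustration, not SEEQC or NVIDIA numbers.

```python
# Rough, illustrative syndrome-traffic estimate for error correction.
# Cycle time and bits per qubit per cycle are assumptions for illustration only.
n_qubits = 1_000_000
cycle_time_s = 1e-6          # ~1 microsecond syndrome-extraction cycle (assumed)
bits_per_qubit_cycle = 1     # ~1 syndrome bit per qubit per cycle (assumed)

raw_bits_per_s = n_qubits * bits_per_qubit_cycle / cycle_time_s
print(f"raw syndrome stream: {raw_bits_per_s / 8 / 1e9:.0f} GB/s")

# If cryogenic digital pre-processing forwards only a fraction of events upstream:
for keep_fraction in (1.0, 0.1, 0.01):
    gb_s = raw_bits_per_s * keep_fraction / 8 / 1e9
    print(f"forwarding {keep_fraction:.0%} of the stream: {gb_s:.2f} GB/s")
```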

Jacobsen: I talked to a friend the other day about all this. One thing that came up was that we have GPUs, CPUs, and QPUs, but you are building an entirely new architecture. What you explained to me feels like contextual computing (Contextual Compute), where the system optimizes computation dynamically depending on what is needed at the time.

Levy: You could call it that.

Once you build the software layer on top of this architecture, you have made it.

That is what we are doing. We are building tools—new tools, such as a quantum computer.

However, even a QPU—let us be careful how we define it—is more than just a “quantum processor.” CPUs and GPUs are more than just arrays of transistors. They are architectures, ecosystems, and toolchains. We are doing the same thing for quantum computing.

They comprise high-level functions such as arithmetic units, cache memory, power management, and I/O subsystems, among others. When people talk about QPUs, they often mean just an array of connected qubits. However, an array of qubits alone does not encompass the full system-level functionality that constitutes a complete processor from a systems engineering perspective. That is what we are doing, and that is an important distinction.

So, when you connect an integrated QPU—something architecturally complete—to the rest of the system, it becomes contextual computing. I love that idea. Jensen Huang, at NVIDIA, thinks of it as accelerated computing. Moreover, rightly so—think about it: he was moving from CPUs, which are fundamentally serial processors, to parallel GPUs. He would explain this better than I could. However, that is the idea—acceleration.

Here, though, we are going beyond acceleration. We are changing the model entirely. Just as quantum computing represents a metaphorical leap from classical digital computing, what we are building represents a similar leap. It is not just about speed anymore—it is about solving NP-hard problems, doing so in the quantum domain, and coordinating those results with the classical domain when needed. That is an extraordinary shift. It is the kind of thing dreams were made of in the golden age of science fiction. It is the reason I do this.
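
A minimal sketch of what that contextual layer could look like in software: a scheduler that routes each task to a CPU, a GPU, or a QPU depending on its structure. The task categories and routing rules here are invented for illustration and are not SEEQC's software stack.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    kind: str  # "serial", "parallel", or "quantum" (illustrative categories)

def dispatch(task: Task) -> str:
    """Toy contextual scheduler: pick a processor based on task structure."""
    route = {
        "serial": "CPU",    # branchy, control-flow-heavy work
        "parallel": "GPU",  # dense, data-parallel numerics and AI inference
        "quantum": "QPU",   # sampling and optimization suited to qubits
    }
    return route.get(task.kind, "CPU")

jobs = [
    Task("parse-input", "serial"),
    Task("train-surrogate-model", "parallel"),
    Task("sample-ground-state", "quantum"),
]
for job in jobs:
    print(f"{job.name} -> {dispatch(job)}")
```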

Jacobsen: What did Isaac Asimov write about? This would be akin to laying the foundations for a positronic brain. Right? Because there is a certain resilience in the human brain, even after injury or insult, we often assume the brain is our working model of general intelligence—though “general” always needs a frame of reference. Still, what we are doing could be viewed as forming the synthetic equivalent of such resilience and versatility. Moreover, thinkers like David Deutsch have frameworks for describing systems like this—universal constructors. Shall we go there? We are not going down the panpsychism path. We will not claim that everything is conscious. Honestly, that is the kind of conceptual rabbit hole best avoided. The one who more or less caused all the panpsychism noise—he invented a problem, then offered no solution. Right.

Even when it comes to evolved systems like the human brain, which shows tremendous versatility and operates with high efficiency over decades, what we are building now forms the foundational architecture of something analogous. Moreover, that is incredibly exciting.

So, what do you see as the first immediate application—even before you get to those higher-level functions?

Levy: Funny enough, the first application we are considering is entirely internal to the quantum computer—error correction. So imagine how we manage error correction now: trying to do everything using FPGAs or cryo-CMOS. Instead, imagine a different structure where you do a portion of the processing on-chip at the millikelvin level using high-speed, low-power superconducting logic. That would handle the quick, easy stuff. Then, that chip is connected via superconducting ribbon cable to a digital pre-decoder operating at 100 millikelvins or even a single Kelvin. That would do the next layer of processing. If the error cannot be resolved at those two levels, the system hands it off to a GPU or classical processor that can take a global view of the data and run more complex algorithms.

The idea is to build a chip-based infrastructure for quantum error correction—something versatile and adaptable that software developers and quantum scientists can work with. That way, anyone with a new algorithm or software approach to error correction can plug it into this infrastructure. They do not have to reinvent the hardware stack. It gives them a toolkit. Our first instantiation of this heterogeneous computing system will most likely be focused on—error correction. Once we unlock quantum error correction effectively, we also unlock the real capabilities of this new form of contextual—or, as you said—contextualized computing.
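
Here is a schematic sketch of that tiered escalation pattern, with a fast on-chip tier, a cryogenic pre-decoder, and a global classical decoder as the fallback; every threshold, function name, and data structure is hypothetical, and real decoders operate on far richer syndrome data than these toy bit lists.

```python
import random

def onchip_tier(syndrome):
    """Millikelvin SFQ stage (hypothetical): resolve isolated, trivial events locally."""
    return "resolved on-chip" if sum(syndrome) <= 1 else None

def predecoder_tier(syndrome):
    """Pre-decoder stage at roughly 100 mK to 1 K (hypothetical): handle small clusters."""
    return "resolved by pre-decoder" if sum(syndrome) <= 3 else None

def global_tier(syndrome):
    """Room-temperature CPU/GPU stage (hypothetical): global decode, always answers."""
    return "resolved globally"

def decode(syndrome):
    # Escalate through the tiers until one of them can resolve the syndrome.
    for tier in (onchip_tier, predecoder_tier, global_tier):
        result = tier(syndrome)
        if result is not None:
            return result
    return "unresolved"

random.seed(0)
for _ in range(5):
    syndrome = [random.random() < 0.15 for _ in range(20)]  # toy syndrome bits
    print(sum(syndrome), "defects ->", decode(syndrome))
```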

Jacobsen: Where do you see the unknowns in developing this infrastructure, especially after the hardware layer, once we start layering algorithms on top of it?

Levy: It is funny—everyone asks a slightly different version of this question. Moreover, it is a good one because when I say, “Hey, we are building a new architecture,” or at least extending the existing one and bringing together two entirely different computing domains, it naturally raises the question: for what purpose? Where is this going? How is this going to play out?

Moreover, I will give you the same answer I gave Jensen at GTC. Imagine someone takes you down into the basement of the University of Pennsylvania in 1946 and shows you the ENIAC. You look at it and ask, “What is this good for?” Moreover, the answer might be, “Well, it is good for arithmetic. It is a super-calculator. I can crunch enormous volumes of numbers.” That is all very impressive—for that time. However, it is not the same as imagining that, one day, you would have a device in your pocket that could stream every movie and song ever created or help drive a car that picks an optimized route, pays for gas via a chip, and texts your friend that you are picking them up—all in real time. No one imagined that in 1946.

Similarly, we are developing these tools and infrastructure without necessarily knowing the full extent to which they will be utilized. I mentioned one example earlier—error correction. However, broadly, we are trying to build a computational capability that can be released into the world, allowing others to discover what they want to become. Louis Kahn, the architect, used to say things like, “What does a brick want to be?”—as if his materials had their ambitions. His goal was to understand the brick deeply and let it express itself.

That is what we are doing. We are developing these technologies, engineering them with precision, and putting them in the hands of the Louis Kahns of the world to figure out what they should become. 

Jacobsen: It is like Michelangelo saying that David was always in the stone—he just had to carve him out.

Levy: Right. By the way, there is an excellent book I have been reading called The Rigor of Angels. Have you read it?

Jacobsen: No.

Levy: It is about Kant, Borges, and Heisenberg. Moreover, a recurring theme of infinity and unknowability pervades philosophy, literature, and physics. That is the thinking we need now—cross-disciplinary, multidisciplinary, with an open mind and heart. That is how we determine what we want to express through this technology. Moreover, it is going somewhere—undeniably.

Jacobsen: I am reminded of that famous Michio Kaku story about the U.S. attempting to build a particle collider three times the size of CERN’s in Geneva. They dug a massive hole—and spent a billion dollars doing it. Moreover, when someone asked, “Are you going to find God with this machine?” they said, “No. We are going to find the Higgs boson.” Then Congress promptly spent another billion to fill the hole back in.

That is the tension—people want immediate answers to fundamental work that will pay off in decades. However, all I can say is this: we have reached the point where, in some integrated systems and breakout circuits, you have built all the core elements of a quantum computer—on a single chip, digitally, operating at temperatures in the millikelvins and at ultra-high efficiency.

Levy: As I mentioned earlier, by the end of next year, we will have complete core quantum computing functionality on-chip digitally.

As we refine this, we will enhance our connectivity to GPUs and CPUs and continue to expand our infrastructure. Some of that work is already happening at the National Quantum Computing Centre in the UK, where we expect the next generation of our contextual computing systems to emerge.

I like contextual computing. It is a good idea. I might use it.

Jacobsen: Because, conceptually, for me, it is taking the physics of this new infrastructure that you have built—and integrating that with a new stack of algorithms. Whether they are layered, modular, or stacked, the point is that they become bright and aware in a way that allows them to say, “I do not need to use this for that—I will use that for this.” It becomes efficient in a fundamentally new way.

Levy: Yes. Look, the issue, of course, is that we need to scale.

We are currently operating at a relatively small scale because you need to scale up before scaling out. Moreover, that is precisely what we are focused on—scaling up. We are building the foundational elements to scale out once we have all the core functionality integrated into a single chip.

Moreover, that is when this starts to come alive. That is when it becomes real at a systems level. At the very least, we are headed in the right direction.

Moreover, as I said earlier—error correction will likely be the first serious focus, as pick-and-shovel as that might sound. However, it is the groundwork we need to lay for everything else to follow.

Jacobsen: John, thank you so much for your time today. I appreciate your expertise.

Levy: Yes. No—it was great to meet you.

Jacobsen: Great to meet you, too. This was helpful.

Levy: Excellent.
