
SEEQC CTO Shu-Jen Han on Digital Quantum Control, Error Correction, and Energy-Efficient Scaling

2026-05-02

Author(s): Scott Douglas Jacobsen

Publication (Outlet/Website): The Good Men Project

Publication Date (yyyy/mm/dd): 2026/01/13

Dr. Shu-Jen Han, PhD, is Chief Technical Officer (CTO) at SEEQC. He joined SEEQC in 2021 (initially as VP of Engineering) and leads global, multidisciplinary R&D toward a chip-based digital quantum computing system, including responsibility for long- and short-term technology roadmaps. Previously, he managed the nanoelectronics effort at IBM’s T.J. Watson Research Center, and later served at HFC Semiconductor (ultimately as Associate VP), driving multiple generations of MRAM from development through product qualification. He earned his PhD in Materials Science & Engineering at Stanford (with a minor in Electrical Engineering), has authored 100+ publications, and holds 200+ issued U.S. patents.

Scott Douglas Jacobsen spoke with Dr. Shu-Jen Han about SEEQC’s digital approach to superconducting quantum systems. Han contrasts today’s microwave, room-temperature control, which is effectively analog and wiring-heavy, with superconducting digital pulses generated near the qubits at millikelvin temperatures. Local control and readout reduce latency and could enable real-time quantum error correction, which is crucial as errors accumulate over thousands of gates. He argues fault-tolerant machines may need 100,000 to 1,000,000 qubits, making cabling, bandwidth, and power untenable without chip-level integration. Han also outlines SEEQC’s partner-focused business model and growing Taiwan collaborations. He cites tens of megawatts up to 100 megawatts for conventional systems and calls Google’s Willow results a scaling signal.

Scott Douglas Jacobsen: Today, in the bio that I have here for you, we’ll be with Dr. Shu-Jen Han, Chief Technology Officer at SEEQC, where he leads multidisciplinary global R&D developing chip-based digital quantum computing systems and the company’s technology roadmap. Before SEEQC, he began his career at IBM’s semiconductor research division working on advanced CMOS technology and later managed the nanoscale device and technology group at IBM’s T.J. Watson Research Center, focusing on post-silicon transistor research. He then served as Senior Director and Associate Vice President at HFC Semiconductor, leading MRAM product development. He joined SEEQC in 2021 as Vice President of Engineering and now serves as CTO, overseeing multidisciplinary teams and the company’s technology strategy. Dr. Han earned his PhD in Materials Science and Engineering with a minor in Electrical Engineering from Stanford University and has authored numerous technical publications and patents. When it comes to digital quantum computing systems, how do you distinguish between quantum computing as mathematical modeling and algorithms versus the hardware of quantum computing systems? What is the distinction, and how do those come together in these frontier digital or electronic systems?

Dr. Shu-Jen Han: Today’s quantum computing is more or less analog computing. In classical computing, we are familiar with CPU- and GPU-based digital systems, with deterministic 1s and 0s. In quantum computing, you have superposition, a continuous mixture of 1 and 0, which makes it closer to analog. The hardware of today’s quantum computing uses microwave pulses to manipulate qubits, and many existing systems are based on this analog scheme. The problem, from our point of view, is that this approach is not scalable, because microwave pulses must travel from room temperature down to the millikelvin temperatures where the qubits must remain extremely cold. There is another way, which we call digital quantum computing. Instead of relying on analog microwave signals, we use superconducting digital electronics to generate coherent digital pulses that manipulate qubits. That is a key distinction between our technology and conventional analog approaches.

Jacobsen: When you are combining different fundamental approaches to computation—not just different algorithms within a linear or parallel model, but different forms of computational modeling—how do you integrate those in your digital quantum computing architecture in an optimized way? Some methods are more efficient for certain problems than others; you do not want to use the most advanced computation for a simple computation. How do you integrate those appropriately, and how do you determine the appropriate context for the type of computation?

Han: One thing I want to emphasize is that we are not trying to replace existing quantum computing; those systems still run the algorithms. What we are trying to do is make the system more scalable. Of course, there is an additional advantage to our technology: because we use a digital approach to control and read out the qubits, we can use that advantage to enhance certain algorithms. One good example is our unique digital control and digital readout on-chip, directly next to the qubits. As I mentioned in answering your first question, with the current approach you need to send the signal down from room temperature and read the signal all the way back up to room temperature. There is a huge delay between sending signals to the qubits and reading signals back from them. But if we can do everything next to the qubits, we do not have that delay. That means we can use our approach to enhance quantum error correction. You might have heard about quantum error correction: qubits suffer a lot of errors, and if you do not correct them, the computation is not useful. That is one of the reasons you need to keep the qubits at 10 millikelvin; at slightly higher temperatures, thermal noise will scramble the qubit information. But even at 10 millikelvin, qubits are still very noisy. So people keep pushing fidelity, a measure of how reliably each operation succeeds, and they are already reaching 99.99%. Even that tiny error rate becomes a problem once it accumulates: a quantum circuit consists of thousands of quantum gates, and if each gate contributes a tiny error, multiplied over 1,000 or 10,000 gates, the end result will not be correct. That is why we need to do quantum error correction along the way.
We keep correcting errors along the way. But you can imagine that if you do this error correction by sending the signal out, using room-temperature electronics to compute the correction, and sending the corrected data back in, it is very resource-wasteful. It also adds latency, so sometimes you cannot correct the error immediately. If instead we can do all of this control and readout next to the qubits, which is what our technology can potentially do, we can do a kind of real-time error correction: when an error forms, we detect it and correct it immediately, without ever sending the signal out, using our digital approach. That enables a new type of quantum error correction and significantly improves the robustness of the quantum computer. That is one example of how our technology can enhance an algorithm.
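To make the latency argument concrete, here is a rough back-of-envelope sketch. Every number in it is an illustrative assumption (cable length, coax signal speed, room-temperature processing time, and the on-chip cycle time are not figures from SEEQC), but it shows why a round trip to room-temperature electronics costs so much more time than local processing:

```python
# Back-of-envelope latency comparison for an error-correction feedback loop.
# All numbers below are illustrative assumptions, not measured SEEQC figures.
cable_length_m = 3.0        # assumed cable run from room temperature to the millikelvin stage
signal_speed_mps = 2e8      # roughly 2/3 of light speed in coaxial cable
processing_s = 500e-9       # assumed room-temperature decode/processing time

# Signal goes out to room temperature, is processed, and the correction comes back.
round_trip_s = 2 * cable_length_m / signal_speed_mps + processing_s

on_chip_s = 50e-9           # assumed local control/readout cycle next to the qubits

print(f"room-temperature loop: {round_trip_s * 1e9:.0f} ns")
print(f"on-chip loop:          {on_chip_s * 1e9:.0f} ns")
```

With these assumed numbers, the cabled loop takes roughly ten times longer than the local one, and the gap matters because qubits decohere on fixed timescales regardless of how long the feedback takes.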

Jacobsen: What is the upper limit to quantum computation and the amount of quantum error correction that can be done while computations are live?

Han: Quantum error correction is very powerful. At a high level, the concept is that you use redundant qubits. Once you measure enough qubits, you can think of it as something similar to a parity check. Put simply, if the majority outcome is one, you say the data qubit is one; if the majority is zero, you say the data qubit is zero. As long as you measure enough qubits and they are all entangled together, this works. For example, you might have 100 physical qubits representing a single data qubit. If they are all entangled and supposed to be one, some will flip to zero because of errors. But if you measure enough of them, from a probability point of view you can say there is a high likelihood that the data qubit should be one, or, conversely, zero. That is the basic idea of quantum error correction.
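The majority-vote idea Han describes can be sketched as a toy simulation. This is a classical repetition-code analogy, not SEEQC's decoder or a full quantum error-correcting code (real codes must also handle phase errors and cannot simply measure data qubits), and the error rate and qubit counts are arbitrary assumptions for illustration:

```python
import random

def decode_majority(readouts):
    """Decode one logical bit by majority vote over physical-qubit readouts."""
    return 1 if sum(readouts) > len(readouts) / 2 else 0

def simulate(logical_bit, n_physical, error_rate, trials=2000, seed=0):
    """Fraction of trials where majority-vote decoding recovers the logical bit."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        # Each physical qubit independently flips with probability error_rate.
        readouts = [logical_bit ^ (rng.random() < error_rate) for _ in range(n_physical)]
        correct += decode_majority(readouts) == logical_bit
    return correct / trials

# More physical qubits per logical bit -> higher chance of decoding correctly.
for n in (1, 9, 101):
    print(f"{n:>3} physical qubits: success rate {simulate(1, n, error_rate=0.1):.4f}")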

In principle, there is no fundamental limit to how accurate you can be. It is a resource issue. If you could use an unlimited number of physical qubits to represent one data qubit, you could achieve extremely high accuracy. But that is the ideal case. In practice, that is why when people talk about practical or utility-scale quantum computers, they often talk about needing on the order of 100,000 to even one million qubits. It is not that all of those qubits are doing computation. The majority of them are doing quantum error correction.

Even though, in theory, you can keep increasing accuracy with more physical qubits, implementing this in reality is extremely difficult. That is one of the reasons we formed SEEQC, to resolve this scalability issue. SEEQC stands for Scalable Energy Efficient Quantum Computing, and scalability is our first mission. If you want to use so many physical qubits, the first problem is how to connect them. In the conventional approach, you have to send microwave signals from room temperature all the way down to millikelvin temperatures and read the qubits all the way back up to room temperature. That requires long cables running from room temperature to millikelvin. If you are talking about 100,000 to one million qubits, there is no way to put millions of cables into a dilution refrigerator. There is simply no space, and the heat generated by all of those cables is unacceptable.

Another major concern is bandwidth. You send data in and read data out, and the bandwidth requirements can be on the order of tens or even hundreds of terabits per second. There is no interface today that can accommodate that kind of bandwidth. Even companies like NVIDIA do not have interfaces designed for that scale. These are engineering problems, and I would even call them fundamental problems, that block our ability to build utility-scale quantum computers using conventional approaches.
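A quick arithmetic sketch shows how the terabit-scale figure arises. The per-qubit bandwidth here is a hypothetical round number chosen for illustration, not a figure from the interview or from SEEQC:

```python
# Back-of-envelope: aggregate control/readout bandwidth for a large qubit array.
# The per-qubit traffic figure is an illustrative assumption.
qubits = 1_000_000          # upper end of the 10^5-10^6 range cited in the interview
per_qubit_bps = 100e6       # assume ~100 Mb/s of control/readout traffic per qubit

total_bps = qubits * per_qubit_bps
print(f"aggregate bandwidth: {total_bps / 1e12:.0f} Tb/s")
```

Under these assumptions the aggregate comes to 100 Tb/s, squarely in the "tens to hundreds of terabits per second" range Han cites, and far beyond any single interface available today.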

That is where SEEQC comes in. As I mentioned earlier, we do not send all signals out to room temperature. Many signals are generated locally, next to the qubits, using our digital approach. We do qubit processing locally, including control, readout, and error correction. By doing that, we eliminate many of these fundamental constraints.

Jacobsen: What about energy? How do the energy curves for different forms of computation work? Do they start at roughly equivalent efficiency and then diverge as the amount of computation increases—for example, resolving a Google query, an LLM query, a simple calculation like a tabletop calculator, or something that needs to run for five minutes of computation? How do those energy curves, in terms of wattage consumed, compare for SEEQC-style quantum computation versus other approaches?

Han: Energy efficiency is a critical question, and it is central to our company’s mission. As I mentioned earlier, SEEQC focuses on two core challenges: scalability and energy efficiency. Scalability is what I explained in the previous question, and energy efficiency is closely related. In the current approach, most of the electronics are built at room temperature, mainly using high-performance FPGA-based electronics, along with dilution refrigerators. These are extremely high power-consumption systems. Based on our estimates, if you consider a medium-scale qubit system—which is generally what is required for fault-tolerant quantum computing—you are talking about tens of megawatts up to 100 megawatts per system, assuming you can even build it. That level of power consumption is comparable to a modern AI data center. Today’s AI data centers can consume hundreds of megawatts, even approaching gigawatts, so a single quantum computer consuming around 100 megawatts is not far off. From an energy perspective alone, that approach is not scalable.

Our technology is very different. We reduce energy consumption by roughly four to five orders of magnitude. We still require some room-temperature electronics to control our digital chips, but we drastically reduce their number. We also reduce the number of dilution refrigerators needed, because our solution is chip-based and integrates much of the functionality directly on the chip. Instead of needing many refrigerators to support extremely large numbers of qubits, integration allows us to reduce that infrastructure significantly. This lower overall energy consumption makes large-scale quantum computing more realistic and approachable.
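To put "four to five orders of magnitude" in concrete terms, here is a simple calculation. The 100 MW baseline is the interview's own estimate for a conventional fault-tolerant system; the reduction factors are the claimed range, applied arithmetically for illustration:

```python
# Back-of-envelope: what a 10^4 to 10^5 power reduction means in watts.
conventional_watts = 100e6      # ~100 MW estimate for a conventional fault-tolerant system

for orders in (4, 5):
    reduced_watts = conventional_watts / 10**orders
    print(f"10^{orders} reduction -> {reduced_watts / 1e3:.0f} kW")
```

A four-order reduction brings 100 MW down to about 10 kW, and five orders brings it to about 1 kW, which is the difference between a dedicated power plant and an ordinary laboratory electrical circuit.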

Our approach is strongly inspired by microelectronics and semiconductor engineering, which is my background. In classical microelectronics, you do not connect every transistor with individual physical cables. If you tried to build a processor that way, it would be impossible. That is essentially what many current quantum computing approaches resemble. What we are doing is making something analogous to an integrated circuit for quantum computing. Instead of using physical cables to connect each qubit, we integrate qubits directly with control and readout electronics on the same chip. In our case, this integrated circuit is not based on CMOS silicon technology but on superconducting single-flux-quantum digital electronics. You can think of it as a digital circuit with extremely low power consumption. By integrating qubits with local control, readout, and processing, we remove fundamental barriers related to energy, wiring, and scale. Based on the history and lessons of microelectronics, this kind of integrated-circuit approach is the only realistic way to scale quantum computing.

Jacobsen: I think that is a strong point. Where is this technology going in 2026, and where is it heading for the rest of this decade? Are you talking about quantum computing in general, or your specific style of quantum computing and software–hardware integration?

Han: That is a good question. Our business model is different from most quantum computing companies. Many companies are trying to build their own large quantum systems to sell, or to place in data centers and offer as cloud services. They may have a unique qubit technology or a software advantage, but the goal is to deliver a full system to end users. Our business model is different. We focus on building unique chip technologies—qubit control, qubit readout, and error correction chips—and integrating those solutions into the systems of large quantum computing companies. Those companies are our customers. We do not sell directly to end users. We sell to large quantum computing companies. The reason is that, internally, they know their current approaches may not be scalable, even if they do not say that publicly. Our vision is to integrate our technology into their large systems so that when they deliver fault-tolerant systems, our technology is at the core of those systems.

Jacobsen: What are the limitations of quantum computation? People often talk about quantum computing’s potential in cryptography, such as breaking algorithms that classical computers could not crack even with astronomical amounts of time. But the media rarely asks critical questions beyond that, such as energy consumption. What are the broader general and specific limitations in the quantum computing space?

Han: There are two main questions. The first is whether we can build a large-scale quantum computer at all. The second is whether, once we build it, it will be useful. The first question is easier to answer. So far, we cannot build a truly large-scale quantum computer, but there are approaches to get there, including ours. We provide a more scalable approach from the control, readout, and integration perspective. However, there are other challenges the field still needs to address. For example, qubit quality is still not where it needs to be. We do not specialize in making qubits; we specialize in making control and readout electronics that integrate with qubits. Other companies focus on qubit fabrication, but overall qubit quality still needs significant improvement.

When you build a very large-scale qubit array, system performance is not determined by the average or best-performing qubits. It is determined by the worst qubits, the tail of the distribution. That is how large systems behave. The field still needs to improve qubit quality and tighten the performance distribution by eliminating those worst-performing qubits. Once that happens, scaling becomes much more realistic. There has been significant progress in recent years. For example, there has been a lot of discussion around Google’s Willow chip, which reflects meaningful advances, even if many people have not yet examined it in depth.

My personal view is that Google’s Willow chip really triggered the recent acceleration of interest in quantum computing. If you look at the market, many quantum computing companies began to receive much more attention, even reflected in stock prices starting around 2025. One of the biggest trigger points was Willow. This is not hype; it was an important breakthrough demonstration. What Google showed is that when you scale up the number of qubits, the error rate can actually start to decrease. As I mentioned earlier, traditionally when you scale up qubits, quantum error correction does not work well because qubit quality is non-uniform. There are always bad qubits—the tail of the distribution—and those worst qubits determine overall system performance. When you scale up with those bad qubits, error correction fails and logical qubit error rates remain high. What Willow demonstrated, for the first time, is that as the number of qubits increases, the logical qubit error rate for real data qubits starts to drop significantly. That suggests their qubit quality has reached a level where scaling becomes feasible. They still only have on the order of a few hundred qubits, so it is not yet a large-scale system, but it is a very strong proof point. It also gives purpose to our work. If the industry now has qubits that are ready to scale, then SEEQC’s technology can play a major role in enabling that scalability. 

Jacobsen: Any final thoughts based on today’s conversation?

Han: One additional point relates to collaboration with Taiwan. I am not sure whether Davis mentioned this to you, but Taiwan has become extremely important in this space. Taiwan entered quantum technology a bit later, but it has arguably the strongest semiconductor ecosystem in the world. I did my undergraduate studies in Taiwan before starting my PhD, and I spent much of my early life there. Because we are doing chip-based quantum computing, we want to leverage Taiwan’s semiconductor expertise. Even though our technology is not CMOS, many CMOS semiconductor techniques can still be applied to our platform. That is why we are actively leveraging chip resources from Taiwan. We now have multiple collaborations there, including recent work with E3 in Taiwan. We have our own foundry, but we also want a second foundry, and we are working on CMOS design and room-temperature electronics collaborations with Taiwanese companies. There are many active engagements with Taiwanese companies and organizations right now.

Jacobsen: Thank you for the opportunity and your time, Shu. 

