From Quantum Theory to Quantum Practice

Theoretical, Possible, Inevitable: The Evolution of Quantum Computing

The development and implementation of any technology that opens new frontiers involves the transition from concept to prototype to minimum viable product, culminating in a scalable system with the desired functionality. Early developments offer glimpses of the potential, while the broader achievement of scalable capacity unlocks true commercial value. These earlier stages of theory and possibility are essential for understanding what is necessary to progress to inevitability.

There are many examples of technological development from theory to commercial reality, and air travel is one of them. Prior to 1903, the goal of sustained human flight was theoretical. From watching birds, bats, and bugs, humans had long known flight was possible, but machines modelled on their wings (most famously by da Vinci) proved unsuccessful for centuries. A shift to a fixed-wing approach brought with it physical models that demonstrated aircraft flight was possible, but these models lacked a power source. The Wright brothers’ first 12-second flight proved that human flight in heavier-than-air vehicles was possible, and, years later, improvements to aircraft allowed sustained transatlantic passenger flight, setting a global inevitability in motion. However, it wasn’t until the development of jet engines, pressurized fuselages, and a network of international airports that long-distance, reliable, profitable transport of people and cargo by air became a reality.

Quantum computing, by commercializing a branch of physics, is following a similar phased development trajectory. What is especially exciting in the field now is the building momentum. In recent decades, quantum computing has moved from theory to possibility, and at Photonic, we’re focused on making distributed, fault-tolerant quantum computing a global inevitability.

Photonic’s approach is a combined technological platform for both quantum computing and networking; our architecture is optimized for entanglement distribution and leverages colour centre spins in silicon (T centres). Using low-overhead quantum error correction, and with a native optical interface that enables high-performance non-local operations, silicon T centres can drastically accelerate the timeline for realizing modular, scalable, fault-tolerant quantum processors and repeaters. Photonic has been relentlessly pursuing (and achieving!) the technological milestones required for this end goal.

 

Photonic’s Quantum Phases: The Path to Possible

Why do we place such importance on having a platform based on spins in silicon with a native optical interface, one that can serve as a combined platform for both fault-tolerant quantum computing and quantum networking? Because we know that without the ability to reliably scale and distribute entanglement, the full value of quantum computing can’t be realized. A silicon-based platform allows us to leverage years of development in high-performance chip manufacturing to enable scalability, and the optical interface allows us to plug into a system of optical interconnects that distributes entanglement between chips. The result is a fully integrated system built on proven, well-tested technologies with years of optimization and engineering from other industries.

Those familiar with quantum computing will know the path to distributed, fault-tolerant quantum computing is typically broken into three phases, and that there is variation in both naming (phases, levels, stages) and the specifics of each step. Here we outline Photonic’s Three Phases of development for commercial quantum computing and our position on the state of each, with our focus, as always, being the successful implementation of Phase 3:

  • Phase 1: NISQ (“noisy intermediate-scale quantum”) phase: In this phase, the prototypes that emerged were restricted to a single module and contained qubits which were too few and/or too noisy to implement quantum error correction effectively.
  • Phase 2: Monolithic, small-scale, fault-tolerant qubit phase: In Phase 2, we are still limited to a single module, however qubit counts continue to increase, and noise has decreased enough that some number of error-corrected (logical) qubits can be implemented.
  • Phase 3: Large-scale, distributed, fault-tolerant qubit phase: In Phase 3, quantum computers will have grown enough that they contain multiple connected (or distributed) modules, each with many logical qubits and noise below the threshold for quantum error correction. At this phase, large fault-tolerant computations are performed across the many-module system.

Phase 1: NISQ machines do not have enough high-quality qubits to reliably run algorithms that provide meaningful advantages over classical supercomputers. Specifically, they cannot provide the number or quality of qubits required for quantum error correction, and without that capability they lack a path to delivering a system appreciably better than what can be achieved classically. While Phase 1 has been instrumental in exploring the potential of quantum computing, it hasn’t yielded any significant commercially relevant use cases to date.

Phase 2: We are just now seeing the start of very active Phase 2 quantum demonstrations. Small-scale fault-tolerant qubits have only recently become a reality. We are in the ‘pre-dominant-design explosion’ of methods, as demonstrated by the diversity of approaches across the quantum computing ecosystem. In Phase 2 we will see single quantum modules demonstrate quantum error correction (QEC) protocols such as surface codes or qLDPC codes. Useful scientific results may emerge from these single-module computers in Phase 2; however, the known high-value use cases of commercial relevance (e.g., Shor’s algorithm) require more qubits than any current single-module approach will likely be able to provide: the upper bound most quantum hardware platforms project for an individual module is 100-200 logical qubits. This constraint underscores the necessity of reliable entanglement distribution between modules.
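
To give a rough sense of why single-module logical qubit counts plateau, the sketch below estimates surface-code overhead using the common heuristic that a distance-d rotated surface code needs roughly 2d² - 1 physical qubits per logical qubit and suppresses logical errors as roughly 0.1·(p/p_th)^((d+1)/2). All parameters (physical error rate, threshold, target logical error rate, module size) are illustrative assumptions, not figures for any particular platform.

```python
# Illustrative surface-code overhead estimate; all parameters are assumptions,
# not measurements from any specific hardware platform.

def distance_for_target(p_phys, p_th, p_logical_target):
    """Smallest odd code distance whose estimated logical error rate meets the
    target, using the heuristic p_L ~ 0.1 * (p_phys / p_th) ** ((d + 1) / 2)."""
    d = 3
    while 0.1 * (p_phys / p_th) ** ((d + 1) / 2) > p_logical_target:
        d += 2
    return d

p_phys = 1e-3             # assumed physical error rate per operation
p_th = 1e-2               # assumed surface-code threshold
p_logical_target = 1e-12  # assumed target logical error rate per operation

d = distance_for_target(p_phys, p_th, p_logical_target)
phys_per_logical = 2 * d ** 2 - 1   # rotated surface code: data + ancilla qubits
module_physical_qubits = 100_000    # assumed physical qubits in one module

print(f"code distance:               {d}")
print(f"physical qubits per logical: {phys_per_logical}")
print(f"logical qubits per module:   {module_physical_qubits // phys_per_logical}")
```

With these particular assumptions the estimate lands near the 100-200 logical qubit range cited above; different hardware assumptions shift the numbers, but the qualitative conclusion, that a single module supports far fewer logical qubits than the known high-value algorithms require, stays the same.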

Phase 3: It is in Phase 3 that the industry will see systems large enough to implement algorithms that provide exponential speedups on complex computational challenges and to meet market demand for high-value, high-impact use cases such as drug discovery, materials science, and catalyst development. To work effectively at Phase 3, quantum platforms will need to operate in a distributed, networked manner, with operations acting within and between modules. Operations that act on more than one module consume distributed entanglement. Commercially viable quantum platforms will need to distribute entanglement at a rate that meets or exceeds the rate at which it is consumed, to avoid bottlenecks in computation time. High connectivity helps because it allows entanglement to be distributed directly to where it’s needed rather than taking a winding route through multiple connections (akin to flying direct rather than taking multiple stopovers). A critical technical consideration for Phase 3 is therefore the total entanglement distribution bandwidth between modules.
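
To make the bandwidth consideration concrete, here is a minimal back-of-envelope sketch: if each inter-module (non-local) logical operation consumes some number of Bell pairs, and the computation issues a certain number of such operations per logical cycle, the network’s aggregate entanglement distribution rate must keep pace or the computation stalls. Every figure below is a hypothetical assumption for illustration, not a Photonic specification.

```python
# Hypothetical entanglement-bandwidth estimate for a distributed computation.
# All parameters are illustrative assumptions.

bell_pairs_per_nonlocal_op = 1   # assumed Bell pairs consumed per inter-module operation
nonlocal_ops_per_cycle = 50      # assumed inter-module operations per logical cycle
logical_cycle_time_s = 1e-5      # assumed logical cycle time (10 microseconds)

# Rate at which the computation consumes distributed entanglement.
required_rate = bell_pairs_per_nonlocal_op * nonlocal_ops_per_cycle / logical_cycle_time_s

# If the network distributes entanglement slower than this, computation time
# stretches by roughly the ratio of the two rates.
available_rate = 2e6             # assumed aggregate distribution rate (Bell pairs/s)
slowdown = max(1.0, required_rate / available_rate)

print(f"required:  {required_rate:,.0f} Bell pairs/s")
print(f"available: {available_rate:,.0f} Bell pairs/s")
print(f"slowdown if entanglement-limited: {slowdown:.1f}x")
```

Under these assumed numbers the machine would spend much of its time waiting for entanglement rather than computing, which is exactly the bottleneck described above and why total inter-module distribution bandwidth, along with connectivity that routes entanglement directly where it is needed, matters so much in Phase 3.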

 

Inevitable Progress to Phase 3 and Beyond

At Photonic, we’ve been working from a quantum systems engineering perspective, starting with what a Phase 3, distributed, scalable quantum computing network would require. We’ve focused on these requirements (e.g., telecom connectivity, data centre compatibility) to build a platform that accommodates both the performance of each individual computing module and the scalability of the network.

Our architecture is designed to enable horizontal scalability while optimizing for entanglement distribution, harnessing the potential of silicon colour centres. The telecom, silicon-integrated approach we have taken at Photonic has intrinsic capabilities in this regard, as colour centres (photons and spins) are extremely promising for high-fidelity operation, while high connectivity brings fault tolerance within reach with low overheads.

Our native optical interface produces entangled photons that distribute entanglement at telecom wavelengths, so no transduction is required to use optical interconnects between modules. An additional benefit of operating at telecom wavelengths is the ability to integrate with existing infrastructure, namely the optical switch networks developed for telecommunications. As such, Photonic’s architecture is well suited to executing large-scale algorithms across multiple modules over a fast, reliable quantum network.

Photonic’s strategy lies in Phase 3, where the true potential of large-scale, distributed, fault-tolerant systems for commercially relevant quantum computing applications will be realized.

  • As interest in NISQ-based technology declines, we’re seeing a highly competitive race to deliver commercial value, especially in Phase 2. Expectations for fault-tolerant qubits will keep rising, and the appetite for them will grow exponentially.
  • Many companies and researchers are making great progress in Phase 2. Each milestone reached benefits the industry as a whole. We’re all able to learn from innovations in materials and qubits, and to better understand the strengths and limits of various approaches.
  • There is an ever-increasing desire to reach the stage where the key, known, commercially valuable algorithms are implementable; this necessitates a functional, modular, Phase 3 system. At that stage, we anticipate platforms capable of distributed quantum computing will thrive. We are encouraged to see the industry also acknowledging the necessity of interconnected quantum modules. A recent Global Quantum Intelligence Outlook Report on Scalable Quantum Hardware reached the following conclusion:

The report highlights the necessity for a modular approach to scaling in nearly all proposed quantum computing architectures. This modular approach, which emphasizes distributed rather than monolithic quantum computing stacks, offers not only scalability but also flexibility, maintainability, and redundancy. It also emphasizes how most architectures will ultimately need to leverage interconnects, and how performant optical photonic interconnects hold the promise of synergies in quantum communications and networking. 

We cannot overstate how exciting it is to be on the cusp of technological advancements whose impacts we can’t yet fully fathom. Just as the Wright brothers and others at the earliest frontiers of flight could not have foreseen the range of applications, from specialized water bombers to fighter jets to planes capable of carrying 250 tons of cargo or conducting edge-of-space flights, our understanding of the potential of quantum computing is just beginning. From catalyst discovery for new fuels, to materials science for solar energy capture and battery development, to pharmaceutical design for medical advancement, the possibilities seem endless. We are dedicated to developing quantum computing and networking capacity to unleash the maximum potential for doing good in the world.
