Quantum Glossary
Your guide to the key terms in quantum computing and classical programming—covering the concepts and language behind Horizon Quantum’s technology
A
An abstraction level is a way of representing complex systems by hiding certain details to make interactions more readable and manageable. For example, purchasing a coffee is a high-level interaction with a machine’s interface, while the internal mechanics of how it brews the coffee take place at a lower level, hidden from the user. In programming, abstraction levels describe how much detail is hidden or exposed when working with a computer system. Higher levels of abstraction conceal low-level implementation details, allowing programmers to think in broader, simpler and more natural terms (e.g. writing code in Python without worrying about how the processor executes each instruction). Lower levels of abstraction reveal more of the underlying hardware or operations (e.g. assembly language or machine code). Shifting between abstraction levels helps manage complexity by letting developers focus either on the big picture or on fine-grained control, depending on the task.
An algorithm is a finite set of step-by-step instructions for solving a type of problem or carrying out a computation. Algorithms are a cornerstone of computer science and form the building blocks of computer programs. They are used for calculation, data processing, and automation, and they range from simple procedures (such as sorting) to advanced methods (such as combinatorial optimisation) applied in areas like quantum computing.
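The step-by-step character of an algorithm is easiest to see in a small example. The sketch below implements insertion sort in Python, a simple ordering procedure chosen purely for illustration:

```python
# A minimal illustration of an algorithm: insertion sort, a finite
# sequence of steps that orders a list of numbers.
def insertion_sort(items):
    result = list(items)              # work on a copy of the input
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        # Shift larger elements one position to the right.
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

print(insertion_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```

Each step is unambiguous and the procedure always terminates, which is exactly what makes it an algorithm rather than an informal recipe.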
Algorithm construction is the process of designing and specifying algorithms to solve problems or perform computations. For conventional computing, this involves creating algorithms that run on classical hardware, following deterministic, step-by-step logic. For quantum computing, algorithm construction requires exploiting quantum effects—such as interference, superposition, and entanglement—to design methods that can sometimes solve problems more efficiently than classical algorithms.
An API is a standard way for different software systems to talk to each other. It lets one program use the functions or data of another without needing to know how that other program is built. APIs make it easier for developers to connect tools, build new applications, and share data across software systems.
B
Conventional computers operate on electrical signals that naturally have two states, such as on/off. Binary code (0s and 1s) is used to represent these states—which can also be interpreted as false/true or no/yes—with 0 indicating one state and 1 the other. At the hardware level, all machine code is expressed in binary.
A bit is the smallest unit of data in computing, representing a single binary value: either 0 or 1. Bits are the building blocks of binary code. By combining bits, computers can store and process more complex information.
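A short Python session makes this concrete: `bin()` shows an integer's binary form, and bitwise operators act on individual bits.

```python
# Bits in practice: inspect and manipulate the binary form of an integer.
n = 13
print(bin(n))        # '0b1101' : 13 = 8 + 4 + 1
print(n & 0b0100)    # test a single bit -> 4, so that bit is set
print(n >> 1)        # shift right, dropping the lowest bit -> 6

# Eight bits (one byte) can represent 2**8 = 256 distinct values.
print(2 ** 8)        # 256
```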
Blind quantum computing is a secure method that allows users to run computations on a remote quantum computer without revealing their input, algorithm, or output to the server. The “blindness” refers to the fact that the server does not know what computations it is performing, ensuring privacy and security for the user. As quantum computers become more prevalent, cloud-based quantum services—where users access quantum computers without needing to own one—are likely to become the norm. Blind quantum computing addresses the security risks introduced alongside cloud-based quantum computing by providing unconditional security, guaranteed by the principles of quantum physics, which makes it possible to entrust remote services with sensitive data and proprietary algorithms.
C
In conventional computing, a circuit is a network of logic gates that process binary signals. By combining many gates, a circuit can carry out arithmetic, decision-making, and other functions such as memory storage and data processing. Physically, classical circuits are built from electronic components such as transistors on silicon chips. At the hardware level, all programs are ultimately executed through circuits.
In quantum computing, a circuit is a sequence of operations, consisting of quantum gates and measurements, carried out on qubits within a quantum processing unit (QPU). Much like logic circuits express computations in classical computers, a quantum circuit is the basic way of expressing a computation on a quantum computer. By arranging operations in different sequences, quantum circuits implement algorithms, and the interplay of gates within them gives rise to quantum interference, enabling quantum computers to perform computations that a classical computer cannot.
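The idea of a circuit as a sequence of operations can be sketched with a small state-vector simulation in plain numpy (no quantum SDK assumed): a Hadamard gate followed by a CNOT turns the state |00⟩ into an entangled Bell state.

```python
import numpy as np

# A two-gate circuit simulated directly on a state vector:
# Hadamard on the first qubit, then CNOT (first qubit controls the second).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)   # |00>
state = np.kron(H, I) @ state                    # H on the first qubit
state = CNOT @ state                             # entangle the pair

# The result is the Bell state (|00> + |11>)/sqrt(2):
print(np.round(state.real, 3))   # [0.707 0.    0.    0.707]
```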
Circuit templating in quantum algorithm design is the practice of creating reusable, parameterised circuit structures that serve as blueprints for solving different computational problems. Instead of building each circuit from scratch, a template defines the overall structure of operations, with certain gates or parameters left adjustable. These templates can then be adapted to specific problems and optimised, often through hybrid quantum-classical methods, such as automatically generating circuit templates from high-level algorithm descriptions.
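A toy sketch of the idea in Python: the gate structure below is fixed, while the rotation angles are left as parameters to be filled in later. The gate labels (`"RY"`, `"CNOT"`) are illustrative tuples, not a real SDK API.

```python
import math

# A toy circuit "template": one parameterised rotation layer followed
# by a fixed entangling layer. Angles are supplied per instantiation.
def ansatz_template(num_qubits, angles):
    circuit = []
    for q in range(num_qubits):
        circuit.append(("RY", q, angles[q]))   # adjustable parameter
    for q in range(num_qubits - 1):
        circuit.append(("CNOT", q, q + 1))     # fixed structure
    return circuit

# The same template instantiated with two different parameter sets:
print(ansatz_template(3, [0.0, math.pi / 2, math.pi]))
print(ansatz_template(3, [0.1, 0.2, 0.3]))
```

In a hybrid workflow, a classical optimiser would repeatedly re-instantiate the template with updated angles.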
A classical programming language is one designed for use on conventional computers, which operate using machine code and logic circuits. These languages, such as C, Java, and Python, allow programmers to code at higher abstraction levels, writing instructions in a readable form which are then compiled down into machine code that hardware can execute.
Coherence is the ability of a quantum system to maintain a stable phase relationship between different states in a superposition. The phase relationship describes how the “waves” associated with quantum states line up with one another, much like peaks and troughs in water waves. For a qubit’s blend of states to be useful, the relative “timing” of the states—their phase relationship—must stay intact. When this relationship is preserved, quantum states can interfere, combining in ways that amplify some outcomes and cancel others.
Coherence allows qubits to behave as more than just classical bits. When coherence is preserved, the states in superposition can interfere with one another, resulting in the parallelism and speed-up central to many quantum algorithms. Without it, the delicate interference patterns disappear, and the system reverts to ordinary classical behaviour.
Because quantum systems are highly sensitive to their surroundings, coherence is fragile: interactions with the environment, such as thermal fluctuations or electromagnetic noise, disturb the phase relationship and destroy the quantum information. The coherence time measures how long a system can preserve its coherence before this breakdown occurs. Longer coherence times enable more complex and more reliable computations, and improving them is a key objective in quantum hardware design.
A compiler is a software tool that translates code written in a high-level programming language into machine code, the low-level binary instructions that a computer's hardware can execute. A compiler may translate step by step from one abstraction level to the next, or all the way down to machine code. Compilation happens before the program runs, allowing developers to code in languages that are easier to read and write while still producing instructions the machine can understand.
In quantum computing, concurrent classical functions are classical computations that run at the same time as a quantum circuit, rather than only before or after it. Traditionally, quantum programs are structured as static circuits: the circuit is designed in advance, executed on the quantum processor, and then the results are processed classically. Concurrent classical functions allow classical and quantum steps to be interwoven more tightly. For example, a classical function may process a mid-circuit measurement result and immediately decide which gate to apply next, or update parameters on the fly to steer the quantum computation dynamically. This concurrent execution is powerful because it allows feedback loops between classical and quantum processes, making quantum algorithms more flexible and efficient.
In classical computing, control flow is the order in which individual instructions in a program are executed. Instead of always running commands in a straight line from top to bottom, programs can make decisions or repeat steps based on conditions. For example, an if/else statement allows a program to take different paths depending on whether a condition is true or false. A while loop makes the program repeat a block of instructions until a condition is no longer met. These control flow structures make it possible to write programs that adapt to different inputs and situations.
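Both structures can be seen together in a few lines of Python: the function below branches with if/else and repeats with a while loop.

```python
# Control flow in a classical program: a while loop that repeats until
# a condition fails, and an if/else branch inside it.
def collatz_steps(n):
    steps = 0
    while n != 1:              # repeat until the condition is no longer met
        if n % 2 == 0:         # take one path if n is even...
            n = n // 2
        else:                  # ...and another if it is odd
            n = 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(6))   # 8  (6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1)
```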
In quantum computing, control flow is much more challenging. Traditional quantum programming frameworks are typically confined to static circuits where all the operations are fixed in advance. This limitation arises because qubits cannot be freely copied like classical bits, and measuring them collapses their state. These constraints make it difficult to introduce dynamic branching and looping directly into quantum algorithms.
A quantum control system is the combination of hardware and software used to manipulate and measure qubits in a quantum computer or experiment. It relies on classical electronics to generate highly precise signals that drive qubit operations and to process measurement feedback in real time. These systems enable complex tasks such as characterisation of the physical properties of the underlying processors, calibration of specific gate operations, error correction protocols, and gate sequences, while also suppressing noise and maintaining the stability of sensitive quantum hardware. Quantum control systems are typically tailored to the underlying qubit technology—for example, superconducting qubits, trapped ions, or photons each require different types of control signals and electronics.
D
Decoherence is the process by which a quantum system loses its quantum character, such as superposition, because of interactions with its environment. These interactions destroy the phase relationship between the different states of the system. As a result, the system no longer behaves like overlapping quantum waves that can interfere with each other, but instead like an ordinary object governed only by probabilities. (This concept explains why the everyday world appears classical: macroscopic objects are constantly bumping into air molecules, light, and other forms of environmental noise, which rapidly erase their quantum behaviour and leave only classical outcomes.)
In a quantum computer, decoherence ruins the delicate qubit states needed for computation, turning them into classical bits and making the quantum information unusable.
Decoherence can be caused by many things, including thermal fluctuations, electromagnetic interference, gate imperfections, and even the faint cosmic radiation that permeates space. Because it threatens the stability of qubits, controlling and reducing decoherence is one of the central challenges in building practical quantum computers.
The Dowling-Neven Law describes the doubly exponential growth of quantum computing power. The number of states that can be represented by n qubits is 2^n, so each additional qubit doubles the number of states the system can describe. At the same time, the number of qubits available is itself increasing exponentially over time, rather than growing linearly. Together, these trends create a doubly exponential growth pattern: because the qubit count grows exponentially, and the state space grows exponentially in the qubit count, the overall growth of quantum processing power can be expressed as 2^(2^n).
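The two exponentials can be checked with a few lines of Python: here the qubit counts themselves double from one entry to the next, and the number of representable states grows doubly exponentially as a result.

```python
# Number of states representable by n qubits, for qubit counts that
# double each "generation": the state space grows doubly exponentially.
for n in [1, 2, 4, 8]:
    print(n, 2 ** n)
# 1 2
# 2 4
# 4 16
# 8 256
```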
In classical computing, dynamic memory allocation is the process of assigning memory to a program while it is running, rather than fixing the memory size in advance. Instead of the programmer deciding ahead of time how much memory is needed, the program can request and release memory as it runs. For example, with static allocation, if a programmer declares a data structure with a fixed length in C, its size is set at compile time. If the programmer doesn’t know how big the data structure needs to be until the program is running, they can use dynamic allocation to get exactly as much memory as is needed.
Dynamic memory allocation is important because it makes programs more flexible, efficient and scalable. It allows them to handle data whose size is not known until runtime, reuse memory by freeing it when it is no longer needed, and adapt to the amount of memory available on a machine. Without it, programmers would have to guess memory needs in advance, which could waste memory if they overestimate or cause the program to fail if they underestimate.
While this type of memory management is standard in classical computing, it remains rare in quantum systems. Traditional quantum circuits pre-label and allocate qubits before execution begins, and most programming frameworks are designed around this static model. However, programs that use control flow may require qubits to be allocated and released on the fly. A quantum runtime that supports dynamic allocation would be able to track which qubits are free and which are in use during execution, and allow users to release qubits once they are no longer needed so they can be repurposed.
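As a sketch of what such a runtime might track, here is a hypothetical `QubitAllocator` in Python. The class and its methods are illustrative inventions, not an existing framework API; they only show the bookkeeping of free versus in-use qubits described above.

```python
# Hypothetical sketch of a runtime qubit allocator: hand out free
# qubits on demand and reclaim them when released.
class QubitAllocator:
    def __init__(self, total):
        self.free = list(range(total))   # qubit indices not in use
        self.in_use = set()

    def allocate(self):
        if not self.free:
            raise RuntimeError("no free qubits")
        q = self.free.pop()
        self.in_use.add(q)
        return q

    def release(self, q):
        self.in_use.remove(q)
        self.free.append(q)              # qubit can now be repurposed

alloc = QubitAllocator(3)
a = alloc.allocate()
b = alloc.allocate()
alloc.release(a)           # a is reusable again
c = alloc.allocate()       # may hand back the qubit just released
print(sorted(alloc.in_use))   # [1, 2]
```

A real quantum runtime would also have to reset a released qubit's state before reuse, which this sketch omits.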
E
Entanglement is a uniquely quantum phenomenon in which two or more particles share a single quantum state—a complete description of their measurable properties, such as spin or energy level—so that the state of each particle cannot be described independently of the others. This connection holds even when the particles are far apart: observing or measuring one immediately provides information about the state of the other. Entanglement creates strong connections between particles that have no parallel in classical physics. In quantum computing, entanglement is critical to expressing the full power of the technology. It enables qubits to work together in ways that are not possible classically, and can be harnessed in certain types of quantum logic gates, algorithms, and communication methods.
The error rate is the probability that a qubit's state changes in an unintended way during storage, manipulation, or measurement. Such errors can result from interactions with the external environment, operational imperfections (such as unintended bit flips or phase flips), measurement errors, or unwanted interactions between qubits. The rate reflects how often a quantum gate operation or qubit interaction produces an incorrect outcome, with lower rates indicating higher accuracy. Even small error rates can accumulate quickly during complex algorithms, making low error rates essential for reliable quantum computation.
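A quick calculation shows how fast small error rates compound. Assuming, for illustration, an independent error probability p per gate, the chance that an n-gate circuit runs entirely without error is (1 - p)^n:

```python
# How small per-gate error rates compound over a circuit of n gates.
p = 0.001                     # 0.1% error per gate
for n in [10, 1_000, 10_000]:
    print(n, round((1 - p) ** n, 3))
# 10 0.99
# 1000 0.368
# 10000 0.0
```

Even a 0.1% per-gate error rate leaves almost no chance of an error-free run once circuits reach tens of thousands of gates, which is why low error rates and error correction matter so much.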
Errors in quantum computing arise from the inherent fragility and sensitivity of quantum states to environmental noise, which can cause unintended changes to the state of qubits or the outcome of operations. Errors include bit-flip errors (where a qubit’s state changes from 0 to 1 or vice versa) and phase-flip errors (where the relative phase of a superposition is altered), as well as combinations of both. Errors can result from a variety of sources, including gate errors (incorrect operations applied to qubits), measurement errors (mistakes during the readout process), crosstalk (unwanted interactions between nearby qubits), noise (external interference such as electromagnetic signals), and decoherence (the loss of a qubit’s quantum state through interactions with its environment). Because qubits are so vulnerable to these effects, errors occur more frequently in quantum systems than in classical computers, making error correction a central challenge for practical quantum computing.
Quantum error correction is a method used to protect quantum information from errors caused by environmental disturbances, hardware imperfections, or unwanted interactions that can cause qubits to change state. Unlike classical error correction, which can rely on duplicating information, quantum error correction must use more sophisticated techniques because quantum information cannot be copied. It is essential for building fault-tolerant quantum computers capable of carrying out long and reliable computations.
Error correction overhead refers to the additional resources required to protect information from errors and preserve data integrity. In conventional computing, this takes the form of extra bits added to the original data, which allow errors to be detected and corrected but reduce the efficiency of storage or transmission. In quantum computing, it appears as the large number of physical qubits and operations needed to encode and stabilise a single logical qubit, ensuring the quantum information remains intact in the face of noise or interference. In both cases, stronger protection against errors requires greater overhead, while less overhead means weaker protection.
An error-corrected qubit is a logical qubit that is actively stabilised by quantum error correction. When error correction is continuously applied to the underlying physical qubits, the logical qubit remains stable, making the error-corrected qubit the realised form of a logical qubit.
F
Fault tolerance refers to a system’s ability to continue operating correctly even when errors occur, without losing the information it is processing or storing. Conventional computers achieve fault tolerance through built-in error correction, which allows them to detect and fix problems automatically. Quantum computers face a greater challenge because qubits are extremely sensitive: changes in their environment, such as tiny temperature shifts or electromagnetic disturbances, can cause them to change state in unintended ways.
Fault-tolerant quantum computing refers to the development of quantum computers that can perform long, complex computations reliably, even in the presence of noise and errors. It is considered a key milestone toward realising the full potential of quantum technologies. Researchers pursue fault-tolerant quantum computing through a combination of error correction codes and using more physical qubits to create stable logical qubits.
G
A logic gate is a basic building block of classical circuits. It takes one or more binary inputs (0 or 1) and produces a single binary output according to a logical rule. For instance, an AND gate outputs 1 only if all inputs are 1. By combining many logic gates, classical circuits can perform complex computations.
A quantum gate is the basic building block of quantum circuits. It changes the state of one or more qubits by applying a reversible mathematical transformation called a unitary operation. Because they are reversible, unitary operations preserve the total probability of all possible outcomes, ensuring that a quantum gate never loses information—unlike many classical logic gates, which discard information. For example, with an AND gate, the inputs (binary information of two bits) cannot always be recovered from the output (one bit). The reversible aspect of quantum gates means that when operations are performed on qubits, whether in simple gates or full algorithms, the process can always be reversed, allowing the system to return to an earlier qubit state. This reversibility speaks to the wider possibilities and freedoms of quantum computing, where the basic building blocks are inherently more flexible and powerful than their classical counterparts. Because quantum gates exploit quantum effects such as superposition and entanglement, when combined in circuits, they can perform complex computations beyond the limits of conventional computers.
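The contrast between reversible and irreversible gates can be demonstrated in a few lines of numpy: applying the X (quantum NOT) gate twice returns the original state, while a classical AND gate maps several different inputs to the same output and so cannot be undone.

```python
import numpy as np

# Reversibility: the X gate is unitary, so applying it twice
# restores the original state exactly.
X = np.array([[0, 1], [1, 0]])
state = np.array([1, 0])           # |0>
flipped = X @ state                # |1>
restored = X @ flipped             # back to |0>
print(restored)                    # [1 0]

# Irreversibility: three different AND inputs all give output 0,
# so the inputs cannot be recovered from the output.
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, a & b)
```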
In quantum computing, gate-level code refers to programs written directly in terms of the quantum gates that act on qubits. At this level, the programmer specifies the exact sequence of gate operations—such as Hadamard or CNOT—that make up a quantum circuit. Gate-level code is analogous to assembly language in classical computing: it provides fine-grained control over the hardware but can be difficult to write and maintain for larger algorithms. It requires a specialised skillset in quantum computing, including understanding how gates manipulate qubits and how circuits are constructed.
Grover’s algorithm is a quantum algorithm for searching through unstructured data. Classically, each item in a set must be checked one by one to find the correct result. Grover’s algorithm, by contrast, searches the entire set at once by creating a superposition of all the possible solutions and applying a mathematical function known as an oracle. The oracle “lights up” when it sees the correct answer. With this function in place, the algorithm amplifies the probability of finding the correct answer compared to the probability of choosing any other answer. As a result, on average, Grover’s algorithm will find the correct item in roughly the square root of the time required by a classical search.
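A minimal numpy sketch of one Grover iteration on a four-item search, with the oracle and diffusion steps written out directly on the state vector. At this tiny size, a single iteration makes the marked item certain:

```python
import numpy as np

# Grover's algorithm on 4 items, searching for the marked index 2.
n_items = 4
marked = 2

state = np.ones(n_items) / np.sqrt(n_items)   # uniform superposition

# Oracle: flip the sign of the marked item's amplitude ("lights up").
state[marked] *= -1

# Diffusion: reflect every amplitude about the mean amplitude,
# which amplifies the marked item and suppresses the rest.
state = 2 * state.mean() - state

probs = state ** 2
print(np.round(probs, 3))   # [0. 0. 1. 0.]
```

For N items the algorithm needs roughly sqrt(N) such iterations, which is the source of its quadratic speed-up over classical search.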
H
A hardware modality is the physical approach used to build and operate qubits in a quantum computer. Each modality has distinct advantages and challenges. The choice of modality affects factors such as qubit stability, error rates, scalability, and operating conditions, as well as the design and performance of a quantum system. No single modality has yet proven dominant, and multiple approaches are being actively explored in research and industry. Many researchers believe that different modalities may be better suited to different types of problems. Research into the most effective hardware modalities is central to scaling quantum computers and making them practical for real-world applications.
In quantum computing, a hardware testbed is an experimental setup where new computing devices and components can be built, tested, and evaluated under real operating conditions. Beyond testing hardware performance, a testbed also provides a platform for integrating hardware with software, allowing researchers to develop, refine, and validate software in a realistic environment. This tight coupling of hardware and software enables more effective testing, faster iteration, and deeper insights into how the two interact.
I
An integrated development environment is a software suite that provides programmers with the tools they need to write, test, and debug code efficiently. An IDE typically combines features such as a code editor, build automation tools, version control integration, and debugging support into a single application. By bringing these components together, IDEs are designed to maximise developer productivity and streamline the software development process.
Interference is a phenomenon where two waves combine and affect each other. When the waves are in sync (their peaks and valleys line up) they reinforce one another, producing a larger effect. This is called constructive interference. When the waves are out of sync (the peak of one meets the valley of another) they cancel each other out, reducing or eliminating the effect. This is called destructive interference. These interference effects occur in familiar waves, such as light, radio, sound, or ripples on water, and they also apply to quantum systems, where particles behave like waves.
In quantum computing, interference is not just a curiosity—it is what makes computation possible. Quantum algorithms are carefully designed so that the probability of measuring wrong answers is suppressed through destructive interference, while the probability of measuring correct answers is amplified through constructive interference. This ability to steer outcomes by harnessing interference is one of the key advantages that gives quantum computing its power over classical methods.
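Constructive and destructive interference can be reproduced numerically with complex amplitudes: in-phase amplitudes reinforce, while a phase shift of pi makes them cancel.

```python
import numpy as np

# Interference of two unit-amplitude waves, represented as complex numbers.
a = np.exp(1j * 0.0)                  # amplitude with phase 0
b_in_phase = np.exp(1j * 0.0)         # same phase
b_out_of_phase = np.exp(1j * np.pi)   # phase shifted by pi

# Probabilities are squared magnitudes of the summed amplitudes.
print(abs(a + b_in_phase) ** 2)                   # 4.0  (constructive)
print(round(abs(a + b_out_of_phase) ** 2, 10))    # 0.0  (destructive)
```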
L
In quantum computing, a library is a collection of software resources that help developers design, simulate, and run quantum programs. Libraries make it easier to work with quantum systems by providing ready-to-use functions, algorithms, and interfaces that simplify complex tasks, such as building quantum circuits or combining quantum and classical computation. Providing reusable building blocks and standardised interfaces saves users from having to build everything from scratch. While libraries offer convenience and consistency, they can also introduce limitations: they may restrict users to certain programming models, hardware backends, or algorithmic approaches. This makes them powerful for rapid development and testing but less flexible for performing highly customised or experimental work, developing practical applications, and solving real-world problems across different domains.
A logical qubit is a group of physical qubits encoded to protect against errors and enable longer, more reliable computations. Unlike a physical qubit, which refers to the actual hardware, a logical qubit is a higher-level abstraction used in fault-tolerant quantum computing. In classical computers, repetition codes repeat information multiple times to detect and correct errors. Because quantum information cannot be duplicated, a logical qubit instead distributes the information of a single qubit across multiple physical qubits, allowing errors in individual qubits to be detected and corrected without having to measure and disturb the original qubit’s quantum state. To do so, error correction schemes use physical qubits both as data qubits, which store the quantum information, and as syndrome qubits, which help identify and correct errors.
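The classical repetition code mentioned above fits in a few lines of Python: each bit is stored three times, so a single corrupted copy is outvoted.

```python
# A 3-bit classical repetition code: the simplest error-correcting code.
def encode(bit):
    return [bit, bit, bit]

def decode(copies):
    return 1 if sum(copies) >= 2 else 0   # majority vote

word = encode(1)        # [1, 1, 1]
word[0] = 0             # one copy is corrupted in transit
print(decode(word))     # 1  (the error is corrected)
```

Quantum codes cannot simply copy the state like this, which is why logical qubits rely on the more indirect data-qubit and syndrome-qubit construction instead.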
M
In quantum computing, all computations end with measurement, the process of extracting classical information from qubits. Unlike classical bits, which are always 0 or 1, qubits can exist in a superposition—a state where they simultaneously hold probabilities of being both 0 and 1. Measurement collapses this superposition, forcing the qubit into a definite state of either 0 or 1. Measuring one qubit in a system of multiple qubits affects the entire quantum system. Measurement is critical because it returns outputs as usable classical information, which is essential for solving problems. However, the act of measuring can also disturb qubits and introduce errors, making it an inherently imperfect process. Because measurement outcomes are probabilistic, experiments must often be repeated many times to build up reliable statistics. Measurement is also irreversible: once a qubit is measured, its prior superposition is lost.
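The probabilistic nature of measurement can be simulated with numpy: outcome probabilities are the squared magnitudes of the amplitudes, and repeated "shots" build up statistics around them.

```python
import numpy as np

# Measurement as sampling: squared amplitudes give the outcome
# probabilities, and many repeated runs approximate them.
rng = np.random.default_rng(seed=0)

amplitudes = np.array([1, 1]) / np.sqrt(2)   # equal superposition of 0 and 1
probs = np.abs(amplitudes) ** 2               # [0.5, 0.5]

shots = rng.choice([0, 1], size=10_000, p=probs)
print(probs)          # [0.5 0.5]
print(shots.mean())   # close to 0.5 over many repetitions
```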
Measurement-based quantum computing is a model of quantum computation where the main steps are carried out by making measurements on a large, highly entangled quantum state. The computer first prepares a special resource state, often called a cluster state. A resource state is a specially prepared quantum state that provides the starting point for performing a computation or protocol. It is called a “resource” because its unique properties — such as entanglement or specific correlations between qubits — can be consumed or used up as the computation proceeds. Once the cluster state is prepared, computation proceeds by measuring individual qubits in carefully chosen ways, with the measurement results guiding the next steps. Instead of applying a long sequence of quantum gates one after another, as in the circuit model, the computation here is effectively “front-loaded” into the preparation of the cluster state, with measurements driving the process that follows. This approach shifts much of the complexity into the initial state preparation and can, in principle, make some operations easier to implement or more resilient to certain errors. Measurement-based quantum computing is also referred to as one-way quantum computing because the original resource state is destroyed in the process of measuring it. As a result, it is an interesting model for pursuing secure quantum computing and advancements in quantum cryptography.
Mid-circuit measurement is the practice of measuring qubits during the computational process, rather than only at the end. In a typical quantum circuit, all qubits are measured after the computation is finished, and the results are returned to a classical computer for post-processing. If further processing is needed, a new circuit must be constructed and queued for execution — a slow, iterative process that can substantially increase the program’s runtime.
With mid-circuit measurement, selected qubits are measured while the circuit is still running. When a qubit is measured, its quantum state collapses into a definite value, and that outcome can be used immediately to influence the rest of the computation. For example, the result of a measurement can determine whether a specific gate is applied to another qubit, allowing the circuit to branch conditionally as it runs. Where the hardware supports it, mid-circuit measurements let classical logic operate alongside quantum computation because the circuit does not terminate after the measurement is taken.
Mid-circuit measurements can reduce the number of qubits required, make certain algorithms more efficient, and are essential for error correction and quantum communication protocols. However, they also introduce technical challenges: every measurement adds opportunities for error, increases circuit complexity, and demands precise hardware and control systems to keep noise within acceptable levels.
Moore’s Law is the observation that the number of transistors on an integrated circuit tends to double approximately every two years, leading to a corresponding increase in computing power and efficiency. First stated by Gordon Moore in 1965, it was originally a projection based on the pace of progress in semiconductor manufacturing. For decades, this trend accurately described the exponential growth of computing performance, driving advances across the technology industry.
N
Network I/O (input/output) during computation refers to data transfers between a program and an external network. When a program needs data that isn't immediately available, its computation is paused while the data transfer occurs. These transfers can often take more time than the processing itself. This pause creates a performance bottleneck, slowing down the program. However, modern systems are designed for efficiency. While the specific computation waits for the I/O operation to complete, the CPU (Central Processing Unit) will typically execute other tasks to avoid being idle.
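This overlap of computation and I/O waiting can be sketched with Python's asyncio, using `asyncio.sleep` as a stand-in for a network round trip: two simulated transfers wait concurrently, so the total time is roughly one delay, not two.

```python
import asyncio
import time

# While one task waits on a (simulated) network transfer, the event
# loop runs the other, so the two waits overlap instead of adding up.
async def fetch(name, delay):
    await asyncio.sleep(delay)     # stands in for a network round trip
    return f"{name} done"

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(fetch("a", 0.2), fetch("b", 0.2))
    elapsed = time.perf_counter() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results)             # ['a done', 'b done']
print(elapsed < 0.35)      # True: roughly 0.2s in total, not 0.4s
```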
Neutral atoms are atoms with no net electric charge, meaning they have an equal number of protons and electrons. In quantum computing, scientists exploit the internal energy levels of neutral atoms to use them as qubits. They trap individual atoms in a vacuum chamber and cool them with lasers to near absolute zero (-273.15°C or 0 kelvin), reducing their motion. Other lasers, known as optical tweezers, precisely arrange the atoms into specific configurations and can replace misplaced or missing atoms. Optical tweezers also allow scientists to move neutral-atom qubits during computation without disturbing their quantum states. To carry out computation, lasers manipulate the atoms’ energy levels in controlled ways that create interactions between neighbouring atoms. Scientists then use imaging techniques to measure the neutral-atom qubits.
Neutral atoms offer advantages such as scalability to large qubit systems, the ability to hold their quantum state for relatively long periods, flexible qubit interactions, and high gate fidelity. Furthermore, while neutral atoms need to be cooled with lasers to ultra-cold temperatures, they don’t need cryogenic refrigerators like other types of qubits. However, neutral atoms pose technically demanding challenges such as requiring highly precise laser cooling, trapping, and manipulation.
In quantum computing, noise refers to unwanted disturbances that affect qubits and the operations performed on them. It can arise from many sources, including environmental factors such as temperature fluctuations or electromagnetic interference; imperfections in the control systems used to operate quantum gates; and unintended interactions between qubits. Because the quantum information stored in qubits is fragile, noise can disrupt their state and corrupt the information. As noise builds up, the likelihood of an algorithm producing the correct result decreases — and if the noise is too high, the quantum computer becomes unusable.
P
A physical qubit is the hardware-based implementation of a qubit in a quantum computer. Physical qubits can be realised in many different forms, depending on the hardware modality, such as trapped ions, neutral atoms, or photons. They are the raw qubits built directly from physical systems, in contrast to logical qubits, which are error-corrected, conceptual units built from many physical qubits working together. Their states are manipulated using laser pulses, microwave pulses, or other control signals. Because physical qubits are highly susceptible to environmental noise and prone to errors, they are too fragile to perform long or complex computations on their own, which has motivated significant research into quantum error correction and the development of logical qubits.
In quantum computing, pulse control refers to the use of precisely shaped electromagnetic signals—called pulses—to manipulate qubits. These pulses are carefully timed and tuned to implement quantum gates by changing the state of the qubits. The type of pulse depends on the underlying technology: microwave pulses, for instance, are used for superconducting qubits. Pulse control gives researchers fine-grained access to the hardware, allowing them to optimise performance, correct for imperfections, and explore new gate designs beyond the standard high-level programming abstractions.
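A "precisely shaped" pulse typically means a smooth envelope modulating a carrier tone. The NumPy sketch below illustrates the idea with a Gaussian envelope on a microwave-frequency carrier; the duration, width, and carrier frequency are illustrative values chosen for the example, not the specifications of any real hardware.

```python
import numpy as np

# Illustrative values only (not real hardware parameters): a 100 ns
# pulse window with a Gaussian envelope on a 5 GHz microwave carrier.
t = np.linspace(0.0, 100e-9, 1001)          # time axis, in seconds
sigma = 15e-9                               # envelope width
envelope = np.exp(-((t - t.mean()) ** 2) / (2 * sigma**2))
carrier = np.cos(2 * np.pi * 5e9 * t)       # 5 GHz carrier tone
pulse = envelope * carrier                  # the shaped control signal

# The envelope peaks at 1.0 at the centre of the window; in practice,
# its amplitude and duration are calibrated so the pulse rotates the
# qubit's state by a desired angle.
print(round(float(envelope.max()), 3))
```

In real systems the envelope shape, amplitude, and timing are what pulse control tunes to implement a specific gate.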
In quantum computing, a pulse sequence is an ordered set of control pulses used to manipulate qubits and carry out quantum operations. Instead of describing computation only in terms of abstract quantum gates, a pulse sequence specifies the timing, frequency, and shape of the pulses that directly drive the hardware. This programmable approach gives developers and researchers fine-grained control over qubit behaviour, making it possible to design custom gates, compensate for hardware imperfections, and optimise performance for specific algorithms.
In quantum computing, pulse-level programming is the practice of controlling qubits by directly specifying the pulses that drive their operations. Unlike gate-level programming, where programmers work with abstract quantum gates, pulse-level programming exposes the hardware controls themselves. This approach allows researchers to fine-tune qubit behaviour, optimise gate performance, and experiment with customised control schemes, but it also requires detailed knowledge of the underlying hardware.
Q
In quantum computing, quantum advantage refers broadly to the point at which a quantum computer can solve a problem more efficiently than the best known classical methods running on the best classical hardware. However, the term has no universal definition, and it is not always used consistently.
Most often, quantum advantage refers to the experimental demonstration of a quantum algorithm solving a real-world problem faster than any classical algorithm could. In this sense, it contrasts with quantum supremacy, which is usually defined as a quantum computer solving any problem (even a contrived one with no practical value) that no classical computer can solve in a reasonable time. In other words, supremacy is about demonstrating feasibility while advantage is about solving useful problems. Unlike supremacy, which can be achieved without error correction, demonstrations of quantum advantage are generally expected to rely on quantum error correction to ensure the results are reliable and commercially meaningful.
In some contexts, the term covers theoretical speedups not yet demonstrated in practice. In others, it also covers benefits beyond speed alone, such as enhanced precision or more efficient compression of classical data. For industry and business, the most relevant sense of quantum advantage is pragmatic: when a quantum computer solves a problem in significantly less time, energy, or money than classical methods, or when it enables solutions that would otherwise be out of reach.
A quantum algorithm is a set of step-by-step operations designed to run on a quantum computer. Using quantum effects such as superposition and entanglement, they can perform certain computations more efficiently than conventional computers, and in some cases solve problems that are otherwise intractable. Quantum algorithms are expressed as sequences of quantum gates organised into circuits. Designing them requires specialised knowledge of both quantum mechanics and computer science, and implementing them on current quantum hardware is challenging because of errors and the number of qubits required, which has led to research into error correction, fault tolerance, and hardware optimisation. Quantum algorithms are central to the promise of quantum computing, with potential applications in areas such as cryptography, logistics, finance, and medicine. Well-known examples include Shor’s algorithm, which can factor large numbers efficiently, and Grover’s algorithm, which accelerates database search.
Quantum communication is the use of quantum states to transmit information securely. In classical communication, information is carried by bits that can be copied, intercepted, and read without detection. Quantum communication instead uses qubits, which lose their quantum state when eavesdroppers attempt to measure or copy the information they carry. The most developed application today is quantum key distribution (QKD), where two parties can create a shared secret key for encryption. Any attempt to intercept the key disturbs the quantum states and reveals the presence of an eavesdropper, providing a level of security guaranteed by the laws of physics.
Quantum mechanics is a fundamental branch of physics that describes the behaviour of matter and energy at the atomic and subatomic scale. It explains how particles can exist in multiple states at once (superposition), display both wave-like and particle-like properties, and influence each other instantly across distances (entanglement). These rules differ sharply from the laws of classical physics and often feel counterintuitive compared with everyday experience, which makes them challenging to grasp, but it is by exploiting these uniquely quantum effects that quantum computing becomes possible.
A quantum processing unit (QPU) is a common industry term for the core of a quantum computer: a system made up of physical qubits and the apparatus used to control them, such as lasers, microwave generators, and other supporting electronics. A QPU is where the qubits reside and where computation takes place. It is often described as the “brain” of a quantum computer. Like the central processing unit (CPU) in a classical computer, a QPU requires significant supporting infrastructure — and the nature of that infrastructure can vary widely depending on the underlying hardware design.
QPUs execute individual operations much more slowly than CPUs. However, they can perform certain tasks with far greater efficiency, so overall computation time is reduced for specific classes of problems. In the future, the use of QPUs might more closely mirror that of today’s graphics processing units (GPUs): most computation will continue to be performed by CPUs, while specialised processors — GPUs and QPUs among them — will be used for targeted tasks that benefit from their unique capabilities. The potential applications of QPUs lie in problems that are classically intractable, such as combinatorial optimisation, quantum chemistry simulations, large-scale classical data compression, factorisation, and random number generation.
Unlike CPUs, QPUs are not standardised in design. Different hardware modalities each bring their own strengths, weaknesses, and engineering challenges. QPU performance also lacks a single standardised metric. The first point of comparison is usually the number of qubits, but true capability is also shaped by other factors, including how reliably qubits maintain their quantum state over time, their connectivity, gate speeds, and error rates.
A qubit is the basic unit of information in quantum computing, analogous to the bit in classical computing. While a classical bit can only take the value 0 or 1, a qubit can exist in a superposition, meaning it holds probabilities of being measured as either 0 or 1. Before measurement, these probabilities are captured in a mathematical description called a wave function. When measured, the wave function collapses and the qubit takes on a definite value of 0 or 1, producing classical information that can be used in computation. Qubits are typically realised using physical systems that exhibit quantum behaviour, such as photons, electrons, trapped ions, or superconducting circuits. The state space of a collection of qubits grows exponentially with the number of qubits, which is a key source of quantum computing’s power.
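The relationship between a qubit's wave function and its measurement probabilities can be shown in a few lines of NumPy. This is a textbook illustration, not hardware code: the state is a two-component complex vector of amplitudes, and the probability of each outcome is the squared magnitude of the corresponding amplitude.

```python
import numpy as np

# A qubit's state is a 2-component complex vector: the amplitudes for
# the basis states |0> and |1>. Here, an equal superposition of both.
state = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)

# Measurement probabilities are the squared magnitudes of the
# amplitudes (the Born rule), and they always sum to 1.
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5]
```

Measuring this qubit yields 0 or 1, each with probability one half; the superposition itself is destroyed by the measurement.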
R
A runtime environment is the support system a program needs while it is running. It provides the behind-the-scenes services that allow code to execute smoothly, such as managing memory, handling input and output, and giving the program access to hardware resources like storage, networks, or displays.
The runtime environment does not translate high-level code into machine code—that’s the role of a compiler—but it provides the setting necessary for that program to run once it has been translated. It supplies common functionality, such as garbage collection, system libraries, and error handling, so that programs don’t need to re-implement those tasks themselves. In quantum computing, a quantum runtime environment plays a similar role by providing the infrastructure needed to let quantum programs run across different hardware backends. It often handles tasks such as scheduling, error mitigation, and integration with classical computing resources.
S
Shor’s algorithm is a quantum algorithm that, in theory, can efficiently factor large composite numbers into their prime factors. It offers an exponential speedup over the best-known classical factoring methods, making it one of the most famous and influential results in quantum computing.
The key idea is to turn factoring into the problem of finding a hidden periodic structure in numbers. In practice, this means looking for a repeating cycle (the “period”) in the remainders that appear when successive powers of a number are divided by the integer being factored. Quantum computers can uncover this cycle efficiently because they can place many possibilities into superposition, use interference effects to highlight the repeating pattern, and then extract it through measurement. (As an analogy, Shor’s algorithm is like detecting a steady rhythm of drumbeats hidden within a jumble of sounds: once the rhythm is found, it reveals the prime building blocks of the number.)
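The period-finding step described above can be sketched classically for a toy case. Note the caveat in the code: this brute-force loop is exactly the part that takes exponential time classically and that a quantum computer performs efficiently; the small numbers (N = 15, base a = 7) are a standard textbook example.

```python
from math import gcd

def find_period(a: int, N: int) -> int:
    """Brute-force the period r of a^x mod N: the smallest r > 0 with
    a^r mod N == 1. This is the step a quantum computer does
    efficiently; done classically like this, it takes exponential time."""
    r, value = 1, a % N
    while value != 1:
        r += 1
        value = (value * a) % N
    return r

# Factor N = 15 using the base a = 7. The remainders of 7^x mod 15
# cycle as 7, 4, 13, 1, 7, 4, ... so the period is r = 4, and
# gcd(a^(r/2) - 1, N) and gcd(a^(r/2) + 1, N) recover the factors.
N, a = 15, 7
r = find_period(a, N)            # 4
p = gcd(pow(a, r // 2) - 1, N)   # 3
q = gcd(pow(a, r // 2) + 1, N)   # 5
print(r, p, q)                   # 4 3 5
```

Once the period is known, only classical arithmetic (greatest common divisors) is needed to extract the prime factors.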
Because many cryptographic schemes, such as RSA, rely on the difficulty of prime factorisation, the discovery of Shor’s algorithm highlighted a potential vulnerability in classical encryption. Its proposal marked a turning point in quantum computing, demonstrating that quantum algorithms could solve problems long thought intractable for classical computers and motivating the development of quantum-resistant cryptography.
Although the algorithm is theoretically efficient, practical implementation remains out of reach. Running Shor’s algorithm at scale requires large numbers of qubits and robust error correction, which today’s hardware cannot yet support. Nevertheless, ongoing efforts to implement it have driven advances in quantum error correction, algorithm design, and hardware development.
Spin qubits encode quantum information in the spin of electrons, a fundamental property often described as a kind of intrinsic angular momentum. The electron’s spin can be oriented “up” or “down,” forming a natural two-level system that corresponds to the 0 and 1 of a classical bit. A spin qubit can also exist in a superposition, where it simultaneously holds probabilities of being measured as “up” or “down” until the measurement takes place. Spin qubits are commonly implemented in solid-state systems using quantum dots—microscopic regions in a semiconductor that act as tiny “boxes” confining electrons. These engineered dots are fabricated on a substrate, the semiconductor material that serves as the base for building the device. Above each quantum dot sits a gate electrode (a control terminal similar to a transistor’s gate), and by applying voltages to it, scientists can trap one or more electrons in the dot and manipulate their spins with electromagnetic fields. Much research focuses on silicon-based spin qubits, but spin qubits have also been implemented in other semiconductor materials, including gallium arsenide, germanium, and graphene.
Spin qubits offer several advantages, including long lifetimes, fast gate speeds, and the ability to leverage existing semiconductor fabrication methods, making them relatively low-cost and potentially scalable to very large numbers of qubits on a single chip. Their small size and partial shielding from the substrate provide additional stability. However, they face challenges such as sensitivity to charge noise, fabrication defects, difficulties integrating with classical control electronics, and the need for magnets, often immersed in the low-temperature region of a cryogenic environment. Scaling spin qubits into practical fault-tolerant quantum computers remains a hurdle, and unlike some other modalities, there are currently no publicly available spin-qubit-based quantum computers.
Superconducting qubits are implemented on specialised chips, typically made of silicon or sapphire, using microfabrication techniques similar to those used in classical processors. They are tiny electrical circuits made from superconducting materials, which exhibit zero electrical resistance when cooled to millikelvin temperatures. This is drastically different from normal conductors, where electrons flow, collide with atoms and defects, and lose energy, creating electrical resistance.
These tiny circuits typically include components called Josephson junctions, which are made by placing a very thin layer of insulating material between two superconducting materials. When electrons tunnel across this barrier at low temperatures, the circuit takes on discrete energy levels, meaning it can only occupy certain fixed-energy states, not a continuous range. With careful circuit design, the two lowest of these states can be isolated and used to encode a qubit’s 0 and 1 states. Because they behave like atoms, superconducting qubits are often called artificial atoms.
Scientists manipulate them with microwave signals to perform quantum operations. Superconducting qubits are among the most widely developed hardware modalities because they can be manufactured using existing chip-making methods and execute operations quickly. However, they are difficult to scale because of the short lifetimes of their quantum states, the need for cooling to near absolute zero, and their sensitivity to fabrication errors. These errors, introduced during manufacturing, can affect the electrical circuits forming qubits, reducing their performance or reliability.
In quantum mechanics, any quantum system can be described as a combination of possible states, known as basis states. For a qubit, the basis states are 0 and 1. Unlike a classical bit, which can only be 0 or 1 at any given time, a qubit can exist in a superposition—a blend, or linear combination, of both states at once. In other words, the qubit is described by weights attached to 0 and to 1, which determine how likely a measurement is to yield each outcome. This blend can be represented as a point on the Bloch sphere, which shows the true power of a qubit: it can point anywhere on the globe, unlike a classical bit, which can only point to the north or south pole.
When qubits are measured, they lose their superposition and collapse into either 0 or 1. Because measuring a qubit only gives one outcome at a time, quantum algorithms must be run many times to build up the full probability distribution and reveal the correct answer.
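This measure-many-times process can be simulated in a few lines. The sketch below applies a Hadamard gate (the standard gate for creating an equal superposition) to a qubit starting in the 0 state, then draws repeated "shots" from the resulting probability distribution; the shot count and random seed are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# The Hadamard gate maps |0> to an equal superposition of |0> and |1>.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
state = H @ np.array([1.0, 0.0])   # start in |0>
probs = np.abs(state) ** 2         # [0.5, 0.5]

# Each "shot" collapses the superposition to one classical outcome;
# many repetitions reveal the underlying probability distribution.
shots = rng.choice([0, 1], size=1000, p=probs)
print(np.bincount(shots))          # roughly 500 of each
```

A single shot gives only 0 or 1; it is the histogram over many shots that reconstructs the distribution a quantum algorithm encodes in its output state.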
Superposition is a central feature of quantum computing and a major source of its potential for solving complex problems more efficiently than classical computers. It allows a quantum computer to process many possibilities at once; combined with interference, this built-in parallelism can yield exponential speedups for certain types of tasks. Superposition is the foundation of well-known quantum algorithms, such as Shor’s and Grover’s. However, maintaining superposition is difficult: interaction with the surrounding environment can quickly destroy it, leaving the qubit in a definite classical state. This fragility makes error correction an essential step toward building practical quantum computers.
T
A topological qubit is a type of qubit designed to store quantum information in a way that is inherently protected from many types of errors. These qubits are based on exotic quasiparticles called Majorana zero modes, which can exist in certain materials at very low temperatures. Quasiparticles are not fundamental particles like electrons but rather collective behaviours of many particles that act as if they were single particles. For instance, vibrations of atoms in a solid can behave like particles called phonons.
Unlike conventional qubits, where information is stored in discrete energy levels of a circuit or atom, topological qubits encode information in the global properties of the system’s wavefunction, which theoretically makes them more robust than other qubits. For example, in a rope that’s twisted into a knot, the knot itself is a global property — small changes like tugging the rope don’t undo the knot. Likewise, for topological qubits, the information is stored in a similar way: it depends on the overall configuration of the system, so small local disturbances don’t destroy it, making topological qubits more stable and resistant to errors and environmental noise than most other qubit types.
Manipulating topological qubits involves moving these quasiparticles around one another in precise patterns (an operation called braiding), which changes the qubit state in a way that is resilient against small local disturbances. The main advantage of topological qubits is their intrinsic error protection, which could dramatically reduce the need for the complex error-correction schemes required by superconducting or trapped-ion qubits. This advantage could make large-scale quantum computers more feasible in the long run. However, topological qubits are very challenging to realise experimentally. They require materials with special properties (e.g. superconductivity combined with a nontrivial electronic structure) and extremely low temperatures, and reliably creating and manipulating the necessary quasiparticles has not yet been achieved at scale. As a result, while they offer potentially transformative advantages in stability and error resistance, they remain a longer-term, high-risk technology compared with more mature qubit platforms such as superconducting qubits or trapped ions.
In quantum computing, scientists use charged atomic particles (ions) as qubits by “trapping” them in a vacuum with an ion trap, which creates electromagnetic fields that confine the ions. Each ion serves as a qubit, with different electronic states corresponding to the 0 and 1 states. Using lasers, scientists cool the ions to near absolute zero to minimise their motion and precisely control their behaviour. With lasers or microwave pulses, they manipulate the ions’ internal states and interactions to perform quantum operations.
Trapped-ion systems have played a major role in demonstrating fundamental quantum algorithms and advancing quantum computing from theory to experiment. They offer advantages such as long qubit lifetimes (the ability to hold their quantum state for extended periods) and high-fidelity operations. However, scaling to large numbers of ions and integrating the supporting technology into compact, practical systems remain challenging.
A Turing-complete language is a programming language that can express any computation a real-world computer can perform, given enough time and memory. To be Turing complete, a language must support basic features such as conditional branching (e.g. if/else statements), loops or recursion, and the ability to work with memory in a flexible way. Most general-purpose programming languages, including Python, Java, and C, are Turing-complete.
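The basic features named above can all appear in a single short function. The example below (the Collatz step-counting function, chosen only as an illustration) uses a loop, conditional branching, and state held in memory that is updated as the program runs.

```python
def collatz_steps(n: int) -> int:
    # Demonstrates the ingredients of Turing completeness in one place:
    # a loop, conditional branching, and flexible use of memory.
    steps = 0
    while n != 1:          # loop
        if n % 2 == 0:     # conditional branching (if/else)
            n //= 2
        else:
            n = 3 * n + 1
        steps += 1         # state stored and updated in memory
    return steps

print(collatz_steps(6))  # 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1: 8 steps
```

Any language that can express constructs like these (and address unbounded memory in principle) can, with enough time and space, simulate any other computation.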