QUICK FACTS
Created Jan 0001
Status Verified Sarcastic
Type Existential Dread

BQP



Computational complexity class of problems

This article is about the computational complexity class. For other uses, see BQP (disambiguation).

A rather optimistic visualization of BQP’s purported place among its probabilistic complexity brethren (ZPP, RP, co-RP, BPP, PP), all of which generalise the foundational P within the grand scheme of PSPACE. One might charitably call it a ‘map,’ though it conspicuously omits the crucial detail that the strictness of these containments remains, to the eternal exasperation of theoreticians, entirely unproven. A lovely set of nesting dolls, if you prefer your dolls to be inscrutable.

In the labyrinthine corridors of computational complexity theory, bounded-error quantum polynomial time, mercifully abbreviated as BQP, emerges as the class encompassing all those decision problems that a sufficiently advanced quantum computer can solve within a span of polynomial time. The “bounded-error” part is where the fun begins, or perhaps, where the pragmatism sets in: it demands that the algorithm’s probability of error is steadfastly kept at or below 1/3 for any given input instance. The number 1/3 is nothing sacred; any constant strictly below 1/2 would do, since repetition can drive the error down at will, as we shall see. It is simply a convenient margin that permits reliable computation despite the inherent probabilistic nature of quantum mechanics. Essentially, BQP stands as the quantum counterpart, the more esoteric sibling, to the classical complexity class known as BPP.

To qualify a decision problem as a member of BQP, one must demonstrate the existence of a viable quantum algorithm – that is, an algorithm designed to execute on a quantum computer – capable of resolving the problem with what academics quaintly term “high probability.” Crucially, this algorithm must also be guaranteed to complete its execution within a polynomial time frame. When such an algorithm is run, the expectation is that it will yield the correct solution to the decision problem with a probability of at least 2/3. This specific probability bound of 2/3 (or conversely, an error rate of 1/3) is not a hard limit on accuracy; rather, it’s a minimum standard that, through techniques like repetition and majority voting, can be amplified to arbitrarily high levels of confidence, as we’ll reluctantly delve into shortly.

BQP algorithm (1 run)          Correct answer
Answer produced                Yes        No
Yes                            ≥ 2/3      ≤ 1/3
No                             ≤ 1/3      ≥ 2/3

A single run, a flip of the quantum coin, if you will. The odds are, technically, in your favor. Just don’t bet the universe on it.

The table above illustrates the outcome of a single execution of a BQP algorithm. If the correct answer to the decision problem is ‘Yes’, the algorithm will correctly output ‘Yes’ with a probability of at least 2/3, and incorrectly output ‘No’ with a probability of at most 1/3. Conversely, if the true answer is ‘No’, the algorithm will output ‘No’ correctly with a probability of at least 2/3, and make an error by outputting ‘Yes’ with a probability of at most 1/3. This inherent probabilistic nature is what distinguishes BQP from deterministic classes like P.

However, a single run with a 1/3 error margin might not instill much confidence in, say, a financial transaction or a critical scientific calculation. This is where the magic of repetition and statistical amplification comes into play. By executing the quantum algorithm multiple times and adopting a majority-vote strategy for the final answer, the probability of error can be drastically reduced.

BQP algorithm (k runs)         Correct answer
Answer produced                Yes              No
Yes                            > 1 − 2^(−ck)    < 2^(−ck)
No                             < 2^(−ck)        > 1 − 2^(−ck)

for some constant c > 0

And here we have the human penchant for reassurance. Run it enough times, and the error probability shrinks faster than your will to live, for some constant c greater than zero. Exponential decay, a beautiful thing for error reduction, less so for your patience.

When the algorithm is executed k times, and the majority outcome is selected as the final answer, the probability of correctness increases dramatically. After k runs, the probability of the algorithm producing the correct answer is at least 1 − 2^(−ck), where c is some positive constant. This means the error probability decays exponentially with k, quickly becoming astronomically small. For example, with k set to a modest constant such as 100, the error probability becomes practically negligible for most applications. This robust error reduction mechanism is a cornerstone of probabilistic complexity classes and ensures that a problem solvable in BQP is, for all practical intents and purposes, solvable reliably.
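This amplification is easy to verify numerically. The sketch below (the function name and the choice of odd k are ours, purely for illustration) computes the exact probability that a k-run majority vote errs, starting from the worst allowed single-run error of 1/3:

```python
from math import comb

def majority_error(p_err, k):
    """Exact probability that a majority vote over k independent runs is
    wrong, when each run errs with probability p_err (k odd, so no ties)."""
    return sum(comb(k, i) * p_err**i * (1 - p_err)**(k - i)
               for i in range(k // 2 + 1, k + 1))

# Worst-case single-run error of 1/3, as in the definition of BQP:
for k in (1, 11, 101):
    print(k, majority_error(1/3, k))
```

The decay is exponential in k, in line with the 1 − 2^(−ck) bound in the table above.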

Definition

BQP is formally understood through the lens of languages that are recognized by specific uniform families of quantum circuits operating with bounded error. This isn’t just a fancy way of saying “quantum programs”; it implies a structured, scalable approach to quantum computation. A language L is definitively within BQP if, and only if, there exists a polynomial-time uniform family of quantum circuits, denoted as $\{Q_n \colon n \in \mathbb{N}\}$. This family must adhere to several precise conditions:

  • For every natural number n ∈ N, the circuit Q_n is designed to accept n qubits as its input, processing them to ultimately produce a single bit as its output. This output bit represents the ‘Yes’ or ‘No’ answer to the decision problem for an input of size n.
  • If an input string x is a member of the language L (meaning the answer to the decision problem for x is ‘Yes’), then the probability that the circuit Q_{|x|} (where |x| is the length of x) outputs 1 (representing ‘Yes’) must be at least 2/3. This ensures a strong likelihood of correct acceptance.
  • Conversely, if an input string x is not a member of the language L (meaning the answer to the decision problem for x is ‘No’), then the probability that the circuit Q_{|x|} outputs 0 (representing ‘No’) must also be at least 2/3. This guarantees a strong likelihood of correct rejection.

Alternatively, for those who prefer their theoretical constructs a bit more mechanical, BQP can also be defined using the concept of quantum Turing machines. In this model, a language L is a member of BQP if and only if there exists a polynomial-time quantum Turing machine that accepts L while maintaining an error probability of at most 1/3 for all instances. The equivalence between these two models – quantum circuits and quantum Turing machines – underscores the robustness of the BQP definition, much like their classical counterparts.

It’s worth noting, with a slight sigh of existential weariness, that the specific choice of 1/3 for the error probability in the definition, much like in other “bounded error” probabilistic classes, is largely conventional, even arbitrary. One could, in theory, define BQP with an error rate of 1/4 or 1/100, and the underlying complexity class would remain precisely the same. This seemingly casual flexibility is due to the powerful error reduction techniques available. As previously alluded to, by simply repeating the quantum algorithm a constant number of times and then employing a majority vote among the results, one can achieve any desired probability of correctness that is strictly less than 1. This process is rigorously supported by the Chernoff bound, a statistical tool that quantifies how quickly the probability of deviation from the expected value decreases with the number of trials. Thus, the class itself remains invariant whether you allow an error as high as $1/2 - n^{-c}$ (where c is any positive constant and n is the length of the input, so the advantage over random guessing shrinks only polynomially with input size) or demand an error as minuscule as $2^{-n^c}$ (errors that decrease exponentially). The specific number is just a convenient hook for the definition; the underlying computational power remains undisturbed.
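The same exercise shows why the thresholds are interchangeable. Below, a single-run error as large as $1/2 - n^{-1}$ is pushed back under the conventional 1/3 with $O(n^2)$ repetitions; the value of n and the repetition count are illustrative choices of ours:

```python
from math import comb

def majority_error(p_err, k):
    """Exact k-run majority-vote error probability (k odd, so no ties)."""
    return sum(comb(k, i) * p_err**i * (1 - p_err)**(k - i)
               for i in range(k // 2 + 1, k + 1))

n = 10
p_err = 0.5 - 1 / n       # advantage over guessing shrinks with n
k = 3 * n * n + 1         # O(n^2) repetitions, chosen odd
print(majority_error(p_err, k))   # comfortably below 1/3 again
```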

Relationship to other complexity classes

Unsolved problem in computer science

What is the relationship between $\mathsf{BQP}$ and $\mathsf{NP}$?


Ah, the perennial human struggle with fundamental questions. The relationship between BQP and NP remains a glorious, frustrating enigma. Perhaps one day, when the universe finally decides to make sense, we’ll have an answer. Until then, it’s just another entry on the ever-growing list of things we don’t quite grasp.

The suspected relationship of BQP to other problem spaces is, to put it mildly, a topic of intense academic speculation and fervent debate. The diagram frequently accompanying discussions of BQP illustrates these suspected inclusions, though it’s crucial to remember that “suspected” is the operative word here.

BQP, by its very nature, is defined for quantum computers, embodying the computational capabilities of these exotic machines. Its direct classical counterpart, the complexity class that represents problems solvable by probabilistic Turing machines with bounded error, is BPP. It’s a natural comparison, a way to gauge the quantum advantage, if any. Intriguingly, much like P and BPP, BQP exhibits the property of being “low” for itself, which is to say, BQP^BQP = BQP. Informally, this seemingly esoteric fact simply means that if you have a BQP machine, giving it an “oracle” (a magical black box that can solve any BQP problem instantly) doesn’t actually increase its overall computational power beyond what it already possesses. This is because polynomial-time algorithms, by their very definition, are closed under composition. If a polynomial-time algorithm invokes other polynomial-time algorithms as subroutines, the resulting algorithm, no matter how many layers deep, still operates within polynomial time. It’s an elegant closure property, if you appreciate such things.

The established relationships with classical complexity classes paint an intriguing, if incomplete, picture. We know with certainty that BQP contains both P and BPP. This is hardly surprising; a quantum computer should, at the very least, be able to simulate its classical and probabilistic ancestors. Furthermore, BQP is known to be contained within AWPP, a somewhat more obscure counting class defined in terms of GapP functions. Beyond that, BQP finds itself nested within PP (Probabilistic Polynomial Time, a class allowing error arbitrarily close to 1/2, as long as the correct answer is more likely than the incorrect one) and, consequently, also within PSPACE (Polynomial Space), the class of problems solvable by a Turing machine using a polynomial amount of memory space. Ultimately, all these are contained within EXP (Exponential Time).

Indeed, BQP is even “low” for PP, meaning that a PP machine gains absolutely no advantage from being able to instantly solve any problem in BQP. This particular observation offers a subtle, yet potent, hint at the possible power dynamics between these two classes, suggesting that PP might already be so powerful as to render any BQP oracle moot. The known inclusions can be summarized as follows:

$\mathsf{P\subseteq BPP\subseteq BQP\subseteq AWPP\subseteq PP\subseteq PSPACE\subseteq EXP}$

Given that the monumental question of whether $\mathsf{P} \stackrel{?}{=} \mathsf{PSPACE}$ remains stubbornly unresolved – a question that has haunted computer scientists for decades – proving strict inequality between BQP and any of the classes mentioned above is, unsurprisingly, presumed to be a task of considerable difficulty. It’s like trying to map the precise boundaries of a cloud while the wind constantly reshapes it.

The relationship between BQP and NP (Nondeterministic Polynomial Time, the class of problems whose ‘yes’ answers can be quickly verified) is, perhaps, the most tantalizing and fiercely debated unknown. In May 2018, the computer scientists Ran Raz of Princeton University and Avishay Tal of Stanford University published a paper that presented compelling evidence that, relative to an oracle, BQP was not contained within the polynomial hierarchy (PH). More precisely, they proved that there exists an oracle A such that $\mathsf{BQP}^{\mathrm{A}} \nsubseteq \mathsf{PH}^{\mathrm{A}}$. In a highly informal sense, this demonstration involves endowing both PH and BQP with an identical, supplemental computational capability (the oracle) and then verifying that BQP equipped with this oracle ($\mathsf{BQP}^A$) can achieve computational feats that PH with the same oracle ($\mathsf{PH}^A$) cannot. While an oracle separation has been rigorously proven, it is absolutely vital to understand that this does not constitute a direct proof that BQP is not contained in PH in the “real” (non-oracle) world. An oracle separation provides a strong intuition, a direction of suspicion, but it does not definitively resolve whether or not the underlying complexity classes are identical or distinct. It’s a hint, not a confession.

For many years, the problem of Fourier Sampling has been widely suspected to reside firmly within BQP but outside the classical polynomial hierarchy. Recent conjectures have further bolstered this line of reasoning, providing evidence that a related problem, Fourier Checking, also belongs to BQP without being contained within the polynomial hierarchy. This conjecture, if proven true, carries significant implications: it suggests that problems solvable by quantum computers within polynomial time could potentially be classified as inherently harder than even NP-complete problems, which represent the pinnacle of classical intractability (assuming P ≠ NP, naturally). This potential for BQP problems to lie outside of P—a suspicion that remains unverified, not least because separating BQP from P would in particular separate P from PSPACE—serves as a powerful illustration of the profound, and perhaps unsettling, computational power that quantum computing might eventually unleash, fundamentally altering our understanding of what constitutes a “solvable” problem.

Finally, for those who enjoy adding more exotic ingredients to their computational stew, introducing the concept of postselection to BQP yields the complexity class PostBQP. This seemingly minor modification has a surprisingly dramatic effect: PostBQP is, in fact, precisely equivalent to PP. It’s a curious equivalence that highlights the immense power granted by the ability to “postselect” on measurement outcomes, even if such a capability is purely theoretical for practical computation.

A complete problem for Promise-BQP

Promise-BQP is a specific variant of BQP that deals with promise problems—problems where the input is guaranteed to belong to one of two disjoint sets, simplifying the decision process. It is defined as the class of promise problems that can be solved by a uniform family of quantum circuits, effectively bringing them within the purview of BQP. The quest for “completeness proofs” in complexity theory often focuses on this version of BQP, as it simplifies some of the technicalities inherent in defining a “hardest” problem. Similar to the well-known notion of NP-completeness and other “complete” problems, we define a complete problem for Promise-BQP as one that is itself within Promise-BQP and to which every other problem in Promise-BQP can be efficiently reduced in polynomial time. It’s the computational equivalent of finding a master key for an entire class of locks.

APPROX-QCIRCUIT-PROB

The APPROX-QCIRCUIT-PROB problem has been identified as complete for efficient quantum computation. Specifically, the version presented below is complete for the Promise-BQP complexity class, rather than the total BQP complexity class (for which, it must be noted, no complete problems are currently known, a minor inconvenience for those seeking absolute closure). The completeness of APPROX-QCIRCUIT-PROB makes it an invaluable tool for constructing proofs that elucidate the intricate relationships between BQP and various other complexity classes. It acts as a Rosetta Stone, allowing us to translate the difficulty of any BQP problem into the difficulty of this specific, canonical problem.

Given a meticulously detailed description of a quantum circuit C that operates on n qubits and comprises m gates—where m is itself a polynomial function of n, and each individual gate acts on at most two qubits—along with two real numbers, $\alpha, \beta \in [0,1]$, such that $\alpha > \beta$, the task is to distinguish definitively between the following two cases:

  • Case 1: Upon measuring the first qubit of the final state C|0⟩^⊗n (which is the result of applying circuit C to an initial state of n qubits all set to |0⟩), the outcome |1⟩ is yielded with a probability of $\geq \alpha$.
  • Case 2: Upon measuring the first qubit of the final state C|0⟩^⊗n, the outcome |1⟩ is yielded with a probability of $\leq \beta$.

Here, the “promise” on the inputs is crucial: the problem explicitly does not specify the required behavior if an instance falls outside these two strictly defined cases. This simplifies the problem by removing the need to handle ambiguous or intermediate probability outcomes.
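As a concrete (and entirely toy) instance, the following sketch simulates a two-qubit circuit with a hand-rolled state-vector routine and checks which case of APPROX-QCIRCUIT-PROB it lands in. The helper `apply_1q` and the circuit itself are our own illustrative constructions, not part of the formal definition:

```python
from math import sqrt

def apply_1q(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` (0 = first qubit) of an n-qubit
    state vector, indexed with qubit 0 as the most significant bit."""
    new = [0j] * (1 << n)
    shift = n - 1 - target
    for idx, amp in enumerate(state):
        bit = (idx >> shift) & 1
        for out in (0, 1):
            new[(idx & ~(1 << shift)) | (out << shift)] += gate[out][bit] * amp
    return new

H = [[1 / sqrt(2), 1 / sqrt(2)], [1 / sqrt(2), -1 / sqrt(2)]]
X = [[0, 1], [1, 0]]

n = 2
state = [0j] * (1 << n)
state[0] = 1 + 0j                       # start in |00>
state = apply_1q(state, X, 0, n)        # flip the first qubit
state = apply_1q(state, H, 1, n)        # superpose the second

# Probability that measuring the first qubit yields |1>:
p1 = sum(abs(a) ** 2 for i, a in enumerate(state) if (i >> (n - 1)) & 1)
alpha, beta = 2 / 3, 1 / 3
print("Case 1" if p1 >= alpha else "Case 2" if p1 <= beta else "promise violated")
```

Here p1 equals 1, so a hypothetical oracle would answer Case 1; an instance with p1 strictly between β and α falls outside the promise, and the problem demands nothing of it.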

Claim: Any BQP problem reduces to APPROX-QCIRCUIT-PROB.

Proof: Suppose, for a moment, that we possess an oracle, an algorithm A, capable of solving APPROX-QCIRCUIT-PROB. That is, given a quantum circuit C operating on n qubits, and the two distinct probabilities $\alpha, \beta \in [0,1]$ where $\alpha > \beta$, algorithm A can reliably distinguish between the two cases outlined above. Our objective is to demonstrate that, armed with this oracle A, we can then solve any problem belonging to BQP. We achieve this by setting our specific probability thresholds to $\alpha = 2/3$ and $\beta = 1/3$, aligning perfectly with the definition of BQP.

Consider any language $L \in \mathsf{BQP}$. By the very definition of BQP, there exists a family of polynomial-time uniform quantum circuits $\{Q_n \colon n \in \mathbb{N}\}$ with the following properties: for any input state $|x\rangle$ consisting of $n$ qubits, if $x \in L$, then the probability of $Q_n(|x\rangle)$ yielding 1 upon measurement is at least $2/3$. Conversely, if $x \notin L$, then the probability of $Q_n(|x\rangle)$ yielding 0 is at least $2/3$.

Now, let’s fix an arbitrary input $|x\rangle$ of $n$ qubits for which we want to determine membership in $L$, and let $Q_n$ be the corresponding quantum circuit from our family. The first step is to construct a preparatory circuit, let’s call it $C_x$. This circuit $C_x$ has the specific function of transforming the initial state $|0\rangle^{\otimes n}$ (all $n$ qubits in the |0⟩ state) into our desired input state $|x\rangle$. That is, $C_x|0\rangle^{\otimes n} = |x\rangle$. This can be achieved quite straightforwardly by “hardwiring” the input: apply an X (NOT) gate to each qubit whose corresponding bit of x is 1, flipping it from |0⟩ to |1⟩, and leave the remaining qubits untouched.

Next, we combine these two circuits: $C_x$ followed by $Q_n$. Let $C'$ denote this composite circuit, so $C' = Q_n C_x$. When we apply $C'$ to the initial state $|0\rangle^{\otimes n}$, we get $C'|0\rangle^{\otimes n} = Q_n C_x |0\rangle^{\otimes n} = Q_n |x\rangle$. This means the final state produced by $C'$ when starting from all zeros is precisely the same as the final state produced by $Q_n$ when starting from $|x\rangle$.

Finally, we must address how the output of $Q_n$ is obtained. Typically, the result of $Q_n$ is derived by measuring several qubits and then applying some classical logic gates to these measurement outcomes. However, quantum mechanics offers a powerful principle known as the Deferred Measurement Principle. This principle states that any measurement can be deferred until the very end of a quantum computation, or even moved from an intermediate step to a later one, without altering the final probabilities of outcomes. Utilizing this, we can effectively reroute the internal structure of the combined circuit $C'$ such that the ultimate output of $Q_n$ (i.e., whether $x \in L$ or $x \notin L$) is encoded entirely in the measurement outcome of just the first qubit of the final state. This transformed circuit becomes our desired circuit C for the APPROX-QCIRCUIT-PROB problem.

Now, we simply run our hypothetical oracle algorithm $A(C)$ with $\alpha = 2/3$ and $\beta = 1/3$. By the definition of BQP , for our input $x$, we will either fall into the first case (where the probability of measuring |1⟩ on the first qubit is $\geq 2/3$, indicating acceptance) or the second case (where the probability is $\leq 1/3$, indicating rejection). Therefore, if $A(C)$ tells us we are in Case 1, we conclude $x \in L$; if it tells us we are in Case 2, we conclude $x \notin L$. This demonstrates that any language $L \in \mathsf{BQP}$ can be reduced to APPROX-QCIRCUIT-PROB, thus proving its completeness for Promise-BQP .
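The composition step of the proof can be checked with the same toy machinery: prepare $|x\rangle$ from $|0\rangle^{\otimes n}$ with NOT gates, append a stand-in for $Q_n$ (a Hadamard on every qubit here, chosen purely for illustration, not the actual algorithm circuit), and confirm that $C'|0\rangle^{\otimes n} = Q_n|x\rangle$:

```python
from math import sqrt

def apply_1q(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` of an n-qubit state vector
    (qubit 0 is the most significant bit of the index)."""
    new = [0j] * (1 << n)
    shift = n - 1 - target
    for idx, amp in enumerate(state):
        bit = (idx >> shift) & 1
        for out in (0, 1):
            new[(idx & ~(1 << shift)) | (out << shift)] += gate[out][bit] * amp
    return new

H = [[1 / sqrt(2), 1 / sqrt(2)], [1 / sqrt(2), -1 / sqrt(2)]]
X = [[0, 1], [1, 0]]

n, x = 2, "10"                          # illustrative input string x

# C' = Q_n C_x on |00>: first C_x (X gates where x has a 1) ...
state = [0j] * (1 << n)
state[0] = 1 + 0j
for q, bit in enumerate(x):
    if bit == "1":
        state = apply_1q(state, X, q, n)
# ... then the stand-in Q_n (Hadamard on every qubit):
for q in range(n):
    state = apply_1q(state, H, q, n)

# Reference: the stand-in Q_n applied directly to the basis state |x>
ref = [0j] * (1 << n)
ref[int(x, 2)] = 1 + 0j
for q in range(n):
    ref = apply_1q(ref, H, q, n)

assert all(abs(a - b) < 1e-9 for a, b in zip(state, ref))
print("C'|00> equals Q_n|x>")
```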

BQP and EXP

Let’s begin with a containment that, while still requiring formal proof, feels intuitively more manageable: demonstrating that $\mathsf{BQP} \subseteq \mathsf{EXP}$. To establish this, it is sufficient to show that APPROX-QCIRCUIT-PROB, our chosen BQP-complete problem, itself resides within EXP. If the hardest problem in BQP can be solved in exponential time, then, by definition, all problems in BQP must also be solvable in exponential time.

Claim: $\text{APPROX-QCIRCUIT-PROB} \in \mathsf{EXP}$

Proof: The underlying idea here is remarkably straightforward, almost disappointingly so, if you appreciate the complexities of quantum mechanics. Given that we are operating with the immense power of exponential time on a classical computer, we can, in principle, simulate the behavior of any quantum circuit gate by gate to determine its final state.

More formally, let C be a quantum circuit composed of m gates, where m is polynomial in n, the number of qubits the circuit acts upon. Let’s denote the initial state of the system as $|\psi_0\rangle = |0\rangle^{\otimes n}$, representing all n qubits in the |0⟩ state. We then let $|\psi_i\rangle$ represent the quantum state of the system after the i-th gate in the circuit has been applied to the preceding state, $|\psi_{i-1}\rangle$.

Each quantum state $|\psi_i\rangle$ can be accurately represented in a classical computer as a unit vector within the complex vector space $\mathbb{C}^{2^n}$. This space has a dimensionality of $2^n$ because an n-qubit system exists as a superposition of $2^n$ possible classical states. Similarly, each quantum gate, being a unitary transformation, can be represented by a $2^n \times 2^n$ matrix with complex entries.

To simulate the circuit, we simply apply these matrix transformations sequentially. We start with the vector for $|\psi_0\rangle$. Then, to find $|\psi_1\rangle$, we multiply the matrix for the first gate by the vector for $|\psi_0\rangle$. We continue this process: $|\psi_i\rangle = G_i |\psi_{i-1}\rangle$, where $G_i$ is the matrix corresponding to the i-th gate. Each matrix-vector multiplication for an $N \times N$ matrix and an $N$-dimensional vector takes $O(N^2)$ time. In our case, $N = 2^n$, so each gate application takes $O((2^n)^2) = O(2^{2n})$ time. Since there are m gates in the circuit, the total time required to compute the final state $|\psi_m\rangle$ is $O(m \cdot 2^{2n})$. Given that m is polynomial in n, this entire process scales as $2^{O(n)}$, placing the computation firmly within exponential time . Once the final state $|\psi_m\rangle$ is known, determining the probability that the first qubit is measured to be 1 is a straightforward calculation from the amplitudes of the vector, which also takes polynomial time in $2^n$.

This implies that $\text{APPROX-QCIRCUIT-PROB} \in \mathsf{EXP}$.

It is important to note, and perhaps to lament, that this classical simulation algorithm also demands $2^{O(n)}$ space to store the complex vectors representing the quantum states and the matrices representing the gates. Storing a $2^n$-dimensional vector of complex numbers requires $2^n \cdot (\text{size of complex number})$ space, which is exponential. While this confirms the containment within EXP, it highlights the sheer impracticality of simulating large-scale quantum computers on classical hardware. However, as we shall see in the following section, there are ways to improve upon the space complexity, albeit at a significant cost in time. It’s a trade-off, as always.
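The brute-force simulation described above can be sketched directly: build each gate as a full $2^n \times 2^n$ matrix (tensoring with the identity on the untouched qubits) and apply it by matrix-vector multiplication. The three-qubit circuit below is an arbitrary example of ours:

```python
from math import sqrt

def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def matvec(M, v):
    """O(N^2) matrix-vector product; here N = 2^n."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

I2 = [[1, 0], [0, 1]]
H = [[1 / sqrt(2), 1 / sqrt(2)], [1 / sqrt(2), -1 / sqrt(2)]]
X = [[0, 1], [1, 0]]

n = 3
gates = [
    kron(H, kron(I2, I2)),   # Hadamard on the first qubit
    kron(I2, kron(I2, X)),   # NOT on the last qubit
]

psi = [0j] * (1 << n)
psi[0] = 1 + 0j                       # |000>
for G in gates:                       # O(2^{2n}) work per gate
    psi = matvec(G, psi)

# Probability the first qubit measures |1> (1/2 here, from the Hadamard):
p1 = sum(abs(a) ** 2 for i, a in enumerate(psi) if (i >> (n - 1)) & 1)
print(p1)
```

The exponential cost is visible in the data structures themselves: each gate matrix already has $2^{2n}$ entries before any arithmetic happens.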

BQP and PSPACE

The containment $\mathsf{BQP} \subseteq \mathsf{PSPACE}$ is a more nuanced affair than its EXP counterpart, necessitating a more sophisticated approach than brute-force simulation. This proof often leverages a technique introduced by the visionary physicist Richard Feynman for his path integral formulation of quantum mechanics: the “sum of histories.” This method provides a powerful framework for understanding how quantum amplitudes evolve and can be used to demonstrate that APPROX-QCIRCUIT-PROB, and therefore all of BQP, can be solved using only a polynomial amount of memory space.

Sum of Histories Tree

Consider a quantum circuit C composed of m sequential gates, denoted $g_1, g_2, \ldots, g_m$. Each of these gates is assumed to originate from a universal gate set (meaning any quantum computation can be built from them) and acts on at most two qubits at a time. This is a common and practical assumption in quantum circuit design.

To truly grasp the essence of the “sum of histories” concept, it’s helpful to visualize the evolution of a quantum state within the circuit as a sprawling tree. The root of this tree represents the initial state, typically $|0\rangle^{\otimes n}$ (all n qubits in the |0⟩ state). Each node in the tree at a given level corresponds to a possible intermediate quantum state. From any node, there are $2^n$ possible children nodes, each representing one of the $2^n$ basis states in $\mathbb{C}^{2^n}$ that the system could transition to after the application of the next gate.

The “weight” on an edge connecting a node at the j-th level (representing state $|x\rangle$) to a node at the $(j+1)$-th level (representing state $|y\rangle$) is given by the complex amplitude $\langle y|g_{j+1}|x\rangle$. This amplitude quantifies the probability amplitude of transitioning from state $|x\rangle$ to state $|y\rangle$ when the gate $g_{j+1}$ is applied. The “transition amplitude” of an entire root-to-leaf path—which constitutes a specific “history” of the quantum computation—is the product of all the weights (amplitudes) on the edges along that path. To determine the overall amplitude of the final state being, say, $|\psi\rangle$, we must sum up the amplitudes of all distinct root-to-leaf paths that ultimately terminate at a node representing $|\psi\rangle$. This summation of amplitudes, rather than probabilities, is a hallmark of quantum mechanics.

More formally, for the quantum circuit C, its sum over histories tree is a tree of depth m. It has one level for each gate $g_i$, in addition to the initial root level, and each node has a branching factor of $2^n$.

  • Definition—A history: A history is formally defined as a complete path within the sum of histories tree. We represent a history by a sequence of states: $(u_0 = |0\rangle^{\otimes n} \rightarrow u_1 \rightarrow \cdots \rightarrow u_{m-1} \rightarrow u_m = x)$, where $u_0$ is the initial state and $u_m$ is some final basis state $x$.

  • Definition—Edge and history amplitude: Let $u, v \in \{0,1\}^n$ be two computational basis states. The amplitude of the edge connecting $|u\rangle$ to $|v\rangle$ at the j-th level of the sum over histories tree is $\alpha_j(u \rightarrow v) = \langle v|g_j|u\rangle$. For any complete history $h = (u_0 \rightarrow u_1 \rightarrow \cdots \rightarrow u_{m-1} \rightarrow u_m)$, its transition amplitude, denoted $\alpha_h$, is the product of all individual edge amplitudes along that path: $\alpha_h = \alpha_1(|0\rangle^{\otimes n} \rightarrow u_1)\alpha_2(u_1 \rightarrow u_2)\cdots \alpha_m(u_{m-1} \rightarrow x)$.

  • Claim—The transition amplitude of a history is computable in polynomial time.

    Proof: Each quantum gate $g_j$ in the circuit acts on only a limited number of qubits, typically one or two. This locality is key. We can decompose any such gate $g_j$ into the form $g_j = I \otimes \tilde{g}_j$, where $I$ is the identity operator acting on the qubits not involved in $g_j$, and $\tilde{g}_j$ is a unitary operator acting on the specific one or two qubits that $g_j$ targets. Without loss of generality, we can assume these are the first two qubits. Therefore, the amplitude $\langle v|g_j|u\rangle$ can be expressed as: $\langle v|g_j|u\rangle = \langle v_1,v_2|\tilde{g}_j|u_1,u_2\rangle \langle v_3,\cdots,v_n|u_3,\cdots,u_n\rangle$. The second term, $\langle v_3,\cdots,v_n|u_3,\cdots,u_n\rangle$, is either 0 or 1, representing whether the unaffected qubits remain unchanged. The first term, $\langle v_1,v_2|\tilde{g}_j|u_1,u_2\rangle$, involves only a $4 \times 4$ matrix operation (for two qubits), which can be computed in constant time. Thus, each individual edge amplitude $\alpha_j(u \rightarrow v)$ can be computed in polynomial time (specifically, time dependent on n for indexing, but constant for the matrix multiplication itself). Since the total number of gates m is polynomial in n, the product of m such amplitudes—the transition amplitude of an entire history—is also computable in polynomial time with respect to n.

  • Claim—For a final state $C|0\rangle^{\otimes n}=\sum_{x\in\{0,1\}^n}\alpha_x|x\rangle$, the amplitude $\alpha_x$ for any $x \in \{0,1\}^n$ can be computed by summing over histories: $\alpha_x = \sum_{h=(|0\rangle^{\otimes n}\rightarrow u_1\rightarrow \cdots \rightarrow u_{m-1}\rightarrow |x\rangle)}\alpha_h$.

    Proof: The amplitude $\alpha_x$ for a specific final basis state $|x\rangle$ is given by the inner product $\alpha_x = \langle x|C|0\rangle^{\otimes n}$. Substituting the circuit C as a sequence of gates, we get $\alpha_x = \langle x|g_m g_{m-1} \cdots g_1|0\rangle^{\otimes n}$. The result follows directly from the repeated insertion of the identity operator, $I = \sum_{s \in \{0,1\}^n} |s\rangle\langle s|$, between each pair of consecutive gates. For example, between $g_1$ and $g_2$, we insert $I$, then between $g_2$ and $g_3$, and so on. Expanding this out, each term in the resulting summation corresponds precisely to an $\alpha_h$, the transition amplitude of a specific history $h = (|0\rangle^{\otimes n} \rightarrow u_1 \rightarrow \cdots \rightarrow u_{m-1} \rightarrow |x\rangle)$. This fundamental property connects the circuit model directly to the path integral formulation.

  • Claim—$\text{APPROX-QCIRCUIT-PROB} \in \mathsf{PSPACE}$

    The crucial insight for proving this containment lies in how we compute the sum of histories. To calculate a single amplitude $\alpha_x$, we don’t need to store all possible histories simultaneously. Instead, we can employ a recursive, depth-first search-like algorithm that traverses the sum of histories tree. At any given point in this computation, we only need to store the current path from the root to the current node (i.e., one history at a time), along with some auxiliary workspace variables for calculations. Each state in a history requires n bits to represent (for the n qubits). A history consists of m+1 such states (the initial state plus one per gate application). Therefore, storing a single history requires $O(nm)$ bits of space. Since m is polynomial in n, the total space required to store one history is $O(n \cdot \text{poly}(n)) = O(\text{poly}(n))$. This is a polynomial amount of space. The recursive algorithm iterates through all $2^{mn}$ possible histories, computing each $\alpha_h$ in polynomial time and adding it to a running total for $\alpha_x$. While the time complexity of this approach is enormous ($O(m \cdot 2^{mn})$ to calculate a single amplitude, as we’ll see), the space complexity remains bounded polynomially.

    Therefore, in polynomial space, we can compute $\alpha_x$ for any specific final state $x$. To solve APPROX-QCIRCUIT-PROB, we need to calculate the probability that the first qubit is measured to be 1. This probability is given by $\sum_{x \text{ s.t. } x_1=1} |\alpha_x|^2$, where $x_1$ denotes the first bit of $x$. We can iterate through all $2^{n-1}$ possible final states where the first qubit is 1, compute $|\alpha_x|^2$ for each using the sum of histories method (which takes polynomial space), and sum these values. Each $|\alpha_x|^2$ is a real number, and carrying each amplitude to polynomially many bits of precision suffices to distinguish the promise thresholds, so accumulating the running total still requires only polynomial space. Thus, the entire process of calculating the total probability and comparing it to $\alpha$ and $\beta$ can be performed within polynomial space.
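    The whole polynomial-space procedure can be sketched for a toy circuit. The example below (a hypothetical two-qubit Bell-state circuit; the matrices, basis ordering, and function names are illustrative assumptions) computes $\Pr[\text{first qubit} = 1]$ by iterating over final states and, for each, over one history at a time, so that only a single path plus a running total ever sits in memory:

    ```python
    import itertools
    import math

    # Hypothetical two-qubit circuit: H on qubit 0, then CNOT (qubit 0 controls
    # qubit 1). Basis index is 2*q0 + q1, so qubit 0 is the high-order bit.
    s2 = 1 / math.sqrt(2)
    H_I = [[s2, 0, s2, 0],
           [0, s2, 0, s2],
           [s2, 0, -s2, 0],
           [0, s2, 0, -s2]]        # Hadamard on qubit 0, identity on qubit 1
    CNOT = [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 0, 1],
            [0, 0, 1, 0]]          # flips qubit 1 iff qubit 0 is 1

    def amplitude(gates, x, dim):
        """<x| g_m ... g_1 |0>, visiting one history at a time: O(nm) space."""
        m = len(gates)
        total = 0
        for hist in itertools.product(range(dim), repeat=m - 1):
            path = (0,) + hist + (x,)
            amp = 1
            for i, g in enumerate(gates):
                amp *= g[path[i + 1]][path[i]]
            total += amp
        return total

    def prob_first_qubit_one(gates, n):
        dim = 2 ** n
        # Sum |alpha_x|^2 over the final states whose first qubit is 1.
        return sum(abs(amplitude(gates, x, dim)) ** 2
                   for x in range(dim) if (x >> (n - 1)) & 1)

    p = prob_first_qubit_one([H_I, CNOT], 2)   # Bell state: expect 1/2
    assert abs(p - 0.5) < 1e-12
    ```

    The loop structure mirrors the argument exactly: exponential time (every history is enumerated), polynomial space (only one history and the accumulator are stored).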

    This implies that $\text{APPROX-QCIRCUIT-PROB} \in \mathsf{PSPACE}$.

    It’s worth emphasizing, with a knowing smirk, that while this sum-over-histories algorithm dramatically improves the space complexity compared to the direct simulation for the $\mathsf{BQP} \subseteq \mathsf{EXP}$ proof, the trade-off is a colossal increase in time complexity. This algorithm takes an astonishing $O(m \cdot 2^{mn})$ time to calculate a single amplitude, a truly exponential explosion. So, you either pay with time or with space. The universe, it seems, always demands a price.

BQP and PP

A similar line of reasoning, leveraging the powerful “sum-over-histories” argument, demonstrates the containment of BQP within PP (Probabilistic Polynomial Time), a result due to Adleman, DeMarrais, and Huang (1997). The core idea is that the acceptance probability of a quantum circuit can be written as the difference of two exponentially large sums of history amplitudes, and comparing two such sums is precisely the kind of counting question that defines PP . So, $\mathsf{BQP} \subseteq \mathsf{PP}$.

P and BQP

The relationship between P (Polynomial Time, the class of problems solvable efficiently by classical deterministic computers) and BQP is relatively straightforward, at least in one direction. We know unequivocally that $\mathsf{P} \subseteq \mathsf{BQP}$. This inclusion is almost trivially true: any classical circuit, which can solve problems in P , can be perfectly simulated by a quantum circuit . A quantum computer can simply perform the operations of a classical circuit using reversible quantum gates, effectively behaving as a classical computer. So, if a problem is classically efficient, it’s also quantumly efficient.
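The standard route from classical to reversible computation goes through the Toffoli gate. A minimal sketch (the code and example are illustrative, not from the article): the Toffoli gate is a reversible three-bit gate, and fixing its target bit to 1 makes it compute NAND, which is universal for classical circuits.

```python
# The Toffoli gate: a reversible classical (and quantum) gate that flips
# the target bit c iff both control bits a and b are 1.
def toffoli(a, b, c):
    return a, b, c ^ (a & b)

# With the target initialized to 1, the output bit is NAND(a, b).
for a in (0, 1):
    for b in (0, 1):
        _, _, out = toffoli(a, b, 1)
        assert out == 1 - (a & b)   # out = NOT (a AND b)
```

Since NAND is universal for classical computation, composing Toffoli gates lets a quantum circuit reproduce any polynomial-size classical circuit with only polynomial overhead, which is the content of $\mathsf{P} \subseteq \mathsf{BQP}$.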

The more interesting, and far more contentious, aspect is the conjecture that BQP contains problems that are genuinely hard for classical computers, i.e., problems lying outside of P . This claim is, of course, inherently unprovable at present: since $\mathsf{BQP} \subseteq \mathsf{PSPACE}$, exhibiting even a single BQP problem outside of P would separate P from PSPACE , itself a long-standing open problem. However, there is substantial evidence supporting this conjecture, hinting at the true power of quantum computation:

  • Integer factorization : This is arguably the most famous example, thanks to Shor’s algorithm . Classically, factoring large numbers is thought to be computationally intractable, forming the basis of widely used cryptographic schemes like RSA. Shor’s algorithm demonstrates that a quantum computer can factor integers in polynomial time , a feat believed to be impossible for classical computers. This alone is enough to send shivers down the spine of any cryptographer.
  • Discrete logarithm : Another cornerstone of modern cryptography, the discrete logarithm problem, also falls victim to Shor’s algorithm . A quantum computer can solve this problem efficiently, rendering many elliptic curve cryptography schemes vulnerable. It’s a double blow to classical security.
  • Simulation of quantum systems: This might seem circular, but it’s profoundly important. Simulating the behavior of complex quantum systems (e.g., molecules, materials) on classical computers quickly becomes exponentially intractable as the system size increases. A universal quantum simulator , essentially a quantum computer itself, is inherently well-suited to simulate other quantum systems efficiently. This capability promises breakthroughs in chemistry, materials science, and drug discovery that are beyond the reach of classical computation.
  • Approximating the Jones polynomial at certain roots of unity: The Jones polynomial is a mathematical invariant used in knot theory . While calculating it exactly is #P-hard , approximating it at specific “roots of unity” (complex numbers) is a problem known to be in BQP (indeed, BQP-complete) and widely believed to be hard for classical computers, even to approximate. It’s a more niche, but equally compelling, example of quantum advantage.
  • Harrow-Hassidim-Lloyd (HHL) algorithm : This algorithm provides a quantum speedup for solving systems of linear equations. It is not universally faster than all classical methods — its advantage depends on the matrix being sparse and well-conditioned, and on the solution being needed only as a quantum state rather than as an explicit vector — but under those conditions it runs in time polylogarithmic in the dimension of the system, where classical algorithms need time at least linear in it. It’s a promising tool for quantum machine learning and data analysis.
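The factoring example above has a purely classical half worth seeing concretely. In Shor's algorithm, the quantum computer supplies only the multiplicative order $r$ of a random base $a$ modulo $N$; everything else is classical number theory. A minimal sketch (the numbers $N = 15$, $a = 7$ are illustrative; here the order is found by brute force, which a classical machine can only afford for tiny $N$ — that brute-force step is exactly what the quantum period-finding subroutine replaces):

```python
import math

def order(a, N):
    """Smallest r > 0 with a^r = 1 (mod N) — the quantum subroutine's job."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_from_order(a, N):
    """Classical post-processing of Shor's algorithm."""
    r = order(a, N)
    if r % 2 == 1:
        return None                  # odd order: retry with another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                  # trivial square root of 1: retry
    return math.gcd(y - 1, N), math.gcd(y + 1, N)

# Order of 7 mod 15 is 4, and 7^2 mod 15 = 4, so gcd(3, 15) and gcd(5, 15)
# yield the factors.
assert factor_from_order(7, 15) == (3, 5)
```

For a random base $a$, the even-order, nontrivial-square-root case occurs with probability at least 1/2, so a few repetitions suffice; the entire classical part runs in polynomial time once $r$ is known.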

These examples collectively paint a picture of BQP as a class that genuinely expands the boundaries of what is computationally feasible, offering a glimpse into a future where problems currently considered intractable might, in fact, yield to the peculiar logic of the quantum realm.

See also