What benchmarks have been released to date that compare quantum computing with traditional computing?
I’ll save you a few minutes: after 25+ years of seeing hyped stories about quantum computing, not much of consequence has been accomplished. In a few years’ time there could be a breakthrough…but perhaps not.
What Grok Says (March 2026)
Several notable benchmarks and demonstrations have been released over the years that directly compare quantum computing performance to classical (traditional) computing. These often focus on tasks designed to highlight potential quantum advantages, such as exponential speedups, though many remain task-specific, probabilistic, or debated in terms of “practical” utility. Claims of quantum supremacy (a quantum device solving a problem infeasible for classical computers in reasonable time) or advantage (outperforming classical in some metric) have evolved, with ongoing refinements in classical simulations narrowing some early gaps.
Here are the key released benchmarks and milestones to date (as of early 2026), roughly chronologically:
- Google’s Sycamore (2019): The seminal demonstration used a 53-qubit superconducting processor to perform Random Circuit Sampling (RCS), completing the task in ~200 seconds. Google claimed a classical supercomputer would need ~10,000 years. Published in Nature. Later classical improvements (e.g., better tensor network algorithms) reduced the classical estimate significantly (e.g., to seconds on the Frontier supercomputer by some accounts), but the original remains a foundational RCS benchmark for quantum advantage.
- Xanadu’s Borealis (2022): Photonic quantum computer demonstrated Gaussian Boson Sampling (GBS) on 216 squeezed modes, claiming quantum advantage over classical simulation.
- USTC’s Jiuzhang series (2020–2021 and later): Photonic setups achieved boson sampling advantages, with claims of tasks infeasible to simulate classically.
- IBM and others on a variation of Simon’s problem (2025): A USC/JHU team used IBM’s 127-qubit Eagle processors to demonstrate an exponential scaling advantage on a modified Simon’s problem, claimed as an unconditional exponential speedup (no assumptions on classical hardness). This marked a significant step toward verifiable quantum outperformance.
- Quantinuum H-series benchmarks (2025):
- Achieved record Quantum Volume (2²³ ≈ 8.4 million in mid-2025), a holistic metric originally defined by IBM that combines qubit count, gate fidelity, connectivity, and more.
- Demonstrated coherence and useful computation on 56-qubit MaxCut optimization problems (QAOA variant) with thousands of gates, beyond brute-force classical simulation limits per independent studies (e.g., Jülich/Purdue arXiv papers).
- RCS and other supremacy-style experiments on 56 qubits.
- Google’s Willow chip (announced December 2024): 105-qubit superconducting processor completed RCS benchmarks in under 5 minutes, with claims of exponential error reduction and performance vastly beyond classical supercomputers for that task (some estimates: lifetimes of the universe classically).
- UT Austin-led unconditional separation (2025): Demonstrated quantum information supremacy on a task requiring far less memory than any classical approach (e.g., 12 qubits quantum vs. ≥62 bits classical for equivalent success), published on arXiv.
- Other application-oriented benchmarks:
- Portfolio optimization, materials simulation (e.g., D-Wave claims), building performance/energy optimization comparisons (quantum vs. classical solvers like NSGA-II).
- Various QAOA, MaxCut, and optimization tasks where quantum hardware shows speed or quality edges in noisy intermediate-scale regimes, though classical often still wins on solution quality.
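As a rough illustration of the Quantum Volume metric cited above: QV = 2ⁿ for the largest n at which n-qubit, depth-n randomized model circuits still pass the heavy-output test (heavy-output probability above 2/3). The sketch below is my own back-of-envelope simplification, not the official IBM protocol: it just asks how large n can get before the expected accumulated two-qubit gate error exceeds a fixed budget.

```python
def max_qv_exponent(two_qubit_error, error_budget=1 / 3):
    # Crude model: an n-qubit, depth-n QV circuit contains roughly
    # n*n/2 two-qubit gates; keep the expected total error under budget.
    n = 1
    while (n + 1) ** 2 / 2 * two_qubit_error < error_budget:
        n += 1
    return n

# With ~1e-3 two-qubit error (trapped-ion territory), the model lands
# in the mid-20s, broadly consistent with Quantinuum's reported QV of 2^23.
exponent = max_qv_exponent(1e-3)
```

The point of the model is only that QV grows logarithmically slowly: each extra unit of the exponent demands substantially better gate fidelity, which is why QV records climb by a few powers of two per year rather than doubling with qubit count.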
Broader efforts include:
- MIT Quantum Index Report / QIR: Tracks and benchmarks >200 QPUs historically, comparing system-level metrics.
- DARPA Quantum Benchmarking: Develops long-term utility metrics.
- QED-C, SupermarQ, and others: Suites for application-level (e.g., finance, chemistry) and full-system evaluation.
- Independent rankings (e.g., 2025 studies) often place Quantinuum at the top for connectivity/full performance on QAOA-like tasks.
Overall, quantum computers have demonstrated clear advantages on contrived sampling/optimization tasks beyond feasible classical simulation (especially RCS/GBS variants), with 2025 seeing stronger unconditional claims and larger-scale coherence. However, for broadly useful, real-world problems, classical computers still dominate in most cases, and no universal/practical quantum advantage for economically valuable tasks has been unambiguously achieved yet. Progress is accelerating as error correction and logical qubits emerge. The field uses evolving metrics like Quantum Volume, CLOPS, XEB fidelity, and application proxies to track this trajectory.
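Of the metrics just listed, linear XEB (cross-entropy benchmarking) fidelity is the one behind the Sycamore and Willow RCS claims, and it is simple to state: sample bitstrings from the hardware, look up each sample’s ideal probability in a classical simulation of the same circuit, and compute F = 2ⁿ · mean(P_ideal) − 1. A uniform random sampler scores 0; an ideal device scores ≈1. A minimal sketch:

```python
def linear_xeb(n_qubits, ideal_probs_of_samples):
    # F_XEB = 2^n * mean(P_ideal(x_i)) - 1 over the sampled bitstrings x_i.
    d = 2 ** n_qubits
    mean_p = sum(ideal_probs_of_samples) / len(ideal_probs_of_samples)
    return d * mean_p - 1

# A uniform (fully depolarized) sampler scores 0; an ideal device sampling a
# Porter-Thomas distribution concentrates on heavy outputs and scores ~1.
noise_floor = linear_xeb(3, [1 / 8] * 100)  # 0.0
ideal_like = linear_xeb(3, [2 / 8] * 100)   # 1.0
```

The catch, of course, is that computing P_ideal requires the very classical simulation the experiment claims is infeasible at full scale, which is why XEB at 50+ qubits must be extrapolated from smaller, verifiable circuits.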
A Follow-up Question
I don’t see anything here that relates to conventional computing, i.e., cracking a cipher or elliptic curve encryption.
You’re right—the benchmarks I listed earlier primarily focus on contrived or sampling/optimization tasks (like Random Circuit Sampling, Gaussian Boson Sampling, QAOA for MaxCut, or Quantum Volume) where quantum hardware has demonstrated clear outperformance or infeasibility for classical simulation in reasonable time. These showcase quantum advantage in specific computational models but don’t directly translate to breaking real-world classical cryptography like ciphers, RSA, or elliptic curve cryptography (ECC).
For cryptography-related tasks—such as factoring large numbers (core to RSA) or solving the elliptic curve discrete logarithm problem (core to ECC)—the situation is very different as of March 2026. No quantum computer has come close to breaking practically relevant instances of these problems. Here’s a clear breakdown of the current state, including any direct comparisons or “benchmarks” involving Shor’s algorithm (for factoring/discrete logs) and Grover’s algorithm (for search/brute-force impacts on symmetric ciphers or hashes).
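To put “exponential speedup” in concrete terms: the best known classical factoring algorithm, the General Number Field Sieve (GNFS), scales sub-exponentially in the modulus size, while Shor’s circuit scales polynomially. The sketch below compares the two asymptotic growth rates; the absolute numbers are meaningless (constants, memory, and error-correction overhead are all ignored), and only the ratios between key sizes carry information.

```python
import math

def gnfs_ops(bits):
    # Heuristic GNFS complexity: exp((64/9)^(1/3) (ln n)^(1/3) (ln ln n)^(2/3)).
    ln_n = bits * math.log(2)
    return math.exp(
        (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    )

def shor_gates(bits):
    # Shor's quantum circuit needs on the order of (log n)^3 gates.
    return bits ** 3

# Doubling the key size from 1024 to 2048 bits multiplies the classical
# cost by roughly a billion, but the quantum gate count only by 8.
classical_growth = gnfs_ops(2048) / gnfs_ops(1024)
quantum_growth = shor_gates(2048) / shor_gates(1024)
```

This asymmetry is why doubling RSA key sizes is an effective response to faster classical computers but a nearly useless one against a future fault-tolerant quantum computer.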
Shor’s Algorithm and Breaking RSA/ECC
- Theoretical quantum advantage: Shor’s algorithm provides an exponential speedup over the best classical factoring methods (e.g., the General Number Field Sieve). Classically, factoring a 2048-bit RSA modulus would take billions to trillions of years on even the largest supercomputers. In theory, a sufficiently large, fault-tolerant quantum computer could do it in hours to weeks (polynomial time).
- Current practical status (no demonstrated breaking of cryptographically relevant keys):
- The largest RSA-like numbers factored on actual quantum hardware remain tiny (e.g., small semiprimes like 15, 21, or specially structured numbers up to ~50-60 bits in toy demonstrations, often using hybrid quantum-classical tricks or annealers like D-Wave on contrived instances).
- Claims of “factoring RSA-2048” in 2025 papers (e.g., using D-Wave) were debunked—they factored specially chosen, easy-to-attack 2048-bit numbers (e.g., with factors differing by only 2 bits or low Hamming weight), not general/arbitrary RSA moduli. These do not represent a real threat or benchmark against classical security.
- No quantum device has factored anything approaching 256-bit ECC or 2048-bit RSA in a way that outperforms classical methods for realistic keys.
- Resource estimates as “benchmarks” for future capability (these are theoretical resource requirements, not achieved demonstrations):
- Recent 2025–2026 papers (e.g., Craig Gidney/Google Quantum AI, 2025; Iceberg Quantum/Pinnacle architecture, Feb 2026) have dramatically lowered estimates for fault-tolerant resources needed to run Shor’s on RSA-2048:
- ~1 million physical (noisy) qubits and ~1 week runtime (Gidney, using optimizations like magic state cultivation and approximate arithmetic).
- Under 100,000 physical qubits in some LDPC-code-based architectures (Sydney/Australia group, Feb 2026 preprint).
- Even lower speculative claims (e.g., <5,000 qubits with novel “JVG” hybrid methods, March 2026 announcements), though these are unverified and often optimistic.
- For ECC (e.g., 256-bit curves like secp256k1 in Bitcoin): Often requires fewer resources than equivalent RSA due to smaller bit lengths (e.g., estimates of 13–317 million physical qubits for fast attacks in 2022–2024 papers, with ongoing refinements).
- Comparison takeaway: These are resource-projection “benchmarks” showing progress toward a cryptographically relevant quantum computer (CRQC), but current hardware (even 100+ qubit systems like Google’s Willow or IBM’s) is orders of magnitude short. No exponential advantage has been experimentally shown for real cryptographic instances—classical computers still dominate here by default because quantum hasn’t reached the required scale.
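The toy factorizations of 15 and 21 mentioned above come down to Shor’s classical post-processing step: the quantum part of the algorithm only finds the period r of f(x) = aˣ mod N, and simple gcd arithmetic then extracts the factors. The self-contained sketch below finds the period by brute force, which is exactly the part a real quantum computer would replace:

```python
import math

def find_period(a, N):
    # Stand-in for the quantum subroutine: brute-force the order of a mod N.
    x, r = a % N, 1
    while x != 1:
        x, r = (x * a) % N, r + 1
    return r

def shor_factor(N, a):
    r = find_period(a, N)
    if r % 2 != 0:
        return None            # odd period: retry with another base a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None            # trivial square root: retry
    p = math.gcd(y - 1, N)
    return (p, N // p) if 1 < p < N else None

factors = shor_factor(15, 7)   # (3, 5): the classic hardware demo instance
```

For N = 15 and a = 7 the period is 4, so the factors fall out as gcd(7² ± 1, 15) = (3, 5). The brute-force `find_period` takes O(N) steps, which is the whole point: only quantum period-finding makes this route faster than classical factoring, and that is the part no hardware can yet run at cryptographic sizes.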
Grover’s Algorithm and Symmetric Ciphers/Hashes
- Theoretical quantum advantage: Grover’s algorithm provides a quadratic speedup for unstructured search (e.g., brute-forcing keys or finding preimages). A classical brute-force search over N possibilities takes O(N) time; Grover’s reduces it to O(√N).
- Impact on cryptography:
- For symmetric ciphers like AES-128: Effective security drops from 128 bits to ~64 bits. That is a serious weakening on paper, but running ~2⁶⁴ coherent Grover iterations remains far beyond any current or foreseeable quantum hardware.
- For hashes (e.g., SHA-256 collisions): Quadratic speedup makes finding collisions easier but not practical yet.
- Benchmarks and demonstrations:
- Small-scale Grover implementations exist on quantum hardware (e.g., 3-qubit Grover search characterizations on superconducting QPUs in 2025 papers, or optimized variants showing minor speedups in toy search/oracle problems).
- No meaningful cryptographic speedup has been demonstrated—e.g., no quantum device has brute-forced even a weakened 40–50 bit key faster than classical in a way that matters.
- Theoretical extensions (e.g., continuous optimization variants in 2025 Chinese papers) prove quadratic speedup in broader domains, but these remain proofs/simulations, not hardware benchmarks against classical solvers.
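Grover’s quadratic speedup is easy to verify in classical simulation at toy scale, which is roughly what the 3-qubit hardware demos above amount to. The sketch below simulates amplitude amplification over 8 items in pure Python: after ⌊(π/4)√8⌋ = 2 iterations, the marked item is measured with ~94.5% probability, versus 1/8 for random guessing.

```python
import math

def grover_success_prob(n_items, target, iterations):
    # Statevector simulation of Grover's search over n_items basis states.
    amp = [1 / math.sqrt(n_items)] * n_items   # uniform superposition
    for _ in range(iterations):
        amp[target] = -amp[target]             # oracle: phase-flip the target
        mean = sum(amp) / n_items              # diffusion: reflect about mean
        amp = [2 * mean - a for a in amp]
    return amp[target] ** 2                    # measurement probability

N = 8
k = math.floor(math.pi / 4 * math.sqrt(N))     # optimal iteration count: 2
p = grover_success_prob(N, target=5, iterations=k)
```

Against AES-128 the same arithmetic gives roughly (π/4)·2⁶⁴ sequential oracle calls: a genuine quadratic speedup, but still astronomically expensive, which is why Grover is treated as a key-length issue (move to AES-256) rather than a practical break.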
Overall Summary of Quantum vs. Classical in Cryptography
Quantum computers have not yet achieved any verifiable advantage or benchmark win against conventional cryptography like cracking RSA, ECC, or symmetric ciphers. The advantages seen in other areas (RCS, boson sampling, etc.) don’t carry over here because:
- Cryptographic problems require fault-tolerant, large-scale logical qubits (thousands to millions physical) with very low error rates—far beyond NISQ-era hardware.
- Progress is in shrinking theoretical resource estimates (a positive sign for the future), but timelines for a real break (often called “Q-Day”) remain debated: many experts point to 2030s or later, with probabilities rising (e.g., 19–34% by 2034 per some risk assessments).
This is why the focus has shifted to post-quantum cryptography (NIST-standardized algorithms such as ML-KEM/Kyber and ML-DSA/Dilithium) to migrate away from vulnerable classical schemes preemptively. If you’re interested in specific papers, resource tables, or visuals of qubit-estimate trends over time, let me know!