Quantum Computing in Portfolio Optimization

Introduction

Portfolio optimization is the process of selecting the best mix of assets to maximize expected return for a given level of risk (or, equivalently, to minimize risk for a target return). In classical finance, this is often framed by Markowitz’s mean-variance model, which balances expected return against portfolio risk (variance) to find the efficient frontier of optimal portfolios. However, real-world constraints (such as limits on the number of assets held, sector caps, etc.) turn this into a complex combinatorial optimization problem. For instance, imposing a cardinality constraint (“use at most k assets”) makes the problem NP-hard, and exhaustive search becomes hopeless once the asset universe grows beyond a few dozen choices. In practical terms, searching for the best portfolio among, say, 100 assets is computationally intractable: there are $$2^{100}$$ possible subsets, and the count grows exponentially with each asset added.
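
In symbols, the cardinality-constrained mean-variance problem is the following mixed-integer program (a standard textbook formulation; $$q$$ is a risk-aversion weight):

$$\begin{aligned} \min_{\mathbf{w},\,\mathbf{x}}\quad & q\,\mathbf{w}^\top \Sigma\,\mathbf{w} \;-\; \boldsymbol{\mu}^\top \mathbf{w} \\ \text{s.t.}\quad & \mathbf{1}^\top \mathbf{w} = 1, \qquad \sum_i x_i \le k, \qquad 0 \le w_i \le x_i, \qquad x_i \in \{0,1\}, \end{aligned}$$

where $$\boldsymbol{\mu}$$ and $$\Sigma$$ are the expected-return vector and covariance matrix, the $$w_i$$ are portfolio weights, and the binary $$x_i$$ mark which assets are held – it is the $$x_i$$ that make the problem combinatorial.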

Quantum computing offers a new approach to tackle this complexity. Quantum bits (qubits) and quantum algorithms can in principle evaluate many combinations in superposition, potentially “tunneling” through the vast search space more efficiently than classical brute force. This promise has spurred intense interest in applying quantum computers to portfolio optimization. Financial institutions and quantum startups alike are experimenting with quantum algorithms in hopes of eventually gaining a computational edge in constructing optimal portfolios. In this case study, we examine why quantum computing is promising for portfolio optimization, what approaches are being tried, the current state of the art, what is needed to achieve a true quantum advantage, and the potential impact for whoever gets there first.

Why Quantum Computing Holds Promise for Portfolio Optimization

At its core, portfolio selection with discrete constraints can be mapped onto a binary optimization problem. For example, one can represent each asset by a binary variable (1 = include the asset, 0 = exclude it). The portfolio’s expected return and risk (variance) can be encoded into a cost function. With quadratic terms (for pairwise covariances between assets) and linear terms (for expected returns or other costs), this becomes a Quadratic Unconstrained Binary Optimization (QUBO) problem. QUBO can be naturally translated to the language of qubits: a QUBO cost function can be written as an Ising Hamiltonian, e.g.

$$H \;=\; \sum_i h_i\,\sigma_i^z \;+\; \sum_{i<j} J_{ij}\,\sigma_i^z \sigma_j^z$$,

where $$\sigma_i^z$$ is the Pauli-Z operator on the qubit representing asset $$i$$. The coefficients $$h_i$$ encode each asset’s individual contribution (e.g., the negative of its expected return, so that higher-return assets lower the energy), while $$J_{ij}$$ encodes the interaction between assets $$i$$ and $$j$$ (their covariance risk, plus any penalty terms used to enforce constraints). The lowest-energy state of this Hamiltonian corresponds to the optimal (or near-optimal) portfolio under the given objective and constraints. This quantum-native formulation is promising because quantum annealers and gate-model quantum algorithms are adept at exploring such Ising/QUBO landscapes.
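
To make the mapping tangible, here is a minimal sketch of the QUBO construction in NumPy. It is illustrative only: the toy data, the exact-cardinality penalty, and the function name build_portfolio_qubo are assumptions for the example, not a reference implementation.

```python
import itertools
import numpy as np

def build_portfolio_qubo(mu, sigma, q=0.5, k=2, penalty=1.0):
    """QUBO for: minimize q*x'Σx - μ'x + penalty*(Σ_i x_i - k)^2 over x_i in {0, 1}."""
    n = len(mu)
    Q = q * np.asarray(sigma, dtype=float)     # pairwise covariance (risk) interactions
    Q = Q - np.diag(mu)                        # return terms on the diagonal (x_i^2 = x_i)
    # Expand the penalty: (Σ_i x_i - k)^2 = x'(ones)x - 2k·Σ_i x_i + k^2 (constant dropped)
    Q = Q + penalty * (np.ones((n, n)) - 2 * k * np.eye(n))
    return Q

mu = np.array([0.10, 0.12, 0.07, 0.09])       # toy expected returns
sigma = np.diag([0.04, 0.09, 0.01, 0.05])     # toy covariance (uncorrelated for brevity)
Q = build_portfolio_qubo(mu, sigma, q=0.5, k=2, penalty=1.0)

# Brute-force check, feasible only for tiny n – exactly the blow-up quantum methods target.
best = min(itertools.product([0, 1], repeat=len(mu)),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print(best)  # (1, 1, 0, 0): the two assets with the best return/risk trade-off
```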

Quantum annealing: Devices like D-Wave’s quantum annealers are designed to physically realize an Ising model in hardware. They gradually evolve the qubits from an initial easy state to a final state that encodes the cost function $H$. In principle, if the anneal is slow enough, the system will settle in the lowest energy configuration – ideally the optimal portfolio. Quantum annealing is being tested on portfolio problems because it naturally handles the combinatorial search in hardware. Notably, the Spanish bank BBVA partnered with D-Wave to run a 60-asset portfolio optimization with real market data and constraints (like budget limits and a cap on the number of assets). The outcome was encouraging: the quantum annealer matched classical genetic algorithms in runtime, and in many runs the quantum approach achieved portfolios with about 40% lower tracking error (deviation from a benchmark) than the classical heuristic. This suggests that even if today’s quantum annealers don’t vastly outperform classical optimizers in speed, they can at least find equally good or sometimes better solutions in terms of risk-return tradeoff. Another D-Wave study by a firm called Chicago Quantum showed that with 60 U.S. stocks, quantum annealing could indeed select “attractive portfolios” comparable to those found by classical methods. In their experiment, both classical and quantum annealing methods produced viable optimal portfolios, underlining that current quantum annealers are already reaching the solution quality of traditional techniques (albeit for smaller problem sizes).

That said, today’s annealers have limitations: the problem must be mapped onto the hardware’s qubit connectivity graph with limited resolution for weights. Researchers found practical tricks to improve performance, such as scaling the $$J_{ij}$$ coupling strengths to fit the hardware’s analog range and doing greedy repair steps on the output to fix any minor constraint violations due to noise. These hybrid tweaks were crucial in the BBVA experiment to enforce strict portfolio constraints. Overall, quantum annealing has proven to be a powerful heuristic tool – it may not guarantee the absolute optimal solution, but by effectively tunneling through the energy landscape, it can escape local minima where classical solvers might get stuck. This could be valuable for highly constrained portfolio problems.
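
For a sense of what this looks like in practice, below is a minimal sketch using D-Wave’s open-source Ocean SDK (mentioned later in this article), reusing the Q matrix from the QUBO sketch above. The hardware sampler is left as a comment since it requires D-Wave access; an exact solver stands in at toy sizes.

```python
import dimod

# Convert the dense QUBO matrix into the dict form dimod expects (upper triangle,
# with the symmetric off-diagonal weights folded together).
n = Q.shape[0]
qubo = {(i, j): float(Q[i, j] if i == j else Q[i, j] + Q[j, i])
        for i in range(n) for j in range(i, n)}
bqm = dimod.BinaryQuadraticModel.from_qubo(qubo)

# ExactSolver enumerates all 2^n states – fine for toys. On real hardware one would use:
#   from dwave.system import DWaveSampler, EmbeddingComposite
#   sampler = EmbeddingComposite(DWaveSampler())  # the composite handles minor-embedding
#   sampleset = sampler.sample(bqm, num_reads=1000)
sampleset = dimod.ExactSolver().sample(bqm)
selection = sampleset.first.sample                  # lowest-energy assignment found
print([i for i, bit in selection.items() if bit])   # indices of selected assets
```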

Gate-model algorithms (QAOA and others): On gate-based quantum computers, the leading approach for discrete optimization is the Quantum Approximate Optimization Algorithm (QAOA). QAOA is essentially a variational circuit that alternates between applying the problem Hamiltonian (phase rotations proportional to $H$) and a mixing Hamiltonian (to explore the search space). The algorithm has tunable parameters (sometimes denoted $\gamma, \beta$ for each round) that a classical optimizer adjusts to minimize the measured cost. QAOA can in theory approach the optimal solution as the number of alternating rounds (circuit depth) increases. In portfolio optimization, QAOA has been tested on small instances (dozens of assets) using simulators and some hardware. Notably, researchers have looked at specialized versions like Fermionic QAOA, which encodes assets in a way that reduces circuit depth by leveraging mathematical structure (one demo halved the circuit layers required for a 40-asset problem). Others explored different mixer Hamiltonians and entanglement strategies tailored for portfolio problems; a 2024 benchmark found that certain custom mixer setups converged ~30% faster to good solutions compared to the standard QAOA approach.
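
As a concrete illustration, the sketch below runs plain QAOA on the same QUBO via Qiskit’s optimization stack. Treat it as indicative rather than canonical: it assumes the qiskit-optimization and qiskit-algorithms packages, whose import paths have shifted across Qiskit versions, and it reuses the Q matrix from the earlier sketch.

```python
from qiskit.primitives import Sampler
from qiskit_algorithms.minimum_eigensolvers import QAOA
from qiskit_algorithms.optimizers import COBYLA
from qiskit_optimization import QuadraticProgram
from qiskit_optimization.algorithms import MinimumEigenOptimizer

# Express the QUBO as a QuadraticProgram over binary variables x0..x{n-1}.
n = Q.shape[0]
qp = QuadraticProgram("portfolio")
for i in range(n):
    qp.binary_var(name=f"x{i}")
quad = {(f"x{i}", f"x{j}"): float(Q[i, j] if i == j else Q[i, j] + Q[j, i])
        for i in range(n) for j in range(i, n)}
qp.minimize(quadratic=quad)

# reps is the number of alternating rounds p; COBYLA tunes the γ, β angles classically.
qaoa = QAOA(sampler=Sampler(), optimizer=COBYLA(maxiter=100), reps=2)
result = MinimumEigenOptimizer(qaoa).solve(qp)
print(result.x, result.fval)  # selected-asset bitstring and its cost
```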

Because current gate quantum computers are limited in qubit count and are error-prone, there’s also interest in hybrid quantum-classical schemes for larger portfolios. One such approach by JPMorgan in collaboration with AWS used block decomposition: the large portfolio is divided into smaller correlated blocks (using classical methods), each block is optimized on a quantum processor with QAOA, and then the partial solutions are stitched together classically. This hybrid pipeline showed a modest 12% speedup in runtime at equal solution quality compared to a classical solver alone. While 12% is a small gain, it’s notable as one of the first signs that even today’s small quantum chips, when used cleverly in a hybrid mode, can provide some advantage on a real financial problem. It hints that as devices scale, these quantum boosts could become more significant.
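
The details of that pipeline are not fully public, so the following is only a generic illustration of the decompose-solve-stitch pattern, not JPMorgan’s actual method. Assets are grouped into weakly correlated blocks with ordinary hierarchical clustering, and solve_block_qubo is a hypothetical stand-in for any block-level solver (for instance the QAOA routine sketched above).

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_assets(sigma, n_blocks):
    """Split assets into weakly correlated blocks via hierarchical clustering."""
    d = np.sqrt(np.diag(sigma))
    corr = sigma / np.outer(d, d)                      # covariance -> correlation
    dist = 1.0 - np.abs(corr)                          # strongly correlated = close together
    condensed = dist[np.triu_indices_from(dist, k=1)]  # condensed form linkage() expects
    labels = fcluster(linkage(condensed, method="average"),
                      t=n_blocks, criterion="maxclust")
    return [np.where(labels == c)[0] for c in np.unique(labels)]

def optimize_by_blocks(mu, sigma, k, n_blocks, solve_block_qubo):
    """Optimize each block independently (e.g. with QAOA), then stitch the picks together."""
    selection = np.zeros(len(mu), dtype=int)
    for block in cluster_assets(sigma, n_blocks):
        k_block = max(1, round(k * len(block) / len(mu)))  # apportion the asset budget
        selection[block] = solve_block_qubo(mu[block], sigma[np.ix_(block, block)], k_block)
    return selection
```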

Another quantum algorithm avenue is solving the continuous version of the portfolio problem (where asset weights can be fractional) using quantum linear algebra. The classic Markowitz problem with N assets can be solved by inverting an $N \times N$ covariance matrix or via convex programming. The Harrow-Hassidim-Lloyd (HHL) quantum algorithm can solve linear systems in logarithmic time under certain conditions. Researchers have shown that certain formulations of the portfolio problem can be cast as a linear system $$A\mathbf{x}=\mathbf{b}$$ and solved with HHL for the optimal weight vector $$\mathbf{x}$$ (one such formulation is sketched below). However, HHL is very demanding – it requires robust quantum phase estimation and long circuits, which are far beyond today’s noisy machines. To overcome this, JPMorgan’s researchers developed a Hybrid HHL++ algorithm that uses classical processing for parts of the task and quantum subroutines for smaller pieces. In 2024 they demonstrated this Hybrid HHL++ on actual hardware (Quantinuum’s trapped-ion system) for small-scale portfolio instances, successfully computing properties of the optimal portfolio. This was a proof-of-concept that even without full fault tolerance, some pieces of the quantum algorithm (like small matrix inversions) can run on current devices when combined with clever classical post-processing. It didn’t beat classical solvers in speed, but it showed that the building blocks are starting to work on real quantum hardware.
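
For intuition about where the linear system comes from: the simplest equality-constrained Markowitz problem reduces, via the standard Lagrange-multiplier (KKT) conditions, to exactly the $$A\mathbf{x}=\mathbf{b}$$ shape that HHL-type solvers target:

$$\min_{\mathbf{w}}\; \tfrac{1}{2}\,\mathbf{w}^\top \Sigma\,\mathbf{w} \;-\; q\,\boldsymbol{\mu}^\top \mathbf{w} \quad \text{s.t.} \quad \mathbf{1}^\top \mathbf{w} = 1 \qquad\Longrightarrow\qquad \begin{pmatrix} \Sigma & \mathbf{1} \\ \mathbf{1}^\top & 0 \end{pmatrix} \begin{pmatrix} \mathbf{w} \\ \lambda \end{pmatrix} = \begin{pmatrix} q\,\boldsymbol{\mu} \\ 1 \end{pmatrix},$$

where $$\lambda$$ is the multiplier enforcing the budget constraint. Inverting that bordered covariance matrix is the step a quantum linear solver would accelerate.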

Current State of Practice

Who is experimenting? Virtually every major player at the intersection of finance and quantum is exploring portfolio optimization as a use-case. Banks and financial institutions have active research programs: for example, JPMorgan and Goldman Sachs have dedicated quantum teams, and other banks like BBVA, Barclays, and HSBC have run pilots or formed partnerships in quantum computing. Quantum computing startups and vendors also target this problem: D-Wave (quantum annealing company) has multiple client case studies on portfolio optimization, and gate-based hardware companies (IBM, Rigetti, IonQ, IQM, etc.) have showcased small demos or collaborations. In a recent example, IQM (a European quantum hardware startup) partnered with DATEV to apply a 20-qubit quantum processor to an industry-relevant portfolio challenge, executing a custom algorithm on real data. While these demonstrations are on small scales, they reveal the potential of quantum methods and help identify practical issues. According to a World Economic Forum report, over 85% of finance companies are already investing in quantum computing readiness, underscoring how much attention this area receives.

So far, no true “quantum advantage” (outperforming the best classical methods) has been achieved in portfolio optimization – which is expected, given the early hardware. Classical algorithms for portfolio optimization (especially without integer constraints) are quite efficient for many assets (modern solvers handle thousands of assets in the continuous-weights formulation). And even with discrete constraints, classical heuristics (greedy methods, evolutionary algorithms, simulated annealing, etc.) are very strong on typical problem sizes. Quantum prototypes have therefore focused on benchmarking against classical heuristics on toy problems. The results generally show parity or small advantages in either solution quality or runtime for the tested instances. For example, the D-Wave 2000Q annealer could find optimal or near-optimal selections among 60 stocks about as fast as a classical optimizer (both under a minute), although one had to run many anneals and aggregate results (totaling ~21 seconds of quantum anneal time in that experiment). On the gate-model side, QAOA runs have been limited to much smaller universes (often <10 assets on hardware, or a few dozen in simulation) due to qubit count limits. They do show that QAOA can find the correct optimal subset in simple cases. A research article in Nature in 2022, for instance, tried QAOA on an 8-asset problem with an IBM quantum computer and did manage to identify the efficient-frontier portfolio in that toy scenario (though not faster than brute force).

One noteworthy achievement was by BBVA in 2023 using D-Wave, where the quantum approach not only matched classical solver speed but also returned portfolios with better risk metrics in some trials. This hints that even without speedup, quantum heuristics might explore the solution space differently (potentially avoiding certain local minima) to yield portfolios with improved out-of-sample performance (like lower tracking error to a benchmark). Another milestone is the hybrid quantum-classical decomposition by JPMorgan + AWS (2024) which showed a measurable runtime reduction (12%) on a constrained optimization by partitioning the problem. While 12% won’t turn heads, it’s important because it was achieved on a realistic dataset and problem size beyond trivial toy models. It suggests we are inching toward regimes where quantum can complement classical in practical settings.

Overall, the state of practice is experimental and hybrid. Quantum computers today cannot handle a full-scale portfolio optimization better than classical ones, but they are being used as co-processors or testbeds for algorithms. Many experiments use quantum simulators (running the quantum algorithms on classical supercomputers) to validate algorithms up to moderate sizes, then test small instances on actual quantum hardware to verify that the real qubits behave as expected in principle. This iterative approach has led to improved methods – for example, better constraint-encoding techniques and error mitigation strategies (like zero-noise extrapolation on QAOA outputs to counteract quantum gate errors). The community has also created open-source tooling: frameworks like Qiskit and Ocean have portfolio optimization application libraries, so that quants and researchers can easily plug in their data and try solving a QUBO on a quantum backend.
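
As an example of the application-library route, qiskit-finance wraps the entire QUBO construction behind a PortfolioOptimization class (API as in the qiskit-finance tutorials; versions vary). Here an exact classical eigensolver stands in for a quantum backend – swapping in the QAOA object from the earlier sketch turns it into a quantum run.

```python
import numpy as np
from qiskit_algorithms.minimum_eigensolvers import NumPyMinimumEigensolver
from qiskit_finance.applications.optimization import PortfolioOptimization
from qiskit_optimization.algorithms import MinimumEigenOptimizer

mu = np.array([0.10, 0.12, 0.07, 0.09])    # expected returns
sigma = np.diag([0.04, 0.09, 0.01, 0.05])  # covariance matrix (toy: uncorrelated)

# budget = number of assets to hold; risk_factor plays the role of q above.
po = PortfolioOptimization(expected_returns=mu, covariances=sigma,
                           risk_factor=0.5, budget=2)
qp = po.to_quadratic_program()             # builds the binary optimization problem

result = MinimumEigenOptimizer(NumPyMinimumEigensolver()).solve(qp)
print(po.interpret(result))                # indices of the chosen assets
```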

Challenges and Requirements for Real Quantum Advantage

Despite promising early results, several major challenges must be overcome to achieve a true quantum advantage in portfolio optimization:

  • Scaling to Larger Portfolios: Thus far, quantum tests have involved on the order of $$10^1$$–$$10^2$$ assets at most. But a real-world investment universe might have hundreds or thousands of assets to choose from. Quantum annealers like D-Wave’s have thousands of qubits, but not all can be used for a fully connected problem due to sparse hardware connectivity – embedding a dense portfolio QUBO requires chains of qubits that consume capacity. Gate-model devices currently have at most a few hundred qubits (with only a fraction effectively usable for algorithms after error mitigation). To beat classical, quantum devices likely need to handle problem sizes well beyond what brute force or classical heuristics can do – perhaps hundreds of assets with complex constraints. This could demand thousands of high-quality qubits. As an example, one study suggests classical interior-point methods can efficiently solve problems with thousands of assets, but pushing into tens of thousands of assets might be where quantum could shine – an ambitious target far beyond today’s quantum capacities.
  • Noise and Error Rates: Present quantum hardware is noisy, meaning computations are imprecise and gates/anneals introduce errors. For optimization algorithms like QAOA, noise blurs the energy landscape and can prevent the algorithm from finding the true minimum. The medium-term goal is to use error mitigation techniques (not full error correction yet) to push noise rates low enough for the algorithm to get a meaningful signal. For instance, simultaneous perturbation stochastic approximation (SPSA) optimizers and readout error mitigations have recovered about 70% of the performance in QAOA trials despite noise. Ultimately, however, to reliably outperform classical methods on large instances, error-corrected quantum computers will likely be needed so that deep algorithms (or many repetitions) can run without decoherence wiping out the solution quality. This implies major hardware advances – fault-tolerance with thousands of logical qubits – which might be years away.
  • Problem Mapping and Data Loading: Efficiently mapping a finance problem onto qubits is non-trivial. In optimization, one challenge is encoding all the problem data (expected returns, covariance matrix, constraints) into the quantum machine. On annealers, this means setting hundreds of analog coupling strengths with limited precision and dealing with biases. On gate devices, it means constructing cost operators and initial states that reflect the input data. Data loading can dominate the runtime for large problems; one analysis noted that beyond ~100 qubits, simply preparing the problem Hamiltonian or necessary input states could be the bottleneck. Research is ongoing into more efficient encodings and oracles to input data, as well as techniques like quantum Random Access Memory (QRAM) – though QRAM itself would need to be quantum-error-corrected to be effective in a real advantage scenario.
  • Algorithmic Improvements: The quantum algorithms themselves still need refinement to handle real constraints. QAOA, for example, struggles with hard constraints (like an exact budget or cardinality limit) – these are usually added via large penalty terms in $H$, which requires careful tuning (see the formulation sketched after this list). There is active research on constraint-aware mixers and smarter penalty schedules to enforce feasibility without overwhelming the objective. Likewise for annealers, managing minor-embedding and analog errors requires hybrid post-processing to ensure constraints are satisfied. Another line of work is entirely new algorithms: the Quantum Interior Point Method (QIPM) tries to speed up the classical interior-point approach using quantum subroutines (like solving linear systems faster via quantum linear solvers). Goldman Sachs and AWS did an end-to-end resource analysis of a QIPM for portfolio optimization – finding that while theoretically possible, a full QIPM would need thousands of logical qubits and is not efficient on near-term devices. On the positive side, that analysis helps pinpoint which subroutines (like linear solvers or tomography) are the most resource-intensive, guiding where to focus algorithmic improvements.
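
To make the penalty-tuning issue concrete, a hard cardinality constraint (“choose exactly k assets”) is typically folded into the cost function as a quadratic penalty:

$$H \;=\; q\,\mathbf{x}^\top \Sigma\,\mathbf{x} \;-\; \boldsymbol{\mu}^\top \mathbf{x} \;+\; P\Big(\sum_i x_i - k\Big)^{2},$$

where the weight $$P$$ must be large enough that every feasible portfolio has lower energy than every infeasible one, yet small enough that it does not swamp the return and risk terms and flatten the landscape the optimizer has to navigate.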

Given these challenges, experts anticipate that true quantum advantage in portfolio optimization (solving a real, large-scale portfolio problem faster or better than classical methods) will require fault-tolerant quantum computers or extremely advanced error mitigation if using NISQ devices. We likely need a combination of: hundreds or thousands of logical (error-corrected) qubits, gate operation speeds in the MHz regime or faster, and clever algorithms that minimize the depth of the circuit or annealing time. As a point of reference, current estimates for some quantum optimization algorithms suggest needing on the order of $$10^3$$–$$10^4$$ logical qubits and millions of quantum operations to surpass classical results on meaningful problem sizes. These numbers hint that we’re not there yet – but they are not out of the question in the long term (perhaps within a decade or so, given the rapid progress in hardware).

Outlook and Potential Impact

If and when quantum computing achieves a clear advantage in portfolio optimization, the implications would be significant for the financial industry. Portfolio selection is at the heart of investment management – even a small improvement in optimizing risk-reward tradeoffs can translate to “free” return basis points or reduced risk for a fund. A quantum advantage could mean an institution consistently finds better portfolios (closer to the true optimal on the efficient frontier) than competitors using classical tools, or finds good portfolios faster than others – enabling quicker rebalancing in response to market moves. An oft-quoted idea is that as the efficient frontier shifts upward with better optimization, “teams mastering these quantum methods will capture the extra basis points first”. In highly competitive markets, an edge of even a few basis points in performance, obtained consistently, is huge – it can attract more investment and translate to millions in profits.

Moreover, a quantum leap in optimization capability might enable new kinds of products and strategies. For example, it could handle extremely large universes of assets (tens of thousands of assets, including alternative investments) to build hyper-diversified portfolios, or optimize under a vast number of constraints (personalized portfolios satisfying many custom goals). It might allow real-time portfolio optimization – adjusting allocations on the fly as conditions change – because the computation could be done in minutes instead of hours, or done at all in cases where classical solvers simply cannot finish. It could also improve risk management by finding portfolios that are optimal not just for expected return but also under worst-case scenarios or stress tests, by embedding those considerations into the optimization cost function more directly than classical solvers can.

The first organizations to achieve a quantum-assisted portfolio optimization that beats the classical state-of-the-art will likely gain a reputational and strategic advantage. They could offer clients better returns per unit risk, or lower capital requirements for the same return, etc. In addition, demonstrating success in this domain would validate quantum computing’s value in finance, likely spurring further investment and development. It’s worth noting, however, that any advantage might be short-lived if the knowledge spreads – finance is an arms race, and competitors would quickly adopt similar quantum tools or collaborate with quantum providers. Still, being first confers the opportunity to patent techniques (indeed, JPMorgan has filed patents on quantum portfolio optimization methods), build expertise, and influence how the technology is integrated into financial workflows.

In summary, quantum computing holds real promise for portfolio optimization because it attacks the combinatorial explosion of possibilities in a fundamentally new way. While current experiments show we are still in the early innings – with quantum solutions matching classical ones on small problems – the steady progress in algorithms (QAOA enhancements, hybrid methods, etc.) and hardware indicates that larger and more useful demonstrations will come. Achieving a practical quantum advantage will require surmounting significant technical hurdles, but the payoff is enticing. Portfolio optimization is a cornerstone of finance, and any institution that can do it even marginally better or faster with quantum help will find themselves at a potentially lucrative frontier. As one enthusiast put it, quantum portfolio optimization is “not a silver bullet but a powerful new heuristic frontier”, and as the technology matures, it may well bend the efficient frontier upward in a way classical methods cannot.