Description
TL;DR: Opening this issue to be able to make a PR for MPS encoding, as requested. I have the vanilla version ready to go.
Is your feature request related to a use case or problem? Please explain
What are MPS? Matrix Product States (MPS) are 1D tensor networks that approximate dense statevectors with a chain of tensors whose bond dimension controls the accuracy of the approximation.
How are MPS converted to circuits? Any MPS can be converted exactly into a single layer of unitaries, where the bond dimension dictates how many qubits each unitary acts on. To keep the depth low and restrict to one- and two-qubit gates, we first compress the MPS to bond dimension 2 and then convert it to a unitary layer. This only approximates the MPS, so to improve fidelity we apply the procedure in layers: compress the MPS, convert the compressed copy to a unitary layer and store it, contract the layer's inverse with the original (uncompressed) MPS to "update" it, and repeat. Much like a disentangling algorithm, we stop when the target number of layers is reached, or when the state is close enough to a product state.
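The compression step at the core of this loop can be sketched with plain NumPy: split the statevector into MPS tensors by successive SVDs, keeping at most `chi_max` singular values at every bond. This is a toy sketch assuming a dense starting vector; the function names are illustrative, not the proposed implementation.

```python
import numpy as np

def mps_compress(psi, n_qubits, chi_max):
    """Split a dense statevector into MPS tensors, truncating every bond
    to at most chi_max singular values (illustrative toy version)."""
    tensors = []
    psi = psi.reshape(1, -1)
    for _ in range(n_qubits - 1):
        chi_l, rest = psi.shape
        theta = psi.reshape(chi_l * 2, rest // 2)
        u, s, vh = np.linalg.svd(theta, full_matrices=False)
        chi = min(chi_max, len(s))          # truncate the Schmidt spectrum
        tensors.append(u[:, :chi].reshape(chi_l, 2, chi))
        psi = s[:chi, None] * vh[:chi]      # carry the remainder rightwards
    tensors.append(psi.reshape(psi.shape[0], 2, 1))
    return tensors

def mps_to_vector(tensors):
    """Contract the MPS tensors back into a dense statevector."""
    v = tensors[0]
    for t in tensors[1:]:
        v = np.tensordot(v, t, axes=([v.ndim - 1], [0]))
    return v.reshape(-1)

# A GHZ state has Schmidt rank 2 across every cut, so chi_max=2 is exact:
n = 4
ghz = np.zeros(2 ** n)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
mps = mps_compress(ghz, n, chi_max=2)
fidelity = abs(np.vdot(ghz, mps_to_vector(mps))) ** 2
```

For a generic (volume-law) state the same truncation at `chi_max=2` would discard most of the Schmidt weight, which is exactly why the layered update loop is needed.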
What are the downsides of this approach? Without further optimization, an MPS can only reliably approximate area-law entangled states with high fidelity. The general case also requires handling volume-law states, but those need exponentially larger bond dimension, so they simply do not fit into the constant bond dimension above with even moderate fidelity (<0.5). Furthermore, MPS does somewhat worse on long-range entangled states: it cannot inherently move qubits around into independent clusters, so long-range entanglement has to propagate through adjacent qubits. For example, if qubits 1 and 4 are entangled, the circuit does not entangle them directly the way it would qubits 1 and 2; instead it entangles 1 and 2, then 2 and 3, then 3 and 4, forming a staircase, which naturally contributes to depth as well.
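The staircase can be seen in a tiny statevector simulation (plain NumPy; `apply_gate` and the gate construction are my own illustrative helpers, not part of the proposal): creating a Bell pair between qubits 0 and 3 on a nearest-neighbor line takes one entangling gate plus two SWAPs walking down the chain, rather than a single long-range gate.

```python
import numpy as np

def apply_gate(state, gate, qubits):
    """Apply a k-qubit gate (2^k x 2^k matrix) to the listed qubits of an
    n-qubit statevector stored as an ndarray of shape (2,) * n."""
    k = len(qubits)
    g = gate.reshape((2,) * (2 * k))
    # Contract the gate's input legs with the target qubit axes,
    # then move the output legs back to the original qubit positions.
    state = np.tensordot(g, state, axes=(list(range(k, 2 * k)), qubits))
    return np.moveaxis(state, list(range(k)), qubits)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.eye(4)[[0, 1, 3, 2]]   # control = first listed qubit
SWAP = np.eye(4)[[0, 2, 1, 3]]

n = 4
state = np.zeros((2,) * n)
state[(0,) * n] = 1.0            # start in |0000>

# Entangle qubits 0 and 3 using only nearest-neighbor gates:
state = apply_gate(state, H, [0])
state = apply_gate(state, CNOT, [0, 1])   # Bell pair on (0, 1)
state = apply_gate(state, SWAP, [1, 2])   # walk the entanglement down...
state = apply_gate(state, SWAP, [2, 3])   # ...to qubit 3: the "staircase"
# state is now (|0000> + |1001>)/sqrt(2): qubits 0 and 3 share a Bell pair.
```

Each extra step of separation adds another nearest-neighbor gate, so the depth cost grows with the range of the entanglement.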
Are these flaws or can they be fixed? They are limitations of the "vanilla" implementation. Performance in the general case can be improved drastically by sweeping (which requires full contraction of the circuit and is NOT feasible for large circuits, since we want the QPU to be doing the contraction) and/or by variationally optimizing the compression of the MPS, which is more tractable. These techniques have been shown to remedy the fidelity for volume-law states (benchmarked on random Clifford circuits), matching that of area-law entangled ones. So "vanilla" is more practical, but has its limitations.
Reference for algorithm:
[1] https://arxiv.org/abs/1908.07958 (core algorithm)
[2] https://arxiv.org/abs/2209.00595 (sweeping)
The variational fine-tuning during compression is something I tried on a tangent, and fortunately it worked.
How urgent is this for you? Is it blocking important work?
P3 – I'm not really blocked by it; it's an idea I'm proposing based on principle