Unknown error in CVX 3.0 beta

I have a convex optimization problem that solves fine in CVX version 2.2, but as I increase the dimension I run into memory overflows. I therefore want to use the SCS solver, and for that I ran my code under CVX version 3.0, but it produced an error. This error does not occur under CVX version 2.2.

Error in reshape (line 16)
Size vector must have at least two elements.

I don't know how to solve this; can you help me? I guess it may be due to CVXQUAD.

The original code:

n = 2;
cvx_begin sdp quiet
    variable inc_state(n,1);
    variable sig(n,n) diagonal hermitian;
    minimize minquantum_re_entropy_Cr_withX(sig,a,b)
    subject to
        sig == diag(inc_state);
        sig >= 0;
cvx_end   % cvx_end was missing in the original post
Cr_min = real(cvx_optval)

Added: running it needs CVXQUAD and the function below.

function RelEnt = minquantum_re_entropy_Cr_withX(sig,a,b)

n = size(sig,1);   % n was undefined inside the function
cvx_begin sdp quiet
    variable rho(n,n) complex hermitian;
    minimize quantum_rel_entr(rho,sig)
    subject to
        trace(rho) == 1;
        rho >= 0;
        real(trace(X*rho)) >= a;   % X must be defined in scope (it is not passed in)
        real(trace(X*rho)) <= b;
cvx_end                 % cvx_end was missing in the original post
RelEnt = cvx_optval;    % assign the function's output
format long

Do not use 3.0 beta. It is known to be buggy and is unlikely to be fixed ever.

What @Erling said, to the max!!

Per CVXQUAD: How to use CVXQUAD's Pade Approximant instead of CVX's unreliable Successive Approximation for GP mode, log, exp, entr, rel_entr, kl_div, log_det, det_rootn, exponential cone. CVXQUAD's Quantum (Matrix) Entropy & Matrix Log related functions

Note: CVX 3.0beta is not really recommended to anyone due to its numerous bugs. Do NOT use CVXQUAD with CVX 3.0beta… Please use CVX 2.2 instead.

Unfortunately, CVXQUAD’s quantum_rel_entr doesn’t scale well, due to creating LMIs as large as 2n^2 by 2n^2 for original n by n matrices. Thus, I can see why you want to use SCS. But if bugs are occurring under CVX 3.0beta, the best alternative is likely to be the latest Mosek 9.3 under CVX 2.2.

Even if you got quantum_rel_entr to run without error messages under CVX 3.0beta, I wouldn’t trust the results, because there are several known instances of constraints being ignored under CVX 3.0beta, resulting in incorrect results, even though the solver and CVX claim the problem has been solved.

I also recommend you run without quiet option until you have everything running well. That way you can see all the solver and CVX output, which will help you diagnose and assess things.

Thanks for your reply. Before asking this question, I was already using Mosek 9.3 under CVX 2.2. Unfortunately, my computer runs out of memory when I set n = 2^6. What should I do if I want a higher n?

As @Mark_L_Stone said, CVXQUAD’s quantum_rel_entr doesn’t scale well, which is the reason I want to use SCS.

Per p.3 of https://arxiv.org/pdf/1705.00812.pdf
quantum_rel_entr produces m LMIs of size (n^2 + 1) by (n^2 + 1) and k LMIs of size 2n^2 by 2n^2 each, where m and k are parameters in CVXQUAD which control the accuracy of the matrix log approximation.

So one thing you could do, which might help a little, is to reduce the values of m and k inside the CVXQUAD code, at the expense of some loss of accuracy. But even if that gets a particular value of n to work, the number of variables still scales as n^4 regardless of k and m; reducing them only shrinks the multiplicative constant for each value of n.
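To get a feel for the numbers, here is a small back-of-the-envelope sketch of the LMI sizes quoted above from arXiv:1705.00812 (m LMIs of size n^2 + 1 and k LMIs of size 2n^2). The default m = k = 3 used below is an assumption for illustration, not a claim about CVXQUAD's actual defaults; the helper names are made up for this sketch.

```python
# Sketch of the LMI sizes produced by CVXQUAD's quantum_rel_entr,
# per the sizes quoted above (arXiv:1705.00812, p.3).
# m = k = 3 is assumed here purely for illustration.

def cvxquad_lmi_sizes(n, m=3, k=3):
    """Return (count, size) pairs for the LMIs: m of size n^2+1, k of size 2n^2."""
    return [(m, n**2 + 1), (k, 2 * n**2)]

def total_lmi_entries(n, m=3, k=3):
    """Total number of dense matrix entries across all LMIs (rough proxy for memory)."""
    return sum(count * size**2 for count, size in cvxquad_lmi_sizes(n, m, k))

for n in (2, 8, 64):
    print(n, cvxquad_lmi_sizes(n), total_lmi_entries(n))
```

Note how shrinking m and k scales the totals linearly, while the LMI sizes themselves still grow like n^2, which is why the n^4 variable count is unavoidable.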

There currently is no “magic bullet” for this. Of course, more memory will help.

I have noticed this paper, and I also tried the method you suggested. But my problem concerns quantum bits, so n rises exponentially: 4, 8, 16, …, 1024, 2048, …; thus my computer runs out of memory, and I don’t even know how much memory I would need to solve the problem when n = 2^6.

Then you probably need some approach other than CVXQUAD.

Here is what one CVX forum poster did after discovering quantum_rel_entr doesn’t scale well. https://arxiv.org/pdf/1710.05511.pdf .

BTW, even with SCS, or Super SCS, I think the memory needed with CVXQUAD’s quantum_rel_entr would be O(n^4), which admittedly is better than O(n^8) (I think) with Mosek, but still scales pretty badly. I could be wrong on these scalings, but that is my guess.
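Taking the guessed scalings above at face value (and they are explicitly guesses in the post, not measured facts), a quick calculation shows why n = 2^6 is already hopeless under the interior-point scaling:

```python
# Rough memory estimates at n = 2^6 under the two guessed scalings above:
# O(n^4) 8-byte doubles (first-order method like SCS) versus
# O(n^8) (interior-point forming/factorizing the Newton system).
# Constant factors are ignored; this is only an order-of-magnitude sketch.

def gib(num_doubles):
    """Convert a count of 8-byte doubles to GiB."""
    return num_doubles * 8 / 2**30

n = 2**6
print(f"n = {n}: n^4 -> {gib(n**4):.3f} GiB, n^8 -> {gib(n**8):.0f} GiB")
```

Even ignoring constants, n^8 at n = 64 is about two million GiB, while n^4 is a fraction of a GiB, so the real memory use sits somewhere in between once the hidden constants (which are large for these LMIs) are included.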

I find no words to express my admiration. Thank you again for the paper; I will read it.

You might want to look at DOMAIN-DRIVEN SOLVER (DDS) VERSION 2.0:


It claims to handle quantum relative entropy much better than CVXQUAD. But I’ve never used it, and don’t know much about it.

Per p. 23 of the first link

DDS 2.0 uses the following barrier (not yet known to be s.c.) for solving problems involving quantum relative entropy constraints:
Φ(t, X, Y) := ln(t − qre(X, Y)) − ln det(X) − ln det(Y)

I don’t know what the consequences of using a barrier not known to be self-concordant are. Can that result in an incorrect solution?

Per http://www.optimization-online.org/DB_FILE/2021/05/8387.pdf

Specifically, if we are interested in quantum relative entropy problems where we minimize the trace of X1, as occurs in the context of the Matrix Perspective Reformulation Technique, we may achieve this using the domain-driven solver developed by [DDS paper linked above]… However, we are not aware of any IPMs which can currently optimize over the full quantum relative entropy cone.

I don’t understand why the authors make that last statement, and in particular, what types of quantum relative entropy optimization problem would be outside the scope of DDS. Does that have to do with DDS using a barrier function which is not known to be self-concordant?

Here’s another paper https://arxiv.org/pdf/1906.00037.pdf, and an earlier paper on the algorithm by the same authors http://www.optimization-online.org/DB_FILE/2019/04/7165.pdf, discussing more explicitly the alternating variable fixing for quantum relative entropy.

I leave you to sort through all of this and figure out what’s really going on, because I personally haven’t put in the effort.