I have a convex optimization problem that I can solve well with CVX version 2.2, but as I increase the dimension I run out of memory. I therefore want to use the SCS solver, so I ran my code under CVX version 3.0, but it produced an error. This error does not occur under CVX version 2.2.

Error in reshape (line 16)
Size vector must have at least two elements.

I don't know how to solve this; can you help me? I guess it may be caused by CVXQUAD.

The original code:

a=0.764;
b=0.784;
n = 2;
cvx_begin sdp quiet
variable inc_state(n,1);
variable sig(n,n) diagonal hermitian;
minimize minquantum_re_entropy_Cr_withX(sig,a,b)
subject to
sig == diag(inc_state);
sig >= 0;
trace(sig) == 1;
cvx_end
Cr_min = real(cvx_optval)

Added: running it requires CVXQUAD and the function below.

function RelEnt= minquantum_re_entropy_Cr_withX(sig,a,b)
X = [0, 1; 1, 0];    % Pauli X (the only one used below)
Z = [1, 0; 0, -1];   % Pauli Z (unused)
I = [1, 0; 0, 1];    % identity (unused)
Y = [0, -1i; 1i, 0]; % Pauli Y (unused)
n=2;
cvx_begin sdp quiet
variable rho(n,n) complex hermitian;
minimize quantum_rel_entr(rho,sig)
subject to
trace(rho) == 1;
rho >= 0;
real(trace(X*rho))>= a;
real(trace(X*rho))<= b;
cvx_end
format long
RelEnt=cvx_optval/log(2);
end

Note: CVX 3.0beta is not recommended for anyone due to its numerous bugs. Do NOT use CVXQUAD with CVX 3.0beta. Please use CVX 2.2 instead.

Unfortunately, CVXQUAD’s quantum_rel_entr doesn’t scale well, because it creates LMIs as large as 2n^2 by 2n^2 for original n by n matrices. Thus, I can see why you want to use SCS. But if bugs are occurring under CVX 3.0beta, the best alternative is likely the latest Mosek 9.3 under CVX 2.2.

Even if you got quantum_rel_entr to run without error messages under CVX 3.0beta, I wouldn’t trust the results, because there are several known instances of constraints being ignored under CVX 3.0beta, resulting in incorrect results, even though the solver and CVX claim the problem has been solved.

I also recommend you run without the quiet option until you have everything working well. That way you can see all the solver and CVX output, which will help you diagnose and assess things.

Thanks for your reply. Before asking this question, I was already using Mosek 9.3 under CVX 2.2. Unfortunately, my computer runs out of memory when I set n = 2^6. What should I do if I want a larger n?

Per p.3 of https://arxiv.org/pdf/1705.00812.pdf
quantum_rel_entr produces m LMIs of size (n^2 + 1) by (n^2 + 1) and k LMIs of size 2n^2 by 2n^2, where m and k are parameters in CVXQUAD that control the accuracy of the matrix log approximation.

So one thing you could do, which might help a little, is to reduce the values of m and k inside the CVXQUAD code, at the expense of some loss of accuracy. But even if that lets a particular value of n fit in memory, the number of variables still scales as n^4 no matter the values of k and m; reducing them only lowers the multiplicative constant for each value of n.
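To see what reducing m and k buys you, here is a back-of-the-envelope sketch (in Python, purely for arithmetic) of the LMI bookkeeping, using the counts quoted from the paper above. The default m = k = 3 is my assumption; check your CVXQUAD source for the actual defaults.

```python
# Back-of-the-envelope LMI bookkeeping for CVXQUAD's quantum_rel_entr,
# per the counts quoted above: m LMIs of size (n^2 + 1) and k LMIs of
# size 2n^2, for an n x n input matrix. Illustrative only.

def lmi_sizes(n, m=3, k=3):
    """Return (small LMI size, big LMI size, rough total entry count)."""
    small = n**2 + 1        # size of each of the m LMIs
    big = 2 * n**2          # size of each of the k LMIs
    total_entries = m * small**2 + k * big**2
    return small, big, total_entries

# Growth in n at default accuracy:
for n in (4, 16, 64):
    print(n, lmi_sizes(n))

# Reducing m and k shrinks the constant factor, not the n^4 growth:
print(lmi_sizes(64, m=2, k=2))
```

For n = 64 this already gives LMIs of size 8192, which is consistent with running out of memory at n = 2^6.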

There currently is no “magic bullet” for this. Of course, more memory will help.

I have seen this paper, and I also tried the method you suggested. But my problem involves quantum bits, so n rises exponentially (4, 8, 16, …, 1024, 2048, …); my computer runs out of memory, and I don’t even know how much memory I would need to solve the n = 2^6 case.

BTW, even with SCS or SuperSCS, I think the memory needed with CVXQUAD’s quantum_rel_entr would be O(n^4), which admittedly is better than the O(n^8) (I think) with Mosek, but still scales pretty badly. I could be wrong on these scalings, but that is my guess.
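To make that guess concrete, here is a rough model (my own assumptions, matching the hedged O(n^4) vs. O(n^8) estimates above: a first-order solver stores roughly the 2n^2 by 2n^2 LMI entries, while an interior-point solver forms a dense system on those O(n^4) entries). The constants are guesswork; only the growth rates are the point.

```python
# Rough memory-scaling comparison in double-precision bytes.
# The constants are guesses; only the asymptotic growth matters.

def first_order_bytes(n):
    # ~ entries of a 2n^2 x 2n^2 LMI: O(n^4)
    return (2 * n**2) ** 2 * 8

def interior_point_bytes(n):
    # ~ dense system over those O(n^4) entries: O(n^8)
    return ((2 * n**2) ** 2) ** 2 * 8

for n in (8, 16, 64):
    print(n, first_order_bytes(n), interior_point_bytes(n))
```

Under this crude model, n = 64 needs on the order of half a gigabyte for the first-order storage alone, and the interior-point figure is astronomically larger, which is consistent with the overflow you see at n = 2^6.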

DDS claims to handle quantum relative entropy much better than CVXQUAD. But I’ve never used it, and don’t know much about it.

Per p. 23 of the first link

DDS 2.0 uses the following barrier (not yet known to be s.c.) for solving problems involving quantum relative entropy constraints:
Φ(t, X, Y) := ln(t − qre(X, Y)) − ln det(X) − ln det(Y)

I don’t know what the consequences of using a barrier not known to be self-concordant are. Can that result in an incorrect solution?

Specifically, if we are interested in quantum relative entropy problems where we minimize the trace of X1, as occurs in the context of the Matrix Perspective Reformulation Technique, we may achieve this using the domain-driven solver developed by [DDS paper linked above]… However, we are not aware of any IPMs which can currently optimize over the full quantum relative entropy cone.

I don’t understand why the authors make that last statement, and in particular, what types of quantum relative entropy optimization problem would be outside the scope of DDS. Does that have to do with DDS using a barrier function which is not known to be self-concordant?