I am currently working on optimizing an objective function which contains quantum relative entropy (sometimes referred to as matrix relative entropy), described at https://en.wikipedia.org/wiki/Quantum_relative_entropy.
The above may be written in MATLAB code as
trace(rho*logm(rho) - rho*logm(zrho))
where rho and zrho are density matrices.
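For readers who want to check the definition numerically, here is a small Python/SciPy sketch (an illustration only, not CVX code) that evaluates the same trace expression with the matrix logarithm:

```python
import numpy as np
from scipy.linalg import logm

def quantum_rel_entr(rho, sigma):
    """S(rho || sigma) = tr(rho*(logm(rho) - logm(sigma))),
    using the matrix (not element-wise) logarithm."""
    return np.real(np.trace(rho @ (logm(rho) - logm(sigma))))

# Two 2x2 density matrices: symmetric, positive semidefinite, trace one.
rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])
sigma = np.eye(2) / 2          # the maximally mixed state
```

By Klein's inequality the value is nonnegative, and it is zero exactly when the two states coincide.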
Quantum relative entropy is known to be jointly convex, but it seems that CVX is currently unable to support such a function. The question I would like to pose is: how would one implement quantum relative entropy in CVX?
In the scalar case, you're all set: use CVX's rel_entr.
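For context, the scalar function is rel_entr(x, y) = x*log(x/y); when the two density matrices commute, the quantum quantity reduces to a sum of these scalar terms over the eigenvalues. A minimal NumPy sketch of the scalar case (illustration only, not CVX code):

```python
import numpy as np

def rel_entr(x, y):
    """Scalar relative entropy x*log(x/y), elementwise,
    matching the definition CVX's rel_entr uses."""
    return x * np.log(x / y)

p = np.array([0.6, 0.4])
q = np.array([0.5, 0.5])
kl = np.sum(rel_entr(p, q))    # classical KL divergence D(p || q)
```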
In the matrix case, I think https://en.wikipedia.org/wiki/Trace_inequalities#Joint_convexity_of_relative_entropy shows the convexity of your expression, presuming I am correctly interpreting the article as applying to the matrix log (as opposed to the element-wise log) and presuming that rho and zrho are both positive semidefinite. I think what you need in this case is not support for logm, but support for quantum relative entropy, which perhaps might also be known as matrix relative entropy.
mcg can provide the definitive prognosis when he comes around.
I hate to be the bearer of bad news but it’s not possible to add this to CVX. The computational architecture just doesn’t support it, and the solvers don’t either. A version of CVX that handles it would end up being an entirely new package.
Take a look at CVXQUAD (https://github.com/hfawzi/cvxquad/), which supports
quantum_rel_entr(X,Y), defined as
trace(X*(logm(X)-logm(Y))). It uses a (supposedly) more accurate approximation of the exponential cone than CVX's successive approximation method. Per the link, it only requires swapping out one CVX file from a CVX 2.1 installation.
I have not tried it and can’t vouch for it.
I had previously come across CVXQUAD. Unfortunately, I found that its run time and memory usage meant I could only consider trivial instances of the problems of interest.
We later developed a technique for minimizing quantum relative entropy, outlined in https://arxiv.org/abs/1710.05511. Essentially, we minimize the objective function via the Frank-Wolfe algorithm and then apply some additional machinery to get a reliable lower bound on the relative entropy. The method has worked quite well in practice; however, there are still some issues with the rate of convergence of the Frank-Wolfe algorithm.
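For readers unfamiliar with the approach, here is a generic Frank-Wolfe sketch in Python/NumPy for minimizing S(X || sigma) over the set of density matrices; it illustrates the bare algorithm only, not the construction in the paper. The appeal of Frank-Wolfe here is that the linear subproblem over density matrices is solved by a single eigenvector computation:

```python
import numpy as np
from scipy.linalg import logm

def quantum_rel_entr(X, sigma):
    return np.real(np.trace(X @ (logm(X) - logm(sigma))))

def frank_wolfe_qre(sigma, n_iter=300):
    """Minimize S(X || sigma) over density matrices with Frank-Wolfe.
    The minimizer is X = sigma itself (objective value 0), which makes
    this toy problem a convenient correctness check."""
    n = sigma.shape[0]
    X = np.eye(n) / n                      # start at the maximally mixed state
    log_sigma = logm(sigma)
    for k in range(n_iter):
        # Gradient of tr(X logm X) - tr(X logm sigma) is logm(X) - logm(sigma) + I.
        grad = np.real(logm(X) - log_sigma + np.eye(n))
        grad = (grad + grad.T) / 2         # symmetrize against round-off
        w, V = np.linalg.eigh(grad)        # eigenvalues in ascending order
        v = V[:, 0]                        # eigenvector of the smallest eigenvalue
        S = np.outer(v, v)                 # Frank-Wolfe vertex: a rank-one density matrix
        gamma = 2.0 / (k + 3)              # diminishing step; keeps X positive definite
        X = (1 - gamma) * X + gamma * S
    return X

sigma = np.diag([0.7, 0.3])
X = frank_wolfe_qre(sigma)
```

Each iterate is a convex combination of density matrices, so trace one and positive semidefiniteness are preserved automatically; the slow O(1/k) convergence of this basic scheme is the rate issue mentioned above.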
That sounds great, @awinick. I don’t think you’ll ever get a satisfactory result out of CVX simply due to its architecture and inherent limitations. Perhaps a different first-order method will give you the best of both worlds.