I use CVX Version 2.1, Build 1127 on MATLAB R2018b and downloaded CVXQUAD today. As it should be, the equality
-sum(rel_entr(1,1+p.*lambda)) = sum_log(1+p.*lambda)
holds numerically, so the cost functions are the same not only in theory but also in practice (even though what goes on under the hood is completely different). I also have my own custom-made primal-dual algorithm that gives the same result as the original CVX solver, so I trust that one. Is this a bug in CVXQUAD, or did I overlook something obvious?
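For concreteness, the identity is easy to check on random numeric data, since rel_entr(x,y) = x.*log(x./y) and hence -rel_entr(1,1+z) = log(1+z) (a quick sketch; the data here is illustrative, not from my actual problem):

```matlab
% Numeric sanity check of the identity; rel_entr here is evaluated on
% plain doubles, not on CVX expressions.
p      = rand(10,1);
lambda = rand(10,1);
lhs = -sum(rel_entr(1, 1 + p.*lambda));   % rel_entr(x,y) = x.*log(x./y)
rhs =  sum(log(1 + p.*lambda));           % what sum_log evaluates to numerically
assert(abs(lhs - rhs) < 1e-12);
```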

@mcg This does appear to be a bug somewhere. Thanks for reporting it. I have no idea whether the bug is in CVX or CVXQUAD (although CVXQUAD's Pade approximant is not invoked). The original version produces a much better "optimal" objective value than what you call the "CVXQUAD" version. Both solutions satisfy the constraints, but the "CVXQUAD" version has no active constraints, and its solution can clearly be seen to be non-optimal, even locally. In fact, the CVXQUAD Pade approximant is not being invoked for the "CVXQUAD" version; it is still using the Successive Approximation method, at least on my machine, and I don't know why…

For the random problem instance I tried, the "CVXQUAD" (but not really) version had an optimal p of 0.0556 0.0500 0.3569 0.0496 0.0717 0.4646 0.1627 0.0829 0.0473 0.0470,
producing an objective value (I used the original maximization, which has a negative sign in front of rel_entr) of 0.4432. That is the same value as obtained by plugging that p into the objective function from the original version, i.e., sum(-rel_entr(1,1+p.*lambda)) = sum_log(1+p.*lambda) = 0.4432. So it appears that these functions are correctly evaluated when provided with numeric (not CVX expression) arguments. I don't know whether they are correctly evaluated with CVX expression arguments.

I leave it to you to try out alternative formulations for use in CVXQUAD, as described in mcg's post at Solve optimization problems of exp function. Please let us know what happens when you try these other ways of dealing with the log.

As a follow-up, it turns out that the problem isn't directly related to CVXQUAD; it also appears if I use the sum(rel_entr(…)) formulation of the cost function with the original CVX implementation. On the other hand, redefining the cost function using extra variables and explicit use of the exponential cone construct seems to work correctly:
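The reformulation I mean is along these lines (a sketch only, since my original script isn't reproduced here; the power-budget constraints are assumptions). It uses t(i) <= log(1+p(i)*lambda(i)), i.e. exp(t(i)) <= 1+p(i)*lambda(i), expressed via CVX's exponential cone set:

```matlab
% Sketch: maximize sum(log(1+p.*lambda)) via explicit exponential cones.
% The constraints sum(p) <= 1 and p >= 0 are assumed stand-ins for the
% constraints in my actual problem.
n = length(lambda);
cvx_begin
    variables p(n) t(n)
    maximize( sum(t) )
    subject to
        % {x,y,z} == exponential(n) means y.*exp(x./y) <= z elementwise,
        % so with y = 1 this enforces exp(t) <= 1 + p.*lambda
        { t, ones(n,1), 1 + p.*lambda } == exponential(n);
        sum(p) <= 1;
        p >= 0;
cvx_end
```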

(I'm currently on another computer where I haven't yet installed CVXQUAD, so the above has only been tested using plain CVX for the moment.)

Some possible hints on what goes wrong with the sum(rel_entr(1,…)) formulation are:

The solution found by CVX gives equal values for all terms of the sum (i.e. all elements of p.*lambda are equal), and I get the same solution if I replace minimize sum(rel_entr(1,1+p.*lambda));
by minimize max(rel_entr(1,1+p.*lambda));

The value of cvx_optval also corresponds to max(rel_entr(1,1+p.*lambda)) rather than to sum(rel_entr(1,1+p.*lambda)).

I don't understand the inner workings of CVX well enough to trace it down further.

I think I have located the bug. The problem is that rel_entr doesn't correctly handle the case where the first argument is a scalar and the second is a vector. With the current version of CVX, the simple workaround is to replace sum(rel_entr(1,x)) with sum(rel_entr(ones(size(x)),x)), i.e. in my example code, to write
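Applied to the running example, the workaround looks roughly like this (a sketch, since my original script isn't shown here; the constraints are assumed placeholders):

```matlab
% Workaround: broadcast the scalar first argument of rel_entr explicitly,
% avoiding the scalar/vector code path that triggers the bug.
n = length(lambda);
cvx_begin
    variable p(n)
    minimize( sum(rel_entr(ones(n,1), 1 + p.*lambda)) )
    subject to
        sum(p) <= 1;   % assumed power budget
        p >= 0;
cvx_end
```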

The bug seems to go away if I edit the file cvx/functions/@cvx/rel_entr.m and move the line
sz = size( x );
from line 46 to above the current line 10. I have no idea whether this might break other use cases, but it was clearly suspicious that lines 10-16 set the variable sz for some special cases, only for that to be overridden by another assignment on line 46 before the variable was actually used.