Using CVX to solve a convex maximization problem iteratively but yielding decreasing answers

Here I have a maximization problem that needs to be solved with CVX, and I have expressed it in standard form. I tried to solve it iteratively, since CVX may yield locally optimal points. However, when I implemented it in Matlab using a feasible starting point x_ini, I found that each iteration produces a decreasing result. Is anything wrong in my code?

while 1
    x = (H*x_ini)';           % x_ini is an initialized feasible point; x is 1x16
    cvx_begin quiet
        variable x_opt(16,1) nonnegative;
        % x_opt is the optimization variable, e.g.
        % [a11 a12 a13 a14 b11 b12 b13 b14 c11 c12 c13 c14 d11 d12 d13 d14]'
        variable d nonnegative;   % d is the objective variable
        expression x_Con(1,16);

        maximize(d);              % objective function

        subject to
            KronI*x_opt - [4 4 4 4]' == 0;
            % i.e. a11+a12+a13+a14 == 4, b11+b12+b13+b14 == 4,
            %      c11+c12+c13+c14 == 4, d11+d12+d13+d14 == 4
            x_Con = (H*x_opt)';   % H is a 16x16 diagonal matrix
            for k = 1:120
                2*x*E(:,:,k)*(x_Con') - x*E(:,:,k)*(x') >= d;
                % E is a 16x16x120 array used to compute
                % norm(x(i)-y(j),'fro'), i,j = 1,...,16
            end
    cvx_end
    if norm(x_opt./x_ini - 1) < 0.001   % stop when successive iterates agree
        break;
    else
        x_ini = x_opt;        % otherwise use the new point as the next start
    end
end

It appears you are trying to solve a (non-convex?) problem via an iterative series of (convex) CVX problem solutions, perhaps along the lines of stephen_boyd’s answer in How to handle nonlinear equality constraints? . Start by reading that, if you haven’t already; of course the disclaimers there apply. But you have 120 turned-around (wrong-way) inequalities. Have you considered the possibility that, given your initial value of x_ini, your algorithm, even if nominally correct in some sense, is diverging, and if so, have you tried other initial values? If your actual problem is non-convex, the best you can hope for from an iterative approach such as this is a local optimum, and the local optimum found, if any, may depend on the initial value. What is the actual problem you’re trying to solve via this iterative approach?

Does your actual problem have the constraint
x*E(:,:,k)*(x') == d for k = 1:120?
That is of course non-convex. If that is what you are trying to handle by iterating on (convex) inequalities, then why do you have a multiplier of 2 on your >= constraint (and why are you not dividing at each iteration by the appropriate norm)? Is it even feasible, given that the constraint must be satisfied simultaneously for 120 values of k? So I withhold judgment until you tell us what you’re actually trying to solve, but it looks nasty.

CVX only addresses convex problems, so barring (numerical) solution difficulties, it (the solver) should find the global optimum; indeed, convex problems have no non-global local optima. So I presume the actual reason for your iterative approach is that your actual problem is non-convex. Also, given that you are getting results you don’t understand, you may want to avoid the quiet option until you do understand what’s going on.

I am a freshman in CVX and really appreciate your reply!
I am indeed trying to solve a non-convex problem. More precisely, it belongs to the class of non-convex quadratically constrained quadratic programming (QCQP) problems, and my actual problem has the constraint
min( x*E(:,:,k)*(x') for k = 1:120 ) >= d
In my Matlab code, in order to find the optimum, more than 10,000 initial feasible points are randomly generated. However, some initial points produce the problem I mentioned above (and some even send the program into an infinite loop).
As for your question about why I have a multiplier of 2 on my >= constraint, it comes from
(x1-x2)*E(:,:,k)*((x1-x2)') = x1*E(:,:,k)*(x1') - 2*x2*E(:,:,k)*(x1') + x2*E(:,:,k)*(x2') >= 0
which gives me an affine lower bound with which to linearize the min constraint above.
Since I am a freshman in convex optimization, I hope I can get help or references from you and others.
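[The bound above follows from expanding (x1-x2)*E*(x1-x2)' >= 0 for positive semidefinite E: the affine function 2*x2*E*x1' - x2*E*x2' underestimates the convex quadratic x1*E*x1' and is tight at x1 = x2. A quick numerical check, sketched in Python/NumPy rather than MATLAB, with made-up data:]

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
A = rng.standard_normal((n, n))
E = A.T @ A                      # positive semidefinite by construction

x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)      # x2 plays the role of the previous iterate

quad = x1 @ E @ x1                            # the convex quadratic x1'*E*x1
linear = 2 * x2 @ E @ x1 - x2 @ E @ x2        # its affine minorant at x2

# (x1-x2)'E(x1-x2) >= 0 rearranges to quad >= linear
assert quad >= linear - 1e-9

# the bound is tight when the linearization point equals x1
tight = 2 * x1 @ E @ x1 - x1 @ E @ x1
assert abs(quad - tight) < 1e-9
```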

Look at How to handle nonlinear equality constraints? , http://stanford.edu/class/ee364b/lectures/seq_slides.pdf , and http://stanford.edu/~boyd/papers/cvx_ccv.html .

Are your E(:,:,k) all psd? If so, at least your problem has now been stated in a somewhat more understandable fashion than originally, so maybe someone can come along and provide more specific guidance, or direct you to a more appropriate forum, if you still need it.

Presuming your E(:,:,k) are positive definite, I think you can directly adapt the stephen_boyd solution in How to handle nonlinear equality constraints? to your problem by letting Ck = chol(E(:,:,k)). Assuming x is a column vector, your constraints are norm(Ck*x) >= sqrt(d). So for each k, impose the linearized inequality qk'*Ck*x >= sqrt(d), and update qk = Ck*x_previous/norm(Ck*x_previous), where x_previous is the old value of x. Instead of actually using sqrt(d), which would cause a problem, just use d; then, when you find the optimal d for your overall problem, take the sqrt.

There may be a better way to do this, as I am no expert in this area, but this approach worked very well on a sample one-inequality problem I tried. I don’t know how well it will work when you have 120 simultaneous inequalities and qk updates. I leave you the task of integrating this into your overall problem, since I don’t understand which parts of what you have done correspond to the problem you really want to solve versus supporting your linear approximation (for instance, x_Con = (H*x_opt)').
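[A minimal sketch of this one-inequality CCP iteration, written in Python/SciPy for concreteness rather than MATLAB/CVX; the problem data (E, the sum-to-4 constraint, the starting point) are made up for illustration. Since the linearized constraint qk'*Ck*x >= d is affine, each inner subproblem here reduces to an LP:]

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: one wrong-way inequality norm(C*x) >= d, a single
# linear equality sum(x) == 4, and x >= 0.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
E = A.T @ A + 0.1 * np.eye(n)        # positive definite
C = np.linalg.cholesky(E).T          # upper-triangular, C' * C == E

x = np.ones(n)                       # feasible start (sums to 4)
d_vals = []
for _ in range(50):
    q = C @ x / np.linalg.norm(C @ x)   # linearization direction q_k
    # Inner convex subproblem in z = [x; d]:
    #   maximize d   s.t.  q'*C*x >= d,  sum(x) == 4,  x >= 0,  d >= 0
    c = np.zeros(n + 1)
    c[-1] = -1.0                        # linprog minimizes, so use -d
    A_ub = np.concatenate([-(q @ C), [1.0]])[None, :]   # d - q'*C*x <= 0
    A_eq = np.concatenate([np.ones(n), [0.0]])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=[0.0], A_eq=A_eq, b_eq=[4.0],
                  bounds=[(0, None)] * (n + 1))
    x_new, d = res.x[:n], res.x[-1]
    d_vals.append(d)
    if np.linalg.norm(x_new - x) < 1e-6:   # iterate stopped moving
        x = x_new
        break
    x = x_new

# CCP guarantees the objective is nondecreasing across iterations
assert all(b >= a - 1e-7 for a, b in zip(d_vals, d_vals[1:]))
# at a fixed point, d equals norm(C*x), the quantity being bounded below
assert abs(d_vals[-1] - np.linalg.norm(C @ x)) < 1e-6
```

[With 120 simultaneous inequalities, the same outer loop would update all 120 qk from the common x_previous each iteration, and the inner problem would carry one such affine constraint per k.]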

Thanks again for your enthusiastic help!
Actually, my E(:,:,k) are positive semidefinite, and each can also be written as E(:,:,k) = (R')*R. So I will seriously consider your constructive advice.
Since I expressed my model in too complex a form before, I now simplify it as follows.
Obj. maximize (d);
s.t. f1(x) = A;
fi(x) >= d, i=2,…,M;
where x is a vector, A is a constant matrix, d and M are both scalars, and fi (i=1,…,M) are convex functions.
This problem seems to have two optimization variables (x and d). So when using the CVX toolbox to solve it, I declare both x and d as variables between cvx_begin and cvx_end, while only using the initial point of x as an initial feasible point. Is this correct, and is this still a problem that can be solved by the convex-concave procedure (CCP)?

What I called qk in my post will have to be initialized by you, for each k. However, initial values for the optimization variables d and x are not needed by CVX, and in fact are ignored by CVX (and whatever solver it calls).

You can try using CCP on fi(x) >= d, along the lines of my suggestion, but I have no idea how well it will work on your problem. The constraint f1(x) == A is fine if f1(x) is affine; if not, it will not be accepted by CVX, and is unlikely to be convex. If f1(x) is convex but not affine, I can suggest, with no idea whether it has any chance of working: specify the inequality f1(x) <= A entered straightforwardly in CVX, combined with an inequality in the other direction, f1(x) >= A, to be treated via CCP as above; again, let me state my complete lack of confidence as to whether this approach might work. Someone else might be able to make a better assessment than I can of this approach for f1(x) == A. Can you tell us what f1(x) is?

I think f1(x) is affine, since its specific form is shown in the Matlab code at the beginning of my question (following the “subject to” line: KronI*x_opt == [4 4 4 4]').

As I thought my problem was a convex optimization problem, I used CVX to solve it. However, unlike what Stephen Boyd says in his paper (the left side of formula (2), (f0(x_k) - g0(x_k)) - (f0(x_{k+1}) - g0(x_{k+1})) <= Delta, should always be nonnegative), what I get is a quantity that sometimes becomes negative. That is the thing that confuses me most.

O.k., that’s good news about f1(x) being affine, so you can just enter it in CVX in a straightforward manner.

As for the 120 “wrong direction” inequalities to be handled by CCP, I guess you won’t know how well it works until you try it, and its convergence or lack thereof could depend on the initial values of all 120 of the qk. Can you start off with just one of the fi, i = 2,…,M inequalities, and, if CCP converges, try working your way up to all 120? To be successful with CCP, you need all 120 qk to converge; what I don’t know is whether there is any (deleterious) coupling between the convergence of the qk for different k, due to the common x and d underlying all of them, with d changing over the course of the (outer) CCP iterations and individual qk therefore being pulled in different directions.

It seems that the circle packing problem Stephen Boyd mentions in his paper looks similar to my model, and I think I could take inspiration from it. However, on the website he only shares a Python example of it. Would you help me translate it into MATLAB code using the CVX toolbox, like the other examples mentioned in the same paper?

Yes, your problem appears to have some similarity with the circle packing problem. I’m not the right guy to translate the solution for that problem to yours, or to figure out what custom tricks might be needed to help with yours. As I pointed out, your problem may be difficult due to having 120 simultaneous wrong-way inequalities. And note that your problem is NOT convex; it is a perhaps difficult non-convex problem, even though the CCP generates inner convex problems. In my previous post, I recommended starting with a single wrong-way inequality constraint and trying to get the CCP to converge. If you can do that, then try working your way up. I don’t know if that’s possible, or which tricks might be needed to do so.

After correcting my MATLAB code, I get the optimized result (i.e., cvx_optval) in every iteration, and it changes as follows.
-2.887450e-06
-1.053837e-05
-1.492328e-05
-1.573030e-05
-1.573322e-05
-1.573318e-05
-1.573320e-05
-1.573325e-05
-1.573318e-05
-1.573322e-05
-1.573322e-05
-1.573321e-05
-1.573322e-05
-1.573335e-05
-1.573317e-05
-1.573321e-05
-1.573321e-05
-1.573321e-05
-1.573324e-05
-1.573322e-05
-1.573317e-05
-1.573322e-05
-1.573320e-05
-1.573325e-05
Maybe the iteration keeps going forever because of the improper stopping criterion I adopted. But do these changes mean that the CCP algorithm has converged?
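[One common stopping rule for an outer loop like this (a sketch only; the tolerance values are arbitrary, not from the thread) compares successive objective values with a combined absolute/relative test:]

```python
def ccp_converged(d_prev, d_curr, abs_tol=1e-8, rel_tol=1e-6):
    """Stop when successive objective values agree to within
    an absolute or relative tolerance."""
    return abs(d_curr - d_prev) <= abs_tol + rel_tol * abs(d_prev)

# With the values listed above, the objective has flattened out
# near -1.5733e-05, so this rule would terminate the loop:
assert ccp_converged(-1.573322e-05, -1.573321e-05)
# ...but would keep iterating during the early, still-moving phase:
assert not ccp_converged(-2.887450e-06, -1.053837e-05)
```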

Looks like a scaling problem. Frankly, as far as most of the solvers are concerned, those optimal values are all “near zero”.

An optimal objective value (d) of zero (is that what you’re showing here?) is the worst possible value, given that the problem is feasible (i.e., the linear constraints are feasible), because all your E(:,:,k) are positive definite. However, that does not show that your CCP found the global optimum of your actual optimization problem. So perhaps there is some d > 0 which is also feasible but which you haven’t found (maybe there is some starting value of the variables for which CCP will converge to an optimal d > 0). I suggest again that you try the CCP with one inequality first, then work your way up and see what happens. Can you get a d > 0?