# The solution of CVX satisfies all the constraints, but does not match the objective function

The optimization problem is an SOCP. CVX runs fine (although NaNs sometimes appear). When I substitute the solution back, it satisfies all the constraints, but it is very different from my objective function; in other words, it does not minimize the objective function.
Here’s my code:


Sometimes it has no solution:

```
Calling SDPT3 4.0: 1606 variables, 776 equality constraints
   For improved efficiency, SDPT3 is solving the dual problem.

 num. of constraints = 776
 dim. of sdp    var  =  2,    num. of sdp  blk  = 1
 dim. of socp   var  = 1594,  num. of socp blk  = 5
 dim. of linear var  =  6
 dim. of free   var  =  3 *** convert ublk to lblk

   SDPT3: Infeasible path-following algorithms

 number of iterations   = 21
 residual of dual infeasibility
 certificate X          = 1.04e-09
 reldist to infeas.    <= 2.69e-15
 Total CPU time (secs)  = 2.97
 CPU time per iteration = 0.14
 termination code       = 2
 DIMACS: 6.4e-04  0.0e+00  3.0e-04  0.0e+00  -1.0e+00  5.3e-04

Status: Infeasible
Optimal value (cvx_optval): +Inf
```

Optimization problem:

It appears you are using some type of crude (unsafeguarded) Successive Convex Approximation (SCA), which may be unstable, and is perhaps producing some infeasible problems. You may be better off using an off-the-shelf non-convex nonlinear optimizer.

You might improve the numerical stability of an individual optimization problem a little by changing the constraint to norm(f) <= sqrt(Pt), which might be better conditioned.
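To illustrate the idea (a minimal sketch in Python/NumPy with a made-up power budget Pt, not the original model): the squared form norm(f)^2 <= Pt and the first-order SOC form norm(f) <= sqrt(Pt) accept exactly the same points, but the constant on the right-hand side of the SOC form is much closer to 1, which tends to make the model better scaled for the solver.

```python
import numpy as np

rng = np.random.default_rng(0)
Pt = 1e8                                     # hypothetical power budget
f = rng.standard_normal(4)
f *= 0.5 * np.sqrt(Pt) / np.linalg.norm(f)   # scale f to half the budget

# The two constraint forms are mathematically equivalent ...
assert (np.linalg.norm(f)**2 <= Pt) == (np.linalg.norm(f) <= np.sqrt(Pt))

# ... but the SOC form's constant is far closer to 1:
print(Pt, np.sqrt(Pt))
```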

But fundamentally, SCA might not converge at all, might produce infeasible problems along the way, might diverge with larger and larger magnitude inputs and optimal solutions, among other bad things. If it does converge to anything, it is not necessarily even a local, let alone global, optimum of the original problem. But it might work, depending on the input data, and crucially, depending on the starting value of the variable(s) being iteratively updated.

Papers are full of cockamamie algorithms which often don’t work well in practice. Sometimes the authors have to hunt for input data and starting values which make it work for an example in the paper. Or they only tried it on a problem for which they were lucky.

Also, in the future, please copy and paste code using the Preformatted text icon, rather than posting an image of the code.

Thanks a lot. I tried to convert the constraints to first-order form, but the results were still terrible.

Looking more carefully at your program, I don’t understand what it does.

It appears that no matter how many times the while loop is executed, only two distinct optimization problems are ever solved. The first time through the loop, p_n has the value to which p was initialized. The second time through the loop, p_n has the value to which p was set at the end of the first pass, but that value has nothing to do with the results of the optimization problem which was just solved. That value of p does not appear to depend on anything which happened earlier in that pass through the loop, i.e., it is always set to the same thing. So that second value of p_n will be the same for the third and subsequent passes through the loop. Hence, the CVX optimization problem instance is the same for the 2nd, 3rd, 4th, etc. passes through the loop.

Sorry, I didn’t organize the program very well. Here the second subproblem is a least-squares (LS) problem in p. The paper uses an alternating minimization method to optimize the variables f and p. In the LS problem, the value of p is related to f (f1 + f2 + f3 = S*f). In line 96 of the code, I update the value of p.

Well, I guess you didn’t show all the code, such as where p gets optimized, because in the code you showed, other than the first time through the loop, it is set to a value which appears to depend only on constants. Anyhow, alternating minimization can be precarious.

Thank you for your reply. But in line 96 of the code, p = pinv(D*A)*D*fi*S*f, which is where p is optimized. It’s a direct application of the LS problem.

That is how I incorrectly read your code for my first post. Hence my referring to it being SCA (I incorrectly read p as a least-squares solution whose input involved the just-solved CVX optimization problem).

But that is not the code you showed in your original post, which is
p = pinv(D*A)*D*fi*(f1+f2+f3);
That does not involve the optimization variable f, and therefore is a constant. Hence, my 2nd post, which was based on my more carefully reading your code and seeing that f did not appear on the RHS of the assignment for p.

So now that you have shown the “correct” code, you are left with an alternating optimization algorithm. I don’t know what the paper says about provable convergence, stability, etc. of that algorithm, but many such algorithms do not reliably converge for all problems.
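For intuition, here is a minimal sketch of alternating minimization on a hypothetical toy problem (rank-1 matrix fitting, not the paper’s algorithm): each subproblem is an exact least-squares solve in one variable block, and for this particular toy the iteration happens to converge. In general, as noted above, such convergence is not guaranteed.

```python
import numpy as np

# Toy alternating minimization: minimize ||M - x y^T||_F^2 over x and y.
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(5), rng.standard_normal(4))  # exactly rank 1
y = rng.standard_normal(4)        # the starting point matters in general
for _ in range(50):
    x = M @ y / (y @ y)           # exact minimizer over x with y fixed
    y = M.T @ x / (x @ x)         # exact minimizer over y with x fixed
residual = np.linalg.norm(M - np.outer(x, y))
```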

BTW, kudos for using pinv(A)*b to solve least squares problem A\b. That immediately became my favorite way of solving linear least squares problems when I first used an ancient version of MATLAB (written in FORTRAN) 40 1/2 years ago.
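To illustrate with made-up data (a sketch in Python/NumPy, the analogue of the MATLAB expressions above): pinv(A)*b reproduces the same least-squares solution as a standard solver.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))   # overdetermined system, made-up data
b = rng.standard_normal(6)

x_pinv = np.linalg.pinv(A) @ b                   # MATLAB: pinv(A)*b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)  # MATLAB: A\b

assert np.allclose(x_pinv, x_lstsq)
```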

Thanks for the reply. I’m sorry for the inconvenience, but my problem is still there: from my final results, CVX does not give a satisfactory solution; the objective function is not minimized. Even after many iterations (>1000) it is still not close to the target. The figure below shows the result after ten iterations.

Perhaps it is the paper’s algorithm which is deficient, not “CVX”. I warned you that it might not converge to anything, let alone the right thing.

Thank you. I’ll check the algorithm again. It really gets me down.

By the way, why does converting to the first-order form norm(f) <= sqrt(Pt) improve the stability of a single solve, and does that apply to all second-order problems?

norm() is a nicer function that behaves like a linear function, e.g.

norm(a*x) = |a|*norm(x)

whereas

norm(a*x)^2 = |a|^2*norm(x)^2

behaves like the quadratic function it is.

Also the number

sqrt(Pt)

is closer to 1 than

Pt

which makes your model better scaled.
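A quick numerical check of the two scaling identities above, with a made-up vector:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(5)
a = -3.0

# The norm scales linearly in |a|, like a linear function ...
assert np.isclose(np.linalg.norm(a * x), abs(a) * np.linalg.norm(x))
# ... while the squared norm scales quadratically, like |a|^2:
assert np.isclose(np.linalg.norm(a * x)**2, a**2 * np.linalg.norm(x)**2)
```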

What is your argument that a squared norm is better than the norm?