Basic question related to the convergence of the algorithm

I have a maximization problem with variables x and y. I do not know whether the problem is jointly convex, but I do know that if one of the variables is fixed, the problem is convex in the other variable. Further, I know that the objective has a finite upper bound. So I wrote an algorithm that alternately optimizes the individual variables (that is, for a fixed value of y it maximizes w.r.t. x, and then for the optimized value of x it maximizes w.r.t. y). This is repeated a number of times. At each iteration the objective value should be non-decreasing. However, when I watch the objective value on the MATLAB screen, I see that at sufficiently high iteration numbers the objective value does decrease. Is it possible for such a thing to occur, or must there be something wrong with my code? (Please note that although the objective value decreases, the decrease is almost negligible.) Your thoughts on the problem may help improve my understanding. Thank you.

Optimization problems are only solved to within a tolerance. So “negligible” fluctuations, including in the “wrong direction”, can occur.
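This effect can be illustrated numerically. The sketch below is Python, not the poster's MATLAB/CVX code, and the toy objective and `tol` perturbation are made-up stand-ins: each coordinate update is solved in closed form and then perturbed to mimic a solver that is only accurate to a tolerance. Late in the run, once the true per-iteration improvement is smaller than the solver error, the reported objective can tick down by an amount on the order of the squared tolerance.

```python
import random

def f(x, y):
    # Toy objective: concave in x for fixed y and in y for fixed x
    # (here jointly concave), with maximum f(2, 2) = 4.
    return -x**2 - y**2 + x*y + 2*x + 2*y

def alternate_maximize(iters=50, tol=1e-6, seed=0):
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    vals = []
    for _ in range(iters):
        # Exact coordinate maximizers are x = (y+2)/2 and y = (x+2)/2;
        # the perturbation stands in for a solver accurate only to `tol`.
        x = (y + 2) / 2 + tol * (rng.random() - 0.5)
        y = (x + 2) / 2 + tol * (rng.random() - 0.5)
        vals.append(f(x, y))
    return vals

vals = alternate_maximize()
drops = [vals[i] - vals[i + 1] for i in range(len(vals) - 1)
         if vals[i + 1] < vals[i]]
# Any "wrong direction" steps are tiny, roughly on the order of tol**2.
print(max(drops) if drops else 0.0)
```

With exact coordinate updates the sequence would be monotone; the decreases here come purely from the simulated solver tolerance, which matches the behavior the poster describes.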


Thank you Mark. One other thing I am noticing: when I run CVX (using the Mosek solver), I get some output on the MATLAB screen with a number of columns, and the entries in the last column are either Solved or Failed. Even when the last column contains some rows with Failed status, the final status that I get at the end of my program is Solved. Is that also normal?

The output you are describing means that CVX's successive approximation method has been invoked. Please read this section.

If you always get Solved at the end, that's what matters. For more reliability, you might want to install CVXQUAD and its exponential.m replacement, as discussed in the 2nd paragraph of my answer at Logistic Regression + linear programming - getting only Nan. MOSEK can still be called as the solver.


Thank you, Mark. :heart_eyes: