My original problem has a convex (affine) objective and non-convex constraints. Thus, I'm using SCP and approximating the non-convex constraints by their first-order Taylor expansion at each iteration, and I stop if the absolute difference between the objective value at the previous and the current iteration is less than a tolerance. However, what's been going on is that after a few iterations the objective value starts to oscillate between two values (i.e., f_1 and f_2), such that |f_1 - f_2| > tolerance. Is there an alternative way to deal with this problem, or is it just one of the possible pitfalls of employing SCP?
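For concreteness, the iteration being described can be sketched on a toy problem (this is an illustrative example I made up, not the poster's actual model): minimize x subject to the non-convex constraint x^2 >= 1, with the constraint linearized at each iterate and the same objective-difference stopping rule.

```python
# Toy SCP loop (illustration only): minimize x  s.t.  x**2 >= 1,  x in [-2, 2].
# The feasible set {x <= -1} ∪ {x >= 1} is non-convex. Write the constraint as
# g(x) = 1 - x**2 <= 0 and linearize at the current iterate x_k:
#   g(x_k) + g'(x_k) * (x - x_k) <= 0
#   => 1 + x_k**2 - 2 * x_k * x <= 0
#   => x >= (1 + x_k**2) / (2 * x_k)   (for x_k > 0)
# Each convexified subproblem is then a 1-D LP with a closed-form solution,
# standing in for the convex solve (e.g. a CVX call) in a real SCP scheme.

def scp(x0, tol=1e-8, max_iter=100):
    x = x0
    f_prev = x  # objective is f(x) = x
    for k in range(max_iter):
        # Solve the linearized subproblem: min x  s.t.  x >= lower bound, x in [-2, 2]
        lower = (1.0 + x * x) / (2.0 * x)
        x_new = min(max(lower, -2.0), 2.0)
        f_new = x_new
        # Stopping rule from the question: small change in objective value
        if abs(f_new - f_prev) < tol:
            return x_new, k + 1
        x, f_prev = x_new, f_new
    return x, max_iter

x_star, iters = scp(x0=2.0)
```

On this particular toy problem the iteration happens to converge (to x = 1, the boundary of the feasible region); the oscillation described in the question is exactly what this loop has no defense against, since nothing forces successive iterates to stay close to each other.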

I don't think there is any guarantee that the SCP scheme will converge at all, let alone to a local minimum of the original non-convex problem.

There might be alternative convexifications which work better on your problem. It may also be that SCP converges only for certain starting values of the Taylor expansion point - have you tried different starting values?

In any event, I think what you are doing is equivalent to an unsafeguarded optimization method (no line search or trust region). You might have more success with a safeguarded non-convex optimizer based on SQP or an interior-point method, or possibly with a global optimizer. Safeguarding can make a big difference in whether nonlinear optimization algorithms (of which SCP is an unsafeguarded version) converge. SCP may work well on certain problems, but it is not a panacea, and there are alternative algorithms which might be better for many problems.

Further help on making your SCP work, beyond help with the CVX portion, is outside the scope of this forum.

It does converge for different starting values, so you're probably right. I'm going to try alternative convexifications and/or add a trust-region constraint. Thanks for the help!