Unsafeguarded Successive Convex Approximation (SCA) — i.e., with no line search or trust region — and, more generally, alternating-variables optimization are unreliable. The iterates might not descend (for a minimization problem), so they can actually get worse. They might not converge to anything; and if they do converge, it might not be to a local optimum of the original problem, let alone a global optimum. The solutions of successive iterations, which become the input data for the next subproblem, can grow wilder and wilder until, at some point, the solver fails or erroneously declares the subproblem infeasible or unbounded.
https://twitter.com/themarklstone/status/1586795881168265216
Don’t apply crude, unsafeguarded (no Trust Region or Line Search) Successive Convex Approximation (SCA) to a new problem … unless your name happens to be Stephen Boyd.
There’s a reason high quality non-convex nonlinear optimization solvers are more than 10 lines long.
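To make the failure mode concrete, here is a toy sketch of my own construction (not from the post above): naive SCA applied to a one-dimensional nonconvex problem, where the linearized surrogate is not an upper bound on the true objective, so nothing guarantees descent.

```python
import numpy as np

# Toy illustration: minimize f(x) = x^2 + 10*sin(x) by naive SCA.
# At each iterate x_k, replace sin(x) with its tangent line
#   sin(x_k) + cos(x_k)*(x - x_k)
# and minimize the resulting convex surrogate exactly, with no trust
# region.  Setting the surrogate's derivative 2x + 10*cos(x_k) to zero
# gives the closed-form update x_{k+1} = -5*cos(x_k).

def f(x):
    return x**2 + 10.0 * np.sin(x)

def sca_step(x_k):
    # Exact minimizer of the linearized surrogate over all of R.
    return -5.0 * np.cos(x_k)

x = 0.0
history = [x]
for _ in range(10):
    x = sca_step(x)
    history.append(x)

vals = [f(xk) for xk in history]
# The tangent line is not a global upper bound on sin, so there is no
# descent guarantee: f jumps from 0.0 up to about 34.6 on the very first
# step, and the iterates keep oscillating instead of settling down.
print([round(v, 2) for v in vals])
```

Adding even a crude trust region to the subproblem (e.g., the constraint |x - x_k| <= delta, with delta shrunk whenever the true objective fails to decrease) bounds the step and restores a descent property; that safeguard is precisely what the quoted warning is about.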
Bad initial scaling might be making things worse. Try changing the units so that all input data is within a small number of orders of magnitude of one. But even if the input data for the first SCA iteration is well scaled, the input data on later iterations might become badly scaled, because SCA can produce wilder and wilder solutions, which then become the input data for the next iteration.
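As a small made-up illustration of the scaling advice (the matrix and units here are my own, purely hypothetical): a data matrix whose columns are expressed in wildly mismatched units is badly conditioned for no good reason, and simply rescaling so that all entries are within a few orders of magnitude of one fixes it.

```python
import numpy as np

# Hypothetical least-squares data matrix: first column ~1e6 (say,
# lengths recorded in millimeters), second column ~1e-3 (the same kind
# of quantity recorded in kilometers).  The unit mismatch alone makes
# the matrix numerically nasty.
A = np.array([[1.0e6, 2.0e-3],
              [3.0e6, 1.0e-3]])
print(np.linalg.cond(A))          # huge, on the order of 1e9

# Rescale each column to unit norm -- equivalent to changing units so
# all entries are within a few orders of magnitude of one.
A_scaled = A / np.linalg.norm(A, axis=0)
print(np.linalg.cond(A_scaled))   # modest, single digits
```

If you scale columns this way, remember to divide the recovered solution components by the same scale factors to get the answer back in the original units.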
Mosek might do better than SDPT3 at handling numerically difficult CVX optimization problems. But even Mosek can't make SCA always work well, because the SCA algorithm itself is unreliable, no matter which solver is used.