For a feasibility-check problem, if I use SCA (successive convex approximation) for one of the non-convex constraints, do I have to finish the whole SCA process?

I’m dealing with a feasibility-check problem, but one of the constraints is non-convex, so I used SCA (successive convex approximation) to make it convex. Since this is a feasibility-check problem, if the first iteration of SCA is feasible, does that mean the problem is feasible? (If so, I don’t need to finish the whole SCA process to find the values of the variables, because all that matters to me is whether the problem is feasible.)


I think the only way you can determine feasibility of the problem is to evaluate a candidate feasible point against the original constraints, including any non-convex constraints. Deciding when, in the course of running SCA, to evaluate a point against the original constraints is up to you. Failure of SCA to converge to a feasible point does not prove that the original non-convex problem is infeasible. If the first (or a later) iteration of SCA results in CVX declaring that sub-problem to be infeasible, what are you going to do? How would you update SCA other than by trying a different initial (starting) point?
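For concreteness, here is a minimal MATLAB sketch of such a check, assuming (purely for illustration; none of this is from the original post) that the original problem has convex constraints A*x <= b plus one non-convex constraint h(x) <= 0:

% Check a candidate point xc (e.g., the current SCA iterate) against the ORIGINAL constraints
tol = 1e-8;                              % numerical feasibility tolerance (illustrative)
A   = randn(8, 4);  b = ones(8, 1);      % placeholder convex constraint data
h   = @(x) sum(x.^2) - sum(abs(x));      % placeholder non-convex (DC-style) constraint function
xc  = zeros(4, 1);                       % candidate point to be tested
is_feasible = all(A*xc <= b + tol) && (h(xc) <= tol);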

If you use whatever algorithm and find a point which is feasible for the original problem, you have shown the original problem is feasible. Given at least one non-convex constraint, the only way to reliably conclude infeasibility is if a global optimizer (or constraint solver) reports infeasibility.

Of course, one thing you can try is to run CVX on the problem with the non-convex constraints omitted. If that relaxed problem is reported infeasible, then so is the original problem. Any candidate feasible solution from the CVX problem can be evaluated against the non-convex constraints to assess its feasibility for the original problem. This can also be done at any point in the course of running SCA.
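As a sketch of that workflow in CVX (the constraint data and the non-convex function h below are illustrative placeholders; the model is a pure feasibility problem, i.e., it has no objective):

n = 4;
A = randn(8, n);  b = ones(8, 1);        % placeholder convex constraints A*x <= b
h = @(x) sum(x.^2) - sum(abs(x));        % placeholder omitted non-convex constraint h(x) <= 0

cvx_begin quiet                          % feasibility problem with the non-convex constraint omitted
    variable x(n)
    subject to
        A*x <= b;
cvx_end

if strcmp(cvx_status, 'Infeasible')
    disp('Relaxation infeasible, so the original problem is infeasible.');
elseif h(x) <= 1e-8                      % evaluate the candidate against the omitted constraint
    disp('Candidate satisfies the non-convex constraint: original problem is feasible.');
else
    disp('Inconclusive: try other candidate points, e.g., SCA iterates.');
end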


I think you are right, but my problem is a little bit special. The non-convex constraint has a DC (difference-of-convex) form and must be <= 0. I used a first-order Taylor approximation to make it convex, which means the transformed convex constraint is an upper tangent of (a global over-estimator of) the original non-convex function. Because the constraint must be <= 0, the transformed convex constraint narrows the feasible set.

So, if the first iteration of SCA is feasible, that shows the original problem is feasible. But the narrowed feasible set depends on the starting point of SCA, so if the first iteration of SCA is infeasible, we can't say that the original problem (with the non-convex constraint) is infeasible.
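To make this argument explicit, here is the standard inner-approximation reasoning, assuming (as the post suggests but does not state) that the constraint can be written as $f(x) - g(x) \le 0$ with $f$ and $g$ convex. By convexity of $g$, $g(x) \ge g(x_0) + \nabla g(x_0)^\top (x - x_0)$ for every $x$, so the linearized convex constraint
$$f(x) - g(x_0) - \nabla g(x_0)^\top (x - x_0) \le 0$$
implies $f(x) - g(x) \le 0$. Hence any point feasible for the linearized constraint is feasible for the original non-convex constraint (the linearization is an inner approximation), but that inner set depends on the linearization point $x_0$, so infeasibility of one linearized subproblem proves nothing about the original problem.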

What do you think?


I think SCA is not a reliable approach for what you want to do. It may work, or it may be inconclusive.

To clarify my earlier statement, “Given at least one non-convex constraint, the only way to reliably conclude infeasibility is if a global optimizer (or constraint solver) reports infeasibility”: if a convex relaxation of the original problem is infeasible, then you can conclude the original problem is infeasible; but the converse (or inverse) is not true.


Hi @Beibei,
I like your ingenuity: your thought-through process and reasons for attempting to transform the non-convex constraint into a convex one using a first-order Taylor approximation. Good attempt on your part, and credit has to be given for thinking outside the box. However, some of our key moderators, such as @Mark_L_Stone, have highlighted numerous times that successive convex approximation has inherent limitations when applied to non-convex constraints.

Supra

Hi Mark,
Recently I used the solver SeDuMi in CVX to solve an optimization problem by SCA (successive convex approximation). I found that the status was “Solved” at first, but after several iterations it became “Infeasible”. I want to know whether the results from the last iteration influence the next optimization process, or whether there is some other problem.

The reason given for the error is “Primal infeasible, dual improving direction found.”

It seems weird; in my code, if the initial point is feasible, I would think the subsequent iterations should also be feasible.

There's no reason to think that if one iteration of SCA is feasible, the next iteration will necessarily be feasible. Unsafeguarded SCA, i.e., with no line search or trust region, is an unstable algorithm which may not converge to anything; and if it does converge to something, it may not even be a local optimum of the original problem. I suggest you use a non-convex solver, as available, for instance, under YALMIP, rather than SCA.
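For reference, here is a bare-bones unsafeguarded SCA loop of the kind being discussed, with the subproblem status checked at every iteration; the DC data (A, b, f, g) are illustrative placeholders, not the poster's model:

% Unsafeguarded SCA sketch for a DC constraint f(x) - g(x) <= 0, with f(x) = ||x||^2, g(x) = ||x||_1
n = 4;
A = randn(8, n);  b = ones(8, 1);              % placeholder convex constraints A*x <= b
g = @(x) sum(abs(x));                          % convex part that gets linearized
xk = 0.5*ones(n, 1);                           % starting point (subproblem feasibility depends on it)
for k = 1:20
    cvx_begin quiet
        variable x(n)
        subject to
            A*x <= b;
            % linearization of g at xk: g(x) >= g(xk) + sign(xk)'*(x - xk)
            sum_square(x) - ( g(xk) + sign(xk)'*(x - xk) ) <= 0;
    cvx_end
    if ~strcmp(cvx_status, 'Solved')           % e.g., 'Infeasible' reported by SeDuMi
        fprintf('Iteration %d: subproblem status %s; stopping.\n', k, cvx_status);
        break                                  % this does NOT prove the original problem is infeasible
    end
    if norm(x - xk) < 1e-6, break; end         % crude stopping test; no line search or trust region
    xk = x;                                    % next linearization point
end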

If you try a different starting point, you might get lucky and SCA might converge to something (or maybe not).

Thanks a lot for your response. I will try a different starting point in SCA. If it does not work, I will use YALMIP.