How to handle nonlinear equality constraints?

Hi,
I want to put an equality constraint on the total transmit power from my MIMO system. This amounts to a scalar equality constraint: the norm of the linear operation Aw needs to equal P, or less strictly, lie between P_\text{min} and P_\text{max}. Here is my optimization problem:

$$\begin{array}{ll}
\text{minimize} & \|Aw\|_\infty \\
\text{subject to} & \|Aw\|_2 = P, \quad\text{or} \\
& P_\text{min} \leq \|Aw\|_2 \leq P_\text{max}
\end{array}$$

where $A$ is a matrix and $w$ is a vector, so $\|Aw\|_2$ is a scalar. How do I write this problem in DCP form in a way that minimizes the infinity norm but does not give me a $w$ vector of all zeros? Thanks

The problem you are trying to solve is not convex; nor is it expressible as a mixed-integer convex program, which the upcoming version of CVX supports. CVX simply cannot solve it.

Nonlinear equality constraints can rarely be expressed in convex form. In the disciplined convex programming framework that CVX imposes, they are forbidden altogether. The same reasoning applies to lower bounds on a convex function.
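To see why such an equality constraint is not convex, here is a quick numerical sketch (not CVX code; $A$ and $P$ are placeholder values chosen for illustration): two points can satisfy $\|Aw\|_2 = P$ while their midpoint does not, so the feasible set fails the definition of convexity.

```python
import numpy as np

# The set {w : ||A w||_2 = P} is not convex.  With A = I and P = 1
# (illustrative assumptions), w1 and w2 both lie on the unit circle,
# but their midpoint does not.
A = np.eye(2)
P = 1.0

w1 = np.array([1.0, 0.0])
w2 = np.array([-1.0, 0.0])
mid = 0.5 * (w1 + w2)   # midpoint of two feasible points

print(np.linalg.norm(A @ w1))   # 1.0  -- feasible
print(np.linalg.norm(A @ w2))   # 1.0  -- feasible
print(np.linalg.norm(A @ mid))  # 0.0  -- infeasible, so the set is not convex
```

The same midpoint test shows why a lower bound $\|Aw\|_2 \geq P$ is equally problematic: both endpoints satisfy it, the midpoint does not.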

As far as I know, the $\ell_2$ norm (or $\ell_1$, or $\ell_\infty$) of a linear transform is a convex function. Isn't that true, or is the problem having these nonlinear functions in the constraints? In that case, can I reformulate my problem as an unconstrained problem with a negative regularization parameter $\lambda$, as

$$\text{minimize} \quad \|Aw\|_\infty + \lambda \|Aw\|_2$$

Thanks

No. With a negative regularization parameter, the objective is not convex either.
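A numerical midpoint check confirms this (taking $A = I$ and $\lambda = -0.9$ purely for illustration): the objective value at the midpoint of two points exceeds the average of their objective values, which a convex function can never do.

```python
import numpy as np

# f(w) = ||w||_inf + lam * ||w||_2 with a negative lam (illustrative
# values) violates the midpoint form of convexity,
#   f(mid) <= (f(w1) + f(w2)) / 2,
# at these particular points.
lam = -0.9
f = lambda w: np.linalg.norm(w, np.inf) + lam * np.linalg.norm(w, 2)

w1 = np.array([1.0, 0.0])
w2 = np.array([1.0, 1.0])
mid = 0.5 * (w1 + w2)

print(f(mid) > 0.5 * (f(w1) + f(w2)))  # True: convexity is violated
```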

mcg’s answer is correct; the problem is not convex, and cannot be expressed in DCP format. more specifically, a constraint like $\|Aw\|_2 \leq P$ (with $A$ a constant matrix and $w$ a variable) is OK (convex, and DCP), but the converse constraint, $\|Aw\|_2 \geq P$, is not convex, and (therefore) not DCP.

i’d like to add that there are heuristic methods for “handling” such problems, using the so-called convex-concave procedure (see EE364b, lecture on sequential convex programming). this is a heuristic method that uses convex optimization as a subroutine to “solve” problems like this one. heuristic means the method can fail; the $w$ found need not be the global solution. but such methods can work well in practice.

for a reverse norm inequality like this, the method works as follows. we replace $\|Aw\|_2 \geq P$ with the convex (and DCP) constraint $q^T(Aw) \geq P$, where $q=(Aw^\mathrm{prev})/\|Aw^\mathrm{prev}\|_2$ and $w^\mathrm{prev}$ is the value of $w$ at the last iteration. solving the resulting convex problem gives us the new value of $w$. (you must start with a nonzero value of $w$. in fact, you might start the convex-concave procedure with several different values of $w$; you could well get different final results.)
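The linearization step above can be checked numerically. This sketch (with small random data, assumed purely for illustration) verifies that $q^T(Aw) \leq \|Aw\|_2$ for any $w$, by Cauchy–Schwarz with $\|q\|_2 = 1$; so any $w$ satisfying the linearized constraint $q^T(Aw) \geq P$ also satisfies the original constraint $\|Aw\|_2 \geq P$.

```python
import numpy as np

# Sketch of the convex-concave linearization: replace ||A w||_2 >= P
# with q^T (A w) >= P, where q = A w_prev / ||A w_prev||_2.
# Since ||q||_2 = 1, Cauchy-Schwarz gives q^T(Aw) <= ||Aw||_2, so the
# linearized constraint is a conservative (inner) approximation.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))     # example data, not from the thread
w_prev = rng.standard_normal(3)     # must be nonzero
q = A @ w_prev / np.linalg.norm(A @ w_prev)

w = rng.standard_normal(3)          # any candidate point
lin = q @ (A @ w)                   # linearized value q^T(Aw)
true = np.linalg.norm(A @ w)        # true norm ||Aw||_2

print(lin <= true + 1e-12)          # True: the linearization is a lower bound
```

In the actual procedure, each iteration solves the convex problem with this linearized constraint (in CVX or similar) and updates $w^\mathrm{prev}$ with the result.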

our experience is that this works quite well in practice. but remember that this method does not find the global solution.

Can we conclude, in general, that an optimization problem is convex when the objective function is convex and the constraints are linear? Are there exceptions for monotonic functions in the constraints, like log-determinant functions?