# Linearization/Convexification of a Constraint

Assume we have a mixed-integer optimization problem and we have managed to express the objective and all the constraints except one in linear form. The last constraint is expressed as:
\prod\limits_n {\left( {1 + {x_n}} \right)} \geq a, where the x_n are real positive variables and a is a real positive number. I am not sure what the best way (if any) to handle such a constraint is. I am thinking about the following options:

1. Since all the other expressions are linear, is it good practice to try to linearize this one as well and then follow MIDCP? And how could this be done, e.g. through a first-order Taylor expansion?
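To make option 1 concrete, here is a quick pure-Python sketch (all names are mine) of a first-order Taylor linearization taken in log space, where the product becomes a sum of log(1 + x_n) terms. Since log(1 + t) is concave, the linearization overestimates the true left-hand side, so the linearized constraint is only a relaxation of the original one:

```python
import math

def log_f(x):
    # log of prod(1 + x_n): the product constraint in log space
    return sum(math.log(1 + xn) for xn in x)

def taylor_log_f(x, x0):
    # first-order Taylor expansion of log_f around the point x0
    return log_f(x0) + sum((xn - x0n) / (1 + x0n) for xn, x0n in zip(x, x0))

x0 = [0.5, 2.0, 1.0]   # expansion point (arbitrary, for illustration)
x  = [1.0, 0.5, 3.0]   # a test point

# log(1 + t) is concave, so the Taylor expansion overestimates log_f:
# the linearized constraint  taylor_log_f(x, x0) >= log(a)  never cuts off
# feasible points, but it may admit points that violate the true constraint.
print(taylor_log_f(x, x0) >= log_f(x))                 # True
print(abs(taylor_log_f(x0, x0) - log_f(x0)) < 1e-12)   # True (exact at x0)
```

This also shows the price of the affine route: away from the expansion point the linearization can be loose, which is why the exact reformulation below is preferable when available.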

2. Is it more efficient to try to achieve a convex approximation instead of an affine one, and then again use MIDCP?

3. If the inequality were reversed, this would be a posynomial expression; as it stands, it is a signomial. I guess I could apply a monomial approximation and then transform it into a valid posynomial constraint. But again, I am not sure whether Mixed Linear Geometric Programming can be solved efficiently (assuming I relax the integer variables, as Boyd’s GP tutorial suggests).
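For reference on option 3, the standard best local monomial fit g(t) = f(t0) (t/t0)^{t0 f'(t0)/f(t0)} applied to each factor f(t) = 1 + t gives the exponent t0/(1 + t0). A small sketch (names mine) checking that the monomial matches the factor's value and slope at the expansion point:

```python
import math

def monomial_fit(t0):
    # best local monomial g(t) = c * t**alpha for f(t) = 1 + t at t0 > 0:
    # alpha = t0 * f'(t0) / f(t0) = t0 / (1 + t0), and c makes g(t0) = f(t0)
    alpha = t0 / (1 + t0)
    c = (1 + t0) / t0 ** alpha
    return c, alpha

t0 = 2.0
c, alpha = monomial_fit(t0)
g = lambda t: c * t ** alpha

# the monomial matches the factor's value at the expansion point ...
print(abs(g(t0) - (1 + t0)) < 1e-12)             # True
# ... and its slope (f'(t) = 1) as well, via a central difference
h = 1e-6
g_prime = (g(t0 + h) - g(t0 - h)) / (2 * h)
print(abs(g_prime - 1.0) < 1e-4)                 # True
```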

I am totally aware that there is no straightforward approach to such a problem, but suggestions are very much welcome.

a.

People often come to this site puzzled as to why a particular function or constraint can’t be accepted by CVX when, in fact, the function is quite clearly non-convex. Had they taken the time to verify convexity, they would not have needed to ask the question! And if you are using CVX without knowing in advance that your constraints or objectives are convex, you are using CVX incorrectly!

You have precisely the opposite issue here. You clearly recognize that f(x)=\prod_n (1+x_n) is not a concave function, so CVX cannot accept it in its current form. You’re definitely approaching the modeling process properly and I applaud you for it.

And YET… if you carefully examine not just the function f(x) but the entire inequality f(x)\geq a itself, you will find that it describes a convex set (at least when you also restrict x to positive values, which you are doing). It turns out that f(x) is quasi-concave (see Prof. Boyd’s book). That means there is hope to express the set described by the inequality in terms of convex functions.
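Here is a quick numerical spot-check of the quasi-concavity claim (a sanity check, not a proof; sampling and names are mine): along any segment between two points with x \geq 0, f never dips below the smaller endpoint value:

```python
import random

def f(x):
    # f(x) = prod(1 + x_n)
    p = 1.0
    for xn in x:
        p *= 1 + xn
    return p

random.seed(0)
ok = True
for _ in range(1000):
    x = [random.uniform(0, 5) for _ in range(4)]
    y = [random.uniform(0, 5) for _ in range(4)]
    lam = random.random()
    z = [lam * xi + (1 - lam) * yi for xi, yi in zip(x, y)]
    # quasi-concavity: along the segment from x to y, f never falls below
    # the smaller endpoint value (1e-9 of floating-point slack)
    ok = ok and f(z) >= min(f(x), f(y)) - 1e-9
print(ok)  # True
```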

In this case, all you have to do is take the $n$th root of both sides! Then use the geo_mean function:

```
geo_mean( x + 1 ) >= a^(1/n)
```
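If it helps to see why the rooted form is interchangeable with the original, here is a small pure-Python check (geo_mean here is my plain-Python stand-in, not the CVX function): since t \mapsto t^{1/n} is monotone on t \geq 0, taking the nth root of both sides preserves the inequality, so the two constraints cut out exactly the same set:

```python
import math, random

def geo_mean(v):
    # plain-Python stand-in for CVX's geo_mean: the geometric mean
    # of a nonnegative vector
    return math.prod(v) ** (1.0 / len(v))

random.seed(1)
n, a = 4, 3.0
agree = True
for _ in range(1000):
    x = [random.uniform(0, 2) for _ in range(n)]
    p = math.prod(1 + xn for xn in x)
    if abs(p - a) < 1e-9:
        continue  # skip points numerically on the boundary
    # t -> t**(1/n) is monotone on t >= 0, so both tests agree everywhere
    agree = agree and (p >= a) == (geo_mean([1 + xn for xn in x]) >= a ** (1 / n))
print(agree)  # True
```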


You could also take the logarithm of both sides:

```
log_prod( x + 1 ) >= log( a )
```


But I do not recommend this, for two reasons: first, logarithms use CVX’s experimental (and less reliable) successive approximation approach; and second, because log_prod is implemented using geo_mean. So geo_mean is definitely the simplest, best way to implement this constraint.
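The same monotonicity argument covers the logarithmic form; a small pure-Python sketch (helper names mine) showing that the product constraint and the sum-of-logs constraint agree on a sample point:

```python
import math

def satisfies_product(x, a):
    # the original constraint: prod(1 + x_n) >= a
    return math.prod(1 + xn for xn in x) >= a

def satisfies_log(x, a):
    # log is monotone, so summing logs gives an equivalent test;
    # each log(1 + x_n) is concave, hence the left side is concave in x
    return sum(math.log(1 + xn) for xn in x) >= math.log(a)

x, a = [0.5, 2.0, 0.1], 3.0
print(satisfies_product(x, a), satisfies_log(x, a))  # True True
```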

Thanks, Michael, for your super-clarifying answer, which as usual is simple and robust.

Let me move on to another issue that I am now facing. So far I have assumed that the x_n variables are strictly positive, and in this case the geometric-mean representation is exact and convex. But how about letting x_n be zero as well? In fact, in my problem some of these variables should be zero to drive the solution to the optimal point. It seems that the geometric-mean representation does not allow any x_n to be zero, and so excludes possibly good solutions (I say many solutions because I am using this expression in an integer problem setup). So, if I understood correctly, I have two options:

1. stick with the geometric-mean representation and get a convex MINLP problem, but lose some (probably good) solutions;

2. come up with another, approximate rather than exact, representation of my expression in order to allow zero values for the optimization variables;…

What do you think?

a.

I don’t understand. If x = 0, then 1 + x = 1, so you never encounter the boundary case of geo_mean. I see no problem with allowing x to be zero. In fact, I see no problem with allowing x + 1 to be zero, either.
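A tiny numerical illustration of the point (geo_mean here is a plain-Python stand-in for the CVX function): zero entries of x contribute a harmless factor of 1 + 0 = 1, and even a zero argument stays inside geo_mean's domain:

```python
import math

def geo_mean(v):
    # plain-Python stand-in for CVX's geo_mean: defined for nonnegative
    # inputs, and equal to zero as soon as any entry is zero
    return math.prod(v) ** (1.0 / len(v))

# x_n = 0 just contributes a factor of 1: no boundary issue at all
x = [0.0, 2.0, 0.0, 3.0]
print(geo_mean([1 + xn for xn in x]))   # (1*3*1*4)**(1/4) ≈ 1.861

# even 1 + x_n = 0 stays inside geo_mean's domain (the value is just 0)
print(geo_mean([0.0, 1.0, 5.0]))        # 0.0
```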