# Solving a Mixed-Integer Non-Linear Problem

I’m trying to solve the following mixed-integer non-linear optimization problem (written roughly, in the notation used in the code below):

\min_{\mathbf{p},\,\mathbf{h},\,\mathbf{a},\,\mathbf{b}} \sum_{k=1}^{N}(p_k + h_k) \quad \text{subject to} \quad \prod_{k=1}^{N}\left(1 + a_k p_k l_k + eP\right) \geq 2^{c}\,(1 + eP)^{N}, \qquad \sum_{k=1}^{N} b_k\left(h_k l_k + eP\right) = s,

where P = \sum_{k=1}^{N}(p_k + h_k) and a_k, b_k \in \{0, 1\}.

Here l \in \mathbb{R}^{N \times 1} holds the eigenvalues of the matrix \mathbf{H}^{H}\mathbf{H}, where \mathbf{H} \in \mathbb{C}^{N \times N} is a full-rank matrix, and e is a constant (we always use 0.1, or any number less than 0.1). The problem is non-convex because of the products of binary and continuous variables in the first and second constraints, and also because of the product \prod_{\mathcal{K}}(\dots) in the first constraint.
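
For concreteness, such an l can be generated as follows (a Python sketch rather than MATLAB; the random \mathbf{H} here is just an assumption for illustration, since \mathbf{H}^{H}\mathbf{H} is Hermitian positive definite for any full-rank \mathbf{H}, so its eigenvalues are real and strictly positive):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
# Random full-rank complex matrix H (full rank with probability 1).
H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

# l holds the eigenvalues of H^H H; the Gram matrix is Hermitian positive
# definite for full-rank H, so eigvalsh returns real, strictly positive values.
l = np.linalg.eigvalsh(H.conj().T @ H)

e = 0.1  # the constant e (always 0.1 or smaller)
print(np.all(l > 0))
```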

To linearize the products with the binary variables, I used Proposition 1 of the SVD-TWC paper, which gives
p - (1 - a) \cdot f_{max} \leq p \leq a \cdot f_{max}
h - (1 - b) \cdot f_{max} \leq h \leq b \cdot f_{max}
For the product term in the first constraint, I used the geo_mean function, as suggested in this post.
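
Since geo_mean(x) = (\prod_i x_i)^{1/N}, rewriting the product constraint through geo_mean is an equivalent reformulation (geo_mean is concave, so CVX accepts it against an affine right-hand side). A quick numeric sanity check of the equivalence, with made-up values, sketched in Python:

```python
import numpy as np

def geo_mean(x):
    # Geometric mean of a positive vector, matching CVX's geo_mean.
    return np.prod(x) ** (1.0 / len(x))

# Made-up values just to check the algebraic equivalence.
N, c, e = 4, 2.0, 0.1
rng = np.random.default_rng(1)
p = rng.uniform(0.0, 1.0, N)
l = rng.uniform(0.5, 5.0, N)
P = p.sum()
x0 = e * P  # the scalar e*P from the formulation

lhs = 1.0 + p * l + x0
# Product form:  prod(1 + p_i*l_i + eP) >= 2^c * (1 + eP)^N
product_form = np.prod(lhs) >= 2.0 ** c * (1.0 + x0) ** N
# geo_mean form used in the code: geo_mean(...) >= 2^(c/N) * (1 + eP)
geomean_form = geo_mean(lhs) >= 2.0 ** (c / N) * (1.0 + x0)
print(bool(product_form) == bool(geomean_form))
```

Taking the N-th root of both sides of the product form gives exactly the geo_mean form, so the two booleans always agree.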
The only term that I was not able to deal with is the second term in the second constraint, b*e*P. I used the following code to test the solution without the b*e*P term:

```matlab
cvx_begin
    cvx_solver gurobi
    variables p(N) h(N) y(N)
    variable a(N) binary
    variable b(N) binary
    expression x0
    x0 = e*(sum(p) + sum(h));   % x0 = e*P, with P the total power
    minimize sum(p) + sum(h)
    subject to
        % product constraint, written via geo_mean:
        geo_mean(1 + p.*l + x0) >= 2^(c/N)*(1 + x0);
        sum(h.*l + y) == s;
        % linearization of the binary products (Proposition 1):
        p <= a.*fmax;
        p >= p - (1 - a).*fmax;
        h <= b.*fmax;
        h >= h - (1 - b).*fmax;
        0 <= p + h <= fmax;
        0 <= p <= fmax;
        0 <= h <= fmax;
        sum(a) + sum(b) <= N;
        % big-M linearization of y_i = b_i*x0:
        0 <= y <= b*N*e*fmax;
        0 <= x0 - y <= (1 - b)*N*e*fmax;
cvx_end
```


and this works fine. Any idea how to deal with the term b*e*P?

Thanks

A.

Can you apply the technique in section 2.8 of http://www.fico.com/en/node/8140?file=5125 to linearize the product of a binary with a continuous variable?
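
For reference, that trick replaces z = b*x (b binary, 0 \leq x \leq U) by four linear inequalities: z \leq bU, z \leq x, z \geq x - (1-b)U, z \geq 0. A brute-force check in Python (with a made-up bound U) that these inequalities admit only z = b*x:

```python
import numpy as np

def z_feasible(z, b, x, U, tol=1e-9):
    # Section 2.8-style linearization of the product z = b * x,
    # with b binary and 0 <= x <= U.
    return (z <= b * U + tol and z <= x + tol
            and z >= x - (1 - b) * U - tol and z >= -tol)

U = 3.0
for b in (0, 1):
    for x in np.linspace(0.0, U, 7):
        # Scan candidate z values; the only feasible z is b * x.
        zs = [z for z in np.linspace(-1.0, U + 1.0, 1001)
              if z_feasible(z, b, x, U)]
        assert all(abs(z - b * x) <= 0.02 for z in zs)
print("linearization forces z = b*x")
```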

Thanks Mark!

The problem with the technique in section 2.8 is that it ends up adding N*e*P instead of b*e*P. I used 0 as the lower bound and N*e*f_{max} as the upper bound, and relaxed the term b*e*P as follows:
0 \leq y \leq N*e*f_{max}
0 \leq e*P - y \leq (1-b)*e*f_{max}

I declared y as a new variable in CVX, updated the constraint sum(h.*l) to sum(h.*l) + y, and added the above inequalities as new constraints.

The solution that I expect is \sum_{k=1}^{K}(h_k l_k + e*P), where K = \sum b, but applying the above technique gives \sum_{k=1}^{K} h_k l_k + \sum_{i=1}^{N} e*P.
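
Numerically, the mismatch between the two is (N - K)*e*P. A small Python sketch with illustrative numbers only:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 5, 3
e, P = 0.1, 10.0
h = rng.uniform(0.0, 1.0, N)
l = rng.uniform(0.5, 5.0, N)
b = np.array([1, 1, 1, 0, 0])  # K = 3 active indices
h = h * b  # h is already forced to zero wherever b is zero (h <= b*fmax)

expected = np.sum(h * l) + K * e * P   # e*P added only for the K selected terms
relaxed  = np.sum(h * l) + N * e * P   # what the relaxation without b adds
print(relaxed - expected)  # off by (N - K)*e*P
```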

I’m rather confused by the post. I don’t see the difference between the solution you expect and the one you got. I’m also confused about the dimensions of your original second inequality (are e and s scalars?).

In your new inequalities, why does b not appear on the RHS of the first? And where is N in the RHS of the second?

I’m not sure whether you’re already doing this (I’m confused about the dimensions), but you need to apply the technique in section 2.8 separately to each scalar product term, even though you can vectorize the resulting formulation.

There may be more I don’t understand, but perhaps this is enough to get you reexamining things as necessary.

My mistake: I forgot to include b and N in the inequalities above. They should be
0 \leq y \leq b*N*e*f_{max}
0 \leq e*P - y \leq (1-b)*N*e*f_{max}
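
With b and N restored, these two inequalities pin each y_i to exactly b_i*e*P: for b_i = 1 the second pair collapses to y_i = e*P, and for b_i = 0 the first pair forces y_i = 0. A brute-force check with made-up values, sketched in Python:

```python
import numpy as np

N, e, fmax = 5, 0.1, 2.0
P = 7.0                  # any value with e*P <= N*e*fmax (here 0.7 <= 1.0)
big_m = N * e * fmax     # the big-M constant N*e*fmax

def feasible(y, b, tol=1e-9):
    # The corrected pair of inequalities for one component:
    #   0 <= y <= b*N*e*fmax   and   0 <= e*P - y <= (1-b)*N*e*fmax
    return (-tol <= y <= b * big_m + tol
            and -tol <= e * P - y <= (1 - b) * big_m + tol)

for b in (0, 1):
    ys = [y for y in np.linspace(-0.5, big_m + 0.5, 2001) if feasible(y, b)]
    # Every feasible y is (numerically) equal to b * e * P.
    assert all(abs(y - b * e * P) < 0.01 for y in ys)
print("y is pinned to b*e*P")
```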

Both e and s are scalars. I have tested the code after this modification and the problem seems to be solved, with only one minor issue remaining: sometimes the solution returns h with some values that are approximately zero (on the order of 1e-10). Is there any way to avoid this?

BTW, the constraint sum(h.*l) + y should be sum(h.*l + y). My point is that if the optimal solution requires, say, three of the eigenvalues l to satisfy s, then only 3*e*P should be added to the sum of the h_k l_k terms, i.e., (\sum_{j=1}^{3} h_j l_j) + 3*e*P.

I don’t understand what your program does or is supposed to do, what the eigenvalues are supposed to satisfy, etc.

What is the issue with h values being approximately zero? If you want them to be exactly zero, then adjust the solution after CVX completes.
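
For example, a post-processing step like the following (sketched in Python with a hypothetical tolerance; in MATLAB the equivalent one-liner is h(abs(h) < tol) = 0):

```python
import numpy as np

def clean_solution(h, tol=1e-8):
    """Zero out entries that are nonzero only due to solver tolerance."""
    h = np.asarray(h, dtype=float).copy()
    h[np.abs(h) < tol] = 0.0
    return h

# Hypothetical solver output with ~1e-10 noise in two entries.
h = np.array([0.8, 1e-10, 0.3, -3e-11, 1.2])
print(clean_solution(h))  # the ~1e-10 entries become exactly 0
```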

I have no idea what you’re talking about in your last paragraph.