# How to deal with an SDP problem with a nonlinear constraint?

As far as I know, the constraints and objective function in a conventional SDP problem are all linear.
And there exist some solvers which can solve nonlinear SDP problems.

There are three constraints, as follows:

```
a >= 0;
inv_pos(a) <= trace(W);
W >= 0;
```

where `a` and `W` are two variables; the first is a scalar, and the second is a complex Hermitian matrix variable.

We point out that when a >= 0, the function `inv_pos(a)` is convex.
Therefore, we could use a first-order Taylor expansion to reformulate the second constraint.

Question:
Besides the Taylor expansion, how else can we deal with the above constraint? Or, without reformulating it, can we use some special solver to solve the corresponding SDP problem directly?

This problem can be entered in CVX, unless there are other constraints or an objective function you haven't shown us which would violate CVX's rules.

```
cvx_begin
variable a nonnegative
variable W(n,n) complex semidefinite
% insert objective function, if there is one
inv_pos(a) <= trace(W)
% insert any other constraints
cvx_end
```

This is o.k., because each constraint satisfies CVX’s rules. CVX will formulate this as a semidefinite cone plus a Second Order Cone (SOC) obtained from the inv_pos constraint.

So this is just a linear SDP (which includes the SOC constraint as a special case). Taylor expansion is not necessary, and is not advised.
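For reference, here is a sketch of the standard conic reformulation underlying the `inv_pos` constraint (writing $t = \operatorname{trace}(W)$; this is the textbook derivation, not CVX's literal internal code):

$$
\frac{1}{a} \le t,\ a > 0
\;\Longleftrightarrow\;
a\,t \ge 1,\ a \ge 0,\ t \ge 0
\;\Longleftrightarrow\;
\left\| \begin{pmatrix} 2 \\ a - t \end{pmatrix} \right\|_2 \le a + t,
$$

since squaring the last inequality (with $a + t \ge 0$) gives $4 + (a-t)^2 \le (a+t)^2$, i.e., $4at \ge 4$.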

I tried it in YALMIP. Using the Mosek solver, the solver does not support that constraint, i.e. inv_pos(a) <= trace(W).
I have not tried it in CVX. (I also think it can support the above constraint.)
When using the same solver (Mosek) and solving the same SDP problem, CVX cannot achieve the right solution. The scaling of my problem is bad; there are some small values, e.g. 0.0122e-4, 0.1245e-4, and so on.

`inv_pos` is a CVX command, not a YALMIP command. If you applied `inv_pos` to a YALMIP expression, I presume you got a model creation error when that constraint was used in `optimize`.

Choose either CVX or YALMIP. Don’t apply commands from one tool to expressions of the other tool.

Enter the code I showed in CVX. It should be accepted. If the problem scaling is bad, fix the scaling by changing units. If you show your complete problem, preferably with all input data, and show all the Mosek and CVX output, perhaps a forum reader can give you an assessment. If you cannot show us all the input data, at least show us your code, along with all the Mosek and CVX output.

BTW, YALMIP will allow you to enter BMIs (Bilinear SDPs) and certain nonlinear SDPs, using PENLAB or PENBMI as solver. You can even try to solve to global optimality using BMIBNB, with PENLAB or PENBMI as upper solver. Whether the problems are successfully solved is another matter.

BTW, the command to use in YALMIP instead of `inv_pos` is `cpower`.

Use `cpower(a,-1) <= trace(W)`, which automatically imposes the constraint `a >= 0`. So it is identical in behavior to `inv_pos(a)` in CVX.
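A minimal YALMIP sketch of the same model, assuming a problem dimension `n` and that Mosek is installed (the `real(...)` wrapper is a precaution, since the trace of a complex `sdpvar` may be typed as complex):

```
n = 4;                                    % example dimension (assumption)
a = sdpvar(1,1);
W = sdpvar(n,n,'hermitian','complex');
Constraints = [W >= 0, cpower(a,-1) <= real(trace(W))];
% cpower(a,-1) automatically adds the domain constraint a >= 0
% insert the objective function in place of [], if there is one
optimize(Constraints, [], sdpsettings('solver','mosek'));
```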

Haha!
My bad! I had thought that `inv_pos` and `rel_entr` could also be used in YALMIP.
Actually, when I use the above commands in YALMIP, it appears to be OK: YALMIP does not give warnings.
But YALMIP does not support a constraint such as inv_pos(a) <= trace(W).

Sir!
Tomorrow, I will reformulate the problem with good scaling, and I will try it in the CVX toolbox.
This optimization problem has bothered me for a long time!

`kullbackleibler(x,y)` in YALMIP is the same as `rel_entr(x,y)` in CVX.

I have tried it in CVX using `inv_pos`. That works.
Initially, I knew the `inv_pos` command existed in CVX, but I did not know the theory behind it; therefore, I had not been using the command in the right way.
You told me that `inv_pos` transforms the expression into a second-order cone. Now I understand the theory behind `inv_pos`.

When I used the Taylor expansion instead of `inv_pos`, the iterations did not converge. I think the reason is that there are two Taylor expansions in my formulated problem. I know that is bad, and I will pay attention to this in my future work!

I think the same reasoning also applies to CVX. The best way to deal with solver precision is to scale the optimization problem well.
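As a hedged illustration of fixing scaling by changing units (the matrix `H` and the factor `1e4` here are assumptions for illustration, not part of the original problem): if the data entries are on the order of 1e-4, rescale them so the solver sees numbers of order 1, then undo the scaling afterwards.

```
% Hypothetical data matrix with entries around 1e-4 (assumption)
H = 1e-4 * randn(4);
s = 1e4;               % scale factor chosen so s*H has entries of order 1
H_scaled = s * H;
% Solve the problem using H_scaled in place of H. Afterwards, undo the
% scaling on the recovered solution; how to do so depends on where H
% enters the model (e.g. if the optimal W scales linearly with H,
% use W = W_scaled / s).
```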

And you have told me about the two commands, i.e. `cpower` and `kullbackleibler(x,y)`.
They are the same as `inv_pos` and `rel_entr`, respectively. Thank you very much!

BTW, do you have a link to where these equivalent transformations are summarized in YALMIP?

You can look at the online help for the YALMIP functions, or the YALMIP wiki pages.

help cpower

```
cpower  Power of SDPVAR variable with convexity knowledge

  cpower is recommended if your goal is to obtain
  a convex model, since the function cpower is implemented
  as a so called nonlinear operator. (For p/q == 2 you can
  however just as well use the overloaded power)

  t = cpower(x,p/q)

  For negative p/q, the operator is convex.
  For positive p/q with p>q, the operator is convex.
  For positive p/q with p<q, the operator is concave.

  A domain constraint x>0 is automatically added if
  p/q not is an even integer.

  Note, the complexity of generating the conic representation
  of these variables are O(2^L) where L typically is the
  smallest integer such that 2^L >= min(p,q)
```

help kullbackleibler

```
kullbackleibler

  y = kullbackleibler(x,y)

  Computes/declares the convex Kullback-Leibler divergence sum(x.*log(x./y))
  Alternatively -sum(x.*log(y/x)), i.e., negated perspectives of log(y)
```