expr2 is convex, so constraint_term is log-concave, because its log is concave.
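As a quick illustration of that claim (assuming constraint_term has the form exp(-expr2), which is what "its log is concave" suggests; expr2(x) = x^2 here is a stand-in convex function, not the one from your model), a numerical midpoint-concavity check of the log:

```python
import random

# Assumption for illustration: constraint_term = exp(-expr2) with expr2
# convex, e.g. expr2(x) = x^2. Then log(constraint_term) = -expr2 is
# concave, which is exactly the definition of log-concavity.
def log_term(x):
    return -(x ** 2)  # log of exp(-x^2); concave because x^2 is convex

random.seed(1)
for _ in range(10_000):
    a, b = random.uniform(-4, 4), random.uniform(-4, 4)
    # Midpoint concavity: log f((a+b)/2) >= (log f(a) + log f(b)) / 2.
    assert log_term((a + b) / 2) >= (log_term(a) + log_term(b)) / 2 - 1e-9
print("log of the term is midpoint-concave on 10000 random pairs")
```

This is only a spot check, not a proof; the proof is simply that the negative of a convex function is concave.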

In the objective, a constant is subtracted from constraint_term, which violates CVX's log-convexity (log-concavity) rules, which are documented only at Log of sigmoid function - #3 by mcg .

Anyhow, it looks like this can be rewritten by making use of exp(x-y) = exp(x)*exp(-y). Then you get something like control_cost_min*2^(-constraint_term**2/states_num)*2^((Eig_val-1e-6)**2/states_num)*exp(1), modified appropriately with the rest of the stuff in your objective, which is all input data. Perhaps the part I did show is not quite right. I’ll let you work out the correct details.
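The splitting identity behind that rewrite can be sanity-checked numerically (the values here are arbitrary test points, nothing from your model):

```python
import math

# Numeric sanity check of exp(x - y) = exp(x) * exp(-y), the identity
# that lets a constant offset be pulled out of the exponent.
for x, y in [(0.3, 1.7), (-2.0, 0.5), (4.0, 4.0)]:
    lhs = math.exp(x - y)
    rhs = math.exp(x) * math.exp(-y)
    assert math.isclose(lhs, rhs, rel_tol=1e-12)

# The same splitting works in base 2: 2^(a - b) = 2^a * 2^(-b),
# which is the form appearing in the rewritten objective above.
for a, b in [(1.2, 0.4), (-3.0, 2.5)]:
    assert math.isclose(2.0 ** (a - b), (2.0 ** a) * (2.0 ** -b), rel_tol=1e-12)
print("identity holds on all test points")
```

Because the pulled-out factor involves only input data, it becomes a positive multiplicative constant, which does not affect DCP-compliance of the remaining expression.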

I think the basic thing you have is minimize(2^(pow_abs(xi, 2))), which CVX will accept. I think your actual objective winds up being that, but with various positive constants multiplying things, plus some additive terms. I think all this other stuff winds up being irrelevant to the optimization, at least for determination of the argmin. You should check the correctness of my statement, in case I didn’t read through the whole objective correctly.
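For what it's worth, the scalar function underlying that objective, f(x) = 2^(|x|^2), really is convex (it is exp((ln 2)·x²), a convex increasing function of a convex function). A quick numerical midpoint check, purely as a spot test:

```python
import random

# Spot check (not a proof) that f(x) = 2^(|x|^2) is convex -- the scalar
# function that CVX models as 2^(pow_abs(xi, 2)).
def f(x):
    return 2.0 ** (abs(x) ** 2)

random.seed(0)
for _ in range(10_000):
    x, y = random.uniform(-3, 3), random.uniform(-3, 3)
    # Midpoint convexity: f((x+y)/2) <= (f(x) + f(y)) / 2 (float slack added).
    assert f((x + y) / 2) <= (f(x) + f(y)) / 2 + 1e-9
print("midpoint convexity held on 10000 random pairs")
```

Multiplying a convex expression by a positive constant or adding a constant preserves convexity, which is why the surrounding input-data factors don't change the argmin.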

That looks rather complicated, and I don’t even know what all the things in it are.

Nevertheless, is the problem convex? I don’t see how the left-most inequality of the 2nd constraint is convex. I don’t know exactly what that | | means here, but a constraint of the form 1 <= |·| doesn’t seem like it would be convex.
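If | | does mean an absolute value (or modulus), a two-point counterexample shows the feasible set of the left-most inequality is not convex:

```python
# Counterexample (assuming |.| is an absolute value): the set
# {x : 1 <= |x|} contains x = 1 and x = -1, but not their midpoint 0,
# so a lower bound on an absolute value is not a convex constraint.
def feasible(x):
    return 1.0 <= abs(x)

x1, x2 = 1.0, -1.0
assert feasible(x1) and feasible(x2)   # both endpoints satisfy 1 <= |x|
mid = (x1 + x2) / 2                    # midpoint is 0
assert not feasible(mid)               # ...which violates the constraint
print("set {x : 1 <= |x|} is not convex")
```

The upper bound |x| <= eta_max is fine (abs is convex, and upper-bounding a convex function is allowed); it is only the lower bound that is problematic.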

Please see the standards for convexity proof described in this link.

The second constraint means that abs(.) of every element on the diagonal of the matrix \Psi is between 1 and \eta_max. Actually, these constraints are affine.

Note that the argument of exp(…) is always positive, so exp(…) > 0.
It is also worth mentioning that all other parameters are constants.

I haven’t checked the details of your argument. But now you are talking about alternating optimization. That is because the stated optimization problem is not convex. What matters to CVX is whether each problem provided to it is (mixed-integer) convex, and follows its rules.

I will leave the details to you. As to whether the alternating optimization will converge to anything, let alone a global or even local optimum of the original problem …? Perhaps the paper has something to say about it.
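To make the convergence caveat concrete, here is a toy sketch of alternating minimization (this is NOT the model from this thread; f is a made-up biconvex function, convex in each variable with the other held fixed, but not jointly convex):

```python
# Toy alternating minimization on f(x, y) = (x*y - 1)^2 + 0.1*(x^2 + y^2).
# Setting the partial derivative in x to zero with y fixed gives the
# closed-form inner minimizer x = y / (y^2 + 0.1), and symmetrically for y.
def f(x, y):
    return (x * y - 1.0) ** 2 + 0.1 * (x * x + y * y)

x, y = 2.0, 0.5
prev = f(x, y)
for _ in range(50):
    x = y / (y * y + 0.1)   # exact minimizer over x, y held fixed
    y = x / (x * x + 0.1)   # exact minimizer over y, x held fixed
    cur = f(x, y)
    # Exact alternating steps can never increase the objective...
    assert cur <= prev + 1e-12
    prev = cur
# ...but monotone descent alone does not certify a global, or even
# local, optimum of the joint (nonconvex) problem.
print(f"final objective {cur:.6f} at x={x:.4f}, y={y:.4f}")
```

Each inner problem here would be a legitimate convex problem for CVX; whether the alternating scheme converges to anything meaningful for the original joint problem is exactly the question to check against the paper.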