Is it possible to solve optimization problems with convex nonlinear matrix inequalities as constraints in CVX? For example, consider the following convex matrix inequality:

[x1 x2;x3 x4+y^2]<=0;

where x1, x2, x3, x4 and y are the optimization variables. Can CVX solve such a problem?

As written, with (scalar) variables x1, x2, x3, x4, y

[x1 x2;x3 x4+y^2]<=0;

is not an LMI, and so cannot be entered as such in CVX.

However, if you can define a variable y_square in place of y, and can express the rest of your model in terms of y_square while following CVX's DCP rules, then that would work, and your matrix inequality would be expressed in CVX as

cvx_begin
variables x1 x2 x3 x4 y_square
-[x1 x2;x3 x4+y_square] == semidefinite(2)
<remainder of CVX code, including minimize or maximize command and any other constraints>
cvx_end

Alternatively, if you need y itself elsewhere in the model, you can introduce an epigraph variable z and use CVX's SDP mode:

cvx_begin sdp
variables x1 x2 x3 x4 y z
x4 + y^2 <= z
[ x1, x2 ; x3, z ] <= 0
<remainder of CVX code, including minimize or maximize command and any other constraints>
cvx_end

You’ll want to confirm this for yourself. That is, prove that if x1, x2, x3, x4, y satisfy your nonlinear LMI, then there must exist a z that satisfies these LMIs; and conversely, if x1, x2, x3, x4, y, z satisfy these LMIs, then the nonlinear LMI is also satisfied.

It seems rather evident to me, but my grade/research/publication isn’t depending on it! So do verify this.
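For what it's worth, here is a quick numerical sanity check of both directions (an illustration, not a proof), written in NumPy rather than CVX. It assumes symmetric data, i.e. x2 == x3, so the matrix has real eigenvalues, and the particular numbers are just for demonstration:

```python
import numpy as np

def is_nsd(M, tol=1e-9):
    """Check that a symmetric matrix is negative semidefinite."""
    return np.max(np.linalg.eigvalsh(M)) <= tol

# Sample point satisfying the nonlinear LMI (x2 == x3 for symmetry).
x1, x2, x3, x4, y = -3.0, 1.0, 1.0, -2.0, 1.0
assert is_nsd(np.array([[x1, x2], [x3, x4 + y**2]]))

# Forward direction: z = x4 + y^2 satisfies both lifted constraints.
z = x4 + y**2
assert x4 + y**2 <= z
assert is_nsd(np.array([[x1, x2], [x3, z]]))

# Backward direction: if [x1 x2; x3 z] <= 0 and x4 + y^2 <= z, then
# [x1 x2; x3 x4+y^2] = [x1 x2; x3 z] + diag(0, x4 + y^2 - z) <= 0,
# since adding a nonpositive number to a diagonal entry of an NSD
# matrix keeps it NSD.
z = -0.5  # any z with x4 + y^2 <= z and [x1 x2; x3 z] <= 0
assert x4 + y**2 <= z
assert is_nsd(np.array([[x1, x2], [x3, z]]))
assert is_nsd(np.array([[x1, x2], [x3, x4 + y**2]]))
```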

EDIT: Actually, it’s not difficult to prove. For a symmetric matrix X, the constraint X <= 0 (negative semidefiniteness) is equivalent to \lambda_{\max}(X) <= 0. Now, \lambda_{\max}(X) is convex in the elements of X, and it is also non-decreasing in the diagonal elements of X. Therefore, the standard composition rules of convex analysis say that convexity is preserved if the diagonal elements are convex functions of the underlying variables.
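The monotonicity claim follows from the Loewner order: adding t >= 0 to a diagonal entry adds the PSD matrix t*e_i*e_i' to X, which cannot decrease \lambda_{\max}. A small NumPy illustration on random symmetric matrices (purely for intuition):

```python
import numpy as np

# Largest eigenvalue of a symmetric matrix.
lam_max = lambda M: np.max(np.linalg.eigvalsh(M))

rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.standard_normal((3, 3))
    X = (A + A.T) / 2             # random symmetric matrix
    Y = X.copy()
    i = rng.integers(3)
    Y[i, i] += rng.uniform(0, 5)  # increase one diagonal entry
    # lambda_max never decreases when a diagonal entry increases
    assert lam_max(Y) >= lam_max(X) - 1e-9
```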

So your nonlinear LMI is definitely convex, and the approach I’ve outlined here will indeed give you the results you seek. Of course, in general, nonlinear LMIs cannot be handled by CVX; but in the specific case where one or more of the diagonal elements is a convex expression, you can handle it as I have outlined here.

Does this mean that the DCP or CVX ruleset could be expanded to allow diagonal elements to be convex per existing rules, and thereby make the reformulation automatic, as it is, for example, with matrixfrac?

It’s not the ruleset that needs expanding—the rules already provide for this scenario. It is the way that SDP constraints are implemented. That would need changing. I’m certainly considering making the change at some point.

Hello,
I have to solve an optimisation problem as below:
Scalar positive variables: x, y
Objective: Maximize(y-x)
Subject to:
[A B; B' -x^2*I] <= 0
[-A -B; -B' y^2*I] <= 0