How to model the rank(X)>=p constraint?

As illustrated in Jon Dattorro’s book ‘Convex Optimization & Euclidean Distance Geometry’, the constraint:

rank(X) >= p, where X is a positive semidefinite matrix in R^(m×m),

is a convex constraint. I tried to model the constraint as follows:

[E C;C’ X]>=0, where E is a unit matrix of p*p,and C is a matrix of p**n;

If the rank of C is p, then we can guarantee that the rank of X is no less than p; my reasoning is sketched below.
I am confused about how to finish the modeling from here!
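The reasoning I have in mind is the Schur complement (my own rough sketch, not a quote from the book): since E is the identity, it is positive definite, and

$$\begin{bmatrix} E & C \\ C^T & X \end{bmatrix} \succeq 0 \quad\Longleftrightarrow\quad X \succeq C^T C.$$

If rank(C) = p then rank(C'C) = p, and X - C'C being positive semidefinite forces the null space of X to lie inside that of C'C, so rank(X) >= p. What I do not see is how to treat C: if C is fixed data this only gives a sufficient condition, and if C is a decision variable then nothing in the LMI forces it to keep rank p.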

Thanks for your help in advance!

Rank constraints are not convex. Claims to the contrary are simply incorrect. YALMIP supports the LMIrank solver, which uses a heuristic approach to constrain rank. You might consider that.

You can relax your constraint and use a “trace” constraint instead of the “rank” constraint. Depending on the problem, you can get very good results; one possible version is sketched below.
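For example (my own rough sketch, not a general recipe): if you can also normalize so that X <= I in the semidefinite sense, then trace(X) <= rank(X), and the convex constraint trace(X) >= p becomes a sufficient condition for rank(X) >= p. In CVX it could look like the following, where the sizes n and p and the objective data A are just placeholders:

```matlab
% Hypothetical sketch of a trace-based surrogate for rank(X) >= p,
% assuming the normalization X <= I is acceptable for the application.
n = 4; p = 2;                  % illustrative sizes
A = randn(n); A = (A + A')/2;  % placeholder objective data
cvx_begin sdp
    variable X(n,n) symmetric
    minimize( trace(A*X) )     % stand-in objective
    subject to
        X >= 0;                % X positive semidefinite (sdp mode)
        X <= eye(n);           % normalization: eigenvalues of X at most 1
        trace(X) >= p;         % with X <= I this forces rank(X) >= p
cvx_end
```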

Theorem 2.9.2.9.3 in Jon Dattorro’s book ‘Convex Optimization & Euclidean Distance Geometry’ states the convexity of the set {X positive semidefinite | rank(X) >= p}, for p >= 0. I discussed this with Mr Stone previously, but I still feel really confused about the specific modeling.

It’s possible that particular rank constraint is convex. CVX however cannot handle it, as far as I can tell.

Thanks for providing the pointers to the precise text. I now agree that I was wrong, and this particular rank constraint is convex. I’m afraid CVX still can’t handle such constraints at all, but the reason is a bit more subtle, and interesting. So thank you for bringing it up, and making me think about it more!
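To sketch why it is convex (my own paraphrase, not a quote from the book): for PSD X and Y and any theta in (0,1), the null space of theta*X + (1-theta)*Y is exactly the intersection of the null spaces of X and Y, so

$$\mathop{\textrm{rank}}\big(\theta X + (1-\theta) Y\big) \;\geq\; \max\{\mathop{\textrm{rank}}(X),\,\mathop{\textrm{rank}}(Y)\} \;\geq\; p.$$

So the set is convex even though rank itself is not a convex function.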

As Mr. Dattorro points out, the set of PSD matrices with rank greater than or equal to a fixed k>0 is not closed. This is what prevents CVX, or really any practical numerical solver, from doing anything useful with it.

For example, consider the following model:
$$\begin{array}{ll} \text{minimize} & z \\ \text{subject to} & X = \begin{bmatrix} 1 & 0 \\ 0 & z \end{bmatrix} \succeq 0 \\ & \mathop{\textrm{rank}}(X) \geq 2 \end{array}$$
This model has an optimal value of 0, but it has no optimal solution because of the rank constraint. You can get arbitrarily close to the optimal value by choosing z extremely small. When CVX tries to solve this model without the rank constraint, it will start with a positive value of z and reduce it asymptotically towards zero. At every point in the solution process, the rank constraint will be satisfied! But the optimal solution it seeks is z=0, which is not feasible.
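Here is a minimal CVX sketch of that model, with the rank constraint necessarily dropped because CVX cannot express it:

```matlab
% Minimal sketch of the example above; the rank constraint is omitted
% because CVX cannot express it.
cvx_begin sdp
    variable z
    minimize( z )
    subject to
        [1 0; 0 z] >= 0;   % semidefinite constraint in sdp mode, i.e. z >= 0
cvx_end
% The solver drives z to 0 (up to tolerance), which is precisely the
% rank-deficient boundary point excluded by rank(X) >= 2.
```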

This remains true for more complex models as well. The algorithms that CVX employs to solve its models always trace trajectories of strictly positive definite solutions. The matrices involved will always be full rank throughout the entire solution process, and any lower bounds on rank will be trivially satisfied for the entire problem trajectory. If the optimal solution happens to have degenerate rank, it will approach that solution only asymptotically.

It’s very similar to the caution we give in the CVX manual about strict inequalities. The underlying solvers simply do not respect them, and CVX issues a warning if you try to add them.

I’ve provided a more complete answer below.

If you state the problem as inf rather than min, then indeed the optimal value is 0, and, as you point out, numerical solvers are doing min, not inf. Maybe the next generation of CVX, with mcg’s or Rockafellar’s engram embedded into the program, will be able to find infima, and not just minima.

I don’t think so. What would that mean, exactly? That is to say: if a particular optimization model has an infimum but not a minimum, what optimal point should the solver return?

It’s one thing for a model to have an unattainable optimum, like, say, minimizing e^{-x}. But it’s another thing altogether to minimize over an open set. No practical mathematical engine is going to be able to exclude the boundary of the feasible set from consideration.

Thank you so much! I now understand the complication caused by the feasible set not being closed. Moreover, in your opinion, could the rank constraint be expressed mathematically in an SDP constraint form?

I really don’t think it can.

That’s why I mentioned the engrams: so that a mathematician’s smarts could return non-numerical solutions to complement the numerical solutions the solver is capable of. Perhaps a bit of a math-fi dream as of now.

I once took a convex optimization class from the now-retired Prof. Ken Kortanek. There we covered a paper that, by using a more general number system, was able to close all duality gaps in convex optimization. It sounds similar to engrams. From a practical point of view, I doubt this is going to be useful in practice.