Can CVX handle the function log_det( I + inv(X) )?

Hi everyone!

I want to solve an optimization problem that minimizes log_det( I + inv(X) ) subject to a semidefinite constraint and some linear constraints on X. For the background of this problem, see this reference.

As mentioned in this reference, even though log_det( I + inv(X) ) is a convex function of X, the problem is not easy to solve numerically.

I tried to express the function in CVX as log_det( I + inv(X) ) and as log_det( X + I ) - log_det( X ). However, CVX does not accept either expression.

I want to know whether CVX is able to handle this function. If not, is there any other software that can?


The scalar case can be handled per the last entry in Section 5.2.10 of the Mosek Modeling Cookbook: https://docs.mosek.com/modeling-cookbook/expo.html#modeling-with-the-exponential-cone .

I don’t know whether the matrix case can be handled in CVX. Perhaps @Michal_Adamaszek does? I showed how to use functions from CVXQUAD to handle some logdet perspective functions at http://ask.cvxr.com/t/perspective-of-log-det-function-and-cvx-formulations-of-related-log-det-problems-using-quantum-relative-entropy-from-cvxquad/4033/3 , but this problem is different from those.

The trick in the scalar case linked by @Mark_L_Stone is to rewrite the argument of the log function as a harmonic mean, so that the inversion can be used to flip the sign of the log function, i.e.:
t >= log(1 + 1/x) = log(1 / (x / (x+1)) ) = log(1 / (1 - 1/(x+1)) )
so that it can be rewritten as:
-t <= log( 1 - 1/(x+1) )
and then you just substitute out 1/(x+1) with a dummy variable using a quadratic cone, as sketched below.
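
For concreteness, a minimal CVX sketch of that scalar reformulation (the bound x >= 0.5 is a made-up placeholder for whatever linear constraints apply; inv_pos encodes the dummy-variable substitution u >= 1/(x+1)):

cvx_begin
    variables x u t
    minimize( t )                 % t >= log(1 + 1/x) at the optimum
    subject to
        -t <= log( 1 - u )        % the flipped-sign inequality above
        inv_pos( x + 1 ) <= u     % dummy variable u >= 1/(x+1)
        x >= 0.5                  % placeholder linear constraint on x
cvx_end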

This generalizes to the matrix case step by step as follows:
t >= log(det( I + inv(X) )) = log(det( inv(X*inv(X+I)) )) = log(det( inv(I - inv(X+I)) ))
so that it can be rewritten as:
-t <= log(det( I - inv(X+I) ))
but I am missing a matrix equivalent for substituting out inv(X+I).
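
As a numerical sanity check of the matrix identity above (a throwaway MATLAB snippet; the dimension and random X are arbitrary):

n = 3;
A = randn(n); X = A*A' + eye(n);        % random positive definite X
lhs = eye(n) + inv(X);
rhs = inv( eye(n) - inv(X + eye(n)) );  % the rightmost form above
norm( lhs - rhs )                       % ~1e-15, so the two forms agree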


I believe
t \geq \log(\det( I + {\rm inv}(X) ))

is equivalent to
-t \leq \log(\det( I - {\rm inv}(X+I) ))

is equivalent to
-t \leq \log(\det( I - U ))
U \succeq {\rm inv}(X + I)

is equivalent to
-t \leq \log(\det( I - U ))
\left[\begin{array}{cc}X+I & I\\I & U\end{array}\right] \succeq 0

The first step follows from the scalar-to-matrix generalization above, which you can prove more rigorously. The second step follows by applying \log(\det( I - U )) \leq \log(\det( I - V )) for I \succeq U \succeq V to our case with V = {\rm inv}(X + I). The last step is the Schur complement lemma.
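
Spelling out that last step: since X + I \succ 0, the Schur complement lemma gives

\left[\begin{array}{cc}X+I & I\\I & U\end{array}\right] \succeq 0 \quad\Longleftrightarrow\quad U - I\,(X+I)^{-1} I = U - {\rm inv}(X+I) \succeq 0

which is exactly the constraint U \succeq {\rm inv}(X + I).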


Thanks @hfriberg. I implemented your idea in CVX and it worked.
The equivalent optimization problem is

\begin{align} \mathrm{max} \ & \ \log \det ( I - U ) \\ \mathrm{s.t.} \ & \ X \succeq 0 \ \text{and some linear constraints on} \ X \\ & \ I - U \succeq 0 , \\ & \ \left[ \begin{array}{cc} X + I & I \\ I & U \end{array} \right] \succeq 0 \end{align}

Here, I add I - U \succeq 0 to ensure continuity of the function.
At first I also considered applying the Schur complement to introduce a variable V \succeq X ^ {-1}, but then the optimization became minimizing a log_det function, which is concave. The first equivalence in the trick converts the objective into maximizing a log_det function. That’s really ingenious :grinning:
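
For readers who want to reproduce this, a minimal CVX sketch of the formulation above might look like the following (the dimension n and the linear constraints on X are placeholders, not the actual problem data):

n = 4;                                   % placeholder dimension
cvx_begin sdp
    variable X(n,n) symmetric
    variable U(n,n) symmetric
    maximize( log_det( eye(n) - U ) )
    subject to
        X >= 0                           % X positive semidefinite
        % ... linear constraints on X go here ...
        eye(n) - U >= 0                  % continuity safeguard (but see the next reply)
        [ X + eye(n), eye(n); eye(n), U ] >= 0   % Schur complement LMI
cvx_end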

The I - U \succeq 0 part is redundant in CVX and may lower performance. Arguments to log_det are implicitly constrained this way already, as you’ll find in the CVX reference:

When used inside a CVX specification, log_det constrains its argument to be symmetric (if real) or Hermitian (if complex) and positive definite.

OK, I got it. Thanks for your help.

Hello, this matrix [X+I, I; I, U] may not be correct; I think that X+I and U should change positions.

That’s equivalent. …
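
One way to see the equivalence: swapping the diagonal blocks is a congruence with a permutation matrix, which preserves positive semidefiniteness:

\left[\begin{array}{cc}0 & I\\I & 0\end{array}\right] \left[\begin{array}{cc}X+I & I\\I & U\end{array}\right] \left[\begin{array}{cc}0 & I\\I & 0\end{array}\right] = \left[\begin{array}{cc}U & I\\I & X+I\end{array}\right]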