Can I use CVX to solve a minimax optimization problem?

Hi. I need to solve a problem of the following form:

\begin{align} \min_x \max_y \ & \ f(x,y) \\ \mathrm{subject \ to} \ & \ Ax = b, \ Cy = d \end{align}

where the continuous function f(x,y) is convex in x and concave in y.
This problem can be solved via its KKT conditions:

\begin{aligned} & \nabla_x f(x,y)+A^T \lambda = 0, \ Ax = b, \\ & \nabla_y f(x,y)+C^T \nu= 0, \ Cy = d. \\ \end{aligned}

I would like to use optimization software such as CVX to solve either the optimization problem or the KKT equations, but I have no idea how to do it.

You haven't told us what the function f is. Most likely, formulating and solving the KKT conditions is not the best way of solving it with CVX.

CVX can be used to solve some, but not all, minimax problems.

The complete optimization problem is

\begin{aligned} \min_{\mathbf{Q}} \max_{\mathbf{d}} \ & \ \sum_{k=1}^K \mu_k \log \frac{ \det (\mathbf{Q}+ \sum_{i=1}^k d_i \mathbf{h}_i \mathbf{h}_i^H )} { \det (\mathbf{Q}+ \sum_{i=1}^{k-1} d_i \mathbf{h}_i \mathbf{h}_i^H )} \\ \mathrm{subject \ to} \ & \ \sum_{k=1}^K d_k = P, \ \mathbf{d} \geq 0, \ \mathrm{tr}(\mathbf{R} \mathbf{Q}) = \mathrm{tr}(\mathbf{R}),\ \mathbf{Q} \succeq 0, \ \mathbf{Q} \ \mathrm{is \ diagonal} \end{aligned}

where \mu_1 \geq \mu_2 \geq \cdots \geq \mu_K. The optimization variables are \mathbf{d} = [d_1, \ldots, d_K]^T and \mathbf{Q}. The objective function is concave in \mathbf{d} and convex in \mathbf{Q}.

That doesn’t look like a convex optimization problem to me.

The problem is indeed convex if \mu_1 \geq \mu_2 \geq \cdots \geq \mu_K \geq 0.
First, the expression

\log \frac{ \det (\mathbf{Q}+ \sum_{i=1}^k d_i \mathbf{h}_i \mathbf{h}_i ^ H )} { \det (\mathbf{Q}+ \sum_{i=1}^{k-1} d_i \mathbf{h}_i \mathbf{h}_i ^ H )} = \log \det \left( \mathbf{I} + \left(\mathbf{Q} + \sum_{i=1}^{k-1} d_i \mathbf{h}_i \mathbf{h}_i ^ H \right)^{-1} d_k \mathbf{h}_k \mathbf{h}_k ^ H \right)

is convex in \mathbf{Q}, so the objective function is convex in \mathbf{Q}.
To show that the objective function is concave in \mathbf{d}, we can use the identity

\log \frac{ \det (\mathbf{Q}+ \sum_{i=1}^k d_i \mathbf{h}_i \mathbf{h}_i ^ H )} { \det (\mathbf{Q}+ \sum_{i=1}^{k-1} d_i \mathbf{h}_i \mathbf{h}_i ^ H )} = \log \det (\mathbf{Q}+ \sum_{i=1}^k d_i \mathbf{h}_i \mathbf{h}_i ^ H ) - \log \det (\mathbf{Q}+ \sum_{i=1}^{k-1}d_i \mathbf{h}_i \mathbf{h}_i ^ H ),

sum over k, and regroup the terms (a summation by parts) to rewrite the objective function as

\sum_{k=1} ^ {K-1} (\mu_k - \mu_{k+1}) \log \det (\mathbf{Q}+ \sum_{i=1}^k d_i \mathbf{h}_i \mathbf{h}_i ^ H) + \mu_K \log \det ( \mathbf{Q}+ \sum_{i=1}^K d_i \mathbf{h}_i \mathbf{h}_i ^ H ) - \mu_1 \log \det (\mathbf{Q}).
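As a side note, the inner maximization over \mathbf{d} for a fixed feasible \mathbf{Q} is already DCP-representable in this regrouped form, because every log-det term is concave in \mathbf{d} and its weight is nonnegative. Below is a minimal CVX (MATLAB) sketch of that inner problem only; the data h, mu, P and the fixed Q0 are hypothetical placeholders (real channel vectors for simplicity), not the actual application data.

```matlab
% Hypothetical placeholder data (not from the actual application)
n = 4; K = 3;
h  = randn(n, K);                  % columns are h_1,...,h_K (real for simplicity)
mu = [3; 2; 1];                    % mu_1 >= ... >= mu_K >= 0
P  = 10;                           % total power budget
Q0 = eye(n);                       % a fixed, feasible diagonal PSD Q

cvx_begin
    variable d(K) nonnegative
    obj = 0;
    S = Q0;                        % S_k = Q0 + sum_{i<=k} d_i*h_i*h_i'
    for k = 1:K
        S = S + d(k) * (h(:,k) * h(:,k)');
        if k < K
            w = mu(k) - mu(k+1);   % weights from the regrouped objective
        else
            w = mu(K);
        end
        obj = obj + w * log_det(S);
    end
    maximize( obj )                % -mu_1*log_det(Q0) is constant here, so omitted
    subject to
        sum(d) == P;
cvx_end
```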

You need the maximum (objective value) with respect to d to be convex with respect to Q; otherwise, it is not a convex optimization problem. That is, you need the objective function of the outer minimization to be jointly convex in d and Q.

Thanks for your reply. For now I can’t prove the convexity of the outer minimization.
Perhaps I need to implement a primal-dual algorithm myself to solve it.

Since you mentioned that CVX can solve some minimax problems, I am interested in what kinds of minimax problems can be solved with CVX.

Convex optimization problems which are DCP-representable.

For example, if x is a variable, minimize(max(exp(2*x),x^2)). That is a minimax problem and follows CVX's rules.
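For completeness, that one-liner sits inside a standard CVX model; a minimal, self-contained sketch (the scalar variable x is the only assumption):

```matlab
cvx_begin
    variable x
    % max of two convex expressions is convex, so this obeys the DCP ruleset
    % square(x) is CVX's DCP-compliant form of x^2
    minimize( max( exp(2*x), square(x) ) )
cvx_end
```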

OK. Now I know that CVX cannot be used to solve the minimax or saddle-point problem of a continuous convex-concave function f(x,y), unless the problem can be converted to a standard convex (or conic) problem.