I’m trying to maximize the following concave objective under some convex constraints in CVX:

-\text{tr}(\mathbf{L}\mathbf{L}^H)-\text{tr}(\mathbf{\Lambda}^H \mathbf{\Sigma}^{-1} \mathbf{\Lambda}(\mathbf{R}+\mathbf{G}\mathbf{L}\mathbf{L}^H\mathbf{G}^H))-\text{tr}(\mathbf{\Sigma}^{-1}) \\ +2\Re\big(\text{tr}(\mathbf{\Sigma}^{-1}\mathbf{\Lambda}\mathbf{G}\mathbf{L})\big)-\log_2|\mathbf{\Sigma}|

where \mathbf{\Lambda}, \mathbf{\Sigma}, \mathbf{R}, and \mathbf{G} are constant complex matrices, and \mathbf{L} is the optimization variable. Here is how I implement it in CVX:

```
maximize( -square_pos(norm(L,'fro')) ...
    - real(trace(lam'*inv(sig)*lam*(R + G*square_pos(norm(L,'fro'))*G'))) ...
    - trace(inv(sig)) + 2*real(trace(inv(sig)*lam*G*L)) - log2(det(sig)) )
```

The problem is that CVX returns the following error:

    Disciplined convex programming error:
    Cannot perform the operation: {complex affine} .* {convex}

My questions are:

1- In the DCP rule set, multiplying a concave expression by a non-negative constant is a valid operation, so why does this error happen?

2- Does CVX allow substituting variables that are functions of the optimization variable? That is, can I define

\mathbf{Z}=\mathbf{R}+\mathbf{G}\mathbf{L}\mathbf{L}^H\mathbf{G}^H and use it in the objective while still obtaining the same optimal solution? This replacement removes the error, but I think CVX will treat \mathbf{Z} as a constant even though it is clearly a function of \mathbf{L}.
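To make the kind of substitution I mean concrete, here is a minimal affine sketch using CVX's `expression` keyword (the sizes `n`, `m`, `p` and the constants `G`, `lam`, `sig` are hypothetical placeholders, not my actual data):

```
cvx_begin
    variable L(n,m) complex
    expression W(p,m)
    W = G*L;    % intended: W stays an affine function of L, not a constant snapshot
    maximize( 2*real(trace(inv(sig)*lam*W)) - square_pos(norm(L,'fro')) )
cvx_end
```

My question is whether a nonconvex substitution like \mathbf{Z} above is handled the same way as this affine one.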

3- This error also appears when all the matrices are scalars. For some reasons, I can't substitute the quadratic form \mathbf{K}=\mathbf{L}\mathbf{L}^H and perform the optimization over \mathbf{K}, which would otherwise solve the problem.
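For reference, I believe the scalar case can be reproduced with just a few lines (`g` is a hypothetical complex constant; this is a sketch, not my actual model):

```
% Minimal scalar sketch that, I believe, triggers the same class of error:
g = 1 + 2i;                          % complex constant
cvx_begin
    variable L complex
    % g * square_pos(abs(L)) * g' asks CVX to multiply a complex constant
    % by a convex expression, which the DCP ruleset rejects
    maximize( -real(g * square_pos(abs(L)) * g') )
cvx_end
```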

Best wishes,

Sami.