I would like to solve

\[
\begin{aligned}
\min_{X\succcurlyeq 0}\quad & \mathsf{trace}\left(X\right)+\lambda \lVert X \rVert_1\\
\text{subject to}\quad & \lVert \mathcal{A}\left(X\right)-b\rVert_2 \leq \varepsilon
\end{aligned}
\]

numerically, where \(\lambda\geq 0\), \(\varepsilon\geq 0\), \(b\), and the linear operator \(\mathcal{A}\) are given. Here \(\lVert X\rVert_1\) denotes the entrywise \(\ell_1\)-norm of \(X\), i.e., the sum of the absolute values of its entries. I wonder if it's possible to implement this program in TFOCS.

I tried to apply `tfunc_sum` to combine `prox_trace` and `prox_l1` into the objective function, but that doesn't seem to work: the error message said there were too many input arguments when the objective was called in one of the subroutines.
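For reference, this is roughly what I tried (a sketch, not verbatim; `linop_A` is a placeholder for the TFOCS linear operator I built for \(\mathcal{A}\), and `mu` is the usual smoothing parameter for `tfocs_SCD`):

```matlab
% Sketch of my failing attempt. linop_A stands for my linear operator,
% and lambda, epsilon, b are the problem data from above.
mu         = 1e-2;
objectiveF = tfunc_sum( prox_trace(1), prox_l1(lambda) ); % trace(X) + lambda*||X||_1 ?
[ X, out ] = tfocs_SCD( objectiveF, { linop_A, -b }, ...
                        prox_l2(epsilon), mu );           % ||A(X)-b||_2 <= epsilon
% The "too many input arguments" error is raised when objectiveF is
% evaluated inside one of the TFOCS subroutines.
```

My guess is that `tfunc_sum` does not produce a prox-capable function (so it can't be used as the nonsmooth objective in `tfocs_SCD`), but I'm not sure.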

I'd appreciate it if you could tell me whether, and if so how, I can implement the optimization above in TFOCS.