Actually, any convex, symmetric, monotone function of the singular values of a matrix is DCP-representable. See Ben-Tal and Nemirovski, "Lectures on Modern Convex Optimization", Proposition 4.2.2. But the best representation (presented there, at least) is huge!

Let's see how this works for a PSD matrix A. (The case of rectangular matrices is similar.)

The sum S_k(A) of the top k eigenvalues of A is the optimal value of the problem

minimize ks + Tr(Z)
subject to
Z ⪰ 0
A ⪰ 0
Z + sI ⪰ A

with variables s ∈ R and Z ∈ S^n.
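For concreteness, here is a minimal CVX sketch of that problem (my own illustration, not from the thread); it assumes A is a given n x n symmetric PSD data matrix and k is a fixed integer:

```matlab
% Sum of the k largest eigenvalues of a given PSD matrix A,
% via the SDP representation above.
n = size(A, 1);
cvx_begin sdp
    variable s
    variable Z(n,n) symmetric
    minimize( k*s + trace(Z) )
    subject to
        Z >= 0;                % Z is PSD
        Z + s*eye(n) >= A;     % in sdp mode, >= on matrices is the PSD order
cvx_end
% cvx_optval now equals S_k(A); compare with sum(maxk(eig(A), k))
```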

Then g(λ(A)) is the optimal value of the problem

minimize g(s)
subject to
s[1] >= ... >= s[n]
S_k(A) <= sum(s[1:k]), k = 1, ..., n
trace(A) == sum(s)

with variable s ∈ R^n, where each constraint on S_k(A) is imposed using the SDP representation above.
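To see where the size blows up, here is a schematic CVX sketch (again mine, not from the thread) for the example g(x) = the vector p-norm of x; it assumes n and p >= 1 are defined, and uses a placeholder variable A where a real model would have an affine PSD expression:

```matlab
% Schematic: minimize g(lambda(A)) for g = norm(., p), using the
% majorization representation above. Note the n x n PSD block Z(:,:,k)
% introduced for each k -- this is where the representation gets huge.
cvx_begin sdp
    variable A(n,n) symmetric       % placeholder; tie A to your model
    variable s(n)                   % vector that majorizes lambda(A)
    variable v(n)                   % the scalar "s" from each S_k representation
    variable Z(n,n,n) semidefinite  % one n x n PSD matrix per k
    minimize( norm(s, p) )          % g(s); any convex symmetric monotone g
    subject to
        A >= 0;
        s(1:n-1) >= s(2:n);         % s sorted in nonincreasing order
        trace(A) == sum(s);
        for k = 1:n
            % S_k(A) <= sum(s(1:k)), via the SDP representation above
            k*v(k) + trace(Z(:,:,k)) <= sum(s(1:k));
            Z(:,:,k) + v(k)*eye(n) >= A;
        end
cvx_end
```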

The only difficulty is that this representation introduces a new n x n PSD variable for each eigenvalue, resulting in a problem with n SDP constraints of size n x n, which is not exactly fast to solve.

This is great, Madeleine. Of course, we do have the sum of the top k eigenvalues of A. I'd say that even though this corrects my claim that it is not possible, it is certainly not practical for all but the smallest matrices... If someone were to implement this, it would be interesting to see!

Per my answer in Maximize the minimum singular value, the minimum singular value of a matrix of dimension >= 2 is neither convex nor concave in general. (For example, averaging diag(1, -1) with diag(-1, 1) drops the minimum singular value from 1 to 0, ruling out concavity, while averaging diag(2, 1) with diag(1, 2) raises it from 1 to 1.5, ruling out convexity.) This is not inconsistent with your statement though, because min is not a monotone function.

Truth in advertising: I had to look up the definition of monotone function.

This is fantastic – I have been looking for something like this for ages! I went ahead and implemented this for the (k,p)-norms (i.e., the Schatten p-norms of the k largest singular values), and the code is available here as part of my "QETLAB" package (but you don't need the whole package: just this one .m file).

It actually runs much better than I thought it would. On my laptop, it computes the Schatten 3-norm of a random 100-by-100 matrix in about 3.5 seconds, and the Schatten 5-norm of a random 200-by-200 matrix in about 11.5 seconds.
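Usage is a one-liner along these lines (check the documentation for the exact syntax in your version):

```matlab
X = randn(100);           % random 100 x 100 matrix
nrm = kpNorm(X, 100, 3);  % (k,p)-norm with k = 100, p = 3: the Schatten 3-norm
```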