Actually, any symmetric monotone function of the singular values of a matrix is DCP-representable. See Ben-Tal and Nemirovski, “Lectures on Modern Convex Optimization”, Proposition 4.2.2. But the best representation (presented there, at least) is huge!
Let’s see how this works for a PSD matrix A. (The case of rectangular matrices is similar.)
The sum S_k(A) of the top k eigenvalues of A is the optimal value of the problem (with variables the scalar s and the symmetric matrix Z; A is data, assumed PSD):
minimize ks + Tr(Z)
subject to
Z ⪰ 0
Z + sI ⪰ A

Then, for a symmetric, monotone, convex g (with \lambda(A) the vector of eigenvalues of A), g(\lambda(A)) is the optimal value of the problem (with variable s):
minimize g(s)
subject to
s[1] >= ... >= s[n]
S_k(A) <= s[1] + ... + s[k], k = 1, ..., n-1
trace(A) == sum(s)
where each S_k(A) constraint is expressed via the SDP representation above.
The only difficulty is that this representation introduces a new n x n PSD variable for each eigenvalue, resulting in a problem with n SDP constraints of size n x n, which is not exactly fast to solve.