This is a bit of a general question regarding how CVX and the various solvers tackle a non-smooth problem.

For example, CVX supports the lambda_sum_largest(X,k) function (i.e., the sum of the k largest eigenvalues of X), which is convex but not differentiable. I have some basic familiarity with interior-point methods, but as far as I recall, they require the objective to be twice differentiable.

So my question is: what are the underlying algorithms that CVX uses for these non-smooth (but still convex) problems?

CVX calls conic optimization solvers, which operate on conic (cone) constraints rather than smooth objectives. CVX transforms non-smooth objectives into conic constraints via epigraph formulations.
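As a concrete illustration of the epigraph idea (a minimal numerical sketch in NumPy, not CVX's actual code path): lambda_max(X) can be written as the smallest t such that t*I - X is positive semidefinite, which is a conic (semidefinite) constraint even though lambda_max itself is non-smooth.

```python
import numpy as np

# Epigraph characterization: lambda_max(X) = min { t : t*I - X is PSD }.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
X = (A + A.T) / 2                       # random symmetric test matrix

t_star = np.linalg.eigvalsh(X)[-1]      # true largest eigenvalue

def is_psd(M, tol=1e-9):
    """Check positive semidefiniteness via the smallest eigenvalue."""
    return np.linalg.eigvalsh(M).min() >= -tol

# t*I - X first becomes PSD exactly at t = lambda_max(X):
assert is_psd(t_star * np.eye(5) - X)
assert not is_psd((t_star - 1e-3) * np.eye(5) - X)
```

A conic solver never evaluates lambda_max directly; it only ever sees the PSD constraint on t*I - X, which is smooth in the conic sense.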

Thanks for your prompt reply as always Mark! I’ll go through the Nemirovski paper, seems like a great resource.

I do have one quick question though: wouldn't the epigraph formulation of the lambda_sum_largest(X,k) function have a combinatorial number of convex constraints? Wouldn't that imply that even checking feasibility of a given point takes a ridiculously long time? In practice, having tried this with fairly large matrices, the running time doesn't grow combinatorially (which is good news, but I struggle to wrap my head around it).

The semidefinite formulation CVX uses is polynomial-size, not combinatorial: it introduces one n-by-n matrix variable and a couple of PSD constraints, rather than enumerating subsets of eigenvalues. In any event, you can look at the CVX source code to see what it does.
I've cut out the error checking and alternative paths, etc., and left the essence of what it does when given affine CVX expressions.

function cvx_optval = lambda_max( x )
...
z = [];
n = size( x, 1 );
cvx_begin
epigraph variable z
z * eye( n ) - x == semidefinite( n, ~isreal( x ) ); %#ok
cvx_end

function cvx_optval = lambda_sum_largest( x, k )
n = size( x, 1 );
...
S = [];
cvx_begin
variable S(n,n) symmetric
S == semidefinite(n); %#ok
minimize( k * lambda_max( x - S ) + trace( S ) );
cvx_end