I’m trying to compute an ‘extended’ Ky Fan norm, i.e., the sum of the first k singular values plus epsilon times the (k+1)-th singular value. I understand there are possibly two ways to do this:
1. lambda_sum_largest([zeros(n),X;X',zeros(n)], k+epsilon): The big advantage is that the second argument can be any real scalar, but the flip side is having to form the ‘extended’ (dilated) matrix argument.
2. (1-epsilon)*KyFanNorm(X,k) + epsilon*KyFanNorm(X,k+1), using KyFanNorm from the QETLAB package: I have to use this roundabout expression because k can only be a positive integer here, and it results in two calls to the function, which increases the computation time.
Which is the better/faster way to go about this?
Also, if I want to use the former, how can I avoid forming the extended matrix? Could I instead write an SVD routine to get the Σ matrix of the decomposition and then call lambda_sum_largest(Σ,k)?
When called with a CVX variable or expression argument, rather than a numeric one, KyFanNorm just calls lambda_sum_largest, as in your first version, and in that case it accepts a non-integer k. So your first version is more efficient than your second, which makes two KyFanNorm calls. Or just make a single call to KyFanNorm with a non-integer second argument, which incurs only a little extra MATLAB overhead from the extra function calls on the way down to lambda_sum_largest.
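For reference, the fractional-k semantics that lambda_sum_largest uses (and that KyFanNorm inherits for CVX arguments) can be sketched numerically. extended_kyfan below is a hypothetical helper, not part of CVX or QETLAB, and it operates on a precomputed, descending-sorted list of singular values, so it only applies to numeric matrices, not CVX expressions:

```python
import math

def extended_kyfan(sigmas, k):
    """'Extended' Ky Fan norm with possibly non-integer k:
    sum of the floor(k) largest singular values, plus the
    fractional part of k times the next one.
    `sigmas` must be sorted in descending order."""
    m = math.floor(k)
    eps = k - m
    value = sum(sigmas[:m])
    if eps > 0 and m < len(sigmas):
        value += eps * sigmas[m]
    return value

# All four singular values of eye(4) are 1, so k = 3.2 gives
# 3 + 0.2*1 = 3.2, matching the cvx_optval in the run below.
print(extended_kyfan([1.0, 1.0, 1.0, 1.0], 3.2))
```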
cvx_begin
variable X(4,4)
minimize(KyFanNorm(X,3.2))
X == eye(4);
cvx_end
Calling SDPT3 4.0: 74 variables, 36 equality constraints
------------------------------------------------------------
num. of constraints = 36
dim. of sdp var = 16, num. of sdp blk = 2
dim. of linear var = 1
dim. of free var = 1 *** convert ublk to lblk
*******************************************************************
SDPT3: Infeasible path-following algorithms
*******************************************************************
version predcorr gam expon scale_data
HKM 1 0.000 1 0
it pstep dstep pinfeas dinfeas gap prim-obj dual-obj cputime
-------------------------------------------------------------------
0|0.000|0.000|6.7e-01|6.4e+00|1.8e+03| 9.000000e+01 0.000000e+00| 0:0:00| chol 1 1
1|1.000|0.983|5.2e-07|1.8e-01|1.2e+02| 7.860765e+01 4.717366e-01| 0:0:00| chol 1 1
2|1.000|0.373|4.4e-07|1.1e-01|2.9e+01| 2.111162e+01 5.092962e-01| 0:0:00| chol 1 1
3|0.993|0.433|2.3e-07|6.4e-02|7.8e+00| 7.017229e+00 8.723023e-01| 0:0:00| chol 1 1
4|0.942|0.928|1.9e-08|4.7e-03|7.3e-01| 3.721233e+00 3.056044e+00| 0:0:00| chol 1 1
5|0.969|0.908|9.1e-10|4.3e-04|4.0e-02| 3.218773e+00 3.184744e+00| 0:0:00| chol 1 1
6|0.988|0.987|2.6e-10|6.1e-06|5.0e-04| 3.200235e+00 3.199819e+00| 0:0:00| chol 1 1
7|0.989|0.989|2.7e-11|2.9e-06|1.9e-05| 3.200003e+00 3.199998e+00| 0:0:00| chol 2 2
8|1.000|0.989|1.3e-13|1.1e-07|7.0e-07| 3.200000e+00 3.200000e+00| 0:0:00| chol 2 2
9|1.000|0.989|1.1e-12|4.1e-09|2.6e-08| 3.200000e+00 3.200000e+00| 0:0:00|
stop: max(relative gap, infeasibilities) < 1.49e-08
-------------------------------------------------------------------
number of iterations = 9
primal objective value = 3.20000001e+00
dual objective value = 3.20000000e+00
gap := trace(XZ) = 2.63e-08
relative gap = 3.55e-09
actual relative gap = 1.03e-09
rel. primal infeas (scaled problem) = 1.05e-12
rel. dual " " " = 4.08e-09
rel. primal infeas (unscaled problem) = 0.00e+00
rel. dual " " " = 0.00e+00
norm(X), norm(y), norm(Z) = 4.1e+00, 2.0e+00, 2.8e+00
norm(A), norm(b), norm(C) = 8.7e+00, 3.0e+00, 6.4e+00
Total CPU time (secs) = 0.21
CPU time per iteration = 0.02
termination code = 0
DIMACS: 1.6e-12 0.0e+00 6.2e-09 0.0e+00 1.0e-09 3.5e-09
-------------------------------------------------------------------
------------------------------------------------------------
Status: Solved
Optimal value (cvx_optval): +3.2
Changing the objective to minimize(lambda_sum_largest([zeros(n),X;X',zeros(n)],3.2)), with n = 4, produces the same result, because that is what the KyFanNorm call is converted to.
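A side note on why the dilation works: the eigenvalues of [zeros(n),X;X',zeros(n)] are ±σᵢ(X), so the sum of its 3.2 largest eigenvalues equals the extended Ky Fan sum of X. A minimal sketch of that fact, assuming a diagonal X so the dilation splits into 2×2 blocks whose eigenvalues can be read off by hand (dilation_eigenvalues is an illustrative helper, not a library function):

```python
def dilation_eigenvalues(diag_entries):
    """For diagonal X, a symmetric permutation turns [[0, X], [X', 0]]
    into 2x2 blocks [[0, s], [s, 0]], each with characteristic
    polynomial t^2 - s^2 = 0, hence eigenvalues +s and -s."""
    eigs = []
    for s in diag_entries:
        eigs.extend([s, -s])
    return sorted(eigs, reverse=True)

# eye(4): all four singular values are 1.
eigs = dilation_eigenvalues([1.0, 1.0, 1.0, 1.0])
# Sum of the 3 largest eigenvalues plus 0.2 times the 4th:
value = sum(eigs[:3]) + 0.2 * eigs[3]
print(value)  # 3.2, the optimal value CVX reports above
```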
You’re looking at the numeric version of lambda_sum_largest. The version that is applied to CVX variables and expressions is in functions/@cvx. In its “guts” you will see a semidefinite optimization problem being solved:
cvx_begin
variable S(n,n) symmetric
S == semidefinite(n); %#ok  % S is constrained to be positive semidefinite
minimize( k * lambda_max( x - S ) + trace( S ) );  % x, k, n come from the enclosing function
cvx_end
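As I read it, this snippet encodes the variational characterization lambda_sum_largest(x,k) = min over S ⪰ 0 of k·λ_max(x−S) + trace(S), which stays convex for non-integer k. For intuition only (this is not the CVX code path): when x is diagonal the optimal S can also be taken diagonal, and the problem collapses to a one-dimensional search over a threshold t with objective k·t + Σᵢ max(λᵢ − t, 0), minimized at one of the eigenvalues. fan_dual below is a hypothetical helper sketching that reduction:

```python
def fan_dual(eigenvalues, k):
    """Evaluate min over threshold t of k*t + sum(max(lam - t, 0)),
    the diagonal-case reduction of k*lambda_max(x - S) + trace(S)
    over S >= 0. The objective is piecewise linear in t and attains
    its minimum at one of the eigenvalues (for 0 < k < n), so
    scanning those breakpoints suffices."""
    return min(
        k * t + sum(max(lam - t, 0.0) for lam in eigenvalues)
        for t in eigenvalues
    )

# For eigenvalues [3, 2, 1] and k = 1.5 this gives 3 + 0.5*2 = 4.0,
# the fractional Ky Fan sum.
print(fan_dual([3.0, 2.0, 1.0], 1.5))
```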