Hello,
Thanks for helping me…
I’d like to solve a “rank minimization + sparsity” problem, which has been reduced to a “trace minimization + ℓ1 norm” problem. Here is what I’ve written:
% Data: coordinates of 24 given points, and a zero right-hand side.
x = [2;0;-2;0;4;0;-4;0;4;2;-2;-4;-4;-2;2;4;4;0;-4;0;2;0;-2;0];
y = [0;2;0;-2;0;4;0;-4;2;4;4;2;-2;-4;-4;-2;0;4;0;-4;0;2;0;-2];
z = [4;4;4;4;2;2;2;2;0;0;0;0;0;0;0;0;-2;-2;-2;-2;-4;-4;-4;-4];
zz = zeros(24,1);

cvx_begin
    variable Q(24,24) symmetric
    variable Ya(24,24) symmetric
    variable Za(24,24) symmetric
    % Graph Laplacian built from the (weighted) adjacency matrix Q.
    A = sum(Q,2);        % row sums = vertex degrees
    D = diag(A);
    L = D - Q;
    % Block matrix for the trace-minimization (nuclear-norm) SDP.
    Mp = [Ya, L; L', Za];
    % Note: for a matrix, norm(L,1) is the induced 1-norm (max column
    % sum); the elementwise l1 norm is norm(L(:),1).
    minimize( trace(Ya) + trace(Za) + norm(L(:),1) )
    subject to
        Mp == semidefinite(48);
        L*x == zz;
        L*y == zz;
        L*z == zz;
cvx_end
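In mathematical form, the program above attempts to solve the following SDP (writing \mathbf{1} for the all-ones vector, and \|L\|_1 for the elementwise \ell_1 norm; minimizing \operatorname{tr}(Y_a)+\operatorname{tr}(Z_a) under the block PSD constraint is the standard SDP characterization of the nuclear norm of L, up to a factor of 1/2, which is the trace-minimization surrogate for rank):

\begin{aligned}
\min_{Q,\,Y_a,\,Z_a}\quad & \operatorname{tr}(Y_a)+\operatorname{tr}(Z_a)+\|L\|_1\\
\text{subject to}\quad & L=\operatorname{diag}(Q\mathbf{1})-Q,\\
& \begin{bmatrix} Y_a & L\\ L^{\top} & Z_a \end{bmatrix}\succeq 0,\\
& Lx=Ly=Lz=0.
\end{aligned}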
Since x, y, and z are given, the constraints are linear. However, when I solve this problem, CVX returns the trivial solution L = Q = 0 (every element of L and Q is zero). I am looking for a non-trivial solution, but I do not know what constraints would rule the trivial one out.
So far, my workaround for avoiding the trivial solution has been to fix one of the entries of Q (e.g. Q(1,2) = 1); CVX then solves the problem and returns the other entries relative to the fixed one. This feels like a hack, though, and is not what I need from this optimization.
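Concretely, the workaround amounts to adding one extra equality constraint inside the CVX block (the value 1 is arbitrary; it just pins the scale, since the rest of Q is then determined relative to it):

% Hack: pin one entry of Q to exclude the all-zero solution.
Q(1,2) == 1;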
I’d be very grateful if somebody can help me.