Unexpected results in a convex problem

Hi, I have a convex problem aiming at minimization over two matrices, namely A and C, subject to some constraints. In particular:

A is an NxN symmetric positive semidefinite matrix, with off-diagonal entries <=0 and diagonal entries >=0

C is a KxK positive semidefinite matrix that admits \mathbf{C = B^{-1} \Lambda_k B}, where \Lambda_k is a diagonal matrix.

The objective function to minimize is

\min_{\mathbf{A} \in \mathbb{R}^{N \times N},\, \mathbf{C} \in \mathbb{R}^{K \times K}} \sum_{m=1}^{M} \sum_{i,j=1}^{N} -A_{ij} \left| Y_{im} - Y_{jm} \right| + \mu \|\mathbf{A}\|_F^2

subject to

\operatorname{tr}(\mathbf{A}) = p, \quad A_{ij} = A_{ji} \leq 0,\ i \neq j, \quad \mathbf{A}\mathbf{1} = \mathbf{0}, \quad \mathbf{C} \succeq \mathbf{0}, \quad \mathbf{AU} = \mathbf{UC}

The code I have written is as follows:

cvx_clear
cvx_begin
    variable A_recovered(param.N,param.N) semidefinite
    variable C(param.K,param.K) semidefinite

    expression Objective;
    Objective = param.mu * sum_square(vec(A_recovered));
    for m = 1:param.M
        for i = 1:param.N
            for j = 1:param.N
                Objective = Objective - A_recovered(i,j)*abs(Y(i,m)-Y(j,m));
            end
        end
    end
    minimize Objective
    subject to
        A_recovered*U == U*C
        A_recovered*ones(param.N,1) == zeros(param.N,1)
        for i = 1:param.N
            for j = 1:param.N
                if j ~= i
                    A_recovered(i,j) <= 0;
                    A_recovered(j,i) <= 0;
                    A_recovered(i,j) == A_recovered(j,i);
                end
            end
        end
        diag(A_recovered) >= 0;
        trace(A_recovered) == p;
cvx_end

Are both the objective function and the constraints written in the correct form, matching the mathematical formulation? Thank you

What are your unexpected results?

Have you even implemented this in CVX? If not, please read the CVX Users' Guide, and then let us know if you encounter any difficulties in formulating this problem in CVX and solving it.

If you have implemented it and get unexpected results, then please show us the results and explain why they are unexpected.

The formulation you show looks straightforward to implement in CVX. However, your description

that admits \mathbf{C = B^{-1} \Lambda_k B}, where \Lambda_k is a diagonal matrix.

is not clear to me, and does not seem to be reflected in the formulation you show. Of course, any symmetric matrix is diagonalizable, but it's not clear to me whether that constraint involves a specific provided (input) value of B… If it does, you can declare Lambda as a diagonal matrix, and "define" C with the expression assignment C = inv(B)*Lambda*B.
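
For concreteness, a minimal sketch of that suggestion, assuming B is a known (input) KxK invertible matrix (the skeleton below is illustrative, not from the original post):

    cvx_begin
        variable Lambda(K,K) diagonal
        % C is then an affine expression of Lambda, not a separate variable
        C = inv(B)*Lambda*B;
        % ... objective and remaining constraints involving C go here ...
    cvx_end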

Thank you for the reply. I'm interested in recovering \mathbf{C} and \mathbf{A}, so I have not explicitly included the matrices \mathbf{B} and \Lambda_k in the optimization problem, only the relation between the two optimization variables

So what are your unexpected results?

I have to learn the graph Laplacian matrix A, and I can compare it (also visually, as a graph) with the ground truth artificially made by me, and it is not plausible. In particular, I notice that in the diagonal of the recovered A, all but 4 of the 14 entries have exactly the same value, equal to p/10, where p is the value in the trace constraint on A

You will have to provide more information, including your CVX code, preferably with all inputs so that it is reproducible, if you are to have any chance of getting useful help here. That said, this is a CVX forum, not a modeling forum, and not a forum addressing graph optimization. So forum readers may be able to help assess whether your CVX code correctly matches a mathematical problem formulation you provide, and may be able to help assess whether CVX or the solver it calls encountered any difficulties in solving the provided problem. But forum readers are not in a position to tell you whether your model formulation is a good or adequate representation of whatever “real world” problem you are trying to address, such as learning a graph Laplacian matrix.

Thank you Mark, I have only explained the context of my problem; I am not asking why the results are unexpected. I'm asking whether the code I provided is syntactically correct and the same as the problem formulation provided

Sorry, I hadn’t seen that you had edited your first post to include CVX code.

Here are some discrepancies to get you started. If the omission of the declaration of A_recovered is not just a copy-and-paste error in the forum post, then I don't know what you have actually been running. Perhaps A_recovered already had a value in MATLAB prior to the CVX invocation? If so, it would be treated by CVX as a constant (matrix)

A_recovered is not declared. It should be declared semidefinite. The rest of this paragraph describes things which are not erroneous, but can be simplified and improved. The semidefinite declaration will make A_recovered symmetric; therefore the explicit symmetry constraints are not needed. Instead of for loops, the off-diagonal entries can be constrained <= 0 by triu(A_recovered,1) <= 0, which due to symmetry also constrains the lower triangle to be <= 0, as sketched below.
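
Under those suggestions, the constraint block might look like this (a sketch only, not a drop-in replacement; it assumes the same param, U, Y, and p inputs as the posted code, and omits the constraints involving C):

    cvx_begin
        variable A_recovered(param.N,param.N) semidefinite   % semidefinite implies symmetry
        % ... objective built as before ...
        subject to
            triu(A_recovered,1) <= 0;   % off-diagonals <= 0; lower triangle follows by symmetry
            A_recovered*ones(param.N,1) == zeros(param.N,1);
            trace(A_recovered) == p;
    cvx_end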

Should have + rather than - before A_recovered in
Objective = Objective - A_recovered(i,j)*abs(Y(i,m)-Y(j,m));
So you have been, in effect, maximizing the portion of your objective other than the Frobenius norm squared penalty term (presuming A_recovered was declared as a CVX variable).

Should be A_recovered*U==C*U, not A_recovered*U==U*C .


Thank you Mark for your advice and corrections. It was A_recovered and not L_recovered in the variable declarations (now I have edited the post), and the objective function is the Frobenius norm plus the other term. In this term, since A_{ij} <= 0, I have the minus sign so that the objective function increases when adding such A_{ij} for which |Y_{im}-Y_{jm}| is high. (I have also edited this mistake in the problem formulation of the first post.)

Also, the constraint \mathbf{AU=CU} is not correct in my problem formulation, since I intended \mathbf{C} to be the eigenvalue matrix of \mathbf{A}. Given this edited problem formulation, are there errors in the CVX code? Thank you

Your CVX code now appears to match your mathematical problem formulation.

I forgot to mention in my previous post that the constraint diag(A_recovered) >= 0 is redundant given the declaration of A_recovered as semidefinite, which implies a nonnegative diagonal. However, the code as is is not wrong. Also, as stated in my previous comment, you can do away with the double for loop constraints and replace them with triu(A_recovered,1) <= 0, and the symmetry constraints on A_recovered are implied by the semidefinite declaration. But it is not wrong as is.

Yes, I modified my source code to avoid the for loops in the constraints. I have also avoided the for loop in the objective function by doing the following:

Objective = sum(sum(repmat(-A_recovered,1,size(Y,2)) .* abs(diffmat(Y)))) + param.mu * sum_square(vec(A_recovered));

i.e., by taking an elementwise product between the entries of the matrix A_recovered (stacked M times, the number of columns of Y) and a block pairwise-difference matrix, i.e. a fat matrix consisting of M NxN blocks, each of which is the pairwise-difference matrix of one column of Y, that is

N = size(Y,1);
M = size(Y,2);
Y_diff = zeros(N,N*M);                                  % the fat block matrix computed by diffmat(Y)
for m = 1:M
    Yrep = repmat(Y(:,m),1,N);
    Y_diff(:,N*(m-1)+1:N*(m-1)+N) = Yrep' - Yrep;       % pairwise differences of column m
end

Do you think this is an equivalent, more compact form than the for loops?

I'll let you check it. If, for testing purposes, you make A_recovered a numerically populated MATLAB variable instead of a CVX variable, and the value of Objective matches what it was with the for loops, then it should also “match” in CVX. The final proof of the pudding is to match the optimal objective value (cvx_optval) within solver tolerance. Note that even if the problem formulations were identical in exact arithmetic, they may not be exactly the same due to roundoff error; and even having things (constraints, objective) in a different order can cause a solver to produce a different optimal solution.
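
Concretely, the check described above might look like this (a hypothetical test script; it assumes Y is param.N x param.M and that the Y_diff construction above has already been run):

    % Populate A_recovered with ordinary numbers instead of declaring it in CVX
    A_recovered = randn(param.N);
    A_recovered = (A_recovered + A_recovered')/2;   % any symmetric test matrix will do

    % Objective via the original triple for loop
    Obj_loop = param.mu * sum(A_recovered(:).^2);
    for m = 1:param.M
        for i = 1:param.N
            for j = 1:param.N
                Obj_loop = Obj_loop - A_recovered(i,j)*abs(Y(i,m)-Y(j,m));
            end
        end
    end

    % Objective via the vectorized form
    Obj_vec = sum(sum(repmat(-A_recovered,1,param.M) .* abs(Y_diff))) ...
        + param.mu * sum(A_recovered(:).^2);

    % The two should agree up to roundoff
    disp(abs(Obj_loop - Obj_vec))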

With an artificially populated \mathbf{A\_recovered} the result is the same, so I think it will also match in CVX. Thank you Mark for all the precious advice