CVX is returning NaN for a reformulated Frobenius norm minimization problem

Hello!

I have been trying to solve an Orthogonal Procrustes problem, posed as a Frobenius norm minimization: minimize the Frobenius norm of AX - B. In words, we are looking for the best linear mapping A from X to B, where A, X, and B are all matrices. When I solve this unconstrained minimization problem, CVX gives me an output without any issue.
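
As a sanity check on the unconstrained case, it also has a closed-form least-squares solution, A = B*pinv(X), so no conic solver is strictly needed for it. A minimal NumPy sketch with made-up dimensions and real random data for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 3, 4, 8                      # made-up dimensions for illustration
X = rng.standard_normal((M, K))
B = rng.standard_normal((N, K))

# Unconstrained minimizer of ||A @ X - B||_F: A = B X^+ (Moore-Penrose pseudoinverse)
A = B @ np.linalg.pinv(X)

# Cross-check against least squares on the transposed system X' A' = B'
A_ls = np.linalg.lstsq(X.T, B.T, rcond=None)[0].T
print(np.allclose(A, A_ls))            # True
```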

To add some constraints, instead of minimizing norm(A*X-B,'fro') directly, I am trying to minimize the equivalent problem:

minimize trace(X'QX) - 2*trace(B'AX), where Q = A'A and the transposes are conjugate transposes. In CVX, the code is written as below:
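
To make sure I have the algebra right (note that for complex data the cross term needs a real part: ||AX-B||_F^2 = trace(X'QX) - 2*real(trace(B'AX)) + ||B||_F^2 with Q = A'A), here is a quick NumPy check with random complex matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 4, 3, 5
A = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
X = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))
B = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))

Q = A.conj().T @ A                     # Q = A'A (Hermitian, PSD)
lhs = np.linalg.norm(A @ X - B, 'fro') ** 2
rhs = (np.trace(X.conj().T @ Q @ X).real
       - 2 * np.trace(B.conj().T @ A @ X).real
       + np.linalg.norm(B, 'fro') ** 2)
print(np.isclose(lhs, rhs))            # True
```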

cvx_begin
    cvx_solver sedumi
    variable codebooks(N,M*J) complex;
    variable Q(M*J,M*J) hermitian toeplitz;
    minimize( abs( trace(indicator'*Q*indicator - 2*y'*codebooks*indicator) ) )
    subject to
        for i = 1:M*J
            Q(i,i) == 1;
        end
cvx_end

Unfortunately, CVX accepts my problem but returns NaN as the output:

Calling SeDuMi 1.34: 144 variables, 0 equality constraints

SeDuMi 1.34 (beta) by AdvOL, 2005-2008 and Jos F. Sturm, 1998-2003.
Alg = 2: xz-corrector, Adaptive Step-Differentiation, theta = 0.250, beta = 0.500
eqs m = 1, order n = 6, dim = 148, blocks = 3
nnz(A) = 1 + 0, nnz(ADA) = 1, nnz(L) = 1
it : by gap delta rate t/tP t/tD* feas cg cg prec
0 : 4.73E+00 0.000
1 : 8.96E-02 1.96E+00 0.000 0.4148 0.9000 0.9000 1.28 1 1 2.1E+00
2 : 1.09E-02 8.86E-02 0.000 0.0451 0.9900 0.9900 1.36 1 1 6.5E-01
3 : 3.67E-06 7.54E-04 0.000 0.0085 0.9945 0.9945 1.03 1 1 3.2E-01
4 : 2.73E-04 5.05E-05 0.000 0.0670 0.9675 0.9675 -0.93 1 1 5.8E-04
5 : 1.89E-04 1.20E-06 0.000 0.0237 0.9900 0.9900 -1.14 1 1 4.7E-04
6 : 1.52E-04 5.43E-10 0.000 0.0005 0.9999 0.9999 -1.00 1 1 4.2E-04
7 : 1.11E-04 9.23E-11 0.000 0.1699 0.8438 0.8438 -1.04 1 1 3.5E-04
8 : 1.19E-04 2.64E-12 0.000 0.0287 0.9900 0.9900 -1.04 1 1 3.6E-04
9 : 1.05E-04 6.06E-13 0.175 0.2290 0.7875 0.7875 -1.01 2 2 3.4E-04
10 : 7.05E-05 2.95E-14 0.000 0.0487 0.9675 0.9675 -0.90 3 2 3.0E-04
11 : 7.05E-05 2.30E-15 0.000 0.0779 0.9000 0.0000 -0.99 3 3 2.9E-04
12 : 6.78E-05 8.04E-17 0.294 0.0350 0.9675 0.9652 -1.00 3 3 2.8E-04
13 : 5.13E-05 2.46E-17 0.203 0.3055 0.6750 0.6750 -0.99 3 3 2.7E-04

iter seconds |Ax| [Ay]_+ |x| |y|
13 0.6 1.5e-02 1.8e-17 5.9e+00 2.6e-17
Failed: no sensible solution/direction found.

Detailed timing (sec)
Pre IPM Post
1.480E-01 3.830E-01 1.001E-02
Max-norms: ||b||=1, ||c|| = 1,
Cholesky |add|=0, |skip| = 0, ||L.L|| = 1.

Status: Failed
Optimal value (cvx_optval): NaN

Is any DCP rule being violated by this alternative representation of the main problem, or is there something else I could fix in CVX? I would be truly grateful for any kind of suggestion. Thanks in advance!

Because the solver was called, CVX accepted your problem, so your code complies with the DCP rules.

The solver failed. This is a small, simple conic problem (abs of a complex affine expression, i.e., a second-order cone program), so any solver callable from CVX can be used. If you have CVX Professional or an academic license, together with Gurobi or Mosek, try those solvers, because they should be the most robust solvers available under CVX. Otherwise, try SDPT3, and if that fails, try ECOS (you will have to install ECOS yourself).

You haven’t provided the input data, so your problem is not reproducible. If you provide the input data, perhaps a forum member can give you more specific advice, including whether you might be able to improve the scaling so as to make the problem easier for solvers.

Can you try a smaller version of the same problem (smaller values of M, N, and/or J) and see whether it is solved and the solution seems correct? Once that works, then worry about the full-size problem of interest.

Dear Sir, thanks for your time! I have actually been trying with small values of N, M, and J. The reason behind the linearization is to rule out the quadratic term A'*A. The SDPT3 and MOSEK solvers worked, but unfortunately I am not getting the same output as for the original problem statement. I will surely post reproducible code. Although it is not strictly a CVX question, may I ask whether I can apply the Schur complement here? I have done it for the case of vectors but never for matrices. I am truly sorry for the naive question.
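
For reference, the matrix version I have in mind is the standard block Schur complement condition: [[P, B], [B', C]] is PSD with C positive definite if and only if P - B*inv(C)*B' is PSD. A quick numerical sketch of that equivalence with random data, in NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 2
# Random strictly positive definite block matrix M = [[P, B], [B', C]]
G = rng.standard_normal((n + m, n + m))
M = G @ G.T + 1e-6 * np.eye(n + m)
P, B, C = M[:n, :n], M[:n, n:], M[n:, n:]

# Schur complement condition: M PSD and C PD  <=>  S = P - B C^{-1} B' PSD
S = P - B @ np.linalg.solve(C, B.T)
print(np.linalg.eigvalsh(S).min() > -1e-9)   # True
```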

The Orthogonal Procrustes problem is non-convex. You can't expect a linear programming approximation to the problem to be a very good approximation. Perhaps there are (convex) semidefinite relaxations, using the Schur complement or otherwise (I don't know), but discussing good convex approximations or relaxations of non-convex problems is out of scope for this forum.
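
That said, the classical Orthogonal Procrustes problem (A constrained to be orthogonal) has a well-known closed-form solution via the SVD, so you may not need a convex formulation at all: A = U*V' where U*S*V' is the SVD of B*X'. A minimal NumPy sketch with real, noiseless data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
# Random orthogonal "ground truth" mapping via QR
A_true, _ = np.linalg.qr(rng.standard_normal((n, n)))
X = rng.standard_normal((n, 6))
B = A_true @ X                          # noiseless observations

# Closed-form Procrustes solution: A = U V' where B X' = U S V' (SVD)
U, _, Vt = np.linalg.svd(B @ X.T)
A_hat = U @ Vt

print(np.allclose(A_hat, A_true))       # True (exact recovery, noiseless case)
```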
