Why does MOSEK return "Failed"?

I am solving the following problem with MOSEK, but it returns "Failed". Can anyone help me, please?
------------------------------------------ CVX code ------------------------------------------
cvx_begin
variable Y(n+1,n+1) %symmetric
variable G(1,n+1)
minimize (trace(U1_hat'*(Y-F1*G)) + (rho2/2)*square_pos(norm(X1-Y,'fro')) + (rho1/2)*square_pos(norm(Y-F1*G,'fro')) + (rho3/2)*square_pos(norm(F1-G','fro')));
subject to
-1<=Y<=1;
-1<=G<=1;
cvx_end

U1_hat, F1, and X1 are all inputs; Y and G are the matrix variables. The MOSEK output is as follows:
Calling Mosek 9.1.9: 99085 variables, 24818 equality constraints
For improved efficiency, Mosek is solving the dual problem.

MOSEK Version 9.1.9 (Build date: 2019-11-21 11:34:40)
Copyright © MOSEK ApS, Denmark. WWW: mosek.com
Platform: Windows/64-X86

Problem
Name :
Objective sense : min
Type : CONIC (conic optimization problem)
Constraints : 24818
Cones : 6
Scalar variables : 99085
Matrix variables : 0
Integer variables : 0

Optimizer started.
Presolve started.
Linear dependency checker started.
Linear dependency checker terminated.
Eliminator started.
Freed constraints in eliminator : 0
Eliminator terminated.
Eliminator - tries : 1 time : 0.00
Lin. dep. - tries : 1 time : 0.01
Lin. dep. - number : 0
Presolve terminated. Time: 0.05
Problem
Name :
Objective sense : min
Type : CONIC (conic optimization problem)
Constraints : 24818
Cones : 6
Scalar variables : 99085
Matrix variables : 0
Integer variables : 0

Optimizer - threads : 12
Optimizer - solved problem : the primal
Optimizer - Constraints : 24809
Optimizer - Cones : 6
Optimizer - Scalar variables : 99079 conic : 49467
Optimizer - Semi-definite variables: 0 scalarized : 0
Factor - setup time : 0.05 dense det. time : 0.00
Factor - ML order time : 0.02 GP order time : 0.00
Factor - nonzeros before factor : 1.11e+05 after factor : 1.11e+05
Factor - dense dim. : 4 flops : 2.09e+06
ITE PFEAS DFEAS GFEAS PRSTATUS POBJ DOBJ MU TIME
0 1.5e+00 2.0e+00 5.7e+00 0.00e+00 3.000000000e+00 -1.750000000e+00 1.0e+00 0.14
1 1.5e+00 2.0e+00 5.7e+00 6.76e-01 2.999123162e+00 -1.749493477e+00 1.0e+00 0.20
2 1.5e+00 2.0e+00 5.7e+00 5.55e-01 2.856621011e+00 -1.891143194e+00 1.0e+00 0.23
3 1.5e+00 2.0e+00 5.7e+00 5.40e-01 2.798147177e+00 -1.949304095e+00 1.0e+00 0.25
4 1.5e+00 2.0e+00 5.7e+00 5.36e-01 2.798147177e+00 -1.949304095e+00 1.0e+00 0.31
5 1.5e+00 2.0e+00 5.7e+00 5.36e-01 2.798147177e+00 -1.949304095e+00 1.0e+00 0.39
Optimizer terminated. Time: 0.48

Interior-point solution summary
Problem status : UNKNOWN
Solution status : UNKNOWN
Primal. obj: 2.7981471768e+00 nrm: 4e+00 Viol. con: 2e+00 var: 1e-04 cones: 2e-01
Dual. obj: -1.9493040946e+00 nrm: 2e+00 Viol. con: 0e+00 var: 2e+00 cones: 0e+00
Optimizer summary
Optimizer - time: 0.48
Interior-point - iterations : 6 time: 0.47
Basis identification - time: 0.00
Primal - iterations : 0 time: 0.00
Dual - iterations : 0 time: 0.00
Clean primal - iterations : 0 time: 0.00
Clean dual - iterations : 0 time: 0.00
Simplex - time: 0.00
Primal simplex - iterations : 0 time: 0.00
Dual simplex - iterations : 0 time: 0.00
Mixed integer - relaxations: 0 time: 0.00


Status: Failed
Optimal value (cvx_optval): NaN

It would be nice if you could dump the problem to a file with cvx_solver_settings('write', 'dump.task.gz') and send that file to MOSEK support. Without the data it is hard to give any definitive answer anyway.
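For example, a minimal sketch of where that call can go, assuming the same data (n, U1_hat, F1, X1, rho1, rho2, rho3) as in the original model; placing it inside the model applies the setting only to this solve, and 'dump.task.gz' is just an example file name:

cvx_begin
    cvx_solver mosek                               % MOSEK is already the selected solver here
    cvx_solver_settings('write', 'dump.task.gz')   % ask MOSEK to write out the task file it receives
    variable Y(n+1,n+1)
    variable G(1,n+1)
    minimize (trace(U1_hat'*(Y-F1*G)) + (rho2/2)*square_pos(norm(X1-Y,'fro')) + (rho1/2)*square_pos(norm(Y-F1*G,'fro')) + (rho3/2)*square_pos(norm(F1-G','fro')));
    subject to
        -1 <= Y <= 1;
        -1 <= G <= 1;
cvx_end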

One thing you can try is instead of

square_pos(norm(M, 'fro'))

everywhere use the more direct equivalent

sum_squares(M)

(or whatever is the CVX syntax to get the sum of squares of elements of M).

That's sum(sum_square(...)) in the case of a matrix argument.
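Applied to the model in the question, that substitution would look roughly like this (a sketch using the OP's variable names; sum(sum_square(M)) is the total sum of squares of the entries of M, i.e. the squared Frobenius norm, so it equals square_pos(norm(M,'fro')) in value):

minimize (trace(U1_hat'*(Y-F1*G)) ...
    + (rho2/2)*sum(sum_square(X1-Y)) ...       % replaces (rho2/2)*square_pos(norm(X1-Y,'fro'))
    + (rho1/2)*sum(sum_square(Y-F1*G)) ...     % replaces (rho1/2)*square_pos(norm(Y-F1*G,'fro'))
    + (rho3/2)*sum(sum_square(F1-G')));        % F1-G' is a vector, so the outer sum is a no-op here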

Hmm, I wonder whether the problem sent by CVX to the solver (Mosek) is any different for sum(sum_square(...)) vs. square_pos(norm(...,'fro')). Perhaps the O.P. can try it both ways and send the task files to Mosek support so @Michal_Adamaszek can see if there are any differences.

help sum_square

sum_square   Sum of squares.
    For vectors, sum_square(X) is the sum of the squares of the elements of
    the vector; i.e., SUM(X.^2).

    For matrices, sum_square(X) is a row vector containing the application
    of sum_square to each column. For N-D arrays, the sum_square operation
    is applied to the first non-singleton dimension of X.

    sum_square(X,DIM) takes the sum along the dimension DIM of X.

    Disciplined convex programming information:
        If X is real, then sum_square(X,...) is convex and nonmonotonic in
        X. If X is complex, then sum_square(X,...) is neither convex nor
        concave. Thus, when used in CVX expressions, X must be affine. DIM
        must be constant.
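On plain numeric data the column-wise behaviour described above is easy to check in MATLAB; the total sum of squares is the squared Frobenius norm, which is the quantity the original objective builds via square_pos(norm(.,'fro')):

M = [1 2; 3 4];      % small example matrix, just for illustration
sum(M.^2)            % column-wise sums of squares, [10 20]; this is what sum_square(M) computes
sum(sum(M.^2))       % total sum of squares, 30
norm(M,'fro')^2      % the same value, 30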

Okay, I will try. Thank you.

Thank you, Mark, I will try.