What’s strange is that even if I comment out all the constraints, CVX still returns +Inf.
Edit: new data can be downloaded here.
I regenerated the data at a much smaller scale and converted the original problem into a one-dimensional linear regression problem: Ax = b (A: 288×4086, b: 288×1). This time, all data were normalized to the range -1 to 1. But CVX still cannot compute a result, even with no constraints.
I’d like to know what’s wrong with my CVX.
Mosek reported dual infeasibility. Because Mosek was provided the dual, CVX therefore determined that the problem is (primal) infeasible.
sum(Sum1) = sum(Sum2), so indeed the problem is actually feasible. The value they share, 3.806119691049674e+06, is rather large and not a numerically nice number, but I don’t think that’s what’s causing the problem here.
At this point, I would like to congratulate you for setting a new record on this forum. The non-zero entries in K2 range from 1 (which is fine and dandy) all the way down to 9.88e-324, with multiple entries at each order of magnitude in between. That is a span of 323 orders of magnitude. (Side note: I don’t understand how this can be smaller than 2.225073858507201e-308; nevertheless, there it is.) cond(K2) = 5.6e16, which is beyond the ability of double precision to handle (actually, cond(K2) = 2e17 when calculated in quad precision).
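As an aside on that side note: IEEE 754 doubles below realmin ≈ 2.2250738585072014e-308 are representable as subnormal (denormal) values, down to about 4.9e-324, which is presumably where the 9.88e-324 entries come from. A quick sketch in Python (standing in for MATLAB here):

```python
import sys

# Smallest positive *normal* double (MATLAB's realmin):
print(sys.float_info.min)    # 2.2250738585072014e-308

# IEEE 754 also has subnormal (denormal) doubles below realmin,
# down to about 4.9e-324, stored with reduced precision:
print(5e-324 > 0.0)          # True: the smallest positive subnormal
print(9.88e-324 > 0.0)       # True: rounds to two subnormal steps above zero
```

So 9.88e-324 is a legitimate double, but it carries almost no significant digits.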
That is likely to cause all sorts of problems with double precision solvers, and quad precision as well. CVX’s reformulation to epigraph form places that matrix in the constraints. Mosek is a robust solver, but 323 orders of magnitude vastly exceeds what it can handle. Fortunately, Mosek warns about near-zero elements.
SeDuMi reported infeasibility after 34 iterations.
is reported by Mosek to be infeasible. Mosek also issues warnings about near-zero elements. The matrix A has elements as small in magnitude as 3.782361859452318e-18, which is terrible. The maximum magnitude element of A is 2.205118450483101e+02. So the elements of A span 20 orders of magnitude: not good. Or as Gene Golub would have more emphatically labeled it in the Statistical Computing class I took from him: NFG. (I don’t know what he would have labeled the 323 orders of magnitude. The only labels he ever used in class were E, G, NG, NFG.)
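For anyone who wants to run the same diagnostic on their own data, here is a small sketch (in Python rather than MATLAB) that computes the span of orders of magnitude of the nonzero entries, checked against the two extreme magnitudes quoted above:

```python
import math

def magnitude_span(entries):
    """Orders of magnitude spanned by the nonzero entries."""
    mags = [abs(v) for v in entries if v != 0]
    lo, hi = min(mags), max(mags)
    return math.log10(hi) - math.log10(lo), lo, hi

# The two extreme magnitudes quoted above for the matrix A,
# plus a zero entry to show that zeros are excluded:
span, lo, hi = magnitude_span(
    [2.205118450483101e+02, 3.782361859452318e-18, 1.0, 0.0])
print(round(span, 1))   # ≈ 19.8: the "20 orders of magnitude"
```

Anything much above 8 or so orders of magnitude is asking for trouble with a double precision solver.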
is reported by Mosek to be dual infeasible. Mosek also issues warnings about near-zero elements.
I calculated the unconstrained least squares solution of A*x = b by SVD (which is what pinv uses) and evaluated the norm of its residual:
x_svd = pinv(A)*b;
norm(A*x_svd - b)
This appears to show that A*x = b is not underdetermined, but overdetermined, so that A*x = b does not have an exact solution, and therefore CVX/Mosek’s determination that A*x == b is infeasible is correct.
However, that appearance is INCORRECT!! I redid the SVD calculation and residual norm evaluation using quad precision (34 digits) in the Advanpix Multiprecision Computing Toolbox; the resulting norm = 1.5e-24. So it now appears that Ax = b really is consistent. I then redid the SVD calculation in double quad precision (70 digits) and got a residual norm of 1.59e-60, reaffirming the consistency of Ax = b.
cond(A) = 9e12 as evaluated in both double precision and quad precision. This is too badly conditioned for Mosek to handle. It is too badly conditioned for double precision SVD to handle accurately. However, quad precision can handle it.
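To illustrate why higher (or exact) precision rescues a consistent but badly conditioned system where double precision struggles, here is a self-contained sketch in Python using only the standard library; the 10×10 Hilbert matrix (cond ≈ 1.6e13, in the same neighborhood as cond(A) = 9e12) stands in for A:

```python
from fractions import Fraction

n = 10
# Hilbert matrix: famously ill-conditioned (cond ~ 1.6e13 for n = 10).
A = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]
x_true = [Fraction(1) for _ in range(n)]
b = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(n)]  # consistent by construction

def solve(M, rhs):
    """Gaussian elimination with partial pivoting; works on floats or Fractions."""
    M = [row[:] for row in M]
    rhs = rhs[:]
    m = len(rhs)
    for k in range(m):
        p = max(range(k, m), key=lambda r: abs(M[r][k]))  # partial pivoting
        M[k], M[p] = M[p], M[k]
        rhs[k], rhs[p] = rhs[p], rhs[k]
        for r in range(k + 1, m):
            f = M[r][k] / M[k][k]
            for c in range(k, m):
                M[r][c] -= f * M[k][c]
            rhs[r] -= f * rhs[k]
    x = [0] * m
    for k in range(m - 1, -1, -1):
        x[k] = (rhs[k] - sum(M[k][c] * x[c] for c in range(k + 1, m))) / M[k][k]
    return x

# Same algorithm, two arithmetics:
x_float = solve([[float(v) for v in row] for row in A], [float(v) for v in b])
x_exact = solve(A, b)

err_float = max(abs(v - 1.0) for v in x_float)
print(err_float)           # substantial error: conditioning eats the digits
print(x_exact == x_true)   # True: exact arithmetic recovers x exactly
```

The double precision solve loses roughly cond(A)·eps ≈ 13 of its 16 digits, while exact rational arithmetic (playing the role of quad precision here) recovers the solution perfectly. Of course, exact arithmetic doesn’t scale to a 288×4086 problem; quad precision SVD is the practical middle ground.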
That depends on exactly what you consider to be “your problem”.
With the data for your Tikhonov formulation, essentially nothing will be suitable. It is artificially generated in a way which makes it atrocious. Perhaps “actual” data (whatever that may be) is much easier for software to handle, and CVX and the solvers it calls could do a fine job.
The A*x=b formulation for which A is horribly conditioned, but there is an x which solves it exactly, can be handled by SVD, or probably QR, as well as backsolve, IF a high enough precision is used. But did you start with the x and artificially generate b as A*x? If so, being able to recover x is perhaps not of much practical utility for application to “actual” data.
I don’t understand what you’re really trying to do. Is the atrocious artificial data you have supplied just for development and test purposes, or is this the “actual” data you really want to deal with?