I’m solving a least-squares regression problem with equality constraints. CVX returns NaN for all values; however, when I solve it with a closed-form approach, there is a solution to this problem.
The code is as follows:
x :: (53*1)
LHS_training :: (9000*53)
Library_constraint :: (9000*53)
Any advice?
cvx_begin quiet
    variable x(n)                      % n = 53 coefficients
    % least-squares fit to the training data
    minimize( norm(LHS_training - Library_training*x) );
    subject to
        % exact equality constraints
        Library_constraint*x == LHS_constraint;
        if k > 1
            x(smallinds) == 0;
        end
cvx_end
The first piece of advice is to not use the quiet option. That way you can see the CVX and solver output. Perhaps the problem was reported as infeasible? If so, that should be in the output when you remove quiet.
If the problem is reported as infeasible, you can follow the advice at Debugging infeasible models - YALMIP, all of which except for section 1 also applies to CVX. In particular, try solving the problem without the objective. Do you expect those constraints to be satisfied exactly, as opposed to in some kind of least squares or other approximate sense? Because that is what you are requiring in your CVX code. You haven’t told us how you calculated the closed-form solution - are you sure those constraints are actually satisfied with your supposed solution?
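Here is a minimal sketch of that feasibility-only check, using the same variable names as in your post (no objective, constraints only):

% Feasibility check: drop the objective and keep only the constraints.
% If CVX reports Infeasible here, the equality constraints themselves are
% the problem, independent of the least-squares objective.
cvx_begin
    variable x(n)
    subject to
        Library_constraint*x == LHS_constraint;
        if k > 1
            x(smallinds) == 0;
        end
cvx_end
cvx_status   % should be Solved if the constraints are mutually consistent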
Hopefully you are following the advice given to you previously to use CVX 2.2 rather than CVX 3.0beta, which due to bugs is known to not always handle constraints correctly.
If you expect the constraints to hold only approximately, you can relax the equality constraint to
norm(Library_constraint*x - LHS_constraint) <= upper_limit
where upper_limit is a value of your choosing. You could solve this “parametrically” for several different values of upper_limit. You can also choose which norm to use (two-norm, as I show, inf, or 1, or any “p” >= 1).
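A sketch of that parametric sweep; the upper_limit values below are purely illustrative, pick your own:

% Sweep over several tolerances for the relaxed constraint.
upper_limits = [1e-6 1e-4 1e-2 1];   % illustrative values only
for upper_limit = upper_limits
    cvx_begin
        variable x(n)
        minimize( norm(LHS_training - Library_training*x) )
        subject to
            norm(Library_constraint*x - LHS_constraint) <= upper_limit;
    cvx_end
    fprintf('upper_limit = %g, status = %s, objective = %g\n', ...
        upper_limit, cvx_status, cvx_optval);
end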
Alternatively, you could add multiplier*norm(Library_constraint*x - LHS_constraint) to the objective function as a penalty term. You can do this parametrically for multiple values of multiplier, and choose which “p” to use in the norm.
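A sketch of that penalty formulation; the multiplier value is illustrative and should be swept in practice:

% Move the constraint into the objective as a penalty term.
multiplier = 10;   % illustrative value; try several values of multiplier
cvx_begin
    variable x(n)
    minimize( norm(LHS_training - Library_training*x) + ...
              multiplier*norm(Library_constraint*x - LHS_constraint) )
cvx_end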
Have you evaluated norm(Library_constraint*x - LHS_constraint) for your closed-form solution? Does it also satisfy
if k>1
x(smallinds)==0;
end
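You can check both directly in MATLAB; a sketch, where x_closed_form is a placeholder name for whatever holds your closed-form solution:

% Constraint residual of the closed-form solution (x_closed_form is hypothetical).
residual = norm(Library_constraint*x_closed_form - LHS_constraint)
% And the zero pattern required when k > 1:
if k > 1
    max(abs(x_closed_form(smallinds)))   % should be (numerically) zero
end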
It would not be a bad idea to improve the conditioning of the input data, if you can.
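One common way to do that (a generic sketch, not something specific to your data) is to rescale the columns of the library matrices before solving, then map the solution back:

% Scale each column of the library to unit 2-norm; undo the scaling afterward.
col_scale = sqrt(sum(Library_training.^2, 1));   % 2-norm of each column (1-by-53)
D = diag(1 ./ col_scale);
Library_training_scaled   = Library_training   * D;
Library_constraint_scaled = Library_constraint * D;
% Solve with the scaled matrices, then recover the original coefficients:
% x_original = D * x_scaled;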
@Mark_L_Stone
I’m trying to solve for this objective
minimize( (1/2)* norm( (LHS_training-Library_training*x).^2 ) );
but an error appears:
Error using cvx/norm
Based on the error message, the argument of norm is convex, not affine as CVX requires. You haven’t shown the complete code, so I don’t know what the offending item is, nor do I know whether your problem is actually convex and can be formulated in accordance with CVX’s DCP rules. Presuming x is an optimization (CVX) variable, it must be the case, given the error message, that one or both of LHS_training and Library_training are not input data, and were either declared as CVX variables, or are CVX expressions in terms of CVX variables. Perhaps your code has an erroneous variable declaration?
If you succeed in getting the argument of norm to be affine, you need to use square_pos rather than ^2 to square the norm. But because it is the only term in the objective function, you can just not square it, and you will get the same argmin, and the problem will also be more numerically favorable for the solver (i.e., more reliably solved).
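In other words, once LHS_training and Library_training are plain numeric data, either of the following drop-in replacements for the objective line is DCP-compliant; the first (unsquared) form is preferred:

% Preferred: same argmin as the squared version, better numerically.
minimize( norm( LHS_training - Library_training*x ) )
% If you insist on the squared objective, square the norm from the outside:
minimize( (1/2) * square_pos( norm( LHS_training - Library_training*x ) ) )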
help cvx/norm
Disciplined convex programming information:
norm is convex, except when P<1, so an error will result if
these non-convex "norms" are used within CVX expressions. norm
is nonmonotonic, so its input must be affine.
You have not provided a complete reproducible code which generates the error message. I don’t see how this code would produce the error message you showed.
I suggest you follow the advice, provided earlier in this topic, to not use the quiet option.
quiet only matters if CVX calls the solver. Given that error message, CVX never called the solver. But if it ever does, you should look at the solver and CVX output. If you are calling CVX in a loop, and the failure occurs on some iteration after the first, you should be looking at the results, if they become input to the next iteration.
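A sketch of the kind of per-iteration check that helps in that situation (k is used here as the loop index, matching your if k>1 condition):

% Inside your loop, immediately after cvx_end, inspect what was returned.
if ~strcmp(cvx_status, 'Solved')
    fprintf('Iteration %d: cvx_status = %s\n', k, cvx_status);
    % x may be NaN here; do not feed it into the next iteration unchecked.
end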
You should provide a complete program, with all input data, starting from the beginning of a MATLAB session, if you expect to receive additional help on this. Otherwise, the forum readers have no idea what’s going on.
You FINALLY showed the code which generated the error message. The error message is due to your having square_pos in the wrong place, which, even if it were allowed, would produce the wrong answer (as, for instance, if applied to double precision data). square_pos should be outside of norm, not inside it. But as I stated previously, you would be better off not using square_pos at all, i.e., don’t square the norm.
In the future, you’ll find it easier to get effective help if you post the code which actually generates the error or output you are seeking help with, rather than some different code which didn’t generate it.