I am solving an integer programming problem in CVX using MOSEK. I declare a variable to take integer values.
But the solver gives me output with non-integer values. For example:
if the variable is supposed to take the value 1, format long shows a value like 0.999999951;
if the variable is supposed to take the value 2, format long shows a value like 1.999999981;
or if the variable is supposed to take the value 0, format long shows a value like 2.999999e-9, etc.
This is normal behavior for a (mixed-)integer programming solver, which works in floating-point arithmetic, even though you might think you are specifying exact integer arithmetic for variables declared integer or binary.
You can round the solution x after CVX exits by using round(x). The values you show are so close to integers that rounding should be perfectly acceptable to you.
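In MATLAB that is just x = round(x) after cvx_end; a quick Python sketch, using the near-integer values from the question, illustrates how rounding recovers the intended integers:

```python
# Near-integer values like those the MIP solver returned (from the question)
x = [0.999999951, 1.999999981, 2.999999e-9]

# Snap each entry to the nearest integer
x_rounded = [round(v) for v in x]
print(x_rounded)  # [1, 2, 0]
```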
If you really are not satisfied just doing that, you can try adjusting the following parameters using cvx_solver_settings. As you can see from the accepted (i.e., allowable) values for these parameters, 1e-9 is the smallest tolerance you can specify, and you are already almost achieving that anyway, so I doubt there is much “improvement” you can expect in this case. Nevertheless, if you wish to minimize rounding error, you can try setting both parameters to 1e-9. Making the parameters smaller than their default values may increase run time, but I defer to Mosek personnel for discussion of the considerations in changing these values.
I would be careful about changing optimizer parameters. That usually causes more problems than it solves.
If you cannot round those values to integers when needed, then most likely something else is wrong.
I mean the rounding is just a tiny perturbation of the solution, and if the result is sensitive to a tiny perturbation then you have an unstable problem.
Complaining about near-integer values is common among users who do not understand that computations are done in finite-precision floating-point arithmetic, and hence everything is an approximation in any case.
The one case I can think of where it makes sense to tighten the integer feasibility tolerance is Big M modeling applied to an intrinsically poorly scaled problem having large-magnitude variable bounds. This can result in a Big M "logic" error if the integer feasibility tolerance is not small enough to accommodate the magnitude of the bounds: the Big M logic needs to work correctly even accounting for the worst-case departure from integrality allowed by the integer feasibility tolerance. I have seen several cases where people used the default integer feasibility tolerance, and the solution from the solver violated the intended logical constraint, even though the solution was within numerical tolerance of the Big M constraints as written.
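A small numeric sketch of that failure mode (the M and tolerance values here are illustrative assumptions, not from any particular model): with a constraint x <= M*z and binary z intended to force x = 0 whenever z = 0, a z that is merely within integer feasibility tolerance of 0 still lets x be substantially positive.

```python
M = 1e6          # naive Big M value
int_tol = 1e-5   # typical default integer feasibility tolerance

# The solver may report z = int_tol and still count it as the integer 0
z_reported = int_tol

# Intended logic: z = 0 should force x = 0 via x <= M*z,
# but the constraint as written only forces x <= M*z_reported
x_allowed = M * z_reported
print(x_allowed)  # 10.0 -- the intended "x = 0" logic is badly violated
```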
In light of this, I don’t understand why Mixed Integer Programming vendors usually choose 1e-5 or 1e-6 as the default integer feasibility tolerance. Would it kill you to make the default value 1e-8, and at least buy “protection” for variable bound magnitudes well into the millions (many naive users choose one million as their Big M value)? That being said, I follow Johan Lofberg’s maxim of a just-big-enough M, which I specify per constraint, and then I make sure the integer feasibility tolerance is small enough that the logic works correctly even when the departure from integrality is the maximum allowed by the tolerance.
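The arithmetic behind that “protection” claim can be tabulated (illustrative values only): the worst-case violation of an intended x = 0 implication is roughly M times the integer feasibility tolerance, so a 1e-8 tolerance keeps the leak small even for M in the millions.

```python
# Worst-case leak through x <= M*z when binary z is within tol of 0
for M in (1e3, 1e6):
    for tol in (1e-5, 1e-6, 1e-8):
        print(f"M={M:g}, tol={tol:g}: worst-case x = {M * tol:.0e}")
```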
Also, @Erling, don’t tell anyone at Mosek, but I have found that changing MSK_DPAR_INTPNT_CO_TOL_PFEAS from its default value of 1e-8 to as low as 1e-12 (I’ve even experimented with smaller values) can be effective, at least on the problems I was solving, in getting the minimum eigenvalue of an SDP constraint closer to zero (or multiple eigenvalues closer to zero, as the case may be) when the true solution has a minimum eigenvalue of exactly zero. Whether or not this parameter is changed, the eigenvalues below threshold can be adjusted to exactly zero. However, if the solver solution’s eigenvalue is not sufficiently close to zero, this adjustment can have significant consequences elsewhere. For instance, when calling Mosek to solve SDP sub-problems, the achievable solution accuracy of the outer algorithm spawning the SDP sub-problems might be orders of magnitude worse than the size of the eigenvalue adjustment, which can be eliminated or substantially reduced by tightening the tolerance.
Perhaps this comes at some increase in run time, or increases the chance that optimality cannot be achieved, but on the problems where I did this, the optimizer worked just fine and declared optimality, and those minimum eigenvalues were much closer to zero. :) Perhaps if my model were very poorly conditioned, or not as well scaled and formulated, this parameter change would not work out as well. That being said, on many problems there is no crucial need for "zero" eigenvalues to be less than 1e-8 in magnitude, so unlike integer feasibility tolerance, I am not advocating changing the default value of this parameter.
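The eigenvalue adjustment described above can be sketched in Python/NumPy (a generic post-processing step, not Mosek-specific; the threshold value is an assumption): project a nearly-PSD solver output back to the PSD cone by setting tiny eigenvalues to exactly zero.

```python
import numpy as np

def clip_to_psd(X, tol=1e-8):
    """Set eigenvalues below tol to exactly zero and rebuild the matrix."""
    X = (X + X.T) / 2                # symmetrize against round-off
    w, V = np.linalg.eigh(X)
    w[w < tol] = 0.0                 # adjust small/negative eigenvalues to zero
    return V @ np.diag(w) @ V.T

# A solver-style output whose true minimum eigenvalue is exactly zero,
# perturbed by tiny floating-point noise (values are made up for illustration)
X = np.array([[1.0, 1.0], [1.0, 1.0]]) + np.array([[1e-9, 0.0], [0.0, -1e-9]])
X_clean = clip_to_psd(X)
print(np.linalg.eigvalsh(X_clean).min())  # ~0 instead of slightly negative
```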