CVX/MOSEK Cone Constraint Tolerance

Hello to all!

I am solving a mixed-integer problem through CVX/MOSEK. The model involves a series of cone constraints, expressed as norm([…]) <= affine().
Is it possible to control the precision of a particular cone constraint?
Specifically, after CVX finishes, I print the values of the expression {norm([…]) - affine()} and I get some values > 0, on the order of 1e-3 or 1e-4. Could I tighten this tolerance to 1e-5 or 1e-6, for example?
I have also set “cvx_precision best” with no luck, so I wonder whether there is a MOSEK parameter I should change rather than a CVX one.
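
For reference, the closest mechanism I could find is CVX’s cvx_solver_settings for forwarding parameters to MOSEK (CVX 2.x syntax). Here is a minimal sketch on a toy model; I am not sure which MOSEK parameter, if any, controls this, and the one below (the conic interior-point primal feasibility tolerance) should be checked against the MOSEK manual:

cvx_begin
    cvx_solver mosek
    cvx_solver_settings('MSK_DPAR_INTPNT_CO_TOL_PFEAS', 1e-8)  % tighter primal feasibility tolerance (sketch)
    variable x(2)
    minimize( sum(x) )
    subject to
        norm(x) <= 1;
cvx_end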

Many thanks,

a.

EDIT:
Following Michael’s suggestion, I provide the exact model for the MI-SOCP problem I am referring to.

% problem data: H(K,M) complex; P_max real, oh_max real, t real
cvx_begin
    cvx_solver mosek

    variable W(M,K) complex
    variable a(M,K) binary

    minimize( sum(sum(a)) )   % objective
    subject to
        % C1: cone constraints (active)
        for k=1:K
            norm([1;(H(k,:)*W).']) <= sqrt(1+1/t)*real(H(k,:)*W(:,k));
        end
        % C2: cone constraints
        for m=1:M
            norm(W(m,:).') <= sqrt(P_max);
        end
        % C3: integer/cone constraints
        for m=1:M
            for k=1:K
                abs(W(m,k)) <= a(m,k)*sqrt(P_max);
            end
        end
        % C4: linear constraint
        sum(sum(a)) <= oh_max;
cvx_end
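
For concreteness, the residuals I mentioned at the top are computed after cvx_end, once W holds the numerical solution returned by CVX; a minimal sketch:

% after cvx_end, W is the numerical solution returned by CVX
for k=1:K
    residual = norm([1;(H(k,:)*W).']) - sqrt(1+1/t)*real(H(k,:)*W(:,k));
    fprintf('C1 constraint %2d: residual = %g\n', k, residual);  % residual > 0 means the constraint is violated
end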

The problems arise due to the discrete nature of the a_{m,k} variables. A possible work-around recommended by Michael is to first search for a satisfactory a matrix, then fix it, and finally re-solve the problem with the new objective minimize(norm(W,'fro')) (several elements of W will be zero because of the fixed a).
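
A minimal sketch of that second stage, assuming a_fixed is the numerical binary matrix obtained from the first solve:

cvx_begin
    cvx_solver mosek
    variable W(M,K) complex
    minimize( norm(W,'fro') )
    subject to
        for k=1:K
            norm([1;(H(k,:)*W).']) <= sqrt(1+1/t)*real(H(k,:)*W(:,k));
        end
        for m=1:M
            norm(W(m,:).') <= sqrt(P_max);
        end
        abs(W) <= a_fixed*sqrt(P_max);  % a_fixed is data now, so no binary variables remain
        % C4 holds automatically, since sum(sum(a_fixed)) <= oh_max by construction
cvx_end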

Regards,

a.

While you await a full answer to your question, could you kludge it by tightening the constraint a little, subtracting 1e-3 or so from the right-hand side, so that the violation of your true constraint then falls within your acceptable tolerance? Of course it would be nice to know how much tolerance the solver is allowing (and how “constant” that tolerance is), whether there are any hidden side effects of such an approach, and what the consequences of tightening the constraint too much would be.
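
In CVX terms the kludge would look something like this for the C1 constraints (placed inside the cvx_begin…cvx_end block in place of the original loop); the 1e-3 margin is an arbitrary illustration, not a recommended value:

margin = 1e-3;  % ad-hoc tightening of the right-hand side, to be tuned
for k=1:K
    norm([1;(H(k,:)*W).']) <= sqrt(1+1/t)*real(H(k,:)*W(:,k)) - margin;
end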

I would have made this a comment rather than an answer, but I don’t have sufficient karma to comment. This seems backwards from what it should be, since, if anything, more “credibility” should be required for an answer, which is supposed to be authoritative, than for a comment. I had some useful comments, but not authoritative answers, to make on some previous threads, but did not post at all due to this rule.

First of all, Mark, please do not hesitate to post answers at least until your karma is high enough. I do not know if it will be possible for me to modify the scoring rules here. And frankly we haven’t been making much use of the karma and voting system around here (although I wish we would).

EDIT: After further discussion with the original poster, I have come to the conclusion that both CVX and MOSEK are doing the right thing. His conic constraints are “loose” because his objective function involves discrete variables, so the underlying convex relaxation is not necessarily tight, even at the global optimum. Furthermore, he has changed an internal MOSEK setting so that the integer solution obtained is not at the global optimum. So the model is truly “loose” at the returned point; that is, none of the constraints are tight.

This is one of the artifacts of doing mixed-integer programming, and it’s something that we’re all going to have to get used to—mixed-integer problems are new to CVX! I thank the original poster for his question, and I’m glad we have an open forum here to discuss this.

If the original poster is willing, I think it would be useful if he would include his full model here—perhaps incorporating the improvements I suggested to him so that it is easier to read.

MORE EDIT: Since the poster was kind enough to provide his model for our inspection, I wanted to share my own suggested changes to it. It turns out that with some careful function choices you can avoid those for loops, which is always a good thing! We can avoid them thanks to the norms function, which we created especially for CVX.

cvx_begin
    variable W(M,K) complex
    variable a(M,K) binary
    minimize(sum(sum(a)))
    subject to
         norms([ones(size(H,1),1),H*W],2,2) <= sqrt(1+1/t)*real(diag(H*W));
         norms(W,2,2) <= sqrt(P_max);
         abs(W) <= a * sqrt(P_max);
         sum(sum(a)) <= oh_max;
cvx_end

At first glance, the final inequality sum(sum(a)) <= oh_max may appear redundant, since you are already minimizing sum(sum(a)). If you are solving to full optimality, you can just delete that constraint and then check whether sum(sum(a)) <= oh_max is satisfied at the optimum. But in fact, with mixed-integer programming, one often chooses to terminate the optimization before the full global optimum is found. In that case, having this constraint is actually useful: it ensures that the solver does not terminate before it has found a solution at least as good as oh_max (or before it can prove that this bound cannot be attained).
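
As an illustration of that early-termination scenario, the relevant MOSEK mixed-integer parameters can be forwarded via cvx_solver_settings; a sketch, with parameter names that should be double-checked against the MOSEK manual:

% placed between cvx_begin and cvx_end, after "cvx_solver mosek":
% accept any integer solution within 1% of the proven bound, or stop after 60 seconds
cvx_solver_settings('MSK_DPAR_MIO_TOL_REL_GAP', 1e-2, ...
                    'MSK_DPAR_MIO_MAX_TIME', 60);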

Many thanks for your suggestions!

I am actually working with MOSEK now, since my problem involves integer variables as well. GUROBI seems to be very slow compared to MOSEK so far (although, to be honest, I have not devoted significant effort to testing yet). I also intend to try CPLEX, which is (?) the best known of the three.
Cheers,

a.