Hi, I’ve got an objective function similar to this:
minimize e*E + t*T*(1-O) + c*O. Here "e", "t" and "c" are constants (vectors, actually), "T" and "E" are non-negative variables, and "O" is binary. Is there a way to solve this problem with CVX, given that I am multiplying two variables in the objective function?
CVX does not allow optimization variables to be multiplied as you have.
However, given that one of the variables being multiplied is binary, perhaps there is a reformulation which complies with the MIDCP rules. We cannot determine that from the little snippet you have shown, though. What does the binary variable represent, and what are you trying to accomplish (model) by multiplying it by another optimization variable?
So the problem is to minimize the cost function. "t" is a penalty cost per unit and "c" represents the maximum penalty, but only one of them should be effective at a time. If we have a per-unit penalty cost "t" for some specific j, then there is no maximum penalty, so "c" should be 0, and vice versa. In other words, based on my constraints, for any specific j: when Oj = 1, the cost "cj" is effective and "tj" should be ineffective; when Oj = 0, "tj" is effective and "cj" ineffective. Am I making any sense?
One more question, please. Is it acceptable to put a variable in an IF statement in CVX? For instance, let's say "X" is a binary variable. Can I say: if Xij == 0, then Aij = Bij + Cij, assuming A and B are variables and C is a parameter?
You are not allowed to use a CVX variable in an “if”. Use techniques along the lines of the linked document. If you work your way through that document and try to understand why those formulations accomplish what the document claims, you should be in a good position to effectively use binary variables. I leave you to do the work.
See, for instance, section 2.8 in the linked document, which shows how to handle the product of a binary variable with a continuous variable by using the Big-M method.
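A minimal CVX sketch of that Big-M reformulation, applied to the objective from the original question. Everything here is illustrative: the dimension n, the data vectors e, t, c, and the bound M are assumed to be given, and M must be a valid upper bound on every component of T for the reformulation to be exact.

```matlab
% Sketch only: n, e, t, c and a valid upper bound M on T are assumed given.
cvx_begin
    variable E(n) nonnegative
    variable T(n) nonnegative
    variable O(n) binary
    variable Z(n) nonnegative   % Z plays the role of T.*(1-O)
    minimize( e'*E + t'*Z + c'*O )
    subject to
        Z <= T;
        Z <= M*(1-O);           % O(j) = 1 forces Z(j) = 0
        Z >= T - M*O;           % O(j) = 0 forces Z(j) = T(j)
        % ... the problem-specific constraints on E, T, O go here ...
cvx_end
```

The same Big-M pattern handles the IF question above: the implication X(i,j) == 0 => A(i,j) == B(i,j) + C(i,j) can be written as the pair of constraints -M*X <= A - B - C <= M*X, again with M a valid bound on |A - B - C|.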
Thank you. The Big-M worked here. I've got another question. Logically, low precision should lead to less computational time, and the best precision should take the most computational time. I tested this on different sizes of the same problem by running it over 100 times and averaging the time. For bigger problems, low precision takes less computational time than best, high, and default, respectively. But for smaller problems, low precision does not necessarily give the least time, and in some cases it takes the most computational time compared to best or high.
That depends on the solver as well as the problem being solved, and might be different for continuous vs. mixed-integer problems.
For mixed integer problems, the specified absolute or relative gap may have a large impact on run time, and is highly dependent on the exact problem being solved.
For continuous problems with second-order cone solvers, it is often the case that the exact solver tolerance being specified makes little difference to run time, and the solver finishes when it has done about as well as it can. For a first-order solver such as SCS, run time is usually strongly affected by solver tolerance, so loosening the requested tolerance may speed up the solution considerably.
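For reference, precision in CVX is set with the cvx_precision command inside the model (accepted settings include low, medium, default, high, and best). A minimal sketch, assuming some data A, b and dimension n are already defined:

```matlab
% Sketch: A, b, n are assumed given.
cvx_begin
    cvx_precision low        % looser tolerance; may reduce run time
    variable x(n)
    minimize( norm(A*x - b) )
cvx_end
```

For mixed-integer problems, the solver's own gap tolerances (set via cvx_solver_settings for the particular solver) typically matter more than cvx_precision, which is consistent with the timing behavior described above.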