Hello CVX users, convex optimization experts, and anyone willing to help,
For a publication, I am being asked to compare the absolute execution time of a problem solved with convex optimization (which I solved with CVX using the MOSEK and SDPT3 solvers) against the same problem solved by brute force (i.e. exhaustive search).
Now, it’s obvious that a convex solver is in general much faster (it finishes in polynomial time) than exhaustive search (which finishes in exponential time); however, the reviewers, who are not from a mathematical/optimization background, want a “case study comparison”.
The problem I am running into conceptually is finding a fair comparison. When solving with CVX, I define, in Matlab, a continuous variable, e.g. 0.5 < x < 1.5. If I am solving the same problem by exhaustive search, I have to decide on a discretization: into how many values am I going to divide the interval from 0.5 to 1.5 for brute-force evaluation of x? This obviously has a major impact on the computation time.
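To make the dependence on the discretization concrete, here is a minimal sketch (in Python rather than Matlab, purely for illustration; the objective function and all names are hypothetical) of why the grid choice dominates the brute-force timing: with N sample points per variable and d variables, exhaustive search performs N^d objective evaluations.

```python
import itertools

# Hypothetical objective: any function of d continuous variables would do here.
def objective(point):
    return sum((v - 1.0) ** 2 for v in point)

def brute_force_min(lo, hi, n_points, d):
    """Exhaustively evaluate the objective on a uniform grid with
    n_points samples per variable, over d variables in [lo, hi]."""
    step = (hi - lo) / (n_points - 1)
    grid = [lo + i * step for i in range(n_points)]
    best = min(itertools.product(grid, repeat=d), key=objective)
    return best, n_points ** d  # grid size = n_points^d evaluations

best, n_evals = brute_force_min(0.5, 1.5, 11, 3)
print(n_evals)  # 1331, i.e. 11^3 objective evaluations
```

Doubling the resolution per variable multiplies the runtime by 2^d, so the reported brute-force time is essentially a free parameter unless the discretization is pinned down by some principled floor.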
I am trying to find a “floor” on the discretization, which got me thinking: since computers have no actual continuous numbers, only discrete ones, what is the actual numerical precision when solving problems with CVX? If I define a “continuous” variable, how many actual values can that variable take?
- Defined by MATLAB?
- Defined by CVX, or
- Defined by the actual solver used?
Is it actually machine epsilon (i.e. the round-off error for floating-point numbers on my computer), or something larger?
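For reference, the floating-point “floor” itself is easy to compute. The following sketch (again Python for self-containedness, not CVX-specific) counts the distinct IEEE-754 double-precision values in my example interval [0.5, 1.5) by reinterpreting the bit patterns as integers, and prints machine epsilon:

```python
import struct
import sys

def float_to_ordinal(x):
    # Reinterpret the IEEE-754 bits of a positive double as an integer;
    # consecutive integers correspond to adjacent representable doubles.
    return struct.unpack('<q', struct.pack('<d', x))[0]

# Number of distinct double-precision values in the half-open interval [0.5, 1.5)
n_values = float_to_ordinal(1.5) - float_to_ordinal(0.5)
print(n_values)  # 6755399441055744, i.e. about 6.8e15

# Machine epsilon: the gap between 1.0 and the next representable double
print(sys.float_info.epsilon)  # 2.220446049250313e-16
```

So if machine epsilon really were the answer, the brute-force “grid” would have roughly 6.8e15 points for a single variable, which seems far too pessimistic a floor; hence my question about whether the solver tolerance is the more meaningful resolution.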
I have read this page: http://cvxr.com/cvx/doc/solver.html , especially the section “Controlling precision”, but I am not sure it really answers my question. I interpret the solver tolerance to mean that the solver will stop when the difference between one iteration and the next is less than the tolerance, not that the tolerance defines the precision of the values a variable can take (or am I wrong?).
Any insight or help is appreciated. Many thanks