Machine epsilon / numerical precision / step size

Hello CVX users, convex optimization experts, and anyone willing to help,

For a publication, I am being asked to compare the absolute execution time of a problem solved with convex optimization (which I solved in CVX using the MOSEK and SDPT3 solvers) vs. the same problem solved by brute force (i.e., exhaustive search).

Now, it’s obvious that a convex solver is in general much faster (it finishes in polynomial time) than exhaustive search (which takes exponential time); however, the reviewers, who are not from a mathematical/optimization background, want a “case study comparison”.

The problem I am running into conceptually is finding a fair comparison. When solving with CVX, I define a continuous variable in MATLAB, e.g. 0.5 < x < 1.5, etc. If I am solving this via exhaustive search, I have to decide on a discretization: into how many values am I going to divide the interval from 0.5 to 1.5 for brute-force evaluation of x? This obviously has a major impact on the computation time.

I am trying to find a “floor” on the discretization, which got me thinking: since computers have no actual continuous numbers, only discrete ones, what is the actual numerical precision when solving problems with CVX? If I define a “continuous” variable, how many actual values can that variable take?

Is this

  1. Defined by MATLAB?
  2. Defined by CVX? or
  3. Defined by the actual solver used?

Is it actually machine epsilon (i.e. the round-off error for floating point numbers on my computer), or something larger?
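To put concrete numbers on the machine-epsilon option: MATLAB stores variables as IEEE 754 double-precision floats, so one can count exactly how many representable values lie in an interval. The sketch below uses Python rather than MATLAB, but float64 arithmetic is identical in both; the helper name `float_to_ordinal` is my own, not a standard function.

```python
import struct
import sys

def float_to_ordinal(x):
    """Interpret the bit pattern of a positive double as an integer.

    For positive IEEE 754 doubles, consecutive integers correspond to
    consecutive representable values, so a difference of ordinals counts
    the floats between two numbers exactly.
    """
    return struct.unpack('<q', struct.pack('<d', x))[0]

# Machine epsilon for double precision is 2^-52, about 2.2e-16.
print(sys.float_info.epsilon == 2.0 ** -52)   # True

# Exact number of representable doubles in the interval [0.5, 1.5].
count = float_to_ordinal(1.5) - float_to_ordinal(0.5) + 1
print(count)   # 6755399441055745, i.e. about 6.8e15 distinct values
```

So if "machine epsilon" were the floor, a brute-force sweep of a single variable over [0.5, 1.5] would need ~6.8e15 evaluations, which suggests that the practical floor is set by the solver tolerance, not by the floating-point representation.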

I have read this page: http://cvxr.com/cvx/doc/solver.html , especially the section “Controlling precision”, but I am not sure it really answers my question. I interpret the solver tolerance to mean that the solver will stop when the difference between one iteration and the next is less than the tolerance, not that the tolerance defines the precision of the values a variable can take (or am I wrong?).

Any insight or help is appreciated. Many thanks

Tell those reviewers to stick it where the sun don’t shine … but in a nice way. Educate rather than humor. Humoring should have its limits.

Seriously, you have not provided any context as to what field/type of publication this is, or what role your optimization problem formulation and solution play in your paper. Why compare to the worst method? There are standard NLP solvers that can be used to solve convex optimization problems. Also, CVX is intended to save human time, not to provide the fastest computer execution time.

Perhaps you can tell us something about the problem you solved?

Hi Mark,

Thanks for your reply. I used geometric programming to model multi-objective optimization of power electronic converters (power supplies), for losses and volume. In this field, people talk a lot about optimization, but they focus on the output, the optimized design itself, rather than the optimization process (i.e. how to formulate the problem, which algorithms and solvers to use, etc.). Typically, people focus a lot on getting accurate models of their converters, which they then evaluate in a loop; this is the most prevalent approach, and the one the typical reviewer is familiar with. A minority of papers in the field conclude that the models are very nonlinear and complex and then try to use something like genetic optimization and so forth (which has its own limitations); and then there are maybe 2-3 papers where people tried to formulate the problem as a general NLP (but some are quite old and their approach is dated for various other reasons I won’t bore you with).

So, what I am trying to do is to suggest that there is value in putting effort into modeling the power converter as a GP (which I have shown to be feasible), not least of which is the fact that you can get solutions much faster (or explore a much larger solution space in the time you have allotted for converter design/optimization).

Now a reviewer is asking explicitly for a comparison with the most common approach in the field, which is brute-force optimization. Naturally, I can set up a brute-force loop and decrease the discretization step for the variables until I show that in this particular case exhaustive search is X times slower, but that is no general conclusion. I am trying to formulate some kind of more general statement, e.g. that solving for x, y, z between the given bounds using a GP is “equivalent” to a brute-force search over Y value combinations of the variables (x, y, z), if that makes sense at all. Hence my question about machine epsilon and the like.
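One rough way to make that equivalence concrete (a sketch, not a rigorous claim): if the brute-force baseline uses N grid points per variable and there are d variables, it performs N^d model evaluations, whereas interior-point work grows only polynomially with problem size. Counting the combinations directly shows how fast the grid explodes; the function name and the target tolerance 1e-4 below are illustrative assumptions, not anything from CVX.

```python
def grid_evaluations(points_per_var, num_vars):
    """Model evaluations needed by exhaustive search on a uniform grid."""
    return points_per_var ** num_vars

# Three variables (e.g. x, y, z) with increasingly fine grids:
for n in (10, 100, 1000):
    print(f"{n:>5} points/variable -> {grid_evaluations(n, 3):>15} evaluations")

# To resolve each variable to a relative precision tol over its range,
# brute force needs roughly 1/tol points per variable:
tol = 1e-4                                   # hypothetical target precision
needed = grid_evaluations(round(1 / tol), 3)
print(needed)   # 1000000000000, i.e. 1e12 evaluations for just 3 variables
```

So "GP solved to tolerance ε" vs. "grid with (1/ε)^d points" is one defensible framing of the Y in your statement, as long as you state the assumptions.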

I did already include a brief remark on the complexity analysis of interior point methods vs. exhaustive search, but this was not deemed sufficient it seems.

Thanks again

GPs solved with the MOSEK (8.x) or SDPT3 solvers in CVX require CVX’s successive approximation method, which is not that fast or reliable on some problems. I recommend you try installing CVXQUAD https://github.com/hfawzi/cvxquad as an add-on to CVX, and in particular, replace CVX’s exponential.m with CVXQUAD’s version, as described at the link in the paragraph “Replacing successive approximation”. CVXQUAD’s Pade approximation should be invoked automatically for GPs; you shouldn’t have to change your CVX program at all. It should be more reliable and faster (at least in solver time; I'm not sure about formulation time) than CVX’s successive approximation method.

I leave it to you how to do comparison. Maybe you should focus on how much better your solution is than that obtained by the traditional method, how you never leave anything on the table. You know better than us what that traditional method is, including what level(s) of discretization would be used. Then compare times and goodness of solutions obtained.

Thanks, Mark. I’ll look into it.

Hello,

I have read the replies to this conversation but I did not understand one thing.
What I am looking for is the meaning of the tolerance level in the solver. I have read this page: http://cvxr.com/cvx/doc/solver.html but I could not find the explicit meaning of the cvx_slvtol value.
So does anyone know the interpretation of the cvx_slvtol value? And what is the relationship between this value and ϵ_solver ≤ ϵ_standard ≤ ϵ_reduced in the cvx_precision command?

Thank you for your help.

You may have to settle for this somewhat vague statement:
http://cvxr.com/cvx/doc/solver.html#controlling-precision

Numerical methods for convex optimization are not exact; they compute their results to within a predefined numerical precision or tolerance. Upon solution of your model, the tolerance level the solver has achieved is returned in the cvx_slvtol variable. Attempts to interpret this tolerance level in any absolute sense are not recommended. For one thing, each solver computes it differently. For another, it depends heavily on the considerable transformations that CVX applies to your model before delivering it to the solver. So while you may find its value interesting we strongly discourage dependence upon it within your applications.

Combined perhaps with the solver output and the discussion of solver tolerances in the solver documentation, such as the MOSEK Optimization Toolbox for MATLAB 8.1.0.83 manual.

Thank you. As you said I should look into solver documentation.