Using GPUs to accelerate computation


I want to use my GPU (Graphics Processing Unit) to accelerate the computations.
Of course, I’m solving LMIs, usually with SDPT3.

My questions are as follows:

  1. Does CVX support using GPUs for solving optimization problems?

  2. If so, do I use the standard Matlab command “gpuArray” to store the decision variables, or is there something special I should do?

  3. Whether or not the GPU can be used to accelerate the solution of a single optimization problem, is it possible to invoke the GPU parallel processing capabilities to solve several optimization problems in parallel?


Speaking as an optimization vendor, i.e. the one responsible for MOSEK, I can tell you that it is very hard for optimization software to exploit GPUs to advantage. For that reason, none of the commercial packages do it.

By the way, MOSEK also solves SDPs and can be quite a bit faster than SDPT3. If MOSEK happens to be slow, we would be happy to receive an instance so we can analyze it and perhaps speed MOSEK up.

Erling is correct. I’ll put it more simply: CVX cannot exploit GPUs in the construction of LMIs, and it never will.

If a solver exploits GPUs in its solution process, that’s fine, but I have no control over that. And as Erling said, it’s not particularly easy to do for SDPs.
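On question 3: while CVX itself will not use the GPU, several independent problems can be solved concurrently on CPU cores with MATLAB’s Parallel Computing Toolbox. Here is a minimal sketch, assuming that toolbox is installed; the example data (a batch of Lyapunov LMIs) is purely illustrative, and whether CVX runs cleanly inside `parfor` can depend on your CVX version and setup, so test on a small batch first.

```matlab
% Solve K independent Lyapunov LMI feasibility problems in parallel
% using parfor (CPU workers, not the GPU).
n = 4;                % matrix size (illustrative)
K = 8;                % number of independent problems
A = cell(K, 1);
for k = 1:K
    M = randn(n);
    A{k} = M - n*eye(n);          % example data: shifted to be stable-ish
end

P = cell(K, 1);
parfor k = 1:K
    cvx_begin sdp quiet
        variable Pk(n, n) symmetric
        A{k}'*Pk + Pk*A{k} <= -eye(n);   % Lyapunov LMI
        Pk >= eye(n);
    cvx_end
    P{k} = Pk;                    % one certificate per problem
end
```

Each loop iteration builds and solves its own CVX problem, so the iterations are independent and `parfor` can distribute them across workers; the speedup comes from parallelism across problems, not from accelerating any single solve.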