Using CVX with GPU Parallel Computing

Hi,

I have used CVX to solve a MISOCP problem with Gurobi. Now I have to solve this optimization problem over 10^5 realizations, so I used ‘parfor’ to speed up the simulation.
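For concreteness, the per-realization solve is set up roughly like this (a minimal sketch; the model body and the names channelData and demandData are placeholders, not my actual MISOCP):

```matlab
% Rough sketch of the parfor setup; the model below is a placeholder,
% not the actual MISOCP, and channelData/demandData are made-up names.
numRealizations = 1e5;
optvals = zeros(numRealizations, 1);

parfor k = 1:numRealizations
    A = channelData(:, :, k);   % per-realization parameters (placeholder)
    b = demandData(:, k);
    n = size(A, 2);

    cvx_begin quiet
        cvx_solver gurobi
        variable x(n)
        minimize( norm(A * x - b) )
        subject to
            x >= 0
    cvx_end

    optvals(k) = cvx_optval;
end
```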

Is it possible to use CVX with GPU parallel processing to speed up the simulation? I’m thinking of exploiting the massive number of threads offered by the GPU so that I can solve more problems at a time. I have checked the post "Using GPUs to accelerate computation", but I’m still not sure about the answers, as I’m not looking to accelerate the solution of a single problem.

Thanks

Ahmed

You will find many threads if you search for parfor, parallel, or GPU.

If you try to do anything, I think you are on your own, although you can feel free to report your results, clever workarounds, or whatever.

In Dec 2012, in the thread "CVX in a parallel loop", CVX developer mcg wrote:

Unfortunately, CVX cannot be used in a parallel loop. I have been investigating it, but it will require a non-zero financial expense for me to implement it. Thus it is likely to happen, but only when a commercial client is willing to pay for it :stuck_out_tongue:

I’m pretty sure it hasn’t happened.

Thank you, Mark, for the prompt reply.

I asked about using a GPU because I was thinking of buying an external GPU for my simulation in case CVX works fine with GPUs. However, I’ll try to buy a cheap GPU, maybe a second-hand one, to investigate this issue.

BTW, I have tested the running time of my code with ‘parfor’ and with the normal ‘for’ using tic-toc in Matlab and found that:
with ‘parfor’: 0.9987 seconds
with ‘for’: 2.5195 seconds

The time is averaged over 1000 trials. I used Matlab R2015a and CVX version 2.1. My PC has an Intel Core i5-3570 CPU (4 cores) and 8 GB RAM; the OS is Windows 7 Professional. So parfor worked well for me, and hopefully a GPU would work even better.
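The comparison was done roughly like this (a sketch; solveOneRealization is a placeholder name for the CVX/Gurobi call shown above):

```matlab
% Rough sketch of the timing comparison; solveOneRealization is a
% placeholder for the CVX/Gurobi call shown earlier.
numTrials = 1000;

tic
for k = 1:numTrials
    solveOneRealization(k);
end
tFor = toc / numTrials;       % average time per solve with a plain for loop

tic
parfor k = 1:numTrials
    solveOneRealization(k);
end
tParfor = toc / numTrials;    % average time per solve with parfor

fprintf('for: %.4f s, parfor: %.4f s (averaged per trial)\n', tFor, tParfor);
```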

Thanks

Ahmed

I think it is very unlikely that a GPU will do you any good, at least when it comes to reducing the optimization time.

To the best of my knowledge, none of the commercial optimizers exploits a GPU, simply because it is impossible to get any benefit from one in that use case.

Thank you for the reply.
I’m not looking to reduce the optimization time of a single problem, but to speed up the overall simulation by distributing the optimization problems over the threads offered by the GPU. I mean something like what ‘parfor’ does, but imagine that I had 16 cores instead of 4, with a single optimization problem that needs to be solved each time with different parameters.


This represents a misunderstanding of what a GPU does. You basically want to run multiple clones of CVX independently. But GPUs are meant for very tight, low-level SIMD parallelism. They cannot be used as standard parallel processing engines.


Many thanks for the clarifications. Also, I apologize for my misunderstanding.

What you proposed makes sense and is often done. If there is little interaction between the different simulations, then you can do it. The SIMD restriction is that each of the GPU cores needs to be doing the same thing.

Hi, Ahmed. Have you figured out how to implement parallel optimization with a GPU? I’m solving a large-scale problem with an ADMM algorithm, so I want to use a GPU to speed up the optimization.

Almost no optimization software I am aware of can exploit GPUs. Btw, I have been working in this domain for 20+ years. For machine learning, GPUs might be useful.

If you tell us what problem type you want to solve, its size, and how long it takes, we might be able to recommend the best approach.

Thank you very much!
I have a large-scale convex problem that needs to be solved, so I decompose it into 365 sub-problems, each of which is a small convex problem. In every iteration, all 365 sub-problems need to be solved. I want to use parallel computing to speed up each iteration.
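Concretely, the structure I have in mind is something like this (a rough sketch; solveSubproblem and updateGlobal are placeholder names for my per-sub-problem model and the coupling/consensus update, and m / maxIter are placeholder sizes):

```matlab
% Rough sketch: solve the 365 independent sub-problems in parallel within
% each ADMM iteration. solveSubproblem and updateGlobal are placeholder
% names, and m / maxIter are placeholder sizes.
numSub  = 365;
m       = 10;        % dimension of the coupling variable (placeholder)
maxIter = 50;        % ADMM iteration limit (placeholder)

x = cell(numSub, 1);
u = repmat({zeros(m, 1)}, numSub, 1);   % scaled dual variables
z = zeros(m, 1);                        % global / consensus variable

for iter = 1:maxIter
    parfor j = 1:numSub
        % each sub-problem sees the current z and its own dual u{j}
        x{j} = solveSubproblem(j, z, u{j});
    end

    z = updateGlobal(x, u);             % averaging / consensus step
    for j = 1:numSub
        u{j} = u{j} + x{j} - z;         % standard scaled dual update
    end
end
```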

See the discussion at