Using CVX with GPU Parallel Computing


(Ahmed Al-baidhani) #1

Hi,

I have used CVX to solve a MISOCP problem using Gurobi. Now I have to solve this optimization problem over 10^5 realizations, so I used ‘parfor’ to speed up the simulation.
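For concreteness, the loop is roughly of the following form (a sketch only: the helper name solve_realization, the random data, and the toy SOCP are placeholders standing in for my actual MISOCP model):

    % solve_realization.m -- hypothetical helper wrapping the CVX/Gurobi call,
    % so the CVX block runs inside its own function workspace under parfor
    function val = solve_realization(k)
        rng(k);                          % placeholder: generate the k-th realization
        A = randn(20, 10);  b = randn(20, 1);
        cvx_begin quiet
            cvx_solver gurobi
            variable x(10)
            minimize( norm(A*x - b) )
            subject to
                x >= 0
        cvx_end
        val = cvx_optval;                % optimal value for realization k
    end

    % driver script
    N = 1e5;                             % number of realizations
    results = zeros(N, 1);
    parfor k = 1:N
        results(k) = solve_realization(k);
    end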

Is it possible to use CVX with GPU parallel processing to speed up the simulation? I’m thinking of exploiting the massive number of threads offered by the GPU so I can solve more problems at a time. I have checked the post Using GPUs to accelerate computation, but I’m still not sure about the answers, as I’m not looking to accelerate the solution of a single problem.

Thanks

Ahmed


(Mark L. Stone) #2

You will find many threads if you search for parfor, parallel, or GPU.

If you try anything along these lines, I think you are on your own, although you are of course free to report your results, clever workarounds, or whatever.

In Dec 2012, in CVX in a parallel loop, CVX developer mcg wrote:

Unfortunately, CVX cannot be used in a parallel loop. I have been investigating it, but it will require a non-zero financial expense for me to implement it. Thus it is likely to happen, but only when a commercial client is willing to pay for it :stuck_out_tongue:

I’m pretty sure it hasn’t happened.


(Ahmed Al-baidhani) #3

Thank you, Mark, for the prompt reply.

I asked about using a GPU because I was thinking of buying an external GPU for my simulation, in case CVX works well with a GPU. However, I’ll try to buy a cheap GPU, maybe a second-hand one, to investigate this.

BTW, I have tested the running time of my code with ‘parfor’ and with the normal ‘for’ using tic-toc in Matlab and found that:
with ‘parfor’: 0.9987 seconds
with ‘for’: 2.5195 seconds

The time is averaged over 1000 trials. I used Matlab R2015a and CVX version 2.1. My PC has an Intel Core i5-3570 CPU (4 cores) and 8 GB of RAM; the OS is Windows 7 Professional. So parfor worked well for me, and hopefully a GPU will work even better.
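For reference, the comparison was set up roughly as below (a sketch only; solve_realization is the hypothetical wrapper around the CVX/Gurobi call shown earlier, and the parallel pool is opened before timing so its startup cost is excluded):

    ntrials = 1000;                      % number of timed problem instances
    gcp;                                 % make sure a parallel pool is already running

    tic
    for k = 1:ntrials
        solve_realization(k);            % hypothetical wrapper around the CVX call
    end
    t_for = toc / ntrials;               % average time per problem, plain for

    tic
    parfor k = 1:ntrials
        solve_realization(k);            % same wrapper, iterations spread over workers
    end
    t_parfor = toc / ntrials;            % average time per problem, parfor

    fprintf('for: %.4f s   parfor: %.4f s\n', t_for, t_parfor);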

Thanks

Ahmed


(Erling D.Andersen) #4

I think it is very unlikely that a GPU will do you any good, at least when it comes to reducing optimization time.

To the best of my knowledge, none of the commercial optimizers exploits a GPU, simply because it is impossible to get any benefit from one in that use case.


(Ahmed Al-baidhani) #5

Thank you for the reply.
I’m not looking to reduce the optimization time of a single problem, but to reduce the overall time by distributing the optimization problems over the threads offered by the GPU. I mean something like what ‘parfor’ does, but imagine that I had 16 cores instead of 4, with a single optimization problem that needs to be solved each time with different parameters.


(Michael C. Grant) #6

This represents a misunderstanding of what a GPU does. You basically want to run multiple clones of CVX independently, but GPUs are meant for very tight, low-level, SIMD parallelism. They cannot be used as standard parallel processing engines.
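To illustrate the distinction (a minimal sketch; assumes MATLAB's Parallel Computing Toolbox and a supported GPU): a GPU accelerates applying the same arithmetic to every element of a large array, which is nothing like running many independent, branch-heavy solver instances.

    % Data-parallel, element-wise work: the kind of computation a GPU is built for.
    x = gpuArray(rand(1e7, 1));          % large array transferred to GPU memory
    y = exp(-x.^2) .* sin(x);            % identical operation applied to every element
    y = gather(y);                       % copy the result back to host memory

    % A CVX call, by contrast, parses a model, performs branching and sparse
    % linear algebra, and hands the problem to a CPU solver such as Gurobi;
    % that control-heavy, independent work does not fit the SIMD execution model.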


(Ahmed Al-baidhani) #7

Many thanks for the clarifications. Also, I apologize for my misunderstanding.