I have used CVX to solve a MISOCP problem with Gurobi. Now I have to solve this optimization problem over 10^5 realizations, so I used `parfor` to speed up the simulation.
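For context, my current CPU-parallel setup looks roughly like the following sketch (the model, `generateRealization`, and all problem data are placeholders, not my actual formulation):

```matlab
% Solve one independent MISOCP per realization, in parallel across CPU workers.
numRealizations = 1e5;
results = zeros(numRealizations, 1);
parfor k = 1:numRealizations
    data = generateRealization(k);   % hypothetical per-realization problem data
    cvx_begin quiet
        cvx_solver gurobi
        variable x(10)
        variable z(10) binary        % integer variables make this a MISOCP
        minimize( data.c' * x )
        subject to
            norm(data.A * x - data.b) <= data.t;   % second-order cone constraint
            x <= data.M * z;                       % big-M coupling (illustrative)
    cvx_end
    results(k) = cvx_optval;
end
```

Each iteration is independent, so the loop parallelizes cleanly over CPU workers; my question is whether something analogous is possible on a GPU.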
Is it possible to use CVX with GPU parallel processing to speed up the simulation? I'm thinking of exploiting the massive number of threads offered by a GPU so I can solve more problems at a time. I have checked the post "Using GPUs to accelerate computation", but I'm still unsure about the answers there, since I'm not looking to accelerate the solution of a single problem but rather to solve many independent problems in parallel.