I’ve seen many papers (for example, this one on sequential convex programming) in which non-convex problems are solved with convex optimization methods: the problem is linearized at iteration k about the solution from iteration k-1, and a trust region constraint ||x^{k}-x^{k-1}|| \leq \delta is imposed to make convergence more robust.

However, I recently added a trust region constraint to a problem I was solving with a sequential convex programming technique, and got the opposite behavior: without the trust region constraint it converged to a stable solution, but *with* the trust region constraint the problem blew up, giving me NaN values after the first iteration. I thought trust regions were supposed to increase convergence robustness, yet I’ve somehow achieved the opposite. I’m hoping it’s just a trivial mistake I’ve made somewhere, so I thought I’d ask whether trust regions causing problems to blow up is a common stumbling block for beginners in this field.
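For anyone landing here from search: since no code was posted, here is a minimal, self-contained sketch of the scheme the question describes, in plain Python rather than CVX. It is an assumption-laden illustration, not the poster's actual problem: the objective `f`, the function name `scp_trust_region`, and all parameter values are made up for demonstration. It shows the standard role of the trust region: the linearized subproblem is unbounded on its own, and the constraint ||x - x_k|| <= delta is what keeps each step finite (for a norm ball, the linearized subproblem's minimizer is simply a step of length delta along the negative gradient). It also includes the usual accept/shrink update, whose absence is one plausible way a trust-region scheme can diverge.

```python
def f(x):
    # Toy non-convex objective with minima at x = +/- sqrt(3/2)
    return x**4 - 3 * x**2 + 2

def grad(x):
    return 4 * x**3 - 6 * x

def scp_trust_region(x0, delta=0.5, tol=1e-8, max_iter=10_000):
    """Sequential linearization with a trust region (1-D sketch).

    Each iteration replaces f by its first-order model about the
    current iterate x_k.  Without the constraint |x - x_k| <= delta
    that linear subproblem is unbounded below; with it, the
    subproblem's minimizer is x_k - delta * sign(grad(x_k)).
    The region shrinks whenever the linear model's prediction of
    descent disagrees with the true objective.
    """
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if g == 0:
            break
        candidate = x - delta if g > 0 else x + delta
        if f(candidate) < f(x):
            x = candidate      # model agreed with f: accept the step
        else:
            delta *= 0.5       # model too optimistic: shrink the region
        if delta < tol:
            break
    return x

x_star = scp_trust_region(3.0)
```

Starting from x0 = 3, the iterates walk down to the nearby local minimum at sqrt(3/2) ≈ 1.2247. Note this is only a caricature of the method in the papers cited above; in a real CVX/CVXPY implementation each subproblem would be solved by a conic solver, and NaNs after one iteration usually mean the subproblem itself was reported infeasible or unbounded rather than that the trust-region idea failed.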

This really isn’t the right forum for general-purpose modeling questions. You should consider a more general mathematics or optimization forum, such as Math StackExchange. (Even if you used CVX in the exercise above, there’s no *code* here for us to assist with.) Feel free to return with specific CVX usage questions if you have them, of course.