Is there any way to write multiple Lorentz constraints as a single constraint by using matrices?
As an example, can we merge the following two constraints and write them in a single line?

{z_1, t_1} == lorentz(n)
{z_2, t_2} == lorentz(n)

If that is possible, I can get rid of the ‘for’ loop, which would make my code more efficient.
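For concreteness, the loop being asked about might look like the following sketch. The dimensions, the variable names `Z` and `t`, and the objective are illustrative assumptions, not taken from the original code:

```matlab
% Sketch of the loop-based CVX formulation: one lorentz set
% membership constraint per column of Z.
n = 3; m = 100;
cvx_begin
    variables Z(n,m) t(m)
    for i = 1:m
        {Z(:,i), t(i)} == lorentz(n);   % i.e., norm(Z(:,i)) <= t(i)
    end
    minimize( sum(t) )
cvx_end
```

The question is whether the `for` loop over `i` can be replaced by a single matrix-valued constraint.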

EDIT: I originally wrote that I don’t believe that is possible in CVX. However, see my post below for how to vectorize in CVX.

YALMIP’s cone can be vectorized, eliminating the need for for loops. So you could consider switching to YALMIP if you find the CVX model generation time to be excessive.
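A sketch of the vectorized YALMIP version, assuming (per the YALMIP documentation) that `cone` applied to a matrix creates one second-order cone constraint per column, with the first row acting as the cone “radius.” The names `Z`, `t`, `n`, `m` are illustrative:

```matlab
% Vectorized second-order cone constraints in YALMIP: one call to
% cone() replaces a loop of m separate constraints.
n = 3; m = 100;
Z = sdpvar(n, m, 'full');
t = sdpvar(1, m);
Constraints = cone([t; Z]);   % norm(Z(:,i)) <= t(i) for every column i
optimize(Constraints, sum(t));
```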

Thanks! Now I need to learn YALMIP.
Do you believe that YALMIP is more efficient than CVX?
My question is ‘do you advise me to learn YALMIP and use it instead of CVX?’

YALMIP is more flexible in terms of handling non-DCP and non-convex models, as well as supporting many more solvers. YALMIP allows vectorization of second order cone constraints, whereas CVX doesn’t. YALMIP also has an “optimizer” capability which allows elimination of most of the model generation time when there are many instances of the same problem structure with different input data.
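A sketch of the optimizer capability mentioned above, under the assumption that the model is parameterized by a vector `b` which changes between instances; all names and the solver choice are illustrative:

```matlab
% YALMIP optimizer object: the model is parsed once, then solved
% repeatedly for different parameter values with no re-parsing.
n = 5;
x = sdpvar(n, 1);
b = sdpvar(n, 1);                  % parameter that varies per instance
Constraints = cone([1; x - b]);    % norm(x - b) <= 1
P = optimizer(Constraints, sum(x), sdpsettings('solver','sedumi'), b, x);
xsol = P(randn(n,1));              % solve one instance
```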

However, I’m not sure how efficient this approach is; it is less efficient (using norm and geo_mean rather than rotated_lorentz) when there is only one constraint.

Rotated second-order cone constraints can also be vectorized by using quad_over_lin. For example, norm(x) <= sqrt(y*z) can be rewritten as x'*x/y <= z, which can be vectorized by using a matrix X instead of the vector x, together with row vectors y and z, as quad_over_lin(X,y) <= z. I don’t know what actually happens internally in this situation, i.e., whether there is an effective speedup from vectorization when CVX converts this to involve norms, or whether the vectorization is not really end to end.
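A sketch of that vectorized reformulation, assuming (per the CVX documentation) that quad_over_lin with a matrix first argument and a row-vector second argument operates columnwise; dimensions and names are illustrative:

```matlab
% Vectorized rotated-cone constraints via quad_over_lin:
% columnwise, sum(X(:,i).^2)/y(i) <= z(i), i.e. norm(X(:,i))^2 <= y(i)*z(i).
n = 3; m = 50;
cvx_begin
    variables X(n,m) y(1,m) z(1,m)
    quad_over_lin(X, y) <= z;
    % ... objective and remaining constraints go here ...
cvx_end
```

Note that quad_over_lin implicitly adds the domain constraint y > 0.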

I’m also not completely sure that rotated_lorentz can’t be used in vectorized fashion, but if it can, it’s undocumented.

Thank you for your reply. But ‘norms’ is far less efficient than the ‘lorentz’ set. I was first using norm functions; when I changed to lorentz, the code ran hundreds of times faster.

I think vectorized use of YALMIP’s cone would likely be faster than whatever can be done in CVX, but I’m not sure of the efficiency of any of these alternatives.