Hello,

I am trying to use CVX to solve an optimization problem that has a constraint of the form

$$r_{k}\geq\alpha_{k}\Vert x-\eta_{j}^{(k)}\Vert -\eta_{j}^{(k)\top}Q_{k}\eta_{j}^{(k)}-\eta_{j}^{(k)\top}q_{k}$$

where, in my case:

- r_{k} are scalar optimization variables
- x is a vector optimization variable
- Q_{k} and q_{k} are optimization variables, the former being a symmetric matrix and the latter a vector.
- \eta_{j}^{\left(k\right)} are constant real-valued vectors. However, for different values of k there is a different number of these vectors, so I store them as matrices inside a K\times 1 cell array. The constraint above needs to be satisfied for every row of the matrix stored in each cell.
- the constraint given here uses the norm function on the RHS, but in general there might be a different function h(x,\eta) there (one that is convex in x).

The naive (extremely slow) way to do this would be to use two nested loops:

variables x(1,n) q(K,n) r(K)
variable Q(K,n,n)

for k = 1:K
    curr_eta = etas{k}; % N-by-n matrix whose rows are the eta vectors for this k
    for j = 1:N
        r(k) >= alphas(k)*norm(x - curr_eta(j,:)) ...
                - quad_form(curr_eta(j,:), reshape(Q(k,:,:), n, n)) ...
                - dot(q(k,:), curr_eta(j,:));
    end
end

With N=10000 two-dimensional \eta vectors this takes about 35 minutes on my MacBook, which is extremely slow, especially since I will need to solve cases where N might be as large as several hundred thousand. K is small, so I don't mind iterating over k, but since N can be very large I am trying to avoid iterating over j by doing things in a vectorized manner. When n (the dimension of the vectors) is 1, the constraint becomes $$r_{k}\geq\alpha_{k}|x-\eta_{j}^{(k)}|-Q_{k}(\eta_{j}^{(k)})^{2}-q_{k}\cdot\eta_{j}^{(k)}$$ which can easily be implemented in a vectorized way as follows:

variables x(1) q(K) r(K) Q(K)

for k = 1:K
    r(k) >= alphas(k)*abs(x - etas{k}) - Q(k)*etas{k}.^2 - q(k)*etas{k};
end

When n > 1 this of course does not directly generalize, since the code

variables x(1,n) q(K,n) r(K)
variable Q(K,n,n)

for k = 1:K
    r(k) >= alphas(k)*norm(x - etas{k}) - quad_form(etas{k}, Q(k,:,:)) - q(k,:)*etas{k};
end

obviously does not make sense. I tried using arrayfun in some manner to vectorize, but it does not support passing CVX objects as inputs. I managed to get around this by defining a few auxiliary functions and cell arrays and using cellfun:

variables x(1,n) q(K,n) r(K)
variable Q(K,n,n)

for k = 1:K
    H1 = @(x,y) alphas(k)*norm(x-y); % norm term
    H2 = @(x,Q) x*Q*x';              % quadratic term
    H3 = @(x,y) x*y';                % inner-product term
    Z1 = mat2cell(etas{k}, ones(N,1), n);
    Z2 = cell(N,1); Z2(:) = {x};
    Z3 = cell(N,1); Z3(:) = {reshape(Q(k,:,:), n, n)};
    Z4 = cell(N,1); Z4(:) = {q(k,:)};
    ZZ1 = cellfun(H1, Z1, Z2, 'UniformOutput', 0);
    ZZ2 = cellfun(H2, Z1, Z3, 'UniformOutput', 0);
    ZZ3 = cellfun(H3, Z1, Z4, 'UniformOutput', 0);
    ZZZ1 = cellfun(@minus, ZZ1, ZZ2, 'UniformOutput', 0);
    ZZZ2 = cellfun(@minus, ZZZ1, ZZ3, 'UniformOutput', 0);
    expression W(N)
    for j = 1:N % this loop is required since CVX won't take inequalities w.r.t. cell arrays
        W(j) = ZZZ2{j};
    end
    r(k) >= W; % W is a vector containing the RHS of the constraint
end

Using this code with N=10000 takes 12 minutes, which is about a third of what the nested-loop solution took but is still very slow. For the sake of comparison, with n=1 and N=10000 the compute times are:

- About 2 seconds for the vectorized code.
- About 12 minutes for the cellfun method.
- About 35 minutes for the nested loop method.
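To make the goal concrete, the kind of formulation I am hoping for would look something like the sketch below. It uses CVX's norms function to take the norm of each row at once, replicates x via ones(N,1)*x, and computes all quadratic terms \eta_{j}^{(k)\top}Q_{k}\eta_{j}^{(k)} at once via sum((E*Qk).*E, 2). This is only a sketch: I have not verified that CVX accepts it or that it is fast, and it assumes alphas(k) is nonnegative (so the RHS is convex).

```matlab
variables x(1,n) q(K,n) r(K)
variable Q(K,n,n)

for k = 1:K
    E  = etas{k};                  % N-by-n constant matrix; row j is eta_j^(k)
    Qk = reshape(Q(k,:,:), n, n);  % k-th n-by-n slice of the variable Q
    % Row-wise norms ||x - eta_j||, quadratic terms eta_j'*Qk*eta_j,
    % and inner products q_k'*eta_j, each formed as an N-by-1 vector:
    r(k) >= alphas(k)*norms(ones(N,1)*x - E, 2, 2) ...
            - sum((E*Qk).*E, 2) - E*q(k,:)';
end
```

Note that the quadratic and inner-product terms are affine in the variables Q and q (E is constant), so subtracting them from the convex norms term should keep the RHS convex, as the DCP rules require.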

I would appreciate any ideas/suggestions on how to implement this constraint in an efficient vectorized way that will also scale well in the dimension of the vectors (n), and not only in the number of vectors (N).

Thanks!