I want to express this formula in CVX (it is part of the formula in figure 2), where qq is an (S×N) array, and for each column I want to sum the elements from row 2 to row S. Here is my simulation program; I tried log_sum_exp, but I'm not sure it's right:

for jj = 1:Smax
    for kk = 1:N
        w = log(N0*W)/log(2);
        t(jj,kk) >= log_sum_exp([w y(jj,kk)])/log(2);
        qq(jj,kk)/(alf(ii,kk)*Pu*rou^2+0.0001) >= exp(-y(jj,kk));
    end
end

Should I bring t into the equation in figure 2? I'm not sure whether I'm doing the summation with log_sum_exp correctly.

Also, what should I do about the zero entries in my alf matrix?

I have no idea what you're doing or what this shows. I don't even know what the constraint is which you are claiming is convex. Is the original constraint convex? Or is a Taylor approximation of it convex? You showed a constraint involving log2, and now you show something which does not have any log (after Taylor approximation)?

If the Taylor approximation is what is convex, you should be showing us the constraint which uses that approximation, and writing the code to implement it.

My qq is real affine in x[n], and since that qq is then used as an argument in the following equation, I want to express the following equation.

One of the places I'm not familiar with is log_sum. I referred to your previous reply and used log_sum_exp, but I'm not sure if it's correct: for an S×N matrix, I want to sum the 2nd element of each column through the Sth element (sum over rows 2 to S).
I referenced this reply:

for jj = 1:Smax
    for kk = 1:N
        w = log(N0*W)/log(2);
        t(jj,kk) >= log_sum_exp([w y(jj,kk)])/log(2);
        qq(jj,kk)/(alf(ii,kk)*Pu*rou^2+0.0000603) >= exp(-y(jj,kk));
    end
end

For each k, you need a single constraint t_k >= log_sum_exp([...])./log(2) in which [...] contains all the terms in the sum.
And then you need constraints x_i >= exp(-y_i) for all the contributing variables.
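A minimal sketch of that structure, reusing the names from the posts above (t, y, qq, alf, Pu, rou, N0, W; the sizes and the regularizing constant are assumptions, and this is not a verified drop-in fix):

```matlab
% Sketch only: assumes this sits inside a cvx_begin block with declared
% variables t(1,N) and y(Smax,N), and that qq is affine in the variables.
w = log(N0*W)/log(2);   % constant term, computed once outside the loops
for kk = 1:N
    % One constraint per k: all contributing terms (rows 2..Smax here,
    % matching the "sum(2 to S)" described above) go into a single
    % log_sum_exp, rather than one constraint per (jj,kk) pair.
    t(kk) >= log_sum_exp([w; y(2:Smax,kk)])/log(2);
end
% And one exp-type constraint per contributing variable.
for jj = 2:Smax
    for kk = 1:N
        qq(jj,kk)/(alf(ii,kk)*Pu*rou^2 + 0.0001) >= exp(-y(jj,kk));
    end
end
```

The point of the single log_sum_exp per column is that the sum over rows 2..S must appear inside one call; splitting it into per-row constraints changes the meaning of the constraint.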

When used in a CVX model, log_sum_exp(X) causes CVX's successive
approximation method to be invoked, producing results exact to within
the tolerance of the solver. This is in contrast to LOGSUMEXP_SDP,
which uses a single SDP-representable global approximation.
If X is a matrix, LOGSUMEXP_SDP(X) will perform its computations
along each column of X. If X is an N-D array, LOGSUMEXP_SDP(X)
will perform its computations along the first dimension of size
other than 1. LOGSUMEXP_SDP(X,DIM) will perform its computations
along dimension DIM.
Disciplined convex programming information:
log_sum_exp(X) is convex and nondecreasing in X; therefore, X
must be convex (or affine).

However, if x_k(n) is an optimization variable, the first term on the LHS of the constraint would be convex, which would make the constraint non-convex (unless you are using Taylor approximation of that?).

Thank you for your advice. I will try taking the first-order Taylor expansion of the left-hand constraint; since the left-hand side will be multiplied by some other constant terms, which I have abbreviated, I will take the Taylor expansion of the whole thing. The Taylor expansion of a convex function is a lower bound on it, so I can subtract the right-hand log_sum_exp term, which leaves a concave constraint. Beta is my objective function, and I want to maximize beta under this constraint.
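For reference, the lower-bound fact being invoked here, written generically (f, x, and x_0 are placeholders, not symbols from the model): for a differentiable convex f,

```latex
f(x) \;\ge\; f(x_0) + \nabla f(x_0)^{\mathsf{T}}(x - x_0) \quad \text{for all } x,
```

so replacing a convex term by its first-order Taylor expansion at a point x_0 substitutes an affine lower bound, which makes the resulting constraint a conservative (restricted) version of the original one.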

I think I will try YALMIP, thank you for your advice. One small question: in this simulation, can I use a loop to do the substitution like this? And if I want to merge the substitution into the complete constraint, can I directly use t(jj,kk)?