# How can I express the following formula?

I want to express this formula in CVX. As part of the formula in figure 2, `qq` is an (S×N) array, and within each column I want to sum the entries from i = 2 to S. I tried `log_sum_exp`, but I'm not sure it's right.

Here's my simulation program:

```matlab
for jj = 1:Smax
    for kk = 1:N
        w = log(N0*W)/log(2);
        t(jj,kk) >= log_sum_exp([w y(jj,kk)])/log(2);
        qq(jj,kk)/(alf(ii,kk)*Pu*rou^2+0.0001) >= exp(-y(jj,kk));
    end
end
```

Should I bring `t` into the equation in figure 2? I'm not sure whether I'm summing with `log_sum_exp` correctly.

Also, what should I do if there are zero entries in my `alf` matrix?

`qq` is real affine; `Pu`, `N0*W`, and `rou` are constants.
`alf` is (Smax×N), but the `alf` matrix contains zero entries.

You show a mess, and I have no idea what's what. You say `qq` is affine, but I don't even know what `qq` is.

`log_sum_exp` is not a get out of non-convexity jail free card. The argument of `log_sum_exp` must be convex (or affine).

Your first step is to prove that this is a convex constraint.

qq is real affine

That is not a convexity proof.

Using the Taylor approximation for Eq. 1, I think this proves that qq is a convex function

Thank you very much for replying. To add: this is my program for `qq`.

I have no idea what you're doing or what this shows. I don't even know what the constraint is which you are claiming is convex. Is the original constraint convex? Or is a Taylor approximation of it convex? You show a constraint involving log2, and now you show something which does not have any log (after Taylor approximation)?

If the Taylor approximation is what is convex, you should be showing us the constraint which uses that approximation, and writing the code to implement it.

My `qq` is real affine in x[n], and now `qq` is used as an argument in the following equation, which I want to express.

One place I'm not familiar with is `log_sum_exp`. I referred to your previous reply and used `log_sum_exp`, but I'm not sure if it's correct: for an S×N matrix, I want to sum from the 2nd element of each column to the Sth element (sum over i = 2 to S).

```matlab
for jj = 1:Smax
    for kk = 1:N
        w = log(N0*W)/log(2);
        t(jj,kk) >= log_sum_exp([w y(jj,kk)])/log(2);
        qq(jj,kk)/(alf(ii,kk)*Pu*rou^2+0.0000603) >= exp(-y(jj,kk));
    end
end
```

Here’s my code for this equation

For each k, you need a single constraint
`t_k >= log_sum_exp([...])./log(2)` in which `[...]` contains all the terms in the sum.
And then you need constraints `x_i >= exp(-y_i)` for all the contributing variables.
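A rough sketch of that structure, using your variable names `t`, `y`, `qq`, `alf` (the row range `2:Smax`, the constant `w`, and the `+0.0001` offset are assumptions carried over from your earlier code):

```matlab
% Sketch only: one log_sum_exp constraint per column kk
w = log(N0*W);   % natural log here, since log_sum_exp works in base e
for kk = 1:N
    % a single constraint per k, containing ALL terms of the sum;
    % the final /log(2) converts the result to log base 2
    t(kk) >= log_sum_exp([w; y(2:Smax,kk)])/log(2);
end
% elementwise substitution constraints for the contributing variables
for jj = 2:Smax
    for kk = 1:N
        qq(jj,kk)/(alf(jj,kk)*Pu*rou^2 + 0.0001) >= exp(-y(jj,kk));
    end
end
```

Note that `w` enters `log_sum_exp` in natural log, not pre-divided by `log(2)`; dividing twice, as in your original code, changes the value of the constant term.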

Matrix arguments are handled per `help log_sum_exp`:

``````
log_sum_exp   log(sum(exp(x))).
    log_sum_exp(X) = LOG(SUM(EXP(X))).

When used in a CVX model, log_sum_exp(X) causes CVX's successive
approximation method to be invoked, producing results exact to within
the tolerance of the solver. This is in contrast to LOGSUMEXP_SDP,
which uses a single SDP-representable global approximation.

If X is a matrix, LOGSUMEXP_SDP(X) will perform its computations
along each column of X. If X is an N-D array, LOGSUMEXP_SDP(X)
will perform its computations along the first dimension of size
other than 1. LOGSUMEXP_SDP(X,DIM) will perform its computations
along dimension DIM.

Disciplined convex programming information:
log_sum_exp(X) is convex and nondecreasing in X; therefore, X
must be convex (or affine).
``````
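For instance (a sketch outside of any model; `Y` is a hypothetical Smax×N expression matrix), the default column-wise behavior together with ordinary indexing gives per-column sums over a subset of rows:

```matlab
% log_sum_exp operates down each column of a matrix by default,
% returning a 1-by-N row vector of per-column values
z = log_sum_exp(Y);            % log(sum(exp(Y),1)), column by column
% to sum only rows 2..Smax of each column, index before the call
z2 = log_sum_exp(Y(2:end,:));
```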

However, if `x_k(n)` is an optimization variable, the first term on the LHS of the constraint would be convex, which would make the constraint non-convex (unless you are using Taylor approximation of that?).

Thank you for your advice. I will take the left-hand constraint to a first-order Taylor expansion (the left-hand side is multiplied by some other constant terms, which I have abbreviated, so I will try the full Taylor expansion). The Taylor expansion of a convex function is its lower bound; subtracting the right-hand log_sum_exp term then gives a concave constraint. beta is my objective function, and I want to maximize beta under this constraint.
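For reference, the bound being invoked here is the standard first-order condition for convex functions: the linearization of a differentiable convex $f$ at any point $x_0$ is a global lower bound,

```latex
f(x) \;\ge\; f(x_0) + \nabla f(x_0)^{\top}(x - x_0) \qquad \text{for all } x,
```

so replacing a convex term by its linearization at the current iterate restricts the feasible set, giving a conservative convex constraint, as in successive convex approximation.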

Maybe you should use a non-convex solver, for instance under YALMIP, rather than trying to use a Taylor approximation.

I think I will try YALMIP, thank you for your advice. One small question: can I use a loop to make this replacement in my simulation? If I want to merge the replacement into the complete constraint, can I directly use t(jj,kk)?

You could have a for loop over `kk`. For each value of `kk`, have the `log_sum_exp` sum over all values of `jj` by indexing with `:`.
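A minimal sketch of that loop (the names and the `2:Smax` row range are assumptions carried over from the earlier code):

```matlab
for kk = 1:N
    % the colon-style index y(2:Smax,kk) passes all jj terms for this
    % column into a single log_sum_exp, replacing the inner jj loop
    t(kk) >= log_sum_exp(y(2:Smax,kk))/log(2);
end
```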

I’ll let you work out the details.

Thank you for your advice, I will try the details on this basis