Help: Cannot perform the operation: {real affine} ./ {real affine}

Recently, while coding, I ran into this problem:
Cannot perform the operation: {real affine} ./ {real affine}

Here is my code. I don't know where it is wrong, and I am very confused.

clear all;
clc;

B = 20 ;
L = 150;
p = 100 ;
sigma = sqrt(10*10^-14)*10^3;
b = 5 ; % theta_n = 5
d = 1 ; % eta_n = 1
derta = 2 ; % delta_n = 2
N = 2; % 2 small cells
Kn = 2; % 2 users per small cell

e = zeros(N,Kn);
e_ele = log2(1+p/sigma);
e = e+e_ele;

fai = b * B * e_ele - derta * B - d *B * e_ele;

cvx_begin
variable a(2,2) nonnegative %a
variable s(2,2) nonnegative %s
expression y;
y=0;
for j=1:2
for i=1:2
y=y+a(i,j).*log( (s(i,j)*fai) /(a(i,j)+eps) );
end
end
maximize(y)
subject to
%% 1
s(1,1) + s(2,1) <= 1;
s(1,2) + s(2,2) <= 1;
%% 2
(s(1,1) + s(2,1)) * B * e_ele <= L;
(s(1,2) + s(2,2)) * B * e_ele <= L;
%% 3
a(1,1) >= s(1,1);
a(2,1) >= s(2,1);
a(1,2) >= s(1,2);
a(2,2) >= s(2,2);
%%
for i = 1:2
for j= 1:2
0<=a(i,j)<=1
0<=s(i,j)<=1
end
end
cvx_end

The error comes from the division (s(i,j)*fai)/(a(i,j)+eps): dividing one affine CVX expression by another is not allowed under the DCP rules. Get rid of eps, which will do nothing useful.

Then

expression y;
y=0;
for j=1:2
for i=1:2
y=y+a(i,j).*log( (s(i,j)*fai) /(a(i,j)+eps) );
end
end
maximize(y)

can be replaced with
maximize(sum(sum(-rel_entr(a,s*fai))))
(rel_entr(x,y) = x.*log(x./y), so -rel_entr(a,s*fai) = a.*log((s*fai)./a), which CVX recognizes as concave.)

The constraints can be more simply written as

sum(s,1) <= min(1, L/(B*e_ele));
a >= s 
0 <= a <= 1 
0 <= s <= 1 

Follow the instructions at CVXQUAD: How to use CVXQUAD's Pade Approximant instead of CVX's unreliable Successive Approximation for GP mode, log, exp, entr, rel_entr, kl_div, log_det, det_rootn, exponential cone, and CVXQUAD's Quantum (Matrix) Entropy & Matrix Log related functions. Use Mosek 9.x with CVX 2.2 if it is available to you; otherwise, install CVXQUAD with its exponential.m replacement. No changes to your code are necessary because CVXQUAD processes rel_entr as is.
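
Putting those pieces together, a minimal sketch of the cleaned-up program might look like this (it assumes B, L, e_ele and fai are computed exactly as in your script above):

% Sketch only: rel_entr objective plus the condensed constraints.
% Assumes B, L, e_ele and fai are already defined as in the original script.
cvx_begin
variable a(2,2) nonnegative
variable s(2,2) nonnegative
maximize( sum(sum(-rel_entr(a, s*fai))) ) % = sum_ij a(i,j)*log(s(i,j)*fai/a(i,j))
subject to
sum(s,1) <= min(1, L/(B*e_ele)); % covers constraint groups 1 and 2
a >= s;
a <= 1;
s <= 1;
cvx_end

Then solve it with Mosek, or with CVXQUAD installed, per the instructions above.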

Thank you so much!
The code runs smoothly now.
However, what if I want the numbers in matrices a and s to be random decimals between 0 and 1? Right now the elements of matrix a all come out as the same number, and I want them to be four different decimals. What should I do?

rand(2,2)
ans =
0.398550434016254 0.185001205373067
0.831383004688908 0.500785757513212

Yeah, I know this function, but what should I write in the code? a and s are the decision variables I defined (
variable a(2,2) nonnegative
variable s(2,2) nonnegative )
Do I need to add some restrictions? I added
0 <= a <= 1
0 <= s <= 1
but the matrix still comes out with four identical elements, and I want them to be different.
Thank you so much!

a and s are 2 by 2 matrix decision variables. There is nothing random about them. You want all elements of each matrix to be different? That is non-convex. So you will need to implement Big M logic constraints and decide on a tolerance for what constitutes “different” - search at https://or.stackexchange.com/ and seek further help there if necessary.
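
For illustration only (this is a rough sketch, not part of the answer above), big-M logic constraints of that kind could look like the fragment below, inserted inside the cvx_begin ... cvx_end block. The tolerance tol is an assumed choice for what counts as "different", and the binary variables make the problem mixed-integer, so a solver that supports that would also be needed.

% Sketch only: force every pair of elements of a to differ by at least tol.
tol = 1e-3;                 % assumed tolerance for "different"
M = 1;                      % big-M constant; 1 suffices because 0 <= a <= 1
pairs = nchoosek(1:4,2);    % all pairs of the four elements of a
npairs = size(pairs,1);
variable z(npairs,1) binary % z(r) selects which element of pair r is larger
for r = 1:npairs
p = pairs(r,1); q = pairs(r,2);
a(p) - a(q) >= tol - M*z(r);     % enforced when z(r) = 0
a(q) - a(p) >= tol - M*(1-z(r)); % enforced when z(r) = 1
end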

Disciplined convex programming error:
Cannot perform the operation: {real affine} ./ {real affine}

Error in ./ (line 19)
z = times( x, y, './' );

Error in power_allocation_and_placement (line 209)
alpha(k)=P_UAV*(norm(h(k))^2)*beta1(k)./(P_UAV*(norm(h(k))^2)*sum_Bbeta(k)+sigma);

I have the same problem, please help me. Here is my code:
cvx_begin
variable beta1(1,2*K);
expression x(1,2*K);
expression y(1,2*K);
expression R_K1_UAV(1,2*K);
expression R_min_allo(1,2*K);
expression sum_Bbeta(1,2*K);
expression alpha(1,2*K);
sum_Bki=zeros(1,2*K);
sum_Aki=zeros(1,2*K);

z0=0;z3=0;z4=0;z5=0;
for k=1:1:2*K
for i=1:1:2*K
z0=z0+B(k,i)*beta1(i);
z5=z5+A(k,i)*P(i)*(norm(h(i))^2);
end
sum_Bbeta(k)=z0;
sum_Aki(k)=z5;
lamd(k)=sigma./(P_UAV*(norm(h(k))^2));
alpha(k)=P_UAV*(norm(h(k))^2)*beta1(k)./(P_UAV*(norm(h(k))^2)*sum_Bbeta(k)+sigma);
R_K1_UAV(k)=log2(1+(P(k)*norm(h(k))^2./(sigma+sum_Aki(k))));
end

for k=1:1:2*K
y(k)=log(alpha(k));
x(k)=log(beta1(k));
R_min_allo(k)=min(y(k),R_K1_UAV(2*K-k+1));
z3=z3+exp(x(k));
end

for k=1:1:2*K
for i=1:1:2*K
z4=z4+B(k,i)*exp(y(k));
end
end
maximize sum(R_min_allo);
subject to
for k=1:1:2*K
log((sum_Bki(k)*exp(y(k))+exp(y(k))*lamd(k))./exp(x(k)))./log(2)<=0;
end
z3<=1;
cvx_end

Have you proven this is a convex optimization problem? It better be the case that y(k) = log(alpha(k)) is concave. That looks dubious to me. There might be plenty of other non-convexities as well.

Actually, the whole problem is convex with respect to x(k) and y(k).

In order to be a convex optimization problem, it needs to be jointly convex in all the optimization variables, which in your case are the elements of beta.

Let’s look at a simple example: minimize sin(x)^2 with respect to x. That is not a convex optimization problem. Your claim is analogous to defining y = sin(x), and stating your problem is minimize y^2, which is convex with respect to y. But it is not convex with respect to the optimization variable x.
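
To make that concrete, here is a quick second-derivative check (my addition) for that example:

f(x) = \sin^2(x) = \tfrac{1}{2}\bigl(1 - \cos(2x)\bigr), \qquad
f''(x) = 2\cos(2x) < 0 \ \text{for } x \in \bigl(\tfrac{\pi}{4}, \tfrac{3\pi}{4}\bigr),

so f changes curvature and is not convex in x, even though y^2 is convex in y.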

Indeed, your rationale, were it valid, would enable convexification of any optimization problem. Have a non-convex objective? No problem, just define y = objective function, and voila it is convex with respect to y. Similarly with constraints. (Every optimization problem could even be made into a Linear Programming problem by that rationale.) That is of course nonsense. Expressions in CVX are just a shorthand way of making complicated formulas more understandable by writing them with placeholders (expressions) which hide some of the complexity - they don’t change the inherent convexity, or not, of anything.