# Changing the variables and Failed status

I have an optimization problem that is not convex. By the change of variables x=exp(y) (equivalently y=ln x, with y the new variable), it becomes a convex problem.

In the old problem my constraints were x>=0 and sum(x,1)<=1, i.e., each element of x and each column sum of x lie between 0 and 1. After this change of variables, I get a Failed status, which I think is caused by my constraints.

For x>=0 I have not added an equivalent constraint, and for sum(x,1)<=1 the equivalent constraint (which I am not sure is correct) is sum(exp(y),1)<=exp(log(1)).
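Since the thread's code is MATLAB but the point is purely numerical, here is a quick sanity check of the substitution, sketched in Python/NumPy (the 2x3 matrix is an arbitrary example): with x=exp(y), x>=0 holds automatically, and sum(x,1)<=1 becomes sum(exp(y),1)<=1 (note exp(log(1)) is just 1).

```python
import numpy as np

# Arbitrary example values in (0,1); y is the new variable, x = exp(y).
y = np.log(np.array([[0.2, 0.1, 0.4],
                     [0.3, 0.5, 0.2]]))
x = np.exp(y)

# The old constraint x >= 0 holds automatically once x = exp(y).
assert np.all(x > 0)

# The old constraint sum(x,1) <= 1 (column sums) maps to sum(exp(y),1) <= 1.
assert np.all(np.exp(y).sum(axis=0) <= 1)

# exp(log(1)) is simply 1, so the right-hand side does not change.
assert np.isclose(np.exp(np.log(1.0)), 1.0)
```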

For more detail, my code is attached. I would appreciate any advice.

my code:

```
close all;
clear;
clc;
M=3;%num_rrh
N=2;%num_channel
K=4;%num_user
n_harvest=2;
user_per_rrh=[4 4 4];
E0=.02;
N0=10^-9;
Bw=100e6;
loc_r=[200 600 1000];
loc_bbu=[700 1200];
pathloss=1;
slot_num=10;
energy_threshold=6e-4;
% pmax_r=4;
% pr_max=pmax_r*ones(1,M);
min_user_rate=0;
n_l=2;
R_rrh_min=0;
landa=0.005;
p_threshold=1e-5;
a_threshold=1e-2;
dinke_ite=2;
info=[];
info_asm=[];
slot_duration=1;
max_itr=4;
alfa_asm=zeros(1,max_itr);
b_asm=[];
e_asm=[];
pr_asm=[];
din_threshold=1e-3;
pmax_r=1;
e_nkm_old=(1/N)*ones(N,K,M);
e_avail=ones(1,K,M);
pr_max=pmax_r*ones(1,M);
b_nkm_old=(1/N)*ones(N,K,M);
p_rrh_old=(pmax_r/2)*ones(N,M);
alfa_old=[.5 .5 .5];
b_nkm_old= ones(N,K,M);
y_nkm_old=exp(e_nkm_old);

cvx_begin
variables y_nkm(N,K,M) pr(N,M)
% tr=log(1+cnr.*pr);
tr = -rel_entr(ones(N,M),1+cnr.*pr);
r_rrh=cvx(zeros(N,M));
objfunc=cvx(zeros(K,M));

for m=1:M
E_km=e_avail(:,:,m);
J_2=J_1(:,:,m);
for k=1:K
lj(m,k)=sum(J_2(:,k)*E_km(k),1);
end
end
[sj ij]=sort(lj,2,'descend');
for m=1:M
E_km=e_avail(:,:,m);
r_rrh(:,m)=((1-alfa_old(m))*tr(:,m));
x=cvx([]);
s_omeg=0;
makhraj_o=0;
for k=1:K
t=0;
sum_o=0;
ma2=0;
for n=1:N
t=t+1;%
for nn=1:N
end
for s=k+1:K%K-k:K
for n=1:N
t=t+1;
end
end
say(m,ij(m,k))=-alfa_old(m)*log_sum_exp(x);
omega(m,ij(m,k))=-alfa_old(m)*log(sum_o+s_omeg);
for i=1:K
if(i~=ij(m,k))
s1=0;
for a=1:N
end
else
s2=0;
s3=0;
for l=1:N
for r=1:N
end
end

objfunc(m,ij(m,k))=say(m,ij(m,k))-omega_hat(m,ij(m,k));
end
end

end

end

end
maximize sum(sum(objfunc))+sum(sum(r_rrh))-(landa*sum(sum(pr)))
subject to
%constraint for the power of the rrh
for m=1:M
%        temp=sum(pr,1);
%        temp(:,m)-pr_max(m) <=0;
s=0;
for n=1:N
s=s+ pr(n,m);
end
s-pr_max(m) <=0;
end

%constraint for the enegy of the user
be=exp(log(b_nkm_old).*y_nkm);%b_nkm_old.*(exp(y_nkm));
for m=1:M
be_temp= be(:,:,m);
be_t2=sum(be_temp,1);
for k=1:K
be_t2(:,k)-1<=0;
end
end

for m=1:M
for n=1:N
pr(n,m)>=0;
end
end

y0=exp(y_nkm);
for m=1:M
y1=y0(:,:,m);
for k=1:K
sum(y1(:,k),1)-exp(log(1))<=0;
end
end
cvx_end
y_nkm(N,K,M); pr(N,M);
v=exp(y_nkm);
```

CVX accepts your problem. The successive approximation method is failing (although with SeDuMi, it reports infeasible).

Even with CVXQUAD, some more work is required to avoid the successive approximation method: use `-rel_entr(1,x)` in place of `log(x)`, and use the explicit exponential cone construct (see mcg’s answer at Solve optimization problems of exp function).

I think you want `sum(exp(y),1) <= exp(1)`, not `sum(exp(y),1) <= exp(log(1))`, but doing this still results in failure with SDPT3 and infeasible with SeDuMi. I don't know what will happen when you make the changes described in the previous paragraph, thereby avoiding use of the successive approximation method.
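The `-rel_entr(1,x)` replacement relies on the identity rel_entr(p,q) = p*log(p/q), so -rel_entr(1,x) = -log(1/x) = log(x). A numerical check, sketched in Python/SciPy since the thread's code is MATLAB:

```python
import numpy as np
from scipy.special import rel_entr  # rel_entr(p, q) = p*log(p/q)

x = np.array([0.5, 1.0, 2.0, 10.0])

# -rel_entr(1, x) = -log(1/x) = log(x), the form CVX can treat as concave.
assert np.allclose(-rel_entr(np.ones_like(x), x), np.log(x))
```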

Dear Mark_L_Stone

When x is the variable instead of log x, I should use `-rel_entr(1,x)`; I have taken that into account. But regarding exp: although I read the section “successive approximation”, I have not understood what I should do or which function should be used. I would appreciate more detailed advice.

Best wishes,

Please note there was a typo in my answer, which I have now corrected. That statement should be: use `-rel_entr(1,x)` in place of `log(x)`. As I wrote above, look at mcg’s answer at Solve optimization problems of exp function for how to change exp into exponential(1), or use the rel_entr expressions shown in mcg’s answer.
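For reference, the `log_sum_exp(x)` used in the model above is the convex function log(sum(exp(x))); a quick check of that identity (Python/SciPy used here purely for illustration):

```python
import numpy as np
from scipy.special import logsumexp

xv = np.array([-2.0, 0.5, 1.0])

# log_sum_exp(x) = log(sum(exp(x))), which is convex in x.
assert np.isclose(logsumexp(xv), np.log(np.exp(xv).sum()))
```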

Dear Mark

I have eliminated the difficulty with exp by using a Taylor series. Now my problem is that the status is Unbounded and the constraints are not satisfied.

My variables were x(N,K,M), and after the change of variables x=exp(y) and y=ln x, the optimization problem is expressed in terms of y.

For more detail, my constraints are enclosed (each element of x should be between 0 and 1, and the sum of the elements of each column of x should also be between 0 and 1).

I would appreciate any advice.

Best regards,

A.Hooshiary

my code is as follows:
```
close all;
clear;
clc;
M=3;%num_rrh
N=2;%num_channel
K=4;%num_user
n_harvest=2;
user_per_rrh=[4 4 4];
E0=.02;
N0=10^-9;
Bw=100e6;
loc_r=[200 600 1000];
loc_bbu=[700 1200];
pathloss=1;
slot_num=10;
energy_threshold=6e-4;
% pmax_r=4;
% pr_max=pmax_r*ones(1,M);
min_user_rate=0;
n_l=2;
R_rrh_min=0;
landa=0.005;
p_threshold=1e-5;
a_threshold=1e-2;
dinke_ite=2;
info=[];
info_asm=[];
slot_duration=1;
max_itr=4;
alfa_asm=zeros(1,max_itr);
b_asm=[];
e_asm=[];
pr_asm=[];
din_threshold=1e-3;
pmax_r=1;
e_nkm_old=(1/N)*ones(N,K,M);
e_avail=ones(1,K,M);
pr_max=pmax_r*ones(1,M);
b_nkm_old=(1/N)*ones(N,K,M);
p_rrh_old=(pmax_r/2)*ones(N,M);
alfa_old=[.5 .5 .5];
b_nkm_old= ones(N,K,M);
y_nkm_old=log(e_nkm_old);

cvx_begin
variables y_nkm(N,K,M)

objfunc=cvx(zeros(K,M));

for m=1:M
E_km=e_avail(:,:,m);
J_2=J_1(:,:,m);
for k=1:K
lj(m,k)=sum(J_2(:,k)*E_km(k),1);
end
end
[sj ij]=sort(lj,2,'descend');
for m=1:M
E_km=e_avail(:,:,m);
```

```
x=cvx([]);
s_omeg=0;
makhraj_o=0;
for k=1:K
t=0;
sum_o=0;
ma2=0;
for n=1:N
t=t+1;%
if d1 ~=0
x(t)=y_nkm(n,ij(m,k),m)+log(d1)+log(N0*alfa_old(m));
else
x(t)=y_nkm(n,ij(m,k),m)+log(N0*alfa_old(m));
end
for nn=1:N
end
for s=k+1:K%K-k:K
for n=1:N
t=t+1;
if c1 ~=0 && c2 ~=0
x(t)=y_nkm(n,ij(m,k),m)+log(c1)+y_nkm(n,ij(m,s),m)+log(c2);
elseif c1 ~=0 && c2==0
x(t)=y_nkm(n,ij(m,k),m)+log(c1)+y_nkm(n,ij(m,s),m);
elseif c1==0 && c2 ~=0
x(t)=y_nkm(n,ij(m,k),m)+y_nkm(n,ij(m,s),m)+log(c2);
else
x(t)=y_nkm(n,ij(m,k),m)+y_nkm(n,ij(m,s),m);
end
end
end
say(m,ij(m,k))=-alfa_old(m)*log_sum_exp(x);
omega(m,ij(m,k))=-alfa_old(m)*log(sum_o+s_omeg);
for i=1:K
if(i~=ij(m,k))
s1=0;
for a=1:N
end
else
s2=0;
s3=0;
for l=1:N
for r=1:N
end
end

objfunc(m,ij(m,k))=say(m,ij(m,k))-omega_hat(m,ij(m,k));
end
end

end

end
```

```
end
maximize sum(sum(objfunc))
subject to

% y_nkm<=0;
y_nkm<=1e-6;
cons1=cvx(zeros(M,K));
cons2=cvx(zeros(M,K));
for m=1:M
y4=y_nkm(:,:,m);
for k=1:K
sum1=0;sum2=0;
for n=1:N
sum1=sum1+(Taylorb(y4(n,k))*b_nkm_old(n,k,m));% sum1=sum1+exp(y4(n,k))*b_nkm_old(n,k,m);
sum2=sum2+Taylorb(y4(n,k));
end
cons1(m,k)=sum1;
cons2(m,k)=sum2;
sum1<=1;
sum2<=1;
end

end
%
```

```
cvx_end
y_nkm;
e=exp(y_nkm)
```

Function of Taylor:

```
function [ output ] = Taylorb( x )

output=1+x;%+((x^2)/2);

end
```

There should be no need to use a Taylor series, because your use of exp was already convex. Using a Taylor series may be a bad thing to do.
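One way to see the danger: exp is convex, so its first-order Taylor tangent at 0, namely 1+y, under-estimates exp(y) everywhere and is unbounded below. Replacing exp(y) with 1+y therefore loosens the constraints and can leave the relaxed problem unbounded. A small check in Python/NumPy (`taylorb` mirrors the `Taylorb` function above):

```python
import numpy as np

def taylorb(y):
    # First-order Taylor expansion of exp(y) about 0, mirroring Taylorb above.
    return 1.0 + y

ys = np.linspace(-5.0, 2.0, 100)

# exp is convex, so its tangent at 0 lies below it everywhere: 1 + y <= exp(y).
assert np.all(taylorb(ys) <= np.exp(ys) + 1e-12)

# The linearization is unbounded below, so "taylorb(y) <= 1" admits arbitrarily
# negative y (e.g. y = -10), even though exp(y) itself stays strictly positive.
assert taylorb(-10.0) <= 1.0 and np.exp(-10.0) > 0.0
```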