"Conversion to double from cvx is not possible" (successive convex approximation)

The code is here:

clear all;
N=12;T=80;V=3;deta=T/N;h0=10^(-8);alpha=2.2;Ps=1;M=10;N0=10^(-15);Pj_average=1;Pj_k=0.9124;
q_r_0=[100,600];q_j_0=[100,600];q_r_sf=[100,-600];q_j_sf=[100,-600];
w_b=[0,0];w_u=[100,0];w_e=[200,0];
d_ju_0=norm(q_j_0-w_u);d_je_0=norm(q_j_0-w_e);d_b_0=norm(q_r_0-w_b);d_u_0=norm(q_r_0-w_u);d_e_0=norm(q_r_0-w_e);
g_ju=(h0*d_ju_0^(-alpha))^0.5;g_je=(h0*d_je_0^(-alpha))^0.5;
an=Ps*h0^2*M^2/N0/(d_b_0*d_u_0)^alpha;bn=(g_ju^2)/N0;cn=Ps*h0^2*(M^2)/N0/(d_b_0*d_e_0)^alpha;dn=(g_je^2)/N0;
% Pj=zeros(N,1);
for n=1:N
cvx_begin  
variable Pj(N)
expression an
expression bn
expression cn
expression dn
expression sum
expression sum_pj
  A=-an*bn/log(2)/(bn*Pj_k+1)/(bn*Pj_k+an+1);
  R(n)=A*Pj(n)+log2(1-cn*inv_pos(dn*Pj(n)+1)); 
  sum=0;sum_pj=0;
  sum=sum+R(n);
  sum_pj=sum_pj+Pj(n);
maximize (sum);
  subject to
      0<=sum_pj<=N*Pj_average;
      0<=Pj(n)<=4*Pj_average;
cvx_end
 Pj(n)=Pj(n);
cvx_begin
B=h0^2*M^2*Ps/(Pj(n)*g_ju^2+N0);
sum_trj=0;z_l=2;v_l=2;
variable q_r(N,2)
variable z(N)
variable v(N)
      sum_trj=sum_trj+log2(1+B*inv_pos(z_l))-B.*(z(n)-z_l)/z_l/(z_l+B)/log(2)+log(1-B*inv_pos(v(n)));
      d_b(n)=sum_square(q_r(n,:)-w_b);
      d_u(n)=sum_square(q_r(n,:)-w_u);
      d_e(n)=sum_square(q_r(n,:)-w_e);
 maximize (sum_trj);
      subject to
        norm(q_r(1,:)-q_r_0)<=V*deta;
        norm(q_r(n+1,:)-q_r(n,:))<=V*deta;
        q_r(1,:)==q_r_0;
        q_r(N,:)==q_r_sf; 
        z(n)^(2/alpha)>=0.5*((d_b(n)^2+d_u(n)^2)^2-(d_b_0^4+d_u_0^4))-2*d_b_0^2*(q_r_0-w_b)'*(q_r(n,:)-q_r_0)-2*d_u_0^2*(q_r_0-w_u)'*(q_r(n,:)-q_r_0);
        v_l^(2/alpha)+2/alpha*v_l^(2/alpha-1)*(v(n)-v_l)<= ...
        0.5*((d_b_0^2+d_e_0^2)^2-(d_b(n)^4+d_e(n)^4))+2*d_b_0^2*(2*q_r_0-w_b-w_e)'*(q_r(n,:)-q_r_0)+2*d_e_0^2*(2*q_r_0-w_b-w_e)'*(q_r(n,:)-q_r_0)
cvx_end
end

I want to implement successive convex approximation, so Pj is optimized first, and then q_r, z(N), and v(N) are optimized. But I get the error "Conversion to double from cvx is not possible".

Perhaps you could take the time to make sure the code you post is the same as what you ran. For instance, log2(cvx_expression) triggers an error and needs to be changed to log(cvx_expression)/log(2) before the code can even get to the error you reported. I recommend you first run the code without the outer for loop, and make sure that runs correctly, before adding the for loop so that you can run SCA.
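For what it's worth, a common trigger for "Conversion to double from cvx is not possible" is assigning a CVX expression into an array that already exists in the workspace as a numeric (double) array; declaring the holder with an expression statement inside the model avoids that. A minimal sketch of both points (the log2 workaround and the expression declaration), reusing the constants from your first block but not your exact model:

cvx_begin
    variable Pj(N)
    expression R(N)            % CVX expression holder, so R(n) can store CVX expressions
    for n = 1:N
        % log2(cvx_expr) is rejected by CVX; use log(cvx_expr)/log(2) instead
        R(n) = A*Pj(n) + log(1 - cn*inv_pos(dn*Pj(n) + 1))/log(2);
    end
    maximize( sum(R) )         % built-in sum; avoid overwriting it with a variable named "sum"
    subject to
        0 <= sum(Pj) <= N*Pj_average;
        0 <= Pj <= 4*Pj_average;
cvx_end

The same pattern (declaring expression holders such as d_b, d_u, d_e before the loop) keeps the trajectory subproblem clean as well.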

I am surprised that it didn't give an error about the log2.

clear all;
N=12;T=80;V=3;deta=T/N;h0=10^(-8);alpha=2.2;Ps=1;M=10;N0=10^(-15);Pj_average=1;Pj_k=0.9124;
q_r_0=[100,600];q_j_0=[100,600];q_r_sf=[100,-600];q_j_sf=[100,-600];
w_b=[0,0];w_u=[100,0];w_e=[200,0];
d_ju_0=norm(q_j_0-w_u);d_je_0=norm(q_j_0-w_e);d_b_0=norm(q_r_0-w_b);d_u_0=norm(q_r_0-w_u);d_e_0=norm(q_r_0-w_e);
g_ju=(h0*d_ju_0^(-alpha))^0.5;g_je=(h0*d_je_0^(-alpha))^0.5;
an=Ps*h0^2*M^2/N0/(d_b_0*d_u_0)^alpha;bn=(g_ju^2)/N0;cn=Ps*h0^2*(M^2)/N0/(d_b_0*d_e_0)^alpha;dn=(g_je^2)/N0;
% Pj=zeros(N,1);
cvx_begin
variable Pj
expression an
expression bn
expression cn
expression dn
expression sum
expression sum_pj
an=Ps*h0^2*M^2/N0/(d_b_0*d_u_0)^alpha;bn=(g_ju^2)/N0;cn=Ps*h0^2*(M^2)/N0/(d_b_0*d_e_0)^alpha;dn=(g_je^2)/N0;
A=-an*bn/log(2)/(bn*Pj_k+1)/(bn*Pj_k+an+1);
R=A*Pj+log(1-cn*inv_pos(dn*Pj+1))/log(2);
sum=0;sum_pj=0;
sum=sum+R;
sum_pj=sum_pj+Pj;
maximize (sum);
subject to
0<=sum_pj<=N*Pj_average;
0<=Pj<=4*Pj_average;
cvx_end
Pj0=Pj;
cvx_begin
B=h0^2*M^2*Ps/(Pj0*g_ju^2+N0);
sum_trj=0;z_l=2;v_l=2;
variable q_r(1,2)
variable z
variable v
sum_trj=sum_trj+log2(1+B*inv_pos(z_l))-B.*(z-z_l)/z_l/(z_l+B)/log(2)+log(1-B*inv_pos(v));
d_b=norm(q_r-w_b);
d_u=norm(q_r-w_u);
d_e=norm(q_r-w_e);
maximize (sum_trj);
subject to
norm(q_r-q_r_0)<=V*deta;
% norm(q_r(n+1,:)-q_r(n,:))<=V*deta;
% q_r(1,:)==q_r_0;
% q_r(N,:)==q_r_sf;
z^(2/alpha)>=0.5*((d_b^2+d_u^2)^2-(d_b_0^4+d_u_0^4))-2*d_b_0^2*(q_r_0-w_b)'*(q_r-q_r_0)-2*d_u_0^2*(q_r_0-w_u)'*(q_r-q_r_0);
v_l^(2/alpha)+2/alpha*v_l^(2/alpha-1)*(v-v_l)<= ...
0.5*((d_b_0^2+d_e_0^2)^2-(d_b^4+d_e^4))+2*d_b_0^2*(2*q_r_0-w_b-w_e)'*(q_r-q_r_0)+2*d_e_0^2*(2*q_r_0-w_b-w_e)'*(q_r-q_r_0)
cvx_end

After I deleted all the loops, I find that the second convex optimization fails, but I don't know why.
Successive approximation method to be employed.
For improved efficiency, SDPT3 is solving the dual problem.
SDPT3 will be called several times to refine the solution.
Original size: 11 variables, 4 equality constraints
1 exponentials add 7 variables, 4 equality constraints

 Cones  |             Errors              |
Mov/Act | Centering  Exp cone   Poly cone | Status
--------+---------------------------------+---------
1/ 1 | 1.166e+00 8.566e-02 0.000e+00 | Solved
1/ 1 | 1.842e-01 2.243e-03 0.000e+00 | Solved
1/ 1 | 1.985e-02 2.586e-05 0.000e+00 | Solved
1/ 1 | 2.301e-03 3.454e-07 0.000e+00 | Solved
0/ 1 | 2.646e-04 2.511e-09 0.000e+00 | Solved

Status: Solved
Optimal value (cvx_optval): -7.15067e-09

Successive approximation method to be employed.
For improved efficiency, SDPT3 is solving the dual problem.
SDPT3 will be called several times to refine the solution.
Original size: 67 variables, 28 equality constraints
1 exponentials add 7 variables, 4 equality constraints

 Cones  |             Errors              |
Mov/Act | Centering  Exp cone   Poly cone | Status
--------+---------------------------------+---------
0/ 0 | 0.000e+00 0.000e+00 0.000e+00 | Failed
0/ 0 | 0.000e+00 0.000e+00 0.000e+00 | Failed
0/ 0 | 0.000e+00 0.000e+00 0.000e+00 | Failed

Status: Failed
Optimal value (cvx_optval): NaN

Use CVX 2.2 with Mosek 9.2 if available to you. Otherwise, follow the directions at CVXQUAD: How to use CVXQUAD's Pade Approximant instead of CVX's unreliable Successive Approximation for GP mode, log, exp, entr, rel_entr, kl_div, log_det, det_rootn, exponential cone, CVXQUAD's Quantum (Matrix) Entropy & Matrix Log related functions. If things are still not good, then read my warnings in many posts on this forum about the perils of (crude) SCA.
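If Mosek is available, pointing CVX at it is a one-line change; CVX 2.2 with Mosek 9.x handles log/exp natively through the exponential cone instead of the successive approximation method shown in your log. A toy sketch just to show where the solver statement goes (the model itself is arbitrary):

cvx_begin
    cvx_solver mosek            % use Mosek instead of the default SDPT3/SeDuMi
    variable x(3)
    maximize( sum(log(x)) )     % log is handled by Mosek's exponential cone in CVX 2.2
    subject to
        sum(x) <= 1;
        x >= 1e-6;
cvx_end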

clear all;
N=12;T=80;V=3;deta=T/N;h0=10^(-8);alpha=2.5;Ps=1;M=10;N0=10^(-15);Pj_average=1;Pj_k=0.9124;
q_r_0=[100,600];q_j_0=[100,600];q_r_sf=[100,-600];q_j_sf=[100,-600];
w_b=[50,0];w_u=[100,0];w_e=[200,0];
d_ju_0=norm(q_j_0-w_u);d_je_0=norm(q_j_0-w_e);d_b_0=norm(q_r_0-w_b);d_u_0=norm(q_r_0-w_u);d_e_0=norm(q_r_0-w_e);
g_ju=(h0*d_ju_0^(-alpha))^0.5;g_je=(h0*d_je_0^(-alpha))^0.5;
an=Ps*h0^2*M^2/N0/(d_b_0*d_u_0)^alpha;bn=(g_ju^2)/N0;cn=Ps*h0^2*(M^2)/N0/(d_b_0*d_e_0)^alpha;dn=(g_je^2)/N0;
% Optimize Pj
sum=0;sum_pj=0;
cvx_begin
variable Pj(N)
for n=1:N
A=-an*bn/log(2)/(bn*Pj_k+1)/(bn*Pj_k+an+1);
R(n)=A*Pj(n)+log(1-cn*inv_pos(dn*Pj(n)+1))/log(2);
sum=sum+R(n);
sum_pj=sum_pj+Pj(n);
end
maximize (sum);
subject to
0<=sum_pj<=N*Pj_average;
0<=Pj(n)<=4*Pj_average;
cvx_end
% Optimize trajectory
cvx_begin
sum_trj=0;z_l=100;v_l=100;
variable q_r(N,2)
variable z(N)
variable v(N)
for n=1:N
B(n)=h0^2*M^2*Ps/(Pj(n)*g_ju^2+N0);
sum_trj=sum_trj+log2(1+B(n)*inv_pos(z_l))-B(n)*(z(n)-z_l)/z_l/(z_l+B(n))/log(2)+log(1-B(n)*inv_pos(v(n)));
end
maximize (sum_trj);
subject to
for n=1:N-1
d_b(n)=norm(q_r(n,:)-w_b);
d_u(n)=norm(q_r(n,:)-w_u);
d_e(n)=norm(q_r(n,:)-w_e);
norm(q_r(1,:)-q_r_0)<=V*deta;
norm(q_r(n+1,:)-q_r(n,:))<=V*deta;
q_r(1,:)==q_r_0;
q_r(N,:)==q_r_sf;
pow_pos(z(n),(2/alpha))>=0.5*(pow_pos((pow_pos(d_b(n),2)+pow_pos(d_u(n),2)),2)-(d_b_0^4+d_u_0^4))-2*d_b_0^2*(q_r_0-w_b)'*(q_r(n,:)-q_r_0)-2*d_u_0^2*(q_r_0-w_u)'*(q_r(n,:)-q_r_0);
% pow_pos(z(n),(2/alpha))>=0.5*((pow_pos(d_b(n),2)+pow_pos(d_u(n)^2)^2-(d_b_0^4+d_u_0^4))-2*d_b_0^2*(q_r_0-w_b)'*(q_r(n,:)-q_r_0)-2*d_u_0^2*(q_r_0-w_u)'*(q_r(n,:)-q_r_0);
v_l^(2/alpha)+2/alpha*v_l^(2/alpha-1)*(v(n)-v_l)<= ...
0.5*((d_b_0^2+d_e_0^2)^2-(d_b(n)^4+d_e(n)^4))+2*d_b_0^2*(2*q_r_0-w_b-w_e)'*(q_r(n,:)-q_r_0)+2*d_e_0^2*(2*q_r_0-w_b-w_e)'*(q_r(n,:)-q_r_0)
end
cvx_end

I tried again to implement that.


The error is: Undefined function or variable 'op'.

Error in cvx/power>power_p (line 104)
cvx_dcp_error( errs, op );

Error in cvx_binary_op (line 107)
z = p.funcs{vu(1)}( vec(x), vec(y), varargin{:} );

Error in .^ (line 31)
z = cvx_binary_op( BP, x, y );

Error in pow_pos (line 12)
y = power( pos( x ), p );

Error in twojiont (line 43)
pow_pos(z(n),(2/alpha))>=0.5*(pow_pos((pow_pos(d_b(n),2)+pow_pos(d_u(n),2)),2)-(d_b_0^4+d_u_0^4))-2*d_b_0^2*(q_r_0-w_b)'*(q_r(n,:)-q_r_0)-2*d_u_0^2*(q_r_0-w_u)'*(q_r(n,:)-q_r_0);
My CVX version is 3.0, but I don't know whether the CVX version is the problem.

Do not use CVX 3.0. It is known to be buggy.
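Running cvx_version at the MATLAB prompt shows which build is actually on the path. Separately, note that the traceback goes through pow_pos with exponent 2/alpha = 0.8: in CVX, pow_pos(x,p) is only defined for p >= 1, so that call would be rejected even under CVX 2.2; for 0 < p < 1 the concave power function is pow_p. A minimal, hypothetical sketch of a constraint in that form (not your full model):

cvx_version                      % prints the installed CVX build and solvers

cvx_begin
    variable z
    maximize( z )
    subject to
        % pow_pos(z,0.8) would error (p must be >= 1); pow_p gives the concave power
        pow_p(z, 2/2.5) >= 0.5;  % concave >= constant is a valid CVX constraint
        z <= 10;
cvx_end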

clear all;
N=12;T=80;V=3;deta=T/N;h0=10^(-8);alpha=2.5;Ps=1;M=10;N0=10^(-15);Pj_average=0.001;Pj_k=0.003;
q_r_0=[100,600];q_j_0=[100,600];q_r_sf=[100,-600];q_j_sf=[100,-600];
w_b=[50,0];w_u=[100,0];w_e=[200,0];
d_ju_0=norm(q_j_0-w_u);d_je_0=norm(q_j_0-w_e);d_b_0=norm(q_r_0-w_b);d_u_0=norm(q_r_0-w_u);d_e_0=norm(q_r_0-w_e);
g_ju=(h0*d_ju_0^(-alpha))^0.5;g_je=(h0*d_je_0^(-alpha))^0.5;
an=Ps*h0^2*M^2/N0/(d_b_0*d_u_0)^alpha;bn=(g_ju^2)/N0;cn=Ps*h0^2*(M^2)/N0/(d_b_0*d_e_0)^alpha;dn=(g_je^2)/N0;
% Optimize Pj
sum=0;sum_pj=0;
cvx_begin
variable Pj(N)
for n=1:N
A=-an*bn/log(2)/(bn*Pj_k+1)/(bn*Pj_k+an+1);
R(n)=A*(Pj(n)-Pj_k)+log2(1+an/(bn*Pj_k+1))+log(1-cn*inv_pos(dn*Pj(n)+1))/log(2);
sum=sum+R(n);
sum_pj=sum_pj+Pj(n);
end
maximize (sum);
subject to
0<=sum_pj<=N*Pj_average;
0<=Pj(n)<=4*Pj_average;
cvx_end
% Optimize trajectory
cvx_begin
sum_trj=0;z_l=100;v_l=100;
variable q_r(N,2)
variable z(N)
variable v(N)
for n=1:N
B(n)=h0^2*M^2*Ps/(Pj(n)*g_ju^2+N0);
sum_trj=sum_trj+log(1+B(n).*inv_pos(z_l))/log(2)-B(n).*(z(n)-z_l)/z_l./(z_l+B(n))/log(2)+log(1-B(n).*inv_pos(v(n)));
end
maximize (sum_trj);
subject to
q_r(1,:)==q_r_0;
q_r(N,:)==q_r_sf;
% 100<=v(n)<=10000;
% -10000<=z(n)<=10000;
for n=1:N-1
norm(q_r(1,:)-q_r_0)<=V*deta;
norm(q_r(n+1,:)-q_r(n,:))<=V*deta;
d_b(n)=norm(q_r(n,:)-w_b);
d_u(n)=norm(q_r(n,:)-w_u);
d_e(n)=norm(q_r(n,:)-w_e);
% d_b(n)<=100;
norm(q_r(1,:)-q_r_0)<=V*deta;
norm(q_r(n+1,:)-q_r(n,:))<=V*deta;
0.5*(pow_pos(pow_pos(d_b(n),2)+pow_pos(d_u(n),2),2)-(d_b_0^4+d_u_0^4))-2*d_b_0^2*(q_r_0-w_b)*(q_r(n,:)-q_r_0)'-2*d_u_0^2*(q_r_0-w_u)'*(q_r(n,:)-q_r_0)<=z(n)^(2/alpha);
0.5*((d_b_0^2+d_e_0^2)^2-(pow_pos(d_b(n),4)+pow_pos(d_e(n),4)))+2*d_b_0^2*(2*q_r_0-w_b-w_e)*(q_r(n,:)-q_r_0)'+2*d_e_0^2*(2*q_r_0-w_b-w_e)'*(q_r(n,:)-q_r_0)>=v_l^(2/alpha)+2/alpha*v_l^(2/alpha-1)*(v(n)-v_l);
end
cvx_end

I tried it with CVX 2.2 and it still has something wrong: it says the problem is unbounded, but I do not know why...

When I comment out the objective function, I find the problem is feasible. Does that mean it is a feasible problem?

An unbounded problem is feasible.

Read https://yalmip.github.io/debuggingunbounded/ for how to diagnose unbounded problems.

At least some optimizers say unbounded when they actually mean dual infeasible.

But in general, if a problem is unbounded it should have a feasible solution.
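A simple way to apply the advice from that page in CVX is to add large artificial box bounds on every variable and re-solve: if the problem then reports a huge optimal value with some variable sitting at its artificial bound, that variable (or the term built from it) is what the objective is pushing off to infinity. Toy illustration of the idea (the bound value is arbitrary):

cvx_begin
    variable x
    maximize( x )               % intentionally unbounded without the artificial bound
    subject to
        x >= 0;
        x <= 1e6;               % artificial bound added only for diagnosis
cvx_end
% Here cvx_optval comes back as 1e6, i.e. x runs off to the artificial bound,
% which flags x as the direction in which the original problem is unbounded.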

Thank you for your help!
I tried it with only the constraints (no objective) and it failed; and I tried optimizing only the objective function (no constraints), which shows success, but the optimized variables are sparse doubles. Does that mean my whole convex optimization problem is wrong?

I tried it with only the constraints (no objective) and it failed; and I tried optimizing only the objective function (no constraints), which shows success, but the optimized variables are sparse doubles.

I don’t understand specifically which problem you solved and what happened when you tried to solve it.

It is your problem, so you are the one who should understand it.

Brother, I am also working on problems in this area. Do you have any contact information? Let's discuss it together.