{positive constant} ./ {real affine}

This is a trajectory optimization problem: the transmit power P, the target positions w_k, and rho_0 are known, and the trajectory Q is the variable to be solved for. I initialize a trajectory and then iterate on it. In the expressions, eta (eta_tra in MATLAB), S, and Q are CVX variables, and R_lb_km are expressions obtained from Q.

cvx_begin
variables q1_tra(N,2) q2_tra(N,2) S1(N,6) S2(N,6) eta_tra(K,1) t1(N,6) y1(N,6) t2(N,6) y2(N,6)
expressions R_lb_11(N,1) R_lb_12(N,1) R_lb_13(N,1) R_lb_14(N,1) R_lb_15(N,1) R_lb_16(N,1) R_lb_21(N,1) R_lb_22(N,1) R_lb_23(N,1) R_lb_24(N,1) R_lb_25(N,1) R_lb_26(N,1)
for tt=1:N
n=tt;
R_lb_11(n)=-A_11(n).*((q1_tra(n,1)-q(1,1)).^2+(q1_tra(n,2)-q(1,2)).^2-norm(q1(n,:)-q(1,:)).^2)-A_21(n).*((q2_tra(n,1)-q(1,1)).^2+(q2_tra(n,2)-q(1,2)).^2-norm(q2(n,:)-q(1,:)).^2)+B_11(n);
R_lb_12(n)=-A_12(n).*((q1_tra(n,1)-q(2,1)).^2+(q1_tra(n,2)-q(2,2)).^2-norm(q1(n,:)-q(2,:)).^2)-A_22(n).*((q2_tra(n,1)-q(2,1)).^2+(q2_tra(n,2)-q(2,2)).^2-norm(q2(n,:)-q(2,:)).^2)+B_12(n);
R_lb_13(n)=-A_13(n).*((q1_tra(n,1)-q(3,1)).^2+(q1_tra(n,2)-q(3,2)).^2-norm(q1(n,:)-q(3,:)).^2)-A_23(n).*((q2_tra(n,1)-q(3,1)).^2+(q2_tra(n,2)-q(3,2)).^2-norm(q2(n,:)-q(3,:)).^2)+B_13(n);
R_lb_14(n)=-A_14(n).*((q1_tra(n,1)-q(4,1)).^2+(q1_tra(n,2)-q(4,2)).^2-norm(q1(n,:)-q(4,:)).^2)-A_24(n).*((q2_tra(n,1)-q(4,1)).^2+(q2_tra(n,2)-q(4,2)).^2-norm(q2(n,:)-q(4,:)).^2)+B_14(n);
R_lb_15(n)=-A_15(n).*((q1_tra(n,1)-q(5,1)).^2+(q1_tra(n,2)-q(5,2)).^2-norm(q1(n,:)-q(5,:)).^2)-A_25(n).*((q2_tra(n,1)-q(5,1)).^2+(q2_tra(n,2)-q(5,2)).^2-norm(q2(n,:)-q(5,:)).^2)+B_15(n);
R_lb_16(n)=-A_16(n).*((q1_tra(n,1)-q(6,1)).^2+(q1_tra(n,2)-q(6,2)).^2-norm(q1(n,:)-q(6,:)).^2)-A_26(n).*((q2_tra(n,1)-q(6,1)).^2+(q2_tra(n,2)-q(6,2)).^2-norm(q2(n,:)-q(6,:)).^2)+B_16(n);

     R_lb_21(n)=-A_11(n).*((q1_tra(n,1)-q(1,1)).^2+(q1_tra(n,2)-q(1,2)).^2-norm(q1(n,:)-q(1,:)).^2)-A_21(n).*((q2_tra(n,1)-q(1,1)).^2+(q2_tra(n,2)-q(1,2)).^2-norm(q2(n,:)-q(1,:)).^2)+B_11(n);
     R_lb_22(n)=-A_12(n).*((q1_tra(n,1)-q(2,1)).^2+(q1_tra(n,2)-q(2,2)).^2-norm(q1(n,:)-q(2,:)).^2)-A_22(n).*((q2_tra(n,1)-q(2,1)).^2+(q2_tra(n,2)-q(2,2)).^2-norm(q2(n,:)-q(2,:)).^2)+B_12(n);
     R_lb_23(n)=-A_13(n).*((q1_tra(n,1)-q(3,1)).^2+(q1_tra(n,2)-q(3,2)).^2-norm(q1(n,:)-q(3,:)).^2)-A_23(n).*((q2_tra(n,1)-q(3,1)).^2+(q2_tra(n,2)-q(3,2)).^2-norm(q2(n,:)-q(3,:)).^2)+B_13(n);
     R_lb_24(n)=-A_14(n).*((q1_tra(n,1)-q(4,1)).^2+(q1_tra(n,2)-q(4,2)).^2-norm(q1(n,:)-q(4,:)).^2)-A_24(n).*((q2_tra(n,1)-q(4,1)).^2+(q2_tra(n,2)-q(4,2)).^2-norm(q2(n,:)-q(4,:)).^2)+B_14(n);
     R_lb_25(n)=-A_15(n).*((q1_tra(n,1)-q(5,1)).^2+(q1_tra(n,2)-q(5,2)).^2-norm(q1(n,:)-q(5,:)).^2)-A_25(n).*((q2_tra(n,1)-q(5,1)).^2+(q2_tra(n,2)-q(5,2)).^2-norm(q2(n,:)-q(5,:)).^2)+B_15(n);
     R_lb_26(n)=-A_16(n).*((q1_tra(n,1)-q(6,1)).^2+(q1_tra(n,2)-q(6,2)).^2-norm(q1(n,:)-q(6,:)).^2)-A_26(n).*((q2_tra(n,1)-q(6,1)).^2+(q2_tra(n,2)-q(6,2)).^2-norm(q2(n,:)-q(6,:)).^2)+B_16(n);
 end
 obj_value=0;
 for k=1:K
    obj_value=obj_value+eta_tra(k);
 end
 maximize(obj_value);
 subject to
     eta_tra(1)<=sum(a1(:,1).*(R_lb_11-log(t2(:,1))./log(2)))+sum(a2(:,1).*(R_lb_21-log(t1(:,1))./log(2)));
     eta_tra(2)<=sum(a1(:,2).*(R_lb_12-log(t2(:,2))./log(2)))+sum(a2(:,2).*(R_lb_22-log(t1(:,2))./log(2)));
     eta_tra(3)<=sum(a1(:,3).*(R_lb_13-log(t2(:,3))./log(2)))+sum(a2(:,3).*(R_lb_23-log(t1(:,3))./log(2)));
     eta_tra(4)<=sum(a1(:,4).*(R_lb_14-log(t2(:,4))./log(2)))+sum(a2(:,4).*(R_lb_24-log(t1(:,4))./log(2)));
     eta_tra(5)<=sum(a1(:,5).*(R_lb_15-log(t2(:,5))./log(2)))+sum(a2(:,5).*(R_lb_25-log(t1(:,5))./log(2)));
     eta_tra(6)<=sum(a1(:,6).*(R_lb_16-log(t2(:,6))./log(2)))+sum(a2(:,6).*(R_lb_26-log(t1(:,6))./log(2)));
     
     t1>=log_sum_exp(log(N0),y1);
     t2>=log_sum_exp(log(N0),y2);
     (H.^2+S2)./(p2.*rou)>=exp(-y2);
     (H.^2+S1)./(p1.*rou)>=exp(-y1);

That’s not a reproducible example. It is missing the input data. You should make the example as small and simple as you can, while still illustrating the error. That is for everyone’s benefit, including your own.

You shouldn’t be taking the log of t2. Rather, t2 takes the place of the log_sum_inv term. Please study section 5.2.7 of https://docs.mosek.com/modeling-cookbook/expo.html#modeling-with-the-exponential-cone and see the example CVX code at Log( {convex} ). The only log term you should have is of a constant, not of a CVX variable or expression.
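For illustration, here is a minimal sketch of that log-sum-inv pattern on a toy scalar problem (N0, P, rho, H and the rate bound 5 are made-up placeholders, not your data): t upper-bounds the log-sum term itself, so the rate-type constraint uses t/log(2) directly and log() is only ever applied to constants.

% Minimal sketch (toy scalar problem, made-up constants)
N0 = 1e-3; P = 0.1; rho = 1; H = 10;    % placeholder values
cvx_begin
    variables s t y eta_lb
    maximize( eta_lb )
    subject to
        eta_lb <= 5 - t/log(2);            % no log() of a CVX variable here
        t >= log_sum_exp([log(N0); y]);    % t >= log(N0 + exp(y))
        (H^2 + s)/(P*rho) >= exp(-y);      % i.e., exp(y) >= P*rho/(H^2 + s)
        s >= 0;
        s <= 100;                          % keeps the toy problem bounded
cvx_end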

Thank you, brother Mark. I’ll study it carefully.

Dear Mark,
What is the reason for this situation after my modification?

eta_tra(1)<=sum(a1(:,1).*(R_lb_11-t2(:,1)./log(2)))+sum(a2(:,1).*(R_lb_21-t1(:,1)./log(2)));
eta_tra(2)<=sum(a1(:,2).*(R_lb_12-t2(:,2)./log(2)))+sum(a2(:,2).*(R_lb_22-t1(:,2)./log(2)));
eta_tra(3)<=sum(a1(:,3).*(R_lb_13-t2(:,3)./log(2)))+sum(a2(:,3).*(R_lb_23-t1(:,3)./log(2)));
eta_tra(4)<=sum(a1(:,4).*(R_lb_14-t2(:,4)./log(2)))+sum(a2(:,4).*(R_lb_24-t1(:,4)./log(2)));
eta_tra(5)<=sum(a1(:,5).*(R_lb_15-t2(:,5)./log(2)))+sum(a2(:,5).*(R_lb_25-t1(:,5)./log(2)));
eta_tra(6)<=sum(a1(:,6).*(R_lb_16-t2(:,6)./log(2)))+sum(a2(:,6).*(R_lb_26-t1(:,6)./log(2)));

You may have bad numerical scaling.
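One rough way to check (a sketch, using the constant names from the code above and assuming they are plain numeric arrays in your workspace) is to print the spread of the data entering the model; constants differing by many orders of magnitude are a common cause of trouble, and rescaling the problem into comparable units often helps.

% Rough scaling check (sketch): inspect the spread of the constants fed to CVX.
fprintf('A_11 in [%g, %g]\n', min(A_11), max(A_11));
fprintf('B_11 in [%g, %g]\n', min(B_11), max(B_11));
fprintf('N0 = %g\n', N0);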

In any event, if you have Mosek available, use that as the solver. Otherwise, follow the advice in CVXQUAD: How to use CVXQUAD's Pade Approximant instead of CVX's unreliable Successive Approximation for GP mode, log, exp, entr, rel_entr, kl_div, log_det, det_rootn, exponential cone, CVXQUAD's Quantum (Matrix) Entropy & Matrix Log related functions.
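For reference, assuming Mosek is installed and licensed so that it appears in the output of cvx_solver, selecting it looks like this (tiny test problem, not your model):

% Select Mosek for subsequent CVX models, then solve a small test problem with an exp() term.
cvx_solver mosek
cvx_begin
    variable x
    minimize( exp(x) + square(x - 1) )
cvx_end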

Brother Mark, I installed MOSEK. Running mosekopt works without any problem. Why is there no MOSEK in the CVX solver list?

Try re-installing CVX. Hopefully it will find Mosek.

If not,

Have you set the MATLAB path correctly?

What is the output of mosekdiag ?

This is the result of mosekdiag. There is an error. I will try reinstalling CVX.

Is your Mosek license in the right place? Have you tried a new MATLAB session? Get mosekdiag to give a favorable result. Then reinstall CVX.
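For what it's worth, the sequence to aim for is roughly the following; the Mosek path shown is only a hypothetical install location, so substitute your own.

% In a fresh MATLAB session:
addpath('C:\Program Files\Mosek\10.1\toolbox\r2017aom')   % hypothetical path; point this at your Mosek toolbox folder
mosekdiag    % should report that mosekopt works and a valid license was found
cvx_setup    % re-run CVX setup so it detects Mosek as a solver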

Thank you very much, Mark, for the clarification. MOSEK is now available.

So does everything work now?

The result is no longer NaN, and the optimized trajectory also looks right, but the value of the objective function is somewhat wrong, so I am checking it carefully.

Were you able to solve the problem?
I am getting the same error while writing the Scaled Lasso below in CVX.
[image: the Scaled Lasso objective, minimize over theta and sigma of ||Y - X*theta||_2^2 / (2*n*sigma) + sigma/2 + lambda*||theta||_1]

Here theta and sigma are variables.

How can I write this in cvx?

quad_over_lin can be used to handle the first term, which is a quadratic (in theta) divided by a linear function (of sigma).

help quad_over_lin
quad_over_lin Sum of squares over linear.
Z=quad_over_lin(X,Y), where X is a vector and Y is a scalar, is equal to
SUM(ABS(X).^2)./Y if Y is positive, and +Inf otherwise. Y must be real.

If X is a matrix, quad_over_lin(X,Y) is a row vector containing the values
of quad_over_lin applied to each column. If X is an N-D array, the operation
is applied to the first non-singleton dimension of X.

quad_over_lin(X,Y,DIM) takes the sum along the dimension DIM of X.
A special value of DIM == 0 is accepted here, which is automatically
replaced with DIM == NDIMS(X) + 1. This has the effect of eliminating
the sum; thus quad_over_lin( X, Y, NDIMS(X) + 1 ) = ABS( X ).^2 ./ Y.

In all cases, Y must be compatible in the same sense as ./ with the squared
sum; that is, Y must be a scalar or the same size as SUM(ABS(X).^2,DIM).

Disciplined convex programming information:
    quad_over_lin is convex, nonmonotonic in X, and nonincreasing in Y.
    Thus when used with CVX expressions, X must be convex (or affine)
    and Y must be concave (or affine).
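As a small illustration of that help entry (toy random data and an arbitrary lambda, not your problem), the quadratic-over-linear term of the Scaled Lasso can be entered directly:

% Toy Scaled Lasso illustration (random placeholder data, arbitrary lambda).
n = 20; p = 5; lambda = 0.1;
X = randn(n, p); Y = randn(n, 1);
cvx_begin
    variables theta(p) sig
    minimize( quad_over_lin(Y - X*theta, sig)/(2*n) + sig/2 + lambda*norm(theta, 1) )
cvx_end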

After writing the first term using quad_over_lin, the CVX part works, but the reconstruction is extremely poor.

What do you mean by “the reconstruction is extremely poor”?

CVX can’t be blamed if the statistical technique implied by the optimization model provided to CVX does not do a good job on your data set. And the choice of lambda may matter a lot.

cvx_begin
variables theta(p, 1) sigcap(1,1);
minimize( (1/(2 * n)) * quad_over_lin(Y - X * theta, sigcap) + sigcap/2 + lambda_final * norm(theta, 1) );
subject to
sigcap > 0;
cvx_end

I am using this, but I am getting very large values for sigcap, where sigcap is the standard deviation of the noise.

Yeah, I am using cross-validation for lambda selection.
Is the CVX part written correctly?