Why do I get this result when using rel_entr?

#1

My goal is to express \log(1+\frac{k_{20}}{d_{20}+d_{21}x+d_{22}y}), where k_{20}, d_{20}, d_{21}, d_{22} are constant vectors (101x1) and x, y are variables (101x1). The ratio \frac{k_{20}}{d_{20}+d_{21}x+d_{22}y} is positive.

I am trying to express my function following this post: Minimize log(1+1/x) where 0<x<inf.

But I get an unexpected result.

With all the coefficients and variables being 101x1, why do I get a 100x1 result when using rel_entr?
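For reference, the transformation from that post rests on the identity log(1 + k/z) = (rel_entr(z, z+k) + rel_entr(z+k, z))/k for k, z > 0. This can be sanity-checked numerically outside CVX; the sketch below uses Python and SciPy's `rel_entr` (an assumption on my part, since the question itself concerns CVX's MATLAB function), which implements the same scalar definition x*log(x/y):

```python
import numpy as np
from scipy.special import rel_entr

# Identity behind the DCP-compatible form of log(1 + k/z):
#   rel_entr(z, z+k) + rel_entr(z+k, z)
#     = z*log(z/(z+k)) + (z+k)*log((z+k)/z)
#     = k*log(1 + k/z)
# so dividing by k recovers log(1 + k/z), elementwise.
k = np.array([2.0, 5.0, 0.5])
z = np.array([1.0, 3.0, 4.0])

lhs = np.log(1.0 + k / z)
rhs = (rel_entr(z, z + k) + rel_entr(z + k, z)) / k

print(np.allclose(lhs, rhs))  # the two expressions agree elementwise
print(rhs.shape)              # result has the same size as the inputs
```

Note that the result keeps the shape of the inputs, which is why a 100x1 output from 101x1 inputs is surprising.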

(Mark L. Stone) #2

Please show a reproducible example. If you can do it with smaller arrays, for example 3 by 1 instead of 100 by 1, that would be preferred. Please include the output from cvx_version.

#3

The code

    beta = 9.0453e4;
    N = 11;
    L = 3000;
    % waypoint initialization
    for n = 1:N
        x0(n,1) = 300*n - 300;
        y0(n,1) = 0;
    end
    k20 = 1000*beta*ones(N,1);
    d2 = (x0-L).^2 + y0.^2 + 1e4;
    d20 = d2 - 2*x0.*(x0-L) - 2*y0.^2;
    d21 = 2*(x0-L);
    d22 = 2*y0;
    cvx_begin
        variables x(N) y(N)
        % log(1 + k20./z) expressed via rel_entr, with z = d20+d21.*x+d22.*y
        R2 = (rel_entr(d20+d21.*x+d22.*y, k20+d20+d21.*x+d22.*y) ...
            + rel_entr(k20+d20+d21.*x+d22.*y, d20+d21.*x+d22.*y))./k20;
        maximize sum(R2)
        subject to
            x(1) == 0;
            y(1) == 0;
            x(N) == L;
            y(N) == 0;
            d20 + d21.*x + d22.*y >= 1e4;
            for i = 1:N-1
                square(x(i+1)-x(i)) + square(y(i+1)-y(i)) <= 1e6;
            end
    cvx_end


The output

    =====================================
    Using Pade approximation for exponential cone with parameters m=3, k=3

    Error using .* (line 46)
    Matrix dimensions must agree.

    Error in ./ (line 19)
    z = times( x, y, './' );

    Error in Untitled1 (line 15)
    R2=(rel_entr(d20+d21.*x+d22.*y,k20+d20+d21.*x+d22.*y)+rel_entr(k20+d20+d21.*x+d22.*y,d20+d21.*x+d22.*y))./k20;

My cvx_version output:

    CVX: Software for Disciplined Convex Programming, ©2014 CVX Research
    Version 2.1, Build 1123 (cff5298), Sun Dec 17 18:58:10 2017

Installation info:
Path: D:\Program Files\cvx
MATLAB version: 9.4 (R2018a)
OS: Windows 10 amd64 version 10.0
Java version: 1.8.0_144

(Mark L. Stone) #4

This looks to me like a bug in rel_entr.

help rel_entr

    rel_entr Scalar relative entropy.
        rel_entr(X,Y) returns an array of the same size as X+Y with the
        relative entropy function applied to each element:
                           { X.*LOG(X./Y)  if X >  0 & Y >  0,
           rel_entr(X,Y) = { 0             if X == 0 & Y >= 0,
                           { +Inf          otherwise.
        X and Y must either be the same size, or one must be a scalar. If X
        and Y are vectors, then SUM(rel_entr(X,Y)) returns their relative
        entropy. If they are PDFs (that is, if X>=0, Y>=0, SUM(X)==1,
        SUM(Y)==1) then this is equal to their Kullback-Leibler divergence
        SUM(KL_DIV(X,Y)). -SUM(rel_entr(X,1)) returns the entropy of X.

Disciplined convex programming information:
rel_entr(X,Y) is convex in both X and Y, nonmonotonic in X, and
nonincreasing in Y. Thus when used in CVX expressions, X must be
real and affine and Y must be concave. The use of rel_entr(X,Y) in
an objective or constraint will effectively constrain both X and Y
to be nonnegative, so there is no need to add separate
constraints X >= 0 or Y >= 0 to enforce this.
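Per the help text above, rel_entr applied to two same-size vectors should return a result of that same size, with the piecewise definition applied elementwise. The expected shape behavior can be illustrated outside CVX with SciPy's `rel_entr` (a sketch only; it follows the same scalar definition but does not reproduce the CVX bug):

```python
import numpy as np
from scipy.special import rel_entr

# Inputs chosen to hit all three branches of the piecewise definition.
x = np.array([0.5, 0.0, 1.0])
y = np.array([1.0, 2.0, 0.0])

r = rel_entr(x, y)
print(r.shape)  # same size as the inputs, (3,), never one element shorter
print(r)        # elementwise: x*log(x/y), then 0, then +inf
```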