Cannot perform the operation: {real affine} .* {log-affine}


(Ramy Ramy) #1

Please help me fix the CVX error above. I am maximizing what I believe is a provably concave function.
n_dash, M, and Nf are integers, q_i is a row vector of size 1×Nf, and the decision variable is a vector x of size Nf×1.

cvx_begin
    variable x(Nf)
    maximize( q_i*x + n_dash*q_i*( x.*(1-x).*exp(-n_dash*x) ) )
    subject to
        sum(x) == M
        x >= zeros(Nf,1)
        x <= ones(Nf,1)
cvx_end

(Mark L. Stone) #2

Consider the simple case, with x a scalar:
objective = x + x*(1-x)*exp(-x)
Its second derivative is negative for x < 1 and positive for x > 1. The switchover point is different if n_dash is a number other than 1; for instance, if n_dash = 2, the objective is convex for x >= 0.634. Regardless of what positive value is used for n_dash, the function is not concave over all of 0 <= x <= 1.

So not concave.
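[Editor's note: the switchover points above can be machine-checked. A quick SymPy sketch, not part of the original post, that differentiates the scalar objective twice and solves for where the curvature changes sign; the linear term and the positive multiplier n_dash do not affect the sign change, so the same roots apply either way.]

```python
import sympy as sp

x, n = sp.symbols('x n', positive=True)
# Scalar objective term: x + n*x*(1-x)*exp(-n*x)
f = x + n * x * (1 - x) * sp.exp(-n * x)
d2 = sp.diff(f, x, 2)

switch = {}
for n_val in (1, 2):
    roots = sorted(float(r) for r in sp.solve(d2.subs(n, n_val), x))
    switch[n_val] = roots[0]  # smallest root: where curvature turns positive
    print(f"n_dash = {n_val}: curvature changes sign at x = {roots[0]:.4f}")
```

This reproduces the switchover at x = 1 for n_dash = 1 and x ≈ 0.634 for n_dash = 2.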


(Ramy Ramy) #3

Thanks, Mark, for your reply. Suppose x, whether scalar or vector, is between 0 and 1, and for some values of n_dash the second derivative is negative. Can I solve this as a concave problem under these specific parameters or assumptions?


(Mark L. Stone) #4

Doubtful. But please provide a complete reproducible example, i.e., with all input values, of the problem you want to solve, even though the code will not be accepted by CVX. It would be preferable for you to show the simplest, lowest-dimensional (one-dimensional, if possible) problem of any interest to you - that will make things easier to analyze. And please provide a proof of concavity of the objective function for the problem instance you provide.


(Ramy Ramy) #5

My function is

P(c) = \sum_{m=1}^{Nf} q_m ( c_m + n_dash * c_m (1 - c_m) * exp(-n_dash * c_m) )

Nf = 24, 0 <= q_m, c_m <= 1, n_dash = 5. The decision variable is the vector c of size Nf×1, with entries c_m.

In the proof below, I relied on an assumption on c_m itself. Taking the second derivative:

\partial^2 P(c) / \partial c_m^2 = q_m exp(-c_m n_dash) * ( c_m n_dash^2 (4 + n_dash (1 - c_m)) - 2 n_dash (n_dash + 1) )

If the caching probability c_m is << 1, this second derivative is negative, which would make the function concave.
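[Editor's note: the second-derivative expression above can be verified symbolically. A SymPy sketch, with plain symbols c, n, q standing in for c_m, n_dash, q_m, confirming both the formula and its negativity as c_m approaches 0.]

```python
import sympy as sp

c, n, q = sp.symbols('c n q', positive=True)

# One summand of P(c): q_m * (c_m + n_dash * c_m * (1 - c_m) * exp(-n_dash * c_m))
P_m = q * (c + n * c * (1 - c) * sp.exp(-n * c))
d2 = sp.diff(P_m, c, 2)

# The second derivative as posted
claimed = q * sp.exp(-n * c) * (c * n**2 * (4 + n * (1 - c)) - 2 * n * (n + 1))
assert sp.simplify(d2 - claimed) == 0  # formulas agree

# At c_m = 0 the expression reduces to -2*n*q*(n + 1) < 0, so the curvature
# is indeed negative for sufficiently small c_m.
at_zero = sp.simplify(d2.subs(c, 0))
print("second derivative at c = 0:", at_zero)
```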


(Mark L. Stone) #6

Can you please show this as a complete MATLAB/CVX code, even though CVX will not accept it?

As for a proof of concavity (which needs to be joint concavity) via 2nd derivatives, you need to show that the Hessian is negative semidefinite over the constraint region. It is not sufficient to show that the second partial derivatives are non-positive.


(Ramy Ramy) #7

Thanks for your kind reply and help. Here is the CVX code:

clc; clear; close all;

%% Wireless caching simulation parameters
Nf = 40;
M = 8;
n_dash = 4;
beta = 0.5;

% Calculating Zipf's distribution
q_i = zeros(1,Nf);
den = sum((1:Nf).^(-beta));
for i = 1:Nf
    q_i(1,i) = i^(-beta) / den;   % 1-by-Nf row
end

cvx_begin
    variable x(Nf)    % Nf-by-1 vector
    maximize( q_i*x + n_dash*q_i*( x.*(1-x).*exp(-n_dash*x) ) )
    subject to
        sum(x) == M
        x >= zeros(Nf,1)
        x <= ones(Nf,1)
cvx_end
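[Editor's note: the setup is easy to prototype outside CVX for numeric experiments. A NumPy sketch, variable names mine, that builds the same Zipf weights vectorized and evaluates the objective at one feasible point.]

```python
import numpy as np

Nf, M, n_dash, beta_zipf = 40, 8, 4, 0.5   # beta_zipf: the Zipf exponent

ranks = np.arange(1, Nf + 1)
q_i = ranks**(-beta_zipf) / np.sum(ranks**(-beta_zipf))   # 1-by-Nf Zipf weights

def objective(x):
    return q_i @ x + n_dash * (q_i @ (x * (1 - x) * np.exp(-n_dash * x)))

x0 = np.full(Nf, M / Nf)   # feasible: entries in [0, 1], sums to M
print(round(q_i.sum(), 6), round(float(objective(x0)), 4))
```

The weights sum to 1 as a Zipf distribution should, and the objective can then be inspected at any candidate point without a solver.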

(Mark L. Stone) #8

q_i*x is a linear term, so its 2nd derivative is zero, and it is irrelevant to the concavity determination.

n_dash*q_i > 0

For x >= 0.3876, the 2nd derivative of x*(1-x)*exp(-4*x) is positive, thereby disproving concavity.

Note: Do not use beta as a user variable name. It is a MATLAB (toolbox) function.
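[Editor's note: the n_dash = 4 switchover quoted above can be reproduced with a short SymPy check, not part of the original post.]

```python
import sympy as sp

x = sp.symbols('x', positive=True)
g = x * (1 - x) * sp.exp(-4 * x)   # the nonlinear factor, with n_dash = 4
d2 = sp.diff(g, x, 2)
roots = sorted(float(r) for r in sp.solve(d2, x))
print(f"curvature turns positive at x = {roots[0]:.4f}")   # 0.3876
```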


(Ramy Ramy) #9

Thanks, now I get it. My question: as you said earlier, you disproved concavity via a per-coordinate second derivative. Since we are dealing with a function of x as a vector, i.e., x(Nf), can it still be jointly concave? Especially when we consider the following:

The constraint sum(x) == M. If, for instance, x(1) and/or x(2) take values that make their second derivatives positive, the other entries, i.e., x(3:Nf), are expected to be much smaller because of this constraint. How can I prove or disprove joint concavity?

Thanks in advance.


(Mark L. Stone) #10

At this point you are better off seeking tutorial help at https://math.stackexchange.com/

Presuming existence of the 2nd partial derivatives, it is necessary, but not sufficient, for joint concavity that all 2nd partial derivatives be <= 0. As I wrote previously, also presuming their existence, negative semidefiniteness of the Hessian over the constraint region is sufficient to prove joint concavity.
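[Editor's note: because the objective is a separable sum over coordinates, its Hessian is diagonal, so the criterion can be probed numerically even on the affine slice sum(x) == M: a direction such as e_1 - e_2 has zero coordinate sum and stays on the slice, so a positive value of d'Hd in that direction disproves joint concavity on the constraint set. A NumPy sketch, the point and direction chosen by the editor, with n_dash = 4 and the Zipf weights from the posted code.]

```python
import numpy as np

Nf, M, n_dash, beta_zipf = 40, 8, 4, 0.5
ranks = np.arange(1, Nf + 1)
q_i = ranks**(-beta_zipf) / np.sum(ranks**(-beta_zipf))

def hessian_diag(x, n=n_dash):
    # Per-coordinate second derivative of q_m*(x + n*x*(1-x)*exp(-n*x));
    # the objective is separable, so these are the Hessian's diagonal
    # entries (all off-diagonal entries are zero).
    return q_i * n * np.exp(-n * x) * (-n**2 * x**2 + (n**2 + 4 * n) * x
                                       - 2 * (n + 1))

# A feasible point: two entries at 0.5 (past the curvature switchover),
# the rest sharing the remaining mass so that sum(x) == M.
x = np.full(Nf, (M - 1.0) / (Nf - 2))
x[:2] = 0.5
assert abs(x.sum() - M) < 1e-9 and np.all((0 <= x) & (x <= 1))

H = hessian_diag(x)
d = np.zeros(Nf); d[0], d[1] = 1, -1   # sum(d) == 0: stays on the slice
print("d'Hd =", float(d @ (H * d)))    # positive => not concave there
```

A positive printed value shows the Hessian restricted to the constraint slice is not negative semidefinite at that feasible point, settling the question in post #9.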


(Ramy Ramy) #11

Many thanks, just to conclude this interesting discussion.

The joint concavity of my problem now depends on whether the Hessian matrix is negative semidefinite or not.

Thanks in advance.


(Mark L. Stone) #12

Yes, you are correct.