Do you know how I can fix this problem? I guess the problem comes from the multiplication of vector variables, when one of them comes from another CVX problem. Any help is appreciated.

I believe this may be the same issue I was asking about here.

I think you can solve get_gmax_CVX analytically. Isn’t it something like the following? Take k=floor(alpha) and p=alpha-k. Then

(1 - p) * sum_largest(f, k) + p * sum_largest(f, k + 1)

Since 0 ≤ p < 1, this is convex. The idea is that if alpha were an integer (it must be less than length(f)), then your function would simply be the sum of the largest k elements of f. I haven’t checked that this is right, but something along these lines may work.
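As a quick sanity check on that formula (plain Python rather than CVX, and `sum_largest_frac` is just a made-up helper name), the fractional sum-of-largest can be evaluated directly by sorting:

```python
import math

def sum_largest_frac(f, alpha):
    """Fractional extension of sum-of-k-largest:
    (1 - p) * sum_largest(f, k) + p * sum_largest(f, k + 1),
    with k = floor(alpha) and p = alpha - k."""
    k = math.floor(alpha)
    p = alpha - k
    s = sorted(f, reverse=True)  # largest elements first
    return (1 - p) * sum(s[:k]) + p * sum(s[:k + 1])

# Integer alpha recovers the plain sum of the k largest elements:
print(sum_largest_frac([3, 1, 4, 1, 5], 2))    # 5 + 4 = 9.0
# Fractional alpha interpolates between consecutive integer values:
print(sum_largest_frac([3, 1, 4, 1, 5], 2.5))  # 0.5*9 + 0.5*12 = 10.5
```

For fixed f, this is exactly linear interpolation between the integer cases, which is why the expression stays convex.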

Yes, you’ve identified the problem. The disciplined convex ruleset has to be followed at all times, even across functions like this. That means that you cannot specify get_gmax_CVX in this way, even though it is convex, because it involves an illegal product rho'*f of two variables.

That said, you can re-implement this function in terms of the dual of the optimization problem. In other words: temporarily pretend that f is a constant, and determine the Lagrangian dual of the optimization model
$$\begin{array}{ll}\text{minimize} & f^T\rho \\ \text{subject to} & \vec{0} \preceq \rho \preceq \vec{1} \\ & \textstyle \sum_i \rho_i = \alpha \end{array}$$
The Lagrangian dual will be a maximization problem. Because this optimization problem is strictly feasible, the duality gap is zero, and the dual optimal value is attained. Write get_gmax_CVX in terms of that dual problem, and you’ll get it. You’ll see that the resulting optimization problem will satisfy the DCP ruleset both for f and the dual variables.
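In case it helps, here is one way that dual derivation might go (my own sketch, assuming ρ ∈ R^n and 0 ≤ α ≤ n). Introduce multipliers λ ⪰ 0 for ρ ⪰ 0, μ ⪰ 0 for ρ ⪯ 1, and a scalar ν for the equality constraint. Minimizing the Lagrangian over ρ forces λ = f + μ − ν 1 ⪰ 0, and eliminating λ leaves the dual
$$\begin{array}{ll}\text{maximize} & \alpha\nu - \textstyle \sum_i \mu_i \\ \text{subject to} & \mu \succeq \vec{0}, \quad \mu \succeq \nu\vec{1} - f \end{array}$$
The constraints are jointly affine in f, ν, and μ, so this formulation should satisfy the DCP ruleset even when f is a CVX variable. Equivalently, maximizing out μ analytically gives the unconstrained concave objective αν − Σ_i max(0, ν − f_i).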

Actually, now that I think of it, isn’t this just sum_smallest(f,alpha)? That is: if alpha is a positive integer, isn’t this nothing but the sum of the alpha smallest elements of f? If so, then sum_smallest will work, even if alpha is not an integer.
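For what it’s worth, a quick numerical check (plain Python, with made-up helper names) that the fractional sum-of-smallest formula agrees with a dual of the form maximize αν − Σ_i max(0, ν − f_i), assuming 0 < alpha < length(f). Since that dual objective is piecewise linear and concave in ν, its maximum is attained at one of the breakpoints ν = f_i, so brute-force evaluation suffices:

```python
import math

def sum_smallest_frac(f, alpha):
    """(1-p)*sum of k smallest + p*sum of (k+1) smallest, k = floor(alpha)."""
    k = math.floor(alpha)
    p = alpha - k
    s = sorted(f)  # smallest elements first
    return (1 - p) * sum(s[:k]) + p * sum(s[:k + 1])

def dual_value(f, alpha):
    """max over nu of alpha*nu - sum_i max(0, nu - f_i),
    evaluated at the breakpoints nu = f_i of the concave
    piecewise-linear dual objective."""
    g = lambda nu: alpha * nu - sum(max(0.0, nu - fi) for fi in f)
    return max(g(fi) for fi in f)

f = [3, 1, 4, 1, 5]
for alpha in (2, 2.5, 3):
    assert abs(sum_smallest_frac(f, alpha) - dual_value(f, alpha)) < 1e-12
print("primal and dual values agree")
```

The zero duality gap here is what makes the dual re-implementation of get_gmax_CVX give the same value as the original (DCP-illegal) formulation.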