First, to show that I have already done a lot of the groundwork (at least as much as I was able): the following program already works:
```
lambda = 0.1;
N = numel(b);
cvx_solver MOSEK
cvx_begin
    variable x(N,1) binary
    minimize( lambda*(x'*A*x - Adiag'*x) - (1-lambda)*b'*x )
cvx_end
```
Here, A is a positive semidefinite matrix, Adiag is the diagonal of A, and b is a positive vector.
The above is a workable version of the problem, since by varying lambda I can weight the two terms differently. However, it bothers me that the first term (x'*A*x) grows (roughly) quadratically with sum(x), meaning that for most lambda values there is a built-in benefit to choosing an x with many non-zero entries.
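To illustrate the quadratic growth, here is a small sketch (the all-ones A below is a hypothetical worst case, chosen only for illustration): with k non-zero entries in a binary x, x'*A*x equals k^2, while a linear term like b'*x grows only like k.

```
% Hypothetical worst case: A = ones(N), so x'*A*x = sum(x)^2
N = 10;
A = ones(N);
for k = [1 2 4 8]
    x = [ones(k,1); zeros(N-k,1)];          % binary x with k ones
    fprintf('k = %d, x''*A*x = %d\n', k, x'*A*x);  % 1, 4, 16, 64
end
```

For a general PSD A the growth is not exactly k^2, but the quadratic form still tends to dominate the linear term as sum(x) increases.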
Is there a way to scale the first part of the objective with x, or something close to it?
The objective "lambda*(x'*A*x - Adiag'*x)/sum(x) - (1-lambda)*b'*x" is what I would like (x'*A*x - Adiag'*x is convex, (1-lambda)*b'*x is affine, and sum(x) is concave), but I understand that the division violates CVX's no-product rule. Is there a way around this? I tried substituting a scaled version of x (xScaled = x/sum(x)), which didn't work, and neither does multiplying the second term by sum(x). Is there an accepted work-around?