Help needed: constraining the minimum number of zeros in a vector variable

Hi all, I am working on a problem with a constraint on the minimum number of zeros in a vector.

It's something like:

variable x(100,1)
sum(x) = 1;
maximize( <some quadratic objective> )

Is there any way to impose this constraint? I am using the MOSEK solver.

Thanks in adv.

I think you can do this with a big-M approach, using CVX's MIDCP capability, which you have with MOSEK as the solver.

Let M be the smallest upper bound you can impose on x. Given that sum(x) == 1 (and assuming x >= 0, so that no entry can exceed 1), M can be set to 1. Let c be some very small positive constant. I don't know whether c equal to zero will work given solver tolerance. There has to be some slop somewhere; perhaps the slop built into the solver is good enough.

cvx_begin
variable x(100,1)
variable y(100,1) binary
maximize( <some concave quadratic objective> )
subject to
    x >= 0              % needed so M = 1 is a valid upper bound on x
    x <= c + M*(1-y)    % y(i) = 1 forces x(i) <= c, i.e. essentially zero
    sum(y) >= 50        % at least 50 entries forced to (near) zero
    sum(x) == 1         % NOTE: this must have == , not =
cvx_end
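To see how the big-M constraint behaves, here is a small numeric check in plain NumPy (no solver involved; the vector values, n = 10, and the threshold of 5 forced zeros are made-up illustrative numbers, not part of the model above): where y(i) = 1 the bound collapses to x(i) <= c, and where y(i) = 0 it relaxes to x(i) <= c + M, which never binds since every entry of x is at most 1.

```python
import numpy as np

M, c = 1.0, 1e-6  # M = 1 is valid because sum(x) == 1 and x >= 0

# Hypothetical feasible point: the first 5 entries are forced to (near)
# zero by y = 1, the rest share the unit mass equally.
y = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
x = np.array([0, 0, 0, 0, 0, 0.2, 0.2, 0.2, 0.2, 0.2])

assert np.isclose(x.sum(), 1.0)        # sum(x) == 1
assert np.all(x >= 0)                  # x >= 0
assert np.all(x <= c + M * (1 - y))    # big-M bound holds everywhere
assert y.sum() >= 5                    # at least 5 entries flagged as zero
# Where y(i) = 1, the bound collapses to x(i) <= c, so x(i) is essentially zero:
assert np.all(x[y == 1] <= c)
print("big-M constraint check passed")
```

The same arithmetic is what MOSEK enforces inside the MIDCP: choosing which entries of y are 1 is the combinatorial part, and the big-M inequality ties that choice to the continuous variable x.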