How to tell whether the solution is non-unique, and get at least two different solutions

The solution of my optimization is not unique; how can I get at least two different solutions? I tried running the optimization several times, but the solution is always the same one.

I give a simple example as follows:

```
cvx_begin
    variable x(n)
    minimize( x(1) + x(2) + x(3) )
    subject to
        % (constraints omitted in the original post)
cvx_end
```

The results for x(1) and x(2) are always 0.15 and 0.15. Obviously, x(1) and x(2) admit multiple solutions.

This might not be the best way, or even a good way, but consider solving one or more additional problems with modified (or additional) constraints and/or objective function. For instance, add constraints which preclude your original solution, such as x(1) - x(2) >= d or x(1) - x(2) <= d, where d is some suitably chosen number, combined with a constraint that the objective function be <= its value from the original solution; in this case, x(1) + x(2) + x(3) <= 0.6. (This should work, because your objective function had to have been convex and DCP-compliant, else CVX would not have accepted it, and therefore this extra constraint is convex and DCP-compliant.) This approach will not provide you with the infinity of possible solutions, of course.
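A sketch of this idea in code, using Python and SciPy's `linprog` as a stand-in for CVX, since the original CVX model is not shown. The toy LP here (minimize x(1)+x(2)+x(3) subject to x(1)+x(2) >= 0.3, x(3) >= 0.1, x >= 0) is invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Invented toy LP: minimize x1 + x2 + x3
#                  s.t. x1 + x2 >= 0.3, x3 >= 0.1, x >= 0
c = np.array([1.0, 1.0, 1.0])
A_ub = np.array([[-1.0, -1.0, 0.0]])   # -(x1 + x2) <= -0.3
b_ub = np.array([-0.3])
bounds = [(0, None), (0, None), (0.1, None)]

# First solve: obtain one optimal point and the optimal value.
res1 = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
p_star = res1.fun
x1, x2 = res1.x[0], res1.x[1]

# Second solve: keep the objective at its optimal value (plus a tiny
# tolerance), and push x1 - x2 away from its first-solution value by d.
# The sign s pushes toward the interior of the optimal face; d must be
# small enough that the face is at least d wide in that direction.
d = 0.1
s = 1.0 if x1 >= x2 else -1.0
A_ub2 = np.vstack([A_ub, c, [s, -s, 0.0]])
b_ub2 = np.concatenate([b_ub, [p_star + 1e-9], [s * (x1 - x2) - d]])

res2 = linprog(c, A_ub=A_ub2, b_ub=b_ub2, bounds=bounds, method="highs")

print(res1.x, res1.fun)   # one optimal point
print(res2.x, res2.fun)   # a different optimal point, same objective value
```

If the two points differ while the objective values agree, the non-uniqueness is demonstrated explicitly.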

No practical convex optimization solver enumerates all of the possible solutions to a problem. Even in exact arithmetic, that is an intractable task. Indeed, in this problem and others, there is an entire surface of solutions, an infinite number of them; how would you propose an algorithm return them all?

Mark’s idea is good. To generalize, consider the following problem:
$$
\begin{array}{ll}
\text{minimize} & f(x) \\
\text{subject to} & x \in \mathcal{C}
\end{array}
$$

The first thing I would do is solve this original problem to obtain the optimal value $p^*$. Then I would solve a sequence of related problems:

$$
\begin{array}{ll}
\text{minimize} & g_i(x) \\
\text{subject to} & f(x) \leq p^* + \epsilon \\
& x \in \mathcal{C}
\end{array}
$$

where $\epsilon \geq 0$ allows for small numerical errors and $g_i(x)$, $i = 1, 2, \dots, N$, are new functions that help you “select” desirable values of $x$. For instance, you could have $g_i(x) = -x_i$ to find the optimal point with the largest value of $x_i$.
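A concrete sketch of this two-stage recipe, written in Python with SciPy's `linprog` in place of CVX (the poster's actual model is not shown; the toy LP below, with constraints x(1)+x(2) >= 0.3, x(3) >= 0.1, x >= 0, is invented for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Invented toy LP: minimize x1 + x2 + x3
#                  s.t. x1 + x2 >= 0.3, x3 >= 0.1, x >= 0
c = np.array([1.0, 1.0, 1.0])
A_ub = np.array([[-1.0, -1.0, 0.0]])   # -(x1 + x2) <= -0.3
b_ub = np.array([-0.3])
bounds = [(0, None), (0, None), (0.1, None)]

# Stage 1: solve the original problem to obtain p*.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
p_star, eps = res.fun, 1e-9

# Stage 2: minimize g(x) = -x1 over the (slightly relaxed) optimal face,
# i.e. subject to f(x) <= p* + eps plus the original constraints.
A2 = np.vstack([A_ub, c])              # adds x1 + x2 + x3 <= p* + eps
b2 = np.concatenate([b_ub, [p_star + eps]])
g = np.array([-1.0, 0.0, 0.0])         # "select" the largest x1
res2 = linprog(g, A_ub=A2, b_ub=b2, bounds=bounds, method="highs")

print("p* =", p_star)                  # 0.4 for this toy problem
print("optimal point with largest x1:", res2.x)
```

Different choices of $g_i$ land on different points of the optimal face, so this also yields multiple distinct optimal solutions.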

Thank you very much. I am trying to add additional constraints.

Thanks for your suggestion; it works.

I noticed a difference between CVX and the Optimization Toolbox in MATLAB: for non-unique solutions, the MATLAB solution changes across multiple runs of the code; it seems the path taken to the solution varies with each run. In CVX, however, the solution stays the same; it seems the path to the solution is unchanged.

Your example is an LP. If that is the case, then so-called sensitivity analysis based on the optimal partition can reveal non-uniqueness of the optimal solution. See

for further information.
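The optimal-partition analysis requires access to solver internals, but a cruder computational check needs only repeated solves: bound the range of each variable over the optimal face, and any strictly positive width proves non-uniqueness. A sketch in Python with SciPy's `linprog` (the toy LP is an invented example, not the poster's actual problem):

```python
import numpy as np
from scipy.optimize import linprog

# Invented toy LP: minimize x1 + x2 + x3
#                  s.t. x1 + x2 >= 0.3, x3 >= 0.1, x >= 0
c = np.array([1.0, 1.0, 1.0])
A_ub = np.array([[-1.0, -1.0, 0.0]])   # -(x1 + x2) <= -0.3
b_ub = np.array([-0.3])
bounds = [(0, None), (0, None), (0.1, None)]

p_star = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs").fun

# Optimal face: original constraints plus c'x <= p* (+ tiny tolerance).
A_f = np.vstack([A_ub, c])
b_f = np.concatenate([b_ub, [p_star + 1e-9]])

# Width of each coordinate's range over the optimal face.
widths = []
for i in range(3):
    e = np.zeros(3)
    e[i] = 1.0
    lo = linprog(e, A_ub=A_f, b_ub=b_f, bounds=bounds, method="highs").fun
    hi = -linprog(-e, A_ub=A_f, b_ub=b_f, bounds=bounds, method="highs").fun
    widths.append(hi - lo)

# Any strictly positive width means infinitely many optimal solutions.
print(widths)
```

For this toy problem the widths show that x1 and x2 each vary over the optimal face while x3 is pinned down, matching the poster's observation that only some coordinates are non-unique.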

Please provide the exact MATLAB commands with which you got different solutions in different runs. I am curious as to the reason you did not get the same solution every time. Was something being randomized somewhere?