# Finding a point farthest from center of disc that satisfies bounds

Let me set up the problem. Assume I am operating with $x \in \mathbb{R}^{2}$ and I have a unit disc in $\mathbb{R}^{2}$ defined by $x^{\rm T}x \leq 1$. Assume I have a bounding box $\alpha_{i} \leq x_{i} < \beta_{i}$. My goal is to find the point farthest from the center of the disc. So my first crack at this was
$$\min \frac{1}{x^{\rm T}x}$$
subject to
$$x^{\rm T} x\leq 1$$
$$\alpha_{i} \leq x_{i}< \beta_{i}$$

This violated the DCP requirement. So I tried other, similar cost functions such as ${\rm exp}(-x^{\rm T} x)$ and log-barrier functions, and I still got DCP requirement errors. Should I give up trying to massage this?

It sounds like you want to maximize the convex function x'*x over the intersection of the closed disc x'*x <= 1 and the box alpha <= x <= beta. That is a concave programming problem with a compact convex constraint region, so it attains its global maximum at an extreme point of the constraint set. CVX is not the right tool for this problem.
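In 2-D, that extreme-point observation can be checked exhaustively: the only candidate maximizers of x'*x are the box vertices that lie inside the disc and the points where box edges cross the unit circle. A rough sketch of that check (my own illustration, not from the thread; the function name is made up, and the strict bound $x_i < \beta_i$ is treated as closed for simplicity):

```python
import math

# 2-D extreme-point check for max ||x||^2 over {x : x'x <= 1, alpha <= x <= beta}.
# Candidates: box vertices inside the disc, plus points where box edges cross
# the unit circle. Returns None if no candidate is found (empty intersection).
def farthest_point(alpha, beta):
    if all(a <= -1.0 and b >= 1.0 for a, b in zip(alpha, beta)):
        return (1.0, 0.0)  # the whole circle lies inside the box; any circle point works
    cands = []
    for vx in (alpha[0], beta[0]):           # box vertices inside the disc
        for vy in (alpha[1], beta[1]):
            if vx * vx + vy * vy <= 1.0:
                cands.append((vx, vy))
    for i in range(2):                       # edge/circle crossings
        for face in (alpha[i], beta[i]):
            rest = 1.0 - face * face
            if rest >= 0.0:
                for s in (-math.sqrt(rest), math.sqrt(rest)):
                    if alpha[1 - i] <= s <= beta[1 - i]:
                        p = [0.0, 0.0]
                        p[i], p[1 - i] = face, s
                        cands.append(tuple(p))
    return max(cands, key=lambda p: p[0] ** 2 + p[1] ** 2) if cands else None
```

If the box touches the circle at all, the maximum norm is 1 at any crossing point; otherwise the winner is the largest-norm box vertex.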

I do not see any “reasonable” objective function for your desires which would be convex (for minimization).

Thank you. Do you have advice on which tool I should use? Maybe I should just google "concave programming problem".

I’m thinking of doing gradient descent with barrier functions to account for the box constraint. If another tool can do the same, I might be able to save some time.
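A minimal sketch of that barrier idea (my own illustration, untested against the real problem; `barrier_ascent`, the value of `mu`, the step size, and the backtracking rule are all assumptions):

```python
# Gradient ascent on x'x with log-barrier terms for the box and disc.
# A local method: it can stall at a point that is not the global maximum,
# and the barrier makes the dynamics stiff near the boundary.
def barrier_ascent(x0, alpha, beta, mu=1e-3, step=0.05, iters=500):
    def feasible(x):
        return (all(a < xi < b for xi, a, b in zip(x, alpha, beta))
                and sum(xi * xi for xi in x) < 1.0)
    x = list(x0)
    assert feasible(x), "start strictly inside the feasible region"
    for _ in range(iters):
        n2 = sum(xi * xi for xi in x)
        # ascent direction: gradient of x'x plus barrier gradients
        grad = [2.0 * xi
                + mu * (1.0 / (xi - a) - 1.0 / (b - xi))   # box log barriers
                - mu * 2.0 * xi / (1.0 - n2)               # disc log barrier
                for xi, a, b in zip(x, alpha, beta)]
        t = step
        cand = [xi + t * gi for xi, gi in zip(x, grad)]
        while not feasible(cand):                          # backtrack to stay inside
            t *= 0.5
            cand = [xi + t * gi for xi, gi in zip(x, grad)]
        x = cand
    return x
```

By construction the iterate stays strictly feasible, but there is no guarantee it finds the global optimum.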

I think you want to use a global nonlinear optimizer. For instance, BMIBNB under YALMIP (which in turn needs a local nonlinear solver, such as FMINCON, to call) or BARON. Perhaps COUENNE or SCIP global solver would work as well. Depending on the values of alpha and beta, there may be many global optima. This is actually quite an easy problem, especially in 2 dimensions.

If you use a local nonlinear solver, you may find a local optimum which is not globally optimal, and that might not meet your needs. You are especially likely to be led astray with a starting value of x = zeros(n,1): the solver can get stuck there because the origin is the global minimum of x'*x and its gradient is zero, so a check of first-order optimality conditions alone can erroneously report it as a local optimum even though it is not even a local maximum.
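The zeros(n,1) trap is easy to reproduce. A toy projected-gradient-ascent sketch (mine, not from the thread; the clip-then-scale "projection" is only approximate and assumes the box contains the origin):

```python
# Toy projected gradient ascent on f(x) = x'x over box ∩ disc.
# Starting from zeros, the gradient is zero and the iterate never moves.
def project(x, alpha, beta):
    x = [min(max(xi, a), b) for xi, a, b in zip(x, alpha, beta)]  # clip to box
    norm = sum(xi * xi for xi in x) ** 0.5
    if norm > 1.0:
        x = [xi / norm for xi in x]                                # scale into disc
    return x

def ascend(x0, alpha, beta, step=0.1, iters=200):
    x = list(x0)
    for _ in range(iters):
        grad = [2.0 * xi for xi in x]  # gradient of x'x
        x = project([xi + step * gi for xi, gi in zip(x, grad)], alpha, beta)
    return x

alpha, beta = [-0.5, -0.5], [0.9, 0.9]
print(ascend([0.0, 0.0], alpha, beta))  # stays at the origin
print(ascend([0.1, 0.1], alpha, beta))  # climbs to the disc boundary
```

From any nonzero start the iterate climbs to the boundary, but from the origin it reports a stationary point that is actually the global minimum.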

I did try BMIBNB in YALMIP which worked decently well for 5 dimensions, but I want to scale this up to 400 dimensions.

I ended up coding a really dirty Monte Carlo-type setup that converges pretty fast (~4 seconds) for up to 600 dimensions. I’m taking advantage of the fact that my bounding box is not gigantic. I’m not proud of it, but it suits my needs for now.
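That kind of scheme can be sketched roughly like this (my own reconstruction, not the poster's actual code; it works in any dimension `n = len(alpha)`):

```python
import random, math

# Rough Monte Carlo sketch: sample uniformly in the box, keep samples inside
# the unit ball, track the largest norm seen. Workable when the box is small
# enough that feasible samples are common. Returns (None, 0.0) if no feasible
# sample was found.
def mc_farthest(alpha, beta, samples=20000, seed=0):
    rng = random.Random(seed)
    best, best_n2 = None, -1.0
    for _ in range(samples):
        x = [rng.uniform(a, b) for a, b in zip(alpha, beta)]
        n2 = sum(xi * xi for xi in x)
        if n2 <= 1.0 and n2 > best_n2:
            best, best_n2 = x, n2
    return best, (math.sqrt(best_n2) if best is not None else 0.0)
```

With a fixed seed the result is reproducible; accuracy depends entirely on how many feasible samples land near the true maximizer.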

In fairness, you did say R^2, not R^n. 