Constructing a convex approximation for a nonconvex problem

Hi,
I’m working on a channel maximization problem of the form MAX -log2(1 + p / (d*d)), where p > 0 is a constant and d is the optimization variable. We know that -log2(1 + p / x) is concave with respect to x for x > 0. So I let x = d*d, and the initial problem becomes MAX -log2(1 + p / x), constr(x <= d*d). (Writing the constraint as an inequality rather than x = d*d loses nothing, because the objective is increasing in x, so the constraint is tight at the optimum.)
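
For completeness, the concavity claim can be checked from the second derivative:

$$f(x) = -\log_2\!\left(1 + \frac{p}{x}\right) = \log_2 x - \log_2(x + p), \qquad f''(x) = \frac{1}{\ln 2}\left(\frac{1}{(x+p)^2} - \frac{1}{x^2}\right) < 0 \quad \text{for } x > 0,$$

so f is concave, and also increasing, on x > 0.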

Now the objective is suitable for convex optimization, because we are maximizing a concave function. However, the constraint x <= d*d is nonconvex: d*d is a convex (not concave) function of d, so the feasible set is not convex. For example, (x, d) = (1, 1) and (4, 2) are both feasible, but their midpoint (2.5, 1.5) is not, since 2.5 > 1.5*1.5 = 2.25. So the whole problem is not a convex program.

Because d*d is convex in d, its first-order Taylor expansion at any given point dc is a global under-estimator: d*d >= 2*dc*d - dc*dc for all d. The right-hand side of the original constraint can therefore be replaced by this linear expression, turning the problem into MAX -log2(1 + p / x), constr(x <= 2*dc*d - dc*dc), which is convex. Since 2*dc*d - dc*dc <= d*d, the linearized constraint implies the original one, so this is a conservative (inner) approximation: every point feasible for the new problem is also feasible for the original.
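
As a concrete illustration, here is a minimal SCA loop for this construction in CVX. All the numbers (p, the bound d_max, the starting point dc) are made-up placeholders; the real problem presumably has its own constraints, without which d would grow unboundedly. The objective is entered in DCP form using the identity -log2(1 + p/x) = log2(1 - p/(x + p)).

```matlab
p     = 1;    % constant power (assumed value)
d_max = 10;   % hypothetical upper bound on d, just to keep the toy problem bounded
dc    = 1;    % linearization point; start from any feasible guess

for k = 1:20
    cvx_begin quiet
        variables x d
        % -log2(1 + p/x) == log2(1 - p/(x+p)); inv_pos keeps this DCP-compliant
        maximize( log( 1 - p*inv_pos(x + p) ) / log(2) )
        subject to
            x <= 2*dc*d - dc^2;   % linearized surrogate for x <= d*d
            0 <= d <= d_max;
    cvx_end
    dc = d;                       % re-linearize at the new iterate
end
fprintf('SCA value: %.6f at d = %.4f\n', -log2(1 + p/d^2), d);
```

Because the surrogate is tight at dc (2*dc*dc - dc*dc = dc*dc) and under-estimates d*d everywhere else, every iterate is feasible for the original problem and the objective value is nondecreasing across iterations, which is the usual SCA behavior.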

My question is: after this series of transformations, how different can the optimal solution of the transformed problem be from that of the original problem, and what causes the difference?

If you approximate the problem, the solution may be different. You’ll find out how different by solving it with and without the approximation (the non-convex problem will have to be solved with something other than CVX).
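
For a small instance like the toy problem above, the “without the approximation” answer can be obtained by brute force over d, which at least quantifies the gap on that instance (p and d_max are the same assumed placeholders as before):

```matlab
p = 1;  d_max = 10;                 % same hypothetical constants as above
d_grid = linspace(1e-3, d_max, 100000);
f = -log2(1 + p ./ d_grid.^2);      % original objective, no approximation
[f_best, i] = max(f);
fprintf('grid optimum: %.6f at d = %.4f\n', f_best, d_grid(i));
```

Comparing f_best with the SCA value shows the gap for this instance, though of course it says nothing about other instances.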

That is to say, when I face a nonconvex problem, I construct a convex approximation and solve that instead. But I can’t know the gap between the solution of the approximation and that of the original problem, because I can’t obtain the optimal solution of the nonconvex problem. Is that right?

You can use a global optimization solver for the non-convex problem, for instance via YALMIP.
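
For example, something along these lines with YALMIP’s built-in global branch-and-bound solver BMIBNB (a sketch only: the bounds on d are assumptions, BMIBNB needs finite bounds to branch on, and it calls a local NLP solver such as fmincon underneath):

```matlab
p = 1;
d = sdpvar(1,1);
Constraints = [0.5 <= d <= 10];                   % hypothetical bounds; BMIBNB requires them
Objective   = (log(d^2) - log(d^2 + p))/log(2);   % identical to -log2(1 + p/d^2)
optimize(Constraints, -Objective, sdpsettings('solver','bmibnb'));  % YALMIP minimizes, so negate
fprintf('global value: %.6f at d = %.4f\n', value(Objective), value(d));
```

The globally optimal value obtained this way can then be compared with the value from the convex approximation to answer the original question for a given instance.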