This appears to be concave for any n, and Maple confirms concavity for the particular values of n I tried. Is there a conic formulation? It seems tough. Note that if it is formulated as a sum of log terms, no individual log term is concave, so it seems especially tricky.

If this can be extended to n >= 2, it looks tricky, if possible at all: each additive log term is not concave, so it seems the whole function will have to be dealt with at once. Moreover, for large enough values of x (some amount greater than 1), the whole function is not concave, which will complicate any formulation. I also believe that for n >= 2 the argument of the log is not concave, which suggests to me that the approach in the first paragraph is a dead end for n >= 2.

This is not the kind of problem I have any interest in whatsoever. It makes no sense to try to convert this to conic form. This is one of those times when we need to resurrect a smooth convex solver, like MOSEK's original.

Speaking of conic formulations, or even reformulations: whenever I pass CVX a problem using its library of functions, like $norm(\textbf{A}(x))$ subject to constraints, I believe CVX parses it and transforms it to conic form. Is it possible to somehow retrieve the normal equations (I mean their coefficient matrices) CVX is actually passing to the solver, or perhaps to see what the data matrices in conic form look like?

It is our experience that there are better algorithms for conic optimization than for general convex optimization.

The nonsymmetric conic solver in MOSEK 9 is definitely more reliable than the general convex solver. I think most solver developers will agree that general convex optimization is tractable in theory, but less so in practice.


See CVX/MOSEK problem format. This may not be a very “friendly” output from your high-level math modeler's point of view. I cannot clarify anything beyond what is in the link.
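To give a feel for what those data matrices look like, here is a hand-worked sketch (pure Python, not CVX's actual internal output — the function name and the (c, G, h, A_eq, b_eq) layout are my own choices, modeled on the common conic-solver convention h - Gz ∈ cone) of how a norm problem becomes conic data: minimize ||Ax||_2 subject to sum(x) = 1 is rewritten as minimize t subject to sum(x) = 1 and (t, Ax) in the second-order cone.

```python
def conic_data(A):
    """Assemble conic-form data for: minimize ||A x||_2 s.t. sum(x) = 1.
    Solver variable is z = (x, t), length n+1; cone constraint is
    h - G z = (t, A x) in the second-order cone Q^{m+1}."""
    m = len(A)       # rows of A
    n = len(A[0])    # cols of A
    c = [0.0] * n + [1.0]                     # objective: minimize t
    A_eq = [[1.0] * n + [0.0]]                # equality block: sum(x) = 1
    b_eq = [1.0]
    G = [[0.0] * n + [-1.0]]                  # first cone coordinate: t
    for i in range(m):
        G.append([-A[i][j] for j in range(n)] + [0.0])  # remaining: A x
    h = [0.0] * (m + 1)
    return c, G, h, A_eq, b_eq

c, G, h, A_eq, b_eq = conic_data([[1.0, 2.0], [3.0, 4.0]])
print(len(c), len(G), len(G[0]))   # 3 variables (x1, x2, t); 3 cone rows of width 3
```

The point is only that the solver never sees `norm` at all — just these stacked matrices plus a list of cone dimensions.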

But in all seriousness: I fully acknowledge that the latest wisdom is that conic solvers are superior to smooth solvers. But conic solvers as currently designed are limited to a fixed set of cones. This is great if your model is readily converted to utilize those cones, and I know that the MOSEK team has done an amazing job of uncovering conic formulations for a lot of models I was not expecting would be compatible.

What I’d be curious to see, however, is the ability to experiment with a cone defined by the perspective of a smooth convex function $f$; that is, $K_f = \operatorname{cl}\{(x,t,s) : t > 0,\ t\, f(x/t) \le s\}$.
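As a concrete instance of that construction: taking $f(x) = e^x$, the perspective cone is exactly the exponential cone. A small membership check (a toy sketch in pure Python; the function name and tolerance are mine):

```python
import math

def in_perspective_exp_cone(x, t, s, tol=1e-9):
    """Membership test for cl{(x, t, s) : t > 0, t*exp(x/t) <= s},
    the perspective cone of f(x) = exp(x), i.e. the exponential cone."""
    if t > tol:
        return t * math.exp(x / t) <= s + tol
    # closure at t = 0: need x <= 0 and s >= 0
    return abs(t) <= tol and x <= tol and s >= -tol

print(in_perspective_exp_cone(0.0, 1.0, 1.0))   # t*exp(x/t) = 1 <= 1 -> True
print(in_perspective_exp_cone(1.0, 1.0, 2.0))   # e ~ 2.718 > 2 -> False
```

A solver built around an oracle for a smooth $f$ could in principle supply this cone generically, which is exactly the experiment being asked for.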

You cannot just do optimization algorithms for arbitrary convex functions. Well, you can, because you can use the universal barrier function suggested by Nesterov and Nemirovski, but unfortunately nobody knows how to deal with it in practice, as far as I know. Indeed, as I recall, the universal barrier involves an integral.

So, to summarize: in practice you need to study appropriate convex sets that admit a self-concordant barrier function you can compute with in polynomial time. This implies the sets need to have a very well-defined, explicit structure, because otherwise you have very little chance of proving self-concordance and computing whatever you need.

I have stopped chasing unicorns and plan to spend the rest of my career doing this efficiently for specific convex sets, which happen to be cones.

Something that has also surprised me: you would think there would be an infinite number of interesting convex sets. Of course there is, but they do not seem to materialize in practice.

This is of course a personal view, and it primarily applies to interior-point methods; I know very little about first-order methods.