Many conic reformulation tricks have been used in this forum, as well as presented in the Mosek Modeling Cookbook. If the optimization modeling system tools were smarter, the user could be “less disciplined” and, in many cases, still be able to successfully use conic optimization solvers.
Perhaps someone could incorporate these reformulations into a next-generation CVX- or CVXPY-like tool that makes use of symbolic processing, along the lines of computer algebra systems such as Maple, Mathematica, and SymPy. If done well enough, such a tool could actually come up with formulations beyond the capabilities of even the most clever people on this forum. This new tool could incorporate heavy-duty computer-algebra technology as part of its arsenal. On the flip side, such a system might be able to prove non-convexity in many instances.
The easiest step would be to create additional higher-level functions, augmenting those currently in CVX or CVXPY, while still placing the burden on the user to recognize the need for, and applicability of, these functions. But I believe a tool could be developed that would automatically detect and recognize many structures, alone or in combination, and reformulate accordingly. With heavy-duty “computer algebra” under the hood, these abilities could exceed those of humans in many cases.
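As a small concrete example of the kind of structure detection meant here (a numerical sketch, not any existing tool's code): a constraint like x^2/y &lt;= t with y &gt; 0 is not recognizable by naive DCP division rules, yet it is exactly membership of (t/2, y, x) in the rotated quadratic cone 2uv &gt;= w^2, u, v &gt;= 0, which is the reformulation such an engine would be expected to find automatically. The check below verifies the equivalence numerically:

```python
import numpy as np

def quad_over_lin_leq(x, y, t):
    """Direct check of the constraint x^2 / y <= t, assuming y > 0."""
    return x**2 / y <= t

def rotated_cone_member(u, v, w):
    """Membership in the rotated quadratic cone: 2*u*v >= w^2, u >= 0, v >= 0."""
    return u >= 0 and v >= 0 and 2 * u * v >= w**2

# The reformulation:  x^2/y <= t  <=>  (t/2, y, x) in the rotated cone.
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.uniform(-5, 5)
    y = rng.uniform(0.1, 5)
    t = rng.uniform(-1, 10)
    assert quad_over_lin_leq(x, y, t) == rotated_cone_member(t / 2, y, x)
print("equivalence verified on 1000 random samples")
```

In CVXPY this particular pattern already exists as the `quad_over_lin` atom, but the user must know to reach for it; the point of the proposed tool is to spot such patterns without being told.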
I have given some thought to how to realize this idea. My TODO list literally says:
- Fill in the details to make sure this is realistic.
- Pitch the project idea to academia.
Let me know if you find a Ph.D. candidate who thinks this is interesting, and I am up for a meeting.
@hfriberg The tool should also (optionally) output code for the reformulated problem, perhaps with an explanation of what it did.
It is NP-hard to decide convexity of functions (https://arxiv.org/abs/1012.1908), and hence most likely also NP-hard to determine whether a DCP formulation exists or not. This suggests that the developed symbolic processing engine could potentially be very time-consuming. On top of that, identifying a DCP formulation is absolutely necessary for solving the problem in cvx, which further implies that this is not a feature you want to repeat and rely on from run to run. (I could easily imagine a reduction found in v1 of this engine being missed in v2 for no other reason than a reordering of operations, and that you would get spammed with questions on this forum about things that used to work but now don't.)
For this reason I suggest forcing (or strongly nudging) the user through an intermediate format that cvx has no trouble parsing. This could potentially be code, as you suggest, but that has some disadvantages one should keep in mind:
- hard to parse back into the symbolic engine if you wanted to operate on it for some reason.
- hard to stay backwards compatible with.
- hard to port should the user change programming language.
I have no firm thoughts on what this intermediate format should look like, but it just sounds like a nice feature that one never has to regenerate the DCP formulation once a successful reformulation has been found.
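As a purely hypothetical illustration of such an intermediate format (every field name below is invented for this sketch; no existing tool uses this schema), a language-neutral serialization could record the discovered conic reformulation together with the expression it was derived from, so it can be cached, versioned, parsed back into the engine, and ported across host languages:

```python
import json

# Hypothetical intermediate representation of a reformulated problem:
# minimize t subject to x^2 / y <= t, encoded via a rotated quadratic cone.
# All field names are invented for illustration only.
reformulation = {
    "format_version": "0.1",
    "variables": ["x", "y", "t"],
    "objective": {"sense": "minimize", "expr": "t"},
    "cones": [
        {
            "type": "rotated_quadratic",
            "members": ["t/2", "y", "x"],
            "derived_from": "x^2 / y <= t",
        }
    ],
}

# A text-based serialization survives a round trip unchanged, so the
# reformulation never has to be re-derived from run to run.
restored = json.loads(json.dumps(reformulation))
assert restored == reformulation
```

A declarative format like this would sidestep all three bullet points above: it is trivially machine-parseable, can carry a version field for backwards compatibility, and is not tied to any one programming language.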
I was trying to suggest that users would benefit if they could see the reformulation which was used. They could then apply the reformulation themselves on future problems.
Per Donald Knuth, Richard Bellman supposedly said “If you can solve it, it is an exercise; otherwise it’s a research problem.”
I couldn’t solve it. Hence I suggested it as a research problem. I leave the details to the researchers.
Such a system should figure out when and how to apply Schur complements, among other things.
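The key fact such a system would exploit is the standard Schur complement lemma: for P positive definite, the block matrix [[P, x], [xᵀ, t]] is positive semidefinite if and only if t >= xᵀ P⁻¹ x, which turns a nonlinear inverse-quadratic constraint into a linear matrix inequality. A numerical sanity check of the equivalence (a sketch, not production code):

```python
import numpy as np

def schur_psd_equivalent(P, x, t, tol=1e-9):
    """For P positive definite, check both sides of the Schur complement fact:
    [[P, x], [x^T, t]] is PSD  <=>  t >= x^T P^{-1} x."""
    n = len(x)
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = P
    M[:n, n] = x
    M[n, :n] = x
    M[n, n] = t
    lhs = np.linalg.eigvalsh(M).min() >= -tol       # block matrix PSD
    rhs = t >= x @ np.linalg.solve(P, x) - tol      # scalar inequality
    return lhs == rhs

rng = np.random.default_rng(1)
for _ in range(200):
    A = rng.normal(size=(3, 3))
    P = A @ A.T + np.eye(3)          # positive definite by construction
    x = rng.normal(size=3)
    t = rng.uniform(-2.0, 10.0)
    assert schur_psd_equivalent(P, x, t)
```

A reformulation engine applying this automatically would let users write constraints like t >= xᵀ P⁻¹ x directly and receive the LMI form needed by a semidefinite solver.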
One of the more interesting and useful extensions to Disciplined Convex Programming which is discussed in @mcg 's thesis, but not subsequently implemented in CVX, is Perspective Transformations, discussed in section 5.2.2. Certainly, that should be included in a next-generation system.
Perspective transformations have been handled for the conic case (see Section 7.1.1, "Perspective functions," of the Mosek Modeling Cookbook mentioned above), but not for the general convex case. If the improved DCP rules are based on that, they might be limited to modeling cone-representable problems only, rather than modeling convex problems using any existing convex atom functions (conic or not), as the old rules did.
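For readers unfamiliar with the construction: the perspective of a convex function f is g(x, t) = t·f(x/t) for t &gt; 0, and g is jointly convex in (x, t). For example, the perspective of f(x) = x² is x²/t. The following sketch spot-checks that joint convexity numerically via the midpoint inequality:

```python
import numpy as np

def perspective(f, x, t):
    """Perspective of f: g(x, t) = t * f(x / t), jointly convex in (x, t)
    for t > 0 whenever f is convex."""
    return t * f(x / t)

f = lambda x: x**2   # convex base function; its perspective is x^2 / t

# Midpoint convexity check of g(x, t) = x^2 / t on random pairs of points.
rng = np.random.default_rng(2)
for _ in range(500):
    x1, x2 = rng.uniform(-5, 5, size=2)
    t1, t2 = rng.uniform(0.1, 5, size=2)
    mid = perspective(f, (x1 + x2) / 2, (t1 + t2) / 2)
    avg = (perspective(f, x1, t1) + perspective(f, x2, t2)) / 2
    assert mid <= avg + 1e-9
```

This is exactly why a general-convex (not merely conic) perspective rule would be valuable: it preserves convexity for any convex atom f, whether or not f has a known cone representation.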