So what I am trying to do is to minimize the difference between the Frobenius norm of a PSD matrix and a real positive value, which can be formulated as \min \left(\|\mathbf{P}\|_F - J\right)^2
where \mathbf{P} is the PSD matrix to be optimized and J is a real positive constant. The problem is that this minimization fails the disciplined convex programming (DCP) rules; for example, if we formulate something like
pow_p(norm(P, 'fro') - J, 2)
So I want to know whether we can express this as a convex formulation that CVX accepts. Is that possible? Or, if it's not convex, is there a way to transform or approximate this objective?
That is non-convex, as can easily be seen by example even in one dimension, norm(P) - J, with or without squaring, where P is a scalar variable.
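The one-dimensional case is easy to check numerically. A minimal sketch (the value J = 2.0 is an arbitrary choice for illustration): with f(p) = (|p| - J)^2, both p = -J and p = J are global minimizers with f = 0, yet their midpoint p = 0 gives f(0) = J^2 > 0, violating the convexity inequality f((p1 + p2)/2) <= (f(p1) + f(p2))/2.

```python
# 1-D check that f(p) = (|p| - J)^2 is non-convex.
# J = 2.0 is an arbitrary illustrative value.
J = 2.0
f = lambda p: (abs(p) - J) ** 2

p1, p2 = -J, J            # both are global minimizers, f = 0
mid = 0.5 * (p1 + p2)     # midpoint is 0

print(f(p1), f(p2))       # 0.0 0.0
print(f(mid))             # 4.0 -- larger than the average of the endpoints,
                          # so the convexity inequality fails
```

The same reasoning carries over to the matrix case, since restricting a convex function to a line must again give a convex function.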
It’s your problem, so you would have to determine what, if any, would be an adequate convex approximation, which would be out of scope for this forum.
If this is the problem you actually want to solve (though you need not square the objective function), you could solve it with a non-convex solver, for instance using YALMIP.
In the future, please determine whether a problem is convex before posting, rather than asking the forum readers whether the problem is convex.
Thanks a lot for the answer. Sorry Mark; although I knew it's non-convex, I was just trying my luck in case there was a trick I didn't know that could approximate it. I won't post a non-convex problem again.