Convex/nonconvex approximation to F1

Are there any convex/nonconvex approximations to the F1 measure?

Presuming by F1 you mean

F1 = 2*precision*recall/(precision+recall)

where precision and recall are both in [0,1], then F1 is a concave function of precision and recall; its Hessian has one negative eigenvalue and one eigenvalue equal to zero. Because it is also log-concave as a function of precision and recall, you can do something with it in CVX. See CVX's rules for log-convex and log-concave functions at Log of sigmoid function.
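As a quick check of that Hessian claim (a symbolic verification only, assuming the Symbolic Math Toolbox is available; it has nothing to do with the CVX model itself), writing p and r for precision and recall:

syms p r positive
F1 = 2*p*r/(p+r);
H = hessian(F1, [p r])   % = 4/(p+r)^3 * [-r^2, p*r; p*r, -p^2]
simplify(eig(H))         % eigenvalues: 0 and -4*(p^2+r^2)/(p+r)^3, i.e., one zero and one negative

A nonnegative concave function is log-concave, which is what lets CVX's gp mode handle F1 at all.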

If you wish to maximize F1 with respect to precision and recall, or have it appear in a constraint of the form F1 >= some number, then CVX could be used in Geometric Programming (gp) mode. If, however, you wanted to minimize F1, then CVX could not be used. If your optimization variables are not precision and recall, but rather some underlying quantities that determine them, then you would have to supply more information to determine whether CVX is applicable.

For example:

cvx_begin gp
    variables precision recall
    maximize(precision*recall/(precision+recall)) % F1 up to the constant factor 2, which does not affect the maximizer
    precision <= 0.9 % or whatever constraints comply with CVX's gp-mode rules (gp variables are implicitly positive)
cvx_end

or

cvx_begin gp
    variables precision recall
    minimize(precision+recall) % or whatever objective complies with CVX's gp-mode rules
    % the .1 below is just an example, but it must be on the right-hand side to be accepted by CVX
    precision*recall/(precision+recall) >= .1
    precision <= 0.9 % or whatever constraints comply with CVX's gp-mode rules (gp variables are implicitly positive)
cvx_end

These are just examples to show the kind of thing that CVX can accept and solve; I'm not saying the examples above make sense as models.
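For what it's worth, after cvx_end you can inspect the result in the usual CVX way (variable names as in the examples above):

cvx_status            % e.g. 'Solved' or 'Infeasible'
cvx_optval            % optimal value of the objective
[precision recall]    % CVX replaces the variables with their optimal numeric values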

To expand on the comment that the .1 (for example) must be on the right-hand side, here are CVX's pertinent rules:

Positive constants are log-affine, log-concave, and log-convex.
Sums of log-affine and/or log-convex expressions are log-convex. Sums of log-concave expressions are not permitted. Note that even if every term of the sum is log-affine, the resulting sum is log-convex.

So

precision*recall/(precision+recall) >= .1

is allowed. But the mathematically equivalent

precision*recall/(precision+recall) - .1 >= 0

is not.
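
One way to see why the placement matters: gp mode works with the logarithms of the variables. Writing p = precision, r = recall, u = log p, v = log r, the accepted constraint transforms into a convex constraint, while the rejected form has no such transformation (a sketch):

\[
\frac{pr}{p+r} \ge 0.1
\quad\Longleftrightarrow\quad
u + v - \log\left(e^{u} + e^{v}\right) \ge \log 0.1 .
\]

The left-hand side is concave in (u, v) because log-sum-exp is convex, so this is a legitimate convex constraint. Moving the .1 to the left leaves a difference, which is not log-concave (and the 0 on the right has no logarithm), so CVX has no way to express it.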