Because p_1 and p_2 appear only as p_1^2 and p_2^2, the squared terms can be replaced by variables p1sq and p2sq,
declared as
variables p1sq p2sq
The constraints then become
p1sq + p2sq == 1
0 <= p1sq <= Pmax
0 <= p2sq <= Pmax
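In CVX, the substituted formulation would look something like the sketch below. Pmax = 0.8 and the linear objective are made-up placeholders, since I don't know your actual data or objective (which you would rewrite in terms of p1sq and p2sq).

% Sketch only: Pmax and the objective are placeholders, not your actual problem data
Pmax = 0.8;
cvx_begin
    variables p1sq p2sq
    maximize( p1sq - 2*p2sq )   % stand-in for your actual objective
    subject to
        p1sq + p2sq == 1;
        0 <= p1sq <= Pmax;
        0 <= p2sq <= Pmax;
cvx_end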
This formulation would be a Linear-Fractional Programming problem, were it not for the (additive) non-fractional term involving p2sq in the objective function. Linear-Fractional Programs can be formulated and solved in CVX via the transformation described in section 4.3.2 “Linear-fractional programming” of http://stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf . I leave it to you whether the Linear-Fractional Programming reformulation can be adapted to this problem.
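For reference, in the book's notation (not your problem data), that transformation replaces the linear-fractional program

minimize   (c^T x + d)/(e^T x + f)
subject to G x <= h,  A x = b

(with e^T x + f > 0 on the feasible set) by the equivalent LP in new variables y = x/(e^T x + f) and z = 1/(e^T x + f):

minimize   c^T y + d z
subject to G y <= h z,  A y = b z,  e^T y + f z = 1,  z >= 0

The optimal x is then recovered as x = y/z (assuming z > 0 at the optimum).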
If not, and if you are willing to try an iterative procedure that calls CVX repeatedly, and is essentially a local optimization algorithm with limited guarantee of convergence, you can try How to handle nonlinear equality constraints? . To apply this procedure to your problem, you would write two inequality constraints
p_1^2 + p_2^2 <= 1
p_1^2 + p_2^2 >= 1
The first of these can be entered directly into CVX. The second could be handled by the convex-concave procedure described in the link. A more detailed paper co-authored by Stephen Boyd is Variations and extension of the convex–concave procedure . There are examples of the convex-concave procedure using CVX to solve the convex subproblems at https://web.stanford.edu/~boyd/software/cvx_ccv_examples/ .
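To give the flavor, here is a rough penalty-style convex-concave sketch for this pair of constraints, along the lines of those references. The concave side of p_1^2 + p_2^2 >= 1 is replaced by its linearization about the current iterate, relaxed by a penalized slack. The starting point, Pmax = 0.8, the penalty schedule, and the objective p1 - 2*p2 are all made-up placeholders, not taken from your problem.

Pmax = 0.8;                      % placeholder value
p1_k = 0.5;  p2_k = 0.5;         % arbitrary starting point
tau  = 1;                        % initial penalty weight on the slack
for iter = 1:25
    cvx_begin quiet
        variables p1 p2 s
        maximize( p1 - 2*p2 - tau*s )   % stand-in objective minus slack penalty
        subject to
            p1^2 + p2^2 <= 1;           % convex side, entered directly
            % concave side p1^2 + p2^2 >= 1, linearized about (p1_k, p2_k)
            % and relaxed by the slack s
            2*p1_k*p1 + 2*p2_k*p2 - (p1_k^2 + p2_k^2) >= 1 - s;
            s >= 0;
            p1^2 <= Pmax;
            p2^2 <= Pmax;
    cvx_end
    p1_k = p1;  p2_k = p2;       % re-linearize at the new point
    tau  = min(2*tau, 1e4);      % gradually increase the penalty
end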
However, you may be better off using a non-convex solver. Given that your problem has only 2 variables, it might be easily solvable to provable global optimality by a branch-and-bound global optimizer, such as BARON or YALMIP’s BMIBNB, both of which can be called from MATLAB.
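For instance, a BMIBNB version could look something like the following, again with Pmax = 0.8 and a stand-in objective in place of your actual one. (BMIBNB also needs a local nonlinear solver, such as fmincon, and an LP solver installed.)

p1 = sdpvar(1,1);  p2 = sdpvar(1,1);
Pmax = 0.8;                                   % placeholder value
Constraints = [p1^2 + p2^2 == 1, ...
               p1^2 <= Pmax, p2^2 <= Pmax, ...
               -1 <= p1 <= 1, -1 <= p2 <= 1]; % explicit bounds help branch and bound
Objective = -(p1 - 2*p2);                     % stand-in; optimize() minimizes
options = sdpsettings('solver','bmibnb');
optimize(Constraints, Objective, options);
p1_opt = value(p1);  p2_opt = value(p2);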