Invalid quadratic form(s): not a square in a convex problem

I am trying to solve the following problem using CVX:
$$
\begin{aligned}
\underset{\mathbf{x}}{\text{minimize}} \quad & -x_1^2 + 2x_1x_2 + 2x_2^2 - 3x_1 + x_2 \\
\text{subject to} \quad & x_1 + x_2 = 1 \\
& x_1 \geq 0 \\
& x_2 \geq 0
\end{aligned}
$$

The code is:

    cvx_begin
        variable x(2,1);
        dual variables y1 y2 y3;
        minimize( -x(1)^2 - 2*x(1)*x(2) - 2*x(2)^2 + 3*x(1) - x(2) );
        subject to
            y1 : x(1) + x(2) == 1;
            y2 : x(1) >= 0;
            y3 : x(2) >= 0;
    cvx_end

I get the error:

    Error using .* (line 262)
    Disciplined convex programming error:
        Invalid quadratic form(s): not a square.

The same happens if I do a maximization, i.e., if I minimize $x_1^2-2x_1x_2-2x_2^2+3x_1-x_2$. Both problems are convex, since the Hessian of the objective function is positive definite, the equality constraint is affine, and the inequality constraints are convex.

The Hessian of the objective in your displayed problem (as opposed to your code) is [-2 2; 2 4], which has one positive and one negative eigenvalue, hence is indefinite, so the objective is not convex.

The Hessian of the objective in your code is [-2 -2; -2 -4], which has two negative eigenvalues, hence is negative definite, so the objective is concave and cannot be minimized in CVX. You could maximize it if you use quad_form.
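Those eigenvalue claims are quick to verify numerically: for a symmetric 2x2 matrix [[a, b], [b, c]], the eigenvalues are (a+c)/2 +/- sqrt(((a-c)/2)^2 + b^2). A small Python sketch (illustrative only, not part of the CVX model; the helper name is mine):

```python
import math

def eig2x2_sym(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    mean = (a + c) / 2
    rad = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    return mean - rad, mean + rad

# Hessian of the displayed problem's objective: one negative and one
# positive eigenvalue, so it is indefinite (objective not convex).
print(eig2x2_sym(-2, 2, 4))

# Hessian of the objective in the posted code: both eigenvalues
# negative, so it is negative definite (objective concave).
print(eig2x2_sym(-2, -2, -4))
```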

I mistyped the objective in the code, so let's ignore it to avoid confusion. I just realized that the first function is not convex, but I disagree that the same is true for the second one. Basically, here is the MATLAB code:

    syms x1 x2;
    f_function = x1^2 + 2*x1*x2 + 2*x2^2 - 3*x1 + x2;
    f_gradient = gradient(f_function, [x1, x2])

    % Transpose the gradient to match theory conventions
    f_gradient = f_gradient.';

    % Hessian matrix for the second-order Taylor approximation
    f_hessian = hessian(f_function, [x1, x2])

    % Is the function convex? (isAlways resolves the symbolic comparisons)
    all(isAlways(eig(f_hessian) > 0))

This produces a Hessian [2 2; 2 4]. Am I doing something wrong here?

My assessment of what was posted in your original post is correct. The f_function whose Hessian you have computed in the immediately preceding post is yet another function: specifically, it is the negative of the function in your original post's code, which I said was concave. So yes, f_function is convex.
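That conclusion can also be double-checked with Sylvester's criterion, which avoids computing eigenvalues: a symmetric 2x2 matrix is positive definite iff its top-left entry and its determinant are both positive. A small Python check (the helper name is mine):

```python
def is_pd_2x2(a, b, c):
    """Sylvester's criterion for the symmetric matrix [[a, b], [b, c]]:
    positive definite iff a > 0 and det = a*c - b*b > 0."""
    return a > 0 and a * c - b * b > 0

# Hessian of f_function, [2 2; 2 4]: 2 > 0 and det = 2*4 - 2*2 = 4 > 0.
print(is_pd_2x2(2, 2, 4))

# Hessian of the displayed problem, [-2 2; 2 4]: fails the first minor test.
print(is_pd_2x2(-2, 2, 4))
```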

I believe if you use

    minimize( 1/2*quad_form(x,[2 2;2 4]) - 3*x(1) + x(2) )

for the objective line, that will do what you want for the f_function version of your objective function. Or you could write the linear portion as [-3 1]*x.
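As a sanity check of that rewrite outside CVX: quad_form(x, H) is x'Hx, so one half of it with H = [2 2; 2 4] reproduces the quadratic terms of f_function exactly. A plain-Python sketch of the equivalence (function names are mine):

```python
def f_direct(x1, x2):
    """f_function written out term by term."""
    return x1**2 + 2*x1*x2 + 2*x2**2 - 3*x1 + x2

def f_quadform(x1, x2):
    """The same function as (1/2) * x'Hx plus the linear part,
    with H = [[2, 2], [2, 4]]."""
    H = [[2, 2], [2, 4]]
    v = [x1, x2]
    quad = sum(H[i][j] * v[i] * v[j] for i in range(2) for j in range(2))
    return 0.5 * quad - 3*x1 + x2

# The two forms agree at a few sample points.
for pt in [(0.0, 0.0), (1.0, 0.0), (0.3, 0.7), (-1.5, 2.0)]:
    assert abs(f_direct(*pt) - f_quadform(*pt)) < 1e-12
print("match")
```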

You are correct. Now, I have tried to solve the following:

    cvx_begin
        variable x(2,1);
        dual variables y1 y2 y3;
        minimize( x(1)^2 + 2*x(1)*x(2) + 2*x(2)^2 - 3*x(1) + x(2) );
        subject to
            y1 : x(1) + x(2) == 0;
            y2 : x(1) >= 0;
            y3 : x(2) >= 0;
    cvx_end

but it produces the same error. I also tried replacing the minimization line with

    minimize( 1/2*quad_form(x,[2 2;2 4]) - 3*x(1) + x(2) )

as you suggested, but that didn't work either.

It works for me. What exactly happened when you tried it?

Let me point out that you have once again changed the problem. What you have posted immediately above has the constraint x(1) + x(2) == 0, which, combined with your other constraints x(1) >= 0 and x(2) >= 0, means that x(1) = x(2) = 0 is the only feasible point, and is therefore optimal. I am guessing you really intended the original version, x(1) + x(2) == 1.
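The degeneracy is easy to see: nonnegative numbers that sum to zero must both be zero. A trivial check of the unique feasible point and its objective value (plain Python, not CVX):

```python
# With x(1) + x(2) == 0, x(1) >= 0, x(2) >= 0, the only feasible point is
# (0, 0): if either coordinate were positive, the sum would be positive.
x1, x2 = 0.0, 0.0
assert x1 + x2 == 0 and x1 >= 0 and x2 >= 0  # (0, 0) is feasible

# Objective value of f_function at the unique (hence optimal) feasible point:
value = x1**2 + 2*x1*x2 + 2*x2**2 - 3*x1 + x2
print(value)
```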

In any event, you seem to have rather severe quality-control issues in the description and entry of your problems, so perhaps you should start with a clean session and enter and check everything carefully.
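For completeness: the term-by-term objective that triggered the DCP error can also be written in DCP-compliant form by completing the square, since x1^2 + 2*x1*x2 + 2*x2^2 = (x1 + x2)^2 + x2^2 is a sum of squares. In CVX that would read something like minimize( square(x(1)+x(2)) + square(x(2)) - 3*x(1) + x(2) ), which is equivalent to the quad_form version above. A quick numeric check of the identity (function names are mine):

```python
def quad_terms(x1, x2):
    """The quadratic terms of f_function, written term by term."""
    return x1**2 + 2*x1*x2 + 2*x2**2

def sum_of_squares(x1, x2):
    """The same quadratic, rewritten by completing the square."""
    return (x1 + x2)**2 + x2**2

# The identity holds at a few sample points.
for pt in [(0.0, 0.0), (1.0, -2.0), (0.5, 0.5), (-3.0, 1.25)]:
    assert abs(quad_terms(*pt) - sum_of_squares(*pt)) < 1e-12
print("identity holds")
```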