Uncertainty / Error bounds for the solution?


I’ve successfully solved a convex problem with CVX, i.e., I found the variable values that best fit my problem. Now I want to know something about their uncertainty.
When fitting “normal” (vector) variables, I calculate the Hessian at the solution, invert that matrix, and get something proportional to the covariance matrix of the solution vector.
Does anybody know how to do this calculation here, where we deal with solution matrices (and not with vectors)?



You can either vec the matrix variable into a vector and calculate the Hessian of that, or deal with the Hessian of a matrix variable directly as a 4-tensor (a matrix is a 2-tensor). Either way, it can get a bit messy.
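A minimal sketch of the first option, written in Python/NumPy for concreteness (the thread is about MATLAB/CVX, so this is an illustration, not the thread's code): flatten the matrix with `ravel` (the vec operation) and compute an ordinary finite-difference Hessian with respect to the flattened coordinates. The objective `f_of_matrix` is a placeholder, chosen so the exact Hessian is known.

```python
import numpy as np

def f_of_matrix(X):
    # Placeholder scalar objective, not from the thread: f(X) = sum of squares
    # of the entries, whose vec-space Hessian is exactly 2*I.
    return np.trace(X @ X.T)

def vec_hessian(f, X0, h=1e-5):
    """Finite-difference Hessian of f with respect to vec(X)."""
    x0 = X0.ravel()
    n = x0.size
    shape = X0.shape
    g = lambda x: f(x.reshape(shape))   # view the matrix function as a vector function
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            # standard second-order difference for the (i, j) entry
            H[i, j] = (g(x0 + ei + ej) - g(x0 + ei) - g(x0 + ej) + g(x0)) / h**2
    return H

X0 = np.eye(2)
H = vec_hessian(f_of_matrix, X0)        # 4x4 Hessian of a 2x2 matrix variable
```

Inverting this (n² × n²) Hessian then gives the covariance in vec coordinates, exactly as in the vector case.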

OK, thank you, Mark.
In my case my CVX variable is a 4×4 Hermitian semidefinite matrix with the additional constraint that its trace is fixed to a constant. Let’s call the best-fitting solution of my problem H.
So I have 15 remaining (real) degrees of freedom, which means I’m looking for a 15×15 Hessian (or covariance) matrix.

But let’s forget about that for a moment and assume I had an unconstrained problem with 16 degrees of freedom, and furthermore that the 16×16 covariance matrix has been found; let’s call it S.

Now I want to generate test candidates Htest with mean H and covariance S.
In MATLAB I would do something like the following (this may be wrong for complex numbers, but anyhow):
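A sketch of that sampling step, written in Python/NumPy for concreteness rather than MATLAB: draw noise in vec space with covariance S, reshape it into a 4×4 perturbation, symmetrize, and clip negative eigenvalues so the candidate stays positive semidefinite (eigenvalue clipping is one standard way to address the semidefiniteness question below). H, S, and the sizes here are placeholders standing in for the thread's actual values.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4
H = np.eye(n) / n                       # placeholder "best fit", trace 1
S = 0.01 * np.eye(n * n)                # placeholder covariance in vec space

def sample_htest(H, S, rng):
    L = np.linalg.cholesky(S)           # S = L L^T
    d = (L @ rng.standard_normal(S.shape[0])).reshape(H.shape)
    d = (d + d.conj().T) / 2            # symmetrize so the sample stays Hermitian
    Htest = H + d
    # project onto the PSD cone: clip negative eigenvalues to zero
    w, V = np.linalg.eigh(Htest)
    Htest = V @ np.diag(np.clip(w, 0, None)) @ V.conj().T
    # restore the fixed-trace constraint by rescaling
    Htest = Htest * (np.trace(H).real / np.trace(Htest).real)
    return Htest

Htest = sample_htest(H, S, rng)
```

Note that clipping followed by rescaling is only one heuristic; it changes the distribution slightly near the boundary of the PSD cone.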

How can I ensure that Htest is positive semidefinite?
Can cvx support me with this problem?

Yeah, something like that for the MATLAB code.

As for the constrained case, you need to look at the projected covariance - see my answer at https://stats.stackexchange.com/questions/7308/can-the-empirical-hessian-of-an-m-estimator-be-indefinite/288076#288076 . However, in your case, if you have solved the parameter estimation problem in CVX, then the objective function must be convex, and therefore its Hessian is positive semidefinite at the solution, unlike the more general non-convex case described in my linked answer.
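For the fixed-trace case specifically, a sketch of what "projected" means in vec coordinates, again in Python/NumPy as an illustration (the Hessian below is a placeholder; in practice it would come from the actual objective at the CVX solution): the constraint trace(X) = c has gradient vec(I), so the Hessian is restricted to the 15-dimensional tangent space orthogonal to that direction.

```python
import numpy as np

n = 4
a = np.eye(n).ravel()                   # constraint gradient in vec space: vec(I)
a = a / np.linalg.norm(a)

# Orthonormal basis Z of the tangent space {v : a @ v = 0}: QR-factor a
# together with the identity and drop the first column (which spans a).
Q, _ = np.linalg.qr(np.column_stack([a, np.eye(n * n)]))
Z = Q[:, 1:n * n]                       # 16 x 15

Hess = 2 * np.eye(n * n)                # placeholder PSD Hessian at the solution
Hp = Z.T @ Hess @ Z                     # 15 x 15 projected Hessian
Cov = np.linalg.inv(Hp)                 # covariance on the constrained space
```

The 15×15 matrix `Hp` is the Hessian restricted to the feasible directions, matching the 15 degrees of freedom counted earlier in the thread.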

Nevertheless, from a statistical standpoint, I don’t think it is meaningful to look at the unprojected Hessian, and even the projected Hessian may be very dubious, since it is only valid in a possibly very small neighborhood of the optimum. You may be better off bootstrapping, in which you invoke CVX to solve the parameter optimization problem for each bootstrap sample and forget about the Hessian entirely.


I’m not sure if I got it right (I’d never heard of bootstrapping before).
You’re suggesting that I vary my (problem) input data according to the statistics I have, solve each problem with CVX, and use the statistics of the resulting set of solutions?
Sounds good to me.
Please correct me if I’m wrong.

Thanks again

Let’s say you have N vector data points. Draw M bootstrap samples, each of which consists of a random sample of N vector data points, drawn with replacement. I.e., in a given bootstrap sample, the ith vector data point chosen will have probability 1/N of being vector data point j, for j from 1 to N. Therefore, a bootstrap sample may have some repeated vector data points, and be missing some vector data points.

For each bootstrap sample, solve the parameter optimization problem using CVX. This provides one sample of the possible vector parameter values, so you end up with M sample parameter vectors. M in the range of 100 to 1000 is typical, but the needed or desired value of M depends on your analysis needs.
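The resampling loop described above can be sketched as follows, in Python/NumPy for concreteness. `solve_fit` is a hypothetical stand-in for "solve the parameter estimation problem with CVX" (here just the sample mean, so the loop is runnable); the data are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def solve_fit(data):
    # Placeholder for the CVX parameter estimation; returns a parameter vector.
    return data.mean(axis=0)

N = 50
data = rng.standard_normal((N, 3))      # N vector data points (synthetic)

M = 200                                 # number of bootstrap samples
estimates = np.empty((M, 3))
for m in range(M):
    idx = rng.integers(0, N, size=N)    # draw N indices with replacement
    estimates[m] = solve_fit(data[idx])

# spread of the M estimates approximates the parameter uncertainty
param_cov = np.cov(estimates, rowvar=False)
```

The empirical covariance of the M estimates then plays the role that the inverted Hessian played in the earlier discussion, without requiring any derivative information.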

OK, thank you for your explanation, Mark.
I got it and will test it.

Best regards