I run a simple SDP program and the output differs from the theoretical value by 0.002. I found out it is because the resulting variable is not actually positive semidefinite, but has a negative eigenvalue of about 1e-6, which leads to a larger error in the end. Changing the solver from SDPT3 to SeDuMi makes it better, but I do not want to check manually every time whether the result is PSD, because this function is part of another optimization problem. Is there any way to increase the accuracy so that the variable fulfills the SDP constraint?

You can guarantee a truly PSD matrix by shifting the constraint by a small number times eye(n), so the solution's eigenvalues are bounded below by that number:

```
M - 1e-5*eye(n) == semidefinite(n)
M - 1e-5*eye(n) >= 0 % equivalent form if in sdp mode
```

Alternatively, it may be possible to actually solve the SDP to a tighter tolerance than the default if you are using Mosek. I was confronted with a similar situation, solving SDP subproblems as part of a top-level algorithm; if the SDPs were not solved accurately enough, it interfered with the SDP extension of the KKT optimality criteria for my overall problem. I could set the negative eigenvalues to zero, but the bigger the adjustment, the more it degraded the top-level algorithm.
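Setting the negative eigenvalues to zero amounts to projecting the solver's output onto the PSD cone (in the Frobenius norm, after symmetrizing). The forum code above is MATLAB/CVX, but here is a minimal NumPy sketch of that cleanup step; the function name `make_psd` and the example matrix are mine, not from the original post:

```python
import numpy as np

def make_psd(M):
    """Project a nearly-PSD matrix onto the PSD cone by
    zeroing its negative eigenvalues."""
    S = (M + M.T) / 2.0          # symmetrize first to remove round-off asymmetry
    w, V = np.linalg.eigh(S)     # eigh handles the symmetric case
    w_clipped = np.maximum(w, 0.0)  # clip negative eigenvalues to zero
    return (V * w_clipped) @ V.T    # rebuild V @ diag(w_clipped) @ V.T

# A matrix that is "almost PSD", with one eigenvalue at -1e-6
# like the solver output described in the question:
A = np.diag([1.0, 0.5, -1e-6])
A_psd = make_psd(A)
```

The larger that clipped eigenvalue is, the larger the perturbation `A_psd - A`, which is exactly why tightening the solver tolerance (below) helps.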

So don't tell @Erling I said this, but with extensive testing for two different types of SDP subproblems I was solving, I found I could set `MSK_DPAR_INTPNT_CO_TOL_PFEAS` to `1e-12`

and reliably get solutions from Mosek. That greatly improved my top-level algorithm's performance vs. using the default value of 1e-8. I still reset the roughly -1e-12 eigenvalues to zero, but that did much less harm than setting -1e-8 eigenvalues to zero. BTW, I had to set small positive eigenvalues to zero as well; the trick was knowing when those were truly zero eigenvalues (i.e., an active constraint, and if active, "how active"), or just very small but genuinely nonzero eigenvalues.
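That last step, treating eigenvalues below some magnitude as exact zeros while keeping the rest, can be sketched with an explicit cutoff. This is my own illustrative NumPy version, and the `tol=1e-10` default is just an example sitting between the tightened solver tolerance (1e-12) and the scale of genuine eigenvalues; in practice the right cutoff is problem-dependent, which was the whole difficulty:

```python
import numpy as np

def clean_eigenvalues(M, tol=1e-10):
    """Zero out eigenvalues with |eig| < tol (treated as active
    constraints, i.e. true zeros); leave the rest untouched."""
    S = (M + M.T) / 2.0
    w, V = np.linalg.eigh(S)
    w[np.abs(w) < tol] = 0.0     # small magnitudes -> exact zeros
    return (V * w) @ V.T

# Two tiny eigenvalues (one slightly negative) get snapped to zero;
# the genuine eigenvalue 2.0 survives.
A = np.diag([2.0, 1e-12, -5e-13])
A_clean = clean_eigenvalues(A)
```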

Specifically, in CVX, specify

`cvx_solver_settings('MSK_DPAR_INTPNT_CO_TOL_PFEAS',1e-12)`