I am using the code below to find a symmetric matrix P that simultaneously satisfies two Lyapunov inequalities for two given matrices A1 and A2, and cvx_status shows ‘Solved’.

Can anyone please help me answer this question? Is it because the optimization problem has an unbounded feasible region in P, that is, the constraints are too few, so the solver fails to find a solution? Thanks.
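For reference, here is a minimal numpy sketch of the kind of simultaneous-Lyapunov feasibility check involved. The matrices A1 and A2 below are hypothetical stand-ins (the original code and data are not shown); for these well-conditioned stable matrices, P = I happens to satisfy both inequalities.

```python
import numpy as np

# Hypothetical stable test matrices (the actual A1, A2 are not shown).
A1 = np.array([[-1.0, 0.5], [0.0, -2.0]])
A2 = np.array([[-3.0, 0.0], [1.0, -1.0]])

# A common quadratic Lyapunov function requires one symmetric P > 0 with
#   A1'*P + P*A1 < 0   and   A2'*P + P*A2 < 0.
P = np.eye(2)

def min_eig(M):
    # Minimum eigenvalue of the symmetric part of M.
    return np.linalg.eigvalsh((M + M.T) / 2).min()

print(min_eig(P))                     # > 0: P is positive definite
print(min_eig(-(A1.T @ P + P @ A1)))  # > 0: first Lyapunov inequality holds
print(min_eig(-(A2.T @ P + P @ A2)))  # > 0: second Lyapunov inequality holds
```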

Can you provide a reproducible example? Have you tried another solver?

Because no objective function is specified, it’s a feasibility problem, and the solver/CVX should return a feasible solution if one exists. So if the first problem is feasible, and the second problem is identical except for the deletion of one constraint, the second problem should be feasible as well; presumably it was not reported as such, which is why you made your post.

Are you sure A1 is unchanged between the problems, for example, not regenerated with new random numbers? Barring that, it sounds like either a bug in CVX or the solver, or the numerics of your problems are bad, for instance, matrix entries of widely disparate orders of magnitude or a high condition number.

Indeed, A1 and A2 have entries spanning a very wide range of magnitudes, and the 2-norm condition numbers of A1 and A2 are respectively 8e15 and 1e17, which are very large and make double-precision calculations rather unreliable. Are your order-1e-5 and 1e-6 entries “legitimate”, and not “rightly” zero?

Both your two- and three-constraint problems appear to be infeasible in my attempts. But such a determination may be unreliable due to the horrible numerics in A1 and A2. If you are really stuck with these matrices, your only hope might be going to higher precision; you can take a look at Quadruple Precision.

Hi Mark, thanks again for your reply, it is really very helpful.

For some unknown reason, the three-constraint problem is feasible on my computer, and the two-constraint problem is infeasible. What could be the possible reason for this, given that both condition numbers are very large? Thank you.

The calculations are unreliable (untrustworthy) with such poorly conditioned problems. If you have access to it, you can try MOSEK, which is more numerically robust than SeDuMi and SDPT3. But even MOSEK may be on very shaky ground with such a terribly conditioned problem.

The fact that you got inconsistent results tells you that, due to the numerics, neither of the results can be relied on with whatever solver you used.

Hi Mark, I have a quick question. Since there is no objective function, what is the meaning of the optimal value that the algorithm returns? I do not understand what the condition is for the algorithm to declare an optimal solution when there is no objective function in an SDP problem. Thank you!

When a feasible solution is found, i.e., a solution which satisfies all the constraints, the problem is considered to be solved.

When a solver and CVX claim the problem is solved, that means a feasible solution has been found. Any feasible solution may be returned and reported, so any feasible solution is “optimal” when there is no objective function.

Now I have decreased the condition numbers of the A1 and A2 matrices from approximately 1e16 to approximately 1e8 by rescaling some elements. I was wondering whether 1e8 is an acceptable condition number for this problem? With this condition number, the MOSEK solver is able to get some solutions with ‘Inaccurate/Solved’ status. I also verified these solutions, and they all satisfy the constraints. I was wondering whether there are any other methods to verify the solutions? Thank you.
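As an illustration of the rescaling idea, here is a small numpy sketch with a hypothetical badly scaled matrix (the real A1 is not shown): a symmetric diagonal rescaling that brings the entries to comparable magnitudes can shrink the 2-norm condition number dramatically.

```python
import numpy as np

# Hypothetical badly scaled matrix: entries spanning many orders of
# magnitude inflate the 2-norm condition number.
A = np.array([[1.0,  1e-8],
              [1e-8, 1e-8]])
print(np.linalg.cond(A))          # roughly 1e8

# Symmetric diagonal rescaling D*A*D (a change of units) brings the
# entries to comparable magnitudes.
D = np.diag([1.0, 1e4])
A_scaled = D @ A @ D
print(np.linalg.cond(A_scaled))   # close to 1
```

Note this is only about diagnosing scaling; whether a particular rescaling is valid for your problem depends on transforming the LMIs consistently.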

I’m not sure what you are saying. Did you run the problem with 1e8 condition numbers using MOSEK, and it returned Inaccurate/Solved? So it produced a solution with Inaccurate/Solved status and didn’t report infeasibility?

If so, I recommend you calculate directly in MATLAB (double precision) the minimum eigenvalues of eye(n) - A1’*P - P’*A1, P - eye(n), and eye(n) - A2’*P - P’*A2 for the P returned by CVX. If you want to be extra sure, you could do the calculation in higher precision. There are several packages (including the Symbolic Math Toolbox) available for MATLAB which support arbitrary-precision calculation. You could use quad precision or even higher to verify that a candidate solution is necessarily feasible. If you just have to do this on a one-time basis for a 10 by 10 matrix, it shouldn’t take long. However, I think the minimum eigenvalue calculation should be accurate enough in MATLAB double precision.
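In numpy (double precision), that check would look something like the sketch below. The A1, A2, and P here are placeholders, since the actual matrices and the CVX-returned P are not shown; all three minimum eigenvalues should be nonnegative, up to solver tolerance, for P to pass.

```python
import numpy as np

def lmi_margins(A1, A2, P):
    """Minimum eigenvalues of the three LMIs from the thread; all must be
    nonnegative (within tolerance) for P to be a feasible point."""
    n = P.shape[0]
    I = np.eye(n)
    return [np.linalg.eigvalsh(M).min() for M in
            (I - A1.T @ P - P.T @ A1,   # first Lyapunov-type LMI
             P - I,                     # P >= I
             I - A2.T @ P - P.T @ A2)]  # second Lyapunov-type LMI

# Placeholder data for illustration only.
A1 = np.array([[-2.0, 0.0], [0.0, -3.0]])
A2 = np.array([[-4.0, 1.0], [1.0, -5.0]])
P = np.eye(2)
print(lmi_margins(A1, A2, P))  # all nonnegative => P passes the check
```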

If the candidate solution has a slightly negative eigenvalue when evaluated in MATLAB (double or higher precision), I don’t think that necessarily means the problem is infeasible, because the solver might have reported a solution on the boundary of feasibility, within tolerance, even though a solution with larger minimum eigenvalues might exist. Therefore, you could use CVX’s lambda_min to impose minimum eigenvalues, or add an objective to maximize some combination of lambda_min of one or more LMIs, instead of or in addition to the LMI constraints.
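The idea behind maximizing lambda_min is the standard epigraph reformulation: lambda_min(M) >= t if and only if M - t*I is positive semidefinite, so maximizing t pushes the solution off the boundary of the PSD cone. The numpy sketch below just verifies that equivalence numerically for one hypothetical symmetric LMI value, not inside an actual SDP solve.

```python
import numpy as np

# lambda_min(M) >= t  <=>  M - t*I is positive semidefinite.
M = np.array([[3.0, 1.0], [1.0, 2.0]])   # hypothetical symmetric LMI value
t = np.linalg.eigvalsh(M).min()          # the largest achievable margin

shifted = M - t * np.eye(2)
print(np.linalg.eigvalsh(shifted).min())  # ~0: exactly on the PSD boundary
```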