Status: Failed. Can't get LMI feasibility

Please help me to solve this LMI.
\underset{P_1,P_2,Y_1,Y_2,\alpha,\varepsilon}{\text{minimise}}\ \big[\mathrm{tr}(P_1)+\mathrm{tr}(P_2)\big]
subject to
If the matrices
X_1:=P_1^{-1},\ X_2:=P_2,\ R_1,\,Q \in\mathcal{R}^{n \times n },\ Y_1:=K P_1^{-1}\in\mathcal{R}^{m \times n },\ Y_2:=P_2L\in\mathcal{R}^{n \times k}
and the numbers \alpha,~\varepsilon \in \mathcal{R} satisfy the system of matrix inequalities
\left[ \begin{array}{cccc} -R_1 & Y_2C & I & 0 \\ \ast & \Lambda_2 & 0 & X_2 \\ \ast & \ast & -\varepsilon I & 0 \\ \ast & \ast & \ast & -\varepsilon I \end{array}\right]\leq 0

\left[ \begin{array}{cc} -R_1-2X_2 & I \\ \ast & \Lambda_1 \end{array}\right]\leq 0
and
\left[ \begin{array}{cc} Q & X_1 \\ \ast & I \end{array}\right]\geq 0
where \Lambda_1=AX_1+BY_1+X_1A^{T}+Y_1^{T}B^{T}+\alpha X_1+\varepsilon L_{\phi}^{2}Q and \Lambda_2=X_2A-Y_2C+A^{T}X_2-C^{T}Y_2^{T}+\alpha X_2+\varepsilon L_{\phi}^{2}I; additionally,
\alpha>0,\ \varepsilon>0,\ X_1>0,\ X_2>0,\ R_1>0,\ Q>0.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% CVX code %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Define the model parameters %%%%%%%%%%%%%%%%%%
Ti = 52.29;
Vi = 0.042318;
k = 0.22752;
ke = 0.050272;
p1 = 0.0049719;
p2 = 0.021312;
p3 = 8.8033e-5;
Gb=180;
p4=ke;
p5=1/(Ti*Vi);
p6=1/Ti;

A = [-p1 Gb 0 0 0;0 -p2 p3 0 0;0 0 -p4 p5 0;0 0 0 -p6 p6;0 0 0 0 -p6];
B=[0;0;0;0;1];
C=[1 0 0 0 0];
n = size(A, 1);      % number of states
m = size(B, 2);      % number of inputs
p = size(C, 1);      % number of outputs
G2 = 10;

cvx_begin sdp
variable X1(n,n) symmetric
variable X2(n,n) symmetric
variable R1(n,n) symmetric
variable Q(n,n) symmetric
variable Y1(m,n)
variable Y2(n,p)

% alpha and epsilon are fixed constants here rather than decision variables
a = 1e40;     % alpha
e = 1e-20;    % epsilon

% Objective
minimize(trace(X1)+trace(X2))
X1>=100*eye(n);
X2>=153*eye(n);
R1>=100*eye(n);
Q>=100*eye(n);
% LMI 1
[-R1,      Y2*C,                                             eye(n),    zeros(n); ...
 C'*Y2',   X2*A-Y2*C+A'*X2-C'*Y2'+a*X2+e*0.0064*eye(n),      zeros(n),  X2; ...
 eye(n),   zeros(n),                                         -e*eye(n), zeros(n); ...
 zeros(n), X2',                                              zeros(n),  -e*eye(n)] <= 0;   % 0.0064 plays the role of L_phi^2
% LMI 2
[R1-2*X2, eye(n); ...
 eye(n),  A*X1+B*Y1+X1*A'+Y1'*B'+a*X1+e*0.0064*Q] <= 0;
% LMI 3
[Q,  X1; ...
 X1, eye(n)] >= 0;
cvx_end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Output %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Calling SeDuMi 1.34: 380 variables, 70 equality constraints
For improved efficiency, SeDuMi is solving the dual problem.

Warning: Rank deficient, rank = 23, tol = 6.856737e+28.

In sedumi at 268
In cvx_run_solver at 50
In cvx_sedumi>solve at 245
In cvxprob.solve at 423
In cvx_end at 88
The coefficient matrix is not full row rank, numerical problems may occur.
SeDuMi 1.34 (beta) by AdvOL, 2005-2008 and Jos F. Sturm, 1998-2003.
Alg = 2: xz-corrector, Adaptive Step-Differentiation, theta = 0.250, beta = 0.500
eqs m = 70, order n = 61, dim = 701, blocks = 8
nnz(A) = 260 + 0, nnz(ADA) = 4060, nnz(L) = 2065
it : by gap delta rate t/tP t/tD* feas cg cg prec
0 : 2.14E+36 0.000
1 : 1.83E+03 2.14E+29 0.000 0.0000 1.0000 1.0000 1.00 7 2 2.0E+33
2 : -1.26E+34 7.84E+27 0.000 0.0366 0.9196 0.9000 1.00 9 6 1.8E+32
3 : -1.04E+34 1.01E+27 0.000 0.1284 0.9063 0.9000 0.92 9 9 2.6E+31
4 : -4.27E+32 3.82E+25 0.000 0.0379 0.9900 0.9665 1.43 9 9 8.6E+29
5 : -1.46E+32 1.25E+25 0.000 0.3284 0.9000 0.9000 0.91 9 9 2.8E+29
6 : -1.41E+32 1.17E+25 0.000 0.9320 0.9000 0.9000 -2.48 9 1 3.7E+29
Run into numerical problems.

iter seconds |Ax| [Ay]_+ |x| |y|
6 0.9 8.5e+29 7.3e+41 1.2e+01 3.0e+35
Failed: no sensible solution/direction found.

Detailed timing (sec)
Pre IPM Post
1.060E-01 4.570E-01 4.700E-02
Max-norms: ||b||=1, ||c|| = 1.530000e+42,
Cholesky |add|=0, |skip| = 47, ||L.L|| = 1.

Status: Failed
Optimal value (cvx_optval): NaN

I haven’t looked very carefully at what you’ve done, but for starters, note the following:

\Lambda_2 \leq 0 is infeasible just by itself, so LMI 1 must be infeasible.
\Lambda_1 \leq 0 is infeasible just by itself, so LMI 2 must be infeasible.

So you need to look at the definition and construction of \Lambda_2 and \Lambda_1. That doesn’t mean that’s all you need to look at.
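If it helps, one way to see where the trouble comes from is to test each diagonal block in isolation and simply report whatever status the solver returns. A minimal diagnostic sketch along those lines (it reuses A, B, C, n, a, e and the lower bounds from your code, with 0.0064 standing in for L_phi^2):

% Diagnostic sketch: check each diagonal block on its own and report cvx_status.
% Assumes A, B, C, n, a, e are already in the workspace from the code above.
cvx_begin sdp quiet
    variable X2(n,n) symmetric
    variable Y2(n,1)
    X2 >= 153*eye(n);
    X2*A - Y2*C + A'*X2 - C'*Y2' + a*X2 + e*0.0064*eye(n) <= 0;   % Lambda_2 alone
cvx_end
disp(['Lambda_2 block alone: ' cvx_status])

cvx_begin sdp quiet
    variable X1(n,n) symmetric
    variable Q(n,n) symmetric
    variable Y1(1,n)
    X1 >= 100*eye(n);
    Q >= 100*eye(n);
    A*X1 + B*Y1 + X1*A' + Y1'*B' + a*X1 + e*0.0064*Q <= 0;        % Lambda_1 alone
cvx_end
disp(['Lambda_1 block alone: ' cvx_status])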

No good can come from such extreme values a=1e40, e=1e-20.
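Just to put a number on it (153 being the lower bound you impose on X2), the solver has to balance coefficients spanning roughly 62 orders of magnitude against about 16 digits of double-precision accuracy:

% Rough scale check, using the values from the code above
a = 1e40;  e = 1e-20;
scale_ratio = (a*153)/e     % about 1.5e62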

Could you kindly explain why \Lambda_{1} and \Lambda_{2} are infeasible? These are derived from a Lyapunov stability analysis.

They are infeasible because they are infeasible. I leave the explanation of why (including whether they are constructed correctly) to you, because it is your optimization problem. But because they are infeasible, your problem must be infeasible.

As I wrote earlier, having a=1e40, e=1e-20 is not a good thing (numerically). I don't know to what extent they are contributing to your difficulties.

Respected Sir,

Thank you very much for your feedback. Please accept my query as that of a student asking a teacher for clarification.

I absolutely understand that there is no point in choosing a=10^{40} and e=10^{-20}. But I do not understand why the term \Lambda_{1} is infeasible by itself, as it has been derived from a Lyapunov analysis.

I request you to kindly correct me if something is wrong with the proof.

I am providing a simple proof, for instance, in which a term similar to \Lambda_2 appears in the LMI given at the end of the proof. Kindly explain the reason behind the infeasibility of the term \Lambda_2.

Luenberger-like Observer for an Uncertain Nonlinear System

Plant

\left. \begin{array}{l} \dot{x}\left( t\right) =\left[ A+\Delta A\left( t\right) \right] x\left( t\right) +Bu\left( t\right) +\phi \left( x\left( t\right) \right) \\ \\ y\left( t\right) =Cx\left( t\right) \end{array}\right\}

Assumptions:
\left\Vert \Delta A\left( t\right) \right\Vert \leq \delta ,\qquad \text{control law: } u\left( t\right) =Kx\left( t\right) ,\qquad \left\Vert x\left( t\right) \right\Vert \leq X_{+}

State estimator:
\frac{d}{dt}\hat{x}\left( t\right) =A\hat{x}\left( t\right) +Bu\left( t\right) +\phi \left( \hat{x}\left( t\right) \right) +L\left[ y\left( t\right) -C\hat{x}\left( t\right) \right]

Error of estimation:
e\left( t\right) :=x\left( t\right) -\hat{x}\left( t\right)

Closed-loop error dynamics:
\begin{array}{l}
\dot{e}\left( t\right) =\left[ A+\Delta A\left( t\right) \right] x\left( t\right) +Bu\left( t\right) +\phi \left( x\left( t\right) \right) -A\hat{x}\left( t\right) -Bu\left( t\right) -\phi \left( \hat{x}\left( t\right) \right) -L\left[ y\left( t\right) -C\hat{x}\left( t\right) \right] \\
=Ax\left( t\right) +\Delta A\left( t\right) x\left( t\right) -A\hat{x}\left( t\right) -LCe\left( t\right) +\Delta \phi \left( t\right) =\left[ A-LC\right] e\left( t\right) +\Delta A\left( t\right) x\left( t\right) +\Delta \phi \left( t\right)
\end{array}
where
\Delta \phi \left( t\right) :=\phi \left( x\left( t\right) \right) -\phi \left( \hat{x}\left( t\right) \right)
satisfying (by the Lipschitz property assumption)
\left\Vert \Delta \phi \left( t\right) \right\Vert \leq L_{\phi }\left\Vert e\left( t\right) \right\Vert
Finally
\dot{e}\left( t\right) =\left[ A-LC\right] e\left( t\right) +\Delta A\left( t\right) x\left( t\right) +\Delta \phi \left( t\right)

Storage function: V\left( e\right) :=e^{\intercal }Pe,\quad P>0
\begin{array}{c}
\dot{V}\left( e\left( t\right) \right) =2e^{\intercal }\left( t\right) P\dot{e}\left( t\right) =2e^{\intercal }\left( t\right) P\left( \left[ A-LC\right] e\left( t\right) +\underset{\xi \left( t\right) }{\underbrace{\Delta A\left( t\right) x\left( t\right) +\Delta \phi \left( t\right) }}\right) = \\ \\
\left( \begin{array}{c} e\left( t\right) \\ \xi \left( t\right) \end{array}\right) ^{\intercal }\left[ \begin{array}{cc} P\left[ A-LC\right] +\left[ A-LC\right] ^{\intercal }P & P \\ P & 0 \end{array}\right] \left( \begin{array}{c} e\left( t\right) \\ \xi \left( t\right) \end{array}\right) = \\ \\
\left( \begin{array}{c} e\left( t\right) \\ \xi \left( t\right) \end{array}\right) ^{\intercal }\left[ \begin{array}{cc} P\left[ A-LC\right] +\left[ A-LC\right] ^{\intercal }P+\alpha P & P \\ P & -\varepsilon I_{n\times n} \end{array}\right] \left( \begin{array}{c} e\left( t\right) \\ \xi \left( t\right) \end{array}\right) +\varepsilon \left\Vert \xi \left( t\right) \right\Vert ^{2}-\alpha V\left( e\left( t\right) \right) = \\ \\
\left( \begin{array}{c} e\left( t\right) \\ \xi \left( t\right) \end{array}\right) ^{\intercal }\underset{W\left( P,L\mid \alpha ,\varepsilon \right) }{\underbrace{\left[ \begin{array}{cc} P\left[ A+\dfrac{\alpha }{2}I_{n\times n}-LC\right] +\left[ A+\dfrac{\alpha }{2}I_{n\times n}-LC\right] ^{\intercal }P & P \\ P & -\varepsilon I_{n\times n} \end{array}\right] }}\left( \begin{array}{c} e\left( t\right) \\ \xi \left( t\right) \end{array}\right) +\varepsilon \underset{\leq \delta ^{2}X_{+}^{2}}{\underbrace{\left\Vert \xi \left( t\right) \right\Vert ^{2}}}-\alpha V\left( e\left( t\right) \right)
\end{array}

Finally,
\begin{array}{c}
\dot{V}\left( e\left( t\right) \right) \leq \left( \begin{array}{c} e\left( t\right) \\ \xi \left( t\right) \end{array}\right) ^{\intercal }W\left( P,L\mid \alpha ,\varepsilon \right) \left( \begin{array}{c} e\left( t\right) \\ \xi \left( t\right) \end{array}\right) -\alpha V\left( e\left( t\right) \right) +\varepsilon \delta ^{2}X_{+}^{2}+\varepsilon L_{\phi }^{2}\left\Vert e\left( t\right) \right\Vert ^{2} \\ \\
=\left( \begin{array}{c} e\left( t\right) \\ \xi \left( t\right) \end{array}\right) ^{\intercal }\tilde{W}\left( P,L\mid \alpha ,\varepsilon \right) \left( \begin{array}{c} e\left( t\right) \\ \xi \left( t\right) \end{array}\right) -\alpha V\left( e\left( t\right) \right) +\varepsilon \delta ^{2}X_{+}^{2} \\ \\
\tilde{W}\left( P,L\mid \alpha ,\varepsilon \right) :=\left[ \begin{array}{cc} P\left[ A+\dfrac{\alpha }{2}I_{n\times n}-LC\right] +\left[ A+\dfrac{\alpha }{2}I_{n\times n}-LC\right] ^{\intercal }P+\varepsilon L_{\phi }^{2}I_{n\times n} & P \\ P & -\varepsilon I_{n\times n} \end{array}\right]
\end{array}

Theorem:
If, under the accepted assumptions, for a given L there exist a matrix P>0
and positive constants \alpha ,\varepsilon such that
\tilde{W}\left( P,L\mid \alpha ,\varepsilon \right) <0
then we may guarantee
\dot{V}\left( e\left( t\right) \right) \leq -\alpha V\left( e\left( t\right) \right) +\varepsilon \delta ^{2}X_{+}^{2}
implying (multiply by e^{\alpha t} and integrate)
V\left( e\left( t\right) \right) \leq V\left( e\left( 0\right) \right) e^{-\alpha t}+\frac{\varepsilon \delta ^{2}X_{+}^{2}}{\alpha }\left( 1-e^{-\alpha t}\right)
and
\begin{array}{c}
\underset{t\rightarrow \infty }{\limsup }\,V\left( e\left( t\right) \right) \leq \dfrac{\varepsilon \delta ^{2}X_{+}^{2}}{\alpha } \\ \\
\underset{t\rightarrow \infty }{\limsup }\,e^{\intercal }\left( t\right) \left[ \dfrac{\alpha }{\varepsilon \delta ^{2}X_{+}^{2}}P\right] e\left( t\right) \leq 1
\end{array}
So, e\left( t\right) \rightarrow E\left( P_{attr}\right), where
P_{attr}=\dfrac{\alpha }{\varepsilon \delta ^{2}X_{+}^{2}}P

Corollary 2:
Notice that this matrix inequality can be represented as an LMI:
\begin{array}{c}
\left[ \begin{array}{cc} P\left[ A+\dfrac{\alpha }{2}I_{n\times n}-LC\right] +\left[ A+\dfrac{\alpha }{2}I_{n\times n}-LC\right] ^{\intercal }P+\varepsilon L_{\phi }^{2}I_{n\times n} & P \\ P & -\varepsilon I_{n\times n} \end{array}\right] \\ \\
\overset{X:=P,\ Y:=PL}{=} \\ \\
\left[ \begin{array}{cc} X\left( A+\dfrac{\alpha }{2}I_{n\times n}\right) +\left( A+\dfrac{\alpha }{2}I_{n\times n}\right) ^{\intercal }X-YC-C^{\intercal }Y^{\intercal }+\varepsilon L_{\phi }^{2}I_{n\times n} & X \\ X & -\varepsilon I_{n\times n} \end{array}\right] <0
\end{array}
such that
L^{\ast }=\left( X^{\ast }\right) ^{-1}Y^{\ast }
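In CVX, the corresponding feasibility check would look something like the sketch below; the alpha and epsilon values are only illustrative, A, C and n are as in my code above, and L_phi^2 is taken as 0.0064 (the constant used there):

% Sketch of the Corollary-2 LMI in CVX (illustrative alpha and epsilon only).
Lphi2  = 0.0064;      % value used for L_phi^2 in the code above
alpha_ = 0.01;        % moderate, illustrative value (not 1e40)
eps_   = 1e-3;        % moderate, illustrative value (not 1e-20)

cvx_begin sdp quiet
    variable X(n,n) symmetric
    variable Y(n,1)
    Blk11 = X*(A + (alpha_/2)*eye(n)) + (A + (alpha_/2)*eye(n))'*X ...
            - Y*C - C'*Y' + eps_*Lphi2*eye(n);
    X >= 1e-6*eye(n);                               % X = P > 0
    [Blk11, X; X, -eps_*eye(n)] <= -1e-9*eye(2*n);  % strict inequality, up to a small margin
cvx_end

if strcmp(cvx_status, 'Solved')
    L_gain = X \ Y;                                 % observer gain L = X^{-1} Y
end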

I am not an expert in Lyapunov analysis, so I have no comment on your analysis.

Thank you once again, Sir. Please consider me an interested student. Can you suggest any way, or advise me on how, to get a feasible solution for this? In YALMIP I get feasibility, but in CVX I don't.

I don't know what you did differently in YALMIP than in CVX, or whether you used a different solver.
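One way to rule out a solver difference is to force the same solver on both sides. For instance, a minimal YALMIP sketch of the Corollary-2 LMI above, pinned to SeDuMi (same illustrative alpha, epsilon, Lphi2 and the same A, C, n):

% Hypothetical YALMIP version of the same LMI, pinned to SeDuMi, so that a
% YALMIP-vs-CVX difference cannot be explained by a different solver choice.
Lphi2 = 0.0064;  alpha_ = 0.01;  eps_ = 1e-3;       % illustrative values, as before
X = sdpvar(n, n, 'symmetric');
Y = sdpvar(n, 1);
Blk11 = X*(A + (alpha_/2)*eye(n)) + (A + (alpha_/2)*eye(n))'*X ...
        - Y*C - C'*Y' + eps_*Lphi2*eye(n);
F = [X >= 1e-6*eye(n), [Blk11, X; X, -eps_*eye(n)] <= -1e-9*eye(2*n)];
diagnostics = optimize(F, [], sdpsettings('solver', 'sedumi'));
disp(diagnostics.info)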

Anyhow, Johan is more likely to diagnose your problem than I am.