How is it possible to implement eig(X) in MATLAB CVX?

I have the following equation:

Lambda = Eig(inv(M)*K)

Eig(inv(M)*K) denotes the eigenvalues of inv(M)*K, where [M]nn is a known square matrix and [K]nn is the unknown matrix, whose sparsity pattern we know. K is also symmetric, and it is fully determined once n of its elements (located on the diagonal of the matrix) are known. The problem is that we do not have access to all the elements of the vector Lambda, so I should solve the following optimization problem to find the elements ki of the matrix K:

arg min { || Lambda - Eig(inv(M)*K) || }

I am confused about how to define the optimization problem so that it can be handled by numerical solvers, e.g. MATLAB CVX, and about whether this problem is a convex optimization problem.
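To make the setup concrete, here is a small NumPy sketch of what the residual objective evaluates for a trial K. The matrices, the dimension n = 4, and the shift of 0.1 on the measured eigenvalues are all made up for illustration; only the names M, K, Lambda follow the question.

```python
import numpy as np

# Hypothetical small instance (n = 4), purely to show what the objective computes.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)          # known symmetric positive definite M
B = rng.standard_normal((n, n))
K = B @ B.T + n * np.eye(n)          # a trial symmetric positive definite K

# Eigenvalues of inv(M) @ K; with M SPD and K symmetric they are real,
# since this is the generalized eigenvalue problem K v = lambda M v.
eigs = np.sort(np.linalg.eigvals(np.linalg.inv(M) @ K).real)

# Suppose only the j smallest target eigenvalues (Lambda) are observed.
j = 2
Lambda = eigs[:j] + 0.1              # hypothetical measured values
objective = np.linalg.norm(Lambda - eigs[:j])
print(objective)
```

The hard part, of course, is that in the actual problem K is the decision variable, so Eig(inv(M)*K) cannot simply be evaluated like this inside CVX; that is what the answers below address.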

Any suggestions would really be appreciated.

First of all, the objective function needs to be a real scalar: it must refer to one specific eigenvalue, or to some well-defined scalar function of the eigenvalues, such as the trace.

If you want the minimum eigenvalue of inv(M)*K, that can be accomplished with lambda_min(inv(M)*K). This will actually constrain inv(M)*K to be symmetric, rather than constraining K to be symmetric. If you need the latter, the problem is not convex, and CVX can’t be used.

If you want the sum of the j smallest eigenvalues of inv(M)*K, that can be accomplished with lambda_sum_smallest(inv(M)*K,j), which will constrain inv(M)*K to be symmetric, rather than constraining K to be symmetric.
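For intuition about why CVX accepts these: the sum of the j smallest eigenvalues of a symmetric matrix is a concave function (lambda_min is the j = 1 special case). A quick NumPy midpoint check, with arbitrary example matrices that are not part of the original post, illustrates the inequality f((A+B)/2) >= (f(A)+f(B))/2:

```python
import numpy as np

def sum_smallest_eigs(X, j):
    """Sum of the j smallest eigenvalues of a symmetric matrix X."""
    return np.linalg.eigvalsh(X)[:j].sum()   # eigvalsh returns ascending order

rng = np.random.default_rng(1)
n, j = 6, 3
A = rng.standard_normal((n, n)); A = A + A.T   # arbitrary symmetric matrices
B = rng.standard_normal((n, n)); B = B + B.T

# Concavity at the midpoint: f((A+B)/2) >= (f(A) + f(B)) / 2.
mid = sum_smallest_eigs((A + B) / 2, j)
avg = (sum_smallest_eigs(A, j) + sum_smallest_eigs(B, j)) / 2
print(mid, avg)
```

A concave function of an affine expression can be maximized (or its negative minimized) in a convex program, which is exactly how CVX treats these atoms.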

If you need the largest eigenvalue or sum of the j largest eigenvalues, that would be non-convex.

Please see the CVX function reference for a list of CVX's eigenvalue and other functions. Chapter 6 (Semidefinite optimization) of the MOSEK Modeling Cookbook may also be helpful, but it requires some level of mathematical maturity to understand.

Hi Mark,
Thank you for the very helpful clarifications. I checked the two links that you shared with me and found them very useful. Thanks for that.

The lambda_sum_smallest function is exactly what I need to model this problem. M and K are symmetric matrices, but the problem is that inv(M)*K is not symmetric, even though it is real and all of its eigenvalues are positive.

In my problem, the square matrices M and K are 14×14. We know M, but we have to estimate K. I also have some information about K (its sparsity pattern and the relationships between its elements): it can be described by only 14 scalar variables, although it is not diagonal. I only know some of the eigenvalues of inv(M)*K (the 6 smallest of the 14), and need to solve the following, apparently convex, problem:

minimize { sum([lambda1, …, lambda6]) - lambda_sum_smallest(inv(M)*K, 6) }

But I cannot use the standard function lambda_sum_smallest, because inv(M)*K is not symmetric. Do you have any suggestions in this regard?
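Since M is known, and assuming it is symmetric positive definite (hedged: the post only says M is known and symmetric), the asymmetry of inv(M)*K does not change its spectrum: with the Cholesky factorization M = L*L', the matrix inv(M)*K is similar to the symmetric matrix inv(L)*K*inv(L)', so the two share the same real eigenvalues. A NumPy check with small made-up matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)        # assumed: M symmetric positive definite
B = rng.standard_normal((n, n))
K = B + B.T                        # symmetric (not necessarily definite)

# inv(M) @ K is generally NOT symmetric ...
P = np.linalg.inv(M) @ K

# ... but with M = L L^T (Cholesky), it is similar to the symmetric matrix
# S = inv(L) @ K @ inv(L).T, because L^T P L^{-T} = inv(L) K inv(L)^T.
L = np.linalg.cholesky(M)
S = np.linalg.solve(L, np.linalg.solve(L, K).T)   # inv(L) @ K @ inv(L).T

eigs_P = np.sort(np.linalg.eigvals(P).real)
eigs_S = np.linalg.eigvalsh(S)                    # ascending, real
print(np.max(np.abs(eigs_P - eigs_S)))            # spectra agree
```

Note that inv(L)*K*inv(L)' is affine in the entries of K when M (hence L) is fixed, which is the form CVX's symmetric eigenvalue atoms expect.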

It would appear that the problem is non-convex, so can’t be solved by CVX.

Thank you again, Mark.
I see. Do you have a reference showing that this problem is nonconvex? (I understand that the way the problem is stated may look nonconvex to CVX.)
When I solve the equation Lambda = Eig(inv(M)*K) with knowledge of all the eigenvalues, there is always a unique solution for K.
What I know is that inv(M)*K always has real and positive eigenvalues, and its sparsity pattern is also known.

If inv(M)*K always has only real and positive eigenvalues, I believe that means it is symmetric PSD. If that is true, you can use lambda_sum_smallest. If that is not true, the problem is not convex.

It is the obligation of forum question askers to prove their problem is convex, not of forum question answerers to prove their problem is non-convex.

If you want to really understand convex optimization, study and solve the exercises in a convex optimization textbook.

Please read Why isn't CVX accepting my model? READ THIS FIRST! .