Dear all

I’m sorry that the question is a bit theoretical, but I would like to understand why the big-O complexity is O(n^3). Is n the number of variables or the number of constraints? I would appreciate an answer on this.

This is a CVX specific forum, so the wrong place. I would try OR stackexchange.

Also, your question makes no sense as stated: which problem and which algorithm are you referring to?

I’m sorry for posting here, but I was hoping to get some answers on this topic, which has been confusing me. I’m talking about the case where a primal-dual IPM is used to solve an SDP with m equality constraints and an n×n semidefinite variable matrix X. From what I have understood after going through some papers, the most expensive part of the PDIPM is computing the search direction in each iteration, which is done by solving the Schur complement system. My questions are: what is the Schur complement system, and how is it related to the constraint matrices A_i in the constraints ⟨A_i, X⟩ = b_i, where X is the n×n semidefinite variable matrix? Does the sparsity of the A_i affect the performance of the PDIPM, and is the O(n^3) cost related to that?
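For what it’s worth, here is a rough sketch of how the Schur complement matrix is formed, assuming the common HKM-style search direction (the sizes and variable names are illustrative only, not how Mosek implements it internally):

```python
import numpy as np

# Sketch: forming the m x m Schur complement matrix M for an SDP
# with constraints <A_i, X> = b_i and an n x n PSD variable X.
# For an HKM-style direction, M[i, j] = trace(A_i X A_j S^{-1}),
# where S is the current dual slack matrix.
rng = np.random.default_rng(0)
n, m = 8, 5

def random_sym(n):
    B = rng.standard_normal((n, n))
    return (B + B.T) / 2

A = [random_sym(n) for _ in range(m)]   # dense constraint matrices A_i
X = np.eye(n) + 0.1 * random_sym(n)     # current primal iterate (PSD)
S = np.eye(n) + 0.1 * random_sym(n)     # current dual slack (PSD)
Sinv = np.linalg.inv(S)

# Precompute the m left/right matrix products: O(m n^3) in total.
left = [Ai @ X for Ai in A]
right = [Aj @ Sinv for Aj in A]

# Each entry trace(L R) = sum(L * R.T) costs O(n^2),
# so filling all of M costs O(m^2 n^2).
M = np.empty((m, m))
for i in range(m):
    for j in range(m):
        M[i, j] = np.sum(left[i] * right[j].T)

# The search direction then requires solving M dy = rhs, an O(m^3) dense
# factorization: M is typically dense even when the A_i are sparse, which
# is why sparsity helps when forming M but not when factorizing it.
```

So sparsity in the constraint matrices can speed up the *assembly* of the Schur complement system, but the O(m^3) solve of the (generally dense) system remains.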

I think your question about sparsity can only be answered for a specific software package, because the complexity depends on how the algorithm is implemented.

So you will have to look in the documentation and papers of the software you are using. As far as I remember, Jos Sturm discussed this in one of his papers about SeDuMi. There is no documentation available for Mosek regarding this.

I’m using Mosek to solve two SDP problems. One of them has constraints that depend heavily on the real diagonal elements of the PSD matrix X, and the other has constraints that depend on the complex off-diagonal elements of X (X is Hermitian, with real diagonal elements and complex off-diagonal elements). As the number of variables n and the number of constraints m increased, I noticed that the problem depending on the real diagonal elements of X was superior in terms of solver time. I’m trying to justify this behavior scientifically. Can I attribute the improvement in speed to the reduction in constraints involving the PSD matrix X, and if so, how? Can I relate it to the O(m^3) cost? I would really appreciate it if you could explain this to me.

If you spend less time per iteration, then most likely it is because you spend less time computing the search direction. All other things being equal, fewer constraints is better.
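To make the scaling concrete, here is a toy flop-count model: the m n^3 + m^2 n^2 terms model the cost of forming the Schur complement matrix and m^3 the cost of factorizing it (the sizes below are made up purely for illustration):

```python
def schur_cost(m, n):
    # Rough flop-count model for the Schur complement step of one
    # PDIPM iteration on an SDP with m constraints and an n x n variable:
    # forming M ~ m n^3 + m^2 n^2, factorizing/solving M dy = rhs ~ m^3.
    return m * n**3 + m**2 * n**2 + m**3

# Hypothetical example: halving the number of constraints at fixed n = 200.
full = schur_cost(1000, 200)   # m = 1000 constraints
half = schur_cost(500, 200)    # m = 500 constraints
print(full / half)             # roughly 3.5x cheaper per iteration
```

In this model, halving m more than halves the per-iteration cost because of the superlinear m^2 and m^3 terms, which is consistent with the behavior you observed.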