The first step of the algorithm “ApproxMaxQP”, as described by Charikar and Wirth in “Maximizing quadratic programs: extending Grothendieck’s inequality”, is to obtain an optimal solution $\{v_i\}_{i=1,\ldots,n}$, $v_i \in \mathbb{R}^n$, to the following SDP:
$$\text{maximize} \ \sum_{i,j} a_{ij} \, v_i \cdot v_j \quad \text{subject to} \quad v_i \cdot v_i = 1 \quad \text{for all } i,$$
where $A=[a_{ij}] \in \mathbb{R}^{n \times n}$ is any matrix with zero diagonal, and $\cdot$ denotes the Euclidean inner product.
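To spell out the reformulation I am relying on: collecting the inner products into the Gram matrix $S$ with $S_{ij} = v_i \cdot v_j$, the vector program above is equivalent to the matrix program
$$\text{maximize} \ \operatorname{trace}(AS) \quad \text{subject to} \quad S_{ii} = 1 \ \text{for all } i, \quad S \succeq 0,$$
because $\sum_{i,j} a_{ij}\, v_i \cdot v_j = \sum_{i,j} a_{ij} S_{ij} = \operatorname{trace}(AS)$ for symmetric $S$, and the matrices of the form $S = V^T V$ are exactly the symmetric positive semidefinite ones.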
I am unfamiliar with both SDPs and optimization, but my goal is to translate this into CVX code. After some trial and error I arrived at the following code, which seems to give meaningful results, but I am unsure whether it is really correct and whether it contains any obvious mistakes:
cvx_begin sdp
    variable S(n,n) symmetric
    maximize( trace(A*S) )
    subject to
        diag(S) == ones(n,1);
        S >= 0;    % in sdp mode this constrains S to be positive semidefinite
cvx_end
[X,Y,Z] = svd(S);    % S is symmetric PSD, so here X and Z coincide (up to signs on the null space)
V = sqrt(Y) * X';    % then S = V'*V, with the v_i as the columns of V
So I first determine a positive semidefinite $S$ in order to guarantee the existence of some $V$ such that $S = V^T V$, with $V$ having the $\{v_i\}$ as its columns. Afterwards I use MATLAB’s singular value decomposition of $S$ to recover $V$ from $S$.
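As a sanity check of the factorization step alone, here is a minimal pure-Python sketch (no NumPy or CVX) that uses a Cholesky factorization in place of the SVD. The $3 \times 3$ matrix `S` is a made-up, strictly positive definite stand-in for the SDP solution; note that Cholesky requires strict definiteness, whereas an actual SDP optimum may be rank deficient, which is why an SVD- or eigendecomposition-based route is safer in practice:

```python
import math

def cholesky(S):
    """Return upper-triangular R with R^T R = S (S symmetric positive definite)."""
    n = len(S)
    R = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = S[i][j] - sum(R[k][i] * R[k][j] for k in range(i))
            if i == j:
                R[i][j] = math.sqrt(s)   # fails (sqrt of negative) if S is not PD
            else:
                R[i][j] = s / R[i][i]
    return R

# Hypothetical unit-diagonal positive definite matrix, standing in for the SDP solution S.
S = [[1.0, 0.5, 0.2],
     [0.5, 1.0, 0.3],
     [0.2, 0.3, 1.0]]

R = cholesky(S)  # the vectors v_i are the columns of R, since S = R^T R

def col(M, j):
    return [row[j] for row in M]

# Check: v_i . v_j reproduces S_ij; in particular each v_i is a unit vector.
for i in range(3):
    for j in range(3):
        dot = sum(a * b for a, b in zip(col(R, i), col(R, j)))
        assert abs(dot - S[i][j]) < 1e-12
```

Any factorization $S = V^T V$ works equally well here, since only the pairwise inner products $v_i \cdot v_j$ matter to the algorithm.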
Assuming my approach is correct at all, can it be improved? For example, I wonder whether it is possible to obtain $V$ directly from CVX. And should I perhaps rewrite the objective function in another way to improve numerical stability or performance?