The problem is minimizing \|f(y) - f(A)*a\|_2, where f(y) = w_1*f_1(y) + w_2*f_2(y) + \dots + w_m*f_m(y). The variables are a (length n) and w (length m). Thank you!
cvx_begin
    variables a(n,1) w(2,1)
    for i = 1:2
        tmp(:,:,i)  = w(i)*RR(:,:,i);  % RR and ZZ were computed beforehand
        tmpz(:,:,i) = w(i)*ZZ(:,:,i);
    end
    K  = sum(tmp,3);
    Ky = sum(tmpz,3);
    minimize( 1 + matrix_frac(a,K) - 2*Ky*a )  % RBF kernel
    subject to
        sum(w) == 1;
cvx_end
This is my code, but it fails with the error “The second argument must be positive or negative semidefinite”. I don’t know what the problem is. Thank you!
In my experiments, RR is an n×n×2 array and ZZ is a 1×n×2 array, where n is the number of samples. For a single-kernel problem, where RR is n×n and ZZ is 1×n, I can use CVX to solve the objective function: 1 + matrix_frac(a,K) - 2*Ky*a raises no error because K and Ky are fixed. But for multiple kernels, K and Ky depend on the variable w, which is computed in the loop, so I don’t know how to solve the problem.
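For reference, the single-kernel case that works for me looks roughly like this (a sketch of my setup; here K is a fixed n×n kernel matrix and Ky a fixed 1×n row, both constants):

```matlab
% Single-kernel case: K and Ky are numeric data, not CVX expressions,
% so the objective is a valid DCP expression in the variable a alone.
cvx_begin
    variable a(n,1)
    minimize( 1 + matrix_frac(a, K) - 2*Ky*a )
cvx_end
```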
Thank you for posting your code. There isn’t quite enough information here—in particular, we don’t see what the sizes of RR and ZZ are. But even as stated, I suspect the problem is that your formulation simply isn’t a valid DCP, as 2*Ky*a violates the no-product rule.
As I mentioned above in the comments, my strong suspicion is that your problem is not convex as written—or, at least, not a valid disciplined convex program—due to the -2*Ky*a term.
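To see the no-product rule in isolation (a minimal illustration, not your model):

```matlab
cvx_begin
    variables u v
    minimize( u*v )   % rejected: product of two non-constant CVX expressions
cvx_end
```

In your model, Ky is affine in the variable w, and a is itself a variable, so 2*Ky*a is exactly such a product.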
I know that people have used CVX successfully in kernel learning research, but I am not familiar with it first-hand. You may need to find out what other researchers have done in this area; in particular, look for citations of CVX in their papers.
EDIT: Yes, based on your clarifications, it is clear that my guess was largely correct. CVX cannot solve the problem as written. It may be convex, but it is not expressed in disciplined convex programming form (that is, according to the DCP ruleset CVX requires). You’ll need to consult the kernel learning literature to see how to represent this problem as a semidefinite program.
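As a pointer in that direction: the matrix_frac term alone has a standard semidefinite (Schur-complement) epigraph representation, assuming K \succ 0:

$$
a^T K^{-1} a \le t
\quad\Longleftrightarrow\quad
\begin{bmatrix} K & a \\ a^T & t \end{bmatrix} \succeq 0 .
$$

The bilinear term -2*Ky*a is the part that does not fit this pattern directly, and it is what the kernel learning papers handle with problem-specific reformulations.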