Y=conj(transpose(vec(X)))*vec(X)

The problem described below is a convex optimization problem obtained by SDP relaxation. Can it be solved by CVX or not? Please check it.
Please check this code and tell me where my mistake is…
$$\mathop{\min}_{\hat{X}}\ P_{\rm tot}(\hat{X})\quad\text{s.t.}\quad {\rm SINR}_{i}(\hat{X})\geq\alpha_{i},\ \ i=1,2,\ldots,M;\qquad \hat{X}\succeq 0,$$
where
$$P_{\rm tot}(\hat{X}) := \big\langle A\hat{X}A^{H}\big[(\sigma_{s}^{2}HH^{H}+\sigma_{\rm re}^{2}I_{N})^{T}\otimes I_{N}\big]\big\rangle$$
and
$$\hat{X}=\widetilde{\rm vec}^{H}(X)\,\widetilde{\rm vec}(X)\in\mathcal{C}^{(RN_{R}^{2})\times(RN_{R}^{2})}.$$
Here $X$ is an $N\times N$ matrix.
Could you please tell me how $\hat{X}$ can be used in CVX to solve this problem?
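For what it is worth, the standard way to use $\hat{X}$ in CVX is not to form vec(X)*vec(X)' (which is non-convex), but to lift: declare a Hermitian PSD matrix variable in place of the rank-one $\hat{X}$ and drop the rank constraint. A minimal sketch, assuming the coefficient matrices C and D{i} and the thresholds alpha have already been built — these names are placeholders, not from the paper:

```
% Lifted SDP sketch (C, D{i}, alpha, M are assumed built beforehand)
n = 25;                           % dimension of vec(X) for a 5x5 X
cvx_begin sdp
    variable Xhat(n,n) hermitian
    minimize( real(trace(C*Xhat)) )               % P_tot is linear in Xhat
    subject to
        for i = 1:M
            real(trace(D{i}*Xhat)) >= alpha(i);   % linearized SINR constraints
        end
        Xhat >= 0;                                % PSD: the relaxation of rank one
cvx_end
```

If the optimal Xhat happens to be rank one, vec(X) can be recovered from its leading eigenvector; otherwise a rounding step such as Gaussian randomization is needed.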
I am attaching the code … please verify it.
clc;
clear;
close all;
M=3;
N=5;
R=1;

n_r=N/R;            %since N=R*n_r
sig=10^(20/10);     %variance of the signal
s_r=10^(0/10);      %variance of the received signal
iden=eye(N);        %N-by-N identity matrix
sigde=10^(0/10);
SINR_db=1:10;


for r=1:R
    J_r=[zeros((r-1)*n_r,n_r);eye(n_r);zeros((R-r)*n_r,n_r)];   %r-th selection matrix
    H(:,:,r)=J_r;
end
J_r1=H(:,:,1);
%J_r2=H(:,:,2);
for r=1:R
    A=kron(eye(n_r),H(:,:,r));   %note: A is overwritten each pass (R=1 here)
    %B(:,:,r)=G
end


for i=1:M
    h_temp=(randn(1,N) + sqrt(-1) * randn(1,N)) / sqrt(2);
    k(:,:,i)=transpose(h_temp);
    Hup(:,i)=k(:,:,i);
    
end
k1=k(:,:,1);
k2=k(:,:,2);
k3=k(:,:,3);

%Hup=[k1,k2,k3];
Htrans=Hup';    %conjugate (Hermitian) transpose
%n=k1*conj(transpose(k1));





%coefficient matrix of the lifted variable: trace(A*Y*A'*K) = trace((A'*K*A)*Y)
product=A'*(kron(transpose((sig*(Hup*Htrans))+s_r*iden),iden))*A;
ptot=real(trace(product));


%yalmip program for SINR of equation 12


for i=1:M
    l_temp=(randn(1,N) + sqrt(-1) * randn(1,N)) / sqrt(2);
    l(:,:,i)=conj(transpose(l_temp));
    Ldown(:,i)=l(:,:,i);
    
end

l1=l(:,:,1);
l2=l(:,:,2);
l3=l(:,:,3);
%Ldown=[l1,l2,l3];



for i=1:M
    hi=k(:,:,i);
    hi_h=conj(transpose(hi));
    
    li=conj(l(:,:,i));
    li_h=conj(transpose(li));
    num{i}=sig*(A'*kron(transpose(hi*hi_h),(li*li_h))*A);   %signal term of SINR_i

    hsum=zeros(N,N);
    for j=1:M
        if j~=i
            hj=k(:,:,j);
            hsum=hsum + hj*hj';   %interference from the other users' channels
        end
    end

    den{i}=A'*kron(transpose(sig*hsum + s_r*iden),(li*li_h))*A;   %interference-plus-noise term
    
    %sinri(i)=num(i)/den(i);
    %S=10*log10(sinri(i));
    %sdisplay(sinri(i));
end


thdb=1:10;
th=10.^(thdb/10);
for t=1:length(thdb)    %use t, not l: l already holds the channel vectors
    for i=1:M
        DF{i}=num{i}-th(t)*den{i};
    end
    cvx_begin sdp
    variable Y(N^2,N^2) hermitian   %the data are complex, so Y should be hermitian
    minimize(real(trace(product*Y)))
    subject to
    for i=1:M
        real(trace(DF{i}*Y))>=th(t)*sigde;
    end
    norm(Y,Inf) <= 1;
    Y==hermitian_semidefinite(N^2);
    cvx_end
    opt_power(t)=cvx_optval;
end
plot(thdb,opt_power,'-*m')

Please verify my code; I am not getting any solution of this…

This quantity violates the disciplined convex programming ruleset, and cannot be used in CVX. Please read the user’s guide carefully, in particular the section on the DCP ruleset for admissible expressions. The problem is that the off-diagonal elements of this expression are neither convex nor concave. CVX requires convexity to be preserved in every subexpression.

That said, the scalar quantity conj(transpose(vec(X)))*vec(X) is a valid scalar convex quadratic form. However, it is much more simply represented as sum_square_abs(X(:)).
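To see the equivalence numerically (plain MATLAB, no CVX; vec(X) is written as X(:)):

```
X = randn(3,3) + 1i*randn(3,3);   % any complex matrix
v = X(:);                         % vec(X) as a column vector
q = v'*v;                         % conj(transpose(vec(X)))*vec(X)
f = sum(sum(abs(X).^2));          % sum of squared magnitudes = ||X||_F^2
% q - f is zero up to machine precision
```

This is why the quadratic form is convex: it is just the squared Frobenius norm of X, which CVX accepts directly.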

If SeDuMi can solve it, then yes. However, SeDuMi cannot solve it as written. If you can show how SeDuMi would handle this problem, that argument will reveal the transformations necessary.