When I replaced the log_det function with det_rootn in my CVX code, I got totally different results

The basic code for my CVX process is listed below:

% Optimal solution
% concave-convex procedure optimization
% using slack variable and iteration process
mu_user_old = 1e-5 * ones(Nr, 1);   % starting point for the CCP linearization
for iter = 1:maxiter
    cvx_begin quiet
    cvx_solver SeDuMi
        variable mu_user(Nr)
        variable t_user(Nr)
        variable B(Nt, Nr)
        maximize( 0.5 * log_det( eye(Nr) + 2 / (pi * exp(1)) * diag(t_user ./ sigma2z) ) )
        subject to
            H * B == diag(mu_user);
            sum(abs(B), 2) <= delta;
            % first-order linearization of t_user <= mu_user.^2 around mu_user_old
            t_user - mu_user_old.^2 - 2 * mu_user_old .* (mu_user - mu_user_old) <= 0;
            mu_user >= 0;
    cvx_end
    mu_user_old = mu_user;          % update the linearization point
end
mu_opt_all(:, ii) = mu_user;        % ii indexes an enclosing loop not shown here
t_opt_all(:, ii) = t_user;
B_opt_all(:, :, ii) = B;
sum_rate_max_opt(ii) = 0.5 * log2(det(eye(Nr) + 2 / (pi * exp(1)) * diag(t_user ./ sigma2z)));

When I replace the log_det function with the more efficient det_rootn, I get different results for sum_rate_max_opt. I wonder what the reason for my problem is. If reproducible code is needed, please reply and I will post it.

I have no idea what you’re doing or did.

Where is the code for replacing log_det with det_rootn? Did you use the relation Nr*log(det_rootn(X)) = log_det(X)? Why are you using log2, which is log base 2, in the calculation of sum_rate_max_opt(ii)? log_det is based on log, i.e., the natural log.
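If you meant an objective equivalent to log_det, a minimal sketch of the substitution (using that relation, with Nr, t_user, sigma2z as in your code) would be:

% det_rootn(X) = det(X)^(1/Nr), so Nr*log(det_rootn(X)) = log_det(X);
% log() of a concave CVX expression is accepted by the DCP ruleset
maximize( 0.5 * Nr * log( det_rootn( eye(Nr) + 2 / (pi * exp(1)) * diag(t_user ./ sigma2z) ) ) )

A bare maximize( det_rootn(...) ) has the same maximizer, since the transformation is monotone, but a different optimal objective value, so cvx_optval would not be comparable across the two formulations.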

And presuming what you did with respect to all of the above was "correct", you are using the concave-convex procedure, which may not converge to a global optimum of the original problem, or even to anything. And you are running a fixed number, maxiter, of iterations, so the ending value of the optimization variables may not correspond to the concave-convex procedure having converged at all. And then within the context of all this bad stuff, even a roundoff or solver-tolerance-level change in the evaluation of the objective function in one outer iteration (i.e., CVX invocation) could cause the concave-convex procedure to take an entirely different path, converging either to something different, or to nothing at all.

As a further piece of advice, given your apparent difficulties, I recommend you not use CVX quiet mode until you have everything working properly. True, you may get a lot of output due to the multiple calls to CVX, but as it is now, you have no idea what is happening with any of the CVX solutions (solver calls).

You should start your debugging and diagnosis with a single outer iteration, i.e., maxiter = 1, and look carefully at the results of that.
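Even with quiet mode off, it also helps to inspect each solve programmatically; a minimal sketch, to be placed right after cvx_end:

% check the outcome of this solver call before trusting the iterates
disp(cvx_status)    % e.g., Solved, Inaccurate/Solved, Infeasible, Failed
disp(cvx_optval)    % objective value CVX reports for this subproblem
if ~strcmp(cvx_status, 'Solved')
    warning('CVX did not return a clean solution at iteration %d.', iter);
end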

What I mean is to just replace the log_det in my code above with det_rootn; I think you now know what I am doing. In my CVX process, I am trying to get the optimal results of the objective function with log. After I get the results, I think the maximizer under log2 is the same as under log, so I use log2 in the calculation of sum_rate_max_opt(ii).
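In other words, log2(x) = log(x)/log(2) is a positive constant times log(x), so the maximizer is unchanged and only the reported value is rescaled; for example:

M = eye(Nr) + 2 / (pi * exp(1)) * diag(t_user ./ sigma2z);   % same matrix as in my code
rate_nats = 0.5 * log(det(M));    % the quantity the log_det-based objective measures
rate_bits = rate_nats / log(2);   % identical to 0.5 * log2(det(M))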

Thank you for your advice. I will try to debug my code without CVX quiet mode.

One more question: how can I make the concave-convex procedure converge to a better result?

Everything I wrote about slight numerical differences in the objective function potentially changing the entire path of the concave-convex procedure applies. If you just do a straight swap of det_rootn for log_det in the objective function, the argmax, if it is unique, which it might not be, is unaffected if the problem is solved exactly, i.e., to infinite precision. But your problem is being solved by a double-precision solver to within a solver tolerance, so the argmax could be affected. Once the argmax from one outer iteration is affected even a little, there could be a major discontinuity in the future evolution (path) of the concave-convex algorithm.

You have a very crude implementation of the concave-convex algorithm, because you don't check for convergence at all. The starting value for mu_user_old can play a huge role in how the algorithm performs, including what, if anything, it converges to. But other than trying several different starting values, I can't give specific advice.
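For what it's worth, a convergence check is easy to add. Here is a minimal sketch of your loop with one (tol and obj_old are names I made up, and the tolerance value is arbitrary):

tol = 1e-6;                          % hypothetical relative stopping tolerance
obj_old = -Inf;
mu_user_old = 1e-5 * ones(Nr, 1);
for iter = 1:maxiter
    cvx_begin
    cvx_solver SeDuMi
        variable mu_user(Nr)
        variable t_user(Nr)
        variable B(Nt, Nr)
        maximize( 0.5 * log_det( eye(Nr) + 2 / (pi * exp(1)) * diag(t_user ./ sigma2z) ) )
        subject to
            H * B == diag(mu_user);
            sum(abs(B), 2) <= delta;
            % same CCP linearization of t_user <= mu_user.^2, rearranged
            t_user <= mu_user_old.^2 + 2 * mu_user_old .* (mu_user - mu_user_old);
            mu_user >= 0;
    cvx_end
    if ~strcmp(cvx_status, 'Solved'), break; end   % do not iterate on a junk solve
    if abs(cvx_optval - obj_old) <= tol * max(1, abs(obj_old))
        break;                                     % objective has (numerically) stabilized
    end
    obj_old = cvx_optval;
    mu_user_old = mu_user;
end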

To me, the concave-convex algorithm is a crude, unsophisticated, non-robust non-convex nonlinear optimization algorithm, i.e., a poor man's non-convex nonlinear optimizer. It is totally unsafeguarded, i.e., it doesn't use trust regions or line search to enforce descent across major iterations. That can be added, but at that point, you are essentially writing your own nonlinear optimizer. In general, high-quality non-convex nonlinear optimizers are more robust than a crude concave-convex algorithm. You may fall victim to the same problem as people seeing successful Taylor expansions in physics and assuming they can take a Taylor expansion of whatever, with no regard for the remainder term, and it will all work out (but it often doesn't: https://www.linkedin.com/pulse/high-crimes-misdemeanors-analysis-biz-part-i-my-funny-mark-l-stone/ ). There are cases in which the concave-convex algorithm works well, and perhaps you have seen them in books, papers, and lectures, but that doesn't mean it always, or even usually, works well.