Optimization Value Remains Constant During Iterations in CVX (SDP)

Hello everyone,

I’m currently working on an iterative optimization problem involving a Semi-Definite Programming (SDP) formulation in CVX, and I’ve encountered an issue where the optimization value (cvx_optval) remains the same throughout all iterations.

  • I am solving a problem where I update a decision variable X iteratively.
  • I use CVX to solve a convex optimization problem at each iteration.
  • The objective is to maximize a linearized approximation of a function involving trace(V * X).
  • X is a Hermitian, positive semidefinite matrix, and I impose the constraint diag(X) == ones(RIS_elements, 1).

Here is the relevant part of my MATLAB code:

for iter = 1:max_iterations
    % Step 1: Compute x_current
    x_current = real(trace(V * X_current));

    % Step 2: Compute R(x^{(l)}) and its gradient
    a = P_v / Noise_up;
    numerator = a;
    denominator = (1 + a * x_current) * log(2);
    gradient = numerator / denominator;

    % Linearized objective function
    constant_term = B_Uplink_Hz * log2(1 + a * x_current) - gradient * x_current;

    % Step 3: Solve the convex optimization problem using CVX
    cvx_begin sdp
        variable X(RIS_elements, RIS_elements) hermitian semidefinite
        maximize (constant_term + gradient * real(trace(V * X)))
        subject to
            diag(X) == ones(RIS_elements, 1);
    cvx_end

    % Store objective values
    objective_values = [objective_values, cvx_optval];

    % Check for convergence
    if iter > 1 && abs(objective_values(end) - objective_values(end-1)) < tolerance
        break;
    end

    % Update X_current for next iteration
    X_current = X;
end
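As a side note, the linearization step itself can be sanity-checked numerically. The check is language-agnostic, so here is a NumPy sketch with made-up values for the bandwidth B (standing in for B_Uplink_Hz), a = P_v/Noise_up, and the expansion point x0 = x^{(l)}; note that the derivative of B*log2(1 + a*x) is B*a/((1 + a*x)*ln 2), i.e. it carries the B factor.

```python
import numpy as np

# Illustrative stand-ins (not from the original post):
# B = bandwidth, a = P_v / Noise_up, x0 = expansion point x^{(l)}.
B, a, x0 = 2.0, 0.5, 3.0

def R(x):
    # True rate: R(x) = B * log2(1 + a*x)
    return B * np.log2(1.0 + a * x)

# First-order Taylor expansion of R at x0. The derivative of
# B*log2(1 + a*x) is B*a / ((1 + a*x) * ln 2), B factor included.
grad = B * a / ((1.0 + a * x0) * np.log(2.0))
const = R(x0) - grad * x0

def R_lin(x):
    # Linearized objective: constant_term + gradient * x
    return const + grad * x

# The linearization is tight at the expansion point ...
assert np.isclose(R_lin(x0), R(x0))
# ... and, since R is concave on x >= 0, the tangent upper-bounds R everywhere.
xs = np.linspace(0.0, 10.0, 101)
assert np.all(R_lin(xs) >= R(xs) - 1e-12)
```

If the tangent check at x0 fails with your own formulas, the constant_term/gradient pair is not a consistent first-order expansion of the objective.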
The value of cvx_optval does not change between iterations, so the algorithm does not appear to make any progress.
I would appreciate any insights on why the optimization value is not changing and how I might modify the setup or debug it.

Thank you for your help!

Did you look at the value of X after cvx_end on each iteration? That’s the first thing to check when diagnosing your algorithm’s performance.

Does that result in the same or a different x_current than on the previous iteration? If x_current is the same, then the same CVX problem is being solved as on the previous iteration. If so, either the algorithm has converged (perhaps the starting value of x_current (or X_current) is already optimal), or your algorithm is “wrong”, or the input data happens to produce this result.

For instance, if a is 0, so is gradient, and each CVX problem becomes the same feasibility problem, for which X = eye(RIS_elements), or some other Hermitian semidefinite matrix with unit diagonal and non-zero off-diagonals, is optimal; and it will produce the same optimal X on every iteration.

Or if V is a scalar multiple, m, of the identity matrix, then real(trace(V * X)) must have the value m*RIS_elements due to the diag constraint. Therefore, the CVX problem on every iteration will be the same feasibility problem, and will produce the same optimal X.
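That identity is easy to confirm numerically. A quick NumPy sketch (hypothetical size N and scale m, not taken from the post), using a random Hermitian PSD matrix rescaled to have a unit diagonal, mirroring the diag(X) == ones(...) constraint:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 8, 2.5  # illustrative size and scale, not from the post

# Build a random Hermitian PSD matrix, then rescale so diag(X) == 1.
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
X = A @ A.conj().T                 # Hermitian PSD
d = np.sqrt(np.real(np.diag(X)))
X = X / np.outer(d, d)             # unit diagonal, still Hermitian PSD

V = m * np.eye(N)
# With V = m*I, trace(V @ X) = m * trace(X) = m * N for every feasible X,
# so the objective is a constant and CVX is just solving a feasibility problem.
assert np.allclose(np.real(np.diag(X)), 1.0)
assert np.isclose(np.real(np.trace(V @ X)), m * N)
```

Since the objective takes the same value m*N at every feasible point, the solver has no reason to return a different X from one iteration to the next.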

In any event, input data can matter as to the basic performance of the algorithm.

Whether the linearization (SCA) scheme you concocted is any good is for you to determine, and fix. Not all such schemes are any good at all. This forum is littered with failed attempts at SCA and linearization. You can see some of my previous posts in this regard at Search results for 'SCA unreliable order:latest' - CVX Forum: a community-driven support forum. Not all linearization or SCA algorithms converge to anything, and if they do converge, it may not even be to a local optimum of the original problem, let alone a global optimum. The starting value can matter a lot.

Thank you very much for your detailed response and suggestions.

I checked the value of X after each cvx_end during the iterations, and I found that the matrix X does not change across iterations. In other words, the value of X remains the same after each CVX solve, and consequently, x_current is also identical from one iteration to the next.

Did you try changing the starting value? Or perhaps try a different value of P_v or V?

cvx_optval is quite large, which suggests bad numerical scaling. Whether improving the scaling will help, I don’t know. But perhaps it is just the case that your algorithm is lousy, doesn’t make any sense at all, or is not implemented correctly. It is off-topic for other people to figure that out for you. But it is easier to write down an ad hoc algorithm which is lousy than one which is good.