Adding a regularization term to an LP problem

I have the following linear programming problem and want to add a regularization term to it.

My problem is as follows:

echo on
cvx_begin
   variable x(n) 
   dual variables y 

minimize (norm((A_matrixA*x-b_vectorA),2)+norm((A_matrixD*x-b_vectorD),2)+lambda*norm((v_delta),1))
  subject to
        
cvx_end

echo off

Here v_delta is the sum of the absolute values of the successive differences of x, i.e. the sum over i of |x_i - x_{i-1}|.

But it seems that CVX does not recognize this v_delta term, because it has no effect on the result.

You are not showing the entirety of your code. Specifically, where is your code that makes v_delta the sum of the absolute values of (x_i - x_{i-1})?

Also, your problem is not an LP. The one-norm regularization term can be formulated linearly, but the two-norm terms cannot. CVX will reformulate this as an SOCP and call a solver to solve the problem you have specified.

If lambda = 0 or is small enough, or if the optimal x without regularization has all components equal to each other (or close enough to equal), the optimal x may be the same with or without the regularization term.

In addition to that, because you have not shown the entirety of your code, I don’t know whether you have correctly implemented what you intended. Hint: if you place the expression for v_delta after the objective function rather than before, it will be treated as zero, and therefore have no effect.
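
To illustrate that hint with a sketch (reusing the matrices from your post, not your actual code): if the expression holder is filled in only after the objective is stated, CVX sees v_delta as all zeros when it forms the objective, for example

cvx_begin
   variable x(n)
   expression v_delta(n-1)
   % WRONG ordering: v_delta is still all zeros when the objective is formed,
   % so the regularization term contributes nothing
   minimize (norm((A_matrixA*x-b_vectorA),2)+norm((A_matrixD*x-b_vectorD),2)+lambda*norm(v_delta,1))
   for j=1:n-1
     v_delta(j) = x(j+1) - x(j);   % assigned too late to affect the objective
   end
cvx_end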

Hi Mark,

Thank you for your reply.

What I have done is run the minimization mentioned above without the regularization term. That is:

minimize (norm((A_matrixA*x-b_vectorA),2)+norm((A_matrixD*x-b_vectorD),2))

The resulting x from CVX is what I use to build the regularization term the next time I run the optimization.

Therefore, I use

v_delta=abs(diff(x,1))

I add this into the optimization, which becomes:

minimize (norm((A_matrixA*x-b_vectorA),2)+norm((A_matrixD*x-b_vectorD),2)+lambda*norm((v_delta),1))

Yet the result/graph of x the second time around is the same as the first one. When I change lambda, the optimal value does increase, but the resulting x is still the same.

I tried to do

minimize (norm((A_matrixAC*x-b_vectorAC),2)+norm((A_matrixAD*x-b_vectorAD),2))+(lambda*norm((diff(x)),1))

But I get the error that Undefined function ‘diff’ for input arguments of type ‘cvx’.

Another thing is that when running, it says SDPT3 is solving the dual problem.

Am I incorporating the regularization term incorrectly?

If you are calculating v_delta as the difference of a previously calculated optimal value of x (obtained without regularization), then it is just a numerical constant as far as CVX is concerned, so it has no effect on the optimal x.
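
For example (a minimal sketch; x_prev is just my name for the numeric solution from your first, unregularized run):

x_prev = x;                       % numeric solution from the earlier solve
v_delta = abs(diff(x_prev,1));    % a plain numeric vector, not a CVX expression
cvx_begin
   variable x(n)
   % lambda*norm(v_delta,1) is a fixed scalar here, so it only shifts the
   % optimal objective value; it cannot change the minimizing x
   minimize (norm((A_matrixA*x-b_vectorA),2)+norm((A_matrixD*x-b_vectorD),2)+lambda*norm(v_delta,1))
cvx_end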

Maybe there’s a nice vectorized way to get v_delta, but in any event, you can build it up using a for loop.

Hi Mark,

I greatly appreciate your insight. What you are saying makes sense, but I don’t understand how to incorporate the for loop into the minimization for this regularization term. Could you give me some advice on it?

Thank you!

For loop (non-vectorized way)

See http://cvxr.com/cvx/doc/basics.html#assignment-and-expression-holders

cvx_begin
   variable x(n) 
   dual variables y 
   expression v_delta(n-1)        % expression holder for the successive differences
   for j=1:n-1
     v_delta(j) = x(j+1) - x(j);  % fill the holder before the objective is stated
   end
minimize (norm((A_matrixA*x-b_vectorA),2)+norm((A_matrixD*x-b_vectorD),2)+lambda*norm(v_delta,1))
  subject to
        
cvx_end

Vectorized (nicer) way:

cvx_begin
   variable x(n) 
   dual variables y 
minimize (norm((A_matrixA*x-b_vectorA),2)+norm((A_matrixD*x-b_vectorD),2)+lambda*norm(x(2:n)-x(1:n-1),1))
  subject to
        
cvx_end

This is my best guess as to what you want. If it’s not quite right, you should be able to figure out how to fix it.
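
If you want to see the regularizer actually changing the solution, here is a rough usage sketch (my variable names; it assumes A_matrixA, b_vectorA, A_matrixD, b_vectorD and n already exist) that sweeps lambda and compares the solutions:

lambdas = [0 0.1 1 10];              % assumed trial values
X = zeros(n, numel(lambdas));
for k = 1:numel(lambdas)
    lambda = lambdas(k);
    cvx_begin quiet
       variable x(n)
       minimize (norm((A_matrixA*x-b_vectorA),2)+norm((A_matrixD*x-b_vectorD),2)+lambda*norm(x(2:n)-x(1:n-1),1))
    cvx_end
    X(:,k) = x;
end
plot(X)   % larger lambda should give a flatter (more piecewise-constant) x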


Thank you very much, Mark. I tried it and it’s working.