# How to implement the weighted L1 norm with adaptive weight with CVX?

Hi All,

I would like to solve the following problem:

minimize norm(Wx,1)
s.t. norm(A*x - b, 2) <= epsilon

so the classical lasso or basis pursuit problem, but with a weighted l1 norm.
In particular, the weights are updated at each iteration and are defined from the solution x of the previous iteration.
How can I implement this problem in CVX?

Thanks,
M

Is your objective `norm(W.*x,1)` , i.e., the `.` wasn’t displayed in `.*` ?

In any event, how about calling CVX iteratively within a while or for loop? Initialize W, then start the loop. Inside it, call CVX (declare x as the only CVX variable). After CVX exits (or at the beginning of the next pass through the loop), check the termination conditions for your top-level algorithm, and exit if they are satisfied. Otherwise set `W = x;` and start the next iteration through the loop. Does this do what you want?
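A minimal sketch of that loop structure, assuming `A`, `b`, `epsilon`, the problem size `n`, and an iteration limit `maxIter` are already defined (names here are illustrative, not from any particular source):

```matlab
% Iterative reweighting wrapped around CVX (illustrative sketch).
W = ones(n,1);                      % initialize the weights
for k = 1:maxIter
    cvx_begin quiet
        variable x(n)               % x is the only CVX variable
        minimize( norm(W.*x, 1) )
        subject to
            norm(A*x - b, 2) <= epsilon
    cvx_end
    % ... check your own top-level termination condition on x here ...
    W = x;                          % weight update as suggested in this post
end
```

Each pass through the loop solves a complete convex problem for the current, fixed W.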

Yes, the `.` was missing, and the objective function is `norm(W.*x,1)`.

However, I have also thought about this iterative calling of CVX, but at each iteration I can't specify the initial point of CVX, that is, I can't start from the last solution of the previous loop. More in detail:

```
W initialization

for k = 1:N
    cvx_call
    update of W
end
```

When I call CVX at the k-th iteration, I can't use the solution from iteration k-1 as the initial point.

What you are doing is updating W. So each time through the loop, CVX is solving a different problem, hopefully (for you) converging to what you want (I have no idea whether it will). You don't need to (and cannot) provide a starting value for x on each CVX invocation, but CVX will solve the problem using the particular W you have provided that time through the loop. Is this what you want to do?

Yes, in this way I'm solving a different problem each time through the loop. For each W, CVX solves a problem and finds a new solution.
In my original problem, I wish W would be updated at each iteration of CVX, so that I am considering an adaptive l1 norm.
That is, at each iteration of CVX, I would like a different W to be used with respect to the previous iteration.

Are you in effect wanting
`norm(x.*x,1)`
as the objective function? If so, that can be expressed in CVX as
`sum_square(x)`.
You cannot change the problem that CVX is solving in the middle of the solution process.

I think you need to step back and determine the optimization problem you really want to solve. Then determine how to solve it.

I have already determined my optimization problem, which is widely known in the literature. See, for instance, the paper at https://arxiv.org/abs/0711.1612

However, I think it is not possible to implement it with the CVX toolbox.

Your formulation was confusing me because it is incorrect. The algorithm proposed in section 2.2 of the paper at your link above is quite straightforward to implement with CVX, and follows the top-level algorithm paradigm I described in my first post in this thread, except that the updates to W are not as you described, but more or less the reciprocal of the magnitude of each component of x (with an extra epsilon thrown in), as shown in equation (6).

So just use the framework I described, but update the weights after each iteration by equation (6). You can put those in a vector W and use `W.*x`, or place them as the entries in a diagonal matrix W and use `W*x` as in the paper. Each time through the loop, CVX is being invoked to perform step 2 (per the paper) of the algorithm.
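Putting the pieces together, a sketch of the reweighted scheme might look like the following, assuming `A`, `b`, `epsilon`, `n`, and `maxIter` are given (the stabilization parameter `eps_w` corresponds to the epsilon in equation (6) of the paper; its value is problem-dependent):

```matlab
% Reweighted l1 minimization, per section 2.2 of arXiv:0711.1612 (sketch).
eps_w = 0.1;                        % stabilization parameter from eq. (6)
W = ones(n,1);                      % step 1: initialize weights to 1
for k = 1:maxIter
    cvx_begin quiet
        variable x(n)
        minimize( norm(W.*x, 1) )   % step 2: solve the weighted l1 problem
        subject to
            norm(A*x - b, 2) <= epsilon
    cvx_end
    W = 1 ./ (abs(x) + eps_w);      % step 3: update weights via eq. (6)
end                                 % step 4: repeat until convergence or maxIter
```

Alternatively, form a diagonal matrix `diag(W)` and use `W*x` as written in the paper; the vector form with `W.*x` is equivalent and avoids building the matrix.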