# Why an SOCP formulation of this problem?

The following simple CVX script implements a Dantzig selector for the linear regression problem min ||X*beta - y||. Since the objective is a 1-norm and the constraint is an infinity-norm bound, the problem can be transformed into an LP. I was surprised to see that CVX actually transformed it into an SOCP rather than an LP. Why does CVX choose an SOCP formulation over an LP formulation?

%
% First, get the problem size.
%
[n,p]=size(X);
%
% Now, solve using CVX.
%
cvx_begin
cvx_precision high
% The variable declaration, objective, and constraint were missing from the
% post as quoted; the lines below assume the standard Dantzig selector, with
% delta the constraint level.
variable beta(p)
minimize( norm(beta,1) )
subject to
norm(X'*(X*beta-y),inf) <= delta
cvx_end
optval=cvx_optval;
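To see concretely that this problem is an LP, here is a sketch in Python using `scipy.optimize.linprog` (an illustrative substitution for CVX/MATLAB, not from the original post). It minimizes sum(u) over (beta, u) subject to -u <= beta <= u and -delta <= X'(X*beta - y) <= delta; the function name `dantzig_lp` and the parameter `delta` are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_lp(X, y, delta):
    """Dantzig selector as a plain LP:
       min sum(u)  s.t.  -u <= beta <= u,  ||X'(X beta - y)||_inf <= delta.
    Variables are stacked as z = [beta; u]."""
    n, p = X.shape
    A = X.T @ X
    b = X.T @ y
    I = np.eye(p)
    Z = np.zeros((p, p))
    # Rows:  beta - u <= 0 ;  -beta - u <= 0 ;
    #        A beta <= delta + b ;  -A beta <= delta - b
    A_ub = np.block([[I, -I], [-I, -I], [A, Z], [-A, Z]])
    b_ub = np.concatenate([np.zeros(p), np.zeros(p), delta + b, delta - b])
    c = np.concatenate([np.zeros(p), np.ones(p)])   # objective: sum(u)
    bounds = [(None, None)] * p + [(0, None)] * p   # beta free, u >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p], res.fun
```

As a sanity check, when delta >= ||X'y||_inf, beta = 0 is feasible, so the optimal value is 0.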

I think this question can only be answered by the CVX developer, @mcg, who doesn’t visit the forum as often as he used to.

While waiting for him to swing by, you can look at his dissertation https://web.stanford.edu/~boyd/papers/pdf/mcg_thesis.pdf to get some idea of at least how the early versions of CVX worked.

If you have

\| x \| \leq t

where x is a scalar, then you can alternatively write

x \leq t \mbox{ and } -x \leq t.

It is not obvious that one formulation is better than the other unless you want to use a simplex-based optimizer.
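The scalar equivalence above can be checked numerically; here is a tiny sketch using `scipy.optimize.linprog` (an illustrative choice, not part of the answer), which finds the smallest t satisfying the pair of linear inequalities x0 <= t and -x0 <= t for a fixed x0:

```python
from scipy.optimize import linprog

# For a fixed scalar x0, the smallest t with x0 <= t and -x0 <= t is |x0|:
# the absolute-value (i.e. scalar 2-norm) constraint is exactly two linear
# inequalities, so a pure LP solver handles it with no cone machinery.
x0 = -3.7
res = linprog(c=[1.0],                  # minimize t
              A_ub=[[-1.0], [-1.0]],    # x0 - t <= 0  and  -x0 - t <= 0
              b_ub=[-x0, x0],
              bounds=[(None, None)],    # t is free
              method="highs")
# res.fun -> 3.7, i.e. |x0|
```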
