Optimization with affine smooth and affine non-smooth functions

Hi all,

I’ve been using TFOCS for a while now for Lasso and weighted Lasso with no issues; lately, however, I’ve been having trouble using the SCD solver for general affine inequality constraints. Specifically, I’m trying to solve problems of the form

min_x ||A_1*x + y||_2^2 + c^T*x   s.t.   A_2*x >= b, x >= 0, sum(x) = d

I looked at past posts relating to this, but I can’t seem to use those responses to get this optimization program working well, even for the simple linear case of

min_x c^T*x   s.t.   A_2*x >= b, x >= 0, sum(x) = d

The call to tfocs I’ve been using (for the linear case) is

x = tfocs_SCD( smooth_linear(c), { 1, zeros(N,1); A_2, -b; ones(1,N), -d }, ...
    { proj_Rplus; proj_Rplus; proj_Rn }, mu, [], [], tfocs_opts, cont_opts );

For really small problems (N ~ 10), this program matches the output of MATLAB’s linprog and satisfies all the constraints. The moment I increase the problem size, however, tfocs starts taking much longer than MATLAB’s built-in solver, and also starts producing results that come nowhere near satisfying the constraints. Am I calling tfocs wrong? I double-checked all my linear operators, and the fact that MATLAB’s linprog (and quadprog for the quadratic case) returns a reasonable solution given the same operators leads me to think that I’m simply calling tfocs in some sub-optimal manner.
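For reference, the linprog call I’m comparing against looks roughly like this (variable names are illustrative; linprog expects A*x <= b, so the inequality is negated):

```matlab
% Baseline via linprog:  min c'*x  s.t.  A_2*x >= b, sum(x) = d, x >= 0.
A_ineq = -A_2;            % A_2*x >= b  <=>  -A_2*x <= -b
b_ineq = -b;
A_eq   = ones(1, N);      % sum(x) = d
b_eq   = d;
lb     = zeros(N, 1);     % x >= 0 (no upper bound)
x_lp   = linprog(c, A_ineq, b_ineq, A_eq, b_eq, lb, []);
```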

My first question is: Has anyone else encountered similar issues with simple linear programs and affine constraints and solved them satisfactorily?

My second question is: Returning back to the quadratic program, is it possible to provide affine forms for both the smooth_quad function and the proj_Rplus function simultaneously?

Thanks in advance for any help!

If it works for small problems but not large ones, my guess is that the issue is with continuation. Can you tell us what you have set tfocs_opts and cont_opts to? A very simple cause would be continuation being turned off, or mu being too large. There are sometimes issues with the continuation iterations not converging.

Hopefully that fixes it. If not, we can investigate further.

Also, we can remove one of the dualizations. Right now you have smooth_linear(c) as the primal objective, but we can actually fold this into proj_simplex, and that will help convergence. Let me know if you need help with this (the linear term interacts with this a bit, but not in a major way).

Hi Stephen,

Thanks for the fast reply! I actually played around quite extensively with the solver and continuation options, so I’ll try to briefly summarize what I noticed. First, the options I have set now are:

tfocs_opts.tol = 1e-3;
tfocs_opts.maxIts = 2000;
tfocs_opts.continuation = true;
cont_opts.betaTol = 2;
cont_opts.maxIts = 3;
cont_opts.accel = true; 
cont_opts.tol = 1e-3;
cont_opts.muDecrement = 1;
mu = 1;

What I notice is:

  • If I make mu smaller (even by only 0.00001!), the solution becomes unstable, with the majority of the constraints being grossly violated.
  • If I initialize tfocs very well in terms of the primal variable, I don’t see much of a benefit in solution quality.
  • Adding more continuation steps helps only marginally.
  • I have to turn the tolerance down quite far to get decent results on mid-sized problems (where tfocs is still much slower than the MATLAB built-in).
  • I tried changing the continuation decay parameter betaTol, to no avail.

In terms of incorporating proj_simplex, do you mean running the following instead?

x = tfocs_SCD( 0, { diag(c), 0; A_2, -b; ones(1,N), -d }, ...
    { proj_simplex( d*sum(c) ); proj_Rplus; proj_Rn }, mu, [], [], tfocs_opts, cont_opts );

I was wondering what to put into the simplex argument, since it seems to bound the sum.

Thanks again!

-Adam

Hi Adam,
With a lot of dualized constraints, sometimes it just doesn’t work well. The extreme sensitivity is a bit unusual, so I will investigate shortly. For now, regarding proj_simplex: I was thinking of putting it into the first term (the primal objective), not the dual prox. I have added a function to TFOCS that makes the shift by c'x simpler, so download the latest version (or just grab prox_shift.m) and run code like this:

n   = 10;
rng(343);
c   = randn(n,1);
m   = n;
A   = randn(m,n);
b   = randn(m,1);
d   = 10;

%% test solution via CVX
cvx_begin
  variable x(n)
  minimize dot(c,x)
  subject to
    x >= 0
    sum(x) == d
    A*x >= b
cvx_end
xTrue = x;

%% Try in TFOCS
% (make sure TFOCS is in your path)

op1     = proj_simplex(d);
op      = prox_shift( op1, c );
mu      = .1;
x0      = zeros(n,1);
opts    = struct('maxIts',2e3);
opts.errFcn = @(f,d,p) norm(p-xTrue)/norm(xTrue);
[x,out,opts]= tfocs_SCD(op,{A,-b},proj_Rplus,mu,x0,[],opts);
fprintf('Final error is %.2e\n',  norm(x-xTrue)/norm(xTrue) )

This way you can keep it as a primal, and dualize the others, and convergence should be faster. Let me know if that’s not clear. I’ll try to take a look at the other issues – maybe you can post or email me a .mat file with a test case?

Thanks Stephen!

That works a LOT better. I’m still running some experiments to see how the timing compares with the equivalent MATLAB solver. I’ll send you the [cleaned-up] code later today that runs the particular problem I’m working with.

I guess this now brings me to the next question. Say I wanted to add an affine l_2 norm term to the objective. It seems that tfocs_SCD() only allows affine transforms for the proximal functions, and tfocs() only allows affine transforms for the smooth function. Is there an easy way to do this as well? I’ll include the code I’m trying to run in what I send you.
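To make the question concrete, here is a sketch of the call I wish I could write, building on your example above. The affine form attached to smooth_quad is invented syntax, just to illustrate the intent; I don’t believe TFOCS actually accepts it:

```matlab
% Hypothetical call -- {smooth_quad, A_1, y} is NOT real TFOCS syntax.
% It just illustrates wanting:  min ||A_1*x + y||^2 + c'*x
%                               s.t. A_2*x >= b, x on the scaled simplex.
op = prox_shift( proj_simplex(d), c );           % primal: c'*x + simplex indicator
x  = tfocs_SCD( {op, smooth_quad, A_1, y}, ...   % invented: affine-composed smooth term
                {A_2, -b}, proj_Rplus, mu, [], [], tfocs_opts, cont_opts );
```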

Thanks so much!