I would like to run the TFOCS_SCD function on multiple independent problems.

These problems have the same objective function but partially different inequality constraints.

I use TFOCS for Apache Spark, so I would expect it to run faster if I could run multiple problems at the same time instead of looping through them one by one.

Is it possible to specify multiple affineF operators and have the function solve each problem independently? Any help is much appreciated.

```scala
import org.apache.spark.mllib.linalg.{DenseVector, Vectors}

// Zero starting point for the primal variable, distributed with one
// DenseVector per partition.
val zeroVec = Array.fill(m.length)(0.0)
val rdd = sc.parallelize(zeroVec).glom.map(new DenseVector(_))

// Smoothed objective, affine constraint operator, and the projection
// applied to the dual variable.
val objectiveF = new ProxShiftRPlus(objFunc)
val affineF = new LinopMatrixAdjoint(constMatrixRDD, concatConstraintsDense)
val dualProxF = new ProjRPlus()

// Primal and dual starting points.
val x0 = rdd
val z0 = Vectors.zeros(concatConstraintsDense.size).toDense

val out = TFOCS_SCD.optimize(objectiveF, affineF, dualProxF, 1e-6, x0, z0, 10, 1e-1, 1, 2)
```

@Stephen_Becker is probably the only one around here who can answer this.

Thank you so much for your answer @Mark_L_Stone.

I hope that @Stephen_Becker will be kind enough to help with this.

Good question. I think the short answer is No.

The longer answer is probably not: I don’t know the Spark version (or Spark itself), but it would require rewriting the code. Each problem has its own dual variables, and you’d need to apply the adjoint of each linear operator to each set of dual variables, so there’s no clear savings here. If forming one adjoint took a lot of memory, and you could reuse that memory to create the other linear operator adjoints, then there might be a savings, but that seems like a rare case. If you’re using the linear operators in a typical matrix-free manner (e.g., using sparsity or FFTs), then I don’t see a benefit; only if you’re using them in a load-chunks-of-a-large-matrix manner could there be one.

So it may be possible, but with little benefit, and it would require someone with intimate knowledge of your use case, and of Spark, and of TFOCS. Hence I think it’s better to burn computer time than human programming time in this particular case.
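For what it’s worth, the loop-based fallback is a one-liner given the setup in the original question. This is only a sketch: it reuses `objectiveF`, `dualProxF`, `x0`, and `z0` as defined there, and `affineOperators` is a hypothetical `Seq[LinopMatrixAdjoint]` holding one constraint operator per problem:

```scala
// Sketch only: solve each problem with its own TFOCS_SCD call, sharing the
// objective and starting points across problems. `affineOperators` is a
// hypothetical Seq[LinopMatrixAdjoint], one entry per problem's constraints.
val solutions = affineOperators.map { affineF =>
  TFOCS_SCD.optimize(objectiveF, affineF, dualProxF, 1e-6, x0, z0, 10, 1e-1, 1, 2)
}
```

Since each problem keeps its own dual variables anyway, this loop does essentially the same work a batched version would, just sequentially.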