I’m sorry if this is a very stupid question, but I have an optimization problem with an obvious feasible point at which the objective function is clearly finite, yet cvx returns an optimal value of -Inf and I have absolutely no idea why. I have struggled mightily and checked my constraints (quite literally) hundreds of times, so I am wondering whether someone can tell me what might be going wrong. This is the code (clearly phi = the 0 vector is feasible):

One image per post is one image too much. Can you post it as preformatted text? Just copy-paste it as text. Posting an image is not so helpful because we cannot execute the image in Matlab.

Also, remember that the problem 10^{-30}x=1 also has an obvious feasible point but it is unlikely any solver will find it. In other words, numerical/scaling issues.
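To make that concrete, here is a hedged CVX sketch (illustrative only, not the poster's model) of such a badly scaled problem:

```
% Feasible on paper (x = 1e30), but far outside the range a
% double-precision interior-point solver can handle reliably.
cvx_begin
    variable x
    1e-30 * x == 1
cvx_end
% Likely outcome: an Infeasible or Inaccurate status rather than x = 1e30.
```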

Optimal value of -Inf when maximizing means your problem has been determined to be (primal) infeasible, as you say.

All but section 1 of https://yalmip.github.io/debugginginfeasible apply to CVX. Given that you claim to have a feasible solution, you can start with section 5, which you’ll have to do differently in CVX than the YALMIP instructions describe (the YALMIP way is easier and less error-prone). You don’t actually need to declare dlogv or dphi as expressions, because you assigned them all at once. Therefore, outside of CVX, you can set your variable phi to the known feasible solution, run the statement assigning dphi, then skip down to the constraints and numerically evaluate them (perhaps change LHS >= RHS to LHS - RHS and see whether the constraints are satisfied, i.e., no more than 1e-8 violation). If they’re not, then either your solution really is infeasible (follow the other sections of the link) or you have incorrectly coded the constraints.
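A sketch of that check in plain MATLAB, outside of any cvx_begin/cvx_end block (n, D, LHS, and RHS are placeholders for whatever appears in your model, not your actual code):

```
% Substitute the claimed feasible point for the CVX variable
phi  = zeros(n, 1);        % the known feasible solution (here phi = 0)
dphi = D * phi;            % repeat the same assignment used inside the model

% For each constraint written as LHS >= RHS in the model,
% a positive entry of RHS - LHS is a violation.
viol = max(RHS - LHS, 0);
if max(viol(:)) > 1e-8
    disp('constraint violated: either not feasible or miscoded')
else
    disp('constraints satisfied to within 1e-8')
end
```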

Thanks for replying. I didn’t post it as text since many of the terms in the constraints and in defining the objective function rely on variables from the rest of the code and other files on my computer, so the above wouldn’t run without them. I was mainly looking for reasons as to why the feasible point wouldn’t be found. Thanks!

Thank you so much for your reply. I will look into this guide and try to debug. phi = 0 is feasible for sure (setting beta = 0 in my code generates output that is not -Inf, e.g., but setting beta > 0, which expands the feasible set, yields infeasibility, which should not be possible).

We the readers were at a disadvantage, not knowing the values of alpha or int_length.

In any event, it looks like you might have a thin feasible region: no interior in some dimensions, maybe just a single point. Solvers don’t like that (in a similar spirit to the example above from @Michal_Adamaszek). beta = 0 “forces” the solver to find the feasible solution phi = 0 (maybe in presolve, if the solver has one). When beta > 0, the solver is not forced to find it, and doesn’t. If phi = 0 is the only feasible solution, then it’s not much of an optimization problem, because there’s nothing to optimize.

It’s possible that changing the solver or removing the objective could change this behavior (both are suggested in the link).
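For example, in CVX the solver can be switched with a single command before cvx_begin (which solvers are available depends on your installation):

```
cvx_solver sedumi   % or sdpt3, gurobi, ... instead of mosek
```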

\beta and \alpha are taken to be quite large, which is why I was confused. However, your explanation makes sense, given that int_lengths is a vector of fairly small numbers (~1/1000) representing the distance between “evaluation points” of a function involved in the objective function, which might produce a small feasible region.

The part that confuses me now is that even when I take \alpha to be obscenely large, it still returns infeasibility, and this makes no sense.

Removing the constraints involving \alpha gives a result from the software, but of course I need these constraints for the problem I want to solve.

Forum readers might be able to provide more help if they had a reproducible example. Perhaps you could produce a small version that still exhibits the phenomenon, while allowing the input data to be posted?

Nor have you yet posted the CVX and solver output, which doesn’t require any input data to be posted.

The following is the output. I can’t reproduce a small example easily, since most of the code in the optimization problem requires the input of a specific dataset. There are clearly plenty of numerical issues.

Calling Mosek 9.1.9: 45453 variables, 15154 equality constraints
For improved efficiency, Mosek is solving the dual problem.

MOSEK warning 710: #1 (nearly) zero elements are specified in sparse col ‘’ (6269) of matrix ‘A’.
MOSEK warning 710: #1 (nearly) zero elements are specified in sparse col ‘’ (23036) of matrix ‘A’.
MOSEK warning 710: #1 (nearly) zero elements are specified in sparse col ‘’ (25325) of matrix ‘A’.
MOSEK warning 57: A large value of -2.5e+11 has been specified in c for variable ‘’ (1).
MOSEK warning 57: A large value of 2.5e+11 has been specified in c for variable ‘’ (2).
MOSEK warning 57: A large value of -2.5e+11 has been specified in c for variable ‘’ (982).
MOSEK warning 57: A large value of 2.5e+11 has been specified in c for variable ‘’ (983).
MOSEK warning 57: A large value of -2.5e+11 has been specified in c for variable ‘’ (1963).
MOSEK warning 57: A large value of 2.5e+11 has been specified in c for variable ‘’ (1964).
MOSEK warning 57: A large value of 1.4e+09 has been specified in c for variable ‘’ (2943).
MOSEK warning 57: A large value of 1.4e+09 has been specified in c for variable ‘’ (3924).
MOSEK warning 57: A large value of 1.4e+09 has been specified in c for variable ‘’ (4905).
Problem
Name :
Objective sense : min
Type : CONIC (conic optimization problem)
Constraints : 15154
Cones : 14170
Scalar variables : 45453
Matrix variables : 0
Integer variables : 0

Optimizer started.
Presolve started.
Linear dependency checker started.
Linear dependency checker terminated.
Eliminator started.
Freed constraints in eliminator : 0
Eliminator terminated.
Eliminator - tries : 1 time : 0.00
Lin. dep. - tries : 1 time : 0.00
Lin. dep. - number : 0
Presolve terminated. Time: 0.03
Problem
Name :
Objective sense : min
Type : CONIC (conic optimization problem)
Constraints : 15154
Cones : 14170
Scalar variables : 45453
Matrix variables : 0
Integer variables : 0

Optimizer - threads : 16
Optimizer - solved problem : the primal
Optimizer - Constraints : 984
Optimizer - Cones : 14170
Optimizer - Scalar variables : 44472 conic : 42510
Optimizer - Semi-definite variables: 0 scalarized : 0
Factor - setup time : 0.09 dense det. time : 0.00
Factor - ML order time : 0.00 GP order time : 0.00
Factor - nonzeros before factor : 8585 after factor : 9060
Factor - dense dim. : 0 flops : 2.42e+07
ITE PFEAS DFEAS GFEAS PRSTATUS POBJ DOBJ MU TIME
0 1.0e+00 2.5e+11 3.0e+09 0.00e+00 3.000169393e+09 -1.061849030e+04 1.0e+00 0.17
1 1.2e-02 3.1e+09 3.3e+08 -1.00e+00 3.007216484e+09 -1.707178829e+06 1.2e-02 0.22
2 1.7e-04 4.2e+07 5.1e+07 -1.00e+00 3.789674245e+09 -1.196043206e+08 1.7e-04 0.27
3 1.2e-04 2.1e+07 3.6e+07 -1.00e+00 3.671940333e+09 -2.388448427e+08 8.4e-05 0.30
4 1.5e-06 2.7e+05 4.1e+06 -1.00e+00 -1.402501873e+10 -1.793488202e+10 1.1e-06 0.41
Optimizer terminated. Time: 0.42

I think that is Mosek’s rather polite way of telling you the input data is garbage.

Are there perhaps some near zero input values which can be made exactly zero without compromising the integrity of the model? Can you do anything to change scaling to make large magnitude numbers smaller? Or is your problem just intrinsically extremely ill-conditioned, in which case double precision solvers might not be suitable?
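As a hedged sketch of both suggestions (A and c are placeholders for your model data, and the threshold and scale factor are illustrative choices, not prescriptions):

```
% Zero out the (nearly) zero entries that trigger MOSEK warning 710,
% if the model's integrity allows it
A(abs(A) < 1e-12) = 0;

% Rescale the objective so coefficients like 2.5e11 become O(1);
% the optimal value of the rescaled problem is the original divided by s
s = max(abs(c));
c = c / s;
```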

The strange thing about the input data is that it has no such problems when run through a different optimization problem (which I’m trying to compare this one to). Also, when I remove the constraints containing \alpha in this optimization problem, I get the following output which is giving a feasible solution, but with numerical errors remaining. I’ve omitted some of the iterations below.

My_Optimization_Problem_Fixed_Lambda

Calling Mosek 9.1.9: 43497 variables, 15154 equality constraints
For improved efficiency, Mosek is solving the dual problem.

MOSEK warning 710: #1 (nearly) zero elements are specified in sparse col ‘’ (4313) of matrix ‘A’.
MOSEK warning 710: #1 (nearly) zero elements are specified in sparse col ‘’ (21080) of matrix ‘A’.
MOSEK warning 710: #1 (nearly) zero elements are specified in sparse col ‘’ (23369) of matrix ‘A’.
MOSEK warning 57: A large value of 1.4e+09 has been specified in c for variable ‘’ (987).
MOSEK warning 57: A large value of 1.4e+09 has been specified in c for variable ‘’ (1968).
MOSEK warning 57: A large value of 1.4e+09 has been specified in c for variable ‘’ (2949).
Problem
Name :
Objective sense : min
Type : CONIC (conic optimization problem)
Constraints : 15154
Cones : 14170
Scalar variables : 43497
Matrix variables : 0
Integer variables : 0

Optimizer started.
Presolve started.
Linear dependency checker started.
Linear dependency checker terminated.
Eliminator started.
Freed constraints in eliminator : 0
Eliminator terminated.
Eliminator - tries : 1 time : 0.00
Lin. dep. - tries : 1 time : 0.00
Lin. dep. - number : 0
Presolve terminated. Time: 0.03
Problem
Name :
Objective sense : min
Type : CONIC (conic optimization problem)
Constraints : 15154
Cones : 14170
Scalar variables : 43497
Matrix variables : 0
Integer variables : 0

Optimizer - threads : 16
Optimizer - solved problem : the primal
Optimizer - Constraints : 1414
Optimizer - Cones : 14170
Optimizer - Scalar variables : 42946 conic : 42510
Optimizer - Semi-definite variables: 0 scalarized : 0
Factor - setup time : 0.06 dense det. time : 0.00
Factor - ML order time : 0.00 GP order time : 0.00
Factor - nonzeros before factor : 9836 after factor : 1.04e+04
Factor - dense dim. : 0 flops : 2.42e+07
ITE PFEAS DFEAS GFEAS PRSTATUS POBJ DOBJ MU TIME
55 3.3e-13 2.0e-02 2.0e-15 1.00e+00 3.519828392e+00 3.519816994e+00 3.1e-20 2.81
56 3.3e-13 2.0e-02 2.0e-15 1.00e+00 3.519828392e+00 3.519816994e+00 3.1e-20 2.91
57 3.6e-13 2.2e-02 2.0e-15 1.00e+00 3.519828460e+00 3.519816996e+00 3.1e-20 3.00
58 3.6e-13 2.7e-02 2.0e-15 1.00e+00 3.519828393e+00 3.519816998e+00 3.1e-20 3.08
59 3.6e-13 2.7e-02 2.0e-15 1.00e+00 3.519828393e+00 3.519816998e+00 3.1e-20 3.19
Optimizer terminated. Time: 3.28

On some problems you get away with bad input data; on some you don’t. The objective function matters even to the feasibility determination, although you might think it shouldn’t: it either goes into the constraints via the epigraph formulation, or couples into the other constraints in presolve, affects scaling, or something similar. Trying to find a feasible solution as a pure feasibility problem, i.e., with no objective, might be easier, because at least the objective function’s scaling isn’t thrown into the mix.
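In CVX, the feasibility-only version is just the model with the objective line removed (a sketch with placeholder names, not the actual model):

```
cvx_begin
    variable phi(n)
    % ... same constraints as before, but no maximize(...) line ...
cvx_end
% cvx_status then reports whether a feasible point was found
```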

Also, I’d like to say thank you again for looking into this in such detail, particularly since my understanding of the software is so weak and this problem is perhaps poorly formed.

Both log outputs you posted look equally bad. The second one sort of solves but you have huge solution norms and considerable violations. I would put them in the same category and not get too keen on one having a different final status than the other. The first thing to do is to tame down your scaling so that you are not being warned about huge coefficients. Then like Mark says make the problem more robust so that the feasible set is “thick” and that the expected solution is not too large.