I have a linear program in conic form that I wish to solve in CVX:

```
cvx_begin
    variable x(n) nonnegative
    minimize( c'*x )
    subject to
        A*x == b
cvx_end
```

Here, $\boldsymbol{A} \in \mathbb{R}^{m \times n}$ with $m > n$. I solved the problem with my own algorithm, and I verified the solution with the linear programming routine in MATLAB’s Optimization Toolbox:

```
z = linprog(c,[],[],A,b, zeros(size(c)), []);
```

Thus, I know that the feasible region is nonempty for my choice of $\boldsymbol{A}$, $\boldsymbol{b}$, and $\boldsymbol{c}$. But CVX claims to be unable to solve the problem:

```
Redundant equality constraints detected.
CVX cannot solve this problem; but it is likely infeasible.
Status: Overdetermined
Optimal value (cvx_optval): NaN
```

Is there no way for me to convince CVX that the problem is actually solvable?
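One workaround I have been experimenting with (a sketch only; `tol` is my own threshold, not a CVX setting) is to reduce the equality system to a maximal set of linearly independent rows before handing it to CVX, since with $m > n$ a consistent system $\boldsymbol{A}\boldsymbol{x} = \boldsymbol{b}$ necessarily contains redundant rows:

```matlab
% Keep a maximal set of linearly independent rows of [A b]; when the
% original system is consistent, the reduced system A(jb,:)*x == b(jb)
% has the same solution set. tol is my own choice, not a CVX parameter.
tol = 1e-9;
[~, jb] = rref([A b]', tol);   % pivot columns of the transpose = independent rows
Ar = A(jb, :);
br = b(jb);

cvx_begin
    variable x(n) nonnegative
    minimize( c'*x )
    subject to
        Ar*x == br
cvx_end
```

But this feels like it should be unnecessary; I would rather tell CVX to tolerate the redundancy.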

A small working example with $m = 5$ and $n = 2$ is

```
A = [-0.9439, -0.1537; 0.2576, -0.4602; 0.6777, 0.5686; -0.3149, -1.5169; -0.0058, -1.4912];
b = [-0.1302; -0.2308; 0.3366; -0.8209; -0.7913];   % column vector, to match A*x
c = [-1.1548, -0.4573]';
```

I calculate the solution $\hat{\boldsymbol{x}} = [0.0516, 0.5305]^T$ with optimum $\boldsymbol{c}^T \hat{\boldsymbol{x}} = -0.3021176$ using three methods: my algorithm, MATLAB’s `linprog`, and a short YALMIP program using the Mosek solver.

EDIT: Mark Stone pointed out that the problem with the aforementioned A, b, and c is infeasible. That was not the case for the instance I originally initialized and ran, **but** it was true when I reran my code with the A, b, and c posted above.

However, I **know** that the original problem is feasible because I **designed** it that way:

```
m = 5;
n = 2;
A = randn(m,n);
v = rand(n,1);
c = randn(n,1);
b = A*v;
```

The feasible region must contain at least one point, namely v: it is nonnegative by construction (`rand` draws from $[0,1]$), and it satisfies $\boldsymbol{A}\boldsymbol{v} = \boldsymbol{b}$ exactly.
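This can be confirmed numerically (a quick sanity check, not part of my algorithm): because b = A*v by construction, b lies in the column space of A, so the augmented matrix [A b] has the same rank as A and the m equality constraints are consistent but redundant — which is presumably what triggers CVX’s “Redundant equality constraints detected” message.

```matlab
% Sanity check on the generated instance: the equalities are consistent
% (rank does not increase when b is appended) and v is a feasible point.
assert(rank([A b]) == rank(A));   % b is in the column space of A
assert(norm(A*v - b) < 1e-12);    % v satisfies the equalities exactly
assert(all(v >= 0));              % v = rand(n,1) is nonnegative
```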

Since I was foolish and did not save the original A, b, and c from my working example, I had to initialize new problem variables to prove my point. However, I saved the matrices to 12 decimal places this time:

```
A = [4.147002930480000e-01 -5.148816329260000e-01;
3.484411999520000e-01 -8.964461505020000e-01;
3.492544166640000e-01 -1.203268186420000e+00;
-7.292472676300000e-01 1.037815639490000e+00;
3.268402487630000e-01 -8.459442123360000e-01];
b = [7.288209781380001e-02;
3.630438960010000e-02;
2.000855166540000e-02;
-1.210455116160000e-01;
3.378115026800000e-02];
c = [-2.971267999950000e-01;
-3.232037795940000e+00];
```
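To avoid losing digits again, the exact binary values can be preserved with `save`/`load` instead of copying decimal printouts (the filename below is my own choice):

```matlab
% Preserve A, b, c exactly in a binary .mat file rather than as
% 12-decimal-digit text. The filename is arbitrary.
save('lp_instance.mat', 'A', 'b', 'c');
% ... later, in a fresh session ...
load('lp_instance.mat');   % restores A, b, c bit-for-bit
```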

My algorithm recovers $\hat{\boldsymbol{x}} = [0.2424865589365387, 0.05375439221311855]^T$. I verify it with the YALMIP program

```
y = sdpvar(n,1);
Constraints = [y >= 0; A*y == b];
Objective = c'*y;
options = sdpsettings('verbose',2,'solver','mosek');
sol = solvesdp(Constraints,Objective,options);   % optimize() in newer YALMIP releases
if sol.problem == 0
    solution = double(y);
end
```

and the Optimization Toolbox code from before:

```
z = linprog(c,[],[],A,b, zeros(size(c)), []);
```

Mosek gives me

```
Problem status : PRIMAL_AND_DUAL_FEASIBLE
Solution status : OPTIMAL
```

and `linprog` gives me `exitflag` equal to 1, which I have interpreted (correctly, I hope) as “function converged to a solution”. When I take the difference between my vector `x` and those results, I get

```
[z solution] - [x x]
ans =
-3.372579993055069e-13 -3.372579993055069e-13
-4.644347406657090e-12 -4.644347406657090e-12
```

and when I check the feasibility of my solution, I get

```
A*x - b
ans =
-2.251421271637355e-12
-4.045888624126803e-12
-5.547048931298093e-12
4.582084711657330e-12
-3.824808525454415e-12
```

My own algorithm uses a tolerance of $10^{-6}$ for constraint satisfaction, so the answer given above is fine for my purposes. But CVX *still* claims that the problem is infeasible, even with `cvx_precision` set to `low`! I suspect that CVX demands stricter constraint satisfaction than I need here. Is this true?