This post corresponds to https://or.stackexchange.com/questions/5271/how-can-i-convexify-allowed-some-approximation-the-objective-function

I want to maximize

\sum_{u,b} D_{u,b} H_{u,b} T_u

subject to

\sum_b H_{u,b} T_u - \sum_b D_{u,b} H_{u,b} T_u = 1 for each u.

Introducing Y_{u,b} = T_u \cdot D_{u,b} linearizes both the objective and the constraint, giving

\text{maximize} \sum_{u,b} H_{u,b} Y_{u,b}

subject to

\sum_b H_{u,b} T_u - \sum_b H_{u,b} Y_{u,b} = 1 for each u.

Finally, we linearize the relationship between Y, T, and D with big-M constraints.
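Written out, with M an upper bound on T_u, the big-M constraints forcing Y_{u,b} = T_u \cdot D_{u,b} (for binary D_{u,b}) are

Y_{u,b} \le M D_{u,b}

Y_{u,b} \ge 0

Y_{u,b} \le T_u

Y_{u,b} \ge T_u - M(1 - D_{u,b})

for each u and b. When D_{u,b} = 0 the first two force Y_{u,b} = 0; when D_{u,b} = 1 the last two force Y_{u,b} = T_u.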

My script (CVX syntax) declares the variables as

```
variable T(U)           % continuous, T(u) = T_u
variable D(U,B) binary  % D(u,b) = D_{u,b}
variable Y(U,B)         % Y(u,b) should equal T(u)*D(u,b)
```

I model the objective function as

```
maximize sum(sum(H.*Y))
```

And the constraints as

```
for u=1:U
    % original constraint with D(u,b)*T(u) replaced by Y(u,b)
    sum(H(u,:).*T(u)) - sum(Y(u,:).*H(u,:)) == 1;
end
for b=1:B
    for u=1:U
        % big-M constraints forcing Y(u,b) = T(u)*D(u,b)
        Y(u,b) <= D(u,b).*M;
        Y(u,b) >= 0;
        Y(u,b) <= T(u);
        Y(u,b) >= T(u) - (1-D(u,b))*M;
    end
end
```
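As a quick sanity check of the linearization itself (in plain Python, independent of CVX; M = 10 is an arbitrary illustrative bound, and `y_interval` is a helper I made up for this check), the four big-M constraints do collapse the feasible set for Y to the single point T*D whenever 0 <= T <= M and D is binary:

```python
def y_interval(T, D, M):
    """Feasible [lo, hi] interval for Y under the four big-M constraints:
    Y <= D*M,  Y >= 0,  Y <= T,  Y >= T - (1 - D)*M."""
    lo = max(0.0, T - (1 - D) * M)
    hi = min(D * M, T)
    return lo, hi

M = 10.0  # assumed upper bound on T
for T in (0.0, 3.5, 10.0):
    for D in (0, 1):
        lo, hi = y_interval(T, D, M)
        # the interval degenerates to the product T*D
        assert lo == hi == T * D, (T, D, lo, hi)
```

So the Y/D/T linearization is not the source of the infeasibility on its own, as long as T actually stays within [0, M].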

But I am getting an infeasible solution. Have I modeled the constraints correctly?