Add nonlinear equality constraints as penalty

I am trying to add a number of nonlinear equality constraints of the form:
$$\|d_j - P_i\|_2 = r_{ij}, \quad \forall i, j$$
where $d_j$ and $r_{ij}$ are variables, $P_i$ is given data, and $k$ is the dimension of the space. Hence each such constraint says that the distance of the point $d_j \in \mathbb{R}^k$ from the point $P_i$ is a new variable $r_{ij}$.

This is a part of a larger second order conic formulation.
I figured that since the equality is non-convex, I would try to put these constraints in the objective.
I tried using expression, and adding all these equalities as
$$\text{sumEqualityConstraints} = \text{sumEqualityConstraints} + \|d_j - P_i\|_2 - r_{ij}, \quad \forall i, j$$
However, the expression apparently gets converted to an evaluated expression in the objective.

I tried adding a variable instead of an expression, but that results in an error because I am violating the DCP ruleset: {real affine} == {convex}.

How do I add these to the objective without a for loop?
Is there an object like cvx_objective that I can create and then add terms to?

I think what you’re trying to do is this, right?

$$\text{minimize} \quad f_0(x) + \sum_{i,j} \|d_j-P_i\|_2 - r_{ij}$$

If you’re absolutely sure this is what you want to do, then it is exactly equivalent to this:

$$\begin{array}{ll} \text{minimize} & f_0(x) + \sum_{i,j} t_{ij} \\ \text{subject to} & \|d_j-P_i\|_2 - r_{ij} \leq t_{ij}, ~ \forall i,j \end{array}$$

So that’s how you do it in CVX, too: create a new variable t(imax,jmax), and use a for loop to construct the imax*jmax inequality constraints.
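A minimal CVX sketch of that construction. The names here are assumptions: `d` is a `k x jmax` variable whose columns are the $d_j$, `P` is a given `k x imax` data matrix whose columns are the $P_i$, and `f0` stands in for whatever CVX expression your original objective $f_0(x)$ is.

```matlab
% Assumed problem data: P is k x imax, with columns P_i.
% imax, jmax, k are set elsewhere.
cvx_begin
    variables d(k,jmax) r(imax,jmax) t(imax,jmax)
    % ... the rest of your SOCP's variables and constraints ...
    minimize( f0 + sum(t(:)) )   % f0 = your original objective expression
    subject to
        for i = 1:imax
            for j = 1:jmax
                % convex <= affine: valid under the DCP ruleset
                norm(d(:,j) - P(:,i)) - r(i,j) <= t(i,j);
            end
        end
cvx_end
```

Note that `t` is left unconstrained below, which is deliberate: the solver will push each `t(i,j)` down to `norm(d(:,j)-P(:,i)) - r(i,j)`, possibly negative.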

But of course, hopefully, this reveals the potential problem with your approach as well: it penalizes the case $\|d_j-P_i\|_2 > r_{ij}$, but it actually *encourages* $\|d_j-P_i\|_2 < r_{ij}$. It's not an entirely uncommon technique for relaxing nonlinear equality constraints, but it does have this clear disadvantage.

So if you’re sure that you’re willing to deal with this relaxed problem, then the inequality approach I’ve described here will do it. But do go into it with an open mind: nonlinear equality constraints are not convex, and no amount of trickery can do away with that.

One way to slightly improve this is to just drop the $t_{ij}$ terms and convert the equations to inequalities:

$$\begin{array}{ll} \text{minimize} & f_0(x) \\ \text{subject to} & \|d_j-P_i\|_2 \leq r_{ij}, ~ \forall i,j \end{array}$$

Now there is no $t_{ij}$ term in the objective driving $\|d_j-P_i\|_2 - r_{ij}$ down below zero. The constraints will still allow it, but nothing will actively encourage it.
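The inequality-only variant looks the same in CVX, just without `t`. Again, `d`, `P`, `f0`, `imax`, `jmax`, and `k` are assumed names from the sketch above, not anything CVX defines:

```matlab
% Assumed problem data: P is k x imax, with columns P_i.
cvx_begin
    variables d(k,jmax) r(imax,jmax)
    % ... the rest of your SOCP's variables and constraints ...
    minimize( f0 )               % the original objective only
    subject to
        for i = 1:imax
            for j = 1:jmax
                % relaxation of ||d_j - P_i|| == r(i,j)
                norm(d(:,j) - P(:,i)) <= r(i,j);
            end
        end
cvx_end
```

Whether the relaxation is tight at the solution depends on the rest of your model: if something else in the objective or constraints pushes $r_{ij}$ down, the inequalities will tend to hold with equality.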