I have a convex problem that I solve with CVX.
The inequality constraints are second-order cone constraints.
The problem is always feasible by nature.

However, I would like to now add some additional linear constraints (Ax=b) to the problem.
With these, the problem is mostly feasible but in some extreme cases can become infeasible.
I want to avoid this infeasibility by making the linear constraints soft.
One way to achieve this is to add to the objective a term like this:
$$\|Ax-b\|_F$$
However, for this to be effective I need to weight this term.
A small weight gives a solution that does not satisfy the constraints, while a large weight seems to cause numerical instability.
Moreover, I would like the problem to be parameter free.
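To make the weight trade-off concrete, here is a small numpy sketch (not CVX code; the quadratic objective and the squared penalty are placeholder choices for illustration) showing how the residual of the soft constraint shrinks as the weight grows, at the cost of an increasingly ill-conditioned system:

```python
import numpy as np

# Toy instance: minimize ||x||^2 with the soft linear constraint Ax = b
# handled as a squared penalty:  minimize ||x||^2 + w * ||Ax - b||^2.
# The minimizer solves the linear system (I + w*A'A) x = w*A'b.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
b = rng.standard_normal(3)

def soft_residual(w):
    n = A.shape[1]
    x = np.linalg.solve(np.eye(n) + w * A.T @ A, w * A.T @ b)
    return np.linalg.norm(A @ x - b)

# The residual decreases as the weight grows, but the matrix
# I + w*A'A becomes increasingly ill-conditioned -- the numerical
# instability described above.
residuals = [soft_residual(w) for w in (1e0, 1e3, 1e6)]
```

Here `Ax = b` is feasible (A has full row rank), so the residual tends to zero only in the limit of an infinite weight, which is exactly the tension described above.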

The question is: can I somehow inform CVX that this constraint is soft, and expect CVX to automatically prioritize this constraint relative to the others?

In my opinion, what you are dealing with is a modelling issue. I mean, if you cannot tell what the trade-off between optimality and infeasibility is, then how should the optimizer? [Says a person who builds optimizers.]

I agree that if the problem is infeasible then a trade-off needs to be made, and probably the modeler (me) needs to prioritize things (say, with a simple choice of weight).
However, if I simply weight the two energy terms, say as E1 + lambda*E2, then even when the problem is feasible the two energy terms will “fight” each other.
The only way to avoid this would be to use an infinite weight, which would be numerically inaccurate.
One way to approach this would be to first solve the problem with hard constraints, and only if it is infeasible to re-solve a different problem (with soft constraints). That would be slow, though.
I wonder if there is a way to tell CVX that in case of infeasibility it should automatically go for soft constraints (possibly with a user-defined weight and a well-defined model in hand).
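The two-solve strategy can be sketched in plain numpy (again a toy stand-in for the actual CVX models, with the quadratic objective and the penalty weight as placeholder choices): attempt the hard constraint Ax = b first, and fall back to a soft formulation only when that system is inconsistent:

```python
import numpy as np

def solve_with_fallback(A, b, w=1e3):
    """Toy two-phase scheme: hard constraint first, soft penalty on failure.

    Phase 1: minimize ||x||^2 subject to Ax = b (min-norm solution),
             attempted only when the system is consistent.
    Phase 2: if Ax = b is inconsistent, re-solve the soft problem
             minimize ||x||^2 + w * ||Ax - b||^2 instead.
    """
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rank_Ab == rank_A:  # Ax = b is consistent: keep the hard constraint
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x, "hard"
    n = A.shape[1]
    x = np.linalg.solve(np.eye(n) + w * A.T @ A, w * A.T @ b)
    return x, "soft"

# Consistent system: the hard constraint is enforced exactly.
A1 = np.array([[1.0, 0.0], [0.0, 1.0]])
x1, mode1 = solve_with_fallback(A1, np.array([1.0, 2.0]))

# Inconsistent system (two contradictory equations): soft fallback.
A2 = np.array([[1.0, 0.0], [1.0, 0.0]])
x2, mode2 = solve_with_fallback(A2, np.array([1.0, 2.0]))
```

In a real CVX workflow the fallback would be triggered by the solver reporting infeasibility rather than by a rank test, but the control flow is the same: two easy problems instead of one badly scaled one.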

A common but mostly invalid assumption is that the solution time is independent of the problem data. If you choose a large weight, you will most likely get a nasty problem with scaling issues, so solving two easy problems is better than solving one nasty one. It is also more robust.

Indeed, for a large weight your problem will look like two problems to the optimizer anyway; all you have gotten out of that is to make your problem even harder. For a small weight it might be OK, of course.

You should be careful in the case where the first problem is barely feasible or barely infeasible.