# How can I improve my code run time?

I am doing convex optimization. The code runs quite slowly. I tried the different solutions suggested in the forum, but none of them improved the run time. Below, I have attached the code:

```
cvx_begin
variable J(3,3) diagonal
variable k(3,3) symmetric

C= input_torque(10:length(deriv_deriv_ang_pos_main))';
D= output_torque(10:length(deriv_deriv_ang_pos_main))';

minimize(norm((V'*J*V)*deriv_deriv_ang_pos_trans(:,10:length(deriv_deriv_ang_pos_main)) + (V'*k*V)*ang_pos_trans(:,10:length(deriv_deriv_ang_pos_main)) - (V')*[C ; zeros(1, length(C)) ; D]))

k == semidefinite(3);
k(1,3) == 0;
k(2,2) == k(1,1)+k(3,3);
k(1,2) == -k(1,1);
k(2,3) == -k(3,3);
cvx_end
```

V is a 3×3 matrix.
`deriv_deriv_ang_pos_trans` and `ang_pos_trans` are 3×n matrices (time-series data), where n is the length of the time series. Even relaxing the problem's constraints and reducing the length of the time-series data didn't help much.

Any recommendations are highly appreciated.

Your code is not reproducible by forum readers. You haven't shown us the solver or CVX output, told us what the run time is, which solver you are using, or what you have already tried. One easy suggestion: if you are not using Mosek as the solver but are able to, then try doing so.

One thing you could try, though I have no idea whether it would help at all, is to declare only two scalar variables in place of the matrix variable `k`, and build `k` up as an expression to which the semidefinite constraint is applied. This eliminates four variables and four equality constraints.

Remove

```
variable k(3,3) symmetric
k(1,3) == 0;
k(2,2) == k(1,1)+k(3,3);
k(1,2) == -k(1,1);
k(2,3) == -k(3,3);
```

Add the following, prior to the `minimize` statement in which `k` is used:

```
variables x y
k = [x -x 0; -x x+y -y; 0 -y y];
```

and retain
`k == semidefinite(3)`
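
Putting those pieces together, the modified model would look something like this (a sketch only, reusing the data and variable names from your post; I have not run it):

```matlab
cvx_begin
variable J(3,3) diagonal
variables x y                      % the two free scalars in k

C = input_torque(10:length(deriv_deriv_ang_pos_main))';
D = output_torque(10:length(deriv_deriv_ang_pos_main))';

% build k as an expression; its structure enforces the old equality constraints
k = [x -x 0; -x x+y -y; 0 -y y];

minimize(norm((V'*J*V)*deriv_deriv_ang_pos_trans(:,10:length(deriv_deriv_ang_pos_main)) ...
    + (V'*k*V)*ang_pos_trans(:,10:length(deriv_deriv_ang_pos_main)) ...
    - V'*[C; zeros(1,length(C)); D]))

k == semidefinite(3);
cvx_end
```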

Hey Mark,
The solver finally failed, and the result is not what I'm looking for:

```
Calling SDPT3 4.0: 71946017 variables, 7 equality constraints
For improved efficiency, SDPT3 is solving the dual problem.
------------------------------------------------------------

num. of constraints =  7
dim. of sdp    var  = 11998,   num. of sdp  blk  =  2
dim. of free   var  =  1 *** convert ublk to lblk
*******************************************************************
SDPT3: Infeasible path-following algorithms
*******************************************************************
version  predcorr  gam  expon  scale_data
HKM      1      0.000   1        0
it pstep dstep pinfeas dinfeas  gap      prim-obj      dual-obj    cputime
-------------------------------------------------------------------
0|0.000|0.000|8.8e+06|1.1e+02|1.6e+16| 0.000000e+00  0.000000e+00| 0:0:12| chol  1  1
1|0.000|0.000|8.8e+06|1.1e+02|1.6e+16| 2.600454e+04 -8.198888e-01| 0:2:29| chol  1  2
2|0.000|0.000|8.8e+06|1.1e+02|1.6e+16|-2.222070e+05 -1.157240e+00| 9:27:33| chol  1  2
3|0.000|0.000|8.8e+06|1.1e+02|1.6e+16|-3.059407e+04 -1.970146e+00| 23:13:55| chol  1  2
4|0.000|0.000|8.8e+06|1.1e+02|1.6e+16|-1.009117e+05 -4.622667e+00| 33:48:10|
*** Too many tiny steps:  restarting with the following iterate.
*** [X,y,Z] = infeaspt(blk,At,C,b,2,1e5); chol  1  1
5|0.000|0.819|6.0e+08|1.8e-01|7.0e+17|-3.690335e+07 -5.828311e+08| 33:50:05|
*** Too many tiny steps even after restarting
stop: steps too short consecutively*
-------------------------------------------------------------------
number of iterations   =  5
primal objective value = -3.69033487e+07
dual   objective value = -5.82831071e+08
gap := trace(XZ)       = 6.99e+17
relative gap           = 1.13e+09
actual relative gap    = 8.81e-01
rel. primal infeas (scaled problem)   = 6.00e+08
rel. dual     "        "       "      = 1.81e-01
rel. primal infeas (unscaled problem) = 0.00e+00
rel. dual     "        "       "      = 0.00e+00
norm(X), norm(y), norm(Z) = 7.6e+08, 5.8e+08, 6.4e+10
norm(A), norm(b), norm(C) = 1.7e+03, 2.0e+00, 9.2e+08
Total CPU time (secs)  = 121804.84
CPU time per iteration = 24360.97
termination code       = -5
DIMACS: 6.0e+08  0.0e+00  2.8e+01  0.0e+00  8.8e-01  1.1e+09
-------------------------------------------------------------------

------------------------------------------------------------
Status: Failed
Optimal value (cvx_optval): NaN

>> J

J =

1.0e+06 *

(1,1)       2.3649
(2,2)       0.0004
(3,3)       0.0535

>> k

k =

1.0e+05 *

0.5059   -0.5059         0
-0.5059    3.6607   -3.0734
0   -3.0734    3.0734
```

I did not put a timer on the code, but it took about two days to complete the run.
SDPT3 is the solver used in the code.
This is the first time I have used CVX with time-series input data, so I have no sense of whether it is natural for it to be this slow. I reduced my time series to only 1000 samples, but that did not make a big difference in run time.
To make the code reproducible, the input time-series data can be found at:
(https://studntnu-my.sharepoint.com/:f:/g/personal/faridkm_ntnu_no/Ev2TcqVYFG5Ft9a_RgWedYgBJow-Pb0sdStYozZaaxnrcA?e=X1bdHK)
I switched to MOSEK as you suggested, but it gave me an out-of-memory error even though I am using the 64-bit version of MOSEK on a 64-bit operating system.

<deleted my original paragraph now that I have seen @Michal_Adamaszek 's subsequent post and re-read the preceding post which I did not read carefully enough originally.>

How much of the time is taken up before solver output begins vs. after?

You might also look at the scaling of the input data, and make modifications, if necessary, to get all non-zero elements to be within a few orders of magnitude of one.
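
A sketch of what such rescaling might look like (the power-of-ten scale factors are illustrative, and how to undo them afterwards depends on where each factor enters your model):

```matlab
% Pick a power-of-ten scale from each data block's largest magnitude,
% so the nonzero entries end up within a few orders of magnitude of 1.
sA = 10^round(log10(max(abs(deriv_deriv_ang_pos_trans(:)))));
sB = 10^round(log10(max(abs(ang_pos_trans(:)))));
A = deriv_deriv_ang_pos_trans / sA;   % use A, B in place of the raw data
B = ang_pos_trans / sB;
% After solving, undo the scaling on the recovered variables; e.g. if the
% acceleration data were divided by sA, the physical J is the solved J / sA.
```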

Have you tried SeDuMi or Mosek? Mosek is likely to be the fastest.

@Erling The OP wrote

I switched to MOSEK as you suggested but it gave me an out of memory error even though I am using 64 bit version of MOSEK on a 64 bit operating system.

Oops. If I can run the problem, I will look into why Mosek ran out of storage.

I ran the problem with MOSEK (I had to fake some input data, but my problem structure looks similar to the one in the log above). It contains a matrix variable of dimension 10k. Is that intentional? I haven't checked, but it seems that CVX is interpreting your norm as a spectral norm (indeed, your argument to `norm` is a matrix) and builds an SDP model. With `norm(..., 'fro')` the problem models and solves in no time. So what kind of norm did you have in mind?
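
Concretely, that change touches only the objective of the original model (a sketch, reusing the variable names from the first post):

```matlab
% 'fro' asks for the Frobenius norm (square root of the sum of squares of
% all entries), which CVX models with second-order cones instead of an SDP.
minimize(norm((V'*J*V)*deriv_deriv_ang_pos_trans(:,10:length(deriv_deriv_ang_pos_main)) ...
    + (V'*k*V)*ang_pos_trans(:,10:length(deriv_deriv_ang_pos_main)) ...
    - V'*[C; zeros(1,length(C)); D], 'fro'))
```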


This optimization problem represents three decoupled linear equations in terms of five scalar variables (k1, k2, J1, J2, J3), so it is an underdetermined case. I defined a 2-norm function over the time-series data points to minimize the MSE of the two sides of the set of equations. However, I couldn't solve the problem with CVX.
As an approximation of the optimization problem described above, I defined another optimization problem that minimizes the sum of the MSEs of the three individual equations. In other words, for each equation in terms of the scalar variables, I defined a 2-norm function, so that the optimization problem tries to minimize the sum of three 2-norm functions, each associated with one of the three equations. In this case, the optimization problem can be represented by only the scalar variables:

```
cvx_begin
cvx_solver mosek
variables k1 k2 J1 J2 J3
minimize( ...
    norm( (V(1,3)^2*J1 + V(2,3)^2*J2 + V(3,3)^2*J3) * deriv_deriv_ang_pos_gen_trans(10:length(deriv_deriv_ang_pos_main)) ...
        + (V(1,3)^2*k1 - 2*V(1,3)*V(2,3)*k1 + V(2,3)^2*k1 + V(2,3)^2*k2 - 2*V(2,3)*V(3,3)*k2 + V(3,3)^2*k2) * ang_pos_gen_trans(10:length(deriv_deriv_ang_pos_main)) ...
        - (V(1,3)*input_torque(10:length(deriv_deriv_ang_pos_main)) + V(3,3)*output_torque(10:length(deriv_deriv_ang_pos_main))) ) ...
  + norm( (V(1,1)^2*J1 + V(2,1)^2*J2 + V(3,1)^2*J3) * deriv_deriv_ang_pos_main_trans(10:length(deriv_deriv_ang_pos_main)) ...
        + (V(1,1)^2*k1 - 2*V(1,1)*V(2,1)*k1 + V(2,1)^2*k1 + V(2,1)^2*k2 - 2*V(2,1)*V(3,1)*k2 + V(3,1)^2*k2) * ang_pos_main_trans(10:length(deriv_deriv_ang_pos_main)) ...
        - (V(1,1)*input_torque(10:length(deriv_deriv_ang_pos_main)) + V(3,1)*output_torque(10:length(deriv_deriv_ang_pos_main))) ) ...
  + norm( (V(1,2)^2*J1 + V(2,2)^2*J2 + V(3,2)^2*J3) * deriv_deriv_ang_pos_gear_trans(10:length(deriv_deriv_ang_pos_main)) ...
        + (V(1,2)^2*k1 - 2*V(1,2)*V(2,2)*k1 + V(2,2)^2*k1 + V(2,2)^2*k2 - 2*V(2,2)*V(3,2)*k2 + V(3,2)^2*k2) * ang_pos_gear_trans(10:length(deriv_deriv_ang_pos_main)) ...
        - (V(1,2)*input_torque(10:length(deriv_deriv_ang_pos_main)) + V(3,2)*output_torque(10:length(deriv_deriv_ang_pos_main))) ) )
k1 >= 0;
k2 >= 0;
J1 >= 0;
J2 >= 0;
J3 >= 0;
cvx_end
```

In this case, CVX is able to find the optimizer, but I am not sure how reliable the result is, because minimizing the sum of the MSEs of the individual equations does not necessarily minimize the MSE of the full set of equations. I do not know whether the two optimization problems are equivalent, since there is no coupling between the individual equations.
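
One way to make the decoupled version match the original objective exactly (a sketch; `r1`, `r2`, `r3` are hypothetical names standing for the three affine residual vectors inside the three `norm(...)` calls above):

```matlab
% Minimizing the sum of SQUARED 2-norms has the same minimizer as
% minimizing the 2-norm of the stacked residual, i.e. the MSE of the
% full equation set, whereas the sum of unsquared norms generally does not.
minimize( sum_square(r1) + sum_square(r2) + sum_square(r3) )
```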

The problem is not the semidefinite variable `k`. The problem is that you minimize `norm(something)`, where `something` is a `3xn` matrix. Which norm should be used? If you want the ordinary square root of the sum of squares of all entries, then you should use `norm(something, 'fro')` or flatten `something` to a vector. Otherwise it seems you are getting the hard spectral norm of a matrix.
