Linear algebra approaches to solving systems of constant coefficient ODEs

| categories: ode | tags: | View Comments

Matlab post Today we consider how to solve a system of first-order, constant-coefficient ordinary differential equations using linear algebra. These equations could be solved numerically, but in this case analytical solutions can be derived. The equations we will solve are:

\(y'_1 = -0.02 y_1 + 0.02 y_2\)

\(y'_2 = 0.02 y_1 - 0.02 y_2\)

We can express this set of equations in matrix form as: \(\left[\begin{array}{c}y'_1\\y'_2\end{array}\right] = \left[\begin{array}{cc} -0.02 & 0.02 \\ 0.02 & -0.02\end{array}\right] \left[\begin{array}{c}y_1\\y_2\end{array}\right]\)

The general solution to this set of equations is

\(\left[\begin{array}{c}y_1\\y_2\end{array}\right] = \left[\begin{array}{cc}v_1 & v_2\end{array}\right] \left[\begin{array}{cc} c_1 & 0 \\ 0 & c_2\end{array}\right] \exp\left(\left[\begin{array}{cc} \lambda_1 & 0 \\ 0 & \lambda_2\end{array}\right] \left[\begin{array}{c}t\\t\end{array}\right]\right)\)

where \(\left[\begin{array}{cc} \lambda_1 & 0 \\ 0 & \lambda_2\end{array}\right]\) is a diagonal matrix of the eigenvalues of the constant coefficient matrix, \(\left[\begin{array}{cc}v_1 & v_2\end{array}\right]\) is a matrix of eigenvectors where the \(i^{th}\) column corresponds to the eigenvector of the \(i^{th}\) eigenvalue, and \(\left[\begin{array}{cc} c_1 & 0 \\ 0 & c_2\end{array}\right]\) is a matrix determined by the initial conditions.

In this example, we evaluate the solution using linear algebra. The initial conditions we will consider are \(y_1(0)=0\) and \(y_2(0)=150\).
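
Setting \(t = 0\) in the general solution makes the exponential term a vector of ones, so the initial conditions determine the coefficients through a linear solve:

\(\left[\begin{array}{c}y_1(0)\\y_2(0)\end{array}\right] = \left[\begin{array}{cc}v_1 & v_2\end{array}\right] \left[\begin{array}{c}c_1\\c_2\end{array}\right] \Rightarrow \left[\begin{array}{c}c_1\\c_2\end{array}\right] = \left[\begin{array}{cc}v_1 & v_2\end{array}\right]^{-1} \left[\begin{array}{c}y_1(0)\\y_2(0)\end{array}\right]\)

This is the linear system we solve with np.linalg.solve below.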

import numpy as np

A = np.array([[-0.02,  0.02],
              [ 0.02, -0.02]])

# eigh returns the eigenvalues and eigenvectors of a Hermitian or symmetric matrix
# (our coefficient matrix A is symmetric)
evals, evecs = np.linalg.eigh(A)
print(evals)
print(evecs)
[-0.04  0.  ]
[[ 0.70710678  0.70710678]
 [-0.70710678  0.70710678]]

The eigenvectors are the columns of evecs.
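
As a quick check (an addition to the original post), each column of evecs should satisfy \(A v_i = \lambda_i v_i\):

for i, lam in enumerate(evals):
    # verify each eigenpair: A v should equal lambda * v
    print(np.allclose(np.dot(A, evecs[:, i]), lam * evecs[:, i]))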

Compute the \(c\) matrix

As shown above, the coefficients satisfy \(V c = Y_0\), so we solve this linear system and place the result on a diagonal.

Y0 = [0, 150]

c = np.diag(np.linalg.solve(evecs, Y0))
print(c)
[[-106.06601718    0.        ]
 [   0.          106.06601718]]

Constructing the solution

We will create a vector of time values and stack two copies of it, one row for each solution, \(y_1(t)\) and \(y_2(t)\).

import matplotlib.pyplot as plt

t = np.linspace(0, 100)
T = np.vstack([t, t])

D = np.diag(evals)

# y = V*c*exp(D*T);
y = np.dot(np.dot(evecs, c), np.exp(np.dot(D, T)))

# y has a shape of (2, 50) so we have to transpose it
plt.plot(t, y.T)
plt.xlabel('t')
plt.ylabel('y')
plt.legend(['$y_1$', '$y_2$'])
plt.savefig('images/ode-la.png')
plt.show()
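
As a cross-check (not part of the original post), we can integrate the same system numerically with scipy.integrate.odeint and compare it to the eigenvalue solution; the maximum difference should be negligible.

from scipy.integrate import odeint

def dydt(Y, t):
    # right-hand side of the linear system y' = A y
    return np.dot(A, Y)

ynum = odeint(dydt, [0.0, 150.0], t)
print(np.max(np.abs(ynum - y.T)))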

Copyright (C) 2013 by John Kitchin. See the License for information about copying.

org-mode source

Read and Post Comments

Another way to parameterize an ODE - nested function

| categories: ode | tags: | View Comments

Matlab post We saw one method to parameterize an ODE: creating an ODE function that takes an extra parameter argument, and then making a function handle with the syntax required by the solver that passes the parameter to the ODE function.

Here we define the ODE function inside a loop. Since the nested function is defined in the enclosing scope, it can “see” the value of mu set by the loop. We will use this method to look at the solution of the van der Pol equation for several different values of mu.

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

MU = [0.1, 1, 2, 5]
tspan = np.linspace(0, 100, 5000)
Y0 = [0, 3]

for mu in MU:
    # define the ODE
    def vdpol(Y, t):
        x,y = Y
        dxdt = y
        dydt = -x + mu * (1 - x**2) * y
        return  [dxdt, dydt]
    
    Y = odeint(vdpol, Y0, tspan)
    
    x = Y[:,0]; y = Y[:,1]
    plt.plot(x, y, label='mu={0:1.2f}'.format(mu))

plt.axis('equal')
plt.legend(loc='best')
plt.savefig('images/ode-nested-parameterization.png')
plt.show()

You can see the solution changes dramatically for different values of mu. The point here is not to understand why, but to show an easy way to study a parameterized ODE with a nested function. Nested functions can be a great way to “share” variables between functions, especially for ODE solving, nonlinear equation solving, or any other application where many parameters defined in one function are needed in another function.
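
The same idea works outside of ODE solving. Here is a minimal sketch (my addition, with a made-up equation) in which a nested function captures the parameter a so that fsolve can find the root of \(x^3 - a = 0\) for several values of a:

import numpy as np
from scipy.optimize import fsolve

for a in [2.0, 8.0, 27.0]:
    def f(x):
        # the nested function "sees" the current value of a
        return x**3 - a
    root, = fsolve(f, 1.0)
    print('a = {0}, root = {1:1.4f}'.format(a, root))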

Copyright (C) 2013 by John Kitchin. See the License for information about copying.

org-mode source

Read and Post Comments

Yet another way to parameterize an ODE

| categories: ode | tags: | View Comments

Matlab post We previously examined a way to parameterize an ODE. In those methods, we either used an anonymous function to parameterize an ODE function, or used a nested function that relied on variables from the shared workspace.

We want a convenient way to solve \(dCa/dt = -k Ca\) for multiple values of \(k\). Here we use a trick to pass a parameter to an ODE through the initial conditions. We expand the ODE function definition to include this parameter and set its derivative to zero, effectively making it a constant.

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def ode(F, t):
    Ca, k = F
    dCadt = -k * Ca
    dkdt = 0.0
    return [dCadt, dkdt]

tspan = np.linspace(0, 4)

Ca0 = 1.0
K = [2.0, 3.0]
for k in K:
    F = odeint(ode, [Ca0, k], tspan)
    Ca = F[:,0]
    plt.plot(tspan, Ca, label='k={0}'.format(k))
plt.xlabel('time')
plt.ylabel('$C_A$')
plt.legend(loc='best')
plt.savefig('images/ode-parameterized-1.png')
plt.show()

I do not think this is a very elegant way to pass parameters around compared to the previous methods, but it nicely illustrates that there is more than one way to do it. And who knows, maybe it will be useful in some other context one day!
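
For completeness, odeint also accepts extra parameters directly through its args keyword, which avoids carrying the parameter in the state vector. A minimal sketch of the same problem using that approach:

def ode2(Ca, t, k):
    # k arrives through odeint's args keyword rather than the initial conditions
    return -k * Ca

for k in K:
    sol = odeint(ode2, Ca0, tspan, args=(k,))
    plt.plot(tspan, sol[:, 0], label='k={0}'.format(k))
plt.legend(loc='best')
plt.show()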

Copyright (C) 2013 by John Kitchin. See the License for information about copying.

org-mode source

Read and Post Comments

Error tolerance in numerical solutions to ODEs

| categories: ode | tags: | View Comments

Matlab post Usually, the numerical ODE solvers in python work well with the standard settings. Sometimes they do not, and it is not always obvious that they have not worked! Part of using a tool like python is checking how well your solution really worked. Here we use an example of integrating an ODE derived from the van der Waals equation of state for a non-ideal gas.

First, we plot the analytical solution to the van der Waals equation in reduced form.

import numpy as np
import matplotlib.pyplot as plt

Tr = 0.9
Vr = np.linspace(0.34,4,1000)

#analytical equation for Pr
Prfh = lambda Vr: 8.0 / 3.0 * Tr / (Vr - 1.0 / 3.0) - 3.0 / (Vr**2)
Pr = Prfh(Vr) # evaluated on our reduced volume vector.

# Plot the EOS
plt.plot(Vr,Pr)
plt.ylim([0, 2])
plt.xlabel('$V_R$')
plt.ylabel('$P_R$')
plt.savefig('images/ode-vw-1.png')
plt.show()

We want an equation for \(dP_r/dV_r\), which we will integrate. We use symbolic math to do the derivative for us.

from sympy import diff, Symbol
Vrs = Symbol('Vrs')

Prs = 8.0 / 3.0 * Tr / (Vrs - 1.0/3.0) - 3.0/(Vrs**2) 
print(diff(Prs, Vrs))
-2.4/(Vrs - 0.333333333333333)**2 + 6.0/Vrs**3
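
Rather than copying the printed expression into the ODE function by hand, we could also turn the symbolic derivative into a numerical function with sympy's lambdify; a small sketch (not in the original post):

from sympy import lambdify

dPrdVr = lambdify(Vrs, diff(Prs, Vrs), 'numpy')
print(dPrdVr(0.5))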

Now, we solve the ODE. We will specify a large relative tolerance criterion (note that the default is much smaller than what we use here).

from scipy.integrate import odeint

def myode(Pr, Vr):
    dPrdVr = -2.4/(Vr - 0.333333333333333)**2 + 6.0/Vr**3
    return dPrdVr

Vspan = np.linspace(0.334, 4)
Po = Prfh(Vspan[0])
P = odeint(myode, Po, Vspan, rtol=1e-4)

# Plot the EOS
plt.plot(Vr,Pr) # analytical solution
plt.plot(Vspan, P[:,0], 'r.')
plt.ylim([0, 2])
plt.xlabel('$V_R$')
plt.ylabel('$P_R$')
plt.savefig('images/ode-vw-2.png')
plt.show()

You can see there is disagreement between the analytical solution and numerical solution. The origin of this problem is accuracy at the initial condition, where the derivative is extremely large.

print(myode(Po, 0.34))
-53847.3437818

We can tighten the tolerance criteria to get a better answer. The defaults in odeint are actually set to about 1.49012e-8, which is sufficient here.

Vspan = np.linspace(0.334, 4)
Po = Prfh(Vspan[0])
P = odeint(myode, Po, Vspan)

# Plot the EOS
plt.plot(Vr,Pr) # analytical solution
plt.plot(Vspan, P[:,0], 'r.')
plt.ylim([0, 2])
plt.xlabel('$V_R$')
plt.ylabel('$P_R$')
plt.savefig('images/ode-vw-3.png')
plt.show()

The problem here was that the derivative varied by several orders of magnitude over the integration range, so the loose tolerance we specified (rtol = 1e-4) was insufficient to accurately estimate the solution over that range. Tightening the tolerances resolved the problem. Another approach might be to split the integration into different regions. For instance, instead of starting at Vr = 0.34, which is very close to a singularity in the van der Waals equation at Vr = 1/3, you could start at Vr = 0.5; then the solution integrates just fine with the standard tolerances.
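
Here is a minimal sketch of that second approach (my addition): starting the integration at Vr = 0.5, the default tolerances handle the problem without trouble.

Vspan2 = np.linspace(0.5, 4)
P2 = odeint(myode, Prfh(Vspan2[0]), Vspan2)

plt.plot(Vr, Pr) # analytical solution
plt.plot(Vspan2, P2[:, 0], 'r.')
plt.ylim([0, 2])
plt.xlabel('$V_R$')
plt.ylabel('$P_R$')
plt.show()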

Copyright (C) 2013 by John Kitchin. See the License for information about copying.

org-mode source

Read and Post Comments

Finding minima and maxima in ODE solutions with events

| categories: ode | tags: | View Comments

Matlab post Today we look at another way to use events in an ODE solver. We use an events function to find minima and maxima, by evaluating the ODE inside the event function to find conditions where the first derivative is zero and is approached from the right direction. A maximum occurs where the first derivative is zero and decreasing, and a minimum occurs where the first derivative is zero and increasing.

We use a simple ODE, \(y' = \sin(x) e^{-0.05 x}\), which has minima and maxima.

from pycse import *
import numpy as np

def ode(y, x):
    return np.sin(x) * np.exp(-0.05 * x)

def minima(y, x):
    '''Approaching a minimum, dydx is negative and going to zero. Our event function is increasing.'''
    value = ode(y, x)
    direction = 1
    isterminal = False
    return value,  isterminal, direction

def maxima(y, x):
    '''Approaching a maximum, dydx is positive and going to zero. Our event function is decreasing.'''
    value = ode(y, x)
    direction = -1
    isterminal = False
    return value,  isterminal, direction

xspan = np.linspace(0, 20, 100)

y0 = 0

X, Y, XE, YE, IE = odelay(ode, y0, xspan, events=[minima, maxima])
print(IE)
import matplotlib.pyplot as plt
plt.plot(X, Y)

# blue is maximum, red is minimum
colors = 'rb'
for xe, ye, ie in zip(XE, YE, IE):
    plt.plot([xe], [ye], 'o', color=colors[ie])

plt.savefig('./images/ode-events-min-max.png')
plt.show()
[1, 0, 1, 0, 1, 0]
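
Since \(y' = \sin(x) e^{-0.05 x}\) vanishes exactly where \(\sin(x) = 0\), the detected event locations should lie near integer multiples of \(\pi\). A quick check (my addition):

# the extrema occur where sin(x) = 0, i.e. near integer multiples of pi
print(np.asarray(XE) / np.pi)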

Copyright (C) 2013 by John Kitchin. See the License for information about copying.

org-mode source

Read and Post Comments
