A differentiable ODE integrator for sensitivity analysis

| categories: ode, autograd, sensitivity | tags: |

Last time I wrote about using automatic differentiation to find the derivative of an integral function. A related topic is finding derivatives of functions that are defined by differential equations. We typically use a numerical integrator to find solutions to these functions. Those leave us with numeric solutions, which we then have to use to approximate derivatives. What if the integrator itself were differentiable? It is, after all, just a program, and automatic differentiation should be able to tell us the derivatives of functions that use them. This is not a new idea; there is already a differentiable ODE solver in Tensorflow. Here I will implement a simple Runge-Kutta integrator and then show how we can use automatic differentiation to do sensitivity analysis on the numeric solution.

I previously used autograd for sensitivity analysis on analytical solutions. Here I will compare those results to the results from sensitivity analysis on the numerical solutions.

First, we need an autograd-compatible ODE integrator. Here is one implementation of a simple, fourth-order Runge-Kutta integrator. Usually, I would use indexing to do this, but that is not compatible with autograd (it does not support assigning into arrays), so I just accumulate the solution in a list. This is a limitation of autograd; it is probably not an issue with Tensorflow or pytorch, for example. Those packages are more sophisticated, but also more difficult to use, than autograd. Here I am just prototyping an idea, so we stick with autograd.

import autograd.numpy as np
from autograd import grad
%matplotlib inline
import matplotlib.pyplot as plt

def rk4(f, tspan, y0, N=50):
    x, h = np.linspace(*tspan, N, retstep=True)
    y = [y0]  # accumulate the solution in a list; autograd does not support array assignment
    for i in range(0, len(x) - 1):
        k1 = h * f(x[i], y[i])
        k2 = h * f(x[i] + h / 2, y[i] + k1 / 2)
        k3 = h * f(x[i] + h / 2, y[i] + k2 / 2)
        k4 = h * f(x[i + 1], y[i] + k3)
        y += [y[-1] + (k1 + (2 * k2) + (2 * k3) + k4) / 6]
    return x, y

Now, we just check that it works as expected:

Ca0 = 1.0
k1 = k_1 = 3.0

def dCdt(t, Ca):
    return -k1 * Ca + k_1 * (Ca0 - Ca)

t, Ca = rk4(dCdt, (0, 0.5), Ca0)

def analytical_A(t, k1, k_1):
    return Ca0 / (k1 + k_1) * (k1 * np.exp(-(k1 + k_1) * t) + k_1)

plt.plot(t, Ca, label='RK4')
plt.plot(t, analytical_A(t, k1, k_1), 'r--', label='analytical')
plt.xlabel('t')
plt.ylabel('[A]')
plt.xlim([0, 0.5])
plt.ylim([0.5, 1])
plt.legend()

That looks fine; we cannot visually distinguish the two solutions, and they both look like Figure 1 in this paper. Note the analytical solution is not that complex, but it would not take much variation of the rate law to make this solution difficult to derive.
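
For reference, the analytical solution coded above (for the reversible first-order reaction \(A \rightleftharpoons B\) with \(C_A + C_B = C_{A0}\)) is

\(C_A(t) = \frac{C_{A0}}{k_1 + k_{-1}} \left(k_1 e^{-(k_1 + k_{-1}) t} + k_{-1}\right)\)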

Next, to do sensitivity analysis, we need to define a function for \(A\) that depends on the rate constants, so we can take derivatives with respect to the parameters we want sensitivities for. We seek the derivatives: \(\frac{dC_A}{dk_1}\) and \(\frac{dC_A}{dk_{-1}}\). Here is a function that does that. It will return the value of [A] at \(t\) given an initial concentration and the rate constants.

def A(Ca0, k1, k_1, t):
    def dCdt(t, Ca):
        return -k1 * Ca + k_1 * (Ca0 - Ca)
    t, Ca_ = rk4(dCdt, (0, t), Ca0)
    return Ca_[-1]

# Here are the two derivatives we seek.
dCadk1 = grad(A, 1)
dCadk_1 = grad(A, 2)

We also use autograd to get the derivatives from the analytical solution for comparison.

dAdk1 = grad(analytical_A, 1)
dAdk_1 = grad(analytical_A, 2)

Now, we can plot the sensitivities over the time range and compare them. I use the list comprehensions here because the AD functions aren't vectorized.

tspan = np.linspace(0, 0.5)

# From the numerical solutions
k1_sensitivity = [dCadk1(1.0, 3.0, 3.0, t) for t in tspan]
k_1_sensitivity = [dCadk_1(1.0, 3.0, 3.0, t) for t in tspan]

# from the analytical solutions
ak1_sensitivity = [dAdk1(t, 3.0, 3.0) for t in tspan]
ak_1_sensitivity = [dAdk_1(t, 3.0, 3.0) for t in tspan]

plt.plot(tspan, np.abs(ak1_sensitivity), 'b-', label='k1 analytical')
plt.plot(tspan, np.abs(k1_sensitivity), 'y--', label='k1 numerical')

plt.plot(tspan, np.abs(ak_1_sensitivity), 'r-', label='k_1 analytical')
plt.plot(tspan, np.abs(k_1_sensitivity), 'k--', label='k_1 numerical')

plt.xlim([0, 0.5])
plt.ylim([0, 0.1])
plt.legend()
plt.xlabel('t')
plt.ylabel('sensitivity')

The two approaches are visually indistinguishable. I will note that it takes a lot longer to make the graph from the numerical solution than from the analytical solution, because at each point you have to reintegrate the solution from the beginning, which is certainly not efficient. That is an implementation detail that could probably be solved, at the expense of making the code look different from the way I would normally think about the problem.

On the other hand, it is remarkable we get derivatives from the numerical solution, and they look really good! That means we could do sensitivity analysis on more complex reactions, and still have a reasonable way to get sensitivities. The work here is a long way from that: my simple Runge-Kutta integrator isn't directly useful for systems of ODEs, it wouldn't work well on stiff problems, the step size isn't adaptive, etc. The Tensorflow implementation might be more suitable for this though, and maybe this post is motivation to learn how to use it!

Copyright (C) 2018 by John Kitchin. See the License for information about copying.


Autograd and the derivative of an integral function

| categories: python, autograd | tags: |

There are many functions that are defined by integrals. The error function, for example, is defined by \(\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}dt\).

Another example is:

\(\phi(\alpha) = \int_0^1 \frac{\alpha}{x^2 + \alpha^2} dx\).

We have reasonable ways to evaluate these functions numerically, e.g. scipy.integrate.quad, or numpy.trapz, but what about the derivatives of these functions? The analytical way to get them is the Leibniz rule, which involves integrating the derivative of the integrand and evaluating the integrand at the limits. For some functions, this may also lead to new integrals you have to numerically evaluate. Today, we consider the role that automatic differentiation can play in this.
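
For reference, the Leibniz rule for an integral with parameter-dependent limits is

\(\frac{d}{d\alpha}\int_{a(\alpha)}^{b(\alpha)} f(x, \alpha)\, dx = f(b(\alpha), \alpha)\, b'(\alpha) - f(a(\alpha), \alpha)\, a'(\alpha) + \int_{a(\alpha)}^{b(\alpha)} \frac{\partial f}{\partial \alpha}\, dx\)

For \(\phi(\alpha)\) the limits are constant, so only the last term survives, and that term is itself an integral we would have to evaluate numerically.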

The idea is simple: we define a function in Python as usual, calculate the integral in the function body, and then use autograd to get the derivative of the function.

In this case, we have an analytical derivative to compare the answers to:

\(\frac{d\phi}{d\alpha} = -\frac{1}{1 + \alpha^2}\).

1 Example 1

For simplicity, I am going to approximate the integral with the trapezoid method in vectorized form. Here is our program to define \(\phi(\alpha)\). I found we need a pretty dense grid of x values to get an accurate integral, especially near \(x=0\), where the integrand becomes sharply peaked as α goes to zero. That doesn't worry me too much; there are better integral approximations to use, including Simpson's method, adaptive methods, and perhaps quadrature. If you define them so autograd can use them, they should all work. I chose the trapezoid method because it is simple to implement here. Note, however, the autograd.numpy wrappers don't have a definition for numpy.trapz, so we cannot use it directly. You could add one (a sketch of that appears after the code below), or just do this.

import autograd.numpy as np
from autograd import grad

def trapz(y, x):
    # vectorized trapezoid rule, written with operations autograd knows how to differentiate
    d = np.diff(x)
    return np.sum((y[0:-1] + y[1:]) * d / 2)


def phi(alpha):
    x = np.linspace(0, 1, 1000)
    y = alpha / (x**2 + alpha**2)
    return trapz(y, x)


# This is the derivative here!
adphi = grad(phi, 0)
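
Here is the sketch promised above: it registers numpy.trapz as an autograd primitive with a hand-written gradient. This assumes the autograd.extend API (primitive and defvjp) and only defines the gradient with respect to y, which is all \(\phi(\alpha)\) needs; the rest of this post just uses the small trapz defined above.

import numpy as onp  # plain numpy for use inside the primitive
from autograd.extend import primitive, defvjp

@primitive
def trapz_prim(y, x):
    # autograd never traces inside a primitive, so plain numpy is fine here
    return onp.trapz(y, x)

def trapz_vjp(ans, y, x):
    # d(trapz)/dy_j is just the trapezoid weight attached to y_j
    d = onp.diff(x)
    w = onp.zeros_like(y)
    w[:-1] += d / 2
    w[1:] += d / 2
    return lambda g: g * w

defvjp(trapz_prim, trapz_vjp)  # gradient with respect to y only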

Now, we can plot the derivatives. I will plot both the analytical and automatic differentiated results.

%matplotlib inline
import matplotlib.pyplot as plt

# results from AD
alpha = np.linspace(0.01, 1)

# The AD derivative function is not vectorized, so we use this list comprehension.
dphidalpha = [adphi(a) for a in alpha]

def analytical_dphi(alpha):
    return -1 / (1 + alpha**2)

plt.plot(alpha, analytical_dphi(alpha), label='analytical')
plt.plot(alpha, dphidalpha, 'r--', label='AD')
plt.xlabel(r'$\alpha$')
plt.ylabel(r'$\frac{d\phi}{d\alpha}$')
plt.legend()

Visually, these are indistinguishable from each other. We can look at the errors too, and here we see they are negligible, and probably we can attribute them to the approximation we use for the integral, and not due to automatic differentiation.

perr = (analytical_dphi(alpha) - dphidalpha) / analytical_dphi(alpha) * 100
plt.plot(alpha, perr)
plt.xlabel(r'$\alpha$')
plt.ylabel('%error')

2 Example 2

In example 2 there is this function, which has variable limits:

\(f(x) = \int_{\sin x}^{\cos x} \cosh t^2 dt\)

What is \(f'(x)\) here? It can be derived with some effort and it is:

\(f'(x) = -\cosh(\cos^2 x) \sin x - \cosh(\sin^2 x) \cos x\)
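
That result is the Leibniz rule again: the integrand has no explicit \(x\) dependence, so only the limit terms survive. With \(a(x) = \sin x\) and \(b(x) = \cos x\),

\(f'(x) = \cosh(b^2)\, b'(x) - \cosh(a^2)\, a'(x) = \cosh(\cos^2 x)(-\sin x) - \cosh(\sin^2 x)(\cos x)\)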

This function was kind of fun to code up; I hadn't thought about how to represent variable limits before, but here it is.

def f(x):
    a = np.sin(x)
    b = np.cos(x)
    t = np.linspace(a, b, 1000)
    y = np.cosh(t**2)
    return trapz(y, t)

# Here is our derivative!
dfdx = grad(f, 0)

Here is a graphical comparison of the two:

x = np.linspace(0, 2 * np.pi)

analytical = -np.cosh(np.cos(x)**2) * np.sin(x) - \
    np.cosh(np.sin(x)**2) * np.cos(x)
ad = [dfdx(_x) for _x in x]

plt.plot(x, analytical, label='analytical')
plt.plot(x, ad, 'r--', label='AD')
plt.xlabel('x')
plt.ylabel('df/dx')
plt.legend()

These are once again indistinguishable.

3 Summary

These are amazing results to me. Before trying it, I would not have thought it would be so easy to evaluate the derivative of these functions. These work of course because all the operations involved in computing the integral are differentiable and defined in autograd. It certainly opens the door to all kinds of new approaches to solving engineering problems that need the derivatives for various purposes like optimization, sensitivity analysis, etc.

Copyright (C) 2018 by John Kitchin. See the License for information about copying.


Compressibility variation from an implicit equation of state

| categories: python, autograd | tags: |


In a previous post I explored using automatic differentiation to compute how the compressibility of a gas defined by the van der Waals equation varies with the reduced pressure. In that example we had an explicit function for the pressure as a function of the volume and temperature, and we could derive a differential equation that defines the variation we were interested in.

I thought we should be able to derive the differential equation more directly, still using automatic differentiation, and we explore that idea here. The general strategy to compute the compressibility as a function of pressure is to integrate \(dV / dP_r\) over a range of \(P_r\) to get the molar volume as a function of \(P_r\), and then to directly compute the compressibility from \(Z = PV/(RT)\).
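
In reduced variables, with \(P = P_r P_c\) and \(T = T_r T_c\), that compressibility is

\(Z = \frac{P V}{R T} = \frac{P_r P_c V}{R T_r T_c}\)

which is exactly what we compute from the integrated \(V(P_r)\) below.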

To use this approach we need to get \(dV / dP_r\) from the van der Waals equation. Previously, we derived this in a roundabout way from the explicit form of the van der Waals equation. Here, we follow the approach from the post on getting derivatives from implicit functions with autograd to get the derivative from the implicit form of the van der Waals equation:

\(f(V, P_r, T_r) = \frac{R T_r T_c}{V - b} - \frac{a}{V^2} - P_r P_c = 0\)

Based on that work, we can get

\(dV/dP_r = (-df/dP_r) / (df/dV)\)

and the two derivatives on the right can be found easily by automatic differentiation. First, we express the van der Waals equation in implicit form, with the variables as \(V, P_r, T_r\). Only two of those variables are independent; if you define two of them you can compute the third one using a tool like fsolve.
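
For completeness, the expression for \(dV/dP_r\) above follows from the total differential of \(f\) along the solution curve \(f = 0\) at constant \(T_r\):

\(\frac{\partial f}{\partial V} dV + \frac{\partial f}{\partial P_r} dP_r = 0\)

which rearranges to the expression given above.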

R = 0.08206
Pc = 72.9
Tc = 304.2

a = 27 * R**2 * Tc**2 / (Pc * 64)
b = R * Tc / (8 * Pc)

Tr = 1.1  # Constant for this example

def f(V, Pr, Tr):
    return R * Tr * Tc / (V - b) - a / V**2 - Pr * Pc

Now, if we want to know how the volume varies with \(P_r\), we need the derivative \(dV/dP_r\), and then to integrate it. Here we use autograd to define the derivatives, and then we define a function that uses them. Note the arguments in the function dVdPr are in an order that anticipates we want to integrate it in solve_ivp, to get a function \(V(P_r)\).

from autograd import grad

dfdPr = grad(f, 1)  # derivative of f with respect to arg at index=1: Pr
dfdV = grad(f, 0)  # derivative of f with respect to arg at index=0: V

def dVdPr(Pr, V):
    return -dfdPr(V, Pr, Tr) / dfdV(V, Pr, Tr)  # Tr is a constant in here

Now, we need an initial condition to start the integration from. We want the volume at \(P_r=0.1\). We have to use fsolve for this, or some other method that tells you what the volume is at \(P_r=0.1\). We can pass the values of \(P_r\) and \(T_r\) as arguments to our implicit function. Since \(V\) is the first argument, we can directly solve our implicit function. Otherwise you would have to define a helper objective function to use with fsolve.
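
To illustrate that last point, here is a sketch of the wrapper you would need if the implicit function had a different argument order. f_alt is a hypothetical variant with \(P_r\) first; it is not used anywhere else in this post.

from scipy.optimize import fsolve

def f_alt(Pr, V, Tr):
    # same van der Waals residual as f above (reuses R, Tc, Pc, a, b), but with Pr as the first argument
    return R * Tr * Tc / (V - b) - a / V**2 - Pr * Pc

# fsolve varies the first argument of the callable it is given,
# so we wrap f_alt in a lambda that exposes V
V0_check, = fsolve(lambda V: f_alt(0.1, V, 1.1), 3.5)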

from scipy.optimize import fsolve

V0, = fsolve(f, 3.5, args=(0.1, 1.1))
V0
3.6764763125625435

Finally, we are ready to integrate the ODE, and plot the solution.

import numpy as np
from scipy.integrate import solve_ivp

Pr_span = (0.1, 10)
Pr_eval, h = np.linspace(*Pr_span, retstep=True)

sol = solve_ivp(dVdPr, Pr_span, (V0,), max_step=h)
print(sol.message)

%matplotlib inline
import matplotlib.pyplot as plt

Pr = sol.t  # the P_r steps used in the solution
V = sol.y[0]  # V(P_r) from the solution

Z = Pr * Pc * V / (R * Tr * Tc)  # Compressibility Z(P_r)

plt.plot(Pr, Z)
plt.xlabel('$P_r$')
plt.ylabel('Z')
plt.xlim([0, 10])
plt.ylim([0, 2])
The solver successfully reached the end of the integration interval.


(0, 2)

That is the same result as we got before.

1 Summary thoughts

This method also worked successfully to solve this problem. In most ways, this method requires fewer algebraic manipulations to get to the solution. In method 3, we had to do some calculus that relied on a particular explicit form of the van der Waals equation. While those manipulations were not particularly difficult, they leave opportunities for mistakes, and they will be more difficult for an implicit equation of state (e.g. if there was a \(P\) on the right hand side).

This approach also required some manipulation, but it is a standard one: that is how you get a derivative from an implicit function. After that, it is straightforward to define the desired derivative as a function and then integrate it to get the solution. So, we still don't get a free pass on calculus, but we do reduce the number of manipulations required to get to the solution. I consider that a plus.

Copyright (C) 2018 by John Kitchin. See the License for information about copying.


Getting derivatives from implicit functions with autograd

| categories: python, autograd | tags: |

If we have an implicit function: \(f(x, y(x)) = 0\), but we want to compute the derivative \(dy/dx\) we can use the chain rule to derive:

\(df/dx + df/dy dy/dx = 0\)

We can then solve for \(dy/dx\):

\(dy/dx = -df/dx / df/dy\) to get the desired derivative. The interesting point of this blog post is that we can get the two derivatives on the right hand side of this equation using automatic differentiation of the function \(f(x, y)\)! There are a few examples of analytical approaches to derivatives from implicit functions here, and I wanted to explore them with autograd in this post.

In the following examples, we will assume that \(y\) is a function of \(x\) and that \(x\) is independent. We will consider a series of implicit equations, compute \(dy/dx\) using autograd from the formula above, and compare them to the analytical results in the web page referenced above.

The \(dy/dx\) functions generally depend on both \(x\), and \(y\). Technically, these are the derivatives along the curve \(y(x)\), but since we can evaluate them at any points, we will use some random points for \(x\) and \(y\) to test for equality between the analytical derivatives and the autograd derivatives. This isn't a rigorous proof of equality, but it is the only thing that makes sense to do for now. It is assumed that if these points are ok, all the others are too. That might be a broad claim, since we only sample \(x\) and \(y\) from 0 to 1 here but the approach is general. Here are the imports and the random test points for all the examples that follow.

import autograd.numpy as np
from autograd import grad

xr = np.random.random(50)
yr = np.random.random(50)

1 Example 1

\(x^3 + y^3 = 4\)

def f1(x, y):
    return x**3 + y**3 - 4

D1x = grad(f1, 0)
D1y = grad(f1, 1)

dydx_1 = lambda x, y: -D1x(x, y) / D1y(x, y)
dydx_1a = lambda x, y: -x**2 / y**2

np.allclose(dydx_1a(xr, yr),
             [dydx_1(_xr, _yr) for _xr, _yr in zip(xr, yr)])
True

The output of True means the autograd results and the analytical results are "all close", i.e. within a tolerance the results are the same. The required derivatives here are not that difficult to derive, but it is nice to see a simple example that "just works". A subtle point of the dydx function is that it is not vectorized, which is why I used a list comprehension to evaluate all the points. It might be possible to make a pseudo-vectorized version with the np.vectorize decorator, but that is not of interest here.
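
Just to show the idea anyway, here is a minimal sketch of that pseudo-vectorized version; plain numpy.vectorize simply loops over the points for us, since we are not differentiating through it.

import numpy

dydx_1_vec = numpy.vectorize(dydx_1)
numpy.allclose(dydx_1a(xr, yr), dydx_1_vec(xr, yr))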

2 Example 2

\((x - y)^2 = x + y - 1\)

def f2(x, y):
    return (x - y)**2 - x - y + 1

D2x = grad(f2, 0)
D2y = grad(f2, 1)

dydx_2 = lambda x, y: -D2x(x, y) / D2y(x, y)
dydx2_a = lambda x, y: (2 * y - 2 * x + 1) / (2 * y - 2 * x - 1)

np.allclose(dydx2_a(xr, yr),
            [dydx_2(_xr, _yr) for _xr, _yr in zip(xr, yr)])
True

This also works.

3 Example 3

\(y = \sin(3x + 4y)\)

def f3(x, y):
    return y - np.sin(3 * x + 4 * y)


D3x = grad(f3, 0)
D3y = grad(f3, 1)

dydx_3 = lambda x, y: -D3x(x, y) / D3y(x, y)
dydx3_a = lambda x, y: (3 * np.cos(3 * x + 4 * y)) / (1 - 4 * np.cos(3 * x + 4 * y))

np.allclose(dydx3_a(xr, yr),
            [dydx_3(_xr, _yr) for _xr, _yr in zip(xr, yr)])
True

Check.

4 Example 4

\(y = x^2 y^3 + x^3 y^2\)

def f4(x, y):
    return y - x**2 * y**3 - x**3 * y**2


D4x = grad(f4, 0)
D4y = grad(f4, 1)

dydx_4 = lambda x, y: -D4x(x, y) / D4y(x, y)
dydx4_a = lambda x, y: (2 * x * y**3 + 3 * x**2 * y**2) / (1 - 3 * x**2 * y**2 - 2 * x**3 * y)

np.allclose(dydx4_a(xr, yr),
            [dydx_4(_xr, _yr) for _xr, _yr in zip(xr, yr)])
True

5 Example 5

\(e^{xy} = e^{4x} - e^{5y}\)

def f5(x, y):
    return np.exp(4 * x) - np.exp(5 * y) - np.exp(x * y)

D5x = grad(f5, 0)
D5y = grad(f5, 1)

dydx_5 = lambda x, y: -D5x(x, y) / D5y(x, y)
dydx5_a = lambda x, y: (4 * np.exp(4 * x) - y * np.exp(x * y)) / (x * np.exp(x * y) + 5 * np.exp(5 * y))

np.allclose(dydx5_a(xr, yr),
            [dydx_5(_xr, _yr) for _xr, _yr in zip(xr, yr)])
True

Also check.

6 Example 6

\(\cos^2 x + \cos^2 y = \cos(2x + 2y)\)

def f6(x, y):
    return np.cos(x)**2 + np.cos(y)**2 - np.cos(2 * x + 2 * y)

D6x = grad(f6, 0)
D6y = grad(f6, 1)

dydx_6 = lambda x, y: -D6x(x, y) / D6y(x, y)
dydx6_a = lambda x, y: (np.cos(x) * np.sin(x) - np.sin(2 * x + 2 * y)) / (np.sin(2 * x + 2 * y) - np.cos(y) * np.sin(y))

np.allclose(dydx6_a(xr, yr),
            [dydx_6(_xr, _yr) for _xr, _yr in zip(xr, yr)])
True

Check.

7 Example 7

\(x = 3 + \sqrt{x^2 + y^2}\)

def f7(x, y):
    return 3 + np.sqrt(x**2 + y**2) - x

D7x = grad(f7, 0)
D7y = grad(f7, 1)

dydx_7 = lambda x, y: -D7x(x, y) / D7y(x, y)
dydx7_a = lambda x, y: (np.sqrt(x**2 + y**2) - x) / y

np.allclose(dydx7_a(xr, yr),
            [dydx_7(_xr, _yr) for _xr, _yr in zip(xr, yr)])
True

8 Conclusions

There are a handful of other examples at the website referenced in the beginning, but I am stopping here. After seven examples of quantitative agreement between the easy-to-derive autograd derivatives and the easy-to-moderately-difficult analytical derivatives, it seems like it is autograd for the win here. This technique has some important implications for engineering calculations that I will explore in a future post. Until then, this is yet another interesting thing we can do with automatic differentiation!

Copyright (C) 2018 by John Kitchin. See the License for information about copying.


Compressibility factor variation from the van der Waals equation by three different approaches

| categories: ode, python, nonlinear algebra, autograd | tags: |

In the book Problem solving in chemical and biochemical engineering with POLYMATH, Excel and Matlab by Cutlip and Shacham there is a problem (7.1) where you want to plot the compressibility factor for CO2 over a range of \(0.1 \le P_r \le 10\) for a constant \(T_r=1.1\) using the van der Waals equation of state. There are two standard ways to do this:

  1. Solve a nonlinear equation for different values of \(P_r\).
  2. Solve a nonlinear equation for one value of \(P_r\), then derive an ODE for how the compressibility varies with \(P_r\) and integrate it over the relevant range.

In this post, we compare and contrast the two methods, and consider a variation of the second method that uses automatic differentiation.

1 Method 1 - fsolve

The van der Waals equation of state is:

\(P = \frac{R T}{V - b} - \frac{a}{V^2}\).

We define the reduced pressure as \(P_r = P / P_c\), and the reduced temperature as \(T_r = T / T_c\).

So, we simply solve for V at a given \(P_r\), and then compute \(Z\). There is a subtle trick needed to make this easy to solve, and that is to multiply each side of the equation by \((V - b)\) to avoid a singularity when \(V = b\), which happens in this case near \(P_r \approx 7.5\).
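
Concretely, multiplying \(P = \frac{R T}{V - b} - \frac{a}{V^2}\) through by \((V - b)\) gives the objective used below:

\(P (V - b) - R T + \frac{a}{V^2}(V - b) = 0\)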

from scipy.optimize import fsolve
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt

R = 0.08206
Pc = 72.9
Tc = 304.2

a = 27 * R**2 * Tc**2 / (Pc * 64)
b = R * Tc / (8 * Pc)

Tr = 1.1

def objective(V, Pr):
    P = Pr * Pc
    T = Tr * Tc
    return P * (V - b) - (R * T)  +  a / V**2 * (V - b)


Pr_range = np.linspace(0.1, 10)
V = [fsolve(objective, 3, args=(Pr,))[0] for Pr in Pr_range]

T = Tr * Tc
P_range = Pr_range * Pc
Z = P_range * V / (R * T)

plt.plot(Pr_range, Z)
plt.xlabel('$P_r$')
plt.ylabel('Z')
plt.xlim([0, 10])
plt.ylim([0, 2])
(0, 2)

That looks like Figure 7-1 in the book. This approach is fine, but the equation did require a little algebraic finesse to solve, and you have to use some iteration to get the solution.

2 Method 2 - solve_ivp

In this method, you have to derive an expression for \(\frac{dV}{dP_r}\). That derivation goes like this:

\(\frac{dV}{dP_r} = \frac{dV}{dP} \frac{dP}{dP_r}\)

The first term \(\frac{dV}{dP}\) is \((\frac{dP}{dV})^{-1}\), which we can derive directly from the van der Waals equation, and the second term is just a constant: \(P_c\), from the definition of \(P_r\).

Differentiating the van der Waals equation gives:

\(\frac{dP}{dV} = -\frac{R T}{(V - b)^2} + \frac{2 a}{V^3}\)
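
Putting the two pieces together, the quantity we integrate is

\(\frac{dV}{dP_r} = \left(\frac{dP}{dV}\right)^{-1} P_c = \frac{P_c}{-\frac{R T}{(V - b)^2} + \frac{2 a}{V^3}}\)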

We need to solve for one V, at the beginning of the range of \(P_r\) we are interested in.

V0, = fsolve(objective, 3, args=(0.1,))
V0
3.6764763125625461

Now, we can define the functions, and integrate them to get the same solution. I defined these pretty verbosely, just for readability.

from scipy.integrate import solve_ivp

def dPdV(V):
    return -R * T / (V - b)**2 + 2 * a / V**3

def dVdP(V):
    return 1 / dPdV(V)

dPdPr = Pc

def dVdPr(Pr, V):
    return dVdP(V) * dPdPr

Pr_span = (0.1, 10)
Pr_eval, h = np.linspace(*Pr_span, retstep=True)

sol = solve_ivp(dVdPr, Pr_span, (V0,), dense_output=True, max_step=h)

V = sol.y[0]
P = sol.t * Pc
Z = P * V / (R * T)
plt.plot(sol.t, Z)
plt.xlabel('$P_r$')
plt.ylabel('Z')
plt.xlim([0, 10])
plt.ylim([0, 2])
(0, 2)

This also looks like Figure 7-1. It is arguably a better approach since we only need an initial condition, and after that have a reliable integration (rather than many iterative solutions from an initial guess of the solution in fsolve).

The only downside to this approach (in my opinion) is the need to derive and implement derivatives. As equations of state get more complex, this gets more tedious and complicated.

3 Method 3 - autograd + solve_ivp

The whole point of automatic differentiation is to get derivatives of functions that are written as programs. We explore here the possibility of using this to solve this problem. The idea is to use autograd to define the derivative \(dP/dV\), and then solve the ODE like we did before.

from autograd import grad

def P(V):
    return R * T / (V - b) - a / V**2

# autograd.grad returns a callable that acts like a function
dPdV = grad(P, 0)

def dVdPr(Pr, V):
    return 1 / dPdV(V) * Pc

sol = solve_ivp(dVdPr,  Pr_span, (V0,), dense_output=True, max_step=h)

V, = sol.y
P = sol.t * Pc
Z = P * V / (R * T)
plt.plot(sol.t, Z)
plt.xlabel('$P_r$')
plt.ylabel('Z')
plt.xlim([0, 10])
plt.ylim([0, 2])
(0, 2)

Not surprisingly, this answer looks the same as the previous ones. I think this solution is pretty awesome. We only had to implement the van der Waals equation, and then let autograd do its job to get the relevant derivative. We don't get a free pass on calculus here; we still have to know which derivatives are important. We also need some knowledge about how to use autograd, but with that, this problem becomes pretty easy to solve.

Copyright (C) 2018 by John Kitchin. See the License for information about copying.
