Compressibility factor variation from the van der Waals equation by three different approaches

| categories: ode, autograd, nonlinear algebra, python | tags:

In the book Problem solving in chemical and biochemical engineering with POLYMATH, Excel and Matlab by Cutlip and Shacham there is a problem (7.1) where you want to plot the compressibility factor for CO2 over a range of \(0.1 \le P_r \le 10\) at a constant \(T_r=1.1\) using the van der Waals equation of state. There are two standard ways to do this:

  1. Solve a nonlinear equation for different values of \(P_r\).
  2. Solve a nonlinear equation for one value of \(P_r\), then derive an ODE for how the compressibility varies with \(P_r\) and integrate it over the relevant range.

In this post, we compare and contrast the two methods, and consider a variation of the second method that uses automatic differentiation.

1 Method 1 - fsolve

The van der Waals equation of state is:

\(P = \frac{R T}{V - b} - \frac{a}{V^2}\).

We define the reduced pressure as \(P_r = P / P_c\), and the reduced temperature as \(T_r = T / T_c\).

So, we simply solve for \(V\) at a given \(P_r\), and then compute the compressibility factor \(Z = P V / (R T)\). There is a subtle trick needed to make this easy to solve: multiply each side of the equation by \((V - b)\) to avoid a singularity when \(V = b\), which happens in this case near \(P_r \approx 7.5\).

from scipy.optimize import fsolve
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt

R = 0.08206
Pc = 72.9
Tc = 304.2

a = 27 * R**2 * Tc**2 / (Pc * 64)
b = R * Tc / (8 * Pc)

Tr = 1.1

def objective(V, Pr):
    P = Pr * Pc
    T = Tr * Tc
    return P * (V - b) - (R * T)  +  a / V**2 * (V - b)


Pr_range = np.linspace(0.1, 10)
V = [fsolve(objective, 3, args=(Pr,))[0] for Pr in Pr_range]

T = Tr * Tc
P_range = Pr_range * Pc
Z = P_range * V / (R * T)

plt.plot(Pr_range, Z)
plt.xlabel('$P_r$')
plt.ylabel('Z')
plt.xlim([0, 10])
plt.ylim([0, 2])
(0, 2)

That looks like Figure 7-1 in the book. This approach is fine, but the equation did require a little algebraic finesse to solve, and you have to use some iteration to get the solution.
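As an aside (not the approach in the book), the van der Waals equation is a cubic polynomial in \(V\), so you could also avoid iteration entirely by finding the polynomial roots directly. Here is a minimal sketch, reusing the constants defined above; multiplying through by \(V^2 (V - b)\) gives \(P V^3 - (P b + R T) V^2 + a V - a b = 0\), and at this supercritical \(T_r\) we keep the largest real root:

V_cubic = []
for Pr in Pr_range:
    P = Pr * Pc
    roots = np.roots([P, -(P * b + R * T), a, -a * b])   # cubic in V
    V_cubic.append(roots[np.isreal(roots)].real.max())   # physically relevant root

Z_cubic = Pr_range * Pc * np.array(V_cubic) / (R * T)    # same Z curve as above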

2 Method 2 - solve_ivp

In this method, you have to derive an expression for \(\frac{dV}{dP_r}\). That derivation goes like this:

\(\frac{dV}{dP_r} = \frac{dV}{dP} \frac{dP}{dP_r}\)

The first term, \(\frac{dV}{dP}\), is \((\frac{dP}{dV})^{-1}\), which we can derive directly from the van der Waals equation, and the second term is just a constant, \(P_c\), from the definition of \(P_r\).

Differentiating the van der Waals equation gives:

\(\frac{dP}{dV} = -\frac{R T}{(V - b)^2} + \frac{2 a}{V^3}\)
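Putting those pieces together, the ODE we will integrate is:

\(\frac{dV}{dP_r} = \frac{P_c}{-\frac{R T}{(V - b)^2} + \frac{2 a}{V^3}}\)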

We need to solve for one V, at the beginning of the range of \(P_r\) we are interested in.

V0, = fsolve(objective, 3, args=(0.1,))
V0
3.6764763125625461

Now, we can define the functions, and integrate them to get the same solution. I defined these pretty verbosely, just for readability.

from scipy.integrate import solve_ivp

def dPdV(V):
    return -R * T / (V - b)**2 + 2 * a / V**3

def dVdP(V):
    return 1 / dPdV(V)

dPdPr = Pc

def dVdPr(Pr, V):
    return dVdP(V) * dPdPr

Pr_span = (0.1, 10)
Pr_eval, h = np.linspace(*Pr_span, retstep=True)

sol = solve_ivp(dVdPr, Pr_span, (V0,), dense_output=True, max_step=h)

V = sol.y[0]
P = sol.t * Pc
Z = P * V / (R * T)
plt.plot(sol.t, Z)
plt.xlabel('$P_r$')
plt.ylabel('Z')
plt.xlim([0, 10])
plt.ylim([0, 2])
(0, 2)

This also looks like Figure 7-1. It is arguably a better approach since we only need one initial condition, and after that we have a reliable integration (rather than many iterative fsolve solutions, each starting from an initial guess).

The only downside to this approach (in my opinion) is the need to derive and implement derivatives. As equations of state get more complex, this gets more tedious and complicated.

3 Method 3 - autograd + solve_ivp

The whole point of automatic differentiation is to get derivatives of functions that are written as programs. Here we explore using it for this problem: the idea is to use autograd to define the derivative \(dP/dV\), and then solve the ODE as we did before.

from autograd import grad

def P(V):
    return R * T / (V - b) - a / V**2

# autograd.grad returns a callable that acts like a function
dPdV = grad(P, 0)

def dVdPr(Pr, V):
    return 1 / dPdV(V) * Pc

sol = solve_ivp(dVdPr,  Pr_span, (V0,), dense_output=True, max_step=h)

V, = sol.y
P = sol.t * Pc
Z = P * V / (R * T)
plt.plot(sol.t, Z)
plt.xlabel('$P_r$')
plt.ylabel('Z')
plt.xlim([0, 10])
plt.ylim([0, 2])
(0, 2)

Not surprisingly, this answer looks the same as the previous ones. I think this solution is pretty awesome. We only had to implement the van der Waals equation, and then let autograd do its job to get the relevant derivative. We don't get a free pass on calculus here; we still have to know which derivatives are important. We also need some knowledge of how to use autograd, but with that, this problem becomes pretty easy to solve.
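If you want some reassurance that autograd returns the same derivative we derived by hand in Method 2, a quick check (a sketch at an arbitrary test volume, reusing the definitions above) is to compare it to the analytical expression:

V_test = 1.0
analytical = -R * T / (V_test - b)**2 + 2 * a / V_test**3
print(analytical, dPdV(V_test))  # these should agree to machine precision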

Copyright (C) 2018 by John Kitchin. See the License for information about copying.


Solving nonlinear algebra problems with internal state information

| categories: nonlinear algebra, python | tags:

In engineering, we often need to solve an equation in one variable, and then use the solution to compute other variables. For example, we might want the bubble point temperature of a mixture, and then to determine the composition of the vapor phase that has formed. In other words, we compute the temperature, and then have to use it in a subsequent step to get the composition. Here is a bubble point computation adapted from example 10.2 in Smith and van Ness, Introduction to Chemical Engineering Thermodynamics.

Given a solution of acetone (\(x_1=0.3\)), acetonitrile (\(x_2=0.45\)) and nitromethane (\(x_3=0.25\)) at a total pressure of 80 kPa, compute the bubble point temperature and gas phase composition.

The key here is to find a temperature where the gas-phase mole fractions sum to one. The gas phase mole fractions are defined by:

\(y_i = x_i P_{vap,i}(T) / P\)

The typical way I would teach students how to solve this looks like the following. It uses the Antoine equation, coded below, to estimate the vapor pressure of each component as a function of temperature, and then uses fsolve to find a temperature where the gas-phase mole fractions sum to one.

import numpy as np
from scipy.optimize import fsolve

acetone = (14.5463, 2940.46, 237.22)
acetonitrile = (14.2724, 2945.47,224)
nitromethane = (14.2043, 2972.64, 209)

def antoine(T, A, B, C):
    T = float(T) # there is some subtle issue that comes up when T is an array,
                 # as passed in from fsolve. It needs to be a float, or you get
                 # the wrong answer.
    return np.exp(A - B / (T + C))

x = np.array([0.3, 0.45, 0.25])
P = 80

def objective(T):
    Pvap = np.array([antoine(T, *pars) for pars in [acetone, acetonitrile, nitromethane]])
    y = x * Pvap / P
    return 1 - y.sum()

Tans, = fsolve(objective, 70)

# This is where we end up repeating code
Pvap = np.array([antoine(Tans, *pars) for pars in [acetone, acetonitrile, nitromethane]])
y = x * Pvap / P

print(f'The bubble point temperature is {Tans:1.2f} degC, and the gas phase compositions are {np.round(y, 4)}.')
The bubble point temperature is 68.60 degC, and the gas phase compositions are [ 0.5196  0.3773  0.1031].


This solution works fine, but there is, in my opinion, an issue with the small amount of repeated code at the end that is required to get the composition of the gas phase. It is a small problem here, but as problems get bigger it becomes more and more tedious to correctly repeat all the code needed to see the state of the system at the solution, and it seems wasteful to repeat computations that were already known inside the objective function. In the following subsections, I explore some alternative approaches to reduce the repetition.

1 First approach

There are two small chunks of repeated code in the example above. One way to minimize the amount of repeated code is to pull these out into reusable functions. Here, we do that, and only have to repeat one function call at the end to get the system composition out.

import numpy as np
from scipy.optimize import fsolve

acetone = (14.5463, 2940.46, 237.22)
acetonitrile = (14.2724, 2945.47,224)
nitromethane = (14.2043, 2972.64, 209)

def antoine(T, A, B, C):
    T = float(T) # there is some subtle issue that comes up when T is an array,
                 # as passed in from fsolve. It needs to be a float, or you get
                 # the wrong answer.
    return np.exp(A - B / (T + C))

x = np.array([0.3, 0.45, 0.25])
P = 80

def Pvap(T):
    return np.array([antoine(T, *pars) for pars in [acetone, acetonitrile, nitromethane]])

def y(T):
    return x * Pvap(T) / P

def objective(T):
    return 1 - y(T).sum()

Tans, = fsolve(objective, 70)

yans = y(Tans) # minimal repetition of a calculation to get the composition state.

print(f'The bubble point temperature is {Tans:1.2f} degC, and the gas phase compositions are {np.round(yans, 4)}.')
The bubble point temperature is 68.60 degC, and the gas phase compositions are [ 0.5196  0.3773  0.1031].


That is a small improvement. The code is not much shorter, just reorganized for easier reuse. It would be easy in this case to also get the vapor pressures of each species at this temperature, just by calling the Pvap function. Still, it feels wasteful to recompute quantities that must already have been computed when the objective function was evaluated at the solution.
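For example, the vapor pressures at the bubble point are just one more function call away:

print(np.round(Pvap(Tans), 2))  # vapor pressures (kPa) of acetone, acetonitrile, nitromethane at Tans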

2 Second approach - use a state dictionary as an arg in the objective function

In this approach, we will use a dictionary to store the state of the objective function. The dictionary will be in the global namespace, and we will just update it each time the objective function is called.

import numpy as np
from scipy.optimize import fsolve

acetone = (14.5463, 2940.46, 237.22)
acetonitrile = (14.2724, 2945.47,224)
nitromethane = (14.2043, 2972.64, 209)

def antoine(T, A, B, C):
    return np.exp(A - B / (T + C))

x = np.array([0.3, 0.45, 0.25])

state = {}

P = 80


def objective(T, state):
    T = float(T)
    Pvap = np.array([antoine(T, *pars) for pars in [acetone, acetonitrile, nitromethane]])
    y = x * Pvap / P
    state.update({'y': y,
                  'T':  T,
                  'Pvap': Pvap,
                  'z': 1 - y.sum()})
    return state['z']

Tans, = fsolve(objective, 70, args=(state,))

print(f'The bubble point temperature is {Tans:1.2f} degC, and the gas phase compositions are {np.round(state["y"], 4)}.')
print(Tans- state['T']) # check to make sure last value from objective is the same as the solution
state
The bubble point temperature is 68.60 degC, and the gas phase compositions are [ 0.5196  0.3773  0.1031].
0.0


{'Pvap': array([ 138.5620209 ,   67.07966082,   32.98218545]),
 'T': 68.60064626680659,
 'y': array([ 0.51960758,  0.37732309,  0.10306933]),
 'z': -3.4194869158454821e-14}

What we see in the state dictionary is the result from the last time the objective function was called. It appears that the last call is also the one the solution comes from, so the other variables stored here should be consistent with the solution. Now you can see we have access to both the Pvap and y composition data from the solution without needing any further computations. This approach could easily be extended to store any derived quantities that represent internal states you want. We don't store any history here, but you could, by appending to lists in the dictionary.
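Here is a minimal sketch of what storing history might look like (the 'history' key is not part of the solution above, just an illustration); every call appends the current temperature and residual:

def objective(T, state):
    T = float(T)
    Pvap = np.array([antoine(T, *pars) for pars in [acetone, acetonitrile, nitromethane]])
    y = x * Pvap / P
    state.setdefault('history', []).append((T, 1 - y.sum()))  # (T, residual) for every call
    state.update({'y': y, 'T': T, 'Pvap': Pvap, 'z': 1 - y.sum()})
    return state['z']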

One feature of this approach is that the state dictionary is updated by side effect, and you have to pass the dictionary as an argument to the objective function.

3 Third approach - a callable object

A standard approach to tracking state data is to encapsulate it in a class. fsolve requires a callable function that returns zero at the solution. It is possible to make an object act like a callable function if we define a __call__ method on it. Then, in this method, we can set attributes on the object to keep track of the state, similar to what we did with the dictionary. Since we have a class, we can define some other special dunder methods, e.g. to print the solution. Here is one implementation.

import numpy as np
from scipy.optimize import fsolve

class Objective(object):
    acetone = (14.5463, 2940.46, 237.22)
    acetonitrile = (14.2724, 2945.47,224)
    nitromethane = (14.2043, 2972.64, 209)

    def __init__(self, x, P):
        self.x = np.array(x)
        self.P = P

    def antoine(self, T, A, B, C):
        return np.exp(A - B / (T + C))

    def __str__(self):
        s = f'The bubble point temperature is {self.T:1.2f} degC, and the gas phase compositions are {np.round(self.y, 4)}.'
        return s

    def __call__(self, T):
        T = float(T)
        self.T = T
        self.Pvap = np.array([self.antoine(T, *pars) for pars in [self.acetone, self.acetonitrile, self.nitromethane]])
        self.y = self.x * self.Pvap / self.P
        return 1 - self.y.sum()

obj = Objective(x=np.array([0.3, 0.45, 0.25]), P=80)
ans, = fsolve(obj, 60)

print(obj)
The bubble point temperature is 68.60 degC, and the gas phase compositions are [ 0.5196  0.3773  0.1031].


Similar to the state dictionary approach, there is no repeated code here, and no repeated evaluations to get to the state after the solution. This is a bit more advanced Python than the state dictionary. Note, this implementation doesn't have any checking in it, so if you try to print the object before calling fsolve, you will get an error because the attributes don't exist until after the object has been called. That is also an issue with the state dictionary above.
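If you wanted to guard against that, a minimal sketch (building on the Objective class above; the SafeObjective name is just for illustration) is to check for the attribute before formatting:

class SafeObjective(Objective):
    def __str__(self):
        if not hasattr(self, 'T'):
            return 'Not solved yet; call fsolve(obj, T_guess) first.'
        return super().__str__()

obj = SafeObjective(x=[0.3, 0.45, 0.25], P=80)
print(obj)        # prints the 'not solved' message
fsolve(obj, 60)
print(obj)        # now prints the solution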

There are many choices you could make in constructing this example. Maybe this one has gone too far in encapsulating the antoine function as a method. That limits its reusability for another problem. On the other hand, you can reuse it for any other pressure or liquid composition of acetone, acetonitrile and nitromethane very readily.
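For example, reusing it at a different (hypothetical) pressure and liquid composition of the same three species is just:

obj2 = Objective(x=[0.5, 0.3, 0.2], P=101.325)
fsolve(obj2, 60)
print(obj2)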

4 Summary

We looked at three ways to reduce redundant code in the solution of problems involving nonlinear algebra. The first approach is conceptually simple; you break out as much as you can into reusable functions, and then at most have repeated function calls. These computations are usually not expensive, so repeating them is mostly tedious and provides opportunities for mistakes. This is probably what I will stick to for teaching students who are seeing this for the first time.

The second approach used a dictionary (other data structures could work too) as an argument to the objective function, and internal states were kept in the dictionary so that after the problem was solved, you have immediate access to them. This is more advanced than the first approach because it requires understanding that the dictionary is modified as a side effect of solving the problem.

Finally, we considered an object-oriented class encapsulation of the information we wanted. I consider this the most advanced Python solution, since it requires some understanding of classes, dunder methods and attributes, and how to make an instance of a class.

The last two methods seem like candidates for an advanced class in problem solving. Thoughts?

Copyright (C) 2018 by John Kitchin. See the License for information about copying.


Coupled nonlinear equations

| categories: nonlinear algebra | tags:

Suppose we seek the solution to this set of equations:

\begin{align} y &= x^2 \\ y &= 8 - x^2 \end{align}

To solve this we need to set up a function that is equal to zero at the solution. We have two equations, so our function must return two values. There are two variables, so the argument to our function will be an array of two values.

from scipy.optimize import fsolve

def objective(X):
    x, y = X            # unpack the array in the argument
    z1 = y - x**2       # first equation
    z2 = y - 8 + x**2   # second equation
    return [z1, z2]     # list of zeros

x0, y0 = 1, 1           # initial guesses
guess = [x0, y0]
sol = fsolve(objective, guess)
print(sol)

# of course there may be more than one solution
x0, y0 = -1, -1           # initial guesses
guess = [x0, y0]
sol = fsolve(objective, guess)
print(sol)
[ 2.  4.]
[-2.  4.]
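As a quick analytical check, setting \(x^2 = 8 - x^2\) gives \(x^2 = 4\), so \(x = \pm 2\) and \(y = 4\), which matches the two numerical solutions above.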

Copyright (C) 2013 by John Kitchin. See the License for information about copying.


Finding the nth root of a periodic function

| categories: nonlinear algebra | tags: heat transfer

There is a heat transfer problem where one needs to find the nth root of the following equation: \(x J_1(x) - Bi J_0(x)=0\), where \(J_0\) and \(J_1\) are the Bessel functions of zero and first order, and \(Bi\) is the Biot number. We examine an approach to finding these roots.

First, we plot the function.

from scipy.special import jn, jn_zeros
import matplotlib.pyplot as plt
import numpy as np

Bi = 1

def f(x):
    return x * jn(1, x) - Bi * jn(0, x)

X = np.linspace(0, 30, 200)
plt.plot(X, f(X))
plt.savefig('images/heat-transfer-roots-1.png')

You can see there are many roots to this equation, and we want to be sure we get the nth root. This function is pretty well behaved, so if you make a good guess about the solution you will get an answer, but if you make a bad guess, you may get the wrong root. We examine next a way to do it without guessing the solution. What we want are all the solutions to \(f(x) = 0\) in a given interval. To get them, we treat \(f\) itself as the solution of an ODE, \(df/dx = f'(x)\) with initial condition \(f(0) = f_0\), and integrate that ODE with an event function that identifies every zero of \(f\) for us. The derivative of our function is \(df/dx = d/dx(x J_1(x)) - Bi J'_0(x)\). It is known (http://www.markrobrien.com/besselfunct.pdf) that \(d/dx(x J_1(x)) = x J_0(x)\) and \(J'_0(x) = -J_1(x)\), so \(df/dx = x J_0(x) + Bi J_1(x)\). All we have to do now is set up the problem and run it.
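Before doing that, it is easy to sanity check the derivative numerically; here is a small sketch using a central difference at an arbitrary test point:

import numpy as np
from scipy.special import jn

Bi = 1

def f(x):
    return x * jn(1, x) - Bi * jn(0, x)

def dfdx(x):
    return x * jn(0, x) + Bi * jn(1, x)

x0, h = 2.5, 1e-6
print(dfdx(x0), (f(x0 + h) - f(x0 - h)) / (2 * h))  # these should agree closely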

from pycse import *  # contains the ode integrator with events

from scipy.special import jn, jn_zeros
import matplotlib.pyplot as plt
import numpy as np

Bi = 1

def f(x):
    "function we want roots for"
    return x * jn(1, x) - Bi * jn(0, x)

def fprime(f, x):
    "df/dx"
    return x * jn(0, x) - Bi * (-jn(1, x))

def e1(f, x):
    "event function to find zeros of f"
    isterminal = False
    value = f
    direction = 0
    return value, isterminal, direction

f0 = f(0)
xspan = np.linspace(0, 30, 200)

x, fsol, XE, FE, IE = odelay(fprime, f0, xspan, events=[e1])

plt.plot(x, fsol, '.-', label='Numerical solution')
plt.plot(xspan, f(xspan), '--', label='Analytical function')
plt.plot(XE, FE, 'ro', label='roots')
plt.legend(loc='best')
plt.savefig('images/heat-transfer-roots-2.png')

for i, root in enumerate(XE):
    print('root {0} is at {1}'.format(i, root))

plt.show()
root 0 is at 1.25578377377
root 1 is at 4.07947743741
root 2 is at 7.15579904465
root 3 is at 10.2709851256
root 4 is at 13.3983973869
root 5 is at 16.5311587137
root 6 is at 19.6667276775
root 7 is at 22.8039503455
root 8 is at 25.9422288192
root 9 is at 29.081221492

You can work this out once, and then you have all the roots in the interval and you can select the one you want.
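The event locations come from interpolation inside the integrator, so if you need more precision you can polish them, e.g. by using each one as an (now excellent) initial guess for fsolve. A minimal sketch, reusing f and XE from above:

from scipy.optimize import fsolve

refined_roots = [fsolve(f, guess)[0] for guess in XE]
print(refined_roots)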

Copyright (C) 2013 by John Kitchin. See the License for information about copying.


Method of continuity for solving nonlinear equations - Part II

| categories: nonlinear algebra | tags:

Matlab post. Yesterday, in Post 1324, we looked at a way to solve nonlinear equations that takes away some of the burden of generating an initial guess. The idea was to reformulate the equations with a new variable \(\lambda\), so that at \(\lambda=0\) we have a simpler problem we know how to solve, and at \(\lambda=1\) we have the original set of equations. Then, we derive a set of ODEs for how the solution changes with \(\lambda\), and solve them.

Today we look at a simpler example and explain a little more about what is going on. Consider the equation \(f(x) = x^2 - 5x + 6 = 0\), which has two roots, \(x=2\) and \(x=3\). We will use the method of continuity to solve this equation to illustrate a few ideas. First, we introduce a new variable \(\lambda\) as: \(f(x; \lambda) = 0\). For example, we could write \(f(x;\lambda) = \lambda x^2 - 5x + 6 = 0\). Now, when \(\lambda=0\), we have the simpler equation \(- 5x + 6 = 0\), with the solution \(x=6/5\). The question now is, how does \(x\) change as \(\lambda\) changes? We get that from the total derivative of how \(f(x,\lambda)\) changes with \(\lambda\). The total derivative is:

$$\frac{df}{d\lambda} = \frac{\partial f}{\partial \lambda} + \frac{\partial f}{\partial x}\frac{\partial x}{\partial \lambda}=0$$

We can calculate two of those quantities: \(\frac{\partial f}{\partial \lambda}\) and \(\frac{\partial f}{\partial x}\) analytically from our equation and solve for \(\frac{\partial x}{\partial \lambda}\) as

$$ \frac{\partial x}{\partial \lambda} = -\frac{\partial f}{\partial \lambda}/\frac{\partial f}{\partial x}$$

That defines an ordinary differential equation that we can solve by integrating from \(\lambda=0\) where we know the solution to \(\lambda=1\) which is the solution to the real problem. For this problem: \(\frac{\partial f}{\partial \lambda}=x^2\) and \(\frac{\partial f}{\partial x}=-5 + 2\lambda x\).

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def dxdL(x, Lambda):
    return -x**2 / (-5.0 + 2 * Lambda * x)

x0 = 6.0/5.0
Lspan = np.linspace(0, 1)
x = odeint(dxdL, x0, Lspan)

plt.plot(Lspan, x)
plt.xlabel(r'$\lambda$')
plt.ylabel('x')
plt.savefig('images/nonlin-contin-II-1.png')

We found one solution at x=2. What about the other solution? To get that we have to introduce \(\lambda\) into the equations in another way. We could try: \(f(x;\lambda) = x^2 + \lambda(-5x + 6)\), but this leads to an ODE that is singular at the initial starting point. Another approach is \(f(x;\lambda) = x^2 + 6 + \lambda(-5x)\), but now the solution at \(\lambda=0\) is imaginary, and we do not have a way to integrate that! What we can do instead is add and subtract a number like this: \(f(x;\lambda) = x^2 - 4 + \lambda(-5x + 6 + 4)\). Now at \(\lambda=0\), we have a simple equation with roots at \(\pm 2\), and we already know that \(x=2\) is a solution. So, we create our ODE on \(dx/d\lambda\) with initial condition \(x(0) = -2\).

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def dxdL(x, Lambda):
    return (5 * x - 10) / (2 * x - 5 * Lambda)

x0 = -2
Lspan = np.linspace(0, 1)
x = odeint(dxdL, x0, Lspan)

plt.plot(Lspan, x)
plt.xlabel(r'$\lambda$')
plt.ylabel('x')
plt.savefig('images/nonlin-contin-II-2.png')

Now we have the other solution. Note that if you start from the other root at \(\lambda=0\), \(x(0)=+2\), you just find again that \(x=2\) is a root, and learn nothing new. You could choose other values to add and subtract, e.g., if you chose 16, then you would find that one starting point leads to one root, and the other starting point leads to the other root (a sketch of that is below). This method does not solve all problems associated with nonlinear root solving, namely, how many roots are there, and which one is “best” or physically reasonable? But it does give a way to solve an equation where you have no idea what an initial guess should be. You can see, however, that just like you can get different answers from different initial guesses, here you can get different answers by setting up the equations differently.
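Here is a minimal sketch of that case. With 16 added and subtracted, \(f(x;\lambda) = x^2 - 16 + \lambda(-5x + 22)\), so \(\frac{\partial f}{\partial \lambda} = -5x + 22\), \(\frac{\partial f}{\partial x} = 2x - 5\lambda\), and \(dx/d\lambda = (5x - 22)/(2x - 5\lambda)\). Integrating from the two \(\lambda=0\) roots, \(x_0 = \pm 4\), should land on the two different roots of the original equation.

import numpy as np
from scipy.integrate import odeint

def dxdL(x, Lambda):
    # continuation ODE for f(x; lambda) = x**2 - 16 + lambda * (-5 * x + 22)
    return (5 * x - 22) / (2 * x - 5 * Lambda)

Lspan = np.linspace(0, 1)
for x0 in (4.0, -4.0):           # the roots of x**2 - 16 = 0 at lambda = 0
    x = odeint(dxdL, x0, Lspan)
    print(f'x0 = {x0}: x(1) = {x[-1, 0]:1.4f}')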

Copyright (C) 2013 by John Kitchin. See the License for information about copying.
