Solving coupled ODEs with a neural network and autograd

| categories: autograd, ode | tags:

In a previous post I wrote about using ideas from machine learning to solve an ordinary differential equation using a neural network for the solution. A friend recently tried to apply that idea to coupled ordinary differential equations, without success. It seems like that should work, so here we diagnose the issue and figure it out. This is a long post, but it works in the end.

In the classic series reaction \(A \rightarrow B \rightarrow C\) in a batch reactor, we get the set of coupled mole balances:

\(dC_A/dt = -k_1 C_A\)

\(dC_B/dt = k_1 C_A - k_2 C_B\)

\(dC_C/dt = k_2 C_B\)

1 The standard numerical solution

Here is the standard numerical solution to this problem. This will give us a reference for what the solution should look like.

from scipy.integrate import solve_ivp

def ode(t, C):
    Ca, Cb, Cc = C
    dCadt = -k1 * Ca
    dCbdt = k1 * Ca - k2 * Cb
    dCcdt = k2 * Cb
    return [dCadt, dCbdt, dCcdt]

C0 = [1.0, 0.0, 0.0]
k1 = 1
k2 = 1

sol = solve_ivp(ode, (0, 10), C0)

%matplotlib inline
import matplotlib.pyplot as plt

plt.plot(sol.t, sol.y.T)
plt.legend(['A', 'B', 'C'])
plt.xlabel('Time')
plt.ylabel('C')

2 Can a neural network learn the solution?

The first thing I want to show is that you can train a neural network to reproduce this solution. That is certainly a prerequisite to the idea working. We use the same code I used before, but this time our neural network will output three values, one for each concentration.

import autograd.numpy as np
from autograd import grad, elementwise_grad, jacobian
import autograd.numpy.random as npr
from autograd.misc.optimizers import adam

def init_random_params(scale, layer_sizes, rs=npr.RandomState(0)):
    """Build a list of (weights, biases) tuples, one for each layer."""
    return [(rs.randn(insize, outsize) * scale,   # weight matrix
             rs.randn(outsize) * scale)           # bias vector
            for insize, outsize in zip(layer_sizes[:-1], layer_sizes[1:])]

def swish(x):
    "see https://arxiv.org/pdf/1710.05941.pdf"
    return x / (1.0 + np.exp(-x))

def C(params, inputs):
    "Neural network functions"
    for W, b in params:
        outputs = np.dot(inputs, W) + b
        inputs = swish(outputs)
    return outputs

# initial guess for the weights and biases
params = init_random_params(0.1, layer_sizes=[1, 8, 3])

Now, we train our network to reproduce the solution. I ran this block manually several times, but eventually you see that we can train a one-layer network with 8 nodes to output all three concentrations pretty accurately. So there is no issue there: a neural network can represent the solution.

def objective_soln(params, step):
    return np.sum((sol.y.T - C(params, sol.t.reshape([-1, 1])))**2)

params = adam(grad(objective_soln), params,
              step_size=0.001, num_iters=500)

plt.plot(sol.t.reshape([-1, 1]), C(params, sol.t.reshape([-1, 1])),
         sol.t, sol.y.T, 'o')
plt.legend(['A', 'B', 'C', 'Ann', 'Bnn', 'Cnn'])
plt.xlabel('Time')
plt.ylabel('C')

3 Given a neural network function, how do we get the right derivatives?

The next issue is how do we get the relevant derivatives. The solution method I developed here relies on using optimization to find a set of weights that produces a neural network whose derivatives are consistent with the ODE equations. So, we need to be able to get the derivatives that are relevant in the equations.

The neural network outputs three concentrations, and we need the time derivatives of them. Autograd provides three options: grad, elementwise_grad and jacobian. We cannot use grad because our function is not scalar. We cannot use elementwise_grad because that gives the wrong shape (I think it may be the sum of the gradients); a quick check of that claim follows below. That leaves us with the jacobian. This, however, gives a result that is initially unintuitive, i.e. it isn't what we need out of the box. The output is 4-dimensional in this case, consistent with the documentation of that function.
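Here is that quick check (a sketch; I am assuming elementwise_grad returns an array with the shape of the argument it differentiates with respect to, so we would get (17, 1), with each entry the sum over species, rather than the (17, 3) array of derivatives we need):

egC = elementwise_grad(C, 1)
egC(params, sol.t.reshape([-1, 1])).shape  # expected (not verified here): (17, 1)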

jacC = jacobian(C, 1)
jacC(params, sol.t.reshape([-1, 1])).shape
(17, 3, 17, 1)

Why does it have this shape? The time input vector we used has 17 time values in a column vector. That leads to an output from the NN with a shape of (17, 3), i.e. the concentrations of each species at each time. The jacobian then has shape (17, 3, 17, 1), and we have to extract the pieces we want from it. The first and third dimensions are related to the time steps. The second dimension is the species, and the last dimension is a singleton that is only there because the input is a column vector. I use some fancy indexing on the array to get the desired derivative arrays. This is not obvious out of the box; I only figured it out by direct comparison of the data from a numerical solution with the output of the jacobian. Here I show how to do that, and check that the derivatives we pull out are comparable to the derivatives defined by the ODEs above. Parity here means they are comparable.

i = np.arange(len(sol.t))
plt.plot(jacC(params, sol.t.reshape([-1, 1]))[i, 0, i, 0],   -k1 * sol.y[0], 'ro')
plt.plot(jacC(params, sol.t.reshape([-1, 1]))[i, 1, i, 0],   -k2 * sol.y[1] + k1 * sol.y[0], 'bo')
plt.plot(jacC(params, sol.t.reshape([-1, 1]))[i, 2, i, 0],   k2 * sol.y[1], 'go')
[<matplotlib.lines.Line2D at 0x118a2e860>]

Note this is pretty inefficient. It requires a lot of calculations to create the jacobian (it has 17 * 3 * 17 = 867 elements here), and we don't need most of them. You could avoid this by creating separate neural networks for each species, and then just use elementwise_grad on each one. Alternatively, one might be able to more efficiently compute some vector-jacobian product. Nevertheless, it looks like we can get the correct derivatives out of the neural network; we just need a convenient function to return them. Here is one such function for this problem, using fancier slicing and reshaping to get the derivative array.

# Derivatives
jac = jacobian(C, 1)

def dCdt(params, t):
    i = np.arange(len(t))
    return jac(params, t)[i, :, i].reshape((len(t), 3))
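As an aside, here is a minimal sketch of the separate-network alternative mentioned above (the names params_a, Ca_nn and dCa_nn_dt are mine and are not used in the rest of this post): give each species its own single-output network, and elementwise_grad then returns its time derivative directly, with no jacobian needed. You would repeat this for B and C and add up the corresponding objective terms.

params_a = init_random_params(0.1, layer_sizes=[1, 8, 1])  # one output: C_A only

def Ca_nn(params, t):
    "Single-output network for C_A(t); reuses the network function C defined above."
    return C(params, t)

# elementwise_grad is sufficient here because each output depends only on its own time point
dCa_nn_dt = elementwise_grad(Ca_nn, 1)
dCa_nn_dt(params_a, sol.t.reshape([-1, 1])).shape  # expected: (17, 1)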

4 Solving the system of ODEs with a neural network

Finally, we are ready to try solving the ODEs solely by the neural network approach. We reinitialize the neural network first, and define a time grid to solve it on.

t = np.linspace(0, 10, 25).reshape((-1, 1))
params = init_random_params(0.1, layer_sizes=[1, 8, 3])
i = 0    # number of training steps
N = 501  # epochs for training
et = 0.0 # total elapsed time

We define our objective function. This function will be zero at the perfect solution, and it has contributions from each mole balance and from the initial conditions. It could make sense to add penalties for things like negative concentrations, or for requiring that the sum of the concentrations stays constant, but we do not do that here, and it does not seem to be necessary.

def objective(params, step):
    Ca, Cb, Cc = C(params, t).T
    dCadt, dCbdt, dCcdt = dCdt(params, t).T

    z1 = np.sum((dCadt + k1 * Ca)**2)
    z2 = np.sum((dCbdt - k1 * Ca + k2 * Cb)**2)
    z3 = np.sum((dCcdt - k2 * Cb)**2)
    ic = np.sum((np.array([Ca[0], Cb[0], Cc[0]]) - C0)**2)  # initial conditions
    return z1 + z2 + z3 + ic
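For reference, here is a sketch of what such penalties might look like (objective_penalized and lam are hypothetical names and are not used anywhere else in this post):

def objective_penalized(params, step, lam=1.0):
    "Sketch only: the objective above plus penalties on negative concentrations and on drift in the total."
    Ca, Cb, Cc = C(params, t).T
    dCadt, dCbdt, dCcdt = dCdt(params, t).T

    z1 = np.sum((dCadt + k1 * Ca)**2)
    z2 = np.sum((dCbdt - k1 * Ca + k2 * Cb)**2)
    z3 = np.sum((dCcdt - k2 * Cb)**2)
    ic = np.sum((np.array([Ca[0], Cb[0], Cc[0]]) - C0)**2)

    # penalize negative concentrations
    neg = (np.sum(np.maximum(0.0, -Ca)**2) +
           np.sum(np.maximum(0.0, -Cb)**2) +
           np.sum(np.maximum(0.0, -Cc)**2))
    # penalize drift in the total concentration, which should stay at sum(C0)
    tot = np.sum((Ca + Cb + Cc - sum(C0))**2)
    return z1 + z2 + z3 + ic + lam * (neg + tot)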

def callback(params, step, g):
    if step % 100 == 0:
        print("Iteration {0:3d} objective {1}".format(step,
                                                      objective(params, step)))

objective(params, 0)  # make sure the objective is scalar
5.2502237371050295

Finally, we run the optimization. I also manually ran this block several times.

import time
t0 = time.time()

params = adam(grad(objective), params,
              step_size=0.001, num_iters=N, callback=callback)

i += N
t1 = (time.time() - t0) / 60
et += t1

plt.plot(t, C(params, t), sol.t, sol.y.T, 'o')
plt.legend(['Ann', 'Bnn', 'Cnn', 'A', 'B', 'C'])
plt.xlabel('Time')
plt.ylabel('C')
print(f'{t1:1.1f} minutes elapsed this time. Total time = {et:1.2f} min. Total epochs = {i}.')
Iteration   0 objective 0.00047651643957525214
Iteration 100 objective 0.0004473301532609342
Iteration 200 objective 0.00041218410058863227
Iteration 300 objective 0.00037161526137030344
Iteration 400 objective 0.000327567400443358
Iteration 500 objective 0.0002836975879675981
0.6 minutes elapsed this time. Total time = 4.05 min. Total epochs = 3006.


The effort seems to have been worth it, though: we get a pretty good solution from our neural network.

We can check the accuracy of the derivatives by noting that the sum of the derivatives should be zero in this case. Here you can see that the sum is pretty small. It would take additional optimization to a lower error to make it smaller still.
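Adding the three mole balances term by term shows why the sum should be zero:

\(dC_A/dt + dC_B/dt + dC_C/dt = -k_1 C_A + (k_1 C_A - k_2 C_B) + k_2 C_B = 0\)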

plt.plot(t, np.sum(dCdt(params, t), axis=1))
plt.xlabel('Time')
plt.ylabel(r'$\Sigma dC/dt$')

5 Summary

In the end, this method is shown to work for systems of ODEs as well. There is some subtlety in how to get the relevant derivatives from the jacobian, but after that it is essentially the same approach as for a single ODE. I think it would be much faster to do this with separate neural networks for each function in the solution, because then you do not need the jacobian; you can use elementwise_grad.

This is not faster than direct numerical integration. One benefit to this solution over a numerical solution is we get an actual continuous function as the solution, rather than an array of data. This solution is not reliable at longer times, but then again neither is extrapolation of numeric data. It could be interesting to explore if this has any benefits for stiff equations. Maybe another day. For now, I am declaring victory for autograd on this problem.

Copyright (C) 2018 by John Kitchin. See the License for information about copying.


A differentiable ODE integrator for sensitivity analysis

| categories: sensitivity, autograd, ode | tags:

Last time I wrote about using automatic differentiation to find the derivative of an integral function. A related topic is finding derivatives of functions that are defined by differential equations. We typically use a numerical integrator to find solutions to these functions. That leaves us with numeric solutions, which we then have to use to approximate derivatives. What if the integrator itself were differentiable? It is, after all, just a program, and automatic differentiation should be able to tell us the derivatives of functions that use it. This is not a new idea; there is already a differentiable ODE solver in Tensorflow. Here I will implement a simple Runge-Kutta integrator and then show how we can use automatic differentiation to do sensitivity analysis on the numeric solution.

I previously used autograd for sensitivity analysis on analytical solutions in this post. Here I will compare those results to the results from sensitivity analysis on the numerical solutions.

First, we need an autograd-compatible ODE integrator. Here is one implementation of a simple, fourth-order Runge-Kutta integrator. Usually, I would use indexing to do this, but that was not compatible with autograd, so I just accumulate the solution in a list. This is a limitation of autograd, and it is probably not an issue with Tensorflow, for example, or probably pytorch. Those are more sophisticated, and more difficult to use, packages than autograd. Here I am just prototyping an idea, so we stick with autograd.

import autograd.numpy as np
from autograd import grad
%matplotlib inline
import matplotlib.pyplot as plt

def rk4(f, tspan, y0, N=50):
    x, h = np.linspace(*tspan, N, retstep=True)
    y = [y0]  # accumulate the solution in a list; autograd does not support array item assignment
    for i in range(0, len(x) - 1):
        k1 = h * f(x[i], y[i])
        k2 = h * f(x[i] + h / 2, y[i] + k1 / 2)
        k3 = h * f(x[i] + h / 2, y[i] + k2 / 2)
        k4 = h * f(x[i + 1], y[i] + k3)
        y += [y[-1] + (k1 + (2 * k2) + (2 * k3) + k4) / 6]
    return x, y

Now, we just check that it works as expected:

Ca0 = 1.0
k1 = k_1 = 3.0

def dCdt(t, Ca):
    return -k1 * Ca + k_1 * (Ca0 - Ca)

t, Ca = rk4(dCdt, (0, 0.5), Ca0)

def analytical_A(t, k1, k_1):
    return Ca0 / (k1 + k_1) * (k1 * np.exp(-(k1 + k_1) * t) + k_1)

plt.plot(t, Ca, label='RK4')
plt.plot(t, analytical_A(t, k1, k_1), 'r--', label='analytical')
plt.xlabel('t')
plt.ylabel('[A]')
plt.xlim([0, 0.5])
plt.ylim([0.5, 1])
plt.legend()

That looks fine: we cannot visually distinguish the two solutions, and they both look like Figure 1 in this paper. Note the analytical solution is not that complex, but it would not take much variation of the rate law to make it difficult to derive.

Next, to do sensitivity analysis, we need to define a function for \(A\) that depends on the rate constants, so we can take a derivative of it with respect to the parameters we want the sensitivity from. We seek the derivatives: \(\frac{dC_A}{dk_1}\) and \(\frac{dC_A}{dk_{-1}}\). Here is a function that does that. It will return the value of [A] at \(t\) given an initial concentration and the rate constants.

def A(Ca0, k1, k_1, t):
    def dCdt(t, Ca):
        return -k1 * Ca + k_1 * (Ca0 - Ca)
    t, Ca_ = rk4(dCdt, (0, t), Ca0)
    return Ca_[-1]

# Here are the two derivatives we seek.
dCadk1 = grad(A, 1)
dCadk_1 = grad(A, 2)

We also use autograd to get the derivatives from the analytical solution for comparison.

dAdk1 = grad(analytical_A, 1)
dAdk_1 = grad(analytical_A, 2)

Now, we can plot the sensitivities over the time range and compare them. I use the list comprehensions here because the AD functions aren't vectorized.

tspan = np.linspace(0, 0.5)

# From the numerical solutions
k1_sensitivity = [dCadk1(1.0, 3.0, 3.0, t) for t in tspan]
k_1_sensitivity = [dCadk_1(1.0, 3.0, 3.0, t) for t in tspan]

# from the analytical solutions
ak1_sensitivity = [dAdk1(t, 3.0, 3.0) for t in tspan]
ak_1_sensitivity = [dAdk_1(t, 3.0, 3.0) for t in tspan]

plt.plot(tspan, np.abs(ak1_sensitivity), 'b-', label='k1 analytical')
plt.plot(tspan, np.abs(k1_sensitivity), 'y--', label='k1 numerical')

plt.plot(tspan, np.abs(ak_1_sensitivity), 'r-', label='k_1 analytical')
plt.plot(tspan, np.abs(k_1_sensitivity), 'k--', label='k_1 numerical')

plt.xlim([0, 0.5])
plt.ylim([0, 0.1])
plt.legend()
plt.xlabel('t')
plt.ylabel('sensitivity')

The two approaches are indistinguishable on paper. I will note that it takes a lot longer to make the graph from the numerical solution than from the analytical solution because at each point you have to reintegrate the solution from the beginning, which is certainly not efficient. That is an implementation detail that could probably be solved, at the expense of making the code look different than the way I would normally think about the problem.

On the other hand, it is remarkable we get derivatives from the numerical solution, and they look really good! That means we could do sensitivity analysis on more complex reactions, and still have a reasonable way to get sensitivity. The work here is a long way from that. My simple Runge-Kutta integrator isn't directly useful for systems of ODEs, it wouldn't work well on stiff problems, the step size isn't adaptive, etc. The Tensorflow implementation might be more suitable for this though, and maybe this post is motivation to learn how to use it!

Copyright (C) 2018 by John Kitchin. See the License for information about copying.


Autograd and the derivative of an integral function

| categories: autograd, python | tags:

There are many functions that are defined by integrals. The error function, for example, is defined by \(\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\,dt\).

Another example is:

\(\phi(\alpha) = \int_0^1 \frac{\alpha}{x^2 + \alpha^2} dx\).

We have reasonable ways to evaluate these functions numerically, e.g. scipy.integrate.quad, or numpy.trapz, but what about the derivatives of these functions? The analytical way to do this is to use the Leibniz rule, which involves integrating a derivative and evaluating it at the limits. For some functions, this may also lead to new integrals you have to numerically evaluate. Today, we consider the role that automatic differentiation can play in this.
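For reference, the Leibniz rule for an integral with parameter-dependent limits, \(I(\alpha) = \int_{a(\alpha)}^{b(\alpha)} f(x, \alpha)\,dx\), is:

\(\frac{dI}{d\alpha} = f(b(\alpha), \alpha)\frac{db}{d\alpha} - f(a(\alpha), \alpha)\frac{da}{d\alpha} + \int_{a(\alpha)}^{b(\alpha)} \frac{\partial f}{\partial \alpha}\,dx\)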

The idea is simple: we define a function in Python as usual, and in the function body we compute the integral numerically. Then we use autograd to get the derivative of the function.

In this case, we have an analytical derivative to compare the answers to:

\(\frac{d\phi}{d\alpha} = -\frac{1}{1 + \alpha^2}\).
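(For this \(\phi\) the integral can also be done in closed form, \(\phi(\alpha) = \arctan(1/\alpha)\) for \(\alpha > 0\), and differentiating that expression gives the result above.)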

1 Example 1

For simplicity, I am going to approximate the integral with the trapezoid method in vectorized form. Here is our program to define \(\phi(\alpha)\). I found we need a pretty dense grid on \(x\) to get an accurate integral, especially near \(x=0\), where the integrand becomes sharply peaked as α goes to zero. That doesn't worry me too much; there are better integral approximations to use, including Simpson's method, adaptive methods and perhaps quadrature. If you define them so autograd can use them, they should all work. I chose the trapezoid method because it is simple to implement here. Note, however, that the autograd.numpy wrappers don't provide a definition for numpy.trapz, so we cannot use it directly. You could add one, or just do this.

import autograd.numpy as np
from autograd import grad

def trapz(y, x):
    d = np.diff(x)
    return np.sum((y[0:-1] + y[1:]) * d / 2)


def phi(alpha):
    x = np.linspace(0, 1, 1000)
    y = alpha / (x**2 + alpha**2)
    return trapz(y, x)


# This is the derivative here!
adphi = grad(phi, 0)

Now, we can plot the derivatives. I will plot both the analytical and automatic differentiated results.

%matplotlib inline
import matplotlib.pyplot as plt

# results from AD
alpha = np.linspace(0.01, 1)

# The AD derivative function is not vectorized, so we use this list comprehension.
dphidalpha = [adphi(a) for a in alpha]

def analytical_dphi(alpha):
    return -1 / (1 + alpha**2)

plt.plot(alpha, analytical_dphi(alpha), label='analytical')
plt.plot(alpha, dphidalpha, 'r--', label='AD')
plt.xlabel(r'$\alpha$')
plt.ylabel(r'$\frac{d\phi}{d\alpha}$')
plt.legend()

Visually, these are indistinguishable from each other. We can look at the errors too, and here we see they are negligible, probably attributable to the approximation we use for the integral rather than to the automatic differentiation.

perr = (analytical_dphi(alpha) - dphidalpha) / analytical_dphi(alpha) * 100
plt.plot(alpha, perr)
plt.xlabel(r'$\alpha$')
plt.ylabel('%error')

2 Example 2

In example 2 there is this function, which has variable limits:

\(f(x) = \int_{\sin x}^{\cos x} \cosh t^2 dt\)

What is \(f'(x)\) here? It can be derived with some effort and it is:

\(f'(x) = -\cosh(\cos^2 x) \sin x - \cosh(\sin^2 x) \cos x\)
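This follows from the Leibniz rule above with \(a(x) = \sin x\), \(b(x) = \cos x\), and an integrand that does not depend on \(x\) directly: \(f'(x) = \cosh(b(x)^2)\, b'(x) - \cosh(a(x)^2)\, a'(x)\).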

This function was kind of fun to code up; I hadn't thought about how to represent variable limits before, but here it is.

def f(x):
    a = np.sin(x)
    b = np.cos(x)
    t = np.linspace(a, b, 1000)
    y = np.cosh(t**2)
    return trapz(y, t)

# Here is our derivative!
dfdx = grad(f, 0)

Here is a graphical comparison of the two:

x = np.linspace(0, 2 * np.pi)

analytical = -np.cosh(np.cos(x)**2) * np.sin(x) - \
    np.cosh(np.sin(x)**2) * np.cos(x)
ad = [dfdx(_x) for _x in x]

plt.plot(x, analytical, label='analytical')
plt.plot(x, ad, 'r--', label='AD')
plt.xlabel('x')
plt.ylabel('df/dx')
plt.legend()

These are once again indistinguishable.

3 Summary

These are amazing results to me. Before trying it, I would not have thought it would be so easy to evaluate the derivative of these functions. These work of course because all the operations involved in computing the integral are differentiable and defined in autograd. It certainly opens the door to all kinds of new approaches to solving engineering problems that need the derivatives for various purposes like optimization, sensitivity analysis, etc.

Copyright (C) 2018 by John Kitchin. See the License for information about copying.


Compressibility variation from an implicit equation of state

| categories: autograd, python | tags:

In this post I explored using automatic differentiation to compute how the compressibility of a gas defined by the van der Waal equation varies with the reduced pressure. In that example we had an explicit expression for the pressure as a function of the volume and temperature, and we could derive a differential equation that defines the variation we were interested in.

I thought we should be able to derive the differential equation more directly, still using automatic differentiation, and we explore that idea here. The general strategy to compute the compressibility as a function of pressure is to integrate \(dV/dP_r\) over a range of \(P_r\) to get the molar volume as a function of \(P_r\), and then to compute the compressibility directly from \(Z = PV/(RT)\).

To use this approach we need to get \(dV / dP_r\) from the van der Waal equation. Previously, we derived this in a round about way from the explicit form of the van der Waal equation. Here, we follow the work in this post to get the derivative from the implicit form of the van der Waal equation:

\(f(V, P_r, T_r) = \frac{R T_r T_c}{V - b} - \frac{a}{V^2} - P_r P_c = 0\)

Based on the work in this post, we can get

\(dV/dP_r = (-df/dP_r) / (df/dV)\)

and the two derivatives on the right can be found easily by automatic differentiation. First, we express the van der Waal equation in implicit form, with the variables as \(V, P_r, T_r\). Only two of those variables are independent; if you define two of them you can compute the third one using a tool like fsolve.
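(For completeness, the derivative formula above follows from requiring the total differential of \(f\) to be zero along the solution at constant \(T_r\): \(\frac{\partial f}{\partial V} dV + \frac{\partial f}{\partial P_r} dP_r = 0\), which rearranges to \(dV/dP_r = -(\partial f/\partial P_r)/(\partial f/\partial V)\).)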

R = 0.08206
Pc = 72.9
Tc = 304.2

a = 27 * R**2 * Tc**2 / (Pc * 64)
b = R * Tc / (8 * Pc)

Tr = 1.1  # Constant for this example

def f(V, Pr, Tr):
    return R * Tr * Tc / (V - b) - a / V**2 - Pr * Pc

Now, if we want to know how the volume varies with \(P_r\), we need to derive the derivative \(dV/dP_r\) and then integrate it. Here we use autograd to define the derivatives, and then we define a function that uses them. Note the arguments of the function dVdPr are in an order that anticipates that we want to integrate it with solve_ivp, to get a function \(V(P_r)\).

from autograd import grad

dfdPr = grad(f, 1)  # derivative of f with respect to arg at index=1: Pr
dfdV = grad(f, 0)  # derivative of f with respect to arg at index=0: V

def dVdPr(Pr, V):
    return -dfdPr(V, Pr, Tr) / dfdV(V, Pr, Tr)  # Tr is a constant in here

Now, we need an initial condition to start the integration from. We want the volume at \(P_r=0.1\). We have to use fsolve for this, or some other method that tells you what the volume is at \(P_r=0.1\). We can pass the values of \(P_r\) and \(T_r\) as arguments to our implicit function. Since \(V\) is the first argument, we can solve our implicit function directly. Otherwise you would have to define a helper objective function to use with fsolve.

from scipy.optimize import fsolve

V0, = fsolve(f, 3.5, args=(0.1, 1.1))
V0
3.6764763125625435

Finally, we are ready to integrate the ODE, and plot the solution.

import numpy as np
from scipy.integrate import solve_ivp

Pr_span = (0.1, 10)
Pr_eval, h = np.linspace(*Pr_span, retstep=True)

sol = solve_ivp(dVdPr, Pr_span, (V0,), max_step=h)
print(sol.message)

%matplotlib inline
import matplotlib.pyplot as plt

Pr = sol.t  # the P_r steps used in the solution
V = sol.y[0]  # V(P_r) from the solution

Z = Pr * Pc * V / (R * Tr * Tc)  # Compressibility Z(P_r)

plt.plot(Pr, Z)
plt.xlabel('$P_r$')
plt.ylabel('Z')
plt.xlim([0, 10])
plt.ylim([0, 2])
The solver successfully reached the end of the integration interval.


(0, 2)

That is the same result as we got before.

1 Summary thoughts

This method also worked successfully to solve this problem. In most ways, this method requires fewer algebraic manipulations to get to the solution. In method 3, we had to do some calculus that relied on a particular explicit form of the van der Waal equation. While those manipulations were not particularly difficult, they leave opportunities for mistakes, and they would be more difficult for an implicit equation of state (e.g. if there were a \(P\) on the right hand side).

This approach also required some manipulation, but it is a standard one: that is how you get a derivative from an implicit function. After that, it is straightforward to define the desired derivative as a function and then integrate it to get the solution. So, we still don't get a free pass on calculus, but we do reduce the number of manipulations required to get to the solution. I consider that a plus.

Copyright (C) 2018 by John Kitchin. See the License for information about copying.


Getting derivatives from implicit functions with autograd

| categories: autograd, python | tags:

If we have an implicit function: \(f(x, y(x)) = 0\), but we want to compute the derivative \(dy/dx\) we can use the chain rule to derive:

\(df/dx + df/dy dy/dx = 0\)

We can then solve for \(dy/dx\):

\(dy/dx = -(df/dx) / (df/dy)\) to get the desired derivative. The interesting point of this blog post is that we can get the two derivatives on the right hand side of this equation using automatic differentiation of the function \(f(x, y)\)! There are a few examples of analytical approaches to derivatives from implicit functions here, and I wanted to explore them with autograd in this post.

In the following examples, we will assume that \(y\) is a function of \(x\) and that \(x\) is independent. We will consider a series of implicit equations, compute \(dy/dx\) using autograd from the formula above, and compare them to the analytical results in the web page referenced above.

The \(dy/dx\) functions generally depend on both \(x\) and \(y\). Technically, these are the derivatives along the curve \(y(x)\), but since we can evaluate them at any point, we will use some random points for \(x\) and \(y\) to test for equality between the analytical derivatives and the autograd derivatives. This isn't a rigorous proof of equality, but it is the only thing that makes sense to do for now. It is assumed that if these points are ok, all the others are too. That might be a broad claim, since we only sample \(x\) and \(y\) from 0 to 1 here, but the approach is general. Here are the imports and the random test points for all the examples that follow.

import autograd.numpy as np
from autograd import grad

xr = np.random.random(50)
yr = np.random.random(50)

1 Example 1

\(x^3 + y^3 = 4\)

def f1(x, y):
    return x**3 + y**3 - 4

D1x = grad(f1, 0)
D1y = grad(f1, 1)

dydx_1 = lambda x, y: -D1x(x, y) / D1y(x, y)
dydx_1a = lambda x, y: -x**2 / y**2

np.allclose(dydx_1a(xr, yr),
             [dydx_1(_xr, _yr) for _xr, _yr in zip(xr, yr)])
True

The output of True means the autograd results and the analytical results are "all close", i.e. within a tolerance the results are the same. The required derivatives of this are not that difficult to derive, but it is nice to see a simple example that "just works". A subtle point of the dydx function is that it is not vectorized, which is why I used a list comprehension to evaluate all the points. It might be possible to make a pseudo-vectorized version with the np.vectorize decorator; a sketch of that follows, though it is not really needed here.
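For what it's worth, here is that sketch (plain numpy.vectorize applied to the scalar autograd function; dydx_1_vec is my name and is not used elsewhere):

import numpy  # plain numpy, just for vectorize

dydx_1_vec = numpy.vectorize(dydx_1)
numpy.allclose(dydx_1a(xr, yr), dydx_1_vec(xr, yr))  # expected: True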

2 Example 2

\((x - y)^2 = x + y - 1\)

def f2(x, y):
    return (x - y)**2 - x - y + 1

D2x = grad(f2, 0)
D2y = grad(f2, 1)

dydx_2 = lambda x, y: -D2x(x, y) / D2y(x, y)
dydx2_a = lambda x, y: (2 * y - 2 * x + 1) / (2 * y - 2 * x - 1)

np.allclose(dydx2_a(xr, yr),
            [dydx_2(_xr, _yr) for _xr, _yr in zip(xr, yr)])
True

This also works.

3 Example 3

\(y = \sin(3x + 4y)\)

def f3(x, y):
    return y - np.sin(3 * x + 4 * y)


D3x = grad(f3, 0)
D3y = grad(f3, 1)

dydx_3 = lambda x, y: -D3x(x, y) / D3y(x, y)
dydx3_a = lambda x, y: (3 * np.cos(3 * x + 4 * y)) / (1 - 4 * np.cos(3 * x + 4 * y))

np.allclose(dydx3_a(xr, yr),
            [dydx_3(_xr, _yr) for _xr, _yr in zip(xr, yr)])
True

Check.

4 Example 4

\(y = x^2 y^3 + x^3 y^2\)

def f4(x, y):
    return y - x**2 * y**3 - x**3 * y**2


D4x = grad(f4, 0)
D4y = grad(f4, 1)

dydx_4 = lambda x, y: -D4x(x, y) / D4y(x, y)
dydx4_a = lambda x, y: (2 * x * y**3 + 3 * x**2 * y**2) / (1 - 3 * x**2 * y**2 - 2 * x**3 * y)

np.allclose(dydx4_a(xr, yr),
            [dydx_4(_xr, _yr) for _xr, _yr in zip(xr, yr)])
True

5 Example 5

\(e^{xy} = e^{4x} - e^{5y}\)

def f5(x, y):
    return np.exp(4 * x) - np.exp(5 * y) - np.exp(x * y)

D5x = grad(f5, 0)
D5y = grad(f5, 1)

dydx_5 = lambda x, y: -D5x(x, y) / D5y(x, y)
dydx5_a = lambda x, y: (4 * np.exp(4 * x) - y * np.exp(x * y)) / (x * np.exp(x * y) + 5 * np.exp(5 * y))

np.allclose(dydx5_a(xr, yr),
            [dydx_5(_xr, _yr) for _xr, _yr in zip(xr, yr)])
True

Also check.

6 Example 6

\(\cos^2 x + \cos^2 y = \cos(2x + 2y)\)

def f6(x, y):
    return np.cos(x)**2 + np.cos(y)**2 - np.cos(2 * x + 2 * y)

D6x = grad(f6, 0)
D6y = grad(f6, 1)

dydx_6 = lambda x, y: -D6x(x, y) / D6y(x, y)
dydx6_a = lambda x, y: (np.cos(x) * np.sin(x) - np.sin(2 * x + 2 * y)) / (np.sin(2 * x + 2 * y) - np.cos(y) * np.sin(y))

np.allclose(dydx6_a(xr, yr),
            [dydx_6(_xr, _yr) for _xr, _yr in zip(xr, yr)])
True

Check.

7 Example 7

\(x = 3 + \sqrt{x^2 + y^2}\)

def f7(x, y):
    return 3 + np.sqrt(x**2 + y**2) - x

D7x = grad(f7, 0)
D7y = grad(f7, 1)

dydx_7 = lambda x, y: -D7x(x, y) / D7y(x, y)
dydx7_a = lambda x, y: (np.sqrt(x**2 + y**2) - x) / y

np.allclose(dydx7_a(xr, yr),
            [dydx_7(_xr, _yr) for _xr, _yr in zip(xr, yr)])
True

8 Conclusions

There are a handful of other examples at the website referenced in the beginning, but I am stopping here. After seven examples of quantitative agreement between the easy-to-derive autograd derivatives and the easy-to-moderately-difficult analytical derivatives, it seems like it is autograd for the win here. This technique has some important implications for engineering calculations that I will explore in a future post. Until then, this is yet another interesting thing we can do with automatic differentiation!

Copyright (C) 2018 by John Kitchin. See the License for information about copying.
