Sensitivity analysis using automatic differentiation in Python

| categories: autograd, python, sensitivity | tags: | View Comments

This paper describes how sensitivity analysis requires access to the derivatives of a function. Say, for example, that we have a function describing the time evolution of the concentration of species A:

\([A] = \frac{[A]_0}{k_1 + k_{-1}} \left(k_1 e^{-(k_1 + k_{-1}) t} + k_{-1}\right)\)

The local sensitivities of the concentration of A to the parameters \(k_1\) and \(k_{-1}\) are defined as \(\frac{\partial [A]}{\partial k_1}\) and \(\frac{\partial [A]}{\partial k_{-1}}\). Our goal is to plot these sensitivities as a function of time. We could derive the derivatives by hand, but we will use automatic differentiation instead, through the autograd package. Here we import numpy from the autograd package and plot the function above.

import autograd.numpy as np

A0 = 1.0

def A(t, k1, k_1):
    # k1 is the forward rate constant; k_1 plays the role of k_{-1}
    return A0 / (k1 + k_1) * (k1 * np.exp(-(k1 + k_1) * t) + k_1)

%matplotlib inline
import matplotlib.pyplot as plt

t = np.linspace(0, 0.5)

k1 = 3.0
k_1 = 3.0
plt.plot(t, A(t, k1, k_1))
plt.xlim([0, 0.5])
plt.ylim([0, 1])
plt.xlabel('t')
plt.ylabel('A')

The figure above reproduces Fig. 1 from the paper referenced above. Next, we use autograd to get the derivatives. This is subtly different from our previous post. First, we need the derivative of the function with respect to the second and third arguments; the default is the first argument. Second, we want to evaluate this derivative at each time value. We use the jacobian function in autograd to get these. This is different from grad (which only applies to scalar-output functions) and elementwise_grad (which would sum the derivatives over the time points). That might be useful for regression, but not for sensitivity analysis. Finally, to reproduce Figure 2a, we plot the absolute value of the sensitivities.

from autograd import jacobian

dAdk1 = jacobian(A, 1)
dAdk_1 = jacobian(A, 2)

plt.plot(t, np.abs(dAdk1(t, k1, k_1)))
plt.plot(t, np.abs(dAdk_1(t, k1, k_1)))
plt.xlim([0, 0.5])
plt.ylim([0, 0.1])
plt.xlabel('t')
plt.legend(['$S_{k_1}$', '$S_{k_{-1}}$'])
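
As a quick sanity check (mine, not part of the original post), we can compare the autograd sensitivity to a central finite difference in \(k_1\), and also see why jacobian is the right choice here: with respect to the scalar k1 argument, elementwise_grad collapses the per-time sensitivities into their sum.

h = 1e-6
fd = (A(t, k1 + h, k_1) - A(t, k1 - h, k_1)) / (2 * h)
print(np.allclose(dAdk1(t, k1, k_1), fd))  # should print True

from autograd import elementwise_grad
# elementwise_grad of A with respect to the scalar k1 sums the sensitivities
# over the time points, which is not what we want for sensitivity analysis.
print(elementwise_grad(A, 1)(t, k1, k_1), np.sum(dAdk1(t, k1, k_1)))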

That looks like the figure in the paper. To summarize the main takeaway: autograd enabled us to readily compute derivatives without having to derive them manually. There was a little subtlety in choosing jacobian over grad or elementwise_grad, but once you know what these do, the choice seems reasonable. It is important to import the wrapped numpy first, so that autograd can do its work. All the functions here are pretty standard, so everything worked out of the box. We should probably be using autograd, or something like it, for more things in science!

Copyright (C) 2017 by John Kitchin. See the License for information about copying.

org-mode source

Org-mode version = 9.1.2

Read and Post Comments

Forces by automatic differentiation in molecular simulation

| categories: autograd, simulation | tags: | View Comments

In molecular simulation we often use a potential to compute the total energy of a system. For example, we might use something simple like a Lennard-Jones potential. If we have a potential function, e.g. \(E = V(R)\) where \(R\) are the positions of the atoms, then we know the forces on the atoms are defined by \(f = -\frac{dV}{dR}\). For simple functions, you can derive the derivative pretty easily, but these functions quickly get complicated. In this post, we consider automatic differentiation as implemented by autograd. This is neither symbolic nor numerical differentiation. The gist is that the program uses the chain rule to evaluate derivatives. Here we do not delve into how it is done; we just see how it might help us in molecular simulation.
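
Before the cluster example, here is a minimal sketch (mine, not from the post) of the idea for a single Lennard-Jones pair, where the analytical derivative is easy to write down for comparison.

import autograd.numpy as np
from autograd import grad

def pair_V(r, sigma=1.0, epsilon=1.0):
    # Lennard-Jones pair potential for a single interatomic distance r
    return 4 * epsilon * ((sigma / r)**12 - (sigma / r)**6)

dVdr = grad(pair_V)
r = 1.5
f_ad = -dVdr(r)  # force from automatic differentiation
f_hand = 24 * (2 * (1.0 / r)**12 - (1.0 / r)**6) / r  # hand-derived force with sigma = epsilon = 1
print(np.isclose(f_ad, f_hand))  # should print True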

For reference, here is a result from the LennardJones calculator in ASE.

from ase.calculators.lj import LennardJones
from ase.cluster.icosahedron import Icosahedron

atoms = Icosahedron('Ar', noshells=2, latticeconstant=3)
atoms.set_calculator(LennardJones())

atoms.rattle(0.5)
print('LJ: ', atoms.get_potential_energy())
('LJ: ', -3.3553466825679812)

First, we define a function for the Lennard-Jones potential. I adapted the code here to implement a function that calculates the Lennard-Jones energy for a cluster of atoms with no periodic boundary conditions. Instead of using numpy directly, we import it from the autograd package which puts thin wrappers around the functions to enable the derivative calculations.

import autograd.numpy as np

def energy(positions):
    "Compute the energy of a Lennard-Jones system."
    natoms = len(positions)

    sigma = 1.0
    epsilon = 1.0
    rc = 3 * sigma

    e0 = 4 * epsilon * ((sigma / rc)**12 - (sigma / rc)**6)  # energy at the cutoff, used to shift the potential to zero at rc
    energy = 0.0
    for a1 in range(natoms):
        for j in range(a1 + 1, natoms):
            r2 = np.sum((positions[a1] - positions[j])**2)
            if r2 <= rc**2:
                c6 = (sigma**2 / r2)**3
                energy -= e0 
                c12 = c6**2
                energy += 4 * epsilon * (c12 - c6)

    return energy

Here is our function in action, and it produces the same result as the ASE calculator. So far there is nothing new.

print('our func: ', energy(atoms.positions))
('our func: ', -3.3553466825679803)

Now, we look at the forces from the ASE calculator. If you look at the ASE code you will see that the formula for forces was analytically derived and accumulated in the loop.

np.set_printoptions(precision=3, suppress=True)
print(atoms.get_forces())
[[ 0.545  1.667  0.721]
 [-0.068  0.002  0.121]
 [-0.18   0.018 -0.121]
 [ 0.902 -0.874 -0.083]
 [ 0.901 -0.937 -1.815]
 [ 0.243 -0.19   0.063]
 [-0.952 -1.776 -0.404]
 [-0.562  1.822  1.178]
 [-0.235  0.231  0.081]
 [-0.023  0.204 -0.294]
 [ 0.221 -0.342 -0.425]
 [-5.385 -6.017  1.236]
 [ 4.593  6.193 -0.258]]

Now we look at how to use autograd for this purpose. We want the element-wise gradient of the total energy with respect to the positions. autograd's elementwise_grad returns a function, so we wrap it in another function that returns the negative of the gradient, i.e. the forces.

from autograd import elementwise_grad

def forces(pos):
    dEdR = elementwise_grad(energy)
    return -dEdR(pos)

print(forces(atoms.positions))
[[ 0.545  1.667  0.721]
 [-0.068  0.002  0.121]
 [-0.18   0.018 -0.121]
 [ 0.902 -0.874 -0.083]
 [ 0.901 -0.937 -1.815]
 [ 0.243 -0.19   0.063]
 [-0.952 -1.776 -0.404]
 [-0.562  1.822  1.178]
 [-0.235  0.231  0.081]
 [-0.023  0.204 -0.294]
 [ 0.221 -0.342 -0.425]
 [-5.385 -6.017  1.236]
 [ 4.593  6.193 -0.258]]

Here we show the results are the same from both approaches.

print(np.allclose(atoms.get_forces(), forces(atoms.positions)))
True

Wow. We got the forces without deriving a derivative by hand or using numerical finite differences, even though the energy function contains loops and conditionals. That is pretty awesome. You can easily modify the potential function now, without the need to rederive the force expressions! This is an idea worth exploring further. In principle, it should be possible to include periodic boundary conditions and use autograd to compute stresses too. Maybe that will be a future post.
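
As a hedged illustration of that point (my sketch, not part of the post), we can add a harmonic confining term to the potential and reuse elementwise_grad to get the new forces with no extra derivation; the spring constant here is an arbitrary illustrative value.

def confined_energy(positions, k_spring=0.1):
    "Lennard-Jones energy plus a harmonic confinement about the origin (illustrative only)."
    return energy(positions) + 0.5 * k_spring * np.sum(positions**2)

dEdR_confined = elementwise_grad(confined_energy)
print(-dEdR_confined(atoms.positions))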

Copyright (C) 2017 by John Kitchin. See the License for information about copying.

org-mode source

Org-mode version = 9.1.2

Read and Post Comments

New publication in J. Phys. Chem. Lett.

| categories: publication, news | tags: | View Comments

DFT calculations are extensively used to predict the chemical properties of metal alloy surfaces, but they are expensive, which limits the number of calculations that can practically be performed. In this paper, we explore a perturbation approach known as alchemy to take previously calculated results and extend them to new compositions. We use oxygen reduction as a prototype reaction, and show that alchemy is often much faster than DFT, with an accuracy within 0.1 eV of the DFT results. There are cases where the accuracy is not as good, suggesting that further improvements to the perturbation model could be beneficial. Overall, alchemy appears to be a useful tool in high-throughput screening research.

@article{saravanan-2017-alchem-predic,
  title =        {Alchemical Predictions for Computational Catalysis: Potential
                  and Limitations},
  year =         2017,
  Author =       {Saravanan, Karthikeyan and Kitchin, John R. and von
                  Lilienfeld, O. Anatole and Keith, John A.},
  Bdsk-Url-1 =   {http://dx.doi.org/10.1021/acs.jpclett.7b01974},
  Doi =          {10.1021/acs.jpclett.7b01974},
  Eprint =       {http://dx.doi.org/10.1021/acs.jpclett.7b01974},
  Journal =      {The Journal of Physical Chemistry Letters},
  Note =         {PMID: 28938798},
  Number =       {ja},
  Pages =        {null},
  Url =          {http://dx.doi.org/10.1021/acs.jpclett.7b01974},
  Volume =       0,
}

Copyright (C) 2017 by John Kitchin. See the License for information about copying.

org-mode source

Org-mode version = 9.0.7

Read and Post Comments

Finding similar bibtex entries

| categories: bibtex, similarity | tags: | View Comments

A common task while writing scientific papers is citing previous research. I use org-ref extensively for that, and it makes it pretty easy to find similar references, e.g. ones that have common authors or common keywords. It also lets me find similar articles in Web of Science or Scopus. Suppose that I have cited a particular paper, e.g. boes-2016-neural-networ, and I want to find references similar to it that are already in my bibtex file, where similar is by my own definition. With org-ref I can easily search by keyword or author to find similar entries, but these searches are limited by what I search for, and the results are not sorted. Today, I will explore the first step in a recommender system that calculates similarity and provides a sorted list of candidates with the most relevant ones first.

The idea is to calculate some measure of similarity between the title of that reference, and the titles of other references in my bibtex file, and then sort them by similarity. This is the reference I want to find similar entries for:

Boes, J. R., Groenenboom, M. C., Keith, J. A., & Kitchin, J. R., Neural network and Reaxff comparison for Au properties, Int. J. Quantum Chem., 116(13), 979–987 (2016). http://dx.doi.org/10.1002/qua.25115

The first thing we do is read in our bibtex file, and print a representative entry.

import bibtexparser
from bibtexparser.bparser import BibTexParser

with open('/Users/jkitchin/Dropbox/bibliography/references.bib') as bibtex_file:
    parser = BibTexParser()
    bib_database = bibtexparser.load(bibtex_file, parser=parser)
    entries = bib_database.entries

print(entries[10])

{'author': 'Jaan Aarik and Aleks Aidla and V{\\"a}ino Sammelselg and Teet\nUustare', 'title': 'Effect of Growth Conditions on Formation of \\ce{TiO_2}-{II}\nThin Films in Atomic Layer Deposition Process', 'journal': 'Journal of Crystal Growth', 'volume': '181', 'number': '3', 'pages': '259 - 264', 'year': '1997', 'doi': '10.1016/S0022-0248(97)00279-0', 'link': 'http://www.sciencedirect.com/science/article/pii/S0022024897002790', 'issn': '0022-0248', 'ENTRYTYPE': 'article', 'ID': 'aarik-1997-effec-growt'}

Each entry is a dictionary containing the fields and their values. For this exploration, I will only consider similarities between titles. The next step is we find which entry corresponds to the reference we want to find similarities to.

ids = [e['ID'] for e in entries]
i = ids.index('boes-2016-neural-networ')
print(entries[i])

{'author': 'Jacob R. Boes and Mitchell C. Groenenboom and John A. Keith\nand John R. Kitchin', 'title': 'Neural Network and {Reaxff} Comparison for {Au} Properties', 'journal': 'Int. J. Quantum Chem.', 'volume': '116', 'number': '13', 'pages': '979-987', 'year': '2016', 'doi': '10.1002/qua.25115', 'link': 'http://dx.doi.org/10.1002/qua.25115', 'issn': '1097-461X', 'keyword': 'Kohn-Sham density functional theory, neural networks, reactive\nforce fields, potential energy surfaces, machine learning', 'ENTRYTYPE': 'article', 'ID': 'boes-2016-neural-networ'}

It is best if we make the entry we want to find similarities to the first one, so here we swap the first and ith entries.

entries[0], entries[i] = entries[i], entries[0]

Now, we prepare the list of strings to get similarities for.

titles = [e.get('title', '') for e in entries]

We will use term frequency-inverse document frequency (tf-idf) to get a vector that represents each title, and then use cosine similarity as a measure of similarity. Here is the place to note that I chose these, and could choose other ones too. Also, it is worth noting that with this measure of similarity I did not have to choose which keywords to measure similarity on.

The functionality for this is provided by sklearn, which implements the algorithms above, and in just a few lines of code you get an array of tf-idf features to analyze. The array we get from the vectorizer contains normalized vectors, so we can get the cosine similarity just from a dot product of the vectors. The first row corresponds to the similarity of the first string to all the others. I want them sorted in descending order; argsort returns ascending order, so we sort the negative of the similarity scores, which achieves that. There are certainly more advanced treatments of the text we could use by customizing the vectorizer, e.g. word stemming (a sketch of that appears after the results below), but for now we neglect that.

from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(stop_words='english')
X = vectorizer.fit_transform(titles)

cosine_similarities = (X * X.T).A[0]

related_docs_indices = (-cosine_similarities).argsort()

print('The top 10 recommendations for {} are:\n'.format(titles[0]))
for i, j in enumerate(related_docs_indices[1:11]):
    print('{i}. {ID}: {title}, {author}\n'.format(i=i + 1, **entries[j]))

The top 10 recommendations for Neural Network and {Reaxff} Comparison for {Au} Properties are:

  1. behler-2010-neural: Neural network potential-energy surfaces for atomistic simulations, J{\"o}rg Behler

  2. boes-2017-neural-networ: Neural Network Predictions of Oxygen Interactions on a Dynamic {Pd} Surface, Jacob R. Boes and John R. Kitchin

  3. eshet-2010-ab: Ab Initio Quality Neural-Network Potential for Sodium, Hagai Eshet and Rustam Z. Khaliullin and Thomas D. K{\"u}hne and J{\"o}rg Behler and Michele Parrinello

  4. behler-2014-repres-poten: Representing Potential Energy Surfaces By High-Dimensional Neural Network Potentials, J Behler

  5. behler-2007-gener-neural: Generalized Neural-Network Representation of High-Dimensional Potential-Energy Surfaces, J{\"o}rg Behler and Michele Parrinello

  6. artrith-2012-high: High-Dimensional Neural Network Potentials for Metal Surfaces: A Prototype Study for Copper, Nongnuch Artrith and J{\"o}rg Behler

  7. behler-2015-const: Constructing High-Dimensional Neural Network Potentials: A Tutorial Review, J{\"o}rg Behler

  8. artrith-2011-high: High-Dimensional Neural-Network Potentials for Multicomponent Systems: Applications To Zinc Oxide, Nongnuch Artrith and Tobias Morawietz and J{\"o}rg Behler

  9. sosso-2012-neural-gete: Neural Network Interatomic Potential for the Phase Change Material \ce{GeTe}, Gabriele C. Sosso and Giacomo Miceli and Sebastiano Caravati and J{\"o}rg Behler and Marco Bernasconi

  10. lorenz-2006-descr: Descriptions of Surface Chemical Reactions Using a Neural Network Representation of the Potential-Energy Surface, S{\"o}nke Lorenz and Matthias Scheffler and Axel Gross

It is evident that this is showing other references containing the words "neural network"! I guess that is a little disappointing, since these could just as easily have been narrowed down in org-ref. On the other hand, they are sorted and grouped, which would not happen in org-ref. This is a comparison of pretty short strings (just the titles), so maybe this would be much more interesting if abstracts were also included. Including authors would give a different set as well (I tried it, and got a bunch of my own references!).
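
As a rough sketch of the vectorizer customization mentioned earlier (word stemming), here is one way it might look. This assumes nltk is installed along with its punkt tokenizer data, and it is not something I ran as part of this analysis.

from nltk import word_tokenize
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer('english')

def stemmed_tokens(text):
    # stem each token so, e.g., "networks" and "network" map to the same feature
    return [stemmer.stem(token) for token in word_tokenize(text)]

stemmed_vectorizer = TfidfVectorizer(stop_words='english', tokenizer=stemmed_tokens)
Xs = stemmed_vectorizer.fit_transform(titles)
stemmed_similarities = (Xs * Xs.T).A[0]
print((-stemmed_similarities).argsort()[1:11])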

I don't think it would be very difficult to get this into an Emacs selection tool, e.g. helm/ivy. Check this out:

import pycse.lisp

related_docs_indices[1:6].lisp

'(1592 1650 299 1751 103)'

That is a result that can be read directly by lisp, so we could simply write the code above as a shell script that takes an argument, and returns a list of indices to sort the candidates on. The alternative is to implement this in elisp, perhaps via a dynamic module if there is already a good C library for this. My sense is the Python libraries are more advanced in functionality.
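
Here is a hypothetical sketch of that shell-script idea; the script name, the interface, and the hard-coded bibliography path are all assumptions for illustration, not an existing tool.

#!/usr/bin/env python
# similar-bib.py (hypothetical): print a lisp-readable list of the indices of
# the bibtex entries most similar to the entry ID given on the command line.
import sys

import bibtexparser
from bibtexparser.bparser import BibTexParser
from sklearn.feature_extraction.text import TfidfVectorizer
import pycse.lisp

target_id = sys.argv[1]

with open('/Users/jkitchin/Dropbox/bibliography/references.bib') as bibtex_file:
    entries = bibtexparser.load(bibtex_file, parser=BibTexParser()).entries

# move the target entry to the front so the first row of similarities refers to it
ids = [e['ID'] for e in entries]
i = ids.index(target_id)
entries[0], entries[i] = entries[i], entries[0]

titles = [e.get('title', '') for e in entries]
X = TfidfVectorizer(stop_words='english').fit_transform(titles)
similarities = (X * X.T).A[0]

print((-similarities).argsort()[1:11].lisp)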

This could have a number of other applications. Given some reference content, you could imagine finding emails that are similar to it, finding RSS entries that are similar to it, finding org headlines that are related, similar files, or similarity with any other set of strings that can be gathered, e.g. from Crossref or some other search, etc. I predict there will be more on these topics in the future!

Copyright (C) 2017 by John Kitchin. See the License for information about copying.

org-mode source

Org-mode version = 9.0.7

Read and Post Comments

New publication in Journal of Physics Condensed Matter

| categories: publication, news | tags: | View Comments

The Atomic Simulation Environment is a powerful python library for setting up, running and analyzing molecular simulations. I have been using it and contributing to it since around 2002 when I used the ASE-2 version in Python 1.5! The new ase-3 version is much simpler to use, and much more powerful. This paper describes some of its design principles and capabilities. If you use ASE, please cite this paper!

@article{larsen-2017-atomic-simul,
  author =       {Ask Hjorth Larsen and Jens J{\o}rgen Mortensen and Jakob
                  Blomqvist and Ivano E Castelli and Rune Christensen and
                  Marcin Dułak and Jesper Friis and Michael N Groves and
                  Bj{\o}rk Hammer and Cory Hargus and Eric D Hermes and Paul C
                  Jennings and Peter Bjerre Jensen and James Kermode and John
                  R Kitchin and Esben Leonhard Kolsbjerg and Joseph Kubal and
                  Kristen Kaasbjerg and Steen Lysgaard and J{\'o}n Bergmann
                  Maronsson and Tristan Maxson and Thomas Olsen and Lars
                  Pastewka and Andrew Peterson and Carsten Rostgaard and Jakob
                  Schi{\o}tz and Ole Sch{\"u}tt and Mikkel Strange and Kristian
                  S Thygesen and Tejs Vegge and Lasse Vilhelmsen and Michael
                  Walter and Zhenhua Zeng and Karsten W Jacobsen},
  title =        {The Atomic Simulation Environment-A Python Library for Working
                  With Atoms},
  journal =      {Journal of Physics: Condensed Matter},
  volume =       29,
  number =       27,
  pages =        273002,
  year =         2017,
  url =          {http://stacks.iop.org/0953-8984/29/i=27/a=273002},
  abstract =     {The atomic simulation environment (ASE) is a software package
                  written in the Python programming language with the aim of
                  setting up, steering, and analyzing atomistic simulations. In
                  ASE, tasks are fully scripted in Python. The powerful syntax
                  of Python combined with the NumPy array library make it
                  possible to perform very complex simulation tasks. For
                  example, a sequence of calculations may be performed with the
                  use of a simple 'for-loop' construction. Calculations of
                  energy, forces, stresses and other quantities are performed
                  through interfaces to many external electronic structure codes
                  or force fields using a uniform interface. On top of this
                  calculator interface, ASE provides modules for performing many
                  standard simulation tasks such as structure optimization,
                  molecular dynamics, handling of constraints and performing
                  nudged elastic band calculations.},
}

Copyright (C) 2017 by John Kitchin. See the License for information about copying.

org-mode source

Org-mode version = 9.0.7

Read and Post Comments
