Multiprocessing with vasp.py

One of the reasons we use scripting is to set up, run, and analyze a lot of calculations. Usually, though, our scripts are serial: if we want to do ten things, we do them one at a time. Here we look at how to use multiprocessing to speed things up, using a simple equation of state with 10 calculations at lattice constants spanning ±10% of a base value. This isn't a definitive guide to multiprocessing or threads, just some working examples of what they do.

1 The serial case

We start with the serial case where each result is read one after another. This will give us a base case for how long it takes to read results when they are finished.

from vasp import Vasp
from ase import Atom, Atoms
import time
import numpy as np

t0 = time.time()

factors = np.linspace(0.9, 1.1, 10)
a = 3.6

calcs = [Vasp('mp/serial-Cu-{}'.format(i),
              xc='pbe',
              encut=300,
              kpts=[6, 6, 6],
              isym=2,
              atoms=Atoms([Atom('Cu',(0, 0, 0))],
                          cell=0.5 * a * f * np.array([[1.0, 1.0, 0.0],
                                                       [0.0, 1.0, 1.0],
                                                       [1.0, 0.0, 1.0]])))
         for i, f in enumerate(factors)]

energies = [calc.potential_energy for calc in calcs]
print(energies)
print('Elapsed-time = {0:1.1f} seconds'.format(time.time() - t0))
[-2.84280715, -3.21541663, -3.46563573, -3.62036549, -3.69449845, -3.71605521, -3.69440019, -3.64156925, -3.5622029, -3.464047]
Elapsed-time = 4.6 seconds

Just under 5 seconds. That doesn't seem like a big deal, but suppose there were 1000 results, or you needed to do this over and over; then it might make sense to shorten this if possible. One way is to run the reads in parallel. Each calculation is independent, so this should be easy.

2 Multiprocessing case

See https://docs.python.org/2/library/multiprocessing.html. The key to multiprocessing is to create a function that does the work, and then create a pool of processes to run it. There are a couple of examples here. It might seem like more processes would finish faster, but that isn't always the case: it takes some time to create processes/threads. I found that just 2-3 processes gives close to the best performance, and more is not better for this example. That may be specific to this example, since getting each energy doesn't take long in the first place.

2.1 Pool example

from vasp import Vasp
from ase import Atom, Atoms
import multiprocessing

import time
import numpy as np

def worker(arg):
    i, f = arg
    a = 3.6
    atoms = Atoms([Atom('Cu',(0, 0, 0))],
                  cell=0.5 * a * f * np.array([[1.0, 1.0, 0.0],
                                               [0.0, 1.0, 1.0],
                                               [1.0, 0.0, 1.0]]))
    calc = Vasp('mp/mp-pool-Cu-{}'.format(i),
                xc='pbe',
                encut=300,
                kpts=[6, 6, 6],
                isym=2,
                atoms=atoms)
    return calc.potential_energy

if __name__ == '__main__':
    t0 = time.time()

    factors = np.linspace(0.9, 1.1, 10)
    from multiprocessing import Pool
    p = Pool(2)

    energies = p.map(worker, enumerate(factors))
    print(energies)

    print('Elapsed-time = {0:1.1f} seconds'.format(time.time() - t0))
[-2.84280715, -3.21541663, -3.46563573, -3.62036549, -3.69449845, -3.71605521, -3.69440019, -3.64156925, -3.5622029, -3.464047]
Elapsed-time = 2.8 seconds

2.2 ThreadPool example

multiprocessing.pool.ThreadPool gives the same pool interface as Pool, but runs the workers in threads instead of separate processes.

from vasp import Vasp
from ase import Atom, Atoms
import multiprocessing

import time
import numpy as np

def worker(i, f):
    a = 3.6
    atoms = Atoms([Atom('Cu',(0, 0, 0))],
                  cell=0.5 * a * f * np.array([[1.0, 1.0, 0.0],
                                               [0.0, 1.0, 1.0],
                                               [1.0, 0.0, 1.0]]))
    calc = Vasp('mp/mp-threadpool-Cu-{}'.format(i),
                xc='pbe',
                encut=300,
                kpts=[6, 6, 6],
                isym=2,
                atoms=atoms)
    return calc.potential_energy

t0 = time.time()

factors = np.linspace(0.9, 1.1, 10)

from multiprocessing.pool import ThreadPool
pool = ThreadPool(processes=2)

energies = [r.get() for r in [pool.apply_async(worker, (i, f))
                              for i, f in enumerate(factors)]]

print(energies)
print('Elapsed-time = {0:1.1f} seconds'.format(time.time() - t0))
[-2.84280715, -3.21541663, -3.46563573, -3.62036549, -3.69449845, -3.71605521, -3.69440019, -3.64156925, -3.5622029, -3.464047]
Elapsed-time = 3.3 seconds

Interesting, it isn't quite two times faster, but it is reproducibly faster.

3 Threading

Threads are a bit different from multiprocessing. For one, the threading module doesn't come with a nice "pool" feature, so we have to roll our own with a queue. I tried a few variations with this, and here is one that worked in the end. Another difference is that we have to use a mutable list to capture the results; there is no way to "return" a value from a thread!
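Here is a toy sketch of that queue/worker pattern, stripped of vasp (and written with the Python 3 queue module; square and the inputs are made up). The "result" of each work item is written into a shared list at the item's index:

```python
import threading
import queue

def worker(q, results):
    # Pull (index, x) items off the queue forever; daemon threads
    # die with the main program once q.join() returns.
    while True:
        index, x = q.get()
        results[index] = x * x  # "return" by mutating the shared list
        q.task_done()

q = queue.Queue()
xs = [1, 2, 3, 4, 5]
results = [None] * len(xs)

for _ in range(2):  # two daemon worker threads
    t = threading.Thread(target=worker, args=(q, results), daemon=True)
    t.start()

for i, x in enumerate(xs):
    q.put((i, x))

q.join()        # block until every queued item is marked done
print(results)  # -> [1, 4, 9, 16, 25]
```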

from vasp import Vasp
from ase import Atom, Atoms

import Queue
import threading

import time
import numpy as np

class Worker(threading.Thread):
    def __init__(self, queue):
        self.queue = queue
        threading.Thread.__init__(self)
        self.daemon = True
    def run(self):
        while True:
            item = self.queue.get()
            f, energies, index = item
            a = 3.6
            atoms = Atoms([Atom('Cu',(0, 0, 0))],
                          cell=0.5 * a * f * np.array([[1.0, 1.0, 0.0],
                                                       [0.0, 1.0, 1.0],
                                                       [1.0, 0.0, 1.0]]))
            calc = Vasp('mp/queue-Cu-{}'.format(index),
                        xc='pbe',
                        encut=300,
                        kpts=[6, 6, 6],
                        isym=2,
                        atoms=atoms)

            energies[index] = calc.potential_energy
            self.queue.task_done()

t0 = time.time()
factors = np.linspace(0.9, 1.1, 10)

# Setup our queue
q = Queue.Queue()
num_threads = 2

for i in range(num_threads):
    Worker(q).start()

energies = [None for f in factors]
for i, f in enumerate(factors):
    q.put([f, energies, i])

q.join()
print(energies)
print('Elapsed-time = {0:1.1f} seconds'.format(time.time() - t0))
[-2.84280715, -3.21541663, -3.46563573, -3.62036549, -3.69449845, -3.71605521, -3.69440019, -3.64156925, -3.5622029, -3.464047]
Elapsed-time = 3.2 seconds

Again, not notably faster if you add more threads; your mileage may vary. The ThreadPoolExecutor is a little more convenient to use here, since it lets the worker function return a value. The syntax is a little heavy with the context manager.

from vasp import Vasp
from ase import Atom, Atoms

import time
import numpy as np
from concurrent import futures

def worker(arg):
    f, index = arg
    a = 3.6
    calc = Vasp('mp/futures-Cu-{}'.format(index),
                xc='pbe',
                encut=300,
                kpts=[6, 6, 6],
                isym=2,
                atoms=Atoms([Atom('Cu',(0, 0, 0))],
                            cell=0.5 * a * f * np.array([[1.0, 1.0, 0.0],
                                                         [0.0, 1.0, 1.0],
                                                         [1.0, 0.0, 1.0]])))

    return calc.potential_energy

t0 = time.time()

factors = np.linspace(0.9, 1.1, 10)

with futures.ThreadPoolExecutor(max_workers=2) as executor:
    fs = [executor.submit(worker, (f, i)) for i, f in enumerate(factors)]
    energies = [f.result() for f in futures.as_completed(fs)]

print(energies)
print('Elapsed-time = {0:1.1f} seconds'.format(time.time() - t0))
[-2.84280715, -3.21541663, -3.46563573, -3.62036549, -3.69449845, -3.71605521, -3.69440019, -3.64156925, -3.464047, -3.5622029]
Elapsed-time = 3.3 seconds
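One thing to note in the output above: the last two energies come back swapped, because futures.as_completed yields futures in the order they finish, not the order they were submitted. If order matters, executor.map preserves it. A toy sketch (slow and its sleep times are made up to force task 0 to finish last):

```python
import time
from concurrent import futures

def slow(x):
    # Task 0 sleeps longest, so it completes after the others.
    time.sleep(0.3 if x == 0 else 0.05)
    return x

with futures.ThreadPoolExecutor(max_workers=2) as ex:
    fs = [ex.submit(slow, x) for x in range(4)]
    completed = [f.result() for f in futures.as_completed(fs)]  # completion order
    mapped = list(ex.map(slow, range(4)))                       # submission order

print(completed)  # task 0 comes last here
print(mapped)     # -> [0, 1, 2, 3]
```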

4 Summary

This example didn't show the benefits of threading/multiprocessing very well. It is hard to get even a two-fold speedup here, and it plateaus after 2-3 threads. That is probably because each function evaluation is fast to begin with, so the overhead of creating processes/threads eats into the gains. Still, if you have code that is iterative and the iterations are independent and long-running, you might try this to see if you can get a real-time speedup. Take care if you are on a multi-user machine though; the other users might not appreciate you using all the cores and putting a heavy load on them ;)

org-source

Bandstructures in vasp.py

Calculating a bandstructure with vasp.py and an IPython notebook

This post has three goals: 1) show that we can run simulations in the IPython notebook (instead of org-mode), 2) post the notebook directly to the dft-book blog, and 3) show how to calculate a band structure.

First we import the vasp.py libraries we need.

In [1]:
%matplotlib inline
from vasp import Vasp
In [2]:
from ase.lattice.surface import fcc111
In [3]:
slab = fcc111('Al', size=(1, 1, 4), vacuum=10)
print(slab)
Atoms(symbols='Al4', positions=..., tags=..., cell=[[2.8637824638055176, 0.0, 0.0], [1.4318912319027588, 2.4801083645679673, 0.0], [0.0, 0.0, 27.014805770653954]], pbc=[True, True, False])

Now we set up and run a calculation. We need a base calculation to get the electron density from. Then we will run a non-self-consistent calculation with a k-point path using that density.

In [4]:
from vasp.vasprc import VASPRC
VASPRC['queue.nodes'] = 'n5'  # specify to run on node named n5

calc = Vasp('../../Al-bandstructure',
            xc='pbe',
            encut=300,
            kpts=[6, 6, 6],
            lcharg=True,  # We need the charge and wavefunctions for the second step
            lwave=True,
            atoms=slab)
calc.run()  # we need to wait for this to finish
Out[4]:
-14.17006237

Once the calculation is done, we can run the bandstructure calculation. We specify a path through k-space as a series of pairs of points, and the number of "intersections" we want on each path. This path has 4 segments, with 10 points on each segment.

In [5]:
n, bands, p = calc.get_bandstructure(kpts_path=[(r'$\Gamma$', [0, 0, 0]),
                                                ('$K1$', [0.5, 0.0, 0.0]),
                                                ('$K1$', [0.5, 0.0, 0.0]),
                                                ('$K2$', [0.5, 0.5, 0.0]),
                                                ('$K2$', [0.5, 0.5, 0.5]),
                                                (r'$\Gamma$', [0, 0, 0]),
                                                (r'$\Gamma$', [0, 0, 0]),
                                                ('$K3$', [0, 0, 1])],
                                                 kpts_nintersections=10,
                                                 show=True)

The figure above shows why we only need $m \times n \times 1$ k-point meshes for slabs. From $\Gamma$ to $K3$ the bands are flat, so one k-point is sufficient to characterize the band energy in that direction (the z-direction in this calculation).

That is basically it! I didn't find this as easy to use as Emacs + org-mode, but since I have 5+ years of skill with that, and a day of experience with this, that might be expected ;)

Managing queued jobs with vasp.py

One of the reasons we use vasp.py is to make it easy to run a lot of calculations conveniently. vasp.py automatically submits jobs to the queue for us, and provides some job control features to make it wait until they are finished.

Sometimes things go wrong though, e.g. you realize you used the wrong parameter and suddenly you have a lot of jobs in the queue to stop. Not to worry, vasp.py can help you out. Each calculator stores a jobid on it, and vasp.py provides some support to interact with the queue system (assuming you use Torque or something compatible with it)!

Here is an example where we setup and run 10 calculations. The first time we run this, 10 jobs get submitted.

from vasp import Vasp
from ase import Atom, Atoms
import matplotlib.pyplot as plt
import numpy as np

bond_lengths = np.linspace(1.05, 1.3, 10)

calcs = [Vasp('molecules/co-n{0:02d}'.format(i),  # output dir
                xc='PBE',
                nbands=6,
                encut=520,
                ismear=1,
                sigma=0.2,
                atoms=Atoms([Atom('C', [0, 0, 0]),
                             Atom('O', [d, 0, 0])],
                            cell=(6, 6, 6)))
         for i, d in enumerate(bond_lengths)]

energies = [calc.potential_energy for calc in calcs]

# Here are the jobids
print([(c.jobid(), c.in_queue()) for c in calcs])

# Stop here until all calculations are done.
Vasp.stop_if(None in energies)

print(energies)
/home-research/jkitchin/dft-book/blog/source/org/molecules/co-n00 submitted: 1397279.gilgamesh.cheme.cmu.edu
/home-research/jkitchin/dft-book/blog/source/org/molecules/co-n01 submitted: 1397280.gilgamesh.cheme.cmu.edu
/home-research/jkitchin/dft-book/blog/source/org/molecules/co-n02 submitted: 1397281.gilgamesh.cheme.cmu.edu
/home-research/jkitchin/dft-book/blog/source/org/molecules/co-n03 submitted: 1397282.gilgamesh.cheme.cmu.edu
/home-research/jkitchin/dft-book/blog/source/org/molecules/co-n04 submitted: 1397283.gilgamesh.cheme.cmu.edu
/home-research/jkitchin/dft-book/blog/source/org/molecules/co-n05 submitted: 1397284.gilgamesh.cheme.cmu.edu
/home-research/jkitchin/dft-book/blog/source/org/molecules/co-n06 submitted: 1397285.gilgamesh.cheme.cmu.edu
/home-research/jkitchin/dft-book/blog/source/org/molecules/co-n07 submitted: 1397286.gilgamesh.cheme.cmu.edu
/home-research/jkitchin/dft-book/blog/source/org/molecules/co-n08 submitted: 1397287.gilgamesh.cheme.cmu.edu
/home-research/jkitchin/dft-book/blog/source/org/molecules/co-n09 submitted: 1397288.gilgamesh.cheme.cmu.edu
[(u'1397279.gilgamesh.cheme.cmu.edu', True), (u'1397280.gilgamesh.cheme.cmu.edu', True), (u'1397281.gilgamesh.cheme.cmu.edu', True), (u'1397282.gilgamesh.cheme.cmu.edu', True), (u'1397283.gilgamesh.cheme.cmu.edu', True), (u'1397284.gilgamesh.cheme.cmu.edu', True), (u'1397285.gilgamesh.cheme.cmu.edu', True), (u'1397286.gilgamesh.cheme.cmu.edu', True), (u'1397287.gilgamesh.cheme.cmu.edu', True), (u'1397288.gilgamesh.cheme.cmu.edu', True)]

Say we just realized we want to delete these jobs and change a parameter. We just get the calculators and call qdel on each one like this:

from vasp import Vasp
import numpy as np

bond_lengths = np.linspace(1.05, 1.3, 10)
calcs = [Vasp('molecules/co-n{0:02d}'.format(i))
         for i, d in enumerate(bond_lengths)]

print([calc.qdel() for calc in calcs])
[(0, ''), (0, ''), (0, ''), (0, ''), (0, ''), (0, ''), (0, ''), (0, ''), (0, ''), (0, '')]

All those 0's just mean the job deletions succeeded. Now we just have to adjust the original parameter set and rerun the original script, which will resubmit the new jobs for us.

Here is the output from one of those calculations. Note the path is an org-link, so you can just click on it to open the directory if you run this in Emacs.

from vasp import Vasp
print(Vasp('molecules/co-n00'))
*************** VASP CALCULATION SUMMARY ***************
Vasp calculation directory:
---------------------------
  [[/home-research/jkitchin/dft-book/blog/source/org/molecules/co-n00]]

Unit cell:
----------
       x       y       z             |v|
  v0   6.000   0.000   0.000       6.000 Ang
  v1   0.000   6.000   0.000       6.000 Ang
  v2   0.000   0.000   6.000       6.000 Ang
  a,b,c,alpha,beta,gamma (deg): 6.000 6.000 6.000 90.0 90.0 90.0
  Total volume:                  216.000 Ang^3
  Stress:    xx     yy     zz     yz     xz     xy
         -0.071  0.001  0.001 -0.000 -0.000 -0.000 GPa

  ID  tag     sym    x        y        z    rmsF (eV/A)  constraints (F=Frozen)
  0   0       C      0.000    0.000    0.000   14.97      T T T
  1   0       O      1.050    0.000    0.000   14.97      T T T
  Potential energy: -14.1779 eV

INPUT Parameters:
-----------------
  lcharg    : False
  pp        : PBE
  nbands    : 6
  xc        : pbe
  ismear    : 1
  lwave     : False
  sigma     : 0.2
  kpts      : [1, 1, 1]
  encut     : 520

Pseudopotentials used:
----------------------
  C: potpaw_PBE/C/POTCAR (git-hash: ee4d8576584f8e9f32e90853a0cbf9d4a9297330)
  O: potpaw_PBE/O/POTCAR (git-hash: 592f34096943a6f30db8749d13efca516d75ec55)

Here is a summary of the job control functions and queue commands built in to vasp.py.

1 vasp.py job control functions

1.1 Vasp.run()

Runs all stored calculators on the Vasp class.

from vasp import Vasp
help(Vasp.run)
Help on method run in module vasp.vasp_core:

run(cls, wait=False) method of __builtin__.type instance
    Convenience function to run calculators.

    The default behavior is to exit after doing this. If wait is
    True, iy will cause it to wait with the default args to
    Vasp.wait.

    If wait is a dictionary, it will be passed as kwargs to
    Vasp.wait.

1.2 calc.run()

Runs the instance calculator.

1.3 Vasp.all()

Returns whether all calculators are ready, e.g. with results.

from vasp import Vasp
help(Vasp.all)
Help on method all in module vasp.vasp_core:

all(cls) method of __builtin__.type instance
    Returns if all calculators in the class are ready.

1.4 Vasp.stop_if(condition)

Aborts the script if condition is truthy.

from vasp import Vasp
help(Vasp.stop_if)
Help on method stop_if in module vasp.vasp_core:

stop_if(cls, condition) method of __builtin__.type instance
    Stops the program if condition is truthy.

1.5 Vasp.abort() and calc.abort()

Aborts the script.

from vasp import Vasp
help(Vasp.abort)
Help on method abort in module vasp.vasp_core:

abort(cls) method of __builtin__.type instance
    Abort and exit the program the calculator is running in.

1.6 Vasp.wait() and calc.wait()

Polls until the jobs are finished, allowing a real-time wait. This blocks the script.

from vasp import Vasp
help(Vasp.wait)
Help on method wait in module vasp.vasp_core:

wait(cls, poll_interval=5, timeout=None, abort=False) method of __builtin__.type instance
    Control function to wait until all calculators are ready.

    if abort is truthy, stop the program.

    Otherwise check the calculators every poll_interval seconds,
    up to timeout seconds later. If timeout is None, poll forever.
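The docstring above amounts to a simple poll loop. Here is a generic sketch of that logic; wait_until, ready, and the toy usage are all made up for illustration, and this is not vasp.py's actual implementation:

```python
import time

def wait_until(ready, poll_interval=5, timeout=None):
    """Poll ready() every poll_interval seconds until it returns True.

    Returns False if timeout seconds pass first; timeout=None polls
    forever. Mirrors the documented Vasp.wait behavior (a sketch only).
    """
    t0 = time.time()
    while not ready():
        if timeout is not None and time.time() - t0 > timeout:
            return False
        time.sleep(poll_interval)
    return True

# Toy usage: "ready" on the third check.
state = {'n': 0}
def ready():
    state['n'] += 1
    return state['n'] >= 3

print(wait_until(ready, poll_interval=0.01))  # -> True
```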

2 vasp.py queue commands

2.1 calc.qstat(*options)

Gets information about the job.

from vasp import Vasp
import numpy as np

calc = Vasp('molecules/co-n09')

calc.qstat()
Job id                    Name             User            Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
1397268.gilgamesh         .../co-n09       jkitchin               0 R short

2.2 calc.qalter(*options)

Allows you to alter the queue parameters for a job.

from vasp import Vasp
import numpy as np

calc = Vasp('molecules/co-n09')

calc.qalter('-l', 'walltime=20:00:00')

2.3 calc.qdel(*options)

Allows you to delete the job.

2.4 calc.xterm()

This will pop up an xterm window in the directory of the calculation. There you can run commands and see what is going on.

from vasp import Vasp
import numpy as np

calc = Vasp('molecules/co-n00')

calc.xterm()

2.5 calc.qoutput()

Returns the contents of the queue output file if it exists. This may be useful for debugging.

from vasp import Vasp
import numpy as np

calc = Vasp('molecules/co-n00')

print(calc.qoutput())
xmodmap:  unable to open display ''
 vasp.5.3.5 31Mar14 (build Aug 04 2015 12:48:45) complex

 POSCAR found :  2 types and       2 ions
 LDA part: xc-table for Pade appr. of Perdew
 POSCAR found :  2 types and       2 ions
 POSCAR, INCAR and KPOINTS ok, starting setup
 WARNING: small aliasing (wrap around) errors must be expected
 FFT: planning ...
 WAVECAR not read
 WARNING: random wavefunctions but no delay for mixing, default for NELMDL
 entering main loop
       N       E                     dE             d eps       ncg     rms          rms(c)
DAV:   1     0.693510492725E+02    0.69351E+02   -0.29305E+03    12   0.855E+02
DAV:   2    -0.556930203939E+01   -0.74920E+02   -0.74931E+02    18   0.248E+02
DAV:   3    -0.151153850754E+02   -0.95461E+01   -0.95461E+01    12   0.964E+01
DAV:   4    -0.153607093341E+02   -0.24532E+00   -0.24532E+00    12   0.138E+01
DAV:   5    -0.153763439194E+02   -0.15635E-01   -0.15635E-01    24   0.343E+00    0.816E+00
DAV:   6    -0.144168400550E+02    0.95950E+00   -0.26887E+00    12   0.196E+01    0.429E+00
DAV:   7    -0.142207761391E+02    0.19606E+00   -0.48211E-01    18   0.795E+00    0.186E+00
DAV:   8    -0.142021024270E+02    0.18674E-01   -0.81621E-02    18   0.376E+00    0.582E-01
DAV:   9    -0.142016090043E+02    0.49342E-03   -0.11924E-02    12   0.136E+00    0.870E-02
DAV:  10    -0.142024449557E+02   -0.83595E-03   -0.55763E-04    12   0.268E-01    0.420E-02
DAV:  11    -0.142040155105E+02   -0.15706E-02   -0.43005E-04    18   0.201E-01    0.267E-02
DAV:  12    -0.142041222436E+02   -0.10673E-03   -0.11191E-04     6   0.106E-01    0.989E-03
DAV:  13    -0.142042422801E+02   -0.12004E-03   -0.26082E-05     6   0.587E-02    0.352E-03
DAV:  14    -0.142042518369E+02   -0.95569E-05   -0.35661E-06     6   0.202E-02
   1 F= -.14204252E+02 E0= -.14208117E+02  d E =0.115958E-01
org-source

Job control in vasp.py


One of the things we often need to do in DFT calculations is set up a series of calculations, run them all, and then do some analysis. The new vasp.py helps us with this. We can create a list of calculators and then run each one. We run all our jobs asynchronously in a queue system, with no real control over when they start and when they end. A challenge is that we usually need some way to stop the script after the jobs are started, so we can wait for them to finish before proceeding with analysis.

We address this challenge by storing a reference to every calculator created on the Vasp class, and defining some class methods that work on all of them. For example, the Vasp.run method runs each calculator (which submits a job for each one to the queue system). The default behavior is then to exit, but we can also tell it to wait, which makes it periodically check whether the calculations are done before proceeding.

import vasp
help(vasp.Vasp.run)
Help on method run in module vasp.vasp_core:

run(cls, wait=False) method of __builtin__.type instance
    Convenience function to run calculators.
    
    The default behavior is to exit after doing this. If wait is
    True, iy will cause it to wait with the default args to
    Vasp.wait.
    
    If wait is a dictionary, it will be passed as kwargs to
    Vasp.wait.

With our typical workflow of scripting in org-mode this is very convenient. We can write one script that sets up the calculations, runs them, and later, when they are done, performs the analysis. We have to run this script two times: the first time submits the jobs and exits at the Vasp.run() line (see Output from first run). After the jobs are done, we run it a second time, and we get the results and make the plot!

from vasp import Vasp
from ase.lattice.cubic import BodyCenteredCubic

NUPDOWNS = [4.0, 4.5, 5.0, 5.5, 6.0]

# These are the same for all calculations.
fixed_pars = dict( xc='PBE',
                   encut=200,
                   kpts=[4, 4, 4],
                   ispin=2,
                   atoms=BodyCenteredCubic(directions=[[1, 0, 0],
                                                       [0, 1, 0],
                                                       [0, 0, 1]],
                                           size=(1, 1, 1),
                                           symbol='Fe'))

# Prepare a list of calculators
calcs = [Vasp('bulk/Fe-bcc-fixedmagmom-{0:1.2f}'.format(B),
              nupdown=B,
              **fixed_pars)             
         for B in NUPDOWNS]

# This will start each calculation, and if they are not ready abort the script.
# If they are ready, we will get the energies.
energies = Vasp.run()  

import matplotlib.pyplot as plt
plt.plot(NUPDOWNS, energies)
plt.xlabel('Total Magnetic Moment')
plt.ylabel('Energy (eV)')
plt.savefig('Fe-fixedmagmom.png')

Fe-fixedmagmom.png

This style works especially well for our workflow with org-mode.

1 Output from first run

Here is the output of the script the first time I ran it. It just tells me that the jobs have been submitted and are queued. The output is a bit verbose because of the way the exception-handling system works in vasp.py; basically, there end up being multiple calls to self.update before the script exits.

energy not in {}. Calc required.
energy not in {}. Calc required.
/home-research/jkitchin/dft-book/blog/source/org/bulk/Fe-bcc-fixedmagmom-4.00 submitted: 1397190.gilgamesh.cheme.cmu.edu
/home-research/jkitchin/dft-book/blog/source/org/bulk/Fe-bcc-fixedmagmom-4.00 Queued: 1397190.gilgamesh.cheme.cmu.edu
energy not in {}. Calc required.
energy not in {}. Calc required.
/home-research/jkitchin/dft-book/blog/source/org/bulk/Fe-bcc-fixedmagmom-4.50 submitted: 1397191.gilgamesh.cheme.cmu.edu
/home-research/jkitchin/dft-book/blog/source/org/bulk/Fe-bcc-fixedmagmom-4.50 Queued: 1397191.gilgamesh.cheme.cmu.edu
energy not in {}. Calc required.
energy not in {}. Calc required.
/home-research/jkitchin/dft-book/blog/source/org/bulk/Fe-bcc-fixedmagmom-5.00 submitted: 1397192.gilgamesh.cheme.cmu.edu
/home-research/jkitchin/dft-book/blog/source/org/bulk/Fe-bcc-fixedmagmom-5.00 Queued: 1397192.gilgamesh.cheme.cmu.edu
energy not in {}. Calc required.
energy not in {}. Calc required.
/home-research/jkitchin/dft-book/blog/source/org/bulk/Fe-bcc-fixedmagmom-5.50 submitted: 1397193.gilgamesh.cheme.cmu.edu
/home-research/jkitchin/dft-book/blog/source/org/bulk/Fe-bcc-fixedmagmom-5.50 Queued: 1397193.gilgamesh.cheme.cmu.edu
energy not in {}. Calc required.
energy not in {}. Calc required.
/home-research/jkitchin/dft-book/blog/source/org/bulk/Fe-bcc-fixedmagmom-6.00 submitted: 1397194.gilgamesh.cheme.cmu.edu
/home-research/jkitchin/dft-book/blog/source/org/bulk/Fe-bcc-fixedmagmom-6.00 Queued: 1397194.gilgamesh.cheme.cmu.edu
org-source

First post on DFT book blog

We have a new rewrite of the vasp Python calculator (https://github.com/jkitchin/vasp). Most of DFT-book has been re-written with the new library to provide testing and examples. There are a lot of new ideas in it though, so I am creating this new blog for dft-book to try them out, discuss various design choices, and develop advanced methods in running DFT calculations. This is the first post!

One of the features it has is better support for things like list comprehensions in DFT calculations. Here is an example of that where we start with a list of bond lengths, create a list of calculators, and get the energy from each one. This one script sets up and runs the calculations. You can run it over and over, and it should simply retrieve the results once the calculations are finished.

from vasp import Vasp
from ase import Atom, Atoms
import matplotlib.pyplot as plt

bond_lengths = [1.05, 1.1, 1.15, 1.2, 1.25]

calcs = [Vasp('molecules/co-{0}'.format(d),  # output dir
                xc='PBE',
                nbands=6,
                encut=350,
                ismear=1,
                sigma=0.01,
                atoms=Atoms([Atom('C', [0, 0, 0]),
                             Atom('O', [d, 0, 0])],
                            cell=(6, 6, 6)))
         for d in bond_lengths]

energies = [calc.potential_energy for calc in calcs]

# Stop here until all calculations are done.
calcs[0].stop_if(None in energies)

plt.plot(bond_lengths, energies)
plt.xlabel(r'Bond length ($\AA$)')
plt.ylabel('Energy (eV)')
plt.savefig('test.png')

test.png

Figure 1: Energy vs. bond-length for the CO molecule.

org-source