10 years ago - org mode is awesome video

| categories: org-mode | tags:

Ten years ago I posted a video on YouTube titled "org mode is awesome". This 18-minute video was a tour of org-mode features including outlining, task management, agendas, tables, code, exporting to different formats, and extensibility. That video has been viewed 92.9K times!

A fair bit has changed since then, and a lot has stayed the same. org-mode is even more awesome! That video was made with what I called jmax at the time, which has since evolved into scimax. I use scimax on a daily basis in my research, teaching, and other work. It is as important in my work today as it was 10 years ago, and it has survived watching other editors come and go.

I have had it in mind to make an updated version of that video, but in my opinion it has really stood the test of time and is still highly relevant in its present form. The only things that would really change are the background and font colors.

I have made over 200 other YouTube videos over that time, many of them on using Emacs and org-mode in a lot of different ways. These videos are organized in the following playlists:

scimax-eln
Using scimax as an electronic lab notebook.
scimax
Videos on libraries I developed in scimax, including org-ref
org-mode
Videos exploring features in org-mode
pycse
Videos on using Python in Emacs to solve engineering and science problems

You can see how scimax has evolved, and continues to evolve, through these videos, and of course through the scimax repo at https://github.com/jkitchin/scimax. There are still great things coming for scimax, so stay tuned!


New publication - Pourbaix Machine Learning Framework Identifies Acidic Water Oxidation Catalysts Exhibiting Suppressed Ruthenium Dissolution

| categories: publication, news | tags:

Water splitting is a crucial technology for renewable hydrogen generation. Under acidic conditions most metals that would be used for the oxidation reaction tend to dissolve, limiting their utility. Iridium oxide is widely regarded as the most active and stable material, but it is very expensive. Ruthenium oxide is the next most active material, but it is less stable and tends to dissolve over time. In this work we studied 36,000 mixed metal oxides to identify compositions that could stabilize ruthenium against dissolution. We found a promising candidate, Ru0.6Cr0.2Ti0.2O2. We synthesized this material and showed that it has superior stability and improved activity compared to RuO2.

@article{abed-2024-pourb-machin,
  author =       {Jehad Abed and Javier Heras-Domingo and Rohan Yuri Sanspeur
                  and Mingchuan Luo and Wajdi Alnoush and Debora Motta Meira and
                  Hsiaotsu Wang and Jian Wang and Jigang Zhou and Daojin Zhou
                  and Khalid Fatih and John R. Kitchin and Drew Higgins and
                  Zachary W. Ulissi and Edward H. Sargent},
  title =        {Pourbaix Machine Learning Framework Identifies Acidic Water
                  Oxidation Catalysts Exhibiting Suppressed Ruthenium
                  Dissolution},
  journal =      {Journal of the American Chemical Society},
  volume =       {nil},
  number =       {nil},
  pages =        {nil},
  year =         2024,
  doi =          {10.1021/jacs.4c01353},
  url =          {http://dx.doi.org/10.1021/jacs.4c01353},
  DATE_ADDED =   {Sat Jun 8 13:12:31 2024},
}


New publication - Surface Segregation Studies in Ternary Noble Metal Alloys Comparing DFT and Machine Learning with Experimental Data

| categories: publication, news | tags:

Alloy segregation is hard to model; you need large unit cells to get fine-grained compositions and a lot of DFT calculations to sample all the possible configurations. The challenge gets even bigger when you consider a ternary alloy and want to model segregation over the entire ternary composition space and across multiple surfaces. We tackle this problem in this work using Open Catalyst Project machine learned potentials (MLPs) that are fine-tuned on a few thousand DFT calculations. We use those MLPs with Monte Carlo simulations to predict segregation on ternary alloy (111), (110), and (100) surfaces. We compare our predictions to experimental measurements on a polycrystalline composition spread alloy film (CSAF). Similar to our previous work, we find qualitative and quantitative agreement in some composition ranges and disagreement in others. We trace the limitations in quantitative accuracy to limitations in the DFT calculations.

@article{broderick-2024-surfac-segreg,
  author =       {Kirby Broderick and Robert A. Burnley and Andrew J. Gellman
                  and John R. Kitchin},
  title =        {Surface Segregation Studies in Ternary Noble Metal Alloys:
                  Comparing DFT and Machine Learning with Experimental Data},
  journal =      {ChemPhysChem},
  volume =       {nil},
  number =       {nil},
  pages =        {nil},
  year =         2024,
  doi =          {10.1002/cphc.202400073},
  url =          {http://dx.doi.org/10.1002/cphc.202400073},
  DATE_ADDED =   {Thu Jun 6 08:37:37 2024},
}


New publication - Cyclic Steady-State Simulation and Waveform Design for Dynamic Programmable Catalysis

| categories: publication, news | tags:

You can get higher rates of reaction on a catalyst by dynamically changing the adsorbate and reaction energetics. It has been an open challenge, though, to find ways to obtain the optimal waveform. In this work we present a problem formulation that is easy to solve and that enables optimizing waveforms in programmable catalysis.

https://doi.org/10.1021/acs.jpcc.4c01543

@article{tedesco-2024-cyclic-stead,
  author =       {Carolina Colombo Tedesco and John R. Kitchin and Carl D.
                  Laird},
  title =        {Cyclic Steady-State Simulation and Waveform Design for
                  Dynamic/programmable Catalysis},
  journal =      {The Journal of Physical Chemistry C},
  volume =       {nil},
  number =       {nil},
  pages =        {nil},
  year =         2024,
  doi =          {10.1021/acs.jpcc.4c01543},
  url =          {http://dx.doi.org/10.1021/acs.jpcc.4c01543},
  DATE_ADDED =   {Thu May 23 16:35:52 2024},
}


Kolmogorov-Arnold Networks (KANs) and Lennard Jones

| categories: uncategorized | tags:

KANs have been a hot topic of discussion recently (https://arxiv.org/abs/2404.19756). Here I explore using them as an alternative to a neural network for a simple atomistic potential using Lennard Jones data. I adapted this code from https://github.com/KindXiaoming/pykan/blob/master/hellokan.ipynb.

TL;DR It was easy to make the model, and it fit this simple data very well. It does not extrapolate in this example, and it is not obvious what the extrapolation behavior should be.

1. Create a dataset

We leverage the create_dataset function from pykan to generate the dataset here. I chose a range that includes the minimum and some modest nonlinearity.
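
For reference, the LJ function defined below is a reduced Lennard-Jones form,

E(r) = 1/r^12 - 1/r^6,

which is the standard 4*eps*[(sigma/r)^12 - (sigma/r)^6] with sigma = 1 and eps = 1/4. Its minimum sits at r = 2^(1/6) ≈ 1.12, inside the chosen range of [0.95, 2.0].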

import matplotlib.pyplot as plt
import torch
from kan import create_dataset, KAN

def LJ(r):
    # Lennard-Jones potential in reduced form: E(r) = 1/r**12 - 1/r**6
    r6 = r**6
    return 1 / r6**2 - 1 / r6

# 50 training points sampled on r in [0.95, 2.0]
dataset = create_dataset(LJ, n_var=1, ranges=[0.95, 2.0],
                         train_num=50)

plt.plot(dataset['train_input'], dataset['train_label'], 'b.')
plt.xlabel('r')
plt.ylabel('E');

2. Create and train the model

We start by making the model. We are going to model a Lennard-Jones potential with one input, the distance between two atoms, and one output, the energy. We use a single hidden layer with a width of 2 "neurons".

model = KAN(width=[1, 2, 1])

Training is easy. You can even run this cell several times.

model.train(dataset, opt="LBFGS", steps=20);

model.plot()

train loss: 1.64e-04 | test loss: 1.46e-02 | reg: 6.72e+00 : 100%|██| 20/20 [00:03<00:00,  5.61it/s]

We can see here that the fit looks very good.

X = torch.linspace(dataset['train_input'].min(),
                   dataset['train_input'].max(), 100)[:, None]

plt.plot(dataset['train_input'], dataset['train_label'], 'b.', label='data')

plt.plot(X, model(X).detach().numpy(), 'r-', label='fit')
plt.legend()
plt.xlabel('r')
plt.ylabel('E');
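
To put a number on that visual impression, here is a minimal check of the held-out error, assuming the dataset dict returned by create_dataset also contains 'test_input' and 'test_label' splits as in the pykan examples:

with torch.no_grad():
    # root-mean-square error of the KAN predictions on the test split
    pred = model(dataset['test_input'])
    rmse = torch.sqrt(torch.mean((pred - dataset['test_label'])**2))
print(f'test RMSE: {rmse.item():.2e}')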

KANs do not save us from extrapolation issues though. I think a downside of KANs is that it is not obvious what extrapolation behavior to expect. I guess it could be related to what happens in the spline representation of the functions; eventually those have to extrapolate too.

X = torch.linspace(0, 5, 1000)[:, None]
plt.plot(dataset['train_input'], dataset['train_label'], 'b.')
plt.plot(X, model(X).detach().numpy(), 'r-');
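
For reference, a small sketch that overlays the true LJ function on the same extended range makes the divergence outside the training window explicit. This reuses the LJ function and X defined above; the y-limits are clipped because the true potential blows up as r approaches 0.

plt.plot(dataset['train_input'], dataset['train_label'], 'b.', label='training data')
plt.plot(X, model(X).detach().numpy(), 'r-', label='KAN')
plt.plot(X, LJ(X), 'k--', label='true LJ')
plt.ylim(-0.5, 1)  # the true potential diverges as r -> 0
plt.legend()
plt.xlabel('r')
plt.ylabel('E');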

It is early days for KANs, so many things we know about MLPs are still unknown for KANs. For example, we know that MLPs extrapolate like their activation functions; probably there is a similar insight to be had here, but it needs to be uncovered. With MLPs there are many ways to regularize them for desired behavior; that is probably true here too, and will be discovered. Similarly, the many approaches people have taken to uncertainty quantification in MLPs probably have analogs in KANs. Still, the ease of use suggests KANs could be promising for some applications.
