## A partial symbolic numeric solver in emacs-lisp


I have been exploring ways to use emacs-lisp to express scientific ideas. In this post, we explore a partial symbolic numeric solver in Emacs-lisp. This involves some syntactic developments to more clearly identify something we want to solve for and to then generate the code required to solve it.

In the section The Newton solver in emacs-lisp below, you can find a simple implementation of a Newton solver in emacs-lisp. This function allows you to numerically solve equations that can be written in the form $$f(x) = 0$$ for $$x$$, given an initial guess. You write a function for $$f(x)$$ and pass the function to the solver. This is a standard approach used in Python with fsolve, for example. Here is an example of solving the trivial problem $$x - 4 = 0$$, just to check that it works. We use a lambda function for $$f(x) = x - 4$$. The answer is $$x=4$$.

(newton-f (lambda (x) (- x 4)) 2)
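
For comparison, the fsolve approach mentioned above might look like this in Python (a sketch; note that fsolve returns an array of roots):

```python
from scipy.optimize import fsolve

# Solve x - 4 = 0 starting from the guess x = 2.
ans, = fsolve(lambda x: x - 4, 2)
print(ans)
```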


That syntax is not too bad, but we have the whole lambda expression in there, and some repetition of what we want to solve for as an argument and in the function. It would be interesting if we could just have an expression that gets solved, e.g. (newton-f (- x? 4) 2) where x? indicates the thing to solve for.

We can do that! We can take an expression, flatten it and find the variable names that end with ?. We should check that there is only one, but for now we don't. Here is an example that does that. I use a nested expression here just to illustrate that the code finds the x? variable correctly.

(require 'cl)    ;; for `loop'
(require 'dash)  ;; for `-flatten'
(require 's)     ;; for `s-ends-with?'

(let ((body '((* (- x? 4) 1))))
  (loop for item in (-flatten body)
        if (and (symbolp item) (s-ends-with? "?" (symbol-name item)))
        collect item))
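
The same flatten-and-filter idea can be sketched in Python, with nested tuples standing in for lisp expressions (find_unknowns is a hypothetical name, not part of the post):

```python
def find_unknowns(expr):
    """Recursively collect symbols (strings) that end with '?'."""
    found = []
    for item in expr:
        if isinstance(item, tuple):
            found.extend(find_unknowns(item))
        elif isinstance(item, str) and item.endswith('?'):
            found.append(item)
    return found

print(find_unknowns(('*', ('-', 'x?', 4), 1)))  # → ['x?']
```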


So, given an expression, we can identify the unknown that should be the argument to a lambda function. Next, we create a macro that takes the expression, constructs a function to solve it, and then calls newton-f on it. The macro is syntactically useful here because we do not have to quote the expression. Here is that macro.

(defmacro solve (expression guess)
  `(newton-f
    (lambda ,(loop for item in (-flatten expression)
                   if (and (symbolp item) (s-ends-with? "?" (symbol-name item)))
                   collect item)
      ,expression)
    ,guess))


I call this a partial symbolic solver because we do some introspection symbolically to identify what to solve for, and then construct the code required to solve it. Here is that trivial example (x? - 4 = 0). It just shows we can have some nesting and it still works. I am not so thrilled with the initial guess, but this is an iterative solver, so you either need an initial guess, or a solution range.

(solve (* (- x? 4) 1) 3)


Here is what that expands into:

(macroexpand '(solve (* (- x? 4) 1) 3))


It expands into what we would have written in the first place. The benefit to us is less typing, and a simpler syntax. Both of those reduce the opportunity to make errors!

A more realistic problem might be: Reactant A flows into a continuously stirred tank reactor at a rate of $$F_{A0} = 1$$ mol/min with a volumetric flow of $$v_0 = 1$$ L/min. The reactor achieves 50% conversion ($$X$$) of A to products. The reaction rate law is known to be $$-r_A = k C_A$$ with $$k = 0.1$$ 1/min. Estimate the volume of the reactor. If you have taken my class in reaction engineering, you know the following facts:

• The exit molar flow is defined by $$F_A = F_{A0} (1 - X)$$
• The exit concentration is $$C_A = F_A / v_0$$
• The mole balance is defined by $$0 = F_{A0} - F_A + r_A V$$

That is all we need; we can solve for $$V$$ from the last equation. This is simple enough you might do the algebra to get: $$V = \frac{F_{A0} - F_A}{-r_A}$$ which can be simply evaluated. We use our solver here and compare it to the evaluation.

Here is the solver:

(let* ((Fa0 1)
       (X 0.5)
       (Fa (* Fa0 (- 1 X)))
       (k 0.1)
       (v0 1)
       (Ca (/ Fa v0))
       (r (* k Ca))
       (ra (* r -1)))
  (solve (+ Fa0 (* Fa -1) (* ra V?)) 2))


It is pretty hard to imagine doing something like this in Python! It would probably involve parsing a string.

Here is the evaluation from our algebra:

(let* ((Fa0 1)
       (X 0.5)
       (Fa (* Fa0 (- 1 X)))
       (k 0.1)
       (v0 1)
       (Ca (/ Fa v0))
       (r (* k Ca))
       (ra (* r -1)))
  (/ (- Fa0 Fa) (* -1 ra)))


Within the tolerance specified in newton-f, these are the same.
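
As a cross-check, here are the same numbers pushed through scipy's fsolve in Python (a sketch, not part of the original post):

```python
from scipy.optimize import fsolve

Fa0 = 1.0           # inlet molar flow, mol/min
X = 0.5             # conversion
Fa = Fa0 * (1 - X)  # exit molar flow
k = 0.1             # rate constant, 1/min
v0 = 1.0            # volumetric flow, L/min
Ca = Fa / v0        # exit concentration
ra = -k * Ca        # rate of A in the reactor

# Mole balance: 0 = Fa0 - Fa + ra * V, solved for V
V, = fsolve(lambda V: Fa0 - Fa + ra * V, 2)
print(V)  # V ≈ 10 L
```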

This is just the tip of the iceberg though. You may have noticed that none of the variables in the let* had any descriptions. Sure, you could put some comments after them, but those are not really part of the code.

Also, we had to define the variables in advance of the expression. That is a limitation of how computers work: they cannot evaluate undefined variables. It constrains how we can express the idea. What if we could instead specify the equation first, then the data? That way we are clear about what we are trying to do at a higher level, and fill in the details later. Suppose we wanted a syntax like the block below instead. Here we emphasize the equation we are solving first, then define the variables and quantities used in the equation, and finally the guess that we use to find the solution.

(solve1
 (eqn (+ Fa0 (* -1 Fa) (* ra V?)))
 (data ((k 0.1 "rate constant 1/min")
        (Ca0 1.0 "feed concentration")
        (v0 1 "volumetric flow L/min")
        (Fa0 (* v0 Ca0) "Inlet molar flow")
        (X 0.5 "Desired conversion")
        (Fa (* Fa0 (- 1 X)) "Exit molar flow")
        (Ca (/ Fa v0) "exit concentration")
        (ra (* -1 k Ca) "rate in the reactor")))
 (guess 8))


That is achievable with the solve1 macro below! It too has some limitations; mostly, the order of the data block still has to be correct, i.e. you cannot use a variable before it is defined. It would take some serious macro-fu to sort these so that everything is defined in the right order! Still, it allows you to express an executable idea in the order we defined. The strings in this syntax for documenting the variables are ignored, but they could be used in the macro to print useful information or something else you could imagine. You could also make them mandatory to encourage documentation.

(defmacro solve1 (&rest body)
  (let ((expression (second (assoc 'eqn body)))
        (data (loop for d in (second (assoc 'data body))
                    collect (list (first d) (second d))))
        (guess (second (assoc 'guess body))))
    `(let* ,data
       (newton-f
        (lambda ,(loop for item in (-flatten expression)
                       if (and (symbolp item) (s-ends-with? "?" (symbol-name item)))
                       collect item)
          ,expression)
        ,guess))))
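
Python has no macros, but a rough functional analogue of solve1 can be sketched with an ordered data list and lambdas over an environment (all the names here are invented for illustration):

```python
from scipy.optimize import fsolve

def solve1(eqn, data, guess):
    """Evaluate (name, value-or-function, doc) triples in order,
    then solve eqn(env, unknown) = 0 for the unknown."""
    env = {}
    for name, expr, doc in data:
        env[name] = expr(env) if callable(expr) else expr
    unknown, = fsolve(lambda v: eqn(env, v), guess)
    return unknown

V = solve1(
    eqn=lambda e, V: e['Fa0'] - e['Fa'] + e['ra'] * V,
    data=[('k', 0.1, 'rate constant 1/min'),
          ('Ca0', 1.0, 'feed concentration'),
          ('v0', 1, 'volumetric flow L/min'),
          ('Fa0', lambda e: e['v0'] * e['Ca0'], 'inlet molar flow'),
          ('X', 0.5, 'desired conversion'),
          ('Fa', lambda e: e['Fa0'] * (1 - e['X']), 'exit molar flow'),
          ('Ca', lambda e: e['Fa'] / e['v0'], 'exit concentration'),
          ('ra', lambda e: -e['k'] * e['Ca'], 'rate in the reactor')],
    guess=8)
print(V)  # V ≈ 10
```

Unlike the macro, the documentation strings here survive as data and could be printed at runtime.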


To summarize, lisp macros allow us to rewrite the syntax of code before it is evaluated. This gives us the opportunity to inspect it, and to generate new code, e.g. functions with arguments based on the contents of expressions, to save us typing. It also allows us to define ideas in a nearly arbitrary order that makes sense to us, and then rearrange them so they make sense to the computer. See this post for an example of changing how functions are defined.

This seems to be heading in the domain specific language direction. I think it would be very helpful in engineering problem solving to build up tools like this. They could start out simple for new students to use. They never need to see the macro parts of this, just to learn how to use them for problem solving. These beginner tools would be limited in what they could do to minimize how much lisp is required to be learned so students can focus on the problem solving. Eventually they might outgrow them, and in the process transition to having the full lisp language at their disposal for problem solving.

## 1 The Newton solver in emacs-lisp

This is an emacs-lisp implementation of Newton's method. It is a simple implementation for a single variable. The tolerance and step-size are hard-coded for this post because we focus on the partial symbolic solver, not the best solver methods.

;; See https://en.wikipedia.org/wiki/Newton%27s_method for the method

(defun newton-f (func x0)
  "Solve the equation FUNC(x)=0 using Newton's method.
X0 is an initial guess."
  (let* ((tolerance 1e-6)
         (x x0)
         (dx 1e-6)
         fx fpx)
    (while (> (abs (funcall func x)) tolerance)
      (setq fx (funcall func x)
            fpx (/ (- (funcall func (+ x dx)) (funcall func (- x dx))) (* 2 dx))
            x (- x (/ fx fpx))))
    x))
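
For readers following along in Python, here is a near line-for-line translation of the same solver (same hard-coded tolerance and centered-difference derivative; a sketch, no iteration limit):

```python
def newton_f(func, x0):
    """Solve func(x) = 0 by Newton's method from the guess x0,
    estimating the derivative with a centered difference."""
    tolerance = 1e-6
    dx = 1e-6
    x = x0
    while abs(func(x)) > tolerance:
        fx = func(x)
        fpx = (func(x + dx) - func(x - dx)) / (2 * dx)
        x = x - fx / fpx
    return x

print(newton_f(lambda x: x - 4, 2))  # ≈ 4
```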


org-mode source

Org-mode version = 9.0.5

## Uncertainty in an integral equation


In a previous example, we solved for the time to reach a specific conversion in a batch reactor. However, it is likely there is uncertainty in the rate constant, and possibly in the initial concentration. Here we examine the effects of that uncertainty on the time to reach the desired conversion.

To do this we have to write a function that takes arguments with uncertainty, and wrap the function with the uncertainties.wrap decorator. The function must return a single float number (current limitation of the uncertainties package). Then, we simply call the function, and the uncertainties from the inputs will be automatically propagated to the outputs. Let us say there is about 10% uncertainty in the rate constant, and 1% uncertainty in the initial concentration.

from scipy.integrate import quad
import uncertainties as u

k = u.ufloat((1.0e-3, 1.0e-4))
Ca0 = u.ufloat((1.0, 0.01))  # mol/L

@u.wrap
def func(k, Ca0):
    def integrand(X):
        return 1. / (k * Ca0) * (1. / (1 - X)**2)
    integral, abserr = quad(integrand, 0, 0.9)
    return integral

sol = func(k, Ca0)
print 't = {0} seconds ({1} hours)'.format(sol, sol / 3600)

t = 9000.0+/-904.488801332 seconds (2.5+/-0.251246889259 hours)


The result shows about a 10% uncertainty in the time, which is similar to the largest uncertainty in the inputs. This information should certainly be used in making decisions about how long to actually run the reactor to be sure of reaching the goal. For example, in this case, running the reactor for 3 hours (that is roughly + 2σ) would ensure at a high level of confidence (approximately 95% confidence) that you reach at least 90% conversion.
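
A quick Monte Carlo sanity check of the propagated uncertainty (a sketch, not part of the original post; it assumes k and Ca0 are independent normal random variables):

```python
import numpy as np
from scipy.integrate import quad

np.random.seed(0)
ks = np.random.normal(1.0e-3, 1.0e-4, 2000)
Ca0s = np.random.normal(1.0, 0.01, 2000)

def batch_time(k, Ca0):
    # time to reach X = 0.9 in the batch reactor
    integral, abserr = quad(lambda X: 1.0 / (k * Ca0) * 1.0 / (1 - X)**2, 0, 0.9)
    return integral

ts = np.array([batch_time(k, Ca0) for k, Ca0 in zip(ks, Ca0s)])
print(ts.mean(), ts.std())  # roughly 9000 +/- 900 s
```

The sampled spread agrees with the linear error propagation from the uncertainties package to within the Monte Carlo noise.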


## Is your ice cream float bigger than mine

| categories: math | tags:

Floating point numbers (i.e. the ones with decimals) cannot be perfectly represented in a computer. This can lead to some artifacts when you have to compare float numbers that on paper should be the same, but in silico are not. Let us look at some examples. In this example, we do some simple math that should result in an answer of 1, and then see if the answer is “equal” to one.

print 3.0 * (1.0/3.0)
print 1.0 == 3.0 * (1.0/3.0)

1.0
True


Everything looks fine. Now, consider this example.

print 49.0 * (1.0/49.0)
print 1.0 == 49.0 * (1.0/49.0)

1.0
False


The first line looks like everything is fine, but the equality fails!


You can see here why the equality statement fails. We will print the two numbers to sixteen decimal places.

print '{0:1.16f}'.format(49.0 * (1.0/49.0) )
print '{0:1.16f}'.format(1.0)
print 1 - 49.0 * (1.0/49.0)

0.9999999999999999
1.0000000000000000
1.11022302463e-16


The two numbers actually are not equal to each other because of float math. They are very, very close to each other, but not the same.

This leads to the idea of asking if two numbers are equal to each other within some tolerance. The question of what tolerance to use requires thought. Should it be an absolute tolerance? a relative tolerance? How large should the tolerance be? We will use the distance between 1 and the nearest floating point number (this is eps in Matlab). numpy can tell us this number with the np.spacing command.

Below, we implement a comparison function from doi:10.1107/S010876730302186X that allows comparisons with tolerance.

# Implemented from Acta Crystallographica A60, 1-6 (2003). doi:10.1107/S010876730302186X

import numpy as np
print np.spacing(1)

def feq(x, y, epsilon):
    'x == y'
    return not((x < (y - epsilon)) or (y < (x - epsilon)))

print feq(1.0, 49.0 * (1.0/49.0), np.spacing(1))

2.22044604925e-16
True


For completeness, here are the other float comparison operators from that paper. We also show a few examples.

import numpy as np

def flt(x, y, epsilon):
    'x < y'
    return x < (y - epsilon)

def fgt(x, y, epsilon):
    'x > y'
    return y < (x - epsilon)

def fle(x, y, epsilon):
    'x <= y'
    return not(y < (x - epsilon))

def fge(x, y, epsilon):
    'x >= y'
    return not(x < (y - epsilon))

print fge(1.0, 49.0 * (1.0/49.0), np.spacing(1))
print fle(1.0, 49.0 * (1.0/49.0), np.spacing(1))

print fgt(1.0 + np.spacing(1), 49.0 * (1.0/49.0), np.spacing(1))
print flt(1.0 - 2 * np.spacing(1), 49.0 * (1.0/49.0), np.spacing(1))

True
True
True
True


As you can see, float comparisons can be tricky. You have to give a lot of thought to how to make the comparisons, and the functions shown above are not the only way to do it. You need to build in testing to make sure your comparisons are doing what you want.
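
Newer tools ship similar comparisons worth knowing about: math.isclose arrived in Python 3.5 (after this post's Python 2 examples), and numpy provides np.isclose. A brief example:

```python
import math
import numpy as np

x = 49.0 * (1.0 / 49.0)
print(math.isclose(x, 1.0))  # relative tolerance by default → True
print(np.isclose(x, 1.0))    # relative + absolute tolerance → True
```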


## Numerical Simpson's rule


A more accurate numerical integration than the trapezoid method is Simpson's rule. The syntax is similar to trapz, but the method is in scipy.integrate.

import numpy as np
from scipy.integrate import simps, romb

a = 0.0; b = np.pi / 4.0;
N = 10  # number of sample points (N - 1 intervals)

x = np.linspace(a, b, N)
y = np.cos(x)

t = np.trapz(y, x)
s = simps(y, x)
a = np.sin(b) - np.sin(a)

print
print 'trapz = {0} ({1:%} error)'.format(t, (t - a)/a)
print 'simps = {0} ({1:%} error)'.format(s, (s - a)/a)
print 'analy = {0}'.format(a)

trapz = 0.70665798038 (-0.063470% error)
simps = 0.707058914216 (-0.006769% error)
analy = 0.707106781187


You can see that Simpson's method is more accurate than the trapezoid method.
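
romb is imported above but never used; it works on samples too, but requires 2**k + 1 equally spaced points. A sketch of calling it on the same integral:

```python
import numpy as np
from scipy.integrate import romb

lo, hi = 0.0, np.pi / 4.0
N = 2**4 + 1  # romb requires 2**k + 1 samples
x = np.linspace(lo, hi, N)
y = np.cos(x)

# romb takes the sample spacing, not the x array
r = romb(y, dx=(hi - lo) / (N - 1))
print(r)  # very close to sin(pi/4) ≈ 0.70710678
```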


## Symbolic math in python


Matlab post. Python has the capability to do symbolic math through the sympy package.

## 1 Solve the quadratic equation

from sympy import solve, symbols, pprint

a,b,c,x = symbols('a,b,c,x')

f = a*x**2 + b*x + c

solution = solve(f, x)
print solution
pprint(solution)

[(-b + (-4*a*c + b**2)**(1/2))/(2*a), -(b + (-4*a*c + b**2)**(1/2))/(2*a)]
         _____________     /       _____________\
        /           2      |      /           2 |
 -b + \/  -4*a*c + b      -\b + \/  -4*a*c + b  /
[---------------------, -------------------------]
          2*a                       2*a


The solution you should recognize in the form of $$x = \frac{-b \pm \sqrt{b^2 - 4 a c}}{2 a}$$ although Python does not print it this nicely!

## 2 differentiation

from sympy import diff

print diff(f, x)
print diff(f, x, 2)

print diff(f, a)

2*a*x + b
2*a
x**2


## 3 integration

from sympy import integrate

print integrate(f, x)          # indefinite integral
print integrate(f, (x, 0, 1))  # definite integral from x=0..1

a*x**3/3 + b*x**2/2 + c*x
a/3 + b/2 + c


## 4 Analytically solve a simple ODE

from sympy import Function, Symbol, dsolve
f = Function('f')
x = Symbol('x')
fprime = f(x).diff(x) - f(x) # f' = f(x)

y = dsolve(fprime, f(x))

print y
print y.subs(x,4)
print [y.subs(x, X) for X in [0, 0.5, 1]] # multiple values

f(x) == exp(C1 + x)
f(4) == exp(C1 + 4)
[f(0) == exp(C1), f(0.5) == exp(C1 + 0.5), f(1) == exp(C1 + 1)]


It is not clear you can solve the initial value problem to get C1.
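
Newer versions of sympy (released after this post was written) can apply the initial condition directly through dsolve's ics argument; a sketch:

```python
from sympy import Function, Symbol, dsolve, exp

f = Function('f')
x = Symbol('x')

# Solve f' = f with the initial condition f(0) = 1.
sol = dsolve(f(x).diff(x) - f(x), f(x), ics={f(0): 1})
print(sol)  # Eq(f(x), exp(x))
```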

The symbolic math in sympy is pretty good. It is not up to the capability of Maple or Mathematica (but neither is Matlab), but it continues to be developed, and could be helpful in some situations.