Uncertainty in an integral equation

| categories: math, uncertainty | tags: |

In a previous example, we solved for the time to reach a specific conversion in a batch reactor. However, it is likely there is uncertainty in the rate constant, and possibly in the initial concentration. Here we examine the effects of that uncertainty on the time to reach the desired conversion.

To do this we have to write a function that takes arguments with uncertainty, and wrap the function with the uncertainties.wrap decorator. The function must return a single float number (current limitation of the uncertainties package). Then, we simply call the function, and the uncertainties from the inputs will be automatically propagated to the outputs. Let us say there is about 10% uncertainty in the rate constant, and 1% uncertainty in the initial concentration.

from scipy.integrate import quad
import uncertainties as u

k = u.ufloat((1.0e-3, 1.0e-4))  # rate constant; newer versions of uncertainties use u.ufloat(1.0e-3, 1.0e-4)
Ca0 = u.ufloat((1.0, 0.01))     # initial concentration, mol/L

@u.wrap
def func(k, Ca0):
    def integrand(X):
        return 1./(k*Ca0)*(1./(1-X)**2)
    integral, abserr = quad(integrand, 0, 0.9)
    return integral

sol = func(k, Ca0)
print 't = {0} seconds ({1} hours)'.format(sol, sol/3600)
t = 9000.0+/-904.488801332 seconds (2.5+/-0.251246889259 hours)

The result shows about a 10% uncertainty in the time, comparable to the largest uncertainty in the inputs. This information should certainly be used in deciding how long to actually run the reactor to be sure of reaching the goal. For example, in this case, running the reactor for 3 hours (roughly +2σ) would ensure with a high level of confidence (approximately 95%) that you reach at least 90% conversion.
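Since this particular integral has the closed form \(\int_0^X \frac{dX'}{(1-X')^2} = \frac{1}{1-X} - 1\), which equals 9 at \(X = 0.9\), we can sanity-check the propagation with plain ufloat arithmetic, no wrapper needed (a minimal sketch using the same inputs as above):

import uncertainties as u

k = u.ufloat((1.0e-3, 1.0e-4))
Ca0 = u.ufloat((1.0, 0.01))

# t = 9 / (k * Ca0); uncertainties propagates through ordinary arithmetic
t = 9.0 / (k * Ca0)
print t  # should match the wrapped quad result above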

Copyright (C) 2013 by John Kitchin. See the License for information about copying.


Is your ice cream float bigger than mine?

| categories: math | tags: |

Float numbers (i.e. the ones with decimals) cannot always be represented exactly in a computer. This can lead to artifacts when you compare float numbers that should be equal on paper, but are not in silico. Let us look at some examples. In this example, we do some simple math that should result in an answer of 1, and then check whether the answer is “equal” to one.

print 3.0 * (1.0/3.0) 
print 1.0 == 3.0 * (1.0/3.0)
1.0
True

Everything looks fine. Now, consider this example.

print 49.0 * (1.0/49.0) 
print 1.0 == 49.0 * (1.0/49.0)
1.0
False

The first line looks like everything is fine, but the equality fails!

To see why the equality statement fails, we will print the two numbers to sixteen decimal places.

print '{0:1.16f}'.format(49.0 * (1.0/49.0) )
print '{0:1.16f}'.format(1.0)
print 1 - 49.0 * (1.0/49.0)
0.9999999999999999
1.0000000000000000
1.11022302463e-16

The two numbers actually are not equal to each other because of float math. They are very, very close to each other, but not the same.

This leads to the idea of asking if two numbers are equal to each other within some tolerance. The question of what tolerance to use requires thought. Should it be an absolute tolerance? A relative tolerance? How large should the tolerance be? Here we will use the distance between 1 and the nearest floating point number (this is eps in Matlab). numpy can tell us this number with the np.spacing command.

Below, we implement a comparison function from doi:10.1107/S010876730302186X that allows comparisons with tolerance.

# Implemented from Acta Crystallographica A60, 1-6 (2003). doi:10.1107/S010876730302186X

import numpy as np
print np.spacing(1)

def feq(x, y, epsilon):
    'x == y'
    return not((x < (y - epsilon)) or (y < (x - epsilon)))

print feq(1.0, 49.0 * (1.0/49.0), np.spacing(1))
2.22044604925e-16
True

For completeness, here are the other float comparison operators from that paper. We also show a few examples.

import numpy as np

def flt(x, y, epsilon):
    'x < y'
    return x < (y - epsilon)

def fgt(x, y, epsilon):
    'x > y'
    return y < (x - epsilon)

def fle(x, y, epsilon):
    'x <= y'
    return not(y < (x - epsilon))

def fge(x, y, epsilon):
    'x >= y'
    return not(x < (y - epsilon))

print fge(1.0, 49.0 * (1.0/49.0), np.spacing(1))
print fle(1.0, 49.0 * (1.0/49.0), np.spacing(1))

print fgt(1.0 + np.spacing(1), 49.0 * (1.0/49.0), np.spacing(1))
print flt(1.0 - 2 * np.spacing(1), 49.0 * (1.0/49.0), np.spacing(1))
True
True
True
True

As you can see, float comparisons can be tricky. You have to give a lot of thought to how to make the comparisons, and the functions shown above are not the only way to do it. You need to build in testing to make sure your comparisons are doing what you want.
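For instance, numpy's allclose combines a relative and an absolute tolerance (a sketch; the tolerances shown are numpy's defaults, which may not suit your problem):

import numpy as np

# equivalent to testing |a - b| <= atol + rtol * |b|
print np.allclose(1.0, 49.0 * (1.0/49.0), rtol=1e-05, atol=1e-08)  # True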

Copyright (C) 2013 by John Kitchin. See the License for information about copying.


Numerical Simpsons rule

| categories: math, integration | tags: |

A more accurate numerical integration method than the trapezoid rule is Simpson's rule. The syntax is similar to np.trapz, but the method is in scipy.integrate.

import numpy as np
from scipy.integrate import simps, romb

a = 0.0; b = np.pi / 4.0
N = 10  # number of points (so N - 1 intervals)

x = np.linspace(a, b, N)
y = np.cos(x)

t = np.trapz(y, x)
s = simps(y, x)
exact = np.sin(b) - np.sin(a)  # analytical integral of cos(x)

print 'trapz = {0} ({1:%} error)'.format(t, (t - exact)/exact)
print 'simps = {0} ({1:%} error)'.format(s, (s - exact)/exact)
print 'analy = {0}'.format(exact)
trapz = 0.70665798038 (-0.063470% error)
simps = 0.707058914216 (-0.006769% error)
analy = 0.707106781187

You can see that Simpson's method is more accurate than the trapezoid method.
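We also imported romb above. Romberg integration requires 2**k + 1 equally spaced samples and takes the sample spacing dx rather than the x array; here is a minimal sketch under those constraints:

import numpy as np
from scipy.integrate import romb

a = 0.0; b = np.pi / 4.0
N = 2**4 + 1  # romb requires 2**k + 1 samples
x = np.linspace(a, b, N)
y = np.cos(x)

r = romb(y, dx=(b - a) / (N - 1))
print 'romb  = {0}'.format(r)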

Copyright (C) 2013 by John Kitchin. See the License for information about copying.


Symbolic math in python

| categories: math, symbolic | tags: |

Matlab post

Python has the capability to do symbolic math through the sympy package.

1 Solve the quadratic equation

from sympy import solve, symbols, pprint

a,b,c,x = symbols('a,b,c,x')

f = a*x**2 + b*x + c

solution = solve(f, x)
print solution
pprint(solution)
[(-b + (-4*a*c + b**2)**(1/2))/(2*a), -(b + (-4*a*c + b**2)**(1/2))/(2*a)]
         _____________   /       _____________\
        /           2    |      /           2 |
 -b + \/  -4*a*c + b    -\b + \/  -4*a*c + b  /
[---------------------, -----------------------]
          2*a                     2*a

You should recognize the solution in the form of \(\frac{-b \pm \sqrt{b^2 - 4 a c}}{2 a}\), although python does not print it this nicely!
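As a quick sanity check (a sketch reusing f, x, and solution from the block above), we can substitute each root back into the polynomial and simplify; both should reduce to zero:

from sympy import simplify

print [simplify(f.subs(x, s)) for s in solution]  # [0, 0]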

2 Differentiation

You might find this helpful!

from sympy import diff

print diff(f, x)
print diff(f, x, 2)

print diff(f, a)
2*a*x + b
2*a
x**2

3 Integration

from sympy import integrate

print integrate(f, x)          # indefinite integral
print integrate(f, (x, 0, 1))  # definite integral from x=0..1
a*x**3/3 + b*x**2/2 + c*x
a/3 + b/2 + c

4 Analytically solve a simple ODE

from sympy import Function, Symbol, dsolve
f = Function('f')
x = Symbol('x')
fprime = f(x).diff(x) - f(x) # f' = f(x)

y = dsolve(fprime, f(x))

print y
print y.subs(x,4)
print [y.subs(x, X) for X in [0, 0.5, 1]] # multiple values
f(x) == exp(C1 + x)
f(4) == exp(C1 + 4)
[f(0) == exp(C1), f(0.5) == exp(C1 + 0.5), f(1) == exp(C1 + 1)]

It is not clear you can solve the initial value problem to get C1.
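One workaround (a sketch; the initial condition \(f(0) = 2\) is made up for illustration) is to evaluate the general solution at the initial point, where it gives f(0) == exp(C1), and solve for C1 yourself:

from sympy import Symbol, exp, solve

C1 = Symbol('C1')
# with the hypothetical initial condition f(0) = 2, we need exp(C1) = 2
print solve(exp(C1) - 2, C1)  # [log(2)]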

The symbolic math in sympy is pretty good. It is not up to the capability of Maple or Mathematica (but neither is Matlab), but it continues to be developed, and could be helpful in some situations.

Copyright (C) 2013 by John Kitchin. See the License for information about copying.


Smooth transitions between two constants

| categories: math | tags: |

Suppose we have a parameter that has two different values depending on the value of a dimensionless number \(D\). For example, when \(D\) is much less than 1, \(x = 2/3\), and when \(D\) is much greater than 1, \(x = 1.5\). We desire a smooth transition from 2/3 to 1.5 as a function of \(D\) to avoid discontinuities in functions that depend on \(x\). We will adapt the smooth transitions between functions to be a smooth transition between constants.

We define our function as \(x(D) = x_0 + (x_1 - x_0)(1 - \sigma(D, w))\), where \(\sigma(D, w) = \frac{1}{1 + e^{-(1 - D)/w}}\) is a sigmoid that switches near \(D = 1\). We control the rate of the transition with the parameter \(w\).

import numpy as np
import matplotlib.pyplot as plt

x0 = 2.0 / 3.0
x1 = 1.5

w = 0.05

D = np.linspace(0,2, 500)

sigmaD = 1.0 / (1.0 + np.exp(-(1 - D) / w))

x =  x0 + (x1 - x0)*(1 - sigmaD)

plt.plot(D, x)
plt.xlabel('D'); plt.ylabel('x')
plt.savefig('images/smooth-transitions-constants.png')

This is a nice trick to get an analytical function with continuous derivatives for a transition between two constants. You could also have the transition occur at a value other than \(D = 1\) by changing the argument of the exponential function, as sketched below.
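For example (a minimal sketch reusing D, w, x0, and x1 from above, with a made-up center D0), replacing \(1 - D\) with \(D_0 - D\) moves the transition to \(D = D_0\):

D0 = 0.5  # hypothetical transition location
sigmaD = 1.0 / (1.0 + np.exp(-(D0 - D) / w))
x = x0 + (x1 - x0)*(1 - sigmaD)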

Copyright (C) 2013 by John Kitchin. See the License for information about copying.
