<?xml version="1.0" encoding="UTF-8"?>

<rss version="2.0"
     xmlns:content="http://purl.org/rss/1.0/modules/content/"
     xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
     xmlns:atom="http://www.w3.org/2005/Atom"
     xmlns:dc="http://purl.org/dc/elements/1.1/"
     xmlns:wfw="http://wellformedweb.org/CommentAPI/"
     >
  <channel>
    <atom:link href="http://kitchingroup.cheme.cmu.edu/blog/feed/index.xml" rel="self" type="application/rss+xml" />
    <title>The Kitchin Research Group</title>
    <link>https://kitchingroup.cheme.cmu.edu/blog</link>
    <description>Chemical Engineering at Carnegie Mellon University</description>
    <pubDate>Sat, 01 Nov 2025 13:47:46 GMT</pubDate>
    <generator>Blogofile</generator>
    <sy:updatePeriod>hourly</sy:updatePeriod>
    <sy:updateFrequency>1</sy:updateFrequency>
    
    <item>
      <title>Using autograd for error propagation</title>
      <link>https://kitchingroup.cheme.cmu.edu/blog/2018/11/05/Using-autograd-for-error-propagation</link>
      <pubDate>Mon, 05 Nov 2018 21:04:21 EST</pubDate>
      <category><![CDATA[autograd]]></category>
      <category><![CDATA[uncertainty]]></category>
      <guid isPermaLink="false">VOnvqoFwCueTJTkZl1jY2hDGjqY=</guid>
      <description>Using autograd for error propagation</description>
      <content:encoded><![CDATA[


&lt;p&gt;
Back in &lt;a href="http://kitchingroup.cheme.cmu.edu/blog/2013/03/07/Another-approach-to-error-propagation/"&gt;2013&lt;/a&gt; I wrote about using the uncertainties package to propagate uncertainties. The problem setup was for finding the uncertainty in the exit concentration from a CSTR when there are uncertainties in the other parameters. In this problem we were given this information about the parameters and their uncertainties.
&lt;/p&gt;

&lt;table border="2" cellspacing="0" cellpadding="6" rules="groups" frame="hsides"&gt;


&lt;colgroup&gt;
&lt;col  class="org-left" /&gt;

&lt;col  class="org-right" /&gt;

&lt;col  class="org-right" /&gt;
&lt;/colgroup&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th scope="col" class="org-left"&gt;Parameter&lt;/th&gt;
&lt;th scope="col" class="org-right"&gt;value&lt;/th&gt;
&lt;th scope="col" class="org-right"&gt;&amp;sigma;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td class="org-left"&gt;Fa0&lt;/td&gt;
&lt;td class="org-right"&gt;5&lt;/td&gt;
&lt;td class="org-right"&gt;0.05&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td class="org-left"&gt;v0&lt;/td&gt;
&lt;td class="org-right"&gt;10&lt;/td&gt;
&lt;td class="org-right"&gt;0.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td class="org-left"&gt;V&lt;/td&gt;
&lt;td class="org-right"&gt;66000&lt;/td&gt;
&lt;td class="org-right"&gt;100&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td class="org-left"&gt;k&lt;/td&gt;
&lt;td class="org-right"&gt;3&lt;/td&gt;
&lt;td class="org-right"&gt;0.2&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;
The exit concentration is found by solving this equation:
&lt;/p&gt;

&lt;p&gt;
\(0 = Fa0 - v0 \cdot Ca - k \cdot Ca^2 \cdot V\)
&lt;/p&gt;

&lt;p&gt;
So the question was what is Ca, and what is the uncertainty on it? Finding Ca is easy with fsolve.
&lt;/p&gt;

&lt;div class="org-src-container"&gt;
&lt;pre class="src src-ipython"&gt;&lt;span style="color: #0000FF;"&gt;from&lt;/span&gt; scipy.optimize &lt;span style="color: #0000FF;"&gt;import&lt;/span&gt; fsolve

&lt;span style="color: #BA36A5;"&gt;Fa0&lt;/span&gt; = 5.0
&lt;span style="color: #BA36A5;"&gt;v0&lt;/span&gt; = 10.0

&lt;span style="color: #BA36A5;"&gt;V&lt;/span&gt; = 66000.0
&lt;span style="color: #BA36A5;"&gt;k&lt;/span&gt; = 3.0

&lt;span style="color: #0000FF;"&gt;def&lt;/span&gt; &lt;span style="color: #006699;"&gt;func&lt;/span&gt;(Ca, v0, k, Fa0, V):
&lt;span style="color: #9B9B9B; background-color: #EDEDED;"&gt; &lt;/span&gt;   &lt;span style="color: #036A07;"&gt;"Mole balance for a CSTR. Solve this equation for func(Ca)=0"&lt;/span&gt;
&lt;span style="color: #9B9B9B; background-color: #EDEDED;"&gt; &lt;/span&gt;   &lt;span style="color: #BA36A5;"&gt;Fa&lt;/span&gt; = v0 * Ca     &lt;span style="color: #8D8D84;"&gt;# &lt;/span&gt;&lt;span style="color: #8D8D84; font-style: italic;"&gt;exit molar flow of A&lt;/span&gt;
&lt;span style="color: #9B9B9B; background-color: #EDEDED;"&gt; &lt;/span&gt;   &lt;span style="color: #BA36A5;"&gt;ra&lt;/span&gt; = -k * Ca**2  &lt;span style="color: #8D8D84;"&gt;# &lt;/span&gt;&lt;span style="color: #8D8D84; font-style: italic;"&gt;rate of reaction of A L/mol/h&lt;/span&gt;
&lt;span style="color: #9B9B9B; background-color: #EDEDED;"&gt; &lt;/span&gt;   &lt;span style="color: #0000FF;"&gt;return&lt;/span&gt; Fa0 - Fa + V * ra

Ca, = fsolve(func, 0.1 * Fa0 / v0, args=(v0, k, Fa0, V))
Ca
&lt;/pre&gt;
&lt;/div&gt;

&lt;pre class="example"&gt;
0.0050000000000000001

&lt;/pre&gt;

&lt;p&gt;
The uncertainty on Ca is a little trickier. A &lt;a href="https://en.wikipedia.org/wiki/Propagation_of_uncertainty#Simplification"&gt;simplified&lt;/a&gt; way to estimate it is:
&lt;/p&gt;

&lt;p&gt;
\(\sigma_{Ca} = \sqrt{(dCa/dv0)^2 \sigma_{v0}^2 + (dCa/dk)^2 \sigma_{k}^2 + (dCa/dFa0)^2 \sigma_{Fa0}^2 + (dCa/dV)^2 \sigma_{V}^2}\)
&lt;/p&gt;

&lt;p&gt;
We know the &amp;sigma;_i for each input; we just need those partial derivatives. However, we only have the implicit function we used to solve for Ca, and I do not want to do the algebra to solve for Ca explicitly. Luckily, we &lt;a href="http://kitchingroup.cheme.cmu.edu/blog/2018/10/08/Getting-derivatives-from-implicit-functions-with-autograd/"&gt;previously worked out&lt;/a&gt; how to get these derivatives from an implicit function using autograd. We just need to loop through the arguments, get the relevant derivatives, and accumulate the products of the squared derivatives and squared errors. Finally, we take the square root of that sum.
&lt;/p&gt;

&lt;div class="org-src-container"&gt;
&lt;pre class="src src-ipython"&gt;&lt;span style="color: #0000FF;"&gt;import&lt;/span&gt; autograd.numpy &lt;span style="color: #0000FF;"&gt;as&lt;/span&gt; np
&lt;span style="color: #0000FF;"&gt;from&lt;/span&gt; autograd &lt;span style="color: #0000FF;"&gt;import&lt;/span&gt; grad

&lt;span style="color: #8D8D84;"&gt;# &lt;/span&gt;&lt;span style="color: #8D8D84; font-style: italic;"&gt;these are the uncertainties on the inputs&lt;/span&gt;
&lt;span style="color: #BA36A5;"&gt;s&lt;/span&gt; = [&lt;span style="color: #D0372D;"&gt;None&lt;/span&gt;, 0.1, 0.2, 0.05, 100]

&lt;span style="color: #BA36A5;"&gt;S2&lt;/span&gt; = 0.0

&lt;span style="color: #BA36A5;"&gt;dfdCa&lt;/span&gt; = grad(func, 0)
&lt;span style="color: #0000FF;"&gt;for&lt;/span&gt; i &lt;span style="color: #0000FF;"&gt;in&lt;/span&gt; &lt;span style="color: #006FE0;"&gt;range&lt;/span&gt;(1, 5):
&lt;span style="color: #9B9B9B; background-color: #EDEDED;"&gt; &lt;/span&gt;   &lt;span style="color: #BA36A5;"&gt;dfdarg2&lt;/span&gt; = grad(func, i)
&lt;span style="color: #9B9B9B; background-color: #EDEDED;"&gt; &lt;/span&gt;   &lt;span style="color: #BA36A5;"&gt;dCadarg2&lt;/span&gt; = -dfdarg2(Ca, v0, k, Fa0, V) / dfdCa(Ca, v0, k, Fa0, V)
&lt;span style="color: #9B9B9B; background-color: #EDEDED;"&gt; &lt;/span&gt;   &lt;span style="color: #BA36A5;"&gt;S2&lt;/span&gt; += dCadarg2**2 * s[i]**2

&lt;span style="color: #BA36A5;"&gt;Ca_s&lt;/span&gt; = np.sqrt(S2)
&lt;span style="color: #0000FF;"&gt;print&lt;/span&gt;(f&lt;span style="color: #008000;"&gt;'Ca = {Ca:1.5f} +\- {Ca_s}'&lt;/span&gt;)
&lt;/pre&gt;
&lt;/div&gt;

&lt;pre class="example"&gt;
Ca = 0.00500 +\- 0.00016776432898276802


&lt;/pre&gt;

&lt;p&gt;
That is the same uncertainty estimate the uncertainties package provided. One benefit here is that I did not have to do the somewhat complicated wrapping procedure around fsolve that was required with uncertainties. On the other hand, I did have to derive the formula and implement it myself. It worked fine here, since we have an implicit function and a way to get the required derivatives. It could take some work to do this with the exit concentration of a PFR, which requires an integrator. Maybe that &lt;a href="http://kitchingroup.cheme.cmu.edu/blog/2018/10/11/A-differentiable-ODE-integrator-for-sensitivity-analysis/"&gt;differentiable integrator&lt;/a&gt; will come in handy again!
&lt;/p&gt;
&lt;p&gt;Copyright (C) 2018 by John Kitchin. See the &lt;a href="/copying.html"&gt;License&lt;/a&gt; for information about copying.&lt;/p&gt;
&lt;p&gt;&lt;a href="/org/2018/11/05/Using-autograd-for-error-propagation.org"&gt;org-mode source&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Org-mode version = 9.1.14&lt;/p&gt;]]></content:encoded>
    </item>
    <item>
      <title>Visualizing uncertainty in linear regression</title>
      <link>https://kitchingroup.cheme.cmu.edu/blog/2013/07/18/Visualizing-uncertainty-in-linear-regression</link>
      <pubDate>Thu, 18 Jul 2013 19:13:40 EDT</pubDate>
      <category><![CDATA[data analysis]]></category>
      <category><![CDATA[uncertainty]]></category>
      <guid isPermaLink="false">6et-kvuDQR-6PXnXSkJyua0xEhc=</guid>
      <description>Visualizing uncertainty in linear regression</description>
      <content:encoded><![CDATA[




&lt;p&gt;
In this example, we show how to visualize uncertainty in a fit. The idea is to fit a model to &lt;a href="http://www.itl.nist.gov/div898/handbook/pmd/section4/pmd44.htm"&gt;data&lt;/a&gt; and get the uncertainty in the model parameters. Then we sample the parameters according to the normal distribution and plot the corresponding distribution of models. We use transparent lines and let the overlap indicate the density of the fits.
&lt;/p&gt;

&lt;p&gt;
The data is stored in a text file, PT.txt, with the following structure:
&lt;/p&gt;

&lt;pre class="example"&gt;
Run          Ambient                            Fitted
 Order  Day  Temperature  Temperature  Pressure    Value    Residual
  1      1      23.820      54.749      225.066   222.920     2.146
...
&lt;/pre&gt;

&lt;p&gt;
We need to read the data in and perform a regression analysis of P vs. T. In Python we start counting at 0, so we actually want columns 3 and 4.
&lt;/p&gt;

&lt;div class="org-src-container"&gt;

&lt;pre class="src src-python"&gt;&lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; numpy &lt;span style="color: #8b0000;"&gt;as&lt;/span&gt; np
&lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; matplotlib.pyplot &lt;span style="color: #8b0000;"&gt;as&lt;/span&gt; plt
&lt;span style="color: #8b0000;"&gt;from&lt;/span&gt; pycse &lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; regress

data = np.loadtxt(&lt;span style="color: #228b22;"&gt;'../../pycse/data/PT.txt'&lt;/span&gt;, skiprows=2)
T = data[:, 3]
P = data[:, 4]

A = np.column_stack([T**0, T])

p, pint, se = regress(A, P, 0.05)

&lt;span style="color: #8b0000;"&gt;print&lt;/span&gt; p, pint, se
plt.plot(T, P, &lt;span style="color: #228b22;"&gt;'k.'&lt;/span&gt;)
plt.plot(T, np.dot(A, p))

&lt;span style="color: #ff0000; font-weight: bold;"&gt;# Now we plot the distribution of possible lines&lt;/span&gt;
N = 2000
B = np.random.normal(p[0], se[0], N)
M = np.random.normal(p[1], se[1], N)
x = np.array([&lt;span style="color: #8b0000;"&gt;min&lt;/span&gt;(T), &lt;span style="color: #8b0000;"&gt;max&lt;/span&gt;(T)])

&lt;span style="color: #8b0000;"&gt;for&lt;/span&gt; b,m &lt;span style="color: #8b0000;"&gt;in&lt;/span&gt; &lt;span style="color: #8b0000;"&gt;zip&lt;/span&gt;(B, M):
    plt.plot(x, m*x + b, &lt;span style="color: #228b22;"&gt;'-'&lt;/span&gt;, color=&lt;span style="color: #228b22;"&gt;'gray'&lt;/span&gt;, alpha=0.02)
plt.savefig(&lt;span style="color: #228b22;"&gt;'images/plotting-uncertainty.png'&lt;/span&gt;)
&lt;/pre&gt;
&lt;/div&gt;

&lt;pre class="example"&gt;
[ 7.74899739  3.93014044] [[  2.97964903  12.51834576]
 [  3.82740876   4.03287211]] [ 2.35384765  0.05070183]
&lt;/pre&gt;

&lt;p&gt;&lt;img src="/img/./images/plotting-uncertainty.png"&gt;&lt;/p&gt;

&lt;p&gt;
Here you can see 2000 different lines that each have some probability of being correct. The darkest gray is near the fit, as expected; the darker the gray, the more probable the line. This is a qualitative way of judging the quality of the fit.
&lt;/p&gt;

&lt;p&gt;
Note that this is not the prediction error we are plotting; the prediction error is the uncertainty in where a predicted y-value lies.
&lt;/p&gt;
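&lt;p&gt;
For completeness, here is a minimal sketch of how one could estimate that prediction uncertainty for a linear fit, using the textbook formula for the standard error of a new observation. The data below is made up to stand in for PT.txt; the numbers are assumptions, not the data from this post.
&lt;/p&gt;

```python
import numpy as np

# hypothetical (T, P) data standing in for columns 3 and 4 of PT.txt
T = np.array([20.0, 25.0, 30.0, 35.0, 40.0, 45.0])
P = np.array([100.2, 119.8, 140.1, 159.7, 180.3, 199.9])

n = len(T)
A = np.column_stack([T**0, T])
p, res, rank, sv = np.linalg.lstsq(A, P, rcond=None)

Phat = A @ p
s2 = np.sum((P - Phat)**2) / (n - 2)  # residual variance

# standard error of a *prediction* at a new x (not of the fitted mean)
x = 32.0
se_pred = np.sqrt(s2 * (1 + 1/n
                        + (x - T.mean())**2 / np.sum((T - T.mean())**2)))
print(p[0] + p[1] * x, se_pred)
```

The `(1 + ...)` term is what distinguishes the prediction interval from the narrower confidence interval on the fitted line itself.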
&lt;p&gt;Copyright (C) 2013 by John Kitchin. See the &lt;a href="/copying.html"&gt;License&lt;/a&gt; for information about copying.&lt;/p&gt;&lt;p&gt;&lt;a href="/org/2013/07/18/Visualizing-uncertainty-in-linear-regression.org"&gt;org-mode source&lt;/a&gt;&lt;/p&gt;]]></content:encoded>
    </item>
    <item>
      <title>Uncertainty in the solution of an ODE</title>
      <link>https://kitchingroup.cheme.cmu.edu/blog/2013/07/14/Uncertainty-in-the-solution-of-an-ODE</link>
      <pubDate>Sun, 14 Jul 2013 13:36:36 EDT</pubDate>
      <category><![CDATA[uncertainty]]></category>
      <category><![CDATA[ode]]></category>
      <guid isPermaLink="false">vgxLM1eNdDWFxYoYzKL_cdL_bP8=</guid>
      <description>Uncertainty in the solution of an ODE</description>
      <content:encoded><![CDATA[



&lt;p&gt;
Our objective in this post is to examine the effects of uncertainty in parameters that define an ODE on the integrated solution of the ODE. My favorite method for numerical uncertainty analysis is Monte Carlo simulation because it is easy to code and usually easy to understand. We take that approach first.
&lt;/p&gt;

&lt;p&gt;
The problem to solve is to estimate the conversion in a constant volume batch reactor with a second order reaction \(A \rightarrow B\), and the rate law: \(-r_A = k C_A^2\), after one hour of reaction. There is 5% uncertainty in the rate constant \(k=0.001\) and in the initial concentration \(C_{A0}=1\). 
&lt;/p&gt;

&lt;p&gt;
The relevant differential equation is:
&lt;/p&gt;

&lt;p&gt;
\(\frac{dX}{dt} = -r_A /C_{A0}\).
&lt;/p&gt;

&lt;p&gt;
We have to assume that 5% uncertainty refers to a normal distribution of error that has a standard deviation of 5% of the mean value. 
&lt;/p&gt;

&lt;div class="org-src-container"&gt;

&lt;pre class="src src-python"&gt;&lt;span style="color: #8b0000;"&gt;from&lt;/span&gt; scipy.integrate &lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; odeint
&lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; numpy &lt;span style="color: #8b0000;"&gt;as&lt;/span&gt; np

&lt;span style="color: #8b008b;"&gt;N&lt;/span&gt; = 1000

&lt;span style="color: #8b008b;"&gt;K&lt;/span&gt; = np.random.normal(0.001, 0.05*0.001, N)
&lt;span style="color: #8b008b;"&gt;CA0&lt;/span&gt; = np.random.normal(1, 0.05*1, N)

&lt;span style="color: #8b008b;"&gt;X&lt;/span&gt; = [] &lt;span style="color: #ff0000; font-weight: bold;"&gt;# &lt;/span&gt;&lt;span style="color: #ff0000; font-weight: bold;"&gt;to store answer in&lt;/span&gt;
&lt;span style="color: #8b0000;"&gt;for&lt;/span&gt; k, Ca0 &lt;span style="color: #8b0000;"&gt;in&lt;/span&gt; &lt;span style="color: #cd0000;"&gt;zip&lt;/span&gt;(K, CA0):
    &lt;span style="color: #ff0000; font-weight: bold;"&gt;# &lt;/span&gt;&lt;span style="color: #ff0000; font-weight: bold;"&gt;define ODE&lt;/span&gt;
    &lt;span style="color: #8b0000;"&gt;def&lt;/span&gt; &lt;span style="color: #8b2323;"&gt;ode&lt;/span&gt;(X, t):
        &lt;span style="color: #8b008b;"&gt;ra&lt;/span&gt; = -k * (Ca0 * (1 - X))**2
        &lt;span style="color: #8b0000;"&gt;return&lt;/span&gt; -ra / Ca0
    
    &lt;span style="color: #8b008b;"&gt;X0&lt;/span&gt; = 0
    &lt;span style="color: #8b008b;"&gt;tspan&lt;/span&gt; = np.linspace(0,3600)

    &lt;span style="color: #8b008b;"&gt;sol&lt;/span&gt; = odeint(ode, X0, tspan)

    &lt;span style="color: #8b008b;"&gt;X&lt;/span&gt; += [sol[-1][0]]

&lt;span style="color: #8b008b;"&gt;s&lt;/span&gt; = &lt;span style="color: #228b22;"&gt;'Final conversion at one hour is {0:1.3f} +- {1:1.3f} (1 sigma)'&lt;/span&gt;
&lt;span style="color: #8b0000;"&gt;print&lt;/span&gt; s.&lt;span style="color: #cd0000;"&gt;format&lt;/span&gt;(np.average(X),
               np.std(X))
&lt;/pre&gt;
&lt;/div&gt;

&lt;pre class="example"&gt;
Final conversion at one hour is 0.782 +- 0.013 (1 sigma)
&lt;/pre&gt;

&lt;p&gt;
See, it is not too difficult to write. It is, however, a little expensive to run, since we typically need 1e3-1e6 samples to get reasonable statistics. Let us try the uncertainties package too. For this we have to wrap a function that takes uncertainties and returns a single float number.
&lt;/p&gt;

&lt;div class="org-src-container"&gt;

&lt;pre class="src src-python"&gt;&lt;span style="color: #8b0000;"&gt;from&lt;/span&gt; scipy.integrate &lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; odeint
&lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; numpy &lt;span style="color: #8b0000;"&gt;as&lt;/span&gt; np
&lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; uncertainties &lt;span style="color: #8b0000;"&gt;as&lt;/span&gt; u

&lt;span style="color: #8b008b;"&gt;k&lt;/span&gt; = u.ufloat(0.001, 0.05*0.001)
&lt;span style="color: #8b008b;"&gt;Ca0&lt;/span&gt; = u.ufloat(1.0, 0.05)

&lt;span style="color: #4682b4;"&gt;@u.wrap&lt;/span&gt;
&lt;span style="color: #8b0000;"&gt;def&lt;/span&gt; &lt;span style="color: #8b2323;"&gt;func&lt;/span&gt;(k, Ca0):
    &lt;span style="color: #ff0000; font-weight: bold;"&gt;# &lt;/span&gt;&lt;span style="color: #ff0000; font-weight: bold;"&gt;define the ODE&lt;/span&gt;
    &lt;span style="color: #8b0000;"&gt;def&lt;/span&gt; &lt;span style="color: #8b2323;"&gt;ode&lt;/span&gt;(X, t):
        &lt;span style="color: #8b008b;"&gt;ra&lt;/span&gt; = -k * (Ca0 * (1 - X))**2
        &lt;span style="color: #8b0000;"&gt;return&lt;/span&gt; -ra / Ca0
    
    &lt;span style="color: #8b008b;"&gt;X0&lt;/span&gt; = 0 &lt;span style="color: #ff0000; font-weight: bold;"&gt;# &lt;/span&gt;&lt;span style="color: #ff0000; font-weight: bold;"&gt;initial condition&lt;/span&gt;
    &lt;span style="color: #8b008b;"&gt;tspan&lt;/span&gt; = np.linspace(0, 3600)
    &lt;span style="color: #ff0000; font-weight: bold;"&gt;# &lt;/span&gt;&lt;span style="color: #ff0000; font-weight: bold;"&gt;integrate it&lt;/span&gt;
    &lt;span style="color: #8b008b;"&gt;sol&lt;/span&gt; = odeint(ode, X0, tspan)
    &lt;span style="color: #8b0000;"&gt;return&lt;/span&gt; sol[-1][0]

&lt;span style="color: #8b008b;"&gt;result&lt;/span&gt; = func(k, Ca0)
&lt;span style="color: #8b008b;"&gt;s&lt;/span&gt; = &lt;span style="color: #228b22;"&gt;'Final conversion at one hour is {0}(1 sigma)'&lt;/span&gt;
&lt;span style="color: #8b0000;"&gt;print&lt;/span&gt; s.&lt;span style="color: #cd0000;"&gt;format&lt;/span&gt;(result)
&lt;/pre&gt;
&lt;/div&gt;

&lt;pre class="example"&gt;
Final conversion at one hour is 0.783+/-0.012(1 sigma)
&lt;/pre&gt;

&lt;p&gt;
This is about the same amount of code as the Monte Carlo approach, but it runs much faster and gets approximately the same results. You do have to remember the wrapping technique, since the uncertainties package does not work natively with the odeint function.
&lt;/p&gt;
&lt;p&gt;Copyright (C) 2013 by John Kitchin. See the &lt;a href="/copying.html"&gt;License&lt;/a&gt; for information about copying.&lt;/p&gt;&lt;p&gt;&lt;a href="/org/2013/07/14/Uncertainty-in-the-solution-of-an-ODE.org"&gt;org-mode source&lt;/a&gt;&lt;/p&gt;]]></content:encoded>
    </item>
    <item>
      <title>Uncertainty in an integral equation</title>
      <link>https://kitchingroup.cheme.cmu.edu/blog/2013/07/10/Uncertainty-in-an-integral-equation</link>
      <pubDate>Wed, 10 Jul 2013 09:05:02 EDT</pubDate>
      <category><![CDATA[math]]></category>
      <category><![CDATA[uncertainty]]></category>
      <guid isPermaLink="false">-K18eqwvCJAIOIRhVfWk9J0Zf08=</guid>
      <description>Uncertainty in an integral equation</description>
      <content:encoded><![CDATA[



&lt;p&gt;
In a &lt;a href="http://jkitchin.github.io/blog/2013/01/06/Integrating-a-batch-reactor-design-equation/"&gt;previous example&lt;/a&gt;, we solved for the time to reach a specific conversion in a batch reactor. However, it is likely there is uncertainty in the rate constant, and possibly in the initial concentration. Here we examine the effects of that uncertainty on the time to reach the desired conversion.
&lt;/p&gt;

&lt;p&gt;
To do this we have to write a function that takes arguments with uncertainty, and wrap the function with the uncertainties.wrap decorator. The function must return a single float number (current limitation of the uncertainties package). Then, we simply call the function, and the uncertainties from the inputs will be automatically propagated to the outputs. Let us say there is about 10% uncertainty in the rate constant, and 1% uncertainty in the initial concentration.
&lt;/p&gt;

&lt;div class="org-src-container"&gt;

&lt;pre class="src src-python"&gt;&lt;span style="color: #8b0000;"&gt;from&lt;/span&gt; scipy.integrate &lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; quad
&lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; uncertainties &lt;span style="color: #8b0000;"&gt;as&lt;/span&gt; u

k = u.ufloat((1.0e-3, 1.0e-4))
Ca0 = u.ufloat((1.0, 0.01))&lt;span style="color: #ff0000; font-weight: bold;"&gt;# &lt;/span&gt;&lt;span style="color: #ff0000; font-weight: bold;"&gt;mol/L&lt;/span&gt;

@u.wrap
&lt;span style="color: #8b0000;"&gt;def&lt;/span&gt; &lt;span style="color: #8b2323;"&gt;func&lt;/span&gt;(k, Ca0):
    &lt;span style="color: #8b0000;"&gt;def&lt;/span&gt; &lt;span style="color: #8b2323;"&gt;integrand&lt;/span&gt;(X):
        &lt;span style="color: #8b0000;"&gt;return&lt;/span&gt; 1./(k*Ca0)*(1./(1-X)**2)
    integral, abserr = quad(integrand, 0, 0.9)
    &lt;span style="color: #8b0000;"&gt;return&lt;/span&gt; integral

sol = func(k, Ca0)
&lt;span style="color: #8b0000;"&gt;print&lt;/span&gt; &lt;span style="color: #228b22;"&gt;'t = {0} seconds ({1} hours)'&lt;/span&gt;.format(sol, sol/3600)
&lt;/pre&gt;
&lt;/div&gt;

&lt;pre class="example"&gt;
t = 9000.0+/-904.488801332 seconds (2.5+/-0.251246889259 hours)
&lt;/pre&gt;

&lt;p&gt;
The result shows about a 10% uncertainty in the time, which is similar to the largest uncertainty in the inputs.  This information should certainly be used in making decisions about how long to actually run the reactor to be sure of reaching the goal. For example, in this case, running the reactor for 3 hours (that is roughly + 2&amp;sigma;) would ensure at a high level of confidence (approximately 95% confidence) that you reach at least 90% conversion.  
&lt;/p&gt;
&lt;p&gt;Copyright (C) 2013 by John Kitchin. See the &lt;a href="/copying.html"&gt;License&lt;/a&gt; for information about copying.&lt;/p&gt;&lt;p&gt;&lt;a href="/org/2013/07/10/Uncertainty-in-an-integral-equation.org"&gt;org-mode source&lt;/a&gt;&lt;/p&gt;]]></content:encoded>
    </item>
    <item>
      <title>Uncertainty in polynomial roots - Part II</title>
      <link>https://kitchingroup.cheme.cmu.edu/blog/2013/07/06/Uncertainty-in-polynomial-roots-Part-II</link>
      <pubDate>Sat, 06 Jul 2013 15:31:38 EDT</pubDate>
      <category><![CDATA[data analysis]]></category>
      <category><![CDATA[uncertainty]]></category>
      <guid isPermaLink="false">McvWDyZQgz4sfhRBKgJxaDZOLjA=</guid>
      <description>Uncertainty in polynomial roots - Part II</description>
      <content:encoded><![CDATA[


&lt;p&gt;
We previously looked at uncertainty in polynomial roots where we had an analytical formula for the roots of the polynomial, and we knew the uncertainties in the polynomial parameters. It would be inconvenient to try this for a cubic polynomial, although formulas for its roots do exist. There is also a general formula for the roots of a 4&lt;sup&gt;th&lt;/sup&gt; order polynomial, but none for 5&lt;sup&gt;th&lt;/sup&gt; order or higher.
&lt;/p&gt;

&lt;p&gt;
Unfortunately, we cannot use the uncertainties package out of the box directly here.
&lt;/p&gt;

&lt;div class="org-src-container"&gt;

&lt;pre class="src src-python"&gt;&lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; uncertainties &lt;span style="color: #8b0000;"&gt;as&lt;/span&gt; u
&lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; numpy &lt;span style="color: #8b0000;"&gt;as&lt;/span&gt; np
c, b, a = [-0.99526746, -0.011546,    1.00188999]
sc, sb, sa = [ 0.0249142,   0.00860025,  0.00510128]

A = u.ufloat((a, sa))
B = u.ufloat((b, sb))
C = u.ufloat((c, sc))

&lt;span style="color: #8b0000;"&gt;print&lt;/span&gt; np.roots([A, B, C])
&lt;/pre&gt;
&lt;/div&gt;

&lt;pre class="example"&gt;
Traceback (most recent call last):
  File "&amp;lt;stdin&amp;gt;", line 1, in &amp;lt;module&amp;gt;
  File "c:\Users\jkitchin\AppData\Local\Enthought\Canopy\User\lib\site-packages\numpy\lib\polynomial.py", line 218, in roots
    p = p.astype(float)
  File "c:\Users\jkitchin\AppData\Local\Enthought\Canopy\User\lib\site-packages\uncertainties\__init__.py", line 1257, in raise_error
    % (self.__class__, coercion_type))
TypeError: can't convert an affine function (&amp;lt;class 'uncertainties.Variable'&amp;gt;) to float; use x.nominal_value
&lt;/pre&gt;

&lt;p&gt;
To make some progress, we have to understand how the &lt;a href="https://github.com/numpy/numpy/blob/v1.7.0/numpy/lib/polynomial.py#L149"&gt;numpy.roots&lt;/a&gt; function works. It constructs a &lt;a href="http://en.wikipedia.org/wiki/Companion_matrix"&gt;Companion matrix&lt;/a&gt;, and the eigenvalues of that matrix are the same as the roots of the polynomial.  
&lt;/p&gt;

&lt;div class="org-src-container"&gt;

&lt;pre class="src src-python"&gt;&lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; numpy &lt;span style="color: #8b0000;"&gt;as&lt;/span&gt; np

c0, c1, c2 = [-0.99526746, -0.011546,    1.00188999]

p = np.array([c2, c1, c0])
N = &lt;span style="color: #8b0000;"&gt;len&lt;/span&gt;(p)

&lt;span style="color: #ff0000; font-weight: bold;"&gt;# we construct the companion matrix like this&lt;/span&gt;
&lt;span style="color: #ff0000; font-weight: bold;"&gt;# see https://github.com/numpy/numpy/blob/v1.7.0/numpy/lib/polynomial.py#L220&lt;/span&gt;
&lt;span style="color: #ff0000; font-weight: bold;"&gt;# for this code.&lt;/span&gt;
&lt;span style="color: #ff0000; font-weight: bold;"&gt;# build companion matrix and find its eigenvalues (the roots)&lt;/span&gt;
A = np.diag(np.ones((N-2,), p.dtype), -1)
A[0, :] = -p[1:] / p[0]

&lt;span style="color: #8b0000;"&gt;print&lt;/span&gt; A

roots = np.linalg.eigvals(A)
&lt;span style="color: #8b0000;"&gt;print&lt;/span&gt; roots
&lt;/pre&gt;
&lt;/div&gt;

&lt;pre class="example"&gt;
[[ 0.01152422  0.99338996]
 [ 1.          0.        ]]
[ 1.00246827 -0.99094405]
&lt;/pre&gt;

&lt;p&gt;
This definition of the companion matrix is a little different than the one &lt;a href="http://en.wikipedia.org/wiki/Companion_matrix"&gt;here&lt;/a&gt;, but primarily in the scaling of the coefficients. That does not seem to change the eigenvalues, or the roots. 
&lt;/p&gt;
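&lt;p&gt;
As a quick sanity check on that claim, one can compare the eigenvalues of the companion matrix built above with what numpy.roots returns directly for the same coefficients (a minimal sketch):
&lt;/p&gt;

```python
import numpy as np

p = np.array([1.00188999, -0.011546, -0.99526746])

# companion matrix, built the same way numpy.roots does internally
N = len(p)
A = np.diag(np.ones((N - 2,), p.dtype), -1)
A[0, :] = -p[1:] / p[0]

r1 = np.sort(np.linalg.eigvals(A))
r2 = np.sort(np.roots(p))
print(np.allclose(r1, r2))  # expected: True
```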

&lt;p&gt;
Now, we have a path to estimate the uncertainty in the roots. Since we know the polynomial coefficients and their uncertainties from the fit, we can use Monte Carlo sampling to estimate the uncertainty in the roots. 
&lt;/p&gt;

&lt;div class="org-src-container"&gt;

&lt;pre class="src src-python"&gt;&lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; numpy &lt;span style="color: #8b0000;"&gt;as&lt;/span&gt; np
&lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; uncertainties &lt;span style="color: #8b0000;"&gt;as&lt;/span&gt; u

c, b, a = [-0.99526746, -0.011546,    1.00188999]
sc, sb, sa = [ 0.0249142,   0.00860025,  0.00510128]

NSAMPLES = 100000
A = np.random.normal(a, sa, (NSAMPLES, ))
B = np.random.normal(b, sb, (NSAMPLES, ))
C = np.random.normal(c, sc, (NSAMPLES, ))

roots = [[] &lt;span style="color: #8b0000;"&gt;for&lt;/span&gt; i &lt;span style="color: #8b0000;"&gt;in&lt;/span&gt; &lt;span style="color: #8b0000;"&gt;range&lt;/span&gt;(NSAMPLES)]

&lt;span style="color: #8b0000;"&gt;for&lt;/span&gt; i &lt;span style="color: #8b0000;"&gt;in&lt;/span&gt; &lt;span style="color: #8b0000;"&gt;range&lt;/span&gt;(NSAMPLES):
    p = np.array([A[i], B[i], C[i]])
    N = &lt;span style="color: #8b0000;"&gt;len&lt;/span&gt;(p)
    
    M = np.diag(np.ones((N-2,), p.dtype), -1)
    M[0, :] = -p[1:] / p[0]
    r = np.linalg.eigvals(M)
    r.sort()  &lt;span style="color: #ff0000; font-weight: bold;"&gt;# there is no telling what order the values come out in&lt;/span&gt;
    roots[i] = r
    
avg = np.average(roots, axis=0)
std = np.std(roots, axis=0)

&lt;span style="color: #8b0000;"&gt;for&lt;/span&gt; r, s &lt;span style="color: #8b0000;"&gt;in&lt;/span&gt; &lt;span style="color: #8b0000;"&gt;zip&lt;/span&gt;(avg, std):
    &lt;span style="color: #8b0000;"&gt;print&lt;/span&gt; &lt;span style="color: #228b22;"&gt;'{0: f} +/- {1: f}'&lt;/span&gt;.format(r, s)
&lt;/pre&gt;
&lt;/div&gt;

&lt;pre class="example"&gt;
-0.990949 +/-  0.013435
 1.002443 +/-  0.013462
&lt;/pre&gt;

&lt;p&gt;
Compared to our previous approach with the uncertainties package where we got:
&lt;/p&gt;

&lt;pre class="example"&gt;
: -0.990944048037+/-0.0134208013339
:  1.00246826738 +/-0.0134477390832
&lt;/pre&gt;

&lt;p&gt;
the agreement is quite good! The advantage of this approach is that we do not have to know formulas for the roots of higher order polynomials to estimate the uncertainty in the roots. The downside is that we have to evaluate the eigenvalues of a matrix a large number of times to get good estimates of the uncertainty. For high order polynomials this could be expensive. I do not currently see a way around this, unless it becomes possible to get the uncertainties package to propagate through the numpy.linalg.eigvals function.
&lt;/p&gt;

&lt;p&gt;
There are some other potential problems with this approach. It assumes that the accuracy of the eigenvalue solver is much better than the uncertainty in the polynomial parameters. You also have to use some judgment in interpreting these uncertainties: we are approximating the uncertainties of a nonlinear problem, i.e., the uncertainties of the roots do not depend linearly on the uncertainties of the polynomial coefficients.
&lt;/p&gt;

&lt;p&gt;
It is possible to &lt;a href="http://pythonhosted.org/uncertainties/user_guide.html#making-custom-functions-accept-numbers-with-uncertainties"&gt;wrap&lt;/a&gt; some functions with uncertainties, but so far only functions that return a single number. Here is an example of getting the n&lt;sup&gt;th&lt;/sup&gt; root and its uncertainty.
&lt;/p&gt;

&lt;div class="org-src-container"&gt;

&lt;pre class="src src-python"&gt;&lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; uncertainties &lt;span style="color: #8b0000;"&gt;as&lt;/span&gt; u
&lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; numpy &lt;span style="color: #8b0000;"&gt;as&lt;/span&gt; np

@u.wrap
&lt;span style="color: #8b0000;"&gt;def&lt;/span&gt; &lt;span style="color: #8b2323;"&gt;f&lt;/span&gt;(n=0, *P):
    &lt;span style="color: #228b22;"&gt;''' compute the nth root of the polynomial P and the uncertainty of the root'''&lt;/span&gt;
    p =  np.array(P)
    N = &lt;span style="color: #8b0000;"&gt;len&lt;/span&gt;(p)
    
    M = np.diag(np.ones((N-2,), p.dtype), -1)
    M[0, :] = -p[1:] / p[0]
    r = np.linalg.eigvals(M)
    r.sort()  &lt;span style="color: #ff0000; font-weight: bold;"&gt;# there is no telling what order the values come out in&lt;/span&gt;
    &lt;span style="color: #8b0000;"&gt;return&lt;/span&gt; r[n]

&lt;span style="color: #ff0000; font-weight: bold;"&gt;# our polynomial coefficients and standard errors&lt;/span&gt;
c, b, a = [-0.99526746, -0.011546,    1.00188999]
sc, sb, sa = [ 0.0249142,   0.00860025,  0.00510128]

A = u.ufloat((a, sa))
B = u.ufloat((b, sb))
C = u.ufloat((c, sc))

&lt;span style="color: #8b0000;"&gt;for&lt;/span&gt; result &lt;span style="color: #8b0000;"&gt;in&lt;/span&gt; [f(n, A, B, C) &lt;span style="color: #8b0000;"&gt;for&lt;/span&gt; n &lt;span style="color: #8b0000;"&gt;in&lt;/span&gt; [0, 1]]:
    &lt;span style="color: #8b0000;"&gt;print&lt;/span&gt; result
&lt;/pre&gt;
&lt;/div&gt;

&lt;pre class="example"&gt;
-0.990944048037+/-0.013420800377
1.00246826738+/-0.0134477388218
&lt;/pre&gt;

&lt;p&gt;
It is good to see this is the same result we got earlier, with &lt;i&gt;a lot less work&lt;/i&gt; (although we do have to solve it for each root, which is a bit redundant)! It is a bit more abstract though, and requires a specific formulation of the function for the wrapper to work.
&lt;/p&gt;
&lt;p&gt;Copyright (C) 2013 by John Kitchin. See the &lt;a href="/copying.html"&gt;License&lt;/a&gt; for information about copying.&lt;/p&gt;&lt;p&gt;&lt;a href="/org/2013/07/06/Uncertainty-in-polynomial-roots---Part-II.org"&gt;org-mode source&lt;/a&gt;&lt;/p&gt;]]></content:encoded>
    </item>
    <item>
      <title>Uncertainty in polynomial roots</title>
      <link>https://kitchingroup.cheme.cmu.edu/blog/2013/07/05/Uncertainty-in-polynomial-roots</link>
      <pubDate>Fri, 05 Jul 2013 09:10:09 EDT</pubDate>
      <category><![CDATA[data analysis]]></category>
      <category><![CDATA[uncertainty]]></category>
      <guid isPermaLink="false">6-Z6PdLBsxMl0CJ9rnMQKQEamfI=</guid>
      <description>Uncertainty in polynomial roots</description>
      <content:encoded><![CDATA[



&lt;p&gt;
Polynomials are convenient for fitting to data. Frequently we need to derive some properties of the data from the fit, e.g. the minimum value, or the slope, etc&amp;#x2026; Since we are fitting data, there is uncertainty in the polynomial parameters, and corresponding uncertainty in any properties derived from those parameters. 
&lt;/p&gt;

&lt;p&gt;
Here is our data.
&lt;/p&gt;

&lt;table id="data" border="2" cellspacing="0" cellpadding="6" rules="groups" frame="hsides"&gt;


&lt;colgroup&gt;
&lt;col class="right"/&gt;

&lt;col class="right"/&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td class="right"&gt;-3.00&lt;/td&gt;
&lt;td class="right"&gt;8.10&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td class="right"&gt;-2.33&lt;/td&gt;
&lt;td class="right"&gt;4.49&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td class="right"&gt;-1.67&lt;/td&gt;
&lt;td class="right"&gt;1.73&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td class="right"&gt;-1.00&lt;/td&gt;
&lt;td class="right"&gt;-0.02&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td class="right"&gt;-0.33&lt;/td&gt;
&lt;td class="right"&gt;-0.90&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td class="right"&gt;0.33&lt;/td&gt;
&lt;td class="right"&gt;-0.83&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td class="right"&gt;1.00&lt;/td&gt;
&lt;td class="right"&gt;0.04&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td class="right"&gt;1.67&lt;/td&gt;
&lt;td class="right"&gt;1.78&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td class="right"&gt;2.33&lt;/td&gt;
&lt;td class="right"&gt;4.43&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td class="right"&gt;3.00&lt;/td&gt;
&lt;td class="right"&gt;7.95&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
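&lt;p&gt;
For readers running these snippets outside org-mode: the code below refers to the table as a Python variable named data (in the org-mode source it is passed in automatically), so it can be entered directly as a list of (x, y) pairs:
&lt;/p&gt;

```python
# The x, y table above, as the list of pairs the snippets below call `data`.
data = [(-3.00, 8.10), (-2.33, 4.49), (-1.67, 1.73), (-1.00, -0.02),
        (-0.33, -0.90), (0.33, -0.83), (1.00, 0.04), (1.67, 1.78),
        (2.33, 4.43), (3.00, 7.95)]
```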

&lt;div class="org-src-container"&gt;

&lt;pre class="src src-python"&gt;&lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; matplotlib.pyplot &lt;span style="color: #8b0000;"&gt;as&lt;/span&gt; plt

&lt;span style="color: #ff0000; font-weight: bold;"&gt;# data is the (x, y) table above, passed in from the org-mode source&lt;/span&gt;
x = [a[0] &lt;span style="color: #8b0000;"&gt;for&lt;/span&gt; a &lt;span style="color: #8b0000;"&gt;in&lt;/span&gt; data]
y = [a[1] &lt;span style="color: #8b0000;"&gt;for&lt;/span&gt; a &lt;span style="color: #8b0000;"&gt;in&lt;/span&gt; data]
plt.plot(x, y)
plt.savefig(&lt;span style="color: #228b22;"&gt;'images/uncertain-roots.png'&lt;/span&gt;)
&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;img src="/img/./images/uncertain-roots.png"&gt;&lt;/p&gt;

&lt;div class="org-src-container"&gt;

&lt;pre class="src src-python"&gt;&lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; matplotlib.pyplot &lt;span style="color: #8b0000;"&gt;as&lt;/span&gt; plt
&lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; numpy &lt;span style="color: #8b0000;"&gt;as&lt;/span&gt; np
&lt;span style="color: #8b0000;"&gt;from&lt;/span&gt; pycse &lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; regress

x = np.array([a[0] &lt;span style="color: #8b0000;"&gt;for&lt;/span&gt; a &lt;span style="color: #8b0000;"&gt;in&lt;/span&gt; data])
y = [a[1] &lt;span style="color: #8b0000;"&gt;for&lt;/span&gt; a &lt;span style="color: #8b0000;"&gt;in&lt;/span&gt; data]

A = np.column_stack([x**0, x**1, x**2])

p, pint, se = regress(A, y, alpha=0.05)

&lt;span style="color: #8b0000;"&gt;print&lt;/span&gt; p

&lt;span style="color: #8b0000;"&gt;print&lt;/span&gt; pint

&lt;span style="color: #8b0000;"&gt;print&lt;/span&gt; se

plt.plot(x, y, &lt;span style="color: #228b22;"&gt;'bo'&lt;/span&gt;)

xfit = np.linspace(x.min(), x.max())
plt.plot(xfit, np.dot(np.column_stack([xfit**0, xfit**1, xfit**2]), p), &lt;span style="color: #228b22;"&gt;'b-'&lt;/span&gt;)
plt.savefig(&lt;span style="color: #228b22;"&gt;'images/uncertain-roots-1.png'&lt;/span&gt;)
&lt;/pre&gt;
&lt;/div&gt;

&lt;pre class="example"&gt;
[-0.99526746 -0.011546    1.00188999]
[[-1.05418017 -0.93635474]
 [-0.03188236  0.00879037]
 [ 0.98982737  1.01395261]]
[ 0.0249142   0.00860025  0.00510128]
&lt;/pre&gt;

&lt;p&gt;&lt;img src="/img/./images/uncertain-roots-1.png"&gt;&lt;/p&gt;

&lt;p&gt;
Since this is a quadratic equation, we know the roots analytically: \(x = \frac{-b \pm \sqrt{b^2 - 4 a c}}{2 a}\). So, we can use the uncertainties package to directly compute the uncertainties in the roots.
&lt;/p&gt;

&lt;div class="org-src-container"&gt;

&lt;pre class="src src-python"&gt;&lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; numpy &lt;span style="color: #8b0000;"&gt;as&lt;/span&gt; np
&lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; uncertainties &lt;span style="color: #8b0000;"&gt;as&lt;/span&gt; u

c, b, a = [-0.99526746, -0.011546,    1.00188999]
sc, sb, sa = [ 0.0249142,   0.00860025,  0.00510128]

A = u.ufloat((a, sa))
B = u.ufloat((b, sb))
C = u.ufloat((c, sc))

&lt;span style="color: #ff0000; font-weight: bold;"&gt;# np.sqrt does not work with uncertainty, so use **0.5 instead&lt;/span&gt;
r1 = (-B + (B**2 - 4 * A * C)**0.5) / (2 * A)
r2 = (-B - (B**2 - 4 * A * C)**0.5) / (2 * A)

&lt;span style="color: #8b0000;"&gt;print&lt;/span&gt; r1
&lt;span style="color: #8b0000;"&gt;print&lt;/span&gt; r2
&lt;/pre&gt;
&lt;/div&gt;

&lt;pre class="example"&gt;
1.00246826738+/-0.0134477390832
-0.990944048037+/-0.0134208013339
&lt;/pre&gt;

&lt;p&gt;
The minimum is also straightforward to analyze here. The derivative of the polynomial is \(2 a x + b\) and it is equal to zero at \(x = -b / (2 a)\).
&lt;/p&gt;

&lt;div class="org-src-container"&gt;

&lt;pre class="src src-python"&gt;&lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; numpy &lt;span style="color: #8b0000;"&gt;as&lt;/span&gt; np
&lt;span style="color: #8b0000;"&gt;import&lt;/span&gt; uncertainties &lt;span style="color: #8b0000;"&gt;as&lt;/span&gt; u

c, b, a = [-0.99526746, -0.011546,    1.00188999]
sc, sb, sa = [ 0.0249142,   0.00860025,  0.00510128]

A = u.ufloat((a, sa))
B = u.ufloat((b, sb))

zero = -B / (2 * A)
&lt;span style="color: #8b0000;"&gt;print&lt;/span&gt; &lt;span style="color: #228b22;"&gt;'The minimum is at {0}.'&lt;/span&gt;.format(zero)
&lt;/pre&gt;
&lt;/div&gt;

&lt;pre class="example"&gt;
The minimum is at 0.00576210967034+/-0.00429211341136.
&lt;/pre&gt;

&lt;p&gt;
You can see there is uncertainty in both the roots of the original polynomial, as well as the minimum of the data. The approach here worked well because the polynomials were low order (quadratic or linear) where we know the formulas for the roots. Consequently, we can take advantage of the uncertainties module with little effort to propagate the errors. For higher order polynomials, we would probably have to do some wrapping of functions to propagate uncertainties.
&lt;/p&gt;
&lt;p&gt;Copyright (C) 2013 by John Kitchin. See the &lt;a href="/copying.html"&gt;License&lt;/a&gt; for information about copying.&lt;/p&gt;&lt;p&gt;&lt;a href="/org/2013/07/05/Uncertainty-in-polynomial-roots.org"&gt;org-mode source&lt;/a&gt;&lt;/p&gt;]]></content:encoded>
    </item>
  </channel>
</rss>
