DMelt:Units/3 Error Propagation

From HandWiki


Error propagation

Propagation of uncertainty (or propagation of error) is the effect of variables' uncertainties (or errors, more specifically random errors) on the uncertainty of a function based on them. Error (or uncertainty) propagation for arbitrary (analytic and non-analytic) functions is supported using several methods:

  • Using a Taylor expansion
  • Using a Monte Carlo simulation

Error propagation using P1D

A typical measurement carries an uncertainty. A convenient way to store data with errors is to use the jhplot.P1D data container.

from jhplot import *
p=P1D("data with errors")
p.add(1, 2, 0.5)  # error on Y=2 is 0.5
p.add(2, 5, 0.9)  # error on Y=5 is 0.9
When you use the oper() method of this class, the errors are automatically propagated. This applies to addition, subtraction, multiplication and division.
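For independent errors these are the standard rules; for addition, the errors combine in quadrature. A minimal pure-Python sketch of this rule (an illustration of the arithmetic, not the DataMelt implementation):

```python
import math

def add_with_error(y1, e1, y2, e2):
    """Add two measurements with independent errors;
    the errors combine in quadrature."""
    return y1 + y2, math.sqrt(e1 ** 2 + e2 ** 2)

# the two points from the P1D example above
y, err = add_with_error(2, 0.5, 5, 0.9)
print(y, err)  # 7, sqrt(0.5^2 + 0.9^2) ~ 1.03
```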

Error propagation with the uncertainties package

DataMelt includes the Python package uncertainties, so you can program with it as usual. Run this example in the DataMelt IDE:

from uncertainties import ufloat
from uncertainties.umath import *  # sin(), etc.
x = ufloat(1, 0.1)  # x = 1+/-0.1
print sin(x)        # the error is propagated automatically

This approach calculates errors using a first-order Taylor expansion. The package can also differentiate a function.
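The idea behind the Taylor method can be sketched in plain Python: for a function of one variable, the propagated error is |f'(x)| * sigma_x. The helper below is a hypothetical sketch of that principle (not part of the uncertainties package); it estimates the derivative numerically:

```python
import math

def propagate(f, x, sx, h=1e-6):
    """First-order Taylor (linear) error propagation for one variable:
    sigma_y = |f'(x)| * sigma_x, with f'(x) taken as a central difference."""
    deriv = (f(x + h) - f(x - h)) / (2.0 * h)
    return f(x), abs(deriv) * sx

# same input as the ufloat example above: x = 1 +- 0.1
val, err = propagate(math.sin, 1.0, 0.1)
print(val, err)  # err ~ |cos(1)| * 0.1 ~ 0.054
```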

Error propagation for complex functions

In this section we describe error propagation for an arbitrary transformation using the Java classes.

Error propagation for an arbitrary transformation can be done with the class jhplot.math.ValueErr. The approach is based on a Taylor expansion (linearization) and still has some limitations. Here is a simple example:

from jhplot.math import *
x=ValueErr(10,2)
ylog=x.log10(x)  # log10 and associated error
print ylog.toString()
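This linearization is easy to check by hand: for y = log10(x), the propagated error is sigma_x / (x ln 10). A plain-Python check of the expected numbers (assuming ValueErr applies the standard linearization):

```python
import math

# linearized error of y = log10(x) for x = 10 +- 2:
# sigma_y = sigma_x / (x * ln 10)
x, sx = 10.0, 2.0
y = math.log10(x)
sy = sx / (x * math.log(10.0))
print(y, sy)  # 1.0 +- ~0.087
```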

Uncertainty propagation using a Monte Carlo method

Complex functions are difficult to deal with using the above approach. DataMelt includes Java classes for a Monte Carlo approach which:

  • avoids any errors associated with the linearization of the model
  • produces a distribution for the uncertain output as well as the mean and standard deviation.


The propagation-of-errors formula requires the computation of derivatives, which can be quite complicated for larger models. The Monte Carlo approach can handle correlated as well as independent parameters.
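The principle can be sketched without any DataMelt classes: draw the parameters from their (here independent) Gaussian distributions, evaluate the model for each draw, and take the mean and standard deviation of the outputs. A toy illustration (not the UPropogate implementation):

```python
import math
import random

def mc_propagate(f, means, sigmas, n=20000, seed=1):
    """Monte Carlo uncertainty propagation for independent Gaussian
    parameters: sample the parameters, evaluate the model for each
    sample, and take the mean and spread of the outputs."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        pars = [rng.gauss(m, s) for m, s in zip(means, sigmas)]
        out.append(f(pars))
    mean = sum(out) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in out) / (n - 1))
    return mean, std

# toy model: f(p) = p0 * p1 with p0 = 10 +- 1 and p1 = 5 +- 0.5;
# linearization predicts an error of sqrt((1*5)^2 + (10*0.5)^2) ~ 7.07
mean, std = mc_propagate(lambda p: p[0] * p[1], [10.0, 5.0], [1.0, 0.5])
print(mean, std)
```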

Error propagation for an arbitrary transformation using a Monte Carlo method can be done with the class jhpro.upropog.UPropogate. Here is a simple example:

from java.lang import Math 
from jhplot import *
from jhpro.upropog import UPropogate

class MyFunc(FNon):
    def value(self, x):
        y=x[0]*x[0]*Math.sqrt(x[0])*self.p[0]+self.p[1]
        if (x[0]<4): y=0
        return y

pl = MyFunc("test",1,2)    
print "f(2)=  ",pl.value([2])
print pl.numberOfParameters() 
print pl.dimension()
print pl.parameterNames().tolist() 

# define parameters and their errors
p0=100
p1=20

p0_err=10
p1_err=2

# do calculations at x=10 (with zero error)
xval=10
xval_err=0

ff=UPropogate(pl,[p0,p1],[p0_err,p1_err],[xval],[xval_err],10000)
err=ff.getStd()
pl.setParameter("par0",p0) 
pl.setParameter("par1",p1)
val=pl.value([xval])
print "val=",val," +- ",err


hh=ff.getHisto()
c1=HPlot()
c1.visible()
c1.setAutoRange()
c1.draw(hh)

Here is a more manual approach that does not use this class:

from java.lang import Math 
from jhplot import *
from java.util import Date

class MyFunc(FNon):
    def value(self, x):
        y=x[0]*x[0]*Math.sqrt(x[0])*self.p[0]+self.p[1]
        if (x[0]<4): y=0
        return y

pl = MyFunc("test",1,2)    
print "f(2)=  ",pl.value([2])
print pl.numberOfParameters() 
print pl.dimension()
print pl.parameterNames().tolist() 

p0=100
p1=20

p0_err=10
p1_err=2

from cern.jet.random import Normal
from cern.jet.random.engine import MersenneTwister
from  hep.aida.ref.histogram import Histogram1D

engine=MersenneTwister(Date())
rd0=Normal(p0, p0_err,engine)
rd1=Normal(p1, p1_err,engine)

xval=10;


pl.setParameter("par0",p0+4*p0_err) 
pl.setParameter("par1",p1-4*p1_err)
x1=pl.value([xval])

pl.setParameter("par0",p0+4*p0_err) 
pl.setParameter("par1",p1+4*p1_err)
x2=pl.value([xval])

pl.setParameter("par0",p0-4*p0_err) 
pl.setParameter("par1",p1-4*p1_err)
x3=pl.value([xval])

pl.setParameter("par0",p0-4*p0_err) 
pl.setParameter("par1",p1+4*p1_err)
x4=pl.value([xval])

z=P0D("data")
z.add(x1)
z.add(x2)
z.add(x3)
z.add(x4)
hh=H1D("error",100,z.getMin(),z.getMax())

for i in range(10000):
    p0n=rd0.nextDouble()
    p1n=rd1.nextDouble()
    pl.setParameter("par0",p0n) 
    pl.setParameter("par1",p1n) 
    #print pl.parameters() 
    ff=pl.value([xval])
    hh.fill(ff)


all=hh.getStat()
err=all['standardDeviation']
pl.setParameter("par0",p0) 
pl.setParameter("par1",p1)
val=pl.value([xval])
print "val=",val," +- ",err


c1=HPlot()
c1.visible()
c1.setAutoRange()
c1.draw(hh)

Error propagation using VisAD

The Java package VisAD contains the powerful classes visad.Real and visad.Data for performing calculations with units and errors.

In VisAD, numeric scalar values are represented by the Real class. The Real constructor

Real(double value, double error)

allows us to provide an error estimate for the value. This estimate can then be propagated through mathematical operations. Here is a Java example:

import visad.Real;
Real a,b,c;
a = new Real(10.,1.);
b = new Real(255.,10.);
c = (Real) a.add(b);
System.out.println("sum = "+c);

When you run this example, you only get: sum = 265.0, with no error estimate in the output.

The VisAD default for most math operations, however, is not to propagate errors, so to make use of this feature we must explicitly indicate how we want error estimates to propagate. This is done by using an alternate signature of the add method (and of all other math operations):

import visad.Real;
import visad.Data;
Real a,b,c;
a = new Real(10.,1.);
b = new Real(255.,10.);
c = (Real) a.add(b, Data.NEAREST_NEIGHBOR, Data.INDEPENDENT);
System.out.println("sum = "+c);
System.out.println("error of sum is="+c.getError().getErrorValue());

When you run this example, you get: "sum = 265.0" and "error of sum is=10.04987562112089"

The constants supplied to the add method are the type of interpolation and the type of error propagation. In this simple case, the type of interpolation is not really relevant, but as you will see later, VisAD Data types may contain finite approximations to continuous functions and when these are combined mathematically, may need to be resampled in order to match domains.
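Under Data.INDEPENDENT the error of a sum is the quadrature combination of the input errors, which is exactly where the value 10.0499 above comes from. A quick plain-Python check:

```python
import math

# Data.INDEPENDENT combines the errors of a sum in quadrature:
# sqrt(1^2 + 10^2), matching the VisAD output above
print(math.sqrt(1.0 ** 2 + 10.0 ** 2))  # 10.04987562112089
```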

This example shows how to define values with errors and how to perform mathematical transformations while propagating errors (independently) using Jython/Python:

from visad import SI,Data,Real,RealType

def printReal(x):
   print x.toString(), "+-", (x.getError()).getErrorValue()

a=Real(10,2) # 10 +- 2 (error)
b=Real(30,5) # 30 +- 5 (error)

c=a.add(b,Data.NEAREST_NEIGHBOR, Data.INDEPENDENT)
print "a+b=",
printReal(c)

c=a.divide(b,Data.NEAREST_NEIGHBOR, Data.INDEPENDENT)
print "a/b=",
printReal(c)

a=Real(0.9,0.1) # 0.9 +- 0.1 (error)
c=a.cos(Data.NEAREST_NEIGHBOR, Data.INDEPENDENT)
print "cos(a)=",
printReal(c)

c=a.pow(Real(4),Data.NEAREST_NEIGHBOR, Data.INDEPENDENT)
print "a^4=",
printReal(c)

print "use units:"
k=RealType("kelvin",SI.kelvin)
t2=Real(k,273.)
t1=Real(k,255.)
xsum=t1.add(t2)
print "sum = ",xsum," ",xsum.getUnit()