20130225 Nonlinear Scan Optimisation
Nonlinear scans typically give a fit uncertainty on sigma of about 0.1 um. This is acceptable, but better is always better and, more importantly, fitting the same data repeatedly often yields results that differ by more than the quoted uncertainty. While not ideal, this can be understood: with a four-parameter fit it is often possible to reach a successful minimum with one parameter different from a previous fit by compensating with the others. Data from different scans usually show this, as each scan is unique and the route taken to the minimum often is too. If we cannot improve the fit itself, we should instead improve the uncertainty on individual points and, secondly (mainly for time reasons), the number of points and their placement. More samples means better statistics but also a longer overall scan (linear in the number of samples), whereas the statistical benefit falls off as 1/sqrt(nsamples). At some point, a significantly longer scan therefore buys only a marginal statistical improvement.
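The trade-off can be made concrete with a short sketch (illustrative numbers only, not from a real scan): the statistical uncertainty on a mean falls as 1/sqrt(nsamples) while scan length grows linearly, so doubling the number of samples only improves precision by a factor of sqrt(2).

```python
import numpy as np

def standard_error(point_noise, nsamples):
    # Uncertainty on the mean of nsamples independent samples,
    # each with per-point noise point_noise.
    return point_noise / np.sqrt(nsamples)

for n in (4, 16, 64, 256):
    # Scan length grows like n; precision improves only like sqrt(n).
    print(n, standard_error(0.1, n))
```

Quadrupling the scan length here halves the uncertainty, which is why improving per-point uncertainty and sample placement is more attractive than simply taking more samples.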
Version 1 - used before 20130222
Version 2 - developed during shift on 20130222
Comparison

Generate model overlap-integral data for different sigmas with the same normalisation and background over the same range. Use 1001 points to see a smooth curve, with linear samples for now.

```python
from numpy import arange
from matplotlib.pyplot import plot
import lwIntegral2  # local overlap-integral module

# yarray: sample positions, defined earlier in the session
sigma = arange(0.6, 1.61, 0.2)
data = [lwIntegral2.OISetEV(yarray, 0, 200, s, 1.0, 0.0, 0.0) for s in sigma]
datad = dict(zip(sigma, data))

# re-key by rounded sigma so the plot labels are tidy
datad2 = {}
for key, val in datad.items():  # originally iteritems() under Python 2
    datad2[str(round(key, 2))] = val

for key in datad2:
    plot(yarray, datad2[key], label=(r'$\sigma_{ex}$ = ' + key + r'$\mu$m'))
```

Saved this data as different_sigmas.dat
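The lwIntegral2 module is local to the shift machine, so as a self-contained stand-in (an assumption, not the actual overlap integral) the four-parameter scan shape can be sketched as a Gaussian peak plus flat background; the function name `model_curve` and its arguments are hypothetical, chosen to mirror the sigma, normalisation, centre, and background parameters discussed above.

```python
import numpy as np

def model_curve(y, amplitude, centre, sigma, background):
    # Hypothetical four-parameter scan model: Gaussian peak on a flat
    # background, standing in for lwIntegral2.OISetEV.
    return amplitude * np.exp(-0.5 * ((y - centre) / sigma) ** 2) + background

# same sigma grid as the notebook entry, on an assumed +/-5 um range
yarray = np.linspace(-5.0, 5.0, 1001)
curves = {round(s, 2): model_curve(yarray, 1.0, 0.0, s, 0.0)
          for s in np.arange(0.6, 1.61, 0.2)}
```

With this stand-in, the degeneracy noted above is easy to see: raising the background while lowering the amplitude leaves the curve nearly unchanged away from the peak, which is why repeated fits can land on different parameter sets.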