993 results for Calibration data


Relevance: 30.00%

Abstract:

In developing neural network techniques for real-world applications, it is still very rare to see estimates of confidence placed on the neural network predictions. This is a major deficiency, especially in safety-critical systems. In this paper we explore three distinct methods of producing point-wise confidence intervals using neural networks. We compare and contrast Bayesian, Gaussian Process and Predictive error bars evaluated on real data. The problem domain is the calibration of a real automotive engine management system for both air-fuel ratio determination and on-line ignition timing. This problem requires real-time control and, given its safety-critical nature, is a good candidate for exploring the use of confidence predictions.
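
As a rough illustration of one of the three approaches compared, the sketch below derives point-wise ~95% intervals from a Gaussian Process regressor with scikit-learn; it is not the paper's implementation, and the data and kernel choices are invented.

```python
# Minimal sketch: point-wise confidence intervals from a GP regressor.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, 40).reshape(-1, 1)        # e.g. engine operating point
y = np.sin(X).ravel() + rng.normal(0, 0.1, 40)   # e.g. measured calibration target

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

X_new = np.linspace(0, 10, 200).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std   # point-wise ~95% band
```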

Relevance: 30.00%

Abstract:

Two types of prediction problem can be solved using a regression line: prediction of the ‘population’ regression line at a point ‘x’, and prediction of an ‘individual’ new member of the population, ‘y1’, for which ‘x1’ has been measured. The second problem is probably the more commonly encountered and the more relevant to calibration studies. A regression line is likely to be most useful for calibration if the range of values of the X variable is large, if there is a good representation of the ‘x,y’ values across the range of X, and if several estimates of ‘y’ are made at each ‘x’. It is poor statistical practice to use a regression line for calibration or prediction beyond the limits of the data.
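
The distinction between the two interval types can be made concrete with statsmodels; the sketch below uses synthetic data and invented coefficients, and computes both the confidence interval for the ‘population’ line at x1 and the wider prediction interval for an ‘individual’ y1.

```python
# Confidence interval (population line) vs prediction interval (individual y1).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = np.linspace(1, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, x.size)   # synthetic calibration data

res = sm.OLS(y, sm.add_constant(x)).fit()

x1 = 5.0                                 # new measurement, inside the data range
X_new = np.column_stack([[1.0], [x1]])   # [intercept, x1]
frame = res.get_prediction(X_new).summary_frame(alpha=0.05)

print(frame[["mean", "mean_ci_lower", "mean_ci_upper"]])  # 'population' line at x1
print(frame[["obs_ci_lower", "obs_ci_upper"]])            # 'individual' y1 (wider)
```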

Relevance: 30.00%

Abstract:

This thesis first considers the calibration and signal processing requirements of a neuromagnetometer for the measurement of human visual function. Gradiometer calibration using straight wire grids is examined and optimal grid configurations determined, given realistic constructional tolerances. Simulations show that for a gradiometer balance of 1:10⁴ and a wire spacing error of 0.25 mm, the achievable calibration accuracy is 0.3% for gain, 0.3 mm for position and 0.6° for orientation. Practical results with a 19-channel second-order gradiometer based system exceed this performance. The real-time application of adaptive reference noise cancellation filtering to running-average evoked response data is examined. In the steady state, the filter can be assumed to be driven by a non-stationary step input arising at epoch boundaries. Based on empirical measures of this driving step, an optimal progression for the filter time constant is proposed which improves upon fixed-time-constant filter performance. Incorporating the time-derivatives of the reference channels was found to improve the performance of the adaptive filtering algorithm by 15-20% for unaveraged data, falling to 5% with averaging. The thesis concludes with a neuromagnetic investigation of evoked cortical responses to chromatic and luminance grating stimuli. The global magnetic field power of evoked responses to the onset of sinusoidal gratings was shown to have distinct chromatic and luminance sensitive components. Analysis of the results, using a single equivalent current dipole model, shows that these components arise from activity within two distinct cortical locations. Co-registration of the resulting current source localisations with MRI shows a chromatically responsive area lying along the midline within the calcarine fissure, possibly extending onto the lingual and cuneal gyri. It is postulated that this area is the human homologue of the primate cortical area V4.
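
For readers unfamiliar with reference noise cancellation, the sketch below shows the generic idea in its simplest LMS form, including the augmentation of the reference with its time-derivative; it is illustrative only, not the thesis's filter or its optimal time-constant progression.

```python
# Generic LMS noise canceller with a derivative-augmented reference channel.
import numpy as np

def lms_cancel(primary, reference, mu=0.01):
    """Subtract reference-correlated noise from the primary (signal) channel."""
    ref = np.column_stack([reference, np.gradient(reference)])  # add derivative
    w = np.zeros(ref.shape[1])
    out = np.empty_like(primary, dtype=float)
    for n in range(len(primary)):
        est = ref[n] @ w                # noise estimate from reference channels
        out[n] = primary[n] - est       # cleaned sample
        w += 2 * mu * out[n] * ref[n]   # LMS weight update
    return out
```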

Relevance: 30.00%

Abstract:

The study examined the effect of the range of a confidence scale on consumer knowledge calibration, specifically whether a restricted range scale (25%-100%) leads to differences in calibration compared to a full range scale (0%-100%) for multiple-choice questions. A quasi-experimental study using student participants (N = 434) was employed. Data were collected from two samples; in the first sample (N = 167) a full range confidence scale was used, and in the second sample (N = 267) a restricted range scale was used. No differences were found between the two scales on knowledge calibration. Results from studies of knowledge calibration employing restricted range and full range confidence scales are thus comparable. © Psychological Reports 2014.
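
For context, knowledge calibration in such studies is typically scored as the difference between mean expressed confidence and the proportion of correct answers; a minimal sketch with synthetic responses (not the study's data or its exact measure):

```python
# Over/underconfidence score: mean confidence minus proportion correct.
import numpy as np

confidence = np.array([0.25, 0.50, 0.75, 1.00, 0.75])  # restricted 25%-100% scale
correct = np.array([0, 1, 1, 1, 0])                    # items scored right/wrong

bias = confidence.mean() - correct.mean()   # > 0 indicates overconfidence
print(f"over/underconfidence = {bias:+.3f}")
```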

Relevance: 30.00%

Abstract:

Nanoindentation has become a common technique for measuring the hardness and elastic-plastic properties of materials, including coatings and thin films. In recent years, different nanoindenter instruments have been commercialised and used for this purpose. Each instrument is equipped with its own analysis software for the derivation of the hardness and reduced Young's modulus from the raw data. These data are mostly analysed through the Oliver and Pharr method. In all cases, the calibration of compliance and area function is mandatory. The present work illustrates and describes a calibration procedure and an approach to raw data analysis carried out for six different nanoindentation instruments through several round-robin experiments. Three different indenters were used (Berkovich, cube corner, spherical), and three standardised reference samples were chosen (hard fused quartz, soft polycarbonate, and sapphire). It was clearly shown that the use of these common procedures consistently limited the spread in the hardness and reduced Young's modulus data compared to the same measurements performed using instrument-specific procedures. The following recommendations for nanoindentation calibration must be followed: (a) use only sharp indenters, (b) set a cut-off value for the penetration depth below which measurements must be considered unreliable, (c) perform nanoindentation measurements with limited thermal drift, (d) ensure that the load-displacement curves are as smooth as possible, (e) perform stiffness measurements specific to each instrument/indenter pair, (f) use fused quartz (Fq) and sapphire (Sa) as calibration reference samples for stiffness and area function determination, (g) use a function, rather than a single value, for the stiffness and (h) adopt a unique protocol and software for raw data analysis in order to limit the data spread related to the instruments (i.e. the level of drift or noise, defects of a given probe) and to make the H and Er data intercomparable. © 2011 Elsevier Ltd.
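
For reference, the Oliver and Pharr quantities mentioned above follow from the unloading stiffness S and the calibrated area function A(h_c); the sketch below uses the standard relations with stand-in numbers and an ideal Berkovich area function rather than a calibrated one.

```python
# Standard Oliver-Pharr relations: h_c = h_max - eps*P_max/S, H = P_max/A(h_c),
# E_r = sqrt(pi)*S / (2*beta*sqrt(A(h_c))).
import math

def oliver_pharr(P_max, h_max, S, area_fn, eps=0.75, beta=1.0):
    """P_max [mN], h_max [nm], unloading stiffness S [mN/nm]; area_fn: nm -> nm^2."""
    h_c = h_max - eps * P_max / S                    # contact depth
    A = area_fn(h_c)                                 # projected contact area A(h_c)
    H = P_max / A                                    # hardness
    E_r = math.sqrt(math.pi) * S / (2 * beta * math.sqrt(A))  # reduced modulus
    return H, E_r

# Ideal Berkovich area function (A = 24.5*h^2) as a stand-in for a calibrated one:
H, E_r = oliver_pharr(P_max=5.0, h_max=300.0, S=0.05, area_fn=lambda h: 24.5 * h * h)
```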

Relevance: 30.00%

Abstract:

The sheer volume of citizen weather data collected and uploaded to online data hubs is immense. However, as with any citizen data, it is difficult to assess the accuracy of the measurements. Within this project we quantify just how much data is available, where it comes from, the frequency at which it is collected, and the types of automatic weather stations being used. We also list the numerous possible sources of error and uncertainty within citizen weather observations before showing evidence of such effects in real data. A thorough intercomparison field study was conducted, testing popular models of citizen weather stations. From this study we were able to parameterise key sources of bias. Most significantly, the project develops a complete quality control system through which citizen air temperature observations can be passed. The structure of this system was heavily informed by the results of the field study. Using a Bayesian framework, the system learns and updates its estimates of the calibration and radiation-induced biases inherent to each station. We then show the benefit of correcting for these learnt biases over using the original uncorrected data. The system also attaches an uncertainty estimate to each observation, giving applications that choose to incorporate such observations a measure on which to base their confidence in the data. The system relies on interpolated temperature and radiation observations from neighbouring professional weather stations, for which a Bayesian regression model is used. We recognise some of the assumptions and flaws of the developed system and suggest further work that needs to be done to bring it to an operational setting. Such a system will hopefully allow applications to leverage the additional value citizen weather data brings to longstanding professional observing networks.
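
As a minimal sketch of the bias-learning step described (the names, priors, and conjugate Normal form here are assumptions for exposition, not the project's actual model), a station's additive bias can be updated from residuals against interpolated reference temperatures:

```python
# Conjugate Normal update of one station's additive bias (assumed model form).
import numpy as np

def update_bias(prior_mean, prior_var, residuals, obs_var=1.0):
    """Posterior of station bias given residuals = citizen obs - reference."""
    n = len(residuals)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(residuals) / obs_var)
    return post_mean, post_var

residuals = np.array([1.2, 0.8, 1.1, 0.9])   # station reads ~1 degC warm
mean, var = update_bias(0.0, 4.0, residuals)
obs_uncertainty = np.sqrt(var + 1.0)         # bias plus measurement uncertainty
```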

Relevance: 30.00%

Abstract:

Most pavement design procedures incorporate reliability to account for the effect of uncertainty and variability in design inputs on predicted performance. The load and resistance factor design (LRFD) procedure, which delivers an economical section while treating the variability of each design input separately, has been recognised as an effective tool for incorporating reliability into design procedures. This paper presents a new reliability-based calibration in LRFD format for a mechanics-based fatigue cracking analysis framework. It employs a two-component reliability analysis methodology that combines a central composite design-based response surface approach with a first-order reliability method. The reliability calibration was achieved based on a number of field pavement sections that have well-documented performance histories and high-quality field and laboratory data. The effectiveness of the developed LRFD procedure was evaluated by performing pavement designs for various target reliabilities and design conditions. The results show excellent agreement between the target and actual reliabilities. Furthermore, it is clear from the results that more design features need to be included in the reliability calibration to minimise the deviation of the actual reliability from the target reliability.
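
To make the reliability vocabulary concrete, the textbook special case below computes the reliability index and reliability for a linear limit state with independent normal variables; the paper's actual framework couples a response surface with FORM and is not reproduced here, and the numbers are assumed.

```python
# Toy FORM special case: linear limit state g = R - S with normal R and S.
from math import sqrt
from scipy.stats import norm

mu_R, sd_R = 12.0, 2.0   # resistance side (assumed values)
mu_S, sd_S = 8.0, 1.5    # load-effect side (assumed values)

beta = (mu_R - mu_S) / sqrt(sd_R**2 + sd_S**2)  # reliability index
reliability = norm.cdf(beta)                    # P(g = R - S > 0)
print(f"beta = {beta:.2f}, reliability = {reliability:.4f}")
```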

Relevance: 30.00%

Abstract:

This dissertation develops a new mathematical approach that overcomes the effect of a data processing phenomenon known as “histogram binning” inherent to flow cytometry data. A real-time procedure is introduced to prove the effectiveness and fast implementation of such an approach on real-world data. The histogram binning effect is a dilemma posed by two seemingly antagonistic developments: (1) flow cytometry data in its histogram form is extended in its dynamic range to improve its analysis and interpretation, and (2) the inevitable dynamic range extension introduces an unwelcome side effect, the binning effect, which skews the statistics of the data, undermining as a consequence the accuracy of the analysis and the eventual interpretation of the data. Researchers in the field contended with such a dilemma for many years, resorting either to hardware approaches that are rather costly with inherent calibration and noise effects; or have developed software techniques based on filtering the binning effect but without successfully preserving the statistical content of the original data. The mathematical approach introduced in this dissertation is so appealing that a patent application has been filed. The contribution of this dissertation is an incremental scientific innovation based on a mathematical framework that will allow researchers in the field of flow cytometry to improve the interpretation of data knowing that its statistical meaning has been faithfully preserved for its optimized analysis. Furthermore, with the same mathematical foundation, proof of the origin of such an inherent artifact is provided. These results are unique in that new mathematical derivations are established to define and solve the critical problem of the binning effect faced at the experimental assessment level, providing a data platform that preserves its statistical content. In addition, a novel method for accumulating the log-transformed data was developed. This new method uses the properties of the transformation of statistical distributions to accumulate the output histogram in a non-integer and multi-channel fashion. Although the mathematics of this new mapping technique seem intricate, the concise nature of the derivations allow for an implementation procedure that lends itself to a real-time implementation using lookup tables, a task that is also introduced in this dissertation.
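
As an illustration of what non-integer, multi-channel accumulation can look like (an assumed mechanic for exposition, not the patented method), each linear input channel can spread its count across the two log-scale output channels it straddles, with weights precomputed once into lookup tables:

```python
# Non-integer histogram accumulation via precomputed lookup tables.
import numpy as np

N_IN, N_OUT = 1024, 256

# Lookup tables, computed once: fractional log-scale position per input channel.
pos = (N_OUT - 1) * np.log1p(np.arange(N_IN)) / np.log1p(N_IN - 1)
lo = np.floor(pos).astype(int)          # lower output channel
w_hi = pos - lo                         # fraction of the count for the upper one

def accumulate(events, hist=None):
    """Spread each event across the two output channels its position straddles."""
    hist = np.zeros(N_OUT) if hist is None else hist
    hi = np.minimum(lo[events] + 1, N_OUT - 1)
    np.add.at(hist, lo[events], 1.0 - w_hi[events])
    np.add.at(hist, hi, w_hi[events])
    return hist

hist = accumulate(np.random.default_rng(2).integers(0, N_IN, 10_000))
```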

Relevance: 30.00%

Abstract:

This research sought to understand the role that differentially assessed lands (lands in the United States given tax breaks in return for a guarantee that they remain in agriculture) play in influencing urban growth. Our method was to calibrate the SLEUTH urban growth model under two different conditions. The first used an excluded layer that ignored such lands, effectively rendering them available for development. The second treated those lands as totally excluded from development. Our hypothesis was that excluding those lands would yield better metrics of fit with past data. Our results validate this hypothesis: two different goodness-of-fit metrics both yielded higher values when differentially assessed lands were treated as excluded. This suggests that, at least in our study area, differential assessment, which protects farm and ranch lands for tenuous periods of time, has indeed allowed farmland to resist urban development. Including differentially assessed lands also yielded very different calibrated coefficients of growth, as the model tried to account for the same growth patterns over two very different excluded areas. Excluded layer design can greatly affect model behavior. Since differentially assessed lands are quite common throughout the United States and are often ignored in urban growth modeling, the findings of this research can assist other urban growth modelers in designing excluded layers that result in more accurate model calibration and thus forecasting.
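
Schematically, the two calibration conditions differ only in the excluded layer; the sketch below follows the common SLEUTH convention of 0 = developable and 100 = fully excluded (an assumption here, not taken from the paper, and the parcels are hypothetical).

```python
# Two excluded-layer designs for the same study area.
import numpy as np

water = np.zeros((100, 100), dtype=np.uint8)
water[:10, :] = 100                        # e.g. a river: always excluded
diff_assessed = np.zeros_like(water)
diff_assessed[40:70, 40:70] = 100          # hypothetical tax-assessed parcels

excluded_cond1 = water                             # farmland left developable
excluded_cond2 = np.maximum(water, diff_assessed)  # farmland fully excluded
```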

Relevance: 30.00%

Abstract:

The Highway Safety Manual (HSM) estimates roadway safety performance based on predictive models that were calibrated using national data. Calibration factors are then used to adjust these predictive models to local conditions for local applications. The HSM recommends that local calibration factors be estimated using 30 to 50 randomly selected sites that experienced at least a total of 100 crashes per year. It also recommends that the factors be updated every two to three years, preferably on an annual basis. However, these recommendations are primarily based on expert opinion rather than data-driven research findings. Furthermore, most agencies do not have data for many of the input variables recommended in the HSM. This dissertation is aimed at determining the best way to meet three major data needs affecting the estimation of calibration factors: (1) the required minimum sample sizes for different roadway facilities, (2) the required frequency of calibration factor updates, and (3) the influential variables affecting calibration factors. In this dissertation, statewide segment and intersection data were first collected for most of the HSM-recommended calibration variables using a Google Maps application. In addition, eight years (2005-2012) of traffic and crash data were retrieved from existing databases of the Florida Department of Transportation. With these data, the effect of the sample size criterion on calibration factor estimates was first studied using a sensitivity analysis. The results showed that the minimum sample sizes not only vary across different roadway facilities, but are also significantly higher than those recommended in the HSM. In addition, results from paired-sample t-tests showed that calibration factors in Florida need to be updated annually. To identify influential variables affecting the calibration factors for roadway segments, the variables were prioritized by combining the results from three different methods: negative binomial regression, random forests, and boosted regression trees. Only a few variables were found to explain most of the variation in the crash data. Traffic volume was consistently found to be the most influential. In addition, roadside object density, major and minor commercial driveway densities, and minor residential driveway density were also identified as influential variables.
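
For reference, the HSM local calibration factor itself is the ratio of total observed to total model-predicted crashes over the sampled sites; a minimal sketch with synthetic numbers:

```python
# HSM-style local calibration factor: C = sum(observed) / sum(predicted).
import numpy as np

observed = np.array([12, 5, 9, 14, 7], dtype=float)    # crashes at sampled sites
predicted = np.array([10, 6, 8, 11, 9], dtype=float)   # model-predicted crashes

C = observed.sum() / predicted.sum()
print(f"local calibration factor C = {C:.3f}")  # applied as N_local = C * N_predicted
```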

Relevance: 30.00%

Abstract:

This dissertation presents a calibration procedure for a pressure-velocity (P-V) probe. The dissertation is divided into four main chapters. The first chapter is divided into six main sections. In the first two, the wave equation in fluids and the velocity of sound in gases are derived; the third section contains a general solution of the wave equation in the case of plane acoustic waves. Sections four and five give the definitions of acoustic impedance and admittance and of the practical units in which sound level is measured, i.e. the decibel scale. Finally, the last section of the chapter covers the theory underlying the frequency analysis of a sound wave, including the analysis of sound in bands and the discrete Fourier analysis, with the definition of some important functions. The second chapter describes the different reference-field calibration procedures used to calibrate P-V probes, among them the progressive plane wave method, which is the one used in this work. The last section of the chapter contains a description of the working principles of the two transducers that have been used, with a focus on the velocity transducer. The third chapter is devoted to the calibration set-up and the instruments used for data acquisition and analysis. Since software routines were central to the work, this chapter includes a dedicated section in which the most used proprietary routines are explained in detail. It then describes the work carried out, organised into three phases, and presents the data acquired and the results obtained. All graphs and data reported were produced with Matlab® routines. The last chapter briefly summarises the work done, with an excursus on a new probe and on how the procedure implemented in this dissertation could be applied to a general field.
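
As a small illustration of the chapter-one material (the decibel scale and discrete Fourier analysis), the sketch below computes the sound pressure level of a synthetic 1 kHz tone and recovers its dominant frequency; it is generic, not one of the dissertation's Matlab® routines.

```python
# SPL on the decibel scale and a discrete Fourier analysis of a tone.
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
p = 0.2 * np.sin(2 * np.pi * 1000 * t)    # synthetic pressure signal [Pa]

p_ref = 20e-6                             # standard reference pressure in air [Pa]
spl = 20 * np.log10(np.sqrt(np.mean(p**2)) / p_ref)   # sound pressure level [dB]

spectrum = np.abs(np.fft.rfft(p)) / len(p)
freqs = np.fft.rfftfreq(len(p), 1 / fs)
peak = freqs[np.argmax(spectrum)]         # dominant component: 1000 Hz
print(f"SPL = {spl:.1f} dB, peak at {peak:.0f} Hz")
```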

Relevance: 30.00%

Abstract:

TEX86 (TetraEther indeX of tetraethers consisting of 86 carbon atoms) is a sea surface temperature (SST) proxy based on the distribution of archaeal isoprenoid glycerol dialkyl glycerol tetraethers (GDGTs). In this study, we appraise the applicability of TEX86 and TEX86L in subpolar and polar regions using surface sediments. We present TEX86 and TEX86L data from 160 surface sediment samples collected in the Arctic, the Southern Ocean and the North Pacific. Most of the SST estimates derived from both TEX86 and TEX86L are anomalously high in the Arctic, especially in the vicinity of Siberian river mouths and the sea ice margin, plausibly due to additional archaeal contributions linked to terrigenous input. We found unusual GDGT distributions at five sites in the North Pacific. High GDGT-0/crenarchaeol and GDGT-2/crenarchaeol ratios at these sites suggest a substantial contribution of methanogenic and/or methanotrophic archaea to the sedimentary GDGT pool. Apart from these anomalous findings, TEX86 and TEX86L values in the surface sediments from the Southern Ocean and the North Pacific do covary with overlying SSTs. In these regions, the sedimentary TEX86-SST relationship is similar to the global calibration, and the derived temperature estimates agree well with overlying annual mean SSTs at the sites. However, there is a systematic offset between the regional TEX86L-SST relationships and the global calibration. At these sites, temperature estimates based on the global TEX86L calibration are closer to summer SSTs than to annual mean SSTs. This finding suggests that in these subpolar settings a regional TEX86L calibration may be a more suitable equation for temperature reconstruction than the global calibration.
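
For reference, the sketch below computes the two indices from fractional GDGT abundances using the ratio definitions commonly given in the literature; the abundances are invented, and the final SST step, a linear calibration, is only indicated.

```python
# TEX86 and TEX86L from fractional GDGT abundances (illustrative values).
import math

g1, g2, g3, cren_p = 0.18, 0.24, 0.10, 0.08   # GDGT-1, -2, -3, crenarchaeol isomer

tex86 = (g2 + g3 + cren_p) / (g1 + g2 + g3 + cren_p)   # ratio form of TEX86
tex86_L = math.log10(g2 / (g1 + g2 + g3))              # log-based TEX86L variant

# SST then follows from a linear fit, SST = a * index + b, with (a, b)
# taken from the chosen global or regional calibration.
print(f"TEX86 = {tex86:.3f}, TEX86L = {tex86_L:.3f}")
```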

Relevance: 30.00%

Abstract:

Sediment samples from the Ontong-Java Plateau in the Pacific and the 90° east ridge in the Indian Ocean were used to investigate whether shell size and early diagenesis affect δ11B of the symbiont-bearing planktonic foraminifer Globigerinoides sacculifer. In pristine shells from both study locations we found a systematic increase of δ11B and Mg/Ca with shell size. Shells in the sieve size class 515-865 µm revealed δ11B values +2.1 to +2.3 per mil higher than shells in the 250-380 µm class. This pattern is most likely due to differences in symbiont photosynthetic activity and its integrated effect on the pH of the foraminiferal microenvironment. We therefore suggest that smaller individuals live at approximately 50-100 m water depth, where ambient light levels are lower. Using the empirical calibration curve for δ11B in G. sacculifer, only shells larger than 425 µm reflect surface seawater pH. Partial dissolution of shells derived from deeper sediment cores was determined by shell weight analyses and investigation of the shell surface microstructure by scanning electron microscopy. The δ11B in partially dissolved shells is up to 2 per mil lower relative to pristine shells of the same size class. In agreement with their relatively higher weight loss, smaller shells from the Ontong-Java Plateau show a more pronounced dissolution effect than larger shells. On the basis of the primary size effect and potential postdepositional dissolution effects, we recommend the use of shells that are visually pristine and, in the case of G. sacculifer, larger than 500 µm for paleoreconstructions.

Relevance: 30.00%

Abstract:

The oxygen isotopic composition (δ18O) of the calcium carbonate of planktonic calcifying organisms is a key tool for reconstructing both past seawater temperature and salinity. The calibration of palaeoceanographic proxies relies in general on empirical relationships derived from field experiments on extant species. Laboratory experiments have more often than not revealed that variables other than the target parameter influence the proxy signal, which makes proxy calibration a challenging task. Understanding these secondary or "vital" effects is crucial for increasing proxy accuracy. We present data from laboratory experiments showing that oxygen isotope fractionation during calcification in the coccolithophore Calcidiscus leptoporus and the calcareous dinoflagellate Thoracosphaera heimii depends on the carbonate chemistry of seawater in addition to temperature. A similar result has previously been reported for planktonic foraminifera, supporting the idea that the [CO3]2- effect on δ18O is universal among unicellular calcifying planktonic organisms. The slopes of the δ18O/[CO3]2- relationships range between -0.0243 per mil/(µmol/kg) (the calcareous dinoflagellate T. heimii) and the previously published -0.0022 per mil/(µmol/kg) (the non-symbiotic planktonic foraminifer Orbulina universa), while C. leptoporus has a slope of -0.0048 per mil/(µmol/kg). We present a simple conceptual model, based on the contribution of δ18O-enriched [HCO3]- to the [CO3]2- pool in the calcifying vesicle, which can explain the [CO3]2- effect on δ18O for the different unicellular calcifiers. This approach provides new insight into biological fractionation in calcifying organisms. The large range in δ18O/[CO3]2- slopes should possibly be explored as a means for paleoreconstruction of surface [CO3]2-, particularly through comparison of the responses of ecologically similar planktonic organisms.
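
Applied for paleoreconstruction, the species-specific slopes quoted above would enter as a carbonate-ion correction of measured δ18O to a common reference [CO3]2-; a minimal sketch, where the helper function and the numeric inputs are illustrative assumptions:

```python
# Carbonate-ion correction of d18O using the slopes quoted in the abstract.
slopes = {                      # per mil per (umol/kg)
    "T. heimii": -0.0243,
    "O. universa": -0.0022,
    "C. leptoporus": -0.0048,
}

def correct_d18O(d18O_measured, co3, co3_ref, species):
    """Remove the [CO3]2- effect relative to a reference concentration."""
    return d18O_measured - slopes[species] * (co3 - co3_ref)

# e.g. measured 1.50 per mil at [CO3]2- = 250 umol/kg, referenced to 200 umol/kg:
print(correct_d18O(1.50, co3=250.0, co3_ref=200.0, species="C. leptoporus"))
```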