980 results for Measurement Error
Abstract:
This article deals with classification problems involving unequal probabilities in each class and discusses metrics for systems that use multilayer perceptron (MLP) neural networks for the task of classifying new patterns. In addition, we propose three new pruning methods, which were compared with seven existing methods from the literature for MLP networks. All pruning algorithms presented in this paper have been modified by the authors to prune whole neurons, so as to produce MLP networks that remain fully connected but have a small hidden layer. Experiments were carried out on the unbalanced E. coli classification problem with the ten pruning methods. The proposed methods obtained good results, in fact better than those of pruning methods previously proposed in the MLP neural network literature. (C) 2014 Elsevier Ltd. All rights reserved.
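Neuron-level pruning of the kind described can be sketched in a few lines. The sketch below assumes a magnitude-based saliency criterion (total absolute connected weight), which is a common choice for illustration but not necessarily any of the ten methods compared in the paper:

```python
# Minimal sketch of neuron-level pruning for a single-hidden-layer MLP.
# Saliency criterion (sum of absolute incoming + outgoing weights) is an
# assumption for illustration; the paper's own criteria are not given here.

def prune_hidden_neurons(w_in, w_out, n_remove):
    """w_in:  per-hidden-neuron lists of input->hidden weights,
    w_out: per-hidden-neuron lists of hidden->output weights.
    Removes the n_remove neurons with the smallest total absolute weight,
    so the result is a smaller but still fully connected hidden layer."""
    saliency = [sum(abs(w) for w in wi) + sum(abs(w) for w in wo)
                for wi, wo in zip(w_in, w_out)]
    keep = sorted(range(len(saliency)), key=lambda i: saliency[i])[n_remove:]
    keep.sort()  # preserve the original neuron ordering
    return [w_in[i] for i in keep], [w_out[i] for i in keep]

# Example: prune 1 of 3 hidden neurons; the near-zero middle neuron goes.
w_in = [[0.9, -0.8], [0.01, 0.02], [0.5, 0.4]]
w_out = [[1.0], [0.05], [-0.6]]
new_in, new_out = prune_hidden_neurons(w_in, w_out, 1)
```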
Abstract:
This study aimed to assess measurements of temperature and relative humidity obtained with a HOBO data logger under various conditions of exposure to solar radiation, comparing them with those obtained with a temperature/relative humidity probe and a copper-constantan thermocouple psychrometer, which are considered the standards for such measurements. Data were collected over a 6-day period (25 March to 1 April 2010), during which the equipment was monitored continuously and simultaneously. We employed the following combinations of equipment and conditions: a HOBO data logger in full sunlight; a HOBO data logger shielded within a white plastic cup with windows for air circulation; a HOBO data logger shielded within a Gill-type shelter (a multi-plate plastic prototype); a copper-constantan thermocouple psychrometer exposed to natural ventilation and protected from sunlight; and a temperature/relative humidity probe under a commercial multi-plate radiation shield. Comparisons between the measurements obtained with the various devices were made on the basis of statistical indicators: linear regression with the coefficient of determination, the index of agreement, the maximum absolute error, and the mean absolute error. The prototype multi-plate (Gill-type) shelter used to protect the HOBO data logger was found to provide the best protection against the effects of solar radiation on measurements of temperature and relative humidity. The precision and accuracy of a device that measures temperature and relative humidity depend on an efficient shelter that minimizes the interference caused by solar radiation, thereby avoiding erroneous analysis of the data obtained.
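The comparison statistics named here are standard and straightforward to reproduce. A minimal sketch, assuming the "index of agreement" is Willmott's d (the usual choice in meteorological comparisons; the abstract does not name a specific formulation), with hypothetical temperature values:

```python
# Comparison statistics from the abstract: mean absolute error,
# maximum absolute error, and the index of agreement (assumed here to
# be Willmott's d; d = 1 means perfect agreement).

def mean_abs_error(obs, pred):
    return sum(abs(p - o) for o, p in zip(obs, pred)) / len(obs)

def max_abs_error(obs, pred):
    return max(abs(p - o) for o, p in zip(obs, pred))

def index_of_agreement(obs, pred):
    """d = 1 - sum((p-o)^2) / sum((|p-ob| + |o-ob|)^2),
    where ob is the mean of the observations."""
    ob = sum(obs) / len(obs)
    num = sum((p - o) ** 2 for o, p in zip(obs, pred))
    den = sum((abs(p - ob) + abs(o - ob)) ** 2 for o, p in zip(obs, pred))
    return 1.0 - num / den

obs = [20.1, 21.4, 23.0, 24.2]   # hypothetical psychrometer readings (°C)
pred = [20.5, 21.2, 23.6, 24.0]  # hypothetical HOBO logger readings (°C)
d = index_of_agreement(obs, pred)
```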
Abstract:
Evaluations of measurement invariance provide essential construct validity evidence. However, the quality of such evidence is partly dependent upon the validity of the resulting statistical conclusions. The presence of Type I or Type II errors can render measurement invariance conclusions meaningless. The purpose of this study was to determine the effects of categorization and censoring on the behavior of the chi-square/likelihood ratio test statistic and two alternative fit indices (CFI and RMSEA) in the context of evaluating measurement invariance. Monte Carlo simulation was used to examine Type I error and power rates for the (a) overall test statistic/fit indices, and (b) change in test statistic/fit indices. Data were generated according to a multiple-group single-factor CFA model across 40 conditions that varied by sample size, strength of item factor loadings, and categorization thresholds. Seven different combinations of model estimators (ML, Yuan-Bentler scaled ML, and WLSMV) and specified measurement scales (continuous, censored, and categorical) were used to analyze each of the simulation conditions. As hypothesized, non-normality increased Type I error rates for the continuous scale of measurement and did not affect error rates for the categorical scale of measurement. Maximum likelihood estimation combined with a categorical scale of measurement resulted in more correct statistical conclusions than the other analysis combinations. For the continuous and censored scales of measurement, the Yuan-Bentler scaled ML resulted in more correct conclusions than normal-theory ML. The censored measurement scale did not offer any advantages over the continuous measurement scale. Comparing across fit statistics and indices, the chi-square-based test statistics were preferred over the alternative fit indices, and ΔRMSEA was preferred over ΔCFI. Results from this study should be used to inform the modeling decisions of applied researchers.
However, no single analysis combination can be recommended for all situations. Therefore, it is essential that researchers consider the context and purpose of their analyses.
Abstract:
Measurement-based quantum computation is an efficient model to perform universal computation. Nevertheless, theoretical questions have been raised, mainly with respect to realistic noise conditions. In order to shed some light on this issue, we evaluate the exact dynamics of some single-qubit-gate fidelities using the measurement-based quantum computation scheme when the qubits which are used as a resource interact with a common dephasing environment. We report a necessary condition for the fidelity dynamics of a general pure N-qubit state, interacting with this type of error channel, to present an oscillatory behavior, and we show that for the initial canonical cluster state, the fidelity oscillates as a function of time. This state fidelity oscillatory behavior brings significant variations to the values of the computational results of a generic gate acting on that state depending on the instants we choose to apply our set of projective measurements. As we shall see, considering some specific gates that are frequently found in the literature, the fast application of the set of projective measurements does not necessarily imply high gate fidelity, and likewise the slow application thereof does not necessarily imply low gate fidelity. Our condition for the occurrence of the fidelity oscillatory behavior shows that the oscillation presented by the cluster state is due exclusively to its initial geometry. Other states that can be used as resources for measurement-based quantum computation can present the same initial geometrical condition. Therefore, it is very important for the present scheme to know when the fidelity of a particular resource state will oscillate in time and, if so, at which times the measurements are best performed.
Abstract:
This thesis presents the measurement of the neutrino velocity with the OPERA experiment in the CNGS beam, a muon neutrino beam produced at CERN. The OPERA detector observes muon neutrinos 730 km away from the source. Previous measurements of the neutrino velocity have been performed by other experiments. Since the OPERA experiment aims at the direct observation of muon neutrino oscillations into tau neutrinos, a higher-energy beam is employed. This characteristic, together with the higher number of interactions in the detector, allows for a measurement with a much smaller statistical uncertainty. Moreover, a much more sophisticated timing system (composed of cesium clocks and GPS receivers operating in "common view mode") and a Fast Waveform Digitizer (installed at CERN and able to measure the internal time structure of the proton pulses used for the CNGS beam) allow for a new measurement with a smaller systematic error. Theoretical models of Lorentz-violating effects can be investigated by neutrino velocity measurements with terrestrial beams. The analysis was carried out with a blind method in order to guarantee its internal consistency and the quality of each calibration measurement. The result is the most precise measurement performed with a terrestrial neutrino beam: the statistical accuracy achieved by the OPERA measurement is about 10 ns, and the systematic error is about 20 ns.
Abstract:
The goal of this thesis was an experimental test of an effective theory of strong interactions at low energy, called Chiral Perturbation Theory (ChPT). Weak decays of kaon mesons provide such a test. In particular, K± → π±γγ decays are interesting because there is no tree-level O(p²) contribution in ChPT, and the leading contributions start at O(p⁴). At this order, these decays include one undetermined coupling constant, ĉ. Both the branching ratio and the spectrum shape of K± → π±γγ decays are sensitive to this parameter. O(p⁶) contributions to K± → π±γγ in ChPT predict a 30–40% increase in the branching ratio. From the measurement of the branching ratio and spectrum shape of K± → π±γγ decays, it is possible to determine a model-dependent value of ĉ and also to examine whether the O(p⁶) corrections are necessary and sufficient to explain the rate. About 40% of the data collected in the year 2003 by the NA48/2 experiment have been analyzed, and 908 K± → π±γγ candidates with about 8% background contamination have been selected in the region with z = mγγ²/mK² ≥ 0.2. Using 5,750,121 selected K± → π±π0 decays as the normalization channel, a model-independent differential branching ratio of K± → π±γγ has been measured to be: BR(K± → π±γγ, z ≥ 0.2) = (1.018 ± 0.038 (stat.) ± 0.039 (syst.) ± 0.004 (ext.)) × 10⁻⁶. From the fit to the O(p⁶) ChPT prediction of the measured branching ratio and the shape of the z-spectrum, a value of ĉ = 1.54 ± 0.15 (stat.) ± 0.18 (syst.) has been extracted. Using the measured ĉ value and the O(p⁶) ChPT prediction, the branching ratio for z = mγγ²/mK² < 0.2 was computed and added to the measured result. The value obtained for the total branching ratio is: BR(K± → π±γγ) = (1.055 ± 0.038 (stat.) ± 0.039 (syst.) ± 0.004 (ext.) +0.003/−0.002 (ĉ)) × 10⁻⁶, where the last error reflects the uncertainty on ĉ. The branching ratio result presented here agrees with previous experimental results, improving the precision of the measurement by at least a factor of five.
The precision on the ĉ measurement has been improved by approximately a factor of three. A slight disagreement with the O(p⁶) ChPT branching ratio prediction as a function of ĉ has been observed. This might be due to the possible existence of non-negligible terms not yet included in the theory. Within the scope of this thesis, η-η′ mixing effects in O(p⁴) ChPT have also been measured.
Abstract:
Precision measurements of observables in neutron beta decay address important open questions of particle physics and cosmology. In this thesis, a measurement of the proton recoil spectrum with the spectrometer aSPECT is described. From this spectrum, the antineutrino-electron angular correlation coefficient a can be derived. In our first beam time at the FRM II in Munich, background instabilities prevented us from presenting a new value for a. In the latest beam time at the ILL in Grenoble, the background has been reduced sufficiently. As a result of the data analysis, we identified and fixed a problem in the detector electronics which caused a significant systematic error. The aim of the latest beam time was a new value for a with an error well below the present literature value of 4%. A statistical accuracy of about 1.4% was reached, but we could only set upper limits on the correction for the detector-electronics problem, which were too high to yield a meaningful result. This thesis focused on the investigation of different systematic effects. With the knowledge of the systematics gained in this thesis, we are able to improve aSPECT to perform a 1% measurement of a in a further beam time.
Abstract:
Brain functions, such as learning, orchestrating locomotion, memory recall, and processing information, all require glucose as a source of energy. During these functions, the glucose concentration decreases as the glucose is being consumed by brain cells. By measuring this drop in concentration, it is possible to determine which parts of the brain are used during specific functions and consequently, how much energy the brain requires to complete the function. One way to measure in vivo brain glucose levels is with a microdialysis probe. The drawback of this analytical procedure, as with many steady-state fluid flow systems, is that the probe fluid will not reach equilibrium with the brain fluid. Therefore, brain concentration is inferred by taking samples at multiple inlet glucose concentrations and finding a point of convergence. The goal of this thesis is to create a three-dimensional, time-dependent, finite element representation of the brain-probe system in COMSOL 4.2 that describes the diffusion and convection of glucose. Once validated with experimental results, this model can then be used to test parameters that experiments cannot access. When simulations were run using published values for physical constants (i.e. diffusivities, density and viscosity), the resulting glucose model concentrations were within the error of the experimental data. This verifies that the model is an accurate representation of the physical system. In addition to accurately describing the experimental brain-probe system, the model I created is able to show the validity of zero-net-flux for a given experiment. A useful discovery is that the slope of the zero-net-flux line is dependent on perfusate flow rate and diffusion coefficients, but it is independent of brain glucose concentrations. The model was simplified with the realization that the perfusate is at thermal equilibrium with the brain throughout the active region of the probe.
This allowed for the assumption that all model parameters are temperature independent. The time to steady-state for the probe is approximately one minute. However, the signal degrades in the exit tubing due to Taylor dispersion, on the order of two minutes for two meters of tubing. Given an analytical instrument requiring a five μL aliquot, the smallest brain process measurable for this system is 13 minutes.
Abstract:
BACKGROUND: Physiological data obtained with the pulmonary artery catheter (PAC) are susceptible to errors in measurement and interpretation. Little attention has been paid to the relevance of errors in hemodynamic measurements performed in the intensive care unit (ICU). The aim of this study was to assess the errors related to the technical aspects (zeroing and reference level) and actual measurement (curve interpretation) of the pulmonary artery occlusion pressure (PAOP). METHODS: Forty-seven participants in a special ICU training program and 22 ICU nurses were tested without pre-announcement. All participants had previously been exposed to the clinical use of the method. The first task was to set up a pressure measurement system for PAC (zeroing and reference level) and the second to measure the PAOP. RESULTS: The median difference from the reference mid-axillary zero level was −3 cm (−8 to +9 cm) for physicians and −1 cm (−5 to +1 cm) for nurses. The median difference from the reference PAOP was 0 mmHg (−3 to 5 mmHg) for physicians and 1 mmHg (−1 to 15 mmHg) for nurses. When PAOP values were adjusted for the differences from the reference transducer level, the median differences from the reference PAOP values were 2 mmHg (−6 to 9 mmHg) for physicians and 2 mmHg (−6 to 16 mmHg) for nurses. CONCLUSIONS: Measurement of the PAOP is susceptible to substantial error as a result of practical mistakes. Comparison of results between ICUs or practitioners is therefore not possible.
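The level adjustment mentioned in the results reflects simple hydrostatics: each centimetre of transducer offset from the mid-axillary reference biases the reading by roughly 0.74 mmHg (1 cmH₂O ≈ 0.7355 mmHg, a physical conversion factor, not a value from the study). A minimal sketch of that conversion, not the study's own procedure:

```python
# Hydrostatic correction of a pressure reading for transducer-level offset.
# Conversion factor 1 cmH2O ≈ 0.7355 mmHg is a physical constant;
# the function itself is an illustrative sketch, not the study's code.
MMHG_PER_CMH2O = 0.7355

def corrected_paop(measured_mmhg, transducer_offset_cm):
    """transducer_offset_cm > 0 means the transducer sits above the
    mid-axillary reference, which makes the raw reading too low;
    a transducer below the reference makes the reading too high."""
    return measured_mmhg + transducer_offset_cm * MMHG_PER_CMH2O

# A transducer zeroed 3 cm below the reference over-reads by ~2.2 mmHg:
reading = corrected_paop(12.0, -3.0)  # ≈ 9.8 mmHg after correction
```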
Abstract:
To estimate a parameter in an elliptic boundary value problem, the method of equation error chooses the value that minimizes the error in the PDE and boundary condition (the solution of the BVP having been replaced by a measurement). The estimated parameter converges to the exact value as the measured data converge to the exact value, provided Tikhonov regularization is used to control the instability inherent in the problem. The error in the estimated solution can be bounded in an appropriate quotient norm; estimates can be derived for both the underlying (infinite-dimensional) problem and a finite-element discretization that can be implemented in a practical algorithm. Numerical experiments demonstrate the efficacy and limitations of the method.
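The stabilizing role of Tikhonov regularization can be illustrated on an ordinary linear least-squares problem. This generic sketch shows only the regularization idea the method relies on, not the paper's equation-error formulation for elliptic boundary value problems:

```python
# Tikhonov-regularized least squares for a small linear system:
# minimize ||A x - b||^2 + alpha ||x||^2, solved exactly for two
# unknowns via the normal equations (A^T A + alpha I) x = A^T b.
# Generic illustration only; the paper treats PDE-constrained problems.

def tikhonov_2x2(A, b, alpha):
    m00 = sum(row[0] * row[0] for row in A) + alpha
    m01 = sum(row[0] * row[1] for row in A)
    m11 = sum(row[1] * row[1] for row in A) + alpha
    r0 = sum(row[0] * bi for row, bi in zip(A, b))
    r1 = sum(row[1] * bi for row, bi in zip(A, b))
    det = m00 * m11 - m01 * m01  # Cramer's rule for the 2x2 system
    return [(m11 * r0 - m01 * r1) / det, (m00 * r1 - m01 * r0) / det]

# Nearly collinear columns make the unregularized problem ill-conditioned;
# a small alpha keeps the estimate bounded as the data are perturbed.
A = [[1.0, 1.0], [1.0, 1.0001], [1.0, 0.9999]]
b = [2.0, 2.0, 2.0]
x = tikhonov_2x2(A, b, alpha=1e-3)  # close to the stable solution [1, 1]
```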
Abstract:
Discrepancies in finite-element model predictions of bone strength may be attributed to the simplified modeling of bone as an isotropic structure due to the resolution limitations of clinical-level Computed Tomography (CT) data. The aim of this study is to calculate the preferential orientations of bone (the principal directions) and the extent to which bone is deposited more in one direction compared to another (degree of anisotropy). Using 100 femoral trabecular samples, the principal directions and degree of anisotropy were calculated with a Gradient Structure Tensor (GST) and a Sobel Structure Tensor (SST) using clinical-level CT. The results were compared against those calculated with the gold standard Mean-Intercept-Length (MIL) fabric tensor using micro-CT. There was no significant difference between the GST and SST in the calculation of the main principal direction (median error=28°), and the error was inversely correlated to the degree of transverse isotropy (r=−0.34, p<0.01). The degree of anisotropy measured using the structure tensors was weakly correlated with the MIL-based measurements (r=0.2, p<0.001). Combining the principal directions with the degree of anisotropy resulted in a significant increase in the correlation of the tensor distributions (r=0.79, p<0.001). Both structure tensors were robust against simulated noise, kernel sizes, and bone volume fraction. We recommend the use of the GST because of its computational efficiency and ease of implementation. This methodology has the promise to predict the structural anisotropy of bone in areas with a high degree of anisotropy, and may improve the in vivo characterization of bone.
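The gradient structure tensor itself amounts to accumulating outer products of intensity gradients and taking the eigenvector of the largest eigenvalue as the dominant orientation. A minimal 2-D sketch of the principle (the study works on 3-D clinical CT volumes, which is not reproduced here):

```python
import math

# Minimal 2-D gradient structure tensor (GST): accumulate outer products
# of central-difference gradients over the image interior, then recover
# the dominant gradient orientation from the tensor components.

def structure_tensor_2d(img):
    h, w = len(img), len(img[0])
    jxx = jxy = jyy = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0  # central difference
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            jxx += gx * gx
            jxy += gx * gy
            jyy += gy * gy
    return jxx, jxy, jyy

def principal_angle(jxx, jxy, jyy):
    """Angle (radians) of the dominant gradient direction,
    from the closed-form eigenvector of a symmetric 2x2 tensor."""
    return 0.5 * math.atan2(2.0 * jxy, jxx - jyy)

# A horizontal intensity ramp: gradients point along x, so the
# dominant orientation angle is 0.
img = [[float(x) for x in range(8)] for _ in range(8)]
angle = principal_angle(*structure_tensor_2d(img))
```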
Abstract:
We report on a new measurement of the neutron beta-asymmetry parameter A with the instrument Perkeo II. The main advancements are the high neutron polarization of P = 99.7(1)% from a novel arrangement of supermirror polarizers and the reduced background from improvements in the beam line and shielding. Leading corrections were thus reduced by a factor of 4, pushing them below the level of the statistical error and resulting in a significant reduction of the systematic uncertainty compared with our previous experiments. From the result A0 = −0.11996(58), we derive the ratio of the axial-vector to the vector coupling constant λ = gA/gV = −1.2767(16).
Abstract:
We present an independent calibration model for the determination of biogenic silica (BSi) in sediments, developed from analysis of synthetic sediment mixtures and application of Fourier transform infrared spectroscopy (FTIRS) and partial least squares regression (PLSR) modeling. In contrast to current FTIRS applications for quantifying BSi, this new calibration is independent from conventional wet-chemical techniques and their associated measurement uncertainties. This approach also removes the need for developing internal calibrations between the two methods for individual sediment records. For the independent calibration, we produced six series of different synthetic sediment mixtures using two purified diatom extracts, with one extract mixed with quartz sand, calcite, 60/40 quartz/calcite and two different natural sediments, and a second extract mixed with one of the natural sediments. A total of 306 samples—51 samples per series—yielded BSi contents ranging from 0 to 100 %. The resulting PLSR calibration model between the FTIR spectral information and the defined BSi concentration of the synthetic sediment mixtures exhibits a strong cross-validated correlation (R²cv = 0.97) and a low root-mean-square error of cross-validation (RMSECV = 4.7 %). Application of the independent calibration to natural lacustrine and marine sediments yields robust BSi reconstructions. At present, the synthetic mixtures do not include the variation in organic matter that occurs in natural samples, which may explain the somewhat lower prediction accuracy of the calibration model for organic-rich samples.
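The reported calibration statistics follow from the standard formulas applied to cross-validated predictions. A generic sketch with hypothetical values (the PLSR model and FTIR spectra themselves are not reproduced here):

```python
import math

# Cross-validation statistics of the kind reported for the calibration:
# root-mean-square error of cross-validation (RMSECV) and the
# cross-validated coefficient of determination (R^2_cv).

def rmsecv(measured, predicted):
    n = len(measured)
    return math.sqrt(sum((p - m) ** 2 for m, p in zip(measured, predicted)) / n)

def r2_cv(measured, predicted):
    mbar = sum(measured) / len(measured)
    ss_res = sum((p - m) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mbar) ** 2 for m in measured)
    return 1.0 - ss_res / ss_tot

bsi_true = [0.0, 25.0, 50.0, 75.0, 100.0]   # defined BSi content (%)
bsi_pred = [2.1, 23.5, 52.0, 73.8, 98.9]    # hypothetical CV predictions
err = rmsecv(bsi_true, bsi_pred)
```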
Abstract:
A measurement of the total pp cross section at the LHC at √s = 7 TeV is presented. In a special run with high-β* beam optics, an integrated luminosity of 80 μb⁻¹ was accumulated in order to measure the differential elastic cross section as a function of the Mandelstam momentum transfer variable t. The measurement is performed with the ALFA sub-detector of ATLAS. Using a fit to the differential elastic cross section in the |t| range from 0.01 GeV² to 0.1 GeV² to extrapolate to |t| → 0, the total cross section, σtot(pp→X), is measured via the optical theorem to be: σtot(pp→X) = 95.35 ± 0.38 (stat.) ± 1.25 (exp.) ± 0.37 (extr.) mb, where the first error is statistical, the second accounts for all experimental systematic uncertainties and the last is related to uncertainties in the extrapolation to |t| → 0. In addition, the slope of the elastic cross section at small |t| is determined to be B = 19.73 ± 0.14 (stat.) ± 0.26 (syst.) GeV⁻².
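The extrapolated forward elastic cross section determines σtot through the optical theorem. In the standard formulation (with ρ the ratio of the real to the imaginary part of the forward elastic amplitude, and B the nuclear slope fitted above), the relations are:

```latex
\sigma_{\mathrm{tot}}^{2}
  = \frac{16\pi\,(\hbar c)^{2}}{1+\rho^{2}}
    \left.\frac{\mathrm{d}\sigma_{\mathrm{el}}}{\mathrm{d}t}\right|_{t \to 0},
\qquad
\frac{\mathrm{d}\sigma_{\mathrm{el}}}{\mathrm{d}t}
  \approx \left.\frac{\mathrm{d}\sigma_{\mathrm{el}}}{\mathrm{d}t}\right|_{t=0}
          \, e^{-B|t|}.
```

The first relation converts the fitted forward intercept into σtot; the second is the exponential parameterization used for the small-|t| fit.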
Abstract:
A portable Fourier transform spectrometer (FTS), model EM27/SUN, was deployed onboard the research vessel Polarstern to measure the column-average dry air mole fractions of carbon dioxide (XCO2) and methane (XCH4) by means of direct sunlight absorption spectrometry. We report on technical developments as well as data calibration and reduction measures required to achieve the targeted accuracy of fractions of a percent in retrieved XCO2 and XCH4 while operating the instrument under field conditions onboard the moving platform during a 6-week cruise on the Atlantic from Cape Town (South Africa, 34° S, 18° E; 5 March 2014) to Bremerhaven (Germany, 54° N, 19° E; 14 April 2014). We demonstrate that our solar tracker typically achieved a tracking precision of better than 0.05° toward the center of the sun throughout the ship cruise, which facilitates accurate XCO2 and XCH4 retrievals even under harsh ambient wind conditions. We define several quality filters that screen spectra, e.g., when the field of view was partially obstructed by ship structures or when the lines-of-sight crossed the ship exhaust plume. The measurements in clean oceanic air can be used to characterize a spurious air-mass dependency. After the campaign, deployment of the spectrometer alongside the TCCON (Total Carbon Column Observing Network) instrument at Karlsruhe, Germany, allowed for determining a calibration factor that makes the entire campaign record traceable to World Meteorological Organization (WMO) standards. Comparisons to observations of the GOSAT satellite and concentration fields modeled by the European Centre for Medium-Range Weather Forecasts (ECMWF) Copernicus Atmosphere Monitoring Service (CAMS) demonstrate that the observational setup is well suited to provide validation opportunities above the ocean and along interhemispheric transects.