855 results for SYSTEMATIC-ERROR CORRECTION
Abstract:
The error introduced in depolarisation measurements due to the convergence of the incident beam has been investigated theoretically as well as experimentally for the case of colloid scattering, where the particles are not small compared to the wavelength of light. Assuming the scattering particles to be anisotropic rods, it is shown that, when the incident unpolarised light is condensed by means of a lens with a circular aperture, the observed depolarisation ratio ϱ_u is given by ϱ_u = ϱ_u0 + (5/3)θ², where ϱ_u0 is the true depolarisation for incident parallel light, and θ the semi-angle of convergence. Appropriate formulae are derived when the incident beam is polarised vertically and horizontally. Experiments performed on six typical colloids support the theoretical conclusions. Other immediate consequences of the theory are also discussed.
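The quoted correction is simple enough to evaluate directly. A minimal sketch in Python, assuming θ is expressed in radians; the numerical values below are illustrative, not taken from the paper:

```python
import numpy as np

def observed_depolarisation(rho_u0, theta):
    """Convergence-corrected depolarisation ratio for unpolarised
    incident light, per the relation quoted in the abstract:
    rho_u = rho_u0 + (5/3) * theta**2, with theta the semi-angle
    of convergence in radians."""
    return rho_u0 + (5.0 / 3.0) * np.asarray(theta) ** 2

# Example: a true depolarisation of 0.05 observed through a lens
# converging at a 5-degree semi-angle (values chosen for illustration).
theta = np.deg2rad(5.0)
print(observed_depolarisation(0.05, theta))   # ~0.0627
```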
Abstract:
In a statistical downscaling model, it is important to remove the bias of General Circulation Model (GCM) outputs resulting from various assumptions about the geophysical processes. One conventional method for correcting such bias is standardisation, which is used prior to statistical downscaling to reduce systematic bias in the mean and variance of GCM predictors relative to observations or National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis data. A major drawback of standardisation is that, while it may reduce bias in the mean and variance of a predictor variable, it cannot correct bias in large-scale patterns of atmospheric circulation in GCMs (e.g. shifts in the dominant storm track relative to observed data) or unrealistic inter-variable relationships. When predicting hydrologic scenarios, such uncorrected bias must be dealt with; otherwise it propagates through the computations for subsequent years. In this study, a statistical method based on an equi-probability transformation is applied after downscaling to remove the bias in the predicted hydrologic variable relative to the observed hydrologic variable over a baseline period. The model is applied to the prediction of monsoon streamflow of the Mahanadi River in India from GCM-generated large-scale climatological data.
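The abstract does not spell out the transformation's exact form, but the standard equi-probability construction maps each predicted value to the observed-baseline value of equal cumulative probability (quantile mapping). A minimal sketch under that reading, with synthetic data standing in for the Mahanadi series:

```python
import numpy as np

def equiprobability_correction(predicted, observed_baseline, predicted_baseline):
    """Map each predicted value to the observed-baseline value with the
    same non-exceedance probability (quantile mapping)."""
    # Empirical CDF of the model's variable over the baseline period.
    ranks = np.searchsorted(np.sort(predicted_baseline), predicted, side="right")
    probs = ranks / (len(predicted_baseline) + 1.0)
    # Invert the observed baseline CDF at those probabilities.
    return np.quantile(observed_baseline, probs)

# Illustrative synthetic flows: downscaled output biased high by 20%.
rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=500.0, size=300)        # observed baseline
sim = 1.2 * rng.gamma(shape=2.0, scale=500.0, size=300)  # biased model baseline
future = 1.2 * rng.gamma(shape=2.0, scale=550.0, size=100)
corrected = equiprobability_correction(future, obs, sim)
```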
Abstract:
This letter proposes a simple tuning algorithm for digital deadbeat control based on error correlation. By injecting a square-wave reference input and calculating the correlation of the control error, a gain correction for deadbeat control is obtained. The proposed solution is simple, requires a short tuning time, and is suitable for different DC-DC converter topologies. Simulation and experimental results on synchronous buck converters confirm the properties of the proposed tuning algorithm.
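The abstract gives the idea but not the update law. The loop below is a toy illustration in which the correlation of the control error with the injected square wave drives a proportional gain correction; the one-sample-delay plant, the control law, and the step size are all assumptions, not the letter's algorithm.

```python
import numpy as np

def error_correlation(error, reference):
    """Correlation between the control error and the injected
    square-wave reference; nonzero correlation signals gain mismatch."""
    e = error - error.mean()
    r = reference - reference.mean()
    return float(np.dot(e, r) / e.size)

def run_loop(b_hat, b_true=0.8, n=400):
    """Toy one-sample-delay plant y[k] = b_true * u[k-1] under a
    deadbeat-style law u[k] = ref[k] / b_hat."""
    ref = np.where((np.arange(n) // 50) % 2 == 0, 1.0, -1.0)
    u = ref / b_hat
    y = np.concatenate(([0.0], b_true * u[:-1]))
    return ref - y, ref

b_hat = 1.5                       # deliberately mistuned gain
for _ in range(15):
    err, ref = run_loop(b_hat)
    b_hat -= 0.5 * error_correlation(err, ref) * b_hat
print(round(b_hat, 3))            # settles close to the true gain of 0.8
```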
Abstract:
Eleven GCMs (BCCR-BCM2.0, INGV-ECHAM4, GFDL2.0, GFDL2.1, GISS, IPSL-CM4, MIROC3, MRI-CGCM2, NCAR-PCM1, UKMO-HADCM3 and UKMO-HADGEM1) were evaluated for India (covering 73 grid points of 2.5° × 2.5°) for the climate variable 'precipitation rate' using five performance indicators: the correlation coefficient, normalised root mean square error, absolute normalised mean bias error, average absolute relative error and skill score. We used a nested bias-correction methodology to remove the systematic biases in the GCM simulations. The entropy method was employed to obtain weights for these five indicators. Ranks of the 11 GCMs were obtained through a multicriterion decision-making outranking method, PROMETHEE-2 (Preference Ranking Organisation Method for Enrichment Evaluation). An equal-weight scenario (assigning a weight of 0.2 to each indicator) was also used to rank the GCMs. An effort was also made to rank the GCMs for four river basins (Godavari, Krishna, Mahanadi and Cauvery) in peninsular India. The upper Malaprabha catchment in Karnataka, India, was chosen to demonstrate the entropy and PROMETHEE-2 methods. The Spearman rank correlation coefficient was employed to assess the association between the ranking patterns. Our results suggest that the ensemble of GFDL2.0, MIROC3, BCCR-BCM2.0, UKMO-HADCM3, MPIECHAM4 and UKMO-HADGEM1 is suitable for India. The proposed methodology can be extended to rank GCMs for any selected region.
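The entropy weighting step is standard and worth making concrete: indicators whose values vary more across the GCMs carry more information and receive larger weights. A minimal sketch (the payoff matrix is made up; the PROMETHEE-2 outranking step is omitted):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weights for an (alternatives x indicators) payoff matrix X
    with positive entries; more dispersed indicators get larger weights."""
    P = X / X.sum(axis=0)                # column-normalise to proportions
    k = 1.0 / np.log(X.shape[0])
    e = -k * (P * np.log(P)).sum(axis=0) # entropy of each indicator
    d = 1.0 - e                          # degree of diversification
    return d / d.sum()

# Illustrative 4-GCM x 5-indicator matrix (values invented for the demo).
X = np.array([[0.82, 0.30, 0.12, 0.25, 0.71],
              [0.76, 0.42, 0.18, 0.31, 0.64],
              [0.91, 0.28, 0.09, 0.22, 0.78],
              [0.69, 0.51, 0.26, 0.40, 0.55]])
print(entropy_weights(X).round(3))       # weights summing to 1
```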
Abstract:
Building on Item Response Theory, we introduce students' optimal behavior in multiple-choice tests. Our simulations indicate that the optimal penalty is relatively high: although correction for guessing discriminates against risk-averse subjects, this effect is small compared with the measurement error that the penalty prevents. This result obtains when knowledge is binary or partial, under different normalizations of the score, when risk aversion is related to knowledge, and when there is a pass-fail break point. We also find that the mean degree of difficulty should be close to the mean level of knowledge and that the variance of difficulty should be high.
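To make the trade-off concrete, here is a small sketch of formula scoring and of why a penalty deters risk-averse examinees more than risk-neutral ones. The classical penalty 1/(m-1) and the expected-score rule are standard; the CARA utility used to model risk aversion is a hypothetical choice, not the paper's specification:

```python
import numpy as np

# Formula scoring: +1 for a correct answer, -penalty for a wrong one,
# 0 for an omission. A risk-neutral examinee answers whenever the
# expected score of answering exceeds zero.
def expected_answer_score(p_correct, penalty):
    return p_correct * 1.0 - (1.0 - p_correct) * penalty

# With m options and pure guessing (p = 1/m), the classical
# correction-for-guessing penalty 1/(m - 1) makes blind guessing fair:
m = 4
print(expected_answer_score(1.0 / m, 1.0 / (m - 1)))   # 0.0

# A risk-averse examinee (hypothetical CARA utility) omits at some
# probabilities where the risk-neutral examinee would still answer.
def cara_utility(x, a=1.0):
    return -np.exp(-a * x)

def answers(p_correct, penalty, a=1.0):
    eu = (p_correct * cara_utility(1.0, a)
          + (1.0 - p_correct) * cara_utility(-penalty, a))
    return eu > cara_utility(0.0, a)

print(answers(0.30, 1.0 / 3.0))   # False: omits where risk-neutral answers
```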
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
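The abstract describes the "noise filters" only as attenuating each harmonic as a function of its signal-to-noise ratio; a Wiener-style gain SNR/(1 + SNR) is one standard realisation of that idea. A sketch under that assumption, with a flat digitisation-noise power level standing in for the paper's noise estimate:

```python
import numpy as np

def noise_filter(record, noise_power):
    """Attenuate each harmonic with a Wiener-style gain SNR/(1 + SNR),
    where the SNR at each frequency is estimated from the measured
    power and an assumed flat digitisation-noise power level."""
    spec = np.fft.rfft(record)
    snr = np.maximum(np.abs(spec) ** 2 / noise_power - 1.0, 0.0)
    return np.fft.irfft(snr / (1.0 + snr) * spec, n=len(record))

# Synthetic test record: a decaying sinusoid plus white digitisation noise.
dt = 0.01
t = np.arange(0.0, 20.0, dt)
clean = np.exp(-0.1 * t) * np.sin(2 * np.pi * 1.5 * t)
noisy = clean + 0.05 * np.random.default_rng(1).standard_normal(t.size)
filtered = noise_filter(noisy, noise_power=t.size * 0.05 ** 2)
```

As the abstract notes, this kind of per-harmonic attenuation helps little at the lowest frequencies, where drifts in the integrated velocities and displacements originate.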
Abstract:
The problem motivating this investigation is that of pure axisymmetric torsion of an elastic shell of revolution. The analysis is carried out within the framework of the three-dimensional linear theory of elastic equilibrium for homogeneous, isotropic solids. The objective is the rigorous estimation of errors involved in the use of approximations based on thin shell theory.
The underlying boundary value problem is one of Neumann type for a second order elliptic operator. A systematic procedure for constructing pointwise estimates for the solution and its first derivatives is given for a general class of second-order elliptic boundary-value problems which includes the torsion problem as a special case.
The method used here rests on the construction of “energy inequalities” and on the subsequent deduction of pointwise estimates from the energy inequalities. This method removes certain drawbacks characteristic of pointwise estimates derived in some investigations of related areas.
Special interest is directed towards thin shells of constant thickness. The method enables us to estimate the error involved in a stress analysis in which the exact solution is replaced by an approximate one, and thus provides us with a means of assessing the quality of approximate solutions for axisymmetric torsion of thin shells.
Finally, the results of the present study are applied to the stress analysis of a circular cylindrical shell, and the quality of stress estimates derived here and those from a previous related publication are discussed.
Abstract:
Body length measurement is an important part of growth, condition, and mortality analyses of larval and juvenile fish. If the measurements are not accurate (i.e., do not reflect real fish length), results of subsequent analyses may be affected considerably (McGurk, 1985; Fey, 1999; Porter et al., 2001). The primary cause of error in fish length measurement is shrinkage related to collection and preservation (Theilacker, 1980; Hay, 1981; Butler, 1992; Fey, 1999). The magnitude of shrinkage depends on many factors, namely the duration and speed of the collection tow, abundance of other planktonic organisms in the sample (Theilacker, 1980; Hay, 1981; Jennings, 1991), the type and strength of the preservative (Hay, 1982), and the species of fish (Jennings, 1991; Fey, 1999). Further, fish size affects shrinkage (Fowler and Smith, 1983; Fey, 1999, 2001), indicating that live length should be modeled as a function of preserved length (Pepin et al., 1998; Fey, 1999).
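Modelling live length as a function of preserved length amounts to fitting a shrinkage-correction curve on paired measurements. A minimal sketch with a linear form; both the functional form and the sample values are illustrative, since the cited studies fit species- and size-specific models:

```python
import numpy as np

# Paired measurements of the same specimens before and after preservation
# (values invented for the demo).
preserved = np.array([4.8, 5.5, 6.1, 7.0, 7.9, 8.6])   # mm, after preservation
live      = np.array([5.1, 5.9, 6.5, 7.5, 8.4, 9.2])   # mm, before preservation

slope, intercept = np.polyfit(preserved, live, deg=1)

def correct_length(preserved_length):
    """Back-calculate live length from preserved length."""
    return slope * preserved_length + intercept

print(correct_length(6.5))   # estimated live length of a 6.5 mm specimen
```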
Abstract:
Photonic crystal devices with feature sizes of a few hundred nanometers are often fabricated by electron beam lithography. The proximity effect, stitching error and resist profiles have a significant influence on the pattern quality, and therefore determine the optical properties of the devices. In this paper, detailed analyses of and simple solutions to these problems are presented. The proximity effect is corrected by introducing a compensating dose. The influence of the stitching error is alleviated by replacing the original access waveguides with taper-added waveguides, and the taper parameters giving the optimal choice are also discussed. It is demonstrated experimentally that patterns exposed with different doses have almost the same edge profiles in the resist for the same development time, and that optimized etching conditions can markedly improve the wall angle of the holes in the substrate.
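The paper only states that a compensating dose is introduced; one generic way to compute such a dose is a fixed-point iteration against a two-Gaussian proximity model. The sketch below is that generic scheme, not the paper's specific correction; the backscatter range, ratio, and update rule are all assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def compensating_dose(target, beta_px, eta, iters=30):
    """Iteratively adjust the dose so the deposited energy matches the
    target pattern under a simplified proximity model: forward exposure
    plus a single backscatter Gaussian of range beta_px and ratio eta.
    Only exposed (target > 0) pixels are corrected."""
    dose = target.astype(float).copy()
    for _ in range(iters):
        deposited = (dose + eta * gaussian_filter(dose, beta_px)) / (1.0 + eta)
        dose *= target / np.maximum(deposited, 1e-12)
    return dose

# Toy 64 x 64 pattern: a dense grating, which backscatter would overdose.
target = np.zeros((64, 64))
target[:, ::4] = 1.0
dose = compensating_dose(target, beta_px=8.0, eta=0.7)
```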
Abstract:
The ground-state properties of Hs nuclei are studied in the framework of relativistic mean-field theory. We find that the relatively more stable isotopes are located on the proton-rich side of the isotopic chain. The last stable nucleus near the proton drip line is probably ²⁵⁵Hs. The alpha-decay half-lives of Hs nuclei are predicted, and together with the evaluation of the spontaneous-fission half-lives it is shown that the nuclei possibly stable against spontaneous fission are ²⁶³⁻²⁷⁴Hs. This coincides with their larger binding energies per nucleon. If ²⁷¹⁻²⁷⁴Hs can be synthesized and identified, only those nuclei from the upper Z = 118 isotopic chain that are lighter than ²⁹⁴118, and the nuclei in the corresponding alpha-decay chains, lead to Hs nuclei. The most stable unknown Hs nucleus is ²⁶⁸Hs. A density-dependent delta-interaction pairing is used to improve the BCS pairing correction, which results in more reasonable single-particle energy-level distributions and nucleon occupation probabilities. It is shown that the properties of nuclei in the superheavy region can be described with this interaction.
Abstract:
Target transformation factor analysis was used to correct spectral interference in inductively coupled plasma atomic emission spectrometry (ICP-AES) for the determination of rare earth impurities in high-purity thulium oxide. The data matrix was constructed from pure-component and mixture vectors together with a background vector. A method based on an error evaluation function was proposed to optimize the peak position, so that the influence of peak-position shifts between spectral scans on the determination was eliminated or reduced. Satisfactory results were obtained using factor analysis together with the proposed peak-position optimization method.
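The abstract does not give the error evaluation function; a generic stand-in is to shift the measured scan over a grid of candidate wavelength offsets and keep the offset whose least-squares fit to the component vectors leaves the smallest residual. A minimal sketch under that assumption:

```python
import numpy as np

def best_shift(scan, components, shifts):
    """Return the candidate wavelength offset minimising the residual of
    a least-squares fit of 'scan' to the component vectors (rows of
    'components': pure, interferent and background spectra)."""
    grid = np.arange(len(scan), dtype=float)
    errors = []
    for s in shifts:
        shifted = np.interp(grid + s, grid, scan)
        coef, *_ = np.linalg.lstsq(components.T, shifted, rcond=None)
        errors.append(np.linalg.norm(components.T @ coef - shifted))
    return shifts[int(np.argmin(errors))]

# Synthetic check: a scan displaced by +1.6 points from the model grid.
x = np.arange(200.0)
comps = np.vstack([np.exp(-((x - 90) / 6) ** 2),    # analyte line
                   np.exp(-((x - 110) / 8) ** 2),   # interferent line
                   np.ones_like(x)])                # background
scan = np.interp(x - 1.6, x, comps[0] + 0.5 * comps[1] + 0.1)
print(best_shift(scan, comps, shifts=np.linspace(-3, 3, 61)))  # ~1.6
```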
Abstract:
The present paper reports some definite evidence for the significance of wavelength positioning accuracy in multicomponent analysis techniques for the correction of line interferences in inductively coupled plasma atomic emission spectrometry (ICP-AES). Using scanning spectrometers commercially available today, a large relative error, Δ_A, may occur in the estimated analyte concentration, owing to wavelength positioning errors, unless a procedure for data processing can eliminate the problem of optical instability. The emphasis is on the effect of the positioning error (δλ) in a model scan, which is evaluated theoretically and determined experimentally. A quantitative relation between Δ_A and δλ, the peak distance, and the effective widths of the analysis and interfering lines is established under the assumption of Gaussian line profiles. The agreement between calculated and experimental Δ_A is also illustrated. The Δ_A originating from δλ is independent of the net analyte/interferent signal ratio; this contrasts with the situation for the positioning error (dλ) in a sample scan, where Δ_A decreases with an increase in the ratio. Compared with dλ, the effect of δλ is generally less significant.
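The closed-form relation is not quoted in the abstract, but the effect can be illustrated numerically: fit a two-line Gaussian sample with a model whose interfering line is misplaced by δλ and read off the error in the analyte coefficient. The least-squares construction below is a generic stand-in for the paper's MCA procedure, not its actual formula:

```python
import numpy as np

def gaussian(x, x0, w):
    # 'w' plays the role of the effective width; its exact definition
    # in the paper is an assumption here.
    return np.exp(-((x - x0) / w) ** 2)

def relative_error(delta_lam, d, w_a, w_i, ratio):
    """Relative error in the analyte estimate when the interfering line
    in the model scan is misplaced by delta_lam. 'd' is the peak
    distance and 'ratio' the interferent/analyte peak-signal ratio."""
    span = 5.0 * max(w_a, w_i)
    x = np.linspace(-span, span + d, 2001)
    sample = gaussian(x, 0.0, w_a) + ratio * gaussian(x, d, w_i)
    model = np.column_stack([gaussian(x, 0.0, w_a),
                             gaussian(x, d + delta_lam, w_i)])
    coef, *_ = np.linalg.lstsq(model, sample, rcond=None)
    return coef[0] - 1.0   # true analyte coefficient is 1

# Example: equal effective widths, peak distance of one width,
# interferent five times stronger than the analyte.
print(relative_error(delta_lam=0.02, d=0.05, w_a=0.05, w_i=0.05, ratio=5.0))
```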
Abstract:
The present paper deals with the evaluation of the relative error (Δ_A) in estimated analyte concentrations originating from the wavelength positioning error in a sample scan when multicomponent analysis (MCA) techniques are used for correcting line interferences in inductively coupled plasma atomic emission spectrometry. In the theoretical part, a quantitative relation of Δ_A to the extent of line overlap, the bandwidth and the magnitude of the positioning error is developed under the assumption of Gaussian line profiles. Measurements of eleven samples covering various typical line interferences showed that the calculated Δ_A generally agrees well with the experimental one. An expression for the true detection limit associated with MCA techniques was thus formulated. With MCA techniques, the determinations of the analyte and interferent concentrations depend on each other, whereas with conventional correction techniques, such as the three-point method, the estimate of the interfering signals is independent of the analyte signals. Therefore, a given positioning error results in a larger Δ_A, and hence a higher true detection limit, in the case of MCA techniques than in the case of conventional correction methods, although the latter can be a reasonable approximation of the former when the peak distance expressed in the effective width of the interfering line is larger than 0.4. In the light of the effect of wavelength positioning errors, MCA techniques have no advantage over conventional correction methods unless the former can bring an essential reduction of the positioning error.
Abstract:
Lee, M., Barnes, D. P., Hardy, N. (1985). Research into error recovery for sensory robots. Sensor Review, 5 (4), 194-197.
Abstract:
In most diffusion tensor imaging (DTI) studies, images are acquired with either a partial-Fourier or a parallel partial-Fourier echo-planar imaging (EPI) sequence, in order to shorten the echo time and increase the signal-to-noise ratio (SNR). However, eddy currents induced by the diffusion-sensitizing gradients can often lead to a shift of the echo in k-space, resulting in three distinct types of artifacts in partial-Fourier DTI. Here, we present an improved DTI acquisition and reconstruction scheme, capable of generating high-quality and high-SNR DTI data without eddy current-induced artifacts. This new scheme consists of three components, respectively addressing the three distinct types of artifacts. First, a k-space energy-anchored DTI sequence is designed to recover eddy current-induced signal loss (i.e., Type 1 artifact). Second, a multischeme partial-Fourier reconstruction is used to eliminate artificial signal elevation (i.e., Type 2 artifact) associated with the conventional partial-Fourier reconstruction. Third, a signal intensity correction is applied to remove artificial signal modulations due to eddy current-induced erroneous T2*-weighting (i.e., Type 3 artifact). These systematic improvements will greatly increase the consistency and accuracy of DTI measurements, expanding the utility of DTI in translational applications where quantitative robustness is much needed.
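For readers unfamiliar with the term, conventional partial-Fourier reconstruction exploits the conjugate (Hermitian) symmetry of the k-space of a real-valued object to fill in the unacquired samples. The 1-D sketch below shows only that textbook idea, to make the term concrete; it is not the paper's multischeme reconstruction:

```python
import numpy as np

def hermitian_fill(k, n_acq):
    """Fill unacquired k-space samples of a real-valued 1-D object by
    Hermitian symmetry, X[i] = conj(X[(N - i) % N]). Assumes the first
    n_acq samples (numpy FFT ordering, n_acq > N/2) were measured and
    the rest zero-filled."""
    k = k.copy()
    n = len(k)
    for i in range(n_acq, n):
        k[i] = np.conj(k[(n - i) % n])
    return np.fft.ifft(k).real

# Round trip on a synthetic real profile: drop 3/8 of k-space, refill.
x = np.convolve(np.random.default_rng(2).random(64),
                np.ones(5) / 5, mode="same")
k = np.fft.fft(x)
n_acq = int(0.625 * len(x))
k[n_acq:] = 0.0
print(np.allclose(hermitian_fill(k, n_acq), x))   # True for a real object
```

The eddy current-induced echo shift described in the abstract breaks exactly this symmetry assumption, which is why the conventional reconstruction produces the Type 1-3 artifacts the paper's three components address.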