920 results for averaging error.


Relevance: 60.00%

Abstract:

An rms measuring device is a nonlinear device built from linear and nonlinear building blocks. The performance of rms measurement is influenced by several factors: (1) the signal characteristics, (2) the measurement technique used, and (3) the device characteristics. RMS measurement is not simple, particularly when the signals are complex and unknown. The problem of rms measurement on high crest-factor signals is fully discussed and a solution to this problem is presented in this thesis. The problem of rms measurement is systematically analysed and found to involve mainly three types of error: (1) amplitude or waveform error, (2) frequency error, and (3) averaging error. Various rms measurement techniques are studied and compared. On the basis of this study, rms measurement is reclassified into three categories: (1) waveform-error-free measurement, (2) high-frequency-error measurement, and (3) low-frequency error-free measurement. In modern digital sampled-data systems the signals are complex, and waveform-error-free rms measurement is highly desirable. Among the three basic blocks of an rms measuring device, the squarer is the most important one. A squaring technique is selected that permits shaping of the squarer error characteristic in such a way as to achieve waveform-error-free rms measurement. The squarer is designed, fabricated and tested. A hybrid rms measurement using an analog rms computing device and a digital display combines the speed of analog techniques with the resolution and ease of measurement of digital techniques. An A/D converter is modified to perform the square-rooting operation. A 10-V rms voltmeter using the developed rms detector is fabricated and tested. Chapters two, three and four analyse the problems involved in rms measurement and present a comparative study of rms computing techniques and devices. The fifth chapter gives the details of the developed rms detector that permits waveform-error-free rms measurement. The sixth chapter enumerates the highlights of the thesis and suggests a list of future projects.
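As context for the error taxonomy above, here is a minimal sketch (not taken from the thesis; the signals are illustrative) of the true rms value and the crest factor that stresses a squarer, computed directly from samples:

```python
import numpy as np

# Minimal sketch (not from the thesis; signals are illustrative): true rms
# and crest factor computed directly from samples of a waveform.
def rms(x):
    return np.sqrt(np.mean(np.square(x)))

def crest_factor(x):
    return np.max(np.abs(x)) / rms(x)

t = np.linspace(0.0, 1.0, 10_000, endpoint=False)
sine = np.sin(2 * np.pi * 50 * t)                           # crest factor ~ 1.41
pulses = (np.sin(2 * np.pi * 50 * t) > 0.99).astype(float)  # narrow pulses, high crest factor

for name, sig in [("sine", sine), ("pulse train", pulses)]:
    print(f"{name}: rms = {rms(sig):.4f}, crest factor = {crest_factor(sig):.2f}")
```

A squarer whose error characteristic stays small over the full peak range keeps the waveform error low even for signals like the pulse train, which is the design goal pursued in the thesis.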

Relevance: 30.00%

Abstract:

This paper develops methods for Stochastic Search Variable Selection (currently popular with regression and Vector Autoregressive models) for Vector Error Correction models where there are many possible restrictions on the cointegration space. We show how this allows the researcher to begin with a single unrestricted model and either do model selection or model averaging in an automatic and computationally efficient manner. We apply our methods to a large UK macroeconomic model.
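For readers unfamiliar with SSVS, the usual machinery (George–McCulloch style, shown schematically here; the paper's exact prior over restrictions on the cointegration space differs) places a spike-and-slab mixture prior on each coefficient:

\[
\beta_j \mid \gamma_j \;\sim\; (1-\gamma_j)\,N(0,\tau_{0j}^2) \;+\; \gamma_j\,N(0,\tau_{1j}^2),
\qquad \gamma_j \in \{0,1\},\quad \tau_{0j} \ll \tau_{1j}.
\]

Gibbs sampling over the indicators \(\gamma\) visits restricted models in proportion to their posterior probability, so model selection (pick the modal \(\gamma\)) and model averaging (average over sampled \(\gamma\)) come out of the same run.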

Relevance: 30.00%

Abstract:

Blood pressure (BP) is a heritable, quantitative trait with intraindividual variability and susceptibility to measurement error. Genetic studies of BP generally use single-visit measurements and thus cannot remove variability occurring over months or years. We leveraged the idea that averaging BP measured across time would improve phenotypic accuracy and thereby increase statistical power to detect genetic associations. We studied systolic BP (SBP), diastolic BP (DBP), mean arterial pressure (MAP), and pulse pressure (PP) averaged over multiple years in 46,629 individuals of European ancestry. We identified 39 trait-variant associations across 19 independent loci (p < 5 × 10^-8); five associations (in four loci) uniquely identified by our long-term average (LTA) analyses included those of SBP and MAP at 2p23 (rs1275988, near KCNK3), DBP at 2q11.2 (rs7599598, in FER1L5), and PP at 6p21 (rs10948071, near CRIP3) and 7p13 (rs2949837, near IGFBP3). Replication analyses conducted in cohorts with single-visit BP data showed positive replication of the associations at nominal significance (p < 0.05). We estimated a 20% gain in statistical power with LTA as compared to single-visit BP association studies. Using LTA analysis, we identified genetic loci influencing BP. LTA might be one way of increasing the power of genetic associations for continuous traits in extant samples for other phenotypes that are measured serially over time.
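The power gain quoted above follows a standard measurement-error argument (our summary, not the paper's derivation): averaging k visits shrinks the within-person noise and raises the reliability of the phenotype,

\[
\rho_k \;=\; \frac{\sigma_b^2}{\sigma_b^2 + \sigma_w^2 / k},
\]

where \(\sigma_b^2\) is the between-person (stable) variance and \(\sigma_w^2\) the visit-to-visit variance. An estimated genetic effect is attenuated by roughly \(\rho_k\), so the association test's noncentrality, and hence power, grows with the number of averaged visits.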

Relevance: 30.00%

Abstract:

Excess nutrient loads carried by streams and rivers are a great concern for environmental resource managers. In agricultural regions, excess loads are transported downstream to receiving water bodies, potentially causing algal blooms, which can lead to numerous ecological problems. To better understand nutrient load transport, and to develop appropriate water management plans, it is important to have accurate estimates of annual nutrient loads. This study used a Monte Carlo sub-sampling method and error-corrected statistical models to estimate annual nitrate-N loads from two watersheds in central Illinois. The performance of three load estimation methods (the seven-parameter log-linear model, the ratio estimator, and the flow-weighted averaging estimator), applied at one-, two-, four-, six-, and eight-week sampling frequencies, was compared. Five error correction techniques (the existing composite method and four new techniques developed in this study) were applied to each combination of sampling frequency and load estimation method. On average, the most accurate error correction technique (proportional rectangular) produced load estimates 15% and 30% more accurate than the most accurate uncorrected load estimation method (the ratio estimator) for the two watersheds. Using error correction methods, it is possible to design more cost-effective monitoring plans by achieving the same load estimation accuracy with fewer observations. Finally, the optimum combinations of monitoring threshold and sampling frequency that minimize the number of samples required to achieve specified levels of accuracy in load estimation were determined. For one- to three-week sampling frequencies, combined threshold/fixed-interval monitoring approaches produced the best outcomes, while fixed-interval-only approaches produced the most accurate results for four- to eight-week sampling frequencies.
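Of the three load estimators compared, the ratio estimator is the simplest to illustrate. Below is a minimal sketch of its basic, uncorrected form (variable names are our own); production implementations such as Beale's ratio estimator add a bias-correction factor that this sketch omits:

```python
import numpy as np

# Minimal sketch (our variable names, not the paper's code): the ratio
# estimator for annual nitrate-N load from sparse concentration samples
# and a continuous daily flow record.
def ratio_estimator_load(c_sampled, q_sampled, q_all):
    """c_sampled: concentrations on sampled days (e.g., mg/L);
    q_sampled: flows on those same days;
    q_all: daily flows for the whole year (units consistent with c_sampled)."""
    mean_sampled_load = np.mean(c_sampled * q_sampled)  # mean daily load on sampled days
    mean_sampled_flow = np.mean(q_sampled)              # mean flow on sampled days
    # scale the load/flow ratio by the total annual flow volume
    return (mean_sampled_load / mean_sampled_flow) * np.sum(q_all)
```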

Relevance: 30.00%

Abstract:

We develop energy-norm a-posteriori error estimation for hp-version discontinuous Galerkin (DG) discretizations of elliptic boundary-value problems on 1-irregularly, isotropically refined affine hexahedral meshes in three dimensions. We derive a reliable and efficient indicator for the error measured in the natural energy norm. The ratio of the efficiency and reliability constants is independent of the local mesh sizes and depends only weakly on the polynomial degrees. In our analysis we make use of an hp-version averaging operator in three dimensions, which we explicitly construct and analyze. We use our error indicator in an hp-adaptive refinement algorithm and illustrate its practical performance in a series of numerical examples. Our numerical results indicate that exponential rates of convergence are achieved for problems with smooth solutions, as well as for problems with isotropic corner singularities.
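Schematically, reliability and efficiency of an indicator \(\eta\) take the usual form (notation assumed here, following the standard a-posteriori framework, not copied from the paper):

\[
\|u - u_{hp}\|_E \;\le\; C_{\mathrm{rel}}\,\eta,
\qquad
\eta \;\le\; C_{\mathrm{eff}}\,\bigl(\|u - u_{hp}\|_E + \mathrm{osc}(f)\bigr),
\]

and the result above states that the ratio \(C_{\mathrm{eff}}/C_{\mathrm{rel}}\) is independent of the local mesh sizes and only weakly dependent on the polynomial degrees.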

Relevance: 20.00%

Abstract:

Background: Genome-wide association studies (GWAS) are becoming the approach of choice to identify genetic determinants of complex phenotypes and common diseases. The astonishing amount of generated data and the use of distinct genotyping platforms with variable genomic coverage still pose analytical challenges. Imputation algorithms combine directly genotyped marker information with the haplotypic structure of the population of interest to infer poorly genotyped or missing markers, and are considered a near-zero-cost approach to allow the comparison and combination of data generated in different studies. Several reports have stated that imputed markers have overall acceptable accuracy, but no published report has performed a pairwise comparison of imputed and empirical association statistics for a complete set of GWAS markers. Results: In this report we identified a total of 73 imputed markers that yielded a nominally statistically significant association at P < 10^-5 for type 2 diabetes mellitus and compared them with results obtained from empirical allele frequencies. Interestingly, despite their overall high correlation, association statistics based on imputed frequencies were discordant for 35 of the 73 (47%) associated markers, considerably inflating the type I error rate of imputed markers. We comprehensively tested several quality thresholds, the haplotypic structure underlying imputed markers, and the use of flanking markers as predictors of inaccurate association statistics derived from imputed markers. Conclusions: Our results suggest that association statistics from imputed markers in specific MAF (minor allele frequency) ranges, located in weak linkage disequilibrium blocks, or strongly deviating from local patterns of association are prone to inflated false positive association signals. The present study highlights the potential of imputation procedures and proposes simple procedures for selecting the best imputed markers for follow-up genotyping studies.
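As a toy version of the pairwise comparison described above (our sketch; the study's actual tests and thresholds are not reproduced here), one can compute the same one-degree-of-freedom allelic statistic twice per marker, once from hard-called genotypes and once from imputed dosages, and flag markers where the two p-values diverge:

```python
import numpy as np
from scipy.stats import chi2

# Toy sketch (our code, not the study's pipeline): 1-df test for an
# allele-frequency difference between cases and controls, usable on both
# hard-called genotypes (0/1/2) and imputed dosages (continuous 0..2).
def allelic_test(dosages, case_mask):
    n_case = case_mask.sum()
    n_ctrl = (~case_mask).sum()
    p_case = dosages[case_mask].sum() / (2 * n_case)
    p_ctrl = dosages[~case_mask].sum() / (2 * n_ctrl)
    p_all = dosages.sum() / (2 * (n_case + n_ctrl))
    # variance of the frequency difference under the null hypothesis
    var = p_all * (1 - p_all) * (1 / (2 * n_case) + 1 / (2 * n_ctrl))
    stat = (p_case - p_ctrl) ** 2 / var
    return stat, chi2.sf(stat, df=1)
```

Running this once on hard calls and once on dosages, and flagging markers whose two p-values differ by orders of magnitude, captures the spirit of the imputed-versus-empirical comparison.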

Relevance: 20.00%

Abstract:

In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is required because the power system equations are strongly correlated with one another, and as a consequence part of the measurement error is masked. For that purpose an innovation index (II), which quantifies the amount of new information a measurement contains, is proposed. A critical measurement is the limiting case of a measurement with low II: it has a zero II and its error is totally masked. In other words, that measurement brings no innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered; then the total gross error of that measurement is composed. Instead of the classical normalised measurement residual amplitude, the corresponding normalised composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
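For reference, the classical normalised residual that the composed residual replaces is (standard WLS state-estimation notation, assumed here):

\[
r_i^{N} = \frac{\lvert z_i - h_i(\hat{x}) \rvert}{\sqrt{\Omega_{ii}}},
\qquad
\Omega = R - H\,(H^{\top} R^{-1} H)^{-1} H^{\top},
\]

where \(R\) is the measurement error covariance, \(H\) the measurement Jacobian, and \(\Omega\) the residual covariance. For a critical measurement \(\Omega_{ii} = 0\), which is exactly the zero-innovation limit discussed above.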

Relevance: 20.00%

Abstract:

With the relentless quest for improved performance driving ever tighter manufacturing tolerances, machine tools are sometimes unable to meet the desired requirements. One option for improving the tolerances of machine tools is to compensate for their errors. Among all possible sources of machine tool error, thermally induced errors are, in general, the most important for newer machines. The present work demonstrates the evaluation and modelling of the behaviour of the thermal errors of a CNC cylindrical grinding machine during its warm-up period.
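A common way to model warm-up drift, shown here as an assumption rather than the paper's stated model, is a first-order exponential rise fitted to measured drift; the data below are illustrative only:

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed model (common in the thermal-error literature; the paper's actual
# model may differ): first-order exponential drift during warm-up,
#   drift(t) = delta_inf * (1 - exp(-t / tau)).
def warmup_drift(t, delta_inf, tau):
    return delta_inf * (1.0 - np.exp(-t / tau))

# Illustrative measurements only (minutes since cold start, drift in um);
# these numbers are NOT from the paper.
t_min = np.array([0.0, 10.0, 20.0, 40.0, 60.0, 90.0, 120.0])
drift_um = np.array([0.0, 4.1, 7.0, 10.8, 12.9, 14.4, 15.1])

params, _ = curve_fit(warmup_drift, t_min, drift_um, p0=(15.0, 30.0))
print(f"steady-state drift ~ {params[0]:.1f} um, time constant ~ {params[1]:.0f} min")
```

Once fitted, the model predicts the thermally induced error at any point of the warm-up, which is what a compensation scheme feeds back to the controller.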

Relevance: 20.00%

Abstract:

We describe a one-time signature scheme based on the hardness of the syndrome decoding problem, and prove it secure in the random oracle model. Our proposal can be instantiated on general linear error correcting codes, rather than restricted families like alternant codes for which a decoding trapdoor is known to exist. (C) 2010 Elsevier Inc. All rights reserved.
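For context (standard coding-theory definitions, not the paper's construction): given a parity-check matrix \(H\) of a binary \([n, k]\) code, the syndrome of an error pattern \(e\) is

\[
s = H e^{\top}, \qquad H \in \mathbb{F}_2^{(n-k) \times n},
\]

and syndrome decoding, finding a minimum-weight \(e\) consistent with a given \(s\), is NP-hard in general, which is the hardness assumption behind the scheme.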

Relevance: 20.00%

Abstract:

The purpose of this article is to present a quantitative analysis of the human failure contribution in the collision and/or grounding of oil tankers, considering the recommendation of the "Guidelines for Formal Safety Assessment" of the International Maritime Organization. Initially, the employed methodology is presented, emphasizing the use of the technique for human error prediction to reach the desired objective. Later, this methodology is applied to a ship operating on the Brazilian coast and, thereafter, the procedure to isolate the human actions with the greatest potential to reduce the risk of an accident is described. Finally, the management and organizational factors presented in the "International Safety Management Code" are associated with these selected actions. An operator will therefore be able to decide where to act in order to obtain an effective reduction in the probability of accidents. Even though this study does not present a new methodology, it can be considered a reference in human reliability analysis for the maritime industry, which, in spite of having some guides for risk analysis, has few studies on human reliability effectively applied to the sector.

Relevance: 20.00%

Abstract:

The general objective of this study was to evaluate the ordered weighted averaging (OWA) method, integrated with a geographic information system (GIS), in the definition of priority areas for forest conservation in a Brazilian river basin, aiming to increase regional biodiversity. We demonstrate how one can obtain a range of alternatives by applying OWA, including the one obtained by the weighted linear combination method, and also the use of the analytic hierarchy process (AHP) to structure the decision problem and to assign importance to each criterion. The criteria considered important for this study were: proximity to forest patches; proximity among forest patches with larger core areas; proximity to surface water; distance from roads; distance from urban areas; and vulnerability to erosion. OWA requires two sets of weights: the relative criterion importance weights and the order weights. Thus, a participatory technique was used to define the criteria set and the criterion importance (based on AHP). To obtain the second set of weights we considered the influence of each criterion, as well as the importance of each one, on this decision-making process. The sensitivity analysis indicated coherence among the criterion importance weights, the order weights, and the solution. According to this analysis, only the proximity to surface water criterion is not important for identifying priority areas for forest conservation. Finally, we highlight that the OWA method is flexible, easy to implement and, above all, facilitates a better understanding of the alternative land-use suitability patterns. (C) 2008 Elsevier B.V. All rights reserved.
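For readers unfamiliar with OWA, the basic aggregation is (standard definition; the GIS variant additionally folds in the criterion importance weights):

\[
\mathrm{OWA}(a_1,\ldots,a_n) \;=\; \sum_{j=1}^{n} w_j\, b_j,
\qquad w_j \ge 0,\;\; \sum_j w_j = 1,
\]

where \(b_j\) is the \(j\)-th largest of the criterion scores \(a_i\). Choosing the order weights \(w\) moves the operator continuously between min (full AND), a weighted linear combination (equal order weights), and max (full OR), which is what generates the range of alternatives mentioned above.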

Relevance: 20.00%

Abstract:

We show that quantum feedback control can be used as a quantum-error-correction process for errors induced by a weak continuous measurement. In particular, when the error model is restricted to one, perfectly measured, error channel per physical qubit, quantum feedback can act to perfectly protect a stabilizer codespace. Using the stabilizer formalism we derive an explicit scheme, involving feedback and an additional constant Hamiltonian, to protect an (n-1)-qubit logical state encoded in n physical qubits. This works for both Poisson (jump) and white-noise (diffusion) measurement processes. Universal quantum computation is also possible in this scheme. As an example, we show that detected-spontaneous-emission error correction with a driving Hamiltonian can greatly reduce the amount of redundancy required to protect a state compared with what had previously been postulated [e.g., Alber et al., Phys. Rev. Lett. 86, 4402 (2001)].
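A toy instance of such a code (our example, for orientation; the paper treats the general n-qubit construction) is n = 2 with a single stabilizer generator:

\[
S = \langle X_1 X_2 \rangle, \qquad
\dim\{\,|\psi\rangle : X_1 X_2 |\psi\rangle = |\psi\rangle\,\} = 2,
\]

so one logical qubit lives in two physical qubits. A measured error operator \(Z_1\) anticommutes with \(X_1 X_2\), so each detected event flips the stabilizer eigenvalue, and the feedback controller can apply \(Z_1\) to return the state to the codespace.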

Relevance: 20.00%

Abstract:

This paper presents a method for estimating the posterior probability density of the cointegrating rank of a multivariate error correction model. A second contribution is the careful elicitation of the prior for the cointegrating vectors derived from a prior on the cointegrating space. This prior obtains naturally from treating the cointegrating space as the parameter of interest in inference and overcomes problems previously encountered in Bayesian cointegration analysis. Using this new prior and Laplace approximation, an estimator for the posterior probability of the rank is given. The approach performs well compared with information criteria in Monte Carlo experiments. (C) 2003 Elsevier B.V. All rights reserved.
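Schematically (our notation, not reproduced from the paper), the estimator combines Bayes' rule over the rank with a Laplace approximation to each rank's marginal likelihood:

\[
p(r \mid y) \;\propto\; p(r)\, p(y \mid r),
\qquad
\log p(y \mid r) \;\approx\; \log p(y \mid \hat{\theta}_r, r) + \log p(\hat{\theta}_r \mid r)
+ \tfrac{d_r}{2}\log 2\pi - \tfrac{1}{2}\log\bigl|{-H(\hat{\theta}_r)}\bigr|,
\]

where \(\hat{\theta}_r\) is the posterior mode of the rank-\(r\) model, \(d_r\) its parameter dimension, and \(H\) the Hessian of the log posterior at the mode.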

Relevance: 20.00%

Abstract:

Analysis of a major multi-site epidemiologic study of heart disease has required estimation of the pairwise correlation of several measurements across sub-populations. Because the measurements from each sub-population were subject to sampling variability, the Pearson product-moment estimator of these correlations produces biased estimates. This paper proposes a model that takes into account within- and between-sub-population variation, provides algorithms for obtaining maximum likelihood estimates of these correlations, and discusses several approaches for obtaining interval estimates. (C) 1997 by John Wiley & Sons, Ltd.
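The bias in the naive estimator is the classical attenuation effect (standard identity, shown for context; the paper's likelihood model generalizes it): if \(\hat{x} = x + e_x\) and \(\hat{y} = y + e_y\) with independent errors, then

\[
\mathrm{corr}(\hat{x}, \hat{y}) \;=\; \rho_{xy}\,\sqrt{R_x R_y},
\qquad
R_x = \frac{\sigma_x^2}{\sigma_x^2 + \sigma_{e_x}^2},
\]

so the Pearson estimator is pulled toward zero by the sampling variability, which is what the proposed maximum likelihood model corrects.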