912 results for ERROR THRESHOLD
Abstract:
Background: Genome-wide association studies (GWAS) are becoming the approach of choice for identifying genetic determinants of complex phenotypes and common diseases. The sheer amount of generated data and the use of distinct genotyping platforms with variable genomic coverage remain analytical challenges. Imputation algorithms combine information from directly genotyped markers with the haplotypic structure of the population of interest to infer poorly genotyped or missing markers, and are considered a near-zero-cost approach for comparing and combining data generated in different studies. Several reports have stated that imputed markers have overall acceptable accuracy, but no published report has performed a pairwise comparison of imputed and empirical association statistics across a complete set of GWAS markers. Results: In this report we identified a total of 73 imputed markers that yielded a nominally statistically significant association at P < 10⁻⁵ for type 2 diabetes mellitus and compared them with results obtained from empirical allelic frequencies. Interestingly, despite their overall high correlation, association statistics based on imputed frequencies were discordant for 35 of the 73 (47%) associated markers, considerably inflating the type I error rate of imputed markers. We comprehensively tested several quality thresholds, the haplotypic structure underlying imputed markers, and the use of flanking markers as predictors of inaccurate association statistics derived from imputed markers. Conclusions: Our results suggest that association statistics from imputed markers in a specific minor allele frequency (MAF) range, located in weak linkage disequilibrium blocks, or strongly deviating from local patterns of association are prone to inflated false positive association signals. The present study highlights the potential of imputation procedures and proposes simple procedures for selecting the best imputed markers for follow-up genotyping studies.
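A minimal sketch of how such a post-imputation filter might look in practice, assuming a marker table with hypothetical columns for MAF, mean LD within the marker's block, and deviation from flanking association signals (all names and thresholds are illustrative, not taken from the paper):

```python
import pandas as pd

def flag_unreliable(markers: pd.DataFrame,
                    maf_low=0.01, maf_high=0.05,
                    min_block_r2=0.3, max_local_dev=2.0) -> pd.Series:
    """Return a boolean mask of imputed markers to exclude before follow-up.

    Hypothetical columns:
      maf       - minor allele frequency of the imputed marker
      block_r2  - mean r^2 with markers in its LD block
      local_dev - |log10(p) - median log10(p) of flanking genotyped markers|
    """
    risky_maf = markers["maf"].between(maf_low, maf_high)  # sparse-data MAF band
    weak_ld = markers["block_r2"] < min_block_r2           # weak LD block
    outlier = markers["local_dev"] > max_local_dev         # deviates from local signal
    return risky_maf | weak_ld | outlier
```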
Abstract:
Ecological systems are vulnerable to irreversible change when key system properties are pushed over thresholds, resulting in the loss of resilience and the precipitation of a regime shift. Perhaps the most important such property in human-modified landscapes is the total amount of remnant native vegetation. In a seminal study, Andren proposed the existence of a fragmentation threshold in the total amount of remnant vegetation, below which landscape-scale connectivity is eroded and local species richness and abundance become dependent on patch size. Although species patch-area effects have been a mainstay of conservation science, there has yet to be a robust empirical evaluation of this hypothesis. Here we present and test a new conceptual model describing the mechanisms and consequences of biodiversity change in fragmented landscapes, identifying the fragmentation threshold as the first step in a positive feedback mechanism that can impair ecological resilience and drive a regime shift in biodiversity. The model considers that local extinction risk is defined by patch size and immigration rates by landscape vegetation cover, and that recovery from local species losses depends upon the landscape species pool. Using a unique dataset on the distribution of non-volant small mammals across replicate landscapes in the Atlantic forest of Brazil, we found strong evidence for our model predictions: patch-area effects are evident only at intermediate levels of total forest cover, where landscape diversity is still high and opportunities for enhancing biodiversity through local management are greatest. Furthermore, high levels of forest loss can push the native biota through an extinction filter, resulting in the abrupt, landscape-wide loss of forest-specialist taxa, ecological resilience and management effectiveness. The proposed model links hitherto distinct theoretical approaches within a single framework, providing a powerful tool for analysing the potential effectiveness of management interventions.
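The extinction-immigration feedback described above can be illustrated with a toy stochastic patch-occupancy simulation (entirely our sketch, not the paper's fitted model): extinction probability falls with patch area, while colonisation depends on landscape forest cover and the remaining occupied pool.

```python
import numpy as np

rng = np.random.default_rng(0)

def equilibrium_occupancy(patch_areas, forest_cover, steps=500):
    """Toy model: extinction risk falls with patch area; colonisation scales
    with forest cover and the fraction of patches still occupied (the pool)."""
    occupied = np.ones(len(patch_areas), dtype=bool)
    for _ in range(steps):
        p_ext = 0.5 / np.sqrt(patch_areas)        # bigger patch, lower risk
        p_col = forest_cover * occupied.mean()    # rescue from the species pool
        extinct = rng.random(len(occupied)) < p_ext
        colonised = rng.random(len(occupied)) < p_col
        occupied = (occupied & ~extinct) | colonised
    return occupied.mean()

areas = rng.lognormal(mean=2.0, sigma=1.0, size=100)
for cover in (0.1, 0.3, 0.5):
    print(f"forest cover {cover:.0%}: occupancy {equilibrium_occupancy(areas, cover):.2f}")
```

Because colonisation depends on the occupied fraction itself, low cover can tip the system into a self-reinforcing collapse, which is the regime-shift behaviour the model predicts.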
Abstract:
Pires, FO, Hammond, J, Lima-Silva, AE, Bertuzzi, RCM, and Kiss, MAPDM. Ventilation behavior during upper-body incremental exercise. J Strength Cond Res 25(1): 225-230, 2011. This study tested ventilation (VE) behavior during upper-body incremental exercise using mathematical models that calculate 1 or 2 thresholds, and compared the thresholds identified by the mathematical models with those from the V-slope, ventilatory equivalent for oxygen uptake (VE/VO2), and ventilatory equivalent for carbon dioxide output (VE/VCO2) methods. Fourteen rock climbers underwent an upper-body incremental test on a cycle ergometer with increases of approximately 20 W·min⁻¹ until exhaustion, at a cranking frequency of approximately 90 rpm. The VE data were smoothed to 10-second averages for VE-time plotting. The bisegmental and 3-segmental linear regression models were calculated from the 1 or 2 intercepts that best divided the VE curve into 2 or 3 linear segments. The ventilatory threshold(s) was determined mathematically from the intercept(s) obtained by the bisegmental and 3-segmental models, by the V-slope model, or visually by VE/VO2 and VE/VCO2. There was no difference in fit between the bisegmental (mean square error [MSE] = 35.3 ± 32.7 L·min⁻¹) and 3-segmental (MSE = 44.9 ± 47.8 L·min⁻¹) models. There was no difference between the ventilatory threshold identified by the bisegmental model (28.2 ± 6.8 mL·kg⁻¹·min⁻¹) and the second ventilatory threshold identified by the 3-segmental model (30.0 ± 5.1 mL·kg⁻¹·min⁻¹), VE/VCO2 (28.8 ± 5.5 mL·kg⁻¹·min⁻¹), or V-slope (28.5 ± 5.6 mL·kg⁻¹·min⁻¹). However, the first ventilatory threshold identified by the 3-segmental model (23.1 ± 4.9 mL·kg⁻¹·min⁻¹) or by VE/VO2 (24.9 ± 4.4 mL·kg⁻¹·min⁻¹) differed from these four. VE behavior during upper-body exercise therefore tends to show only one ventilatory threshold. These findings have practical implications because this point is frequently used for aerobic training prescription in healthy subjects, athletes, and elderly or diseased populations. The ventilatory threshold identified from the VE curve should be used for aerobic training prescription in healthy subjects and athletes.
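As a rough illustration of the bisegmental approach, the breakpoint can be found by exhaustive search over candidate split points, fitting a straight line to each side and keeping the split with the lowest total squared error. A simplified sketch (it does not force the two segments to join at the breakpoint, as a full implementation would):

```python
import numpy as np

def bisegmental_fit(x, y):
    """Exhaustive breakpoint search: fit one line to each side of every
    candidate split and keep the split with the lowest total SSE."""
    best_sse, best_bp = np.inf, None
    for i in range(3, len(x) - 3):            # at least 3 points per segment
        sse = 0.0
        for xs, ys in ((x[:i], y[:i]), (x[i:], y[i:])):
            coef = np.polyfit(xs, ys, 1)
            sse += float(np.sum((ys - np.polyval(coef, xs)) ** 2))
        if sse < best_sse:
            best_sse, best_bp = sse, x[i]
    return best_bp, best_sse / len(x)         # threshold candidate, MSE

# Synthetic VE-vs-VO2 data with a change of slope at VO2 = 28
vo2 = np.linspace(10, 45, 70)
ve = np.where(vo2 < 28, 1.5 * vo2, 1.5 * 28 + 3.5 * (vo2 - 28))
print(bisegmental_fit(vo2, ve))               # breakpoint near 28
```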
Abstract:
PURPOSE: Walking training is considered the first treatment option for patients with peripheral arterial disease and intermittent claudication (IC). Walking exercise has been prescribed for these patients by relative intensity of peak oxygen uptake (VO2peak), ranging from 40% to 70% of VO2peak, or by pain threshold (PT). However, the relationship between these methods and the anaerobic threshold (AT), which is considered one of the best metabolic markers for establishing training intensity, has not been analyzed. Thus, the aim of this study was to compare, in IC patients, the physiological responses at exercise intensities usually prescribed for training (% VO2peak or PT) with those observed at AT. METHODS: Thirty-three IC patients performed a maximal graded cardiopulmonary treadmill test to assess exercise tolerance. During the test, heart rate (HR), VO2, and systolic blood pressure were measured, and responses were analyzed at 40% of VO2peak, 70% of VO2peak, AT, and PT. RESULTS: HR and VO2 at 40% and 70% of VO2peak were lower than those at AT (HR: -13 ± 9% and -3 ± 8%, P < .01, respectively; VO2: -52 ± 12% and -13 ± 15%, P < .01, respectively). Conversely, HR and VO2 at PT were slightly higher than those at AT (HR: +3 ± 8%, P < .01; VO2: +6 ± 15%, P = .04). None of the patients reached the respiratory compensation point. CONCLUSION: Prescribing exercise for IC patients between 40% and 70% of VO2peak will induce a lower stimulus than that at AT, whereas prescribing exercise at PT will result in a stimulus above AT. Thus, prescribing exercise training for IC patients on the basis of PT will probably produce a greater metabolic stimulus, promoting better cardiovascular benefits.
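The arithmetic behind these prescription anchors is straightforward; a minimal sketch (illustrative values, not the study's data) of how a fixed-fraction target compares against a patient's measured AT:

```python
def relative_to_at(vo2_target, vo2_at):
    """Percent difference of a prescribed intensity from the anaerobic threshold."""
    return 100.0 * (vo2_target - vo2_at) / vo2_at

vo2_peak, vo2_at = 20.0, 14.0          # mL·kg⁻¹·min⁻¹, hypothetical patient
for frac in (0.40, 0.70):
    print(f"{frac:.0%} VO2peak: {relative_to_at(frac * vo2_peak, vo2_at):+.0f}% vs AT")
```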
Abstract:
This paper proposes a three-stage offline approach to detect, identify, and correct series and shunt branch parameter errors. In Stage 1, the branches suspected of having parameter errors are identified through an identification index (II). The II of a branch is the ratio between the number of measurements adjacent to that branch whose normalized residuals exceed a specified threshold value and the total number of measurements adjacent to that branch. Using several measurement snapshots, Stage 2 estimates the suspicious parameters in a simultaneous multiple-state-and-parameter estimation, via an augmented state and parameter estimator that extends the V-theta state vector to include the suspicious parameters. Stage 3 validates the estimates obtained in Stage 2 via a conventional weighted least squares estimator. Several simulation results (with IEEE bus systems) demonstrate the reliability of the proposed approach in dealing with single and multiple parameter errors in adjacent and non-adjacent branches, as well as in parallel transmission lines with series compensation. Finally, the proposed approach is confirmed in tests performed on the Hydro-Quebec TransEnergie network.
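A minimal sketch of the Stage 1 index as defined above, assuming the normalized residuals and a branch-to-measurement adjacency map are already available (names and values are illustrative):

```python
import numpy as np

def identification_index(r_norm, adjacency, threshold=3.0):
    """Stage 1 index: for each branch, the fraction of adjacent measurements
    whose normalized residual exceeds the threshold.

    r_norm    : array of normalized residuals, one per measurement
    adjacency : dict mapping branch id -> indices of adjacent measurements
    """
    flagged = np.abs(np.asarray(r_norm)) > threshold
    return {branch: flagged[idx].mean() for branch, idx in adjacency.items()}

# Toy example: branch "1-2" has 3 adjacent measurements, two of them flagged
r = [4.1, 0.5, 3.4, 1.2]
print(identification_index(r, {"1-2": [0, 1, 2], "2-3": [3]}))
```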
Abstract:
In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is required because the power system equations are highly correlated with one another, and as a consequence part of the measurement errors is masked. For that purpose an innovation index (II), which quantifies the amount of new information a measurement contains, is proposed. A critical measurement is the limiting case of a measurement with low II: it has a zero II and its error is totally masked. In other words, that measurement brings no innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered, and the total gross error of that measurement is then composed. Instead of the classical normalized measurement residual amplitude, the corresponding normalized composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
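For context, the classical normalized-residual test that the composed residual replaces can be sketched as follows for a weighted least squares estimator (a textbook baseline, not the paper's composed-residual method):

```python
import numpy as np

def normalized_residuals(H, R, r):
    """Classical normalized-residual test for a WLS state estimator.
    For a critical measurement the residual variance is zero and the test
    breaks down: exactly the masking problem the innovation index targets.

    H : (m, n) measurement Jacobian, R : (m, m) error covariance,
    r : (m,) raw residuals z - h(x_hat). Entries above ~3.0 are suspect.
    """
    G = H.T @ np.linalg.inv(R) @ H           # gain matrix
    Omega = R - H @ np.linalg.inv(G) @ H.T   # residual covariance
    return np.abs(r) / np.sqrt(np.diag(Omega))
```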
Abstract:
With the relentless quest for improved performance driving ever tighter manufacturing tolerances, machine tools are sometimes unable to meet the desired requirements. One option for improving the tolerances of machine tools is to compensate for their errors. Among all possible sources of machine tool error, thermally induced errors are generally the most important, particularly in newer machines. The present work demonstrates the evaluation and modelling of the thermal error behaviour of a CNC cylindrical grinding machine during its warm-up period.
Abstract:
We describe a one-time signature scheme based on the hardness of the syndrome decoding problem, and prove it secure in the random oracle model. Our proposal can be instantiated on general linear error-correcting codes, rather than restricted families such as alternant codes for which a decoding trapdoor is known to exist.
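The core object here is the syndrome map s = He^T over GF(2); signing amounts to exhibiting a low-weight preimage of a hashed message under this map, which is hard without a trapdoor. A toy illustration of the map itself (a random toy code, not a secure instantiation of the scheme):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 12, 6                              # toy code: length 12, dimension 6
H = rng.integers(0, 2, size=(n - k, n))   # random parity-check matrix over GF(2)

def syndrome(e):
    """s = H e^T over GF(2)."""
    return (H @ e) % 2

e = np.zeros(n, dtype=int)
e[[2, 7]] = 1                             # a low-weight vector (the 'signature')
s = syndrome(e)                           # plays the role of the hashed message
print("weight:", e.sum(), "syndrome:", s)
# Verification checks that the revealed e is low-weight and maps to s;
# forging requires solving syndrome decoding, which is NP-hard in general.
```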
Abstract:
The purpose of this article is to present a quantitative analysis of the human failure contribution to the collision and/or grounding of oil tankers, following the recommendation of the "Guidelines for Formal Safety Assessment" of the International Maritime Organization. Initially, the employed methodology is presented, emphasizing the use of the technique for human error prediction to reach the desired objective. This methodology is then applied to a ship operating on the Brazilian coast and, thereafter, the procedure to isolate the human actions with the greatest potential to reduce the risk of an accident is described. Finally, the management and organizational factors presented in the "International Safety Management Code" are associated with these selected actions. An operator will therefore be able to decide where to act in order to obtain an effective reduction in the probability of accidents. Even though this study does not present a new methodology, it can be considered a reference in human reliability analysis for the maritime industry, which, in spite of having some guides for risk analysis, has few studies on human reliability effectively applied to the sector.
Abstract:
The multiple-gate field-effect transistor (MuGFET) is a device with a gate folded over different sides of the channel region. It is one of the most promising technological solutions for creating high-performance ultra-scaled SOI CMOS. In this work, the behavior of the threshold voltage in double-gate, triple-gate and quadruple-gate SOI transistors with different channel doping concentrations is studied through three-dimensional numerical simulation. The results indicate that for double-gate transistors, one or two threshold voltages can be observed, depending on the channel doping concentration. In triple-gate and quadruple-gate devices, however, up to four threshold voltages can be observed, due to the corner effect and the different doping concentrations between the top and bottom of the fin.
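One common way to expose multiple threshold voltages in measured or simulated I-V data is the transconductance-change method, where each threshold appears as a peak in dgm/dVG. A hedged sketch of that extraction (our illustration; the paper itself works from 3-D numerical simulation):

```python
import numpy as np
from scipy.signal import find_peaks

def threshold_voltages(vg, id_):
    """Transconductance-change method: thresholds appear as peaks in
    d(gm)/dVg; multi-gate devices with corner effects can show several."""
    gm = np.gradient(id_, vg)                 # transconductance
    dgm = np.gradient(gm, vg)                 # its derivative
    peaks, _ = find_peaks(dgm, prominence=0.05 * dgm.max())
    return vg[peaks]

# Synthetic curve with two smooth turn-ons at 0.4 V and 0.7 V
vg = np.linspace(0, 1.5, 301)
id_ = np.log1p(np.exp(40 * (vg - 0.4))) + np.log1p(np.exp(40 * (vg - 0.7)))
print(threshold_voltages(vg, id_))            # expect values near 0.4 and 0.7
```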
Abstract:
Leaf wetness duration (LWD) models based on empirical approaches offer practical advantages over physically based models in agricultural applications, but their spatial portability is questionable because they may be biased toward the climatic conditions under which they were developed. In our study, the spatial portability of three LWD models with empirical characteristics (a RH threshold model, a decision tree model with wind speed correction, and a fuzzy logic model) was evaluated using weather data collected in Brazil, Canada, Costa Rica, Italy and the USA. The fuzzy logic model was more accurate than the other models in estimating LWD measured by painted leaf wetness sensors. The fraction of correct estimates for the fuzzy logic model was greater (0.87) than for the other models (0.85-0.86) across the 28 sites where painted sensors were installed, and the kappa statistic of agreement between the model and the painted sensors was greater for the fuzzy logic model (0.71) than for the other models (0.64-0.66). Values of the kappa statistic for the fuzzy logic model were also less variable across sites than those of the other models. When model estimates were compared with measurements from unpainted leaf wetness sensors, the fuzzy logic model had a smaller mean absolute error (2.5 h day⁻¹) than the other models (2.6-2.7 h day⁻¹) after being calibrated for the unpainted sensors. The results suggest that the fuzzy logic model has greater spatial portability than the other models evaluated, and merits further validation in comparison with physical models under a wider range of climate conditions.
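The simplest of the three models, the RH threshold model, classifies an hour as wet whenever relative humidity meets or exceeds a fixed threshold (90% is the value commonly used in the LWD literature). A minimal sketch assuming hourly RH input:

```python
import numpy as np

def lwd_rh_threshold(rh_hourly, threshold=90.0):
    """RH-threshold model: an hour is 'wet' when RH >= threshold (percent).
    Returns leaf wetness duration in hours for each day of hourly input."""
    wet = np.asarray(rh_hourly) >= threshold
    return wet.reshape(-1, 24).sum(axis=1)

# Two synthetic days of hourly RH (%)
rh = np.concatenate([np.full(24, 85.0), np.full(24, 95.0)])
rh[:6] = 92.0                         # dewy early-morning hours on day 1
print(lwd_rh_threshold(rh))           # -> [ 6 24] hours
```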
Abstract:
Centuries after Locke asserted the importance of memory to identity, Freudian psychology argued that what was forgotten was of equal importance to what was remembered. The closing decades of the nineteenth century saw a rising interest in the nature of forgetting, resulting in a reassessment and newfound distrust of the long-revered faculty of memory. The relationship between memory and identity was inverted, with forgetting also becoming a means of forging identity. This newfound distrust of memory manifested in the writings of Nietzsche, who in 1874 called for society to learn to feel unhistorically and distance itself from the past, in what amounted to a cultural forgetting. Following the Nietzschean call, the architecture of Modernism was likewise compelled by the need to 'overcome' the limits imposed by history. This paper examines notions of identity through the shifting boundaries of remembering and forgetting, with particular reference to the construction of Brazilian identity through the 'repression' of history and memory in the design of the Brazilian capital. Designed as a forward-looking modernist utopia transcending the limits imposed by the country's colonial heritage, the design for Brasilia exploited the anti-historicist agenda of modernism to emancipate the country from cultural and political associations with the Portuguese Empire. The paper thus examines the relationship between place, memory and forgetting through a discussion of the design for Brasilia.
Abstract:
We show that quantum feedback control can be used as a quantum-error-correction process for errors induced by a weak continuous measurement. In particular, when the error model is restricted to one, perfectly measured, error channel per physical qubit, quantum feedback can act to perfectly protect a stabilizer codespace. Using the stabilizer formalism we derive an explicit scheme, involving feedback and an additional constant Hamiltonian, to protect an (n-1)-qubit logical state encoded in n physical qubits. This works for both Poisson (jump) and white-noise (diffusion) measurement processes. Universal quantum computation is also possible in this scheme. As an example, we show that detected-spontaneous-emission error correction with a driving Hamiltonian can greatly reduce the amount of redundancy required to protect a state from that which has been previously postulated [e.g., Alber et al., Phys. Rev. Lett. 86, 4402 (2001)].
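The stabilizer machinery the scheme builds on can be illustrated with a discrete, projective-measurement toy example: the 3-qubit bit-flip code, where syndrome measurements identify the flipped qubit and a conditional correction restores the state. This is a simplification for illustration only; the paper's scheme acts on continuous measurement records via feedback, not discrete rounds.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Stabilizer generators of the 3-qubit bit-flip code
S1, S2 = kron(Z, Z, I2), kron(I2, Z, Z)

# Encoded logical state a|000> + b|111>
a, b = 0.6, 0.8
psi = np.zeros(8)
psi[0], psi[7] = a, b

corrupted = kron(I2, X, I2) @ psi      # bit flip on qubit 2

def syndrome_bit(stab, state):
    """The corrupted state is a +/-1 eigenstate of each stabilizer,
    so its expectation value is exactly the syndrome bit."""
    return int(np.sign(state @ stab @ state))

syn = (syndrome_bit(S1, corrupted), syndrome_bit(S2, corrupted))
correction = {(+1, +1): kron(I2, I2, I2),   # no error
              (-1, +1): kron(X, I2, I2),    # flip on qubit 1
              (-1, -1): kron(I2, X, I2),    # flip on qubit 2
              (+1, -1): kron(I2, I2, X)}[syn]
print(np.allclose(correction @ corrupted, psi))   # True: state restored
```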
Abstract:
We investigated the recruitment behaviour of low-threshold motor units in flexor digitorum superficialis by altering two biomechanical constraints: the load against which the muscle worked and the initial muscle length. The load was increased using isotonic (low load), loaded dynamic (intermediate load) and isometric (high load) contractions in two studies. The initial muscle position reflected resting muscle length in series A, and a longer length with digit III fully extended in series B. Intramuscular EMG was recorded from 48 single motor units in 10 experiments on five healthy subjects, 21 units in series A and 27 in series B, while subjects performed ramp up, hold and ramp down contractions. Increasing the load on the muscle decreased the force, displacement and firing rate of single motor units at recruitment at shorter muscle lengths (P < 0.001, dependent t-test). At longer muscle lengths this recruitment pattern was observed between loaded dynamic and isotonic contractions, but not between isometric and loaded dynamic contractions. Thus, the recruitment properties of single motor units in human flexor digitorum superficialis are sensitive to changes in both imposed external loads and the initial length of the muscle.
Abstract:
This paper presents a method for estimating the posterior probability density of the cointegrating rank of a multivariate error correction model. A second contribution is the careful elicitation of the prior for the cointegrating vectors, derived from a prior on the cointegrating space. This prior arises naturally from treating the cointegrating space as the parameter of interest in inference, and overcomes problems previously encountered in Bayesian cointegration analysis. Using this new prior and a Laplace approximation, an estimator for the posterior probability of the rank is given. The approach performs well compared with information criteria in Monte Carlo experiments.
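As a point of reference for the rank-selection problem, the classical frequentist counterpart can be run in a few lines with statsmodels' Johansen-style trace test (a baseline for comparison, not the paper's Bayesian posterior estimator):

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import select_coint_rank

rng = np.random.default_rng(0)
n = 400
# Trivariate system sharing one common stochastic trend -> cointegrating rank 2
trend = np.cumsum(rng.normal(size=n))
y = np.column_stack([trend + rng.normal(size=n) for _ in range(3)])

res = select_coint_rank(y, det_order=0, k_ar_diff=1, method="trace", signif=0.05)
print(res.rank)   # estimated cointegrating rank (2 for this setup)
```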