966 results for Algebra of Errors


Relevance: 80.00%

Abstract:

Critical loads are the basis for policies controlling emissions of acidic substances in Europe and elsewhere. They are assessed by several elaborate and ingenious models, each of which requires many parameters, and have to be applied on a spatially-distributed basis. Often the values of the input parameters are poorly known, calling into question the validity of the calculated critical loads. This paper attempts to quantify the uncertainty in the critical loads due to this "parameter uncertainty", using examples from the UK. Models used for calculating critical loads for deposition of acidity and nitrogen in forest and heathland ecosystems were tested at four contrasting sites. Uncertainty was assessed by Monte Carlo methods. Each input parameter or variable was assigned a value, range and distribution in as objective a fashion as possible. Each model was run 5000 times at each site using parameters sampled from these input distributions. Output distributions of various critical load parameters were calculated. The results were surprising. Confidence limits of the calculated critical loads were typically considerably narrower than those of most of the input parameters. This may be due to a "compensation of errors" mechanism. The range of possible critical load values at a given site is, however, rather wide, and the tails of the distributions are typically long. The deposition reductions required for a high level of confidence that the critical load is not exceeded are thus likely to be large. The implication for pollutant regulation is that requiring a high probability of non-exceedance is likely to carry high costs. The relative contribution of the input variables to critical load uncertainty varied from site to site: any input variable could be important, and thus it was not possible to identify variables as likely targets for research into narrowing uncertainties. Sites where a number of good measurements of input parameters were available had lower uncertainties, so use of in situ measurement could be a valuable way of reducing critical load uncertainty at particularly valuable or disputed sites. From a restricted number of samples, uncertainties in heathland critical loads appear comparable to those of coniferous forest, and nutrient nitrogen critical loads to those of acidity. It was important to include correlations between input variables in the Monte Carlo analysis, but the choice of statistical distribution type was of lesser importance. Overall, the analysis provided objective support for the continued use of critical loads in policy development. (c) 2007 Elsevier B.V. All rights reserved.
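
As an illustration of the Monte Carlo procedure summarised above, here is a minimal sketch in Python. The critical-load function and the input distributions are hypothetical placeholders chosen for brevity; they are not the paper's models, sites or UK parameter values.

import numpy as np

rng = np.random.default_rng(42)
N = 5000  # runs per site, matching the number used in the study

def critical_load(bc_dep, bc_weath, anc_crit, runoff):
    """Toy stand-in for a simple mass-balance critical-load calculation (hypothetical)."""
    return bc_dep + bc_weath - runoff * anc_crit

# Illustrative input distributions: the shapes, means and spreads are invented
inputs = {
    "bc_dep":   rng.normal(0.8, 0.2, N),       # base cation deposition
    "bc_weath": rng.lognormal(-0.5, 0.4, N),   # weathering rate
    "anc_crit": rng.normal(0.02, 0.005, N),    # critical ANC leaching term
    "runoff":   rng.normal(10.0, 2.0, N),
}

cl = critical_load(**inputs)
lo, med, hi = np.percentile(cl, [2.5, 50, 97.5])
print(f"median = {med:.2f}, 95% interval = [{lo:.2f}, {hi:.2f}]")

Correlated inputs, which the study found important to include, would be drawn jointly (for example with rng.multivariate_normal) rather than independently as here.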

Relevance: 80.00%

Abstract:

The chess endgame is increasingly being seen through the lens of, and therefore effectively defined by, a data ‘model’ of itself. It is vital that such models are clearly faithful to the reality they purport to represent. This paper examines that issue and systems engineering responses to it, using the chess endgame as the exemplar scenario. A structured survey of the intrinsic challenges and complexity of creating endgame data has been carried out by reviewing the past pattern of errors: those arising during work in progress, those surfacing in publications, and those occurring after the data was generated. Specific measures are proposed to counter the observed classes of error-risk, including a preliminary survey of techniques for using state-of-the-art verification tools to generate endgame tables (EGTs) that are correct by construction. The approach may be applied generically beyond the game domain.

Relevance: 80.00%

Abstract:

Improvements in the resolution of satellite imagery have enabled extraction of water surface elevations at the margins of a flood. Comparison between modelled and observed water surface elevations provides a new means of calibrating and validating flood inundation models; however, the uncertainty in the observed data has yet to be addressed. Here a flood inundation model is calibrated using a probabilistic treatment of the observed data. A LiDAR-guided snake algorithm is used to determine an outline of a flood event in 2006 on the River Dee, North Wales, UK, using a 12.5 m ERS-1 image. Points at approximately 100 m intervals along this outline are selected, and the water surface elevation is recorded as the LiDAR DEM elevation at each point. Using a planar water surface fitted between the gauged upstream and downstream water elevations as an approximation, the water surface elevations at points along the flood extent are compared with their ‘expected’ values. The errors between the two are roughly normally distributed, but when plotted against coordinates they show obvious spatial autocorrelation. The source of this spatial dependency is investigated by comparing the errors to the slope gradient and aspect of the LiDAR DEM. A LISFLOOD-FP model of the flood event is set up to investigate the effect of observed data uncertainty on the calibration of flood inundation models. Multiple simulations are run using different combinations of friction parameters, from which the optimum parameter set is selected. For each simulation a t-test is used to quantify the fit between modelled and observed water surface elevations. The points used in this t-test are selected on the basis of their error, and the selection criteria enable evaluation of the sensitivity of the choice of optimum parameter set to uncertainty in the observed data. This work explores the observed data in detail and highlights possible causes of error. The identification of significant error (RMSE = 0.8 m) between approximate expected and actual observed elevations from the remotely sensed data emphasises the limitations of using these data in a deterministic manner within the calibration process. These limitations are addressed by developing a new probabilistic approach to using the observed data.
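
The fit statistics mentioned above can be sketched briefly; the following assumes hypothetical files of observed (LiDAR-derived) and modelled water surface elevations at the same outline points, and is not the study's code.

import numpy as np
from scipy import stats

observed = np.loadtxt("observed_wse.txt")   # LiDAR DEM elevations at outline points (assumed file)
modelled = np.loadtxt("modelled_wse.txt")   # LISFLOOD-FP elevations at the same points (assumed file)

errors = modelled - observed
rmse = np.sqrt(np.mean(errors ** 2))

# One-sample t-test of whether the mean model-observation difference is zero
t_stat, p_value = stats.ttest_1samp(errors, popmean=0.0)
print(f"RMSE = {rmse:.2f} m, t = {t_stat:.2f}, p = {p_value:.3f}")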

Relevance: 80.00%

Abstract:

Non-word repetition (NWR) was investigated in adolescents with typical development, Specific Language Impairment (SLI) and Autism plus Language Impairment (ALI) (n = 17, 13 and 16; mean ages 14;4, 15;4 and 14;8, respectively). The study evaluated the hypothesis that poor NWR performance in both clinical groups indicates an overlapping language phenotype (Kjelgaard & Tager-Flusberg, 2001). Performance was investigated both quantitatively, e.g. overall error rates, and qualitatively, e.g. the effect of length on repetition, the proportion of errors affecting phonological structure, and the proportion of consonant substitutions involving manner changes. Findings were consistent with previous research (Whitehouse, Barry, & Bishop, 2008) in demonstrating a greater effect of length in the SLI group than the ALI group, which may be due to greater short-term memory limitations. In addition, an automated count of phoneme errors identified poorer performance in the SLI group than the ALI group. These findings indicate differences in the language profiles of individuals with SLI and ALI, but do not rule out a partial overlap. Errors affecting phonological structure were relatively frequent, accounting for around 40% of phonemic errors, but less frequent than straight consonant-for-consonant or vowel-for-vowel substitutions. It is proposed that these two different types of errors may reflect separate contributory mechanisms. Around 50% of consonant substitutions in the clinical groups involved manner changes, suggesting poor auditory-perceptual encoding. From a clinical perspective, algorithms which automatically count phoneme errors may enhance the sensitivity of NWR as a diagnostic marker of language impairment. Learning outcomes: Readers will be able to (1) describe and evaluate the hypothesis that there is a phenotypic overlap between SLI and Autism Spectrum Disorders, (2) describe differences in the NWR performance of adolescents with SLI and ALI, and discuss whether these differences support or refute the phenotypic overlap hypothesis, and (3) understand how computational algorithms such as the Levenshtein Distance may be used to analyse NWR data.
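
The Levenshtein Distance mentioned in the learning outcomes can be computed over phoneme sequences rather than orthographic characters; a minimal sketch follows (the example strings use letters as stand-ins for phoneme symbols and are not items from the study).

def levenshtein(target, response):
    """Edit distance between two phoneme sequences (insertions, deletions, substitutions)."""
    m, n = len(target), len(response)
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i
    for j in range(n + 1):
        dist[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if target[i - 1] == response[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + cost) # substitution
    return dist[m][n]

# One substitution (b -> p) and one omission: distance 2
print(levenshtein(list("blontersteipin"), list("plonterstepin")))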

Relevance: 80.00%

Abstract:

If secondary structure predictions are to be incorporated into fold recognition methods, an assessment of the effect of specific types of errors in predicted secondary structures on the sensitivity of fold recognition should be carried out. Here, we present a systematic comparison of different secondary structure prediction methods by measuring the frequencies of specific types of error. We carry out an evaluation of the effect of specific types of error on secondary structure element alignment (SSEA), a baseline fold recognition method. The results of this evaluation indicate that missing out whole helix or strand elements, or predicting the wrong type of element, is more detrimental than predicting the wrong lengths of elements or overpredicting helix or strand. We also suggest that SSEA scoring is an effective method for assessing the accuracy of secondary structure prediction, and may also provide a more appropriate assessment of the “usefulness” and quality of predicted secondary structure if secondary structure alignments are to be used in fold recognition.
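
As a small illustration of the element-level view that SSEA-style scoring operates on, the sketch below collapses a three-state secondary structure string (H/E/C) into (type, length) elements; the comparison function is a deliberately naive stand-in, not the SSEA algorithm itself.

from itertools import groupby

def to_elements(ss):
    """'CCHHHHCCEEEC' -> [('C', 2), ('H', 4), ('C', 2), ('E', 3), ('C', 1)]"""
    return [(state, len(list(run))) for state, run in groupby(ss)]

def naive_element_agreement(pred, true):
    """Fraction of helix/strand elements in `true` whose type is matched, in order,
    by the corresponding helix/strand element in `pred` (crude, ignores lengths)."""
    p = [e for e in to_elements(pred) if e[0] in "HE"]
    t = [e for e in to_elements(true) if e[0] in "HE"]
    matches = sum(1 for (pt, _), (tt, _) in zip(p, t) if pt == tt)
    return matches / max(len(t), 1)

print(to_elements("CCHHHHCCEEEC"))
print(naive_element_agreement("CCHHHHCCHHHC", "CCHHHHCCEEEC"))  # strand predicted as helix -> 0.5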

Relevance: 80.00%

Abstract:

Background: Jargon aphasia is one of the most intractable forms of aphasia, with limited guidance on ameliorating the associated naming difficulties and neologisms. The few naming therapy studies that exist in jargon aphasia have used either semantic or phonological approaches, but the results have been equivocal. Moreover, the effect of therapy on the characteristics of neologisms is less explored. Aims: This study investigates the effectiveness of a phonological naming therapy (phonological component analysis, PCA) on picture naming abilities and on quantitative and qualitative changes in neologisms for an individual with jargon aphasia (FF). Methods: FF showed evidence of jargon aphasia with severe naming difficulties and produced a very high proportion of neologisms. A single-subject multiple probe design across behaviors was employed to evaluate the effects of PCA therapy on naming accuracy for three sets of words. In therapy, a phonological components analysis chart was used to identify five phonological components (rhymes, first sound, first sound associate, final sound, number of syllables) for each target word. Generalization effects (changes in percent accuracy and error pattern) were examined by comparing pre- and post-therapy responses on the Philadelphia Naming Test, and these responses were analyzed to explore the characteristics of the neologisms. The quantitative change in neologisms was measured by the change in the proportion of neologisms from pre- to post-therapy, and the qualitative change was indexed by the phonological overlap between target and neologism. Results: As a consequence of PCA therapy, FF showed a significant improvement in his ability to name the treated items. His performance in the maintenance and follow-up phases remained comparable to his performance during the therapy phases. Generalization to other naming tasks did not show a change in accuracy, but distinct differences in error pattern (an increase in the proportion of real word responses and a decrease in the proportion of neologisms) were observed. Notably, the decrease in neologisms was accompanied by a trend towards increased phonological similarity between the neologisms and the targets. Conclusions: This study demonstrated the effectiveness of a phonological therapy for improving naming abilities and reducing the number of neologisms in an individual with severe jargon aphasia. The positive outcome is encouraging, as it provides evidence for effective therapies for jargon aphasia and also emphasizes that the quality and quantity of errors may provide a sensitive outcome measure for determining therapy effectiveness, in particular for client groups who are difficult to treat.
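
The phonological overlap index referred to above can take several forms; a minimal sketch of one simple position-free variant is given below (it is not necessarily the exact measure used in the study, and the phoneme lists are illustrative).

def phonological_overlap(target, response):
    """Proportion of target phonemes that also appear in the response,
    counting each produced phoneme at most once (a crude overlap index)."""
    if not target:
        return 0.0
    pool = list(response)
    shared = 0
    for ph in target:
        if ph in pool:
            pool.remove(ph)
            shared += 1
    return shared / len(target)

# Example: a neologism sharing 5 of the 8 target phonemes -> 0.625
print(phonological_overlap(["t", "e", "l", "i", "f", "@", "U", "n"],
                           ["t", "e", "l", "i", "b", "@", "s"]))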

Relevance: 80.00%

Abstract:

The use of virtualization in high-performance computing (HPC) has been suggested as a means to provide the tailored services and added functionality that many users expect from full-featured Linux cluster environments. The use of virtual machines in HPC can offer several benefits, but maintaining performance is a crucial factor. In some instances the performance criteria are placed above the isolation properties. This selective relaxation of isolation for performance is an important characteristic when considering resilience for HPC environments that employ virtualization. In this paper we consider some of the factors associated with balancing performance and isolation in configurations that employ virtual machines. In this context, we propose a classification of errors based on the concept of “error zones”, as well as a detailed analysis of the trade-offs between resilience and performance based on the level of isolation provided by virtualization solutions. Finally, a set of experiments is performed using different virtualization solutions to elucidate the discussion.

Relevance: 80.00%

Abstract:

Observational analyses of running 5-year ocean heat content trends (H_t) and net downward top-of-atmosphere radiation (N) are significantly correlated (r ~ 0.6) from 1960 to 1999, but a spike in H_t in the early 2000s is likely spurious, since it is inconsistent with estimates of N from both satellite observations and climate model simulations. Variations in N between 1960 and 2000 were dominated by volcanic eruptions, and are well simulated by the ensemble mean of coupled models from the Fifth Coupled Model Intercomparison Project (CMIP5). We find an observation-based reduction in N of -0.31 ± 0.21 W m^-2 between 1999 and 2005 that potentially contributed to the recent warming slowdown, but the relative roles of external forcing and internal variability remain unclear. While present-day anomalies of N in the CMIP5 ensemble mean and observations agree, this may be due to a cancellation of errors in outgoing longwave and absorbed solar radiation.
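
The comparison of 5-year ocean heat content trends with net top-of-atmosphere radiation can be sketched as follows; the two input series here are synthetic annual-mean placeholders, not the observational or CMIP5 data used in the paper.

import numpy as np

years = np.arange(1960, 2000)
ohc = np.cumsum(np.random.default_rng(0).normal(0.5, 0.3, years.size))  # toy ocean heat content series
n_toa = np.random.default_rng(1).normal(0.5, 0.3, years.size)           # toy net downward TOA radiation

def running_trend(y, window=5):
    """Least-squares slope over each running window."""
    x = np.arange(window)
    return np.array([np.polyfit(x, y[i:i + window], 1)[0]
                     for i in range(y.size - window + 1)])

def running_mean(y, window=5):
    return np.convolve(y, np.ones(window) / window, mode="valid")

ht = running_trend(ohc)       # 5-year heat content trend (a heat-uptake rate)
n_bar = running_mean(n_toa)   # matching 5-year mean of N
print(f"r = {np.corrcoef(ht, n_bar)[0, 1]:.2f}")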

Relevance: 80.00%

Abstract:

Cognitive functions such as attention and memory are known to be impaired in End Stage Renal Disease (ESRD), but the sites of the neural changes underlying these impairments are uncertain. Patients and controls took part in a latent learning task which had previously shown a dissociation between patients with Parkinson’s disease and those with medial temporal damage. ESRD patients (n = 24) and age- and education-matched controls (n = 24) were randomly assigned to either an exposed or an unexposed condition. In Phase 1 of the task, participants learned that a cue (a word) on the back of a schematic head predicted that the subsequently seen face would be smiling. For the exposed (but not the unexposed) condition, an additional, irrelevant colour cue was shown during presentation. In Phase 2, a different association, between colour and facial expression, was learned. Instructions were the same for each phase: participants had to predict whether the subsequently viewed face would be happy or sad. No difference in error rate between the groups was found in Phase 1, suggesting that patients and controls learned at a similar rate. However, in Phase 2, a significant interaction was found between group and condition, with exposed controls performing significantly worse than unexposed controls (thereby demonstrating learned irrelevance). In contrast, exposed patients made a similar number of errors to unexposed patients in Phase 2. The pattern of results in ESRD was different from that previously found in Parkinson’s disease, suggesting a different neural origin.
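
One common way to test the group-by-condition interaction described above is a two-way ANOVA on the Phase 2 error counts; the sketch below assumes a hypothetical long-format data file, not the study's data.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Assumed columns: errors, group (ESRD/control), condition (exposed/unexposed)
df = pd.read_csv("phase2_errors.csv")
model = smf.ols("errors ~ C(group) * C(condition)", data=df).fit()
print(anova_lm(model, typ=2))  # the C(group):C(condition) row tests the interaction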

Relevance: 80.00%

Abstract:

Medication safety and errors are a major concern in care homes. In addition to the identification of incidents, there is a need for a comprehensive system description to avoid the danger of introducing interventions that have unintended consequences and are therefore unsustainable. The aim of the study was to explore the impact and uniqueness of Work Domain Analysis (WDA) in facilitating an in-depth understanding of medication safety problems within the care home system, and to identify the potential benefits of WDA in designing safety interventions to improve medication safety. A comprehensive, systematic and contextual overview of the care home medication system was developed for the first time. The novel use of the Abstraction Hierarchy (AH) to analyse medication errors revealed the value of the AH in guiding a comprehensive analysis of errors and generating system improvement recommendations that take into account the contextual information of the wider system.

Relevance: 80.00%

Abstract:

This paper investigates the potential of fusion at the normalisation/segmentation level, prior to feature extraction. While there are several biometric fusion methods at the data/feature level, score level and rank/decision level, combining raw biometric signals, scores, or ranks/decisions respectively, fusion at the normalisation/segmentation level is still in its infancy. However, the increasing demand for more relaxed and less invasive recording conditions, especially for on-the-move iris recognition, suggests that fusion at this very low level deserves further investigation. This paper focuses on multi-segmentation fusion for iris biometric systems, investigating the benefit of combining the segmentation results of multiple normalisation algorithms, using four methods from two different public iris toolkits (USIT, OSIRIS) on the public CASIA and IITD iris datasets. Evaluations based on recognition accuracy and ground truth segmentation data indicate high sensitivity with regard to the types of errors made by segmentation algorithms.
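
One simple way to fuse at the segmentation level is to combine the binary noise masks produced by several normalisation algorithms, for example by voting; the sketch below illustrates this idea and is not necessarily the fusion rule used in the paper (the mask variable names are hypothetical).

import numpy as np

def fuse_masks(masks):
    """masks: list of H x W boolean/0-1 arrays (1 = usable iris pixel).
    Keep a pixel if at least half of the segmentation algorithms mark it usable."""
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks])
    votes = stack.sum(axis=0)
    return votes >= (len(masks) / 2.0)

# e.g. fused = fuse_masks([mask_osiris, mask_caht, mask_wahet, mask_ifpp])  # hypothetical mask arrays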

Relevance: 80.00%

Abstract:

This work presents a Bayesian semiparametric approach for dealing with regression models in which the covariate is measured with error. Given that (1) the assumption of normally distributed errors is very restrictive, and (2) assuming a specific elliptical distribution for the errors (Student-t, for example) may be somewhat presumptuous, there is a need for more flexible methods that assume only symmetry of the errors (admitting unknown kurtosis). In this sense, the main advantage of this extended Bayesian approach is the possibility of considering generalizations of the elliptical family of models by using Dirichlet process priors, in both dependent and independent situations. Conditional posterior distributions are implemented, allowing the use of Markov chain Monte Carlo (MCMC) to generate the posterior distributions. An interesting result is that the Dirichlet process prior is not updated in the case of the dependent elliptical model. Furthermore, an analysis of a real data set is reported to illustrate the usefulness of our approach in dealing with outliers. Finally, the proposed semiparametric models and the parametric normal model are compared graphically via the posterior densities of the coefficients. (C) 2009 Elsevier Inc. All rights reserved.
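
For concreteness, one common form of the structural measurement-error model with a Dirichlet process scale mixture is written out below; this is a sketch of the general setting, and the paper's exact parameterisation may differ.

\begin{align*}
  y_i &= \alpha + \beta x_i + e_i, \\
  X_i &= x_i + u_i, \qquad i = 1, \dots, n,
\end{align*}
where only $(y_i, X_i)$ are observed and the true covariate $x_i$ is latent. Symmetry of the errors with unknown kurtosis can be obtained by modelling $(e_i, u_i)$ as a scale mixture of normals,
\[
  (e_i, u_i) \mid \lambda_i \sim \mathrm{N}_2(\mathbf{0}, \lambda_i \Sigma),
  \qquad \lambda_i \sim G, \qquad G \sim \mathrm{DP}(\tau, G_0),
\]
with the mixing distribution $G$ given a Dirichlet process prior rather than a fixed parametric form (for example, the inverse-gamma mixing that would recover a Student-t model).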

Relevance: 80.00%

Abstract:

We simplify the results of Bremner and Hentzel [J. Algebra 231 (2000) 387-405] on polynomial identities of degree 9 in two variables satisfied by the ternary cyclic sum [a, b, c] = abc + bca + cab in every totally associative ternary algebra. We also obtain new identities of degree 9 in three variables which do not follow from the identities in two variables. Our results depend on (i) the LLL algorithm for lattice basis reduction, and (ii) linearization operators in the group algebra of the symmetric group which permit efficient computation of the representation matrices for a non-linear identity. Our computational methods can be applied to polynomial identities for other algebraic structures.
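
For reference, the ternary cyclic sum and the total associativity law in question can be written explicitly (standard definitions, recalled here for the reader's convenience):

\[
  [a, b, c] \;=\; abc + bca + cab,
\]
where a ternary algebra with product $(a, b, c) \mapsto abc$ is totally associative if
\[
  (abc)de \;=\; a(bcd)e \;=\; ab(cde) \qquad \text{for all } a, b, c, d, e.
\]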

Relevance: 80.00%

Abstract:

We begin a study of torsion theories for representations of finitely generated algebras U over a field such that U contains a finitely generated commutative Harish-Chandra subalgebra Gamma. This is an important class of associative algebras, which includes all finite W-algebras of type A over an algebraically closed field of characteristic zero, in particular the universal enveloping algebra of gl(n) (or sl(n)) for all n. We show that any Gamma-torsion theory defined by the coheight of the prime ideals of Gamma is liftable to U. Moreover, for any simple U-module M, all associated prime ideals of M in Spec Gamma have the same coheight. Hence, the coheight of these associated prime ideals is an invariant of a given simple U-module. This implies a stratification of the category of U-modules controlled by the coheight of the associated prime ideals of Gamma. Our approach can be viewed as a generalization of the classical paper by Block (1981) [4]; in particular, it allows the study of representations of gl(n) beyond the classical category of weight or generalized weight modules. (C) 2011 Elsevier B.V. All rights reserved.
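
For readers unfamiliar with the term, the coheight used above is the standard commutative-algebra notion: for a prime ideal $\mathfrak{p} \in \operatorname{Spec} \Gamma$,

\[
  \operatorname{coht}(\mathfrak{p}) \;=\; \dim(\Gamma/\mathfrak{p})
  \;=\; \sup\{\, n : \mathfrak{p} = \mathfrak{p}_0 \subsetneq \mathfrak{p}_1 \subsetneq \cdots \subsetneq \mathfrak{p}_n, \ \mathfrak{p}_i \in \operatorname{Spec} \Gamma \,\},
\]
that is, the Krull dimension of the quotient $\Gamma/\mathfrak{p}$, equivalently the length of the longest chain of prime ideals lying above $\mathfrak{p}$.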

Relevance: 80.00%

Abstract:

Let A be a (not necessarily associative) finite-dimensional algebra over a field of characteristic zero. A quantitative estimate of the polynomial identities satisfied by A is achieved through the study of the asymptotics of the sequence of codimensions of A. It is well known that for such an algebra this sequence is exponentially bounded. Here we capture the exponential rate of growth of the sequence of codimensions for several classes of algebras, including simple algebras with a special non-degenerate form, finite-dimensional Jordan or alternative algebras, and many more. In all cases this rate of growth is an integer and is explicitly related to the dimension of a subalgebra of A. One of the main tools, of independent interest, is the construction in the free non-associative algebra of multialternating polynomials satisfying special properties. (C) 2010 Elsevier Inc. All rights reserved.
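
The quantities discussed above have a standard formulation, recalled here for convenience: if $P_n$ denotes the space of multilinear polynomials in $x_1, \dots, x_n$ in the free non-associative algebra and $\mathrm{Id}(A)$ the ideal of identities of $A$, then the codimension sequence is

\[
  c_n(A) \;=\; \dim \frac{P_n}{P_n \cap \mathrm{Id}(A)}, \qquad n \ge 1,
\]
and the exponential rate of growth referred to is
\[
  \exp(A) \;=\; \lim_{n \to \infty} \sqrt[n]{c_n(A)}
\]
(when this limit exists; otherwise the upper and lower limits are considered).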