55 results for Error probability


Relevance:

20.00%

Publisher:

Abstract:

In groundwater applications, Monte Carlo methods are employed to model the uncertainty in geological parameters. However, their brute-force application becomes computationally prohibitive for highly detailed geological descriptions, complex physical processes, and a large number of realizations. The Distance Kernel Method (DKM) overcomes this issue by clustering the realizations in a multidimensional space based on the flow responses obtained by means of an approximate (computationally cheaper) model; the uncertainty is then estimated from the exact responses, which are computed only for one representative realization per cluster (the medoid). Usually, DKM is employed to decrease the size of the sample of realizations considered to estimate the uncertainty. We propose to use the information from the approximate responses for uncertainty quantification. The subset of exact solutions provided by DKM is then employed to construct an error model and correct the potential bias of the approximate model. Two error models are devised; both employ the difference between approximate and exact medoid solutions, but they differ in the way medoid errors are interpolated to correct the whole set of realizations. The Local Error Model rests upon the clustering defined by DKM and can be seen as a natural way to account for intra-cluster variability; the Global Error Model employs a linear interpolation of all medoid errors regardless of the cluster to which a single realization belongs. These error models are evaluated for an idealized pollution problem in which the uncertainty of the breakthrough curve needs to be estimated. For this numerical test case, we demonstrate that the error models improve the uncertainty quantification provided by the DKM algorithm and are effective in correcting the bias of the estimate computed solely from the approximate multiscale finite-volume (MsFV) results. The framework presented here is not specific to the methods considered and can be applied to other combinations of approximate models and techniques for selecting a subset of realizations.
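As a rough illustration of one simple reading of the local-error-model idea described above (cluster the realizations on their cheap approximate responses, run the exact model only at each cluster medoid, and propagate the medoid error within its cluster), the Python sketch below uses plain k-means as a stand-in for DKM's distance/kernel machinery; the array approx and the callable exact_solver are hypothetical placeholders, not the authors' implementation.

# Schematic sketch of medoid-based error correction under the assumptions above.
# approx: (n_realizations, n_times) array of cheap responses; exact_solver(i)
# returns the exact response for realization i (both hypothetical placeholders).
import numpy as np
from sklearn.cluster import KMeans

def medoid_error_correction(approx, exact_solver, n_clusters=10, seed=0):
    # Cluster realizations in the space of approximate responses.
    km = KMeans(n_clusters=n_clusters, random_state=seed).fit(approx)

    corrected = approx.copy()
    medoid_idx, medoid_err = [], []
    for k in range(n_clusters):
        members = np.where(km.labels_ == k)[0]
        # Medoid proxy: the member closest to the cluster centroid.
        d = np.linalg.norm(approx[members] - km.cluster_centers_[k], axis=1)
        m = members[np.argmin(d)]
        err = exact_solver(m) - approx[m]   # exact model run only at the medoid
        medoid_idx.append(m)
        medoid_err.append(err)
        # Local-error-model reading: apply the medoid error to every cluster member.
        corrected[members] += err

    return corrected, np.array(medoid_idx), np.array(medoid_err)

A global error model would instead interpolate all medoid errors across the whole set of realizations, regardless of cluster membership.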

Relevance:

20.00%

Publisher:

Abstract:

Much research in cognition and decision making suffers from a lack of formalism. The quantum probability program could help to improve this situation, but we wonder whether it would provide even more added value if its presumed focus on outcome models were complemented by process models that are, ideally, informed by ecological analyses and integrated into cognitive architectures.

Relevance:

20.00%

Publisher:

Abstract:

The multiscale finite-volume (MSFV) method is designed to reduce the computational cost of elliptic and parabolic problems with highly heterogeneous anisotropic coefficients. The reduction is achieved by splitting the original global problem into a set of local problems (with approximate local boundary conditions) coupled by a coarse global problem. It has been shown recently that the numerical errors in MSFV results can be reduced systematically with an iterative procedure that provides a conservative velocity field after any iteration step. The iterative MSFV (i-MSFV) method can be obtained with an improved (smoothed) multiscale solution to enhance the localization conditions, with a Krylov subspace method [e.g., the generalized-minimal-residual (GMRES) algorithm] preconditioned by the MSFV system, or with a combination of both. In a multiphase-flow system, a balance between accuracy and computational efficiency should be achieved by finding the minimum number of i-MSFV iterations (on pressure) necessary to achieve the desired accuracy in the saturation solution. In this work, we extend the i-MSFV method to sequential implicit simulation of time-dependent problems. To control the error of the coupled saturation/pressure system, we analyze the transport error caused by an approximate velocity field. We then propose an error-control strategy on the basis of the residual of the pressure equation. At the beginning of the simulation, the pressure solution is iterated until a specified accuracy is achieved. To minimize the number of iterations in a multiphase-flow problem, the solution at the previous timestep is used to improve the localization assumption at the current timestep. Additional iterations are used only when the residual becomes larger than a specified threshold value. Numerical results show that only a few iterations on average are necessary to improve the MSFV results significantly, even for very challenging problems. Therefore, the proposed adaptive strategy yields efficient and accurate simulation of multiphase flow in heterogeneous porous media.
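The residual-based control strategy lends itself to a simple sketch: iterate the multiscale pressure update until its residual falls below a tolerance, use a tighter tolerance only at the first timestep, and start later timesteps from the previous solution. The function below is a schematic outline under those assumptions; the callables passed in (one multiscale iteration, a residual norm, and a transport update) are hypothetical placeholders rather than an actual i-MSFV implementation.

# Schematic residual-controlled iteration loop; solve_iter, residual, and
# advance_transport are user-supplied callables standing in for one i-MSFV
# pressure iteration, the pressure-equation residual norm, and the
# saturation (transport) update. All names here are illustrative only.
def adaptive_pressure_control(p, n_timesteps, solve_iter, residual,
                              advance_transport, tol_first=1e-6, tol=1e-3,
                              max_iter=50):
    for step in range(n_timesteps):
        # Tight tolerance at the first timestep; later timesteps start from
        # the previous solution, which already improves the localization.
        target = tol_first if step == 0 else tol
        iterations = 0
        while residual(p) > target and iterations < max_iter:
            p = solve_iter(p)        # one iterative multiscale update
            iterations += 1
        advance_transport(p)         # transport solve on the current velocity field
    return p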

Relevance:

20.00%

Publisher:

Abstract:

With approximately 1,200 members, odorant receptor (OR) genes constitute the largest gene family in the mouse genome. A mature olfactory sensory neuron (OSN) is thought to express just one OR gene, and from one allele. The cell bodies of OSNs that express a given OR gene display a mosaic pattern within a particular region of the main olfactory epithelium. The mechanisms and cis-acting DNA elements that regulate the expression of one OR gene per OSN - OR gene choice - remain poorly understood. Here, we describe a reporter assay to identify minimal promoters for OR genes in transgenic mice, which are produced by the conventional method of pronuclear injection of DNA. The promoter transgenes are devoid of an OR coding sequence and instead drive expression of the axonal marker tau-β-galactosidase. For four mouse OR genes (M71, M72, MOR23, and P3) and one human OR gene (hM72), a mosaic, OSN-specific pattern of reporter expression can be obtained in transgenic mice with contiguous DNA segments of only ~300 bp centered around the transcription start site (TSS). The ~150 bp region upstream of the TSS contains three conserved sequence motifs, including homeodomain (HD) binding sites. Such HD binding sites are also present in the H and P elements, DNA sequences that are known to strongly influence OR gene expression. When a 19mer encompassing an HD binding site from the P element is multimerized nine times and added upstream of a MOR23 minigene that contains the MOR23 coding region, we observe a dramatic increase in the number of transgene-expressing founders and lines and in the number of labeled OSNs. By contrast, a nine-times-multimerized 19mer with a mutant HD binding site does not have these effects. We hypothesize that HD binding sites in the H and P elements and in OR promoters modulate the probability of OR gene choice.

Relevance:

20.00%

Publisher:

Abstract:

In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming that the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators select a threshold p value below which they reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine a critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are distinct theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to treat the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the observed difference, or one more extreme, given that the null is true. Another concern is the risk that a substantial proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
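To make the distinction concrete, the short simulation below (an illustrative example, not part of the article) draws both groups from the same distribution, so the null hypothesis is true. Each test yields its own p value, while the long-run proportion of p values below alpha = 0.05 is the Type I error rate of the decision rule.

# Under a true null, about 5% of tests are (falsely) significant at alpha = 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n_sim, rejections = 0.05, 10_000, 0

for _ in range(n_sim):
    # Two groups drawn from the same distribution: the null hypothesis is true.
    a = rng.normal(0.0, 1.0, size=30)
    b = rng.normal(0.0, 1.0, size=30)
    p = stats.ttest_ind(a, b).pvalue   # p value: evidence against the null for this sample
    rejections += p < alpha            # the decision rule defines the Type I error rate

print(rejections / n_sim)              # close to 0.05 by construction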

Relevance:

20.00%

Publisher:

Abstract:

Radioactive soil-contamination mapping and risk assessment are vital issues for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction accompanied (in some cases) by an estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations, in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
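A toy sketch of the probabilistic-mapping idea mentioned above: given many stochastic realizations of a contamination field, the per-cell probability of exceeding a regulatory level is simply the fraction of realizations above it. The lognormal fields and the threshold value below are synthetic placeholders, not Chernobyl data.

# Probability-of-exceedance map from an ensemble of synthetic realizations.
import numpy as np

rng = np.random.default_rng(0)
n_real, nx, ny = 200, 50, 50
# Hypothetical realizations of a contamination field (arbitrary units).
realizations = rng.lognormal(mean=3.0, sigma=0.5, size=(n_real, nx, ny))

threshold = 37.0                                      # illustrative regulatory level
p_exceed = (realizations > threshold).mean(axis=0)    # per-cell probability in [0, 1]
print(p_exceed.shape, p_exceed.min(), p_exceed.max())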

Relevance:

20.00%

Publisher:

Abstract:

When researchers introduce a new test, they have to demonstrate that it is valid, using unbiased designs and suitable statistical procedures. In this article we use Monte Carlo analyses to highlight how incorrect statistical procedures (i.e., stepwise regression, extreme-scores analyses) or ignoring regression assumptions (e.g., heteroscedasticity) contribute to wrong validity estimates. Beyond these demonstrations, and as an example, we re-examined the results reported by Warwick, Nettelbeck, and Ward (2010) concerning the validity of the Ability Emotional Intelligence Measure (AEIM). Warwick et al. used the wrong statistical procedures to conclude that the AEIM was incrementally valid beyond intelligence and personality traits in predicting various outcomes. In our re-analysis, we found that the reliability-corrected multiple correlation of their measures with personality and intelligence was up to .69. Using robust statistical procedures and appropriate controls, we also found that the AEIM did not predict incremental variance in GPA, stress, loneliness, or well-being, demonstrating the importance of testing for validity rather than merely looking for it.
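The kind of Monte Carlo demonstration described here can be sketched in a few lines: with predictors that are pure noise, screening for the ones that happen to correlate best with the outcome (a crude stand-in for stepwise selection) still produces a visibly non-zero apparent R-squared. The sample size and number of predictors below are arbitrary illustrative choices, not the values used in the article.

# Monte Carlo toy example of validity inflation from data-driven predictor selection.
import numpy as np

rng = np.random.default_rng(2)
n, n_pred, n_keep, n_sim = 60, 30, 3, 1_000
r2 = []

for _ in range(n_sim):
    y = rng.normal(size=n)
    X = rng.normal(size=(n, n_pred))            # predictors unrelated to y
    # "Stepwise-like" screening: keep the predictors most correlated with y.
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_pred)])
    Xs = X[:, np.argsort(corr)[-n_keep:]]
    Xs = np.column_stack([np.ones(n), Xs])      # add an intercept
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    r2.append(1 - resid.var() / y.var())

print(np.mean(r2))   # clearly above zero even though true validity is zero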

Relevance:

20.00%

Publisher:

Abstract:

This letter to the Editor comments on the article 'Practical relevance of pattern uniqueness in forensic science' by P.T. Jayaprakash (Forensic Science International, in press).

Relevance:

20.00%

Publisher:

Abstract:

PURPOSE: To review, retrospectively, the possible causes of sub- or intertrochanteric fractures after screw fixation of intracapsular fractures of the proximal femur. METHODS: Eighty-four patients with an intracapsular fracture of the proximal femur were operated on between 1995 and 1998 using three cannulated 6.25 mm screws. The screws were inserted in a triangular configuration, with one screw in the upper part of the femoral neck and two screws in the inferior part. Between 1999 and 2001, we used two screws proximally and one screw distally. RESULTS: In the first series, two patients died within one week after the operation. Sixty-four fractures healed without problems. Four patients developed an atrophic non-union; avascular necrosis of the femoral head was found in 11 patients. Three patients (3.6%) suffered a sub- and/or intertrochanteric fracture after a mean postoperative time of 30 days, in one case without obvious trauma. In all three cases, surgical revision was necessary. Between 1999 and 2001, we did not observe any fracture after screw fixation. CONCLUSION: Two screws in the inferior part of the femoral neck create a stress riser in the subtrochanteric region, potentially inducing a fracture in the weakened bone. For internal fixation of proximal intracapsular femoral fractures, only one screw must be inserted in the inferior part of the neck.

Relevance:

20.00%

Publisher:

Abstract:

Real-time glycemia is a cornerstone of metabolic research, particularly when performing oral glucose tolerance tests (OGTT) or glucose clamps. From 1965 to 2009, the gold-standard device for real-time plasma glucose assessment was the Beckman glucose analyzer 2 (Beckman Instruments, Fullerton, CA), whose technology couples a glucose oxidase enzymatic assay with oxygen sensors. Since its discontinuation in 2009, researchers have been left with few choices that utilize glucose oxidase technology. The first is the YSI 2300 (Yellow Springs Instruments Corp., Yellow Springs, OH), known to be as accurate as the Beckman(1). The YSI has been used extensively in clinical research studies and is used to validate other glucose monitoring devices(2). Its major drawback is that it is relatively slow and requires high maintenance. The Analox GM9 (Analox Instruments, London), more recent and faster, is increasingly used in clinical research(3) as well as in basic science(4) (e.g., 23 papers in Diabetes and 21 in Diabetologia).

Relevance:

20.00%

Publisher:

Abstract:

Using a large prospective cohort of over 12,000 women, we determined 2 thresholds (high risk and low risk of hip fracture) to use in a 10-yr hip fracture probability model that we had previously described, a model combining the heel stiffness index measured by quantitative ultrasound (QUS) and a set of easily determined clinical risk factors (CRFs). The model identified a higher percentage of women with fractures as high risk than a previously reported risk score that combined QUS and CRFs. In addition, it categorized women in a way that was quite consistent with the categorization obtained using dual X-ray absorptiometry (DXA) and the World Health Organization (WHO) classification system; the 2 methods identified similar percentages of women with and without fractures in each of their 3 categories, but only partly identified the same women. Nevertheless, combining our composite probability model with DXA in a case-finding strategy will likely further improve the detection of women at high risk of fragility hip fracture. We conclude that the currently proposed model may be of some use as an alternative to the WHO classification criteria for osteoporosis, at least when access to DXA is limited.

Relevance:

20.00%

Publisher:

Abstract:

In a system where tens of thousands of words are made up of a limited number of phonemes, many words are bound to sound alike. This similarity of the words in the lexicon, as characterized by phonological neighbourhood density (PhND), has been shown to affect the speed and accuracy of word comprehension and production. Whereas there is a consensus about the interfering nature of neighbourhood effects in comprehension, the language production literature offers a more contradictory picture, with mainly facilitatory but also interfering effects reported on word production. Here we report both types of effect in the same study. Multiple-regression mixed-model analyses were conducted on the effects of PhND on errors produced in a naming task by a group of 21 participants with aphasia. These participants produced more formal errors (an interfering effect) for words in dense phonological neighbourhoods, but produced fewer nonwords and semantic errors (a facilitatory effect) with increasing density. In order to investigate the nature of these opposite effects of PhND, we further analysed a subset of formal errors and nonword errors by distinguishing errors differing from the target by a single phoneme (corresponding to the definition of phonological neighbours) from those differing by two or more phonemes. This analysis confirmed that only formal errors that were phonological neighbours of the target increased in dense neighbourhoods, while all other errors decreased. Based on additional observations favouring a lexical origin of these formal errors (they exceeded the probability of producing a real-word error by chance, were of higher frequency, and preserved the grammatical category of the targets), we suggest that the interfering effect of PhND is due to competition between lexical neighbours and target words in dense neighbourhoods.