111 results for Statistical Error


Relevance: 20.00%

Abstract:

In groundwater applications, Monte Carlo methods are employed to model the uncertainty on geological parameters. However, their brute-force application becomes computationally prohibitive for highly detailed geological descriptions, complex physical processes, and a large number of realizations. The Distance Kernel Method (DKM) overcomes this issue by clustering the realizations in a multidimensional space based on the flow responses obtained by means of an approximate (computationally cheaper) model; then, the uncertainty is estimated from the exact responses that are computed only for one representative realization per cluster (the medoid). Usually, DKM is employed to decrease the size of the sample of realizations that are considered to estimate the uncertainty. We propose to use the information from the approximate responses for uncertainty quantification. The subset of exact solutions provided by DKM is then employed to construct an error model and correct the potential bias of the approximate model. Two error models are devised that both employ the difference between approximate and exact medoid solutions, but differ in the way medoid errors are interpolated to correct the whole set of realizations. The Local Error Model rests upon the clustering defined by DKM and can be seen as a natural way to account for intra-cluster variability; the Global Error Model employs a linear interpolation of all medoid errors regardless of the cluster to which the single realization belongs. These error models are evaluated for an idealized pollution problem in which the uncertainty of the breakthrough curve needs to be estimated. For this numerical test case, we demonstrate that the error models improve the uncertainty quantification provided by the DKM algorithm and are effective in correcting the bias of the estimate computed solely from the multiscale finite-volume (MsFV) results. The framework presented here is not specific to the methods considered and can be applied to other combinations of approximate models and techniques for selecting a subset of realizations.
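
A minimal sketch of how the two error models can be assembled, assuming the approximate and exact flow responses are plain NumPy arrays; the clustering here uses scikit-learn's KMeans with the realization closest to each centroid taken as the medoid, and the global model uses inverse-distance weighting as a stand-in for the linear interpolation of medoid errors (all names and numbers are illustrative, not the original DKM implementation):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
approx = rng.normal(size=(200, 50))      # approximate responses (realizations x time steps)

# cluster realizations on their approximate responses and pick one medoid per cluster
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(approx)
labels = km.labels_
medoid_idx = np.array([
    np.argmin(np.linalg.norm(approx - c, axis=1) + np.where(labels == k, 0.0, np.inf))
    for k, c in enumerate(km.cluster_centers_)
])

# stand-in for the expensive exact model, run on the medoids only
exact_medoids = 1.1 * approx[medoid_idx] + 0.3
err = exact_medoids - approx[medoid_idx]            # one error curve per cluster

# Local Error Model: each realization inherits the error of its cluster medoid
corrected_local = approx + err[labels]

# Global Error Model: weighted combination of all medoid errors, regardless of cluster
d = np.linalg.norm(approx[:, None, :] - approx[medoid_idx][None, :, :], axis=2)
w = 1.0 / np.maximum(d, 1e-12)
w /= w.sum(axis=1, keepdims=True)
corrected_global = approx + w @ err
```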

Relevance: 20.00%

Abstract:

The evolution of continuous traits is the central component of comparative analyses in phylogenetics, and the comparison of alternative models of trait evolution has greatly improved our understanding of the mechanisms driving phenotypic differentiation. Several factors influence the comparison of models, and we explore the effects of random errors in trait measurement on the accuracy of model selection. We simulate trait data under a Brownian motion model (BM) and introduce different magnitudes of random measurement error. We then evaluate the resulting statistical support for this model against two alternative models: Ornstein-Uhlenbeck (OU) and accelerating/decelerating rates (ACDC). Our analyses show that even small measurement errors (10%) consistently bias model selection towards erroneous rejection of BM in favour of more parameter-rich models (most frequently the OU model). Fortunately, methods that explicitly incorporate measurement errors in phylogenetic analyses considerably improve the accuracy of model selection. Our results call for caution in interpreting the results of model selection in comparative analyses, especially when complex models garner only modest additional support. Importantly, as measurement errors occur in most trait data sets, we suggest that estimation of measurement errors should always be performed during comparative analysis to reduce chances of misidentification of evolutionary processes.
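
A rough sketch of the kind of comparison involved: under Brownian motion the trait values are multivariate normal with covariance sigma²·C (C the phylogenetic covariance matrix), and measurement error adds tau² to the diagonal. The covariance matrix, parameter values, and optimizer below are illustrative stand-ins, not the authors' simulation pipeline:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
n = 30
B = rng.normal(size=(n, n))
C = B @ B.T / n + np.eye(n)                 # stand-in for a phylogenetic covariance matrix

# simulate a BM trait and contaminate it with ~10% measurement-error variance
y = multivariate_normal.rvs(mean=np.zeros(n), cov=C, random_state=1)
y = y + rng.normal(scale=np.sqrt(0.1), size=n)

def neg_loglik(params, with_error):
    sigma2 = np.exp(params[0])
    tau2 = np.exp(params[1]) if with_error else 0.0
    V = sigma2 * C + tau2 * np.eye(n)       # BM covariance plus measurement-error variance
    return -multivariate_normal.logpdf(y, mean=np.zeros(n), cov=V)

fit_bm = minimize(neg_loglik, x0=[0.0], args=(False,), method="Nelder-Mead")
fit_bm_err = minimize(neg_loglik, x0=[0.0, -2.0], args=(True,), method="Nelder-Mead")

aic = lambda nll, k: 2.0 * nll + 2.0 * k
print("AIC, BM ignoring measurement error:     ", round(aic(fit_bm.fun, 1), 1))
print("AIC, BM with measurement-error variance:", round(aic(fit_bm_err.fun, 2), 1))
```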

Relevance: 20.00%

Abstract:

The multiscale finite-volume (MSFV) method is designed to reduce the computational cost of elliptic and parabolic problems with highly heterogeneous anisotropic coefficients. The reduction is achieved by splitting the original global problem into a set of local problems (with approximate local boundary conditions) coupled by a coarse global problem. It has been shown recently that the numerical errors in MSFV results can be reduced systematically with an iterative procedure that provides a conservative velocity field after any iteration step. The iterative MSFV (i-MSFV) method can be obtained with an improved (smoothed) multiscale solution to enhance the localization conditions, with a Krylov subspace method [e.g., the generalized-minimal-residual (GMRES) algorithm] preconditioned by the MSFV system, or with a combination of both. In a multiphase-flow system, a balance between accuracy and computational efficiency should be achieved by finding a minimum number of i-MSFV iterations (on pressure), which is necessary to achieve the desired accuracy in the saturation solution. In this work, we extend the i-MSFV method to sequential implicit simulation of time-dependent problems. To control the error of the coupled saturation/pressure system, we analyze the transport error caused by an approximate velocity field. We then propose an error-control strategy on the basis of the residual of the pressure equation. At the beginning of simulation, the pressure solution is iterated until a specified accuracy is achieved. To minimize the number of iterations in a multiphase-flow problem, the solution at the previous timestep is used to improve the localization assumption at the current timestep. Additional iterations are used only when the residual becomes larger than a specified threshold value. Numerical results show that only a few iterations on average are necessary to improve the MSFV results significantly, even for very challenging problems. Therefore, the proposed adaptive strategy yields efficient and accurate simulation of multiphase flow in heterogeneous porous media.
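
The residual-based error control and the warm start from the previous timestep can be sketched generically; the sparse system and GMRES solve below stand in for the MSFV/i-MSFV machinery, and the threshold is an arbitrary illustrative value:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(2)
n = 500
A = sp.diags([-1.0, 2.2, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # stand-in pressure system
p_prev = np.zeros(n)        # pressure from the previous timestep, reused as initial guess
threshold = 1e-4            # residual level that triggers additional iterations

for step in range(5):
    b = rng.normal(size=n)  # right-hand side changes as the saturation field evolves
    residual = np.linalg.norm(b - A @ p_prev) / np.linalg.norm(b)
    if residual > threshold:
        # iterate only until the prescribed accuracy is reached, warm-started from p_prev
        p, info = spla.gmres(A, b, x0=p_prev, atol=threshold * np.linalg.norm(b))
    else:
        p = p_prev          # residual already small enough: skip the extra iterations
    p_prev = p
```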

Relevance: 20.00%

Abstract:

This paper deals with the recruitment strategies of employers in the low-skilled segment of the labour market. We focus on low-skilled workers because they are overrepresented among jobless people and constitute the bulk of the clientele included in various activation and labour market programmes. A better understanding of the constraints and opportunities of interventions in this labour market segment may help improve their quality and effectiveness. On the basis of qualitative interviews with 41 employers in six European countries, we find that the traditional signals known to be used as statistical discrimination devices (old age, immigrant status and unemployment) play a somewhat reduced role, since these profiles are overrepresented among applicants for low-skilled positions. However, we find that other signals, mostly considered to be indicators of motivation, have a bigger impact on the selection process. These tend to concern the channel through which contact with a prospective candidate is made. Unsolicited applications and recommendations from already employed workers send a positive signal, whereas referral by the public employment office is associated with expectations of lower motivation.

Relevance: 20.00%

Abstract:

OBJECTIVE: The purpose of this article is to assess the effect of the adaptive statistical iterative reconstruction (ASIR) technique on image quality in hip MDCT arthrography and to evaluate its potential for reducing radiation dose. SUBJECTS AND METHODS: Thirty-seven patients examined with hip MDCT arthrography were prospectively randomized into three different protocols: one with a regular dose (volume CT dose index [CTDIvol], 38.4 mGy) and two with a reduced dose (CTDIvol, 24.6 or 15.4 mGy). Images were reconstructed using filtered back projection (FBP) and four increasing percentages of ASIR (30%, 50%, 70%, and 90%). Image noise and contrast-to-noise ratio (CNR) were measured. Two musculoskeletal radiologists independently evaluated several anatomic structures and image quality parameters using a 4-point scale. They also jointly assessed acetabular labrum tears and articular cartilage lesions. RESULTS: With decreasing radiation dose level, image noise statistically significantly increased (p=0.0009) and CNR statistically significantly decreased (p=0.001). We also found a statistically significant reduction in noise (p=0.0001) and increase in CNR (p≤0.003) with increasing percentage of ASIR; in addition, we noted statistically significant increases in image quality scores for the labrum and cartilage, subchondral bone, overall diagnostic quality (up to 50% ASIR), and subjective noise (p≤0.04), and statistically significant reductions for the trabecular bone and muscles (p≤0.03). Regardless of the radiation dose level, there were no statistically significant differences in the detection and characterization of labral tears (n=24; p=1) and cartilage lesions (n=40; p≥0.89) depending on the ASIR percentage. CONCLUSION: The use of up to 50% ASIR in hip MDCT arthrography helps to reduce radiation dose by approximately 35-60%, while maintaining diagnostic image quality comparable to that of a regular-dose protocol using FBP.
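
For reference, the image noise and contrast-to-noise ratio measured in such studies are typically computed from regions of interest roughly as follows (the HU values and ROI sizes below are synthetic, not study data):

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical CT regions of interest in Hounsfield units: contrast-filled joint vs adjacent soft tissue
roi_contrast = rng.normal(loc=300.0, scale=12.0, size=(40, 40))
roi_tissue = rng.normal(loc=50.0, scale=12.0, size=(40, 40))

noise = roi_tissue.std(ddof=1)                            # image noise: SD of HU in a homogeneous ROI
cnr = (roi_contrast.mean() - roi_tissue.mean()) / noise   # contrast-to-noise ratio

print(f"noise = {noise:.1f} HU, CNR = {cnr:.1f}")
```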

Relevance: 20.00%

Abstract:

In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, assuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators select a threshold p value below which they reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose a Type I error level (rejecting the null hypothesis when it is true), a Type II error level (accepting the null hypothesis when it is false), and a corresponding critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the observed difference, or one more extreme, given that the null hypothesis is true. Another concern is the risk that a substantial proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
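
A toy illustration of the two frameworks on a two-sample comparison (the data and the 5% threshold are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
control = rng.normal(loc=0.0, scale=1.0, size=30)
treated = rng.normal(loc=0.5, scale=1.0, size=30)

# Fisher: the p value measures the strength of evidence against the null of no effect
res = stats.ttest_ind(treated, control)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.4f}")

# Neyman-Pearson: fix a Type I error rate alpha and reject when the statistic
# falls in the critical region defined before seeing the data
alpha = 0.05
critical_value = stats.t.ppf(1 - alpha / 2, df=len(control) + len(treated) - 2)
print("reject H0:", abs(res.statistic) > critical_value)
```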

Relevance: 20.00%

Abstract:

Question: When multiple observers record the same spatial units of alpine vegetation, how much variation is there in the records, and what are the consequences of this variation for monitoring schemes designed to detect change? Location: One test summit in Switzerland (Alps) and one test summit in Scotland (Cairngorm Mountains). Method: Eight observers used the GLORIA protocols for species composition and visual cover estimates in percent on large summit sections (>100 m²) and for species composition and frequency in nested quadrats (1 m²). Results: The multiple records from the same spatial unit for species composition and species cover showed considerable variation in the two countries. Estimates of pseudoturnover of composition and coefficients of variation of cover estimates for vascular plant species in 1 m × 1 m quadrats showed less variation than in previously published reports, whereas our results for the larger sections were broadly in line with previous reports. In Scotland, estimates for bryophytes and lichens were more variable than for vascular plants. Conclusions: Statistical power calculations indicated that, unless large numbers of plots were used, changes in cover or frequency were only likely to be detected for abundant species (exceeding 10% cover) or if relative changes were large (50% or more). Lower variation could be achieved with point methods and with larger numbers of small plots. However, as summits often differ strongly from each other, supplementary summits cannot be considered a way of increasing statistical power without introducing a supplementary component of variance into the analysis, and hence into the power calculations.
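
Pseudoturnover between two observers recording the same plot is usually computed as the percentage of records unique to one observer; a minimal sketch with made-up species lists:

```python
def pseudoturnover(obs_a: set[str], obs_b: set[str]) -> float:
    """Percentage of records unique to one observer: 100 * (A + B) / (S_a + S_b)."""
    only_a = len(obs_a - obs_b)
    only_b = len(obs_b - obs_a)
    return 100.0 * (only_a + only_b) / (len(obs_a) + len(obs_b))

observer_1 = {"Salix herbacea", "Carex curvula", "Poa alpina", "Luzula spicata"}
observer_2 = {"Salix herbacea", "Carex curvula", "Festuca halleri"}
print(f"pseudoturnover = {pseudoturnover(observer_1, observer_2):.1f}%")
```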

Relevance: 20.00%

Abstract:

The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that have to be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. Because indoor radon is a multivariate process, it was important first to define the influence of each factor. In particular, the influence of geology, which is closely associated with indoor radon, had to be defined. This association was indeed observed for the Swiss data, but geology did not prove to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and the multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) was proposed as a procedure to evaluate data clustering as a function of the radon level. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase was accompanied by the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis followed a top-down resolution approach, from the regional to the local level, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized to cope with the stationarity conditions of geostatistical models. Common spatial modeling methods such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset. A bottom-to-top approach of increasing method complexity was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a cross-validation-based data filter (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests for data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformations and simulations thus allowed the use of multi-Gaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for modeling extreme values through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed more robustly. An error measure was defined in relation to the decision function used for hardening the data classification. Among the classification methods, probabilistic neural networks (PNN) proved better adapted to modeling high-threshold categorization and to automation. Support vector machines (SVM), by contrast, performed well under balanced category conditions.
In general, it was concluded that no single prediction or estimation method is superior under all conditions of scale and neighborhood definition. Simulations should form the basis, while other methods can provide complementary information to support efficient decision making on indoor radon.
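
As one concrete example of the declustering step mentioned above, cell declustering weights each sample inversely to the number of samples falling in its grid cell, so that densely sampled areas do not dominate the global histogram (coordinates, cell size, and radon values below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.concatenate([rng.uniform(0, 10, 200), rng.uniform(4, 5, 300)])   # clustered easting
y = np.concatenate([rng.uniform(0, 10, 200), rng.uniform(4, 5, 300)])   # clustered northing
radon = rng.lognormal(mean=4.0, sigma=0.8, size=x.size)                  # hypothetical Bq/m3 values

cell = 1.0                                                               # declustering cell size
ij = np.floor(np.column_stack([x, y]) / cell).astype(int)
_, inverse, counts = np.unique(ij, axis=0, return_inverse=True, return_counts=True)
w = 1.0 / counts[inverse.ravel()]                                        # weight ~ 1 / samples per cell
w /= w.sum()

print(f"naive mean:       {radon.mean():.1f}")
print(f"declustered mean: {np.sum(w * radon):.1f}")
```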

Relevance: 20.00%

Abstract:

INTRODUCTION: Adaptive statistical iterative reconstruction (ASIR) can decrease image noise, thereby generating CT images of comparable diagnostic quality with less radiation. The purpose of this study is to quantify the effect of systematic use of ASIR versus filtered back projection (FBP) for neuroradiology CT protocols on patients' radiation dose and image quality. METHODS: We evaluated the effect of ASIR on six types of neuroradiologic CT studies: adult and pediatric unenhanced head CT, adult cervical spine CT, adult cervical and intracranial CT angiography, adult soft tissue neck CT with contrast, and adult lumbar spine CT. For each type of CT study, two groups of 100 consecutive studies were retrospectively reviewed: 100 studies performed with FBP and 100 studies performed with an ASIR/FBP blending factor of 40%/60% and appropriate noise indices. The volume CT dose index (CTDIvol), dose-length product (DLP), and noise were recorded. Each study was also reviewed for image quality by two reviewers. Continuous and categorical variables were compared by t test and free permutation test, respectively. RESULTS: For adult unenhanced brain CT, CT cervical myelography, cervical and intracranial CT angiography, and lumbar spine CT, both CTDIvol and DLP were lowered by up to 10.9% (p < 0.001), 17.9% (p = 0.005), 20.9% (p < 0.001), and 21.7% (p = 0.001), respectively, by using ASIR compared with FBP alone. Image quality and noise were similar for both FBP and ASIR. CONCLUSION: We recommend routine use of iterative reconstruction for neuroradiology CT examinations because this approach affords a significant dose reduction while preserving image quality.
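
The statistical comparison described (continuous dose metrics by t test, ordinal quality scores by a free permutation test) can be sketched as follows; all numbers are synthetic, not the study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
ctdi_fbp = rng.normal(60.0, 8.0, 100)         # synthetic CTDIvol values, FBP protocol
ctdi_asir = rng.normal(53.0, 8.0, 100)        # synthetic CTDIvol values, 40% ASIR protocol
print(stats.ttest_ind(ctdi_fbp, ctdi_asir))   # continuous variable: Student t test

score_fbp = rng.integers(2, 5, 100)           # synthetic ordinal image-quality scores
score_asir = rng.integers(2, 5, 100)

def perm_test(a, b, n_perm=10_000, rng=rng):
    """Free permutation test on the difference in mean scores of two groups."""
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(pooled[:a.size].mean() - pooled[a.size:].mean()) >= abs(observed):
            count += 1
    return count / n_perm

print("permutation p value:", perm_test(score_fbp, score_asir))
```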

Relevance: 20.00%

Abstract:

Research projects aimed at proposing fingerprint statistical models based on the likelihood ratio framework have shown that low-quality finger impressions left at crime scenes may have significant evidential value. These impressions currently either go unrecovered, are considered to be of no value when first analyzed by fingerprint examiners, or lead to inconclusive results when compared with control prints. There are growing concerns within the fingerprint community that recovering and examining these low-quality impressions will result in a significant increase in the workload of fingerprint units and, ultimately, in the number of backlogged cases. This study was designed to measure the number of impressions currently not recovered or not considered for examination, and to assess the usefulness of these impressions in terms of the number of additional detections that would result from their examination.

Relevance: 20.00%

Abstract:

As a thorough combination of probability and graph theory, Bayesian networks currently enjoy widespread interest as a means for studying factors that affect the coherent evaluation of scientific evidence in forensic science. Paper I of this series intends to contribute to the discussion of Bayesian networks as a framework that is helpful for both illustrating and implementing statistical procedures commonly employed for the study of uncertainties (e.g. the estimation of unknown quantities). While the respective statistical procedures are widely described in the literature, the primary aim of this paper is to offer an essentially non-technical introduction to how interested readers may use these analytical approaches - with the help of Bayesian networks - for processing their own forensic science data. Attention is mainly drawn to the structure and underlying rationale of a series of basic and context-independent network fragments that users may incorporate as building blocks when constructing larger inference models. As an example of how this may be done, the proposed concepts will be used in a second paper (Part II) for specifying graphical probability networks whose purpose is to assist forensic scientists in the evaluation of scientific evidence encountered in the context of forensic document examination (i.e. results of the analysis of black toners present on printed or copied documents).
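
A minimal, context-independent fragment of the kind described, a two-node network H -> E, reduces to Bayes' theorem; the sketch below uses plain NumPy and made-up probabilities rather than any particular Bayesian network software:

```python
import numpy as np

# prior over the proposition H (e.g. "the questioned item comes from the suspected source")
p_h = np.array([0.5, 0.5])                 # [H true, H false]

# conditional probability table P(E | H) for the observed evidence E
p_e_given_h = np.array([[0.95, 0.10],      # P(E present | H true), P(E present | H false)
                        [0.05, 0.90]])     # P(E absent  | H true), P(E absent  | H false)

# observing "E present" and applying Bayes' theorem gives the posterior over H
joint = p_e_given_h[0] * p_h
posterior = joint / joint.sum()
likelihood_ratio = p_e_given_h[0, 0] / p_e_given_h[0, 1]

print("posterior P(H | E present):", posterior)
print("likelihood ratio:", likelihood_ratio)
```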

Relevance: 20.00%

Abstract:

In this study, we propose an evaluation of the angular effects altering the spectral response of land cover across multi-angle remote sensing image acquisitions. The shift in the statistical distribution of the pixels observed in an in-track sequence of WorldView-2 images is analyzed by means of a kernel-based measure of distance between probability distributions. Afterwards, the portability of supervised classifiers across the sequence is investigated by looking at the evolution of the classification accuracy with respect to the changing observation angle. In this context, the effectiveness of various physically and statistically based preprocessing methods in obtaining angle-invariant data spaces is compared and possible synergies are discussed.
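
The kernel-based distance between the pixel distributions of two acquisitions can be illustrated with the maximum mean discrepancy (named here as an assumption; the abstract does not specify the measure), computed with an RBF kernel on synthetic spectra:

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=0.5):
    """Squared maximum mean discrepancy between two pixel samples (biased estimator)."""
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(7)
nadir = rng.normal(0.0, 1.0, size=(500, 8))       # pixels of an 8-band image, near-nadir view
off_nadir = rng.normal(0.3, 1.2, size=(500, 8))   # same scene at a larger observation angle

print(f"MMD^2 between the two acquisitions: {mmd2(nadir, off_nadir):.4f}")
```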