929 results for Probabilistic Error Correction
Abstract:
Optical aberration due to the nonflatness of spatial light modulators used in holographic optical tweezers significantly deteriorates the quality of the trap and may easily prevent stable trapping of particles. We use a Shack-Hartmann sensor to measure the distorted wavefront at the modulator plane; the conjugate of this wavefront is then added to the holograms written into the display to counteract its own curvature and thus compensate the optical aberration of the system. For a Holoeye LC-R 2500 reflective device, flatness is improved from 0.8λ to λ/16 (λ=532 nm), leading to a diffraction-limited spot at the focal plane of the microscope objective, which makes stable trapping possible. This process could be fully automated in a closed-loop configuration and would eventually allow other sources of aberration in the optical setup to be corrected for.
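A minimal sketch of the compensation step described above (adding the conjugate of the measured wavefront phase to the hologram before it is written to the display); the array shapes and the 8-bit grey-level encoding are assumptions for illustration, not details from the paper:

import numpy as np

def compensate_hologram(hologram_phase, aberration_phase):
    """Add the conjugate of the measured aberration phase to a hologram.

    Both inputs are 2-D arrays of phase values (radians) sampled on the
    SLM pixel grid; displaying the corrected hologram cancels the
    device's own non-flatness.
    """
    corrected = hologram_phase - aberration_phase  # adding the conjugate = subtracting the phase
    return np.mod(corrected, 2 * np.pi)            # wrap into [0, 2*pi)

# Hypothetical usage: map the wrapped phase to 8-bit grey levels for the display.
phase = compensate_hologram(np.zeros((768, 1024)), np.zeros((768, 1024)))
grey_levels = np.round(phase / (2 * np.pi) * 255).astype(np.uint8)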
Abstract:
The multiscale finite-volume (MSFV) method is designed to reduce the computational cost of elliptic and parabolic problems with highly heterogeneous anisotropic coefficients. The reduction is achieved by splitting the original global problem into a set of local problems (with approximate local boundary conditions) coupled by a coarse global problem. It has been shown recently that the numerical errors in MSFV results can be reduced systematically with an iterative procedure that provides a conservative velocity field after any iteration step. The iterative MSFV (i-MSFV) method can be obtained with an improved (smoothed) multiscale solution to enhance the localization conditions, with a Krylov subspace method [e.g., the generalized-minimal-residual (GMRES) algorithm] preconditioned by the MSFV system, or with a combination of both. In a multiphase-flow system, a balance between accuracy and computational efficiency should be achieved by finding a minimum number of i-MSFV iterations (on pressure), which is necessary to achieve the desired accuracy in the saturation solution. In this work, we extend the i-MSFV method to sequential implicit simulation of time-dependent problems. To control the error of the coupled saturation/pressure system, we analyze the transport error caused by an approximate velocity field. We then propose an error-control strategy on the basis of the residual of the pressure equation. At the beginning of simulation, the pressure solution is iterated until a specified accuracy is achieved. To minimize the number of iterations in a multiphase-flow problem, the solution at the previous timestep is used to improve the localization assumption at the current timestep. Additional iterations are used only when the residual becomes larger than a specified threshold value. Numerical results show that only a few iterations on average are necessary to improve the MSFV results significantly, even for very challenging problems. Therefore, the proposed adaptive strategy yields efficient and accurate simulation of multiphase flow in heterogeneous porous media.
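A schematic sketch of the adaptive error-control strategy described above: iterate on pressure only while the pressure-equation residual exceeds a tolerance, and reuse the previous timestep's solution to improve the localization assumption. The solver object and its method names are hypothetical placeholders, not the authors' code:

def adaptive_imsfv_step(pressure_prev, saturation, solver, tol=1e-3, max_iter=20):
    """One sequential-implicit timestep with residual-based error control."""
    # Reuse the previous timestep's pressure to improve the localization assumption.
    pressure = solver.msfv_solve(initial_guess=pressure_prev, saturation=saturation)

    # Additional i-MSFV iterations only while the residual exceeds the threshold.
    iterations = 0
    while solver.pressure_residual(pressure, saturation) > tol and iterations < max_iter:
        # e.g. a smoothed multiscale update or a GMRES step preconditioned by the MSFV system
        pressure = solver.imsfv_iteration(pressure, saturation)
        iterations += 1

    velocity = solver.conservative_velocity(pressure, saturation)  # conservative after any iteration
    saturation_new = solver.transport_solve(velocity, saturation)
    return pressure, saturation_new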
Abstract:
This document, produced by the Iowa Department of Administrative Services, has been developed to provide a wide range of information about executive branch agencies/departments on a single sheet of paper. The fact sheet provides general information, contact information, workforce data, leave and benefits information, and affirmative action data.
Abstract:
Background: Current methodology of gene expression analysis limits the possibilities of comparison between cells/tissues of organs in which cell size and/or number changes as a consequence of the study (e.g. starvation). A method relating the abundance of specific mRNA copies per cell may allow direct comparison of different organs and/or changing physiological conditions. Methods: With a number of selected genes, we analysed the relationship between the number of bases and the fluorescence recorded at a preset level using cDNA standards. A linear relationship was found between the final number of bases and the length of the transcript. The constants of this equation and those of the relationship between fluorescence and number of bases in cDNA were determined, and a general equation linking the length of the transcript and the initial number of copies of mRNA was deduced for a given pre-established fluorescence setting. This allowed the calculation of the concentration of the corresponding mRNAs per gram of tissue. The inclusion of tissue RNA and the DNA content per cell allowed the calculation of the mRNA copies per cell. Results: The application of this procedure to six genes: Arbp, cyclophilin, ChREBP, T4 deiodinase 2, acetyl-CoA carboxylase 1 and IRS-1, in liver and retroperitoneal adipose tissue of food-restricted rats allowed precise measurement of their changes irrespective of the shrinking of the tissue, the loss of cells or changes in cell size, factors that deeply complicate the comparison between changing tissue conditions. The percentage results obtained with the present method were essentially the same as those obtained with the delta-delta procedure and with individual cDNA standard curve quantitative RT-PCR estimation. Conclusion: The method presented allows the comparison (i.e. as copies of mRNA per cell) between different genes and tissues, establishing the degree of abundance of the different molecular species tested.
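The unit chain implied by the abstract (calibrated copy number, then copies per gram of tissue, then copies per cell) can be illustrated with a short worked calculation; every numerical value below is a placeholder chosen for the example, not a constant from the study (6.6 pg is a commonly cited DNA content for a diploid mammalian cell):

# Hypothetical illustration of the unit conversions; no values come from the paper.
copies_per_ng_total_rna = 2.0e5    # copies of a given mRNA per ng of total RNA (placeholder)
rna_ng_per_g_tissue     = 1.5e6    # total RNA yield per gram of tissue (placeholder)
dna_ug_per_g_tissue     = 2.0e3    # DNA per gram of tissue (placeholder)
dna_pg_per_cell         = 6.6      # DNA content per diploid cell (commonly cited figure)

copies_per_g_tissue = copies_per_ng_total_rna * rna_ng_per_g_tissue
cells_per_g_tissue  = dna_ug_per_g_tissue * 1e6 / dna_pg_per_cell   # convert ug to pg
copies_per_cell     = copies_per_g_tissue / cells_per_g_tissue

print(f"{copies_per_cell:.0f} mRNA copies per cell")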
Abstract:
Aim Conservation strategies are in need of predictions that capture spatial community composition and structure. Currently, the methods used to generate these predictions generally focus on deterministic processes and omit important stochastic processes and other unexplained variation in model outputs. Here we test a novel approach to community models that accounts for this variation, and we determine how well it reproduces observed properties of alpine butterfly communities. Location The western Swiss Alps. Methods We propose a new approach to processing probabilistic predictions derived from stacked species distribution models (S-SDMs) in order to predict and assess the uncertainty in the predictions of community properties. We test the utility of our novel approach against a traditional threshold-based approach. We used mountain butterfly communities spanning a large elevation gradient as a case study and evaluated the ability of our approach to model the species richness and phylogenetic diversity of communities. Results S-SDMs reproduced the observed decrease in phylogenetic diversity and species richness with elevation, a signature of environmental filtering. The prediction accuracy of community properties varied along the environmental gradient: variability in predictions of species richness was higher at low elevation, while it was lower for phylogenetic diversity. Our approach allowed us to map the variability in species richness and phylogenetic diversity projections. Main conclusion Using our probabilistic approach to process species distribution model outputs to reconstruct communities furnishes an improved picture of the range of possible assemblage realisations under similar environmental conditions given stochastic processes, and helps inform managers of the uncertainty in the modelling results.
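A minimal sketch of the probabilistic stacking idea tested above: instead of thresholding per-species occurrence probabilities, draw Bernoulli community realisations from them and summarise the resulting distribution of richness. The probabilities below are made up for illustration:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-species occurrence probabilities at one site, as output by SDMs.
p_occurrence = np.array([0.90, 0.60, 0.35, 0.20, 0.05])

# Traditional threshold-based stacking: a single richness value.
richness_threshold = int(np.sum(p_occurrence >= 0.5))

# Probabilistic stacking: many Bernoulli realisations give a richness distribution.
realisations = rng.random((10_000, p_occurrence.size)) < p_occurrence
richness_draws = realisations.sum(axis=1)

print(richness_threshold, richness_draws.mean(), np.percentile(richness_draws, [2.5, 97.5]))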
Abstract:
Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
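A toy illustration of the contrast drawn above between a single-value regression estimate and multiple stochastic realisations; the exponential covariance model and all parameter values are arbitrary choices for the sketch, not taken from the Chernobyl case study:

import numpy as np

rng = np.random.default_rng(1)

# 1-D grid of prediction locations and an exponential covariance model (arbitrary parameters).
x = np.linspace(0.0, 10.0, 200)
sill, corr_length = 1.0, 2.0
cov = sill * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_length)

mean_trend = np.zeros_like(x)   # stand-in for a single regression-style estimate

# Stochastic simulation: many equally probable realisations of the spatial field.
chol = np.linalg.cholesky(cov + 1e-10 * np.eye(x.size))
realisations = mean_trend + (chol @ rng.standard_normal((x.size, 500))).T

# Per-location risk measure (probability of exceeding a threshold), which the
# single-value estimate alone cannot provide.
p_exceed = (realisations > 1.0).mean(axis=0)
print(p_exceed[:5])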
Abstract:
The problem of prediction is considered in a multidimensional setting. Extending an idea presented by Barndorff-Nielsen and Cox, a predictive density for a multivariate random variable of interest is proposed. This density has the form of an estimative density plus a correction term. It gives simultaneous prediction regions with coverage error of smaller asymptotic order than the estimative density. A simulation study is also presented showing the magnitude of the improvement with respect to the estimative method.
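Schematically, and written here as a generic asymptotic form rather than the authors' exact expression, the proposed predictive density has the structure of an estimative density plus an order-1/n correction:

\[
  \tilde{p}(z \mid y) \;=\; p\bigl(z; \hat{\theta}(y)\bigr)
  \;+\; \frac{1}{n}\, q\bigl(z; \hat{\theta}(y)\bigr) \;+\; O\bigl(n^{-2}\bigr),
\]

so that prediction regions built from \(\tilde{p}\) have coverage error of smaller asymptotic order than those built from the estimative density \(p(z;\hat{\theta})\) alone.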
Abstract:
The present research deals with an important public health threat, namely the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that should be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, as it is closely associated with indoor radon. This association was indeed observed for the Swiss data but not proved to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both univariate and multivariate levels, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, data partitioning was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset. A bottom-to-top method-complexity approach was adopted and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests for data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for extreme-value modeling through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for data classification hardening. Within the classification methods, probabilistic neural networks (PNN) proved better adapted for modeling high-threshold categorization and for automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions.
In general, it was concluded that no single prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to support efficient decision making on indoor radon.
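As a concrete example of the simpler end of the method-complexity ladder mentioned above, a distance-weighted K Nearest Neighbors interpolation can be set up in a few lines; the coordinates and radon values below are synthetic placeholders, not Swiss data:

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)

# Synthetic placeholder data: planar coordinates (km) and indoor radon values (Bq/m^3).
coords = rng.uniform(0, 100, size=(500, 2))
radon = rng.lognormal(mean=4.0, sigma=0.8, size=500)

# Distance-weighted KNN as a simple exploratory spatial estimator.
knn = KNeighborsRegressor(n_neighbors=10, weights="distance").fit(coords, radon)

# Predict on a regular grid to produce an exploratory radon map.
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
radon_map = knn.predict(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)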
Abstract:
The full document of the "XVI Setmana de Cinema Formatiu" can be consulted at: http://hdl.handle.net/2445/22523
Abstract:
Continuing the line of research on the notions of reason, consciousness and subjectivity in Descartes defended in previously published articles, the present work contributes a new argument to that line of inquiry, highlighting that the epistemological problem of error is conditioned by the Cartesian notion of rationality itself, and that this notion is far removed from what has traditionally been understood as an abstract, formal rationality free of human imperatives. Conversely, it also seeks to show how the fact of error contributes, in Cartesian terms, to defining a deeply humanized model of rationality. After an introduction, the article analyses the relations between the basic concepts of rationality, dogma and nature, which then makes it possible to establish the mutual belonging of rationality and error, and finally to see how human freedom is at once, and for both, their ultimate foundation.
Abstract:
Introduction. The concept of comorbidity in neurodevelopmental disorders such as autism is sometimes ambiguous. The co-occurrence of anxiety and autism is clinically significant; however, it is not always easy to determine whether it is a "real" comorbidity, in which the two comorbid conditions are phenotypically and etiologically identical to the anxiety that would be seen in neurotypically developing individuals; whether it is an anxiety phenotypically altered by the pathogenic processes of autism spectrum disorders, resulting in a variant specific to them; or whether we are dealing with a false comorbidity derived from imprecise differential diagnoses. Development. The article proposes two explanatory hypotheses for this co-occurrence, which feed back into each other and which remain, in essence, a thinking aloud based on the scientific evidence currently available. The first is the "social error" hypothesis, which holds that the mismatch in the social behaviour of people with autism, arising from alterations in social cognition processes, contributes to exacerbating anxiety in autism. The second, the allostatic load hypothesis, holds that anxiety is the response to chronic stress, to the wear or exhaustion produced by the hyperactivation of certain structures of the limbic system. Conclusions. The prototypical manifestations of anxiety present in people with autism are not always related to the same biopsychosocial variables observed in people without autism. The evidence points to hyperreactive fight-or-flight responses (hypervigilance) when the person is outside their comfort zone, supporting the "social error" hypothesis and that of the decompensation of the allostatic mechanism that enables coping with stress.
Abstract:
When researchers introduce a new test they have to demonstrate that it is valid, using unbiased designs and suitable statistical procedures. In this article we use Monte Carlo analyses to highlight how incorrect statistical procedures (i.e., stepwise regression, extreme scores analyses) or ignoring regression assumptions (e.g., heteroscedasticity) contribute to wrong validity estimates. Beyond these demonstrations, and as an example, we re-examined the results reported by Warwick, Nettelbeck, and Ward (2010) concerning the validity of the Ability Emotional Intelligence Measure (AEIM). Warwick et al. used the wrong statistical procedures to conclude that the AEIM was incrementally valid beyond intelligence and personality traits in predicting various outcomes. In our re-analysis, we found that the reliability-corrected multiple correlation of their measures with personality and intelligence was up to .69. Using robust statistical procedures and appropriate controls, we also found that the AEIM did not predict incremental variance in GPA, stress, loneliness, or well-being, demonstrating the importance of testing validity rather than merely looking for it.
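A small Monte Carlo sketch of the kind of artefact described above: with an outcome unrelated to any predictor, keeping only the "best" predictors and reporting their in-sample fit yields spuriously optimistic validity estimates. The sample size, predictor count and selection rule are arbitrary choices for the demonstration:

import numpy as np

rng = np.random.default_rng(3)
n, k, n_sims, keep = 100, 20, 1000, 3
r2_after_selection = []

for _ in range(n_sims):
    X = rng.standard_normal((n, k))
    y = rng.standard_normal(n)                       # outcome unrelated to all predictors
    corrs = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(k)])
    best = np.argsort(corrs)[-keep:]                 # crude stepwise-like selection
    Xb = np.column_stack([np.ones(n), X[:, best]])
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    resid = y - Xb @ beta
    r2_after_selection.append(1 - resid.var() / y.var())

print(f"mean in-sample R^2 after selection: {np.mean(r2_after_selection):.3f} (true R^2 = 0)")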
Abstract:
Diffusion weighting in magnetic resonance imaging (MRI) increases the sensitivity to molecular Brownian motion, providing insight into the micro-environment of the underlying tissue types and structures. At the same time, the diffusion weighting renders the scans sensitive to other motion, including bulk patient motion. Typically, several image volumes are needed to extract diffusion information, which also induces inter-volume motion susceptibility. Bulk motion is more likely during long acquisitions, such as those used in diffusion tensor, diffusion spectrum and q-ball imaging. Image registration methods are successfully used to correct for bulk motion in other MRI time series, but their performance in diffusion-weighted MRI is limited, since diffusion weighting introduces strong signal and contrast changes between serial image volumes. In this work, we combine the capability of free induction decay (FID) navigators, which provide information on object motion, with image registration methodology to prospectively (or optionally retrospectively) correct for motion in diffusion imaging of the human brain. Eight healthy subjects were instructed to perform small-scale voluntary head motion during clinical diffusion tensor imaging acquisitions. The implemented motion detection based on FID navigator signals was processed in real time and provided excellent detection of voluntary motion patterns even at a sub-millimetre scale (sensitivity ≥ 92%, specificity > 98%). Motion detection triggered an additional image volume acquisition with b=0 s/mm2, which was subsequently co-registered to a reference volume. In the prospective correction scenario, the calculated motion parameters were applied to perform a real-time update of the gradient coordinate system to correct for the head movement. Quantitative analysis revealed that the motion correction implementation is capable of correcting head motion in diffusion-weighted MRI to a level comparable to scans without voluntary head motion. The results indicate the potential of this method to improve image quality in diffusion-weighted MRI, a concept that can also be applied when the highest diffusion weightings are used.
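The acquisition logic described above can be summarised as a schematic control loop; every name below (the scanner interface, its methods, the threshold) is a hypothetical placeholder sketching the workflow, not the authors' implementation or any scanner API:

def prospective_motion_loop(diffusion_directions, scanner, fid_threshold):
    """Schematic per-volume loop: detect motion from FID navigators, then
    re-register and update the gradient coordinate system in real time."""
    reference_b0 = scanner.acquire_b0_volume()
    for direction in diffusion_directions:
        fid_signal = scanner.read_fid_navigator()
        if scanner.motion_metric(fid_signal) > fid_threshold:
            # Motion detected: acquire an extra b=0 volume and register it to the reference.
            extra_b0 = scanner.acquire_b0_volume()
            motion_params = scanner.rigid_register(extra_b0, reference_b0)
            # Prospective correction: update the gradient coordinate system.
            scanner.update_gradient_coordinates(motion_params)
        scanner.acquire_diffusion_volume(direction)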
Abstract:
" Has comes un error" . " Estas en un error" . " És un error votar aquest parti!" . " És un error votar" . " És un error afirmar que 2 + 3 = 9" . " És un error afirmar que és un error afirmar que 2 + 3 = 5" . " És un error afirmar que, quan dividim, sempre obtenim un nombre més petit" . " És un error que l'existencia precedeixi l'essencia" . " És un error que vulguis enganyar-me" . " És un error afirmar que a = a" ... i així fins a acomplir les il'limitades possibilitats del llenguatge. Qualsevol judici, en la mesura que té un significat, en la mesura que és assertori, és susceptible de ser erroni, de ser fals. Peró, l'error té sempre la mateixa qualitat? Us hem proposat un reguitzell d'exemples. És obvi (si excloem la mentida, que no és error, sinó mentida) que el significat d'" error" (o el seu valor) no és identic en tots els casos.
Abstract:
A new model for dealing with decision making under risk, which considers subjective and objective information in the same formulation, is presented here. The uncertain probabilistic weighted average (UPWA) is also presented. Its main advantage is that it unifies the probability and the weighted average in the same formulation while considering the degree of importance that each case has in the analysis. Moreover, it is able to deal with uncertain environments represented in the form of interval numbers. We study some of its main properties and particular cases. The applicability of the UPWA is also studied; it is very broad, because all previous studies that use the probability or the weighted average can be revised with this new approach. Focus is placed on a multi-person decision-making problem regarding the selection of strategies, using the theory of expertons.
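One common way such probability/weighted-average unifications are written (shown here as an illustrative formulation, not necessarily the paper's exact operator) convexly combines the probabilistic weights with the importance weights and applies the result to interval-valued arguments:

def upwa(intervals, probabilities, weights, beta):
    """Illustrative uncertain probabilistic weighted average.

    intervals: list of (low, high) interval numbers
    probabilities, weights: nonnegative lists, each summing to 1
    beta: degree of importance given to the probabilistic information, in [0, 1]
    """
    combined = [beta * p + (1 - beta) * w for p, w in zip(probabilities, weights)]
    low = sum(c * a for c, (a, _) in zip(combined, intervals))
    high = sum(c * b for c, (_, b) in zip(combined, intervals))
    return (low, high)

# Hypothetical interval payoffs of one strategy under three states of nature.
print(upwa([(40, 60), (10, 30), (70, 90)], [0.5, 0.3, 0.2], [0.4, 0.4, 0.2], beta=0.6))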