912 results for ERROR THRESHOLD
Abstract:
Location information is becoming increasingly necessary as every new smartphone incorporates a GPS (Global Positioning System) receiver, which enables the development of a variety of applications based on it. However, the GPS signal cannot be properly received in indoor environments. For this reason, new indoor positioning systems are being developed. Since the indoor setting is a very challenging scenario, the accuracy of the obtained location information must be studied in order to determine whether these new positioning techniques are suitable for indoor positioning.
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
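To make the workflow concrete, here is a minimal sketch of the error-model construction in Python, under simplifying assumptions: the curves are discretized on a common time grid, ordinary PCA on the discretized curves stands in for FPCA, and a linear map between score spaces stands in for the regression model. All data and names are illustrative, not the paper's.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Hypothetical learning set: n_learn realizations for which BOTH solvers were run.
# Each row is one response curve discretized on a common time grid.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
n_learn, n_new = 40, 500
base = rng.normal(size=(n_learn + n_new, 3))
exact = base @ np.vstack([np.sin(3 * t), np.cos(2 * t), t])  # "exact" curves
proxy = 0.9 * exact + 0.1 * np.sin(8 * t)                    # biased proxy curves

# 1) Dimensionality reduction: PCA on the discretized curves (stand-in for FPCA).
pca_p = PCA(n_components=3).fit(proxy[:n_learn])
pca_e = PCA(n_components=3).fit(exact[:n_learn])

# 2) Error model: regression from proxy scores to exact scores on the learning set.
reg = LinearRegression().fit(pca_p.transform(proxy[:n_learn]),
                             pca_e.transform(exact[:n_learn]))

# 3) Predict the exact response of any new realization from its proxy response alone.
pred = pca_e.inverse_transform(reg.predict(pca_p.transform(proxy[n_learn:])))
print(np.abs(pred - exact[n_learn:]).max())
```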
Abstract:
INTRODUCTION: Perfusion-CT (PCT) processing involves deconvolution, a mathematical operation that computes the perfusion parameters from the PCT time-density curves and an arterial input curve. Delay-sensitive deconvolution does not correct for the arrival delay of contrast, whereas delay-insensitive deconvolution does. The goal of this study was to compare delay-sensitive and delay-insensitive deconvolution PCT in terms of delineation of the ischemic core and penumbra. METHODS: We retrospectively identified 100 patients with acute ischemic stroke who underwent admission PCT and CT angiography (CTA), a follow-up vascular study to determine recanalization status, and a follow-up noncontrast head CT (NCT) or MRI to calculate final infarct volume. PCT datasets were processed twice, once using delay-sensitive deconvolution and once using delay-insensitive deconvolution. Regions of interest (ROIs) were drawn, and cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT) in these ROIs were recorded and compared. The volume and geographic distribution of ischemic core and penumbra obtained with both deconvolution methods were also recorded and compared. RESULTS: MTT and CBF values are affected by the deconvolution method used (p < 0.05), while CBV values remain unchanged. Optimal thresholds to delineate ischemic core and penumbra differ between delay-sensitive (145% MTT, CBV 2 ml × 100 g⁻¹) and delay-insensitive deconvolution (135% MTT, CBV 2 ml × 100 g⁻¹). When applying these method-specific thresholds, however, the predicted ischemic core (p = 0.366) and penumbra (p = 0.405) were similar with both methods. CONCLUSION: Both delay-sensitive and delay-insensitive deconvolution methods are appropriate for PCT processing in acute ischemic stroke patients. The predicted ischemic core and penumbra are similar with both methods when using different sets of thresholds, specific to each deconvolution method.
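The deconvolution step itself is standard in the perfusion literature; below is a minimal sketch of truncated-SVD deconvolution (the usual delay-sensitive formulation; delay-insensitive variants replace the convolution matrix with a block-circulant one). It illustrates the operation, not the software used in the study, and the truncation threshold is an assumed value.

```python
import numpy as np

def svd_deconvolve(aif, tissue, dt, thresh=0.2):
    """Recover the scaled residue function k(t) = CBF * R(t) from
    tissue(t) = (aif convolved with k)(t) by truncated-SVD deconvolution."""
    n = len(aif)
    # Lower-triangular convolution matrix built from the arterial input curve.
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > thresh * s.max(), 1.0 / s, 0.0)  # drop small singular values
    k = Vt.T @ (s_inv * (U.T @ tissue))
    cbf = k.max()                    # flow: peak of the residue function
    cbv = tissue.sum() / aif.sum()   # volume: area ratio of the two curves
    mtt = cbv / cbf                  # central volume theorem: MTT = CBV / CBF
    return cbf, cbv, mtt
```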
Abstract:
This study evaluated, under field conditions, the effect of initial pH values of 4.5, 6.5 and 8.5 of the attractant (protein bait) Milhocina® with borax (sodium borate) on the capture of fruit flies in McPhail traps, using 1, 2, 4 and 8 traps per hectare, in order to estimate control thresholds in a Hamlin orange grove in the central region of the state of São Paulo. The most abundant fruit fly species was Ceratitis capitata, comprising almost 99% of the fruit flies captured, of which 80% were females. The largest captures of C. capitata occurred in traps baited with Milhocina® and borax at pH 8.5. Captures per trap were similar across the four densities, indicating that the population can be estimated with one trap per hectare in areas with high populations. Positive relationships were found between captures of C. capitata and the number of damaged Hamlin oranges 2 and 3 weeks after capture, and equations correlating captures and damage levels were obtained, which can be used to estimate control thresholds. The average loss caused by C. capitata in Hamlin orange fruits was 2.5 tons per hectare, or 7.5% of production.
Abstract:
Gene filtering is a useful preprocessing technique often applied to microarray datasets. However, it is not common practice because clear guidelines are lacking and it carries the risk of excluding some potentially relevant genes. In this work, we propose to model microarray data as a mixture of two Gaussian distributions, which allows us to obtain an optimal filter threshold in terms of the gene expression level.
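A minimal sketch of the thresholding idea, assuming log-scale expression values; the synthetic data, component parameters, and cutoff rule (the point between the two means where the posterior probabilities cross 0.5) are illustrative choices, not necessarily the paper's exact estimator.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical data: log2 expression, a mix of unexpressed noise and expressed signal.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(4.0, 0.8, 6000),   # "unexpressed" component
                    rng.normal(8.0, 1.2, 4000)])  # "expressed" component

gm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))

# Filter threshold: the expression level between the two component means
# where the posterior probability of either component equals 0.5.
grid = np.linspace(*sorted(gm.means_.ravel()), 2000).reshape(-1, 1)
post = gm.predict_proba(grid)
threshold = grid[np.argmin(np.abs(post[:, 0] - 0.5)), 0]
print(f"filter genes with expression below {threshold:.2f}")
```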
Abstract:
This thesis studies the evaluation of software development practices through error analysis. The work presents the software development process, software testing, software errors, error classification, and software process improvement methods. The practical part of the work presents results from the error analysis of one software process and gives improvement ideas for the project. It was noticed that the classification of the error data in the project was inadequate, which made it impossible to use the error data effectively. With the error analysis we were able to show that there were deficiencies in the analysis and design phases, the implementation phase, and the testing phase. The work gives ideas for improving error classification and software development practices.
Abstract:
In the southeastern sector of the Ebro Basin, the paleomagnetic inclination obtained in the Oligocene alluvial successions is considerably lower than would be expected from the reference paleolatitude calculated for that region during the Oligocene. This inclination error may be due to several factors, such as hydrodynamic control of the magnetic particles in the depositional environment, differential compaction of the sediment during burial, or tectonic deformation. This work focuses on its study in two predominantly alluvial successions whose magnetostratigraphy had previously been established. The alluvial and lacustrine lithofacies studied were grouped into five classes: grey sandstones, red and variegated sandstones, red silts, red mudstones, and limestones. A correlation was demonstrated between phyllosilicate abundance and inclination error: lithofacies with a low percentage of phyllosilicates (limestones and grey sandstones) show errors of about 5°, statistically insignificant with respect to the reference inclination, whereas in materials with a higher percentage of phyllosilicates (silts and clays) the error can reach 25°. This has no impact on the interpretation of magnetic polarities, but it does affect palinspastic and paleogeographic reconstructions based on paleolatitudes calculated from paleoinclinations. The results demonstrate the need for caution when drawing conclusions based exclusively on this type of information.
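The reason inclination shallowing matters for paleogeography is the geocentric axial dipole relation used to convert a paleoinclination I into a paleolatitude λ: tan I = 2 tan λ. A minimal sketch (the 50° reference inclination is illustrative, not a value from the study):

```python
import numpy as np

def paleolatitude(inclination_deg):
    """Geocentric axial dipole relation: tan(I) = 2 * tan(latitude)."""
    return np.degrees(np.arctan(0.5 * np.tan(np.radians(inclination_deg))))

# A 25-degree shallowing relative to a hypothetical 50-degree reference
# inclination translates into a large paleolatitude underestimate.
print(paleolatitude(50.0))  # ~30.8 degrees
print(paleolatitude(25.0))  # ~13.1 degrees
```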
Abstract:
We develop an analytical approach to the susceptible-infected-susceptible epidemic model that allows us to unravel the true origin of the absence of an epidemic threshold in heterogeneous networks. We find that a delicate balance between the number of high degree nodes in the network and the topological distance between them dictates the existence or absence of such a threshold. In particular, small-world random networks with a degree distribution decaying slower than an exponential have a vanishing epidemic threshold in the thermodynamic limit.
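For context, the common heterogeneous mean-field estimate places the SIS threshold at λ_c = ⟨k⟩/⟨k²⟩, which vanishes as the second moment of the degree distribution diverges; the paper's contribution refines when this actually happens. A minimal sketch of that background estimate on heavy-tailed degree sequences (all parameters illustrative):

```python
import numpy as np

def hmf_threshold(degrees):
    """Heterogeneous mean-field estimate: lambda_c = <k> / <k^2>."""
    k = np.asarray(degrees, dtype=float)
    return k.mean() / (k ** 2).mean()

# Power-law degrees P(k) ~ k^(-2.5) (diverging second moment): the
# estimated threshold shrinks toward zero as the network grows.
rng = np.random.default_rng(2)
for n in (10**3, 10**5, 10**7):
    k = (rng.pareto(1.5, size=n) + 1) * 3   # Pareto tail, k_min = 3
    print(n, hmf_threshold(k))
```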
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from water prospection to sustainable management and the remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations for each realization.

In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, on the basis of which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses in order to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and leads to an increase in the accuracy and robustness of the uncertainty propagation.

The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of each proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications.

The concept of a functional error model is useful not only in the context of uncertainty propagation but also, and maybe even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated geostatistical realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, in which the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy coupled to an error model provides the approximate response for the two-stage MCMC set-up. We demonstrate an increase in the acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC results.

An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires devising an iterative strategy such that, as new flow simulations are performed, the error model is improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply the methodology to a problem of saline intrusion in a coastal aquifer.
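A minimal sketch of the two-stage (delayed-acceptance) mechanism described above, assuming a symmetric proposal kernel; `log_post_proxy` would be the cheap proxy-plus-error-model evaluation and `log_post_exact` the exact flow model, and all names are illustrative:

```python
import numpy as np

def two_stage_mcmc_step(theta, log_post_proxy, log_post_exact, propose, rng):
    """One step of two-stage Metropolis-Hastings: the cheap proxy posterior
    screens the proposal, and the expensive exact posterior is evaluated
    only for proposals that survive the first stage."""
    theta_new = propose(theta, rng)
    # Stage 1: preliminary evaluation with the proxy posterior.
    if np.log(rng.uniform()) >= log_post_proxy(theta_new) - log_post_proxy(theta):
        return theta  # early rejection: no exact flow simulation wasted
    # Stage 2: exact evaluation; the proxy terms correct the acceptance
    # ratio so the chain still targets the exact posterior.
    log_alpha = (log_post_exact(theta_new) - log_post_exact(theta)
                 + log_post_proxy(theta) - log_post_proxy(theta_new))
    return theta_new if np.log(rng.uniform()) < log_alpha else theta
```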
Abnormal Error Monitoring in Math-Anxious Individuals: Evidence from Error-Related Brain Potentials.
Abstract:
This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a nonnumerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN.
Abstract:
Integrating single nucleotide polymorphism (SNP) p-values from genome-wide association studies (GWAS) across genes and pathways is a strategy to improve statistical power and gain biological insight. Here, we present Pascal (Pathway scoring algorithm), a powerful tool for computing gene and pathway scores from SNP-phenotype association summary statistics. For gene score computation, we implemented analytic and efficient numerical solutions to calculate test statistics. We examined in particular the maximum and the sum of chi-squared statistics, which measure the strongest and the average association signals per gene, respectively. For pathway scoring, we use a modified Fisher method, which not only offers a significant power improvement over more traditional enrichment strategies, but also eliminates the problem of arbitrary threshold selection inherent in any binary membership based pathway enrichment approach. We demonstrate the marked increase in power by analyzing summary statistics from dozens of large meta-studies for various traits. Our extensive testing indicates that our method not only excels in rigorous type I error control, but also results in more biologically meaningful discoveries.
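Pascal's pathway scoring builds on the Fisher combination idea; here is a minimal sketch of the classical (unmodified) Fisher method for aggregating gene-level p-values into one pathway p-value, for orientation only:

```python
import numpy as np
from scipy import stats

def fisher_pathway_pvalue(gene_pvalues):
    """Classical Fisher combination: X = -2 * sum(ln p_i) follows a
    chi-squared distribution with 2k degrees of freedom under the null."""
    p = np.asarray(gene_pvalues, dtype=float)
    x = -2.0 * np.log(p).sum()
    return stats.chi2.sf(x, df=2 * len(p))

# Illustrative: one strongly associated gene among ten contributes to the
# score without any arbitrary significance cutoff for membership.
print(fisher_pathway_pvalue([1e-4] + [0.4] * 9))
```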
Abstract:
Adjusting behavior following the detection of inappropriate actions allows flexible adaptation to task demands and environmental contingencies during goal-directed behaviors. Post-error behavioral adjustments typically consist in adopting a more cautious response mode, which manifests as a slowing down of response speed. Although converging evidence implicates the dorsolateral prefrontal cortex (DLPFC) in post-error behavioral adjustment, whether and when the left or right DLPFC is critical for post-error slowing (PES), as well as the underlying brain mechanisms, remain highly debated. To resolve these issues, we used single-pulse transcranial magnetic stimulation (TMS) in healthy human adults to disrupt the left or right DLPFC selectively at various delays within the 30-180 ms interval following false alarm (FA) commission, while participants performed a standard visual Go/NoGo task. PES significantly increased after TMS disruption of the right, but not the left, DLPFC at 150 ms post-FA response. We discuss these results in terms of an involvement of the right DLPFC in reducing the detrimental effects of error detection on subsequent behavioral performance, as opposed to implementing an adaptive error-induced slowing down of response speed.
Abstract:
Fourier analysis makes it possible to characterize the contour of a tooth and obtain a series of parameters for subsequent multivariate analysis. However, the great complexity of some shapes makes it necessary to determine the intrinsic measurement error that is introduced. The aim of this work is to apply and validate Fourier analysis in the study of the dental shape of the lower second molar (M2) of four Hominoidea primate species, in order to explore interspecific morphometric variability and to determine the measurement error at the intra- and inter-observer level. The contour of the occlusal surface of the tooth was digitized, and discriminant analyses and Mantel tests (Pearson correlations) were performed on the functions derived from the Fourier analysis to determine shape differences from the measurements taken. The results indicate that Fourier analysis captures the shape variability of molar teeth in Hominoidea primate species. In addition, the high correlation levels at both the intra-observer (r>0.9) and inter-observer (r>0.7) levels suggest that morphometric descriptions of the tooth obtained with Fourier methods by different observers can be pooled for subsequent analyses.
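A minimal sketch of contour Fourier descriptors, assuming the occlusal outline has been digitized as a closed, evenly sampled sequence of (x, y) points; the normalization shown (drop the centroid, scale by the first harmonic) is one common convention, not necessarily the study's:

```python
import numpy as np

def fourier_descriptors(x, y, n_harmonics=10):
    """Fourier descriptors of a closed 2-D contour: FFT of the complex
    coordinate z = x + iy, normalized for translation and size."""
    z = np.asarray(x, float) + 1j * np.asarray(y, float)
    coeffs = np.fft.fft(z) / len(z)
    coeffs[0] = 0.0                    # remove centroid: translation invariance
    coeffs /= np.abs(coeffs[1])        # scale by first harmonic: size invariance
    return coeffs[1:n_harmonics + 1]   # low-order harmonics capture gross shape

# Illustrative outline: a four-lobed closed curve standing in for a molar contour.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
r = 1.0 + 0.15 * np.cos(4 * t)
print(np.round(np.abs(fourier_descriptors(r * np.cos(t), r * np.sin(t))), 3))
```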