912 results for error threshold
Abstract:
Random scale-free networks have the peculiar property of being prone to the spreading of infections. Here we provide, for the susceptible-infected-susceptible model, an exact result showing that a scale-free degree distribution with diverging second moment is a sufficient condition for a null epidemic threshold in unstructured networks with either assortative or disassortative mixing. Degree correlations are therefore irrelevant for the epidemic spreading picture in these scale-free networks. The present result is related to the divergence of the average nearest-neighbor degree, enforced by the degree detailed balance condition.
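For orientation, the sketch below is written in standard heterogeneous mean-field notation and is not copied from the paper itself; it only illustrates how the threshold, the second moment of the degree distribution, and the average nearest-neighbor degree named above fit together.

```latex
% Sketch in standard SIS mean-field notation; the paper's exact derivation may differ.
% Epidemic threshold on an uncorrelated network:
\lambda_c \;=\; \frac{\langle k \rangle}{\langle k^{2} \rangle}
\;\xrightarrow{\;\langle k^{2}\rangle \to \infty\;}\; 0 .
% Degree detailed balance (edges counted from either endpoint must agree):
k\,P(k' \mid k)\,P(k) \;=\; k'\,P(k \mid k')\,P(k') .
% Average nearest-neighbor degree, which diverges together with the second moment:
\bar{k}_{nn}(k) \;=\; \sum_{k'} k'\,P(k' \mid k)
\;\stackrel{\text{uncorrelated}}{=}\; \frac{\langle k^{2}\rangle}{\langle k\rangle} .
```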
Abstract:
PURPOSE: Neurophysiological monitoring aims to improve the safety of pedicle screw placement, but few quantitative studies assess its specificity and sensitivity. In this study, screw placement within the pedicle was measured (post-op CT scan, horizontal and vertical distance from the screw edge to the surface of the pedicle) and correlated with intraoperative neurophysiological stimulation thresholds. METHODS: A single surgeon placed 68 thoracic and 136 lumbar screws in 30 consecutive patients during instrumented fusion under EMG control. The female-to-male ratio was 1.6 and the average age was 61.3 years (SD 17.7). Radiological measurements, blinded to stimulation threshold, were done on reformatted CT reconstructions using OsiriX software. A standard deviation of the screw position of 2.8 mm was determined from pilot measurements, and a screw-pedicle edge distance of 1 mm was considered the difference of interest (standardised difference of 0.35), giving the study a power of 75 % (significance level 0.05). RESULTS: Correct placement and stimulation thresholds above 10 mA were found in 71 % of screws. Twenty-two percent of screws caused a cortical breach; 80 % of these had stimulation thresholds above 10 mA (sensitivity 20 %, specificity 90 %). True prediction of correct screw position was more frequent for lumbar than for thoracic screws. CONCLUSION: A screw stimulation threshold of >10 mA does not indicate correct pedicle screw placement. The hypothesised gradual decrease of screw stimulation thresholds as screw placement approaches the nerve root was not observed. Aside from a robust threshold of 2 mA indicating direct contact with nervous tissue, a secondary threshold appears to depend on patients' pathology and surgical conditions.
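As a back-of-the-envelope check of how the reported sensitivity and specificity follow from the quoted percentages, the counts below are hypothetical reconstructions from the 204 screws and the stated rates, not figures taken from the paper.

```python
# Hypothetical reconstruction (illustrative arithmetic only, not data from the paper):
# 204 screws in total, ~22 % with a cortical breach, and the test taken as
# "positive for breach" when the stimulation threshold is <= 10 mA.
total = 204
breached = round(0.22 * total)      # ~45 screws with a cortical breach
intact = total - breached           # ~159 correctly placed screws

tp = round(0.20 * breached)         # breached screws flagged (threshold <= 10 mA)
fn = breached - tp                  # breached screws missed (threshold > 10 mA)
tn = round(0.90 * intact)           # intact screws with threshold > 10 mA
fp = intact - tn                    # intact screws flagged despite correct placement

sensitivity = tp / (tp + fn)        # ~0.20, matching the reported 20 %
specificity = tn / (tn + fp)        # ~0.90, matching the reported 90 %
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```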
Abstract:
We present a heuristic method for learning error-correcting output code matrices based on a hierarchical partition of the class space that maximizes a discriminative criterion. To achieve this goal, optimal codeword separation is sacrificed in favor of maximum class discrimination in the partitions. The hierarchical partition set is created using a binary tree. As a result, a compact matrix with high discrimination power is obtained. Our method is validated using the UCI database and applied to a real problem, the classification of traffic sign images.
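A minimal sketch of the general idea follows: recursively splitting the class set with a binary tree and turning each internal node into one column of the coding matrix. The split rule used here is a naive placeholder, not the discriminative criterion optimized in the paper.

```python
import numpy as np

def tree_ecoc(classes, n_classes, split):
    """Recursively build ECOC columns: each internal node of the binary
    tree of class partitions becomes one column with entries +1 / -1 / 0."""
    if len(classes) < 2:
        return []
    left, right = split(classes)          # placeholder partition rule
    col = np.zeros(n_classes)
    col[left] = +1                        # classes on one side of the split
    col[right] = -1                       # classes on the other side
    return [col] + tree_ecoc(left, n_classes, split) + tree_ecoc(right, n_classes, split)

# Example with a naive split (halve the class list); the paper would instead
# choose the partition maximizing a discriminative criterion.
naive_split = lambda cs: (cs[: len(cs) // 2], cs[len(cs) // 2 :])
M = np.column_stack(tree_ecoc(list(range(4)), 4, naive_split))
print(M)   # 4 classes -> 3 columns, one binary problem per internal tree node
```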
Abstract:
A common way to model multiclass classification problems is by means of Error-Correcting Output Codes (ECOCs). Given a multiclass problem, the ECOC technique designs a codeword for each class, where each position of the code identifies the membership of the class in a given binary problem. A classification decision is obtained by assigning the label of the class with the closest code. One of the main requirements of the ECOC design is that the base classifier is capable of splitting each subgroup of classes in each binary problem. However, we cannot guarantee that a linear classifier can model convex regions. Furthermore, nonlinear classifiers also fail to manage some types of surfaces. In this paper, we present a novel strategy to model multiclass classification problems using subclass information in the ECOC framework. Complex problems are solved by splitting the original set of classes into subclasses and embedding the binary problems in a problem-dependent ECOC design. Experimental results show that the proposed splitting procedure yields better performance when class overlap or the distribution of the training objects conceals the decision boundaries for the base classifier. The results are even more significant when one has a sufficiently large training set.
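To make the decoding step concrete, here is a hedged sketch of the generic ECOC decision rule described above: assign the label of the class whose codeword is closest (here, by Hamming distance) to the vector of base-classifier outputs. The subclass-splitting design itself is not reproduced, and the toy matrix below is illustrative.

```python
import numpy as np

def ecoc_decode(code_matrix, binary_outputs):
    """code_matrix: (n_classes, n_dichotomies) with entries in {-1, +1}.
    binary_outputs: signed predictions of the base classifiers for one sample.
    Returns the index of the class with the closest codeword (Hamming distance)."""
    distances = np.sum(code_matrix != np.sign(binary_outputs), axis=1)
    return int(np.argmin(distances))

# Toy coding matrix for 3 classes and 3 binary problems.
M = np.array([[+1, +1, -1],
              [+1, -1, +1],
              [-1, +1, +1]])
print(ecoc_decode(M, np.array([+1, -1, +1])))   # -> 1 (exact match with class 1's codeword)
```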
Abstract:
An adequate air void system is imperative to produce concrete with freeze-thaw durability in a wet-freeze environment such as that found in Iowa. Specifications rely on the percentage of air obtained in the plastic state by the pressure meter. The actual in-place air content of some concrete pavements in Iowa has been found to be reduced due to a number of factors such as excessive vibration and inadequate mixing. Determining hardened air void parameters is a time-consuming process with potential for human error. The RapidAir 457 air void analyzer is an automated device used to determine hardened air void parameters. The device is used in Europe and has been shown to quickly produce accurate and repeatable hardened air results. This research investigates how well the RapidAir 457 results correlate with plastic air content and the image analysis air technique. Repeatability and operator variation were also investigated, as well as the impact of aggregate porosity and threshold value selection on hardened air results.
Abstract:
In a compound Poisson model, we define a threshold proportional reinsurance strategy: a retention level k1 is applied whenever the reserves are below a given threshold b, and a retention level k2 otherwise. We obtain the integro-differential equation for the Gerber-Shiu function, defined in Gerber-Shiu (1998), in this model, which allows us to derive expressions for the ruin probability and for the Laplace transform of the time of ruin for different distributions of the individual claim amount. Finally, we present some numerical results.
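For orientation only, the sketch below uses standard risk-theory notation (the paper's exact premium and claim conventions may differ): the threshold strategy switches the retained fraction of each claim according to the current surplus, and the Gerber-Shiu function is the usual expected discounted penalty at ruin.

```latex
% Sketch in standard notation; premium loading and retained-premium details may differ from the paper.
% Retention level as a function of the current surplus U(t):
k\big(U(t)\big) =
\begin{cases}
  k_1, & U(t) < b,\\
  k_2, & U(t) \ge b,
\end{cases}
\qquad \text{retained part of claim } X_i = k\big(U(t^-)\big)\, X_i .
% Gerber--Shiu expected discounted penalty function (Gerber \& Shiu, 1998):
\phi(u) = \mathbb{E}\!\left[ e^{-\delta T}\, w\!\big(U(T^-),\,|U(T)|\big)\,
          \mathbf{1}_{\{T<\infty\}} \,\middle|\, U(0)=u \right],
% from which the ruin probability (w \equiv 1,\ \delta = 0) and the Laplace
% transform of the time of ruin (w \equiv 1,\ \delta > 0) are recovered.
```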
Abstract:
Background: The MLPA method is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample. Results: Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which targeted regions vary in copy number in individuals suffering from disorders such as Prader-Willi, DiGeorge or autism, where it showed the best performance. Conclusion: Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific to each individual and incorporate experimental variability, resulting in improved sensitivity and specificity, as the examples with real data have revealed.
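As a hedged illustration of the thresholding idea (a sample-specific cutoff built from a tolerance interval for that sample's own error variability), here is a minimal normal-theory sketch using Howe's approximation. It is not the authors' mixed-model procedure, and the data below are synthetic.

```python
import numpy as np
from scipy import stats

def tolerance_threshold(log_ratios, coverage=0.95, confidence=0.95):
    """Two-sided normal tolerance interval (Howe's approximation) for the
    log-ratios of reference probes in one sample; probes falling outside
    the interval would be flagged as altered."""
    n = len(log_ratios)
    m, s = np.mean(log_ratios), np.std(log_ratios, ddof=1)
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, df=n - 1)
    k = np.sqrt((n - 1) * (1 + 1 / n) * z**2 / chi2)   # Howe's k factor
    return m - k * s, m + k * s

# Synthetic example: reference probes scattered around 0, one candidate probe near +0.58
# (roughly what a duplication would give on a log2-like scale).
rng = np.random.default_rng(0)
reference_probes = rng.normal(0.0, 0.08, size=40)
lo, hi = tolerance_threshold(reference_probes)
candidate = 0.58
print(f"interval=({lo:.2f}, {hi:.2f}); altered={not (lo <= candidate <= hi)}")
```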
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine a critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true rather than the probability of obtaining the difference observed, or one more extreme, given that the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
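A small, self-contained illustration of the distinction on simulated data: the pre-chosen α = 0.05 plays the Neyman-Pearson role of a fixed decision rule, while the p value itself is Fisher's graded measure of evidence.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(10.0, 2.0, size=30)   # group under the null-like condition
treated = rng.normal(11.0, 2.0, size=30)   # group with a small true effect

t_stat, p_value = stats.ttest_ind(treated, control)

alpha = 0.05                               # Type I error rate fixed before the experiment
print(f"p = {p_value:.3f}")                # Fisher: continuous strength of evidence against H0
print("reject H0" if p_value < alpha else "do not reject H0")   # Neyman-Pearson: binary decision
# Note: p is NOT the probability that H0 is true; it is P(data at least this
# extreme | H0 true), which is exactly the misconception discussed above.
```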
Abstract:
Division of labor in social insects is a determinant of their ecological success. Recent models emphasize that division of labor is an emergent property of the interactions among nestmates obeying simple behavioral rules. However, the role of evolution in shaping these rules has been largely neglected. Here, we investigate a model that integrates the perspectives of self-organization and evolution. Our point of departure is the response threshold model, in which we allow thresholds to evolve. We ask whether the thresholds will evolve to a state where division of labor emerges in a form that fits the needs of the colony. We find that division of labor can indeed evolve through the evolutionary branching of thresholds, leading to workers that differ in their tendency to take on a given task. However, the conditions under which division of labor evolves depend on the strength of selection on the two fitness components considered: the amount of work performed and the distribution of workers over tasks. When selection is strongest on the amount of work performed, division of labor evolves if switching tasks is costly. When selection is strongest on worker distribution, division of labor is less likely to evolve. Furthermore, we show that a biased distribution (such as 3:1) of workers over tasks is not easily achievable by a threshold mechanism, even under strong selection. Contrary to expectation, multiple matings of colony foundresses impede the evolution of specialization. Overall, our model sheds light on the importance of considering the interaction between specific mechanisms and ecological requirements to better understand the evolutionary scenarios that lead to division of labor in complex systems. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s00265-012-1343-2) contains supplementary material, which is available to authorized users.
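For readers unfamiliar with the underlying mechanism, the sketch below implements the commonly used fixed-threshold response function (probability of engaging in a task rising with stimulus s and falling with threshold θ). It is only the building block; the evolutionary dynamics of the thresholds studied in the paper are not reproduced here.

```python
import numpy as np

def engage_probability(stimulus, threshold, steepness=2):
    """Classic response-threshold rule: P = s^n / (s^n + theta^n)."""
    return stimulus**steepness / (stimulus**steepness + threshold**steepness)

# Two workers with different (e.g. evolutionarily diverged) thresholds for the same task.
rng = np.random.default_rng(2)
stimulus = 1.0
thresholds = np.array([0.5, 2.0])            # low-threshold specialist vs. reluctant worker
p = engage_probability(stimulus, thresholds)
acts = rng.random(2) < p                     # stochastic task uptake in one time step
print(p, acts)   # the low-threshold worker engages far more often -> division of labor
```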
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that should be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, as it is closely associated with indoor radon. This association was indeed observed for the Swiss data but was not proved to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, data partitioning was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset. A bottom-to-top method-complexity approach was adopted and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests for data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for extreme-value modeling through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Within the classification methods, probabilistic neural networks (PNN) proved better adapted for modeling high-threshold categorization and for automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions.
In general, it was concluded that no particular prediction or estimation method is better under all conditions of scale and neighborhood definitions. Simulations should be the basis, while other methods can provide complementary information to accomplish efficient indoor radon decision making.
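As one concrete instance of the exploratory tools mentioned above (K Nearest Neighbors interpolation with a neighborhood size chosen by cross-validation), here is a hedged sketch on synthetic coordinates; it stands in for, but does not reproduce, the thesis's KNN/GRNN/CVMF workflow.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for (x, y) measurement locations and log indoor radon levels.
rng = np.random.default_rng(3)
coords = rng.uniform(0, 100, size=(300, 2))
values = (np.sin(coords[:, 0] / 15) + 0.5 * np.cos(coords[:, 1] / 20)
          + rng.normal(0, 0.2, 300))

# Choose the neighborhood size k by cross-validated error, a simple proxy for the
# "common definitions of continuity and neighborhood parameters" discussed above.
for k in (1, 5, 15, 40):
    knn = KNeighborsRegressor(n_neighbors=k, weights="distance")
    mse = -cross_val_score(knn, coords, values, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"k={k:2d}  CV MSE={mse:.3f}")
```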
Abstract:
The complete document of the "XVI Setmana de Cinema Formatiu" can be consulted at: http://hdl.handle.net/2445/22523
Abstract:
Continuing a line of research on the notions of reason, consciousness and subjectivity in Descartes, as defended in other previously published articles, this paper contributes a new argument to that line of work, highlighting that the epistemological problem of error is conditioned by the Cartesian notion of rationality itself, and that this notion is far removed from what has traditionally been understood as an abstract, formal rationality free of human imperatives. Conversely, it also attempts to show how the fact of error contributes, in Cartesian terms, to defining a profoundly humanized model of rationality. After an introduction, the article analyzes the relations among the basic concepts of rationality, dogma and nature, which then makes it possible to establish the mutual belonging of rationality and error, and finally to show how human freedom is, at once and for both, their ultimate foundation.
Abstract:
Introduction. The concept of comorbidity in neurodevelopmental disorders such as autism is sometimes ambiguous. The co-occurrence of anxiety and autism is clinically significant; however, it is not always easy to distinguish whether it is a "true" comorbidity, in which the two comorbid conditions are phenotypically and etiologically identical to what such anxiety would be in neurotypically developing individuals; whether it is an anxiety phenotypically altered by the pathogenic processes of autism spectrum disorders, resulting in a variant specific to them; or whether we are dealing with a false comorbidity derived from imprecise differential diagnoses. Development. The article puts forward two explanatory hypotheses for this co-occurrence, which feed back into each other and which remain a reflection out loud based on the scientific evidence currently available. The first is the "social error" hypothesis, which holds that the mismatch in the social behavior of people with autism, resulting from alterations in social cognition processes, contributes to exacerbating anxiety in autism. The second hypothesis, that of allostatic load, maintains that anxiety is the response to chronic stress, to the wear or exhaustion produced by the hyperactivation of certain structures of the limbic system. Conclusions. The prototypical manifestations of anxiety present in people with autism are not always related to the same biopsychosocial variables observed in people without autism. The evidence points to hyperreactive fight-or-flight responses (hypervigilance) when the person is outside their comfort zone, supporting the "social error" hypothesis and the decompensation of the allostasis mechanism that makes it possible to cope with stress.