950 results for Probabilistic metrics


Relevance: 10.00%

Abstract:

1. Aim - Concerns over how global change will influence species distributions, in conjunction with increased emphasis on understanding niche dynamics in evolutionary and community contexts, highlight the growing need for robust methods to quantify niche differences between or within taxa. We propose a statistical framework to describe and compare environmental niches from occurrence and spatial environmental data.
2. Location - Europe, North America, South America.
3. Methods - The framework applies kernel smoothers to densities of species occurrence in gridded environmental space to calculate metrics of niche overlap and test hypotheses regarding niche conservatism. We use this framework and simulated species with predefined distributions and amounts of niche overlap to evaluate several ordination and species distribution modeling techniques for quantifying niche overlap. We illustrate the approach with data on two well-studied invasive species.
4. Results - We show that niche overlap can be accurately detected with the framework when the variables driving the distributions are known. The method is robust to known and previously undocumented biases related to the dependence of species occurrences on the frequency of environmental conditions across geographic space. The use of a kernel smoother makes the move from geographical space to multivariate environmental space independent of both sampling effort and the arbitrary choice of resolution in environmental space. However, ordination and species distribution modeling techniques for selecting, combining and weighting the variables on which niche overlap is calculated provide contrasting results.
5. Main conclusions - The framework meets the increasing need for robust methods to quantify niche differences. It is appropriate for studying niche differences between species, subspecies or intraspecific lineages that differ in their geographical distributions. Alternatively, it can be used to measure the degree to which the environmental niche of a species or intraspecific lineage has changed over time.
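The overlap metric at the heart of such a framework can be illustrated with a short sketch. This is not the authors' implementation; Schoener's D is one standard overlap statistic computed on gridded occurrence densities, and the grid and densities below are invented:

```python
import numpy as np

def schoener_d(z1, z2):
    """Schoener's D overlap between two occurrence-density grids in
    environmental space: 0 = disjoint niches, 1 = identical niches."""
    p1 = z1 / z1.sum()                     # normalise each density to sum to 1
    p2 = z2 / z2.sum()
    return 1.0 - 0.5 * np.abs(p1 - p2).sum()

# Invented example: two kernel-smoothed densities along one environmental axis.
env = np.linspace(-3.0, 3.0, 200)
g1 = np.exp(-0.5 * (env - 0.5) ** 2)       # species 1, niche optimum at +0.5
g2 = np.exp(-0.5 * (env + 0.5) ** 2)       # species 2, niche optimum at -0.5
d = schoener_d(g1, g2)                     # partial overlap: 0 < d < 1
```

Because the densities are normalised inside the function, the metric is independent of sampling effort and grid resolution, which is the property the abstract attributes to the kernel-smoothing step.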

Relevance: 10.00%

Abstract:

To provide quantitative support to handwriting evidence evaluation, a new method was developed based on the computation of a likelihood ratio within a Bayesian approach. In the present paper, the methodology is briefly described and applied to data collected within a simulated case of a threatening letter. Fourier descriptors are used to characterise the shape of the loops of handwritten characters "a" in the threatening letter, which are then compared: 1) with reference characters "a" of the true writer of the threatening letter, and 2) with characters "a" of a writer who did not write the threatening letter. The findings show that the probabilistic methodology correctly supports either the hypothesis of authorship or the alternative hypothesis, as appropriate. Further developments will enable the handwriting examiner to use this methodology as helpful assistance in assessing the strength of evidence in handwriting casework.
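The shape-characterisation step can be sketched as follows. This is not the authors' exact descriptor set; the code computes generic complex Fourier descriptors of a closed contour, with a hypothetical ellipse standing in for the loop of a handwritten "a":

```python
import numpy as np

def fourier_descriptors(contour, n_harmonics=8):
    """Fourier descriptors of a closed 2-D contour given as an (N, 2)
    array: harmonic magnitudes, made translation-invariant by dropping
    the DC term and scale-invariant by normalising to the first harmonic."""
    z = contour[:, 0] + 1j * contour[:, 1]   # complex representation of the shape
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                          # discard position information
    mags = np.abs(coeffs)
    mags = mags / mags[1]                    # normalise away the overall size
    return mags[1:n_harmonics + 1]

# Hypothetical loop: an ellipse sampled at 64 points stands in for the
# loop of a handwritten "a".
t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
loop = np.column_stack([2.0 * np.cos(t), np.sin(t)])
fd = fourier_descriptors(loop)
```

The resulting feature vectors are what a likelihood-ratio model would compare between questioned and reference material.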

Relevance: 10.00%

Abstract:

The time periods composing the stance phase of gait can be clinically meaningful parameters for revealing differences between normal and pathological gait. This study aimed, first, to describe a novel method for detecting stance and inner-stance temporal events based on foot-worn inertial sensors; second, to extract and validate relevant metrics from those events; and third, to investigate their suitability as clinical outcomes for gait evaluations. Forty-two subjects, including healthy subjects and patients before and after surgical treatment for ankle osteoarthritis, performed 50-m walking trials while wearing foot-worn inertial sensors, with pressure insoles as a reference system. Several hypotheses were evaluated for detecting heel-strike, toe-strike, heel-off, and toe-off from kinematic features. The detected events were compared with the reference system on 3193 gait cycles and showed good accuracy and precision. Absolute and relative stance periods, namely loading response, foot-flat, and push-off, were then estimated, validated, and compared statistically between populations. Besides significant differences observed in stance duration, the analysis revealed differing tendencies, notably a shorter foot-flat in healthy subjects. The results indicated which features of the inertial sensors' signals should be preferred for detecting temporal events precisely and accurately against a reference standard. The system is suitable for clinical evaluations and provides a temporal analysis of gait beyond the common swing/stance decomposition, through a quantitative estimation of inner-stance phases such as foot-flat.
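Once the four events are detected, the inner-stance periods named in the abstract follow directly. A minimal sketch (the event times are hypothetical, and the phase boundaries follow the definitions implied by the abstract):

```python
def stance_phases(hs, ts, ho, to):
    """Split stance into the three inner phases, given the four detected
    event times in seconds: heel-strike (hs), toe-strike (ts),
    heel-off (ho) and toe-off (to). Returns percentages of stance time."""
    stance = to - hs
    return {
        "loading_response": 100.0 * (ts - hs) / stance,  # heel-strike -> toe-strike
        "foot_flat":        100.0 * (ho - ts) / stance,  # toe-strike -> heel-off
        "push_off":         100.0 * (to - ho) / stance,  # heel-off -> toe-off
    }

# Hypothetical event times for a single stance phase.
phases = stance_phases(hs=0.00, ts=0.12, ho=0.45, to=0.70)
```

Expressing the phases relative to stance duration is what makes them comparable between subjects walking at different speeds.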

Relevance: 10.00%

Abstract:

The analysis of conservation between the human and mouse genomes resulted in the identification of a large number of conserved nongenic sequences (CNGs). The functional significance of this nongenic conservation remains unknown, however. The availability of the sequence of a third mammalian genome, the dog, allows for a large-scale analysis of evolutionary attributes of CNGs in mammals. We have aligned 1638 previously identified CNGs and 976 conserved exons (CODs) from human chromosome 21 (Hsa21) with their orthologous sequences in mouse and dog. Attributes of selective constraint, such as sequence conservation, clustering, and direction of substitutions were compared between CNGs and CODs, showing a clear distinction between the two classes. We subsequently performed a chromosome-wide analysis of CNGs by correlating selective constraint metrics with their position on the chromosome and relative to their distance from genes. We found that CNGs appear to be randomly arranged in intergenic regions, with no bias to be closer or farther from genes. Moreover, conservation and clustering of substitutions of CNGs appear to be completely independent of their distance from genes. These results suggest that the majority of CNGs are not typical of previously described regulatory elements in terms of their location. We propose models for a global role of CNGs in genome function and regulation, through long-distance cis or trans chromosomal interactions.

Relevance: 10.00%

Abstract:

Models are presented for the optimal location of hubs in airline networks that take congestion effects into consideration. Hubs, which are the most congested airports, are modeled as M/D/c queuing systems: Poisson arrivals, deterministic service times, and c servers. A formula is derived for the probability of a given number of customers in the system, which is then used to propose a probabilistic constraint. This constraint limits the probability of b airplanes in the queue to be less than a value α. Because of the computational complexity of the formulation, the model is solved using a metaheuristic based on tabu search. Computational experience is presented.
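A hedged Monte-Carlo sketch of the quantity bounded by the probabilistic constraint, read here as the probability that an arrival finds more than b airplanes waiting (the paper itself derives a closed-form expression rather than simulating; all parameters below are invented):

```python
import heapq
import random
from bisect import bisect_right

def mdc_wait_prob(lam, service, c, b, n=50_000, seed=1):
    """Estimate by simulation the probability that an arrival to an M/D/c
    queue (Poisson rate lam, deterministic service time, c servers, FCFS)
    finds more than b airplanes waiting for service."""
    rng = random.Random(seed)
    free = [0.0] * c          # min-heap: times at which each server frees up
    starts = []               # service-start times, nondecreasing under FCFS
    t, exceed = 0.0, 0
    for _ in range(n):
        t += rng.expovariate(lam)                         # next Poisson arrival
        waiting = len(starts) - bisect_right(starts, t)   # service not yet begun
        if waiting > b:
            exceed += 1
        s = max(t, free[0])                               # this arrival's start
        heapq.heapreplace(free, s + service)
        starts.append(s)
    return exceed / n

# Invented instance: 2 arrivals per unit time, unit service, 4 runways.
p_exceed = mdc_wait_prob(lam=2.0, service=1.0, c=4, b=2)
```

A constraint of the kind described would then require this probability to stay below α for each candidate hub.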

Relevance: 10.00%

Abstract:

We consider adaptive sequential lossy coding of bounded individual sequences when the performance is measured by the sequentially accumulated mean squared distortion. The encoder and the decoder are connected via a noiseless channel of capacity $R$ and both are assumed to have zero delay. No probabilistic assumptions are made on how the sequence to be encoded is generated. For any bounded sequence of length $n$, the distortion redundancy is defined as the normalized cumulative distortion of the sequential scheme minus the normalized cumulative distortion of the best scalar quantizer of rate $R$ which is matched to this particular sequence. We demonstrate the existence of a zero-delay sequential scheme which uses common randomization in the encoder and the decoder such that the normalized maximum distortion redundancy converges to zero at a rate $n^{-1/5}\log n$ as the length of the encoded sequence $n$ increases without bound.
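The reference class against which the redundancy is defined can be sketched as a scalar quantizer of rate $R$. This is an illustration only: the best matched quantizer in the paper is optimised per sequence, while the uniform mid-point quantizer and the test sequence below are assumptions:

```python
import numpy as np

def scalar_quantize(x, rate, lo=-1.0, hi=1.0):
    """Uniform scalar quantizer of `rate` bits per sample on [lo, hi]:
    2**rate cells, reconstruction at cell midpoints."""
    levels = 2 ** rate
    step = (hi - lo) / levels
    idx = np.clip(np.floor((x - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step

# A bounded individual sequence (no probabilistic model assumed) and its
# normalized cumulative mean squared distortion at rate 3.
x = np.sin(0.1 * np.arange(1000))
mse = np.mean((x - scalar_quantize(x, 3)) ** 2)
```

The redundancy of a sequential scheme would be its own normalized distortion minus the distortion of the best such quantizer chosen in hindsight for the sequence.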

Relevance: 10.00%

Abstract:

In this paper we explore the mechanisms that allow securities analysts to value companies in contexts of Knightian uncertainty, that is, in the face of information that is unclear, subject to unforeseeable contingencies or to multiple interpretations. We address this question with a grounded-theory analysis of the reports written on Amazon.com by securities analyst Henry Blodget and rival analysts during the years 1998-2000. Our core finding is that analysts' reports are structured by internally consistent associations that include categorizations, key metrics and analogies. We refer to these representations as calculative frames, and propose that analysts function as frame-makers - that is, as specialized intermediaries that help investors value uncertain stocks. We conclude by considering the implications of frame-making for the rise of new industry categories, analysts' accuracy, and the regulatory debate on analysts' independence.

Relevance: 10.00%

Abstract:

BACKGROUND: Virtual reality (VR) simulators are widely used to familiarize surgical novices with laparoscopy, but VR training methods differ in efficacy. In the present trial, self-controlled basic VR training (SC training) was tested against training based on peer-group-derived benchmarks (PGD training). METHODS: Novice laparoscopic residents were randomized into an SC group (n = 34) and a group using PGD benchmarks (n = 34) for basic laparoscopic training. After completing basic training, both groups performed 60 VR laparoscopic cholecystectomies for performance analysis. Primary endpoints were simulator metrics; secondary endpoints were program adherence, trainee motivation, and training efficacy. RESULTS: Altogether, 66 residents completed basic training, and 3,837 of 3,960 (96.8%) cholecystectomies were available for analysis. Course adherence was good, with only two dropouts, both in the SC group. The PGD group spent more time and more repetitions in basic training until the benchmarks were reached, and subsequently showed better performance in the readout cholecystectomies: median time to gallbladder extraction differed significantly, at 520 s (IQR 354-738 s) with SC training versus 390 s (IQR 278-536 s) in the PGD group (p < 0.001), compared with 215 s (IQR 175-276 s) for experts. Path length of the right instrument also showed significant differences, again with the PGD group being more efficient. CONCLUSIONS: Basic VR laparoscopic training based on PGD benchmarks with external assessment is superior to SC training, resulting in higher trainee motivation and better performance in simulated laparoscopic cholecystectomies. We recommend such a basic course based on PGD benchmarks before advancing to more elaborate VR training.

Relevance: 10.00%

Abstract:

The development of statistical models for forensic fingerprint identification purposes has been the subject of increasing research attention in recent years. This can be seen partly as a response to a number of commentators who claim that the scientific basis for fingerprint identification has not been adequately demonstrated. In addition, key forensic identification bodies such as ENFSI [1] and IAI [2] have recently endorsed and acknowledged the potential benefits of using statistical models as an important tool in support of the fingerprint identification process within the ACE-V framework. In this paper, we introduce a new Likelihood Ratio (LR) model based on Support Vector Machines (SVMs) trained with features discovered via morphometric and spatial analyses of corresponding minutiae configurations for both match and close non-match populations often found in AFIS candidate lists. Computed LR values are derived from a probabilistic framework based on SVMs that discover the intrinsic spatial differences between match and close non-match populations. Lastly, experiments performed on a set of over 120,000 publicly available fingerprint images (mostly sourced from the National Institute of Standards and Technology (NIST) datasets) and a distortion set of approximately 40,000 images are presented, illustrating that the proposed LR model reliably guides towards the correct proposition in the identification assessment of match and close non-match populations. The results further indicate that the proposed model is a promising tool for fingerprint practitioners to use for analysing the spatial consistency of corresponding minutiae configurations.
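A simplified sketch of the score-to-LR step. The paper derives LRs from an SVM-based probabilistic framework; here, as a stand-in, the match and close-non-match score populations are modelled as Gaussians with invented parameters and the LR is their density ratio:

```python
import numpy as np

def gauss_pdf(x, mu, sd):
    """Gaussian density used to model each score population."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def score_to_lr(score, match_scores, nonmatch_scores):
    """Likelihood ratio for a comparison score: density under the match
    population divided by density under the close-non-match population."""
    num = gauss_pdf(score, match_scores.mean(), match_scores.std(ddof=1))
    den = gauss_pdf(score, nonmatch_scores.mean(), nonmatch_scores.std(ddof=1))
    return num / den

# Invented score populations (e.g. classifier decision values for
# same-source pairs and close non-matches from AFIS candidate lists).
rng = np.random.default_rng(0)
match = rng.normal(2.0, 1.0, 500)
nonmatch = rng.normal(-2.0, 1.0, 500)
lr_same = score_to_lr(2.0, match, nonmatch)     # supports same source
lr_diff = score_to_lr(-2.0, match, nonmatch)    # supports different source
```

An LR above 1 supports the same-source proposition and below 1 the different-source proposition, which is the behaviour the experiments above evaluate.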

Relevance: 10.00%

Abstract:

Counterfeit pharmaceutical products have become a widespread problem in the last decade. Various analytical techniques have been applied to discriminate between genuine and counterfeit products; among these, near-infrared (NIR) and Raman spectroscopy have provided promising results. The present study offers a methodology for providing more valuable information to organisations engaged in the fight against the counterfeiting of medicines. A database was established by analyzing counterfeits of a particular pharmaceutical product using NIR and Raman spectroscopy. Unsupervised chemometric techniques (principal component analysis, PCA, and hierarchical cluster analysis, HCA) were implemented to identify the classes within the datasets. Gas chromatography coupled to mass spectrometry (GC-MS) and Fourier transform infrared spectroscopy (FT-IR) were used to determine the number of distinct chemical profiles among the counterfeits. A comparison with the classes established by NIR and Raman spectroscopy made it possible to evaluate the discriminating power of these techniques. Supervised classifiers (k-nearest neighbors, partial least squares discriminant analysis, probabilistic neural networks and counterpropagation artificial neural networks) were applied to the acquired NIR and Raman spectra, and the results were compared to those provided by the unsupervised classifiers. The retained strategy for routine applications, founded on the classes identified by NIR and Raman spectroscopy, uses a classification algorithm based on distance measures and receiver operating characteristic (ROC) curves. The model is able to compare the spectrum of a new counterfeit with those of previously analyzed products and to determine whether a new specimen belongs to one of the existing classes, consequently allowing a link to be established with other counterfeits in the database.
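The retained routine strategy (a PCA projection plus distance-based assignment with a reject option) can be sketched as follows. This is not the study's model: the spectra are synthetic, and the fixed threshold stands in for one that would be tuned via ROC curves:

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    """Project mean-centred spectra onto their first principal components."""
    X = spectra - spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T

def assign_class(score, class_means, threshold):
    """Link a specimen to the nearest existing class in PCA space, or
    return None (no link / possible new class) if every class centroid
    lies farther than `threshold`."""
    d = np.linalg.norm(class_means - score, axis=1)
    k = int(np.argmin(d))
    return k if d[k] <= threshold else None

# Synthetic database: two chemical profiles, 20 spectra each, 50 wavelengths.
rng = np.random.default_rng(1)
base = rng.normal(size=(2, 50))
spectra = np.vstack([base[i] + 0.01 * rng.normal(size=(20, 50)) for i in (0, 1)])
scores = pca_scores(spectra)
means = np.array([scores[:20].mean(axis=0), scores[20:].mean(axis=0)])
```

The reject option is what lets the model flag a specimen as a potential new class rather than forcing a link to an existing one.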

Relevance: 10.00%

Abstract:

A new generation of microcapsules based on the use of oligomers which participate in polyelectrolyte complexation reactions has been developed. These freeze-thaw-stable capsules have been applied as a bioartificial pancreas and have resulted in normoglycemia for periods of six months in concordant xenotransplantations. The new chemistry permits control of permeability and mechanical properties over a wide range and can be adapted to both microcapsule and hollow-fiber geometries, rendering it a robust tool for encapsulation in general. Methods and metrics for the characterization of the mechanical properties and permeability of microcapsules are presented.

Relevance: 10.00%

Abstract:

OBJECTIVE: to determine the quality of life of people with dementia attended at a cognitive impairment assessment unit. METHOD: cross-sectional descriptive study with a consecutive non-probabilistic sample of 42 people with mild or moderate Alzheimer-type dementia and their caregivers. Quality of Life (QoL) was assessed with the QoL-AD (Quality of Life in Alzheimer's Disease) questionnaire in its patient (QoL-ADp) and caregiver (QoL-ADc) versions. RESULTS: the mean QoL-ADp score was 35.38 points (SD = 5.24) and the mean QoL-ADc score was 30.60 (SD = 5.33); the difference between these results is significant (p < 0.001). Patients with depressive symptoms and their caregivers scored the QoL-AD significantly lower (p < 0.001). The item frequencies of the QoL-ADp show that more than 75% rated their living conditions, family, marriage/close relationship, social life, financial situation and life in general as good/excellent; 61% rated their ability to do chores at home as good/excellent; about 50% considered their mood, energy, physical health, ability to do things for fun and view of themselves to be poor/fair; and 85.7% considered their memory to be poor/fair. CONCLUSIONS: the results obtained with the QoL-AD do not differ from those of other studies. They suggest that the interventions prompted by assessing QoL in clinical practice include aspects centred on the disease itself as well as aspects linked to social relationships.

Relevance: 10.00%

Abstract:

Uncertainty quantification of petroleum reservoir models is one of the present challenges, usually approached with a wide range of geostatistical tools linked with statistical optimisation and/or inference algorithms. The paper considers a data-driven approach to modelling uncertainty in spatial predictions. The proposed semi-supervised Support Vector Regression (SVR) model has demonstrated its capability to represent realistic features and to describe the stochastic variability and non-uniqueness of spatial properties. It is able to capture and preserve key spatial dependencies such as connectivity, which is often difficult to achieve with two-point geostatistical models. Semi-supervised SVR is designed to integrate various kinds of conditioning data and to learn dependencies from them. A stochastic semi-supervised SVR model is integrated into a Bayesian framework to quantify uncertainty with multiple models fitted to dynamic observations. The developed approach is illustrated with a reservoir case study. The resulting probabilistic production forecasts are described by uncertainty envelopes.
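The Bayesian multiple-model step can be sketched independently of the SVR machinery. The Gaussian likelihood, the ensemble and the quantile levels below are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def posterior_weights(predictions, observed, sigma):
    """Bayesian weights for an ensemble of stochastic model realisations,
    scored by their misfit to the dynamic observations (Gaussian likelihood)."""
    misfit = ((predictions - observed) ** 2).sum(axis=1)
    logw = -0.5 * misfit / sigma ** 2
    w = np.exp(logw - logw.max())          # subtract max for numerical stability
    return w / w.sum()

def uncertainty_envelope(forecasts, weights, lo=0.1, hi=0.9):
    """Weighted quantile envelope of the ensemble's production forecasts."""
    order = np.argsort(forecasts)
    cum = np.cumsum(weights[order])
    return (forecasts[order][np.searchsorted(cum, lo)],
            forecasts[order][np.searchsorted(cum, hi)])

# Invented ensemble: 200 realisations matched to 5 dynamic observations.
rng = np.random.default_rng(3)
truth = np.linspace(1.0, 2.0, 5)
preds = truth + rng.normal(0.0, 0.3, size=(200, 5))
w = posterior_weights(preds, truth, sigma=0.3)
env = uncertainty_envelope(preds[:, -1], w)    # 10-90% envelope at last step
```

Realisations that fit the dynamic observations poorly receive negligible weight, so the envelope narrows around the history-matched models.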

Relevance: 10.00%

Abstract:

This paper analyses the predictive ability of quantitative precipitation forecasts (QPF) and of so-called "poor-man" rainfall probabilistic forecasts (RPF). With this aim, the full set of warnings issued by the Meteorological Service of Catalonia (SMC) for potentially dangerous events due to severe precipitation has been analysed for the year 2008. For each of the 37 warnings, the QPFs obtained from the limited-area model MM5 have been verified against hourly precipitation data provided by the rain gauge network covering Catalonia (NE Spain), managed by the SMC. For a group of five selected case studies, a QPF comparison has been undertaken between the MM5 and COSMO-I7 limited-area models. Although MM5's predictive ability has been examined for these five cases by making use of satellite data, this paper only shows in detail the heavy precipitation event of 9-10 May 2008. Finally, the "poor-man" rainfall probabilistic forecasts (RPF) issued by the SMC at regional scale have also been tested against hourly precipitation observations. Verification results show that for long events (>24 h) MM5 tends to overestimate total precipitation, whereas for short events (≤24 h) the model tends instead to underestimate it. The analysis of the five case studies concludes that most of MM5's QPF errors are mainly triggered by a very poor representation of some of its cloud microphysical species, particularly cloud liquid water and, to a lesser degree, water vapor. The models' performance comparison demonstrates that MM5 and COSMO-I7 are on the same level of QPF skill, at least for the intense-rainfall events dealt with in the five case studies, whilst the warnings based on RPF issued by the SMC have proven fairly correct when tested against hourly observed precipitation for 6-h intervals and at a small regional scale.
Throughout this study we have only dealt with (SMC-issued) warning episodes in order to analyse deterministic (MM5 and COSMO-I7) and probabilistic (SMC) rainfall forecasts; we have therefore not taken into account episodes that might (or might not) have been missed by the official SMC warnings. Whenever we talk about "misses", it is thus always in relation to the deterministic LAMs' QPFs.
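The verification of forecasts against gauge observations can be illustrated with standard dichotomous scores. The threshold and gauge values below are invented, and POD, FAR and frequency bias are common summary metrics rather than necessarily the ones used by the SMC:

```python
import numpy as np

def verification_scores(forecast_mm, observed_mm, threshold=20.0):
    """Dichotomous QPF verification against gauge observations above a
    warning threshold: probability of detection (POD), false-alarm
    ratio (FAR) and frequency bias."""
    f = np.asarray(forecast_mm) >= threshold
    o = np.asarray(observed_mm) >= threshold
    hits = np.sum(f & o)               # forecast and observed above threshold
    misses = np.sum(~f & o)            # observed but not forecast
    false_alarms = np.sum(f & ~o)      # forecast but not observed
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    bias = (hits + false_alarms) / (hits + misses)
    return pod, far, bias

# Invented 6-h accumulations (mm) at eight rain gauges.
pod, far, bias = verification_scores([35, 10, 25, 5, 40, 18, 22, 3],
                                     [30, 12, 8, 6, 45, 25, 21, 2])
```

A frequency bias above 1 indicates a tendency to over-forecast events above the threshold, and below 1 to under-forecast them, mirroring the over/underestimation behaviour reported above.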

Relevance: 10.00%

Abstract:

The use of statistics and performance indicators for electronic products and services in library evaluation processes is analysed. The main projects defining such statistics and indicators developed in recent years are examined, with particular attention to three of them: Counter, E-metrics and ISO. The statistics currently offered by four large electronic journal publishers (American Chemical Society, Emerald, Kluwer and Wiley) and by a service (Scitation Usage Statistics) that aggregates data from six physics journal publishers are also analysed. The results show a certain degree of consensus on a basic set of statistics and indicators, despite the diversity of existing projects and the heterogeneity of the data offered by publishers.