135 results for Probabilistic metrics
Abstract:
Members of the Chlamydiales order all share a biphasic lifecycle alternating between small infectious particles, the elementary bodies (EBs), and larger intracellular forms able to replicate, the reticulate bodies. Whereas the classical Chlamydia usually harbour round-shaped EBs, some members of the Chlamydia-related families display crescent and star-shaped morphologies by electron microscopy. To determine the impact of fixative methods on the shape of the bacterial cells, different buffer and fixative combinations were tested on purified EBs of Criblamydia sequanensis, Estrella lausannensis, Parachlamydia acanthamoebae, and Waddlia chondrophila. A linear discriminant analysis was performed on particle metrics extracted from electron microscopy images to recognize crescent, round, star and intermediary forms. Depending on the buffer and fixatives used, a mixture of alternative shapes was observed in varying proportions, with stars and crescents being more frequent in C. sequanensis and P. acanthamoebae, respectively. No tested combination of buffer and chemical fixative ideally preserved the round shape of the majority of bacteria, and other methods such as deep-freezing and cryofixation should be applied. Although crescent and star shapes could represent a fixation artifact, they certainly point towards a diverse composition and organization of membrane proteins or intracellular structures rather than being a distinct developmental stage.
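As a minimal sketch of the classification step described above, the snippet below fits a linear discriminant analysis to hypothetical particle metrics and assigns new particles to shape classes; the feature values and names are illustrative assumptions, not the study's measurements.

```python
# A minimal LDA sketch on made-up morphometric features
# (e.g. elongation, circularity, solidity) per EB particle.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical training set: one row per particle.
X_train = np.array([
    [1.20, 0.95, 0.88], [1.15, 0.93, 0.90],   # round-ish particles
    [1.85, 0.60, 0.45], [1.90, 0.58, 0.50],   # crescent-ish particles
    [2.40, 0.40, 0.30], [2.35, 0.42, 0.28],   # star-ish particles
])
y_train = ["round", "round", "crescent", "crescent", "star", "star"]

lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)

# Classify new particles measured under a different fixative protocol.
X_new = np.array([[1.30, 0.90, 0.85]])
print(lda.predict(X_new))        # e.g. ['round']
print(lda.predict_proba(X_new))  # class membership probabilities
```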
Abstract:
The research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). As a benchmark model, the simple k-nearest neighbour algorithm is considered. PNN is a neural network reformulation of well-known nonparametric principles of probability density modeling using kernel density estimation and Bayesian optimal or maximum a posteriori decision rules. PNNs are well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNNs is that they can easily be used in decision support systems dealing with problems of automatic classification. The support vector machine is an implementation of the principles of statistical learning theory for classification tasks. Recently they were successfully applied to different environmental topics: classification of soil types and hydro-geological units, optimization of monitoring networks, and susceptibility mapping of natural hazards. In the present paper both simulated and real data case studies (low and high dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the algorithms applied.
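The PNN decision rule described here, a Parzen (Gaussian) kernel density estimate per class combined with a maximum a posteriori choice, can be sketched in a few lines; the data, bandwidth and two-class setup below are illustrative assumptions.

```python
# A minimal PNN sketch: class-conditional kernel density estimates
# weighted by class priors, then a MAP decision.
import numpy as np

def pnn_predict(X_train, y_train, X_query, bandwidth=0.5):
    """Return the maximum a posteriori class for each query point."""
    classes = np.unique(y_train)
    posteriors = []
    for c in classes:
        Xc = X_train[y_train == c]
        prior = len(Xc) / len(X_train)
        # Gaussian kernel density estimate for class c at each query point.
        d2 = ((X_query[:, None, :] - Xc[None, :, :]) ** 2).sum(axis=2)
        density = np.exp(-d2 / (2 * bandwidth**2)).mean(axis=1)
        posteriors.append(prior * density)
    posteriors = np.stack(posteriors, axis=1)  # (n_query, n_classes)
    return classes[np.argmax(posteriors, axis=1)]

# Two hypothetical spatial classes (e.g. two soil types).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(pnn_predict(X, y, np.array([[0.2, 0.1], [2.9, 3.2]])))  # -> [0 1]
```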
Abstract:
OBJECTIVE: Although dual-energy X-ray absorptiometry (DEXA) is the preferred method to estimate adiposity, body mass index (BMI) is often used as a proxy. However, the ability of BMI to measure adiposity change among youth is poorly evidenced. This study explored which metrics of BMI change have the highest correlations with different metrics of DEXA change. METHODS: Data were from the Quebec Adipose and Lifestyle Investigation in Youth cohort, a prospective cohort of children (8-10 years at recruitment) from Québec, Canada (n=557). Height and weight were measured by trained nurses at baseline (2008) and follow-up (2010). Metrics of BMI change were raw (ΔBMIkg/m²), adjusted for median BMI (ΔBMIpercentage) and age-sex-adjusted with the Centers for Disease Control and Prevention growth curves expressed as centiles (ΔBMIcentile) or z-scores (ΔBMIz-score). Metrics of DEXA change were raw (total fat mass; ΔFMkg), per cent (ΔFMpercentage), height-adjusted (fat mass index; ΔFMI) and age-sex-adjusted z-scores (ΔFMz-score). Spearman's rank correlations were derived. RESULTS: Correlations ranged from modest (0.60) to strong (0.86). ΔFMkg correlated most highly with ΔBMIkg/m² (r = 0.86), ΔFMI with ΔBMIkg/m² and ΔBMIpercentage (r = 0.83-0.84), ΔFMz-score with ΔBMIz-score (r = 0.78), and ΔFMpercentage with ΔBMIpercentage (r = 0.68). Correlations with ΔBMIcentile were consistently among the lowest. CONCLUSIONS: In 8-10-year-old children, absolute or per cent change in BMI is a good proxy for change in fat mass or FMI, and BMI z-score change is a good proxy for FM z-score change. However, change in BMI centile and change in per cent fat mass perform less well and are not recommended.
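For illustration, the correlation metric used throughout the study, Spearman's rank correlation, can be computed as below; the paired ΔBMI/ΔFM values are made up, not the cohort's data.

```python
# Spearman's rank correlation between a BMI-change metric and a
# DEXA-change metric on hypothetical paired observations.
from scipy.stats import spearmanr

delta_bmi = [0.8, 1.5, -0.3, 2.1, 0.4, 1.0]   # ΔBMI (kg/m²)
delta_fm  = [1.1, 2.3, -0.5, 3.0, 0.2, 1.4]   # ΔFM (kg)

rho, p_value = spearmanr(delta_bmi, delta_fm)
print(f"Spearman r = {rho:.2f}, p = {p_value:.3f}")
```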
Abstract:
1. Identifying the boundary of a species' niche from observational and environmental data is a common problem in ecology and conservation biology, and a variety of techniques have been developed or applied to model niches and predict distributions. Here, we examine the performance of some pattern-recognition methods as ecological niche models (ENMs). In particular, one-class pattern recognition is a flexible and seldom used methodology for modelling ecological niches and distributions from presence-only data. The development of one-class methods that perform comparably to two-class methods (for presence/absence data) would remove modelling decisions about sampling pseudo-absences or background data points when absence points are unavailable. 2. We studied nine methods for one-class classification and seven methods for two-class classification (five common to both), all primarily used in pattern recognition and therefore not common in species distribution and ecological niche modelling, across a set of 106 mountain plant species for which presence-absence data were available. We assessed accuracy using standard metrics and compared trade-offs in omission and commission errors between classification groups as well as effects of prevalence and spatial autocorrelation on accuracy. 3. One-class models fit to presence-only data were comparable to two-class models fit to presence-absence data when performance was evaluated with a measure weighting omission and commission errors equally. One-class models were superior for reducing omission errors (i.e. yielding higher sensitivity), and two-class models were superior for reducing commission errors (i.e. yielding higher specificity). For these methods, spatial autocorrelation was only influential when prevalence was low. 4. These results differ from previous efforts to evaluate alternative modelling approaches to build ENMs and are particularly noteworthy because data are from exhaustively sampled populations, minimizing false absence records. Accurate, transferable models of species' ecological niches and distributions are needed to advance ecological research and are crucial for effective environmental planning and conservation; the pattern-recognition approaches studied here show good potential for future modelling studies. This study also provides an introduction to promising methods for ecological modelling inherited from the pattern-recognition discipline.
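As a hedged sketch of the one-class idea, the snippet below fits a one-class SVM, a representative method from this family (not necessarily one of the nine evaluated), to presence-only points in a synthetic environmental space.

```python
# One-class classification from presence-only data: fit a boundary
# around presence records in environmental space, then flag whether
# new locations fall inside the modelled niche.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
# Presence records as points in a 2-D environmental space
# (e.g. standardized temperature and precipitation).
presences = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))

# nu bounds the fraction of presences treated as outliers.
model = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(presences)

candidates = np.array([[0.1, -0.2], [3.0, 3.0]])
print(model.predict(candidates))  # +1 = inside niche, -1 = outside
```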
Abstract:
Background: The role of the non-injured hemisphere in stroke recovery is poorly understood. In this pilot study, we sought to explore the presence of structural changes detectable by diffusion tensor imaging (DTI) in the contralesional hemispheres of patients who recovered well from ischemic stroke. Methods: We analyzed serial DTI data from 16 stroke patients who had moderate initial neurological deficits (NIHSS scores 3-12) and good functional outcome at 3-6 months (NIHSS score 0 or modified Rankin Score ≤1). We segmented the brain tissue into gray and white matter (GM and WM) and measured the apparent diffusion coefficient (ADC) and fractional anisotropy in the infarct, in the contralesional infarct mirror region, as well as in concentrically expanding regions around them. Results: We found that GM and WM ADC significantly increased in the infarct region (p < 0.01) from acute to chronic time points, whereas in the infarct mirror region, GM and WM ADC increased (p < 0.01) and WM fractional anisotropy decreased (p < 0.05). No significant changes were detected in other regions. Conclusion: DTI-based metrics are sensitive to regional structural changes in the contralesional hemisphere during stroke recovery. Prospective studies in larger cohorts with varying levels of recovery are needed to confirm our findings.
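The two DTI metrics reported follow directly from the eigenvalues of the diffusion tensor; the sketch below computes mean diffusivity (used here as a proxy for ADC) and fractional anisotropy from illustrative eigenvalues.

```python
# DTI scalar metrics from the three diffusion-tensor eigenvalues:
# MD = mean(lambda_i); FA = sqrt(3/2 * sum((lambda_i - MD)^2) / sum(lambda_i^2))
import numpy as np

def dti_metrics(eigenvalues):
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()  # mean diffusivity, ~ADC
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return md, fa

# Illustrative white-matter eigenvalues in 10^-3 mm^2/s.
md, fa = dti_metrics([1.6, 0.4, 0.3])
print(f"MD = {md:.3f} x10^-3 mm^2/s, FA = {fa:.3f}")
```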
Abstract:
This paper reports on the purpose, design, methodology and target audience of E-learning courses in forensic interpretation offered by the authors since 2010, including practical experience gained throughout the implementation period of this project. This initiative was motivated by the fact that reporting results of forensic examinations in a logically correct and scientifically rigorous way is a daily challenge for any forensic practitioner. Indeed, interpretation of raw data and communication of findings in both written and oral statements are topics where knowledge and applied skills are needed. Although most forensic scientists hold educational records in traditional sciences, only a few have actually followed full courses that focussed on interpretation issues. Such courses should include foundational principles and methodology - including elements of forensic statistics - for the evaluation of forensic data in a way that is tailored to meet the needs of the criminal justice system. In order to help bridge this gap, the authors' initiative seeks to offer educational opportunities that allow practitioners to acquire knowledge and competence in the current approaches to the evaluation and interpretation of forensic findings. These cover, among other aspects, probabilistic reasoning (including Bayesian networks and other methods of forensic statistics, tools and software), case pre-assessment, skills in the oral and written communication of uncertainty, and the development of independence and self-confidence to solve practical inference problems. E-learning was chosen as a general format because it helps to form a trans-institutional online community of practitioners from varying forensic disciplines and fields of experience, such as reporting officers, (chief) scientists, forensic coordinators, but also lawyers, who can all interact directly from their personal workplaces without consideration of distances, travel expenses or time schedules. In the authors' experience, the proposed learning initiative supports participants in developing their expertise and skills in forensic interpretation, but also offers an opportunity for the associated institutions and the forensic community to reinforce the development of a harmonized view with regard to interpretation across forensic disciplines, laboratories and judicial systems.
Abstract:
A mobile ad hoc network (MANET) is a decentralized and infrastructure-less network. This thesis aims to provide system-level support for developers of applications or protocols in such networks. To do this, we propose contributions in both the algorithmic realm and the practical realm. In the algorithmic realm, we contribute to the field by proposing different context-aware broadcast and multicast algorithms for MANETs, namely six-shot broadcast, six-shot multicast, PLAN-B and a generic algorithmic approach to optimize the power consumption of existing algorithms. We compare each proposed algorithm to existing algorithms that are either probabilistic or context-aware, and then evaluate their performance based on simulations. We demonstrate that in some cases context-aware information, such as location or signal strength, can improve efficiency. In the practical realm, we propose a testbed framework, namely ManetLab, to implement and deploy MANET-specific protocols and to evaluate their performance. This testbed framework aims to increase the accuracy of performance evaluation compared with simulations, while keeping the ease of use offered by simulators to reproduce a performance evaluation. By evaluating the performance of different probabilistic algorithms with ManetLab, we observe that simulations and testbeds should be used in a complementary way. In addition to the above original contributions, we also provide two surveys about system-level support for ad hoc communications in order to establish a state of the art. The first is about existing broadcast algorithms and the second is about existing middleware solutions and the way they deal with privacy, especially location privacy.
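As an illustrative baseline for the probabilistic algorithms this thesis compares against, the sketch below simulates a simple gossip broadcast in which each node rebroadcasts a received message with probability p; the topology and parameter values are hypothetical, not the thesis' algorithms.

```python
# Probabilistic (gossip) broadcast over an ad hoc topology:
# every node that receives the message forwards it with probability p.
import random

def gossip_broadcast(adjacency, source, p=0.7, seed=42):
    """Simulate one probabilistic flood; return the set of reached nodes."""
    random.seed(seed)
    reached, frontier = {source}, [source]
    while frontier:
        node = frontier.pop()
        if node != source and random.random() > p:
            continue  # this node decides not to rebroadcast
        for neigh in adjacency[node]:
            if neigh not in reached:
                reached.add(neigh)
                frontier.append(neigh)
    return reached

# A small ad hoc topology as an adjacency list.
topology = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(gossip_broadcast(topology, source=0))  # e.g. {0, 1, 2, 3, 4}
```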
Abstract:
The research reported in this series of articles aimed at (1) automating the search of questioned ink specimens in ink reference collections and (2) evaluating the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples are analysed in an accurate and reproducible way and that they are compared in an objective and automated way. This latter requirement is due to the large number of comparisons that are necessary in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of the three stages of the research programme. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited for different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, i.e. high-performance thin layer chromatography, despite its reputation of lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model. It is therefore possible to move away from the traditional subjective approach, which is entirely based on experts' opinion and is usually not very informative. While there is room for improvement, this report demonstrates the significant gains obtained over the traditional subjective approach for the search of ink specimens in ink databases and the interpretation of their evidential value.
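The likelihood-ratio evaluation underlying such a probabilistic model can be sketched as follows; the two score distributions are hypothetical stand-ins for distributions that would be estimated from calibrated reference comparisons, not the authors' fitted model.

```python
# Likelihood ratio for a similarity score between two ink samples,
# evaluated under same-source and different-source hypotheses.
from scipy.stats import norm

def likelihood_ratio(score):
    # Hypothetical score distributions estimated from reference data:
    # same-source comparisons score high, different-source ones low.
    lr_num = norm(loc=0.9, scale=0.05).pdf(score)   # P(score | same source)
    lr_den = norm(loc=0.5, scale=0.15).pdf(score)   # P(score | diff. source)
    return lr_num / lr_den

score = 0.85  # observed similarity between questioned and reference ink
print(f"LR = {likelihood_ratio(score):.1f}")  # >1 supports same source
```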
Abstract:
The paper deals with the development and application of a generic methodology for the automatic processing (mapping and classification) of environmental data. The General Regression Neural Network (GRNN) is considered in detail and is proposed as an efficient tool to solve the problem of spatial data mapping (regression). The Probabilistic Neural Network (PNN) is considered as an automatic tool for spatial classification. The automatic tuning of isotropic and anisotropic GRNN/PNN models using a cross-validation procedure is presented. Results are compared with the k-Nearest-Neighbours (k-NN) interpolation algorithm using an independent validation data set. Real case studies are based on decision-oriented mapping and classification of radioactively contaminated territories.
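A GRNN is equivalent to Nadaraya-Watson kernel regression, so the mapping step can be sketched compactly; the isotropic bandwidth sigma, which the paper tunes by cross-validation, is fixed here and the data are synthetic.

```python
# GRNN / Nadaraya-Watson kernel regression: predictions are
# kernel-weighted averages of the training outputs.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.3):
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2 * sigma**2))          # (n_query, n_train) weights
    return (w @ y_train) / w.sum(axis=1)      # weighted average of outputs

# Hypothetical contamination measurements at 2-D coordinates.
rng = np.random.default_rng(7)
X = rng.uniform(0, 1, (200, 2))
y = np.sin(4 * X[:, 0]) + 0.1 * rng.normal(size=200)

print(grnn_predict(X, y, np.array([[0.25, 0.5]])))
```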
Abstract:
Estimating the time since discharge of a spent cartridge or a firearm can be useful in criminal situations involving firearms. The analysis of volatile gunshot residue remaining after shooting using solid-phase microextraction (SPME) followed by gas chromatography (GC) was proposed to meet this objective. However, current interpretative models suffer from several conceptual drawbacks which render them inadequate for assessing the evidential value of a given measurement. This paper aims to fill this gap by proposing a logical approach based on the assessment of likelihood ratios. A probabilistic model was thus developed and applied to a hypothetical scenario in which alternative hypotheses about the discharge time of a spent cartridge found at a crime scene were put forward. In order to estimate the parameters required to implement this solution, a non-linear regression model was proposed and applied to real published data. The proposed approach proved to be a valuable method for interpreting aging-related data.
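The regression step can be illustrated with an exponential-decay fit, a common aging-curve form; the parametrization and data below are assumptions for illustration, not the authors' published fit.

```python
# Non-linear regression of an aging curve: fit an exponential decay
# to residue intensities measured at known times since discharge.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, k, c):
    return a * np.exp(-k * t) + c  # residue intensity after t hours

# Hypothetical SPME-GC peak intensities at known ages (hours).
t_obs = np.array([1, 2, 4, 8, 16, 32], dtype=float)
y_obs = np.array([9.1, 7.6, 5.4, 3.0, 1.4, 0.7])

params, _ = curve_fit(decay, t_obs, y_obs, p0=(10.0, 0.2, 0.5))
a, k, c = params
print(f"fit: a={a:.2f}, k={k:.3f} 1/h, c={c:.2f}")
```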
Abstract:
The safe and responsible development of engineered nanomaterials (ENMs), nanotechnology-based materials and products, together with the definition of regulatory measures and implementation of "nano"-legislation in Europe, requires a widely supported scientific basis and sufficient high quality data upon which to base decisions. At the very core of such a scientific basis is a general agreement on key issues related to risk assessment of ENMs, which encompass the key parameters to characterise ENMs, appropriate methods of analysis and the best approach to express the effect of ENMs in widely accepted dose-response toxicity tests. The following major conclusions were drawn. Owing to the high batch variability of the characteristics of commercially available and, to a lesser degree, laboratory-made ENMs, it is not possible to make general statements regarding the toxicity resulting from exposure to ENMs. 1) Concomitant with using the OECD priority list of ENMs, other criteria for the selection of ENMs, such as relevance for mechanistic (scientific) studies or risk-assessment-based studies, widespread availability (and thus high expected volumes of use) or consumer concern (route of consumer exposure depending on application), could be helpful. The OECD priority list focuses on the validity of OECD tests, so source material will be first in scope for testing. However, for risk assessment it is much more relevant to have toxicity data from material as present in the products/matrices to which humans and the environment are exposed. 2) For most, if not all, characteristics of ENMs, standardized analytical methods, though not necessarily validated, are available. Generally these methods are only able to determine a single characteristic, and some of them can be rather expensive. Practically, it is currently not feasible to fully characterise ENMs. Many techniques that are available to measure the same nanomaterial characteristic produce contrasting results (e.g. reported sizes of ENMs). It was recommended that at least two complementary techniques be employed to determine a metric of ENMs. The first great challenge is to prioritise metrics which are relevant in the assessment of biological dose-response relations and to develop analytical methods for characterising ENMs in biological matrices. It was generally agreed that one metric is not sufficient to describe ENMs fully. 3) Characterisation of ENMs in biological matrices starts with sample preparation. It was concluded that there is currently no standard approach/protocol for sample preparation to control agglomeration/aggregation and (re)dispersion. It was recommended that harmonization be initiated and that exchange of protocols should take place. The precise methods used to disperse ENMs should be specifically, yet succinctly, described within the experimental section of a publication. 4) ENMs need to be characterised in the matrix as it is presented to the test system (in vitro/in vivo). 5) Alternative approaches (e.g. biological or in silico systems) for the characterisation of ENMs are simply not possible with current knowledge. Contributors: Iseult Lynch, Hans Marvin, Kenneth Dawson, Markus Berges, Diane Braguer, Hugh J. Byrne, Alan Casey, Gordon Chambers, Martin Clift, Giuliano Elia, Teresa F. Fernandes, Lise Fjellsbø, Peter Hatto, Lucienne Juillerat, Christoph Klein, Wolfgang Kreyling, Carmen Nickel, and Vicki Stone.
Abstract:
In social insects, workers perform a multitude of tasks, such as foraging, nest construction, and brood rearing, without central control of how work is allocated among individuals. It has been suggested that workers choose a task by responding to stimuli gathered from the environment. Response-threshold models assume that individuals in a colony vary in the stimulus intensity (response threshold) at which they begin to perform the corresponding task. Here we highlight the limitations of these models with respect to colony performance in task allocation. First, we show with analysis and quantitative simulations that the deterministic response-threshold model constrains the workers' behavioral flexibility under some stimulus conditions. Next, we show that the probabilistic response-threshold model fails to explain precise colony responses to varying stimuli. Both of these limitations would be detrimental to colony performance when dynamic and precise task allocation is needed. To address these problems, we propose extensions of the response-threshold model by adding variables that weigh stimuli. We test the extended response-threshold model in a foraging scenario and show in simulations that it results in efficient task allocation. Finally, we show that response-threshold models can be formulated as artificial neural networks, which consequently provide a comprehensive framework for modeling task allocation in social insects.
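The probabilistic response-threshold rule discussed here is commonly written as P(engage) = s^n / (s^n + θ^n) for stimulus s and threshold θ; the sketch below simulates how many workers in a colony with heterogeneous thresholds engage as the stimulus rises (parameter values are illustrative).

```python
# Probabilistic response-threshold model: each worker engages in a task
# with probability s^n / (s^n + theta^n), given its own threshold theta.
import random

def engagement_probability(s, theta, n=2):
    return s**n / (s**n + theta**n)

def simulate_colony(stimulus, thresholds, n=2, seed=3):
    """Return how many workers engage at a given stimulus level."""
    random.seed(seed)
    return sum(random.random() < engagement_probability(stimulus, th, n)
               for th in thresholds)

# A colony whose workers vary in their response thresholds.
thresholds = [0.2, 0.4, 0.6, 0.8, 1.0] * 20  # 100 workers
for s in (0.1, 0.5, 1.0):
    print(s, simulate_colony(s, thresholds))  # engagement rises with s
```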
Abstract:
Detailed knowledge of the anatomy and connectivity pattern of cortico-basal ganglia circuits is essential to an understanding of abnormal cortical function and pathophysiology associated with a wide range of neurological and neuropsychiatric diseases. We aim to study the spatial extent and topography of human basal ganglia connectivity in vivo. Additionally, we explore at an anatomical level the hypothesis of coexistent segregated and integrative cortico-basal ganglia loops. We use probabilistic tractography on magnetic resonance diffusion weighted imaging data to segment basal ganglia and thalamus in 30 healthy subjects based on their cortical and subcortical projections. We introduce a novel method to define voxel-based connectivity profiles that allow representation of projections from a source to more than one target region. Using this method, we localize specific relay nuclei within predefined functional circuits. We find strong correlation between tractography-based basal ganglia parcellation and anatomical data from previously reported invasive tracing studies in nonhuman primates. Additionally, we show in vivo the anatomical basis of segregated loops and the extent of their overlap in prefrontal, premotor, and motor networks. Our findings in healthy humans support the notion that probabilistic diffusion tractography can be used to parcellate subcortical gray matter structures on the basis of their connectivity patterns. The coexistence of clearly segregated and also overlapping connections from cortical sites to basal ganglia subregions is a neuroanatomical correlate of both parallel and integrative networks within them. We believe that this method can be used to examine pathophysiological concepts in a number of basal ganglia-related disorders.
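The voxel-based connectivity-profile idea can be sketched as a winner-take-all assignment with an overlap flag; the streamline counts and the 0.3 overlap cut-off below are hypothetical, not the paper's parameters.

```python
# Connectivity-based parcellation sketch: each subcortical voxel holds a
# profile of tractography streamline counts to cortical targets; assign
# it to the dominant target and flag voxels with a sizeable second target.
import numpy as np

targets = ["prefrontal", "premotor", "motor"]
# Rows: voxels; columns: streamline counts reaching each cortical target.
profiles = np.array([
    [900,  80,  20],
    [100, 700, 650],   # strong connections to two targets -> overlap
    [ 30,  60, 910],
])

shares = profiles / profiles.sum(axis=1, keepdims=True)
winner = shares.argmax(axis=1)
overlap = np.sort(shares, axis=1)[:, -2] > 0.3  # sizeable 2nd target

for i, (w, o) in enumerate(zip(winner, overlap)):
    label = targets[w] + (" (overlap)" if o else "")
    print(f"voxel {i}: {label}")
```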
Abstract:
False identity documents constitute a potentially powerful source of forensic intelligence because they are essential elements of transnational crime and provide cover for organized crime. In previous work, a systematic profiling method using false documents' visual features was built within a forensic intelligence model. In the current study, the comparison process and metrics lying at the heart of this profiling method are described and evaluated. This evaluation takes advantage of 347 false identity documents of four different types seized in two countries whose sources were known to be common or different (following police investigations and the dismantling of counterfeit factories). Intra-source and inter-source variations were evaluated through the computation of more than 7500 similarity scores. The profiling method could thus be validated and its performance assessed using two complementary approaches to measuring type I and type II error rates: a binary classification and the computation of likelihood ratios. Very low error rates were measured across the four document types, demonstrating the validity and robustness of the method for linking documents to a common source or differentiating them. These results pave the way for an operational implementation of a systematic profiling process integrated in a developed forensic intelligence model.
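The binary-classification evaluation can be sketched as follows; the score distributions, sample sizes and threshold are synthetic stand-ins for the study's more than 7500 computed similarity scores.

```python
# Type I / type II error rates from similarity scores of known
# same-source (intra-source) and different-source (inter-source) pairs.
import numpy as np

rng = np.random.default_rng(11)
same_source = rng.normal(0.85, 0.05, 500)   # intra-source scores
diff_source = rng.normal(0.40, 0.12, 7000)  # inter-source scores

threshold = 0.7  # link two documents if their score exceeds this
type_i  = np.mean(diff_source >= threshold)  # false links
type_ii = np.mean(same_source < threshold)   # missed links

print(f"type I error:  {type_i:.4f}")
print(f"type II error: {type_ii:.4f}")
```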