986 results for Naïve Bayesian Classification


Relevance: 20.00%

Publisher:

Abstract:

In the forensic examination of DNA mixtures, the question of how to set the total number of contributors (N) is a topic of ongoing interest. Part of the discussion gravitates around issues of bias, in particular when assessments of the number of contributors are not made prior to considering the genotypic configuration of potential donors. A further complication stems from the observation that, in some cases, certain numbers of contributors may be incompatible with the set of alleles seen in the profile of a mixed crime stain, given the genotype of a potential contributor. In such situations, procedures that output a single, fixed number of contributors can lead to inferential impasses. Assessing the number of contributors within a probabilistic framework can help avoid such complications. Using elements of decision theory, this paper analyses two strategies for inference on the number of contributors. One procedure is deterministic and focuses on the minimum number of contributors required to 'explain' an observed set of alleles. The other is probabilistic: using Bayes' theorem, it provides a probability distribution over a set of numbers of contributors, based on the observed alleles as well as their respective rates of occurrence. The discussion concentrates on mixed stains of varying quality (i.e., different numbers of loci for which genotyping information is available). A so-called qualitative interpretation is pursued, since quantitative information such as peak area and height data is not taken into account. The competing procedures are compared using a standard scoring rule that penalizes the degree of divergence between a given agreed value for N, that is the number of contributors, and the actual value taken by N.
Using only modest assumptions and a discussion with reference to a casework example, this paper reports on analyses using simulation techniques and graphical models (i.e., Bayesian networks) to point out that setting the number of contributors to a mixed crime stain in probabilistic terms is, under the conditions assumed in this study, preferable to a decision policy that uses categorical assumptions about N.
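The probabilistic strategy can be sketched in a few lines. The sketch below is a deliberately minimal illustration, not the paper's Bayesian network model: it assumes each allele is an independent draw from known population frequencies (ignoring genotype pairing, peak information and subpopulation effects), takes a uniform prior over candidate values of N, and evaluates the likelihood by inclusion-exclusion. All frequencies are hypothetical.

```python
from itertools import combinations

def prob_exact_allele_set(freqs, observed, n_contributors):
    """P(the 2N alleles of N contributors show exactly the observed set),
    assuming each allele is an independent draw from the population
    frequencies -- a simplification of the paper's model."""
    draws = 2 * n_contributors
    s = list(observed)
    total = 0.0
    # inclusion-exclusion over subsets of the observed allele set
    for r in range(len(s) + 1):
        for subset in combinations(s, r):
            total += (-1) ** (len(s) - r) * sum(freqs[a] for a in subset) ** draws
    return total

def posterior_n(freqs, observed, prior):
    """Posterior distribution over the number of contributors N (Bayes' theorem)."""
    unnorm = {n: p * prob_exact_allele_set(freqs, observed, n)
              for n, p in prior.items()}
    z = sum(unnorm.values())
    return {n: v / z for n, v in unnorm.items()}

# hypothetical single-locus allele frequencies and a three-allele mixed stain
freqs = {'A': 0.3, 'B': 0.2, 'C': 0.1, 'D': 0.4}
post = posterior_n(freqs, {'A', 'B', 'C'}, prior={1: 1/3, 2: 1/3, 3: 1/3})
```

Under these assumptions N = 1 receives zero posterior probability (two alleles cannot show three), echoing the point above that some values of N are incompatible with the observed alleles, while the remaining mass is spread over N = 2 and N = 3.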

Relevance: 20.00%

Publisher:

Abstract:

A ubiquitous assessment of swimming velocity (the main performance metric) is essential for the coach to provide tailored feedback to the trainee. We present a probabilistic framework for the data-driven estimation of swimming velocity at every cycle using a low-cost wearable inertial measurement unit (IMU). Statistical validation of the method on 15 swimmers shows that an average relative error of 0.1 ± 9.6% and a high correlation with the tethered reference system (r_{X,Y} = 0.91) are achievable. In addition, a simple tool for analyzing the influence of sacrum kinematics on performance is provided.

Relevance: 20.00%

Publisher:

Abstract:

Purpose: Combined antiretroviral therapy has dramatically improved the survival of HIV-infected individuals. Long-term strategies are currently needed to achieve the goal of durable virologic suppression. However, long-term data for specific antiretrovirals (ARV) are limited. In clinical trials, boosted atazanavir (ATV/r) regimens have shown good efficacy and tolerability in ARV-naïve patients for up to 4 years. The REMAIN study aimed to evaluate the long-term outcomes of ATV/r regimens in ARV-naïve patients in a real-life setting. Methods: Non-comparative, observational study conducted in Germany, Portugal and Spain. Historical and longitudinal follow-up data were extracted every six months from the medical records of HIV-infected, treatment-naïve patients who initiated an ATV/r regimen between 2008 and 2010. The primary endpoint was the proportion of patients remaining on ATV treatment over time. Secondary endpoints included virologic response (HIV-1 RNA <50 c/mL and <500 c/mL), reasons for discontinuation and long-term safety. The duration of treatment and time to virologic failure (VF) were analyzed using the Kaplan-Meier method. Data from an interim analysis including patients with at least one year of follow-up are reported here. Results: A total of 411 patients were included in this interim analysis [median (Q1, Q3) follow-up: 23.42 (16.25, 32.24) months]: 77% male; median age 40 years [min, max: 19, 78]; 16% IDUs; 18% CDC C; 18% hepatitis C. TDF/FTC was the most common backbone (85%). At baseline, median (Q1, Q3) HIV-RNA and CD4 cell count were 4.91 (4.34, 5.34) log10 c/mL and 256 (139, 353) cells/mm3, respectively. The probability of remaining on treatment was 0.84 (95% CI: 0.80, 0.87) for the first year and 0.72 (95% CI: 0.67, 0.76) for the second. After 2 years of follow-up, 84% (95% CI: 0.79, 0.88) of patients were virologically suppressed (<50 c/mL). No major protease inhibitor mutations were observed at VF.
Overall, 125 patients (30%) discontinued ATV therapy [median (Q1, Q3) time to discontinuation: 11.14 (6.24, 19.35) months]. Adverse events (AEs) were the main reason for discontinuation (n = 47, 11%). Hyperbilirubinaemia was the most common AE leading to discontinuation (14 patients). No unexpected AEs were reported. Conclusions: In a real-life clinical setting, ATV/r regimens showed durable virologic efficacy with good tolerability in an ARV-naïve population. Longer follow-up will provide additional valuable information.
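The Kaplan-Meier estimate used above for the probability of remaining on treatment can be sketched as follows. The follow-up times and discontinuation indicators in the example are hypothetical toy values, not data from the REMAIN cohort.

```python
from itertools import groupby

def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the probability of remaining on treatment.
    times: follow-up in months; events: 1 = discontinued, 0 = censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    for t, group in groupby(data, key=lambda pair: pair[0]):
        group = list(group)
        d = sum(e for _, e in group)      # discontinuations at time t
        if d:
            surv *= 1.0 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= len(group)           # events and censorings leave the risk set
    return curve

# toy data: five patients, three discontinuations, two censored
curve = kaplan_meier([6, 12, 12, 18, 24], [1, 0, 1, 1, 0])
```

Each tuple in `curve` is (time in months, estimated probability of still being on treatment just after that time), dropping only at times where a discontinuation occurred.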

Relevance: 20.00%

Publisher:

Abstract:

The genetic characterization of unbalanced mixed stains remains an important area where improvement is imperative. In fact, with current methods for DNA analysis (Polymerase Chain Reaction with the SGM Plus™ multiplex kit), it is generally not possible to obtain a conventional autosomal DNA profile of the minor contributor if the ratio between the two contributors in a mixture is smaller than 1:10. This is because the major contributor's profile 'masks' that of the minor contributor. Besides known remedies to this problem, such as Y-STR analysis, a new compound genetic marker consisting of a Deletion/Insertion Polymorphism (DIP) linked to a Short Tandem Repeat (STR) polymorphism has recently been developed and proposed in the literature [1]. The present paper reports on the derivation of an approach for the probabilistic evaluation of DIP-STR profiling results obtained from unbalanced DNA mixtures. The procedure is based on object-oriented Bayesian networks (OOBNs) and uses the likelihood ratio as an expression of probative value. OOBNs are used here because they provide a clear description of the genotypic configuration observed for the mixed stain as well as for the various potential contributors (e.g., victim and suspect). These models also make it possible to depict the assumed relevance relationships and to perform the necessary probabilistic computations.
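Stripped of the network machinery, the probative value reduces to a likelihood ratio. The sketch below shows only that backbone for a single hypothetical DIP-STR marker under simple Hardy-Weinberg assumptions; the OOBNs described in the paper handle far richer dependency structures (mixture configuration, victim and suspect genotypes). The allele frequencies are invented for illustration.

```python
def genotype_freq(p, q=None):
    """Hardy-Weinberg genotype frequency: p^2 for a homozygote,
    2pq for a heterozygote."""
    return p * p if q is None else 2 * p * q

def likelihood_ratio(profile_freq):
    """LR for Hp: 'the suspect is the minor contributor' versus
    Hd: 'an unknown, unrelated person is'. The numerator is 1 (the
    suspect's DIP-STR genotype matches the minor component); the
    denominator is the genotype frequency in the population."""
    return 1.0 / profile_freq

# hypothetical DIP-STR allele frequencies for a heterozygous minor contributor
lr = likelihood_ratio(genotype_freq(0.05, 0.10))  # 1 / (2 * 0.05 * 0.10)
```

Rarer DIP-STR genotypes give a larger LR, i.e. stronger support for Hp over Hd.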

Relevance: 20.00%

Publisher:

Abstract:

In 2009, the World Health Organization (WHO) issued a new guideline that stratifies dengue-affected patients into severe dengue (SD) and non-severe dengue (NSD) (with or without warning signs). To evaluate the new recommendations, we completed a retrospective cross-sectional study of the dengue haemorrhagic fever (DHF) cases reported during an outbreak in 2011 in northeastern Brazil. We investigated 84 suspected DHF patients, including 45 (53.6%) males and 39 (46.4%) females. The ages of the patients ranged from five to 83 years, with a median of 29. According to the DHF/dengue shock syndrome classification, 53 (63.1%) patients were classified as having dengue fever and 31 (36.9%) as having DHF. According to the 2009 WHO classification, 32 (38.1%) patients were grouped as having NSD [4 (4.8%) without warning signs and 28 (33.3%) with warning signs] and 52 (61.9%) as having SD. The better performance of the revised classification in detecting severe clinical manifestations allows improved identification of patients with SD and may reduce deaths. The revised classification will not only facilitate effective screening and patient management, but will also enable the collection of standardised surveillance data for future epidemiological and clinical studies.

Relevance: 20.00%

Publisher:

Abstract:

A statistical method for classifying sags according to their origin, downstream or upstream of the recording point, is proposed in this work. The goal is to obtain, from the sag waveforms, a statistical model that characterises one type of sag and discriminates it from the other. This model is built on the basis of multi-way principal component analysis and is later used to project the available registers into a new space of lower dimension. A case base of diagnosed sags is thus built in the projection space. Finally, classification is done by comparing new sags against those existing in the case base. Similarity is defined in the projection space using a combination of distances to recover the nearest neighbours of the new sag; the method then assigns the origin of the new sag according to the origin of its neighbours.
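The pipeline (project registers into a lower-dimensional space, then vote among nearest neighbours in that space) can be sketched as follows. This is a simplification of the method described above: the paper uses multi-way PCA on waveform matrices and a combination of distances, whereas the sketch uses plain PCA via SVD, Euclidean distance, and synthetic data standing in for the two sag origins.

```python
import numpy as np

def pca_fit(X, k):
    """Fit a k-component PCA on the rows of X (one sag register per row)."""
    mean = X.mean(axis=0)
    # right singular vectors of the centred data are the principal directions
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(X, mean, components):
    """Project registers into the lower-dimensional space."""
    return (X - mean) @ components.T

def classify(z_new, Z_case_base, labels, n_neighbors=3):
    """Assign the origin of a new sag by majority vote of its nearest
    neighbours in the projection space (Euclidean distance)."""
    dist = np.linalg.norm(Z_case_base - z_new, axis=1)
    nearest = np.argsort(dist)[:n_neighbors]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# synthetic case base: two well-separated clusters stand in for the two origins
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (10, 6)), rng.normal(3.0, 0.1, (10, 6))])
labels = ['downstream'] * 10 + ['upstream'] * 10
mean, components = pca_fit(X, 2)
Z = project(X, mean, components)
pred = classify(project(np.full((1, 6), 3.0), mean, components)[0], Z, labels)
```

A query register near the second cluster lands among 'upstream' cases after projection and is labelled accordingly.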

Relevance: 20.00%

Publisher:

Abstract:

The log-ratio methodology makes available powerful tools for analyzing compositional data. Nevertheless, the use of this methodology is only possible for data sets without null values. Consequently, in data sets where zeros are present, a previous treatment becomes necessary. Recent advances in the treatment of compositional zeros have centered especially on zeros of a structural nature and on rounded zeros. These tools do not contemplate the particular case of count compositional data sets with null values. In this work we deal with "count zeros" and we introduce a treatment based on a mixed Bayesian-multiplicative estimation. We use the Dirichlet probability distribution as a prior and we estimate the posterior probabilities. Then we apply a multiplicative modification to the non-zero values. We present a case study where this new methodology is applied.
Key words: count data, multiplicative replacement, composition, log-ratio analysis
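A minimal version of the Bayesian-multiplicative treatment might look like this, assuming a uniform Dirichlet prior with total strength s (the value s = 0.5 is an illustrative choice, not necessarily the one used in the case study, and the count vector is hypothetical):

```python
def bayes_mult_replace(counts, s=0.5):
    """Bayesian-multiplicative replacement of count zeros.
    A uniform Dirichlet prior with total strength s gives each zero part
    its posterior-mean proportion; non-zero parts are then rescaled
    multiplicatively so the composition still sums to 1."""
    n = sum(counts)
    D = len(counts)
    t = 1.0 / D                                    # uniform prior weight per part
    zero_mass = sum((s * t) / (n + s) for c in counts if c == 0)
    comp = []
    for c in counts:
        if c == 0:
            comp.append((s * t) / (n + s))         # posterior mean for a zero count
        else:
            comp.append((c / n) * (1.0 - zero_mass))  # multiplicative adjustment
    return comp

# hypothetical 4-part count composition with one zero
comp = bayes_mult_replace([0, 2, 3, 5])
```

The zero part receives a small positive proportion, the non-zero parts keep their ratios to one another, and the result is a strictly positive composition on which log-ratio analysis can proceed.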

Relevance: 20.00%

Publisher:

Abstract:

BACKGROUND: Inherited ichthyoses belong to a large, clinically and etiologically heterogeneous group of mendelian disorders of cornification, typically involving the entire integument. Over recent years, much progress has been made in defining their molecular causes. However, there is no internationally accepted classification and terminology. OBJECTIVE: We sought to establish a consensus for the nomenclature and classification of inherited ichthyoses. METHODS: The classification project started at the First World Conference on Ichthyosis in 2007. A large international network of expert clinicians, skin pathologists, and geneticists entertained an interactive dialogue over 2 years, eventually leading to the First Ichthyosis Consensus Conference held in Sorèze, France, on January 23 and 24, 2009, where subcommittees on different issues proposed terminology that was debated until consensus was reached. RESULTS: It was agreed that, for now, the nosology should remain clinically based. "Syndromic" versus "nonsyndromic" forms provide a useful major subdivision. Several clinical terms and controversial disease names have been redefined: eg, the group caused by keratin mutations is referred to by the umbrella term "keratinopathic ichthyosis", under which are included epidermolytic ichthyosis, superficial epidermolytic ichthyosis, and ichthyosis Curth-Macklin. "Autosomal recessive congenital ichthyosis" is proposed as an umbrella term for the harlequin ichthyosis, lamellar ichthyosis, and congenital ichthyosiform erythroderma group. LIMITATIONS: As more becomes known about these diseases, modifications will be needed. CONCLUSION: We have achieved an international consensus for the classification of inherited ichthyosis that should be useful for all clinicians and can serve as a reference point for future research.

Relevance: 20.00%

Publisher:

Abstract:

The 2008 Data Fusion Contest organized by the IEEE Geoscience and Remote Sensing Data Fusion Technical Committee deals with the classification of high-resolution hyperspectral data from an urban area. Unlike in previous editions of the contest, the goal was not only to identify the best algorithm but also to provide a collaborative effort: the decision fusion of the best individual algorithms aimed at further improving the classification performance, and the best algorithms were ranked according to their relative contribution to the decision fusion. This paper presents the five awarded algorithms and the conclusions of the contest, stressing the importance of decision fusion, dimension reduction, and supervised classification methods, such as neural networks and support vector machines.
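Decision fusion of the individual classifiers can be as simple as a (possibly weighted) majority vote over per-pixel labels. The sketch below is a generic illustration of that idea, not the contest's actual fusion scheme; the class labels and the optional accuracy weights are hypothetical.

```python
from collections import Counter

def decision_fusion(predictions, weights=None):
    """Fuse per-pixel class labels from several classifiers by weighted
    majority vote; weights could, e.g., reflect each algorithm's
    individual accuracy (here they default to equal votes)."""
    weights = weights or [1.0] * len(predictions)
    fused = []
    for pixel_labels in zip(*predictions):
        score = Counter()
        for label, w in zip(pixel_labels, weights):
            score[label] += w
        fused.append(score.most_common(1)[0][0])
    return fused

# hypothetical labels from three classifiers over three pixels
predictions = [['road', 'grass', 'roof'],
               ['road', 'grass', 'grass'],
               ['tree', 'grass', 'roof']]
fused = decision_fusion(predictions)
```

Pixels where the classifiers disagree are resolved by the heaviest vote, which is why fusing several good but different algorithms can outperform any single one.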

Relevance: 20.00%

Publisher:

Abstract:

Here we present the first in a series of articles about the ecology of immature stages of anophelines in the Brazilian Yanomami area. We propose a new larval habitat classification and a new larval sampling methodology. We also report some preliminary results illustrating the applicability of the methodology based on data collected in the Brazilian Amazon rainforest in a longitudinal study of two remote Yanomami communities, Parafuri and Toototobi. In these areas, we mapped and classified 112 natural breeding habitats located in low-order river systems based on their association with river flood pulses, seasonality and exposure to sun. Our classification rendered seven types of larval habitats: lakes associated with the river, which are subdivided into oxbow lakes and nonoxbow lakes, flooded areas associated with the river, flooded areas not associated with the river, rainfall pools, small forest streams, medium forest streams and rivers. The methodology for larval sampling was based on the accurate quantification of the effective breeding area, taking into account the area of the perimeter and subtypes of microenvironments present per larval habitat type using a laser range finder and a small portable inflatable boat. The new classification and new sampling methodology proposed herein may be useful in vector control programs.

Relevance: 20.00%

Publisher:

Abstract:

BACKGROUND: The objective of this research was to evaluate data from a randomized clinical trial that tested injectable diacetylmorphine (DAM) and oral methadone (MMT) for substitution treatment, using a multi-domain dichotomous index with a Bayesian approach. METHODS: Sixty-two long-term, socially excluded heroin injectors not benefiting from available treatments were randomized to receive either DAM or MMT for 9 months in Granada, Spain. Forty-four participants completed the trial, and end-of-study data were obtained for 50. Participants were determined to be responders or non-responders using a multi-domain outcome index accounting for their physical and mental health and psychosocial integration, as used in a previous trial. Data were analyzed with Bayesian methods, using information from a similar study conducted in The Netherlands to select a priori distributions. Updating this prior information with the data from the present study yielded the distribution of the difference in response rates, which was used to build credibility intervals and compute relevant probabilities. RESULTS: In the experimental group (n = 27), the rate of responders to treatment was 70.4% (95% CI 53.2-87.6), and in the control group (n = 23) it was 34.8% (95% CI 15.3-54.3). The probability of success in the experimental group, computed from the a posteriori distributions, remained higher after a proper sensitivity analysis. Almost the whole distribution of the difference in rates (diacetylmorphine minus methadone) was located to the right of zero, indicating the superiority of the experimental treatment. CONCLUSION: The present analysis suggests a clinical superiority of injectable diacetylmorphine over oral methadone in the treatment of severely affected heroin injectors not benefiting sufficiently from the available treatments. TRIAL REGISTRATION: Current Controlled Trials ISRCTN52023186.
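The Bayesian comparison of response rates can be sketched with conjugate Beta priors. The response counts below (19/27 and 8/23) are those implied by the reported percentages; the Beta(2, 2) priors are hypothetical stand-ins for the information borrowed from the Dutch trial, and superiority is assessed by Monte Carlo sampling from the two posteriors.

```python
import random

def beta_posterior(successes, n, a_prior, b_prior):
    """Parameters of the Beta posterior after observing `successes`
    responders out of n participants."""
    return a_prior + successes, b_prior + n - successes

def prob_superiority(a1, b1, a2, b2, draws=100_000, seed=1):
    """Monte Carlo estimate of P(rate_1 > rate_2) from two Beta posteriors."""
    rng = random.Random(seed)
    wins = sum(rng.betavariate(a1, b1) > rng.betavariate(a2, b2)
               for _ in range(draws))
    return wins / draws

# response counts implied by the reported rates: 19/27 (DAM) and 8/23 (MMT);
# Beta(2, 2) priors are hypothetical stand-ins for the Dutch-trial information
a_dam, b_dam = beta_posterior(19, 27, 2, 2)
a_mmt, b_mmt = beta_posterior(8, 23, 2, 2)
p_superior = prob_superiority(a_dam, b_dam, a_mmt, b_mmt)
```

With these inputs nearly all of the posterior mass of the rate difference lies above zero, which is the sense in which the abstract reports superiority of the experimental treatment.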

Relevance: 20.00%

Publisher:

Abstract:

Colorectal cancer is a heterogeneous disease that manifests through diverse clinical scenarios. For many years, our knowledge of the variability of colorectal tumors was limited to histopathological analysis, from which generic classifications associated with different clinical expectations are derived. Currently, however, we are beginning to understand that beneath the marked pathological and clinical variability of these tumors lies strong genetic and biological heterogeneity. Thus, with the increasing information available on inter-tumor and intra-tumor heterogeneity, the classical pathological approach is being displaced in favor of novel molecular classifications. In the present article, we summarize the most relevant proposals for molecular classification obtained from the analysis of colorectal tumors using powerful high-throughput techniques and devices. We also discuss the role that cancer systems biology may play in the integration and interpretation of the large amount of data generated, and the challenges to be addressed in the future development of precision oncology. In addition, we review the current state of implementation of these novel tools in the pathology laboratory and in clinical practice.