986 results for Naïve Bayesian Classification
Abstract:
This paper presents 3-D brain tissue classification schemes using three recent, promising energy minimization methods for Markov random fields: graph cuts, loopy belief propagation and tree-reweighted message passing. The classification is performed using the well-known finite Gaussian mixture Markov random field model. Results from the above methods are compared with the widely used iterated conditional modes algorithm. The evaluation is performed on a dataset containing simulated T1-weighted MR brain volumes with varying noise and intensity non-uniformities. The comparisons are performed in terms of energies as well as against ground-truth segmentations, using various quantitative metrics.
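The baseline in this comparison, iterated conditional modes, greedily reassigns each site to the label that minimizes its local energy. A minimal sketch on a 1-D chain with a Potts smoothness term (the paper's model is 3-D and Gaussian-mixture based; the data costs below are illustrative):

```python
def icm(labels_init, unary, beta, iters=10):
    """Iterated conditional modes on a 1-D chain MRF with a Potts prior.

    unary[i][k] is the data cost of label k at site i; beta penalises
    neighbouring sites that disagree. Each site greedily takes the label
    minimising its local energy until no site changes.
    """
    labels = list(labels_init)
    n, num_labels = len(unary), len(unary[0])
    for _ in range(iters):
        changed = False
        for i in range(n):
            def local_cost(k):
                c = unary[i][k]
                if i > 0:
                    c += beta * (k != labels[i - 1])
                if i < n - 1:
                    c += beta * (k != labels[i + 1])
                return c
            best = min(range(num_labels), key=local_cost)
            if best != labels[i]:
                labels[i] = best
                changed = True
        if not changed:
            break
    return labels

# Noisy binary "tissue" labels: one spurious 1 at position 3
obs = [0, 0, 0, 1, 0, 0, 1, 1, 1, 1]
unary = [[0 if k == o else 1 for k in (0, 1)] for o in obs]
labels_out = icm(obs, unary, beta=0.8)
print(labels_out)  # → [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]: the isolated flip is smoothed away
```

ICM is fast but only finds a local minimum of the energy, which is why the paper benchmarks it against graph cuts, loopy belief propagation and tree-reweighted message passing.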
Abstract:
INTRODUCTION: Dolutegravir (DTG) 50 mg once daily was superior to darunavir/ritonavir (DRV/r) 800 mg/100 mg once daily through Week 48, with 90% vs. 83% of participants achieving HIV RNA <50 c/mL (p=0.025) [1]. We present data through Week 96. MATERIAL AND METHODS: FLAMINGO is a multicentre, randomized, open-label, Phase IIIb non-inferiority study in which HIV-1-positive, ART-naïve adults with HIV-1 RNA ≥1000 c/mL and no evidence of viral resistance were randomized 1:1 to receive DTG or DRV/r, with investigator-selected backbone NRTIs (TDF/FTC or ABC/3TC). Participants were stratified by screening HIV-1 RNA (≤100K or >100K c/mL) and NRTI backbone. RESULTS: A total of 484 adults were randomized and treated; 25% had baseline HIV RNA >100K c/mL. At Week 96, the proportion of participants with HIV RNA <50 c/mL was 80% in the DTG arm vs. 68% in the DRV/r arm (adjusted difference 12.4%; 95% CI 4.7, 20.2%; p=0.002). Secondary analyses supported the primary results: per-protocol (DTG 83% vs. DRV/r 70%; difference 12.9%, 95% CI 5.3, 20.6) and treatment-related discontinuation = failure (98% vs. 95%; difference 3.2%, 95% CI -0.3, 6.7). Overall virologic non-response (DTG 8%; DRV/r 12%) and non-response due to other reasons (DTG 12%; DRV/r 21%) occurred less frequently on DTG. As at Week 48, the difference between arms was most pronounced in participants with high baseline viral load (82% vs. 52% response through Week 96) and in the TDF/FTC stratum (79% vs. 64%); consistent responses were seen in the ABC/3TC stratum (82% vs. 75%). Six participants (DTG 2, none post-Week 48; DRV/r 4, two post-Week 48) experienced protocol-defined virologic failure (PDVF; confirmed viral load ≥200 c/mL on or after Week 24); none had treatment-emergent resistance to study drugs. The most frequent drug-related adverse events (AEs) were diarrhoea, nausea and headache, with diarrhoea significantly more common on DRV/r (24%) than DTG (10%). Significantly more participants had Grade ≥2 fasting LDL toxicities on DRV/r (22%) vs. DTG (7%), p<0.001; mean changes in creatinine for DTG (~0.18 mg/dL) observed at Week 2 were stable through Week 96. CONCLUSIONS: Once-daily DTG was superior to once-daily DRV/r in treatment-naïve HIV-1-positive individuals, with no evidence of emergent resistance to DTG in virologic failure and relatively similar safety profiles for DTG and DRV/r through 96 weeks.
Abstract:
Introduction: Quantitative measures of the degree of lumbar spinal stenosis (LSS), such as the antero-posterior diameter of the canal or the dural sac cross-sectional area, vary widely and do not correlate with clinical symptoms or results of surgical decompression. In an effort to improve the quantification of stenosis, we have developed a grading system based on the morphology of the dural sac and its contents as seen on T2 axial images. The grading comprises seven categories ranging from normal to the most severe stenosis and takes into account the ratio of rootlet/CSF content. Material and methods: Fifty T2 axial MRI images taken at disc level from twenty-seven symptomatic lumbar spinal stenosis patients who underwent decompressive surgery were classified into seven categories by five observers and reclassified 2 weeks later by the same investigators. Intra- and inter-observer reliability of the classification were assessed using Cohen's and Fleiss' kappa statistics, respectively. Results: Generally, the morphology grading system itself was well adopted by the observers. Its successful application is strongly influenced by the identification of the dural sac. The average intra-observer Cohen's kappa was 0.53 ± 0.2. The inter-observer Fleiss' kappa was 0.38 ± 0.02 in the first rating and 0.3 ± 0.03 in the second rating repeated after two weeks. Discussion: In this attempt, the teaching of the observers was limited to an introduction to the general idea of the morphology grading system and one example MRI image per category. Identifying the dimensions of the dural sac may be difficult in the absence of a complete T1/T2 MRI image series, as was the case here. The similarity of the CSF to possibly present fat on T2 images was the main reason for mismatches in the assignment of cases to a category. The Fleiss kappa values of the five observers indicate fair agreement, and the proposed morphology grading system is promising.
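Cohen's kappa, used above for intra-observer reliability, compares the observed agreement between two ratings with the agreement expected by chance given each rater's marginal grade frequencies. A minimal sketch with made-up grade labels (the ten ratings below are illustrative, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)  # marginal grade counts per rater
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two ratings of 10 images into hypothetical grades A-D
a = ["A", "A", "B", "B", "C", "C", "D", "D", "A", "B"]
b = ["A", "B", "B", "B", "C", "D", "D", "D", "A", "A"]
print(round(cohens_kappa(a, b), 3))  # → 0.595
```

Fleiss' kappa generalizes the same idea to more than two raters, which is why it is used for the five-observer inter-observer comparison.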
Abstract:
The interpretation of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) is based on a 4-factor model, which is only partially compatible with the mainstream Cattell-Horn-Carroll (CHC) model of intelligence measurement. The structure of cognitive batteries is frequently analyzed via exploratory factor analysis and/or confirmatory factor analysis. With classical confirmatory factor analysis, almost all cross-loadings between latent variables and measures are fixed to zero in order to allow the model to be identified. However, inappropriate zero cross-loadings can contribute to poor model fit, distorted factors, and biased factor correlations; most importantly, they do not necessarily faithfully reflect theory. To deal with these methodological and theoretical limitations, we used a new statistical approach, Bayesian structural equation modeling (BSEM), with a sample of 249 French-speaking Swiss children (8-12 years). With BSEM, zero-fixed cross-loadings between latent variables and measures are replaced by approximate zeros based on informative, small-variance priors. Results indicated that a direct hierarchical CHC-based model with 5 factors plus a general intelligence factor represented the structure of the WISC-IV better than the 4-factor structure and the higher-order models did. Because the direct hierarchical CHC model was more adequate, we conclude that the general factor should be considered a breadth factor rather than a superordinate factor. Because we could estimate the influence of each latent variable on the 15 subtest scores, BSEM improved both our understanding of the structure of intelligence tests and the clinical interpretation of the subtest scores.
Abstract:
This paper analyses and discusses arguments that emerge from a recent discussion about the proper assessment of the evidential value of correspondences observed between the characteristics of a crime stain and those of a sample from a suspect when (i) this latter individual is found as a result of a database search and (ii) remaining database members are excluded as potential sources (because of different analytical characteristics). Using a graphical probability approach (i.e., Bayesian networks), the paper here intends to clarify that there is no need to (i) introduce a correction factor equal to the size of the searched database (i.e., to reduce a likelihood ratio), nor to (ii) adopt a propositional level not directly related to the suspect matching the crime stain (i.e., a proposition of the kind 'some person in (outside) the database is the source of the crime stain' rather than 'the suspect (some other person) is the source of the crime stain'). The present research thus confirms existing literature on the topic that has repeatedly demonstrated that the latter two requirements (i) and (ii) should not be a cause of concern.
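The core point—that a database search which excludes the other members does not weaken the match evidence—can be illustrated numerically. A minimal sketch under simplifying assumptions (a uniform prior over a hypothetical population of potential sources and a made-up random-match probability; this is not the paper's Bayesian-network model):

```python
def posterior_prob_suspect(n, d, gamma):
    """Probability that the matching suspect is the source of the stain.

    n:     size of the population of potential sources (uniform prior)
    d:     size of the searched database (the suspect plus d-1 excluded members)
    gamma: random-match probability of the profile
    """
    # Likelihood of the evidence under each hypothesis about the source:
    #   suspect is the source          -> the match is certain
    #   an excluded member is source   -> contradicts the exclusion, likelihood 0
    #   an untyped outsider is source  -> suspect matches by chance, prob gamma
    num = 1.0 * (1 / n)
    den = num + (n - d) * gamma * (1 / n)
    return num / den

gamma = 1e-6
n = 1_000_000
p_no_search = posterior_prob_suspect(n, d=1, gamma=gamma)        # suspect typed alone
p_after_search = posterior_prob_suspect(n, d=100_000, gamma=gamma)
print(p_no_search, p_after_search)  # the second is larger: exclusions strengthen the case
```

Excluding database members removes alternative sources, so the posterior probability rises rather than falls—consistent with the paper's conclusion that no correction factor equal to the database size is needed.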
Abstract:
In this paper we propose a Pyramidal Classification Algorithm which, together with an appropriate aggregation index, produces an indexed pseudo-hierarchy (in the strict sense) without inversions or crossings. The computer implementation of the algorithm makes it possible to carry out simulation tests by Monte Carlo methods in order to study the efficiency and sensitivity of the pyramidal methods of the Maximum, Minimum and UPGMA. The results shown in this paper may help in choosing between the three classification methods proposed, in order to obtain the classification that best fits the original structure of the population, provided we have a priori information concerning this structure.
Abstract:
We provide methods for forecasting variables and predicting turning points in panel Bayesian VARs. We specify a flexible model which accounts for both interdependencies in the cross-section and time variations in the parameters. Posterior distributions for the parameters are obtained for a particular type of diffuse prior, for Minnesota-type priors and for hierarchical priors. Formulas for multistep, multiunit point and average forecasts are provided. An application to the problem of forecasting the growth rate of output and predicting turning points in the G-7 illustrates the approach. A comparison with alternative forecasting methods is also provided.
Abstract:
Different types of cell death are often defined by morphological criteria, without a clear reference to precise biochemical mechanisms. The Nomenclature Committee on Cell Death (NCCD) proposes unified criteria for the definition of cell death and of its different morphologies, while formulating several caveats against the misuse of words and concepts that slow down progress in the area of cell death research. Authors, reviewers and editors of scientific periodicals are invited to abandon expressions like 'percentage apoptosis' and to replace them with more accurate descriptions of the biochemical and cellular parameters that are actually measured. Moreover, at the present stage, it should be accepted that caspase-independent mechanisms can cooperate with (or substitute for) caspases in the execution of lethal signaling pathways and that 'autophagic cell death' is a type of cell death occurring together with (but not necessarily by) autophagic vacuolization. This study details the 2009 recommendations of the NCCD on the use of cell death-related terminology including 'entosis', 'mitotic catastrophe', 'necrosis', 'necroptosis' and 'pyroptosis'.
Abstract:
To be diagnostically useful, structural MRI must reliably distinguish Alzheimer's disease (AD) from normal ageing in individual scans. Recent advances in statistical learning theory have led to the application of support vector machines to MRI for detection of a variety of disease states. The aims of this study were to assess how successfully support vector machines assigned individual diagnoses and to determine whether datasets combined from multiple scanners and different centres could be used to obtain effective classification of scans. We used linear support vector machines to classify the grey matter segment of T1-weighted MR scans from pathologically proven AD patients and cognitively normal elderly individuals obtained from two centres with different scanning equipment. Because the clinical diagnosis of mild AD is difficult, we also tested the ability of support vector machines to differentiate control scans from those of patients without post-mortem confirmation. Finally, we sought to use these methods to differentiate scans of patients with AD from those of patients with frontotemporal lobar degeneration. Up to 96% of pathologically verified AD patients were correctly classified using whole-brain images. Data from different centres were successfully combined, achieving results comparable to the separate analyses. Importantly, data from one centre could be used to train a support vector machine to accurately differentiate AD and normal-ageing scans obtained from another centre with different subjects and different scanner equipment. Patients with mild, clinically probable AD and age/sex-matched controls were correctly separated in 89% of cases, which is compatible with published diagnosis rates in the best clinical centres. This method correctly assigned 89% of patients with a post-mortem-confirmed diagnosis of either AD or frontotemporal lobar degeneration to their respective group.
Our study leads to three conclusions. Firstly, support vector machines successfully separate patients with AD from healthy ageing subjects. Secondly, they perform well in the differential diagnosis of two different forms of dementia. Thirdly, the method is robust and can be generalized across different centres. This suggests an important role for computer-based diagnostic image analysis in clinical practice.
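As a toy illustration of the classification step (not the study's voxel-wise pipeline, which operates on full grey-matter segments): a linear SVM trained by stochastic sub-gradient descent on the regularised hinge loss, separating two made-up 2-D clusters that stand in for grey-matter features.

```python
import random

def train_linear_svm(data, labels, lam=0.01, epochs=200, lr=0.1):
    """Toy linear SVM: stochastic sub-gradient descent on the hinge loss.

    data: list of feature vectors; labels: +1 (patient) / -1 (control).
    """
    w = [0.0] * len(data[0])
    b = 0.0
    rng = random.Random(0)
    idx = list(range(len(data)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            x, y = data[i], labels[i]
            margin = y * (sum(wj * xj for wj, xj in zip(w, x)) + b)
            if margin < 1:   # inside the margin: hinge sub-gradient step
                w = [wj + lr * (y * xj - lam * wj) for wj, xj in zip(w, x)]
                b += lr * y
            else:            # outside the margin: only regularisation shrinks w
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Two well-separated 2-D clusters standing in for grey-matter features
patients = [[2.0, 2.2], [2.5, 1.8], [1.8, 2.5]]
controls = [[-2.0, -1.9], [-2.4, -2.2], [-1.7, -2.6]]
X = patients + controls
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
preds = [predict(w, b, x) for x in X]
print(preds)  # → [1, 1, 1, -1, -1, -1]
```

In the study the feature vector for each scan is the whole grey-matter segment, so the learned weight vector itself can be inspected as a discrimination map.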
Abstract:
Saving our science from ourselves: the plight of biological classification. Biological classification (nomenclature, taxonomy, and systematics) is being sold short. The desire for new technologies and for faster and cheaper taxonomic descriptions, identifications, and revisions is symptomatic of a lack of appreciation and understanding of classification. The problems of gadget-driven science, a lack of best practice and the inability to accept classification as a descriptive and empirical science are discussed. The worst-case scenario is a future in which classifications are purely artificial and uninformative.
Abstract:
The classical binary classification problem is investigated when it is known in advance that the posterior probability function (or regression function) belongs to some class of functions. We introduce and analyze a method which effectively exploits this knowledge. The method is based on minimizing the empirical risk over a carefully selected "skeleton" of the class of regression functions. The skeleton is a covering of the class based on a data-dependent metric, especially fitted for classification. A new scale-sensitive dimension is introduced which is more useful for the studied classification problem than other, previously defined, dimension measures. This fact is demonstrated by performance bounds for the skeleton estimate in terms of the new dimension.
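A minimal sketch of the skeleton estimate's core step—empirical risk minimization over a finite set of candidate classifiers. The skeleton in the paper is a data-dependent covering of the regression class; the threshold rules below are a hypothetical stand-in:

```python
def empirical_risk(classifier, sample):
    """Fraction of labelled points the classifier gets wrong (0-1 loss)."""
    return sum(classifier(x) != y for x, y in sample) / len(sample)

def skeleton_estimate(skeleton, sample):
    """Pick the skeleton member with the smallest empirical risk."""
    return min(skeleton, key=lambda f: empirical_risk(f, sample))

# Hypothetical finite skeleton: threshold classifiers on the real line
skeleton = [lambda x, t=t: 1 if x >= t else 0 for t in [0.0, 0.5, 1.0, 1.5]]
sample = [(0.2, 0), (0.4, 0), (1.1, 1), (1.3, 1), (1.6, 1)]
best = skeleton_estimate(skeleton, sample)
print(empirical_risk(best, sample))  # → 0.0
```

Because the skeleton is finite, minimizing the empirical risk over it is tractable, and the paper's bounds control how far the chosen member's true risk can be from the best in the original class.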
Abstract:
This paper proposes a common and tractable framework for analyzing different definitions of fixed and random effects in a constant-slope, variable-intercept model. It is shown that, regardless of whether effects (i) are treated as parameters or as an error term, (ii) are estimated in different stages of a hierarchical model, or (iii) are allowed to be correlated with the regressors, when the same information on effects is introduced into all estimation methods, the resulting slope estimator is the same across methods. If different methods produce different results, it is ultimately because different information is being used for each method.