12 results for receiver operating characteristic curve
in CentAUR: Central Archive University of Reading - UK
Abstract:
Background Screening instruments for autistic-spectrum disorders have not been compared in the same sample. Aims To compare the Social Communication Questionnaire (SCQ), the Social Responsiveness Scale (SRS) and the Children's Communication Checklist (CCC). Method Screen and diagnostic assessments on 119 children between 9 and 13 years of age with special educational needs, with and without autistic-spectrum disorders, were weighted to estimate screen characteristics for a realistic target population. Results The SCQ performed best (area under receiver operating characteristic curve (AUC) = 0.90; sensitivity 0.86; specificity 0.78). The SRS had a lower AUC (0.77) with high sensitivity (0.78) and moderate specificity (0.67). The CCC had a high sensitivity but lower specificity (AUC = 0.79; sensitivity 0.93; specificity 0.46). The AUCs of the SRS and CCC were lower for children with IQ < 70. Behaviour problems reduced specificity for all three instruments. Conclusions The SCQ, SRS and CCC showed strong to moderate ability to identify autistic-spectrum disorder in this at-risk sample of school-age children with special educational needs.
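As a rough illustration of how the screen characteristics quoted above (AUC, sensitivity, specificity) are computed from screen scores and diagnostic outcomes, here is a minimal Python sketch using scikit-learn; the scores, labels and cut-off are invented for illustration and do not reproduce the study's data or weighting scheme.

```python
# Minimal sketch with invented screen scores and diagnostic labels.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)              # 1 = diagnosed ASD, 0 = no ASD (invented)
score = rng.normal(10 + 5 * y, 4)             # invented screen total scores

auc = roc_auc_score(y, score)                 # area under the ROC curve

cutoff = 15                                   # illustrative screen cut-off
positive = score >= cutoff
sensitivity = positive[y == 1].mean()         # true-positive rate at the cut-off
specificity = (~positive)[y == 0].mean()      # true-negative rate at the cut-off
print(f"AUC={auc:.2f} sens={sensitivity:.2f} spec={specificity:.2f}")
```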
Abstract:
Aim This paper presents Convergence Insufficiency Symptom Survey (CISS) and orthoptic findings in a sample of typical young adults who considered themselves to have normal eyesight apart from weak spectacles. Methods The CISS questionnaire was administered, followed by a full orthoptic evaluation, to 167 university undergraduate and postgraduate students during the recruitment phase of another study. The primary criterion for recruitment to this study was that participants 'felt they had normal eyesight'. A CISS score of ≥21 was used to define 'significant' symptoms, and convergence insufficiency (CI) was defined as convergence ≥8 cm from the nose with a fusion range <15Δ base-out with small or no exophoria. Results The group mean CISS score was 15.4. In all, 17 (10%) of the participants were diagnosed with CI, but 11 (65%) of these did not have significant symptoms. 41 (25%) participants returned a 'high' CISS score of ≥21, but only 6 (15%) of these had genuine CI. Sensitivity of the CISS to detect CI in this asymptomatic sample was 38%; specificity 77%; positive predictive value 15%; and negative predictive value 92%. The area under a receiver operating characteristic curve was 0.596 (95% CI 0.46 to 0.73). Conclusions 'Visual symptoms' are common in young adults, but often not related to any clinical defect, while true CI may be asymptomatic. This study suggests that screening for CI is not indicated.
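For reference, the four screening statistics reported above all follow from the 2x2 cross-tabulation of test result against true CI status; the small Python sketch below spells out the arithmetic on invented counts (not the study's table).

```python
# Invented 2x2 counts: rows = screen positive/negative, columns = CI present/absent.
TP, FP = 30, 20     # screen positive: with CI / without CI (illustrative numbers)
FN, TN = 10, 140    # screen negative: with CI / without CI (illustrative numbers)

sensitivity = TP / (TP + FN)    # proportion of true CI cases flagged by the screen
specificity = TN / (TN + FP)    # proportion of non-CI participants correctly passed
ppv = TP / (TP + FP)            # probability a positive screen reflects genuine CI
npv = TN / (TN + FN)            # probability a negative screen reflects no CI
print(sensitivity, specificity, ppv, npv)
```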
Abstract:
Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare as different data sets and methodology were used for evaluation. In addition, it is unclear how the algorithms would perform on previously unseen data, and thus, how they would perform in clinical practice when there is no real opportunity to adapt the algorithm to the data at hand. To address these comparability, generalizability and clinical applicability issues, we organized a grand challenge that aimed to objectively compare algorithms based on a clinically representative multi-center data set. Using clinical practice as the starting point, the goal was to reproduce the clinical diagnosis. Therefore, we evaluated algorithms for multi-class classification of three diagnostic groups: patients with probable Alzheimer's disease, patients with mild cognitive impairment and healthy controls. The diagnosis based on clinical criteria was used as reference standard, as it was the best available reference despite its known limitations. For evaluation, a previously unseen test set was used consisting of 354 T1-weighted MRI scans with the diagnoses blinded. Fifteen research teams participated with a total of 29 algorithms. The algorithms were trained on a small training set (n = 30) and optionally on data from other sources (e.g., the Alzheimer's Disease Neuroimaging Initiative, the Australian Imaging Biomarkers and Lifestyle flagship study of aging). The best performing algorithm yielded an accuracy of 63.0% and an area under the receiver-operating-characteristic curve (AUC) of 78.8%. In general, the best performances were achieved using feature extraction based on voxel-based morphometry or a combination of features that included volume, cortical thickness, shape and intensity. The challenge is open for new submissions via the web-based framework: http://caddementia.grand-challenge.org.
Abstract:
Objective: This paper presents a detailed study of fractal-based methods for texture characterization of mammographic mass lesions and architectural distortion. The purpose of this study is to explore the use of fractal and lacunarity analysis for the characterization and classification of both tumor lesions and normal breast parenchyma in mammography. Materials and methods: We conducted comparative evaluations of five popular fractal dimension estimation methods for the characterization of the texture of mass lesions and architectural distortion. We applied the concept of lacunarity to the description of the spatial distribution of pixel intensities in mammographic images. These methods were tested with a set of 57 breast masses and 60 normal breast parenchyma regions (dataset1), and with another set of 19 architectural distortions and 41 normal breast parenchyma regions (dataset2). Support vector machines (SVM) were used as the pattern classification method for tumor classification. Results: Experimental results showed that the fractal dimension of regions of interest (ROIs) depicting mass lesions and architectural distortion was statistically significantly lower than that of normal breast parenchyma for all five methods. Receiver operating characteristic (ROC) analysis showed that the fractional Brownian motion (FBM) method generated the highest area under the ROC curve (A_z = 0.839 for dataset1 and 0.828 for dataset2) among the five methods for both datasets. Lacunarity analysis showed that ROIs depicting mass lesions and architectural distortion had higher lacunarity than ROIs depicting normal breast parenchyma. The combination of the FBM fractal dimension and lacunarity yielded higher A_z values (0.903 and 0.875, respectively) than either feature alone for both datasets. The application of the SVM further improved the performance of the fractal-based features in differentiating tumor lesions from normal breast parenchyma, yielding higher A_z values. Conclusion: The FBM texture model is the most appropriate model for characterizing mammographic images, because its self-affinity assumption provides a better approximation of the texture. Lacunarity is an effective counterpart measure to the fractal dimension for texture feature extraction in mammographic images. The classification results obtained in this work suggest that the SVM is an effective method with great potential for classification in mammographic image analysis.
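The abstract does not give implementation details, so the following Python sketch is only a toy analogue of the two texture measures: a box-counting fractal-dimension estimate (a common estimator, not necessarily one of the five compared in the paper) and gliding-box lacunarity, both applied to a random stand-in for a mammographic ROI.

```python
# Toy box-counting dimension and gliding-box lacunarity on a random patch.
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary mask by box counting."""
    n = mask.shape[0]
    counts = []
    for s in sizes:
        # count boxes of side s containing at least one foreground pixel
        c = sum(mask[i:i + s, j:j + s].any()
                for i in range(0, n, s) for j in range(0, n, s))
        counts.append(c)
    # slope of log(count) vs log(1/size) gives the dimension estimate
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

def gliding_box_lacunarity(img, box=8):
    """Lacunarity = second moment of box masses / squared first moment."""
    masses = np.array([img[i:i + box, j:j + box].sum()
                       for i in range(img.shape[0] - box + 1)
                       for j in range(img.shape[1] - box + 1)], dtype=float)
    return masses.var() / masses.mean() ** 2 + 1.0

rng = np.random.default_rng(1)
roi = rng.random((64, 64))               # stand-in for a mammographic ROI
print(box_counting_dimension(roi > 0.5))
print(gliding_box_lacunarity(roi))
```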
Abstract:
The notions of resolution and discrimination of probability forecasts are revisited. It is argued that the common concept underlying both resolution and discrimination is the dependence (in the sense of probability theory) of forecasts and observations. More specifically, a forecast has no resolution if and only if it has no discrimination, which in turn holds if and only if forecast and observation are stochastically independent. A statistical test for independence is thus also a test for no resolution and, at the same time, for no discrimination. The resolution term in the decomposition of the logarithmic scoring rule and the area under the Receiver Operating Characteristic curve are investigated in this light.
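In the simplest binary-forecast, binary-observation setting, the equivalence stated above can be checked directly: build the contingency table of forecasts against observations and apply any standard test of independence, for example a chi-square test. The Python sketch below uses invented data and scipy; it illustrates the idea rather than the specific tests developed in the paper.

```python
# Independence of forecast and observation checked on invented binary data.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)
obs = rng.integers(0, 2, size=500)             # event occurred (1) or not (0)
fcst = (obs + (rng.random(500) < 0.3)) % 2     # forecasts correlated with observations

# 2x2 contingency table: forecast category vs. observed category
table = np.zeros((2, 2), dtype=int)
for f, o in zip(fcst, obs):
    table[f, o] += 1

chi2, p, dof, _ = chi2_contingency(table)
# a small p-value indicates dependence, i.e. some resolution/discrimination
print(table, chi2, p)
```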
Abstract:
Scope: The use of biomarkers in the objective assessment of dietary intake is a high priority in nutrition research. The aim of this study was to examine pentadecanoic acid (C15:0) and heptadecanoic acid (C17:0) as biomarkers of dairy food intake. Methods and results: The data used in the present study were obtained as part of the Food4me Study. Estimates of C15:0 and C17:0 from dried blood spots and intakes of dairy from an FFQ were obtained from participants (n = 1,180) across 7 countries. Regression analyses were used to explore associations of the biomarkers with dairy intake levels, and receiver operating characteristic (ROC) analyses were used to evaluate the fatty acids' ability to discriminate intake levels. Significant positive associations were found between C15:0 and total intakes of high-fat dairy products. C15:0 showed good ability to distinguish between low and high consumers of high-fat dairy products. Conclusion: C15:0 can be used as a biomarker of high-fat dairy intake and of specific high-fat dairy products. Both C15:0 and C17:0 performed poorly for total dairy intake, highlighting the need for caution when using these fatty acids in epidemiological studies.
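A hedged sketch of the kind of ROC evaluation described above, using an invented fatty-acid concentration and an invented high/low consumer grouping (the Food4me variables and cut-offs are not reproduced here):

```python
# ROC evaluation of an invented biomarker separating low vs. high consumers.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
high = rng.integers(0, 2, size=300)                 # invented intake grouping
marker = rng.normal(0.20 + 0.05 * high, 0.04)       # invented C15:0-like concentration

auc = roc_auc_score(high, marker)                   # discrimination of the marker
fpr, tpr, thr = roc_curve(high, marker)
best = np.argmax(tpr - fpr)                         # cut-off maximizing Youden's J
print(f"AUC={auc:.2f}, cut-off={thr[best]:.3f}")
```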
Abstract:
Many kernel classifier construction algorithms adopt classification accuracy as the performance metric in model evaluation. Moreover, equal weighting is often applied to each data sample in parameter estimation. These modeling practices often become problematic if the data sets are imbalanced. We present a kernel classifier construction algorithm using orthogonal forward selection (OFS) in order to optimize model generalization for imbalanced two-class data sets. This kernel classifier identification algorithm is based on a new regularized orthogonal weighted least squares (ROWLS) estimator and a model selection criterion of maximal leave-one-out area under the curve (LOO-AUC) of the receiver operating characteristic (ROC). It is shown that, owing to the orthogonalization procedure, the LOO-AUC can be calculated via an analytic formula based on the new regularized orthogonal weighted least squares parameter estimator, without actually splitting the estimation data set. The proposed algorithm can achieve minimal computational expense via a set of forward recursive updating formulae when searching for model terms with maximal incremental LOO-AUC value. Numerical examples are used to demonstrate the efficacy of the algorithm.
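The analytic LOO-AUC formula and the ROWLS estimator are specific to the paper and not reproduced here; the sketch below only conveys the overall idea of forward selection driven by leave-one-out AUC, using an explicit (and much slower) LOO loop over a plain ridge-regularised least-squares kernel model on invented data.

```python
# Naive sketch: greedy forward selection of RBF kernel centres scored by an
# explicit leave-one-out AUC loop (the paper instead uses an analytic formula).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 80
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n) > 0.8).astype(float)  # imbalanced labels

def rbf(Xq, Xc, width=1.0):
    """RBF design matrix: one column per kernel centre in Xc."""
    d2 = ((Xq[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def loo_auc(centre_idx, ridge=1e-3):
    """AUC of leave-one-out decision values of a ridge least-squares model."""
    Xc = X[centre_idx]
    scores = np.empty(n)
    for i in range(n):
        tr = np.delete(np.arange(n), i)
        P = rbf(X[tr], Xc)
        w = np.linalg.solve(P.T @ P + ridge * np.eye(P.shape[1]), P.T @ y[tr])
        scores[i] = (rbf(X[i:i + 1], Xc) @ w).item()
    return roc_auc_score(y, scores)

selected, best = [], 0.0
for _ in range(5):                              # grow the model term by term
    auc, c = max((loo_auc(selected + [c]), c) for c in range(n) if c not in selected)
    if auc <= best:                             # stop once LOO-AUC stops improving
        break
    selected, best = selected + [c], auc
print(selected, round(best, 3))
```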
Abstract:
The estimation of prediction quality is important because without quality measures, it is difficult to determine the usefulness of a prediction. Currently, methods for ligand binding site residue prediction are assessed in the function prediction category of the biennial Critical Assessment of Techniques for Protein Structure Prediction (CASP) experiment, utilizing the Matthews Correlation Coefficient (MCC) and Binding-site Distance Test (BDT) metrics. However, the assessment of ligand binding site predictions using such metrics requires the availability of solved structures with bound ligands. Thus, we have developed a ligand binding site quality assessment tool, FunFOLDQA, which utilizes protein feature analysis to predict ligand binding site quality prior to the experimental solution of the protein structures and their ligand interactions. The FunFOLDQA feature scores were combined using simple linear combinations, multiple linear regression and a neural network. The neural network produced significantly better results for correlations to both the MCC and BDT scores, according to Kendall's τ, Spearman's ρ and Pearson's r correlation coefficients, when tested on both the CASP8 and CASP9 datasets. The neural network also produced the largest Area Under the Curve (AUC) score when Receiver Operating Characteristic (ROC) analysis was undertaken for the CASP8 dataset. Furthermore, the FunFOLDQA algorithm incorporating the neural network is shown to add value to FunFOLD when both methods are employed in combination. This results in a statistically significant improvement over all of the best server methods, the FunFOLD method (6.43%), and one of the top manual groups (FN293) tested on the CASP8 dataset. The FunFOLDQA method was also found to be competitive with the top server methods when tested on the CASP9 dataset. To the best of our knowledge, FunFOLDQA is the first attempt to develop a method that can be used to assess ligand binding site prediction quality in the absence of experimental data.
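For readers unfamiliar with the first of the two metrics, the Matthews Correlation Coefficient has the standard confusion-matrix definition below (the BDT metric is CASP-specific and not reproduced here); TP, TN, FP and FN count binding-site residues correctly or incorrectly predicted.

```latex
\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}
                    {\sqrt{(TP+FP)\,(TP+FN)\,(TN+FP)\,(TN+FN)}}
```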
Abstract:
We propose a new class of neurofuzzy construction algorithms with the aim of maximizing generalization capability specifically for imbalanced data classification problems, based on leave-one-out (LOO) cross-validation. The algorithms operate in two stages: first, an initial rule base is constructed by estimating a Gaussian mixture model with analysis-of-variance decomposition from the input data; second, joint weighted least squares parameter estimation and rule selection are carried out using an orthogonal forward subspace selection (OFSS) procedure. We show how different LOO-based rule selection criteria can be incorporated with OFSS, and advocate either maximizing the leave-one-out area under the curve of the receiver operating characteristic, or maximizing the leave-one-out F-measure if the data sets exhibit an imbalanced class distribution. Extensive comparative simulations illustrate the effectiveness of the proposed algorithms.
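The F-measure advocated above for imbalanced data is, in its usual balanced (F1) form, the harmonic mean of precision and recall; the abstract does not state a different weighting, so the standard definition is shown here.

```latex
F_1 = \frac{2\,\mathrm{precision}\cdot\mathrm{recall}}{\mathrm{precision}+\mathrm{recall}},
\qquad
\mathrm{precision} = \frac{TP}{TP+FP},\qquad
\mathrm{recall} = \frac{TP}{TP+FN}
```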
Abstract:
Forecasting atmospheric blocking is one of the main problems facing medium-range weather forecasters in the extratropics. The European Centre for Medium-Range Weather Forecasts (ECMWF) Ensemble Prediction System (EPS) provides an excellent basis for medium-range forecasting as it provides a number of different possible realizations of the meteorological future. This ensemble of forecasts attempts to account for uncertainties in both the initial conditions and the model formulation. Since 18 July 2000, routine output from the EPS has included the field of potential temperature on the potential vorticity (PV) = 2 PV units (PVU) surface, the dynamical tropopause. This has enabled the objective identification of blocking using an index based on the reversal of the meridional potential-temperature gradient. A year of EPS probability forecasts of Euro-Atlantic and Pacific blocking have been produced and are assessed in this paper, concentrating on the Euro-Atlantic sector. Standard verification techniques such as Brier scores, Relative Operating Characteristic (ROC) curves and reliability diagrams are used. It is shown that Euro-Atlantic sector-blocking forecasts are skilful relative to climatology out to 10 days, and are more skilful than the deterministic control forecast at all lead times. The EPS is also more skilful than a probabilistic version of this deterministic forecast, though the difference is smaller. In addition, it is shown that the onset of a sector-blocking episode is less well predicted than its decay. As the lead time increases, the probability forecasts tend towards a model climatology with slightly less blocking than is seen in the real atmosphere. This small under-forecasting bias in the blocking forecasts is possibly related to a westerly bias in the ECMWF model. Copyright © 2003 Royal Meteorological Society.
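As a minimal illustration of the first verification technique named above, the Python sketch below computes the Brier score of daily blocking-probability forecasts against binary observed blocking, together with the skill score relative to a climatological forecast; all numbers are invented rather than EPS output.

```python
# Invented daily blocking observations and probability forecasts.
import numpy as np

rng = np.random.default_rng(5)
obs = (rng.random(365) < 0.25).astype(float)                     # 1 = blocked day
prob = np.clip(0.25 + 0.4 * (obs - 0.25) + rng.normal(0, 0.15, 365), 0, 1)

bs = np.mean((prob - obs) ** 2)              # Brier score of the probability forecasts
bs_clim = np.mean((obs.mean() - obs) ** 2)   # reference forecast: climatological frequency
bss = 1.0 - bs / bs_clim                     # positive values indicate skill vs. climatology
print(round(bs, 3), round(bss, 3))
```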
Abstract:
Using the plausible model of activated carbon proposed by Harris and co-workers and grand canonical Monte Carlo simulations, we study the applicability of standard methods, widely used in adsorption science, for describing adsorption data on microporous carbons. Two carbon structures are studied: one with a narrow distribution of micropores up to 1 nm, and the other with micropores covering a wide range of porosity. For both structures, adsorption isotherms of noble gases (from Ne to Xe), carbon tetrachloride and benzene are simulated. The data obtained are considered in terms of Dubinin-Radushkevich plots. Moreover, for benzene and carbon tetrachloride the temperature invariance of the characteristic curve is also studied. We show that, using the simulated data, some empirical relationships obtained from experiment can be successfully recovered. Next we test the applicability of Dubinin-related models, including the Dubinin-Izotova, Dubinin-Radushkevich-Stoeckli and Jaroniec-Choma equations. The results obtained demonstrate the limits and applications of the models studied in the field of carbon porosity characterization.
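For context, the Dubinin-Radushkevich plots mentioned above are based on the textbook DR equation (quoted here in its standard form, not taken from the paper), which predicts that the logarithm of the amount adsorbed is linear in the square of the adsorption potential:

```latex
W = W_0 \exp\!\left[-\left(\frac{A}{\beta E_0}\right)^{2}\right],
\qquad A = RT \ln\frac{p_0}{p}
```

Here W is the volume adsorbed at relative pressure p/p_0, W_0 the limiting micropore volume, E_0 the characteristic adsorption energy and β the affinity coefficient; a DR plot of ln W against ln²(p_0/p) is therefore expected to be linear for a microporous adsorbent, and the temperature invariance of the characteristic curve refers to W(A) collapsing onto a single curve across temperatures.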
Abstract:
In this paper we present the capability of a new network of field mill sensors to monitor the atmospheric electric field at various locations in South America; we also show some early results. The main objective of the new network is to obtain the characteristic Universal Time diurnal curve of the atmospheric electric field in fair weather, known as the Carnegie curve. The Carnegie curve is closely related to the current sources flowing in the Global Atmospheric Electric Circuit, so another goal is to study this relationship on various time scales (transient, monthly, seasonal and annual). By operating this new network, we may also study departures of the Carnegie curve from its long-term average value related to various solar, geophysical and atmospheric phenomena, such as the solar cycle, solar flares and energetic charged particles, galactic cosmic rays, seismic activity and specific meteorological events. We then expect to gain a better understanding of the influence of these phenomena on the Global Atmospheric Electric Circuit and its time-varying behavior.