50 results for Associative Classifier
Abstract:
OBJECTIVE Our aim was to assess the diagnostic and predictive value of several quantitative EEG (qEEG) analysis methods in comatose patients. METHODS In 79 patients, coupling between EEG signals on the left-right (inter-hemispheric) axis and on the anterior-posterior (intra-hemispheric) axis was measured with four synchronization measures: relative delta power asymmetry, cross-correlation, symbolic mutual information, and transfer entropy directionality. Results were compared with the etiology of coma and clinical outcome. Using cross-validation, the predictive value of measure combinations was assessed with a Gaussian-mixture Bayes classifier. RESULTS Five of eight measures showed a statistically significant difference between patients grouped according to outcome; one measure revealed differences between patients grouped according to etiology. Interestingly, a high level of synchrony between the left and right hemispheres was associated with mortality in the intensive care unit, whereas higher synchrony between anterior and posterior brain regions was associated with survival. The combination with the best predictive value reached an area under the curve of 0.875 (for patients with postanoxic encephalopathy: 0.946). CONCLUSIONS EEG synchronization measures can contribute to clinical assessment and provide new approaches for understanding the pathophysiology of coma. SIGNIFICANCE Prognostication in coma remains a challenging task; qEEG could improve current multi-modal approaches.
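The classifier in the abstract above can be illustrated with a minimal sketch. The study fits mixtures of Gaussians per outcome class; for brevity, this toy version fits a single diagonal Gaussian per class and picks the class with the highest log posterior. The two synchrony features and all values are hypothetical stand-ins, not the paper's data.

```python
import math

def fit_gaussian_bayes(X, y):
    """Fit one diagonal Gaussian per class plus a log prior.
    (The paper fits mixtures of Gaussians; a single Gaussian
    per class is used here for brevity.)"""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        variances = [max(sum((v - m) ** 2 for v in col) / n, 1e-9)
                     for col, m in zip(zip(*rows), means)]
        model[c] = (math.log(n / len(y)), means, variances)
    return model

def predict(model, x):
    """Return the class with the highest log posterior."""
    def log_post(c):
        log_prior, means, variances = model[c]
        log_lik = sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
                      for xi, m, v in zip(x, means, variances))
        return log_prior + log_lik
    return max(model, key=log_post)

# Hypothetical synchrony features per patient:
# [left-right coupling, anterior-posterior coupling]
X = [[0.9, 0.2], [0.8, 0.1], [0.2, 0.8], [0.3, 0.9]]
y = ["poor", "poor", "good", "good"]
m = fit_gaussian_bayes(X, y)
print(predict(m, [0.85, 0.15]))  # → poor
```

A full mixture-of-Gaussians version would replace the single Gaussian per class with an EM-fitted mixture (e.g. scikit-learn's GaussianMixture), keeping the same maximum-posterior decision rule.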
Abstract:
We present observations of total cloud cover and cloud type classification results from a sky camera network comprising four stations in Switzerland. In a comprehensive intercomparison study, records of total cloud cover from the sky camera, long-wave radiation observations, Meteosat, ceilometer, and visual observations were compared. Total cloud cover from the sky camera was within ±1 okta of the other methods in 65–85% of cases. The sky camera overestimates cloudiness with respect to the other automatic techniques on average by up to 1.1 ± 2.8 oktas but underestimates it by 0.8 ± 1.9 oktas compared to the human observer. However, the bias depends on the cloudiness and therefore needs to be considered when records from various observational techniques are homogenized. Cloud type classification was conducted using the k-Nearest Neighbor classifier in combination with a set of color and textural features. In addition, a radiative feature was introduced, which improved the discrimination by up to 10%. The performance of the algorithm depends mainly on the atmospheric conditions, site-specific characteristics, the randomness of the selected images, and possible visual misclassifications: the mean success rate was 80–90% when an image contained only a single cloud class, but dropped to 50–70% when the test images were selected entirely at random and multiple cloud classes occurred in the images.
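The k-Nearest Neighbor step described above can be sketched minimally: each image is reduced to a feature vector and labelled by majority vote among its k closest training vectors. The two features and all values below are hypothetical stand-ins for the paper's color and textural features.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify by majority vote among the k nearest training
    feature vectors (Euclidean distance)."""
    neighbors = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical per-image features: [mean red/blue ratio, texture energy]
train = [
    ([0.70, 0.10], "cirrus"),
    ([0.72, 0.12], "cirrus"),
    ([0.95, 0.60], "cumulus"),
    ([0.93, 0.55], "cumulus"),
    ([0.99, 0.05], "clear sky"),
]
print(knn_classify(train, [0.94, 0.58], k=3))  # → cumulus
```

The radiative feature mentioned in the abstract would simply be a third component appended to each feature vector; k-NN needs no retraining, only consistent feature scaling.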
Abstract:
Synesthesia is a condition where presentation of one perceptual class consistently evokes additional experiences in different perceptual categories. Synesthesia is widely considered a congenital condition, although an alternative view is that it is underpinned by repeated exposure to combined perceptual features at key developmental stages. Here we explore the potential for repeated associative learning to shape and engender synesthetic experiences. Non-synesthetic adult participants engaged in an extensive training regime that involved adaptive memory and reading tasks, designed to reinforce 13 specific letter-color associations. Following training, subjects exhibited a range of standard behavioral and physiological markers for grapheme-color synesthesia; crucially, most also described perceiving color experiences for achromatic letters, inside and outside the lab, where such experiences are usually considered the hallmark of genuine synesthetes. Collectively our results are consistent with developmental accounts of synesthesia and illuminate a previously unsuspected potential for new learning to shape perceptual experience, even in adulthood.
Abstract:
PURPOSE: To differentiate diabetic macular edema (DME) from pseudophakic cystoid macular edema (PCME) based solely on spectral-domain optical coherence tomography (SD-OCT). METHODS: This cross-sectional study included 134 participants: 49 with PCME, 60 with DME, and 25 with diabetic retinopathy (DR) and ME after cataract surgery. First, two unmasked experts classified the 25 DR patients after cataract surgery as DME, PCME, or mixed pattern based on SD-OCT and color fundus photography. Then all 134 patients were divided into two datasets and graded by two masked readers according to a standardized reading protocol. The accuracy of the masked readers in differentiating the diseases based on SD-OCT parameters was tested. In parallel to the masked readers, a computer-based algorithm was established using support vector machine (SVM) classifiers to automatically differentiate the disease entities. RESULTS: The masked readers assigned 92.5% of SD-OCT images to the correct clinical diagnosis. The classifier accuracy trained and tested on dataset 1 was 95.8%. The classifier accuracy trained on dataset 1 and tested on dataset 2 to differentiate PCME from DME was 90.2%. The classifier accuracy trained and tested on dataset 2 to differentiate all three diseases was 85.5%. In particular, a higher central retinal thickness/retinal volume ratio, absence of an epiretinal membrane, and cysts confined to the inner nuclear layer (INL) indicated PCME, whereas a higher outer nuclear layer (ONL)/INL ratio, absence of subretinal fluid, and presence of hard exudates, microaneurysms, and ganglion cell layer and/or retinal nerve fiber layer cysts strongly favored DME in this model. CONCLUSIONS: Based on the evaluation of SD-OCT, PCME can be differentiated from DME by masked reader evaluation and by automated analysis, even in DR patients with ME after cataract surgery. The automated classifier may help to independently differentiate these two disease entities and is made publicly available.
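The SVM step above can be sketched as a minimal primal linear SVM trained by sub-gradient descent on the hinge loss; the abstract does not specify the kernel or solver used, so this is only an illustrative stand-in. The two features and all values are hypothetical toy data, not the study's measurements.

```python
def train_linear_svm(X, y, epochs=200, lr=0.05, lam=0.01):
    """Primal linear SVM via sub-gradient descent on the hinge loss.
    Labels must be +1 / -1."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # inside the margin: hinge sub-gradient step
                w = [wj - lr * (lam * wj - yi * xj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:           # correctly classified: only regularize
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def svm_predict(w, b, x):
    """Sign of the decision function."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical features: [ONL/INL thickness ratio, hard-exudate count]
X = [[1.8, 5], [1.6, 3], [0.4, 0], [0.5, 1]]
y = [1, 1, -1, -1]  # +1 = DME-like, -1 = PCME-like (toy labels)
w, b = train_linear_svm(X, y)
print(svm_predict(w, b, [1.7, 4]))  # → 1 (DME-like)
```

In practice one would use an established SVM implementation with proper feature scaling and cross-validation rather than this fixed-learning-rate sketch.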
Abstract:
MRSI grids frequently show spectra with poor quality, mainly because of the high sensitivity of MRS to field inhomogeneities. These poor-quality spectra are prone to quantification and/or interpretation errors that can have a significant impact on the clinical use of spectroscopic data. Therefore, quality control of the spectra should always precede their clinical use. When performed manually, quality assessment of MRSI spectra is not only a tedious and time-consuming task, but is also affected by human subjectivity. Consequently, automatic, fast and reliable methods for spectral quality assessment are of utmost interest. In this article, we present a new random forest-based method for automatic quality assessment of ¹H MRSI brain spectra, which uses a new set of MRS signal features. The random forest classifier was trained on spectra from 40 MRSI grids that were classified as acceptable or non-acceptable by two expert spectroscopists. To account for the effects of intra-rater reliability, each spectrum was rated for quality three times by each rater. The automatic method classified these spectra with an area under the curve (AUC) of 0.976. Furthermore, in the subset of spectra containing only the cases that were classified the same way every time by the spectroscopists, an AUC of 0.998 was obtained. Feature importance for the classification was also evaluated. Frequency domain skewness and kurtosis, as well as time domain signal-to-noise ratios (SNRs) in the ranges 50–75 ms and 75–100 ms, were the most important features. Given that the method is able to assess a whole MRSI grid faster than a spectroscopist (approximately 3 s versus approximately 3 min), and without loss of accuracy (agreement between a classifier trained on just one session and any of the other labelling sessions, 89.88%; agreement between any two labelling sessions, 89.03%), the authors suggest its implementation in the clinical routine.
The method presented in this article was implemented in jMRUI's SpectrIm plugin. Copyright © 2016 John Wiley & Sons, Ltd.
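The features named in the abstract lend themselves to a short sketch: skewness, kurtosis, and a windowed SNR computed on a synthetic decaying signal. This is a toy stand-in under stated assumptions: the SNR here is simply the window peak over the standard deviation of the signal tail, and skewness/kurtosis are computed on a time-domain toy signal, whereas the paper applies them in the frequency domain with its own definitions.

```python
import math
import random

def skewness(xs):
    """Third standardized moment (population form)."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum(((x - m) / s) ** 3 for x in xs) / n

def kurtosis(xs):
    """Fourth standardized moment (population form, non-excess)."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum(((x - m) / s) ** 4 for x in xs) / n

def window_snr(signal, dt, t0, t1, noise_tail=100):
    """Simplified SNR: peak amplitude inside [t0, t1) seconds over the
    standard deviation of the last `noise_tail` samples (assumed noise)."""
    seg = [abs(v) for i, v in enumerate(signal) if t0 <= i * dt < t1]
    tail = signal[-noise_tail:]
    mt = sum(tail) / len(tail)
    sd = math.sqrt(sum((v - mt) ** 2 for v in tail) / len(tail))
    return max(seg) / sd

# Synthetic decaying, oscillating signal as a toy stand-in for an MRS
# time-domain signal (1 ms sampling, 50 ms decay constant, 40 Hz tone).
random.seed(1)
dt = 0.001
sig = [math.exp(-i * dt / 0.05) * math.cos(2 * math.pi * 40 * i * dt)
       + random.gauss(0, 0.01) for i in range(512)]
print(window_snr(sig, dt, 0.050, 0.075))  # SNR in the 50–75 ms window
```

The random forest itself would then be trained on vectors of such features; an established implementation (e.g. scikit-learn's RandomForestClassifier) is the natural choice rather than hand-rolled trees.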