804 results for Pixel-based Classification


Relevance:

30.00%

Publisher:

Abstract:

Nowadays, the joint exploitation of images acquired daily by remote sensing instruments and of images available from archives allows a detailed monitoring of the transitions occurring at the surface of the Earth. These modifications of the land cover generate spectral discrepancies that can be detected via the analysis of remote sensing images. Independently of the origin of the images and of the type of surface change, correct processing of such data implies the adoption of flexible, robust and possibly nonlinear methods, to correctly account for the complex statistical relationships characterizing the pixels of the images. This Thesis deals with the development and the application of advanced statistical methods for multi-temporal optical remote sensing image processing tasks. Three different families of machine learning models have been explored and fundamental solutions for change detection problems are provided. In the first part, change detection with user supervision has been considered. In a first application, a nonlinear classifier has been applied with the intent of precisely delineating flooded regions from a pair of images. In a second case study, the spatial context of each pixel has been injected into another nonlinear classifier to obtain a precise mapping of new urban structures. In both cases, the user provides the classifier with examples of what they believe has or has not changed. In the second part, a completely automatic and unsupervised method for precise binary detection of changes has been proposed. The technique allows a very accurate mapping without any user intervention, which is particularly useful when readiness and reaction times of the system are a crucial constraint. In the third part, the problem of statistical distributions shifting between acquisitions is studied. Two approaches to transform the pair of bi-temporal images and reduce their differences unrelated to changes in land cover are studied.
The methods align the distributions of the images, so that the pixel-wise comparison can be carried out with higher accuracy. Furthermore, the second method can deal with images from different sensors, regardless of the dimensionality of the data or the spectral information content. This opens the door to possible solutions for a crucial problem in the field: detecting changes when the images have been acquired by two different sensors.
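One elementary way to align the distributions of two acquisitions is histogram matching, which maps each pixel value of one image onto the value at the same empirical quantile of the other. This is only a minimal, single-band sketch of the idea of distribution alignment, on synthetic data; the thesis's actual methods are more general.

```python
import numpy as np

def match_histogram(source, reference):
    """Map source pixel values so their empirical CDF matches the reference's."""
    src = source.ravel()
    ref = reference.ravel()
    # Positions of source values when sorted give their empirical quantiles.
    src_sorted_idx = np.argsort(src)
    quantiles = np.linspace(0.0, 1.0, src.size)
    # Reference values at the same quantiles.
    ref_sorted = np.sort(ref)
    ref_quantiles = np.linspace(0.0, 1.0, ref.size)
    matched = np.empty(src.size, dtype=float)
    matched[src_sorted_idx] = np.interp(quantiles, ref_quantiles, ref_sorted)
    return matched.reshape(source.shape)

rng = np.random.default_rng(0)
img_t1 = rng.normal(0.3, 0.05, (64, 64))   # acquisition 1
img_t2 = rng.normal(0.5, 0.10, (64, 64))   # acquisition 2, shifted statistics
aligned = match_histogram(img_t1, img_t2)
print(round(aligned.mean(), 2), round(img_t2.mean(), 2))
```

After matching, the two images can be compared pixel-wise with the acquisition-dependent shift removed.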

Abstract:

INTRODUCTION: Optimal identification of subtle cognitive impairment in the primary care setting requires a very brief tool combining (a) patients' subjective impairments, (b) cognitive testing, and (c) information from informants. The present study developed a new, very quick and easily administered case-finding tool combining these assessments ('BrainCheck') and tested the feasibility and validity of this instrument in two independent studies. METHODS: We developed a case-finding tool comprising patient-directed (a) questions about memory and depression and (b) clock drawing, and (c) the informant-directed 7-item version of the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE). Feasibility study: 52 general practitioners rated the feasibility and acceptance of the patient-directed tool. Validation study: An independent group of 288 Memory Clinic patients (mean ± SD age = 76.6 ± 7.9, education = 12.0 ± 2.6; 53.8% female) with diagnoses of mild cognitive impairment (n = 80), probable Alzheimer's disease (n = 185), or major depression (n = 23) and 126 demographically matched, cognitively healthy volunteer participants (age = 75.2 ± 8.8, education = 12.5 ± 2.7; 40% female) took part. All patient and healthy control participants were administered the patient-directed tool, and informants of 113 patient and 70 healthy control participants completed the very short IQCODE. RESULTS: Feasibility study: General practitioners rated the patient-directed tool as highly feasible and acceptable. Validation study: A Classification and Regression Tree analysis generated an algorithm to categorize patient-directed data which resulted in a correct classification rate (CCR) of 81.2% (sensitivity = 83.0%, specificity = 79.4%). Critically, the CCR of the combined patient- and informant-directed instruments (BrainCheck) reached nearly 90% (89.4%; sensitivity = 97.4%, specificity = 81.6%).
CONCLUSION: A new and very brief instrument for general practitioners, 'BrainCheck', combined three sources of information deemed critical for effective case-finding (patients' subjective impairments, cognitive testing, informant information) and resulted in a nearly 90% CCR. Thus, it provides a very efficient and valid tool to aid general practitioners in deciding whether patients with suspected cognitive impairments should be further evaluated or not ('watchful waiting').
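The Classification and Regression Tree step can be illustrated with a small sketch. The features, class means and tree depth below are hypothetical stand-ins for the tool's items (memory questions, depression questions, clock drawing), not the study's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 400
# Hypothetical features: memory score, depression score, clock-drawing score.
impaired = np.column_stack([rng.normal(2.0, 1.0, n),
                            rng.normal(1.5, 1.0, n),
                            rng.normal(3.0, 1.5, n)])
healthy = np.column_stack([rng.normal(4.0, 1.0, n),
                           rng.normal(0.5, 1.0, n),
                           rng.normal(6.0, 1.5, n)])
X = np.vstack([impaired, healthy])
y = np.array([1] * n + [0] * n)          # 1 = impaired, 0 = healthy

# A shallow CART tree yields an easily interpretable decision algorithm.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
pred = tree.predict(X)
tp = int(np.sum((pred == 1) & (y == 1)))
tn = int(np.sum((pred == 0) & (y == 0)))
ccr = (tp + tn) / y.size
sensitivity = tp / n
specificity = tn / n
print(f"CCR={ccr:.1%} sens={sensitivity:.1%} spec={specificity:.1%}")
```

Sensitivity, specificity and CCR are computed exactly as reported in the abstract, only on invented data.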

Abstract:

HAMAP (High-quality Automated and Manual Annotation of Proteins-available at http://hamap.expasy.org/) is a system for the automatic classification and annotation of protein sequences. HAMAP provides annotation of the same quality and detail as UniProtKB/Swiss-Prot, using manually curated profiles for protein sequence family classification and expert curated rules for functional annotation of family members. HAMAP data and tools are made available through our website and as part of the UniRule pipeline of UniProt, providing annotation for millions of unreviewed sequences of UniProtKB/TrEMBL. Here we report on the growth of HAMAP and updates to the HAMAP system since our last report in the NAR Database Issue of 2013. We continue to augment HAMAP with new family profiles and annotation rules as new protein families are characterized and annotated in UniProtKB/Swiss-Prot; the latest version of HAMAP (as of 3 September 2014) contains 1983 family classification profiles and 1998 annotation rules (up from 1780 and 1720). We demonstrate how the complex logic of HAMAP rules allows for precise annotation of individual functional variants within large homologous protein families. We also describe improvements to our web-based tool HAMAP-Scan which simplify the classification and annotation of sequences, and the incorporation of an improved sequence-profile search algorithm.
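A toy illustration of the profile-then-rule logic: a sequence is first assigned to a family when a profile score passes its threshold, then family rules add annotations conditional on sequence features. The data structures and scoring below are hypothetical simplifications, not HAMAP's actual formats.

```python
# Hypothetical miniature of profile-based classification plus conditional
# annotation rules (not the real HAMAP profile or rule syntax).
def annotate(sequence, profiles, rules):
    hits = []
    for family, (motif, threshold) in profiles.items():
        score = sequence.count(motif)      # toy stand-in for a profile score
        if score >= threshold:
            hits.append(family)
    annotations = []
    for family in hits:
        for condition, note in rules.get(family, []):
            if condition(sequence):
                annotations.append(note)
    return hits, annotations

profiles = {"kinase_like": ("GK", 1)}
rules = {"kinase_like": [
    (lambda s: "DFG" in s, "active-site motif present"),
    (lambda s: "DFG" not in s, "probable inactive variant"),
]}
hits, notes = annotate("MSTGKLADFGHR", profiles, rules)
print(hits, notes)
```

The mutually exclusive conditions mimic how rule logic can annotate individual functional variants within one family.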

Abstract:

For the ∼1% of the human genome in the ENCODE regions, only about half of the transcriptionally active regions (TARs) identified with tiling microarrays correspond to annotated exons. Here we categorize this large amount of “unannotated transcription.” We use a number of disparate features to classify the 6988 novel TARs: array expression profiles across cell lines and conditions, sequence composition, phylogenetic profiles (presence/absence of syntenic conservation across 17 species), and locations relative to genes. In the classification, we first filter out TARs with unusual sequence composition and those likely resulting from cross-hybridization. We then associate some of those remaining with proximal exons having correlated expression profiles. Finally, we cluster unclassified TARs into putative novel loci, based on similar expression and phylogenetic profiles. To encapsulate our classification, we construct a Database of Active Regions and Tools (DART.gersteinlab.org). DART has special facilities for rapidly handling and comparing many sets of TARs and their heterogeneous features, synchronizing across builds, and interfacing with other resources. Overall, we find that ∼14% of the novel TARs can be associated with known genes, while ∼21% can be clustered into ∼200 novel loci. We observe that TARs associated with genes are enriched in the potential to form structural RNAs, and many novel TAR clusters are associated with nearby promoters. To benchmark our classification, we designed a set of experiments for testing the connectivity of novel TARs. Overall, we find that 18 of the 46 connections tested were validated by RT-PCR, and four of five sequenced PCR products confirmed connectivity unambiguously.
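The expression-based clustering step can be sketched as grouping TARs whose profiles correlate above a threshold. This simplified version uses only expression profiles (the paper also uses phylogenetic profiles), and the threshold and data below are illustrative.

```python
import numpy as np

def cluster_by_correlation(profiles, r_min=0.9):
    """Group rows whose expression profiles correlate above r_min
    (single-linkage via union-find)."""
    n = profiles.shape[0]
    corr = np.corrcoef(profiles)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] >= r_min:
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

rng = np.random.default_rng(1)
base = rng.normal(size=20)                       # shared expression pattern
tars = np.vstack([base + rng.normal(0, 0.05, 20),  # TAR 0
                  base + rng.normal(0, 0.05, 20),  # TAR 1, co-expressed with 0
                  rng.normal(size=20)])            # TAR 2, unrelated
clusters_found = cluster_by_correlation(tars)
print(clusters_found)
```

Co-expressed TARs 0 and 1 fall into one putative locus; the unrelated TAR 2 stays alone.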

Abstract:

Demosaicking is a particular case of interpolation problems where, from a scalar image in which each pixel has either the red, the green or the blue component, we want to interpolate the full-color image. State-of-the-art demosaicking algorithms perform interpolation along edges, but these edges are estimated locally. We propose a level-set-based geometric method to estimate image edges, inspired by the image in-painting literature. This method has a time complexity of O(S), where S is the number of pixels in the image, and compares favorably with the state-of-the-art algorithms both visually and in most relevant image quality measures.
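For contrast with edge-aware methods, the simplest non-edge-aware baseline interpolates each missing green value as the average of its four green neighbours. The sketch below assumes an RGGB Bayer layout on a synthetic mosaic; it is a naive baseline, not the paper's level-set method.

```python
import numpy as np

def interpolate_green(bayer):
    """Fill in green at red/blue sites by averaging the 4 green neighbours
    (interior pixels only; RGGB pattern assumed, so green samples sit
    where row+col is odd)."""
    h, w = bayer.shape
    green = bayer.astype(float).copy()
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            if (r + c) % 2 == 0:          # red or blue site: green is missing
                green[r, c] = (bayer[r - 1, c] + bayer[r + 1, c] +
                               bayer[r, c - 1] + bayer[r, c + 1]) / 4.0
    return green

# A flat scene: every green sample is 100, so interpolation recovers 100.
mosaic = np.zeros((6, 6))
for r in range(6):
    for c in range(6):
        mosaic[r, c] = 100 if (r + c) % 2 == 1 else 50
out = interpolate_green(mosaic)
print(out[2, 2])
```

On real images this baseline blurs across edges, which is exactly what edge-directed interpolation, including the proposed geometric method, is designed to avoid.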

Abstract:

Monitoring of posture allocations and activities enables accurate estimation of energy expenditure and may aid in obesity prevention and treatment. At present, accurate devices rely on multiple sensors distributed on the body and thus may be too obtrusive for everyday use. This paper presents a novel wearable sensor, which is capable of very accurate recognition of common postures and activities. The patterns of heel acceleration and plantar pressure uniquely characterize postures and typical activities while requiring minimal preprocessing and no feature extraction. The shoe sensor was tested in nine adults performing sitting and standing postures and while walking, running, stair ascent/descent and cycling. Support vector machines (SVMs) were used for classification. A fourfold validation of a six-class subject-independent group model showed 95.2% average accuracy of posture/activity classification on the full sensor set and over 98% on an optimized sensor set. Using a combination of acceleration and pressure also enabled a pronounced reduction of the sampling frequency (from 25 to 1 Hz) without significant loss of accuracy (98% versus 93%). Subjects had shoe sizes (US) M9.5-11 and W7-9 and body mass indices from 18.1 to 39.4 kg/m2, suggesting that the device can be used by individuals with varying anthropometric characteristics.
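The classification step can be sketched with a generic SVM and fourfold cross-validation. The features and class means below are synthetic stand-ins for the heel-acceleration and plantar-pressure measurements, not the study's data, and only three of the six classes are mimicked.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_class(accel_mean, pressure_mean, n=60):
    """Synthetic (acceleration, pressure) feature pairs for one class."""
    return np.column_stack([rng.normal(accel_mean, 0.3, n),
                            rng.normal(pressure_mean, 0.3, n)])

X = np.vstack([make_class(0.1, 2.0),    # e.g. sitting
               make_class(0.2, 5.0),    # e.g. standing
               make_class(2.0, 4.0)])   # e.g. walking
y = np.repeat([0, 1, 2], 60)

clf = SVC(kernel="rbf", C=1.0)
scores = cross_val_score(clf, X, y, cv=4)   # fourfold validation, as in the paper
print(f"mean accuracy: {scores.mean():.2f}")
```

With well-separated class means, even two features suffice, mirroring the paper's point that the sensor combination needs no elaborate feature extraction.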

Abstract:

Introduction: Responses to external stimuli are typically investigated by averaging peri-stimulus electroencephalography (EEG) epochs in order to derive event-related potentials (ERPs) across the electrode montage, under the assumption that signals that are related to the external stimulus are fixed in time across trials. We demonstrate the applicability of a single-trial model based on patterns of scalp topographies (De Lucia et al, 2007) that can be used for ERP analysis at the single-subject level. The model is able to classify new trials (or groups of trials) with minimal a priori hypotheses, using information derived from a training dataset. The features used for the classification (the topography of responses and their latency) can be neurophysiologically interpreted, because a difference in scalp topography indicates a different configuration of brain generators. An above-chance classification accuracy on test datasets implicitly demonstrates the suitability of this model for EEG data. Methods: The data analyzed in this study were acquired from two separate visual evoked potential (VEP) experiments. The first entailed passive presentation of checkerboard stimuli to each of the four visual quadrants (hereafter, "Checkerboard Experiment") (Plomp et al, submitted). The second entailed active discrimination of novel versus repeated line drawings of common objects (hereafter, "Priming Experiment") (Murray et al, 2004). Four subjects per experiment were analyzed, using approx. 200 trials per experimental condition. These trials were randomly separated into training (90%) and testing (10%) datasets in 10 independent shuffles. In order to perform the ERP analysis, we estimated the statistical distribution of voltage topographies by a Mixture of Gaussians (MofGs), which reduces our original dataset to a small number of representative voltage topographies.
We then evaluated statistically the degree of presence of these template maps across trials and whether and when this was different across experimental conditions. Based on these differences, single trials or sets of a few single trials were classified as belonging to one or the other experimental condition. Classification performance was assessed using the Receiver Operating Characteristic (ROC) curve. Results: For the Checkerboard Experiment, contrasts entailed left vs. right visual field presentations for upper and lower quadrants, separately. The average posterior probabilities, indicating the presence of the computed template maps in time and across trials, revealed significant differences starting at ~60-70 ms post-stimulus. The average ROC curve area across all four subjects was 0.80 and 0.85 for upper and lower quadrants, respectively, and was in all cases significantly higher than chance (unpaired t-test, p<0.0001). In the Priming Experiment, we contrasted initial versus repeated presentations of visual object stimuli. Their posterior probabilities revealed significant differences, which started at 250 ms post-stimulus onset. The classification accuracy rates with single-trial test data were at chance level. We therefore considered sub-averages based on five single trials. We found that for three out of four subjects, classification rates were significantly above chance level (unpaired t-test, p<0.0001). Conclusions: The main advantage of the present approach is that it is based on topographic features that are readily interpretable along neurophysiologic lines. As these maps were previously normalized by the overall strength of the field potential on the scalp, a change in their presence across trials and between conditions necessarily reflects a change in the underlying generator configurations. The temporal periods of statistical difference between conditions were estimated for each training dataset for ten shuffles of the data.
Across the ten shuffles and in both experiments, we observed a high level of consistency in the temporal periods over which the two conditions differed. With this method we are able to analyze ERPs at the single-subject level, providing a novel tool to compare normal electrophysiological responses against single cases that cannot be considered part of any cohort of subjects. This aspect promises to have a strong impact on both basic and clinical research.
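The pipeline's core idea, fitting a Mixture of Gaussians to trial topographies and scoring trials by the posterior probability of a template map, can be sketched on synthetic data; the channel count, noise level and templates below are invented for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n_ch, n_trials = 16, 100
map_a = rng.normal(size=n_ch)            # template topography, condition A
map_b = rng.normal(size=n_ch)            # template topography, condition B
cond_a = map_a + rng.normal(0, 0.5, (n_trials, n_ch))
cond_b = map_b + rng.normal(0, 0.5, (n_trials, n_ch))
X = np.vstack([cond_a, cond_b])
labels = np.array([0] * n_trials + [1] * n_trials)

# The Mixture of Gaussians reduces the trials to two template maps.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
# Posterior probability of one component serves as a per-trial score.
scores = gmm.predict_proba(X)[:, 0]
auc = roc_auc_score(labels, scores)
auc = max(auc, 1 - auc)                  # component order is arbitrary
print(f"ROC area: {auc:.2f}")
```

The ROC area plays the same role here as the per-subject ROC curve areas reported in the abstract.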

Abstract:

The aim of this study was to compare the diagnostic efficiency of plain film and spiral CT examinations with 3D reconstructions in 42 tibial plateau fractures, and to assess the accuracy of these two techniques in pre-operative surgical planning in 22 cases. Forty-two tibial plateau fractures were examined with plain film (anteroposterior, lateral, two obliques) and spiral CT with surface-shaded-display 3D reconstructions. The Swiss AO-ASIF classification system of bone fractures from Müller was used. In 22 cases the surgical plans and the sequence of reconstruction of the fragments were prospectively determined with both techniques, successively, and then correlated with the surgical reports and post-operative plain film. The fractures were underestimated with plain film in 18 of 42 cases (43%). Owing to the spiral CT 3D reconstructions and the precise pre-operative information they provide, the surgical plans based on plain film were modified and adjusted in 13 of 22 cases (59%). Spiral CT 3D reconstructions give a better and more accurate demonstration of tibial plateau fractures and allow a more precise pre-operative surgical plan.

Abstract:

The diagnosis of idiopathic Parkinson's disease (IPD) is entirely clinical. The fact that neuronal damage begins 5-10 years before the occurrence of sub-clinical signs underlines the importance of preclinical diagnosis. A new approach for in-vivo pathophysiological assessment of IPD-related neurodegeneration was implemented based on recently developed neuroimaging methods. It is based on non-invasive magnetic resonance data sensitive to brain tissue property changes that precede macroscopic atrophy in the early stages of IPD. This research aims to determine the brain tissue property changes induced by neurodegeneration that can be linked to clinical phenotypes, which will allow us to create a predictive model for early diagnosis in IPD. We hypothesized that the degree of disease progression in IPD patients would have a differential and specific impact on the brain tissue properties used to create a predictive model of motor and non-motor impairment in IPD. We studied the potential of in-vivo quantitative imaging sensitive to neurodegeneration-related brain tissue characteristics to detect changes in patients with IPD. We carried out methodological work within the well-established SPM8 framework to estimate the sensitivity of tissue probability maps for automated tissue classification for the detection of early IPD. We performed whole-brain multi-parameter mapping at high resolution, followed by voxel-based morphometry (VBM) and voxel-based quantification (VBQ) analyses comparing healthy subjects to IPD patients. We found a trend, not reaching significance, towards tissue property changes in the olfactory bulb area in the MT and R1 parameters at p<0.001. Compared with the IPD patients, the healthy group presented bilaterally higher MT and R1 intensities in this specific functional region. These results did not correlate with age, severity or duration of disease. We failed to demonstrate any changes in the R2* parameter.
We interpreted our findings as demyelination of the olfactory tract, which clinically manifests as anosmia. However, the lack of correlation with duration or severity complicates its implications for the creation of a predictive model of impairment in IPD.
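The group comparison underlying VBM/VBQ reduces, at each voxel, to a two-sample test between patients and controls. Below is a minimal synthetic sketch; the "region", effect size and threshold are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
n_controls, n_patients, n_vox = 20, 20, 500
controls = rng.normal(1.0, 0.1, (n_controls, n_vox))   # e.g. MT-like values
patients = rng.normal(1.0, 0.1, (n_patients, n_vox))
# Simulate a reduced parameter value in a small hypothetical region
# (first 10 voxels), standing in for a focal tissue property change.
patients[:, :10] -= 0.2

# Voxel-wise two-sample t-test, thresholded at p < 0.001 as in the abstract.
t_stat, p_val = ttest_ind(controls, patients, axis=0)
significant = np.flatnonzero(p_val < 0.001)
print(len(significant))
```

An uncorrected threshold like this is why such results are typically reported as trends pending correction for multiple comparisons.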

Abstract:

OBJECTIVE To validate nursing language terms specific to physical-motor rehabilitation and map them to the terms of ICNP® 2.0. METHOD A methodological study based on document analysis, with collection and analysis of terms from 1,425 records. RESULTS 825 terms were obtained after the methodological procedure, of which 226 had not yet been included in ICNP® 2.0. These terms were distributed as follows: 47 on the Focus axis; 15 on the Judgment axis; 31 on the Action axis; 25 on the Location axis; 102 on the Means axis; three on the Time axis; and three on the Client axis. All terms not present in the ICNP® were validated by experts, reaching an agreement index ≥0.80. CONCLUSION The ICNP® is applicable to and used in nursing care for physical-motor rehabilitation.

Abstract:

In recent years, the potential of type-2 fuzzy sets for managing high levels of uncertainty in the subjective knowledge of experts or in numerical information has attracted attention in control and pattern classification systems. One of the main challenges in designing a type-2 fuzzy logic system is how to estimate the parameters of the type-2 fuzzy membership function (T2MF) and the footprint of uncertainty (FOU) from imperfect and noisy datasets. This paper presents an automatic approach for learning and tuning Gaussian interval type-2 membership functions (IT2MFs), with application to multi-dimensional pattern classification problems. T2MFs and their FOUs are tuned according to the uncertainties in the training dataset by a combination of genetic algorithm (GA) and cross-validation techniques. In our GA-based approach, the structure of the chromosome has fewer genes than in other GA methods, and chromosome initialization is more precise. The proposed approach addresses the application of an interval type-2 fuzzy logic system (IT2FLS) to the problem of nodule classification in a lung computer-aided detection (CAD) system. The designed IT2FLS is compared with its type-1 fuzzy logic system (T1FLS) counterpart. The results demonstrate that the IT2FLS outperforms the T1FLS by more than 30% in terms of classification accuracy.
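An interval type-2 Gaussian membership function with an uncertain mean can be written down directly: the upper and lower membership bounds delimit the FOU. The parameter values below are illustrative; in the paper they are tuned by the GA.

```python
import numpy as np

def it2_gaussian(x, m1, m2, sigma):
    """Interval type-2 Gaussian MF with uncertain mean in [m1, m2].

    Returns the lower and upper membership bounds; the band between
    them is the footprint of uncertainty (FOU).
    """
    g = lambda x, m: np.exp(-0.5 * ((x - m) / sigma) ** 2)
    # Upper bound: the envelope of all Gaussians with mean in [m1, m2];
    # it plateaus at 1 between the two means.
    upper = np.where(x < m1, g(x, m1), np.where(x > m2, g(x, m2), 1.0))
    # Lower bound: the pointwise minimum of the two extreme Gaussians.
    lower = np.minimum(g(x, m1), g(x, m2))
    return lower, upper

x = np.linspace(-3, 3, 7)
lo, hi = it2_gaussian(x, m1=-0.5, m2=0.5, sigma=1.0)
print(np.all(lo <= hi))
```

Widening [m1, m2] widens the FOU, which is how the tuned parameters encode the amount of uncertainty in the training data.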

Abstract:

OBJECTIVES: Jean Cruveilhier has always been described as a pioneer in pathological anatomy. Almost nothing has been reported concerning his exceptional methodology, which combined pre-mortem clinical description and syndromic classification of neurological and neurosurgical diseases with meticulous post-mortem dissections. Cruveilhier's methodology foreshadowed the birth of the anatomoclinical method built up by Jean-Martin Charcot and the French neurological school during the 19th century. The aim of our work is to extract the quintessence of Cruveilhier's contributions to skull base pathology through his cogent clinical descriptions coupled with exceptional lithographs of anterior skull base, suprasellar and cerebello-pontine angle tumors. METHODS: We reviewed the masterwork of Jean Cruveilhier on pathological anatomy and selected the chapters dedicated to central nervous system pathologies, mainly skull base diseases. A systematic review was performed on PubMed/Medline and Google Scholar using the keywords "Jean Cruveilhier", "skull base pathology" and "anatomoclinical method". RESULTS: Among his descriptions, Cruveilhier dedicated large chapters to neurosurgical diseases including brain tumors, cerebrovascular pathologies, malformations of the central nervous system, hydrocephalus, brain infections and spinal cord compressions. CONCLUSION: This work emphasizes the role of Jean Cruveilhier in the birth of the anatomoclinical method, particularly in neuroscience, during a 19th century rich in epistemological evolutions toward evidence-based medicine, through the prism of his contribution to skull base pathology.

Abstract:

Recently, kernel-based machine learning methods have gained great popularity in many data analysis and data mining fields: pattern recognition, biocomputing, speech and vision, engineering, remote sensing, etc. This paper describes the use of kernel methods to approach the processing of large datasets from environmental monitoring networks. Several typical problems of the environmental sciences and their solutions provided by kernel-based methods are considered: classification of categorical data (soil type classification), mapping of continuous environmental and pollution information (pollution of soil by radionuclides), and mapping with auxiliary information (climatic data from the Aral Sea region). Promising developments, such as automatic emergency hot-spot detection and monitoring network optimization, are discussed as well.
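As a minimal example of a kernel method for spatial mapping, Nadaraya-Watson regression predicts the value at a query location as a Gaussian-kernel-weighted average of station measurements. The station coordinates, values and bandwidth below are synthetic, not the datasets used in the paper.

```python
import numpy as np

def nw_predict(x_train, y_train, x_query, bandwidth=0.5):
    """Nadaraya-Watson kernel regression: kernel-weighted average of
    observed values at each query point."""
    # Squared distances between every query point and every station.
    d2 = np.sum((x_query[:, None, :] - x_train[None, :, :]) ** 2, axis=2)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(5)
coords = rng.uniform(0, 10, (200, 2))                 # monitoring stations (x, y)
values = np.sin(coords[:, 0]) + 0.1 * rng.normal(size=200)  # e.g. pollutant level
grid = np.array([[2.0, 5.0], [8.0, 5.0]])             # query locations
preds = nw_predict(coords, values, grid, bandwidth=0.5)
print(preds)
```

The bandwidth plays the role of a spatial correlation scale; kernel methods such as support vector regression refine this basic weighted-averaging idea.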

Abstract:

This paper presents 3-D brain tissue classification schemes using three recent promising energy minimization methods for Markov random fields: graph cuts, loopy belief propagation and tree-reweighted message passing. The classification is performed using the well-known finite Gaussian mixture Markov random field model. Results from the above methods are compared with the widely used iterative conditional modes algorithm. The evaluation is performed on a dataset containing simulated T1-weighted MR brain volumes with varying noise and intensity non-uniformities. The comparisons are performed in terms of energies as well as based on ground truth segmentations, using various quantitative metrics.
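The iterative-conditional-modes baseline is straightforward to sketch: each pixel greedily takes the label minimizing a Gaussian data term plus a Potts smoothness term. The image, class means and parameters below are synthetic stand-ins (graph cuts, belief propagation and tree-reweighted message passing minimize the same energy less greedily).

```python
import numpy as np

def icm_segment(img, means, sigma=0.1, beta=1.0, iters=5):
    """Iterated conditional modes for a Gaussian-likelihood Potts MRF."""
    # Initialize with the maximum-likelihood (nearest-mean) labeling.
    labels = np.argmin((img[..., None] - np.array(means)) ** 2, axis=-1)
    h, w = img.shape
    for _ in range(iters):
        for r in range(h):
            for c in range(w):
                best, best_e = labels[r, c], np.inf
                for k in range(len(means)):
                    data = (img[r, c] - means[k]) ** 2 / (2 * sigma ** 2)
                    # Potts prior: pay beta per disagreeing 4-neighbour.
                    smooth = sum(beta
                                 for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                                 if 0 <= r + dr < h and 0 <= c + dc < w
                                 and labels[r + dr, c + dc] != k)
                    if data + smooth < best_e:
                        best, best_e = k, data + smooth
                labels[r, c] = best
    return labels

rng = np.random.default_rng(2)
truth = np.zeros((16, 16), dtype=int)
truth[:, 8:] = 1                                    # two tissue classes
img = np.where(truth == 1, 0.8, 0.2) + rng.normal(0, 0.15, truth.shape)
seg = icm_segment(img, means=[0.2, 0.8], sigma=0.15)
print((seg == truth).mean())
```

Because each update is greedy, ICM can stop in a local minimum of the energy; the three methods compared in the paper tend to reach lower energies.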

Abstract:

The emphasis on integrated care implies new incentives that promote coordination between levels of care. Considering a population as a whole, the resource allocation system has to adapt to this environment. This research is aimed at designing a model that allows for morbidity-related prospective and concurrent capitation payment. The model can be applied in publicly funded health systems and managed competition settings. Methods: We analyze the application of hybrid risk adjustment versus either prospective or concurrent risk adjustment formulae in the context of funding total health expenditures for the population of an integrated healthcare delivery organization in Catalonia during the years 2004 and 2005. Results: The hybrid model reimburses integrated care organizations while avoiding excessive risk transfer and maximizing incentives for efficiency in provision. At the same time, it eliminates incentives for risk selection for a specific set of high-risk individuals through the use of concurrent reimbursement, in order to assure a proper classification of patients. Conclusion: Prospective risk adjustment is used to transfer the financial risk to the health provider and therefore provide incentives for efficiency. Within the context of a National Health System, such transfer of financial risk is illusory, and the government has to cover the deficits. Hybrid risk adjustment is useful to provide the right combination of incentives for efficiency and an appropriate level of risk transfer for integrated care organizations.