930 results for Classification algorithm


Relevance:

60.00%

Publisher:

Abstract:

The main purpose of this paper is to present the architecture of an automated system that allows real-time (online) monitoring and tracking of the possible occurrence of faults and electromagnetic transients observed in primary power distribution networks. By interconnecting this automated system with the utility operation center, it will be possible to provide an efficient tool that assists the Operation Center in decision-making. In short, the goal is to provide all the tools necessary to identify, almost instantaneously, the occurrence of faults and transient disturbances in the primary power distribution system, as well as to determine their respective origin and probable location. The compiled results from the application of this automated system show that the developed techniques provide accurate results, identifying and locating several occurrences of faults observed in the distribution system.

Relevance:

60.00%

Publisher:

Abstract:

Here, we examine morphological changes in cortical thickness of patients with Alzheimer's disease (AD) using image analysis algorithms for brain structure segmentation, and study automatic classification of AD patients using cortical and volumetric data. Cortical thickness of AD patients (n = 14) was measured using MRI cortical surface-based analysis and compared with healthy subjects (n = 20). Data were analyzed using an automated algorithm for tissue segmentation and classification. A Support Vector Machine (SVM) was applied to the volumetric measurements of subcortical and cortical structures to separate AD patients from controls. The group analysis showed cortical thickness reduction in the superior temporal lobe, parahippocampal gyrus, and entorhinal cortex in both hemispheres. We also found cortical thinning in the isthmus of the cingulate gyrus and the middle temporal gyrus in the right hemisphere, as well as a reduction of the cortical mantle in areas previously shown to be associated with AD. We also confirmed that automatic classification algorithms (SVM) can be helpful in distinguishing AD patients from healthy controls. Moreover, the same areas implicated in the pathogenesis of AD were the main parameters driving the classification algorithm. While the patient sample used in this study was relatively small, we expect that using a database of regional volumes derived from MRI scans of a large number of subjects will increase the power of SVM-based AD patient identification.
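
As a rough illustration of the SVM step described above, the following Python sketch trains a linear SVM on per-subject volumetric features; the feature matrix, feature count, and cross-validation setup are synthetic placeholders, not the study's actual data or protocol.

    # Sketch: separating AD patients from controls with a linear SVM over
    # per-subject volumetric/cortical features. All data are synthetic.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # 34 subjects (14 AD, 20 controls) x 12 hypothetical regional measures
    X = rng.normal(size=(34, 12))
    y = np.array([1] * 14 + [0] * 20)  # 1 = AD, 0 = control

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    scores = cross_val_score(clf, X, y, cv=5)
    print("cross-validated accuracy: %.2f" % scores.mean())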

Relevance:

60.00%

Publisher:

Abstract:

Understanding the ecological role of benthic microalgae, a highly productive component of coral reef ecosystems, requires information on their spatial distribution. The spatial extent of benthic microalgae on Heron Reef (southern Great Barrier Reef, Australia) was mapped using data from the Landsat 5 Thematic Mapper sensor, integrated with field measurements of sediment chlorophyll concentration and reflectance. Field-measured sediment chlorophyll concentrations, ranging from 23 to 1,153 mg chl a m(-2), were classified into low, medium, and high concentration classes (1-170, 171-290, and > 291 mg chl a m(-2)) using a K-means clustering algorithm. The mapping process assumed that areas in the Thematic Mapper image exhibiting similar reflectance levels in red and blue bands would correspond to areas of similar chlorophyll a levels. Regions of homogeneous reflectance values corresponding to low, medium, and high chlorophyll levels were identified over the reef sediment zone by applying a standard image classification algorithm to the Thematic Mapper image. The resulting distribution map revealed large-scale (> 1 km(2)) patterns in chlorophyll a levels throughout the sediment zone of Heron Reef. Reef-wide estimates of chlorophyll a distribution indicate that benthic microalgae may constitute up to 20% of the total benthic chlorophyll a at Heron Reef, and thus contribute significantly to total primary productivity on the reef.
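
The K-means step can be sketched as follows; the chlorophyll values below are illustrative stand-ins for the field measurements, and the low/medium/high ordering is recovered from the fitted cluster centres.

    # Sketch: grouping field-measured chlorophyll a concentrations
    # (mg chl a per m^2) into low/medium/high classes with K-means.
    import numpy as np
    from sklearn.cluster import KMeans

    chl = np.array([30, 95, 150, 180, 220, 270, 310, 480, 900, 1150],
                   dtype=float)
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(chl.reshape(-1, 1))

    # Order cluster labels by their centre so 0 = low, 1 = medium, 2 = high.
    order = np.argsort(km.cluster_centers_.ravel())
    rank = {c: i for i, c in enumerate(order)}
    classes = np.array([rank[c] for c in km.labels_])
    print(classes)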

Relevance:

60.00%

Publisher:

Abstract:

With the electricity market liberalization, distribution and retail companies are looking for better market strategies based on adequate information about the consumption patterns of their electricity consumers. A fair insight into the consumers' behavior will permit the definition of specific contract aspects based on the different consumption patterns. In order to form the different consumers' classes, and to find a set of representative consumption patterns, we use electricity consumption data from a utility client's database and two approaches: the Two-Step clustering algorithm and the WEACS approach, based on evidence accumulation (EAC), for combining partitions in a clustering ensemble. While EAC uses a voting mechanism to produce a co-association matrix based on the pairwise associations obtained from N partitions, where each partition has equal weight in the combination process, the WEACS approach uses subsampling and weights the partitions differently. As a complementary step to the WEACS approach, we combine the partitions obtained in the WEACS approach with the ALL clustering ensemble construction method and use the Ward Link algorithm to obtain the final data partition. The characterization of the obtained consumers' clusters was performed using the C5.0 classification algorithm. Experimental results showed that the WEACS approach leads to better results than many other clustering approaches.
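
A minimal sketch of the evidence-accumulation idea follows: run K-means N times, vote pairwise co-assignments into a co-association matrix, and extract a final partition by hierarchical clustering. This is the equal-weight EAC baseline; WEACS would additionally subsample and weight the partitions, and the paper uses the Ward link rather than the average link chosen here. All data are synthetic stand-ins for load profiles.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 24))          # 60 consumers x 24 hourly readings
    n, N = X.shape[0], 50                  # N partitions in the ensemble

    coassoc = np.zeros((n, n))
    for i in range(N):
        labels = KMeans(n_clusters=int(rng.integers(2, 8)), n_init=5,
                        random_state=i).fit_predict(X)
        coassoc += (labels[:, None] == labels[None, :])
    coassoc /= N                           # fraction of partitions agreeing

    # Co-association -> distance, then hierarchical clustering on it.
    dist = squareform(1.0 - coassoc, checks=False)
    final = fcluster(linkage(dist, method="average"), t=4, criterion="maxclust")
    print(final)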

Relevance:

60.00%

Publisher:

Abstract:

With the electricity market liberalization, distribution and retail companies are looking for better market strategies based on adequate information about the consumption patterns of their electricity customers. In this environment all consumers are free to choose their electricity supplier. A fair insight into the customers' behaviour will permit the definition of specific contract aspects based on the different consumption patterns. In this paper Data Mining (DM) techniques are applied to electricity consumption data from a utility client's database. To form the different customers' classes, and to find a set of representative consumption patterns, we have used the Two-Step algorithm, a hierarchical clustering algorithm. Each consumer class is represented by the load profile resulting from the clustering operation. Next, to characterize each consumer class, a classification model is constructed with the C5.0 classification algorithm.
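
A hedged sketch of this two-stage pipeline using open-source stand-ins: agglomerative (hierarchical) clustering in place of the Two-Step algorithm, and a CART decision tree in place of the proprietary C5.0. The load profiles are synthetic.

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(1)
    profiles = rng.normal(size=(80, 24))          # 80 customers x 24 hours

    # Stage 1: cluster daily load profiles into consumer classes.
    labels = AgglomerativeClustering(n_clusters=4).fit_predict(profiles)

    # Stage 2: characterize each class. A tree trained to predict the
    # cluster label yields human-readable rules per consumption pattern.
    tree = DecisionTreeClassifier(max_depth=3).fit(profiles, labels)
    print(export_text(tree, feature_names=[f"h{h}" for h in range(24)]))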

Relevance:

60.00%

Publisher:

Abstract:

In this paper an automatic classification algorithm is proposed for the diagnosis of liver steatosis, also known as fatty liver, from ultrasound images. The features used by the classifier, automatically extracted from the ultrasound images, are essentially the ones used by physicians when diagnosing the disease by visual inspection of the ultrasound images. The main novelty of the method is the use of the speckle noise that corrupts the ultrasound images to compute textural features of the liver parenchyma relevant for the diagnosis. The algorithm uses a Bayesian framework to compute a noiseless image, containing the anatomic and echogenic information of the liver, and a second image containing only the speckle noise, which is used to compute the textural features. The classification results with the Bayes classifier, using manually classified data as ground truth, show that the automatic classifier reaches an accuracy of 95% and a sensitivity of 100%.
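
The decomposition-plus-classification idea might be sketched as below, with a simple median filter standing in for the paper's Bayesian estimation of the noiseless image, and synthetic tiles in place of real B-mode images.

    import numpy as np
    from scipy.ndimage import median_filter
    from sklearn.naive_bayes import GaussianNB

    def speckle_features(img):
        smooth = median_filter(img, size=5)        # despeckled estimate
        speckle = img / np.maximum(smooth, 1e-6)   # multiplicative residual
        return [speckle.mean(), speckle.std(),
                np.percentile(speckle, 90) - np.percentile(speckle, 10)]

    rng = np.random.default_rng(0)
    imgs = rng.gamma(shape=4.0, scale=1.0, size=(40, 64, 64))  # toy tiles
    y = np.array([0] * 20 + [1] * 20)   # 0 = normal, 1 = steatosis (toy)

    X = np.array([speckle_features(im) for im in imgs])
    clf = GaussianNB().fit(X, y)        # Bayes classifier on speckle texture
    print(clf.predict(X[:5]))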

Relevance:

60.00%

Publisher:

Abstract:

Many learning problems require handling high-dimensional datasets with a relatively small number of instances. Learning algorithms are thus confronted with the curse of dimensionality, and need to address it in order to be effective. Examples of these types of data include the bag-of-words representation in text classification problems and gene expression data for tumor detection/classification. Usually, among the high number of features characterizing the instances, many may be irrelevant (or even detrimental) for the learning tasks. It is thus clear that there is a need for adequate techniques for feature representation, reduction, and selection, to improve classification accuracy and reduce memory requirements. In this paper, we propose combined unsupervised feature discretization and feature selection techniques, suitable for medium- and high-dimensional datasets. The experimental results on several standard datasets, with both sparse and dense features, show the efficiency of the proposed techniques as well as improvements over previous related techniques.
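
One plausible instantiation of combined unsupervised discretization and feature selection is sketched below; the specific discretizer, selector, and parameters are assumptions, not the paper's exact method.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.preprocessing import KBinsDiscretizer

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2000))     # few instances, many features
    y = rng.integers(0, 2, size=100)

    # Unsupervised equal-frequency discretization of every feature.
    disc = KBinsDiscretizer(n_bins=8, encode="ordinal", strategy="quantile")
    Xd = disc.fit_transform(X)

    # Filter-based selection of the most informative discretized features.
    sel = SelectKBest(mutual_info_classif, k=100).fit(Xd, y)
    X_reduced = sel.transform(Xd)
    print(X_reduced.shape)               # (100, 100)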

Relevance:

60.00%

Publisher:

Abstract:

Master's degree in Medical Computation and Instrumentation

Relevance:

60.00%

Publisher:

Abstract:

Work presented within the scope of the Master's in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering

Relevance:

60.00%

Publisher:

Abstract:

Amyotrophic Lateral Sclerosis (ALS) is a neurodegenerative disease characterized by the degeneration of motor neurons, which reduces muscular force and makes the disease very difficult to diagnose. Mathematical methods are used to analyze the dynamic behavior of the surface electromyographic signal (Fractal Dimension (FD) and Multiscale Entropy (MSE)), to evaluate the synchronization of different muscle groups (Coherence and Phase Locking Factor (PLF)), and to evaluate the signal's complexity (Lempel-Ziv (LZ) techniques and Detrended Fluctuation Analysis (DFA)). Surface electromyographic signals were acquired from upper limb muscles, with the analysis performed on instants of contraction from ipsilateral acquisitions for the patient and control groups. Results from the LZ, DFA, and MSE analyses show the capability to distinguish between the patient group and the control group, whereas the coherence, PLF, and FD algorithms yield very similar results for both groups. The LZ, DFA, and MSE algorithms thus appear to be a good measure of corticospinal pathway integrity. A classification algorithm was applied to these results in combination with features extracted from the surface electromyographic signal, reaching an accuracy higher than 70% for 118 combinations for at least one classifier. The classification results demonstrate the capability to distinguish members of the patient and control groups. These results can be of major importance for the diagnosis of the disease, since surface electromyography (sEMG) may be used as an auxiliary diagnostic method.
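
Of the complexity measures above, Lempel-Ziv complexity is the simplest to sketch. The phrase-counting variant and median-threshold binarization below are common choices rather than necessarily the paper's; the signal is a synthetic stand-in for an sEMG epoch.

    import numpy as np

    def lz_complexity(bits):
        """Count distinct phrases in a binary sequence (LZ-style parsing)."""
        s = "".join(map(str, bits))
        phrases, i, k = set(), 0, 1
        while i + k <= len(s):
            w = s[i:i + k]
            if w in phrases:        # phrase seen before: extend it
                k += 1
            else:                   # new phrase: record it and restart
                phrases.add(w)
                i += k
                k = 1
        return len(phrases)

    rng = np.random.default_rng(0)
    emg = rng.normal(size=2000)                   # toy sEMG epoch
    bits = (emg > np.median(emg)).astype(int)     # median-threshold binarization
    print(lz_complexity(bits))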

Relevance:

60.00%

Publisher:

Abstract:

Pressures on the Brazilian Amazon forest have been accentuated by the agricultural activities of families encouraged to settle in this region in the 1970s by the government's colonization program. The aims of this study were to analyze the temporal and spatial evolution of land cover and land use (LCLU) in the lower Tapajós region, in the state of Pará. We contrast 11 watersheds that are generally representative of the colonization dynamics in the region. For this purpose, Landsat satellite images from three different years, 1986, 2001, and 2009, were analyzed with Geographic Information Systems. Individual images were subject to an unsupervised classification using the Maximum Likelihood Classification algorithm available in GRASS. The classes retained for the representation of LCLU in this study were: (1) slightly altered old-growth forest, (2) succession forest, (3) crop land and pasture, and (4) bare soil. The analysis and observation of general trends in the eleven watersheds show that LCLU is changing very rapidly. The average deforestation of old-growth forest across all the watersheds was estimated at more than 30% for the period from 1986 to 2009. The local-scale analysis of watersheds reveals the complexity of LCLU, notably in relation to large changes in the temporal and spatial evolution of watersheds. Proximity to the sprawling city of Itaituba is related to the highest rate of deforestation in two watersheds. The opening of roads such as the Transamazonian highway is associated with the second highest rate of deforestation in three watersheds.
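
A per-pixel Gaussian maximum-likelihood classifier of the kind used for such LCLU maps can be sketched as follows; in GRASS the class statistics would typically come from i.cluster signatures rather than the synthetic "signature" pixels used here, and all band values are made up.

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)
    n_classes, n_bands = 4, 6
    # Synthetic signature pixels for each LCLU class.
    train = [rng.normal(loc=10 * c, scale=3.0, size=(200, n_bands))
             for c in range(n_classes)]

    # One Gaussian (mean vector, covariance matrix) per class.
    models = [multivariate_normal(t.mean(0), np.cov(t.T)) for t in train]

    pixels = rng.normal(loc=20, scale=10, size=(1000, n_bands))
    loglik = np.stack([m.logpdf(pixels) for m in models], axis=1)
    labels = loglik.argmax(axis=1)       # most likely class per pixel
    print(np.bincount(labels))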

Relevance:

60.00%

Publisher:

Abstract:

We propose and validate a multivariate classification algorithm for characterizing changes in human intracranial electroencephalographic data (iEEG) after learning motor sequences. The algorithm is based on a Hidden Markov Model (HMM) that captures spatio-temporal properties of the iEEG at the level of single trials. Continuous iEEG was acquired during two sessions (one before and one after a night of sleep) in two patients with depth electrodes implanted in several brain areas. They performed a visuomotor sequence (serial reaction time task, SRTT) using the fingers of their non-dominant hand. Our results show that the decoding algorithm correctly classified single iEEG trials from the trained sequence as belonging to either the initial training phase (day 1, before sleep) or a later consolidated phase (day 2, after sleep), whereas it failed to do so for trials belonging to a control condition (a pseudo-random sequence). Accurate single-trial classification was achieved by taking advantage of the distributed pattern of neural activity. However, across all the contacts, the hippocampus contributed most significantly to the classification accuracy for both patients, as did one fronto-striatal contact for one patient. Together, these human intracranial findings demonstrate that a multivariate decoding approach can detect learning-related changes at the level of single-trial iEEG. Because it allows an unbiased identification of the brain sites contributing to a behavioral effect (or experimental condition) at the single-subject level, this approach could be usefully applied to assess the neural correlates of other complex cognitive functions in patients implanted with multiple electrodes.
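
A minimal sketch of HMM-based single-trial classification in the spirit of the approach above: fit one Gaussian HMM per session and label a held-out trial by the higher log-likelihood. It uses the hmmlearn package; trial counts, dimensions, and signals are synthetic assumptions.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(0)
    T, C = 100, 8                       # time samples x iEEG contacts
    day1 = [rng.normal(0.0, 1.0, size=(T, C)) for _ in range(30)]
    day2 = [rng.normal(0.3, 1.0, size=(T, C)) for _ in range(30)]

    def fit_hmm(trials):
        X = np.vstack(trials)                  # concatenated trials
        lengths = [len(t) for t in trials]     # trial boundaries for EM
        return GaussianHMM(n_components=3, covariance_type="diag",
                           n_iter=50, random_state=0).fit(X, lengths)

    m1, m2 = fit_hmm(day1), fit_hmm(day2)
    trial = rng.normal(0.3, 1.0, size=(T, C))  # unseen single trial
    pred = "day2" if m2.score(trial) > m1.score(trial) else "day1"
    print(pred)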

Relevance:

60.00%

Publisher:

Abstract:

Behavioral and brain responses to identical stimuli can vary with experimental and task parameters, including the context of stimulus presentation or attention. More surprisingly, computational models suggest that noise-related random fluctuations in brain responses to stimuli would alone be sufficient to engender perceptual differences between physically identical stimuli. In two experiments combining psychophysics and EEG in healthy humans, we investigated brain mechanisms whereby identical stimuli are (erroneously) perceived as different (higher vs lower in pitch or longer vs shorter in duration) in the absence of any change in the experimental context. Even though, as expected, participants' percepts to identical stimuli varied randomly, a classification algorithm based on a mixture of Gaussians model (GMM) showed that there was sufficient information in single-trial EEG to reliably predict participants' judgments of the stimulus dimension. By contrasting electrical neuroimaging analyses of auditory evoked potentials (AEPs) to the identical stimuli as a function of participants' percepts, we identified the precise timing and neural correlates (strength vs topographic modulations) as well as intracranial sources of these erroneous perceptions. In both experiments, AEP differences first occurred ∼100 ms after stimulus onset and were the result of topographic modulations following from changes in the configuration of active brain networks. Source estimations localized the origin of variations in perceived pitch of identical stimuli within right temporal and left frontal areas and of variations in perceived duration within right temporoparietal areas. We discuss our results in terms of providing neurophysiologic evidence for the contribution of random fluctuations in brain activity to conscious perception.
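
The GMM-based decoding step might look like the following sketch: fit one Gaussian mixture per reported percept on single-trial EEG features and classify new trials by comparing likelihoods. Feature dimensions, trial counts, and mixture sizes are placeholders, not the study's settings.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    X_hi = rng.normal(0.2, 1.0, size=(120, 20))   # trials judged "higher"
    X_lo = rng.normal(-0.2, 1.0, size=(120, 20))  # trials judged "lower"

    # One mixture model per percept class.
    gmm_hi = GaussianMixture(n_components=2, random_state=0).fit(X_hi)
    gmm_lo = GaussianMixture(n_components=2, random_state=0).fit(X_lo)

    new = rng.normal(0.2, 1.0, size=(5, 20))      # unseen single trials
    pred = np.where(gmm_hi.score_samples(new) > gmm_lo.score_samples(new),
                    "higher", "lower")
    print(pred)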

Relevance:

60.00%

Publisher:

Abstract:

Counterfeit pharmaceutical products have become a widespread problem in the last decade. Various analytical techniques have been applied to discriminate between genuine and counterfeit products. Among these, near-infrared (NIR) and Raman spectroscopy have provided promising results. The present study offers a methodology that provides more valuable information for organisations engaged in the fight against the counterfeiting of medicines. A database was established by analyzing counterfeits of a particular pharmaceutical product using NIR and Raman spectroscopy. Unsupervised chemometric techniques (i.e. principal component analysis, PCA, and hierarchical cluster analysis, HCA) were implemented to identify the classes within the datasets. Gas chromatography coupled to mass spectrometry (GC-MS) and Fourier transform infrared spectroscopy (FT-IR) were used to determine the number of different chemical profiles among the counterfeits. A comparison with the classes established by NIR and Raman spectroscopy made it possible to evaluate the discriminating power provided by these techniques. Supervised classifiers (i.e. k-Nearest Neighbors, Partial Least Squares Discriminant Analysis, Probabilistic Neural Networks, and Counterpropagation Artificial Neural Networks) were applied to the acquired NIR and Raman spectra, and the results were compared to those provided by the unsupervised classifiers. The retained strategy for routine applications, founded on the classes identified by NIR and Raman spectroscopy, uses a classification algorithm based on distance measures and Receiver Operating Characteristic (ROC) curves. The model is able to compare the spectrum of a new counterfeit with those of previously analyzed products and to determine whether a new specimen belongs to one of the existing classes, consequently making it possible to establish a link with other counterfeits in the database.
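
The unsupervised step (PCA followed by HCA) can be sketched as below; the spectra, component counts, and number of classes are illustrative assumptions.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    spectra = rng.normal(size=(50, 700))    # 50 specimens x 700 wavenumbers

    # Compress spectra to a few principal components, then cluster.
    scores = PCA(n_components=5).fit_transform(spectra)
    Z = linkage(scores, method="ward")      # HCA on the PCA scores
    classes = fcluster(Z, t=4, criterion="maxclust")
    print(classes)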

Relevance:

60.00%

Publisher:

Abstract:

Several features that can be extracted from digital images of the sky, and that can be useful for cloud-type classification of such images, are presented. Some features are statistical measurements of image texture, some are based on the Fourier transform of the image, and others are computed from the image after cloudy pixels have been distinguished from clear-sky pixels. The use of the most suitable features in an automatic classification algorithm is also shown and discussed. Both the features and the classifier were developed on images taken by two different camera devices, namely a total sky imager (TSI) and a whole sky camera (WSC), which are placed in two different areas of the world (Toowoomba, Australia, and Girona, Spain, respectively). The performance of the classifier is assessed by comparing its image classification with an a priori classification carried out by visual inspection of more than 200 images from each camera. The index of agreement is 76% when five different sky conditions are considered: clear, low cumuliform clouds, stratiform clouds (overcast), cirriform clouds, and mottled clouds (altocumulus, cirrocumulus). A discussion of the future directions of this research is also presented, regarding both the use of other features and the use of other classification techniques.
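
As a sketch of the feature side, the snippet below computes simple first-order texture statistics from a sky image and feeds them to a k-NN classifier; the feature set, classifier choice, and data are illustrative, not the paper's exact choices.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def texture_features(img):
        # First-order statistics plus grey-level histogram entropy.
        hist, _ = np.histogram(img, bins=32, range=(0.0, 1.0))
        p = hist / hist.sum()
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        return [img.mean(), img.std(), entropy]

    rng = np.random.default_rng(0)
    images = rng.random(size=(100, 64, 64))   # toy grey-level sky images
    labels = rng.integers(0, 5, size=100)     # 5 sky-condition classes

    X = np.array([texture_features(im) for im in images])
    clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
    print(clf.predict(X[:3]))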