867 results for Cascaded classifier
Abstract:
This research aimed to develop a fuzzy-inference-based expert system to help prevent lameness in dairy cattle. Hoof length, nutritional parameters, and floor material properties (roughness) were used to build the fuzzy inference system, and the expert system architecture was defined using the Unified Modelling Language (UML). Data were collected from two subgroups (H-1 and H-2) of a commercial dairy herd in order to validate the fuzzy inference functions. The numbers of True Positive (TP), False Positive (FP), True Negative (TN), and False Negative (FN) responses, obtained by comparison against an established gold standard, were used to build the classifier. A Lesion Incidence Possibility (LIP) function was developed to indicate the chance of a cow becoming lame. The observed lameness percentages in H-1 and H-2 were 8.40% and 1.77%, respectively, while the system estimated a LIP of 5.00% and 2.00%. The simulation differed from the real lameness data by 3.40% for H-1 and by 0.23% for H-2, indicating the system's efficiency in decision-making.
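As an illustrative sketch only (not the study's implementation): the validation step described above, which counts TP, FP, TN, and FN responses against a gold standard, amounts to computing standard classifier summary measures. The function name and the example counts below are assumptions, not values from the dairy herd data.

def classifier_summary(tp, fp, tn, fn):
    # Sensitivity, specificity, and accuracy from the confusion counts
    # mentioned in the abstract; guards avoid division by zero.
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"sensitivity": sensitivity,
            "specificity": specificity,
            "accuracy": accuracy}

# Placeholder counts for illustration only.
print(classifier_summary(tp=12, fp=3, tn=180, fn=5))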
Abstract:
In this paper we address the problem of boosting the Optimum-Path Forest (OPF) clustering approach with evolutionary-based optimization techniques. Since the OPF classifier performs an exhaustive search to find the neighborhood size that yields the minimum graph cut as a quality measure, we compared several optimization techniques that can obtain graph-cut values close to those found by brute force. Experiments on two public datasets in the context of unsupervised network intrusion detection showed that the evolutionary optimization techniques can find suitable neighborhood values faster than the exhaustive search. Additionally, we showed that it is not necessary to employ many agents for this task, since the neighborhood size takes discrete values, which constrains the set of possible solutions to a few candidates.
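A minimal sketch of the search being compared above, assuming a placeholder graph_cut function in place of OPF's normalized cut (which would require clustering the actual data) and a simple random search standing in for the evolutionary optimizers; all names and values here are illustrative.

import random

def graph_cut(k):
    # Placeholder for the graph-cut quality measure obtained by OPF clustering
    # with a k-nearest-neighbour graph; a toy convex surrogate for illustration.
    return (k - 7) ** 2 + 1.0

def exhaustive_search(k_max):
    # Brute force: evaluate every discrete neighbourhood size once.
    return min(range(1, k_max + 1), key=graph_cut)

def random_search(k_max, n_agents=3, n_iters=5, seed=0):
    # Stand-in for the evolutionary optimizers: a few agents sample the small
    # discrete search space and keep the best cut seen so far.
    rng = random.Random(seed)
    best_k = rng.randint(1, k_max)
    best_cut = graph_cut(best_k)
    for _ in range(n_iters * n_agents):
        k = rng.randint(1, k_max)
        cut = graph_cut(k)
        if cut < best_cut:
            best_k, best_cut = k, cut
    return best_k

print(exhaustive_search(20), random_search(20))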
Abstract:
The aim of this work is to discriminate vegetation classes in remote sensing images from the CBERS-2 satellite for the winter and summer seasons in the Campos Gerais region, Paraná State, Brazil. The vegetation cover of the region comprises different types: summer and winter crops, reforestation areas, natural areas, and pasture. Supervised classification techniques, namely the Maximum Likelihood Classifier (MLC) and a Decision Tree, were evaluated using a set of image attributes composed of the CCD sensor bands (1, 2, 3, 4), vegetation indices (CTVI, DVI, GEMI, NDVI, SR, SAVI, TVI), mixture models (soil, shadow, vegetation), and the first two principal components. Classification accuracy was evaluated using the classification error matrix and the kappa coefficient. A high discriminatory level was adopted when defining the classes, in order to allow separation of different kinds of winter and summer crops. For scene 157/128, the Decision Tree achieved 94.5% accuracy with a kappa coefficient of 0.9389; for scene 158/127, the values were 88% and 0.8667, respectively. The MLC achieved 84.86% accuracy with a kappa of 0.8099 for scene 157/128, and 77.90% and 0.7476 for scene 158/127. The results showed that the Decision Tree outperformed the MLC, especially for the classes related to cultivated crops, supporting the use of the Decision Tree classifier for vegetation cover mapping that includes different kinds of crops.
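The kappa coefficients reported above are derived from the classification error (confusion) matrix; below is a minimal NumPy sketch using an invented three-class matrix rather than the matrices from this study.

import numpy as np

def kappa(confusion):
    # Cohen's kappa from a square confusion matrix
    # (rows = reference classes, columns = classified classes).
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    observed = np.trace(confusion) / n
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2
    return (observed - expected) / (1.0 - expected)

# Toy 3-class matrix for illustration only (not data from the paper).
cm = [[50, 2, 1],
      [3, 45, 4],
      [0, 5, 40]]
print(round(kappa(cm), 4))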
Abstract:
Hundreds of terabytes of CMS (Compact Muon Solenoid) data are accumulated for storage every day at the University of Nebraska-Lincoln, one of the eight US CMS Tier-2 sites. Managing this data includes retaining useful CMS data sets and clearing storage space for newly arriving data by deleting less useful data sets. This important task is currently done manually and requires a large amount of time. The overall objective of this study was to develop a methodology to help identify the data sets to be deleted when storage space is required. CMS data is stored using HDFS (Hadoop Distributed File System), and HDFS logs record file access operations. Hadoop MapReduce was used to feed the information in these logs to Support Vector Machines (SVMs), a machine learning algorithm applicable to classification and regression, which is used in this thesis to develop a classifier. The time taken to classify data sets with this method depends on the size of the input HDFS log file, since the MapReduce algorithms used here have O(n) complexity. The SVM methodology produces a list of data sets for deletion along with their respective sizes. It was also compared with a heuristic called Retention Cost, calculated from the size of a data set and the time since its last access, which helps decide how useful a data set is. The accuracies of both were compared by calculating the percentage of data sets predicted for deletion that were accessed at a later time. Our SVM methodology proved to be more accurate than the Retention Cost heuristic. This methodology could be used to solve similar problems involving other large data sets.
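A hedged sketch of the two approaches being compared, assuming scikit-learn's SVC for the SVM and a simple size-times-idle-time product for the Retention Cost heuristic; the thesis's exact feature set and cost formula are not given here, and all feature values are placeholders.

import numpy as np
from sklearn.svm import SVC

# Placeholder features per data set, e.g. [size_in_GB, days_since_last_access,
# access_count]; labels: 1 = candidate for deletion, 0 = keep. Illustrative only.
X = np.array([[500, 200, 1], [120, 5, 40], [900, 400, 0], [60, 2, 80]])
y = np.array([1, 0, 1, 0])

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict([[700, 300, 2]]))   # SVM-based deletion decision

def retention_cost(size_gb, days_since_last_access):
    # Heuristic baseline: larger, long-untouched data sets cost more to retain,
    # so they are better deletion candidates (product form assumed here).
    return size_gb * days_since_last_access

print(retention_cost(700, 300))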
Abstract:
Alzheimer's disease (AD) is the most common cause of dementia in humans, characterized by a spectrum of neuropathological abnormalities that result in memory impairment and loss of other cognitive processes, as well as the presence of non-cognitive symptoms. Transcriptomic analyses provide an important approach to elucidating the pathogenesis of complex diseases like AD, helping to identify both preclinical markers of susceptible patients and early pathogenic mechanisms that can serve as therapeutic targets. This study provides the gene expression profile of postmortem brain tissue from subjects with clinicopathological AD (Braak IV, V, or VI; CERAD B or C; and CDR >= 1), preclinical AD (Braak IV, V, or VI; CERAD B or C; and CDR = 0), and healthy older individuals (Braak <= II; CERAD 0 or A; and CDR = 0), in order to establish genes related to both AD neuropathology and the clinical emergence of dementia. Based on differential gene expression, hierarchical clustering, and network analysis, genes involved in energy metabolism, oxidative stress, DNA damage/repair, senescence, and transcriptional regulation were implicated in the neuropathology of AD. A transcriptional profile related to the clinical manifestation of AD could not be reliably detected with differential gene expression analysis, although genes involved in synaptic plasticity and the cell cycle appear to play a role, as revealed by a gene classifier. In conclusion, the present data suggest gene expression changes secondary to the development of AD-related pathology, along with some genes that appear to be related to the clinical manifestation of dementia in subjects with significant AD pathology, warranting further investigation to better understand these transcriptional findings in the pathogenesis and clinical emergence of AD.
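The hierarchical clustering step mentioned above can be illustrated with SciPy on a toy expression matrix; the random data, linkage method, and distance metric below are assumptions for the sketch, not choices taken from the study.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy expression matrix: rows = genes, columns = samples (random placeholder).
rng = np.random.default_rng(0)
expression = rng.normal(size=(100, 12))

# Average-linkage clustering on correlation distance (illustrative choices).
Z = linkage(expression, method="average", metric="correlation")
clusters = fcluster(Z, t=4, criterion="maxclust")   # cut into 4 gene clusters
print(np.bincount(clusters)[1:])                    # cluster sizes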
Abstract:
Traditional supervised data classification considers only the physical features (e.g., distance or similarity) of the input data; here, this type of learning is called low level classification. The human (animal) brain, on the other hand, performs both low and high orders of learning, and it readily identifies patterns according to the semantic meaning of the input data. Data classification that considers not only physical attributes but also pattern formation is referred to here as high level classification. In this paper, we propose a hybrid classification technique that combines both types of learning. The low level term can be implemented by any classification technique, while the high level term is realized by extracting features of the underlying network constructed from the input data. Thus, the former classifies test instances by their physical features or class topologies, while the latter measures the compliance of test instances with the pattern formation of the data. Our study shows that the proposed technique not only can realize classification according to pattern formation, but is also able to improve the performance of traditional classification techniques. Furthermore, as the complexity of the class configuration increases, such as the degree of mixture among different classes, a larger portion of the high level term is required to obtain correct classification. This confirms that high level classification is especially important in complex classification situations. Finally, we show how the proposed technique can be employed in a real-world application, where it identifies variations and distortions of handwritten digit images and, as a result, improves the overall pattern recognition rate.
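A minimal sketch of the hybrid combination described above, assuming the low level term is a conventional classifier's per-class score and the high level term is a per-class network-compliance score; both score vectors are placeholders, and the mixing weight stands for the "portion of the high level term" mentioned in the abstract.

import numpy as np

def hybrid_scores(low_level_probs, high_level_scores, lam=0.3):
    # Combine per-class scores: (1 - lam) * low level + lam * high level,
    # where lam is the fraction of the decision given to pattern formation.
    low = np.asarray(low_level_probs, dtype=float)
    high = np.asarray(high_level_scores, dtype=float)
    return (1.0 - lam) * low + lam * high

# Placeholder per-class scores for one test instance over three classes.
low = [0.6, 0.3, 0.1]    # e.g. from a kNN or SVM probability estimate
high = [0.2, 0.7, 0.1]   # e.g. network-measure compliance per class
combined = hybrid_scores(low, high, lam=0.5)
print(combined.argmax())  # predicted class index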