31 results for Associative Classifiers


Relevance: 10.00%

Abstract:

Although individual supervised Machine Learning (ML) techniques, also known as classifiers or classification algorithms, provide solutions that are usually considered efficient, experimental results obtained with large pattern sets, or with data containing a significant amount of irrelevant or incomplete attributes, show a decrease in the accuracy of these techniques. In other words, such techniques cannot recognize patterns efficiently in complex problems. In order to improve the performance and efficiency of these ML techniques, the idea of making several ML algorithms work together emerged, giving rise to the term Multi-Classifier System (MCS). An MCS has different ML algorithms, called base classifiers, as its components, and combines the results obtained by these algorithms to reach the final output. For an MCS to perform better than its base classifiers, the results obtained by each base classifier must present a certain diversity, that is, a difference between the results produced by the classifiers that compose the system. It makes no sense to build an MCS whose base classifiers give identical answers to the same patterns. Although MCSs present better results than individual systems, there is a constant search for further improvement of this type of system. Aiming at this improvement, at more consistent results and at greater diversity among the classifiers of an MCS, methodologies based on weights, or confidence values, have recently been investigated. These weights describe the importance that a given classifier has when assigning a pattern to a particular class, and they are combined with the classifiers' outputs during the recognition (use) phase of the MCS. There are different ways of calculating these weights, which can be divided into two categories: static weights and dynamic weights. The first category is characterized by weights whose values do not change during the classification process, unlike the second category, in which the values are modified during classification. In this work, an analysis is carried out to verify whether the use of weights, both static and dynamic, can increase the performance of MCSs in comparison with individual systems. Moreover, the diversity obtained by the MCSs is analyzed, in order to verify whether there is any relation between the use of weights in MCSs and different levels of diversity.
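
As an illustration of how such weights can enter the combination stage, the sketch below applies a weighted vote to the outputs of hypothetical base classifiers. The static weights (e.g. validation accuracy) and the per-pattern dynamic weights (e.g. the classifier's own confidence) are assumptions made for the example, not the weighting schemes studied in the work.

```python
import numpy as np

def weighted_vote(probas, weights):
    """Combine per-classifier class-probability vectors using one weight per classifier.

    probas  : array (n_classifiers, n_classes) - output of each base classifier for one pattern
    weights : array (n_classifiers,)           - static (fixed) or dynamic (per-pattern) weights
    """
    combined = np.average(probas, axis=0, weights=weights)
    return combined.argmax()

# Hypothetical outputs of three base classifiers for a single pattern (3 classes).
probas = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.5, 0.3],
                   [0.1, 0.2, 0.7]])

static_w = np.array([0.9, 0.6, 0.5])    # e.g. validation accuracy of each classifier
dynamic_w = probas.max(axis=1)          # e.g. confidence of each classifier on this pattern

print(weighted_vote(probas, static_w))   # static combination
print(weighted_vote(probas, dynamic_w))  # dynamic combination, may pick a different class
```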

Relevance: 10.00%

Abstract:

In systems that combine the outputs of classification methods (combination systems), such as ensembles and multi-agent systems, one of the main constraints is that the base components (classifiers or agents) should be diverse among themselves. In other words, there is clearly no accuracy gain in a system composed of a set of identical base components. One way of increasing diversity is through the use of feature selection or data distribution methods in combination systems. In this work, the impact of using data distribution methods among the components of combination systems is investigated. Different data distribution methods are used, and the combination systems are analyzed under several different configurations. This analysis aims to detect which combination systems are more suitable for distributing features among their components.
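
A minimal sketch of one such data distribution strategy, assigning a random feature subset to each base classifier and combining the members by majority vote; the dataset, the base learner and the subset size are assumptions chosen only for illustration.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)

# Give each base classifier its own random feature subset (vertical data distribution).
n_members, subset_size = 3, 2
subsets = [rng.choice(X.shape[1], size=subset_size, replace=False) for _ in range(n_members)]
members = [DecisionTreeClassifier(random_state=i).fit(X[:, s], y) for i, s in enumerate(subsets)]

# Combine the members by majority vote over their predictions.
votes = np.array([m.predict(X[:, s]) for m, s in zip(members, subsets)])
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print((majority == y).mean())
```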

Relevance: 10.00%

Abstract:

RePART (Reward/Punishment ART) is a neural model that constitutes a variation of the Fuzzy ARTMAP model. This network was proposed in order to minimize problems inherent to ARTMAP-based models, such as category proliferation and misclassification. RePART makes use of additional mechanisms, such as an instance counting parameter, a reward/punishment process and a variable vigilance parameter. The instance counting parameter aims to minimize the misclassification problem, which is a consequence of the sensitivity to noise frequently present in ARTMAP-based models. The variable vigilance parameter, in turn, tries to smooth out the category proliferation problem, which is inherent to ARTMAP-based models, decreasing the complexity of the network. RePART was originally proposed in order to minimize the aforementioned problems and was shown to perform better (higher accuracy and lower complexity) than ARTMAP-based models. This work investigates the performance of the RePART model in classifier ensembles. Different sizes, learning strategies and structures are used in this investigation, aiming to identify the main advantages and drawbacks of this model when used as a component in classifier ensembles. This can provide a broader foundation for the use of RePART in other pattern recognition applications.
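
To give an idea of the vigilance mechanism mentioned above, the sketch below shows a simplified ART-style category match test. This is not the RePART algorithm itself; the input, weight vector and vigilance values are assumptions, and complement coding and the full learning loop are omitted. It only illustrates how a variable (rather than fixed) vigilance parameter could relax or tighten the match criterion and thus influence category proliferation.

```python
import numpy as np

def fuzzy_and(x, w):
    return np.minimum(x, w)

def passes_vigilance(x, w, rho):
    """ART-style match test: category w is accepted for input x only if
    the match |x AND w| / |x| reaches the vigilance threshold rho."""
    return fuzzy_and(x, w).sum() / x.sum() >= rho

x = np.array([0.7, 0.2, 0.9])   # input pattern (complement coding omitted for brevity)
w = np.array([0.6, 0.3, 0.8])   # weight vector of one existing category

# A variable vigilance could, for instance, assign larger categories a lower rho,
# accepting more patterns into them and so limiting the creation of new categories.
for rho in (0.6, 0.8, 0.95):
    print(rho, passes_vigilance(x, w, rho))
```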

Relevance: 10.00%

Abstract:

Multi-classifier systems, also known as ensembles, have been widely used to solve several problems, because they often perform better than the individual classifiers that form them. For this to happen, however, the base classifiers must be as accurate as they are diverse among themselves; this is known as the diversity/accuracy dilemma. Given its importance, some works have investigated the behaviour of ensembles in the context of this dilemma. However, most of them address homogeneous ensembles, i.e., ensembles composed of only one type of classifier. Motivated by this limitation, this thesis uses genetic algorithms to perform a detailed study of the diversity/accuracy dilemma for heterogeneous ensembles.
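
One common way to quantify the diversity side of this dilemma is the pairwise disagreement measure; the sketch below computes it together with the members' accuracies, the two quantities a search procedure such as a genetic algorithm could trade off. The predictions are invented for the example, and disagreement is only one of several diversity measures used in the literature.

```python
import numpy as np
from itertools import combinations

def disagreement(pred_a, pred_b):
    """Fraction of examples on which two classifiers give different labels."""
    return np.mean(pred_a != pred_b)

# Hypothetical predictions of three ensemble members on six examples.
preds = np.array([[0, 1, 1, 0, 2, 2],
                  [0, 1, 0, 0, 2, 1],
                  [1, 1, 1, 0, 0, 2]])
y_true = np.array([0, 1, 1, 0, 2, 2])

accuracies = (preds == y_true).mean(axis=1)
pairwise = [disagreement(preds[i], preds[j]) for i, j in combinations(range(len(preds)), 2)]

print("mean accuracy :", accuracies.mean())
print("mean diversity:", np.mean(pairwise))   # a GA fitness could combine both terms
```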

Relevance: 10.00%

Abstract:

The objective of research in artificial intelligence is to enable the computer to execute functions that are performed by humans using knowledge and reasoning. This work was developed in the area of machine learning, the branch of artificial intelligence concerned with the design and development of algorithms and techniques that allow computational learning. The objective of this work is to analyze a feature selection method for ensemble systems. The proposed method belongs to the filter approach to feature selection; it uses the variance and the Spearman correlation to rank the features, and reward and punishment strategies to measure the importance of each feature for the identification of the classes. For each ensemble, several different configurations were used, ranging from homogeneous (non-hybrid) to heterogeneous (hybrid) ensemble structures. They were submitted to five combination methods (voting, sum, weighted sum, multilayer perceptron and naïve Bayes), which were applied to six distinct databases (real and artificial). The classifiers used in the experiments were k-nearest neighbours, multilayer perceptron, naïve Bayes and decision tree. Finally, the performance of the ensembles was analyzed comparatively, using no feature selection method, using a filter-based (original) feature selection method and using the proposed method. For this comparison, a statistical test was applied, which showed a significant improvement in the accuracy of the ensembles.
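
A minimal sketch of the filter-style ranking ingredients mentioned above: the variance of each feature and the Spearman correlation between each feature and the class label. The dataset is synthetic, and the way the two scores are merged into a single ranking is an assumption made for illustration, not the proposed method.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # classes depend mostly on features 0 and 2

variance = X.var(axis=0)
spearman = np.array([abs(spearmanr(X[:, j], y)[0]) for j in range(X.shape[1])])

# Illustrative combined ranking: better-ranked features have high variance and high |correlation|.
score = variance * spearman
ranking = np.argsort(score)[::-1]
print("feature ranking (best first):", ranking)
```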

Relevance: 10.00%

Abstract:

Classifier ensembles are systems composed of a set of individual classifiers and a combination module, which is responsible for providing the final output of the system. In the design of these systems, diversity is considered one of the main aspects to be taken into account, since there is no gain in combining identical classification methods. The ideal situation is a set of individual classifiers with uncorrelated errors; in other words, the individual classifiers should be diverse among themselves. One way of increasing diversity is to provide different datasets (patterns and/or attributes) to the individual classifiers. Diversity is increased because the individual classifiers perform the same task (classification of the same input patterns) but are built using different subsets of patterns and/or attributes. Most papers using feature selection for ensembles address homogeneous ensemble structures, i.e., ensembles composed of only one type of classifier. In this investigation, two genetic algorithm approaches (single- and multi-objective) are used to guide the distribution of the features among the classifiers, in the context of both homogeneous and heterogeneous ensembles. The experiments are divided into two phases that use a filter approach to feature selection guided by a genetic algorithm.
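
The sketch below shows one way a genetic algorithm can encode the distribution of features among ensemble members: each candidate solution is a binary feature mask per classifier. The fitness used here (mean cross-validated accuracy of the members) is a placeholder assumption standing in for the single- or multi-objective criteria of the work, and only the encoding, evaluation and mutation steps are shown, not a full GA loop.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X, y = load_breast_cancer(return_X_y=True)
n_members = 3

def random_individual():
    """One candidate solution: a binary feature mask for each ensemble member."""
    return rng.integers(0, 2, size=(n_members, X.shape[1])).astype(bool)

def fitness(individual):
    """Placeholder single objective: mean cross-validated accuracy of the members.
    A multi-objective GA would add, for example, a diversity term as a second objective."""
    scores = []
    for mask in individual:
        if mask.sum() == 0:
            return 0.0
        scores.append(cross_val_score(DecisionTreeClassifier(random_state=0),
                                      X[:, mask], y, cv=3).mean())
    return float(np.mean(scores))

def mutate(individual, rate=0.05):
    """Flip a small fraction of the feature-assignment bits."""
    flips = rng.random(individual.shape) < rate
    return individual ^ flips

best = max((random_individual() for _ in range(5)), key=fitness)
print("best fitness in the initial population:", fitness(best))
```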

Relevance: 10.00%

Abstract:

In everyday life we constantly perform actions, two of which are frequent and of great importance: classifying (sorting into classes) and making decisions. When we face problems with a relatively high degree of complexity, we tend to seek other opinions, usually from people who have some knowledge of, or are even experts in, the problem domain in question, in order to help us in the decision-making process. In both the classification process and the decision-making process, we are guided by the characteristics involved in the specific problem, and characterizing a set of objects is part of decision making in general. In Machine Learning, this classification is carried out by a learning algorithm and the characterization is applied to databases. Classification algorithms can be employed individually or in machine committees, and choosing the best methods for building a committee is a very arduous task. In this work, meta-learning techniques are investigated for selecting the best configuration parameters of homogeneous committees in various classification problems. These parameters are: the base classifier, the architecture and the size of this architecture. Nine types of inducers are investigated as candidate base classifiers, along with two methods for generating the architecture and nine size ranges for the architecture. Dimensionality reduction techniques were applied to the meta-bases in search of improvement, and five classification methods are investigated as meta-learners in the process of choosing the best parameters of a homogeneous committee.
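
A minimal sketch of the meta-learning idea: a meta-base maps meta-features describing each dataset to the committee configuration that worked best on it, and a meta-learner then recommends a configuration for a new dataset. The meta-features, the configuration labels and the choice of a nearest-neighbour meta-learner are all assumptions made for the example.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical meta-base: one row of meta-features per dataset
# (n_examples, n_features, n_classes, class entropy), with the configuration
# that performed best on that dataset as the meta-label.
meta_X = np.array([[150,   4,  3, 1.58],
                   [569,  30,  2, 0.95],
                   [1797, 64, 10, 3.32],
                   [208,  60,  2, 0.99]])
meta_y = np.array(["bagging+tree", "boosting+tree", "bagging+knn", "boosting+tree"])

# The meta-learner maps the meta-features of a new dataset to a recommended configuration.
meta_learner = KNeighborsClassifier(n_neighbors=1).fit(meta_X, meta_y)
print(meta_learner.predict([[300, 20, 2, 0.9]]))
```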

Relevance: 10.00%

Abstract:

Committees of classifiers can be used to improve the accuracy of classification systems; in other words, different classifiers used to solve the same problem can be combined, creating a system of greater accuracy called a committee of classifiers. For this to succeed, the classifiers must make mistakes on different objects of the problem, so that the errors of one classifier are compensated by the other, correct classifiers when the committee's combination method is applied. This characteristic of classifiers erring on different objects is called diversity. However, most diversity measures fail to capture this notion adequately. Recently, two diversity measures (good diversity and bad diversity) were proposed with the aim of helping to generate more accurate committees. This work performs an experimental analysis of these measures applied directly to the construction of committees of classifiers. The construction method adopted is modelled as a search, over the sets of characteristics of the problem's databases and over the possible sets of committee members, for the committee of classifiers that produces the most accurate classification. This problem is solved by metaheuristic optimization techniques, in their mono- and multi-objective versions. Analyses are performed to verify whether using or adding the measures of good and bad diversity to the optimization objectives creates more accurate committees. Thus, the contribution of this study is to determine whether the good and bad diversity measures can be used as objectives in mono-objective and multi-objective optimization techniques, building committees of classifiers that are more accurate than those built by the same process using only classification accuracy as the optimization objective.
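
The sketch below illustrates the usual intuition behind the good/bad diversity split for majority voting: disagreement on examples the committee gets right is "good" (the wrong individual votes were outvoted), while correct individual votes on examples the committee gets wrong are "bad" (they were wasted). The exact formulation used in the work may differ, and the predictions here are invented for the example.

```python
import numpy as np

def good_bad_diversity(preds, y_true):
    """Per-example split of the members' disagreement:
    - good diversity: fraction of wrong individual votes on examples the majority gets right
    - bad diversity : fraction of correct individual votes on examples the majority gets wrong
    With these definitions, majority-vote error = mean individual error - good + bad."""
    correct = (preds == y_true)                                   # member-by-example correctness
    majority = np.array([np.bincount(col).argmax() for col in preds.T])
    maj_correct = (majority == y_true)
    good = np.where(maj_correct, 1 - correct.mean(axis=0), 0.0).mean()
    bad = np.where(~maj_correct, correct.mean(axis=0), 0.0).mean()
    return good, bad

# Hypothetical predictions of three committee members on six binary-labelled examples.
preds = np.array([[0, 1, 1, 0, 1, 0],
                  [0, 1, 0, 0, 0, 1],
                  [1, 1, 1, 1, 0, 0]])
y_true = np.array([0, 1, 1, 0, 0, 0])
print(good_bad_diversity(preds, y_true))   # candidate terms for the optimization objectives
```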

Relevance: 10.00%

Abstract:

Remote sensing is a technology of extreme importance, allowing the capture of data from the Earth's surface for various purposes, including environmental monitoring, tracking the use of natural resources, geological prospecting and disaster monitoring. One of its main applications is the generation of thematic maps and the subsequent survey of areas from images generated by orbital or sub-orbital sensors. Pattern classification methods are used in computational routines that automate this activity. Artificial neural networks are viable alternatives to traditional statistical classifiers, mainly for applications whose data show high dimensionality, such as those from hyperspectral sensors. The main goal of this work is to develop a classifier based on radial basis function neural networks and Growing Neural Gas, which presents some advantages over using the individual neural networks. The main idea is to use the incremental characteristics of Growing Neural Gas to determine the number and position of the radial basis function network's centers, in order to obtain a highly effective classifier. To demonstrate the performance of the classifier, three case studies are presented along with their results.
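
A minimal sketch of the general idea of deriving RBF centers from a prototype-generating algorithm: here plain k-means stands in for Growing Neural Gas (an assumption made purely for brevity), and the output layer is fitted by least squares on a small benchmark dataset rather than on remote-sensing imagery.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

# Prototype stage: k-means stands in for Growing Neural Gas as the source of the centers.
centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
sigma = np.mean([np.linalg.norm(c1 - c2) for c1 in centers for c2 in centers]) / 2

def rbf_layer(data):
    """Gaussian activations of every input with respect to every center."""
    d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

# Output layer: linear weights fitted by least squares against one-hot targets.
targets = np.eye(3)[y]
W, *_ = np.linalg.lstsq(rbf_layer(X), targets, rcond=None)
pred = rbf_layer(X) @ W
print("training accuracy:", (pred.argmax(axis=1) == y).mean())
```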

Relevance: 10.00%

Abstract:

In this work, we propose a two-stage algorithm for real-time fault detection and identification in industrial plants. Our proposal is based on the analysis of selected features using recursive density estimation and a new evolving classifier algorithm. More specifically, the proposed approach for the detection stage is based on the concept of density in the data space, which is not the same as a probability density function but is a very useful measure for abnormality/outlier detection. This density can be expressed by a Cauchy function and calculated recursively, which makes it memory and computationally efficient and, therefore, suitable for on-line applications. The identification/diagnosis stage is based on a self-developing (evolving) fuzzy rule-based classifier system proposed in this work, called AutoClass. An important property of AutoClass is that it can start learning from scratch: not only do the fuzzy rules not need to be prespecified, but neither does the number of classes (the number may grow, with new class labels being added by the on-line learning process), in a fully unsupervised manner. If an initial rule base exists, AutoClass can evolve and develop it further based on newly arrived faulty-state data. In order to validate our proposal, we present experimental results from a didactic level-control process, where control and error signals are used as features for the fault detection and identification systems; the approach is generic, however, and the number of features can be large thanks to the computationally lean methodology, since covariance or more complex calculations, as well as storage of old data, are not required. The obtained results are significantly better than those of the traditional approaches used for comparison.
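
A sketch of recursive density estimation with a Cauchy-type kernel, following the commonly published recursive form (only the running mean and the running mean of squared norms are stored, no covariance or past samples). The data stream and the detection threshold are assumptions for the example, and the full detection logic of the paper is not reproduced.

```python
import numpy as np

class RecursiveDensity:
    """Recursive density estimation with a Cauchy-type kernel (a sketch of the
    commonly published recursive form; no covariance or data storage is needed)."""
    def __init__(self, n_features):
        self.k = 0
        self.mean = np.zeros(n_features)   # recursive mean of the samples
        self.scalar = 0.0                  # recursive mean of the squared norms

    def update(self, x):
        self.k += 1
        self.mean += (x - self.mean) / self.k
        self.scalar += (x @ x - self.scalar) / self.k
        # Cauchy-type density of the current sample in the data space.
        return 1.0 / (1.0 + np.sum((x - self.mean) ** 2)
                          + self.scalar - self.mean @ self.mean)

rde = RecursiveDensity(2)
stream = [np.array([0.0, 0.1]), np.array([0.1, 0.0]), np.array([0.05, 0.08]),
          np.array([3.0, 2.5])]            # the last sample plays the role of a fault
for x in stream:
    d = rde.update(x)
    print(d, "<- possible fault" if d < 0.5 else "")   # 0.5 is an illustrative threshold
```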

Relevance: 10.00%

Abstract:

This work discusses the application of ensemble techniques to the development of multimodal recognition systems based on revocable (cancelable) biometrics. Biometric systems are the future of identification and user access control, and proof of this is the constant increase of such systems in today's society. However, much progress is still needed, mainly with regard to the accuracy, security and processing time of such systems. In the search for more efficient techniques, multimodal systems and the use of revocable biometrics are promising, and can address many of the problems involved in traditional biometric recognition. A multimodal system is characterized by combining different biometric techniques and thereby overcoming many limitations, such as failures in extracting or processing the data. Among the various ways of building a multimodal system, the use of ensembles is particularly promising, motivated by the performance and flexibility they have demonstrated over the years in their many applications. Regarding security, one of the biggest problems is that biometric traits are permanently linked to the user and cannot be changed if compromised. This problem has been addressed by techniques known as revocable biometrics, which consist of applying a transformation to the biometric data in order to protect the unique characteristics, allowing their cancellation and replacement. In order to contribute to this important subject, this work compares the performance of individual classifiers, as well as ensembles of classifiers, on the original data and on the biometric space transformed by different functions. Another highlighted factor is the use of Genetic Algorithms (GA) in different parts of the systems, seeking to further maximize their efficiency. One of the motivations is to evaluate the gain that ensembles optimized by different GAs can bring to the data in the transformed space. Another relevant point is to generate even more efficient revocable systems by combining two or more transformation functions, demonstrating that it is possible to extract similar information by applying different transformation functions. All of this makes clear the importance of revocable biometrics, ensembles and GAs in the development of more efficient biometric systems, something increasingly important nowadays.
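
To illustrate what a revocable transformation does, the sketch below uses a user-keyed random projection as a stand-in for the transformation functions, which are not specified in the abstract; the feature vector, key values and projection size are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

def cancelable_template(feature_vector, user_key):
    """Illustrative revocable transform: a user-specific random projection.
    If the template is compromised, changing the key re-issues a new template.
    (A stand-in for the unspecified transformation functions of the work.)"""
    proj = np.random.default_rng(user_key).normal(size=(16, feature_vector.size))
    return proj @ feature_vector

biometric = rng.normal(size=64)        # hypothetical feature vector (e.g. from one modality)
template_v1 = cancelable_template(biometric, user_key=42)
template_v2 = cancelable_template(biometric, user_key=43)   # re-issued after a compromise

# The two templates come from the same trait but are not directly comparable,
# which is what makes the representation revocable.
print(np.corrcoef(template_v1, template_v2)[0, 1])
```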

Relevance: 10.00%

Abstract:

The objective is to establish a methodology for monitoring oil spills on the sea surface in the Submerged Exploration Area of the Polo Region of Guamaré, in the State of Rio Grande do Norte, using orbital Synthetic Aperture Radar (SAR) images integrated with meteo-oceanographic products. This methodology was applied in the following stages: (1) creation of a base map of the Exploration Area; (2) processing of NOAA/AVHRR and ERS-2 images for the generation of meteo-oceanographic products; (3) processing of RADARSAT-1 images for oil spill monitoring; (4) integration of RADARSAT-1 images with NOAA/AVHRR and ERS-2 image products; and (5) structuring of a database. The integration of the RADARSAT-1 image of the Potiguar Basin of 21.05.99 with the base map of the Exploration Area of the Polo Region of Guamaré, used to identify the probable sources of the oil slicks, successfully detected the probable oil slick near the outlet of the submarine outfall in the Exploration Area. To support the integration of RADARSAT-1 images with NOAA/AVHRR and ERS-2 image products, a methodology was developed for classifying the oil spills identified in RADARSAT-1 images. For this, the following unsupervised classification algorithms were tested: K-means, Fuzzy K-means and Isodata. These algorithms are part of the PCI Geomatics software, which was also used for filtering the RADARSAT-1 images. To validate the results, the oil spills submitted to unsupervised classification were compared with the results of the Semivariogram Textural Classifier (STC). This classifier was developed specifically for oil spill classification and requires the PCI software for the whole processing of the RADARSAT-1 images. Finally, the classification results were analyzed through visual analysis, calculation of size proportionality and statistical analysis. Among the three classification algorithms tested, no significant differences were observed in relation to the spills classified with the STC, in any of the analyses considered. Therefore, considering all the procedures, the described methodology can be successfully applied using the unsupervised classifiers tested, resulting in a decrease of the time needed to identify and classify oil spills, compared with the use of the STC classifier.
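
A minimal sketch of the kind of unsupervised pixel classification mentioned above, applying K-means to a synthetic single-band "backscatter" image in which a dark patch stands in for an oil slick; the real processing described in the work was done on RADARSAT-1 scenes with the PCI Geomatics software.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

# Synthetic single-band backscatter image: a dark patch stands in for an oil slick.
image = rng.normal(loc=120, scale=15, size=(64, 64))
image[20:40, 25:45] = rng.normal(loc=40, scale=10, size=(20, 20))

# Unsupervised pixel classification into two clusters (slick vs. sea).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(image.reshape(-1, 1))
labels = labels.reshape(image.shape)

# Report the darker cluster as the candidate slick and its area fraction.
slick_cluster = np.argmin([image[labels == c].mean() for c in (0, 1)])
print("slick area fraction:", (labels == slick_cluster).mean())
```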

Relevance: 10.00%

Abstract:

The work presented here aims to analyze the socio-spatial dynamics of associative supermarket chains and their importance in redefining the roles of small cities in Rio Grande do Norte. The theoretical approach prioritizes commerce as a constituent of the city, whose understanding allows us to grasp the new socio-spatial dynamics of small towns in the face of globalization and the changes it has caused in their commercial forms. In this sense, we understand that trade, as an essentially urban activity, has a very specific characteristic: its ability to transform the content and meaning of places. Another important element in the construction of this work was the context of changes in the capitalist production system, with the advent of flexible production and the determinations of the economic globalization process, which brought new ways of organizing trade. The empirical analysis of the research covers two associative supermarket chains, "Rede 10" and "Rede Seridó", bringing together the basic elements for understanding the genesis and evolution of this new organizational model of trade in the small towns of the state, as well as the main changes in this segment of commercial activity. The methodology included a literature review in books and periodicals, the collection of secondary data, mainly from SEBRAE and ABRAS, and field research in which interviews were conducted with the managers of the associative supermarket networks, the owners of the associated stores and consumers of the surveyed networks. Finally, we conclude that the formation and expansion of associative supermarket chains in the context of the small cities of Rio Grande do Norte is essentially a survival alternative for traditional small traders who, by sharing associative principles, albeit somewhat loosely, guided by the formation of cooperation networks, are able not only to stay in the market but to assert themselves as new agents in the process of capital reproduction. Thus, in the search for new spaces, particularly within small towns, the associative supermarket chains end up giving new momentum to these cities, providing different flows and interconnections with different places and giving them new content and urban roles. By assuming the condition not only of place of living but also of place of capital reproduction, small towns offer their population better conditions for making purchases, thus avoiding the forced population shifts to other urban centers to meet their consumption needs.

Relevance: 10.00%

Abstract:

BACKGROUND: Among the wide range of skills a medical doctor must display is, undoubtedly, the need for cohesive and well-grounded clinical reasoning, so that medical care can indeed be effective. It is in this respect that conceptual maps emerge: a methodological innovation that allows a comprehensive, panoramic and associative view of theoretical content, making it more practical and applicable to the reality of clinical observation. Promoting learning, providing learning resources and a feedback system between professor and students, and assessing and monitoring the performance of students during their academic training are the main features of this tool. OBJECTIVE: To assess the use of conceptual maps as a teaching-learning tool in the training of undergraduate medical students at Universidade Federal do Rio Grande do Norte (UFRN). METHODOLOGY: Interventional, randomized, cross-sectional study conducted with students from the 3rd and 5th periods of the medical course at UFRN during the second semester of 2014, totaling 86 participants, divided into two groups in each period: GI (intervention – clinical case resolution with a conceptual map) and GII (control – clinical case resolution without a conceptual map). RESULTS: The use of conceptual maps to teach liver failure syndrome resulted in a statistically significant cognitive gain for GI students from the 5th period (GI: 6.8±1.6 and 8.0±1.5, p = 0.024; GII: 7.2±2.1 and 8.0±1.7, p = 0.125, pre- and post-intervention means, respectively), a result not observed in the 3rd period (GI: 7.7±1.3 and 8.0±1.4, p = 0.501; GII: 6.7±1.8 and 7.8±1.8, p = 0.068, pre- and post-intervention means, respectively). Students in the 3rd period gave better responses to the first clinical case, with a larger number of suitable concepts and cross-links, when they used conceptual maps (GI: 91.3±13.15 and GII: 64.84±22.84, p = 0.002). Students in the 5th period exhibited better clinical reasoning and more complete responses when using the tool (p = 0.01). Most of the students were not aware of the tool (53.8% from the 3rd period and 65.3% from the 5th period). Among those who knew about conceptual maps, most (59.3%) had only used them during high school, 14.8% had never used them and only seven students (25.9%) had used them during the medical course. Analysis of the open responses obtained in the process assessment showed clear satisfaction and enthusiasm with learning about the new tool, and frequent suggestions to use it at other moments in the course. Assessment of the learning profile, using the VARK questionnaire, showed that most students from both periods exhibited a multimodal style. CONCLUSION: Despite their scant knowledge of the tool, good acceptability and understanding were observed among the study participants. The conceptual maps allowed cognitive gains, better responses and better clinical reasoning in teaching liver failure syndrome to 5th-period students.

Relevance: 10.00%

Abstract:

This thesis presents the synthesis, characterization and study of the associative behaviour in aqueous media of new responsive graft copolymers, based on carboxymethylcellulose (CMC) as the water-soluble backbone and Jeffamine® M-2070 and Jeffamine® M-600 (commercial polyetheramines) as the thermoresponsive grafts, which have high cloud-point temperatures in water. The synthesis was performed in aqueous medium, using 1-ethyl-3-(3-(dimethylamino)propyl)carbodiimide hydrochloride and N-hydroxysuccinimide as activators of the reaction between the carboxylate groups of carboxymethylcellulose and the amino groups of the polyetheramines. The grafting reaction was confirmed by infrared spectroscopy and the grafting percentage by 1H NMR. The molar mass of the polyetheramines was determined by 1H NMR, whereas the molar mass of CMC and of the graft copolymers was determined by static light scattering. The salt effect on the association behaviour of the copolymers was evaluated in different aqueous media (Milli-Q water, 0.5 M NaCl, 0.5 M K2CO3 and synthetic sea water), at different temperatures, through UV-vis spectroscopy, rheology and dynamic light scattering. None of the copolymer solutions, at 5 g/L, turned turbid in Milli-Q water when heated from 25 to 95 °C, probably because of the increase in hydrophilicity promoted by the CMC backbone. However, they became turbid in the presence of salts, due to the salting-out effect; the lowest cloud point was observed in 0.5 M K2CO3, which was attributed to the highest ionic strength in water, combined with the ability of CO3²⁻ to decrease polymer-solvent interactions. The hydrodynamic radius and apparent viscosity of the copolymers in aqueous medium changed as a function of the salts dissolved in the medium, the temperature and the copolymer composition. Thermothickening behaviour was observed in 0.5 M K2CO3 when the temperature was raised from 25 to 60 °C. This behaviour can be attributed to intermolecular associations forming a physical network, since the temperature is above the cloud point of the copolymers in this solvent.