9 results for Data mining models
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
Traditional applications of feature selection in areas such as data mining, machine learning and pattern recognition aim to improve the accuracy and reduce the computational cost of a model. This is done by removing redundant, irrelevant or noisy data, finding a representative subset of features that reduces dimensionality without loss of performance. With the development of research on ensembles of classifiers, and the observation that this type of model outperforms individual classifiers when the base classifiers are diverse, a new field of application has opened for feature selection research. In this new field, the goal is to find diverse subsets of features for building the base classifiers of ensemble systems. This work proposes an approach that maximizes ensemble diversity by selecting feature subsets with a model that is independent of the learning algorithm and has low computational cost. This is done using bio-inspired metaheuristics with filter-based evaluation criteria.
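The abstract does not specify which metaheuristic or filter criterion is used; the sketch below only illustrates the general idea, assuming a correlation-style filter score combined with a Jaccard-based diversity term between candidate feature subsets (the function names, the weighting parameter alpha and the criteria themselves are hypothetical choices, not the thesis's).

    import numpy as np

    def filter_score(X, y, subset):
        # Filter criterion: mean absolute correlation between each selected
        # feature and the target (no learning algorithm involved).
        return np.mean([abs(np.corrcoef(X[:, f], y)[0, 1]) for f in subset])

    def diversity(subset, others):
        # Diversity: average Jaccard distance to the subsets already chosen
        # for the other base classifiers.
        if not others:
            return 1.0
        s = set(subset)
        return np.mean([1 - len(s & set(o)) / len(s | set(o)) for o in others])

    def evaluate(X, y, subset, others, alpha=0.5):
        # Combined objective a metaheuristic could maximize (alpha is arbitrary).
        return alpha * filter_score(X, y, subset) + (1 - alpha) * diversity(subset, others)

A bio-inspired metaheuristic (for instance a genetic algorithm or swarm method) would then search the space of feature subsets using an evaluation of this kind as its fitness function.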
Abstract:
The genome of all organisms is subject to injuries caused by endogenous and environmental factors. If these lesions are not corrected, they can become fixed, generating mutations that can be lethal to the organism. To prevent this, there are different DNA repair mechanisms. These mechanisms are well known in bacteria, yeast and humans, but not in plants. Two plant models, Oryza sativa and Arabidopsis thaliana, have had their genomes sequenced, and as a result some DNA repair genes have been characterized. The aim of this work is to characterize two sugarcane cDNAs with homology to AP endonuclease: scARP1 and scARP3. In silico analysis was performed with these two sequences and others from plants. Domain conservation was observed in these sequences, but the cysteine at position 65, which is characteristic of the redox domain of the APE1 protein, was not well conserved in plants. The phylogenetic analysis showed two branches, one with dicot and monocot sequences and the other with only monocot sequences. Another approach to characterize these two cDNAs was to construct overexpression cassettes (in sense and antisense orientations) using the 35S promoter. These cassettes were then transferred to the binary vector pPZP211. Furthermore, a Nicotiana tabacum plant containing the overexpression cassette in antisense orientation had previously been obtained in the laboratory. This plant showed slow development and problems in setting seeds. After manual crossing, some seeds were obtained (T2) and the T2 segregation was analyzed. The third approach used in this work was to clone the promoter regions of these two cDNAs by PCR walking. The sequences obtained were analyzed using the program PLANTCARE, and some motifs possibly related to the oxidative stress response were observed in them.
Abstract:
Rising healthcare costs are a central topic for complementary health companies in Brazil. In 2011, these expenses consumed more than 80% of monthly health insurance premiums in Brazil. When administrative costs are also considered, the companies operating in this market work, on average, at the threshold between profit and loss. This work presents the results of an investigation into the healthcare costs of a health plan company in Brazil, based on the KDD process and exploratory data mining. A variety of results is presented, such as data summarization, which provides compact descriptions of the data and reveals common features and intrinsic observations. Among the key findings, it was observed that a small portion of the population is responsible for most of the demand for healthcare resources.
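As an illustration of the kind of summarization such a study relies on (the actual procedure is not given in the abstract), the sketch below computes how concentrated total cost is in the most expensive members, using a hypothetical per-member cost array and synthetic skewed data.

    import numpy as np

    def cost_concentration(member_costs, top_fraction=0.1):
        # Share of total healthcare cost attributable to the most expensive
        # `top_fraction` of members (a simple Pareto-style summary).
        costs = np.sort(np.asarray(member_costs))[::-1]   # descending order
        k = max(1, int(len(costs) * top_fraction))
        return costs[:k].sum() / costs.sum()

    # Synthetic, heavily skewed costs for illustration only.
    rng = np.random.default_rng(0)
    costs = rng.lognormal(mean=6.0, sigma=1.5, size=10_000)
    print(f"Top 10% of members account for {cost_concentration(costs):.0%} of total cost")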
Abstract:
Currently, one of the biggest challenges in data mining is performing cluster analysis on complex data. Several techniques have been proposed, but in general they only achieve good results within specific domains, and there is no consensus on the best way to group this kind of data. These techniques usually fail because of unrealistic assumptions about the true probability distribution of the data. Based on this, this thesis proposes a new measure based on the Cross Information Potential that uses representative points of the dataset and statistics extracted directly from the data to measure the interaction between groups. The proposed approach retains the advantages of this information-theoretic descriptor while overcoming the limitations imposed by its own nature. From this measure, two cost functions and three algorithms are proposed to perform cluster analysis. Because Information Theory captures the relationship between patterns regardless of assumptions about the nature of that relationship, the proposed approach achieved better performance than the main algorithms in the literature. These results hold both for synthetic data designed to test the algorithms in specific situations and for real data drawn from problems in different fields.
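For reference, the usual Parzen-window estimate of the Cross Information Potential between two sets of points, with a Gaussian kernel, can be sketched as follows; this is the textbook estimator, not the thesis's representative-point variant, and the kernel width sigma is an arbitrary choice here.

    import numpy as np

    def cross_information_potential(A, B, sigma=1.0):
        # Average Gaussian kernel value over all pairs (a, b) with a in
        # cluster A and b in cluster B (normalization constant omitted,
        # so only relative values are meaningful).
        A, B = np.atleast_2d(A), np.atleast_2d(B)
        diff = A[:, None, :] - B[None, :, :]        # pairwise differences
        sq_dist = np.sum(diff ** 2, axis=-1)
        kernel = np.exp(-sq_dist / (2 * sigma ** 2))
        return kernel.mean()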
Abstract:
The opening of the Brazilian electricity market and the competition among companies in the energy sector have increased the utilities' demand for useful information and tools to support decision making. An important source of knowledge for these utilities is the time series of energy demand. Identifying behavior patterns and describing events become important for planning, seeking improvements in service quality and financial benefits. This dissertation presents a methodology based on time series mining and representation tools to extract knowledge relating the electricity demand series of several interconnected substations of an electric utility. The method exploits the relationships of duration, coincidence and partial order among events in multidimensional time series. The knowledge is represented using the language proposed by Mörchen (2005), called Time Series Knowledge Representation (TSKR). A case study was conducted using energy demand time series from 8 substations interconnected by a ring system that feeds the metropolitan area of Goiânia-GO, provided by CELG (Companhia Energética de Goiás), the utility responsible for power distribution in the state of Goiás, Brazil. Using the proposed methodology, three levels of knowledge describing the behavior of the studied system were extracted, representing the system dynamics clearly and providing a tool to assist planning activities.
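TSKR describes interval events and their temporal relations; as a rough illustration (not the dissertation's implementation), the sketch below finds where events from two demand series coincide, assuming each event is given as a (start, end) pair.

    def coincidences(events_a, events_b):
        # Return the overlapping intervals (a TSKR-style "coincidence")
        # between two lists of (start, end) interval events.
        overlaps = []
        for a_start, a_end in events_a:
            for b_start, b_end in events_b:
                start, end = max(a_start, b_start), min(a_end, b_end)
                if start < end:                   # non-empty overlap
                    overlaps.append((start, end))
        return overlaps

    # Example: high-demand periods at two hypothetical substations (hours of day).
    print(coincidences([(8, 12), (18, 22)], [(10, 14), (19, 21)]))
    # -> [(10, 12), (19, 21)]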
Abstract:
Self-organizing maps (SOM) are artificial neural networks widely used in data mining, mainly because the fixed grid of neurons associated with the network makes them a dimensionality reduction technique. In order to properly partition and visualize the SOM network, the various methods available in the literature must be applied in a post-processing stage, which consists of inferring, through its neurons, relevant characteristics of the data set. In general, applying such processing to the network neurons, instead of the entire database, reduces the computational cost thanks to vector quantization. This work proposes a post-processing of the SOM neurons in the input and output spaces, combining visualization techniques with algorithms based on gravitational forces and on the search for the shortest path with the greatest reward. These methods take into account the connection strength between neighbouring neurons and characteristics of pattern density and distances among neurons, both associated with the position the neurons occupy in the data space after training the network. The goal is thus to define more clearly the arrangement of the clusters present in the data. Experiments were carried out to evaluate the proposed methods using various artificially generated data sets as well as real-world data sets, and the results were compared with those of a number of well-known methods from the literature.
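The abstract does not detail how connection strength is computed; the sketch below illustrates one common choice, U-matrix-style distances between neighbouring prototype vectors on the grid, with strength taken as inversely related to distance (an assumption for illustration only).

    import numpy as np

    def neighbor_connection_strength(weights, rows, cols):
        # weights: array of shape (rows*cols, dim), one prototype per grid cell.
        # Strength between grid neighbours = inverse of their distance in the
        # input space (small distance => strongly connected neurons).
        strengths = {}
        for r in range(rows):
            for c in range(cols):
                i = r * cols + c
                for dr, dc in ((0, 1), (1, 0)):   # right and down neighbours
                    rr, cc = r + dr, c + dc
                    if rr < rows and cc < cols:
                        j = rr * cols + cc
                        d = np.linalg.norm(weights[i] - weights[j])
                        strengths[(i, j)] = 1.0 / (1.0 + d)
        return strengths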
Abstract:
Clustering data is a very important task in data mining, image processing and pattern recognition problems. One of the most popular clustering algorithms is Fuzzy C-Means (FCM). This thesis proposes a new way of calculating the cluster centers in the FCM procedure, called ckMeans, which is also applied to some variants of FCM, in particular those that use other distances. The goal of this change is to reduce the number of iterations and the processing time of these algorithms without affecting the quality of the partition, and even to improve the number of correct classifications in some cases. An algorithm based on ckMeans was also developed to handle interval data with interval membership degrees. This algorithm represents the data without converting interval data into point data, as happens in other extensions of FCM that deal with interval data. To validate the proposed methodologies, the clusterings produced by the ckMeans, K-Means and FCM algorithms were compared (since the center calculation proposed here is similar to that of K-Means), considering three different distances and several well-known databases. The results of the interval ckMeans were also compared with those of other clustering algorithms on an interval database containing each month's minimum and maximum temperatures for a given year in 37 cities distributed across the continents.
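For reference, the standard FCM updates that ckMeans modifies are sketched below; this shows the textbook membership and center updates with the Euclidean distance, not the ckMeans center calculation itself, which the abstract does not detail.

    import numpy as np

    def fcm_step(X, centers, m=2.0):
        # One iteration of standard Fuzzy C-Means.
        # X: (n, d) data matrix; centers: (c, d) current cluster centers.
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        ratio = (dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1.0))
        U = 1.0 / ratio.sum(axis=2)
        # Center update: weighted mean of the data with weights u_ij^m
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        return U, centers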
Abstract:
Data clustering is applied in various fields such as data mining, image processing and pattern recognition. Clustering algorithms split a data set into clusters such that elements within the same cluster have a high degree of similarity, while elements belonging to different clusters have a high degree of dissimilarity. The Fuzzy C-Means (FCM) algorithm is one of the most used and discussed fuzzy clustering algorithms in the literature. The performance of FCM is strongly affected by the selection of the initial cluster centers, so choosing a good set of initial centers is very important. However, in FCM the initial centers are chosen randomly, which makes finding a good set difficult. This work proposes three new methods to obtain initial cluster centers deterministically for the FCM algorithm, which can also be used in variants of FCM; here, these initialization methods were applied to the ckMeans variant. With the proposed methods, the intention is to obtain a set of initial centers close to the real cluster centers, thereby reducing the number of iterations these algorithms need to converge and their processing time, without affecting the quality of the clusters, or even improving it in some cases. Accordingly, cluster validation indices were used to measure the quality of the clusters obtained by the modified FCM and ckMeans algorithms with the proposed initialization methods when applied to various data sets.
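The three proposed initialization methods are not described in the abstract; the sketch below shows one common deterministic alternative to random initialization, a farthest-point heuristic, purely as an illustration of the idea.

    import numpy as np

    def farthest_point_init(X, n_clusters):
        # A common deterministic initialization (not necessarily one of the
        # three methods proposed in the thesis): start from the point closest
        # to the data mean, then repeatedly add the point farthest from the
        # centers chosen so far.
        mean = X.mean(axis=0)
        centers = [X[np.argmin(np.linalg.norm(X - mean, axis=1))]]
        while len(centers) < n_clusters:
            d = np.min(np.linalg.norm(X[:, None, :] - np.array(centers)[None, :, :], axis=-1), axis=1)
            centers.append(X[np.argmax(d)])
        return np.array(centers)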
Abstract:
Geological and geophysical studies (resistivity, self potential and VLF) were undertaken in the Tararaca and Santa Rita farms, close to the Santo Antônio and Santa Cruz villages respectively, in eastern Rio Grande do Norte State, NE Brazil. Their aim was to characterize water accumulation structures in crystalline rocks. Based on geological and geophysical data, two models were characterized, the fracture-stream and the eluvio-alluvial trough, in part already described in the literature. In the Tararaca Farm, a water well was located in a NW-trending streamlet; surrounding outcrops display fractures with the same orientation. Apparent resistivity sections across the stream channel confirm fracturing at depth. The VLF profiles systematically display an alignment of equivalent current density anomalies coinciding with the stream. Based on such data, the classical fracture-stream model seems to be well characterized at this place. In the Santa Rita Farm, a NE-trending stream displays a metre-thick eluvio-regolith-alluvial cover. The outcropping bedrock does not present fractures parallel to the stream direction, although the latter coincides with the trend of the gneiss foliation, which dips to the south. Geophysical data confirm the absence of a fracture zone at this place, but delineate the borders of a trough-shaped structure filled with sediments (alluvium and regolith). The southern border of this structure dips more steeply than the northern one. This water accumulation structure corresponds to an alternative model to the classical fracture-stream and is named the eluvio-alluvial trough. Its local controls are drainage and relief, coupled with bedrock weathering that preferentially follows the foliation planes, generating the asymmetry of the trough.