40 results for "doubly-fed induction machine (DFIG)"


Relevance: 20.00%

Abstract:

The main objective of this study is to apply recently developed statistical-physics methods of time series analysis, particularly to electrical induction log profiles from oil wells, in order to study the petrophysical similarity of those wells in a spatial distribution. For this, we used the DFA (Detrended Fluctuation Analysis) method to determine whether this technique can be used to characterize the fields spatially. After obtaining the DFA values for all wells, we applied clustering analysis using the non-hierarchical method known as K-means. Usually based on the Euclidean distance, K-means consists in partitioning the N elements of a data matrix into k groups, so that the similarity among elements belonging to different groups is as small as possible. In order to test whether a dataset generated by the K-means method, or a randomly generated dataset, forms spatial patterns, we created the parameter Ω (index of neighborhood). High values of Ω reveal more aggregated data, while low values of Ω indicate scattered data or data without spatial correlation. We concluded that the DFA data from 54 wells are grouped and can be used to characterize the fields spatially. Applying the contour level technique, we confirmed the results obtained by K-means, confirming that DFA is effective for spatial analysis.
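
A minimal sketch of the clustering stage described above, assuming the DFA exponents and well coordinates are already available; the variable names, data, and the particular neighborhood index used here are hypothetical stand-ins, not the study's actual definitions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_wells = 54
coords = rng.uniform(0, 10, size=(n_wells, 2))           # (x, y) positions of the wells
dfa_exponents = rng.normal(0.8, 0.1, size=(n_wells, 1))  # one DFA exponent per well

# K-means (Euclidean distance) groups wells by DFA similarity.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(dfa_exponents)

# A simple neighborhood index in the spirit of the Ω parameter: the fraction
# of each well's k nearest spatial neighbors sharing its cluster label.
# High values suggest spatially aggregated clusters; low values, scattered ones.
def neighborhood_index(coords, labels, k=5):
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    agree = []
    for i in range(len(coords)):
        nearest = np.argsort(d[i])[1:k + 1]  # skip the well itself
        agree.append(np.mean(labels[nearest] == labels[i]))
    return float(np.mean(agree))

print("Omega ≈", neighborhood_index(coords, labels))
```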

Relevance: 20.00%

Abstract:

The use of maps obtained from remotely sensed orbital images submitted to digital processing has become fundamental to optimize conservation and monitoring actions for coral reefs. However, the accuracy achieved in mapping submerged areas is limited by variation of the water column, which degrades the signal received by the orbital sensor and introduces errors into the final result of the classification. The limited capacity of traditional methods based on conventional statistical techniques to solve problems related to inter-class confusion motivated the search for alternative strategies in the area of Computational Intelligence. In this work, an ensemble of classifiers was built based on the combination of Support Vector Machines and a Minimum Distance Classifier, with the objective of classifying remotely sensed images of a coral reef ecosystem. The system is composed of three stages through which progressive refinement of the classification takes place: patterns that received an ambiguous classification at a given stage of the process were re-evaluated in the subsequent stage, and an unambiguous prediction for all the data was reached through the reduction or elimination of false positives. The images were classified into five bottom types: deep water, underwater corals, inter-tidal corals, algal bottom, and sandy bottom. The highest overall accuracy (89%) was obtained with an SVM with polynomial kernel. The accuracy of the classified image was compared, through the use of an error matrix, with the results obtained by other classification methods based on a single classifier (a neural network and the k-means algorithm). Finally, the comparison of the results achieved demonstrated the potential of ensemble classifiers as a tool for classifying images of submerged areas subject to noise caused by atmospheric effects and the water column.
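
A minimal sketch of a staged ensemble in the spirit described above: patterns classified ambiguously by the SVM (small decision margin) are re-evaluated by a minimum-distance (nearest-centroid) classifier. The data, threshold, and class encoding are hypothetical placeholders, not the study's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import NearestCentroid  # minimum distance to class centroid

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 4))      # 4 spectral bands per pixel (assumed)
y_train = rng.integers(0, 5, size=200)   # 5 bottom types
X_new = rng.normal(size=(50, 4))

svm = SVC(kernel="poly", degree=3, decision_function_shape="ovr").fit(X_train, y_train)
mdc = NearestCentroid().fit(X_train, y_train)

scores = svm.decision_function(X_new)
top2 = np.sort(scores, axis=1)[:, -2:]
margin = top2[:, 1] - top2[:, 0]         # gap between the two best classes

pred = svm.predict(X_new)
ambiguous = margin < 0.1                 # stage-2 threshold (assumed)
pred[ambiguous] = mdc.predict(X_new[ambiguous])  # re-evaluate ambiguous pixels
```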

Relevance: 20.00%

Abstract:

Skin cancer is the most common of all cancers, and the increase in its incidence is due, in part, to people's behavior regarding sun exposure. In Brazil, non-melanoma skin cancer is the most frequent in most regions. Dermatoscopy and videodermatoscopy are the main types of examination for the diagnosis of dermatological diseases of the skin. The field involving the use of computational tools to support medical diagnosis of dermatological lesions is very recent, and several methods have been proposed for the automatic classification of skin pathologies using images. The present work presents a new intelligent methodology for the analysis and classification of skin cancer images, based on digital image processing techniques for the extraction of color, shape, and texture features, using the Wavelet Packet Transform (WPT) and the learning technique called Support Vector Machine (SVM). The Wavelet Packet Transform is applied for the extraction of texture features from the images; it consists of a set of basis functions that represent the image in different frequency bands, each with a distinct resolution corresponding to each scale. Moreover, the color characteristics of the lesion, which depend on a visual context influenced by the surrounding colors, are also computed, and shape attributes are obtained through Fourier descriptors. The Support Vector Machine, based on the principle of structural risk minimization from statistical learning theory, is used for the classification task. The SVM constructs optimal hyperplanes that represent the separation between classes; the generated hyperplane is determined by a subset of the training patterns, called support vectors. For the database used in this work, the results revealed good performance, with a global accuracy of 92.73% for melanoma and 86% for non-melanoma and benign lesions. The extracted descriptors, together with the SVM classifier, constitute a method capable of recognizing and classifying the analyzed skin lesions.
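
A minimal sketch of WPT-based texture features feeding an SVM, assuming grayscale image patches; the data and parameters are hypothetical, and the study's color and shape descriptors are omitted for brevity.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wpt_energy_features(img, wavelet="db2", level=2):
    """Energy of each wavelet-packet sub-band: a compact texture signature."""
    wp = pywt.WaveletPacket2D(data=img, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    return np.array([np.sum(node.data ** 2) for node in nodes])

rng = np.random.default_rng(2)
patches = rng.normal(size=(60, 64, 64))   # 60 hypothetical lesion patches
labels = rng.integers(0, 2, size=60)      # 0 = benign, 1 = melanoma

X = np.array([wpt_energy_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```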

Relevance: 20.00%

Abstract:

Several lines of research show that sleep favors memory consolidation and learning. It has been proposed that the cognitive role of sleep derives from a global scaling of synaptic weights, able to homeostatically restore the ability to learn new things by erasing memories overnight. This phenomenon is typical of slow-wave sleep (SWS) and is characterized by non-Hebbian mechanisms, i.e., mechanisms independent of synchronous neuronal activity. Another view holds that sleep also triggers the specific enhancement of synaptic connections, carrying out the embossing of certain mnemonic traces within a lattice of synaptic weights rescaled each night. Such embossing is understood as the combination of Hebbian and non-Hebbian mechanisms, capable of respectively increasing and decreasing synaptic weights in complementary circuits, leading to selective memory improvement and to a restructuring of the synaptic configuration (SC) that can be crucial for the generation of new behaviors (insights). Empirical findings indicate that the initiation of Hebbian plasticity during sleep occurs at the transition from SWS to the rapid eye movement (REM) stage, possibly due to the significant differences between the firing-rate regimes of these stages and the up-regulation of factors involved in long-term synaptic plasticity. In this study, the homeostasis and embossing theories were compared using an artificial neural network (ANN) fed with action potentials recorded in the hippocampus of rats during the sleep-wake cycle. In the simulation in which the ANN did not apply long-term plasticity mechanisms during sleep (SWS-REM transition), the synaptic weight distribution was inexorably rescaled toward a mean value proportional to the input firing rate, erasing the synaptic weight pattern that had been established initially. In contrast, when long-term plasticity was modeled during the SWS-REM transition, an increase of synaptic weights was observed in the range of initial/low values, effectively redistributing the weights so as to reinforce a subset of synapses over time. The results suggest that a positive regulation coming from long-term plasticity can completely change the role of sleep: its absence leads to forgetting; its presence leads to a positive mnemonic change.
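
A minimal sketch contrasting the two hypotheses above on a toy weight vector: pure homeostatic rescaling versus rescaling plus a Hebbian (rate-dependent) boost at the simulated SWS-REM transition. All rates, constants, and update rules are illustrative assumptions, not the study's actual model.

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.uniform(0.1, 1.0, size=100)      # initial synaptic weights (a stored pattern)
rates = rng.poisson(5, size=100) / 5.0   # normalized presynaptic firing rates

def homeostatic_night(w, rates, target=0.5):
    """Non-Hebbian global scaling: pulls all weights toward the same mean,
    proportional to input drive; the initial pattern is washed out."""
    return 0.5 * w + 0.5 * target * rates.mean()

def embossing_night(w, rates, target=0.5, lr=0.3):
    """Rescaling plus Hebbian LTP at the SWS-REM transition: strongly driven
    synapses are selectively reinforced."""
    w = homeostatic_night(w, rates, target)
    return w + lr * rates * w            # rate-dependent potentiation

w_h, w_e = w.copy(), w.copy()
for night in range(50):
    w_h = homeostatic_night(w_h, rates)
    w_e = np.clip(embossing_night(w_e, rates), 0, 2)

print("homeostasis only: weight spread =", round(w_h.std(), 4))  # pattern erased
print("with embossing:   weight spread =", round(w_e.std(), 4))  # subset reinforced
```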

Relevance: 20.00%

Abstract:

Support Vector Machines (SVMs) have attracted increasing attention in the machine learning area, particularly for classification and pattern recognition. However, in some cases it is not easy to determine accurately the class to which a given pattern belongs. This thesis involves the construction of an interval pattern classifier using SVM in association with interval theory, in order to model the separation of a pattern set into distinct classes with precision, aiming to obtain an optimized separation capable of treating the imprecision contained in the initial data and generated during computational processing. The SVM is a linear machine; in order to allow it to solve real-world problems (usually nonlinear), it is necessary to transform the pattern set, known as the input set, so that the nonlinear problem becomes a linear one. Kernel machines are responsible for this mapping. To create the interval extension of the SVM, for both linear and nonlinear problems, it was necessary to define interval kernels and to extend Mercer's theorem (which characterizes kernel functions) to interval functions.
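
A minimal sketch of one natural way to evaluate an RBF kernel over interval-valued inputs, using endpoint arithmetic on the squared distance. It illustrates the idea of an interval kernel only; it is not necessarily the thesis's own construction.

```python
import numpy as np

def interval_sq_dist(lo1, hi1, lo2, hi2):
    """Tight interval for the squared distance between x in [lo1, hi1] and
    y in [lo2, hi2], componentwise, summed over features."""
    d_lo = np.maximum(0.0, np.maximum(lo1 - hi2, lo2 - hi1))  # closest approach
    d_hi = np.maximum(np.abs(hi1 - lo2), np.abs(hi2 - lo1))   # farthest apart
    return np.sum(d_lo ** 2), np.sum(d_hi ** 2)

def interval_rbf(lo1, hi1, lo2, hi2, gamma=1.0):
    """exp(-gamma * d^2) is decreasing in d^2, so the kernel interval flips
    the distance endpoints."""
    s_lo, s_hi = interval_sq_dist(lo1, hi1, lo2, hi2)
    return np.exp(-gamma * s_hi), np.exp(-gamma * s_lo)

# Two interval patterns in R^2 (e.g., measurements with known error bounds).
k_lo, k_hi = interval_rbf(np.array([0.0, 0.0]), np.array([0.2, 0.1]),
                          np.array([1.0, 1.0]), np.array([1.1, 1.3]))
print(f"K in [{k_lo:.4f}, {k_hi:.4f}]")
```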

Relevance: 20.00%

Abstract:

The present work is based on bilinear predictive control applied to an induction motor. As a particular case of predictive control techniques for nonlinear systems, bilinear controllers have aroused great interest, since they have the advantage of being simpler than nonlinear controllers in general and more representative than linear ones. The method adopted here uses a quasi-linear-per-time-step model based on Generalized Predictive Control (GPC). The induction motor is modeled through vector control with indirect rotor field orientation. The system consists of a 3 hp squirrel-cage induction motor driven by a test bench developed for this work; results are presented for a +5% variation in the set-point value and for +10% and -10% variations in the nominal load applied to the motor. The results demonstrate the good performance of the bilinear predictive controllers when compared with the linear cases.
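
A minimal sketch of the unconstrained GPC control law this approach builds on, du = (GᵀG + λI)⁻¹Gᵀ(w − f), applying only the first move. The plant model here is a generic first-order step response, a stand-in assumption, not the motor's actual quasi-linearized bilinear model.

```python
import numpy as np

N = 10                                   # prediction horizon
a, b = 0.9, 0.1                          # x[k+1] = a*x[k] + b*u[k] (assumed plant)

# Step-response (dynamic) matrix G of the linearized model over the horizon.
step = np.array([b * (1 - a ** (i + 1)) / (1 - a) for i in range(N)])
G = np.zeros((N, N))
for i in range(N):
    G[i, : i + 1] = step[: i + 1][::-1]

lam = 0.5                                # control-effort weight (lambda)
w = np.ones(N)                           # set-point trajectory
x = 0.0                                  # current state
f = np.array([a ** (i + 1) * x for i in range(N)])  # free response (input held)

du = np.linalg.solve(G.T @ G + lam * np.eye(N), G.T @ (w - f))
u_now = du[0]                            # receding horizon: apply first move only
print("control increment applied:", round(u_now, 4))
```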

Relevance: 20.00%

Abstract:

The present work describes the use of a mathematical tool to solve problems arising from control theory, including identification, analysis of the phase portrait and of stability, as well as the temporal evolution of the currents of the plant, an induction motor. System identification is an area of mathematical modeling whose objective is the study of techniques able to determine a dynamic model that represents a real system. The tool used in the identification and analysis of the nonlinear dynamical system is the Radial Basis Function (RBF) network. The process (plant) used has an unknown mathematical model, but it belongs to a particular class containing an internal dynamics that can be modeled. As a contribution, an analysis of the asymptotic stability of the RBF network is presented. Identification using radial basis functions is demonstrated through computer simulations on a real data set obtained from the plant.
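
A minimal sketch of RBF-network identification: Gaussian centers are chosen from the data and the output weights are fitted by least squares. The toy plant below is an assumed first-order nonlinear system, not the actual motor data set.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy plant: y[k+1] = 0.8*sin(y[k]) + 0.3*u[k]  (assumption, for illustration)
u = rng.uniform(-1, 1, size=300)
y = np.zeros(301)
for k in range(300):
    y[k + 1] = 0.8 * np.sin(y[k]) + 0.3 * u[k]

X = np.column_stack([y[:-1], u])     # regressors: past output and input
t = y[1:]                            # one-step-ahead target

centers = X[rng.choice(len(X), 20, replace=False)]  # 20 Gaussian centers
width = 0.5

def phi(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

W, *_ = np.linalg.lstsq(phi(X, centers, width), t, rcond=None)
pred = phi(X, centers, width) @ W
print("one-step RMS error:", round(np.sqrt(np.mean((pred - t) ** 2)), 5))
```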

Relevance: 20.00%

Abstract:

Relevant research has been growing on electric machines without bearings, generally named bearingless motors. This work makes an introductory presentation of the bearingless motor and its peripheral devices, focusing on the design and implementation of the sensors and interfaces needed to control the rotor's radial position and the machine's rotation. The signals from the machine are conditioned for the analog inputs of the TMS320F2812 DSP and used by the control program. The purpose of this work is to design and build a system of sensors and interfaces suited to the inputs and outputs of the TMS320F2812 DSP for the control of a bearingless motor, keeping in mind modularity, circuit simplicity, a small number of power supplies, good noise immunity, and a good frequency response above 10 kHz. The system was tested on an ordinary 3.7 kVA induction motor modified to operate as a bearingless motor with divided windings.
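
A minimal, hypothetical sketch of the kind of loop the conditioned sensor signals feed: a discrete PD regulator holding the rotor's radial position on one axis. The gains, sample rate, and toy rotor model are illustrative assumptions, not the DSP firmware described in the work.

```python
dt = 1e-4                    # 10 kHz control rate (matches the stated bandwidth)
kp, kd = 2e5, 500.0          # assumed PD gains
pos, vel = 1e-3, 0.0         # rotor displaced 1 mm from center
prev_err = -pos

for _ in range(2000):        # 0.2 s of simulated regulation
    err = 0.0 - pos                          # position sensor reading vs. center
    force = kp * err + kd * (err - prev_err) / dt
    prev_err = err
    # Toy 1-DOF rotor: unstable magnetic pull plus the corrective force.
    acc = 5e4 * pos + force                  # negative stiffness of the bearing
    vel += acc * dt
    pos += vel * dt

print("final displacement (m):", f"{pos:.2e}")
```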

Relevance: 20.00%

Abstract:

Induction motors are among the most important pieces of equipment in modern industry. In many situations, however, they are subjected to inadequate conditions such as high temperatures and pressures, load variations, and constant vibration. Such conditions leave them more susceptible to failures, whether external or internal in nature, which are unwanted in the industrial process. In this context, predictive maintenance plays an important role: the timely detection and diagnosis of faults increases the motor's operating time and makes it possible to reduce the costs caused mainly by production stops and by corrective maintenance of the motor itself. At this juncture, this work proposes the design of a system able to detect and diagnose faults in induction motors from measurements of the line voltages and currents and of the motor speed. This information is used as input to a rule-based fuzzy inference system that detects and classifies a failure from the variation of these quantities.
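
A minimal sketch of one rule-based fuzzy inference step in the spirit of the system described above; the membership functions, rules, and defuzzification are illustrative assumptions, not the work's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def diagnose(current_pu, speed_pu):
    # Fuzzify the normalized (per-unit) measurements.
    curr_high = tri(current_pu, 1.0, 1.3, 1.6)
    speed_low = tri(speed_pu, 0.6, 0.8, 1.0)
    curr_norm = tri(current_pu, 0.8, 1.0, 1.2)

    # Mamdani-style rules (min for AND gives each rule's firing strength).
    r_fault = min(curr_high, speed_low)  # IF current high AND speed low THEN fault
    r_ok = curr_norm                     # IF current normal THEN healthy

    # Crisp severity in [0, 1] via a weighted average of rule outputs.
    total = r_fault + r_ok
    return 0.0 if total == 0 else (r_fault * 1.0 + r_ok * 0.0) / total

print("severity:", round(diagnose(current_pu=1.35, speed_pu=0.82), 3))
```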

Relevance: 20.00%

Abstract:

One of the most important goals of bioinformatics is the ability to identify genes in uncharacterized DNA sequences in worldwide databases. Gene expression in prokaryotes begins when the RNA-polymerase enzyme interacts with DNA regions called promoters, where the main regulatory elements of the transcription process are located. Despite the improvement of in vitro techniques for molecular biology analysis, characterizing and identifying a large number of promoters in a genome is a complex task. Moreover, the main drawback is the absence of a large set of promoters from which to identify conserved patterns among species; hence, an in silico method to predict them in any species is a challenge. Improved promoter prediction methods can be one step towards developing more reliable ab initio gene prediction methods. In this work, we present an empirical comparison of Machine Learning (ML) techniques such as Naïve Bayes, Decision Trees, Support Vector Machines, Neural Networks, Voted Perceptron, PART, k-NN, and ensemble approaches (Bagging and Boosting) applied to the task of predicting Bacillus subtilis promoters. To do so, we first built two data sets of promoter and non-promoter sequences: one for B. subtilis and a hybrid one. To evaluate the ML methods, a cross-validation procedure was applied. Good results were obtained with ML methods such as SVM and Naïve Bayes on the B. subtilis data set; however, good results were not reached on the hybrid database.
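
A minimal sketch of the kind of cross-validated comparison reported above, on hypothetical one-hot-encoded sequence windows; the real study used curated promoter/non-promoter sets, not this random stand-in data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier

rng = np.random.default_rng(5)
bases = np.eye(4)                          # one-hot A, C, G, T
seqs = rng.integers(0, 4, size=(300, 50))  # 300 windows of 50 nt
X = bases[seqs].reshape(300, -1)           # 200-dim one-hot features
y = rng.integers(0, 2, size=300)           # promoter / non-promoter labels

models = {
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(kernel="rbf"),
    "Decision Tree": DecisionTreeClassifier(),
    "Bagging(Tree)": BaggingClassifier(DecisionTreeClassifier(), n_estimators=25),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10)  # 10-fold cross-validation
    print(f"{name}: {acc.mean():.3f} ± {acc.std():.3f}")
```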

Relevance: 20.00%

Abstract:

Nowadays, classifying proteins into structural classes, which concerns the inference of patterns in their 3D conformation, is one of the most important open problems in Molecular Biology. The main reason is that the function of a protein is intrinsically related to its spatial conformation; however, such conformations are very difficult to obtain experimentally in the laboratory. This problem has therefore drawn the attention of many researchers in Bioinformatics. Considering the great difference between the number of known protein sequences and the number of three-dimensional structures determined experimentally, the demand for automated techniques for the structural classification of proteins is very high. In this context, computational tools, especially Machine Learning (ML) techniques, have become essential to deal with this problem. In this work, ML techniques are used in the recognition of protein structural classes: Decision Trees, k-Nearest Neighbors, Naive Bayes, Support Vector Machines, and Neural Networks. These methods were chosen because they represent different learning paradigms and have been widely used in the Bioinformatics literature. Aiming to improve the performance of these individual classifiers, homogeneous (Bagging and Boosting) and heterogeneous (Voting, Stacking, and StackingC) multi-classification systems are used. Moreover, since the protein database used in this work presents the problem of imbalanced classes, techniques for artificial class balancing (Random Undersampling, Tomek Links, CNN, NCL, and OSS) are used to minimize this problem. To evaluate the ML methods, a cross-validation procedure is applied, in which the accuracy of the classifiers is measured as the mean classification error rate on independent test sets. These means are compared pairwise by hypothesis tests to evaluate whether the differences between them are statistically significant. Among the individual classifiers, the Support Vector Machine presented the best accuracy. The multi-classification systems (homogeneous and heterogeneous) showed, in general, performance superior or similar to that achieved by the individual classifiers, especially Boosting with Decision Trees and StackingC with Linear Regression as meta-classifier. The Voting method, despite its simplicity, proved adequate for the problem addressed in this work. The class-balancing techniques, on the other hand, did not produce a significant improvement in the global classification error; nevertheless, they did improve the classification error for the minority class, and in this context the NCL technique proved the most appropriate.
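
A minimal sketch of a heterogeneous voting ensemble plus naive random undersampling of the majority class; the data are random placeholders, and the study's full Stacking/Tomek Links/NCL pipeline is omitted.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 10))
y = (rng.random(400) < 0.15).astype(int)   # imbalanced: ~15% minority class

# Random undersampling: keep all minority samples, subsample the majority.
minority = np.where(y == 1)[0]
majority = rng.choice(np.where(y == 0)[0], size=len(minority), replace=False)
keep = np.concatenate([minority, majority])
Xb, yb = X[keep], y[keep]

vote = VotingClassifier([
    ("nb", GaussianNB()),
    ("knn", KNeighborsClassifier()),
    ("svm", SVC(probability=True)),
], voting="soft")
print("balanced CV accuracy:", cross_val_score(vote, Xb, yb, cv=5).mean().round(3))
```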

Relevance: 20.00%

Abstract:

This work presents an evaluative study of the effects of using a machine learning technique on the main features of a self-organizing, multiobjective genetic algorithm (GA). A typical GA can be seen as a search technique usually applied to problems of non-polynomial complexity. Originally, these algorithms were designed to provide methods that seek acceptable solutions to problems where the global optimum is inaccessible or difficult to obtain. At first, GAs considered only one evaluation function and single-objective optimization; today, however, implementations that consider several optimization objectives simultaneously (multiobjective algorithms) are common, as are those that allow many components of the algorithm to change dynamically (self-organizing algorithms). At the same time, combinations of GAs with machine learning techniques, intended to improve some of their performance and usability characteristics, are also common. In this work, a GA combined with a machine learning technique was analyzed and applied to an antenna design problem. We used a variant of the bicubic interpolation technique, called 2D Spline, as the machine learning technique to estimate the behavior of a dynamic fitness function, based on knowledge obtained from a set of laboratory experiments. This fitness (evaluation) function is responsible for determining the fitness degree of a candidate solution (individual) relative to the others in the same population. The algorithm can be applied in many areas, including telecommunications, such as the design of antennas and frequency selective surfaces. In this particular work, the presented algorithm was developed to optimize the design of a microstrip antenna, commonly used in wireless communication systems, for Ultra-Wideband (UWB) applications. The algorithm optimized two variables of the antenna geometry, the length (Ls) and width (Ws) of a slit in the ground plane, with respect to three objectives: radiated signal bandwidth, return loss, and central frequency deviation. These two dimensions (Ws and Ls) are used as variables in three different interpolation functions, one Spline per optimization objective, to compose an aggregate multiobjective fitness function. The final result proposed by the algorithm was compared with the result of a simulation program and with the measured result of a physical prototype of the antenna built in the laboratory. The algorithm was analyzed with respect to its degree of success in relation to four important characteristics of a self-organizing multiobjective GA: performance, flexibility, scalability, and accuracy. At the end of the study, an increase in execution time compared to a conventional GA was observed, due to the time required by the machine learning process. On the plus side, an appreciable gain was noticed in the flexibility and accuracy of the results, along with a promising path indicating how the algorithm could be extended to optimization problems with η variables.
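
A minimal sketch of the core idea above: a 2D spline fitted to measured points serves as a surrogate fitness for a simple GA over two geometry variables. The measurement grid, the single-objective surrogate, and the GA settings are illustrative assumptions, not the study's actual antenna data or aggregate fitness.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

rng = np.random.default_rng(7)

# Hypothetical lab grid: one measured objective (e.g., return loss in dB)
# sampled over slit width Ws and length Ls.
ws = np.linspace(1.0, 10.0, 10)
ls = np.linspace(1.0, 10.0, 10)
measured = -20 + 5 * np.sin(ws[:, None] / 2) * np.cos(ls[None, :] / 3)
surrogate = RectBivariateSpline(ws, ls, measured)

def fitness(pop):
    # Lower return loss is better, so we minimize the spline's estimate.
    return surrogate.ev(pop[:, 0], pop[:, 1])

pop = rng.uniform(1.0, 10.0, size=(30, 2))       # 30 (Ws, Ls) individuals
for gen in range(40):
    f = fitness(pop)
    parents = pop[np.argsort(f)[:10]]            # truncation selection
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.2, (30, 2))
    pop = np.clip(children, 1.0, 10.0)           # mutation + bounds

best = pop[np.argmin(fitness(pop))]
print("best (Ws, Ls):", best.round(2),
      "estimated objective:", round(float(surrogate.ev(best[0], best[1])), 2))
```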

Relevance: 20.00%

Abstract:

Experiments were performed to study the effect of the surface properties of a vertical channel, heated by a source of thermal radiation, on the air flow induced through convection. Two channels (solar chimney prototypes) were built with glass plates, forming a structure of truncated pyramidal geometry, with two surface finishes: transparent and opaque. Each chimney was mounted on a thermal-energy-absorbing base with a central opening for the passage of air and subjected to heating by a radiant source comprising a bank of incandescent lamps; field tests were also performed. Thermocouples were fixed on the bases and walls of the chimneys and connected to a computer data acquisition system. The velocity and temperature of the air flow within the chimney were measured with a hot-wire anemometer. Five experiments were performed for each chimney, in which convective flows between 17 m³/h and 22 m³/h and air flow velocities between 0.38 m/s and 0.56 m/s were recorded in the laboratory tests, while air velocities between 0.6 m/s and 1.1 m/s and convective flows between 650 m³/h and 1150 m³/h were recorded in the field tests. The test data were compared with values obtained from semi-empirical equations valid for air flow induced in channels, and with data simulated from the First Law of Thermodynamics. It was found that the chimney with transparent walls induced more intense convective flows than the chimney with the opaque finish. Based on the results, the prototype can be proposed for exhausting fumes, gases, vapors, mists, and dusts in industrial environments, for helping to promote ventilation and air renewal in built environments, and for drying materials, fruits, and seeds.
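
A worked sketch of a commonly used semi-empirical stack-effect relation for buoyancy-driven flow in a vertical channel, Q = Cd·A·√(2·g·H·ΔT/T). The abstract does not state which equations were used, so the relation, geometry, and temperatures below are illustrative assumptions; note only that the resulting order of magnitude is consistent with the laboratory range reported above.

```python
import math

Cd = 0.6       # discharge coefficient (typical assumed value)
A = 0.01       # opening area, m^2 (assumed)
H = 1.0        # channel height, m (assumed)
g = 9.81       # gravitational acceleration, m/s^2
T_out = 300.0  # ambient air temperature, K
dT = 15.0      # assumed inside-outside temperature difference, K

v = math.sqrt(2 * g * H * dT / T_out)  # induced air velocity, m/s
Q = Cd * A * v * 3600                  # volumetric flow, m^3/h

print(f"velocity ≈ {v:.2f} m/s, flow ≈ {Q:.1f} m^3/h")
```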

Relevance: 20.00%

Abstract:

Nowadays, second-generation ethanol, obtained from the fermentation of sugars from cellulose hydrolysis, is gaining attention worldwide as a viable alternative to petroleum, mainly because it is a renewable resource. The increase of first-generation ethanol production, i.e., that obtained from sugar-cane molasses, could lead to a reduction of the land available for crops and food production. However, second-generation ethanol requires technological progress to reduce its bottlenecks, such as the production of the enzymes that hydrolyze cellulose to glucose (the cellulases), as well as the development of efficient, low-cost biomass pretreatments. In this work, Trichoderma reesei ATCC 2768 was cultivated under submerged fermentation to produce cellulases, using lignocellulosic wastes, cashew apple bagasse and coconut bagasse, with and without pretreatment, as substrates. The bagasses were pretreated with 1 M NaOH or by high-pressure explosion. Enzyme production was carried out in a shaker (27 °C, 150 rpm, initial medium pH 4.8). Results showed that T. reesei ATCC 2768 reached the highest cellulase production when the cashew apple bagasse was treated with 1 M NaOH (2.160 IU/mL of CMCase and 0.215 IU/mL of FPase), with a cellulose conversion, in terms of total reducing sugars, of 98.38%, compared with the pretreatment by high-pressure explosion (0.853 IU/mL of CMCase and 0.172 IU/mL of FPase), which showed a conversion of 47.39% in terms of total reducing sugars. Cellulase production was lower for the medium containing coconut bagasse treated with 1 M NaOH (0.480 IU/mL of CMCase and 0.073 IU/mL of FPase), giving a conversion of 49.5% in terms of total reducing sugars. Cashew apple bagasse without pretreatment showed lower cellulase activities (0.535 IU/mL of CMCase and 0.152 IU/mL of FPase) than the pretreated bagasse, while the coconut bagasse without pretreatment did not show any enzymatic activity. Maximum cell concentrations were obtained using cashew apple bagasse and coconut bagasse treated with 1 M NaOH, 2.92 g/L and 1.97 g/L, respectively; these were higher than for the experiments in which the substrates were treated by high-pressure explosion, 1.93 g/L and 1.17 g/L. Cashew apple bagasse is a potential inducer of cellulolytic enzyme synthesis, showing better results than coconut bagasse, and pretreatment improves the cellulolytic enzyme production process.

Relevance: 20.00%

Abstract:

The present work interprets and analyzes the problem of induction from a viewpoint founded on set theory and probability theory, as a basis for resolving its negative philosophical implications for systems of inductive logic in general. Due to the importance of the problem and the relatively recent developments in these fields of knowledge (early 20th century), as well as the visible relations between them and the process of inductive inference, a field of relatively unexplored and promising possibilities has been opened. The key point of the study consists in modeling the information acquisition process using concepts of set theory, followed by a treatment using probability theory. Throughout the study, two major obstacles to probabilistic justification were identified: the problem of defining the concept of probability and that of defining rationality, as well as the subtle connection between the two. This finding called for greater care in choosing the criterion of rationality to be considered, in order to facilitate the treatment of the problem through specific situations without losing their original characteristics, so that the conclusions can be extended to classic cases such as the question of whether the sun will continue to rise.
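
The abstract does not spell out its formal treatment, but the classic probabilistic handling of the sunrise question is Laplace's rule of succession: under a uniform prior, after n successes in n trials, the probability of one more success is (n + 1)/(n + 2). A worked sketch, with the trial count chosen purely for illustration:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Posterior predictive probability of success under a uniform prior."""
    return Fraction(successes + 1, trials + 2)

n = 10000  # days on which the sun has been observed to rise (illustrative)
p = rule_of_succession(n, n)
print(f"P(sunrise tomorrow) = {p} ≈ {float(p):.6f}")
```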