548 results for CNPQ::ENGENHARIAS::ENGENHARIA ELETRICA
Abstract:
Automatic speech recognition by machine has been a research target for the past five decades. Over this period there have been numerous advances, for instance in the recognition of isolated words (commands), which currently achieves very high recognition rates. However, we are still far from developing a system whose performance is comparable to that of a human being (automatic continuous speech recognition). One of the great challenges in continuous speech recognition research is the large number of patterns: modern languages such as English, French, Spanish and Portuguese have approximately 500,000 words, i.e., patterns to be identified. The purpose of this study is to use sub-word units such as phonemes, syllables and diphones as the basis for speech recognition, aiming to recognize any word without necessarily modeling each word directly. The main goal is to reduce the restriction imposed by the excessive number of patterns. To validate this proposal, the system was tested on speaker-dependent isolated word recognition. The phonetic characteristics of Brazilian Portuguese were used to develop the hierarchical decision system, in which decisions are made by Support Vector Machine (SVM) classifiers. The main speech features were obtained from the Wavelet Packet Transform; MFCC (Mel-Frequency Cepstral Coefficients) descriptors are also used in this work. The method proposed here showed good results in the recognition of vowels, consonants (syllables) and words when compared with other methods in the literature.
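A minimal sketch of such a pipeline is given below, combining wavelet-packet energy features with an SVM classifier. The toy random signals, the db4 wavelet, the decomposition depth and the SVM settings are illustrative assumptions, not the configuration used in the thesis.

```python
# Hedged sketch: wavelet-packet energy features plus an SVM classifier.
# Random data stands in for real speech frames.
import numpy as np
import pywt
from sklearn.svm import SVC

def wp_energy_features(signal, wavelet="db4", level=3):
    """Energy of each wavelet-packet leaf node, as a fixed-size feature vector."""
    wp = pywt.WaveletPacket(signal, wavelet=wavelet, maxlevel=level)
    leaves = [node.path for node in wp.get_level(level, order="freq")]
    return np.array([np.sum(wp[path].data ** 2) for path in leaves])

rng = np.random.default_rng(0)
# Toy stand-ins for two phoneme classes (real use: windowed speech frames).
X = np.array([wp_energy_features(rng.standard_normal(256) * (1 + c))
              for c in (0, 1) for _ in range(30)])
y = np.repeat([0, 1], 30)

clf = SVC(kernel="rbf", C=1.0).fit(X, y)
print(clf.score(X, y))   # training accuracy on the toy data
```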
Fault detection and isolation system for dynamic systems based on parametric identification
Abstract:
The present research contributes to the area of fault detection and diagnosis by proposing a new architecture for fault detection and isolation (FDI) systems. The proposed architecture innovates in the way the monitored physical quantities are linked to the FDI system and, as a consequence, in the way faults are detected, isolated and classified. A search for mathematical tools able to satisfy the objectives of the proposed architecture pointed to the Kalman Filter and its derivatives, the EKF (Extended Kalman Filter) and the UKF (Unscented Kalman Filter). The first is efficient when the monitored process presents a linear relation between the monitored physical quantities and its output; the other two are suited to nonlinear dynamics. A brief comparison of their features and capabilities in the context of fault detection concludes that the UKF is a better alternative than the EKF to compose the proposed FDI architecture for processes with nonlinear dynamics. The results shown at the end of the research refer to both linear and nonlinear industrial processes. The efficiency of the proposed architecture can be observed in its application to simulated and real processes. The contributions of this thesis are summarized at the end of the text.
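The residual-based detection idea behind such FDI architectures can be illustrated with a standard (linear) Kalman filter and a 3-sigma innovation test. The scalar model, noise levels and injected sensor fault below are illustrative assumptions, not the processes studied in the thesis.

```python
# Hedged sketch: fault detection via Kalman filter residuals on a scalar
# linear process; a fault is flagged when the innovation exceeds 3 sigma.
import numpy as np

rng = np.random.default_rng(3)
a, c = 0.95, 1.0          # x[k+1] = a x[k] + w,  y[k] = c x[k] + v
q, r = 1e-3, 1e-2         # process / measurement noise variances

x_true, x_est, p = 0.0, 0.0, 1.0
for k in range(300):
    x_true = a * x_true + rng.normal(0, np.sqrt(q))
    y = c * x_true + rng.normal(0, np.sqrt(r))
    if k >= 150:
        y += 0.8          # additive sensor fault injected at k = 150
    # Kalman predict/update for the scalar model.
    x_pred, p_pred = a * x_est, a * p * a + q
    s = c * p_pred * c + r                 # innovation variance
    residual = y - c * x_pred              # innovation (residual)
    k_gain = p_pred * c / s
    x_est, p = x_pred + k_gain * residual, (1 - k_gain * c) * p_pred
    if abs(residual) > 3 * np.sqrt(s):     # 3-sigma residual test
        print(f"fault flagged at step {k}")
        break
```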
Abstract:
The usual load flow programs were, in general, developed to simulate electric energy transmission, subtransmission and distribution systems. However, the mathematical methods and algorithms used by these formulations were based, for the most part, only on the characteristics of transmission systems, which were the main concern of engineers and researchers. The physical characteristics of transmission systems are quite different from those of distribution systems. In transmission systems, voltage levels are high and the lines are generally very long, so the capacitive and inductive effects that appear in the system have a considerable influence on the quantities of interest and must be taken into account. Moreover, transmission loads have a macro nature, such as cities, neighborhoods or large industries; these loads are in practice nearly balanced, which reduces the need for three-phase load flow methodologies. Distribution systems, on the other hand, present different characteristics: voltage levels are low in comparison with transmission, which almost annuls the capacitive effects of the lines. The loads are, in this case, transformers whose secondaries supply small consumers, often single-phase, so the probability of finding an unbalanced circuit is high and three-phase methodologies assume an important role. Besides, equipment such as voltage regulators, which use the concepts of phase and line voltage simultaneously in their operation, requires a three-phase methodology in order to simulate its real behavior. For these reasons, a method for three-phase load flow calculation was initially developed in this work to simulate the steady-state behavior of distribution systems. To achieve this goal, the Power Summation Algorithm was used as the basis for the three-phase method. This algorithm had already been widely tested and approved by researchers and engineers for the simulation of radial electric energy distribution systems, mainly in single-phase representation. In our formulation, lines are modeled as three-phase circuits, considering the magnetic coupling between phases, and the earth effect is taken into account through the Carson reduction. It is important to point out that, although loads are normally connected to the transformer secondaries, the hypothesis of wye- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was used that allows the simulation of various configurations, according to their real operation. Finally, the representation of switches with current measurement at various points of the feeder was considered: the loads are adjusted during the iterative process so that the current at each switch converges to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived from the described load flow, with the objective of supporting subsequent optimization processes. These parameters are found by calculating the partial derivatives of one variable with respect to another, in general voltages, losses and reactive powers.
After describing the calculation of the sensitivity parameters, the Gradient Method is presented, which uses these parameters to optimize an objective function defined for each type of study. The first study addresses the reduction of technical losses in a medium-voltage feeder through the installation of capacitor banks; the second addresses the correction of the voltage profile through the installation of capacitor banks or voltage regulators. For loss reduction, the objective function is the sum of the losses in all parts of the system. For voltage profile correction, the objective function is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of the application of the described methods to several feeders are presented, in order to give insight into their performance and accuracy.
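For illustration, a minimal single-phase sketch of the backward/forward power-summation sweep follows; the feeder topology, impedances and loads are made-up values, and the thesis's full three-phase formulation (mutual coupling, Carson reduction, regulators, switches) is not reproduced.

```python
# Hedged sketch: single-phase Power Summation (backward/forward sweep)
# load flow on a small radial feeder, all quantities in per unit.
import numpy as np

# Radial feeder: node 0 is the substation; parent[i] is the upstream node of i.
parent = [-1, 0, 1, 1]                    # 0 -> 1 -> {2, 3}
z_line = [0, 0.02 + 0.04j, 0.03 + 0.05j, 0.025 + 0.045j]  # series impedance
s_load = [0, 0.1 + 0.05j, 0.2 + 0.1j, 0.15 + 0.08j]       # complex loads

v = np.ones(4, dtype=complex)             # flat start, substation held at 1.0 pu
for _ in range(20):                       # backward/forward sweep iterations
    # Backward sweep: accumulate downstream power (plus losses) into branches.
    s_branch = np.array(s_load, dtype=complex)
    for i in range(3, 0, -1):
        s_branch[i] += z_line[i] * abs(s_branch[i] / v[i]) ** 2  # branch loss
        s_branch[parent[i]] += s_branch[i]
    # Forward sweep: update voltages from the substation toward the leaves.
    v_old = v.copy()
    for i in range(1, 4):
        i_branch = np.conj(s_branch[i] / v[i])
        v[i] = v[parent[i]] - z_line[i] * i_branch
    if np.max(np.abs(v - v_old)) < 1e-8:
        break

print(np.abs(v))                          # node voltage magnitudes (pu)
```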
Abstract:
This work presents an interval approach to deal with images that contain uncertainty, as well as to treat this uncertainty through morphological operations. Two interval models are presented. For the first, a three-valued algebraic space is introduced, built on the three-valued logic of Lukasiewicz. With this algebraic structure, the theory of interval binary images, which extends the classical binary model with the inclusion of uncertainty information, is introduced. It can represent binary images with uncertainty in their pixels, originated, for example, during the image acquisition process. The lattice structure of these images allows the definition of morphological operators in which the uncertainties are treated locally. The second model extends the classical model to gray-level images, where the functions representing these images map into a finite set of interval values. The resulting algebraic structure belongs to the class of complete lattices, which also allows the definition of the elementary operators of mathematical morphology, dilation and erosion, for these images. Thus, an interval theory applied to mathematical morphology is established to deal with uncertainty problems in images.
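A minimal sketch of the gray-level interval idea is shown below, assuming the usual flat-structuring-element gray-scale operators applied endpoint-wise (sup and inf are monotone, so dilating or eroding both bounds preserves the interval order); this is an illustration, not the thesis's formal construction.

```python
# Hedged sketch: interval dilation/erosion for gray-level interval images,
# processing the [low, high] bounds with the standard flat operators.
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

# An interval image: every pixel carries [low, high] gray-level bounds.
low  = np.array([[10, 20, 30], [40, 50, 60], [70, 80, 90]], dtype=float)
high = low + 5.0                                  # uncertainty of 5 gray levels

size = (3, 3)                                     # flat 3x3 structuring element
# Dilation of an interval image: dilate both endpoints (sup is monotone).
dil = (grey_dilation(low, size=size), grey_dilation(high, size=size))
# Erosion of an interval image: erode both endpoints (inf is monotone).
ero = (grey_erosion(low, size=size), grey_erosion(high, size=size))

print(dil[0], dil[1], sep="\n")
```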
Abstract:
In this work, we propose a methodology for teaching robotics in elementary schools, based on Vygotsky's socio-historical theory. This methodology, together with the Lego Mindstorms® kit and an educational software tool (an interface for controlling and programming prototypes), forms an educational robotics system named RoboEduc. For the practical development of this work, we used the action-research strategy, carrying out robotics activities with children between 8 and 10 years of age, students at the elementary level of the Municipal School Ascendino de Almeida, located in Pitimbu, on the periphery of Natal, in the state of Rio Grande do Norte. The activities focused on the construction of robotic prototypes and on their programming and control. When constructing prototypes, children develop zones of proximal development (ZPDs), learning spaces that, when well used, allow individuals to build not only scientific concepts but also abilities and capacities that are important for the social and cultural interaction of each child and of the group. Through these practical workshops, it was possible to analyze the use of the robot as a mediating element in the teaching-learning process and the contributions that robotics may bring to teaching from the elementary level onward.
Abstract:
Optimization techniques known as metaheuristics have succeeded in solving many problems classified as NP-hard. These methods use non-deterministic approaches that find very good solutions but do not guarantee the global optimum. Beyond the inherent difficulties related to the complexity of optimization problems, metaheuristics still face the exploration/exploitation dilemma, which consists of choosing between a greedy search and a wider exploration of the solution space. One way to guide such algorithms while searching for better solutions is to supply them with more knowledge of the problem through an intelligent agent, able to recognize promising regions and to identify when the search direction should be diversified. Accordingly, this work proposes the use of a Reinforcement Learning technique, the Q-learning algorithm, as an exploration/exploitation strategy for the GRASP (Greedy Randomized Adaptive Search Procedure) and Genetic Algorithm metaheuristics. The GRASP metaheuristic uses Q-learning instead of the traditional greedy-random algorithm in its construction phase. This replacement aims to improve the quality of the initial solutions used in the local search phase of GRASP, and also provides the metaheuristic with an adaptive memory mechanism that allows the reuse of good previous decisions and avoids the repetition of bad ones. In the Genetic Algorithm, Q-learning was used to generate an initial population of high fitness and, after a given number of generations, whenever the diversity rate of the population falls below a certain limit L, to supply one of the parents used by the genetic crossover operator. Another significant change in the hybrid genetic algorithm is the proposal of a mutually interactive cooperation process between the genetic operators and the Q-learning algorithm: in this process, Q-learning receives an additional update of its Q-value matrix based on the current best solution of the Genetic Algorithm. The computational experiments presented in this thesis compare the results obtained with traditional versions of the GRASP metaheuristic and the Genetic Algorithm against those obtained with the proposed hybrid methods. Both algorithms were successfully applied to the symmetric Traveling Salesman Problem, which was modeled as a Markov decision process.
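The construction phase described above can be sketched as follows for the symmetric TSP: the state is the current city, actions are the unvisited cities, and an epsilon-greedy policy over the Q-values replaces the greedy-random choice. Instance data, reward shape and parameters are illustrative assumptions.

```python
# Hedged sketch: Q-learning-guided tour construction for the symmetric TSP.
import numpy as np

rng = np.random.default_rng(0)
n = 10
pts = rng.random((n, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)  # distance matrix

Q = np.zeros((n, n))                # Q[s, a]: value of going from city s to a
alpha, gamma, eps = 0.4, 0.8, 0.2   # learning rate, discount, exploration

def construct_tour():
    """Build one tour with the epsilon-greedy Q-learning policy."""
    tour = [0]
    while len(tour) < n:
        s = tour[-1]
        cand = [c for c in range(n) if c not in tour]
        if rng.random() < eps:
            a = int(rng.choice(cand))            # explore
        else:
            a = max(cand, key=lambda c: Q[s, c]) # exploit learned values
        # Reward: shorter edges are better; update toward best future value.
        r = -dist[s, a]
        future = max(Q[a, c] for c in cand if c != a) if len(cand) > 1 else 0.0
        Q[s, a] += alpha * (r + gamma * future - Q[s, a])
        tour.append(a)
    return tour

best = min((construct_tour() for _ in range(200)),
           key=lambda t: sum(dist[t[i], t[(i + 1) % n]] for i in range(n)))
print(best)
```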
Abstract:
Predictive control has received plenty of attention in the last decades, because the need to understand, analyze, predict and control real systems has grown quickly with technological and industrial progress. The objective of this thesis is to contribute to the development and implementation of nonlinear predictive controllers based on the Hammerstein model, as well as to evaluate their properties. In the development of the nonlinear predictive controller, the time-step linearization method is used, and a compensation term is introduced in order to improve controller performance. The main motivation of this thesis is the study and guarantee of stability for the nonlinear predictive controller based on the Hammerstein model, using the concept of sector conditions and the Popov Theorem. Simulation results with models from the literature show that the proposed approaches are able to control with good performance while guaranteeing system stability.
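The Hammerstein structure itself (a static nonlinearity in series with linear dynamics) can be sketched as below; the polynomial nonlinearity and first-order linear block are illustrative, and the closing comment indicates where a time-step linearization would act.

```python
# Hedged sketch: simulating a Hammerstein model, i.e., a static input
# nonlinearity followed by linear dynamics.
import numpy as np

def f_static(u):
    # Static nonlinearity (assumed polynomial form).
    return u + 0.5 * u**2

a, b = 0.9, 0.1            # linear block: y[k+1] = a*y[k] + b*v[k]
y, ys = 0.0, []
u_seq = np.sin(0.1 * np.arange(200))   # test input
for u in u_seq:
    v = f_static(u)        # intermediate (unmeasured) signal
    y = a * y + b * v      # linear dynamics driven by the nonlinear output
    ys.append(y)

# A time-step linearization, as mentioned above, would replace f_static
# around the current operating point u0 by its tangent:
#   f(u) ~ f(u0) + f'(u0) * (u - u0).
print(ys[-1])
```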
Abstract:
In industrial informatics, several attempts have been made to develop notations and semantics for classifying and describing different kinds of system behavior, particularly in the modeling phase. Such attempts provide the infrastructure to solve real engineering problems and to build practical systems that aim, mainly, at increasing the productivity, quality and safety of the process. Despite the many studies that have attempted to develop friendly methods for programming industrial controllers, these are still programmed by conventional trial-and-error methods and, in practice, there is little written documentation for these systems. The ideal solution would be a computational environment that allows industrial engineers to implement the system using a high-level language that follows international standards. Accordingly, this work proposes a methodology for plant and control modeling of discrete event systems that include sequential, parallel and timed operations, using a formalism based on Statecharts, denominated Basic Statechart (BSC). The methodology also provides automatic procedures to validate and implement these systems. To validate the methodology, two case studies with typical examples from the manufacturing sector are presented. The first shows a sequential control for a tagged machine, used to illustrate dependencies between the devices of the plant. In the second, more than one strategy for controlling a manufacturing cell is discussed: the model with no control has 72 states (distinct configurations); the model with sequential control generated 20 different states, acting on only 8 distinct configurations; and the model with parallel control generated 210 different states, acting on only 26 distinct configurations, therefore a less restrictive control strategy than the previous one. Lastly, an example is presented to highlight the modular characteristic of the methodology, which is very important for the maintenance of applications. In this example, the sensors that identify pieces in the plant were removed, so changes in the control model were needed to transmit the information from the input buffer sensor to the other positions of the cell.
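The flavor of such state-based sequential control models can be sketched with a minimal transition-table machine, as below; the states and events are illustrative, and the sketch does not capture BSC's parallel and timed constructs.

```python
# Hedged sketch: a sequential control as a tiny state machine with
# (state, event) -> next-state transitions.
class Machine:
    TRANSITIONS = {
        ("idle", "piece_arrived"): "loading",
        ("loading", "loaded"): "processing",
        ("processing", "done"): "unloading",
        ("unloading", "removed"): "idle",
    }

    def __init__(self):
        self.state = "idle"

    def fire(self, event):
        nxt = self.TRANSITIONS.get((self.state, event))
        if nxt is None:
            raise ValueError(f"event {event!r} not enabled in {self.state!r}")
        self.state = nxt

m = Machine()
for ev in ["piece_arrived", "loaded", "done", "removed"]:
    m.fire(ev)
print(m.state)   # back to 'idle' after one full cycle
```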
Abstract:
This work deals with a mathematical foundation for digital signal processing from the point of view of interval mathematics. It addresses the open problem of precision and representation of data in digital systems through an interval version of signal representation. Signal processing is a rich and complex area, so this work focuses on linear time-invariant systems. A vast literature exists in the area, but some concepts of interval mathematics need to be redefined or elaborated for the construction of a solid theory of interval signal processing. We construct basic foundations for signal processing in the interval version, such as the basic properties of linearity, stability and causality, and an interval version of linear systems and their properties. Interval versions of convolution and of the Z-transform are presented. Convergence of systems is analyzed using the interval Z-transform, an essentially interval distance, and interval complex numbers, with an application to an interval filter.
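A minimal sketch of interval convolution is given below, representing each sample as a [low, high] pair and using standard interval endpoint arithmetic; this illustrates the idea only and does not reproduce the thesis's formal definitions.

```python
# Hedged sketch: interval convolution with endpoint arithmetic.
def interval_mul(a, b):
    """Product of intervals a=[a0,a1], b=[b0,b1]: min/max of the 4 endpoint products."""
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))

def interval_conv(x, h):
    """Convolution of interval signals x and h (lists of (low, high) pairs)."""
    y = [(0.0, 0.0)] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            lo, hi = interval_mul(xi, hj)
            y[i + j] = (y[i + j][0] + lo, y[i + j][1] + hi)  # interval sum
    return y

# Samples with +/-0.01 quantization uncertainty; a 2-tap averaging filter.
x = [(0.99, 1.01), (1.99, 2.01), (2.99, 3.01)]
h = [(0.5, 0.5), (0.5, 0.5)]
print(interval_conv(x, h))
```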
Abstract:
The number of applications based on embedded systems grows significantly every year. Even though embedded systems have restrictions and simple processing units, their performance improves every day. However, as the complexity of applications also increases, better performance will always be necessary. Thus, despite such advances, there are cases in which an embedded system with a single processing unit is not sufficient to process information in real time. To improve the performance of these systems, parallel processing can be used in more complex applications that require high performance. The idea is to go beyond applications that already use embedded systems, exploring the use of a set of processing units working together to implement an intelligent algorithm. There is a wide range of existing work in the areas of parallel processing, intelligent systems and embedded systems; however, works that link these three areas to solve a problem are few. In this context, this work aimed to use the tools available for FPGA architectures to develop a multiprocessor platform for pattern classification with artificial neural networks.
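The neuron-level partitioning that such a multiprocessor platform can exploit may be sketched in software as below: one MLP layer is split across simulated processing units and the partial results are gathered. Sizes and weights are illustrative assumptions, not the FPGA design itself.

```python
# Hedged sketch: partitioning one MLP layer's neurons across several
# processing units, as a software analogue of a multiprocessor platform.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_units = 8, 12, 4           # 12 neurons split over 4 units
W = rng.standard_normal((n_hidden, n_in))
b = rng.standard_normal(n_hidden)
x = rng.standard_normal(n_in)

# Each "processor" computes a contiguous slice of the layer's neurons.
slices = np.array_split(np.arange(n_hidden), n_units)
partial = [np.tanh(W[idx] @ x + b[idx]) for idx in slices]  # per-unit work
y = np.concatenate(partial)                  # gather the full layer output

assert np.allclose(y, np.tanh(W @ x + b))    # matches the sequential result
print(y.shape)
```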
Abstract:
The bidimensional periodic structures called frequency selective surfaces (FSS) have been well investigated because of their filtering properties. Similar to filters operating at traditional radiofrequency bands, such structures can behave as band-stop or band-pass filters, depending on the array elements (patches or apertures, respectively), and can be used in a variety of applications, such as radomes, dichroic reflectors, waveguide filters, artificial magnetic conductors and microwave absorbers. To provide high-performance filtering at microwave bands, electromagnetic engineers have investigated various types of periodic structures: reconfigurable frequency selective screens, multilayered selective filters, and periodic arrays printed on anisotropic dielectric substrates or composed of fractal elements. In general, there is no closed-form solution leading directly from a desired frequency response to a corresponding device; the analysis of the scattering characteristics requires rigorous full-wave techniques. Moreover, because of the computational cost of evaluating the FSS scattering variables with a full-wave simulator, many electromagnetic engineers still follow a trial-and-error process until a given design criterion is met. As this procedure is laborious and human-dependent, optimization techniques are required to design practical periodic structures with the desired filter specifications. Some authors have employed neural networks and natural optimization algorithms, such as genetic algorithms and particle swarm optimization, for FSS design and optimization. This work carries out a rigorous study of the electromagnetic behavior of periodic structures, enabling the design of efficient devices for the microwave band. To this end, artificial neural networks are used together with natural optimization techniques, allowing the accurate and efficient investigation of various types of frequency selective surfaces in a simple and fast manner, becoming a powerful tool for the design and optimization of such structures.
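A sketch of the surrogate-plus-optimizer loop mentioned above follows, with a plain particle swarm optimizer driving a stand-in analytic "surrogate" in place of a trained neural network; the design variables, target and all constants are illustrative assumptions.

```python
# Hedged sketch: PSO minimizing the mismatch between a surrogate-predicted
# resonance and a design target.
import numpy as np

rng = np.random.default_rng(2)

def surrogate_resonance(x):
    # Stand-in for a neural-network surrogate: maps (patch width, gap) in
    # normalized units to a predicted resonant frequency (GHz).
    w, g = x
    return 30.0 / (1.0 + 2.0 * w + 0.5 * g)

target = 10.0                                     # desired resonance (GHz)
cost = lambda x: (surrogate_resonance(x) - target) ** 2

n, dim, iters = 20, 2, 100
pos = rng.uniform(0.1, 2.0, (n, dim)); vel = np.zeros((n, dim))
pbest = pos.copy(); pbest_c = np.array([cost(p) for p in pos])
gbest = pbest[pbest_c.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.1, 2.0)
    c = np.array([cost(p) for p in pos])
    better = c < pbest_c
    pbest[better], pbest_c[better] = pos[better], c[better]
    gbest = pbest[pbest_c.argmin()].copy()

print(gbest, surrogate_resonance(gbest))          # design near the 10 GHz target
```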
Abstract:
Internet applications such as media streaming, collaborative computing and massively multiplayer games are on the rise. This leads to the need for multicast communication, but group communication support based on IP multicast has not been widely adopted, due to a combination of technical and non-technical problems. Therefore, a number of different application-layer multicast schemes have been proposed in the recent literature to overcome these drawbacks. In addition, these applications often behave as both providers and clients of services, being called peer-to-peer applications, and their participants come and go very dynamically. Server-centric architectures for membership management thus have well-known problems of scalability and fault tolerance, and even traditional peer-to-peer solutions need some mechanism that takes member volatility into account. The idea of location awareness is to distribute the participants in the overlay network according to their proximity in the underlying network, allowing better performance. In this context, this thesis proposes an application-layer multicast protocol, called LAALM, which takes the actual network topology into account when assembling the overlay network. The membership algorithm uses a new metric, IPXY, to provide location awareness through the processing of local information, and was implemented using a distributed, shared, bi-directional tree. The algorithm also has a sub-optimal heuristic to minimize the cost of the membership process. The protocol was evaluated in two ways: first, through a simulator developed in this work, where the quality of the distribution tree was evaluated by metrics such as out-degree and path length; second, through real-life scenarios built in the ns-3 network simulator, where the protocol's network performance was evaluated by metrics such as stress, stretch, time to first packet and group reconfiguration time.
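The location-aware membership idea can be sketched as a greedy join that attaches a newcomer to the nearest member with spare out-degree, as below; the Euclidean distance stands in for LAALM's IPXY metric, whose actual definition is not reproduced here.

```python
# Hedged sketch: location-aware join into an overlay multicast tree.
import math

MAX_OUT_DEGREE = 3

class Node:
    def __init__(self, name, coord):
        self.name, self.coord, self.children = name, coord, []

def proximity(a, b):
    # Stand-in metric: Euclidean distance in a 2-D coordinate space.
    return math.dist(a.coord, b.coord)

def join(root, newcomer):
    members, stack = [], [root]
    while stack:                       # collect members with free capacity
        n = stack.pop()
        if len(n.children) < MAX_OUT_DEGREE:
            members.append(n)
        stack.extend(n.children)
    parent = min(members, key=lambda m: proximity(m, newcomer))
    parent.children.append(newcomer)
    return parent

root = Node("src", (0, 0))
for i, c in enumerate([(1, 1), (5, 5), (1, 2), (5, 6)]):
    p = join(root, Node(f"n{i}", c))
    print(f"n{i} joined under {p.name}")
```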
Abstract:
Industrial automation is directly linked to the development of information technology. Better hardware solutions, as well as improvements in software development methodologies, make possible the rapid growth of production process control. In this thesis, we propose an architecture that joins two technologies, one from the hardware field (industrial networks) and one from the software field (multiagent systems). The objective of this proposal is to combine these technologies in a multiagent architecture that allows control strategies to be implemented in field devices. With this, we intend to develop an agent architecture to detect and solve problems that may occur in the industrial network environment. Our work allies machine learning with the industrial context, making the proposed multiagent architecture adaptable to unfamiliar or unexpected production environments. We use neural networks and present strategies for allocating these networks to industrial network field devices. With this we intend to improve decision support at the plant level and to allow operation independent of human intervention.
Abstract:
In this work, we present GATE, a middleware-based approach for interperceptive applications. Through the services offered by GATE, we extend the concept of Interperception to integration with several devices, including set-top boxes and mobile devices (cell phones), among others. This extension enables the deployment of virtual environments on these devices, so that users who access the computer version of an environment may interact with those who access the same environment through other devices. This extension is just one part of the services provided by GATE, which emerges as a new proposal for the creation of multi-user virtual environments.
Abstract:
This thesis presents a new robust adaptive controller structure applied to mobile robots (surface mobile robots) with nonholonomic constraints. It acts on both the dynamics and the kinematics of the robot, and is split into two distinct parts. The first part controls the robot dynamics using variable structure model reference adaptive controllers. The second part controls the robot kinematics using a position controller, whose objective is to make the robot reach any point in the Cartesian plane; the kinematic controller is based only on information about the robot configuration. A decoupling method is adopted to transform the linear model of the mobile robot, a multiple-input multiple-output system, into two decoupled single-input single-output systems, thus reducing the complexity of designing the controller for the mobile robot. A variable structure model reference adaptive controller is then applied to each of the resulting systems: one is responsible for the robot position and the other for the heading angle, using reference signals generated by the position controller. To validate the proposed structure, simulated and experimental results using the differential drive mobile robots of a robot soccer kit are presented. The simulator reproduces the main characteristics of the real physical system, such as noise and nonlinearities like dead zone and saturation. The experimental results were obtained with a C++ program applied to the robot soccer kit of the Microrobot team at LACI/UFRN. The simulated and experimental results are presented and discussed at the end of the text.
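The role of the kinematic position controller can be illustrated with the classic polar-coordinate controller for a unicycle model, sketched below; the gains and integration scheme are illustrative, and this is not the thesis's VS-MRAC design.

```python
# Hedged sketch: kinematic position controller for a differential drive robot.
import math

def position_controller(x, y, theta, xg, yg, kv=1.0, kw=2.0):
    """Velocity commands (v, w) driving the robot toward goal (xg, yg)."""
    rho = math.hypot(xg - x, yg - y)            # distance to goal
    angle_to_goal = math.atan2(yg - y, xg - x)
    alpha = math.atan2(math.sin(angle_to_goal - theta),
                       math.cos(angle_to_goal - theta))  # wrapped heading error
    return kv * rho, kw * alpha

# Unicycle kinematics integrated with Euler steps.
x, y, theta, dt = 0.0, 0.0, 0.0, 0.01
for _ in range(2000):
    v, w = position_controller(x, y, theta, 1.0, 1.0)
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt

print(round(x, 3), round(y, 3))   # close to the goal (1, 1)
```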