97 results for Nonlinear and time-varying systems
Abstract:
The Java Platform is increasingly being adopted in the development of distributed systems with high user demand. This kind of application is more complex because, beyond meeting the functional requirements, it must fulfill pre-established performance parameters. This work studies the Java Virtual Machine (JVM), addressing its internal aspects and exploring the garbage collection strategies found in the literature and used by the JVM. It also presents a set of tools that help in optimizing applications, and others that help in monitoring applications in the production environment. Due to the great number of technologies that aim to solve problems common to the application layer, it becomes difficult to choose the one with the best response time and the lowest memory usage. This work presents a brief introduction to each of the candidate technologies and carries out comparative tests through a statistical analysis of the response-time and garbage-collection-activity random variables. The results give engineers and managers a basis for deciding which technologies to use in large applications, through knowledge of how they behave in their environments and of the amount of resources they consume. The relation between the productivity of a technology and its performance is also considered an important factor in this choice.
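A minimal sketch of the kind of statistical comparison described above, assuming two hypothetical response-time samples collected from competing technologies; the data, sample sizes, and the choice of Welch's t-test are illustrative assumptions and do not reproduce the dissertation's actual test design.

```python
# Illustrative sketch only: compares the mean response times of two hypothetical
# technologies with Welch's t-test (unequal variances). Sample data are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
tech_a_ms = rng.normal(loc=120.0, scale=15.0, size=200)  # hypothetical response times (ms)
tech_b_ms = rng.normal(loc=135.0, scale=25.0, size=200)

t_stat, p_value = stats.ttest_ind(tech_a_ms, tech_b_ms, equal_var=False)
print(f"mean A = {tech_a_ms.mean():.1f} ms, mean B = {tech_b_ms.mean():.1f} ms")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p suggests a real difference
```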
Abstract:
This master's dissertation presents a study of some aspects that determine the application of adaptive arrays in DS-CDMA cellular systems. Basic concepts of cellular systems and their evolution over time are reviewed, mainly the CDMA technique, especially its spreading codes and operating principles. The mobile radio environment, with its own characteristics, and the basic concepts of adaptive arrays as powerful spatial filters are then addressed. Some adaptive algorithms are also introduced; they are part of the signal processing chain and are responsible for the weight updates that directly influence the radiation pattern of the array. The study is based on a numerical analysis of the behavior of adaptive array systems with respect to the antenna type and array geometry used. All simulations were carried out with the Mathematica 4.0 software. The results for weight convergence, mean square error, gain, array pattern, and suppression capacity form the basis of the analysis, using the RLS (supervised) and LSDRMTA (blind) algorithms.
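For reference, a minimal sketch of the RLS weight update mentioned above, written for a generic complex-valued array snapshot; the array geometry, training signal, forgetting factor, and the LSDRMTA algorithm from the dissertation are not reproduced, and all parameter values are assumptions.

```python
# Illustrative RLS weight update for an adaptive array (complex baseband model).
# x: length-N snapshot of the array outputs, d: desired (training) sample.
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One standard RLS iteration; lam is the forgetting factor (assumed value)."""
    Px = P @ x
    k = Px / (lam + np.vdot(x, Px))            # gain vector, x^H P x in the denominator
    e = d - np.vdot(w, x)                      # a priori error d - w^H x
    w = w + k * np.conj(e)                     # weight update
    P = (P - np.outer(k, np.conj(x)) @ P) / lam
    return w, P, e

N = 4                                          # hypothetical number of array elements
w = np.zeros(N, dtype=complex)
P = np.eye(N, dtype=complex) * 1e3             # large initial inverse correlation matrix

# One illustrative iteration with random data:
rng = np.random.default_rng(0)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
w, P, e = rls_update(w, P, x, d=1.0 + 0j)
```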
Abstract:
The present work deals with bilinear predictive control applied to an induction motor. As a particular case of predictive control of nonlinear systems, bilinear predictive controllers have attracted great interest, since they have the advantage of being simpler than general nonlinear ones and more representative than linear ones. The method adopted here uses a "quasi-linear per time step" model based on Generalized Predictive Control. The induction motor is modeled through vector control with indirect rotor field orientation. The system consists of a 3 cv squirrel-cage induction motor, driven on a test bench developed for this work; results are presented for a variation of +5% in the set-point value and for variations of +10% and -10% in the nominal load applied to the motor. The results demonstrate the good efficiency of the bilinear predictive controllers when compared with the linear cases.
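For reference, the standard Generalized Predictive Control cost function is reproduced below in its general form; the bilinear "quasi-linear per time step" variant used in the dissertation is not shown, and the horizons and weights are generic symbols.

```latex
J(N_1, N_2, N_u) = \sum_{j=N_1}^{N_2} \delta(j)\,\big[\hat{y}(t+j\mid t) - w(t+j)\big]^2
                 + \sum_{j=1}^{N_u} \lambda(j)\,\big[\Delta u(t+j-1)\big]^2
```

Here \hat{y}(t+j|t) is the j-step-ahead predicted output, w the reference trajectory, \Delta u the control increment, N_1, N_2 the prediction horizon limits, N_u the control horizon, and \delta, \lambda weighting sequences.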
Abstract:
RFID (Radio Frequency Identification) identifies objects using radio frequency and is a non-contact automatic identification technique. This technology has shown great practical value and potential in manufacturing, retail, logistics, and hospital automation. Unfortunately, a key problem affecting the adoption of RFID systems is information security. Recently, researchers have proposed solutions to security threats in RFID technology, among them several key management protocols. This master's dissertation presents a performance evaluation of the Neural Cryptography and Diffie-Hellman protocols in RFID systems. To this end, we measure the processing time inherent to these protocols. The tests were carried out on an FPGA (Field-Programmable Gate Array) platform with a Nios II embedded processor. The research methodology is based on aggregating knowledge for the development of new RFID systems through a comparative analysis of the two protocols. The main contributions of this work are a performance evaluation of the two protocols (Diffie-Hellman and Neural Cryptography) on an embedded platform and a survey of RFID security threats. According to the results, the Diffie-Hellman key agreement protocol is more suitable for RFID systems.
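A toy sketch of the Diffie-Hellman key agreement evaluated above; the modulus, generator, and roles below are deliberately small and purely illustrative, whereas a real deployment would use standardized group parameters and, in the dissertation's case, run on the Nios II/FPGA platform rather than in Python.

```python
# Toy Diffie-Hellman key agreement (illustrative only; insecure toy parameters).
import secrets

p = 0xFFFFFFFB  # hypothetical small prime modulus; real systems use >= 2048-bit groups
g = 5           # generator (assumed)

a = secrets.randbelow(p - 2) + 1       # "reader" private key
b = secrets.randbelow(p - 2) + 1       # "tag" private key

A = pow(g, a, p)                       # reader's public value, sent to the tag
B = pow(g, b, p)                       # tag's public value, sent to the reader

shared_reader = pow(B, a, p)           # (g^b)^a mod p
shared_tag    = pow(A, b, p)           # (g^a)^b mod p
assert shared_reader == shared_tag     # both sides derive the same session key material
```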
Abstract:
We propose a new approach to the reduction and abstraction of visual information for robot vision applications. Basically, we propose using a multi-resolution representation in combination with a moving fovea to reduce the amount of information in an image. We introduce the mathematical formalization of the moving-fovea approach and the mapping functions that support this model. Two indexes (resolution and cost) are proposed that can be useful for choosing the model's variables. With this new theoretical approach, it is possible to apply several filters, calculate disparity, and perform motion analysis in real time (less than 33 ms to process an image pair on a notebook with an AMD Turion Dual Core 2 GHz processor). As the main result, most of the time the moving fovea allows the robot to keep a region of interest visible in both images without physically moving its robotic devices. We validate the proposed model with experimental results.
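A rough sketch of the multi-resolution-with-moving-fovea idea described above, assuming a grayscale image stored as a NumPy array; the number of levels, window sizes, and decimation scheme are illustrative assumptions and do not reproduce the dissertation's mapping functions or its resolution and cost indexes.

```python
# Illustrative moving-fovea multiresolution sketch (not the dissertation's exact model).
import numpy as np

def moving_fovea_levels(image, fovea, levels=4, base_half=32):
    """Extract 'levels' windows centred on the fovea; each level doubles the
    field of view and halves the sampling density, so every level keeps
    roughly the same number of pixels (all parameter values are assumptions)."""
    fy, fx = fovea
    h, w = image.shape[:2]
    out = []
    for k in range(levels):
        half = base_half * (2 ** k)                    # half-size of the window at level k
        y0, y1 = max(0, fy - half), min(h, fy + half)
        x0, x1 = max(0, fx - half), min(w, fx + half)
        patch = image[y0:y1, x0:x1]
        out.append(patch[::2 ** k, ::2 ** k])          # coarser sampling at wider levels
    return out

# Example: a 480x640 synthetic image with the fovea near the centre.
levels = moving_fovea_levels(np.zeros((480, 640)), fovea=(240, 320))
```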
Abstract:
The incorporation of industrial automation into the medical area requires mechanisms for the safe and efficient establishment of communication between biomedical devices. One solution to this problem is the MP-HA (Multicycles Protocol to Hospital Automation), which defines a network segmented by beds and coordinated by an element called the Service Provider. The goal of this work is to model this Service Provider and to carry out a performance analysis of the activities it executes in the establishment and maintenance of hospital networks.
Abstract:
This work deals with an on-line control strategy based on the Robust Model Predictive Control (RMPC) technique applied to a real coupled-tanks system. The process consists of two coupled tanks and a pump that feeds liquid into the system. The control objective (a regulation problem) is to keep the tank levels at the chosen operating point even in the presence of disturbances. RMPC is a technique that allows the plant uncertainty to be explicitly incorporated into the problem formulation. The goal is to design, at each time step, a state-feedback control law that minimizes a 'worst-case' infinite-horizon objective function, subject to constraints on the control input. The existence of a feedback control law satisfying the input constraints is reduced to a convex optimization problem over linear matrix inequalities (LMIs). It is shown in this work that, for plant uncertainty described by a polytope, the feasible receding-horizon state-feedback control design is robustly stabilizing. The software implementation of the RMPC is done in Scilab, and its communication with the coupled-tanks system is carried out through the OLE for Process Control (OPC) industrial protocol.
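For reference, the standard worst-case (min-max) RMPC formulation with polytopic uncertainty, state feedback, and an infinite-horizon cost, which matches the description above; the weighting matrices and constraint bounds are generic symbols rather than the dissertation's specific values.

```latex
\min_{u(k+i|k)=F\,x(k+i|k)} \;\; \max_{[A\;B] \in \Omega} \;\; J_\infty(k),
\qquad
J_\infty(k) = \sum_{i=0}^{\infty}\Big[ x(k+i|k)^{T} Q\, x(k+i|k) + u(k+i|k)^{T} R\, u(k+i|k) \Big],
```

subject to input constraints such as |u_j(k+i|k)| \le u_{j,\max}. The polytope \Omega is the convex hull of the known vertex models, and at each sampling instant the existence of a feasible stabilizing gain F is verified by solving a set of LMIs.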
Abstract:
Conventional methods for solving the nonlinear blind source separation problem generally apply a series of restrictions to obtain the solution, often leading to an imperfect separation of the original sources and to a high computational cost. In this work, we propose an alternative measure of independence based on information theory and use artificial intelligence tools to solve blind source separation problems, first linear and then nonlinear. In the linear model, genetic algorithms and Rényi negentropy are applied as the measure of independence to find a separation matrix from linear mixtures of signals such as waveforms, audio, and images. A comparison is made with two types of Independent Component Analysis algorithms widespread in the literature. Subsequently, the same measure of independence is used as the cost function in the genetic algorithm to recover source signals that were mixed by nonlinear functions generated by a radial-basis-function artificial neural network. Genetic algorithms are powerful tools for global search and are therefore well suited to blind source separation problems. Tests and analyses are carried out through computer simulations.
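For reference, the Rényi entropy of order α and the usual negentropy-style contrast used as an independence measure are reproduced below in their generic form; the dissertation's specific estimator and genetic-algorithm encoding are not shown.

```latex
H_\alpha(Y) = \frac{1}{1-\alpha}\,\log \int p_Y(y)^{\alpha}\, dy, \qquad \alpha > 0,\ \alpha \neq 1,
\qquad
J(Y) = H\big(Y_{\text{gauss}}\big) - H(Y),
```

where Y_gauss is a Gaussian variable with the same covariance as Y. Maximizing the non-negative negentropy of the separator outputs drives them away from Gaussianity and, under the ICA assumptions, toward independence, which is why it can serve as the fitness function of the genetic algorithm.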
Abstract:
The public lighting system of the city of Natal/RN presents some recurring monitoring problems, since it is currently not possible to detect in real time which lamps are on throughout the day, or which are off or burned out at night. These factors reduce the efficiency of the services provided and waste energy and, consequently, financial resources that could be applied to the public lighting system itself. The purpose of this work is to create a prototype to replace the photoelectric relays currently used in public lighting, with the same function plus others: turning the lamps on or off remotely (control flexibility through specific supervisory algorithms), checking lamp status (on or off), and wireless communication with the system through the ZigBee® protocol. The development steps of this product and the tests carried out are reported as a way to validate and justify its use in public lighting.
Abstract:
The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research concentrates most of the investment in the area. Acquisition, processing, and interpretation of seismic data are the stages that make up a seismic study. Seismic processing in particular is focused on imaging the geological structures in the subsurface. Seismic processing has evolved significantly in recent decades due to the demands of the oil industry and to hardware advances that provided greater storage and digital processing capacity, enabling the development of more sophisticated processing algorithms such as those that use parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults, salt domes, and other structures of interest, such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy can be very time-consuming, due to the heuristics of the mathematical algorithms and the large amount of input and output data involved; it may take days, weeks, or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make these methods impracticable. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique. Furthermore, speedup and efficiency analyses were performed and, finally, the degree of algorithmic scalability was identified with respect to the technological advances expected from future processors.
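The speedup and efficiency metrics mentioned at the end of the abstract are the standard parallel-performance definitions:

```latex
S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p},
```

where T_1 is the serial execution time, T_p the execution time on p threads (OpenMP threads in this case), S_p the speedup, and E_p the parallel efficiency; ideal scaling gives S_p = p and E_p = 1.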
Abstract:
This work describes the development of a nonlinear control strategy for an electro-hydraulically actuated system. The system to be controlled is represented by a third-order ordinary differential equation subject to a dead-zone input. The control strategy is based on a nonlinear control scheme combined with an artificial intelligence algorithm, namely, feedback linearization and an artificial neural network. It is shown that, when such a hard nonlinearity and modeling inaccuracies are considered, the nonlinear technique alone is not enough to ensure good controller performance. Therefore, a compensation strategy based on artificial neural networks, which are widely used in systems that require the simulation of human inference, is adopted. Both the multilayer perceptron network and the radial basis function network are adopted and mathematically implemented within the control law, and their compensation abilities are compared. Furthermore, new intelligent control strategies for nonlinear and uncertain mechanical systems are proposed, showing that the combination of a nonlinear control methodology and artificial neural networks improves the overall control system performance. Numerical results are presented to demonstrate the efficacy of the proposed control system.
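One common form of a feedback linearization law for a third-order plant of the kind described above, with an added neural compensation term; this is a generic sketch in which \hat{f}, \hat{b}, \hat{d} and the gains k_0, k_1, k_2 are placeholders, not the dissertation's exact control law.

```latex
\dddot{x} = f(x,\dot{x},\ddot{x}) + b(x)\,u, \qquad
u = \frac{1}{\hat{b}(x)}\Big[-\hat{f}(x,\dot{x},\ddot{x}) + \dddot{x}_d
      - k_2\,\ddot{e} - k_1\,\dot{e} - k_0\,e - \hat{d}\Big],
```

where e = x - x_d is the tracking error and \hat{d} is the output of the neural network (MLP or RBF) trained to compensate for the dead zone and the modeling inaccuracies.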
Abstract:
The development of nonlinear controllers gained ground, both theoretically and in practical applications, once digital computers made the implementation of these methodologies possible. In comparison with the more widely used linear controllers, nonlinear controllers have the advantage of not requiring the linearity of the system to determine the control parameters, which allows more efficient control, especially when the system is highly nonlinear. An additional advantage is cost reduction, since efficient control with linear controllers requires sensors and actuators more refined than those needed with a nonlinear controller. Among the nonlinear control theories, sliding mode control stands out as a method with greater robustness in the face of uncertainties. It has already been confirmed that adopting compensation in the region of residual error further improves the performance of these controllers. Thus, this work describes the development of a nonlinear controller that combines a sliding mode control strategy with a fuzzy compensation technique. By implementing several fuzzy compensation strategies, we sought the one that provides the greatest efficiency for a system with a high degree of nonlinearity and uncertainty. An electro-hydraulic actuator was used as the case study, and the results point to two compensation configurations that allow a greater reduction of the residual error.
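For reference, a generic sliding mode control law with a smoothed switching term of the type that fuzzy compensation schemes typically act upon; the surface parameter λ, the boundary-layer width φ, and the compensation term are placeholders, not the dissertation's specific design.

```latex
s = \Big(\frac{d}{dt} + \lambda\Big)^{\!n-1} e, \qquad
u = \hat{u} - K\,\mathrm{sat}\!\big(s/\phi\big) + u_{\text{fuzzy}}(s),
```

where \hat{u} is the equivalent control estimated from the nominal model, K bounds the uncertainty, and u_fuzzy(s) is the fuzzy compensation term acting mainly inside the boundary layer to reduce the residual error.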
Abstract:
The constant search for biodegradable materials for applications in several fields shows that carnauba wax can be a viable alternative for the manufacture of biolubricants. Carnauba wax is unique among natural waxes in combining properties of great importance. In previous studies, the presence of metals in the wax composition was verified, which can harm the oxidative stability of lubricants. Considering these factors, this research was developed to evaluate iron removal from carnauba wax using microemulsion systems (ME) and to optimize parameters such as extraction pH, temperature, and extraction time, among others. The iron concentration was determined by atomic absorption spectrometry and, for this analysis, sample digestion in a microwave oven was used, which proved to be very efficient. Several analyses were performed to characterize the wax sample: attenuated total reflectance infrared spectroscopy (ATR-IR), thermogravimetry (TG), differential scanning calorimetry (DSC), energy-dispersive X-ray fluorescence (EDXRF), scanning electron microscopy (SEM), and melting point determination. The microemulsion systems were composed of coconut oil as surfactant, n-butanol as cosurfactant, kerosene and/or heptane as the oil phase, and distilled water as the aqueous phase. The pH chosen for this study was 4.5, and the metal extraction was performed in batch (finite-bath) experiments. To evaluate the ME extraction, a factorial design was carried out for the systems with heptane and kerosene as the oil phase, also investigating the influence of temperature, time, and wax/ME ratio, which showed a statistically significant response for iron extraction at the 95% confidence level. The best result was obtained at 60 °C, 10 hours of contact time, and a 1:10 wax/ME ratio, in both the kerosene and heptane systems. The best extraction occurred with kerosene as the oil phase, with 54% iron removal.