Abstract:
This graduate thesis proposes a model for asynchronously replicating heterogeneous databases. The model combines, in a systematic way and in a single project, different concepts, techniques and paradigms related to the areas of database replication and management of heterogeneous databases. One of the main advantages of replication is that it allows applications to keep processing information during the time intervals when they are off the network and to trigger database synchronization as soon as the network connection is reestablished. Accordingly, the model introduces a communication and update protocol that takes into account the asynchronous characteristics of the environment in which it is used. As part of the work, a tool was developed in the Java language, based on the model's premises, in order to process, test, simulate and validate the proposed model.
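The synchronization step can be illustrated with a small sketch. The fragment below is a minimal, hypothetical illustration of log-based asynchronous replication with last-writer-wins conflict resolution; it does not reproduce the communication and update protocol defined in the thesis.

```python
# Minimal, illustrative sketch of log-based asynchronous replication.
import time

class Replica:
    def __init__(self, name):
        self.name, self.data, self.log = name, {}, []    # key -> (value, timestamp)

    def write(self, key, value):
        ts = time.time()
        self.data[key] = (value, ts)
        self.log.append((key, value, ts))                # queued while offline

    def synchronize(self, other):
        """Exchange pending logs once the network connection is reestablished."""
        for key, value, ts in self.log + other.log:
            for replica in (self, other):
                current = replica.data.get(key)
                if current is None or current[1] < ts:   # last writer wins
                    replica.data[key] = (value, ts)
        self.log.clear()
        other.log.clear()

site_a, site_b = Replica("A"), Replica("B")
site_a.write("customer:1", {"name": "Ana"})      # written while A is offline
site_b.write("customer:2", {"name": "Bruno"})    # written while B is offline
site_a.synchronize(site_b)                       # network connection restored
print(sorted(site_a.data), sorted(site_b.data))
```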
Abstract:
We propose a multi-resolution approach for surface reconstruction from clouds of unorganized points representing an object surface in 3D space. The proposed method uses a set of mesh operators and simple rules for selective mesh refinement, with a strategy based on Kohonen's self-organizing map. Basically, a self-adaptive scheme is used to iteratively move the vertices of an initial simple mesh toward the set of points, ideally the object boundary. Successive refinement and motion of vertices are applied, leading to a more detailed surface in a multi-resolution, iterative scheme. Reconstruction was tested with several point sets, including different shapes and sizes. Results show that the generated meshes are very close to the final object shapes. We include performance measurements and discuss robustness.
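The vertex-motion step can be sketched as follows. This is a minimal illustration, under assumed learning rates and a toy ring topology, of the self-organizing-map update that pulls a winning vertex and its neighbors toward each sample point; the selective refinement operators are omitted.

```python
# Illustrative SOM-style vertex update for fitting a mesh to a point cloud.
import numpy as np

rng = np.random.default_rng(1)
points = rng.normal(size=(2000, 3))
points /= np.linalg.norm(points, axis=1, keepdims=True)  # target surface: unit sphere

vertices = rng.normal(scale=0.1, size=(50, 3))            # initial simple "mesh"
neighbors = {i: [(i - 1) % 50, (i + 1) % 50] for i in range(50)}  # toy topology

alpha, beta = 0.2, 0.05          # learning rates: winner and its neighbors
for epoch in range(20):
    for p in points:
        w = np.argmin(np.linalg.norm(vertices - p, axis=1))   # winning vertex
        vertices[w] += alpha * (p - vertices[w])
        for n in neighbors[w]:
            vertices[n] += beta * (p - vertices[n])

print("mean distance to surface:",
      np.abs(np.linalg.norm(vertices, axis=1) - 1.0).mean())
```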
Abstract:
New multimedia applications that use the Internet as a communication medium are pressing for the development of new technologies such as MPLS (Multiprotocol Label Switching) and DiffServ. These technologies introduce new and powerful features to the Internet backbone, such as the provision of QoS (Quality of Service) capabilities. However, to obtain true end-to-end QoS it is not enough to implement such technologies in the network core; it becomes indispensable to extend such improvements to the access networks, which is the aim of several works currently under development. To contribute to this process, this Thesis presents RSVP-SVC (Resource Reservation Protocol Switched Virtual Connection), an extension of RSVP-TE. RSVP-SVC is presented herein as a means to support true end-to-end QoS by extending the scope of MPLS. Thus, a Switched Virtual Connection (SVC) service is specified for use in the context of an MPLS User-to-Network Interface (MPLS UNI), able to efficiently establish and activate Label Switched Paths (LSP), starting from the access routers, that satisfy the QoS requirements demanded by the applications. RSVP-SVC was specified in Estelle, a Formal Description Technique (FDT) standardized by ISO. The editing, compilation, verification and simulation of RSVP-SVC were carried out with the EDT (Estelle Development Toolset) software. The benefits and the most important issues to be considered when using the proposed protocol are also discussed.
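As a rough illustration of the behavior such a protocol automates, the toy fragment below sketches hop-by-hop LSP establishment in the spirit of RSVP-TE: a Path message records the route downstream and labels are distributed upstream with the reservation. The route, label values and data structures are illustrative assumptions, unrelated to the Estelle specification of RSVP-SVC.

```python
# Toy sketch of LSP setup: Path downstream, labels assigned on the Resv upstream.
route = ["access-router", "edge-LSR", "core-LSR", "egress-LSR"]

def setup_lsp(route, bandwidth_mbps):
    # Path message: sender descriptor carried downstream along the route
    path_state = [{"node": n, "bw": bandwidth_mbps} for n in route]
    # Resv message: labels distributed upstream (egress chooses first)
    label, lfib = 100, {}
    for hop in reversed(path_state):
        lfib[hop["node"]] = {"in_label": label, "reserved_bw": hop["bw"]}
        label += 1
    return lfib

for node, entry in setup_lsp(route, bandwidth_mbps=2.0).items():
    print(node, entry)
```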
Abstract:
Combinatorial optimization problems have attracted a large number of researchers in search of approximate solutions, since it is generally accepted that they cannot be solved in polynomial time. Initially, these solutions were based on heuristics. Currently, metaheuristics are used more for this task, especially those based on evolutionary algorithms. The two main contributions of this work are: the creation of a heuristic called "Operon" for the construction of the information chains necessary for the implementation of transgenetic (evolutionary) algorithms, mainly using statistical methodology, namely Cluster Analysis and Principal Component Analysis; and the use of statistical analyses that are adequate for evaluating the performance of the algorithms developed to solve these problems. The aim of the Operon is to construct good-quality dynamic information chains to promote an "intelligent" search in the space of solutions. The Traveling Salesman Problem (TSP) is used as the target application of a transgenetic algorithm known as ProtoG. A strategy is also proposed for the renewal of part of the chromosome population, triggered by adopting a minimum threshold on the coefficient of variation of the fitness function of the individuals, computed over the population. Statistical methodology is used to evaluate the performance of four algorithms: the proposed ProtoG, two memetic algorithms and a Simulated Annealing algorithm. Three performance analyses of these algorithms are proposed. The first is carried out through Logistic Regression, based on the probability of the algorithm under test finding an optimal solution for a TSP instance. The second is carried out through Survival Analysis, based on the probability distribution of the execution time observed until an optimal solution is reached. The third is carried out by means of a non-parametric Analysis of Variance, considering the Percent Error of the Solution (PES), the percentage by which the solution found exceeds the best solution available in the literature. Six experiments were conducted on sixty-one instances of the Euclidean TSP with sizes of up to 1,655 cities. The first two experiments deal with the adjustment of four parameters used in the ProtoG algorithm in an attempt to improve its performance. The last four were undertaken to evaluate the performance of ProtoG in comparison to the three other algorithms. For these sixty-one instances, statistical tests provide evidence that ProtoG performs better than the three other algorithms in fifty instances. In addition, for the thirty-six instances considered in the last three experiments, in which the performance of the algorithms was evaluated through the PES, the average PES obtained with ProtoG was less than 1% in almost half of these instances, reaching its largest average, equal to 3.52%, for an instance of 1,173 cities. Therefore, ProtoG can be considered a competitive algorithm for solving the TSP, since it is not rare to find average PES values greater than 10% reported in the literature for instances of this size.
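The PES used in the third analysis follows directly from its definition in the text, as sketched below (the tour lengths in the example are illustrative values).

```python
# Percent Error of the Solution (PES): percentage by which the solution found
# exceeds the best solution known in the literature.
def pes(found_cost: float, best_known_cost: float) -> float:
    return 100.0 * (found_cost - best_known_cost) / best_known_cost

# e.g. a tour of length 52_300 against a best-known length of 50_778
print(f"PES = {pes(52_300, 50_778):.2f}%")
```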
Abstract:
Large efforts have been made by the scientific community on tasks involving the locomotion of mobile robots. To execute this kind of task, the robot must be given the ability to navigate through the environment in a safe way, that is, without colliding with objects. To achieve this, it is necessary to implement strategies that make it possible to detect obstacles. In this work, we deal with this problem by proposing a system that is able to collect sensory information and to estimate the likelihood of obstacles occurring in the mobile robot's path. Stereo cameras positioned in parallel to each other, in a structure coupled to the robot, are employed as the main sensory device, making it possible to generate a disparity map. Code optimizations and a strategy for data reduction and abstraction are applied to the images, resulting in a substantial gain in execution time. This makes it possible for the high-level decision processes to perform obstacle avoidance in real time. The system can be employed in situations where the robot is remotely operated, as well as in situations where it depends only on itself to generate trajectories (the autonomous case).
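A minimal sketch of the disparity-based obstacle test is shown below, using OpenCV block matching on a parallel stereo pair; the file names, matcher parameters and decision threshold are illustrative assumptions, not the optimized implementation described in the work.

```python
# Illustrative disparity map + crude "obstacle ahead" test on the central region.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point scale

h, w = disparity.shape
center = disparity[h // 3: 2 * h // 3, w // 3: 2 * w // 3]
near = center[center > 0]                                # valid disparities only
obstacle_ahead = near.size > 0 and np.median(near) > 30  # large disparity = close object
print("obstacle ahead:", obstacle_ahead)
```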
Abstract:
Some approaches take advantage of unused computational resources in Internet nodes, i.e., users' machines. In recent years, peer-to-peer (P2P) networks have gained momentum, mainly due to their support for scalability and fault tolerance. However, current P2P architectures present some problems, such as node overhead due to message routing, a great number of node reconfigurations when the network topology changes, routing of traffic inside a specific network even when the traffic is not directed to a machine of that network, and the lack of correspondence between the proximity of nodes in the P2P overlay and their proximity in the IP network. Although some architectures use information about node distance in the IP network, they rely on methods that require dynamic information. In this work we propose a P2P architecture that fixes the aforementioned problems. It is composed of three parts. The first part consists of a basic P2P architecture, called SGrid, which maintains a relationship between nodes in the P2P network and their position in the IP network; it assigns adjacent key regions to nodes of the same organization. The second part is a protocol called NATal (Routing and NAT application layer) that extends the basic architecture in order to remove from the nodes the responsibility of routing messages. The third part consists of a special kind of node, called LSP (Lightware Super-Peer), which is responsible for maintaining the P2P routing table. In addition, this work also presents a simulator that validates the architecture and a module of the NATal protocol to be used in Linux routers.
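The idea of assigning adjacent key regions to nodes of the same organization can be sketched as follows; the key-space size and partitioning strategy are illustrative assumptions, not the actual SGrid algorithm.

```python
# Illustrative assignment of contiguous key regions per organization.
KEY_SPACE = 2 ** 16

def assign_regions(organizations):
    """organizations: dict mapping org name -> list of node ids."""
    total_nodes = sum(len(nodes) for nodes in organizations.values())
    region = KEY_SPACE // total_nodes
    table, start = {}, 0
    for org, nodes in organizations.items():          # contiguous block per organization
        for node in nodes:
            table[node] = (org, range(start, start + region))
            start += region
    return table

orgs = {"univ-a": ["a1", "a2", "a3"], "isp-b": ["b1", "b2"]}
for node, (org, keys) in assign_regions(orgs).items():
    print(f"{node} ({org}): keys {keys.start}..{keys.stop - 1}")
```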
Abstract:
This work develops a robustness analysis with respect to modeling errors, applied to indirect control strategies using Artificial Neural Networks (ANNs) of the multilayer feedforward perceptron class with on-line training based on the gradient method (backpropagation). The presented schemes are called Indirect Hybrid Control and Indirect Neural Control. Two Robustness Theorems are presented, one for each proposed indirect control scheme, which allow the computation of the maximum steady-state control error that will occur due to the modeling error caused by the neural identifier, either for the closed-loop configuration with a conventional controller (Indirect Hybrid Control) or for the closed-loop configuration with a neural controller (Indirect Neural Control). Since the robustness analysis is restricted to the steady-state behavior of the plant, this work also includes a stability analysis, adapted to the multilayer perceptron class of ANNs trained with the backpropagation algorithm, to assure the convergence and stability of the neural systems used. On the other hand, the boundedness of the initial transient behavior is assured by the assumption that the plant is BIBO (Bounded Input, Bounded Output) stable. The Robustness Theorems were tested on the proposed indirect control strategies, applied to the regulation control of simulated examples using nonlinear plants, and the results are presented.
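For reference, the fragment below sketches the kind of on-line gradient (backpropagation) training assumed for a neural identifier, on a toy nonlinear plant; the network size, learning rate and plant are illustrative assumptions, not the controllers analyzed in the work.

```python
# Illustrative on-line backpropagation of a single-hidden-layer identifier.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.5, size=(8, 2)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(1, 8)), np.zeros(1)
eta = 0.05

def plant(y_prev, u):                    # toy nonlinear plant to be identified
    return 0.6 * y_prev + 0.4 * np.tanh(u)

y = 0.0
for k in range(2000):
    u = np.sin(0.05 * k)
    x = np.array([y, u])                 # regressor: previous output and current input
    h = np.tanh(W1 @ x + b1)             # hidden layer
    y_hat = (W2 @ h + b2)[0]             # identifier output
    y_next = plant(y, u)
    e = y_next - y_hat                   # instantaneous identification (modeling) error
    W2 += eta * e * h[None, :]           # gradient step on the output layer
    b2 += eta * e
    delta_h = e * W2[0] * (1 - h ** 2)   # error backpropagated through tanh
    W1 += eta * np.outer(delta_h, x)
    b1 += eta * delta_h
    y = y_next

print("final identification error:", abs(e))
```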
Abstract:
This master's dissertation presents the development of a fault detection and isolation system based on neural networks. The system is composed of two parts: an identification subsystem and a classification subsystem. Both subsystems use neural network techniques based on the multilayer perceptron. Two approaches for the identification stage were analyzed. The fault classifier uses only the residue signals produced by the identification subsystem. To validate the proposal, we carried out simulations and real experiments on a level system with two water reservoirs. Several faults were generated on this plant and the proposed fault detection system presented very acceptable behavior. At the end of this work we highlight the main difficulties found in the real tests that do not arise when working only in simulation environments.
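The residue-based detection idea can be sketched as follows, with a toy single-reservoir model and an assumed detection threshold; it is not the two-reservoir plant or the neural identifier used in the dissertation.

```python
# Illustrative residue generation and thresholding for fault detection.
import numpy as np

def model_level(level, inflow, dt=1.0, k_out=0.05):
    # simple tank: level change = inflow minus outflow through an orifice
    return level + dt * (inflow - k_out * np.sqrt(max(level, 0.0)))

level_meas = 10.0   # "measured" level (simulated plant)
level_pred = 10.0   # level predicted by the identification model
for t in range(60):
    inflow = 0.3
    level_pred = model_level(level_pred, inflow)
    leak = 0.2 if t > 30 else 0.0                     # fault (leak) injected at t = 31
    level_meas = model_level(level_meas, inflow - leak)
    residue = level_meas - level_pred                 # residue signal fed to the classifier
    if abs(residue) > 0.5:                            # detection threshold
        print(f"t={t}: fault detected, residue={residue:.2f}")
        break
```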
Abstract:
Breast cancer, despite being one of the leading causes of death among women worldwide, is a disease that can be cured if diagnosed early. One of the main techniques used in the detection of breast cancer is Fine Needle Aspiration (FNA), which, depending on the clinical case, requires the analysis of several medical specialists to develop the diagnosis. However, such diagnoses and second opinions have been hampered by the geographical dispersion of physicians and/or the difficulty of reconciling schedules to work together. Within this reality, this PhD thesis uses computational intelligence to support medical decision-making in remote diagnosis. For that purpose, it presents a fuzzy method to assist the diagnosis of breast cancer, able to process and classify data extracted from breast tissue obtained by FNA. This method is integrated into a virtual environment for collaborative remote diagnosis, whose model was developed to provide for the incorporation of Pre-Diagnosis Modules to support medical decision-making. In the development of the fuzzy method, knowledge acquisition was carried out by extracting and analyzing numerical data from a gold-standard database and through interviews and discussions with medical experts. The method was tested and validated with real cases and, according to the sensitivity and specificity achieved (correct diagnosis of malignant and benign tumors, respectively), the results obtained were satisfactory, considering the opinions of the physicians and the quality standards for breast cancer diagnosis, and comparing them with other studies involving breast cancer diagnosis by FNA.
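The fuzzy classification step can be illustrated with a minimal sketch: ramp-shaped membership functions over two assumed FNA features and two toy rules. The features, membership limits and rules are illustrative and are not the validated rule base of the thesis.

```python
# Illustrative fuzzy rules for two hypothetical FNA features (scale 1..10).
def ramp_up(x, a, b):       # 0 below a, 1 above b (e.g. "large")
    return min(max((x - a) / (b - a), 0.0), 1.0)

def ramp_down(x, a, b):     # 1 below a, 0 above b (e.g. "small")
    return 1.0 - ramp_up(x, a, b)

def diagnose(cell_size, shape_irregularity):
    size_large = ramp_up(cell_size, 4.0, 8.0)
    shape_irregular = ramp_up(shape_irregularity, 4.0, 8.0)
    size_small = ramp_down(cell_size, 2.0, 5.0)
    shape_regular = ramp_down(shape_irregularity, 2.0, 5.0)
    # Rule 1: IF size is large AND shape is irregular THEN malignant
    malignant = min(size_large, shape_irregular)
    # Rule 2: IF size is small AND shape is regular THEN benign
    benign = min(size_small, shape_regular)
    return "malignant" if malignant > benign else "benign"

print(diagnose(cell_size=8.0, shape_irregularity=9.0))   # -> malignant
print(diagnose(cell_size=1.0, shape_irregularity=2.0))   # -> benign
```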
Abstract:
This work presents a set of intelligent algorithms with the purpose of correcting calibration errors in sensors and reducing the periodicity of their calibrations. The algorithms were designed using Artificial Neural Networks due to their great capacity for learning, adaptation and function approximation. Two approaches are presented. The first one uses Multilayer Perceptron networks to approximate the various shapes of the calibration curve of a sensor that loses calibration at different points in time. This approach requires knowledge of the sensor's operating time, but this information is not always available. To overcome this requirement, another approach using Recurrent Neural Networks is proposed. Recurrent Neural Networks have a great capacity for learning the dynamics of the system on which they are trained, so they can learn the dynamics of a sensor's loss of calibration. Knowing the sensor's operating time or its decalibration dynamics, it is possible to determine how far out of calibration a sensor is and to correct its measured value, thus providing a more exact measurement. The algorithms proposed in this work can be implemented in a Foundation Fieldbus industrial network environment, which offers good device programmability through its function blocks, making it possible to apply them to the measurement process.
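The first approach can be sketched as follows, with an MLP that maps the raw reading and the operating time to a corrected value on synthetic drift data; the drift model, network size and data are assumptions, not the Foundation Fieldbus implementation.

```python
# Illustrative MLP-based correction of a sensor that drifts with operating time.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
true_value = rng.uniform(0.0, 100.0, size=5000)
op_time = rng.uniform(0.0, 1000.0, size=5000)            # hours in service
drift = 0.02 * op_time + 1e-5 * op_time * true_value     # synthetic miscalibration
raw_reading = true_value + drift

X = np.column_stack([raw_reading, op_time])               # inputs: reading + operating time
mlp = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
mlp.fit(X, true_value)

sample = np.array([[60.0 + 0.02 * 500 + 1e-5 * 500 * 60.0, 500.0]])
print("corrected reading:", mlp.predict(sample)[0])       # should be close to 60.0
```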
Abstract:
This work proposes a computational environment for teaching control systems, called ModSym. The software implements a graphical interface for modeling linear physical systems and shows, step by step, the processing required to obtain mathematical models of these systems. A physical system can be represented in the software in three different ways: as a graphical diagram built from elements of the electrical, translational mechanical, rotational mechanical and hydraulic domains; as a bond graph; or as a signal flow graph. Once the system is represented, ModSym can compute transfer functions of the system in symbolic form using Mason's rule. The software also computes transfer functions in numerical form, as well as parametric sensitivity functions. The work also proposes an algorithm to obtain the signal flow graph of a physical system from its bond graph. This algorithm, together with the system analysis methodology known as the Network Method, allowed the use of Mason's rule to compute transfer functions of the systems modeled in the software.
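A minimal sketch of Mason's rule on a signal flow graph with symbolic edge gains is given below; the example graph (a unity-feedback loop around a gain G with sensor H) is hypothetical and is not a ModSym model.

```python
# Illustrative Mason's gain rule over a signal flow graph with symbolic gains.
import itertools
from functools import reduce
from operator import mul
import networkx as nx
import sympy as sp

def product(seq):
    return reduce(mul, seq, 1)

def mason_gain(graph, source, sink):
    loops = [list(zip(c, c[1:] + c[:1])) for c in nx.simple_cycles(graph)]
    loop_gain = [product([graph[u][v]["gain"] for u, v in L]) for L in loops]
    loop_nodes = [{u for u, _ in L} for L in loops]

    def delta(excluded_nodes):
        # graph determinant restricted to loops not touching the excluded nodes
        idx = [i for i in range(len(loops)) if not (loop_nodes[i] & excluded_nodes)]
        d = 1
        for k in range(1, len(idx) + 1):
            for combo in itertools.combinations(idx, k):
                if all(not (loop_nodes[a] & loop_nodes[b])
                       for a, b in itertools.combinations(combo, 2)):
                    d += (-1) ** k * product([loop_gain[i] for i in combo])
        return d

    numerator = 0
    for path in nx.all_simple_paths(graph, source, sink):
        edges = list(zip(path, path[1:]))
        path_gain = product([graph[u][v]["gain"] for u, v in edges])
        numerator += path_gain * delta(set(path))      # forward path times its cofactor
    return sp.simplify(numerator / delta(set()))

G, H = sp.symbols("G H")
sfg = nx.DiGraph()
sfg.add_edge("R", "E", gain=1)
sfg.add_edge("E", "Y", gain=G)
sfg.add_edge("Y", "E", gain=-H)
print(mason_gain(sfg, "R", "Y"))     # expected: G/(G*H + 1)
```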
Abstract:
Due to the current need of industry to integrate production data originating from several sources and to transform it into useful information for decision making, there is an ever-growing demand for information visualization systems that support this functionality. On the other hand, a common practice nowadays, due to the high competitiveness of the market, is the development of industrial systems with characteristics of modularity, distribution, flexibility, scalability, adaptability, interoperability, reusability and web access. These characteristics provide extra agility and make it easier to adapt to frequent changes in market demand. Based on the arguments above, this work specifies a component-based architecture, together with the development of a system based on that architecture, for the visualization of industrial data. The system was conceived to supply on-line information and, optionally, historical information about production variables. This work shows that the developed component-based architecture meets the requirements for obtaining a robust, reliable and easily maintainable system, and is therefore in agreement with industrial needs. The architecture also allows components to be added, removed or updated at run time, through a web-based component manager, further streamlining the process of adapting and updating the system.
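The run-time component management described above can be sketched with a minimal registry; the interface and component names are illustrative assumptions, not the actual web-based component manager.

```python
# Illustrative registry allowing visualization components to be added,
# removed or replaced while the system is running.
from typing import Callable, Dict

class ComponentRegistry:
    def __init__(self):
        self._components: Dict[str, Callable[[dict], str]] = {}

    def register(self, name: str, render: Callable[[dict], str]) -> None:
        self._components[name] = render          # add or hot-swap a component

    def unregister(self, name: str) -> None:
        self._components.pop(name, None)

    def render_all(self, plant_data: dict) -> None:
        for name, render in self._components.items():
            print(f"[{name}] {render(plant_data)}")

registry = ComponentRegistry()
registry.register("gauge", lambda d: f"pressure = {d['pressure']} bar")
registry.register("trend", lambda d: f"last temperatures = {d['temps'][-3:]}")
registry.render_all({"pressure": 4.2, "temps": [70, 71, 73, 75]})
registry.unregister("gauge")                      # removed at run time
registry.render_all({"pressure": 4.1, "temps": [70, 71, 73, 74]})
```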
Abstract:
This work presents the development of new microwave structures, filters and a high-gain antenna, through the cascading of frequency selective surfaces that use Dürer and Minkowski fractal patches as elements, in addition to an element obtained from the combination of two other simple elements, the cross dipole and the square spiral. Frequency selective surfaces (FSS) constitute a large area of telecommunications and have been widely used due to their low cost, low weight and ability to integrate with other microwave circuits. They are especially important in several applications, such as airplanes, antenna systems, radomes, rockets, missiles, etc. FSS applications in high frequency ranges have been investigated, as well as applications of cascaded (multilayer) structures and active FSS. In this work, we present simulated and measured transmission characteristics of cascaded (multilayer) structures, aiming to investigate the behavior of the operating bandwidth, one of the major problems presented by frequency selective surfaces. Comparisons are made between simulated results, obtained using commercial software such as Ansoft Designer v3, and results measured in the laboratory. Finally, some suggestions are presented for future works on this subject.
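The effect of cascading two layers can be sketched with a simple transmission-line (ABCD-matrix) model, each FSS layer represented as a shunt series-LC admittance and the spacer as a dielectric line section; the circuit values are illustrative assumptions, not the Ansoft Designer simulations or the measurements reported here.

```python
# Illustrative ABCD-matrix cascade of two band-stop FSS layers with a spacer.
import numpy as np

Z0 = 377.0                                   # free-space wave impedance (ohms)

def shunt(Y):                                # FSS layer as a shunt admittance
    return np.array([[1, 0], [Y, 1]], dtype=complex)

def line(beta_l, Zc):                        # dielectric spacer between the layers
    return np.array([[np.cos(beta_l), 1j * Zc * np.sin(beta_l)],
                     [1j * np.sin(beta_l) / Zc, np.cos(beta_l)]])

def s21(abcd):                               # transmission for matched terminations
    A, B, C, D = abcd.ravel()
    return 2 / (A + B / Z0 + C * Z0 + D)

L, Cap, er, spacing = 2e-9, 0.12e-12, 2.2, 3e-3      # illustrative circuit-model values
for f in np.linspace(5e9, 15e9, 11):
    w = 2 * np.pi * f
    Y = 1 / (1j * w * L + 1 / (1j * w * Cap))        # series-LC (band-stop) layer
    spacer = line(w / 3e8 * np.sqrt(er) * spacing, Z0 / np.sqrt(er))
    total = shunt(Y) @ spacer @ shunt(Y)             # two cascaded FSS layers
    print(f"{f / 1e9:5.1f} GHz  |S21| = {abs(s21(total)):.3f}")
```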
Abstract:
This work presents a comparative analysis between a fuzzy controller coupled to a neural PID tuned by a genetic algorithm (GA) and several traditional control techniques, all of them applied to a system of tanks (a nonlinear second-order model). In order to support the comparative analysis and to validate the controllers being compared, simulations were carried out for several control techniques (conventional PID tuned by GA, neural PID (PIDN) tuned by GA, fuzzy PI, two fuzzy controllers attached to a neural PID tuned by GA, and a three-input MISO fuzzy controller attached to a PIDN tuned by GA) to provide a basis of comparison with the proposed controller. After all the tests, some control structures were selected from the techniques tested in the simulation stage (conventional PID tuned by GA, fuzzy PI, two fuzzy controllers attached to a PIDN tuned by GA, and the three-input MISO fuzzy controller attached to a PIDN tuned by GA) to be implemented on the real system of tanks. Both modes of operation, simulated and real, were very important to establish a solid basis for the comparisons and the validations shown by the results.
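The "PID tuned by GA" idea can be sketched as below, with a small genetic algorithm minimizing the integral of squared error on a toy second-order plant; the plant, GA settings and cost function are illustrative assumptions, not the tank model used in the work.

```python
# Illustrative GA search over (Kp, Ki, Kd) minimizing the ISE of a toy plant.
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.01, 10.0

def ise(gains):
    kp, ki, kd = gains
    y = yd = integ = prev_e = 0.0          # plant output, its derivative, integrator
    cost = 0.0
    for _ in range(int(T / dt)):
        e = 1.0 - y                        # unit step reference
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        ydd = u - 2.0 * yd - y             # toy plant: y'' + 2 y' + y = u
        yd += ydd * dt
        y += yd * dt
        cost += e * e * dt                 # integral of squared error
    return cost

pop = rng.uniform(0.0, 10.0, size=(30, 3))
for gen in range(40):
    fitness = np.array([ise(ind) for ind in pop])
    elite = pop[np.argsort(fitness)[:10]]                       # selection
    parents = elite[rng.integers(0, 10, size=(20, 2))]
    children = parents.mean(axis=1) + rng.normal(0, 0.3, (20, 3))  # crossover + mutation
    pop = np.vstack([elite, np.clip(children, 0.0, 20.0)])

best = pop[np.argmin([ise(ind) for ind in pop])]
print("best (Kp, Ki, Kd):", best)
```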
Abstract:
The increase in the number of attacks on computer networks has been addressed by adding resources directly to the active routing equipment of these networks. In this context, firewalls have become consolidated as essential elements in the process of controlling packets entering and leaving a network. With the advent of intrusion detection systems (IDS), efforts have been made to incorporate signature-based packet filtering into traditional firewalls. This integration adds IDS functions (such as signature-based filtering, until then a passive element) to the functions already present in the firewall. Despite the efficiency this incorporation brings to the blocking of attacks with known signatures, application-level filtering introduces a natural delay in the analyzed packets and can reduce the machine's capacity to filter the remaining packets, because of the resources demanded by this level of filtering. This work presents models that address this problem by re-routing packets for analysis by a sub-network with specific filtering functions. The suggested implementation of this model aims at reducing the performance problem and at opening space for scenarios in which other non-conventional filtering solutions (spam blocking, P2P traffic control/blocking, etc.) can be inserted into the filtering sub-network without overloading the main firewall of a corporate network.
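The re-routing decision at the main firewall can be sketched as follows; the gateways, ports and the rule that selects traffic for the filtering sub-network are illustrative assumptions.

```python
# Illustrative next-hop decision: deep-inspection traffic is diverted to the
# filtering sub-network instead of being analyzed by the main firewall.
FILTERING_SUBNET_GW = "10.0.1.2"                 # hosts signature/spam/P2P filters
DEFAULT_GW = "192.168.0.1"

DEEP_INSPECTION_PORTS = {25, 80, 110}            # e.g. SMTP, HTTP, POP3 traffic

def next_hop(packet):
    """Return the gateway the main firewall should forward the packet to."""
    if packet["dst_port"] in DEEP_INSPECTION_PORTS:
        return FILTERING_SUBNET_GW               # re-route for specific filtering
    return DEFAULT_GW                            # fast path, no extra delay

for pkt in ({"src": "172.16.0.7", "dst_port": 25},
            {"src": "172.16.0.9", "dst_port": 443}):
    print(pkt, "->", next_hop(pkt))
```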