522 results for Soldagem eletrica


Relevance: 10.00%

Abstract:

Launching centers are designed for scientific and commercial activities with aerospace vehicles. Rocket Tracking Systems (RTS) are part of the infrastructure of these centers and are responsible for collecting and processing the trajectory data of the vehicles. Generally, Parabolic Reflector Radars (PRRs) are used in RTS. However, it is possible to use radars with antenna arrays, or Phased Arrays (PAs), the so-called Phased Array Radars (PARs). In a PAR, the excitation signal of each radiating element of the array can be adjusted to perform electronic control of the radiation pattern, improving the functionality and maintenance of the system. In implementation and reuse projects of PARs, however, the modeling is subject to various combinations of excitation signals, producing a complex optimization problem due to the large number of available solutions. In this case, it is possible to use offline optimization methods, such as Genetic Algorithms (GAs), to compute the problem solutions, which are then stored for online applications. Hence, the Genetic Algorithm with Maximum-Minimum Crossover (GAMMC) optimization method was used to develop the GAMMC-P algorithm, which optimizes the modeling step of radiation pattern control for planar PAs. The GAMMC differs from a GA with conventional crossover because it crosses the fittest individuals with the least fit ones in order to enhance genetic diversity. Thus, the GAMMC prevents premature convergence, increases population fitness, and reduces processing time. The GAMMC-P uses a reconfigurable algorithm with multiple objectives, a different coding scheme, and the MMC genetic operator. The test results show that GAMMC-P meets the proposed requirements for different operating conditions of a planar PAR.
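
To make the maximum-minimum pairing concrete, the sketch below mates the fittest individuals with the least fit ones before crossover. The function names and the single-point operator are illustrative assumptions, not the thesis's actual implementation.

```python
import random

def gammc_pairing(population, fitness):
    """Maximum-Minimum pairing: mate the k-th fittest individual with
    the k-th least fit one to preserve genetic diversity."""
    # Sort indices from fittest to least fit.
    order = sorted(range(len(population)),
                   key=lambda i: fitness[i], reverse=True)
    pairs = []
    for k in range(len(order) // 2):
        best = population[order[k]]           # k-th fittest
        worst = population[order[-(k + 1)]]   # k-th least fit
        pairs.append((best, worst))
    return pairs

def crossover(parent_a, parent_b, rate=0.9):
    """Single-point crossover on excitation-signal vectors (lists)."""
    if random.random() > rate:
        return list(parent_a), list(parent_b)
    cut = random.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:], parent_b[:cut] + parent_a[cut:]
```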

Relevance: 10.00%

Abstract:

Currently, with the growing number of robotic devices aimed at restoring mobility to people who have suffered some type of spinal cord injury, it has become necessary to develop new tools to make such equipment more adaptable, safe, and autonomous. For robotic orthoses that assist the locomotion of paraplegic people to fulfill their function, they must be able to reproduce the lost movements with maximum fidelity and safety in environments that may contain obstacles of different kinds, such as holes, stairs, and curbs. Lower-limb robotic orthoses are able to walk and to climb and descend steps; however, these movements usually do not adapt to the environment: a robotic orthosis designed to climb a step of a given height will probably not be able to perform this task with the same safety when facing a taller step. To solve this and other problems, this work presents a Locomotion Assistance System (SAL, from the Portuguese Sistema de Auxílio à Locomoção) equipped with a step planner and a generator of angular references with anthropomorphic characteristics for the Ortholeg robotic orthosis. The SAL uses anthropometric data of the user to generate a personalized gait pattern; in this way, the orthosis is able to adapt the step length so as not to collide with obstacles present in the environment, to cross holes of various sizes, and to climb and descend stairs and curbs with different heights and lengths. To develop the locomotion assistance system, path planning techniques originally used in biped robots were adapted. Several experiments are presented showing the Ortholeg orthosis performing movements with anthropomorphic characteristics for different walking distances and three types of obstacles: step, hole, and curb. The autonomy gained with the proposed planning system makes robotic orthoses easier to use and also ensures greater safety for the user.
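
A toy sketch of the step-length adaptation idea: a uniform step length is stretched or shrunk so that the last step lands at the obstacle's edge. The real planner also generates anthropomorphic joint-angle references, which this sketch omits entirely.

```python
def plan_steps(distance_to_obstacle_m, nominal_step_m, min_step_m):
    """Toy step planner: adjust a uniform step length so the walker
    arrives exactly at the obstacle edge (illustrative only)."""
    if distance_to_obstacle_m <= 0:
        return []
    # Number of steps closest to the nominal gait.
    n = max(1, round(distance_to_obstacle_m / nominal_step_m))
    step = distance_to_obstacle_m / n
    # If adaptation shrinks the step below a safe minimum, take one less step.
    if step < min_step_m and n > 1:
        n -= 1
        step = distance_to_obstacle_m / n
    return [step] * n
```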

Relevance: 10.00%

Abstract:

The great amount of data generated as a result of automation and process supervision in industry leads to two problems: a large demand for disk storage and the difficulty of streaming these data over a telecommunications link. Lossy data compression algorithms emerged in the 1990s with the goal of solving these problems, and industries consequently started to use them in industrial supervision systems to compress data in real time. These algorithms were designed to eliminate redundant and undesired information in an efficient and simple way. However, their parameters must be set for each process variable, which becomes impracticable in systems that monitor thousands of variables. In this context, this work proposes the Adaptive Swinging Door Trending algorithm, an adaptation of the Swinging Door Trending in which the main parameters are adjusted dynamically by analyzing the signal trend in real time. A comparative performance analysis of lossy data compression algorithms applied to time series of process variables and dynamometer cards is also presented. The algorithms used in the comparison were piecewise linear methods and transform-based methods.
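
For reference, a minimal sketch of the original (non-adaptive) Swinging Door Trending that the proposed algorithm builds on; the adaptive variant would tune the deviation `dev` online, which is omitted here.

```python
def swinging_door(samples, dev):
    """Classic Swinging Door Trending (SDT) compression sketch.
    `samples` is a list of (t, y) with strictly increasing t;
    `dev` is the compression deviation. Returns the archived points."""
    if not samples:
        return []
    archive = [samples[0]]
    t0, y0 = samples[0]           # current anchor
    slope_up = float("inf")       # tightest upper "door" slope so far
    slope_lo = float("-inf")      # tightest lower "door" slope so far
    prev = samples[0]
    for t, y in samples[1:]:
        dt = t - t0
        slope_up = min(slope_up, (y + dev - y0) / dt)
        slope_lo = max(slope_lo, (y - dev - y0) / dt)
        if slope_lo > slope_up:
            # Doors crossed: archive the previous point, restart from it.
            archive.append(prev)
            t0, y0 = prev
            dt = t - t0
            slope_up = (y + dev - y0) / dt
            slope_lo = (y - dev - y0) / dt
        prev = (t, y)
    archive.append(prev)
    return archive
```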

Relevance: 10.00%

Abstract:

NAVSTAR/GPS (NAVigation System with Timing And Ranging / Global Positioning System), better known as GPS, is a satellite-based navigation system developed by the U.S. Department of Defense in the mid-1970s. Initially created for military purposes, GPS was later adapted for civilian use. To compute a position fix, the receiver must acquire the signals of the visible satellites. This stage is extremely important, since it is responsible for detecting the visible satellites and estimating their respective Doppler frequencies and initial code phases. The process can demand considerable processing time and must be implemented efficiently. Several techniques are in use today, but most of them put design issues such as computational complexity, acquisition time, and computational resources in conflict. Aiming to balance these issues, a method was developed that reduces the complexity of the acquisition process through a few strategies, namely reducing the Doppler search effort and the number of samples and the signal length used, in addition to exploiting parallelism. The strategy is divided into two steps: a coarse search over the entire search space and a fine search restricted to the region identified by the first step. Because of the coarse search, the threshold of the conventional algorithm was no longer acceptable, so a new threshold was established based on the variance of the correlation peaks. First, a low-precision search is performed, comparing the variance of the five largest correlation peaks found. If the variance exceeds a certain threshold, the region of the largest peak becomes a detection candidate. Finally, this region is refined to confirm the detection. The results show a significant reduction in complexity and execution time, without the need to resort to highly complex algorithms.
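
A rough numpy sketch of the coarse stage described above: FFT-based circular correlation over a sparse Doppler grid, followed by a variance test on the largest peaks. The grid, the choice of peaks, and the threshold handling are illustrative assumptions, not the thesis's exact procedure.

```python
import numpy as np

def coarse_acquisition(signal, code, fs, doppler_bins, var_threshold):
    """Coarse GPS acquisition sketch over a sparse Doppler grid.
    `code` is one period of the local PRN replica sampled at `fs`;
    `doppler_bins` should contain at least five candidate frequencies."""
    n = len(code)
    t = np.arange(n) / fs
    code_fft = np.conj(np.fft.fft(code))
    best = (0.0, None, None)  # (peak value, doppler, code phase)
    peaks = []
    for fd in doppler_bins:
        # Wipe off the carrier at the candidate Doppler frequency.
        baseband = signal[:n] * np.exp(-2j * np.pi * fd * t)
        corr = np.abs(np.fft.ifft(np.fft.fft(baseband) * code_fft))
        k = int(np.argmax(corr))
        peaks.append(corr[k])
        if corr[k] > best[0]:
            best = (corr[k], fd, k)
    # Detection candidate only if the five largest peaks spread enough:
    # a single dominant peak makes their variance large.
    top5 = np.sort(peaks)[-5:]
    detected = np.var(top5) > var_threshold
    return detected, best
```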

Relevance: 10.00%

Abstract:

This work concerns a refinement of a suboptimal dual controller for discrete-time systems with stochastic parameters. The dual property means that the control signal is chosen so that estimation of the model parameters and regulation of the output signal are optimally balanced. The control signal is computed so as to minimize the variance of the output around a reference value one step ahead, with additional terms in the loss function. The idea is to add simple terms that depend on the covariance matrix of the parameter estimates two steps ahead. An algorithm is used to adaptively adjust the tunable parameter lambda at each step. The performance of the proposed controller is evaluated through Monte Carlo simulations.
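
The shape of such a loss can be sketched as follows; the predictor helpers and the specific covariance term are placeholders standing in for the thesis's formulation, not a reproduction of it.

```python
import numpy as np

def dual_cost(u, predict_output, predict_covariance, reference, lam):
    """Suboptimal dual-control loss sketch: one-step output variance
    around the reference plus a lambda-weighted term on the parameter
    covariance two steps ahead. `predict_output(u)` and
    `predict_covariance(u)` are assumed model-based predictors."""
    y_pred = predict_output(u)        # E[y(t+1) | u]
    p_next = predict_covariance(u)    # P(t+2 | u), parameter covariance
    return (y_pred - reference) ** 2 + lam * np.trace(p_next)
```

Minimizing the first term alone gives the cautious (non-dual) controller; the covariance term rewards control signals that also excite the system and improve the parameter estimates.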

Relevance: 10.00%

Abstract:

Recently, several evolutionary computation techniques have been applied in areas such as parameter estimation of linear and nonlinear dynamical processes, including processes subject to uncertainties. This motivates the use of algorithms such as Particle Swarm Optimization (PSO) in these fields. However, little is known about the convergence of this algorithm, and most existing analyses and studies have focused on experimental results. The goal of this work is therefore to propose a new structure for PSO that allows its convergence to be analyzed analytically. To this end, PSO is restructured in matrix form and reformulated as a piecewise linear system. The pieces are analyzed separately, and the insertion of a forgetting factor is proposed that guarantees that the most significant part of this system has eigenvalues inside the unit circle. The convergence of the algorithm as a whole is also analyzed, using an almost-sure convergence criterion applicable to switched systems. Next, experiments are performed to verify the behavior of the eigenvalues after the insertion of the forgetting factor. Then, traditional parameter identification algorithms are combined with the matrix PSO, so that the identification results become as good as, or better than, identification with PSO alone or with the traditional algorithms alone. The results show the convergence of the particles within a bounded region, and that the functions obtained by combining the matrix PSO with the conventional algorithms generalize better for the system at hand. The conclusion is that hybridization, despite restricting the search for a fitter PSO particle, guarantees a minimum performance for the algorithm and also makes it possible to improve the result obtained with the traditional algorithms, allowing the approximated system to be represented over a wider range of frequencies.
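
An illustrative one-step update of PSO written in the state-space spirit described above, with a forgetting factor; here the factor `lam` is assumed to damp the inertia term, which may differ from the thesis's exact placement of it.

```python
import numpy as np

def matrix_pso_step(x, v, pbest, gbest, w, c1, c2, lam):
    """One PSO step viewed as a (random) linear system on [x; v]:
    [x; v]_{k+1} = A_k [x; v]_k + B_k [pbest; gbest]_k, where A_k
    depends on the random coefficients drawn at step k. The forgetting
    factor lam (0 < lam < 1) shrinks the inertia, pulling the relevant
    eigenvalues of A_k toward the unit circle's interior."""
    phi1 = c1 * np.random.rand(*x.shape)
    phi2 = c2 * np.random.rand(*x.shape)
    v_new = lam * w * v + phi1 * (pbest - x) + phi2 * (gbest - x)
    x_new = x + v_new
    return x_new, v_new
```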

Relevance: 10.00%

Abstract:

The great interest in nonlinear system identification is mainly due to the fact that a large number of real systems are complex and need to have their nonlinearities considered so that their models can be successfully used in applications of control, prediction, inference, among others. This work evaluates the application of Fuzzy Wavelet Neural Networks (FWNN) to identify nonlinear dynamical systems subjected to noise and outliers. Generally, these elements cause negative effects on the identification procedure, resulting in erroneous interpretations of the dynamical behavior of the system. The FWNN combines in a single structure the ability of fuzzy logic to deal with uncertainties, the multiresolution characteristics of wavelet theory, and the learning and generalization abilities of artificial neural networks. Usually, the learning procedure of these neural networks is realized by a gradient-based method that uses the mean squared error as its cost function. This work proposes the replacement of this traditional function by an Information Theoretic Learning similarity measure called correntropy. With this similarity measure, higher-order statistics can be considered during the FWNN training process. For this reason, the measure is more suitable for non-Gaussian error distributions and makes the training less sensitive to the presence of outliers. In order to evaluate this replacement, FWNN models are obtained in two identification case studies: a real nonlinear system, consisting of a multisection tank, and a simulated system based on a model of the human knee joint. The results demonstrate that the application of correntropy as the cost function of the error backpropagation algorithm makes the identification procedure using FWNN models more robust to outliers. However, this is only achieved if the Gaussian kernel width of the correntropy is properly adjusted.
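
A minimal sketch of the correntropy criterion used in place of the MSE (the kernel normalization constant is dropped, since it does not affect the optimum):

```python
import numpy as np

def correntropy(errors, sigma):
    """Sample correntropy of the error signal with a Gaussian kernel of
    width sigma: V = mean(exp(-e^2 / (2 sigma^2)))."""
    return np.mean(np.exp(-errors ** 2 / (2.0 * sigma ** 2)))

def mcc_loss(errors, sigma):
    """Maximum Correntropy Criterion cost: minimizing (1 - V) maximizes
    correntropy. Because the Gaussian kernel saturates, large errors
    (outliers) contribute almost nothing to the gradient, which is what
    makes the training robust."""
    return 1.0 - correntropy(errors, sigma)
```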

Relevance: 10.00%

Abstract:

Microstrip antennas, in their simplest form, consist of a ground plane and a dielectric substrate that supports a conducting patch. As these antennas have some limitations, this work presents a study of anisotropic substrates, as well as some results for microstrip antennas with a circular patch, aiming to overcome these limitations, especially in 4G applications. Anisotropic substrates are those in which the electric permittivity and the magnetic permeability are represented by second-order tensors. The study consists of a theoretical analysis of the substrates and the development of a mathematical formalism, the Transverse Transmission Line (TTL) method, for the application of these substrates to microstrip antennas. Among the substrates used in this study are ferrimagnetic materials and metamaterials, with which some miniaturization of the antennas is achieved. For the circular-patch antennas, arrays and modified ground planes are considered in order to improve parameters, in particular gain and bandwidth. Several simulations were carried out and antennas were built so that the measured values could be compared with the simulated ones.
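
For the isotropic baseline, the cavity-model resonant frequency of a circular patch can be computed as below; the anisotropic and metamaterial cases studied in the work require the full TTL formalism and are not captured by this sketch.

```python
import math

C0 = 299_792_458.0  # speed of light (m/s)

def circular_patch_f_res(radius_m, eps_r, height_m):
    """Dominant TM11-mode resonant frequency of a circular microstrip
    patch (cavity model with the usual fringing-field correction on
    the radius). Isotropic substrate only."""
    x11 = 1.84118  # first zero of the derivative of J1
    a = radius_m
    # Effective radius accounting for fringing fields.
    a_eff = a * math.sqrt(
        1 + (2 * height_m / (math.pi * a * eps_r))
        * (math.log(math.pi * a / (2 * height_m)) + 1.7726)
    )
    return x11 * C0 / (2 * math.pi * a_eff * math.sqrt(eps_r))
```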

Relevance: 10.00%

Abstract:

Generation systems using renewable sources are becoming increasingly popular due to the growing demand for electricity. Currently, renewable sources cooperate with conventional generation, owing to the limitations of the system in delivering the required power, the need to reduce the unwanted effects of fossil-fuel sources (pollution), and the difficulty of building new transmission and/or distribution lines. This cooperation takes place through distributed generation. This work therefore proposes a control strategy for the interconnection of a distributed photovoltaic (PV) generation system to a three-phase power grid through an LCL connection filter. Power-quality compensation at the point of common coupling (PCC) is performed by ensuring that the grid supplies or consumes only active power and that its currents have low distortion. Unlike traditional techniques, which require harmonic-detection schemes, the proposed technique performs the harmonic compensation without such schemes, controlling the output currents of the system indirectly. For effective control of the DC (Direct Current) bus voltage, the robust dual-mode controller DSMPI (Dual-Sliding Mode-Proportional Integral) is used, which behaves as a sliding-mode SM-PI (Sliding Mode-Proportional Integral) controller during transients and as a conventional PI (Proportional Integral) controller in steady state. For the current control, a repetitive control strategy is used, in which double sequence controllers (DSC) are tuned to the fundamental component and to the fifth and seventh harmonics. The output phase currents are aligned with the phase angle of the utility voltage vector, obtained with an SRF-PLL (Synchronous Reference Frame Phase-Locked Loop). In order to extract the maximum power from the PV array, an MPPT (Maximum Power Point Tracking) algorithm is used without the need for additional sensors. Experimental results are presented to demonstrate the effectiveness of the proposed control system.
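
As a generic stand-in for the MPPT stage (the work uses a sensor-reduced algorithm whose details are not given here), a textbook perturb-and-observe step looks like this:

```python
def perturb_and_observe(v, i, state, step=0.5):
    """Textbook perturb-and-observe MPPT sketch. `v` and `i` are the
    measured PV array voltage and current; `state` carries the previous
    voltage, power, and perturbation direction. Returns the new voltage
    reference and the updated state."""
    p = v * i
    v_prev, p_prev, direction = state
    # If the last perturbation reduced power, reverse direction.
    if p < p_prev:
        direction = -direction
    v_ref = v + direction * step
    return v_ref, (v, p, direction)

# Usage: state = (v0, v0 * i0, +1); call once per control period.
```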

Relevance: 10.00%

Abstract:

In control loops, valve stiction is a very common problem. It is generally one of the main causes of poor performance in industrial systems, and its most commonly observed effect is oscillation of the process variables. To circumvent these undesirable effects, friction compensators have been proposed to reduce the variability of the output. This work analyzes friction compensation in pneumatic control valves using the feedback linearization technique. The valve model includes both the dead zone and the jump. Simulations show that the use of this more complete model results in controllers with superior performance. The method is also compared, through simulations, with the method known as Constant Reinforcement (CR), widely used for this problem.
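
A sketch of the static dead-zone inverse on which such compensation builds; the stiction jump included in the work's valve model, and the dynamic part of the feedback linearization, are omitted here.

```python
def dead_zone_inverse(u, d_plus, d_minus):
    """Dead-zone inverse sketch (unit slope assumed): shift the
    controller output past the dead band so the valve responds as if
    the dead zone were not there. d_plus and d_minus are the
    nonnegative edges of the dead band."""
    if u > 0:
        return u + d_plus     # push past the positive dead band
    if u < 0:
        return u - d_minus    # push past the negative dead band
    return 0.0
```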

Relevance: 10.00%

Abstract:

Information extraction is a frequent and relevant problem in digital signal processing. In the past few years, different methods have been used for the parameterization of signals and for obtaining efficient descriptors. When the signals possess cyclostationary properties, the Cyclic Autocorrelation Function (CAF) and the Spectral Cyclic Density (SCD) can be used to extract second-order cyclostationary information. However, second-order cyclostationary information is poor for non-Gaussian signals, since the cyclostationary analysis in this case should comprise higher-order statistical information. This work proposes a new mathematical tool for higher-order cyclostationary analysis based on the correntropy function. Specifically, the cyclostationary analysis is revisited from an information-theoretic viewpoint, and the Cyclic Correntropy Function (CCF) and the Cyclic Correntropy Spectral Density (CCSD) are defined. It is also proven analytically that the CCF contains information regarding second- and higher-order cyclostationary moments, being a generalization of the CAF. The performance of these new functions in the extraction of higher-order cyclostationary characteristics is analyzed in a wireless communication system subject to non-Gaussian noise.
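
A direct sample estimator of the CCF follows from the definition sketched above: the Fourier coefficients, over cycle frequency, of the instantaneous Gaussian-kernel correntropy at lag tau (normalization conventions vary).

```python
import numpy as np

def cyclic_correntropy(x, tau, alphas, sigma):
    """Sample estimate of the Cyclic Correntropy Function
    V(alpha, tau) ~ mean_n[ k(x[n], x[n - tau]) exp(-j 2 pi alpha n) ],
    with a Gaussian kernel k of width sigma. `alphas` is an iterable of
    normalized cycle frequencies."""
    n = np.arange(tau, len(x))
    k_inst = np.exp(-np.abs(x[n] - x[n - tau]) ** 2 / (2.0 * sigma ** 2))
    return np.array([
        np.mean(k_inst * np.exp(-2j * np.pi * a * n)) for a in alphas
    ])
```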

Relevance: 10.00%

Abstract:

Lung cancer is one of the most common types of cancer and has the highest mortality rate. Patient survival is highly correlated with early detection. Computed Tomography greatly assists the early detection of lung cancer by offering a minimally invasive medical diagnostic tool. However, the large amount of data per examination makes interpretation difficult, which leads to nodules being missed by human radiologists. This thesis presents the development of a computer-aided detection (CADe) tool for lung nodules in Computed Tomography studies. The system, called LCD-OpenPACS (Lung Cancer Detection - OpenPACS), is meant to be integrated into the OpenPACS system and meets all the requirements for use in the workflow of health facilities belonging to SUS (the Brazilian public health system). LCD-OpenPACS makes use of image processing techniques (Region Growing and Watershed), feature extraction (Histogram of Oriented Gradients), dimensionality reduction (Principal Component Analysis), and a classifier (Support Vector Machine). The system was tested on 220 cases, totaling 296 pulmonary nodules, reaching a sensitivity of 94.4% with 7.04 false positives per case. The total processing time was approximately 10 minutes per case. The system detected pulmonary nodules (solitary, juxtavascular, ground-glass opacity, and juxtapleural) between 3 mm and 30 mm.
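
The classification chain named in the abstract (HOG features, PCA, SVM) can be sketched with scikit-image and scikit-learn; the hyperparameters below are illustrative assumptions, not the thesis's values, and 2D patches stand in for the CT volumes.

```python
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def hog_features(candidate_patches):
    """HOG descriptor per candidate nodule patch (2D grayscale arrays)."""
    return np.array([
        hog(p, orientations=9, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2))
        for p in candidate_patches
    ])

# Candidate classification stage: PCA for dimensionality reduction
# followed by an SVM, mirroring the chain described in the abstract.
classifier = make_pipeline(PCA(n_components=50), SVC(kernel="rbf", C=1.0))
# classifier.fit(hog_features(train_patches), train_labels)
# predictions = classifier.predict(hog_features(test_patches))
```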

Relevance: 10.00%

Abstract:

The Brazil Telehealth Networks Program was established by the Ministry of Health in 2007. Its main objective is to support professionals in Primary Health Care (PHC) by offering educational qualification, resulting in more favorable conditions for keeping professionals in remote areas. The formulation and management of telehealth services are performed by scientific and technical centers operated by public institutions of higher education, which are responsible for providing tools and services in the context of their regions. However, one of the problems generated by this decentralization is the development of various tools with different languages and architectures, without any regulation or integration of information with the Ministry of Health. Aiming to solve this problem, we propose the specification, implementation, and validation of an architectural model for the development and distribution of software tools for the Unified Health System. The proposed architecture enables tools developed in one telehealth center to be shared among the other centers, thereby preventing the unnecessary use of resources.

Relevance: 10.00%

Abstract:

The localization of mobile robots in indoor environments faces many problems, such as accumulated errors and the constant changes that occur in these places. A technique called global vision localizes robots using images acquired by cameras placed so as to cover the area where the robots move. Localization is obtained through marks placed on top of the robot: algorithms search the images for the mark and, upon finding it, obtain the position and orientation of the robot. Such techniques used to face difficulties related to hardware capacity, which limited their execution in real time. However, the technological advances of recent years have changed that situation, enabling the development and execution of such algorithms at full capacity. The proposal specified here is to develop a mobile robot localization system for indoor environments using global vision to track the robot and acquire the images, all in real time, in order to improve the localization of the robot inside the environment. Being a localization method that uses only current information in its calculations, localization from images suits the needs of this kind of place. Besides, it yields more accurate results in real time, which is exactly what the museum application needs.
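
A minimal OpenCV sketch of the mark-based localization: segment a colored mark on top of the robot and recover its image position and orientation from blob moments. The color mark and the single-blob assumption are illustrative, not the system's actual mark design.

```python
import cv2
import numpy as np

def locate_robot(frame, lower_hsv, upper_hsv):
    """Find a colored mark in a BGR camera frame and return its image
    position (cx, cy) and orientation theta, or None if not found."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    # Orientation from the blob's second-order central moments.
    theta = 0.5 * np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"])
    return (cx, cy), theta
```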

Relevance: 10.00%

Abstract:

This work presents an analysis of the behavior of some algorithms commonly found in the stereo correspondence literature when applied to full HD images (1920x1080 pixels), in order to establish, within the trade-off between precision and runtime, in which applications these methods can best be used. The images are obtained by a system composed of a stereo camera coupled to a computer via a capture board. The OpenCV library is used for the computer vision and image processing operations involved. The algorithms discussed are a local block-matching method based on the Sum of Absolute Differences (SAD), a global technique based on energy minimization via graph cuts, and a so-called semi-global matching technique. The criteria for the analysis are the processing time, the heap memory consumption, and the mean absolute error of the generated disparity maps.
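
Two of the three methods compared are available directly in OpenCV, as the sketch below shows (parameters are illustrative; disparity counts must be multiples of 16). The graph-cuts method is not part of OpenCV's current main modules, so it is not shown.

```python
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# SAD-based local block matching.
bm = cv2.StereoBM_create(numDisparities=128, blockSize=21)
disp_bm = bm.compute(left, right)

# Semi-global block matching.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                             blockSize=5)
disp_sgbm = sgbm.compute(left, right)

# Both return 16x fixed-point disparities; convert to float before
# comparing against ground truth for the mean absolute error.
disp_bm = disp_bm.astype("float32") / 16.0
disp_sgbm = disp_sgbm.astype("float32") / 16.0
```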