1000 results for Arquitectura, Sistemas e Redes
Abstract:
Next-generation communication networks, which include optical-fibre technologies, have been an area of intense research and development. FTTx technologies, FTTH in particular, are already a reality in thousands of homes across many countries. This dissertation describes a study of GPON (Gigabit Passive Optical Network) technology carried out at EMACOM, Telecomunicações da Madeira, Unipessoal, Lda. First, the necessary requirements are listed, both on the customer side and on the provider side. The ITU-T G.984 standard is then presented, covering its layers, such as the physical layer and the transmission convergence layer, along with the frame structure and the standard's main characteristics. The reader is then introduced to the elements that make up a GPON network, whether passive or active equipment. Next, the planning of the proposed project is described: the various architecture types used in FTTH were studied, and the best option for the urban area under consideration was selected. ArcMap, from ESRI Portugal - Sistemas e Informação Geográfica, S.A., was used to create a database and a diagram of the network itself on a map of the parish of São Martinho. AutoCAD was used to produce several network schematics of the chosen area, covering the feeder network and the distribution network. All the information shown in the schematics was entered into an Excel spreadsheet so it can be searched quickly. A bill of materials was then drawn up with the quantities required for deployment. To verify the feasibility of the project in terms of optical power, a power budget was calculated: the most distant link is considered in order to check whether there is enough optical power to cover it.
If that link is viable, then the other optical distribution points, at shorter distances, will also be sufficiently fed in optical terms. The various types of losses in optical fibres are also covered, along with the definitions of link power and link loss budgets. Finally, simulation tests were carried out with the OptiSystem program, simulating the link used for the power budget calculation, so that values could be compared and the system's performance verified through the resulting eye diagram.
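The power budget check described above amounts to comparing the available optical budget with the accumulated path loss. A minimal sketch follows; all values (launch power, receiver sensitivity, attenuation, splice/connector/splitter losses, safety margin) are illustrative assumptions, not figures from the thesis:

```python
# Hypothetical GPON downstream power budget check (illustrative values only).
TX_POWER_DBM = 3.0          # assumed OLT launch power
RX_SENSITIVITY_DBM = -28.0  # assumed ONT receiver sensitivity

def link_loss_db(length_km, splices, connectors, splitter_loss_db,
                 fiber_att_db_per_km=0.35, splice_loss_db=0.1,
                 connector_loss_db=0.5, margin_db=3.0):
    """Total loss of the optical path plus a safety margin, in dB."""
    return (length_km * fiber_att_db_per_km
            + splices * splice_loss_db
            + connectors * connector_loss_db
            + splitter_loss_db
            + margin_db)

# Worst-case (most distant) link: 10 km, 4 splices, 2 connectors, 1:32 splitter.
loss = link_loss_db(length_km=10, splices=4, connectors=2, splitter_loss_db=17.5)
budget = TX_POWER_DBM - RX_SENSITIVITY_DBM  # 31.0 dB available
print(loss, budget, loss <= budget)         # link is viable if loss fits the budget
```

Because the most distant link bounds all others, a passing check here implies every closer distribution point is also adequately fed, which is exactly the argument made above.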
Abstract:
The Internet is responsible for the emergence of a new television paradigm: IPTV (Television over IP). This service differs from other television models in that it gives users a high degree of interactivity, with personalized control over the content they want to watch. It also makes it possible to offer an unlimited number of channels, as well as access to Video on Demand (VoD) content. IPTV offers a range of features supported by a complex architecture and a converged network that integrates voice, data and video services. IPTV technology exploits the characteristics of the Internet to the fullest, using Quality of Service mechanisms. It has also emerged as a revolution within the television landscape, opening the door to new investment by telecommunications companies. The Internet likewise makes it possible to place telephone calls over the IP network. This service, called VoIP (Voice over IP), has been in operation for some time. This creates the opportunity to offer the end consumer a bundle combining Internet, VoIP and IPTV services, called a Triple Play service. Triple Play has forced a review of the entire transport network, to prepare it to support the service efficiently (QoS), resiliently (failure recovery) and optimally (traffic engineering). In telecommunications networks, both link failures and congestion can interfere with the services delivered to end consumers. Survivability mechanisms are applied to guarantee service continuity even when a failure occurs. The goal of this dissertation is to propose a network architecture capable of supporting the Triple Play service efficiently, resiliently and optimally, through optimal or near-optimal routing.
Within the scope of this work, the impact of routing strategies that guarantee the efficiency, survivability and optimization of existing IP networks is analysed, and the maximum number of clients a given network can sustain at peak load is determined. The work covers the concepts of Triple Play services, access networks, core networks, Quality of Service, MPLS (Multi-Protocol Label Switching), traffic engineering and failure recovery. The conclusions drawn from simulations run on the NS-2.33 network simulator (Network Simulator version 2.33) supported the proposal of a network architecture capable of supporting the Triple Play service efficiently, resiliently and optimally.
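The peak-load client limit determined above is, at its simplest, a division of usable link capacity by per-subscriber demand. A minimal sketch; the per-service bit rates and target utilization below are assumptions, not the dissertation's figures:

```python
# Illustrative peak-capacity estimate for a Triple Play access/core link.
def max_clients(link_capacity_mbps, iptv_mbps=4.0, voip_mbps=0.1,
                data_mbps=2.0, utilization=0.8):
    """Clients supported at peak while keeping the link below a target utilization.

    iptv_mbps:  assumed SD IPTV stream rate per subscriber
    voip_mbps:  assumed VoIP call rate per subscriber
    data_mbps:  assumed peak Internet data rate per subscriber
    """
    per_client = iptv_mbps + voip_mbps + data_mbps
    return int(link_capacity_mbps * utilization / per_client)

print(max_clients(1000))  # clients a 1 Gbps link could serve under these assumptions
```

In practice the dissertation's limit comes from NS-2.33 simulations with QoS and failure-recovery mechanisms active, so a figure like this is only a back-of-the-envelope upper bound.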
Abstract:
Computer networks have grown over the last decades in diversity and complexity, while always striving to maintain a high level of communication quality. A variety of network management tools now exist that cover, fully or partially, the different stages of those networks' life cycle. Given the size and heterogeneity of these networks, their development and operation involve a growing number of tools, which adds complexity. Moreover, most existing tools are independent and mutually incompatible, making the job of network architects and managers harder. This points to the need for an abstraction, or generic approach, that enables interoperability between different network tools and environments, in order to simplify and optimize their management. The work presented in this thesis introduces the proposal and implementation of a framework for integrating heterogeneous network tools, supporting the creation of a management environment that covers the life cycle of a communication network. The interoperability provided by the framework is implemented through a newly proposed language for describing networks and all their components, including topology information and the contexts in which a network may exist. The remaining contributions of this thesis concern (i) the implementation of management tools that support the construction of network scenarios using the proposed language, and (ii) the modelling of several network scenarios with different technologies, including Quality of Service aspects, to validate the use of the proposed framework in providing interoperability between different network management tools.
The proposed network description language focuses on describing network scenarios while supporting the different phases of a network's existence, from design through operation, maintenance and upgrade. One advantage of this approach is that it allows several kinds of network usage information to coexist in a single description, each independent of the others, which promotes compatibility and direct reuse of information between tools, overcoming the main limitation found in existing languages and tools and strengthening the possibilities for interoperability.
Abstract:
Renewable energy has become a viable, complementary alternative to fossil fuels, since such sources are virtually inexhaustible, clean and economically advantageous. One of the main problems associated with renewable energy sources is their intermittency. It makes production impossible to control and affects power quality. In energy microproduction systems, this problem can be mitigated by adding intermediate storage, which stores the surplus extracted from the renewable sources; the surplus can then be used as an auxiliary resource for feeding loads, or as a means of stabilizing and optimizing the performance of the electrical power grid (REE, Rede Elétrica de Energia), avoiding abrupt variations in the energy transferred to it. Microproduction systems with intermediate storage can be considered fundamental to implementing the smart grid concept (RIE, Rede Inteligente de Energia), since they are decentralized energy systems that allow better management of electrical energy and a consequent reduction of costs. In this work, a renewable energy microproduction system compatible with photovoltaic and wind sources was developed, with a battery bank as intermediate storage. The main goal of the system is to follow the power references imposed by the RIE, regardless of weather conditions, by drawing on the energy stored in the batteries, thereby avoiding voltage and frequency disturbances in the REE. The behaviour of the system was studied under abrupt variations of the renewable source, disturbances in the REE voltage, and the introduction of linear and non-linear loads. An experimental prototype with photovoltaic panels was built, on which several power quality parameters were recorded.
The batteries responded in approximately 25 μs for each watt of power requested by the RIE.
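At its core, the reference-following behaviour described above is a power balance: the batteries cover the gap between the grid's power reference and the instantaneous renewable production. A minimal sketch, with illustrative values:

```python
# Power balance enforced by the intermediate storage (illustrative sketch).
def battery_power(reference_w, renewable_w):
    """Power the battery bank must supply so the grid sees the reference.

    Positive result: batteries discharge to cover a renewable shortfall.
    Negative result: batteries absorb the renewable surplus.
    """
    return reference_w - renewable_w

print(battery_power(1000.0, 600.0))   # cloud passes over: batteries supply 400 W
print(battery_power(1000.0, 1400.0))  # sunny spell: batteries store 400 W
```

The prototype's measured figure above (roughly 25 μs per watt requested) is the speed with which this balance is restored after a step in the reference or in the source.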
Abstract:
Forecasting is the basis for making strategic, tactical and operational business decisions. In financial economics, several techniques have been used over the past decades to predict the behavior of assets. There are thus several methods to assist in time series forecasting; however, conventional modeling techniques, such as statistical models and those based on theoretical mathematical models, have produced unsatisfactory predictions, increasing the number of studies into more advanced prediction methods. Among these, Artificial Neural Networks (ANNs) are a relatively new and promising method for business forecasting that has attracted much interest in the financial field and has been used successfully in a wide variety of financial modeling applications, in many cases proving superior to statistical ARIMA-GARCH models. In this context, this study examined whether ANNs are a more appropriate method for predicting the behavior of capital market indices than traditional time series analysis methods. For this purpose, a quantitative study based on financial economic indices was developed, along with two supervised-learning feedforward ANN models, whose structures consisted of 20 inputs, 90 neurons in a single hidden layer, and one output (the Ibovespa). These models used backpropagation, a tangent sigmoid activation function, and a linear output function.
Since the aim was to analyze how well the Artificial Neural Network method predicts the Ibovespa, this analysis was performed by comparing its results against a GARCH(1,1) time series model developed for the purpose. Once both methods (ANN and GARCH) had been applied, the results were analyzed by comparing the forecasts with the historical data and by studying the forecast errors through the MSE, RMSE, MAE, standard deviation, Theil's U, and forecast encompassing tests. The models developed by means of ANNs had lower MSE, RMSE and MAE than the GARCH(1,1) model, and the Theil's U test indicated that all three models have smaller errors than a naïve forecast. Although the ANN based on returns had worse precision indicators than the ANN based on prices, the forecast encompassing test rejected the hypothesis that one model is better than the other, indicating that the ANN models have a similar level of accuracy. It was concluded that, for the data series studied, the ANN models forecast the Ibovespa more appropriately than traditional time series models, represented by the GARCH model.
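The error measures used in the comparison above (MSE, RMSE and MAE) can be stated compactly. The series below is a toy example, not the Ibovespa data:

```python
import math

# Forecast error metrics used in the model comparison (toy data).
def mse(actual, pred):
    """Mean squared error."""
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    """Root mean squared error."""
    return math.sqrt(mse(actual, pred))

def mae(actual, pred):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

actual = [100.0, 102.0, 101.0, 105.0]   # illustrative index levels
pred   = [ 99.0, 103.0, 100.0, 104.0]   # illustrative one-step forecasts
print(mse(actual, pred), rmse(actual, pred), mae(actual, pred))
```

Lower values on all three metrics for the ANN than for GARCH(1,1) is what the study reports; Theil's U additionally normalizes the error against a naïve (no-change) forecast.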
Abstract:
In this work, artificial neural networks (ANNs) based on supervised and unsupervised algorithms were investigated for the study of rheological parameters of solid pharmaceutical excipients, with the aim of developing computational tools for manufacturing solid dosage forms. Among the four supervised neural networks investigated, the best learning performance was achieved by a feedforward multilayer perceptron whose architecture consisted of eight neurons in the input layer, sixteen neurons in the hidden layer and one neuron in the output layer. Learning and predictive performance for the angle of repose was poor, while the Carr index and Hausner ratio (CI and HR, respectively) showed very good fitting and learning capacity; HR and CI were therefore considered suitable descriptors for the next stage of development of the supervised ANNs. Clustering capacity was evaluated for five unsupervised strategies. Networks based on purely competitive unsupervised strategies, the classic "Winner-Take-All", "Frequency-Sensitive Competitive Learning" and "Rival-Penalized Competitive Learning" (WTA, FSCL and RPCL, respectively), were able to cluster the database, but the classification was very poor, with severe errors grouping data with conflicting properties into the same cluster, or even onto the same neuron; moreover, the criterion these networks adopted for the clustering could not be established. The Self-Organizing Map (SOM) and Neural Gas (NG) networks showed better clustering capacity. Both recognized the two major data groupings, corresponding to lactose (LAC) and cellulose (CEL). However, the SOM made some errors when classifying data from the minority excipients: magnesium stearate (EMG), talc (TLC) and attapulgite (ATP). The NG network, in turn, performed a very consistent classification of the data and resolved the SOM's misclassifications, making it the most appropriate network for classifying the data in this study.
The use of NG networks in pharmaceutical technology had not previously been reported. NG therefore has great potential for use in software for automated classification of pharmaceutical powders, and as a new tool for data mining and clustering in drug development.
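The two flowability descriptors retained for the supervised networks, the Carr index and the Hausner ratio, follow directly from bulk and tapped densities; the density values below are illustrative, not measurements from the study:

```python
# Carr index (CI) and Hausner ratio (HR): powder flowability descriptors
# computed from bulk and tapped densities (illustrative values in g/mL).
def carr_index(bulk_density, tapped_density):
    """CI = 100 * (tapped - bulk) / tapped; higher means poorer flow."""
    return 100.0 * (tapped_density - bulk_density) / tapped_density

def hausner_ratio(bulk_density, tapped_density):
    """HR = tapped / bulk; values near 1 indicate free-flowing powder."""
    return tapped_density / bulk_density

ci = carr_index(0.45, 0.55)
hr = hausner_ratio(0.45, 0.55)
print(ci, hr)
```

Because both descriptors are smooth functions of just two measured densities, it is plausible that they were easier for the networks to learn than the angle of repose, as reported above.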
Abstract:
Post-dispatch analysis of signals obtained from digital disturbance recorders provides important information for identifying and classifying disturbances in power systems, aiming at a more efficient management of the supply. Digital signal processing techniques can help enhance the task of identifying and classifying disturbances, providing an automatic assessment. The Wavelet Transform has become a very efficient tool for the analysis of voltage or current signals obtained immediately after disturbances occur in the network. This work presents a methodology based on the Discrete Wavelet Transform to implement this process. It compares the energy distribution curves of signals with and without disturbances, at different resolution levels of the decomposition, in order to obtain descriptors that permit classification using artificial neural networks.
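The energy-per-level descriptor can be sketched with a plain Haar decomposition, used here only for simplicity (the work does not state which mother wavelet it employs):

```python
import math

# Energy per decomposition level of a signal, using a plain Haar DWT.
def haar_dwt(signal):
    """One Haar step: returns (approximation, detail) coefficients.

    Assumes an even-length signal.
    """
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def energy_per_level(signal, levels):
    """Energy of the detail coefficients at each resolution level."""
    energies = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        energies.append(sum(d * d for d in detail))
    return energies

clean = [1.0] * 8                       # a constant "no disturbance" signal
print(energy_per_level(clean, 2))       # no detail energy at any level
```

A disturbed signal shifts energy into particular detail levels; comparing the two energy curves, level by level, yields the descriptors that feed the neural network classifier described above.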
Abstract:
The usual load flow programs were, in general, developed to simulate electric energy transmission, subtransmission and distribution systems. However, the mathematical methods and algorithms behind those formulations were mostly based on the characteristics of transmission systems alone, which were the main focus of engineers and researchers. Yet the physical characteristics of transmission systems are quite different from those of distribution systems. In transmission systems, voltage levels are high and lines are generally very long; as a result, the capacitive and inductive effects that appear in the system have a considerable influence on the quantities of interest and must be taken into consideration. Also in transmission systems, the loads have a macro nature, for example cities, neighborhoods or large industries. These loads are generally practically balanced, which reduces the need for a three-phase load flow methodology. Distribution systems, on the other hand, have different characteristics: voltage levels are low compared with transmission, which almost cancels the capacitive effects of the lines. The loads, in this case, are transformers to whose secondaries small consumers are connected, often single-phase ones, so the probability of finding an unbalanced circuit is high. The use of three-phase methodologies therefore becomes important. Besides, equipment such as voltage regulators, which simultaneously use the concepts of phase and line voltage in their operation, requires a three-phase methodology in order to simulate its real behavior. For these reasons, a method for three-phase load flow calculation was first developed within the scope of this work, to simulate the steady-state behavior of distribution systems.
To achieve this goal, the Power Summation Algorithm was used as the basis for developing the three-phase method. This algorithm had already been widely tested and approved by researchers and engineers for simulating radial electric energy distribution systems, mainly in single-phase representation. In our formulation, lines are modeled as three-phase circuits, considering the magnetic coupling between phases, while the earth effect is accounted for through the Carson reduction. It is important to point out that, although loads are normally connected to the transformers' secondaries, the hypothesis of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was used that allows various configurations to be simulated according to their real operation. Finally, the representation of switches with current measurement at various points of the feeder was considered: the loads are adjusted during the iterative process so that the current in each switch converges to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived from the described load flow, with the objective of supporting subsequent optimization processes. These parameters are found by calculating the partial derivatives of one variable with respect to another, in general voltages, losses and reactive powers. After the calculation of the sensitivity parameters is described, the Gradient Method is presented, using these parameters to optimize an objective function defined for each type of study. The first study concerns the reduction of technical losses in a medium voltage feeder through the installation of capacitor banks; the second concerns the correction of the voltage profile through the installation of capacitor banks or voltage regulators.
In the loss-reduction case, the objective function is the sum of the losses in all parts of the system. For voltage profile correction, the objective function is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results from applying the described methods to several feeders are presented, giving insight into their performance and accuracy.
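The two objective functions named above can be written in minimal form; the branch losses and node voltages below are illustrative, not data from the studied feeders:

```python
# The two Gradient Method objective functions described above (toy data).
def total_losses(branch_losses_kw):
    """Loss-reduction objective: sum of losses over all parts of the system."""
    return sum(branch_losses_kw)

def voltage_deviation(node_voltages_pu, rated_pu=1.0):
    """Voltage-profile objective: sum of squared deviations from rated voltage."""
    return sum((v - rated_pu) ** 2 for v in node_voltages_pu)

print(total_losses([12.5, 8.0, 3.2]))           # kW lost across three branches
print(voltage_deviation([0.98, 1.01, 0.95]))    # per-unit deviations, squared and summed
```

Installing a capacitor bank or regulator changes the load flow solution, hence these objective values; the gradient of each objective with respect to the control variables is what the sensitivity parameters supply.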
Abstract:
The bidimensional periodic structures called frequency selective surfaces have been well investigated because of their filtering properties. Like filters operating in the traditional radiofrequency bands, such structures can behave as band-stop or band-pass filters, depending on the array elements (patches or apertures, respectively), and can be used in a variety of applications, such as radomes, dichroic reflectors, waveguide filters, artificial magnetic conductors and microwave absorbers. To provide high-performance filtering at microwave bands, electromagnetic engineers have investigated various types of periodic structures: reconfigurable frequency selective screens, multilayered selective filters, and periodic arrays printed on anisotropic dielectric substrates or composed of fractal elements. In general, there is no closed-form solution leading directly from a desired frequency response to a corresponding device; the analysis of its scattering characteristics therefore requires the application of rigorous full-wave techniques. Moreover, owing to the computational cost of evaluating the scattering variables of a frequency selective surface with a full-wave simulator, many electromagnetic engineers still resort to trial and error until a given design criterion is met. As this procedure is very laborious and operator-dependent, optimization techniques are required to design practical periodic structures with the desired filter specifications. Some authors have employed neural networks and natural optimization algorithms, such as genetic algorithms and particle swarm optimization, for frequency selective surface design and optimization. The objective of this work is a rigorous study of the electromagnetic behavior of periodic structures, enabling the design of efficient devices for the microwave bands.
To this end, artificial neural networks are used together with natural optimization techniques, allowing various types of frequency selective surfaces to be investigated accurately and efficiently, in a simple and fast manner, making the approach a powerful tool for the design and optimization of such structures.
Abstract:
This study describes the development of a simplified grid representation model, applied in a hybrid load flow, for calculating the steady-state voltage variations caused by a wind farm on the power system. It also proposes an optimal load flow capable of controlling the power factor at the connection bus and of minimizing losses. The analysis of the system, conducted by the wind producer, is based on technical data supplied by the grid operator. The proposed grid simplification model therefore requires knowledge only of the data concerning the internal network, that is, the part of the network of interest to the analysis. In this way, the work aims to help systematize the relations between the sector's agents. The proposed simplified network model identifies the internal network, the external network and the boundary buses from a study of the network's vulnerability, assigning them net injected powers and slack models. The presented model was applied with Newton-Raphson and with a hybrid load flow composed of the Gauss-Seidel Zbus method and the Power Summation method. Finally, the results obtained in a computational environment developed in SCILAB and FORTRAN are presented, with their respective analysis and conclusions, and compared against ANAREDE.
Abstract:
This work presents a new ANFIS-based multi-model identification technique for nonlinear systems. In this technique, the structure used is a Takagi-Sugeno fuzzy system whose consequents are local linear models representing the system at different operating points, and whose antecedents are membership functions adjusted during the learning phase of the neuro-fuzzy ANFIS technique. The models representing the system at the different operating points can be found with linearization techniques such as the Least Squares method, which is robust to noise and simple to apply. The fuzzy system is responsible for stating, through the membership functions, the proportion in which each model should be used. The membership functions can be adjusted by ANFIS using neural network algorithms, such as error backpropagation, so that the models found for each region are correctly interpolated, defining a contribution of each model for every possible system input. In multi-model approaches, the definition of each model's contribution is known as the metric; since this work is based on ANFIS, it is here called the ANFIS metric. The ANFIS metric is thus used to interpolate the various models, composing the system to be identified. Unlike traditional ANFIS, the proposed technique necessarily represents the system in various well-defined regions by unaltered models, whose activation is weighted by the membership functions. The regions for applying the Least Squares method are selected manually, from graphical analysis of the system's behavior or from the physical characteristics of the plant. This selection serves as the starting point for defining the linear models and generates the initial configuration of the membership functions. The experiments are conducted in a teaching tank with multiple sections, designed and built to expose the characteristics of the technique.
The results from this tank illustrate the identification performance reached by the technique under different ANFIS configurations, comparing the developed technique with several simple-metric models and with the NNARX technique, also adapted for identification.
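The interpolation performed by the ANFIS metric is that of a Takagi-Sugeno system: local linear models blended by normalized membership degrees. The local models and Gaussian memberships below are made up for illustration, not taken from the tank experiments:

```python
import math

# Takagi-Sugeno interpolation of local linear models (illustrative sketch).
def gaussian_membership(x, center, sigma):
    """Degree to which input x belongs to the region around `center`."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def ts_output(x, models, centers, sigma=1.0):
    """Weighted average of local linear models y = a*x + b."""
    weights = [gaussian_membership(x, c, sigma) for c in centers]
    total = sum(weights)
    return sum(w * (a * x + b) for w, (a, b) in zip(weights, models)) / total

# Two hypothetical local models, valid around operating points x=0 and x=4.
models = [(1.0, 0.0), (0.5, 2.0)]
centers = [0.0, 4.0]
print(ts_output(0.0, models, centers))  # dominated by the first local model
print(ts_output(2.0, models, centers))  # a blend of both models
```

In the proposed technique, the `(a, b)` pairs come from Least Squares linearization in manually selected regions and stay fixed, while ANFIS tunes only the membership functions that weight them.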
Abstract:
We propose a multi-resolution approach for surface reconstruction from clouds of unorganized points representing an object surface in 3D space. The proposed method uses a set of mesh operators and simple rules for selective mesh refinement, with a strategy based on Kohonen's self-organizing map. Basically, a self-adaptive scheme is used to iteratively move the vertices of an initial simple mesh toward the set of points, ideally the object boundary. Successive refinement and vertex motion lead to a more detailed surface, in a multi-resolution, iterative scheme. Reconstruction was tested with several point sets, including different shapes and sizes. The results show generated meshes very close to the final object shapes. We include performance measures and discuss robustness.
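The core adaptation step, moving each mesh vertex toward the sample points in SOM-like fashion, can be sketched as follows; the point cloud, the single starting vertex and the learning rate are illustrative, and the real method also refines the mesh and moves topological neighbours:

```python
# One SOM-like adaptation step: pull each vertex toward its nearest sample point.
def nearest(point, cloud):
    """Sample point closest (in squared Euclidean distance) to `point`."""
    return min(cloud, key=lambda q: sum((a - b) ** 2 for a, b in zip(point, q)))

def adapt_vertices(vertices, cloud, rate=0.5):
    """Move every vertex a fraction `rate` of the way toward its target."""
    moved = []
    for vertex in vertices:
        target = nearest(vertex, cloud)
        moved.append(tuple(v + rate * (t - v) for v, t in zip(vertex, target)))
    return moved

cloud = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]   # samples of the object surface
verts = [(0.0, 0.0, 0.0)]                     # one vertex of the initial mesh
print(adapt_vertices(verts, cloud))
```

Iterating this update, interleaved with selective refinement where the mesh is still far from the samples, is what produces the multi-resolution convergence toward the object boundary described above.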
Abstract:
ART networks have several advantages: online learning, convergence in a few training epochs, incremental learning, and so on. Even so, some problems remain, such as category proliferation, sensitivity to the presentation order of the training patterns, and the choice of a good vigilance parameter. Among these, category proliferation is probably the most critical. It makes the network create too many categories, consuming resources to store an unnecessarily large number of them and degrading the processing time, or even making it unfeasible, without contributing to the quality of the representation; that is, in many cases the excessive number of categories generated by ART networks makes generalization worse than it could otherwise be. Another factor that leads to category proliferation in ART networks is the difficulty of approximating regions with non-rectangular geometry, which yields generalization inferior to that obtained by other classification methods. From the observation of these problems, three methodologies were proposed: two of them use a more flexible geometry than that of traditional ART networks, minimizing the category proliferation problem, and the third minimizes the problem of the presentation order of the training patterns. To validate these new approaches, many tests were performed; the results demonstrate that the new methodologies can improve the generalization quality of ART networks.
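The vigilance parameter mentioned above governs category proliferation through the fuzzy ART match test: an input only joins an existing category if its fuzzy-AND overlap with that category's weights is large enough, otherwise a new category is created. A minimal sketch, assuming inputs normalized to [0, 1]:

```python
# Fuzzy ART vigilance (match) test: the mechanism behind category proliferation.
def fuzzy_and(a, b):
    """Component-wise minimum (the fuzzy AND operator)."""
    return [min(x, y) for x, y in zip(a, b)]

def passes_vigilance(input_vec, weight_vec, rho):
    """Accept the category only if the match ratio reaches vigilance rho."""
    match = sum(fuzzy_and(input_vec, weight_vec)) / sum(input_vec)
    return match >= rho

i = [0.8, 0.2]   # illustrative normalized input
w = [0.7, 0.3]   # illustrative category weights
print(passes_vigilance(i, w, 0.85))  # high overlap: existing category accepted
print(passes_vigilance(i, w, 0.95))  # stricter vigilance: a new category forms
```

Raising rho forces finer categories and hence proliferation, which is why the hyper-rectangular geometry of standard fuzzy ART, combined with a strict vigilance, can multiply categories around non-rectangular class boundaries, the very problem the proposed methodologies address.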