852 results for H-Infinity Time-Varying Adaptive Algorithm


Relevance:

100.00%

Publisher:

Abstract:

Composite resins have been subjected to structural modifications aiming at improved optical and mechanical properties. The present study consisted of an in vitro evaluation of the staining behavior of two nanohybrid resins (NH1 and NH2), a nanoparticulated resin (NP) and a microhybrid resin (MH). Samples of these materials were prepared and immersed in commonly ingested drinks, i.e., coffee, red wine and açaí berry, for periods of time varying from 1 to 60 days. Cylindrical samples of each resin were shaped using a metallic die and polymerized for 30 s on both the bottom and the top of each disk. All samples were polished and immersed in the staining solutions. After 24 hours, three samples of each resin immersed in each solution were removed and placed in a spectrophotometer for analysis. To that end, the samples were previously diluted in HCl at 50%. Tukey tests were carried out in the statistical analysis of the results. The results revealed a clear difference in the staining behavior of each material. The nanoparticulated resin did not show better color stability than the microhybrid resin. Moreover, all resins stained with time. The degree of staining decreased in the sequence nanoparticulated, microhybrid, nanohybrid NH2 and NH1. Wine was the most aggressive drink, followed by coffee and açaí berry. SEM and image analysis revealed significant porosity on the surface of the MH resin and relatively large pores on an NP sample. The NH2 resin was characterized by a homogeneous dispersion of particles and limited porosity. Finally, the NH1 resin showed the lowest porosity level. The results indicate that staining is likely related to the concentration of inorganic particles and to surface porosity.

Relevance:

100.00%

Publisher:

Abstract:

An alternative nonlinear technique for decoupling and control is presented. The technique is based on an RBF (Radial Basis Function) neural network and is applied to the synchronous generator model. The synchronous generator is a coupled system; in other words, a change in one input variable of the system changes more than one output. The RBF network performs the decoupling, separating the control of the following output variables: the load angle and the flux linkage in the field winding. The technique does not require knowledge of the system parameters and, owing to the nature of radial basis functions, it remains stable under parametric uncertainties and disturbances and is simpler to apply in control. The RBF decoupler is designed in this work to decouple a nonlinear MIMO system with two inputs and two outputs. The weights between the hidden and output layers are modified online, using an adaptive law in real time. The adaptive law is derived by Lyapunov's method. The decoupling adaptive controller uses the errors between the system outputs and the model outputs, together with filtered outputs of the system, to produce the control signals. The RBF network forces each output of the generator to behave like the reference model. When the RBF adequately approximates the required control signals, decoupling of the system is achieved. A mathematical proof and analysis are presented, and simulations show the performance and robustness of the RBF network.
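
A minimal sketch of this kind of online RBF adaptation, assuming Gaussian basis functions, a gradient-type weight update of the form dW = -γ·φ(x)·eᵀ (one common outcome of a Lyapunov design), and a stand-in 2x2 coupled plant with a decoupled reference model; the centers, gains and plant below are illustrative assumptions, not the thesis' generator model or exact controller.

```python
# Hedged sketch: online RBF approximator with a gradient-type adaptive law,
# dW = -gamma * outer(phi(x), e).  The plant is a stand-in 2x2 coupled system,
# not the synchronous generator model; centers, widths and gains are assumptions.
import numpy as np

rng = np.random.default_rng(0)
centers = rng.uniform(-1.0, 1.0, size=(20, 2))   # RBF centers in state space
width = 0.5                                      # common Gaussian width
W = np.zeros((20, 2))                            # hidden-to-output weights (adapted online)
gamma = 2.0                                      # adaptation gain
dt = 1e-3

def phi(x):
    """Gaussian radial basis activations for state x (shape (2,))."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

A = np.array([[-1.0, 0.6], [0.4, -1.5]])         # coupled linear stand-in plant
Am = np.diag([-3.0, -3.0])                       # decoupled reference model

x = np.zeros(2)
xm = np.zeros(2)
for k in range(20000):
    r = np.array([np.sin(2 * np.pi * 0.5 * k * dt), 1.0])   # reference inputs
    e = x - xm                                   # error between system and model outputs
    u = W.T @ phi(x) - 5.0 * e                   # RBF term plus error feedback
    x += dt * (A @ x + u)                        # plant step (Euler)
    xm += dt * (Am @ xm - Am @ r)                # reference model step (tracks r)
    W += dt * (-gamma * np.outer(phi(x), e))     # Lyapunov/gradient-type weight update

print("final tracking error:", np.round(x - xm, 4))
```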

Relevance:

100.00%

Publisher:

Abstract:

This paper presents an evaluative study of the effects of using a machine learning technique on the main features of a self-organizing, multiobjective genetic algorithm (GA). A typical GA can be seen as a search technique usually applied to problems of non-polynomial complexity. Originally, these algorithms were designed as methods that seek acceptable solutions to problems where the global optimum is inaccessible or difficult to obtain. At first, GAs considered only one evaluation function and a single optimization objective. Today, however, implementations that consider several optimization objectives simultaneously (multiobjective algorithms) are common, as are those that allow many components of the algorithm to change dynamically (self-organizing algorithms). Combinations of GAs with machine learning techniques, intended to improve some of their performance and usability characteristics, are also common. In this work, a GA combined with a machine learning technique was analyzed and applied to an antenna design. We used a variant of bicubic interpolation, called 2D Spline, as the machine learning technique to estimate the behavior of a dynamic fitness function, based on knowledge obtained from a set of laboratory experiments. This fitness function, also called the evaluation function, is responsible for determining the fitness degree of a candidate solution (individual) relative to the others in the same population. The algorithm can be applied in many areas, including telecommunications, for example in the design of antennas and frequency selective surfaces. In this particular work, the presented algorithm was developed to optimize the design of a microstrip antenna, commonly used in wireless communication systems, for Ultra-Wideband (UWB) applications. The algorithm optimized two geometric variables of the antenna - the length (Ls) and width (Ws) of a slot in the ground plane - with respect to three objectives: radiated signal bandwidth, return loss and central frequency deviation. These two dimensions (Ws and Ls) are used as variables in three different interpolation functions, one spline for each optimization objective, which are combined into a multiobjective, aggregate fitness function. The final result proposed by the algorithm was compared with the simulation result and with the measured result of a physical prototype of the antenna built in the laboratory. The algorithm was analyzed with respect to its degree of success in relation to four important characteristics of a self-organizing multiobjective GA: performance, flexibility, scalability and accuracy. At the end of the study, an increase in execution time compared with a common GA was observed, due to the time required by the machine learning process. On the plus side, we noticed an appreciable gain in flexibility and accuracy of the results, and a promising path toward extending the algorithm to optimization problems with η variables.
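
A minimal sketch of the surrogate idea described above, assuming scipy's RectBivariateSpline as the 2D spline, a weighted-sum aggregation of the three objectives, and a very simple GA loop; the (Ws, Ls) sample grid, the three objective surfaces and the aggregation weights are synthetic stand-ins, not the laboratory data or the paper's exact algorithm.

```python
# Hedged sketch: a 2D-spline surrogate fitness inside a simple GA.
# The sample grid and objective surfaces are synthetic placeholders.
import numpy as np
from scipy.interpolate import RectBivariateSpline

rng = np.random.default_rng(1)

ws = np.linspace(0.5, 5.0, 10)    # slot width samples (mm), illustrative
ls = np.linspace(1.0, 10.0, 10)   # slot length samples (mm), illustrative
WS, LS = np.meshgrid(ws, ls, indexing="ij")

# Synthetic objective surfaces: bandwidth (maximize), return loss in dB
# (more negative is better), centre-frequency deviation (minimize).
bw   = 5.0 - 0.3 * (WS - 2.5) ** 2 - 0.1 * (LS - 6.0) ** 2
rl   = -10.0 - 2.0 * np.exp(-((WS - 3.0) ** 2 + (LS - 5.0) ** 2))
fdev = np.abs(0.2 * (WS - 2.8)) + np.abs(0.05 * (LS - 5.5))

splines = [RectBivariateSpline(ws, ls, z) for z in (bw, rl, fdev)]

def fitness(ind):
    """Aggregate (weighted-sum) fitness from the three spline surrogates."""
    w, l = ind
    f_bw, f_rl, f_dev = (s(w, l)[0, 0] for s in splines)
    return 1.0 * f_bw - 1.0 * f_rl - 5.0 * f_dev   # weights are assumptions

pop = np.column_stack([rng.uniform(0.5, 5.0, 40), rng.uniform(1.0, 10.0, 40)])
for gen in range(100):
    fit = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(fit)[-20:]]                          # truncation selection
    children = parents[rng.integers(0, 20, 40)] + rng.normal(0, 0.1, (40, 2))
    children[:, 0] = np.clip(children[:, 0], 0.5, 5.0)            # keep Ws in range
    children[:, 1] = np.clip(children[:, 1], 1.0, 10.0)           # keep Ls in range
    pop = children

fit = np.array([fitness(p) for p in pop])
best = pop[np.argmax(fit)]
print("best (Ws, Ls):", np.round(best, 3), "fitness:", round(float(fit.max()), 3))
```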

Relevance:

100.00%

Publisher:

Abstract:

The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research concentrates most of the investment in the area. Acquisition, processing and interpretation of seismic data are the parts that make up a seismic study. Seismic processing in particular is focused on imaging the geological structures in the subsurface. Seismic processing has evolved significantly in recent decades due to the demands of the oil industry and to hardware advances that provided higher storage and digital processing capabilities, enabling the development of more sophisticated processing algorithms, such as those that make use of parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section image that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest such as potential hydrocarbon reservoirs. However, performing migration with quality and accuracy can be very time consuming, owing to the heuristics of the mathematical algorithms and the extensive amount of input and output data involved; it may take days, weeks or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make these methods impractical. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique. Furthermore, analyses such as speedup and efficiency were performed and, ultimately, the degree of algorithmic scalability was identified with respect to the technological advances expected in future processors.
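
For orientation, a compact statement of the operations an RTM kernel spends its time on, in the standard constant-density acoustic formulation (an assumption; the thesis' kernel may differ in details): the source wavefield is propagated forward in time, the recorded data are propagated backward, and the two wavefields are cross-correlated. The spatial finite-difference update at each time step is the natural target for OpenMP parallelization.

```latex
% Standard constant-density acoustic RTM formulation (assumed, not quoted from the thesis):
% forward-propagated source wavefield S, back-propagated receiver wavefield R,
% and the cross-correlation imaging condition accumulated over time steps.
\frac{\partial^{2} S}{\partial t^{2}} = c^{2}(\mathbf{x})\,\nabla^{2} S + f(t)\,\delta(\mathbf{x}-\mathbf{x}_{s}),
\qquad
\frac{\partial^{2} R}{\partial t^{2}} = c^{2}(\mathbf{x})\,\nabla^{2} R,
\qquad
I(\mathbf{x}) = \sum_{t} S(\mathbf{x},t)\, R(\mathbf{x},t).
```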

Relevance:

100.00%

Publisher:

Abstract:

Complex network analysis is a powerful tool in the research of complex systems such as brain networks. This work aims to describe the topological changes in neural functional connectivity networks of the neocortex and hippocampus during slow-wave sleep (SWS) in animals submitted to exposure to a novel experience. Slow-wave sleep is an important sleep stage in which electrical activity patterns from wakefulness reverberate, playing a fundamental role in memory consolidation. Despite its importance, there is a lack of studies characterizing the topological dynamics of functional connectivity networks during this sleep stage, and no studies describing the topological modifications that novelty exposure induces in these networks. We have observed that several topological properties are modified after novelty exposure and that this modification persists for a long time. Most of the changes in topological properties caused by novelty exposure are related to fault tolerance.

Relevance:

100.00%

Publisher:

Abstract:

Wavelet coding has emerged as an alternative coding technique for minimizing the fading effects of wireless channels. This work evaluates the performance of wavelet coding, in terms of bit error probability, over time-varying, frequency-selective multipath Rayleigh fading channels. The adopted propagation model follows the COST 207 norm, the main international reference standard for GSM, UMTS and EDGE applications. The results show the efficiency of wavelet coding against the intersymbol interference that characterizes these communication scenarios. This robustness enables the use of the presented technique in different environments, bringing it one step closer to being applied in practical wireless communication systems.
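
As a point of reference for the bit error probabilities discussed above, the closed-form uncoded baseline for coherent BPSK over flat Rayleigh fading is given below; it is a standard textbook result included here only as an assumed comparison baseline, not a result from the paper (whose channels are frequency-selective and follow COST 207).

```latex
% Uncoded coherent BPSK over flat Rayleigh fading with average SNR \bar{\gamma}
% (standard result, used only as an illustrative uncoded baseline):
P_{b} = \frac{1}{2}\left(1 - \sqrt{\frac{\bar{\gamma}}{1+\bar{\gamma}}}\right).
```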

Relevance:

100.00%

Publisher:

Abstract:

Ionospheric scintillations are caused by time-varying electron density irregularities in the ionosphere, occurring more often at equatorial and high latitudes. This paper focuses exclusively on experiments undertaken in Europe, at geographic latitudes between ~50°N and ~80°N, where a network of GPS receivers capable of monitoring Total Electron Content and ionospheric scintillation parameters was deployed. The widely used ionospheric scintillation indices S4 and σφ represent a practical measure of the intensity of the amplitude and phase scintillation affecting GNSS receivers. However, they do not provide sufficient information regarding the actual tracking errors that degrade GNSS receiver performance. Suitable receiver tracking models, sensitive to ionospheric scintillation, allow the computation of the variance of the output error of the receiver PLL (Phase Locked Loop) and DLL (Delay Locked Loop), which expresses the quality of the range measurements used by the receiver to calculate the user position. The ability of such models to incorporate phase and amplitude scintillation effects into the variance of these tracking errors underpins our proposed method of applying relative weights to measurements from different satellites. This gives the least squares stochastic model used for position computation a more realistic representation, vis-a-vis the otherwise 'equal weights' model. For pseudorange processing, relative weights were computed so that a 'scintillation-mitigated' solution could be performed and compared with the (non-mitigated) 'equal weights' solution. An improvement of between 17 and 38% in height accuracy was achieved when an epoch-by-epoch differential solution was computed over baselines ranging from 1 to 750 km. The method was then compared with alternative approaches that can be used to improve the least squares stochastic model, such as weighting according to satellite elevation angle or by the inverse of the square of the standard deviation of the code/carrier divergence (σCCDiv). The influence of multipath effects on the proposed mitigation approach is also discussed. With the use of high-rate scintillation data in addition to the scintillation indices, a carrier phase based mitigated solution was also implemented and compared with the conventional solution. During a period of high phase scintillation it was observed that problems related to ambiguity resolution can be reduced by the use of the proposed mitigated solution.
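
A minimal sketch of the weighting idea: each satellite's measurement receives weight 1/σ², where σ² is the tracking-error variance predicted by a scintillation-sensitive PLL/DLL model, and the position is solved by weighted least squares instead of with equal weights. The geometry matrix, residuals and variances below are illustrative placeholders, and the tracking-variance model itself is represented only by assumed numbers.

```python
# Hedged sketch of the 'scintillation-mitigated' weighting: pseudorange
# residuals are weighted by the inverse of the per-satellite tracking-error
# variance, then solved by weighted least squares.  All numbers are placeholders.
import numpy as np

# Design matrix: unit line-of-sight components plus a clock column (one row per satellite).
G = np.array([
    [ 0.3,  0.5,  0.81, 1.0],
    [-0.6,  0.2,  0.77, 1.0],
    [ 0.1, -0.7,  0.70, 1.0],
    [ 0.5,  0.5,  0.70, 1.0],
    [-0.2, -0.4,  0.89, 1.0],
])

# Prefit pseudorange residuals (m), illustrative.
y = np.array([2.1, -1.3, 0.4, 3.0, -0.8])

# Per-satellite tracking-error variances (m^2) from a scintillation-sensitive
# PLL/DLL model; satellites affected by strong scintillation get larger variances.
sigma2 = np.array([0.25, 0.25, 4.00, 9.00, 0.25])

W = np.diag(1.0 / sigma2)                        # weight matrix = inverse variances
dx_weighted = np.linalg.solve(G.T @ W @ G, G.T @ W @ y)
dx_equal = np.linalg.solve(G.T @ G, G.T @ y)     # conventional 'equal weights' solution

print("equal-weights solution :", np.round(dx_equal, 3))
print("scintillation-weighted :", np.round(dx_weighted, 3))
```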

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

In this work we study a connection between a non-Gaussian statistics, the Kaniadakis statistics, and complex networks. We show that the degree distribution P(k) of a scale-free network can be calculated by maximizing the information entropy in the context of non-Gaussian statistics. As an example, a numerical analysis based on the preferential attachment growth model is discussed, and the numerical behavior of the Kaniadakis and Tsallis degree distributions is compared. We also analyze the diffusive epidemic process (DEP) on a one-dimensional regular lattice. The model is composed of A (healthy) and B (sick) species that diffuse independently on the lattice with diffusion rates DA and DB, subject to the probabilistic dynamical rules A + B → 2B and B → A. This model belongs to the category of non-equilibrium systems with an absorbing state and a phase transition between active and inactive states. We investigate the critical behavior of the DEP using an auto-adaptive algorithm to find critical points: the method of automatic searching for critical points (MASCP). We compare our results with the literature and find that the MASCP successfully finds the critical exponents 1/ν and 1/(zν) in all cases: DA = DB, DA < DB and DA > DB. The simulations show that the DEP has the same critical exponents as expected from field-theoretical arguments. Moreover, we find that, contrary to a renormalization group prediction, the system does not show a discontinuous phase transition in the regime DA > DB.
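
For context, the κ-deformed exponential underlying the Kaniadakis entropy and the form of degree distribution typically obtained from its maximization are shown below; these are the standard definitions, written as an assumption rather than copied from the dissertation, whose exact parametrization may differ.

```latex
% Kaniadakis kappa-exponential and the degree distribution obtained from
% maximizing the kappa-entropy (standard definitions, 0 < kappa < 1):
\exp_{\kappa}(x) = \left(\sqrt{1+\kappa^{2}x^{2}} + \kappa x\right)^{1/\kappa},
\qquad
P(k) \propto \exp_{\kappa}\!\left(-\frac{k}{\lambda}\right),
% which recovers the ordinary exponential as kappa -> 0 and decays as a
% power law, P(k) ~ k^{-1/kappa}, for large k (hence the scale-free tail).
```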

Relevance:

100.00%

Publisher:

Abstract:

In this work we obtain the cosmological solutions and investigate the thermodynamics of matter creation in two different contexts. In the first we propose a cosmological model with a time-varying speed of light c. We consider two different time dependences of c for a flat Friedmann-Robertson-Walker (FRW) universe. We write the energy conservation law arising from the Einstein equations and study how particles are created as c decreases with cosmic epoch. The variation of c is coupled to a cosmological Λ term, and both singular and non-singular solutions are possible. We calculate the "adiabatic" particle creation rate and the total number of particles as a function of time, and find the constraints imposed by the second law of thermodynamics upon the models. In the second scenario, we study the nonlinearity of electrodynamics as a source of matter creation in cosmological models with flat FRW geometry. We write the energy conservation law arising from the Einstein field equations with cosmological term Λ, solve the field equations and study how particles are created as the magnetic field B changes with cosmic epoch. We obtain solutions for the adiabatic particle creation rate, the total number of particles and the scale factor as functions of time in three cases: Λ = 0, Λ = constant and Λ ∝ H² (cosmological term proportional to the square of the Hubble parameter). In all cases, the second law of thermodynamics demands that the universe is not contracting (H ≥ 0). The first two solutions are non-singular and exhibit inflationary periods. The third case allows an always-inflationary universe for a sufficiently large cosmological term.
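
A minimal statement of the kind of balance equation involved in the first scenario, assuming the usual varying-c generalization of the flat FRW Friedmann equations with a Λ term (standard formulation, not quoted from the thesis): the non-conservation term sourced by ċ is what gets reinterpreted as an "adiabatic" matter-creation source.

```latex
% Flat FRW with varying c(t) and cosmological term \Lambda (standard VSL form,
% assumed here; the thesis' conventions may differ):
H^{2} = \frac{8\pi G}{3}\,\rho + \frac{\Lambda\, c^{2}(t)}{3},
\qquad
\dot{\rho} + 3H\!\left(\rho + \frac{p}{c^{2}}\right) = -\,\frac{\Lambda\, c\,\dot{c}}{4\pi G},
% so that for \Lambda > 0 and \dot{c} < 0 the right-hand side is positive and
% is read as an effective particle-creation source.
```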

Relevance:

100.00%

Publisher:

Abstract:

In this paper an alternative method based on artificial neural networks is presented to determine the harmonic components in the load current of a single-phase electric power system with nonlinear loads, whose parameters can vary both because of the characteristic behavior of the loads and because of human intervention. The first six components of the load current are determined using the information contained in the time-varying waveforms. The effectiveness of the method is verified by using it in a single-phase active power filter with selective compensation of the current drained by an AC controller. The proposed method is compared with the fast Fourier transform.
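
A minimal sketch of the estimation task (not the paper's network): the amplitudes of the first six harmonics of a distorted load current are recovered from one fundamental period of samples. Here a least-squares projection onto a sin/cos basis stands in for the trained ANN's output stage, and an FFT provides the comparison the paper draws; the waveform, sampling rate and harmonic content are assumed.

```python
# Hedged sketch: recover the first six harmonic amplitudes of a distorted load
# current from sampled data.  A least-squares fit onto a sin/cos basis stands
# in for the ANN; np.fft gives the FFT comparison mentioned in the abstract.
import numpy as np

f0, fs = 60.0, 7680.0                  # fundamental (Hz) and sampling rate (Hz), assumed
n = int(fs / f0)                       # samples in one fundamental period
t = np.arange(n) / fs

# Synthetic nonlinear-load current: odd harmonics with assumed amplitudes.
true_amp = {1: 10.0, 3: 3.0, 5: 1.5, 7: 0.8}
i_load = sum(a * np.sin(2 * np.pi * h * f0 * t) for h, a in true_amp.items())
i_load += 0.05 * np.random.default_rng(0).standard_normal(n)   # measurement noise

# Least-squares projection onto sin/cos at harmonics 1..6 (ANN stand-in).
H = 6
basis = np.column_stack(
    [np.sin(2 * np.pi * h * f0 * t) for h in range(1, H + 1)]
    + [np.cos(2 * np.pi * h * f0 * t) for h in range(1, H + 1)]
)
coef, *_ = np.linalg.lstsq(basis, i_load, rcond=None)
amp_ls = np.hypot(coef[:H], coef[H:])

# FFT comparison: harmonic bins of a one-period window.
spec = np.fft.rfft(i_load) * 2.0 / n
amp_fft = np.abs(spec[1:H + 1])

for h in range(1, H + 1):
    print(f"h{h}: least-squares {amp_ls[h-1]:.3f} A, FFT {amp_fft[h-1]:.3f} A")
```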

Relevance:

100.00%

Publisher:

Abstract:

Fifteen patients with secondary Lennox-Gastaut syndrome underwent monthly clinical and electroencephalographic examinations over periods ranging from 1 to 9 years. The electroencephalographic examinations were performed during barbiturate-induced sleep. Twelve patients presented focal epileptic activity. In the present work we describe the location, morphology and frequency of the focal epileptic paroxysms.

Relevance:

100.00%

Publisher:

Abstract:

We study macroscopic quantum tunneling and self-trapping phenomena in two weakly coupled Bose-Einstein condensates with a periodically time-varying atomic scattering length. The resonances in the oscillations of the atomic populations are investigated. We consider oscillations in both the macroscopic quantum tunneling and the self-trapping regimes. The existence of chaotic oscillations in the relative atomic population, due to overlaps between nonlinear resonances, is shown. We derive the whisker-type map for the problem and obtain an estimate for the critical amplitude of the modulations leading to chaos. The diffusion coefficient for motion in the stochastic layer near the separatrix is calculated. The analysis of the oscillations in the rapidly varying case shows the possibility of stabilization of the unstable pi-mode regime. (C) 2000 Published by Elsevier B.V. PACS: 03.75.Fi; 05.30.Jp.
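
For orientation, the two-mode (Josephson) equations usually taken as the starting point for this problem are shown below, with the periodic modulation of the scattering length entering through the nonlinearity parameter Λ(t); this is the standard form, written as an assumption rather than quoted from the paper.

```latex
% Two-mode model for weakly coupled condensates: z is the fractional population
% imbalance, \phi the relative phase; the time-varying scattering length enters
% through \Lambda(t) (standard form; the paper's details may differ).
\dot{z} = -\sqrt{1-z^{2}}\,\sin\phi,
\qquad
\dot{\phi} = \Delta E + \Lambda(t)\, z + \frac{z}{\sqrt{1-z^{2}}}\,\cos\phi,
\qquad
\Lambda(t) = \Lambda_{0}\bigl(1 + \varepsilon \sin\omega t\bigr).
```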

Relevance:

100.00%

Publisher:

Abstract:

The Fermi accelerator model is studied in the framework of inelastic collisions. The dynamics of the problem is obtained by use of a two-dimensional nonlinear area-contracting map. We consider that the collisions of the particle with both the periodically time-varying wall and the fixed wall are inelastic. We show that the dissipation destroys the mixed phase space structure of the non-dissipative case and, in particular, we obtain and characterize a family of two damping coefficients for which a boundary crisis occurs. (c) 2006 Elsevier B.V. All rights reserved.
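
For reference, the area-preserving simplified Fermi-Ulam map from which such studies start is given below in the usual dimensionless variables; the inelastic version studied in the paper introduces restitution coefficients at the fixed and moving walls, which is what makes the map area-contracting. The exact dissipative form is not reproduced here.

```latex
% Area-preserving simplified Fermi-Ulam map (standard dimensionless form;
% the paper's dissipative version applies restitution coefficients at the
% two walls, turning the map into an area-contracting one):
\phi_{n+1} = \left[\phi_{n} + \frac{2}{V_{n}}\right] \bmod 2\pi,
\qquad
V_{n+1} = \left|\,V_{n} - 2\varepsilon \sin\phi_{n+1}\,\right|.
```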

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)