884 results for Specialized genetic algorithm
Abstract:
We present a novel array RLS algorithm with forgetting factor that circumvents the problem of fading regularization, inherent to the standard exponentially-weighted RLS, by allowing for time-varying regularization matrices with generic structure. Simulations in finite precision show the algorithm's superiority as compared to alternative algorithms in the context of adaptive beamforming.
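For context, a minimal sketch of the standard exponentially-weighted RLS recursion whose fading-regularization issue the abstract refers to (hypothetical variable names; this is the conventional baseline, not the proposed array algorithm):

```python
import numpy as np

def ewrls_update(w, P, x, d, lam=0.99):
    """One step of standard exponentially-weighted RLS.

    w   : current coefficient vector, shape (M,)
    P   : current inverse-correlation estimate, shape (M, M)
    x   : new regressor, shape (M,)
    d   : new desired sample (scalar)
    lam : forgetting factor, 0 << lam <= 1

    With lam < 1 the initial regularization P(0) = delta^-1 * I is
    progressively "forgotten" -- the fading-regularization effect that
    the array algorithm above is designed to avoid.
    """
    e = d - w @ x                      # a priori error
    Px = P @ x
    k = Px / (lam + x @ Px)            # gain vector
    w = w + k * e                      # coefficient update
    P = (P - np.outer(k, Px)) / lam    # inverse-correlation update
    return w, P
```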
Distributed Estimation Over an Adaptive Incremental Network Based on the Affine Projection Algorithm
Abstract:
We study the problem of distributed estimation based on the affine projection algorithm (APA), which is developed from Newton's method for minimizing a cost function. The proposed solution is formulated to ameliorate the limited convergence properties of least-mean-square (LMS) type distributed adaptive filters with colored inputs. The analysis of transient and steady-state performance at each individual node within the network is developed by using a weighted spatial-temporal energy conservation relation and confirmed by computer simulations. The simulation results also verify that the proposed algorithm provides not only a faster convergence rate but also an improved steady-state performance as compared to an LMS-based scheme. In addition, the new approach attains an acceptable misadjustment performance with lower computational and memory cost, provided the number of regressor vectors and the filter length are appropriately chosen, as compared to a distributed recursive-least-squares (RLS) based method.
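As an illustration, a minimal sketch of a single affine projection (APA) step with hypothetical names; in an incremental network, each node would apply such an update to the estimate received from its neighbor before passing it on (a simplification of the scheme described above):

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-4):
    """One affine projection (APA) step.

    w     : current weight vector, shape (M,)
    X     : matrix of the K most recent regressors, shape (K, M)
    d     : corresponding desired samples, shape (K,)
    mu    : step size
    delta : small regularization keeping the K x K inverse well posed
    """
    e = d - X @ w                                   # a priori errors
    G = X @ X.T + delta * np.eye(X.shape[0])        # K x K Gram matrix
    w = w + mu * X.T @ np.linalg.solve(G, e)        # projection update
    return w
```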
Abstract:
We derive an easy-to-compute approximate bound for the range of step-sizes for which the constant-modulus algorithm (CMA) will remain stable if initialized close to a minimum of the CM cost function. Our model highlights the influence of the signal constellation used in the transmission system: for smaller variation in the modulus of the transmitted symbols, the algorithm will be more robust, and the steady-state misadjustment will be smaller. The theoretical results are validated through several simulations, for long and short filters and channels.
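For reference, a minimal sketch of the CMA stochastic-gradient update whose step-size range the bound concerns (hypothetical names; complex baseband signals assumed):

```python
import numpy as np

def cma_update(w, x, mu=1e-3, R2=1.0):
    """One constant-modulus algorithm (CMA 2-2) step.

    w  : current equalizer coefficients, shape (M,), complex
    x  : current regressor, shape (M,), complex
    mu : step size (the quantity whose stable range is bounded)
    R2 : dispersion constant, E|s|^4 / E|s|^2 for the constellation
    """
    y = np.vdot(w, x)                      # equalizer output w^H x
    e = y * (np.abs(y) ** 2 - R2)          # constant-modulus error
    w = w - mu * e.conjugate() * x         # stochastic-gradient step
    return w
```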
Abstract:
Higher-order (2,4) FDTD schemes used for numerical solutions of Maxwell's equations focus on diminishing the truncation errors caused by the Taylor series expansion of the spatial derivatives. These schemes use a larger computational stencil, which generally makes use of two constant coefficients, C1 and C2, for the four-point central-difference operators. In this paper we propose a novel way to diminish these truncation errors, in order to obtain more accurate numerical solutions of Maxwell's equations. For that purpose, we present a method to individually optimize the pair of coefficients, C1 and C2, based on any desired grid resolution and time-step size. In particular, we are interested in using coarser grid discretizations to be able to simulate electrically large domains. The results of our optimization algorithm show a significant reduction in dispersion error and numerical anisotropy for all modeled grid resolutions. Numerical simulations of free-space propagation verify the very promising theoretical results. The model is also shown to perform well in more complex, realistic scenarios.
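As a baseline for the optimization described above, a sketch of the four-point central-difference operator used by (2,4) FDTD schemes, shown here with the standard Taylor-derived coefficients C1 = 9/8 and C2 = -1/24; the proposed method would replace these with per-simulation optimized values:

```python
import numpy as np

def central_diff_24(f, dx, c1=9.0 / 8.0, c2=-1.0 / 24.0):
    """Fourth-order (in space) staggered central difference.

    Approximates df/dx at the points midway between the samples of f,
    using the four-point stencil with coefficients c1 and c2.
    c1 = 9/8, c2 = -1/24 are the standard Taylor values; the paper's
    approach would tune (c1, c2) for the chosen grid resolution and
    time step to reduce dispersion error and numerical anisotropy.
    """
    # derivative at midpoints i + 1/2, for i = 1 .. len(f) - 3
    return (c1 * (f[2:-1] - f[1:-2]) + c2 * (f[3:] - f[:-3])) / dx
```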
Abstract:
Starting from the Durbin algorithm in a polynomial space with an inner product defined by the signal autocorrelation matrix, an isometric transformation is defined that maps this vector space into another one where the Levinson algorithm is performed. Alternatively, for iterative algorithms such as discrete all-pole (DAP), an efficient implementation of a Gohberg-Semencul (GS) relation is developed for the inversion of the autocorrelation matrix that exploits its centrosymmetry. In the solution of the autocorrelation equations, the Levinson algorithm is found to be operationally less complex than the procedures based on GS inversion for up to a minimum of five iterations at various linear prediction (LP) orders.
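For reference, a compact sketch of the Levinson-Durbin recursion for the autocorrelation (Yule-Walker) equations, assuming real-valued signals and hypothetical variable names:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: solves the Toeplitz autocorrelation
    (Yule-Walker) equations for the LP coefficients in O(order^2) ops.

    r     : autocorrelation values r[0], ..., r[order]
    order : linear prediction order

    Returns (a, err): prediction polynomial A(z) = 1 + a[1] z^-1 + ...
    and the final prediction-error power err.
    """
    r = np.asarray(r, dtype=float)
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        # reflection (PARCOR) coefficient for order m
        k = -(r[m] + np.dot(a[1:m], r[m - 1:0:-1])) / err
        # order update of the coefficients using the previous-order values
        a_prev = a[:m].copy()
        a[1:m + 1] += k * a_prev[::-1]
        err *= (1.0 - k * k)
    return a, err
```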
Abstract:
In this paper the continuous Verhulst dynamic model is used to synthesize a new distributed power control algorithm (DPCA) for use in direct sequence code division multiple access (DS-CDMA) systems. The Verhulst model was originally designed to describe the population growth of biological species under food and physical space restrictions. The discretization of the corresponding differential equation is accomplished via the Euler numeric integration (ENI) method. Analytical convergence conditions for the proposed DPCA are also established. Several properties of the proposed recursive algorithm, such as the Euclidean distance from the optimum vector after convergence, convergence speed, normalized mean squared error (NSE), average power consumption per user, performance under dynamic channels, and implementation complexity aspects, are analyzed through simulations. The simulation results are compared with two other DPCAs: the classic algorithm derived by Foschini and Miljanic and the sigmoidal algorithm of Uykan and Koivo. Under estimation error conditions, the proposed DPCA exhibits a smaller discrepancy from the optimum power vector solution and better convergence (under both fixed and adaptive convergence factors) than the classic and sigmoidal DPCAs. (C) 2010 Elsevier GmbH. All rights reserved.
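To make the discretization step concrete, a minimal sketch of one Euler numeric integration step of the Verhulst (logistic) equation on which the DPCA is built; tying the equilibrium level to the power that meets a user's target SINR is an assumption made here for illustration only:

```python
def verhulst_euler_step(p, alpha, p_eq, dt):
    """One Euler step of the Verhulst (logistic) equation
        dp/dt = alpha * p * (1 - p / p_eq).

    p     : current value (e.g. a user's transmit power)
    alpha : growth / convergence-rate factor
    p_eq  : equilibrium ("carrying capacity") level; in a DPCA this
            would be driven by the power required to reach the target
            SINR (an assumption made here for illustration)
    dt    : Euler integration step
    """
    return p + dt * alpha * p * (1.0 - p / p_eq)
```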
Abstract:
The main goal of this paper is to apply the so-called policy iteration algorithm (PIA) to the long-run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space, with a compact action space depending on the state variable. To do so, we first derive some important properties for a pseudo-Poisson equation associated with the problem. It is then shown that the convergence of the PIA to a solution satisfying the optimality equation holds under some classical hypotheses, and that this optimal solution yields an optimal control strategy for the average control problem for the continuous-time PDMP in feedback form.
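Although the paper's setting is long-run average cost for continuous-time PDMPs on a Borel space, the evaluate/improve structure of the PIA is easiest to see on a finite discounted MDP; a purely illustrative sketch with hypothetical names:

```python
import numpy as np

def policy_iteration(P, c, gamma=0.95):
    """Policy iteration for a finite, discounted MDP (illustration only;
    the paper's setting is average cost for continuous-time PDMPs).

    P     : transition probabilities, shape (A, S, S), P[a, s, s']
    c     : one-stage costs, shape (S, A)
    gamma : discount factor
    """
    S, A = c.shape
    policy = np.zeros(S, dtype=int)
    while True:
        # policy evaluation: solve (I - gamma * P_pi) V = c_pi
        P_pi = P[policy, np.arange(S), :]
        c_pi = c[np.arange(S), policy]
        V = np.linalg.solve(np.eye(S) - gamma * P_pi, c_pi)
        # policy improvement: greedy one-step lookahead
        Q = c + gamma * np.einsum('aij,j->ia', P, V)   # shape (S, A)
        new_policy = Q.argmin(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy
```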
Abstract:
An algorithm inspired by ant behavior is developed to find the topology of an electric energy distribution network with minimum power loss. The algorithm's performance is investigated on hypothetical and actual circuits. When applied to an actual distribution system in a region of the State of Sao Paulo (Brazil), the solution found by the algorithm presents lower losses than the topology built by the utility (concessionary) company.
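As a generic illustration of ant-inspired search (not the paper's specific construction rules or loss model), a sketch of its two core mechanisms, pheromone-biased selection and evaporation/reinforcement:

```python
import numpy as np

rng = np.random.default_rng(0)

def choose_option(tau, eta, alpha=1.0, beta=2.0):
    """Pheromone-biased roulette selection over candidate options.

    tau : pheromone levels of the candidates (numpy array)
    eta : heuristic desirability, e.g. inverse of estimated loss
    """
    weights = (tau ** alpha) * (eta ** beta)
    return rng.choice(len(tau), p=weights / weights.sum())

def update_pheromone(tau, chosen, quality, rho=0.1):
    """Evaporate all trails, then reinforce the options used by a good
    solution with a deposit proportional to its quality."""
    tau *= (1.0 - rho)      # evaporation
    tau[chosen] += quality  # reinforcement
    return tau
```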
Abstract:
The most popular algorithms for blind equalization are the constant-modulus algorithm (CMA) and the Shalvi-Weinstein algorithm (SWA). It is well known that SWA presents a higher convergence rate than CMA, at the expense of higher computational complexity. If the forgetting factor is not sufficiently close to one, if the initialization is distant from the optimal solution, or if the signal-to-noise ratio is low, SWA can converge to undesirable local minima or even diverge. In this paper, we show that divergence can be caused by an inconsistency in the nonlinear estimate of the transmitted signal, or (when the algorithm is implemented in finite precision) by the loss of positiveness of the estimate of the autocorrelation matrix, or by a combination of both. In order to avoid the first cause of divergence, we propose a dual-mode SWA. In the first mode of operation, the new algorithm works as SWA; in the second mode, it rejects inconsistent estimates of the transmitted signal. Assuming the persistence-of-excitation condition, we present a deterministic stability analysis of the new algorithm. To avoid the second cause of divergence, we propose a dual-mode lattice SWA, which is stable even in finite-precision arithmetic, and has a computational complexity that increases linearly with the number of adjustable equalizer coefficients. The good performance of the proposed algorithms is confirmed through numerical simulations.
Abstract:
This paper presents the design and implementation of an embedded soft sensor, i.e., a generic and autonomous hardware module that can be applied to many complex plants wherein a certain variable cannot be directly measured. It is implemented based on a fuzzy identification algorithm called "Limited Rules", employed to model continuous nonlinear processes. The fuzzy model has a Takagi-Sugeno-Kang structure and the premise parameters are defined based on the Fuzzy C-Means (FCM) clustering algorithm. The firmware contains the soft sensor and runs online, estimating the target variable from other available variables. Tests have been performed using a simulated pH neutralization plant. The results of the embedded soft sensor have been considered satisfactory. A complete embedded inferential control system is also presented, including a soft sensor and a PID controller. (c) 2007, ISA. Published by Elsevier Ltd. All rights reserved.
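For concreteness, a minimal sketch of the Fuzzy C-Means clustering used to place the premise parameters (hypothetical names; the Takagi-Sugeno-Kang consequents would be fitted separately):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, eps=1e-6, seed=0):
    """Basic Fuzzy C-Means clustering.

    X : data matrix, shape (N, d)
    c : number of clusters (fuzzy rules)
    m : fuzziness exponent (> 1)

    Returns (centers, U), where U[k, i] is the membership of sample k
    in cluster i.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]            # (c, d)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
        inv = d2 ** (-1.0 / (m - 1.0))                            # (N, c)
        U_new = inv / inv.sum(axis=1, keepdims=True)              # memberships
        if np.abs(U_new - U).max() < eps:
            return centers, U_new
        U = U_new
    return centers, U
```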
Abstract:
This paper addresses the single machine scheduling problem with a common due date, aiming to minimize earliness and tardiness penalties. Due to its complexity, most previous studies in the literature deal with this problem using heuristic and metaheuristic approaches. With the intention of contributing to the study of this problem, a branch-and-bound algorithm is proposed. Lower bounds and pruning rules that exploit properties of the problem are introduced. The proposed approach is examined through a computational comparative study with 280 problems involving different due date scenarios. In addition, the values of optimal solutions for small problems from a known benchmark are provided.
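As a generic illustration of the approach (not the paper's specific lower bounds or pruning rules), a minimal best-first branch-and-bound skeleton with hypothetical callback names:

```python
import heapq

def branch_and_bound(root, lower_bound, branch, evaluate, is_complete):
    """Generic best-first branch and bound for a minimization problem.

    root         : initial partial solution (e.g. an empty schedule)
    lower_bound  : partial solution -> optimistic cost bound
    branch       : partial solution -> list of child partial solutions
    evaluate     : complete solution -> exact cost
    is_complete  : partial solution -> bool

    All callbacks are hypothetical; the paper plugs in bounds and
    pruning rules specific to the common due date problem.
    """
    best_cost, best_sol = float('inf'), None
    heap = [(lower_bound(root), 0, root)]     # (bound, tie-breaker, node)
    counter = 1
    while heap:
        bound, _, node = heapq.heappop(heap)
        if bound >= best_cost:
            continue                          # prune: cannot improve
        if is_complete(node):
            cost = evaluate(node)
            if cost < best_cost:
                best_cost, best_sol = cost, node
            continue
        for child in branch(node):
            b = lower_bound(child)
            if b < best_cost:                 # keep only promising children
                heapq.heappush(heap, (b, counter, child))
                counter += 1
    return best_sol, best_cost
```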
Abstract:
This paper proposes and describes an architecture that allows both the engineer and the programmer to define and quantify which peripherals of a microcontroller are important to a particular project. Each application requires different types of peripherals. In this study, we verified the possibility of emulating the behavior of peripherals in specific CPUs. These CPUs hold a RAM memory into which code written specifically for them, representing the behavior of a target peripheral, is loaded and executed. We believe the proposed architecture will provide greater flexibility in the use of microcontrollers, since these "dedicated hardware components" do not implement a single fixed function, but are instead hardware capable of adapting itself to the needs of each project. This research was founded on a comparative study of four current microcontrollers. Preliminary tests using VHDL and FPGAs were performed.
Abstract:
Hormones are likely to be important factors modulating light-dependent anthocyanin accumulation. Here we analyzed anthocyanin contents in hypocotyls of near-isogenic Micro-Tom (MT) tomato lines carrying hormone and phytochrome mutations, as single- and double-mutant combinations. In order to recapitulate the mutant phenotypes, exogenous hormone applications were also performed. Anthocyanin accumulation was promoted by exogenous abscisic acid (ABA) and inhibited by gibberellin (GA), in accordance with the reduced anthocyanin contents measured in ABA-deficient (notabilis) and GA-constitutive response (procera) mutants. Exogenous cytokinin also enhanced anthocyanin levels in MT hypocotyls. Although the auxin-insensitive diageotropica mutant exhibited higher anthocyanin contents, pharmacological approaches employing exogenous auxin and a transport inhibitor did not support a direct role of the hormone in anthocyanin accumulation. Analysis of mutants exhibiting increased ethylene production (epinastic) or reduced sensitivity (Never ripe), together with pharmacological data obtained from plants treated with the hormone, indicated a limited role for ethylene in anthocyanin contents. Phytochrome-deficiency (aurea) and hormone double-mutant combinations exhibited phenotypes suggesting additive or synergistic interactions, but not fully epistatic ones, in the control of anthocyanin levels in tomato hypocotyls. Our results indicate that phytochrome-mediated anthocyanin accumulation in tomato hypocotyls is modulated by distinct hormone classes via both shared and independent pathways. (C) 2010 Elsevier Ireland Ltd. All rights reserved.
Abstract:
The use of chloroplast DNA markers (cpDNA) helps to elucidate questions related to ecology, evolution and genetic structure. Knowledge of inter- and intra-population genetic structure makes it possible to design effective conservation and management strategies for tropical tree species. With the aim of helping the conservation of Hymenaea stigonocarpa of the Cerrado (Brazilian savanna) in Sao Paulo State, an analysis of the spatial genetic structure (SGS) was conducted in two populations using five universal chloroplast microsatellite loci (cpSSR). The population of 68 trees of H. stigonocarpa in the Ecological Station of Itirapina (ESI) had a single haplotype, indicating a strong founder effect. In turn, the population of 47 trees of H. stigonocarpa in a contiguous area that includes the Ecological Station of Assis and the Assis State Forest (ESA/ASF) showed six haplotypes (n̂_h = 6) with a moderate haplotype diversity (ĥ = 0.667 ± 0.094), revealing that it was founded by a small number of maternal lineages. The SGS analysis for the ESA/ASF population, using Moran's I index, indicated limited seed dispersal. Considering the SGS, for ex situ conservation strategies in the ESA/ASF population, seed harvesting should respect a minimum distance of 750 m among seed trees.
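For reference, haplotype diversity of the kind reported above is usually computed with Nei's unbiased estimator (stated here as the standard formula; that this exact estimator was used is an assumption):

```latex
\hat{h} = \frac{n}{n-1}\left(1 - \sum_{i=1}^{k} \hat{p}_i^{\,2}\right)
```

where n is the number of sampled trees, k the number of haplotypes, and p̂_i the estimated frequency of haplotype i.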
Abstract:
Genetic variation and environmental heterogeneity fundamentally shape the interactions between plants of the same species. According to the resource partitioning hypothesis, competition between neighbors intensifies as their similarity increases. Such competition may change in response to increasing supplies of limiting resources. We tested the resource partitioning hypothesis in stands of genetically identical (clone-origin) and genetically diverse (seed-origin) Eucalyptus trees with different water and nutrient supplies, using individual-based tree growth models. We found that genetic variation greatly reduced competitive interactions between neighboring trees, supporting the resource partitioning hypothesis. The importance of genetic variation for Eucalyptus growth patterns depended strongly on local stand structure and focal tree size. This suggests that spatial and temporal variation in the strength of species interactions leads to reversals in the growth rank of seed-origin and clone-origin trees. This study is one of the first to experimentally test the resource partitioning hypothesis for intergenotypic vs. intragenotypic interactions in trees. We provide evidence that variation at the level of genes, and not just species, is functionally important for driving individual and community-level processes in forested ecosystems.