962 results for Fast view-matching algorithm
Abstract:
Higher-order (2,4) FDTD schemes used for numerical solutions of Maxwell's equations focus on diminishing the truncation errors caused by the Taylor series expansion of the spatial derivatives. These schemes use a larger computational stencil, which generally makes use of two constant coefficients, C1 and C2, for the four-point central-difference operators. In this paper we propose a novel way to diminish these truncation errors in order to obtain more accurate numerical solutions of Maxwell's equations. For that purpose, we present a method to individually optimize the pair of coefficients, C1 and C2, based on any desired grid resolution and time-step size. In particular, we are interested in using coarser grid discretizations to be able to simulate electrically large domains. The results of our optimization algorithm show a significant reduction in dispersion error and numerical anisotropy for all modeled grid resolutions. Numerical simulations of free-space propagation verify the very promising theoretical results. The model is also shown to perform well in more complex, realistic scenarios.
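As a point of reference for the four-point operator this abstract describes, the sketch below implements the standard (2,4) staggered central difference with the textbook Taylor coefficients C1 = 9/8 and C2 = -1/24; the paper's contribution is precisely to replace these constants with optimized values, which are not reproduced here.

```python
import numpy as np

def d_dx_24(f, h, c1=9/8, c2=-1/24):
    """Four-point staggered central difference used in (2,4) FDTD.

    Approximates f'(x) at the midpoints between samples:
        f'(x) ~ c1*(f(x+h/2) - f(x-h/2))/h + c2*(f(x+3h/2) - f(x-3h/2))/h
    c1, c2 default to the standard fourth-order Taylor coefficients;
    the paper optimizes them per grid resolution and time step.
    """
    return (c1 * (f[2:-1] - f[1:-2]) + c2 * (f[3:] - f[:-3])) / h
```

With uniform samples f[i] = f(i*h), the returned array approximates the derivative at the midpoints x = (i + 1/2)h for i = 1, ..., len(f) - 3.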
Abstract:
Starting from the Durbin algorithm in polynomial space with an inner product defined by the signal autocorrelation matrix, an isometric transformation is defined that maps this vector space into another one where the Levinson algorithm is performed. Alternatively, for iterative algorithms such as discrete all-pole (DAP), an efficient implementation of a Gohberg-Semencul (GS) relation is developed for the inversion of the autocorrelation matrix which considers its centrosymmetry. In the solution of the autocorrelation equations, the Levinson algorithm is found to be operationally less complex than the procedures based on GS inversion for up to a minimum of five iterations at various linear prediction (LP) orders.
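The Levinson algorithm referred to above is classical; a minimal sketch of the Levinson-Durbin recursion for the LP autocorrelation equations (sign convention: monic predictor with a[0] = 1, predictor output x̂[n] = -Σ a[i] x[n-i]) might look like:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion for the LP autocorrelation equations.

    r: autocorrelation sequence r[0..order].
    Returns (a, e): prediction coefficients with a[0] = 1 and the
    final prediction error e.  Runs in O(order^2) by exploiting the
    Toeplitz structure, versus O(order^3) for a general solver.
    """
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    for m in range(1, order + 1):
        # reflection coefficient from the current coefficients
        k = -np.dot(a[:m], r[m:0:-1]) / e
        a[1:m + 1] += k * a[m - 1::-1]   # order-update of coefficients
        e *= (1.0 - k * k)               # order-update of the error
    return a, e
```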
Abstract:
In this paper the continuous Verhulst dynamic model is used to synthesize a new distributed power control algorithm (DPCA) for use in direct-sequence code division multiple access (DS-CDMA) systems. The Verhulst model was originally designed to describe the population growth of biological species under food and physical space restrictions. The discretization of the corresponding differential equation is accomplished via the Euler numeric integration (ENI) method. Analytical convergence conditions for the proposed DPCA are also established. Several properties of the proposed recursive algorithm, such as the Euclidean distance from the optimum vector after convergence, convergence speed, normalized mean squared error (NSE), average power consumption per user, performance under channel dynamics, and implementation complexity, are analyzed through simulations. The simulation results are compared with two other DPCAs: the classic algorithm derived by Foschini and Miljanic and the sigmoidal algorithm of Uykan and Koivo. In the presence of estimation errors, the proposed DPCA exhibits a smaller discrepancy from the optimum power vector solution and better convergence (under both fixed and adaptive convergence factors) than the classic and sigmoidal DPCAs. (C) 2010 Elsevier GmbH. All rights reserved.
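The building block of the proposed DPCA is the Euler (ENI) discretization of the Verhulst equation dp/dt = r·p·(1 - p/K). The toy sketch below shows only that discretization, not the power-control recursion itself; parameter names are illustrative.

```python
def verhulst_euler(p0, r, K, h, steps):
    """Forward-Euler (ENI) discretization of the Verhulst model
    dp/dt = r*p*(1 - p/K).

    p0: initial value, r: growth rate, K: carrying capacity,
    h: integration step, steps: number of iterations.
    Returns the full trajectory [p_0, p_1, ..., p_steps].
    """
    p = p0
    traj = [p]
    for _ in range(steps):
        p = p + h * r * p * (1.0 - p / K)  # forward-Euler update
        traj.append(p)
    return traj
```

For 0 < p0 < K and a sufficiently small step h·r, the iterates increase monotonically toward the carrying capacity K, mirroring the saturation behavior the continuous model describes.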
Abstract:
The main goal of this paper is to apply the so-called policy iteration algorithm (PIA) to the long-run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space, with a compact action space depending on the state variable. To do so, we first derive some important properties of a pseudo-Poisson equation associated with the problem. It is then shown that the convergence of the PIA to a solution satisfying the optimality equation holds under some classical hypotheses, and that this optimal solution yields an optimal control strategy for the average control problem for the continuous-time PDMP in feedback form.
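As a finite-dimensional analogue of the PIA studied in the paper (which works on general Borel state spaces), here is a minimal policy iteration for a small discounted-cost MDP; the discounted criterion is a simplification of the paper's long-run average criterion, used only to show the evaluate/improve structure.

```python
import numpy as np

def policy_iteration(P, c, beta=0.95):
    """Policy iteration on a finite MDP with discounted cost.

    P: array (n_actions, n_states, n_states) of transition matrices.
    c: array (n_actions, n_states) of stage costs.
    Returns (policy, value) at convergence.
    """
    n_a, n_s, _ = P.shape
    policy = np.zeros(n_s, dtype=int)
    while True:
        # policy evaluation: solve (I - beta*P_pi) v = c_pi
        P_pi = P[policy, np.arange(n_s)]
        c_pi = c[policy, np.arange(n_s)]
        v = np.linalg.solve(np.eye(n_s) - beta * P_pi, c_pi)
        # policy improvement: act greedily w.r.t. the current value
        q = c + beta * P @ v            # shape (n_actions, n_states)
        new_policy = q.argmin(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v            # fixed point: optimal policy
        policy = new_policy
```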
Abstract:
This essay attempts to give some mathematical shape to the concept of biological complexity, exploring four attributes considered essential to characterize a complex system in a biological context: decomposition, heterogeneous assembly, self-organization, and adequacy. It is a theoretical and speculative approach that opens possibilities for further numerical and experimental work, illustrated by references to several studies that have applied the concepts presented here. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
An algorithm inspired by ant behavior is developed to find the topology of an electric energy distribution network with minimum power loss. The algorithm's performance is investigated in hypothetical and actual circuits. When applied to an actual distribution system in a region of the State of Sao Paulo (Brazil), the solution found by the algorithm presents lower loss than the topology built by the concessionaire.
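The abstract does not detail the algorithm; as a generic illustration of the ant heuristic it draws on (probabilistic edge choice, pheromone deposit and evaporation), here is a minimal ant-colony search for a cheapest path on a toy graph. This is not the paper's network-topology algorithm, and all parameter values are illustrative.

```python
import random

def aco_cheapest_path(graph, src, dst, n_ants=20, n_iter=50,
                      rho=0.5, q=1.0, seed=0):
    """Minimal ant-colony optimization sketch.

    graph: dict node -> {neighbor: edge_cost}.  Each ant walks from
    src to dst, choosing edges with probability proportional to
    pheromone; shorter tours deposit more pheromone, evaporation
    forgets stale trails.  Returns (best_path, best_cost).
    """
    random.seed(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}  # pheromone
    best_path, best_cost = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            node, path, cost, visited = src, [src], 0.0, {src}
            while node != dst:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:
                    break                       # dead end: abandon ant
                weights = [tau[(node, v)] for v in choices]
                nxt = random.choices(choices, weights=weights)[0]
                cost += graph[node][nxt]
                path.append(nxt)
                visited.add(nxt)
                node = nxt
            if node == dst:
                tours.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost
        for key in tau:                         # evaporation
            tau[key] *= (1.0 - rho)
        for path, cost in tours:                # deposit ~ tour quality
            for u, v in zip(path, path[1:]):
                tau[(u, v)] += q / cost
    return best_path, best_cost
```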
Abstract:
The most popular algorithms for blind equalization are the constant-modulus algorithm (CMA) and the Shalvi-Weinstein algorithm (SWA). It is well known that SWA presents a higher convergence rate than CMA, at the expense of higher computational complexity. If the forgetting factor is not sufficiently close to one, if the initialization is distant from the optimal solution, or if the signal-to-noise ratio is low, SWA can converge to undesirable local minima or even diverge. In this paper, we show that divergence can be caused by an inconsistency in the nonlinear estimate of the transmitted signal, or (when the algorithm is implemented in finite precision) by the loss of positiveness of the estimate of the autocorrelation matrix, or by a combination of both. In order to avoid the first cause of divergence, we propose a dual-mode SWA. In the first mode of operation, the new algorithm works as SWA; in the second mode, it rejects inconsistent estimates of the transmitted signal. Assuming the persistence-of-excitation condition, we present a deterministic stability analysis of the new algorithm. To avoid the second cause of divergence, we propose a dual-mode lattice SWA, which is stable even in finite-precision arithmetic, and has a computational complexity that increases linearly with the number of adjustable equalizer coefficients. The good performance of the proposed algorithms is confirmed through numerical simulations.
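For reference, the CMA half of the comparison is the classic stochastic-gradient update minimizing E[(|y|² - R2)²]; a minimal sketch (center-spike initialization, parameter values illustrative, not the dual-mode SWA the paper proposes) is:

```python
import numpy as np

def cma_equalize(x, n_taps=11, mu=1e-3, r2=1.0):
    """Constant-modulus algorithm (CMA) for blind equalization.

    x: received (complex) samples; r2: constant-modulus target
    E|s|^4 / E|s|^2 of the transmitted constellation.
    Returns the adapted tap vector w for output y = w^H u.
    """
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                  # center-spike initialization
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]         # regressor, newest sample first
        y = np.dot(w.conj(), u)           # equalizer output
        e = y * (np.abs(y) ** 2 - r2)     # CMA error term
        w -= mu * e.conj() * u            # stochastic-gradient step
    return w
```

Note that for a distortionless channel carrying a constant-modulus constellation (|y|² = r2 at every step), the error term vanishes and the taps stay at the initialization, as expected of a blind criterion.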
Abstract:
This work proposes the use of evolutionary computation to jointly solve the multiuser channel estimation (MuChE) and maximum-likelihood detection problems in direct-sequence code division multiple access (DS/CDMA) systems. The effectiveness of the proposed heuristic approach is demonstrated by comparing performance and complexity figures of merit with those obtained by traditional methods in the literature. Simulation results for a genetic algorithm (GA) applied to multipath DS/CDMA MuChE and multi-user detection (MuD) show that the proposed genetic-algorithm multi-user channel estimation (GAMuChE) yields a normalized mean square estimation error (nMSE) below 11% under slowly varying multipath fading channels, a large range of Doppler frequencies and medium system load, while exhibiting lower complexity than both maximum-likelihood multi-user channel estimation (MLMuChE) and the gradient descent method (GrdDsc). A near-optimum multi-user detector based on the genetic algorithm (GAMuD), also proposed in this work, provides a significant reduction in computational complexity compared to the optimum multi-user detector (OMuD). In addition, the complexity of the GAMuChE and GAMuD algorithms was (jointly) analyzed in terms of the number of operations necessary to reach convergence, and compared to other joint MuChE and MuD strategies. The joint GAMuChE-GAMuD scheme can be regarded as a promising alternative for implementing third-generation (3G) and fourth-generation (4G) wireless systems in the near future. Copyright (C) 2010 John Wiley & Sons, Ltd.
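As a generic sketch of the GA machinery behind a detector like GAMuD, the code below evolves ±1 symbol vectors under an arbitrary fitness function (which would stand in for the negated ML detection metric). All operators and parameter values are illustrative, not the paper's.

```python
import random

def ga_search(fitness, n_bits, pop_size=40, n_gen=60,
              p_cross=0.9, p_mut=0.02, seed=0):
    """Generic genetic-algorithm search over +-1 bit vectors.

    fitness: callable scoring a candidate vector (higher is better).
    Uses truncation selection, one-point crossover, bit-flip mutation,
    and keeps the best-ever individual (elitism).
    """
    rng = random.Random(seed)
    pop = [[rng.choice((-1, 1)) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(n_gen):
        scored = sorted(pop, key=fitness, reverse=True)
        if fitness(scored[0]) > fitness(best):
            best = list(scored[0])
        parents = scored[:pop_size // 2]        # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            if rng.random() < p_cross:          # one-point crossover
                cut = rng.randrange(1, n_bits)
                child = a[:cut] + b[cut:]
            else:
                child = list(a)
            child = [-g if rng.random() < p_mut else g
                     for g in child]            # bit-flip mutation
            children.append(child)
        pop = children
    return best
```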
Abstract:
This paper presents the design and implementation of an embedded soft sensor, i.e., a generic and autonomous hardware module that can be applied to many complex plants in which a certain variable cannot be directly measured. It is implemented based on a fuzzy identification algorithm called "Limited Rules", employed to model continuous nonlinear processes. The fuzzy model has a Takagi-Sugeno-Kang structure and the premise parameters are defined based on the Fuzzy C-Means (FCM) clustering algorithm. The firmware contains the soft sensor and runs online, estimating the target variable from other available variables. Tests have been performed using a simulated pH neutralization plant. The results of the embedded soft sensor have been considered satisfactory. A complete embedded inferential control system is also presented, including a soft sensor and a PID controller. (c) 2007, ISA. Published by Elsevier Ltd. All rights reserved.
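The premise parameters above come from Fuzzy C-Means; the standard FCM iteration (alternating center and membership updates with fuzzifier m) can be sketched as follows — this shows the textbook algorithm, not the paper's embedded implementation.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, tol=1e-6, seed=0):
    """Fuzzy C-Means clustering.

    X: (n_samples, n_features).  Returns (centers, U), where U[i, k]
    is the membership of sample i in cluster k (rows sum to 1).
    m > 1 is the fuzzifier; m -> 1 approaches hard k-means.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)           # normalize memberships
    for _ in range(n_iter):
        W = U ** m
        # center update: weighted mean of the samples
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # membership update from squared sample-to-center distances
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        d2 = np.maximum(d2, 1e-12)              # guard exact hits
        inv = d2 ** (-1.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new               # converged
        U = U_new
    return centers, U
```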
Abstract:
This paper addresses the single-machine scheduling problem with a common due date, aiming to minimize earliness and tardiness penalties. Due to its complexity, most previous studies in the literature deal with this problem using heuristic and metaheuristic approaches. With the intention of contributing to the study of this problem, a branch-and-bound algorithm is proposed. Lower bounds and pruning rules that exploit properties of the problem are introduced. The proposed approach is examined through a computational comparative study with 280 problems involving different due date scenarios. In addition, the values of optimal solutions for small problems from a known benchmark are provided.
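A minimal depth-first branch-and-bound sketch for this objective, using only the trivial bound that a partial sequence's accumulated penalty cannot decrease as jobs are appended (the paper's lower bounds and pruning rules are much stronger). Jobs are assumed to run back-to-back from time 0 against a common due date d, with per-job earliness weights alpha and tardiness weights beta.

```python
def bnb_schedule(p, d, alpha, beta):
    """Depth-first branch-and-bound for common-due-date E/T scheduling.

    p: processing times; d: common due date; alpha/beta: earliness and
    tardiness weights.  Because every appended job adds a nonnegative
    penalty, the accumulated cost of a partial sequence is a valid
    lower bound and prunes branches that cannot beat the incumbent.
    Returns (best_sequence, best_cost).
    """
    n = len(p)
    best = {"seq": None, "cost": float("inf")}

    def expand(seq, t, cost, remaining):
        if cost >= best["cost"]:
            return                       # prune: bound beats incumbent
        if not remaining:
            best["seq"], best["cost"] = seq, cost
            return
        for j in sorted(remaining):
            tj = t + p[j]                # completion time of job j
            cj = cost + alpha[j] * max(0, d - tj) \
                      + beta[j] * max(0, tj - d)
            expand(seq + [j], tj, cj, remaining - {j})

    expand([], 0, 0, frozenset(range(n)))
    return best["seq"], best["cost"]
```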
Abstract:
Eucalyptus is the dominant and most productive planted forest in Brazil, covering around 3.4 million ha for the production of charcoal, pulp, sawtimber, timber plates, wood foils, plywood and for building purposes. At the early establishment of the forest plantations, during the second half of the 1960s, the eucalypt yield was 10 m³ ha⁻¹ y⁻¹. Now, as a result of investments in research and technology, the average productivity is 38 m³ ha⁻¹ y⁻¹. The productivity restrictions are related to the following environmental factors, in order of importance: water deficits > nutrient deficiency > soil depth and strength. Clonal forests have been fundamental in sites with larger water and nutrient restrictions, where they out-perform those established from traditional seed-based planting stock. When the environmental limitations are small, the productivities of plantations based on clones or seeds appear to be similar. In the long term there are risks to sustainability, because of the low fertility and low reserves of primary minerals in the soils, which are commonly loamy and clayey oxisols and ultisols. Usually, a decline in soil quality is caused by management that does not conserve soil and site resources, that damages soil physical and chemical characteristics, or that applies insufficient or unbalanced fertiliser. The problem is more serious when fast-growing genotypes are planted, which have a high nutrient demand and uptake capacity, and therefore a high nutrient output through harvesting. The need to mobilise less soil by providing more cover and protection, to reduce nutrient and organic matter losses, to preserve crucial physical properties such as permeability (root growth, infiltration and aeration), to improve weed control and to reduce costs has led to a progressive increase in the use of minimum cultivation practices during the last 20 years, which has been accepted as a good alternative to keep or increase site quality in the long term.
In this paper we provide a synthesis and critical appraisal of the research results and practical implications of early silvicultural management on long-term site productivity of fast-growing eucalypt plantations arising from the Brazilian context.
Abstract:
Understanding resource capture can help design appropriate species combinations, planting designs and management. Leaf area index (LAI) and its longevity are the most important factors defining dry matter production and thus growth and productivity. The ecophysiological modifications and yield of rubber (Hevea spp.) in an agroforestry system (AFS) with beans (Phaseolus vulgaris L.) were studied. The experiment was established in southeastern Brazil with three rubber cultivars: IAN 3087, RRIM 600 and RRIM 527. The AFS comprised double rows of rubber trees along with beans sown in the autumn and winter seasons of 1999. Rubber yield per tree was about 50% higher in the AFS than in the rubber monoculture. Trees within the AFS responded to higher solar radiation availability with higher LAI and total foliage area, allowing greater interception. All three cultivars had higher LAI in the AFS than in monoculture, reaching maximum values in the AFS between April and May: 3.17 for RRIM 527, 2.83 for RRIM 600 and 2.28 for IAN 3087. The maximum LAI values for monocrop rubber trees were 2.65, 2.62 and 1.99, respectively, for each cultivar. Rubber production and LAI were positively correlated in both the AFS and the monoculture, but leaf fall of rubber trees in the AFS was delayed and total phytomass was larger. It is suggested that trees in the AFS were under-exploited and could yield more without compromising their life cycle if the tapping system were intensified. This shows how knowledge of LAI can be used to manage tapping intensity in the field, leading to higher rubber yield.
Abstract:
By allowing the estimation of forest structural and biophysical characteristics at different temporal and spatial scales, remote sensing may contribute to our understanding and monitoring of planted forests. Here, we studied 9-year time-series of the Normalized Difference Vegetation Index (NDVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS) on a network of 16 stands in fast-growing Eucalyptus plantations in Sao Paulo State, Brazil. We aimed to examine the relationships between NDVI time-series spanning entire rotations and stand structural characteristics (volume, dominant height, mean annual increment) in these simple forest ecosystems. Our second objective was to examine spatial and temporal variations of light use efficiency for wood production, by comparing time-series of Absorbed Photosynthetically Active Radiation (APAR) with inventory data. Relationships were calibrated between the NDVI and the fractions of intercepted diffuse and direct radiation, using hemispherical photographs taken on the studied stands at two seasons. APAR was calculated from the NDVI time-series using these relationships. Stem volume and dominant height were strongly correlated with summed NDVI values between planting date and inventory date. Stand productivity was correlated with mean NDVI values. APAR during the first 2 years of growth was variable between stands and was well correlated with stem wood production (r² = 0.78). In contrast, APAR during the following years was less variable and not significantly correlated with stem biomass increments. Production of wood per unit of absorbed light varied with stand age and with site index. In our study, a better site index was accompanied both by increased APAR during the first 2 years of growth and by higher light use efficiency for stem wood production during the whole rotation. Implications for simple process-based modelling are discussed. (C) 2009 Elsevier B.V. All rights reserved.
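A common simplification of the APAR computation described above (not the paper's calibration, which uses hemispherical photographs) approximates fAPAR as a clipped linear function of NDVI and sums APAR = fAPAR × PAR over the series; the coefficients a and b below are illustrative placeholders only.

```python
def apar_from_ndvi(ndvi_series, par_series, a=1.25, b=-0.15):
    """Summed APAR estimate from an NDVI time-series.

    fAPAR is approximated as a linear function of NDVI
    (fAPAR = a*NDVI + b, clipped to [0, 1]); a and b are
    illustrative, where the paper calibrates NDVI against measured
    intercepted-radiation fractions instead.
    par_series: incident PAR per period (e.g. MJ m-2).
    """
    total = 0.0
    for ndvi, par in zip(ndvi_series, par_series):
        fapar = min(1.0, max(0.0, a * ndvi + b))  # clip to [0, 1]
        total += fapar * par                      # absorbed PAR
    return total
```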
Abstract:
Despite the importance of Eucalyptus spp. in the pulp and paper industry, functional genomic approaches have only recently been applied to understand wood formation in this genus. We attempted to establish a global view of gene expression in the juvenile cambial region of Eucalyptus grandis Hill ex Maiden. The expression profile was obtained from serial analysis of gene expression (SAGE) library data produced from 3- and 6-year-old trees. Fourteen-base SAGE tags were searched against public Eucalyptus expressed sequence tags (ESTs) and annotated with GenBank. Altogether 43,304 tags were generated, producing 3066 unigenes with three or more copies each, 445 with a putative identity, 215 with unknown function and 2406 without an EST match. The expression profile of the juvenile cambial region revealed the presence of highly frequent transcripts related to general metabolism and energy metabolism, cellular processes, transport, structural components and information pathways. We made a quantitative analysis of a large number of genes involved in the biosynthesis of cellulose, pectin, hemicellulose and lignin. Our findings provide insight into the expression of functionally related genes involved in juvenile wood formation in young fast-growing E. grandis trees.
Abstract:
A method was optimized for the analysis of omeprazole (OMZ) by ultra-high speed LC with diode array detection using a monolithic Chromolith Fast Gradient RP 18 endcapped column (50 × 2.0 mm i.d.). The analyses were performed at 30 °C using a mobile phase consisting of 0.15% (v/v) trifluoroacetic acid (TFA) in water (solvent A) and 0.15% (v/v) TFA in acetonitrile (solvent B) under a linear gradient of 5 to 90% B in 1 min at a flow rate of 1.0 mL/min and detection at 220 nm. Under these conditions, OMZ retention time was approximately 0.74 min. Validation parameters, such as selectivity, linearity, precision, accuracy, and robustness, showed results within the acceptable criteria. The method developed was successfully applied to OMZ enteric-coated pellets, showing that this assay can be used in the pharmaceutical industry for routine QC analysis. Moreover, the analytical conditions established allow for the simultaneous analysis of OMZ metabolites, 5-hydroxyomeprazole and omeprazole sulfone, in the same run, showing that this method can be extended to other matrixes with adequate procedures for sample preparation.