76 results for Clustering search algorithm
Abstract:
We derive an easy-to-compute approximate bound for the range of step sizes for which the constant-modulus algorithm (CMA) will remain stable if initialized close to a minimum of the CM cost function. Our model highlights the influence of the signal constellation used in the transmission system: for smaller variation in the modulus of the transmitted symbols, the algorithm will be more robust and the steady-state misadjustment will be smaller. The theoretical results are validated through several simulations, for long and short filters and channels.
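For reference, the sketch below shows the standard stochastic-gradient CMA update that such a bound applies to, with the dispersion constant computed from the constellation; the channel, noise level, step size, and center-spike initialization are arbitrary placeholders, and the step-size bound itself (the paper's contribution) is not reproduced.

```python
# Minimal CMA sketch (assumed generic form, not the paper's analysis).
import numpy as np

rng = np.random.default_rng(0)
N, L = 5000, 11                     # number of symbols, equalizer length
mu = 1e-3                           # step size; the bound discussed above constrains this value

# QPSK has constant modulus, so the dispersion of |a|^2 is zero (the most robust case).
a = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
h = np.array([1.0, 0.4, -0.2])      # toy channel impulse response
x = np.convolve(a, h)[:N] + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

R2 = np.mean(np.abs(a) ** 4) / np.mean(np.abs(a) ** 2)   # constant-modulus dispersion constant
w = np.zeros(L, dtype=complex)
w[L // 2] = 1.0                     # center-spike initialization (a common heuristic)

for n in range(L, N):
    xn = x[n - L:n][::-1]           # regressor, most recent sample first
    y = np.vdot(w, xn)              # equalizer output y = w^H x
    e = (R2 - np.abs(y) ** 2) * y   # CM error term
    w = w + mu * np.conj(e) * xn    # stochastic-gradient update
```

For constellations whose symbols do not all share the same modulus (e.g., 16-QAM), the error term above never vanishes exactly, which is the mechanism behind the larger misadjustment mentioned in the abstract.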
Abstract:
Higher order (2,4) FDTD schemes used for numerical solutions of Maxwell's equations are focused on diminishing the truncation errors caused by the Taylor series expansion of the spatial derivatives. These schemes use a larger computational stencil, which generally makes use of two constant coefficients, C1 and C2, for the four-point central-difference operators. In this paper we propose a novel way to diminish these truncation errors in order to obtain more accurate numerical solutions of Maxwell's equations. For this purpose, we present a method to individually optimize the pair of coefficients, C1 and C2, based on any desired grid resolution and time-step size. In particular, we are interested in using coarser grid discretizations to be able to simulate electrically large domains. The results of our optimization algorithm show a significant reduction in dispersion error and numerical anisotropy for all modeled grid resolutions. Numerical simulations of free-space propagation verify the very promising theoretical results. The model is also shown to perform well in more complex, realistic scenarios.
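For context, the following sketch shows the four-point central-difference operator that the two coefficients enter. The values C1 = 9/8 and C2 = -1/24 are the standard Taylor-series choices; the paper replaces them with coefficients optimized for a chosen grid resolution and time step, and that optimization is not reproduced here.

```python
# Staggered four-point central difference used by (2,4) FDTD schemes.
import numpy as np

def diff4(f, dx, c1=9.0 / 8.0, c2=-1.0 / 24.0):
    """Derivative of samples f[i] (located at i*dx) evaluated at the half-grid
    points i + 1/2, for the interior points where the stencil fits."""
    d = np.empty(len(f) - 3)
    for i in range(1, len(f) - 2):
        d[i - 1] = (c1 * (f[i + 1] - f[i]) + c2 * (f[i + 2] - f[i - 1])) / dx
    return d

# Quick check on a smooth field: the operator should closely track cos(x).
x = np.linspace(0.0, 2.0 * np.pi, 200)
dx = x[1] - x[0]
approx = diff4(np.sin(x), dx)
exact = np.cos(x[1:-2] + dx / 2.0)
print(np.max(np.abs(approx - exact)))   # small truncation error
```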
Abstract:
Starting from the Durbin algorithm in polynomial space with an inner product defined by the signal autocorrelation matrix, an isometric transformation is defined that maps this vector space into another one in which the Levinson algorithm is performed. Alternatively, for iterative algorithms such as discrete all-pole (DAP), an efficient implementation of a Gohberg-Semencul (GS) relation is developed for the inversion of the autocorrelation matrix which takes advantage of its centrosymmetry. In the solution of the autocorrelation equations, the Levinson algorithm is found to be operationally less complex than the GS-inversion-based procedures for up to a minimum of five iterations at various linear prediction (LP) orders.
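As background, a textbook Levinson-Durbin recursion for the autocorrelation equations is sketched below; it illustrates the O(p^2) solver referred to above, not the isometric-transformation or Gohberg-Semencul machinery developed in the paper.

```python
# Textbook Levinson-Durbin recursion (assumed generic form).
import numpy as np

def levinson_durbin(r, order):
    """Solve the autocorrelation (Yule-Walker) equations for LP coefficients.

    r: autocorrelation sequence r[0..order].
    Returns (a, e): prediction polynomial with a[0] = 1, and the final
    prediction-error power e.
    """
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    for m in range(1, order + 1):
        k = -(r[m] + np.dot(a[1:m], r[m - 1:0:-1])) / e   # reflection coefficient
        a[1:m] = a[1:m] + k * a[m - 1:0:-1]               # update inner coefficients
        a[m] = k
        e *= (1.0 - k * k)                                # updated error power
    return a, e

# Example on a short, decaying autocorrelation sequence.
r = np.array([1.0, 0.7, 0.4, 0.2])
print(levinson_durbin(r, 3))
```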
Abstract:
In this paper the continuous Verhulst dynamic model is used to synthesize a new distributed power control algorithm (DPCA) for use in direct sequence code division multiple access (DS-CDMA) systems. The Verhulst model was initially designed to describe the population growth of biological species under food and physical space restrictions. The discretization of the corresponding differential equation is accomplished via the Euler numeric integration (ENI) method. Analytical convergence conditions for the proposed DPCA are also established. Several properties of the proposed recursive algorithm, such as the Euclidean distance from the optimum vector after convergence, convergence speed, normalized mean squared error (NSE), average power consumption per user, performance under dynamic channels, and implementation complexity aspects, are analyzed through simulations. The simulation results are compared with those of two other DPCAs: the classic algorithm derived by Foschini and Miljanic and the sigmoidal algorithm of Uykan and Koivo. Under estimation error conditions, the proposed DPCA exhibits smaller discrepancy from the optimum power vector solution and better convergence (under fixed and adaptive convergence factor) than the classic and sigmoidal DPCAs. (C) 2010 Elsevier GmbH. All rights reserved.
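To make the recursion concrete, the sketch below applies Euler integration to the Verhulst equation dp/dt = alpha*p*(1 - p/K), taking the equilibrium K as the power that meets the target SINR; this is one plausible reading of the construction described above, and the link gains, noise level, targets, and convergence factor are placeholder values, not those of the paper.

```python
# Hedged sketch of a Verhulst-type power-control iteration (placeholder scenario).
import numpy as np

rng = np.random.default_rng(1)
K_users = 4
G = rng.uniform(0.01, 0.1, (K_users, K_users))        # cross-link gains G[i, j]
np.fill_diagonal(G, rng.uniform(1.0, 2.0, K_users))   # own-link gains
noise = 1e-3
gamma_target = np.full(K_users, 5.0)                  # target SINRs (linear scale)
alpha = 0.5                                           # convergence factor (r * h in the Euler step)

p = np.full(K_users, 0.01)                            # initial transmit powers
for _ in range(100):
    interf = G @ p - np.diag(G) * p + noise           # interference plus noise per user
    gamma = np.diag(G) * p / interf                   # measured SINRs
    # Euler step of dp/dt = alpha*p*(1 - p/K) with K = p * gamma_target / gamma:
    p = (1.0 + alpha) * p - alpha * (gamma / gamma_target) * p

print(gamma)   # approaches gamma_target when the targets are feasible
```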
Abstract:
The main goal of this paper is to apply the so-called policy iteration algorithm (PIA) to the long-run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space and with compact action space depending on the state variable. In order to do that, we first derive some important properties for a pseudo-Poisson equation associated with the problem. In the sequel, it is shown that the convergence of the PIA to a solution satisfying the optimality equation holds under some classical hypotheses, and that this optimal solution yields an optimal control strategy for the average control problem for the continuous-time PDMP in feedback form.
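The paper's setting (long-run average cost for PDMPs on general Borel spaces) does not reduce to a short code example, but the evaluate/improve structure of the PIA can be illustrated on a finite, discounted Markov decision process; the sketch below is only this simplified analogue, not the continuous-time construction above.

```python
# Policy iteration on a finite, discounted MDP (simplified analogue of the PIA).
import numpy as np

def policy_iteration(P, c, gamma=0.95):
    """P[a][s, s']: transition probabilities under action a; c[s, a]: one-step costs."""
    n_states, n_actions = c.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = c_pi.
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        c_pi = c[np.arange(n_states), policy]
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, c_pi)
        # Policy improvement: act greedily with respect to the one-step lookahead cost.
        Q = np.stack([c[:, a] + gamma * P[a] @ V for a in range(n_actions)], axis=1)
        new_policy = np.argmin(Q, axis=1)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy

# Tiny two-state, two-action example.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # transitions under action 0
     np.array([[0.5, 0.5], [0.6, 0.4]])]   # transitions under action 1
c = np.array([[1.0, 2.0], [4.0, 0.5]])     # c[s, a]
print(policy_iteration(P, c))
```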
Abstract:
An algorithm inspired by ant behavior is developed to find the topology of an electric energy distribution network with minimum power loss. The algorithm's performance is investigated on hypothetical and actual circuits. When applied to an actual distribution system in a region of the State of Sao Paulo (Brazil), the solution found by the algorithm presents lower losses than the topology built by the utility (concessionaire).
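A generic ant-colony-style skeleton of the kind alluded to above is sketched below. The binary encoding (e.g., switch open/closed states), the surrogate loss function, and the pheromone rule are illustrative placeholders; a real reconfiguration algorithm would also have to enforce radiality and supply every load, and would evaluate losses with a power-flow calculation.

```python
# Generic ant-colony skeleton over a binary configuration (placeholder problem).
import numpy as np

rng = np.random.default_rng(2)
n_bits, n_ants, n_iter, rho = 12, 20, 60, 0.1

def loss(bits):
    """Placeholder surrogate for network power loss (minimum at an alternating pattern)."""
    target = (np.arange(n_bits) % 2).astype(float)
    return 1.0 + np.sum((bits - target) ** 2)

tau = np.full((n_bits, 2), 0.5)             # pheromone for setting each bit to 0 or 1
best_bits, best_loss = None, np.inf
for _ in range(n_iter):
    for _ in range(n_ants):
        prob_one = tau[:, 1] / tau.sum(axis=1)
        bits = (rng.random(n_bits) < prob_one).astype(float)
        value = loss(bits)
        if value < best_loss:
            best_bits, best_loss = bits, value
    tau *= (1.0 - rho)                      # pheromone evaporation
    idx = best_bits.astype(int)
    tau[np.arange(n_bits), idx] += 1.0 / best_loss   # reinforce the best configuration

print(best_bits, best_loss)
```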
Abstract:
The most popular algorithms for blind equalization are the constant-modulus algorithm (CMA) and the Shalvi-Weinstein algorithm (SWA). It is well known that SWA presents a higher convergence rate than CMA, at the expense of higher computational complexity. If the forgetting factor is not sufficiently close to one, if the initialization is distant from the optimal solution, or if the signal-to-noise ratio is low, SWA can converge to undesirable local minima or even diverge. In this paper, we show that divergence can be caused by an inconsistency in the nonlinear estimate of the transmitted signal, or (when the algorithm is implemented in finite precision) by the loss of positiveness of the estimate of the autocorrelation matrix, or by a combination of both. In order to avoid the first cause of divergence, we propose a dual-mode SWA. In the first mode of operation, the new algorithm works as SWA; in the second mode, it rejects inconsistent estimates of the transmitted signal. Assuming a persistence-of-excitation condition, we present a deterministic stability analysis of the new algorithm. To avoid the second cause of divergence, we propose a dual-mode lattice SWA, which is stable even in finite-precision arithmetic and has a computational complexity that increases linearly with the number of adjustable equalizer coefficients. The good performance of the proposed algorithms is confirmed through numerical simulations.
Abstract:
This work proposes the use of evolutionary computation to jointly solve the multiuser channel estimation (MuChE) and maximum-likelihood detection problems in direct-sequence code division multiple access (DS/CDMA) systems. The effectiveness of the proposed heuristic approach is demonstrated by comparing performance and complexity figures of merit with those obtained by traditional methods in the literature. Simulation results for a genetic algorithm (GA) applied to MuChE and multiuser detection (MuD) in multipath DS/CDMA show that the proposed genetic-algorithm multiuser channel estimation (GAMuChE) yields a normalized mean square error (nMSE) below 11% under slowly varying multipath fading channels, a wide range of Doppler frequencies, and medium system load, while exhibiting lower complexity than both maximum-likelihood multiuser channel estimation (MLMuChE) and the gradient descent method (GrdDsc). A near-optimum multiuser detector based on the genetic algorithm (GAMuD), also proposed in this work, provides a significant reduction in computational complexity compared to the optimum multiuser detector (OMuD). In addition, the complexities of the GAMuChE and GAMuD algorithms were (jointly) analyzed in terms of the number of operations needed to reach convergence and compared to other joint MuChE and MuD strategies. The joint GAMuChE-GAMuD scheme can be regarded as a promising alternative for implementing third-generation (3G) and fourth-generation (4G) wireless systems in the near future. Copyright (C) 2010 John Wiley & Sons, Ltd.
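As a rough illustration of the heuristic machinery involved, the sketch below shows a generic genetic-algorithm loop (tournament selection, single-point crossover, mutation); the real-valued encoding and the fitness function are placeholders, not the maximum-likelihood metric optimized by GAMuChE/GAMuD.

```python
# Generic GA loop (placeholder fitness; not the DS/CDMA likelihood metric).
import numpy as np

rng = np.random.default_rng(3)
pop_size, n_genes, n_gen, p_mut = 40, 16, 100, 0.02

def fitness(chrom):
    """Placeholder objective to maximize."""
    return -np.sum((chrom - 0.5) ** 2)

pop = rng.random((pop_size, n_genes))
for _ in range(n_gen):
    fit = np.array([fitness(c) for c in pop])
    # Binary tournament selection.
    idx = rng.integers(0, pop_size, (pop_size, 2))
    parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # Single-point crossover between consecutive parents.
    children = parents.copy()
    cuts = rng.integers(1, n_genes, pop_size // 2)
    for i, c in enumerate(cuts):
        children[2 * i, c:] = parents[2 * i + 1, c:]
        children[2 * i + 1, c:] = parents[2 * i, c:]
    # Uniform mutation.
    mask = rng.random(children.shape) < p_mut
    children[mask] = rng.random(np.count_nonzero(mask))
    pop = children

best = pop[np.argmax([fitness(c) for c in pop])]
print(fitness(best))
```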
Abstract:
This paper analyzes the complexity-performance trade-off of several heuristic near-optimum multiuser detection (MuD) approaches applied to the uplink of synchronous single/multiple-input multiple-output multicarrier code division multiple access (S/MIMO MC-CDMA) systems. Genetic algorithm (GA), short-term tabu search (STTS) and reactive tabu search (RTS), simulated annealing (SA), particle swarm optimization (PSO), and 1-opt local search (1-LS) heuristic multiuser detection algorithms (Heur-MuDs) are analyzed in detail, using a single-objective antenna-diversity-aided optimization approach. Monte Carlo simulations show that, after convergence, the performances reached by all near-optimum Heur-MuDs are similar. However, the computational complexities may differ substantially, depending on the system operating conditions. Their complexities are carefully analyzed in order to obtain a general complexity-performance comparison framework and to show that unitary Hamming distance search MuD (uH-ds) approaches (1-LS, SA, RTS and STTS) reach the best convergence rates, and, among them, the 1-LS-MuD provides the best trade-off between implementation complexity and bit error rate (BER) performance.
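The 1-LS (unitary-Hamming-distance) detector singled out above admits a compact sketch: starting from a conventional decision, flip one bit at a time as long as a likelihood metric improves. The crosscorrelation matrix and matched-filter outputs below are toy placeholders for a small synchronous CDMA scenario.

```python
# 1-opt local search detector on a toy synchronous CDMA model.
import numpy as np

rng = np.random.default_rng(4)
K = 8                                                   # number of users
R = np.eye(K) + 0.2 * (np.ones((K, K)) - np.eye(K))     # toy crosscorrelation matrix
b_true = rng.choice([-1.0, 1.0], K)                     # transmitted bits
y = R @ b_true + 0.3 * rng.standard_normal(K)           # matched-filter bank outputs

def metric(b):
    """Log-likelihood metric to maximize (amplitudes absorbed into R and y)."""
    return 2.0 * b @ y - b @ R @ b

b = np.sign(y)                                          # conventional-detector start
improved = True
while improved:
    improved = False
    for k in range(K):                                  # all unitary-Hamming neighbours
        b_try = b.copy()
        b_try[k] = -b_try[k]
        if metric(b_try) > metric(b):
            b, improved = b_try, True

print(np.count_nonzero(b != b_true), "bit errors")
```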
Abstract:
The flowshop scheduling problem with in-process blocking is addressed in this paper. In this environment there are no buffers between successive machines; therefore, intermediate queues of jobs waiting in the system for their next operations are not allowed. Heuristic approaches are proposed to minimize the total tardiness criterion. A constructive heuristic that explores specific characteristics of the problem is presented. Moreover, a GRASP-based heuristic is proposed and coupled with a path relinking strategy to search for better outcomes. Computational tests are presented, and the comparisons made with an adaptation of the NEH algorithm and with a branch-and-bound algorithm indicate that the new approaches are promising. (c) 2007 Elsevier Ltd. All rights reserved.
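The sketch below illustrates two of the ingredients mentioned above under simplifying assumptions: the departure-time recursion that evaluates total tardiness in a flowshop with blocking, and a bare-bones GRASP loop (randomized due-date-based construction plus pairwise-swap local search). The instance is random, and the path relinking stage is omitted.

```python
# Blocking-flowshop total tardiness plus a minimal GRASP loop (toy instance).
import numpy as np

rng = np.random.default_rng(5)
n_jobs, n_mach = 8, 4
p = rng.integers(1, 10, (n_jobs, n_mach)).astype(float)   # processing times p[job, machine]
due = rng.integers(10, 60, n_jobs).astype(float)           # due dates

def total_tardiness(seq):
    prev = np.zeros(n_mach)                 # previous job's departure time per machine
    total = 0.0
    for j in seq:
        dep = np.zeros(n_mach)
        comp = prev[0] + p[j, 0]            # completion on the first machine
        for i in range(n_mach - 1):
            dep[i] = max(comp, prev[i + 1]) # blocked until the next machine is free
            comp = dep[i] + p[j, i + 1]
        dep[-1] = comp                      # no blocking after the last machine
        total += max(0.0, dep[-1] - due[j])
        prev = dep
    return total

best_seq, best_val = None, np.inf
for _ in range(50):                         # GRASP iterations
    remaining = list(np.argsort(due))       # EDD-ordered candidate list
    seq = []
    while remaining:                        # randomized greedy construction
        k = rng.integers(0, min(3, len(remaining)))
        seq.append(remaining.pop(k))
    improved = True
    while improved:                         # pairwise-swap local search
        improved = False
        for a in range(n_jobs):
            for b in range(a + 1, n_jobs):
                cand = seq.copy()
                cand[a], cand[b] = cand[b], cand[a]
                if total_tardiness(cand) < total_tardiness(seq):
                    seq, improved = cand, True
    val = total_tardiness(seq)
    if val < best_val:
        best_seq, best_val = seq, val

print(best_seq, best_val)
```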
Abstract:
We introduced a spectral clustering algorithm based on the bipartite graph model for the Manufacturing Cell Formation problem in [Oliveira S, Ribeiro JFF, Seok SC. A spectral clustering algorithm for manufacturing cell formation. Computers and Industrial Engineering. 2007 [submitted for publication]]. It constructs two similarity matrices, one for parts and one for machines, and executes a spectral clustering algorithm on each separately to find families of parts and cells of machines. The similarity measure in that approach used only limited information between parts and between machines. This paper reviews several well-known similarity measures which have been used for Group Technology. Computational clustering results are compared by various performance measures. (C) 2008 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved.
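For illustration, a minimal spectral bipartitioning of a similarity matrix via the Fiedler vector of the graph Laplacian is sketched below; the toy machine-machine matrix and the two-way split are placeholders for the similarity measures and the clustering procedure compared in the paper.

```python
# Minimal spectral bipartitioning via the Fiedler vector (toy similarity matrix).
import numpy as np

# Toy similarity matrix for 6 machines with two natural groups.
S = np.array([
    [0, 5, 4, 0, 0, 1],
    [5, 0, 5, 1, 0, 0],
    [4, 5, 0, 0, 1, 0],
    [0, 1, 0, 0, 6, 5],
    [0, 0, 1, 6, 0, 4],
    [1, 0, 0, 5, 4, 0],
], dtype=float)

D = np.diag(S.sum(axis=1))
L = D - S                               # unnormalized graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]                 # eigenvector of the second-smallest eigenvalue
cells = (fiedler > 0).astype(int)       # sign pattern gives a two-cell partition
print(cells)                            # e.g. [0 0 0 1 1 1] (labels may be swapped)
```

A k-way version would use the first k eigenvectors followed by k-means on their rows, applied separately to the part and machine similarity matrices as described above.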
Abstract:
Image reconstruction using the EIT (Electrical Impedance Tomography) technique is a nonlinear and ill-posed inverse problem which demands a powerful direct or iterative method. A typical approach for solving the problem is to minimize an error functional using an iterative method. In this case, an initial solution close enough to the global minimum is mandatory to ensure convergence to the correct minimum in an appropriate time interval. The aim of this paper is to present a new, simple, and low-cost technique (quadrant-searching) to reduce the search space and consequently to obtain an initial solution of the inverse problem of EIT. This technique calculates the error functional for four different contrast distributions, placing a large prospective inclusion in each of the four quadrants of the domain. Comparing the four values of the error functional, it is possible to draw conclusions about the internal electric contrast. For this purpose, we initially performed tests to assess the accuracy of the BEM (Boundary Element Method) when applied to the direct problem of EIT and to verify the behavior of the error functional surface in the search space. Finally, numerical tests were performed to verify the new technique.
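Schematically, the quadrant-searching technique can be sketched as follows: evaluate the error functional for four trial contrast maps, each placing a large inclusion in one quadrant, and keep the quadrant with the smallest residual as the starting point for the iterative reconstruction. The forward model below is a crude stand-in for the BEM solver used in the paper, and the contrast values are arbitrary.

```python
# Schematic quadrant-searching sketch (placeholder forward model, not BEM).
import numpy as np

n = 32                                                   # grid resolution of the trial maps
xx, yy = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))

def trial_map(qx, qy):
    """Background contrast 1.0 with a large inclusion (2.0) in quadrant (qx, qy)."""
    sigma = np.ones((n, n))
    sigma[((xx > 0) == qx) & ((yy > 0) == qy)] = 2.0
    return sigma

def forward(sigma):
    """Placeholder for the direct (forward) problem solved with BEM in the paper."""
    return np.array([sigma[:n // 2, :].mean(), sigma[n // 2:, :].mean(),
                     sigma[:, :n // 2].mean(), sigma[:, n // 2:].mean()])

measured = forward(trial_map(True, False))               # synthetic data: inclusion at (+x, -y)

errors = {}
for qx in (False, True):
    for qy in (False, True):
        residual = forward(trial_map(qx, qy)) - measured
        errors[(qx, qy)] = 0.5 * float(residual @ residual)   # error functional value

best = min(errors, key=errors.get)                       # quadrant used to seed the iterative method
print(best, errors)
```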
Abstract:
The main focus of this essay is the first American round-the-world scientific voyage, the U.S. Exploring Expedition, which took place between 1838 and 1841 and was led by Lieutenant Charles Wilkes. Here, I discuss the purposes of this expedition in the context of the voyages of circumnavigation accomplished by the various European powers during the same period.
Abstract:
Chronic beryllium disease (CBD) is clinically similar to other granulomatous diseases such as sarcoidosis and is often misdiagnosed if a thorough occupational history is not taken. When appropriate, a beryllium lymphocyte proliferation test (BeLPT) needs to be performed. We aimed to search for CBD among currently diagnosed pulmonary sarcoidosis patients and to identify the occupations and exposures in Ontario leading to CBD. Questionnaire items included work history and details of possible exposure to beryllium. Participants who reported previous work with metals underwent BeLPTs and an ELISPOT on the basis of having a higher pretest probability of CBD. Among 121 sarcoidosis patients enrolled, 87 (72%) reported no known previous metal dust or fume exposure, while 34 (28%) had metal exposure, including 17 (14%) with beryllium exposure at work or at home. However, none of the 34 who underwent testing had positive test results. Self-reported exposure to beryllium or metals was relatively common in these patients with clinical sarcoidosis, but CBD was not confirmed using blood assays in this population.
Abstract:
Background: Although various techniques have been used for breast conservation surgery reconstruction, there are few studies describing a logical approach to reconstruction of these defects. The objectives of this study were to establish a classification system for partial breast defects and to develop a reconstructive algorithm. Methods: The authors reviewed a 7-year experience with 209 immediate breast conservation surgery reconstructions. Mean follow-up was 31 months. Type I defects involve tissue resection in smaller breasts (bra size A/B) and comprise type IA, minimal defects that do not cause distortion; type IB, moderate defects that cause moderate distortion; and type IC, large defects that cause significant deformities. Type II includes tissue resection in medium-sized breasts with or without ptosis (bra size C), and type III includes tissue resection in large breasts with ptosis (bra size D). Results: Eighteen percent of patients presented type I defects, for which a lateral thoracodorsal flap or a latissimus dorsi flap was performed in 68 percent. Forty-five percent presented type II defects, for which bilateral mastopexy was performed in 52 percent. Thirty-seven percent of patients presented type III defects, for which bilateral reduction mammaplasty was performed in 67 percent. Thirty-five percent of patients presented complications, most of which were minor. Conclusions: An algorithm based on breast size in relation to tumor location and extent of resection can be followed to determine the best approach to reconstruction. The authors' results demonstrate that complication rates were similar to those in other clinical series. Success depends on patient selection, coordinated planning with the oncologic surgeon, and careful intraoperative management.