971 results for Meta heuristic algorithm


Relevance:

20.00%

Publisher:

Abstract:

Higher order (2,4) FDTD schemes used for numerical solutions of Maxwell's equations are focused on diminishing the truncation errors caused by the Taylor series expansion of the spatial derivatives. These schemes use a larger computational stencil, which generally makes use of two constant coefficients, C-1 and C-2, for the four-point central-difference operators. In this paper we propose a novel way to diminish these truncation errors in order to obtain more accurate numerical solutions of Maxwell's equations. For this purpose, we present a method to individually optimize the pair of coefficients, C-1 and C-2, based on any desired grid resolution and time-step size. In particular, we are interested in using coarser grid discretizations to be able to simulate electrically large domains. The results of our optimization algorithm show a significant reduction in dispersion error and numerical anisotropy for all modeled grid resolutions. Numerical simulations of free-space propagation verify the very promising theoretical results. The model is also shown to perform well in more complex, realistic scenarios.
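The coefficient-optimization idea can be illustrated numerically. Applied to a plane wave, the four-point central-difference operator produces an effective wavenumber, and the dispersion error at one chosen grid resolution can be minimized over the coefficient pair under the consistency constraint C1 + 3*C2 = 1. This is only a one-dimensional sketch of the idea, not the paper's actual optimization; the target resolution and the search range for C2 are illustrative assumptions.

```python
import math

def keff(kh, c1, c2):
    # Effective wavenumber (times h) that the four-point central-difference
    # operator produces when applied to a plane wave exp(j*k*x).
    return 2 * c1 * math.sin(kh / 2) + 2 * c2 * math.sin(3 * kh / 2)

# Taylor-derived (2,4) coefficients (minimize truncation error as h -> 0).
c1_t, c2_t = 9 / 8, -1 / 24

# Instead, pick coefficients that minimize the dispersion error at a coarse
# target resolution (here ~8 cells per wavelength), keeping the first-order
# consistency constraint c1 + 3*c2 = 1.
kh_target = 2 * math.pi / 8
candidates = [i * 1e-5 - 0.1 for i in range(20000)]
err_opt, c1_o, c2_o = min(
    (abs(keff(kh_target, 1 - 3 * c2, c2) - kh_target), 1 - 3 * c2, c2)
    for c2 in candidates
)
err_taylor = abs(keff(kh_target, c1_t, c2_t) - kh_target)
print(c2_o, err_opt, err_taylor)
```

At this coarse resolution the resolution-matched pair beats the Taylor pair, which is the effect the abstract reports.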


Starting from the Durbin algorithm in polynomial space with an inner product defined by the signal autocorrelation matrix, an isometric transformation is defined that maps this vector space into another one where the Levinson algorithm is performed. Alternatively, for iterative algorithms such as discrete all-pole (DAP), an efficient implementation of a Gohberg-Semencul (GS) relation is developed for the inversion of the autocorrelation matrix which considers its centrosymmetry. In the solution of the autocorrelation equations, the Levinson algorithm is found to be less complex operationally than the procedures based on GS inversion for up to a minimum of five iterations at various linear prediction (LP) orders.
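The Levinson recursion referred to above can be sketched in a few lines. This is the textbook real-valued Levinson-Durbin solver for the linear-prediction normal equations, not the isometric-transformation variant derived in the paper, and the autocorrelation values are illustrative.

```python
def levinson_durbin(r, order):
    # Solve the linear-prediction normal equations for a Toeplitz
    # autocorrelation sequence r[0..order] in O(order^2) operations.
    a = [1.0] + [0.0] * order      # prediction polynomial A(z)
    err = r[0]                     # prediction error power
    for m in range(1, order + 1):
        acc = r[m] + sum(a[k] * r[m - k] for k in range(1, m))
        k_m = -acc / err           # reflection (PARCOR) coefficient
        a_new = a[:]
        for k in range(1, m):
            a_new[k] = a[k] + k_m * a[m - k]
        a_new[m] = k_m
        a = a_new
        err *= 1.0 - k_m * k_m     # prediction error update
    return a, err

r = [4.0, 2.0, 1.0, 0.5]           # illustrative autocorrelation values
a, err = levinson_durbin(r, 3)
# The solution must satisfy the normal equations: for m = 1..3,
# sum_k a[k] * r[|m - k|] = 0.
residuals = [sum(a[k] * r[abs(m - k)] for k in range(4)) for m in (1, 2, 3)]
print(a, err, residuals)
```

The O(order^2) operation count of this recursion is the baseline against which the GS-inversion procedures are compared.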


In this paper the continuous Verhulst dynamic model is used to synthesize a new distributed power control algorithm (DPCA) for use in direct sequence code division multiple access (DS-CDMA) systems. The Verhulst model was initially designed to describe the population growth of biological species under food and physical space restrictions. The discretization of the corresponding differential equation is accomplished via the Euler numeric integration (ENI) method. Analytical convergence conditions for the proposed DPCA are also established. Several properties of the proposed recursive algorithm, such as Euclidean distance from the optimum vector after convergence, convergence speed, normalized mean squared error (NSE), average power consumption per user, performance under dynamic channels, and implementation complexity aspects, are analyzed through simulations. The simulation results are compared with two other DPCAs: the classic algorithm derived by Foschini and Miljanic and the sigmoidal algorithm of Uykan and Koivo. Under estimation error conditions, the proposed DPCA exhibits a smaller discrepancy from the optimum power vector solution and better convergence (under both fixed and adaptive convergence factors) than the classic and sigmoidal DPCAs. (C) 2010 Elsevier GmbH. All rights reserved.
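The discretization step can be sketched as follows. Applying Euler integration to a Verhulst-type (logistic) equation whose equilibrium is the power meeting a target SINR gives an update of the form p[n+1] = p[n] + alpha*p[n]*(1 - gamma[n]/gamma*). This is a plausible reading of the construction, not the paper's exact recursion, and the link-gain matrix, noise power, target SINR and step size are illustrative assumptions.

```python
def sinr(p, g, noise):
    # SINR of each user: g[i][i]*p[i] / (interference from others + noise).
    n = len(p)
    return [
        g[i][i] * p[i]
        / (sum(g[i][j] * p[j] for j in range(n) if j != i) + noise)
        for i in range(n)
    ]

g = [[1.0, 0.1], [0.1, 1.0]]   # illustrative link-gain matrix
noise = 0.01
gamma_t = 5.0                  # target SINR
alpha = 0.1                    # Euler step / convergence factor

p = [0.01, 0.01]               # initial transmit powers
for _ in range(2000):
    s = sinr(p, g, noise)
    # Euler-discretized logistic-style update: power grows while the user
    # is below target and shrinks above it; fixed point at s[i] == gamma_t.
    p = [p[i] * (1 + alpha * (1 - s[i] / gamma_t)) for i in range(2)]

final_sinr = sinr(p, g, noise)
print(p, final_sinr)
```

For this feasible two-user instance the iteration settles on the minimal power vector that meets the target, which is the fixed point the convergence analysis is about.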


The main goal of this paper is to apply the so-called policy iteration algorithm (PIA) to the long run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space and with compact action space depending on the state variable. In order to do that, we first derive some important properties for a pseudo-Poisson equation associated to the problem. It is then shown that the convergence of the PIA to a solution satisfying the optimality equation holds under some classical hypotheses, and that this optimal solution yields an optimal control strategy for the average control problem for the continuous-time PDMP in a feedback form.
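For intuition, the policy iteration loop (evaluate the current policy, then improve it greedily, repeat until the policy stabilizes) can be sketched on a finite-state analogue. The paper's setting, continuous-time PDMPs under long-run average cost, is far more general; the two-state discounted MDP below is purely an illustrative assumption.

```python
# Two-state, two-action discounted MDP (illustrative numbers).
P = {  # P[s][a] = transition probabilities to states 0 and 1
    0: {0: [0.9, 0.1], 1: [0.2, 0.8]},
    1: {0: [0.5, 0.5], 1: [0.1, 0.9]},
}
C = {0: {0: 1.0, 1: 0.5}, 1: {0: 2.0, 1: 0.3}}  # C[s][a] = stage cost
beta = 0.9                                       # discount factor
states, actions = (0, 1), (0, 1)

def evaluate(policy, sweeps=2000):
    # Policy evaluation: iterate V <- c_pi + beta * P_pi * V to convergence.
    V = [0.0, 0.0]
    for _ in range(sweeps):
        V = [C[s][policy[s]]
             + beta * sum(P[s][policy[s]][t] * V[t] for t in states)
             for s in states]
    return V

policy = {0: 0, 1: 0}
while True:
    V = evaluate(policy)
    # Policy improvement: act greedily with respect to the evaluated V.
    improved = {
        s: min(actions,
               key=lambda a: C[s][a] + beta * sum(P[s][a][t] * V[t]
                                                  for t in states))
        for s in states
    }
    if improved == policy:   # stable policy satisfies the optimality equation
        break
    policy = improved

print(policy, V)
```

Termination with a policy whose value satisfies the Bellman optimality equation is exactly the convergence property the paper establishes in its (much harder) setting.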


An algorithm inspired by ant behavior is developed in order to find the topology of an electric energy distribution network with minimum power loss. The algorithm's performance is investigated in hypothetical and actual circuits. When applied to an actual distribution system in a region of the State of São Paulo (Brazil), the solution found by the algorithm presents lower loss than the topology built by the concessionary company.
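The ant mechanics involved (probabilistic solution construction biased by pheromone, pheromone evaporation, and deposits proportional to solution quality) can be sketched on a toy graph. The paper's problem is a loss-minimizing network topology; the illustrative shortest-path instance below only demonstrates the ant-colony loop, with made-up edges and parameters.

```python
import random

random.seed(0)

# Toy weighted graph (illustrative): find a cheap path from 'A' to 'D'.
edges = {('A', 'B'): 1, ('B', 'D'): 1, ('A', 'C'): 2, ('C', 'D'): 2, ('A', 'D'): 4}
graph = {}
for (u, v), w in edges.items():
    graph.setdefault(u, {})[v] = w
    graph.setdefault(v, {})[u] = w

tau = {e: 1.0 for e in edges}              # pheromone per edge
def edge(u, v):
    return (u, v) if (u, v) in edges else (v, u)

best_path, best_cost = None, float('inf')
for _ in range(30):                        # colony iterations
    tours = []
    for _ in range(20):                    # ants per iteration
        node, path = 'A', ['A']
        while node != 'D':
            choices = [n for n in graph[node] if n not in path]
            if not choices:
                break
            # Transition probability ~ pheromone * heuristic (1/weight).
            weights = [tau[edge(node, n)] / graph[node][n] for n in choices]
            node = random.choices(choices, weights=weights)[0]
            path.append(node)
        if node == 'D':
            cost = sum(graph[path[i]][path[i + 1]] for i in range(len(path) - 1))
            tours.append((cost, path))
            if cost < best_cost:
                best_cost, best_path = cost, path
    for e in tau:                          # evaporation
        tau[e] *= 0.9
    for cost, path in tours:               # deposit, stronger on short tours
        for i in range(len(path) - 1):
            tau[edge(path[i], path[i + 1])] += 1.0 / cost

print(best_path, best_cost)
```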


The most popular algorithms for blind equalization are the constant-modulus algorithm (CMA) and the Shalvi-Weinstein algorithm (SWA). It is well known that SWA presents a higher convergence rate than CMA, at the expense of higher computational complexity. If the forgetting factor is not sufficiently close to one, if the initialization is distant from the optimal solution, or if the signal-to-noise ratio is low, SWA can converge to undesirable local minima or even diverge. In this paper, we show that divergence can be caused by an inconsistency in the nonlinear estimate of the transmitted signal, or (when the algorithm is implemented in finite precision) by the loss of positiveness of the estimate of the autocorrelation matrix, or by a combination of both. In order to avoid the first cause of divergence, we propose a dual-mode SWA. In the first mode of operation, the new algorithm works as SWA; in the second mode, it rejects inconsistent estimates of the transmitted signal. Assuming the persistence of excitation condition, we present a deterministic stability analysis of the new algorithm. To avoid the second cause of divergence, we propose a dual-mode lattice SWA, which is stable even in finite-precision arithmetic, and has a computational complexity that increases linearly with the number of adjustable equalizer coefficients. The good performance of the proposed algorithms is confirmed through numerical simulations.
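As a concrete reference point, the CMA mentioned above is just a stochastic-gradient descent on the dispersion of the equalizer output around a constant modulus (the SWA recursions and the dual-mode safeguards are beyond a short sketch). The channel, equalizer length and step size below are illustrative assumptions, for real BPSK symbols where the modulus constant R2 is 1.

```python
import random

random.seed(1)

h = [1.0, 0.4]            # illustrative channel impulse response
L = 5                     # equalizer length
mu = 0.005                # step size
R2 = 1.0                  # constant modulus for BPSK (+/-1) symbols

def channel(symbols):
    # Pass the symbol stream through the FIR channel h.
    out, mem = [], [0.0] * len(h)
    for s in symbols:
        mem = [s] + mem[:-1]
        out.append(sum(hi * mi for hi, mi in zip(h, mem)))
    return out

def dispersion(w, symbols):
    # Average (y^2 - R2)^2 over equalizer outputs y: the CMA cost.
    total, x = 0.0, [0.0] * L
    for r in channel(symbols):
        x = [r] + x[:-1]
        y = sum(wi * xi for wi, xi in zip(w, x))
        total += (y * y - R2) ** 2
    return total / len(symbols)

train = [random.choice((-1.0, 1.0)) for _ in range(20000)]
w = [0.0, 1.0, 0.0, 0.0, 0.0]          # center-spike initialization
J_before = dispersion(w, train)

x = [0.0] * L
for r in channel(train):
    x = [r] + x[:-1]
    y = sum(wi * xi for wi, xi in zip(w, x))
    e = y * (y * y - R2)               # CMA error term
    w = [wi - mu * e * xi for wi, xi in zip(w, x)]  # gradient step

J_after = dispersion(w, train)
print(J_before, J_after)
```

The drop in dispersion after adaptation is the blind-equalization effect both CMA and SWA exploit; SWA reaches it in fewer samples at higher per-sample cost.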


This paper presents the design and implementation of an embedded soft sensor, i.e., a generic and autonomous hardware module, which can be applied to many complex plants wherein a certain variable cannot be directly measured. It is implemented based on a fuzzy identification algorithm called "Limited Rules", employed to model continuous nonlinear processes. The fuzzy model has a Takagi-Sugeno-Kang structure and the premise parameters are defined based on the Fuzzy C-Means (FCM) clustering algorithm. The firmware contains the soft sensor and runs online, estimating the target variable from the other available variables. Tests have been performed using a simulated pH neutralization plant. The results of the embedded soft sensor have been considered satisfactory. A complete embedded inferential control system is also presented, including a soft sensor and a PID controller. (c) 2007, ISA. Published by Elsevier Ltd. All rights reserved.
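The premise-parameter step, Fuzzy C-Means clustering, is standard and can be sketched directly: memberships and cluster centers are updated in alternation until the centers settle. The one-dimensional data and the fuzzifier m = 2 below are illustrative assumptions, not the plant data used in the paper.

```python
def fcm(data, centers, m=2.0, iters=100, eps=1e-9):
    # Fuzzy C-Means: alternate membership and center updates.
    for _ in range(iters):
        # Membership of point x in cluster i: inverse-distance ratio rule.
        U = []
        for x in data:
            d = [abs(x - c) + eps for c in centers]
            U.append([
                1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                          for j in range(len(centers)))
                for i in range(len(centers))
            ])
        # Centers: membership-weighted means (weights raised to power m).
        centers = [
            sum(U[k][i] ** m * data[k] for k in range(len(data)))
            / sum(U[k][i] ** m for k in range(len(data)))
            for i in range(len(centers))
        ]
    return centers, U

data = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]   # two obvious clusters
centers, U = fcm(data, centers=[1.0, 9.0])
print(sorted(centers))
```

In the Takagi-Sugeno-Kang model, each resulting center anchors the membership function of one rule premise.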


This paper addresses the single machine scheduling problem with a common due date, aiming to minimize earliness and tardiness penalties. Due to its complexity, most previous studies in the literature deal with this problem using heuristic and metaheuristic approaches. With the intention of contributing to the study of this problem, a branch-and-bound algorithm is proposed. Lower bounds and pruning rules that exploit properties of the problem are introduced. The proposed approach is examined through a computational comparative study of 280 problems involving different due date scenarios. In addition, the values of optimal solutions for small problems from a known benchmark are provided.
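The branch-and-bound skeleton can be sketched on a tiny instance: branch on which job occupies the next position, and bound by the exact cost of the fixed prefix, which can only grow as jobs are appended. The processing times, common due date and unit penalties below are illustrative assumptions, and the paper's lower bounds are much sharper than the trivial prefix bound used here.

```python
from itertools import permutations

p = [3, 1, 4, 2, 5]   # illustrative processing times
d = 8                  # common due date

def cost(seq):
    # Cost of a sequence started at time 0: sum of earliness + tardiness
    # (unit penalties), i.e. sum of |d - C_j| over completion times C_j.
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += abs(d - t)
    return total

best = float('inf')
def branch(prefix, remaining, prefix_cost):
    global best
    if not remaining:
        best = min(best, prefix_cost)
        return
    if prefix_cost >= best:      # prune: the prefix cost is exact and the
        return                   # remaining jobs can only add cost
    t = sum(p[j] for j in prefix)
    for j in sorted(remaining):  # branch on the job in the next position
        branch(prefix + [j], remaining - {j},
               prefix_cost + abs(d - (t + p[j])))

branch([], set(range(5)), 0)
brute = min(cost(s) for s in permutations(range(5)))
print(best, brute)
```

Checking the pruned search against full enumeration, as done here, mirrors the paper's validation against known optimal values on small benchmarks.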


This paper reviews the attitudes, skills and knowledge that engineering innovators should possess. It critically analyses and compares sets of graduate attributes from the USA, Australia and Malaysia in terms of which of these relate to the ability to innovate. Innovation can be described as an integrative meta-attribute that overarches most of the other graduate attributes. Due to the "graduate attribute paradox", it is shown how graduates' meeting the attributes stated by industry does not necessarily satisfy the requirements of industry. It is argued that the culture of the engineering school is an important influence on fostering innovation in engineers.


Recently, Adams and Bischof (1994) proposed a novel region growing algorithm for segmenting intensity images. The inputs to the algorithm are the intensity image and a set of seeds - individual points or connected components - that identify the individual regions to be segmented. The algorithm grows these seed regions until all of the image pixels have been assimilated. Unfortunately, the algorithm is inherently dependent on the order of pixel processing. This means, for example, that raster order processing and anti-raster order processing do not, in general, lead to the same tessellation. In this paper we propose an improved seeded region growing algorithm that retains the advantages of the Adams and Bischof algorithm - fast execution, robust segmentation, and no tuning parameters - but is pixel order independent. (C) 1997 Elsevier Science B.V.
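A priority-queue formulation captures the spirit of the fix: growth order is driven by intensity differences to the region means rather than by raster order. This is a simplified sketch, not the paper's algorithm (ties here are still broken by insertion order), and the tiny image and seed positions are illustrative assumptions.

```python
import heapq

img = [
    [1, 1, 9, 9],
    [1, 1, 9, 9],
    [1, 2, 8, 9],
    [1, 1, 9, 9],
]
H, W = len(img), len(img[0])
seeds = {(0, 0): 1, (0, 3): 2}        # pixel -> region label

labels = dict(seeds)
stats = {1: [0, 0], 2: [0, 0]}        # label -> [intensity sum, pixel count]
for (r, c), lab in seeds.items():
    stats[lab][0] += img[r][c]
    stats[lab][1] += 1

heap, tick = [], 0
def push_neighbors(r, c, lab):
    global tick
    mean = stats[lab][0] / stats[lab][1]
    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
        if 0 <= nr < H and 0 <= nc < W and (nr, nc) not in labels:
            # Priority: similarity of the candidate pixel to the region mean.
            heapq.heappush(heap, (abs(img[nr][nc] - mean), tick, nr, nc, lab))
            tick += 1

for (r, c), lab in seeds.items():
    push_neighbors(r, c, lab)

while heap:
    _, _, r, c, lab = heapq.heappop(heap)
    if (r, c) in labels:
        continue                       # already claimed by some region
    labels[(r, c)] = lab
    stats[lab][0] += img[r][c]
    stats[lab][1] += 1
    push_neighbors(r, c, lab)

print(labels)
```

Because assimilation order depends only on the priorities, scanning the image in raster or anti-raster order would produce the same tessellation here.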


This systematic review aimed to collate randomized controlled trials (RCTs) of various interventions used to treat tardive dyskinesia (TD) and, where appropriate, to combine the data for meta-analysis. Clinical trials were identified by electronic searches, hand searches and contact with principal investigators. Data were extracted independently by two reviewers, for outcomes related to improvement, deterioration, side-effects and drop-out rates. Data were pooled using the Mantel-Haenszel odds ratio (fixed effect model). For treatments that had significant effects, the number needed to treat (NNT) was calculated. From 296 controlled clinical trials, data were extracted from 47 trials. For most interventions, we could identify no RCT-derived evidence of efficacy. A meta-analysis showed that baclofen, deanol and diazepam were no more effective than a placebo. Single RCTs demonstrated a lack of evidence of any effect for bromocriptine, ceruletide, clonidine, estrogen, gamma linolenic acid, hydergine, lecithin, lithium, progabide, selegiline and tetrahydroisoxazolopyridinol. The meta-analysis found that five interventions were effective: L-dopa, oxypertine, sodium valproate, tiapride and vitamin E; neuroleptic reduction was marginally significant. Data from single RCTs revealed that insulin, alpha methyl dopa and reserpine were more effective than a placebo. There was a significantly increased risk of adverse events associated with baclofen, deanol, L-dopa, oxypertine and reserpine. Meta-analysis of the impact of placebo (n=485) showed that 37.3% of participants showed an improvement. Interpretation of this systematic review requires caution, as the individual trials identified tended to have small sample sizes. For many compounds, data from only one trial were available, and where meta-analyses were possible, these were based on a small number of trials.
Despite these concerns, the review facilitated the interpretation of the large and diverse range of treatments used for TD. Clinical recommendations for the treatment of TD are made, based on the availability of RCT-derived evidence, the strength of that evidence and the presence of adverse effects. (C) 1999 Elsevier Science B.V. All rights reserved.
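The two summary statistics used throughout the review can be reproduced in a few lines: the Mantel-Haenszel fixed-effect pooled odds ratio, and the number needed to treat as the reciprocal of the absolute risk reduction. The two 2x2 trial tables below are hypothetical counts for illustration, not data from the review.

```python
import math

# Each trial: (improved on drug, not improved on drug,
#              improved on placebo, not improved on placebo)
trials = [(15, 5, 8, 12), (10, 10, 5, 15)]   # hypothetical counts

# Mantel-Haenszel fixed-effect pooled odds ratio:
# sum(a*d/N) / sum(b*c/N) over trials, N = trial size.
num = sum(a * d / (a + b + c + d) for a, b, c, d in trials)
den = sum(b * c / (a + b + c + d) for a, b, c, d in trials)
or_mh = num / den

# Number needed to treat: 1 / absolute risk reduction, rounded up.
improved_drug = sum(a for a, b, c, d in trials)
n_drug = sum(a + b for a, b, c, d in trials)
improved_placebo = sum(c for a, b, c, d in trials)
n_placebo = sum(c + d for a, b, c, d in trials)
arr = improved_drug / n_drug - improved_placebo / n_placebo
nnt = math.ceil(1 / arr)
print(or_mh, nnt)
```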


Motivation: Prediction methods for identifying binding peptides could minimize the number of peptides required to be synthesized and assayed, and thereby facilitate the identification of potential T-cell epitopes. We developed a bioinformatic method for the prediction of peptide binding to MHC class II molecules. Results: Experimental binding data and expert knowledge of anchor positions and binding motifs were combined with an evolutionary algorithm (EA) and an artificial neural network (ANN): binding data extraction --> peptide alignment --> ANN training and classification. This method, termed PERUN, was implemented for the prediction of peptides that bind to HLA-DR4(B1*0401). The respective positive predictive values of PERUN predictions of high-, moderate-, low- and zero-affinity binders were assessed as 0.8, 0.7, 0.5 and 0.8 by cross-validation, and 1.0, 0.8, 0.3 and 0.7 by experimental binding. This illustrates the synergy between experimentation and computer modeling, and its application to the identification of potential immunotherapeutic peptides.
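The quoted figures are positive predictive values: for each predicted affinity class, the fraction of predictions that were correct. A minimal sketch with hypothetical (predicted, observed) pairs:

```python
from collections import Counter

# Hypothetical (predicted_class, observed_class) pairs from a validation set.
pairs = [
    ("high", "high"), ("high", "high"), ("high", "moderate"),
    ("moderate", "moderate"), ("moderate", "low"),
    ("low", "low"), ("low", "zero"),
    ("zero", "zero"), ("zero", "zero"), ("zero", "low"),
]

predicted = Counter(p for p, o in pairs)            # predictions per class
correct = Counter(p for p, o in pairs if p == o)    # correct per class
ppv = {cls: correct[cls] / predicted[cls] for cls in predicted}
print(ppv)
```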


BACKGROUND. Management of patients with ductal carcinoma in situ (DCIS) is a dilemma, as mastectomy provides nearly a 100% cure rate but at the expense of physical and psychologic morbidity. It would be helpful if we could predict which patients with DCIS are at sufficiently high risk of local recurrence after conservative surgery (CS) alone to warrant postoperative radiotherapy (RT), and which patients are at sufficient risk of local recurrence after CS + RT to warrant mastectomy. The authors reviewed the published studies and identified the factors that may be predictive of local recurrence after management by mastectomy, CS alone, or CS + RT. METHODS. The authors examined patient, tumor, and treatment factors as potential predictors of local recurrence and estimated the risks of recurrence based on a review of published studies. They examined the effects of patient factors (age at diagnosis and family history), tumor factors (subtype of DCIS, grade, tumor size, necrosis, and margins), and treatment (mastectomy, CS alone, and CS + RT). The 95% confidence intervals (CI) of the recurrence rates for each of the studies were calculated for subtype, grade, and necrosis, using the exact binomial; the summary recurrence rate and 95% CI for each treatment category were calculated by quantitative meta-analysis using the fixed and random effects models applied to proportions. RESULTS. Meta-analysis yielded a summary recurrence rate of 22.5% (95% CI = 16.9-28.2) for studies employing CS alone, 8.9% (95% CI = 6.8-11.0) for CS + RT, and 1.4% (95% CI = 0.7-2.1) for studies involving mastectomy alone. These summary figures indicate a clear and statistically significant separation between the recurrence rates, and therefore outcomes, of each treatment category, despite the likelihood that the patients who underwent CS alone were likely to have had smaller, possibly low grade lesions with clear margins.
The patients with risk factors of presence of necrosis, high grade cytologic features, or comedo subtype were found to derive the greatest improvement in local control with the addition of RT to CS. Local recurrence among patients treated by CS alone is approximately 20%, and one-half of the recurrences are invasive cancers. For most patients, RT reduces the risk of recurrence after CS alone by at least 50%. The differences in local recurrence between CS alone and CS + RT are most apparent for those patients with high grade tumors, or DCIS with necrosis, or of the comedo subtype, or DCIS with close or positive surgical margins. CONCLUSIONS. The authors recommend that radiation be added to CS if patients with DCIS who also have the risk factors for local recurrence choose breast conservation over mastectomy. The patients who may be suitable for CS alone outside of a clinical trial may be those who have low grade lesions with little or no necrosis, and with clear surgical margins. Use of the summary statistics when discussing outcomes with patients may help the patient make treatment decisions. Cancer 1999;85:616-28. (C) 1999 American Cancer Society.
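The quantitative pooling step, a fixed-effect (inverse-variance) meta-analysis of recurrence proportions, can be sketched as follows with a normal-approximation confidence interval. The study counts are illustrative assumptions, and the paper additionally used exact binomial intervals and a random-effects model.

```python
import math

# Hypothetical studies: (recurrences, patients) for one treatment category.
studies = [(20, 100), (25, 100)]

weights, weighted = [], []
for events, n in studies:
    p = events / n
    var = p * (1 - p) / n            # binomial variance of the proportion
    weights.append(1 / var)          # fixed-effect weight = inverse variance
    weighted.append(p / var)

pooled = sum(weighted) / sum(weights)
se = math.sqrt(1 / sum(weights))     # standard error of the pooled estimate
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
print(pooled, ci)
```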


We tested the effects of four data characteristics on the results of reserve selection algorithms. The data characteristics were nestedness of features (land types in this case), rarity of features, size variation of sites (potential reserves) and size of data sets (numbers of sites and features). We manipulated data sets to produce three levels, with replication, of each of these data characteristics while holding the other three characteristics constant. We then used an optimizing algorithm and three heuristic algorithms to select sites to solve several reservation problems. We measured efficiency as the number or total area of selected sites, indicating the relative cost of a reserve system. Higher nestedness increased the efficiency of all algorithms (reduced the total cost of new reserves). Higher rarity reduced the efficiency of all algorithms (increased the total cost of new reserves). More variation in site size increased the efficiency of all algorithms expressed in terms of total area of selected sites. We measured the suboptimality of heuristic algorithms as the percentage increase of their results over optimal (minimum possible) results. Suboptimality is a measure of the reliability of heuristics as indicative costing analyses. Higher rarity reduced the suboptimality of heuristics (increased their reliability) and there is some evidence that more size variation did the same for the total area of selected sites. We discuss the implications of these results for the use of reserve selection algorithms as indicative and real-world planning tools.
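The heuristic-vs-optimal comparison at the heart of the experiment can be sketched on a toy instance: a greedy "richness" heuristic (repeatedly pick the site adding the most unrepresented features) against the brute-force optimum, with suboptimality as the percentage increase over the optimal number of sites. The sites and features below are illustrative assumptions, not the paper's data sets.

```python
from itertools import combinations

# Illustrative sites and the land types (features) each contains.
sites = {
    "A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}, "D": {1, 5}, "E": {2},
}
features = set().union(*sites.values())

# Greedy richness heuristic: pick the site covering most missing features.
chosen, missing = [], set(features)
while missing:
    site = max(sorted(sites), key=lambda s: len(sites[s] & missing))
    chosen.append(site)
    missing -= sites[site]

# Brute-force optimum: smallest subset of sites representing every feature.
opt = next(
    len(combo)
    for k in range(1, len(sites) + 1)
    for combo in combinations(sorted(sites), k)
    if set().union(*(sites[s] for s in combo)) == features
)

# Suboptimality: percentage increase of the heuristic over the optimum.
subopt_pct = 100 * (len(chosen) - opt) / opt
print(chosen, opt, subopt_pct)
```

On this nested, low-rarity instance the greedy heuristic matches the optimum, consistent with the finding that nestedness helps and rarity hurts heuristic efficiency.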