104 results for Cluster Counting Algorithm


Relevance: 20.00%
Abstract:

A methodology for the identification and characterization of coherent structures, commonly known as clusters, is applied to hydrodynamic results of a numerical simulation of the riser of a circulating fluidized bed. The numerical simulation is performed with the MICEFLOW code, which includes IIT's two-fluid hydrodynamic model B. The methodology for cluster characterization is based on the determination of four characteristics: average lifetime, average solid volume fraction, existence time fraction, and frequency of occurrence. Clusters are identified by applying a criterion related to the time-averaged value of the solid volume fraction. A qualitative rather than quantitative analysis is performed, mainly owing to the unavailability of the operational data used in the reference experiments. In qualitative terms, the simulation results are in good agreement with the literature. Some quantitative comparisons between predictions and experiments are also presented to emphasize the capability of the modeling procedure for the analysis of macroscopic-scale coherent structures. (c) 2007 Elsevier Inc. All rights reserved.
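The abstract does not spell out the identification threshold; a common choice in the circulating fluidized bed literature is to flag a cluster whenever the local solids fraction exceeds its time average by a few standard deviations. A minimal sketch of that kind of criterion and of the four characteristics, with the threshold factor `n_sigma` as an assumption:

```python
import numpy as np

def cluster_statistics(eps_s, n_sigma=2.0):
    """Flag cluster occurrences in a time series of local solids fraction.

    eps_s   : 1-D array, solids volume fraction at a probe point over time
    n_sigma : hypothetical threshold factor (not specified in the abstract)
    """
    mean, std = eps_s.mean(), eps_s.std()
    in_cluster = eps_s > mean + n_sigma * std        # identification criterion
    # group consecutive flagged samples into individual cluster events
    edges = np.diff(in_cluster.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if in_cluster[0]:
        starts = np.r_[0, starts]
    if in_cluster[-1]:
        ends = np.r_[ends, len(eps_s)]
    durations = ends - starts
    return {
        "existence_time_fraction": in_cluster.mean(),
        "number_of_occurrences": len(starts),
        "average_lifetime_samples": durations.mean() if len(starts) else 0.0,
        "average_solid_fraction": eps_s[in_cluster].mean() if in_cluster.any() else 0.0,
    }
```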

Relevance: 20.00%
Abstract:

The general flowshop scheduling problem is a production problem in which a set of n jobs has to be processed with an identical flow pattern on m machines. In permutation flowshops the sequence of jobs is the same on all machines. A significant research effort has been devoted to sequencing jobs in a flowshop so as to minimize the makespan. This paper describes the application of a Constructive Genetic Algorithm (CGA) to makespan minimization in flowshop scheduling. The CGA was proposed recently as an alternative to traditional GA approaches, in particular for evaluating schemata directly. The population, initially formed only by schemata, evolves under recombination into a population of well-adapted structures (schema instantiation). The CGA implemented here is based on the classic NEH heuristic, and a local search heuristic is used to define the fitness functions. The parameters of the CGA are calibrated using a Design of Experiments (DOE) approach. The computational results are compared against other successful algorithms from the literature on Taillard's well-known standard benchmark. The computational experience shows that this CGA approach provides competitive results for flowshop scheduling problems. (C) 2007 Elsevier Ltd. All rights reserved.
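For reference, the makespan of a given permutation follows a simple completion-time recursion; a minimal sketch (the CGA itself, the NEH construction and the local search are not reproduced here):

```python
def makespan(perm, p):
    """Makespan of a permutation flowshop schedule.

    perm : job order, e.g. [2, 0, 1]
    p    : p[j][k] = processing time of job j on machine k
    C[k] holds the completion time of the current job on machine k.
    """
    m = len(p[0])
    C = [0] * m
    for j in perm:
        C[0] += p[j][0]
        for k in range(1, m):
            # a job starts on machine k only when the machine is free
            # and the job has finished on machine k-1
            C[k] = max(C[k], C[k - 1]) + p[j][k]
    return C[-1]

# example: 3 jobs, 2 machines
print(makespan([0, 1, 2], [[3, 2], [1, 4], [2, 2]]))
```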

Relevance: 20.00%
Abstract:

This study proposes a new model-based damage identification procedure using frequency-domain data and a hybrid particle swarm-simplex optimizer (PSOS). The objective function for the minimization problem is formulated from the Frequency Response Functions (FRFs) of the system. A novel strategy for controlling the Particle Swarm Optimization (PSO) parameters, based on the Nelder-Mead algorithm (simplex method), is presented; consequently, the convergence of the PSOS becomes independent of the heuristic constants, and its stability and reliability are enhanced. The resulting hybrid method performs better on several benchmark functions than Simulated Annealing (SA) and the basic PSO (PSO(b)). Two damage identification problems, taking into consideration the effects of noisy and incomplete data, were studied: first a 10-bar truss and then a cracked free-free beam, both modeled with finite elements. In these cases the damage location and extent were successfully determined. Finally, a nonlinear (Duffing) oscillator was identified by the PSOS with good results. (C) 2009 Elsevier Ltd. All rights reserved.
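The hybrid builds on the standard PSO step; a minimal sketch of that baseline update (the Nelder-Mead control of the PSO parameters described in the abstract is not reproduced here):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One standard PSO velocity/position update.

    The paper's PSOS additionally adapts w, c1 and c2 with a
    Nelder-Mead (simplex) strategy, which is omitted in this sketch.
    x, v   : (n_particles, n_dims) positions and velocities
    pbest  : per-particle best positions; gbest : swarm best position
    """
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```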

Relevance: 20.00%
Abstract:

This paper presents an Adaptive Maximum Entropy (AME) approach for modeling biological species. The Maximum Entropy algorithm (MaxEnt) is one of the most widely used methods for modeling the geographical distribution of biological species, and the approach presented here is an alternative to the classical algorithm. Instead of using the same set of features throughout training, the AME approach tries to insert or remove a single feature at each iteration. The aim is to reach convergence faster without affecting the performance of the generated models. Preliminary experiments showed gains in both accuracy and execution time. Comparisons with other algorithms are beyond the scope of this paper, and several research directions are proposed as future work.
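As an illustration only, a hypothetical greedy schedule that inserts or removes one feature per iteration might look like the sketch below; the abstract does not specify the actual acceptance criterion, so `evaluate` and the improvement test are assumptions:

```python
def adaptive_feature_loop(features, active, evaluate, max_iter=100, tol=1e-4):
    """Hypothetical adaptive feature schedule in the spirit of AME.

    features : full set of candidate features
    active   : current set of active features
    evaluate : assumed callable that trains/scores a MaxEnt model on a
               feature set (the paper's criterion is not reproduced here)
    """
    best = evaluate(active)
    for _ in range(max_iter):
        improved = False
        for f in features:
            trial = active ^ {f}          # toggle: insert if absent, remove if present
            score = evaluate(trial)
            if score > best + tol:
                active, best, improved = trial, score, True
                break                      # one feature change per iteration
        if not improved:
            break                          # converged
    return active, best
```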

Relevance: 20.00%
Abstract:

This paper presents a free software tool that supports next-generation mobile communications through the automatic generation of neural-network models of components and electronic devices. The tool enables the creation, training, validation and simulation of the model directly from measurements made on the devices of interest, through an interface designed for non-experts in neural modeling. The resulting model can be exported automatically to a traditional circuit simulator to test different scenarios.
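The abstract does not describe the network architecture or the data format, but the train-validate workflow it refers to can be illustrated with a generic regression sketch; the synthetic measurement data, model size and library choice below are all assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# hypothetical measurement table: bias voltage -> measured device current
V = np.linspace(0.0, 3.0, 200).reshape(-1, 1)
I = 1e-3 * np.tanh(2.0 * V).ravel() + 1e-5 * np.random.randn(200)

V_tr, V_val, I_tr, I_val = train_test_split(V, I, test_size=0.25, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(V_tr, I_tr)
print("validation R^2:", model.score(V_val, I_val))
# a tool like the one described could then translate the trained weights into
# a behavioral block usable by a traditional circuit simulator
```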

Relevance: 20.00%
Abstract:

This paper presents a new methodology to estimate unbalanced harmonic distortions in a power system, based on measurements at a limited number of given sites. The algorithm utilizes evolution strategies (ES), a branch of evolutionary algorithms. The proposed algorithm uses data from several power quality meters, which can be synchronized either by GPS devices or by using information from a fundamental-frequency load flow, which makes the overall power quality monitoring system much less costly. The ES-based harmonic estimation model is applied to a 14-bus network to compare its performance with a conventional Monte Carlo approach. It is also applied to a 50-bus subtransmission network in order to compare the three-phase and single-phase approaches, as well as to assess the robustness of the proposed method. (C) 2010 Elsevier B.V. All rights reserved.
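A minimal (mu + lambda) evolution strategy of the kind the abstract refers to is sketched below; the encoding of the harmonic injections and the measurement-mismatch objective are assumptions, not the paper's formulation:

```python
import numpy as np

def evolution_strategy(objective, x0, sigma=0.1, mu=10, lam=40, n_gen=200):
    """Minimal (mu + lambda) evolution strategy with Gaussian mutation.

    In the harmonic-estimation setting, x would encode the unknown harmonic
    injections and objective(x) the mismatch between calculated and measured
    quantities at the metered buses (both are assumptions of this sketch).
    """
    rng = np.random.default_rng(0)
    parents = x0 + sigma * rng.standard_normal((mu, x0.size))
    for _ in range(n_gen):
        idx = rng.integers(0, mu, lam)
        offspring = parents[idx] + sigma * rng.standard_normal((lam, x0.size))
        pool = np.vstack([parents, offspring])
        scores = np.apply_along_axis(objective, 1, pool)
        parents = pool[np.argsort(scores)[:mu]]      # (mu + lambda) selection
    return parents[0]
```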

Relevance: 20.00%
Abstract:

An improvement to quality two-dimensional Delaunay mesh generation that combines the refinement strategies of Ruppert and Shewchuk is proposed in this research. The technique uses the diametral lens criterion, introduced by L. P. Chew, to eliminate extremely obtuse triangles along the mesh boundary. The method splits boundary segments to obtain an initial pre-refinement, thereby reducing the number of iterations needed to generate a high-quality sequential triangulation. Moreover, it reduces the communication and synchronization between subdomains in parallel mesh refinement.
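The encroachment test behind this kind of boundary refinement can be written as an angle check; a minimal sketch, with the 120-degree lens angle taken as a common (assumed) choice:

```python
import numpy as np

def encroaches(p, a, b, lens_angle_deg=120.0):
    """Test whether point p encroaches on boundary subsegment (a, b).

    With lens_angle_deg = 90 this is the classical diametral-circle test used
    by Ruppert; Chew's diametral lens uses a wider angle (120 degrees is a
    common choice, taken here as an assumption). p encroaches when the angle
    it subtends over the segment exceeds the threshold, in which case the
    segment would be split (e.g. at its midpoint).
    """
    u = np.asarray(a, float) - np.asarray(p, float)
    v = np.asarray(b, float) - np.asarray(p, float)
    cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) > lens_angle_deg
```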

Relevance: 20.00%
Abstract:

Following the approach developed for rods in Part 1 of this paper (Pimenta et al. in Comput. Mech. 42:715-732, 2008), this work presents a fully conserving algorithm for the integration of the equations of motion in nonlinear shell dynamics. We begin with a re-parameterization of the rotation field in terms of the so-called Rodrigues rotation vector, which allows for an extremely simple update of the rotational variables within the scheme. The weak form is constructed via a non-orthogonal projection, the time-collocation of which ensures exact conservation of momentum and total energy in the absence of external forces. An appealing feature is that general hyperelastic materials (and not only materials with quadratic potentials) are permitted in a fully consistent way. Spatial discretization is performed using the finite element method, and the robust performance of the scheme is demonstrated by means of numerical examples.
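A common form of the Rodrigues parameterization (assuming the convention alpha = 2 tan(theta/2) e; the paper's exact conventions are not reproduced here) maps the rotation vector to a proper orthogonal tensor as in this sketch:

```python
import numpy as np

def rotation_from_rodrigues(alpha):
    """Rotation tensor from a Rodrigues rotation vector.

    Assumes the convention alpha = 2 * tan(theta/2) * e, for which
    Q = I + 4/(4 + |alpha|^2) * (A + A @ A / 2), with A = skew(alpha).
    This is a sketch of the parameterization only, not of the conserving
    time integrator itself.
    """
    a1, a2, a3 = alpha
    A = np.array([[0.0, -a3, a2],
                  [a3, 0.0, -a1],
                  [-a2, a1, 0.0]])
    return np.eye(3) + 4.0 / (4.0 + alpha @ alpha) * (A + A @ A / 2.0)

# quick check: the result is a proper orthogonal matrix
Q = rotation_from_rodrigues(np.array([0.3, -0.2, 0.5]))
print(np.allclose(Q @ Q.T, np.eye(3)), np.isclose(np.linalg.det(Q), 1.0))
```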

Relevance: 20.00%
Abstract:

A fully conserving algorithm is developed in this paper for the integration of the equations of motion in nonlinear rod dynamics. The starting point is a re-parameterization of the rotation field in terms of the so-called Rodrigues rotation vector, which results in an extremely simple update of the rotational variables. The weak form is constructed with a non-orthogonal projection corresponding to the application of the virtual power theorem. Together with an appropriate time-collocation, it ensures exact conservation of momentum and total energy in the absence of external forces. An appealing feature is that nonlinear hyperelastic materials (and not only materials with quadratic potentials) are permitted without compromising the conservation properties. Spatial discretization is performed via the finite element method, and the performance of the scheme is assessed by means of several numerical simulations.
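Part of what makes the rotational update simple is that, under the same assumed convention (alpha = 2 tan(theta/2) e), two Rodrigues vectors compose through a purely algebraic rule with no trigonometric functions; a sketch:

```python
import numpy as np

def compose_rodrigues(alpha_inc, alpha_old):
    """Rodrigues vector of the composed rotation Q(alpha_inc) @ Q(alpha_old).

    Valid for the convention alpha = 2 * tan(theta/2) * e; the formula is
    singular when the composed rotation reaches pi. Sketch only; the papers'
    conventions may differ.
    """
    dot = alpha_inc @ alpha_old
    return (alpha_old + alpha_inc + 0.5 * np.cross(alpha_inc, alpha_old)) / (1.0 - dot / 4.0)
```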

Relevance: 20.00%
Abstract:

In this work, the applicability of a new algorithm for the estimation of mechanical properties from instrumented indentation data was studied for thin films. The applicability was analyzed with the aid of both three-dimensional finite element simulations and experimental indentation tests. The numerical approach allowed studying the effect of the substrate on the estimation of the mechanical properties of the film, which was conducted based on the ratio h_max/l between the maximum indentation depth and the film thickness. For the experimental analysis, indentation tests were conducted on AISI H13 tool steel specimens, plasma nitrided and coated with TiN thin films. Results indicate that, for the conditions analyzed in this work, the elastic deformation of the substrate limited the extraction of mechanical properties of the film/substrate system. This limitation occurred even at low h_max/l ratios, especially for the estimation of the yield strength and the strain-hardening exponent. At indentation depths lower than 4% of the film thickness, the proposed algorithm estimated the mechanical properties of the film accurately. For hardness in particular, precise values were estimated at h_max/l below 0.1, i.e. 10% of the film thickness. (C) 2010 Published by Elsevier B.V.

Relevance: 20.00%
Abstract:

The Cluster Variation Method (CVM), introduced over 50 years ago by Prof. Dr. Ryoichi Kikuchi, is applied to the thermodynamic modeling of the BCC Cr-Fe system in the irregular tetrahedron approximation, using experimental thermochemical data as initial input for assessing the model parameters. The results are checked against independent data on the low-temperature miscibility gap, using increasingly accurate thermodynamic models, first by including the magnetic degrees of freedom of iron and then also those of chromium. It is shown that a reasonably accurate description of the phase diagram on the iron-rich side (i.e. the miscibility gap borders and the Curie line) is obtained, but only at the expense of the agreement with the above-mentioned thermochemical data. Reasons for these inconsistencies are discussed, especially with regard to the need to introduce vibrational degrees of freedom into the CVM model. (C) 2008 Elsevier Ltd. All rights reserved.

Relevance: 20.00%
Abstract:

The cost of a new ship design depends heavily on the principal dimensions of the ship; however, minimizing the dimensions often conflicts with minimizing oil outflow (in the event of an accidental spill). This study demonstrates a rational methodology for selecting the optimal dimensions and form coefficients of tankers using a genetic algorithm. A multi-objective optimization problem was formulated with two objective attributes for the evaluation of each design: total cost and mean oil outflow. In addition, a procedure that can be used to balance the designs in terms of weight and useful space is proposed. A genetic algorithm was implemented to search for optimal design parameters and to identify the nondominated Pareto frontier. Three real ships are used as case studies. [DOI:10.1115/1.4002740]
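The Pareto-frontier idea is easy to illustrate with the two objectives named in the abstract; a minimal sketch of the dominance test over hypothetical design evaluations (the GA operators themselves are not reproduced here):

```python
def dominates(a, b):
    """True if design a dominates design b; both objectives are minimized
    (here: total cost and mean oil outflow)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Return the nondominated designs from a list of (cost, outflow) pairs."""
    return [d for d in designs
            if not any(dominates(o, d) for o in designs if o is not d)]

# hypothetical evaluated designs: (total cost, mean oil outflow)
print(pareto_front([(100, 8.0), (95, 9.5), (110, 7.2), (120, 9.0)]))
```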

Relevance: 20.00%
Abstract:

We present a novel array RLS algorithm with a forgetting factor that circumvents the problem of fading regularization, inherent in the standard exponentially weighted RLS, by allowing for time-varying regularization matrices with generic structure. Simulations in finite precision show the algorithm's superiority over alternative algorithms in the context of adaptive beamforming.
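For context, in the standard exponentially weighted RLS the regularization enters only through the initialization of the inverse correlation matrix, so its influence fades with the forgetting factor; a sketch of that baseline (the proposed array algorithm with generic time-varying regularization is not reproduced here):

```python
import numpy as np

def ewrls(X, d, lam=0.99, delta=1e-2):
    """Standard exponentially weighted RLS, shown to illustrate fading
    regularization: the initial term delta*I only seeds P and its effect
    decays roughly like lam**n as samples accumulate.
    X : (n_samples, n_taps) regressors, d : desired signal
    """
    n, m = X.shape
    w = np.zeros(m)
    P = np.eye(m) / delta
    for i in range(n):
        x = X[i]
        k = P @ x / (lam + x @ P @ x)        # gain vector
        w = w + k * (d[i] - x @ w)           # a priori error update
        P = (P - np.outer(k, x @ P)) / lam   # inverse-correlation (Riccati) update
    return w
```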

Relevance: 20.00%
Abstract:

We study the problem of distributed estimation based on the affine projection algorithm (APA), which is developed from Newton's method for minimizing a cost function. The proposed solution is formulated to ameliorate the limited convergence properties of least-mean-square (LMS) type distributed adaptive filters with colored inputs. The transient and steady-state performance at each individual node in the network is analyzed using a weighted spatial-temporal energy conservation relation and confirmed by computer simulations. The simulation results also verify that the proposed algorithm provides not only a faster convergence rate but also an improved steady-state performance compared with an LMS-based scheme. In addition, the new approach attains an acceptable misadjustment at lower computational and memory cost than a distributed recursive least-squares (RLS) based method, provided the number of regressor vectors and the filter length are chosen appropriately.
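The local building block is the standard APA update; a minimal sketch for a single node (how the nodes cooperate across the network in the proposed scheme is not reproduced here):

```python
import numpy as np

def apa_update(w, U, d, mu=0.5, eps=1e-6):
    """One affine projection algorithm (APA) update at a single node.

    U  : (K, M) matrix of the K most recent regressor vectors
    d  : the K corresponding desired samples
    eps: small regularization for the K x K inversion
    """
    e = d - U @ w                                    # a priori error vector
    return w + mu * U.T @ np.linalg.solve(U @ U.T + eps * np.eye(len(d)), e)
```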

Relevance: 20.00%
Abstract:

We derive an easy-to-compute approximate bound on the range of step sizes for which the constant-modulus algorithm (CMA) will remain stable if initialized close to a minimum of the CM cost function. Our model highlights the influence of the signal constellation used in the transmission system: for smaller variation in the modulus of the transmitted symbols, the algorithm will be more robust and the steady-state misadjustment will be smaller. The theoretical results are validated through several simulations, for long and short filters and channels.
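The update whose step size the bound constrains is the standard CMA stochastic-gradient step; a minimal sketch (the unit dispersion constant assumed below corresponds to a constant-modulus constellation such as QPSK with unit-energy symbols):

```python
import numpy as np

def cma_step(w, x, mu):
    """One stochastic-gradient update of the constant-modulus algorithm (CMA 2-2).

    w : equalizer taps, x : current regressor (most recent channel outputs)
    The dispersion constant R2 = E|s|^4 / E|s|^2 depends on the constellation,
    which is exactly the influence the abstract points to; R2 = 1 is assumed here.
    """
    R2 = 1.0
    y = np.vdot(w, x)                        # equalizer output y = w^H x
    return w + mu * (R2 - abs(y) ** 2) * np.conj(y) * x
```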