66 results for Ant-based algorithm
Abstract:
We propose a computational method for the coupled simulation of a compressible flow interacting with a thin-shell structure undergoing large deformations. An Eulerian finite volume formulation is adopted for the fluid, and a Lagrangian formulation based on subdivision finite elements is adopted for the shell response. The coupling between the fluid and the solid response is achieved via a novel approach based on level sets. The basic approach furnishes a general algorithm for coupling Lagrangian shell solvers with Cartesian-grid-based Eulerian fluid solvers. The efficiency and robustness of the proposed approach are demonstrated with an airbag deployment simulation. It bears emphasis that in the proposed approach the solid and the fluid components, as well as their coupled interaction, are considered in full detail and modeled with an equivalent level of fidelity, without any oversimplifying assumptions or bias towards a particular physical aspect of the problem.
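To make the level-set coupling idea concrete, here is a minimal sketch, not the paper's solver: a signed distance field, which a real implementation would rebuild from the deforming Lagrangian shell mesh every time step, is used to tag Cartesian fluid cells as regular fluid, structure-covered, or interface band. The circular interface and grid dimensions below are invented for illustration.

```python
# Minimal sketch: tag Cartesian fluid cells against a Lagrangian interface via
# a level set. Here the "shell" is a circle so the signed distance is
# analytic; a real coupling would rebuild phi from the shell mesh each step.
import numpy as np

nx = ny = 64
x = np.linspace(-1.0, 1.0, nx)
y = np.linspace(-1.0, 1.0, ny)
X, Y = np.meshgrid(x, y)

# Level set: phi < 0 inside the shell, phi > 0 in the exterior fluid.
radius = 0.5
phi = np.sqrt(X**2 + Y**2) - radius

h = x[1] - x[0]                # grid spacing
fluid = phi > 0.0              # regular fluid cells
interior = phi < -h            # cells fully covered by the structure
band = ~fluid & ~interior      # narrow band where the ghost-cell coupling
                               # (interface pressures/velocities) would apply

print(f"fluid: {fluid.sum()}, interior: {interior.sum()}, band: {band.sum()}")
```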
Abstract:
A control algorithm is presented that addresses the stability issues inherent to the operation of monolithic mode-locked laser diodes. It enables continuous pulse-duration tuning without any onset of Q-switching instabilities. The algorithm's performance is demonstrated for two radically different laser diode geometries, achieving continuous pulse-duration tuning from 0.5 ps to 2.2 ps and from 1.2 ps to 10.2 ps, respectively. With practical applications in mind, this algorithm also facilitates control over performance parameters such as output power and wavelength during pulse-duration tuning. The developed algorithm enables the user to harness the operational flexibility of such a laser with 'push-button' simplicity.
Abstract:
This paper proposes a new algorithm for wavelet-based multidimensional image deconvolution which employs subband-dependent minimization and the dual-tree complex wavelet transform in an iterative Bayesian framework. In addition, this algorithm employs a new prior instead of the popular ℓ1 norm, and is thus able to embed a learning scheme during the iteration, which helps it to achieve better deconvolution results and faster convergence. © 2008 IEEE.
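As a rough illustration of the iterative wavelet-domain framework, the sketch below runs an ISTA-style deconvolution loop in which an ordinary orthogonal DWT (PyWavelets) stands in for the dual-tree complex wavelet transform, and a per-subband soft threshold stands in for the paper's learned prior. The signal, blur kernel and constants are invented, and the problem is 1-D for brevity.

```python
# Illustrative sketch only: ISTA-style iterative wavelet deconvolution with a
# subband-dependent soft threshold as a stand-in for a learned prior.
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 256
x_true = np.zeros(n); x_true[60:120] = 1.0          # piecewise-constant test signal
kernel = np.ones(9) / 9.0                           # symmetric blur, so H^T == H
y = np.convolve(x_true, kernel, mode="same") + 0.01 * rng.standard_normal(n)

H = lambda v: np.convolve(v, kernel, mode="same")

x = y.copy()
tau, lam = 1.0, 0.02                                # step size, base threshold
for _ in range(100):
    grad = H(y - H(x))                              # H^T (y - H x), data-fit gradient
    coeffs = pywt.wavedec(x + tau * grad, "db4", level=4)
    # Subband-dependent shrinkage: threshold scales with decomposition level.
    coeffs = [coeffs[0]] + [pywt.threshold(c, lam * tau * (j + 1), mode="soft")
                            for j, c in enumerate(coeffs[1:])]
    x = pywt.waverec(coeffs, "db4")[:n]

print("residual:", np.linalg.norm(y - H(x)))
```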
Abstract:
This paper introduces a new technique called species conservation for evolving parallel subpopulations. The technique is based on the concept of dividing the population into several species according to their similarity. Each of these species is built around a dominating individual called the species seed. Species seeds found in the current generation are saved (conserved) by moving them into the next generation. Our technique has proved to be very effective in finding multiple solutions of multimodal optimization problems. We demonstrate this by applying it to a set of test problems, including some problems known to be deceptive to genetic algorithms.
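The conservation step itself is simple to sketch. Below is a minimal Python illustration of the idea as described: species seeds are picked greedily by fitness subject to a minimum separation, and after an ordinary GA step the seeds are copied unchanged into the next generation. The similarity radius, test function and GA operators are placeholders, not the paper's settings.

```python
# Hedged sketch of species conservation: seeds dominate their species and are
# moved unchanged into the next generation.
import numpy as np

def find_species_seeds(pop, fitness, radius):
    """Greedily pick species seeds: best individual first, then the best
    individual farther than `radius` from every seed found so far."""
    order = np.argsort(fitness)[::-1]          # best fitness first
    seeds = []
    for i in order:
        if all(np.linalg.norm(pop[i] - pop[s]) > radius for s in seeds):
            seeds.append(i)
    return seeds

rng = np.random.default_rng(1)
f = lambda x: np.sin(5 * np.pi * x[:, 0]) ** 2     # multimodal 1-D test function
pop = rng.uniform(0, 1, size=(40, 1))

for gen in range(50):
    seeds = find_species_seeds(pop, f(pop), radius=0.1)
    # Ordinary GA step (here just mutation of random parents) ...
    children = pop[rng.integers(0, len(pop), len(pop))] \
               + 0.05 * rng.standard_normal(pop.shape)
    children = np.clip(children, 0, 1)
    # ... then conservation: seeds replace the worst children.
    worst = np.argsort(f(children))[:len(seeds)]
    children[worst] = pop[seeds]
    pop = children

final_seeds = find_species_seeds(pop, f(pop), 0.1)
print("conserved optima:", np.round(np.sort(pop[final_seeds], axis=0).ravel(), 3))
```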
Abstract:
The sensor scheduling problem can be formulated as a controlled hidden Markov model, and this paper solves the problem when the state, observation and action spaces are continuous. This general case is important as it is the natural framework for many applications. The aim is to minimise, over the action sequence, the variance of the estimation error of the hidden state. We present a novel simulation-based method that uses a stochastic gradient algorithm to find optimal actions. © 2007 Elsevier Ltd. All rights reserved.
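As a toy illustration of gradient search over an action sequence, the sketch below tunes a horizon of sensor "effort" actions for a scalar linear-Gaussian model, using an SPSA-style simultaneous-perturbation estimator in place of the paper's gradient estimator; every model constant is invented.

```python
# Toy sketch (not the paper's model): minimise filtered error variance plus an
# effort penalty over a horizon of actions via a stochastic gradient.
import numpy as np

A, Q, T, LAM = 0.9, 0.1, 20, 0.05

def cost(a):
    """Sum of posterior variances from the Riccati recursion, plus effort."""
    P, total = 1.0, 0.0
    for t in range(T):
        R = np.exp(-a[t]) + 0.05       # more effort -> less observation noise
        Pp = A * P * A + Q             # predict
        K = Pp / (Pp + R)              # Kalman gain
        P = (1 - K) * Pp               # update
        total += P
    return total + LAM * np.sum(a**2)

rng = np.random.default_rng(2)
a = np.zeros(T)
for k in range(500):
    delta = rng.choice([-1.0, 1.0], size=T)     # SPSA perturbation
    c, step = 0.1, 0.05 / (1 + k) ** 0.6
    ghat = (cost(a + c * delta) - cost(a - c * delta)) / (2 * c) * delta
    a -= step * ghat

print("optimised actions:", np.round(a, 2), " cost:", round(cost(a), 3))
```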
Abstract:
This paper presents a novel coarse-to-fine global localization approach inspired by object recognition and text retrieval techniques. Harris-Laplace interest points characterized by scale-invariant feature transform (SIFT) descriptors are used as natural landmarks. They are indexed into two databases: a location vector space model (LVSM) and a location database. The localization process consists of two stages: coarse localization and fine localization. Coarse localization from the LVSM is fast but not accurate enough, whereas localization from the location database using a voting algorithm is relatively slow but more accurate. The integration of the coarse and fine stages makes fast and reliable localization possible. If necessary, the localization result can be verified by the epipolar geometry between the representative view in the database and the view to be localized. In addition, the localization system recovers the position of the camera by essential matrix decomposition. The localization system has been tested in indoor and outdoor environments. The results show that our approach is efficient and reliable. © 2006 IEEE.
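Schematically, the two stages can be sketched as follows (random vectors stand in for the interest-point descriptors, and the vocabulary, database contents and match threshold are invented): a bag-of-words vector space model shortlists candidate locations quickly, and descriptor voting then re-ranks the shortlist.

```python
# Schematic two-stage lookup: fast vector-space shortlist, then slower voting.
import numpy as np

rng = np.random.default_rng(3)
n_locations, n_words, dim = 50, 200, 32
vocab = rng.standard_normal((n_words, dim))                 # visual vocabulary
db = [rng.standard_normal((100, dim)) for _ in range(n_locations)]

def quantize(desc):
    """Histogram of nearest visual words (bag-of-words vector)."""
    words = np.argmin(((desc[:, None, :] - vocab) ** 2).sum(-1), axis=1)
    return np.bincount(words, minlength=n_words).astype(float)

bow = np.array([quantize(d) for d in db])
bow /= np.linalg.norm(bow, axis=1, keepdims=True)

query = db[17] + 0.1 * rng.standard_normal(db[17].shape)    # noisy revisit of loc 17

# Coarse stage: cosine similarity in the vector space model.
q = quantize(query); q /= np.linalg.norm(q)
shortlist = np.argsort(bow @ q)[::-1][:5]

# Fine stage: vote for the shortlisted location whose descriptors match best.
def votes(loc):
    d2 = ((query[:, None, :] - db[loc]) ** 2).sum(-1)
    return (d2.min(axis=1) < 2.0).sum()     # arbitrary match threshold

best = max(shortlist, key=votes)
print("coarse shortlist:", shortlist, "-> fine answer:", best)
```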
Abstract:
This paper presents a novel coarse-to-fine global localization approach that is inspired by object recognition and text retrieval techniques. Harris-Laplace interest points characterized by SIFT descriptors are used as natural landmarks. These descriptors are indexed into two databases: an inverted index and a location database. The inverted index is built on a visual vocabulary learned from the feature descriptors. In the location database, each location is directly represented by a set of scale-invariant descriptors. The localization process consists of two stages: coarse localization and fine localization. Coarse localization from the inverted index is fast but not accurate enough, whereas localization from the location database using a voting algorithm is relatively slow but more accurate. The combination of the coarse and fine stages makes fast and reliable localization possible. In addition, if necessary, the localization result can be verified by the epipolar geometry between the representative view in the database and the view to be localized. Experimental results show that our approach is efficient and reliable. © 2005 IEEE.
Abstract:
Two new maximum power point tracking (MPPT) algorithms are presented: the input-voltage-sensor and duty-ratio MPPT algorithm (ViSD), and the output-voltage-sensor and duty-ratio MPPT algorithm (VoSD). The ViSD and VoSD algorithms have the features, characteristics and advantages of the incremental conductance (INC) algorithm; but unlike INC, which requires two sensors (a voltage sensor and a current sensor), the two algorithms are more desirable because they require only one sensor: a voltage sensor. Moreover, the VoSD technique is less complex and hence requires less computational processing. Both the ViSD and the VoSD techniques operate by maximising power at the converter output instead of the input. The ViSD algorithm uses a voltage sensor placed at the input of a boost converter, while the VoSD algorithm uses a voltage sensor placed at the output of a boost converter. © 2011 IEEE.
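A hedged sketch of the single-sensor principle behind VoSD: with a fixed resistive load, output power is V_out^2 / R, so a hill-climbing update of the duty ratio can track the maximum power point from the output voltage alone. The converter model below is a toy stand-in with a single interior maximum, not a real PV/boost model.

```python
# Toy sketch of output-voltage-only duty-ratio hill climbing (VoSD idea).
def pv_boost_vout(duty):
    """Invented plant: output voltage of a PV panel + boost converter as a
    function of duty ratio, with one interior maximum at duty = 0.55."""
    return 40.0 - 60.0 * (duty - 0.55) ** 2

duty, step, direction = 0.30, 0.01, +1
v_prev = pv_boost_vout(duty)
for _ in range(100):                       # one iteration per control period
    duty = min(max(duty + direction * step, 0.05), 0.95)
    v = pv_boost_vout(duty)
    if v < v_prev:                         # output voltage fell: reverse the
        direction = -direction             # perturbation direction
    v_prev = v

print(f"duty ratio settled near {duty:.2f} (toy optimum at 0.55)")
```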
Abstract:
We describe a novel constitutive model of lung parenchyma which can be used for continuum-mechanics-based predictive simulations. To develop this model, we experimentally determined the nonlinear material behavior of rat lung parenchyma via uniaxial tension tests on living precision-cut rat lung slices. The resulting force-displacement curves were then used as inputs for an inverse analysis, in which the Levenberg-Marquardt algorithm was used to optimize the material parameters of combinations and recombinations of established strain-energy density functions (SEFs). Comparing the best fits of the tested SEFs, we found W_par = 4.1 kPa (I_1 - 3)^2 + 20.7 kPa (I_1 - 3)^3 + 4.1 kPa (-2 ln J + J^2 - 1) to be the optimal constitutive model. This SEF consists of three summands: the first can be interpreted as the contribution of the elastin fibers and the ground substance, the second as the contribution of the collagen fibers, while the third controls the volumetric change. The presented approach will help to model the behavior of the pulmonary parenchyma and to quantify the strains and stresses during ventilation.
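The inverse-analysis step can be sketched generically. Below, SciPy's least_squares with method="lm" (a Levenberg-Marquardt implementation) fits the coefficients of the two polynomial terms of the SEF above to an invented uniaxial force-stretch curve; the data, the incompressibility assumption and the simplified stress formula are illustrative stand-ins for the paper's experiments.

```python
# Sketch of Levenberg-Marquardt fitting of SEF parameters to invented data.
import numpy as np
from scipy.optimize import least_squares

def model_force(params, lam):
    """Toy incompressible uniaxial response of W = c1 (I1-3)^2 + c2 (I1-3)^3:
    nominal force ~ dW/dlambda = dW/dI1 * dI1/dlambda."""
    c1, c2 = params
    I1 = lam**2 + 2.0 / lam                      # incompressible uniaxial I1
    dW_dI1 = 2 * c1 * (I1 - 3) + 3 * c2 * (I1 - 3) ** 2
    return dW_dI1 * 2 * (lam - lam**-2)          # chain rule: dI1/dlam = 2(lam - lam^-2)

# Invented "measurement": the model at known parameters plus noise, standing
# in for the precision-cut lung slice force-displacement curves.
rng = np.random.default_rng(5)
stretch = np.linspace(1.0, 1.3, 20)
force_meas = model_force([4.1, 20.7], stretch) + 0.005 * rng.standard_normal(20)

res = least_squares(lambda p: model_force(p, stretch) - force_meas,
                    x0=[1.0, 1.0], method="lm")
print("fitted c1, c2 [kPa]:", np.round(res.x, 2))   # should recover ~ [4.1, 20.7]
```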
Abstract:
A dynamic programming algorithm for joint data detection and carrier phase estimation of continuous-phase-modulated signals is presented. The intent is to combine the robustness of noncoherent detectors with the superior performance of coherent ones. The algorithm differs from the Viterbi algorithm only in the metric that it maximizes over the possible transmitted data sequences. This metric is influenced both by the correlation with the received signal and by the current estimate of the carrier phase. Carrier-phase estimation is decision directed, but there is no external phase-locked loop; instead, the phase of the best complex correlation with the received signal over the last few signaling intervals is used. The algorithm is slightly more complex than the coherent Viterbi algorithm but does not require narrowband filtering of the recovered carrier, as earlier approaches did, to achieve the same level of performance.
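The phase-aware metric can be sketched in isolation from the full trellis search. In the hedged example below, a survivor's recent complex correlations supply the phase reference, and the branch metric is the new correlation rotated by that reference; the signal model, window length and noise level are invented.

```python
# Sketch of a decision-directed, PLL-free branch metric for trellis detection.
import numpy as np

def branch_metric(r_interval, s_branch, recent_corrs):
    """r_interval: received samples for this signaling interval.
    s_branch: candidate transmitted samples along this trellis branch.
    recent_corrs: the survivor's complex correlations over the last few
    intervals, whose summed phase serves as the carrier-phase estimate."""
    corr = np.vdot(s_branch, r_interval)             # complex correlation
    ref = np.sum(recent_corrs)
    phase_ref = ref / abs(ref) if abs(ref) > 0 else 1.0
    return np.real(corr * np.conj(phase_ref)), corr  # metric, new correlation

# Toy usage: a constant-envelope segment received with an unknown phase offset.
rng = np.random.default_rng(4)
t = np.arange(8)
s = np.exp(1j * 0.3 * t)                             # candidate branch signal
r = s * np.exp(1j * 0.9) + 0.05 * (rng.standard_normal(8)
                                   + 1j * rng.standard_normal(8))
history = [np.vdot(s, r)] * 3                        # pretend 3 past intervals agreed
m, c = branch_metric(r, s, history)
print("metric with phase reference:", round(m, 2),
      "vs raw Re{corr}:", round(np.vdot(s, r).real, 2))
```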
Abstract:
This paper describes two applications in speech recognition of the use of stochastic context-free grammars (SCFGs) trained automatically via the Inside-Outside Algorithm. First, SCFGs are used to model VQ encoded speech for isolated word recognition and are compared directly to HMMs used for the same task. It is shown that SCFGs can model this low-level VQ data accurately and that a regular grammar based pre-training algorithm is effective both for reducing training time and obtaining robust solutions. Second, an SCFG is inferred from a transcription of the speech used to train a phoneme-based recognizer in an attempt to model phonotactic constraints. When used as a language model, this SCFG gives improved performance over a comparable regular grammar or bigram. © 1991.
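For context, the inside pass at the heart of the Inside-Outside Algorithm is short to sketch. The toy SCFG and sentence below are invented, and the full training loop would add the outside pass and re-estimate rule probabilities from expected counts.

```python
# Minimal inside-probability pass for a toy SCFG in Chomsky normal form.
from collections import defaultdict

binary = {("S", ("A", "B")): 0.8, ("S", ("B", "A")): 0.2}   # X -> Y Z probabilities
unary = {("A", "a"): 1.0, ("B", "b"): 1.0}                   # X -> terminal probabilities
sentence = ["a", "b"]
n = len(sentence)

inside = defaultdict(float)        # inside[(X, i, j)] = P(X derives w_i..w_{j-1})
for i, w in enumerate(sentence):   # width-1 spans from unary rules
    for (X, term), p in unary.items():
        if term == w:
            inside[(X, i, i + 1)] += p

for width in range(2, n + 1):      # wider spans from binary rules
    for i in range(n - width + 1):
        j = i + width
        for (X, (Y, Z)), p in binary.items():
            for k in range(i + 1, j):
                inside[(X, i, j)] += p * inside[(Y, i, k)] * inside[(Z, k, j)]

print("P(sentence | grammar) =", inside[("S", 0, n)])   # 0.8 * 1.0 * 1.0 = 0.8
```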