860 results for Simulation-optimization method


Relevance: 30.00%

Abstract:

Numerical modeling of the eddy currents induced in the human body by the pulsed field gradients in MRI presents a difficult computational problem. It requires an efficient and accurate computational method for high spatial resolution analyses with a relatively low input frequency. In this article, a new technique is described which allows the finite difference time domain (FDTD) method to be efficiently applied over a very large frequency range, including low frequencies, which is not the case for conventional FDTD-based methods. A method of implementing streamline gradients in FDTD is presented, as well as comparative analyses which show that correct source injection in the FDTD simulation plays a crucial role in obtaining accurate solutions. In particular, making use of the derivative of the input source waveform is shown to provide distinct benefits in accuracy over direct source injection. The method requires no alterations to the properties of either the source or the transmission media, is essentially frequency independent, and the source injection scheme has been verified against examples with analytical solutions. Results are presented showing the spatial distribution of gradient-induced electric fields and eddy currents in a complete body model.
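
As a rough illustration of the derivative-source idea (not the authors' body-model implementation), the sketch below runs a minimal 1-D FDTD grid in Python and injects the time derivative of a trapezoidal gradient waveform as a soft source; the grid size, material constants, waveform timing and source amplitude are all assumed values.

    import numpy as np

    # --- illustrative parameters, not taken from the paper ---
    nz, nt = 200, 4000            # grid cells, time steps
    dz = 1e-3                     # cell size [m]
    c0 = 3e8
    dt = 0.5 * dz / c0            # Courant-stable time step
    eps0, mu0 = 8.854e-12, 4e-7 * np.pi

    ez = np.zeros(nz)             # electric field
    hy = np.zeros(nz - 1)         # magnetic field

    # trapezoidal "gradient" waveform and its time derivative
    t = np.arange(nt) * dt
    ramp, flat = 200 * dt, 1000 * dt
    def trapezoid(t):
        up = np.clip(t / ramp, 0.0, 1.0)
        down = np.clip((2 * ramp + flat - t) / ramp, 0.0, 1.0)
        return np.minimum(up, down)
    src = trapezoid(t)
    dsrc = np.gradient(src, dt)   # derivative of the input source waveform

    k_src = nz // 2               # source cell
    for n in range(nt):
        hy += dt / (mu0 * dz) * np.diff(ez)
        ez[1:-1] += dt / (eps0 * dz) * np.diff(hy)
        ez[k_src] += dt / eps0 * 1e-6 * dsrc[n]   # soft injection of the derivative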

Relevance: 30.00%

Abstract:

A new thermodynamic approach is developed in this paper to analyze adsorption in slit-like pores. The equilibrium is described by two thermodynamic conditions: the Helmholtz free energy must be minimal, and the grand potential functional at that minimum must be negative. This approach leads to local isotherms that describe adsorption in the form of a single layer or two layers near the pore walls. In narrow pores, local isotherms have one step, which can be either very sharp but continuous or discontinuous and bench-like for a definite range of pore widths; the latter reflects the so-called 0 → 1 monolayer transition. In relatively wide pores, local isotherms have two steps: the first corresponds to the appearance of two layers near the pore walls, while the second corresponds to the filling of the space between these layers. All features of the local isotherms are in agreement with results obtained from density functional theory and Monte Carlo simulations. The approach is used for determining pore size distributions of carbon materials. We illustrate this with benzene adsorption data on activated carbon at 20, 50, and 80 °C, argon adsorption on activated carbon Norit ROX at 87.3 K, and nitrogen adsorption on activated carbon Norit R1 at 77.3 K.
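
In symbols (a restatement of the two conditions above in standard grand-canonical notation, not notation taken from the paper), the equilibrium density profile \rho^*(z) in a slit pore of width H at chemical potential \mu satisfies

    \Omega[\rho] = F[\rho] - \mu \int_0^H \rho(z)\,dz , \qquad
    \left. \frac{\delta F[\rho]}{\delta \rho(z)} \right|_{\rho=\rho^*} = \mu , \qquad
    \Omega[\rho^*] < 0 ,

where the functional-derivative condition expresses minimality of the Helmholtz free energy at fixed chemical potential, and the negative grand potential at that minimum ensures the adsorbed state is favoured over the empty-pore reference.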

Relevance: 30.00%

Abstract:

The Load-Unload Response Ratio (LURR) method is an intermediate-term earthquake prediction approach that has shown considerable promise. It involves calculating the ratio of a specified energy release measure during loading and unloading, where the loading and unloading periods are determined from the earth-tide-induced perturbations to the Coulomb failure stress on optimally oriented faults. High LURR values are frequently observed a few months to a few years before large earthquakes. These signals may have a similar origin to the accelerating seismic moment release (AMR) observed prior to many large earthquakes, or may be due to critical sensitivity of the crust when a large earthquake is imminent. As a first step towards studying the underlying physical mechanism for the LURR observations, numerical studies are conducted using the particle-based lattice solid model (LSM) to determine whether LURR observations can be reproduced. The model is initialized as a heterogeneous 2-D block made up of random-sized particles bonded by elastic-brittle links. The system is subjected to uniaxial compression from rigid driving plates on the upper and lower edges of the model, and experiments are conducted using both strain and stress control to load the plates. A sinusoidal stress perturbation is added to the gradual compressional loading to simulate loading and unloading cycles, and LURR is calculated. The results reproduce signals similar to those observed in earthquake prediction practice, with a high LURR value followed by a sudden drop prior to macroscopic failure of the sample. This suggests that LURR provides a good predictor of catastrophic failure in elastic-brittle systems and motivates further research into the underlying physical mechanisms and statistical properties of high LURR values. The results provide encouragement for earthquake prediction research and for the use of advanced simulation models to probe the physics of earthquakes.
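
The ratio itself is simple to compute once loading and unloading periods are defined; the sketch below does so for a synthetic event catalogue and a purely sinusoidal Coulomb-stress perturbation (the catalogue, energies and tidal period are illustrative assumptions, not data from the study).

    import numpy as np

    rng = np.random.default_rng(0)

    # synthetic catalogue: event times [days] and radiated energies (illustrative)
    t_events = np.sort(rng.uniform(0.0, 365.0, 500))
    energy = 10.0 ** rng.uniform(0.0, 3.0, 500)

    # tide-induced Coulomb failure stress perturbation, period ~0.5 day (assumed)
    period = 0.5
    dcfs_rate = np.cos(2.0 * np.pi * t_events / period)   # sign of d(CFS)/dt

    loading = dcfs_rate > 0.0            # stress increasing -> loading period
    unloading = ~loading

    # LURR: energy released during loading over energy released during unloading
    lurr = energy[loading].sum() / energy[unloading].sum()
    print(f"LURR = {lurr:.2f}  (near 1 for a random catalogue, high before failure)")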

Relevance: 30.00%

Abstract:

Free-space optical interconnects (FSOIs), made up of dense arrays of vertical-cavity surface-emitting lasers, photodetectors, and microlenses, can be used to implement high-speed and high-density communication links and hence replace the inferior electrical interconnects. A major concern in the design of FSOIs is minimization of the optical channel cross talk arising from laser beam diffraction. In this article we introduce modifications to the mode expansion method of Tanaka et al. [IEEE Trans. Microwave Theory Tech. MTT-20, 749 (1972)] that make it an efficient tool for the modelling and design of FSOIs in the presence of diffraction. We demonstrate that our modified mode expansion method has accuracy similar to the exact solution of the Huygens-Kirchhoff diffraction integral for both weak and strong beam clipping, and that it is much more accurate than existing approximations. The strength of the method is twofold: first, it is applicable in the region of pronounced diffraction (strong beam clipping), where all other approximations fail; second, unlike the exact-solution method, it can be used efficiently to model diffraction on multiple apertures. These features make the mode expansion method useful for the design and optimization of free-space architectures containing multiple optical elements, including optical interconnects and optical clock distribution systems. (C) 2003 Optical Society of America.
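
The core step of any mode expansion treatment of beam clipping can be shown in one transverse dimension: project the clipped field onto a Hermite-Gauss basis and resum. The toy below is illustrative only; it is not the modified method of the article, and the beam waist, aperture half-width and mode count are arbitrary assumptions.

    import numpy as np
    from numpy.polynomial.hermite import hermval
    from math import factorial, pi, sqrt

    w = 1.0            # beam waist (assumed units)
    a = 0.8            # aperture half-width -> fairly strong clipping
    n_modes = 40
    x = np.linspace(-5 * w, 5 * w, 4001)
    dx = x[1] - x[0]

    def hg_mode(n, x, w):
        """Normalized 1-D Hermite-Gauss mode u_n(x)."""
        coeffs = np.zeros(n + 1)
        coeffs[n] = 1.0
        hn = hermval(np.sqrt(2.0) * x / w, coeffs)
        norm = (2.0 / pi) ** 0.25 / sqrt(2.0 ** n * factorial(n) * w)
        return norm * hn * np.exp(-(x / w) ** 2)

    incident = hg_mode(0, x, w)                  # fundamental Gaussian beam
    clipped = incident * (np.abs(x) <= a)        # hard aperture (beam clipping)

    # expansion coefficients c_n = <u_n | clipped field>
    c = np.array([np.sum(clipped * hg_mode(n, x, w)) * dx for n in range(n_modes)])

    # resummed field behind the aperture; propagation would now act mode by mode
    resummed = sum(c[n] * hg_mode(n, x, w) for n in range(n_modes))
    err = np.sum((resummed - clipped) ** 2) * dx
    print(f"power passed: {np.sum(clipped**2) * dx:.3f}, reconstruction L2 error: {err:.2e}")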

Relevance: 30.00%

Abstract:

Frequency deviation is a common problem in power system signal processing. Many power system measurements are carried out at a fixed sampling rate on the assumption that the system operates at its nominal frequency (50 or 60 Hz). However, the actual frequency may deviate from the nominal value from time to time for various reasons, such as disturbances and the subsequent system transients. Measurement of signals based on a fixed sampling rate may introduce errors in such situations. To achieve high-precision signal measurement, appropriate algorithms need to be employed to reduce the impact of frequency deviation in the power system data acquisition process. This paper proposes an advanced algorithm that enhances the Fourier transform for power system signal processing and is able to effectively correct for frequency deviation under a fixed sampling rate. Accurate measurement of power system signals is essential for the secure and reliable operation of power systems, and the algorithm is readily applicable wherever signal processing is affected by frequency deviation. Both a mathematical proof and numerical simulations are given in this paper to illustrate the robustness and effectiveness of the proposed algorithm. Crown Copyright (C) 2003 Published by Elsevier Science B.V. All rights reserved.
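
The error mechanism, and one standard way of correcting for it, fits in a few lines; this is a generic least-squares correction at the estimated off-nominal frequency, not the enhanced Fourier algorithm of the paper, and the 0.5 Hz deviation and sampling setup are assumed for illustration.

    import numpy as np

    f_nom, f_act = 50.0, 49.5          # nominal vs actual frequency [Hz] (assumed)
    fs = 64 * f_nom                    # fixed sampling rate: 64 samples per nominal cycle
    n = 64                             # one nominal cycle of samples
    t = np.arange(n) / fs
    x = 100.0 * np.cos(2 * np.pi * f_act * t + 0.3)     # measured waveform

    # (a) one-cycle DFT evaluated at the nominal frequency -> leakage/phase error
    X = 2.0 / n * np.sum(x * np.exp(-2j * np.pi * f_nom * t))
    print("amplitude from nominal-frequency DFT :", abs(X))

    # (b) least-squares fit of cos/sin at the (estimated) actual frequency
    A = np.column_stack([np.cos(2 * np.pi * f_act * t),
                         np.sin(2 * np.pi * f_act * t)])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    print("amplitude after frequency correction  :", np.hypot(*coef))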

Relevance: 30.00%

Abstract:

The use of a fitted-parameter watershed model to address water quantity and quality management issues requires that it be calibrated under a wide range of hydrologic conditions. However, rarely does model calibration result in a unique parameter set. Parameter nonuniqueness can lead to predictive nonuniqueness. The extent of model predictive uncertainty should be investigated if management decisions are to be based on model projections. Using models built for four neighboring watersheds in the Neuse River Basin of North Carolina, the application of the automated parameter optimization software PEST in conjunction with the Hydrologic Simulation Program-FORTRAN (HSPF) is demonstrated. Parameter nonuniqueness is illustrated, and a method is presented for calculating many different sets of parameters, all of which acceptably calibrate a watershed model. A regularization methodology is discussed in which models for similar watersheds can be calibrated simultaneously. Using this method, parameter differences between watershed models can be minimized while maintaining fit between model outputs and field observations. In recognition of the fact that parameter nonuniqueness and predictive uncertainty are inherent to the modeling process, PEST's nonlinear predictive analysis functionality is then used to explore the extent of model predictive uncertainty.
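
The regularization idea, calibrating neighbouring watershed models together while penalizing differences between their parameters, can be sketched with a deliberately tiny linear rainfall-runoff stand-in for HSPF; the two-parameter model, the synthetic data and the weight lam are assumptions for illustration and are unrelated to the actual PEST/HSPF setup.

    import numpy as np

    rng = np.random.default_rng(1)
    precip = rng.gamma(2.0, 5.0, 200)                    # synthetic daily rainfall

    # "true" responses of two similar neighbouring watersheds + observation noise
    q1 = 0.42 * precip + 1.0 + rng.normal(0, 0.5, 200)
    q2 = 0.45 * precip + 1.2 + rng.normal(0, 0.5, 200)

    def calibrate(lam):
        """Joint least squares for q_i = a_i * P + b_i with penalty rows
        lam*(a1 - a2) and lam*(b1 - b2) tying the two parameter sets together."""
        n = len(precip)
        A = np.zeros((2 * n + 2, 4))                     # unknowns: a1, b1, a2, b2
        y = np.concatenate([q1, q2, [0.0, 0.0]])
        A[:n, 0], A[:n, 1] = precip, 1.0
        A[n:2 * n, 2], A[n:2 * n, 3] = precip, 1.0
        A[2 * n] = [lam, 0.0, -lam, 0.0]                 # lam * (a1 - a2) ~ 0
        A[2 * n + 1] = [0.0, lam, 0.0, -lam]             # lam * (b1 - b2) ~ 0
        return np.linalg.lstsq(A, y, rcond=None)[0]

    for lam in (0.0, 5.0, 500.0):
        a1, b1, a2, b2 = calibrate(lam)
        print(f"lam={lam:6.1f}  a1={a1:.3f} a2={a2:.3f}  b1={b1:.2f} b2={b2:.2f}")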

Relevance: 30.00%

Abstract:

A new wavelet-based adaptive framework for solving population balance equations (PBEs) is proposed in this work. The technique is general, powerful and efficient without the need for prior assumptions about the characteristics of the processes. Because number densities can vary steeply across the size range, a new strategy is developed to select the optimal order of resolution and the collocation points based on an interpolating wavelet transform (IWT). The proposed technique has been tested for size-independent agglomeration, agglomeration with a linear summation kernel and agglomeration with a nonlinear kernel. In all cases, the predicted and analytical particle size distributions (PSDs) are in excellent agreement. Further work on the solution of the general population balance equations with nucleation, growth and agglomeration and the solution of steady-state population balance equations will be presented in this framework. (C) 2002 Elsevier Science B.V. All rights reserved.
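
The adaptive-grid step can be illustrated on its own: an interpolating wavelet transform flags the dyadic grid points whose values are poorly predicted by interpolation from the next coarser level, and only those points are retained as collocation points. The sketch below applies a generic IWT thresholding (linear prediction, for brevity) to an assumed steep test density; it is not the paper's PBE solver.

    import numpy as np

    def adaptive_points(f, j_max, eps):
        """Select collocation points on a dyadic grid in [0, 1]: keep a point if
        its interpolating-wavelet detail (value minus linear prediction from the
        next coarser level) exceeds eps."""
        keep = {0.0, 1.0}                                # always keep the ends
        for j in range(1, j_max + 1):
            h = 1.0 / 2 ** j
            odd = np.arange(1, 2 ** j, 2) * h            # points new at level j
            detail = np.abs(f(odd) - 0.5 * (f(odd - h) + f(odd + h)))
            keep.update(x for x, d in zip(odd, detail) if d > eps)
        return np.array(sorted(keep))

    # steep, nearly discontinuous number density (illustrative)
    density = lambda v: np.exp(-3.0 * v) / (1.0 + np.exp(-200.0 * (v - 0.3)))

    pts = adaptive_points(density, j_max=8, eps=1e-3)
    print(f"kept {len(pts)} of {2**8 + 1} dyadic points; they cluster near v = 0.3")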

Relevance: 30.00%

Abstract:

Pectus excavatum is the most common congenital deformity of the anterior chest wall, in which an abnormal formation of the rib cage gives the chest a caved-in or sunken appearance. Today, the surgical correction of this deformity is carried out in children and adults through the Nuss technique, which consists of placing a prosthetic bar under the sternum and over the ribs. Although this technique has been shown to be safe and reliable, not all patients achieve an adequate cosmetic outcome, which often leads to psychological problems and social stress, before and after the surgical correction. This paper targets this particular problem by presenting a method to predict the patient's surgical outcome based on pre-surgical imaging information and a dynamic model of the chest skin. The proposed approach uses the patient's pre-surgical thoracic CT scan and anatomical-surgical references to perform a 3D segmentation of the left ribs, right ribs, sternum, and skin. The technique encompasses three steps: (a) approximation of the cartilages between the ribs and the sternum through b-spline interpolation; (b) a volumetric mass-spring model that connects two layers, an inner skin layer based on the outer pleura contour and the outer skin surface; and (c) displacement of the sternum according to the prosthetic bar position. A dynamic model of the skin around the chest wall region was generated, capable of simulating the effect of the movement of the prosthetic bar along the sternum. The results were compared with and validated against the patient's post-surgical skin surface acquired with the Polhemus FastSCAN system.
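
The mass-spring layer idea reduces to a small runnable sketch: two 1-D chains of nodes (an inner layer and an outer skin layer) joined by neighbour and cross springs are relaxed by damped explicit integration after the "sternum" node of the inner layer is pushed outward. The geometry, stiffnesses and imposed displacement below are assumed values, not the patient-specific model of the paper.

    import numpy as np

    n = 21                                 # nodes per layer (assumed)
    inner = np.zeros(n)                    # elevation of the inner layer [m]
    outer = np.full(n, 0.02)               # outer skin starts 2 cm above it
    inner[n // 2] += 0.03                  # bar lifts the sternum node by 3 cm

    k_layer, k_cross, damping, dt = 400.0, 200.0, 2.0, 1e-3
    rest_cross = 0.02
    vel = np.zeros(n)

    def forces(y):
        """Vertical forces on the outer layer from neighbour and cross springs."""
        f = np.zeros(n)
        f[:-1] += k_layer * (y[1:] - y[:-1])          # pull toward neighbours
        f[1:] -= k_layer * (y[1:] - y[:-1])
        f += k_cross * (inner + rest_cross - y)       # springs to the inner layer
        return f

    for _ in range(20000):                            # damped relaxation to equilibrium
        vel += dt * (forces(outer) - damping * vel)
        outer += dt * vel

    print("simulated skin elevation over the sternum:", round(outer[n // 2], 4), "m")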

Relevance: 30.00%

Abstract:

In this paper, we present a method for estimating the local thickness distribution in finite element models, applied to injection-molded and cast engineering parts. The method delivers considerably improved performance compared with two previously proposed approaches, and has been validated against thickness measurements made by different human operators. We also demonstrate that using this method to assign a distribution of local thickness in FEM crash simulations results in a much more accurate prediction of the real part performance, thereby increasing the benefits of computer simulations in engineering design by enabling zero-prototyping and thus reducing product development costs. The simulation results have been compared with experimental tests, evidencing the advantage of the proposed method. The proposed approach to considering local thickness distribution in FEM crash simulations therefore has high potential in the product development process of complex and highly demanding injection-molded and cast parts, and is currently being used by Ford Motor Company.
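
For intuition only, one common way to approximate a local thickness field on a voxelized part is "twice the distance to the nearest free surface", computed with a Euclidean distance transform. This is a generic stand-in, not the method of the paper, and the plate-with-a-rib cross-section below is invented for illustration.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    voxel = 0.25                                  # voxel edge length [mm] (assumed)
    part = np.zeros((40, 200), dtype=bool)        # 2-D cross-section of a part
    part[:8, :] = True                            # 2 mm base plate
    part[8:32, 90:110] = True                     # 6 mm tall, 5 mm wide rib

    # distance from every solid voxel to the nearest void voxel
    dist = distance_transform_edt(part) * voxel
    thickness = 2.0 * dist                        # crude local thickness estimate

    print("estimated plate thickness:", thickness[4, 30], "mm")     # ~2 mm
    print("estimated rib thickness  :", thickness[20, 100], "mm")   # ~5 mm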

Relevance: 30.00%

Abstract:

Polymeric materials have become the reference materials for high-reliability and high-performance applications. However, their performance under service conditions is difficult to predict, due in large part to their inherently complex morphology, which leads to non-linear and anisotropic behavior that depends strongly on the thermomechanical environment under which the material is processed. In this work, a multiscale approach is proposed to investigate the mechanical properties of polymer-based materials under strain. To achieve a better understanding of the phenomena occurring at the smaller scales, a coupling of the finite element method (FEM) and molecular dynamics (MD) modeling, in an iterative procedure, was employed, enabling prediction of the macroscopic constitutive response. Because the mechanical response can be related to the local microstructure, which in turn depends on the nano-scale structure, this multiscale approach computes the stress-strain relationship at every analysis point of the macro-structure by detailed modeling of the underlying micro- and meso-scale deformation phenomena. The proposed multiscale approach can thus predict properties at the macroscale while taking into consideration phenomena that occur at the mesoscale, offering increased potential accuracy compared to traditional methods.
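
Structurally, such a coupling is a nested loop: the macro solver hands each analysis point a local strain, a fine-scale model returns the corresponding stress, and the macro state is re-equilibrated. The skeleton below is only an organizational sketch under strong assumptions: the fine-scale call is a toy nonlinear law standing in for an MD simulation, and the "macro problem" is a Reuss-type series arrangement of a few points rather than a real FE assembly.

    import numpy as np

    rng = np.random.default_rng(3)
    scatter = rng.uniform(0.9, 1.1, 4)        # assumed microstructural variability

    def fine_scale_stress(strain, k):
        """Placeholder for the lower-scale computation: in the coupled framework
        this would equilibrate an MD cell at the given strain and return the
        averaged virial stress; here it is a toy nonlinear law [MPa]."""
        return k * (2.5e3 * strain + 4.0e4 * strain ** 3)

    def macro_step(macro_strain, n_iter=200):
        """Reuss-like macro step: the analysis points act in series, sharing one
        stress while their strains average to the imposed macro strain."""
        strain = np.full(len(scatter), macro_strain)
        for _ in range(n_iter):
            stress = fine_scale_stress(strain, scatter)
            sigma = stress.mean()                          # common target stress
            strain = strain + (sigma - stress) / 2.5e3     # secant-style update
            strain += macro_strain - strain.mean()         # keep the macro strain
        return sigma

    for load in np.linspace(0.0, 0.05, 6):                 # macro load steps
        print(f"macro strain {load:.3f} -> homogenized stress {macro_step(load):7.1f} MPa")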

Relevance: 30.00%

Abstract:

A previously developed model is used to numerically simulate real clinical cases of the surgical correction of scoliosis. This model consists of one-dimensional finite elements with spatial deformation in which (i) the column is represented by its axis; (ii) the vertebrae are assumed to be rigid; and (iii) the deformability of the column is concentrated in springs that connect the successive rigid elements. The metallic rods used for the surgical correction are modeled by beam elements with linear elastic behavior. To obtain the forces at the connections between the metallic rods and the vertebrae, geometrically non-linear finite element analyses are performed. The tightening sequence determines the magnitude of the forces applied to the patient's column, and it is desirable to keep those forces as small as possible. In this study, a Genetic Algorithm optimization is applied to this model in order to determine the sequence that minimizes the corrective forces applied during the surgery. This amounts to finding the optimal permutation of the integers 1, ..., n, with n the number of vertebrae involved; we are thus faced with a combinatorial optimization problem isomorphic to the Traveling Salesman Problem. Because the fitness evaluation requires one computationally intensive finite element analysis per candidate solution, a parallel implementation of the Genetic Algorithm is developed.
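
A permutation-encoded GA of the kind described fits in a short sketch; the fitness below is only a stand-in cost (a random but fixed per-pair cost matrix), since each real evaluation would be a full geometrically non-linear FE analysis, and the population size, crossover and mutation settings are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(7)
    n = 12                                           # number of vertebrae (assumed)
    cost = rng.uniform(1.0, 10.0, (n, n))            # stand-in for the FE-based cost

    def fitness(seq):
        """Placeholder: sum of pairwise costs along the tightening sequence.
        In the real study this is one computationally intensive FE analysis."""
        return sum(cost[a, b] for a, b in zip(seq[:-1], seq[1:]))

    def order_crossover(p1, p2):
        """Keep a slice of p1; fill the remaining genes in p2's relative order."""
        i, j = sorted(rng.choice(n, 2, replace=False))
        child = [-1] * n
        child[i:j] = p1[i:j]
        fill = [g for g in p2 if g not in child]
        child[:i] = fill[:i]
        child[j:] = fill[i:]
        return child

    pop = [list(rng.permutation(n)) for _ in range(60)]
    for _ in range(200):                             # generations
        pop.sort(key=fitness)
        elite, children = pop[:20], []
        while len(children) < 40:
            p1, p2 = (elite[k] for k in rng.choice(len(elite), 2, replace=False))
            child = order_crossover(p1, p2)
            if rng.random() < 0.2:                   # swap mutation
                a, b = rng.choice(n, 2, replace=False)
                child[a], child[b] = child[b], child[a]
            children.append(child)
        pop = elite + children

    best = min(pop, key=fitness)
    print("best sequence:", [int(g) for g in best], " cost:", round(fitness(best), 2))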

Relevance: 30.00%

Abstract:

Because of the adverse effect of CO2 from fossil fuel combustion on the earth's ecosystems, finding the most cost-effective method for CO2 capture is an important area of research. The predominant process for CO2 capture currently employed by industry is chemical absorption in amine solutions. A dynamic model of the desorption process was developed for monoethanolamine (MEA) solution. Henry's law was used for modelling the vapour-phase equilibrium of the CO2, and fugacity ratios calculated with the Peng-Robinson equation of state (EOS) were used for H2O, MEA, N2, and O2. Chemical reactions between CO2 and MEA were included in the model, along with the enhancement factor for chemical absorption. Liquid and vapour energy balances were developed to calculate the liquid and vapour temperatures, respectively.
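
The Henry's-law part of such a model reduces to one line per stage: the equilibrium CO2 partial pressure is the Henry constant times the free (molecular) CO2 mole fraction in the liquid. The constants and loading below are illustrative assumptions only, not the correlation used in the paper.

    import numpy as np

    R = 8.314                       # J/(mol K)

    def henry_co2(T):
        """Illustrative van 't Hoff-type Henry constant for CO2 [Pa per mole
        fraction]; the reference value and solution enthalpy are assumed."""
        H_ref, T_ref, dH_sol = 2.0e8, 313.15, -19.9e3
        return H_ref * np.exp(dH_sol / R * (1.0 / T - 1.0 / T_ref))

    def co2_partial_pressure(x_co2_free, T):
        """Vapour-phase equilibrium of CO2 by Henry's law: p_CO2 = H(T) * x_CO2."""
        return henry_co2(T) * x_co2_free

    for T in (313.15, 353.15, 393.15):          # absorber-to-stripper temperatures
        p = co2_partial_pressure(1.0e-5, T)     # assumed free-CO2 mole fraction
        print(f"T = {T:.2f} K -> p_CO2 ≈ {p / 1e3:.1f} kPa")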

Relevance: 30.00%

Abstract:

Topology optimization consists in finding the spatial distribution of a given total volume of material such that the resulting structure has some optimal property, for instance, maximum structural stiffness or maximum fundamental eigenfrequency. In this paper a Genetic Algorithm (GA) employing a representation method based on trees is developed to generate initial feasible individuals that remain feasible upon crossover and mutation, and as such do not require any repair operator to ensure feasibility. Several application examples are studied involving the topology optimization of structures where the objective function is the maximization of the stiffness or the maximization of the first or second eigenfrequency of a plate, all cases having a prescribed material volume constraint.
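
The feasible-by-construction idea can be illustrated with a much simpler stand-in for the tree encoding of the paper: grow a random tree of material cells on a grid until the prescribed volume fraction is reached, so every generated individual is a single connected component by construction (connectivity being the feasibility requirement assumed here) and no repair operator is needed.

    import numpy as np

    def grow_connected_layout(shape, vol_frac, seed=None):
        """Random tree growth on a grid: start from one cell and repeatedly turn a
        void cell adjacent to the current material set into material, so the
        layout is always one connected component."""
        rng = np.random.default_rng(seed)
        ny, nx = shape
        target = int(vol_frac * nx * ny)
        material = np.zeros(shape, dtype=bool)
        start = (int(rng.integers(ny)), int(rng.integers(nx)))
        material[start] = True
        frontier = [start]
        while material.sum() < target and frontier:
            j, i = frontier[rng.integers(len(frontier))]
            nbrs = [(j + dj, i + di)
                    for dj, di in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= j + dj < ny and 0 <= i + di < nx
                    and not material[j + dj, i + di]]
            if not nbrs:
                frontier.remove((j, i))       # fully surrounded, drop from frontier
                continue
            cell = nbrs[rng.integers(len(nbrs))]
            material[cell] = True
            frontier.append(cell)
        return material

    layout = grow_connected_layout((20, 40), vol_frac=0.4, seed=2)
    print("material cells:", int(layout.sum()), "of", 20 * 40)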

Relevance: 30.00%

Abstract:

The portfolio generating the iTraxx EUR index is modeled by coupled Markov chains, in which each industry of the portfolio evolves according to its own Markov transition matrix. Using a variant of the method of moments, the model parameters are estimated from a Standard & Poor's data set. Swap spreads are evaluated by Monte Carlo simulation. Along with an actuarially fair spread, a least-squares spread is considered.
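
The pricing step on its own can be sketched as follows: simulate rating paths from per-industry transition behaviour, discount the default losses and the premium payments, and set the fair spread so that their expected present values match. Everything below is a simplification for illustration; the survive/default probabilities, recovery, maturity and discount rate are assumed, and the coupling mechanism of the actual model is not reproduced.

    import numpy as np

    rng = np.random.default_rng(11)

    n_names, years, recovery, r = 125, 5, 0.4, 0.03
    industry = rng.integers(5, size=n_names)                     # 5 industries
    pd_industry = np.array([0.004, 0.008, 0.012, 0.020, 0.030])  # assumed annual PDs

    def fair_spread(n_paths=10000):
        disc = np.exp(-r * np.arange(1, years + 1))
        loss_pv = np.zeros(n_paths)
        prem_pv = np.zeros(n_paths)       # PV of premium paid on surviving notional
        for p in range(n_paths):
            alive = np.ones(n_names, dtype=bool)
            for t in range(years):
                defaults = alive & (rng.random(n_names) < pd_industry[industry])
                loss_pv[p] += disc[t] * (1 - recovery) * defaults.sum() / n_names
                alive &= ~defaults
                prem_pv[p] += disc[t] * alive.sum() / n_names
        return loss_pv.mean() / prem_pv.mean()   # actuarially fair running spread

    print(f"fair index spread ≈ {1e4 * fair_spread():.0f} bp")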

Relevance: 30.00%

Abstract:

In practical applications of optimization, it is common to have several conflicting objective functions to optimize. Frequently, these functions are subject to noise or can be of black-box type, preventing the use of derivative-based techniques. We propose a novel multiobjective derivative-free methodology, calling it direct multisearch (DMS), which does not aggregate any of the objective functions. Our framework is inspired by the search/poll paradigm of direct-search methods of directional type and uses the concept of Pareto dominance to maintain a list of nondominated points (from which the new iterates or poll centers are chosen). The aim of our method is to generate as many points in the Pareto front as possible from the polling procedure itself, while keeping the whole framework general enough to accommodate other disseminating strategies, in particular, when using the (here also) optional search step. DMS generalizes to multiobjective optimization (MOO) all direct-search methods of directional type. We prove under the common assumptions used in direct search for single-objective optimization that at least one limit point of the sequence of iterates generated by DMS lies in (a stationary form of) the Pareto front. However, extensive computational experience has shown that our methodology has an impressive capability of generating the whole Pareto front, even without using a search step. Two by-products of this paper are (i) the development of a collection of test problems for MOO and (ii) the extension of performance and data profiles to MOO, allowing a comparison of several solvers on a large set of test problems, in terms of their efficiency and robustness in determining Pareto fronts.
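
The two ingredients DMS builds on, Pareto-dominance filtering and a directional poll around a selected nondominated point, are easy to state in code. The sketch below is a bare-bones illustration on an assumed two-objective toy problem, not the full DMS algorithm (no per-point step sizes, no search step, no convergence safeguards).

    import numpy as np

    def f(x):
        """Assumed bi-objective toy problem on [0, 1]^2."""
        return np.array([x[0], 1.0 - np.sqrt(x[0]) + x[1] ** 2])

    def dominates(fa, fb):
        return np.all(fa <= fb) and np.any(fa < fb)

    def add_nondominated(archive, x, fx):
        """Maintain the list of nondominated (x, f(x)) pairs."""
        if any(dominates(fy, fx) for _, fy in archive):
            return archive
        archive = [(y, fy) for y, fy in archive if not dominates(fx, fy)]
        archive.append((x, fx))
        return archive

    rng = np.random.default_rng(5)
    archive = []
    for x in rng.random((10, 2)):                     # initial nondominated list
        archive = add_nondominated(archive, x, f(x))

    alpha = 0.25                                      # poll step size (fixed scheme)
    for _ in range(200):
        x, _ = archive[rng.integers(len(archive))]    # choose a poll center
        for d in np.vstack([np.eye(2), -np.eye(2)]):  # coordinate poll directions
            y = np.clip(x + alpha * d, 0.0, 1.0)
            archive = add_nondominated(archive, y, f(y))
        alpha *= 0.995                                # slowly contract the step

    print(len(archive), "nondominated points approximating the Pareto front")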