950 results for simulation methods


Relevance:

20.00%

Publisher:

Abstract:

A methodology for the computational modeling of fatigue crack growth in pressurized shell structures, based on the finite element method and concepts of Linear Elastic Fracture Mechanics, is presented. This methodology builds on that developed by Potyondy [Potyondy D, Wawrzynek PA, Ingraffea AR. Discrete crack growth analysis methodology for through cracks in pressurized fuselage structures. Int J Numer Methods Eng 1995;38:1633-1644], which consists of using four stress intensity factors, computed from the modified crack closure integral method, to predict the fatigue propagation life as well as the crack trajectory, which is computed as part of the numerical simulation. Some issues not addressed in the study of Potyondy are investigated herein, such as the influence of the crack increment size and of the number of nodes per element (4 or 9 nodes) on the simulation results, by means of a fatigue crack propagation simulation of a Boeing 737 airplane fuselage. The results of this simulation are compared with experimental results and with those obtained by Potyondy [1]. (C) 2008 Elsevier Ltd. All rights reserved.
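The propagation-life side of such an analysis ultimately reduces to integrating a crack-growth law over crack length. A minimal sketch under simplifying assumptions (constant-amplitude loading, a single mode-I stress intensity factor with constant geometry factor Y, illustrative Paris-law constants; this is not Potyondy's four-SIF scheme):

```python
import numpy as np

def fatigue_life_cycles(a0, af, C, m, delta_sigma, Y=1.0, n_steps=10000):
    """Cycles to grow a crack from a0 to af under the Paris law
    da/dN = C * (dK)^m, with dK = Y * delta_sigma * sqrt(pi * a)."""
    a = np.linspace(a0, af, n_steps)            # crack-length grid [m]
    dK = Y * delta_sigma * np.sqrt(np.pi * a)   # SIF range [MPa*sqrt(m)]
    dN_da = 1.0 / (C * dK**m)                   # cycles per unit crack growth
    # trapezoidal integration of dN/da over the crack length
    return float(np.sum(0.5 * (dN_da[1:] + dN_da[:-1]) * np.diff(a)))

# Illustrative aluminium-like numbers (hypothetical, not from the paper)
N = fatigue_life_cycles(a0=1e-3, af=25e-3, C=1e-11, m=3.0, delta_sigma=100.0)
```

Longer initial cracks give shorter remaining lives, which is what makes the inspection-interval logic of a damage-tolerance analysis quantitative.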

Abstract:

A mathematical model, numerical simulations, and stability and flow regime maps for severe slugging in pipeline-riser systems are presented. In the simulations, air and water were used as the flowing fluids. The mathematical model considers continuity equations for the liquid and gas phases, with a simplified momentum equation for the mixture that neglects inertia. A drift-flux model, evaluated for the local conditions in the riser, is used as a closure law. The model predicts the location of the liquid accumulation front in the pipeline and the liquid level in the riser, so it is possible to determine which type of severe slugging occurs in the system. The numerical procedure is convergent for different nodalizations. A comparison is made with experimental results corresponding to a catenary riser, showing very good agreement for the slugging cycle and for the stability and flow regime maps. (c) 2010 Elsevier Ltd. All rights reserved.
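The drift-flux closure relates the gas velocity to the mixture velocity through a distribution parameter and a drift velocity. A minimal sketch with assumed constant coefficients (the paper evaluates them from the local conditions in the riser):

```python
def void_fraction_drift_flux(j_g, j_l, C0=1.2, v_d=0.35):
    """Drift-flux closure: alpha = j_g / (C0*j + v_d), with j = j_g + j_l.

    j_g, j_l : gas/liquid superficial velocities [m/s]
    C0, v_d  : distribution parameter and drift velocity -- treated as
               assumed constants in this sketch."""
    j = j_g + j_l
    return j_g / (C0 * j + v_d)

alpha = void_fraction_drift_flux(j_g=0.5, j_l=1.0)  # local void fraction
```

Increasing the gas superficial velocity at fixed liquid flow raises the predicted void fraction, which is the qualitative behavior the riser model needs from its closure law.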

Abstract:

The 3D flow around a circular cylinder free to oscillate transversely to the free stream was simulated using Computational Fluid Dynamics (CFD) and the Spalart-Allmaras Detached Eddy Simulation (DES) turbulence model for a Reynolds number Re = 10^4. Simulations were carried out for a small mass-damping parameter m*zeta = 0.00858, where m* = 3.3 and zeta = 0.0026. We found good agreement between the numerical results and experimental data. The simulations predicted the high amplitudes observed in the upper branch of vortex-induced vibrations for low mass-damping parameters.

Abstract:

We present a method to simulate Magnetic Barkhausen Noise using the Random Field Ising Model with long-range magnetic interaction. The method allows calculating the magnetic flux density behavior in particular sections of the lattice. The results show an internal demagnetizing effect that arises from the long-range magnetic interactions. This demagnetizing effect induces the appearance of a magnetic pattern in the region of magnetic avalanches. Compared with the traditional method, the proposed numerical procedure markedly reduces the computational cost of the simulation. (c) 2008 Published by Elsevier B.V.
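A toy version of this kind of simulation: a zero-temperature random-field Ising chain swept by an external field, with the long-range interaction crudely replaced by a mean-field demagnetizing term -k*M (an assumption of this sketch, not the paper's scheme). Avalanche sizes stand in for Barkhausen jumps:

```python
import numpy as np

def barkhausen_rfim(n=500, J=1.0, disorder=1.5, k=0.1, seed=0):
    """Return avalanche sizes of a driven random-field Ising chain."""
    rng = np.random.default_rng(seed)
    h = rng.normal(0.0, disorder, n)       # quenched random fields
    s = -np.ones(n)                        # start fully magnetized down
    sizes = []
    while (s < 0).any():
        # local fields without the external contribution
        f0 = J * (np.roll(s, 1) + np.roll(s, -1)) + h - k * s.mean()
        H = -f0[s < 0].max() + 1e-12       # raise H just enough to flip a spin
        size = 0
        while True:                        # relax at fixed H (one avalanche)
            f = J * (np.roll(s, 1) + np.roll(s, -1)) + h + H - k * s.mean()
            flips = (s < 0) & (f > 0)
            if not flips.any():
                break
            s[flips] = 1.0
            size += int(flips.sum())
        sizes.append(size)
    return np.array(sizes)

sizes = barkhausen_rfim()
```

The demagnetizing term -k*M grows as spins flip up, opposing further flips; that is the mean-field analogue of the internal demagnetizing effect the abstract describes.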

Abstract:

There are several ways to model a building and its heat gains from external as well as internal sources in order to evaluate proper operation, audit retrofit actions, and forecast energy consumption. Different techniques, ranging from simple regression to models based on physical principles, can be used for simulation. A common requirement of all these models is that the input variables should be based on realistic data when available; otherwise the estimated energy consumption might be highly under- or overestimated. In this paper, a comparison is made between a simple model based on an artificial neural network (ANN) and a model based on physical principles (EnergyPlus), used as auditing and prediction tools to forecast building energy consumption. The Administration Building of the University of Sao Paulo is used as a case study. The building energy consumption profiles were collected, as well as the campus meteorological data. Results show that both models are suitable for energy consumption forecasting. Additionally, a parametric analysis is carried out for the considered building in EnergyPlus in order to evaluate the influence of several parameters, such as the building occupation profile and weather data, on the forecast. (C) 2008 Elsevier B.V. All rights reserved.
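As a toy illustration of the regression end of that spectrum, a least-squares baseline fitted to synthetic consumption data (all numbers hypothetical; the paper itself uses an ANN and EnergyPlus):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic daily records: [outdoor temperature degC, occupancy fraction]
X = np.column_stack([rng.uniform(15, 35, 200), rng.uniform(0, 1, 200)])
# Hypothetical "true" consumption: base load + cooling above 22 degC
# + occupancy-driven load + measurement noise
y = (50 + 3.0 * np.clip(X[:, 0] - 22, 0, None)
     + 40 * X[:, 1] + rng.normal(0, 2, 200))

# Regression baseline: consumption on [1, temperature, occupancy]
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

Even this crude linear fit illustrates the point about inputs: if the occupancy or weather columns are unrealistic, the fitted coefficients and hence the forecast are systematically biased.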

Abstract:

The perfect mixing model (PMM) is based on parameters derived from the equipment characteristics as well as from ore breakage characteristics. Ore characteristics are represented through the appearance function. This function may be determined using JKMRC laboratory methods or by standard functions. This work describes the model fitting process for the Carajas grinding circuit, using the JKSimMet simulator. Two scenarios were used in the model fitting exercises: 1) a standard appearance function; and 2) an appearance function based on tests carried out on samples taken at the circuit feed. From this assessment, the appearance function's influence on the PMM fit and its relation with the breakage rate were determined. The influence of the appearance function on the respective breakage rate distribution was assessed.
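In symbols, the PMM mass balance for size class i can be written (standard JKMRC-style notation, assumed here) as:

```latex
f_i - p_i + \sum_{j=1}^{i} a_{ij}\, r_j s_j - r_i s_i = 0,
\qquad p_i = d_i s_i ,
```

where f_i and p_i are the feed and product flow rates in class i, s_i the mill contents, r_i the breakage rate, d_i the discharge rate, and a_{ij} the appearance function distributing broken material from class j into class i. Fitting adjusts the breakage rates (often as the combined r_i/d_i) for a given a_{ij}, which is why the choice of appearance function shifts the fitted breakage rate distribution.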

Abstract:

A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort.
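The core numerical ingredient, the Rayleigh integral, can be checked on a case with a closed-form answer: the on-axis pressure of a baffled circular piston. A sketch with a single source and no reflector (piston size and drive level are illustrative, not the paper's transducer):

```python
import numpy as np

rho, c, f = 1.2, 340.0, 37900.0   # air density [kg/m^3], sound speed [m/s], frequency [Hz]
omega = 2 * np.pi * f
k = omega / c                      # wavenumber [1/m]
a, v0 = 0.01, 1.0                  # piston radius [m], surface velocity [m/s]

# Discretize the piston face into small area elements
npts = 400
xs = np.linspace(-a, a, npts)
X, Y = np.meshgrid(xs, xs)
inside = X**2 + Y**2 <= a**2
dS = (xs[1] - xs[0]) ** 2

def p_on_axis(z):
    """Rayleigh integral: p = (i*omega*rho/(2*pi)) * sum(v0*exp(-i*k*R)/R) * dS."""
    R = np.sqrt(X[inside]**2 + Y[inside]**2 + z**2)
    return 1j * omega * rho / (2 * np.pi) * np.sum(v0 * np.exp(-1j * k * R) / R) * dS

z = 0.05
p_num = abs(p_on_axis(z))
# Closed-form on-axis magnitude for the baffled piston
p_ref = 2 * rho * c * v0 * abs(np.sin(0.5 * k * (np.sqrt(z**2 + a**2) - z)))
```

The matrix method of the paper extends this kind of discretized source integral with reflection matrices that bounce the field between transducer and reflector.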

Abstract:

In this paper, processing methods of Fourier optics implemented in a digital holographic microscopy system are presented. The proposed methodology relies on the ability of digital holography to carry out the full reconstruction of the recorded wave front and, consequently, to determine the phase and intensity distribution in any arbitrary plane located between the object and the recording plane. In this way, in digital holographic microscopy the field produced by the objective lens can be reconstructed along its propagation, allowing the reconstruction of the back focal plane of the lens, so that the complex amplitudes of the Fraunhofer diffraction, or equivalently the Fourier transform, of the light distribution across the object can be obtained. The manipulation of the Fourier transform plane makes possible the design of digital methods of optical processing and image analysis. The proposed method has great practical utility and represents a powerful tool in image analysis and data processing. The theoretical aspects of the method are presented, and its validity is demonstrated using computer-generated holograms and simulated images of microscopic objects. (c) 2007 Elsevier B.V. All rights reserved.
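Numerical reconstruction at an arbitrary plane is commonly done with the angular-spectrum method; a small sketch of that standard tool (not necessarily the authors' exact implementation):

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate a sampled complex field u0 a distance z (may be negative)
    using the angular-spectrum method; evanescent components are dropped."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)             # spatial frequencies [1/m]
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)      # transfer function
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Propagating forward and then backward should recover the field
rng = np.random.default_rng(0)
u0 = np.exp(1j * rng.uniform(0, 2 * np.pi, (64, 64)))
u1 = angular_spectrum_propagate(u0, 633e-9, 10e-6, 1e-3)
u_back = angular_spectrum_propagate(u1, 633e-9, 10e-6, -1e-3)
```

Because the transfer function is unitary for propagating components, the same routine reconstructs the back focal plane of the objective simply by choosing the appropriate z.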

Abstract:

Here, we study the stable integration of real time optimization (RTO) with model predictive control (MPC) in a three-layer structure. The intermediate layer is a quadratic programming stage whose objective is to compute reachable targets for the MPC layer that lie at the minimum distance from the optimum set points produced by the RTO layer. The lower layer is an infinite horizon MPC with guaranteed stability, with additional constraints that enforce the feasibility and convergence of the target calculation layer. The case in which there is polytopic uncertainty in the steady state model used in the target calculation is also considered. The dynamic part of the MPC model is likewise considered unknown, but it is assumed to be represented by one member of a discrete set of models. The efficiency of the methods presented here is illustrated with the simulation of a low order system. (C) 2010 Elsevier Ltd. All rights reserved.
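The intermediate layer's job, finding the reachable steady state closest to the RTO set point, can be caricatured as a box-constrained least-squares problem. A toy sketch with a hypothetical 2x2 gain matrix and a plain projected-gradient solve (the paper's layer is a full QP with additional MPC-consistency constraints):

```python
import numpy as np

G = np.array([[1.0, 0.5],
              [0.3, 2.0]])             # hypothetical steady-state gains, y = G u
y_opt = np.array([3.0, 1.0])           # optimum set points from the RTO layer
lb, ub = np.zeros(2), np.full(2, 2.0)  # input constraints seen by the MPC layer

# Target calculation: min ||G u - y_opt||^2  s.t.  lb <= u <= ub,
# solved here by projected gradient descent
u = np.zeros(2)
for _ in range(5000):
    grad = G.T @ (G @ u - y_opt)
    u = np.clip(u - 0.05 * grad, lb, ub)

u_target, y_target = u, G @ u          # reachable target passed to the MPC
```

When the RTO optimum is unreachable (here it demands u1 > 2), the computed target sits on the constraint boundary at minimum distance from the optimum, which is exactly what the MPC layer needs to track.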

Abstract:

Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SOC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug; however, due to state explosion, their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as the functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although the two act on different facets of the monitored system and are not mutually exclusive. This work presents a constrained-random simulation-based functional verification methodology in which, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. For this purpose, a tool to automatically generate PD-based stimuli sources was developed. Additionally, we developed a second tool to generate functional coverage models that fit exactly the PD-based input space. Both the input stimuli and the coverage model enhancements resulted in a notable testbench efficiency increase compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and a 56% simulation time reduction when combining our stimuli sources with their corresponding, automatically generated, coverage models.
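The PD idea can be sketched in a few lines: sample stimuli only from explicitly declared valid domains, and derive the coverage bins from those same domains so the coverage model fits the constrained input space exactly (the parameter names and values below are hypothetical):

```python
import itertools
import random

# Hypothetical parameter domains for a bus-transaction stimulus: each
# parameter is restricted up front to its valid/relevant values, so
# invalid scenarios are never generated.
domains = {
    "burst_len": [1, 4, 8, 16],
    "mode":      ["read", "write"],
    "align":     [0, 4],
}

def pd_stimulus(rng):
    """Draw one constrained-random test case from the parameter domains."""
    return {name: rng.choice(vals) for name, vals in domains.items()}

# Coverage model generated directly from the same domains: one bin per
# cross-product point, so no bin is unreachable by construction.
coverage = {combo: 0 for combo in itertools.product(*domains.values())}

rng = random.Random(7)
for _ in range(500):
    tc = pd_stimulus(rng)
    coverage[tuple(tc.values())] += 1

hit_ratio = sum(1 for c in coverage.values() if c > 0) / len(coverage)
```

Because stimuli and bins come from the same domain description, simulation cycles are never spent on scenarios the coverage model cannot credit, which is the mechanism behind the reported simulation-time reductions.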

Abstract:

In this paper, we propose an approach to the transient and steady-state analysis of the affine combination of one fast and one slow adaptive filter. The theoretical models are based on expressions for the excess mean-square error (EMSE) and cross-EMSE of the component filters, which allows their application to different combinations of algorithms, such as least mean-squares (LMS), normalized LMS (NLMS), and the constant modulus algorithm (CMA), considering white or colored inputs and stationary or nonstationary environments. Since the desired universal behavior of the combination depends on the correct estimation of the mixing parameter at every instant, its adaptation is also taken into account in the transient analysis. Furthermore, we propose normalized algorithms for the adaptation of the mixing parameter that exhibit good performance. Good agreement between analysis and simulation results is observed throughout.
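A minimal simulation of the scheme for system identification, using a plain (unnormalized) stochastic-gradient update for the mixing parameter (the paper proposes normalized variants); all sizes and step sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 20000
w_true = rng.normal(size=M)                 # unknown plant to identify
X = rng.normal(size=(N, M))                 # white regressor vectors
d = X @ w_true + 0.01 * rng.normal(size=N)  # noisy plant output

w1, w2 = np.zeros(M), np.zeros(M)           # fast and slow component filters
mu1, mu2 = 0.05, 0.005                      # fast vs. slow step sizes
eta, mu_eta = 0.5, 0.05                     # affine mixing parameter and its step
err = np.empty(N)
for n in range(N):
    u = X[n]
    y1, y2 = w1 @ u, w2 @ u
    y = eta * y1 + (1 - eta) * y2           # affine combination output
    e1, e2, e = d[n] - y1, d[n] - y2, d[n] - y
    w1 += mu1 * e1 * u                      # LMS update, fast filter
    w2 += mu2 * e2 * u                      # LMS update, slow filter
    eta += mu_eta * e * (y1 - y2)           # gradient update of eta
    err[n] = e

mse_tail = np.mean(err[-2000:] ** 2)        # steady-state combined MSE
```

In stationary conditions the combination should inherit the fast filter's convergence early on and approach the slow filter's lower steady-state EMSE later, since eta is free to take any real value (the affine, rather than convex, combination).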

Abstract:

Over the last decades, anti-resonant reflecting optical waveguides (ARROW) have been used in different integrated optics applications. In this type of waveguide, light confinement is partially achieved through an anti-resonant reflection. In this work, the simulation, fabrication and characterization of ARROW waveguides using dielectric films deposited by a plasma-enhanced chemical vapor deposition (PECVD) technique at low temperatures (~300 degrees C) are presented. Silicon oxynitride (SiOxNy) films were used as the core and second cladding layers, and amorphous hydrogenated silicon carbide (a-SiC:H) films as the first cladding layer. Furthermore, numerical simulations were performed using homemade routines based on two computational methods: the transfer matrix method (TMM), for the determination of the optimum thickness of the Fabry-Perot layers; and the non-uniform finite difference method (NU-FDM), for 2D design and determination of the maximum width that yields single-mode operation. The use of a silicon carbide anti-resonant layer resulted in low optical attenuation, owing to the high refractive index difference between the core and this layer. Finally, for comparison purposes, optical waveguides using titanium oxide (TiO2) as the first ARROW layer were also fabricated and characterized.
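The TMM step can be sketched for normal incidence: multiply the characteristic matrices of the layers and read the reflectance off the boundary conditions. The sketch is verified on the textbook quarter-wave antireflection case (indices and wavelength below are illustrative, not the ARROW stack):

```python
import numpy as np

def stack_reflectance(layers, wavelength, n_in=1.0, n_sub=1.0):
    """Normal-incidence reflectance of a lossless dielectric stack via the
    transfer matrix method; layers is a list of (index, thickness) pairs."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2 * np.pi * n * d / wavelength          # phase thickness
        L = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                      [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ L                                       # accumulate the stack
    m11, m12 = M[0]
    m21, m22 = M[1]
    r = ((n_in * m11 + n_in * n_sub * m12 - m21 - n_sub * m22) /
         (n_in * m11 + n_in * n_sub * m12 + m21 + n_sub * m22))
    return abs(r) ** 2

# A quarter-wave layer with n1 = sqrt(n_sub) acts as an antireflection coating
lam = 500e-9
R = stack_reflectance([(1.5, lam / (4 * 1.5))], lam, n_in=1.0, n_sub=2.25)
```

In an ARROW design the same machinery is run the other way: the first and second cladding thicknesses are tuned to the anti-resonance condition so that reflectance back into the core is maximized rather than minimized.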

Abstract:

This work proposes the use of evolutionary computation to jointly solve the maximum-likelihood multiuser channel estimation (MuChE) and detection problems in direct sequence code division multiple access (DS/CDMA) systems. The effectiveness of the proposed heuristic approach is proven by comparing performance and complexity figures of merit with those obtained by traditional methods found in the literature. Simulation results for a genetic algorithm (GA) applied to multipath DS/CDMA channel estimation and multi-user detection (MuD) show that the proposed genetic algorithm multi-user channel estimation (GAMuChE) yields a normalized mean square error (nMSE) below 11% under slowly varying multipath fading channels, a large range of Doppler frequencies and medium system load, while exhibiting lower complexity than both maximum likelihood multi-user channel estimation (MLMuChE) and the gradient descent method (GrdDsc). A near-optimum multi-user detector based on the genetic algorithm (GAMuD), also proposed in this work, provides a significant reduction in computational complexity compared to the optimum multi-user detector (OMuD). In addition, the complexities of the GAMuChE and GAMuD algorithms are jointly analyzed in terms of the number of operations necessary to reach convergence, and compared to other joint MuChE and MuD strategies. The joint GAMuChE-GAMuD scheme can be regarded as a promising alternative for implementing third-generation (3G) and fourth-generation (4G) wireless systems in the near future. Copyright (C) 2010 John Wiley & Sons, Ltd.
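A generic GA for near-ML detection can be sketched on a toy synchronous model y = S b + n; the spreading matrix, population sizes and noise level below are all hypothetical, and the channel-estimation half of the paper is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
K, Ns = 8, 32                                  # users, spreading length
S = rng.choice([-1.0, 1.0], size=(Ns, K)) / np.sqrt(Ns)   # spreading codes
b_true = rng.choice([-1.0, 1.0], size=K)       # transmitted bits
y = S @ b_true + 0.05 * rng.normal(size=Ns)    # received chip vector

def fitness(b):
    """Log-likelihood (up to constants) of a candidate bit vector b."""
    r = y - S @ b
    return -r @ r

pop = rng.choice([-1.0, 1.0], size=(40, K))    # random initial population
for gen in range(60):
    f = np.array([fitness(b) for b in pop])
    parents = pop[np.argsort(f)[::-1][:20]]    # truncation selection (elitist)
    children = []
    for _ in range(20):
        p1, p2 = parents[rng.integers(20)], parents[rng.integers(20)]
        cut = rng.integers(1, K)
        child = np.concatenate([p1[:cut], p2[cut:]])   # one-point crossover
        flip = rng.random(K) < 0.05                     # bit-flip mutation
        child[flip] *= -1
        children.append(child)
    pop = np.vstack([parents, children])

b_hat = pop[np.argmax([fitness(b) for b in pop])]      # near-ML decision
```

The attraction is the cost profile: the exhaustive OMuD search scales as 2^K fitness evaluations, while the GA evaluates only population-size x generations candidates, at the price of a small probability of a suboptimal decision.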

Abstract:

In this study, regression models are evaluated for grouped survival data when the effect of censoring time is considered in the model and the regression structure is modeled through four link functions. The methodology for grouped survival data is based on life tables: the times are grouped into k intervals so that ties are eliminated, and the data are modeled with discrete lifetime regression models. The model parameters are estimated using the maximum likelihood and jackknife methods. To detect influential observations in the proposed models, diagnostic measures based on case deletion, termed global influence, and measures based on small perturbations in the data or in the model, referred to as local influence, are used. In addition to these measures, the total local influence estimate is also employed. Various simulation studies are performed to compare the performance of the four link functions of the regression models for grouped survival data under different parameter settings, sample sizes and numbers of intervals. Finally, a data set is analyzed using the proposed regression models. (C) 2010 Elsevier B.V. All rights reserved.
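The life-table basis of the method can be illustrated with the standard actuarial estimator on hypothetical grouped counts (the regression and link-function machinery of the paper sits on top of interval quantities like these):

```python
import numpy as np

# Hypothetical grouped survival data over k = 4 intervals
deaths   = np.array([5, 8, 4, 3])     # events per interval
censored = np.array([2, 1, 3, 2])     # withdrawals per interval
n_enter = np.zeros(4)
n_enter[0] = 40                       # subjects at risk at the start
for i in range(1, 4):
    n_enter[i] = n_enter[i - 1] - deaths[i - 1] - censored[i - 1]

n_eff = n_enter - censored / 2.0      # effective number at risk (actuarial)
q = deaths / n_eff                    # conditional death probability per interval
S = np.cumprod(1 - q)                 # survival at the interval endpoints
```

Grouping into intervals removes ties by construction, which is what lets the discrete regression models of the paper link the conditional probabilities q to covariates through the four link functions.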

Abstract:

We study in detail the so-called beta-modified Weibull distribution, motivated by the wide use of the Weibull distribution in practice and by the fact that the generalization provides a continuous crossover towards cases with different shapes. The new distribution is important since it contains as special sub-models some widely known distributions, such as the generalized modified Weibull, beta Weibull, exponentiated Weibull, beta exponential, modified Weibull and Weibull distributions, among several others. It also provides more flexibility to analyse complex real data. Various mathematical properties of this distribution are derived, including its moments and moment generating function. We examine the asymptotic distributions of the extreme values. Explicit expressions are also derived for the chf, mean deviations, Bonferroni and Lorenz curves, reliability and entropies. The estimation of parameters is approached by two methods: moments and maximum likelihood. We compare by simulation the performance of the estimates obtained from these methods. We obtain the expected information matrix. Two applications are presented to illustrate the proposed distribution.
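Schematically, the construction follows the beta-G family applied to the modified Weibull baseline (notation assumed):

```latex
f(x) = \frac{g(x)}{B(a,b)}\, G(x)^{a-1} \bigl[1 - G(x)\bigr]^{b-1},
\qquad
G(x) = 1 - \exp\!\bigl(-\alpha x^{\gamma} e^{\lambda x}\bigr),
\quad g(x) = G'(x),
```

where B(a, b) is the beta function. Setting a = b = 1 recovers the modified Weibull, lambda = 0 gives the beta Weibull, and further parameter choices yield the other sub-models listed above, which is what produces the continuous crossover between shapes.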