950 results for Simulation Optimization
Abstract:
Reconstructions in optical tomography involve obtaining images of the absorption and reduced scattering coefficients. The integrated intensity data are more sensitive to variations in the absorption coefficient than in the scattering coefficient; however, the sensitivity of the intensity data to the scattering coefficient is not zero. We considered an object with two inhomogeneities (one in the absorption and the other in the scattering coefficient). Standard iterative reconstruction techniques produced results plagued by cross-talk, i.e., the absorption reconstruction shows a false positive at the location of the scattering inhomogeneity, and vice versa. We present a method to remove this cross-talk by generating a weight matrix and weighting the update vector during each iteration. The weight matrix is created as follows: we first perform a simple backprojection of the difference between the experimental intensity data and the corresponding homogeneous data. The resulting image is weighted more heavily towards the absorption inhomogeneity than the scattering inhomogeneity, and its appropriate inverse is weighted towards the scattering inhomogeneity. These two weight matrices are used as multiplicative factors on the update vectors during the reconstruction: the normalized backprojected image of the difference intensity for the absorption update, and its inverse for the scattering update. We demonstrate through numerical simulations that cross-talk is fully eliminated by this modified reconstruction procedure.
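A minimal sketch of the weighting idea described above, assuming a simple linearized update step; the array names, the backprojection input, and the normalization are illustrative assumptions rather than the authors' exact recipe.

```python
import numpy as np

def crosstalk_weights(backprojected_diff, eps=1e-6):
    """Build absorption/scattering weight maps from a backprojected
    difference-intensity image (illustrative, not the authors' exact recipe)."""
    # Normalize the backprojected image to [0, 1]; high values are assumed
    # to mark the absorption inhomogeneity.
    w_abs = backprojected_diff - backprojected_diff.min()
    w_abs /= (w_abs.max() + eps)
    # An "appropriate inverse" of the map emphasizes the scattering inhomogeneity.
    w_sca = 1.0 - w_abs
    return w_abs, w_sca

def weighted_update(mu_a, mu_s, d_mu_a, d_mu_s, w_abs, w_sca, step=1.0):
    """Apply the weight maps to the iterative update vectors."""
    mu_a = mu_a + step * w_abs * d_mu_a   # suppress false positives in absorption
    mu_s = mu_s + step * w_sca * d_mu_s   # suppress false positives in scattering
    return mu_a, mu_s
```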
Abstract:
A diffusion/replacement model for new consumer durables, designed to be used as a long-term forecasting tool, is developed. The model simulates new demand as well as replacement demand over time. The model is called DEMSIM and is built upon a counteractive adoption model specifying the basic forces affecting the adoption behaviour of individual consumers. These forces are the promoting forces and the resisting forces, with the promoting forces further divided into internal and external influences. These influences are operationalized within a multi-segmental diffusion model generating the adoption behaviour of the consumers in each segment as an expected value. This diffusion model is combined with a replacement model built upon the same segmental structure, which generates, in turn, the expected replacement behaviour in each segment. To be able to use DEMSIM as a forecasting tool in the early stages of a diffusion process, estimates of the model parameters are needed as soon as possible after product launch. However, traditional statistical techniques are not very helpful in estimating such parameters at this early stage. To enable early parameter calibration, an optimization algorithm is developed by which the main parameters of the diffusion model can be estimated on the basis of very few sales observations. The optimization is carried out in iterative simulation runs. Empirical validations using the optimization algorithm reveal that the diffusion model performs well in early long-term sales forecasts, especially when it comes to the timing of future sales peaks.
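DEMSIM's counteractive adoption model is not specified in the abstract, so the sketch below substitutes a standard Bass-type diffusion model and calibrates its parameters against a handful of early sales observations by a coarse grid search over simulation runs; the parameter grids, market potential m, and observed values are placeholder assumptions.

```python
import numpy as np

def simulate_adoptions(p, q, m, periods):
    """Expected first-purchase sales of a Bass-type diffusion model
    (a stand-in for DEMSIM's promoting/resisting forces)."""
    cumulative, sales = 0.0, []
    for _ in range(periods):
        s = (p + q * cumulative / m) * (m - cumulative)
        sales.append(s)
        cumulative += s
    return np.array(sales)

def calibrate(observed, m, p_grid, q_grid):
    """Grid-search the diffusion parameters against a few early observations."""
    best, best_err = None, np.inf
    for p in p_grid:
        for q in q_grid:
            sim = simulate_adoptions(p, q, m, len(observed))
            err = np.sum((sim - observed) ** 2)
            if err < best_err:
                best, best_err = (p, q), err
    return best

# Example: three early sales observations and an assumed market potential of 10,000.
observed = np.array([120.0, 210.0, 340.0])
p, q = calibrate(observed, m=10_000,
                 p_grid=np.linspace(0.001, 0.05, 50),
                 q_grid=np.linspace(0.05, 0.8, 50))
```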
Abstract:
Precipitation involving the mixing of two sets of reverse micellar solutions, containing a reactant and a precipitant respectively, has been analyzed. Particle formation in such systems has been simulated by a Monte Carlo (MC) scheme (Li, Y.; Park, C. W. Langmuir 1999, 15, 952), which, however, is very restrictive in its approach. We have simulated particle formation by developing a general Monte Carlo scheme using the interval of quiescence (IQ) technique. It uses a Poisson distribution with realistic, low micellar occupancies of reactants, Brownian collisions of micelles with a coalescence efficiency, fission of dimers with binomial redispersion of solutes, a finite nucleation rate of particles with a critical number of molecules, and instantaneous particle growth. With the incorporation of these features, the previous work becomes a special case of our simulation. The present scheme was then used to predict experimental data on two systems. The first is the experimental results of Lianos and Thomas (Chem. Phys. Lett. 1986, 125, 299; J. Colloid Interface Sci. 1987, 117, 505) on the formation of CdS nanoparticles. They reported the number of molecules in a particle as a function of micellar size and reactant concentrations, which is predicted very well. The second is the formation of Fe(OH)3 nanoparticles, reported by Li and Park. Our simulation in this case provides a better prediction of the experimental particle size range than that of the authors. The present simulation scheme is general and can be applied to explain nanoparticle formation in other systems.
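As a rough illustration of the event-driven structure behind the interval-of-quiescence idea, the single-population sketch below samples the waiting time to the next effective micellar collision and then either nucleates a particle or redisperses the solute binomially. The rate constants, occupancy, coalescence efficiency, and critical nucleus size are placeholder assumptions, and the two-solution reactant/precipitant chemistry of the paper is not modelled.

```python
import numpy as np
rng = np.random.default_rng(0)

n_mic = 200                       # micelles carrying reactant (assumed)
occ = rng.poisson(0.5, n_mic)     # low Poisson occupancy of reactant per micelle
k_coll, beta = 1.0, 0.1           # collision rate per pair, coalescence efficiency (assumed)
n_crit = 4                        # critical number of molecules for nucleation (assumed)
t, t_end, particles = 0.0, 1.0, []

while t < t_end:
    # Interval of quiescence: exponential waiting time to the next effective collision.
    rate = k_coll * beta * n_mic * (n_mic - 1) / 2
    t += rng.exponential(1.0 / rate)
    i, j = rng.choice(n_mic, size=2, replace=False)
    merged = occ[i] + occ[j]
    if merged >= n_crit:
        particles.append(merged)  # finite-size nucleus with instantaneous growth
        occ[i], occ[j] = 0, 0
    else:
        # Fission of the transient dimer with binomial redispersion of solute.
        occ[i] = rng.binomial(merged, 0.5)
        occ[j] = merged - occ[i]
```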
Abstract:
Experimental data on average velocity and turbulence intensity generated by pitched blade downflow turbines (PTD) were presented in Part I of this paper. Part II presents the results of the simulation of the flow generated by PTD. The standard κ-ε model, along with the boundary conditions developed in Part I, has been employed to predict the flow generated by PTD in a cylindrical baffled vessel. This part describes the new software FIAT (Flow In Agitated Tanks) for the prediction of three-dimensional flow in stirred tanks. The basis of this software is described in detail. The influence of grid size, impeller boundary conditions and values of model parameters on the predicted flow has been analysed. The model predictions successfully reproduce the three-dimensionality and the other essential characteristics of the flow. The model can be used to improve the overall understanding of the relative distribution of turbulence generated by PTD in the agitated tank.
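For reference, the standard κ-ε model mentioned above solves transport equations for the turbulent kinetic energy k and its dissipation rate ε; the usual incompressible form and standard constants are shown below (the impeller boundary conditions of Part I are not reproduced here).

```latex
\frac{\partial k}{\partial t} + U_j \frac{\partial k}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)
    \frac{\partial k}{\partial x_j}\right] + P_k - \varepsilon,
\qquad
\frac{\partial \varepsilon}{\partial t} + U_j \frac{\partial \varepsilon}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)
    \frac{\partial \varepsilon}{\partial x_j}\right]
  + C_{1\varepsilon}\,\frac{\varepsilon}{k}\,P_k
  - C_{2\varepsilon}\,\frac{\varepsilon^2}{k},
\qquad
\nu_t = C_\mu \frac{k^2}{\varepsilon},
```

with the standard constants C_μ = 0.09, C_1ε = 1.44, C_2ε = 1.92, σ_k = 1.0, σ_ε = 1.3, and P_k the production of turbulent kinetic energy.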
Abstract:
The possibility of advance indication of moisture stress in a crop by means of small prepared plots with compacted or partially sand-substituted soils is examined by an analytical simulation. A series of soils and three crops are considered for the simulation. The moisture characteristics of the soils are calculated with an available model. Using average potential evapotranspiration values and a simple actual evapotranspiration model, the onset of moisture stress in the natural and indicator plots is calculated for different degrees of sand substitution and compaction. Cases where sand substitution fails are determined. The effect of intervening rainfall and limited root depth on the onset of moisture stress is investigated.
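A minimal sketch of the kind of water-balance calculation implied above; the bucket capacity, the stress threshold, and the linear reduction of actual evapotranspiration below that threshold are placeholder assumptions, not the soil-moisture model used in the paper.

```python
def stress_onset_day(pet_per_day, available_water, water_max, stress_fraction=0.5):
    """Return the first day on which actual ET falls below potential ET,
    using a simple bucket model with a linear reduction of AET once soil
    water drops below a fraction of its maximum (assumed model)."""
    w = available_water
    for day, pet in enumerate(pet_per_day):
        threshold = stress_fraction * water_max
        aet = pet if w > threshold else pet * w / threshold
        if aet < pet:
            return day          # onset of moisture stress
        w = max(w - aet, 0.0)
    return None                 # no stress within the simulated period

# Example: constant PET of 5 mm/day, 90 mm of plant-available water in a 120 mm bucket.
onset = stress_onset_day([5.0] * 30, available_water=90.0, water_max=120.0)
```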
Abstract:
We compare magnetovolume effects in bulk and nanoparticles by performing Monte Carlo simulations of a spin-analogous model with coupled spatial and magnetic degrees of freedom and chemical disorder. We find that correlations between surface and bulk atoms lead, with decreasing particle size, to a substantial modification of the magnetic and elastic behavior at low temperatures.
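The specific spin-analogous model is not given in the abstract; as a generic illustration of coupling spatial and magnetic degrees of freedom in a Metropolis Monte Carlo loop, the toy one-dimensional sketch below proposes either a spin flip or a small atomic displacement, with a distance-dependent exchange coupling and a harmonic lattice energy. All parameters are illustrative assumptions.

```python
import numpy as np
rng = np.random.default_rng(1)

N, T = 50, 0.5                                  # chain length and temperature (assumed units)
J0, alpha, k_spring, a0 = 1.0, 2.0, 5.0, 1.0    # toy coupling and elastic constants
spins = rng.choice([-1, 1], N)
pos = np.arange(N, dtype=float)                 # atomic positions along a chain

def energy(spins, pos):
    d = np.diff(pos)
    J = J0 * (1.0 - alpha * (d - a0))           # distance-dependent exchange coupling
    elastic = 0.5 * k_spring * (d - a0) ** 2    # harmonic lattice energy
    return float(np.sum(-J * spins[:-1] * spins[1:] + elastic))

E = energy(spins, pos)
for _ in range(20_000):
    s_new, p_new = spins.copy(), pos.copy()
    i = rng.integers(N)
    if rng.random() < 0.5:
        s_new[i] *= -1                          # magnetic move: flip a spin
    else:
        p_new[i] += rng.normal(0.0, 0.05)       # spatial move: displace an atom
    E_new = energy(s_new, p_new)
    if rng.random() < np.exp(min(0.0, -(E_new - E) / T)):   # Metropolis acceptance
        spins, pos, E = s_new, p_new, E_new
```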
Abstract:
In this paper, direct numerical simulation of autoignition in an initially non-premixed medium under isotropic, homogeneous, and decaying turbulence is presented. The pressure-based method developed herein is a spectral implementation of the sequential steps followed in predictor-corrector type algorithms; it includes the effects of density fluctuations caused by spatial inhomogeneities in temperature and species. The velocity and pressure fields are solved in spectral space, while the scalars and the density field are solved in physical space. The presented results reveal that the autoignition spots originate and evolve at locations where (1) the composition corresponds to a small range around a specific mixture fraction, and (2) the conditional scalar dissipation rate is low. A careful examination of the data obtained indicates that the autoignition spots originate in the vortex cores, and the hot gases travel outward as combustion progresses. Hence, the applicability of the transient laminar flamelet model for this problem is questioned. The dependence of autoignition characteristics on parameters such as (1) the initial eddy-turnover time and (2) the initial ratio of the length scale of the scalars to that of the velocities is investigated. Certain implications of the new results for conditional moment closure modeling are discussed.
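The paper's pressure-based predictor-corrector accounts for variable density, which is not reproduced here; the sketch below shows only the constant-density analogue of the spectral pressure-correction step, i.e. projecting a periodic 2-D velocity field onto its divergence-free part in Fourier space.

```python
import numpy as np

def project_divergence_free(u, v, Lx=2 * np.pi, Ly=2 * np.pi):
    """Remove the irrotational part of a 2-D periodic velocity field in
    spectral space (constant-density analogue of the pressure correction)."""
    ny, nx = u.shape
    kx = np.fft.fftfreq(nx, d=Lx / nx) * 2 * np.pi
    ky = np.fft.fftfreq(ny, d=Ly / ny) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                          # avoid division by zero for the mean mode
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div_h = 1j * KX * uh + 1j * KY * vh     # divergence in Fourier space
    phi_h = -div_h / k2                     # solve the pressure Poisson equation
    uh -= 1j * KX * phi_h                   # subtract the pressure gradient
    vh -= 1j * KY * phi_h
    return np.fft.ifft2(uh).real, np.fft.ifft2(vh).real
```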
Abstract:
The CCEM method (Contact Criteria and Energy Minimisation) has been developed and applied to study protein-carbohydrate interactions. The method uses available X-ray data, even on the native protein at low resolution (above 2.4 Å), to generate realistic models of a variety of proteins with various ligands. The two examples discussed in this paper are arabinose-binding protein (ABP) and pea lectin. The X-ray crystal structure data reported on the ABP-β-l-arabinose complex at 2.8, 2.4 and 1.7 Å resolution differ drastically in predicting the nature of the interactions between the protein and ligand. It is shown that, using the data at 2.4 Å resolution, the CCEM method generates complexes which are as good as the higher (1.7 Å) resolution data. The CCEM method predicts some of the important hydrogen bonds between the ligand and the protein which are missing in the interpretation of the X-ray data at 2.4 Å resolution. The theoretically predicted hydrogen bonds are in good agreement with those reported at 1.7 Å resolution. Pea lectin has been solved only in the native form, at 3 Å resolution. Application of the CCEM method also enables us to generate complexes of pea lectin with methyl-α-d-glucopyranoside and methyl-2,3-dimethyl-α-d-glucopyranoside, which explain well the available experimental data in solution.
Abstract:
There are a number of large networks which occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both the design and planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model bringing together some important constraints based on the abstraction of a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix, and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and evolves a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there is a possibility of an input (such as power, water, messages, goods, etc.), an output, or none. Normally, the network equations describe the flows among nodes through the arcs. These network equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants; the result required is the flows through the arcs. To solve the normal base problem, we are given input flows at nodes, output flows at nodes, and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes will be referred to as across variables).

The optimization problem involves selecting inputs at nodes so as to optimise an objective function; the objective may be a cost function based on the inputs to be minimised, a loss function, or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration: stage one calculates the problem variables and stage two the Lagrange multipliers. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) also occurs in stage two.

A second solution procedure has also been embedded into the first one. This is called the total residue approach. It changes the equality constraints so as to obtain faster convergence of the iterations. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve such optimization problems on them. Two types of algorithms have been proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local area case. These algorithms are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach used was to define an algorithm that is faster and uses minimum communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
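The paper's nonlinear model and total residue approach are not reproduced here; the sketch below only illustrates the two-stage Lagrange multiplier idea on a toy linear-constraint network, where the nodal injections p and the multipliers are obtained together from the KKT system. The cost matrix, incidence row, and demand value are illustrative assumptions.

```python
import numpy as np

# Toy network: three supply nodes, choose injections p to meet total demand d
# at minimum quadratic cost 0.5 * p^T C p, subject to the linear balance A p = d.
C = np.diag([1.0, 2.0, 4.0])          # convex nodal cost coefficients (assumed)
A = np.array([[1.0, 1.0, 1.0]])       # total injection must equal total demand
d = np.array([10.0])

# The KKT system couples the problem variables p with the Lagrange multipliers
# lam; note that the same block C that drives the variable update also appears
# in the multiplier equations, as observed in the abstract.
n, m = C.shape[0], A.shape[0]
KKT = np.block([[C, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([np.zeros(n), d])
sol = np.linalg.solve(KKT, rhs)
p, lam = sol[:n], sol[n:]
# p is approximately [5.71, 2.86, 1.43]: cheaper nodes supply more;
# lam is (up to sign) the marginal cost of meeting the demand.
```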
Abstract:
We offer a technique, motivated by feedback control and specifically sliding mode control, for the simulation of differential-algebraic equations (DAEs) that describe common engineering systems such as constrained multibody mechanical structures and electric networks. Our algorithm exploits the basic results of sliding mode control theory to establish a simulation environment that then requires only the most primitive of numerical solvers. We circumvent the most important requisite for the conventional simulation of DAEs: the calculation of a set of consistent initial conditions. Our algorithm, which relies on the enforcement and occurrence of a sliding mode, ensures that the algebraic equation is satisfied by the dynamic system even for inconsistent initial conditions, and for all time thereafter. [DOI:10.1115/1.4001904]
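This is not the authors' algorithm, only a toy illustration of the idea: enforce an algebraic constraint g(x, z) = 0 by driving z with a discontinuous sign feedback so that the simulation reaches and then stays on the constraint surface even from an inconsistent initial condition. The example DAE, the gain K, and the step size are assumptions.

```python
import numpy as np

# Toy semi-explicit DAE:  x' = -x + z,  0 = g(x, z) = z - sin(x).
# Instead of computing consistent initial conditions, drive z with a
# sliding-mode-style feedback z' = -K * sign(g), so g is forced to zero
# and remains (approximately) zero thereafter.
def simulate(x0=1.0, z0=3.0, K=50.0, dt=1e-4, t_end=2.0):
    x, z, traj = x0, z0, []
    for _ in range(int(t_end / dt)):
        g = z - np.sin(x)              # constraint residual
        x += dt * (-x + z)             # differential state (explicit Euler)
        z += dt * (-K * np.sign(g))    # discontinuous correction toward g = 0
        traj.append((x, z, g))
    return traj

traj = simulate()
# After a short reaching phase, |g| stays within a small chattering band around
# zero: the inconsistent initial z has been pulled onto the constraint surface.
```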
Abstract:
The growing interest in higher-throughput sequencing over the last decade has led to the development of new sequencing applications. This thesis concentrates on optimizing DNA library preparation for the Illumina Genome Analyzer II sequencer. The library preparation steps that were optimized include fragmentation, PCR purification and quantification. DNA fragmentation was performed with focused sonication at different concentrations and durations. Two column-based PCR purification methods, a gel matrix method and a magnetic bead-based method were compared. Quantitative PCR and on-chip gel electrophoresis were compared for DNA quantification. The magnetic bead purification was found to be the most efficient and flexible purification method. The fragmentation protocol was changed to produce longer fragments compatible with longer sequencing reads. Quantitative PCR correlates better with the cluster number and should thus be considered the default quantification method for sequencing. As a result of this study, more data have been acquired from sequencing at lower cost, and troubleshooting has become easier as qualification steps have been added to the protocol. New sequencing instruments and applications will create a demand for further optimization in the future.
Abstract:
The dynamics of low-density flows is governed by the Boltzmann equation of the kinetic theory of gases. This is a nonlinear integro-differential equation and, in general, numerical methods must be used to obtain its solution. The present paper, after a brief review of the Direct Simulation Monte Carlo (DSMC) methods due to Bird, and Belotserkovskii and Yanitskii, studies the details of the DSMC method of Deshpande for mono- as well as multicomponent gases. The present method is a statistical particle-in-cell method and is based upon the Kac-Prigogine master equation, which reduces to the Boltzmann equation under the hypothesis of molecular chaos. The proposed Markov model simulating the collisions uses a Poisson distribution for the number of collisions allowed in the cells into which the physical space is divided. The model is then extended to a binary mixture of gases, and it is shown that it is necessary to perform the collisions in a certain sequence to obtain an unbiased simulation.
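A stripped-down sketch of the Poisson-collision idea for a single cell of a single-species, equal-mass hard-sphere gas; the collision rate and time step are placeholder assumptions, and the multicomponent collision sequencing discussed in the paper is not shown.

```python
import numpy as np
rng = np.random.default_rng(2)

def collide_cell(v, coll_rate, dt):
    """One DSMC collision step in a single cell: the number of collisions is
    sampled from a Poisson distribution, colliding pairs are picked at random,
    and each hard-sphere collision scatters the relative velocity isotropically
    while conserving momentum and energy."""
    n = len(v)
    n_coll = rng.poisson(coll_rate * dt)
    for _ in range(n_coll):
        i, j = rng.choice(n, size=2, replace=False)
        v_cm = 0.5 * (v[i] + v[j])              # center-of-mass velocity
        g = np.linalg.norm(v[i] - v[j])         # relative speed (preserved)
        cos_t = 2.0 * rng.random() - 1.0        # isotropic post-collision direction
        sin_t = np.sqrt(1.0 - cos_t**2)
        phi = 2.0 * np.pi * rng.random()
        g_new = g * np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
        v[i], v[j] = v_cm + 0.5 * g_new, v_cm - 0.5 * g_new
    return v

# Example: 500 equal-mass particles in one cell with Maxwellian initial velocities.
v = rng.normal(size=(500, 3))
v = collide_cell(v, coll_rate=200.0, dt=0.01)
```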
Abstract:
In this paper, we present a generic method/model for the multi-objective design optimization of laminated composite components, based on the Vector Evaluated Artificial Bee Colony (VEABC) algorithm. VEABC is a parallel, vector-evaluated, swarm-intelligence multi-objective variant of the Artificial Bee Colony (ABC) algorithm. In the current work, a modified version of the VEABC algorithm for discrete variables has been developed and implemented successfully for the multi-objective design optimization of composites. The problem is formulated with the multiple objectives of minimizing the weight and the total cost of the composite component while achieving a specified strength. The primary optimization variables are the number of layers, the stacking sequence (the orientation of the layers), and the thickness of each layer. Classical lamination theory is utilized to determine the stresses in the component, and the design is evaluated against three failure criteria: a failure-mechanism-based criterion, the maximum-stress criterion, and the Tsai-Wu criterion. The optimization method is validated for a number of different loading configurations: uniaxial, biaxial, and bending loads. The design optimization has been carried out for both variable stacking sequences and fixed standard stacking schemes, and a comparative study of the different design configurations evolved is presented. Finally, the performance is evaluated in comparison with other nature-inspired techniques, including Particle Swarm Optimization (PSO), Artificial Immune Systems (AIS), and Genetic Algorithms (GA). The performance of ABC is on par with that of PSO, AIS, and GA for all the loading configurations. (C) 2009 Elsevier B.V. All rights reserved.
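The discrete, multi-objective VEABC variant is not reproduced here; as a minimal sketch of the underlying Artificial Bee Colony structure (employed bees, onlooker bees, scouts), the example below runs single-objective continuous ABC on a sphere test function standing in for the laminate weight/cost objective. Colony size, limit, and bounds are assumptions.

```python
import numpy as np
rng = np.random.default_rng(3)

def abc_minimize(f, bounds, n_food=20, limit=20, iters=200):
    """Minimal single-objective Artificial Bee Colony sketch."""
    lo, hi = bounds
    dim = len(lo)
    foods = rng.uniform(lo, hi, size=(n_food, dim))
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)

    def neighbor(i):
        k = rng.integers(n_food)
        while k == i:
            k = rng.integers(n_food)
        j = rng.integers(dim)
        x = foods[i].copy()
        x[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        return np.clip(x, lo, hi)

    def try_improve(i):
        x = neighbor(i)
        fx = f(x)
        if fx < fit[i]:
            foods[i], fit[i], trials[i] = x, fx, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):            # employed bees: local search per source
            try_improve(i)
        probs = fit.max() - fit + 1e-12    # onlooker bees: prefer better sources
        probs /= probs.sum()
        for i in rng.choice(n_food, size=n_food, p=probs):
            try_improve(i)
        for i in np.where(trials > limit)[0]:   # scouts: abandon exhausted sources
            foods[i] = rng.uniform(lo, hi, dim)
            fit[i], trials[i] = f(foods[i]), 0

    best = np.argmin(fit)
    return foods[best], fit[best]

# Example on a sphere function as a stand-in for the laminate objective.
x_best, f_best = abc_minimize(lambda x: np.sum(x**2),
                              (np.full(4, -5.0), np.full(4, 5.0)))
```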