883 results for Simulation Based Method
Abstract:
Modern lifestyle has markedly changed eating habits worldwide, with an increasing demand for ready-to-eat foods such as minimally processed fruits and leafy greens. Packaging and storage conditions of these products may favor the growth of psychrotrophic bacteria, including the pathogen Listeria monocytogenes. In this work, minimally processed leafy vegetable samples (n = 162) from the retail market of Ribeirao Preto, Sao Paulo, Brazil, were tested for the presence or absence of Listeria spp. using the immunoassay Listeria Rapid Test (Oxoid). Two L. monocytogenes-positive and six artificially contaminated samples of minimally processed leafy vegetables were evaluated by the Most Probable Number (MPN) technique, with detection by the classical culture method and also by the culture method combined with real-time PCR (RTi-PCR) targeting 16S rRNA genes of L. monocytogenes. Positive MPN enrichment tubes were analyzed by RTi-PCR with primers specific for L. monocytogenes using the commercial preparation ABSOLUTE™ QPCR SYBR® Green Mix (ABgene, UK). The real-time PCR assay presented good exclusivity and inclusivity results, and no statistically significant difference was found in comparison with the conventional culture method (p < 0.05). Moreover, RTi-PCR was fast and easy to perform, with MPN results obtained in ca. 48 h, in comparison to 7 days for the conventional method. (C) 2009 Elsevier Ltd. All rights reserved.
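As a side note, the maximum-likelihood calculation underlying MPN estimates from a dilution series can be sketched in a few lines. The volumes, tube counts and positive counts below are illustrative placeholders, not data from the study.

```python
import math

def mpn_mle(volumes_ml, tubes, positives, lo=1e-6, hi=1e6, iters=200):
    """Maximum-likelihood Most Probable Number (organisms/mL).

    volumes_ml[i]: sample volume per tube at dilution i
    tubes[i]:      number of tubes inoculated at dilution i
    positives[i]:  number of positive tubes at dilution i
    (Assumes at least one positive and one negative tube overall.)
    """
    def score(lam):
        # derivative of the log-likelihood of the Poisson dilution model
        s = 0.0
        for v, n, x in zip(volumes_ml, tubes, positives):
            p = 1.0 - math.exp(-lam * v)
            if x > 0:
                s += x * v * (1.0 - p) / p
            s -= (n - x) * v
        return s
    # the score is decreasing in lambda; bisect (geometrically, since
    # lambda may span several decades) for its root
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# illustrative 3-tube series at 10, 1 and 0.1 mL with (3, 2, 0) positives
print(round(mpn_mle([10, 1, 0.1], [3, 3, 3], [3, 2, 0]), 3), "MPN/mL")
```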
Abstract:
A general, fast wavelet-based adaptive collocation method is formulated for heat and mass transfer problems involving a steep moving profile of the dependent variable. The technique of grid adaptation is based on sparse point representation (SPR). The method is applied and tested for the case of a gas–solid non-catalytic reaction in a porous solid at high Thiele modulus. Accurate and convergent steep profiles are obtained for Thiele modulus as large as 100 for the case of slab and found to match the analytical solution.
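To illustrate the SPR idea, the toy sketch below retains only the dyadic grid points whose interpolating-wavelet detail (the prediction error from the two coarser neighbours) exceeds a threshold. The steep tanh front stands in for a high-Thiele-modulus concentration profile; the levels and threshold are arbitrary choices, not the paper's.

```python
import numpy as np

def spr_grid(f, level_max=8, eps=1e-3):
    """Sparse point representation on [0, 1]: keep dyadic points whose
    detail (deviation from linear interpolation of the two coarse-grid
    neighbours) exceeds eps."""
    xs = np.linspace(0.0, 1.0, 2**level_max + 1)
    fv = f(xs)
    keep = np.zeros(len(xs), dtype=bool)
    keep[0] = keep[-1] = True                 # always keep the boundaries
    for j in range(1, level_max + 1):         # coarse to fine levels
        step = 2**(level_max - j)
        for k in range(step, len(xs) - step, 2 * step):
            pred = 0.5 * (fv[k - step] + fv[k + step])
            if abs(fv[k] - pred) > eps:       # large detail -> keep point
                keep[k] = True
    return xs[keep]

# steep reaction front, a stand-in for a high-Thiele-modulus profile
phi = 100.0
front = lambda x: np.tanh(phi * (x - 0.5))
pts = spr_grid(front, level_max=10, eps=1e-3)
print(f"{len(pts)} of {2**10 + 1} points retained, clustered near x = 0.5")
```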
Abstract:
Market-based transmission expansion planning gives investors information on where the most cost-efficient places to invest are and brings benefits to those who invest in the grid. However, both market issues and power system adequacy problems are system planners' concerns. In this paper, a hybrid probabilistic criterion, the Expected Economical Loss (EEL), is proposed as an index to evaluate a system's overall expected economical losses during operation in a competitive market. It reflects both the investors' and the planners' points of view and further improves on the traditional reliability cost. By applying the EEL, system planners can obtain a clear idea of the transmission network's bottleneck and of the amount of loss arising from this weak point. Consequently, it enables planners to assess the worth of providing reliable services. The EEL also contains valuable information for investors to guide their investment. This index can truly reflect the random behaviors of power systems and the uncertainties of the electricity market. The performance of the EEL index is enhanced by applying a Normalized Coefficient of Probability (NCP), so it can be utilized in large real power systems. A numerical example is carried out on the IEEE Reliability Test System (RTS), which shows how the EEL can predict the current system bottleneck under future operational conditions and how to use the EEL as one of the planning objectives to determine future optimal plans. A well-known simulation method, Monte Carlo simulation, is employed to capture the probabilistic characteristics of the electricity market, and Genetic Algorithms (GAs) are used as a multi-objective optimization tool.
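The Monte Carlo ingredient of such an index can be sketched in its crudest form: sample random outage states and load levels, price the resulting curtailment, and average. All numbers below (capacity, outage rate, energy price) are invented placeholders; the paper's EEL additionally involves the NCP weighting and a full network model.

```python
import random

def expected_economical_loss(n_samples=100_000, seed=1):
    """Crude Monte Carlo sketch of an EEL-style index for one corridor."""
    rng = random.Random(seed)
    line_capacity_mw = 150.0   # assumed transfer limit of the corridor
    outage_prob = 0.02         # assumed forced-outage rate of the line
    price_per_mwh = 80.0       # assumed price of unserved energy
    total = 0.0
    for _ in range(n_samples):
        load = rng.gauss(120.0, 25.0)           # stochastic demand (MW)
        capacity = 0.0 if rng.random() < outage_prob else line_capacity_mw
        curtailed = max(0.0, load - capacity)   # unserved demand (MW)
        total += curtailed * price_per_mwh      # economic loss of this state
    return total / n_samples

print(f"EEL ~ {expected_economical_loss():.1f} $/h")
```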
Abstract:
This paper presents a numerical technique for the design of an RF coil for asymmetric magnetic resonance imaging (MRI) systems. The formulation is based on an inverse approach where the cylindrical surface currents are expressed in terms of a combination of sub-domain basis functions: triangular and pulse functions. With the homogeneous transverse magnetic field specified in a spherical region, a functional method is applied to obtain the unknown current coefficients. The current distribution is then transformed to a conductor pattern by use of a stream function technique. Preliminary MR images acquired using a prototype RF coil are presented and validate the design method. (C) 2002 Elsevier Science B.V. All rights reserved.
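The inverse-design step can be illustrated in a generic form: expand the unknown current in sub-domain basis functions and solve a regularized least-squares problem for the coefficients that reproduce a homogeneous target field. The field kernel below is a made-up stand-in, not a Biot-Savart evaluation, and the geometry is one-dimensional for brevity.

```python
import numpy as np

def design_currents(n_basis=12, n_targets=40, reg=1e-6):
    # basis-function centres along the coil former, and target points in
    # the region of interest (ROI) where the field should be homogeneous
    z_basis = np.linspace(-1.0, 1.0, n_basis)
    z_roi = np.linspace(-0.3, 0.3, n_targets)
    # A[m, n]: field at target m per unit amplitude of the current element
    # centred at z_basis[n] -- a Lorentzian stand-in kernel, NOT Biot-Savart
    dz = z_roi[:, None] - z_basis[None, :]
    A = 1.0 / (1.0 + (dz / 0.5) ** 2)
    b = np.ones(n_targets)           # homogeneous target field
    # Tikhonov-regularised least squares for the current coefficients
    coeffs = np.linalg.solve(A.T @ A + reg * np.eye(n_basis), A.T @ b)
    achieved = A @ coeffs
    print(f"field ripple over ROI: {achieved.std() / achieved.mean():.2e}")
    return coeffs

design_currents()
```

A stream-function step would then turn the fitted continuous current density into discrete conductor paths; that conversion is omitted here.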
Abstract:
Common sense tells us that the future is an essential element in any strategy. In addition, there is a good deal of literature on scenario planning, which is an important tool for considering the future in terms of strategy. However, in many organizations there is serious resistance to the development of scenarios, and they are not broadly implemented by companies. Yet even organizations that do not rely heavily on the development of scenarios do, in fact, construct visions to guide their strategies. It might be asked, however, what happens when this vision is not consistent with the future? To address this problem, the present article proposes a method for checking the content and consistency of an organization's vision of the future, no matter how it was conceived. The proposed method is grounded in theoretical concepts from the field of future studies, which are described in this article. This study was motivated by the search for new ways of improving and using scenario techniques as a method for making strategic decisions. The method was then tested on a company in the field of information technology in order to check its operational feasibility. The test showed that the proposed method is, in fact, operationally feasible and was capable of analyzing the vision of the company being studied, indicating both its shortcomings and points of inconsistency. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Bond's method for ball mill scale-up only gives the mill power draw for a given duty. This method is incompatible with computer modelling and simulation techniques. It might not be applicable for the design of fine grinding ball mills and ball mills preceded by autogenous and semi-autogenous grinding mills. Model-based ball mill scale-up methods have not been validated using a wide range of full-scale circuit data, so their accuracy is questionable. Some of these methods also need expensive pilot testing. A new ball mill scale-up procedure is developed which does not have these limitations. This procedure uses data from two laboratory tests to determine the parameters of a ball mill model. A set of scale-up criteria then scales up these parameters. The procedure uses the scaled-up parameters to simulate the steady state performance of full-scale mill circuits. At the end of the simulation, the scale-up procedure gives the size distribution, the volumetric flowrate and the mass flowrate of all the streams in the circuit, and the mill power draw.
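A common model-based framing for this kind of procedure is the perfect mixing mill model; the sketch below shows that model with a toy appearance function and an illustrative diameter-based scale-up rule for the combined r/d parameter. The model choice, all numbers and the scaling exponent are assumptions, not the paper's fitted values or criteria.

```python
import numpy as np

def perfect_mixing_product(feed, rd, A):
    # feed[i]: feed rate in size class i (coarsest first); rd[i]: combined
    # breakage-rate/discharge parameter r/d; A[i, j]: appearance function
    # (fraction of material broken out of class j that reports to class i)
    n = len(feed)
    p = np.zeros(n)
    for i in range(n):   # steady-state balance, solved coarse to fine
        born = sum(A[i, j] * rd[j] * p[j] for j in range(i))
        p[i] = (feed[i] + born) / (1.0 + rd[i] * (1.0 - A[i, i]))
    return p

n = 5
feed = np.array([40.0, 30.0, 15.0, 10.0, 5.0])  # t/h per size class
rd_lab = np.array([8.0, 5.0, 3.0, 1.5, 0.0])    # as if fitted from lab tests
A = np.zeros((n, n))
for j in range(n - 1):                           # toy appearance function:
    A[j + 1:, j] = 1.0 / (n - j - 1)             # broken mass splits evenly
# illustrative scale-up rule: r/d scaled with the square root of the
# diameter ratio (the exponent is an assumption)
rd_full = rd_lab * (4.0 / 0.3) ** 0.5
print("full-scale product (t/h per class):",
      np.round(perfect_mixing_product(feed, rd_full, A), 2))
```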
Model-based procedure for scale-up of wet, overflow ball mills - Part III: Validation and discussion
Abstract:
A new ball mill scale-up procedure is developed. This procedure has been validated using seven sets of full-scale ball mill data. The largest ball mills in these data have diameters (inside liners) of 6.58 m. The procedure can predict the 80% passing size of the circuit product to within +/-6% of the measured value, with a precision of +/-11% (one standard deviation); the re-circulating load to within +/-33% of the mass-balanced value (this error margin is within the uncertainty associated with the determination of the re-circulating load); and the mill power to within +/-5% of the measured value. This procedure is applicable for the design of ball mills which are preceded by autogenous (AG) mills, semi-autogenous (SAG) mills, crushers and flotation circuits. The new procedure is more precise and more accurate than Bond's method for ball mill scale-up. This procedure contains no efficiency correction which relates to the mill diameter. This suggests that, within the range of mill diameters studied, milling efficiency does not vary with mill diameter. This is in contrast with Bond's equation: Bond claimed that milling efficiency increases with mill diameter. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
A new method to extract a MOSFET's threshold voltage VT by measurement of the gate-to-substrate capacitance C-gb of the transistor is presented. Unlike existing extraction methods based on I-V data, the measurement of C-gb does not require the drain current to flow between drain and source, thus eliminating the effects of the source and drain series resistance R-S/D while, at the same time, retaining a symmetrical potential profile across the channel. Experimental and simulation results on devices with different sizes are presented to justify the proposed method.
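One plausible way to read a threshold voltage off a C-V curve is to take the gate bias at the steepest transition of C-gb; the sketch below does exactly that on a synthetic curve. The paper defines its own criterion on C-gb, so this derivative rule and the synthetic curve shape are only stand-ins.

```python
import numpy as np

def vt_from_cgb(vg, cgb):
    """Take VT as the gate voltage at the steepest C_gb transition."""
    dcdv = np.gradient(cgb, vg)          # numerical dC/dV
    return vg[np.argmax(np.abs(dcdv))]   # location of the extremal slope

# synthetic C_gb(V): depletion-to-inversion transition around the true VT
vg = np.linspace(-1.0, 2.0, 601)
vt_true = 0.45
cgb = 1.0 - 0.3 * (1.0 + np.tanh((vg - vt_true) / 0.08))  # arbitrary units
print(f"extracted VT = {vt_from_cgb(vg, cgb):.3f} V (true {vt_true} V)")
```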
Abstract:
Lateral ventricular volumes based on segmented brain MR images can be significantly underestimated if partial volume effects are not considered. This is because a group of voxels in the neighborhood of the lateral ventricles is often mis-classified as gray matter voxels due to partial volume effects. This group of voxels is actually a mixture of ventricular cerebro-spinal fluid and white matter and, therefore, a portion of it should be included as part of the lateral ventricular structure. In this note, we describe an automated method for the measurement of lateral ventricular volumes on segmented brain MR images. Image segmentation was carried out by combining intensity correction and thresholding. The method features a procedure for addressing mis-classified voxels in the surroundings of the lateral ventricles. A detailed analysis showed that lateral ventricular volumes could be underestimated by 10 to 30%, depending upon the size of the lateral ventricular structure, if mis-classified voxels were not included. Validation of the method was done through comparison with averaged manually traced volumes. Finally, the merit of the method is demonstrated in the evaluation of the rate of lateral ventricular enlargement. (C) 2001 Elsevier Science Inc. All rights reserved.
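The partial-volume idea can be illustrated in its simplest form: model a boundary voxel's intensity as a volume-weighted mix of the pure CSF and white matter means, and sum the implied CSF fraction of every voxel in a (pre-masked) region into the volume. The intensities, voxel size and geometry below are invented, and the mixing model is a generic stand-in for the paper's procedure.

```python
import numpy as np

def ventricular_volume(image, csf_mean, wm_mean, voxel_ml=0.001):
    # CSF fraction per voxel under a linear two-tissue mixing model;
    # pure WM -> 0, pure CSF -> 1, boundary voxels count fractionally
    frac = np.clip((wm_mean - image) / (wm_mean - csf_mean), 0.0, 1.0)
    return frac.sum() * voxel_ml

# synthetic 1 mm^3 voxels: a 10x10x10 block of CSF (I = 40) inside white
# matter (I = 120), plus one face of half-mixed boundary voxels (I = 80)
img = np.full((20, 20, 20), 120.0)
img[5:15, 5:15, 5:15] = 40.0
img[4, 5:15, 5:15] = 80.0
print(f"volume = {ventricular_volume(img, 40.0, 120.0):.3f} mL")  # 1.000 + 0.050
```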
Abstract:
The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds some high level L before it becomes empty, starting from a given state. The approach is based on a Markov additive process representation of the buffer processes, leading to an exponential change of measure to be used in an importance sampling procedure. Unlike changes of measure proposed and studied in the recent literature, the one derived here is a function of the content of the first buffer. We prove that when the first buffer is finite, this method yields asymptotically efficient simulation for any set of arrival and service rates. In fact, the relative error is bounded independently of the level L: a new result which has not been established for any other known method. When the first buffer is infinite, we propose a natural extension of the exponential change of measure for the finite buffer case. In this case, the relative error is shown to be bounded (independently of L) only when the second server is the bottleneck; a result which is known to hold for some other methods derived through large deviations analysis. When the first server is the bottleneck, experimental results using our method suggest that the relative error grows at most linearly in L.
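The generic importance-sampling estimator for this overflow probability can be sketched using the classical static change of measure (swapping the arrival rate with the second, bottleneck service rate), not the buffer-dependent measure derived in the paper. Per-step likelihood ratios are exact for the embedded jump chain because the swap preserves the total event rate in every state. The rates, level and start state below are arbitrary.

```python
import random

def overflow_prob(lam=1.0, mu1=4.0, mu2=2.0, L=20, start=(0, 1),
                  n_runs=20_000, seed=7):
    """IS estimate of P(buffer 2 reaches L before emptying), two-node tandem.
    Twist: classical swap lambda <-> mu2 (mu2 assumed to be the bottleneck)."""
    rng = random.Random(seed)
    lam_t, mu2_t = mu2, lam
    total = 0.0
    for _ in range(n_runs):
        q1, q2 = start
        w = 1.0                                   # path likelihood ratio
        while 0 < q2 < L:
            # enabled events: (name, original rate, twisted rate)
            events = [('arr', lam, lam_t), ('s2', mu2, mu2_t)]
            if q1 > 0:
                events.append(('s1', mu1, mu1))
            z = sum(rt for _, _, rt in events)    # twisted total rate
            u, acc = rng.random() * z, 0.0
            for name, r, rt in events:            # sample the next event
                acc += rt
                if u <= acc:
                    break
            w *= r / rt       # totals cancel: the swap keeps the sum equal
            if name == 'arr':
                q1 += 1
            elif name == 's1':
                q1 -= 1; q2 += 1
            else:
                q2 -= 1
        if q2 >= L:
            total += w        # weighted count of overflow paths
    return total / n_runs

print(f"P(overflow before empty) ~ {overflow_prob():.3e}")
```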
Abstract:
The binary diffusivities of water in low molecular weight sugars (fructose, sucrose) and in a high molecular weight carbohydrate, maltodextrin (DE 11), and the effective diffusivities of water in mixtures of these sugars (sucrose, glucose, fructose) and maltodextrin (DE 11), were determined using a simplified procedure based on the Regular Regime Approach. The effective diffusivity of these mixtures exhibited both concentration and molecular weight dependence. Surface stickiness was observed in all samples during desorption, with fructose exhibiting the highest stickiness and maltodextrin the lowest. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
This paper presents a new approach to the LU decomposition method for the simulation of stationary and ergodic random fields. The approach overcomes the size limitations of LU decomposition and is suitable for simulations of any size. The proposed approach can facilitate fast updating of generated realizations with new data, when appropriate, without repeating the full simulation process. Based on a novel column partitioning of the L matrix, expressed in terms of successive conditional covariance matrices, the approach presented here demonstrates that LU simulation is equivalent to the successive solution of kriging residual estimates plus random terms. Consequently, it can be used for the LU decomposition of matrices of any size. The simulation approach is termed conditional simulation by successive residuals because, at each step, a small set (group) of random variables is simulated with an LU decomposition of a matrix of updated conditional covariances of residuals. The simulated group is then used to estimate residuals without the need to solve large systems of equations.
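The group-wise principle can be sketched in a naive form: each small group of variables is drawn from its conditional distribution given everything simulated so far, so only small matrices are ever Cholesky-decomposed. Unlike the paper's successive-residual recursion, this naive version re-solves the full previous system at every step, so it illustrates the equivalence rather than the efficiency; grid size, covariance model and group size are arbitrary.

```python
import numpy as np

def simulate_by_groups(C, group_size=50, rng=None):
    """Group-wise Gaussian simulation equivalent to a block-LU (Cholesky)
    factorisation of the covariance matrix C."""
    rng = rng or np.random.default_rng(0)
    n = C.shape[0]
    z = np.zeros(n)
    for s in range(0, n, group_size):
        g = slice(s, min(s + group_size, n))      # current group
        p = slice(0, s)                           # previously simulated block
        if s == 0:
            mean, cond = np.zeros(g.stop), C[g, g]
        else:
            K = np.linalg.solve(C[p, p], C[p, g])  # kriging weights
            mean = K.T @ z[p]                      # conditional mean
            cond = C[g, g] - C[g, p] @ K           # conditional covariance
        z[g] = mean + np.linalg.cholesky(cond) @ rng.standard_normal(g.stop - g.start)
    return z

# exponential-covariance field on a 1-D grid of 500 points, range 50
x = np.arange(500, dtype=float)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 50.0)
field = simulate_by_groups(C, group_size=100)
print("simulated field: mean %.3f, var %.3f" % (field.mean(), field.var()))
```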
Abstract:
A detailed analysis procedure is described for evaluating rates of volumetric change in brain structures based on structural magnetic resonance (MR) images. In this procedure, a series of image processing tools have been employed to address the problems encountered in measuring rates of change based on structural MR images. These tools include an algorithm for intensity non-uniformity correction, a robust algorithm for three-dimensional image registration with sub-voxel precision and an algorithm for brain tissue segmentation. However, a unique feature of the procedure is the use of a fractional volume model that has been developed to provide a quantitative measure of the partial volume effect. With this model, the fractional constituent tissue volumes are evaluated for voxels at the tissue boundary that manifest the partial volume effect, thus allowing tissue boundaries to be defined at a sub-voxel level and in an automated fashion. Validation studies are presented on key algorithms, including segmentation and registration. An overall assessment of the method is provided through the evaluation of the rates of brain atrophy in a group of normal elderly subjects, for which the rate of brain atrophy due to normal aging is predictably small. An application of the method is given in Part II, where the rates of brain atrophy in various brain regions are studied in relation to normal aging and Alzheimer's disease. (C) 2002 Elsevier Science Inc. All rights reserved.
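A fractional volume model for a two-tissue boundary voxel can be written down in a few lines: assuming the voxel intensity is the volume-weighted sum of the pure-tissue means, the tissue fraction follows by inverting the mix. The sketch below applies this to an invented gray matter/CSF boundary voxel; the intensities are placeholders, and the paper's full model may differ.

```python
def fractional_volumes(intensity, mean_a, mean_b):
    """A-tissue and B-tissue fractions of a mixed voxel, assuming the voxel
    intensity is the volume-weighted sum of the pure-tissue mean intensities."""
    f_a = (intensity - mean_b) / (mean_a - mean_b)
    f_a = min(1.0, max(0.0, f_a))   # clamp to a physical fraction
    return f_a, 1.0 - f_a

# boundary voxel between gray matter (I ~ 90) and CSF (I ~ 40)
fa, fb = fractional_volumes(70.0, 90.0, 40.0)
print(f"gray-matter fraction {fa:.2f}, CSF fraction {fb:.2f}")  # 0.60 / 0.40
```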