5 results for NUMERICAL EVALUATION

in Greenwich Academic Literature Archive - UK


Relevance:

30.00%

Publisher:

Abstract:

Numerical solutions of realistic 2-D and 3-D inverse problems may require a very large amount of computation. A two-level parallelism concept is often used to solve such problems. The primary level uses the problem-partitioning concept, a decomposition based on the mathematical/physical problem. The secondary level utilizes the widely used data-partitioning concept. A theoretical performance model is built based on the two-level parallelism. The observed performance results obtained from a network of general-purpose Sun SPARCstations are compared with the theoretical values. Restrictions of the theoretical model are also discussed.
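The shape of such a two-level performance model can be sketched with a simple Amdahl-style formula; the serial fraction and communication cost below are assumed illustrative parameters, not the paper's actual model:

```python
def speedup(p1, p2, serial_frac=0.05, comm_cost=0.01):
    """Illustrative speedup for p1 problem partitions x p2 data partitions.

    serial_frac: fraction of work that cannot be parallelised (assumed).
    comm_cost:   communication overhead per extra processor, relative to
                 the total serial work (assumed).
    """
    p = p1 * p2
    # Normalised parallel run time: serial part + divided parallel part
    # + communication overhead that grows with the processor count.
    t_parallel = serial_frac + (1.0 - serial_frac) / p + comm_cost * (p - 1)
    return 1.0 / t_parallel
```

In a model of this form, speedup saturates and eventually degrades as the communication term dominates, which is the kind of restriction the comparison against observed workstation-network results would expose.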

Relevance:

30.00%

Publisher:

Abstract:

Financial modelling in the area of option pricing involves understanding the correlations between asset prices and buy/sell movements in order to reduce investment risk. Such activities depend on financial analysis tools being available to the trader, with which rapid and systematic evaluations of buy/sell contracts can be made. In turn, analysis tools rely on fast numerical algorithms for the solution of financial mathematical models. There are many different financial activities apart from share buy/sell activities. The main aim of this chapter is to discuss a distributed algorithm for the numerical solution of a European option. Both linear and non-linear cases are considered. The algorithm is based on the concept of the Laplace transform and its numerical inverse. The scalability of the algorithm is examined. Numerical tests are used to demonstrate the effectiveness of the algorithm for financial analysis. Time-dependent functions for volatility and interest rates are also discussed. Applications of the algorithm to the non-linear Black-Scholes equation, where the volatility and the interest rate are functions of the option value, are included. Some qualitative results on the convergence behaviour of the algorithm are examined. This chapter also examines the various computational issues of the Laplace transformation method in terms of distributed computing. The idea of using a two-level temporal mesh in order to achieve distributed computation along the temporal axis is introduced. Finally, the chapter ends with some conclusions.
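The numerical Laplace inversion at the heart of such an approach is often done with the Gaver-Stehfest algorithm; the sketch below shows that method as one common choice, with no claim that it is the chapter's own inversion formula:

```python
import math

def stehfest_weights(N):
    """Gaver-Stehfest weights V_k for even N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        V.append((-1) ** (k + N // 2) * s)
    return V

def laplace_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) on the real axis."""
    a = math.log(2.0) / t
    V = stehfest_weights(N)
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))
```

Because each evaluation point t is independent, the sum over transform evaluations parallelises naturally, which is what makes a distributed formulation along the temporal axis attractive.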

Relevance:

30.00%

Publisher:

Abstract:

Numerical simulation of heat transfer in a high-aspect-ratio rectangular microchannel with heat sinks has been conducted, similar to an experimental study. Three channel heights measuring 0.3 mm, 0.6 mm and 1 mm are considered, and the Reynolds number varies from 300 to 2360, based on the hydraulic diameter. The simulation starts with a validation study on the Nusselt number and the Poiseuille number variations along the channel streamwise direction. It is found that the predicted Nusselt number shows very good agreement with the theoretical estimation, but some discrepancies are noted in the Poiseuille number comparison. This observation, however, is consistent with conclusions made by other researchers for the same flow problem. The simulation continues with the evaluation of heat transfer characteristics, namely the friction factor and the thermal resistance. It is found that a noticeable scaling effect occurs at the small channel height of 0.3 mm, and the predicted friction factor agrees fairly well with an experimentally based correlation. The present simulation further reveals that the thermal resistance is low at small channel height, indicating that the heat transfer performance can be enhanced by decreasing the channel height.
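The quantities the study is organised around are straightforward to compute; the helper functions below use standard textbook definitions, and the channel width and fluid properties in the example are assumed values, not taken from the paper:

```python
def hydraulic_diameter(width, height):
    """D_h = 4 * area / perimeter = 2*W*H / (W + H) for a rectangular channel."""
    return 2.0 * width * height / (width + height)

def reynolds_number(rho, u_mean, d_h, mu):
    """Reynolds number based on the hydraulic diameter."""
    return rho * u_mean * d_h / mu

def thermal_resistance(t_wall_max, t_inlet, q_input):
    """One common definition (assumed here): R = (T_wall,max - T_in) / Q."""
    return (t_wall_max - t_inlet) / q_input

# Example: 0.3 mm high channel, 5 mm wide (assumed width), water-like
# properties, 1 m/s mean velocity.
d_h = hydraulic_diameter(5e-3, 0.3e-3)           # ~5.66e-4 m
re = reynolds_number(998.0, 1.0, d_h, 1.0e-3)    # ~565, inside the 300-2360 range studied
```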

Relevance:

30.00%

Publisher:

Abstract:

Melting of metallic samples in a cold crucible causes inclusions to concentrate on the surface owing to the action of the electromagnetic force in the skin layer. This process is dynamic, involving the melting stage, then quasi-stationary particle separation, and finally the solidification in the cold crucible. The proposed modeling technique is based on the pseudospectral solution method for coupled turbulent fluid flow, thermal and electromagnetic fields within the time-varying fluid volume contained by the free surface and, partially, the solid crucible wall. The model uses two methods for particle tracking: (1) a direct Lagrangian particle path computation and (2) a drifting concentration model. Lagrangian tracking is implemented for arbitrary unsteady flow. A specific numerical time integration scheme is implemented using implicit advancement that permits relatively large time-steps in the Lagrangian model. The drifting concentration model is based on a local equilibrium drift velocity assumption. Both methods are compared and demonstrated to give qualitatively similar results for stationary flow situations. The particular results presented are obtained for iron alloys. Small particles, of the order of 1 μm, are shown to be less prone to separation by electromagnetic field action. In contrast, larger particles, 10 to 100 μm, are easily “trapped” by the electromagnetic field and stay on the sample surface at predetermined locations depending on their size and properties. The model allows optimization for melting power, geometry, and solidification rate.
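Why implicit advancement permits large time-steps can be seen on a stripped-down drag-only particle model; the sketch below assumes a simple Stokes-drag force and omits the electromagnetic force that the full model includes:

```python
def stokes_response_time(rho_p, diameter, mu):
    """Particle response time under Stokes drag: tau = rho_p * d^2 / (18 * mu).

    For micron-size particles tau is tiny, which makes explicit integration
    of the drag equation stiff.
    """
    return rho_p * diameter ** 2 / (18.0 * mu)

def implicit_velocity_step(v_p, u_fluid, tau, dt):
    """One backward-Euler step for dv/dt = (u - v) / tau.

    Solving v_new = v_p + dt * (u_fluid - v_new) / tau for v_new gives the
    update below; it is unconditionally stable, so dt may greatly exceed tau.
    """
    r = dt / tau
    return (v_p + r * u_fluid) / (1.0 + r)
```

Note the size scaling: tau grows with the particle diameter squared, so a 100 μm particle responds 10,000 times more slowly than a 1 μm one, consistent with the abstract's observation that the smallest particles simply follow the flow and are harder to separate.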

Relevance:

30.00%

Publisher:

Abstract:

A particle swarm optimisation approach is used to determine the accuracy and experimental relevance of six disparate cure kinetics models. The cure processes of two commercially available thermosetting polymer materials utilised in microelectronics manufacturing applications have been studied using a differential scanning calorimetry system. Numerical models have been fitted to the experimental data using a particle swarm optimisation algorithm, which enables the ultimate accuracy of each of the models to be determined. The particle swarm optimisation approach to model fitting proves to be relatively rapid and effective in determining the optimal coefficient set for the cure kinetics models. Results indicate that the single-step autocatalytic model is able to represent the curing process more accurately than more complex models, with ultimate accuracy likely to be limited by inaccuracies in the processing of the experimental data.
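The fitting machinery can be sketched as follows; the swarm coefficients are typical textbook values and the model form is the standard single-step autocatalytic rate law, with no claim that either matches the study's own implementation or fitted coefficient sets:

```python
import random

def autocatalytic_rate(alpha, k, m, n):
    """Single-step autocatalytic cure model: d(alpha)/dt = k * alpha^m * (1 - alpha)^n."""
    return k * alpha ** m * (1.0 - alpha) ** n

def pso_minimise(objective, bounds, n_particles=20, n_iter=200, seed=1):
    """Minimal particle swarm optimiser over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia / cognitive / social weights (assumed)
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In a fitting run, `objective` would be the sum of squared errors between `autocatalytic_rate` predictions and the DSC-measured cure rates, with `bounds` spanning plausible ranges for the coefficients k, m and n; being gradient-free, the swarm handles all six model forms with the same driver.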