971 results for NUMERICAL EVALUATION
Abstract:
This work presents an experimental and numerical investigation to characterise the fracture properties of pure bitumen (the binder in asphalt paving materials). The paper is divided into two parts. The first part describes an experimental study of the fracture characterisation parameters of pure bitumen as determined by three-point bend tests. The second part deals with modelling of the fracture and failure of bitumen by Finite Element analysis. The fracture mechanics parameters stress intensity factor (K_IC), fracture energy (G_IC) and J-integral (J_IC) are used to evaluate bitumen's fracture properties. The material constitutive model developed by Ossa et al. [40,41], which was implemented into an FE code by Costanzi [18], is combined with cohesive zone models (CZM) to simulate the fracture behaviour of pure bitumen. Experimental and numerical results are presented in the form of failure mechanism maps in which ductile, brittle and brittle-ductile transition regimes of fracture behaviour are classified. The FE predictions of fracture behaviour match the experimental results well. © 2012 Elsevier Ltd.
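The three fracture parameters named in this abstract are linked by a standard linear elastic fracture mechanics identity, G_IC = K_IC^2 / E', with E' = E in plane stress and E/(1 - nu^2) in plane strain. A minimal sketch of that relation follows; the bitumen stiffness and toughness values are illustrative placeholders, not data from the paper.

```python
# Hedged sketch: relation between the fracture parameters above,
#   G_IC = K_IC^2 / E'   (linear elastic fracture mechanics),
# with E' = E for plane stress and E / (1 - nu^2) for plane strain.
# Material values below are illustrative placeholders, NOT the paper's data.

def g_ic(k_ic, e_modulus, nu=0.35, plane_strain=True):
    """Critical energy release rate from the critical stress intensity
    factor (k_ic in Pa*sqrt(m), e_modulus in Pa; result in J/m^2)."""
    e_eff = e_modulus / (1.0 - nu**2) if plane_strain else e_modulus
    return k_ic**2 / e_eff

# Illustrative numbers only (hypothetical stiff bitumen at low temperature).
k = 0.05e6          # Pa*sqrt(m)
E = 1.0e9           # Pa
print(g_ic(k, E))   # J/m^2
```

Under these assumed values the plane-strain result is slightly below the plane-stress one, since the effective modulus E' is larger.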
Abstract:
The development of infrastructure in major cities often involves tunnelling, which can damage existing structures. These projects therefore require careful prediction of the risk of settlement-induced damage. The simplified approach of current methods cannot account for the three-dimensional structural aspects of buildings, which can result in an inaccurate evaluation of damage. This paper investigates the effect of the building's alignment with the tunnel axis on structural damage. A three-dimensional, phased, fully coupled finite element model with non-linear material properties is used as a tool to perform a parametric study. The model includes the simulation of the tunnel construction process, with the tunnel located adjacent to a masonry building. Three different types of settlement are included (sagging, hogging and a combination of the two), with seven increasing angles of the building with respect to the tunnel axis. The alignment parameter is assessed on the basis of the maximum crack width measured in the building. Results show a significant dependency of the final damage on the building and tunnel alignment.
Abstract:
The forward scattering light (FSL) received by the detector can cause uncertainties in turbidity measurements of the coagulation rate of colloidal dispersions, and this effect becomes more significant for large particles. In this study, the effect of FSL is investigated on the basis of calculations using the T-matrix method, an exact technique for the computation of nonspherical scattering. The theoretical formulation and the relevant numerical implementation for predicting the contribution of FSL to the turbidity measurement are presented. To quantitatively estimate the degree of influence of FSL, an influence ratio comparing the contribution of FSL to that of the purely transmitted light in the turbidity measurement is introduced. The influence ratios evaluated under various parametric conditions, together with the associated analyses, provide a guideline for properly choosing the particle size and measuring wavelength so as to minimize the effect of FSL in turbidity measurements of the coagulation rate.
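The "influence ratio" described above can be sketched in a very simplified form: the forward-scattered power accepted by the detector divided by the purely transmitted power. The Beer-Lambert attenuation is standard, but the acceptance fraction `f_acc`, which in the paper would come from the T-matrix computation, is a hypothetical placeholder here.

```python
# Hedged sketch of an influence ratio of the kind described above:
# forward-scattered light reaching the detector relative to the purely
# transmitted light, at optical depth tau. The parameter f_acc (fraction
# of scattered light inside the detector acceptance cone) stands in for
# a real scattering computation such as the T-matrix method and is a
# hypothetical placeholder.
import math

def influence_ratio(tau, f_acc):
    """tau: turbidity x path length (dimensionless optical depth);
    f_acc: accepted fraction of the scattered light (0..1)."""
    transmitted = math.exp(-tau)
    scattered_in = f_acc * (1.0 - math.exp(-tau))
    return scattered_in / transmitted

# Larger particles scatter more strongly forward -> larger f_acc,
# hence a larger influence ratio at the same optical depth.
print(influence_ratio(0.1, 0.05))
```

In this toy model the ratio grows both with optical depth and with the accepted scattering fraction, which is consistent with the abstract's statement that the effect is more significant for large (strongly forward-scattering) particles.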
Abstract:
Numerical solution of realistic 2-D and 3-D inverse problems may require a very large amount of computation. A two-level concept of parallelism is often used to solve such problems. The primary level uses problem partitioning, a decomposition based on the mathematical/physical problem. The secondary level utilizes the widely used data partitioning concept. A theoretical performance model is built on the basis of this two-level parallelism. Observed performance results obtained from a network of general-purpose Sun SPARC workstations are compared with the theoretical values. Restrictions of the theoretical model are also discussed.
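A two-level performance model of the kind described can be sketched as compute time divided across both partitioning levels plus a communication term. The cost model below is illustrative only; the paper's actual model and its parameters are not given in the abstract.

```python
# Hedged sketch of a two-level speedup model in the spirit of the
# abstract: an outer problem partitioning over p subproblems and an
# inner data partitioning over d workers each, with a per-partition
# communication overhead. Illustrative cost model, not the paper's.

def two_level_time(t_serial, p, d, comm_per_partition):
    """Predicted wall time: serial work spread over p*d workers, plus
    communication that grows with the number of data partitions."""
    return t_serial / (p * d) + comm_per_partition * d

def speedup(t_serial, p, d, comm_per_partition):
    return t_serial / two_level_time(t_serial, p, d, comm_per_partition)

# Adding data partitions helps only until communication dominates.
print(speedup(100.0, 4, 2, 0.5))
```

The model reproduces the usual qualitative behaviour: for a fixed problem partitioning, speedup first rises with the data-partitioning width and then falls once the communication term dominates.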
Abstract:
Financial modelling in the area of option pricing involves understanding the correlations between assets and buy/sell movements in order to reduce investment risk. Such activities depend on financial analysis tools being available to the trader, with which rapid and systematic evaluation of buy/sell contracts can be made. In turn, analysis tools rely on fast numerical algorithms for the solution of financial mathematical models. There are many different financial activities apart from the buying and selling of shares. The main aim of this chapter is to discuss a distributed algorithm for the numerical solution of a European option. Both linear and non-linear cases are considered. The algorithm is based on the concept of the Laplace transform and its numerical inverse. The scalability of the algorithm is examined. Numerical tests are used to demonstrate the effectiveness of the algorithm for financial analysis. Time-dependent functions for volatility and interest rates are also discussed. Applications of the algorithm to the non-linear Black-Scholes equation, where the volatility and the interest rate are functions of the option value, are included. Some qualitative results on the convergence behaviour of the algorithm are examined. This chapter also examines the various computational issues of the Laplace transformation method in terms of distributed computing. The idea of using a two-level temporal mesh in order to achieve distributed computation along the temporal axis is introduced. Finally, the chapter ends with some conclusions.
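The core numerical ingredient of the algorithm described above is a numerical inverse Laplace transform. The chapter's abstract does not say which inversion scheme is used, so the sketch below uses the classical Gaver-Stehfest method as one common choice, verified against a transform whose inverse is known in closed form.

```python
# Hedged sketch of numerical Laplace inversion, the key ingredient of
# the distributed option-pricing algorithm described above. The
# Gaver-Stehfest method here is one standard choice, not necessarily
# the chapter's; it is checked on F(s) = 1/(s+1), whose inverse is
# exp(-t).
from math import factorial, log

def stehfest_coeffs(n):
    """Gaver-Stehfest weights c_1..c_n (n must be even)."""
    c = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, n // 2) + 1):
            s += (j ** (n // 2) * factorial(2 * j)) / (
                factorial(n // 2 - j) * factorial(j) * factorial(j - 1)
                * factorial(k - j) * factorial(2 * j - k))
        c.append((-1) ** (k + n // 2) * s)
    return c

def invert_laplace(F, t, n=12):
    """Approximate f(t) from the transform F(s)."""
    ln2 = log(2.0)
    c = stehfest_coeffs(n)
    return (ln2 / t) * sum(ck * F((k + 1) * ln2 / t)
                           for k, ck in enumerate(c))

# Check against F(s) = 1/(s+1), inverse exp(-t); at t = 1 expect ~0.36788.
print(invert_laplace(lambda s: 1.0 / (s + 1.0), 1.0))
```

Because each evaluation of F(s) is independent, the sum parallelises naturally, which is presumably why a Laplace-transform formulation suits the distributed setting the chapter discusses.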
Abstract:
Numerical simulation of heat transfer in a high-aspect-ratio rectangular microchannel with heat sinks has been conducted, following an experimental study. Three channel heights measuring 0.3 mm, 0.6 mm and 1 mm are considered, and the Reynolds number, based on the hydraulic diameter, varies from 300 to 2360. The simulation starts with a validation study of the Nusselt number and Poiseuille number variations along the channel streamwise direction. It is found that the predicted Nusselt number shows very good agreement with the theoretical estimation, but some discrepancies are noted in the Poiseuille number comparison. This observation, however, is consistent with conclusions made by other researchers for the same flow problem. The simulation continues with the evaluation of heat transfer characteristics, namely the friction factor and the thermal resistance. It is found that a noticeable scaling effect occurs at the small channel height of 0.3 mm, and the predicted friction factor agrees fairly well with an experimentally based correlation. The present simulation further reveals that the thermal resistance is low at small channel heights, indicating that heat transfer performance can be enhanced by decreasing the channel height.
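The quantities this abstract is built on are standard definitions: the hydraulic diameter of a rectangular section, the Reynolds number based on it, and a thermal resistance as temperature rise per unit heat input. A minimal sketch, with illustrative channel dimensions and a water-like viscosity rather than the study's actual conditions:

```python
# Hedged sketch of the basic quantities used above for a high-aspect-
# ratio rectangular channel. Numerical values are illustrative
# placeholders, not the study's geometry or fluid.

def hydraulic_diameter(width, height):
    """D_h = 4A / P for a rectangular cross-section (metres)."""
    return 4.0 * width * height / (2.0 * (width + height))

def reynolds(velocity, d_h, nu=1.0e-6):
    """Re = u * D_h / nu; nu defaults to water at ~20 C (m^2/s)."""
    return velocity * d_h / nu

def thermal_resistance(t_wall, t_inlet, q):
    """R_th = (T_wall - T_inlet) / q, in K/W."""
    return (t_wall - t_inlet) / q

d = hydraulic_diameter(0.01, 0.0003)   # assumed 10 mm x 0.3 mm channel
print(d, reynolds(0.5, d))
```

For such a high aspect ratio the hydraulic diameter is close to twice the channel height, which is why the channel height dominates both the friction and the heat-transfer scaling discussed in the abstract.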
Abstract:
Melting of metallic samples in a cold crucible causes inclusions to concentrate on the surface owing to the action of the electromagnetic force in the skin layer. This process is dynamic, involving the melting stage, then quasi-stationary particle separation, and finally solidification in the cold crucible. The proposed modeling technique is based on a pseudospectral solution method for the coupled turbulent fluid flow, thermal and electromagnetic fields within the time-varying fluid volume contained by the free surface and, partially, the solid crucible wall. The model uses two methods for particle tracking: (1) direct Lagrangian particle path computation and (2) a drifting concentration model. Lagrangian tracking is implemented for arbitrary unsteady flow. A specific numerical time integration scheme using implicit advancement permits relatively large time-steps in the Lagrangian model. The drifting concentration model is based on a local equilibrium drift velocity assumption. The two methods are compared and shown to give qualitatively similar results for stationary flow situations. The particular results presented are obtained for iron alloys. Small particles, of the order of 1 μm, are shown to be less prone to separation by the electromagnetic field. In contrast, larger particles, 10 to 100 μm, are easily “trapped” by the electromagnetic field and stay on the sample surface at predetermined locations depending on their size and properties. The model allows optimization of the melting power, geometry and solidification rate.
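The two tracking approaches mentioned can be sketched in their simplest one-dimensional form: an implicit (backward Euler) update of a particle velocity relaxing toward the local fluid velocity, which remains stable even when the time-step exceeds the particle response time, and a local-equilibrium model in which the particle is simply advected at the fluid velocity plus a drift. All numbers below are illustrative, not the paper's.

```python
# Hedged 1-D sketch of the two particle-tracking ingredients above:
# (1) backward Euler for dv/dt = (u - v)/tau_p, stable for dt >> tau_p,
# (2) the local-equilibrium drift assumption, x advected at u + v_drift.
# Parameter values are illustrative placeholders.

def implicit_velocity_step(v_p, u_fluid, tau_p, dt):
    """Backward Euler: v_new = (v_p + (dt/tau_p) * u) / (1 + dt/tau_p)."""
    r = dt / tau_p
    return (v_p + r * u_fluid) / (1.0 + r)

def drift_velocity_step(x_p, u_fluid, v_drift, dt):
    """Local-equilibrium model: particle advected at u + v_drift."""
    return x_p + (u_fluid + v_drift) * dt

v = 0.0
for _ in range(10):                    # relax toward u = 1.0 m/s
    v = implicit_velocity_step(v, 1.0, tau_p=1e-3, dt=1e-2)
print(v, drift_velocity_step(0.0, 1.0, 0.1, 1e-2))
```

Note the chosen time-step is ten times the response time tau_p; an explicit update would be unstable here, which is the point of the implicit advancement the abstract highlights.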
Abstract:
A particle swarm optimisation approach is used to determine the accuracy and experimental relevance of six disparate cure kinetics models. The cure processes of two commercially available thermosetting polymer materials used in microelectronics manufacturing applications have been studied using a differential scanning calorimetry system. Numerical models have been fitted to the experimental data using a particle swarm optimisation algorithm, which enables the ultimate accuracy of each model to be determined. The particle swarm optimisation approach to model fitting proves to be relatively rapid and effective in determining the optimal coefficient set for the cure kinetics models. Results indicate that the single-step autocatalytic model is able to represent the curing process more accurately than more complex models, with the ultimate accuracy likely to be limited by inaccuracies in the processing of the experimental data.
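The fitting approach described above can be sketched end to end: a minimal particle swarm optimiser recovering the parameters of a single-step autocatalytic cure-rate model, d(alpha)/dt = k * alpha^m * (1 - alpha)^n, from synthetic rate data. This illustrates the method generically; the swarm settings and data are placeholders, not the paper's implementation or materials.

```python
# Hedged sketch: minimal PSO fitting a single-step autocatalytic
# cure model to synthetic "experimental" rate data.
import random

def cure_rate(alpha, k, m, n):
    return k * alpha**m * (1.0 - alpha)**n

TRUE = (2.0, 0.5, 1.5)                          # hidden true parameters
alphas = [i / 20.0 for i in range(1, 20)]
rates = [cure_rate(a, *TRUE) for a in alphas]

def sse(params):
    """Sum of squared errors between model and data."""
    k, m, n = params
    return sum((cure_rate(a, k, m, n) - r) ** 2
               for a, r in zip(alphas, rates))

def pso(obj, bounds, n_particles=30, iters=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [obj(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            f = obj(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best, err = pso(sse, [(0.1, 5.0), (0.0, 2.0), (0.0, 3.0)])
print(best, err)
```

PSO is derivative-free, which is what makes it convenient for comparing structurally different kinetics models against the same calorimetry data, as the study does.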
Abstract:
There is growing concern within the pharmacy profession regarding the numerical competency of students completing their undergraduate studies. In this 7-year study, the numerical competency of first-year pharmacy undergraduate students at the School of Pharmacy, Queen's University Belfast, was assessed both on entry to the MPharm degree and after completion of a basic numeracy course during the first semester of Level 1. The results suggest that students are not retaining fundamental numeracy concepts initially taught at secondary level, and that the level of ability has significantly decreased over the past 7 years. Keywords: Numeracy; calculations; MPharm; assessment
Extracting S-matrix poles for resonances from numerical scattering data: Type-II Padé reconstruction
Abstract:
We present a FORTRAN 77 code for the evaluation of resonance pole positions and residues of a numerical scattering matrix element in the complex energy (CE) as well as the complex angular momentum (CAM) plane. Analytical continuation of the S-matrix element is performed by constructing a type-II Padé approximant from given physical values (Bessis et al. (1994) [42]; Vrinceanu et al. (2000) [24]; Sokolovski and Msezane (2004) [23]). The algorithm involves iterative 'preconditioning' of the numerical data by extracting its rapidly oscillating potential phase component. The code has the capability of adding non-analytical noise to the numerical data in order to select 'true' physical poles, investigate their stability and evaluate the accuracy of the reconstruction. It has an option of employing the multiple-precision (MPFUN) package developed by D.H. Bailey (Bailey (1993) [45]) wherever double-precision calculations fail owing to the large number of input partial waves (energies) involved. The code has been successfully tested on several models, as well as on the F + H2 -> HF + H, F + HD -> HF + D, Cl + HCl -> ClH + Cl and H + D2 -> HD + D reactions. Some detailed examples are given in the text.
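The core idea, fitting a rational approximant to S-matrix values sampled at physical energies and reading resonance poles off the roots of the denominator, can be sketched with a generic least-squares rational fit. This is not the paper's type-II construction or its iterative preconditioning; the S-matrix element and its pole below are synthetic.

```python
# Hedged sketch of the idea behind the code described above: fit
# S(E) ~ P(E)/Q(E) (Q monic) to sampled values by linear least squares,
# then take resonance poles as the roots of Q. Generic construction,
# not the paper's algorithm; the test pole is synthetic.
import numpy as np

def rational_fit_poles(E, S, num_deg=1, den_deg=1):
    """Fit S(E) ~ P(E)/Q(E) with monic Q; return the roots of Q."""
    # Linearised system: P(E_i) - S_i * Q_lower(E_i) = S_i * E_i^den_deg
    cols = [E ** p for p in range(num_deg + 1)]        # numerator coeffs
    cols += [-S * E ** q for q in range(den_deg)]      # denominator coeffs
    A = np.stack(cols, axis=1)
    b = S * E ** den_deg
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    den = np.concatenate(([1.0], coef[num_deg + 1:][::-1]))
    return np.roots(den)

# Synthetic S-matrix element with one complex-energy pole at 2 - 0.5j.
E = np.linspace(0.5, 4.0, 40)
true_pole = 2.0 - 0.5j
S = 1.0 + 0.3 / (E - true_pole)
poles = rational_fit_poles(E.astype(complex), S)
print(poles)
```

In realistic use the degrees are higher and stability of each pole is probed by adding noise and varying the fit, which is exactly the facility the described code provides.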
Abstract:
The use of accelerators, with compute architectures different and distinct from the CPU, has become a new research frontier in high-performance computing over the past five years. This paper is a case study of how the instruction-level parallelism offered by three accelerator technologies, FPGA, GPU and ClearSpeed, can be exploited in atomic physics. The algorithm studied is the evaluation of two-electron integrals by direct numerical quadrature, a task that arises in the study of intermediate-energy electron scattering by hydrogen atoms. The results of our 'productivity' study show that while each accelerator is viable, there are considerable differences in the implementation strategies that must be followed on each.
Abstract:
The hybrid test method is a relatively recently developed dynamic testing technique that combines numerical modelling with simultaneous physical testing. The concept of substructuring allows the critical or highly nonlinear part of the structure, which is difficult to model numerically with accuracy, to be tested physically, whilst the remainder of the structure, which has a more predictable response, is modelled numerically. In this paper, a substructured soft real-time hybrid test is evaluated as an accurate means of performing seismic tests of complex structures. The structure analysed is a three-storey, two-by-one bay concentrically braced frame (CBF) steel structure subjected to seismic excitation. A ground-storey braced frame substructure, whose response is critical to the overall response of the structure, is tested, whilst the remainder of the structure is modelled numerically. OpenSees is used for the numerical modelling and OpenFresco for the communication between the test equipment and the numerical model. A novel approach using OpenFresco to define the complex numerical substructure of an X-braced frame within a hybrid test is also presented. The results of the hybrid tests are compared to purely numerical models using OpenSees and to a simulated test using a combination of OpenSees and OpenFresco. The comparative results indicate that the test method provides an accurate and cost-effective procedure for performing full-scale seismic tests of complex structural systems.
Abstract:
Our review of paleoclimate information for New Zealand pertaining to the past 30,000 years has identified a general sequence of climatic events, spanning the onset of cold conditions marking the final phase of the Last Glaciation, through to the emergence to full interglacial conditions in the early Holocene. In order to facilitate more detailed assessments of climate variability and any leads or lags in the timing of climate changes across the region, a composite stratotype is proposed for New Zealand. The stratotype is based on terrestrial stratigraphic records and is intended to provide a standard reference for the intercomparison and evaluation of climate proxy records. We nominate a specific stratigraphic type record for each climatic event, using either natural exposure or drill core stratigraphic sections. Type records were selected on the basis of having very good numerical age control and a clear proxy record. In all cases the main proxy of the type record is subfossil pollen. The type record for the period from ca 30 to ca 18 calendar kiloyears BP (cal. ka BP) is designated in lake-bed sediments from a small morainic kettle lake (Galway tarn) in western South Island. The Galway tarn type record spans a period of full glacial conditions (Last Glacial Coldest Period, LGCP) within the Otira Glaciation, and includes three cold stadials separated by two cool interstadials. The type record for the emergence from glacial conditions following the termination of the Last Glaciation (post-Termination amelioration) is in a core of lake sediments from a maar (Pukaki volcanic crater) in Auckland, northern North Island, and spans from ca 18 to 15.64±0.41 cal. ka BP. The type record for the Lateglacial period is an exposure of interbedded peat and mud at montane Kaipo bog, eastern North Island. In this high-resolution type record, an initial mild period was succeeded at 13.74±0.13 cal. ka BP by a cooler period, which after 12.55±0.14 cal. ka BP gave way to a progressive ascent to full interglacial conditions that were achieved by 11.88±0.18 cal. ka BP. Although a type section is not formally designated for the Holocene Interglacial (11.88±0.18 cal. ka BP to the present day), the sedimentary record of Lake Maratoto on the Waikato lowlands, northwestern North Island, is identified as a prospective type section pending the integration and updating of existing stratigraphic and proxy datasets, and age models. The type records are interconnected by one or more dated tephra layers, the ages of which are derived from Bayesian depositional modelling and OxCal-based calibrations using the IntCal09 dataset. Along with the type sections and the Lake Maratoto record, important, well-dated terrestrial reference records are provided for each climate event. Climate proxies from these reference records include pollen flora, stable isotopes from speleothems, beetle and chironomid fauna, and glacier moraines. The regional composite stratotype provides a benchmark against which to compare other records and proxies. Based on the composite stratotype, we provide an updated climate event stratigraphic classification for the New Zealand region. © 2013 Elsevier Ltd.
Abstract:
This paper evaluates the potential of gabions as roadside safety barriers. Gabions have the capacity to blend into the natural landscape, suggesting that they could be used as a safety barrier for low-volume roads in scenic environments. In fact, gabions have already been used for this purpose in Nepal, but the impact response was not evaluated. This paper reports on numerical and experimental investigations performed on a new gabion barrier prototype. To assess its potential use as a roadside barrier, the optimal gabion unit size and mass were investigated using multibody analysis, and four sets of 1:4-scale crash tests were carried out to study the local vehicle-barrier interaction. The barrier prototype was then finalised and subjected to a TB31 crash test according to the European EN 1317 standard for N1 safety barriers. The test resulted in a failure owing to rollover of the vehicle and tearing of the gabion mesh, yielding a large working width. It was found that, although the system potentially has the necessary mass to contain a vehicle, the barrier front face does not have the necessary stiffness and strength to contain the gabion stone filling and hence redirect the vehicle. In the EN 1317 test, the gabion barrier acted as a ramp for the impacting vehicle, causing rollover.
Abstract:
A range of lanthanum strontium manganates (La1−xSrxMnO3, LSMO), where 0 ≤ x < 0.4, were prepared using a modified peroxide sol–gel synthesis method. The magnetic nanoparticle (MNP) clusters obtained for each of the materials were characterised using scanning electron microscopy (SEM), X-ray powder diffraction (XRD) and infra-red (IR) spectroscopy in order to confirm the crystalline phases, crystallite size and cluster morphology. The magnetic properties of the materials were assessed using a superconducting quantum interference device (SQUID) to evaluate the magnetic susceptibility, Curie temperature (Tc) and static hysteretic losses. Induction heating experiments also provided an insight into the magnetocaloric effect for each material. The specific absorption rate (SAR) of the materials was evaluated experimentally and via numerical simulations. The magnetic properties and heating data were linked with the crystalline structure to make predictions with respect to the best LSMO composition for mild hyperthermia (41 °C ≤ T ≤ 46 °C). La0.65Sr0.35MnO3, with a crystallite diameter of 82.4 nm (agglomerate size of ∼10 μm), Tc of 89 °C and SAR of 56 W g_Mn^−1 at a concentration of 10 mg mL^−1, gave the optimal induction heating results (Tmax of 46.7 °C) and was therefore deemed most suitable for the purposes of mild hyperthermia, vide infra.
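A specific absorption rate of the kind quoted above is commonly evaluated from the initial slope of an induction-heating curve via SAR = C * m_sample * (dT/dt) / m_magnetic. The sketch below shows that standard estimate; the heating slope, heat capacity and masses are illustrative placeholders, not the paper's measurements.

```python
# Hedged sketch of a standard initial-slope SAR estimate:
#   SAR = C * m_sample * (dT/dt) / m_magnetic,
# in watts per gram of the magnetic element. All numbers below are
# illustrative placeholders, not the paper's data.

def sar(dT_dt, c_specific, m_sample, m_magnetic):
    """dT_dt: initial heating slope (K/s); c_specific: J/(g K) of the
    suspension; m_sample, m_magnetic: grams. Returns W/g(magnetic)."""
    return c_specific * m_sample * dT_dt / m_magnetic

# e.g. 1 g of a hypothetical 10 mg/mL aqueous suspension with a
# water-like heat capacity and an assumed initial slope of 0.134 K/s.
print(sar(0.134, 4.18, 1.0, 0.01))   # W per gram of magnetic material
```

Because the estimate uses only the initial slope, it assumes negligible heat losses over that interval; corrected (e.g. loss-compensated) variants exist, but the simple form above is the usual starting point.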