63 results for numerical modelling


Relevance:

30.00%

Publisher:

Abstract:

Oil rig mooring lines have traditionally consisted of chain and wire rope. As production has moved into deeper water it has proved advantageous to incorporate sections of fibre rope into the mooring lines. However, this has highlighted torsional interaction problems that can occur when ropes of different types are joined together. This paper describes a method by which the torsional properties of ropes can be modelled and can then be used to calculate the rotation and torque for two ropes connected in series. The method uses numerical representations of the torsional characteristics of both the ropes, and equates the torque generated in each rope under load to determine the rotation at the connection point. Data from rope torsional characterization tests have been analysed to derive constants used in the numerical model. Constants are presented for: a six-strand wire rope; a torque-balanced fibre rope; and a fibre rope that has been designed to be torque-matched to stranded wire rope. The calculation method has been verified by comparing predicted rotations with measured test values. Worked examples are given for a six-strand wire rope connected, firstly, to a torque-balanced fibre rope that offers little rotational restraint, and, secondly, to a fibre rope whose torsional properties are matched to those of the wire rope.
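The equilibrium calculation can be sketched numerically. The cubic torque law and the rope constants below are hypothetical stand-ins for the fitted characterisation constants, and bisection stands in for whatever root-finder the paper uses:

```python
def torque(a, k, F, theta):
    # Torque generated by a rope under tension F with twist theta:
    # a load-induced term a*F plus a (mildly nonlinear) torsional
    # stiffness term. Both constants are hypothetical.
    return a * F + k * theta + 0.05 * k * theta ** 3

def connection_rotation(rope1, rope2, F, lo=-10.0, hi=10.0, tol=1e-9):
    """Rotation theta at the joint where both ropes carry equal torque.

    Rope 1 is twisted by +theta and rope 2 by -theta, so equilibrium
    means torque1(theta) - torque2(-theta) = 0, solved by bisection.
    """
    a1, k1 = rope1
    a2, k2 = rope2
    def residual(theta):
        return torque(a1, k1, F, theta) - torque(a2, k2, F, -theta)
    f_lo = residual(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (residual(mid) > 0) == (f_lo > 0):
            lo, f_lo = mid, residual(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Six-strand wire rope (strong load-torque coupling) joined to a
# torque-balanced fibre rope (negligible coupling, low stiffness):
wire = (0.8, 50.0)   # (a, k) -- made-up constants
fibre = (0.0, 5.0)
theta = connection_rotation(wire, fibre, F=100.0)
```

With a torque-balanced fibre rope offering little restraint, the joint rotates until the fibre rope's modest torsional stiffness balances the wire rope's load-generated torque; a torque-matched fibre rope would instead give near-zero rotation.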

Relevance:

30.00%

Publisher:

Abstract:

Elevated levels of low-density-lipoprotein cholesterol (LDL-C) in the plasma are a well-established risk factor for the development of coronary heart disease. Plasma LDL-C levels are in part determined by the rate at which LDL particles are removed from the bloodstream by hepatic uptake. The uptake of LDL by mammalian liver cells occurs mainly via receptor-mediated endocytosis, a process which entails the binding of these particles to specific receptors in specialised areas of the cell surface, the subsequent internalization of the receptor-lipoprotein complex, and ultimately the degradation and release of the ingested lipoproteins' constituent parts. We formulate a mathematical model to study the binding and internalization (endocytosis) of LDL and VLDL particles by hepatocytes in culture. The system of ordinary differential equations, which includes a cholesterol-dependent pit production term representing feedback regulation of surface receptors in response to intracellular cholesterol levels, is analysed using numerical simulations and steady-state analysis. Our numerical results show good agreement with in vitro experimental data describing LDL uptake by cultured hepatocytes following delivery of a single bolus of lipoprotein. Our model is adapted in order to reflect the in vivo situation, in which lipoproteins are continuously delivered to the hepatocyte. In this case, our model suggests that the competition between the LDL and VLDL particles for binding to the pits on the cell surface affects the intracellular cholesterol concentration. In particular, we predict that when there is continuous delivery of low levels of lipoproteins to the cell surface, more VLDL than LDL occupies the pit, since VLDL are better competitors for receptor binding. 
VLDL have a cholesterol content comparable to LDL particles; however, due to the larger size of VLDL, one pit-bound VLDL particle blocks the binding of several LDLs, and there is a resultant drop in the intracellular cholesterol level. When there is continuous delivery of lipoprotein at high levels to the hepatocytes, VLDL particles still out-compete LDL particles for receptor binding, and consequently more VLDL than LDL particles occupy the pit. Although the maximum intracellular cholesterol level is similar for high and low levels of lipoprotein delivery, the maximum is reached more rapidly when the lipoprotein delivery rates are high. The implications of these results for the design of in vitro experiments are discussed.
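A stripped-down version of such a model can be sketched as follows; the rate laws, the cholesterol-dependent pit-production feedback and all constants are illustrative, not those fitted in the paper:

```python
def simulate(L0=1.0, R0=1.0, dt=0.001, steps=20000,
             k_bind=1.0, k_int=1.0, k_deg=0.5, p_max=0.2, K=1.0):
    """Forward-Euler integration of a toy bolus-uptake model.

    L: extracellular LDL, R: free surface pits, B: receptor-bound LDL,
    C: intracellular cholesterol. The pit production term
    p_max * K / (K + C) falls as cholesterol accumulates, mimicking
    feedback regulation of surface receptors.
    """
    L, R, B, C = L0, R0, 0.0, 0.0
    history = []
    for _ in range(steps):
        bind = k_bind * L * R           # LDL binding to free pits
        internalise = k_int * B         # endocytosis of bound LDL
        produce = p_max * K / (K + C)   # cholesterol-suppressed pit supply
        L += dt * (-bind)
        R += dt * (-bind + produce)
        B += dt * (bind - internalise)
        C += dt * (internalise - k_deg * C)  # release, then use/export
        history.append((L, R, B, C))
    return history

history = simulate()  # single bolus: L decays, C rises then relaxes
```

Swapping the single-bolus initial condition for a constant delivery term in the L equation gives the in vivo variant discussed in the abstract.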

Relevance:

30.00%

Publisher:

Abstract:

The budgets of seven halogenated gases (CFC-11, CFC-12, CFC-113, CFC-114, CFC-115, CCl4 and SF6) are studied by comparing measurements in polar firn air from two Arctic and three Antarctic sites with the simulation results of two numerical models: a 2-D atmospheric chemistry model and a 1-D firn diffusion model. The first is used to calculate atmospheric concentrations from emission trends based on industrial inventories; the calculated concentration trends are then used by the second to produce depth-concentration profiles in the firn. The 2-D atmospheric model is validated in the boundary layer by comparison with atmospheric station measurements, and vertically for CFC-12 by comparison with balloon and FTIR measurements. Firn air measurements provide constraints on historical atmospheric concentrations over the last century. Age distributions in the firn are discussed using a Green function approach. Finally, our results are used as input to a radiative model in order to evaluate the radiative forcing of our target gases. Multi-species and multi-site firn air studies allow atmospheric trends to be better constrained. The low concentrations of all the studied gases at the bottom of the firn, and their consistency with our model results, confirm that their natural sources are small. Our results indicate that the emissions, sinks and trends of CFC-11, CFC-12, CFC-113, CFC-115 and SF6 are well constrained, whereas this is not the case for CFC-114 and CCl4. Significant emission-dependent changes in the lifetimes of halocarbons destroyed in the stratosphere were obtained; these result from the time needed for transport from the surface, where the gases are emitted, to the stratosphere, where they are destroyed. Efforts should be made to update and reduce the large uncertainties on CFC lifetimes.
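The firn part of such a calculation can be caricatured with an explicit 1-D diffusion scheme; the grid, stability number and surface forcing below are invented for illustration and are not the paper's firn model:

```python
def firn_profile(surface_history, depth_cells=50, d=0.2):
    """Toy explicit 1-D diffusion of a trace gas into a firn column.

    surface_history: atmospheric concentration at each time step; the
    top cell is pinned to it, the bottom boundary is no-flux.
    d = D*dt/dz**2 is the stability number and must stay below 0.5.
    """
    c = [0.0] * depth_cells
    for s in surface_history:
        c[0] = s                      # surface follows the atmosphere
        new = c[:]
        for i in range(1, depth_cells - 1):
            new[i] = c[i] + d * (c[i - 1] - 2 * c[i] + c[i + 1])
        new[-1] = c[-1] + d * (c[-2] - c[-1])  # no-flux bottom
        c = new
    return c

# A steadily rising atmospheric trend leaves older, lower
# concentrations at depth -- the "memory" exploited by firn studies.
profile = firn_profile([t / 1000 for t in range(1000)])
```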

Relevance:

30.00%

Publisher:

Abstract:

This paper introduces a new fast, effective and practical model structure construction algorithm for a mixture of experts network system, utilising only process data. The algorithm is based on a novel forward constrained regression procedure. Given a full set of the experts as potential model bases, the structure construction algorithm selects the most significant model base one by one so as to minimise the overall system approximation error at each iteration, while the gate parameters in the mixture of experts network system are adjusted accordingly so as to satisfy the convex constraints required in the derivation of the forward constrained regression procedure. The procedure continues until a proper system model is constructed that utilises some or all of the experts. A pruning algorithm for the resulting mixture of experts network system is also derived, yielding an overall parsimonious construction algorithm. Numerical examples are provided to demonstrate the effectiveness of the new algorithms. The mixture of experts network framework can be applied to a wide variety of applications, ranging from multiple model controller synthesis to multi-sensor data fusion.
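The forward-selection idea can be illustrated with a small sketch. The code below is a generic greedy convex-mixing loop in the spirit of forward constrained regression, not the authors' algorithm; the experts and target are made up:

```python
def forward_convex_selection(experts, target, n_select=4, alphas=None):
    """Greedy forward construction of a convex mixture of expert outputs.

    experts: list of candidate prediction vectors, one per expert.
    Starts from the best single expert, then repeatedly mixes in the
    (expert, alpha) pair that most reduces the squared error of
    (1 - alpha) * current + alpha * expert. Because every step is a
    convex combination, the expert weights stay non-negative and sum
    to one throughout, mimicking the convex gating constraint.
    """
    if alphas is None:
        alphas = [i / 20 for i in range(21)]  # 0.0, 0.05, ..., 1.0
    def sse(p):
        return sum((pi - ti) ** 2 for pi, ti in zip(p, target))
    best0 = min(range(len(experts)), key=lambda j: sse(experts[j]))
    weights = [0.0] * len(experts)
    weights[best0] = 1.0
    pred = list(experts[best0])
    for _ in range(n_select - 1):
        best = None
        for j, e in enumerate(experts):
            for a in alphas:
                cand = [(1 - a) * p + a * ej for p, ej in zip(pred, e)]
                err = sse(cand)
                if best is None or err < best[0]:
                    best = (err, j, a, cand)
        _, j, a, pred = best
        weights = [(1 - a) * w for w in weights]
        weights[j] += a
    return weights, pred

# Hypothetical example: the target is an exact convex blend of experts.
x = [i / 10 for i in range(11)]
experts = [[1.0] * 11, x, [xi * xi for xi in x]]
target = [0.3 + 0.7 * xi for xi in x]
weights, pred = forward_convex_selection(experts, target)
```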

Relevance:

30.00%

Publisher:

Abstract:

In this article a simple and effective controller design is introduced for Hammerstein systems identified from observational input/output data. The nonlinear static function in the Hammerstein system is modelled using a B-spline neural network. The controller is composed of the inverse of the B-spline-approximated nonlinear static function together with a linear pole-assignment controller. The contribution of this article is an inverse De Boor algorithm that computes this inverse efficiently. Mathematical analysis is provided to prove the convergence of the proposed algorithm. Numerical examples are utilised to demonstrate the efficacy of the proposed approach.
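The evaluation half of the story is the classical De Boor algorithm; below is a sketch that evaluates a monotone B-spline and recovers its inverse by bisection. The bisection step is a stand-in for the article's inverse De Boor algorithm, and the knots and coefficients are hypothetical:

```python
def de_boor(x, knots, coeffs, p):
    """Evaluate a degree-p B-spline at x by De Boor's algorithm."""
    # Locate the knot span k with knots[k] <= x < knots[k+1].
    k = p
    while k + 1 < len(coeffs) and x >= knots[k + 1]:
        k += 1
    d = [coeffs[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = ((x - knots[j + k - p])
                     / (knots[j + 1 + k - r] - knots[j + k - p]))
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

def invert_monotone(y, f, lo, hi, tol=1e-10):
    """Invert a monotone increasing scalar function by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical monotone quadratic B-spline on [0, 3] (increasing
# coefficients give an increasing spline), clamped at both ends:
knots = [0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 3.0, 3.0]
coeffs = [0.0, 0.5, 1.5, 2.5, 3.0]
f = lambda x: de_boor(x, knots, coeffs, 2)
y = f(1.7)
x_rec = invert_monotone(y, f, 0.0, 3.0)  # recovers the pre-image of y
```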

Relevance:

30.00%

Publisher:

Abstract:

We make a qualitative and quantitative comparison of numerical simulations of the ash cloud generated by the eruption of Eyjafjallajökull in April 2010 with ground-based lidar measurements at Exeter and Cardington in southern England. The numerical simulations are performed using the Met Office’s dispersion model, NAME (Numerical Atmospheric-dispersion Modelling Environment). The results show that NAME captures many of the features of the observed ash cloud. The comparison enables us to estimate the fraction of material which survives the near-source fallout processes and enters the distal plume. A number of simulations are performed which show that both the structure of the ash cloud over southern England and the concentration of ash within it are particularly sensitive to the height of the eruption column (and the consequent estimated mass emission rate), to the shape of the vertical source profile and to the level of prescribed ‘turbulent diffusion’ (representing the mixing by the unresolved eddies) in the free troposphere, with less sensitivity to the timing of the start of the eruption and to the sedimentation of particulates in the distal plume.
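NAME is a Lagrangian particle dispersion model; the toy one-dimensional random walk below (not NAME itself, and with made-up numbers) shows how the prescribed turbulent diffusivity controls plume spread:

```python
import random

def disperse(n=2000, steps=100, dt=60.0, u=10.0, K=50.0, seed=1):
    """Toy 1-D Lagrangian particle dispersion: each particle is advected
    by a mean wind u and takes a random-walk step of standard deviation
    sqrt(2*K*dt), the usual stochastic representation of turbulent
    diffusion with diffusivity K (units are nominal SI)."""
    rng = random.Random(seed)
    sigma = (2.0 * K * dt) ** 0.5
    xs = [0.0] * n
    for _ in range(steps):
        xs = [x + u * dt + rng.gauss(0.0, sigma) for x in xs]
    return xs

def spread(xs):
    """Standard deviation of particle positions (plume width)."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

plume_low = spread(disperse(K=10.0))    # weak prescribed diffusion
plume_high = spread(disperse(K=100.0))  # strong prescribed diffusion
```

The plume width grows like sqrt(2*K*t), so a tenfold increase in K widens the plume by roughly sqrt(10), which is the kind of sensitivity the abstract refers to.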

Relevance:

30.00%

Publisher:

Abstract:

At the end of the 20th century, we can look back on a spectacular development of numerical weather prediction, which has gone on practically uninterrupted since the middle of the century. High-resolution predictions for more than a week ahead for any part of the globe are now routinely produced, and anyone with an Internet connection can access many of these forecasts for anywhere in the world. Extended predictions for several seasons ahead are also being made — the latest El Niño event in 1997/1998 is an example of such a successful prediction. The great achievement is due to a number of factors, including the progress in computational technology and the establishment of global observing systems, combined with a systematic research program with an overall strategy towards building comprehensive prediction systems for climate and weather. In this article, I will discuss the different evolutionary steps in this development and the ways in which new scientific ideas have contributed to exploiting computing power efficiently and to using observations from new types of observing systems. Weather prediction is not an exact science, owing to unavoidable errors in initial data and in the models. Quantifying the reliability of a forecast is therefore essential, probably more so the longer the forecast range. Ensemble prediction is thus a new and important concept in weather and climate prediction, which I believe will become a routine aspect of weather prediction in the future. The limit between weather and climate prediction is becoming more and more diffuse, and in the final part of this article I will outline the way I think development may proceed in the future.
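The ensemble idea can be illustrated on the classic Lorenz-63 system (a standard chaotic toy, not an operational model): members launched from slightly perturbed initial conditions diverge, and the growth of the ensemble spread quantifies how quickly forecast confidence decays:

```python
import random

def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    # One forward-Euler step of the Lorenz-63 system.
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

def ensemble_spread(n=10, steps=500, eps=1e-4, seed=0):
    """Run an ensemble from slightly perturbed initial conditions and
    return the spread (std dev of x) at the start and at the end."""
    rng = random.Random(seed)
    members = [(1.0 + rng.uniform(-eps, eps), 1.0, 1.0) for _ in range(n)]
    def spread(ms):
        xs = [m[0] for m in ms]
        mu = sum(xs) / len(xs)
        return (sum((x - mu) ** 2 for x in xs) / len(xs)) ** 0.5
    s0 = spread(members)
    for _ in range(steps):
        members = [lorenz_step(*m) for m in members]
    return s0, spread(members)

s_initial, s_final = ensemble_spread()  # spread grows with lead time
```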

Relevance:

30.00%

Publisher:

Abstract:

Details are given of a boundary-fitted mesh generation method for use in modelling free surface flow and water quality. A numerical method has been developed for generating conformal meshes for curvilinear polygonal and multiply-connected regions. The method is based on the Cauchy-Riemann conditions for an analytic function and is able to map a curvilinear polygonal region directly onto a regular polygonal region with horizontal and vertical sides. A set of equations has been derived for determining the lengths of these sides, and the least-squares method has been used to solve the equations. Several numerical examples are presented to illustrate the method.
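The Cauchy-Riemann conditions that underpin such conformal mappings are easy to check numerically; the sketch below (illustrative only, not the paper's mesh generator) verifies them by finite differences for an analytic map and shows their failure for a non-analytic one:

```python
def cr_residual(f, x, y, h=1e-6):
    """Finite-difference check of the Cauchy-Riemann conditions
    u_x = v_y and u_y = -v_x for w = f(z) = u + i*v at (x, y)."""
    ux = (f(complex(x + h, y)).real - f(complex(x - h, y)).real) / (2 * h)
    uy = (f(complex(x, y + h)).real - f(complex(x, y - h)).real) / (2 * h)
    vx = (f(complex(x + h, y)).imag - f(complex(x - h, y)).imag) / (2 * h)
    vy = (f(complex(x, y + h)).imag - f(complex(x, y - h)).imag) / (2 * h)
    return abs(ux - vy) + abs(uy + vx)

# An analytic map (z -> z**2) satisfies the conditions, so it maps a
# grid conformally; complex conjugation does not.
analytic = cr_residual(lambda z: z * z, 1.0, 0.5)
non_analytic = cr_residual(lambda z: z.conjugate(), 1.0, 0.5)
```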

Relevance:

30.00%

Publisher:

Abstract:

A key step in many numerical schemes for time-dependent partial differential equations with moving boundaries is to rescale the problem to a fixed numerical mesh. An alternative approach is to use a moving mesh that can be adapted to focus on specific features of the model. In this paper we present and discuss two different velocity-based moving mesh methods applied to a two-phase model of avascular tumour growth formulated by Breward et al. (2002) J. Math. Biol. 45(2), 125-152. Each method has one moving node which tracks the moving boundary. The first moving mesh method uses a mesh velocity proportional to the boundary velocity. The second moving mesh method uses local conservation of volume fraction of cells (masses). Our results demonstrate that these moving mesh methods produce accurate results, offering higher resolution where desired whilst preserving the balance of fluxes and sources in the governing equations.
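The first of these methods, a mesh velocity proportional to the boundary velocity, can be sketched in one dimension; here the boundary motion is prescribed rather than solved from the tumour model:

```python
def moving_mesh(n=11, steps=100, dt=0.01, v_boundary=1.0, b0=1.0):
    """Advance a 1-D mesh on [0, b(t)] whose end node tracks the moving
    boundary and whose interior nodes move with velocity proportional
    to the boundary velocity: v_i = (x_i / b) * db/dt. This keeps each
    node at a fixed fraction of the domain length."""
    b = b0
    xs = [b0 * i / (n - 1) for i in range(n)]
    for _ in range(steps):
        xs = [x + dt * (x / b) * v_boundary for x in xs]
        b += dt * v_boundary
    return xs, b

xs, b = moving_mesh()  # domain grows from length 1 to length 2
```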

Relevance:

30.00%

Publisher:

Abstract:

When studying hydrological processes with a numerical model, global sensitivity analysis (GSA) is essential if one is to understand the impact of model parameters and model formulation on results. However, different definitions of sensitivity can lead to a difference in the ranking of importance of the different model factors. Here we combine a fuzzy performance function with different methods of calculating global sensitivity to perform a multi-method global sensitivity analysis (MMGSA). We use an application of a finite element subsurface flow model (ESTEL-2D) on a flood inundation event on a floodplain of the River Severn to illustrate this new methodology. We demonstrate the utility of the method for model understanding and show how the prediction of state variables, such as Darcian velocity vectors, can be affected by such a MMGSA. This paper is a first attempt to use GSA with a numerically intensive hydrological model.
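The point that different sensitivity definitions can rank factors differently is easy to demonstrate on a toy model (invented here, unrelated to ESTEL-2D): a local one-at-a-time measure and a variance-based measure disagree when one factor acts purely nonlinearly.

```python
import random

def model(x1, x2):
    # Toy model: weak linear effect of x1, purely nonlinear effect of x2.
    return 0.2 * x1 + x2 ** 2

def local_oat(h=1e-3):
    """One-at-a-time derivative-based sensitivity at the nominal point (0, 0)."""
    s1 = abs(model(h, 0.0) - model(-h, 0.0)) / (2 * h)
    s2 = abs(model(0.0, h) - model(0.0, -h)) / (2 * h)
    return s1, s2

def variance_based(n=20000, seed=0):
    """Crude variance-based sensitivity over uniform [-1, 1] inputs.
    The model is additive in x1 and x2**2, so the main effects separate."""
    rng = random.Random(seed)
    xs1 = [rng.uniform(-1, 1) for _ in range(n)]
    xs2 = [rng.uniform(-1, 1) for _ in range(n)]
    def var(v):
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / len(v)
    return var([0.2 * x for x in xs1]), var([x * x for x in xs2])

oat_x1, oat_x2 = local_oat()            # ranks x1 above x2
var_x1, var_x2 = variance_based()       # reverses the ranking
```

The reversal is exactly why a multi-method GSA, which examines several definitions of sensitivity at once, is useful.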

Relevance:

30.00%

Publisher:

Abstract:

This paper seeks to elucidate the fundamental differences between the nonconservation of potential temperature and that of Conservative Temperature, in order to better understand the relative merits of each quantity for use as the heat variable in numerical ocean models. The main result is that potential temperature is found to behave similarly to entropy, in the sense that its nonconservation primarily reflects production/destruction by surface heat and freshwater fluxes; in contrast, the nonconservation of Conservative Temperature is found to reflect primarily the overall compressible work of expansion/contraction. This paper then shows how this can be exploited to constrain the nonconservation of potential temperature and entropy from observed surface heat fluxes, and the nonconservation of Conservative Temperature from published estimates of the mechanical energy budgets of ocean numerical models. Finally, the paper shows how to modify the evolution equation for potential temperature so that it is exactly equivalent to using an exactly conservative evolution equation for Conservative Temperature, as was recently recommended by IOC et al. (2010). This result should in principle allow ocean modellers to test the equivalence between the two formulations, and to indirectly investigate to what extent the budget of derived nonconservative quantities such as buoyancy and entropy can be expected to be accurately represented in ocean models.

Relevance:

30.00%

Publisher:

Abstract:

We present a general approach based on nonequilibrium thermodynamics for bridging the gap between a well-defined microscopic model and the macroscopic rheology of particle-stabilised interfaces. Our approach is illustrated by starting with a microscopic model of hard ellipsoids confined to a planar surface, which is intended to simply represent a particle-stabilised fluid–fluid interface. More complex microscopic models can be readily handled using the methods outlined in this paper. From the aforementioned microscopic starting point, we obtain the macroscopic, constitutive equations using a combination of systematic coarse-graining, computer experiments and Hamiltonian dynamics. Exemplary numerical solutions of the constitutive equations are given for a variety of experimentally relevant flow situations to explore the rheological behaviour of our model. In particular, we calculate the shear and dilatational moduli of the interface over a wide range of surface coverages, ranging from the dilute isotropic regime, to the concentrated nematic regime.

Relevance:

30.00%

Publisher:

Abstract:

This paper uses a novel numerical optimization technique - robust optimization - that is well suited to solving the asset-liability management (ALM) problem for pension schemes. It requires the estimation of fewer stochastic parameters, reduces estimation risk and adopts a prudent approach to asset allocation. This study is the first to apply it to a real-world pension scheme, and the first ALM model of a pension scheme to maximise the Sharpe ratio. We disaggregate pension liabilities into three components - active members, deferred members and pensioners - and transform the optimal asset allocation into the scheme’s projected contribution rate. The robust optimization model is extended to include liabilities and used to derive optimal investment policies for the Universities Superannuation Scheme (USS), benchmarked against the Sharpe and Tint, Bayes-Stein, and Black-Litterman models as well as the actual USS investment decisions. Over a 144-month out-of-sample period robust optimization is superior to the four benchmarks across 20 performance criteria, and has a remarkably stable asset allocation – essentially fixed-mix. These conclusions are supported by six robustness checks.
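Maximising the Sharpe ratio of an asset allocation can be sketched in miniature; the brute-force grid search and two-asset figures below are illustrative only, not the robust optimization model applied to USS:

```python
def sharpe(w, means, cov, rf=0.0):
    """Sharpe ratio of portfolio weights w, given expected returns
    and a covariance matrix (excess over the risk-free rate rf)."""
    mu = sum(wi * mi for wi, mi in zip(w, means)) - rf
    var = sum(w[i] * w[j] * cov[i][j]
              for i in range(len(w)) for j in range(len(w)))
    return mu / var ** 0.5

def max_sharpe_two_assets(means, cov, steps=1000):
    """Grid search over long-only weights (w, 1 - w) for two assets."""
    best_w, best_s = 0.0, float('-inf')
    for k in range(steps + 1):
        w = k / steps
        s = sharpe((w, 1.0 - w), means, cov)
        if s > best_s:
            best_w, best_s = w, s
    return best_w, best_s

# Hypothetical growth and liability-matching assets (annualised):
means = (0.08, 0.04)
cov = [[0.04, 0.01], [0.01, 0.02]]
w_opt, s_opt = max_sharpe_two_assets(means, cov)
```

A real ALM formulation would, as the paper does, add liability terms and uncertainty sets to this objective; the sketch only shows the Sharpe-maximising allocation step.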