961 results for experimental modelling


Relevance: 30.00%

Abstract:

White clover (Trifolium repens) is an important pasture legume, but it is often difficult to sustain in a mixed sward because of, among other things, the damage to roots caused by the soil-dwelling larval stages of the clover root weevil Sitona lepidus. Locating the root nodules on white clover roots is crucial for the survival of the newly hatched larvae. This paper presents a numerical model to simulate the movement of newly hatched S. lepidus larvae towards root nodules, guided by a chemical signal released by the nodules. The model is based on the diffusion-chemotaxis equation. Experimental observations showed that the average speed of the larvae remained approximately constant, so the diffusion-chemotaxis model was modified so that the larvae respond only to the gradient direction of the chemical signal, not to its magnitude. An individual-based lattice Boltzmann method was used to simulate the movement of individual larvae, and the parameters required for the model were estimated from measurements of larval movement towards nodules in soil scanned using X-ray microtomography. The model was used to investigate the effects of nodule density, the rate of release of the chemical signal, the sensitivity of the larvae to the signal, and the random foraging of the larvae on larval movement and subsequent survival. The simulations showed that the most significant factors for larval survival were nodule density and the sensitivity of the larvae to the signal. The dependence of larval survival rate on nodule density was well fitted by Michaelis-Menten kinetics. (c) 2005 Elsevier B.V. All rights reserved.
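A minimal sketch of the movement rule described in this abstract, assuming a single point-source nodule in 2D; the speed, bias weight and capture radius are illustrative values, not the paper's parameters, and the full lattice Boltzmann machinery is replaced by a plain biased random walk:

```python
import numpy as np

rng = np.random.default_rng(0)

nodule = np.array([0.0, 0.0])   # chemical point source (a root nodule)
speed = 1.0                     # constant larval speed (illustrative units)
bias = 0.7                      # weight on gradient direction vs. random foraging
dt = 0.1
capture_radius = 0.5

def step(pos):
    """Advance one larva by one time step.

    The larva senses only the *direction* of the chemical gradient
    (which for a point source points straight at the nodule), not its
    magnitude, and blends it with a random foraging direction.
    """
    grad_dir = nodule - pos
    grad_dir /= np.linalg.norm(grad_dir)
    rand_dir = rng.normal(size=2)
    rand_dir /= np.linalg.norm(rand_dir)
    heading = bias * grad_dir + (1.0 - bias) * rand_dir
    heading /= np.linalg.norm(heading)
    return pos + speed * dt * heading   # speed stays constant

pos = np.array([10.0, 5.0])
for t in range(2000):
    pos = step(pos)
    if np.linalg.norm(pos - nodule) < capture_radius:
        print(f"nodule reached after {t + 1} steps")
        break
else:
    print("larva failed to locate a nodule")
```

The fitted survival relationship then has the Michaelis-Menten form S(n) = S_max n / (K + n) in nodule density n, with S_max and K as fitted constants.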

Relevance: 30.00%

Abstract:

The complex and variable composition of natural sediments makes it very difficult to predict the bioavailability and bioaccumulation of sediment-bound contaminants. Several approaches have been proposed to overcome this problem, including an experimental model using artificial particles, with or without humic acids as a source of organic matter. In this work, we applied this experimental model, alongside a sample of natural sediment, to investigate the uptake and bioaccumulation of 2,4-dichlorophenol (2,4-DCP) by Sphaerium corneum. Additionally, the particle-water partition coefficients (Kd) were calculated. The results showed that the bioaccumulation of 2,4-DCP by the clams depended not only on the level of dissolved chemical but also on the amount sorbed onto the particles and on the characteristics and strength of that binding. This study confirms the value of artificial particles as a suitable experimental model for assessing the fate of sediment-bound contaminants. (c) 2006 Elsevier Ltd. All rights reserved.
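For reference, the particle-water partition coefficient is conventionally defined as the equilibrium ratio of sorbed to dissolved concentration; a sketch of the standard definition (generic symbols, not necessarily the paper's notation):

```latex
% Particle-water partition coefficient at sorption equilibrium
K_d = \frac{C_s}{C_w}
% C_s: contaminant concentration on the particles (mass per kg dry particles)
% C_w: dissolved contaminant concentration in the water (mass per litre)
```

A high Kd indicates that most of the chemical is particle-bound, the regime in which the strength of that binding matters most for uptake.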

Relevance: 30.00%

Abstract:

Details about the parameters of kinetic systems are crucial for progress in both medical and industrial research, including drug development, clinical diagnosis and biotechnology applications. Such details must be gathered through a series of kinetic experiments and investigations. Correct experimental design is essential to collecting data suitable for analysis, modelling and deriving the correct information. We have developed a systematic and iterative Bayesian method, and sets of rules, for the design of enzyme kinetic experiments. Our method selects the optimum design to collect data suitable for accurate modelling and analysis, and minimises the error in the estimated parameters. The rules select features of the design such as the substrate range and the number of measurements. We show here that this method can be applied directly to the study of other important kinetic systems, including drug transport, receptor binding, microbial culture and cell transport kinetics. It is possible to reduce the errors in the estimated parameters and, most importantly, to increase efficiency and cost-effectiveness by reducing the number of experiments and data points required. (C) 2003 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
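As an illustration of the kind of rule such a method produces, the sketch below scores candidate substrate designs for a Michaelis-Menten experiment by the expected D-optimality (log-determinant of the Fisher information) averaged over a prior on the parameters. The prior ranges, noise model and candidate designs are assumptions made for the example, not the paper's actual rules:

```python
import numpy as np

rng = np.random.default_rng(1)

def fisher_information(S, Vmax, Km):
    """Fisher information for v = Vmax*S/(Km+S) with i.i.d. Gaussian noise."""
    dv_dVmax = S / (Km + S)
    dv_dKm = -Vmax * S / (Km + S) ** 2
    J = np.column_stack([dv_dVmax, dv_dKm])   # sensitivity (design) matrix
    return J.T @ J

def bayesian_d_score(S, n_prior=200):
    """Expected log-det of the information matrix under a prior on (Vmax, Km)."""
    total = 0.0
    for _ in range(n_prior):
        Vmax = rng.uniform(0.5, 2.0)    # assumed prior ranges
        Km = rng.uniform(0.2, 5.0)
        sign, logdet = np.linalg.slogdet(fisher_information(S, Vmax, Km))
        total += logdet
    return total / n_prior

# Candidate designs: 8 substrate concentrations each, different spreads.
designs = {
    "narrow low":  np.linspace(0.1, 1.0, 8),
    "narrow high": np.linspace(5.0, 10.0, 8),
    "wide":        np.geomspace(0.1, 10.0, 8),
}
for name, S in designs.items():
    print(f"{name:12s} score = {bayesian_d_score(S):.2f}")
```

In this toy comparison a wide, roughly geometric spread of substrate concentrations around the prior Km tends to score highest, which is consistent with the idea that a good design exploits prior knowledge of Km.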

Relevance: 30.00%

Abstract:

Kinetic studies on the AR (aldose reductase) protein have shown that it does not behave as a classical enzyme in relation to ring aldose sugars. As with non-enzymatic glycation reactions, there is probably a free radical element involved, derived from monosaccharide autoxidation. In the case of AR, there is free radical oxidation of NADPH by autoxidizing monosaccharides, which is enhanced in the presence of the NADPH-binding protein. Thus any assay for AR based on the oxidation of NADPH in the presence of autoxidizing monosaccharides is invalid, and tissue AR measurements based on this method are also invalid and should be reassessed. AR exhibits broad specificity for both hydrophilic and hydrophobic aldehydes, which suggests that the protein may be involved in detoxification. The last thing we would want to do is to inhibit it. ARIs (AR inhibitors) have a number of actions in the cell that are non-specific and do not involve binding to AR. These include peroxy-radical scavenging and metal ion chelation. The AR/ARI story emphasizes the importance of correct experimental design in all biocatalytic experiments. Developing the use of Bayesian utility functions, we have used a systematic method to identify the optimum experimental designs for a number of kinetic model data sets. This has led to the identification of trends between kinetic model types, sets of design rules and the key conclusion that such designs should be based on some prior knowledge of Km and/or the kinetic model. We suggest an optimal and iterative method for selecting features of the design such as the substrate range, the number of measurements and the choice of intermediate points. The final design collects data suitable for accurate modelling and analysis, minimizes the error in the estimated parameters, and is suitable for simple or complex steady-state models.

Relevance: 30.00%

Abstract:

In areas such as drug development, clinical diagnosis and biotechnology research, acquiring details about the kinetic parameters of enzymes is crucial. The correct design of an experiment is critical to collecting data suitable for analysis, modelling and deriving the correct information. Because classical design methods are not tailored to the more complex kinetics now frequently studied, care is needed to estimate the parameters of such models with low variance. We demonstrate that a Bayesian approach (the use of prior knowledge) can produce major gains, quantifiable in terms of the information, productivity and accuracy of each experiment. Developing the use of Bayesian utility functions, we have used a systematic method to identify the optimum experimental designs for a number of kinetic model data sets. This has enabled the identification of trends between kinetic model types, sets of design rules and the key conclusion that such designs should be based on some prior knowledge of Km and/or the kinetic model. We suggest an optimal and iterative method for selecting features of the design such as the substrate range, the number of measurements and the choice of intermediate points. The final design collects data suitable for accurate modelling and analysis and minimises the error in the parameters estimated. (C) 2003 Elsevier Science B.V. All rights reserved.
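A common way to write the kind of Bayesian utility underlying such designs (a standard formulation, not necessarily the authors' exact notation) is the prior-expected D-optimality of a design d:

```latex
% Bayesian D-optimality: choose the design d maximising the expected
% log-determinant of the Fisher information F over the prior p(theta)
U(d) = \int \log \det F(\theta, d)\, p(\theta)\, d\theta,
\qquad d^{*} = \arg\max_{d} U(d)
```

The iterative element then consists of updating p(theta) after each experiment and re-optimising the design.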

Relevance: 30.00%

Abstract:

The assumption that negligible work is involved in the formation of new surfaces in the machining of ductile metals is re-examined in the light of both current Finite Element Method (FEM) simulations of cutting and modern ductile fracture mechanics. The work associated with separation criteria in FEM models is shown to be in the kJ/m2 range rather than the few J/m2 of the surface energy (surface tension) employed by Shaw in his pioneering study of 1954, following which consideration of surface work has been omitted from analyses of metal cutting. The much greater values of surface specific work are not surprising in terms of ductile fracture mechanics, where kJ/m2 values of fracture toughness are typical of the ductile metals involved in machining studies. This paper shows that when even the simple Ernst–Merchant analysis is generalised to include significant surface work, many of the experimental observations for which traditional ‘plasticity and friction only’ analyses seem to have no quantitative explanation are given meaning. In particular, the primary shear plane angle φ becomes material-dependent. The experimental increase of φ up to a saturated level, as the uncut chip thickness is increased, is predicted. The positive intercepts found in plots of cutting force vs. depth of cut, and in plots of force resolved along the primary shear plane vs. area of shear plane, are shown to be measures of the specific surface work. It is demonstrated that neglect of these intercepts in cutting analyses is the reason why anomalously high values of shear yield stress are derived at those very small uncut chip thicknesses at which the so-called size effect becomes evident. The material toughness/strength ratio, combined with the depth of cut to form a non-dimensional parameter, is shown to control ductile cutting mechanics. The toughness/strength ratio of a given material will change with rate, temperature, and thermomechanical treatment, and the influence of such changes, together with changes in depth of cut, on the character of machining is discussed. Strength or hardness alone is insufficient to describe machining. The failure of the Ernst–Merchant theory seems less to do with problems of uniqueness and the validity of minimum work, and more to do with the problem not being properly posed. The new analysis compares favourably and consistently with the wide body of experimental results available in the literature. Why considerable progress in the understanding of metal cutting has been achieved without reference to significant surface work is also discussed.
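The intercept argument lends itself to a very short calculation: if cutting force per unit width varies linearly with uncut chip thickness, F_c/w = k·h + R, the positive intercept R estimates the specific surface work (toughness). A sketch with made-up data:

```python
import numpy as np

# Synthetic cutting data: force per unit width (N/mm) vs. uncut chip
# thickness h (mm). Values are illustrative, not from the paper.
h = np.array([0.05, 0.10, 0.15, 0.20, 0.25])          # mm
Fc_w = np.array([35.0, 62.0, 91.0, 118.0, 147.0])     # N/mm

# Least-squares line F_c/w = k*h + R: the slope reflects plasticity and
# friction, the positive intercept R measures the specific surface work
# (1 N/mm = 1 kJ/m^2).
k, R = np.polyfit(h, Fc_w, 1)
print(f"slope k = {k:.1f} N/mm^2, intercept R = {R:.1f} N/mm = kJ/m^2")

# Ignoring R and forcing the line through the origin inflates the
# apparent shear stress at small h -- the so-called size effect:
tau_apparent = Fc_w / h
print("apparent stress measure vs h:", np.round(tau_apparent, 1))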

Relevance: 30.00%

Abstract:

This paper discusses experimental and theoretical investigations, and Computational Fluid Dynamics (CFD) modelling considerations, to evaluate the performance of a square-section wind catcher system connected to the top of a test room for the purpose of natural ventilation. The magnitude and distribution of pressure coefficients (Cp) around the wind catcher and the air flow into the test room were analysed. The modelling results indicated that air was supplied into the test room through the wind catcher quadrants with positive external pressure coefficients and extracted from the test room through quadrants with negative pressure coefficients. The air flow achieved through the wind catcher depends on the speed and direction of the wind. The results obtained using the explicit and AIDA implicit calculation procedures and the CFX code correlate relatively well with the experimental results at lower wind speeds and with wind incident at an angle of 0 degrees. Variations in the Cp and air flow results were observed, particularly at a wind direction of 45 degrees. The explicit and implicit calculation procedures were found to be quick and easy to use, whereas the wind tunnel tests were more expensive in terms of effort, cost and time. CFD codes are developing rapidly and are widely available, especially with the decreasing prices of computer hardware. However, results obtained using CFD codes must be treated with care, particularly in the absence of empirical data.
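A sketch of the kind of explicit (orifice-flow) calculation referred to above, assuming the standard definitions Cp = (p - p_ref)/(0.5 rho U^2) and Q = Cd A sqrt(2 dp / rho); the quadrant area, discharge coefficient and Cp values are invented for the illustration:

```python
import numpy as np
from scipy.optimize import brentq

rho = 1.2        # air density, kg/m^3
U = 3.0          # wind speed at roof level, m/s
Cd = 0.65        # discharge coefficient (assumed)
A = 0.25         # area of each wind catcher quadrant, m^2 (assumed)

# Assumed external pressure coefficients on the four quadrants for one
# wind direction: positive quadrants supply air, negative ones extract.
Cp_ext = np.array([0.8, -0.4, -0.5, -0.3])

def net_flow(Cp_int):
    """Net volume flow into the room for a trial internal Cp (m^3/s)."""
    dCp = Cp_ext - Cp_int
    dp = 0.5 * rho * U**2 * dCp                    # pressure differences, Pa
    return np.sum(np.sign(dp) * Cd * A * np.sqrt(2.0 * np.abs(dp) / rho))

# The internal pressure settles where inflow balances outflow.
Cp_int = brentq(net_flow, -2.0, 2.0)
dCp = Cp_ext - Cp_int
Q = Cd * A * U * np.sqrt(np.abs(dCp))              # per-quadrant flow
print(f"internal Cp = {Cp_int:.2f}")
print("quadrant flows (m^3/s, + in / - out):", np.round(np.sign(dCp) * Q, 3))
```

The same mass-balance idea underlies implicit envelope-flow procedures; CFD replaces the lumped Cp values with a resolved flow field.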

Relevance: 30.00%

Abstract:

Elevated levels of low-density-lipoprotein cholesterol (LDL-C) in the plasma are a well-established risk factor for the development of coronary heart disease. Plasma LDL-C levels are in part determined by the rate at which LDL particles are removed from the bloodstream by hepatic uptake. The uptake of LDL by mammalian liver cells occurs mainly via receptor-mediated endocytosis, a process which entails the binding of these particles to specific receptors in specialised areas of the cell surface, the subsequent internalization of the receptor-lipoprotein complex, and ultimately the degradation and release of the ingested lipoproteins' constituent parts. We formulate a mathematical model to study the binding and internalization (endocytosis) of LDL and VLDL particles by hepatocytes in culture. The system of ordinary differential equations, which includes a cholesterol-dependent pit production term representing feedback regulation of surface receptors in response to intracellular cholesterol levels, is analysed using numerical simulations and steady-state analysis. Our numerical results show good agreement with in vitro experimental data describing LDL uptake by cultured hepatocytes following delivery of a single bolus of lipoprotein. Our model is adapted in order to reflect the in vivo situation, in which lipoproteins are continuously delivered to the hepatocyte. In this case, our model suggests that the competition between the LDL and VLDL particles for binding to the pits on the cell surface affects the intracellular cholesterol concentration. In particular, we predict that when there is continuous delivery of low levels of lipoproteins to the cell surface, more VLDL than LDL occupies the pit, since VLDL are better competitors for receptor binding. VLDL have a cholesterol content comparable to LDL particles; however, due to the larger size of VLDL, one pit-bound VLDL particle blocks binding of several LDLs, and there is a resultant drop in the intracellular cholesterol level. When there is continuous delivery of lipoprotein at high levels to the hepatocytes, VLDL particles still out-compete LDL particles for receptor binding, and consequently more VLDL than LDL particles occupy the pit. Although the maximum intracellular cholesterol level is similar for high and low levels of lipoprotein delivery, the maximum is reached more rapidly when the lipoprotein delivery rates are high. The implications of these results for the design of in vitro experiments are discussed.
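A heavily simplified sketch of a model of this type: two lipoprotein species competing for a common pool of surface pit capacity, with pit production down-regulated by intracellular cholesterol. The equations, all parameter values, and the rule that one VLDL blocks the capacity of several LDL are illustrative stand-ins, not the paper's actual system:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the paper's values)
kL, kV = 1.0, 3.0        # binding rates: VLDL competes better for receptors
ki = 0.5                 # internalization rate of occupied pits
cL, cV = 1.0, 1.2        # cholesterol delivered per internalized particle
sV = 4.0                 # pit capacity blocked by one VLDL (VLDL are larger)
dC = 0.3                 # cholesterol turnover rate
p_max, k_reg = 1.0, 2.0  # pit production, repressed by cholesterol

def rhs(t, y, L_ext, V_ext):
    p, bL, bV, C = y                   # free pit capacity, bound LDL/VLDL, cholesterol
    prod = p_max / (1.0 + k_reg * C)   # cholesterol-dependent pit production
    bindL = kL * L_ext * p
    bindV = kV * V_ext * p
    dp   = prod - bindL - sV * bindV
    dbL  = bindL - ki * bL
    dbV  = bindV - ki * bV
    dCdt = ki * (cL * bL + cV * bV) - dC * C
    return [dp, dbL, dbV, dCdt]

# Continuous delivery of lipoprotein at low external concentrations
sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0, 0.0, 0.0], args=(0.2, 0.2))
p, bL, bV, C = sol.y[:, -1]
print(f"steady state: bound LDL = {bL:.3f}, bound VLDL = {bV:.3f}, "
      f"cholesterol = {C:.3f}")
```

With kV > kL the steady state carries more bound VLDL than LDL, reproducing the qualitative competition effect described in the abstract.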

Relevance: 30.00%

Abstract:

A mathematical growth model for the batch solid-state fermentation process for fungal tannase production was developed and tested experimentally. The unstructured model describes the uptake and growth kinetics of Penicillium glabrum in an impregnated polyurethane foam substrate system. In general, good agreement between the experimental data and model simulations was obtained. Biomass, tannase and spore production are described by logistic kinetics with a time delay between biomass production and tannase and spore formation. Possible induction mechanisms for the latter are proposed. Hydrolysis of tannic acid, the main carbon source in the substrate system, is reasonably well described with Michaelis-Menten kinetics with time-varying enzyme concentration but a more complex reaction mechanism is suspected. The metabolism of gallic acid, a tannase-hydrolysis product of tannic acid, was shown to be growth limiting during the main growth phase. (c) 2004 Elsevier Ltd. All rights reserved.
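The growth and product kinetics described above can be sketched compactly: logistic biomass growth, with tannase formation following the same logistic form after a lag. The parameter values and the simple shifted-time treatment of the delay are illustrative, not the fitted model:

```python
import numpy as np

mu, X0, Xmax = 0.25, 0.05, 5.0   # 1/h, g/L, g/L (illustrative)
alpha, t_lag = 2.0, 6.0          # tannase yield per biomass; delay in hours

def logistic(t, x0, xmax, r):
    """Closed-form logistic curve x(t)."""
    return xmax / (1.0 + (xmax / x0 - 1.0) * np.exp(-r * t))

t = np.linspace(0.0, 48.0, 9)
X = logistic(t, X0, Xmax, mu)                       # biomass
E = alpha * logistic(np.maximum(t - t_lag, 0.0),    # tannase lags biomass
                     X0, Xmax, mu)
for ti, xi, ei in zip(t, X, E):
    print(f"t = {ti:4.0f} h  biomass = {xi:5.2f} g/L  tannase ~ {ei:5.2f}")
```

The tannic acid hydrolysis term would then enter as dS/dt = -k·E(t)·S/(Km + S), i.e. Michaelis-Menten kinetics with a time-varying enzyme concentration, as the abstract describes.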

Relevance: 30.00%

Abstract:

An unstructured mathematical model is proposed to describe the fermentation kinetics of growth, lactic acid production, pH and sugar consumption by Lactobacillus plantarum as a function of the buffering capacity and initial glucose concentration of the culture media. First, the experimental data from L. plantarum fermentations in synthetic media with different buffering capacities and glucose concentrations were fitted to a set of primary models. The parameters obtained from these models were then used to establish mathematical relationships with the independent variables tested. The models were validated with six fermentations of L. plantarum in different cereal-based media. In most cases the proposed models adequately describe the biochemical changes taking place during fermentation and are a promising approach for the formulation of cereal-based probiotic foods. (C) 2008 Elsevier B.V. All rights reserved.
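A sketch of the two-stage (primary/secondary) modelling workflow described above, using a logistic primary model for growth and a linear secondary model relating its parameters to the media variables; the data, model forms and variable names are placeholders:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, x0, xmax, r):
    """Primary model: logistic growth curve."""
    return xmax / (1.0 + (xmax / x0 - 1.0) * np.exp(-r * t))

# One fermentation = (buffering capacity, glucose g/L, biomass curve).
# Synthetic placeholder data for three media:
t = np.linspace(0, 24, 13)
runs = [
    (0.02, 10.0, logistic(t, 0.05, 2.1, 0.35)),
    (0.05, 20.0, logistic(t, 0.05, 3.0, 0.42)),
    (0.10, 40.0, logistic(t, 0.05, 4.2, 0.50)),
]

# Stage 1: fit the primary model to each fermentation.
params = []
for buffer_cap, glucose, x in runs:
    (x0, xmax, r), _ = curve_fit(logistic, t, x, p0=[0.1, 1.0, 0.1])
    params.append((buffer_cap, glucose, xmax, r))

# Stage 2: secondary model, e.g. growth rate r as a linear function of
# buffering capacity and initial glucose concentration.
B = np.array([[1.0, bc, g] for bc, g, _, _ in params])
r_vec = np.array([r for _, _, _, r in params])
coef, *_ = np.linalg.lstsq(B, r_vec, rcond=None)
print("r ~ %.3f + %.3f*buffer + %.5f*glucose" % tuple(coef))
```

Validation then amounts to predicting the primary-model parameters for unseen media from the secondary relationships and comparing the resulting curves with the measured fermentations.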

Relevance: 30.00%

Abstract:

A construction algorithm for multioutput radial basis function (RBF) network modelling is introduced by combining locally regularised orthogonal least squares (LROLS) model selection with a D-optimality experimental design. The proposed algorithm aims to achieve maximised model robustness and sparsity via two effective and complementary approaches. The LROLS method alone is capable of producing a very parsimonious RBF network model with excellent generalisation performance. The D-optimality design criterion enhances the model efficiency and robustness. A further advantage of the combined approach is that the user only needs to specify a weighting for the D-optimality cost in the combined RBF model selection criterion, and the entire model construction procedure then becomes automatic. The value of this weighting does not critically influence the model selection procedure, and it can be chosen with ease from a wide range of values.
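A simplified sketch of the combined selection idea: forward orthogonal least squares in which each candidate regressor is scored by its error reduction plus a weighted D-optimality term (here taken as the log of the orthogonalized regressor's energy, so selected columns keep det(W'W) large). The exact criterion, the single weighting beta, and the omission of local regularisation are simplifications of the published algorithm:

```python
import numpy as np

def forward_ols_doptimality(Phi, y, n_terms, beta=1e-3):
    """Greedy forward selection of regressors from Phi (N x M).

    Each step orthogonalizes the remaining candidates against the
    chosen ones (Gram-Schmidt) and picks the column maximising
        (error reduction ratio) + beta * log(w.T @ w),
    trading model fit against D-optimality with one weight beta.
    """
    W = Phi.astype(float).copy()     # working (orthogonalized) candidates
    selected = []
    for _ in range(n_terms):
        best, best_score = None, -np.inf
        for j in range(W.shape[1]):
            if j in selected:
                continue
            wj = W[:, j]
            energy = wj @ wj
            if energy < 1e-12:       # skip near-dependent candidates
                continue
            err_red = (wj @ y) ** 2 / (energy * (y @ y))
            score = err_red + beta * np.log(energy)
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
        wb = W[:, best]
        # Orthogonalize the remaining candidates against the chosen column.
        for j in range(W.shape[1]):
            if j not in selected:
                W[:, j] -= (wb @ W[:, j]) / (wb @ wb) * wb
    return selected

# Tiny demo with random regressors and a planted sparse model.
rng = np.random.default_rng(2)
Phi = rng.normal(size=(100, 20))
y = 2.0 * Phi[:, 3] - 1.5 * Phi[:, 11] + 0.1 * rng.normal(size=100)
print("selected columns:", forward_ols_doptimality(Phi, y, 3))
```

Because the D-optimality term only nudges the ranking, a wide range of beta values gives the same selections, which is the robustness-to-weighting property the abstract highlights.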

Relevance: 30.00%

Abstract:

New construction algorithms for radial basis function (RBF) network modelling are introduced, based on the A-optimality and D-optimality experimental design criteria respectively. We utilize new cost functions, based on experimental design criteria, for model selection that simultaneously optimize model approximation and either parameter variance (A-optimality) or model robustness (D-optimality). The proposed approaches are based on the forward orthogonal least-squares (OLS) algorithm, such that the new A-optimality- and D-optimality-based cost functions are constructed on the basis of an orthogonalization process that gains computational advantages and hence maintains the inherent computational efficiency associated with the conventional forward OLS approach. The proposed approach enhances the very popular forward-OLS-based RBF model construction method, since the resultant RBF models are constructed in a manner in which the system dynamics approximation capability, model adequacy and robustness are optimized simultaneously. The numerical examples provided show significant improvement with the D-optimality design criterion, demonstrating that there is significant room for improvement in modelling via the popular RBF neural network.
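For reference, the two design criteria named above, written for a regression matrix W whose parameter covariance is proportional to (W'W)^-1 (standard definitions, not the paper's notation):

```latex
% A-optimality: minimise the average parameter variance
\min \; \operatorname{trace}\!\left[(W^{\mathsf T} W)^{-1}\right]
% D-optimality: minimise the volume of the parameter confidence
% ellipsoid, i.e. maximise the determinant of the information matrix
\max \; \det\!\left(W^{\mathsf T} W\right)
```

In the orthogonalized setting W'W is diagonal, which is why both criteria can be evaluated cheaply inside the forward OLS loop.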

Relevance: 30.00%

Abstract:

Modelling the interaction of terahertz (THz) radiation with biological tissue poses many interesting problems. THz radiation is not obviously described by either an electric field distribution or an ensemble of photons, and biological tissue is an inhomogeneous medium with an electronic permittivity that is both spatially and frequency dependent, making it a complex system to model. A three-layer system of parallel-sided slabs has been used as the system through which the passage of THz radiation has been simulated. Two modelling approaches have been developed: a thin film matrix model and a Monte Carlo model. The source data for each of these methods, taken at the same time as the data recorded to experimentally verify them, was a THz spectrum that had passed through air only. Experimental verification of these two models was carried out using a three-layered in vitro phantom. Simulated transmission spectrum data were compared to experimental transmission spectrum data, first to determine and then to compare the accuracy of the two methods. Good agreement was found, with typical results having a correlation coefficient of 0.90 for the thin film matrix model and 0.78 for the Monte Carlo model over the full THz spectrum. Further work is underway to improve the models above 1 THz.
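A sketch of the thin-film (characteristic matrix) calculation for normal-incidence transmission through three parallel-sided layers; the layer thicknesses and complex refractive indices below are placeholders, not the phantom's measured properties:

```python
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic matrix of one homogeneous slab at normal incidence.

    n   : complex refractive index of the layer
    d   : thickness (same units as lam)
    lam : free-space wavelength
    """
    delta = 2.0 * np.pi * n * d / lam     # complex phase thickness
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmission(layers, lam, n_in=1.0, n_out=1.0):
    """Amplitude transmission coefficient through a stack of (n, d) layers."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam)
    B, C = M @ np.array([1.0, n_out])
    return 2.0 * n_in / (n_in * B + C)

# Placeholder three-layer phantom at a 300 um wavelength (~1 THz);
# thicknesses in um, indices with small absorptive imaginary parts.
layers = [(1.5 + 0.02j, 500.0), (2.1 + 0.10j, 1000.0), (1.5 + 0.02j, 500.0)]
t = transmission(layers, lam=300.0)
print(f"power transmittance at ~1 THz: {abs(t)**2:.3f}")
```

Sweeping lam over the measured spectrum gives the simulated transmission spectrum that is compared against the experimental one.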

Relevance: 30.00%

Abstract:

We present a novel algorithm for joint state-parameter estimation using sequential three-dimensional variational data assimilation (3D-Var) and demonstrate its application in the context of morphodynamic modelling using an idealised two-parameter 1D sediment transport model. The new scheme combines a static representation of the state background error covariances with a flow-dependent approximation of the state-parameter cross-covariances. For the case presented here, this involves calculating a local finite-difference approximation of the gradient of the model with respect to the parameters. The new method is easy to implement and computationally inexpensive to run. Experimental results are positive, with the scheme able to recover the model parameters to a high level of accuracy. We expect that there is potential for successful application of this new methodology to larger, more realistic models with more complex parameterisations.
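The underlying 3D-Var step minimises the usual cost function, here written for the augmented state z = (x, p) of model state x and parameters p (standard notation; the cross-covariance structure is the paper's contribution):

```latex
% 3D-Var cost function for the augmented state z = (x, p)
J(z) = \tfrac{1}{2}\,(z - z^{b})^{\mathsf T} B^{-1} (z - z^{b})
     + \tfrac{1}{2}\,(y - H z)^{\mathsf T} R^{-1} (y - H z)
% B combines a static state background covariance with a flow-dependent
% state-parameter cross-covariance block built from a finite-difference
% model gradient; R is the observation error covariance, H the
% observation operator; z^b is the background, y the observations.
```

Minimising J at each assimilation time updates the state and the parameters together, which is what allows the parameters to be recovered sequentially.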

Relevance: 30.00%

Abstract:

This paper outlines some rehabilitation applications of manipulators and identifies that new approaches demand that the robot make intimate contact with the user. New generations of manipulators with programmable compliance, together with higher-level controllers that can set the compliance appropriately for the task, are both feasible propositions. We must therefore gain greater insight into the way in which a person interacts with a machine, particularly given that the interaction may be non-passive. We are primarily interested in the change in wrist and arm dynamics as the person co-contracts his or her muscles. It is observed that this leads to a change in stiffness that can push an actuated interface into a limit cycle. We use both experimental results gathered from a PHANToM haptic interface and a mathematical model to observe this effect. The results are relevant to the fields of rehabilitation and therapy robots, haptic interfaces and telerobotics.
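A toy sampled-data simulation of the effect: a mass-damper (arm plus device) pressed against a virtual wall whose spring force is updated only at a fixed servo rate. When the overall contact stiffness is raised (as when the user co-contracts and the controller renders a stiff wall), the zero-order hold injects energy and the contact settles into a limit cycle rather than rest. All values are illustrative, and this is not the paper's model:

```python
def simulate(K_wall, m=0.5, b=2.0, f_push=1.0, T=0.001, t_end=20.0):
    """Mass m (hand + device), damping b, pushed with constant force
    f_push against a virtual wall of stiffness K_wall whose force is
    recomputed only once per servo period T (zero-order hold).
    """
    n_steps = int(t_end / T)
    x, v = 0.0, 0.0
    tail = []
    for i in range(n_steps):
        f_wall = -K_wall * x if x > 0.0 else 0.0    # sampled wall force
        for _ in range(10):                          # continuous dynamics
            dt = T / 10.0
            a = (f_push + f_wall - b * v) / m
            v += a * dt
            x += v * dt
        if i >= n_steps - 2000:                      # record the final 2 s
            tail.append(x)
    return max(tail) - min(tail)                     # residual oscillation

# Raising the contact stiffness turns a quiet contact into a limit cycle:
for K in (500.0, 5000.0, 50000.0):
    print(f"K = {K:7.0f} N/m -> oscillation amplitude {simulate(K):.2e} m")
```

At low stiffness the sampled contact is effectively passive and the motion decays; at high stiffness the energy injected each servo period exceeds what the damping can dissipate, and a sustained oscillation emerges, mirroring the non-passive interaction discussed above.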