38 results for experimental modelling

in CentAUR: Central Archive at the University of Reading - UK


Relevance:

70.00%

Publisher:

Abstract:

A solution has been found to the long-standing problem of experimental modelling of the interfacial instability in aluminium reduction cells. The idea is to replace the electrolyte overlying the molten aluminium with a mesh of thin rods supplying current directly down into the liquid metal layer. This eliminates electrolysis altogether, and with it all the associated problems, such as high temperature, chemical aggressiveness of the media, products of electrolysis, the necessity for electrolyte renewal, high power demands, etc. The result is a room-temperature, versatile laboratory model which simulates the Sele-type, rolling-pad interfacial instability. Our new, safe laboratory model enables detailed experimental investigations to test the existing theoretical models for the first time.

Relevance:

40.00%

Publisher:

Abstract:

Crumpets are made by heating fermented batter on a hot plate at around 230°C. The characteristic structure, dominated by vertical pores, develops rapidly: structure has developed throughout around 75% of the product height within 30 s, which is far faster than might be expected from transient heat conduction through the batter. Cooking is complete within around 3 min. Image analysis based on results from X-ray tomography shows that the voidage fraction is approximately constant and that there is continual coalescence between the larger pores throughout the product, although there is also a steady level of small bubbles trapped within the solidified batter. We report here experimental studies which shed light on some of the mechanisms responsible for this structure, together with some models of key phenomena. Three aspects are discussed here: the role of gas (carbon dioxide and nitrogen) nuclei in initiating structure development; convective heat transfer inside the developing pores; and the kinetics of setting the batter into an elastic solid structure. It is shown conclusively that the small bubbles of carbon dioxide resulting from the fermentation stage play a crucial role as nuclei for pore development: without these nuclei, the result is not a porous structure but rather a solid, elastic, inedible, gelatinized product. These nuclei are also responsible for the tiny bubbles which are set in the final product. The nuclei form the source of the dominant pore structure, which is largely driven by the initially explosive release of water vapour from the batter together with the desorption of dissolved carbon dioxide. It is argued that the rapid evaporation, transport and condensation of steam within the growing pores provides an important mechanism, as in a heat pipe, for rapid heat transfer, and models for this process are developed and tested. The setting of the continuous batter phase is essential for final product quality: studies using differential scanning calorimetry and of the kinetics of change in the visco-elastic properties of the batter suggest that this process is driven by the kinetics of gelatinization. Unlike many thermally driven food processes, the rates of heating are such that gelatinization kinetics cannot be neglected. The implications of these results for modelling and for the development of novel structures are discussed.
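
As an order-of-magnitude illustration of why transient conduction alone is too slow (the thermal diffusivity and product height used here are assumed typical values, not measurements from the study), the conduction timescale through the batter is roughly

\[
t_{\mathrm{cond}} \sim \frac{L^{2}}{\alpha} \approx \frac{(10^{-2}\,\mathrm{m})^{2}}{1.5\times10^{-7}\,\mathrm{m^{2}\,s^{-1}}} \approx 7\times10^{2}\,\mathrm{s},
\]

which is more than an order of magnitude longer than the ~30 s over which structure develops through most of the product height, consistent with the need for an additional transport mechanism such as the evaporation-condensation (heat-pipe) process proposed above.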

Relevance:

40.00%

Publisher:

Abstract:

Salmonella enterica serovar Typhimurium is an established model organism for Gram-negative, intracellular pathogens. Owing to the rapid spread of resistance to antibiotics among this group of pathogens, new approaches to identify suitable target proteins are required. Based on the genome sequence of Salmonella Typhimurium and associated databases, a genome-scale metabolic model was constructed. Output was based on an experimental determination of the biomass of Salmonella growing in glucose minimal medium. Linear programming was used to simulate variations in energy demand during growth in glucose minimal medium. By grouping reactions with similar flux responses, a sub-network of 34 reactions responding to this variation was identified (the catabolic core). This network was used to identify sets of one or two reactions that, when removed from the genome-scale model, interfered with energy and biomass generation. Eleven such sets were found to be essential for the production of biomass precursors. Experimental investigation of seven of these showed that knock-outs of the associated genes resulted in attenuated growth for four pairs of reactions, while three single reactions were shown to be essential for growth.
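
The flux-balance reasoning behind this kind of in silico knock-out screen can be illustrated with a minimal linear-programming sketch; the toy stoichiometric network, bounds and reaction roles below are illustrative assumptions, not data from the published Salmonella reconstruction.

```python
# Minimal flux-balance-analysis sketch (toy network, not the Salmonella model).
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (metabolites x reactions): uptake -> A, A -> B,
# B -> biomass. Steady state requires S v = 0.
S = np.array([
    [1, -1,  0],   # metabolite A
    [0,  1, -1],   # metabolite B
])
bounds = [(0, 10), (0, 10), (0, 10)]       # illustrative flux bounds
c = np.array([0.0, 0.0, -1.0])             # maximise biomass flux (linprog minimises)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("wild-type biomass flux:", res.x[2])

# Crude single-reaction knock-out screen: force each flux to zero and re-solve,
# mimicking the search for reactions essential for biomass production.
for j in range(S.shape[1]):
    ko_bounds = list(bounds)
    ko_bounds[j] = (0, 0)
    ko = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=ko_bounds)
    print(f"knock-out of reaction {j}: biomass flux = {ko.x[2]:.2f}")
```

In the genome-scale case the same optimisation is simply repeated over every single reaction and every pair of reactions in the catabolic core.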

Relevance:

30.00%

Publisher:

Abstract:

White clover (Trifolium repens) is an important pasture legume but is often difficult to sustain in a mixed sward because of, among other things, the damage to roots caused by the soil-dwelling larval stages of Sitona lepidus. Locating the root nodules on the white clover roots is crucial for the survival of the newly hatched larvae. This paper presents a numerical model to simulate the movement of newly hatched S. lepidus larvae towards the root nodules, guided by a chemical signal released by the nodules. The model is based on the diffusion-chemotaxis equation. Experimental observations showed that the average speed of the larvae remained approximately constant, so the diffusion-chemotaxis model was modified so that the larvae respond only to the gradient direction of the chemical signal and not to its magnitude. An individual-based lattice Boltzmann method was used to simulate the movement of individual larvae, and the parameters required for the model were estimated from measurements of larval movement towards nodules in soil scanned using X-ray microtomography. The model was used to investigate the effects of nodule density, the rate of release of the chemical signal, the sensitivity of the larvae to the signal, and the random foraging of the larvae on the movement and subsequent survival of the larvae. The simulations showed that the most significant factors for larval survival were nodule density and the sensitivity of the larvae to the signal. The dependence of larval survival rate on nodule density was well fitted by Michaelis-Menten kinetics. (c) 2005 Elsevier B.V. All rights reserved.
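
A minimal statement of the modification described in the abstract, written in generic chemotaxis notation (the symbols are ours, not necessarily those of the paper): the classical diffusion-chemotaxis equation for larval density n in a signal field c is

\[
\frac{\partial n}{\partial t} = \nabla\cdot\left(D\,\nabla n - \chi\,n\,\nabla c\right),
\]

whereas the constant-speed observation is captured by letting each simulated larva move with fixed speed v_0 along the direction of the gradient only,

\[
\mathbf{v} = v_{0}\,\frac{\nabla c}{\left|\nabla c\right|},
\]

so that the magnitude of the signal gradient sets the direction of movement but not its speed.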

Relevance:

30.00%

Publisher:

Abstract:

Estimates of soil organic carbon (SOC) stocks and changes under different land use systems can help determine vulnerability to land degradation. Such information is important for countries in arid areas with high susceptibility to desertification. SOC stocks, and predicted changes between 2000 and 2030, were determined at the national scale for Jordan using the Global Environment Facility Soil Organic Carbon (GEFSOC) Modelling System. For the purpose of this study, Jordan was divided into three natural regions (the Jordan Valley, the Uplands and the Badia) and three developmental regions (North, Middle and South). Based on this division, and on the dominant land use, Jordan was divided into five zones: the Jordan Valley, the North Uplands, the Middle Uplands, the South Uplands and the Badia. This information was merged using GIS, along with a map of rainfall isohyets, to produce a map with 498 polygons. Each of these was given a unique ID and a land management unit identifier, and was characterized in terms of its dominant soil type. Historical land use data, current land use and future land use change scenarios were also assembled, forming major inputs to the modelling system. The GEFSOC Modelling System was then run to produce C stocks in Jordan for the years 1990, 2000 and 2030. The results were compared with conventional methods of estimating carbon stocks, such as the mapping-based SOTER method. These comparisons showed that the model runs are acceptable, taking into consideration the limited availability of long-term experimental soil data that can be used to validate them. The main findings of this research show that, between 2000 and 2030, SOC may increase in heavily used areas under irrigation and will likely decrease in grazed rangelands, which cover most of Jordan, giving an overall decrease in total SOC over time if the land is indeed used under the estimated forms of land use. (C) 2007 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Aquatic sediments often remove hydrophobic contaminants from fresh waters. The subsequent distribution and concentration of contaminants in bed sediments determine their effect on benthic organisms and the risk of re-entry into the water and/or leaching to groundwater. This study examines the transport of simazine and lindane in aquatic bed sediments with the aim of understanding the processes that determine their depth distribution. Experiments in flume channels (water flow of 10 cm s⁻¹) determined the persistence of the compounds in the absence of sediment with (a) de-ionised water and (b) a solution that had been in contact with river sediment. In further experiments with river bed sediments under light and dark conditions, measurements were made of the concentration of the compounds in the overlying water and of the development of bacterial/algal biofilms and bioturbation activity. At the end of the experiments, concentrations in sediments and associated pore waters were determined in sections of the sediment at 1 mm resolution down to 5 mm, and then at 10 mm resolution to 50 mm depth, and these distributions were analysed using a sorption-diffusion-degradation model. The fine resolution in the depth profile permitted the detection of a maximum in the concentration of the compounds in the pore water near the surface, whereas concentrations in the sediment increased to a maximum at the surface itself. Experimental distribution coefficients determined from the pore water and sediment concentrations indicated a gradient with depth that was partly explained by an increase in organic matter content and specific surface area of the solids near the interface. The modelling showed that degradation of lindane within the sediment was necessary to explain the concentration profiles, with the optimum agreement between the measured and theoretical profiles obtained with differential degradation in the oxic and anoxic zones. The compounds penetrated to a depth of 40-50 mm over a period of 42 days. (C) 2004 Society of Chemical Industry.
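
A generic one-dimensional sorption-diffusion-degradation balance of the kind referred to above can be written as follows; the notation and the linear (K_d) sorption assumption are ours, not taken from the paper:

\[
\frac{\partial}{\partial t}\bigl(\theta C + \rho_b S\bigr)
= \frac{\partial}{\partial z}\!\left(D_e\,\frac{\partial C}{\partial z}\right)
- k(z)\,\bigl(\theta C + \rho_b S\bigr),
\qquad S = K_d\,C,
\]

where C is the pore-water concentration, S the sorbed concentration, θ the porosity, ρ_b the bulk density, D_e an effective diffusion coefficient, and k(z) a first-order degradation rate allowed to differ between the oxic and anoxic zones.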

Relevance:

30.00%

Publisher:

Abstract:

Increased atmospheric deposition of inorganic nitrogen (N) may lead to increased leaching of nitrate (NO3-) to surface waters. The mechanisms responsible for, and controls on, this leaching are matters of debate. An experimental N addition has been conducted at Gårdsjön, Sweden, to determine the magnitude and identify the mechanisms of N leaching from forested catchments within the EU-funded project NITREX. The ability of INCA-N, a simple process-based model of catchment N dynamics, to simulate catchment-scale inorganic N dynamics in soil and stream water during the course of the experimental addition is evaluated. Simulations were performed for 1990-2002. Experimental N addition began in 1991. INCA-N was able to successfully reproduce stream and soil water dynamics before and during the experiment. While INCA-N did not correctly simulate the lag between the start of N addition and NO3- breakthrough, the model was able to simulate the state change resulting from increased N deposition. Sensitivity analysis showed that model behaviour was controlled primarily by parameters related to hydrology and vegetation dynamics and secondarily by in-soil processes.

Relevance:

30.00%

Publisher:

Abstract:

Rising nitrate levels have been observed in UK Chalk catchments in recent decades, with concentrations now approaching or exceeding legislated maximum values in many areas. In response, strategies seeking to contain concentrations through appropriate land management are now in place. However, there is an increasing consensus that Chalk systems, a predominant landscape type over England and indeed northwest Europe, can retard decades of prior nitrate loading within their deep unsaturated zones. Current levels may not fully reflect the long-term impact of present-day practices, and stringent land management controls may not be enough to avert further medium-term rises. This paper discusses these issues in the context of the EU Water Framework Directive, drawing on data from recent experimental work and a new model (INCA-Chalk) that allows the impacts of different land use management practices to be explored. Results strongly imply that the timelines for water quality improvement demanded by the Water Framework Directive are not realistic for the Chalk, and they give an indication of the time-scales over which improvements might be achieved. However, important unresolved scientific issues remain, and further monitoring and targeted data collection are recommended to reduce prediction uncertainties and to allow cost-effective strategies for mitigation to be designed and implemented. (C) 2007 Elsevier Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

The complex and variable composition of natural sediments makes it very difficult to predict the bioavailability and bioaccumulation of sediment-bound contaminants. Several approaches have been proposed to overcome this problem, including an experimental model using artificial particles with or without humic acids as a source of organic matter. For this work, we applied this experimental model, together with a sample of a natural sediment, to investigate the uptake and bioaccumulation of 2,4-dichlorophenol (2,4-DCP) by Sphaerium corneum. Additionally, the particle-water partition coefficients (K_d) were calculated. The results showed that the bioaccumulation of 2,4-DCP by the clams did not depend solely on the levels of dissolved chemical, but also on the amount sorbed onto the particles and on the nature and strength of that binding. This study confirms the value of using artificial particles as a suitable experimental model for assessing the fate of sediment-bound contaminants. (c) 2006 Elsevier Ltd. All rights reserved.
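
The particle-water partition coefficient mentioned above is conventionally defined as the equilibrium ratio of sorbed to dissolved concentration:

\[
K_d = \frac{C_s}{C_w},
\]

where C_s is the concentration of 2,4-DCP on the particles (e.g. in µg kg⁻¹) and C_w the dissolved concentration (e.g. in µg L⁻¹), so that K_d has units of L kg⁻¹.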

Relevance:

30.00%

Publisher:

Abstract:

Details about the parameters of kinetic systems are crucial for progress in both medical and industrial research, including drug development, clinical diagnosis and biotechnology applications. Such details must be collected by a series of kinetic experiments and investigations. The correct design of the experiment is essential to collecting data suitable for analysis, modelling and deriving the correct information. We have developed a systematic and iterative Bayesian method, and sets of rules, for the design of enzyme kinetic experiments. Our method selects the optimum design to collect data suitable for accurate modelling and analysis and minimises the error in the estimated parameters. The rules select features of the design such as the substrate range and the number of measurements. We show here that this method can be directly applied to the study of other important kinetic systems, including drug transport, receptor binding, microbial culture and cell transport kinetics. It is possible to reduce the errors in the estimated parameters and, most importantly, to increase efficiency and cost-effectiveness by reducing the number of experiments and data points required. (C) 2003 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
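
The kind of design calculation described above can be sketched, for the simplest case of a Michaelis-Menten rate law, by choosing substrate concentrations that maximise the determinant of the Fisher information matrix (a locally D-optimal criterion given prior point estimates of the parameters); the prior values, candidate grid and constant-variance noise assumption below are illustrative only, not the paper's Bayesian utility functions.

```python
# Locally D-optimal two-point design for a Michaelis-Menten experiment
# (illustrative sketch; prior estimates and candidate grid are assumed values).
import itertools
import numpy as np

Vmax, Km = 1.0, 0.5                        # assumed prior point estimates
candidates = np.linspace(0.05, 5.0, 50)    # candidate substrate concentrations

def sensitivities(s):
    """Partial derivatives of v = Vmax*s/(Km + s) with respect to (Vmax, Km)."""
    return np.array([s / (Km + s), -Vmax * s / (Km + s) ** 2])

def d_criterion(design):
    """Determinant of the Fisher information, assuming constant-variance noise."""
    F = sum(np.outer(sensitivities(s), sensitivities(s)) for s in design)
    return np.linalg.det(F)

# Exhaustive search over all two-point designs on the candidate grid.
best = max(itertools.combinations(candidates, 2), key=d_criterion)
print("D-optimal substrate pair:", best)
```

A design of this type generally places one point near the top of the accessible substrate range and one at a lower concentration of the order of K_m, which illustrates why such designs require some prior knowledge of K_m and of the kinetic model.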

Relevance:

30.00%

Publisher:

Abstract:

Kinetic studies on the AR (aldose reductase) protein have shown that it does not behave as a classical enzyme in relation to ring aldose sugars. As with non-enzymatic glycation reactions, there is probably a free radical element involved, derived from monosaccharide autoxidation. In the case of AR, there is free radical oxidation of NADPH by autoxidizing monosaccharides, which is enhanced in the presence of the NADPH-binding protein. Thus any assay for AR based on the oxidation of NADPH in the presence of autoxidizing monosaccharides is invalid, and tissue AR measurements based on this method are also invalid and should be reassessed. AR exhibits broad specificity for both hydrophilic and hydrophobic aldehydes, which suggests that the protein may be involved in detoxification. The last thing we would want to do is to inhibit it. ARIs (AR inhibitors) have a number of actions in the cell which are not specific and which do not involve binding to AR. These include peroxy-radical scavenging and the effects of metal ion chelation. The AR/ARI story emphasizes the importance of correct experimental design in all biocatalytic experiments. Developing the use of Bayesian utility functions, we have used a systematic method to identify the optimum experimental designs for a number of kinetic model data sets. This has led to the identification of trends between kinetic model types, sets of design rules and the key conclusion that such designs should be based on some prior knowledge of K_m and/or the kinetic model. We suggest an optimal and iterative method for selecting features of the design such as the substrate range, the number of measurements and the choice of intermediate points. The final design collects data suitable for accurate modelling and analysis, minimizes the error in the estimated parameters, and is suitable for simple or complex steady-state models.

Relevance:

30.00%

Publisher:

Abstract:

In areas such as drug development, clinical diagnosis and biotechnology research, acquiring details about the kinetic parameters of enzymes is crucial. The correct design of an experiment is critical to collecting data suitable for analysis, modelling and deriving the correct information. As classical design methods are not targeted at the more complex kinetics now frequently studied, attention is needed to estimate the parameters of such models with low variance. We demonstrate that a Bayesian approach (the use of prior knowledge) can produce major gains, quantifiable in terms of the information content, productivity and accuracy of each experiment. Developing the use of Bayesian utility functions, we have used a systematic method to identify the optimum experimental designs for a number of kinetic model data sets. This has enabled the identification of trends between kinetic model types, sets of design rules and the key conclusion that such designs should be based on some prior knowledge of K_m and/or the kinetic model. We suggest an optimal and iterative method for selecting features of the design such as the substrate range, the number of measurements and the choice of intermediate points. The final design collects data suitable for accurate modelling and analysis and minimises the error in the estimated parameters. (C) 2003 Elsevier Science B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

The assumption that negligible work is involved in the formation of new surfaces in the machining of ductile metals is re-examined in the light of both current Finite Element Method (FEM) simulations of cutting and modern ductile fracture mechanics. The work associated with separation criteria in FEM models is shown to be in the kJ/m² range, rather than the few J/m² of the surface energy (surface tension) employed by Shaw in his pioneering study of 1954, following which consideration of surface work has been omitted from analyses of metal cutting. The much greater values of specific surface work are not surprising in terms of ductile fracture mechanics, where kJ/m² values of fracture toughness are typical of the ductile metals involved in machining studies. This paper shows that when even the simple Ernst–Merchant analysis is generalised to include significant surface work, many of the experimental observations for which traditional 'plasticity and friction only' analyses seem to have no quantitative explanation are now given meaning. In particular, the primary shear plane angle φ becomes material-dependent. The experimental increase of φ up to a saturated level, as the uncut chip thickness is increased, is predicted. The positive intercepts found in plots of cutting force vs. depth of cut, and in plots of force resolved along the primary shear plane vs. area of shear plane, are shown to be measures of the specific surface work. It is demonstrated that neglect of these intercepts in cutting analyses is the reason why anomalously high values of shear yield stress are derived at those very small uncut chip thicknesses at which the so-called size effect becomes evident. The material toughness/strength ratio, combined with the depth of cut to form a non-dimensional parameter, is shown to control ductile cutting mechanics. The toughness/strength ratio of a given material will change with rate, temperature and thermomechanical treatment, and the influence of such changes, together with changes in depth of cut, on the character of machining is discussed. Strength or hardness alone is insufficient to describe machining. The failure of the Ernst–Merchant theory seems to have less to do with problems of uniqueness and the validity of minimum work, and more to do with the problem not being properly posed. The new analysis compares favourably and consistently with the wide body of experimental results available in the literature. Why considerable progress in the understanding of metal cutting has been achieved without reference to significant surface work is also discussed.
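
The role of the intercepts can be made explicit with a simplified force balance of the kind used in toughness-corrected cutting analyses (a sketch in our own notation; the paper's full expressions, which also include friction and shear-angle terms, are more elaborate). Per unit width of cut,

\[
\frac{F_c}{w} \;=\; \tau_y\,\gamma\,t \;+\; R,
\]

where F_c is the cutting force, w the width of cut, t the uncut chip thickness, τ_y the shear yield stress, γ the shear strain in the primary shear zone and R the specific surface work (fracture toughness, in kJ/m²). The intercept of a plot of F_c/w against t then measures R directly, and dividing through by τ_y t shows that the non-dimensional group R/(τ_y t), the toughness/strength ratio combined with the depth of cut, governs the character of the cut; neglecting R at small t inflates the apparent value of τ_y, which is the size effect referred to above.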