22 results for model determination

in CentAUR: Central Archive University of Reading - UK


Relevance: 40.00%

Abstract:

The effect of temperature on the degradation of blackcurrant anthocyanins in a model juice system was determined over a temperature range of 4–140 °C. The thermal degradation of anthocyanins followed pseudo first-order kinetics. From 4 to 100 °C an isothermal method was used to determine the kinetic parameters. In order to mimic the temperature profile in retort systems, a non-isothermal method was applied to determine the kinetic parameters in the model juice over the temperature range 110–140 °C. The results from the isothermal and non-isothermal methods agreed well, indicating that the non-isothermal procedure is a reliable mathematical method for determining the kinetics of anthocyanin degradation. The reaction rate constant (k) increased from 0.16 (±0.01) × 10⁻³ h⁻¹ to 9.954 (±0.004) h⁻¹ at 4 and 140 °C, respectively. The temperature dependence of the rate of anthocyanin degradation was modelled by an extension of the Arrhenius equation, which showed a linear increase in the activation energy with temperature.
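
A minimal Python sketch of the kinetic scheme described above (pseudo first-order decay with an Arrhenius-type rate constant); the pre-exponential factor and activation energy used here are illustrative placeholders, not the fitted values from the study.

```python
import numpy as np

R = 8.314  # universal gas constant, J mol^-1 K^-1

def rate_constant(temp_c, pre_exp=1.0e10, e_act=8.0e4):
    """Arrhenius rate constant k (h^-1); pre_exp and e_act are placeholders."""
    temp_k = temp_c + 273.15
    return pre_exp * np.exp(-e_act / (R * temp_k))

def anthocyanin_retention(t_hours, temp_c):
    """Pseudo first-order decay at constant temperature: C/C0 = exp(-k t)."""
    return np.exp(-rate_constant(temp_c) * t_hours)

# Illustrative comparison: retention after 2 h of heating at 80 C vs storage at 4 C
print(anthocyanin_retention(2.0, 80.0), anthocyanin_retention(2.0, 4.0))
```

In the study the activation energy itself increased linearly with temperature (the extended Arrhenius form); the constant-activation-energy form above is only the simplest starting point.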

Relevance: 40.00%

Abstract:

1. Nutrient concentrations (particularly N and P) determine the extent to which water bodies are, or may become, eutrophic. Direct determination of nutrient content on a wide scale is labour-intensive, but the main sources of N and P are well known. This paper describes and tests an export coefficient model for prediction of total N and total P from: (i) land use, stock headage and human population; (ii) the export rates of N and P from these sources; and (iii) the river discharge. Such a model might be used to forecast the effects of future changes in land use and to hindcast past water quality, establishing comparative or baseline states for the monitoring of change.

2. The model has been calibrated against observed data for 1988 and validated against sets of observed data for a sequence of earlier years in ten British catchments, varying from uplands through rolling, fertile lowlands to the flat topography of East Anglia.

3. The model predicted total N and total P concentrations with high precision (95% of the variance in observed data explained). It has been used in two forms: the first on a specific catchment basis; the second for a larger natural region containing the catchment, with the assumption that all catchments within that region are similar. Both forms gave similar results, with little loss of precision in the latter case. This implies that it will be possible to describe the overall pattern of nutrient export in the UK with only a fraction of the effort needed to carry out the calculations for each individual water body.

4. Comparison between land use, stock headage, population numbers and nutrient export for the ten catchments in the pre-war year of 1931, and for 1970 and 1988, shows that there has been a substantial loss of rough grazing to fertilized temporary and permanent grasslands, an increase in the hectarage devoted to arable, consistent increases in the stocking of cattle and sheep, and a marked movement of people into these rural catchments.

5. All of these trends have increased the flows of nutrients, with more than a doubling of both total N and total P loads over the period. On average in these rural catchments, stock wastes have been the greatest contributors to both N and P exports, with cultivation the next most important source of N, and people of P. Ratios of N to P were high in 1931 and remain little changed, so that, in these catchments, phosphorus continues to be the nutrient most likely to control algal crops in standing waters supplied by the rivers studied.
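
A minimal sketch of the export-coefficient bookkeeping the model is built on: the annual load is the sum of source-specific exports (land use area × export rate, stock headage × per-head export, population × per-capita export), converted to a mean concentration using the river discharge. All names, areas, headages and coefficient values below are hypothetical placeholders, not the calibrated values from the paper.

```python
# Hypothetical export-coefficient calculation for total N in a single catchment.

land_use_ha = {"arable": 12000.0, "grassland": 8000.0, "rough_grazing": 5000.0}
export_kg_per_ha = {"arable": 15.0, "grassland": 10.0, "rough_grazing": 2.0}   # kg N ha^-1 yr^-1

stock_head = {"cattle": 9000, "sheep": 30000}
export_kg_per_head = {"cattle": 8.0, "sheep": 1.5}                             # kg N head^-1 yr^-1

population = 15000
export_kg_per_person = 2.5                                                     # kg N person^-1 yr^-1

# Annual total N load: sum of exports from land use, stock and people
load_kg = (
    sum(land_use_ha[u] * export_kg_per_ha[u] for u in land_use_ha)
    + sum(stock_head[s] * export_kg_per_head[s] for s in stock_head)
    + population * export_kg_per_person
)

# Convert the load to a mean annual concentration using the river discharge
discharge_m3 = 2.0e8                       # annual discharge, m^3 yr^-1
conc_mg_l = load_kg * 1.0e6 / (discharge_m3 * 1000.0)
print(f"Total N load: {load_kg:.0f} kg/yr; mean concentration: {conc_mg_l:.2f} mg/l")
```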

Relevance: 40.00%

Abstract:

Although the tube theory is successful in describing entangled polymers qualitatively, a more quantitative description requires precise and consistent definitions of its parameters. Here we investigate the simplest model of entangled polymers, namely a single Rouse chain in a cubic lattice of line obstacles, and illustrate the typical problems and uncertainties of the tube theory. In particular, we show that in general one needs three entanglement-related parameters, but only two combinations of them are relevant for the long-time dynamics. Conversely, the plateau modulus cannot be determined from these two parameters and requires a more detailed model of entanglements with explicit entanglement forces, such as the slip-spring model. It is shown that for the grid model the Rouse time within the tube is larger than the Rouse time of the free chain, in contrast to what the standard tube theory assumes.

Relevance: 30.00%

Abstract:

Despite the many models developed for phosphorus concentration prediction at differing spatial and temporal scales, there has been little effort to quantify the uncertainty in their predictions. Quantifying model prediction uncertainty is desirable for informed decision-making in river-systems management. An uncertainty analysis of the process-based model, the integrated catchment model of phosphorus (INCA-P), within the generalised likelihood uncertainty estimation (GLUE) framework is presented. The framework is applied to the Lugg catchment (1,077 km²), a River Wye tributary on the England–Wales border. Daily discharge and monthly phosphorus (total reactive and total) observations, for a limited number of reaches, are used to assess the uncertainty and sensitivity of 44 model parameters identified as being most important for discharge and phosphorus predictions. This study demonstrates that parameter homogeneity assumptions (spatial heterogeneity is treated as land-use-type fractional areas) can achieve higher model fits than a previous expertly calibrated parameter set. The model is capable of reproducing the hydrology, but a threshold Nash-Sutcliffe coefficient of determination (E or R²) of 0.3 is not achieved when simulating observed total phosphorus (TP) data in the upland reaches or total reactive phosphorus (TRP) in any reach. Despite this, the model reproduces the general dynamics of TP and TRP in the point-source-dominated lower reaches. This paper discusses why this application of INCA-P fails to find any parameter sets that simultaneously describe all observed data acceptably. The discussion focuses on the uncertainty of readily available input data, and on whether such process-based models should be used when there are insufficient data to support their many parameters.
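
A minimal sketch of the generic GLUE screening loop referred to above: sample parameter sets at random, run the model, and retain as "behavioural" only those sets whose Nash-Sutcliffe efficiency meets a threshold (0.3 in the study). The `run_model` callable and the toy example stand in for the actual INCA-P simulations; the parameter names and ranges are hypothetical.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum of squared errors / variance of observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue_screen(run_model, obs, param_ranges, n_samples=10000, threshold=0.3, seed=0):
    """Generic GLUE screening: sample parameters uniformly from their ranges and keep
    the 'behavioural' sets whose efficiency meets the threshold.
    run_model(params) -> simulated series stands in for the catchment model."""
    rng = np.random.default_rng(seed)
    behavioural = []
    for _ in range(n_samples):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in param_ranges.items()}
        eff = nash_sutcliffe(obs, run_model(params))
        if eff >= threshold:
            behavioural.append((eff, params))
    return behavioural

# Toy usage: a linear "model" and invented observations, purely to show the loop
obs = np.array([1.0, 2.0, 3.0, 2.5, 1.5])
toy_model = lambda p: p["a"] * np.arange(5) + p["b"]
kept = glue_screen(toy_model, obs, {"a": (0.0, 1.0), "b": (0.0, 2.0)}, n_samples=2000)
print(len(kept), "behavioural parameter sets")
```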

Relevance: 30.00%

Abstract:

In this paper we propose that physically based equations should be combined with remote sensing techniques to enable a more theoretically rigorous estimation of the area-average soil heat flux, G. A standard physical equation (i.e. the analytical or exact method) for the estimation of G, in combination with a simple, but theoretically derived, equation for soil thermal inertia (Γ), provides the basis for a more transparent and readily interpretable method for the estimation of G, without the requirement for in situ instrumentation. Moreover, such an approach ensures a more universally applicable method than those derived from purely empirical studies (employing vegetation indices and albedo, for example). Hence, a new equation for the estimation of Γ (for homogeneous soils) is discussed in this paper which only requires knowledge of soil type, readily obtainable from extant soil databases and surveys, in combination with a coarse estimate of moisture status. This approach can be used to obtain area-averaged estimates of Γ (and thus G, as explained in Paper II), which is important for large-scale energy balance studies that employ aircraft or satellite data. Furthermore, this method also relaxes the instrumental demands for studies at the plot and field scale (no requirement for in situ soil temperature sensors, soil heat flux plates and/or thermal conductivity sensors). In addition, this equation can be incorporated into soil-vegetation-atmosphere transfer models that use the force-restore method to update surface temperatures (such as the well-known ISBA model), to replace the thermal inertia coefficient.
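
A minimal sketch of the quantities involved: soil thermal inertia Γ = √(λC) computed from thermal conductivity and volumetric heat capacity, and the classical single-harmonic solution for G driven by a sinusoidal surface-temperature wave. This is a simplified illustration, not the full analytical (exact) method or the new Γ equation of the paper; the soil property values are placeholders.

```python
import numpy as np

def thermal_inertia(conductivity, heat_capacity):
    """Soil thermal inertia Gamma = sqrt(lambda * C), J m^-2 K^-1 s^-1/2."""
    return np.sqrt(conductivity * heat_capacity)

def soil_heat_flux_harmonic(t_seconds, amplitude, gamma, period=86400.0):
    """Classical single-harmonic solution: for a surface temperature wave
    Ts = mean + A sin(w t), the soil heat flux is
    G(t) = Gamma * sqrt(w) * A * sin(w t + pi/4)."""
    w = 2.0 * np.pi / period
    return gamma * np.sqrt(w) * amplitude * np.sin(w * t_seconds + np.pi / 4.0)

# Placeholder soil properties (roughly a moist loam): lambda in W m^-1 K^-1, C in J m^-3 K^-1
gamma = thermal_inertia(conductivity=1.0, heat_capacity=2.5e6)
t = np.linspace(0.0, 86400.0, 5)
print(soil_heat_flux_harmonic(t, amplitude=10.0, gamma=gamma))   # W m^-2
```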

Relevance: 30.00%

Abstract:

The formulation of a new process-based crop model, the general large-area model (GLAM) for annual crops, is presented. The model has been designed to operate on spatial scales commensurate with those of global and regional climate models. It aims to simulate the impact of climate on crop yield. Procedures for model parameter determination and optimisation are described, and demonstrated for the prediction of groundnut (i.e. peanut; Arachis hypogaea L.) yields across India for the period 1966-1989. Optimal parameters (e.g. extinction coefficient, transpiration efficiency, rate of change of harvest index) were stable over space and time, provided the estimate of the yield technology trend was based on the full 24-year period. The model has two location-specific parameters: the planting date and the yield gap parameter. The latter varies spatially and is determined by calibration; its optimal value varies slightly when different input data are used. The model was tested using a historical data set on a 2.5° × 2.5° grid to simulate yields. Three sites are examined in detail: grid cells from Gujarat in the west, Andhra Pradesh towards the south, and Uttar Pradesh in the north. Agreement between observed and modelled yield was variable, with correlation coefficients of 0.74, 0.42 and 0, respectively. Skill was highest where the climate signal was greatest, and correlations were comparable to or greater than correlations with seasonal mean rainfall. Yields from all 35 cells were aggregated to simulate all-India yield. The correlation coefficient between observed and simulated yields was 0.76, and the root mean square error was 8.4% of the mean yield. The model can easily be extended to any annual crop for the investigation of the impacts of climate variability (or change) on crop yield over large areas.
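
A small sketch of the two evaluation statistics quoted above (the correlation coefficient between observed and simulated yields, and the root mean square error expressed as a percentage of the mean observed yield); the yield series used here are invented purely to show the calculation.

```python
import numpy as np

def yield_skill(observed, simulated):
    """Correlation coefficient and RMSE as a percentage of the mean observed yield."""
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    r = np.corrcoef(observed, simulated)[0, 1]
    rmse_pct = 100.0 * np.sqrt(np.mean((observed - simulated) ** 2)) / observed.mean()
    return r, rmse_pct

# Invented yield series (t/ha), purely to show the calculation
obs = [0.80, 0.85, 0.78, 0.92, 0.88]
sim = [0.82, 0.83, 0.75, 0.95, 0.90]
print(yield_skill(obs, sim))
```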

Relevance: 30.00%

Abstract:

Mitochondrial DNA (mtDNA) is one of the most popular population genetic markers. Its relevance as an indicator of population size and history has recently been questioned by several large-scale studies in animals reporting evidence for recurrent adaptive evolution, at least in invertebrates. Here we focus on mammals, a more restricted taxonomic group for which the issue of mtDNA near-neutrality is crucial. By analyzing the distribution of mtDNA diversity across species and relating it to allozyme diversity, life-history traits, and taxonomy, we show that (i) mtDNA in mammals does not reject the nearly neutral model; (ii) mtDNA diversity, however, is unrelated to any of the 14 life-history and ecological variables that we analyzed, including body mass, geographic range, and The World Conservation Union (IUCN) categorization; (iii) mtDNA diversity is highly variable between mammalian orders and families; (iv) this taxonomic effect is most likely explained by variations of mutation rate between lineages. These results are indicative of a strong stochasticity of effective population size in mammalian species. They suggest that, even in the absence of selection, mtDNA genetic diversity is essentially unpredictable from species biology, and probably uncorrelated with species abundance.

Relevance: 30.00%

Abstract:

The aim of this review paper is to present the experimental methodologies and mathematical approaches used to determine effective diffusivities of solutes in food materials. The paper commences by describing the diffusion phenomena related to solute mass transfer in foods and effective diffusivities. It then focuses on the mathematical formulation for the calculation of effective diffusivities, considering different diffusion models based on Fick's second law of diffusion. Finally, experimental considerations for effective diffusivity determination are elucidated, based primarily on the acquisition of a series of solute content versus time curves appropriate to the chosen model equation. Different factors contributing to the determination of the effective diffusivities, such as the structure of the food material, temperature, diffusion solvent, agitation, sampling, concentration and the different techniques used, are considered.
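
A minimal sketch of one common way such solute content versus time curves are used: for an infinite slab, the long-time (first-term) solution of Fick's second law gives a log-linear decay whose slope yields the effective diffusivity. This is a generic illustration under stated assumptions (slab geometry, constant D, negligible external resistance), not a method taken verbatim from the review; the data below are synthetic.

```python
import numpy as np

def deff_from_slab_data(t, ratio, half_thickness):
    """Estimate an effective diffusivity for an infinite slab from the long-time
    (first-term) solution of Fick's second law:
        ratio = (C - C_inf) / (C0 - C_inf) ~ (8 / pi^2) * exp(-pi^2 D t / (4 L^2)),
    so a plot of ln(ratio) against t has slope -pi^2 D / (4 L^2)."""
    t, ratio = np.asarray(t, float), np.asarray(ratio, float)
    slope, _ = np.polyfit(t, np.log(ratio), 1)
    return -slope * 4.0 * half_thickness ** 2 / np.pi ** 2

# Synthetic check: data generated with D = 1e-9 m^2/s and L = 2 mm are recovered
D_true, L = 1.0e-9, 2.0e-3
t = np.linspace(600.0, 7200.0, 12)                       # s
ratio = 8.0 / np.pi ** 2 * np.exp(-np.pi ** 2 * D_true * t / (4.0 * L ** 2))
print(deff_from_slab_data(t, ratio, L))                  # ~1e-9 m^2/s
```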

Relevance: 30.00%

Abstract:

The mathematical models that describe the immersion-frying period and the post-frying cooling period of an infinite slab or an infinite cylinder were solved and tested. Results were successfully compared with those found in the literature or obtained experimentally, and were discussed in terms of the hypotheses and simplifications made. The models were used as the basis of a sensitivity analysis. Simulations showed that a decrease in slab thickness and core heat capacity resulted in faster crust development. On the other hand, an increase in oil temperature and in the boiling heat transfer coefficient between the oil and the surface of the food accelerated crust formation. The model for oil absorption during cooling was analysed using the tested post-frying cooling equation to determine the moment at which a positive pressure driving force, allowing oil suction within the pore, originated. It was found that as crust layer thickness, pore radius and ambient temperature decreased, so did the time needed to start the absorption. On the other hand, as the effective convective heat transfer coefficient between the air and the surface of the slab increased, the required cooling time decreased. In addition, it was found that the time needed to allow oil absorption during cooling was extremely sensitive to pore radius, indicating the importance of an accurate pore size determination in future studies.

Relevance: 30.00%

Abstract:

Sunflower oil-in-water emulsions containing TBHQ, caffeic acid, epigallocatechin gallate (EGCG), or 6-hydroxy-2,5,7,8-tetramethylchroman-2-carboxylic acid (Trolox), both with and without BSA, were stored at 50 and 30 °C. Oxidation of the oil was monitored by determination of the peroxide value (PV), conjugated diene content, and hexanal formation. Emulsions containing EGCG, caffeic acid, and, to a lesser extent, Trolox were much more stable during storage in the presence of BSA than in its absence, even though BSA itself did not provide an antioxidant effect. BSA did not have a synergistic effect on the antioxidant activity of TBHQ. The BSA structure changed, with a considerable loss of fluorescent tryptophan groups during storage of solutions containing BSA and antioxidants, and a BSA-antioxidant adduct with radical-scavenging activity was formed. The highest radical-scavenging activity observed was for the protein isolated from a sample containing EGCG and BSA incubated at 30 °C for 10 d. This fraction contained unchanged BSA as well as the BSA-antioxidant adduct, but 95.7% of the initial fluorescence had been lost, showing that most of the BSA had been altered. It can be concluded that BSA exerts its synergistic effect with antioxidants through the formation of a protein-antioxidant adduct during storage, which is concentrated at the oil-water interface owing to the surface-active nature of the protein.

Relevance: 30.00%

Abstract:

This correspondence introduces a new orthogonal forward regression (OFR) model identification algorithm that uses D-optimality for model structure selection and M-estimation of the parameters. The M-estimator is a classical robust parameter-estimation technique for tackling bad data conditions such as outliers. Computationally, the M-estimator can be derived using an iteratively reweighted least squares (IRLS) algorithm. D-optimality is a model structure robustness criterion in experimental design used to tackle ill-conditioning in the model structure. Orthogonal forward regression, often based on the modified Gram-Schmidt procedure, is an efficient method that incorporates structure selection and parameter estimation simultaneously. The basic idea of the proposed approach is to incorporate an IRLS inner loop into the modified Gram-Schmidt procedure. In this manner, the OFR algorithm for parsimonious model structure determination is extended to bad data conditions, with improved performance via the derivation of parameter M-estimators with inherent robustness to outliers. Numerical examples are included to demonstrate the effectiveness of the proposed algorithm.
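
A minimal sketch of a stand-alone IRLS M-estimator (with Huber weights) for a linear-in-parameters model, shown only to illustrate the inner loop referred to above; it is not the authors' combined OFR/D-optimality algorithm, and the weighting function and tuning constant are conventional choices, not taken from the correspondence.

```python
import numpy as np

def irls_huber(X, y, delta=1.345, n_iter=20, tol=1e-8):
    """Iteratively reweighted least squares with Huber weights: a generic M-estimator
    for a linear-in-parameters model y ~ X @ theta, robust to outliers."""
    theta = np.linalg.lstsq(X, y, rcond=None)[0]                     # ordinary LS start
    for _ in range(n_iter):
        r = y - X @ theta
        scale = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust (MAD) scale
        u = r / scale
        w = np.where(np.abs(u) <= delta, 1.0, delta / np.abs(u))      # Huber weights
        sw = np.sqrt(w)
        theta_new = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        if np.max(np.abs(theta_new - theta)) < tol:
            return theta_new
        theta = theta_new
    return theta

# Toy example: a line with a few gross outliers
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(-1.0, 1.0, 50)])
y = X @ np.array([1.0, 2.0]) + 0.05 * rng.standard_normal(50)
y[:3] += 5.0                                              # contaminate three points
print(irls_huber(X, y))                                   # close to [1, 2] despite the outliers
```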

Relevance: 30.00%

Abstract:

In this paper, we show how a set of recently derived theoretical results for recurrent neural networks can be applied to the production of an internal model control system for a nonlinear plant. The results include determination of the relative order of a recurrent neural network and the invertibility of such a network. A closed-loop controller is produced without the need to retrain the neural network plant model. Stability of the closed-loop controller is also demonstrated.

Relevance: 30.00%

Abstract:

We consider the finite-sample properties of model selection by information criteria in conditionally heteroscedastic models. Recent theoretical results show that certain popular criteria are consistent in that they will select the true model asymptotically with probability 1. To examine the empirical relevance of this property, Monte Carlo simulations are conducted for a set of non-nested data generating processes (DGPs), with the set of candidate models consisting of all types of model used as DGPs. In addition, not only is the best model considered but also those with similar values of the information criterion, called close competitors, thus forming a portfolio of eligible models. To supplement the simulations, the criteria are applied to a set of economic and financial series. In the simulations, the criteria are largely ineffective at identifying the correct model, either as best or as a close competitor, the parsimonious GARCH(1, 1) model being preferred for most DGPs. In contrast, asymmetric models are generally selected to represent actual data. This leads to the conjecture that the properties of parameterizations of processes commonly used to model heteroscedastic data are more similar than may be imagined, and that more attention needs to be paid to the behaviour of the standardized disturbances of such models, both in simulation exercises and in empirical modelling.
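
A minimal sketch of the selection step described above: given a fitted log-likelihood and a parameter count for each candidate model, compute an information criterion, pick the minimum, and flag "close competitors" within a chosen tolerance. The fitting of the conditionally heteroscedastic models themselves is not shown; the model names, log-likelihoods and tolerance below are hypothetical.

```python
import math

def information_criterion(loglik, n_params, n_obs, kind="bic"):
    """Generic IC = -2 log-likelihood + penalty; BIC penalty = k ln(n), AIC penalty = 2k."""
    penalty = n_params * (math.log(n_obs) if kind == "bic" else 2.0)
    return -2.0 * loglik + penalty

def select_with_close_competitors(candidates, n_obs, kind="bic", tol=2.0):
    """candidates maps model name -> (loglik, n_params). Returns the best model,
    the 'close competitors' within tol of the minimum IC, and all IC values."""
    ics = {name: information_criterion(ll, k, n_obs, kind)
           for name, (ll, k) in candidates.items()}
    best = min(ics, key=ics.get)
    close = [name for name in ics if name != best and ics[name] - ics[best] <= tol]
    return best, close, ics

# Hypothetical fitted log-likelihoods for three conditional-variance specifications
cands = {"GARCH(1,1)": (-1402.3, 4), "EGARCH(1,1)": (-1401.1, 5), "GJR(1,1)": (-1400.9, 5)}
print(select_with_close_competitors(cands, n_obs=1000))
```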

Relevance: 30.00%

Abstract:

This paper describes a new method for the assessment of palaeohydrology through the Holocene. A palaeoclimate model was linked with a hydrological model, using a weather generator to correct bias in the rainfall estimates, to simulate the changes in flood frequency and groundwater response through the late Pleistocene and Holocene for the Wadi Faynan in southern Jordan, a site considered internationally important owing to its rich archaeological heritage spanning the Pleistocene and Holocene. This is the first study to describe the hydrological functioning of the Wadi Faynan, a meso-scale (241 km²) semi-arid catchment, setting this description within the framework of contemporary archaeological investigations. Historic meteorological records were collated and supplemented with new hydrological and water quality data. The modelled outcomes indicate that environmental changes, such as deforestation, had a major impact on the local water cycle and amplified the effect of the prevailing climate on the flow regime. The results also show that increased rainfall alone does not necessarily imply better conditions for farming, and they highlight the importance of groundwater. The discussion focuses on the utility of the method and the importance of the local hydrology to the sustained settlement of the Wadi Faynan through prehistory and history.

Relevance: 30.00%

Abstract:

Progressive telomere shortening from cell division (replicative aging) provides a barrier to human tumor progression. This program is not conserved in laboratory mice, which have longer telomeres and constitutive telomerase. Wild species that do or do not use replicative aging have been reported, but the evolution of the different phenotypes and a conceptual framework for understanding their uses of telomeres are lacking. We examined telomeres and telomerase in cultured cells from more than 60 mammalian species to place the different uses of telomeres in a broad mammalian context. Phylogeny-based statistical analysis reconstructed ancestral states. Our analysis suggested that the ancestral mammalian phenotype included short telomeres (< 20 kb, as we now see in humans) and repressed telomerase. We argue that the repressed telomerase was a response to a higher mutation load brought on by the evolution of homeothermy. With telomerase repressed, we then see the evolution of replicative aging. Telomere length inversely correlated with lifespan, while telomerase expression co-evolved with body size. Multiple independent times, smaller, shorter-lived species changed to having longer telomeres and expressing telomerase. Trade-offs involving reducing the energetic and cellular costs of specific oxidative protection mechanisms (needed to protect < 20 kb telomeres in the absence of telomerase) could explain this abandonment of replicative aging. These observations provide a conceptual framework for understanding the different uses of telomeres in mammals, support a role for human-like telomeres in allowing longer lifespans to evolve, demonstrate the need to include telomere length in the analysis of comparative studies of oxidative protection in the biology of aging, and identify which mammals can be used as appropriate model organisms for the study of the role of telomeres in human cancer and aging. Key words: evolution of telomeres; immortalization; telomerase; replicative aging; senescence.