44 results for Post-convertibility accumulation model
in CentAUR: Central Archive University of Reading - UK
Abstract:
A mathematical model is presented to understand heat transfer processes during the cooling and re-warming of patients during cardiac surgery. Our compartmental model is able to account for many of the qualitative features observed in the cooling of various regions of the body including the central core containing the majority of organs, the rectal region containing the intestines and the outer peripheral region of skin and muscle. In particular, we focus on the issue of afterdrop: a drop in core temperature following patient re-warming, which can lead to serious post-operative complications. Model results for a typical cooling and re-warming procedure during surgery are in qualitative agreement with experimental data in producing the afterdrop effect and the observed dynamical variation in temperature between the core, rectal and peripheral regions. The influence of heat transfer processes and the volume of each compartmental region on the afterdrop effect is discussed. We find that excess fat on the peripheral and rectal regions leads to an increase in the afterdrop effect. Our model predicts that, by allowing constant re-warming after the core temperature has been raised, the afterdrop effect will be reduced.
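The compartmental structure described above can be sketched as a small system of coupled ODEs. The sketch below is illustrative only: the exchange coefficients, volumes and time step are hypothetical placeholders, not the paper's fitted parameters.

```python
# Illustrative three-compartment heat-exchange sketch (hypothetical parameters).
# Each compartment exchanges heat with its neighbours; the periphery also
# exchanges heat with the environment: dT_i/dt = sum_j k_ij * (T_j - T_i).

def step(temps, couplings, t_env, k_env, dt):
    """Advance core, rectal and peripheral temperatures by one Euler step."""
    core, rectal, periph = temps
    k_cr, k_cp, k_rp = couplings
    d_core = k_cr * (rectal - core) + k_cp * (periph - core)
    d_rectal = k_cr * (core - rectal) + k_rp * (periph - rectal)
    d_periph = (k_cp * (core - periph) + k_rp * (rectal - periph)
                + k_env * (t_env - periph))
    return (core + dt * d_core, rectal + dt * d_rectal, periph + dt * d_periph)

def simulate(t_env, minutes, temps=(37.0, 37.0, 34.0)):
    """Run the toy model for a number of one-minute steps at a fixed
    environmental temperature t_env (degrees C)."""
    for _ in range(minutes):
        temps = step(temps, (0.02, 0.01, 0.03), t_env, 0.05, 1.0)
    return temps
```

In this toy model, cooling the environment drags the periphery down quickly while the core lags behind; switching `t_env` to a re-warming value part-way through leaves a still-cold periphery that continues to draw heat from the core, which is the lag underlying afterdrop.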
Abstract:
Several studies have highlighted the importance of the cooling period for oil absorption in deep-fat fried products. Specifically, it has been established that the largest proportion of the oil that ends up in the food is sucked into the porous crust region after the fried product is removed from the oil bath, stressing the importance of this time interval. The main objective of this paper was to develop a predictive mechanistic model that can be used to understand the principles behind post-frying cooling oil absorption kinetics, and that can also help identify the key parameters affecting the final oil uptake by the fried product. The model was developed for two different geometries, an infinite slab and an infinite cylinder, and was divided into two main sub-models, one describing the immersion frying period itself and the other describing the post-frying cooling period. The immersion frying period was described by a transient moving-front model that considered the movement of the crust/core interface, whereas post-frying cooling oil absorption was considered to be a pressure-driven flow mediated by capillary forces. A key element in the model was the hypothesis that oil suction would only begin once a positive pressure driving force had developed. The mechanistic model was based on measurable physical and thermal properties and process parameters, with no need for empirical data fitting, and can be used to study oil absorption in any deep-fat fried product that satisfies the assumptions made.
Abstract:
The mathematical models that describe the immersion-frying period and the post-frying cooling period of an infinite slab or an infinite cylinder were solved and tested. Results were successfully compared with those found in the literature or obtained experimentally, and were discussed in terms of the hypotheses and simplifications made. The models were used as the basis of a sensitivity analysis. Simulations showed that a decrease in slab thickness and core heat capacity resulted in faster crust development. On the other hand, an increase in oil temperature and in the boiling heat transfer coefficient between the oil and the surface of the food accelerated crust formation. The model for oil absorption during cooling was analysed using the tested post-frying cooling equation to determine the moment at which a positive pressure driving force, allowing oil suction within the pore, originated. It was found that as crust layer thickness, pore radius and ambient temperature decreased, so did the time needed to start the absorption. On the other hand, as the effective convective heat transfer coefficient between the air and the surface of the slab increased, the required cooling time decreased. In addition, the time needed to allow oil absorption during cooling was found to be extremely sensitive to pore radius, indicating the importance of accurate pore size determination in future studies.
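The suction-onset hypothesis, that oil can only enter a pore once a positive pressure driving force develops, can be illustrated with a rough sketch: Young-Laplace capillary pressure competing against the gauge pressure of the vapour in the cooling pore. All parameter values (cooling rate, surface tension, initial temperature) are hypothetical, not taken from the paper.

```python
import math

# Schematic sketch of the post-frying suction-onset condition (all values
# hypothetical). During frying the pore holds water vapour at above-atmospheric
# pressure, which keeps oil out; as the crust cools the vapour pressure falls
# and capillary forces eventually dominate, so oil suction begins.

P_ATM = 101325.0  # Pa

def capillary_pressure(surface_tension, pore_radius, contact_angle=0.0):
    """Young-Laplace capillary pressure in a cylindrical pore (Pa)."""
    return 2.0 * surface_tension * math.cos(contact_angle) / pore_radius

def vapour_gauge_pressure(temp_k):
    """Saturated water-vapour pressure minus atmospheric, via a
    Clausius-Clapeyron approximation around 373 K (latent heat ~40.66 kJ/mol)."""
    return P_ATM * math.exp(-4890.0 * (1.0 / temp_k - 1.0 / 373.15)) - P_ATM

def suction_onset_time(pore_radius, t0=453.0, t_env=293.0, k_cool=0.01,
                       sigma=0.03):
    """Seconds of cooling (lumped exponential decay at rate k_cool per second)
    until the net pressure driving force turns positive."""
    t = 0.0
    while True:
        temp = t_env + (t0 - t_env) * math.exp(-k_cool * t)
        if capillary_pressure(sigma, pore_radius) - vapour_gauge_pressure(temp) > 0.0:
            return t
        t += 1.0
```

Consistent with the sensitivity results above, a smaller pore radius gives a larger capillary pressure, so in this sketch suction starts earlier as the pore radius decreases.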
Abstract:
Wave solutions to a mechanochemical model for cytoskeletal activity are studied and the results applied to the waves of chemical and mechanical activity that sweep over an egg shortly after fertilization. The model takes into account the calcium-controlled presence of actively contractile units in the cytoplasm, and consists of a viscoelastic force equilibrium equation and a conservation equation for calcium. Using piecewise linear caricatures, we obtain analytic solutions for travelling waves on a strip and demonstrate that the full nonlinear system behaves as predicted by the analytic solutions. The equations are solved on a sphere and the numerical results are similar to the analytic solutions. We indicate how the speed of the waves can be used as a diagnostic tool with which the chemical reactivity of the egg surface can be measured.
Abstract:
The 2008-2009 financial crisis and related organizational and economic failures have meant that financial organizations are faced with a ‘tsunami’ of new regulatory obligations. This environment presents new managerial challenges, as organizations are forced to engage in complex and costly remediation projects with short deadlines. Drawing from a longitudinal study conducted with nine financial institutions over twelve years, this paper identifies nine IS capabilities which underpin activities for managing regulatory-themed governance, risk and compliance efforts. The research shows that many firms are now focused on meeting the Regulators’ deadlines at the expense of developing a strategic, enterprise-wide connected approach to compliance. Consequently, executives are in danger of implementing siloed compliance solutions within business functions. By evaluating the maturity of the IS capabilities which underpin regulatory adherence, managers have an opportunity to develop robust operational architectures and so are better positioned to face the challenges derived from shifting regulatory landscapes.
Abstract:
This paper focuses upon the policy and institutional change that has taken place within the Argentine electricity market since the country’s economic and social crisis of 2001/2. As one of the first less developed countries (LDCs) to liberalise and privatise its electricity industry, Argentina has since moved away from the orthodox market model after consumer prices were frozen by the Government in early 2002 when the national currency was devalued by 70%. Although its reforms were widely praised during the 1990s, the electricity market has undergone a number of interventions, ostensibly to keep consumer prices low and to avert the much-discussed energy ‘crisis’ caused by a dearth of new investment combined with rising demand levels. This paper explores how the economic crisis and its consequences have both enabled and legitimised these policy and institutional amendments, while drawing upon the specifics of the post-neoliberal market ‘re-reforms’ to consider the extent to which the Government appears to be moving away from market-based prescriptions. In addition, this paper contributes to sector-specific understandings of how, despite these changes, neoliberal ideas and assumptions continue to dominate Argentine public policy well beyond the post-crisis era.
Abstract:
Testing of the Integrated Nitrogen model for Catchments (INCA) in a wide range of ecosystem types across Europe has shown that the model underestimates N transformation processes to a large extent in northern catchments of Finland and Norway in winter and spring. It is found, and generally assumed, that microbial activity in soils proceeds at low rates at northern latitudes during winter, even at sub-zero temperatures. The INCA model was modified to improve the simulation of N transformation rates in northern catchments, characterised by cold climates and extensive snow accumulation and insulation in winter, by introducing an empirical function to simulate soil temperatures below the seasonal snow pack, and a degree-day model to calculate the depth of the snow pack. The proposed snow-correction factor improved the simulation of soil temperatures at Finnish and Norwegian field sites in winter, although soil temperature was still underestimated during periods with a thin snow cover. Finally, a comparison between the modified INCA version (v. 1.7) and the former version (v. 1.6) was made at the Simojoki river basin in northern Finland and at Dalelva Brook in northern Norway. The new modules did not imply any significant changes in simulated NO3- concentration levels in the streams but improved the timing of simulated higher concentrations. The inclusion of a modified temperature response function and an empirical snow-correction factor improved the flexibility and applicability of the model for climate effect studies.
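A degree-day snowpack model of the kind added to INCA, together with an empirical snow-insulation correction for soil temperature beneath the pack, can be sketched as follows. The degree-day factor and damping coefficient are illustrative placeholders, not the calibrated INCA values.

```python
# Minimal degree-day snowpack sketch with a snow-insulation correction for
# soil temperature, in the spirit of the modifications described above
# (coefficients hypothetical).

def degree_day_snowpack(precip_mm, air_temp_c, ddf=3.0, t_melt=0.0):
    """Daily snow water equivalent (mm): accumulate precipitation when at or
    below the melt threshold, melt at ddf mm per positive degree-day otherwise."""
    swe = 0.0
    series = []
    for p, t in zip(precip_mm, air_temp_c):
        if t <= t_melt:
            swe += p                                  # snowfall accumulates
        else:
            swe = max(0.0, swe - ddf * (t - t_melt))  # degree-day melt
        series.append(swe)
    return series

def soil_temp_under_snow(air_temp_c, snow_depth_mm, damping=0.02):
    """Empirical snow-correction: the deeper the pack, the more the soil
    temperature is decoupled from (cold) winter air temperatures, tending
    towards 0 degrees C under a deep pack."""
    insulation = 1.0 / (1.0 + damping * snow_depth_mm)
    return insulation * air_temp_c
```

The insulation term captures the qualitative behaviour described above: with a thick pack, sub-snow soil stays near freezing even in severe air frost, while a thin pack lets the soil track the air temperature and the correction contributes little.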
Abstract:
A new snow-soil-vegetation-atmosphere transfer (Snow-SVAT) scheme, which simulates the accumulation and ablation of the snow cover beneath a forest canopy, is presented. The model was formulated by coupling a canopy optical and thermal radiation model to a physically-based multi-layer snow model. This canopy radiation model is physically-based yet requires few parameters, so it can be used when extensive in-situ field measurements are not available. Other forest effects such as the reduction of wind speed, interception of snow on the canopy and the deposition of litter were incorporated within this combined model, SNOWCAN, which was tested with data taken as part of the Boreal Ecosystem-Atmosphere Study (BOREAS) international collaborative experiment. Snow depths beneath four different canopy types and at an open site were simulated. Agreement between observed and simulated snow depths was generally good, with correlation coefficients ranging between r^2=0.94 and r^2=0.98 for all sites where automatic measurements were available. However, the simulated date of total snowpack ablation generally occurred later than the observed date. A comparison between simulated solar radiation and limited measurements of sub-canopy radiation at one site indicates that the model simulates the sub-canopy downwelling solar radiation early in the season to within measurement uncertainty.
Abstract:
Three successive field experiments (2000/01-2002/03) assessed the effect of wheat cultivar (Consort, Hereward and Shamrock) and fungicide (epoxiconazole and azoxystrobin) applied at and after flag leaf emergence on the nitrogen in the above-ground crop (Total N) and grain (Grain N), net nitrogen remobilization from non-grain tissues (Remobilized N), grain dry matter (Grain DM), and nitrogen utilization efficiency (NUtE(g) = Grain DM/Total N). Ordinary logistic curves were fitted to the accumulation of Grain N, Grain DM and Remobilized N against thermal time after anthesis and used to simultaneously derive fits for Total N and NUtE(g). When disease was controlled, Consort achieved the greatest Grain DM, Total N, Grain N and NUtE(g); in each case due mostly to longer durations, rather than quicker rates, of accumulation. Fungicide application increased final Grain DM, Grain N, Total N and Remobilized N, also mostly through effects on duration rather than rate of accumulation. Completely senesced leaf laminas retained less nitrogen when fungicide had been applied compared with leaf laminas previously infected severely with brown rust (Puccinia recondita) and Septoria tritici, or with just S. tritici. Late movement of nitrogen out of fungicide-treated laminas contributed to extended duration of both nitrogen remobilization and grain N filling, and meant that increases in NUtE(g) could occur without simultaneous reductions in grain N concentration.
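The ordinary logistic curve used for the accumulation fits has the familiar three-parameter form; the sketch below evaluates it against thermal time after anthesis, with illustrative (not fitted) parameter values.

```python
import math

# Sketch of the ordinary logistic curve fitted to grain nitrogen (or dry
# matter) accumulation against thermal time after anthesis. The asymptote,
# rate and mid-point values used in any call here are illustrative, not the
# fitted values from the study.

def logistic(thermal_time, asymptote, rate, midpoint):
    """Accumulated amount after `thermal_time` degree-days: rises from ~0,
    passes asymptote/2 at the midpoint, and saturates at the asymptote."""
    return asymptote / (1.0 + math.exp(-rate * (thermal_time - midpoint)))
```

The duration-versus-rate distinction drawn in the abstract maps directly onto these parameters: the same final amount (asymptote) can be reached via a smaller `rate` with a later `midpoint` (longer duration) rather than a steeper curve.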
Abstract:
A two-sector Ramsey-type model of growth is developed to investigate the relationship between agricultural productivity and economy-wide growth. The framework takes into account the peculiarities of agriculture both in production (reliance on a fixed natural resource base) and in consumption (life-sustaining role and low income elasticity of food demand). The transitional dynamics of the model establish that when preferences respect Engel's law, the level and growth rate of agricultural productivity influence the speed of capital accumulation. A calibration exercise shows that a small difference in agricultural productivity has drastic implications for the rate and pattern of growth of the economy. Hence, low agricultural productivity can form a bottleneck limiting growth, because high food prices result in a low saving rate.
Abstract:
Background: Selecting the highest quality 3D model of a protein structure from a number of alternatives remains an important challenge in the field of structural bioinformatics. Many Model Quality Assessment Programs (MQAPs) have been developed which adopt various strategies in order to tackle this problem, ranging from the so-called "true" MQAPs capable of producing a single energy score based on a single model, to methods which rely on structural comparisons of multiple models or additional information from meta-servers. However, it is clear that no current method can consistently separate the highest accuracy models from the lowest. In this paper, a number of the top performing MQAP methods are benchmarked in the context of the potential value that they add to protein fold recognition. Two novel methods are also described: ModSSEA, which is based on the alignment of predicted secondary structure elements, and ModFOLD, which combines several true MQAP methods using an artificial neural network. Results: The ModSSEA method is found to be an effective model quality assessment program for ranking multiple models from many servers; however, further accuracy can be gained by using the consensus approach of ModFOLD. The ModFOLD method is shown to significantly outperform the true MQAPs tested and is competitive with methods which make use of clustering or additional information from multiple servers. Several of the true MQAPs are also shown to add value to most individual fold recognition servers by improving model selection, when applied as a post filter in order to re-rank models. Conclusion: MQAPs should be benchmarked appropriately for the practical context in which they are intended to be used. Clustering based methods are the top performing MQAPs where many models are available from many servers; however, they often do not add value to individual fold recognition servers when limited models are available. Conversely, the true MQAP methods tested can often be used as effective post filters for re-ranking the few models from individual fold recognition servers, and further improvements can be achieved using a consensus of these methods.
Abstract:
Essential and Molecular Dynamics (ED/MD) have been used to model the conformational changes, after non-enzymic post-translational modification, of a protein implicated in a conformational disease: cataract, the largest cause of blindness in the world. Cyanate modification did not significantly alter flexibility, while the Schiff's base adduct produced a more flexible N-terminal domain, and more flexible intra-secondary structure regions, than either the cyanate adduct or the native structure. Glycation also increased linker flexibility and disrupted the charge network. A number of post-translational adducts showed structural disruption around Cys15 and increased linker flexibility; this may be important in subsequent protein aggregation. Our modelling results are in accord with experimental evidence, and show that ED/MD is a useful tool in modelling conformational changes in proteins implicated in disease processes.
Abstract:
Modern buildings are designed to enhance the match between the environment, spaces and the people carrying out work, so that the well-being and the performance of the occupants are in harmony. Building services are systems that facilitate a healthy working environment within which workers' productivity can be optimised. However, the maintenance of these services is fraught with problems and may contribute up to 50% of the total life cycle cost of the building. Maintenance support is one area which is not usually designed into the system, as this is not common practice in the services industry. The other areas of shortfall for future designs are client requirements, commissioning, facilities management data and post occupancy evaluation feedback, which need to be adequately planned so that this information is captured and documented for use in future designs. At the University of Reading an integrated approach has been developed to assemble the multitude of aspects inherent in this field; it records requirements and measured achievements for the benefit of both building owners and practitioners. This integrated approach can be represented in a Through Life Business Model (TLBM) format using the concept of Integrated Logistic Support (ILS). The prototype TLBM developed utilises the tailored tools and techniques of ILS for building services. This TLBM approach will facilitate the successful development of a databank that would be invaluable in capturing essential data (e.g. reliability of components) for enhancing future building services designs, life cycle costing and decision making by practitioners, in particular facilities managers.
Abstract:
An exaggerated postprandial lipaemic response is thought to play a central role in the development of an atherogenic lipoprotein phenotype, a recognized lipid risk factor for coronary heart disease. A small number of limited studies have compared postprandial lipaemia in subjects of varying age, but have not investigated mechanisms underlying age-associated changes in postprandial lipaemia. In order to test the hypothesis that impaired lipaemia in older subjects is associated with loss of insulin sensitivity, the present study compared the postprandial lipaemic and hormone responses for 9 h following a standard mixed meal in normolipidaemic healthy young and middle-aged men. Lipoprotein lipase (LPL) and hepatic lipase (HL) activities were determined in post-heparin plasma 9 h postprandially and on another occasion under fasting conditions. Postprandial plasma glucose (P < 0.02), retinyl ester (indirect marker for chylomicron particles; P < 0.005) and triacylglycerol (TAG)-rich lipoprotein (density < 1.006 g/ml fraction of plasma) TAG (P < 0.05) and retinyl ester (P < 0.005) responses were higher in middle-aged men, whereas plasma insulin responses were lower in this group (P < 0.001). Fasting and 9 h postprandial LPL and HL activities were also significantly lower in the middle-aged men compared with the young men (P < 0.006). In conclusion, the higher incremental postprandial TAG response in middle-aged men than young men was attributed to the accumulation of dietary-derived TAG-rich lipoproteins (density < 1.006 g/ml fraction of plasma) and occurred in the absence of marked differences in fasting TAG levels between the two groups. 
Fasting and postprandial LPL and HL activities were markedly lower in middle-aged men, but the lack of statistical associations between measures of insulin response and post-heparin lipase activities, as well as between insulin and measures of postprandial lipaemia, suggests that this lower activity cannot be attributed to a lack of sensitivity of the lipases to activation by insulin. Alternatively, post-heparin lipase activities may not be good markers for the insulin-sensitive component of lipase that is activated postprandially.
Abstract:
A fundamental principle in practical nonlinear data modeling is the parsimonious principle of constructing the minimal model that explains the training data well. Leave-one-out (LOO) cross validation is often used to estimate generalization errors when choosing amongst different network architectures (M. Stone, "Cross-validatory choice and assessment of statistical predictions", J. R. Statist. Soc., Ser. B, 36, pp. 111-147, 1974). Based upon the minimization of an LOO criterion, either the mean square of the LOO errors for regression or the LOO misclassification rate for classification, we present two backward elimination algorithms as model post-processing procedures for regression and classification problems. The proposed backward elimination procedures exploit an orthogonalization procedure to maintain orthogonality between the subspace spanned by the pruned model and the deleted regressor. It is then shown that the LOO criteria used in both algorithms can be calculated via an analytic recursive formula, derived in this contribution, without actually splitting the estimation data set, so reducing computational expense. Compared to most other model construction methods, the proposed algorithms are advantageous in several aspects: (i) there are no tuning parameters to be optimized through an extra validation data set; (ii) the procedure is fully automatic, without an additional stopping criterion; and (iii) the model structure selection is directly based on model generalization performance. Illustrative examples on regression and classification demonstrate that the proposed algorithms are viable post-processing methods to prune a model to gain extra sparsity and improved generalization.
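The computational point, that exact LOO errors for a linear-in-the-parameters model can be obtained without refitting n times, can be illustrated with the standard hat-matrix identity e_i/(1 - h_ii). The paper derives a recursive variant tailored to orthogonalized regressors; the sketch below shows the same idea in its simplest, non-recursive form.

```python
import numpy as np

# Exact leave-one-out residuals for a linear-in-the-parameters model from a
# single least-squares fit, via the hat-matrix identity e_i / (1 - h_ii).
# (This is the standard closed form; the paper's recursive formula achieves
# the same goal for orthogonalized regressors during backward elimination.)

def loo_residuals(X, y):
    """LOO prediction errors without refitting: e_i / (1 - h_ii)."""
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ theta
    # Leverages h_ii = diag(X (X^T X)^{-1} X^T)
    h = np.einsum('ij,ji->i', X, np.linalg.pinv(X.T @ X) @ X.T)
    return residuals / (1.0 - h)

def loo_mse(X, y):
    """Mean square of the LOO errors, the pruning criterion for regression."""
    e = loo_residuals(X, y)
    return float(np.mean(e ** 2))
```

A backward elimination step then simply evaluates `loo_mse` with each candidate regressor (column of `X`) deleted and removes the one whose deletion lowers the criterion most, stopping when no deletion improves it.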