187 results for Power Sensitivity Model


Relevance: 30.00%

Abstract:

A mathematical model describing the uptake of low density lipoprotein (LDL) and very low density lipoprotein (VLDL) particles by a single hepatocyte cell is formulated and solved. The model includes a description of the dynamic change in receptor density on the surface of the cell due to the binding and dissociation of the lipoprotein particles, the subsequent internalisation of bound particles and receptors as well as of unbound receptors, the recycling of receptors to the cell surface, cholesterol-dependent de novo receptor formation by the cell, and the effect that particle uptake has on the cell's overall cholesterol content. The effect on LDL uptake of VLDL blocking access to LDL receptors, or of internalisation of VLDL particles containing different amounts of apolipoprotein E (we refer to these particles as VLDL-2 and VLDL-3), is explored. By comparison with experimental data we find that measures of cell cholesterol content are important in differentiating between the mechanisms by which VLDL is thought to inhibit LDL uptake. We extend our work to show that in the presence of both types of VLDL particle (VLDL-2 and VLDL-3), measuring relative LDL uptake does not allow the effects of blocking and of internalisation of each VLDL particle to be distinguished. Instead, by considering the intracellular cholesterol content, it is found that internalisation of VLDL-2 and VLDL-3 leads to the highest intracellular cholesterol concentration. A sensitivity analysis of the model reveals that the binding, unbinding and internalisation rates, the fraction of receptors recycled and the rate at which cholesterol-dependent free receptors are created by the cell have important implications for the overall uptake dynamics of either VLDL or LDL particles and for the subsequent intracellular cholesterol concentration.
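The receptor dynamics summarised above can be illustrated with a minimal sketch: a two-state system of free surface receptors and bound particle-receptor complexes, integrated by forward Euler. This is a deliberate simplification of the paper's model (cholesterol feedback is omitted) and all rate constants here are hypothetical:

```python
# Minimal receptor-ligand sketch (not the authors' full model):
# R = free surface receptors, C = bound particle-receptor complexes.
# k_on: binding, k_off: dissociation, k_int: internalisation of complexes,
# f_rec: fraction of internalised receptors recycled to the surface,
# k_syn: de novo receptor synthesis (cholesterol dependence omitted).
k_on, k_off, k_int, f_rec, k_syn = 0.1, 0.05, 0.2, 0.7, 1.0
L = 5.0           # extracellular lipoprotein concentration, held constant
R, C = 20.0, 0.0  # initial receptors, no bound complexes
dt = 0.01
for _ in range(int(200 / dt)):
    dR = k_syn - k_on * L * R + k_off * C + f_rec * k_int * C
    dC = k_on * L * R - (k_off + k_int) * C
    R, C = R + dt * dR, C + dt * dC
# At steady state, synthesis balances the unrecycled internalised receptors:
# k_syn = (1 - f_rec) * k_int * C,  so  C -> k_syn / ((1 - f_rec) * k_int)
```

Even this toy version shows why the recycled fraction and the synthesis rate dominate the steady-state uptake flux, which is consistent with the sensitivity analysis reported above.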

Relevance: 30.00%

Abstract:

Mathematical modeling of bacterial chemotaxis systems has been influential and insightful in helping to understand experimental observations. We provide here a comprehensive overview of the range of mathematical approaches used for modeling, within a single bacterium, chemotactic processes caused by changes to external gradients in its environment. Specific areas of the bacterial system which have been studied and modeled are discussed in detail, including the modeling of adaptation in response to attractant gradients, the intracellular phosphorylation cascade, membrane receptor clustering, and spatial modeling of intracellular protein signal transduction. The importance of producing robust models that address adaptation, gain, and sensitivity is also discussed. This review highlights that while mathematical modeling has aided in understanding bacterial chemotaxis on the individual cell scale and guiding experimental design, no single model succeeds in robustly describing all of the basic elements of the cell. We conclude by discussing the importance of this and the future of modeling in this area.

Relevance: 30.00%

Abstract:

This investigation deals with the question of when a particular population can be considered to be disease-free. The motivation is the case of BSE, where specific birth cohorts may present distinct disease-free subpopulations. The specific objective is to develop a statistical approach suitable for documenting freedom from disease, in particular, freedom from BSE in birth cohorts. The approach is based upon a geometric waiting time distribution for the occurrence of positive surveillance results and formalizes the relationship between design prevalence, cumulative sample size and statistical power. The simple geometric waiting time model is further modified to account for the diagnostic sensitivity and specificity associated with the detection of disease. This is exemplified for BSE using two different models for the diagnostic sensitivity. The model is furthermore modified in such a way that a set of different values for the design prevalence in the surveillance streams can be accommodated (prevalence heterogeneity), and a general expression for the power function is developed. For illustration, numerical results for BSE suggest that currently (data status September 2004) a birth cohort of Danish cattle born after March 1999 is free from BSE with probability (power) of 0.8746 or 0.8509, depending on the choice of a model for the diagnostic sensitivity.
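The core relationship between design prevalence, cumulative sample size and power can be sketched as follows. This is only the simple geometric waiting-time case with a single sensitivity value, not the paper's prevalence-heterogeneity model, and the numbers are illustrative rather than the Danish data:

```python
def freedom_power(n_samples, design_prevalence, sensitivity):
    """Probability of detecting at least one case if disease is present
    at the design prevalence: the complement of the geometric waiting-time
    probability of observing no positive surveillance result."""
    p_positive = design_prevalence * sensitivity  # per-sample detection prob.
    return 1.0 - (1.0 - p_positive) ** n_samples

# Illustrative: 20,000 tested animals, design prevalence 1 in 10,000,
# assumed diagnostic sensitivity 0.9 (hypothetical values).
power = freedom_power(20_000, 1e-4, 0.9)
```

Because power depends on the product of prevalence and sensitivity, an age-dependent diagnostic sensitivity enters the calculation directly, which is why the paper obtains different power values under different sensitivity models.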

Relevance: 30.00%

Abstract:

Substantial resources are used for surveillance of bovine spongiform encephalopathy (BSE) despite an extremely low detection rate, especially in healthy slaughtered cattle. We have developed a method based on the geometric waiting time distribution to establish and update the statistical evidence for BSE-freedom for defined birth cohorts using continued surveillance data. The results suggest that currently (data included up to September 2004) a birth cohort of Danish cattle born after March 1999 is free from BSE with probability (power) of 0.8746 or 0.8509, depending on the choice of a model for the diagnostic sensitivity. These results apply to an assumed design prevalence of 1 in 10,000 and account for prevalence heterogeneity. The age-dependent diagnostic sensitivity for the detection of BSE has been identified as a major determinant of the power. The incorporation of heterogeneity was deemed adequate on scientific grounds and led to improved power values. We propose our model as a decision tool for possible future modification of BSE surveillance and discuss public health and international trade implications.

Relevance: 30.00%

Abstract:

The paper presents the techno-economic modelling of the CO2 capture process in coal-fired power plants. An overall model is being developed to compare carbon capture and sequestration options at locations within the UK, and for studies of the sensitivity of the cost of disposal to changes in the major parameters of the most promising solutions identified. Technological options for CO2 capture have been studied and cost estimation relationships (CERs) for the chosen options calculated. The models created cover capital and operation and maintenance costs. Expressions for the total annualised cost of plant electricity output and for the amount of CO2 avoided have been developed, and the influence of interest rates and plant life has been analysed as well. The CERs are included as an integral part of the overall model.
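The dependence of annualised cost on interest rate and plant life follows from the standard capital recovery factor. A minimal sketch, with purely illustrative figures (the paper's CERs are not reproduced here):

```python
def capital_recovery_factor(interest_rate, plant_life_years):
    """Fraction of capital cost charged per year when a one-off capital
    outlay is annuitised over the plant life at the given interest rate."""
    i, n = interest_rate, plant_life_years
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def annualised_cost(capital, o_and_m_per_year, interest_rate, plant_life_years):
    """Total annualised cost = annuitised capital + yearly O&M cost."""
    return capital * capital_recovery_factor(interest_rate, plant_life_years) \
        + o_and_m_per_year

# Illustrative (hypothetical) values: 1,000 M GBP capture-plant capital,
# 40 M GBP/yr operation and maintenance, 8% interest, 25-year plant life.
total = annualised_cost(1000.0, 40.0, 0.08, 25)
```

Varying `interest_rate` and `plant_life_years` in this expression reproduces the kind of interest-rate and plant-life sensitivity study described in the abstract.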

Relevance: 30.00%

Abstract:

This paper examines the life cycle GHG emissions from existing UK pulverized coal power plants. The life cycle of the electricity generation plant includes construction, operation and decommissioning. The operation phase is extended to upstream and downstream processes. Upstream processes include the mining and transport of coal including methane leakage and the production and transport of limestone and ammonia, which are necessary for flue gas clean up. Downstream processes, on the other hand, include waste disposal and the recovery of land used for surface mining. The methodology used is material-based process analysis, which allows calculation of the total emissions for each process involved. A simple model for predicting the energy and material requirements of the power plant is developed. Preliminary calculations reveal that for a typical UK coal fired plant, the life cycle emissions amount to 990 g CO2-e/kWh of electricity generated, which compares well with previous UK studies. The majority of these emissions result from direct fuel combustion (882 g/kWh, 89%), with methane leakage from mining operations accounting for 60% of indirect emissions. In total, mining operations (including methane leakage) account for 67.4% of indirect emissions, while limestone and other material production and transport account for 31.5%. The methodology developed is also applied to a typical IGCC power plant. It is found that IGCC life cycle emissions are 15% less than those from PC power plants. Furthermore, upon investigating the influence of power plant parameters on life cycle emissions, it is determined that, while the effect of changing the load factor is negligible, increasing efficiency from 35% to 38% can reduce emissions by 7.6%. The current study is funded by the UK Natural Environment Research Council (NERC) and is undertaken as part of the UK Carbon Capture and Storage Consortium (UKCCSC).
Future work will investigate the life cycle emissions from other power generation technologies with and without carbon capture and storage. The current paper reveals that it might be possible that, when CCS is employed, the emissions during generation decrease to a level where the emissions from upstream processes (i.e. coal production and transport) become dominant, and so the life cycle efficiency of the CCS system can be significantly reduced. The location of coal, coal composition and mining method are important in determining the overall impacts. In addition to studying the net emissions from CCS systems, future work will also investigate the feasibility and techno-economics of these systems as a means of carbon abatement.
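The quoted breakdown can be checked arithmetically; a short sketch using only the figures reported above (g CO2-e per kWh):

```python
# Consistency check on the quoted life-cycle figures for the UK coal plant.
total = 990.0                # total life-cycle emissions, g CO2-e/kWh
direct = 882.0               # direct fuel combustion (quoted as ~89% of total)
indirect = total - direct    # 108 g/kWh from upstream/downstream processes
mining = 0.674 * indirect    # mining incl. methane leakage, g/kWh
materials = 0.315 * indirect # limestone/other material production & transport
direct_share = direct / total
```

The shares reconcile: direct combustion is 882/990 ≈ 89% of the total, leaving 108 g/kWh of indirect emissions, of which mining contributes roughly 73 g/kWh and material production and transport roughly 34 g/kWh.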

Relevance: 30.00%

Abstract:

The evaluation of life cycle greenhouse gas emissions from power generation with carbon capture and storage (CCS) is a critical factor in energy and policy analysis. The current paper examines life cycle emissions from three types of fossil-fuel-based power plants, namely supercritical pulverized coal (super-PC), natural gas combined cycle (NGCC) and integrated gasification combined cycle (IGCC), with and without CCS. Results show that, for a 90% CO2 capture efficiency, life cycle GHG emissions are reduced by 75-84%, depending on the technology used. With GHG emissions of less than 170 g/kWh, IGCC technology is found to compare favorably with NGCC when CCS is applied. Sensitivity analysis reveals that, for coal power plants, varying the CO2 capture efficiency and the coal transport distance has a more pronounced effect on life cycle GHG emissions than changing the length of the CO2 transport pipeline. Finally, it is concluded from the current study that while the global warming potential is reduced when MEA-based CO2 capture is employed, the increase in other air pollutants such as NOx and NH3 leads to higher eutrophication and acidification potentials.

Relevance: 30.00%

Abstract:

The use of glycine to limit acrylamide formation during the heating of a potato model system was also found to alter the relative proportions of alkylpyrazines. The addition of glycine increased the quantities of several alkylpyrazines, and labeling studies using [2-C-13]glycine showed that those alkylpyrazines which increased in the presence of glycine had at least one C-13-labeled methyl substituent derived from glycine. The distribution of C-13 within the pyrazines suggested two pathways by which glycine, and other amino acids, participate in alkylpyrazine formation, and showed the relative contribution of each pathway. Alkylpyrazines that involve glycine in both formation pathways displayed the largest relative increases with glycine addition. The study provided an insight into the sensitivity of alkylpyrazine formation to the amino acid composition in a heated food and demonstrated the importance of those amino acids that are able to contribute an alkyl substituent. This may aid in estimating the impact of amino acid addition on pyrazine formation, when amino acids are added to foods for acrylamide mitigation.

Relevance: 30.00%

Abstract:

The mathematical models that describe the immersion-frying period and the post-frying cooling period of an infinite slab or an infinite cylinder were solved and tested. Results were successfully compared with those found in the literature or obtained experimentally, and were discussed in terms of the hypotheses and simplifications made. The models were used as the basis of a sensitivity analysis. Simulations showed that a decrease in slab thickness and core heat capacity resulted in faster crust development. On the other hand, an increase in oil temperature and boiling heat transfer coefficient between the oil and the surface of the food accelerated crust formation. The model for oil absorption during cooling was analysed using the tested post-frying cooling equation to determine the moment in which a positive pressure driving force, allowing oil suction within the pore, originated. It was found that as crust layer thickness, pore radius and ambient temperature decreased so did the time needed to start the absorption. On the other hand, as the effective convective heat transfer coefficient between the air and the surface of the slab increased the required cooling time decreased. In addition, it was found that the time needed to allow oil absorption during cooling was extremely sensitive to pore radius, indicating the importance of an accurate pore size determination in future studies.

Relevance: 30.00%

Abstract:

There is evidence to suggest that insulin sensitivity may vary in response to changes in sex hormone levels. However, the results of human studies designed to investigate changes in insulin sensitivity through the menstrual cycle have proved inconclusive. The aims of this study were to 1) evaluate the impact of menstrual cycle phase on insulin sensitivity measures and 2) determine the variability of insulin sensitivity measures within the same menstrual cycle phase. A controlled observational study of 13 healthy premenopausal women, not taking any hormone preparation and having regular menstrual cycles, was conducted. Insulin sensitivity (Si) and glucose effectiveness (Sg) were measured using an intravenous glucose tolerance test (IVGTT) with minimal model analysis. Additional surrogate measures of insulin sensitivity were calculated (homeostasis model assessment of insulin resistance [HOMA-IR], quantitative insulin sensitivity check index [QUICKI] and revised QUICKI [rQUICKI]), as well as plasma lipids. Each woman was tested in the luteal and follicular phases of her menstrual cycle, and duplicate measures were taken in one phase of the cycle. No significant differences in insulin sensitivity (measured by the IVGTT or surrogate markers) or plasma lipids were reported between the two phases of the menstrual cycle or between duplicate measures within the same phase. It was concluded that variability in measures of insulin sensitivity was similar within and between menstrual phases.
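Two of the surrogate indices mentioned above have simple closed forms that can be sketched directly (standard definitions of HOMA-IR and QUICKI; the fasting values below are illustrative, not data from the study):

```python
import math

def homa_ir(glucose_mmol_l, insulin_uU_ml):
    """Homeostasis model assessment of insulin resistance:
    fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def quicki(glucose_mg_dl, insulin_uU_ml):
    """Quantitative insulin sensitivity check index:
    1 / (log10 fasting insulin (uU/mL) + log10 fasting glucose (mg/dL))."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

# Illustrative fasting values: glucose 5.0 mmol/L (= 90 mg/dL), insulin 8 uU/mL.
ir = homa_ir(5.0, 8.0)
si = quicki(90.0, 8.0)
```

rQUICKI extends the QUICKI denominator with a log term for fasting non-esterified fatty acids, which is why it requires an additional lipid measurement.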

Relevance: 30.00%

Abstract:

PURPOSE. To investigate the nature of early ocular misalignments in human infants to determine whether they can provide insight into the etiology of esotropia and, in particular, to examine the correlates of misalignments. METHODS. A remote haploscopic photorefraction system was used to measure accommodation and vergence in 146 infants between 0 and 12 months of age. Infants underwent photorefraction immediately after watching a target moving between two of five viewing distances (25, 33, 50, 100, and 200 cm). In some instances, infants were tested in two conditions: both eyes open and one eye occluded. The resultant data were screened for instances of large misalignments. Data were assessed to determine whether accommodative, retinal disparity, or other cues were associated with the occurrence of misalignments. RESULTS. The results showed that there was no correlation between accommodative behavior and misalignments. Infants were more likely to show misalignments when retinal disparity cues were removed through occlusion. They were also more likely to show misalignments immediately after the target moved from a near to a far position in comparison to far-to-near target movement. DISCUSSION. The data suggest that the prevalence of misalignments in infants of 2 to 3 months of age is decreased by the addition of retinal disparity cues to the stimulus. In addition, target movement away from the infant increases the prevalence of misalignments. These data are compatible with the notion that misalignments are caused by poor sensitivity to targets moving away from the infant and support the theory that some forms of strabismus could be related to failure in a system that is sensitive to the direction of motion.

Relevance: 30.00%

Abstract:

A Neural Mass model is coupled with a novel method to generate realistic Phase reset ERPs. The power spectra of these synthetic ERPs are compared with the spectra of real ERPs and synthetic ERPs generated via the Additive model. Real ERP spectra show similarities with synthetic Phase reset ERPs and synthetic Additive ERPs.

Relevance: 30.00%

Abstract:

The Danish Eulerian Model (DEM) is a powerful air pollution model, designed to calculate the concentrations of various dangerous species over a large geographical region (e.g. Europe). It takes into account the main physical and chemical processes between these species, the actual meteorological conditions, emissions, etc. This is a huge computational task and requires significant storage and CPU-time resources, so parallel computing is essential for the efficient practical use of the model. Several efficient parallel versions of the model have been created over the past few years. A suitable parallel version of DEM using the Message Passing Interface (MPI) library was implemented on two powerful supercomputers of EPCC, Edinburgh, available via the HPC-Europa programme for transnational access to research infrastructures in the EC: a Sun Fire E15K and an IBM HPCx cluster. Although the implementation is, in principle, the same for both supercomputers, a few modifications had to be made for successful porting of the code to the IBM HPCx cluster. Performance analysis and parallel optimization were done next, and results from benchmarking experiments are presented in this paper. Another set of experiments was carried out to investigate the sensitivity of the model to variation of some chemical rate constants in the chemical submodel; certain modifications of the code were necessary for this task. The obtained results will be used for further sensitivity analysis studies using Monte Carlo simulation.

Relevance: 30.00%

Abstract:

When a computer program requires legitimate access to confidential data, the question arises whether such a program may illegally reveal sensitive information. This paper proposes a policy model to specify what information flow is permitted in a computational system. The security definition, which is based on a general notion of information lattices, allows various representations of information to be used in the enforcement of secure information flow in deterministic or nondeterministic systems. A flexible semantics-based analysis technique is presented, which uses the input-output relational model induced by an attacker's observational power to compute the information released by the computational system. An illustrative attacker model demonstrates the use of the technique to develop a termination-sensitive analysis. The technique allows the development of various information flow analyses, parametrised by the attacker's observational power, which can be used to enforce "what" declassification policies.
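The lattice-based security definition can be illustrated with the smallest possible example: a two-level lattice (Low ⊑ High) and a check that information only flows upward. This is only a toy instance of the general information-lattice idea, not the paper's semantics-based analysis:

```python
# A minimal two-level security lattice and flow check: information may
# flow from source label src to sink label dst only when src ⊑ dst.
LOW, HIGH = "Low", "High"
LEQ = {(LOW, LOW), (LOW, HIGH), (HIGH, HIGH)}  # the ordering relation ⊑

def flow_allowed(src, dst):
    """True iff a flow from src-labelled data to a dst-labelled sink is secure."""
    return (src, dst) in LEQ
```

Under this ordering, `flow_allowed(LOW, HIGH)` holds (public data may reach a secret sink) while `flow_allowed(HIGH, LOW)` does not, which is the basic invariant any of the parametrised analyses described above must enforce, modulo the declassifications that the policy permits.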

Relevance: 30.00%

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace, to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level.
This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, for which it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse-of-dimensionality problem. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
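For reference, the classical Gram-Schmidt orthogonalisation that the paper's extended algorithm builds upon can be sketched in a few lines. This shows only the basic projection-and-subtraction step, not the paper's rule-base subspace decomposition:

```python
def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthogonalise a list of vectors
    (plain lists of floats), returning orthogonal, unnormalised vectors.
    Each vector has its projections onto earlier basis vectors subtracted."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:
            coeff = dot(v, u) / dot(u, u)  # projection coefficient onto u
            w = [wi - coeff * ui for wi, ui in zip(w, u)]
        basis.append(w)
    return basis

ortho = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
```

After the call, `dot(ortho[0], ortho[1])` is zero: the second vector has been replaced by its component orthogonal to the first, which is the elementary operation that, applied to rule-base matrix subspaces rather than single vectors, underlies the orthogonal subspace decomposition described above.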