917 results for Power Sensitivity Model


Relevance: 30.00%

Abstract:

This paper examines the life cycle GHG emissions from existing UK pulverized coal power plants. The life cycle of the electricity generation plant includes construction, operation and decommissioning. The operation phase is extended to upstream and downstream processes. Upstream processes include the mining and transport of coal (including methane leakage) and the production and transport of limestone and ammonia, which are necessary for flue gas clean-up. Downstream processes, on the other hand, include waste disposal and the recovery of land used for surface mining. The methodology used is material-based process analysis, which allows calculation of the total emissions for each process involved. A simple model for predicting the energy and material requirements of the power plant is developed. Preliminary calculations reveal that for a typical UK coal-fired plant, the life cycle emissions amount to 990 g CO2-e/kWh of electricity generated, which compares well with previous UK studies. The majority of these emissions result from direct fuel combustion (882 g/kWh, or 89%), with methane leakage from mining operations accounting for 60% of indirect emissions. In total, mining operations (including methane leakage) account for 67.4% of indirect emissions, while limestone and other material production and transport account for 31.5%. The methodology developed is also applied to a typical IGCC power plant. It is found that IGCC life cycle emissions are 15% less than those from PC power plants. Furthermore, upon investigating the influence of power plant parameters on life cycle emissions, it is determined that, while the effect of changing the load factor is negligible, increasing efficiency from 35% to 38% can reduce emissions by 7.6%. The current study is funded by the UK Natural Environment Research Council (NERC) and is undertaken as part of the UK Carbon Capture and Storage Consortium (UKCCSC). Future work will investigate the life cycle emissions from other power generation technologies with and without carbon capture and storage. The current paper reveals that it might be possible that, when CCS is employed, the emissions during generation decrease to a level where the emissions from upstream processes (i.e. coal production and transport) become dominant, and so the life cycle efficiency of the CCS system can be significantly reduced. The location of coal, coal composition and mining method are important in determining the overall impacts. In addition to studying the net emissions from CCS systems, future work will also investigate the feasibility and techno-economics of these systems as a means of carbon abatement.
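
The emissions arithmetic above can be checked directly. Below is a minimal sketch, not the paper's process-analysis model, that reproduces the quoted breakdown; every number comes from the abstract itself.

```python
# A minimal sketch, not the paper's process-analysis model: it simply
# reproduces the emissions breakdown quoted in the abstract.

direct = 882.0                 # g CO2-e/kWh from direct fuel combustion
total = 990.0                  # g CO2-e/kWh life cycle total
indirect = total - direct      # upstream + downstream contribution

shares = {                     # quoted shares of indirect emissions
    "mining incl. methane leakage": 0.674,
    "limestone and other materials": 0.315,
}

print(f"indirect: {indirect:.0f} g CO2-e/kWh ({indirect / total:.1%} of total)")
for name, share in shares.items():
    print(f"  {name}: {share * indirect:.1f} g CO2-e/kWh")

# Raising net efficiency from 35% to 38% scales fuel-related emissions by
# 35/38, a ~7.9% drop, consistent with the 7.6% life cycle reduction reported.
print(f"combustion scaling, 35% -> 38% efficiency: {1 - 35 / 38:.1%}")
```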

Relevance: 30.00%

Abstract:

The evaluation of life cycle greenhouse gas emissions from power generation with carbon capture and storage (CCS) is a critical factor in energy and policy analysis. The current paper examines life cycle emissions from three types of fossil-fuel-based power plants, namely supercritical pulverized coal (super-PC), natural gas combined cycle (NGCC) and integrated gasification combined cycle (IGCC), with and without CCS. Results show that, for a 90% CO2 capture efficiency, life cycle GHG emissions are reduced by 75-84% depending on the technology used. With GHG emissions of less than 170 g/kWh, IGCC technology is found to compare favorably with NGCC with CCS. Sensitivity analysis reveals that, for coal power plants, varying the CO2 capture efficiency and the coal transport distance has a more pronounced effect on life cycle GHG emissions than changing the length of the CO2 transport pipeline. Finally, it is concluded from the current study that while the global warming potential is reduced when MEA-based CO2 capture is employed, the increase in other air pollutants such as NOx and NH3 leads to higher eutrophication and acidification potentials.
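
A back-of-the-envelope sketch helps explain why a 90% capture efficiency yields only a 75-84% life cycle reduction: capture acts on stack CO2 only, while upstream emissions remain and the capture energy penalty increases fuel throughput. The energy penalty and emission split below are illustrative assumptions, not values from the paper.

```python
# A hedged sketch of why 90% stack capture gives less than 90% life cycle
# reduction. The penalty and emission split are illustrative assumptions.

def lifecycle_with_ccs(stack, upstream, capture_eff, energy_penalty):
    """Life cycle emissions (g CO2-e/kWh) after adding CCS."""
    scale = 1.0 + energy_penalty              # extra fuel per net kWh
    captured_stack = stack * scale * (1.0 - capture_eff)
    fuel_chain = upstream * scale             # upstream scales with fuel use
    return captured_stack + fuel_chain

base = 882.0 + 108.0                          # illustrative coal plant, g/kWh
with_ccs = lifecycle_with_ccs(882.0, 108.0, capture_eff=0.90,
                              energy_penalty=0.25)
print(f"life cycle reduction: {1 - with_ccs / base:.1%}")
# ~75%: at the low edge of the 75-84% range reported in the abstract.
```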

Relevance: 30.00%

Abstract:

The use of glycine to limit acrylamide formation during the heating of a potato model system was also found to alter the relative proportions of alkylpyrazines. The addition of glycine increased the quantities of several alkylpyrazines, and labeling studies using [2-C-13]glycine showed that those alkylpyrazines which increased in the presence of glycine had at least one C-13-labeled methyl substituent derived from glycine. The distribution of C-13 within the pyrazines suggested two pathways by which glycine, and other amino acids, participate in alkylpyrazine formation, and showed the relative contribution of each pathway. Alkylpyrazines that involve glycine in both formation pathways displayed the largest relative increases with glycine addition. The study provided an insight into the sensitivity of alkylpyrazine formation to the amino acid composition in a heated food and demonstrated the importance of those amino acids that are able to contribute an alkyl substituent. This may aid in estimating the impact of amino acid addition on pyrazine formation, when amino acids are added to foods for acrylamide mitigation.

Relevance: 30.00%

Abstract:

The mathematical models that describe the immersion-frying period and the post-frying cooling period of an infinite slab or an infinite cylinder were solved and tested. Results were successfully compared with those found in the literature or obtained experimentally, and were discussed in terms of the hypotheses and simplifications made. The models were used as the basis of a sensitivity analysis. Simulations showed that a decrease in slab thickness and core heat capacity resulted in faster crust development. On the other hand, an increase in oil temperature and in the boiling heat transfer coefficient between the oil and the surface of the food accelerated crust formation. The model for oil absorption during cooling was analysed using the tested post-frying cooling equation to determine the moment at which a positive pressure driving force, allowing oil suction into the pore, originated. It was found that as crust layer thickness, pore radius and ambient temperature decreased, so did the time needed for absorption to start. On the other hand, as the effective convective heat transfer coefficient between the air and the surface of the slab increased, the required cooling time decreased. In addition, the time needed to allow oil absorption during cooling was found to be extremely sensitive to pore radius, indicating the importance of accurate pore size determination in future studies.
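
To make the cooling-and-absorption argument concrete, here is a minimal sketch assuming lumped-capacitance cooling, ideal-gas pore pressure, and an assumed initial overpressure left by escaping steam. It is not the paper's model, and every parameter value is illustrative; it simply reproduces the reported trend that smaller pores begin absorbing oil sooner.

```python
import math

P_ATM = 101_325.0        # Pa, ambient pressure
P0 = 1.2 * P_ATM         # Pa, assumed pore overpressure at end of frying
SIGMA = 0.03             # N/m, assumed oil surface tension
T0 = 100.0 + 273.15      # K, assumed crust temperature when frying ends

def time_to_suction(pore_radius, t_amb_c=20.0, h=50.0, thickness=0.002,
                    rho=1000.0, cp=3000.0):
    """Time (s) until P_atm + capillary pressure first exceeds the pore
    gas pressure of a slab cooling with lumped capacitance."""
    t_amb = t_amb_c + 273.15
    tau = rho * cp * thickness / h       # cooling time constant (s)
    p_cap = 2.0 * SIGMA / pore_radius    # Laplace pressure aiding suction
    t = 0.0
    while t < 600.0:
        temp = t_amb + (T0 - t_amb) * math.exp(-t / tau)
        p_pore = P0 * temp / T0          # isochoric ideal-gas cooling
        if P_ATM + p_cap > p_pore:       # positive driving force: oil enters
            return t
        t += 0.01
    return float("inf")

# Smaller pores start absorbing oil sooner, as the abstract reports.
for r in (1e-5, 3e-5, 1e-4):
    print(f"pore radius {r:.0e} m: suction starts at ~{time_to_suction(r):.0f} s")
```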

Relevance: 30.00%

Abstract:

There is evidence to suggest that insulin sensitivity may vary in response to changes in sex hormone levels. However, the results of human studies designed to investigate changes in insulin sensitivity through the menstrual cycle have proved inconclusive. The aims of this study were to 1) evaluate the impact of menstrual cycle phase on insulin sensitivity measures and 2) determine the variability of insulin sensitivity measures within the same menstrual cycle phase. A controlled observational study of 13 healthy premenopausal women, not taking any hormone preparation and having regular menstrual cycles, was conducted. Insulin sensitivity (Si) and glucose effectiveness (Sg) were measured using an intravenous glucose tolerance test (IVGTT) with minimal model analysis. Additional surrogate measures of insulin sensitivity were calculated (homeostasis model assessment of insulin resistance [HOMA-IR], quantitative insulin sensitivity check index [QUICKI] and revised QUICKI [rQUICKI]), as well as plasma lipids. Each woman was tested in the luteal and follicular phases of her menstrual cycle, and duplicate measures were taken in one phase of the cycle. No significant differences in insulin sensitivity (measured by the IVGTT or surrogate markers) or plasma lipids were reported between the two phases of the menstrual cycle or between duplicate measures within the same phase. It was concluded that variability in measures of insulin sensitivity was similar within and between menstrual phases.
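
For reference, the surrogate indices named above have standard published definitions, sketched below with illustrative fasting values rather than study data.

```python
import math

def homa_ir(glucose_mmol_l, insulin_uU_ml):
    """HOMA-IR: fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def quicki(glucose_mg_dl, insulin_uU_ml):
    """QUICKI: 1 / (log10 fasting insulin + log10 fasting glucose in mg/dL)."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

def r_quicki(glucose_mg_dl, insulin_uU_ml, nefa_mmol_l):
    """Revised QUICKI: adds log10 of fasting NEFA (mmol/L) to the denominator."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl)
                  + math.log10(nefa_mmol_l))

# Illustrative fasting values for a healthy adult (assumed, not study data):
print(f"HOMA-IR: {homa_ir(5.0, 6.0):.2f}")         # ~1.33
print(f"QUICKI:  {quicki(90.0, 6.0):.3f}")         # ~0.366
print(f"rQUICKI: {r_quicki(90.0, 6.0, 0.5):.3f}")  # ~0.411
```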

Relevance: 30.00%

Abstract:

PURPOSE. To investigate the nature of early ocular misalignments in human infants to determine whether they can provide insight into the etiology of esotropia and, in particular, to examine the correlates of misalignments. METHODS. A remote haploscopic photorefraction system was used to measure accommodation and vergence in 146 infants between 0 and 12 months of age. Infants underwent photorefraction immediately after watching a target moving between two of five viewing distances (25, 33, 50, 100, and 200 cm). In some instances, infants were tested in two conditions: both eyes open and one eye occluded. The resultant data were screened for instances of large misalignments. Data were assessed to determine whether accommodative, retinal disparity, or other cues were associated with the occurrence of misalignments. RESULTS. The results showed that there was no correlation between accommodative behavior and misalignments. Infants were more likely to show misalignments when retinal disparity cues were removed through occlusion. They were also more likely to show misalignments immediately after the target moved from a near to a far position than after far-to-near target movement. DISCUSSION. The data suggest that the prevalence of misalignments in infants of 2 to 3 months of age is decreased by the addition of retinal disparity cues to the stimulus. In addition, target movement away from the infant increases the prevalence of misalignments. These data are compatible with the notion that misalignments are caused by poor sensitivity to targets moving away from the infant, and they support the theory that some forms of strabismus could be related to a failure in a system that is sensitive to the direction of motion.

Relevance: 30.00%

Abstract:

A Neural Mass model is coupled with a novel method to generate realistic Phase reset ERPs. The power spectra of these synthetic ERPs are compared with the spectra of real ERPs and of synthetic ERPs generated via the Additive model. Real ERP spectra show similarities with those of both synthetic Phase reset ERPs and synthetic Additive ERPs.
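
The distinction between the two generation hypotheses can be illustrated with a toy simulation (the signal parameters below are assumptions, not the paper's neural mass model): in the Additive model a fixed evoked waveform rides on random-phase background activity, while in the Phase reset model the ongoing oscillation is phase-aligned at a reset time with nothing added.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_trials, n_samp = 250, 200, 250     # Hz, trials, samples (1 s epochs)
t = np.arange(n_samp) / fs
f0, t_reset = 10.0, 0.3                  # alpha-band frequency, reset time

# Additive model: a fixed evoked waveform rides on random-phase background.
evoked = 0.5 * np.exp(-((t - 0.3) ** 2) / 0.005)
phases = rng.uniform(0.0, 2.0 * np.pi, n_trials)
background = np.sin(2.0 * np.pi * f0 * t + phases[:, None])
additive = evoked + background

# Phase reset model: nothing is added; the ongoing oscillation's phase is
# aligned across trials from the reset time onward.
aligned = np.sin(2.0 * np.pi * f0 * (t - t_reset))
reset = np.where(t >= t_reset, aligned[None, :], background)

for name, trials in (("additive", additive), ("phase reset", reset)):
    erp = trials.mean(axis=0)            # background averages out across trials
    spectrum = np.abs(np.fft.rfft(erp))
    freqs = np.fft.rfftfreq(n_samp, 1.0 / fs)
    print(f"{name}: ERP spectral peak at {freqs[spectrum.argmax()]:.0f} Hz")
# The additive ERP peaks at the low-frequency evoked component, whereas the
# phase reset ERP concentrates power at the 10 Hz oscillation itself.
```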

Relevance: 30.00%

Abstract:

The Danish Eulerian Model (DEM) is a powerful air pollution model, designed to calculate the concentrations of various dangerous species over a large geographical region (e.g. Europe). It takes into account the main physical and chemical processes between these species, the actual meteorological conditions, emissions, etc. This is a huge computational task and requires significant resources of storage and CPU time. Parallel computing is essential for the efficient practical use of the model. Some efficient parallel versions of the model were created over the past several years. A suitable parallel version of DEM using the Message Passing Interface library (MPI) was implemented on two powerful supercomputers of the EPCC in Edinburgh, available via the HPC-Europa programme for transnational access to research infrastructures in the EC: a Sun Fire E15K and an IBM HPCx cluster. Although the implementation is in principle the same for both supercomputers, a few modifications had to be made to port the code successfully to the IBM HPCx cluster. Performance analysis and parallel optimization were done next. Results from benchmarking experiments will be presented in this paper. Another set of experiments was carried out in order to investigate the sensitivity of the model to variation of some chemical rate constants in the chemical submodel. Certain modifications of the code were necessary for this task. The results obtained will be used for further sensitivity analysis studies using Monte Carlo simulation.
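
The Monte Carlo rate-constant experiment can be schematized as follows; the two-species toy mechanism below stands in for DEM's real chemical submodel (which involves many more species and reactions), and the plus-or-minus 20% perturbation range is an assumption.

```python
import random

def toy_chemistry(k1, k2, steps=1000, dt=1.0):
    """Forward-Euler integration of A --k1--> B --k2--> loss; returns final B."""
    a, b = 1.0, 0.0
    for _ in range(steps):
        # simultaneous update: both right-hand sides use the old values
        a, b = a - k1 * a * dt, b + (k1 * a - k2 * b) * dt
    return b

random.seed(42)
base_k1, base_k2 = 1e-3, 5e-4
outputs = []
for _ in range(500):
    # Perturb each rate constant by up to +/-20%, as a screening study might.
    k1 = base_k1 * random.uniform(0.8, 1.2)
    k2 = base_k2 * random.uniform(0.8, 1.2)
    outputs.append(toy_chemistry(k1, k2))

mean = sum(outputs) / len(outputs)
std = (sum((x - mean) ** 2 for x in outputs) / len(outputs)) ** 0.5
print(f"final concentration of B: mean = {mean:.4f}, std = {std:.4f}")
```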

Relevance: 30.00%

Abstract:

When a computer program requires legitimate access to confidential data, the question arises whether such a program may illegally reveal sensitive information. This paper proposes a policy model to specify what information flow is permitted in a computational system. The security definition, which is based on a general notion of information lattices, allows various representations of information to be used in the enforcement of secure information flow in deterministic or nondeterministic systems. A flexible semantics-based analysis technique is presented, which uses the input-output relational model induced by an attacker's observational power to compute the information released by the computational system. An illustrative attacker model demonstrates the use of the technique to develop a termination-sensitive analysis. The technique allows the development of various information flow analyses, parametrised by the attacker's observational power, which can be used to enforce "what" declassification policies.
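
The lattice-based view of permitted flows can be illustrated with a minimal sketch; the two-point lattice and function names below are illustrative assumptions, not the paper's formalism.

```python
from enum import IntEnum

class Level(IntEnum):
    PUBLIC = 0
    SECRET = 1

def join(a: Level, b: Level) -> Level:
    """Least upper bound of two labels in the security lattice."""
    return Level(max(a, b))

def flow_allowed(source: Level, sink: Level) -> bool:
    """Information may flow only upward in the lattice."""
    return source <= sink

# A value computed from both a public and a secret input carries the join
# of their labels, so it may flow to a SECRET sink but not a PUBLIC one.
label = join(Level.PUBLIC, Level.SECRET)
print(flow_allowed(label, Level.SECRET))  # True
print(flow_allowed(label, Level.PUBLIC))  # False: this flow would leak
```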

Relevance: 30.00%

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, whereby it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse-of-dimensionality problem. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
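
As a rough illustration of the identifiability ranking (the paper's exact criterion and weighting construction may differ), an A-optimality style score can be computed as the trace of the inverse information matrix of the weighted design; rules whose memberships barely fire over the data score poorly.

```python
import numpy as np

def a_criterion(phi, weights):
    """trace((Phi^T W Phi)^-1) for regression matrix Phi and a diagonal
    weighting matrix W of fuzzy memberships (one per training sample)."""
    info = phi.T @ (weights[:, None] * phi)
    return float(np.trace(np.linalg.inv(info)))

rng = np.random.default_rng(1)
phi = rng.normal(size=(200, 3))              # input regression matrix
rules = {
    "rule A": rng.uniform(0.5, 1.0, 200),    # fires strongly over the data
    "rule B": rng.uniform(0.0, 0.1, 200),    # barely fires anywhere
}
for name, memberships in rules.items():
    print(f"{name}: A-criterion = {a_criterion(phi, memberships):.4f}")
# The weakly firing rule has a much larger criterion value (it is poorly
# identified by the data), flagging it as a candidate to exclude from the
# initial rule-base.
```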

Relevance: 30.00%

Abstract:

An idealized equilibrium model for the undisturbed partly cloudy boundary layer (BL) is used as a framework to explore the coupling of the energy, water, and carbon cycles over land in midlatitudes, and to show the sensitivity to the clear-sky shortwave flux, the midtropospheric temperature, moisture, CO2, and subsidence. The changes in the surface fluxes, the BL equilibrium, and cloud cover are shown for a warmer, doubled-CO2 climate. Reduced stomatal conductance in a simple vegetation model amplifies the background 2 K ocean temperature rise to an (unrealistically large) 6 K increase in near-surface temperature over land, with a corresponding drop of near-surface relative humidity of about 19% and a rise of cloud base of about 70 hPa. Cloud changes depend strongly on changes in mean subsidence, but evaporative fraction (EF) decreases. EF is almost uniquely related to mixed layer (ML) depth, independent of the background forcing climate. This suggests that it might be possible to infer EF for heterogeneous landscapes from ML depth. The asymmetry of increased evaporation over the oceans and reduced transpiration over land increases in a warmer, doubled-CO2 climate.
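
The evaporative fraction diagnostic itself is standard; here is a minimal sketch with assumed flux values.

```python
def evaporative_fraction(latent: float, sensible: float) -> float:
    """EF = LE / (LE + H): the share of the turbulent surface energy flux
    going into evaporation rather than sensible heating."""
    return latent / (latent + sensible)

# Illustrative midlatitude land fluxes in W m^-2 (assumed values):
print(evaporative_fraction(300.0, 100.0))   # 0.75: moist surface, shallow ML
print(evaporative_fraction(100.0, 300.0))   # 0.25: dry surface, deep ML
```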

Relevance: 30.00%

Abstract:

Several previous studies have attempted to assess the sublimation depth-scales of ice particles falling from cloud into clear air. Upon examining the sublimation depth-scales in the Met Office Unified Model (MetUM), it was found that the MetUM has evaporation depth-scales 2-3 times larger than those derived from radar observations. Similar results can be seen in the European Centre for Medium-Range Weather Forecasts (ECMWF), Regional Atmospheric Climate Model (RACMO) and Météo-France models. In this study, we use radar simulation (converting model variables into radar observations) and one-dimensional explicit microphysics numerical modelling to test and diagnose the cause of the deep sublimation depth-scales in the forecast model. The MetUM data and parametrization scheme are used to predict terminal velocity, which can be compared with the observed Doppler velocity. This can then be used to test hypotheses as to why the sublimation depth-scale is too large within the MetUM: turbulence could lead to dry-air entrainment and higher evaporation rates; the particle density may be wrong; the particle capacitance may be too high, leading to incorrect evaporation rates; or the humidity within the sublimating layer may be incorrectly represented. We show that the most likely cause of deep sublimation zones is an incorrect representation of model humidity in the layer. This is tested further using a one-dimensional explicit microphysics model, which tests the sensitivity of ice sublimation to key atmospheric variables and is capable of incorporating sonde and radar measurements to simulate real cases. Results suggest that the MetUM grid resolution at ice cloud altitudes is not sufficient to maintain the sharp drop in humidity that is observed in the sublimation zone.
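
For context, the capacitance hypothesis refers to the standard electrostatic-analogy growth/sublimation equation; the sketch below uses textbook constants and an assumed atmospheric state, not MetUM settings.

```python
import math

LS = 2.83e6      # J/kg, latent heat of sublimation
RV = 461.5       # J/(kg K), gas constant for water vapor
K = 0.024        # W/(m K), thermal conductivity of air
DV = 2.2e-5      # m^2/s, diffusivity of water vapor in air

def e_si(temp_k):
    """Saturation vapor pressure over ice (Pa), Magnus-type approximation."""
    t_c = temp_k - 273.15
    return 611.21 * math.exp(22.587 * t_c / (t_c + 273.86))

def sublimation_rate(capacitance_m, s_ice, temp_k):
    """dm/dt = 4 pi C (S_i - 1) / (F_k + F_d) per particle (kg/s);
    negative when the air is subsaturated with respect to ice (S_i < 1)."""
    f_k = (LS / (RV * temp_k) - 1.0) * LS / (K * temp_k)
    f_d = RV * temp_k / (e_si(temp_k) * DV)
    return 4.0 * math.pi * capacitance_m * (s_ice - 1.0) / (f_k + f_d)

# Consistent with the abstract's reasoning: a higher assumed capacitance, or
# more humid air (S_i closer to 1), slows mass loss and deepens the layer.
for s in (0.6, 0.8, 0.95):
    print(f"S_i = {s}: dm/dt = {sublimation_rate(5e-4, s, 258.15):.2e} kg/s")
```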

Relevance: 30.00%

Abstract:

This paper derives some exact power properties of tests for spatial autocorrelation in the context of a linear regression model. In particular, we characterize the circumstances in which the power vanishes as the autocorrelation increases, thus extending the work of Krämer (2005). More generally, the analysis in the paper sheds new light on how the power of tests for spatial autocorrelation is affected by the matrix of regressors and by the spatial structure. We mainly focus on the problem of residual spatial autocorrelation, in which case it is appropriate to restrict attention to the class of invariant tests, but we also consider the case when the autocorrelation is due to the presence of a spatially lagged dependent variable among the regressors. A numerical study aimed at assessing the practical relevance of the theoretical results is included.
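
The power behaviour can be explored numerically; the following Monte Carlo sketch (a circulant weight matrix and SAR error process, chosen for simplicity rather than taken from the paper) traces the rejection rate of a Moran's I style residual test as the autocorrelation grows. In this simple design the power rises with rho; the paper characterizes designs where it instead vanishes.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50
# Row-standardized circular "nearest neighbor ahead/behind" weight matrix.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

X = np.column_stack([np.ones(n), rng.normal(size=n)])
M = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T   # residual-maker matrix

def morans_i(e):
    return (n / W.sum()) * (e @ W @ e) / (e @ e)

def draw_stat(rho):
    """Moran's I of OLS residuals under SAR(1) errors u = (I - rho W)^-1 eps."""
    u = np.linalg.solve(np.eye(n) - rho * W, rng.normal(size=n))
    return morans_i(M @ u)

null = np.sort([draw_stat(0.0) for _ in range(2000)])
crit = null[int(0.95 * len(null))]          # simulated 5% critical value

for rho in (0.0, 0.3, 0.6, 0.9):
    power = np.mean([draw_stat(rho) > crit for _ in range(1000)])
    print(f"rho = {rho:.1f}: rejection rate = {power:.2f}")
```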

Relevance: 30.00%

Abstract:

A solution has been found to the long-standing problem of experimental modelling of the interfacial instability in aluminium reduction cells. The idea is to replace the electrolyte overlaying the molten aluminium with a mesh of thin rods supplying current directly down into the liquid metal layer. This eliminates electrolysis altogether, and with it all the associated problems, such as high temperature, chemical aggressiveness of the media, products of electrolysis, the necessity for electrolyte renewal, high power demands, etc. The result is a room-temperature, versatile laboratory model which simulates the Sele-type rolling-pad interfacial instability. Our new, safe laboratory model enables detailed experimental investigations to test the existing theoretical models for the first time.

Relevance: 30.00%

Abstract:

We present a kinetic double layer model coupling aerosol surface and bulk chemistry (K2-SUB) based on the PRA framework of gas-particle interactions (Pöschl-Rudich-Ammann, 2007). K2-SUB is applied to a popular model system of atmospheric heterogeneous chemistry: the interaction of ozone with oleic acid. We show that our modelling approach allows deconvoluting surface and bulk processes, which has been a controversial topic and remains an important challenge for the understanding and description of atmospheric aerosol transformation. In particular, we demonstrate how a detailed treatment of adsorption and reaction at the surface can be coupled to a description of bulk reaction and transport that is consistent with traditional resistor model formulations. From literature data we have derived a consistent set of kinetic parameters that characterise mass transport and chemical reaction of ozone at the surface and in the bulk of oleic acid droplets. Due to the wide range of rate coefficients reported from different experimental studies, the exact proportions between surface and bulk reaction rates remain uncertain. Nevertheless, the model results suggest an important role of chemical reaction in the bulk and an approximate upper limit of ~10^-11 cm^2 s^-1 for the surface reaction rate coefficient. Sensitivity studies show that the surface accommodation coefficient of the gas-phase reactant has a strong non-linear influence on both surface and bulk chemical reactions. We suggest that K2-SUB may be used to design, interpret and analyse future experiments for better discrimination between surface and bulk processes in the oleic acid-ozone system as well as in other heterogeneous reaction systems of atmospheric relevance.
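
For orientation, the traditional resistor-model bulk-reaction term that K2-SUB is shown to be consistent with can be evaluated directly; the parameter values below are rough literature-style assumptions for ozone in oleic acid, not K2-SUB inputs.

```python
import math

R_GAS = 8.314        # J/(mol K)

def gamma_bulk(henry, d_bulk, k_bulk, radius, temp, mol_mass):
    """Bulk-reaction uptake coefficient in the resistor model:
    gamma_b = (4 H R T / c_bar) * sqrt(D_b k_b) * (coth(r/l) - l/r),
    with reacto-diffusive length l = sqrt(D_b / k_b)."""
    c_bar = math.sqrt(8.0 * R_GAS * temp / (math.pi * mol_mass))  # m/s
    l = math.sqrt(d_bulk / k_bulk)
    x = radius / l
    geom = 1.0 / math.tanh(x) - 1.0 / x      # spherical-droplet correction
    return (4.0 * henry * R_GAS * temp / c_bar) * math.sqrt(d_bulk * k_bulk) * geom

# Assumed values: Henry solubility (mol m^-3 Pa^-1), bulk diffusivity (m^2/s),
# pseudo-first-order loss rate (1/s), droplet radius (m), O3 molar mass (kg/mol).
g = gamma_bulk(henry=4e-4, d_bulk=1e-10, k_bulk=1e4,
               radius=1e-6, temp=298.0, mol_mass=0.048)
print(f"bulk uptake coefficient: {g:.2e}")
# Shrinking the droplet toward the reacto-diffusive length l suppresses the
# bulk contribution, which is one way surface and bulk chemistry can be
# discriminated experimentally.
```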