882 results for "energy requirement model"
Abstract:
This paper presents a multicriteria decision-making model for lifespan energy efficiency assessment of intelligent buildings (IBs). The decision-making model, called IBAssessor, is developed using an analytic network process (ANP) method and a set of lifespan performance indicators for IBs selected by a new quantitative approach called the energy-time consumption index (ETI). In order to improve the quality of decision-making, the authors make use of previous research achievements including a lifespan sustainable business model, the Asian IB Index, and a number of relevant publications. Practitioners can use the IBAssessor ANP model at different stages of an IB lifespan for either engineering- or business-oriented assessments. Finally, this paper presents an experimental case study to demonstrate how to use the IBAssessor ANP model to solve real-world design tasks.
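As a hedged illustration of the ANP machinery such a model builds on, the sketch below forms a column-stochastic supermatrix over three hypothetical lifespan indicators and raises it to a high power until its columns converge to the global priority vector. The indicator names and influence weights are invented for illustration, not taken from the IBAssessor paper.

```python
# Illustrative ANP limit-supermatrix step; indicator names and influence
# weights are assumptions, not values from the IBAssessor paper.
import numpy as np

# Column-stochastic supermatrix: entry [i, j] is the influence of
# element i on element j (each column sums to 1).
W = np.array([
    [0.0, 0.6, 0.3],   # energy efficiency
    [0.5, 0.0, 0.7],   # occupant comfort
    [0.5, 0.4, 0.0],   # lifecycle cost
])

# Powers of the supermatrix propagate indirect influence; the columns
# converge to the global priority vector.
limit = np.linalg.matrix_power(W, 64)
priorities = limit[:, 0]
print(priorities)  # relative weights of the three indicators
```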
Abstract:
While buildings provide shelter for human beings, previous models for assessing the intelligence of a building seldom consider the responses of occupants. In addition, the assessment is usually conducted by an authority organization on a yearly basis and thus can seldom provide timely assistance for facility managers seeking to improve their daily facility maintenance performance. By extending the law of entropy into the area of intelligent buildings, this paper demonstrates that both energy consumption and the responses of occupants are important when partially assessing the intelligence of a building. The study then develops a sensor-based real-time building intelligence (BI) assessment model. An experimental case study demonstrates how the model can be implemented. The developed model addresses the two shortcomings of previous BI assessment models.
Abstract:
This article presents a prototype model based on a wireless sensor actuator network (WSAN) aimed at optimizing both the energy consumption of environmental systems and the well-being of occupants in buildings. The model is a system consisting of the following components: a wireless sensor network, `sense diaries', environmental systems such as heating, ventilation and air-conditioning systems, and a central computer. A multi-agent system (MAS) is used to derive and act on the preferences of the occupants. Each occupant is represented by a personal agent in the MAS. The sense diary is a new device designed to elicit feedback from occupants about their satisfaction with the environment. The roles of the components are as follows: the WSAN collects data about physical parameters such as temperature and humidity from an indoor environment; the central computer processes the collected data; the sense diaries, in conjunction with the agent system, leverage trade-offs between energy consumption and well-being; and the environmental systems control the indoor environment.
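A minimal sketch of how a central computer might act on agent-reported preferences when trading comfort against energy; the quadratic discomfort terms, the linear heating-cost proxy, and every number here are assumptions rather than the paper's MAS design.

```python
# Hypothetical comfort/energy trade-off over a heating setpoint; not the
# paper's algorithm, just an illustration of the idea.
import numpy as np

preferred = np.array([20.5, 22.0, 23.5])   # each agent's preferred temp (C)
tolerance = np.array([1.5, 1.0, 2.0])      # how sharply discomfort grows
outdoor_temp = 5.0                         # ambient temperature (C)
energy_weight = 0.05                       # assumed trade-off coefficient

candidates = np.arange(18.0, 25.0, 0.1)    # candidate setpoints (C)
discomfort = ((candidates[:, None] - preferred) / tolerance) ** 2
heating_cost = energy_weight * (candidates - outdoor_temp)
total = discomfort.sum(axis=1) + heating_cost
print(f"chosen setpoint: {candidates[np.argmin(total)]:.1f} C")
```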
Abstract:
Controlled human intervention trials are required to confirm the hypothesis that dietary fat quality may influence insulin action. The aim was to develop a food-exchange model, suitable for use in free-living volunteers, to investigate the effects of four experimental diets distinct in fat quantity and quality: high SFA (HSFA), high MUFA (HMUFA), and two low-fat (LF) diets, one supplemented with 1.24 g EPA and DHA/d (LFn-3). A theoretical food-exchange model was developed. The average quantity of exchangeable fat was calculated as the sum of fat provided by added fats (spreads and oils), milk, cheese, biscuits, cakes, buns and pastries, using data from the National Diet and Nutrition Survey of UK adults. Most of the exchangeable fat was replaced by specifically designed study foods. Also critical to the model was the use of carbohydrate exchanges to ensure the diets were isoenergetic. Volunteers from eight centres across Europe completed the dietary intervention. Results indicated that compositional targets were largely achieved, with significant differences in fat quantity between the high-fat diets (39.9 (SEM 0.6) and 38.9 (SEM 0.51) percentage energy (%E) from fat for the HSFA and HMUFA diets respectively) and the low-fat diets (29.6 (SEM 0.6) and 29.1 (SEM 0.5) %E from fat for the LF and LFn-3 diets respectively), and in fat quality (17.5 (SEM 0.3) and 10.4 (SEM 0.2) %E from SFA and 12.7 (SEM 0.3) and 18.7 (SEM 0.4) %E from MUFA for the HSFA and HMUFA diets respectively). In conclusion, a robust, flexible food-exchange model was developed and implemented successfully in the LIPGENE dietary intervention trial.
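A back-of-envelope sketch of the bookkeeping behind such a food-exchange model, assuming standard Atwater-style energy factors; the gram values and the 2000 kcal/d intake are invented placeholders, not NDNS-derived data.

```python
# Exchangeable-fat bookkeeping; all intake figures are assumptions.
ATWATER = {"fat": 9.0, "carbohydrate": 3.75}          # kcal per gram

# Daily fat (g) from the exchangeable categories named in the abstract.
exchangeable_fat_g = {
    "added fats (spreads, oils)": 30.0,
    "milk": 8.0,
    "cheese": 10.0,
    "biscuits, cakes, buns, pastries": 14.0,
}
total_fat_g = sum(exchangeable_fat_g.values())

energy_kcal = 2000.0                                   # assumed daily intake
pct_energy = total_fat_g * ATWATER["fat"] / energy_kcal * 100
print(f"exchangeable fat: {total_fat_g:.0f} g/day ({pct_energy:.1f} %E)")

# Swapping study foods changes this fat; carbohydrate exchanges of equal
# energy keep the diets isoenergetic.
```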
Abstract:
Our objective in this study was to develop and implement an effective intervention strategy to manipulate the amount and composition of dietary fat and carbohydrate (CHO) in free-living individuals in the RISCK study. The study was a randomized, controlled dietary intervention study conducted in 720 participants identified as at higher risk for or with metabolic syndrome. All followed a 4-wk run-in reference diet [high saturated fatty acids (SF)/high glycemic index (GI)]. Volunteers were randomized to continue this diet for a further 24 wk or to 1 of 4 isoenergetic prescriptions [high monounsaturated fatty acids (MUFA)/high GI; high MUFA/low GI; low fat (LF)/high GI; and LF/low GI]. We developed a food exchange model to implement each diet. Dietary records and plasma phospholipid fatty acids were used to assess the effectiveness of the intervention strategy. Reported fat intake from the LF diets was significantly reduced to 28% of energy (%E), compared with 38%E from the HM and reference diets. SF intake in the HM and LF diets was successfully decreased to approximately 10%E, compared with 17%E in the reference diet (P = 0.001). Dietary MUFA in the HM diets was approximately 17%E, significantly higher than in the reference (12%E) and LF diets (10%E) (P = 0.001). Changes in plasma phospholipid fatty acids provided further evidence for the successful manipulation of fat intake. The GI of the HGI and LGI arms differed by approximately 9 points (P = 0.001). The food exchange model provided an effective dietary strategy for the design and implementation across multiple sites of 5 experimental diets with specific targets for the proportion of fat and CHO. J. Nutr. 139: 1534-1540, 2009.
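The isoenergetic substitution at the core of both this model and the LIPGENE one above can be sketched in a few lines. The 2000 kcal/d intake is an assumed figure; only the 38 %E to 28 %E fat shift echoes the reported intakes.

```python
# Isoenergetic fat-for-carbohydrate exchange; illustrative numbers only,
# not RISCK prescriptions.
FAT_KCAL_PER_G = 9.0
CHO_KCAL_PER_G = 3.75  # UK convention for available carbohydrate

def isoenergetic_swap(fat_removed_g: float) -> float:
    """Grams of carbohydrate needed to replace the removed fat's energy."""
    return fat_removed_g * FAT_KCAL_PER_G / CHO_KCAL_PER_G

# Moving from 38 %E to 28 %E fat on 2000 kcal/day removes about 22 g fat.
energy = 2000.0
fat_cut_g = (0.38 - 0.28) * energy / FAT_KCAL_PER_G
print(f"fat cut: {fat_cut_g:.0f} g -> add {isoenergetic_swap(fat_cut_g):.0f} g CHO")
```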
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace, to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high dimensional regression problems, whereby it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse-of-dimensionality problem. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
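A simplified sketch of the rule-identifiability scoring described above, assuming Gaussian membership functions and synthetic data: each candidate rule's weighted regressor A is scored by the A-optimality measure trace((A^T A)^-1), with smaller values meaning better-identified rule parameters. This condenses, rather than reproduces, the paper's extended Gram-Schmidt algorithm.

```python
# Rank candidate fuzzy rules by an A-optimality identifiability score;
# membership shape, rule centres, and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                      # input regression matrix

def gaussian_membership(x, centre, width=1.0):
    return np.exp(-np.sum((x - centre) ** 2, axis=1) / (2 * width**2))

centres = [np.array([-1.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 2.5])]

scores = []
for c in centres:
    w = gaussian_membership(X, c)                  # fuzzy weights over data
    A = X * w[:, None]                             # rule's weighted regressor
    # A-optimality: a small trace of the inverse information matrix means
    # the rule's parameters are well identified by the data.
    scores.append(np.trace(np.linalg.inv(A.T @ A)))

ranking = np.argsort(scores)                       # best-identified rules first
print(ranking, [f"{s:.3g}" for s in scores])
```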
Abstract:
The effects of meson fluctuations are studied in a nonlocal generalization of the Nambu–Jona-Lasinio model, by including terms of next-to-leading order (NLO) in 1/Nc. In the model with only scalar and pseudoscalar interactions NLO contributions to the quark condensate are found to be very small. This is a result of cancellation between virtual mesons and Fock terms, which occurs for the parameter sets of most interest. In the quark self-energy, similar cancellations arise in the tadpole diagrams, although not in other NLO pieces which contribute at the 25% level. The effects on pion properties are also found to be small. NLO contributions from real pi-pi intermediate states increase the sigma meson mass by 30%. In an extended model with vector and axial interactions, there are indications that NLO effects could be larger.
Abstract:
The low-energy electron diffraction (LEED) pattern of the step-kinked Pt{531} surface at 200 K shows energy-dependent cancellation of diffraction spots over unusually large energy ranges, up to 100 eV. This cannot be reproduced theoretically when a flat surface geometry is assumed. A relatively simple model of roughening, however, involving 0.25 ML of vacancies and adatoms leads to very good agreement with the experiment. The cancellation of intensities within a very narrow range of adatom or vacancy coverages is caused by the interference of electrons emerging from different heights but similar local environments. This is a rare example where the energy dependence of integrated LEED spot intensities is dramatically affected by the long-range arrangement of atoms.
Abstract:
The adsorption of NO on Ir{100} has been studied as a function of NO coverage and temperature using temperature programmed reflection absorption infrared spectroscopy (TP-RAIRS), low energy electron diffraction (LEED) and temperature programmed desorption (TPD). After saturating the clean (1 x 5)-reconstructed surface with NO at 95 K, two N2 desorption peaks are observed upon heating. The first N2 peak at 346 K results from the decomposition of bridge-bonded NO, and the second at 475 K from the decomposition of atop-bonded NO molecules. NO decomposition is proposed to be the rate limiting step for both N2 desorption states. For high NO coverages on the (1 x 5) surface, the narrow width of the first N2 desorption peak is indicative of an autocatalytic process for which the parallel formation of N2O appears to be the crucial step. When NO is adsorbed on the metastable unreconstructed (1 x 1) phase of clean Ir{100}, N2 desorption starts at lower temperatures, indicating that this surface modification is more reactive. When a high coverage of oxygen, near 0.5 ML, is pre-adsorbed on the surface, the decomposition of NO is inhibited and mainly desorption of intact NO is observed.
Abstract:
An idealized equilibrium model for the undisturbed partly cloudy boundary layer (BL) is used as a framework to explore the coupling of the energy, water, and carbon cycles over land in midlatitudes and to show the sensitivity to the clear-sky shortwave flux, the midtropospheric temperature, moisture, CO2, and subsidence. The changes in the surface fluxes, the BL equilibrium, and cloud cover are shown for a warmer, doubled-CO2 climate. Reduced stomatal conductance in a simple vegetation model amplifies the background 2 K ocean temperature rise to an (unrealistically large) 6 K increase in near-surface temperature over land, with a corresponding drop of near-surface relative humidity of about 19% and a rise of cloud base of about 70 hPa. Cloud changes depend strongly on changes of mean subsidence, but the evaporative fraction (EF) decreases. EF is almost uniquely related to mixed layer (ML) depth, independent of the background forcing climate. This suggests that it might be possible to infer EF for heterogeneous landscapes from ML depth. The asymmetry of increased evaporation over the oceans and reduced transpiration over land increases in a warmer, doubled-CO2 climate.
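For reference, the evaporative fraction (EF) discussed above is simply the latent share of the turbulent surface fluxes; the flux values below are placeholders, not model output.

```python
# EF = latent heat flux / (latent + sensible); placeholder fluxes.
def evaporative_fraction(latent_w_m2: float, sensible_w_m2: float) -> float:
    return latent_w_m2 / (latent_w_m2 + sensible_w_m2)

print(evaporative_fraction(latent_w_m2=120.0, sensible_w_m2=80.0))  # 0.6
```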
Abstract:
A generic model of Exergy Assessment is proposed for the Environmental Impact of the Building Lifecycle, with a special focus on the natural environment. Three environmental impacts (energy consumption, resource consumption and pollutant discharge) are analyzed with reference to energy-embodied exergy, resource chemical exergy and abatement exergy, respectively. The generic model thus formulated contains two sub-models, one addressing building energy utilization and the other building materials use. Combined with theories by ecologists such as Odum, the paper evaluates a building's environmental sustainability through its exergy footprint and environmental impacts. A case study from Chongqing, China illustrates the application of this method. From the case study, it was found that energy consumption constitutes 70–80% of the total environmental impact during a 50-year building lifecycle; the operation phase accounts for 80% of the total environmental impact, the building material production phase for 15%, and the other phases for 5%.
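An arithmetic sketch of the lifecycle bookkeeping the model implies, treating the footprint as the sum of the three exergy streams named above. All magnitudes are placeholders; only the phase shares echo the Chongqing case study.

```python
# Exergy footprint as the sum of three streams; placeholder magnitudes.
def exergy_footprint(embodied_mj: float, chemical_mj: float,
                     abatement_mj: float) -> float:
    """Total footprint (MJ): energy-embodied + chemical + abatement exergy."""
    return embodied_mj + chemical_mj + abatement_mj

footprint = exergy_footprint(embodied_mj=7.5e6, chemical_mj=1.5e6,
                             abatement_mj=1.0e6)

# Phase split of total impact reported in the case study.
for phase, share in {"operation": 0.80, "material production": 0.15,
                     "other phases": 0.05}.items():
    print(f"{phase}: {share * footprint:.2e} MJ")
```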
Abstract:
In molecular mechanics simulations of biological systems, the solvation water is typically represented by a default water model which is an integral part of the force field. Indeed, protein nonbonding parameters are chosen in order to obtain a balance between water-water and protein-water interactions and hence a reliable description of protein solvation. However, less attention has been paid to the question of whether the water model provides a reliable description of the water properties under the chosen simulation conditions, for which more accurate water models often exist. Here we consider the case of the CHARMM protein force field, which was parametrized for use with a modified TIP3P model. Using quantum mechanical and molecular mechanical calculations, we investigate whether the CHARMM force field can be used with other water models: TIP4P and TIP5P. Solvation properties of N-methylacetamide (NMA), other small solute molecules, and a small protein are examined. The results indicate differences in binding energies and minimum energy geometries, especially for TIP5P, but the overall description of solvation is found to be similar for all models tested. The results provide an indication that molecular mechanics simulations with the CHARMM force field can be performed with water models other than TIP3P, thus enabling an improved description of the solvent water properties.
Abstract:
A large number of urban surface energy balance models now exist with different assumptions about the important features of the surface and exchange processes that need to be incorporated. To date, no comparison of these models has been conducted; in contrast, models for natural surfaces have been compared extensively as part of the Project for Intercomparison of Land-surface Parameterization Schemes. Here, the methods and first results from an extensive international comparison of 33 models are presented. The aim of the comparison overall is to understand the complexity required to model energy and water exchanges in urban areas. The degree of complexity included in the models is outlined and impacts on model performance are discussed. During the comparison there have been significant developments in the models with resulting improvements in performance (root-mean-square error falling by up to two-thirds). Evaluation is based on a dataset containing net all-wave radiation, sensible heat, and latent heat flux observations for an industrial area in Vancouver, British Columbia, Canada. The aim of the comparison is twofold: to identify those modeling approaches that minimize the errors in the simulated fluxes of the urban energy balance and to determine the degree of model complexity required for accurate simulations. There is evidence that some classes of models perform better for individual fluxes but no model performs best or worst for all fluxes. In general, the simpler models perform as well as the more complex models based on all statistical measures. Generally the schemes have best overall capability to model net all-wave radiation and least capability to model latent heat flux.
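The headline statistic in the comparison is the root-mean-square error between each model's simulated fluxes and the observations; a minimal version with synthetic numbers in place of the real Vancouver flux series:

```python
# RMSE between simulated and observed fluxes; values are synthetic.
import numpy as np

def rmse(simulated: np.ndarray, observed: np.ndarray) -> float:
    return float(np.sqrt(np.mean((simulated - observed) ** 2)))

observed = np.array([350.0, 120.0, 60.0])    # e.g. net radiation, sensible,
simulated = np.array([330.0, 150.0, 40.0])   # and latent heat flux (W m-2)
print(f"RMSE = {rmse(simulated, observed):.1f} W m-2")
```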
Abstract:
Atmosphere–ocean general circulation models (AOGCMs) predict a weakening of the Atlantic meridional overturning circulation (AMOC) in response to anthropogenic forcing of climate, but there is a large model uncertainty in the magnitude of the predicted change. The weakening of the AMOC is generally understood to be the result of increased buoyancy input to the north Atlantic in a warmer climate, leading to reduced convection and deep water formation. Consistent with this idea, model analyses have shown empirical relationships between the AMOC and the meridional density gradient, but this link is not direct because the large-scale ocean circulation is essentially geostrophic, making currents and pressure gradients orthogonal. Analysis of the budget of kinetic energy (KE) instead of momentum has the advantage of excluding the dominant geostrophic balance. Diagnosis of the KE balance of the HadCM3 AOGCM and its low-resolution version FAMOUS shows that KE is supplied to the ocean by the wind and dissipated by viscous forces in the global mean of the steady-state control climate, and the circulation does work against the pressure-gradient force, mainly in the Southern Ocean. In the Atlantic Ocean, however, the pressure-gradient force does work on the circulation, especially in the high-latitude regions of deep water formation. During CO2-forced climate change, we demonstrate a very good temporal correlation between the AMOC strength and the rate of KE generation by the pressure-gradient force in 50–70°N of the Atlantic Ocean in each of nine contemporary AOGCMs, supporting a buoyancy-driven interpretation of AMOC changes. To account for this, we describe a conceptual model, which offers an explanation of why AOGCMs with stronger overturning in the control climate tend to have a larger weakening under CO2 increase.
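A sketch of the diagnostic at the heart of this analysis: the rate at which the pressure-gradient force does work on the flow, -u . grad(p), evaluated on a grid. The fields and grid spacing are synthetic stand-ins for AOGCM output.

```python
# KE generation by the pressure-gradient force on a 2-D grid;
# all fields below are synthetic stand-ins for model output.
import numpy as np

rng = np.random.default_rng(1)
nx = ny = 32
dx = dy = 1.0e5                          # grid spacing (m)
u = rng.normal(0, 0.1, (ny, nx))         # zonal velocity (m/s)
v = rng.normal(0, 0.1, (ny, nx))         # meridional velocity (m/s)
p = rng.normal(0, 100.0, (ny, nx))       # pressure anomaly (Pa)

dpdy, dpdx = np.gradient(p, dy, dx)
ke_generation = -(u * dpdx + v * dpdy)   # work rate per unit volume (W m-3)
print(f"area-mean KE generation: {ke_generation.mean():.3e} W m-3")
```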
Abstract:
The hypothesis of a low dimensional martian climate attractor is investigated by the application of the proper orthogonal decomposition (POD) to a simulation of martian atmospheric circulation using the UK Mars general circulation model (UK-MGCM). In this article we focus on a time series of the interval between autumn and winter in the northern hemisphere, when baroclinic activity is intense. The POD is a statistical technique that allows the attribution of total energy (TE) to particular structures embedded in the UK-MGCM time-evolving circulation. These structures are called empirical orthogonal functions (EOFs). Ordering the EOFs according to their associated energy content, we were able to determine the number necessary to account for a chosen amount of atmospheric TE. We show that for Mars a large fraction of TE is explained by just a few EOFs (with 90% TE in 23 EOFs), which apparently supports the initial hypothesis. We also show that the resulting EOFs represent classical types of atmospheric motion, such as thermal tides and transient waves. Thus, POD is shown to be an efficient method for the identification of different classes of atmospheric modes. It also provides insight into the non-linear interaction of these modes.
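A minimal POD sketch in the same spirit: given a snapshot matrix, the singular-value spectrum gives each EOF's share of total energy, and one counts how many EOFs reach 90% TE. Synthetic low-rank data stands in for UK-MGCM output.

```python
# POD via SVD: energy fraction per EOF and the count needed for 90% TE.
import numpy as np

rng = np.random.default_rng(2)
# Synthetic rank-40 snapshot matrix (time x space) in place of GCM output.
snapshots = rng.normal(size=(500, 40)) @ rng.normal(size=(40, 2000))

anomalies = snapshots - snapshots.mean(axis=0)      # remove the time mean
_, s, _ = np.linalg.svd(anomalies, full_matrices=False)
energy_fraction = s**2 / np.sum(s**2)               # TE share per EOF
cumulative = np.cumsum(energy_fraction)
n_eofs = int(np.searchsorted(cumulative, 0.90)) + 1
print(f"{n_eofs} EOFs capture 90% of total energy")
```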