862 results for Exascale, Supercomputer, OFET, energy efficiency, data locality, HPC
Abstract:
The LINK Integrated Farming Systems (LINK-IFS) Project (1992-1997) was set up to compare conventional and integrated arable farming systems (IAFS), concentrating on practical feasibility and economic viability, but also taking into account the level of inputs used and environmental impact. As part of this, an examination of energy use within the two systems was also undertaken. This paper presents the results from that analysis. The data used are from the six sites within the LINK-IFS Project, spread through the arable production areas of England, and from the one site in Scotland, covering the 5 years of the project. The comparison of the energy used is based on the equipment and inputs used to produce 1 kg of each crop within the conventional and integrated rotations, and thereby the overall energy used for each system. The results suggest that, in terms of total energy used, the integrated system appears to be the most efficient. However, in terms of energy efficiency, energy use per kilogram of output, the results are less conclusive. (C) 2003 Elsevier Science B.V. All rights reserved.
Abstract:
Subsidised energy prices in pre-transition Hungary had led to excessive energy intensity in the agricultural sector. Transition has resulted in steep input price increases. In this study, Allen and Morishima elasticities of substitution are estimated to study the effects of these price changes on energy use, chemical input use, capital formation and employment. Panel data methods, Generalised Method of Moments (GMM) and instrument exogeneity tests are used to specify and estimate technology and substitution elasticities. Results indicate that indirect price policy may be effective in controlling energy consumption. The sustained increases in energy and chemical input prices have worked together to restrict energy and chemical input use, and the substitutability between energy, capital and labour has prevented the capital shrinkage and agricultural unemployment situations from being worse. The Hungarian push towards lower energy intensity may be best pursued through sustained energy price increases rather than capital subsidies. (C) 2003 Elsevier B.V. All rights reserved.
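The Allen and Morishima elasticities referred to above are standard transformations of input price elasticities. The minimal sketch below (Python, not the paper's estimation code) shows how they can be computed from a matrix of price elasticities and cost shares; every number in it is invented for illustration and is not taken from the Hungarian data.

```python
# Minimal sketch: Allen and Morishima elasticities of substitution from an
# assumed matrix of input price elasticities and assumed cost shares.
# The study itself estimates these from a panel-data/GMM specification.

import numpy as np

inputs = ["energy", "chemicals", "capital", "labour"]
shares = np.array([0.15, 0.25, 0.35, 0.25])            # cost shares (assumed)
# eps[i, j] = d ln x_i / d ln p_j (assumed; each row sums to ~0 by homogeneity)
eps = np.array([[-0.60,  0.20,  0.25,  0.15],
                [ 0.12, -0.45,  0.20,  0.13],
                [ 0.10,  0.14, -0.40,  0.16],
                [ 0.09,  0.13,  0.18, -0.40]])

allen = eps / shares                 # sigma^A_ij = eps_ij / s_j
morishima = eps - np.diag(eps)       # M_ij = eps_ij - eps_jj

print("Allen elasticity, energy-capital:", allen[0, 2])
print("Morishima elasticity, energy-capital:", morishima[0, 2])
```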
Abstract:
Periplasmic chaperone/usher machineries are used for assembly of filamentous adhesion organelles of Gram-negative pathogens in a process that has been suggested to be driven by folding energy. Structures of mutant chaperone-subunit complexes revealed a final folding transition (condensation of the subunit hydrophobic core) on the release of organelle subunit from the chaperone-subunit pre-assembly complex and incorporation into the final fibre structure. However, in view of the large interface between chaperone and subunit in the pre-assembly complex and the reported stability of this complex, it is difficult to understand how final folding could release sufficient energy to drive assembly. In the present paper, we show the X-ray structure for a native chaperone-fibre complex that, together with thermodynamic data, shows that the final folding step is indeed an essential component of the assembly process. We show that completion of the hydrophobic core and incorporation into the fibre results in an exceptionally stable module, whereas the chaperone-subunit pre-assembly complex is greatly destabilized by the high-energy conformation of the bound subunit. This difference in stabilities creates a free energy potential that drives fibre formation.
Abstract:
Most gram-negative pathogens express fibrous adhesive virulence organelles that mediate targeting to the sites of infection. The F1 capsular antigen from the plague pathogen Yersinia pestis consists of linear fibers of a single subunit (Caf1) and serves as a prototype for nonpilus organelles assembled via the chaperone/usher pathway. Genetic data together with high-resolution X-ray structures corresponding to snapshots of the assembly process reveal the structural basis of fiber formation. Comparison of the chaperone-bound Caf1 subunit with the subunit in the fiber reveals a novel type of conformational change involving the entire hydrophobic core of the protein. The observed conformational change suggests that the chaperone traps a high-energy folding intermediate of Caf1. A model is proposed in which release of the subunit allows folding to be completed, driving fiber formation.
Abstract:
Experimental data for the title reaction were modeled using master equation (ME)/RRKM methods based on the MultiWell suite of programs. The starting point for the exercise was the empirical fitting provided by the NASA (Sander, S. P.; Finlayson-Pitts, B. J.; Friedl, R. R.; Golden, D. M.; Huie, R. E.; Kolb, C. E.; Kurylo, M. J.; Molina, M. J.; Moortgat, G. K.; Orkin, V. L.; Ravishankara, A. R. Chemical Kinetics and Photochemical Data for Use in Atmospheric Studies, Evaluation Number 15; Jet Propulsion Laboratory: Pasadena, California, 2006) and IUPAC (Atkinson, R.; Baulch, D. L.; Cox, R. A.; Hampson, R. F., Jr.; Kerr, J. A.; Rossi, M. J.; Troe, J. J. Phys. Chem. Ref. Data 2000, 29, 167) data evaluation panels, which represent the data in the experimental pressure ranges rather well. Despite the availability of quite reliable parameters for these calculations (molecular vibrational frequencies (Parthiban, S.; Lee, T. J. J. Chem. Phys. 2000, 113, 145) and a value (Orlando, J. J.; Tyndall, G. S. J. Phys. Chem. 1996, 100, 19398) of the bond dissociation energy, D₂₉₈(BrO–NO₂) = 118 kJ mol⁻¹, corresponding to ΔH°₀ = 114.3 kJ mol⁻¹ at 0 K) and the use of RRKM/ME methods, fitting the calculations to the reported data or the empirical equations was anything but straightforward. Using these molecular parameters resulted in a discrepancy between the calculations and the database of rate constants of a factor of ca. 4 at, or close to, the low-pressure limit. Agreement between calculation and experiment could be achieved in two ways, either by increasing ΔH°₀ to an unrealistically high value (149.3 kJ mol⁻¹) or by increasing
Abstract:
Building energy consumption (BEC) accounting and assessment is fundamental work for building energy efficiency (BEE) development. In the existing Chinese statistical yearbooks there is no specific item for BEC accounting, and the relevant data are scattered and mixed with the consumption of other industries. Approximate BEC data can nevertheless be derived from the existing energy statistical yearbooks. For BEC assessment, the caloric values of the different energy carriers are conventionally adopted in the energy accounting and assessment field. This methodology has produced many useful conclusions for energy efficiency development, but it considers only the quantity of energy and omits the issue of energy quality (classification). An exergy methodology is therefore put forward to assess BEC, so that both the quantity and the quality of energy are considered. To illustrate the BEC accounting and exergy assessment, a case study of Chongqing in 2004 is presented. Based on the exergy analysis, the BEC of Chongqing in 2004 accounts for 17.3% of total energy consumption, a result close to that of the traditional methodology. For energy supply efficiency, however, the difference is marked: 0.417 with the exergy methodology versus 0.645 with the traditional methodology.
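As a rough illustration of why the exergy and caloric-value assessments diverge, the sketch below compares an energy-based and an exergy-weighted supply efficiency. The carrier quantities and quality (exergy) factors are assumptions made for the example, not figures from the Chongqing case study.

```python
# Rough comparison of a caloric-value (energy) supply efficiency with an
# exergy-weighted one. All quantities (PJ) and quality factors are illustrative
# assumptions, not data from the Chongqing 2004 case study.

supplied = {"electricity": 40.0, "coal": 120.0, "natural_gas": 60.0}       # primary inputs
delivered = {"electricity": 36.0, "space_heat": 100.0, "hot_water": 16.0}  # useful outputs

# Assumed exergy quality factors: ~1 for electricity and fuels, low for low-grade heat.
quality = {"electricity": 1.0, "coal": 1.0, "natural_gas": 0.95,
           "space_heat": 0.07, "hot_water": 0.06}

energy_eff = sum(delivered.values()) / sum(supplied.values())
exergy_eff = (sum(q * quality[k] for k, q in delivered.items())
              / sum(q * quality[k] for k, q in supplied.items()))

print(f"energy-based supply efficiency: {energy_eff:.3f}")
print(f"exergy-based supply efficiency: {exergy_eff:.3f}")  # much lower, as in the abstract
```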
Abstract:
A wireless sensor network (WSN) is a group of sensors linked by a wireless medium to perform distributed sensing tasks. WSNs have attracted wide interest from academia and industry alike due to their diversity of applications in buildings, including home automation, smart environments, and emergency services. The primary goal of a WSN is to collect the data sensed by its sensors. These data are characteristically noisy and exhibit temporal and spatial correlation. As this paper demonstrates, extracting useful information from such data requires a range of analysis techniques. Data mining is a process in which a wide spectrum of data analysis methods is used; it is applied in this paper to analyse data collected from WSNs monitoring an indoor environment in a building. A case study is given to demonstrate how data mining can be used to optimise the use of office space in a building.
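The abstract does not name a specific mining technique, so the sketch below shows one plausible analysis of this kind: clustering per-zone sensor summaries to flag under-used office areas. The features, data and occupancy labelling are all assumptions for illustration.

```python
# Illustrative sketch only: cluster desk-zone sensor summaries to flag under-used
# office areas. Features and data are assumed, not the paper's case study.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Assumed features per desk zone: mean temperature rise (degC) and mean CO2 excess
# above baseline (ppm) during working hours - both crude proxies for occupancy.
temp_rise = rng.normal(0.8, 0.4, size=40)
co2_excess = rng.normal(80.0, 35.0, size=40)
features = np.column_stack([temp_rise, co2_excess])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
# Treat the cluster containing the highest-CO2 zone as "occupied"; the rest as under-used.
occupied_cluster = labels[np.argmax(co2_excess)]
underused = np.count_nonzero(labels != occupied_cluster)
print(f"zones flagged as under-used: {underused} of {len(labels)}")
```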
Abstract:
This article presents a prototype model based on a wireless sensor actuator network (WSAN) aimed at optimizing both energy consumption of environmental systems and well-being of occupants in buildings. The model is a system consisting of the following components: a wireless sensor network, 'sense diaries', environmental systems such as heating, ventilation and air-conditioning systems, and a central computer. A multi-agent system (MAS) is used to derive and act on the preferences of the occupants. Each occupant is represented by a personal agent in the MAS. The sense diary is a new device designed to elicit feedback from occupants about their satisfaction with the environment. The roles of the components are: the WSAN collects data about physical parameters such as temperature and humidity from an indoor environment; the central computer processes the collected data; the sense diaries leverage trade-offs between energy consumption and well-being, in conjunction with the agent system; and the environmental systems control the indoor environment.
Abstract:
A One-Dimensional Time to Explosion (ODTX) apparatus has been used to study the times to explosion of a number of compositions based on RDX and HMX over a range of contact temperatures. The times to explosion at any given temperature tend to increase from RDX to HMX and with the proportion of HMX in the composition. Thermal ignition theory has been applied to the time to explosion data to calculate kinetic parameters. The apparent activation energy for all of the compositions lay between 127 kJ mol−1 and 146 kJ mol−1. There were large differences in the pre-exponential factor, and it was this factor, rather than the activation energy, that controlled the time to explosion.
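A common way to extract kinetic parameters from time-to-explosion data, consistent with (though not necessarily identical to) the thermal ignition analysis described above, is to fit ln(t) against 1/T. The sketch below does this on invented data; the temperatures and times are assumptions, not the measured RDX/HMX results.

```python
# Minimal sketch, not the paper's analysis: estimate an apparent activation energy E
# and a lumped pre-exponential constant C by fitting ln(t) = ln(C) + E/(R*T),
# a common simplification of thermal-ignition theory. Data below are invented.

import numpy as np

R = 8.314  # J mol^-1 K^-1
T = np.array([480.0, 490.0, 500.0, 510.0, 520.0])      # contact temperature, K (assumed)
t_expl = np.array([900.0, 420.0, 210.0, 110.0, 60.0])  # time to explosion, s (assumed)

slope, intercept = np.polyfit(1.0 / T, np.log(t_expl), 1)
E_apparent = slope * R      # J mol^-1
C = np.exp(intercept)       # lumps the pre-exponential factor and heat-balance terms

print(f"apparent activation energy: {E_apparent / 1000:.0f} kJ/mol")
print(f"lumped pre-exponential constant C: {C:.3e} s")
```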
Abstract:
Satellite measurements and numerical forecast model reanalysis data are used to compute an updated estimate of the cloud radiative effect on the global multi-annual mean radiative energy budget of the atmosphere and surface. The cloud radiative cooling effect through reflection of shortwave radiation dominates over the longwave heating effect, resulting in a net cooling of the climate system of −21 W m⁻². The shortwave radiative effect of cloud is primarily manifest as a reduction in the solar radiation absorbed at the surface of −53 W m⁻². Clouds impact longwave radiation by heating the moist tropical atmosphere (up to around 40 W m⁻² for global annual means) while enhancing the radiative cooling of the atmosphere over other regions, in particular higher latitudes and sub-tropical marine stratocumulus regimes. While clouds act to cool the climate system during the daytime, the cloud greenhouse effect heats the climate system at night. The influence of the cloud radiative effect on cloud feedbacks and on changes in the water cycle is discussed.
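The cloud radiative effect quoted above is defined as the difference between all-sky and clear-sky fluxes. The sketch below shows that bookkeeping on illustrative global-mean values; the flux numbers are assumptions, not the paper's satellite or reanalysis data.

```python
# Minimal sketch (values are illustrative global-mean fluxes, not the paper's data):
# cloud radiative effect (CRE) as the difference between all-sky and clear-sky fluxes.

sw_absorbed_allsky = 240.0    # W m^-2, shortwave absorbed by the climate system (assumed)
sw_absorbed_clearsky = 287.0  # W m^-2 (assumed)
olr_allsky = 239.0            # W m^-2, outgoing longwave radiation (assumed)
olr_clearsky = 266.0          # W m^-2 (assumed)

cre_sw = sw_absorbed_allsky - sw_absorbed_clearsky   # negative: clouds reflect sunlight
cre_lw = olr_clearsky - olr_allsky                   # positive: clouds trap longwave
cre_net = cre_sw + cre_lw

print(f"SW CRE: {cre_sw:+.0f} W/m^2, LW CRE: {cre_lw:+.0f} W/m^2, net: {cre_net:+.0f} W/m^2")
```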
Abstract:
The principal driver of nitrogen (N) losses from the body, including excretion and secretion in milk, is N intake. However, other covariates may also play a role in modifying the partitioning of N. This study tests the hypothesis that N partitioning in dairy cows is affected by energy and protein interactions. A database containing 470 dairy cow observations was collated from calorimetry experiments. The data include N and energy parameters of the diet and N utilization by the animal. Univariate and multivariate meta-analyses that considered both within and between study effects were conducted to generate prediction equations based on N intake alone or with an energy component. The univariate models showed that there were strong positive linear relationships between N intake and N excretion in faeces, urine and milk. The slopes were 0.28 for faeces N, 0.38 for urine N and 0.20 for milk N. Multivariate model analysis did not improve the fit. Metabolizable energy intake had a significant positive effect on the amount of milk N in proportion to faeces and urine N, which is also supported by other studies. Another measure of energy considered as a covariate to N intake was diet quality or metabolizability (the concentration of metabolizable energy relative to gross energy of the diet). Diet quality also had a positive linear relationship with the proportion of milk N relative to N excreted in faeces and urine. Metabolizability had the largest effect on faeces N due to the lower protein digestibility of low quality diets. Urine N was also affected by diet quality, and the magnitude of the effect was higher than for milk N. This research shows that including a measure of diet quality as a covariate with N intake in a model of N excretion can enhance our understanding of the effects of diet composition on N losses from dairy cows. The new prediction equations developed in this study could be used to monitor N losses from dairy systems.
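For illustration only, the sketch below applies the marginal slopes quoted in the abstract (0.28 faeces N, 0.38 urine N, 0.20 milk N) to a nitrogen intake. The published equations also include intercepts and energy covariates, which are omitted here, and the example intake is an assumed figure.

```python
# Illustrative sketch only: marginal partition of N intake using the slopes quoted
# in the abstract. Intercepts and energy covariates from the full equations are omitted.

def partition_nitrogen(n_intake_g_per_day: float) -> dict:
    """Rough marginal partition of nitrogen intake (g/day) across outputs."""
    slopes = {"faeces_N": 0.28, "urine_N": 0.38, "milk_N": 0.20}
    out = {k: s * n_intake_g_per_day for k, s in slopes.items()}
    out["retained_or_other_N"] = n_intake_g_per_day - sum(out.values())
    return out

print(partition_nitrogen(600.0))  # e.g. a cow consuming 600 g N/day (assumed figure)
```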
Abstract:
This paper investigates whether obtaining sustainable building certification entails a rental premium for commercial office buildings and tracks its development over time. To this end, both a difference-in-differences and a fixed-effects model approach are applied to a large panel dataset of office buildings in the United States in the 2000–2010 period. The results indicate a significant rental premium for both ENERGY STAR and LEED certified buildings. Controlling for confounding factors, this premium is shown to have increased steadily from 2006 to 2008, followed by a moderate decline in the subsequent periods. The results also show a significant positive relationship between ENERGY STAR labeling and building occupancy rates.
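A difference-in-differences rent regression of the general kind described above might be specified as in the sketch below. The data file, column names and controls are hypothetical, not the authors' dataset or exact specification.

```python
# Minimal sketch, not the authors' specification: a difference-in-differences style
# hedonic rent regression. The CSV file and column names (rent, certified, post,
# size, age, building_id, year) are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("office_rents_panel.csv")   # hypothetical panel of building-year rows

# log rent on the certification x post-period interaction, plus controls and year effects
model = smf.ols(
    "np.log(rent) ~ certified * post + size + age + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["building_id"]})

print(model.params["certified:post"])   # DiD estimate of the certification rental premium
```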
Abstract:
We introduce the notion that the energy of individuals can manifest as a higher-level, collective construct. To this end, we conducted four independent studies to investigate the viability and importance of the collective energy construct as assessed by a new survey instrument, the productive energy measure (PEM). Study 1 (n = 2208) included exploratory and confirmatory factor analyses to explore the underlying factor structure of the PEM. Study 2 (n = 660) cross-validated the same factor structure in an independent sample. In study 3, we administered the PEM to more than 5000 employees from 145 departments located in five countries. Results from measurement invariance, statistical aggregation, convergent, and discriminant-validity assessments offered additional support for the construct validity of the PEM. In terms of predictive and incremental validity, the PEM was positively associated with three collective attitudes: units' commitment to goals, commitment to the organization, and overall satisfaction. In study 4, we explored the relationship between the productive energy of firms and their overall performance. Using data from 92 firms (n = 5939 employees), we found a positive relationship between the PEM (aggregated to the firm level) and the performance of those firms. Copyright © 2011 John Wiley & Sons, Ltd.
Abstract:
A new electronic software distribution (ESD) life cycle analysis (LCA) methodology and model structure were constructed to calculate energy consumption and greenhouse gas (GHG) emissions. To counteract the use of high-level, top-down modeling efforts and to increase result accuracy, the focus was placed on device details and data routes. In order to compare ESD to a relevant physical distribution alternative, physical model boundaries and variables were described. The methodology was compiled from the analysis and operational data of a major online store which provides both ESD and physical distribution options. The ESD method included the calculation of the power consumption of data center server and networking devices. An in-depth method to calculate server efficiency and utilization was also included, to account for virtualization and server efficiency features. Internet transfer power consumption was analyzed taking into account the number of data hops and networking devices used. The power consumed by online browsing and downloading was also factored into the model. The embedded CO2e of server and networking devices was proportioned to each ESD process. Three U.K.-based ESD scenarios were analyzed using the model, which revealed potential CO2e savings of 83% when ESD was used over physical distribution. Results also highlighted the importance of server efficiency and utilization methods.
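To make the structure of such a per-download calculation concrete, the sketch below sums data-centre, network-transfer and end-user energy for a single download. Every figure in it is an assumption for illustration, not a value from the study.

```python
# Illustrative sketch only (all figures are assumptions, not the study's data):
# per-download energy for electronic software distribution, summing data-centre,
# network-transfer and end-user components, then converting to CO2e.

file_size_gb = 5.0                      # assumed download size
server_kwh_per_gb = 0.02                # data-centre energy intensity (assumed)
network_kwh_per_gb_per_hop = 0.005      # per networking device traversed (assumed)
n_hops = 12                             # assumed number of data hops
client_power_kw = 0.06                  # end-user device during browsing/downloading (assumed)
download_hours = 0.4                    # assumed browse + download time
grid_kg_co2e_per_kwh = 0.45             # assumed UK grid emission factor

energy_kwh = (file_size_gb * server_kwh_per_gb
              + file_size_gb * network_kwh_per_gb_per_hop * n_hops
              + client_power_kw * download_hours)
print(f"energy per download: {energy_kwh:.3f} kWh, "
      f"~{energy_kwh * grid_kg_co2e_per_kwh * 1000:.0f} g CO2e")
```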
Abstract:
This paper describes a method that employs Earth Observation (EO) data to calculate spatiotemporal estimates of soil heat flux, G, using a physically-based method (the Analytical Method). The method involves a harmonic analysis of land surface temperature (LST) data. It also requires an estimate of near-surface soil thermal inertia; this property depends on soil textural composition and varies as a function of soil moisture content. The EO data needed to drive the model equations, and the ground-based data required to verify the method, were obtained over the Fakara domain within the African Monsoon Multidisciplinary Analysis (AMMA) program. LST estimates (3 km × 3 km, one image every 15 min) were derived from MSG-SEVIRI data. Soil moisture estimates were obtained from ENVISAT-ASAR data, while estimates of leaf area index, LAI, (to calculate the effect of the canopy on G, largely due to radiation extinction) were obtained from SPOT-HRV images. The variation of these variables over the Fakara domain, and the implications for values of G derived from them, were discussed. Results showed that this method provides reliable large-scale spatiotemporal estimates of G. Variations in G could largely be explained by the variability in the model input variables. Furthermore, it was shown that this method is relatively insensitive to model parameters related to the vegetation or soil texture. However, the strong sensitivity of thermal inertia to soil moisture content at low values of relative saturation (<0.2) means that in arid or semi-arid climates accurate estimates of surface soil moisture content are of utmost importance if reliable estimates of G are to be obtained. This method has the potential to improve large-scale evaporation estimates, to aid land surface model prediction and to advance research that aims to explain the failure of energy balance closure in meteorological field studies.
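A minimal sketch of an Analytical-Method style calculation is given below: each harmonic of the LST series, A_n cos(n·omega·t + phi_n), contributes Gamma · A_n · sqrt(n·omega) · cos(n·omega·t + phi_n + pi/4) to G, with a simple exponential LAI term standing in for canopy radiation extinction. The exact formulation, parameter values and the synthetic LST series are assumptions rather than the paper's implementation.

```python
# Minimal sketch of a harmonic-analysis (Analytical Method style) soil heat flux estimate.
# The synthetic LST series, thermal inertia, LAI and extinction coefficient are assumed.

import numpy as np

omega = 2.0 * np.pi / 86400.0                    # diurnal angular frequency, rad s^-1
t = np.arange(0, 86400, 900.0)                   # one day of 15-min time steps, s
lst = 300.0 + 8.0 * np.sin(omega * (t - 30000))  # synthetic LST series, K (assumed)

n_harm = 3
gamma = 900.0          # soil thermal inertia, J m^-2 K^-1 s^-1/2 (moisture dependent; assumed)
lai, k_ext = 1.5, 0.5  # leaf area index and radiation-extinction coefficient (assumed)

mean = lst.mean()
G = np.zeros_like(t)
for n in range(1, n_harm + 1):
    # amplitude and phase of the n-th harmonic of the LST series
    c = 2.0 / len(lst) * np.sum((lst - mean) * np.exp(-1j * n * omega * t))
    A_n, phi_n = np.abs(c), np.angle(c)
    # each LST harmonic is scaled by gamma*sqrt(n*omega) and phase-shifted by pi/4
    G += gamma * A_n * np.sqrt(n * omega) * np.cos(n * omega * t + phi_n + np.pi / 4)

G *= np.exp(-k_ext * lai)   # crude reduction of G under a canopy (radiation extinction)
print(f"peak soil heat flux estimate: {G.max():.0f} W m^-2")
```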