819 results for Energy consumption data sets
Abstract:
People's interaction with the indoor environment plays a significant role in the energy consumption of buildings. Mismatched and delayed occupant feedback on the indoor environment to the building energy management system is a major barrier to efficient energy management in buildings. There is an increasing trend towards applying digital technology to support control systems in order to achieve energy efficiency in buildings. This article introduces a holistic, integrated building energy management model called 'smart sensor, optimum decision and intelligent control' (SMODIC). The model takes occupants' responses to the indoor environment into account in the control system. A model of optimal decision-making based on multiple indoor environment criteria has been integrated into the whole system. The SMODIC model combines information technology and people-centric concepts to achieve energy savings in buildings.
Abstract:
This paper analyzes the delay performance of the Enhanced relay-enabled Distributed Coordination Function (ErDCF) for wireless ad hoc networks under ideal conditions and in the presence of transmission errors. Relays are nodes capable of supporting high data rates for other low-data-rate nodes. In an ideal channel, ErDCF achieves higher throughput and reduced energy consumption compared to the IEEE 802.11 Distributed Coordination Function (DCF). This gain is still maintained in the presence of errors. Relays are also expected to reduce delay. However, the impact of transmission errors on the delay behavior of ErDCF is not known. In this work, we present the impact of transmission errors on delay. It turns out that under transmission errors of sufficient magnitude to increase the number of dropped packets, packet delay is reduced. This is due to the increase in the probability of failure. As a result, the packet drop time increases, reflecting the throughput degradation.
Abstract:
A role for sequential test procedures is emerging in genetic and epidemiological studies using banked biological resources. This stems from the methodology's potential for improved use of information relative to comparable fixed sample designs. Studies in which cost, time and ethics feature prominently are particularly suited to a sequential approach. In this paper, sequential procedures for matched case–control studies with binary data are investigated and assessed. Design issues such as sample size evaluation and error rates are identified and addressed. The methodology is illustrated and evaluated using both real and simulated data sets.
Abstract:
Svalgaard and Cliver (2010) recently reported a consensus between the various reconstructions of the heliospheric field over recent centuries. This is a significant development because, individually, each has uncertainties introduced by instrument calibration drifts, limited numbers of observatories, and the strength of the correlations employed. However, taken collectively, a consistent picture is emerging. We here show that this consensus extends to more data sets and methods than reported by Svalgaard and Cliver, including that used by Lockwood et al. (1999), when their algorithm is used to predict the heliospheric field rather than the open solar flux. One area where there is still some debate relates to the existence and meaning of a floor value to the heliospheric field. From cosmogenic isotope abundances, Steinhilber et al. (2010) have recently deduced that the near-Earth IMF at the end of the Maunder minimum was 1.80 ± 0.59 nT, which is considerably lower than the revised floor of 4 nT proposed by Svalgaard and Cliver. We here combine cosmogenic and geomagnetic reconstructions and modern observations (with allowance for the effect of solar wind speed and structure on the near-Earth data) to derive an estimate for the open solar flux of (0.48 ± 0.29) × 10^14 Wb at the end of the Maunder minimum. By way of comparison, the largest and smallest annual means recorded by instruments in space between 1965 and 2010 are 5.75 × 10^14 Wb and 1.37 × 10^14 Wb, respectively, set in 1982 and 2009, and the maximum of the 11-year running means was 4.38 × 10^14 Wb in 1986. Hence the average open solar flux during the Maunder minimum is found to have been 11% of its peak value during the recent grand solar maximum.
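The quoted 11% figure can be checked directly from the flux values stated in this abstract; a minimal sketch of the arithmetic (no values beyond those quoted above):

```python
# Open solar flux values quoted in the abstract, in units of 1e14 Wb
maunder_minimum_flux = 0.48   # end of Maunder minimum (+/- 0.29)
grand_maximum_flux = 4.38     # maximum of the 11-year running means (1986)

ratio = maunder_minimum_flux / grand_maximum_flux
print(f"Maunder minimum / grand maximum = {ratio:.1%}")  # ~11%
```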
Abstract:
This paper uses data provided by three major real estate advisory firms to investigate the level and pattern of variation in the measurement of historic real estate rental values for the main European office centres. The paper assesses the extent to which the data-providing organizations agree on historic market performance in terms of returns, risk and timing and examines the relationship between market maturity and agreement. The analysis suggests that at the aggregate level and for many markets, there is substantial agreement on the direction, quantity and timing of market change. However, there is substantial variability in the level of agreement among cities. The paper also assesses whether the different data sets produce different explanatory models and market forecasts. It is concluded that, although disagreement on the direction of market change is high for many markets, the different data sets often produce similar explanatory models and predict similar relative performance.
Abstract:
Monitoring Earth's terrestrial water conditions is critically important to many hydrological applications such as global food production; assessing water resources sustainability; and flood, drought, and climate change prediction. These needs have motivated the development of pilot monitoring and prediction systems for terrestrial hydrologic and vegetative states, but to date only at the rather coarse spatial resolutions (∼10–100 km) over continental to global domains. Adequately addressing critical water cycle science questions and applications requires systems that are implemented globally at much higher resolutions, on the order of 1 km, resolutions referred to as hyperresolution in the context of global land surface models. This opinion paper sets forth the needs and benefits for a system that would monitor and predict the Earth's terrestrial water, energy, and biogeochemical cycles. We discuss six major challenges in developing a system: improved representation of surface‐subsurface interactions due to fine‐scale topography and vegetation; improved representation of land‐atmospheric interactions and resulting spatial information on soil moisture and evapotranspiration; inclusion of water quality as part of the biogeochemical cycle; representation of human impacts from water management; utilizing massively parallel computer systems and recent computational advances in solving hyperresolution models that will have up to 10^9 unknowns; and developing the required in situ and remote sensing global data sets. We deem the development of a global hyperresolution model for monitoring the terrestrial water, energy, and biogeochemical cycles a “grand challenge” to the community, and we call upon the international hydrologic community and the hydrological science support infrastructure to endorse the effort.
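A rough, hedged check of the quoted order of magnitude: Earth's land surface is roughly 1.5 × 10^8 km², so a 1 km grid with a modest assumed number of vertical layers or state variables per cell already approaches 10^9 unknowns. A minimal back-of-envelope sketch (the layer count is an assumption for illustration):

```python
# Back-of-envelope estimate of problem size for a 1 km global land model.
land_area_km2 = 1.49e8    # approximate global land area
cell_area_km2 = 1.0       # hyperresolution target: ~1 km grid cells
layers_per_cell = 10      # assumed vertical layers / state variables per cell

cells = land_area_km2 / cell_area_km2
unknowns = cells * layers_per_cell
print(f"{cells:.1e} cells -> {unknowns:.1e} unknowns")  # ~1.5e9
```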
Abstract:
In this contribution we aim at anchoring Agent-Based Modeling (ABM) simulations in actual models of human psychology. More specifically, we apply unidirectional ABM to social psychological models using low-level agents (i.e., intra-individual) to examine whether they generate better predictions, in comparison to standard statistical approaches, concerning the intention of performing a behavior and the behavior itself. Moreover, this contribution tests to what extent the predictive validity of models of attitude such as the Theory of Planned Behavior (TPB) or the Model of Goal-directed Behavior (MGB) depends on the assumption that people’s decisions and actions are purely rational. Simulations were therefore run by considering different deviations from rationality of the agents with a trembling-hand method. Two data sets concerning, respectively, the consumption of soft drinks and physical activity were used. Three key findings emerged from the simulations. First, compared to the standard statistical approach, the agent-based simulation generally improves the prediction of behavior from intention. Second, the improvement in prediction is inversely proportional to the complexity of the underlying theoretical model. Finally, the introduction of varying degrees of deviation from rationality in agents’ behavior can lead to an improvement in the goodness of fit of the simulations. By demonstrating the potential of ABM as a complementary perspective for evaluating social psychological models, this contribution underlines the necessity of better defining agents in terms of psychological processes before examining higher levels such as the interactions between individuals.
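To illustrate the trembling-hand idea described above, a minimal sketch (a hypothetical threshold rule with illustrative parameters, not the authors' implementation): each agent acts when its intention score exceeds a threshold, but with a small probability the decision "trembles" and is flipped.

```python
import random

def trembling_hand_decision(intention, threshold=0.5, tremble=0.1):
    """Return True if the agent performs the behaviour.

    A purely rational agent acts iff intention > threshold; with
    probability `tremble` the decision is flipped (the deviation from
    rationality). All parameter names and values are illustrative.
    """
    rational_choice = intention > threshold
    if random.random() < tremble:
        return not rational_choice
    return rational_choice

# Fraction of simulated agents acting at increasing tremble levels
intentions = [random.random() for _ in range(1000)]
for tremble in (0.0, 0.1, 0.3):
    acted = sum(trembling_hand_decision(i, tremble=tremble) for i in intentions)
    print(tremble, acted / len(intentions))
```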
Abstract:
Novel imaging techniques are playing an increasingly important role in drug development, providing insight into the mechanism of action of new chemical entities. The data sets obtained by these methods can be large with complex inter-relationships, but the most appropriate statistical analysis for handling these data is often uncertain - precisely because of the exploratory nature of the way the data are collected. We present an example from a clinical trial using magnetic resonance imaging to assess changes in atherosclerotic plaques following treatment with a tool compound with established clinical benefit. We compared two specific approaches to handling the correlations due to physical location and repeated measurements: two-level and four-level multilevel models. The two methods identified similar structural variables, but the higher-level multilevel models had the advantage of explaining a greater proportion of variation, and the modeling assumptions appeared to be better satisfied.
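For orientation, a minimal two-level sketch (repeated plaque measurements nested within patients) using statsmodels; the variable names and file are hypothetical, and the four-level structure used in the trial would add further nesting (e.g. plaque location and visit):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per plaque measurement, with columns
# 'wall_area' (response), 'treatment', 'visit' and 'patient_id' (grouping).
df = pd.read_csv("plaque_measurements.csv")

# Two-level model: a random intercept per patient handles repeated measures.
model = smf.mixedlm("wall_area ~ treatment + visit", data=df, groups=df["patient_id"])
result = model.fit()
print(result.summary())
```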
Abstract:
Occupants’ behaviour in improving the indoor environment plays a significant role in saving energy in buildings. Therefore the key step to reducing energy consumption and carbon emissions from buildings is to understand how occupants interact with the environment they are exposed to in terms of achieving thermal comfort and well-being, though such interaction is complex. This paper presents a dynamic process of occupant behaviours involving technological, personal and psychological adaptations in response to varied thermal conditions, based on data covering four seasons gathered from a field study in Chongqing, China. It demonstrates that occupants are active players in environmental control and that their adaptive responses are driven strongly by ambient thermal stimuli and vary from season to season and from time to time, even on the same day. Positive, dynamic behavioural adaptation will help save energy used in heating and cooling buildings. However, when environmental parameters cannot fully satisfy occupants’ requirements, negative behaviours can conflict with energy saving. The survey revealed that about 23% of windows are partly open for fresh air when air-conditioners are in operation in summer. This paper addresses the issue of how building and environmental systems should be designed, operated and managed in a way that meets the requirements of energy efficiency without compromising well-being and productivity.
Abstract:
The development of an Artificial Neural Network model of UK domestic appliance energy consumption is presented. The model uses diary-style appliance use data and a survey questionnaire collected from 51 households during the summer of 2010. It also incorporates measured energy data and is sensitive to socioeconomic, physical dwelling and temperature variables. A prototype model is constructed in MATLAB using a two-layer feed-forward network with backpropagation training and has a 12:10:24 architecture. Model outputs include appliance load profiles which can be applied to the fields of energy planning (micro renewables and smart grids), building simulation tools and energy policy.
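A minimal sketch of a network with the reported 12:10:24 shape (12 inputs, one hidden layer of 10 units, 24 outputs), here using scikit-learn's MLPRegressor, which trains by backpropagation; the feature set and data below are placeholders, not the authors' survey data, and the paper's prototype was built in MATLAB rather than Python:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder data: 51 households x 12 input variables (socio-economic,
# dwelling and temperature features) -> 24 hourly appliance-load values.
X = rng.random((51, 12))
y = rng.random((51, 24))

# 12:10:24 architecture: 12 inputs, one hidden layer of 10 neurons, 24 outputs.
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X, y)

profile = model.predict(X[:1])  # predicted 24-hour load profile for one household
print(profile.shape)            # (1, 24)
```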
Abstract:
To achieve CO2 emissions reductions, the UK Building Regulations require developers of new residential buildings to calculate the expected CO2 emissions arising from their energy consumption using a methodology such as the Standard Assessment Procedure (SAP 2005) or, more recently, SAP 2009. SAP encompasses all domestic heat consumption and a limited proportion of the electricity consumption. However, these calculations are rarely verified against real energy consumption and the related CO2 emissions. This paper presents the results of an analysis based on weekly heat demand data for more than 200 individual flats. The data are collected from a recently built residential development connected to a district heating network. A methodology for separating out the domestic hot water use (DHW) and space heating demand (SH) has been developed, and measured values are compared to the demand calculated using the SAP 2005 and 2009 methodologies. The analysis also shows the variation in DHW and SH consumption with both the size of the flats and tenure (privately owned or housing association). The evaluation of space heating consumption also includes an estimation of the heating degree day (HDD) base temperature for each block of flats and its comparison to the average base temperature calculated using the SAP 2005 methodology.
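As background for the degree-day evaluation mentioned above, a minimal sketch of a weekly heating-degree-day calculation and a simple way to estimate a base temperature by maximising the fit of demand against HDD; the base temperatures, weather and demand figures are illustrative only, not the study's values:

```python
import numpy as np

def weekly_hdd(daily_mean_temps, base_temp=15.5):
    """Heating degree days for one week: sum of (base - T) over days with T < base."""
    t = np.asarray(daily_mean_temps)
    return np.sum(np.clip(base_temp - t, 0.0, None))

# Dummy weekly weather (constant daily means) and dummy weekly heat demand in kWh.
weeks_temps = [np.full(7, t) for t in (2, 5, 8, 12, 16, 19)]
heat_demand = np.array([410, 350, 290, 190, 60, 20])

# Pick the base temperature giving the best linear fit (highest R^2) of demand on HDD.
best = max(
    (np.corrcoef([weekly_hdd(w, b) for w in weeks_temps], heat_demand)[0, 1] ** 2, b)
    for b in np.arange(12.0, 20.0, 0.5)
)
print(f"best R^2 = {best[0]:.3f} at base temperature {best[1]} °C")
```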
Abstract:
The assessment of building energy efficiency is one of the most effective measures for reducing building energy consumption. This paper proposes a holistic method (HMEEB) for assessing and certifying building energy efficiency based on the D-S (Dempster-Shafer) theory of evidence and the Evidential Reasoning (ER) approach. HMEEB has three main features: (i) it provides both a method to assess and certify building energy efficiency and an analytical tool to identify improvement opportunities; (ii) it combines a wealth of information on building energy efficiency assessment, including the identification of indicators and a weighting mechanism; and (iii) it provides a method to identify and deal with the inherent uncertainties in the assessment procedure. This paper demonstrates the robustness, flexibility and effectiveness of the proposed method, using two examples to assess the energy efficiency of two residential buildings, both located in the ‘Hot Summer and Cold Winter’ zone in China. The proposed certification method provides detailed recommendations for policymakers in the context of carbon emission reduction targets and promoting energy efficiency in the built environment. The method is transferable to other countries and regions, with the indicator weighting system modified to reflect local climatic, economic and social factors.
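For orientation, Dempster's rule of combination, which underlies the D-S part of the approach, combines two basic probability (mass) assignments over a frame of discernment; a minimal generic sketch follows, where the energy-efficiency grades and mass values are made up for illustration and are not the paper's indicators:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions defined on frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb  # mass assigned to conflicting evidence (K)
    # Normalise by the non-conflicting mass (1 - K)
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical evidence from two indicators about a building grade in {A, B, C}
m1 = {frozenset("A"): 0.6, frozenset("AB"): 0.3, frozenset("ABC"): 0.1}
m2 = {frozenset("A"): 0.5, frozenset("BC"): 0.3, frozenset("ABC"): 0.2}
print(dempster_combine(m1, m2))
```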
Abstract:
CO, O3, and H2O data in the upper troposphere/lower stratosphere (UTLS) measured by the Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS) on Canada’s SCISAT-1 satellite are validated using aircraft and ozonesonde measurements. In the UTLS, validation of chemical trace gas measurements is a challenging task due to small-scale variability in the tracer fields, strong gradients of the tracers across the tropopause, and scarcity of measurements suitable for validation purposes. Validation based on coincidences therefore suffers from geophysical noise. Two alternative methods for the validation of satellite data are introduced, which avoid the usual need for coincident measurements: tracer-tracer correlations, and vertical tracer profiles relative to tropopause height. Both are increasingly being used for model validation as they strongly suppress geophysical variability and thereby provide an “instantaneous climatology”. This allows comparison of measurements between non-coincident data sets, which yields information about the precision and a statistically meaningful error assessment of the ACE-FTS satellite data in the UTLS. By defining a trade-off factor, we show that the measurement errors can be reduced by including more measurements obtained over a wider longitude range in the comparison, despite the increased geophysical variability. Applying the methods then yields the following upper bounds to the relative differences in the mean found between the ACE-FTS and SPURT aircraft measurements in the upper troposphere (UT) and lower stratosphere (LS), respectively: for CO ±9% and ±12%, for H2O ±30% and ±18%, and for O3 ±25% and ±19%. The relative differences for O3 can be narrowed down by using a larger dataset obtained from ozonesondes, yielding a high bias in the ACE-FTS measurements of 18% in the UT and relative differences of ±8% for measurements in the LS. When taking into account the smearing effect of the vertically limited spacing between measurements of the ACE-FTS instrument, the relative differences decrease by 5–15% around the tropopause, suggesting a vertical resolution of the ACE-FTS in the UTLS of around 1 km. The ACE-FTS hence offers unprecedented precision and vertical resolution for a satellite instrument, which will allow a new global perspective on UTLS tracer distributions.
Abstract:
In the last decade, a vast number of land surface schemes have been designed for use in global climate models, atmospheric weather prediction, mesoscale numerical models, ecological models, and models of global change. Since land surface schemes are designed for different purposes, they have various levels of complexity in the treatment of bare soil processes, vegetation, and soil water movement. This paper is a contribution to a small group of papers dealing with the intercomparison of differently designed and oriented land surface schemes. For that purpose we have chosen three schemes, classified according to Shao et al. (1995): i) global climate models, BATS (Dickinson et al., 1986; Dickinson et al., 1992); ii) mesoscale and ecological models, LEAF (Lee, 1992); and iii) mesoscale models, LAPS (Mihailović, 1996; Mihailović and Kallos, 1997; Mihailović et al., 1999). These schemes were compared using surface flux and leaf temperature outputs obtained by time integrations of data sets derived from micrometeorological measurements above a maize field at an experimental site in De Sinderhoeve (The Netherlands) for 18 August, 8 September, and 4 October 1988. Finally, the comparison of the schemes was supported by applying a simple statistical analysis to the surface flux outputs, as sketched below.
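A "simple statistical analysis" of flux outputs typically amounts to bias and root-mean-square error against the measured fluxes; a minimal sketch under that assumption, with placeholder arrays standing in for the modelled and observed fluxes (the numbers are not from the study):

```python
import numpy as np

def bias_and_rmse(modelled, observed):
    """Bias and RMSE of a scheme's flux output against measurements."""
    d = np.asarray(modelled, dtype=float) - np.asarray(observed, dtype=float)
    return d.mean(), np.sqrt((d ** 2).mean())

# Placeholder sensible heat fluxes (W m-2) over part of a day
observed = np.array([20, 80, 150, 210, 180, 90, 30])
schemes = {"BATS": [25, 90, 140, 220, 170, 95, 40],
           "LEAF": [15, 70, 160, 200, 190, 85, 25],
           "LAPS": [22, 85, 148, 215, 175, 92, 33]}

for name, modelled in schemes.items():
    b, r = bias_and_rmse(modelled, observed)
    print(f"{name}: bias={b:+.1f} W m-2, RMSE={r:.1f} W m-2")
```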
Abstract:
A model for estimating the turbulent kinetic energy dissipation rate in the oceanic boundary layer, based on insights from rapid-distortion theory, is presented and tested. This model provides a possible explanation for the very high dissipation levels found by numerous authors near the surface. It is conceived that turbulence, injected into the water by breaking waves, is subsequently amplified due to its distortion by the mean shear of the wind-induced current and straining by the Stokes drift of surface waves. The partition of the turbulent shear stress into a shear-induced part and a wave-induced part is taken into account. In this picture, dissipation enhancement results from the same mechanism responsible for Langmuir circulations. Apart from a dimensionless depth and an eddy turn-over time, the dimensionless dissipation rate depends on the wave slope and wave age, which may be encapsulated in the turbulent Langmuir number La_t. For large La_t, or any La_t but large depth, the dissipation rate tends to the usual surface layer scaling, whereas when La_t is small, it is strongly enhanced near the surface, growing asymptotically as ε ∝ La_t^{-2} when La_t → 0. Results from this model are compared with observations from the WAVES and SWADE data sets, assuming that this is the dominant dissipation mechanism acting in the ocean surface layer, and statistical measures of the corresponding fit indicate a substantial improvement over previous theoretical models. Comparisons are also carried out against more recent measurements, showing good order-of-magnitude agreement, even when shallow-water effects are important.
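For reference, the turbulent Langmuir number used above is conventionally defined from the water-side friction velocity u_* and the surface Stokes drift U_s (a standard definition stated here for convenience, not taken from this abstract), and the quoted near-surface enhancement can then be written as:

```latex
La_t = \sqrt{\frac{u_*}{U_s}}, \qquad
\varepsilon \propto La_t^{-2} \quad \text{as } La_t \to 0 .
```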