870 results for Consumption Predicting Model
Abstract:
Given that next- and current-generation networks will coexist for a considerable period of time, it is important to improve the performance of existing networks. One recently proposed improvement enhances the throughput of ad hoc networks by using dual-hop relay-based transmission schemes. Since throughput in ad hoc networks is normally tied to their energy consumption, it is important to examine the impact of relay-based transmissions on energy consumption. In this paper, we present an analytical energy consumption model for dual-hop relay-based medium access control (MAC) protocols. Based on the recently reported relay-enabled Distributed Coordination Function (rDCF), we show the efficacy of the proposed analytical model. This generalized model can be used to predict energy consumption in saturated relay-based ad hoc networks, both in an ideal environment and in the presence of transmission errors. It is shown that using a relay results not only in better throughput but also in better energy efficiency. Copyright (C) 2009 Rizwan Ahmad et al.
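As a hedged illustration of the kind of per-packet energy accounting such a model performs (not the paper's actual rDCF analysis), the Python sketch below compares direct transmission against dual-hop relaying; all power levels, PHY rates and the packet size are assumed placeholder values.

```python
# Back-of-the-envelope sketch of per-packet energy for direct vs. dual-hop
# relay delivery; NOT the paper's rDCF analysis. All power levels, PHY rates
# and the packet size are assumed placeholder values.

P_TX, P_RX = 1.65, 1.40       # transmit / receive power draw in watts (assumed)
PACKET_BITS = 8 * 1024        # payload size in bits (assumed)

def tx_energy(rate_mbps: float) -> float:
    """Energy in joules to transmit one packet at the given PHY rate."""
    return P_TX * PACKET_BITS / (rate_mbps * 1e6)

def rx_energy(rate_mbps: float) -> float:
    """Energy in joules to receive one packet at the given PHY rate."""
    return P_RX * PACKET_BITS / (rate_mbps * 1e6)

# A slow direct link vs. two fast hops (sender -> relay -> receiver); the
# relay both receives and retransmits the packet.
direct = tx_energy(2) + rx_energy(2)
relayed = 2 * (tx_energy(11) + rx_energy(11))

print(f"direct at 2 Mb/s   : {direct * 1e3:.2f} mJ/packet")
print(f"two hops at 11 Mb/s: {relayed * 1e3:.2f} mJ/packet")
```

With these assumed numbers the two fast hops cost less energy per packet than one slow direct transmission, which is the intuition behind the abstract's conclusion that relaying can improve energy efficiency.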
Abstract:
People's interaction with the indoor environment plays a significant role in the energy consumption of buildings. Mismatched and delayed occupant feedback on the indoor environment to the building energy management system is the major barrier to efficient energy management of buildings. There is an increasing trend towards applying digital technology to support control systems in order to achieve energy efficiency in buildings. This article introduces a holistic, integrated building energy management model called 'smart sensor, optimum decision and intelligent control' (SMODIC). The model takes occupants' responses to the indoor environment into account in the control system. A model of optimal decision-making based on multiple indoor-environment criteria has been integrated into the whole system. The SMODIC model combines information technology and people-centric concepts to achieve energy savings in buildings.
Abstract:
A generic exergy-assessment model is proposed for the environmental impact of the building lifecycle, with a special focus on the natural environment. Three environmental impacts (energy consumption, resource consumption and pollutant discharge) are analyzed with reference to energy-embodied exergy, resource chemical exergy and abatement exergy, respectively. The resulting generic model contains two sub-models, one covering building energy use and the other building materials use. Combined with theories by ecologists such as Odum, the paper evaluates a building's environmental sustainability through its exergy footprint and environmental impacts. A case study from Chongqing, China illustrates the application of this method. The case study found that energy consumption constitutes 70–80% of the total environmental impact over a 50-year building lifecycle, within which the operation phase accounts for 80% of the total environmental impact, the building material production phase for 15%, and the remaining phases for 5%.
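A minimal arithmetic sketch of how such phase shares roll up into a lifecycle total is shown below; the percentage split comes from the case-study figures quoted above, while the total exergy footprint is a made-up example value.

```python
# Illustrative only: the phase shares are the case-study percentages quoted
# in the abstract; the 50-year exergy footprint total is a made-up value.
lifecycle_share = {
    "operation": 0.80,
    "material_production": 0.15,
    "other_phases": 0.05,
}
total_footprint_GJ = 1.0e5  # hypothetical 50-year exergy footprint

for phase, share in lifecycle_share.items():
    print(f"{phase:>20}: {share * total_footprint_GJ:>9,.0f} GJ")
```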
Abstract:
The hierarchical and "bob" (branch-on-branch) models are tube-based computational models recently developed for predicting the linear rheology of general mixtures of polydisperse branched polymers. The two models are based on a similar tube-theory framework but differ in their numerical implementation and in the details of their relaxation mechanisms. We present a detailed overview of the similarities and differences of these models and examine the effects of these differences on the predicted linear viscoelastic properties of a set of representative branched polymer samples, in order to give a general picture of the models' performance. Our analysis confirms that the hierarchical and bob models quantitatively predict the linear rheology of a wide range of branched polymer melts, but it also indicates that there is still no unique solution covering all types of branched polymers without case-by-case adjustment of parameters such as the dilution exponent alpha and the factor p^2, which defines the hopping distance of a branch point relative to the tube diameter. An updated version of the hierarchical model, with improved computational efficiency and refined relaxation mechanisms, is introduced and used in these analyses.
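To make the two fitting parameters concrete, here is a hedged Python sketch of the standard tube-theory relations they enter (dynamic dilution of the plateau modulus, and branch-point hopping); this is not the hierarchical or bob code itself, and all numerical values are illustrative assumptions.

```python
import numpy as np

# Standard tube-theory relations in which the two fitted parameters appear;
# not the hierarchical or bob implementations. Values are illustrative.

G_N0  = 1.0e6   # undiluted plateau modulus, Pa (assumed)
alpha = 4 / 3   # dilution exponent (commonly taken as 1 or 4/3)
p2    = 1 / 12  # (hop distance / tube diameter)^2, a fitted factor

def diluted_modulus(phi):
    """Dynamic tube dilution: modulus felt by the unrelaxed fraction phi."""
    return G_N0 * phi ** (1 + alpha)

def branch_hop_diffusivity(a, tau_arm):
    """Branch-point hopping: effective diffusivity D ~ p^2 a^2 / (2 tau_arm)."""
    return p2 * a ** 2 / (2.0 * tau_arm)

print(diluted_modulus(np.linspace(0.2, 1.0, 5)))     # Pa
print(branch_hop_diffusivity(a=4e-9, tau_arm=1e-3))  # m^2/s
```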
Abstract:
The Water Framework Directive has caused a paradigm shift towards the integrated management of recreational water quality through the development of drainage-basin-wide programmes of measures. This has increased the need for a cost-effective diagnostic tool capable of accurately predicting riverine faecal indicator organism (FIO) concentrations. This paper outlines the application of models developed to fulfil this need, which represent the first transferable generic FIO models developed for the UK that incorporate direct measures of key FIO sources (namely human and livestock population data) as predictor variables. We apply a recently developed transfer methodology, which quantifies geometric mean concentrations of presumptive faecal coliforms and presumptive intestinal enterococci under base- and high-flow conditions during the summer bathing season in unmonitored UK watercourses, to predict FIO concentrations in the Humber river basin district. Because the FIO models incorporate explanatory variables that allow the effects of policy measures influencing livestock stocking rates to be assessed, we carry out an empirical analysis of the differential effects of seven land use management and policy instruments (fiscal constraint, production constraint, cost intervention, area intervention, demand-side constraint, input constraint, and micro-level land use management), all of which can be used to reduce riverine FIO concentrations. This research provides insights into FIO source apportionment and explores a selection of pollution remediation strategies and the spatial differentiation of land use policies that could be implemented to deliver river quality improvements. All of the policy tools we model reduce FIO concentrations in rivers, but our research suggests that the installation of streamside fencing in intensive milk-producing areas may be the single most effective land management strategy for reducing riverine microbial pollution.
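A hedged sketch of the general model form described above is given below: a log10-linear regression of geometric-mean FIO concentrations on catchment predictors, with a stocking-rate policy scenario applied through the livestock term. The predictor names and coefficients are invented placeholders, not the paper's fitted values.

```python
import numpy as np

# Schematic FIO model: log10(concentration) as a linear function of catchment
# predictors. Names and coefficients are invented placeholders.

coef = {
    "intercept": 1.2,
    "log10_human_pop": 0.45,
    "log10_cattle_density": 0.30,
    "pct_improved_pasture": 0.01,
}

def predict_log10_fio(log10_human_pop, log10_cattle_density, pct_improved_pasture):
    """Predicted log10 cfu/100 ml (geometric mean, high-flow conditions)."""
    return (coef["intercept"]
            + coef["log10_human_pop"] * log10_human_pop
            + coef["log10_cattle_density"] * log10_cattle_density
            + coef["pct_improved_pasture"] * pct_improved_pasture)

# Policy scenario: a production constraint cutting cattle density by 20%.
base = predict_log10_fio(4.0, 1.5, 40.0)
scen = predict_log10_fio(4.0, np.log10(0.8 * 10 ** 1.5), 40.0)
print(f"baseline {10 ** base:.0f} vs scenario {10 ** scen:.0f} cfu/100 ml")
```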
Abstract:
Bloom-forming and toxin-producing cyanobacteria remain a persistent nuisance across the world. Modelling cyanobacterial behaviour in freshwaters is an important tool for understanding their population dynamics and for predicting the location and timing of bloom events in lakes, reservoirs and rivers. A new deterministic mathematical model was developed that simulates the growth and movement of cyanobacterial blooms in river systems. The model focuses on the mathematical description of bloom formation, vertical migration and lateral transport of colonies within river environments, taking into account the major factors that affect cyanobacterial bloom formation in rivers, including light, nutrients and temperature. A parameter sensitivity analysis using a one-at-a-time approach was carried out, with two objectives: to identify the key parameters controlling the growth and movement patterns of cyanobacteria, and to provide a means for model validation. The results of the analysis suggested that the maximum growth rate and the day length period were the most significant parameters in determining population growth and colony depth, respectively.
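The one-at-a-time approach itself is simple to sketch: perturb each parameter in turn around a baseline and record the change in the model output. The Python example below applies it to a toy light-limited growth model standing in for the actual cyanobacteria model; the real model's equations are not reproduced and all parameter values are illustrative.

```python
# One-at-a-time (OAT) sensitivity sketch on a toy stand-in model.

def simulate_biomass(mu_max, day_length_h, days=30):
    """Toy daily logistic growth scaled by the fraction of daylight hours."""
    biomass = 0.1
    for _ in range(days):
        biomass += mu_max * (day_length_h / 24.0) * biomass * (1.0 - biomass)
    return biomass

baseline = {"mu_max": 1.2, "day_length_h": 14.0}
reference = simulate_biomass(**baseline)

for name, value in baseline.items():
    for pct in (-10, 10):                 # perturb one parameter at a time
        perturbed = dict(baseline, **{name: value * (1 + pct / 100)})
        out = simulate_biomass(**perturbed)
        change = 100 * (out - reference) / reference
        print(f"{name} {pct:+d}% -> biomass change {change:+.1f}%")
```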
Abstract:
We explore the potential for making statistical decadal predictions of sea surface temperatures (SSTs) in a perfect-model analysis, with a focus on the Atlantic basin. Various statistical methods (lagged correlations, linear inverse modelling and constructed analogue) are found to have significant skill in predicting the internal variability of Atlantic SSTs up to a decade ahead in control integrations of two different global climate models (GCMs), namely HadCM3 and HadGEM1. Statistical methods that consider non-local information tend to perform best, but the most successful method depends on the region considered, the GCM data used and the prediction lead time. The constructed analogue method, however, tends to have the highest skill at longer lead times. Importantly, the regions of greatest prediction skill can be very different from the regions identified as potentially predictable from variance-explained arguments. This finding suggests that significant local decadal variability is not necessarily a prerequisite for skillful decadal predictions, and that the statistical methods are capturing some of the dynamics of low-frequency SST evolution. In particular, using data from HadGEM1, significant skill at lead times of 6–10 years is found in the tropical North Atlantic, a region with relatively little decadal variability compared to interannual variability. This skill appears to come from reconstructing the SSTs in the far North Atlantic, suggesting that the more northern latitudes are optimal locations for SST observations to improve predictions. We additionally explore whether adding sub-surface temperature data improves these decadal statistical predictions and find that, again, this depends on the region, prediction lead time and GCM data used. Overall, we argue that the estimated prediction skill motivates the further development of statistical decadal predictions of SSTs as a benchmark for current and future GCM-based decadal climate predictions.
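Of the three methods, the constructed analogue is the easiest to sketch: the current anomaly field is expressed as a least-squares combination of archived past fields, and the same weights are then applied to those fields' observed futures. The Python example below does this on synthetic data; it is a schematic of the technique, not the paper's implementation.

```python
import numpy as np

# Schematic constructed-analogue forecast on synthetic data.

rng = np.random.default_rng(0)
n_years, n_grid, lead = 100, 50, 5
library = rng.standard_normal((n_years, n_grid))   # archived SST anomaly maps
current = rng.standard_normal(n_grid)              # present-day anomaly map

usable = library[: n_years - lead]                 # states with a known future
weights, *_ = np.linalg.lstsq(usable.T, current, rcond=None)

forecast = weights @ library[lead:]                # analogues' future states
print(forecast.shape)                              # predicted map at 5-yr lead
```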
Abstract:
The recent decline in the open magnetic flux of the Sun heralds the end of the Grand Solar Maximum (GSM) that has persisted throughout the space age, during which the largest-fluence Solar Energetic Particle (SEP) events have been rare and Galactic Cosmic Ray (GCR) fluxes have been relatively low. In the absence of a predictive model of the solar dynamo, here we make analogue forecasts by studying past variations of solar activity in order to evaluate how long-term change in space climate may influence the hazardous energetic particle environment of the Earth in the future. We predict the probable future variations in GCR flux, near-Earth interplanetary magnetic field (IMF), sunspot number, and the probability of large SEP events, all deduced from cosmogenic isotope abundance changes following 24 GSMs in a 9,300-year record.
Abstract:
The potential of visible-near-infrared spectra, obtained using a light backscatter sensor and analyzed with chemometrics, to predict curd moisture and whey fat content in a cheese vat was examined. A three-factor (renneting temperature, calcium chloride, cutting time) central composite design was carried out in triplicate. Spectra (300–1,100 nm) of the product in the cheese vat were captured during syneresis using a prototype light backscatter sensor. Stirring followed cutting of the gel, and samples of curd and whey were removed at 10 min intervals and analyzed for curd moisture and whey fat content. The spectral data were used to develop models for predicting curd moisture and whey fat content using partial least squares regression. Subjecting the spectral data set to jack-knifing improved the accuracy of the models. The whey fat models (R = 0.91, 0.95) and curd moisture models (R = 0.86, 0.89) provided good and approximate predictions, respectively. Visible-near-infrared spectroscopy was found to have potential for the prediction of important syneresis indices in stirred cheese vats.
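A hedged sketch of the chemometric step (partial least squares regression from spectra to a compositional index, assessed by cross-validation) is shown below using scikit-learn and synthetic data; the paper's jack-knife variable selection is not reproduced, and all sizes and values are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# PLS regression from synthetic "spectra" to a synthetic moisture proxy.
rng = np.random.default_rng(1)
n_samples, n_wavelengths = 60, 400
X = rng.standard_normal((n_samples, n_wavelengths))          # spectra
y = 2.0 * X[:, 120] + 0.3 * rng.standard_normal(n_samples)   # moisture proxy

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()           # 10-fold CV

r = np.corrcoef(y, y_cv)[0, 1]
rmsecv = float(np.sqrt(np.mean((y - y_cv) ** 2)))
print(f"R = {r:.2f}, RMSECV = {rmsecv:.2f}")
```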
Abstract:
The objective of this study was to investigate the potential application of mid-infrared spectroscopy for the determination of selected sensory attributes in a range of experimentally manufactured processed cheese samples. This study also evaluates mid-infrared spectroscopy against other recently proposed techniques for predicting sensory texture attributes. Processed cheeses (n = 32) of varying composition were manufactured on a pilot scale. After 2 and 4 wk of storage at 4 degrees C, mid-infrared spectra (640 to 4,000 cm^-1) were recorded and samples were scored on a scale of 0 to 100 for 9 attributes using descriptive sensory analysis. Models were developed by partial least squares regression using raw and pretreated spectra. The mouth-coating and mass-forming models were improved by using a reduced spectral range (930 to 1,767 cm^-1). The remaining attributes were most successfully modeled using a combined range (930 to 1,767 cm^-1 and 2,839 to 4,000 cm^-1). The root mean square errors of cross-validation for the models were 7.4 (firmness; range 65.3), 4.6 (rubbery; range 41.7), 7.1 (creamy; range 60.9), 5.1 (chewy; range 43.3), 5.2 (mouth-coating; range 37.4), 5.3 (fragmentable; range 51.0), 7.4 (melting; range 69.3), and 3.1 (mass-forming; range 23.6). These models had good practical utility. Model accuracy ranged from approximate quantitative predictions to excellent predictions (range error ratio = 9.6). In general, the models compared favorably with previously reported instrumental texture models and near-infrared models, although the creamy, chewy, and melting models were slightly weaker than the previously reported near-infrared models. We conclude that mid-infrared spectroscopy could be successfully used for the nondestructive and objective assessment of processed cheese sensory quality.
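The range error ratio quoted above is a simple accuracy measure: the range of the reference values divided by the cross-validation error. A short Python check using the firmness figures from the abstract:

```python
# Range error ratio (RER) = attribute range / RMSECV; higher means a more
# useful model. Using the firmness figures quoted in the abstract:
def range_error_ratio(attribute_range: float, rmsecv: float) -> float:
    return attribute_range / rmsecv

print(range_error_ratio(65.3, 7.4))  # firmness: ~8.8
```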
Abstract:
Pollination is one of the most important ecosystem services in agroecosystems and supports food production. Pollinators are potentially at risk from exposure to pesticides; the main route of exposure is direct contact with, and in some cases ingestion of, contaminated materials such as pollen, nectar, flowers and foliage. To date there are no suitable methods for predicting pesticide exposure for pollinators, so official procedures to assess pesticide risk are based on a Hazard Quotient. Here we develop a procedure to assess exposure and risk for pollinators based on the foraging behaviour of honeybees (Apis mellifera), using this species as an indicator representative of pollinating insects. The method was applied in 13 European field sites with different climatic, landscape and land use characteristics. The level of risk during the crop growing season was evaluated as a function of the active ingredients used and the application regime. Risk levels were primarily determined by the agronomic practices employed (i.e. crop type, pest control method, pesticide use), and there was a clear temporal partitioning of risk. Generally, risk was higher in sites cultivated with permanent crops, such as vineyards and olives, than in annual crops, such as cereals and oilseed rape. The greatest level of risk is generally found at the beginning of the growing season for annual crops and later, in June–July, for permanent crops.
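For reference, the conventional screening index mentioned above is straightforward to compute; the sketch below uses the common formulation (field application rate divided by contact LD50) with illustrative numbers, not values from the study.

```python
# Conventional bee-risk screening index: Hazard Quotient (HQ) =
# field application rate (g a.i./ha) / contact LD50 (ug a.i./bee).
# The example numbers are illustrative, not values from the study.

def hazard_quotient(application_rate_g_per_ha: float,
                    ld50_ug_per_bee: float) -> float:
    """HQ values above ~50 commonly trigger higher-tier risk assessment."""
    return application_rate_g_per_ha / ld50_ug_per_bee

print(hazard_quotient(application_rate_g_per_ha=75.0, ld50_ug_per_bee=0.02))
```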
Abstract:
Response to dietary fat manipulation is highly heterogeneous, yet generic population-based recommendations aimed at reducing the burden of CVD are given. The APOE epsilon genotype has been proposed as an important determinant of this response. The present study reports on the dietary strategy employed in the SATgenɛ (SATurated fat and gene APOE) study to assess the impact of altered fat content and composition on the blood lipid profile according to APOE genotype. A flexible dietary exchange model was developed to implement three isoenergetic diets: a low-fat (LF) diet (target composition: 24 % of energy (%E) as fat, 8 %E SFA and 59 %E carbohydrate), a high-saturated-fat (HSF) diet (38 %E fat, 18 %E SFA and 45 %E carbohydrate) and a HSF-DHA diet (the HSF diet with 3 g DHA/d). Free-living participants (n 88; n 44 E3/E3 and n 44 E3/E4) followed the diets in a sequential design for 8 weeks each, using commercially available spreads, oils and snacks with specific fatty acid profiles. Dietary compositional targets were broadly met, with significantly higher total fat (42·8 %E and 41·0 %E v. 25·1 %E, P ≤ 0·0011) and SFA (19·3 %E and 18·6 %E v. 8·33 %E, P ≤ 0·0011) intakes during the HSF and HSF-DHA diets compared with the LF diet, in addition to significantly higher DHA intake during the HSF-DHA diet (P ≤ 0·0011). Plasma phospholipid fatty acid analysis revealed a 2-fold increase in the proportion of DHA after consumption of the HSF-DHA diet for 8 weeks, which was independent of APOE genotype. In summary, the dietary strategy was successfully implemented in a free-living population, resulting in well-tolerated diets that broadly met the dietary targets set.
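As a worked example of the percent-of-energy (%E) targets above, the Python snippet below converts the HSF diet targets into grams per day using the standard Atwater factors (9 kcal/g fat, 4 kcal/g carbohydrate); the 2,000 kcal/day intake is an assumed example, not a study value.

```python
# Convert %E targets to grams/day via standard Atwater energy factors.
KCAL_PER_G = {"fat": 9.0, "SFA": 9.0, "carbohydrate": 4.0}
total_kcal = 2000.0  # assumed example intake

# Grams per day needed to hit the high-saturated-fat (HSF) diet targets:
for nutrient, target_pct in [("fat", 38.0), ("SFA", 18.0), ("carbohydrate", 45.0)]:
    grams = target_pct / 100.0 * total_kcal / KCAL_PER_G[nutrient]
    print(f"{nutrient}: {grams:.0f} g/day = {target_pct:.0f} %E")
```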
Abstract:
Recent research shows that, because they rely on separate goals, cognitions about not performing a behaviour are not simple opposites of cognitions about performing the same behaviour. Using this perspective, two studies (N = 758 and N = 104) examined the psycho-social determinants of reduction in resource consumption. Results showed that goals associated with reducing versus not reducing resource consumption were not simple opposites (Study 1). Additionally, the discriminant validity of the Theory of Planned Behaviour constructs associated with reducing versus not reducing resource consumption was demonstrated (Studies 1 and 2). Moreover, results revealed the incremental validity of both Intentions (to reduce and to not reduce resource consumption) for predicting a series of behaviours (Studies 1 and 2). Finally, results indicated that the importance of ecological dimensions mediated the effect of both Intentions on a mock TV choice, and that the importance of non-ecological dimensions mediated the effect of the Intention not to reduce on the same choice. The discussion is organized around the consequences, at both theoretical and applied levels, of considering separate motivational systems for reducing and not reducing resource consumption.
Abstract:
A new electronic software distribution (ESD) life cycle analysis (LCA) methodology and model structure were constructed to calculate energy consumption and greenhouse gas (GHG) emissions. To counteract the use of high-level, top-down modeling efforts and to increase result accuracy, the model focuses on device details and data routes. To compare ESD with a relevant physical distribution alternative, physical model boundaries and variables were described. The methodology was compiled from the analysis and operational data of a major online store which provides both ESD and physical distribution options. The ESD method included the calculation of the power consumption of data center server and networking devices. An in-depth method to calculate server efficiency and utilization was also included, to account for virtualization and server efficiency features. Internet transfer power consumption was analyzed taking into account the number of data hops and the networking devices used. The power consumed by online browsing and downloading was also factored into the model. The embedded CO2e of server and networking devices was apportioned to each ESD process. Three U.K.-based ESD scenarios were analyzed using the model, which revealed potential CO2e savings of 83% when ESD was used instead of physical distribution. The results also highlighted the importance of the server efficiency and utilization methods.
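A hedged, back-of-the-envelope sketch of the per-download accounting described above (server, network-transfer and client energy summed per delivered file) is given below; every figure is an assumed placeholder, not a value from the study.

```python
# Per-download energy accounting sketch; all figures are assumed placeholders.

FILE_GB = 1.0

def server_energy_kwh(power_w=400.0, utilization=0.5, gb_per_hour=200.0):
    """Server energy attributed to one file, scaled by utilization."""
    return (power_w / 1000.0) / (utilization * gb_per_hour) * FILE_GB

def network_energy_kwh(kwh_per_gb_per_hop=0.002, hops=12):
    """Transfer energy as (energy per GB per device) x (number of data hops)."""
    return kwh_per_gb_per_hop * hops * FILE_GB

def client_energy_kwh(power_w=60.0, hours=0.25):
    """Energy used browsing and downloading on the client device."""
    return power_w / 1000.0 * hours

total = server_energy_kwh() + network_energy_kwh() + client_energy_kwh()
print(f"~{total:.3f} kWh per {FILE_GB:.0f} GB download (illustrative)")
```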
Abstract:
What does the saving–investment (SI) relation really measure, and how should it be measured? These are two of the most discussed issues triggered by the so-called Feldstein–Horioka puzzle. Based on panel data, we introduce a new variant of functional coefficient models that allows us to separate long-run from short- to medium-run parameter dependence. The new modeling framework is applied to uncover the determinants of the SI relation. Macroeconomic state variables such as openness, the age dependency ratio, and government current and consumption expenditures are found to affect the SI relation significantly in the long run.
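To illustrate the functional-coefficient idea on synthetic data: the saving-retention coefficient beta is allowed to depend on a state variable (here, openness) and is recovered by kernel-weighted local least squares. This Python sketch shows the model class only, not the authors' estimator.

```python
import numpy as np

# Functional-coefficient illustration: invest = beta(openness) * saving.
rng = np.random.default_rng(2)
n = 500
openness = rng.uniform(0.0, 1.0, n)          # state variable z
beta_true = 0.9 - 0.6 * openness             # SI relation weakens with openness
saving = rng.standard_normal(n)
invest = beta_true * saving + 0.2 * rng.standard_normal(n)

def beta_hat(z0: float, bandwidth: float = 0.1) -> float:
    """Local least-squares estimate of beta(z0) with a Gaussian kernel."""
    w = np.exp(-0.5 * ((openness - z0) / bandwidth) ** 2)
    return float(np.sum(w * saving * invest) / np.sum(w * saving ** 2))

for z0 in (0.1, 0.5, 0.9):
    print(f"openness={z0:.1f}: beta_hat={beta_hat(z0):.2f} (true {0.9 - 0.6 * z0:.2f})")
```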