946 results for logarithmic mean
Abstract:
This paper provides an empirical estimation of energy efficiency and other proximate factors that explain energy intensity in Australia for the period 1978-2009. The analysis decomposes the changes in energy intensity into energy efficiency, fuel mix and structural changes using sectoral and sub-sectoral data. The results show that the driving forces behind the decrease in energy intensity in Australia are the efficiency effect and the sectoral composition effect, with the former found to be more prominent than the latter. Moreover, the favourable impact of the composition effect has slowed consistently in recent years. A perfect positive association characterizes the relationship between energy intensity and carbon intensity in Australia. The decomposition results indicate that Australia needs to improve energy efficiency further to reduce energy intensity and carbon emissions. © 2012 Elsevier Ltd.
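For orientation, decompositions of this kind rest on the standard intensity identity (the notation below, with E for energy, Y for output and i indexing sectors, is ours rather than the paper's): aggregate energy intensity is a share-weighted sum of sectoral intensities,

\[
I \;=\; \frac{E}{Y} \;=\; \sum_i \frac{E_i}{Y_i}\,\frac{Y_i}{Y} \;=\; \sum_i I_i\, s_i ,
\]

so a change in I can be attributed to changes in the sectoral intensities I_i (the efficiency effect) and in the output shares s_i (the composition effect).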
Abstract:
This study analyzes the management of air pollutant substances in Chinese industrial sectors from 1998 to 2009. Decomposition analysis applying the logarithmic mean Divisia index is used to analyze changes in emissions of air pollutants with a focus on the following five factors: coal pollution intensity (CPI), end-of-pipe treatment (EOP), the energy mix (EM), productive efficiency change (EFF), and production scale changes (PSC). Three pollutants are the main focus of this study: sulfur dioxide (SO2), dust, and soot. The novelty of this paper is its focus on the impact of the elimination policy on air pollution management in China by type of industry, using the scale merit effect for pollution abatement technology change. First, the increase in SO2 emissions from Chinese industrial sectors caused by the increase in production scale is demonstrated. However, EOP equipment and improvements in energy efficiency have prevented an increase in SO2 emissions commensurate with the increase in production. Second, soot emissions were successfully reduced and controlled in all industries except the steel industry between 1998 and 2009, even though the production scale of these industries expanded. This reduction was achieved through improvements in EOP technology and in energy efficiency. Dust emissions decreased by nearly 65% between 1998 and 2009 in the Chinese industrial sectors. This successful reduction was achieved by implementing EOP technology and pollution prevention activities during production processes, especially in the cement industry. Finally, pollution prevention in the cement industry is shown to result from production technology development rather than scale merit. © 2013 Elsevier Ltd. All rights reserved.
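As a rough illustration of the LMDI machinery these decomposition studies share, here is a minimal Python sketch of an additive LMDI-I decomposition for a single aggregate; the function names and the two-factor toy data are ours for illustration, not drawn from any of the papers:

import math

def log_mean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    return a if math.isclose(a, b) else (a - b) / (math.log(a) - math.log(b))

def lmdi_additive(v0, vT, factors0, factorsT):
    """Additive LMDI-I: split the change vT - v0 of an aggregate
    v = x1 * x2 * ... * xn into one contribution per factor.
    factors0/factorsT are equal-length lists of factor values, with
    v0 == prod(factors0) and vT == prod(factorsT)."""
    w = log_mean(v0, vT)
    return [w * math.log(xT / x0) for x0, xT in zip(factors0, factorsT)]

# Toy example: emissions = intensity * activity.
intensity0, activity0 = 0.8, 100.0   # base year (hypothetical)
intensityT, activityT = 0.6, 140.0   # end year (hypothetical)
v0, vT = intensity0 * activity0, intensityT * activityT
effects = lmdi_additive(v0, vT, [intensity0, activity0], [intensityT, activityT])
print(effects, sum(effects), vT - v0)  # contributions sum exactly to the total change

In the sectoral settings of the papers above, the same weights are computed per sector i as L(v_i^0, v_i^T) and the contributions are summed over sectors; the logarithmic-mean weight is what makes the decomposition exact, with no residual term.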
Abstract:
This study analyzes toxic chemical substance management in three U.S. manufacturing sectors from 1991 to 2008. Decomposition analysis applying the logarithmic mean Divisia index is used to analyze changes in toxic chemical substance emissions by the following five factors: cleaner production, end-of-pipe treatment, transfer for further management, mixing of intermediate materials, and production scale. Based on our results, the chemical manufacturing sector reduced toxic chemical substance emissions mainly via end-of-pipe treatment. Meanwhile, transfer for further management contributed to the reduction of toxic chemical substance emissions in the metal fabrication industry. This occurred because the environmental business market expanded in the 1990s and the infrastructure for recycling metal and other wastes became more efficient. Cleaner production is the main contributor to toxic chemical reduction in the electrical product industry, which implies that this industry has been successful in developing a more environmentally friendly product design and production process.
Abstract:
Changes in the aggregate intensity of energy-related CO2 emissions, in total CO2 emissions and in per-capita CO2 emissions in Australia are decomposed using a Logarithmic Mean Divisia Index (LMDI) method for the period 1978-2010. Results indicate that improvements in energy efficiency played a dominant role in the measured 17% reduction in the aggregate intensity of CO2 emissions in Australia over the period. Structural changes in the economy, such as changes in the relative importance of the services sector vis-à-vis manufacturing, have also played a major role in achieving this outcome. Results also suggest that, without these mitigating factors, income per capita and population effects could well have produced an increase in total emissions more than 50% higher than actually occurred over the period. Perhaps most starkly, the results indicate that, without these mitigating factors, the growth in CO2 emissions per capita could have been over 150% higher than actually observed. Notwithstanding this, the study suggests that, for Australia to meet its Copenhagen commitment, the relative average per-annum effectiveness of these mitigating factors during 2010-2020 probably needs to be almost three times what it was in the 2005-2010 period, a very daunting challenge indeed for Australia's policymakers.
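Decompositions like this one typically start from a Kaya-style identity; in our notation (not taken from the paper), with C emissions, E energy use, Y output and P population,

\[
C \;=\; \frac{C}{E}\cdot\frac{E}{Y}\cdot\frac{Y}{P}\cdot P,
\qquad
\Delta C \;=\; \sum_{k} L\!\left(C^{T}, C^{0}\right)\,\ln\frac{x_k^{T}}{x_k^{0}},
\]

where the x_k run over the factors in the identity and L(a, b) = (a - b)/(ln a - ln b) is the logarithmic mean that gives the LMDI method its name; because the logarithms of the factor ratios sum to ln(C^T / C^0), the factor contributions add up exactly to the total change.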
Abstract:
Measurements of both the velocity and the temperature field have been made in the thermal layer that grows inside a turbulent boundary layer which is subjected to a small step change in surface heat flux. Upstream of the step, the wall heat flux is zero and the velocity boundary layer is nearly self-preserving. The thermal-layer measurements are discussed in the context of a self-preserving analysis for the temperature disturbance which grows underneath a thick external turbulent boundary layer. A logarithmic mean temperature profile is established downstream of the step but the budget for the mean-square temperature fluctuations shows that, in the inner region of the thermal layer, the production and dissipation of temperature fluctuations are not quite equal at the furthest downstream measurement station. The measurements for both the mean and the fluctuating temperature field indicate that the relaxation distance for the thermal layer is quite large, of the order of 1000θ0, where θ0 is the momentum thickness of the boundary layer at the step. Statistics of the thermal-layer interface and conditionally sampled measurements with respect to this interface are presented. Measurements of the temperature intermittency factor indicate that the interface is normally distributed with respect to its mean position. Near the step, the passive heat contaminant acts as an effective marker of the organized turbulence structure that has been observed in the wall region of a boundary layer. Accordingly, conditional averages of Reynolds stresses and heat fluxes measured in the heated part of the flow are considerably larger than the conventional averages when the temperature intermittency factor is small.
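For reference, the logarithmic mean temperature profile mentioned here is conventionally written in wall units (this is our summary of standard notation, not text from the paper) as

\[
\Theta^{+} \;\equiv\; \frac{T_w - \overline{T}}{T_\tau} \;=\; \frac{1}{\kappa_\theta}\,\ln y^{+} + C_\theta ,
\]

where T_w is the wall temperature, T_τ the friction temperature, κ_θ the Kármán-like constant for temperature and C_θ an additive constant.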
Abstract:
We revisit the boundedness of Hankel and Toeplitz operators acting on the Hardy space H^1 and give a new proof of the old result stating that the Hankel operator H_a is bounded if and only if a has bounded logarithmic mean oscillation. We also establish a necessary and sufficient condition for H_a to be compact on H^1. The Fredholm properties of Toeplitz operators on H^1 are studied for symbols in a Banach algebra similar to C + H^∞ under mild additional conditions caused by the differences in the boundedness of Toeplitz operators acting on H^1 and H^2.
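As a gloss (our formulation, stated up to normalization and not quoted from the paper): a symbol a on the unit circle is commonly said to have bounded logarithmic mean oscillation when its mean oscillation over short arcs decays at a logarithmic rate,

\[
\sup_{I}\;\log\frac{2\pi}{|I|}\;\cdot\;\frac{1}{|I|}\int_{I}\left|a - a_{I}\right|\,dm \;<\; \infty ,
\]

where the supremum runs over arcs I of the circle and a_I denotes the mean of a over I; this strengthens the usual BMO condition by the logarithmic weight.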
Abstract:
We present analytic results to show that the Schwinger-boson hole-fermion mean-field state exhibits non-Fermi liquid behavior due to spin-charge separation. The physical electron Green's function consists of three additive components: (a) a Fermi-liquid component associated with the Bose condensate; (b) a non-Fermi liquid component which has a logarithmic peak and a long tail, giving rise to a linear density of states that is symmetric about the Fermi level and a momentum distribution function with a logarithmic discontinuity at the Fermi surface; and (c) a second non-Fermi liquid component, associated with the thermal bosons, which leads to a constant density of states. It is shown that zero-point fluctuations associated with the spin degrees of freedom are responsible for the logarithmic instabilities and the restoration of particle-hole symmetry close to the Fermi surface.
Abstract:
Mean velocity profiles were measured in the 5” x 60” wind channel of the turbulence laboratory at the GALCIT, by the use of a hot-wire anemometer. The repeatability of results was established, and the accuracy of the instrumentation estimated. Scatter of the experimental results is little, if at all, beyond this limit, although some effects might be expected to arise from variations in atmospheric humidity, no account of this factor having been taken in the present work. Slight unsteadiness in flow conditions will also be responsible for some scatter.
Irregularities in the behaviour of a hot-wire in close proximity to a solid boundary at low speeds were observed, as others have already found.
It was checked that Kármán’s logarithmic law holds reasonably well over the main part of a fully developed turbulent flow, the equation u/u_τ = 6.0 + 6.25 log10(y u_τ/ν) being obtained; as has previously been the case, the experimental points do not quite form one straight line in the region where viscosity effects are small. The values of the constants in this law giving the best over-all agreement were determined and compared with those obtained by others.
The range of Reynolds numbers used (based on half-width of channel) was from 20,000 to 60,000.
Abstract:
The use of special units for logarithmic ratio quantities is reviewed. The neper is used with a natural logarithm (logarithm to the base e) to express the logarithm of the amplitude ratio of two pure sinusoidal signals, particularly in the context of linear systems where it is desired to represent the gain or loss in amplitude of a single-frequency signal between the input and output. The bel, and its more commonly used submultiple, the decibel, are used with a decadic logarithm (logarithm to the base 10) to measure the ratio of two power-like quantities, such as a mean square signal or a mean square sound pressure in acoustics. Thus two distinctly different quantities are involved. In this review we define the quantities first, without reference to the units, as is standard practice in any system of quantities and units. We show that two different definitions of the quantity power level, or logarithmic power ratio, are possible. We show that this leads to two different interpretations for the meaning and numerical values of the units bel and decibel. We review the question of which of these alternative definitions is actually used, or is used by implication, by workers in the field. Finally, we discuss the relative advantages of the alternative definitions.
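For concreteness, the two conventions discussed can be summarized as follows (our summary of the standard definitions, not text from the review):

\[
L_F = \ln\frac{F_1}{F_2}\ \mathrm{Np},
\qquad
L_P = 10\,\log_{10}\frac{P_1}{P_2}\ \mathrm{dB},
\qquad
1\ \mathrm{Np} = \frac{20}{\ln 10}\ \mathrm{dB} \approx 8.686\ \mathrm{dB},
\]

where F_1/F_2 is a ratio of amplitude-like (field) quantities and P_1/P_2 a ratio of power-like quantities; for a single-frequency signal P ∝ F², which is what links the two scales and underlies the conversion between neper and decibel.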
Abstract:
Phase-type distributions represent the time to absorption for a finite state Markov chain in continuous time, generalising the exponential distribution and providing a flexible and useful modelling tool. We present a new reversible jump Markov chain Monte Carlo scheme for performing a fully Bayesian analysis of the popular Coxian subclass of phase-type models; the convenient Coxian representation involves fewer parameters than a more general phase-type model. The key novelty of our approach is that we model covariate dependence in the mean whilst using the Coxian phase-type model as a very general residual distribution. Such incorporation of covariates into the model has not previously been attempted in the Bayesian literature. A further novelty is that we also propose a reversible jump scheme for investigating structural changes to the model brought about by the introduction of Erlang phases. Our approach addresses more questions of inference than previous Bayesian treatments of this model and is automatic in nature. We analyse an example dataset comprising lengths of hospital stays of a sample of patients collected from two Australian hospitals to produce a model for a patient's expected length of stay which incorporates the effects of several covariates. This leads to interesting conclusions about what contributes to length of hospital stay with implications for hospital planning. We compare our results with an alternative classical analysis of these data.
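As a small illustration of the phase-type construction described here, the density of the time to absorption is f(t) = α exp(S t) s with exit-rate vector s = -S·1; below is a minimal Python sketch for a 3-phase Coxian example (the rates and the initial vector are hypothetical values chosen for illustration, not taken from the paper or its hospital data):

import numpy as np
from scipy.linalg import expm

# Sub-generator S of a 3-phase Coxian distribution: from phase i the chain
# either moves on to phase i+1 (rate mu_i) or absorbs (rate lam_i).
lam = np.array([1.0, 0.5, 2.0])   # absorption rates (hypothetical)
mu = np.array([2.0, 1.0])         # onward transition rates (hypothetical)

S = np.diag(-(lam + np.append(mu, 0.0)))  # diagonal: minus total exit rates
S += np.diag(mu, k=1)                     # super-diagonal: phase i -> i+1
alpha = np.array([1.0, 0.0, 0.0])         # Coxian chains start in phase 1

s = -S @ np.ones(3)                       # exit (absorption) rate vector

def coxian_pdf(t):
    """Density of the absorption time: f(t) = alpha @ expm(S t) @ s."""
    return alpha @ expm(S * t) @ s

print(coxian_pdf(0.5))  # density of a hypothetical length of stay at t = 0.5

The Coxian restriction (start in phase 1, move only to the next phase or absorb) is what keeps the parameter count low relative to a general phase-type model, which is the representational convenience the abstract refers to.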