997 results for "Average comparisons"


Relevance: 20.00%

Abstract:

The aim of this thesis was to study the crops currently used for biofuel production from the following aspects: (1) what the average yield per hectare should be to reach an energy balance of at least zero; (2) what the shares of the primary and secondary energy flows are in agriculture, transport, processing and usage; and (3) the overall effects of biofuel crop cultivation, transport, processing and usage. The thesis concentrated on oilseed rape biodiesel and wheat bioethanol in the European Union, comparing them with competing biofuels such as corn- and sugarcane-based ethanol and the second-generation biofuels. The study was executed by comparing Life Cycle Assessment (LCA) studies from the EU region and analysing them thoroughly with a focus on their differences. The variables were: energy ratio, hectare yield (l/ha), impact on greenhouse gas emissions (particularly CO2), energy consumed in growing and processing one hectare of a particular crop into biofuel, distribution of energy in processing, and the effects of secondary energy flows such as wheat straw. Processing was found to be the most energy-consuming part of biofuel production; thus, if the raw materials remain the same, development will happen in processing. First-generation biodiesel requires esterification, which consumes approximately one third of the process energy. Around 75% of the energy consumed in manufacturing first-generation wheat-based ethanol is spent on steam and electricity generation. No breakthroughs are in sight in the agricultural sector that would achieve significantly higher energy ratios. It was found that even under ideal conditions the energy ratio of first-generation wheat-based ethanol will remain slightly under 2. For oilseed rape-based biodiesel the energy ratios are better, and energy consumption per hectare is lower than for wheat-based ethanol; both, however, are lower than for, e.g., sugarcane-based ethanol. The hectare yield of wheat-based ethanol is also significantly lower.

Biofuels are in a key position when considering the future of the world's transport sector. The uncertainties concerning biofuels are, however, numerous: the schedule of large-scale introduction to consumer markets, the technologies used, raw materials and their availability and, perhaps the biggest, the real production capacity in relation to fuel consumption. First-generation biofuels have not been the expected answer to environmental problems. The comparisons made show that sugarcane-based ethanol is the most prominent first-generation biofuel at the moment, from both the energy and the environmental point of view. Palm-oil-based biodiesel also looks promising, although it involves environmental concerns as well. From this point of view the biofuels in this study, wheat-based ethanol and oilseed rape-based biodiesel, are not very competitive options. On the other hand, the crops currently used for fuel production in different countries are selected based on several factors, not only on their relative general superiority. It is challenging to make long-term forecasts for the biofuel sector, but satisfying the world's current and near-future traffic fuel consumption with biofuels can only be regarded as impossible. This does not mean that biofuels should be rejected and their positive aspects ignored, but perhaps this reality helps to put them in perspective. To achieve true environmental benefits through the usage of biofuels, there must first be a significant drop in both traffic volumes and overall fuel consumption. Second-generation biofuels are coming, but serious questions about their availability and production capacities remain open. Therefore nothing can be taken for granted in this issue, except the need for development.
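The energy-ratio bookkeeping this thesis applies can be sketched in a few lines. All figures below are illustrative placeholders, not values from the thesis; the function names are our own.

```python
# Toy energy-balance bookkeeping for a biofuel crop, in the spirit of the
# LCA comparisons described above. All numbers are illustrative placeholders.

def energy_ratio(output_mj_per_ha, inputs_mj_per_ha):
    """Energy ratio = energy delivered by the fuel / energy invested.

    A ratio of 1.0 is the break-even point; the abstract notes that
    first-generation wheat ethanol stays slightly under 2 even under
    ideal conditions.
    """
    total_input = sum(inputs_mj_per_ha.values())
    return output_mj_per_ha / total_input

# Hypothetical primary-energy inputs per hectare (MJ/ha).
inputs = {
    "cultivation": 15_000,   # field operations, fertiliser, seed
    "transport":    2_000,
    "processing":  40_000,   # the dominant share, as the thesis found
}
ratio = energy_ratio(100_000, inputs)
share_processing = inputs["processing"] / sum(inputs.values())
```

With these placeholder inputs the ratio lands between 1 and 2, and processing dominates the input side, mirroring the qualitative findings above.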

Relevance: 20.00%

Abstract:

Statistically averaged lattices provide a common basis for understanding the diffraction properties of structures that deviate from regular crystal structures. An average lattice is defined, and examples are given in one and two dimensions along with their diffraction patterns. The absence of periodicity in reciprocal space corresponding to aperiodic structures is shown to arise from projected spacings that are irrationally related when the grid points are projected along the chosen coordinate axes. It is shown that the projected length scales, more than the sequence of arrangement, are the important factors determining the existence or absence of observable periodicity in the diffraction pattern.
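A minimal illustration of "two projected spacings with an irrational ratio" is the standard Fibonacci chain built from a Beatty sequence; this is a textbook construction in the same spirit as the abstract, not the specific lattices of the paper.

```python
import math

# One-dimensional aperiodic point set with exactly two spacings whose
# ratio is the (irrational) golden ratio. Standard Fibonacci-chain
# construction via a Beatty sequence; illustrative only.

PHI = (1 + math.sqrt(5)) / 2

def fibonacci_spacings(n):
    """Spacings of the Fibonacci chain: 'long' segments of length PHI and
    'short' segments of length 1, ordered by the Fibonacci word."""
    alpha = 1 / PHI
    word = [math.floor((k + 1) * alpha) - math.floor(k * alpha) for k in range(n)]
    return [PHI if w == 1 else 1.0 for w in word]

spacings = fibonacci_spacings(1000)
distinct = sorted(set(spacings))            # exactly two projected spacings
ratio = spacings.count(PHI) / spacings.count(1.0)   # relative frequency -> PHI
```

The two spacing values never repeat periodically, yet their relative frequencies converge to the golden ratio, which is what controls the diffraction pattern rather than the arrangement order.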

Relevance: 20.00%

Abstract:

We report Brownian dynamics simulation results on the translational and bond-angle-orientational correlations for charged colloidal binary suspensions as the interparticle interactions are increased to form a crystalline (for a volume fraction phi = 0.2) or a glassy (phi = 0.3) state. The translational order is quantified in terms of the two- and four-point density autocorrelation functions, whose comparison shows that there is no growing correlation length near the glass transition. The nearest-neighbor orientational order is determined in terms of the quadratic rotational invariant Q_l and the bond-orientational correlation functions g_l(t). The l dependence of Q_l indicates that icosahedral (l = 6) order predominates at the cost of cubic order (l = 4) near the glass as well as the crystal transition. The density and orientational correlation functions for a supercooled liquid freezing towards a glass fit well to the stretched-exponential form exp[-(t/tau)^beta]. The average relaxation times extracted from the fitted stretched-exponential functions as a function of effective temperature T* obey the Arrhenius law for liquids freezing to a crystal, whereas they obey the Vogel-Tammann-Fulcher law exp[A T0*/(T* - T0*)] for supercooled liquids tending towards a glassy state. The value of the parameter A suggests that the colloidal suspensions are "fragile" glass formers like organic and molecular liquids.
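The two relaxation laws the abstract compares can be written down directly; the parameter values below are illustrative, not fitted values from the simulations.

```python
import math

# The two relaxation laws compared above, with illustrative parameters.

def stretched_exp(t, tau, beta):
    """Kohlrausch relaxation exp[-(t/tau)^beta]; beta = 1 is pure exponential."""
    return math.exp(-((t / tau) ** beta))

def vtf_relaxation_time(T, A, T0, tau0=1.0):
    """Vogel-Tammann-Fulcher law tau(T) = tau0 * exp[A*T0 / (T - T0)].
    The relaxation time diverges as T -> T0, the hallmark of glass formers."""
    return tau0 * math.exp(A * T0 / (T - T0))

# Stretching (beta < 1) slows the long-time decay relative to a simple
# exponential with the same tau.
slow = stretched_exp(10.0, tau=1.0, beta=0.5)
fast = stretched_exp(10.0, tau=1.0, beta=1.0)

# VTF relaxation times grow far faster on cooling than an Arrhenius law.
tau_warm = vtf_relaxation_time(2.0, A=1.0, T0=1.0)
tau_cold = vtf_relaxation_time(1.1, A=1.0, T0=1.0)
```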

Relevance: 20.00%

Abstract:

The statistical thermodynamics of adsorption in caged zeolites is developed by treating the zeolite as an ensemble of M identical cages or subsystems. Within each cage, adsorption is assumed to occur onto a lattice of n identical sites. Expressions for the average occupancy per cage are obtained by minimizing the Helmholtz free energy in the canonical ensemble, subject to the constraints of constant M and constant number of adsorbates N. Adsorbate-adsorbate interactions in the Bragg-Williams or mean-field approximation are treated in two ways: the local mean-field approximation (LMFA) is based on the local cage occupancy, while the global mean-field approximation (GMFA) is based on the average coverage of the ensemble. The GMFA is shown to be equivalent in formulation to treating the zeolite as a collection of interacting single-site subsystems. In contrast, the treatment in the LMFA retains the description of the zeolite as an ensemble of identical cages, whose thermodynamic properties are conveniently derived in the grand canonical ensemble. For a z-coordinated lattice within the zeolite cage, with epsilon_aa as the adsorbate-adsorbate interaction parameter, comparisons for different values of epsilon_aa* = epsilon_aa z / 2kT and of the number of sites per cage, n, illustrate that for -1 0. We compare the isotherms predicted with the LMFA with previous GMFA predictions [K. G. Ayappa, C. R. Kamala, and T. A. Abinandanan, J. Chem. Phys. 110, 8714 (1999)], which incorporate both the site-volume reduction and a coverage-dependent epsilon_aa, for xenon and methane in zeolite NaA. In all cases the predicted isotherms are very similar, with the exception of a small step-like feature present in the LMFA for xenon at higher coverages. (C) 1999 American Institute of Physics. [S0021-9606(99)70333-8]
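The GMFA idea, that each site feels the average coverage rather than its local neighbours, can be sketched with a generic Bragg-Williams lattice-gas isotherm solved by fixed-point iteration. This is a textbook sketch with our own sign convention and parameter names, not the paper's formulation.

```python
import math

# Generic Bragg-Williams (mean-field) lattice-gas isotherm, solved
# self-consistently. Sign convention here: positive eps_star favours
# adsorption (attractive adsorbate-adsorbate interaction). Illustrative
# sketch of the GMFA idea only, not the paper's cage-ensemble model.

def mean_field_coverage(mu, eps_star, n_iter=500):
    """Solve theta = 1 / (1 + exp(-(mu + 2*eps_star*theta))) by iteration.

    mu       : dimensionless chemical potential (beta * mu)
    eps_star : dimensionless interaction strength acting through the
               *average* coverage theta, as in a global mean field
    """
    theta = 0.5
    for _ in range(n_iter):
        theta = 1.0 / (1.0 + math.exp(-(mu + 2.0 * eps_star * theta)))
    return theta

# Non-interacting limit reduces to the Langmuir isotherm (theta = 0.5 at mu = 0).
langmuir = mean_field_coverage(0.0, eps_star=0.0)
# Attraction feeds back through the mean field and raises the coverage.
attract = mean_field_coverage(0.0, eps_star=1.0)
```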

Relevance: 20.00%

Abstract:

In this paper, a new parallel algorithm for nonlinear transient dynamic analysis of large structures is presented. The unconditionally stable Newmark-beta method (constant-average-acceleration technique) is employed for time integration. The proposed parallel algorithm is devised within the broad framework of domain decomposition techniques; however, unlike most existing parallel algorithms for structural dynamic applications, which are derived using non-overlapped domains, the proposed algorithm uses overlapped domains. The parallel overlapped domain decomposition algorithm is formulated by splitting the mass, damping and stiffness matrices arising out of the finite element discretisation of a given structure. A predictor-corrector scheme is formulated for iteratively improving the solution in each step. A computer program based on the proposed algorithm has been developed and implemented with the Message Passing Interface as the software development environment, and the PARAM-10000 MIMD parallel computer has been used to evaluate its performance. Numerical experiments have been conducted to validate the algorithm as well as to evaluate its performance, with comparisons made against conventional non-overlapped domain decomposition algorithms. The numerical studies indicate that the proposed algorithm is superior in performance to the conventional domain decomposition algorithms. (C) 2003 Elsevier Ltd. All rights reserved.
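The constant-average-acceleration Newmark scheme itself is standard and compact; here is a single-degree-of-freedom sketch with beta = 1/4, gamma = 1/2, omitting all of the paper's domain-decomposition and parallelisation machinery.

```python
import math

# Newmark-beta time integration for a single-DOF linear system
# m*u'' + c*u' + k*u = f(t), with the constant-average-acceleration
# parameters beta = 1/4, gamma = 1/2 (unconditionally stable).

def newmark_sdof(m, c, k, f, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Return the displacement history [u_0, u_1, ..., u_n]."""
    u, v = u0, v0
    a = (f(0.0) - c * v - k * u) / m           # initial acceleration
    history = [u]
    # Effective stiffness is constant for a linear system.
    k_eff = k + gamma / (beta * dt) * c + 1.0 / (beta * dt**2) * m
    for i in range(1, n_steps + 1):
        t = i * dt
        rhs = (f(t)
               + m * (u / (beta * dt**2) + v / (beta * dt)
                      + (1 / (2 * beta) - 1) * a)
               + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        u_new = rhs / k_eff
        a_new = ((u_new - u) / (beta * dt**2) - v / (beta * dt)
                 - (1 / (2 * beta) - 1) * a)
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        history.append(u)
    return history

# Undamped oscillator with natural period 1: after one period the
# displacement returns (almost exactly) to its initial value, and the
# average-acceleration scheme introduces no amplitude decay.
history = newmark_sdof(m=1.0, c=0.0, k=(2 * math.pi) ** 2,
                       f=lambda t: 0.0, u0=1.0, v0=0.0,
                       dt=0.01, n_steps=100)
```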

Relevance: 20.00%

Abstract:

Owing to developments in semiconductor technology, fault tolerance is important not only for safety-critical systems but also for general-purpose (non-safety-critical) systems. For general-purpose systems, however, instead of guaranteeing that deadlines are always met, it is important to minimize the average execution time (AET) while ensuring fault tolerance. For a given job and a soft (transient) error probability, we define mathematical formulas for the AET that include the bus communication overhead for both voting (active replication) and rollback-recovery with checkpointing (RRC). Further, for a given multi-processor system-on-chip (MPSoC), we define integer linear programming (ILP) models that minimize the AET, including bus communication overhead, when: (1) selecting the number of checkpoints when using RRC; (2) finding the number of processors and the job-to-processor assignment when using voting; and (3) defining the fault-tolerance scheme (voting or RRC) per job and its usage for each job. Experiments demonstrate significant savings in AET.
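The trade-off behind item (1), where more checkpoints mean more overhead but less re-execution after an error, can be sketched with a textbook-style expected-time model. This is a simplified stand-in, not the paper's ILP formulation, and the numbers are illustrative.

```python
import math

# Back-of-the-envelope AET model for rollback-recovery with
# checkpointing (RRC). T: fault-free job time, C: per-checkpoint
# overhead (including bus traffic), lam: soft-error rate per time unit.
# Simplified stand-in for the paper's ILP model; values are illustrative.

def aet_rrc(T, C, lam, n_checkpoints):
    """Expected time: each of n intervals is retried until error-free."""
    seg = T / n_checkpoints
    p_fail = 1.0 - math.exp(-lam * seg)   # chance an interval hits an error
    # Geometric number of attempts per interval: mean 1 / (1 - p_fail).
    return n_checkpoints * (seg + C) / (1.0 - p_fail)

def best_checkpoint_count(T, C, lam, n_max=50):
    """Checkpoint count minimizing AET, found by direct search."""
    return min(range(1, n_max + 1), key=lambda n: aet_rrc(T, C, lam, n))

n_opt = best_checkpoint_count(T=100.0, C=1.0, lam=0.05)
```

Too few checkpoints and the re-execution term explodes; too many and the overhead term dominates, so the AET curve has an interior minimum, which is what the ILP selects.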

Relevance: 20.00%

Abstract:

Pricing is an effective tool to control congestion and achieve quality of service (QoS) provisioning for multiple differentiated levels of service. In this paper, we consider the problem of pricing for congestion control in the case of a network of nodes under a single service class and multiple queues, and present a multi-layered pricing scheme. We propose an algorithm for finding the optimal state dependent price levels for individual queues, at each node. The pricing policy used depends on a weighted average queue length at each node. This helps in reducing frequent price variations and is in the spirit of the random early detection (RED) mechanism used in TCP/IP networks. We observe in our numerical results a considerable improvement in performance using our scheme over that of a recently proposed related scheme in terms of both throughput and delay performance. In particular, our approach exhibits a throughput improvement in the range of 34 to 69 percent in all cases studied (over all routes) over the above scheme.
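The RED-style smoothing the pricing policy is keyed to is a one-line exponentially weighted moving average; the weight, thresholds and price levels below are illustrative placeholders, not the paper's optimal values.

```python
# RED-style weighted (exponentially averaged) queue length, the quantity
# the state-dependent pricing policy above depends on. The weight,
# thresholds and price levels are illustrative placeholders.

def update_avg(avg, q, w=0.05):
    """avg <- (1 - w) * avg + w * q : smooths out transient bursts."""
    return (1.0 - w) * avg + w * q

def price_for(avg, thresholds=(5.0, 15.0), prices=(1.0, 2.0, 4.0)):
    """Piecewise-constant price level chosen by average occupancy."""
    for th, pr in zip(thresholds, prices):
        if avg < th:
            return pr
    return prices[-1]

avg = 0.0
for q in [0, 2, 30, 1, 2, 3]:   # a single burst does not flip the price
    avg = update_avg(avg, q)
```

Because the average reacts slowly, a short burst (the instantaneous queue of 30 above) does not trigger a price change, which is exactly how frequent price variations are avoided.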

Relevance: 20.00%

Abstract:

We present two efficient discrete parameter simulation optimization (DPSO) algorithms for the long-run average cost objective. One of these algorithms uses the smoothed functional approximation (SFA) procedure, while the other is based on simultaneous perturbation stochastic approximation (SPSA). The use of SFA for DPSO had not been proposed previously in the literature. Further, both algorithms adopt an interesting technique of random projections that we present here for the first time. We give a proof of convergence of our algorithms. Next, we present detailed numerical experiments on a problem of admission control with dependent service times. We consider two different settings involving parameter sets of moderate and large size, respectively. In the first setting, we also show performance comparisons with the well-studied optimal computing budget allocation (OCBA) algorithm and with the equal-allocation algorithm. Note to Practitioners: Even though SPSA and SFA were devised in the literature for continuous optimization problems, our results indicate that they can be powerful techniques even when adapted to discrete optimization settings. OCBA is widely recognized as one of the most powerful methods for discrete optimization when the parameter sets are of small or moderate size. In a setting involving a parameter set of size 100, we observe that when the computing budget is small, SPSA and OCBA show similar performance and are better than SFA; however, as the computing budget is increased, SPSA and SFA show better performance than OCBA. Both our algorithms also show good performance when the parameter set has a size of 10^8. SFA shows the best overall performance. Unlike most other DPSO algorithms in the literature, an advantage of our algorithms is that they are easily implementable regardless of the size of the parameter set, and they show good performance in both scenarios.
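A minimal SPSA loop over a discrete grid shows the mechanics: perturb all coordinates with random +/-1 directions, estimate the gradient from two noisy evaluations, and project back onto the grid. The rounding projection and the stand-in cost function below are our simplifications; the paper's randomized projection scheme and admission-control model are more involved.

```python
import random

# Minimal SPSA sketch for a discrete parameter set. The cost function is
# a noisy stand-in for a long-run average cost, and plain rounding stands
# in for the paper's randomized projection scheme.

def cost(x, y):
    """Noisy evaluation of a stand-in cost with minimum at (7, -3)."""
    return (x - 7) ** 2 + (y + 3) ** 2 + random.gauss(0.0, 0.1)

def spsa_discrete(n_iters=2000, seed=0):
    random.seed(seed)
    x, y = 0.0, 0.0
    for k in range(1, n_iters + 1):
        a_k = 2.0 / k                      # decaying gain sequence
        c_k = 1.0                          # perturbation at the grid scale
        dx = random.choice((-1, 1))        # simultaneous +/-1 perturbation
        dy = random.choice((-1, 1))
        # Two evaluations estimate the gradient along the random direction.
        g = (cost(x + c_k * dx, y + c_k * dy)
             - cost(x - c_k * dx, y - c_k * dy)) / (2.0 * c_k)
        x -= a_k * g * dx
        y -= a_k * g * dy
    # Project the final continuous iterate onto the discrete grid.
    return round(x), round(y)

x_star, y_star = spsa_discrete()
```

Note that only two cost evaluations are needed per iteration regardless of the parameter dimension, which is what makes SPSA attractive for large discrete sets.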

Relevance: 20.00%

Abstract:

Large instruction windows and issue queues are key to exploiting greater instruction-level parallelism in out-of-order superscalar processors. However, the cycle time and energy consumption of conventional large monolithic issue queues are high. Previous efforts to reduce cycle time segment the issue queue and pipeline wakeup; unfortunately, this results in significant IPC loss. Other proposals, which address energy efficiency by avoiding only the unnecessary tag comparisons, do not reduce broadcasts, and these schemes also increase the issue latency. To address both these issues comprehensively, we propose the Scalable Lowpower Issue Queue (SLIQ). SLIQ augments a pipelined issue queue with direct indexing to mitigate the problem of delayed wakeups while reducing the cycle time. The SLIQ design also naturally leads to significant energy savings by reducing both the number of tag broadcasts and the comparisons required. A 2-segment SLIQ incurs an average IPC loss of 0.2% over the entire SPEC CPU2000 suite while achieving a 25.2% reduction in issue latency, compared to a monolithic 128-entry issue queue for an 8-wide superscalar processor. An 8-segment SLIQ improves scalability by reducing the issue latency by 38.3% while incurring an IPC loss of only 2.3%. Further, the 8-segment SLIQ significantly reduces the energy consumption and the energy-delay product, by 48.3% and 67.4% respectively on average.

Relevance: 20.00%

Abstract:

The problem of developing L2-stability criteria for feedback systems with a single time-varying gain, in which average variation constraints are imposed on the gain, is treated. A unified approach is presented that facilitates the development of such average-variation criteria for both linear and nonlinear systems. The stability criteria derived here are shown to be more general than existing results.

Relevance: 20.00%

Abstract:

A transient flame simulation tool based on the unsteady Reynolds-averaged Navier-Stokes (RANS) equations is characterized for stationary and nonstationary flame applications, with the motivation of performing computationally affordable flame stability studies. Specifically, the KIVA-3V code is used, incorporating a recently proposed modified eddy dissipation concept for simulating turbulence-chemistry interaction along with a model for radiation loss. Detailed comparisons of velocities, turbulent kinetic energies, temperature, and species are made with the experimental data for the turbulent, non-premixed DLR_A CH4/H2/N2 jet flame. The comparison shows that the model predicts the flame structure very well. The effect of some of the modeling assumptions is assessed, and strategies for modeling a stationary diffusion flame are recommended. The unsteady flame simulation capabilities of the numerical model are assessed by simulating an acoustically excited, experimental, oscillatory H2-air diffusion flame. Comparisons are made with the oscillatory velocity field and OH plots, and the numerical code is observed to predict the transient flame structure well.

Relevance: 20.00%

Abstract:

The throughput-optimal discrete-rate adaptation policy, when nodes are subject to constraints on the average power and bit error rate, is governed by a power control parameter, for which a closed-form characterization has remained an open problem. The parameter is essential in determining the rate adaptation thresholds and the transmit rate and power at any time, and ensuring adherence to the power constraint. We derive novel insightful bounds and approximations that characterize the power control parameter and the throughput in closed-form. The results are comprehensive as they apply to the general class of Nakagami-m (m >= 1) fading channels, which includes Rayleigh fading, uncoded and coded modulation, and single and multi-node systems with selection. The results are appealing as they are provably tight in the asymptotic large average power regime, and are designed and verified to be accurate even for smaller average powers.
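The mechanics of threshold-based discrete-rate adaptation can be sketched with the common uncoded-MQAM BER approximation BER ~ 0.2 * exp(-1.5 * g / (M - 1)); this is a textbook construction to illustrate rate-adaptation thresholds, not the paper's power-control-parameter characterization. Rates, the BER target, and the approximation constants are illustrative assumptions.

```python
import math

# Discrete-rate adaptation thresholds from a target BER, using the
# common uncoded-MQAM approximation BER ~ 0.2 * exp(-1.5*g/(M-1)).
# Illustrative textbook sketch, not the paper's characterization.

def snr_threshold(bits, target_ber):
    """Minimum SNR g at which 2^bits-QAM meets the BER target."""
    M = 2 ** bits
    return -(M - 1) * math.log(target_ber / 0.2) / 1.5

def select_rate(snr, rates=(2, 4, 6), target_ber=1e-3):
    """Pick the highest rate (bits/symbol) whose threshold the SNR clears;
    0 means no transmission in this fading state."""
    best = 0
    for b in rates:
        if snr >= snr_threshold(b, target_ber):
            best = b
    return best
```

The thresholds partition the fading range into rate regions; the power-control parameter the abstract characterizes is what shifts these thresholds so that the average power constraint is met.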

Relevance: 20.00%

Abstract:

We demonstrate that the phase fluctuation introduced by the oscillation of scattering centers in the focal volume of an ultrasound transducer in an optical tomography experiment has a nonzero mean. The conditions to be met are: (i) the frequency of the ultrasound should be in the vicinity of the most dominant natural frequency of vibration of the ultrasound focal volume; (ii) the corresponding acoustic wavelength should be much larger than l_n*, a modified transport mean free path applicable to phase decorrelation; and (iii) the focal volume of the ultrasound transducer should not be larger than 4-5 times (l_n*)^3. We demonstrate through simulations that as the ratio of the ultrasound focal volume to (l_n*)^3 increases, the average of the phase fluctuation decreases, becoming zero when the focal volume exceeds around 4 (l_n*)^3; and through simulations and experiments that as the acoustic frequency increases from 100 Hz to 1 MHz, the average phase decreases to zero. Through experiments on chicken breast we show that the average phase increases from around 110 degrees to 130 degrees when the background medium is changed from water to glycerol, indicating that the average of the phase fluctuation can be used to sense changes in refractive index deep within tissue.