58 results for Power distribution


Relevance:

60.00%

Publisher:

Abstract:

Timber is one of the most widely used structural materials all over the world. Round timbers can be seen as structural components in historical buildings, jetties and short-span bridges, and also as piles for foundations and poles for electrical power distribution. Guided waves have great potential for evaluating the current condition of these cylindrical timber structures. However, the difficulties associated with guided wave propagation in timber materials include the orthotropic behaviour of wood, moisture content, temperature, grain direction, etc. In addition, the effect of fully or partially surrounding media, such as soil or water, causes attenuation of the generated stress wave. In order to investigate the effects of these parameters on guided wave propagation, extensive numerical simulation is required to conduct parametric studies. Moreover, due to the presence of multiple modes in guided wave propagation, dispersion curves are of great importance. Even though the conventional finite element method (FEM) can determine dispersion curves along with wave propagation in the time domain, it is computationally very expensive. Furthermore, incorporating orthotropic behaviour and surrounding media when modelling a thick cylindrical waveguide (large-diameter cylindrical structures) makes conventional FEM inefficient for this purpose. In contrast, the spectral finite element method (SFEM) is a semi-analytical method for modelling guided wave propagation that does not need fine meshes compared with other methods, such as FEM or the finite difference method (FDM). Also, an even distribution of the mass and stiffness of the structure can be obtained with very few elements using SFEM. In this paper, the suitability of SFEM is investigated for modelling guided wave propagation through an orthotropic cylindrical waveguide in the presence of surrounding soil. Both the frequency domain analysis (dispersion curves) and the time domain reconstruction of a multi-mode generated input signal are presented under different loading locations. The dispersion curves obtained from SFEM are compared against analytical solutions to verify their accuracy. Lastly, different numerical issues in solving for the dispersion curves and time domain results using SFEM are also discussed.
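
A common analytical check for such dispersion results is the low-frequency limit of the fundamental longitudinal mode; for a simplified isotropic, traction-free rod (an assumption, since the paper treats an orthotropic waveguide with surrounding soil) the phase velocity tends to the bar velocity:

```latex
% Low-frequency limit of the fundamental longitudinal L(0,1) mode in an
% isotropic rod (simplifying assumption, not the orthotropic timber case)
c_0 = \sqrt{\frac{E}{\rho}}
```

where E is the Young's modulus along the rod axis and ρ the density; SFEM-generated dispersion curves can be sanity-checked against this limit as the frequency approaches zero.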

Relevance:

40.00%

Publisher:

Abstract:

For multiple heterogeneous multicore server processors across clouds and data centers, the aggregated performance of the cloud of clouds can be optimized by load distribution and balancing. Energy efficiency is one of the most important issues for large-scale server systems in current and future data centers. Multicore processor technology provides new levels of performance and energy efficiency. The present paper aims to develop power- and performance-constrained load distribution methods for cloud computing in current and future large-scale data centers. In particular, we address the problem of optimal power allocation and load distribution for multiple heterogeneous multicore server processors across clouds and data centers. Our strategy is to formulate optimal power allocation and load distribution for multiple servers in a cloud of clouds as optimization problems, i.e., power-constrained performance optimization and performance-constrained power optimization. Our research problems in large-scale data centers are well-defined multivariable optimization problems, which explore the power-performance tradeoff by fixing one factor and minimizing the other, from the perspective of optimal load distribution. It is clear that such power and performance optimization is important for a cloud computing provider to efficiently utilize all the available resources. We model a multicore server processor as a queueing system with multiple servers. Our optimization problems are solved for two different models of core speed, where one model assumes that a core runs at zero speed when it is idle, and the other model assumes that a core runs at a constant speed. Our results in this paper provide new theoretical insights into power management and performance optimization in data centers.
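
As a rough illustration of the power-constrained performance optimization described above, the sketch below treats each server as an M/M/1 queue with speed s_i and assigned load λ_i, uses an assumed power model P_i = s_i^α, and minimizes the mean response time under a total power budget. The two-server setup, parameter values and the single-queue approximation are illustrative assumptions, not the paper's exact queueing formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative constants (assumptions, not taken from the paper)
alpha = 2.0          # dynamic-power exponent in the assumed model P_i = s_i**alpha
lam_total = 8.0      # total task arrival rate (tasks/s)
p_budget = 60.0      # total power budget (arbitrary units)

def mean_response_time(x):
    s, lam = x[:2], x[2:]            # server speeds and assigned loads
    if np.any(s <= lam):             # outside the stable region; return a large penalty
        return 1e6
    # Load-weighted M/M/1 mean response time: sum_i (lam_i/lam_total) * 1/(s_i - lam_i)
    return np.sum(lam / lam_total / (s - lam))

constraints = [
    {"type": "eq",   "fun": lambda x: x[2] + x[3] - lam_total},                 # serve all load
    {"type": "ineq", "fun": lambda x: p_budget - (x[0]**alpha + x[1]**alpha)},  # power cap
    {"type": "ineq", "fun": lambda x: x[:2] - x[2:] - 1e-3},                    # queue stability
]
x0 = np.array([5.0, 5.0, 4.0, 4.0])  # feasible starting point: speeds 5, loads 4
res = minimize(mean_response_time, x0, method="SLSQP",
               bounds=[(0.1, None)] * 4, constraints=constraints)
print("speeds:", res.x[:2], "loads:", res.x[2:], "mean response time:", res.fun)
```

Swapping the objective and the power constraint gives the dual, performance-constrained power optimization mentioned in the abstract.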

Relevance:

40.00%

Publisher:

Abstract:

Electrostatic precipitators (ESPs) are the most reliable and widely used industrial control devices for capturing fine particles to reduce exhaust emissions, with collection efficiencies of 99% or more. However, capturing hazardous submicron particles is still a problem, as it involves complex flow phenomena and ESP design limitations. In this study, the effect of baffles on the flow distribution inside an ESP is investigated computationally. Baffles are expected to increase the residence time of the flue gas, which helps collect more particles on the collector plates and hence increases the collection efficiency of an ESP. Besides, the placement of a baffle is likely to cause swirling of the flue gas, so that submicron particles move towards the collector plates under the combined centrifugal and electrostatic forces. Therefore, the effects of the position, shape and thickness of the baffles on collection efficiency, which are also important for ESP design, are reported in this study. The fluid flow distribution has been modelled using the computational fluid dynamics (CFD) software Fluent, and the results are presented and discussed. The results show that baffles have a significant influence on the fluid flow pattern and the efficiency of the ESP.

Relevance:

40.00%

Publisher:

Abstract:

This chapter presents an unbalanced multi-phase optimal power flow (UMOPF) based planning approach to determine the optimum capacities of multiple distributed generation (DG) units in a distribution network. An adaptive-weight particle swarm optimization algorithm is used to find the global optimum solution. To increase the efficiency of the proposed scheme, a co-simulation platform is developed. Since the proposed method is mainly based on cost optimization, variations in loads and uncertainties within DG units are also taken into account in the analysis. The IEEE 123-node distribution system, which is unbalanced and multi-phase in nature, is used as the test distribution network for the validation of the proposed scheme. The superiority of the proposed method is investigated by comparing the results obtained with those of a Genetic Algorithm based OPF method. This analysis also shows that DG capacity planning considering annual load and generation uncertainties outperforms the traditional, well-practised peak-load planning.
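
The adaptive-weight PSO mentioned above can be illustrated with the minimal sketch below, which uses a linearly decreasing inertia weight; the stand-in cost function and the capacity bounds are placeholders for illustration only, not the chapter's UMOPF objective or its co-simulation platform.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x):
    # Stand-in cost function (not the chapter's UMOPF objective): a smooth bowl
    # whose minimum plays the role of the optimal DG capacities.
    return np.sum((x - np.array([0.8, 1.2, 0.5]))**2, axis=-1)

n_particles, n_dims, iters = 30, 3, 200
lb, ub = 0.0, 2.0                        # assumed DG capacity bounds (MW)
w_max, w_min, c1, c2 = 0.9, 0.4, 2.0, 2.0

x = rng.uniform(lb, ub, (n_particles, n_dims))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), cost(x)
gbest = pbest[pbest_f.argmin()].copy()

for t in range(iters):
    w = w_max - (w_max - w_min) * t / iters       # adaptive (linearly decreasing) inertia weight
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lb, ub)                    # keep candidate capacities within bounds
    f = cost(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("optimal DG capacities (stand-in):", gbest)
```

In the chapter's setting, the cost evaluation would be replaced by a call to the unbalanced multi-phase power flow solver inside the co-simulation platform.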

Relevance:

30.00%

Publisher:

Abstract:

In this study, Australian brown coal fly ash particles were collected from a power station and analysed by scanning electron microscopy to obtain morphological information and the elemental composition of individual particles. The most common particles were found to be irregularly shaped particle aggregates. Other shapes included spherical particles with smooth surfaces or with some attachments, and fine crystal-shaped particles. The X-ray spectra of the individual fly ash particles revealed five groups of elemental composition: Si-rich particles, Ca-rich particles, Fe-rich particles, particles with an Mg-Ca matrix, and particles with an Si-Ca matrix. A particle size distribution analysis was conducted using a particle size analyser, and the mean particle size was found to be 21 μm. The sample was then separated into fine and coarse fractions using an aerodynamic classifier, and the elemental compositions of both fractions were determined by ICP-AES, with borate fusion and acid dissolution used for sample preparation. It is found that some environmentally sensitive elements, such as Zn, Pb, Ni, K and Cu, are enriched in the fine fly ash particles. Ca also has a much higher content in the fine particles, while Si and Mg have higher concentrations in the coarse particles.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an analytical model of fuel consumption (AMFC) to coordinate the driving power and manage the overall fuel consumption of an internal combustion engine vehicle. The model calculates the different loads applied to the vehicle, including road slope, road friction, wind drag, accessories, and mechanical losses. It also solves the combustion equation of the engine under different working conditions, including various fuel compositions, excess air ratios and air inlet temperatures. It then determines the contribution of each load to indicate the energy distribution and power flows of the vehicle. Unlike conventional models, in which the vehicle speed needs to be given as an input, the developed model can predict the vehicle speed and acceleration under different working conditions by allowing the speed to vary only within a predefined range. Furthermore, the model indicates ways to minimise the vehicle's fuel consumption under various driving conditions. The results show that the model has the potential to assist in vehicle energy management.
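
The load terms listed above (road slope, road friction, wind drag, accessories) can be illustrated with the standard road-load power balance sketched below; all parameter values and the driveline-efficiency term are illustrative assumptions rather than the paper's calibrated model.

```python
import math

# Illustrative parameter values (assumed, not from the paper)
m, g = 1500.0, 9.81              # vehicle mass (kg), gravitational acceleration (m/s^2)
c_rr = 0.012                     # rolling-resistance coefficient
rho, cd, area = 1.2, 0.32, 2.2   # air density (kg/m^3), drag coefficient, frontal area (m^2)
p_acc = 500.0                    # accessory load (W)
eta = 0.85                       # assumed driveline efficiency covering mechanical losses

def engine_power_demand(v, slope_rad=0.0, accel=0.0):
    """Engine power demand (W) at speed v (m/s) on a given slope with a given acceleration."""
    f_slope   = m * g * math.sin(slope_rad)            # road-slope load
    f_roll    = c_rr * m * g * math.cos(slope_rad)     # road-friction (rolling) load
    f_drag    = 0.5 * rho * cd * area * v**2           # wind-drag load
    f_inertia = m * accel                              # acceleration load
    return (f_slope + f_roll + f_drag + f_inertia) * v / eta + p_acc

# Example: 90 km/h (25 m/s) on a 2-degree grade while accelerating at 0.2 m/s^2
print(engine_power_demand(25.0, slope_rad=math.radians(2), accel=0.2) / 1000, "kW")
```

Each term in the sum corresponds to one of the loads the abstract lists, so printing them separately gives the energy distribution among loads at a given operating point.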

Relevance:

30.00%

Publisher:

Abstract:

Any attempt to model an economy requires foundational assumptions about the relations between prices, values and the distribution of wealth. These assumptions exert a profound influence over the results of any model. Unfortunately, there are few areas in economics as vexed as the theory of value. I argue in this paper that the fundamental problem with past theories of value is that it is simply not possible to model the determination of value, the formation of prices and the distribution of income in a real economy with analytic mathematical models. All such attempts leave out crucial processes or make unrealistic assumptions which significantly affect the results. There have been two primary approaches to the theory of value. The first, associated with classical economists such as Ricardo and Marx, comprised substance theories of value, which view value as a substance inherent in an object and conserved in exchange. For Marxists, the value of a commodity derives solely from the value of the labour power used to produce it, and therefore any profit is due to the exploitation of the workers. The labour theory of value has been discredited because of its assumption that labour was the only 'factor' that contributed to the creation of value, and because of its fundamentally circular argument. Neoclassical theorists argued that price was identical with value and was determined purely by the interaction of supply and demand; value, then, was completely subjective. Returns to labour (wages) and capital (profits) were determined solely by their marginal contribution to production, so that each factor received its just reward by definition. Problems with the neoclassical approach include its assumptions concerning representative agents, perfect competition, perfect and costless information and contract enforcement, complete markets for credit and risk, aggregate production functions and infinite, smooth substitution between factors, distribution according to marginal products, firms always operating on the production possibility frontier, firms' pricing decisions, the neglect of money and credit, and perfectly rational agents with infinite computational capacity. Two critical areas stand out. First, the underappreciated Sonnenschein-Mantel-Debreu results showed that the foundational assumptions of the Walrasian general-equilibrium model imply arbitrary excess demand functions and therefore arbitrary equilibrium price sets. Second, in real economies there is no equilibrium, only continuous change. Equilibrium is never reached because of constant changes in preferences and tastes; technological and organisational innovations; discoveries of new resources and new markets; inaccurate and evolving expectations of businesses, consumers, governments and speculators; changing demand for credit; the entry and exit of firms; the birth, learning and death of citizens; changes in laws and government policies; imperfect information; generalized increasing returns to scale; random acts of impulse; weather and climate events; changes in disease patterns, and so on. The problem is not the use of mathematical modelling, but the kind of mathematical modelling used. Agent-based models (ABMs), object-oriented programming and greatly increased computer power, however, are opening up a new frontier. Here a dynamic bargaining ABM is outlined as a basis for an alternative theory of value. A large but finite number of heterogeneous commodities and agents with differing degrees of market power are set in a spatial network. Returns to buyers and sellers are decided at each step in the value chain, and in each factor market, through the process of bargaining. Market power and its potential abuse against the poor and vulnerable are fundamental to how the bargaining dynamics play out. Ethics therefore lie at the very heart of economic analysis, the determination of prices and the distribution of wealth. The neoclassicals are right, then, that price is the enumeration of value at a particular time and place, but wrong to downplay the critical roles of bargaining, power and ethics in determining those same prices.
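
A deliberately minimal sketch of the kind of bargaining step such an ABM relies on is given below; the surplus-splitting rule weighted by "market power", and all numbers, are assumptions for illustration only, not the spatial-network model outlined in the paper.

```python
import random

random.seed(1)

class Agent:
    """A trader with a bargaining power and a reservation value (seller's cost or buyer's willingness to pay)."""
    def __init__(self, power, reservation):
        self.power = power
        self.reservation = reservation

def bargain(seller, buyer):
    """Return the agreed price, splitting the surplus in proportion to market power, or None if no surplus exists."""
    surplus = buyer.reservation - seller.reservation
    if surplus <= 0:
        return None
    seller_share = seller.power / (seller.power + buyer.power)
    return seller.reservation + seller_share * surplus

# One round of pairwise trades between randomly endowed sellers and buyers
sellers = [Agent(random.uniform(0.1, 1.0), random.uniform(5, 10)) for _ in range(5)]
buyers  = [Agent(random.uniform(0.1, 1.0), random.uniform(8, 15)) for _ in range(5)]
prices = [p for s, b in zip(sellers, buyers) if (p := bargain(s, b)) is not None]
print(prices)
```

The point of the sketch is only that the realised price, and hence the distribution of the gains from trade, depends directly on the relative bargaining power of the parties rather than on an equilibrium condition.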

Relevance:

30.00%

Publisher:

Abstract:

Bones adapt to prevalent loading, which comprises mainly forces caused by muscle contractions. Therefore, we hypothesized that similar associations would be observed between neuromuscular performance and the rigidity of bones located in the same body segment. These associations were assessed among 221 premenopausal women, representing athletes in high-impact, odd-impact, high-magnitude, repetitive low-impact and repetitive non-impact sports, and physically active referents, aged 17-40 years. The whole-group mean age and body mass were 23 (5) years and 63 (9) kg, respectively. Bone cross sections at the tibial and fibular mid-diaphysis were assessed with peripheral quantitative computed tomography (pQCT). The density-weighted polar section modulus (SSI) and the minimal and maximal cross-sectional moments of inertia (Imin, Imax) were analyzed. Bone morphology was described as the Imax/Imin ratio. Neuromuscular performance was assessed by maximal power during a countermovement jump (CMJ). Tibial SSI was 31% higher in the high-impact, 19% in the odd-impact, and 30% in the repetitive low-impact groups compared with the reference group (P < 0.005). Only the high-impact group differed from the referents in fibular SSI (17%, P < 0.005). Tibial morphology differed between groups (P = 0.001), but fibular morphology did not (P = 0.247). The bone-by-group interaction was highly significant (P < 0.001). After controlling for height, weight, and age, the CMJ peak power correlated moderately with tibial SSI (r = 0.31, P < 0.001) but not with fibular SSI (r = 0.069, P = 0.313). In conclusion, the observed differences in the association between neuromuscular performance and tibial and fibular traits suggest that the tibia and fibula experience different loading.

Relevance:

30.00%

Publisher:

Abstract:

1. Much recent research has focused on the use of species distribution models to explore the influence(s) of the environment (predominantly climate) on species' distributions. A weakness of this approach is that it typically does not consider the effects of biotic interactions, including competition, on species' distributions.
2. Here we identify and quantify the contribution of environmental factors relative to biotic factors (interspecific competition) to the distribution and abundance of three large, wide-ranging herbivores, the antilopine wallaroo (Macropus antilopinus), common wallaroo (Macropus robustus) and eastern grey kangaroo (Macropus giganteus), across an extensive zone of sympatry in tropical northern Australia.
3. To assess the importance of competition relative to habitat features, we constructed models of abundance for each species incorporating habitat only and habitat plus the abundance of the other species, and compared their respective likelihoods using Akaike's information criterion (see the illustrative sketch below). We further assessed the importance of variables predicting abundance across models for each species.
4. The best-supported models of antilopine wallaroo and eastern grey kangaroo abundance included both habitat and the abundance of the other species, providing evidence of interspecific competition. In contrast, models of common wallaroo abundance were largely influenced by climate and not by the abundance of the other species. The abundance of antilopine wallaroos was most influenced by water availability, eastern grey kangaroo abundance and the frequency of late-season fires. The abundance of eastern grey kangaroos was most influenced by aspects of climate, antilopine wallaroo abundance and a measure of cattle abundance.
5. Our study demonstrates that where census and habitat data are available, it is possible to reveal species' interactions (and measure their relative strength and direction) between large, mobile and/or widely distributed species for which competition is difficult to demonstrate experimentally. This allows discrimination of the influences of environmental factors and species interactions on species' distributions, and should therefore improve the predictive power of species distribution models.
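
The habitat-only versus habitat-plus-competitor comparison in point 3 can be sketched as follows; the synthetic data, the Poisson GLM and the covariate names are illustrative assumptions only, standing in for the study's census data and abundance models.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical census data: counts of one species per site, habitat covariates,
# and the competitor's abundance (all simulated for illustration).
rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "water": rng.uniform(0, 1, n),   # assumed water-availability index
    "fire":  rng.uniform(0, 1, n),   # assumed late-season fire frequency
    "comp":  rng.poisson(3, n),      # assumed competitor abundance
})
df["count"] = rng.poisson(np.exp(0.5 + 1.0 * df.water - 0.5 * df.fire - 0.2 * df.comp))

habitat_only = sm.GLM(df["count"], sm.add_constant(df[["water", "fire"]]),
                      family=sm.families.Poisson()).fit()
habitat_plus = sm.GLM(df["count"], sm.add_constant(df[["water", "fire", "comp"]]),
                      family=sm.families.Poisson()).fit()

# Lower AIC indicates better support; a clear drop when the competitor's abundance
# is added is consistent with an effect of interspecific competition.
print("habitat only          AIC:", round(habitat_only.aic, 1))
print("habitat + competitor  AIC:", round(habitat_plus.aic, 1))
```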

Relevance:

30.00%

Publisher:

Abstract:

Chromatographic detection responses are recorded digitally. A peak is represented ideally by a Gaussian distribution. Raising a Gaussian distribution to the power 'n' increases the height of the peak to that power, but decreases the standard deviation by a factor of √n. Hence the disparity between the detector response and low-level noise increases, with a corresponding decrease in peak width. This increases the S/N ratio and improves peak-to-peak resolution. The ramifications of these factors are that poor resolution in complex chromatographic data can be improved, and low signal responses embedded at near-noise levels can be enhanced. The application of this data treatment process is potentially very useful in 2D-HPLC, where sample dilution occurs between dimensions, reducing signal response, and in the application of post-reaction detection methods, where band broadening is increased by virtue of reaction coils. In this work, power functions applied to chromatographic data are discussed in the context of (a) complex separation problems, (b) 2D-HPLC separations, and (c) post-column reaction detectors.
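
A brief derivation of the claim above, writing the peak as a Gaussian of height A, centre μ and standard deviation σ:

```latex
f(x) = A\,\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)
\quad\Rightarrow\quad
f(x)^{n} = A^{n}\,\exp\!\left(-\frac{(x-\mu)^2}{2\,(\sigma/\sqrt{n})^{2}}\right)
```

so raising the signal to the power n lifts the peak height to A^n while narrowing the peak to a standard deviation of σ/√n; a peak standing above the baseline is amplified far more than the low-level noise, which is why the S/N ratio and the apparent resolution both improve.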

Relevance:

30.00%

Publisher:

Abstract:

Prediction intervals (PIs) are a promising tool for quantifying the uncertainties associated with point forecasts of wind power. However, constructing PIs using parametric methods is questionable, as forecast errors do not follow a standard distribution. This paper proposes a nonparametric method for constructing reliable PIs for neural network (NN) forecasts. A lower upper bound estimation (LUBE) method is adapted for the construction of PIs for wind power generation. A new framework is proposed for synthesizing PIs generated using an ensemble of NN models in the LUBE method. This is done to guard against NN performance instability in generating reliable and informative PIs. A validation set is used to shortlist NNs based on the quality of their PIs. Then, the PIs constructed using the filtered NNs are aggregated to obtain combined PIs. The performance of the proposed method is examined using data sets taken from two wind farms in Australia. Simulation results indicate that the quality of the combined PIs is significantly superior to that of PIs constructed using NN models ranked and filtered by the validation set.
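
A minimal sketch of the ensemble-combination step described above is given below; the median aggregation rule, the coverage and width measures, and the synthetic bounds are illustrative assumptions and do not reproduce the paper's LUBE training or validation-based filtering.

```python
import numpy as np

def picp(lower, upper, y):
    """Prediction interval coverage probability: fraction of targets falling inside the PI."""
    return np.mean((y >= lower) & (y <= upper))

def pinaw(lower, upper, y):
    """PI normalised average width, relative to the range of the targets."""
    return np.mean(upper - lower) / (y.max() - y.min())

# Synthetic stand-ins: observations y and per-model PI bounds of shape (n_models, n_samples)
rng = np.random.default_rng(0)
y = rng.uniform(0, 100, 500)                      # stand-in wind power observations
lowers = y - rng.uniform(5, 15, (5, 500))         # synthetic ensemble lower bounds
uppers = y + rng.uniform(5, 15, (5, 500))         # synthetic ensemble upper bounds

# Combine the ensemble's PIs (assumed here: element-wise median of the bounds)
combined_lower = np.median(lowers, axis=0)
combined_upper = np.median(uppers, axis=0)
print("coverage:", picp(combined_lower, combined_upper, y))
print("width   :", pinaw(combined_lower, combined_upper, y))
```

In practice the ensemble members would first be shortlisted on a validation set using exactly these kinds of coverage and width measures before their bounds are aggregated.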