964 results for Load Distribution.
Abstract:
The morphology of an asphalt mixture can be defined as the set of parameters describing the geometrical characteristics of its constituent materials, their relative proportions, and their spatial arrangement in the mixture. The present study investigates the effect of morphology on the meso- and macro-mechanical response of the mixture. An analysis approach based on X-ray computed tomography (CT) data is used for the meso-structural characterisation. Image processing techniques are used to systematically vary the internal structure and obtain structures with different morphologies. A morphology framework is used to characterise the average mastic coating thickness around the main load-carrying structure. The uniaxial tension simulations show that the mixtures with the lowest coating thickness exhibit better inter-particle interaction, with more continuous load distribution chains between adjacent aggregate particles, fewer stress concentrations, and less strain localisation in the mastic phase.
Abstract:
In studies of complex heterogeneous networks, particularly of the Internet, significant attention has been paid to analysing network failures caused by hardware faults or overload. There, the network's reaction was modelled as the rerouting of traffic away from failed or congested elements. Here we model the network's reaction to congestion on much shorter time scales, when the input traffic rate through congested routes is reduced. As an example we consider the Internet, where local mismatch between demand and capacity results in traffic losses. We describe the onset of congestion as a phase transition characterised by strong, albeit relatively short-lived, fluctuations of losses, caused by noise in the input traffic and exacerbated by the heterogeneous nature of the network, manifested in a power-law load distribution. The fluctuations may cause the network to overreact to the first signs of congestion by significantly reducing input traffic along communication paths where congestion is utterly negligible. © 2013 IEEE.
Abstract:
In recent years, the internet has grown exponentially and become more complex. This increased complexity potentially introduces more network-level instability, yet for any end-to-end internet connection, maintaining throughput and reliability at a certain level is very important, because they directly affect the connection's normal operation. A challenging research task is therefore to improve a network's connection performance by optimizing its throughput and reliability. This dissertation proposed an efficient and reliable transport layer protocol, called concurrent TCP (cTCP), as an extension of the current TCP protocol, to optimize end-to-end connection throughput and enhance end-to-end fault tolerance. The proposed cTCP protocol aggregates the bandwidth of multiple paths by supporting concurrent data transfer (CDT) on a single connection, where concurrent data transfer is defined as the concurrent transfer of data from local hosts to foreign hosts via two or more end-to-end paths. An RTT-based CDT mechanism, which uses a path's RTT (Round Trip Time) to optimize CDT performance, was developed for the proposed cTCP protocol. This mechanism primarily includes an RTT-based load distribution and path management scheme, used to optimize connection throughput and reliability; a congestion control and retransmission policy based on RTT is also provided. Experimental results show that, under different network conditions, the RTT-based CDT mechanism achieves good CDT performance. Finally, a CWND-based CDT mechanism, which uses a path's CWND (Congestion Window) to optimize CDT performance, was introduced.
This mechanism primarily includes: a CWND-based load allocation scheme, which assigns data to paths based on their CWND to achieve aggregate bandwidth; a CWND-based path management scheme, used to optimize connection fault tolerance; and a congestion control and retransmission management policy, which handles each path separately, similar to regular TCP. The corresponding experimental results show that this mechanism achieves near-optimal CDT performance under different network conditions.
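As a rough illustration of the RTT-based load distribution idea (the specific weighting rule here is an assumption for illustration, not the dissertation's algorithm), a multipath scheduler can weight each path inversely to its measured RTT, so that faster paths carry proportionally more data:

```python
def rtt_weights(rtts_ms):
    """Return per-path traffic shares inversely proportional to RTT.

    Illustrative sketch only: a real CDT scheduler such as cTCP also
    reacts to losses and congestion windows, not RTT alone.
    """
    inverses = [1.0 / rtt for rtt in rtts_ms]
    total = sum(inverses)
    return [inv / total for inv in inverses]

# A 50 ms path and a 150 ms path: the faster path gets 3/4 of the data.
shares = rtt_weights([50.0, 150.0])
```

A CWND-based variant would simply replace the inverse-RTT weights with weights proportional to each path's congestion window.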
Abstract:
This study focuses on the properties that the presence of the fluorine atom confers on molecules, more specifically on fluoroquinolones, antibiotics that are increasingly used. Several parameters were analysed to obtain information on the drug-receptor interaction in fluoroquinolones. Computational chemical characterisation techniques were used to characterise the fluoroquinolones electronically and structurally (3D), complementing the semi-empirical methods used initially. As is well known, specificity and affinity for the target site are essential for a drug's efficacy. Fluoroquinolones have undergone great development since the first quinolone was synthesised in 1958, and countless derivatives have been synthesised since then. This is because they are easily manipulated, yielding highly potent drugs with a broad spectrum, optimised pharmacokinetic factors and reduced adverse effects. The major pharmacological change that increased interest in this group was the substitution at C6 of a fluorine atom for a hydrogen atom. To obtain information on the influence of fluorine on the structural and electronic properties of fluoroquinolones, a comparison was made between each fluoroquinolone with fluorine at C6 and with hydrogen at C6. The four fluoroquinolones in this study were: ciprofloxacin, moxifloxacin, sparfloxacin and pefloxacin. The information was obtained with quantum mechanics and molecular mechanics software. It was concluded that the fluorine substituent did not significantly modify the geometry of the molecules, but rather the charge distribution on the vicinal carbon and on the atoms in alpha, beta and gamma positions relative to it. This modification of the electronic distribution may condition the binding of the drug to the receptor, modifying its pharmacological activity.
Abstract:
Background: Several numerical investigations of bone remodelling after total hip arthroplasty (THA) are based on finite element analysis (FEA). For such computations, certain boundary conditions have to be defined. The authors of those studies chose at most three static load situations, usually taken from the gait cycle, because this is the most frequent dynamic activity of a patient after THA. Materials and methods: The numerical study presented here investigates whether it is sufficient to consider only one static load situation of the gait cycle in the FE calculation of bone remodelling. For this purpose, 5 different loading cases were examined in order to determine their influence on the change in the physiological load distribution within the femur and on the resulting strain-adaptive bone remodelling. First, four different static loading cases at 25%, 45%, 65% and 85% of the gait cycle were examined; then the whole gait cycle was examined in one loading regime, in order to account for all the different loadings of the cycle in the simulation. Results: The computed evolution of the apparent bone density (ABD) and the calculated mass losses in the periprosthetic femur show that the simulation results are highly dependent on the chosen boundary conditions. Conclusion: These numerical investigations prove that a single static load situation is insufficient to represent the whole gait cycle and causes severe deviations in the FE calculation of bone remodelling. However, accompanying clinical examinations are necessary to calibrate the bone adaptation law and thus to validate the FE calculations.
Abstract:
In the scope of the discussions about microgeneration (and microgrids), the avoided electrical losses are often pointed out as an important value to be credited to those entities. Therefore, methods to assess the impact of microgeneration on losses must be developed in order to support the definition of a suitable regulatory framework for the economic integration of microgeneration on distribution networks. This paper presents an analytical method to quantify the value of avoided losses that microgeneration may produce on LV networks. Intervals of expected avoided losses are used to account for the variation of avoided losses due to the number, size and location of microgenerators, as well as for the kind of load distribution on LV networks.
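The intuition behind avoided losses can be sketched on a single radial feeder segment: losses scale with the square of the current, so microgeneration located at the load bus, which reduces the power imported over the feeder, reduces losses more than proportionally. The following toy calculation (single-phase, unity power factor, fixed voltage; all simplifying assumptions of this sketch, not the paper's analytical method) illustrates the effect:

```python
def segment_losses_kw(load_kw, microgen_kw, r_ohm, v_volt=400.0):
    """I^2 R losses on one LV feeder segment feeding a single load.

    Toy model: the microgenerator sits at the load bus, so only the
    net imported power flows over the segment resistance r_ohm.
    """
    net_kw = load_kw - microgen_kw
    current_a = net_kw * 1e3 / v_volt
    return current_a ** 2 * r_ohm / 1e3  # losses in kW

before = segment_losses_kw(10.0, 0.0, r_ohm=0.1)  # 25 A -> 0.0625 kW
after = segment_losses_kw(10.0, 4.0, r_ohm=0.1)   # 15 A -> 0.0225 kW
avoided = before - after                          # 0.04 kW avoided
```

A 40% reduction in imported power thus avoids 64% of the segment losses, which is why the value of avoided losses depends so strongly on the size and location of the microgenerators.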
Abstract:
In this paper, the performance of voltage-source converter-based shunt and series compensators used for load voltage control in electrical power distribution systems is analyzed and compared when a nonlinear load is connected across the load bus. The comparison is based on the closed-loop frequency response characteristics of the compensated distribution system. A distribution static compensator (DSTATCOM) as a shunt device and a dynamic voltage restorer (DVR) as a series device are considered in the voltage-control mode for the comparison. The power-quality problems which these compensators address include voltage sags/swells, load voltage harmonic distortion, and unbalance. The effect of various system parameters on the control performance of the compensators can be studied using the proposed analysis. In particular, the performance of the two compensators is compared for strong ac-supply (stiff-source) and weak ac-supply (non-stiff-source) distribution systems. Experimental verification of the derived analytical results has been obtained using a laboratory model of the single-phase DSTATCOM and DVR. A generalized converter topology using a cascaded multilevel inverter is proposed for the medium-voltage distribution system. Simulation studies have been performed in the PSCAD/EMTDC software to verify the results in the three-phase system.
Abstract:
In this paper, a new comprehensive planning methodology is proposed for implementing distribution network reinforcement. Load growth, voltage profile, distribution line losses, and reliability are considered in this procedure. A time-segmentation technique is employed to reduce the computational load. The options considered range from supporting the load growth with the traditional approach of upgrading the conventional equipment in the distribution network, through to the use of dispatchable distributed generators (DDG). The objective function is composed of the construction cost, loss cost and reliability cost. As constraints, the bus voltages and the feeder currents must be maintained within standard limits, and the DDG output power should not fall below a specified fraction of its rated power, for efficiency reasons. A hybrid optimization method, called modified discrete particle swarm optimization, is employed to solve this nonlinear, discrete optimization problem. A comparison is made between the optimized solution based on planning capacitors together with tap-changing transformers and line upgrades, and the solution obtained when DDGs are included in the optimization.
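One common way to combine such an objective with voltage and current constraints in a particle swarm is a penalised fitness function: the cost terms are summed, and each bus voltage or feeder current outside the standard limits adds a large penalty. The penalty formulation and per-unit limits below are illustrative assumptions, not the paper's exact formulation:

```python
def planning_cost(construction, loss, reliability, voltages_pu, currents_pu,
                  v_min=0.95, v_max=1.05, i_max=1.0, penalty=1e6):
    """Total planning cost with constraint penalties.

    Any bus voltage outside [v_min, v_max] or feeder current above
    i_max (all per-unit) adds `penalty` to the fitness, steering the
    swarm away from infeasible reinforcement plans.
    """
    cost = construction + loss + reliability
    cost += penalty * sum(1 for v in voltages_pu if not v_min <= v <= v_max)
    cost += penalty * sum(1 for i in currents_pu if i > i_max)
    return cost
```

A feasible plan is scored purely on cost, while an infeasible one is dominated by the penalty terms, so the swarm converges toward constraint-satisfying solutions.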
Abstract:
Articular cartilage is a load-bearing tissue that consists of proteoglycan macromolecules entrapped between collagen fibrils in a three-dimensional architecture. To date, the search for mathematical models to represent the biomechanics of such a system continues without providing a fitting description of its functional response to load at the micro-scale level. We believe that the major complication arose when cartilage was first envisaged as a multiphasic model with distinguishable components, and that quantifying those components and searching for the laws that govern their interaction is inadequate. The thesis of this paper is that cartilage as a bulk is as much a continuum as is the response of its components to external stimuli. For this reason, we framed the fundamental question as to what would be the mechano-structural functionality of such a system in the total absence of one of its key constituents: proteoglycans. To answer this, hydrated normal and proteoglycan-depleted samples were tested under confined compression while finite element models were reproduced, for the first time, based on the structural microarchitecture of the cross-sectional profile of the matrices. These micro-porous in silico models served as virtual transducers, providing an internal, noninvasive probing mechanism beyond experimental capabilities to render the micromechanics of the matrices and several other properties, such as permeability and orientation. The results demonstrated that load transfer was closely related to the microarchitecture of the hyperelastic models, which represent the solid-skeleton stress and fluid response based on the state of the collagen network with and without the swollen proteoglycans. In other words, the stress gradient during deformation was a function of the structural pattern of the network and acted in concert with the position-dependent compositional state of the matrix.
This reveals that the interaction between indistinguishable components in real cartilage is overlaid by the tissue's microarchitectural state, which directly influences macromechanical behavior.
Abstract:
This paper presents a new method to determine a feeder reconfiguration scheme considering a variable load profile. The objective function consists of system losses, reliability costs and switching costs. In order to achieve an optimal solution, the proposed method compares these costs dynamically and determines when and how a switching operation is worthwhile. The proposed method divides a year into several equal time periods; then, using particle swarm optimization (PSO), optimal candidate configurations for each period are obtained. The system losses and customer interruption cost of each configuration during each period are also calculated. Then, considering the cost of switching from one configuration to another, a dynamic programming algorithm (DPA) is used to determine the annual reconfiguration scheme. Several test systems were used to validate the proposed method. The obtained results indicate that, to reach an optimal solution, it is necessary to compare operation costs dynamically.
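The dynamic-programming stage can be sketched as a shortest path over periods: each period contributes the operating cost (losses plus interruption cost) of the chosen configuration, and each transition between consecutive periods contributes the switching cost. The names and data layout below are illustrative assumptions; in the paper, the candidate configurations come from the PSO stage:

```python
def annual_schedule(configs, period_cost, switch_cost):
    """Pick one candidate configuration per equal time period so that
    total operating cost plus switching cost over the year is minimal.

    period_cost[t][c]  -- operating cost of configuration c in period t
    switch_cost[a][b]  -- cost of switching from configuration a to b
    """
    best = {c: period_cost[0][c] for c in configs}  # best cost ending in c
    back_pointers = []
    for t in range(1, len(period_cost)):
        new_best, pointers = {}, {}
        for c in configs:
            prev = min(configs, key=lambda p: best[p] + switch_cost[p][c])
            new_best[c] = best[prev] + switch_cost[prev][c] + period_cost[t][c]
            pointers[c] = prev
        back_pointers.append(pointers)
        best = new_best
    # trace the minimum-cost schedule back through the pointers
    last = min(best, key=best.get)
    schedule = [last]
    for pointers in reversed(back_pointers):
        schedule.append(pointers[schedule[-1]])
    return schedule[::-1], best[last]
```

With a high switching cost the schedule stays in one configuration all year; as the switching cost falls, reconfiguring between periods becomes worthwhile, which is exactly the trade-off the method evaluates dynamically.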
Abstract:
This paper suggests a supervisory control scheme for storage units to provide load leveling in distribution networks. The approach coordinates storage units to charge during periods of high generation and discharge during peak load times, while also improving the network voltage profile indirectly. The aim of this control strategy is to establish power sharing among the storage units on a pro rata basis. As a case study, a practical distribution network with 30 buses is simulated and the results are presented.
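Power sharing on a pro rata basis can be sketched as splitting the supervisory charge or discharge command in proportion to each unit's rating (using rated power as the sharing key is an assumption for illustration; another quantity such as available capacity could serve equally well):

```python
def pro_rata_setpoints(total_kw, ratings_kw):
    """Split a total charge (+) or discharge (-) command among storage
    units in proportion to their power ratings, so each unit is loaded
    to the same fraction of its rating.
    """
    total_rating = sum(ratings_kw)
    return [total_kw * rating / total_rating for rating in ratings_kw]

# A 90 kW charging command over units rated 10, 20 and 30 kW.
setpoints = pro_rata_setpoints(90.0, [10.0, 20.0, 30.0])  # [15, 30, 45]
```

Every unit ends up at the same per-unit loading (here 150% would be infeasible; 90 kW over 60 kW of rating means each unit runs at 1.5 times rating, so a practical supervisor would also clip the command to the total rating).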
Abstract:
Large-scale integration of solar photovoltaics (PV) in distribution networks has resulted in over-voltage problems. Several control techniques have been developed to address the over-voltage problem using Deterministic Load Flow (DLF). However, the intermittent characteristics of PV generation require Probabilistic Load Flow (PLF) to introduce into the analysis the variability that DLF ignores. Traditional PLF techniques are not well suited to distribution systems and suffer from several drawbacks, such as computational burden (Monte Carlo, conventional convolution), accuracy that degrades with system complexity (point estimation method), the need for linearization (multi-linear simulation), and convergence problems (Gram–Charlier expansion, Cornish–Fisher expansion). In this research, Latin Hypercube Sampling with Cholesky Decomposition (LHS-CD) is used to quantify the over-voltage issues with and without the voltage control algorithm in a distribution network with active generation. The LHS technique is verified on a test network and on a real system from an Australian distribution network service provider. The accuracy and computational burden of the simulated results are also compared with Monte Carlo simulations.
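The LHS-CD idea can be sketched in a few lines: draw one stratified standard-normal score per equal-probability interval for each input variable, then multiply by a Cholesky factor of the target covariance so the sampled inputs carry the desired correlation. The following is a minimal standard-library sketch under those assumptions, not the paper's implementation, and the role of the inputs (e.g. PV outputs at neighbouring sites) is illustrative:

```python
import random
from statistics import NormalDist

def cholesky(a):
    """Lower-triangular Cholesky factor L of a symmetric
    positive-definite matrix a, so that L L^T = a."""
    n = len(a)
    low = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(low[i][k] * low[j][k] for k in range(j))
            low[i][j] = ((a[i][i] - s) ** 0.5 if i == j
                         else (a[i][j] - s) / low[j][j])
    return low

def lhs_cd(n_samples, cov, rng=random):
    """Latin Hypercube samples of correlated standard-normal inputs:
    one stratum per sample and variable, independently permuted, then
    the Cholesky factor of cov imposes the correlation structure."""
    d = len(cov)
    norm = NormalDist()
    # independent stratified normal scores, one list per variable
    scores = []
    for _ in range(d):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        scores.append([norm.inv_cdf((s + rng.random()) / n_samples)
                       for s in strata])
    low = cholesky(cov)
    # x = L z for every sample: rows are samples, columns variables
    return [[sum(low[i][k] * scores[k][j] for k in range(d))
             for i in range(d)] for j in range(n_samples)]
```

Because each variable gets exactly one draw per probability stratum, far fewer samples are needed than plain Monte Carlo for the same accuracy, which is the computational advantage the paper exploits.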