896 results for Radiality constraints in distribution systems
Abstract:
Particle flow patterns were investigated for wet granulation and dry powder mixing in ploughshare mixers using Positron Emission Particle Tracking (PEPT). In a 4-l mixer, calcium carbonate with mean size 45 μm was granulated using a 50 wt.% solution of glycerol and water as binding fluid, and particle movement was followed using a 600-μm calcium hydroxy-phosphate tracer particle. In a 20-l mixer, dry powder flow was studied using a 600-μm resin bead tracer particle to simulate the bulk polypropylene powder with mean size 600 μm. Important differences were seen between particle flow patterns for wet and dry systems. Particle speed relative to blade speed was lower in the wet system than in the dry system, with the ratios of average particle speed to blade tip speed for all experiments in the range 0.01-0.15. In the axial plane, the same particle motion was observed around each blade; this provides a significant advance for modelling flow in ploughshare mixers. For the future, a detailed understanding of the local velocity, acceleration and density variations around a plough blade will reveal the effects of flow patterns in granulating systems on the resultant distribution of granular product attributes such as size, density and strength. © 2002 Elsevier Science B.V. All rights reserved.
Abstract:
This communication reports a laboratory and plant comparison between the University of Cape Town (UCT) bubble sizing device (capillary) and the McGill University bubble sizing method (imaging). The laboratory work was conducted on single bubbles to establish the accuracy of the techniques by comparison with a reference method (capture in a burette). Single bubble measurements with the McGill University technique showed a tendency to slightly underestimate (4% for a 1.3 mm bubble) and the UCT technique to slightly overestimate (1% for the 1.3 mm bubble). Both trends are anticipated from fundamental considerations. In the UCT technique, bubble breakup was observed when measuring a 2.7 mm bubble using a 0.5 mm ID capillary tube. A discrepancy of 11% was determined when comparing the techniques in an industrial-scale mechanical flotation cell. The possible sources of bias are discussed. © 2003 Elsevier Ltd. All rights reserved.
Abstract:
This thesis focused on applying theoretical models of synchronization to cortical dynamics as measured by magnetoencephalography (MEG). Dynamical systems theory was used both in identifying relevant variables for brain coordination and in devising methods for their quantification. We presented a method for studying interactions of linear and chaotic neuronal sources using MEG beamforming techniques. We showed that such sources can be accurately reconstructed in terms of their location, temporal dynamics and possible interactions. Synchronization in low-dimensional nonlinear systems was studied to explore specific correlates of functional integration and segregation. In the case of interacting dissimilar systems, relevant coordination phenomena involved generalized and phase synchronization, which were often intermittent. Spatially extended systems were then studied. For locally coupled dissimilar systems, as in the case of cortical columns, clustering behaviour occurred. Synchronized clusters emerged at different frequencies and their boundaries were marked by oscillation death. The macroscopic mean field revealed sharp spectral peaks at the frequencies of the clusters and broader spectral drops at their boundaries. These results question existing models of Event Related Synchronization and Desynchronization. We re-examined the concept of the steady-state evoked response following an AM stimulus. We showed that very little variability in the AM following response could be accounted for by system noise. We presented a methodology for detecting local and global nonlinear interactions from MEG data in order to account for the residual variability. We found cross-hemispheric nonlinear interactions of ongoing cortical rhythms concurrent with the stimulus, and interactions of these rhythms with the following AM responses. Finally, we hypothesized that holistic spatial stimuli would be accompanied by the emergence of clusters in primary visual cortex, resulting in frequency-specific MEG oscillations. Indeed, we found different frequency distributions in induced gamma oscillations for different spatial stimuli, which was suggestive of temporal coding of these spatial stimuli. Further, we addressed the bursting character of these oscillations, which was suggestive of intermittent nonlinear dynamics. However, we did not observe the characteristic −3/2 power-law scaling in the distribution of interburst intervals. Further, this distribution was only seldom significantly different from the one obtained in surrogate data, where nonlinear structure was destroyed. In conclusion, the work presented in this thesis suggests that advances in dynamical systems theory, in conjunction with developments in magnetoencephalography, may facilitate a mapping between levels of description in the brain. This may potentially represent a major advance in neuroscience.
Abstract:
Over recent years, hub-and-spoke distribution techniques have attracted widespread research attention. Despite a growing body of literature in this area, there is less focus on the spoke-terminal element of the hub-and-spoke system as a key component in the overall service received by the end-user. The current literature is highly geared towards the bulk optimization of freight units rather than towards the more discrete and individualistic profile characteristics of shared-user less-than-truckload (LTL) freight. In this paper, a literature review is presented examining the role hub-and-spoke systems play in meeting multi-profile customer demands, particularly in developing sectors with more sophisticated needs, such as retail. The paper also looks at the use of simulation technology as a suitable tool for analyzing spoke-terminal operations within developing hub-and-spoke systems.
Abstract:
A Vehicle-to-Grid (V2G) system with efficient Demand Response Management (DRM) is critical for solving the problem of supplying electricity by utilizing the surplus electricity available at EVs. An incentivized DRM approach is studied to reduce the system cost and maintain system stability. EVs are motivated with dynamic pricing determined by a group-selling based auction. In the proposed approach, a number of aggregators sit at the first auction level, each responsible for communicating with a group of EVs. EVs as bidders consider Quality of Energy (QoE) requirements and report their interests and decisions in the bidding process coordinated by the associated aggregator. Auction winners are determined based on the bidding prices and the amount of electricity sold by the EV bidders. We investigate the impact of the proposed mechanism on system performance under the maximum feedback power constraints of the aggregators. The designed mechanism is proven to have the essential economic properties. Simulation results indicate that the proposed mechanism can reduce the system cost and offer EVs significant incentives to participate in the V2G DRM operation.
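The abstract does not spell out the winner-determination rule; a minimal greedy sketch, assuming cheapest-offer-first allocation under an aggregator's feedback-power cap (all names below are illustrative, not taken from the paper), might look like this:

```python
# Illustrative sketch of a winner-determination step for the group-selling
# auction described above; the exact mechanism is not given in the abstract.
from dataclasses import dataclass

@dataclass
class Bid:
    ev_id: str
    price_per_kwh: float   # asking price reported by the EV bidder
    energy_kwh: float      # amount of electricity the EV offers to sell

def determine_winners(bids, max_feedback_kwh):
    """Greedy allocation (assumed rule): accept the cheapest offers first
    until the aggregator's maximum feedback capacity is exhausted."""
    winners, remaining = [], max_feedback_kwh
    for bid in sorted(bids, key=lambda b: b.price_per_kwh):
        if bid.energy_kwh <= remaining:
            winners.append(bid)
            remaining -= bid.energy_kwh
    return winners
```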
Abstract:
"In this paper we extend the earlier treatment of out-of-equilibrium mesoscopic fluctuations in glassy systems in several significant ways. First, via extensive simulations, we demonstrate that models of glassy behavior without quenched disorder display scalings of the probability of local two-time correlators that are qualitatively similar to that of models with short-ranged quenched interactions. The key ingredient for such scaling properties is shown to be the development of a criticallike dynamical correlation length, and not other microscopic details. This robust data collapse may be described in terms of a time-evolving "extreme value" distribution. We develop a theory to describe both the form and evolution of these distributions based on a effective sigma model approach."
Abstract:
The choice of system grounding method has a direct influence on the overall performance of the entire medium-voltage network as well as on the ground fault current magnitude. Every kind of grounding system (ungrounded, solidly grounded, low-impedance grounded, and resonant grounded) has advantages and disadvantages, so a thorough study is necessary to choose the most appropriate grounding protection system. Power distribution utilities justify their choices based on economic and technical criteria, according to the specific characteristics of each distribution network. In this paper we present a case study of a Portuguese medium-voltage substation and a study of neutral systems with a Petersen coil, an isolated neutral, and impedance grounding.
Abstract:
Using robotic systems for missions that require power distribution can significantly decrease the need for human intervention in such missions. To achieve this capability, a robotic system capable of autonomous navigation, power systems adaptation, and establishing physical connections needs to be developed. This thesis presents path planning and navigation algorithms developed for an autonomous ground power distribution system. In this work, a survey of existing path planning methods is presented, along with two algorithms developed by the author. One of these algorithms is a simple path planner suitable for implementation on lab-size platforms. A navigation hierarchy is developed for experimental validation of the path planner and as a proof of concept for an autonomous ground power distribution system in a lab environment. The second algorithm is a robust path planner developed for real-size implementation, based on lessons learned from the lab-size experiments. The simulation results show that the algorithm is efficient and reliable in unknown environments. Future plans for developing intelligent power electronics and integrating them with robotic systems are presented. The ultimate goal is to create a power distribution system capable of regulating power flow at a desired voltage and frequency, adaptable to load demands.
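The thesis's planners are not specified in the abstract; as an illustration of the simpler, lab-size kind of planner mentioned, a minimal breadth-first grid search might look like the following (the grid layout and obstacle encoding are assumptions):

```python
# Minimal grid path planner of the kind suitable for lab-size platforms.
# This is an illustrative BFS sketch, not the thesis algorithm.
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on a 4-connected occupancy grid.
    grid[r][c] == 1 marks an obstacle. Returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            path = []
            while current is not None:   # walk the parent links back
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in came_from:
                came_from[nxt] = current
                frontier.append(nxt)
    return None                          # goal unreachable

# Example: plan_path([[0, 0], [1, 0]], (0, 0), (1, 1))
# returns [(0, 0), (0, 1), (1, 1)].
```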
Abstract:
Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system there are one or more processor cores that run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial in satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is “power estimation”. Having a fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and to make optimal choices for developing a power-efficient processor. Likewise, understanding the power dissipation behaviour of a specific software application is the key to choosing appropriate algorithms in order to write power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, using a design method to develop power-predictable circuits; second, analysing the power of the functions in the code that repeat during execution, then building the power model based on the average number of repetitions. In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented that estimates the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than 100 times speedup in comparison to conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons that take place during execution. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders-of-magnitude speedup over simulation-based methods.
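A hedged sketch of the two estimation ideas described above is given below; the per-operation energy constants are placeholders, whereas the thesis measures the actual parameters for the ACSL ALU and the LEON3 core:

```python
# Sketches of the two power-prediction models described above.
# e_alu and e_cmp are placeholder per-operation energies (joules).

def alu_energy(n_alu_instructions: int, e_alu: float) -> float:
    """First model: ACSL circuit power is independent of input data, so
    ALU energy scales linearly with the count of ALU-related instructions
    extracted from the software program."""
    return n_alu_instructions * e_alu

def insertion_sort_avg_energy(n: int, e_cmp: float) -> float:
    """Second model: average-case energy of insertion sort, driven by the
    expected number of comparisons on random input (roughly n*(n-1)/4;
    the thesis obtains this quantity via the MOQA methodology)."""
    avg_comparisons = n * (n - 1) / 4
    return avg_comparisons * e_cmp
```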
Abstract:
Legionella is a Gram-negative bacterium that represents a public health issue with heavy social and economic impact. Therefore, it is mandatory to provide a proper environmental surveillance and risk assessment plan to control Legionella in water distribution systems in hospital and community buildings. The thesis combines several methodologies in a single workflow applied to the identification of non-pneumophila Legionella species (n-pL), starting from standard methods such as culture and gene sequencing (mip and rpoB), and passing through innovative approaches such as the MALDI-TOF MS technique and whole genome sequencing (WGS). The results obtained were compared to identify the Legionella isolates and led to the identification of four presumptive novel Legionella species. One of these four new isolates was characterized and recognized taxonomically under the name Legionella bononiensis (the 64th Legionella species). The workflow applied in this thesis helps to increase the knowledge of environmental Legionella species, improving the description of the environment itself and of the events that promote the growth of Legionella in its ecological niche. The correct identification and characterization of the isolates make it possible to prevent their spread in man-made environments and to contain the occurrence of cases, clusters, or outbreaks. Therefore, the experimental work undertaken could support preventive measures during environmental and clinical surveillance, improving the study of species that are often underestimated or still unknown.
Abstract:
Water Distribution Networks (WDNs) play a vitally important role in communities, ensuring well-being and supporting economic growth and productivity. The need for greater investment requires design choices that will impact the efficiency of management in the coming decades. This thesis proposes an algorithmic approach to address two related problems: (i) identifying the fundamental asset of large WDNs in terms of main infrastructure; (ii) sectorizing large WDNs into isolated sectors so as to respect the minimum service guaranteed to users. Two methodologies have been developed to meet these objectives, and they were subsequently integrated into an overall process that optimizes the sectorized configuration of the WDN while integrating problems (i) and (ii) in a global vision. With regard to problem (i), the methodology developed introduces the concept of a primary network and answers with a dual approach: connecting the main nodes of the WDN in terms of hydraulic infrastructure (reservoirs, tanks, pumping stations) and identifying hypothetical paths with minimal energy losses. The primary network thus identified can be used as an initial basis to design the sectors. The sectorization problem (ii) has been addressed with optimization techniques, through the development of a new dedicated Tabu Search algorithm able to deal with real case studies of WDNs. For this reason, three new large WDN models have been developed in order to test the capabilities of the algorithm on different, complex real cases. The developed methodology also automatically identifies the deficient parts of the primary network and dynamically includes new edges in order to support a sectorized configuration of the WDN. The application of the overall algorithm to the new real case studies and to others from the literature has yielded applicable solutions even in specific complex situations.
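As an illustration of the metaheuristic named above, here is a generic Tabu Search skeleton; the cost function and neighbourhood are problem-specific placeholders, not the thesis's WDN sectorization formulation:

```python
# Generic Tabu Search skeleton; cost() and neighbours() stand in for the
# thesis's WDN-specific sectorization objective and move set.

def tabu_search(initial, neighbours, cost, iterations=1000, tenure=20):
    """Minimise cost(s) by local moves, forbidding recently visited
    solutions via a short-term tabu list."""
    best = current = initial
    tabu = [initial]
    for _ in range(iterations):
        candidates = [s for s in neighbours(current) if s not in tabu]
        if not candidates:
            break                  # neighbourhood exhausted by the tabu list
        current = min(candidates, key=cost)
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)            # expire the oldest tabu entry
        if cost(current) < cost(best):
            best = current
    return best
```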
Abstract:
We analyze the irreversibility and the entropy production in nonequilibrium interacting particle systems described by a Fokker-Planck equation, using a suitable master-equation representation. The irreversible character is provided either by nonconservative forces or by contact with heat baths at distinct temperatures. The expression for the entropy production is deduced from a general definition, related to the probability of a trajectory in phase space and its time reversal, that makes no a priori reference to the dissipated power. Our formalism is applied to calculate the heat conductance in a simple system consisting of two Brownian particles, each one in contact with a heat reservoir. We also show the connection between the definition of the entropy production rate and the Jarzynski equality.
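In a master-equation representation of the kind referred to above, a standard (Schnakenberg-type) form for the entropy production rate is

\[
\Pi = \frac{1}{2} \sum_{i,j} \left( W_{ij} P_j - W_{ji} P_i \right) \ln \frac{W_{ij} P_j}{W_{ji} P_i} \;\geq\; 0,
\]

where \(W_{ij}\) is the transition rate from state \(j\) to state \(i\) and \(P_i\) is the occupation probability; the notation here is generic rather than necessarily the paper's.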
Abstract:
Jensen's theorem is used to derive inequalities for semiclassical tunneling probabilities in systems involving several degrees of freedom. These Jensen inequalities are then used to discuss several aspects of sub-barrier heavy-ion fusion reactions. The inequality hinges on general convexity properties of the tunneling coefficient calculated with the classical action in the classically forbidden region.
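To make the mechanism explicit: if the tunneling coefficient is convex in the action, e.g. the WKB form \(T(S) \propto e^{-2S/\hbar}\) (an illustrative assumption, not necessarily the paper's parametrization), Jensen's theorem gives

\[
\langle T(S) \rangle \;\geq\; T(\langle S \rangle),
\]

i.e. evaluating the tunneling coefficient at the averaged action underestimates the average tunneling probability.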