875 results for Distribution network reconfiguration problem
Abstract:
The problem of "exit against a flow" for dynamical systems subject to small Gaussian white noise excitation is studied. Here the word "flow" refers to the behavior in phase space of the unperturbed system's state variables. "Exit against a flow" occurs if a perturbation causes the phase point to leave a phase space region within which it would normally be confined. In particular, there are two components of the problem of exit against a flow:
i) the mean exit time
ii) the phase-space distribution of exit locations.
When the noise perturbing the dynamical systems is small, the solution of each component of the problem of exit against a flow is, in general, the solution of a singularly perturbed, degenerate elliptic-parabolic boundary value problem.
Singular perturbation techniques are used to express the asymptotic solution in terms of an unknown parameter. The unknown parameter is determined using the solution of the adjoint boundary value problem.
The problem of exit against a flow for several dynamical systems of physical interest is considered, and the mean exit times and distributions of exit positions are calculated. The systems are then simulated numerically, using Monte Carlo techniques, in order to determine the validity of the asymptotic solutions.
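As a hedged illustration of the Monte Carlo component only (not the code used in the work), the sketch below applies an Euler-Maruyama discretization to a hypothetical one-dimensional system dx = -x dt + sqrt(2*eps) dW and estimates the mean time to exit the interval (-1, 1); the drift, noise level, time step and domain are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_exit_time(eps=0.2, dt=1e-3, n_paths=1000, barrier=1.0):
    """Monte Carlo estimate of the mean time for the noisy system
    dx = -x dt + sqrt(2*eps) dW  (the drift pulls the state back toward 0)
    to exit the interval (-barrier, barrier); parameters are illustrative."""
    x = np.zeros(n_paths)
    t = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    while alive.any():
        n = int(alive.sum())
        # exit happens only "against" the deterministic flow, driven by the noise
        x[alive] += -x[alive] * dt + np.sqrt(2.0 * eps * dt) * rng.standard_normal(n)
        t[alive] += dt
        alive &= np.abs(x) < barrier
    return t.mean(), t.std() / np.sqrt(n_paths)

if __name__ == "__main__":
    mean_t, stderr = mean_exit_time()
    print(f"estimated mean exit time: {mean_t:.2f} +/- {stderr:.2f}")
```

Rerunning with smaller eps illustrates the rapid growth of the mean exit time in the small-noise limit that the singular perturbation analysis is designed to capture.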
Abstract:
In noncooperative cost sharing games, individually strategic agents choose resources based on how the welfare (cost or revenue) generated at each resource (which depends on the set of agents that choose the resource) is distributed. The focus is on finding distribution rules, e.g., the Shapley value (budget-balanced) and marginal contribution (not budget-balanced) rules, that lead to stable allocations, as formalized by the concept of Nash equilibrium.
Recent work that seeks to characterize the space of all such rules shows that the only budget-balanced distribution rules that guarantee equilibrium existence in all welfare sharing games are generalized weighted Shapley values (GWSVs); this is shown by exhibiting a specific 'worst-case' welfare function that forces GWSV rules to be used. Our work provides an exact characterization of the space of distribution rules (not necessarily budget-balanced) for any specific local welfare functions, for a general class of scalable and separable games with well-known applications, e.g., facility location, routing, network formation, and coverage games.
We show that all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to GWSV rules on some 'ground' welfare functions. Therefore, it is neither the existence of some worst-case welfare function, nor the restriction to budget-balance, that limits the design to GWSVs. Also, in order to guarantee equilibrium existence, it is necessary to work within the class of potential games, since GWSVs result in (weighted) potential games.
We also provide an alternative characterization: all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to generalized weighted marginal contribution (GWMC) rules on some 'ground' welfare functions. This result is due to a deeper fundamental connection between Shapley values and marginal contributions that our proofs expose: they are equivalent given a transformation connecting their ground welfare functions. (This connection leads to novel closed-form expressions for the GWSV potential function.) Since GWMCs are more tractable than GWSVs, a designer can trade off budget-balance against computational tractability in deciding which rule to implement.
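As a purely illustrative sketch (not the paper's characterization machinery), the snippet below computes the budget-balanced Shapley-value shares of a single resource's welfare by averaging marginal contributions over all join orders; the three-agent welfare function is made up.

```python
import itertools
from typing import Callable, Dict, FrozenSet, Tuple

def shapley_shares(agents: Tuple[str, ...],
                   welfare: Callable[[FrozenSet[str]], float]) -> Dict[str, float]:
    """Budget-balanced Shapley-value distribution rule at a single resource:
    each agent receives its marginal contribution averaged over all join orders."""
    shares = {a: 0.0 for a in agents}
    perms = list(itertools.permutations(agents))
    for order in perms:
        coalition: FrozenSet[str] = frozenset()
        for a in order:
            shares[a] += welfare(coalition | {a}) - welfare(coalition)
            coalition = coalition | {a}
    return {a: v / len(perms) for a, v in shares.items()}

if __name__ == "__main__":
    # Hypothetical concave local welfare depending only on how many agents
    # share the resource (diminishing returns).
    w = lambda s: {0: 0.0, 1: 6.0, 2: 10.0, 3: 12.0}[len(s)]
    print(shapley_shares(("a", "b", "c"), w))  # symmetric agents get equal shares summing to w(all)
```

The corresponding marginal-contribution rule would instead pay agent i the quantity w(S) - w(S \ {i}) for the realized set S, which is generally not budget-balanced.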
Abstract:
Limitation to an aqueous habitat is the most fundamental physiological constraint imposed upon fish; phrases such as 'like a fish out of water' convey our acceptance of the general unsuitability of fish for terrestrial existence. The constraints that restrict fish to an aquatic habitat relate to respiration, acid-base regulation, nitrogenous excretion, water balance and ionic regulation. A fish not adapted for an amphibious lifestyle, when removed from water, becomes hypoxic and hypercapnic and soon succumbs to respiratory acidosis, because the problems of excretion of H⁺ and CO₂ are more immediate than the lack of oxygen. This happens because fish gills collapse in air, while the ventilatory arrangements that move an incompressible medium (water) over them become ineffective.
Abstract:
A neural network is a highly interconnected set of simple processors. The many connections allow information to travel rapidly through the network, and due to their simplicity, many processors in one network are feasible. Together these properties imply that we can build efficient massively parallel machines using neural networks. The primary problem is how to specify the interconnections in a neural network. The various approaches developed so far, such as the outer product, learning algorithms, or energy functions, suffer from the following deficiencies: long training/specification times; no guarantee of working on all inputs; a requirement of full connectivity.
Alternatively, we discuss methods of using the topology and constraints of the problems themselves to design the topology and connections of the neural solution. We define several useful circuits, generalizations of the Winner-Take-All circuit, that allow us to incorporate constraints using feedback in a controlled manner. These circuits are proven to be stable and to converge only on valid states. We use the Hopfield electronic model since this is close to an actual implementation. We also discuss methods for incorporating these circuits into larger systems, neural and non-neural. By exploiting regularities in our definition, we can construct efficient networks. To demonstrate the methods, we look at three problems from communications. We first discuss two applications to problems from circuit switching: finding routes in large multistage switches, and the call rearrangement problem. These show both how we can use many neurons to build massively parallel machines and how the Winner-Take-All circuits can simplify our designs.
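As a hedged illustration of the kind of circuit discussed (not the thesis's Hopfield-model implementation), the sketch below simulates a simple mutual-inhibition Winner-Take-All network; the gains, time constant and inputs are illustrative choices.

```python
import numpy as np

def winner_take_all(inputs, steps=2000, dt=0.01, tau=1.0, inhibition=2.0):
    """Simple mutual-inhibition Winner-Take-All dynamics: each unit is excited
    by its external input and inhibited by the activity of all other units.
    After convergence, only the unit with the largest input stays active."""
    inputs = np.asarray(inputs, dtype=float)
    x = np.zeros_like(inputs)                      # internal states
    for _ in range(steps):
        y = np.maximum(x, 0.0)                     # rectified activities
        others = y.sum() - y                       # inhibition from competitors
        dx = (-x + inputs - inhibition * others) / tau
        x += dt * dx
    return np.maximum(x, 0.0)

if __name__ == "__main__":
    print(winner_take_all([0.3, 0.9, 0.5]))        # only the second unit remains active
```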
Next we develop a solution to the contention arbitration problem of high-speed packet switches. We define a useful class of switching networks and then design a neural network to solve the contention arbitration problem for this class. Various aspects of the neural network/switch system are analyzed to measure the queueing performance of this method. Using the basic design, a feasible architecture for a large (1024-input) ATM packet switch is presented. Using the massive parallelism of neural networks, we can consider algorithms that were previously computationally unattainable. These now viable algorithms lead us to new perspectives on switch design.
Abstract:
The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for the uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.
Load control needs to be conducted with network awareness. Otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks that are multiphase and radial, most of the power flow studies focus on single-phase networks.
This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.
Distributed implementation of load control is the main challenge if network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals, and propose a distributed load control algorithm, Algorithm 1, that optimally schedules the deferrable loads to shape the net electricity demand. Deferrable loads refer to loads whose total energy consumption is fixed but whose energy usage can be shifted over time in response to network conditions. Algorithm 1 is a distributed gradient descent algorithm, and empirically it converges to optimal deferrable load schedules within 15 iterations.
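The thesis's Algorithm 1 is not reproduced here; the following is only a minimal sketch in the same spirit, in which each deferrable load takes projected-gradient steps to flatten a made-up aggregate demand profile while keeping its own total energy fixed. The function names and all data are illustrative.

```python
import numpy as np

def project_to_budget(v, total):
    """Euclidean projection of v onto {q >= 0, sum(q) = total}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (total - css) / (np.arange(v.size) + 1.0) > 0)[0][-1]
    theta = (css[rho] - total) / (rho + 1.0)
    return np.clip(v - theta, 0.0, None)

def schedule_deferrable_loads(base_demand, energy, n_iter=15, step=0.5):
    """Each load takes projected-gradient steps on the 'flatness' cost
    C(p) = sum_t (base_t + sum_i p[i, t])**2, while keeping its own total
    energy fixed and its per-slot power nonnegative."""
    T = base_demand.size
    p = np.array([np.full(T, e / T) for e in energy])   # start from flat profiles
    for _ in range(n_iter):
        for i in range(len(energy)):
            total = base_demand + p.sum(axis=0)
            p[i] = project_to_budget(p[i] - step * 2.0 * total, energy[i])
    return p

if __name__ == "__main__":
    base = np.array([3.0, 5.0, 8.0, 6.0, 2.0, 1.0])     # hypothetical inflexible demand
    p = schedule_deferrable_loads(base, energy=[4.0, 2.0])
    print(np.round(base + p.sum(axis=0), 2))            # valley-filled aggregate demand
```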
We then extend Algorithm 1 to a real-time setup where deferrable loads arrive over time, and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm Algorithm 2 is based on model-predictive control: Algorithm 2 uses updated predictions on renewable generation as the true values, and computes a pseudo load to simulate future deferrable load. The pseudo load consumes 0 power at the current time step, and its total energy consumption equals the expectation of future deferrable load total energy request.
Network constraints, e.g., transformer loading constraints and voltage regulation constraints, bring significant challenges to the load control problem since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation and the other that seeks a locally optimal load schedule.
To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature, but is numerically much more stable. Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37-, and 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.
Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternative-current (AC) networks, and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.
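The following cvxpy sketch illustrates an SOCP relaxation of the branch flow (DistFlow) equations on a hypothetical two-bus feeder; it is not the thesis's BFM-SDP formulation or any of the cited test networks, and all impedance and load values are invented.

```python
# pip install cvxpy  (any conic solver bundled with it will do)
import cvxpy as cp

# Hypothetical two-bus feeder in per unit: substation (bus 0) -> one load bus.
r, x = 0.05, 0.02            # line resistance and reactance
p_load, q_load = 0.8, 0.3    # real and reactive demand at bus 1
v0 = 1.0                     # squared substation voltage (fixed)

P = cp.Variable()            # real power sent into the line
Q = cp.Variable()            # reactive power sent into the line
ell = cp.Variable(nonneg=True)   # squared line current magnitude
v1 = cp.Variable()           # squared voltage magnitude at bus 1

constraints = [
    P - r * ell == p_load,   # DistFlow real-power balance at bus 1
    Q - x * ell == q_load,   # DistFlow reactive-power balance at bus 1
    v1 == v0 - 2 * (r * P + x * Q) + (r**2 + x**2) * ell,
    # SOCP relaxation of the nonconvex equality ell * v0 == P**2 + Q**2:
    cp.sum_squares(cp.hstack([P, Q])) <= ell * v0,
    0.9**2 <= v1, v1 <= 1.1**2,      # voltage magnitude limits (squared)
]
problem = cp.Problem(cp.Minimize(P), constraints)   # minimize substation import
problem.solve()
print(f"P = {P.value:.4f}, losses = {r * ell.value:.4f}, v1 = {v1.value:.4f}")
```

When a relaxation of this kind is exact, the inequality holds with equality at the optimum, so the relaxed solution is also a physically consistent power flow; that is the property the exactness results above concern.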
To seek a locally optimal load schedule, a distributed gradient-descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized and is close to 0 for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 ensures a feasible solution. The gradients used in Algorithm 9 are estimated based on a linear approximation of the power flow, which is derived under the following assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 obtains a more than 70x speedup over the convex relaxation approach, at the cost of a suboptimality within numerical precision.
Abstract:
Until now, systematic investigations of the distribution of pollen in standing water bodies have hardly been conducted. To clarify how the pollen of different plants that falls into a lake is deposited in relation to its physical properties, the character of the lake, wind currents and other factors, pollen analyses were carried out on surface samples of the bottom sediments of 13 Lithuanian lakes. Lakes of different sizes (areas from 2333 ha to 8 ha) and different depths, not uniformly overgrown and situated in different physico-geographic regions of Lithuania, were selected. The investigation established that pollen of woody species predominates in the surface layer of the sediments of the Lithuanian lakes.
Abstract:
We describe the use of a Wigner distribution function approach for exploring the problem of extending the depth of field in a hybrid imaging system. The Wigner distribution function, in connection with the phase-space curve that formulates a joint phase-space description of an optical field, is employed as a tool to display and characterize the evolving behavior of the amplitude point spread function as the wave propagates along the optical axis. It provides a comprehensive exhibition of the characteristics of the hybrid imaging system in extending the depth of field, from the perspectives of both wave optics and geometrical optics. We use it to analyze several well-known optical designs for extending the depth of field from a new viewpoint. The relationships between this approach and the earlier ambiguity function approach are also briefly investigated. (c) 2006 Optical Society of America.
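As an illustrative numerical sketch (not the paper's derivation), the code below evaluates a discrete Wigner distribution of a one-dimensional field by Fourier-transforming its symmetric autocorrelation; the Gaussian test field and the omission of physical axis calibration are simplifying assumptions.

```python
import numpy as np

def wigner_distribution(field):
    """Discrete sketch of a Wigner distribution of a 1-D complex field:
    for each position, FFT the symmetric product E(x + m) E*(x - m) over the
    lag index m.  The frequency axis is left in raw DFT bins; physical
    calibration is omitted in this illustration."""
    n = field.size
    m = np.arange(-(n // 2), n // 2)                       # lag index
    W = np.zeros((n, n))
    for c in range(n):
        a = np.where((c + m >= 0) & (c + m < n), field[np.clip(c + m, 0, n - 1)], 0)
        b = np.where((c - m >= 0) & (c - m < n), field[np.clip(c - m, 0, n - 1)], 0)
        corr = np.fft.ifftshift(a * np.conj(b))            # put lag 0 first for the FFT
        W[c] = np.real(np.fft.fftshift(np.fft.fft(corr)))
    return W

if __name__ == "__main__":
    x = np.linspace(-8.0, 8.0, 256)
    aperture_field = np.exp(-x**2).astype(complex)         # illustrative Gaussian field
    W = wigner_distribution(aperture_field)
    print(W.shape)            # joint position / spatial-frequency representation
```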
Abstract:
Climate change is arguably the most critical issue facing our generation and the next. As we move towards a sustainable future, the grid is rapidly evolving with the integration of more and more renewable energy resources and the emergence of electric vehicles. In particular, large-scale adoption of residential and commercial solar photovoltaic (PV) plants is completely changing the traditional slowly-varying, unidirectional power flow nature of distribution systems. A high share of intermittent renewables poses several technical challenges, including voltage and frequency control. But along with these challenges, renewable generators also bring with them millions of new DC-AC inverter controllers each year. These fast power electronic devices can provide an unprecedented opportunity to increase energy efficiency and improve power quality, if combined with well-designed inverter control algorithms. The main goal of this dissertation is to develop scalable power flow optimization and control methods that achieve system-wide efficiency, reliability, and robustness for the power distribution networks of the future with high penetration of distributed inverter-based renewable generators.
Proposed solutions to power flow control problems in the literature range from fully centralized to fully local ones. In this thesis, we will focus on the two ends of this spectrum. In the first half of this thesis (chapters 2 and 3), we seek optimal solutions to voltage control problems given a centralized architecture with complete information. These solutions are particularly important for better understanding the overall system behavior and can serve as a benchmark against which to compare the performance of other control methods. To this end, we first propose a branch flow model (BFM) for the analysis and optimization of radial and meshed networks. This model leads to a new approach to solving optimal power flow (OPF) problems using a two-step relaxation procedure, which has proven to be both reliable and computationally efficient in dealing with the non-convexity of power flow equations in radial and weakly-meshed distribution networks. We will then apply the results to the fast time-scale inverter var control problem and evaluate the performance on real-world circuits in Southern California Edison's service territory.
The second half (chapters 4 and 5), however, is dedicated to studying local control approaches, as they are the only options available for immediate implementation on today's distribution networks, which lack sufficient monitoring and communication infrastructure. In particular, we will follow a reverse and forward engineering approach to study the recently proposed piecewise linear volt/var control curves. It is the aim of this dissertation to tackle some key problems in these two areas and to contribute by providing a rigorous theoretical basis for future work.
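A minimal sketch of the piecewise-linear volt/var idea is given below; the curve breakpoints, the single-inverter setting and the scalar voltage sensitivity are illustrative assumptions and not the dissertation's reverse/forward engineering analysis.

```python
import numpy as np

def volt_var_curve(v, v_lo=0.95, v_dead_lo=0.98, v_dead_hi=1.02, v_hi=1.05, q_max=0.4):
    """Piecewise-linear volt/var droop: inject reactive power when the local
    voltage is low, absorb it when the voltage is high, stay idle in the deadband."""
    return np.interp(v, [v_lo, v_dead_lo, v_dead_hi, v_hi], [q_max, 0.0, 0.0, -q_max])

# Toy closed loop: one inverter whose bus voltage responds linearly to its own
# reactive injection, v = v_ext + X_sens * q (illustrative sensitivity value).
v_ext, X_sens = 1.04, 0.05
q = 0.0
for _ in range(30):
    v = v_ext + X_sens * q
    q = volt_var_curve(v)
print(f"settled at v = {v:.4f} pu, q = {q:.3f} pu")   # the droop pulls v toward the deadband
```

With these illustrative numbers the loop gain is below one, so the local iteration settles; studying when such local loops are stable is exactly the kind of question the reverse/forward engineering analysis addresses.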
Abstract:
The present work deals with the problem of the interaction of electromagnetic radiation with a statistical distribution of nonmagnetic dielectric particles immersed in an infinite, homogeneous, isotropic, nonmagnetic medium. The wavelength of the incident radiation can be less than, equal to, or greater than the linear dimension of a particle. The distance between any two particles is several wavelengths. A single particle in the absence of the others is assumed to scatter like a Rayleigh-Gans particle, i.e. interaction between the volume elements (self-interaction) is neglected. The interaction of the particles is taken into account (multiple scattering), and conditions are set up for the case of a lossless medium which guarantee that the multiple scattering contribution is more important than the self-interaction one. These conditions relate the wavelength λ to the linear dimensions of a particle a and of the region occupied by the particles D. It is found that for constant λ/a, D is proportional to λ, and that |Δχ|, where Δχ is the difference in the dielectric susceptibilities between particle and medium, has to lie within a certain range.
The total scattered field is obtained as a series whose terms represent the corresponding multiple scattering orders. The first term is a single scattering term. The ensemble average of the total scattered intensity is then obtained as a series which does not involve terms due to products between terms of different orders. Thus the waves corresponding to different orders are independent and their Stokes parameters add.
The second and third order intensity terms are explicitly computed. The method used suggests a general approach for computing any order. It is found that in general the first order scattering intensity pattern (or phase function) peaks in the forward direction Θ = 0. The second order tends to smooth out the pattern giving a maximum in the Θ = π/2 direction and minima in the Θ = 0, Θ = π directions. This ceases to be true if ka (where k = 2π/λ) becomes large (> 20). For large ka the forward direction is further enhanced. Similar features are expected from the higher orders even though the critical value of ka may increase with the order.
The first order polarization of the scattered wave is determined. The ensemble average of the Stokes parameters of the scattered wave is explicitly computed for the second order. A similar method can be applied for any order. It is found that the polarization of the scattered wave depends on the polarization of the incident wave. If the latter is elliptically polarized, then the first order scattered wave is elliptically polarized, but in the Θ = π/2 direction it is linearly polarized. If the incident wave is circularly polarized, the first order scattered wave is elliptically polarized except in the directions Θ = π/2 (linearly polarized) and Θ = 0, π (circularly polarized). The handedness of the Θ = 0 wave is the same as that of the incident wave, whereas the handedness of the Θ = π wave is opposite. If the incident wave is linearly polarized, the first order scattered wave is also linearly polarized. The second order makes the total scattered wave elliptically polarized for any Θ, no matter what the incident wave is. However, the handedness of the total scattered wave is not altered by the second order. Higher orders have effects similar to those of the second order.
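As a small worked illustration of the statement that the Stokes parameters of independent scattering orders add (not a computation from the thesis), the snippet below combines a made-up linearly polarized first-order field with a weaker circularly polarized second-order contribution and evaluates the resulting degree of polarization.

```python
import numpy as np

def stokes(Ex, Ey):
    """Stokes parameters (I, Q, U, V) of a transverse field with complex
    amplitudes Ex, Ey; the sign of V encodes the handedness."""
    I = abs(Ex) ** 2 + abs(Ey) ** 2
    Q = abs(Ex) ** 2 - abs(Ey) ** 2
    U = 2.0 * np.real(Ex * np.conj(Ey))
    V = -2.0 * np.imag(Ex * np.conj(Ey))
    return np.array([I, Q, U, V])

# Made-up example fields in one scattering direction: a linearly polarized
# first-order wave plus a weaker circularly polarized second-order wave.
first_order = stokes(1.0, 0.0)
second_order = stokes(0.2, 0.2j)
total = first_order + second_order          # independent orders: Stokes vectors add
I, Q, U, V = total
dop = np.sqrt(Q**2 + U**2 + V**2) / I       # degree of polarization of the sum
print(total, f"  degree of polarization = {dop:.3f}")
```

The sum has both a nonzero Q and a nonzero V, i.e. it is elliptically polarized even though the first order alone is purely linear, which mirrors the qualitative effect of the second order described above.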
If the medium is lossy, the general approach employed for the lossless case is still valid; only the algebra increases in complexity. It is found that the results of the lossless case are insensitive to first order in k_im D, where k_im is the imaginary part of the wave vector k and D is a characteristic linear dimension of the region occupied by the particles. Thus moderately extended regions and small losses make (k_im D)^2 ≪ 1, and the lossy character of the medium does not alter the results of the lossless case. In general, the presence of losses tends to reduce the forward scattering.
Abstract:
In this study we employed a dynamic recurrent neural network (DRNN) in a novel fashion to reveal characteristics of control modules underlying the generation of muscle activations when drawing figures with the outstretched arm. We asked healthy human subjects to perform four different figure-eight movements in each of two workspaces (frontal plane and sagittal plane). We then trained a DRNN to predict the movement of the wrist from information in the EMG signals from seven different muscles. We trained different instances of the same network on a single movement direction, on all four movement directions in a single movement plane, or on all eight possible movement patterns and looked at the ability of the DRNN to generalize and predict movements for trials that were not included in the training set. Within a single movement plane, a DRNN trained on one movement direction was not able to predict movements of the hand for trials in the other three directions, but a DRNN trained simultaneously on all four movement directions could generalize across movement directions within the same plane. Similarly, the DRNN was able to reproduce the kinematics of the hand for both movement planes, but only if it was trained on examples performed in each one. As we will discuss, these results indicate that there are important dynamical constraints on the mapping of EMG to hand movement that depend on both the time sequence of the movement and on the anatomical constraints of the musculoskeletal system. In a second step, we injected EMG signals constructed from different synergies derived by PCA in order to identify the mechanical significance of each of these components. From these results, one can surmise that discrete-rhythmic movements may be constructed from three different fundamental modules, one regulating the co-activation of all muscles over the time span of the movement and two others eliciting patterns of reciprocal activation operating in orthogonal directions.
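The DRNN itself is not reproduced here; the sketch below only illustrates the PCA step used to extract candidate synergies, applied to a synthetic EMG envelope matrix built from three assumed activation patterns (not the study's recordings).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical EMG envelope matrix (time samples x 7 muscles), synthesized here
# from three assumed activation patterns so that the ground truth is known.
t = np.linspace(0.0, 2.0 * np.pi, 500)
patterns = np.stack([np.sin(t) ** 2,
                     np.exp(-(t - np.pi) ** 2),
                     0.5 + 0.5 * np.sin(2.0 * t)])
weights = rng.random((3, 7))
emg = patterns.T @ weights + 0.02 * rng.standard_normal((500, 7))

# PCA via SVD of the mean-centred data: the leading right singular vectors are
# the candidate synergies, and the explained variance suggests how many modules
# are needed to reconstruct the EMG.
centred = emg - emg.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained per component:", np.round(explained, 3))
# With three generating patterns, roughly three components should dominate.
```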
Abstract:
In traditional teaching, the fundamental concepts of electromagnetic induction are usually analyzed quickly, with most of the time spent solving problems in a more or less rote manner. However, physics education research has shown that the fundamental concepts of electromagnetic induction theory are barely understood by students. This article proposes an interactive teaching sequence introducing the topic of electromagnetic induction. The sequence has been designed based on contributions from physics education research. Particular attention is paid to the relationship between experimental findings (macroscopic level) and theoretical interpretation (microscopic level). An example of the activities that have been designed is also presented, describing the implementation context and the corresponding findings. Since the sequence was implemented, a considerable number of students have shown a more satisfactory grasp of the explanatory model of electromagnetic induction. However, difficulties persist in aspects that require a multilevel explanation, referring to deep structures where the system description is better defined.
Abstract:
Smart Grids are becoming a reality all over the world. Nowadays, the research efforts for the introduction and deployment of these grids are mainly focused on the development of the field of Smart Metering. This emerging application requires the use of technologies to access the significant number of points of supply (PoS) existing in the grid, covering the Low Voltage (LV) segment at the lowest possible cost. Power Line Communications (PLC) have been extensively used in electricity grids for a variety of purposes and, of late, have been the focus of renewed interest. PLC is well suited to quick and inexpensive pervasive deployments. However, no two LV grids are the same in any electricity company (utility), and the particularities of each grid's evolution, architecture, circumstances and materials make it a challenge to deploy Smart Metering networks with PLC technologies, with the Smart Grid as the ultimate goal. This paper covers the evolution of Smart Metering networks, together with the evolution of PLC technologies, until the two worlds converged to project PLC-enabled Smart Metering networks towards the Smart Grid. The paper develops guidelines on a set of strategic aspects of PLC Smart Metering network deployment, based on knowledge gathered in the field, and introduces the future challenges of these networks in their evolution towards the Smart Grid.
Abstract:
Fouling is an undesirable phenomenon that occurs on the surfaces of heat exchangers over the course of their operation, reducing their thermal effectiveness and increasing their resistance to flow. These effects have major economic and environmental consequences, owing to higher operating costs (additional energy is required), higher design costs (equipment with larger heat transfer areas is needed), hydraulic limitations (which can reduce the processed throughput) and increased emissions (more fossil fuel is burned to supply the additional energy required). In this context, the present work aims to provide robust computational tools that apply optimization techniques to the management of fouling in heat exchanger networks, in order to minimize its negative effects. These tools were developed using mathematical programming in the GAMS computational environment, and three distinct approaches to the fouling problem were investigated. One consists of identifying the optimal set of heat exchangers to be cleaned during a plant maintenance shutdown, in order to restore the heat duty of these units by removing the existing deposits. The other two approaches consist of optimizing the distribution of stream flow rates across parallel branches, one in a steady-state fashion and the other dynamically, in order to maximize energy recovery throughout the network. The performance of these three approaches is illustrated through a set of heat exchanger network examples, in which the real gains obtained with the developed optimization tools are demonstrated.
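As a hedged illustration of the first approach only (the thesis itself uses mathematical programming in GAMS), the sketch below enumerates cleaning subsets for a small, made-up network to pick the set of exchangers whose cleaning recovers the most heat duty within a maintenance window; all names and numbers are invented.

```python
from itertools import combinations

# Hypothetical data: duty (kW) recoverable by cleaning each fouled exchanger
# and the downtime (h) its cleaning requires during the maintenance stop.
recoverable_duty = {"E101": 450.0, "E102": 820.0, "E103": 300.0, "E104": 610.0, "E105": 150.0}
cleaning_time = {"E101": 8.0, "E102": 16.0, "E103": 6.0, "E104": 12.0, "E105": 4.0}
time_available = 24.0   # total maintenance window (h)

def best_cleaning_set(duty, time, budget):
    """Enumerate subsets of exchangers and keep the one that recovers the most
    duty while fitting in the maintenance window (fine for small networks)."""
    names = list(duty)
    best, best_gain = (), 0.0
    for k in range(len(names) + 1):
        for subset in combinations(names, k):
            if sum(time[e] for e in subset) <= budget:
                gain = sum(duty[e] for e in subset)
                if gain > best_gain:
                    best, best_gain = subset, gain
    return best, best_gain

chosen, gain = best_cleaning_set(recoverable_duty, cleaning_time, time_available)
print("clean:", chosen, "recovered duty:", gain, "kW")
```

For realistic network sizes this brute-force enumeration would be replaced by the kind of mathematical programming formulation the thesis develops.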
Abstract:
An approach to reconfiguring control systems in the event of major failures is advocated. The approach relies on the convergence of several technologies that are currently emerging: constrained predictive control, high-fidelity modelling of complex systems, fault detection and identification, and model approximation and simplification. Much work is needed, both theoretical and algorithmic, to make this approach practical, but we believe that there is enough evidence, especially from existing industrial practice, for the scheme to be considered realistic. After outlining the problem and proposed solution, the paper briefly reviews constrained predictive control and object-oriented modelling, which are the essential ingredients for practical implementation. The prospects for automatic model simplification are also reviewed briefly. The paper emphasizes some emerging trends in industrial practice, especially as regards modelling and control of complex systems. Examples from process control and flight control are used to illustrate some of the ideas.
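As a minimal illustration of the constrained predictive control ingredient (not the paper's reconfiguration scheme), the sketch below runs a receding-horizon controller with an input limit on a hypothetical double-integrator plant, using cvxpy; plant matrices, horizon and weights are illustrative.

```python
import numpy as np
import cvxpy as cp

# Illustrative double-integrator plant; the bound on u stands in for an
# actuator limit that a reconfigured controller must respect.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
N, u_max = 20, 1.0

def mpc_step(x0):
    """One receding-horizon step of constrained predictive control:
    solve a finite-horizon constrained QP and apply only the first input."""
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, constr = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.sum_squares(x[:, k + 1]) + 0.1 * cp.sum_squares(u[:, k])
        constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                   cp.abs(u[:, k]) <= u_max]
    cp.Problem(cp.Minimize(cost), constr).solve()
    return u.value[:, 0]

state = np.array([2.0, 0.0])
for _ in range(30):                      # closed-loop simulation
    state = A @ state + B @ mpc_step(state)
print("final state:", np.round(state, 3))
```

In a reconfiguration setting, the model (A, B) and the constraints would be updated online from the fault detection and model simplification layers described above.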
Abstract:
A case study of an aircraft engine manufacturer is used to analyze the effects of management levers on the lead time and design errors generated in an iteration-intensive concurrent engineering process. The levers considered are the amount of design-space exploration iteration, the degree of process concurrency, and the timing of design reviews. Simulation is used to show how the ideal combination of these levers can vary with changes in design problem complexity, which can increase, for instance, when novel technology is incorporated in a design. Results confirm that it is important to consider multiple iteration-influencing factors and their interdependencies to understand concurrent processes, because the factors can interact with confounding effects. The article also demonstrates a new approach to deriving a system dynamics model from a process task network. The new approach could be applied to analyze other concurrent engineering scenarios. © The Author(s) 2012.
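The article's system dynamics model is not reproduced here; the toy Monte Carlo sketch below only illustrates the kind of lever study described, trading off process concurrency against rework triggered by late upstream changes, with all task durations and probabilities invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def lead_time(overlap, p_change=0.3, n_runs=5000,
              t_up=10.0, t_down=8.0, rework_frac=0.5):
    """Monte Carlo sketch of a two-task concurrent process: the downstream task
    starts after a fraction (1 - overlap) of the upstream task is finished, and
    each upstream change released after that start forces partial downstream
    rework.  Returns the mean overall lead time."""
    times = np.empty(n_runs)
    for i in range(n_runs):
        start_down = (1.0 - overlap) * t_up
        finish_down = start_down + t_down
        # upstream changes arriving while downstream work is already under way
        n_changes = rng.binomial(int(overlap * t_up), p_change)
        finish_down += n_changes * rework_frac * t_down
        times[i] = max(t_up, finish_down)
    return times.mean()

for overlap in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"overlap {overlap:.2f}: mean lead time {lead_time(overlap):.2f}")
```

Even this toy shows an intermediate degree of concurrency minimizing lead time, echoing the article's point that iteration-influencing factors interact and must be studied together.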