8 results for General equilibrium, Efficiency, Oscillation
in CaltechTHESIS
Abstract:
The dissertation studies the general area of complex networked systems that consist of interconnected, active, heterogeneous components and typically operate in uncertain environments with incomplete information. Problems associated with such systems are typically large-scale and computationally intractable, yet they are also well-structured, with features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools that exploit those structures to obtain computationally efficient and distributed solutions, and to apply them to improve system operations and architecture.
Specifically, the thesis focuses on two concrete areas. The first is the design of distributed rules to manage distributed energy resources in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially at the distribution level, will be a large-scale network of distributed energy resources (DERs), each introducing random and rapid fluctuations in power supply, demand, voltage, and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability, but managing them and optimizing their operation poses daunting technical challenges. The focus of this dissertation is to develop scalable, distributed, real-time control and optimization that achieves system-wide efficiency, reliability, and robustness for the future power grid. In particular, we show how to exploit the power network structure to design efficient, distributed markets and algorithms for energy management. We also show how to connect these algorithms with physical dynamics and existing control mechanisms for real-time control in power networks.
The second focus is to develop distributed optimization rules for general multi-agent engineering systems. A central goal in multi-agent systems is to design local control laws for the individual agents that ensure the emergent global behavior is desirable with respect to a given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control on the least amount of information possible. Our work achieves this goal using the framework of game theory. In particular, we derive a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system-level design objective and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g., potential games. The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to the game-theoretic equilibrium. One main advantage of this game-theoretic approach is that it provides a hierarchical decomposition between the systemic objective (game design) and the specific local decision rules (distributed learning algorithms). This decomposition gives the system designer tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multi-agent systems. Furthermore, in many settings the resulting controllers are inherently robust to a host of uncertainties, including asynchronous clock rates, delays in information, and component failures.
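The potential-game machinery mentioned above can be illustrated with a toy congestion game (an illustrative sketch, not the thesis's design methodology; the game and costs here are invented for the example). Congestion games admit an exact potential function, so simple best-response dynamics converge to a Nash equilibrium:

```python
# A minimal congestion game: each of 3 agents picks one of 2 resources,
# paying the load on its chosen resource. Congestion games are exact
# potential games (Rosenthal), so best-response dynamics converge.
RESOURCES = [0, 1]

def cost(i, profile):
    """Agent i's cost: number of agents sharing its resource."""
    return profile.count(profile[i])

def potential(profile):
    """Rosenthal potential: sum over resources of 1 + 2 + ... + load."""
    return sum(l * (l + 1) // 2 for l in (profile.count(r) for r in RESOURCES))

def best_response_sweep(profile):
    """One pass of asynchronous best responses; True if anyone moved."""
    changed = False
    for i in range(len(profile)):
        for r in RESOURCES:
            trial = profile[:i] + [r] + profile[i + 1:]
            if cost(i, trial) < cost(i, profile):
                profile[i] = r
                changed = True
    return changed

profile = [0, 0, 0]
while best_response_sweep(profile):
    pass
print(sorted(profile))  # -> [0, 0, 1]: loads balance, no agent wants to switch
```

In the thesis's framework the local objective functions are derived from the system-level objective so that equilibria and the design objective coincide; here the congestion costs are simply postulated.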
Abstract:
This thesis comprises three chapters, each concerned with properties of allocational mechanisms that include voting procedures as part of their operation. The theme of interaction between economic and political forces recurs in all three chapters, as described below.
Chapter One demonstrates existence of a non-controlling interest shareholders' equilibrium for a stylized one-period stock market economy with fewer securities than states of the world. The economy has two decision mechanisms: Owners vote to change firms' production plans across states, fixing shareholdings; and individuals trade shares and the current production / consumption good, fixing production plans. A shareholders' equilibrium is a production plan profile, and a shares / current good allocation stable for both mechanisms. In equilibrium, no (Kramer direction-restricted) plan revision is supported by a share-weighted majority, and there exists no Pareto superior reallocation.
Chapter Two addresses efficient management of stationary-site, fixed-budget, partisan voter registration drives. Sufficient conditions obtain for unique optimal registrar deployment within contested districts. Each census tract is assigned an expected net plurality return to registration investment index, computed from estimates of registration, partisanship, and turnout. Optimum registration intensity is a logarithmic transformation of a tract's index. These conditions are tested using a merged data set including both census variables and Los Angeles County Registrar data from several 1984 Assembly registration drives. Marginal registration spending benefits, registrar compensation, and the general campaign problem are also discussed.
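The logarithmic allocation rule can be illustrated with a toy diminishing-returns model (entirely hypothetical: the functional form index_i·(1 − e^(−s_i)) and all numbers below are assumptions, not the chapter's estimated model). Maximizing total expected return under a fixed budget then makes each tract's spending a logarithmic transformation of its index:

```python
import math

# Toy model (hypothetical): tract i returns index_i * (1 - exp(-s_i))
# expected net plurality for spending s_i. Equalizing marginal returns
# under a budget gives s_i = log(index_i / lam) -- a log transform of
# the tract's index, echoing the rule stated in the abstract.
indices = [8.0, 4.0, 2.0]   # invented index values
budget = 6.0                # invented total budget

# Solve sum_i log(index_i / lam) = budget for the multiplier lam,
# assuming every tract receives positive spending.
lam = math.exp((sum(math.log(x) for x in indices) - budget) / len(indices))
spend = [math.log(x / lam) for x in indices]

# The budget is exhausted and marginal returns are equalized at lam.
assert abs(sum(spend) - budget) < 1e-9
```

Higher-index tracts receive more registrars, but only logarithmically more, reflecting diminishing returns to registration investment.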
The last chapter considers social decision procedures at a higher level of abstraction. Chapter Three analyzes the structure of decisive coalition families, given a quasitransitive-valued social decision procedure satisfying the universal domain and IIA axioms. By identifying those alternatives X* ⊆ X on which the Pareto principle fails, imposition in the social ranking is characterized. Every coalition is weakly decisive for X* over X∖X*, and weakly antidecisive for X∖X* over X*; therefore, alternatives in X∖X* are never socially ranked above X*. Repeated filtering of the alternatives causing Pareto failure shows that states in X^(n)*∖X^(n+1)* are never socially ranked above X^(n+1)*. Limiting results of iterated application of the *-operator are also discussed.
Abstract:
The Madden-Julian Oscillation (MJO) is a pattern of intense rainfall and associated planetary-scale circulations in the tropical atmosphere, with a recurrence interval of 30-90 days. Although the MJO was first discovered 40 years ago, it remains a challenge to simulate in general circulation models (GCMs), and even with simple models it is difficult to agree on its basic mechanisms. This deficiency stems mainly from our poor understanding of moist convection (deep cumulus clouds and thunderstorms), which occurs at scales smaller than the resolution elements of GCMs. Moist convection is the most important mechanism for transporting energy from the ocean to the atmosphere. Success in simulating the MJO will improve our understanding of moist convection and thereby improve weather and climate forecasting.
We address this fundamental subject by analyzing observational datasets, constructing a hierarchy of numerical models, and developing theories. Parameters of the models are taken from observation, and the simulated MJO fits the data without further adjustments. The major findings include: 1) the MJO may be an ensemble of convection events linked together by small-scale high-frequency inertia-gravity waves; 2) the eastward propagation of the MJO is determined by the difference between the eastward and westward phase speeds of the waves; 3) the planetary scale of the MJO is the length over which temperature anomalies can be effectively smoothed by gravity waves; 4) the strength of the MJO increases with the typical strength of convection, which increases in a warming climate; 5) the horizontal scale of the MJO increases with the spatial frequency of convection; and 6) triggered convection, where potential energy accumulates until a threshold is reached, is important in simulating the MJO. Our findings challenge previous paradigms, which consider the MJO as a large-scale mode, and point to ways for improving the climate models.
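Finding 2) can be illustrated numerically (the wave speeds below are assumed for illustration, not taken from the thesis):

```python
# Illustrative arithmetic only: if convection couples to eastward
# inertia-gravity waves of phase speed c_east and westward waves of
# speed c_west, finding (2) sets the envelope speed by their difference.
c_east, c_west = 16.0, 11.0   # m/s, hypothetical wave phase speeds
c_mjo = c_east - c_west
print(c_mjo)  # -> 5.0 m/s, roughly the observed eastward MJO speed
```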
Abstract:
This thesis presents a new class of solvers for the subsonic compressible Navier-Stokes equations in general two- and three-dimensional spatial domains. The proposed methodology incorporates: 1) a novel linear-cost implicit solver based on higher-order backward differentiation formulae (BDF) and the alternating direction implicit (ADI) approach; 2) a fast explicit solver; 3) dispersionless spectral spatial discretizations; and 4) a domain decomposition strategy that negotiates the interactions between the implicit and explicit domains. In particular, the implicit methodology is quasi-unconditionally stable (it does not suffer from CFL constraints for adequately resolved flows), and it can deliver orders of time accuracy between two and six in the presence of general boundary conditions. In fact, this thesis presents, for the first time in the literature, high-order time-convergence curves for Navier-Stokes solvers based on the ADI strategy; previous ADI solvers for the Navier-Stokes equations have not demonstrated orders of temporal accuracy higher than one. An extended discussion places the observed quasi-unconditional stability of the methods of orders two through six on a solid theoretical basis. The performance of the proposed solvers is favorable. For example, a two-dimensional rough-surface configuration including boundary-layer effects at Reynolds number one million and Mach number 0.85 (with a well-resolved boundary layer, run long enough that single vortices travel the entire spatial extent of the domain, and with spatial mesh sizes near the wall on the order of one hundred-thousandth of the domain length) was successfully tackled in a relatively short, approximately thirty-hour, single-core run; for such discretizations an explicit solver would require truly prohibitive computing times.
Further, as demonstrated via a variety of numerical experiments in two and three dimensions, the proposed multi-domain parallel implicit-explicit implementations exhibit high-order convergence in space and time, useful stability properties, limited dispersion, and high parallel efficiency.
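As a small illustration of the time discretization underlying point 1) (a hedged sketch only: the scalar model problem and step counts are chosen for the example, and none of the ADI or spectral machinery appears), the second-order BDF formula applied to y' = -y exhibits the expected order-two convergence in time:

```python
import math

# BDF2 on the scalar test problem y' = -y, y(0) = 1, integrated to T = 1.
# Halving the step size should cut the error by ~4x (second order).
def bdf2_error(n_steps, T=1.0):
    h = T / n_steps
    y0, y1 = 1.0, math.exp(-h)   # seed the two-step formula with the exact value
    for _ in range(n_steps - 1):
        # BDF2: y_{n+2} - (4/3) y_{n+1} + (1/3) y_n = (2h/3) f(y_{n+2}), f(y) = -y
        y0, y1 = y1, ((4.0 / 3.0) * y1 - (1.0 / 3.0) * y0) / (1.0 + 2.0 * h / 3.0)
    return abs(y1 - math.exp(-T))

e1, e2 = bdf2_error(100), bdf2_error(200)
print(e1 / e2)  # close to 4, confirming order-two time accuracy
```

The implicit update requires only a scalar division here; in the thesis, the analogous linear solves are kept linear-cost by the ADI splitting.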
Abstract:
We present a complete system for Spectral Cauchy characteristic extraction (Spectral CCE). Implemented in C++ within the Spectral Einstein Code (SpEC), the method employs numerous innovative algorithms to efficiently calculate the Bondi strain, news, and flux.
Spectral CCE was envisioned to ensure physically accurate gravitational waveforms computed for the Laser Interferometer Gravitational-Wave Observatory (LIGO) and similar experiments, while working toward a template bank of more than a thousand waveforms to span the seven-dimensional parameter space of the binary black hole (BBH) problem.
The Bondi strain, news, and flux are physical quantities central to efforts to understand and detect astrophysical gravitational wave sources within the Simulations of eXtreme Spacetimes (SXS) collaboration, with the ultimate aim of providing the first strong-field probe of the Einstein field equations.
In a series of included papers, we demonstrate stability, convergence, and gauge invariance. We also demonstrate agreement between Spectral CCE and the legacy Pitt null code, while achieving a factor of 200 improvement in computational efficiency.
Spectral CCE represents a significant computational advance. It is the foundation upon which further capability will be built, specifically enabling the complete calculation of junk-free, gauge-free, and physically valid waveform data on the fly within SpEC.
Abstract:
Since the discovery in 1962 of laser action in semiconductor diodes made from GaAs, the study of spontaneous and stimulated light emission from semiconductors has become an exciting new field combining semiconductor physics and quantum electronics. Included in the limited number of direct-gap semiconductor materials suitable for laser action are the members of the lead salt family, i.e., PbS, PbSe, and PbTe. The material used for the experiments described herein is PbTe. The semiconductor PbTe is a narrow band-gap material (Eg = 0.19 electron volt at a temperature of 4.2°K). Therefore, the radiative recombination of electron-hole pairs between the conduction and valence bands produces photons whose wavelength is in the infrared (λ ≈ 6.5 microns in air).
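The quoted wavelength follows directly from the band gap via λ = hc/E_g; a quick consistency check using standard physical constants:

```python
# Photon wavelength at the PbTe band gap, lambda = h*c / E_g.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron volt
E_g = 0.19       # PbTe band gap, eV (from the abstract)

lam_um = h * c / (E_g * eV) * 1e6   # wavelength in microns
print(round(lam_um, 1))  # -> 6.5, matching the abstract
```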
The p-n junction diode is a convenient device in which the spontaneous and stimulated emission of light can be achieved via current flow in the forward-bias direction. Consequently, the experimental devices consist of a group of PbTe p-n junction diodes made from p-type single-crystal bulk material. The p-n junctions were formed by an n-type vapor-phase diffusion perpendicular to the (100) plane, with a junction depth of approximately 75 microns. Opposite ends of the diode structure were cleaved to give parallel reflectors, thereby forming the Fabry-Perot cavity needed for a laser oscillator. Since the emission of light originates from the recombination of injected current carriers, the nature of the radiation depends on the injection mechanism.
The total intensity of the light emitted from the PbTe diodes was observed over a current range of three to four orders of magnitude. At the low current levels, the light intensity data were correlated with data obtained on the electrical characteristics of the diodes. In the low current region (region A), the light intensity, current-voltage and capacitance-voltage data are consistent with the model for photon-assisted tunneling. As the current is increased, the light intensity data indicate the occurrence of a change in the current injection mechanism from photon-assisted tunneling (region A) to thermionic emission (region B). With the further increase of the injection level, the photon-field due to light emission in the diode builds up to the point where stimulated emission (oscillation) occurs. The threshold current at which oscillation begins marks the beginning of a region (region C) where the total light intensity increases very rapidly with the increase in current. This rapid increase in intensity is accompanied by an increase in the number of narrow-band oscillating modes. As the photon density in the cavity continues to increase with the injection level, the intensity gradually enters a region of linear dependence on current (region D), i.e. a region of constant (differential) quantum efficiency.
Data obtained from measurements of the stimulated-mode light-intensity profile and the far-field diffraction pattern (both in the direction perpendicular to the junction plane) indicate that the active region of high gain (i.e., the region where a population inversion exists) extends to approximately a diffusion length on both sides of the junction. The data also indicate that the confinement of the oscillating modes within the diode cavity is due to a variation in the real part of the dielectric constant, caused by the gain in the medium. A value of τ ≈ 10⁻⁹ second for the minority-carrier recombination lifetime (at a diode temperature of 20.4°K) is obtained from the above measurements. This value for τ is consistent with other data obtained independently for PbTe crystals.
Data on the threshold current for stimulated emission (for a diode temperature of 20.4°K) as a function of the reciprocal cavity length were obtained. These data yield a value of J′_th = (400 ± 80) amp/cm² for the threshold current in the limit of an infinitely long diode cavity. A value of α = (30 ± 15) cm⁻¹ is obtained for the total (bulk) cavity loss constant, in general agreement with independent measurements of free-carrier absorption in PbTe. In addition, the data provide a value of η_s ≈ 10% for the internal spontaneous quantum efficiency. This value of η_s yields values of τ_b ≈ τ ≈ 10⁻⁹ second and τ_s ≈ 10⁻⁸ second for the nonradiative and the spontaneous (radiative) lifetimes, respectively.
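The extrapolation to an infinitely long cavity can be sketched with the textbook threshold condition g_th = α + (1/L)·ln(1/R) (an assumption here: the facet reflectivity R = 0.5 and the proportionality between threshold current and gain are invented for illustration, while α and J′_th are the values quoted above):

```python
import math

# Threshold gain must balance bulk loss plus mirror loss:
#   g_th = alpha + (1/L) * ln(1/R)
# with J_th taken proportional to g_th. As 1/L -> 0 the mirror-loss
# term vanishes, leaving the bulk-limited threshold J'_th.
alpha = 30.0           # bulk cavity loss, cm^-1 (quoted in the abstract)
R = 0.5                # assumed facet reflectivity (hypothetical)
k = 400.0 / alpha      # amp/cm^2 per cm^-1 of gain, so J'_th = 400 at 1/L = 0

def j_th(L_cm):
    """Threshold current density for a cavity of length L_cm."""
    return k * (alpha + math.log(1.0 / R) / L_cm)

print(j_th(1e9))  # -> ~400 amp/cm^2: the infinitely-long-cavity limit
```

Plotting J_th against 1/L gives a straight line whose intercept is J′_th, which is how the quoted (400 ± 80) amp/cm² value is extracted.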
The external quantum efficiency (η_d) for stimulated emission from diode J-2 (at 20.4°K) was calculated using the total light intensity vs. diode current data, plus accepted values for the material parameters of the mercury-doped germanium detector used for the measurements. The resulting value is η_d ≈ 10%-20% for emission from both ends of the cavity. The corresponding radiative power output (at λ = 6.5 microns) is 120-240 milliwatts for a diode current of 6 amps.
Abstract:
We are at the cusp of a historic transformation of both the communication system and the electricity system. This creates challenges as well as opportunities for the study of networked systems. Problems in these systems typically involve a huge number of endpoints that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.
This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve the reliability as well as the efficiency of communication networks. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of the system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation, and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties for the behavior of existing algorithms, and we motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel and use our prototype to compare it with existing MP-TCP algorithms.
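To illustrate what a fluid model looks like (this is the classic single-path Reno approximation, not the multipath model or Balia, and the parameters are invented), a congestion window obeying dw/dt = 1/RTT − p·x·w/2 with sending rate x = w/RTT settles at the equilibrium w* = √(2/p):

```python
# Single-path TCP fluid model (hedged stand-in for the thesis's coupled
# multipath ODEs): additive increase of one packet per RTT, multiplicative
# decrease of w/2 at loss rate p on the packet stream x = w/RTT.
RTT, p = 0.1, 0.01   # hypothetical round-trip time (s) and loss probability
w, dt = 1.0, 0.001   # initial window and Euler step

for _ in range(200000):          # integrate to t = 200 s
    x = w / RTT                  # sending rate, packets/s
    w += dt * (1.0 / RTT - p * x * w / 2.0)

w_star = (2.0 / p) ** 0.5        # analytic equilibrium, sqrt(2/p)
print(round(w, 2))  # -> 14.14, matching w* = sqrt(200)
```

Existence, uniqueness, and stability questions like the ones the abstract raises are asked of exactly such equilibria, but for coupled windows across paths.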
Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost, such as power loss. It is a mixed-integer nonlinear program and hence hard to solve. We propose a heuristic algorithm based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks that we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose a still faster algorithm that incurs a loss in optimality of less than 3% on the test networks.
Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective, such as generation cost or power loss. Traditionally, OPF is solved in a centralized manner. With increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because the power flow equations are nonlinear and Kirchhoff's laws couple the entire network. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results showing that a globally optimal solution of OPF over a radial network can be obtained through a second-order cone program (SOCP) or semidefinite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM); but unlike standard ADMM-based distributed OPF algorithms, which require solving optimization subproblems using iterative methods, the proposed solutions exploit the problem structure to greatly reduce computation time. Specifically, for balanced networks, our decomposition allows us to derive closed-form solutions for the subproblems, which speeds up convergence by 1000x in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, and computation time is reduced by 100x compared with iterative methods.
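The benefit of closed-form subproblem updates can be seen in a minimal ADMM sketch (a generic consensus toy problem, not the OPF decomposition; all numbers are invented): each update is a one-line formula, so no inner iterative solver is needed.

```python
# ADMM on: minimize f(x) + g(z) subject to x = z,
# with f(x) = (x - a)^2 / 2 and g(z) = (z - b)^2 / 2.
# Both subproblems are quadratic, so each update is closed form --
# the structural property the abstract exploits at scale.
a, b, rho = 3.0, 7.0, 1.0
x = z = u = 0.0                              # primal vars and scaled dual
for _ in range(100):
    x = (a + rho * (z - u)) / (1.0 + rho)    # argmin_x f(x) + rho/2 (x - z + u)^2
    z = (b + rho * (x + u)) / (1.0 + rho)    # argmin_z g(z) + rho/2 (x - z + u)^2
    u = u + x - z                            # scaled dual ascent on x = z
print(round(x, 4), round(z, 4))  # -> 5.0 5.0, the consensus optimum (a + b)/2
```

In the thesis's setting, x and z correspond to per-bus and per-line variables of the relaxed OPF, and the same pattern (closed-form or small fixed-size updates) is what removes the inner iterative solves.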
Abstract:
I. The binding of the intercalating dye ethidium bromide to closed circular SV 40 DNA causes an unwinding of the duplex structure and a simultaneous and quantitatively equivalent unwinding of the superhelices. The buoyant densities and sedimentation velocities of both intact (I) and singly nicked (II) SV 40 DNAs were measured as a function of free dye concentration. The buoyant density data were used to determine the binding isotherms over a dye concentration range extending from 0 to 600 µg/ml in 5.8 M CsCl. At high dye concentrations all of the binding sites in II, but not in I, are saturated. At free dye concentrations less than 5.4 µg/ml, I has a greater affinity for dye than II. At a critical amount of dye bound I and II have equal affinities, and at higher dye concentration I has a lower affinity than II. The number of superhelical turns, τ, present in I is calculated at each dye concentration using Fuller and Waring's (1964) estimate of the angle of duplex unwinding per intercalation. The results reveal that SV 40 DNA I contains about -13 superhelical turns in concentrated salt solutions.
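The τ calculation can be sketched arithmetically (the critical binding number computed below is inferred from the quoted figures for illustration, not stated in the abstract):

```python
# With Fuller and Waring's estimate of 12 degrees of duplex unwinding per
# intercalated dye molecule, each bound ethidium removes 12/360 of one
# superhelical turn. The number of dye molecules n_c needed to relax the
# native superhelices then follows from tau_0.
phi = 12.0      # degrees unwound per intercalation (Fuller & Waring, 1964)
tau0 = -13.0    # native superhelical turns of SV 40 DNA I (from the abstract)

n_c = abs(tau0) * 360.0 / phi
print(n_c)  # -> 390.0 bound dye molecules at the point where tau reaches zero
```

This relaxation point is the critical amount of dye bound at which I and II show equal dye affinities.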
The free energy of superhelix formation is calculated as a function of τ from a consideration of the effect of the superhelical turns upon the binding isotherm of ethidium bromide to SV 40 DNA I. The value of the free energy is about 100 kcal/mole DNA in the native molecule. The free energy estimates are used to calculate the pitch and radius of the superhelix as a function of the number of superhelical turns. The pitch and radius of the native I superhelix are 430 Å and 135 Å, respectively.
A buoyant density method for the isolation and detection of closed circular DNA is described. The method is based upon the reduced binding of the intercalating dye ethidium bromide by closed circular DNA. In an application of this method, it is found that HeLa cells contain, in addition to closed circular mitochondrial DNA of mean length 4.81 microns, a heterogeneous group of smaller DNA molecules varying in size from 0.2 to 3.5 microns and a paucidisperse group of multiples of the mitochondrial length.
II. The general theory is presented for the sedimentation equilibrium of a macromolecule in a concentrated binary solvent in the presence of an additional reacting small molecule. Equations are derived for the calculation of the buoyant density of the complex and for the determination of the binding isotherm of the reagent to the macrospecies. The standard buoyant density, a thermodynamic function, is defined and the density gradients which characterize the four component system are derived. The theory is applied to the specific cases of the binding of ethidium bromide to SV 40 DNA and of the binding of mercury and silver to DNA.