917 results for Statistical mixture-design optimization
Abstract:
The brain is perhaps the most complex system to have ever been subjected to rigorous scientific investigation. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine to an entire cortical area. Slowly, we are beginning to acquire experimental tools that can gather the massive amounts of data needed to characterize this system. However, to understand and interpret these data will also require substantial strides in inferential and statistical techniques. This dissertation attempts to meet this need, extending and applying the modern tools of latent variable modeling to problems in neural data analysis.
It is divided into two parts. The first begins with an exposition of the general techniques of latent variable modeling. A new and extremely general optimization algorithm, called Relaxation Expectation Maximization (REM), is proposed that may be used to learn the optimal parameter values of arbitrary latent variable models. This algorithm appears to alleviate the common problem of convergence to local, sub-optimal likelihood maxima. REM leads to a natural framework for model size selection; in combination with standard model selection techniques, the quality of fits may be further improved, while the appropriate model size is automatically and efficiently determined. Next, a new latent variable model, the mixture of sparse hidden Markov models, is introduced, and approximate inference and learning algorithms are derived for it. This model is applied in the second part of the thesis.
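Since REM builds on the ordinary EM loop for latent variable models, a minimal sketch of plain EM for a two-component Gaussian mixture (the baseline that REM generalizes; the data and parameter choices below are purely illustrative) may help fix ideas:

```python
import numpy as np

def em_gmm_1d(x, n_iter=200):
    """Standard EM for a two-component 1-D Gaussian mixture.

    This is the vanilla E-step/M-step loop; REM (the thesis's Relaxation EM)
    is described as a generalization aimed at avoiding its local maxima.
    """
    mu = np.array([x.min(), x.max()], dtype=float)  # well-separated init
    var = np.full(2, x.var())
    pi = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = (pi / np.sqrt(2 * np.pi * var)
                * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances from responsibilities
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
pi, mu, var = em_gmm_1d(data)
```

Each iteration is guaranteed not to decrease the likelihood, which is exactly why EM can stall at a sub-optimal local maximum when the initialization is poor.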
The second part brings the technology of part I to bear on two important problems in experimental neuroscience. The first is known as spike sorting; this is the problem of separating the spikes from different neurons embedded within an extracellular recording. The dissertation offers the first thorough statistical analysis of this problem, which then yields the first powerful probabilistic solution. The second problem addressed is that of characterizing the distribution of spike trains recorded from the same neuron under identical experimental conditions. A latent variable model is proposed. Inference and learning in this model lead to new principled algorithms for smoothing and clustering of spike data.
Abstract:
Granular crystals are compact periodic assemblies of elastic particles in Hertzian contact whose dynamic response can be tuned from strongly nonlinear to linear by the addition of a static precompression force. This unique feature allows for a wide range of studies that include the investigation of new fundamental nonlinear phenomena in discrete systems such as solitary waves, shock waves, discrete breathers and other defect modes. In the absence of precompression, a particularly interesting property of these systems is their ability to support the formation and propagation of spatially localized soliton-like waves with highly tunable properties. The wealth of parameters one can modify (particle size, geometry and material properties, periodicity of the crystal, presence of a static force, type of excitation, etc.) makes them ideal candidates for the design of new materials for practical applications. This thesis describes several ways to optimally control and tailor the propagation of stress waves in granular crystals through the use of heterogeneities (interstitial defect particles and material heterogeneities) in otherwise perfectly ordered systems. We focus on uncompressed two-dimensional granular crystals with interstitial spherical intruders and composite hexagonal packings and study their dynamic response using a combination of experimental, numerical and analytical techniques. We first investigate the interaction of defect particles with a solitary wave and utilize this fundamental knowledge in the optimal design of novel composite wave guides, shock or vibration absorbers obtained using gradient-based optimization methods.
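The strongly nonlinear behavior described above comes from the tensionless Hertzian contact law, F = A δ^{3/2} for overlap δ > 0 (and zero force when beads separate). A small velocity-Verlet sketch of an uncompressed monodisperse chain (dimensionless units; all parameters are illustrative, not taken from the thesis) shows a localized pulse forming after an impulse on the first bead:

```python
import numpy as np

def accel(u, A=1.0, m=1.0):
    """Accelerations in a 1-D granular chain with tensionless Hertz contacts."""
    d = np.maximum(u[:-1] - u[1:], 0.0)   # overlap between neighboring beads
    f = A * d ** 1.5                       # Hertzian contact force F = A*d^(3/2)
    a = np.zeros_like(u)
    a[:-1] -= f / m                        # reaction on the left bead
    a[1:] += f / m                         # push on the right bead
    return a

N, dt, steps = 50, 1e-3, 15000
u = np.zeros(N)                            # bead displacements
v = np.zeros(N); v[0] = 1.0                # impulse on the first bead
a = accel(u)
for _ in range(steps):                     # velocity-Verlet integration
    u += v * dt + 0.5 * a * dt ** 2
    a_new = accel(u)
    v += 0.5 * (a + a_new) * dt
    a = a_new

# total energy: kinetic + Hertz contact potential, (2/5) A d^(5/2) per contact
d = np.maximum(u[:-1] - u[1:], 0.0)
energy = 0.5 * (v ** 2).sum() + 0.4 * (d ** 2.5).sum()
```

With zero precompression there is no linear sound speed, and the impulse self-organizes into the compact soliton-like wave the abstract refers to.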
Abstract:
This work describes the design and synthesis of a true, heterogeneous, asymmetric catalyst. The catalyst consists of a thin film that resides on a high-surface-area hydrophilic solid and is composed of a chiral, hydrophilic organometallic complex dissolved in ethylene glycol. Reactions of prochiral organic reactants take place predominantly at the ethylene glycol-bulk organic interface.
The synthesis of this new heterogeneous catalyst is accomplished in a series of designed steps. A novel, water-soluble, tetrasulfonated 2,2'-bis(diphenylphosphino)-1,1'-binaphthyl (BINAP-4SO_3Na) is synthesized by direct sulfonation of 2,2'-bis(diphenylphosphino)-1,1'-binaphthyl (BINAP). The rhodium (I) complex of BINAP-4SO_3Na is prepared and is shown to be the first homogeneous catalyst to perform asymmetric reductions of prochiral 2-acetamidoacrylic acids in neat water with enantioselectivities as high as those obtained in non-aqueous solvents. The ruthenium (II) complex, [Ru(BINAP-4SO_3Na)(benzene)Cl]Cl, is also synthesized and exhibits a broader substrate specificity as well as higher enantioselectivities for the homogeneous asymmetric reduction of prochiral 2-acylamino acid precursors in water. Aquation of the ruthenium-chloro bond in water is found to be detrimental to the enantioselectivity with some substrates. Replacement of water by ethylene glycol results in the same high e.e. values as those found in neat methanol. The ruthenium complex is impregnated onto a controlled pore-size glass CPG-240 by the incipient wetness technique. Anhydrous ethylene glycol is used as the immobilizing agent in this heterogeneous catalyst, and a non-polar 1:1 mixture of chloroform and cyclohexane is employed as the organic phase.
Asymmetric reduction of 2-(6'-methoxy-2'-naphthyl)acrylic acid to the non-steroidal anti-inflammatory agent, naproxen, is accomplished with this heterogeneous catalyst at a third of the rate observed in homogeneous solution with an e.e. of 96% at a reaction temperature of 3°C and 1,400 psig of hydrogen. No leaching of the ruthenium complex into the bulk organic phase is found at a detection limit of 32 ppb. Recycling of the catalyst is possible without any loss in enantioselectivity. Long-term stability of this new heterogeneous catalyst is proven by a self-assembly test. That is, under the reaction conditions, the individual components of the present catalytic system self-assemble into the supported-catalyst configuration.
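Enantiomeric excess is the figure of merit quoted above; a 96% e.e. corresponds to a 98:2 ratio of the two enantiomers. A one-line check (the 98:2 split is the implied ratio, shown here for illustration):

```python
def enantiomeric_excess(major, minor):
    """e.e. (%) = 100 * (major - minor) / (major + minor)."""
    return 100.0 * (major - minor) / (major + minor)

ee = enantiomeric_excess(98, 2)   # 98:2 mixture of (S)- and (R)-product
```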
The strategies outlined here for the design and synthesis of this new heterogeneous catalyst are general, and can hopefully be applied to the development of other heterogeneous, asymmetric catalysts.
Abstract:
Signal processing techniques play important roles in the design of digital communication systems, including information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By drawing on communication theory and system implementation technologies, signal processing specialists develop efficient schemes for various communication problems, judiciously exploiting mathematical tools such as analysis, probability theory, matrix theory and optimization theory. In recent years, researchers have realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of physical communication channels. Using matrix-vector notation, many MIMO transceiver (precoder and equalizer) design problems can be solved with matrix and optimization theory. Furthermore, researchers have shown that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), geometric mean decomposition (GMD) and generalized triangular decomposition (GTD), provide unified frameworks for solving many point-to-point MIMO transceiver design problems.
In this thesis, we consider transceiver design problems for linear time-invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels; the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions and the use of matrix decompositions and majorization theory in practical transmit-receive scheme design for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, new algorithms are proposed, and performance analyses are derived.
The first part of the thesis focuses on transceiver design for LTI flat MIMO channels. We propose a novel matrix decomposition, the generalized geometric mean decomposition (GGMD), which iteratively decomposes a complex matrix into a product of sets of semi-unitary matrices and upper triangular matrices. The complexity of GGMD is always less than or equal to that of the geometric mean decomposition (GMD), and the optimal GGMD parameters that yield the minimal complexity are derived. Based on channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver that achieves the minimum average mean square error (MSE) under a total transmit power constraint. A novel iterative detection algorithm for this receiver is also proposed. For cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be computed easily, the proposed GGMD transceiver has a K/log_2(K)-fold complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. Performance analysis shows that the GGMD DFE transceiver converts a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs); hence the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate on the subchannels.
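The GGMD itself is not reproduced here, but the parallel-subchannel picture it refines can be seen with a plain SVD: precoding with V and equalizing with U^H turns the channel into independent scalar gains, while GMD-type decompositions instead equalize those gains to the geometric mean of the singular values. A numpy sketch with a random channel (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # flat MIMO channel
U, s, Vh = np.linalg.svd(H)

# SVD transceiver: precoder V, equalizer U^H -> parallel subchannels
# with unequal gains s_k (so per-subchannel bit allocation is needed)
Heff = U.conj().T @ H @ Vh.conj().T

# GMD-type designs instead target identical subchannel gains equal to the
# geometric mean of the singular values, which fixes a common SINR
g = np.prod(s) ** (1.0 / len(s))
```

Equal subchannel gains are what remove the need for bit allocation in the GMD/GGMD designs discussed above.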
In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels under the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal, since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, we use the proposed ST-GTD to develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks) and the average per-ST-block BER in the moderate-to-high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST block. In general, the newly proposed transceivers perform better than the GGMD-based systems, since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD-based system that does not require channel prediction but shares the same asymptotic BER performance as the ST-GMD DFE transceiver is also proposed.
The third part of the thesis considers two quality-of-service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and the maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on the QR decomposition are proposed; they are realizable for an arbitrary number of users.
Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) over LTV scalar channels. For both known LTV channels and unknown wide-sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT so that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multipath Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods such as MUSIC and ESPRIT can be used to estimate the multipath delays, and theoretically up to O(M^2) paths can be identified. With the delay information, an MMSE estimator of the frequency response is derived. Simulations show that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
Abstract:
The dissertation studies the general area of complex networked systems that consist of interconnected and active heterogeneous components and usually operate in uncertain environments and with incomplete information. Problems associated with those systems are typically large-scale and computationally intractable, yet they are also very well-structured, with features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools to exploit those structures that can lead to computationally efficient and distributed solutions, and to apply them to improve system operations and architecture.
Specifically, the thesis focuses on two concrete areas. The first is to design distributed rules to manage distributed energy resources in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially on the distribution side, will be a large-scale network of distributed energy resources (DERs), each introducing random and rapid fluctuations in power supply, demand, voltage and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability, but there are daunting technical challenges in managing them and optimizing their operation. The focus of this dissertation is to develop scalable, distributed, and real-time control and optimization to achieve system-wide efficiency, reliability, and robustness for the future power grid. In particular, we will present how to exploit the power network structure to design efficient, distributed markets and algorithms for energy management. We will also show how to connect the algorithms with physical dynamics and existing control mechanisms for real-time control in power networks.
The second focus is to develop distributed optimization rules for general multi-agent engineering systems. A central goal in multi-agent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to a given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control on the least amount of information possible. Our work achieves this goal using the framework of game theory. In particular, we derived a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system-level design objective and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g., potential games. The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to the game-theoretic equilibrium. One main advantage of this game-theoretic approach is that it provides a hierarchical decomposition between the design of the systemic objective (game design) and the specific local decision rules (distributed learning algorithms). This decomposition gives the system designer tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multi-agent systems. Furthermore, in many settings the resulting controllers will be inherently robust to a host of uncertainties including asynchronous clock rates, delays in information, and component failures.
Abstract:
A general framework for multi-criteria optimal design is presented which is well-suited for automated design of structural systems. A systematic computer-aided optimal design decision process is developed which allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest related to different aspects such as design, construction, and operation.
The proposed optimal design process requires the selection of the most promising choice of design parameters taken from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of the design uses performance parameters which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form. These preference functions give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain a design that has the highest overall evaluation measure - an optimization problem.
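As a sketch of the preference-function idea (the criteria, thresholds, and combination rule below are hypothetical choices for illustration, not the framework's prescribed ones):

```python
import math

def pref_cost(cost, good=1e5, bad=2e5):
    """Preference in [0,1]: fully satisfied below `good`, falling linearly
    to zero at `bad` (hypothetical thresholds for a construction-cost criterion)."""
    if cost <= good:
        return 1.0
    if cost >= bad:
        return 0.0
    return (bad - cost) / (bad - good)

def pref_drift(drift, limit=0.02):
    """Soft limit on a structural response parameter, e.g. drift ratio (hypothetical)."""
    return max(0.0, 1.0 - drift / limit)

def overall(prefs, weights):
    """One common preference combination rule: a weighted geometric mean,
    so that any criterion scoring zero vetoes the whole design."""
    return math.prod(p ** w for p, w in zip(prefs, weights))

score = overall([pref_cost(1.2e5), pref_drift(0.01)], [0.5, 0.5])
```

The optimizer then searches the design space for the parameter vector maximizing `score`.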
Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the exploration power necessary to search high-dimensional spaces for these optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively.
The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved by using the proposed hGA and vGA.
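A minimal real-coded GA illustrates the basic ingredients (selection, crossover, mutation, elitism); hGA and vGA add problem-specific machinery not shown here, and every parameter and the test function below are illustrative choices:

```python
import numpy as np

def ga_minimize(f, bounds, pop=40, gens=100, seed=0):
    """Toy real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism (best-ever individual reinjected)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    best_x, best_f = None, np.inf
    for _ in range(gens):
        fit = np.array([f(x) for x in X])
        if fit.min() < best_f:                       # track best-ever design
            best_f = fit.min()
            best_x = X[fit.argmin()].copy()
        # tournament selection: each slot gets the fitter of two random picks
        i, j = rng.integers(pop, size=(2, pop))
        parents = np.where((fit[i] < fit[j])[:, None], X[i], X[j])
        # blend crossover between shuffled parent pairs
        mates = parents[rng.permutation(pop)]
        a = rng.uniform(size=(pop, 1))
        X = a * parents + (1 - a) * mates
        # Gaussian mutation, clipped back into the box
        X += rng.normal(scale=0.02 * (hi - lo), size=X.shape)
        X = np.clip(X, lo, hi)
        X[0] = best_x                                # elitism
    return best_x, best_f

# illustrative smooth test problem with optimum at (1, -2)
x_best, f_best = ga_minimize(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                             bounds=[(-5, 5), (-5, 5)])
```

For the truss and frame examples, `f` would instead evaluate the negated overall preference measure of a candidate design.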
Abstract:
Plasma equilibrium geometry has a great influence on confinement and magnetohydrodynamic stability in tokamaks. The poloidal field (PF) system of a tokamak should be optimized to support the prescribed plasma equilibrium geometry. In this paper, a genetic algorithm-based method is applied to optimize the positions and currents of tokamak PF coils. To achieve this goal, we first describe the free-boundary code EQT. Based on the EQT code, a genetic algorithm-based method is introduced for the optimization. We apply this new method to the PF system design of the fusion-driven subcritical system and to plasma equilibrium geometry optimization of the Experimental Advanced Superconducting Tokamak (EAST). The results indicate that this method improves the optimization of the plasma equilibrium geometry.
Abstract:
Nucleic acids are a useful substrate for engineering at the molecular level. Designing the detailed energetics and kinetics of interactions between nucleic acid strands remains a challenge. Building on previous algorithms to characterize the ensemble of dilute solutions of nucleic acids, we present a design algorithm that allows optimization of structural features and binding energetics of a test tube of interacting nucleic acid strands. We extend this formulation to handle multiple thermodynamic states and combinatorial constraints to allow optimization of pathways of interacting nucleic acids. In both design strategies, low-cost estimates to thermodynamic properties are calculated using hierarchical ensemble decomposition and test tube ensemble focusing. These algorithms are tested on randomized test sets and on example pathways drawn from the molecular programming literature. To analyze the kinetic properties of designed sequences, we describe algorithms to identify dominant species and kinetic rates using coarse-graining at the scale of a small box containing several strands or a large box containing a dilute solution of strands.
Abstract:
The power system is on the brink of change. Engineering needs, economic forces and environmental factors are the main drivers of this change. The vision is to build a smart electrical grid, and a smarter market mechanism around it, to fulfill mandates on clean energy. Looking at engineering and economic issues in isolation is no longer an option; an integrated design approach is needed. In this thesis, I revisit some classical questions on the engineering operation of power systems that deal with the nonconvexity of the power flow equations. I then explore how these power flow equations interact with electricity markets, to address the fundamental issue of market power in a deregulated market environment. Finally, motivated by the emergence of new storage technologies, I present an interesting result on the investment decision problem of placing storage over a power network. The goal of this study is to demonstrate that modern optimization and game theory can provide unique insights into this complex system. Some of the ideas carry over to applications beyond power systems.
Abstract:
Knowledge of the transport properties of mixtures at different pressures and temperatures is important in the design, operation, control, and optimization of industrial processes. In these processes the fluid is often a binary or multicomponent hydrocarbon mixture, such as petroleum fluids. Experimental mixture properties, especially the absolute viscosity as a function of temperature and pressure, can provide important information about fluid behavior at different compositions and are used in the development of models and correlations and in the characterization of complex mixtures. Several mixing rules have been proposed in the literature for calculating mixture viscosity. These mixing rules predict mixture behavior at atmospheric pressure using pure-component properties; in many applications, however, it is necessary to estimate mixture viscosity at high pressures. In this study, commonly used mixing rules such as Refutas, Mixture Factor, Mixture Index, Grunberg-Nissan, Kendall-Monroe, and Eyring, as well as Molar Additivity, were evaluated using experimental viscosity data for mixtures at high pressures. First, absolute viscosity measurements were performed for the highly asymmetric mixture of cyclohexane and n-hexadecane over the temperature range (318.15 to 413.15) K and at pressures up to 62.053 MPa, and for this system a model was proposed for calculating the pure-component viscosities at a given temperature and pressure. In addition, experimental viscosity data for thirty mixtures whose components differ in shape, size, or flexibility were selected from the literature and modeled using the mixing rules. The mixture viscosities were estimated from experimental viscosity data for the pure components measured at the same temperatures and pressures.
At high pressures, Refutas, Mixture Factor, and Mixture Index gave the best results for all systems studied, and they can be used even for highly asymmetric molecules.
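Two of the classical rules evaluated can be sketched directly; the pure-component viscosities below are illustrative numbers at a single (T, P) state point, not the measured cyclohexane/n-hexadecane data:

```python
import math

def kendall_monroe(x, eta):
    """Kendall-Monroe cube-root rule: eta_mix^(1/3) = sum_i x_i * eta_i^(1/3),
    with x_i the mole fractions and eta_i the pure-component viscosities."""
    return sum(xi * ei ** (1.0 / 3.0) for xi, ei in zip(x, eta)) ** 3

def grunberg_nissan(x, eta, G12=0.0):
    """Grunberg-Nissan for a binary mixture:
    ln eta_mix = x1 ln eta1 + x2 ln eta2 + x1 x2 G12,
    where G12 is an adjustable interaction parameter (zero if ideal)."""
    ln_eta = (x[0] * math.log(eta[0]) + x[1] * math.log(eta[1])
              + x[0] * x[1] * G12)
    return math.exp(ln_eta)

# hypothetical pure-component viscosities (mPa·s) at one state point
eta_km = kendall_monroe([0.5, 0.5], [0.9, 3.1])
eta_gn = grunberg_nissan([0.5, 0.5], [0.9, 3.1])
```

With G12 = 0 the Grunberg-Nissan rule reduces to a mole-fraction-weighted geometric mean, which is why it needs the interaction term for strongly non-ideal, asymmetric systems.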
Abstract:
The increase in industrial waste and the continuous production of residues raise many environmental concerns. In this context, the disposal of used tires has become a major problem because of the little attention given to their final destination. This research therefore proposes the production of a polymer blend of polypropylene (PP), ethylene-propylene-diene rubber (EPDM), and scrap rubber tire powder (SRT). Response Surface Methodology (RSM), a collection of statistical and mathematical techniques useful for developing, improving, and optimizing processes, was applied to the investigation of the ternary blends. After appropriate processing in a twin-screw extruder and injection molding, the mechanical properties of tensile strength and impact strength were determined and used as response variables. In parallel, scanning electron microscopy (SEM) was used to investigate the morphology of the different blends and to better interpret the results. With specific statistical tools and a minimum number of experiments, it was possible to develop response-surface models and to optimize the concentrations of the different blend components with respect to mechanical performance; in addition, modifying the particle size distribution produced an even more significant increase in mechanical performance.
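The Scheffé quadratic polynomial commonly used in RSM mixture designs can be fitted by ordinary least squares; the simplex-centroid blend points are standard, but the response values below are hypothetical stand-ins for the measured PP/EPDM/SRT properties:

```python
import numpy as np

# simplex-centroid design for a ternary blend (component fractions sum to 1)
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [.5, .5, 0], [.5, 0, .5], [0, .5, .5],
              [1/3, 1/3, 1/3]])
# hypothetical measured responses (e.g. impact strength) at those blends
y = np.array([30., 12., 8., 25., 22., 9., 18.])

def scheffe_quadratic(X):
    """Design matrix for the Scheffé quadratic mixture model:
    y = b1 x1 + b2 x2 + b3 x3 + b12 x1 x2 + b13 x1 x3 + b23 x2 x3
    (no intercept, because the fractions are constrained to sum to 1)."""
    x1, x2, x3 = X.T
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

b, *_ = np.linalg.lstsq(scheffe_quadratic(X), y, rcond=None)
y_hat = scheffe_quadratic(X) @ b
```

Once fitted, the model surface can be searched over the simplex to find the blend composition maximizing the predicted mechanical performance.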
Abstract:
The sun has the potential to power the Earth's total energy needs, but electricity from solar power still constitutes an extremely small fraction of our power generation because of its high cost relative to traditional energy sources. The cost of solar must therefore be reduced to realize a more sustainable future. This can be achieved by significantly increasing the efficiency of the modules that convert solar radiation to electricity. In this thesis, we consider several strategies to improve the device and photonic design of solar modules to achieve record, ultrahigh (> 50%) module efficiencies. First, we investigate the potential of a new passivation treatment, trioctylphosphine sulfide (TOP:S), to increase the performance of small GaAs solar cells for cheaper and more durable modules. We show that small cells (mm^2), which currently suffer a significant efficiency decrease (~ 5%) compared to larger cells (cm^2) because a higher fraction of their surface is recombination-active sidewall, can achieve significantly higher efficiencies with effective sidewall passivation. We experimentally validate the passivation qualities of the TOP:S treatment through four independent studies and show that this facile treatment can enable efficient small devices. Then, we discuss our efforts toward the design and prototyping of a spectrum-splitting module that employs optical elements to divide the incident spectrum into different color bands, which allows for higher efficiencies than traditional approaches. We present a design, the polyhedral specular reflector, that has the potential for > 50% module efficiency even with realistic losses from combined optics, cell, and electrical models. Prototyping of one of these designs using glass concentrators yields an optical module whose combined spectrum splitting and concentration should correspond to a record module efficiency of 42%.
Finally, we consider how the manipulation of radiatively emitted photons from subcells in multijunction architectures can be used to achieve even higher efficiencies than previously thought possible, motivating the joint optimization of incident and radiatively emitted photons in future high-efficiency designs. In this thesis work, we explore novel device and photonic designs that represent a significant departure from current solar cell manufacturing techniques and ultimately show the potential for much higher solar cell efficiencies.
Abstract:
Aperture patterns play a vital role in coded aperture imaging (CAI) applications. In recent years, many approaches have been presented to design optimum or near-optimum aperture patterns. Uniformly redundant arrays (URAs) are undoubtedly the most successful, owing to the constant sidelobes of their periodic autocorrelation function. Unfortunately, existing methods can only design URAs with a limited number of array sizes and fixed autocorrelation sidelobe-to-peak ratios. In this paper, we present a novel method to design more flexible URAs. Our approach is based on a search program driven by DIRECT, a global optimization algorithm. We transform the design problem into a mathematical model, based on the DIRECT algorithm, that is well suited to computer implementation. By changing the determinative conditions, we obtain two types of URAs: filled URAs, which can be constructed by existing methods, and sparse URAs, which to our knowledge have not been reported by other authors. Finally, we carry out an experiment to demonstrate the imaging performance of the sparse URAs.
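The flat-sidelobe property that makes URAs attractive can be demonstrated in one dimension with a quadratic-residue (Legendre) sequence, whose residues form a difference set; classical URA constructions extend the same idea to 2-D arrays. A small numerical check (p = 11 is an illustrative choice):

```python
import numpy as np

def legendre_array(p):
    """Binary array with ones at the quadratic residues mod p
    (p prime, p ≡ 3 mod 4). The residues form a difference set,
    so the periodic autocorrelation has one peak and flat sidelobes."""
    a = np.zeros(p, dtype=int)
    a[[(k * k) % p for k in range(1, p)]] = 1
    return a

def periodic_autocorr(a):
    """Periodic (cyclic) autocorrelation of a binary array."""
    return np.array([int(np.dot(a, np.roll(a, t))) for t in range(len(a))])

a = legendre_array(11)        # ones at {1, 3, 4, 5, 9}
c = periodic_autocorr(a)      # peak (p-1)/2 = 5, every sidelobe (p-3)/4 = 2
```

A search-based method like the DIRECT-driven one described above scores candidate arrays by exactly this kind of sidelobe flatness, which is what frees it from the fixed sizes of the algebraic constructions.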
Abstract:
We are at the cusp of a historic transformation of both the communication system and the electricity system. This creates challenges as well as opportunities for the study of networked systems. Problems in these systems typically involve a huge number of end points that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.
This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve the reliability as well as the efficiency of communication devices. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of the system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation, and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties for the behavior of existing algorithms and motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel and use our prototype to compare it with existing MP-TCP algorithms.
Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost, such as power loss. It is a mixed integer nonlinear program and hence hard to solve. We propose a heuristic algorithm based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks that we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose an even more efficient algorithm that incurs an optimality loss of less than 3% on the test networks.
Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective such as generation cost or power loss. Traditionally OPF is solved in a centralized manner. With increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because the power flow equations are nonlinear and Kirchhoff's laws are global. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results showing that a globally optimal solution of OPF over a radial network can be obtained through a second-order cone program (SOCP) or semidefinite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms, which require solving optimization subproblems using iterative methods, the proposed solutions exploit the problem structure to greatly reduce the computation time. Specifically, for balanced networks, our decomposition allows us to derive closed-form solutions for these subproblems, speeding up convergence by 1000x in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, and the computation time is reduced by 100x compared with iterative methods.
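The closed-form-subproblem idea can be seen in miniature with scalar consensus ADMM, a toy stand-in for the OPF subproblems (the objective and all numbers below are illustrative): each ADMM step is a cheap algebraic update rather than an inner iterative solve.

```python
# ADMM for  min_{x,z}  0.5*(x - a)^2 + 0.5*(z - b)^2   s.t.  x = z.
# Both subproblems are small quadratics, so each update is closed-form --
# the same structural property the distributed OPF algorithms exploit.
a, b, rho = 4.0, 0.0, 1.0      # illustrative data and penalty parameter
x = z = u = 0.0                # primal variables and scaled dual variable
for _ in range(100):
    x = (a + rho * (z - u)) / (1 + rho)   # closed-form x-update
    z = (b + rho * (x + u)) / (1 + rho)   # closed-form z-update
    u = u + x - z                          # scaled dual (multiplier) update
# at the optimum the consensus constraint binds and x = z = (a + b) / 2 = 2
```

In the OPF setting, x and z play the role of overlapping bus/line variables shared between neighboring agents, and the dual update carries the coordination between them.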