15 results for Transmission problem
in CaltechTHESIS
Abstract:
The problem of "exit against a flow" for dynamical systems subject to small Gaussian white noise excitation is studied. Here the word "flow" refers to the behavior in phase space of the unperturbed system's state variables. "Exit against a flow" occurs if a perturbation causes the phase point to leave a phase space region within which it would normally be confined. In particular, there are two components of the problem of exit against a flow:
i) the mean exit time
ii) the phase-space distribution of exit locations.
When the noise perturbing the dynamical systems is small, the solution of each component of the problem of exit against a flow is, in general, the solution of a singularly perturbed, degenerate elliptic-parabolic boundary value problem.
Singular perturbation techniques are used to express the asymptotic solution in terms of an unknown parameter. The unknown parameter is determined using the solution of the adjoint boundary value problem.
The problem of exit against a flow for several dynamical systems of physical interest is considered, and the mean exit times and distributions of exit positions are calculated. The systems are then simulated numerically, using Monte Carlo techniques, in order to determine the validity of the asymptotic solutions.
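The Monte Carlo validation step can be sketched on a hypothetical one-dimensional system (not one of the thesis's examples; the potential, noise levels, and exit interval below are illustrative assumptions): an Euler-Maruyama simulation of dX = -X dt + sqrt(2ε) dW, the gradient flow of V(x) = x²/2 perturbed by white noise of intensity ε, estimates the mean time to exit (-1, 1) from the stable point.

```python
import numpy as np

def mean_exit_time(eps, n_trials=200, dt=0.01, max_steps=200_000, seed=0):
    """Monte Carlo estimate of the mean time for the noisy gradient system
    dX = -X dt + sqrt(2*eps) dW  (potential V(x) = x^2/2)
    to exit the interval (-1, 1), starting at the stable point X = 0."""
    rng = np.random.default_rng(seed)
    total_time = 0.0
    for _ in range(n_trials):
        x = 0.0
        for step in range(1, max_steps + 1):
            # Euler-Maruyama update with small Gaussian white noise
            x += -x * dt + np.sqrt(2.0 * eps * dt) * rng.standard_normal()
            if abs(x) >= 1.0:
                break
        total_time += step * dt
    return total_time / n_trials
```

Halving the noise intensity lengthens the estimated mean exit time markedly, consistent with the exponential smallness of exit rates for weak noise.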
Resumo:
We consider the following singularly perturbed linear two-point boundary-value problem:
Ly(x) ≡ Ω(ε) D_x y(x) - A(x,ε) y(x) = f(x,ε),   0 ≤ x ≤ 1,   (1a)
By ≡ L(ε) y(0) + R(ε) y(1) = g(ε),   ε → 0⁺   (1b)
Here Ω(ε) is a diagonal matrix whose first m diagonal elements are 1 and whose last m elements are ε. Aside from reasonable continuity conditions placed on A, L, R, f, g, we assume the lower-right m×m principal submatrix of A has no eigenvalues whose real part is zero. Under these assumptions a constructive technique is used to derive sufficient conditions for the existence of a unique solution of (1). These sufficient conditions are used to define when (1) is a regular problem. It is then shown that as ε → 0⁺ the solution of a regular problem exists and converges on every closed subinterval of (0,1) to a solution of the reduced problem. The reduced problem consists of the differential equation obtained by formally setting ε equal to zero in (1a), together with initial conditions obtained from the boundary conditions (1b). Several examples of regular problems are also considered.
A similar technique is used to derive the properties of the solution of a particular difference scheme used to approximate (1). Under restrictions on the boundary conditions (1b), it is shown that, for stepsizes much larger than ε, the solution of the difference scheme, when applied to a regular problem, accurately represents the solution of the reduced problem.
Furthermore, the existence of a similarity transformation which block-diagonalizes a matrix is established, as are exponential bounds on certain fundamental solution matrices associated with problem (1).
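A scalar analogue (an expository assumption; the thesis treats the full vector system (1)) illustrates both the reduced-problem limit and the difference-scheme observation: for ε y' = -y + cos x with y(0) = 0, the reduced solution is y = cos x, and a backward-Euler scheme with stepsize h much larger than ε still reproduces it outside the boundary layer.

```python
import math

def solve_stiff(eps, h, x_end):
    """Backward-Euler solution of the stiff scalar problem
    eps * y'(x) = -y(x) + cos(x),  y(0) = 0,
    whose reduced (eps -> 0) solution is simply y(x) = cos(x)."""
    y, x = 0.0, 0.0
    r = h / eps
    while x < x_end - 1e-12:
        x += h
        # implicit step: (1 + h/eps) * y_new = y_old + (h/eps) * cos(x_new)
        y = (y + r * math.cos(x)) / (1.0 + r)
    return y
```

With ε = 1e-4 and stepsize h = 0.01 (a hundred times larger than ε), the computed value at x = 0.5 agrees with the reduced solution cos(0.5) to better than one percent, mirroring the behavior proved for the difference scheme.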
Abstract:
The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, both motivated by power systems. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.
Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix (describing marginal and conditional dependencies between brain regions, respectively) have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit from measurements alone. Assuming the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to recover the topology of the circuit when only a limited number of samples is available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso can recover most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to resting-state fMRI data from a number of healthy subjects.
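The topology-recovery idea can be illustrated with a generic Gaussian graphical model in place of the thesis's circuit model (the chain graph, sample size, and threshold below are illustrative assumptions): missing edges appear as zeros in the inverse covariance matrix, and when the covariance is well-conditioned the empirical precision matrix recovers them by simple thresholding.

```python
import numpy as np

# Precision (inverse covariance) matrix of a 5-node Gauss-Markov model whose
# off-diagonal zero pattern encodes the chain graph 0-1-2-3-4.
n = 5
theta = 2.0 * np.eye(n)
for i in range(n - 1):
    theta[i, i + 1] = theta[i + 1, i] = -0.8   # edges of the chain

# Sample node signals from the corresponding zero-mean Gaussian.
rng = np.random.default_rng(0)
cov = np.linalg.inv(theta)                      # well-conditioned here
samples = rng.multivariate_normal(np.zeros(n), cov, size=20_000)

# Invert the empirical covariance and threshold the off-diagonal entries:
# the surviving entries reproduce the chain's edges.
theta_hat = np.linalg.inv(np.cov(samples.T))
adj = (np.abs(theta_hat) > 0.3) & ~np.eye(n, dtype=bool)
```

With an ill-conditioned covariance, the same inversion amplifies sampling error, which is where the graphical lasso (and the thesis's modification of it) becomes necessary.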
Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet so that network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.
Abstract:
The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.
First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
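For symmetric allocations the success probability is a hypergeometric tail, which makes the spreading heuristic easy to check numerically. The sketch below assumes a unit-size file, a budget T spread evenly over m of n nodes, recovery whenever the accessed nodes hold at least one unit of data in total, and a collector accessing r nodes uniformly at random (all parameter values are illustrative).

```python
from math import ceil, comb

def recovery_prob(n, m, T, r):
    """Probability that a collector accessing r of n storage nodes recovers a
    unit-size coded file, for a symmetric allocation spreading budget T >= 1
    evenly over m nodes (each stores T/m): success needs at least ceil(m/T)
    of the m nonempty nodes among the r accessed (hypergeometric tail)."""
    need = ceil(m / T)
    return sum(comb(m, k) * comb(n - m, r - k)
               for k in range(need, min(m, r) + 1)) / comb(n, r)
```

With n = 10 and r = 4, a large budget T = 5 makes maximal spreading (m = 10) certain to succeed, while for the smaller budget T = 2 a minimal spread over m = 2 nodes beats spreading over m = 4, echoing the large-budget/small-budget heuristic above.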
Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.
Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
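The i.i.d. model can be probed with a toy intrasession code (an MDS-style idealization assumed here for illustration, not one of the thesis's constructions): a message coded into w packets decodes whenever any s of them survive, so the decoding probability is a binomial tail, which a Monte Carlo simulation of the erasure link reproduces.

```python
import random
from math import comb

def decode_prob_exact(w, s, p):
    """Decoding probability of a toy MDS-style intrasession code: a message
    coded into w packets decodes iff at least s packets survive i.i.d.
    erasures of probability p, i.e. a binomial tail."""
    return sum(comb(w, k) * (1 - p) ** k * p ** (w - k)
               for k in range(s, w + 1))

def decode_prob_mc(w, s, p, trials=20_000, seed=1):
    """Monte Carlo simulation of the same i.i.d. erasure link."""
    random.seed(seed)
    successes = sum(sum(random.random() > p for _ in range(w)) >= s
                    for _ in range(trials))
    return successes / trials
```

Trading off message size against decoding probability then amounts to moving s relative to w for the erasure probability of interest.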
Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
Abstract:
Glaciers are often assumed to deform only at slow (i.e., glacial) rates. However, with the advent of high rate geodetic observations of ice motion, many of the intricacies of glacial deformation on hourly and daily timescales have been observed and quantified. This thesis explores two such short timescale processes: the tidal perturbation of ice stream motion and the catastrophic drainage of supraglacial meltwater lakes. Our investigation into the transmission length-scale of a tidal load represents the first study to explore the daily tidal influence on ice stream motion using three-dimensional models. Our results demonstrate both that the implicit assumptions made in the standard two-dimensional flow-line models are inherently incorrect for many ice streams, and that the anomalously large spatial extent of the tidal influence seen on the motion of some glaciers cannot be explained, as previously thought, through the elastic or viscoelastic transmission of tidal loads through the bulk of the ice stream. We then discuss how the phase delay between a tidal forcing and the ice stream’s displacement response can be used to constrain in situ viscoelastic properties of glacial ice. Lastly, for the problem of supraglacial lake drainage, we present a methodology for implementing linear viscoelasticity into an existing model for lake drainage. Our work finds that viscoelasticity is a second-order effect when trying to model the deformation of ice in response to a meltwater lake draining to a glacier’s bed. The research in this thesis demonstrates that the first-order understanding of the short-timescale behavior of naturally occurring ice is incomplete, and works towards improving our fundamental understanding of ice behavior over the range of hours to days.
Abstract:
We present a theoretical study of electronic states in topological insulators with impurities. Chiral edge states in 2d topological insulators and helical surface states in 3d topological insulators show robust transport against nonmagnetic impurities. Such a nontrivial character inspired physicists to propose applications such as spintronic devices [1], thermoelectric materials [2], photovoltaics [3], and quantum computation [4]. Not only has it provided new opportunities from a practical point of view, but its theoretical study has also deepened the understanding of the topological nature of condensed matter systems. However, experimental realizations of topological insulators have been challenging. For example, a 2d topological insulator fabricated in a HgTe quantum well structure by Konig et al. [5] shows a longitudinal conductance which is not well quantized and varies with temperature. 3d topological insulators such as Bi2Se3 and Bi2Te3 exhibit not only a signature of surface states but also bulk conduction [6]. This series of experiments motivated us to study the effects of impurities and a coexisting bulk Fermi surface in topological insulators. We first address a single impurity problem in a topological insulator using a semiclassical approach. Then we study the conductance behavior of a disordered topological-metal strip where bulk modes are coupled to the transport of edge modes via impurity scattering. We verify that the conduction through a chiral edge channel retains its topological signature, and we find that the transmission can be succinctly expressed in closed form as a ratio of determinants of the bulk Green's function and impurity potentials. We further study the transport of 1d systems which can be decomposed in terms of chiral modes. Lastly, the effect of surface impurities on the local density of surface states, layer by layer into the bulk, is studied in both the weak and strong disorder limits.
Abstract:
Conduction through TiO2 films of thickness 100 to 450 Å has been investigated. The samples were prepared by either anodization of Ti or evaporation of TiO2, with Au or Al evaporated for contacts. The anodized samples exhibited considerable hysteresis due to electrical forming; however, it was possible to avoid this problem with the evaporated samples, from which complete sets of experimental results were obtained and used in the analysis. Electrical measurements included: the dependence of current and capacitance on dc voltage and temperature; the dependence of capacitance and conductance on frequency and temperature; and transient measurements of current and capacitance. A thick (3000 Å) evaporated TiO2 film was used for measuring the dielectric constant (27.5) and the optical dispersion, the latter being similar to that for rutile. An electron transmission diffraction pattern of an evaporated film indicated an essentially amorphous structure with a short-range order that could be related to rutile. Photoresponse measurements indicated the same band gap of about 3 eV for anodized and evaporated films and reduced rutile crystals, and gave the barrier energies at the contacts.
The results are interpreted in a self-consistent manner by considering the effect of a large impurity concentration in the films and a correspondingly large ionic space charge. The resulting potential profile in the oxide film leads to a thermally assisted tunneling process between the contacts and the interior of the oxide. A general relation is derived for the steady-state current through structures of this kind. This in turn is expressed quantitatively for each of two possible limiting types of impurity distributions, where one type gives barriers of an exponential shape and leads to quantitative predictions in close agreement with the experimental results. For films somewhat thicker than 100 Å, the theory is formulated essentially in terms of only the independently measured barrier energies and a characteristic parameter of the oxide that depends primarily on the maximum impurity concentration at the contacts. A single value of this parameter gives consistent agreement with the experimentally observed dependence of both current and capacitance on dc voltage and temperature, with the maximum impurity concentration found to be approximately the saturation concentration quoted for rutile. This explains the relative insensitivity of the electrical properties of the films to the exact conditions of formation.
Abstract:
Semiconductor technology scaling has enabled drastic growth in the computational capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between ICs. Electrical channel bandwidth has not been able to keep up with this demand, making I/O link design more challenging. Interconnects which employ optical channels have negligible frequency-dependent loss and provide a potential solution to this I/O bandwidth problem. Apart from the type of channel, efficient high-speed communication also relies on the generation and distribution of multi-phase, high-speed, and high-quality clock signals. In the multi-gigahertz frequency range, conventional clocking techniques have encountered several design challenges in terms of power consumption, skew, and jitter. Injection locking is a promising technique to address these design challenges for gigahertz clocking. However, its small locking range has been a major obstacle to its widespread adoption.
In the first part of this dissertation we describe a wideband injection-locking scheme in an LC oscillator. Phase-locked loop (PLL) and injection-locking elements are combined symbiotically to achieve a wide locking range while retaining the simplicity of the latter. This method does not require a phase frequency detector or a loop filter to achieve phase lock. A mathematical analysis of the system is presented and the expression for the new locking range is derived. A locking range of 13.4-17.2 GHz (25%) and an average jitter tracking bandwidth of up to 400 MHz are measured in a high-Q LC oscillator. This architecture is used to generate quadrature phases from a single clock without any frequency division. It also provides high-frequency jitter filtering while retaining the low-frequency correlated jitter essential for forwarded clock receivers.
To improve the locking range of an injection-locked ring oscillator, a quadrature-locked loop (QLL) is introduced. The inherent dynamics of the injection-locked quadrature ring oscillator are used to improve its locking range from 5% (7-7.4 GHz) to 90% (4-11 GHz). The QLL is used to generate accurate clock phases for a four-channel optical receiver using a forwarded clock at quarter rate. The QLL drives an injection-locked oscillator (ILO) at each channel, without any repeaters, for local quadrature clock generation. Each local ILO has deskew capability for phase alignment. The optical receiver uses the inherent frequency-to-voltage conversion provided by the QLL to dynamically body-bias its devices. The wide locking range of the QLL helps achieve a reliable data rate of 16-32 Gb/s, and adaptive body biasing aids in maintaining an ultra-low power consumption of 153 pJ/bit.
From the optical receiver we move on to a non-linear equalization technique for a vertical-cavity surface-emitting laser (VCSEL) based optical transmitter, to enable low-power, high-speed optical transmission. A non-linear time-domain optical model of the VCSEL is built and evaluated for accuracy. The modelling shows that, while conventional FIR-based pre-emphasis works well for LTI electrical channels, it is not optimum for the non-linear optical frequency response of the VCSEL. Based on simulations of the model, an optimum equalization methodology is derived. The equalization technique is used to achieve a data rate of 20 Gb/s with a power efficiency of 0.77 pJ/bit.
Abstract:
Politically the Colorado river is an interstate as well as an international stream. Physically the basin divides itself distinctly into three sections. The upper section, from the headwaters to the mouth of the San Juan, comprises about 40 percent of the total area of the basin and affords about 87 percent of the total runoff, or an average of about 15 000 000 acre feet per annum. High mountains and cold weather are found in this section. The middle section, from the mouth of the San Juan to the mouth of the Williams, comprises about 35 percent of the total area of the basin and supplies about 7 percent of the annual runoff. Narrow canyons and mild weather prevail in this section. The lower third of the basin is composed mainly of hot arid plains of low altitude. It comprises some 25 percent of the total area of the basin and furnishes about 6 percent of the average annual runoff.
The proposed Diamond Creek reservoir is located in the middle section and is wholly within the boundary of Arizona. The site is at the mouth of Diamond Creek and is only 16 miles from Peach Springs, a station on the Santa Fe railroad. It is solely a power project with a limited storage capacity. The dam which creates the reservoir is of the gravity type, to be constructed across the river. The walls and foundation are of granite. For a dam of 290 feet in height, the backwater will extend about 25 miles up the river.
The power house will be placed directly below the dam, perpendicular to the axis of the river. It is entirely a concrete structure. The power installation would consist of eighteen 37 500 H.P. vertical, variable-head turbines, directly connected to 28 000 kv-a., 110 000 v., 3-phase, 60-cycle generators with the necessary switching and auxiliary apparatus. Each unit is to be fed by a separate penstock wholly embedded in the masonry.
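A quick arithmetic check (assuming unity power factor and ignoring conversion losses, which the abstract does not specify) shows the quoted ratings are mutually consistent: 37 500 H.P. per turbine converts to almost exactly the generators' quoted 28 000 kVA rating, for a total plant capacity of 675 000 H.P., roughly 500 MW.

```python
HP_TO_KW = 0.7457                      # one mechanical horsepower in kilowatts
units, hp_per_unit = 18, 37_500

kw_per_unit = hp_per_unit * HP_TO_KW   # ~27 964 kW, matching the 28 000 kVA rating
total_hp = units * hp_per_unit         # 675 000 H.P. installed
total_mw = units * kw_per_unit / 1000  # ~503 MW
```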
Concerning the power market, the main electric transmission lines would extend to Prescott, Phoenix, Mesa, Florence, etc. The mining regions of the mountains of Arizona would be the most adequate market. The demand for power in the above-named places might not be large at present. It will, from the observation of the writer, rapidly increase with the wonderful advancement of all kinds of industrial development.
All these things being comparatively feasible, there is one difficult problem: that is the silt. At the Diamond Creek dam site the average annual silt discharge is about 82 650 acre feet. The geographical conditions, however, will not permit silt deposits right in the reservoir. So this design will be made under the assumption given in Section 4.
The silt condition and the change of lower course of the Colorado are much like those of the Yellow River in China. But one thing is different. On the Colorado most of the canyon walls are of granite, while those on the Yellow are of alluvial loess: so it is very hard, if not impossible, to get a favorable dam site on the lower part. As a visitor to this country, I should like to see the full development of the Colorado: but how about THE YELLOW!
Abstract:
No abstract.
Abstract:
Not available.
Abstract:
The use of transmission matrices and lumped parameter models for describing continuous systems is the subject of this study. Non-uniform continuous systems, which play important roles in practical vibration problems (e.g., torsional oscillations in bars and transverse bending vibrations of beams), are of primary importance.
A new approach for deriving closed form transmission matrices is applied to several classes of non-uniform continuous segments of one dimensional and beam systems. A power series expansion method is presented for determining approximate transmission matrices of any order for segments of non-uniform systems whose solutions cannot be found in closed form. This direct series method is shown to give results comparable to those of the improved lumped parameter models for one dimensional systems.
Four types of lumped parameter models are evaluated on the basis of the uniform continuous one dimensional system by comparing the behavior of the frequency root errors. The lumped parameter models which are based upon a close fit to the low frequency approximation of the exact transmission matrix, at the segment level, are shown to be superior. On this basis an improved lumped parameter model is recommended for approximating non-uniform segments. This new model is compared to a uniform segment approximation, and error curves are presented for systems whose areas vary quadratically and linearly. The effect of varying segment lengths is investigated for one dimensional systems, and results indicate very little improvement in comparison to the use of equal length segments. For completeness, a brief summary is given of various lumped parameter models and other techniques which have previously been used to approximate the uniform Bernoulli-Euler beam.
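The frequency-root-error comparison can be sketched for the simplest case (a fixed-free uniform torsional rod with the crudest mass-at-node lumping; the thesis's improved models are not reproduced here): the first natural frequency of the N-segment lumped chain approaches the exact nondimensional value π/2, with a frequency root error that shrinks roughly like 1/N.

```python
import numpy as np

def lumped_first_freq(N):
    """First natural frequency of a fixed-free uniform torsional rod
    (unit stiffness, inertia, and length) from the simplest lumped model:
    N equal segments, a spring k = N between nodes, inertia m = 1/N per node."""
    k, m = float(N), 1.0 / N
    K = np.zeros((N, N))
    for i in range(N):
        K[i, i] = 2.0 * k if i < N - 1 else k   # free end sees one spring
        if i > 0:
            K[i, i - 1] = K[i - 1, i] = -k
    return float(np.sqrt(np.linalg.eigvalsh(K)[0] / m))

EXACT = np.pi / 2.0                              # omega_1 of the continuous rod

def freq_root_error(N):
    return abs(lumped_first_freq(N) - EXACT) / EXACT
```

Plotting this error against N for different lumping rules is exactly the kind of comparison on which the improved model above is selected.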
Abstract:
The present work deals with the problem of the interaction of electromagnetic radiation with a statistical distribution of nonmagnetic dielectric particles immersed in an infinite, homogeneous, isotropic, non-magnetic medium. The wavelength of the incident radiation can be less than, equal to, or greater than the linear dimension of a particle. The distance between any two particles is several wavelengths. A single particle in the absence of the others is assumed to scatter like a Rayleigh-Gans particle, i.e., interaction between the volume elements (self-interaction) is neglected. The interaction of the particles is taken into account (multiple scattering), and conditions are set up for the case of a lossless medium which guarantee that the multiple scattering contribution is more important than the self-interaction one. These conditions relate the wavelength λ and the linear dimensions of a particle a and of the region occupied by the particles D. It is found that for constant λ/a, D is proportional to λ and that |Δχ|, where Δχ is the difference in the dielectric susceptibilities between particle and medium, has to lie within a certain range.
The total scattering field is obtained as a series whose terms represent the corresponding multiple scattering orders. The first term is the single scattering term. The ensemble average of the total scattering intensity is then obtained as a series which does not involve terms due to products between terms of different orders. Thus the waves corresponding to different orders are independent and their Stokes parameters add.
The second and third order intensity terms are explicitly computed. The method used suggests a general approach for computing any order. It is found that in general the first order scattering intensity pattern (or phase function) peaks in the forward direction Θ = 0. The second order tends to smooth out the pattern, giving a maximum in the Θ = π/2 direction and minima in the Θ = 0 and Θ = π directions. This ceases to be true if ka (where k = 2π/λ) becomes large (> 20). For large ka the forward direction is further enhanced. Similar features are expected from the higher orders, even though the critical value of ka may increase with the order.
The first order polarization of the scattered wave is determined. The ensemble average of the Stokes parameters of the scattered wave is explicitly computed for the second order. A similar method can be applied for any order. It is found that the polarization of the scattered wave depends on the polarization of the incident wave. If the latter is elliptically polarized then the first order scattered wave is elliptically polarized, but in the Θ = π/2 direction it is linearly polarized. If the incident wave is circularly polarized the first order scattered wave is elliptically polarized except for the directions Θ = π/2 (linearly polarized) and Θ = 0, π (circularly polarized). The handedness of the Θ = 0 wave is the same as that of the incident wave, whereas the handedness of the Θ = π wave is opposite. If the incident wave is linearly polarized the first order scattered wave is also linearly polarized. The second order makes the total scattered wave elliptically polarized for any Θ, regardless of the polarization of the incident wave. However, the handedness of the total scattered wave is not altered by the second order. Higher orders have effects similar to those of the second order.
If the medium is lossy, the general approach employed for the lossless case is still valid; only the algebra increases in complexity. It is found that the results of the lossless case are insensitive to first order in k_im·D, where k_im is the imaginary part of the wave vector k and D is a characteristic linear dimension of the region occupied by the particles. Thus moderately extended regions and small losses make (k_im·D)² ≪ 1, and the lossy character of the medium does not alter the results of the lossless case. In general the presence of losses tends to reduce the forward scattering.
Abstract:
A technique is developed for the design of lenses for transitioning TEM waves between conical and/or cylindrical transmission lines, ideally with no reflection or distortion of the waves. These lenses utilize isotropic but inhomogeneous media and are based on a solution of Maxwell's equations instead of just geometrical optics. The technique employs the expression of the constitutive parameters, ɛ and μ, plus Maxwell's equations, in a general orthogonal curvilinear coordinate system in tensor form, giving what we term formal quantities. Solving the problem for certain types of formal constitutive parameters, these are transformed to give ɛ and μ as functions of position. Several examples of such lenses are considered in detail.
Resumo:
We are at the cusp of a historic transformation of both communication systems and electricity systems. This creates challenges as well as opportunities for the study of networked systems. Problems in these systems typically involve a huge number of endpoints that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.
This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve the reliability as well as the efficiency of communication devices. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of the system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation, and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties for the behavior of existing algorithms and motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel and use our prototype to compare it with existing MP-TCP algorithms.
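The fluid-model idea above can be made concrete with a toy example. The sketch below integrates a hypothetical coupled MP-TCP fluid model in which each subflow's window increase is divided by the total window (an LIA-style coupling) and decrease is proportional to the window; this is not the Balia dynamics from the thesis, and the equations, loss probabilities p_r, and round-trip times rtt_r are illustrative assumptions only.

```python
def mptcp_fluid(p, rtt, dt=0.005, steps=100000):
    """Euler integration of a toy coupled MP-TCP fluid model.

    dw_r/dt = x_r * [ (1 - p_r)/W  -  p_r * w_r / 2 ],
    where x_r = w_r / rtt_r is the subflow rate and W = sum of all windows.
    The coupled increase term (1 - p_r)/W is an LIA-style assumption,
    not the Balia rule.
    """
    n = len(p)
    w = [1.0] * n                      # initial window per subflow
    for _ in range(steps):
        W = sum(w)                     # snapshot of total window
        for r in range(n):
            x = w[r] / rtt[r]
            w[r] += dt * x * ((1.0 - p[r]) / W - p[r] * w[r] / 2.0)
            w[r] = max(w[r], 1e-6)     # keep windows positive
    return w
```

At equilibrium the bracket vanishes, giving w_r = 2(1 - p_r)/(p_r W) and hence W² = Σ_r 2(1 - p_r)/p_r, so the subflow with lower loss probability ends up with the larger window, which the simulation reproduces.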
Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost, such as power loss. It is a mixed-integer nonlinear program and hence hard to solve. We propose a heuristic algorithm based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose an even more efficient algorithm that incurs a loss in optimality of less than 3% on the test networks.
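For intuition about why feeder reconfiguration is combinatorial, the sketch below brute-forces the switch configuration on a tiny network, modeling cost as resistive loss Σ r·P² on a radial (spanning-tree) topology with fixed loads and neglecting voltage drops. It is a baseline for illustration only, not the convex-relaxation-based heuristic developed in the thesis; the network, loads, and loss model are assumptions.

```python
import itertools

def is_spanning_tree(n, edges):
    """Check that n-1 edges connect all n nodes (union-find)."""
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for u, v, _ in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                   # cycle: not a tree
        parent[ru] = rv
    return True

def tree_loss(n, edges, load, root=0):
    """Loss sum r * P^2 on a radial network; P is downstream load."""
    adj = {i: [] for i in range(n)}
    for u, v, r in edges:
        adj[u].append((v, r))
        adj[v].append((u, r))
    loss = 0.0
    def downstream(node, parent_node):
        nonlocal loss
        p = load[node]
        for nxt, r in adj[node]:
            if nxt != parent_node:
                flow = downstream(nxt, node)
                loss += r * flow ** 2      # loss on the edge to nxt
                p += flow
        return p
    downstream(root, None)
    return loss

def best_config(n, all_edges, load):
    """Enumerate all radial configurations; return (min loss, edges)."""
    best = (float("inf"), None)
    for edges in itertools.combinations(all_edges, n - 1):
        if is_spanning_tree(n, edges):
            best = min(best, (tree_loss(n, edges, load), edges))
    return best
```

On a 4-node ring with a low-resistance tie switch, exhaustive search picks the configuration that serves heavy loads over the low-resistance branch; the thesis's heuristic avoids this exponential enumeration entirely.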
Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective, such as generation cost or power loss. Traditionally, OPF is solved in a centralized manner. With the increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because the power flow equations are nonlinear and Kirchhoff's law is global. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results showing that a globally optimal solution of OPF over a radial network can be obtained through a second-order cone program (SOCP) or semidefinite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms that require solving optimization subproblems with iterative methods, the proposed solutions exploit the problem structure to greatly reduce computation time. Specifically, for balanced networks, our decomposition allows us to derive closed-form solutions for these subproblems, which speeds up convergence by 1000x in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales, reducing computation time by 100x compared with iterative methods.
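The payoff of closed-form ADMM subproblems can be seen on a toy consensus problem. The sketch below is standard scaled-form ADMM for minimizing Σᵢ ½(xᵢ − aᵢ)² subject to xᵢ = z, where both the x-update and the z-update admit closed-form solutions, so no inner iterative solver is needed; it illustrates that structural idea only and is unrelated to the actual OPF decomposition in the thesis.

```python
def admm_consensus(a, rho=1.0, iters=200):
    """Scaled ADMM for min sum_i 0.5*(x_i - a_i)^2  s.t.  x_i = z.

    Every subproblem is solved in closed form, mirroring the kind of
    speedup obtained by avoiding iterative inner solvers.
    """
    n = len(a)
    x = [0.0] * n
    z = 0.0
    u = [0.0] * n  # scaled dual variables
    for _ in range(iters):
        # x-update: argmin of 0.5*(x_i - a_i)^2 + (rho/2)*(x_i - z + u_i)^2
        x = [(a[i] + rho * (z - u[i])) / (1.0 + rho) for i in range(n)]
        # z-update: average of x_i + u_i (closed form)
        z = sum(x[i] + u[i] for i in range(n)) / n
        # dual ascent on the consensus constraint x_i = z
        u = [u[i] + x[i] - z for i in range(n)]
    return z
```

At the fixed point, mean(u) = 0 forces z = mean(a), which is the global optimum of this quadratic consensus problem.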