10 results for Limited power supply

in CaltechTHESIS


Relevance:

90.00%

Publisher:

Abstract:

The dissertation studies the general area of complex networked systems that consist of interconnected and active heterogeneous components and usually operate in uncertain environments and with incomplete information. Problems associated with those systems are typically large-scale and computationally intractable, yet they are also very well-structured and have features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools that exploit those structures and lead to computationally efficient and distributed solutions, and to apply them to improve system operations and architecture.

Specifically, the thesis focuses on two concrete areas. The first is the design of distributed rules to manage distributed energy resources in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially on the distribution side, will be a large-scale network of distributed energy resources (DERs), each introducing random and rapid fluctuations in power supply, demand, voltage, and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability. However, there are daunting technical challenges in managing these DERs and optimizing their operation. The focus of this dissertation is to develop scalable, distributed, and real-time control and optimization to achieve system-wide efficiency, reliability, and robustness for the future power grid. In particular, we will show how to exploit the power network structure to design efficient and distributed markets and algorithms for energy management. We will also show how to connect these algorithms with physical dynamics and existing control mechanisms for real-time control in power networks.

The second focus is to develop distributed optimization rules for general multi-agent engineering systems. A central goal in multi-agent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to the given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent’s control on the least amount of information possible. Our work focused on achieving this goal using the framework of game theory. In particular, we derived a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system-level design objective and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g., potential games. The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to the game-theoretic equilibrium. One main advantage of this game-theoretic approach is that it provides a hierarchical decomposition between the design of the systemic objective (game design) and the specific local decision rules (distributed learning algorithms). This decomposition gives the system designer tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multi-agent systems. Furthermore, in many settings the resulting controllers are inherently robust to a host of uncertainties, including asynchronous clock rates, delays in information, and component failures.
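
As an illustration of this design philosophy, the sketch below sets up a toy resource-coverage problem in which each agent's objective is its marginal contribution to the system welfare (a standard potential-game construction, not necessarily the specific methodology of the thesis), so that simple best-response dynamics converge to an equilibrium aligned with the system-level objective. All names and numbers are hypothetical.

```python
# Minimal sketch (not the thesis's construction): marginal-contribution utilities
# turn a resource-coverage problem into a potential game whose potential is the
# system objective, so asynchronous best-response dynamics converge to an equilibrium.
import random

values = [5.0, 3.0, 2.0, 1.0]      # hypothetical value of covering each resource
n_agents, n_res = 3, len(values)

def welfare(action):
    """System objective: total value of resources covered by at least one agent."""
    return sum(values[r] for r in range(n_res) if r in action)

def utility(i, action):
    """Marginal contribution of agent i (its local objective function)."""
    others = action[:i] + action[i+1:]
    return welfare(action) - welfare(others)

action = [random.randrange(n_res) for _ in range(n_agents)]
changed = True
while changed:                      # asynchronous best-response dynamics
    changed = False
    for i in range(n_agents):
        best = max(range(n_res), key=lambda r: utility(i, action[:i] + [r] + action[i+1:]))
        if utility(i, action[:i] + [best] + action[i+1:]) > utility(i, action) + 1e-12:
            action[i], changed = best, True

print("equilibrium actions:", action, "welfare:", welfare(action))
```

Because the system welfare itself acts as the potential function, every accepted best-response step strictly increases it, so the loop terminates at a pure-strategy equilibrium.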

Relevance:

90.00%

Publisher:

Abstract:

Two trends are emerging from modern electric power systems: the growth of renewable (e.g., solar and wind) generation, and the integration of information technologies and advanced power electronics. The former introduces large, rapid, and random fluctuations in power supply, demand, frequency, and voltage, which become a major challenge for real-time operation of power systems. The latter creates a tremendous number of controllable intelligent endpoints such as smart buildings and appliances, electric vehicles, energy storage devices, and power electronic devices that can sense, compute, communicate, and actuate. Most of these endpoints are distributed on the load side of power systems, in contrast to traditional control resources such as centralized bulk generators. This thesis focuses on controlling power systems in real time, using these load side resources. Specifically, it studies two problems.

(1) Distributed load-side frequency control: We establish a mathematical framework to design distributed frequency control algorithms for flexible electric loads. In this framework, we formulate a category of optimization problems, called optimal load control (OLC), to incorporate the goals of frequency control, such as balancing power supply and demand, restoring frequency to its nominal value, and restoring inter-area power flows, in a way that minimizes the total disutility incurred by the loads for participating in frequency control by deviating from their nominal power usage. By deriving distributed algorithms to solve OLC and analyzing the convergence of these algorithms, we design distributed load-side controllers and prove stability of the closed-loop power systems governed by these controllers. This general framework is adapted and applied to different types of power systems described by different models, and to different levels of control goals under different operation scenarios. We first consider a dynamically coherent power system which can be equivalently modeled with a single synchronous machine. We then extend our framework to a multi-machine power network, where we consider primary and secondary frequency control, linear and nonlinear power flow models, and the interactions between generator dynamics and load control.
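
To make the OLC idea concrete, here is a minimal single-bus sketch with quadratic disutilities, solved by dual decomposition; the multiplier plays the role that the locally measured frequency deviation plays in the thesis, so each load reacts only to a common scalar signal. The coefficients and limits are hypothetical, and this is not the thesis's multi-machine formulation.

```python
# Minimal sketch (simplified, single bus, quadratic disutilities): sharing a
# generation shortfall P among loads by dual decomposition, where the multiplier
# "lam" stands in for the frequency deviation signal.
import numpy as np

a = np.array([1.0, 2.0, 4.0])        # hypothetical disutility coefficients, c_i(d) = a_i * d^2 / 2
dmax = np.array([1.0, 1.0, 1.0])     # per-load adjustment limits
P = 1.8                              # power imbalance to be absorbed by the loads

lam, step = 0.0, 0.3
for _ in range(200):
    d = np.clip(lam / a, 0.0, dmax)  # each load's local best response to the common signal
    lam += step * (P - d.sum())      # multiplier update driven by the remaining mismatch

print("load adjustments:", d.round(3), "total:", round(d.sum(), 3))
```

At convergence, marginal disutilities are equalized across the loads that are not at their limits, which is exactly what minimizing the total disutility requires.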

(2) Two-timescale voltage control: The voltage of a power distribution system must be maintained closely around its nominal value in real time, even in the presence of highly volatile power supply or demand. For this purpose, we jointly control two types of reactive power sources: a capacitor operating at a slow timescale, and a power electronic device, such as a smart inverter or a D-STATCOM, operating at a fast timescale. Their control actions are solved from optimal power flow problems at two timescales. Specifically, the slow-timescale problem is a chance-constrained optimization, which minimizes power loss and regulates the voltage at the current time instant while limiting the probability of future voltage violations due to stochastic changes in power supply or demand. This control framework forms the basis of an optimal sizing problem, which determines the installation capacities of the control devices by minimizing the sum of power loss and capital cost. We develop computationally efficient heuristics to solve the optimal sizing problem and implement real-time control. Numerical experiments show that the proposed sizing and control schemes significantly improve the reliability of voltage control with a moderate increase in cost.

Relevance:

80.00%

Publisher:

Abstract:

Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation, we study these two areas, each comprising one part. In the first area we study the so-called entropy vectors via finite group theory, and the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality, an inequality respected by all linear network codes but not satisfied by all entropy vectors. Based on the analysis of this group we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. In that direction, we study the network codes constructed with finite groups, and in particular show that linear network codes are embedded in the group network codes constructed with these Ingleton-violating families. Furthermore, such codes are strictly more powerful than linear network codes, as they are able to violate the Ingleton inequality while linear network codes cannot. In the second area, we study the impact of memory on the channel capacity through a novel communication system: the energy harvesting channel. Unlike in traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery. As a consequence, at each channel use the system can only transmit a symbol whose energy consumption is no more than the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel, which is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, but no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity. In this work we use techniques from channels with side information and finite state channels to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel to compute and optimize the achievable rates for the original channel. In addition, for practical code design of the system we study the pairwise error probabilities of the input sequences.
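
The group-theoretic entropy construction can be made concrete in a few lines of code: given four subgroups G1, ..., G4 of a finite group G, the entropy of a subset S is h(S) = log(|G| / |∩_{i∈S} G_i|), and the Ingleton inequality can then be checked directly. The sketch below uses S4 and four hypothetical subgroups (small groups such as S4 are expected to satisfy Ingleton; the thesis identifies the smallest group that does not), so it illustrates the test rather than reproducing the violating example.

```python
# Minimal sketch (hypothetical subgroups, not the thesis's example): checking the
# Ingleton inequality  I(A;B) <= I(A;B|C) + I(A;B|D) + I(C;D)  for the entropy
# vector induced by four subgroups G1..G4 of a finite group G, using
# h(S) = log |G| / |intersection of G_i, i in S|.
from itertools import permutations
from math import log2

def compose(p, q):                       # permutation composition p∘q on tuples
    return tuple(p[i] for i in q)

G = set(permutations(range(4)))          # the symmetric group S4, |G| = 24

def generated(gens):
    """Subgroup generated by a set of permutations (naive closure)."""
    H = {tuple(range(4))} | set(gens)
    while True:
        new = {compose(a, b) for a in H for b in H} - H
        if not new:
            return H
        H |= new

# Four hypothetical subgroups of S4; small groups like S4 are expected to satisfy Ingleton.
subs = {1: generated([(1, 0, 2, 3)]),    # <(0 1)>
        2: generated([(0, 2, 1, 3)]),    # <(1 2)>
        3: generated([(1, 2, 0, 3)]),    # <(0 1 2)>
        4: generated([(0, 1, 3, 2)])}    # <(2 3)>

def h(S):                                # entropy of the subset S of {1,2,3,4}
    inter = set(G)
    for i in S:
        inter &= subs[i]
    return log2(len(G) / len(inter))

lhs = h({1}) + h({2}) - h({1, 2})                              # I(A;B)
rhs = (h({1, 3}) + h({2, 3}) - h({1, 2, 3}) - h({3})           # I(A;B|C)
       + h({1, 4}) + h({2, 4}) - h({1, 2, 4}) - h({4})         # I(A;B|D)
       + h({3}) + h({4}) - h({3, 4}))                          # I(C;D)
print("Ingleton violated:", lhs > rhs + 1e-9)
```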

Relevance:

80.00%

Publisher:

Abstract:

As the worldwide prevalence of diabetes mellitus continues to increase, diabetic retinopathy remains the leading cause of visual impairment and blindness in many developed countries. Between 32 and 40 percent of the roughly 246 million people with diabetes develop diabetic retinopathy. Approximately 4.1 million American adults 40 years and older are affected by diabetic retinopathy. This glucose-induced microvascular disease progressively damages the tiny blood vessels that nourish the retina, the light-sensitive tissue at the back of the eye, leading to retinal ischemia (i.e., inadequate blood flow), retinal hypoxia (i.e., oxygen deprivation), and retinal nerve cell degeneration or death. It is one of the most serious sight-threatening complications of diabetes, resulting in significant irreversible vision loss and even total blindness.

Unfortunately, although current treatments of diabetic retinopathy (i.e., laser therapy, vitrectomy surgery, and anti-VEGF therapy) can reduce vision loss, they only slow down, but cannot stop, the degradation of the retina. Patients require repeated treatment to protect their sight. The current treatments also have significant drawbacks. Laser therapy preserves the macula, the area of the retina responsible for sharp, clear, central vision, by sacrificing the peripheral retina, since the available oxygen supply is limited. As a result, laser therapy leads to a constricted peripheral visual field, reduced color vision, delayed dark adaptation, and weakened night vision. Vitrectomy surgery increases the risk of neovascular glaucoma, another devastating ocular disease, characterized by the proliferation of fibrovascular tissue in the anterior chamber angle. Anti-VEGF agents have potential adverse effects, and currently there is insufficient evidence to recommend their routine use.

In this work, for the first time, a paradigm shift in the treatment of diabetic retinopathy is proposed: providing localized, supplemental oxygen to the ischemic tissue via an implantable MEMS device. The retinal architecture (e.g., thickness, cell densities, layered structure, etc.) of the rabbit eye exposed to ischemic hypoxic injuries was well preserved after targeted oxygen delivery to the hypoxic tissue, showing that the use of an external source of oxygen could improve the retinal oxygenation and prevent the progression of the ischemic cascade.

The proposed MEMS device transports oxygen from an oxygen-rich space to the oxygen-deficient vitreous, the gel-like fluid that fills the inside of the eye, and then to the ischemic retina. This oxygen transport process is purely passive and completely driven by the gradient of oxygen partial pressure (pO2). Two types of devices were designed. For the first type, the oxygen-rich space is underneath the conjunctiva, a membrane covering the sclera (white part of the eye), beneath the eyelids and highly permeable to oxygen in the atmosphere when the eye is open. Therefore, sub-conjunctival pO2 is very high during the daytime. For the second type, the oxygen-rich space is inside the device since pure oxygen is needle-injected into the device on a regular basis.

To keep the oxygen permeation through the device, which is made of parylene and silicone (two biocompatible polymers widely used in medical devices), from being either too fast or too slow, the material properties of the parylene/silicone hybrid were investigated, including mechanical behavior, permeation rates, and adhesive forces. The thicknesses of the parylene and silicone layers then became important design parameters that were fine-tuned to reach the optimal oxygen permeation rate.
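
A back-of-envelope estimate shows why the layer thicknesses are the natural tuning knobs: for a composite wall the steady-state oxygen flow follows a series-resistance form of Fick's law, and the far less permeable parylene layer dominates the total permeation resistance. The permeabilities, area, and pressures below are illustrative placeholders, not the thesis's measured values.

```python
# Back-of-envelope sketch (illustrative numbers only, not the thesis's measured
# values): steady-state oxygen flow through a composite parylene/silicone wall,
# driven by the pO2 difference and limited by the layers' series resistances:
# Q = A * dpO2 / (t_parylene/P_parylene + t_silicone/P_silicone).
A     = 0.05        # membrane area, cm^2 (hypothetical device window)
dpO2  = 10.0        # pO2 difference across the wall, cmHg (hypothetical)
P_si  = 6.0e-8      # silicone O2 permeability, cm^3(STP)*cm/(cm^2*s*cmHg) (placeholder)
P_pa  = 3.0e-11     # parylene O2 permeability, same units (orders of magnitude lower)
t_si  = 100e-4      # silicone thickness, cm (100 um)
t_pa  = 1e-4        # parylene thickness, cm (1 um)

resistance = t_pa / P_pa + t_si / P_si    # series permeation resistance
Q = A * dpO2 / resistance                 # cm^3(STP) of O2 per second
print(f"O2 delivery ~ {Q * 3600 * 24 * 1e3:.2f} uL(STP)/day")
```

With these placeholder numbers the thin parylene layer contributes most of the resistance, so varying its thickness is the most direct way to set the delivery rate.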

The passive MEMS oxygen transporter devices were designed, built, and tested in both bench-top artificial eye models and in-vitro porcine cadaver eyes. The 3D unsteady saccade-induced laminar flow of water inside the eye model was modeled by computational fluid dynamics to study the convective transport of oxygen inside the eye induced by saccade (rapid eye movement). The saccade-enhanced transport effect was also demonstrated experimentally. Acute in-vivo animal experiments were performed in rabbits and dogs to verify the surgical procedure and the device functionality. Various hypotheses were confirmed both experimentally and computationally, suggesting that both types of devices are very promising for curing diabetic retinopathy. The chronic implantation of devices in ischemic dog eyes is still underway.

The proposed MEMS oxygen transporter devices can also be applied to treat other ocular and systemic diseases accompanied by retinal ischemia, such as central retinal artery occlusion, carotid artery disease, and some forms of glaucoma.

Relevance:

80.00%

Publisher:

Abstract:

The experimental portion of this thesis estimates the power spectral density of very low frequency semiconductor noise, from 10^-6.3 cps to 1 cps, with greater accuracy than that achieved in previous similar attempts. It is concluded that the spectrum is 1/f^α with α approximately 1.3 over most of the frequency range, but with a value of about 1 in the lowest decade. The noise sources are, among others, the first-stage circuits of a grounded-input silicon epitaxial operational amplifier. This thesis also investigates a peculiar form of stationarity which seems to distinguish flicker noise from other semiconductor noise.

In order to decrease by an order of magnitude the pernicious effects of temperature drifts, semiconductor "aging", and possible mechanical failures associated with prolonged periods of data taking, 10 independent noise sources were time-multiplexed and their spectral estimates were subsequently averaged. If the sources have similar spectra, it is demonstrated that this reduces the necessary data-taking time by a factor of 10 for a given accuracy.

In view of the measured high temperature sensitivity of the noise sources, it was necessary to combine the passive attenuation of a special-material container with active control. The noise sources were placed in a copper-epoxy container of high heat capacity and medium heat conductivity, and that container was immersed in a temperature controlled circulating ethylene-glycol bath.

Other spectra of interest, estimated from data taken concurrently with the semiconductor noise data, were the spectra of the bath's controlled temperature, the semiconductor surface temperature, and the power supply voltage amplitude fluctuations. A brief description of the equipment constructed to obtain the aforementioned data is included.

The analytical portion of this work is concerned with the following questions: What is the best final spectral density estimate given 10 statistically independent ones of varying quality and magnitude? How can the Blackman and Tukey algorithm, which is used for spectral estimation in this work, be improved upon? How can non-equidistant sampling reduce data processing cost? Should one try to remove common trends shared by supposedly statistically independent noise sources and, if so, what are the mathematical difficulties involved? What is a physically plausible mathematical model that can account for flicker noise, and what are the mathematical implications for its statistical properties? Finally, the variance of the spectral estimate obtained through the Blackman/Tukey algorithm is analyzed in greater detail; the variance is shown to diverge for α ≥ 1 in an assumed power spectrum of k/|f|^α, unless the assumed spectrum is "truncated".
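
For reference, the core of the Blackman-Tukey procedure discussed above is short enough to sketch: estimate the autocorrelation up to a maximum lag, taper it with a lag window, and Fourier-transform the result. The snippet below applies it to synthetic white noise; it is only a sketch of the method, not the thesis's actual data-processing chain.

```python
# Minimal sketch of a Blackman-Tukey spectral estimate in NumPy (synthetic data,
# not the thesis's measurement pipeline): windowed autocorrelation, then FFT.
import numpy as np

rng = np.random.default_rng(0)
N, max_lag = 4096, 256
x = rng.standard_normal(N)                 # stand-in for a digitized noise record
x -= x.mean()

# Biased autocorrelation estimate for lags 0..max_lag
acf = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(max_lag + 1)])

lag_window = np.hanning(2 * max_lag + 1)[max_lag:]   # one-sided Hann lag window
r = acf * lag_window

# PSD estimate: transform of the symmetrized, windowed autocorrelation
r_full = np.concatenate([r[::-1], r[1:]])            # lags -max_lag..max_lag
psd = np.abs(np.fft.rfft(r_full))
freqs = np.fft.rfftfreq(r_full.size, d=1.0)          # d = sample interval (1 s here)
print(freqs[:5], psd[:5])
```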

Relevance:

40.00%

Publisher:

Abstract:

Part I

The physical phenomena which will ultimately limit the packing density of planar bipolar and MOS integrated circuits are examined. The maximum packing density is obtained by minimizing the supply voltage and the size of the devices. The minimum size of a bipolar transistor is determined by junction breakdown, punch-through, and doping fluctuations. The minimum size of a MOS transistor is determined by gate oxide breakdown and drain-source punch-through. The packing density of fully active bipolar or static non-complementary MOS circuits becomes limited by power dissipation. The packing density of circuits which are not fully active, such as read-only memories, becomes limited by the area occupied by the devices, and the frequency is limited by the circuit time constants and by metal migration. The packing density of fully active dynamic or complementary MOS circuits is limited by the area occupied by the devices, and the frequency is limited by power dissipation and metal migration. It is concluded that read-only memories will reach approximately the same performance and packing density with MOS and bipolar technologies, while fully active circuits will reach the highest levels of integration with dynamic MOS or complementary MOS technologies.

Part II

Because the Schottky diode is a one-carrier device, it has both advantages and disadvantages with respect to the junction diode, which is a two-carrier device. The advantage is that there are practically no excess minority carriers which must be swept out before the diode blocks current in the reverse direction, i.e., a much faster recovery time. The disadvantage of the Schottky diode is that for a high-voltage device it is not possible to use conductivity modulation as in the p-i-n diode; since charge carriers are of one sign, no charge cancellation can occur and the current becomes space-charge limited. The Schottky diode design is developed in Section 2, and the characteristics of an optimally designed silicon Schottky diode are summarized in Fig. 9. Design criteria and a quantitative comparison of junction and Schottky diodes are given in Table 1 and Fig. 10. Although somewhat approximate, the treatment allows a systematic quantitative comparison of the devices for any given application.

Part III

We interpret measurements of permittivity of perovskite strontium titanate as a function of orientation, temperature, electric field and frequency performed by Dr. Richard Neville. The free energy of the crystal is calculated as a function of polarization. The Curie-Weiss law and the LST relation are verified. A generalized LST relation is used to calculate the permittivity of strontium titanate from zero to optic frequencies. Two active optic modes are important. The lower frequency mode is attributed mainly to motion of the strontium ions with respect to the rest of the lattice, while the higher frequency active mode is attributed to motion of the titanium ions with respect to the oxygen lattice. An anomalous resonance which multi-domain strontium titanate crystals exhibit below 65°K is described and a plausible mechanism which explains the phenomenon is presented.

Relevance:

30.00%

Publisher:

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, which are motivated by power systems. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix (describing marginal and conditional dependencies between brain regions, respectively) have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit from measurements alone. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned. However, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
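
A minimal version of the measurement-only topology recovery step can be sketched with an off-the-shelf graphical lasso (the standard algorithm, not the modified variant proposed in the thesis): simulate node signals from a known sparse precision matrix, fit the estimator, and read the support of the estimated precision matrix as the recovered graph. The chain topology and sample sizes are hypothetical.

```python
# Minimal sketch (not the thesis's modified algorithm): recovering a sparse
# conditional-dependence graph from node measurements with the graphical lasso.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
n_nodes, n_samples = 5, 2000

# Hypothetical ground-truth precision matrix of a chain 0-1-2-3-4 (tridiagonal)
K = np.eye(n_nodes) * 2.0
for i in range(n_nodes - 1):
    K[i, i + 1] = K[i + 1, i] = -0.8
cov_true = np.linalg.inv(K)

# Simulated node signals, then a graphical-lasso fit
X = rng.multivariate_normal(np.zeros(n_nodes), cov_true, size=n_samples)
model = GraphicalLasso(alpha=0.05).fit(X)

# Support of the estimated precision matrix = recovered adjacency
est_edges = (np.abs(model.precision_) > 1e-2) & ~np.eye(n_nodes, dtype=bool)
print("recovered adjacency:\n", est_edges.astype(int))
```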

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
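
For context, the classical fluid model that this work improves upon can be summarized in a few lines: sources adjust their rates in response to a link price, while the price grows whenever the aggregate rate exceeds capacity. The sketch below simulates that baseline for a single link with log utilities (weights and capacity are hypothetical); it deliberately omits the buffering effects that the thesis's refined model accounts for.

```python
# Minimal sketch of the classical single-link fluid model (no buffering effects):
# log-utility sources react to a link price that integrates the excess of the
# total rate over capacity.
import numpy as np

w = np.array([1.0, 2.0, 3.0])     # source weights, U_i(x) = w_i * log(x)
c = 10.0                          # link capacity
p, gamma, dt = 0.1, 0.05, 0.1     # initial price, price gain, time step

for _ in range(2000):
    x = w / p                                        # each source's optimal rate given the price
    p = max(p + gamma * dt * (x.sum() - c), 1e-6)    # price dynamics driven by excess rate

print("rates:", x.round(3), "sum:", round(x.sum(), 3), "price:", round(p, 4))
```

At equilibrium the rates are proportional to the weights and exactly fill the capacity, which is the weighted proportional fairness the classical model predicts.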

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
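
The flavor of such a relaxation can be illustrated on the smallest possible example: a single line feeding a single load, written in branch-flow (DistFlow) variables, with the nonconvex equality |S|^2 = l * v relaxed to a second-order-cone inequality. This toy model uses made-up impedances and loads, omits the phase shifters and the over-delivery discussion, and assumes the cvxpy package; it is only meant to show what "convex relaxation of OPF" means operationally, not the thesis's general formulation.

```python
# Minimal sketch: SOC relaxation of the branch-flow OPF model for one line and
# one load (hypothetical data). On radial networks such relaxations can be exact.
import cvxpy as cp

r, x = 0.01, 0.02        # line resistance and reactance (per unit)
pL, qL = 0.8, 0.3        # load at bus 1 (per unit)
v0 = 1.0                 # squared voltage at the substation bus

P, Q = cp.Variable(), cp.Variable()            # power sent into the line
l, v1 = cp.Variable(nonneg=True), cp.Variable()  # squared current, squared voltage at bus 1
constraints = [
    P == pL + r * l,                               # real power balance at bus 1
    Q == qL + x * l,                               # reactive power balance at bus 1
    v1 == v0 - 2 * (r * P + x * Q) + (r**2 + x**2) * l,
    cp.quad_over_lin(cp.hstack([P, Q]), l) <= v0,  # relaxation of |S|^2 = l * v0
    v1 >= 0.9**2, v1 <= 1.1**2,
]
prob = cp.Problem(cp.Minimize(P), constraints)     # minimize power drawn (load plus losses)
prob.solve()
print(f"P = {P.value:.4f} pu, losses = {P.value - pL:.5f} pu, v1 = {v1.value**0.5:.4f} pu")
```

Because the objective penalizes losses, the cone constraint is tight at the optimum here, so the relaxed solution is feasible for the original nonconvex problem.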

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance:

30.00%

Publisher:

Abstract:

Technology scaling has enabled drastic growth in the computational and storage capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between and within ICs. In this dissertation we focus on low-power solutions that address this demand. We divide communication links into three subcategories depending on the communication distance. Each category has a different set of challenges and requirements and is affected by CMOS technology scaling in a different manner. We start with short-range chip-to-chip links for board-level communication. Next, we discuss board-to-board links, which demand a longer communication range. Finally, on-chip links with communication ranges of a few millimeters are discussed.

Electrical signaling is a natural choice for chip-to-chip communication due to efficient integration and low cost. IO data rates have increased to the point where electrical signaling is now limited by the channel bandwidth. In order to achieve multi-Gb/s data rates, complex designs that equalize the channel are necessary. In addition, a high level of parallelism is central to sustaining bandwidth growth. Decision feedback equalization (DFE) is one of the most commonly employed techniques to overcome the limited bandwidth of electrical channels. A linear and low-power summer is the central block of a DFE. Conventional approaches employ current-mode techniques to implement the summer, which require high power consumption. In order to achieve low-power operation we propose performing the summation in the charge domain. This approach enables a low-power and compact realization of the DFE as well as crosstalk cancellation. A prototype receiver was fabricated in 45nm SOI CMOS to validate the functionality of the proposed technique and was tested over channels with different levels of loss and coupling. Measurement results show that the receiver can equalize channels with up to 21dB of loss while consuming about 7.5mW from a 1.2V supply. We also introduce a compact, low-power transmitter employing passive equalization. The efficacy of the proposed technique is demonstrated through the implementation of a prototype in 65nm CMOS. The design achieves up to 20Gb/s data rate while consuming less than 10mW.
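
As a behavioral illustration of why a DFE helps on a band-limited channel, the sketch below models a two-tap decision-feedback equalizer in plain arithmetic: past decisions, scaled by feedback taps matched to the channel's post-cursors, are subtracted at the summer before the slicer. The channel pulse, noise level, and tap values are made up, and the model says nothing about the charge-domain circuit technique itself.

```python
# Behavioral sketch (plain arithmetic, not the charge-domain circuit): a two-tap
# decision-feedback equalizer removing the post-cursor ISI of a lossy channel.
import numpy as np

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, 10_000) * 2 - 1          # NRZ symbols +/-1
pulse = np.array([1.0, 0.7, 0.4])                  # hypothetical channel: main cursor + two post-cursors
rx = np.convolve(bits, pulse)[:bits.size] + 0.05 * rng.standard_normal(bits.size)

tap1, tap2 = 0.7, 0.4                              # feedback taps matched to the post-cursors
dec = np.zeros(bits.size)
for n in range(bits.size):
    fb = tap1 * (dec[n - 1] if n >= 1 else 0) + tap2 * (dec[n - 2] if n >= 2 else 0)
    dec[n] = 1.0 if rx[n] - fb >= 0 else -1.0      # summer followed by the slicer

print("bit errors with DFE   :", int(np.sum(dec != bits)))
print("bit errors without DFE:", int(np.sum(np.sign(rx) != bits)))
```

With this (hypothetical) channel the ISI alone exceeds the main cursor, so the unequalized eye is closed; feeding back the previous decisions reopens it.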

An alternative to electrical signaling is to employ optical signaling for chip-to-chip interconnections, which offers low channel loss and cross-talk while providing high communication bandwidth. In this work we demonstrate the possibility of building compact and low-power optical receivers. A novel RC front-end is proposed that combines dynamic offset modulation and double-sampling techniques to eliminate the need for a short time constant at the input of the receiver. Unlike conventional designs, this receiver does not require a high-gain stage that runs at the data rate, making it suitable for low-power implementations. In addition, it allows time-division multiplexing to support very high data rates. A prototype was implemented in 65nm CMOS and achieved up to 24Gb/s with less than 0.4pJ/b power efficiency per channel. As the proposed design mainly employs digital blocks, it benefits greatly from technology scaling in terms of power and area saving.

As the technology scales, the number of transistors on the chip grows. This necessitates a corresponding increase in the bandwidth of the on-chip wires. In this dissertation, we take a close look at wire scaling and investigate its effect on wire performance metrics. We explore a novel on-chip communication link based on a double-sampling architecture and a dynamic offset modulation technique that enables low power consumption and high data rates while achieving high bandwidth density in 28nm CMOS technology. The functionality of the link is demonstrated using minimum-pitch on-chip wires of different lengths. Measurement results show that the link achieves up to 20Gb/s of data rate (12.5Gb/s/µm) with better than 136fJ/b of power efficiency.

Relevance:

30.00%

Publisher:

Abstract:

Future fossil fuel scarcity and environmental degradation have demonstrated the need for renewable, low-carbon sources of energy to power an increasingly industrialized world. Solar energy, with its effectively infinite supply, is an extraordinary resource that should not go unused. With current materials, however, adoption is limited by cost, so a paradigm shift is needed before solar technology can be embraced broadly. Cuprous oxide (Cu2O) is a promising earth-abundant material that can be a strong alternative to traditional thin-film photovoltaic materials such as CIGS and CdTe. We have prepared Cu2O bulk substrates by the thermal oxidation of copper foils, as well as Cu2O thin films deposited via plasma-assisted molecular beam epitaxy. From preliminary Hall measurements it was determined that Cu2O would need to be doped extrinsically; this was further confirmed by simulations of ZnO/Cu2O heterojunctions. A cyclic interdependence between defect concentration, minority carrier lifetime, film thickness, and carrier concentration manifests itself as a primary reason why efficiencies greater than 4% have yet to be realized. Our growth methodology for thin-film heterostructures allows precise control of the number of defects that incorporate into the film during both equilibrium and nonequilibrium growth. We also report the process flow, device design, and fabrication techniques used to create a device. A typical device without any optimizations exhibited open-circuit voltages (Voc) in excess of 500mV, nearly 18% greater than previous solid-state devices.

Relevance:

30.00%

Publisher:

Abstract:

The rapid growth and development of Los Angeles City and County has been one of the phenomena of the present age. The growth of a city from 50,600 to 576,000, an increase of over 1000% in thirty years, is an unprecedented occurrence. It has given rise to a variety of problems of increasing magnitude.

Chief among these are: the supply of food, water, and shelter; the development of industry and markets; the prevention and removal of downtown congestion; and the protection of life and property. These, of course, are problems that any city must face. But in the case of a community which doubles its population every ten years, radical and heroic measures must often be taken.