3 results for Internet of Things, Physical Web, Vending Machines, Beacon, Eddystone

in CaltechTHESIS


Relevance: 100.00%

Abstract:

The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software defined networks or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
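The quadratic invariance condition has a compact statement in the literature (Rotkowitz and Lall); in notation not used in the abstract itself, with G_{22} the map from control inputs to measurements and S the subspace of controllers respecting the information structure:

```latex
% Quadratic invariance: the controller subspace S is quadratically
% invariant under the plant block G_{22} when
\[
  K \, G_{22} \, K \in S \qquad \text{for all } K \in S .
\]
% Under this condition, K \in S if and only if the Youla-type parameter
% Q = K (I - G_{22} K)^{-1} lies in S, so the information constraint
% becomes a convex (subspace) constraint on the closed-loop map.
```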

The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, producing a rich body of results over the past decade. The contributions of this thesis can be seen as being a part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.

We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems that considers lossy channels in the feedback loop.

Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given -- indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it.

Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage a priori information about the system's interconnection structure. We argue that, in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end, we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
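The order/rank connection behind this nuclear norm formulation can be illustrated with a toy NumPy sketch (the systems and dimensions here are made up, not taken from the thesis): by the Ho-Kalman result, the Hankel matrix of an impulse response has rank equal to the minimal system order, so "low-order" becomes "low-rank" in matrix form, and the nuclear norm is the standard convex surrogate for rank.

```python
import numpy as np

def impulse_hankel(h, m):
    """m x m Hankel matrix H[i, j] = h[i + j] of an impulse response."""
    return np.array([[h[i + j] for j in range(m)] for i in range(m)])

k = np.arange(1, 41)                 # time steps 1..40
h_low  = 0.9 ** k                    # order-1 system (single pole at 0.9)
h_high = 0.9 ** k + 0.5 ** k         # order-2 system (poles at 0.9 and 0.5)

H_low  = impulse_hankel(h_low, 12)
H_high = impulse_hankel(h_high, 12)

# Hankel rank equals minimal system order: low order <-> low rank.
rank_low  = np.linalg.matrix_rank(H_low)    # 1
rank_high = np.linalg.matrix_rank(H_high)   # 2

# The nuclear norm (sum of singular values) is the convex envelope of rank,
# which is what lets the separation task be posed as a convex program.
nuc = np.linalg.svd(H_low, compute_uv=False).sum()
print(rank_low, rank_high, round(nuc, 3))
```

This is only the rank principle, not the thesis's separation heuristic itself, which additionally exploits the full-rank structure of the local dynamics.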

Abstract:

Part I

Particles are a key feature of planetary atmospheres. On Earth they represent the greatest source of uncertainty in the global energy budget. This uncertainty can be addressed by making more measurements, by improving the theoretical analysis of measurements, and by better modeling basic particle nucleation and initial particle growth within an atmosphere. This work will focus on the latter two methods of improvement.

Uncertainty in measurements is largely due to particle charging. Accurate descriptions of particle charging are challenging because one deals with particles in a gas as opposed to a vacuum, so different length scales come into play. Previous studies have considered the effects of the transition between the continuum and kinetic regimes and the effects of two- and three-body interactions within the kinetic regime. These studies, however, use questionable assumptions about the charging process, which skew observations and bias the proposed dynamics of aerosol particles. These assumptions affect both the ions and the particles in the system. Ions are assumed to be point monopoles with a single characteristic speed rather than a distribution of speeds. Particles are assumed to be perfect conductors carrying up to five elementary charges. The effects of three-body (ion-molecule-particle) interactions are also overestimated. By revising this theory so that the basic physical attributes of both ions and particles, and their interactions, are better represented, we are able to make more accurate predictions of particle charging in both the kinetic and continuum regimes.

The same revised theory used above to model ion charging can also be applied to the flux of neutral vapor-phase molecules to a particle or initial cluster. Using these results, we can model the vapor flux to a neutral or charged particle due to diffusion and electromagnetic interactions. In many classical theories currently applied to these models, the finite size of the molecule and the electromagnetic interaction between the molecule and the particle, especially in the neutral-particle case, are completely ignored or, as is often the case for a permanent-dipole vapor species, strongly underestimated. Comparing our model to these classical models, we determine an "enhancement factor" to characterize how important the addition of these physical parameters and processes is to the understanding of particle nucleation and growth.
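The abstract does not define the enhancement factor explicitly, but a natural reading is the ratio of the modeled flux to its classical counterpart; schematically (the symbols below are ours, not the thesis's):

```latex
\[
  \eta \;=\; \frac{\Phi_{\text{model}}}{\Phi_{\text{classical}}},
\]
% where \Phi_{\text{model}} includes the finite molecular size and the
% molecule-particle electromagnetic interactions, and \Phi_{\text{classical}}
% is the purely diffusive hard-sphere flux; \eta \gg 1 signals that the
% neglected physics matters for nucleation and growth rates.
```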

Part II

Whispering gallery mode (WGM) optical biosensors are capable of extraordinarily sensitive detection, both specific and non-specific, of species suspended in a gas or fluid. Recent experimental results suggest that these devices may attain single-molecule sensitivity to protein solutions in the form of stepwise shifts in their resonance wavelength, λ_R, but present sensor models predict much smaller steps than were reported. This study examines the physical interaction between a WGM sensor and a molecule adsorbed to its surface, exploring assumptions made in previous efforts to model WGM sensor behavior, and describing computational schemes that model the experiments for which single-protein sensitivity was reported. The resulting model is used to simulate sensor performance within the constraints imposed by the limited material property data. On this basis, we conclude that nonlinear optical effects would be needed to attain the reported sensitivity, and that, in the experiments for which extreme sensitivity was reported, a bound protein experiences optical energy fluxes too high for such effects to be ignored.
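For context, the standard linear (reactive) estimate of the adsorption-induced shift, the kind of model whose predictions fall short of the reported steps, is the first-order perturbation result from the WGM sensing literature (this equation is not quoted in the abstract):

```latex
\[
  \frac{\Delta \lambda}{\lambda}
  \;\approx\;
  \frac{\alpha_{\mathrm{ex}} \, \lvert \mathbf{E}_0(\mathbf{r}_p) \rvert^{2}}
       {2 \int \varepsilon(\mathbf{r}) \, \lvert \mathbf{E}_0(\mathbf{r}) \rvert^{2} \, \mathrm{d}V},
\]
% where \alpha_{\mathrm{ex}} is the excess polarizability of the bound
% molecule, \mathbf{r}_p its binding site, and \mathbf{E}_0 the unperturbed
% mode field of the resonator.
```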

Abstract:

The assembly history of massive galaxies is one of the most important aspects of galaxy formation and evolution. Although we have a broad idea of what physical processes govern the early phases of galaxy evolution, there are still many open questions. In this thesis I demonstrate the crucial role that spectroscopy can play in a physical understanding of galaxy evolution. I present deep near-infrared spectroscopy for a sample of high-redshift galaxies, from which I derive important physical properties and their evolution with cosmic time. I take advantage of the recent arrival of efficient near-infrared detectors to target the rest-frame optical spectra of z > 1 galaxies, from which many physical quantities can be derived. After illustrating the applications of near-infrared deep spectroscopy with a study of star-forming galaxies, I focus on the evolution of massive quiescent systems.

Most of this thesis is based on two samples collected at the W. M. Keck Observatory that represent a significant step forward in the spectroscopic study of z > 1 quiescent galaxies. All previous spectroscopic samples at this redshift were either limited to a few objects or much shallower. Our first sample is composed of 56 quiescent galaxies at 1 < z < 1.6, collected using the upgraded red arm of the Low Resolution Imaging Spectrometer (LRIS). The second consists of 24 deep spectra of 1.5 < z < 2.5 quiescent objects observed with the Multi-Object Spectrometer For Infra-Red Exploration (MOSFIRE). Together, these spectra span the critical epoch 1 < z < 2.5, where most of the red sequence is formed and where the sizes of quiescent systems are observed to increase significantly.

We measure stellar velocity dispersions and dynamical masses for the largest number of z > 1 quiescent galaxies to date. By assuming that the velocity dispersion of a massive galaxy does not change throughout its lifetime, as suggested by theoretical studies, we match galaxies in the local universe with their high-redshift progenitors. This allows us to derive the physical growth in mass and size experienced by individual systems, which represents a substantial advance over photometric inferences based on the overall galaxy population. We find significant physical growth among quiescent galaxies over 0 < z < 2.5 and, by comparing the slope of growth in the mass-size plane, dlog Re/dlog M, with the results of numerical simulations, we can constrain the physical process responsible for the evolution. Our results show that the slope of growth becomes steeper at higher redshifts, yet is broadly consistent with minor mergers being the main process by which individual objects evolve in mass and size.
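The slope measurement itself reduces to a fit in log-log space; a minimal sketch with synthetic numbers, chosen so the track has slope exactly 2 (the value classically associated with growth by minor mergers). The masses and radii below are illustrative only; the real values come from the matched samples in the thesis.

```python
import numpy as np

# Hypothetical progenitor-to-descendant track of one galaxy: stellar mass
# (solar masses) and effective radius (kpc). Values are illustrative only.
M = np.array([1.0e11, 1.3e11, 1.7e11, 2.2e11])
Re = 1.5 * (M / 1.0e11) ** 2.0        # built so that dlog Re/dlog M = 2

# Slope of growth in the mass-size plane: least-squares line in log-log space.
slope, intercept = np.polyfit(np.log10(M), np.log10(Re), 1)
print(round(slope, 3))                # 2.0: minor-merger-like growth
```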

By fitting stellar population models to the observed spectroscopy and photometry, we derive reliable ages and other stellar population properties. We show that the addition of the spectroscopic data helps break the degeneracy between age and dust extinction, and yields significantly more robust results than fitting models to the photometry alone. We detect a clear relation between size and age, where larger galaxies are younger. Therefore, over time the average size of the quiescent population will increase because of the contribution of large galaxies that have recently arrived on the red sequence. This effect, called progenitor bias, is different from the physical size growth discussed above, but represents another contribution to the observed difference between the typical sizes of low- and high-redshift quiescent galaxies. By reconstructing the evolution of the red sequence starting at z ∼ 1.25, and using our stellar population histories to infer the past behavior out to z ∼ 2, we demonstrate that progenitor bias accounts for only half of the observed growth of the population. The remaining size evolution must be due to physical growth of individual systems, in agreement with our dynamical study.

Finally, we use the stellar population properties to explore the earliest periods which led to the formation of massive quiescent galaxies. We find tentative evidence for two channels of star formation quenching, which suggests the existence of two independent physical mechanisms. We also detect a mass downsizing, where more massive galaxies form at higher redshift, and then evolve passively. By analyzing in depth the star formation history of the brightest object at z > 2 in our sample, we are able to put constraints on the quenching timescale and on the properties of its progenitor.

A consistent picture emerges from our analyses: massive galaxies form at very early epochs, are quenched on short timescales, and then evolve passively. The evolution is passive in the sense that no new stars are formed, but significant mass and size growth is achieved by accreting smaller, gas-poor systems. At the same time the population of quiescent galaxies grows in number due to the quenching of larger star-forming galaxies. This picture is in agreement with other observational studies, such as measurements of the merger rate and analyses of galaxy evolution at fixed number density.