17 results for "exploit" in CaltechTHESIS

Abstract:

Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By interacting with communication theory and system implementation technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting mathematical tools such as analysis, probability theory, matrix theory, and optimization theory, among many others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communication channels. Using elegant matrix-vector notation, many MIMO transceiver (including the precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, researchers showed that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), geometric mean decomposition (GMD) and generalized triangular decomposition (GTD), provide unified frameworks for solving many point-to-point MIMO transceiver design problems.
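The SVD framework mentioned above can be made concrete with a small numerical sketch (the channel, its dimensions, and all variable names are our own illustration, not taken from the thesis):

```python
# A toy illustration of the SVD viewpoint: precoding with V and equalizing
# with U^H turns y = Hx + n into a set of parallel scalar subchannels whose
# gains are the singular values of H.
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # flat MIMO channel

U, s, Vh = np.linalg.svd(H)
F = Vh.conj().T              # transmit precoder V
G = U.conj().T               # receive equalizer U^H

H_eff = G @ H @ F            # effective end-to-end channel is diagonal
assert np.allclose(H_eff, np.diag(s), atol=1e-10)
print(np.round(s, 3))        # per-subchannel gains
```

With unequal singular values, bit loading across subchannels is needed for good performance; the GMD- and GTD-based designs discussed below exist precisely to avoid that.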

In this thesis, we consider transceiver design problems for linear time-invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. The channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions and the use of matrix decompositions and majorization theory in practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.

The first part of the thesis focuses on transceiver design with LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, the generalized geometric mean decomposition (GGMD), is always less than or equal to that of the geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative detection algorithm for this receiver is also proposed. For application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K)-fold complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate at the subchannels.
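The equal-gain property that makes GMD-type designs attractive can be checked numerically (our toy example, not thesis code): the GMD factors A = Q R P^H with R upper triangular whose diagonal entries all equal the geometric mean of the singular values, so every subchannel sees the same gain.

```python
# The common diagonal value of R in a GMD is the geometric mean of the
# singular values; in particular the product of subchannel gains (and hence
# the determinant magnitude) is preserved, while the gains are equalized.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))

s = np.linalg.svd(A, compute_uv=False)
sigma_bar = np.prod(s) ** (1.0 / len(s))   # common diagonal of R in the GMD

assert np.isclose(sigma_bar ** len(s), np.prod(s))
print(round(float(sigma_bar), 4))
```

Because all diagonal entries of R are equal, no per-subchannel bit allocation is required, which is the point the abstract makes about BER minimization.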

In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels under the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal, since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, we use the proposed ST-GTD to develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks), and the average per-ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST block. In general, the newly proposed transceivers perform better than the GGMD-based systems since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but shares the same asymptotic BER performance as the ST-GMD DFE transceiver is also proposed.

The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first one is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second problem is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on the QR decomposition are proposed. They are realizable for an arbitrary number of users.

Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) over linear time-varying (LTV) scalar channels. For both the case of known LTV channels and that of unknown wide-sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and the number of identifiable paths is theoretically up to O(M^2). With the delay information, an MMSE estimator of the frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
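The difference co-array idea can be made concrete with a tiny example (the pilot positions below are our own illustration, not the placement used in the thesis):

```python
# M physical pilots at positions p_i yield virtual "co-pilots" at every
# pairwise difference p_i - p_j, so a good placement gives O(M^2) distinct
# lags for subspace methods such as MUSIC and ESPRIT to work with.
pilots = [0, 1, 4, 6]                        # M = 4 physical pilot tones
diffs = {p - q for p in pilots for q in pilots}
print(sorted(diffs))                         # 13 distinct lags from only 4 pilots
assert len(diffs) > len(pilots)
```

Here 4 pilots generate every lag from -6 to 6, which is why the number of identifiable multipath delays can greatly exceed the number of physical pilots.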

Abstract:

The dissertation studies the general area of complex networked systems that consist of interconnected and active heterogeneous components and usually operate in uncertain environments and with incomplete information. Problems associated with those systems are typically large-scale and computationally intractable, yet they are also very well-structured and have features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools to exploit those structures that can lead to computationally-efficient and distributed solutions, and apply them to improve systems operations and architecture.

Specifically, the thesis focuses on two concrete areas. The first is to design distributed rules to manage distributed energy resources in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially its distribution system, will be a large-scale network of distributed energy resources (DERs), each introducing random and rapid fluctuations in power supply, demand, voltage and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability. However, there are daunting technical challenges in managing these DERs and optimizing their operation. The focus of this dissertation is to develop scalable, distributed, and real-time control and optimization to achieve system-wide efficiency, reliability, and robustness for the future power grid. In particular, we present how to exploit the power network structure to design efficient, distributed markets and algorithms for energy management. We also show how to connect these algorithms with physical dynamics and existing control mechanisms for real-time control in power networks.

The second focus is to develop distributed optimization rules for general multi-agent engineering systems. A central goal in multi-agent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to the given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control on the least amount of information possible. Our work focuses on achieving this goal using the framework of game theory. In particular, we derive a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system-level design objective and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g., potential games. The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to the game-theoretic equilibrium. One main advantage of this game-theoretic approach is that it provides a hierarchical decomposition between the design of the system-level objective (game design) and the specific local decision rules (distributed learning algorithms). This decomposition provides the system designer with tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multi-agent systems. Furthermore, in many settings the resulting controllers are inherently robust to a host of uncertainties including asynchronous clock rates, delays in information, and component failures.
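The potential-game mechanism can be sketched in a few lines (a toy two-player game of our own construction, purely illustrative): when both players' payoff differences equal the change in a shared potential, round-robin best responses climb the potential and settle at a Nash equilibrium.

```python
# Shared potential phi over joint actions in {0,1} x {0,1}; agents' utilities
# are aligned with phi by design, so best-response dynamics ascend phi.
phi = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 2.0, (1, 1): 4.0}

a = [0, 0]                                   # initial joint action
for _ in range(5):                           # round-robin best-response dynamics
    for i in (0, 1):
        options = [(phi[tuple(a[:i] + [x] + a[i + 1:])], x) for x in (0, 1)]
        a[i] = max(options)[1]               # agent i best-responds

print(a)                                     # settles at the potential maximizer
assert tuple(a) == (1, 1)
```

In the methodology described above, the game designer's job is to construct such aligned local objectives; any learning rule with convergence guarantees for potential games then finishes the control design.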

Abstract:

The field of cavity-optomechanics explores the interaction of light with sound in an ever increasing array of devices. This interaction allows the mechanical system to be both sensed and controlled by the optical system, opening up a wide variety of experiments including the cooling of the mechanical resonator to its quantum mechanical ground state and the squeezing of the optical field upon interaction with the mechanical resonator, to name two.

In this work we explore two very different systems with different types of optomechanical coupling. The first system consists of two microdisk optical resonators stacked on top of each other and separated by a very small slot. The interaction of the disks causes their optical resonance frequencies to be extremely sensitive to the gap between them. By careful control of this gap, the optomechanical coupling can be made quadratic to first order, which is uncommon in optomechanical systems. With this quadratic coupling, the light field is sensitive to the energy of the mechanical resonator and can directly control the potential energy trapping the mechanical motion. This ability to directly control the spring constant without modifying the energy of the mechanical system, unlike in linear optomechanical coupling, is explored.
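In standard notation (ours, not lifted from the thesis), the contrast between linear and quadratic coupling can be sketched as:

```latex
% Expanding the optical resonance about the equilibrium gap x = 0:
\omega(x) \approx \omega_0 + g_1 x + \tfrac{1}{2} g_2 x^2 ,
% with the disk geometry tuned so that the linear term g_1 vanishes.
% The interaction energy then couples the light to x^2 rather than x:
H_{\mathrm{int}} = \hbar\,\omega(x)\,\hat{a}^\dagger \hat{a}
 \;\approx\; \hbar\omega_0\,\hat{a}^\dagger\hat{a}
 + \tfrac{1}{2}\,\hbar g_2\,\hat{n}\,\hat{x}^2 ,
% so the intracavity photon number shifts the mechanical spring constant by
% \Delta k = \hbar g_2 \langle\hat{n}\rangle without exerting a static force.
```

This is why the quadratic regime lets the light both read out the mechanical energy and stiffen or soften the trap directly, as described above.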

Next, the bulk of this thesis deals with a high-mechanical-frequency optomechanical crystal which is used to coherently convert photons between different frequencies. This is accomplished via the engineered linear optomechanical coupling in these devices. Both classical and quantum systems utilize the interaction of light and matter across a wide range of energies. These systems are often not naturally compatible with one another and require a means of converting photons of dissimilar wavelengths to combine and exploit their different strengths. Here we theoretically propose and experimentally demonstrate coherent wavelength conversion of optical photons using photon-phonon translation in a cavity-optomechanical system. For an engineered silicon optomechanical crystal nanocavity supporting a 4 GHz localized phonon mode, optical signals in a 1.5 MHz bandwidth are coherently converted over a 11.2 THz frequency span between one cavity mode at wavelength 1460 nm and a second cavity mode at 1545 nm with a 93% internal (2% external) peak efficiency. The thermal and quantum limiting noise involved in the conversion process is also analyzed and, in terms of an equivalent photon-number signal level, is found to correspond to internal noise levels of only 6 × 10^-3 and 4 × 10^-3 quanta, respectively.

We begin by developing the requisite theoretical background to describe the system. A significant amount of time is then spent describing the fabrication of these silicon nanobeams, with an emphasis on understanding the specifics and motivation. The experimental demonstration of wavelength conversion is then described and analyzed. It is determined that the method of coupling photons into and collecting them from the cavity is a fundamental limiting factor in the overall efficiency. Finally, a new coupling scheme is designed, fabricated, and tested that provides a means of coupling greater than 90% of photons into and out of the cavity, addressing one of the largest obstacles with the initial wavelength conversion experiment.

Abstract:

Home to hundreds of millions of souls and a land of excess, the Himalaya is also the locus of a unique seismicity whose scope and peculiarities still remain to this day somewhat mysterious. Having claimed the lives of kings, or turned ancient timeworn cities into heaps of rubble and ruins, earthquakes eerily inhabit Nepalese folk tales with the fatalistic message that nothing lasts forever. From a scientific point of view as much as from a human perspective, solving the mysteries of Himalayan seismicity thus represents a challenge of prime importance. Documenting geodetic strain across the Nepal Himalaya with various GPS and leveling data, we show that unlike other subduction zones that exhibit a heterogeneous and patchy coupling pattern along strike, the last hundred kilometers of the Main Himalayan Thrust fault, or MHT, appear to be uniformly locked, devoid of any of the “creeping barriers” that traditionally ward off the propagation of large events. Since the approximately 20 mm/yr of convergence reckoned across the Himalaya matches previously established estimates of the secular deformation at the front of the arc, the slip accumulated at depth has to somehow propagate elastically all the way to the surface at some point. And yet, neither large events from the past nor currently recorded microseismicity come close to compensating for the massive moment deficit that quietly builds up under the giant mountains. Along with this large unbalanced moment deficit, the uncommonly homogeneous coupling pattern on the MHT raises the question of whether or not the locked portion of the MHT can rupture all at once in a giant earthquake. Unequivocally answering this question appears contingent on the still elusive estimate of the magnitude of the largest possible earthquake in the Himalaya, and requires tight constraints on local fault properties.
What makes the Himalaya enigmatic also makes it the potential source of an incredible wealth of information, and we exploit some of the oddities of Himalayan seismicity in an effort to improve the understanding of earthquake physics and decipher the properties of the MHT. Thanks to the Himalaya, the Indo-Gangetic plain is deluged each year by a tremendous amount of water during the annual summer monsoon, which collects on and bears down on the Indian plate enough to pull it slightly away from the Eurasian plate, temporarily relieving a small portion of the stress mounting on the MHT. As the rainwater evaporates in the dry winter season, the plate rebounds and tension is increased back on the fault. Interestingly, the mild waggle of stress induced by the monsoon rains is about the same size as that from solid-Earth tides, which gently tug at the planet's solid layers; but whereas changes in earthquake frequency correspond with the annually occurring monsoon, there is no such correlation with Earth tides, which oscillate back and forth twice a day. We therefore investigate the general response of the creeping and seismogenic parts of the MHT to periodic stresses in order to link these observations to physical parameters. First, the response of the creeping part of the MHT is analyzed with a simple spring-and-slider system bearing a rate-strengthening rheology, and we show that at the transition with the locked zone, where the friction becomes nearly velocity neutral, the response of the slip rate may be amplified at certain periods, whose values are analytically related to the physical parameters of the problem.
Such predictions therefore hold the potential of constraining fault properties on the MHT, but they still await observational counterparts to be applied, as nothing indicates that the variations of seismicity rate on the locked part of the MHT are the direct expression of variations of the slip rate on its creeping part, and no variations of the slip rate have been singled out from the GPS measurements to this day. When shifting to the locked, seismogenic part of the MHT, spring-and-slider models with rate-weakening rheology are insufficient to explain the contrasting responses of the seismicity to the periodic loads that tides and monsoon both place on the MHT. Instead, we resort to numerical simulations using the Boundary Integral CYCLes of Earthquakes algorithm and examine the response of a 2D finite fault embedded with a rate-weakening patch to harmonic stress perturbations of various periods. We show that such simulations are able to reproduce results consistent with a gradual amplification of sensitivity as the perturbing period gets larger, up to a critical period corresponding to the characteristic time of evolution of the seismicity in response to a step-like perturbation of stress. This increase of sensitivity was not reproduced by simple 1D spring-slider systems, probably because of the complexity of the nucleation process, reproduced only by 2D fault models. When the nucleation zone is close to its critical unstable size, its growth becomes highly sensitive to any external perturbation, and the timing of the produced events may therefore be strongly affected. A fully analytical framework has yet to be developed, and further work is needed to fully describe the behavior of the fault in terms of physical parameters, which will likely provide the keys to deducing constitutive properties of the MHT from seismological observations.

Abstract:

A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (fewer unknowns than equations), traditionally very few statistical estimation problems have considered a data model which is underdetermined (more unknowns than equations). In recent times, however, an explosion of theoretical and computational methods has been developed primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that in spite of the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, etc., its information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.

In this thesis, we provide new directions for estimation in an underdetermined system, both for a class of parameter estimation problems and also for the problem of sparse recovery in compressive sensing. There are two main contributions of the thesis: design of new sampling and statistical estimation algorithms for array processing, and development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.

We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) as well as propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model such as correlation, higher order moments, etc.

Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
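To make the support-recovery problem concrete, here is a plain greedy routine (orthogonal matching pursuit, a generic sketch of our own; the thesis's correlation-aware framework goes beyond this):

```python
# Noiseless underdetermined model y = Ax with a 2-sparse x. OMP repeatedly
# picks the column most correlated with the residual, then re-fits by least
# squares on the selected support.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(25, 30))
A /= np.linalg.norm(A, axis=0)            # unit-norm dictionary columns
x = np.zeros(30)
x[[5, 17]] = [1.5, -2.0]                  # sparse signal, support {5, 17}
y = A @ x                                 # 25 equations, 30 unknowns

support, r = [], y.copy()
for _ in range(2):                        # sparsity level assumed known
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

print(sorted(support))
```

Guarantees for greedy and convex formulations of exactly this problem are what the correlation-aware framework above strengthens.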

This new paradigm of underdetermined estimation that explicitly establishes the fundamental interplay between sampling, statistical priors and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.

Abstract:

This thesis studies three classes of randomized numerical linear algebra algorithms, namely: (i) randomized matrix sparsification algorithms, (ii) low-rank approximation algorithms that use randomized unitary transformations, and (iii) low-rank approximation algorithms for symmetric positive-semidefinite (SPSD) matrices.

Randomized matrix sparsification algorithms set randomly chosen entries of the input matrix to zero. When the approximant is substituted for the original matrix in computations, its sparsity allows one to employ faster sparsity-exploiting algorithms. This thesis contributes bounds on the approximation error of nonuniform randomized sparsification schemes, measured in the spectral norm and two NP-hard norms that are of interest in computational graph theory and subset selection applications.
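A minimal nonuniform sparsification sketch (our own illustration, not the thesis's scheme): keep entry (i, j) with a probability that grows with its magnitude and rescale kept entries by 1/p_ij, so the sparse matrix is unbiased entrywise and admits fast sparse products.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 50))

# Larger entries are kept with higher probability; floor at 0.05.
p = np.clip(np.abs(A) / np.abs(A).max(), 0.05, 1.0)
keep = rng.random(A.shape) < p
A_sparse = np.where(keep, A / p, 0.0)      # rescaling makes E[A_sparse] = A

density = np.count_nonzero(A_sparse) / A.size
err = np.linalg.norm(A - A_sparse, 2) / np.linalg.norm(A, 2)
print(round(density, 2), round(float(err), 2))
```

The analysis question the thesis addresses is exactly how the spectral-norm error above scales with the sampling probabilities and the matrix dimensions.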

Low-rank approximations based on randomized unitary transformations have several desirable properties: they have low communication costs, are amenable to parallel implementation, and exploit the existence of fast transform algorithms. This thesis investigates the tradeoff between the accuracy and cost of generating such approximations. State-of-the-art spectral and Frobenius-norm error bounds are provided.

The last class of algorithms considered is that of SPSD "sketching" algorithms. Such sketches can be computed faster than approximations based on projecting onto mixtures of the columns of the matrix. The performance of several such sketching schemes is empirically evaluated using a suite of canonical matrices drawn from machine learning and data analysis applications, and a framework is developed for establishing theoretical error bounds.
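One common instance of the SPSD sketching model is the Nyström-type construction (our toy example, not necessarily the thesis's scheme): with a sketch matrix S, approximate A by (A S)(S^T A S)^+ (A S)^T. For a matrix of exact rank r, a generic sketch of width at least r reproduces A.

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.normal(size=(8, 2))
A = B @ B.T                                  # SPSD matrix of rank 2

S = rng.normal(size=(8, 3))                  # Gaussian sketch, width 3 >= rank(A)
C = A @ S
W = S.T @ A @ S
A_hat = C @ np.linalg.pinv(W, rcond=1e-10) @ C.T   # rcond truncates the null direction

assert np.allclose(A_hat, A, atol=1e-6)      # exact recovery for exact low rank
```

For full-rank matrices with decaying spectra the sketch is only approximate, and bounding that approximation error is the subject of the framework mentioned above.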

In addition to studying these algorithms, this thesis extends the Matrix Laplace Transform framework to derive Chernoff and Bernstein inequalities that apply to all the eigenvalues of certain classes of random matrices. These inequalities are used to investigate the behavior of the singular values of a matrix under random sampling, and to derive convergence rates for each individual eigenvalue of a sample covariance matrix.

Abstract:

A long-standing challenge in transition metal catalysis is selective C–C bond coupling of simple feedstocks, such as carbon monoxide, ethylene or propylene, to yield value-added products. This work describes efforts toward selective C–C bond formation using early- and late-transition metals, which may have important implications for the production of fuels and plastics, as well as many other commodity chemicals.

The industrial Fischer-Tropsch (F-T) process converts synthesis gas (syngas, a mixture of CO + H2) into a complex mixture of hydrocarbons and oxygenates. Well-defined homogeneous catalysts for F-T may provide greater product selectivity for fuel-range liquid hydrocarbons compared to traditional heterogeneous catalysts. The first part of this work involved the preparation of late-transition-metal complexes for use in syngas conversion. We investigated C–C bond forming reactions via carbene coupling using bis(carbene)platinum(II) compounds, which are models for putative metal–carbene intermediates in F-T chemistry. It was found that C–C bond formation could be induced either by (1) chemical reduction of, or (2) exogenous phosphine coordination to, the platinum(II) starting complexes. These two mild methods afforded different products (constitutional isomers), suggesting that at least two different mechanisms are possible for C–C bond formation from carbene intermediates. These results are encouraging for the development of a multicomponent homogeneous catalysis system for the generation of higher hydrocarbons.

A second avenue of research focused on the design and synthesis of post-metallocene catalysts for olefin polymerization. The polymerization chemistry of a new class of group 4 complexes supported by asymmetric anilide(pyridine)phenolate (NNO) pincer ligands was explored. Unlike typical early transition metal polymerization catalysts, NNO-ligated catalysts produce nearly regiorandom polypropylene, with as many as 30-40 mol % of insertions being 2,1-inserted (versus 1,2-inserted), compared to <1 mol % in most metallocene systems. A survey of model Ti polymerization catalysts suggests that catalyst modification pathways that could affect regioselectivity, such as C–H activation of the anilide ring, cleavage of the amine R-group, or monomer insertion into metal–ligand bonds, are unlikely. A parallel investigation of a Ti–amido(pyridine)phenolate polymerization catalyst, which features a five- rather than a six-membered Ti–N chelate ring but maintains a dianionic NNO motif, revealed that simply maintaining this motif was not enough to produce regioirregular polypropylene; in fact, these experiments seem to indicate that only an intact anilide(pyridine)phenolate-ligated complex will lead to regioirregular polypropylene. As yet, the underlying causes of the unique regioselectivity of anilide(pyridine)phenolate polymerization catalysts remain unknown. Further exploration of NNO-ligated polymerization catalysts could lead to the controlled synthesis of new types of polymer architectures.

Finally, we investigated the reactivity of a known Ti–phenoxy(imine) (Ti-FI) catalyst that has been shown to be very active for ethylene homotrimerization in an effort to upgrade simple feedstocks to liquid hydrocarbon fuels through co-oligomerization of heavy and light olefins. We demonstrated that the Ti-FI catalyst can homo-oligomerize 1-hexene to C12 and C18 alkenes through olefin dimerization and trimerization, respectively. Future work will include kinetic studies to determine monomer selectivity by investigating the relative rates of insertion of light olefins (e.g., ethylene) vs. higher α-olefins, as well as a more detailed mechanistic study of olefin trimerization. Our ultimate goal is to exploit this catalyst in a multi-catalyst system for conversion of simple alkenes into hydrocarbon fuels.

Abstract:

In the past, many different methodologies have been devised to support software development, and different sets of methodologies have been developed to support the analysis of software artefacts. We have identified this mismatch as one of the causes of the poor reliability of embedded systems software. The issue with software development styles is that they are "analysis-agnostic." They do not try to structure the code in a way that lends itself to analysis. The analysis is usually applied post-mortem, after the software was developed, and it requires a large amount of effort. The issue with software analysis methodologies is that they do not exploit available information about the system being analyzed.

In this thesis we address the above issues by developing a new methodology, called "analysis-aware" design, that links software development styles with the capabilities of analysis tools. This methodology forms the basis of a framework for interactive software development. The framework consists of an executable specification language and a set of analysis tools based on static analysis, testing, and model checking. The language enforces an analysis-friendly code structure and offers primitives that allow users to implement their own testers and model checkers directly in the language. We introduce a new approach to static analysis that takes advantage of the capabilities of a rule-based engine. We have applied the analysis-aware methodology to the development of a smart home application.

Abstract:

Methods that exploit the intrinsic locality of molecular interactions show significant promise in making tractable the electronic structure calculation of large-scale systems. In particular, embedded density functional theory (e-DFT) offers a formally exact approach to electronic structure calculations in which the interactions between subsystems are evaluated in terms of their electronic density. In the following dissertation, methodological advances of embedded density functional theory are described, numerically tested, and applied to real chemical systems.

First, we describe an e-DFT protocol in which the non-additive kinetic energy component of the embedding potential is treated exactly. Then, we present a general implementation of the exact calculation of the non-additive kinetic potential (NAKP) and apply it to molecular systems. We demonstrate that the implementation using the exact NAKP is in excellent agreement with reference Kohn-Sham calculations, whereas the approximate functionals lead to qualitative failures in the calculated energies and equilibrium structures.

Next, we introduce density-embedding techniques to enable the accurate and stable calculation of correlated wavefunctions (CWs) in complex environments. Embedding potentials calculated using e-DFT introduce the effect of the environment on a subsystem in CW calculations (WFT-in-DFT). We demonstrate that WFT-in-DFT calculations are in good agreement with CW calculations performed on the full complex.

We significantly improve the numerics of the algorithm by enforcing orthogonality between subsystems through the introduction of a projection operator. Utilizing the projection-based embedding scheme, we rigorously analyze the sources of error in quantum embedding calculations in which an active subsystem is treated using CWs and the remainder using density functional theory. We show that the embedding potential felt by the electrons in the active subsystem makes only a small contribution to the error of the method, whereas the error in the nonadditive exchange-correlation energy dominates. We develop an algorithm which corrects this term and demonstrate the accuracy of the corrected embedding scheme.


Energy and sustainability have become among the most critical issues of our generation. While the abundant potential of renewable energy sources such as solar and wind provides a real opportunity for sustainability, their intermittency and uncertainty present a daunting operating challenge. This thesis aims to develop analytical models, deployable algorithms, and real systems to enable efficient integration of renewable energy into complex distributed systems with limited information.

The first thrust of the thesis is to make IT systems more sustainable by facilitating the integration of renewable energy into them. IT is one of the fastest-growing sectors in energy usage and greenhouse gas pollution. Over the last decade there have been dramatic improvements in the energy efficiency of IT systems, but these improvements do not necessarily lead to reductions in energy consumption because ever more servers are demanded. Further, little effort has been put into making IT more sustainable, and most of the improvements come from improved "engineering" rather than improved "algorithms". In contrast, my work focuses on developing algorithms with rigorous theoretical analysis that improve the sustainability of IT. In particular, this thesis seeks to exploit the flexibilities of cloud workloads both (i) in time, by scheduling delay-tolerant workloads, and (ii) in space, by routing requests to geographically diverse data centers. These opportunities allow data centers to adaptively respond to renewable availability, varying cooling efficiency, and fluctuating energy prices, while still meeting performance requirements. The design of the enabling algorithms is, however, very challenging because of limited information, non-smooth objective functions and the need for distributed control. Novel distributed algorithms are developed with theoretically provable guarantees to enable "follow the renewables" routing. Moving from theory to practice, I helped HP design and implement the industry's first Net-zero Energy Data Center.
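As a toy illustration of the "follow the renewables" idea (a minimal greedy sketch under assumed inputs, not the thesis's provably optimal distributed algorithms), consider routing a divisible workload across data centers so that renewable-backed capacity is used first:

```python
def follow_the_renewables(load, renewable, capacity):
    """Toy greedy routing: serve load with renewable-backed capacity
    first, then spill the remainder onto leftover capacity.
    Returns the per-center allocation and the brown (non-renewable)
    energy consumed."""
    n = len(capacity)
    alloc = [0.0] * n
    remaining = load
    # visit data centers in decreasing order of available renewables
    order = sorted(range(n), key=lambda i: renewable[i], reverse=True)
    for i in order:
        take = min(remaining, min(renewable[i], capacity[i]))
        alloc[i] += take
        remaining -= take
    # spill any leftover load onto remaining capacity
    for i in order:
        take = min(remaining, capacity[i] - alloc[i])
        alloc[i] += take
        remaining -= take
    brown = sum(max(0.0, alloc[i] - renewable[i]) for i in range(n))
    return alloc, brown
```

Here `renewable[i]` and `capacity[i]` are hypothetical per-data-center values; the real problem additionally involves performance constraints, time-varying inputs, and distributed control with limited information.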

The second thrust of this thesis is to use IT systems to improve the sustainability and efficiency of our energy infrastructure through data center demand response. The main challenges as we integrate more renewable sources into the existing power grid come from the fluctuation and unpredictability of renewable generation. Although energy storage and reserves can potentially solve these issues, they are very costly. One promising alternative is to make cloud data centers demand responsive. The potential of such an approach is huge.

To realize this potential, we need adaptive and distributed control of cloud data centers and new electricity market designs for distributed electricity resources. My work is progressing in both directions. In particular, I have designed online algorithms with theoretically guaranteed performance for data center operators to deal with uncertainties under popular demand response programs. Based on local control rules of customers, I have further designed new pricing schemes for demand response to align the interests of customers, utility companies, and the society to improve social welfare.


Soft hierarchical materials often present unique functional properties that are sensitive to the geometry and organization of their micro- and nano-structural features across different length scales. Carbon nanotube (CNT) foams are hierarchical materials with fibrous morphology that are known for their remarkable physical, chemical and electrical properties. Their complex microstructure has led them to exhibit intriguing mechanical responses at different length scales and in different loading regimes. Although the mechanical behavior of these materials has been studied over the past few years, their response to high-rate finite deformations and the influence of their microstructure on bulk mechanical behavior and energy-dissipative characteristics remain elusive.

In this dissertation, we study the response of aligned CNT foams in the high strain-rate regime of 10² – 10⁴ s⁻¹. We investigate their bulk dynamic response and the fundamental deformation mechanisms at different length scales, and correlate them with the microstructural characteristics of the foams. We develop an experimental platform to study the mechanics of CNT foams under high-rate deformation, which includes direct measurements of the strain and transmitted forces and allows for full-field visualization of the sample's deformation through high-speed microscopy.

We synthesize various CNT foams (e.g., vertically aligned CNT (VACNT) foams, helical CNT foams, micro-architectured VACNT foams and VACNT foams with microscale heterogeneities) and show that the bulk functional properties of these materials are highly tunable either by tailoring their microstructure during synthesis or by designing micro-architectures that exploit the principles of structural mechanics. We also develop numerical models to describe the bulk dynamic response using multiscale mass-spring models and identify the mechanical properties at length scales that are smaller than the sample height.
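The flavor of a multiscale mass-spring description can be conveyed with a deliberately simplified sketch (all parameters are arbitrary illustrative values, and the bilinear spring law is only a crude stand-in for the constitutive behavior identified in this work):

```python
def chain_impact(n=5, m=1.0, k=10.0, k_dense=100.0, gap=0.1,
                 v0=-1.0, dt=1e-3, steps=20000):
    """Toy 1-D mass-spring chain: springs stiffen (k -> k_dense) once
    compressed beyond `gap`, mimicking densification in a foam-like
    material. Returns the peak force transmitted to a rigid wall at x=0."""
    rest = 1.0
    x = [i * rest for i in range(n)]   # mass 0 sits against the wall
    v = [0.0] * n
    v[-1] = v0                         # top mass acts as the striker
    f_wall_max = 0.0
    for _ in range(steps):
        f = [0.0] * n
        for i in range(n - 1):
            d = (x[i + 1] - x[i]) - rest   # elongation (< 0 when compressed)
            if -d > gap:                   # densified: stiffer branch,
                fs = -k * gap + k_dense * (d + gap)  # continuous at d = -gap
            else:
                fs = k * d
            f[i] += fs
            f[i + 1] -= fs
        if x[0] < 0.0:                     # one-sided wall contact
            fw = -k_dense * x[0]
            f[0] += fw
            f_wall_max = max(f_wall_max, fw)
        for i in range(n):                 # symplectic Euler step
            v[i] += f[i] / m * dt
            x[i] += v[i] * dt
    return f_wall_max
```

A real model would fit spring laws and masses to the measured response at each length scale; this toy version only reproduces the qualitative picture of a compaction wave reaching the support.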

The ability to control the geometry of microstructural features, and their local interactions, allows the creation of novel hierarchical materials with desired functional properties. The fundamental understanding provided by this work on the key structure-function relations that govern the bulk response of CNT foams can be extended to other fibrous, soft and hierarchical materials. The findings can be used to design materials with tailored properties for different engineering applications, like vibration damping, impact mitigation and packaging.


The complex domain structure in ferroelectrics gives rise to electromechanical coupling, and its evolution (via domain switching) results in a time-dependent (i.e. viscoelastic) response. Although ferroelectrics are used in many technological applications, most do not attempt to exploit the viscoelastic response of ferroelectrics, mainly due to a lack of understanding and accurate models for their description and prediction. Thus, the aim of this thesis research is to gain better understanding of the influence of domain evolution in ferroelectrics on their dynamic mechanical response.

There have been few studies on the viscoelastic properties of ferroelectrics, mainly due to a lack of experimental methods. Therefore, an apparatus and method called Broadband Electromechanical Spectroscopy (BES) was designed and built. BES allows for the simultaneous application of dynamic mechanical and electrical loading in a vacuum environment. Using BES, the dynamic stiffness and loss tangent in bending and torsion of a particular ferroelectric, viz. lead zirconate titanate (PZT), were characterized for different combinations of electrical and mechanical loading frequencies throughout the entire electric displacement hysteresis. Experimental results showed significant increases in loss tangent (by nearly an order of magnitude) and compliance during domain switching, which shows promise as a new approach to structural damping.

A continuum model of the viscoelasticity of ferroelectrics was developed, which incorporates microstructural evolution via internal variables and associated kinetic relations. For the first time, through a new linearization process, the incremental dynamic stiffness and loss tangent of materials were computed throughout the entire electric displacement hysteresis for different combinations of mechanical and electrical loading frequencies. The model accurately captured experimental results.

Using the understanding gained from the characterization and modeling of PZT, two applications of domain switching kinetics were explored using Micro Fiber Composites (MFCs). Proofs of concept of set-and-hold actuation and structural damping using MFCs were demonstrated.
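The loss-tangent extraction that underlies such measurements can be sketched generically (a lock-in style Fourier projection applied to synthetic signals; this illustrates the quantity being measured, not the BES data pipeline):

```python
import math

def dynamic_stiffness(t, force, disp, freq):
    """Estimate the complex stiffness k* = F*/X* from steady-state
    sinusoidal force/displacement records sampled at times t, by
    projecting both onto e^{-i w t} at the drive frequency.
    Returns (|k*|, delta), where tan(delta) is the loss tangent."""
    w = 2.0 * math.pi * freq
    Fr = Fi = Xr = Xi = 0.0
    for ti, fo, di in zip(t, force, disp):
        c, s = math.cos(w * ti), math.sin(w * ti)
        Fr += fo * c; Fi -= fo * s   # complex amplitude of force
        Xr += di * c; Xi -= di * s   # complex amplitude of displacement
    den = Xr**2 + Xi**2
    kr = (Fr * Xr + Fi * Xi) / den   # real part of F/X
    ki = (Fi * Xr - Fr * Xi) / den   # imaginary part of F/X
    return math.hypot(kr, ki), math.atan2(ki, kr)
```

On records spanning an integer number of drive periods, the projection recovers the stiffness magnitude and loss angle essentially exactly; in practice noise, transients, and electromechanical cross-terms make the measurement far more delicate.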


Current technological advances in fabrication methods have provided pathways to creating architected structural meta-materials similar to those found in natural organisms that are structurally robust and lightweight, such as diatoms. Structural meta-materials are materials with mechanical properties that are determined by material properties at various length scales, which range from the material microstructure (nm) to the macro-scale architecture (μm – mm). It is now possible to exploit material size effects, which emerge at the nanometer length scale, as well as structural effects to tune the material properties and failure mechanisms of small-scale cellular solids, such as nanolattices. This work demonstrates the fabrication and mechanical properties of 3-dimensional hollow nanolattices in both tension and compression. Hollow gold nanolattices loaded in uniaxial compression demonstrate that strength and stiffness vary as a function of geometry and tube wall thickness. Structural effects were explored by increasing the unit cell angle from 30° to 60° while keeping all other parameters constant; material size effects were probed by varying the tube wall thickness, t, from 200 nm to 635 nm, at a constant relative density and grain size. In-situ uniaxial compression experiments reveal an order-of-magnitude increase in yield stress and modulus in nanolattices with greater lattice angles, and a 150% increase in the yield strength without a concomitant change in modulus in thicker-walled nanolattices for fixed lattice angles. These results imply that independent control of structural and material size effects enables tunability of the mechanical properties of 3-dimensional architected meta-materials, and they highlight the importance of material, geometric, and microstructural effects in small-scale mechanics.

This work also explores the flaw tolerance of 3D hollow-tube alumina kagome nanolattices with and without pre-fabricated notches, both in experiment and simulation. Experiments demonstrate that the hollow kagome nanolattices in uniaxial tension always fail at the same load when the ratio of notch length (a) to sample width (w) is no greater than 1/3, with no correlation between failure occurring at or away from the notch. For notches with (a/w) > 1/3, the samples fail at lower peak loads; this is attributed to the increased compliance as fewer unit cells span the un-notched region. Finite element simulations of the kagome tension samples show that failure is governed by tensile loading for (a/w) < 1/3, but as (a/w) increases, bending begins to play a significant role. Together, these experiments and simulations demonstrate that the discrete-continuum duality of architected structural meta-materials gives rise to their flaw insensitivity even when made entirely of intrinsically brittle materials.


We are at the cusp of a historic transformation of both our communication systems and our electricity systems. This creates challenges as well as opportunities for the study of networked systems. Problems in these systems typically involve a huge number of endpoints that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.

This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve the reliability as well as the efficiency of communication networks. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties for the behavior of existing algorithms and motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel and use our prototype to compare it with existing MP-TCP algorithms.
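A toy fluid model conveys the style of this analysis (a generic coupled-increase rule with illustrative loss probabilities `p`; this is NOT the Balia update law):

```python
def mptcp_fluid(p, dt=0.01, T=2000.0):
    """Toy fluid model of coupled MP-TCP: each path's window grows at
    rate (1 - p_r)/w_total and shrinks at rate p_r * w_r / 2 (loss-driven
    halving). A generic coupled increase, not any specific algorithm."""
    w = [1.0] * len(p)
    for _ in range(int(T / dt)):
        tot = sum(w)
        for r in range(len(p)):
            w[r] += dt * ((1.0 - p[r]) / tot - p[r] * w[r] / 2.0)
    return w
```

At equilibrium (1 - p_r)/Σw = p_r·w_r/2, so the less lossy path ends up with the larger window; existence, uniqueness, and stability of such equilibria are exactly the questions the fluid-model framework above addresses rigorously.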

Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost such as power loss. It is a mixed-integer nonlinear program and hence hard to solve. We propose a heuristic algorithm based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks that we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose an even more efficient algorithm that incurs an optimality loss of less than 3% on the test networks.
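The combinatorial nature of the problem can be seen in a deliberately tiny brute-force sketch (a toy loss model in which loads are treated as currents on two hypothetical feeders; the heuristic described above instead leverages convex relaxations precisely to avoid this exponential enumeration):

```python
from itertools import product

def reconfigure(loads, r):
    """Brute-force toy feeder reconfiguration: each load i is fed
    through one of two feeders, chosen by a switch; pick the
    configuration minimizing total I^2 R loss (loads as currents,
    r[f] the per-feeder resistance)."""
    best = None
    for config in product((0, 1), repeat=len(loads)):
        feeder_current = [0.0, 0.0]
        for i, f in enumerate(config):
            feeder_current[f] += loads[i]
        loss = sum(r[f] * feeder_current[f] ** 2 for f in (0, 1))
        if best is None or loss < best[1]:
            best = (config, loss)
    return best
```

Even this caricature is exponential in the number of switches, which is why relaxation-based heuristics with optimality guarantees under verifiable conditions matter in practice.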

Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective such as generation cost or power loss. Traditionally OPF is solved in a centralized manner. With increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because the power flow equations are nonlinear and Kirchhoff's laws couple the network globally. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results showing that a globally optimal solution of OPF over a radial network can be obtained through a second-order cone program (SOCP) or semidefinite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms that require solving optimization subproblems using iterative methods, the proposed solutions exploit the problem structure to greatly reduce the computation time. Specifically, for balanced networks, our decomposition allows us to derive closed-form solutions for these subproblems, which speeds up convergence by 1000x in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, and computation time is reduced by 100x compared with iterative methods.
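The benefit of closed-form subproblem updates can be illustrated on a minimal consensus problem (a standard two-agent quadratic example, not the OPF decomposition itself):

```python
def admm_consensus(a, rho=1.0, iters=100):
    """Consensus ADMM for min sum_i (x_i - a_i)^2 / 2 s.t. x_i = z.
    Each x-subproblem is quadratic, so its minimizer is a one-line
    formula -- the kind of structure exploited to avoid inner
    iterative solvers. Converges to z = mean(a)."""
    n = len(a)
    x = [0.0] * n
    z = 0.0
    u = [0.0] * n  # scaled dual variables
    for _ in range(iters):
        # closed-form x-update: argmin (x-a_i)^2/2 + rho/2 (x - z + u_i)^2
        x = [(a[i] + rho * (z - u[i])) / (1.0 + rho) for i in range(n)]
        z = sum(x[i] + u[i] for i in range(n)) / n
        u = [u[i] + x[i] - z for i in range(n)]
    return z
```

Because each x-update is a one-line formula rather than an inner iterative solve, the per-iteration cost stays trivial — the same structural advantage the proposed OPF decomposition obtains for its subproblems.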


The microwave response of the superconducting state in equilibrium and non-equilibrium configurations was examined experimentally and analytically. Thin-film superconductors were studied primarily in order to explore spatial effects. The response parameter measured was the surface impedance.

For small microwave intensity the surface impedance at 10 GHz was measured for a variety of samples (mostly Sn) over a wide range of sample thickness and temperature. A detailed analysis based on the BCS theory was developed for calculating the surface impedance for general thickness and other experimental parameters. Experiment and theory agreed with each other to within the experimental accuracy. Thus it was established that the samples, thin films as well as bulk, were well characterised at low microwave powers (near equilibrium).
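For reference, the surface impedance is defined through the tangential fields at the surface; in the local limit with complex conductivity σ = σ₁ − iσ₂ the standard relations are (the analysis in this work generalizes these to arbitrary film thickness and the BCS kernel):

```latex
Z_s \;=\; R_s + iX_s \;=\; \left.\frac{E_\parallel}{H_\parallel}\right|_{\text{surface}}
\;=\; \sqrt{\frac{i\mu_0\omega}{\sigma_1 - i\sigma_2}},
\qquad
\sigma_2 \gg \sigma_1:\quad
R_s \approx \tfrac{1}{2}\,\sigma_1\,\mu_0^2\,\omega^2\,\lambda^3,
\quad
X_s \approx \mu_0\,\omega\,\lambda,
```

with λ = (μ₀ωσ₂)⁻¹ᐟ² the penetration depth set by the superfluid response.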

Thin films were perturbed by a small dc supercurrent and the effect on the superconducting order parameter and the quasiparticle response determined by measuring changes in the surface resistance (still at low microwave intensity and independent of it) due to the induced current. The use of fully superconducting resonators enabled the measurement of very small changes in the surface resistance (< 10⁻⁹ Ω/sq.). These experiments yield information regarding the dynamics of the order parameter and quasiparticle systems. For all the films studied the results could be described at temperatures near Tc by the thermodynamic depression of the order parameter due to the static current leading to a quadratic increase of the surface resistance with current.

For the thinnest films the low temperature results were surprising in that the surface resistance decreased with increasing current. An explanation is proposed according to which this decrease occurs due to an additional high frequency quasiparticle current caused by the combined presence of both static and high frequency fields. For frequencies larger than the inverse of the quasiparticle relaxation time this additional current is out of phase (by π) with the microwave electric field and is observed as a decrease of surface resistance. Calculations agree quantitatively with experimental results. This is the first observation and explanation of this non-equilibrium quasiparticle effect.

For thicker films of Sn, the low temperature surface resistance was found to increase with applied static current. It is proposed that due to the spatial non-uniformity of the induced current distribution across the thicker films, the above purely temporal analysis of the local quasiparticle response needs to be generalised to include space and time non-equilibrium effects.

The nonlinear interaction of microwaves and superconducting films was also examined in a third set of experiments. The surface impedance of thin films was measured as a function of the incident microwave magnetic field. The experiments exploit the ability to measure the absorbed microwave power and applied microwave magnetic field absolutely. It was found that the applied surface microwave field could not be raised above a certain threshold level at which the absorption increased abruptly. This critical field level represents a dynamic critical field and was found to be associated with the penetration of the applied field into the film at values well below the thermodynamic critical field for the configuration of a field applied to one side of the film. The penetration occurs despite the thermal stability of the film which was unequivocally demonstrated by experiment. A new mechanism for such penetration via the formation of a vortex-antivortex pair is proposed. The experimental results for the thinnest films agreed with the calculated values of this pair generation field. The observations of increased transmission at the critical field level and suppression of the process by a metallic ground plane further support the proposed model.