971 results for Library systems


Relevance:

30.00%

Publisher:

Abstract:

The proliferation of smartphones and other internet-enabled, sensor-equipped consumer devices enables us to sense and act upon the physical environment in unprecedented ways. This thesis considers Community Sense-and-Response (CSR) systems, a new class of web application for acting on sensory data gathered from participants' personal smart devices. The thesis describes how rare events can be reliably detected using a decentralized anomaly detection architecture that performs client-side anomaly detection and server-side event detection. After analyzing this decentralized anomaly detection approach, the thesis describes how weak but spatially structured events can be detected, despite significant noise, when the events have a sparse representation in an alternative basis. Finally, the thesis describes how the statistical models needed for client-side anomaly detection may be learned efficiently, using limited space, via coresets.

The Caltech Community Seismic Network (CSN) is a prototypical example of a CSR system that harnesses accelerometers in volunteers' smartphones and consumer electronics. Using CSN, this thesis presents the systems and algorithmic techniques needed to design, build, and evaluate a scalable network for real-time awareness of spatial phenomena such as dangerous earthquakes.
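The two-tier architecture described above, client-side anomaly detection feeding server-side event detection, can be sketched in a few lines. The thresholds, rates, and the simple deviation-based pick rule below are illustrative stand-ins for the thesis's learned per-device statistical models, not the actual CSN algorithms.

```python
import random

def client_is_anomalous(accel_samples, mean, std, k=5.0):
    """Client-side pick: flag if any sample deviates more than k standard
    deviations from the device's own baseline (a stand-in for the
    per-device statistical model)."""
    return any(abs(a - mean) > k * std for a in accel_samples)

def server_detects_event(picks, n_clients, rate_threshold=0.2):
    """Server-side decision: declare an event when the fraction of clients
    reporting picks in the current window exceeds a threshold chosen to
    balance false alarms against detection latency."""
    return sum(picks) / n_clients >= rate_threshold

# Simulate 100 clients: quiet sensors vs. a shaking event felt by 40 of them.
random.seed(0)
def window(shaken):
    return [random.gauss(0, 1) + (8.0 if shaken else 0.0) for _ in range(50)]

quiet = [client_is_anomalous(window(False), 0.0, 1.0) for _ in range(100)]
shaken = [client_is_anomalous(window(i < 40), 0.0, 1.0) for i in range(100)]

print(server_detects_event(quiet, 100))   # no event declared
print(server_detects_event(shaken, 100))  # event declared
```

Decentralizing the pick decision keeps raw sensor data on the device and sends the server only rare, low-bandwidth pick messages, which is what makes the approach scale to many participants.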

Abstract:

This dissertation studies the long-term behavior of random Riccati recursions and a mathematical epidemic model. Riccati recursions arise in Kalman filtering: the error covariance matrix of the Kalman filter satisfies a Riccati recursion. Convergence conditions for time-invariant Riccati recursions are well studied. We focus on the time-varying case and assume that the regressor matrix is random and independently and identically distributed according to a given distribution whose probability density function is continuous, supported on the whole space, and decays faster than any polynomial. We study the geometric convergence of the probability distribution. We also study the global dynamics of epidemic spread over complex networks for various models. For instance, in the discrete-time Markov chain model, each node is either healthy or infected at any given time. In this setting, the number of states grows exponentially with the size of the network. The Markov chain has a unique stationary distribution, in which all nodes are healthy with probability 1. Since the probability distribution of a Markov chain on a finite state space converges to its stationary distribution, this model implies that the epidemic dies out after a long enough time. To analyze the Markov chain model, we study a nonlinear epidemic model whose state at any given time is the vector of marginal infection probabilities of each node in the network at that time. Convergence to the origin in the epidemic map implies the extinction of the epidemic. The nonlinear model is upper-bounded by its linearization at the origin. As a result, the origin is the globally stable unique fixed point of the nonlinear model if the linear upper bound is stable. When the linear upper bound is unstable, the nonlinear model has a second fixed point. We analyze the stability of this second fixed point for both discrete-time and continuous-time models.
Returning to the Markov chain model, we claim that the stability of the linear upper bound for the nonlinear model is strongly related to the extinction time of the Markov chain. We show that a stable linear upper bound is a sufficient condition for fast extinction, and that the probability of survival is bounded by the nonlinear epidemic map.
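The relationship between the nonlinear epidemic map and its linear upper bound can be illustrated with a small simulation. The graph, rates, and SIS-style update below are a generic sketch of this class of models, not the dissertation's specific formulation.

```python
# Discrete-time SIS-type epidemic map on a small contact network.
# x[i] is the marginal infection probability of node i; beta is the
# infection rate, delta the recovery rate, A the adjacency matrix.
# The nonlinear map is dominated by its linearization M = (1-delta)I + beta*A,
# so spectral radius rho(M) < 1 guarantees convergence to the origin
# (extinction), as stated above.

def epidemic_step(x, A, beta, delta):
    n = len(x)
    return [min(1.0, (1 - delta) * x[i]
                + beta * (1 - x[i]) * sum(A[i][j] * x[j] for j in range(n)))
            for i in range(n)]

def spectral_radius(M, iters=200):
    # Power iteration; adequate for the nonnegative matrices used here.
    n = len(M)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(c) for c in w)
        v = [c / lam for c in w]
    return lam

A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # triangle graph
beta, delta = 0.1, 0.5
M = [[(1 - delta) * (i == j) + beta * A[i][j] for j in range(3)] for i in range(3)]

x = [0.9, 0.9, 0.9]
for _ in range(200):
    x = epidemic_step(x, A, beta, delta)

print(spectral_radius(M))  # ~0.7 < 1: the linear upper bound is stable
print(max(x))              # infection probabilities have decayed toward 0
```

Raising beta until rho(M) exceeds 1 makes the origin lose global stability and the trajectory settle at the second, endemic fixed point instead.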

Abstract:

This thesis describes the use of multiply-substituted stable isotopologues of carbonate minerals and methane gas to better understand how these environmentally significant minerals and gases form and are modified throughout their geological histories. Stable isotopes have a long tradition in earth science as a tool for providing quantitative constraints on how molecules, in or on the earth, formed in both the present and past. Nearly all studies, until recently, have only measured the bulk concentrations of stable isotopes in a phase or species. However, the abundance of various isotopologues within a phase (for example, the concentration of isotopologues with multiple rare isotopes, known as multiply substituted or 'clumped' isotopologues) also carries potentially useful information. Specifically, the abundances of clumped isotopologues in an equilibrated system are a function of temperature, and thus knowledge of their abundances can be used to calculate a sample’s formation temperature. In this thesis, measurements of clumped isotopologues are made on both carbonate-bearing minerals and methane gas in order to better constrain the environmental and geological histories of various samples.
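The notion of a clumped-isotope anomaly can be made concrete with a short calculation. As a sketch, the excess abundance of an isotopologue relative to the stochastic (random) distribution is reported in permil; the methane isotopologue 13CH3D and the bulk abundances below are illustrative numbers, not measured values.

```python
# Minimal sketch of the clumped-isotope anomaly Delta (in permil), using
# methane's mass-18 isotopologue 13CH3D as an example. The stochastic
# abundance follows from the bulk fractional abundances alone; the
# measured excess over it is what carries the temperature information.

def isotopologue_delta(measured_ratio, c13, d_frac):
    h = 1.0 - d_frac
    c12 = 1.0 - c13
    # Stochastic abundances of the relevant CH4 isotopologues (multiplicity
    # 4 for the single-D species: the D can sit on any of the 4 H sites).
    p_12ch4 = c12 * h**4
    p_13ch3d = c13 * 4 * d_frac * h**3
    stochastic_ratio = p_13ch3d / p_12ch4
    return (measured_ratio / stochastic_ratio - 1.0) * 1000.0

c13, d = 0.011, 0.00015   # illustrative bulk 13C and D fractional abundances
r_stoch = (0.011 * 4 * 0.00015 * (1 - 0.00015)**3) / ((1 - 0.011) * (1 - 0.00015)**4)
print(isotopologue_delta(r_stoch * 1.005, c13, d))  # 0.5% excess -> 5 permil
```

Because the equilibrium excess is itself a known, calibrated function of temperature, a measured Delta of this kind can be inverted to a formation temperature, which is the strategy the thesis applies to carbonates and methane.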

Clumped-isotope-based measurements of ancient carbonate-bearing minerals, including apatites, have opened up paleotemperature reconstructions to a variety of systems and time periods. However, a critical issue when using clumped-isotope based measurements to reconstruct ancient mineral formation temperatures is whether the samples being measured have faithfully recorded their original internal isotopic distributions. These original distributions can be altered, for example, by diffusion of atoms in the mineral lattice or through diagenetic reactions. Understanding these processes quantitatively is critical for the use of clumped isotopes to reconstruct past temperatures, quantify diagenesis, and calculate time-temperature burial histories of carbonate minerals. In order to help orient this part of the thesis, Chapter 2 provides a broad overview and history of clumped-isotope based measurements in carbonate minerals.

In Chapter 3, the effects of elevated temperatures on a sample’s clumped-isotope composition are probed in both natural and experimental apatites (which contain structural carbonate groups) and calcites. A quantitative model is created that is calibrated by the experiments and consistent with the natural samples. The model allows for calculations of the change in a sample’s clumped isotope abundances as a function of any time-temperature history.

In Chapter 4, the effects of diagenesis on the stable isotopic compositions of apatites are explored on samples from a variety of sedimentary phosphorite deposits. Clumped isotope temperatures and bulk isotopic measurements from carbonate and phosphate groups are compared for all samples. These results demonstrate that samples have experienced isotopic exchange of oxygen atoms in both the carbonate and phosphate groups. A kinetic model is developed that allows for the calculation of the amount of diagenesis each sample has experienced and yields insight into the physical and chemical processes of diagenesis.

The thesis then shifts its attention to clumped isotope measurements of methane. Methane is a critical greenhouse gas, energy resource, and microbial metabolic product and substrate. Despite its environmental and economic importance, much about methane's formation mechanisms and the relative contributions of methane sources to various environments remains poorly constrained. In order to add new constraints to our understanding of the formation of methane in nature, I describe the development and application of methane clumped isotope measurements to environmental deposits of methane. To help orient the reader, a brief overview of the formation of methane in both high and low temperature settings is given in Chapter 5.

In Chapter 6, a method for the measurement of methane clumped isotopologues via mass spectrometry is described. This chapter demonstrates that the measurement is precise and accurate. Additionally, the measurement is calibrated experimentally such that measurements of methane clumped isotope abundances can be converted into equivalent formational temperatures. This study represents the first time that methane clumped isotope abundances have been measured at useful precisions.

In Chapter 7, the methane clumped isotope method is applied to natural samples from a variety of settings. These settings include thermogenic gases formed and reservoired in shales, migrated thermogenic gases, biogenic gases, mixed biogenic and thermogenic gas deposits, and experimentally generated gases. In all cases, the calculated clumped isotope temperatures make geological sense as formation temperatures or as mixtures of high- and low-temperature gases. Based on these observations, we propose that the clumped isotope temperature of an unmixed gas represents its formation temperature; this was neither an obvious nor an expected result, and it has important implications for how methane forms in nature. Additionally, these results demonstrate that methane clumped isotope compositions provide valuable additional constraints for studying natural methane deposits.

Abstract:

Accurate simulation of quantum dynamics in complex systems poses a fundamental theoretical challenge with immediate application to problems in biological catalysis, charge transfer, and solar energy conversion. The varied length- and timescales that characterize these kinds of processes necessitate development of novel simulation methodology that can both accurately evolve the coupled quantum and classical degrees of freedom and also be easily applicable to large, complex systems. In the following dissertation, the problems of quantum dynamics in complex systems are explored through direct simulation using path-integral methods as well as application of state-of-the-art analytical rate theories.

Abstract:

The power system is on the brink of change. Engineering needs, economic forces, and environmental factors are the main drivers of this change. The vision is to build a smart electrical grid, and a smarter market mechanism around it, to fulfill mandates on clean energy. Looking at engineering and economic issues in isolation is no longer an option; an integrated design approach is needed. In this thesis, I revisit some classical questions on the engineering operation of power systems that deal with the nonconvexity of the power flow equations. I then explore how these power flow equations interact with electricity markets, in order to address the fundamental issue of market power in a deregulated market environment. Finally, motivated by the emergence of new storage technologies, I present a result on the investment decision problem of placing storage over a power network. The goal of this study is to demonstrate that modern optimization and game theory can provide unique insights into this complex system. Some of the ideas carry over to applications beyond power systems.
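As a small illustration of the power flow equations discussed above, the linear "DC" approximation on a toy network shows their basic structure; the full AC equations add the nonconvex sinusoidal terms that the thesis grapples with. All network values below are illustrative.

```python
# DC power flow sketch: P = B * theta, assuming lossless lines, small angle
# differences, and flat voltage magnitudes. Toy 3-bus network, bus 0 slack.

def solve_2x2(a, b):
    # Cramer's rule for the reduced system (slack bus angle fixed to 0).
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (a[0][0] * b[1] - a[1][0] * b[0]) / det]

# Line susceptances (per unit) between buses 0-1, 0-2, 1-2.
b01, b02, b12 = 10.0, 10.0, 5.0
# Reduced susceptance matrix for buses 1 and 2 (slack row/column removed).
B = [[b01 + b12, -b12],
     [-b12, b02 + b12]]
P = [1.0, -0.6]                     # net injections at buses 1 and 2 (p.u.)
theta1, theta2 = solve_2x2(B, P)

flow_12 = b12 * (theta1 - theta2)   # power on line 1-2
print(round(theta1, 4), round(theta2, 4), round(flow_12, 4))
```

In the AC model each flow term becomes a sine of the angle difference scaled by voltage magnitudes, which is exactly where the nonconvexity, and the resulting market-power subtleties, enter.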

Abstract:

The LIGO and Virgo gravitational-wave observatories are complex and extremely sensitive strain detectors that can be used to search for a wide variety of gravitational waves from astrophysical and cosmological sources. In this thesis, I motivate the search for the gravitational wave signals from coalescing black hole binary systems with total mass between 25 and 100 solar masses. The mechanisms for formation of such systems are not well-understood, and we do not have many observational constraints on the parameters that guide the formation scenarios. Detection of gravitational waves from such systems — or, in the absence of detection, the tightening of upper limits on the rate of such coalescences — will provide valuable information that can inform the astrophysics of the formation of these systems. I review the search for these systems and place upper limits on the rate of black hole binary coalescences with total mass between 25 and 100 solar masses. I then show how the sensitivity of this search can be improved by up to 40% by applying a multivariate statistical classifier, known as a random forest of bagged decision trees, to more effectively discriminate between signal and non-Gaussian instrumental noise. I also discuss the use of this classifier in the search for the ringdown signal from the merger of two black holes with total mass between 50 and 450 solar masses and present upper limits. I also apply multivariate statistical classifiers to the problem of quantifying the non-Gaussianity of LIGO data. Despite these improvements, no gravitational-wave signals have been detected in LIGO data so far. However, the use of multivariate statistical classification can significantly improve the sensitivity of the Advanced LIGO detectors to such signals.
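The idea behind a random forest of bagged decision trees can be sketched with decision stumps on synthetic one-dimensional data. This is a minimal illustration of bagging with majority voting, not the actual LIGO analysis pipeline, which trains full trees on many detector-derived features.

```python
import random

# Bagging sketch: train many weak "stump" classifiers on bootstrap
# resamples and combine them by majority vote. The data are synthetic
# 1-D scores for "signal" (label 1) and "noise" (label 0) events.

def train_stump(data):
    # Threshold at the midpoint of the class means of this bootstrap sample.
    sig = [x for x, y in data if y == 1]
    bkg = [x for x, y in data if y == 0]
    return (sum(sig) / len(sig) + sum(bkg) / len(bkg)) / 2.0

def forest_predict(thresholds, x):
    votes = sum(1 for t in thresholds if x > t)
    return 1 if votes * 2 > len(thresholds) else 0

random.seed(1)
signal = [(random.gauss(3.0, 1.0), 1) for _ in range(200)]
noise = [(random.gauss(0.0, 1.0), 0) for _ in range(200)]
data = signal + noise

thresholds = []
for _ in range(50):
    boot = [random.choice(data) for _ in data]   # bootstrap resample
    thresholds.append(train_stump(boot))

correct = sum(forest_predict(thresholds, x) == y for x, y in data)
print(correct / len(data))   # accuracy well above chance on this toy set
```

Averaging over bootstrap resamples is what tames the variance of the individual weak learners; in the real search this robustness is what helps separate signals from non-Gaussian instrumental glitches.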

Abstract:

In this work, the author presents a method called Convex Model Predictive Control (CMPC) to control systems whose states are elements of the rotation group SO(n) for n = 2, 3. This is done without charts or any local linearization; instead, the controller operates over the orbitope of rotation matrices. The result is a novel model predictive control (MPC) scheme without the drawbacks associated with conventional linearization techniques, such as slow computation time and local minima. Of particular emphasis is the application to aeronautical and vehicular systems, wherein the method removes many of the trigonometric terms associated with these systems’ state space equations. Furthermore, the method is shown to be compatible with many existing variants of MPC, including obstacle avoidance via Mixed Integer Linear Programming (MILP).

Abstract:

This work quantifies the nature of delays in genetic regulatory networks and their effect on system dynamics. It is known that a time lag can emerge from a sequence of biochemical reactions. Applying this modeling framework to protein production processes, delay distributions are derived in both a stochastic setting (as a probability density function) and a deterministic setting (as an impulse function), and the two are shown to be equivalent under appropriate assumptions. The dependence of the distribution properties on rate constants, gene length, and time-varying temperatures is investigated. Overall, the distribution of the delay in protein production processes is shown to be highly dependent on the size of the genes and mRNA strands as well as on the reaction rates. Results suggest that longer genes have delay distributions with a smaller relative variance, and hence less uncertainty in the completion times; however, they also lead to larger delays. On the other hand, large uncertainties may actually play a positive role, as broader distributions can lead to larger stability regions when this formalization of the protein production delays is incorporated into a feedback system.
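The claim that longer genes yield larger delays with smaller relative variance follows from modeling the delay as a sum of sequential exponential reaction steps, which gives an Erlang distribution whose relative standard deviation scales as 1/sqrt(N). A minimal simulation, with illustrative step counts and rates:

```python
import random

# Delay emerging from N sequential exponential reaction steps (e.g.,
# elongation across a gene of N bases): the total delay is Erlang
# distributed, with mean N/rate and relative standard deviation 1/sqrt(N).
# A "longer gene" (larger N) thus gives a larger but relatively tighter delay.

def sample_delay(n_steps, rate, rng):
    return sum(rng.expovariate(rate) for _ in range(n_steps))

def rel_std(samples):
    m = sum(samples) / len(samples)
    var = sum((s - m) ** 2 for s in samples) / len(samples)
    return (var ** 0.5) / m

rng = random.Random(42)
short_delays = [sample_delay(10, 1.0, rng) for _ in range(5000)]
long_delays = [sample_delay(100, 1.0, rng) for _ in range(5000)]

print(sum(short_delays) / 5000, rel_std(short_delays))  # mean ~10, rel. std ~0.32
print(sum(long_delays) / 5000, rel_std(long_delays))    # mean ~100, rel. std ~0.10
```

The deterministic impulse-function description corresponds to the large-N limit of this picture, where the distribution concentrates around its mean.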

Furthermore, evidence suggests that delays may serve as an explicit design element in existing control mechanisms. Accordingly, the recurring dual-feedback motif is also investigated with delays incorporated into the feedback channels. Through a control-theoretic approach, the dual-delayed feedback is shown to have stabilizing effects. Lastly, a distributed-delay-based controller design method is proposed as a potential design tool. In a preliminary study, the dual-delayed feedback system re-emerges as an effective controller design.

Abstract:

While some of the deepest results in nature are those that give explicit bounds between important physical quantities, some of the most intriguing and celebrated of such bounds come from fields where there is still a great deal of disagreement and confusion regarding even the most fundamental aspects of the theories. For example, in quantum mechanics, there is still no complete consensus as to whether the limitations associated with Heisenberg's Uncertainty Principle derive from an inherent randomness in physics, or rather from limitations in the measurement process itself, resulting from phenomena like back action. Likewise, the second law of thermodynamics makes a statement regarding the increase in entropy of closed systems, yet the theory itself has neither a universally accepted definition of equilibrium, nor an adequate explanation of how a system with underlying microscopically Hamiltonian (reversible) dynamics settles into a fixed distribution.

Motivated by these physical theories, and perhaps their inconsistencies, in this thesis we use dynamical systems theory to investigate how the very simplest of systems, even with no physical constraints, are characterized by bounds that give limits to the ability to make measurements on them. Using an existing interpretation, we start by examining how dissipative systems can be viewed as high-dimensional lossless systems, and how taking this view necessarily implies the existence of a noise process that results from the uncertainty in the initial system state. This fluctuation-dissipation result plays a central role in a measurement model that we examine, in particular describing how noise is inevitably injected into a system during a measurement, noise that can be viewed as originating either from the randomness of the many degrees of freedom of the measurement device, or of the environment. This noise constitutes one component of measurement back action, and ultimately imposes limits on measurement uncertainty. Depending on the assumptions we make about active devices, and their limitations, this back action can be offset to varying degrees via control. It turns out that using active devices to reduce measurement back action leads to estimation problems that have non-zero uncertainty lower bounds, the most interesting of which arise when the observed system is lossless. One such lower bound, a main contribution of this work, can be viewed as a classical version of a Heisenberg uncertainty relation between the system's position and momentum. We finally also revisit the murky question of how macroscopic dissipation appears from lossless dynamics, and propose alternative approaches for framing the question using existing systematic methods of model reduction.

Abstract:

The rapid rise in residential photovoltaic (PV) adoption over the past half decade has created a need in the electricity industry for a widely accessible model that estimates PV adoption based on a combination of different business and policy decisions. This work analyzes historical adoption patterns and finds fiscal savings to be the single most important factor in PV adoption, with significantly greater predictive power than all other socioeconomic factors, including income and education. Based on our findings, we created an application, available on Google App Engine (GAE), that allows all stakeholders, including policymakers, power system researchers, and regulators, to study the complex and coupled relationship between PV adoption, utility economics, and grid sustainability. The application allows users to experiment with different customer demographics, tier structures, and subsidies, allowing them to tailor it to the geographic region they are studying. This study then demonstrates the different types of analyses possible with the application by studying the relative impact of different policies regarding tier structures, fixed charges, and PV prices on PV adoption.

Abstract:

The high computational cost of correlated wavefunction theory (WFT) calculations has motivated the development of numerous methods to partition the description of large chemical systems into smaller subsystem calculations. For example, WFT-in-DFT embedding methods facilitate the partitioning of a system into two subsystems: a subsystem A that is treated using an accurate WFT method, and a subsystem B that is treated using a more efficient Kohn-Sham density functional theory (KS-DFT) method. Representation of the interactions between subsystems is non-trivial, and often requires the use of approximate kinetic energy functionals or computationally challenging optimized effective potential calculations; however, it has recently been shown that these challenges can be eliminated through the use of a projection operator. This dissertation describes the development and application of embedding methods that enable accurate and efficient calculation of the properties of large chemical systems.

Chapter 1 introduces a method for efficiently performing projection-based WFT-in-DFT embedding calculations on large systems. This is accomplished by using a truncated basis set representation of the subsystem A wavefunction. We show that naive truncation of the basis set associated with subsystem A can lead to large numerical artifacts, and present an approach for systematically controlling these artifacts.

Chapter 2 describes the application of the projection-based embedding method to investigate the oxidative stability of lithium-ion batteries. We study the oxidation potentials of mixtures of ethylene carbonate (EC) and dimethyl carbonate (DMC) by using the projection-based embedding method to calculate the vertical ionization energy (IE) of individual molecules at the CCSD(T) level of theory, while explicitly accounting for the solvent using DFT. Interestingly, we reveal that large contributions to the solvation properties of DMC originate from quadrupolar interactions, resulting in a much larger solvent reorganization energy than that predicted using simple dielectric continuum models. Demonstration that the solvation properties of EC and DMC are governed by fundamentally different intermolecular interactions provides insight into key aspects of lithium-ion batteries, with relevance to electrolyte decomposition processes, solid-electrolyte interphase formation, and the local solvation environment of lithium cations.

Abstract:

Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation, we study these two areas, each comprising one part. For the first area, we study the so-called entropy vectors via finite group theory, and the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality, an inequality respected by all linear network codes but not satisfied by all entropy vectors. Based on the analysis of this group, we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. In that direction, we study the network codes constructed from finite groups, and in particular show that linear network codes are embedded in the group network codes constructed from these Ingleton-violating families. Furthermore, such codes are strictly more powerful than linear network codes, as they are able to violate the Ingleton inequality while linear network codes cannot. For the second area, we study the impact of memory on channel capacity through a novel communication system: the energy harvesting channel. Unlike traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery. As a consequence, at each time the system can only transmit a symbol whose energy consumption is no more than the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel: one that is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, while no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity.
In this work, we use techniques from channels with side information and finite state channels to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel in order to compute and optimize the achievable rates for the original channel. In addition, for practical code design we study the pairwise error probabilities of the input sequences.
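The Ingleton inequality mentioned above can be checked numerically for any four jointly distributed random variables, given their joint distribution. The sketch below evaluates the Ingleton expression from subset entropies; the example distribution (four independent fair bits) is illustrative and satisfies the inequality.

```python
from itertools import product
from math import log2

# Evaluate the Ingleton expression for four jointly distributed random
# variables A, B, C, D (indices 0..3) from their joint pmf. Entropy vectors
# arising from linear network codes always give a nonnegative gap; some
# entropy vectors (e.g., those derived from certain finite groups) give a
# negative gap, i.e., an Ingleton violation.

def subset_entropy(pmf, idx):
    marg = {}
    for outcome, p in pmf.items():
        key = tuple(outcome[i] for i in idx)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * log2(p) for p in marg.values() if p > 0)

def ingleton_gap(pmf):
    H = lambda *idx: subset_entropy(pmf, idx)
    # Ingleton: H(AB)+H(AC)+H(AD)+H(BC)+H(BD)
    #           >= H(A)+H(B)+H(ABC)+H(ABD)+H(CD)
    return (H(0, 1) + H(0, 2) + H(0, 3) + H(1, 2) + H(1, 3)
            - H(0) - H(1) - H(0, 1, 2) - H(0, 1, 3) - H(2, 3))

# Four independent fair bits: every k-subset has entropy k, and the
# Ingleton expression evaluates to exactly zero (satisfied with equality).
pmf = {bits: 1 / 16 for bits in product([0, 1], repeat=4)}
print(ingleton_gap(pmf))  # 0.0: no violation
```

The group-theoretic construction in the thesis produces joint distributions (from cosets of subgroups) whose entropy vectors make this gap negative, which is precisely what linear network codes can never do.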

Abstract:

Computation technology has dramatically changed the world around us; you can hardly find an area of life that cell phones have not saturated. Yet there is a significant lack of breakthroughs in integrating computers with biological environments, largely because of the incompatibility of the materials involved: biological environments and experiments tend to require aqueous conditions. To help bridge this divide, chemists, engineers, physicists, and biologists have begun to develop microfluidics. Unfortunately, microfluidic devices have typically required large external support equipment to run. This thesis presents several microfluidic methods that help integrate engineering and biology by exploiting nanotechnology, pushing the field of microfluidics back toward its intended purpose: small, integrated biological and electrical devices. I demonstrate this goal by developing different methods and devices to (1) separate membrane-bound proteins with the use of microfluidics, (2) use optical technology to turn fiber optic cables into protein sensors, (3) generate new fluidic devices using semiconductor materials to manipulate single cells, and (4) develop a new microfluidic-based genetic diagnostic assay that works with current PCR methodology to provide faster and cheaper results. All of these methods and systems can be used as components of a self-contained biomedical device.

Abstract:

Ordered granular systems have been a subject of active research for decades. Due to their rich dynamic response and nonlinearity, ordered granular systems have been suggested for several applications, such as solitary wave focusing, acoustic signal manipulation, and vibration absorption. Most of the fundamental research performed on ordered granular systems has focused on macro-scale examples. However, most engineering applications require these systems to operate at much smaller scales. Very little is known about the response of micro-scale granular systems, primarily because of the difficulties in realizing reliable and quantitative experiments, which originate from the discrete nature of granular materials and their highly nonlinear inter-particle contact forces.
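The nonlinear inter-particle contact forces mentioned above are classically described by Hertz's law, under which the force grows as the 3/2 power of the overlap between two elastic spheres. A minimal sketch with illustrative stainless-steel parameters (the thesis's 150-micrometer particle radius is used only to set the scale):

```python
# Hertzian contact between two elastic spheres:
#   F = (4/3) * E_eff * sqrt(R_eff) * d**1.5,
# where d is the overlap. The 3/2 power (rather than a linear spring) is the
# source of the rich nonlinear dynamics of granular chains. Material values
# below are typical for stainless steel and purely illustrative.

def hertz_force(overlap, radius1, radius2, young, poisson):
    if overlap <= 0:
        return 0.0                             # no tensile force: contact only
    r_eff = radius1 * radius2 / (radius1 + radius2)
    e_eff = young / (2 * (1 - poisson ** 2))   # two identical materials
    return (4.0 / 3.0) * e_eff * r_eff ** 0.5 * overlap ** 1.5

r = 150e-6                                     # 150 micrometer radius
f1 = hertz_force(1e-9, r, r, 200e9, 0.3)
f2 = hertz_force(2e-9, r, r, 200e9, 0.3)
print(f2 / f1)                                 # 2**1.5 ~ 2.83: stiffening contact
```

The one-sided, power-law character of this force (zero in tension, stiffening in compression) is what supports solitary waves in granular chains and what makes micro-scale experiments so sensitive to particle alignment and pre-compression.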

In this work, we investigate the physics of ordered micro-granular systems by designing an innovative experimental platform that allows us to assemble, excite, and characterize ordered micro-granular systems. This new experimental platform employs a laser system to deliver impulses with controlled momentum and incorporates non-contact measurement apparatuses to detect the particles’ displacement and velocity. We demonstrated the capability of the laser system to excite systems of dry (stainless steel particles of radius 150 micrometers) and wet (silica particles of radius 3.69 micrometers, immersed in fluid) micro-particles, after which we analyzed the stress propagation through these systems.

We derived the equations of motion governing the dynamic response of dry and wet particles on a substrate, which we then validated in experiments. We then measured the losses in these systems and characterized the collision and friction between two micro-particles. We studied wave propagation in one-dimensional dry chains of micro-particles as well as in two-dimensional colloidal systems immersed in fluid. We investigated the influence of defects to wave propagation in the one-dimensional systems. Finally, we characterized the wave-attenuation and its relation to the viscosity of the surrounding fluid and performed computer simulations to establish a model that captures the observed response.

The findings of the study offer the first systematic experimental and numerical analysis of wave propagation through ordered systems of micro-particles. The experimental system designed in this work provides the necessary tools for further fundamental studies of wave propagation in both granular and colloidal systems.

Abstract:

I. It was not possible to produce anti-tetracycline antibody in laboratory animals by any of the methods tried. Tetracycline-protein conjugates were prepared and characterized. It was shown that previous reports of the detection of anti-tetracycline antibody by in vitro methods were in error. Tetracycline precipitates non-specifically with serum proteins. The anaphylactic reaction reported was the result of misinterpretation, since the observations were inconsistent with the known mechanism of anaphylaxis and the supposed antibody would not sensitize guinea pig skin. The hemagglutination reaction was not reproducible and was extremely sensitive to minute amounts of microbial contamination. Both free tetracyclines and the conjugates were found to be poor antigens.

II. Anti-aspiryl antibodies were produced in rabbits using three protein carriers. The method of inhibition of precipitation was used to determine the specificity of the antibody produced. ε-Aminocaproate was found to be the most effective inhibitor of the haptens tested, indicating that the combining hapten of the protein is ε-aspiryl-lysyl. Free aspirin and salicylates were poor inhibitors and did not combine with the antibody to a significant extent. The ortho group was found to participate in the binding to antibody. The average binding constants were measured.

Normal rabbit serum was acetylated by aspirin under in vitro conditions, which are similar to physiological conditions. The extent of acetylation was determined by immunochemical tests. The acetylated serum proteins were shown to be potent antigens in rabbits. It was also shown that aspiryl proteins were partially acetylated. The relation of these results to human aspirin intolerance is discussed.

III. Aspirin did not induce contact sensitivity in guinea pigs when they were immunized by techniques that induce sensitivity with other reactive compounds. The acetylation mechanism is not relevant to this type of hypersensitivity, since sensitivity is not produced by potent acetylating agents like acetyl chloride and acetic anhydride. Aspiryl chloride, a totally artificial system, is a good sensitizer. Its specificity was examined.

IV. Protein conjugates were prepared with p-aminosalicylic acid and various carriers using azo, carbodiimide, and mixed anhydride coupling. These antigens were injected into rabbits and guinea pigs, and no anti-hapten IgG or IgM response was obtained. Delayed hypersensitivity was produced in guinea pigs by immunization with the conjugates, and its specificity was determined. Guinea pigs were not sensitized by either injection or topical application of p-aminosalicylic acid or p-aminosalicylate.