15 results for Parameter Estimation, Fokker-Planck Equation, Finite Elements

in CaltechTHESIS


Relevance: 100.00%

Abstract:

A theory of two-point boundary value problems analogous to the theory of initial value problems for stochastic ordinary differential equations whose solutions form Markov processes is developed. The theory of initial value problems consists of three main parts: the proof that the solution process is Markovian and diffusive; the construction of the Kolmogorov or Fokker-Planck equation of the process; and the proof that the transition probability density of the process is a unique solution of the Fokker-Planck equation.

It is assumed here that the stochastic differential equation under consideration has, as an initial value problem, a diffusive Markovian solution process. When a given boundary value problem for this stochastic equation almost surely has unique solutions, we show that the solution process of the boundary value problem is also a diffusive Markov process. Since a boundary value problem, unlike an initial value problem, has no preferred direction for the parameter set, we find that there are two Fokker-Planck equations, one for each direction. It is shown that the density of the solution process of the boundary value problem is the unique simultaneous solution of this pair of Fokker-Planck equations.
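
As a purely schematic illustration (the scalar notation and coefficients below are assumed, not taken from the thesis), such a pair consists of a Fokker-Planck equation running in each direction of the parameter, each with its own drift and diffusion coefficients:

```latex
% Schematic only; the coefficients a, b and \tilde{a}, \tilde{b} are assumed for illustration.
\frac{\partial p}{\partial t}
  = -\frac{\partial}{\partial x}\!\left[a(t,x)\,p\right]
    + \frac{1}{2}\frac{\partial^2}{\partial x^2}\!\left[b(t,x)\,p\right],
\qquad
-\frac{\partial p}{\partial t}
  = -\frac{\partial}{\partial x}\!\left[\tilde a(t,x)\,p\right]
    + \frac{1}{2}\frac{\partial^2}{\partial x^2}\!\left[\tilde b(t,x)\,p\right].
```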

This theory is then applied to the problem of a vibrating string with stochastic density.

Relevance: 100.00%

Abstract:

The problem of determining probability density functions of general transformations of random processes is considered in this thesis. A method of solution is developed in which partial differential equations satisfied by the unknown density function are derived. These partial differential equations are interpreted as generalized forms of the classical Fokker-Planck-Kolmogorov equations and are shown to imply the classical equations for certain classes of Markov processes. Extensions of the generalized equations which overcome degeneracy occurring in the steady-state case are also obtained.

The equations of Darling and Siegert are derived as special cases of the generalized equations thereby providing unity to two previously existing theories. A technique for treating non-Markov processes by studying closely related Markov processes is proposed and is seen to yield the Darling and Siegert equations directly from the classical Fokker-Planck-Kolmogorov equations.

As illustrations of their applicability, the generalized Fokker-Planck-Kolmogorov equations are presented for certain joint probability density functions associated with the linear filter. These equations are solved for the density of the output of an arbitrary linear filter excited by Markov Gaussian noise and for the density of the output of an RC filter excited by the Poisson square wave. This latter density is also found by using the extensions of the generalized equations mentioned above. Finally, some new approaches for finding the output probability density function of an RC filter-limiter-RC filter system driven by white Gaussian noise are included. The results in this case exhibit the data required for complete solution and clearly illustrate some of the mathematical difficulties inherent to the use of the generalized equations.

Relevance: 100.00%

Abstract:

In this work, computationally efficient approximate methods are developed for analyzing uncertain dynamical systems. Uncertainties in both the excitation and the modeling are considered and examples are presented illustrating the accuracy of the proposed approximations.

For nonlinear systems under uncertain excitation, methods are developed to approximate the stationary probability density function and statistical quantities of interest. The methods are based on approximating solutions to the Fokker-Planck equation for the system and differ from traditional methods in which approximate solutions to stochastic differential equations are found. The new methods require little computational effort and examples are presented for which the accuracy of the proposed approximations compares favorably to results obtained by existing methods. The most significant improvements are made in approximating quantities related to the extreme values of the response, such as expected outcrossing rates, which are crucial for evaluating the reliability of the system.
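
As a minimal illustration of the stationary Fokker-Planck route (the system, potential, and noise level below are assumed for the example, not taken from the thesis), a first-order system ẋ = -V'(x) + √(2D) ξ(t) has the exact stationary density p(x) ∝ exp(-V(x)/D), from which stationary statistics follow by quadrature:

```python
import numpy as np

# Assumed toy system: x_dot = -V'(x) + sqrt(2D) * white noise,
# with a Duffing-type potential V(x) = x^4/4 - x^2/2 (illustrative choice).
D = 0.5                                   # noise intensity (assumed)
V = lambda x: 0.25 * x**4 - 0.5 * x**2    # potential (assumed)

x = np.linspace(-4.0, 4.0, 4001)
dx = x[1] - x[0]
p = np.exp(-V(x) / D)                     # unnormalized stationary FPE solution
p /= p.sum() * dx                         # normalize to a probability density

mean = (x * p).sum() * dx                 # stationary mean
var = ((x - mean) ** 2 * p).sum() * dx    # stationary variance
print(f"stationary mean ~ {mean:.3f}, variance ~ {var:.3f}")
```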

Laplace's method of asymptotic approximation is applied to approximate the probability integrals which arise when analyzing systems with modeling uncertainty. The asymptotic approximation reduces the problem of evaluating a multidimensional integral to solving a minimization problem, and the results become asymptotically exact as the uncertainty in the modeling goes to zero. The method is found to provide good approximations for the moments and outcrossing rates for systems with uncertain parameters under stochastic excitation, even when there is a large amount of uncertainty in the parameters. The method is also applied to classical reliability integrals, providing approximations in both the transformed (independent, normally distributed) variables and the original variables. In the transformed variables, the asymptotic approximation yields a very simple formula for approximating the value of SORM integrals. In many cases, it may be computationally expensive to transform the variables, and an approximation is also developed in the original variables. Examples are presented illustrating the accuracy of the approximations and results are compared with existing approximations.
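
A sketch of Laplace's method in this spirit (the integrand below is an assumed toy example): the multidimensional integral is replaced by a minimization plus a Gaussian correction around the minimizer:

```python
import numpy as np
from scipy.optimize import minimize

# Assumed toy integrand: I = integral of exp(-h(theta)) over R^2, h chosen for illustration.
def h(theta):
    x, y = theta
    return 0.5 * (3 * x**2 + y**2) + 0.1 * x**4 + x * y

def hessian(f, theta, eps=1e-5):
    """Finite-difference Hessian (simple illustration, not production code)."""
    n = len(theta)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * eps, np.eye(n)[j] * eps
            H[i, j] = (f(theta + e_i + e_j) - f(theta + e_i - e_j)
                       - f(theta - e_i + e_j) + f(theta - e_i - e_j)) / (4 * eps**2)
    return H

# Laplace's method: minimize h, then use the Gaussian approximation around the minimum.
res = minimize(h, x0=np.zeros(2))
theta_star, H = res.x, hessian(h, res.x)
I_laplace = (2 * np.pi) ** (len(theta_star) / 2) / np.sqrt(np.linalg.det(H)) \
            * np.exp(-h(theta_star))
print(f"Laplace approximation of the integral: {I_laplace:.4f}")
```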

Relevance: 100.00%

Abstract:

This is a two-part thesis concerning the motion of a test particle in a bath. In part one we use an expansion of the operator PL e^{it(1-P)L} LP to shape the Zwanzig equation into a generalized Fokker-Planck equation which involves a diffusion tensor depending on the test particle's momentum and the time.

In part two the resultant equation is studied in some detail for the case of test particle motion in a weakly coupled Lorentz Gas. The diffusion tensor for this system is considered. Some of its properties are calculated; it is computed explicitly for the case of a Gaussian potential of interaction.

The equation for the test particle distribution function can be put into the form of an inhomogeneous Schroedinger equation. The term corresponding to the potential energy in the Schroedinger equation is considered. Its structure is studied, and some of its simplest features are used to find the Green's function in the limiting situations of low density and long time.

Relevance: 100.00%

Abstract:

The propagation of waves in an extended, irregular medium is studied under the "quasi-optics" and the "Markov random process" approximations. Under these assumptions, a Fokker-Planck equation satisfied by the characteristic functional of the random wave field is derived. A complete set of the moment equations with different transverse coordinates and different wavenumbers is then obtained from the characteristic functional. The derivation does not require Gaussian statistics of the random medium and the result can be applied to the time-dependent problem. We then solve the moment equations for the phase correlation function, angular broadening, temporal pulse smearing, intensity correlation function, and the probability distribution of the random waves. The necessary and sufficient conditions for strong scintillation are also given.

We also consider the problem of diffraction of waves by a random, phase-changing screen. The intensity correlation function is solved in the whole Fresnel diffraction region and the temporal pulse broadening function is derived rigorously from the wave equation.

The method of smooth perturbations is applied to interplanetary scintillations. We formulate and calculate the effects of the solar-wind velocity fluctuations on the observed intensity power spectrum and on the ratio of the observed "pattern" velocity and the true velocity of the solar wind in the three-dimensional spherical model. The r.m.s. solar-wind velocity fluctuations are found to be ~200 km/sec in the region about 20 solar radii from the Sun.

We then interpret the observed interstellar scintillation data using the theories derived under the Markov approximation, which are also valid for strong scintillation. We find that the Kolmogorov power-law spectrum with an outer scale of 10 to 100 pc fits the scintillation data and that the ambient averaged electron density in the interstellar medium is about 0.025 cm⁻³. It is also found that there exists a region of strong electron density fluctuation with thickness ~10 pc and mean electron density ~7 cm⁻³ between the PSR 0833-45 pulsar and the Earth.

Relevance: 100.00%

Abstract:

A new analytic solution has been obtained to the complete Fokker-Planck equation for solar flare particle propagation including the effects of convection, energy-change, corotation, and diffusion with κ_r = constant and κ_θ ∝ r². It is assumed that the particles are injected impulsively at a single point in space, and that a boundary exists beyond which the particles are free to escape. Several solar flare particle events have been observed with the Caltech Solar and Galactic Cosmic Ray Experiment aboard OGO-6. Detailed comparisons of the predictions of the new solution with these observations of 1-70 MeV protons show that the model adequately describes both the rise and decay times, indicating that κ_r = constant is a better description of conditions inside 1 AU than is κ_r ∝ r. With an outer boundary at 2.7 AU, a solar wind velocity of 400 km/sec, and a radial diffusion coefficient κ_r ≈ 2-8 × 10²⁰ cm²/sec, the model gives reasonable fits to the time-profile of 1-10 MeV protons from "classical" flare-associated events. It is not necessary to invoke a scatter-free region near the sun in order to reproduce the fast rise times observed for directly-connected events. The new solution also yields a time-evolution for the vector anisotropy which agrees well with previously reported observations.

In addition, the new solution predicts that, during the decay phase, a typical convex spectral feature initially at energy T_o will move to lower energies at an exponential rate given by T_KINK = T_o exp(-t/τ_KINK). Assuming adiabatic deceleration and a boundary at 2.7 AU, the solution yields τ_KINK ≈ 100 h, which is faster than the measured ~200 h time constant and slower than the adiabatic rate of ~78 h at 1 AU. Two possible explanations are that the boundary is at ~5 AU or that some other energy-change process is operative.

Relevance: 100.00%

Abstract:

This thesis presents a novel framework for state estimation in the context of robotic grasping and manipulation. The overall estimation approach is based on fusing various visual cues for manipulator tracking, namely appearance and feature-based, shape-based, and silhouette-based visual cues. Similarly, a framework is developed to fuse the above visual cues, but also kinesthetic cues such as force-torque and tactile measurements, for in-hand object pose estimation. The cues are extracted from multiple sensor modalities and are fused in a variety of Kalman filters.

A hybrid estimator is developed to estimate both a continuous state (robot and object states) and discrete states, called contact modes, which specify how each finger contacts a particular object surface. A static multiple model estimator is used to compute and maintain this mode probability. The thesis also develops an estimation framework for estimating model parameters associated with object grasping. Dual and joint state-parameter estimation are explored for parameter estimation of a grasped object's mass and center of mass. Experimental results demonstrate simultaneous object localization and center of mass estimation.
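
As a hedged sketch of the joint state-parameter idea (the one-dimensional setup, noise levels, and the use of inverse mass as the appended parameter are assumptions for illustration, not the thesis' estimator), the parameter is simply appended to the state vector of a single Kalman filter:

```python
import numpy as np

# Assumed toy problem: jointly estimate an object's velocity and its inverse mass from
# noisy velocity measurements, with a known applied force F, using one Kalman filter.
dt, F = 0.01, 2.0                 # time step and known applied force (assumed)
true_mass = 0.5
x_true = np.array([0.0, 1.0 / true_mass])   # augmented state: [velocity, inverse mass]

x_est = np.array([0.0, 1.0])      # initial guess corresponds to 1 kg
P = np.diag([0.1, 1.0])           # initial covariance
Q = np.diag([1e-6, 1e-8])         # process noise (parameter assumed nearly constant)
R = np.array([[0.05**2]])         # velocity measurement noise
H = np.array([[1.0, 0.0]])        # we observe velocity only

rng = np.random.default_rng(0)
for _ in range(500):
    A = np.array([[1.0, F * dt],  # v_{k+1} = v_k + F*dt*(1/m); (1/m) stays constant
                  [0.0, 1.0]])
    x_true = A @ x_true
    z = H @ x_true + rng.normal(0.0, 0.05, size=1)

    # Kalman predict / update on the augmented state
    x_est = A @ x_est
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P

print(f"estimated mass ~ {1.0 / x_est[1]:.3f} kg (true {true_mass} kg)")
```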

Dual-arm estimation is developed for two-arm robotic manipulation tasks. Two types of filters are explored: the first is an augmented filter that contains both arms in the state vector, while the second runs two filters in parallel, one for each arm. These two frameworks and their performance are compared in a dual-arm task of removing a wheel from a hub.

This thesis also presents a new method for action selection involving touch. This next best touch method selects, from the available actions for interacting with an object, the one expected to gain the most information. The algorithm employs information theory to compute an information gain metric based on a probabilistic belief suitable for the task. An estimation framework is used to maintain this belief over time. Kinesthetic measurements such as contact and tactile measurements are used to update the state belief after every interactive action. Simulation and experimental results are demonstrated using next best touch for object localization, specifically a door handle on a door. The next best touch theory is extended for model parameter determination. Since many objects within a particular object category share the same rough shape, principal component analysis may be used to parametrize the object mesh models. These parameters can be estimated using the action selection technique, which selects the touching action that best both localizes the object and estimates these parameters. Simulation results are then presented involving localizing and determining a parameter of a screwdriver.
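
A minimal sketch of such an information-gain criterion (the discrete pose hypotheses, candidate actions, and binary contact measurement model below are assumed for illustration, not the thesis' models):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Assumed setup: a discrete belief over a few object-pose hypotheses, candidate touch
# actions, and a binary contact/no-contact measurement model P(contact | pose, action).
belief = np.array([0.25, 0.25, 0.25, 0.25])         # uniform prior over 4 poses
p_contact = np.array([                               # rows: actions, cols: poses (assumed)
    [0.9, 0.9, 0.1, 0.1],
    [0.9, 0.1, 0.9, 0.1],
    [0.5, 0.5, 0.5, 0.5],
])

def expected_info_gain(belief, p_c):
    """Expected entropy reduction of a binary touch measurement for one action."""
    gain = entropy(belief)
    for likelihood in (p_c, 1.0 - p_c):              # contact and no-contact outcomes
        p_outcome = np.dot(likelihood, belief)
        if p_outcome > 0:
            posterior = likelihood * belief / p_outcome
            gain -= p_outcome * entropy(posterior)
    return gain

gains = [expected_info_gain(belief, p_contact[a]) for a in range(len(p_contact))]
best = int(np.argmax(gains))
print(f"information gains: {np.round(gains, 3)}, next best touch: action {best}")
```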

Lastly, the next best touch theory is further extended to model classes. Instead of estimating parameters, object class determination is incorporated into the information gain metric calculation. The best touching action is selected in order to best discern between the possible model classes. Simulation results are presented to validate the theory.

Relevance: 100.00%

Abstract:

A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (fewer unknowns than equations), traditionally very few statistical estimation problems have considered a data model which is underdetermined (more unknowns than equations). In recent times, however, a large body of theoretical and computational methods has been developed, primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that despite the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, etc., its information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.

In this thesis, we provide new directions for estimation in an underdetermined system, both for a class of parameter estimation problems and also for the problem of sparse recovery in compressive sensing. There are two main contributions of the thesis: design of new sampling and statistical estimation algorithms for array processing, and development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.

We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) as well as propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model such as correlation, higher order moments, etc.

Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
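
As a small illustration of why such sparse geometries help (the nested-array parameters below are assumed, not the thesis' designs), the difference coarray of a few physical sensors already contains many more distinct correlation lags than there are sensors:

```python
import numpy as np

# Assumed two-level nested array: O(N) physical sensors produce O(N^2) distinct
# correlation lags, so more sources than sensors can be identified from
# second-order statistics.
N1, N2 = 3, 3                                   # sizes of the two nested levels (assumed)
inner = np.arange(1, N1 + 1)                    # dense level at positions 1, ..., N1
outer = (N1 + 1) * np.arange(1, N2 + 1)         # sparse level at (N1+1), 2(N1+1), ...
sensors = np.concatenate((inner, outer))        # 6 physical sensor positions (units of d)

lags = np.unique([a - b for a in sensors for b in sensors])
print("physical sensors:", sensors.tolist())
print("distinct coarray lags:", len(lags))      # 23 lags from only 6 sensors
```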

This new paradigm of underdetermined estimation that explicitly establishes the fundamental interplay between sampling, statistical priors and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.

Relevance: 100.00%

Abstract:

A general review of stochastic processes is given in the introduction; definitions, properties and a rough classification are presented together with the position and scope of the author's work as it fits into the general scheme.

The first section presents a brief summary of the pertinent analytical properties of continuous stochastic processes and their probability-theoretic foundations which are used in the sequel.

The remaining two sections (II and III), comprising the body of the work, are the author's contribution to the theory. It turns out that a very inclusive class of continuous stochastic processes is characterized by a fundamental partial differential equation and its adjoint (the Fokker-Planck equations). The coefficients appearing in those equations assimilate, in a most concise way, all the salient properties of the process, freed from boundary value considerations. The writer's work consists in characterizing the processes through these coefficients without recourse to solving the partial differential equations.

First, a class of coefficients leading to a unique, continuous process is presented, and several facts are proven to show why this class is restricted. Then, in terms of the coefficients, the unconditional statistics are deduced, these being the mean, variance and covariance. The most general class of coefficients leading to the Gaussian distribution is deduced, and a complete characterization of these processes is presented. By specializing the coefficients, all the known stochastic processes may be readily studied, and some examples of these are presented, viz. the Einstein process, Bachelier process, Ornstein-Uhlenbeck process, etc. The calculations are effectively reduced to ordinary first-order differential equations, and in addition to giving a comprehensive characterization, the derivations are materially simpler than solving the original partial differential equations.
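
For example (the notation and the Ornstein-Uhlenbeck coefficients below are assumed for illustration, not the thesis' own), the reduction to ordinary first-order differential equations looks as follows for drift a(x) = -βx and diffusion b(x) = 2D:

```latex
% Illustration with assumed Ornstein--Uhlenbeck coefficients a(x) = -\beta x, b(x) = 2D.
\frac{d}{dt}\,\mathbb{E}[X_t] = -\beta\,\mathbb{E}[X_t],
\qquad
\frac{d}{dt}\,\operatorname{Var}(X_t) = -2\beta\,\operatorname{Var}(X_t) + 2D
\;\;\Longrightarrow\;\;
\operatorname{Var}(X_t) \to \frac{D}{\beta} \text{ as } t \to \infty.
```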

In the last section the properties of the integral process are presented. After an expository section on the definition, meaning, and importance of the integral process, a particular example is carried through starting from basic definition. This illustrates the fundamental properties, and an inherent paradox. Next the basic coefficients of the integral process are studied in terms of the original coefficients, and the integral process is uniquely characterized. It is shown that the integral process, with a slight modification, is a continuous Markoff process.

The elementary statistics of the integral process are deduced: means, variances, and covariances, in terms of the original coefficients. It is shown that the integral process of a non-degenerate process is never temporally homogeneous.

Finally, in terms of the original class of admissible coefficients, the statistics of the integral process are explicitly presented, and the integral process of all known continuous processes are specified.

Relevance: 100.00%

Abstract:

The Fokker-Planck (FP) equation is used to develop a general method for finding the spectral density for a class of randomly excited first order systems. This class consists of systems satisfying stochastic differential equations of the form ẋ + f(x) = Σ_{j=1}^{m} h_j(x) n_j(t), where f and the h_j are piecewise linear functions (not necessarily continuous), and the n_j are stationary Gaussian white noise. For such systems, it is shown how the Laplace-transformed FP equation can be solved for the transformed transition probability density. By manipulation of the FP equation and its adjoint, a formula is derived for the transformed autocorrelation function in terms of the transformed transition density. From this, the spectral density is readily obtained. The method generalizes that of Caughey and Dienes, J. Appl. Phys., 32.11.

This method is applied to four subclasses: (1) m = 1, h_1 = const. (forcing function excitation); (2) m = 1, h_1 = f (parametric excitation); (3) m = 2, h_1 = const., h_2 = f, n_1 and n_2 correlated; (4) the same, uncorrelated. Many special cases, especially in subclass (1), are worked through to obtain explicit formulas for the spectral density, most of which have not been obtained before. Some results are graphed.
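
For orientation, a sketch of the simplest linear special case of subclass (1) (the parameters and the simulation below are assumed for illustration, not results from the thesis): with ẋ + αx = n(t) and E[n(t)n(t+τ)] = 2Dδ(τ), the two-sided spectral density is S(ω) = 2D/(α² + ω²), which can be checked numerically:

```python
import numpy as np
from scipy.signal import welch

# Assumed linear special case: x_dot + alpha*x = n(t), white noise of intensity 2D.
# One-sided PSD in Hz should be close to 4D / (alpha^2 + (2*pi*f)^2).
alpha, D, dt, n_steps = 1.0, 0.5, 1e-3, 1_000_000
rng = np.random.default_rng(1)

x = np.zeros(n_steps)
dW = rng.normal(0.0, np.sqrt(2 * D * dt), size=n_steps)  # integrated white-noise forcing
for k in range(n_steps - 1):
    x[k + 1] = x[k] - alpha * x[k] * dt + dW[k]           # Euler-Maruyama step

freqs, psd = welch(x, fs=1.0 / dt, nperseg=2**16)         # one-sided PSD estimate
theory = 4 * D / (alpha**2 + (2 * np.pi * freqs) ** 2)

i = np.searchsorted(freqs, 0.3)                           # compare near 0.3 Hz
print(f"simulated PSD ~ {psd[i]:.3f}, theory {theory[i]:.3f}")
```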

Dealing with parametrically excited first order systems leads to two complications. There is some controversy concerning the form of the FP equation involved (see Gray and Caughey, J. Math. Phys., 44.3); and the conditions which apply at irregular points, where the second order coefficient of the FP equation vanishes, are not obvious but require use of the mathematical theory of diffusion processes developed by Feller and others. These points are discussed in the first chapter, relevant results from various sources being summarized and applied. Also discussed is the steady-state density (the limit of the transition density as t → ∞).

Relevance: 100.00%

Abstract:

A large number of technologically important materials undergo solid-solid phase transformations. Examples range from ferroelectrics (transducers and memory devices) and zirconia (thermal barrier coatings) to nickel superalloys and (lithium) iron phosphate (Li-ion batteries). These transformations involve a change in the crystal structure, either through diffusion of species or through local rearrangement of atoms. This change of crystal structure leads to a macroscopic change of shape or volume or both, and results in internal stresses during the transformation. In certain situations this stress field gives rise to cracks (tin, iron phosphate, etc.) which continue to propagate as the transformation front traverses the material. In other materials the transformation modifies the stress field around cracks and affects crack growth behavior (zirconia, ferroelectrics). These observations serve as our motivation to study cracks in solids undergoing phase transformations. Understanding these effects will help in improving the mechanical reliability of the devices employing these materials.

In this thesis we present work on two problems concerning the interplay between cracks and phase transformations. First, we consider the directional growth of a set of parallel edge cracks due to a solid-solid transformation. We conclude from our analysis that phase transformations can lead to the formation of parallel edge cracks when the transformation strain satisfies certain conditions, and that the resulting cracks grow until their tips cross over the phase boundary. Moreover, the cracks continue to grow at a uniform spacing, without any instabilities, as the phase boundary traverses into the interior of the body. There exists an optimal value for the spacing between the cracks. We ascertain these conclusions by performing numerical simulations using finite elements.

Second, we model the effect of the semiconducting nature and dopants on cracks in ferroelectric perovskite materials, particularly barium titanate. Traditional approaches to modeling fracture in these materials have treated them as insulators. In reality, they are wide-bandgap semiconductors with oxygen vacancies and trace impurities acting as dopants. We incorporate the space charge arising due to the semiconducting effect and dopant ionization in a phase field model for the ferroelectric. We derive the governing equations by invoking the dissipation inequality over a ferroelectric domain containing a crack. This approach also yields the driving force acting on the crack. Our phase field simulations of polarization domain evolution around a crack show the accumulation of electronic charge on the crack surface, making it more permeable than previously believed, as seen in recent experiments. We also discuss the effect the space charge has on domain formation and on the crack driving force.

Relevance: 100.00%

Abstract:

The search for reliable proxies of past deep ocean temperature and salinity has proved difficult, thereby limiting our ability to understand the coupling of ocean circulation and climate over glacial-interglacial timescales. Previous inferences of deep ocean temperature and salinity from sediment pore fluid oxygen isotopes and chlorinity indicate that the deep ocean density structure at the Last Glacial Maximum (LGM, approximately 20,000 years BP) was set by salinity, and that the density contrast between northern and southern sourced deep waters was markedly greater than in the modern ocean. High density stratification could help explain the marked contrast in carbon isotope distribution recorded in the LGM ocean relative to that we observe today, but what made the ocean's density structure so different at the LGM? How did it evolve from one state to another? Further, given the sparsity of the LGM temperature and salinity data set, what else can we learn by increasing the spatial density of proxy records?

We investigate the cause and feasibility of a highly salinity-stratified deep ocean at the LGM, and we work to increase the amount of information we can glean about the past ocean from pore fluid profiles of oxygen isotopes and chloride. Using a coupled ocean--sea ice--ice shelf cavity model, we test whether the deep ocean density structure at the LGM can be explained by ice--ocean interactions over the Antarctic continental shelves, and show that a large part of the LGM salinity stratification can be explained through lower ocean temperature. In order to extract the maximum information from pore fluid profiles of oxygen isotopes and chloride, we evaluate several inverse methods for ill-posed problems and their ability to recover bottom water histories from sediment pore fluid profiles. We demonstrate that Bayesian Markov Chain Monte Carlo parameter estimation techniques enable us to robustly recover the full solution space of bottom water histories, not only at the LGM but through the most recent deglaciation and the Holocene up to the present. Finally, we evaluate a non-destructive pore fluid sampling technique, Rhizon samplers, in comparison to traditional squeezing methods and show that, despite their promise, Rhizons are unlikely to be a good sampling tool for pore fluid measurements of oxygen isotopes and chloride.
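
A hedged sketch of the Markov Chain Monte Carlo idea (the exponential-relaxation forward model, priors, and synthetic data below are toy assumptions; the actual work uses a full sediment-column model):

```python
import numpy as np

# Toy problem: recover a "glacial" bottom-water d18O offset and a decay length from a
# noisy synthetic pore-fluid profile with a Metropolis-Hastings sampler.
rng = np.random.default_rng(0)

def forward(params, z):
    """Toy pore-fluid profile: relaxation from a modern value toward a glacial value."""
    glacial, L = params
    return glacial * (1.0 - np.exp(-z / L))

z = np.linspace(0.0, 60.0, 30)                       # depth below seafloor, m (assumed)
truth = np.array([1.0, 25.0])                        # true glacial offset and decay length
sigma = 0.05
data = forward(truth, z) + rng.normal(0.0, sigma, z.size)

def log_post(params):
    glacial, L = params
    if not (0.0 < glacial < 3.0 and 1.0 < L < 100.0):   # flat priors (assumed bounds)
        return -np.inf
    resid = data - forward(params, z)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Metropolis-Hastings random walk
chain, current = [], np.array([0.5, 10.0])
lp = log_post(current)
for _ in range(20000):
    proposal = current + rng.normal(0.0, [0.05, 1.0])
    lp_prop = log_post(proposal)
    if np.log(rng.uniform()) < lp_prop - lp:
        current, lp = proposal, lp_prop
    chain.append(current.copy())

chain = np.array(chain[5000:])                       # discard burn-in
print("posterior mean:", chain.mean(axis=0), "posterior std:", chain.std(axis=0))
```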

Relevance: 100.00%

Abstract:

The Earth is very heterogeneous, especially in the region close to the surface and in regions close to the core-mantle boundary (CMB). The lowermost mantle (the bottom 300 km of the mantle) hosts fast anomalies (S velocity 3% faster than PREM, modeled from Scd), slow anomalies (S velocity 3% slower than PREM, modeled from S and ScS), and extremely anomalous structure (ultra-low velocity zones, 30% lower in S velocity and 10% lower in P velocity). Strong anomalies of larger dimension are also observed beneath Africa and the Pacific, originally modeled from the travel times of S, SKS, and ScS. Given the heterogeneous nature of the Earth, a more accurate approach than travel times has to be applied to study the details of these anomalous structures, and matching waveforms with synthetic seismograms has proven effective in constraining velocity structures. However, it is difficult to compute synthetic seismograms in more than one dimension, where no exact analytical solution is possible, and numerical methods like finite differences or finite elements are too time consuming for modeling body waveforms.

We developed a 2D synthetic algorithm, extended from 1D generalized ray theory (GRT), to compute synthetic seismograms efficiently (minutes per seismogram). This 2D algorithm is related to the WKB approximation but is based on different principles; it is thus named WKM, for WKB modified. WKM has been applied to study the variation of the fast D" structure beneath the Caribbean Sea and the plume beneath Africa. WKM is also applied to study PKP precursors, a very important seismic phase for modeling lower mantle heterogeneity.

By matching WKM synthetic seismograms with various data, we discovered and confirmed that (a) the D" beneath the Caribbean varies laterally, and the variation is best revealed with Scd+Sab beyond 88 degrees, where Scd overruns Sab; (b) the low velocity structure beneath Africa is about 1500 km in height and at least 1000 km in width, and features 3% reduced S velocity; it combines a relatively thin, low velocity layer (200 km thick or less) beneath the Atlantic with a very sharp rise into the mid mantle towards Africa; (c) at the edges of this huge African low velocity structure, ULVZs are found by modeling the large separation between S and ScS beyond 100 degrees. The ULVZ at the eastern boundary was discovered with SKPdS data and later confirmed by PKP precursor data. This is the first time that a ULVZ has been verified with distinct seismic phases.

Relevance: 100.00%

Abstract:

The Advanced LIGO and Virgo experiments are poised to detect gravitational waves (GWs) directly for the first time this decade. The ultimate prize will be joint observation of a compact binary merger in both gravitational and electromagnetic channels. However, GW sky locations that are uncertain by hundreds of square degrees will pose a challenge. I describe a real-time detection pipeline and a rapid Bayesian parameter estimation code that will make it possible to search promptly for optical counterparts in Advanced LIGO. Having analyzed a comprehensive population of simulated GW sources, we describe the sky localization accuracy that the GW detector network will achieve as each detector comes online and progresses toward design sensitivity. Next, in preparation for the optical search with the intermediate Palomar Transient Factory (iPTF), we have developed a unique capability to detect optical afterglows of gamma-ray bursts (GRBs) detected by the Fermi Gamma-ray Burst Monitor (GBM). Its comparable error regions offer a close parallel to the Advanced LIGO problem, but Fermi's unique access to MeV-GeV photons and its near all-sky coverage may allow us to look at optical afterglows in a relatively unexplored part of the GRB parameter space. We present the discovery and broadband follow-up observations (X-ray, UV, optical, millimeter, and radio) of eight GBM-iPTF afterglows. Two of the bursts (GRB 130702A / iPTF13bxl and GRB 140606B / iPTF14bfu) are at low redshift (z = 0.145 and z = 0.384, respectively), are sub-luminous with respect to "standard" cosmological bursts, and have spectroscopically confirmed broad-line type Ic supernovae. These two bursts are possibly consistent with mildly relativistic shocks breaking out from the progenitor envelopes rather than the standard mechanism of internal shocks within an ultra-relativistic jet. On a technical level, the GBM-iPTF effort is a prototype for locating and observing optical counterparts of GW events in Advanced LIGO with the Zwicky Transient Facility.

Relevance: 100.00%

Abstract:

In the field of mechanics, it is a long standing goal to measure quantum behavior in ever larger and more massive objects. It may now seem like an obvious conclusion, but until recently it was not clear whether a macroscopic mechanical resonator -- built up from nearly 10¹³ atoms -- could be fully described as an ideal quantum harmonic oscillator. With recent advances in the fields of opto- and electro-mechanics, such systems offer a unique advantage in probing the quantum noise properties of macroscopic electrical and mechanical devices, properties that ultimately stem from Heisenberg's uncertainty relations. Given the rapid progress in device capabilities, landmark results of quantum optics are now being extended into the regime of macroscopic mechanics.

The purpose of this dissertation is to describe three experiments -- motional sideband asymmetry, back-action evasion (BAE) detection, and mechanical squeezing -- that are directly related to the topic of measuring quantum noise with mechanical detection. These measurements all share three pertinent features: they explore quantum noise properties in a macroscopic electromechanical device driven by a minimum of two microwave drive tones, hence the title of this work: "Quantum electromechanics with two tone drive".

In the following, we will first introduce a quantum input-output framework that we use to model the electromechanical interaction and capture subtleties related to interpreting different microwave noise detection techniques. Next, we will discuss the fabrication and measurement details that we use to cool and probe these devices with coherent and incoherent microwave drive signals. Having developed our tools for signal modeling and detection, we explore the three-wave mixing interaction between the microwave and mechanical modes, whereby mechanical motion generates motional sidebands corresponding to up- and down-frequency conversions of microwave photons. Because of quantum vacuum noise, the rates of these processes are expected to be unequal. We will discuss the measurement and interpretation of this asymmetric motional noise in an electromechanical device cooled near the ground state of motion.

Next, we consider an overlapped two tone pump configuration that produces a time-modulated electromechanical interaction. By careful control of this drive field, we report a quantum non-demolition (QND) measurement of a single motional quadrature. Incorporating a second pair of drive tones, we directly measure the measurement back-action associated with both classical and quantum noise of the microwave cavity. Lastly, we slightly modify our drive scheme to generate quantum squeezing in a macroscopic mechanical resonator. Here, we will focus on data analysis techniques that we use to estimate the quadrature occupations. We incorporate Bayesian spectrum fitting and parameter estimation that serve as powerful tools for incorporating many known sources of measurement and fit error that are unavoidable in such work.
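
A minimal sketch of the spectrum-fitting step (the synthetic spectrum and the area-to-occupation calibration below are assumptions; a least-squares fit stands in here for the full Bayesian fit described above):

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed toy data: fit a Lorentzian sideband plus a flat noise floor to an output
# noise spectrum; the area under the Lorentzian serves as a proxy for the quadrature
# occupation, up to an assumed calibration constant.
rng = np.random.default_rng(2)

def model(f, area, f0, gamma, floor):
    """Lorentzian of total area `area`, center f0, FWHM gamma, on a flat noise floor."""
    return area * (gamma / (2 * np.pi)) / ((f - f0) ** 2 + (gamma / 2) ** 2) + floor

f = np.linspace(-5.0, 5.0, 2001)                     # Fourier frequency offset, kHz
true = (3.0, 0.0, 0.8, 1.0)                          # area, center, FWHM (kHz), noise floor
spectrum = model(f, *true) * rng.gamma(50, 1 / 50, f.size)   # multiplicative estimator noise

popt, pcov = curve_fit(model, f, spectrum, p0=(1.0, 0.0, 0.5, 0.5))
area, f0, gamma, floor = popt
print(f"fitted sideband area ~ {area:.2f} (true {true[0]}), linewidth ~ {gamma:.2f} kHz")
# With an assumed calibration, occupation n ~ area / (calibration constant).
```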