5 results for robust parameter estimation

in CaltechTHESIS


Relevance:

90.00%

Publisher:

Abstract:

This thesis presents a novel framework for state estimation in the context of robotic grasping and manipulation. The overall estimation approach is based on fusing various visual cues for manipulator tracking, namely appearance- and feature-based, shape-based, and silhouette-based cues. Similarly, a framework is developed to fuse not only these visual cues but also kinesthetic cues, such as force-torque and tactile measurements, for in-hand object pose estimation. The cues are extracted from multiple sensor modalities and are fused in a variety of Kalman filters.
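
The cue-fusion idea can be sketched as a standard Kalman measurement update applied sequentially, one update per cue, each with its own noise covariance. This is a minimal illustration, not the thesis's actual filter; the cue names, noise levels, and the identity measurement model are all assumptions made for the example.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update for one cue."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy 2-D pose state; both cues observe the state directly (H = I).
x, P, H = np.zeros(2), np.eye(2), np.eye(2)
cues = {
    "shape":      (np.array([0.9, 1.1]), 0.5 * np.eye(2)),  # noisier cue
    "silhouette": (np.array([1.1, 0.9]), 0.2 * np.eye(2)),  # cleaner cue
}
for name, (z, R) in cues.items():
    x, P = kalman_update(x, P, z, H, R)
```

After both updates the covariance shrinks and the estimate sits between the two measurements, weighted toward the lower-noise cue.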

A hybrid estimator is developed to estimate both a continuous state (the robot and object states) and discrete states, called contact modes, which specify how each finger contacts a particular object surface. A static multiple model estimator is used to compute and maintain the mode probabilities. The thesis also develops an estimation framework for estimating model parameters associated with object grasping. Dual and joint state-parameter estimation are explored for estimating a grasped object's mass and center of mass. Experimental results demonstrate simultaneous object localization and center of mass estimation.
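
The static multiple model idea, maintaining a probability over discrete contact modes, reduces to a Bayes update of the mode probabilities with each measurement's per-mode likelihood. A minimal sketch with made-up mode labels and likelihood values:

```python
import numpy as np

def update_mode_probs(probs, likelihoods):
    """Bayes update: posterior mode probability is proportional to
    prior probability times the measurement likelihood under that mode."""
    post = probs * likelihoods
    return post / post.sum()

# Three hypothetical contact modes, e.g. no-contact / fingertip / flush
# contact (labels and likelihood values are illustrative).
probs = np.full(3, 1 / 3)
measurement_likelihoods = [
    np.array([0.6, 0.3, 0.1]),
    np.array([0.7, 0.2, 0.1]),
]
for lik in measurement_likelihoods:
    probs = update_mode_probs(probs, lik)
```

Two consistent measurements concentrate nearly all probability on the first mode.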

Dual-arm estimation is developed for two-arm robotic manipulation tasks. Two types of filters are explored: the first is an augmented filter that contains both arms in the state vector, while the second runs two filters in parallel, one for each arm. The two frameworks and their performance are compared in a dual-arm task of removing a wheel from a hub.

This thesis also presents a new method for action selection involving touch. This next best touch method selects the available action for interacting with an object that will gain the most information. The algorithm employs information theory to compute an information gain metric based on a probabilistic belief suitable for the task. An estimation framework is used to maintain this belief over time. Kinesthetic measurements, such as contact and tactile measurements, are used to update the state belief after every interactive action. Simulation and experimental results demonstrate next best touch for object localization, specifically a door handle on a door. The next best touch theory is then extended to model parameter determination. Since many objects within a particular object category share the same rough shape, principal component analysis may be used to parametrize the object mesh models. These parameters can be estimated using the action selection technique, which selects the touching action that best both localizes the object and estimates these parameters. Simulation results are presented for localizing a screwdriver and determining one of its parameters.
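
The information gain metric at the heart of next best touch can be illustrated as expected entropy reduction over a discrete belief: for each candidate touch, average the posterior entropy over its possible outcomes and subtract from the current entropy. The belief, outcome models, and numbers below are illustrative, not taken from the thesis.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def info_gain(belief, likelihood):
    """Expected entropy reduction for one candidate touch.
    likelihood[o, s] = P(outcome o | object state s)."""
    p_out = likelihood @ belief
    gain = entropy(belief)
    for o, po in enumerate(p_out):
        if po > 0:
            posterior = likelihood[o] * belief / po
            gain -= po * entropy(posterior)
    return gain

# Belief over three object poses; two candidate touches (values made up).
belief = np.array([0.5, 0.3, 0.2])
touch_a = np.array([[0.9, 0.1, 0.1],    # discriminates pose 0 from poses 1, 2
                    [0.1, 0.9, 0.9]])
touch_b = np.array([[0.5, 0.5, 0.5],    # outcome independent of pose
                    [0.5, 0.5, 0.5]])
best = max([touch_a, touch_b], key=lambda L: info_gain(belief, L))
```

touch_a is selected: its outcome distribution depends on the pose, so observing it reduces entropy, whereas touch_b tells us nothing.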

Lastly, the next best touch theory is further extended to model classes. Instead of estimating parameters, object class determination is incorporated into the information gain metric, and the touching action is selected to best discriminate among the possible model classes. Simulation results are presented to validate the theory.

Relevance:

90.00%

Publisher:

Abstract:

A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (fewer unknowns than equations), traditionally very few statistical estimation problems have considered a data model which is underdetermined (more unknowns than equations). In recent times, however, an explosion of theoretical and computational methods has been developed to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that, in spite of the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, etc., its information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.

In this thesis, we provide new directions for estimation in underdetermined systems, both for a class of parameter estimation problems and for the problem of sparse recovery in compressive sensing. The thesis makes two main contributions: the design of new sampling and statistical estimation algorithms for array processing, and the development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.

We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) as well as propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model such as correlation, higher order moments, etc.
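
As a concrete illustration of how a sparse geometry identifies more sources than sensors, a two-level nested array of six physical sensors produces a contiguous difference coarray of 23 virtual lags, so second-order statistics behave as if far more sensors were present. A minimal sketch (the sensor counts are arbitrary):

```python
import numpy as np

def nested_array(n1, n2):
    """Two-level nested array: a dense ULA of n1 sensors at unit spacing
    plus a sparse ULA of n2 sensors at spacing (n1 + 1)."""
    dense = np.arange(1, n1 + 1)
    sparse = (n1 + 1) * np.arange(1, n2 + 1)
    return np.concatenate([dense, sparse])

def difference_coarray(pos):
    """All pairwise differences (virtual lags) of the sensor positions."""
    return np.unique([p - q for p in pos for q in pos])

pos = nested_array(3, 3)        # 6 physical sensors at {1, 2, 3, 4, 8, 12}
lags = difference_coarray(pos)  # 23 contiguous virtual lags: -11 .. 11
```

The contiguous coarray is what lets correlation-based methods resolve more sources than physical sensors.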

Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation-aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework, provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
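
The correlation-aware idea can be sketched as follows: for uncorrelated sources, vectorizing the covariance turns m measurements into m^2 linear equations in the source powers (a Khatri-Rao structure), so a nonnegative sparse power vector with more active entries than physical measurements can still be recovered. This toy example uses a random matrix and a noiseless covariance rather than an actual array manifold:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
m, n, k = 4, 9, 6               # 6 active sources but only 4 measurements
A = rng.standard_normal((m, n))
p = np.zeros(n)                 # sparse nonnegative source powers
p[rng.choice(n, size=k, replace=False)] = rng.uniform(1, 2, size=k)
R = A @ np.diag(p) @ A.T        # ideal (noiseless) covariance

# Vectorizing R gives m^2 equations in the n source powers:
# vec(R) = (A kr A) p, where "kr" is the column-wise Kronecker product.
KR = np.column_stack([np.kron(A[:, j], A[:, j]) for j in range(n)])
p_hat, _ = nnls(KR, R.flatten(order="F"))
```

With k = 6 > m = 4, the raw measurements alone could not identify the support; the lifted system recovers the powers exactly here. The thesis's guarantees concern when such recovery is possible for designed (nested/coprime) measurement matrices.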

This new paradigm of underdetermined estimation, which explicitly establishes the fundamental interplay between sampling, statistical priors, and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.

Relevance:

80.00%

Publisher:

Abstract:

The search for reliable proxies of past deep ocean temperature and salinity has proved difficult, thereby limiting our ability to understand the coupling of ocean circulation and climate over glacial-interglacial timescales. Previous inferences of deep ocean temperature and salinity from sediment pore fluid oxygen isotopes and chlorinity indicate that the deep ocean density structure at the Last Glacial Maximum (LGM, approximately 20,000 years BP) was set by salinity, and that the density contrast between northern and southern sourced deep waters was markedly greater than in the modern ocean. High density stratification could help explain the marked contrast in carbon isotope distribution recorded in the LGM ocean relative to what we observe today, but what made the ocean's density structure so different at the LGM? How did it evolve from one state to another? Further, given the sparsity of the LGM temperature and salinity data set, what else can we learn by increasing the spatial density of proxy records?

We investigate the cause and feasibility of a highly salinity-stratified deep ocean at the LGM, and we work to increase the amount of information we can glean about the past ocean from pore fluid profiles of oxygen isotopes and chloride. Using a coupled ocean-sea ice-ice shelf cavity model, we test whether the deep ocean density structure at the LGM can be explained by ice-ocean interactions over the Antarctic continental shelves, and show that a large contribution of the LGM salinity stratification can be explained through lower ocean temperature. To extract the maximum information from pore fluid profiles of oxygen isotopes and chloride, we evaluate several inverse methods for ill-posed problems and their ability to recover bottom water histories from sediment pore fluid profiles. We demonstrate that Bayesian Markov chain Monte Carlo parameter estimation techniques enable us to robustly recover the full solution space of bottom water histories, not only at the LGM but through the most recent deglaciation and the Holocene up to the present. Finally, we evaluate a non-destructive pore fluid sampling technique, Rhizon samplers, against traditional squeezing methods and show that, despite their promise, Rhizons are unlikely to be a good sampling tool for pore fluid measurements of oxygen isotopes and chloride.
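
The Bayesian MCMC approach to recovering bottom water histories can be sketched with a toy Metropolis-Hastings sampler: a simplified forward model maps a single past bottom-water value to a synthetic pore fluid profile, and the sampler recovers its posterior. The exponential forward model, noise level, and flat prior here are placeholders for the full advection-diffusion problem:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(theta, z):
    """Toy forward model: the pore fluid value relaxes with depth z from
    the modern bottom-water value (0) toward a past value theta. This
    stands in for the full advection-diffusion model."""
    return theta * (1.0 - np.exp(-z / 30.0))

z = np.linspace(0.0, 60.0, 15)          # sample depths
theta_true = 1.0
sigma = 0.05                            # assumed measurement noise level
data = forward(theta_true, z) + sigma * rng.standard_normal(z.size)

def log_post(theta):
    """Flat prior, Gaussian likelihood."""
    r = data - forward(theta, z)
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis-Hastings over the past bottom-water value.
samples, theta, lp = [], 0.0, log_post(0.0)
for _ in range(5000):
    prop = theta + 0.1 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[1000:])          # discard burn-in
```

The retained samples characterize the full posterior, mean and spread alike, which is what makes the approach robust for ill-posed histories.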

Relevance:

80.00%

Publisher:

Abstract:

The Advanced LIGO and Virgo experiments are poised to detect gravitational waves (GWs) directly for the first time this decade. The ultimate prize will be the joint observation of a compact binary merger in both gravitational and electromagnetic channels. However, GW sky locations that are uncertain by hundreds of square degrees will pose a challenge. I describe a real-time detection pipeline and a rapid Bayesian parameter estimation code that will make it possible to search promptly for optical counterparts in Advanced LIGO. Having analyzed a comprehensive population of simulated GW sources, we describe the sky localization accuracy that the GW detector network will achieve as each detector comes online and progresses toward design sensitivity. Next, in preparation for the optical search with the intermediate Palomar Transient Factory (iPTF), we have developed a unique capability to detect optical afterglows of gamma-ray bursts (GRBs) detected by the Fermi Gamma-ray Burst Monitor (GBM). The GBM's comparably coarse error regions offer a close parallel to the Advanced LIGO problem, but Fermi's unique access to MeV-GeV photons and its near all-sky coverage may allow us to look at optical afterglows in a relatively unexplored part of the GRB parameter space. We present the discovery and broadband follow-up observations (X-ray, UV, optical, millimeter, and radio) of eight GBM-iPTF afterglows. Two of the bursts (GRB 130702A / iPTF13bxl and GRB 140606B / iPTF14bfu) are at low redshift (z = 0.145 and z = 0.384, respectively), are sub-luminous with respect to "standard" cosmological bursts, and have spectroscopically confirmed broad-line Type Ic supernovae. These two bursts are possibly consistent with mildly relativistic shocks breaking out from the progenitor envelopes, rather than with the standard mechanism of internal shocks within an ultra-relativistic jet.
On a technical level, the GBM-iPTF effort is a prototype for locating and observing optical counterparts of GW events in Advanced LIGO with the Zwicky Transient Facility.

Relevance:

80.00%

Publisher:

Abstract:

In the field of mechanics, it is a long-standing goal to measure quantum behavior in ever larger and more massive objects. It may now seem like an obvious conclusion, but until recently it was not clear whether a macroscopic mechanical resonator, built up from nearly 10^13 atoms, could be fully described as an ideal quantum harmonic oscillator. With recent advances in the fields of opto- and electro-mechanics, such systems offer a unique advantage in probing the quantum noise properties of macroscopic electrical and mechanical devices, properties that ultimately stem from Heisenberg's uncertainty relations. Given the rapid progress in device capabilities, landmark results of quantum optics are now being extended into the regime of macroscopic mechanics.

The purpose of this dissertation is to describe three experiments, motional sideband asymmetry, back-action evasion (BAE) detection, and mechanical squeezing, that are directly related to the topic of measuring quantum noise with mechanical detection. These measurements share three pertinent features: each explores quantum noise properties, in a macroscopic electromechanical device, driven by a minimum of two microwave drive tones; hence the title of this work, "Quantum electromechanics with two tone drive".

In the following, we first introduce a quantum input-output framework that we use to model the electromechanical interaction and to capture subtleties related to interpreting different microwave noise detection techniques. Next, we discuss the fabrication and measurement details that allow us to cool and probe these devices with coherent and incoherent microwave drive signals. Having developed our tools for signal modeling and detection, we explore the three-wave mixing interaction between the microwave and mechanical modes, whereby mechanical motion generates motional sidebands corresponding to up- and down-conversion of microwave photons. Because of quantum vacuum noise, the rates of these two processes are expected to be unequal. We discuss the measurement and interpretation of this asymmetric motional noise in an electromechanical device cooled near its motional ground state.
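
The expected asymmetry follows from the two sideband rates scaling as n and n + 1 with the thermal occupancy n: their difference is fixed by vacuum noise, and their ratio directly encodes n. A minimal numeric sketch (rates in arbitrary units, which sideband carries the n + 1 factor depending on convention):

```python
def sideband_rates(n):
    """Motional sideband rates in units of the measurement rate:
    phonon absorption scales as n, phonon emission as n + 1."""
    return n, n + 1

def occupancy_from_asymmetry(r_absorb, r_emit):
    """Invert the measured sideband rates to get the thermal occupancy."""
    return r_absorb / (r_emit - r_absorb)

# A device cooled to n = 0.5 quanta shows clearly unequal sidebands.
r_abs, r_em = sideband_rates(0.5)
n_inferred = occupancy_from_asymmetry(r_abs, r_em)
```

Near the ground state the ratio departs strongly from unity, which is why the asymmetry is a calibration-free thermometer.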

Next, we consider an overlapped two tone pump configuration that produces a time-modulated electromechanical interaction. By careful control of this drive field, we report a quantum non-demolition (QND) measurement of a single motional quadrature. Incorporating a second pair of drive tones, we directly measure the back-action associated with both the classical and the quantum noise of the microwave cavity. Lastly, we slightly modify our drive scheme to generate quantum squeezing in a macroscopic mechanical resonator. Here we focus on the data analysis techniques used to estimate the quadrature occupations. We employ Bayesian spectrum fitting and parameter estimation, which serve as powerful tools for accounting for the many known sources of measurement and fit error that are unavoidable in such work.
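
The spectrum-fitting step can be illustrated with a toy Bayesian fit: an occupancy is proportional to the area under a Lorentzian noise peak, so a grid posterior over that area, given a simulated spectrum, yields the estimate and its uncertainty. All model parameters below (linewidth, noise floor, noise level) are illustrative, not values from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(2)

def lorentzian(f, area, f0=0.0, gamma=1.0):
    """Lorentzian spectral line with the given integrated area."""
    return area * (gamma / (2 * np.pi)) / ((f - f0) ** 2 + (gamma / 2) ** 2)

# Simulated measured spectrum: peak + flat noise floor + measurement noise.
f = np.linspace(-5.0, 5.0, 200)
true_area, floor, sigma = 2.0, 0.1, 0.01
psd = lorentzian(f, true_area) + floor + sigma * rng.standard_normal(f.size)

# Grid posterior over the area (flat prior, Gaussian residuals); the
# quadrature occupation would be proportional to this area.
areas = np.linspace(0.5, 4.0, 400)
loglik = np.array([-0.5 * np.sum(((psd - lorentzian(f, a) - floor) / sigma) ** 2)
                   for a in areas])
post = np.exp(loglik - loglik.max())
post /= post.sum()
area_mean = float(np.sum(areas * post))
area_std = float(np.sqrt(np.sum((areas - area_mean) ** 2 * post)))
```

The posterior width propagates the spectral noise into an honest error bar on the occupancy, the role Bayesian fitting plays in the squeezing analysis.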