7 results for biased estimation

in CaltechTHESIS


Relevance:

20.00%

Publisher:

Abstract:

Be it a physical object or a mathematical model, a nonlinear dynamical system can display complicated aperiodic behavior, or "chaos." In many cases, this chaos is associated with motion on a strange attractor in the system's phase space. And the dimension of the strange attractor indicates the effective number of degrees of freedom in the dynamical system.

In this thesis, we investigate numerical issues involved with estimating the dimension of a strange attractor from a finite time series of measurements on the dynamical system.

Of the various definitions of dimension, we argue that the correlation dimension is the most efficiently calculable and we remark further that it is the most commonly calculated. We are concerned with the practical problems that arise in attempting to compute the correlation dimension. We deal with geometrical effects (due to the inexact self-similarity of the attractor), dynamical effects (due to the nonindependence of points generated by the dynamical system that defines the attractor), and statistical effects (due to the finite number of points that sample the attractor). We propose a modification of the standard algorithm, which eliminates a specific effect due to autocorrelation, and a new implementation of the correlation algorithm, which is computationally efficient.
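
As a concrete illustration of the pair-counting computation, here is a minimal sketch of the correlation integral with a temporal exclusion window that discards pairs of points close in time; the window mechanism follows standard practice for removing autocorrelation effects, and the toy data and parameter values are assumptions for illustration, not the thesis's.

```python
import numpy as np

def correlation_integral(x, r_values, w=10):
    """Correlation integral C(r), skipping pairs closer than w in time."""
    n = len(x)
    counts = np.zeros(len(r_values))
    n_pairs = 0
    for i in range(n):
        j = np.arange(i + w + 1, n)          # exclude temporally close pairs
        if j.size == 0:
            continue
        dists = np.linalg.norm(x[j] - x[i], axis=1)
        n_pairs += j.size
        counts += (dists[None, :] <= r_values[:, None]).sum(axis=1)
    return counts / n_pairs

# The correlation dimension is the slope of log C(r) vs. log r in the
# scaling region; uniform 2-D toy data should give a slope near 2.
rng = np.random.default_rng(0)
x = rng.random((2000, 2))
r_values = np.logspace(-2, -0.5, 10)
C = correlation_integral(x, r_values, w=10)
slope = np.polyfit(np.log(r_values), np.log(C), 1)[0]
print(f"estimated correlation dimension ~ {slope:.2f}")
```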

Finally, we apply the algorithm to chaotic data from the Caltech tokamak and the Texas tokamak (TEXT); we conclude that plasma turbulence is not a low-dimensional phenomenon.

Relevance:

20.00%

Publisher:

Abstract:

This thesis presents a novel framework for state estimation in the context of robotic grasping and manipulation. The overall estimation approach is based on fusing various visual cues for manipulator tracking, namely appearance- and feature-based, shape-based, and silhouette-based visual cues. Similarly, a framework is developed to fuse the above visual cues, as well as kinesthetic cues such as force-torque and tactile measurements, for in-hand object pose estimation. The cues are extracted from multiple sensor modalities and are fused in a variety of Kalman filters.
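
As a schematic of the cue-fusion step, the sketch below runs one linear Kalman measurement update per cue; the three-cue setup, direct-observation model, and noise levels are illustrative assumptions, not the thesis's actual filter design.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """One Kalman measurement update; called once per sensing cue."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Illustrative 3-D pose state, fused from three hypothetical cues
# (e.g. shape, silhouette, tactile), each with its own noise level.
x, P = np.zeros(3), 10.0 * np.eye(3)
H = np.eye(3)                                # each cue observes the state directly
truth = np.array([0.5, -0.2, 1.0])
rng = np.random.default_rng(1)
for sigma in (0.3, 0.5, 0.2):                # per-cue measurement noise
    z = truth + rng.normal(0.0, sigma, 3)
    x, P = kf_update(x, P, z, H, sigma**2 * np.eye(3))
print("fused estimate:", x)
```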

A hybrid estimator is developed to estimate both a continuous state (the robot and object states) and discrete states, called contact modes, which specify how each finger contacts a particular object surface. A static multiple model estimator is used to compute and maintain the mode probabilities. The thesis also develops an estimation framework for estimating model parameters associated with object grasping. Dual and joint state-parameter estimation are explored for estimating a grasped object's mass and center of mass. Experimental results demonstrate simultaneous object localization and center of mass estimation.
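
The mode-probability bookkeeping of a static multiple model estimator reduces to a Bayes update of a discrete distribution over modes; the contact modes, their predicted measurements, and the scalar force observation below are hypothetical.

```python
import numpy as np

def gaussian_pdf(z, mean, sigma):
    return np.exp(-0.5 * ((z - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Hypothetical contact modes, each predicting a different expected
# fingertip force measurement.
modes = {"no-contact": 0.0, "vertex-face": 1.0, "face-face": 2.0}
mu = {m: 1 / 3 for m in modes}               # uniform prior over modes

z, sigma = 1.1, 0.4                          # observed force (illustrative)
for m, predicted in modes.items():
    mu[m] *= gaussian_pdf(z, predicted, sigma)   # Bayes: prior x likelihood
total = sum(mu.values())
mu = {m: p / total for m, p in mu.items()}   # normalize
print(mu)   # posterior mode probabilities; 'vertex-face' dominates here
```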

Dual-arm estimation is developed for two-arm robotic manipulation tasks. Two types of filters are explored: the first is an augmented filter that contains both arms in the state vector, while the second runs two filters in parallel, one for each arm. The performance of the two frameworks is compared on a dual-arm task of removing a wheel from a hub.

This thesis also presents a new method for action selection involving touch. This next best touch method selects, from the available actions for interacting with an object, the one expected to gain the most information. The algorithm employs information theory to compute an information gain metric based on a probabilistic belief suitable for the task. An estimation framework is used to maintain this belief over time. Kinesthetic measurements such as contact and tactile measurements are used to update the state belief after every interactive action. Simulation and experimental results demonstrate next best touch for object localization, specifically of a door handle on a door; a sketch of the underlying information-gain computation follows below. The next best touch theory is then extended to model parameter determination. Since many objects within a particular object category share the same rough shape, principal component analysis may be used to parametrize the object mesh models. These parameters can be estimated using the action selection technique, which selects the touching action that best both localizes the object and estimates these parameters. Simulation results are then presented for localizing and determining a parameter of a screwdriver.
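
A minimal sketch of the information-gain computation behind next best touch, using a particle belief over a 1-D handle position and a binary contact/no-contact outcome model; the outcome model and geometry are assumptions for illustration.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(2)
particles = rng.uniform(0.0, 1.0, 500)       # candidate handle positions
weights = np.full(500, 1 / 500)              # uniform particle belief

def expected_info_gain(a, width=0.1):
    """Expected entropy reduction from touching at location a, under an
    assumed outcome model: contact iff the handle lies within `width`."""
    contact = np.abs(particles - a) < width
    p_contact = weights[contact].sum()
    gain = entropy(weights)
    for mask, p_o in ((contact, p_contact), (~contact, 1.0 - p_contact)):
        if p_o > 0:
            gain -= p_o * entropy(weights[mask] / p_o)  # posterior entropy
    return gain

actions = np.linspace(0.05, 0.95, 19)
best = max(actions, key=expected_info_gain)
print(f"most informative touch location: {best:.2f}")
```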

Lastly, the next best touch theory is further extended to model classes. Instead of estimating parameters, object class determination is incorporated into the information gain metric calculation. The best touching action is selected in order to best distinguish among the possible model classes. Simulation results are presented to validate the theory.

Relevance:

20.00%

Publisher:

Abstract:

Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security.

At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level.

In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations.

In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing-biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction.
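
A toy illustration of why biased noise rewards asymmetric protection: a phase-flip repetition code suppresses the dominant dephasing channel by majority vote while leaving the rare bit-flip channel unencoded. The Monte Carlo sketch below assumes a 100:1 bias for illustration; it is not one of the codes analyzed in the chapter.

```python
import numpy as np

rng = np.random.default_rng(3)

def logical_dephasing_rate(n, p_z, trials=200_000):
    """Monte Carlo logical error rate of an n-qubit phase-flip repetition
    code: decoding fails when a majority of qubits dephase."""
    errors = rng.random((trials, n)) < p_z
    return (errors.sum(axis=1) > n // 2).mean()

# Illustrative 100:1 bias: dephasing at 5%, bit flips at 0.05%. All of
# the code's redundancy targets the dominant Z channel; the rare X
# channel is left unencoded at rate p_x.
p_z, p_x = 0.05, 5e-4
for n in (1, 3, 5, 7):
    print(f"n = {n}: logical dephasing rate ~ {logical_dephasing_rate(n, p_z):.2e}")
```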

In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled states which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and how quickly states converge to that limit.
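
As a toy model of this interplay, one can iterate the standard 15-to-1 cubic error-suppression law with a fixed additive contribution from faulty Cliffords; this is a deliberately simplified stand-in for the protocols analyzed here.

```python
# Each round of 15-to-1 distillation maps an input magic-state error
# rate eps to roughly 35*eps**3, while faulty Clifford gates add a
# fixed floor eps_c per round (a simplifying assumption).
eps, eps_c = 0.05, 1e-7
for r in range(1, 6):
    eps = 35 * eps**3 + eps_c
    print(f"round {r}: output error rate ~ {eps:.3e}")
# eps converges not to zero but to a fixed point near eps_c: the
# achievable quality is set by the Clifford error rate, not by
# further distillation.
```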

Relevance:

20.00%

Publisher:

Abstract:

A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (fewer unknowns than equations), traditionally very few statistical estimation problems have considered a data model which is underdetermined (more unknowns than equations). In recent times, however, an explosion of theoretical and computational methods has been developed, primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that in spite of the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, etc., its information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.

In this thesis, we provide new directions for estimation in underdetermined systems, both for a class of parameter estimation problems and for the problem of sparse recovery in compressive sensing. The thesis makes two main contributions: the design of new sampling and statistical estimation algorithms for array processing, and the development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.

We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) as well as propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model such as correlation, higher order moments, etc.

Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation-aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework, provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
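
The core counting argument can be seen in a few lines: a coprime array's difference coarray has many more distinct lags than the array has physical sensors, which is what lets correlation-aware recovery identify more sources than sensors. The sketch below uses the standard coprime construction; the particular (M, N) values are chosen only for illustration.

```python
import numpy as np

# Standard coprime construction with coprime M = 3, N = 5: one subarray
# of M sensors at spacing N, another of 2N sensors at spacing M
# (positions in units of half a wavelength).
M, N = 3, 5
positions = sorted(set(N * np.arange(M)) | set(M * np.arange(2 * N)))
lags = {int(p - q) for p in positions for q in positions}
print(f"{len(positions)} physical sensors -> {len(lags)} distinct coarray lags")
```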

This new paradigm of underdetermined estimation that explicitly establishes the fundamental interplay between sampling, statistical priors and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.

Relevance:

20.00%

Publisher:

Abstract:

The following work explores the processes individuals utilize when making multi-attribute choices. With the exception of extremely simple or familiar choices, most decisions we face can be classified as multi-attribute choices. In order to evaluate and make choices in such an environment, we must be able to estimate and weight the particular attributes of an option. Hence, better understanding the mechanisms involved in this process is an important step for economists and psychologists. For example, when choosing between two meals that differ in taste and nutrition, what are the mechanisms that allow us to estimate and then weight attributes when constructing value? Furthermore, how can these mechanisms be influenced by variables such as attention or common physiological states, like hunger?

In order to investigate these and similar questions, we use a combination of choice and attentional data, where the attentional data was collected by recording eye movements as individuals made decisions. Chapter 1 designs and tests a neuroeconomic model of multi-attribute choice that makes predictions about choices, response time, and how these variables are correlated with attention. Chapter 2 applies the ideas in this model to intertemporal decision-making, and finds that attention causally affects discount rates. Chapter 3 explores how hunger, a common physiological state, alters the mechanisms we utilize as we make simple decisions about foods.
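
For readers unfamiliar with this class of models, the sketch below simulates choices and response times from a standard attentional drift-diffusion form, in which the unattended item's value is discounted by a factor theta; the functional form, parameter values, and fixation pattern are generic assumptions, not the specific model developed in Chapter 1.

```python
import numpy as np

rng = np.random.default_rng(4)

def addm_trial(v_left, v_right, d=0.002, theta=0.3, sigma=0.02):
    """One simulated choice: evidence drifts toward the fixated item,
    with the unattended item's value discounted by theta."""
    V, t, look_left = 0.0, 0, rng.random() < 0.5
    while abs(V) < 1.0:                      # decide when a barrier is hit
        drift = d * (v_left - theta * v_right) if look_left \
            else d * (theta * v_left - v_right)
        V += drift + rng.normal(0.0, sigma)
        t += 1
        if t % 400 == 0:                     # alternate gaze every 400 ms
            look_left = not look_left
    return V > 0, t

results = [addm_trial(3.0, 2.0) for _ in range(200)]
p_left = np.mean([left for left, _ in results])
mean_rt = np.mean([t for _, t in results])
print(f"P(choose left) ~ {p_left:.2f}, mean RT ~ {mean_rt:.0f} ms")
```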

Relevance:

20.00%

Publisher:

Abstract:

Since the discovery in 1962 of laser action in semiconductor diodes made from GaAs, the study of spontaneous and stimulated light emission from semiconductors has become an exciting new field of semiconductor physics and quantum electronics combined. Included in the limited number of direct-gap semiconductor materials suitable for laser action are the members of the lead salt family, i.e., PbS, PbSe, and PbTe. The material used for the experiments described herein is PbTe. The semiconductor PbTe is a narrow band-gap material (Eg = 0.19 electron volt at a temperature of 4.2°K). Therefore, the radiative recombination of electron-hole pairs between the conduction and valence bands produces photons whose wavelength is in the infrared (λ ≈ 6.5 microns in air).

The p-n junction diode is a convenient device in which the spontaneous and stimulated emission of light can be achieved via current flow in the forward-bias direction. Consequently, the experimental devices consist of a group of PbTe p-n junction diodes made from p-type single crystal bulk material. The p-n junctions were formed by an n-type vapor-phase diffusion perpendicular to the (100) plane, with a junction depth of approximately 75 microns. Opposite ends of the diode structure were cleaved to give parallel reflectors, thereby forming the Fabry-Perot cavity needed for a laser oscillator. Since the emission of light originates from the recombination of injected current carriers, the nature of the radiation depends on the injection mechanism.

The total intensity of the light emitted from the PbTe diodes was observed over a current range of three to four orders of magnitude. At the low current levels, the light intensity data were correlated with data obtained on the electrical characteristics of the diodes. In the low current region (region A), the light intensity, current-voltage and capacitance-voltage data are consistent with the model for photon-assisted tunneling. As the current is increased, the light intensity data indicate the occurrence of a change in the current injection mechanism from photon-assisted tunneling (region A) to thermionic emission (region B). With the further increase of the injection level, the photon-field due to light emission in the diode builds up to the point where stimulated emission (oscillation) occurs. The threshold current at which oscillation begins marks the beginning of a region (region C) where the total light intensity increases very rapidly with the increase in current. This rapid increase in intensity is accompanied by an increase in the number of narrow-band oscillating modes. As the photon density in the cavity continues to increase with the injection level, the intensity gradually enters a region of linear dependence on current (region D), i.e. a region of constant (differential) quantum efficiency.

Data obtained from measurements of the stimulated-mode light-intensity profile and the far-field diffraction pattern (both in the direction perpendicular to the junction-plane) indicate that the active region of high gain (i.e. the region where a population inversion exists) extends to approximately a diffusion length on both sides of the junction. The data also indicate that the confinement of the oscillating modes within the diode cavity is due to a variation in the real part of the dielectric constant, caused by the gain in the medium. A value of τ ≈ 10⁻⁹ second for the minority-carrier recombination lifetime (at a diode temperature of 20.4°K) is obtained from the above measurements. This value for τ is consistent with other data obtained independently for PbTe crystals.

Data on the threshold current for stimulated emission (for a diode temperature of 20.4°K) as a function of the reciprocal cavity length were obtained. These data yield a value of J′_th = (400 ± 80) amp/cm² for the threshold current in the limit of an infinitely long diode cavity. A value of α = (30 ± 15) cm⁻¹ is obtained for the total (bulk) cavity loss constant, in general agreement with independent measurements of free-carrier absorption in PbTe. In addition, the data provide a value of η_s ≈ 10% for the internal spontaneous quantum efficiency. The above value for η_s yields values of τ_b ≈ τ ≈ 10⁻⁹ second and τ_s ≈ 10⁻⁸ second for the nonradiative and the spontaneous (radiative) lifetimes, respectively.
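
The extrapolation behind J′_th is a straight-line fit of threshold current density against reciprocal cavity length, since J_th is linear in 1/L through the mirror-loss term; the data points in the sketch below are synthetic, for illustration only.

```python
import numpy as np

# J_th ∝ α + (1/L)·ln(1/R), so J_th is linear in 1/L and the intercept
# at 1/L -> 0 gives the infinite-cavity threshold J'_th.
L = np.array([0.02, 0.04, 0.06, 0.08])       # cavity lengths, cm (synthetic)
J_th = np.array([950, 680, 590, 540])        # thresholds, amp/cm² (synthetic)
slope, intercept = np.polyfit(1 / L, J_th, 1)
print(f"J'_th (1/L -> 0) ~ {intercept:.0f} amp/cm²")
```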

The external quantum efficiency (η_d) for stimulated emission from diode J-2 (at 20.4°K) was calculated by using the total light intensity vs. diode current data, plus accepted values for the material parameters of the mercury-doped germanium detector used for the measurements. The resulting value is η_d ≈ 10%–20% for emission from both ends of the cavity. The corresponding radiative power output (at λ = 6.5 microns) is 120–240 milliwatts for a diode current of 6 amps.

Relevance:

20.00%

Publisher:

Abstract:

Techniques are developed for estimating activity profiles in fixed bed reactors and catalyst deactivation parameters from operating reactor data. These techniques are applicable, in general, to most industrial catalytic processes. The catalytic reforming of naphthas is taken as a broad example to illustrate the estimation schemes and to signify the physical meaning of the kinetic parameters of the estimation equations. The work is described in two parts. Part I deals with the modeling of kinetic rate expressions and the derivation of the working equations for estimation. Part II concentrates on developing various estimation techniques.

Part I: The reactions used to describe naphtha reforming are dehydrogenation and dehydroisomerization of cycloparaffins; isomerization, dehydrocyclization and hydrocracking of paraffins; and the catalyst deactivation reactions, namely coking on alumina sites and sintering of platinum crystallites. The rate expressions for the above reactions are formulated, and the effects of transport limitations on the overall reaction rates are discussed in the appendices. Moreover, various types of interaction between the metallic and acidic active centers of reforming catalysts are discussed as characterizing the different types of reforming reactions.

Part II: In catalytic reactor operation, the activity distribution along the reactor determines the kinetics of the main reaction and is needed for predicting the effect of changes in the feed state and the operating conditions on the reactor output. In the case of a monofunctional catalyst and of bifunctional catalysts in limiting conditions, the cumulative activity is sufficient for predicting steady reactor output. The estimation of this cumulative activity can be carried out easily from measurements at the reactor exit. For a general bifunctional catalytic system, the detailed activity distribution is needed for describing the reactor operation, and some approximation must be made to obtain practicable estimation schemes. This is accomplished by parametrization techniques using measurements at a few points along the reactor. Such parametrization techniques are illustrated numerically with a simplified model of naphtha reforming.
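
A minimal sketch of the parametrization idea: approximate the activity profile along the reactor by a low-order polynomial fitted to estimates at a few axial measurement points. The positions, values, and polynomial order below are made up for illustration.

```python
import numpy as np

z_meas = np.array([0.1, 0.4, 0.7, 1.0])      # normalized axial positions
a_meas = np.array([0.95, 0.80, 0.62, 0.50])  # inferred local activities
coeffs = np.polyfit(z_meas, a_meas, 2)       # quadratic activity profile a(z)
z = np.linspace(0.0, 1.0, 5)
print(np.round(np.polyval(coeffs, z), 3))    # reconstructed profile along reactor
```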

To determine long term catalyst utilization and regeneration policies, it is necessary to estimate catalyst deactivation parameters from current operating data. For a first order deactivation model with a monofunctional catalyst, or with a bifunctional catalyst in special limiting circumstances, analytical techniques are presented to transform the partial differential equations into ordinary differential equations which admit more feasible estimation schemes. Numerical examples include the catalytic oxidation of butene to butadiene and a simplified model of naphtha reforming. For a general bifunctional system, or in the case of a monofunctional catalyst subject to general power-law deactivation, the estimation can only be accomplished approximately. The basic feature of an appropriate estimation scheme involves approximating the activity profile by certain polynomials and then estimating the deactivation parameters from the integrated form of the deactivation equation by regression techniques. Different bifunctional systems must be treated by different estimation algorithms, which are illustrated by several cases of naphtha reforming with different feed or catalyst compositions.
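
For the first-order case, the integrated deactivation equation is linear in time after taking logarithms, so the deactivation constant follows from a simple regression; the activity-versus-time values below are illustrative.

```python
import numpy as np

# First-order deactivation da/dt = -k_d * a integrates to
# ln a(t) = ln a0 - k_d * t, linear in t.
t = np.array([0.0, 50.0, 100.0, 150.0, 200.0])   # hours on stream
a = np.array([1.00, 0.78, 0.61, 0.47, 0.37])     # estimated activities
k_d = -np.polyfit(t, np.log(a), 1)[0]            # slope of the log-linear fit
print(f"estimated deactivation constant k_d ~ {k_d:.4f} per hour")
```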