13 results for direct behavioural manipulation

in CaltechTHESIS


Relevance: 20.00%

Abstract:

This thesis presents a novel framework for state estimation in the context of robotic grasping and manipulation. The overall estimation approach is based on fusing multiple visual cues for manipulator tracking, namely appearance- and feature-based, shape-based, and silhouette-based cues. A companion framework fuses not only these visual cues but also kinesthetic cues, such as force-torque and tactile measurements, for in-hand object pose estimation. The cues are extracted from multiple sensor modalities and fused in a variety of Kalman filters.
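As a minimal sketch of how heterogeneous cues can enter a single estimator (a generic textbook Kalman update, not the thesis's specific filters; all names are hypothetical), each cue contributes its own measurement model and the corrections are applied sequentially:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update for one cue."""
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)           # corrected state
    P = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x, P

# Fuse several cues (appearance, shape, silhouette, tactile, ...) by applying
# one update per available measurement; each cue has its own H and R.
def fuse_cues(x, P, measurements):
    for z, H, R in measurements:
        x, P = kalman_update(x, P, z, H, R)
    return x, P
```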

A hybrid estimator is developed to estimate both continuous states (robot and object states) and discrete states, called contact modes, which specify how each finger contacts a particular object surface. A static multiple-model estimator is used to compute and maintain the mode probabilities. The thesis also develops an estimation framework for model parameters associated with object grasping. Dual and joint state-parameter estimation are explored for estimating a grasped object's mass and center of mass. Experimental results demonstrate simultaneous object localization and center-of-mass estimation.
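A static multiple-model estimator maintains the contact-mode belief by running one filter per hypothesized mode and reweighting by measurement likelihood. A minimal sketch of that reweighting (generic form; names and the four-mode example are hypothetical):

```python
import numpy as np

def update_mode_probabilities(mu, likelihoods):
    """Bayesian reweighting of the discrete contact-mode belief.

    mu[i]          prior probability of contact mode i
    likelihoods[i] likelihood of the current measurement under mode i's filter
    """
    mu = mu * likelihoods    # elementwise Bayes numerator
    return mu / mu.sum()     # normalize back to a probability vector

mu = np.array([0.25, 0.25, 0.25, 0.25])  # e.g. four finger/surface contact modes
print(update_mode_probabilities(mu, np.array([0.9, 0.1, 0.3, 0.05])))
```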

Dual-arm estimation is developed for two-arm robotic manipulation tasks. Two types of filters are explored: the first is an augmented filter that contains both arms in the state vector, while the second runs two filters in parallel, one for each arm. The two frameworks and their performance are compared in a dual-arm task of removing a wheel from a hub.

This thesis also presents a new method for action selection involving touch. This next best touch method selects, from the available actions for interacting with an object, the one that will gain the most information. The algorithm employs information theory to compute an information-gain metric based on a probabilistic belief suited to the task. An estimation framework maintains this belief over time, and kinesthetic measurements such as contact and tactile measurements update the state belief after every interactive action. Simulation and experimental results demonstrate next best touch for object localization, specifically of a door handle on a door. The next best touch theory is then extended to model parameter determination. Since many objects within a particular object category share the same rough shape, principal component analysis may be used to parametrize the object mesh models. These parameters can be estimated using the action selection technique, which chooses the touching action that best localizes the object and estimates these parameters. Simulation results are presented for localizing a screwdriver and determining one of its parameters.
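The action-selection rule can be sketched as a greedy maximization of expected entropy reduction over a discrete belief (a generic information-gain formulation consistent with the description above; the names, binary outcomes, and discretization are illustrative assumptions):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def next_best_touch(belief, actions, likelihood):
    """Pick the touch action with the largest expected information gain.

    belief[s]            current probability of state hypothesis s
    likelihood(a, s, o)  P(observation o | state s, action a)
    Each action is scored by H(belief) - E_o[ H(posterior | o) ].
    """
    best, best_gain = None, -np.inf
    for a in actions:
        gain = entropy(belief)
        for o in (0, 1):  # e.g. contact / no contact
            joint = np.array([likelihood(a, s, o) * belief[s]
                              for s in range(len(belief))])
            p_o = joint.sum()
            if p_o > 0:
                gain -= p_o * entropy(joint / p_o)
        if gain > best_gain:
            best, best_gain = a, gain
    return best
```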

Lastly, the next best touch theory is further extended to model classes. Instead of estimating parameters, object class determination is incorporated into the information-gain calculation, and the touching action that best discriminates among the candidate model classes is selected. Simulation results validate the theory.

Relevance: 20.00%

Abstract:

We perform a measurement of direct CP violation in $b \to s\gamma$, $A_{CP}$, and of the difference between $A_{CP}$ for neutral and charged $B$ mesons, $\Delta A_{X_s\gamma}$, using $429~\mathrm{fb}^{-1}$ of data recorded at the $\Upsilon(4S)$ resonance with the BABAR detector. $B$ mesons are reconstructed from 16 exclusive final states. Particle identification uses an algorithm based on Error Correcting Output Codes with an exhaustive matrix; background rejection and best-candidate selection use two decision-tree-based classifiers. We find $A_{CP} = (1.73 \pm 1.93 \pm 1.02)\%$ and $\Delta A_{X_s\gamma} = (4.97 \pm 3.90 \pm 1.45)\%$, where the uncertainties are statistical and systematic, respectively. Based on the measured value of $\Delta A_{X_s\gamma}$, we determine a 90% confidence interval for $\mathrm{Im}(C_{8g}/C_{7\gamma})$, where $C_{7\gamma}$ and $C_{8g}$ are Wilson coefficients for New Physics amplitudes: $-1.64 < \mathrm{Im}(C_{8g}/C_{7\gamma}) < 6.52$.
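For context, the asymmetries follow the standard definitions (quoted here in a conventional form, up to sign conventions, rather than verbatim from the thesis):

```latex
A_{CP} = \frac{\Gamma(\bar{B} \to X_{\bar{s}}\gamma) - \Gamma(B \to X_s\gamma)}
              {\Gamma(\bar{B} \to X_{\bar{s}}\gamma) + \Gamma(B \to X_s\gamma)},
\qquad
\Delta A_{X_s\gamma} = A_{CP}(B^{\pm}) - A_{CP}(B^{0}).
```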

Relevance: 20.00%

Abstract:

The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first-order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are studied here in detail, up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps; the computation itself has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and for this process it can be carried out with the aid of the Reduce computer algebra program.

The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at threshold annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.

Using the lowest-order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed, along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
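A minimal illustration of this strategy (using modern SymPy in place of Reduce; the variable names are hypothetical): replace each propagator denominator with a redundant symbol so that intermediate algebra stays purely polynomial.

```python
import sympy as sp

p2, m, D = sp.symbols('p2 m D')  # D is a redundant symbol standing for 1/(p2 - m**2)

# Direct rational treatment: denominators recombine and expressions swell.
rational = 1/(p2 - m**2) + (p2 + m**2)/(p2 - m**2)**2
print(sp.together(rational))  # forces a common denominator

# Polynomial treatment: manipulate a pure polynomial in (p2, m, D).
poly = D + (p2 + m**2)*D**2
expanded = sp.expand(poly)                   # cheap polynomial operations only
restored = expanded.subs(D, 1/(p2 - m**2))   # reinstate the denominator at the end
print(sp.simplify(restored - rational))      # 0: the two routes agree
```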

Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.

A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra-manipulation language.

The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods used to evaluate them, primarily dispersion techniques, are briefly discussed.

Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first-order correction to the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.

Relevance: 20.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational-actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and the more ambitiously we extend the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining precision and descriptive power. Increased psychological realism, however, comes at the cost of more parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We first look at evidence from controlled laboratory experiments, in which subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices; theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests, which imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the most informative test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
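A minimal sketch of the EC2 scoring at the heart of such a greedy step (a generic implementation of equivalence-class edge cutting with hypothetical names; the real design also handles noisy responses):

```python
import numpy as np
from itertools import combinations

def ec2_score(prior, classes, outcomes):
    """Expected weight of equivalence-class edges cut by one candidate test.

    prior[h]    probability of hypothesis h (one parametrized theory)
    classes[h]  equivalence class of h (e.g. which theory family it belongs to)
    outcomes    list of boolean arrays; outcomes[o][h] is True when hypothesis
                h is consistent with the test producing outcome o
    Edges join hypotheses in different classes with weight p(h)p(h'); an
    outcome cuts every edge that has at least one inconsistent endpoint.
    """
    H = len(prior)
    edges = [(i, j) for i, j in combinations(range(H), 2)
             if classes[i] != classes[j]]
    score = 0.0
    for cons in outcomes:
        cons = np.asarray(cons)
        p_o = prior[cons].sum()                # P(outcome) under the prior
        cut = sum(prior[i] * prior[j]
                  for i, j in edges if not (cons[i] and cons[j]))
        score += p_o * cut
    return score

# Greedy step: score every candidate choice test with ec2_score, present the
# test with the highest score, then update the prior on the observed response.
```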

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the chosen lotteries. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice, and because we find no signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting; hyperbolic discounting; the "present bias" models, namely quasi-hyperbolic (α, β) discounting and fixed-cost discounting; and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for the present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
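The candidate discount functions have simple closed forms; a sketch using standard parametrizations from the literature (parameter names follow the common β-δ convention rather than the abstract's (α, β) labels):

```python
import numpy as np

def exponential(t, delta):
    return delta ** t

def hyperbolic(t, k):
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta, delta):
    # "present bias": full weight now, a uniform extra penalty beta on all delays
    return np.where(t == 0, 1.0, beta * delta ** t)

def generalized_hyperbolic(t, alpha, beta):
    return (1.0 + alpha * t) ** (-beta / alpha)

# The discounted value of a payoff x at delay t is x * d(t); the theories
# differ only in the shape of d, which is what the adaptive choice tests probe.
```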

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild", paying particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity explains; more importantly, when the item is no longer discounted, demand for its close substitutes will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
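A sketch of the kind of loss-averse utility that can enter such a discrete choice model (a standard prospect-theory value function inside a logit choice rule; the parameter values and names are illustrative assumptions, not the paper's estimates):

```python
import numpy as np

def value(x, lam=2.25, alpha=0.88):
    """Prospect-theory value: gains and losses measured against a reference."""
    ax = np.abs(x) ** alpha
    return np.where(x >= 0, ax, -lam * ax)  # losses loom larger by factor lam

def choice_probabilities(prices, reference_prices, base_utility, temp=1.0):
    """Logit demand with reference-dependent price utility.

    A price below its reference registers as a gain, above it as a loss, so
    ending a discount depresses demand more than price elasticity alone
    predicts and pushes consumers toward close substitutes.
    """
    u = base_utility + value(reference_prices - prices)
    expu = np.exp(u / temp)
    return expu / expu.sum()
```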

In future work, BROAD should be widely applicable for testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 20.00%

Abstract:

Modern robots are increasingly expected to function in uncertain and dynamically challenging environments, often in proximity to humans. In addition, wide-scale adoption of robots requires on-the-fly adaptability of software for diverse applications. These requirements strongly suggest the need for formal representations of high-level goals and safety specifications, especially as temporal logic formulas, since this allows the use of formal verification techniques for controller synthesis that can give guarantees on safety and performance. Robots operating in unstructured environments also face limited sensing capability, and correctly inferring a robot's progress toward a high-level goal can be challenging.

This thesis develops new algorithms for synthesizing discrete controllers in partially known environments under specifications represented as linear temporal logic (LTL) formulas. It is inspired by recent developments in finite-abstraction techniques for hybrid systems and motion planning problems. The robot and its environment are assumed to admit a finite abstraction as a Partially Observable Markov Decision Process (POMDP), a powerful model class capable of representing a wide variety of problems. However, synthesizing controllers that satisfy LTL goals over POMDPs is a challenging problem that has received only limited attention.

This thesis proposes tractable, approximate algorithms for the control synthesis problem using Finite State Controllers (FSCs). Using FSCs to control finite POMDPs allows the closed system to be analyzed as a finite global Markov chain. The thesis explicitly shows how the transient and steady-state behavior of the global Markov chain can be related to two different criteria for the satisfaction of LTL formulas. First, the maximization of the probability of LTL satisfaction is related to an optimization problem over a parametrization of the FSC; analytic gradients are derived, which allows the use of first-order optimization techniques.
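To make the closed-loop construction concrete, here is a minimal sketch (hypothetical array names; one common FSC convention is assumed, in which the controller draws an action from its current node and updates the node on the received observation):

```python
import numpy as np

def global_chain(T, O, pi, omega):
    """Transition matrix of the Markov chain induced by an FSC on a POMDP.

    T[s, a, s']     POMDP state transitions P(s' | s, a)
    O[s', a, o]     observation model P(o | s', a)
    pi[g, a]        FSC action distribution P(a | node g)
    omega[g, o, g'] FSC node update P(g' | g, o)
    The global state is the pair (s, g).
    """
    nS, nA, _ = T.shape
    nG = pi.shape[0]
    nO = O.shape[2]
    P = np.zeros((nS * nG, nS * nG))
    for s in range(nS):
        for g in range(nG):
            for a in range(nA):
                for s2 in range(nS):
                    for o in range(nO):
                        for g2 in range(nG):
                            P[s * nG + g, s2 * nG + g2] += (
                                pi[g, a] * T[s, a, s2] * O[s2, a, o] * omega[g, o, g2]
                            )
    return P  # rows sum to 1; transient/steady-state analysis applies directly
```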

The second criterion encourages rapid and frequent visits to a restricted set of states over infinite executions. It is formulated as a constrained optimization problem with a discounted long-term reward objective through the novel use of a fundamental equation for Markov chains, the Poisson equation. A new constrained policy iteration technique is proposed to solve the resulting dynamic program, which also provides a way to escape local maxima.
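For reference, the (average-reward) Poisson equation for an ergodic chain with transition matrix $P$, reward vector $r$, stationary distribution $\pi$, and gain $\eta$ reads:

```latex
(I - P)\, h = r - \eta \mathbf{1}, \qquad \eta = \pi^{\top} r,
```

where $h$ is the bias (relative value) vector, determined up to an additive constant.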

The algorithms proposed in the thesis are applied to the task planning and execution challenges faced during the DARPA Autonomous Robotic Manipulation Software (ARM-S) challenge.

Relevance: 20.00%

Abstract:

Accurate simulation of quantum dynamics in complex systems poses a fundamental theoretical challenge with immediate application to problems in biological catalysis, charge transfer, and solar energy conversion. The varied length- and timescales that characterize these kinds of processes necessitate development of novel simulation methodology that can both accurately evolve the coupled quantum and classical degrees of freedom and also be easily applicable to large, complex systems. In the following dissertation, the problems of quantum dynamics in complex systems are explored through direct simulation using path-integral methods as well as application of state-of-the-art analytical rate theories.

Relevance: 20.00%

Abstract:

Proton-coupled electron transfer (PCET) reactions are ubiquitous throughout chemistry and biology. However, challenges arise in both the experimental and theoretical investigation of PCET reactions: the rare-event nature of the reactions and the coupling of quantum mechanical electron and proton transfer to the slower classical dynamics of the surrounding environment necessitate the development of robust simulation methodology. In the following dissertation, novel path-integral methods are developed and employed for the direct simulation of the reaction dynamics and mechanisms of condensed-phase PCET.

Relevance: 20.00%

Abstract:

This dissertation reformulates and streamlines the core tools of robustness analysis for linear time-invariant systems using now-standard methods in convex optimization. In particular, robust performance analysis can be formulated as a primal convex optimization problem, a semidefinite program built on a semidefinite representation of a set of Gramians. The same approach, combined with semidefinite programming duality, yields a linear matrix inequality test for well-connectedness analysis, and many existing results, such as the Kalman-Yakubovich-Popov lemma and various scaled small-gain tests, are derived in an elegant fashion. More importantly, unlike in the classical approach, a decision variable in this optimization framework contains all inner products of signals in the system, and an algorithm is presented that uses this information to construct an input and state pair of the system corresponding to the optimal solution of the robustness optimization. This insight may open new research directions; as one example, the dissertation proposes a semidefinite programming relaxation of a cardinality-constrained variant of the H∞ norm, termed sparse H∞ analysis, in which an adversarial disturbance can use only a limited number of channels. Finally, sparse H∞ analysis is applied to linearized swing dynamics in order to detect potentially vulnerable spots in power networks.
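As a minimal sketch of the SDP viewpoint (using CVXPY; the bounded-real/KYP formulation below is a standard textbook one rather than the dissertation's exact primal program, and all matrices are placeholders):

```python
import numpy as np
import cvxpy as cp

def hinf_lmi_feasible(A, B, C, D, gamma, eps=1e-7):
    """Bounded-real lemma: ||C(sI-A)^{-1}B + D||_inf < gamma iff the LMI
    below is feasible for some P > 0 (continuous time, A Hurwitz)."""
    n, m = B.shape
    P = cp.Variable((n, n), symmetric=True)
    M = cp.bmat([
        [A.T @ P + P @ A + C.T @ C, P @ B + C.T @ D],
        [B.T @ P + D.T @ C,         D.T @ D - gamma**2 * np.eye(m)],
    ])
    prob = cp.Problem(cp.Minimize(0),
                      [P >> eps * np.eye(n), M << -eps * np.eye(n + m)])
    prob.solve(solver=cp.SCS)
    return prob.status == cp.OPTIMAL

def hinf_norm(A, B, C, D, lo=0.0, hi=1e3, tol=1e-4):
    """Bisection on gamma reduces the H-infinity norm to a sequence of SDPs."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if hinf_lmi_feasible(A, B, C, D, mid) else (mid, hi)
    return hi
```

The sparse variant described above would additionally restrict which disturbance channels may be active, turning the relaxation into a cardinality-constrained version of the same program.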

Relevance: 20.00%

Abstract:

This report presents the results of an investigation of a method of underwater propulsion. The propelling system utilizes the energy of a small mass of expanding gas to accelerate the flow of a large mass of water through an open-ended duct of proper shape and dimensions to obtain a resultant thrust. The investigation was limited to making a large number of runs on a hydroduct of arbitrary design, varying the water flow and gas flow through the device between wide limits, and measuring the net thrust caused by the introduction and expansion of the gas.

In comparison with the effective exhaust velocity of about 6,000 feet per second observed in rocket motors, this hydroduct model attained a maximum effective exhaust velocity of more than 27,000 feet per second using nitrogen gas. Using hydrogen gas, effective exhaust velocities of 146,000 feet per second were obtained. Further investigation should prove this method of propulsion to be not only practical but very efficient.
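The effective exhaust velocity quoted here is thrust per unit rate of propellant (gas) mass flow; a minimal check of the arithmetic (illustrative numbers only, not taken from the report):

```python
G0 = 32.174  # ft/s^2, standard gravity, for lbf/lbm bookkeeping

def effective_exhaust_velocity(thrust_lbf, gas_flow_lbm_per_s):
    """c = F / mdot, in ft/s.

    Only the injected gas counts as propellant mass flow, which is why a
    hydroduct can show c far above a rocket's ~6,000 ft/s: the accelerated
    water contributes thrust but is not carried on board.
    """
    return thrust_lbf * G0 / gas_flow_lbm_per_s

# Illustrative: 10 lbf of net thrust from 0.012 lbm/s of nitrogen
print(effective_exhaust_velocity(10.0, 0.012))  # ~26,800 ft/s
```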

This investigation was conducted at Project No. 1, Guggenheim Aeronautical Laboratory, California Institute of Technology, Pasadena, California.

Relevance: 20.00%

Abstract:

From the tunneling characteristics of a tin-tin oxide-lead junction, a direct measurement has been made of the energy-gap variation for a superconductor carrying a current in a compensated geometry. Throughout the region investigated (several temperatures near Tc, down to a reduced temperature t = 0.8), the observed current dependence agrees quite well with predictions based on the Ginzburg-Landau-Gor'kov theory. Near Tc the predicted temperature dependence is also well verified, though deviations are observed at lower temperatures; even for the latter, the data are internally consistent with the temperature dependence of the experimental critical current. At the lowest temperature investigated, t = 0.8, a small "Josephson" tunneling current further allowed a direct measurement of the electron drift velocity at low current densities. From this, a preliminary experimental value of the critical velocity, believed to be the first reported, can be inferred on the basis of Ginzburg-Landau theory. For tin at t = 0.8, we find vc = 87 m/sec. This value does not appear fully consistent with those predicted by recent theories for superconductors with short electronic mean free paths.

Relevance: 20.00%

Abstract:

The thesis is divided into two parts. Part I generalizes a self-consistent calculation of residue shifts from SU(3) symmetry, originally performed by Dashen, Dothan, Frautschi, and Sharp, to include the effects of non-linear terms. Residue factorizability is used to transform an overdetermined set of equations into a variational problem designed to take advantage of the redundancy of the mathematical system. The solution of this problem automatically satisfies the requirement of factorizability and comes close to satisfying all the original equations.

Part II investigates some consequences of direct channel Regge poles and treats the problem of relating Reggeized partial wave expansions made in different reaction channels. An analytic method is introduced which can be used to determine the crossed-channel discontinuity for a large class of direct-channel Regge representations, and this method is applied to some specific representations.

It is demonstrated that the multi-sheeted analytic structure of the Regge trajectory function can be used to resolve apparent difficulties arising from infinitely rising Regge trajectories. Also discussed are the implications of large collections of "daughter trajectories."

Two things are of particular interest: first, the threshold behavior in the direct and crossed channels; second, the potentialities of Reggeized representations for use in self-consistent calculations. A new representation is introduced which surpasses previous formulations in these two areas, automatically satisfying direct-channel threshold constraints while remaining capable of reproducing a reasonable crossed-channel discontinuity. A scalar model is investigated at low energies, and a relation is obtained between the mass of the lowest bound state and the slope of the Regge trajectory.

Relevance: 20.00%

Abstract:

This thesis explores the dynamics of scale interactions in a turbulent boundary layer through a forcing-response type experimental study. An emphasis is placed on the analysis of triadic wavenumber interactions, since the governing Navier-Stokes equations for the flow necessitate a direct coupling between triadically consistent scales. Two sets of experiments were performed in which deterministic disturbances were introduced into the flow using a spatially impulsive dynamic wall perturbation, and hot-wire anemometry was employed to measure the downstream turbulent velocity and study the flow response to the external forcing. In the first set of experiments, which built on a recent investigation of dynamic forcing effects in a turbulent boundary layer, a 2D (spanwise-constant) spatio-temporal normal mode was excited in the flow; the streamwise length and time scales of the synthetic mode roughly correspond to the very-large-scale motions (VLSMs) found naturally in canonical flows. Correlation studies between the large- and small-scale velocity signals reveal an alteration of the natural phase relations between scales by the synthetic mode; in particular, a strong phase-locking or organizing effect is seen on directly coupled small scales through triadic interactions. Having characterized the bulk influence of a single energetic mode on the flow dynamics, a second set of experiments, aimed at isolating specific triadic interactions, was performed. Two distinct 2D large-scale normal modes were excited in the flow, and the response at the corresponding sum and difference wavenumbers was isolated from the turbulent signals. Results from this experiment serve as a unique demonstration of direct non-linear interaction in a fully turbulent wall-bounded flow, and allow for the examination of phase relationships involving specific interacting scales; a direct connection is also made to the Navier-Stokes resolvent operator framework developed in recent literature. Results and analysis from the present work offer insights into the dynamical structure of wall turbulence, and have interesting implications for the design of practical turbulence manipulation and control strategies.
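For reference, the triadic consistency referred to above comes from the quadratic nonlinearity of the Navier-Stokes equations: two excited modes $(k_1, \omega_1)$ and $(k_2, \omega_2)$ force a response only at the combinations

```latex
(k_3, \omega_3) = (k_1 + k_2,\; \omega_1 + \omega_2)
\quad \text{or} \quad
(k_3, \omega_3) = (k_1 - k_2,\; \omega_1 - \omega_2),
```

which is why the flow response is isolated at the sum and difference wavenumbers of the two synthetic large-scale modes.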

Relevance: 20.00%

Abstract:

The field of plasmonics exploits the unique optical properties of metallic nanostructures to concentrate and manipulate light at subwavelength scales. Metallic nanostructures derive their unique properties from their ability to support surface plasmons: coherent wave-like oscillations of the free electrons at the interface between a conductive and a dielectric medium. Recent advances in the ability to fabricate metallic nanostructures at subwavelength scales have created new possibilities for technology and research across a broad range of applications.

In the first part of this thesis, we present two investigations of the relationship between the charge state and the optical state of plasmonic metal nanoparticles. Using experimental bias-dependent extinction measurements, we derive a potential-dependent dielectric function for Au nanoparticles that accounts for the changes in physical properties under an applied bias that contribute to the optical extinction. We also present theory and experiment for the reverse effect: the manipulation of the carrier density of Au nanoparticles via controlled optical excitation. This plasmoelectric effect takes advantage of the strong resonant properties of plasmonic materials and of the relationship between charge state and optical properties to elucidate a new avenue for the conversion of optical power to electrical potential.

The second topic of this thesis is the non-radiative decay of plasmons into a hot-carrier distribution, and that distribution's subsequent relaxation. We present first-principles calculations that capture all of the significant microscopic mechanisms underlying surface plasmon decay and predict the initial excited-carrier distributions so generated. We also perform ab initio calculations of the electron-temperature-dependent heat capacities and electron-phonon coupling coefficients of plasmonic metals. We extend these first-principles methods to calculate the electron-temperature-dependent dielectric response of hot electrons in plasmonic metals, including direct interband and phonon-assisted intraband transitions. Finally, we combine these first-principles calculations of carrier dynamics and optical response into a complete theoretical description of ultrafast pump-probe measurements, free of the fitting parameters that are typical in previous analyses.