7 results for Dual task paradigm
at Massachusetts Institute of Technology
Abstract:
We are investigating how to program robots so that they learn from experience. Our goal is to develop principled methods of learning that can improve a robot's performance of a wide range of dynamic tasks. We have developed task-level learning that successfully improves a robot's performance of two complex tasks, ball-throwing and juggling. With task-level learning, a robot practices a task, monitors its own performance, and uses that experience to adjust its task-level commands. This learning method complements other approaches, such as model calibration, for improving robot performance.
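As a rough illustration of the practice-monitor-adjust loop described above, the sketch below runs a simulated throw with an unknown bias and updates a single task-level command from the observed error; the function names, gain, and simulated task are illustrative assumptions, not the paper's implementation.

```python
# A minimal, self-contained sketch of task-level learning: a simulated
# "throw" with an unknown, fixed bias stands in for the real robot; the
# names simulated_throw, gain, and tol are illustrative, not from the paper.

def simulated_throw(command):
    # Stand-in for executing one practice trial on the robot:
    # the outcome differs from the commanded value by an unknown, fixed bias.
    unknown_bias = 0.37
    return command + unknown_bias

def task_level_learning(target, command=0.0, gain=0.8, trials=25, tol=1e-3):
    """Practice, monitor task-level error, and adjust the task-level command."""
    for _ in range(trials):
        outcome = simulated_throw(command)   # practice the task once
        error = target - outcome             # monitor own performance
        if abs(error) < tol:
            break
        command += gain * error              # use experience to adjust the command
    return command

print(task_level_learning(target=2.0))       # converges near 2.0 - 0.37 = 1.63
```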
Abstract:
In most classical frameworks for learning from examples, it is assumed that examples are randomly drawn and presented to the learner. In this paper, we consider the possibility of a more active learner who is allowed to choose his or her own examples. Our investigations are carried out in a function approximation setting. In particular, using arguments from optimal recovery (Micchelli and Rivlin, 1976), we develop an adaptive sampling strategy (equivalent to adaptive approximation) for arbitrary approximation schemes. We provide a general formulation of the problem and show how it can be regarded as sequential optimal recovery. We demonstrate the application of this general formulation to two special cases of functions on the real line: (1) monotonically increasing functions and (2) functions with bounded derivative. An extensive investigation of the sample complexity of approximating these functions is conducted, yielding both theoretical and empirical results on test functions. Our theoretical results (stated in PAC style), along with the simulations, demonstrate the superiority of our active scheme over both passive learning and classical optimal recovery. The analysis of active function approximation is conducted in a worst-case setting, in contrast with other Bayesian paradigms obtained from optimal design (Mackay, 1992).
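A simplified sketch of the kind of active sampling strategy described above, for the bounded-derivative case: each new sample is placed at the midpoint of the interval with the largest worst-case uncertainty. The scoring rule, test function, and parameters are assumptions made for illustration; the paper's exact sequential optimal-recovery criterion may differ.

```python
# Active sampling on [0, 1] for a function with derivative bounded by L:
# repeatedly sample the midpoint of the interval whose worst-case error,
# bounded here by (L * width - |delta f|) / 2, is largest.

import math

def active_sample(f, L, n_samples=10):
    xs = [0.0, 1.0]
    ys = [f(0.0), f(1.0)]
    for _ in range(n_samples - 2):
        # score each interval by its worst-case approximation error
        scores = [(L * (xs[i + 1] - xs[i]) - abs(ys[i + 1] - ys[i])) / 2.0
                  for i in range(len(xs) - 1)]
        i = max(range(len(scores)), key=scores.__getitem__)
        x_new = 0.5 * (xs[i] + xs[i + 1])        # sample where uncertainty peaks
        xs.insert(i + 1, x_new)
        ys.insert(i + 1, f(x_new))
    return xs, ys

# Example: a test function whose derivative is bounded by L = 2
xs, ys = active_sample(lambda x: math.sin(2 * math.pi * x) / (2 * math.pi) + x, L=2.0)
print(xs)
```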
Abstract:
The conceptual component of this work is about "reference surfaces", which are the dual of reference frames often used for shape representation purposes. The theoretical component of this work involves the question of whether one can find a unique (and simple) mapping that aligns two arbitrary perspective views of an opaque textured quadric surface in 3D, given (i) a few corresponding points in the two views, or (ii) the outline conic of the surface in one view (only) and a few corresponding points in the two views. The practical component of this work is concerned with applying the theoretical results as tools for the task of achieving full correspondence between views of arbitrary objects.
Abstract:
We derive a new representation for a function as a linear combination of local correlation kernels at optimal sparse locations and discuss its relation to PCA, regularization, sparsity principles and Support Vector Machines. We first review previous results for the approximation of a function from discrete data (Girosi, 1998) in the context of Vapnik's feature space and dual representation (Vapnik, 1995). We apply them to show (1) that a standard regularization functional with a stabilizer defined in terms of the correlation function induces a regression function in the span of the feature space of classical Principal Components and (2) that there exists a dual representation of the regression function in terms of a regularization network with a kernel equal to a generalized correlation function. We then describe the main observation of the paper: the dual representation in terms of the correlation function can be sparsified using the Support Vector Machines (Vapnik, 1982) technique, and this operation is equivalent to sparsifying a large dictionary of basis functions adapted to the task, using a variation of Basis Pursuit De-Noising (Chen, Donoho and Saunders, 1995; see also related work by Donahue and Geiger, 1994; Olshausen and Field, 1995; Lewicki and Sejnowski, 1998). In addition to extending the close relations between regularization, Support Vector Machines and sparsity, our work also illuminates and formalizes the LFA concept of Penev and Atick (1996). We discuss the relation between our results, which are about regression, and the different problem of pattern classification.
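The sketch below illustrates only the general idea of sparsifying a dual (kernel) representation with an L1 penalty, in the spirit of Basis Pursuit De-Noising: a Gaussian kernel stands in for the generalized correlation function, and the synthetic data, kernel width, and penalty are assumptions made for illustration rather than the paper's construction.

```python
# Sparsify a kernel expansion: build a dictionary of kernel columns centered at
# the data points, then keep only a few coefficients via an L1-penalized fit.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1, 1, 80))
y = np.sin(3 * x) + 0.05 * rng.standard_normal(80)

# Kernel dictionary: one basis function centered at each data point
# (a Gaussian kernel used here as a stand-in correlation function).
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.1 ** 2))

# The L1 penalty retains only a few kernel centers: a sparse dual expansion.
model = Lasso(alpha=1e-3, max_iter=50000).fit(K, y)
print("non-zero coefficients:", np.count_nonzero(model.coef_), "of", len(x))
```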
Abstract:
In a recent experiment, Freedman et al. recorded from inferotemporal (IT) and prefrontal cortices (PFC) of monkeys performing a "cat/dog" categorization task (Freedman 2001 and Freedman, Riesenhuber, Poggio, Miller 2001). In this paper we analyze the tuning properties of view-tuned units in our HMAX model of object recognition in cortex (Riesenhuber 1999) using the same paradigm and stimuli as in the experiment. We then compare the simulation results to the monkey inferotemporal neuron population data. We find that view-tuned model IT units that were trained without any explicit category information can show category-related tuning as observed in the experiment. This suggests that the tuning properties of experimental IT neurons might primarily be shaped by bottom-up stimulus-space statistics, with little influence of top-down task-specific information. The population of experimental PFC neurons, on the other hand, shows tuning properties that cannot be explained just by stimulus tuning. These analyses are compatible with a model of object recognition in cortex (Riesenhuber 2000) in which a population of shape-tuned neurons provides a general basis for neurons tuned to different recognition tasks.
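As an illustration of how category-related tuning of individual units can be quantified, the sketch below computes a category tuning index of the form (BCD - WCD) / (BCD + WCD), where BCD and WCD are the mean response differences between stimuli from different and from the same categories. The index definition follows common usage in this literature, and the random responses are placeholders rather than model or neural data.

```python
# Category tuning index for a single unit from its responses to labeled stimuli.

import numpy as np

def category_tuning_index(responses, labels):
    responses = np.asarray(responses, dtype=float)
    labels = np.asarray(labels)
    diffs = np.abs(responses[:, None] - responses[None, :])
    same = labels[:, None] == labels[None, :]
    iu = np.triu_indices(len(responses), k=1)     # each stimulus pair once
    wcd = diffs[iu][same[iu]].mean()              # within-category difference
    bcd = diffs[iu][~same[iu]].mean()             # between-category difference
    return (bcd - wcd) / (bcd + wcd)

rng = np.random.default_rng(1)
labels = np.array([0] * 6 + [1] * 6)              # e.g. "cat" vs "dog" stimuli
responses = labels + 0.3 * rng.standard_normal(12)
print(category_tuning_index(responses, labels))   # clearly positive when responses cluster by category
```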
Abstract:
The release of growth factors from tissue engineering scaffolds provides signals that influence the migration, differentiation, and proliferation of cells. The incorporation of a drug delivery platform that is capable of tunable release will give tissue engineers greater versatility in the direction of tissue regeneration. We have prepared a novel composite of two biomaterials with proven track records, apatite and poly(lactic-co-glycolic acid) (PLGA), as a drug delivery platform with promising controlled release properties. These composites have been tested in the delivery of a model protein, bovine serum albumin (BSA), as well as therapeutic proteins, recombinant human bone morphogenetic protein-2 (rhBMP-2) and rhBMP-6. The controlled release strategy is based on the use of a polymer with acidic degradation products to control the dissolution of the basic apatitic component, resulting in protein release. Therefore, any parameter that affects either polymer degradation or apatite dissolution can be used to control protein release. We have modified the protein release profile systematically by varying the polymer molecular weight, polymer hydrophobicity, apatite loading, apatite particle size, and other material and processing parameters. Biologically active rhBMP-2 was released from these composite microparticles over 100 days, in contrast to conventional collagen sponge carriers, which were depleted in approximately 2 weeks. The released rhBMP-2 was able to induce elevated alkaline phosphatase and osteocalcin expression in pluripotent murine embryonic fibroblasts. To augment tissue engineering scaffolds with tunable and sustained protein release capabilities, these composite microparticles can be dispersed in the scaffolds in different combinations to obtain a superposition of the release profiles. We have loaded rhBMP-2 into composite microparticles with a fast release profile, and rhBMP-6 into slow-releasing composite microparticles. An equi-mixture of these two sets of composite particles was then injected into a collagen sponge, allowing for dual release of the proteins from the collagenous scaffold. The ability of these BMP-loaded scaffolds to induce osteoblastic differentiation in vitro and ectopic bone formation in a rat model is being investigated. We anticipate that these apatite-polymer composite microparticles can be extended to the delivery of other signalling molecules, and can be incorporated into other types of tissue engineering scaffolds.
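To make the superposition idea concrete, the sketch below sums mass-weighted release profiles for a fast-releasing and a slow-releasing particle population; the first-order kinetics and rate constants are stand-ins chosen only to show the calculation, not measured profiles.

```python
# Superposition of release profiles from a mixture of microparticle populations.

import math

def cumulative_release(t_days, rate_per_day):
    # fraction of loaded protein released by day t under assumed first-order kinetics
    return 1.0 - math.exp(-rate_per_day * t_days)

def mixture_release(t_days, fractions_and_rates):
    # fractions_and_rates: [(mass fraction of particle type, rate constant), ...]
    return sum(f * cumulative_release(t_days, k) for f, k in fractions_and_rates)

# Equal mixture of a fast-releasing and a slow-releasing particle population,
# evaluated over roughly 100 days (rate constants are placeholders).
mix = [(0.5, 0.15), (0.5, 0.02)]
for day in (1, 7, 30, 100):
    print(day, round(mixture_release(day, mix), 2))
```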
Abstract:
Three-terminal "dotted-I" interconnect structures, with vias at both ends and an additional via in the middle, were tested under various test conditions. Mortalities (failures) were found in right segments with jL values as low as 1250 A/cm, and the mortality of a dotted-I segment depends on the direction and magnitude of the current in the adjacent segment. Some mortalities were also found in the right segments under a test condition where no failure was expected. Cu extrusion along the delaminated Cu/Si₃N₄ interface near the central via region is believed to have caused the unexpected failures. From the time-to-failure (TTF), it is possible to quantify the Cu/Si₃N₄ interfacial strength and bonding energy. Hence, the demonstrated test methodology can be used to investigate the integrity of Cu dual damascene processes. Since conventionally determined critical jL values for two-terminal via-terminated lines cannot be directly applied to interconnects with branched segments, this also serves as a good methodology for identifying the critical effective jL values for immortality.
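For illustration, the sketch below shows the conventional two-terminal jL check (current density times segment length compared against a critical value) that the abstract argues cannot be applied directly to branched segments; the segment currents, lengths, and the use of 1250 A/cm as a threshold are placeholder assumptions.

```python
# Conventional jL immortality check for individual interconnect segments.

def jL_product(current_density_A_per_cm2, length_cm):
    # effective jL product for a segment, in A/cm
    return current_density_A_per_cm2 * length_cm

CRITICAL_JL = 1250.0  # A/cm: lowest value at which mortality was observed (used here as a placeholder limit)

segments = {
    "left segment":  jL_product(1.0e6, 50e-4),   # assumed 1 MA/cm^2 over 50 um
    "right segment": jL_product(2.0e6, 100e-4),  # assumed 2 MA/cm^2 over 100 um
}
for name, jl in segments.items():
    status = "at risk" if jl >= CRITICAL_JL else "nominally immortal by the two-terminal criterion"
    print(f"{name}: jL = {jl:.0f} A/cm -> {status}")
```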