7 results for ROI reusable object and instruction
in the Cambridge University Engineering Department Publications Database
Abstract:
Developing noninvasive and accurate diagnostics that are easily manufactured, robust, and reusable will provide monitoring of high-risk individuals in any clinical or point-of-care environment. We have developed a clinically relevant optical glucose nanosensor that can be reused at least 400 times without a compromise in accuracy. The use of a single 6 ns laser (λ = 532 nm, 200 mJ) pulse rapidly produced off-axis Bragg diffraction gratings consisting of ordered silver nanoparticles embedded within a phenylboronic acid-functionalized hydrogel. This sensor exhibited reversible large wavelength shifts and diffracted the spectrum of narrow-band light over the wavelength range λpeak ≈ 510-1100 nm. The experimental sensitivity of the sensor permits diagnosis of glucosuria in the urine samples of diabetic patients with an improved performance compared to commercial high-throughput urinalysis devices. The sensor response was achieved within 5 min and reset to baseline in ∼10 s. It is anticipated that this sensing platform will have implications for the development of reusable, equipment-free colorimetric point-of-care diagnostic devices for diabetes screening.
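The wavelength shifts described above follow from the Bragg condition: glucose binding swells the hydrogel, increasing the spacing between the ordered nanoparticle planes and red-shifting the diffracted peak. A minimal sketch of this relation, assuming first-order diffraction, a typical hydrogel refractive index of n ≈ 1.33, and hypothetical lattice spacings (none of these values are taken from the paper):

```python
import math

def bragg_peak(d_nm, n=1.33, theta_deg=90.0, m=1):
    """Bragg peak wavelength (nm) for a grating with plane spacing d_nm,
    medium refractive index n, and glancing angle theta_deg, from the
    Bragg condition m * lambda = 2 * n * d * sin(theta)."""
    return 2.0 * n * d_nm * math.sin(math.radians(theta_deg)) / m

# Swelling of the hydrogel increases d, shifting the peak toward the red:
print(bragg_peak(200.0))  # ≈ 532 nm
print(bragg_peak(380.0))  # ≈ 1011 nm, toward the red end of the reported range
```

The monotone dependence of the peak on the spacing is what lets a swelling-based sensor report glucose concentration as a colour change.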
Abstract:
To manipulate an object skillfully, the brain must learn its dynamics, specifying the mapping between applied force and motion. A fundamental issue in sensorimotor control is whether such dynamics are represented in an extrinsic frame of reference tied to the object or an intrinsic frame of reference linked to the arm. Although previous studies have suggested that objects are represented in arm-centered coordinates [1-6], all of these studies have used objects with unusual and complex dynamics. Thus, it is not known how objects with natural dynamics are represented. Here we show that objects with simple (or familiar) dynamics and those with complex (or unfamiliar) dynamics are represented in object- and arm-centered coordinates, respectively. We also show that objects with simple dynamics are represented with an intermediate coordinate frame when vision of the object is removed. These results indicate that object dynamics can be flexibly represented in different coordinate frames by the brain. We suggest that with experience, the representation of the dynamics of a manipulated object may shift from a coordinate frame tied to the arm toward one that is linked to the object. The additional complexity required to represent dynamics in object-centered coordinates would be economical for familiar objects because such a representation allows object use regardless of the orientation of the object in hand.
Abstract:
This paper presents a volumetric formulation for the multi-view stereo problem which is amenable to a computationally tractable global optimisation using Graph-cuts. Our approach is to seek the optimal partitioning of 3D space into two regions labelled as "object" and "empty" under a cost functional consisting of the following two terms: (1) A term that forces the boundary between the two regions to pass through photo-consistent locations and (2) a ballooning term that inflates the "object" region. To take account of the effect of occlusion on the first term we use an occlusion robust photo-consistency metric based on Normalised Cross Correlation, which does not assume any geometric knowledge about the reconstructed object. The globally optimal 3D partitioning can be obtained as the minimum cut solution of a weighted graph.
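The photo-consistency term above relies on Normalised Cross Correlation between image patches. As a minimal sketch of plain NCC only (not the paper's full occlusion-robust voting scheme), with hypothetical patch data:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalised Cross Correlation between two equally sized image patches.
    Returns a score in [-1, 1]; 1 means the patches agree up to an affine
    change of intensity, which makes the score insensitive to per-view
    exposure and gain differences."""
    a = np.asarray(patch_a, dtype=float).ravel()
    b = np.asarray(patch_b, dtype=float).ravel()
    a = a - a.mean()          # remove brightness offset
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0            # a constant patch carries no texture signal
    return float(np.dot(a, b) / denom)
```

In a volumetric graph-cut formulation, a score like this would be mapped to an edge weight so that the minimum cut prefers to pass through photo-consistent (high-NCC) surface locations.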
Abstract:
Our ability to skillfully manipulate an object often involves the motor system learning to compensate for the dynamics of the object. When the two arms learn to manipulate a single object they can act cooperatively, whereas when they manipulate separate objects they control each object independently. We examined how learning transfers between these two bimanual contexts by applying force fields to the arms. In a coupled context, a single dynamic is shared between the arms, and in an uncoupled context separate dynamics are experienced independently by each arm. In a composition experiment, we found that when subjects had learned uncoupled force fields they were able to transfer to a coupled field that was the sum of the two fields. However, the contribution of each arm repartitioned over time so that, when they returned to the uncoupled fields, the error initially increased but rapidly reverted to the previous level. In a decomposition experiment, after subjects learned a coupled field, their error increased when exposed to uncoupled fields that were orthogonal components of the coupled field. However, when the coupled field was reintroduced, subjects rapidly readapted. These results suggest that the representations of dynamics for uncoupled and coupled contexts are partially independent. We found additional support for this hypothesis by showing significant learning of opposing curl fields when the context, coupled versus uncoupled, was alternated with the curl field direction. These results suggest that the motor system is able to use partially separate representations for dynamics of the two arms acting on a single object and two arms acting on separate objects.
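The curl fields mentioned above push the hand perpendicular to its velocity, F = B·v with an antisymmetric B. A small sketch (hypothetical gain values, not the study's parameters) of how such a viscous field transforms between coordinate frames, which is the kind of frame question these bimanual studies probe:

```python
import numpy as np

def rotate_field(B, angle_deg):
    """Express a viscous force field F = B @ v in a coordinate frame
    rotated by angle_deg, via the similarity transform B' = R @ B @ R.T."""
    t = np.radians(angle_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return R @ B @ R.T

# A pure curl field looks identical in every rotated frame, whereas an
# anisotropic field does not:
k = 13.0                                   # hypothetical gain, N·s/m
curl = np.array([[0.0, -k], [k, 0.0]])
print(np.allclose(rotate_field(curl, 37.0), curl))   # True
```

The rotation invariance of a curl field means its "direction" (clockwise vs counterclockwise) is the only frame-independent property, which is why opposing curl directions make a clean probe of whether two contexts engage separate representations.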
Abstract:
In this paper, we aim to reconstruct free-form 3D models from a single view by learning prior knowledge about a specific class of objects. Instead of heuristically proposing specific regularities and defining parametric models, as in previous research, our shape prior is learned directly from existing 3D models under a framework based on the Gaussian Process Latent Variable Model (GPLVM). The major contributions of the paper are: 1) a probabilistic framework for prior-based reconstruction that requires no object-specific heuristics and is easily generalized to handle various categories of 3D objects, and 2) an attempt at automatic reconstruction of more complex 3D shapes, such as human bodies, from 2D silhouettes only. Qualitative and quantitative experimental results on both synthetic and real data demonstrate the efficacy of our new approach. ©2009 IEEE.
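A GPLVM places a Gaussian-process prior over a nonlinear mapping from a low-dimensional latent space to shape space and needs a dedicated library to fit. As a deliberately simplified linear stand-in on synthetic data, the sketch below uses a PCA shape prior to illustrate the core idea: learn a low-dimensional latent space from training shapes, then recover a full shape from a partial observation (loosely analogous to fitting the prior to a 2D silhouette). All data, dimensions, and names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training corpus: 50 "shapes", each flattened to a 30-vector,
# generated from a 2-D latent space (a stand-in for real 3D model data).
latent = rng.normal(size=(50, 2))
basis = rng.normal(size=(2, 30))
shapes = latent @ basis + 0.01 * rng.normal(size=(50, 30))

# Learn the prior: a mean shape plus a 2-D linear latent space via PCA.
mean = shapes.mean(axis=0)
_, _, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
components = Vt[:2]

def reconstruct(observed, obs_idx):
    """Infer latent coordinates from the observed subset of shape
    coordinates (least squares in the latent space), then decode the
    full shape from the learned prior."""
    A = components[:, obs_idx].T                        # (n_obs, 2)
    z, *_ = np.linalg.lstsq(A, observed - mean[obs_idx], rcond=None)
    return mean + z @ components
```

Because the prior is learned from data rather than hand-designed, the same pipeline applies unchanged to any object category with enough training shapes; the GPLVM replaces the linear decode here with a nonlinear, probabilistic one.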
Abstract:
To explore the neural mechanisms related to representation of the manipulation dynamics of objects, we performed whole-brain fMRI while subjects balanced an object in stable and highly unstable states and while they balanced a rigid object and a flexible object in the same unstable state, in all cases without vision. In this way, we varied the extent to which an internal model of the manipulation dynamics was required in the moment-to-moment control of the object's orientation. We hypothesized that activity in primary motor cortex would reflect the amount of muscle activation under each condition. In contrast, we hypothesized that cerebellar activity would be more strongly related to the stability and complexity of the manipulation dynamics because the cerebellum has been implicated in internal model-based control. As hypothesized, the dynamics-related activation of the cerebellum was quite different from that of the primary motor cortex. Changes in cerebellar activity were much greater than would have been predicted from differences in muscle activation when the stability and complexity of the manipulation dynamics were contrasted. On the other hand, the activity of the primary motor cortex more closely resembled the mean motor output necessary to execute the task. We also discovered a small region near the anterior edge of the ipsilateral (right) inferior parietal lobule where activity was modulated with the complexity of the manipulation dynamics. We suggest that this is related to imagining the location and motion of an object with complex manipulation dynamics.