8 results for Visual control
in the Cambridge University Engineering Department Publications Database
Abstract:
The human motor system is remarkably proficient in the online control of visually guided movements, adjusting to changes in the visual scene within 100 ms [1-3]. This is achieved through a set of highly automatic processes [4] that translate visual information into representations suitable for motor control [5, 6]. To accomplish this, visual information pertaining to the target and the hand needs to be identified and linked to the appropriate internal representations during the movement. Meanwhile, other visual information must be filtered out, which is especially demanding in visually cluttered natural environments. If selection of the relevant sensory information for online control were achieved by visual attention, its limited capacity [7] would substantially constrain the efficiency of visuomotor feedback control. Here we demonstrate that both exogenously and endogenously cued attention facilitate the processing of visual target information [8], but not of visual hand information. Moreover, distracting visual information is filtered out more efficiently during the extraction of hand information than of target information. Our results therefore suggest the existence of a dedicated visuomotor binding mechanism that links the hand representation in the visual and motor systems.
Abstract:
Human sensorimotor control has been predominantly studied using fixed tasks performed under laboratory conditions. This approach has greatly advanced our understanding of the mechanisms that integrate sensory information and generate motor commands during voluntary movement. However, experimental tasks necessarily restrict the range of behaviors that are studied. Moreover, the processes studied in the laboratory may not be the same processes that subjects call upon during their everyday lives. Naturalistic approaches thus provide an important adjunct to traditional laboratory-based studies. For example, wearable self-contained tracking systems can allow subjects to be monitored outside the laboratory, where they engage spontaneously in natural everyday behavior. Similarly, advances in virtual reality technology allow laboratory-based tasks to be made more naturalistic. Here, we review naturalistic approaches, including perspectives from psychology and visual neuroscience, as well as studies and technological advances in the field of sensorimotor control.
Abstract:
Biological sensing is explored through novel stable colloidal dispersions of pyrrole-benzophenone and pyrrole copolymerized silica (PPy-SiO₂-PPyBPh) nanocomposites, which allow covalent linking of biological molecules through light mediation. The mechanism of nanocomposite attachment to a model protein is studied using gold-labeled cholera toxin B (CTB) to enhance contrast in electron microscopy imaging; the biological test itself is carried out without gold labeling, i.e., using CTB only. The protein is shown to be covalently bound through the benzophenone groups. When the reactive PPy-SiO₂-PPyBPh-CTB nanocomposite is exposed to specific recognition anti-CTB immunoglobulins, a qualitative visual agglutination assay occurs spontaneously, producing the positive test product PPy-SiO₂-PPyBPh-CTB-anti-CTB in less than 1 h, while a control solution of the PPy-SiO₂-PPyBPh-CTB alone remained well dispersed over the same period. The dispersions were characterized by cryogenic transmission electron microscopy (cryo-TEM), scanning electron microscopy (SEM), FTIR, and X-ray photoelectron spectroscopy (XPS).
Abstract:
As-built models have proven useful in many project-related applications, such as progress monitoring and quality control. However, they are not widely produced in most projects because considerable manual effort is still required to convert remote sensing data from photogrammetry or laser scanning into an as-built model. To automate the generation of as-built models, the first and fundamental step is to automatically recognize infrastructure-related elements in the remote sensing data. This paper outlines a framework for creating visual pattern recognition models that can automate the recognition of infrastructure-related elements based on their visual features. The framework starts by identifying the visual characteristics of infrastructure element types and representing them numerically using image analysis tools. The derived representations, along with their relative topology, are then used to form element visual pattern recognition (VPR) models. So far, VPR models of four infrastructure-related elements have been created using the framework. The high recognition performance of these models validates the effectiveness of the framework in recognizing infrastructure-related elements.
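To make the "numerical representation of visual characteristics" step concrete, here is a minimal sketch assuming OpenCV is available: an image region is described by a hue histogram and log-scaled Hu shape moments, then labeled by its nearest stored template. The descriptors, the `templates` dictionary, and the `max_dist` threshold are illustrative assumptions, not the representations or matching rules used in the paper.

```python
# Illustrative sketch of feature-based element recognition (not the paper's VPR models).
import cv2
import numpy as np

def describe(region_bgr):
    """Return a simple numerical representation of an image region."""
    hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
    hue_hist = cv2.normalize(cv2.calcHist([hsv], [0], None, [32], [0, 180]), None).flatten()
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    hu = cv2.HuMoments(cv2.moments(gray)).flatten()
    # Log-scale the Hu moments so the seven components have comparable magnitudes.
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
    return hue_hist, hu

def recognize(region_bgr, templates, max_dist=0.5):
    """Label a region by its nearest template in descriptor space (hypothetical rule)."""
    hist, hu = describe(region_bgr)
    best_label, best_dist = "unknown", float("inf")
    for label, (t_hist, t_hu) in templates.items():
        d = (cv2.compareHist(hist, t_hist, cv2.HISTCMP_BHATTACHARYYA)
             + np.linalg.norm(hu - t_hu))
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist < max_dist else "unknown"
```

In the framework described above, candidate regions would come from photogrammetry or laser-scan imagery, and the relative topology between recognized elements would further constrain the labels; both steps are omitted from this sketch.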
Abstract:
This paper investigates how the efficiency and robustness of a skilled rhythmic task compete against each other in the control of a bimanual movement. Human subjects juggled a puck in 2D through impacts with two metallic arms, requiring rhythmic bimanual actuation. The arms' kinematics were constrained only by the position, velocity, and time of the impacts, while the rest of the trajectory did not influence the movement of the puck. To expose the robustness of the task, we manipulated the task context in two distinct ways: the task tempo was set to four different values (manipulating the time available to plan and execute each impact movement individually), and vision was withdrawn during half of the trials (reducing the sensory inflow). We show that when the tempo was fast, the actuation was rhythmic (no pause in the trajectory), while at slow tempo the actuation was discrete (with pause intervals between individual movements). Moreover, withdrawing visual information encouraged the rhythmic behavior at all four tested tempi. The discrete and rhythmic behaviors give different answers to the efficiency/robustness trade-off: discrete movements are energy efficient, while rhythmic movements impact the puck with negative acceleration, a property that preserves robustness. Moreover, we report that in all conditions the impact velocity of the arms was negatively correlated with the energy of the puck. This correlation tended to stabilize the task and was influenced by vision, again revealing different control strategies. In conclusion, this task involves different modes of control that balance efficiency and robustness depending on the context. © 2008 Springer-Verlag.
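The reported stabilizing relationship can be pictured with a minimal sketch: computing the Pearson correlation between per-impact arm velocities and puck energies. The energy model, parameter values, and array names below are assumptions for illustration, not the paper's analysis.

```python
# Illustrative sketch only: correlation between arm impact velocity and puck energy.
import numpy as np

def puck_energy(speed, height, mass=0.1, g=9.81):
    """Mechanical energy of the puck at a given instant (assumed model and parameters)."""
    return 0.5 * mass * speed**2 + mass * g * height

def impact_energy_correlation(arm_impact_speeds, puck_energies):
    """Pearson correlation across impacts; a negative value means higher puck
    energies coincide with lower arm impact velocities, the pattern reported
    in the abstract to stabilize the task."""
    arm_impact_speeds = np.asarray(arm_impact_speeds, dtype=float)
    puck_energies = np.asarray(puck_energies, dtype=float)
    return float(np.corrcoef(arm_impact_speeds, puck_energies)[0, 1])
```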