2 results for Torsional Nems Actuator
in QSpace: Queen's University - Canada
Abstract:
This paper describes the design, tuning, and extensive field testing of an admittance-based Autonomous Loading Controller (ALC) for robotic excavation. Several iterations of the ALC were tuned and tested in fragmented rock piles, similar to those found in operating mines, using both a robotic 1-tonne capacity Kubota R520S diesel-hydraulic surface loader and a 14-tonne capacity Atlas Copco ST14 underground load-haul-dump (LHD) machine. On the R520S loader, the ALC increased payload by 18 % with greater consistency, although it expended more energy and took longer per dig than digging at maximum actuator velocity. On the ST14 LHD, the ALC took 61 % less time to load 39 % more payload than a single manual operator. The manual operator made 28 dig attempts using three different digging strategies and had one failed dig. The tuned ALC made 26 dig attempts at target force levels of 10 and 11 MN: all 10 digs at the 11 MN level succeeded, while 6 of the 16 digs at the 10 MN level failed. The results presented in this paper suggest that the admittance-based ALC is more productive and consistent than manual operators, but that care should be taken when detecting entry into the muck pile.
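For readers unfamiliar with admittance control, the sketch below shows a generic first-order admittance law that maps the difference between a target dig force and the measured force into a commanded actuator velocity, so the machine slows as resistance approaches the target. This is only an illustration of the general technique under assumed gains, units, and saturation limits; it is not the ALC described above.

def admittance_step(v_cmd, f_meas, f_target, m=1.0, b=0.5, dt=0.01, v_max=1.0):
    """One step of a generic first-order admittance law (illustrative, not the ALC above).

    Dynamics: m * dv/dt + b * v = f_target - f_meas, so the commanded velocity
    settles near (f_target - f_meas) / b and falls toward zero as the measured
    dig force approaches the target force level.
    """
    dv = ((f_target - f_meas) - b * v_cmd) / m   # force error drives the velocity change
    v_new = v_cmd + dv * dt
    return max(0.0, min(v_max, v_new))           # clamp to the actuator's velocity range

# Example: as the measured force (here in MN, purely illustrative) ramps up
# during bucket penetration, the commanded velocity backs off.
v = 0.0
for f in [0.0, 2.0, 5.0, 8.0, 10.0, 11.0]:
    v = admittance_step(v, f_meas=f, f_target=11.0)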
Abstract:
Moving through a stable, three-dimensional world is a hallmark of our motor and perceptual experience. This stability is constantly being challenged by movements of the eyes and head, inducing retinal blur and retino-spatial misalignments for which the brain must compensate. To do so, the brain must account for eye and head kinematics to transform two-dimensional retinal input into the reference frame necessary for movement or perception. The four studies in this thesis used both computational and psychophysical approaches to investigate several aspects of this reference frame transformation. In the first study, we examined the neural mechanism underlying the visuomotor transformation for smooth pursuit using a feedforward neural network model. After training, the model performed the general, three-dimensional transformation using gain modulation. This gave mechanistic significance to gain modulation observed in cortical pursuit areas while also providing several testable hypotheses for future electrophysiological work. In the second study, we asked how anticipatory pursuit, which is driven by memorized signals, accounts for eye and head geometry using a novel head-roll updating paradigm. We showed that the velocity memory driving anticipatory smooth pursuit relies on retinal signals, but is updated for the current head orientation. In the third study, we asked how forcing retinal motion to undergo a reference frame transformation influences perceptual decision making. We found that simply rolling one's head impairs perceptual decision making in a way captured by stochastic reference frame transformations. In the final study, we asked how torsional shifts of the retinal projection occurring with almost every eye movement influence orientation perception across saccades. We found a pre-saccadic, predictive remapping consistent with maintaining a purely retinal (but spatially inaccurate) orientation perception throughout the movement. Together these studies suggest that, despite their spatial inaccuracy, retinal signals play a surprisingly large role in our seamless visual experience. This work therefore represents a significant advance in our understanding of how the brain performs one of its most fundamental functions.
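As a rough illustration of the kind of model used in the first study, the sketch below builds a small feedforward network that combines a 2-D retinal velocity signal with a 3-D eye/head orientation signal in a shared hidden layer, the setting in which gain-modulated hidden units can emerge during training. The layer sizes, activation function, and output frame are assumptions for illustration, not the thesis model.

import numpy as np

rng = np.random.default_rng(0)

def init_params(n_retinal=2, n_orient=3, n_hidden=20, n_out=3):
    # The hidden layer receives both retinal and orientation inputs, so training
    # can produce eye/head-position gain fields on retinally driven units.
    n_in = n_retinal + n_orient
    return {
        "W1": rng.normal(0.0, 0.1, (n_hidden, n_in)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0.0, 0.1, (n_out, n_hidden)),
        "b2": np.zeros(n_out),
    }

def forward(p, retinal_vel, eye_head_orient):
    """Map 2-D retinal velocity plus 3-D eye/head orientation to a 3-D pursuit command."""
    x = np.concatenate([retinal_vel, eye_head_orient])
    h = np.tanh(p["W1"] @ x + p["b1"])    # hidden units: candidates for gain modulation
    return p["W2"] @ h + p["b2"]

# Example call with arbitrary inputs (deg/s and rad, purely illustrative).
params = init_params()
command = forward(params, np.array([5.0, -2.0]), np.array([0.1, 0.0, 0.3]))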