3 results for TIME-LIKE GEODESICS

in the Cambridge University Engineering Department Publications Database


Relevance:

30.00%

Publisher:

Abstract:

The flow field within an unsteady ejector has been investigated using experimental and computational techniques. The experimental results show a peak thrust augmentation of 1.4; numerical simulation gives a value of 1.37. It is shown that the vortex ring dominates the flow field. At optimal thrust augmentation the vortex ring acts like a fluid piston accelerating the fluid inside the ejector. A model is proposed for the operation of unsteady ejectors, based on the vortex ring acting like a fluid piston. Control volume analysis is presented showing that mass entrainment is responsible for thrust augmentation. It is proposed that the spacing of successive vortex rings determines the mass entrainment and therefore thrust augmentation. The efficiency of unsteady ejectors was found to vary between 28% and 32% depending on the L/D ratio of the unsteady jet source. Copyright © 2008 by J H Heffer.
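The abstract attributes thrust augmentation to mass entrainment via a control volume argument. As a rough illustration of that idea (not the paper's own unsteady, vortex-ring-based model), the sketch below uses the classical idealised result that an ejector transferring the jet's kinetic-energy flux losslessly to the combined flow augments thrust by sqrt(1 + entrained/jet mass-flow ratio); all numbers are illustrative.

```python
import math

def ideal_thrust_augmentation(entrainment_ratio: float) -> float:
    """Idealised ejector thrust augmentation.

    Assumes the jet's kinetic-energy flux is transferred losslessly to the
    combined (primary + entrained) flow. With thrust T = m_dot * U and
    power P = 0.5 * m_dot * U**2, fixed power gives T = sqrt(2 * m_dot * P),
    so spreading the same power over more mass flow raises thrust by a
    factor sqrt(1 + m_dot_entrained / m_dot_jet).
    """
    return math.sqrt(1.0 + entrainment_ratio)

if __name__ == "__main__":
    for beta in (0.5, 0.96, 1.5, 2.0):
        phi = ideal_thrust_augmentation(beta)
        print(f"entrained/jet mass-flow ratio {beta:.2f} -> augmentation {phi:.2f}")
    # In this idealised picture, the peak augmentation of about 1.4 reported
    # above would correspond to an entrained-to-jet mass-flow ratio near 0.96.
```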

Relevance:

30.00%

Publisher:

Abstract:

Real-time acquisition of EMG during functional MRI (fMRI) provides a novel method of controlling motor experiments in the scanner using feedback of EMG. Because of the redundancy in the human muscle system, this is not possible from recordings of joint torque and kinematics alone, since these provide no information about individual muscle activation. This is particularly critical during brain imaging because brain activations are related not only to joint torques and kinematics but also to individual muscle activation. However, EMG collected during imaging is corrupted by large artifacts induced by the varying magnetic fields and radio frequency (RF) pulses in the scanner. Methods proposed in the literature for artifact removal are complex, computationally expensive, and difficult to implement for real-time noise removal. We describe an acquisition system and algorithm that enable real-time acquisition for the first time. The algorithm removes the particular frequencies of the EMG spectrum in which the noise is concentrated. Although this decreases the power content of the EMG, the method provides excellent estimates of EMG with good resolution. Comparisons show that the cleaned EMG obtained with the algorithm is, like actual EMG, very well correlated with joint torque and can thus be used for real-time visual feedback during functional studies.
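The described algorithm suppresses the frequency bands in which the scanner-induced noise is concentrated. A minimal sketch of that general idea is given below, using a comb of causal IIR notch filters; the sampling rate, artifact fundamental frequency, and number of harmonics are illustrative assumptions, not the authors' published parameters.

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

def notch_comb(emg: np.ndarray, fs: float, f0: float, n_harmonics: int,
               q: float = 30.0) -> np.ndarray:
    """Suppress narrow bands around f0 and its harmonics, where
    scanner-artifact power is assumed to be concentrated.

    fs: sampling rate [Hz]; f0: fundamental artifact frequency [Hz]
    (illustrative values only); q: notch quality factor. A causal
    lfilter is used so the scheme could run sample-by-sample in a
    real-time pipeline.
    """
    cleaned = emg.copy()
    for k in range(1, n_harmonics + 1):
        fk = k * f0
        if fk >= fs / 2:          # stay below the Nyquist frequency
            break
        b, a = iirnotch(w0=fk, Q=q, fs=fs)
        cleaned = lfilter(b, a, cleaned)
    return cleaned

# Example: 2 kHz EMG-like signal with an assumed 500 Hz scanner artifact.
fs = 2000.0
t = np.arange(0, 1.0, 1.0 / fs)
emg = 0.1 * np.random.randn(t.size) + 0.8 * np.sin(2 * np.pi * 500.0 * t)
clean = notch_comb(emg, fs=fs, f0=500.0, n_harmonics=3)
```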

Relevance:

30.00%

Publisher:

Abstract:

Animals repeat rewarded behaviors, but the physiological basis of reward-based learning has only been partially elucidated. On one hand, experimental evidence shows that the neuromodulator dopamine carries information about rewards and affects synaptic plasticity. On the other hand, the theory of reinforcement learning provides a framework for reward-based learning. Recent models of reward-modulated spike-timing-dependent plasticity have taken first steps toward bridging the gap between the two approaches, but faced two problems. First, reinforcement learning is typically formulated in a discrete framework, ill-adapted to the description of natural situations. Second, biologically plausible models of reward-modulated spike-timing-dependent plasticity require precise calculation of the reward prediction error, yet it remains to be shown how this can be computed by neurons. Here we propose a solution to these problems by extending the continuous temporal difference (TD) learning of Doya (2000) to the case of spiking neurons in an actor-critic network operating in continuous time, and with continuous state and action representations. In our model, the critic learns to predict expected future rewards in real time. Its activity, together with actual rewards, conditions the delivery of a neuromodulatory TD signal to itself and to the actor, which is responsible for action choice. In simulations, we show that such an architecture can solve a Morris water-maze-like navigation task in a number of trials consistent with reported animal performance. We also use our model to solve the acrobot and the cartpole problems, two complex motor control tasks. Our model provides a plausible way of computing reward prediction error in the brain. Moreover, the analytically derived learning rule is consistent with experimental evidence for dopamine-modulated spike-timing-dependent plasticity.
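The model builds on the continuous-time TD error of Doya (2000), delta(t) = r(t) - V(t)/tau + dV/dt, which vanishes when the critic V correctly predicts exponentially discounted future reward. The sketch below is a rate-based, non-spiking stand-in for that critic on a toy 1-D task; the feature map, learning rates, and task are illustrative assumptions rather than the paper's spiking actor-critic network.

```python
import numpy as np

def continuous_td_error(r, v, dv_dt, tau):
    """Continuous-time TD error in the sense of Doya (2000):
        delta(t) = r(t) - V(t)/tau + dV/dt(t).
    V(t) predicts exponentially discounted future reward with time
    constant tau; for a perfect critic, delta is zero at all times.
    """
    return r - v / tau + dv_dt

# Toy stand-in for the critic: a linear value function V = w . x(s) over
# radial-basis features of a 1-D state, trained with the TD signal and a
# low-pass eligibility trace of the features (playing the role the spiking
# critic neurons play in the paper's architecture).
centres = np.linspace(0.0, 1.0, 20)

def features(s, width=0.05):
    return np.exp(-(s - centres) ** 2 / (2 * width ** 2))

dt, tau, eta, trace_tau = 0.01, 1.0, 0.1, 0.2
w = np.zeros_like(centres)

for episode in range(200):
    s = 0.0
    e = np.zeros_like(w)
    v_prev = float(w @ features(s))
    while s < 1.0:
        s += 0.5 * dt                       # fixed policy: drift towards the goal
        r = 1.0 / dt if s >= 1.0 else 0.0   # impulse-like reward at the goal
        x = features(min(s, 1.0))
        v = float(w @ x)
        delta = continuous_td_error(r, v, (v - v_prev) / dt, tau)
        e += dt * (x - e / trace_tau)       # eligibility trace: de/dt = x - e/trace_tau
        w += eta * dt * delta * e           # critic learning rule driven by delta
        v_prev = v

print("learned value near the goal :", round(float(w @ features(0.95)), 2))
print("learned value near the start:", round(float(w @ features(0.05)), 2))
```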