3 results for BIOELECTRICAL-IMPEDANCE VECTOR in Boston University Digital Common
Abstract:
It is well documented that the presence of even a few air bubbles in water can significantly alter the propagation and scattering of sound. Air bubbles are both naturally and artificially generated in all marine environments, especially near the sea surface. The ability to measure the acoustic propagation parameters of bubbly liquids in situ has long been a goal of the underwater acoustics community. One promising solution is a submersible, thick-walled, liquid-filled impedance tube. Recent water-filled impedance tube work was successful at characterizing low void fraction bubbly liquids in the laboratory [1]. This work details the modifications made to the existing impedance tube design to allow for submersed deployment in a controlled environment, such as a large tank or a test pond. In addition to being submersible, the new device has a usable frequency range increased from 5-9 kHz to 1-16 kHz and requires no calibration. The opening of the new impedance tube is fitted with a large stainless steel flange to better define the boundary condition on the plane of the tube opening. The new device was validated against the classic theoretical result for the complex reflection coefficient of a tube opening fitted with an infinite flange. The complex reflection coefficient was then measured with a bubbly liquid (bubble radii on the order of 250 microns, void fractions of 0.1-0.5%) outside the tube opening. Results from the bubbly liquid experiments were inconsistent with flanged tube theory using current bubbly liquid models. The results were more closely matched by unflanged tube theory, suggesting that the high attenuation and phase speeds in the bubbly liquid made the tube opening appear as if it were radiating into free space.
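For orientation, the classic flanged-opening result mentioned in this abstract can be sketched numerically. The snippet below is a minimal illustration, not the thesis code: the function names, the fresh-water property values, the use of the baffled-piston radiation impedance for the infinitely flanged opening, and the use of Wood's low-frequency equation as a stand-in for "bubbly liquid models" are all assumptions introduced here.

```python
import numpy as np
from scipy.special import j1, struve

def flanged_reflection(f, a, c=1482.0):
    """Complex pressure reflection coefficient at the open end of a circular
    tube of radius a terminated by an infinite rigid flange, approximated by
    the baffled-piston radiation impedance (textbook result; the thesis may
    use a more exact flanged-waveguide solution)."""
    k = 2.0 * np.pi * np.asarray(f, dtype=float) / c
    x = 2.0 * k * a
    # Normalised radiation impedance z_r / (rho*c/S) of a baffled circular piston
    zr = (1.0 - 2.0 * j1(x) / x) + 1j * (2.0 * struve(1, x) / x)
    # Plane-wave reflection coefficient relative to the tube's characteristic impedance
    return (zr - 1.0) / (zr + 1.0)

def wood_sound_speed(beta, c_l=1482.0, rho_l=998.0, c_g=343.0, rho_g=1.2):
    """Wood's low-frequency sound speed in a bubbly liquid with void fraction
    beta, ignoring bubble resonance effects (a rough sanity check only)."""
    kappa_mix = (1.0 - beta) / (rho_l * c_l**2) + beta / (rho_g * c_g**2)
    rho_mix = (1.0 - beta) * rho_l + beta * rho_g
    return 1.0 / np.sqrt(rho_mix * kappa_mix)
```

At 1-16 kHz with a tube radius of a few centimetres, ka is small, so this approximation gives a reflection coefficient with magnitude close to 1, which is one way to see why the boundary condition defined by the flange matters at the tube opening.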
Abstract:
This article describes neural network models for adaptive control of arm movement trajectories during visually guided reaching and, more generally, a framework for unsupervised real-time error-based learning. The models clarify how a child, or untrained robot, can learn to reach for objects that it sees. Piaget has provided basic insights with his concept of a circular reaction: as an infant makes internally generated movements of its hand, the eyes automatically follow this motion. A transformation is learned between the visual representation of hand position and the motor representation of hand position. Learning of this transformation eventually enables the child to accurately reach for visually detected targets. Grossberg and Kuperstein have shown how the eye movement system can use visual error signals to correct movement parameters via cerebellar learning. Here it is shown how endogenously generated arm movements lead to adaptive tuning of arm control parameters. These movements also activate the target position representations that are used to learn the visuo-motor transformation that controls visually guided reaching. The AVITE model presented here is an adaptive neural circuit based on the Vector Integration to Endpoint (VITE) model for arm and speech trajectory generation of Bullock and Grossberg. In the VITE model, a Target Position Command (TPC) represents the location of the desired target. The Present Position Command (PPC) encodes the present hand-arm configuration. The Difference Vector (DV) population continuously computes the difference between the PPC and the TPC. A speed-controlling GO signal multiplies DV output. The PPC integrates the (DV)·(GO) product and generates an outflow command to the arm. Integration at the PPC continues at a rate dependent on GO signal size until the DV reaches zero, at which time the PPC equals the TPC. The AVITE model explains how self-consistent TPC and PPC coordinates are autonomously generated and learned. Learning of AVITE parameters is regulated by activation of a self-regulating Endogenous Random Generator (ERG) of training vectors. Each vector is integrated at the PPC, giving rise to a movement command. The generation of each vector induces a complementary postural phase during which ERG output stops and learning occurs. Then a new vector is generated and the cycle is repeated. This cyclic, biphasic behavior is controlled by a specialized gated dipole circuit. ERG output autonomously stops in such a way that, across trials, a broad sample of workspace target positions is generated. When the ERG shuts off, a modulator gate opens, copying the PPC into the TPC. Learning of a transformation from TPC to PPC occurs using the DV as an error signal that is zeroed due to learning. This learning scheme is called a Vector Associative Map, or VAM. The VAM model is a general-purpose device for autonomous real-time error-based learning and performance of associative maps. The DV stage serves the dual function of reading out new TPCs during performance and reading in new adaptive weights during learning, without a disruption of real-time operation. VAMs thus provide an on-line unsupervised alternative to the off-line properties of supervised error-correction learning algorithms. VAMs and VAM cascades for learning motor-to-motor and spatial-to-motor maps are described.
VAM models and Adaptive Resonance Theory (ART) models exhibit complementary matching, learning, and performance properties that together provide a foundation for designing a total sensory-cognitive and cognitive-motor autonomous system.
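To make the VITE and VAM dynamics described in this abstract concrete, here is a minimal numerical sketch. It is not the authors' implementation: the linear GO ramp, the single rectified (agonist-only) channel, the delta-rule form of the weight update, the learning rate, and all function names are assumptions introduced for illustration; the equations simply follow the verbal description of DV, GO, PPC integration, and DV-as-error-signal learning given above.

```python
import numpy as np

def vite_trajectory(tpc, ppc0, go_rate=2.0, gamma=30.0, dt=0.001, t_end=1.0):
    """One-channel sketch of the VITE dynamics:
         dDV/dt  = gamma * (-DV + TPC - PPC)   # DV tracks the TPC - PPC difference
         dPPC/dt = GO(t) * [DV]^+              # PPC integrates the gated, rectified DV
    GO(t) is modelled here as a simple linear ramp (assumption)."""
    tpc = np.asarray(tpc, dtype=float)
    ppc = np.array(ppc0, dtype=float)
    dv = np.zeros_like(ppc)
    steps = int(t_end / dt)
    traj = np.empty((steps,) + ppc.shape)
    for i in range(steps):
        go = go_rate * i * dt                    # ramping, speed-controlling GO signal
        dv += dt * gamma * (-dv + tpc - ppc)     # difference vector dynamics
        ppc += dt * go * np.maximum(dv, 0.0)     # outflow movement command
        traj[i] = ppc
    return traj

def vam_update(w, target_rep, ppc, lr=0.1):
    """One VAM-style learning step (sketch): the mismatch between the actual
    PPC and the map's prediction acts as the DV error signal, and the weights
    are adjusted so that this error is driven toward zero across trials."""
    target_rep = np.asarray(target_rep, dtype=float)
    ppc = np.asarray(ppc, dtype=float)
    dv = ppc - w @ target_rep                    # DV used as the error signal
    return w + lr * np.outer(dv, target_rep)     # delta-rule weight change (assumption)
```

A call such as vite_trajectory(tpc=[1.0], ppc0=[0.0]) shows the PPC rising smoothly toward the TPC and stopping once the DV reaches zero, as the abstract describes.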
Abstract:
This article compares the performance of Fuzzy ARTMAP with that of Learned Vector Quantization and Back Propagation on a handwritten character recognition task. Training Fuzzy ARTMAP to a fixed criterion required many fewer epochs. Voting with Fuzzy ARTMAP yielded the highest recognition rates.
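The voting result can be illustrated with a small sketch. The snippet below is an illustration only, not the paper's code; the function name is an assumption, and the framing of voting as a majority vote over several Fuzzy ARTMAP networks trained on different orderings of the training set is my reading of the scheme the abstract refers to.

```python
from collections import Counter

def vote(predictions):
    """Majority vote over per-sample predictions from several independently
    trained classifiers (e.g., Fuzzy ARTMAP networks trained on different
    orderings of the training set). `predictions` is a list of label lists,
    one inner list per classifier, all of equal length."""
    return [Counter(labels).most_common(1)[0][0] for labels in zip(*predictions)]
```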