978 results for Sensory system
Abstract:
The lateral line system allows elasmobranchs to detect hydrodynamic movements in their close surroundings. We examined the distribution of pit organs and lateral line canals in 4 species of sawfish (Anoxypristis cuspidata, Pristis microdon, P. clavata and P. zijsron). Pit organs could only be located in A. cuspidata, which possesses elongated pits that are lined by dermal denticles. In all 4 pristid species, the lateral line canals are well developed and were separated into regions of pored and non-pored canals. In all species the tubules that extend from pored canals form extensive networks. In A. cuspidata, P. microdon and P. clavata, the lateral line canals on both the dorsal and ventral surfaces of the rostrum possess extensively branched and pored tubules. Based on this morphological observation, we hypothesized that these 3 species do not use their rostrum to search in the substrate for prey as previously assumed. Other batoids that possess lateral line canals adapted to perceive stimuli produced by infaunal prey possess non-pored lateral line canals, which also prevent the intrusion of substrate particles. However, this hypothesis remains to be tested behaviourally in pristids. Lateral line canals located between the mouth and the nostrils are non-pored in all 4 species of sawfish. Thus this region is hypothesized to perceive stimuli caused by direct contact with prey before ingestion. Lateral line canals that contain neuromasts are longest in P. microdon, but canals containing neuromasts along the rostrum are longest in A. cuspidata.
Abstract:
Using neuromorphic analog VLSI techniques for modeling large neural systems has several advantages over software techniques. Because they implement the massively parallel analog circuit arrays that are ubiquitous in neural systems, analog VLSI models are extremely fast, particularly when local interactions are important in the computation. While analog VLSI circuits are not as flexible as software methods, the constraints posed by this approach are often very similar to the constraints faced by biological systems. As a result, these constraints can offer many insights into the solutions found by evolution. This dissertation describes a hardware modeling effort to mimic the primate oculomotor system, which requires both fast sensory processing and fast motor control. A one-dimensional hardware model of the primate eye has been built which simulates the physical dynamics of the biological system. It is driven by analog VLSI circuits mimicking brainstem and cortical circuits that control eye movements. In this framework, a visually-triggered saccadic system is demonstrated which generates averaging saccades. In addition, an auditory localization system, based on the neural circuits of the barn owl, is used to trigger saccades to acoustic targets in parallel with visual targets. Two different types of learning are also demonstrated on the saccadic system using floating-gate technology, which allows non-volatile storage of analog parameters directly on the chip. Finally, a model of visual attention is used to select and track moving targets against textured backgrounds, driving both saccadic and smooth pursuit eye movements to maintain the image of the target in the center of the field of view. This system represents one of the few efforts in this field to integrate both neuromorphic sensory processing and motor control in a closed-loop fashion.
Abstract:
Modern theories of motor control incorporate forward models that combine sensory information and motor commands to predict future sensory states. Such models circumvent unavoidable neural delays associated with on-line feedback control. Here we show that signals in human muscle spindle afferents during unconstrained wrist and finger movements predict future kinematic states of their parent muscle. Specifically, we show that the discharges of type Ia afferents are best correlated with the velocity of length changes in their parent muscles approximately 100-160 ms in the future and that their discharges vary depending on motor sequences in a way that cannot be explained by the state of their parent muscle alone. We therefore conclude that muscle spindles can act as "forward sensory models": they are affected both by the current state of their parent muscle and by efferent (fusimotor) control, and their discharges represent future kinematic states. If this conjecture is correct, then sensorimotor learning implies learning how to control not only the skeletal muscles but also the fusimotor system.
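The lag analysis described above can be sketched numerically: cross-correlating an afferent-like firing rate against its muscle's velocity at a range of candidate lags recovers the predictive lead as the lag of maximal correlation. All signals and constants below are synthetic placeholders for illustration, not data or parameters from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                                   # 10 ms sampling
t = np.arange(0, 10, dt)
velocity = np.sin(2 * np.pi * 0.8 * t)      # muscle-length velocity
lead = 0.13                                 # built-in 130 ms predictive lead
rate = np.interp(t + lead, t, velocity) + 0.1 * rng.standard_normal(t.size)

# correlate the firing rate with velocity shifted into the future by each lag
lags = np.arange(0.0, 0.3, dt)
corrs = [np.corrcoef(rate[:t.size - int(round(l / dt))],
                     velocity[int(round(l / dt)):])[0, 1]
         for l in lags]
best = lags[int(np.argmax(corrs))]
print(f"best lead = {best * 1000:.0f} ms")  # recovers the built-in ~130 ms
```

The same scan applied to real spindle recordings would, per the abstract, peak somewhere in the 100-160 ms range for type Ia afferents.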
Abstract:
MRGX2, a G-protein-coupled receptor, is specifically expressed in the sensory neurons of the human peripheral nervous system and is involved in nociception. Here, we studied DNA polymorphism patterns and the evolution of the MRGX2 gene in world-wide human populations and representative nonhuman primate species. Our results demonstrate that MRGX2 underwent adaptive changes during human evolution, likely driven by Darwinian positive selection. The patterns of DNA sequence polymorphism in human populations showed an excess of derived substitutions, contrary to the expectation of neutral evolution, implying that the adaptive evolution of MRGX2 in humans was a relatively recent event. The reconstructed secondary structure of human MRGX2 revealed that three of the four human-specific amino acid substitutions are located in the extra-cellular domains. Such critical substitutions may alter the interactions between the MRGX2 protein and its ligand, potentially leading to adaptive changes in the pain-perception-related nervous system during human evolution. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
This paper addresses the role of sensory feedback in rhythmic tasks. We study the properties of a sinusoidally vibrating wedge-billiard as a model for 2-D bounce juggling. If the wedge is actuated with a harmonic sinusoidal input, some periodic orbits have been shown to be exponentially stable. This paper explores an intuitive method to enlarge the parametric stability region of the simplest of these orbits. Accurate processing of timing is shown to be key to achieving frequency-locking in rhythmic tasks. © 2005 IEEE.
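The frequency-locking at issue can be illustrated with an even simpler system than the wedge-billiard: the classical one-dimensional ball bouncing on a sinusoidally vibrating table, using the standard high-bounce impact map in which the table's displacement is neglected at impact. The parameters below are chosen so that the period-one orbit is stable; they are illustrative, not taken from the paper.

```python
import math

e, g = 0.8, 9.81           # restitution coefficient, gravity (m/s^2)
f = 5.0                    # table forcing frequency (Hz)
w = 2 * math.pi * f
W = 0.2                    # table peak speed A*w (m/s)

t, v = 0.0, 1.1            # impact time and rebound speed (initial guess)
for _ in range(200):
    t += 2 * v / g                                 # ballistic flight time
    v = e * v + (1 + e) * W * math.cos(w * t)      # impulsive kick at impact

flight = 2 * v / g
print(flight)   # converges to ~0.2 s: one bounce per forcing period
```

Starting away from the orbit, the impact map spirals into the fixed point whose flight time equals the forcing period, which is exactly the locked rhythmic bounce the paper's stability analysis concerns.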
Abstract:
Guided self-organization can be regarded as a paradigm for understanding how to guide a self-organizing system towards desirable behaviors while maintaining its non-deterministic dynamics with emergent features. It is, however, not a trivial problem to guide the self-organizing behavior of physically embodied systems such as robots, because their behavioral dynamics result from interactions among the controller, the mechanical dynamics of the body, and the environment. This paper presents a guided self-organization approach for dynamic robots based on coupling the system's mechanical dynamics with an internal control structure known as the attractor selection mechanism. The mechanism enables the robot to gracefully shift between random and deterministic behaviors, represented by a number of attractors, depending on internally generated stochastic perturbation and sensory input. The robot used in this paper is a simulated curved-beam hopping robot: a system with a variety of mechanical dynamics that depend on its actuation frequencies. Despite the simplicity of the approach, we show how it regulates the probability that the robot reaches a goal through the interplay among the sensory input, the level of inherent stochastic perturbation, i.e., noise, and the mechanical dynamics. © 2014 by the authors; licensee MDPI, Basel, Switzerland.
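The attractor selection mechanism can be sketched in a few lines: a state variable follows attractor dynamics whose deterministic pull is scaled by an "activity" signal, on top of constant noise. High activity makes behavior deterministic (the state settles into one attractor); low activity leaves it to wander randomly. The double-well form and all constants here are illustrative stand-ins, not the paper's equations.

```python
import math
import random

def step(x, activity, dt=0.01, noise=0.2):
    # two attractors at x = +1 and x = -1 (gradient of a double well);
    # 'activity' scales the deterministic pull relative to the noise
    drift = activity * (x - x ** 3)
    return x + drift * dt + noise * math.sqrt(dt) * random.gauss(0.0, 1.0)

random.seed(1)
x = 0.1
for _ in range(5000):
    x = step(x, activity=5.0)   # high activity: one attractor is selected
print(x)                        # ends close to +1 or -1
```

Lowering `activity` toward zero leaves only the noise term, so the same code then produces the random exploratory regime between which the robot's controller gracefully shifts.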
Abstract:
A system for visual recognition is described, with implications for the general problem of representation of knowledge to assist control. The immediate objective is a computer system that will recognize objects in a visual scene, specifically hammers. The computer receives an array of light intensities from a device like a television camera. It is to locate and identify the hammer if one is present. The computer must produce from the numerical "sensory data" a symbolic description that constitutes its perception of the scene. Of primary concern is the control of the recognition process. Control decisions should be guided by the partial results obtained on the scene. If a hammer handle is observed this should suggest that the handle is part of a hammer and advise where to look for the hammer head. The particular knowledge that a handle has been found combines with general knowledge about hammers to influence the recognition process. This use of knowledge to direct control is denoted here by the term "active knowledge". A descriptive formalism is presented for visual knowledge which identifies the relationships relevant to the active use of the knowledge. A control structure is provided which can apply knowledge organized in this fashion actively to the processing of a given scene.
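The "active knowledge" control idea can be caricatured in a few lines: finding a handle invokes stored hammer knowledge, which predicts where the head should be, and the prediction is then checked against the scene. The scene contents, coordinates, and tolerance are invented for illustration and are not the system's actual representation.

```python
HAMMER_MODEL = {
    # knowledge attached to the part: what it suggests and where to look next
    "handle": {"suggests": "hammer", "next_part": "head", "offset": (0, -40)},
}

def recognize(scene):
    for part, (x, y) in scene.items():
        rule = HAMMER_MODEL.get(part)
        if rule is None:
            continue
        # active use of knowledge: the partial result directs the next search
        px, py = x + rule["offset"][0], y + rule["offset"][1]
        nx, ny = scene.get(rule["next_part"], (None, None))
        if nx is not None and abs(nx - px) < 10 and abs(ny - py) < 10:
            return f"{rule['suggests']} at ({x}, {y})"
    return "no hammer"

scene = {"handle": (120, 200), "head": (120, 160)}
print(recognize(scene))   # -> hammer at (120, 200)
```

The point of the sketch is the control flow: the partial result ("a handle was found") combines with general object knowledge to decide where processing goes next, rather than scanning the whole image bottom-up.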
Abstract:
Lee, M., Barnes, D. P., Hardy, N. (1985). Research into error recovery for sensory robots. Sensor Review, 5 (4), 194-197.
Abstract:
Barnes, D. P., Lee, M. H., Hardy, N. W. (1983). A control and monitoring system for multiple-sensor industrial robots. In Proc. 3rd. Int. Conf. Robot Vision and Sensory Controls, Cambridge, MA. USA., 471-479.
Abstract:
Personal communication devices are increasingly being equipped with sensors that are able to passively collect information from their surroundings – information that could be stored in fairly small local caches. We envision a system in which users of such devices use their collective sensing, storage, and communication resources to query the state of (possibly remote) neighborhoods. The goal of such a system is to achieve the highest query success ratio using the least communication overhead (power). We show that the use of Data Centric Storage (DCS), or directed placement, is a viable approach for achieving this goal, but only when the underlying network is well connected. Alternatively, we propose amorphous placement, in which sensory samples are cached locally and informed exchanges of cached samples are used to diffuse the sensory data throughout the whole network. In handling queries, the local cache is searched first for potential answers. If unsuccessful, the query is forwarded to one or more direct neighbors for answers. This technique leverages node mobility and caching capabilities to avoid the multi-hop communication overhead of directed placement. Using a simplified mobility model, we provide analytical lower and upper bounds on the ability of amorphous placement to achieve uniform field coverage in one and two dimensions. We show that combining informed shuffling of cached samples upon an encounter between two nodes with the querying of direct neighbors can lead to significant performance improvements. For instance, under realistic mobility models, our simulation experiments show that amorphous placement achieves a 10% to 40% better query answering ratio at a 25% to 35% savings in consumed power over directed placement.
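The amorphous-placement query path (local cache first, then direct neighbors only, no multi-hop routing) can be sketched as follows. Node names, sample keys, and the topology are toy values, not the paper's simulation setup.

```python
def answer_query(node, key, caches, neighbors):
    # 1. search the local cache first
    if key in caches[node]:
        return node
    # 2. if unsuccessful, forward to direct neighbors only (one hop),
    #    avoiding the multi-hop overhead of directed placement
    for nb in neighbors[node]:
        if key in caches[nb]:
            return nb
    return None                      # query fails for now

caches = {"a": {"temp@park"}, "b": {"noise@dock"}, "c": set()}
neighbors = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(answer_query("c", "noise@dock", caches, neighbors))   # -> b
```

A failed query is not routed further; instead, node mobility plus informed shuffling of cache contents on encounters gradually diffuses samples so that later queries succeed locally or one hop away.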
Abstract:
Both animals and mobile robots, or animats, need adaptive control systems to guide their movements through a novel environment. Such control systems need reactive mechanisms for exploration, and learned plans to efficiently reach goal objects once the environment is familiar. How reactive and planned behaviors interact together in real time, and are released at the appropriate times, during autonomous navigation remains a major unsolved problem. This work presents an end-to-end model to address this problem, named SOVEREIGN: A Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation system. The model comprises several interacting subsystems, governed by systems of nonlinear differential equations. As the animat explores the environment, a vision module processes visual inputs using networks that are sensitive to visual form and motion. Targets processed within the visual form system are categorized by real-time incremental learning. Simultaneously, visual target position is computed with respect to the animat's body. Estimates of target position activate a motor system to initiate approach movements toward the target. Motion cues from animat locomotion can elicit orienting head or camera movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement, based on both visual and proprioceptive cues, are stored within a motor working memory. Sensory cues are stored in a parallel sensory working memory. These working memories trigger learning of sensory and motor sequence chunks, which together control planned movements. Effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded. The planning chunks effect a gradual transition from reactive to planned behavior.
The model can read out different motor sequences under different motivational states and learns more efficient paths to rewarded goals as exploration proceeds. Several volitional signals automatically gate the interactions between model subsystems at appropriate times. A 3-D visual simulation environment reproduces the animat's sensory experiences as it moves through a simplified spatial environment. The SOVEREIGN model exhibits robust goal-oriented learning of sequential motor behaviors. Its biomimetic structure explicates a number of brain processes which are involved in spatial navigation.
Abstract:
How do reactive and planned behaviors interact in real time? How are sequences of such behaviors released at appropriate times during autonomous navigation to realize valued goals? Controllers for both animals and mobile robots, or animats, need reactive mechanisms for exploration, and learned plans to reach goal objects once an environment becomes familiar. The SOVEREIGN (Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation) animat model embodies these capabilities, and is tested in a 3D virtual reality environment. SOVEREIGN includes several interacting subsystems which model complementary properties of cortical What and Where processing streams and which clarify similarities between mechanisms for navigation and arm movement control. As the animat explores an environment, visual inputs are processed by networks that are sensitive to visual form and motion in the What and Where streams, respectively. Position-invariant and size-invariant recognition categories are learned by real-time incremental learning in the What stream. Estimates of target position relative to the animat are computed in the Where stream, and can activate approach movements toward the target. Motion cues from animat locomotion can elicit head-orienting movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement are derived from interacting proprioceptive and visual cues. Movement sequences are stored within a motor working memory. Sequences of visual categories are stored in a sensory working memory. These working memories trigger learning of sensory and motor sequence categories, or plans, which together control planned movements. Predictively effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded.
Selected planning chunks effect a gradual transition from variable reactive exploratory movements to efficient goal-oriented planned movement sequences. Volitional signals gate interactions between model subsystems and the release of overt behaviors. The model can control different motor sequences under different motivational states and learns more efficient sequences to rewarded goals as exploration proceeds.
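The gradual, reinforcement-gated hand-off from reactive exploration to planned movement described in both SOVEREIGN abstracts can be caricatured in a few lines: a plan chunk's strength grows with reward, and a volitional gate releases the planned behavior only once that strength crosses a threshold. The update rule, threshold, and learning rate are illustrative, not the model's differential equations.

```python
plan_strength = 0.0          # learned utility of a plan chunk

def choose():
    # volitional gate: planned behavior is released only after the chunk
    # has accumulated enough reinforcement; otherwise explore reactively
    return "planned_route" if plan_strength > 0.5 else "explore"

history = []
for trial in range(10):
    history.append(choose())
    reward = 1.0             # the animat reaches the rewarded goal each trial
    plan_strength += 0.2 * (reward - plan_strength)   # reinforce the chunk

print(history[0], "->", history[-1])   # explore -> planned_route
```

Early trials are reactive; as reward accumulates, the gate opens and the same loop emits the planned sequence, mirroring the transition from variable exploratory movements to efficient goal-oriented plans.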
Abstract:
This article describes how corollary discharges from outflow eye movement commands can be transformed by two stages of opponent neural processing into a head-centered representation of 3-D target position. This representation implicitly defines a cyclopean coordinate system whose variables approximate the binocular vergence and spherical horizontal and vertical angles with respect to the observer's head. Various psychophysical data concerning binocular distance perception and reaching behavior are clarified by this representation. The representation provides a foundation for learning head-centered and body-centered invariant representations of both foveated and non-foveated 3-D target positions. It also enables a solution to be developed of the classical motor equivalence problem, whereby many different joint configurations of a redundant manipulator can all be used to realize a desired trajectory in 3-D space.
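The cyclopean coordinate system can be made concrete with a small geometric sketch: for a symmetric fixation geometry, the vergence angle fixes the target's radial distance, and the horizontal and vertical angles fix its direction in head-centered space. The interocular distance and the simplified symmetric geometry are assumptions of this sketch, not values from the article.

```python
import math

def cyclopean_to_cartesian(vergence, azimuth, elevation, interocular=0.065):
    # distance of the target from the cyclopean point, assuming the two
    # lines of sight meet symmetrically at the vergence angle (radians)
    d = (interocular / 2.0) / math.tan(vergence / 2.0)
    x = d * math.cos(elevation) * math.sin(azimuth)   # rightward
    y = d * math.sin(elevation)                       # upward
    z = d * math.cos(elevation) * math.cos(azimuth)   # straight ahead
    return x, y, z

# a target straight ahead at about 0.5 m requires roughly 7.4 deg of vergence
x, y, z = cyclopean_to_cartesian(math.radians(7.4), 0.0, 0.0)
print(round(z, 2))   # -> 0.5
```

Note the inverse relation between vergence and distance: halving the distance roughly doubles the required vergence angle, which is why vergence is a useful distance signal mainly at near range.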
Abstract:
Humans and song-learning birds communicate acoustically using learned vocalizations. The characteristic features of this social communication behavior include vocal control by forebrain motor areas, a direct cortical projection to brainstem vocal motor neurons, and dependence on auditory feedback to develop and maintain learned vocalizations. These features have so far not been found in closely related primate and avian species that do not learn vocalizations. Male mice produce courtship ultrasonic vocalizations with acoustic features similar to songs of song-learning birds. However, it is assumed that mice lack a forebrain system for vocal modification and that their ultrasonic vocalizations are innate. Here we investigated the mouse song system and discovered that it includes a motor cortex region active during singing, that projects directly to brainstem vocal motor neurons and is necessary for keeping song more stereotyped and on pitch. We also discovered that male mice depend on auditory feedback to maintain some ultrasonic song features, and that sub-strains with differences in their songs can match each other's pitch when cross-housed under competitive social conditions. We conclude that male mice have some limited vocal modification abilities with at least some neuroanatomical features thought to be unique to humans and song-learning birds. To explain our findings, we propose a continuum hypothesis of vocal learning.
Abstract:
Satiety and other core physiological functions are modulated by sensory signals arising from the surface of the gut. Luminal nutrients and bacteria stimulate epithelial biosensors called enteroendocrine cells. Despite being electrically excitable, enteroendocrine cells are generally thought to communicate indirectly with nerves through hormone secretion and not through direct cell-nerve contact. However, we recently uncovered in intestinal enteroendocrine cells a cytoplasmic process that we named neuropod. Here, we determined that neuropods provide a direct connection between enteroendocrine cells and neurons innervating the small intestine and colon. Using cell-specific transgenic mice to study neural circuits, we found that enteroendocrine cells have the necessary elements for neurotransmission, including expression of genes that encode pre-, post-, and transsynaptic proteins. This neuroepithelial circuit was reconstituted in vitro by coculturing single enteroendocrine cells with sensory neurons. We used a monosynaptic rabies virus to define the circuit's functional connectivity in vivo and determined that delivery of this neurotropic virus into the colon lumen resulted in the infection of mucosal nerves through enteroendocrine cells. This neuroepithelial circuit can serve as both a sensory conduit for food and gut microbes to interact with the nervous system and a portal for viruses to enter the enteric and central nervous systems.