988 results for insect visual guidance
Abstract:
This work seeks to identify the factors behind the choices that young people with special educational needs arising from visual impairment make during the transition from high school to higher education. We take into account that vocational guidance and the transition to adulthood acquire specific characteristics for visually impaired young people, particularly with respect to continuing on to higher education. The focus of this work is to clarify which factors ease or hinder this transition, through the observation of visually impaired and blind people who complete high school. This question has raised interest and concern about the strategies needed to ensure successful entry into, and persistence in, the chosen form of higher education. However, without knowing the factors involved, it is difficult to design an appropriate intervention strategy. Therefore, in order to learn about the specific issues faced by visually impaired young people who complete high school, we chose a special school for this disability and a group of its students to take part in this project.
Abstract:
This paper describes a novel vision-based texture tracking method to guide autonomous vehicles in agricultural fields where the crop rows are challenging to detect. Existing methods require sufficient visual difference between the crop and soil for segmentation, or explicit knowledge of the structure of the crop rows. This method works by extracting and tracking the direction and lateral offset of the dominant parallel texture in a simulated overhead view of the scene and hence abstracts away crop-specific details such as colour, spacing and periodicity. The results demonstrate that the method is able to track crop rows across fields with extremely varied appearance during day and night. We demonstrate that this method can autonomously guide a robot along the crop rows.
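The core idea of extracting a dominant parallel texture direction can be sketched with a gradient-orientation histogram: image gradients of parallel rows cluster around a single axial orientation, whose peak gives the row direction. The sketch below is an illustrative toy, assuming a grayscale overhead view; it is not the authors' tracking pipeline.

```python
import numpy as np

def dominant_texture_direction(img):
    """Estimate the dominant gradient orientation (radians in [0, pi)) of a
    grayscale image. Parallel crop rows produce a strong, narrow peak in this
    histogram, orthogonal to the row direction."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    # Fold orientations into [0, pi): texture direction is axial (no sign).
    theta = np.mod(np.arctan2(gy, gx), np.pi)
    # Magnitude-weighted histogram so strong edges dominate.
    hist, edges = np.histogram(theta, bins=180, range=(0, np.pi), weights=mag)
    peak = np.argmax(hist)
    return (edges[peak] + edges[peak + 1]) / 2.0

# Synthetic overhead view: vertical stripes, so gradients point horizontally
# and the dominant gradient orientation is near 0.
cols = np.arange(100)
img = np.tile(np.sin(cols * 0.5), (100, 1))
angle = dominant_texture_direction(img)
```

A full tracker would additionally estimate the lateral offset of the rows (e.g. from the phase of the texture along the orthogonal axis) and filter both quantities over time.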
Abstract:
Background In vision, there is a trade-off between sensitivity and resolution, and any eye which maximises information gain at low light levels needs to be large. This imposes exacting constraints upon vision in nocturnal flying birds. Eyes are essentially heavy, fluid-filled chambers, and in flying birds their increased size is countered by selection for both reduced body mass and the distribution of mass towards the body core. Freed from these mass constraints, it would be predicted that in flightless birds nocturnality should favour the evolution of large eyes and reliance upon visual cues for the guidance of activity. Methodology/Principal Findings We show that in Kiwi (Apterygidae), flightlessness and nocturnality have, in fact, resulted in the opposite outcome. Kiwi show minimal reliance upon vision indicated by eye structure, visual field topography, and brain structures, and increased reliance upon tactile and olfactory information. Conclusions/Significance This lack of reliance upon vision and increased reliance upon tactile and olfactory information in Kiwi is markedly similar to the situation in nocturnal mammals that exploit the forest floor. That Kiwi and mammals evolved to exploit these habitats quite independently provides evidence for convergent evolution in their sensory capacities that are tuned to a common set of perceptual challenges found in forest floor habitats at night and which cannot be met by the vertebrate visual system. We propose that the Kiwi visual system has undergone adaptive regressive evolution driven by the trade-off between the relatively low rate of gain of visual information that is possible at low light levels, and the metabolic costs of extracting that information.
Abstract:
Opsins are ancient molecules that enable animal vision by coupling to a vitamin-derived chromophore to form light-sensitive photopigments. The primary drivers of evolutionary diversification in opsins are thought to be visual tasks related to spectral sensitivity and color vision. Typically, only a few opsin amino acid sites affect photopigment spectral sensitivity. We show that opsin genes of the North American butterfly Limenitis arthemis have diversified along a latitudinal cline, consistent with natural selection due to environmental factors. We sequenced single nucleotide polymorphisms (SNPs) in the coding regions of the ultraviolet (UVRh), blue (BRh), and long-wavelength (LWRh) opsin genes from ten butterfly populations along the eastern United States and found that a majority of opsin SNPs showed significant clinal variation. Outlier detection and analysis of molecular variance indicated that many SNPs are under balancing selection and show significant population structure. This contrasts with what we found by analysing SNPs in the wingless and EF-1 alpha loci, and from neutral amplified fragment length polymorphisms, which show no evidence of significant locus-specific or genome-wide structure among populations. Using a combination of functional genetic and physiological approaches, including expression in cell culture, transgenic Drosophila, UV-visible spectroscopy, and optophysiology, we show that key BRh opsin SNPs that vary clinally have almost no effect on spectral sensitivity. Our results suggest that opsin diversification in this butterfly is more consistent with natural selection unrelated to spectral tuning. Some of the clinally varying SNPs may instead play a role in regulating opsin gene expression levels or the thermostability of the opsin protein. Lastly, we discuss the possibility that insect opsins might have important, yet-to-be-elucidated, adaptive functions in mediating animal responses to abiotic factors, such as temperature or photoperiod.
Abstract:
The use of UAVs for remote sensing tasks (e.g. agriculture, search and rescue) is increasing. The ability of UAVs to autonomously find a target and perform on-board decision making, such as descending to a new altitude or landing next to a target, is a desired capability. Computer-vision functionality allows the Unmanned Aerial Vehicle (UAV) to follow a designated flight plan, detect an object of interest, and change its planned path. In this paper we describe a low-cost, open-source system in which all image processing is performed on-board the UAV using a Raspberry Pi 2 microprocessor interfaced with a camera. The Raspberry Pi and the autopilot are physically connected through serial and communicate via MAVProxy. The Raspberry Pi continuously monitors the flight path in real time through a USB camera module. The algorithm checks whether the target has been captured. If the target is detected, the position of the object in the frame is represented in Cartesian coordinates and converted into estimated GPS coordinates. In parallel, the autopilot receives the approximate GPS location of the target and makes a decision to guide the UAV to a new location. This system also has potential uses in precision agriculture, such as detecting plant pests and disease outbreaks, which cause detrimental financial damage to crop yields if not detected early. Results show the algorithm detects 99% of objects of interest and that the UAV is capable of navigating and making decisions on-board.
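The pixel-to-GPS conversion step described above can be sketched as a flat-ground geometric projection: at a known altitude, a downward-facing camera's field of view fixes the ground distance covered per pixel, and the pixel offset from the image centre is rotated by the UAV heading into north/east metres. All camera parameters and coordinates below are illustrative assumptions, not values from the paper.

```python
import math

M_PER_DEG_LAT = 111_320.0  # approximate metres per degree of latitude

def pixel_to_gps(px, py, img_w, img_h, fov_h, fov_v, alt, heading, lat, lon):
    """Rough GPS estimate for a detection at pixel (px, py) from a
    downward-facing camera. alt is height above ground (m); heading is
    degrees clockwise from north; fov_h/fov_v are field-of-view angles
    in degrees. Assumes flat ground and a level camera."""
    # Metres of ground covered by one pixel at this altitude.
    mx = 2 * alt * math.tan(math.radians(fov_h) / 2) / img_w
    my = 2 * alt * math.tan(math.radians(fov_v) / 2) / img_h
    # Offset from the image centre in the camera frame (x right, y forward).
    cam_x = (px - img_w / 2) * mx
    cam_y = (img_h / 2 - py) * my
    # Rotate the camera-frame offset into north/east by the UAV heading.
    h = math.radians(heading)
    north = cam_y * math.cos(h) - cam_x * math.sin(h)
    east = cam_y * math.sin(h) + cam_x * math.cos(h)
    return (lat + north / M_PER_DEG_LAT,
            lon + east / (M_PER_DEG_LAT * math.cos(math.radians(lat))))

# A detection at the exact image centre maps back to the UAV's own position.
est = pixel_to_gps(320, 240, 640, 480, 60.0, 45.0, 20.0, 90.0, -27.5, 153.0)
```

In a real system the resulting coordinate would be sent to the autopilot (e.g. as a guided-mode waypoint over MAVLink), and lens distortion and attitude would be corrected before projection.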
Abstract:
Bactrocera tryoni (Froggatt) is Australia's major horticultural insect pest, yet monitoring females remains logistically difficult. We trialled the ‘Ladd trap’ as a potential female surveillance or monitoring tool. This trap design is used to trap and monitor fruit flies in countries other than Australia (e.g. the USA). The Ladd trap consists of a flat yellow panel (a traditional ‘sticky trap’), with a three-dimensional red sphere (= a fruit mimic) attached in the middle. We confirmed, in field-cage trials, that the combination of yellow panel and red sphere was more attractive to B. tryoni than the two components in isolation. In a second set of field-cage trials, we showed that it was the red-yellow contrast, rather than the three-dimensional effect, which was responsible for the trap's effectiveness, with B. tryoni equally attracted to a Ladd trap as to a two-dimensional yellow panel with a circular red centre. The sex ratio of catches was approximately even in the field-cage trials. In field trials, we tested the traditional red-sphere Ladd trap against traps for which the sphere was painted blue, black or yellow. The colour of the sphere did not significantly influence trap efficiency in these trials, despite the fact that the yellow-panel/yellow-sphere combination presented no colour contrast to the flies. In 6 weeks of field trials, over 1500 flies were caught, almost exactly two-thirds of them being females. Overall, flies were more likely to be caught on the yellow panel than the sphere; but, for the commercial Ladd trap, proportionally more females were caught on the red sphere versus the yellow panel than would be predicted based on the relative surface area of each component, a result also seen in the field-cage trials. We determined that no modification of the trap was more effective than the commercially available Ladd trap and so consider that product suitable for more extensive field testing as a B. tryoni research and monitoring tool.
Abstract:
How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant global cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes.
Abstract:
BACKGROUND: Recent National Institute of Clinical Excellence guidance suggests primary surgery should be offered to patients presenting with glaucoma with severe visual field loss. We undertook a survey of UK consultant ophthalmologists to determine if this represents current practice and explore attitudes towards managing patients with advanced glaucoma at presentation.
DESIGN: Questionnaire evaluation study.
PARTICIPANTS: All consultant ophthalmologists currently practicing in the UK.
METHODS: A single-page questionnaire was posted to all consultants (n = 910) currently practicing in the UK along with a pre-paid return envelope. A second questionnaire was sent to non-responders (n = 459).
MAIN OUTCOME MEASURES: Questionnaire responses.
RESULTS: 626 responses were received, representing 68.8% of the population surveyed. 152 (24%) volunteered a specialist interest in glaucoma. Consensus opinion for both glaucoma specialists (64.9%) and non-glaucoma specialists (62.4%) was to start with primary medical therapy, most commonly citing surgical risk as the primary reason (23% and 22%, respectively) for this approach. Most felt the highest intraocular pressure measurement during follow-up (measured in clinic) was the most important variable for prevention of further visual loss (60% of glaucoma specialists and 55% of non-glaucoma specialists). Eighty-three per cent of all responders suggested they would change their practice if evidence supporting primary surgery as a safe and more effective approach existed.
CONCLUSIONS: Recent National Institute of Clinical Excellence guidance does not reflect the current management approach of UK ophthalmologists. The primary concern was related to potential complications of surgery although most practitioners would be willing to change their practice if evidence existed supporting primary surgery in patients presenting with advanced glaucoma.
Abstract:
Paradoxical kinesia describes the motor improvement in Parkinson's disease (PD) triggered by the presence of external sensory information relevant for the movement. This phenomenon has been puzzling scientists for over 60 years, both in neurological and motor control research, with the underpinning mechanism still being the subject of fierce debate. In this paper we present novel evidence supporting the idea that the key to understanding paradoxical kinesia lies in both spatial and temporal information conveyed by the cues and the coupling between perception and action. We tested a group of 7 idiopathic PD patients in an upper limb mediolateral movement task. Movements were performed with and without a visual point light display, travelling at 3 different speeds. The dynamic information presented in the visual point light display depicted three different movement speeds of the same amplitude performed by a healthy adult. The displays were tested and validated on a group of neurologically healthy participants before being tested on the PD group. Our data show that the temporal aspects of the movement (kinematics) in PD can be moderated by the prescribed temporal information presented in a dynamic environmental cue. Patients demonstrated a significant improvement in terms of movement time and peak velocity when executing movement in accordance with the information afforded by the point light display, compared to when the movement of the same amplitude and direction was performed without the display. In all patients we observed the effect of paradoxical kinesia, with a strong relationship between the perceptual information prescribed by the biological motion display and the observed motor performance of the patients.
Abstract:
The application of augmented reality (AR) technology for assembly guidance is a novel approach in the traditional manufacturing domain. In this paper, we propose an AR approach for assembly guidance using a virtual interactive tool that is intuitive and easy to use. The virtual interactive tool, termed the Virtual Interaction Panel (VirIP), involves two tasks: the design of the VirIPs and the real-time tracking of an interaction pen using a Restricted Coulomb Energy (RCE) neural network. The VirIP includes virtual buttons, which have meaningful assembly information that can be activated by an interaction pen during the assembly process. A visual assembly tree structure (VATS) is used for information management and assembly instructions retrieval in this AR environment. VATS is a hierarchical tree structure that can be easily maintained via a visual interface. This paper describes a typical scenario for assembly guidance using VirIP and VATS. The main characteristic of the proposed AR system is the intuitive way in which an assembly operator can easily step through a pre-defined assembly plan/sequence without the need of any sensor schemes or markers attached on the assembly components.
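The visual assembly tree structure (VATS) described above is a hierarchical store of assembly steps that supports retrieval of instructions during the guided process. A minimal sketch of such a structure is shown below; the node fields, the example plan, and the post-order flattening are illustrative assumptions, not the paper's actual data model.

```python
class AssemblyNode:
    """A node in a hierarchical assembly tree (a simplified, VATS-like
    structure): each node carries one instruction and any sub-steps."""

    def __init__(self, name, instruction="", children=None):
        self.name = name
        self.instruction = instruction
        self.children = children or []

    def find(self, name):
        """Depth-first retrieval of the node for a named assembly step."""
        if self.name == name:
            return self
        for child in self.children:
            hit = child.find(name)
            if hit is not None:
                return hit
        return None

    def sequence(self):
        """Flatten the tree into an ordered instruction list (post-order:
        sub-assemblies are completed before their parent step)."""
        steps = []
        for child in self.children:
            steps.extend(child.sequence())
        if self.instruction:
            steps.append(self.instruction)
        return steps

# Hypothetical example plan, not taken from the paper.
plan = AssemblyNode("product", "fasten cover", [
    AssemblyNode("base", "mount base plate"),
    AssemblyNode("motor", "insert motor", [
        AssemblyNode("shaft", "fit shaft bearing"),
    ]),
])
steps = plan.sequence()
```

In an AR setting, each retrieved instruction would be paired with the overlay content (models, annotations) to render for that step, and a VirIP button press would advance the sequence.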
Abstract:
Near ground maneuvers, such as hover, approach and landing, are key elements of autonomy in unmanned aerial vehicles. Such maneuvers have been tackled conventionally by measuring or estimating the velocity and the height above the ground often using ultrasonic or laser range finders. Near ground maneuvers are naturally mastered by flying birds and insects as objects below may be of interest for food or shelter. These animals perform such maneuvers efficiently using only the available vision and vestibular sensory information. In this paper, the time-to-contact (Tau) theory, which conceptualizes the visual strategy with which many species are believed to approach objects, is presented as a solution for Unmanned Aerial Vehicles (UAV) relative ground distance control. The paper shows how such an approach can be visually guided without knowledge of height and velocity relative to the ground. A control scheme that implements the Tau strategy is developed employing only visual information from a monocular camera and an inertial measurement unit. To achieve reliable visual information at a high rate, a novel filtering system is proposed to complement the control system. The proposed system is implemented on-board an experimental quadrotor UAV and shown not only to successfully land and approach ground, but also to enable the user to choose the dynamic characteristics of the approach. The methods presented in this paper are applicable to both aerial and space autonomous vehicles.
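The tau strategy referred to above regulates the time-to-contact tau = (distance to surface) / (closing rate) rather than distance or velocity themselves: holding d(tau)/dt at a constant value of magnitude 0.5 yields a constant deceleration that brings the vehicle to rest exactly at contact. The one-dimensional simulation below is a toy sketch of this general principle under simple Euler integration, not the controller or filtering system from the paper.

```python
def tau_landing(x0, v0, k=0.5, dt=1e-3):
    """Simulate a 1-D descent regulated by the tau (time-to-contact)
    strategy: hold d(tau)/dt = -k, where tau = x / closing rate.
    From tau = -x/v, d(tau)/dt = -1 + x*a/v**2, so the commanded
    acceleration is a = (1 - k) * v**2 / x. With k = 0.5 this is the
    constant deceleration v**2 / (2*x) that stops exactly at contact."""
    x, v = x0, v0  # height above ground (m), vertical rate (m/s, v < 0)
    while x > 0.01:
        a = (1 - k) * v * v / x  # commanded upward acceleration
        v += a * dt
        x += v * dt
    return x, v

# Start 10 m up, descending at 2 m/s; the approach should end near the
# ground with the descent rate driven close to zero.
x, v = tau_landing(10.0, -2.0)
```

Choosing other values of k changes the "dynamic characteristics of the approach" in the sense the abstract describes: magnitudes below 0.5 stop short and settle softly, while magnitudes above 0.5 reach the surface with residual velocity.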
Abstract:
Radial glia in the developing optic tectum express the key guidance molecules responsible for topographic targeting of retinal axons. However, the extent to which the radial glia are themselves influenced by retinal inputs and visual experience remains unknown. Using multiphoton live imaging of radial glia in the optic tectum of intact Xenopus laevis tadpoles in conjunction with manipulations of neural activity and sensory stimuli, radial glia were observed to exhibit spontaneous calcium transients that were modulated by visual stimulation. Structurally, radial glia extended and retracted many filopodial processes within the tectal neuropil over minutes. These processes interacted with retinotectal synapses and their motility was modulated by nitric oxide (NO) signaling downstream of neuronal NMDA receptor (NMDAR) activation and visual stimulation. These findings provide the first in vivo demonstration that radial glia actively respond both structurally and functionally to neural activity, via NMDAR-dependent NO release during the period of retinal axon ingrowth.
Abstract:
This study evaluates the influence of different cartographic representations of in-car navigation systems on visual demand, subjective preference, and navigational error. It takes into account the type and complexity of the representation, maneuvering complexity, road layout, and driver gender. A group of 28 drivers (14 male and 14 female) participated in this experiment, which was performed in a low-cost driving simulator. The tests were performed on a limited number of instances for each type of representation, and their purpose was to carry out a preliminary assessment and provide avenues for further studies. Data collected for the visual demand study were analyzed using non-parametric statistical analyses. Results confirmed previous research showing that different levels of design complexity significantly influence visual demand. Non-grid-like road networks, for example, significantly influence visual demand and navigational error. An analysis of simple maneuvers on a grid-like road network showed that static and blinking arrows did not present significant differences. From the set of representations analyzed to assess visual demand, both arrows were equally efficient. From a gender perspective, women seemed to look at the display more than men, but this factor was not significant. With respect to subjective preferences, drivers preferred representations with mimetic landmarks when performing straight-ahead tasks. For maneuvering tasks, landmarks in a perspective model created higher visual demands.