53 results for binocular vision
The roles of olfaction and vision in host-plant finding by the diamondback moth, Plutella xylostella
Abstract:
The relative roles of olfaction and vision in the crepuscular host-finding process of a major lepidopteran pest of cruciferous crops, the diamondback moth Plutella xylostella, are investigated in a series of laboratory and semi-field experiments. Flying female moths use volatile plant chemical cues to locate, and to promote landing on, their host, even in complex mixed-crop environments in large cages. Multiple regression analysis shows that both the plant position (front, middle or back rows) and the type of plant (host plant, non-host plant) are needed to explain the distribution of insects in such a mixed-crop situation. This strong plant-position effect indicates that, when host plants are present in a mixture, foraging P. xylostella are more likely to alight on the first row of plants. The findings are discussed with regard to current theories of host-plant location by phytophagous insects and the possible implications for integrated pest management.
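As an illustration of the kind of analysis mentioned above (not the authors' code or data): a minimal sketch, assuming a hypothetical table with one row per plant giving the number of moths alighting on it, its row position (front/middle/back) and its type (host/non-host). The column names, example counts and the use of Python with pandas/statsmodels are assumptions made for the sketch.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical example data: moth counts per plant by row position and plant type.
df = pd.DataFrame({
    "moths":    [9, 7, 3, 2, 1, 1, 8, 6, 2, 1, 1, 0],
    "position": ["front", "front", "middle", "middle", "back", "back"] * 2,
    "type":     ["host"] * 6 + ["nonhost"] * 6,
})

# Multiple regression with both factors as categorical predictors; if, as the
# abstract reports, both are needed, dropping either term should worsen the fit.
model = smf.ols("moths ~ C(position) + C(type)", data=df).fit()
print(model.summary())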
Abstract:
A whole life-cycle information management vision is proposed, and the organizational requirements for realizing this scenario are investigated. Preliminary interviews with construction professionals are reported. Discontinuities in information transfer throughout the life-cycle of built environments result from a lack of coordination and from multiple data collection/storage practices. A more coherent history of these activities can improve the work practices of various teams by augmenting decision-making processes and creating organizational learning opportunities. There is therefore a need to unify these fragmented bits of data into a meaningful, semantically rich and standardized information repository for the built environment. The proposed vision utilizes embedded technologies and distributed building information models. Two diverse construction project types (large one-off design, small repetitive design) are investigated for the applicability of the vision. A functional prototype software/hardware system demonstrating the practical use of this vision is developed and discussed. Plans for case studies to validate the proposed model at a large PFI hospital and at housing association projects are also discussed.
Abstract:
Earlier studies showed that the disparity with respect to other visible points could not explain stereoacuity performance, nor could various spatial derivatives of disparity [Glennerster, A., McKee, S. P., & Birch, M. D. (2002). Evidence of surface-based processing of binocular disparity. Current Biology, 12, 825-828; Petrov, Y., & Glennerster, A. (2004). The role of the local reference in stereoscopic detection of depth relief. Vision Research, 44, 367-376]. Two possible cues remain: (i) local changes in disparity gradient or (ii) disparity with respect to an interpolated line drawn through the reference points. Here, we aimed to distinguish between these two cues. Subjects judged, in a two-alternative forced-choice paradigm, whether a target dot was in front of a plane defined by three reference dots or, in other experiments, in front of a line defined by two reference dots. We tested different slants of the reference line or plane and different locations of the target relative to the reference points. For slanted reference lines or planes, stereoacuity changed little as the target position was varied. For judgments relative to a frontoparallel reference line, stereoacuity did vary with target position, but less than would be predicted by disparity gradient change. This provides evidence that disparity with respect to the reference plane is an important cue. We discuss the potential advantages of this measure in generating a representation of surface relief that is invariant to viewpoint transformations. (c) 2006 Elsevier Ltd. All rights reserved.
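To make the second cue concrete, here is a minimal sketch (not the authors' code) of computing a target's disparity relative to the plane interpolated through three reference dots, treating disparity as a linear function of image position; the function name and the Python/NumPy setting are assumptions made for illustration.

import numpy as np

def disparity_relative_to_plane(ref_xy, ref_disp, target_xy, target_disp):
    """ref_xy: (3, 2) image positions of the reference dots; ref_disp: (3,)
    their disparities; target_xy: (2,) target position; target_disp: its disparity.
    Returns the target's disparity minus the value predicted by the plane
    d(x, y) = a*x + b*y + c passing through the three reference dots."""
    A = np.column_stack([ref_xy, np.ones(3)])   # rows: [x, y, 1]
    a, b, c = np.linalg.solve(A, ref_disp)      # exact plane through the 3 dots
    predicted = a * target_xy[0] + b * target_xy[1] + c
    return target_disp - predicted

# Example: a frontoparallel reference plane (equal disparities); any non-zero
# target disparity is then depth relative to that plane.
refs = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(disparity_relative_to_plane(refs, np.zeros(3), np.array([0.0, 0.3]), 1.0))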
Abstract:
Aim: To review current literature on the development of convergence and accommodation. The accommodation and vergence systems provide the foundation upon which bifoveal binocular single vision develops. Deviations from their normal development not only are implicated in the aetiology of convergence anomalies, accommodative anomalies and strabismus, but may also be implicated in failure of the emmetropisation process. Method: This review considers the problems of researching the development of accommodation and vergence in infants and how infant research has had to differ from adult methods. It then reviews and discusses the implications of current research into the development of both systems and their linkages. Results: Vergence and accommodation develop rapidly in the first months of life, with accommodation changing from a relatively fixed myopic focus in the neonatal period to adult-like responses by 4 months of age. Vergence develops gradually and becomes more accurate after 4 months of age, but has been demonstrated in infants well before the age at which binocular disparity detection mechanisms are thought to develop. Hypotheses for this early vergence mechanism are discussed. The relationship between accommodation and vergence shows much more variability in infants than the adult literature has found, but this apparent adult/infant difference may be partly attributed to methodological differences rather than maturational change alone. Conclusions: Variability and flexibility characterise infant responses. This variability may enable infants to develop a flexible and robust binocular system for later life. Studies of infant visual cue use may give clues to the aetiology of strabismus and refractive error.
Abstract:
The perceived displacement of motion-defined contours in peripheral vision was examined in four experiments. In Experiment 1, in line with Ramachandran and Anstis' finding [Ramachandran, V. S., & Anstis, S. M. (1990). Illusory displacement of equiluminous kinetic edges. Perception, 19, 611-616], the border between a field of drifting dots and a static dot pattern was apparently displaced in the same direction as the movement of the dots. When a uniform dark area was substituted for the static dots, a similar displacement was found, but this was smaller and statistically insignificant. In Experiment 2, the border between two fields of dots moving in opposite directions was displaced in the direction of motion of the dots in the more eccentric field, so that the location of a boundary defined by a diverging pattern is perceived as more eccentric, and that defined by a converging pattern as less eccentric. Two explanations for this effect (that the displacement reflects a greater weight given to the more eccentric motion, or that the region containing stronger centripetal motion components expands perceptually into that containing centrifugal motion) were tested in Experiment 3 by varying the velocity of the more eccentric region. The results favoured the explanation based on the expansion of an area in centripetal motion. Experiment 4 showed that the difference in perceived location was unlikely to be due to differences in the discriminability of contours in diverging and converging patterns, and confirmed that this effect is due to a difference between centripetal and centrifugal motion rather than to motion components in other directions. Our results provide new evidence for a bias towards centripetal motion in human vision, and suggest that the direction of motion-induced displacement of edges is not always in the direction of an adjacent moving pattern. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
Computer vision applications generally split their problem into multiple simpler tasks. Likewise, research often combines algorithms into systems for evaluation purposes. Frameworks for modular vision provide interfaces and mechanisms for algorithm combination and network transparency. However, these frameworks do not provide interfaces that efficiently utilise the slow memory in modern PCs. We investigate quantitatively how system performance varies with different patterns of memory usage by the framework, for an example vision system.
Abstract:
A vision system for recognizing rigid and articulated three-dimensional objects in two-dimensional images is described. Geometrical models are extracted from a commercial computer-aided design (CAD) package. The models are then augmented with appearance and functional information, which improves the system's hypothesis generation, hypothesis verification, and pose refinement. Significant advantages are realized over existing CAD-based vision systems, which utilize only the information available in the CAD system. Examples show the system recognizing, locating, and tracking a variety of objects in a robot work-cell and in natural scenes.
Abstract:
This paper presents a review of the design and development of the Yorick series of active stereo camera platforms and their integration into real-time closed-loop active vision systems, whose applications span surveillance, navigation of autonomously guided vehicles (AGVs), and inspection tasks for teleoperation, including immersive visual telepresence. The mechatronic approach adopted for the design of the first system, including the head/eye platform, local controller, vision engine, gaze controller and system integration, proved to be very successful. The design team comprised researchers with experience in parallel computing, robot control, mechanical design and machine vision. The success of the project has generated sufficient interest to sanction a number of revisions of the original head design, including the design of a lightweight, compact head for use on a robot arm, and the further development of a robot head aimed specifically at increasing visual resolution for visual telepresence. The controller and vision processing engines have also been upgraded to include the control of robot heads on mobile platforms and the control of vergence through tracking of an operator's eye movement. This paper details the hardware development of the different active vision/telepresence systems.
Abstract:
This paper describes the design, implementation and testing of a high-speed controlled stereo “head/eye” platform which facilitates the rapid redirection of gaze in response to visual input. It details the mechanical device, which is based around geared DC motors, and describes hardware aspects of the controller and vision system, which are implemented on a reconfigurable network of general-purpose parallel processors. The servo-controller is described in detail, and higher-level gaze and vision constructs are outlined. The paper gives performance figures gained both from mechanical tests on the platform alone and from closed-loop tests on the entire system using visual feedback from a feature detector.
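By way of illustration of the closed-loop principle described above, here is a minimal sketch of proportional gaze control driven by visual feedback; the gains, loop rate and the placeholder functions read_feature_offset and command_joint_velocity are hypothetical and do not represent the platform's actual controller or API.

import time

KP_PAN, KP_TILT = 2.0, 2.0   # proportional gains (illustrative values)
LOOP_DT = 0.02               # 50 Hz control loop (illustrative)

def read_feature_offset():
    """Placeholder: return the tracked feature's (dx, dy) offset from the
    image centre, in normalised image coordinates."""
    return 0.0, 0.0

def command_joint_velocity(pan_rate, tilt_rate):
    """Placeholder: send velocity demands to the pan and tilt axes."""
    pass

def gaze_loop(duration_s=1.0):
    """Closed loop: read the visual error, command velocities that rotate the
    head towards the feature, repeat at a fixed rate."""
    t_end = time.monotonic() + duration_s
    while time.monotonic() < t_end:
        dx, dy = read_feature_offset()                 # visual feedback
        command_joint_velocity(-KP_PAN * dx, -KP_TILT * dy)
        time.sleep(LOOP_DT)

gaze_loop()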