12 results for Visual methods
in CentAUR: Central Archive University of Reading - UK
Abstract:
This article focuses on the final report of Lord Butler’s review of British intelligence on weapons of mass destruction (WMD), specifically on its treatment of the accuracy with which intelligence on Iraqi WMD was used in a government dossier published in September 2002, ahead of the 2003 Iraq war. In the report, the demonstration of the accuracy of the “September Dossier” hinges on the insertion of tables that compare, side by side, quotations from this document and from intelligence assessments. Analysis of the textual and visual methods by which the report is written reveals how the logic of the comparative tables is missed in the Butler report: the logic of these tables requires that the comparison between quotations from the two documents be performed at the level of their details, but the Butler report performs its comparison only at a broad and general level.
Abstract:
Purpose. Some children with visual stress and/or headaches have fewer symptoms when wearing colored lenses. Although subjective reports of improved perception exist, few objective correlates of these effects have been established. Methods. In a pilot study, 10 children who wore Intuitive Colorimeter lenses and claimed benefit from them, and two asymptomatic children, were tested. Steady-state potentials were measured in response to low-contrast patterns modulating at a frequency of 12 Hz. Four viewing conditions were compared: 1) no lens; 2) Colorimeter lens; 3) lens of complementary color; and 4) spectrally neutral lens with similar photopic transmission. Results. The asymptomatic children showed little or no difference between the lens and no-lens conditions. When all the symptomatic children were considered together, a similar result was found. However, when the symptomatic children were divided into two groups depending on their symptoms, an interaction emerged. Children with visual stress but no headaches showed the largest-amplitude visual evoked potential response in the no-lens condition, whereas children whose symptoms included severe headaches or migraine showed the largest-amplitude visual evoked potential response when wearing their prescribed lens. Conclusions. The results suggest that it is possible to measure objective correlates of the beneficial subjective perceptual effects of colored lenses, at least in some children who have a history of migraine or severe headaches.
Abstract:
There are still major challenges in the area of automatic indexing and retrieval of digital data. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on their semantic content rather than on keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. Research has been ongoing for a few years in the field of ontological engineering, with the aim of using ontologies to add knowledge to information. In this paper we describe the architecture of a system designed to automatically and intelligently index huge repositories of special-effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval.
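The abstract does not detail the system's architecture, but the general idea of retrieving clips through ontology concepts rather than literal keywords can be pictured with a minimal, hedged sketch. The tiny "is-a" ontology, clip annotations and names below are illustrative assumptions, not the ontologies or data described in the paper.

# Minimal sketch: index clips by ontology concepts and expand queries with
# specialisations of the queried concept. All names here are hypothetical.

ontology = {                       # child concept -> parent concept ("is-a" links)
    "explosion": "pyrotechnic_effect",
    "fireball": "explosion",
    "smoke_plume": "pyrotechnic_effect",
    "pyrotechnic_effect": "special_effect",
}

clips = {                          # concept annotations per clip (assumed)
    "clip_001": {"fireball"},
    "clip_002": {"smoke_plume"},
    "clip_003": {"explosion", "smoke_plume"},
}

def descendants(concept):
    """All concepts that are (transitively) a kind of `concept`."""
    subs = {c for c, parent in ontology.items() if parent == concept}
    for c in set(subs):
        subs |= descendants(c)
    return subs

def retrieve(query_concept):
    """Return clips annotated with the query concept or any specialisation of it."""
    wanted = {query_concept} | descendants(query_concept)
    return [cid for cid, tags in clips.items() if tags & wanted]

print(retrieve("explosion"))            # clip_001 (a fireball is an explosion) and clip_003
print(retrieve("pyrotechnic_effect"))   # all three clips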
Abstract:
A novel framework referred to as collaterally confirmed labelling (CCL) is proposed, aiming at localising the visual semantics to regions of interest in images with textual keywords. Both the primary image and the collateral textual modalities are exploited in a mutually co-referencing and complementary fashion. The collateral content- and context-based knowledge is used to bias the mapping from the low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods such as the Gaussian distribution or Euclidean distance, together with a collateral content- and context-driven inference mechanism. We introduce a novel high-level visual content descriptor that is devised for performing semantic-based image classification and retrieval. The proposed image feature vector model is fundamentally underpinned by the CCL framework. Two different high-level image feature vector models are developed, based on the CCL labelling results, for the purposes of image data clustering and retrieval, respectively. A subset of the Corel image collection has been used for evaluating our proposed method. The experimental results to date already indicate that the proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models. (C) 2007 Elsevier B.V. All rights reserved.
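The two ingredients named in the abstract, a co-occurrence matrix of visual keywords (the collateral context) and a distance-based mapping from region features to vocabulary concepts, can be illustrated with a short sketch. This is not the paper's implementation; the vocabulary, feature dimensionality, prototype features and the multiplicative weighting are all assumptions.

# Illustrative sketch: bias a Euclidean-distance region-to-concept mapping
# with a keyword co-occurrence ("collateral context") prior.
import numpy as np

vocab = ["sky", "water", "sand", "tree"]            # visual vocabulary (assumed)
prototypes = np.random.rand(len(vocab), 8)          # one low-level prototype per concept (assumed)

# Collateral context: how often pairs of keywords co-occur (made-up counts).
cooc = np.array([[20,  5,  3, 10],
                 [ 5, 15, 12,  4],
                 [ 3, 12, 18,  2],
                 [10,  4,  2, 25]], dtype=float)
context = cooc / cooc.sum(axis=1, keepdims=True)    # row-normalised context prior

def label_region(region_feature, collateral_keywords):
    """Pick the concept whose prototype is closest (Euclidean), biased by how
    strongly each concept co-occurs with keywords already found in the image."""
    dist = np.linalg.norm(prototypes - region_feature, axis=1)
    visual_score = 1.0 / (1e-6 + dist)               # closer prototype -> higher score
    idx = [vocab.index(k) for k in collateral_keywords]
    context_score = context[idx].mean(axis=0) if idx else np.ones(len(vocab))
    return vocab[int(np.argmax(visual_score * context_score))]

print(label_region(np.random.rand(8), ["water"]))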
Abstract:
Recent theories propose that semantic representation and sensorimotor processing have a common substrate via simulation. We tested the prediction that comprehension interacts with perception, using a standard psychophysics methodology.While passively listening to verbs that referred to upward or downward motion, and to control verbs that did not refer to motion, 20 subjects performed a motion-detection task, indicating whether or not they saw motion in visual stimuli containing threshold levels of coherent vertical motion. A signal detection analysis revealed that when verbs were directionally incongruent with the motion signal, perceptual sensitivity was impaired. Word comprehension also affected decision criteria and reaction times, but in different ways. The results are discussed with reference to existing explanations of embodied processing and the potential of psychophysical methods for assessing interactions between language and perception.
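The signal detection measures the abstract refers to, sensitivity (d') and decision criterion (c), are standard quantities computed from hit and false-alarm rates. The sketch below shows the textbook calculation applied to two conditions; the trial counts are made-up assumptions, not data from the study.

# Illustrative signal-detection analysis: d' and criterion per condition.
from statistics import NormalDist

def d_prime_and_criterion(hits, misses, false_alarms, correct_rejections):
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)                       # log-linear correction
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# hypothetical trial counts per verb condition
congruent   = d_prime_and_criterion(hits=42, misses=8,  false_alarms=12, correct_rejections=38)
incongruent = d_prime_and_criterion(hits=33, misses=17, false_alarms=14, correct_rejections=36)
print("congruent   d'=%.2f c=%.2f" % congruent)
print("incongruent d'=%.2f c=%.2f" % incongruent)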
Abstract:
The objective of a Visual Telepresence System is to provide the operator with a high-fidelity image from a remote stereo camera pair linked to a pan/tilt device, such that the operator may reorient the camera position by use of head movement. Systems such as these, which utilise virtual-reality-style helmet-mounted displays, have a number of limitations. The geometry of the camera positions and of the displays is generally fixed and is most suitable only for viewing elements of a scene at a particular distance. To address such limitations, a prototype system has been developed in which the geometry of the displays and cameras is dynamically controlled by the eye movement of the operator. This paper explores why it is necessary to actively adjust the display system as well as the cameras, and justifies the use of mechanical adjustment of the displays as an alternative to adjustment by electronic or image-processing methods. The electronic and mechanical design is described, including optical arrangements and control algorithms. The performance and accuracy of the system are assessed with respect to eye movement.
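The core geometric idea, estimating where the operator is fixating from eye vergence and then toeing the cameras in to converge at that distance, can be sketched briefly. This is an assumption about the kind of control calculation involved, not the prototype's actual control code; the baselines and gains are illustrative.

# Hypothetical vergence calculation for an eye-controlled stereo camera rig.
import math

EYE_BASELINE_M = 0.065      # interocular distance (assumed)
CAM_BASELINE_M = 0.065      # stereo camera separation (assumed)

def fixation_distance(left_eye_angle_rad, right_eye_angle_rad):
    """Estimate fixation distance from the measured eye vergence angle."""
    vergence = left_eye_angle_rad + right_eye_angle_rad
    return (EYE_BASELINE_M / 2) / math.tan(max(vergence, 1e-4) / 2)

def camera_vergence_setpoint(distance_m):
    """Symmetric toe-in angle for each camera so their optical axes cross at the fixation point."""
    return math.atan2(CAM_BASELINE_M / 2, distance_m)

d = fixation_distance(math.radians(1.5), math.radians(1.5))
print("fixation distance %.2f m -> camera toe-in %.2f deg"
      % (d, math.degrees(camera_vergence_setpoint(d))))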
Abstract:
It is often assumed that humans generate a 3D reconstruction of the environment, either in egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer’s prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene point is treated independently in the reconstruction; in the other, the pertinent variable is the spatial relationship between pairs of points. Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. Error distributions varied substantially with changes in scene layout; we compared these directly with the likelihood maps to quantify the success of the models. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as this, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.
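The pipeline the abstract outlines, predicting a landmark's location photogrammetrically and turning those predictions into a likelihood map of behaviour, can be caricatured with a toy example: two noisy viewing directions toward a landmark are combined into a likelihood over candidate positions. This is an illustrative assumption, not either of the paper's models, and the noise level and grid are arbitrary.

# Toy likelihood map over landmark positions from noisy bearings.
import numpy as np

GRID = np.linspace(-5.0, 5.0, 101)       # candidate positions (assumed extent)

def likelihood_map(view_points, bearings, sigma=np.radians(3)):
    """Likelihood over the GRID x GRID plane given bearing measurements (radians)
    taken from known viewpoints, assuming Gaussian bearing noise."""
    xs, ys = np.meshgrid(GRID, GRID)
    log_l = np.zeros_like(xs)
    for (vx, vy), theta in zip(view_points, bearings):
        predicted = np.arctan2(ys - vy, xs - vx)
        err = np.angle(np.exp(1j * (predicted - theta)))   # wrap angular error to [-pi, pi]
        log_l += -0.5 * (err / sigma) ** 2
    like = np.exp(log_l - log_l.max())
    return like / like.sum()

# two viewpoints observing a landmark placed near (2, 1)
L = likelihood_map([(0.0, 0.0), (3.0, -2.0)],
                   [np.arctan2(1.0, 2.0), np.arctan2(3.0, -1.0)])
iy, ix = np.unravel_index(np.argmax(L), L.shape)
print("most likely landmark position: x=%.1f, y=%.1f" % (GRID[ix], GRID[iy]))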
Abstract:
This paper details an investigation into sensory substitution by means of direct electrical stimulation of the tongue for the purpose of information input to the human brain. In particular, a device has been constructed and a series of trials has been performed in order to demonstrate the efficacy and performance of an electro-tactile array mounted on the tongue surface for the purpose of sensory augmentation. Tests have shown that, by using a low-resolution array, a computer-human feedback loop can be successfully used by humans to complete tasks such as object tracking, surface shape identification and shape recognition with no training or prior experience with the device. Comparisons of this technique have been made with visual alternatives, and these show that the tongue-based tactile array can match such methods in convenience and accuracy when performing simple tasks.
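One way to picture the feedback loop for the object-tracking task is to down-sample a target's position in the camera frame onto a low-resolution electrode grid, so the wearer feels where the target is. The sketch below is a hypothetical encoding; the grid size, frame size and stimulation values are assumptions rather than the device's actual firmware.

# Hypothetical mapping from a target's image position to a 4x4 electrode pattern.
def encode_target(x, y, frame_w=640, frame_h=480, grid=4):
    """Map a target pixel position to a grid x grid pattern of electrode intensities."""
    col = min(int(x / frame_w * grid), grid - 1)
    row = min(int(y / frame_h * grid), grid - 1)
    pattern = [[0] * grid for _ in range(grid)]
    pattern[row][col] = 255          # stimulate the electrode under the target
    return pattern

for line in encode_target(500, 120):   # target in the upper-right of the image
    print(line)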
Abstract:
This work presents a method of information fusion involving data captured by both a standard CCD camera and a ToF camera, to be used in the detection of proximity between a manipulator robot and a human. Both cameras are assumed to be located above the work area of an industrial robot. The fusion of colour images and time-of-flight information makes it possible to know the 3D localization of objects with respect to a world coordinate system and, at the same time, their colour information. Considering that the ToF information given by the range camera contains inaccuracies, including distance error, border error, and pixel saturation, some corrections to the ToF information are proposed and developed to improve the results. The proposed fusion method uses the calibration parameters of both cameras to reproject 3D ToF points, expressed in a common coordinate system for both cameras and a robot arm, into 2D colour images. In addition, using the 3D information, motion detection in an industrial robot environment is achieved, and the fusion of information is applied to the foreground objects previously detected. This combination of information results in a matrix that links colour and 3D information, giving the possibility of characterising an object by its colour in addition to its 3D localization. Further development of these methods will make it possible to identify objects and their position in the real world, and to use this information to prevent possible collisions between the robot and such objects.
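The reprojection step at the heart of the fusion, taking a 3D point measured by the ToF camera (expressed in the common world frame) and projecting it into the colour image using that camera's calibration, is the standard pinhole projection sketched below. The intrinsic matrix, extrinsics and the test point are illustrative assumptions, not calibration values from the paper.

# Minimal pinhole reprojection of a world-frame ToF point into colour-image pixels.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],     # colour-camera intrinsics (assumed)
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                            # world -> colour-camera rotation (assumed)
t = np.array([0.0, 0.0, 1.5])            # world -> colour-camera translation (assumed)

def project_to_colour_image(point_world):
    """Reproject a 3D world point (e.g. from the ToF camera) into colour-image pixels."""
    p_cam = R @ point_world + t
    if p_cam[2] <= 0:
        return None                      # point lies behind the colour camera
    u, v, w = K @ p_cam
    return u / w, v / w

print(project_to_colour_image(np.array([0.2, -0.1, 1.0])))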
Abstract:
This work presents a method of information fusion involving data captured by both a standard charge-coupled device (CCD) camera and a time-of-flight (ToF) camera, to be used in the detection of proximity between a manipulator robot and a human. Both cameras are assumed to be located above the work area of an industrial robot. The fusion of colour images and time-of-flight information makes it possible to know the 3D localisation of objects with respect to a world coordinate system and, at the same time, their colour information. Considering that the ToF information given by the range camera contains inaccuracies, including distance error, border error, and pixel saturation, some corrections to the ToF information are proposed and developed to improve the results. The proposed fusion method uses the calibration parameters of both cameras to reproject 3D ToF points, expressed in a common coordinate system for both cameras and a robot arm, into 2D colour images. In addition, using the 3D information, motion detection in an industrial robot environment is achieved, and the fusion of information is applied to the foreground objects previously detected. This combination of information results in a matrix that links colour and 3D information, giving the possibility of characterising an object by its colour in addition to its 3D localisation. Further development of these methods will make it possible to identify objects and their position in the real world and to use this information to prevent possible collisions between the robot and such objects.
Abstract:
Near-ground maneuvers, such as hover, approach and landing, are key elements of autonomy in unmanned aerial vehicles. Such maneuvers have been tackled conventionally by measuring or estimating the velocity and the height above the ground, often using ultrasonic or laser range finders. Near-ground maneuvers are naturally mastered by flying birds and insects, as objects below may be of interest for food or shelter. These animals perform such maneuvers efficiently using only the available vision and vestibular sensory information. In this paper, the time-to-contact (tau) theory, which conceptualizes the visual strategy with which many species are believed to approach objects, is presented as a solution for relative ground-distance control of unmanned aerial vehicles (UAVs). The paper shows how such an approach can be visually guided without knowledge of height and velocity relative to the ground. A control scheme that implements the tau strategy is developed, employing only visual information from a monocular camera and an inertial measurement unit. To achieve reliable visual information at a high rate, a novel filtering system is proposed to complement the control system. The proposed system is implemented on board an experimental quadrotor UAV and is shown not only to approach the ground and land successfully, but also to enable the user to choose the dynamic characteristics of the approach. The methods presented in this paper are applicable to both aerial and space autonomous vehicles.
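The essence of the tau strategy is that time-to-contact is the current gap to the ground divided by its rate of closure, and a landing can be shaped by making that time-to-contact shrink at a constant rate. The simulation below is only a rough illustration of that idea; the discrete update, vehicle response model, gains and initial conditions are assumptions for demonstration, not the paper's controller.

# Toy simulation of a constant tau-dot landing profile.
def simulate_tau_landing(z0=10.0, v0=-1.0, k=0.6, dt=0.02):
    """Follow a reference time-to-contact that shrinks at constant rate k (0 < k < 1)."""
    z, v, t = z0, v0, 0.0
    tau_ref = z0 / max(-v0, 1e-6)        # initial time-to-contact (gap / closure rate)
    while z > 0.05 and tau_ref > dt:
        tau_ref -= k * dt                # constant tau-dot reference
        v_cmd = -z / tau_ref             # descent rate that realises the reference tau
        v += 2.0 * (v_cmd - v) * dt      # first-order vehicle response (assumed)
        z += v * dt
        t += dt
    return t, z, v

t, z, v = simulate_tau_landing()
print("touched down after %.1f s at height %.2f m with speed %.2f m/s" % (t, z, v))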
Abstract:
Near-ground maneuvers, such as hover, approach, and landing, are key elements of autonomy in unmanned aerial vehicles. Such maneuvers have been tackled conventionally by measuring or estimating the velocity and the height above the ground, often using ultrasonic or laser range finders. Near-ground maneuvers are naturally mastered by flying birds and insects because objects below may be of interest for food or shelter. These animals perform such maneuvers efficiently using only the available vision and vestibular sensory information. In this paper, the time-to-contact (tau) theory, which conceptualizes the visual strategy with which many species are believed to approach objects, is presented as a solution for relative ground distance control for unmanned aerial vehicles. The paper shows how such an approach can be visually guided without knowledge of height and velocity relative to the ground. A control scheme that implements the tau strategy is developed employing only visual information from a monocular camera and an inertial measurement unit. To achieve reliable visual information at a high rate, a novel filtering system is proposed to complement the control system. The proposed system is implemented onboard an experimental quadrotor unmanned aerial vehicle and is shown not only to approach the ground and land successfully, but also to enable the user to choose the dynamic characteristics of the approach. The methods presented in this paper are applicable to both aerial and space autonomous vehicles.