46 results for objectivity without objects
in CentAUR: Central Archive University of Reading - UK
Abstract:
A robot-mounted camera is useful in many machine vision tasks as it allows control over view direction and position. In this paper we report a technique for calibrating both the robot and the camera using only a single corresponding point. All existing head-eye calibration systems we have encountered rely on pre-calibrated robots, pre-calibrated cameras, special calibration objects, or combinations of these. Our method avoids large-scale non-linear optimizations by recovering the parameters in small dependent groups, through a series of planned but initially uncalibrated robot movements. Many of the kinematic parameters are obtained using only camera views in which the calibration feature is at, or near, the image center, thus avoiding errors that could be introduced by lens distortion. The calibration is shown to be both stable and accurate. The robotic system we use consists of a camera with pan-tilt capability mounted on a Cartesian robot, providing a total of 5 degrees of freedom.
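A minimal sketch of the centering step this abstract relies on, assuming a simple proportional visual-servo loop; the camera and head interfaces, gains, and tolerances below are hypothetical illustrations, not the authors' implementation:

    # Illustrative only: drive a pan-tilt head until the tracked calibration
    # feature lies at the image center, where lens distortion is negligible.
    # `camera` and `head` are hypothetical interfaces, not the authors' API.
    def center_feature(camera, head, gain=0.1, tol_px=1.0, max_iters=100):
        """Proportional visual servoing of the feature toward the image center."""
        cx, cy = camera.width / 2.0, camera.height / 2.0
        for _ in range(max_iters):
            u, v = camera.detect_feature()   # pixel position of the feature
            ex, ey = u - cx, v - cy          # pixel error from the center
            if ex * ex + ey * ey <= tol_px * tol_px:
                return True                  # feature is centered
            # Small pan/tilt corrections proportional to the pixel error;
            # the sign convention depends on the head's axis directions.
            head.pan(-gain * ex)
            head.tilt(-gain * ey)
        return False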
Abstract:
In an immersive virtual reality environment, subjects fail to notice when a scene expands or contracts around them, despite correct and consistent information from binocular stereopsis and motion parallax, resulting in gross failures of size constancy (A. Glennerster, L. Tcheang, S. J. Gilson, A. W. Fitzgibbon, & A. J. Parker, 2006). We determined whether the integration of stereopsis/motion parallax cues with texture-based cues could be modified through feedback. Subjects compared the size of two objects, each visible when the room was of a different size. As the subject walked, the room expanded or contracted, although subjects failed to notice any change. Subjects were given feedback about the accuracy of their size judgments, where the “correct” size setting was defined either by texture-based cues or (in a separate experiment) by stereo/motion parallax cues. With feedback, observers were able to adjust their responses so that fewer errors were made. For texture-based feedback, the pattern of responses was consistent with observers weighting texture cues more heavily. However, for stereo/motion parallax feedback, performance in many conditions became worse, such that, paradoxically, biases moved away from the point reinforced by the feedback. This can be explained by assuming either that subjects remap the relationship between stereo/motion parallax cues and perceived size or that they develop strategies to change their criterion for a size match on different trials. In either case, subjects appear not to have direct access to stereo/motion parallax cues.
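One simple way to formalize the weighting claim (our illustration, not a model stated in the abstract) is a linear cue-combination rule in which perceived size is a weighted average of the texture-based and stereo/motion-parallax size estimates, with feedback shifting the weight w:

    % Illustrative linear cue-combination rule; S_T and S_{SMP} are the sizes
    % signalled by texture and by stereo/motion parallax, and w shifts with feedback.
    \hat{S} = w\,S_T + (1 - w)\,S_{SMP}, \qquad 0 \le w \le 1

On this reading, texture-based feedback increases w, consistent with the heavier weighting of texture cues reported above; the paradoxical stereo/motion-parallax results are what motivate the remapping and criterion-change accounts instead.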
Abstract:
Older adults often demonstrate higher levels of false recognition than do younger adults. However, in experiments using novel shapes without preexisting semantic representations, this age-related elevation in false recognition was found to be greatly attenuated. Two experiments tested a semantic categorization account of these findings, examining whether older adults show especially heightened false recognition if the stimuli have preexisting semantic representations, such that semantic category information attenuates or truncates the encoding or retrieval of item-specific perceptual information. In Experiment 1, ambiguous shapes were presented with or without disambiguating semantic labels. Older adults showed higher false recognition when labels were present but not when labels were never presented. In Experiment 2, older adults showed higher false recognition for concrete but not abstract objects. The semantic categorization account was supported.
Abstract:
In his 1967 essay, “Art and Objecthood”, Michael Fried bemoaned the theatricality of minimalist sculpture, which replaced the presentness of compositional sculpture with the staging of an experience for the viewer as performer. His argument has since been inverted by artists and art writers invested in the idea of sculptures as props forming part of an artistic experience economy. This discourse has accompanied the rise of relational aesthetics as a dominant paradigm for contemporary art. More recently, however, there has been a turn away from relationality to ‘object-oriented’ art, where objects are seen to stage their own theatrical experiences, performing themselves without requiring the activation of a viewer’s body. We trace parallels between this emerging trend in sculpture and the philosophy of Bruno Latour and the “Speculative Materialism” group. In ascribing agency to objects, Latour proposes a radical shift from philosophy’s traditional investigation of the relationship between the mind and the world. Drawn to the idea that matter can be creative, artists have embraced his thinking. However, we argue that this has led to a generalized, universalizing humanism that disables political action. Moreover, it undermines the potential for anti-humanist critique latent in object-oriented philosophy.
Abstract:
Pair Programming is a technique from the software development method eXtreme Programming (XP) whereby two programmers work closely together to develop a piece of software. A similar approach has been used to develop a set of Assessment Learning Objects (ALOs). Three members of academic staff developed a set of ALOs for a total of three different modules (two with overlapping content); in each case a pair programming approach was taken to the development of the ALOs. In addition to demonstrating the efficiency of this approach in terms of staff time spent developing the ALOs, a statistical analysis of the outcomes for students who made use of the ALOs is used to demonstrate the effectiveness of the ALOs produced via this method.
Abstract:
The literature on vertical disparity is complicated by the fact that several different definitions of the term “vertical disparity” are in common use, often without a clear statement about which is intended or a widespread appreciation of the properties of the different definitions. Here, we examine two definitions of retinal vertical disparity: elevation-latitude and elevation-longitude disparities. Near the fixation point, these definitions become equivalent, but in general, they have quite different dependences on object distance and binocular eye posture, which have not previously been spelt out. We present analytical approximations for each type of vertical disparity, valid for more general conditions than previous derivations in the literature: we do not restrict ourselves to objects near the fixation point or near the plane of regard, and we allow for non-zero torsion, cyclovergence, and vertical misalignments of the eyes. We use these expressions to derive estimates of the latitude and longitude vertical disparities expected at each point in the visual field, averaged over all natural viewing. Finally, we present analytical expressions showing how binocular eye position—gaze direction, convergence, torsion, cyclovergence, and vertical misalignment—can be derived from the vertical disparity field and its derivatives at the fovea.
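For concreteness, the two retinal definitions can be written down under one common axis convention (an assumption on our part, not taken from the abstract): eye-centred coordinates with Z along the optic axis and Y vertical, so (X, Y, Z) is a point's position in the left or right eye's frame:

    % Elevation longitude (Helmholtz-style) and elevation latitude (Fick-style):
    \eta = \arctan(Y/Z), \qquad
    \lambda = \arctan\!\left(Y/\sqrt{X^2 + Z^2}\right)
    % Vertical disparity is the left-minus-right difference of either angle:
    \eta_\Delta = \eta_L - \eta_R, \qquad \lambda_\Delta = \lambda_L - \lambda_R

Near the fixation point, X and Y are small, both angles reduce to approximately Y/Z, and the two definitions agree, as stated above; away from fixation they diverge, which is why the choice of definition matters.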
Abstract:
Inferences consistent with “recognition-based” decision-making may be drawn for various reasons other than recognition alone. We demonstrate that, for 2-alternative forced-choice decision tasks, less-is-more effects (reduced performance with additional learning) are not restricted to recognition-based inference but can also be seen where inference is knowledge-based but item knowledge is limited. One reason why such effects may not be observed more widely is the dependence of the effect on specific values for the validity of recognition and knowledge cues. We show that both recognition and knowledge validity may vary as a function of the number of items recognized. The implications of these findings for the special nature of recognition information, and for the investigation of recognition-based inference, are discussed.
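The validity dependence can be made concrete with the standard expected-accuracy formula for two-alternative choice among N items of which n are recognized (a textbook formulation under fixed validities, which is exactly the simplification the abstract questions):

    def expected_accuracy(n, N, alpha, beta):
        """Expected proportion correct in 2AFC when n of N items are recognized.

        alpha: recognition validity (P(correct) when exactly one item is recognized)
        beta:  knowledge validity (P(correct) when both items are recognized)
        Unrecognized pairs are answered by guessing (0.5). Assumes alpha and
        beta are constant in n -- the assumption the abstract argues against.
        """
        pairs = N * (N - 1) / 2
        both = n * (n - 1) / 2              # both items recognized
        one = n * (N - n)                   # exactly one item recognized
        neither = (N - n) * (N - n - 1) / 2 # neither item recognized
        return (one * alpha + both * beta + neither * 0.5) / pairs

    # With recognition validity above knowledge validity (alpha > beta),
    # accuracy peaks at an intermediate n: the less-is-more effect.
    N = 100
    acc = [expected_accuracy(n, N, alpha=0.8, beta=0.6) for n in range(N + 1)]
    print(max(range(N + 1), key=lambda n: acc[n]))  # peak before n = N

If alpha and beta themselves vary with n, as the abstract reports, the shape of this curve, and whether a less-is-more effect appears at all, changes accordingly.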
Abstract:
Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry: spatial resolutions of 0.5 m or less are possible, with a height accuracy of 0.15 m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. The current state of the art is illustrated by the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data to a ground DTM and a separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc.

Most vegetation removal software ignores short vegetation (less than, say, 1 m high), which typically covers most of a floodplain. We have attempted to extend vegetation height measurement to short vegetation using local height texture. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying; this obviates the need to calibrate a global floodplain friction coefficient. It is not yet clear whether the method is useful, but it merits further testing.

The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low-pass filtering operation, in which objects of high spatial frequency, such as buildings, river embankments and walls, may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types, such as LiDAR intensity or multispectral CASI data. We are attempting to use digital map data (Mastermap structured topography data) to help distinguish buildings from trees, and roads from areas of short vegetation; the problems involved in doing this will be discussed. A related problem, how best to merge historic river cross-section data with a LiDAR DTM, will also be considered.

LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that, for example, hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc. as well as trees and hedges. A dominant-points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes; however, the mesh generated may be useful as a high-resolution finite element benchmark against which a more practical lower-resolution model can be compared.

A further problem discussed will be how best to exploit data redundancy due to the high resolution of the LiDAR compared to that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: for a 5 m wide embankment within a raster grid model with a 15 m cell size, the local maximum embankment height could be assigned to each cell covering the embankment, but how could a 5 m wide ditch be represented? This redundancy has also been exploited to improve wetting/drying algorithms, using the sub-grid-scale LiDAR heights within finite elements at the waterline.
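The spatially varying friction idea can be sketched as a per-cell mapping from the LiDAR vegetation-height raster to a Manning's n raster; the linear form, coefficients, and cap below are illustrative assumptions, not calibrated values from this work:

    import numpy as np

    def friction_from_vegetation(height_m, n_bare=0.03, k=0.02, n_max=0.15):
        """Map a vegetation-height raster (metres) to a Manning's n raster.

        Illustrative only: bare ground gets n_bare, and n grows linearly
        with local vegetation height, capped at n_max. A real model would
        use measured or literature values per vegetation class.
        """
        n = n_bare + k * np.asarray(height_m, dtype=float)
        return np.clip(n, n_bare, n_max)

    # Example: a small patch with short grass (0.2 m), a hedge (2.5 m),
    # and bare soil (0.0 m).
    veg = np.array([[0.2, 0.2, 0.0],
                    [0.2, 2.5, 0.0],
                    [0.0, 2.5, 0.0]])
    print(friction_from_vegetation(veg))

Each model cell then carries its own friction coefficient, which is what removes the need to calibrate a single global floodplain value.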