29 results for Observers
Abstract:
As we move through the world, our eyes acquire a sequence of images. The information from this sequence is sufficient to determine the structure of a three-dimensional scene, up to a scale factor determined by the distance that the eyes have moved [1, 2]. Previous evidence shows that the human visual system accounts for the distance the observer has walked [3, 4] and the separation of the eyes [5-8] when judging the scale, shape, and distance of objects. However, in an immersive virtual-reality environment, observers failed to notice when a scene expanded or contracted, despite having consistent information about scale from both distance walked and binocular vision. This failure led to large errors in judging the size of objects. The pattern of errors cannot be explained by assuming a visual reconstruction of the scene with an incorrect estimate of interocular separation or distance walked. Instead, it is consistent with a Bayesian model of cue integration in which the efficacy of motion and disparity cues is greater at near viewing distances. Our results imply that observers are more willing to adjust their estimate of interocular separation or distance walked than to accept that the scene has changed in size.
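The distance-dependent cue integration described in this abstract can be sketched as a generic inverse-variance weighting of scale estimates. This is not the authors' fitted model; the parameter values, the quadratic growth of cue variance with viewing distance, and the stable-scene prior are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the authors' model): inverse-variance weighting
# of scale estimates from motion parallax and binocular disparity, with cue
# variances that grow with viewing distance, plus a stable-scene prior.
def combine_scale_cues(d_view, s_motion, s_disparity, s_prior=1.0,
                       base_var=0.01, prior_var=0.04):
    """Combine scale estimates; cue noise is assumed to grow with distance squared."""
    var_cue = base_var * d_view**2          # cues less reliable when viewed from far
    weights = np.array([1.0 / var_cue, 1.0 / var_cue, 1.0 / prior_var])
    estimates = np.array([s_motion, s_disparity, s_prior])
    return float(np.dot(weights, estimates) / weights.sum())

# The scene has doubled in size (both cues signal scale 2.0).
# Near viewing: the cues dominate, so the estimate tracks the expansion.
scale_near = combine_scale_cues(d_view=0.5, s_motion=2.0, s_disparity=2.0)
# Far viewing: the stable-scene prior dominates, so the estimate stays near 1.
scale_far = combine_scale_cues(d_view=4.0, s_motion=2.0, s_disparity=2.0)
```

With these assumed variances the same physical expansion is reported almost veridically at near distances but is largely discounted at far distances, which is the qualitative pattern the abstract describes.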
Abstract:
In the last few years a state-space formulation has been introduced into self-tuning control. This has not only allowed for a wider choice of possible control actions, but has also provided an insight into the theory underlying—and hidden by—that used in the polynomial description. This paper considers many of the self-tuning algorithms, both state-space and polynomial, presently in use, and by starting from first principles develops the observers which are, effectively, used in each case. At any specific time instant the state estimator can be regarded as taking one of two forms. In the first case the most recently available output measurement is excluded, and here an optimal and conditionally stable observer is obtained. In the second case the present output signal is included, and here it is shown that although the observer is once again conditionally stable, it is no longer optimal. This result is of significance, as many of the popular self-tuning controllers lie in the second, rather than first, category.
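The two estimator forms distinguished in this abstract can be illustrated on a minimal discrete-time system. The scalar plant and the gain below are assumptions chosen for demonstration, not taken from the paper; the point is only the structural difference between excluding and including the present output in the state estimate.

```python
# Illustrative sketch of the two observer forms for a scalar plant
# x(k+1) = a*x(k) + b*u(k),  y(k) = x(k).  Values of a, b, L are assumed.
a, b, L = 0.9, 1.0, 0.5

def predictor_step(x_hat, u, y):
    # Form 1: y(k) corrects only the *prediction* of x(k+1); the estimate
    # of x(k) itself excludes the most recent output measurement.
    return a * x_hat + b * u + L * (y - x_hat)

def current_step(x_hat_prior, u, y):
    # Form 2: the present output y(k) is folded into the estimate of x(k)
    # before that estimate is propagated forward.
    x_hat = x_hat_prior + L * (y - x_hat_prior)
    return a * x_hat + b * u

# For this gain both error dynamics are stable, so both estimates converge
# to the true state of the (noiseless) plant.
x, x_p, x_c = 1.0, 0.0, 0.0
for k in range(50):
    u, y = 0.1, x
    x_p = predictor_step(x_p, u, y)
    x_c = current_step(x_c, u, y)
    x = a * x + b * u
```

In this noiseless setting the forms differ only in their error dynamics (here (a - L) versus a(1 - L)); the paper's optimality distinction between them arises once measurement noise is present.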
Abstract:
This paper presents novel observer-based techniques for the estimation of flow demands in gas networks, from sparse pressure telemetry. A completely observable model is explored, constructed by incorporating difference equations that assume the flow demands are steady. Since the flow demands usually vary slowly with time, this is a reasonable approximation. Two techniques for constructing robust observers are employed: robust eigenstructure assignment and singular value assignment. These techniques help to reduce the effects of the system approximation. Modelling error may be further reduced by making use of known profiles for the flow demands. The theory is extended to deal successfully with the problem of measurement bias. The pressure measurements available are subject to constant biases which degrade the flow demand estimates, and such biases need to be estimated. This is achieved by constructing a further model variation that incorporates the biases into an augmented state vector, but now includes information about the flow demand profiles in a new form.
Abstract:
Stereoscopic white-light imaging of a large portion of the inner heliosphere has been used to track interplanetary coronal mass ejections. At large elongations from the Sun, the white-light brightness depends on both the local electron density and the efficiency of the Thomson-scattering process. To quantify the effects of the Thomson-scattering geometry, we study an interplanetary shock using forward magnetohydrodynamic simulation and synthetic white-light imaging. Identifiable as an inclined streak of enhanced brightness in a time–elongation map, the travelling shock can be readily imaged by an observer located within a wide range of longitudes in the ecliptic. Different parts of the shock front contribute to the imaged brightness pattern viewed by observers at different longitudes. Moreover, even for an observer located at a fixed longitude, a different part of the shock front will contribute to the imaged brightness at any given time. The observed brightness within each imaging pixel results from a weighted integral along its corresponding ray-path. It is possible to infer the longitudinal location of the shock from the brightness pattern in an optical sky map, based on the east–west asymmetry in its brightness and degree of polarisation. Therefore, measurement of the interplanetary polarised brightness could significantly reduce the ambiguity in performing three-dimensional reconstruction of local electron density from white-light imaging.
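The per-pixel brightness described here, a weighted integral along the ray-path, can be sketched numerically. The electron-density falloff and the simple scattering weight below are placeholder assumptions, not the actual Thomson-scattering geometry factor; the sketch only shows how different parts of a ray contribute to one imaged pixel.

```python
import numpy as np

# Sketch of per-pixel brightness as a weighted line-of-sight integral,
# B = integral of n_e(r) * w(r) dl, in a 2D ecliptic plane with the Sun at
# the origin and distances in AU.  Both the density profile (~r^-2) and the
# weighting function used here are illustrative assumptions.
def pixel_brightness(observer, direction, n_steps=2000, l_max=4.0):
    direction = direction / np.linalg.norm(direction)
    l = np.linspace(1e-3, l_max, n_steps)        # distance along the ray
    points = observer + l[:, None] * direction   # sample points on the ray
    r = np.linalg.norm(points, axis=1)           # heliocentric distance
    n_e = r**-2.0                                # assumed density falloff
    w = r**-2.0                                  # assumed scattering weight
    dl = l[1] - l[0]
    return float(np.sum(n_e * w) * dl)           # simple Riemann sum

obs = np.array([1.0, 0.0])                       # observer at 1 AU
b_sunward = pixel_brightness(obs, np.array([-1.0, 0.2]))
b_antisun = pixel_brightness(obs, np.array([1.0, 0.2]))
```

Because the weighting peaks where the ray passes closest to the Sun, lines of sight at small elongation accumulate far more brightness, which is why observer longitude changes which part of a shock front dominates each pixel.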
Abstract:
When human observers are exposed to even slight motion signals followed by brief visual transients—stimuli containing no detectable coherent motion signals—they perceive large and salient illusory jumps. This novel effect, which we call “high phi”, challenges well-entrenched assumptions about the perception of motion, namely the minimal-motion principle and the breakdown of coherent motion perception with steps above an upper limit. Our experiments with transients such as texture randomization or contrast reversal show that the magnitude of the jump depends on spatial frequency and transient duration, but not on the speed of the inducing motion signals, and the direction of the jump depends on the duration of the inducer. Jump magnitude is robust across jump directions and different types of transient. In addition, when a texture is actually displaced by a large step beyond dmax, a breakdown of coherent motion perception is expected, but in the presence of an inducer observers again perceive coherent displacements at or just above dmax. In sum, across a large variety of stimuli, we find that when incoherent motion noise is preceded by a small bias, instead of perceiving little or no motion, as suggested by the minimal-motion principle, observers perceive jumps whose amplitude closely follows their own dmax limits.
Abstract:
Purpose – This paper describes visitors' reactions to using an Apple iPad or smartphone to follow trails in a museum by scanning QR codes, and draws conclusions on the potential for this technology to help improve accessibility at low cost. Design/methodology/approach – Activities were devised which involved visitors following trails around museum objects, each labelled with a QR code and symbolised text. Visitors scanned the QR codes using a mobile device, which then showed more information about an object. Project-team members acted as participant-observers, engaging with visitors and noting how they used the system. Experiences from each activity fed into the design of the next. Findings – Some physical and technical problems with using QR codes can be overcome with the introduction of simple aids, particularly movable object labels. A layered approach to information access is possible, with the first layer comprising a label, the second a mobile-web-enabled screen and the third choices of text, pictures, video and audio. Video was especially appealing to young people. The ability to repeatedly watch video or listen to audio seemed to be appreciated by visitors with learning disabilities. This approach can have a low equipment cost. However, maintaining the information behind labels and keeping up with technological changes are ongoing processes. Originality/value – Using QR codes on movable, symbolised object labels as part of a layered information system might help modestly funded museums enhance their accessibility, particularly as visitors increasingly arrive with their own smartphones or tablets.
Abstract:
The roles of state and trait anxiety in observer ratings of social skill and in negatively biased self-perception of social skill were examined. Participants were aged between 7 and 13 years (mean = 9.65; SD = 1.77; N = 102); 47 had a current anxiety diagnosis and 55 were non-anxious controls. Participants were randomly allocated to a high- or low-anxiety condition and asked to complete social tasks. Task instructions were adjusted across conditions to manipulate participants’ state anxiety. Observers rated anxious participants as having poorer social skills than non-anxious controls, but there was no evidence that anxious participants exhibited a negative self-perception bias relative to controls. However, as participants’ ratings of state anxiety increased, their perception of their performance became more negatively biased. The results suggest that anxious children may exhibit real impairments in social skill and that high levels of state anxiety can lead to biased judgements of social skills in anxious and non-anxious children.
Abstract:
It is often assumed that humans generate a 3D reconstruction of the environment, either in egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer’s prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene point is treated independently in the reconstruction; in the other, the pertinent variable is the spatial relationship between pairs of points. Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. Error distributions varied substantially with changes in scene layout; we compared these directly with the likelihood maps to quantify the success of the models. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as this, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.
Abstract:
When the sensory consequences of an action are systematically altered our brain can recalibrate the mappings between sensory cues and properties of our environment. This recalibration can be driven by both cue conflicts and altered sensory statistics, but neither mechanism offers a way for cues to be calibrated so they provide accurate information about the world, as sensory cues carry no information as to their own accuracy. Here, we explored whether sensory predictions based on internal physical models could be used to accurately calibrate visual cues to 3D surface slant. Human observers played a 3D kinematic game in which they adjusted the slant of a surface so that a moving ball would bounce off the surface and through a target hoop. In one group, the ball’s bounce was manipulated so that the surface behaved as if it had a different slant to that signaled by visual cues. With experience of this altered bounce, observers recalibrated their perception of slant so that it was more consistent with the assumed laws of kinematics and physical behavior of the surface. In another group, making the ball spin in a way that could physically explain its altered bounce eliminated this pattern of recalibration. Importantly, both groups adjusted their behavior in the kinematic game in the same way, experienced the same set of slants and were not presented with low-level cue conflicts that could drive the recalibration. We conclude that observers use predictive kinematic models to accurately calibrate visual cues to 3D properties of the world.
Abstract:
Arches, streamers, polar lights, merry dancers… just a few of the many names used to describe the aurora borealis in historical documents in the UK. We have compiled a new catalogue of 20,591 independent reports of auroral sightings from the British Isles and Ireland for 1700–1975 using observatory yearbooks, the diaries of amateur observers, newspaper reports and the scientific literature. Our aim is to provide an independent data series that can aid understanding of long-term solar variability, alongside cosmogenic isotope data and historic records of geomagnetic activity and sunspots.
Abstract:
We use sunspot group observations from the Royal Greenwich Observatory (RGO) to investigate the effects of intercalibrating data from observers with different visual acuities. The tests are made by counting the number of groups R_B above a variable cut-off threshold of observed total whole-spot area (uncorrected for foreshortening) to simulate what a lower-acuity observer would have seen. The synthesised annual means of R_B are then re-scaled to the full observed RGO group number R_A using a variety of regression techniques. It is found that a very high correlation between R_A and R_B (r_AB > 0.98) does not prevent large errors in the intercalibration (for example sunspot maximum values can be over 30% too large even for such levels of r_AB). In generating the backbone sunspot number (R_BB), Svalgaard and Schatten (2015, this issue) force regression fits to pass through the scatter-plot origin, which generates unreliable fits (the residuals do not form a normal distribution) and causes sunspot cycle amplitudes to be exaggerated in the intercalibrated data. It is demonstrated that the use of quantile-quantile (“Q-Q”) plots to test for a normal distribution is a useful indicator of erroneous and misleading regression fits. Ordinary least squares linear fits, not forced to pass through the origin, are sometimes reliable (although the optimum method used is shown to be different when matching peak and average sunspot group numbers). However, other fits are only reliable if non-linear regression is used. From these results it is entirely possible that the inflation of solar cycle amplitudes in the backbone group sunspot number as one goes back in time, relative to related solar-terrestrial parameters, is entirely caused by the use of inappropriate and non-robust regression techniques to calibrate the sunspot data.
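The regression pitfall described in this abstract is easy to reproduce on synthetic data: when the true relation between two observers' counts has a non-zero offset, forcing the fit through the origin inflates the slope and leaves residuals that trend systematically with the predictor. The data below are illustrative assumptions, not the RGO record.

```python
import numpy as np

# Synthetic sketch of the calibration pitfall: the true relation between a
# lower-acuity observer's counts x and the reference counts y has a
# non-zero intercept.  All values here are illustrative assumptions.
rng = np.random.default_rng(1)
x = np.linspace(1.0, 12.0, 40)               # lower-acuity annual means
y = 0.8 * x + 1.5 + rng.normal(0, 0.2, 40)   # reference annual means

# Fit forced through the origin versus ordinary least squares with intercept.
slope_origin = np.sum(x * y) / np.sum(x * x)
slope_ols, intercept = np.polyfit(x, y, 1)

# The forced fit's residuals are not a zero-mean noise sample: they trend
# systematically with x, which a Q-Q plot (or a correlation check) exposes.
resid_origin = y - slope_origin * x
```

Scaling high-activity years by the inflated forced-origin slope is what exaggerates cycle amplitudes; the OLS fit with an intercept recovers the true slope (0.8) to within the noise.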
Abstract:
Although the sunspot-number series have existed since the mid-19th century, they are still the subject of intense debate, with the largest uncertainty being related to the "calibration" of the visual acuity of individual observers in the past. Daisy-chain regression methods are applied to inter-calibrate the observers, which may lead to significant bias and error accumulation. Here we present a novel method to calibrate the visual acuity of the key observers to the reference data set of Royal Greenwich Observatory sunspot groups for the period 1900-1976, using the statistics of the active-day fraction. For each observer we independently evaluate their observational threshold [S_S], defined such that the observer is assumed to miss all of the groups with an area smaller than S_S and report all the groups larger than S_S. Next, using a Monte Carlo method we construct, from the reference data set, a correction matrix for each observer. The correction matrices are significantly non-linear and cannot be approximated by a linear regression or proportionality. We emphasize that corrections based on a linear proportionality between annually averaged data lead to serious biases and distortions of the data. The correction matrices are applied to the original sunspot group records for each day, and finally the composite corrected series is produced for the period since 1748. The corrected series displays secular minima around 1800 (Dalton minimum) and 1900 (Gleissberg minimum), as well as the Modern grand maximum of activity in the second half of the 20th century. The uniqueness of the grand maximum is confirmed for the last 250 years. It is shown that the adoption of a linear relationship between the data of Wolf and Wolfer results in grossly inflated group numbers in the 18th and 19th centuries in some reconstructions.
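The threshold-observer idea can be sketched with a small Monte Carlo simulation: an observer with threshold S_S misses every group whose area falls below S_S. The group-area distribution below, and the assumption that groups tend to be larger on more active days, are illustrative choices rather than properties of the RGO data; they serve only to show why a single proportionality constant cannot calibrate such a record.

```python
import numpy as np

# Monte Carlo sketch: a simulated observer with acuity threshold s_thresh
# misses all groups with area below the threshold.  Area distribution and
# its dependence on activity are illustrative assumptions.
rng = np.random.default_rng(2)

def daily_counts(n_days, mean_groups, s_thresh):
    true_counts, seen_counts = [], []
    for _ in range(n_days):
        n = rng.poisson(mean_groups)
        # Assumed: groups tend to be larger during more active periods.
        areas = rng.lognormal(mean=3.0 + 0.25 * n, sigma=1.2, size=n)
        true_counts.append(n)
        seen_counts.append(int(np.sum(areas >= s_thresh)))
    return np.array(true_counts), np.array(seen_counts)

true_lo, seen_lo = daily_counts(5000, mean_groups=2.0, s_thresh=100.0)
true_hi, seen_hi = daily_counts(5000, mean_groups=8.0, s_thresh=100.0)

# The true/observed ratio differs between quiet and active periods, so the
# correction is non-linear: no single scale factor works for both regimes.
ratio_lo = true_lo.sum() / max(seen_lo.sum(), 1)
ratio_hi = true_hi.sum() / max(seen_hi.sum(), 1)
```

Tabulating observed-count versus true-count pairs from such a simulation, per observer, is the spirit of the correction-matrix construction the abstract describes.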
Abstract:
Observers generally fail to recover three-dimensional shape accurately from binocular disparity. Typically, depth is overestimated at near distances and underestimated at far distances [Johnston, E. B. (1991). Systematic distortions of shape from stereopsis. Vision Research, 31, 1351–1360]. A simple prediction from this is that disparity-defined objects should appear to expand in depth when moving towards the observer, and compress in depth when moving away. However, additional information is provided when an object moves from which 3D Euclidean shape can be recovered, be this through the addition of structure from motion information [Richards, W. (1985). Structure from stereo and motion. Journal of the Optical Society of America A, 2, 343–349], or the use of non-generic strategies [Todd, J. T., & Norman, J. F. (2003). The visual perception of 3-D shape from multiple cues: Are observers capable of perceiving metric structure? Perception and Psychophysics, 65, 31–47]. Here, we investigated shape constancy for objects moving in depth. We found that to be perceived as constant in shape, objects needed to contract in depth when moving toward the observer, and expand in depth when moving away, countering the effects of incorrect distance scaling (Johnston, 1991). This is a striking example of the failure of shape constancy, but one that is predicted if observers neither accurately estimate object distance in order to recover Euclidean shape, nor are able to base their responses on a simpler processing strategy.
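The distance-scaling account behind these distortions can be written out in a few lines. For a depth interval dZ at distance D, relative disparity is roughly I*dZ/D^2, so recovering depth requires an estimate of D. The compressed distance estimate used below (a geometric pull toward an intermediate default, in the spirit of Johnston, 1991) is an illustrative assumption, not the paper's fitted model.

```python
# Sketch of depth-from-disparity with a misestimated viewing distance.
# The compression rule and parameter values are illustrative assumptions.
I = 0.065            # interocular separation in metres
D_default = 1.0      # assumed attractor distance in metres

def perceived_depth(true_depth, d_true, compression=0.5):
    # Distance estimate pulled geometrically toward D_default.
    d_est = d_true**(1 - compression) * D_default**compression
    disparity = I * true_depth / d_true**2   # disparity actually present
    return disparity * d_est**2 / I          # depth recovered using d_est

# A 5 cm deep object viewed at 0.5 m versus 2.0 m:
depth_near = perceived_depth(0.05, d_true=0.5)   # overestimated
depth_far = perceived_depth(0.05, d_true=2.0)    # underestimated
```

The same arithmetic yields the abstract's prediction for moving objects: an object receding from 0.5 m to 2.0 m must physically expand in depth for its perceived depth to stay constant.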
Abstract:
For many tasks, such as retrieving a previously viewed object, an observer must form a representation of the world at one location and use it at another. A world-based 3D reconstruction of the scene built up from visual information would fulfil this requirement, something computer vision now achieves with great speed and accuracy. However, I argue that it is neither easy nor necessary for the brain to do this. I discuss biologically plausible alternatives, including the possibility of avoiding 3D coordinate frames such as egocentric and world-based representations. For example, the distance, slant and local shape of surfaces dictate the propensity of visual features to move in the image with respect to one another as the observer’s perspective changes (through movement or binocular viewing). Such propensities can be stored without the need for 3D reference frames. The problem of representing a stable scene in the face of continual head and eye movements is an appropriate starting place for understanding the goal of 3D vision, more so, I argue, than the case of a static binocular observer.