867 results for Visual Divided Field
Abstract:
Bioacoustic data can provide an important basis for environmental monitoring. To explore large collections of field recordings, this paper presents an automated similarity search algorithm. A user provides a region of an audio recording defined by frequency and time bounds; the content of that region is used to construct a query. During retrieval, the algorithm automatically scans recordings in search of similar regions. Specifically, we present a feature extraction approach based on the visual content of vocalisations, in this case ridges, and develop a generic regional representation of vocalisations for indexing. Our feature extraction method works best for bird vocalisations that show ridge characteristics. The regional representation method allows the content of an arbitrary region of a continuous recording to be described in a compressed format.
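A minimal sketch of this kind of regional similarity search, assuming a spectrogram-based pipeline: the query is a time/frequency-bounded region, a gradient-orientation histogram stands in for the paper's ridge features, and the target recording is scanned window by window. All function names, parameters and the descriptor itself are illustrative, not the authors' implementation.

```python
# Hedged sketch of region-based similarity search over spectrograms.
import numpy as np
from scipy.signal import spectrogram

def region_descriptor(spec_region, n_bins=8):
    """Summarise a spectrogram region by a histogram of gradient orientations,
    a crude proxy for ridge-direction statistics."""
    gy, gx = np.gradient(spec_region)
    angles = np.arctan2(gy, gx).ravel()
    weights = np.hypot(gy, gx).ravel()
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi), weights=weights)
    return hist / (np.linalg.norm(hist) + 1e-12)

def search(query_audio, target_audio, fs, t_bounds, f_bounds, hop_s=0.5):
    """Scan target_audio for regions similar to the user-selected query region."""
    f, t, Sq = spectrogram(query_audio, fs)
    _, t2, St = spectrogram(target_audio, fs)
    fmask = (f >= f_bounds[0]) & (f <= f_bounds[1])
    tmask = (t >= t_bounds[0]) & (t <= t_bounds[1])
    q = region_descriptor(np.log1p(Sq[np.ix_(fmask, tmask)]))
    width = int(tmask.sum())
    hop = max(1, int(hop_s / (t2[1] - t2[0])))
    scores = []
    for start in range(0, St.shape[1] - width + 1, hop):
        d = region_descriptor(np.log1p(St[fmask, start:start + width]))
        scores.append((t2[start], float(np.dot(q, d))))  # cosine similarity of unit vectors
    return sorted(scores, key=lambda s: -s[1])            # best-matching start times first
```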
Abstract:
The appropriateness of applying drink driving legislation to motorcycle riding has been questioned, as there may be fundamental differences in the effects of alcohol on driving and motorcycling. It has been suggested that alcohol may redirect riders’ focus from higher-order cognitive skills such as cornering, judgement and hazard perception to more physical skills such as maintaining balance. To test this hypothesis, the effects of low doses of alcohol on balance ability were investigated in a laboratory setting. The static balance of twenty experienced and twenty novice riders was measured while they performed either no secondary task, a visual (search) task, or a cognitive (arithmetic) task following the administration of alcohol (0%, 0.02% and 0.05% BAC). Subjective ratings of intoxication and balance impairment increased in a dose-dependent manner in both novice and experienced motorcycle riders, while a BAC of 0.05%, but not 0.02%, was associated with impairments in static balance ability. This balance impairment was exacerbated when riders performed a cognitive, but not a visual, secondary task. Likewise, 0.05% BAC was associated with impairments in novice and experienced riders’ performance of a cognitive, but not a visual, secondary task, suggesting that interactive processes underlie balance and cognitive task performance. There were no observed differences between novice and experienced riders in static balance or secondary task performance, either alone or in combination. Implications for road safety and future ‘drink riding’ policy considerations are discussed.
Abstract:
Purpose To investigate hyperopic shifts and the oblique (45-degree/135-degree) component of astigmatism at large angles in the horizontal visual field using the Hartmann-Shack technique. Methods The adult participants comprised 6 hypermetropes, 13 emmetropes and 11 myopes. Measurements were made with a modified COAS-HD Hartmann-Shack aberrometer across ±60 degrees of the horizontal visual field in 5-degree steps. Eyes were dilated with 1% cyclopentolate. Peripheral refraction was estimated as mean spherical (spherical equivalent) refraction, with/against-the-rule and oblique astigmatism components, and as horizontal and vertical refraction components, based on 3-mm major-diameter elliptical pupils. Results Thirty percent of eyes showed a pattern that was a combination of the type IV and type I patterns of Rempt et al. (Rempt F, Hoogerheide J, Hoogenboom WP. Peripheral retinoscopy and the skiagram. Ophthalmologica 1971;162:1–10): the characteristics of type IV (relative hypermetropia along the vertical meridian and relative myopia along the horizontal meridian) out to an angle of between 40 and 50 degrees, then behaviour like type I (both meridians showing relative hypermetropia). We classified this pattern as type IV/I. Seven of the 13 emmetropes had this pattern. As a group, there was no significant variation of the oblique component of astigmatism with angle, but about one-half of the eyes showed significant positive slopes (more positive or less negative values in the nasal field than in the temporal field) and one-fourth showed significant negative slopes. Conclusions It is often considered that a pattern of relative peripheral hypermetropia predisposes to the development of myopia. In this context, the finding that a considerable proportion of emmetropes had the IV/I pattern suggests that it is unlikely that refraction at visual field angles beyond 40 degrees from fixation contributes to myopia development.
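For reference, the refraction components named above (mean spherical equivalent plus with/against-the-rule and oblique astigmatism) are conventionally obtained from a sphero-cylindrical refraction S / C x theta via the standard power-vector decomposition; this is the textbook form, not a formula quoted from the paper:

```latex
M = S + \frac{C}{2}, \qquad
J_{0} = -\frac{C}{2}\cos 2\theta, \qquad
J_{45} = -\frac{C}{2}\sin 2\theta
```

Here J_0 captures with/against-the-rule astigmatism and J_45 the oblique (45/135-degree) component.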
Abstract:
Purpose: In animal models, hemi-field deprivation results in localised, graded vitreous chamber elongation and, presumably, deprivation-induced localised changes in retinal processing. The aim of this research was to determine whether there are variations in ERG responses across the retina in normal chick eyes, and to examine the effect of hemi-field and full-field deprivation on ERG responses across the retina and at earlier times than have previously been examined electrophysiologically. Methods: Chicks were either untreated, or wore monocular full-diffusers or half-diffusers (depriving nasal retina) (n = 6-8 each group) from day 8. mfERG responses were measured using the VERIS mfERG system across the central 18.2° × 16.7° (H × V) field. The stimulus consisted of 61 unscaled hexagons, each modulated between black and white according to a pseudorandom binary m-sequence. The mfERG was measured on day 12 in untreated chicks, following 4 days of hemi-field diffuser wear, and 2, 48 and 96 h after application of full-field diffusers. Results: The ERG response of untreated chick eyes did not vary across the measured field; there was no effect of retinal location on the N1-P1 amplitude (p = 0.108) or on P1 implicit time (p > 0.05). This finding is consistent with retinal ganglion cell density of the chick varying by only a factor of two across the entire retina. Half-diffusers produced a ramped retina and a graded effect of negative lens correction (p < 0.0001); changes in retinal processing were localised. The untreated retina showed increasing complexity of the ERG waveform with development; form-deprivation prevented the increasing complexity of the response at the 2, 48 and 96 h measurement times and produced alterations in response timing. Conclusions: Form-deprivation, and its concomitant loss of image contrast and high spatial frequency images, prevented development of the ERG responses, consistent with a disruption of the development of retinal feedback systems. The characterisation of ERG responses in normal and deprived chick eyes across the retina allows the assessment of concurrent visual and retinal manipulations in this model.
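The black/white modulation of the 61 hexagons follows a pseudorandom binary m-sequence; below is a short sketch of how such a sequence can be generated with a linear feedback shift register. The register length and tap positions are illustrative only and are not taken from the VERIS system.

```python
# Hedged sketch: one period of a maximal-length (m-) sequence from a Fibonacci LFSR.
def m_sequence(taps=(4, 3), length=None):
    """taps: feedback tap positions (1-indexed) of a primitive polynomial,
    here x^4 + x^3 + 1, giving a period of 2^4 - 1 = 15 bits."""
    n = max(taps)
    state = [1] * n                      # any non-zero seed works
    period = (1 << n) - 1
    out = []
    for _ in range(length or period):
        out.append(state[-1])            # output bit is the last register stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]           # feedback = XOR of the tapped stages
        state = [fb] + state[:-1]        # shift right, insert feedback at the front
    return out

print(m_sequence())  # one full 15-bit period, e.g. [1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0]
```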
Abstract:
Purpose To design and manufacture lenses to correct peripheral refraction along the horizontal meridian, and to determine whether these resulted in noticeable improvements in visual performance. Method Subjective refraction of a low myope was determined on the basis of best peripheral detection acuity along the horizontal visual field out to ±30° for both horizontal and vertical gratings. Subjective refraction was compared with objective refraction using a COAS-HD aberrometer. Special lenses were made to correct peripheral refraction, based on designs optimized with and without smoothing across a 3 mm diameter square aperture. Grating detection was retested with these lenses. Contrast thresholds for 1.25′ spots were determined across the field for the conditions of best correction, on-axis correction, and the special lenses. Results The participant had high relative peripheral hyperopia, particularly in the temporal visual field (maximum 2.9 D). There were differences > 0.5 D between subjective and objective refractions at a few field angles. On-axis correction reduced peripheral detection acuity and increased contrast thresholds in the peripheral visual field, relative to the best correction, by up to 0.4 and 0.5 log units, respectively. The special lenses restored most of the peripheral vision, although not fully at angles out to ±10°, with the lens optimized with aperture-smoothing possibly giving better vision than the lens optimized without aperture-smoothing at some angles. Conclusion It is possible to design and manufacture lenses that give near-optimum peripheral visual performance to at least ±30° along one visual field meridian. The benefit of such lenses is likely to be manifest only if a subject has considerable relative peripheral refraction, for example of the order of 2 D.
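As a point of arithmetic (only the 0.4 and 0.5 log-unit values come from the study), a change expressed in log units is multiplicative: a 0.5 log-unit rise in contrast threshold corresponds to thresholds about 3.2 times higher than under the best correction,

```latex
\frac{T_{\text{on-axis}}}{T_{\text{best}}} = 10^{\Delta_{\log}},
\qquad 10^{0.5} \approx 3.16 .
```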
Abstract:
The ability to automate forced landings in an emergency, such as engine failure, is essential to improving the safety of Unmanned Aerial Vehicles operating in General Aviation airspace. Using active vision to detect safe landing zones below the aircraft vastly improves the reliability and safety of such systems by gathering up-to-the-minute information about the ground environment. This paper presents the Site Detection System, a methodology utilising a downward-facing camera to analyse the ground environment in both 2D and 3D, detect safe landing sites and characterise them according to size, shape, slope and nearby obstacles. A methodology is presented showing the fusion of landing site detection from 2D imagery with a coarse Digital Elevation Map and dense 3D reconstructions from INS-aided Structure-from-Motion to improve accuracy. Results are presented from an experimental flight, showing the precision/recall of detected landing sites against a hand-classified ground truth, and improved performance with the integration of 3D analysis from visual Structure-from-Motion.
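A small sketch of a precision/recall evaluation against a hand-classified ground truth, assuming detected sites are matched to true sites by centre distance within a fixed tolerance; the matching rule, the 20 m tolerance and the coordinates in the example are illustrative, not the paper's protocol.

```python
# Hedged sketch: scoring detected landing sites against ground-truth sites.
import math

def precision_recall(detections, ground_truth, tol_m=20.0):
    """detections, ground_truth: lists of (east, north) site centres in metres."""
    unmatched_gt = list(ground_truth)
    tp = 0
    for d in detections:
        best = min(unmatched_gt, key=lambda g: math.dist(d, g), default=None)
        if best is not None and math.dist(d, best) <= tol_m:
            tp += 1                      # detection matches a still-unclaimed true site
            unmatched_gt.remove(best)
    fp = len(detections) - tp            # detections with no matching true site
    fn = len(unmatched_gt)               # true sites never detected
    precision = tp / (tp + fp) if detections else 0.0
    recall = tp / (tp + fn) if ground_truth else 0.0
    return precision, recall

# Example: two of three detections fall within tolerance of true sites -> (0.67, 0.67).
print(precision_recall([(0, 0), (55, 10), (300, 300)],
                       [(5, 3), (60, 12), (150, 150)]))
```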
Abstract:
In late 2012 and early 2013 we interviewed 25 experienced and early career supervisors of creative practice higher research degrees. This journey spanned five universities and a broad range of disciplines including visual art, music, performing art, new media, creative writing, fashion, graphic design, interaction design and interior design. Some of the supervisors we interviewed were amongst the first to complete and supervise practice-led and practice-based PhDs; some have advocated for and defined this emergent field; and some belong to the next generation of supervisors who have confidently embarked on this exciting and challenging path. Their reflections have brought to light many insights gained over the past decade. Here we have drawn together common themes into a collection of principles and best-practice examples. We present them as advice rather than rules, as one thing the supervisors were unanimous about is the need to avoid prescriptive models and frameworks, and to foster creativity and innovation in what is still an emergent field of postgraduate supervision. It is with thanks to all of the supervisors who contributed to these conversations, and to their generosity in sharing their practices, that we present their advice, exemplars and case studies.
Abstract:
This paper presents a long-term experiment in which a mobile robot uses adaptive spherical views to localize itself and navigate inside a non-stationary office environment. The office contains seven members of staff and its appearance changes continuously over time due to their daily activities. The experiment runs as an episodic navigation task in the office over a period of eight weeks. The spherical views are stored in the nodes of a pose graph and are updated in response to changes in the environment. The updating mechanism is inspired by the concepts of long- and short-term memory. The experimental evaluation uses three performance metrics that assess the quality of both the adaptive spherical views and the navigation over time.
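A rough sketch of a pose-graph node holding a spherical reference view whose features are maintained with a long-/short-term-memory style rule: repeatedly re-observed candidates are promoted to long-term storage, and long-term features that stay missing are eventually forgotten. The two-store structure, the thresholds and the notion of a "feature" are assumptions for illustration, not the paper's algorithm.

```python
# Hedged sketch: adaptive reference view stored at a pose-graph node.
from dataclasses import dataclass, field

@dataclass
class SphericalViewNode:
    pose: tuple                                       # (x, y, theta) of the node
    long_term: dict = field(default_factory=dict)     # stable feature -> consecutive misses
    short_term: dict = field(default_factory=dict)    # candidate feature -> consecutive sightings
    promote_after: int = 3                            # sightings before promotion to long-term
    forget_after: int = 5                             # misses before a long-term feature is dropped

    def update(self, observed_features):
        observed = set(observed_features)
        for feat in observed:
            if feat in self.long_term:
                self.long_term[feat] = 0              # re-observation resets the decay counter
            else:
                self.short_term[feat] = self.short_term.get(feat, 0) + 1
                if self.short_term[feat] >= self.promote_after:
                    self.long_term[feat] = 0          # promoted: now part of the stable view
                    del self.short_term[feat]
        for feat in list(self.long_term):
            if feat not in observed:
                self.long_term[feat] += 1
                if self.long_term[feat] >= self.forget_after:
                    del self.long_term[feat]          # forgotten after repeated misses
        # short-term candidates not re-observed are discarded immediately
        self.short_term = {f: c for f, c in self.short_term.items() if f in observed}
```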
Abstract:
Long-term autonomy in robotics requires perception systems that are resilient to unusual but realistic conditions that will eventually occur during extended missions. For example, unmanned ground vehicles (UGVs) need to be capable of operating safely in adverse and low-visibility conditions, such as at night or in the presence of smoke. The key to a resilient UGV perception system lies in the use of multiple sensor modalities, e.g., operating at different frequencies of the electromagnetic spectrum, to compensate for the limitations of a single sensor type. In this paper, visual and infrared imaging are combined in a Visual-SLAM algorithm to achieve localization. We propose to evaluate the quality of data provided by each sensor modality prior to data combination. This evaluation is used to discard low-quality data, i.e., data most likely to induce large localization errors. In this way, perceptual failures are anticipated and mitigated. An extensive experimental evaluation is conducted on data sets collected with a UGV in a range of environments and adverse conditions, including the presence of smoke (obstructing the visual camera), fire, extreme heat (saturating the infrared camera), low-light conditions (dusk), and at night with sudden variations of artificial light. A total of 240 trajectory estimates are obtained using five different variations of data sources and data combination strategies in the localization method. In particular, the proposed approach for selective data combination is compared to methods using a single sensor type or combining both modalities without preselection. We show that the proposed framework allows for camera-based localization resilient to a large range of low-visibility conditions.
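A minimal sketch of the pre-selection idea, assuming a per-frame quality score is computed for each modality and low-quality frames are discarded before the localization update; the entropy-based score and the threshold are stand-ins, not the paper's actual quality measure.

```python
# Hedged sketch: quality-gated selection of visual and infrared frames.
import numpy as np

def image_quality(img, bins=64):
    """Proxy quality score: grey-level entropy (bits), which collapses when a frame
    is washed out by smoke (visual) or saturated by extreme heat (infrared).
    Assumes 8-bit intensity images."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def select_frames(visual_img, infrared_img, threshold=3.0):
    """Return only the modalities judged reliable for this time step."""
    selected = {}
    if image_quality(visual_img) >= threshold:
        selected["visual"] = visual_img
    if image_quality(infrared_img) >= threshold:
        selected["infrared"] = infrared_img
    return selected   # an empty dict would mean skipping the localization update this frame
```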
Abstract:
Covertly tracking mobile targets, either animal or human, in previously unmapped outdoor natural environments using off-road robotic platforms requires both visual and acoustic stealth. Whilst the use of robots for stealthy surveillance is not new, the majority only consider navigation for visual covertness. However, most fielded robotic systems have a non-negligible acoustic footprint arising from the onboard sensors, motors, computers and cooling systems, and also from the wheels interacting with the terrain during motion. This time-varying acoustic signature can jeopardise any visual covertness and needs to be addressed in any stealthy navigation strategy. In previous work, we addressed the initial concepts for acoustically masking a tracking robot’s movements as it travels between observation locations selected to minimise its detectability by a dynamic natural target and ensuring continuous visual tracking of the target. This work extends the overall concept by examining the utility of real-time acoustic signature self-assessment and exploiting shadows as hiding locations for use in a combined visual and acoustic stealth framework.
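One way to picture a combined visual/acoustic stealth criterion is a detectability cost over candidate observation points, trading off the robot's acoustic signature as received at the target against its visual exposure, with shadows lowering the visual term. Everything below, including the spherical-spreading attenuation and the weighting, is an illustrative assumption rather than the framework described in the paper.

```python
# Hedged sketch: combined visual/acoustic detectability cost for candidate viewpoints.
import math

def detectability(candidate, target, robot_noise_db, ambient_noise_db, in_shadow, alpha=0.5):
    """Lower is stealthier. candidate/target are (x, y) positions in metres."""
    dist = math.dist(candidate, target)
    # Acoustic term: robot noise at the target under spherical spreading,
    # relative to the ambient (masking) noise level.
    received_db = robot_noise_db - 20.0 * math.log10(max(dist, 1.0))
    acoustic = max(0.0, received_db - ambient_noise_db)
    # Visual term: closer and out of shadow means easier to spot.
    visual = (0.0 if in_shadow else 1.0) * 100.0 / max(dist, 1.0)
    return alpha * acoustic + (1.0 - alpha) * visual

# Usage: pick the least detectable of two candidate observation points.
candidates = [{"pos": (10, 4), "in_shadow": True}, {"pos": (6, 2), "in_shadow": False}]
best = min(candidates,
           key=lambda c: detectability(c["pos"], (0, 0), 70.0, 45.0, c["in_shadow"]))
print(best)   # the shaded, slightly more distant point wins under these assumed values
```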
Abstract:
This paper describes a novel obstacle detection system for autonomous robots in agricultural field environments that uses a novelty detector to inform stereo matching. Stereo vision alone erroneously detects obstacles in environments with ambiguous appearance and ground plane, such as broad-acre crop fields with harvested crop residue. The novelty detector estimates the probability density in image descriptor space and incorporates image-space positional understanding to identify potential regions for obstacle detection using dense stereo matching. The results demonstrate that the system is able to detect obstacles typical of a farm, by day and by night. This system was successfully used as the sole means of obstacle detection for an autonomous robot performing a long-term, two-hour coverage task, travelling 8.5 km.
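A toy sketch of a novelty detector in descriptor space: fit a density to descriptors of ordinary field appearance and flag image patches whose descriptors are improbable, so that dense stereo matching is only run on those regions. The kernel density estimate, the two-number descriptor and the threshold are illustrative assumptions, not the system described above.

```python
# Hedged sketch: density-based novelty detection to gate stereo obstacle detection.
import numpy as np
from scipy.stats import gaussian_kde

def patch_descriptor(patch):
    """Tiny illustrative descriptor: mean intensity and local contrast."""
    return np.array([patch.mean(), patch.std()])

def fit_normal_model(training_patches):
    """Fit a density over descriptors of 'ordinary' (obstacle-free) appearance."""
    descs = np.stack([patch_descriptor(p) for p in training_patches], axis=1)  # (2, N)
    return gaussian_kde(descs)

def novel_regions(model, image, patch=32, log_density_threshold=-8.0):
    """Return coordinates of patches whose descriptors are unusually improbable."""
    flagged = []
    h, w = image.shape[:2]
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            d = patch_descriptor(image[y:y + patch, x:x + patch])
            if model.logpdf(d)[0] < log_density_threshold:
                flagged.append((y, x))    # candidate region for dense stereo matching
    return flagged
```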
Abstract:
The focus of this research is the creation of a stage-directing training manual on the researcher's site at the National Institute of Dramatic Art. The directing procedures build on the work of Stanislavski's Active Analysis and findings from present-day visual cognition studies. Action research methodology and evidence-based data collection are employed to improve the efficacy of both the directing procedures and the pedagogical manual. The manual serves as a supplement to director training and a toolkit for the more experienced practitioner. The manual and research findings provide a unique and innovative contribution to the field of theatre directing.
Abstract:
We have developed a Hierarchical Look-Ahead Trajectory Model (HiLAM) that incorporates the firing pattern of medial entorhinal grid cells in a planning circuit that includes interactions with the hippocampus and prefrontal cortex. We show the model’s flexibility in representing large real-world environments using odometry information obtained from challenging video sequences. We acquire the visual data from a camera mounted on a small tele-operated vehicle. The camera has a panoramic field of view with its focal point approximately 5 cm above ground level, similar to what would be expected from a rat’s point of view. Using established algorithms for calculating perceptual speed from the apparent rate of visual change over time, we generate raw dead-reckoning information, which loses spatial fidelity over time due to error accumulation. We rectify this loss of fidelity by exploiting the loop-closure detection ability of a biologically inspired robot navigation model termed RatSLAM. The rectified motion information serves as a velocity input to HiLAM to encode the environment in the form of grid cell and place cell maps. Finally, we show goal-directed path planning results for HiLAM in two different environments: an indoor square maze used in rodent experiments and an outdoor arena more than two orders of magnitude larger than the indoor maze. Together, these results bridge for the first time the gap between higher-fidelity bio-inspired navigation models (HiLAM) and more abstracted but highly functional bio-inspired robotic mapping systems (RatSLAM), and move from simulated environments into real-world studies in rodent-sized arenas and beyond.
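A simple sketch of the dead-reckoning step described above: a per-frame perceptual speed is taken from the apparent rate of visual change and integrated with turn-rate information into a path, accumulating exactly the kind of drift that RatSLAM-style loop closure then corrects. The speed proxy and the calibration gain are assumptions, not the algorithms actually used.

```python
# Hedged sketch: raw dead reckoning from visual change, prior to loop-closure correction.
import numpy as np

def perceptual_speed(prev_frame, frame, gain=0.05):
    """Apparent rate of visual change: mean absolute intensity difference,
    scaled by an (assumed) calibration gain to metres per frame."""
    return gain * float(np.mean(np.abs(frame.astype(float) - prev_frame.astype(float))))

def integrate_path(frames, turn_rates):
    """Accumulate (x, y) from visual speed and gyro-like turn rates (rad per frame).
    Small per-step errors accumulate, which is why loop closure is needed."""
    x = y = heading = 0.0
    path = [(x, y)]
    for prev, cur, dtheta in zip(frames[:-1], frames[1:], turn_rates):
        v = perceptual_speed(prev, cur)
        heading += dtheta
        x += v * np.cos(heading)
        y += v * np.sin(heading)
        path.append((x, y))
    return path
```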
Abstract:
Sleep loss, widespread in today’s society and associated with a number of clinical conditions, has a detrimental effect on a variety of cognitive domains, including attention. This study examined the sequelae of sleep deprivation on BOLD fMRI activation during divided attention. Twelve healthy males completed two randomized sessions: one after 27 h of sleep deprivation and one after a normal night of sleep. During each session, BOLD fMRI was measured while subjects completed a cross-modal divided attention task (visual and auditory). After normal sleep, increased BOLD activation was observed bilaterally in the superior frontal gyrus and the inferior parietal lobe during divided attention performance. Subjects reported feeling significantly sleepier in the sleep deprivation session, and there was a trend towards poorer divided attention task performance. Sleep deprivation led to a down-regulation of activation in the left superior frontal gyrus, possibly reflecting an attenuation of top-down control mechanisms on the attentional system. These findings have implications for understanding the neural correlates of divided attention and the neurofunctional changes that occur in individuals who are sleep deprived.
Abstract:
Purpose To quantify the effects of driver age on night-time pedestrian conspicuity, and to determine whether individual differences in visual performance can predict drivers' ability to recognise pedestrians at night. Methods Participants were 32 visually normal drivers (20 younger: M = 24.4 years ± 6.4 years; 12 older: M = 72.0 years ± 5.0 years). Visual performance was measured in a laboratory-based testing session including visual acuity, contrast sensitivity, motion sensitivity and the useful field of view. Night-time pedestrian recognition distances were recorded while participants drove an instrumented vehicle along a closed road course at night; to increase the workload of drivers, auditory and visual distracter tasks were presented for some of the laps. Pedestrians walked in place, sideways to the oncoming vehicles, and wore either a standard high visibility reflective vest or reflective tape positioned on the movable joints (biological motion). Results Driver age and pedestrian clothing significantly (p < 0.05) affected the distance at which the drivers first responded to the pedestrians. Older drivers recognised pedestrians at approximately half the distance of the younger drivers and pedestrians were recognised more often and at longer distances when they wore a biological motion reflective clothing configuration than when they wore a reflective vest. Motion sensitivity was an independent predictor of pedestrian recognition distance, even when controlling for driver age. Conclusions The night-time pedestrian recognition capacity of older drivers was significantly worse than that of younger drivers. The distance at which drivers first recognised pedestrians at night was best predicted by a test of motion sensitivity.
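To make "independent predictor ... even when controlling for driver age" concrete, a toy multiple-regression sketch is given below: recognition distance is regressed on age group and motion sensitivity together, and the motion-sensitivity coefficient is read off. The numbers are made-up placeholders purely for illustration, not the study's data.

```python
# Hedged sketch: what "controlling for age" means operationally (illustrative data only).
import numpy as np

age_group   = np.array([0, 0, 0, 0, 1, 1, 1, 1])                  # 0 = younger, 1 = older
motion_sens = np.array([1.8, 2.1, 1.6, 2.3, 1.2, 0.9, 1.4, 1.0])  # hypothetical scores
distance_m  = np.array([120, 140, 110, 150, 60, 45, 75, 50])      # hypothetical distances

# Design matrix: intercept, age group, motion sensitivity.
X = np.column_stack([np.ones_like(motion_sens), age_group, motion_sens])
coeffs, *_ = np.linalg.lstsq(X, distance_m, rcond=None)
intercept, b_age, b_motion = coeffs
print(f"motion-sensitivity effect holding age constant: {b_motion:.1f} m per unit")
```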