484 results for Visual material
Abstract:
Purpose: There have been few studies of visual temporal processing of myopic eyes. This study investigated the visual performance of emmetropic and myopic eyes using a backward visual masking location task. Methods: Data were collected for 39 subjects (15 emmetropes, 12 stable myopes, 12 progressing myopes). In backward visual masking, a target’s visibility is reduced by a mask presented in quick succession ‘after’ the target. The target and mask stimuli were presented at different interstimulus intervals (from 12 to 300 ms). The task involved locating the position of a target letter with both a higher (seven per cent) and a lower (five per cent) contrast. Results: Emmetropic subjects had significantly better performance for the lower contrast location task than the myopes (F(2,36) = 22.88; p < 0.001) but there was no difference between the progressing and stable myopic groups (p = 0.911). There were no differences between the groups for the higher contrast location task (F(2,36) = 0.72, p = 0.495). No relationship between task performance and either the magnitude of myopia or axial length was found for either task. Conclusions: A location task deficit was observed in myopes only for lower contrast stimuli. Both emmetropic and myopic groups had better performance for the higher contrast task compared to the lower contrast task, with myopes showing considerable improvement. This suggests that five per cent contrast may be the contrast threshold required to bias the task towards the magnocellular system (where myopes have a temporal processing deficit). Alternatively, the task may be sensitive to the contrast sensitivity of the observer.
Abstract:
Ways in which humans engage with the environment have always provided a rich source of material for writers and illustrators of Australian children's literature. Currently, readers are confronted with a multiplicity of complex, competing and/or complementing networks of ideas, theories and emotions that provide narratives about human engagement with the environment at a particular historical moment. This study, entitled Reading the Environment: Narrative Constructions of Ecological Subjectivities in Australian Children's Literature, examines how a representative sample of Australian texts (19 picture books and 4 novels for children and young adults published between 1995 and 2006) constructs fictional ecological subjects in the texts, and offers readers ecological subject positions inscribed with contemporary environmental ideologies. The conceptual framework developed in this study identifies three ideologically grounded positions that humans may assume when engaging with the environment. None of these positions clearly exists independently of any other, nor are they internally homogeneous. Nevertheless they can be categorised as: (i) human dominion over the environment with little regard for environmental degradation (unrestrained anthropocentrism); (ii) human consideration for the environment driven by understandings that humans need the environment to survive (restrained anthropocentrism); and (iii) human deference towards the environment guided by understandings that humans are no more important than the environment (ecocentrism). The transdisciplinary methodological approach to textual analysis used in this thesis draws on ecocriticism, narrative theories, visual semiotics, ecofeminism and postcolonialism to discuss the difficulties and contradictions in the construction of the positions offered. Each chapter of textual analysis focuses on the construction of subjectivities in relation to one of the positions identified in the conceptual framework. 
Chapter 5 is concerned with how texts highlight the negative consequences of human dominion over the environment, or, in the words of this study, living with ecocatastrophe. Chapter 6 examines representations of restrained anthropocentrism in its contemporary form, that is, sustainability. Chapter 7 examines representations of ecocentrism, a radical position with inherent difficulties of representation. According to the analysis undertaken, the focus texts convey the subtleties and complexities of human engagement with the environment and advocate ways of viewing and responding to contemporary unease about the environment. The study concludes that these ways of viewing and responding conform to and/or challenge dominant socio-cultural and political-economic opinions regarding the environment. This study, the first extended work of its kind, makes an original contribution to ecocritical study of Australian children's literature. By undertaking a comprehensive analysis of how texts for children represent human engagement with the environment at a time when important environmental concerns pose significant threats to human existence, I hope to contribute new knowledge to an area of children's literature research that to date has been significantly under-represented.
Abstract:
Creating an acceptance of Visual Effects (VFX) as an effective non-fiction communication tool has the potential to significantly boost return on investment for filmmakers producing documentary. Obtaining this acceptance does not necessarily mean rethinking the way documentary is defined; however, negative perceptions presently dominant within the production industry do need to be addressed, specifically the misguided judgement that the use of sequences which include visual effects discredits a filmmaker's attempt to represent reality. After completing a documentary that used a traditional model of production as its methodology, the question of how to increase this film's marketability is examined by testing the specific assertion that Visual Effects can increase the level of appeal inherent in the documentary genre. While this area of research is speculative, qualifying Visual Effects as an acceptable communication tool in non-fiction narratives would allow the documentary sector to benefit from increased production capabilities.
Abstract:
This research explores the empirical association between takeover bid premium and acquired (purchased) goodwill, and tests whether the strength of the association changes after the passage of approved accounting standard AASB 1013 in Australia in 1988. AASB 1013 mandated capitalization and amortization of acquired goodwill to the income statement over a maximum period of 20 years. We use regressions to assess how the association between bid premium and acquired goodwill varies in the pre-AASB and post-AASB 1013 periods after controlling for confounding factors. Our results show that reducing the variety of accounting policy options available to bidder management after an acquisition results in a systematic reduction in the strength of the association between premium and goodwill.
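The pre/post comparison described above is the standard interaction-term design: regress premium on goodwill, a post-period dummy, and their product, and read the change in association from the interaction coefficient. As a minimal sketch (with simulated data, invented coefficients, and no control variables, none of which come from the study):

```python
import numpy as np

# Simulated illustration of an interaction regression testing whether the
# premium-goodwill slope weakens in the post-AASB 1013 period. All numbers
# here are invented for demonstration, not drawn from the paper's sample.
rng = np.random.default_rng(42)
n = 200
goodwill = rng.uniform(0.0, 10.0, n)
post = (rng.uniform(size=n) > 0.5).astype(float)   # 1 = post-AASB 1013 deal
# True pre-period slope 2.0; interaction of -1.0 weakens it to 1.0 post.
premium = 1.0 + 2.0 * goodwill - 1.0 * post * goodwill + rng.normal(0.0, 1.0, n)

# OLS via least squares: intercept, goodwill, post dummy, interaction.
X = np.column_stack([np.ones(n), goodwill, post, post * goodwill])
beta, *_ = np.linalg.lstsq(X, premium, rcond=None)
# beta[3] estimates the post-period change in slope; a significantly
# negative value corresponds to a weaker premium-goodwill association.
```

In the paper's setting, confounding factors would enter as additional columns of `X`; the logic of reading the result from the interaction coefficient is unchanged.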
Abstract:
This paper is concerned with choosing image features for image based visual servo control and how this choice influences the closed-loop dynamics of the system. In prior work, image features tend to be chosen on the basis of image processing simplicity and noise sensitivity. In this paper we show that the choice of feature directly influences the closed-loop dynamics in task-space. We focus on the depth axis control of a visual servo system and compare analytically various approaches that have been reported recently in the literature. The theoretical predictions are verified by experiment.
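To make the setting concrete, the classical image-based visual servo loop for a point feature computes a camera velocity from the image-space error via the interaction matrix. The sketch below uses the standard formulation for a single normalised point; the depth value, gain, and feature coordinates are illustrative assumptions, not values from the paper:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalised image point (x, y)
    at depth Z, mapping 6-DOF camera velocity to image-feature velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, Z, lam=0.5):
    """Classical IBVS law: v = -lam * L^+ (s - s*)."""
    L = interaction_matrix(s[0], s[1], Z)
    return -lam * np.linalg.pinv(L) @ (s - s_star)

# Current feature at (0.1, 0.05), desired at the image centre, assumed depth 1 m.
v = ibvs_velocity(np.array([0.1, 0.05]), np.array([0.0, 0.0]), Z=1.0)
```

The depth dependence visible in the first three columns of the interaction matrix is precisely why the choice of feature (point coordinates versus, say, area- or distance-based features) changes the closed-loop depth-axis dynamics that the paper analyses.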
Abstract:
Position estimation for planetary rovers has been typically limited to odometry based on proprioceptive measurements such as the integration of distance traveled and measurement of heading change. Here we present and compare two methods of online visual odometry suited for planetary rovers. Both methods use omnidirectional imagery to estimate motion of the rover. One method is based on robust estimation of optical flow and subsequent integration of the flow. The second method is a full structure-from-motion solution. To make the comparison meaningful we use the same set of raw corresponding visual features for each method. The dataset is a sequence of 2000 images taken during a field experiment in the Atacama Desert, for which high-resolution GPS ground truth is available.
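The "integration" half of the flow-based method reduces to dead reckoning: per-frame heading and distance increments (here, values one might recover from optical flow) are accumulated into a pose. A minimal 2D sketch, with invented increments rather than data from the Atacama experiment:

```python
import math

def integrate_odometry(increments, x=0.0, y=0.0, theta=0.0):
    """Accumulate a 2D pose from per-frame (d_theta, d_dist) increments,
    e.g. rotation and translation estimates recovered from optical flow."""
    for d_theta, d_dist in increments:
        theta += d_theta                 # update heading first
        x += d_dist * math.cos(theta)    # then advance along the new heading
        y += d_dist * math.sin(theta)
    return x, y, theta

# Drive straight for 4 frames, turn 90 degrees in place, then 2 more frames.
path = [(0.0, 1.0)] * 4 + [(math.pi / 2, 0.0)] + [(0.0, 1.0)] * 2
pose = integrate_odometry(path)  # ends near (4, 2), heading pi/2
```

Because each increment's error is compounded by every later step, drift grows with distance travelled, which is why the paper benchmarks both this approach and the structure-from-motion alternative against GPS ground truth.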
Abstract:
Visual localization systems that are practical for autonomous vehicles in outdoor industrial applications must perform reliably in a wide range of conditions. Changing outdoor conditions cause difficulty by drastically altering the information available in the camera images. To confront the problem, we have developed a visual localization system that uses a surveyed three-dimensional (3D)-edge map of permanent structures in the environment. The map has the invariant properties necessary to achieve long-term robust operation. Previous 3D-edge map localization systems usually maintain a single pose hypothesis, making it difficult to initialize without an accurate prior pose estimate and also making them susceptible to misalignment with unmapped edges detected in the camera image. A multihypothesis particle filter is employed here to perform the initialization procedure with significant uncertainty in the vehicle's initial pose. A novel observation function for the particle filter is developed and evaluated against two existing functions. The new function is shown to further improve the abilities of the particle filter to converge given a very coarse estimate of the vehicle's initial pose. An intelligent exposure control algorithm is also developed that improves the quality of the pertinent information in the image. Results gathered over an entire sunny day and also during rainy weather illustrate that the localization system can operate in a wide range of outdoor conditions. The conclusion is that an invariant map, a robust multihypothesis localization algorithm, and an intelligent exposure control algorithm all combine to enable reliable visual localization through challenging outdoor conditions.
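The multihypothesis idea above follows the usual particle-filter cycle: propagate many pose hypotheses, weight each against the observation, and resample. The sketch below is a deliberately simplified 1D stand-in (a Gaussian observation model instead of the paper's 3D-edge-map likelihood; all noise parameters invented) that shows how a coarse prior can still converge:

```python
import math
import random

def gaussian(x, mu, sigma):
    """Unnormalised Gaussian score; stands in for an image-based likelihood."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def pf_step(particles, motion, observation, sigma=0.5):
    # Predict: apply noisy motion to every pose hypothesis.
    moved = [p + motion + random.gauss(0.0, 0.1) for p in particles]
    # Weight: score each hypothesis with the observation function.
    weights = [gaussian(p, observation, sigma) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to weight.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]  # very coarse prior
for _ in range(10):
    particles = pf_step(particles, motion=0.0, observation=3.0)
estimate = sum(particles) / len(particles)  # hypotheses collapse near 3.0
```

The paper's contribution sits in the observation function itself (how edge detections are scored against the surveyed 3D-edge map); the surrounding predict-weight-resample machinery is as sketched.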
Abstract:
This paper describes a biologically inspired approach to vision-only simultaneous localization and mapping (SLAM) on ground-based platforms. The core SLAM system, dubbed RatSLAM, is based on computational models of the rodent hippocampus, and is coupled with a lightweight vision system that provides odometry and appearance information. RatSLAM builds a map in an online manner, driving loop closure and relocalization through sequences of familiar visual scenes. Visual ambiguity is managed by maintaining multiple competing vehicle pose estimates, while cumulative errors in odometry are corrected after loop closure by a map correction algorithm. We demonstrate the mapping performance of the system on a 66 km car journey through a complex suburban road network. Using only a web camera operating at 10 Hz, RatSLAM generates a coherent map of the entire environment at real-time speed, correctly closing more than 51 loops of up to 5 km in length.
Abstract:
Recovering position from sensor information is an important problem in mobile robotics, known as localisation. Localisation requires a map or some other description of the environment to provide the robot with a context to interpret sensor data. The mobile robot system under discussion uses an artificial neural representation of position. Building a geometrical map of the environment with a single camera and artificial neural networks is difficult. Instead it would be simpler to learn position as a function of the visual input. Usually when learning images, an intermediate representation is employed. An appropriate starting point for biologically plausible image representation is the complex cells of the visual cortex, which have invariance properties that appear useful for localisation. The effectiveness for localisation of two different complex cell models is evaluated. Finally, the ability of a simple neural network with single-shot learning to recognise these representations and localise a robot is examined.
Abstract:
RatSLAM is a vision-based SLAM system based on extended models of the rodent hippocampus. RatSLAM creates environment representations that can be processed by the experience mapping algorithm to produce maps suitable for goal recall. The experience mapping algorithm also allows RatSLAM to map environments many times larger than could be achieved with a one to one correspondence between the map and environment, by reusing the RatSLAM maps to represent multiple sections of the environment. This paper describes experiments investigating the effects of the environment-representation size ratio and visual ambiguity on mapping and goal navigation performance. The experiments demonstrate that system performance is weakly dependent on either parameter in isolation, but strongly dependent on their joint values.
Abstract:
The Simultaneous Localisation And Mapping (SLAM) problem is one of the major challenges in mobile robotics. Probabilistic techniques using high-end range finding devices are well established in the field, but recent work has investigated vision-only approaches. We present an alternative approach to the leading existing techniques, which extracts approximate rotational and translational velocity information from a vehicle-mounted consumer camera, without tracking landmarks. When coupled with an existing SLAM system, the vision module is able to map a 45 metre long indoor loop and a 1.6 km long outdoor road loop, without any parameter or system adjustment between tests. The work serves as a promising pilot study into ground-based vision-only SLAM, with minimal geometric interpretation of the environment.
Abstract:
This paper investigates the use of the FAB-MAP appearance-only SLAM algorithm as a method for performing visual data association for RatSLAM, a semi-metric full SLAM system. While both systems have shown the ability to map large (60-70 km) outdoor locations of approximately the same scale, for larger areas or longer time periods both algorithms encounter difficulties with false positive matches. By combining these algorithms using a mapping between appearance and pose space, both false positives and false negatives generated by FAB-MAP are significantly reduced during outdoor mapping using a forward-facing camera. The hybrid FAB-MAP-RatSLAM system developed demonstrates the potential for successful SLAM over long periods of time.
Abstract:
This paper presents a vision-based method of vehicle localisation that has been developed and tested on a large forklift-type robotic vehicle which operates in a mainly outdoor industrial setting. The localiser uses a sparse 3D edge map of the environment and a particle filter to estimate the pose of the vehicle. The vehicle operates in dynamic and non-uniform outdoor lighting conditions, an issue that is addressed by using knowledge of the scene to intelligently adjust the camera exposure and hence improve the quality of the information in the image. Results from the industrial vehicle are shown and compared to another laser-based localiser which acts as a ground truth. An improved likelihood metric, using per-edge calculation, is presented and has been shown to be 40% more accurate in estimating rotation. Visual localisation results from the vehicle driving an arbitrary 1.5 km path during a bright sunny period show an average position error of 0.44 m and rotation error of 0.62 deg.
Abstract:
This paper illustrates a method for finding useful visual landmarks for performing simultaneous localization and mapping (SLAM). The method is based loosely on biological principles, using layers of filtering and pooling to create learned templates that correspond to different views of the environment. Rather than using a set of landmarks and reporting range and bearing to the landmark, this system maps views to poses. The challenge is to produce a system that produces the same view for small changes in robot pose, but provides different views for larger changes in pose. The method has been developed to interface with the RatSLAM system, a biologically inspired method of SLAM. The paper describes the method of learning and recalling visual landmarks in detail, and shows the performance of the visual system in real robot tests.
Abstract:
Acoustically, car cabins are extremely noisy and, as a consequence, existing audio-only speech recognition systems, for voice-based control of vehicle functions such as the GPS-based navigator, perform poorly. Audio-only speech recognition systems fail to make use of the visual modality of speech (e.g. lip movements). As the visual modality is immune to acoustic noise, utilising this visual information in conjunction with an audio-only speech recognition system has the potential to improve the accuracy of the system. The field of recognising speech using both auditory and visual inputs is known as Audio Visual Speech Recognition (AVSR). Continuous research in the AVSR field has been ongoing for the past twenty-five years, with notable progress being made. However, the practical deployment of AVSR systems for use in a variety of real-world applications has not yet emerged. The main reason is that most research to date has neglected to address variabilities in the visual domain, such as illumination and viewpoint, in the design of the visual front-end of the AVSR system. In this paper we present an AVSR system in a real-world car environment using the AVICAR database [1], which is a publicly available in-car database, and we show that using visual speech in conjunction with the audio modality is a better approach to improving the robustness and effectiveness of voice-only recognition systems in car cabin environments.
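One common way to combine the two modalities, decision-level (late) fusion, weights each stream's per-word scores by its reliability. The sketch below is a generic illustration of that idea; the stream weight, the two-word vocabulary, and the scores are all invented, not taken from the AVICAR experiments:

```python
def fuse_scores(audio_scores, visual_scores, alpha=0.7):
    """Combine per-word log-likelihoods from the audio and visual streams
    with stream weight alpha (alpha = 1.0 would be audio-only)."""
    return {w: alpha * audio_scores[w] + (1.0 - alpha) * visual_scores[w]
            for w in audio_scores}

# In a noisy cabin the audio stream is down-weighted (low alpha), so the
# acoustically ambiguous "left"/"right" pair is resolved by the lip shapes.
audio = {"left": -4.0, "right": -2.5}    # noisy audio slightly favours "right"
visual = {"left": -1.0, "right": -3.0}   # lips clearly say "left"
fused = fuse_scores(audio, visual, alpha=0.4)
best = max(fused, key=fused.get)
```

In practice the stream weight would be adapted to the estimated acoustic noise level, which is exactly the regime where the visual front-end's robustness to illumination and viewpoint matters.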