298 results for Hydrogeology, Visual Modflow, Numerical modelling.
Abstract:
In this paper, we present SMART (Sequence Matching Across Route Traversals): a vision-based place recognition system that uses whole-image matching techniques and odometry information to improve the precision-recall performance, latency and general applicability of the SeqSLAM algorithm. We evaluate the system’s performance on challenging day and night journeys over several kilometres at widely varying vehicle velocities from 0 to 60 km/h, compare performance to the current state-of-the-art SeqSLAM algorithm, and provide parameter studies that evaluate the effectiveness of each system component. Using 30-metre sequences, SMART achieves place recognition performance of 81% recall at 100% precision, outperforming SeqSLAM, and is robust to significant degradations in odometry.
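As an illustration of the odometry-based sampling idea described above, the sketch below selects frames spaced by a fixed travelled distance so that a matching sequence covers roughly 30 metres regardless of vehicle speed. The function name, the 1 m spacing and the synthetic data are hypothetical, not taken from the paper.

```python
import numpy as np

def sample_frames_by_distance(odometry_dist, spacing_m=1.0, seq_len_m=30.0):
    """Pick frame indices spaced by `spacing_m` metres of travel, so a matching
    sequence spans `seq_len_m` metres irrespective of vehicle speed.

    odometry_dist : 1-D array of cumulative travelled distance per frame (m).
    Returns selected frame indices and frames per sequence (hypothetical helper).
    """
    targets = np.arange(odometry_dist[0], odometry_dist[-1], spacing_m)
    # For each target distance, take the first frame at or beyond it.
    idx = np.searchsorted(odometry_dist, targets)
    frames_per_seq = int(seq_len_m / spacing_m)
    return idx, frames_per_seq

# Example: frames logged at a fixed rate while the vehicle varies its speed.
dist = np.cumsum(np.random.uniform(0.0, 1.7, size=2000))  # metres per frame
idx, n = sample_frames_by_distance(dist)
print(f"{len(idx)} resampled frames, {n} frames per 30 m sequence")
```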
Abstract:
Automating forced landings in an emergency such as engine failure is essential to improving the safety of Unmanned Aerial Vehicles operating in General Aviation airspace. By using active vision to detect safe landing zones below the aircraft, the reliability and safety of such systems are vastly improved through up-to-the-minute information about the ground environment. This paper presents the Site Detection System, a methodology utilising a downward-facing camera to analyse the ground environment in both 2D and 3D, detect safe landing sites and characterise them according to size, shape, slope and nearby obstacles. A methodology is presented showing the fusion of landing site detection from 2D imagery with a coarse Digital Elevation Map and dense 3D reconstructions using INS-aided Structure-from-Motion to improve accuracy. Results are presented from an experimental flight showing the precision/recall of landing site detection in comparison to a hand-classified ground truth, and improved performance with the integration of 3D analysis from visual Structure-from-Motion.
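The slope characterisation mentioned above can be sketched as follows: given a coarse Digital Elevation Map, local slope is computed from elevation gradients and cells below a slope threshold are flagged as candidate landing sites. The grid resolution and the 5° threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def flat_cells_from_dem(dem, cell_size_m, max_slope_deg=5.0):
    """Flag DEM cells whose local slope is below `max_slope_deg`.

    dem         : 2-D array of elevations in metres.
    cell_size_m : grid spacing in metres (assumed square cells).
    Returns a boolean mask of candidate landing cells (illustrative only).
    """
    dz_dy, dz_dx = np.gradient(dem, cell_size_m)      # elevation gradients
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return slope_deg < max_slope_deg

# Example with a synthetic 100 x 100 m DEM at 1 m resolution.
dem = np.random.normal(0.0, 0.05, (100, 100)).cumsum(axis=0)
mask = flat_cells_from_dem(dem, cell_size_m=1.0)
print(f"{mask.mean():.0%} of cells below the slope threshold")
```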
Abstract:
This paper presents a long-term experiment where a mobile robot uses adaptive spherical views to localize itself and navigate inside a non-stationary office environment. The office contains seven members of staff and experiences a continuous change in its appearance over time due to their daily activities. The experiment runs as an episodic navigation task in the office over a period of eight weeks. The spherical views are stored in the nodes of a pose graph and they are updated in response to the changes in the environment. The updating mechanism is inspired by the concepts of long- and short-term memories. The experimental evaluation is done using three performance metrics which evaluate the quality of both the adaptive spherical views and the navigation over time.
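A minimal sketch of the long- and short-term memory idea, under the assumption that each spherical view is stored as a set of features with observation counters: features that keep being re-observed are promoted to long-term memory, while transient features that stop matching are eventually forgotten. The thresholds and field names are hypothetical, not the paper's actual update rule.

```python
def update_view_memory(features, matched_ids, promote_after=3, forget_after=5):
    """Update per-feature counters of a stored view after a new observation.

    features    : dict id -> {"ltm": bool, "hits": int, "misses": int}
    matched_ids : set of feature ids matched in the current observation.
    Returns the updated dict (hypothetical rule, inspired by LTM/STM concepts).
    """
    for fid, f in list(features.items()):
        if fid in matched_ids:
            f["hits"] += 1
            f["misses"] = 0
            if f["hits"] >= promote_after:
                f["ltm"] = True           # stable feature: keep long term
        else:
            f["misses"] += 1
            if not f["ltm"] and f["misses"] >= forget_after:
                del features[fid]         # transient feature: forget it
    return features

# Example: feature 2 is re-observed, feature 1 is missed this time.
views = {1: {"ltm": False, "hits": 0, "misses": 4},
         2: {"ltm": True, "hits": 5, "misses": 0}}
print(update_view_memory(views, matched_ids={2}))
```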
Abstract:
Purpose: Changes in pupil size and shape are relevant for peripheral imagery by affecting aberrations and how much light enters and/or exits the eye. The purpose of this study is to model the pattern of pupil shape across the complete horizontal visual field and to show how the pattern is influenced by refractive error. Methods: Right eyes of thirty participants were dilated with 1% cyclopentolate and images were captured using a modified COAS-HD aberrometer alignment camera along the horizontal visual field to ±90°. A two-lens relay system enabled fixation at targets mounted on the wall 3 m from the eye. Participants placed their heads on a rotatable chin rest and eye rotations were kept to less than 30°. Best-fit elliptical dimensions of pupils were determined. Ratios of minimum to maximum axis diameters were plotted against visual field angle θ. Results: Participants’ data were well fitted by cosine functions, with maxima at −1° to −9° in the temporal visual field and widths 9% to 15% greater than predicted by the cosine of the field angle. Mean functions were 0.99cos[(θ + 5.3°)/1.121], R² = 0.99, for the whole group and 0.99cos[(θ + 6.2°)/1.126], R² = 0.99, for the 13 emmetropes. The function peak became less temporal, and the width became smaller, with increase in myopia. Conclusion: Off-axis pupil shape changes are well described by a cosine function which is both decentered by a few degrees and flatter by about 12% than the cosine of the viewing angle, with minor influences of refraction.
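Written out, the fitted models for the ratio of minor to major pupil axis diameter as a function of horizontal visual field angle θ (temporal field negative), as reported above, are:

```latex
\[
\frac{D_{\min}}{D_{\max}} \approx 0.99\,\cos\!\left(\frac{\theta + 5.3^{\circ}}{1.121}\right)
\quad (R^{2} = 0.99,\ \text{whole group}),
\qquad
\frac{D_{\min}}{D_{\max}} \approx 0.99\,\cos\!\left(\frac{\theta + 6.2^{\circ}}{1.126}\right)
\quad (R^{2} = 0.99,\ \text{emmetropes}).
\]
```

The division of θ by roughly 1.12 is what makes the function about 12% flatter than cos θ, and the +5.3° offset decentres its peak a few degrees into the temporal field.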
Abstract:
This paper introduces an improved line tracker using IMU and vision data for visual servoing tasks. We utilize an Image Jacobian which relates the motion of a line feature to the corresponding camera motion. These camera motions are estimated using an IMU. We demonstrate the performance of the proposed method in challenging environments: maximum angular rate ~160°/s, acceleration ~6 m/s², and cluttered outdoor scenes. Simulation results and a quantitative tracking performance comparison with the Visual Servoing Platform (ViSP) are also presented.
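As a generic illustration of how an image Jacobian and IMU rates can be combined (the specific Jacobian for line features used in the paper is not reproduced here), the sketch below propagates a feature parameter vector by J times the camera velocity over one IMU sample period. All names, numbers and the 2x6 Jacobian are placeholders.

```python
import numpy as np

def predict_feature(feature, J, v_cam, dt):
    """Predict the image-feature parameters after a small camera motion.

    feature : (n,) feature parameter vector (e.g. line parameters rho, theta).
    J       : (n, 6) image Jacobian relating feature rates to camera velocity.
    v_cam   : (6,) camera twist [vx, vy, vz, wx, wy, wz]; the rotational part
              could come from gyroscope rates.
    dt      : IMU sample period in seconds.
    Returns the predicted feature parameters (illustrative only).
    """
    return feature + J @ v_cam * dt

# Example with placeholder numbers.
feature = np.array([0.2, 0.1])                 # e.g. (rho, theta) of a line
J = np.random.randn(2, 6) * 0.1                # placeholder Jacobian
v_cam = np.array([0, 0, 0, 0.5, -0.3, 2.8])    # mostly rotation, from the IMU
print(predict_feature(feature, J, v_cam, dt=0.005))
```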
Abstract:
In this paper we present a novel place recognition algorithm inspired by recent discoveries in human visual neuroscience. The algorithm combines intolerant but fast low-resolution whole-image matching with highly tolerant sub-image patch matching processes. The approach does not require prior training and works on single images (although we use a cohort normalization score to exploit temporal frame information), alleviating the need for either a velocity signal or an image sequence and differentiating it from current state-of-the-art methods. We demonstrate the algorithm on the challenging Alderley sunny day – rainy night dataset, which has previously only been solved by integrating over image sequences 320 frames long. The system achieves 21.24% recall at 100% precision, matching drastically different day and night-time images of places while successfully rejecting match hypotheses between highly aliased images of different places. The results provide a new benchmark for single-image, condition-invariant place recognition.
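The cohort normalization mentioned above can be sketched as a z-score of the best match against the distribution of scores for the other candidate places; the exact formulation used in the paper may differ, so treat this as an assumption.

```python
import numpy as np

def cohort_normalised_score(scores, best_idx):
    """Normalise the best matching score against a cohort of competing scores.

    scores   : (N,) similarity scores of the query against N candidate places.
    best_idx : index of the best-scoring candidate.
    Returns a z-score-like value; high values indicate a distinctive match
    (hypothetical formulation).
    """
    cohort = np.delete(scores, best_idx)
    return (scores[best_idx] - cohort.mean()) / (cohort.std() + 1e-9)

scores = np.random.rand(500)
scores[42] = 0.97                      # a clearly distinctive place match
print(cohort_normalised_score(scores, best_idx=42))
```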
Abstract:
Ongoing innovation in digital animation and visual effects technologies has provided new opportunities for stories to be visually rendered in ways never before possible. Films featuring animation and visual effects continue to perform well at the box office, proving to be highly profitable projects. The Avengers (Whedon, 2012) holds the current record for opening weekend sales, accruing $207,438,708 USD in its opening weekend and $623,357,910 USD gross at the time of writing; Life of Pi (Lee, 2012) has grossed $608,791,063 USD at the time of writing (Box Office Mojo, 2013). With so much creative potential and a demonstrable ability to generate a large amount of revenue, the animation and visual effects industry – otherwise known as the Post, Digital and Visual Effects (PDV) industry – has become significant to the future growth and stability of the Australian film industry as a whole.
Abstract:
Long-term autonomy in robotics requires perception systems that are resilient to unusual but realistic conditions that will eventually occur during extended missions. For example, unmanned ground vehicles (UGVs) need to be capable of operating safely in adverse and low-visibility conditions, such as at night or in the presence of smoke. The key to a resilient UGV perception system lies in the use of multiple sensor modalities, e.g., operating at different frequencies of the electromagnetic spectrum, to compensate for the limitations of a single sensor type. In this paper, visual and infrared imaging are combined in a Visual-SLAM algorithm to achieve localization. We propose to evaluate the quality of data provided by each sensor modality prior to data combination. This evaluation is used to discard low-quality data, i.e., data most likely to induce large localization errors. In this way, perceptual failures are anticipated and mitigated. An extensive experimental evaluation is conducted on data sets collected with a UGV in a range of environments and adverse conditions, including the presence of smoke (obstructing the visual camera), fire, extreme heat (saturating the infrared camera), low-light conditions (dusk), and at night with sudden variations of artificial light. A total of 240 trajectory estimates are obtained using five different variations of data sources and data combination strategies in the localization method. In particular, the proposed approach for selective data combination is compared to methods using a single sensor type or combining both modalities without preselection. We show that the proposed framework allows for camera-based localization resilient to a large range of low-visibility conditions.
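A minimal sketch of the selective combination idea described above: each incoming visual and infrared frame is scored by a quality metric, and only frames above a threshold are passed on to the localizer. The metric used here (simply image contrast) and the threshold are stand-ins for the evaluation proposed in the paper.

```python
import numpy as np

def select_frames(visual_img, infrared_img, quality, threshold=0.1):
    """Keep only the modalities whose quality score exceeds `threshold`.

    visual_img, infrared_img : 2-D greyscale images as float arrays.
    quality                  : callable returning a scalar quality score.
    Returns the list of (name, image) pairs passed on to localization
    (illustrative gate; the paper's metric and threshold differ).
    """
    frames = [("visual", visual_img), ("infrared", infrared_img)]
    return [(name, img) for name, img in frames if quality(img) > threshold]

contrast = lambda img: float(img.std())      # stand-in quality metric
vis = np.random.rand(480, 640) * 0.05        # e.g. smoke-obscured visual frame
ir = np.random.rand(480, 640)                # clear infrared frame
print([name for name, _ in select_frames(vis, ir, contrast)])
```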
Abstract:
This work aims to contribute to the reliability and integrity of perceptual systems of unmanned ground vehicles (UGVs). A method is proposed to evaluate the quality of sensor data prior to its use in a perception system by applying a quality metric to heterogeneous sensor data such as visual and infrared camera images. The concept is illustrated with sensor data that is evaluated prior to its use in a standard SIFT feature extraction and matching technique. The method is then evaluated using various experimental data sets collected from a UGV in challenging environmental conditions, represented by the presence of airborne dust and smoke. In the first series of experiments, a motionless vehicle observes a 'reference' scene; the method is then extended to the case of a moving vehicle by compensating for its motion. This paper shows that it is possible to anticipate the degradation of a perception algorithm by evaluating the input data prior to any actual execution of the algorithm.
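To make the "evaluate before executing" idea concrete, the sketch below computes a simple sharpness score (variance of the Laplacian, used here only as a stand-in for the paper's quality metric) and runs SIFT extraction and matching only when both images pass the check. The OpenCV function names are real; the threshold is an assumption.

```python
import cv2

def gated_sift_matches(img_a, img_b, sharpness_thresh=50.0):
    """Run SIFT matching only if both greyscale images look sharp enough.

    Sharpness is measured as the variance of the Laplacian, a stand-in for
    the quality metric evaluated in the paper.
    Returns the matches, or None if either image is rejected.
    """
    sharp = lambda img: cv2.Laplacian(img, cv2.CV_64F).var()
    if sharp(img_a) < sharpness_thresh or sharp(img_b) < sharpness_thresh:
        return None                      # anticipate a likely matching failure
    sift = cv2.SIFT_create()
    _, des_a = sift.detectAndCompute(img_a, None)
    _, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    return matcher.match(des_a, des_b)
```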
Abstract:
This paper proposes an experimental study of quality metrics that can be applied to visual and infrared images acquired from cameras onboard an unmanned ground vehicle (UGV). The relevance of existing metrics in this context is discussed and a novel metric is introduced. Selected metrics are evaluated on data collected by a UGV in clear and challenging environmental conditions, represented in this paper by the presence of airborne dust or smoke. An example of application is given with monocular SLAM estimating the pose of the UGV while smoke is present in the environment. It is shown that the proposed novel quality metric can be used to anticipate situations where the quality of the pose estimate will be significantly degraded due to the input image data. This leads to decisions of advantageously switching between data sources (e.g. using infrared images instead of visual images).
Abstract:
This work aims to contribute to the reliability and integrity of perceptual systems of autonomous ground vehicles. Information-theoretic metrics to evaluate the quality of sensor data are proposed and applied to visual and infrared camera images. The contribution of the proposed metrics to the discrimination of challenging conditions is discussed and illustrated in the presence of airborne dust and smoke.
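A common information-theoretic image quality measure is the entropy of the grey-level histogram, which tends to drop when airborne dust or smoke washes out a visual image; the sketch below computes it with numpy. This is an illustrative metric, not necessarily the one proposed in the paper.

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram of `img`.

    Low entropy suggests a washed-out, low-information image (e.g. a visual
    camera looking into dense smoke); illustrative metric only.
    """
    hist, _ = np.histogram(img, bins=bins, range=(0, 256), density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return float(-(p * np.log2(p)).sum())

clear = np.random.randint(0, 256, (480, 640))                    # high-contrast scene
smoky = np.full((480, 640), 128) + np.random.randint(-3, 4, (480, 640))
print(image_entropy(clear), image_entropy(smoky))
```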
Abstract:
Objectives: To investigate the relationship between two assessments to quantify delayed onset muscle soreness [DOMS]: visual analog scale [VAS] and pressure pain threshold [PPT]. Methods: Thirty-one healthy young men [25.8 ± 5.5 years] performed 10 sets of six maximal eccentric contractions of the elbow flexors with their non-dominant arm. Before and one to four days after the exercise, muscle pain perceived upon palpation of the biceps brachii at three sites [5, 9 and 13 cm above the elbow crease] was assessed by VAS with a 100 mm line [0 = no pain, 100 = extremely painful], and PPT of the same sites was determined by an algometer. Changes in VAS and PPT over time were compared amongst the three sites by a two-way repeated measures analysis of variance, and the relationship between VAS and PPT was analyzed using a Pearson product-moment correlation. Results: The VAS increased one to four days after exercise and peaked two days post-exercise, while the PPT decreased the most one day post-exercise and remained below baseline for four days following exercise [p < 0.05]. No significant difference among the three sites was found for VAS [p = 0.62] or PPT [p = 0.45]. The magnitude of change in VAS did not significantly correlate with that of PPT [r = −0.20, p = 0.28]. Conclusion: These results suggest that the level of muscle pain is not region-specific, at least among the three sites investigated in the study, and that VAS and PPT provide different information about DOMS, representing different aspects of pain.
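The correlation analysis reported above can be reproduced in outline with scipy: given per-participant changes in VAS and PPT, a Pearson product-moment correlation tests whether the two measures vary together. The numbers below are random placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 31                                   # participants, as in the study
delta_vas = rng.normal(45.0, 20.0, n)    # change in VAS (mm), placeholder data
delta_ppt = rng.normal(-80.0, 60.0, n)   # change in PPT, placeholder data

r, p = stats.pearsonr(delta_vas, delta_ppt)
print(f"r = {r:.2f}, p = {p:.2f}")       # a weak, non-significant r would mirror
                                         # the study's conclusion
```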
Abstract:
The integration of separate, yet complementary, cortical pathways appears to play a role in visual perception and action when intercepting objects. The ventral system is responsible for object recognition and identification, while the dorsal system facilitates continuous regulation of action. This dual-system model implies that empirically manipulating different visual information sources during performance of an interceptive action might lead to the emergence of distinct gaze and movement pattern profiles. To test this idea, we recorded hand kinematics and eye movements of participants as they attempted to catch balls projected from a novel apparatus that synchronised or de-synchronised accompanying video images of a throwing action and ball trajectory. Results revealed that ball catching performance was less successful when patterns of hand movements and gaze behaviours were constrained by the absence of advanced perceptual information from the thrower's actions. Under these task constraints, participants began tracking the ball later, followed less of its trajectory, and adapted their actions by initiating movements later and moving the hand faster. There were no performance differences when the throwing action image and ball speed were synchronised or de-synchronised, since hand movements were closely linked to information from the ball trajectory. Results are interpreted relative to the two-visual system hypothesis, demonstrating that accurate interception requires integration of advanced visual information from the kinematics of the throwing action and from the ball flight trajectory.
Abstract:
This practice-led research project investigates how new postcolonial conditions require new methods of critique to fully engage with the nuances of real world, 'lived' experiences. Framed by key aspects of postcolonial theory, this project examines contemporary artists' contributions to investigations of identity, race, ethnicity, otherness and diaspora, as well as questions of locality, nationality, and transnationality. Approaching these issues through the lens of my own experience as an artist and subject, it results in a body of creative work and a written exegesis that creatively and critically examine the complexities, ambiguities and ambivalences of the contemporary postcolonial condition.