394 results for 3D vision
Abstract:
During the current (1995-present) eruptive phase of the Soufrière Hills volcano on Montserrat, voluminous pyroclastic flows entered the sea off the eastern flank of the island, resulting in the deposition of well-defined submarine pyroclastic lobes. Previously reported bathymetric surveys documented the sequential construction of these deposits, but could not image their internal structure, the morphology or extent of their base, or their interaction with the underlying sediments. We show, by combining these bathymetric data with new high-resolution three-dimensional (3D) seismic data, that the sequence of previously detected pyroclastic deposits from different phases of the ongoing eruptive activity is still well preserved. A detailed interpretation of the 3D seismic data reveals the absence of significant (>3 m) basal erosion in the distal extent of the submarine pyroclastic deposits. We also identify a previously unrecognized seismic unit directly beneath the stack of recent lobes. We propose three hypotheses for the origin of this seismic unit, but prefer an interpretation that the deposit is the result of the subaerial flank collapse that formed the English's Crater scarp on the Soufrière Hills volcano. The 1995-to-present volcanic activity on Montserrat accounts for a significant portion of the sediments on the southeast slope of the island, in places forming deposits more than 60 m thick, which implies that the potential for pyroclastic flows to build volcanic island edifices is significant.
Abstract:
In this paper, we explore the effectiveness of patch-based gradient feature extraction methods when applied to appearance-based gait recognition. Extending existing popular feature extraction methods such as HOG and LDP, we propose a novel technique which we term the Histogram of Weighted Local Directions (HWLD). These three methods are applied to gait recognition using the GEI feature, with classification performed using SRC. Evaluations on the CASIA and OULP datasets show significant improvements using these patch-based methods over existing implementations, with the proposed method achieving the highest recognition rate for the respective datasets. In addition, the HWLD can easily be extended to 3D, which we demonstrate using the GEV feature on the DGD dataset, observing improvements in performance.
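The general patch-based gradient-histogram idea behind HOG-style descriptors can be sketched as follows. This is a generic illustration of computing per-patch histograms of gradient directions over a Gait Energy Image, not an implementation of the HWLD method proposed above; the patch size and bin count are illustrative choices.

```python
import numpy as np

def patch_direction_histogram(gei, patch=8, bins=8):
    """HOG-like sketch: per-patch histograms of gradient directions.

    `gei` is a 2D array such as a Gait Energy Image. Patch size and
    bin count are illustrative, not the paper's parameters.
    """
    gy, gx = np.gradient(gei.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned directions in [0, pi)
    h, w = gei.shape
    feats = []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            m = mag[r:r + patch, c:c + patch].ravel()
            a = ang[r:r + patch, c:c + patch].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            hist = hist / (np.linalg.norm(hist) + 1e-9)  # L2-normalise each patch
            feats.append(hist)
    return np.concatenate(feats)

gei = np.random.rand(64, 48)          # stand-in for a real GEI
f = patch_direction_histogram(gei)    # 48 patches x 8 bins = 384 values
```

The concatenated per-patch histograms form the feature vector that would then be passed to a classifier such as SRC.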
Abstract:
The problem of estimating pseudobearing rate information of an airborne target based on measurements from a vision sensor is considered. Novel image speed and heading angle estimators are presented that exploit image morphology, hidden Markov model (HMM) filtering, and relative entropy rate (RER) concepts to allow pseudobearing rate information to be determined before (or whilst) the target track is being estimated from vision information.
Abstract:
Executive Summary: This project has commenced an exploration of learning and information experiences in the QUT Cube. Understanding learning in this environment has the potential to inform current implementations and future project development. In this report, we present early findings from the first phase of an investigation into what makes learning possible in the context of a giant interactive multimedia display such as the QUT Cube, an award-winning configuration that hosts several projects.
Abstract:
Next-generation autonomous underwater vehicles (AUVs) will be required to robustly identify underwater targets for tasks such as inspection, localization, and docking. Given their often unstructured operating environments, vision offers enormous potential for underwater navigation over more traditional methods; however, unreliable target segmentation often plagues these systems. This paper addresses robust vision-based target recognition by presenting a novel scale- and rotationally-invariant target design and recognition routine based on self-similar landmarks that enables robust target pose estimation with respect to a single camera. These algorithms are applied to an AUV with controllers developed for vision-based docking with the target. Experimental results show that the system performs exceptionally on limited processing power and demonstrates how the combined vision and controller system enables robust target identification and docking in a variety of operating conditions.
Abstract:
Purpose. To compare the on-road driving performance of visually impaired drivers using bioptic telescopes with age-matched controls. Methods. Participants included 23 persons (mean age = 33 ± 12 years) with visual acuity of 20/63 to 20/200 who were legally licensed to drive through a state bioptic driving program, and 23 visually normal age-matched controls (mean age = 33 ± 12 years). On-road driving was assessed in an instrumented dual-brake vehicle along 14.6 miles of city, suburban, and controlled-access highways. Two backseat evaluators independently rated driving performance using a standardized scoring system. Vehicle control was assessed through vehicle instrumentation and video recordings used to evaluate head movements, lane-keeping, pedestrian detection, and frequency of bioptic telescope use. Results. Ninety-six percent (22/23) of bioptic drivers and 100% (23/23) of controls were rated as safe to drive by the evaluators. There were no group differences for pedestrian detection, or ratings for scanning, speed, gap judgments, braking, indicator use, or obeying signs/signals. Bioptic drivers received worse ratings than controls for lane position and steering steadiness and had lower rates of correct sign and traffic signal recognition. Bioptic drivers made significantly more right head movements, drove more often over the right-hand lane marking, and exhibited more sudden braking than controls. Conclusions. Drivers with central vision loss who are licensed to drive through a bioptic driving program can display proficient on-road driving skills. This raises questions regarding the validity of denying such drivers a license without the opportunity to train with a bioptic telescope and undergo on-road evaluation.
Abstract:
Intravitreal injections of GABA antagonists, dopamine agonists and brief periods of normal vision have been shown separately to inhibit form-deprivation myopia (FDM). Our study had three aims: (i) establish whether GABAergic agents modify the myopia-protective effect of normal vision, (ii) investigate the receptor sub-type specificity of any observed effect, and (iii) consider an interaction with the dopamine (DA) system. Prior to the period of normal vision, GABAergic agents were applied either (i) individually, (ii) in combination with other GABAergic agents (an agonist with an antagonist), or (iii) in combination with DA agonists and antagonists. Water injections were given to groups not receiving drug treatments so that all experimental eyes received intravitreal injections. As shown previously, constant form-deprivation resulted in high myopia, and when diffusers were removed for 2 h per day the period of normal vision greatly reduced the FDM that developed. GABA agonists inhibited the protective effect of normal vision whereas antagonists had the opposite effect. GABAA/C agonists and D2 DA antagonists when used in combination were additive in suppressing the protective effect of normal vision. A D2 DA agonist restored some of the protective effect of normal vision that was inhibited by a GABA agonist (muscimol). The protective effect of normal vision against form-deprivation is modifiable by both the GABAergic and DAergic pathways.
Abstract:
The purpose of this study is to determine visual performance in water, including the influence of pupil size. The water environment was simulated by placing a goggle filled with saline in front of the eyes, with apertures placed at the front of the goggle. Correction factors were determined for the different magnification under this condition in order to estimate vision in water. Experiments were conducted on letter visual acuity (7 participants), grating resolution (8 participants), and grating contrast sensitivity (1 participant). For letter acuity, mean loss in vision in water, compared to corrected vision in air, varied from 1.1 log minutes of arc resolution (logMAR) for a 1 mm aperture to 2.2 logMAR for a 7 mm aperture. The vision in minutes of arc was described well by a linear relationship with pupil size. For grating acuity, mean loss varied from 1.1 logMAR for a 2 mm aperture to 1.2 logMAR for a 6 mm aperture. Contrast sensitivity for a 2 mm aperture deteriorated as spatial frequency increased, with a 2 log unit loss by 3 cycles/degree. Superimposed on this deterioration were depressions (notches) in sensitivity, with the first three notches occurring at 0.45, 0.8 and 1.3 cycles/degree, with estimates for water of 0.39, 0.70 and 1.13 cycles/degree. In conclusion, vision in water is poor. It becomes worse as pupil size increases, but the effects are much more marked for letter targets than for grating targets.
Abstract:
Purpose: To determine visual performance in water, including the influence of pupil size. Method: The water environment was simulated by placing a goggle filled with saline in front of the eyes, with apertures placed at the front of the goggle. Correction factors were determined for the different magnification under this condition to estimate vision in water. Experiments were conducted on letter visual acuity (7 participants), grating resolution (8 participants), and grating contrast sensitivity (1 participant). Results: For letter acuity, mean loss in vision in water, compared to corrected vision in air, varied from 1.1 log minutes of arc resolution (logMAR) for a 1 mm aperture to 2.2 logMAR for a 7 mm aperture. The vision in minutes of arc was described well by a linear relationship with pupil size. For grating acuity, mean loss varied from 1.1 logMAR for a 2 mm aperture to 1.2 logMAR for a 6 mm aperture. Contrast sensitivity for a 2 mm aperture deteriorated as spatial frequency increased, with a 2 log unit loss by 3 cycles/degree. Superimposed on this deterioration were depressions (notches) in sensitivity, with the first three notches occurring at 0.45, 0.8 and 1.3 cycles/degree, with estimates for water of 0.39, 0.70 and 1.13 cycles/degree. Conclusion: Vision in water is poor. It becomes worse as pupil size increases, but the effects are much more marked for letter targets than for grating targets.
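The reported relationship between acuity loss and pupil size can be illustrated numerically. The sketch below fits a straight line through the two reported letter-acuity endpoints (1.1 logMAR at 1 mm, 2.2 logMAR at 7 mm), converted to minute-of-arc factors since the abstract reports that vision in minutes of arc is linear in pupil size. The two-point fit and the interpolation are illustrative readings of the abstract, not the study's actual analysis.

```python
import numpy as np

# Reported mean letter-acuity losses in water (from the abstract):
aperture_mm = np.array([1.0, 7.0])
loss_factor = 10.0 ** np.array([1.1, 2.2])   # logMAR loss -> minute-of-arc factor

# Linear fit in the minutes-of-arc domain, per the reported relationship.
slope, intercept = np.polyfit(aperture_mm, loss_factor, 1)

def predicted_loss_logmar(pupil_mm):
    """Interpolated acuity loss, converted back to logMAR."""
    return float(np.log10(slope * pupil_mm + intercept))
```

By construction the fit reproduces the two reported endpoints and gives intermediate losses for intermediate pupil sizes.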
Abstract:
The process of translating research into policy and practice is not well understood. This paper uses a case study approach to interpret an example of translation with respect to theoretical approaches identified in the literature. The case study concerns research into "biological motion" or "biomotion": when lights are placed on the moveable joints of the body and the person moves in a dark setting, there is immediate and accurate recognition of the human form although only the lights can be seen. QUT was successful in gaining Australian Research Council funding, with the support of the predecessors of the Queensland Department of Transport and Main Roads (TMR), to research the biomotion effect in road worker clothing using reflective tape rather than lights, and this resulted in the incorporation of biomotion marking into AS/NZS 4602.1 2011. The most promising approach to understanding the success of this translation, SWOV's "knowledge utilisation approach", provided some insights but was more descriptive than predictive, offering "necessary but not sufficient" conditions for translation. In particular, the supportive efforts of TMR staff engaged in the review and promulgation of national standards were critical in this case. A model of the conclusions is presented. The experiences gained in this case should provide insights into the processes involved in effectively translating research into practice.
Abstract:
Novel computer vision techniques have been developed for automatic monitoring of crowded environments such as airports, railway stations and shopping malls. Using video feeds from multiple cameras, the techniques enable crowd counting, crowd flow monitoring, queue monitoring and abnormal event detection. The outcome of the research is useful for surveillance applications and for obtaining operational metrics to improve business efficiency.
Abstract:
The ability to automate forced landings in an emergency such as engine failure is essential to improving the safety of Unmanned Aerial Vehicles operating in General Aviation airspace. By using active vision to detect safe landing zones below the aircraft, the reliability and safety of such systems are vastly improved through up-to-the-minute information about the ground environment. This paper presents the Site Detection System, a methodology utilising a downward-facing camera to analyse the ground environment in both 2D and 3D, detect safe landing sites and characterise them according to size, shape, slope and nearby obstacles. A methodology is presented showing the fusion of landing site detection from 2D imagery with a coarse Digital Elevation Map and dense 3D reconstructions from INS-aided Structure-from-Motion to improve accuracy. Results are presented from an experimental flight showing the precision/recall of detected landing sites against a hand-classified ground truth, and improved performance with the integration of 3D analysis from visual Structure-from-Motion.
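The precision/recall evaluation against a hand-classified ground truth can be sketched as below. The point-based matching rule and the distance tolerance are assumptions for illustration, not the paper's evaluation protocol (which may match site regions rather than centres).

```python
def precision_recall(detected, ground_truth, tol=5.0):
    """Precision/recall sketch for point-style landing-site detections.

    `detected` and `ground_truth` are lists of (x, y) site centres; a
    detection within `tol` metres of an as-yet-unmatched truth site
    counts as a true positive. Greedy one-to-one matching is assumed.
    """
    unmatched = list(ground_truth)
    tp = 0
    for d in detected:
        for g in unmatched:
            if (d[0] - g[0]) ** 2 + (d[1] - g[1]) ** 2 <= tol ** 2:
                unmatched.remove(g)   # each truth site matches at most once
                tp += 1
                break
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# Toy example: three detections, two true sites, one false alarm.
p, r = precision_recall([(0, 0), (10, 10), (50, 50)], [(1, 1), (10, 11)])
```

High recall with moderate precision, as in this toy case, would correspond to a detector that finds all true sites but also flags spurious ones.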
Abstract:
Machine vision is emerging as a viable sensing approach for mid-air collision avoidance (particularly for small to medium aircraft such as unmanned aerial vehicles). In this paper, using relative entropy rate concepts, we propose and investigate a new change detection approach that uses hidden Markov model filters to sequentially detect aircraft manoeuvres from morphologically processed image sequences. Experiments using simulated and airborne image sequences illustrate the performance of our proposed algorithm in comparison to other sequential change detection approaches applied to this application.
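The HMM filtering step underlying this style of sequential change detection can be sketched as a normalised forward recursion over hidden states (e.g. "no manoeuvre" vs "manoeuvre"). This is a minimal generic sketch of HMM filtering, not the paper's detector; the transition matrix and likelihoods below are toy values.

```python
import numpy as np

def hmm_filter(obs_likelihoods, A, pi):
    """Normalised HMM forward (filtering) recursion.

    obs_likelihoods: (T, N) array; row t holds p(y_t | x_t = i) for each
    of N hidden states. A is the N x N transition matrix (rows sum to 1);
    pi is the initial state distribution. Returns the (T, N) filtered
    posteriors p(x_t | y_1..y_t).
    """
    T, N = obs_likelihoods.shape
    post = np.zeros((T, N))
    belief = pi * obs_likelihoods[0]
    post[0] = belief / belief.sum()
    for t in range(1, T):
        belief = (post[t - 1] @ A) * obs_likelihoods[t]   # predict, then correct
        post[t] = belief / belief.sum()
    return post

# Two states ("straight flight" vs "manoeuvring"); toy image-derived likelihoods
# that switch partway through the sequence:
A = np.array([[0.95, 0.05], [0.10, 0.90]])
pi = np.array([0.9, 0.1])
lik = np.array([[0.8, 0.2], [0.7, 0.3], [0.1, 0.9], [0.1, 0.9]])
post = hmm_filter(lik, A, pi)
```

A change detector would monitor a statistic of these posteriors (the paper uses relative entropy rate concepts) and flag a manoeuvre when the belief in the second state rises.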
Abstract:
Cancer-associated proteases promote peritoneal dissemination and chemoresistance in malignant progression. In this study, kallikrein-related peptidases 4, 5, 6, and 7 (KLK4-7)-cotransfected OV-MZ-6 ovarian cancer cells were embedded in a bioengineered three-dimensional (3D) microenvironment that contains RGD motifs for integrin engagement to analyze their spheroid growth and survival after chemotreatment. KLK4-7-cotransfected cells formed larger spheroids and proliferated more than controls in 3D, particularly within RGD-functionalized matrices, which was reduced upon integrin inhibition. In contrast, KLK4-7-expressing cell monolayers proliferated less than controls, emphasizing the relevance of the 3D microenvironment and integrin engagement. In a spheroid-based animal model, KLK4-7-overexpression induced tumor growth after 4 weeks and intraperitoneal spread after 8 weeks. Upon paclitaxel administration, KLK4-7-expressing tumors declined in size by 91% (controls: 87%) and showed 90% less metastatic outgrowth (controls: 33%, P<0.001). KLK4-7-expressing spheroids showed 53% survival upon paclitaxel treatment (controls: 51%), accompanied by enhanced chemoresistance-related factors, and their survival was further reduced by combination treatment of paclitaxel with KLK4/5/7 (22%, P=0.007) or MAPK (6%, P=0.006) inhibition. The concomitant presence of KLK4-7 in ovarian cancer cells together with integrin activation drives spheroid formation and proliferation. Combinatorial approaches of paclitaxel and KLK/MAPK inhibition may be more efficient for late-stage disease than chemotherapeutics alone as these inhibitory regimens reduced cancer spheroid growth to a greater extent than paclitaxel alone.
Abstract:
Timely and comprehensive scene segmentation is often a critical step for many high-level mobile robotic tasks. This paper examines a projected-area-based neighbourhood lookup approach with the motivation of faster unsupervised segmentation of dense 3D point clouds. The proposed algorithm exploits the projection geometry of a depth camera to find nearest neighbours in time independent of the input data size. Points near depth discontinuities are also detected to reinforce object boundaries in the clustering process. The search method presented is evaluated using both indoor and outdoor dense depth images and demonstrates significant improvements in speed and precision compared to the commonly used Fast Library for Approximate Nearest Neighbors (FLANN) [Muja and Lowe, 2009].
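The core idea, that an organized depth image makes neighbour lookup constant-time because 3D neighbours sit in adjacent pixels, can be sketched as follows. The depth-jump threshold for rejecting neighbours across a discontinuity is an illustrative assumption, not the paper's criterion.

```python
import numpy as np

def grid_neighbours(depth, r, c, radius=1, max_jump=0.05):
    """Constant-time neighbour lookup in an organized depth image.

    A depth camera stores points in a pixel grid, so the 3D neighbours
    of the point at pixel (r, c) can be read from adjacent pixels
    directly, independent of the total number of points. Candidates
    across a depth discontinuity (jump larger than `max_jump` metres,
    an illustrative threshold) are rejected so that object boundaries
    are preserved during clustering.
    """
    h, w = depth.shape
    d0 = depth[r, c]
    out = []
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            if dr == 0 and dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and abs(depth[rr, cc] - d0) <= max_jump:
                out.append((rr, cc))
    return out

# Toy 3x3 depth image (metres): left columns at ~1 m, right column at 5 m.
depth = np.array([[1.00, 1.01, 5.00],
                  [1.02, 1.00, 5.00],
                  [1.01, 1.00, 5.00]])
nbrs = grid_neighbours(depth, 1, 1)   # right-column pixels are rejected
```

Unlike a k-d tree or FLANN index, no search structure is built; lookup cost depends only on the window radius.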