93 results for interactivity and 3D relational maps
Abstract:
Stereo-based visual odometry algorithms are heavily dependent on an accurate calibration of the rigidly fixed stereo pair. Even small shifts in the rigid transform between the cameras can impact feature matching and 3D scene triangulation, adversely affecting pose estimates and applications dependent on long-term autonomy. In many field scenarios where vibration, knocks and pressure changes affect a robotic vehicle, an accurate stereo calibration cannot be guaranteed over long periods. This paper presents a novel method for recalibrating overlapping stereo camera rigs from online visual data while simultaneously providing an up-to-date and up-to-scale pose estimate. The proposed technique implements a novel form of partitioned bundle adjustment that explicitly includes the homogeneous transform between the stereo camera pair to generate an optimal calibration. Pose estimates are computed in parallel to the calibration, providing online recalibration that integrates seamlessly into a stereo visual odometry framework. We present results demonstrating accurate performance of the algorithm on both simulated scenarios and real data gathered from a wide-baseline stereo pair on a ground vehicle traversing urban roads.
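To illustrate the core idea of folding the stereo extrinsic into the optimisation, the sketch below refines only the inter-camera translation by minimising right-image reprojection error. It is a deliberately reduced stand-in for the paper's partitioned bundle adjustment: rotation is held fixed, camera poses and 3D points are assumed already estimated, and all data are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical pinhole intrinsics shared by both cameras.
K = np.array([[700., 0., 320.],
              [0., 700., 240.],
              [0., 0., 1.]])

def project(K, p):
    # Pinhole projection of a 3D point in camera coordinates to pixels.
    q = K @ p
    return q[:2] / q[2]

def residuals(t_lr, pts_left_cam, uv_right):
    # Reproject left-camera points into the right camera via the
    # left-to-right translation and compare against right-image detections.
    return np.concatenate([project(K, p + t_lr) - uv
                           for p, uv in zip(pts_left_cam, uv_right)])

# Synthetic check: a true 0.5 m baseline along x is recovered from scratch.
pts = [np.array([x, y, 5.0]) for x in (-1.0, 0.0, 1.0) for y in (-1.0, 0.0, 1.0)]
t_true = np.array([-0.5, 0.0, 0.0])
uv_r = [project(K, p + t_true) for p in pts]
sol = least_squares(residuals, x0=np.zeros(3), args=(pts, uv_r))
print(sol.x)  # ~[-0.5, 0, 0]
```

In the full problem the rotation, the camera trajectory and the landmarks are optimised jointly, which is what makes the partitioned structure of the adjustment worthwhile.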
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, and non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords. For such data types a one-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction, followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security aspects required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, which is an essential requirement for non-invertibility. The method is also designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
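A minimal sketch of the generic pipeline the abstract describes (features, then randomisation, then quantisation and binary encoding). Note this deliberately uses the linear, key-seeded random projection that the abstract identifies as the security-limited baseline, not the thesis's non-linear HOS method; the feature vector is a placeholder.

```python
import numpy as np

def robust_hash(features, key, n_bits=64):
    # Key-dependent randomisation stage: a linear random projection that
    # compresses the feature vector and hides the original features.
    rng = np.random.default_rng(key)
    proj = rng.standard_normal((n_bits, features.size))
    v = proj @ features
    # Quantisation and binary encoding: threshold at the median so each
    # bit is approximately equiprobable.
    return (v > np.median(v)).astype(np.uint8)

# Similar inputs yield hashes with a small Hamming distance.
f = np.random.default_rng(7).random(256)                      # stand-in features
h1 = robust_hash(f, key=42)
h2 = robust_hash(f + np.random.default_rng(8).normal(0, 0.01, 256), key=42)
print((h1 != h2).sum())                                       # few flipped bits
```

The median threshold here is exactly the kind of learnt quantisation boundary whose training and information leakage the dissertation analyses.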
Abstract:
The application of different EMS current thresholds to a muscle activates not only the muscle but also peripheral sensory axons that send proprioceptive and pain signals to the cerebral cortex. A 32-channel time-domain fNIRS instrument was employed to map regional cortical activity under varied EMS current intensities applied to the right wrist extensor muscle. Eight healthy volunteers underwent four EMS sessions at different current thresholds based on their individual maximal tolerated intensity (MTI), i.e., 10% < 50% < 100% < over 100% MTI. Time courses of the absolute oxygenated and deoxygenated hemoglobin concentrations, primarily over the bilateral sensorimotor cortical (SMC) regions, were extracted, and cortical activation maps were determined by a general linear model using the NIRS-SPM software. The stimulation-induced wrist extension paradigm significantly increased activation of the contralateral SMC region according to the EMS intensity, while the ipsilateral SMC region showed no significant changes. This could be due in part to a nociceptive response to the higher EMS current intensities and could also result from increased sensorimotor integration in these cortical regions.
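For orientation, the sketch below shows the shape of a general-linear-model activation analysis of this kind: regress an oxyhemoglobin time course on a stimulation boxcar convolved with a haemodynamic response. It is a crude illustration under stated assumptions (1 Hz sampling, hypothetical 20 s EMS blocks, synthetic signal, a gamma-like HRF), not the NIRS-SPM model.

```python
import numpy as np

fs, n = 1.0, 600                                 # 1 Hz sampling, 10 min recording
t = np.arange(n) / fs
boxcar = ((t % 60) < 20).astype(float)           # hypothetical 20 s on / 40 s off EMS blocks
hrf = (t[:30] / 6) ** 2 * np.exp(-t[:30] / 6)    # crude gamma-like haemodynamic response
x = np.convolve(boxcar, hrf)[:n]                 # predicted activation regressor

X = np.column_stack([x, np.ones(n)])             # design matrix: regressor + intercept
hbo = x * 0.8 + np.random.default_rng(0).normal(0, 0.3, n)   # synthetic HbO channel

beta, *_ = np.linalg.lstsq(X, hbo, rcond=None)   # GLM fit by least squares
print(beta[0])                                   # estimated activation amplitude (~0.8)
```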
Abstract:
The ability to automate forced landings in an emergency such as engine failure is essential to improving the safety of Unmanned Aerial Vehicles operating in General Aviation airspace. By using active vision to detect safe landing zones below the aircraft, the reliability and safety of such systems are vastly improved through up-to-the-minute information about the ground environment. This paper presents the Site Detection System, a methodology utilising a downward-facing camera to analyse the ground environment in both 2D and 3D, detect safe landing sites and characterise them according to size, shape, slope and nearby obstacles. A methodology is presented showing the fusion of landing site detection from 2D imagery with a coarse Digital Elevation Map and dense 3D reconstructions using INS-aided Structure-from-Motion to improve accuracy. Results are presented from an experimental flight showing the precision/recall of detected landing sites against a hand-classified ground truth, and improved performance with the integration of 3D analysis from visual Structure-from-Motion.
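For reference, the precision/recall evaluation the abstract mentions reduces to counting detections against the hand-classified ground truth; the sketch below is the generic formulation, not code from the paper, and the counts are made-up examples.

```python
def precision_recall(true_pos, false_pos, false_neg):
    # precision: fraction of detected landing sites that match ground truth
    # recall: fraction of ground-truth safe sites that were detected
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall

# e.g. 40 correct detections, 10 spurious, 5 missed -> (0.8, ~0.89)
print(precision_recall(40, 10, 5))
```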
Abstract:
OBJECTIVES To identify the meteorological drivers of dengue vector density and determine high- and low-risk transmission zones for dengue prevention and control in Cairns, Australia. METHODS Weekly adult female Aedes aegypti data were obtained from 79 double sticky ovitraps (SOs) located in Cairns for the period September 2007-May 2012. Maximum temperature, total rainfall and average relative humidity data were obtained from the Australian Bureau of Meteorology for the study period. Time-series distributed lag non-linear models were used to assess the relationship between meteorological variables and vector density. Spatial autocorrelation was assessed via semivariography, and ordinary kriging was undertaken to predict vector density in Cairns. RESULTS Ae. aegypti density was associated with temperature and rainfall. However, these relationships differed between short (0-6 weeks) and long (0-30 weeks) lag periods. Semivariograms showed that vector distributions were spatially autocorrelated in September 2007-May 2008 and January 2009-May 2009, and vector density maps identified high transmission zones in the most populated parts of Cairns city, as well as Machans Beach. CONCLUSION Spatiotemporal patterns of Ae. aegypti in Cairns are complex, showing spatial autocorrelation and associations with temperature and rainfall. Sticky ovitraps should be placed no more than 1.2 km apart to ensure entomological coverage and efficient use of resources. Vector density maps provide evidence for the targeting of prevention and control activities. Further research is needed to explore the possibility of developing an early warning system for dengue based on meteorological and environmental factors.
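The ordinary-kriging step described above interpolates trap counts into a continuous density surface. A minimal sketch, assuming the pykrige package; the trap coordinates and counts are synthetic placeholders, not the Cairns data.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 79)                  # hypothetical trap eastings (km)
y = rng.uniform(0, 10, 79)                  # hypothetical trap northings (km)
z = rng.poisson(3, 79).astype(float)        # stand-in mean weekly female counts

# Fit a spherical variogram (the semivariography step) and krige onto a grid.
ok = OrdinaryKriging(x, y, z, variogram_model="spherical")
gridx = np.arange(0.0, 10.0, 0.5)
gridy = np.arange(0.0, 10.0, 0.5)
density, variance = ok.execute("grid", gridx, gridy)   # predicted density map + kriging variance
```

The kriging variance surface is what motivates the trap-spacing recommendation: beyond the variogram range, predictions between traps are no longer informed by neighbouring counts.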
Abstract:
Gel dosimeters and plastic chemical dosimeters such as PRESAGE™ are capable of very accurately mapping dose distributions in three dimensions. Combined with their near tissue equivalence, one would expect that after several decades of development they would be the dosimeter of choice for 3D dosimetry; however, they have not achieved widespread clinical use. This presentation will include a brief description and history of developments in gels and 3D plastics for dosimetry, their limitations and advantages, and their role in the future.
Abstract:
The literature around Library 2.0 remains largely theoretical, with few empirical studies, and is particularly limited in developing countries such as Indonesia. This study addresses this gap and aims to provide information about the current state of knowledge on Indonesian LIS professionals' understanding of Library 2.0. The researchers used qualitative and quantitative approaches for this study, asking thirteen closed- and open-ended questions in an online survey. The researchers used descriptive and in vivo coding to analyze the responses. Through their analysis, they identified three themes: technology, interactivity, and awareness of Library 2.0. Respondents demonstrated awareness of Library 2.0 and a basic understanding of the roles of interactivity and technology in libraries. However, overreliance on the technology used in libraries to conceptualize Library 2.0, without an emphasis on its core characteristics and principles, could lead to the misalignment of limited resources. The study results will potentially strengthen the research base for Library 2.0 practice, as well as inform LIS curricula in Indonesia so as to develop practitioners who are able to adapt to users' changing needs and expectations. It is expected that the preliminary data of this study could be used to design a much larger and more complex future research project in this area.
Abstract:
We have developed a Hierarchical Look-Ahead Trajectory Model (HiLAM) that incorporates the firing pattern of medial entorhinal grid cells in a planning circuit that includes interactions with the hippocampus and prefrontal cortex. We show the model's flexibility in representing large real-world environments using odometry information obtained from challenging video sequences. We acquire the visual data from a camera mounted on a small tele-operated vehicle. The camera has a panoramic field of view with its focal point approximately 5 cm above ground level, similar to what would be expected from a rat's point of view. Using established algorithms for calculating perceptual speed from the apparent rate of visual change over time, we generate raw dead-reckoning information, which loses spatial fidelity over time due to error accumulation. We rectify this loss of fidelity by exploiting the loop-closure detection ability of a biologically inspired robot navigation model termed RatSLAM. The rectified motion information serves as a velocity input to HiLAM to encode the environment in the form of grid cell and place cell maps. Finally, we show goal-directed path planning results of HiLAM in two different environments: an indoor square maze used in rodent experiments and an outdoor arena more than two orders of magnitude larger than the indoor maze. Together these results bridge for the first time the gap between higher-fidelity bio-inspired navigation models (HiLAM) and more abstracted but highly functional bio-inspired robotic mapping systems (RatSLAM), and move from simulated environments into real-world studies in rodent-sized arenas and beyond.
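The dead-reckoning stage the abstract describes amounts to integrating perceptual speed and heading change into a pose estimate. A minimal 2D sketch under stated assumptions: the speed and turn-rate values would come from the visual-change algorithms (placeholders here), and the drift this integration accumulates is what RatSLAM's loop-closure detection later corrects.

```python
import numpy as np

def dead_reckon(pose, speed, turn_rate, dt):
    # Integrate one timestep of planar odometry: heading first, then position.
    x, y, theta = pose
    theta += turn_rate * dt
    return (x + speed * np.cos(theta) * dt,
            y + speed * np.sin(theta) * dt,
            theta)

# e.g. driving a gentle arc; small per-step errors compound into drift
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = dead_reckon(pose, speed=0.2, turn_rate=0.05, dt=0.1)
print(pose)
```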
Abstract:
Contralateral bones are often used in many medical applications on the assumption that their bilateral differences are insignificant. Previous studies used a limited number of distance measurements to quantify the corresponding differences; therefore, little is known about bilateral 3D surface asymmetries. The aim of this study is to develop a comprehensive method to quantify geometrical asymmetries between the left and right tibia in order to provide first results on whether the contralateral tibia can be used as an equivalent reference. In this study, 3D bone models were reconstructed from CT scans of seven tibiae pairs, and 34 variables consisting of 2D and 3D measurements were taken from various anatomical regions. All 2D measurements, and the lateral plateau and distal subchondral bone surface measurements, showed insignificant differences (p > 0.05), but the remaining surfaces showed significant differences (p < 0.05). Our results suggest that the contralateral tibia can be used as a reference, especially in surgical applications such as articular reconstructions, since the bilateral differences in the subchondral bone surfaces were less than 0.3 mm. The method is also potentially transferable to other studies that require the accurate quantification of bilateral bone asymmetries.
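One common way to quantify 3D surface asymmetry of the kind reported above (sub-millimetre deviations between subchondral surfaces) is a nearest-neighbour distance between registered vertex clouds. A minimal sketch, not the paper's measurement protocol: it assumes the right tibia has already been mirrored and registered to the left (e.g. by ICP, not shown), with hypothetical (N, 3) vertex arrays as input.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_deviation(left_vertices, mirrored_right_vertices):
    # For each left-surface vertex, distance to the closest vertex on the
    # mirrored-and-registered right surface (units follow the input, e.g. mm).
    tree = cKDTree(mirrored_right_vertices)
    dists, _ = tree.query(left_vertices)
    return dists.mean(), dists.max()

# e.g. two nearly identical synthetic surfaces differ by ~0.1 mm on average
rng = np.random.default_rng(0)
left = rng.random((500, 3)) * 50.0
right = left + rng.normal(0.0, 0.1, left.shape)
print(surface_deviation(left, right))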
Abstract:
The solutions proposed in this thesis contribute to improving gait recognition performance in practical scenarios, further enabling the adoption of gait recognition in real-world security and forensic applications that require identifying humans at a distance. Pioneering work has been conducted on frontal gait recognition using depth images to allow gait to be integrated with biometric walkthrough portals. The effects of challenging conditions for gait recognition, including clothing, carrying goods, and viewpoint, have been explored. Enhanced approaches are proposed for the segmentation, feature extraction, feature optimisation and classification elements, and state-of-the-art recognition performance has been achieved. A frontal depth gait database has been developed and made available to the research community for further investigation. Solutions are explored in the 2D and 3D domains using multiple image sources, and both domain-specific and modality-independent gait features are proposed.
Abstract:
As a Lecturer in Animation History and a 3D Computer Animator, I received a copy of Moving Innovation: A History of Computer Animation by Tom Sito with an element of anticipation, in the hope that this text would clarify the complex evolution of Computer Graphics (CG). Tom Sito did not disappoint, as this text weaves together the multiple development streams and convergent technologies and techniques throughout history that would ultimately result in modern CG. Universities now have students who have never known a world without computer animation, and many students are younger than the first 3D CG animated feature film, Toy Story (1995); this text is ideal for teaching computer animation history and, as I would argue, it also provides a model for engaging young students in the study of animation history in general. This is because Sito places the development of computer animation within the context of its pre-digital ancestry, and throughout the text he continues to link the discussion to the broader history of animation, its pioneers, technologies and techniques...
Abstract:
In order to explore some of the possibilities and constraints of picture books on tablets, this chapter addresses adaptations of contemporary Australian picture books for tablet devices. It considers how publishing technologies shape the form and meaning of picture books, and attends particularly to the impact of interactivity and adaptation on such meaning. After discussing some contextual issues for electronic literature, the chapter explores the print and tablet versions of three picture books: Libby Gleeson and Freya Blackwood's Look, A Book! (2011), Nick Bland's The Wrong Book (2009), and Shaun Tan's Rules of Summer (2013).
Abstract:
There is increasing interest in the use of UAVs for environmental research, for example to track bushfire plumes, volcanic plumes or pollutant sources. The aim of this paper is to describe the theory and results of a bio-inspired plume tracking algorithm. Memory-based and gradient-based approaches were developed and compared. A method for generating sparse plumes was also developed. Results indicate the ability of the algorithms to track plumes in 2D and 3D.
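To make the gradient-based idea concrete, the sketch below implements a generic gradient-following update (not the paper's bio-inspired algorithm): estimate the local concentration gradient by finite differences and step toward increasing concentration. The concentration sampler is a hypothetical stand-in for a sensor reading.

```python
import numpy as np

def gradient_step(pos, sense, step=1.0, eps=0.5):
    # Central finite-difference estimate of the 3D concentration gradient.
    grad = np.zeros(3)
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        grad[i] = (sense(pos + d) - sense(pos - d)) / (2 * eps)
    norm = np.linalg.norm(grad)
    if norm < 1e-9:
        return pos                       # no detectable gradient: hold position
    return pos + step * grad / norm      # move up the concentration gradient

# e.g. a Gaussian plume centred at the origin; the tracker converges on it
sense = lambda p: np.exp(-np.dot(p, p) / 50.0)
pos = np.array([8.0, -5.0, 2.0])
for _ in range(20):
    pos = gradient_step(pos, sense)
print(pos)   # approaches the source near the origin
```

A memory-based approach differs by retaining past readings to re-acquire the plume when the gradient vanishes, which matters for the sparse plumes the paper generates.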
Abstract:
Spontaneous emission (SE) of a quantum emitter depends mainly on the transition strength between the upper and lower energy levels as well as the Local Density of States (LDOS) [1]. When a quantum dot (QD) is placed near a plasmonic waveguide, the LDOS of the QD is increased due to the addition of a non-radiative decay channel and a plasmonic decay channel to free-space emission [2-4]. The slow velocity and dramatic concentration of the electric field of the plasmon can capture the majority of the SE into the guided plasmon mode (Γpl). This paper focuses on studying the effect of waveguide height on the efficiency of coupling QD decay into the plasmon mode, using a numerical model based on the finite element method (FEM). The symmetric gap waveguide considered in this paper supports a single mode, and the QD is modelled as a dipole emitter. 2D simulation models are used to find the normalized Γpl, and 3D models are used to find the probability of SE decaying into the plasmon mode (β), including all three decay channels. It is found that changing the gap height can increase QD-plasmon coupling by up to a factor of 5, and optimally placing the QD can increase it by up to a factor of 8. To make the study more realistic, we also briefly studied the effect of the sharpness of the waveguide edge on SE emission into the guided plasmon mode. Preliminary nano-gap waveguide fabrication and testing are already underway, and the authors expect to compare the theoretical results with experimental outcomes in the future.
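By the standard definition consistent with this abstract, the β factor is the plasmonic decay rate as a fraction of the total decay over all three channels (plasmonic, radiative, non-radiative). The rates below are illustrative placeholders, not values from the paper.

```python
def beta_factor(gamma_pl, gamma_rad, gamma_nr):
    # Fraction of spontaneous emission captured by the guided plasmon mode.
    return gamma_pl / (gamma_pl + gamma_rad + gamma_nr)

# e.g. a plasmonic rate 5x the radiative rate -> ~77% of SE into the guided mode
print(beta_factor(5.0, 1.0, 0.5))
```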
Abstract:
Introduction and Objectives: Joint moments and joint powers during gait are widely used to determine the effects of rehabilitation programs as well as prosthetic fitting. Following the definition of power (the dot product of joint moment and joint angular velocity), it has previously been proposed to analyse the 3D angle between both vectors, αMw. Basically, joint power is maximised when both vectors are parallel and cancelled when both vectors are orthogonal. In other words, αMw < 60° reveals a propulsion configuration (more than 50% of the moment contributes to positive power), while αMw > 120° reveals a resistance configuration (more than 50% of the moment contributes to negative power). A stabilisation configuration (less than 50% of the moment contributes to power) corresponds to 60° < αMw < 120°. Previous studies demonstrated that the hip joints of able-bodied adults (AB) are mainly in a stabilisation configuration (αMw about 90°) during the stance phase of gait [1, 2]. Individuals with transfemoral amputation (TFA) need to maximise joint power at the hip while controlling the prosthetic knee during stance. Therefore, we tested the hypothesis that TFAs adopt a strategy that is different from continuous stabilisation. The objective of this study was to compute joint power and αMw for TFA and to compare them with AB. Methods: Three trials of walking at self-selected speed were analysed for 8 TFAs (7 males and 1 female, 46±10 years old, 1.78±0.08 m, 82±13 kg) and 8 ABs (males, 25±3 years old, 1.75±0.04 m, 67±6 kg). The joint moments were computed from a motion analysis system (Qualisys, Gothenburg, Sweden) and a multi-axial transducer (JR3, Woodland, USA) mounted above the prosthetic knee for TFAs, and from a motion analysis system (Motion Analysis, Santa Rosa, USA) and force plates (Bertec, Columbus, USA) for ABs. The TFAs were fitted with an OPRA (Integrum AB, Gothenburg, Sweden) osseointegrated implant system, and their prosthetic designs included pneumatic, hydraulic and microprocessor knees. Previous studies showed that inverse dynamics computed from the multi-axial transducer is the proper method considering the absorption at the foot and resistance at the knee. Results: The peak of positive power at loading response (H1) was earlier and lower for TFA compared to AB. Although the joint power is lower, the 3D angle between joint moment and joint angular velocity, αMw, reveals an obvious propulsion configuration (mean αMw about 20°) for TFA compared to a stabilisation configuration (mean αMw about 70°) for AB. The peaks of negative power at midstance (H2) and of positive power at preswing/initial swing (H3) occurred later, and were lower and longer, for TFA compared to AB. Again, the joint powers are lower for TFA but, in this case, αMw is almost comparable (with a time lag), demonstrating a stabilisation configuration (almost a resistance for TFA, mean αMw about 120°) and a propulsion configuration, respectively. The swing phase was not analysed in the present study. Conclusion: The analysis of hip joint power alone may indicate that TFAs demonstrated less propulsion and resistance than ABs during the stance phase of gait. This is true from a quantitative point of view. On the contrary, the 3D angle between joint moment and joint angular velocity, αMw, reveals that TFAs have a remarkable propulsion strategy at loading response and almost a resistance strategy at midstance, while ABs adopted a stabilisation strategy. The propulsion configuration, with αMw close to 0°, seems to aim at maximising the positive joint power. The configuration close to resistance, with αMw far from 180°, might aim at unlocking the prosthetic knee before swing while minimising the negative power. This analysis of both the joint power and the 3D angle between the joint moment and the joint angular velocity provides complementary insights into the gait strategies of TFA that can be used to support evidence-based rehabilitation and fitting of prosthetic components.
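The αMw quantity and the three configurations are defined directly by the abstract, so they translate into a short computation; the example vectors below are illustrative, not study data.

```python
import numpy as np

def alpha_mw(moment, omega):
    # 3D angle (degrees) between the joint moment and joint angular velocity.
    c = np.dot(moment, omega) / (np.linalg.norm(moment) * np.linalg.norm(omega))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def configuration(angle_deg):
    if angle_deg < 60.0:
        return "propulsion"      # >50% of the moment contributes to positive power
    if angle_deg > 120.0:
        return "resistance"      # >50% of the moment contributes to negative power
    return "stabilisation"       # <50% of the moment contributes to power

# e.g. nearly aligned moment and angular velocity -> propulsion (alpha ~ 5 deg)
m = np.array([10.0, 2.0, 0.0])
w = np.array([1.0, 0.3, 0.0])
print(alpha_mw(m, w), configuration(alpha_mw(m, w)))
```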