43 results for OPTICAL SYSTEMS
at Queensland University of Technology - ePrints Archive
Abstract:
3D motion capture is a medium that plots motion, typically human motion, converting it into a form that can be represented digitally. It is a fast-evolving field, and recent inertial technology may provide new artistic possibilities for its use in live performance. Although not often used in this context, motion capture has a combination of attributes that can provide unique forms of collaboration with the performing arts. The inertial motion capture suit used for this study has orientation sensors placed at strategic points on the body to map body motion. Its portability, real-time performance, ease of use, and immunity from the line-of-sight problems inherent in optical systems suggest it would work well as a live performance technology. Many animation techniques can be used in real time. This research examines a broad cross-section of these techniques using four practice-led cases to assess the suitability of inertial motion capture for live performance. Although each case explores different visual possibilities, all make use of the performativity of the medium, using either an improvisational format or interactivity among stage, audience and screen that would be difficult to emulate any other way. A real-time environment is not capable of reproducing the depth and sophistication of the animation people have come to expect through media, which takes many hours to render. In time, the combination of what can be produced in real time and the tools available in a 3D environment will no doubt create its own tree of aesthetic directions in live performance. The case studies also examine the potential for interactivity that this technology offers.
Abstract:
Purpose: James Clerk Maxwell is usually recognized as being the first, in 1854, to consider using inhomogeneous media in optical systems. However, some fifty years earlier Thomas Young, stimulated by his interest in the optics of the eye and accommodation, had already modeled some applications of gradient-index optics. These applications included using an axial gradient to provide optics free of spherical aberration, and a spherical gradient to describe the optics of the atmosphere and the eye lens. We evaluated Young's contributions. Method: We attempted to derive Young's equations for axial and spherical refractive index gradients. Ray tracing was used to confirm the accuracy of the formulae. Results: We did not confirm Young's equation for the axial gradient providing aberration-free optics, but derived a slightly different equation. We confirmed the correctness of his equations for the deviation of rays in a spherical gradient index and for the focal length of a lens with a nucleus of fixed index surrounded by a cortex of index reducing towards the edge. Young claimed that the equation for focal length applied to a lens with part of the constant-index nucleus of the sphere removed, such that the loss of focal length was a quarter of the thickness removed, but this is not strictly correct. Conclusion: Young's theoretical work in gradient-index optics received no acknowledgement from either his contemporaries or later authors. While his model of the eye lens is not an accurate physiological description of the human lens, with the index reducing least quickly at the edge, it represented a bold attempt to approximate the characteristics of the lens. Thomas Young's work deserves wider recognition.
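To make the gradient-index idea concrete, the sketch below numerically traces a ray through a medium whose index falls off with distance from the centre, in the spirit of the spherical gradients discussed above. The parabolic profile, its coefficients and the step sizes are illustrative assumptions, not Young's (or the authors') actual formulae.

```python
import numpy as np

# Minimal ray trace through a gradient-index medium (a sketch only).
# The index is assumed to fall parabolically away from the centre,
# loosely mimicking a lens whose index reduces towards the edge.
n0, k = 1.42, 0.05          # assumed central index and gradient strength

def n(r):
    """Refractive index at distance r from the centre."""
    return n0 - k * r**2

def grad_n(pos):
    """Gradient of n at position pos (analytic for the parabolic profile)."""
    return -2.0 * k * pos

def trace(pos, direction, ds=1e-3, steps=4000):
    """Integrate the ray equation d/ds(n dr/ds) = grad(n) with Euler steps."""
    d = direction / np.linalg.norm(direction)
    path = [pos.copy()]
    for _ in range(steps):
        ni = n(np.linalg.norm(pos))
        d = d + ds * grad_n(pos) / ni   # ray bends towards higher index
        d /= np.linalg.norm(d)
        pos = pos + ds * d
        path.append(pos.copy())
    return np.array(path)

ray = trace(np.array([0.0, 0.5]), np.array([1.0, 0.0]))
print(ray[-1])   # the ray curves towards the axis, as in a GRIN lens
```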
Abstract:
In general optical systems, the range of distances over which the detector cannot detect any change in focus is called the depth-of-field. This may be specified by movement of either the object plane or the image plane, with the former referred to as depth-of-field and the latter as depth-of-focus (DOF). Either term can be used in vision science, where we refer to changes in vergence, which have the same value in both object and image space.
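As a worked illustration of the vergence convention, the snippet below converts a vergence interval back into object distances. The fixation distance and tolerance are assumed example values, not figures from the paper.

```python
# Depth-of-field expressed as a vergence interval (dioptres).
def vergence(distance_m):
    """Vergence in dioptres for an object at the given distance (metres)."""
    return 1.0 / distance_m

fixation = 2.0                    # assumed fixation distance, metres
dof = 0.3                         # assumed depth-of-field, +/- dioptres
v = vergence(fixation)            # 0.5 D
near = 1.0 / (v + dof)            # nearest distance still seen in focus
far = 1.0 / (v - dof)             # farthest distance still seen in focus
print(f"In focus from {near:.2f} m to {far:.2f} m")   # 1.25 m to 5.00 m
```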
Abstract:
We show that the parallax motion resulting from non-nodal rotation in panorama capture can be exploited for light field construction from commodity hardware. Automated panoramic image capture typically seeks to rotate a camera exactly about its nodal point, for which no parallax motion is observed. This can be difficult or impossible to achieve due to limitations of the mounting or optical systems, and consequently a wide range of captured panoramas suffer from parallax between images. We show that by capturing such imagery over a regular grid of camera poses, then appropriately transforming the captured imagery to a common parameterisation, a light field can be constructed. The resulting four-dimensional image encodes scene geometry as well as texture, allowing an increasingly rich range of light field processing techniques to be applied. Employing an Ocular Robotics REV25 camera pointing system, we demonstrate light field capture, refocusing and low-light image enhancement.
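One standard operation that such a 4D light field permits is shift-and-sum refocusing: each view is shifted in proportion to its offset on the camera grid and the results averaged, which selects a focal plane. The sketch below illustrates the idea under a two-plane parameterisation; the array shape, slope value and integer shifts are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def refocus(lf, slope):
    """Refocus a light field by shifting each view by slope * (view offset).

    lf: array of shape (S, T, U, V) -- an S x T grid of U x V images.
    slope: pixel shift per unit of view offset; selects the focal depth.
    """
    S, T, U, V = lf.shape
    sc, tc = (S - 1) / 2.0, (T - 1) / 2.0
    out = np.zeros((U, V))
    for s in range(S):
        for t in range(T):
            dy = int(round(slope * (s - sc)))   # integer shifts for brevity
            dx = int(round(slope * (t - tc)))
            out += np.roll(lf[s, t], (dy, dx), axis=(0, 1))
    return out / (S * T)

lf = np.random.rand(5, 5, 64, 64)       # stand-in for the captured views
image = refocus(lf, slope=1.5)          # focal plane chosen via the slope
```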
Abstract:
Piezoelectric polymers based on polyvinylidene fluoride (PVDF) are of interest as smart materials for novel space-based telescope applications. Dimensional adjustments of adaptive thin polymer films are achieved via controlled charge deposition. Predicting their long-term performance requires a detailed understanding of the piezoelectric property changes that develop during space environmental exposure. The overall materials performance is governed by a combination of chemical and physical degradation processes occurring in low Earth orbit, as established by our past laboratory-based materials performance experiments (see report SAND 2005-6846). Molecular changes are primarily induced via radiative damage, and physical damage from temperature and atomic oxygen exposure is evident as depoling, loss of orientation and surface erosion. The current project extension has allowed us to design and fabricate small experimental units to be exposed to low Earth orbit environments as part of the Materials International Space Station Experiments program. The space exposure of these piezoelectric polymers will verify the observed trends and their degradation pathways, and provide feedback on using piezoelectric polymer films in space. This will be the first time that PVDF-based adaptive polymer films are operated and exposed to combined atomic oxygen, solar UV and temperature variations in an actual space environment. The experiments are designed to be fully autonomous, involving cyclic application of excitation voltages, sensitive film position sensors and remote data logging. This mission will provide critically needed feedback on the long-term performance and degradation of such materials, and ultimately on the feasibility of large, adaptive and lightweight optical systems utilizing these polymers in space.
Abstract:
In many parts of the world, uncontrolled fires in sparsely populated areas are a major concern, as they can quickly grow into large and destructive conflagrations. Detecting these fires has traditionally been a job for trained humans on the ground or in the air. In many cases, these manned solutions are simply not able to survey the amount of area necessary to maintain sufficient vigilance and coverage. This paper investigates the use of unmanned aerial systems (UAS) for automated wildfire detection. The proposed system uses low-cost, consumer-grade electronics and sensors combined with various airframes to create a system suitable for automatic detection of wildfires. The system employs automatic image processing techniques to analyze captured images and autonomously detect fire-related features such as fire lines, burnt regions, and flammable material. This image recognition algorithm is designed to cope with environmental occlusions such as shadows, smoke and obstructions. Once the fire is identified and classified, it is used to initialize a spatial/temporal fire simulation. This simulation is based on occupancy maps whose fidelity can be varied to include stochastic elements, various types of vegetation, weather conditions, and unique terrain. The simulations can be used to predict the effects of optimized firefighting methods on the future propagation of the fires, and the system as a whole can greatly reduce the time to detection of wildfires, thereby minimizing the ensuing damage. This paper also documents experimental flight tests conducted in Brisbane, Australia using a SenseFly Swinglet UAS, as well as modifications for a custom UAS.
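As a rough illustration of the occupancy-map style of fire simulation described above, the toy sketch below steps a stochastic spread model on a grid. The cell states, four-neighbour spread rule and probability are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

# Toy occupancy-map fire spread: cells are empty, flammable, burning or
# burnt, and fire spreads stochastically to flammable neighbours.
EMPTY, FUEL, BURNING, BURNT = 0, 1, 2, 3

def step(grid, p_spread=0.4, rng=np.random.default_rng()):
    new = grid.copy()
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] == BURNING:
                new[r, c] = BURNT           # a burning cell burns out
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < rows and 0 <= cc < cols
                            and grid[rr, cc] == FUEL
                            and rng.random() < p_spread):
                        new[rr, cc] = BURNING
    return new

grid = np.full((50, 50), FUEL)
grid[25, 25] = BURNING                      # ignition point
for _ in range(20):
    grid = step(grid)                       # advance the fire front
```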
Abstract:
Person tracking systems to date have relied on either motion detection or optical flow as a basis for person detection and tracking. As yet, systems have not been developed that utilise both of these techniques. We propose a person tracking system that uses both, made possible by a novel hybrid optical flow-motion detection technique that we have developed. This provides the system with two methods of person detection, helping to avoid missed detections and the need to predict position, which can lead to errors in tracking and mistakes when handling occlusion situations. Our results show that our system is able to track people accurately, with an average error of less than four pixels, and that our system outperforms the current CAVIAR benchmark system.
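One way such a fusion can be realised (a sketch only; the paper's own hybrid technique is not reproduced here) is to demand agreement between a background-subtraction mask and a dense optical-flow magnitude map. The video file name, flow threshold and choice of OpenCV building blocks below are assumptions for illustration.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.avi")   # assumed input video
bg = cv2.createBackgroundSubtractorMOG2()

ok, frame = cap.read()                       # assumes at least one frame
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg = bg.apply(frame)                     # motion-detection evidence
    flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)       # optical-flow evidence
    moving = (mag > 1.0).astype(np.uint8) * 255
    detection = cv2.bitwise_and(fg, moving)  # both cues must agree
    prev = gray
```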
Abstract:
Person tracking systems are dependent on being able to locate a person accurately across a series of frames. Optical flow can be used to segment a moving object from a scene, provided the expected velocity of the moving object is known; but successful detection also relies on being able to segment the background. A problem with existing optical flow techniques is that they do not discriminate the foreground from the background, and so often detect motion (and thus the object) in the background. To overcome this problem, we propose a new optical flow technique that is based upon an adaptive background segmentation technique and only determines optical flow in regions of motion. This technique has been developed with a view to being used in surveillance systems, and our testing shows that for this application it is more effective than other standard optical flow techniques.
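A minimal sketch of the general idea of restricting flow computation to motion regions, using off-the-shelf OpenCV pieces rather than the paper's own adaptive segmentation; the video name and thresholds are assumptions.

```python
import cv2

cap = cv2.VideoCapture("camera.avi")          # assumed input video
bg = cv2.createBackgroundSubtractorMOG2()     # adaptive background model

ok, frame = cap.read()                        # assumes at least one frame
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
prev_mask = bg.apply(frame)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = cv2.threshold(prev_mask, 200, 255, cv2.THRESH_BINARY)[1]
    # Only pick features inside the foreground mask, so flow is never
    # computed in the background.
    pts = cv2.goodFeaturesToTrack(prev, 200, 0.01, 5, mask=mask)
    if pts is not None:
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        good = status.flatten() == 1
        flow = nxt[good] - pts[good]          # flow in motion regions only
    prev, prev_mask = gray, bg.apply(frame)
```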
Abstract:
Machine downtime, whether planned or unplanned, is intuitively costly to manufacturing organisations, but is often very difficult to quantify. The available literature showed that costing processes are rarely undertaken within manufacturing organisations. Where cost analyses have been undertaken, they have generally valued only a small proportion of the affected costs, leading to an overly conservative estimate. This thesis aimed to develop a cost of downtime model, with particular emphasis on the application of the model to Australia Post's Flat Mail Optical Character Reader (FMOCR). The costing analysis determined a cost of downtime of $5,700,000 per annum, or an average cost of $138 per operational hour. The second section of this work focused on the use of the cost of downtime to objectively determine areas of opportunity for cost reduction on the FMOCR. This was the first time within Post that maintenance costs were considered alongside downtime in determining machine performance. Because of this, the results of the analysis revealed areas which have historically not been targeted for cost reduction. Further exploratory work was undertaken on the Flats Lift Module (FLM) and Auto Induction Station (AIS) Deceleration Belts through comparison of the results against two additional FMOCR analysis programs. This research has demonstrated the development of a methodical and quantifiable cost of downtime for the FMOCR. This is the first time that Post has endeavoured to examine the cost of downtime, and it is also one of the very few methodologies for valuing downtime costs that has been proposed in the literature. The work undertaken has also demonstrated how the cost of downtime can be incorporated into machine performance analysis, with specific application to identifying high-cost modules. The outcomes of this report are both a methodology for costing downtime and a list of areas for cost reduction. In doing so, this thesis has delivered the two key objectives presented at the outset of the research.
Abstract:
In this study, the authors propose a novel video stabilisation algorithm for mobile platforms with moving objects in the scene. The quality of videos obtained from mobile platforms, such as unmanned airborne vehicles, suffers from jitter caused by several factors. In order to remove this undesired jitter, accurate estimation of global motion is essential. However, it is difficult to estimate global motion accurately from mobile platforms due to increased estimation errors and noise. Additionally, large moving objects in the video scenes contribute to the estimation errors. Currently, only very few motion estimation algorithms have been developed for video scenes collected from mobile platforms, and this paper shows that these algorithms fail when there are large moving objects in the scene. In this study, a theoretical proof is provided which demonstrates that the use of delta optical flow can improve the robustness of video stabilisation in the presence of large moving objects in the scene. The authors also propose the use of sorted arrays of local motions and the selection of feature points to separate outliers from inliers. The proposed algorithm is tested on six video sequences, collected from one fixed platform and four mobile platforms, plus one synthetic video; three of these contain large moving objects. Experiments show that the proposed algorithm performs well on all of these video sequences.
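To illustrate why sorting local motions helps, the sketch below estimates global motion as the median of sorted feature displacements, so vectors from a large moving object fall into the tails and are discarded as outliers, and then warps the frame to cancel the jitter. This illustrates the outlier-rejection idea only; the authors' delta-optical-flow formulation is not reproduced here.

```python
import cv2
import numpy as np

def global_motion(prev_gray, gray):
    """Robust global (dx, dy) from sorted arrays of local feature motions."""
    pts = cv2.goodFeaturesToTrack(prev_gray, 300, 0.01, 8)  # assumes features exist
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.flatten() == 1
    local = (nxt - pts).reshape(-1, 2)[good]   # local motion vectors
    dx = np.sort(local[:, 0])                  # sorted arrays of motions
    dy = np.sort(local[:, 1])
    return dx[len(dx) // 2], dy[len(dy) // 2]  # median as robust estimate

def stabilise(frame, dx, dy):
    """Shift the frame to cancel the estimated global jitter."""
    M = np.float32([[1, 0, -dx], [0, 1, -dy]])
    h, w = frame.shape[:2]
    return cv2.warpAffine(frame, M, (w, h))
```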
Abstract:
Object tracking systems require accurate segmentation of the objects from the background for effective tracking. Motion segmentation or optical flow can be used to segment incoming images. Whilst optical flow allows multiple moving targets to be separated based on their individual velocities, optical flow techniques are prone to errors caused by changing lighting and occlusions, both common in a surveillance environment. Motion segmentation techniques are more robust to fluctuating lighting and occlusions, but do not provide information on the direction of the motion. In this paper we propose a combined motion segmentation/optical flow algorithm for use in object tracking. The proposed algorithm uses the motion segmentation results to inform the optical flow calculations, ensuring that optical flow is only calculated in regions of motion and improving the performance of the optical flow around the edges of moving objects. Optical flow is calculated at pixel resolution, and tracking of flow vectors is employed to improve performance and detect discontinuities, which can indicate the location of overlaps between objects. The algorithm is evaluated by attempting to extract a moving target from the flow images, given expected horizontal and vertical movement (i.e. the algorithm's intended use for object tracking). Results show that the proposed algorithm outperforms other widely used optical flow techniques for this surveillance application.
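The evaluation step described above, extracting a target from a dense flow field given its expected horizontal and vertical motion, can be sketched as a simple velocity-gating operation. The flow array, expected velocity and tolerance below are illustrative assumptions.

```python
import numpy as np

def extract_target(flow, expected, tol=1.0):
    """Mark pixels whose flow vector lies within tol of the expected one.

    flow: (H, W, 2) per-pixel (dx, dy); expected: (dx, dy) of the target.
    """
    diff = np.linalg.norm(flow - np.asarray(expected), axis=2)
    return diff < tol                           # boolean target mask

flow = np.random.randn(240, 320, 2) * 0.2       # stand-in flow field
flow[100:150, 120:180] += (3.0, 0.0)            # synthetic target moving right
mask = extract_target(flow, expected=(3.0, 0.0))
```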
Abstract:
Motion has been shown in biology to be a critical component for obstacle avoidance and navigation. In particular, optical flow is a powerful motion cue that has been exploited in many biological systems for survival. In this paper, we investigate an obstacle detection system that uses optical flow to obtain range information to objects. Our experimental results demonstrate that optical flow is capable of providing good obstacle information but has obvious failure modes. We acknowledge that our optical flow system has certain disadvantages and cannot be used on its own for navigation. Instead, we believe that optical flow is a critical visual subsystem used when moving at reasonable speeds. When combined with other visual subsystems, considerable synergy can result.
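One simple way range can be recovered from flow, under the restrictive assumption of pure sideways camera translation, is that a point at depth Z produces image motion of f*v/Z, so Z = f*v / |flow|. The focal length, speed and flow values below are illustrative assumptions; real systems must also handle rotation and noise, which is one source of the failure modes noted above.

```python
import numpy as np

f = 400.0        # focal length in pixels (assumed)
v = 1.5          # camera translation speed, m/s (assumed)
flow = np.array([12.0, 6.0, 2.0])     # flow magnitudes, pixels/s
Z = f * v / flow                      # estimated range to each point, m
print(Z)         # nearer obstacles produce larger flow, hence smaller Z
```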