293 results for optical processing
Abstract:
Person tracking systems to date have relied on either motion detection or optical flow as a basis for person detection and tracking. As yet, systems have not been developed that utilise both of these techniques. We propose a person tracking system that uses both, made possible by a novel hybrid optical flow-motion detection technique that we have developed. This provides the system with two methods of person detection, helping to avoid missed detections and the need to predict position, which can lead to errors in tracking and mistakes when handling occlusion situations. Our results show that our system is able to track people accurately, with an average error of less than four pixels, and that it outperforms the current CAVIAR benchmark system.
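A minimal sketch of how such a two-cue detector could be assembled, using OpenCV's MOG2 background subtractor and Farneback dense flow as stand-ins for the hybrid technique described above (all names and thresholds are illustrative, not the authors'):

```python
import cv2
import numpy as np

backsub = cv2.createBackgroundSubtractorMOG2()
prev_gray = None

def detect_people(frame):
    """Return candidate boxes flagged by motion detection OR optical flow."""
    global prev_gray
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Cue 1: motion detection via adaptive background subtraction.
    motion_mask = backsub.apply(frame)

    # Cue 2: dense optical flow magnitude, thresholded.
    flow_mask = np.zeros_like(motion_mask)
    if prev_gray is not None:
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flow_mask = (np.linalg.norm(flow, axis=2) > 1.0).astype(np.uint8) * 255
    prev_gray = gray

    # Either cue can trigger a detection, so a miss by one detector
    # can be covered by the other.
    combined = cv2.bitwise_or(motion_mask, flow_mask)
    contours, _ = cv2.findContours(combined, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
```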
Abstract:
Person tracking systems depend on being able to locate a person accurately across a series of frames. Optical flow can be used to segment a moving object from a scene, provided the expected velocity of the moving object is known; but successful detection also relies on being able to segment the background. A problem with existing optical flow techniques is that they do not discriminate the foreground from the background, and so often detect motion (and thus the object) in the background. To overcome this problem, we propose a new optical flow technique based upon an adaptive background segmentation technique, which only determines optical flow in regions of motion. This technique has been developed with a view to being used in surveillance systems, and our testing shows that for this application it is more effective than other standard optical flow techniques.
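The core idea, gating optical flow by an adaptive background model so that flow is only reported in regions of motion, could be sketched as follows (an illustration, not the paper's algorithm; MOG2 and Farneback flow are assumptions):

```python
import cv2
import numpy as np

backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def motion_gated_flow(prev_gray, gray, frame):
    """Dense flow with background regions suppressed by an adaptive model."""
    fg_mask = backsub.apply(frame) > 0          # adaptive background segmentation
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flow[~fg_mask] = 0.0                        # no flow reported in background
    return flow
```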
Abstract:
Object tracking systems require accurate segmentation of objects from the background for effective tracking. Motion segmentation or optical flow can be used to segment incoming images. Whilst optical flow allows multiple moving targets to be separated based on their individual velocities, optical flow techniques are prone to errors caused by changing lighting and occlusions, both common in a surveillance environment. Motion segmentation techniques are more robust to fluctuating lighting and occlusions, but do not provide information on the direction of the motion. In this paper we propose a combined motion segmentation/optical flow algorithm for use in object tracking. The proposed algorithm uses the motion segmentation results to inform the optical flow calculations, ensuring that optical flow is only calculated in regions of motion and improving the performance of the optical flow around the edges of moving objects. Optical flow is calculated at pixel resolution, and tracking of flow vectors is employed to improve performance and to detect discontinuities, which can indicate the location of overlaps between objects. The algorithm is evaluated by attempting to extract a moving target from the flow images, given expected horizontal and vertical movement (i.e. the algorithm's intended use in object tracking). Results show that the proposed algorithm outperforms other widely used optical flow techniques for this surveillance application.
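The evaluation step described above, extracting a target given its expected horizontal and vertical movement, reduces to selecting the pixels whose flow agrees with that expectation; a hedged sketch (the tolerance is a placeholder):

```python
import numpy as np

def extract_target(flow, expected_uv, tol=1.5):
    """Return a boolean mask of pixels moving with the expected (u, v) velocity."""
    diff = flow - np.asarray(expected_uv, dtype=flow.dtype)
    return np.linalg.norm(diff, axis=2) < tol
```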
Abstract:
Within surveillance video, occlusions are commonplace, and resolving them correctly is key to accurately tracking objects. The challenge of accurately segmenting objects is further complicated by the fact that in many real-world surveillance environments the objects appear very similar; for example, footage of pedestrians in a city environment will contain many people wearing dark suits. In this paper, we propose a novel technique to segment groups and resolve occlusions using optical flow discontinuities. We demonstrate that the ratio of continuous to discontinuous pixels within a region can be used to locate the overlapping edges, and we incorporate this into an object tracking framework. Results on a portion of the ETISEO database show that the proposed algorithm improves tracking performance overall, and in particular improves tracking within occlusions.
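A rough sketch of the continuous-to-discontinuous ratio idea, using local flow gradients as the discontinuity test (the threshold and the exact measure are placeholders, not the paper's definitions):

```python
import numpy as np

def discontinuity_ratio(flow, region_mask, tol=2.0):
    """Score a region by its ratio of continuous to discontinuous flow pixels."""
    gy_u, gx_u = np.gradient(flow[..., 0])
    gy_v, gx_v = np.gradient(flow[..., 1])
    grad_mag = np.sqrt(gy_u**2 + gx_u**2 + gy_v**2 + gx_v**2)
    discontinuous = (grad_mag > tol) & region_mask
    continuous = (grad_mag <= tol) & region_mask
    # A low ratio suggests an internal flow edge, i.e. a likely boundary
    # where one object overlaps another.
    return continuous.sum() / max(discontinuous.sum(), 1)
```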
Abstract:
In this paper, a method has been developed for estimating pitch angle, roll angle and aircraft body rates based on horizon detection and temporal tracking using a forward-looking camera, without assistance from other sensors. Using an image processing front-end, we select several lines in an image that may or may not correspond to the true horizon. The optical flow at each candidate line is calculated, which may be used to measure the body rates of the aircraft. Using an Extended Kalman Filter (EKF), the aircraft state is propagated using a motion model and a candidate horizon line is associated using a statistical test based on the optical flow measurements and the location of the horizon. Once associated, the selected horizon line, along with the associated optical flow, is used as a measurement to the EKF. To test the accuracy of the algorithm, two flights were conducted, one using a highly dynamic Uninhabited Airborne Vehicle (UAV) in clear flight conditions and the other in a human-piloted Cessna 172 in conditions where the horizon was partially obscured by terrain, haze and smoke. The UAV flight resulted in pitch and roll error standard deviations of 0.42° and 0.71° respectively when compared with a truth attitude source. The Cessna flight resulted in pitch and roll error standard deviations of 1.79° and 1.75° respectively. The benefits of selecting and tracking the horizon using a motion model and optical flow, rather than naively relying on the image processing front-end, are also demonstrated.
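The propagate/associate/update loop described above could be schematised as a standard gated EKF step; the models, dimensions and the chi-square gate value below are placeholders rather than the paper's:

```python
import numpy as np

def ekf_step(x, P, F, Q, candidates, h, H, R, gate=9.21):
    """One predict/associate/update cycle over candidate horizon measurements."""
    # Predict the attitude/body-rate state with the motion model.
    x, P = F @ x, F @ P @ F.T + Q
    S = H @ P @ H.T + R

    # Associate: chi-square gate on the innovation of each candidate
    # horizon line (its location plus its optical-flow measurement).
    best, best_d2 = None, gate
    for z in candidates:
        v = z - h(x)
        d2 = v @ np.linalg.solve(S, v)
        if d2 < best_d2:
            best, best_d2 = v, d2

    if best is not None:  # Update with the accepted line only.
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ best
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```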
Abstract:
Automated visual surveillance of crowds is a rapidly growing area of research. In this paper we focus on motion representation for the purpose of abnormality detection in crowded scenes. We propose a novel visual representation called textures of optical flow. The proposed representation measures the uniformity of a flow field in order to detect anomalous objects such as bicycles, vehicles and skateboarders; and can be combined with spatial information to detect other forms of abnormality. We demonstrate that the proposed approach outperforms state-of-the-art anomaly detection algorithms on a large, publicly-available dataset.
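As a toy illustration of a flow-uniformity score in the spirit of this representation (the paper's actual texture measure is not reproduced here; block size and scoring are assumptions):

```python
import numpy as np

def flow_uniformity(flow, block=16):
    """Per-block uniformity of flow magnitude; low scores flag non-uniform flow."""
    mag = np.linalg.norm(flow, axis=2)
    h, w = mag.shape
    scores = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            patch = mag[i*block:(i+1)*block, j*block:(j+1)*block]
            scores[i, j] = 1.0 / (1.0 + patch.var())  # high = uniform flow
    return scores
```

Fast, non-uniform flow from objects such as bicycles or skateboards would depress the score in the affected blocks, which could then feed an anomaly detector.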
Abstract:
We have developed a digital image registration program for an MC68000-based fundus image processing system (FIPS). Not only is FIPS capable of executing typical image processing algorithms in both the spatial and Fourier domains, but the execution time for many operations has also been made much quicker by using a hybrid of "C", Fortran and MC68000 assembly languages.
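As an example of the kind of Fourier-domain registration such a system performs, a phase-correlation shift estimator can be written in a few lines (a modern Python sketch, not a reconstruction of the original MC68000 code):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation aligning image b to a."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices to signed shifts.
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx
```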
Abstract:
Signal-degrading speckle is one factor that can reduce the quality of optical coherence tomography images. We demonstrate the use of a hierarchical model-based motion estimation processing scheme based on an affine-motion model to reduce speckle in optical coherence tomography imaging, through image registration and the averaging of multiple B-scans. The proposed technique is evaluated against other methods available in the literature. The results from a set of retinal images show the benefit of the proposed technique, which provides an improvement in signal-to-noise ratio proportional to the square root of the number of averaged images, leading to clearer visual information in the averaged image. The benefits of the proposed technique are also explored in the case of ocular anterior segment imaging.
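A register-then-average pipeline of this kind might be sketched as follows, with OpenCV's ECC affine alignment standing in for the paper's hierarchical motion-estimation scheme:

```python
import cv2
import numpy as np

def average_bscans(bscans):
    """Align each B-scan to the first with an affine model, then average."""
    ref = bscans[0].astype(np.float32)
    acc, n = ref.copy(), 1
    for scan in bscans[1:]:
        scan = scan.astype(np.float32)
        warp = np.eye(2, 3, dtype=np.float32)
        try:
            _, warp = cv2.findTransformECC(ref, scan, warp, cv2.MOTION_AFFINE)
        except cv2.error:
            continue  # skip B-scans that fail to converge
        aligned = cv2.warpAffine(scan, warp, ref.shape[::-1],
                                 flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        acc += aligned
        n += 1
    # Averaging N registered scans improves SNR by roughly sqrt(N).
    return acc / n
```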
Abstract:
This paper presents the flight trials of an electro-optical (EO) sense-and-avoid system onboard a Cessna host aircraft (camera aircraft). We focus on the autonomous collision avoidance capability of the sense-and-avoid system; that is, closed-loop integration with the onboard aircraft autopilot. We also discuss the system's approach to target detection and avoidance control, as well as the methodology of the flight trials. The results demonstrate the ability of the sense-and-avoid system to automatically detect potentially conflicting aircraft and engage the host Cessna autopilot to perform an avoidance manoeuvre, all without any human intervention.
Abstract:
A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs has great potential for growth, with applications in practical domains such as security surveillance and health care, it suffers from significant constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation develops a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks, and of optimal camera configuration determination.

Addressing the first problem, multi-object tracking and localisation, requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced and required by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum set of two images from the camera, provided that the axis of rotation between the two images goes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image.

For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground-plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image-plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours that are determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path, using a probability maximisation process with locally generated descriptions.

The second problem, camera placement, emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types, etc. of the cameras must be chosen so that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met. To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced, and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
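The robot-assisted ground-plane calibration could be illustrated as follows: each camera fits a homography between its image-plane observations of the robot and the robot's broadcast global positions (a sketch with hypothetical names, not the dissertation's implementation):

```python
import cv2
import numpy as np

def calibrate_camera(image_points, world_points):
    """image_points, world_points: Nx2 arrays of matched robot observations."""
    H, _ = cv2.findHomography(np.float32(image_points),
                              np.float32(world_points), cv2.RANSAC)
    return H  # maps image-plane pixels to global ground-plane coordinates

def localise(H, pixel):
    """Project a detected object's image location onto the ground plane."""
    p = H @ np.array([pixel[0], pixel[1], 1.0])
    return p[:2] / p[2]
```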
Abstract:
The assessment of choroidal thickness from optical coherence tomography (OCT) images of the human choroid is an important clinical and research task, since it provides valuable information regarding the eye's normal anatomy and physiology, and changes associated with various eye diseases and the development of refractive error. Due to the time-consuming and subjective nature of manual image analysis, there is a need for the development of reliable, objective, automated methods of image segmentation to derive choroidal thickness measures. However, the detection of the two boundaries which delineate the choroid is a complicated and challenging task, in particular the detection of the outer choroidal boundary, due to a number of issues including: (i) the vascular ocular tissue is non-uniform and rich in non-homogeneous features, and (ii) the boundary can have a low contrast. In this paper, an automatic segmentation technique based on graph-search theory is presented to segment the inner choroidal boundary (ICB) and the outer choroidal boundary (OCB) to obtain the choroid thickness profile from OCT images. Before the segmentation, the B-scan is pre-processed to enhance the two boundaries of interest and to minimize the artifacts produced by surrounding features. The algorithm to detect the ICB is based on a simple edge filter and a directional weighted map penalty, while the algorithm to detect the OCB is based on OCT image enhancement and a dual brightness probability gradient. The method was tested on a large data set of images from a pediatric (1083 B-scans) and an adult (90 B-scans) population, which were previously manually segmented by an experienced observer. The results demonstrate that the proposed method provides robust detection of the boundaries of interest and is a useful tool to extract clinical data.
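A simplified stand-in for the graph-search step: if the pre-processing has already produced a per-pixel boundary cost map, the minimum-cost left-to-right path can be extracted by dynamic programming (the paper's actual graph construction and penalties are not reproduced):

```python
import numpy as np

def trace_boundary(cost):
    """cost: HxW map, low values where the boundary is likely; returns row per column."""
    h, w = cost.shape
    acc = cost.astype(np.float64)
    for x in range(1, w):
        for y in range(h):
            lo, hi = max(0, y - 1), min(h, y + 2)
            acc[y, x] += acc[lo:hi, x - 1].min()
    # Backtrack from the cheapest endpoint in the last column.
    path = [int(np.argmin(acc[:, -1]))]
    for x in range(w - 1, 0, -1):
        y = path[-1]
        lo, hi = max(0, y - 1), min(h, y + 2)
        path.append(lo + int(np.argmin(acc[lo:hi, x - 1])))
    return path[::-1]
```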
Abstract:
Sparse optical flow algorithms, such as the Lucas-Kanade approach, provide more robustness to noise than dense optical flow algorithms and are the preferred approach in many scenarios. Sparse optical flow algorithms estimate the displacement for a selected number of pixels in the image. These pixels can be chosen randomly; however, pixels in regions with more variance between neighbours will produce more reliable displacement estimates, so the selected pixel locations should be chosen wisely. In this study, the suitability of Harris corners, Shi-Tomasi's "Good Features to Track", SIFT and SURF interest point extractors, Canny edges, and random pixel selection for the purpose of frame-by-frame tracking using a pyramidal Lucas-Kanade algorithm is investigated. The evaluation considers the important factors of processing time, feature count, and feature trackability in indoor and outdoor scenarios using ground vehicles and unmanned aerial vehicles, and for the purpose of visual odometry estimation.
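One of the compared pipelines, Shi-Tomasi corners fed to pyramidal Lucas-Kanade, as a runnable sketch (parameter values are illustrative):

```python
import cv2
import numpy as np

def track_features(prev_gray, gray, max_corners=500):
    """Shi-Tomasi feature selection followed by pyramidal Lucas-Kanade tracking."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                              winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return pts[ok].reshape(-1, 2), nxt[ok].reshape(-1, 2)
```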
Abstract:
Radio frequency (R.F.) glow discharge polyterpenol thin films were prepared on silicon wafers and irradiated with I10+ ions to fluences of 1 × 1010 and 1 × 1012 ions/cm2. Post-irradiation characterisation of these films indicated the development of well-defined nano-scale ion entry tracks, highlighting prospective applications for ion-irradiated polyterpenol thin films in a variety of membrane and nanotube-fabrication functions. Optical characterisation showed the films to be optically transparent within the visible spectrum and revealed an ability to selectively control the thin film refractive index as a function of fluence. This indicates that ion irradiation processing may be employed to produce plasma-polymer waveguides to accommodate a variety of wavelengths. XRR probing of the substrate-thin film interface revealed interfacial roughness values comparable to those obtained for the uncoated substrate's surface (i.e., both on the order of 5 Å), indicating minimal substrate etching during the plasma deposition process.