11 results for uneven lighting image correction
at QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
The use of image processing techniques to assess the performance of airport landing lighting from images collected by an aircraft-mounted camera is documented. In order to assess the performance of the lighting, it is necessary to uniquely identify each luminaire within an image, track the luminaires through the entire sequence, and store the relevant information for each luminaire, that is, the total number of pixels that each luminaire covers and the total grey level of these pixels. This pixel grey level can then be used for performance assessment. The authors propose a robust model-based (MB) feature-matching technique by which the performance is assessed. The development of this matching technique is the key to the automated performance assessment of airport lighting. The MB matching technique utilises projective geometry in addition to an accurate template of the 3D model of a landing-lighting system. The template is projected onto the image data and an optimum match is found using nonlinear least-squares optimisation. The MB matching software is compared with standard feature extraction and tracking techniques known within the community, namely the Kanade–Lucas–Tomasi (KLT) and scale-invariant feature transform (SIFT) techniques. The new MB matching technique compares favourably with the SIFT and KLT feature-tracking alternatives. As such, it provides a solid foundation for the central aim of this research, which is to automatically assess the performance of airport lighting.
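A minimal sketch of the model-based matching idea described above, not the authors' implementation: a 3D template of the lighting pattern is projected through an assumed pinhole camera, and the six pose parameters are refined by nonlinear least squares until the projection lines up with the detected luminaires. The focal length, principal point, and the assumption that detections are already in template order are all illustrative.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(pose, template_3d, focal=1000.0, cx=320.0, cy=240.0):
    # pose = [rx, ry, rz, tx, ty, tz]: rotation vector and translation.
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    cam = template_3d @ R.T + pose[3:]            # world -> camera frame
    uv = focal * cam[:, :2] / cam[:, 2:3]         # perspective division
    return uv + np.array([cx, cy])

def residuals(pose, template_3d, detected_2d):
    # Reprojection error; assumes detections are in template order.
    return (project(pose, template_3d) - detected_2d).ravel()

def fit_pose(template_3d, detected_2d, pose0):
    # Levenberg-Marquardt refinement of the camera pose.
    return least_squares(residuals, pose0, args=(template_3d, detected_2d),
                         method="lm").x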
Abstract:
The increasing demand for fast air transportation around the clock has increased the number of night flights in civil aviation over the past few decades. In night aviation, to land an aircraft, a pilot needs to be able to identify an airport. The approach lighting system (ALS) at an airport provides identification and guidance to pilots from a distance. An ALS consists of more than 100 luminaires installed in a defined pattern following strict guidelines set by the International Civil Aviation Organization (ICAO). ICAO also has strict regulations for maintaining the performance level of the luminaires. However, once the luminaires are installed, to date there is no automated technique by which to monitor their performance. We suggest using images of the lighting pattern captured by a camera placed inside an aircraft. Based on the information contained within these images, the performance of the luminaires has to be evaluated, which requires identification of over 100 luminaires within the pattern of the ALS image. This research proposes analysis of the pattern using morphology filters that use a variable-length structuring element (VLSE). The dimension of the VLSE changes continuously within an image and varies between images. A novel technique for the automatic determination of the VLSE is proposed; it allows successful identification of the luminaires from the image data, as verified using simulated and real data.
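As a rough illustration of a variable-length structuring element, the sketch below (not the paper's automatic VLSE determination) applies a horizontal white top-hat row by row, with the structuring-element length grown linearly down the image to mimic perspective; the linear growth rule and the lengths are assumptions.

import numpy as np
from scipy.ndimage import grey_opening

def variable_length_tophat(img, min_len=3, max_len=31):
    # White top-hat with a 1-D horizontal SE whose length varies per row.
    h = img.shape[0]
    out = np.zeros_like(img)
    for row in range(h):
        L = int(round(min_len + (max_len - min_len) * row / max(h - 1, 1)))
        L |= 1                                     # keep the SE length odd
        opened = grey_opening(img[row], size=L)    # suppress features wider than L
        out[row] = img[row] - opened               # keep blobs narrower than L
    return out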
Abstract:
This article provides an overview of a novel prototype device that can be used to aid airports in monitoring their landing lighting. Known as Aerodrome Ground Lighting (AGL), the device comprises a camera that is capable of capturing images of landing lighting as aircraft approach the airport. AGL is designed to automatically examine landing lighting to assess whether it is operating to the uniform brightness standards (i.e., luminous intensity of luminaires) that aviation governing bodies require. A detailed discussion of the hardware and software requirements of AGL -- currently under joint development by researchers at Queen's University Belfast and Cobham Flight Inspection Limited -- is presented. Results from the research indicate that assessing the performance of both ground-based runway luminaires and elevated approach luminaires is possible, though further testing is needed for full validation.
Abstract:
The development of an automated system for the quality assessment of aerodrome ground lighting (AGL), in accordance with associated standards and recommendations, is presented. The system is composed of an image sensor, placed inside the cockpit of an aircraft to record images of the AGL during a normal descent to an aerodrome. A model-based methodology is used to ascertain the optimum match between a template of the AGL and the actual image data, in order to calculate the position and orientation of the camera at the instant the image was acquired. The camera position and orientation data are used, along with the pixel grey level of each imaged luminaire, to estimate a value for the luminous intensity of a given luminaire. This can then be compared with the expected brightness for that luminaire to ensure it is operating to the required standards. As such, a metric for the quality of the AGL pattern is determined. Experiments on real image data are presented to demonstrate the application and effectiveness of the system.
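A minimal sketch of the intensity-estimation step under a plain inverse-square assumption, not the paper's full photometric calibration: the summed grey level of an imaged luminaire is scaled by the square of its distance from the camera (known from the matched pose) and a hypothetical calibration constant calib_k.

import numpy as np

def estimate_intensity(grey_sum, luminaire_xyz, camera_xyz, calib_k=1.0):
    # Relative luminous intensity ~ grey-level sum x distance squared.
    d = np.linalg.norm(np.asarray(luminaire_xyz) - np.asarray(camera_xyz))
    return calib_k * grey_sum * d**2

def within_standard(estimate, expected, tolerance=0.3):
    # Flag a luminaire whose estimate strays too far from its expected value.
    return abs(estimate - expected) / expected <= tolerance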
Abstract:
Utilising cameras as a means to survey the surrounding environment is becoming increasingly popular in a number of different research areas and applications. Central to using camera sensors as input to a vision system is the need to be able to manipulate and process the information captured in these images. One such application is the use of cameras to monitor the quality of airport landing lighting at aerodromes, where a camera is placed inside an aircraft and used to record images of the lighting pattern during the landing phase of a flight. The images are processed to determine a performance metric. This requires the development of custom software for the localisation and identification of luminaires within the image data. However, because of the necessity to keep airport operations functioning as efficiently as possible, it is difficult to collect enough image data to develop, test and validate any developed software. In this paper, we present a technique to model a virtual landing lighting pattern. A mathematical model is postulated which represents the glide path of the aircraft, including random deviations from the expected path. A morphological method has been developed to localise and track the luminaires under different operating conditions. © 2011 IEEE.
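A minimal sketch of one way to model a glide path with random deviations, under assumed parameters (a nominal 3-degree glide slope with Gaussian lateral and vertical perturbations); the paper's own mathematical model is not reproduced here.

import numpy as np

def glide_path(ranges_m, slope_deg=3.0, lat_sd=2.0, vert_sd=1.0, seed=0):
    # Camera positions along a nominal glide slope with random deviations.
    rng = np.random.default_rng(seed)
    x = np.asarray(ranges_m, dtype=float)              # distance to threshold
    y = rng.normal(0.0, lat_sd, x.shape)               # lateral deviation
    z = x * np.tan(np.radians(slope_deg)) \
        + rng.normal(0.0, vert_sd, x.shape)            # height plus deviation
    return np.column_stack([x, y, z])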
Abstract:
In this paper we demonstrate a simple and novel illumination model that can be used for illumination-invariant facial recognition. This model requires no prior knowledge of the illumination conditions and can be used when there is only a single training image per person. The proposed illumination model separates the effects of illumination over a small area of the face into two components: an additive component modelling the mean illumination, and a multiplicative component modelling the variance within the facial area. Illumination-invariant facial recognition is performed in a piecewise manner by splitting the face image into blocks, then normalizing the illumination within each block based on the new lighting model. The assumptions underlying this novel lighting model have been verified on the YaleB face database. We show that magnitude 2D Fourier features can be used as robust facial descriptors within the new lighting model. Using only a single training image per person, our new method achieves high (in most cases 100%) identification accuracy on the YaleB, extended YaleB and CMU-PIE face databases.
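A minimal sketch of the block-wise normalisation that the additive/multiplicative model implies: subtracting each block's mean removes the additive term, dividing by its standard deviation removes the multiplicative term, and the 2D Fourier magnitude of the result serves as the descriptor. The block size and the epsilon guard are assumptions.

import numpy as np

def block_fourier_features(face, block=16, eps=1e-6):
    h, w = face.shape
    feats = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = face[i:i + block, j:j + block].astype(float)
            b = (b - b.mean()) / (b.std() + eps)  # strip additive + multiplicative lighting
            feats.append(np.abs(np.fft.fft2(b)))  # magnitude spectrum descriptor
    return np.array(feats)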
Abstract:
This paper proposes a two-level 3D human pose tracking method for a specific action captured by several cameras. The generation of pose estimates relies on fitting a 3D articulated model to a visual hull generated from the input images. First, an initial pose estimate is constrained by a low-dimensional manifold learnt by Temporal Laplacian Eigenmaps. Then, an improved global pose is calculated by refining individual limb poses. The validation of our method uses a public standard dataset and demonstrates its accuracy and computational efficiency. © 2011 IEEE.
Abstract:
Background and purpose: We are developing a technique for highly focused vocal cord irradiation in early glottic carcinoma to optimally treat a target volume confined to a single cord. This technique, in contrast with conventional methods, aims at sparing the healthy vocal cord. As such a technique requires sub-mm daily targeting accuracy to be effective, we investigate the accuracy achievable with on-line kV cone beam CT (CBCT) corrections. Materials and methods: CBCT scans were obtained in 10 early glottic cancer patients in each treatment fraction. The grey value registration available in X-ray volume imaging (XVI) software (Elekta, Synergy) was applied to a volume of interest encompassing the thyroid cartilage. After application of the thus-derived corrections, residual displacements with respect to the planning CT scan were measured at clearly identifiable relevant landmarks. The intra- and inter-observer variations were also measured. Results: While before correction the systematic displacements of the vocal cords were as large as 2.4 ± 3.3 mm (cranial-caudal population mean ± SD Σ), daily CBCT registration and correction reduced these values to less than 0.2 ± 0.5 mm in all directions. Random positioning errors (SD σ) were reduced to less than 1 mm. Correcting only for translations and not for rotations did not appreciably affect this accuracy. The residual random displacements partly stem from intra-observer variations (SD = 0.2-0.6 mm). Conclusion: The use of CBCT for daily image guidance in combination with standard mask fixation reduced systematic and random set-up errors of the vocal cords to <1 mm prior to the delivery of each fraction dose. This facilitates the high targeting precision required for single vocal cord irradiation. © 2009 Elsevier Ireland Ltd. All rights reserved.
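For reference, a minimal sketch of how the quoted population statistics (mean ± SD Σ for systematic errors, SD σ for random errors) are conventionally computed from per-fraction displacement measurements; the input layout is an assumption, not taken from the paper.

import numpy as np

def setup_errors(displacements):
    # displacements: (patients, fractions) array for one axis, in mm.
    patient_means = displacements.mean(axis=1)
    patient_sds = displacements.std(axis=1, ddof=1)
    M = patient_means.mean()                    # group mean systematic error
    Sigma = patient_means.std(ddof=1)           # systematic SD over patients
    sigma = np.sqrt((patient_sds ** 2).mean())  # random SD (RMS of patient SDs)
    return M, Sigma, sigma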
Abstract:
In this paper, we introduce a novel approach to face recognition which simultaneously tackles three combined challenges: 1) uneven illumination; 2) partial occlusion; and 3) limited training data. The new approach performs lighting normalization, occlusion de-emphasis and finally face recognition, based on finding the largest matching area (LMA) at each point on the face, as opposed to traditional fixed-size local-area-based approaches. Robustness is achieved with novel approaches for feature extraction, LMA-based face image comparison and unseen-data modeling. On the extended YaleB and AR face databases for face identification, our method, using only a single training image per person, outperforms other methods using a single training image and matches or exceeds methods which require multiple training images. On the Labeled Faces in the Wild face verification database, our method outperforms comparable unsupervised methods. We also show that the new method performs competitively even when the training images are corrupted.
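A deliberately simplified sketch of the LMA idea, not the paper's algorithm: at a given point, centred patches of increasing size are compared by normalised correlation, and the largest size still above a threshold is recorded; occluded points then yield small matching areas that can be de-emphasised. The patch sizes and threshold are illustrative assumptions.

import numpy as np

def ncc(a, b, eps=1e-6):
    # Normalised cross-correlation between two equally sized patches.
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return float((a * b).mean())

def largest_matching_area(img1, img2, point, sizes=(8, 16, 24, 32), thresh=0.6):
    y, x = point
    best = 0
    for s in sizes:                                 # grow the compared area
        half = s // 2
        p1 = img1[y - half:y + half, x - half:x + half]
        p2 = img2[y - half:y + half, x - half:x + half]
        if p1.shape != (2 * half, 2 * half) or p2.shape != p1.shape:
            break                                   # ran off the image border
        if ncc(p1, p2) < thresh:
            break                                   # similarity lost: stop growing
        best = s
    return best                                     # 0 suggests occlusion here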
Abstract:
Photometry of moving sources typically suffers from a reduced signal-to-noise ratio (S/N) or flux measurements biased to incorrect low values through the use of circular apertures. To address this issue, we present the software package TRIPPy: TRailed Image Photometry in Python. TRIPPy introduces the pill aperture, which is the natural extension of the circular aperture appropriate for linearly trailed sources. The pill shape is a rectangle with two semicircular end-caps, described by three parameters: the trail length, the trail angle, and the radius. The TRIPPy software package also includes a new technique to generate accurate model point-spread functions (PSFs) and trailed PSFs (TSFs) from stationary background sources in sidereally tracked images. The TSF is merely the convolution of the model PSF, which consists of a Moffat profile and a super-sampled lookup table. From the TSF, accurate pill aperture corrections can be estimated as a function of pill radius, with an accuracy of 10 mmag for highly trailed sources. Analogous to the use of small circular apertures and associated aperture corrections, small-radius pill apertures can be used to preserve the S/N of low-flux sources, with an appropriate aperture correction applied to provide an accurate, unbiased flux measurement at all S/Ns.
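A minimal sketch of a pill-aperture mask built directly from the geometric description above (a pixel is inside the pill if it lies within the pill radius of the trail's central line segment); this is an illustration, not TRIPPy's implementation.

import numpy as np

def pill_mask(shape, centre, length, angle_rad, radius):
    # centre is (row, col); angle is measured from the +x (column) axis.
    yy, xx = np.mgrid[:shape[0], :shape[1]].astype(float)
    dx, dy = np.cos(angle_rad), np.sin(angle_rad)
    px, py = xx - centre[1], yy - centre[0]
    # Project each pixel onto the trail segment, clamped to its endpoints.
    t = np.clip(px * dx + py * dy, -length / 2, length / 2)
    dist = np.hypot(px - t * dx, py - t * dy)   # distance to the segment
    return dist <= radius                       # boolean aperture mask

# e.g. flux = img[pill_mask(img.shape, (120.5, 200.3), 18.0, 0.4, 4.0)].sum()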