962 results for Pixel


Relevance: 10.00%

Publisher:

Abstract:

Several algorithms for optical flow are studied theoretically and experimentally. Differential and matching methods are examined; these two methods have differing domains of application: differential methods are best when displacements in the image are small (<2 pixels), while matching methods work well for moderate displacements but do not handle sub-pixel motions. Both types of optical flow algorithm can use either local or global constraints, such as spatial smoothness. Local matching and differential techniques and global differential techniques are examined. Most algorithms for optical flow utilize weak assumptions on the local variation of the flow and on the variation of image brightness. Strengthening these assumptions improves the flow computation. The computational consequence of this is a need for larger spatial and temporal support. Global differential approaches can be extended to local (patchwise) differential methods and to local differential methods using higher derivatives. Using larger support is valid when constraints on the local shape of the flow are satisfied. We show that a simple constraint on the local shape of the optical flow, that there is slow spatial variation in the image plane, is often satisfied. We show how local differential methods imply the constraints for related methods using higher derivatives. Experiments show the behavior of these optical flow methods on velocity fields which do not obey the assumptions. Implementation of these methods highlights the importance of numerical differentiation. Numerical approximation of derivatives requires care in two respects: first, it is important that the temporal and spatial derivatives be matched, because of the significant scale differences in space and time, and, second, the derivative estimates improve with larger support.
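
For orientation, the differential methods discussed above rest on the standard brightness-constancy constraint; a local (patchwise) least-squares formulation, written in our notation rather than necessarily the thesis's, reads:

```latex
% Brightness constancy: the brightness I(x, y, t) of a scene point is conserved,
% so its total time derivative vanishes along the motion (u, v).
I_x u + I_y v + I_t = 0

% A local (patchwise) differential estimate of the flow over a support window W:
(\hat{u}, \hat{v}) = \arg\min_{u, v} \sum_{(x, y) \in W} \left( I_x u + I_y v + I_t \right)^2
```

Larger windows W correspond to the larger spatial support discussed above and are justified only when the flow varies slowly across W.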

Relevance: 10.00%

Publisher:

Abstract:

We introduce a new method to describe, in a single image, changes in shape over time. We acquire both range and image information with a stationary stereo camera. From the pictures taken, we display a composite image consisting of the image data from the surface closest to the camera at every pixel. This reveals the 3-d relationships over time by easy-to-interpret occlusion relationships in the composite image. We call the composite a shape-time photograph. Small errors in depth measurements cause artifacts in the shape-time images. We correct most of these using a Markov network to estimate the most probable front surface, taking into account the depth measurements, their uncertainties, and layer continuity assumptions.
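
A minimal sketch of the compositing step described here, assuming per-frame colour images and depth maps are already available and ignoring the Markov-network cleanup (array shapes and names are our own):

```python
import numpy as np

def shape_time_composite(colors, depths):
    """Compose a shape-time image: at every pixel, keep the colour from the
    frame whose measured surface is closest to the camera.

    colors: (T, H, W, 3) array of image frames
    depths: (T, H, W) array of per-frame depth maps (smaller = closer)
    """
    nearest = np.argmin(depths, axis=0)        # (H, W) index of the closest frame
    rows, cols = np.indices(nearest.shape)
    return colors[nearest, rows, cols]         # (H, W, 3) composite image
```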

Relevance: 10.00%

Publisher:

Abstract:

For applications involving the control of moving vehicles, the recovery of relative motion between a camera and its environment is of high utility. This thesis describes the design and testing of a real-time analog VLSI chip which estimates the focus of expansion (FOE) from measured time-varying images. Our approach assumes a camera moving through a fixed world with translational velocity; the FOE is the projection of the translation vector onto the image plane. This location is the point towards which the camera is moving and from which all other image points appear to expand outward. By way of the camera imaging parameters, the location of the FOE gives the direction of 3-D translation. The algorithm we use for estimating the FOE minimizes the sum of squares of the differences at every pixel between the observed time variation of brightness and the predicted variation given the assumed position of the FOE. This minimization is not straightforward, because the relationship between the brightness derivatives depends on the unknown distance to the surface being imaged. However, image points where brightness is instantaneously constant play a critical role. Ideally, the FOE would be at the intersection of the tangents to the iso-brightness contours at these "stationary" points. In practice, brightness derivatives are hard to estimate accurately given that the image is quite noisy. Reliable results can nevertheless be obtained if the image contains many stationary points and the FOE is taken as the point that minimizes the sum of squares of the perpendicular distances from the tangents at the stationary points. The FOE chip calculates the gradient of this least-squares minimization sum, and the estimation is performed by closing a feedback loop around it. The chip has been implemented using an embedded CCD imager for image acquisition and a row-parallel processing scheme. A 64 x 64 version was fabricated in a 2 µm CCD/BiCMOS process through MOSIS with a design goal of 200 mW of on-chip power, a top frame rate of 1000 frames/second, and a basic accuracy of 5%. A complete experimental system which estimates the FOE in real time using real motion and image scenes is demonstrated.
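
One way to write the stationary-point criterion sketched above (notation ours; the abstract does not give the exact formulation): at a stationary point the iso-brightness tangent is perpendicular to the brightness gradient, so the FOE estimate minimizes the summed squared perpendicular distances to those tangent lines:

```latex
% Perpendicular distance from a candidate FOE x to the iso-brightness tangent
% line through stationary point p_i (the tangent is orthogonal to \nabla I(p_i)):
d_i(\mathbf{x}) = \frac{\left| \nabla I(\mathbf{p}_i) \cdot (\mathbf{x} - \mathbf{p}_i) \right|}{\lVert \nabla I(\mathbf{p}_i) \rVert}

% Least-squares FOE estimate over all stationary points:
\hat{\mathbf{x}}_{\mathrm{FOE}} = \arg\min_{\mathbf{x}} \sum_i d_i(\mathbf{x})^2
```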

Relevance: 10.00%

Publisher:

Abstract:

This document presents the procedures for installing and using the NAVPRO 3.0 system, developed for the automatic processing and product generation of images from the Advanced Very High Resolution Radiometer (AVHRR) sensor on board the National Oceanic and Atmospheric Administration (NOAA) satellites. The NAVPRO system was created by Embrapa Informática Agropecuária in partnership with the Universidade Estadual de Campinas (Unicamp), which received the NAV (NAVigation) software package developed by the Colorado Center for Astrodynamics Research (CCAR) of the University of Colorado, Boulder, USA. The system's distinguishing feature is its automatic and accurate georeferencing method, capable of generating images with maximum displacements of 1 pixel, a value accepted in applications involving low spatial resolution data.

Relevance: 10.00%

Publisher:

Abstract:

This work presents methodologies for estimating soil loss in the Guapi-Macacu River watershed, Rio de Janeiro state (RJ). Annual soil loss was estimated with the InVest (Integrated Valuation of Environmental Services and Tradeoffs) tool, through its sediment retention module. This module computes the mean annual soil loss of each parcel of land and determines how much soil can reach a given point of interest. To identify the potential soil loss, the model applies the Universal Soil Loss Equation (USLE) at the pixel scale, integrating information on relief, precipitation, land use patterns and soil properties. Its results are given in tonnes per sub-basin. The results obtained in this work show that, although there are limitations to the use of the Universal Soil Loss Equation, the model made it possible to map classes of soil loss and to indicate areas more or less vulnerable to erosion, considering the available data and their scales. The main advantage of using InVest to compute the USLE is the possibility of integrating the required data in a single environment, reducing the chance of errors in data conversion. However, the main limitation to its application lies in the difficulty of obtaining the input data required by the model.
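
For reference, the Universal Soil Loss Equation that the model evaluates at each pixel is the standard product of factors:

```latex
A = R \cdot K \cdot LS \cdot C \cdot P
% A: mean annual soil loss; R: rainfall erosivity; K: soil erodibility;
% LS: slope length and steepness factor; C: cover-management factor;
% P: support practice factor.
```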

Relevance: 10.00%

Publisher:

Abstract:

C.R. Bull, N.J.B. McFarlane, R. Zwiggelaar, C.J. Allen and T.T. Mottram, 'Inspection of teats by colour image analysis for automatic milking systems', Computers and Electronics in Agriculture 15 (1), 15-26 (1996)

Relevance: 10.00%

Publisher:

Abstract:

We propose a novel image registration framework which uses classifiers trained from examples of aligned images to achieve registration. Our approach is designed to register images of medical data where the physical condition of the patient has changed significantly and image intensities are drastically different. We use two boosted classifiers for each degree of freedom of the image transformation. These two classifiers can both identify when two images are correctly aligned and provide an efficient means of moving towards correct registration for misaligned images. The classifiers capture local alignment information using multi-pixel comparisons and can therefore achieve correct alignments where approaches such as correlation and mutual information, which rely only on pixel-to-pixel comparisons, fail. We test our approach using images from CT scans acquired in a study of acute respiratory distress syndrome. We show a significant increase in registration accuracy in comparison to an approach using mutual information.
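
A minimal sketch of how two classifiers per degree of freedom could drive a greedy registration search, written against our own illustrative interface (the callables stand in for the boosted classifiers; nothing here reproduces the paper's actual implementation):

```python
from typing import Callable, Dict

def classifier_guided_registration(
    params: Dict[str, float],
    is_aligned: Dict[str, Callable[[Dict[str, float]], bool]],
    move_direction: Dict[str, Callable[[Dict[str, float]], int]],
    step: float = 1.0,
    max_iter: int = 50,
) -> Dict[str, float]:
    """Greedy search over transformation parameters, one degree of freedom (DOF)
    at a time. For each DOF d, is_aligned[d] plays the role of the classifier
    that recognises correct alignment, and move_direction[d] (returning +1 or -1)
    plays the role of the classifier that indicates which way to move when
    misaligned. These callable interfaces are illustrative assumptions.
    """
    params = dict(params)
    for _ in range(max_iter):
        moved = False
        for d in params:
            if is_aligned[d](params):
                continue                      # this DOF already looks aligned
            params[d] += step * move_direction[d](params)
            moved = True
        if not moved:                         # every classifier reports alignment
            break
    return params
```

In use, the per-DOF callables would be evaluated on features extracted from the currently transformed image pair.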

Relevance: 10.00%

Publisher:

Abstract:

We propose to investigate a model-based technique for encoding non-rigid object classes in terms of object prototypes. Objects from the same class can be parameterized by identifying shape and appearance invariants of the class to devise low-level representations. The approach presented here creates a flexible model for an object class from a set of prototypes. This model is then used to estimate the parameters of a low-level representation of novel objects as combinations of the prototype parameters. Variations in object shape are modeled as non-rigid deformations. Appearance variations are modeled as intensity variations. In the training phase, the system is presented with several example prototype images. These prototype images are registered to a reference image by a finite element-based technique called Active Blobs. The deformations of the finite element model required to register a prototype image with the reference image provide the shape description, or shape vector, for the prototype. The shape vector for each prototype is then used to warp the prototype image onto the reference image and obtain the corresponding texture vector. The prototype texture vectors, having been warped onto the same reference image, have a pixel-by-pixel correspondence with each other and hence are "shape normalized". Given a sufficient number of prototypes that exhibit appropriate in-class variations, the shape and texture vectors define a linear prototype subspace that spans the object class. Each prototype is a vector in this subspace. The matching phase involves the estimation of a set of combination parameters for synthesis of the novel object by combining the prototype shape and texture vectors. The strength of this technique lies in the combined estimation of both shape and appearance parameters. This contrasts with previous approaches, in which shape and appearance parameters were estimated separately.
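
In the notation suggested by the description above (ours, not necessarily the report's), the matching phase solves for combination coefficients such that the novel object's shape and shape-normalized texture are approximated in the prototype subspace:

```latex
\mathbf{s}_{\mathrm{novel}} \approx \sum_{i=1}^{N} \alpha_i \, \mathbf{s}_i,
\qquad
\mathbf{t}_{\mathrm{novel}} \approx \sum_{i=1}^{N} \beta_i \, \mathbf{t}_i
% s_i: prototype shape vectors (Active Blob deformations);
% t_i: shape-normalized prototype texture vectors;
% alpha_i, beta_i: combination parameters estimated during matching.
```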

Relevance: 10.00%

Publisher:

Abstract:

CONFIGR (CONtour FIgure GRound) is a computational model based on principles of biological vision that completes sparse and noisy image figures. Within an integrated vision/recognition system, CONFIGR posits an initial recognition stage which identifies figure pixels from spatially local input information. The resulting, and typically incomplete, figure is fed back to the “early vision” stage for long-range completion via filling-in. The reconstructed image is then re-presented to the recognition system for global functions such as object recognition. In the CONFIGR algorithm, the smallest independent image unit is the visible pixel, whose size defines a computational spatial scale. Once pixel size is fixed, the entire algorithm is fully determined, with no additional parameter choices. Multi-scale simulations illustrate the vision/recognition system. Open-source CONFIGR code is available online, but all examples can be derived analytically, and the design principles applied at each step are transparent. The model balances filling-in as figure against complementary filling-in as ground, which blocks spurious figure completions. Lobe computations occur on a subpixel spatial scale. Originally designed to fill-in missing contours in an incomplete image such as a dashed line, the same CONFIGR system connects and segments sparse dots, and unifies occluded objects from pieces locally identified as figure in the initial recognition stage. The model self-scales its completion distances, filling-in across gaps of any length, where unimpeded, while limiting connections among dense image-figure pixel groups that already have intrinsic form. Long-range image completion promises to play an important role in adaptive processors that reconstruct images from highly compressed video and still camera images.

Relevance: 10.00%

Publisher:

Abstract:

The Grey-White Decision Network is introduced as an application of an on-center, off-surround recurrent cooperative/competitive network for segmentation of magnetic resonance imaging (MRI) brain images. The three layer dynamical system relaxes into a solution where each pixel is labeled as either grey matter, white matter, or "other" matter by considering raw input intensity, edge information, and neighbor interactions. This network is presented as an example of applying a recurrent cooperative/competitive field (RCCF) to a problem with multiple conflicting constraints. Simulations of the network and its phase plane analysis are presented.
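
For orientation, a generic shunting on-center off-surround recurrent competitive field has the form below (a standard Grossberg-style formulation, not necessarily the exact equations of the three-layer Grey-White Decision Network):

```latex
\frac{dx_i}{dt} = -A x_i + (B - x_i)\big[ I_i + f(x_i) \big] - (x_i + D) \sum_{k \neq i} f(x_k)
% x_i: activity of node i; I_i: bottom-up input (here, intensity and edge evidence);
% f: sigmoid signal function; A: passive decay;
% B and -D: excitatory and inhibitory saturation bounds.
```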

Relevance: 10.00%

Publisher:

Abstract:

The aim of this study was to develop a methodology, based on satellite remote sensing, to estimate the vegetation Start of Season (SOS) across the whole island of Ireland on an annual basis. This growing body of research is known as Land Surface Phenology (LSP) monitoring. The SOS was estimated for each year from a 7-year time series of 10-day composited, 1.2 km reduced resolution MERIS Global Vegetation Index (MGVI) data from 2003 to 2009, using the time series analysis software TIMESAT. The selection of a 10-day composite period was guided by in-situ observations of leaf unfolding and cloud cover at representative point locations on the island. The MGVI time series was smoothed and the SOS metric extracted at a point corresponding to 20% of the seasonal MGVI amplitude. The SOS metric was extracted on a per-pixel basis and gridded for national scale coverage. There were consistent spatial patterns in the SOS grids which were replicated on an annual basis and were qualitatively linked to variation in landcover. Analysis revealed that three statistically separable groups of CORINE Land Cover (CLC) classes could be derived from differences in the SOS, namely agricultural and forest land cover types, peat bogs, and natural and semi-natural vegetation types. These groups demonstrated that managed vegetation, e.g. pastures, has a significantly earlier SOS than unmanaged vegetation, e.g. natural grasslands. There was also interannual spatio-temporal variability in the SOS. Such variability was highlighted in a series of anomaly grids showing variation from the 7-year mean SOS. An initial climate analysis indicated that an anomalously cold winter and spring in 2005/2006, linked to a negative North Atlantic Oscillation index value, delayed the 2006 SOS countrywide, while in other years the SOS anomalies showed more complex variation. A correlation study using air temperature as a climate variable revealed the spatial complexity of the air temperature-SOS relationship across the Republic of Ireland, as the timing of maximum correlation varied from November to April depending on location. The SOS was found to occur earlier with warmer winters in the Southeast, while it was later with warmer winters in the Northwest. The inverse pattern emerged in the spatial patterns of the spring correlates. This contrasting pattern would appear to be linked to vegetation management, as arable cropping is typically practiced in the southeast while there is mixed agriculture and mostly pastures to the west. Therefore, land use as well as air temperature appears to be an important determinant of national scale patterns in the SOS. The TIMESAT tool formed a crucial component of the estimation of SOS across the country in all seven years, as it minimised the negative impact of noise and data dropouts in the MGVI time series by applying a smoothing algorithm. The extracted SOS metric was sensitive to temporal and spatial variation in land surface vegetation seasonality, while the spatial patterns in the gridded SOS estimates aligned with those in landcover type. The methodology can be extended to a longer time series of FAPAR as MERIS will be replaced by the ESA Sentinel mission in 2013, while the availability of full resolution (300 m) MERIS FAPAR and equivalent sensor products holds the possibility of monitoring finer scale seasonality variation.
This study has shown the utility of the SOS metric as an indicator of spatiotemporal variability in vegetation phenology, as well as a correlate of other environmental variables such as air temperature. However, the satellite-based method is not seen as a replacement of ground-based observations, but rather as a complementary approach to studying vegetation phenology at the national scale. In future, the method can be extended to extract other metrics of the seasonal cycle in order to gain a more comprehensive view of seasonal vegetation development.
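
A minimal sketch of the 20%-of-amplitude extraction applied per pixel to a smoothed seasonal MGVI trajectory (the TIMESAT smoothing itself is not shown; function and variable names are our own):

```python
import numpy as np

def start_of_season(mgvi, threshold=0.20):
    """Return the index of the composite period at which a smoothed MGVI
    time series first rises above `threshold` of its seasonal amplitude.

    mgvi: 1-D array for a single pixel and a single season (already smoothed)
    """
    base, peak = float(np.min(mgvi)), float(np.max(mgvi))
    level = base + threshold * (peak - base)      # 20% of seasonal amplitude
    above = np.nonzero(mgvi >= level)[0]
    return int(above[0]) if above.size else None  # None if no green-up detected
```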

Relevance: 10.00%

Publisher:

Abstract:

Colour is everywhere in our daily lives and impacts things like our mood, yet we rarely take notice of it. One method of capturing and analysing the predominant colours that we encounter is through visual lifelogging devices such as the SenseCam. However, an issue with these devices is the privacy concern of capturing image-level detail. Therefore, in this work we demonstrate a hardware prototype wearable camera that captures only one pixel, the dominant colour prevalent in front of the user, thus circumventing the privacy concerns raised in relation to lifelogging. To test whether capturing only the dominant colour would be sufficient, we report on a simulation carried out on 1.2 million SenseCam images captured by a group of 20 individuals. We compare the dominant colours that different groups of people are exposed to and show that useful inferences can be made from this data. We believe our prototype may be valuable in future experiments to capture colour correlates associated with an individual's mood.
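
A minimal sketch of one plausible way to reduce a frame to a single dominant-colour "pixel" for such a simulation (coarse quantisation followed by a majority vote; the prototype hardware and the reported study may define dominant colour differently):

```python
import numpy as np

def dominant_colour(image, levels=8):
    """Reduce an (H, W, 3) uint8 RGB image to a single representative colour:
    the most frequent colour after coarse per-channel quantisation.
    """
    bin_width = 256 // levels
    quantised = (image // bin_width).reshape(-1, 3)          # coarse colour bins
    colours, counts = np.unique(quantised, axis=0, return_counts=True)
    centre = colours[np.argmax(counts)] * bin_width + bin_width // 2
    return centre.astype(np.uint8)                           # RGB of dominant bin
```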

Relevance: 10.00%

Publisher:

Abstract:

This dissertation proposes and demonstrates novel smart modules to solve challenging problems in the areas of imaging, communications, and displays. The smartness of the modules is due to their ability to adapt to changes in operating environment and application using programmable devices, specifically electronically variable focus lenses (ECVFLs) and digital micromirror devices (DMDs). The proposed modules include imagers for laser characterization and general purpose imaging which smartly adapt to changes in irradiance, optical wireless communication systems which can adapt to the number of users and to changes in link length, and a smart laser projection display that smartly adjusts the pixel size to achieve a high resolution projected image at each screen distance. The first part of the dissertation starts with the proposal of using an ECVFL to create a novel multimode laser beam characterizer for coherent light. This laser beam characterizer uses the ECVFL and a DMD so that no mechanical motion of optical components along the optical axis is required. This reduces the mechanical motion overhead that traditional laser beam characterizers have, making this laser beam characterizer more accurate and reliable. The smart laser beam characterizer is able to account for irradiance fluctuations in the source. Using image processing, the important parameters that describe multimode laser beam propagation have been successfully extracted for a multimode laser test source. Specifically, the laser beam analysis parameters measured are the M2 parameter, the minimum beam waist w0, and the Rayleigh range zR. Next, a general purpose incoherent light imager that has a high dynamic range (>100 dB) and automatically adjusts for variations in irradiance in the scene is proposed. Then a data efficient image sensor is demonstrated. The idea of this smart image sensor is to reduce the bandwidth needed for transmitting data from the sensor by sending only the information which is required for the specific application while discarding the unnecessary data. In this case, the imager demonstrated sends only information regarding the boundaries of objects in the image so that, after transmission to a remote image viewing location, these boundaries can be used to map out objects in the original image. The second part of the dissertation proposes and demonstrates smart optical communications systems using ECVFLs. This starts with the proposal and demonstration of a zero propagation loss optical wireless link using visible light, with experiments covering a 1 to 4 m range. By adjusting the focal length of the ECVFLs for this directed line-of-sight (LOS) link, the laser beam propagation parameters are adjusted such that the maximum amount of transmitted optical power is captured by the receiver for each link length. This power budget saving enables a longer achievable link range, a better SNR/BER, or higher power efficiency, since more received power means the transmitted power can be reduced. Afterwards, a smart dual mode optical wireless link is proposed and demonstrated using a laser and an LED coupled to the ECVFL to provide, for the first time, features of high bandwidth and wide beam coverage. This optical wireless link combines the capabilities of the smart directed LOS link from the previous section with a diffuse optical wireless link, thus achieving high data rates and robustness to blocking.
The proposed smart system can switch from LOS mode to diffuse mode when blocking occurs, or operate in both modes simultaneously to accommodate multiple users and run a high speed link if one of the users requires extra bandwidth. The last part of this section presents the design of fibre optic and free-space optical switches which use ECVFLs to deflect the beams to achieve the switching operation. These switching modules can be used in the proposed optical wireless indoor network. The final section of the thesis presents a novel smart laser scanning display. The ECVFL is used to create the smallest beam spot size possible for the designed system at the distance of the screen. The smart laser scanning display increases the spatial resolution of the display for any given distance. A basic smart display operation has been tested for red light, and a 4X improvement in pixel resolution for the image has been demonstrated.
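
For reference, the beam parameters named above (M2, w0, zR) are usually related through the standard embedded-Gaussian (M2) beam-propagation model; whether the dissertation uses exactly this parameterisation is not stated in the abstract:

```latex
w(z) = w_0 \sqrt{1 + \left( \frac{z - z_0}{z_R} \right)^2},
\qquad
z_R = \frac{\pi w_0^2}{M^2 \lambda}
% w(z): beam radius at distance z; w_0: minimum beam waist located at z_0;
% z_R: Rayleigh range; M^2: beam quality factor; \lambda: wavelength.
```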

Relevance: 10.00%

Publisher:

Abstract:

Very Long Baseline Interferometry (VLBI) polarisation observations of the relativistic jets from Active Galactic Nuclei (AGN) allow the magnetic field environment around the jet to be probed. In particular, multi-wavelength observations of AGN jets allow the creation of Faraday rotation measure maps which can be used to gain insight into the magnetic field component of the jet along the line of sight. Recent polarisation and Faraday rotation measure maps of many AGN show possible evidence for the presence of helical magnetic fields. The detection of such evidence is highly dependent both on the resolution of the images and on the quality of the error analysis and statistics used in the detection. This thesis focuses on the development of new methods for high resolution radio astronomy imaging in both of these areas. An implementation of the Maximum Entropy Method (MEM) suitable for multi-wavelength VLBI polarisation observations is presented, and the advantage in resolution it possesses over the CLEAN algorithm is discussed and demonstrated using Monte Carlo simulations. This new polarisation MEM code has been applied to multi-wavelength imaging of the Active Galactic Nuclei 0716+714, Mrk 501 and 1633+382, in each case providing improved polarisation imaging compared to deconvolution using the standard CLEAN algorithm. The first MEM-based fractional polarisation and Faraday-rotation VLBI images are presented, using these sources as examples. Recent detections of gradients in Faraday rotation measure are presented, including an observation of a reversal in the direction of a gradient further along a jet. Simulated observations confirming the observability of such a phenomenon are conducted, and possible explanations for a reversal in the direction of the Faraday rotation measure gradient are discussed. These results were originally published in Mahmud et al. (2013). Finally, a new error model for the CLEAN algorithm is developed which takes into account correlation between neighbouring pixels. Comparison of error maps calculated using this new model with Monte Carlo maps shows striking similarities when the sources considered are well resolved, indicating that the method correctly reproduces at least some component of the overall uncertainty in the images. The calculation of many useful quantities using this model is demonstrated, and the advantages it offers over traditional single-pixel calculations are illustrated. The limitations of the model as revealed by Monte Carlo simulations are also discussed; unfortunately, the error model does not work well when applied to compact regions of emission.
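
For reference, the Faraday rotation measure maps mentioned above rest on the standard relation between the observed polarisation angle and wavelength:

```latex
\chi(\lambda^2) = \chi_0 + \mathrm{RM}\, \lambda^2,
\qquad
\mathrm{RM} \simeq 0.812 \int n_e \, B_{\parallel} \, dl \ \ \mathrm{rad\,m^{-2}}
% chi: observed polarisation angle; chi_0: intrinsic angle; RM: rotation measure;
% n_e: electron density (cm^{-3}); B_parallel: line-of-sight field (microgauss);
% dl: path length element (parsecs).
```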

Relevance: 10.00%

Publisher:

Abstract:

PURPOSE: The purpose of this work is to improve the noise power spectrum (NPS), and thus the detective quantum efficiency (DQE), of computed radiography (CR) images by correcting for spatial gain variations specific to individual imaging plates. CR devices have not traditionally employed gain-map corrections, unlike flat-panel detectors, because of the multiplicity of plates used with each reader. The lack of gain-map correction has limited the DQE(f) at higher exposures with CR. This work describes a feasible solution to generating plate-specific gain maps. METHODS: Ten high-exposure open field images were taken with an RQA5 spectrum, using a sixth generation CR plate suspended in air without a cassette. Image values were converted to exposure, the plates were registered using fiducial dots on the plate, the ten images were averaged, and the average was then high-pass filtered to remove low frequency contributions from field inhomogeneity. A gain map was then produced by converting all pixel values in the average into fractions with a mean of one. The resultant gain map of the plate was used to normalize subsequent single images to correct for spatial gain fluctuation. To validate performance, the normalized NPS (NNPS) for all images was calculated both with and without the gain-map correction. Variations in the quality of correction due to exposure level, beam voltage/spectrum, CR reader used, and registration were investigated. RESULTS: The NNPS with plate-specific gain-map correction showed improvement over the uncorrected case over the range of frequencies from 0.15 to 2.5 mm^-1. At high exposure (40 mR), the NNPS was 50%-90% better with gain-map correction than without. A small further improvement in NNPS was seen from carefully registering the gain map with subsequent images using small fiducial dots, because of slight misregistration during scanning. Further improvement was seen in the NNPS from scaling the gain map about the mean to account for different beam spectra. CONCLUSIONS: This study demonstrates that a simple gain map can be used to correct for the fixed-pattern noise in a given plate and thus improve the DQE of CR imaging. Such a method could easily be implemented by manufacturers because each plate has a unique bar code, and the gain maps for all plates associated with a reader could be stored for future retrieval. These experiments indicate that an improvement in NPS (and hence DQE) is possible, depending on exposure level, over a wide range of frequencies with this technique.
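
A minimal sketch of the gain-map construction and correction described in METHODS, assuming the open-field images are already exposure-linearised and registered (the Gaussian low-pass used here to remove field inhomogeneity is an illustrative choice, not the paper's filter):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_gain_map(open_field_images, lowpass_sigma=50.0):
    """Build a plate-specific gain map from registered, exposure-linearised
    open-field images: average them, remove low-frequency field inhomogeneity,
    and normalise the result to a mean of one.
    """
    avg = np.mean(np.stack(open_field_images), axis=0)
    low = gaussian_filter(avg, sigma=lowpass_sigma)   # smooth field estimate
    gain = avg / low                                  # keep only fixed-pattern gain
    return gain / gain.mean()                         # fractions with a mean of one

def correct_image(image, gain_map):
    """Normalise a single exposure-linearised image by the plate's gain map."""
    return image / gain_map
```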