544 results for Détecteur à pixels


Relevance:

10.00%

Publisher:

Abstract:

Several algorithms for optical flow are studied theoretically and experimentally. Differential and matching methods are examined; these two methods have differing domains of application: differential methods are best when displacements in the image are small (<2 pixels), while matching methods work well for moderate displacements but do not handle sub-pixel motions. Both types of optical flow algorithm can use either local or global constraints, such as spatial smoothness. Local matching and differential techniques and global differential techniques are examined. Most algorithms for optical flow utilize weak assumptions on the local variation of the flow and on the variation of image brightness. Strengthening these assumptions improves the flow computation. The computational consequence of this is a need for larger spatial and temporal support. Global differential approaches can be extended to local (patchwise) differential methods and local differential methods using higher derivatives. Using larger support is valid when constraints on the local shape of the flow are satisfied. We show that a simple constraint on the local shape of the optical flow, that there is slow spatial variation in the image plane, is often satisfied. We show how local differential methods imply the constraints for related methods using higher derivatives. Experiments show the behavior of these optical flow methods on velocity fields which do not obey the assumptions. Implementation of these methods highlights the importance of numerical differentiation. Numerical approximation of derivatives requires care in two respects: first, it is important that the temporal and spatial derivatives be matched, because of the significant scale differences in space and time, and, second, the derivative estimates improve with larger support.
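As a concrete illustration of the local (patchwise) differential approach discussed in this abstract, the sketch below solves the brightness-constancy equations over a small patch by linear least squares, in the Lucas-Kanade spirit. The function name, patch size, and simple central-difference derivatives are illustrative assumptions, not the implementation evaluated in the paper.

```python
import numpy as np

def local_differential_flow(I1, I2, x, y, half=7):
    """Estimate flow (u, v) for the patch centred at (x, y) from two frames.

    Solves the brightness-constancy equations Ix*u + Iy*v + It = 0 over the
    patch by least squares (a local, patchwise differential method).
    """
    I1 = I1.astype(float)
    I2 = I2.astype(float)

    # Central differences in space, forward difference in time; the abstract
    # stresses that these spatial and temporal derivatives should be matched.
    Ix = (np.roll(I1, -1, axis=1) - np.roll(I1, 1, axis=1)) / 2.0
    Iy = (np.roll(I1, -1, axis=0) - np.roll(I1, 1, axis=0)) / 2.0
    It = I2 - I1

    ys = slice(y - half, y + half + 1)
    xs = slice(x - half, x + half + 1)
    A = np.stack([Ix[ys, xs].ravel(), Iy[ys, xs].ravel()], axis=1)
    b = -It[ys, xs].ravel()

    # Patches with little contrast give an ill-conditioned A; larger patches
    # correspond to the larger spatial support discussed in the abstract.
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```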

Relevance:

10.00%

Publisher:

Abstract:

We describe a new method for motion estimation and 3D reconstruction from stereo image sequences obtained by a stereo rig moving through a rigid world. We show that, given two stereo pairs, one can compute the motion of the stereo rig directly from the image derivatives (spatial and temporal). Correspondences are not required. One can then use the images from both pairs combined to compute a dense depth map. The motion estimates between stereo pairs enable us to combine depth maps from all the pairs in the sequence to form an extended scene reconstruction, and we show results from a real image sequence. The motion computation is a linear least squares computation using all the pixels in the image. Areas with little or no contrast are implicitly weighted less, so one does not have to apply an explicit confidence measure.
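The motion step described above reduces to one linear least-squares solve over every pixel's derivative-based constraint. The sketch below shows only that solve; how the (hypothetical) six-parameter constraint rows are built from the spatial and temporal derivatives of the stereo pairs is specific to the paper and not reproduced here.

```python
import numpy as np

def solve_rig_motion(A, b):
    """Least-squares rig-motion estimate from per-pixel linear constraints.

    A : (N, 6) array, one row per pixel, with coefficients built from the
        spatial and temporal image derivatives at that pixel
        (hypothetical layout: 3 translation + 3 rotation unknowns).
    b : (N,) array of per-pixel residual terms.

    Rows from low-contrast pixels have small derivative-based coefficients,
    so they contribute little to the normal equations; this is the implicit
    down-weighting mentioned in the abstract.
    """
    AtA = A.T @ A          # 6x6 normal matrix accumulated over all pixels
    Atb = A.T @ b
    return np.linalg.solve(AtA, Atb)   # e.g. (tx, ty, tz, wx, wy, wz)
```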

Relevance:

10.00%

Publisher:

Abstract:

We seek to both detect and segment objects in images. To exploit both local image data and contextual information, we introduce Boosted Random Fields (BRFs), which use Boosting to learn the graph structure and local evidence of a conditional random field (CRF). The graph structure is learned by assembling graph fragments in an additive model. The connections between individual pixels are not very informative, but by using dense graphs, we can pool information from large regions of the image; dense models also support efficient inference. We show how contextual information from other objects can improve detection performance, both in terms of accuracy and speed, by using a computational cascade. We apply our system to detect stuff and things in office and street scenes.

Relevance:

10.00%

Publisher:

Abstract:

Rapid judgments about the properties and spatial relations of objects are the crux of visually guided interaction with the world. Vision begins, however, with essentially pointwise representations of the scene, such as arrays of pixels or small edge fragments. For adequate time-performance in recognition, manipulation, navigation, and reasoning, the processes that extract meaningful entities from the pointwise representations must exploit parallelism. This report develops a framework for the fast extraction of scene entities, based on a simple, local model of parallel computation. An image chunk is a subset of an image that can act as a unit in the course of spatial analysis. A parallel preprocessing stage constructs a variety of simple chunks uniformly over the visual array. On the basis of these chunks, subsequent serial processes locate relevant scene components and assemble detailed descriptions of them rapidly. This thesis defines image chunks that facilitate the most potentially time-consuming operations of spatial analysis: boundary tracing, area coloring, and the selection of locations at which to apply detailed analysis. Fast parallel processes for computing these chunks from images, and chunk-based formulations of indexing, tracing, and coloring, are presented. These processes have been simulated and evaluated on the Lisp Machine and the Connection Machine.

Relevance:

10.00%

Publisher:

Abstract:

C.R. Bull, N.J.B. McFarlane, R. Zwiggelaar, C.J. Allen and T.T. Mottram, 'Inspection of teats by colour image analysis for automatic milking systems', Computers and Electronics in Agriculture 15 (1), 15-26 (1996)

Relevance:

10.00%

Publisher:

Abstract:

Space carving has emerged as a powerful method for multiview scene reconstruction. Although a wide variety of methods have been proposed, the quality of the reconstruction remains highly dependent on the photometric consistency measure and the threshold used to carve away voxels. In this paper, we present a novel photo-consistency measure that is motivated by a multiset variant of the chamfer distance. The new measure is robust to high amounts of within-view color variance and also takes into account the projection angles of back-projected pixels. Another critical issue in space carving is the selection of the photo-consistency threshold used to determine which surface voxels are kept or carved away. In this paper, a reliable threshold selection technique is proposed that examines the photo-consistency values at contour generator points. Contour generators are points that lie on both the surface of the object and the visual hull. To determine the threshold, a percentile ranking of the photo-consistency values of these generator points is used. This improved technique is applicable to a wide variety of photo-consistency measures, including the new measure presented in this paper. Also presented in this paper is a method to choose between photo-consistency measures and voxel array resolutions prior to carving, using receiver operating characteristic (ROC) curves.
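The threshold-selection idea, taking a percentile of the photo-consistency values observed at contour generator points, can be sketched directly. The percentile value and function name below are placeholders rather than the paper's recommended settings, and the direction of the comparison against the returned threshold depends on the sign convention of the consistency measure.

```python
import numpy as np

def select_carving_threshold(consistency_at_generators, percentile=90.0):
    """Pick a photo-consistency carving threshold from contour generators.

    consistency_at_generators : photo-consistency values sampled at points
        known to lie on both the object surface and the visual hull.
    percentile : which percentile of those values to use (illustrative).

    Voxels are kept or carved depending on which side of this threshold
    their photo-consistency value falls.
    """
    return np.percentile(np.asarray(consistency_at_generators, dtype=float),
                         percentile)
```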

Relevance:

10.00%

Publisher:

Abstract:

CONFIGR (CONtour FIgure GRound) is a computational model based on principles of biological vision that completes sparse and noisy image figures. Within an integrated vision/recognition system, CONFIGR posits an initial recognition stage which identifies figure pixels from spatially local input information. The resulting, and typically incomplete, figure is fed back to the “early vision” stage for long-range completion via filling-in. The reconstructed image is then re-presented to the recognition system for global functions such as object recognition. In the CONFIGR algorithm, the smallest independent image unit is the visible pixel, whose size defines a computational spatial scale. Once pixel size is fixed, the entire algorithm is fully determined, with no additional parameter choices. Multi-scale simulations illustrate the vision/recognition system. Open-source CONFIGR code is available online, but all examples can be derived analytically, and the design principles applied at each step are transparent. The model balances filling-in as figure against complementary filling-in as ground, which blocks spurious figure completions. Lobe computations occur on a subpixel spatial scale. Originally designed to fill in missing contours in an incomplete image such as a dashed line, the same CONFIGR system connects and segments sparse dots, and unifies occluded objects from pieces locally identified as figure in the initial recognition stage. The model self-scales its completion distances, filling in across gaps of any length where unimpeded, while limiting connections among dense image-figure pixel groups that already have intrinsic form. Long-range image completion promises to play an important role in adaptive processors that reconstruct images from highly compressed video and still camera images.

Relevance:

10.00%

Publisher:

Abstract:

An improved Boundary Contour System (BCS) and Feature Contour System (FCS) neural network model of preattentive vision is applied to large images containing range data gathered by a synthetic aperture radar (SAR) sensor. The goal of processing is to make structures such as motor vehicles, roads, or buildings more salient and more interpretable to human observers than they are in the original imagery. Early processing by shunting center-surround networks compresses signal dynamic range and performs local contrast enhancement. Subsequent processing by filters sensitive to oriented contrast, including short-range competition and long-range cooperation, segments the image into regions. The segmentation is performed by three "copies" of the BCS and FCS, of small, medium, and large scales, wherein the "short-range" and "long-range" interactions within each scale occur over smaller or larger distances, corresponding to the size of the early filters of each scale. A diffusive filling-in operation within the segmented regions at each scale produces coherent surface representations. The combination of BCS and FCS helps to locate and enhance structure over regions of many pixels, without the resulting blur characteristic of approaches based on low spatial frequency filtering alone.
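The early shunting center-surround stage mentioned above can be illustrated with the standard equilibrium form of a shunting on-center off-surround network, which compresses dynamic range and enhances local contrast. The Gaussian kernels and constants below are illustrative assumptions, not the parameters of the reported BCS/FCS system.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shunting_center_surround(image, sigma_c=1.0, sigma_s=4.0,
                             A=1.0, B=1.0, D=1.0):
    """Equilibrium response of a shunting on-center off-surround network.

    Uses the steady-state form x = (B*C - D*S) / (A + C + S), where C is an
    on-center (narrow Gaussian) and S an off-surround (broad Gaussian)
    weighting of the input.  Ratio-like output compresses signal dynamic
    range while enhancing local contrast.
    """
    img = image.astype(float)
    C = gaussian_filter(img, sigma_c)   # on-center input
    S = gaussian_filter(img, sigma_s)   # off-surround input
    return (B * C - D * S) / (A + C + S)
```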

Relevance:

10.00%

Publisher:

Abstract:

An improved Boundary Contour System (BCS) and Feature Contour System (FCS) neural network model of preattentive vision is applied to two large images containing range data gathered by a synthetic aperture radar (SAR) sensor. The goal of processing is to make structures such as motor vehicles, roads, or buildings more salient and more interpretable to human observers than they are in the original imagery. Early processing by shunting center-surround networks compresses signal dynamic range and performs local contrast enhancement. Subsequent processing by filters sensitive to oriented contrast, including short-range competition and long-range cooperation, segments the image into regions. Finally, a diffusive filling-in operation within the segmented regions produces coherent visible structures. The combination of BCS and FCS helps to locate and enhance structure over regions of many pixels, without the resulting blur characteristic of approaches based on low spatial frequency filtering alone.

Relevance:

10.00%

Publisher:

Abstract:

Very Long Baseline Interferometry (VLBI) polarisation observations of the relativistic jets from Active Galactic Nuclei (AGN) allow the magnetic field environment around the jet to be probed. In particular, multi-wavelength observations of AGN jets allow the creation of Faraday rotation measure maps which can be used to gain an insight into the magnetic field component of the jet along the line of sight. Recent polarisation and Faraday rotation measure maps of many AGN show possible evidence for the presence of helical magnetic fields. The detection of such evidence is highly dependent both on the resolution of the images and the quality of the error analysis and statistics used in the detection. This thesis focuses on the development of new methods for high resolution radio astronomy imaging in both of these areas. An implementation of the Maximum Entropy Method (MEM) suitable for multi-wavelength VLBI polarisation observations is presented, and the advantage in resolution it possesses over the CLEAN algorithm is discussed and demonstrated using Monte Carlo simulations. This new polarisation MEM code has been applied to multi-wavelength imaging of the Active Galactic Nuclei 0716+714, Mrk 501 and 1633+382, in each case providing improved polarisation imaging compared to the case of deconvolution using the standard CLEAN algorithm. The first MEM-based fractional polarisation and Faraday-rotation VLBI images are presented, using these sources as examples. Recent detections of gradients in Faraday rotation measure are presented, including an observation of a reversal in the direction of a gradient further along a jet. Simulated observations confirming the observability of such a phenomenon are conducted, and possible explanations for a reversal in the direction of the Faraday rotation measure gradient are discussed. These results were originally published in Mahmud et al. (2013). Finally, a new error model for the CLEAN algorithm is developed which takes into account correlation between neighbouring pixels. Comparison of error maps calculated using this new model with Monte Carlo maps shows striking similarities when the sources considered are well resolved, indicating that the method is correctly reproducing at least some component of the overall uncertainty in the images. The calculation of many useful quantities using this model is demonstrated, and the advantages it offers over traditional single-pixel calculations are illustrated. The limitations of the model as revealed by Monte Carlo simulations are also discussed; unfortunately, the error model does not work well when applied to compact regions of emission.
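A hedged sketch of the basic step behind a Faraday rotation measure map: a per-pixel straight-line fit of polarisation angle against wavelength squared, chi = chi0 + RM * lambda^2. The n*pi angle ambiguity and the per-pixel error weighting that a real VLBI analysis handles are ignored here, and the function name is illustrative.

```python
import numpy as np

def rotation_measure_map(chi_maps, wavelengths_m):
    """Fit a Faraday rotation measure (rad/m^2) at each image pixel.

    chi_maps : (n_wavelengths, H, W) polarisation angle maps in radians.
    wavelengths_m : observing wavelengths in metres.
    """
    lam2 = np.asarray(wavelengths_m, dtype=float) ** 2
    chi = np.asarray(chi_maps, dtype=float)
    n, H, W = chi.shape

    # Straight-line fit chi = chi0 + RM * lambda^2, solved for all pixels
    # at once (one right-hand-side column per pixel).
    X = np.stack([np.ones_like(lam2), lam2], axis=1)   # (n, 2)
    Y = chi.reshape(n, H * W)                          # (n, H*W)
    coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)     # (2, H*W)
    return coeffs[1].reshape(H, W)                     # the RM slope
```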

Relevance:

10.00%

Publisher:

Abstract:

Spatio-temporal data on cytotaxonomic identifications of larvae of different members of the Simulium damnosum complex collected from rivers in southern Ghana and south-western Togo from 1975 until 1997 were analysed. When the data were combined, the percentages of savannah blackflies (S. damnosum sensu stricto and S. sirbanum) in the samples were shown to have been progressively increasing since 1975. The increases were statistically significant (P < 0·001), but the rates of increase were not linear. Further analyses were conducted according to the collection seasons and locations of the samples, to account for possible biases such as savannah flies occurring further south in the dry season or a preponderance of later samples from northern rivers having more savannah flies. These analyses showed that the increasing trend was statistically significant (P < 0·0001) only during the periods April to June and October to December. The presence of adult savannah flies carrying infective larvae (L3) indistinguishable from those of Onchocerca volvulus in the study zone was confirmed by examinations of captured flies. The percentages of savannah flies amongst the human-biting populations and the percentages with L3s in the head were higher during dry seasons than wet seasons, and the savannah species were found furthest south (5°25′N) in the dry season. Comparisons of satellite images taken in 1973 and 1990 over a study area in south-western Ghana encompassing stretches of the Tano and Bia rivers demonstrated that there have been substantial increases in urban and savannah areas, at the expense of forest. This was so not only for the whole images but also for subsamples of the images taken at distances of 1, 2, 4, 8 and 16 km from sites alongside the River Tano. At every distance from the river, the percentages of pixels classified as urban or savannah were higher in 1990 than in 1973, while those classified as degraded or dense forest were lower. The possibility that the proportionate increases in savannah forms of the vectors of onchocerciasis, and hence in the likelihood of the transmission of savannah strains of the disease in formerly forested areas, were related to the decreases in forest cover is discussed.

Relevance:

10.00%

Publisher:

Abstract:

The detection of dense harmful algal blooms (HABs) by satellite remote sensing is usually based on analysis of chlorophyll-a as a proxy. However, this approach does not provide information about the potential harm of the bloom, nor can it identify the dominant species. The developed HAB risk classification method employs a fully automatic data-driven approach to identify key characteristics of water-leaving radiances and derived quantities, and to classify pixels into “harmful”, “non-harmful” and “no bloom” categories using Linear Discriminant Analysis (LDA). Discrimination accuracy is increased through the use of spectral ratios of water-leaving radiances, absorption and backscattering. To reduce the false-alarm rate, the data that cannot be reliably classified are automatically labelled as “unknown”. This method can be trained on different HAB species or extended to new sensors and then applied to generate independent HAB risk maps; these can be fused with data from other sensors to fill gaps or improve spatial or temporal resolution. The HAB discrimination technique has obtained accurate results on MODIS and MERIS data, correctly identifying 89% of Phaeocystis globosa HABs in the southern North Sea and 88% of Karenia mikimotoi blooms in the Western English Channel. A linear transformation of the ocean colour discriminants is used to estimate harmful cell counts, demonstrating greater accuracy than if based on chlorophyll-a; this will facilitate its integration into a HAB early warning system operating in the southern North Sea.
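A minimal sketch of the LDA-with-rejection idea described above, using scikit-learn. The feature construction (spectral ratios of water-leaving radiance, absorption and backscattering) and the confidence cut-off are placeholders rather than the published configuration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def classify_hab_pixels(train_features, train_labels, pixel_features,
                        min_confidence=0.8):
    """Classify pixels as 'harmful', 'non-harmful' or 'no bloom' with LDA.

    Pixels whose best posterior probability falls below min_confidence are
    labelled 'unknown', mirroring the rejection step used to reduce the
    false-alarm rate.
    """
    lda = LinearDiscriminantAnalysis()
    lda.fit(train_features, train_labels)

    posteriors = lda.predict_proba(pixel_features)
    labels = []
    for probs in posteriors:
        k = int(np.argmax(probs))
        labels.append(lda.classes_[k] if probs[k] >= min_confidence
                      else "unknown")
    return np.array(labels)
```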

Relevance:

10.00%

Publisher:

Abstract:

Satellite-based remote sensing of active fires is the only practical way to consistently and continuously monitor diurnal fluctuations in biomass burning from regional, to continental, to global scales. Failure to understand, quantify, and communicate the performance of an active fire detection algorithm, however, can lead to improper interpretations of the spatiotemporal distribution of biomass burning, and flawed estimates of fuel consumption and trace gas and aerosol emissions. This work evaluates the performance of the Spinning Enhanced Visible and Infrared Imager (SEVIRI) Fire Thermal Anomaly (FTA) detection algorithm using seven months of active fire pixels detected by the Moderate Resolution Imaging Spectroradiometer (MODIS) across the Central African Republic (CAR). Results indicate that the omission rate of the SEVIRI FTA detection algorithm relative to MODIS varies spatially across the CAR, ranging from 25% in the south to 74% in the east. In the absence of confounding artifacts such as sunglint, uncertainties in the background thermal characterization, and cloud cover, the regional variation in SEVIRI's omission rate can be attributed to a coupling between SEVIRI's low spatial resolution detection bias (i.e., the inability to detect fires below a certain size and intensity) and a strong geographic gradient in active fire characteristics across the CAR. SEVIRI's commission rate relative to MODIS increases from 9% when evaluated near MODIS nadir to 53% near the MODIS scene edges, indicating that SEVIRI errors of commission at the MODIS scene edges may not be false alarms but rather true fires that MODIS failed to detect as a result of larger pixel sizes at extreme MODIS scan angles. Results from this work are expected to facilitate (i) future improvements to the SEVIRI FTA detection algorithm; (ii) the assimilation of the SEVIRI and MODIS active fire products; and (iii) the potential inclusion of SEVIRI into a network of geostationary sensors designed to achieve global diurnal active fire monitoring.
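Once SEVIRI detections have been matched against MODIS reference fire pixels, the omission and commission rates quoted above reduce to simple ratios. The sketch below assumes that matching has already been done and that only the counts are available; the matching itself is the hard part and is not shown.

```python
def detection_rates(n_reference_fires, n_reference_detected,
                    n_detections, n_detections_confirmed):
    """Omission / commission rates for an active-fire product.

    omission   : fraction of reference (e.g. MODIS) fire pixels missed by
                 the evaluated (e.g. SEVIRI FTA) product.
    commission : fraction of the evaluated product's detections with no
                 matching reference fire pixel.
    """
    omission = 1.0 - n_reference_detected / n_reference_fires
    commission = 1.0 - n_detections_confirmed / n_detections
    return omission, commission
```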

Relevance:

10.00%

Publisher:

Abstract:

There is ongoing debate as to whether the oligotrophic ocean is predominantly net autotrophic and acts as a CO2 sink, or net heterotrophic and therefore acts as a CO2 source to the atmosphere. This quantification is challenging, both spatially and temporally, due to the sparseness of measurements. There has been a concerted effort to derive accurate estimates of phytoplankton photosynthesis and primary production from satellite data to fill these gaps; however there have been few satellite estimates of net community production (NCP). In this paper, we compare a number of empirical approaches to estimate NCP from satellite data with in vitro measurements of changes in dissolved O2 concentration at 295 stations in the N and S Atlantic Ocean (including the Antarctic), Greenland and Mediterranean Seas. Algorithms based on power laws between NCP and particulate organic carbon production (POC) derived from 14C uptake tend to overestimate NCP at negative values and underestimate it at positive values. An algorithm that includes sea surface temperature (SST) in the power function of NCP and 14C POC has the lowest bias and root-mean-square error compared with in vitro measured NCP and is the most accurate algorithm for the Atlantic Ocean. A nearly 13-year time series of NCP was generated using this algorithm with SeaWiFS data to assess changes over time in different regions and in relation to climate variability. The North Atlantic subtropical and tropical Gyres (NATL) were predominantly net autotrophic from 1998 to 2010 except for boreal autumn/winter, suggesting that the northern hemisphere has remained a net sink for CO2 during this period. The South Atlantic subtropical Gyre (SATL) fluctuated from being net autotrophic in austral spring–summer, to net heterotrophic in austral autumn–winter. Recent decadal trends suggest that the SATL is becoming more of a CO2 source. Over the Atlantic basin, the percentage of satellite pixels with negative NCP was 27%, with the largest contributions from the NATL and SATL during boreal and austral autumn–winter, respectively. Variations in NCP in the northern and southern hemispheres were correlated with climate indices. Negative correlations between NCP and the multivariate ENSO index (MEI) occurred in the SATL, which explained up to 60% of the variability in NCP. Similarly, there was a negative correlation between NCP and the North Atlantic Oscillation (NAO) in the Southern Sub-Tropical Convergence Zone (SSTC), which explained 90% of the variability. There were also positive correlations with NAO in the Canary Current Coastal Upwelling (CNRY) and Western Tropical Atlantic (WTRA), which explained 80% and 60% of the variability in each province, respectively. MEI and NAO seem to play a role in modifying phases of net autotrophy and heterotrophy in the Atlantic Ocean.
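The climate-index analysis at the end of the abstract, where MEI and NAO explain up to 60-90% of NCP variability in some provinces, amounts to correlating a regional NCP time series with the index. A minimal sketch is shown below; the detrending, deseasonalising or lagging that a full analysis would apply is omitted.

```python
import numpy as np
from scipy.stats import pearsonr

def ncp_climate_correlation(ncp_series, climate_index):
    """Correlate a regional NCP time series with a climate index (e.g. MEI, NAO).

    Returns the Pearson correlation coefficient, the fraction of NCP
    variability it explains (r squared), and the p-value.
    """
    ncp = np.asarray(ncp_series, dtype=float)
    idx = np.asarray(climate_index, dtype=float)
    r, p_value = pearsonr(ncp, idx)
    return r, r ** 2, p_value
```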

Relevance:

10.00%

Publisher:

Abstract:

Quantitative examination of prostate histology offers clues in the diagnostic classification of lesions and in the prediction of response to treatment and prognosis. To facilitate the collection of quantitative data, the development of machine vision systems is necessary. This study explored the use of imaging for identifying tissue abnormalities in prostate histology. Medium-power histological scenes were recorded from whole-mount radical prostatectomy sections at × 40 objective magnification and assessed by a pathologist as exhibiting stroma, normal tissue (nonneoplastic epithelial component), or prostatic carcinoma (PCa). A machine vision system was developed that divided the scenes into subregions of 100 × 100 pixels and subjected each to image-processing techniques. Analysis of morphological characteristics allowed the identification of normal tissue. Analysis of image texture demonstrated that Haralick feature 4 was the most suitable for discriminating stroma from PCa. Using these morphological and texture measurements, it was possible to define a classification scheme for each subregion. The machine vision system is designed to integrate these classification rules and generate digital maps of tissue composition from the classification of subregions; 79.3% of subregions were correctly classified. Established classification rates have demonstrated the validity of the methodology on small scenes; a logical extension was to apply the methodology to whole slide images via scanning technology. The machine vision system is capable of classifying these images. The machine vision system developed in this project facilitates the exploration of morphological and texture characteristics in quantifying tissue composition. It also illustrates the potential of quantitative methods to provide highly discriminatory information in the automated identification of prostatic lesions using computer vision.
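A rough sketch of the subregion tiling and texture step described above. Haralick's fourth feature is commonly listed as the sum-of-squares variance of the grey-level co-occurrence matrix, and that common definition is computed here; the tile size follows the abstract, but the grey-level quantisation, pixel offset and function name are assumptions rather than the authors' implementation.

```python
import numpy as np
from skimage.feature import graycomatrix  # skimage >= 0.19 naming

def subregion_texture_map(gray_image, tile=100):
    """Tile a histology scene into tile x tile subregions and compute a
    GLCM variance (sum-of-squares) texture value for each subregion.

    gray_image : 2-D uint8 array (grey levels 0-255).
    """
    h, w = gray_image.shape
    rows, cols = h // tile, w // tile
    texture = np.zeros((rows, cols))
    levels = 256
    i_idx = np.arange(levels).reshape(-1, 1)   # grey level of the first pixel

    for r in range(rows):
        for c in range(cols):
            patch = gray_image[r * tile:(r + 1) * tile,
                               c * tile:(c + 1) * tile]
            glcm = graycomatrix(patch, distances=[1], angles=[0],
                                levels=levels, symmetric=True, normed=True)
            p = glcm[:, :, 0, 0]
            mu = np.sum(i_idx * p)                         # GLCM mean
            texture[r, c] = np.sum((i_idx - mu) ** 2 * p)  # GLCM variance
    return texture
```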