962 results for Pixel
Abstract:
In this paper, a new reconfigurable multi-standard architecture is introduced for integer-pixel motion estimation, and a standard-cell-based chip design study is presented. The architecture has been designed to cover most of the common block-based video compression standards, including MPEG-2, MPEG-4, H.263, H.264, AVS and WMV-9. It exhibits simple control, high throughput and relatively low hardware cost, and is highly competitive when compared with existing designs for specific video standards. It can also, through the use of control signals, be dynamically reconfigured at run-time to accommodate different system constraints, such as the trade-off between power dissipation and video quality. The computational rates achieved make the circuit suitable for high-end video processing applications. Silicon design studies indicate that circuits based on this approach incur only a relatively small penalty in terms of power dissipation and silicon area when compared with implementations for specific standards.
Abstract:
Aim - To describe a new method of evaluating the topographic distribution of fundus autofluorescence in eyes with retinal disease. Methods - Images of fundus autofluorescence were obtained in five patients and 34 normal volunteers using a confocal scanning laser ophthalmoscope (cSLO). To evaluate the topographic distribution of fundus autofluorescence throughout the posterior pole a rectangular box, 10 x 750 pixels, was used as the area of analysis. The box was placed, horizontally, across the macular region. The intensity of fundus autofluorescence of each pixel within the rectangular box was plotted against its degree of eccentricity. Profiles of fundus autofluorescence from patients were compared with those obtained from the age matched control group and with cSLO images. Results - Profiles of fundus autofluorescence appeared to represent the topographic distribution of fundus autofluorescence throughout the posterior pole appreciated in the cSLO images, and allowed rapid identification and quantification of areas of increased or decreased fundus autofluorescence. Conclusions - Fundus autofluorescence profiles appear to be useful to study the spatial distribution of fundus autofluorescence in eyes with retinal disease.
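The profile extraction the abstract describes, averaging pixel intensities within a horizontal box and plotting them against eccentricity, can be sketched as follows. This is an illustrative Python version, not the authors' code; the function name and the convention of centring eccentricity on the strip midpoint are our assumptions.

```python
import numpy as np

def autofluorescence_profile(image, row, half_height=5, width=750, fovea_col=None):
    """Mean autofluorescence intensity along a horizontal strip.

    A (2*half_height) x width box -- the abstract's 10 x 750 pixel box --
    is centred vertically on `row`; intensities are averaged column-wise
    and returned against eccentricity from `fovea_col` (hypothetical
    default: the strip centre).
    """
    strip = image[row - half_height:row + half_height, :width].astype(float)
    profile = strip.mean(axis=0)                 # one mean intensity per column
    if fovea_col is None:
        fovea_col = width // 2
    eccentricity = np.arange(width) - fovea_col  # pixels from the fovea
    return eccentricity, profile
```

Plotting `profile` against `eccentricity` reproduces the kind of fundus autofluorescence profile compared between patients and controls.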
Abstract:
Power dissipation and robustness to process variation have conflicting design requirements. Scaling of voltage is associated with larger variations, while Vdd upscaling or transistor upsizing for parametric-delay variation tolerance can be detrimental for power dissipation. However, for a class of signal-processing systems, an effective tradeoff can be achieved between Vdd scaling, variation tolerance, and output quality. In this paper, we develop a novel low-power variation-tolerant algorithm/architecture for color interpolation that allows a graceful degradation in the peak-signal-to-noise ratio (PSNR) under aggressive voltage scaling as well as extreme process variations. This feature is achieved by exploiting the fact that not all computations used in interpolating the pixel values contribute equally to PSNR improvement. In the presence of Vdd scaling and process variations, the architecture ensures that only the less important computations are affected by delay failures. We also propose a different sliding-window size than the conventional one to improve interpolation performance by a factor of two with negligible overhead. Simulation results show that, even at a scaled voltage of 77% of nominal value, our design provides reasonable image PSNR with 40% power savings. © 2006 IEEE.
Abstract:
Data registration refers to a series of techniques for matching or bringing similar objects or datasets together into alignment. These techniques enjoy widespread use in a diverse variety of applications, such as video coding, tracking, object and face detection and recognition, surveillance and satellite imaging, medical image analysis and structure from motion. Registration methods are as numerous as their manifold uses, from pixel level and block or feature based methods to Fourier domain methods.
This book is focused on providing algorithms and image and video techniques for registration and quality performance metrics. The authors provide various assessment metrics for measuring registration quality alongside analyses of registration techniques, introducing and explaining both familiar and state-of-the-art registration methodologies used in a variety of targeted applications.
Key features:
- Provides a state-of-the-art review of image and video registration techniques, allowing readers to develop an understanding of how well the techniques perform by using specific quality assessment criteria
- Addresses a range of applications from familiar image and video processing domains to satellite and medical imaging among others, enabling readers to discover novel methodologies with utility in their own research
- Discusses quality evaluation metrics for each application domain with an interdisciplinary approach from different research perspectives
Abstract:
One of the most widely used techniques in computer vision for foreground detection is to model each background pixel as a Mixture of Gaussians (MoG). While this is effective for a static camera with a fixed or a slowly varying background, it fails to handle any fast, dynamic movement in the background. In this paper, we propose a generalised framework, called region-based MoG (RMoG), that takes into consideration neighbouring pixels while generating the model of the observed scene. The model equations are derived from Expectation Maximisation theory for batch mode, and stochastic approximation is used for online mode updates. We evaluate our region-based approach against ten sequences containing dynamic backgrounds, and show that the region-based approach provides a performance improvement over the traditional single-pixel MoG. For feature and region sizes that are equal, the effect of increasing the learning rate is to reduce both true and false positives. Comparison with four state-of-the-art approaches shows that RMoG outperforms the others in reducing false positives whilst still maintaining reasonable foreground definition. Lastly, using the ChangeDetection (CDNet 2014) benchmark, we evaluated RMoG against numerous surveillance scenes and found it to be amongst the leading performers for dynamic background scenes, whilst providing comparable performance for other commonly occurring surveillance scenes.
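The single-pixel MoG baseline that RMoG generalises can be sketched in the familiar Stauffer-Grimson online form. This is an illustrative Python sketch, not the paper's model; the parameter values (learning rate, matching distance, background-weight threshold) are assumptions.

```python
import numpy as np

class PixelMoG:
    """Per-pixel Mixture of Gaussians with online updates -- the
    single-pixel baseline that region-based MoG extends with
    neighbourhood support. All parameter defaults are illustrative."""

    def __init__(self, k=3, lr=0.05, var0=400.0, match_sigmas=2.5, bg_weight=0.25):
        self.w = np.full(k, 1.0 / k)           # component weights (sum to 1)
        self.mu = np.linspace(0.0, 255.0, k)   # component means
        self.var = np.full(k, var0)            # component variances
        self.lr, self.var0 = lr, var0
        self.match_sigmas, self.bg_weight = match_sigmas, bg_weight

    def update(self, x):
        """Fold intensity x into the model; return True if x looks like background."""
        d2 = (x - self.mu) ** 2 / self.var     # squared Mahalanobis distances
        i = int(np.argmin(d2))
        if d2[i] < self.match_sigmas ** 2:     # x is explained by component i
            self.w *= 1.0 - self.lr            # decay all weights...
            self.w[i] += self.lr               # ...and reinforce the match
            self.mu[i] += self.lr * (x - self.mu[i])
            self.var[i] += self.lr * ((x - self.mu[i]) ** 2 - self.var[i])
            return bool(self.w[i] >= self.bg_weight)
        # no component matches: replace the weakest with a wide one around x
        j = int(np.argmin(self.w))
        self.mu[j], self.var[j] = float(x), self.var0
        return False
```

After many observations of a stable intensity the matching component dominates and that value is classed as background, while a sudden jump to an unseen intensity is flagged as foreground.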
Abstract:
The Rapid Oscillations in the Solar Atmosphere (ROSA) instrument is a synchronized, six-camera high-cadence solar imaging instrument developed by Queen's University Belfast and recently commissioned at the Dunn Solar Telescope at the National Solar Observatory in Sunspot, New Mexico, USA, as a common-user instrument. Consisting of six 1k x 1k Peltier-cooled frame-transfer CCD cameras with very low noise (0.02 - 15 e/pixel/s), each ROSA camera is capable of full-chip readout speeds in excess of 30 Hz, and up to 200 Hz when the CCD is windowed. ROSA will allow for multi-wavelength studies of the solar atmosphere at a high temporal resolution. We will present the current instrument set-up and parameters, observing modes, and future plans, including a new high QE camera allowing 15 Hz for Halpha. Interested parties should see https://habu.pst.qub.ac.uk/groups/arcresearch/wiki/de502/ROSA.html
Automated image analysis for experimental investigations of salt water intrusion in coastal aquifers
Abstract:
A novel methodology has been developed to quantify important saltwater intrusion parameters in a sandbox-style experiment using image analysis. Existing methods found in the literature are based mainly on visual observations, which are subjective and labour-intensive, and which limit the temporal and spatial resolutions that can be analysed. A robust error analysis was undertaken to determine the optimum methodology to convert image light intensity to concentration. Results showed that defining a relationship on a pixel-wise basis provided the most accurate image-to-concentration conversion and allowed quantification of the width of the mixing zone between the saltwater and freshwater. A large image sample rate was used to investigate the transient dynamics of saltwater intrusion, which rendered analysis by visual observation unsuitable. This paper presents the methodologies developed to minimise human input and promote autonomy, provide high-resolution image-to-concentration conversion, and allow the quantification of intrusion parameters under transient conditions.
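A pixel-wise intensity-to-concentration relationship of the kind the abstract favours can be sketched as an independent least-squares line fit at every pixel over a stack of calibration frames. This is an illustrative Python sketch under the assumption of a linear relationship per pixel (the paper's optimum relationship may differ); it also assumes intensities vary across the calibration frames at every pixel.

```python
import numpy as np

def fit_pixelwise_calibration(images, concentrations):
    """Fit intensity -> concentration as a straight line independently
    at every pixel. `images` is an (n, H, W) stack of calibration frames
    taken at the n known `concentrations`. Returns (slope, intercept)
    maps, each of shape (H, W)."""
    imgs = np.asarray(images, dtype=float)
    c = np.asarray(concentrations, dtype=float)
    x = imgs.reshape(len(c), -1)            # (n, H*W) pixel intensities
    xm, cm = x.mean(axis=0), c.mean()
    # vectorised least squares: one slope/intercept per pixel column
    slope = ((x - xm) * (c - cm)[:, None]).sum(axis=0) / ((x - xm) ** 2).sum(axis=0)
    intercept = cm - slope * xm
    h, w = imgs.shape[1:]
    return slope.reshape(h, w), intercept.reshape(h, w)

def intensity_to_concentration(image, slope, intercept):
    """Apply the per-pixel calibration to a single experiment image."""
    return slope * np.asarray(image, dtype=float) + intercept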
Abstract:
Efficient identification and follow-up of astronomical transients is hindered by the need for humans to manually select promising candidates from data streams that contain many false positives. These artefacts arise in the difference images that are produced by most major ground-based time-domain surveys with large-format CCD cameras. This dependence on humans to reject bogus detections is unsustainable for next-generation all-sky surveys, and significant effort is now being invested to solve the problem computationally. In this paper, we explore a simple machine learning approach to real-bogus classification by constructing a training set from the image data of ~32 000 real astrophysical transients and bogus detections from the Pan-STARRS1 Medium Deep Survey. We derive our feature representation from the pixel intensity values of a 20 x 20 pixel stamp around the centre of the candidates. This differs from previous work in that it works directly on the pixels rather than catalogued domain knowledge for feature design or selection. Three machine learning algorithms are trained (artificial neural networks, support vector machines and random forests) and their performances are tested on a held-out subset of 25 per cent of the training data. We find the best results from the random forest classifier and demonstrate that by accepting a false positive rate of 1 per cent, the classifier initially suggests a missed detection rate of around 10 per cent. However, we also find that a combination of bright star variability, nuclear transients and uncertainty in human labelling means that our best estimate of the missed detection rate is approximately 6 per cent.
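The operating-point calculation quoted in the abstract, fixing the false positive rate at 1 per cent and reading off the missed detection rate, can be sketched as follows. This is an illustrative Python sketch, not the authors' pipeline; the scores would come from a classifier such as a random forest over flattened 20 x 20 pixel stamps.

```python
import numpy as np

def missed_detection_rate(real_scores, bogus_scores, fpr=0.01):
    """Pick the decision threshold that admits ~`fpr` of the bogus
    detections and return (threshold, fraction of real transients
    missed). Higher scores mean "more likely real"."""
    real = np.asarray(real_scores, dtype=float)
    bogus = np.asarray(bogus_scores, dtype=float)
    thresh = np.quantile(bogus, 1.0 - fpr)  # ~fpr of bogus scores exceed this
    mdr = float(np.mean(real <= thresh))    # real transients rejected at thresh
    return float(thresh), mdr
```

Sweeping `fpr` traces out the classifier's ROC curve, from which the 1 per cent operating point is one row.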
Abstract:
We present Hubble Space Telescope (HST) rest-frame ultraviolet imaging of the host galaxies of 16 hydrogen-poor superluminous supernovae (SLSNe), including 11 events from the Pan-STARRS Medium Deep Survey. Taking advantage of the superb angular resolution of HST, we characterize the galaxies' morphological properties, sizes, and star formation rate (SFR) densities. We determine the supernova (SN) locations within the host galaxies through precise astrometric matching and measure physical and host-normalized offsets as well as the SN positions within the cumulative distribution of UV light pixel brightness. We find that the host galaxies of H-poor SLSNe are irregular, compact dwarf galaxies, with a median half-light radius of just 0.9 kpc. The UV-derived SFR densities are high (⟨Σ_SFR⟩ ≃ 0.1 M☉ yr⁻¹ kpc⁻²), suggesting that SLSNe form in overdense environments. Their locations trace the UV light of their host galaxies, with a distribution intermediate between that of long-duration gamma-ray bursts (LGRBs; which are strongly clustered on the brightest regions of their hosts) and a uniform distribution (characteristic of normal core-collapse SNe), though they cannot be statistically distinguished from either with the current sample size. Taken together, this strengthens the picture that SLSN progenitors require different conditions than those of ordinary core-collapse SNe to form and that they explode in broadly similar galaxies as do LGRBs. If the tendency for SLSNe to be less clustered on the brightest regions than are LGRBs is confirmed by a larger sample, this would indicate a different, potentially lower-mass progenitor for SLSNe than LGRBs.
Abstract:
Computer vision for real-time applications requires tremendous computational power because all images must be processed from the first to the last pixel. Active vision, by probing specific objects on the basis of already acquired context, may lead to a significant reduction of processing. This idea is based on a few concepts from our visual cortex (Rensink, Visual Cogn. 7, 17-42, 2000): (1) our physical surround can be seen as memory, i.e. there is no need to construct detailed and complete maps, (2) the bandwidth of the what and where systems is limited, i.e. only one object can be probed at any time, and (3) bottom-up, low-level feature extraction is complemented by top-down hypothesis testing, i.e. there is a rapid convergence of activities in dendritic/axonal connections.
Abstract:
In this study, Artificial Neural Networks are applied to multi-step, long-term solar radiation prediction. The networks are trained as one-step-ahead predictors and iterated over time to obtain multi-step, longer-term predictions. Auto-regressive and Auto-regressive with exogenous inputs solar radiation models are compared, considering cloudiness indices as inputs in the latter case. These indices are obtained through pixel classification of ground-to-sky images. The input-output structure of the neural network models is selected using evolutionary computation methods.
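The iteration scheme the abstract describes, training a one-step-ahead predictor and feeding its outputs back in to reach longer horizons, can be sketched as follows. This is an illustrative Python sketch; `predict_one` stands in for the trained network and is any callable mapping a lag window to the next value.

```python
import numpy as np

def multistep_forecast(predict_one, history, horizon):
    """Iterate a one-step-ahead predictor into a multi-step forecast:
    each prediction is appended to the lag window and fed back in.

    predict_one : callable taking a 1-D lag vector, returning the next value
    history     : list of the most recent observed values (the initial window)
    horizon     : number of steps to forecast
    """
    window = list(history)
    out = []
    for _ in range(horizon):
        y = predict_one(np.asarray(window))
        out.append(y)
        window = window[1:] + [y]   # slide the window forward onto the prediction
    return out
```

Because predictions are fed back as inputs, one-step errors compound with horizon, which is why the model structure selection the abstract mentions matters for long-term accuracy.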
Abstract:
Thesis (Ph.D.)--University of Washington, 2013
Abstract:
Master's dissertation in Territorial Management (Gestão do Território), specialisation in Remote Sensing and Geographic Information Systems
Abstract:
The forest has a crucial ecological role, and continuous forest loss can have severe effects on the environment. As Armenia is one of the least forested countries in the world, this problem is especially critical. Continuous forest disturbances, mainly caused by illegal logging beginning in the early 1990s, severely damaged the forest ecosystem by decreasing forest productivity and leaving more areas vulnerable to erosion. Another problem for the Armenian forest is the lack of continuous monitoring and the absence of accurate estimates of the level of cutting in some years. To gain insight into the forest and its disturbances over a long period, we used Landsat TM/ETM+ images. The Google Earth Engine JavaScript API, an online tool enabling access to and analysis of large amounts of satellite imagery, was used. To overcome the data availability problems caused by the gap in the Landsat series in 1988-1998, extensive cloud cover in the study area and the missing scan lines, we used pixel-based compositing for the temporal window of leaf-on vegetation (June to late September). Subsequently, pixel-based linear regression analyses were performed. Vegetation indices derived from the 10 biannual composites for the years 1984-2014 were used for trend analysis. To derive the disturbances in forests only, a forest cover layer was aggregated and the original composites were masked. We found that around 23% of the forest was disturbed during the study period.
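The pixel-based linear regression step the abstract mentions, fitting a trend through a vegetation index at every pixel of the composite stack, can be sketched as follows. This is an illustrative Python sketch, not the study's Earth Engine script; the idea that strongly negative slopes flag likely disturbance is our reading, and any threshold on the slope would be an assumption.

```python
import numpy as np

def pixelwise_trend(stack, years):
    """Per-pixel linear regression of a vegetation index against time.

    stack : (n, H, W) array -- one composite (e.g. NDVI) per epoch
    years : length-n sequence of matching acquisition years
    Returns an (H, W) map of slopes (index units per year); strongly
    negative slopes indicate declining vegetation, i.e. candidate
    disturbance pixels.
    """
    v = np.asarray(stack, dtype=float).reshape(len(years), -1)  # (n, H*W)
    t = np.asarray(years, dtype=float)
    tm = t - t.mean()
    # vectorised least-squares slope, one per pixel column
    slope = (tm[:, None] * (v - v.mean(axis=0))).sum(axis=0) / (tm ** 2).sum()
    return slope.reshape(np.asarray(stack).shape[1:])
```

Masking the slope map with the forest cover layer, as the abstract describes, restricts the disturbance statistics to forested pixels.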