997 results for pixel


Relevance:

10.00%

Publisher:

Abstract:

The automatic universal film interpretation instrument is a highly intelligent precision optical test device that combines computer-controlled flying-spot scanning, precision optical measurement, and image tracking and information processing techniques. The flying-spot tube has a resolution of 4096 × 4096 elements, and through the optical system a resolution of 6.55 μm is obtained on the film. The flying-spot scanning mode is flexible and can be controlled freely, so the instrument is applicable to data interpretation of 35 mm film from all cine-theodolites and high-speed cameras currently used at Chinese test ranges. It offers two operating modes, automatic and semi-automatic interpretation: automatic interpretation runs at 5 frames/s with an accuracy of σ = ±0.011 mm, while semi-automatic interpretation reaches σ = ±0.009 mm. Measurement data can be recorded, printed, and displayed.

Relevance:

10.00%

Publisher:

Abstract:

This paper describes a robotic object-sorting system based on an optical array sensor. The sensors are mounted directly on the robot's two fingers. The shadow of the object to be grasped is transmitted through optical fibers to photosensitive elements placed in a "safe area". After the computer recognizes the object's contour, it commands the robot to grasp the object and carry it to a designated location, thereby sorting the objects.

Relevance:

10.00%

Publisher:

Abstract:

The fuzzy C-means algorithm has been applied successfully in cluster analysis; this paper proposes a new method that uses fuzzy C-means to remove noise. In general, a noise point in an image is a point whose gray value differs from the gray values of its surrounding pixels by more than some threshold. Based on this fact, the method first classifies pixels with the fuzzy C-means algorithm, then detects noise points with a standard kernel function, and finally removes them. Because only the gray values at the noise points are modified while all other pixels are left unchanged, the algorithm preserves detail and edges well. The method processes 3 × 3 points at a time, whereas previous methods could only process one point at a time, so it also improves processing speed. Results on real images are given and compared quantitatively with the gradient-inverse weighting method.
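
A minimal sketch of a two-stage filter in this spirit (Python/NumPy), assuming a grayscale image; the cluster count, the neighbourhood-difference threshold, and the median replacement rule are illustrative stand-ins for the paper's standard kernel-function test, and the fuzzy C-means routine is a plain reference implementation:

    import numpy as np

    def fcm_1d(values, c=3, m=2.0, iters=50):
        """Plain fuzzy C-means on gray values (illustrative reference version)."""
        v = values.astype(float).ravel()
        centers = np.linspace(v.min(), v.max(), c)            # initial cluster centers
        for _ in range(iters):
            d = np.abs(v[None, :] - centers[:, None]) + 1e-9   # distances to centers
            u = 1.0 / (d ** (2 / (m - 1)))                     # memberships ~ d^(-2/(m-1))
            u /= u.sum(axis=0, keepdims=True)
            centers = (u ** m @ v) / (u ** m).sum(axis=1)      # update centers
        return u.argmax(axis=0).reshape(values.shape), centers

    def remove_noise(img, c=3, thresh=30):
        """Flag a pixel as noise when its FCM class disagrees with its 3x3
        neighbourhood and its gray value deviates strongly from the neighbours;
        only flagged pixels are modified, so edges and detail are preserved."""
        labels, _ = fcm_1d(img, c)
        out = img.astype(float).copy()
        H, W = img.shape
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                win = out[y - 1:y + 2, x - 1:x + 2]
                lab = labels[y - 1:y + 2, x - 1:x + 2]
                neigh_mean = (win.sum() - out[y, x]) / 8.0
                majority = np.bincount(np.delete(lab.ravel(), 4)).argmax()
                if labels[y, x] != majority and abs(out[y, x] - neigh_mean) > thresh:
                    out[y, x] = np.median(np.delete(win.ravel(), 4))   # replace noise pixel only
        return out.astype(img.dtype)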

Relevance:

10.00%

Publisher:

Abstract:

This paper introduces Zernike moments and the principle of Zernike-moment-based subpixel edge detection in images. To address the shortcomings of the Zernike-moment-based subpixel edge detection algorithm proposed by Ghosal, namely thick detected edges and low subpixel localization accuracy, an improved algorithm is proposed. The coefficients of the 7 × 7 Zernike moment templates are derived and a new edge decision criterion is presented. The improved algorithm detects image edges well and achieves higher edge localization accuracy. Finally, three groups of experiments are designed; compared with the Canny operator and Ghosal's algorithm, the results demonstrate the superiority of the improved algorithm.
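
For reference, a rough numerical sketch of the Ghosal-style subpixel step that the improved algorithm builds on (not the paper's derived 7 × 7 template coefficients or its new edge criterion); the masks are obtained by simple point sampling of the Zernike polynomials over the unit disk, un-normalized moments are used, and min_mag is an illustrative threshold:

    import numpy as np
    from scipy.ndimage import correlate

    def zernike_masks(N=7):
        """Sample the conjugate Zernike polynomials V11* and V20* at pixel
        centres mapped onto the unit disk (pixels outside the disk are zeroed)."""
        ax = (np.arange(N) - (N - 1) / 2) / (N / 2)        # pixel centres in [-1, 1]
        x, y = np.meshgrid(ax, ax)
        disk = (x ** 2 + y ** 2) <= 1.0
        m11 = (x - 1j * y) * disk                          # V11* = x - j*y
        m20 = (2 * (x ** 2 + y ** 2) - 1) * disk           # V20  = 2*rho^2 - 1 (real)
        return m11, m20

    def subpixel_edges(img, N=7, min_mag=10.0):
        """Ghosal-style subpixel edge localisation (illustrative)."""
        m11, m20 = zernike_masks(N)
        f = img.astype(float)
        # correlate (not convolve) so the odd-symmetric mask is not flipped
        A11 = correlate(f, np.real(m11)) + 1j * correlate(f, np.imag(m11))
        A20 = correlate(f, m20)
        phi = np.angle(A11)                                # edge normal direction
        l = A20 / (np.abs(A11) + 1e-9)                     # edge distance from window centre (disk units)
        # keep windows with real edge energy whose edge passes near the centre pixel
        keep = (np.abs(A11) > min_mag) & (np.abs(l) * N / 2 <= np.sqrt(2) / 2)
        ys, xs = np.nonzero(keep)
        ex = xs + (N / 2) * l[ys, xs] * np.cos(phi[ys, xs])   # subpixel x
        ey = ys + (N / 2) * l[ys, xs] * np.sin(phi[ys, xs])   # subpixel y
        return np.stack([ex, ey], axis=1)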

Relevance:

10.00%

Publisher:

Abstract:

Population data collected and stored by administrative region are a form of statistical data. The traditional way of expressing them spatially, a uniform distribution within each administrative region, gives the data low spatial and temporal precision. Accurate population data with high spatial resolution are becoming increasingly important in regional planning, environmental protection, policy making, and rural-urban development, and the spatial distribution of population has become an important topic in GIS research. In this article, the author reviews progress in research on the spatial distribution of population. Supported by GIS, relevant geographical theory, and a grid data model, remote sensing data, terrain data, road data, river data, residential data, and socio-economic statistics were used to calculate the spatial distribution of population in Fujian province. The work comprises the following parts: (1) Simulation of boundaries at the township level. Based on an access cost index, land use data, road data, river data, a DEM, and related socio-economic statistics, an access cost surface for the study area was constructed. Supported by least-cost path queries and weighted Voronoi diagrams, the DVT (Demarcation of Villages and Towns) model was established to simulate township-level boundaries in Fujian province. (2) Modeling of the spatial distribution of population. Drawing on geographical knowledge, seven impact factors (land use, altitude, slope, residential area, railway, road, and river) were chosen as model parameters. With GIS support, the relations between population distribution and these factors were analyzed quantitatively, and population density coefficients were calculated at the pixel scale. Finally, the township-level model of population spatial distribution was established through multiplicative fusion of the population density coefficients with the simulated township boundaries. (3) Error testing and analysis of the modeled population distribution. Both the numerical characteristics of the modeling error and its spatial distribution were analyzed, and the sources of error were discussed.
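
The multiplicative-fusion step in part (2) can be made concrete with a small sketch: relative density coefficients computed per pixel from the impact factors are combined with the simulated township boundaries, and each township's pixels are rescaled to its census total. The rescaling rule and the toy numbers below are assumptions for illustration, not the study's calibrated coefficients:

    import numpy as np

    def distribute_population(coef, town_id, census):
        """Disaggregate census totals onto a grid.
        coef    : 2-D array of relative population-density coefficients per pixel
                  (e.g. a product of land-use, slope and distance-to-road weights)
        town_id : 2-D integer array giving the simulated township of each pixel
        census  : dict mapping township id -> total population
        Returns a 2-D array of population per pixel."""
        pop = np.zeros_like(coef, dtype=float)
        for tid, total in census.items():
            mask = town_id == tid
            w = coef[mask]
            s = w.sum()
            if s > 0:
                pop[mask] = total * w / s       # pixels share the total in proportion to coef
            else:
                pop[mask] = total / mask.sum()  # fall back to a uniform spread
        return pop

    # toy example: two townships on a 2x3 grid
    coef = np.array([[1.0, 2.0, 0.5],
                     [0.0, 4.0, 1.5]])
    town = np.array([[1, 1, 2],
                     [1, 2, 2]])
    print(distribute_population(coef, town, {1: 700, 2: 600}))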

Relevance:

10.00%

Publisher:

Abstract:

Several algorithms for optical flow are studied theoretically and experimentally. Differential and matching methods are examined; these two methods have differing domains of application: differential methods are best when displacements in the image are small (<2 pixels), while matching methods work well for moderate displacements but do not handle sub-pixel motions. Both types of optical flow algorithm can use either local or global constraints, such as spatial smoothness. Local matching and differential techniques and global differential techniques are examined. Most algorithms for optical flow use weak assumptions on the local variation of the flow and on the variation of image brightness. Strengthening these assumptions improves the flow computation; the computational consequence is a need for larger spatial and temporal support. Global differential approaches can be extended to local (patchwise) differential methods and to local differential methods using higher derivatives. Using larger support is valid when constraints on the local shape of the flow are satisfied. We show that a simple constraint on the local shape of the optical flow, that it varies slowly across the image plane, is often satisfied. We show how local differential methods imply the constraints for related methods using higher derivatives. Experiments show the behavior of these optical flow methods on velocity fields which do not obey the assumptions. Implementation of these methods highlights the importance of numerical differentiation. Numerical approximation of derivatives requires care in two respects: first, the temporal and spatial derivatives must be matched, because of the significant scale differences in space and time, and second, the derivative estimates improve with larger support.
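
As a concrete instance of the local (patchwise) differential methods discussed here, a minimal least-squares sketch over a small support window (a generic Lucas-Kanade-style estimator, not the thesis's exact formulation); the derivative estimates are deliberately the simplest possible and would need the care described above in practice:

    import numpy as np

    def local_flow(I0, I1, x, y, r=3):
        """Estimate (u, v) at pixel (x, y) from two consecutive frames by solving
        the brightness-constancy equations Ix*u + Iy*v + It = 0 over a window."""
        I0 = I0.astype(float)
        I1 = I1.astype(float)
        win = np.s_[y - r:y + r + 1, x - r:x + r + 1]
        # central spatial differences and a simple temporal difference
        Ix = 0.5 * (np.roll(I0, -1, axis=1) - np.roll(I0, 1, axis=1))[win].ravel()
        Iy = 0.5 * (np.roll(I0, -1, axis=0) - np.roll(I0, 1, axis=0))[win].ravel()
        It = (I1 - I0)[win].ravel()
        A = np.stack([Ix, Iy], axis=1)
        # least-squares solution of A [u v]^T = -It (valid for small displacements)
        (u, v), *_ = np.linalg.lstsq(A, -It, rcond=None)
        return u, v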

Relevance:

10.00%

Publisher:

Abstract:

We introduce a new method to describe, in a single image, changes in shape over time. We acquire both range and image information with a stationary stereo camera. From the pictures taken, we display a composite image consisting of the image data from the surface closest to the camera at every pixel. This reveals the 3-d relationships over time by easy-to-interpret occlusion relationships in the composite image. We call the composite a shape-time photograph. Small errors in depth measurements cause artifacts in the shape-time images. We correct most of these using a Markov network to estimate the most probable front surface, taking into account the depth measurements, their uncertainties, and layer continuity assumptions.
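
The compositing rule itself is simple: at each pixel, keep the colour from the frame whose measured depth is smallest (the Markov-network cleanup described above is omitted here). A sketch, assuming stacked colour frames and per-pixel depth maps as NumPy arrays:

    import numpy as np

    def shape_time(images, depths):
        """images: (T, H, W, 3) colour frames; depths: (T, H, W) range maps.
        Returns the composite that shows, at each pixel, the frontmost surface."""
        front = np.argmin(depths, axis=0)                        # index of nearest surface per pixel
        h, w = front.shape
        yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        return images[front, yy, xx]                             # pick that frame's pixel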

Relevance:

10.00%

Publisher:

Abstract:

For applications involving the control of moving vehicles, the recovery of relative motion between a camera and its environment is of high utility. This thesis describes the design and testing of a real-time analog VLSI chip which estimates the focus of expansion (FOE) from measured time-varying images. Our approach assumes a camera moving through a fixed world with translational velocity; the FOE is the projection of the translation vector onto the image plane. This location is the point towards which the camera is moving, and all other image points appear to expand outward from it. By way of the camera imaging parameters, the location of the FOE gives the direction of 3-D translation. The algorithm we use for estimating the FOE minimizes the sum of squares of the differences, at every pixel, between the observed time variation of brightness and the predicted variation given the assumed position of the FOE. This minimization is not straightforward, because the relationship between the brightness derivatives depends on the unknown distance to the surface being imaged. However, image points where brightness is instantaneously constant play a critical role. Ideally, the FOE would lie at the intersection of the tangents to the iso-brightness contours at these "stationary" points. In practice, brightness derivatives are hard to estimate accurately given that the image is quite noisy. Reliable results can nevertheless be obtained if the image contains many stationary points and the point is found that minimizes the sum of squares of the perpendicular distances from the tangents at the stationary points. The FOE chip calculates the gradient of this least-squares minimization sum, and the estimation is performed by closing a feedback loop around it. The chip has been implemented using an embedded CCD imager for image acquisition and a row-parallel processing scheme. A 64 x 64 version was fabricated in a 2 μm CCD/BiCMOS process through MOSIS with a design goal of 200 mW of on-chip power, a top frame rate of 1000 frames/second, and a basic accuracy of 5%. A complete experimental system which estimates the FOE in real time using real motion and image scenes is demonstrated.
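
Off-chip, the stationary-point formulation described above reduces to a small linear system: each stationary point contributes the tangent to its iso-brightness contour, and the FOE is the point minimising the summed squared perpendicular distances to those tangents. A numerical sketch of that formulation (an illustration only, not the analog feedback loop implemented on the chip; the stationarity threshold is an assumption):

    import numpy as np

    def estimate_foe(img0, img1, eps=0.5):
        """Estimate the focus of expansion from two frames of a purely
        translating camera, using near-stationary points (|Et| small)."""
        E = img0.astype(float)
        Ex = 0.5 * (np.roll(E, -1, axis=1) - np.roll(E, 1, axis=1))   # spatial gradients
        Ey = 0.5 * (np.roll(E, -1, axis=0) - np.roll(E, 1, axis=0))
        Et = img1.astype(float) - E                                   # temporal derivative
        ys, xs = np.nonzero(np.abs(Et) < eps)                         # stationary points
        ex, ey = Ex[ys, xs], Ey[ys, xs]
        n = np.hypot(ex, ey) + 1e-9
        ex, ey = ex / n, ey / n          # unit normals, so residuals are perpendicular distances
        # minimise sum over tangents of [(x0-xi)*ex + (y0-yi)*ey]^2 -> 2x2 normal equations
        A = np.array([[np.sum(ex * ex), np.sum(ex * ey)],
                      [np.sum(ex * ey), np.sum(ey * ey)]])
        b = np.array([np.sum(ex * (ex * xs + ey * ys)),
                      np.sum(ey * (ex * xs + ey * ys))])
        x0, y0 = np.linalg.solve(A, b)
        return x0, y0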

Relevance:

10.00%

Publisher:

Abstract:

This document presents the procedures for installing and using the NAVPRO 3.0 system, developed for the automatic processing and generation of image products from the Advanced Very High Resolution Radiometer (AVHRR) sensor on board the satellites of the National Oceanic and Atmospheric Administration (NOAA). The NAVPRO system was created by Embrapa Informática Agropecuária in partnership with the Universidade Estadual de Campinas (Unicamp), which arranged the transfer of the NAV (NAVigation) software package developed by the Colorado Center for Astrodynamics Research (CCAR) at the University of Colorado, Boulder, USA. The system's distinguishing feature is its automatic and precise georeferencing method, capable of generating images with maximum displacements of 1 pixel, a value accepted in applications involving low-spatial-resolution data.

Relevance:

10.00%

Publisher:

Abstract:

This work presents methodologies for estimating soil loss in the Guapi-Macacu River basin, Rio de Janeiro (RJ). Annual soil loss was estimated with the InVEST tool (Integrated Valuation of Environmental Services and Tradeoffs), using its sediment retention module. This module computes the average annual soil loss of each parcel of land and determines how much soil can reach a given point of interest. To identify potential soil loss, the model applies the Universal Soil Loss Equation (USLE) at the pixel scale, integrating information on relief, precipitation, land-use patterns, and soil properties; its results are given in tonnes per sub-basin. The results obtained here show that, despite the limitations of the Universal Soil Loss Equation, the model made it possible to map classes of soil loss and indicate areas more or less vulnerable to erosion, given the available data and their scales. The main advantage of using InVEST to compute the USLE was the possibility of integrating all required data in a single environment, reducing the chance of errors in data conversion. Its main limitation, however, is the difficulty of obtaining the input data required by the model.
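
The per-pixel relation the module applies is the Universal Soil Loss Equation, A = R · K · LS · C · P. A minimal sketch of that step and of the aggregation to tonnes per sub-basin (factor rasters, cell size, and units are illustrative assumptions, not the InVEST implementation):

    import numpy as np

    def usle(R, K, LS, C, P):
        """Potential soil loss per pixel (t/ha/yr) from rainfall erosivity R,
        soil erodibility K, slope length-steepness LS, cover C and practice P."""
        return R * K * LS * C * P

    def per_subbasin_tonnes(A, basin_id, cell_area_ha):
        """Aggregate pixel soil loss (t/ha/yr) to tonnes per sub-basin per year."""
        totals = {}
        for b in np.unique(basin_id):
            totals[int(b)] = float(A[basin_id == b].sum() * cell_area_ha)
        return totals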

Relevance:

10.00%

Publisher:

Abstract:

C.R. Bull, N.J.B. McFarlane, R. Zwiggelaar, C.J. Allen and T.T. Mottram, 'Inspection of teats by colour image analysis for automatic milking systems', Computers and Electronics in Agriculture 15 (1), 15-26 (1996)

Relevance:

10.00%

Publisher:

Abstract:

We propose a novel image registration framework which uses classifiers trained on examples of aligned images to achieve registration. Our approach is designed to register images of medical data where the physical condition of the patient has changed significantly and image intensities are drastically different. We use two boosted classifiers for each degree of freedom of the image transformation. These two classifiers can both identify when two images are correctly aligned and provide an efficient means of moving towards correct registration for misaligned images. The classifiers capture local alignment information using multi-pixel comparisons and can therefore achieve correct alignments where approaches such as correlation and mutual information, which rely only on pixel-to-pixel comparisons, fail. We test our approach using images from CT scans acquired in a study of acute respiratory distress syndrome and show a significant increase in registration accuracy compared to an approach using mutual information.
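
One way to picture the search loop: a trained alignment classifier scores the current transform, and the parameters are nudged in whichever direction the score improves. A hedged sketch with a generic score function standing in for the paper's boosted classifiers, and with translation as the only degrees of freedom:

    import numpy as np
    from scipy.ndimage import shift

    def register(fixed, moving, score, max_iter=50):
        """Greedy, classifier-guided alignment over x/y translation only.
        `score(fixed, warped)` is assumed to be a trained alignment classifier
        returning a higher value when the two images are better aligned."""
        params = np.zeros(2)                       # (dy, dx)
        best = score(fixed, moving)
        for _ in range(max_iter):
            improved = False
            for d in range(2):
                for step in (+1.0, -1.0):          # try a one-pixel move on this DOF
                    trial = params.copy()
                    trial[d] += step
                    s = score(fixed, shift(moving, trial))
                    if s > best:
                        best, params, improved = s, trial, True
            if not improved:
                break
        return params, shift(moving, params)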

Relevance:

10.00%

Publisher:

Abstract:

We propose to investigate a model-based technique for encoding non-rigid object classes in terms of object prototypes. Objects from the same class can be parameterized by identifying shape and appearance invariants of the class to devise low-level representations. The approach presented here creates a flexible model for an object class from a set of prototypes. This model is then used to estimate the parameters of the low-level representation of novel objects as combinations of the prototype parameters. Variations in object shape are modeled as non-rigid deformations; appearance variations are modeled as intensity variations. In the training phase, the system is presented with several example prototype images. These prototype images are registered to a reference image by a finite-element-based technique called Active Blobs. The deformations of the finite element model needed to register a prototype image with the reference image provide the shape description, or shape vector, for the prototype. The shape vector for each prototype is then used to warp the prototype image onto the reference image and obtain the corresponding texture vector. The prototype texture vectors, being warped onto the same reference image, have a pixel-by-pixel correspondence with each other and hence are "shape normalized". Given a sufficient number of prototypes that exhibit appropriate in-class variations, the shape and texture vectors define a linear prototype subspace that spans the object class; each prototype is a vector in this subspace. The matching phase involves the estimation of a set of combination parameters for synthesis of the novel object by combining the prototype shape and texture vectors. The strength of this technique lies in the combined estimation of both shape and appearance parameters, in contrast with previous approaches where shape and appearance parameters were estimated separately.
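
The matching phase amounts to a least-squares projection of the novel object's shape-normalised vectors onto the span of the prototype vectors. A toy sketch of that estimation (the Active Blobs registration that produces the vectors is assumed to have been done already, and the combined shape/appearance estimation is not shown):

    import numpy as np

    def fit_combination(prototypes, novel):
        """prototypes: (k, d) matrix whose rows are prototype shape or texture
        vectors; novel: (d,) vector for the new object (already shape-normalised).
        Returns combination coefficients a and the reconstruction prototypes.T @ a."""
        a, *_ = np.linalg.lstsq(prototypes.T, novel, rcond=None)
        return a, prototypes.T @ a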

Relevance:

10.00%

Publisher:

Abstract:

CONFIGR (CONtour FIgure GRound) is a computational model based on principles of biological vision that completes sparse and noisy image figures. Within an integrated vision/recognition system, CONFIGR posits an initial recognition stage which identifies figure pixels from spatially local input information. The resulting, and typically incomplete, figure is fed back to the “early vision” stage for long-range completion via filling-in. The reconstructed image is then re-presented to the recognition system for global functions such as object recognition. In the CONFIGR algorithm, the smallest independent image unit is the visible pixel, whose size defines a computational spatial scale. Once pixel size is fixed, the entire algorithm is fully determined, with no additional parameter choices. Multi-scale simulations illustrate the vision/recognition system. Open-source CONFIGR code is available online, but all examples can be derived analytically, and the design principles applied at each step are transparent. The model balances filling-in as figure against complementary filling-in as ground, which blocks spurious figure completions. Lobe computations occur on a subpixel spatial scale. Originally designed to fill-in missing contours in an incomplete image such as a dashed line, the same CONFIGR system connects and segments sparse dots, and unifies occluded objects from pieces locally identified as figure in the initial recognition stage. The model self-scales its completion distances, filling-in across gaps of any length, where unimpeded, while limiting connections among dense image-figure pixel groups that already have intrinsic form. Long-range image completion promises to play an important role in adaptive processors that reconstruct images from highly compressed video and still camera images.

Relevance:

10.00%

Publisher:

Abstract:

The Grey-White Decision Network is introduced as an application of an on-center, off-surround recurrent cooperative/competitive network for segmentation of magnetic resonance imaging (MRI) brain images. The three layer dynamical system relaxes into a solution where each pixel is labeled as either grey matter, white matter, or "other" matter by considering raw input intensity, edge information, and neighbor interactions. This network is presented as an example of applying a recurrent cooperative/competitive field (RCCF) to a problem with multiple conflicting constraints. Simulations of the network and its phase plane analysis are presented.
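
The recurrent on-center/off-surround dynamics referred to here are of the standard shunting (Grossberg-style) form; a toy sketch of one such competitive update for the three label nodes at a single pixel (generic equations and parameters, not the paper's exact three-layer network):

    import numpy as np

    def rccf_step(x, I, dt=0.01, A=1.0, B=1.0, slope=2.0):
        """One Euler step of a shunting on-centre/off-surround recurrent field.
        x: current activities; I: bottom-up inputs (e.g. intensity/edge evidence
        for the grey, white and 'other' labels at one pixel)."""
        f = np.maximum(x, 0.0) ** slope                 # faster-than-linear signal (contrast enhancing)
        excit = I + f                                   # on-centre: own input plus self-excitation
        inhib = f.sum() - f                             # off-surround: competition from the other nodes
        dx = -A * x + (B - x) * excit - x * inhib       # shunting equation, activities stay in [0, B]
        return x + dt * dx

    # relax to a decision among the three labels at one pixel
    x = np.zeros(3)
    I = np.array([0.8, 0.3, 0.1])                       # evidence for grey, white, other
    for _ in range(2000):
        x = rccf_step(x, I)
    print(x)                                            # the best-supported label dominates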