996 results for partial Delaunay triangulation


Relevance:

100.00%

Publisher:

Abstract:

This paper describes an approach based on Zernike moments and Delaunay triangulation for the localization of handwritten text in machine-printed text documents. The Zernike moments of the image are first evaluated, and we classify the text as handwritten using a nearest-neighbor classifier. These features are independent of size, slant, orientation, translation, and other variations in handwritten text. We then use Delaunay triangulation to reclassify the misclassified text regions: imposing a Delaunay triangulation on the centroid points of the connected components, we extract features based on the triangles and reclassify the text. We remove the noise components in the document as part of the preprocessing step, so the method works well on noisy documents. The success rate of the method is found to be 86%; for specific handwritten elements such as signatures or similar text, the accuracy is even higher, at 93%.
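
A minimal sketch of the triangle-feature step, assuming the connected-component centroids are available as an (N, 2) array; it uses scipy.spatial.Delaunay with hypothetical feature choices (mean and spread of edge lengths, triangle area), not the authors' implementation:

```python
import numpy as np
from scipy.spatial import Delaunay

def triangle_features(centroids):
    """centroids: (N, 2) array of connected-component centres."""
    tri = Delaunay(centroids)
    feats = []
    for simplex in tri.simplices:           # vertex indices of one triangle
        p = centroids[simplex]              # (3, 2) vertex coordinates
        edges = np.linalg.norm(p - np.roll(p, 1, axis=0), axis=1)
        # Shoelace formula for the triangle area.
        area = 0.5 * abs((p[1, 0] - p[0, 0]) * (p[2, 1] - p[0, 1]) -
                         (p[2, 0] - p[0, 0]) * (p[1, 1] - p[0, 1]))
        feats.append((edges.mean(), edges.std(), area))
    return np.array(feats)

rng = np.random.default_rng(0)
print(triangle_features(rng.random((20, 2))).mean(axis=0))
```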

Relevance:

100.00%

Publisher:

Abstract:

Computer vision algorithms that use color information require color-constant images to operate correctly. Color constancy of the images is usually achieved in two steps: first the illuminant is detected, and then the image is transformed with a chromatic adaptation transform (CAT). Existing CAT methods use a single transformation matrix for all the colors of the input image. The method proposed in this paper requires multiple corresponding color pairs between source and target illuminants, given by patches of the Macbeth color checker. It uses Delaunay triangulation to divide the color gamut of the input image into small triangles. Each color of the input image is associated with the triangle containing the color point and transformed with a full linear model associated with that triangle. A full linear model is used because diagonal models are known to be inaccurate when the channel color-matching functions do not have narrow peaks. Objective evaluation showed that the proposed method outperforms existing CAT methods by more than 21%; that is, it performs statistically significantly better than the existing methods.
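
The per-triangle transformation can be sketched as follows. This toy illustration works in a 2-D chromaticity plane with synthetic corresponding pairs, and it substitutes a barycentric (piecewise-affine) map for the paper's per-triangle full linear models; points outside the source gamut (where find_simplex returns -1) are not handled:

```python
import numpy as np
from scipy.spatial import Delaunay

src = np.random.default_rng(1).random((24, 2))   # source-illuminant chromaticities
dst = src * 0.9 + 0.05                           # matching target chromaticities (toy)

tri = Delaunay(src)                              # triangulated source gamut

def map_colors(pts):
    simplex = tri.find_simplex(pts)              # containing triangle per point
    # Barycentric coordinates of each point within its triangle.
    T = tri.transform[simplex]                   # (M, 3, 2) affine transforms
    b2 = np.einsum('mij,mj->mi', T[:, :2], pts - T[:, 2])
    bary = np.c_[b2, 1.0 - b2.sum(axis=1)]       # (M, 3) barycentric weights
    # Re-express each point with the *target* triangle vertices.
    return np.einsum('mi,mij->mj', bary, dst[tri.simplices[simplex]])

print(map_colors(np.array([[0.5, 0.5], [0.4, 0.6]])))
```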

Relevance:

100.00%

Publisher:

Abstract:

We study a problem about shortest paths in Delaunay triangulations. Given two nodes s, t in the Delaunay triangulation of a point set P, we look for a new point p that can be added such that the shortest path from s to t in the Delaunay triangulation of P ∪ {p} improves as much as possible. We study properties of the problem and give efficient algorithms to find such a point when the graph distance used is either the Euclidean distance or the link distance. Several other variations of the problem are also discussed.
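
A brute-force illustration of the problem statement (not the paper's efficient algorithms), assuming scipy is available: rebuild the Delaunay graph for each candidate insertion point on a coarse grid and compare Euclidean shortest-path lengths:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def delaunay_shortest_path(points, s, t):
    """Euclidean shortest-path length from s to t in the Delaunay graph."""
    tri = Delaunay(points)
    g = lil_matrix((len(points), len(points)))
    for simplex in tri.simplices:
        for i in range(3):
            a, b = simplex[i], simplex[(i + 1) % 3]
            g[a, b] = g[b, a] = np.linalg.norm(points[a] - points[b])
    return dijkstra(g.tocsr(), indices=s)[t]

rng = np.random.default_rng(2)
P = rng.random((30, 2))
base = delaunay_shortest_path(P, 0, 1)
# Try candidate insertion points p on a coarse grid and keep the best.
best = min((delaunay_shortest_path(np.vstack([P, c]), 0, 1), tuple(c))
           for c in np.mgrid[0:1:8j, 0:1:8j].reshape(2, -1).T)
print(f"before: {base:.3f}, best after insertion: {best[0]:.3f} at {best[1]}")
```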

Relevance:

90.00%

Publisher:

Abstract:

An improvement to a quality two-dimensional Delaunay mesh generation algorithm, combining the mesh refinement strategies of Ruppert and Shewchuk, is proposed in this research. The developed technique uses the diametral lens criterion, introduced by L. P. Chew, to eliminate the extremely obtuse triangles along the mesh boundary. The method splits the boundary segments to obtain an initial prerefinement, thus reducing the number of iterations necessary to generate a high-quality sequential triangulation. Moreover, it decreases the intensity of the communication and synchronization between subdomains in parallel mesh refinement. © 2008 IEEE.
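
A minimal sketch of the encroachment test behind this prerefinement, under one common reading of the diametral lens (a point encroaches when it subtends an angle of at least 120 degrees over the boundary segment); the splitting policy shown is a simplification, not the paper's algorithm:

```python
import numpy as np

def encroaches_lens(a, b, p, angle_deg=120.0):
    """True if p lies inside the diametral lens of segment (a, b)."""
    u, v = np.asarray(a, float) - p, np.asarray(b, float) - p
    cos_apb = u.dot(v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_apb, -1.0, 1.0))) >= angle_deg

def split_if_encroached(a, b, points):
    """Return the segment midpoint if any point encroaches, else None."""
    mid = (np.asarray(a, float) + np.asarray(b, float)) / 2.0
    return mid if any(encroaches_lens(a, b, p) for p in points) else None

print(split_if_encroached([0, 0], [1, 0], [[0.5, 0.1], [2.0, 2.0]]))
```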

Relevance:

80.00%

Publisher:

Abstract:

This thesis investigates the problem of robot navigation using only landmark bearings. The proposed system allows a robot to move to a ground target location specified by the sensor values observed at this ground target position. The control actions are computed based on the difference between the current landmark bearings and the target landmark bearings. No Cartesian coordinates with respect to the ground are computed by the control system; the robot navigates using solely information from the bearing sensor space.

Most existing robot navigation systems require a ground frame (a 2D Cartesian coordinate system) in order to navigate from a ground point A to a ground point B. The commonly used sensors, such as laser range scanners, sonar, infrared, and vision, do not directly provide the 2D ground coordinates of the robot. Existing systems use the sensor measurements to localise the robot with respect to a map, a set of 2D coordinates of the objects of interest. It is more natural to navigate between the points in the sensor space corresponding to A and B without requiring the Cartesian map and the localisation process.

Research on animals has revealed how insects are able to exploit very limited computational and memory resources to successfully navigate to a desired destination without computing Cartesian positions. For example, a honeybee balances the left and right optical flows to navigate in a narrow corridor. Unlike many other ants, Cataglyphis bicolor does not secrete pheromone trails in order to find its way home, but instead uses the sun as a compass to keep track of its home direction vector. The home vector can be inaccurate, so the ant also uses landmark recognition. More precisely, it takes snapshots and compass headings of some landmarks. To return home, the ant tries to line up the landmarks exactly as they were before it started wandering.

This thesis introduces a navigation method based on reflex actions in sensor space. The sensor vector is made of the bearings of some landmarks, and the reflex action is a gradient descent with respect to the distance in sensor space between the current sensor vector and the target sensor vector. Our theoretical analysis shows that, except for some fully characterized pathological cases, any point is reachable from any other point by reflex action in the bearing sensor space, provided the environment contains three landmarks and is free of obstacles.

The trajectories of a robot using reflex navigation, like those of other image-based visual control strategies, do not necessarily correspond to the shortest paths on the ground, because it is the sensor error that is minimized, not the distance moved on the ground. However, we show that the use of a sequence of waypoints in sensor space can address this problem. In order to identify relevant waypoints, we train a Self-Organising Map (SOM) from a set of observations uniformly distributed with respect to the ground. This SOM provides a sense of location to the robot and allows a form of path planning in sensor space. The proposed navigation system is analysed theoretically and evaluated both in simulation and in experiments on a real robot.
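
The reflex action itself is compact enough to sketch. The following toy example, with a hypothetical three-landmark layout, step size, and finite-difference gradient, descends the squared bearing-space error as described above; it illustrates the idea and is not the thesis implementation:

```python
import numpy as np

landmarks = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])   # three landmarks

def bearings(pos):
    d = landmarks - pos
    return np.arctan2(d[:, 1], d[:, 0])

def sensor_error(pos, target_b):
    diff = bearings(pos) - target_b
    diff = np.arctan2(np.sin(diff), np.cos(diff))   # wrap angles to (-pi, pi]
    return float(np.sum(diff ** 2))

def reflex_navigate(start, goal, step=0.05, eps=1e-5, iters=2000):
    target_b = bearings(np.asarray(goal, float))
    pos = np.asarray(start, float).copy()
    for _ in range(iters):
        if sensor_error(pos, target_b) < 1e-10:
            break
        # Finite-difference gradient of the sensor-space error; the robot
        # only ever evaluates distances in bearing space, never ground frames.
        g = np.array([(sensor_error(pos + e, target_b) -
                       sensor_error(pos - e, target_b)) / (2 * eps)
                      for e in (np.array([eps, 0.0]), np.array([0.0, eps]))])
        pos -= step * g / (np.linalg.norm(g) + 1e-12)   # fixed-length reflex step
    return pos

print(reflex_navigate(start=[1.0, 1.0], goal=[7.0, 5.0]))   # ends near the goal
```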

Relevance:

80.00%

Publisher:

Abstract:

The element-based piecewise smooth functional approximation in the conventional finite element method (FEM) results in discontinuous first and higher-order derivatives across element boundaries. Despite the significant advantages of the FEM in modelling complicated geometries, a motivation in developing mesh-free methods has been the ease with which higher-order, globally smooth shape functions can be derived via the reproduction of polynomials. There is thus a case for combining these advantages in a so-called hybrid scheme, or a `smooth FEM', that, whilst retaining the popular mesh-based discretization, obtains shape functions with uniform C^p (p >= 1) continuity. One such recent attempt, a NURBS-based parametric bridging method (Shaw et al. 2008b), uses polynomial-reproducing, tensor-product non-uniform rational B-splines (NURBS) over a typical FE mesh and relies upon a (possibly piecewise) bijective geometric map between the physical domain and a rectangular (cuboidal) parametric domain. The present work aims at a significant extension and improvement of this concept by replacing NURBS with DMS-splines (say, of degree n > 0) that are defined over triangles and provide C^{n-1} continuity across the triangle edges. This relieves the need for a geometric map that could precipitate ill-conditioning of the discretized equations. Delaunay triangulation is used to discretize the physical domain, and shape functions are constructed via the polynomial reproduction condition, which quite remarkably relieves the solution of its sensitive dependence on the selected knotsets. Derivatives of the shape functions are also constructed based on the principle of reproduction of derivatives of polynomials (Shaw and Roy 2008a). Within the present scheme, the triangles also serve as background integration cells in weak formulations, thereby overcoming non-conformability issues. Numerical examples involving the evaluation of derivatives of targeted functions up to the fourth order, and applications of the method to a few boundary value problems of general interest in solid mechanics over (non-simply connected) bounded domains in 2D, are presented towards the end of the paper.
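
The full DMS-spline construction is beyond a short example, but the polynomial reproduction condition that drives the scheme can be illustrated in its degenerate degree-1 case, where the barycentric (hat) functions of a Delaunay mesh reproduce any linear polynomial to machine precision. A sketch using scipy, with toy nodes:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(3)
nodes = rng.random((40, 2))
tri = Delaunay(nodes)

f = lambda x, y: 2.0 + 3.0 * x - 1.5 * y           # a linear target polynomial
nodal = f(nodes[:, 0], nodes[:, 1])                # nodal coefficients

pts = 0.25 + 0.5 * rng.random((5, 2))              # interior evaluation points
simplex = tri.find_simplex(pts)
T = tri.transform[simplex]                         # per-point affine transforms
b2 = np.einsum('mij,mj->mi', T[:, :2], pts - T[:, 2])
bary = np.c_[b2, 1.0 - b2.sum(axis=1)]             # hat-function values
approx = np.einsum('mi,mi->m', bary, nodal[tri.simplices[simplex]])
print(np.abs(approx - f(pts[:, 0], pts[:, 1])).max())   # ~1e-16: exact reproduction
```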

Relevance:

80.00%

Publisher:

Abstract:

This work sets forth a `hybrid' discretization scheme utilizing bivariate simplex splines as kernels in a polynomial reproducing scheme constructed over a conventional Finite Element Method (FEM)-like domain discretization based on Delaunay triangulation. Careful construction of the simplex spline knotset ensures the success of the polynomial reproduction procedure at all points in the domain of interest, a significant advancement over its precursor, the DMS-FEM. The shape functions in the proposed method inherit the global continuity (C^{p-1}) and local supports of the simplex splines of degree p. In the proposed scheme, the triangles comprising the domain discretization also serve as background cells for numerical integration, which here are near-aligned to the supports of the shape functions (and their intersections), thus considerably ameliorating an oft-cited source of inaccuracy in the numerical integration of mesh-free (MF) schemes. Numerical experiments show that the proposed method requires lower-order quadrature rules for accurate evaluation of integrals in the Galerkin weak form. Numerical demonstrations of optimal convergence rates for a few test cases are given, and the method is also implemented to compute crack-tip fields in a gradient-enhanced elasticity model.
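
The use of the Delaunay triangles as background integration cells can be sketched with a standard 3-point (edge-midpoint) quadrature rule, exact for quadratics; the mesh and integrand below are toy choices, not those of the paper:

```python
import numpy as np
from scipy.spatial import Delaunay

nodes = np.array([[x, y] for x in np.linspace(0, 1, 6)
                          for y in np.linspace(0, 1, 6)])
tri = Delaunay(nodes)

# Edge-midpoint rule in barycentric coordinates, equal weights of 1/3.
BARY = np.array([[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]])

def integrate(f):
    total = 0.0
    for simplex in tri.simplices:
        v = nodes[simplex]                               # (3, 2) triangle vertices
        area = 0.5 * abs(np.linalg.det(np.c_[v[1] - v[0], v[2] - v[0]]))
        q = BARY @ v                                     # quadrature points
        total += area * np.mean([f(*p) for p in q])
    return total

print(integrate(lambda x, y: x * y))   # exact value over the unit square: 0.25
```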

Relevance:

80.00%

Publisher:

Abstract:

This work presents a theoretical and numerical study of the errors that occur in gradient calculations on unstructured meshes built from the Voronoi diagram, meshes that are also formed by the Delaunay triangulation. The meshes adopted in this work were Cartesian meshes and triangular meshes, the latter generated by dividing a square into two or four equal triangles. For this analysis, three distinct methodologies for computing the gradients were chosen: the Green-Gauss method, the Minimum Squared Residual (least-squares) method, and the Corrected Projected Gradient Average method. The text rests on two main points: showing that the error equations given by the gradients can be similar, but with opposite signs, for computation points in neighbouring volumes; and that the order of the error of the analytical equations can be improved on uniform meshes, compared with non-uniform ones, in the one-dimensional cases, and when analysed at the face of such neighbouring volumes in the two-dimensional cases.
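
As a concrete reference point for the Green-Gauss method discussed above, here is a minimal single-cell sketch of grad(phi) ≈ (1/A) Σ_f φ_f n_f L_f on a unit Cartesian volume with a linear field; the geometry and field values are toy data, not the meshes analysed in the text:

```python
import numpy as np

def green_gauss(face_midvals, face_normals, face_lengths, area):
    """face_normals: outward unit normals; face_midvals: phi at face midpoints."""
    flux = (face_midvals[:, None] * face_normals) * face_lengths[:, None]
    return flux.sum(axis=0) / area

# Unit square cell; phi(x, y) = 2x + 3y evaluated at the four face midpoints.
normals = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], float)
mids = np.array([[1.0, 0.5], [0.0, 0.5], [0.5, 1.0], [0.5, 0.0]])
phi = 2 * mids[:, 0] + 3 * mids[:, 1]
print(green_gauss(phi, normals, np.ones(4), area=1.0))   # -> [2. 3.], exact here
```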

Relevance:

80.00%

Publisher:

Abstract:

A numerical simulation that accounts for the effects of stratification and scalar mixing (of temperature, salinity, or a water-soluble substance, for example) is necessary to study and predict the environmental impacts that a hydroelectric power plant reservoir can produce. This work proposes a methodology for the study of environmental flows, especially those in which knowledge of the interaction between stratification and mixing can give important insight into the phenomena that occur. For this reason, tools for 3D numerical simulation of environmental flows are developed. A tetrahedral mesh generator for the reservoir and an algebraic turbulence model based on the Richardson number are the main tools developed. The main difficulty in generating a tetrahedral mesh of a reservoir is the non-uniform point distribution arising from the disproportionate ratio between the horizontal and vertical scales of the reservoir. With this kind of point distribution, conventional tetrahedral mesh generation algorithms can become unstable. For this reason, an unstructured tetrahedral mesh generator is developed, and the methodology used to obtain conforming elements is described. The generation of a triangular surface mesh using the Delaunay triangulation and the construction of the tetrahedra from the triangular mesh are the main steps of the mesh generator. The hydrodynamic simulation with the turbulence model provides a useful and computationally feasible tool for engineering purposes. Moreover, the turbulence model based on the Richardson number accounts for the effects of the interaction between turbulence and stratification. The algebraic model is the simplest among the various turbulence models, but it provides realistic results with the adjustment of a small number of parameters. Turbulent viscosity/diffusivity models for stratified flow are incorporated. The Reynolds-averaged equations and the scalar transport equation are approximated using the Finite Element Method. The convective terms are approximated with the semi-Lagrangian method, and the spatial approximation is based on the Galerkin method. The computational results are compared with results available in the literature. Finally, the simulation of the flow in a reservoir branch is presented.
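
As an illustration of a Richardson-number-based algebraic closure of the kind described, the following sketch uses the classical Munk-Anderson damping forms as a stand-in; the coefficients (alpha, beta) and the exact model adopted in the thesis are assumptions here:

```python
import numpy as np

def richardson_number(d_rho_dz, du_dz, rho0=1000.0, g=9.81):
    """Gradient Richardson number Ri = -(g/rho0) * (drho/dz) / (du/dz)^2."""
    return -(g / rho0) * d_rho_dz / np.maximum(du_dz ** 2, 1e-12)

def eddy_viscosity(nu0, Ri, alpha=10.0):
    """Damped turbulent viscosity under stable stratification (Ri > 0)."""
    return nu0 * (1.0 + alpha * np.maximum(Ri, 0.0)) ** -0.5

def eddy_diffusivity(k0, Ri, beta=3.33):
    """Damped turbulent scalar diffusivity; decays faster than viscosity."""
    return k0 * (1.0 + beta * np.maximum(Ri, 0.0)) ** -1.5

Ri = richardson_number(d_rho_dz=-0.05, du_dz=0.02)   # stably stratified column
print(Ri, eddy_viscosity(1e-3, Ri), eddy_diffusivity(1e-3, Ri))
```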

Relevance:

80.00%

Publisher:

Abstract:

I address the reconstruction of spatially irregularly sampled seismic data onto regular grids. Irregular spatial sampling impairs the results of prestack migration, multiple attenuation, and spectral estimation. Prestack 5-D volumes are often divided into sub-sections for further processing, and shot gathers are easy to obtain from irregularly sampled volumes. My strategy for reconstruction is as follows: I resort the irregularly sampled gathers into a form that is easy to bin and perform bin regularization, then use F-K inversion to reconstruct the seismic data. Because F-K regularization is poor at filling large gaps, I sort the regularly sampled gathers into CMP gathers and propose a high-resolution parabolic Radon transform to interpolate data and extrapolate offsets. To attenuate strong interfering noise (multiples), I use a hybrid-domain high-resolution parabolic Radon transform. F-K regularization ultimately demands lower computing costs, so I propose several methods to further improve the efficiency of F-K inversion: first, I introduce 1D and 2D NFFT algorithms for rapid calculation of the DFT operators; I then develop fast 1D and 2D CG methods to solve the least-squares equations, using a preconditioner to accelerate the convergence of the CG iterations; moreover, I use Delaunay triangulation for the weight calculation, and band-limited frequency and varying-bandwidth techniques for competitive computation. Numerical 2D and 3D examples verify the reasonable results and improved efficiency. Since F-K regularization has poor ability to fill large gaps, I rearrange the data as CMP gathers and develop hybrid-domain high-resolution parabolic Radon transforms, which can be used either to interpolate null traces and extrapolate near and far offsets, or to suppress a strong interfering noise: multiples. I use the transform to attenuate multiples to verify the performance of the algorithm, and I propose routines for industrial application. Numerical examples and field data examples show the good performance of the method.
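
The Delaunay-based weight calculation mentioned above can be sketched under the common convention that each sample receives one third of the area of its incident Delaunay triangles (a proxy for its Voronoi cell area in least-squares Fourier reconstruction); trace positions here are toy data:

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_weights(xy):
    """Area weight per irregular sample: 1/3 of its incident triangle areas."""
    tri = Delaunay(xy)
    w = np.zeros(len(xy))
    for simplex in tri.simplices:
        v = xy[simplex]
        area = 0.5 * abs(np.linalg.det(np.c_[v[1] - v[0], v[2] - v[0]]))
        w[simplex] += area / 3.0
    return w

xy = np.random.default_rng(4).random((50, 2))   # irregular (x, y) trace positions
w = delaunay_weights(xy)
print(w.sum())   # equals the convex-hull area of the positions
```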

Relevance:

80.00%

Publisher:

Abstract:

This thesis mainly studies the technologies of 3-D seismic visualization and the graphical user interface of seismic processing software. Drawing on computer graphics and 3-D geological modeling, the author designs and implements the visualization module of seismic data processing software using OpenGL and Motif. Taking the seismic visualization flow as the subject, and NURBS surface approximation and Delaunay triangulation as two alternative methods, the thesis discusses the key algorithms and technologies of seismic visualization and applies octree space partitioning and mip mapping to enhance system performance. Building on this research, and with portability and scalability in view, the author adopts object-oriented analysis and object-oriented design, uses standard C++ as the programming language, OpenGL as the 3-D graphics library, and Motif as the GUI development tool to implement the seismic visualization framework on the SGI IRIX platform. This thesis also studies the solution of fluid equations in porous media: the 2-D alternating-direction implicit procedure is turned into a 3-D successive over-relaxation iteration, which possesses such virtues as faster computing speed, a faster convergence rate, better adaptability to heterogeneous media, and lower memory demands.
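
The 3-D successive over-relaxation (SOR) iteration mentioned at the end can be sketched for the simplest case, a Laplace problem on a uniform grid; the porous-media coefficients of the actual equations are omitted here for brevity:

```python
import numpy as np

def sor_3d(p, omega=1.7, iters=200):
    """Relax the interior of a 3-D array; boundary values stay fixed."""
    for _ in range(iters):
        for i in range(1, p.shape[0] - 1):
            for j in range(1, p.shape[1] - 1):
                for k in range(1, p.shape[2] - 1):
                    # Gauss-Seidel average of the six neighbours, then
                    # over-relax the update by the factor omega.
                    gs = (p[i-1, j, k] + p[i+1, j, k] +
                          p[i, j-1, k] + p[i, j+1, k] +
                          p[i, j, k-1] + p[i, j, k+1]) / 6.0
                    p[i, j, k] += omega * (gs - p[i, j, k])
    return p

p = np.zeros((10, 10, 10))
p[0] = 1.0                 # one hot boundary face
print(sor_3d(p)[5, 5, 5])  # interior value after relaxation
```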

Relevance:

80.00%

Publisher:

Abstract:

The histological grading of cervical intraepithelial neoplasia (CIN) remains subjective, resulting in inter- and intra-observer variation and poor reproducibility in the grading of cervical lesions. This study has attempted to develop an objective grading system using automated machine vision. The architectural features of cervical squamous epithelium are quantitatively analysed using a combination of computerized digital image processing and Delaunay triangulation analysis; 230 images digitally captured from cases previously classified by a gynaecological pathologist included normal cervical squamous epithelium (n = 30), koilocytosis (n = 46), CIN 1 (n = 52), CIN 2 (n = 56), and CIN 3 (n = 46). Intra- and inter-observer variation had kappa values of 0.502 and 0.415, respectively. A machine vision system was developed in the KS400 macro programming language to segment and mark the centres of all nuclei within the epithelium. By object-oriented analysis of image components, the positional information of the nuclei was used to construct a Delaunay triangulation mesh. Each mesh was analysed to compute triangle dimensions, including the mean triangle area, the mean triangle edge length, and the number of triangles per unit area, giving an individual quantitative profile of measurements for each case. Discriminant analysis of the geometric data revealed the significant discriminatory variables from which a classification score was derived. The scoring system distinguished between normal and CIN 3 in 98.7% of cases and between koilocytosis and CIN 1 in 76.5% of cases, but only 62.3% of the CIN cases were classified into the correct group, with the CIN 2 group showing the highest rate of misclassification. Graphical plots of the triangulation data demonstrated the continuum of morphological change from normal squamous epithelium to the highest grade of CIN, with overlapping of the groups originally defined by the pathologists. This study shows that automated location of nuclei in cervical biopsies using computerized image analysis is possible. Analysis of positional information enables quantitative evaluation of architectural features in CIN using Delaunay triangulation meshes, which is effective in the objective classification of CIN. This demonstrates the future potential of automated machine vision systems in diagnostic histopathology. Copyright (C) 2000 John Wiley and Sons, Ltd.
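
The quantitative profile named above (mean triangle area, mean triangle edge length, triangles per unit area) is straightforward to compute from nuclear centre positions; a sketch with toy points standing in for the segmented nuclei:

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

def triangulation_profile(nuclei):
    """nuclei: (N, 2) array of nuclear centre coordinates."""
    tri = Delaunay(nuclei)
    v = nuclei[tri.simplices]                       # (T, 3, 2) triangle vertices
    e = np.linalg.norm(v - np.roll(v, 1, axis=1), axis=2)
    areas = 0.5 * np.abs((v[:, 1, 0] - v[:, 0, 0]) * (v[:, 2, 1] - v[:, 0, 1]) -
                         (v[:, 2, 0] - v[:, 0, 0]) * (v[:, 1, 1] - v[:, 0, 1]))
    region = ConvexHull(nuclei).volume              # the 2-D hull "volume" is area
    return areas.mean(), e.mean(), len(v) / region  # the three named features

print(triangulation_profile(np.random.default_rng(5).random((60, 2))))
```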

Relevance:

80.00%

Publisher:

Abstract:

Background: Tissue MicroArrays (TMAs) represent a potential high-throughput platform for the analysis and discovery of tissue biomarkers. As TMA slides are produced manually and are subject to processing and sectioning artefacts, the layout of TMA cores on the final slide and the subsequent digital scan (TMA digital slide) is often disturbed, making it difficult to associate cores with their original position in the planned TMA map. Additionally, the individual cores can be greatly altered, and the grid can contain numerous irregularities such as missing cores, rotation, and stretching. These factors demand the development of a robust method for de-arraying TMAs that identifies each TMA core and assigns it to its appropriate coordinates on the constructed TMA slide.

Methodology: This study presents a robust TMA de-arraying method consisting of three functional phases: TMA core segmentation, gridding, and mapping. The segmentation of TMA cores uses a set of morphological operations to identify each TMA core. Gridding then utilises a Delaunay triangulation-based method to find the row and column indices of each TMA core. Finally, mapping correlates each TMA core from a high-resolution TMA whole-slide image with its name within a TMAMap.

Conclusion: This study describes a genuinely robust TMA de-arraying algorithm for the rapid identification of TMA cores from digital slides. The result of this de-arraying algorithm allows the easy partition of each TMA core for further processing. Based on a test group of 19 TMA slides (3,129 cores), 99.84% of cores were segmented successfully, 99.81% of cores were gridded correctly, and 99.96% of cores were mapped to their correct names via TMAMaps. The gridding of TMA cores was also extensively tested using a set of 113 pseudo slides (13,536 cores) with a variety of irregular grid layouts, including missing cores, rotation, and stretching; 100% of the cores were gridded correctly.
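
A hedged sketch of Delaunay-assisted gridding of the kind described in the Methodology: the median Delaunay edge length estimates the core pitch, and row/column indices follow by rounding each core centre to that pitch. The published algorithm is more elaborate (it tolerates rotation and stretching), so this is only an illustration of the idea:

```python
import numpy as np
from scipy.spatial import Delaunay

def grid_indices(centres):
    """Assign (row, col) indices to TMA core centres on a near-regular grid."""
    tri = Delaunay(centres)
    edges = {tuple(sorted(e)) for s in tri.simplices
             for e in [(s[0], s[1]), (s[1], s[2]), (s[0], s[2])]}
    # The median Delaunay edge length is a robust estimate of the core pitch.
    pitch = np.median([np.linalg.norm(centres[a] - centres[b])
                       for a, b in edges])
    rel = (centres - centres.min(axis=0)) / pitch
    cols, rows = np.rint(rel[:, 0]), np.rint(rel[:, 1])
    return rows.astype(int), cols.astype(int)

# A jittered 4 x 5 grid of core centres with one core missing.
g = np.array([[c, r] for r in range(4) for c in range(5)][:-1], float)
g += np.random.default_rng(6).normal(0, 0.05, g.shape)
print(grid_indices(g * 10.0))
```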