952 results for Image processing.


Relevance: 70.00%

Abstract:

In this paper, a new partial differential equation (PDE) based method is presented for denoising images that contain textures. The proposed model combines a nonlinear anisotropic diffusion filter with recent harmonic analysis techniques. Wave atom shrinkage, allied to gradient-based edge detection, is used to guide the diffusion process so as to smooth the image while maintaining its essential characteristics. Two forcing terms maintain and enhance edges, boundaries and oscillatory features in images with irregular details and texture. Experimental results show the performance of our model for texture-preserving denoising when compared to recent methods in the literature. © 2009 IEEE.
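As a rough illustration of the diffusion half of such a model, the sketch below implements classic Perona-Malik anisotropic diffusion with a gradient-driven edge-stopping conductance; the wave atom shrinkage and forcing terms of the paper are omitted, and all function names and parameter values are illustrative.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.15):
    """Perona-Malik nonlinear anisotropic diffusion (a simplified
    stand-in for the paper's diffusion filter; the wave atom
    shrinkage term is omitted)."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbors
        # (np.roll gives periodic boundaries, fine for a sketch).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance: diffusion is suppressed where
        # the local gradient (edges, fine texture) is strong.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```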

Relevance: 70.00%

Abstract:

The main goal of this project is to deploy image processing and segmentation techniques, through an omnidirectional computer vision system, on agricultural mobile robots (AMR) for trajectory navigation and localization problems. Computational methods based on the JSEG algorithm were used to classify and characterize such problems, together with Artificial Neural Networks (ANN) for image recognition. It was thus possible to run simulations and analyze the performance of the JSEG image segmentation technique on the Matlab/Octave computational platforms, along with a customized Back-propagation Multilayer Perceptron (MLP) algorithm and statistical methods, used as structured heuristics, in a Simulink environment. With these procedures in place, it was possible to classify and characterize the HSV color-space segments and to recognize the segmented images, with reasonably accurate results. © 2010 IEEE.
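For reference, a minimal NumPy sketch of HSV color-space segmentation follows; the green_crop_mask function and its hue/saturation bounds are hypothetical, a crude stand-in for the JSEG class maps used in the project.

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Vectorized RGB -> HSV conversion (all values in [0, 1])."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)
    c = v - rgb.min(axis=-1)
    s = np.where(v > 0, c / np.maximum(v, 1e-12), 0.0)
    # Hue, piecewise by which channel holds the maximum.
    safe_c = np.maximum(c, 1e-12)
    h = np.zeros_like(v)
    h = np.where(v == r, ((g - b) / safe_c) % 6, h)
    h = np.where(v == g, (b - r) / safe_c + 2, h)
    h = np.where(v == b, (r - g) / safe_c + 4, h)
    h = np.where(c == 0, 0.0, h / 6.0)
    return np.stack([h, s, v], axis=-1)

def green_crop_mask(rgb):
    """Label pixels whose hue falls in a hypothetical green band --
    a crude stand-in for a learned vegetation class."""
    hsv = rgb_to_hsv(rgb)
    return (hsv[..., 0] > 0.2) & (hsv[..., 0] < 0.45) & (hsv[..., 1] > 0.3)
```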

Relevance: 70.00%

Abstract:

Unlike the first attempts to solve the image categorization problem (often based on global features), several researchers have recently been tackling this research branch from a new vantage point: using features around locally invariant interest points together with visual dictionaries. Although several advances have been made in the visual dictionary literature in the past few years, one problem we still need to cope with is determining the number of representative words in the dictionary. Therefore, in this paper we introduce a new solution for automatically finding the number of visual words in an N-way image categorization problem by means of supervised pattern classification based on optimum-path forest. © 2011 IEEE.
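For context, a minimal bag-of-visual-words sketch follows, using scikit-learn's KMeans; note that the number of words is a fixed parameter here, whereas the paper's contribution is precisely to choose it automatically via optimum-path forest.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptors, n_words=100):
    """Cluster local descriptors (one row each) into a visual
    vocabulary. n_words is fixed here for illustration; the paper
    selects it automatically."""
    return KMeans(n_clusters=n_words, n_init=10).fit(descriptors)

def bow_histogram(kmeans, image_descriptors):
    """Quantize one image's descriptors against the vocabulary and
    return its normalized bag-of-words histogram."""
    words = kmeans.predict(image_descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)
```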

Relevance: 70.00%

Abstract:

The efficiency of image classification tasks can be improved by combining information from several sources, such as shape, color, and texture visual properties. While many works have proposed combining different feature vectors, we model the descriptor combination as an optimization problem to be addressed by evolutionary-based techniques, which compute distances between samples that maximize their separability in the feature space. The robustness of the proposed technique is assessed with the Optimum-Path Forest classifier. Experiments showed that the proposed methodology can outperform the individual information provided by single descriptors on well-known public datasets. © 2012 IEEE.
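A toy sketch of the idea follows: per-descriptor distance matrices are combined with learned weights, and a simple evolutionary loop searches for weights that maximize separability, scored here by 1-NN accuracy as a stand-in for the paper's OPF-based evaluation; all names and parameters are illustrative.

```python
import numpy as np

def combined_distance(dists, w):
    """Weighted combination of per-descriptor distance matrices:
    d(x, y) = sum_i w_i * d_i(x, y)."""
    return sum(wi * di for wi, di in zip(w, dists))

def fitness(dists, labels, w):
    """1-NN accuracy under the combined distance, used as a
    separability score."""
    d = combined_distance(dists, w)
    np.fill_diagonal(d, np.inf)          # never match a sample to itself
    return np.mean(labels[d.argmin(axis=1)] == labels)

def evolve_weights(dists, labels, pop=20, gens=50, seed=0):
    """Toy evolutionary search: mutate around the best weights so far."""
    rng = np.random.default_rng(seed)
    best_w, best_f = None, -1.0
    w = rng.random((pop, len(dists)))
    for _ in range(gens):
        scores = np.array([fitness(dists, labels, wi) for wi in w])
        i = scores.argmax()
        if scores[i] > best_f:
            best_f, best_w = scores[i], w[i].copy()
        w = np.abs(best_w + 0.1 * rng.standard_normal((pop, len(dists))))
    return best_w, best_f
```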

Relevance: 70.00%

Abstract:

Digital techniques have been developed and validated to semiquantitatively assess immunohistochemical nuclear staining. Currently, visual classification is the standard for qualitative nuclear evaluation. Analysis of the pixels that represent the immunohistochemical labeling can be more sensitive, reproducible and objective than visual grading. This study compared two semiquantitative digital image analysis techniques with three visual image analysis techniques for estimating p53 nuclear immunostaining. Methods: Sixty-three sun-exposed forearm-skin biopsies were photographed and submitted to three visual image analyses: the qualitative visual evaluation method (0 to 4+), the percentage of labeled nuclei, and HSCORE. Digital image analysis was performed using ImageJ 1.45p; the density of nuclei was scored per epithelial area (DensNU) and the pixel density was established in marked suprabasal epithelium (DensPSB). Results: Statistical significance was found in the agreement and correlation among the evaluators' visual estimates, and in the correlation of the evaluators' median visual score, the HSCORE and the percentage of marked nuclei with the DensNU and DensPSB estimates. DensNU was strongly correlated with the percentage of p53-marked nuclei in the epidermis, and DensPSB with the HSCORE. Conclusion: The parameters presented herein can be applied in the routine analysis of immunohistochemical nuclear staining of the epidermis. © 2012 John Wiley & Sons A/S.
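Purely as an illustration of the pixel-density idea behind DensPSB, here is a minimal NumPy sketch; the threshold and the region-of-interest mask are hypothetical stand-ins for the study's ImageJ procedure.

```python
import numpy as np

def pixel_density(img_gray, roi_mask, threshold=120):
    """Fraction of region-of-interest pixels darker than a staining
    threshold -- a rough analogue of a pixel-density score (the
    threshold value of 120 is purely illustrative)."""
    stained = (img_gray < threshold) & roi_mask
    return stained.sum() / max(roi_mask.sum(), 1)
```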

Relevance: 70.00%

Abstract:

The aim of this study was to evaluate the accuracy of virtual three-dimensional (3D) reconstructions of human dry mandibles produced from two segmentation protocols (outline only and all-boundary lines). Twenty virtual 3D images were built from computed tomography (CT) exams of 10 dry mandibles, on which linear measurements between anatomical landmarks were obtained and compared at an error probability of 5%. The results showed no statistically significant difference between the dry mandibles and the virtual 3D reconstructions produced from the segmentation protocols tested (p = 0.24). When designing a virtual 3D reconstruction, both the outline-only and the all-boundary-lines segmentation protocols can therefore be used. Virtual processing of CT images is the most complex stage in the manufacture of a biomodel; establishing a better protocol for this phase allows the construction of a biomodel whose characteristics are closer to the original anatomical structures, which is essential to ensure correct preoperative planning and suitable treatment.

Relevance: 70.00%

Abstract:

The aim of this study was to evaluate the influence of digitization parameters on periapical radiographic image quality with regard to anatomic landmarks. Digitized images (n = 160) were obtained using a flatbed scanner at resolutions of 300, 600 and 2400 dpi. The 2400 dpi radiographs were downsampled to 300 and 600 dpi before storage. Digitizations were performed with and without black masking, using 8-bit and 16-bit grayscale, and saved in TIFF format. Four anatomic landmarks were classified by two observers (very good, good, moderate, regular, poor) in two random sessions. Intraobserver and interobserver agreements were evaluated with Kappa statistics and varied according to the anatomic landmark and resolution used. The cemento-enamel junction was the landmark with the poorest concordance: concordance ranged from regular to moderate for the intraobserver evaluation and from regular to poor for the interobserver evaluation. The use of black masking provided better results in the digitized image, so covering radiographs with a mask during digitization is necessary.
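Since the agreement analysis rests on Kappa statistics, a minimal sketch of Cohen's kappa for two raters follows; the scores are assumed to be coded as integers 0-4 matching the five-level grading scale.

```python
import numpy as np

def cohens_kappa(rater1, rater2, n_categories=5):
    """Cohen's kappa for two raters' scores, coded 0..n_categories-1."""
    confusion = np.zeros((n_categories, n_categories))
    for a, b in zip(rater1, rater2):
        confusion[a, b] += 1
    confusion /= confusion.sum()
    p_o = np.trace(confusion)                  # observed agreement
    p_e = confusion.sum(0) @ confusion.sum(1)  # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)
```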

Relevance: 70.00%

Abstract:

The human dentition is naturally translucent, opalescent and fluorescent. Differences between the fluorescence of tooth structure and that of restorative materials may result in distinct metameric properties and hence perceptibly disparate esthetic behavior, which impairs the esthetic result of restorations, frustrating both patients and staff. In this study, we evaluated the fluorescence of different composites, all in shade A2 unless noted: Durafill (Du), Charisma (Ch), Venus (Ve), Opallis dentin and enamel (OPD and OPE), Point 4 (P4), Z100 (Z1), Z250 (Z2), Te-Econom (TE), Tetric Ceram (TC), Tetric Ceram N in shades A1, A2 and A4 (TN1, TN2, TN4), Four Seasons enamel and dentin (4SE and 4SD), Empress Direct enamel and dentin (EDE and EDD), and Brilliant (Br). Cylindrical specimens were prepared, coded and photographed in a standardized manner with a Canon EOS digital camera (ISO 400, f/2.8 aperture, 1/30 s shutter speed), in a dark environment under UV light (25 W). The images were analyzed with the ScanWhite©-DMC/Darwin systems software. The results showed statistical differences between the groups (p < 0.05), and between these groups and the average fluorescence of the dentition of young (18 to 25 years) and adult (40 to 45 years) subjects taken as controls. It can be concluded that the composites Z100, Z250 (3M ESPE) and Point 4 (Kerr) do not match the fluorescence of the human dentition, and that the fluorescence of the materials is affected by their shade.

Relevance: 70.00%

Abstract:

Research on image processing has shown that combining segmentation methods may lead to a solid approach for extracting semantic information from different sorts of images. Within this context, the Normalized Cut (NCut) is usually used as the final partitioning tool for graphs modeled by some chosen method. This work explores the Watershed Transform as a modeling tool, using different criteria of the hierarchical Watershed to convert an image into an adjacency graph. The Watershed is combined with an unsupervised distance learning step that redistributes the graph weights and redefines the similarity matrix before the final segmentation step using NCut. Using the Berkeley Segmentation Data Set and Benchmark, our goal is to compare the results obtained with this method against previous work to validate its performance.
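A rough sketch of such a watershed-to-NCut pipeline is shown below, assuming a recent scikit-image in which the RAG utilities live in skimage.graph (older releases expose them under skimage.future.graph); the unsupervised distance learning step is omitted and all parameter values are illustrative.

```python
from skimage import color, filters, graph, segmentation

def watershed_ncut(rgb, n_markers=400, thresh=0.2):
    """Watershed over-segmentation -> region adjacency graph ->
    normalized cut (a simplified pipeline, without the distance
    learning step described above)."""
    gradient = filters.sobel(color.rgb2gray(rgb))
    # Over-segment with compact watershed to get small regions.
    labels = segmentation.watershed(gradient, markers=n_markers,
                                    compactness=0.001)
    # Build a RAG weighted by mean-color similarity, then partition
    # it with NCut.
    rag = graph.rag_mean_color(rgb, labels, mode='similarity')
    return graph.cut_normalized(labels, rag, thresh=thresh)
```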

Relevance: 70.00%

Abstract:

Image segmentation is a process frequently used in several different areas, including Cartography, where feature extraction is a very troublesome task and successful results require complex techniques and good-quality data. The aim of this paper is to study Digital Image Processing techniques, with emphasis on Mathematical Morphology, for segmenting Remote Sensing imagery using morphological operators, mainly the multi-scale morphological gradient operator. In the segmentation process, pre-processing operators from Mathematical Morphology were used, and the multi-scale gradient was implemented to create one of the images used as the marker image. An orbital image from the Landsat satellite (TM sensor) was used, and the routines were implemented in the MATLAB software. Tests verified the performance of the implemented operators, and the results were analyzed; the best result obtained with morphology was also compared against conventional feature extraction techniques. The extraction of linear features using mathematical morphology techniques can contribute to cartographic applications, such as the updating of cartographic products. © Springer-Verlag 2004.
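One common formulation of the multi-scale morphological gradient, averaging dilation-minus-erosion over structuring elements of increasing size, is sketched below with SciPy; the paper's exact definition may differ.

```python
import numpy as np
from scipy import ndimage as ndi

def multiscale_morph_gradient(img, scales=(1, 2, 3)):
    """Average of morphological gradients (dilation - erosion)
    computed with square structuring elements of increasing size."""
    acc = np.zeros(img.shape, dtype=float)
    for s in scales:
        size = 2 * s + 1
        dil = ndi.grey_dilation(img, size=(size, size))
        ero = ndi.grey_erosion(img, size=(size, size))
        acc += (dil - ero).astype(float)
    return acc / len(scales)
```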

Relevance: 70.00%

Abstract:

This paper presents an optimum user-steered boundary tracking approach for image segmentation, which simulates the behavior of water flowing through a riverbed. The riverbed approach was devised using the image foresting transform with a never-exploited connectivity function. We analyze its properties in the derived image graphs and discuss its theoretical relation with other popular methods such as live wire and graph cuts. Several experiments show that riverbed can significantly reduce the number of user interactions (anchor points), as compared to live wire for objects with complex shapes. This paper also includes a discussion about how to combine different methods in order to take advantage of their complementary strengths.
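To make the image foresting transform (IFT) machinery concrete, here is a sketch using the classic f_max connectivity function; the riverbed method relies on a different, novel connectivity function introduced by the paper, so this illustrates only the general framework, not the paper's algorithm. Node weights stand in for arc weights for brevity.

```python
import heapq
import numpy as np

def ift_fmax(weights, seeds):
    """IFT with the f_max path cost (cost of a path = its largest
    arc weight), propagated Dijkstra-style from the seeds."""
    h, w = weights.shape
    cost = np.full((h, w), np.inf)
    label = np.full((h, w), -1, dtype=int)
    heap = []
    for i, (r, c) in enumerate(seeds):
        cost[r, c] = 0.0
        label[r, c] = i
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        c0, r, c = heapq.heappop(heap)
        if c0 > cost[r, c]:
            continue  # stale queue entry
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                new_cost = max(c0, weights[nr, nc])
                if new_cost < cost[nr, nc]:
                    cost[nr, nc] = new_cost
                    label[nr, nc] = label[r, c]
                    heapq.heappush(heap, (new_cost, nr, nc))
    return cost, label
```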

Relevance: 70.00%

Abstract:

A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GCmax. The output of GCmax coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GCmax is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst case scenario, the GCmax algorithm runs in linear time with respect to the variable $M = |C| + |Z|$, where $|C|$ is the image scene size and $|Z|$ is the size of the allowable range, $Z$, of the associated weight/affinity function. For most implementations, $Z$ is identical to the set of allowable image intensity values, and its size can be treated as small with respect to $|C|$, meaning that $O(M) = O(|C|)$. In such a situation, GCmax runs in linear time with respect to the image size $|C|$. We show that the output of GCmax constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the $\ell_\infty$ norm $\|F_P\|_\infty$ of the map $F_P$ that associates, with every element $e$ from the boundary of an object $P$, its weight $w(e)$. This formulation brings IRFC algorithms into the realm of the graph cut energy minimizers, with energy functions $\|F_P\|_q$ for $q \in [1, \infty]$. Of these, the best known minimization problem is for the energy $\|F_P\|_1$, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that the minimization problem for $\|F_P\|_q$, $q \in [1, \infty)$, is identical to that for $\|F_P\|_1$ when the original weight function $w$ is replaced by $w^q$. Thus, any algorithm GCsum solving the $\|F_P\|_1$ minimization problem also solves the one for $\|F_P\|_q$ with $q \in [1, \infty)$, so just two algorithms, GCsum and GCmax, are enough to solve all $\|F_P\|_q$-minimization problems. We also show that, for any fixed weight assignment, the solutions of the $\|F_P\|_q$-minimization problems converge to a solution of the $\|F_P\|_\infty$-minimization problem (the fact that $\|F_P\|_\infty = \lim_{q \to \infty} \|F_P\|_q$ is not enough to deduce that). An experimental comparison of the performance of the GCmax and GCsum algorithms is included. This concentrates on comparing the actual (as opposed to provable worst scenario) running times of the algorithms, as well as the influence of the choice of seeds on the output.
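A tiny numeric sketch of the energy family discussed above: cut_energy evaluates $\|F_P\|_q$ on the (hypothetical) boundary weights of a cut, and the loop shows the $\ell_q$ energies approaching the $\ell_\infty$ (GCmax/IRFC) energy as $q$ grows.

```python
import numpy as np

def cut_energy(boundary_weights, q):
    """l_q energy of a cut, ||F_P||_q over its boundary edge weights
    (q = np.inf gives the GCmax / IRFC energy)."""
    wts = np.asarray(boundary_weights, dtype=float)
    if np.isinf(q):
        return wts.max()
    return (wts ** q).sum() ** (1.0 / q)

# Two hypothetical cuts. Note that minimizing the q-th power of the
# l_q energy under weights w equals minimizing the l_1 energy under
# weights w**q, as the abstract observes.
cut_a, cut_b = [0.9, 0.1, 0.1], [0.5, 0.5, 0.5]
for q in (1, 2, 10, np.inf):
    print(q, cut_energy(cut_a, q), cut_energy(cut_b, q))
```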

Relevance: 70.00%

Abstract:

OBJECTIVE: To evaluate tools for the fusion of images generated by tomography and by structural and functional magnetic resonance imaging. METHODS: Structural and functional magnetic resonance imaging were performed in a 3-Tesla scanner while a volunteer, who had previously undergone cranial tomography, executed motor and somatosensory tasks. The image data were analyzed with different programs, and the results were compared. RESULTS: We constructed a flow chart of computational processes that allowed measurement of the spatial congruence between the methods. No single computational tool contained the entire set of functions necessary to achieve the goal. CONCLUSION: The fusion of the images from the three methods proved feasible with the use of four free-access software programs (OsiriX, Register, MRIcro and FSL). Our results may serve as a basis for building software that will be useful as a virtual tool prior to neurosurgery.

Relevance: 70.00%

Abstract:

This thesis deals with Visual Servoing and the disciplines closely connected to it, such as projective geometry, image processing, robotics and nonlinear control. More specifically, the work addresses the problem of controlling a robotic manipulator through one of the most widely used Visual Servoing techniques: Image Based Visual Servoing (IBVS). In IBVS the robot is driven by an online feedback control loop that is closed directly in the 2D space of the camera sensor. The work considers the case of a monocular system with a single camera mounted on the robot end effector (eye-in-hand configuration). Through IBVS the system can be positioned with respect to a fixed 3D target by minimizing the differences between its initial view and its goal view, corresponding respectively to the initial and goal system configurations: the robot's Cartesian motion is thus generated by visual information alone.

However, executing a positioning control task with IBVS is not straightforward, because singularity problems may occur, and local minima may be reached where the reached image is very close to the target one but the 3D positioning task is far from fulfilled; this happens in particular for large camera displacements, when the initial and goal target views are noticeably different. To overcome the singularity and local-minima drawbacks while maintaining IBVS's robustness with respect to modeling and camera calibration errors, suitable image path planning can be exploited. This work deals with the problem of generating suitable image-plane trajectories for the tracked points of the servoing control scheme (a trajectory being a path plus a time law). The generated image-plane paths must be feasible, i.e., they must be compliant with the rigid-body motion of the camera with respect to the object, so as to avoid image Jacobian singularities and local-minima problems. In addition, the planned image trajectories must generate camera velocity screws that are smooth and within the allowed bounds of the robot.

We show that a scaled 3D motion planning algorithm can be devised to generate feasible image-plane trajectories. Since the image paths are generated offline, it is also possible to tune the planning parameters so as to keep the target inside the camera field of view even when, in some unfortunate cases, the target feature points would otherwise leave the camera image due to the 3D robot motions. To test the validity of the proposed approach, both experimental and simulation results are reported, taking into account the influence of noise on the path planning strategy. The experiments were carried out with a 6-DOF anthropomorphic manipulator with a FireWire camera installed on its end effector; the results demonstrate the good performance and feasibility of the proposed approach.
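As background for the IBVS control loop described above, a textbook sketch follows: the interaction matrix of a normalized image point and the classic law v = -λ L⁺ e. This is the standard formulation, not the thesis' planned-trajectory scheme; feature coordinates, depths and the gain are illustrative.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image
    point (x, y) at depth Z, mapping the 6-DOF camera velocity
    screw to the point's image velocity."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, goal, depths, lam=0.5):
    """One step of the IBVS law v = -lambda * L^+ * e, with e the
    stacked feature error between current and goal views."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(goal)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```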

Relevance: 70.00%

Abstract:

The discrete cosine transform (DCT) is an important functional block for image processing applications, and its implementation has traditionally been viewed as a specialized research task. We apply a micro-architecture-based methodology to the hardware implementation of an efficient DCT algorithm in a digital design course. Several circuit optimization and design space exploration techniques at the register-transfer and logic levels are introduced in class for generating the final design. The students not only learn how the algorithm can be implemented, but also gain insight into how other signal processing algorithms can be translated into hardware implementations. Since signal processing has very broad applications, the study and implementation of an extensively used signal processing algorithm in a digital design course significantly enhances the students' learning experience in both digital signal processing and digital design.
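As an algorithmic reference point for such a course project, here is a NumPy sketch of the orthonormal DCT-II on square blocks (a software model of the transform, not the register-transfer-level design discussed above).

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix: row k holds
    cos(pi * (2j + 1) * k / (2n)) with standard scaling."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def dct2(block):
    """Separable 2D DCT of a square image block: C @ block @ C.T."""
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T
```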