935 results for "Optical signal and image processing device"
Abstract:
OBJECTIVE: To carry out a retrospective study to determine whether human papillomavirus (HPV) infection and immunohistochemical expression of p53 and proliferating cell nuclear antigen (PCNA) are related to the risk of oral cancer. STUDY DESIGN: Fifty-seven oral biopsies, consisting of 30 oral squamous papillomas (OSPs) and 27 oral squamous cell carcinomas (OSCCs), were tested for the presence of HPV 6/11 and 16/18 by in situ hybridization with catalyzed signal amplification. p53 and PCNA expression was analyzed by immunohistochemistry and evaluated quantitatively by image analysis. RESULTS: Nineteen of the 57 oral lesions (33.3%) were positive for HPV. HPV 6/11 was found in 6 of 30 (20%) OSPs and 1 of 27 (3.7%) OSCCs. HPV 16/18 was found in 10 of 27 (37%) OSCCs and 2 of 30 (6.7%) OSPs. Sixteen of the 19 HPV-positive cases (84.2%) were p53 negative; 5 (9%) were HPV 6/11 and 11 (19%) HPV 16/18, with an inverse correlation between the presence of HPV DNA and p53 expression (P=.017, P < .05). PCNA expression appeared in 18 (94.7%) of the HPV-positive cases, and HPV 16/18 was associated with the intensity of PCNA expression and with OSCCs (P=.037, P < .05). CONCLUSION: Quantitative evaluation of p53 by image analysis showed an inverse correlation between p53 expression and HPV presence, suggesting protein degradation. Image analysis also demonstrated that PCNA expression was more intense in HPV DNA 16/18 OSCCs. These findings suggest the involvement of high-risk HPV types in oral carcinogenesis.
Abstract:
A simple and robust method for quantitative image analysis of ceramic grains is presented. Based on optimal imaging conditions for reflected-light microscopy of bulk samples, a digital image processing routine was developed for shading correction, noise suppression and contour enhancement. Image analysis was restricted to grains selected according to their concavities, evaluated by a perimeter-ratio shape factor, in order to avoid the effects of breakouts and ghost boundaries caused by the limitations of ceramographic preparation. As an example, the method was applied to two ceramics to compare their grain size and morphology distributions. In this case, most of the artefacts introduced by ceramographic preparation could be discarded thanks to the perimeter-ratio exclusion range.
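As a rough illustration of the kind of routine described (a sketch under assumed filter choices and parameter values, using scikit-image rather than the authors' software), the following selects grains by a perimeter-ratio shape factor after shading correction and noise filtering:

```python
# Sketch only, assuming a scikit-image pipeline and illustrative parameters
# (not the authors' software): shading correction, noise suppression, and grain
# selection by a perimeter-ratio shape factor to exclude breakouts and ghost boundaries.
import numpy as np
from skimage import filters, measure, morphology
from skimage.util import img_as_float

def analyze_grains(gray, min_perimeter_ratio=0.90):
    gray = img_as_float(gray)
    # Shading correction: flat-field by a heavily smoothed copy of the image.
    background = filters.gaussian(gray, sigma=50)
    flat = gray / (background + 1e-6)
    # Noise suppression before segmentation.
    smooth = filters.median(flat, morphology.disk(3))
    # Segment bright grains from dark (etched) grain boundaries.
    labels = measure.label(smooth > filters.threshold_otsu(smooth))
    sizes = []
    for region in measure.regionprops(labels):
        # Perimeter ratio: convex-hull perimeter over actual perimeter, close to 1
        # for convex grain sections; concave artefacts fall below the threshold.
        ratio = measure.perimeter(region.convex_image) / max(region.perimeter, 1e-6)
        if ratio >= min_perimeter_ratio:
            sizes.append(region.equivalent_diameter)
    return np.array(sizes)   # grain-size distribution of the accepted grains
```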
Abstract:
This paper presents a semi-automated method for extracting road segments from medium-resolution images based on active testing and edge analysis. The method consists of two sequential and independent stages. First, an active testing method, based on a sequential and local exploitation of the image, is used to extract an approximate road centreline. Second, an iterative strategy based on edge analysis and the approximate centreline is used to measure the road centreline precisely. Based on the results obtained with medium-resolution test images, the method seems very promising. In general, it proved to be very accurate whenever the roads are characterized by two well-defined anti-parallel edges, and robust even in the presence of large obstacles such as trees and shadows.
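A minimal, hypothetical sketch of the edge-analysis idea in the second stage (not the authors' implementation): an approximate centreline point is refined by locating the two anti-parallel road edges on the intensity profile perpendicular to the road direction and taking their midpoint.

```python
# Hypothetical sketch (not the authors' code): refine an approximate centreline
# point using the two anti-parallel road edges found along the profile
# perpendicular to the local road direction.
import numpy as np

def refine_centre(image, point, direction, half_width=15):
    """image: 2D grayscale array; point: (row, col); direction: unit tangent vector."""
    normal = np.array([-direction[1], direction[0]])             # perpendicular direction
    offsets = np.arange(-half_width, half_width + 1)
    rows = np.clip(np.round(point[0] + offsets * normal[0]).astype(int), 0, image.shape[0] - 1)
    cols = np.clip(np.round(point[1] + offsets * normal[1]).astype(int), 0, image.shape[1] - 1)
    profile = image[rows, cols].astype(float)
    grad = np.gradient(profile)
    left = offsets[np.argmax(grad)]      # strongest dark-to-bright transition (one edge)
    right = offsets[np.argmin(grad)]     # strongest bright-to-dark transition (other edge)
    return np.asarray(point) + 0.5 * (left + right) * normal     # midpoint between the edges
```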
Abstract:
The aim of this paper is to present a photogrammetric method for determining the dimensions of flat surfaces, such as billboards, from a single digital image. A mathematical model was adapted to generate linear equations for vertical and horizontal lines in the object space. These lines are identified and measured in the image, and the rotation matrix is computed using an indirect method. The distance between the camera and the surface is measured with a lasermeter, providing the coordinates of the camera perspective center. The eccentricity of the lasermeter center relative to the camera perspective center is modeled by three translations, which are computed using a calibration procedure. Experiments were performed to test the proposed method, and the results achieved are within a relative error of about 1 percent in areas and distances in the object space. This accuracy fulfills the requirements of the intended applications. © 2005 American Society for Photogrammetry and Remote Sensing.
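For illustration only, a pinhole-model sketch (the conventions, function and inputs are assumptions, not the paper's exact formulation) of how an image point can be back-projected onto the measured flat surface once the rotation matrix and the camera perspective center are known:

```python
# Illustrative pinhole-model sketch (assumed conventions, not the paper's model):
# back-project an image point onto the measured flat surface to obtain object-space
# coordinates, from which billboard distances and areas follow.
import numpy as np

def image_to_plane(xy, f, R, C, plane_normal, plane_point):
    """xy: image coordinates (x, y) relative to the principal point; f: focal length;
    R: 3x3 rotation from camera to object space; C: camera perspective centre;
    plane given by one of its points and its normal; all as numpy arrays."""
    ray_cam = np.array([xy[0], xy[1], -f])            # ray direction in the camera frame
    ray_obj = R @ ray_cam                             # rotate the ray into object space
    t = plane_normal @ (plane_point - C) / (plane_normal @ ray_obj)
    return C + t * ray_obj                            # intersection with the flat surface

# e.g. billboard width from two back-projected corners:
# width = np.linalg.norm(image_to_plane(p1, f, R, C, n, P0) - image_to_plane(p2, f, R, C, n, P0))
```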
Abstract:
In this work an image pre-processing module has been developed to extract quantitative information from plantation images with various degrees of infestation. Four filters comprise this module: the first one acts on smoothness of the image, the second one removes image background enhancing plants leaves, the third filter removes isolated dots not removed by the previous filter, and the fourth one is used to highlight leaves' edges. At first the filters were tested with MATLAB, for a quick visual feedback of the filters' behavior. Then the filters were implemented in the C programming language. At last, the module as been coded in VHDL for the implementation on a Stratix II family FPGA. Tests were run and the results are shown in this paper. © 2008 Springer-Verlag Berlin Heidelberg.
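A rough Python analogue of the four-filter chain is sketched below; the excess-green index used for background removal and the numeric thresholds are assumptions for illustration (the original module was written in MATLAB, then C, then VHDL).

```python
# Rough analogue of the four-filter chain; the excess-green index and the numeric
# thresholds are illustrative assumptions, not the module's actual filters.
import numpy as np
from scipy import ndimage as ndi

def preprocess(rgb):
    rgb = rgb.astype(float)                               # expects an 8-bit RGB image
    # 1) smoothing filter
    smooth = ndi.median_filter(rgb, size=(3, 3, 1))
    # 2) background removal: keep vegetation pixels via an excess-green index
    r, g, b = smooth[..., 0], smooth[..., 1], smooth[..., 2]
    mask = (2 * g - r - b) > 20                           # illustrative threshold
    # 3) remove isolated dots left over from the previous filter
    mask = ndi.binary_opening(mask, structure=np.ones((3, 3)))
    # 4) highlight the leaves' edges
    edges = np.hypot(ndi.sobel(mask.astype(float), axis=0),
                     ndi.sobel(mask.astype(float), axis=1))
    return mask, edges
```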
Abstract:
Optical microscopy and morphometric analysis were used in this in vitro study to evaluate the cleaning of the apical region in root canals with mild or moderate curvatures subjected to biomechanical preparation with a rotary system, as well as to assess the amount of material extruded into the periapical area. Lateral incisors (n = 32), 16 with curvature angles smaller than or equal to 10° (GI) and 16 with angles between 11° and 25° (GII), were submitted to Hero 642 rotary instrumentation with different surgical diameters: (A) 30.02 and (B) 45.02. Irrigation was performed at each change of instrument with 5 mL of ultrapure Milli-Q water, and the material extruded through the apical foramen was collected. Root cross-sections were subjected to histological analysis by optical microscopy (×40) and the images were evaluated morphometrically using the Image Tool software. The extruded material was quantified by weighing after liquid evaporation. ANOVA showed no statistically significant differences (p>0.05) among the groups with respect to the cleaning of the apical region. Regarding the amount of extruded material, Tukey's HSD test showed that canals with mild curvature prepared with the 45.02 surgical diameter had significantly higher values (p<0.05) than the other groups, which were similar to each other (p>0.05). In conclusion, the cleaning of the apical region did not differ among the groups, considering root curvature and the surgical diameter of the instruments used for apical preparation. The amount of extruded material was greater in canals with mild curvature prepared with the 45.02 surgical instrument diameter.
Abstract:
Optical remote sensing techniques have obvious advantages for monitoring gas and aerosol emissions, since they enable operation over large distances, far from hostile environments, and fast processing of the measured signal. In this study two remote sensing devices, a Lidar (Light Detection and Ranging) for monitoring the vertical profile of backscattered light intensity and a Sodar (acoustic radar, Sound Detection and Ranging) for monitoring the vertical profile of the wind vector, were operated during specific periods. The acquired data were processed and compared with air quality data obtained from ground-level monitoring stations, in order to verify the possibility of using the remote sensing techniques to monitor industrial emissions. The campaigns were carried out in the area of the Environmental Research Center (Cepema) of the University of São Paulo, in the city of Cubatão, Brazil, a large industrial site hosting numerous industries, including an oil refinery, a steel plant, and fertilizer, cement and chemical/petrochemical plants. The local environmental problems caused by the industrial activities are aggravated by the climate and topography of the site, which are unfavorable to pollutant dispersion. Results of a campaign covering a 24-hour period are presented, showing data from the Lidar, an air quality monitoring station and the Sodar. © 2011 SPIE.
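As a hedged example of a common first step when inspecting lidar backscatter profiles (an assumption here, not a description of the campaign's actual processing chain), the raw return can be background-subtracted and corrected for the 1/r² geometric decay:

```python
# Assumed example of a standard range correction for lidar returns; not necessarily
# the processing applied to the Cepema campaign data.
import numpy as np

def range_corrected_signal(raw, r, background):
    """raw: measured return per range bin; r: range of each bin (m);
    background: signal level estimated from far, aerosol-free bins."""
    return (np.asarray(raw, dtype=float) - background) * np.asarray(r, dtype=float) ** 2
```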
Abstract:
Unlike the first attempts to solve the image categorization problem (often based on global features), several researchers have recently been tackling this research branch from a new vantage point: using features around locally invariant interest points and visual dictionaries. Although several advances have been made in the visual dictionary literature in the past few years, a problem we still need to cope with is determining the number of representative words in the dictionary. Therefore, in this paper we introduce a new solution for automatically finding the number of visual words in an N-way image categorization problem by means of supervised pattern classification based on optimum-path forest. © 2011 IEEE.
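For context, a hedged sketch of the visual-dictionary pipeline this work builds on; the paper's contribution, choosing the number of words automatically via optimum-path forest, is not reproduced here, so a plain k-means with a fixed k stands in for the clustering step.

```python
# Hedged sketch of a bag-of-visual-words pipeline; k-means with a fixed number of
# words stands in for the paper's optimum-path-forest-based selection of that number.
import numpy as np
from sklearn.cluster import KMeans

def bag_of_words(descriptor_sets, n_words=200):
    """descriptor_sets: list of (n_i x d) arrays of local descriptors, one per image."""
    dictionary = KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(descriptor_sets))
    histograms = []
    for desc in descriptor_sets:
        words = dictionary.predict(desc)                         # assign descriptors to visual words
        hist = np.bincount(words, minlength=n_words).astype(float)
        histograms.append(hist / hist.sum())                     # normalized bag-of-words vector
    return np.array(histograms), dictionary
```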
Abstract:
With the widespread proliferation of computers, many human activities entail the use of automatic image analysis. The basic features used for image analysis include color, texture, and shape. In this paper, we propose a new shape description method, called Hough Transform Statistics (HTS), which uses statistics from the Hough space to characterize the shape of objects or regions in digital images. A modified version of this method, called Hough Transform Statistics neighborhood (HTSn), is also presented. Experiments carried out on three popular public image databases showed that the HTS and HTSn descriptors are robust, since they presented precision-recall results much better than several other well-known shape description methods. When compared to the Beam Angle Statistics (BAS) method, the shape description method that inspired their development, both HTS and HTSn presented inferior results on the precision-recall criterion, but superior results on the processing time and multiscale separability criteria. The linear complexity of the HTS and HTSn algorithms, in contrast to BAS, makes them more appropriate for shape analysis in high-resolution image retrieval tasks on the very large databases that are common nowadays. (C) 2014 Elsevier Inc. All rights reserved.
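A toy example only: the exact statistics used by HTS and HTSn are defined in the paper; the sketch below merely shows one assumed way of deriving a shape feature vector from the Hough accumulator of a binary contour image.

```python
# Assumed toy example of building a feature vector from the Hough (rho, theta)
# accumulator of a binary contour image; not the HTS/HTSn statistics themselves.
import numpy as np
from skimage.transform import hough_line

def hough_descriptor(contour_image, n_angles=180):
    theta = np.linspace(-np.pi / 2, np.pi / 2, n_angles, endpoint=False)
    accumulator, angles, dists = hough_line(contour_image, theta=theta)
    acc = accumulator.astype(float)
    acc /= acc.sum() + 1e-12              # normalization gives scale invariance of the votes
    return acc.sum(axis=0)                # one simple statistic per angle: its total vote mass
```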
Abstract:
Research on image processing has shown that combining segmentation methods may lead to a solid approach for extracting semantic information from different sorts of images. Within this context, the Normalized Cut (NCut) is usually used as the final partitioning tool for graphs built with some chosen modeling method. This work explores the Watershed Transform as a modeling tool, using different criteria of the hierarchical Watershed to convert an image into an adjacency graph. The Watershed is combined with an unsupervised distance learning step that redistributes the graph weights and redefines the similarity matrix before the final segmentation step using NCut. Using the Berkeley Segmentation Data Set and Benchmark as a reference, our goal is to compare the results obtained with this method against previous work to validate its performance.
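A minimal sketch of the watershed-to-NCut pipeline, using scikit-image as an assumed toolbox and omitting the paper's unsupervised distance-learning step that re-weights the graph before NCut:

```python
# Minimal sketch with scikit-image (assumed toolbox); the distance-learning step
# that redefines the similarity weights is omitted here.
from skimage import color, filters, segmentation, graph   # skimage >= 0.20; older releases: skimage.future.graph

def watershed_ncut(rgb, markers=400, ncut_thresh=0.001):
    gradient = filters.sobel(color.rgb2gray(rgb))
    # Over-segment with the watershed to obtain small primitive regions.
    labels = segmentation.watershed(gradient, markers=markers, compactness=0.001)
    # Region adjacency graph; the similarity weights (here from mean colors) are
    # what the distance-learning step would redefine.
    rag = graph.rag_mean_color(rgb, labels, mode='similarity')
    # Final partition with the Normalized Cut.
    return graph.cut_normalized(labels, rag, thresh=ncut_thresh)
```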
Abstract:
This paper addresses the problem of survivable lightpath provisioning in wavelength-division-multiplexing (WDM) mesh networks, taking into consideration optical-layer protection and realistic optical signal quality constraints. The investigated networks use sparsely placed optical–electrical–optical (O/E/O) modules for regeneration and wavelength conversion. Given a fixed network topology with a number of sparsely placed O/E/O modules and a set of connection requests, a pair of link-disjoint lightpaths is established for each connection. Because of physical impairments and wavelength continuity, both the working and protection lightpaths need to be regenerated at some intermediate nodes to overcome signal quality degradation and wavelength contention. In the present paper, resource-efficient provisioning solutions are obtained with the objective of maximizing resource sharing. The authors propose a resource-sharing scheme that supports three resource-sharing scenarios: a conventional wavelength-link sharing scenario, which shares wavelength links between protection lightpaths, and two new scenarios, which share O/E/O modules between protection lightpaths and between working and protection lightpaths. An integer linear programming (ILP)-based solution approach is used to find optimal solutions. The authors also propose a local optimization heuristic approach and a tabu search heuristic approach to solve this problem for real-world, large mesh networks. Numerical results show that the solution approaches work well under a variety of network settings and achieve high resource-sharing rates (over 60% for O/E/O modules and over 30% for wavelength links), which translate into considerable savings in network costs.
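As a toy illustration only (not the paper's ILP or tabu search), a link-disjoint working/protection path pair can be computed per connection request; regenerator placement, wavelength assignment and O/E/O sharing are omitted.

```python
# Toy sketch: link-disjoint working/protection lightpath pairs per request.
# Regeneration, wavelength continuity and resource sharing are not modeled.
import networkx as nx

def provision(topology, requests):
    """topology: nx.Graph of the WDM mesh; requests: iterable of (src, dst) pairs."""
    pairs = {}
    for src, dst in requests:
        if not nx.has_path(topology, src, dst):
            pairs[(src, dst)] = None
            continue
        disjoint = sorted(nx.edge_disjoint_paths(topology, src, dst), key=len)
        # shortest two link-disjoint paths serve as working and protection lightpaths
        pairs[(src, dst)] = (disjoint[0], disjoint[1]) if len(disjoint) >= 2 else None
    return pairs
```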
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC_max. The output of GC_max coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC_max is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst case scenario, the GC_max algorithm runs in linear time with respect to the variable M=|C|+|Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M)=O(|C|). In such a situation, GC_max runs in linear time with respect to the image size |C|. We show that the output of GC_max constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ‖F_P‖∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms to the realm of the graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1,∞]. Of these, the best known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that the minimization problem for ‖F_P‖_q, q ∈ [1,∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q. Thus, any algorithm GC_sum solving the ‖F_P‖_1 minimization problem also solves the one for ‖F_P‖_q with q ∈ [1,∞), so just two algorithms, GC_sum and GC_max, are enough to solve all ‖F_P‖_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q-minimization problems converge to a solution of the ‖F_P‖_∞-minimization problem (the identity ‖F_P‖_∞ = lim_{q→∞} ‖F_P‖_q is not enough to deduce that). An experimental comparison of the performance of the GC_max and GC_sum algorithms is included. It concentrates on comparing the actual (as opposed to provable worst-scenario) running times of the algorithms, as well as the influence of the choice of seeds on the output.
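A small numerical illustration of the energies discussed above (not the GC_max algorithm itself): for a segmentation P, F_P collects the weights of the boundary edges, and the q-norm energies, including the w → w^q equivalence, can be checked directly.

```python
# Numerical illustration of the boundary-edge energies ||F_P||_q; not an
# implementation of GC_max or GC_sum.
import numpy as np

def energy(boundary_weights, q):
    w = np.asarray(boundary_weights, dtype=float)
    return w.max() if np.isinf(q) else (w ** q).sum() ** (1.0 / q)

w = [0.2, 0.9, 0.5]                 # weights of the boundary edges of some object P
print(energy(w, 1))                 # classic Graph Cut energy  ||F_P||_1
print(energy(w, np.inf))            # GC_max / IRFC energy      ||F_P||_inf
# Minimizing ||F_P||_q under weights w is the same problem as minimizing ||F_P||_1
# under weights w**q, so the GC_sum machinery applies unchanged:
q = 4
print(energy(w, q) ** q, energy(np.asarray(w) ** q, 1))   # identical values
```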
Abstract:
OBJECTIVE: To evaluate tools for the fusion of images generated by tomography and by structural and functional magnetic resonance imaging. METHODS: Magnetic resonance and functional magnetic resonance images were acquired in a 3-Tesla scanner while a volunteer who had previously undergone cranial tomography performed motor and somatosensory tasks. The image data were analyzed with different programs and the results were compared. RESULTS: We constructed a flow chart of computational processes that allowed measurement of the spatial congruence between the methods. No single computational tool contained the entire set of functions necessary to achieve the goal. CONCLUSION: The fusion of the images from the three methods proved feasible with the use of four free-access software programs (OsiriX, Register, MRIcro and FSL). Our results may serve as a basis for building software that will be useful as a virtual tool prior to neurosurgery.