996 results for Original images
Abstract:
Intravascular ultrasound (IVUS) image segmentation can provide more detailed vessel and plaque information, resulting in better diagnostics, evaluation and therapy planning. A novel automatic segmentation proposal is described herein; the method relies on binary morphological object reconstruction to segment the coronary wall in IVUS images. First, a preprocessing step followed by a feature extraction block is performed, allowing the desired information to be extracted. Afterward, binary versions of the desired objects are reconstructed, and their contours are extracted to segment the image. The effectiveness is demonstrated by segmenting 1300 images, in which the outcomes showed a strong correlation with their corresponding gold standard. Moreover, the results were also corroborated statistically, with true positive area fractions as high as 92.72% and 91.9% for the lumen and media-adventitia border, respectively. In addition, this approach can be adapted easily and applied to other related modalities, such as intravascular optical coherence tomography and intravascular magnetic resonance imaging. (E-mail: matheuscardosomg@hotmail.com) (C) 2011 World Federation for Ultrasound in Medicine & Biology.
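To illustrate the general idea of binary morphological reconstruction followed by contour extraction, the sketch below uses scikit-image; the preprocessing (Gaussian smoothing) and feature extraction (Otsu threshold) steps are stand-in assumptions, not the authors' actual pipeline.

```python
# Minimal sketch, not the authors' exact method: binary morphological
# reconstruction of a bright object followed by contour extraction.
import numpy as np
from skimage import filters, measure, morphology

def segment_wall_sketch(ivus_image: np.ndarray) -> list:
    """Return contours of reconstructed bright objects in an IVUS frame."""
    # Preprocessing: smoothing to reduce speckle noise (assumed step).
    smoothed = filters.gaussian(ivus_image, sigma=2)

    # Feature extraction: global Otsu threshold as a stand-in for the
    # feature-extraction block described in the abstract.
    binary = (smoothed > filters.threshold_otsu(smoothed)).astype(np.uint8)

    # Marker image: an eroded version of the mask acts as the seed.
    seed = morphology.binary_erosion(binary, morphology.disk(5)).astype(np.uint8)

    # Binary morphological reconstruction of the object from the seed.
    reconstructed = morphology.reconstruction(seed, binary, method='dilation')

    # Contour extraction on the reconstructed binary object.
    return measure.find_contours(reconstructed.astype(float), 0.5)
```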
Abstract:
We use networks composed of three phase-locked loops (PLLs), one of which acts as the master, for recognizing noisy images. The values of the coupling weights among the PLLs control the level of noise that can be tolerated without affecting the successful identification of the input image. Analytical results and numerical tests concerning the performance of the scheme are presented. (c) 2008 Elsevier B.V. All rights reserved.
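The abstract does not give the PLL equations, so the following is only an illustrative sketch of a generic master-slave network of coupled phase oscillators (a simplified stand-in for coupled PLLs), in which the coupling weights set how strongly the slave nodes lock to the master.

```python
# Illustrative sketch only, under assumed dynamics: slave phases are pulled
# toward the master phase with strength given by the coupling weights.
import numpy as np

def simulate_master_slave_network(weights, omega, initial_phases,
                                  steps=2000, dt=1e-3):
    """Integrate N coupled phase oscillators; node 0 is the master."""
    n = len(initial_phases)
    theta = np.array(initial_phases, dtype=float)  # noisy input encoded as phases
    for _ in range(steps):
        dtheta = np.full(n, omega, dtype=float)    # common natural frequency
        for i in range(1, n):                      # slaves couple to the master
            dtheta[i] += weights[i] * np.sin(theta[0] - theta[i])
        theta += dt * dtheta
    return np.mod(theta, 2 * np.pi)
```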
Abstract:
This work discusses a 4D lung reconstruction method from unsynchronized MR sequential images. The lung, unlike the heart, does not have its own muscles, making it impossible to observe its real movements directly. The visualization of the lung in motion is a current topic of research in medicine. CT (Computerized Tomography) can obtain spatio-temporal images of the heart by synchronizing with electrocardiographic waves. The FOV of the heart is small compared with the lung's FOV. The lung's movement is not periodic and is susceptible to variations in the degree of respiration. Compared with CT, MR (Magnetic Resonance) imaging involves longer acquisition times, and it is not possible to obtain instantaneous 3D images of the lung; for each slice, only one temporal sequence of 2D images can be obtained. However, methods using MR are preferable because they do not involve radiation. In this paper, an animated B-Rep solid model of the lung is created from unsynchronized MR images. The 3D animation represents the lung's motion associated with one selected sequence of MR images. The proposed method can be divided into two parts. First, the lung's silhouettes moving in time are extracted by detecting the presence of a respiratory pattern on 2D spatio-temporal MR images. This approach enables us to determine the lung's silhouette for every frame, even on frames with obscure edges. The extracted lung silhouettes form unsynchronized sagittal and coronal sequences. Using our algorithm it is possible to reconstruct a 3D lung starting from a silhouette of either type (coronal or sagittal) selected from any instant in time. A wire-frame model of the lung is created by composing coronal and sagittal planar silhouettes representing cross-sections. The silhouette composition is severely underconstrained: many wire-frame models can be created from the observed sequences of silhouettes in time. Finally, a B-Rep solid model is created using a meshing algorithm. Using the B-Rep solid model, the volumes in time for the right and left lungs were calculated. It was possible to recognize several characteristics of the real right and left 3D lungs in the shaded model. (C) 2007 Elsevier Ltd. All rights reserved.
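One step mentioned above, computing the lung volume from the final B-Rep surface, can be illustrated with a standard technique: summing signed tetrahedron volumes over a closed, consistently oriented triangle mesh. The sketch below is a generic implementation of that technique, not the authors' code.

```python
# Minimal sketch: volume enclosed by a closed, consistently oriented triangle
# mesh (e.g. a reconstructed lung surface), via signed tetrahedron volumes.
import numpy as np

def mesh_volume(vertices: np.ndarray, faces: np.ndarray) -> float:
    """vertices: (N, 3) coordinates; faces: (M, 3) vertex-index triples."""
    v0 = vertices[faces[:, 0]]
    v1 = vertices[faces[:, 1]]
    v2 = vertices[faces[:, 2]]
    # Signed volume of the tetrahedron formed by each triangle and the origin.
    signed = np.einsum('ij,ij->i', v0, np.cross(v1, v2)) / 6.0
    return abs(signed.sum())
```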
Abstract:
Modulation of subjective time was examined using static images eliciting perceptions of different intensities of body movement. Undergraduate students were exposed to photographs of dancer sculptures in different dance positions for 36 sec. and asked to estimate the exposure duration. Lower movement intensities were related to shorter estimated durations. Mean durations for images of unmoving dancers were underestimated, while those for dancers taking a ballet step were overestimated. Temporal estimations were also related to the order of presentation of the stimuli, which suggested that subjective time estimations were influenced by the experimental context. Subjective time is thus related not only to the visual perception of moving images, but also to perceptions of movement elicited by static images, suggesting an embodiment effect on subjective time estimation.
Abstract:
A long-standing challenge of content-based image retrieval (CBIR) systems is the definition of a suitable distance function to measure the similarity between images in an application context which complies with the human perception of similarity. In this paper, we present a new family of distance functions, called attribute concurrence influence distances (AID), which serve to retrieve images by similarity. These distances address an important aspect of the psychophysical notion of similarity in comparisons of images: the effect of concurrent variations in the values of different image attributes. The AID functions allow for comparisons of feature vectors by choosing one of two parameterized expressions: one targeting weak attribute concurrence influence and the other strong concurrence influence. This paper presents the mathematical definition and implementation of the AID family for a two-dimensional feature space and its extension to any dimension. The composition of the AID family with the L_p distance family is considered in order to propose a procedure for determining the best distance for a specific application. Experimental results involving several sets of medical images demonstrate that, taking as reference the perception of the specialist in the field (radiologist), the AID functions perform better than the general distance functions commonly used in CBIR.
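The abstract does not reproduce the AID expressions themselves, so the sketch below only shows the standard Minkowski (L_p) family with which the AID functions are composed when comparing feature vectors; the AID-specific concurrence terms are not implemented here.

```python
# Sketch of the L_p (Minkowski) distance family used for feature-vector
# comparison in CBIR; p=1 gives Manhattan distance, p=2 Euclidean distance.
import numpy as np

def lp_distance(x: np.ndarray, y: np.ndarray, p: float = 2.0) -> float:
    """Minkowski distance of order p between two feature vectors."""
    return float(np.sum(np.abs(x - y) ** p) ** (1.0 / p))
```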
Abstract:
OBJECTIVE: To describe the microsurgical anatomy, branches, and anatomic relationships of the posterior cerebral artery (PCA) represented in three-dimensional images. METHODS: Seventy hemispheres of 35 brain specimens were studied. They were previously injected with red silicone and fixed in 10% formalin for at least 40 days. Four of the studied specimens were frozen at -10 to -15 degrees C for 14 days, and additional dissection was done with Klingler's fiber dissection technique at ×6 to ×40 magnification. Each segment of the artery was measured and photographed to obtain three-dimensional stereoscopic images. RESULTS: The PCA origin was in the interpeduncular cistern at the pontomesencephalic junction level in 23 specimens (65.7%). The PCA was divided into four segments: P1 extends from the PCA origin to its junction with the posterior communicating artery with an average length of 7.7 mm; P2 was divided into an anterior and posterior segment. The P2A segment begins at the posterior communicating artery and ends at the most lateral aspect of the cerebral peduncle, with an average length of 23.6 mm, and the P2P segment extends from the most lateral aspect of the cerebral peduncle to the posterior edge of the lateral surface of the midbrain, with an average length of 16.4 mm; P3 extends from the posterior edge of the lateral surface of the midbrain and ends at the origin of the parieto-occipital sulcus along the calcarine fissure, with an average length of 19.8 mm; and the P4 segment corresponds to the parts of the PCA that run along or inside both the parieto-occipital sulcus and the distal part of the calcarine fissure. CONCLUSIONS: To standardize the neurosurgical practice and knowledge, surgical anatomic classifications should be used uniformly and further modified according to the neurosurgical experience gathered. The PCA classification proposed intends to correlate its anatomic segments with their required microneurosurgical approaches.
Abstract:
Olm MA, Kogler JE Jr, Macchione M, Shoemark A, Saldiva PH, Rodrigues JC. Primary ciliary dyskinesia: evaluation using cilia beat frequency assessment via spectral analysis of digital microscopy images. J Appl Physiol 111: 295-302, 2011. First published May 5, 2011; doi:10.1152/japplphysiol.00629.2010. Ciliary beat frequency (CBF) measurements provide valuable information for the diagnosis of primary ciliary dyskinesia (PCD). We developed a system for measuring CBF, used it in association with electron microscopy to diagnose PCD, and then analyzed the characteristics of PCD patients. The CBF measurement system was based on power spectra measured through digital imaging. Twenty-four patients suspected of having PCD (age 1-19 yr) were selected from a group of 75 children and adolescents with pneumopathies of unknown cause. Ten healthy, nonsmoking volunteers (age >= 17 yr) served as a control group. Nasal brush samples were collected, and CBF measurements and electron microscopy were performed. PCD was diagnosed in 12 patients: 5 had radial spoke defects, 3 showed absent central microtubule pairs with transposition, 2 had outer dynein arm defects, 1 had a shortened outer dynein arm, and 1 had a normal ultrastructure. Previous studies have reported that the most common cilia defects are in the dynein arm. As expected, the mean CBF was higher in the control group (P < 0.001) and in patients with normal ultrastructure (P < 0.002) than in those diagnosed with ciliary ultrastructural defects (i.e., PCD patients). An obstructive ventilatory pattern was observed in 70% of the PCD patients who underwent pulmonary function tests. All PCD patients presented bronchial wall thickening on chest computed tomography scans. The protocol and diagnostic techniques employed allowed us to diagnose PCD in 16% of the patients in this study.
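As a rough illustration of the spectral idea (not the authors' implementation), the beat frequency can be estimated as the dominant peak of the power spectrum of a pixel-intensity time series taken from the microscopy video; the frame rate and the plausible frequency band used below are assumptions.

```python
# Hedged sketch: dominant-frequency estimate from an intensity time series.
import numpy as np

def beat_frequency(intensity: np.ndarray, fps: float,
                   fmin: float = 3.0, fmax: float = 30.0) -> float:
    """intensity: 1-D time series from one region of interest; fps: frame rate."""
    intensity = intensity - intensity.mean()            # remove the DC component
    spectrum = np.abs(np.fft.rfft(intensity)) ** 2      # power spectrum
    freqs = np.fft.rfftfreq(len(intensity), d=1.0 / fps)
    band = (freqs >= fmin) & (freqs <= fmax)             # assumed plausible CBF band
    return float(freqs[band][np.argmax(spectrum[band])])
```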
Abstract:
Objective. The purpose of this research was to provide further evidence of the precision and accuracy of maxillofacial linear and angular measurements obtained from cone-beam computed tomography (CBCT) images. Study design. The study population consisted of 15 dry human skulls that were submitted to CBCT, and 3-dimensional (3D) images were generated. Linear and angular measurements were based on conventional craniometric anatomical landmarks and were identified in the 3D-CBCT images by 2 radiologists, twice each, independently. Subsequently, physical measurements were made by a third examiner using a digital caliper and a digital goniometer. Results. The results demonstrated no statistically significant difference in the inter- and intra-examiner analyses. Regarding the accuracy test, no statistically significant differences were found in the comparison between the physical and CBCT-based linear and angular measurements for either examiner (P = .968 and .915, and P = .844 and .700, respectively). Conclusions. 3D-CBCT images can be used to obtain dimensionally accurate linear and angular measurements from bony maxillofacial structures and landmarks. (Oral Surg Oral Med Oral Pathol Oral Radiol Endod 2009; 108: 430-436)
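For readers unfamiliar with landmark-based measurement, the sketch below shows how linear and angular measurements are typically derived from 3D landmark coordinates; the landmark inputs are placeholders, not data from this study.

```python
# Generic sketch: linear (distance) and angular measurements from 3-D landmarks.
import numpy as np

def linear_measure(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two landmarks (same units as the coordinates)."""
    return float(np.linalg.norm(a - b))

def angular_measure(a: np.ndarray, vertex: np.ndarray, b: np.ndarray) -> float:
    """Angle in degrees at 'vertex' formed by landmarks a and b."""
    u, v = a - vertex, b - vertex
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```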
Abstract:
The mental foramen (MF) is an important anatomic landmark of the mandible, through which the mental nerve and blood vessels emerge. The importance of the MF in dental practice is especially related to dental implant placement and other surgical procedures in the region. Care is fundamental in order to avoid nerve and vessel injury during such procedures. Anatomic variations of the MF can be found, such as the occurrence of multiple foramina and unusual locations. On very rare occasions, the absence of the MF can be detected. The observation of this variation is not always possible using only conventional radiographs. The modern imaging modality cone beam computed tomography (CBCT) allows an accurate three-dimensional assessment of the MF, as well as the identification of its variations. The aim of this article is to report MF absence and hypoplasia detected in CBCT images of a 27-year-old daughter and her 63-year-old mother, both from Brazil. Despite the MF anatomic variations, they presented no sensory disturbance in the regions supplied by the mental nerve.
Abstract:
Objective: The aim of this study was to evaluate the performance of observers in diagnosing proximal caries in digital images obtained from bitewing radiographs digitized using two scanners and four digital cameras, in Joint Photographic Experts Group (JPEG) and tagged image file format (TIFF) files, and to compare them with the original conventional radiographs. Method: In total, 56 extracted teeth were radiographed with Kodak Insight film (Eastman Kodak, Rochester, NY) in a Kaycor Yoshida X-ray device (Kaycor X-707; Yoshida Dental Manufacturing Co., Tokyo, Japan) operating at 70 kV and 7 mA with an exposure time of 0.40 s. The radiographs were digitized with CanonScan D646U (Canon USA Inc., Newport News, VA) and Genius ColorPage HR7X (KYE Systems Corp. America, Doral, FL) scanners, and with Canon Powershot G2 (Canon USA Inc.), Canon RebelXT (Canon USA Inc.), Nikon Coolpix 8700 (Nikon Inc., Melville, NY), and Nikon D70s (Nikon Inc.) digital cameras, in JPEG and TIFF formats. Three observers evaluated the images. The teeth were then observed under a polarized-light microscope to verify the presence and depth of carious lesions. Results: The probability of no diagnosis ranged from 1.34% (Insight film) to 52.83% (CanonScan/JPEG). The sensitivity ranged from 0.24 (Canon RebelXT/JPEG) to 0.53 (Insight film), the specificity ranged from 0.93 (Nikon Coolpix/JPEG, Canon Powershot/TIFF, Canon RebelXT/JPEG and TIFF) to 0.97 (CanonScan/TIFF and JPEG), and the accuracy ranged from 0.82 (Canon RebelXT/JPEG) to 0.91 (CanonScan/JPEG). Conclusion: The carious lesion diagnosis did not change with the file format (JPEG or TIFF) in which the images were saved for any of the equipment used. Only the CanonScan scanner did not perform adequately in radiograph digitization for caries diagnosis, and it is not recommended for this purpose. Dentomaxillofacial Radiology (2011) 40, 338-343. doi: 10.1259/dmfr/67185962
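The reported sensitivity, specificity, and accuracy values follow the standard confusion-matrix definitions; the short sketch below shows these definitions with placeholder counts (not data from the study).

```python
# Standard diagnostic metrics from confusion-matrix counts (placeholders).
def diagnostic_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Sensitivity, specificity, and accuracy of a caries-detection reading."""
    return {
        "sensitivity": tp / (tp + fn),               # true-positive rate
        "specificity": tn / (tn + fp),               # true-negative rate
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }
```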
Abstract:
We compared the quality of real-time fetal ultrasound images transmitted using ISDN and IP networks. Four experienced obstetric ultrasound specialists viewed standard recordings in a randomized trial and rated the appearance of 30 fetal anatomical landmarks, each on a seven-point scale. A total of 12 evaluations were performed for various combinations of bandwidths (128, 384 or 768 kbit/s) and networks (ISDN or IP). The intraobserver coefficient of variation was 2.9%, 5.0%, 12.7% and 14.7% for the four observers. The mean overall ratings by each of the four observers were 4.6, 4.8, 5.0 and 5.3, respectively (a rating of 4 indicated satisfactory visualization and 7 indicated quality as good as the original recording). Analysis of variance showed that there were no significant interobserver variations or significant differences in the mean scores for the different types of videoconferencing machines used. The most significant variable affecting the mean score was the bandwidth used. For ISDN, the mean score was 3.7 at 128 kbit/s, which was significantly worse than the mean score of 4.9 at 384 kbit/s, which was in turn significantly worse than the mean score of 5.9 at 768 kbit/s. The mean score for transmission using IP was about 0.5 points lower than that using ISDN across all the different bandwidths, but the differences were not significant. It appears that IP transmission in a private (non-shared) network is an acceptable alternative to ISDN for fetal tele-ultrasound and one deserving further study.
Abstract:
Dental implant recognition in patients without available records is a time-consuming and far from straightforward task. The traditional method is a completely user-dependent process, in which the expert compares a 2D X-ray image of the dental implant with a generic database. Due to the high number of implants available and the similarity between them, automatic/semi-automatic frameworks to aid implant model detection are essential. In this study, a novel computer-aided framework for dental implant recognition is suggested. The proposed method relies on image processing concepts, namely: (i) a segmentation strategy for semi-automatic implant delineation; and (ii) a machine learning approach for implant model recognition. Although the segmentation technique is the main focus of the current study, preliminary details of the machine learning approach are also reported. Two different scenarios are used to validate the framework: (1) comparison of the semi-automatic contours against manual implant contours on 125 X-ray images; and (2) classification of 11 known implants using a large reference database of 601 implants. In experiment 1, a Dice metric of 0.97±0.01, a mean absolute distance of 2.24±0.85 pixels, and a Hausdorff distance of 11.12±6 pixels were obtained. In experiment 2, 91% of the implants were successfully recognized while reducing the reference database to 5% of its original size. Overall, the segmentation technique achieved accurate implant contours. Although the preliminary classification results prove the concept of the current work, more features and an extended database should be used in future work.
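Two of the evaluation metrics named above (Dice overlap and the symmetric Hausdorff distance) can be computed as in the generic sketch below; this is a standard formulation, not the authors' evaluation code.

```python
# Sketch of segmentation-evaluation metrics: Dice overlap for binary masks and
# symmetric Hausdorff distance (in pixels) for (N, 2) contour point sets.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice coefficient between two binary segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def hausdorff(contour_a: np.ndarray, contour_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two contour point sets."""
    return max(directed_hausdorff(contour_a, contour_b)[0],
               directed_hausdorff(contour_b, contour_a)[0])
```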
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
For bilipschitz images of Cantor sets in R^d, we estimate the Lipschitz harmonic capacity and show that this capacity is invariant under bilipschitz homeomorphisms.
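Invariance results of this kind are usually stated in comparability form; the display below is a hedged sketch of such a statement (the exact hypotheses and constants are in the paper and are not reproduced here).

```latex
% Hedged sketch of a typical comparability statement (not the paper's exact theorem):
% if $f\colon\mathbb{R}^d\to\mathbb{R}^d$ is $L$-bilipschitz and $\kappa$ denotes the
% Lipschitz harmonic capacity, then there is a constant $C=C(L,d)$ such that
\[
  C^{-1}\,\kappa(E) \;\le\; \kappa\bigl(f(E)\bigr) \;\le\; C\,\kappa(E).
\]
```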
Abstract:
This report summarizes a final-year project for the degree of Enginyeria Superior d'Informàtica (Computer Engineering). It explains the main reasons that motivated the project, as well as examples that illustrate the resulting application. In this case, the software aims to address the current need for Ground Truth data for text segmentation algorithms applied to complex color images. All the processes are explained in the different chapters, starting from the problem definition, planning, requirements and design, and ending with an illustration of the program's results and the resulting Ground Truth data.