44 results for Object-oriented image analysis
Abstract:
PURPOSE The purpose of this study was to identify morphologic factors affecting type I endoleak formation and bird-beak configuration after thoracic endovascular aortic repair (TEVAR). METHODS Computed tomography (CT) data of 57 patients (40 males; median age, 66 years) undergoing TEVAR for thoracic aortic aneurysm (34 TAA, 19 TAAA) or penetrating aortic ulcer (n = 4) between 2001 and 2010 were retrospectively reviewed. In 28 patients, the Gore TAG® stent-graft was used, followed by the Medtronic Valiant® in 16 cases, the Medtronic Talent® in 8, and the Cook Zenith® in 5 cases. Proximal landing zone (PLZ) was in zone 1 in 13, zone 2 in 13, zone 3 in 23, and zone 4 in 8 patients. In 14 patients (25%), the procedure was urgent or emergent. In each case, pre- and postoperative CT angiography was analyzed using a dedicated image processing workstation and complementary in-house developed software based on a 3D cylindrical intensity model to calculate aortic arch angulation and conicity of the landing zones (LZ). RESULTS Primary type Ia endoleak rate was 12% (7/57) and subsequent re-intervention rate was 86% (6/7). Left subclavian artery (LSA) coverage (p = 0.036) and conicity of the PLZ (5.9 vs. 2.6 mm; p = 0.016) were significantly associated with an increased type Ia endoleak rate. Bird-beak configuration was observed in 16 patients (28%) and was associated with a smaller radius of the aortic arch curvature (42 vs. 65 mm; p = 0.049). Type Ia endoleak was not associated with a bird-beak configuration (p = 0.388). Primary type Ib endoleak rate was 7% (4/57) and subsequent re-intervention rate was 100%. Conicity of the distal LZ was associated with an increased type Ib endoleak rate (8.3 vs. 2.6 mm; p = 0.038). CONCLUSIONS CT-based 3D aortic morphometry helps to identify risk factors for type I endoleak formation and bird-beak configuration during TEVAR. These factors were LSA coverage and conicity within the landing zones for type I endoleak formation and steep aortic angulation for bird-beak configuration.
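The paper's in-house 3D cylindrical intensity model is not described in detail, but the reported conicity values (in mm) suggest a diameter taper across the landing zone. A minimal sketch, assuming conicity is the difference between the proximal and distal cross-sectional diameters measured along the centerline (the function name and the sample diameters are hypothetical):

```python
# Illustrative sketch only: assumes conicity is the diameter taper across a
# landing zone (proximal minus distal cross-sectional diameter along the centerline).
import numpy as np

def landing_zone_conicity(diameters_mm):
    """Diameter taper (mm) across a landing zone.

    diameters_mm: cross-sectional diameters sampled along the centerline,
    ordered from the proximal to the distal end of the landing zone.
    """
    d = np.asarray(diameters_mm, dtype=float)
    return abs(d[0] - d[-1])

# Hypothetical centerline measurements (mm) for one proximal landing zone
print(landing_zone_conicity([34.1, 33.0, 31.5, 29.8, 28.2]))  # ~5.9 mm
```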
Abstract:
A two-step etching technique for fine-grained calcite mylonites using 0.37% hydrochloric and 0.1% acetic acid produces a topographic relief which reflects the grain boundary geometry. With this technique, calcite grain boundaries become more intensely dissolved than the grain interiors, but second phase minerals like dolomite, quartz, feldspars, apatite, hematite and pyrite are not affected by the acid and therefore form topographic peaks. Based on digital backscatter electron images and element distribution maps acquired on a scanning electron microscope, the geometry of calcite and the second phase minerals can be automatically quantified using image analysis software. For research on fine-grained carbonate rocks (e.g. dolomite-calcite mixtures), this low-cost approach is an attractive alternative to the generation of manual grain boundary maps based on photographs from ultra-thin sections or orientation contrast images.
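The image analysis software used in the study is not named, so the following is only a generic sketch of how second-phase grains could be segmented and measured from a backscatter electron image with SciPy; the synthetic image, intensity threshold, and pixel calibration are placeholder assumptions:

```python
# Generic sketch of automated grain quantification on a backscatter electron (BSE)
# image, assuming second-phase minerals appear as bright peaks after etching.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
bse = rng.normal(100, 10, size=(512, 512))      # calcite matrix (synthetic stand-in)
bse[200:220, 300:330] += 80                      # a bright second-phase grain
bse[100:108, 50:70] += 80                        # another one

second_phase = bse > 150                         # intensity threshold (assumed)
labels, n_grains = ndimage.label(second_phase)   # connected-component labelling
areas_px = ndimage.sum(second_phase, labels, index=np.arange(1, n_grains + 1))

pixel_size_um = 0.5                              # hypothetical calibration (µm/pixel)
equiv_diameters_um = 2.0 * np.sqrt(areas_px * pixel_size_um**2 / np.pi)
print(n_grains, equiv_diameters_um)
```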
Abstract:
A large body of research analyzes the runtime execution of a system to extract abstract behavioral views. Those approaches primarily analyze control flow by tracing method execution events, or they analyze object graphs of heap snapshots. However, they do not capture how objects are passed through the system at runtime. We refer to the exchange of objects as the object flow, and we claim that analyzing object flow is necessary to understand the runtime behavior of an object-oriented application. We propose and detail Object Flow Analysis, a novel dynamic analysis technique that takes this new information into account. To evaluate its usefulness, we present a visual approach that allows a developer to study classes and components in terms of how they exchange objects at runtime. We illustrate our approach on three case studies.
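Object Flow Analysis itself is defined on a full execution tracer; as a toy illustration only (not the authors' implementation), the sketch below records an edge every time a tracked object is passed from one class into a method of another, which is the kind of inter-class object exchange the visual approach summarizes:

```python
# Toy illustration of recording object flow: whenever a tracked object is passed
# into a method of another class, log an edge "producer class -> consumer class".
from collections import defaultdict
import functools

object_flow = defaultdict(list)   # id(obj) -> [(from_class, to_class, method), ...]
last_holder = {}                  # id(obj) -> class name currently holding the object

def track_flow(method):
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        consumer = type(self).__name__
        for obj in args:
            key = id(obj)
            producer = last_holder.get(key, "<creator>")
            object_flow[key].append((producer, consumer, method.__name__))
            last_holder[key] = consumer
        return method(self, *args, **kwargs)
    return wrapper

class Parser:
    @track_flow
    def parse(self, token): return token

class Compiler:
    @track_flow
    def compile(self, token): Parser().parse(token)

tok = object()
Compiler().compile(tok)
print(object_flow[id(tok)])
# [('<creator>', 'Compiler', 'compile'), ('Compiler', 'Parser', 'parse')]
```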
Abstract:
To analyze the impact of opacities in the optical pathway and of image compression of 32-bit raw data to 8-bit JPEG images on quantitative optical coherence tomography (OCT) image analysis.
Abstract:
In this paper we compare the performance of two image classification paradigms (object- and pixel-based) for creating a land cover map of Asmara, the capital of Eritrea, and its surrounding areas using Landsat ETM+ imagery acquired in January 2000. The image classification methods used were maximum likelihood for the pixel-based approach and Bhattacharyya distance for the object-oriented approach, available in the ArcGIS and SPRING software packages, respectively. Advantages and limitations of both approaches are presented and discussed. Classification outputs were assessed using overall accuracy and Kappa indices. Pixel- and object-based classification methods resulted in an overall accuracy of 78% and 85%, respectively. The Kappa coefficient for the pixel- and object-based approaches was 0.74 and 0.82, respectively. Although the pixel-based approach is the most commonly used method, assessment and visual interpretation of the results clearly reveal that the object-oriented approach has advantages for this specific case study.
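For reference, the two agreement measures reported in the abstract (overall accuracy and the Kappa coefficient) can be computed from a classification confusion matrix as follows; the matrix values here are made up for illustration:

```python
# Worked example of overall accuracy and Cohen's kappa from a hypothetical
# confusion matrix (rows: reference classes, columns: predicted classes).
import numpy as np

cm = np.array([[50,  5,  2],
               [ 4, 60,  6],
               [ 3,  7, 63]], dtype=float)

total = cm.sum()
overall_accuracy = np.trace(cm) / total                        # proportion correctly classified
expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2  # chance agreement p_e
kappa = (overall_accuracy - expected) / (1 - expected)         # Cohen's kappa

print(f"overall accuracy = {overall_accuracy:.2f}, kappa = {kappa:.2f}")
```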
Abstract:
Statistical models have recently been introduced in computational orthopaedics to investigate bone mechanical properties across several populations. A fundamental aspect of the construction of statistical models concerns the establishment of accurate anatomical correspondences among the objects of the training dataset. Various methods have been proposed to solve this problem, such as mesh morphing or image registration algorithms. The objective of this study is to compare mesh-based and image-based statistical appearance model approaches for the creation of finite element (FE) meshes. A computed tomography (CT) dataset of 157 human left femurs was used for the comparison. For each approach, 30 finite element meshes were generated with the models. The quality of the obtained FE meshes was evaluated in terms of volume, size and shape of the elements. Results showed that the quality of the meshes obtained with the image-based approach was higher than that of the meshes obtained with the mesh-based approach. Future studies are required to evaluate the impact of this finding on the final mechanical simulations.
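The abstract evaluates mesh quality in terms of element volume, size and shape. A minimal sketch of two such per-element measures for a linear tetrahedron (the node coordinates are arbitrary test values, not data from the femur dataset):

```python
# Sketch of element-quality checks for a single linear tetrahedral element.
import numpy as np

def tet_volume(p0, p1, p2, p3):
    """Signed volume of a linear tetrahedron."""
    return np.linalg.det(np.column_stack([p1 - p0, p2 - p0, p3 - p0])) / 6.0

def tet_edge_ratio(p0, p1, p2, p3):
    """Longest-to-shortest edge ratio (1.0 for a regular tetrahedron)."""
    pts = [p0, p1, p2, p3]
    edges = [np.linalg.norm(pts[i] - pts[j]) for i in range(4) for j in range(i + 1, 4)]
    return max(edges) / min(edges)

p = [np.array(v, dtype=float) for v in ([0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1])]
print(tet_volume(*p), tet_edge_ratio(*p))   # ~0.167 and ~1.414 for this test element
```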
Abstract:
Non-linear image registration is an important tool in many areas of image analysis. For instance, in morphometric studies of a population of brains, free-form deformations between images are analyzed to describe the structural anatomical variability. Such a simple deformation model is justified by the absence of an easily expressible prior about the shape changes. Applying the same algorithms used in brain imaging to orthopedic images might not be optimal due to the difference in the underlying prior on the inter-subject deformations. In particular, using an uninformed deformation prior often leads to local minima far from the expected solution. To improve robustness and promote anatomically meaningful deformations, we propose a locally affine and geometry-aware registration algorithm that automatically adapts to the data. We build upon the log-domain demons algorithm and introduce a new type of OBBTree-based regularization in the registration with a natural multiscale structure. The regularization model is composed of a hierarchy of locally affine transformations via their logarithms. Experiments on mandibles show improved accuracy and robustness when our method is used to initialize the demons, and even comparable performance in direct comparison to the demons, with significantly fewer degrees of freedom. This closes the gap between polyaffine and non-rigid registration and opens new ways to statistically analyze the registration results.
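The abstract composes locally affine transformations via their logarithms. As a rough sketch of that idea (not the OBBTree hierarchy or the demons update itself), the snippet below fuses two locally affine transforms in the log domain with fixed, hypothetical weights:

```python
# Minimal sketch of fusing locally affine transformations through their matrix
# logarithms; the two affine blocks and the weights are arbitrary assumptions.
import numpy as np
from scipy.linalg import logm, expm

def homogeneous(A, t):
    """4x4 homogeneous matrix from a 3x3 linear part and a translation."""
    M = np.eye(4)
    M[:3, :3], M[:3, 3] = A, t
    return M

A1 = homogeneous(np.diag([1.05, 1.0, 0.95]), [1.0, 0.0, 0.0])   # local affine, region 1
A2 = homogeneous(np.eye(3), [0.0, 2.0, 0.0])                     # local affine, region 2
logs = [logm(A1), logm(A2)]

def fused_transform(x, weights):
    """Apply the weighted-log fusion of the local affines to a 3D point x."""
    L = sum(w * Li for w, Li in zip(weights, logs))   # weighted sum in the log domain
    return (expm(L) @ np.append(x, 1.0))[:3]

print(fused_transform(np.array([10.0, 5.0, 2.0]), weights=[0.7, 0.3]))
```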
Abstract:
The study describes brain areas involved in medial temporal lobe (mTL) seizures of 12 patients. All patients showed so-called oro-alimentary behavior within the first 20 s of clinical seizure manifestation characteristic of mTL seizures. Single photon emission computed tomography (SPECT) images of regional cerebral blood flow (rCBF) were acquired from the patients in ictal and interictal phases and from normal volunteers. Image analysis employed categorical comparisons with statistical parametric mapping and principal component analysis (PCA) to assess functional connectivity. PCA supplemented the findings of the categorical analysis by decomposing the covariance matrix containing images of patients and healthy subjects into distinct component images of independent variance, including areas not identified by the categorical analysis. Two principal components (PCs) discriminated the subject groups: patients with right or left mTL seizures and normal volunteers, indicating distinct neuronal networks implicated by the seizure. Both PCs were correlated with seizure duration, one positively and the other negatively, confirming their physiological significance. The independence of the two PCs yielded a clear clustering of subject groups. The local pattern within the temporal lobe describes critical relay nodes which are the counterpart of oro-alimentary behavior: (1) right mesial temporal zone and ipsilateral anterior insula in right mTL seizures, and (2) temporal poles on both sides that are densely interconnected by the anterior commissure. Regions remote from the temporal lobe may be related to seizure propagation and include positively and negatively loaded areas. These patterns, the covarying areas of the temporal pole and occipito-basal visual association cortices, for example, are related to known anatomic paths.
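As a generic illustration of the PCA step described above (decomposing the covariance of a set of images into component images of independent variance), the sketch below runs a PCA via SVD on a stand-in data matrix; real SPECT preprocessing and masking are omitted:

```python
# Sketch of a PCA decomposition over a set of flattened rCBF images; the data
# matrix here is random stand-in data, not actual SPECT scans.
import numpy as np

n_subjects, n_voxels = 30, 5000
rng = np.random.default_rng(1)
X = rng.normal(size=(n_subjects, n_voxels))          # rows: subjects, columns: voxels

Xc = X - X.mean(axis=0)                              # centre each voxel across subjects
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)    # SVD of the centred data matrix

explained = s**2 / np.sum(s**2)                      # variance explained per component
subject_scores = U * s                               # subject loadings on each PC
component_images = Vt                                # each row is one PC "image"
print(explained[:2], subject_scores.shape, component_images.shape)
```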
Abstract:
Most of today's dynamic analysis approaches are based on method traces. However, in the case of object orientation, understanding program execution by analyzing method traces is complicated because the behavior of a program depends on the sharing and transfer of object references (aliasing). We argue that trace-based dynamic analysis is at too low a level of abstraction for object-oriented systems. We propose a new approach that captures the life cycle of objects by explicitly taking into account object aliasing and how aliases propagate during the execution of the program. In this paper, we present our new meta-model in detail and discuss the future research tracks it opens.
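The meta-model itself is only outlined in the abstract; the sketch below is one possible toy rendering of its core idea, in which every creation, argument pass, field write or return of a reference produces an alias node linked to the alias it originated from (class and method names are invented):

```python
# Toy rendering (our reading, not the authors' implementation) of alias propagation:
# each appearance of a reference is recorded as an Alias node with a parent link.
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Alias:
    kind: str                       # 'creation' | 'argument' | 'field-write' | 'return'
    context: str                    # method in which the alias appeared
    parent: Optional["Alias"] = None
    children: List["Alias"] = field(default_factory=list)

    def propagate(self, kind, context):
        child = Alias(kind, context, parent=self)
        self.children.append(child)
        return child

# Life cycle of one object: created in a factory, passed to a parser, stored in a field.
root = Alias("creation", "Factory.make")
arg = root.propagate("argument", "Parser.parse")
stored = arg.propagate("field-write", "Parser.cache")
print([(a.kind, a.context) for a in (root, arg, stored)])
```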
Abstract:
PURPOSE: To quantify optical coherence tomography (OCT) images of the central retina in patients with blue-cone monochromatism (BCM) and achromatopsia (ACH) compared with healthy control individuals. METHODS: The study included 15 patients with ACH, 6 with BCM, and 20 control subjects. Diagnosis of BCM and ACH was established by visual acuity testing, morphologic examination, color vision testing, and Ganzfeld ERG recording. OCT images were acquired with the Stratus OCT 3 (Carl Zeiss Meditec AG, Oberkochen, Germany). Foveal OCT images were analyzed by calculating longitudinal reflectivity profiles (LRPs) from scan lines. Profiles were analyzed quantitatively to determine foveal thickness and distances between reflectivity layers. RESULTS: Patients with ACH and BCM had a mean visual acuity of 20/200 and 20/60, respectively. Color vision testing results were characteristic of the diseases. The LRPs of control subjects yielded four peaks (P1-P4), presumably representing the RPE (P1), the ovoid region of the photoreceptors (P2), the external limiting membrane (ELM) (P3), and the internal limiting membrane (P4). In patients with ACH, P2 was absent, but foveal thickness (P1-P4) did not differ significantly from that in the control subjects (187 ± 20 vs. 192 ± 14 µm, respectively). The distance from P1 to P3 did not differ significantly (78 ± 10 vs. 82 ± 5 µm) between ACH and control subjects. In patients with BCM, P3 was lacking, and P2 advanced toward P1 compared with the control subjects (32 ± 6 vs. 48 ± 4 µm). Foveal thickness (153 ± 16 µm) was significantly reduced compared with that in control subjects and patients with ACH. CONCLUSIONS: Quantitative OCT image analysis reveals distinct patterns for control subjects and patients with ACH and BCM, respectively. Quantitative analysis of OCT imaging can be useful in differentiating retinal diseases affecting photoreceptors. Foveal thickness is similar in normal subjects and patients with ACH but is decreased in patients with BCM.
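As an illustration of the LRP-based measurements (not the study's actual analysis pipeline), the sketch below locates reflectivity peaks in a synthetic A-scan profile and converts inter-peak distances to microns; the profile and the axial calibration are assumptions:

```python
# Sketch of extracting a longitudinal reflectivity profile (LRP) and measuring
# inter-peak distances; the synthetic A-scan and the axial sampling are assumptions.
import numpy as np
from scipy.signal import find_peaks

depth_px = np.arange(200)
lrp = np.zeros_like(depth_px, dtype=float)
for centre in (40, 80, 120, 180):                   # four synthetic reflective layers
    lrp += np.exp(-0.5 * ((depth_px - centre) / 3.0) ** 2)

peaks, _ = find_peaks(lrp, height=0.5)              # locate reflectivity maxima
microns_per_px = 2.0                                # hypothetical axial calibration
layer_distances_um = np.diff(peaks) * microns_per_px
print(peaks, layer_distances_um)                    # peak positions and layer spacings
```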
Abstract:
Writing unit tests for legacy systems is a key maintenance task. When writing tests for object-oriented programs, objects need to be set up and the expected effects of executing the unit under test need to be verified. If developers lack internal knowledge of a system, the task of writing tests is non-trivial. To address this problem, we propose an approach that exposes side effects detected in example runs of the system and uses these side effects to guide the developer when writing tests. We introduce a visualization called Test Blueprint, through which we identify what the required fixture is and what assertions are needed to verify the correct behavior of a unit under test. The dynamic analysis technique that underlies our approach is based both on tracing method executions and on tracking the flow of objects at runtime. To demonstrate the usefulness of our approach, we present results from two case studies.
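Test Blueprint itself combines method tracing and object flow tracking; as a simplified stand-in for the side-effect part only, the sketch below snapshots an object's state around one example run and reports the changed attributes as candidate assertions (the Account class is a made-up example):

```python
# Illustrative sketch (not the Test Blueprint tool) of exposing side effects of one
# example run: diff an object's attributes before and after the unit under test.
import copy

def side_effects(obj, unit_under_test, *args, **kwargs):
    before = copy.deepcopy(vars(obj))
    result = unit_under_test(obj, *args, **kwargs)
    after = vars(obj)
    changed = {k: (before.get(k), after[k]) for k in after if before.get(k) != after[k]}
    return result, changed

class Account:
    def __init__(self): self.balance = 0
    def deposit(self, amount): self.balance += amount

acc = Account()
_, effects = side_effects(acc, Account.deposit, 25)
print(effects)   # {'balance': (0, 25)} -> suggests the assertion: acc.balance == 25
```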