106 results for Object Segmentation
Abstract:
OBJECTIVE: To develop a novel application of a tool for semi-automatic volume segmentation and adapt it for the analysis of fetal cardiac cavities and vessels from heart volume datasets. METHODS: We retrospectively studied virtual cardiac volume cycles obtained with spatiotemporal image correlation (STIC) from six fetuses with postnatally confirmed diagnoses: four with normal hearts between 19 and 29 completed gestational weeks, one with d-transposition of the great arteries and one with hypoplastic left heart syndrome. The volumes were analyzed offline using a commercially available segmentation algorithm designed for ovarian folliculometry. Using this software, individual 'cavities' in a static volume are selected and assigned individual colors in cross-sections and in 3D-rendered views, and their dimensions (diameters and volumes) can be calculated. RESULTS: Individual segments of fetal cardiac cavities could be separated, adjacent segments merged, and the resulting electronic casts studied in their spatial context. Volume measurements could also be performed. Exemplary images and interactive videoclips showing the segmented digital casts were generated. CONCLUSION: The approach presented here is an important step towards an automated fetal volume echocardiogram. It has the potential both to help in obtaining a correct structural diagnosis and to generate exemplary visual displays of cardiac anatomy in normal and structurally abnormal cases for consultation and teaching.
Abstract:
In this paper we compare the performance of two image classification paradigms (object- and pixel-based) for creating a land cover map of Asmara, the capital of Eritrea, and its surrounding areas using Landsat ETM+ imagery acquired in January 2000. The image classification methods used were maximum likelihood for the pixel-based approach and Bhattacharyya distance for the object-oriented approach, implemented in the ArcGIS and SPRING software packages, respectively. Advantages and limitations of both approaches are presented and discussed. Classification outputs were assessed using overall accuracy and Kappa indices. The pixel- and object-based classification methods resulted in overall accuracies of 78% and 85%, respectively. The Kappa coefficient for the pixel- and object-based approaches was 0.74 and 0.82, respectively. Although the pixel-based approach is the most commonly used method, assessment and visual interpretation of the results clearly reveal that the object-oriented approach has advantages for this specific case study.
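Both accuracy measures reported above are derived from a classification confusion matrix. As a minimal sketch of how overall accuracy and Cohen's Kappa are computed (the matrix values here are invented for illustration and are not the study's data):

```python
import numpy as np

def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = classified classes)."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    observed = np.trace(confusion) / total                     # observed agreement p_o
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total ** 2  # chance agreement p_e
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa

# Illustrative 3-class confusion matrix (hypothetical numbers)
cm = [[50, 5, 2],
      [4, 45, 6],
      [3, 7, 40]]
acc, kappa = accuracy_and_kappa(cm)
print(f"overall accuracy = {acc:.2f}, kappa = {kappa:.2f}")
```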
Abstract:
Behavioral reflection is crucial to support, for example, functional upgrades, on-the-fly debugging, or the monitoring of critical applications. However, the use of reflective features can lead to severe problems due to infinite metacall recursion, even in simple cases. This is especially a problem when reflecting on core language features, since there is a high chance that such features are used to implement the reflective behavior itself. In this paper we analyze the problem of infinite meta-object call recursion and solve it by providing a first-class representation of meta-level execution: at any point in the execution of a system it can be determined whether we are operating on the meta level or the base level, so that infinite recursion can be prevented. We show how meta-level execution can be represented by a meta-context and how reflection thereby becomes context-aware. Our solution makes it possible to freely apply behavioral reflection even to system classes: the meta-context brings stability to behavioral reflection. We validate the concept with a robust implementation, and we present benchmarks.
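The core idea, reifying whether execution is currently on the meta level so that a reflective hook is not re-triggered by its own implementation, can be illustrated with a small Python analogue. This is only a sketch under assumed names (traced, log, in_meta_level); it is not the paper's Smalltalk implementation:

```python
import functools
import threading

# Thread-local "meta-context": records whether we are already executing meta-level code.
_meta_context = threading.local()

def in_meta_level():
    return getattr(_meta_context, "active", False)

def traced(func):
    """Reflective hook: log every call, but only when running on the base level,
    so that the logging code (which is itself traced) cannot recurse."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if in_meta_level():                 # already on the meta level: behave normally
            return func(*args, **kwargs)
        _meta_context.active = True         # enter the meta level
        try:
            log(f"calling {func.__name__}")  # meta-level behaviour
        finally:
            _meta_context.active = False    # leave the meta level
        return func(*args, **kwargs)
    return wrapper

@traced
def log(message):
    # log() is itself traced; without the meta-context this would recurse forever.
    print(message)

@traced
def compute(x):
    return x * 2

compute(21)   # prints "calling compute", then returns 42
```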
Abstract:
Back-in-time debuggers are extremely useful tools for identifying the causes of bugs, as they allow us to inspect the past states of objects that are no longer present in the current execution stack. Unfortunately, the "omniscient" approaches that try to remember all previous states are impractical because they either consume too much space or are far too slow. Several approaches rely on heuristics to limit these penalties, but they ultimately end up discarding too much relevant information. In this paper we propose a practical approach to back-in-time debugging that attempts to keep track of only the relevant past data. In contrast to other approaches, we keep object history information together with the regular objects in the application memory. Although seemingly counter-intuitive, this approach has the effect that past data that is not reachable from current application objects (and hence no longer relevant) is automatically garbage collected. In this paper we describe the technical details of our approach and present benchmarks demonstrating that memory consumption stays within practical bounds. Furthermore, since our approach works at the virtual machine level, the performance penalty is significantly lower than that of other approaches.
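The key design point, storing an object's history alongside the object itself so that ordinary garbage collection automatically discards the history of unreachable (and hence irrelevant) objects, can be sketched in Python as follows. This is a simplified, assumed analogue; the paper's approach works at the virtual machine level, which this sketch does not attempt to reproduce:

```python
class HistoryRecord:
    """One past value of a field, with the step at which it was written."""
    def __init__(self, step, field, value):
        self.step = step
        self.field = field
        self.value = value

class Recorded:
    """Objects of this class keep their own write history in an instance field.
    When the object becomes unreachable, its history is collected with it."""
    _step = 0

    def __init__(self):
        object.__setattr__(self, "_history", [])

    def __setattr__(self, field, value):
        Recorded._step += 1
        self._history.append(HistoryRecord(Recorded._step, field, value))
        object.__setattr__(self, field, value)

    def history_of(self, field):
        return [(r.step, r.value) for r in self._history if r.field == field]

class Account(Recorded):
    def __init__(self, balance):
        super().__init__()
        self.balance = balance

a = Account(100)
a.balance = 80
a.balance = 120
print(a.history_of("balance"))   # [(1, 100), (2, 80), (3, 120)]
# Once `a` is no longer reachable, its history list is garbage collected too.
```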
Abstract:
A large body of research analyzes the runtime execution of a system to extract abstract behavioral views. Those approaches primarily analyze control flow by tracing method execution events, or they analyze object graphs of heap snapshots. However, they do not capture how objects are passed through the system at runtime. We refer to this exchange of objects as the object flow, and we claim that analyzing object flow is necessary to understand the runtime of an object-oriented application. We propose and detail Object Flow Analysis, a novel dynamic analysis technique that takes this new information into account. To evaluate its usefulness, we present a visual approach that allows a developer to study classes and components in terms of how they exchange objects at runtime. We illustrate our approach with three case studies.
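As an informal illustration of what object-flow information looks like, the following Python sketch records, for each traced call, which class hands which object to which class. The helper names (track_flow, object_flows) and the example classes are assumptions for illustration, not the authors' tool:

```python
import functools

object_flows = []   # (origin class, target class, object) triples observed at runtime

def track_flow(method):
    """Record every object passed into a traced method as a flow edge
    from the argument's class to the receiver's class."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        for arg in list(args) + list(kwargs.values()):
            object_flows.append((type(arg).__name__, type(self).__name__, arg))
        return method(self, *args, **kwargs)
    return wrapper

class Order:
    pass

class Invoice:
    @track_flow
    def add(self, order):
        self.order = order

class Archive:
    @track_flow
    def store(self, invoice):
        self.invoice = invoice

invoice = Invoice()
invoice.add(Order())
Archive().store(invoice)

for origin, target, obj in object_flows:
    print(f"{origin} object flowed into {target}")
# Order object flowed into Invoice
# Invoice object flowed into Archive
```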
Abstract:
As object-oriented languages are extended with novel modularization mechanisms, better underlying models are required to implement these high-level features. This paper describes CELL, a language model that builds on delegation-based chains of object fragments. Composition of groups of cells is used: 1) to represent objects, 2) to realize various forms of method lookup, and 3) to keep track of method references. A running prototype of CELL is provided and used to realize the basic kernel of a Smalltalk system. The paper shows, using several examples, how higher-level features such as traits can be supported by the lower-level model.
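A rough Python analogue of delegation-based method lookup through a chain of object fragments is sketched below. The names Cell, lookup, send and delegate are illustrative assumptions; this is not the CELL prototype itself:

```python
class Cell:
    """An object fragment: holds some methods and optionally delegates to the next cell."""
    def __init__(self, methods=None, delegate=None):
        self.methods = methods or {}
        self.delegate = delegate

    def lookup(self, selector):
        """Walk the delegation chain until a cell implementing the selector is found."""
        cell = self
        while cell is not None:
            if selector in cell.methods:
                return cell.methods[selector]
            cell = cell.delegate
        raise AttributeError(f"message not understood: {selector}")

    def send(self, selector, *args):
        return self.lookup(selector)(self, *args)

# Compose an "object" from a chain of cells: a behaviour fragment delegating to a base fragment.
base = Cell({"describe": lambda self: "plain point"})
colored = Cell({"describe": lambda self: "colored point",
                "color": lambda self: "red"}, delegate=base)

print(colored.send("describe"))   # found in the first cell of the chain
print(colored.send("color"))      # "red"
print(base.send("describe"))      # "plain point"
```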
Abstract:
The rapid growth of object-oriented development over the past twenty years has given rise to many object-oriented systems that are large, complex and hard to maintain. Object-Oriented Reengineering Patterns addresses the problem of understanding and reengineering such object-oriented legacy systems. This book collects and distills successful techniques in planning a reengineering project, reverse-engineering, problem detection, migration strategies and software redesign. The material in this book is presented as a set of "reengineering patterns" --- recurring solutions that experts apply while reengineering and maintaining object-oriented systems. The principles and techniques described in this book have been observed and validated in a number of industrial projects, and reflect best practice in object-oriented reengineering.
Abstract:
Map landscape-based segmentation of the sequences of momentary potential distribution maps (42-channel recordings) into brain microstates during spontaneous brain activity was used to study brain electric field spatial effects of single doses of piracetam (2.9, 4.8, and 9.6 g Nootropil® UCB and placebo) in a double-blind study of five normal young volunteers. Four 15-second epochs were analyzed from each subject and drug condition. The most prominent class of microstates (covering 49% of the time) consisted of potential maps with a generally anterior-posterior field orientation. The map orientation of this microstate class showed an increasing clockwise deviation from the placebo condition with increasing drug doses (Fisher's probability product, p < 0.014). The results of this study suggest the use of microstate segmentation analysis for the assessment of central effects of medication in spontaneous multichannel electroencephalographic data, as a complementary approach to frequency-domain analysis.
Abstract:
Background: Statistical shape models are widely used in biomedical research. They are routinely implemented for automatic image segmentation or object identification in medical images. In these fields, however, the acquisition of the large training datasets required to develop these models is usually a time-consuming process. Even after this effort, the collected datasets are often lost or mishandled, resulting in duplication of work. Objective: To solve these problems, the Virtual Skeleton Database (VSD) is proposed as a centralized storage system where the data necessary to build statistical shape models can be stored and shared. Methods: The VSD provides an online repository system tailored to the needs of the medical research community. The processing of the most common image file types, a statistical shape model framework, and an ontology-based search provide the generic tools to store, exchange, and retrieve digital medical datasets. The hosted data are accessible to the community, and collaborative research catalyzes their productivity. Results: To illustrate the need for an online repository for medical research, three exemplary projects of the VSD are presented: (1) an international collaboration to improve cochlear surgery and implant optimization, (2) a population-based analysis of femoral fracture risk between genders, and (3) an online application developed for the evaluation and comparison of brain tumor segmentations. Conclusions: The VSD is a novel system for scientific collaboration within the medical image community, with a data-centric concept and a semantically driven search option for anatomical structures. The repository has proven to be a useful tool for collaborative model building, as a resource for biomechanical population studies, and for enhancing segmentation algorithms.