897 results for Automatic segmentation
Abstract:
RATIONALE AND OBJECTIVES: To evaluate the effect of automatic tube current modulation on radiation dose and image quality for low tube voltage computed tomography (CT) angiography. MATERIALS AND METHODS: An anthropomorphic phantom was scanned with a 64-section CT scanner using the following tube voltages: 140 kVp (Protocol A), 120 kVp (Protocol B), 100 kVp (Protocol C), and 80 kVp (Protocol D). To achieve similar noise, combined z-axis and xy-axes automatic tube current modulation was applied. Effective dose (ED) for the four tube voltages was assessed. Three plastic vials filled with different concentrations of iodinated solution were placed on the phantom's abdomen to obtain attenuation measurements. The signal-to-noise ratio (SNR) was calculated, and a figure of merit (FOM) for each iodinated solution was computed as SNR(2)/ED. RESULTS: The ED was kept similar for the four different tube voltages: (A) 5.4 mSv +/- 0.3, (B) 4.1 mSv +/- 0.6, (C) 3.9 mSv +/- 0.5, and (D) 4.2 mSv +/- 0.3 (P > .05). As the tube voltage decreased from 140 to 80 kVp, image noise was maintained (range, 13.8-14.9 HU) (P > .05). SNR increased as the tube voltage decreased, with an overall gain of 119% for the 80-kVp compared to the 140-kVp protocol (P < .05). The FOM results indicated that with a reduction of the tube voltage from 140 to 120, 100, and 80 kVp, at constant SNR, ED was reduced by a factor of 2.1, 3.3, and 5.1, respectively (P < .001). CONCLUSIONS: As tube voltage decreases, automatic tube current modulation for CT angiography yields either a significant increase in image quality at constant radiation dose or a significant decrease in radiation dose at a constant image quality.
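The abstract's dose-efficiency metric, FOM = SNR²/ED, is simple enough to state as code. The numeric values in the usage note below are illustrative assumptions, not measurements from the study:

```python
def figure_of_merit(signal_hu, noise_hu, effective_dose_msv):
    """Figure of merit as defined in the abstract: FOM = SNR^2 / ED.

    signal_hu: mean attenuation of the iodinated solution (HU)
    noise_hu: image noise, i.e. standard deviation of attenuation (HU)
    effective_dose_msv: effective dose ED in mSv
    """
    snr = signal_hu / noise_hu          # signal-to-noise ratio
    return snr ** 2 / effective_dose_msv
```

With illustrative values of 280 HU signal, 14 HU noise, and 4.2 mSv dose, `figure_of_merit(280, 14, 4.2)` gives 20²/4.2 ≈ 95.2; halving ED at constant SNR doubles the FOM, which is exactly the trade-off the RESULTS section quantifies.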
Abstract:
As more and more open-source software components become available on the internet, we need automatic ways to label and compare them. For example, a developer who searches for reusable software must be able to quickly gain an understanding of retrieved components. This understanding cannot be gained at the level of source code due to the semantic gap between source code and the domain model. In this paper we present a lexical approach that uses the log-likelihood ratios of word frequencies to automatically provide labels for software components. We present a prototype implementation of our labeling/comparison algorithm and provide examples of its application. In particular, we apply the approach to detect trends in the evolution of a software system.
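A minimal sketch of the core idea: score each word by a Dunning-style log-likelihood ratio of its frequency in a component's vocabulary against a reference corpus, and keep the most over-represented words as labels. The helper names and the filtering heuristic are assumptions for illustration, not the paper's exact formulation:

```python
import math
from collections import Counter

def log_likelihood_ratio(k1, n1, k2, n2):
    """Log-likelihood ratio (G^2) comparing a word seen k1 times in n1
    component tokens against k2 times in n2 reference-corpus tokens."""
    def ll(k, n, p):
        eps = 1e-12
        p = min(max(p, eps), 1 - eps)   # guard log(0) at the extremes
        return k * math.log(p) + (n - k) * math.log(1 - p)
    p = (k1 + k2) / (n1 + n2)           # pooled rate under the null hypothesis
    return 2 * (ll(k1, n1, k1 / n1) + ll(k2, n2, k2 / n2)
                - ll(k1, n1, p) - ll(k2, n2, p))

def label_component(tokens, reference_counts, reference_total, top=5):
    """Rank words by how strongly they are over-represented in the component."""
    counts = Counter(tokens)
    n1 = sum(counts.values())
    scored = []
    for word, k1 in counts.items():
        k2 = reference_counts.get(word, 0)
        if k1 / n1 > (k2 + 1) / reference_total:   # over-represented words only
            scored.append((log_likelihood_ratio(k1, n1, k2 + 1, reference_total), word))
    return [w for _, w in sorted(scored, reverse=True)[:top]]
```

A word that is common in the component but rare in the reference corpus (e.g. `parser` in a compiler front end) gets a high G² score, while ubiquitous words like `the` are filtered out.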
Abstract:
Methods for optical motion capture often require time-consuming manual processing before the data can be used for subsequent tasks such as retargeting or character animation. These processing steps restrict the applicability of motion capture, especially for dynamic VR environments with real-time requirements. To solve these problems, we present two additional, fast and automatic processing stages based on our motion capture pipeline presented in [HSK05]. A normalization step aligns the recorded coordinate systems with the skeleton structure to yield a common and intuitive data basis across different recording sessions. A second step computes a parameterization based on automatically extracted main movement axes to generate a compact motion description. Our method restricts neither the placement of marker bodies nor the recording setup, and only requires a short calibration phase.
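One plausible reading of "automatically extracted main movement axes" is a principal component analysis of the marker trajectory; the sketch below is a hedged illustration of that idea, not the paper's exact construction:

```python
import numpy as np

def main_movement_axes(positions, k=2):
    """Extract the main movement axes of a marker trajectory via PCA and
    project onto them to obtain a compact motion description.

    positions: (T, 3) array of marker positions over T frames.
    Returns (axes, coords): the k principal axes and the k-D coordinates.
    """
    centered = positions - positions.mean(axis=0)
    # principal axes = right singular vectors of the centered trajectory
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    axes = Vt[:k]
    coords = centered @ axes.T   # compact k-dimensional motion description
    return axes, coords
```

For a mostly planar motion (e.g. an arm swing), two axes already capture almost all of the variance, which is what makes the parameterization compact.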
Abstract:
Two methods for registering laser-scans of human heads and transforming them to a new semantically consistent topology defined by a user-provided template mesh are described. Both algorithms are stated within the Iterative Closest Point framework. The first method is based on finding landmark correspondences by iteratively registering the vicinity of a landmark with a re-weighted error function. Thin-plate spline interpolation is then used to deform the template mesh, and finally the scan is resampled in the topology of the deformed template. The second algorithm employs a morphable shape model, which can be computed from a database of laser-scans using the first algorithm. It directly optimizes pose and shape of the morphable model. The use of the algorithm with PCA mixture models, where the shape is split up into regions each described by an individual subspace, is addressed. Mixture models require either blending or regularization strategies, both of which are described in detail. For both algorithms, strategies for filling in missing geometry for incomplete laser-scans are described. While an interpolation-based approach can be used to fill in small or smooth regions, the model-driven algorithm is capable of fitting a plausible complete head mesh to arbitrarily small geometry, which is known as "shape completion". The importance of regularization in the case of extreme shape completion is shown.
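Both methods build on the Iterative Closest Point framework, which can be sketched in a few lines: alternate between nearest-neighbor correspondence and the closed-form Kabsch/Procrustes rigid transform. This is a minimal generic ICP, assuming small point sets; a real implementation would use a k-d tree and the paper's re-weighted error function:

```python
import numpy as np

def icp_rigid(source, target, iters=20):
    """Minimal rigid ICP: returns (R, t) mapping source onto target.

    source: (N, 3) points; target: (M, 3) points.
    """
    R, t = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(iters):
        # 1. correspondence: nearest target point for each source point
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        nn = target[d.argmin(axis=1)]
        # 2. best rigid transform via the Kabsch / Procrustes solution
        mu_s, mu_n = src.mean(axis=0), nn.mean(axis=0)
        H = (src - mu_s).T @ (nn - mu_n)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R_step = Vt.T @ D @ U.T
        t_step = mu_n - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step   # accumulate the transform
    return R, t
```

The two methods in the abstract differ in what is optimized inside this loop: the first re-weights the error around landmarks, the second optimizes morphable-model pose and shape parameters instead of a plain rigid transform.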
Abstract:
Given arbitrary pictures, we explore the possibility of using new techniques from computer vision and artificial intelligence to create customized visual games on-the-fly. These include the popular coloring-book, link-the-dot, and spot-the-difference games. The feasibility of these systems is discussed, and we describe prototype implementations that work well in practice in an automatic or semi-automatic way.
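The abstract does not spell out its pipeline, but one plausible building block for the coloring-book game is edge detection: turn a photo into black outlines on a white page. The sketch below uses a plain Sobel gradient as a hedged illustration, not the authors' method:

```python
import numpy as np

def coloring_book_outline(image, threshold=0.25):
    """Convert a grayscale image (2-D float array in [0, 1]) into a
    coloring-book page: 0.0 = black outline, 1.0 = blank paper."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # Sobel x
    ky = kx.T                                                          # Sobel y
    padded = np.pad(image, 1, mode="edge")
    h, w = image.shape
    gx, gy = np.zeros_like(image), np.zeros_like(image)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    mag = np.hypot(gx, gy)                    # gradient magnitude
    return np.where(mag > threshold, 0.0, 1.0)
```

The threshold controls how busy the resulting page is; a semi-automatic version would expose it to the user, in line with the abstract's "automatic or semi-automatic" framing.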
Abstract:
This paper provides insight into the development of a process model for the expansion of automatic miniload warehouses. The model is based on literature research and covers four phases of a warehouse expansion: the preparatory phase, the current-state analysis, the design phase, and the decision-making phase. In addition to the literature research, the presented model is based on a reliable data set and can be applied with reasonable effort to ensure an informed decision on the warehouse layout. The model is addressed to users who are usually employees of the logistics department, and is oriented toward improving the daily business organization in combination with warehouse expansion planning.
Abstract:
Map landscape-based segmentation of the sequences of momentary potential distribution maps (42-channel recordings) into brain microstates during spontaneous brain activity was used to study brain electric field spatial effects of single doses of piracetam (2.9, 4.8, and 9.6 g Nootropil® UCB and placebo) in a double-blind study of five normal young volunteers. Four 15-second epochs were analyzed from each subject and drug condition. The most prominent class of microstates (covering 49% of the time) consisted of potential maps with a generally anterior-posterior field orientation. The map orientation of this microstate class showed an increasing clockwise deviation from the placebo condition with increasing drug doses (Fisher's probability product, p < 0.014). The results of this study suggest the use of microstate segmentation analysis for the assessment of central effects of medication in spontaneous multichannel electroencephalographic data, as a complementary approach to frequency-domain analysis.
Abstract:
UNLABELLED The automatic implantable defibrillator (AID) is the treatment of choice for primary and secondary prevention of sudden death. At the Instituto Nacional de Cardiología, from October 1996 to January 2002, 25 patients were implanted with 26 AID. There were 23 men (92%), and the mean age of the whole group was 51.4 years. Twenty-three patients (92%) presented structural heart disease, the most common being ischemic heart disease in 13 patients (52%), with a mean ejection fraction of 37.8%. One patient without structural heart disease had Brugada syndrome. The most frequent clinical arrhythmia was ventricular tachycardia, in 14 patients (56%). The mean follow-up was 29.3 months, during which a total of 30 events of ventricular arrhythmia were treated through the AID; six of them were inappropriate due to paroxysmal atrial fibrillation; 10 AID patients (34%) have not received device therapy. Three patients (12%) of the group died due to congestive heart failure refractory to pharmacologic treatment. CONCLUSION The implant of the AID is a safe and effective measure for primary and secondary prevention of sudden death. Worldwide experience shows that this kind of device has not modified the mortality rate due to heart failure in these patients, but it has diminished sudden arrhythmic death.
Abstract:
Background: Statistical shape models are widely used in biomedical research. They are routinely implemented for automatic image segmentation or object identification in medical images. In these fields, however, the acquisition of the large training datasets, required to develop these models, is usually a time-consuming process. Even after this effort, the collections of datasets are often lost or mishandled resulting in replication of work. Objective: To solve these problems, the Virtual Skeleton Database (VSD) is proposed as a centralized storage system where the data necessary to build statistical shape models can be stored and shared. Methods: The VSD provides an online repository system tailored to the needs of the medical research community. The processing of the most common image file types, a statistical shape model framework, and an ontology-based search provide the generic tools to store, exchange, and retrieve digital medical datasets. The hosted data are accessible to the community, and collaborative research catalyzes their productivity. Results: To illustrate the need for an online repository for medical research, three exemplary projects of the VSD are presented: (1) an international collaboration to achieve improvement in cochlear surgery and implant optimization, (2) a population-based analysis of femoral fracture risk between genders, and (3) an online application developed for the evaluation and comparison of the segmentation of brain tumors. Conclusions: The VSD is a novel system for scientific collaboration for the medical image community with a data-centric concept and semantically driven search option for anatomical structures. The repository has been proven to be a useful tool for collaborative model building, as a resource for biomechanical population studies, or to enhance segmentation algorithms.
Abstract:
The COSMIC-2 mission is a follow-on mission of the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) with an upgraded payload for improved radio occultation (RO) applications. The objective of this paper is to develop a near-real-time (NRT) orbit determination system, called the NRT National Chiao Tung University (NCTU) system, to support COSMIC-2 in atmospheric applications and verify the orbit product of COSMIC. The system is capable of automatic determination of the NRT GPS clocks and the LEO orbit and clock. To assess the NRT (NCTU) system, we use eight days of COSMIC data (March 24-31, 2011), which contain a total of 331 GPS observation sessions and 12,393 RO observable files. The parallel scheduling for independent GPS and LEO estimations and automatic time matching improves the computational efficiency by 64% compared to the sequential scheduling. Orbit difference analyses suggest a 10-cm accuracy for the COSMIC orbits from the NRT (NCTU) system, consistent with that of the NRT University Corporation for Atmospheric Research (UCAR) system. The mean velocity accuracy from the NRT orbits of COSMIC is 0.168 mm/s, corresponding to an error of about 0.051 μrad in the bending angle. The rms differences in the NRT COSMIC clock and in GPS clocks between the NRT (NCTU) and the postprocessing products are 3.742 and 1.427 ns. The GPS clocks determined from a partial ground GPS network [from NRT (NCTU)] and a full one [from NRT (UCAR)] result in mean rms frequency stabilities of 6.1E-12 and 2.7E-12, respectively, corresponding to range fluctuations of 5.5 and 2.4 cm and bending angle errors of 3.75 and 1.66 μrad.
Abstract:
Information theory-based metrics such as mutual information (MI) are widely used as similarity measures for multimodal registration. Nevertheless, such metrics may lead to matching ambiguity for non-rigid registration. Moreover, maximization of MI alone does not necessarily produce an optimal solution. In this paper, we propose a segmentation-assisted similarity metric based on point-wise mutual information (PMI). This similarity metric, termed SPMI, enhances the registration accuracy by considering tissue classification probabilities as prior information, which is generated from an expectation maximization (EM) algorithm. Diffeomorphic demons is then adopted as the registration model and is optimized in a hierarchical framework (H-SPMI) based on different levels of anatomical structure as prior knowledge. The proposed method is evaluated using Brainweb synthetic data and clinical fMRI images. Both qualitative and quantitative assessments were performed, as well as a sensitivity analysis to the segmentation error. Compared to pure intensity-based approaches which only maximize mutual information, we show that the proposed algorithm provides significantly better accuracy on both synthetic and clinical data.
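The point-wise quantity underlying the metric is PMI(i, j) = log(p(i, j) / (p(i) p(j))), evaluated at each voxel's intensity pair rather than summed into a single MI score. The sketch below is a generic histogram-based PMI, not the authors' SPMI (which additionally weights PMI by EM tissue-class priors):

```python
import numpy as np

def pointwise_mutual_information(fixed, moving, bins=32):
    """Per-voxel PMI between two images of the same shape.

    Returns an array of PMI(i, j) values, one per voxel, where (i, j) is
    that voxel's (fixed, moving) intensity-bin pair.
    """
    edges_f = np.linspace(fixed.min(), fixed.max(), bins)
    edges_m = np.linspace(moving.min(), moving.max(), bins)
    f = np.clip(np.digitize(fixed.ravel(), edges_f) - 1, 0, bins - 1)
    m = np.clip(np.digitize(moving.ravel(), edges_m) - 1, 0, bins - 1)
    joint = np.zeros((bins, bins))
    np.add.at(joint, (f, m), 1.0)           # joint intensity histogram
    joint /= joint.sum()
    pf = joint.sum(axis=1)                  # marginal of the fixed image
    pm = joint.sum(axis=0)                  # marginal of the moving image
    eps = 1e-12                             # smoothing to avoid log(0)
    pmi_table = np.log((joint + eps) / (np.outer(pf, pm) + eps))
    return pmi_table[f, m].reshape(fixed.shape)
```

Averaging the per-voxel PMI recovers the usual MI estimate; keeping it per-voxel is what lets a registration model penalize locally implausible intensity pairings instead of only the global score.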
Abstract:
Image-based modeling of tumor growth combines methods from cancer simulation and medical imaging. In this context, we present a novel approach to adapt a healthy brain atlas to MR images of tumor patients. In order to establish correspondence between a healthy atlas and a pathologic patient image, tumor growth modeling in combination with registration algorithms is employed. In a first step, the tumor is grown in the atlas based on a new multi-scale, multi-physics model including growth simulation from the cellular level up to the biomechanical level, accounting for cell proliferation and tissue deformations. Large-scale deformations are handled with an Eulerian approach for finite element computations, which can operate directly on the image voxel mesh. Subsequently, dense correspondence between the modified atlas and patient image is established using nonrigid registration. The method offers opportunities in atlas-based segmentation of tumor-bearing brain images as well as for improved patient-specific simulation and prognosis of tumor progression.
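To make the growth-simulation step concrete: a common, much-simplified proxy in image-based tumor growth modeling is the Fisher-KPP reaction-diffusion equation, dc/dt = D ∇²c + ρ c (1 − c), combining diffusion (invasion) with logistic proliferation. The 1-D explicit step below only illustrates that idea; the paper's multi-scale, multi-physics model, with its cellular and biomechanical levels, is far richer:

```python
import numpy as np

def fisher_kpp_step(c, dx=1.0, dt=0.1, D=0.5, rho=0.2):
    """One explicit time step of the 1-D Fisher-KPP equation with
    periodic boundaries. c: array of tumor cell density in [0, 1]."""
    # discrete Laplacian: diffusion spreads density to neighboring cells
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx ** 2
    # logistic reaction: proliferation that saturates at density 1
    return np.clip(c + dt * (D * lap + rho * c * (1 - c)), 0.0, 1.0)
```

Starting from a small seed, repeated steps produce an expanding, growing density front; in the paper's pipeline this grown tumor then drives the biomechanical deformation of the atlas before nonrigid registration.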