50 results for Automatic application configuration
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Point Distribution Models (PDM) are among the most popular shape description techniques, and their usefulness has been demonstrated in a wide variety of medical imaging applications. However, to adequately characterize the underlying modeled population it is essential to have a representative number of training samples, which is not always possible. This problem becomes especially relevant as the complexity of the modeled structure increases, with the modeling of ensembles of multiple 3D organs being one of the most challenging cases. In this paper, we introduce a new GEneralized Multi-resolution PDM (GEM-PDM) for multi-organ analysis that efficiently characterizes the different inter-object relations as well as the particular locality of each object. Importantly, unlike previous approaches, the configuration of the algorithm is automated thanks to a new agglomerative landmark clustering method proposed here, which also allows us to identify smaller, anatomically significant regions within organs. The significant advantage of GEM-PDM over two previous approaches (PDM and hierarchical PDM) in terms of shape modeling accuracy and robustness to noise has been verified on two databases of multiple-organ sets: six subcortical brain structures, and seven abdominal organs. Finally, we propose the integration of the new shape modeling framework into an active shape model based segmentation algorithm. The resulting algorithm, named GEMA, provides better overall performance than the two classical approaches tested, ASM and hierarchical ASM, when applied to the segmentation of 3D brain MRI.
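As background for readers unfamiliar with PDMs, the sketch below builds a classical single-resolution PDM with PCA. It assumes the training shapes are already aligned and in point-to-point landmark correspondence; it is a minimal illustration of the baseline technique, not the GEM-PDM itself.

```python
# Minimal sketch of a classical Point Distribution Model (PDM), not the
# GEM-PDM: shapes are pre-aligned landmark sets in point-to-point
# correspondence, flattened to rows of length 3 * n_landmarks.
import numpy as np

def build_pdm(shapes, var_kept=0.95):
    """Return mean shape and the principal modes covering `var_kept` variance."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data matrix yields the PCA modes of variation.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    eigvals = s ** 2 / (shapes.shape[0] - 1)
    k = np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), var_kept) + 1
    return mean, vt[:k], eigvals[:k]

def synthesize(mean, modes, b):
    """New shape x = mean + Phi^T b for mode weights b."""
    return mean + b @ modes

# Toy usage: 50 training shapes of 100 3D landmarks each.
rng = np.random.default_rng(0)
shapes = rng.normal(size=(50, 300))
mean, modes, eigvals = build_pdm(shapes)
x = synthesize(mean, modes, 2.0 * np.sqrt(eigvals))  # +2 std dev per mode
```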
Abstract:
Osteoarticular allograft is one possible treatment in wide surgical resections with large defects. Selecting the best osteoarticular allograft is of great relevance for optimal exploitation of the bone databank, a good surgical outcome and the patient's recovery. Current approaches are, however, very time consuming, hindering these goals in practice. We present a validation study of software that performs automatic bone measurements, used to automatically assess distal femur sizes across a databank. 170 distal femur surfaces were reconstructed from CT data and measured manually following a size measurement protocol based on the transepicondylar distance (A), the anterior-posterior distance of the medial condyle (B) and the anterior-posterior distance of the lateral condyle (C). Intra- and inter-observer studies were conducted and regarded as ground-truth measurements. Manual and automatic measurements were compared. For the automatic measurements, the correlation coefficients between observer one and the automatic method were 0.99 for measure A and 0.96 for measures B and C. The average time needed to perform the measurements was 16 h for both manual measurement rounds, versus 3 min for the automatic method. The results demonstrate the high reliability and, most importantly, high repeatability of the proposed approach, as well as a considerable speed-up in planning.
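For illustration only, here is a hypothetical sketch of how the three sizes could be read off an aligned surface. The axis conventions and the crude condyle split are our assumptions, not the validated measurement protocol from the study.

```python
# Hypothetical sketch: distal femur sizes from an aligned vertex cloud.
# Assumes the surface is already in an anatomical frame (x: medio-lateral,
# y: anterior-posterior, z: proximo-distal). These conventions are ours,
# not the paper's validated protocol.
import numpy as np

def femur_sizes(v):
    """v: (n, 3) vertices of a distal femur surface in an anatomical frame."""
    # A: transepicondylar distance = extent along the medio-lateral axis.
    A = v[:, 0].max() - v[:, 0].min()
    # Split condyles at the medio-lateral midpoint (a crude stand-in for a
    # proper condyle segmentation; which half is medial depends on side).
    mid = 0.5 * (v[:, 0].max() + v[:, 0].min())
    medial, lateral = v[v[:, 0] < mid], v[v[:, 0] >= mid]
    # B, C: anterior-posterior extents of the two condyles.
    B = medial[:, 1].max() - medial[:, 1].min()
    C = lateral[:, 1].max() - lateral[:, 1].min()
    return A, B, C
```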
Abstract:
This abstract presents the biomechanical model used in the European ContraCancrum project, which aims at simulating tumor evolution in the brain and lung. The construction of the finite element model as well as a simulation of tumor growth are shown. The construction of the mesh is fully automatic and is therefore compatible with clinical application. This biomechanical model will later be combined with a cellular-level simulator also developed in the project.
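As a hedged illustration of what "simulating tumor growth" can mean numerically, the sketch below takes explicit finite-difference steps of the Fisher-Kolmogorov reaction-diffusion equation, a common macroscopic tumor-growth model. The project's actual model is finite-element based and may use a different formulation; this grid version only conveys the idea.

```python
# Illustrative only: explicit finite-difference steps of the
# Fisher-Kolmogorov equation dc/dt = D * laplacian(c) + rho * c * (1 - c),
# a common macroscopic tumor-growth model (not necessarily the project's).
import numpy as np

def fk_step(c, D=0.1, rho=0.02, dx=1.0, dt=0.1):
    """c: 3D array of normalized tumor cell density in [0, 1]."""
    lap = (
        np.roll(c, 1, 0) + np.roll(c, -1, 0)
        + np.roll(c, 1, 1) + np.roll(c, -1, 1)
        + np.roll(c, 1, 2) + np.roll(c, -1, 2)
        - 6.0 * c
    ) / dx**2  # 7-point Laplacian with periodic boundaries (sketch only)
    return np.clip(c + dt * (D * lap + rho * c * (1.0 - c)), 0.0, 1.0)

# Seed a small tumor in the center of a 32^3 grid and grow it.
c = np.zeros((32, 32, 32)); c[16, 16, 16] = 1.0
for _ in range(100):
    c = fk_step(c)
```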
Abstract:
Navigated ultrasound (US) imaging is used for the intra-operative acquisition of 3D image data during image-guided surgery. The presented approach includes the design of a compact and easy-to-use US calibration device and its integration into a software application for navigated liver surgery. User interaction during the calibration process is minimized through automatic detection of the calibration process, followed by automatic image segmentation, calculation of the calibration transform and validation of the obtained result. This leads to a fast, interaction-free and fully automatic calibration procedure enabling intra-operative use.
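One generic building block of such a procedure is the least-squares rigid transform between corresponding point sets (Kabsch/Procrustes), e.g., mapping segmented calibration features in image space onto their tracked 3D positions. The sketch below shows this step only and is not the paper's specific calibration method.

```python
# Generic building block, not the paper's specific method: least-squares
# rigid transform (Kabsch / Procrustes) between corresponding point sets.
import numpy as np

def rigid_transform(src, dst):
    """Return R (3x3), t (3,) minimizing sum ||R @ src_i + t - dst_i||^2."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)           # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

# Validation in the spirit of the paper: mean residual on the fiducials.
def fiducial_error(src, dst, R, t):
    return np.linalg.norm(src @ R.T + t - dst, axis=1).mean()
```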
Abstract:
The rapid technical advances in computed tomography have led to an increased number of clinical indications. Unfortunately, radiation exposure to the population has risen at the same time because of the growing total number of CT examinations. In the last few years various publications have demonstrated the feasibility of radiation dose reduction for CT examinations without compromising image quality or interpretation accuracy. The majority of the proposed methods for dose optimization are easy to apply and are independent of the detector array configuration. This article reviews indication-dependent principles (e.g. application of reduced tube voltage for CT angiography, selection of the collimation and the pitch, reducing the total number of imaging series, lowering the tube voltage and tube current for non-contrast CT scans), manufacturer-dependent principles (e.g. accurate application of automatic tube current modulation, use of adaptive image noise filters and use of iterative image reconstruction) and general principles (e.g. appropriate patient centering in the gantry, avoiding over-ranging of the CT scan, lowering the tube voltage and tube current for survey CT scans) that lead to radiation dose reduction.
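To make the pitch, collimation and scan-length levers quantitative, the standard CT dose descriptors relate as follows (these are generic textbook definitions, not results from this review):

```latex
% Generic CT dose descriptors (textbook definitions, not from this review):
% CTDI_w: weighted CT dose index; L: scan length; E: effective dose;
% k: region-specific conversion factor in mSv / (mGy . cm).
\mathrm{CTDI_{vol}} = \frac{\mathrm{CTDI_w}}{\text{pitch}}, \qquad
\mathrm{DLP} = \mathrm{CTDI_{vol}} \cdot L, \qquad
E \approx k \cdot \mathrm{DLP}
```

Since CTDI_vol scales linearly with the tube current-time product, lowering the mAs, raising the pitch and shortening the scan range all reduce dose directly.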
Abstract:
Delineating brain tumor boundaries from magnetic resonance images is an essential task for the analysis of brain cancer. We propose a fully automatic method for brain tissue segmentation, which combines Support Vector Machine (SVM) classification using multispectral intensities and textures with subsequent hierarchical regularization based on Conditional Random Fields (CRF). The CRF regularization introduces spatial constraints into the powerful SVM classification, which otherwise assumes voxels to be independent of their neighbors. The approach first separates healthy from tumor tissue before both regions are subclassified, in a novel hierarchical way, into cerebrospinal fluid, white matter and gray matter, and into necrotic, active and edema regions, respectively. The hierarchical approach adds robustness and speed by allowing different levels of regularization to be applied at different stages. The method is fast and tailored to standard clinical acquisition protocols. It was assessed on 10 multispectral patient datasets, with results outperforming previous methods in terms of segmentation detail and computation times.
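A minimal sketch of the classification stage alone, a voxel-wise SVM on multispectral intensities, is given below; the paper's texture features and its hierarchical CRF regularization (the actual contribution) are omitted.

```python
# Minimal sketch of the classification stage only: a voxel-wise SVM on
# multispectral intensities. The paper's texture features and hierarchical
# CRF regularization are omitted for brevity.
import numpy as np
from sklearn.svm import SVC

def train_voxel_svm(volumes, labels):
    """volumes: (n_modalities, X, Y, Z) co-registered MR channels;
    labels: (X, Y, Z) integer tissue labels for the training case."""
    X = np.stack([v.ravel() for v in volumes], axis=1)  # (n_voxels, n_mod)
    # In practice one subsamples voxels; fitting on every voxel is slow.
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(X, labels.ravel())
    return clf

def classify(clf, volumes):
    X = np.stack([v.ravel() for v in volumes], axis=1)
    return clf.predict(X).reshape(volumes[0].shape)
```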
Abstract:
Automatic scan planning for magnetic resonance imaging of the knee aims to define an oriented bounding box around the knee joint from sparse scout images in order to choose the optimal field of view for the diagnostic images and limit acquisition time. We propose a fast and fully automatic method to perform this task based on the standard clinical scout imaging protocol. The method is based on sequential Chamfer matching of 2D scout feature images with a three-dimensional mean model of the femur and tibia. Subsequently, the joint plane separating the femur and tibia, which contains both menisci, can be automatically detected using an information-augmented active shape model on the diagnostic images. This can assist clinicians in quickly defining slices with a standardized and reproducible orientation, thus increasing diagnostic accuracy as well as the comparability of serial examinations. The method has been evaluated on 42 knee MR images. It has the potential to be incorporated into existing systems because it does not change the current acquisition protocol.
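The scoring step at the core of Chamfer matching can be sketched in 2D: sample a distance transform of the feature image at the transformed model points and take the mean. The sequential matching of 2D scouts to the 3D mean model is omitted here, and the translation-only search is a simplification.

```python
# 2D sketch of the Chamfer matching score: the mean distance-transform
# value sampled at transformed model points. The paper's sequential
# 2D/3D matching and full pose search are omitted.
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(feature_mask, model_pts, tx, ty):
    """feature_mask: boolean feature/edge image; model_pts: (n, 2) points
    in (row, col) order. Lower score = better fit at translation (tx, ty)."""
    dist = distance_transform_edt(~feature_mask)  # distance to nearest feature
    pts = np.round(model_pts + np.array([ty, tx])).astype(int)
    inside = ((pts >= 0) & (pts < np.array(dist.shape))).all(axis=1)
    if not inside.all():
        return np.inf  # penalize points falling outside the image
    return dist[pts[:, 0], pts[:, 1]].mean()

# A brute-force argmin over (tx, ty) picks the best translation; the
# actual method searches over full pose parameters.
```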
Abstract:
Cigarettes may contain up to 10% by weight of additives intended to make them more attractive. A fast and rugged method for screening cigarettes for additives of medium volatility was developed using automatic headspace solid-phase microextraction (HS-SPME) with a 65 µm carbowax-divinylbenzene fiber and gas chromatography-mass spectrometry (GC-MS) with standard electron impact ionisation. In three runs, each cigarette sample was extracted in closed headspace vials in basic, acidic and neutral media containing 0.5 g NaCl or Na2SO4. Furthermore, the method was optimized for the quantitative determination of 17 frequently occurring additives. The practical applicability of the method was demonstrated on cigarettes from 32 brands.
Abstract:
Electroencephalograms (EEG) are often contaminated with high-amplitude artifacts that limit the usability of the data. Methods that reduce these artifacts are often restricted to certain types of artifact, require manual interaction, or need large training data sets. In this paper we introduce a novel method that is able to eliminate many different types of artifact without manual intervention. The algorithm first decomposes the signal into different sub-band signals in order to isolate different types of artifact in specific frequency bands. After signal decomposition with principal component analysis (PCA), an adaptive threshold is applied to eliminate components with high variance corresponding to the dominant artifact activity. Our results show that the algorithm is able to significantly reduce artifacts while preserving the EEG activity. The parameters of the algorithm do not have to be identified for every patient individually, making the method a good candidate for preprocessing in automatic seizure detection and prediction algorithms.
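One plausible reading of this pipeline is sketched below: band-pass each channel into sub-bands, run PCA per band, zero components whose variance exceeds an adaptive threshold, and sum the reconstructions. The band edges, filter design and threshold rule are our assumptions, not the paper's parameters.

```python
# Hedged sketch of the pipeline described above. The band edges, filter
# design and threshold rule are assumptions, not the paper's parameters.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.decomposition import PCA

BANDS = [(0.5, 4), (4, 8), (8, 16), (16, 32)]  # assumed sub-bands, Hz

def clean_eeg(eeg, fs=256.0, thresh=3.0):
    """eeg: (n_samples, n_channels). Returns artifact-reduced signal."""
    cleaned = np.zeros_like(eeg)
    for lo, hi in BANDS:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, eeg, axis=0)   # zero-phase sub-band signal
        pca = PCA()
        scores = pca.fit_transform(band)
        var = scores.var(axis=0)
        # Adaptive threshold: drop components far above the median variance,
        # which carry the dominant artifact activity.
        scores[:, var > thresh * np.median(var)] = 0.0
        cleaned += pca.inverse_transform(scores)
    return cleaned
```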
Abstract:
Automatic identification and extraction of bone contours from X-ray images is an essential first step for further medical image analysis. In this paper we propose a 3D statistical model based framework for proximal femur contour extraction from calibrated X-ray images. The automatic initialization is solved by an estimation of Bayesian network algorithm that fits a multiple-component geometrical model to the X-ray data. The contour extraction is then accomplished by a non-rigid 2D/3D registration between a 3D statistical model and the X-ray images, in which bone contours are extracted by graphical model based Bayesian inference. Preliminary experiments on clinical data sets verified the validity of the approach.
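A basic building block of such 2D/3D registration is projecting model points through the calibrated X-ray geometry. The sketch below shows only this pinhole projection under a standard intrinsics/pose parameterization; the statistical shape model, the Bayesian-network initialization and the graphical-model contour inference are all omitted.

```python
# Building block only: pinhole projection x ~ K [R | t] X of 3D model
# points into one calibrated X-ray view. The paper's statistical model
# and Bayesian inference steps are omitted.
import numpy as np

def project(points3d, K, R, t):
    """points3d: (n, 3); K: (3, 3) intrinsics; R (3, 3), t (3,): view pose.
    Returns (n, 2) image coordinates."""
    cam = points3d @ R.T + t          # world -> camera frame
    hom = cam @ K.T                   # camera -> homogeneous image coords
    return hom[:, :2] / hom[:, 2:3]   # perspective divide
```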
Abstract:
OBJECTIVE: To develop a novel application of a tool for semi-automatic volume segmentation and adapt it for the analysis of fetal cardiac cavities and vessels from heart volume datasets. METHODS: We retrospectively studied virtual cardiac volume cycles obtained with spatiotemporal image correlation (STIC) from six fetuses with postnatally confirmed diagnoses: four with normal hearts between 19 and 29 completed gestational weeks, one with d-transposition of the great arteries and one with hypoplastic left heart syndrome. The volumes were analyzed offline using a commercially available segmentation algorithm designed for ovarian folliculometry. Using this software, individual 'cavities' in a static volume are selected and assigned individual colors in cross-sections and in 3D-rendered views, and their dimensions (diameters and volumes) can be calculated. RESULTS: Individual segments of the fetal cardiac cavities could be separated, adjacent segments merged and the resulting electronic casts studied in their spatial context. Volume measurements could also be performed. Exemplary images and interactive video clips showing the segmented digital casts were generated. CONCLUSION: The approach presented here is an important step towards an automated fetal volume echocardiogram. It has the potential both to help in obtaining a correct structural diagnosis and to generate exemplary visual displays of cardiac anatomy in normal and structurally abnormal cases for consultation and teaching.
Abstract:
As more and more open-source software components become available on the internet, we need automatic ways to label and compare them. For example, a developer who searches for reusable software must be able to quickly gain an understanding of the retrieved components. This understanding cannot be gained at the level of source code, due to the semantic gap between source code and the domain model. In this paper we present a lexical approach that uses the log-likelihood ratios of word frequencies to automatically provide labels for software components. We present a prototype implementation of our labeling/comparison algorithm and provide examples of its application. In particular, we apply the approach to detect trends in the evolution of a software system.
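The log-likelihood ratio in question is presumably the standard Dunning-style G² statistic comparing a word's frequency in a component against a reference corpus; a sketch under that assumption, with simplified corpus handling, follows. The top-scoring over-represented words serve as labels.

```python
# Dunning-style log-likelihood ratio (G^2) of a word being over-represented
# in a component's vocabulary versus a reference corpus. The G^2 formula is
# the standard one; the corpus handling here is a simplification.
import math
from collections import Counter

def g2(k1, n1, k2, n2):
    """k1/n1: word count / total words in the component;
    k2/n2: the same in the reference corpus."""
    def term(obs, exp):
        return obs * math.log(obs / exp) if obs > 0 else 0.0
    p = (k1 + k2) / (n1 + n2)  # pooled word rate under the null hypothesis
    return 2.0 * (term(k1, p * n1) + term(k2, p * n2)
                  + term(n1 - k1, (1 - p) * n1)
                  + term(n2 - k2, (1 - p) * n2))

def label(component_words, reference_words, top=10):
    c, r = Counter(component_words), Counter(reference_words)
    n1, n2 = sum(c.values()), sum(r.values())
    scores = {w: g2(k, n1, r.get(w, 0), n2)
              for w, k in c.items()
              if k / n1 > r.get(w, 0) / n2}  # keep over-represented words only
    return sorted(scores, key=scores.get, reverse=True)[:top]
```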