Abstract:
Atlas registration is a recognized paradigm for the automatic segmentation of normal MR brain images. Unfortunately, atlas-based segmentation has been of limited use in the presence of large space-occupying lesions. In fact, brain deformations induced by such lesions add to normal anatomical variability and may dramatically shift and deform anatomically or functionally important brain structures. In this work, we focus on the problem of inter-subject registration of MR images with large tumors that induce a significant shift of the surrounding anatomical structures. First, a brief survey of existing methods proposed to deal with this problem is presented. This introduces a discussion of the requirements and desirable properties that we consider necessary for a registration method in this context: a dense and smooth deformation field and a model of lesion growth, different deformability for certain structures, additional prior knowledge, and voxel-based features with a similarity measure robust to intensity differences. In the second part of this work, we propose a new approach that overcomes some of the main limitations of the existing techniques while complying with most of the requirements above. Our algorithm combines the mathematical framework for computing a variational flow proposed by Hermosillo et al. [G. Hermosillo, C. Chefd'Hotel, O. Faugeras, A variational approach to multi-modal image matching, Tech. Rep., INRIA (February 2001)] with the radial lesion growth pattern presented by Bach Cuadra et al. [M. Bach Cuadra, C. Pollo, A. Bardera, O. Cuisenaire, J.-G. Villemure, J.-Ph. Thiran, Atlas-based segmentation of pathological MR brain images using a model of lesion growth, IEEE Trans. Med. Imag. 23 (10) (2004) 1301-1314]. Results on patients with a meningioma are visually assessed and compared to those obtained with the most similar method from the state of the art.
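A heavily simplified sketch of one variational-flow update in the spirit of the Hermosillo et al. framework (not their exact functional): an SSD-driven force regularized by Gaussian smoothing, as in demons-style registration. All names and parameters here are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def flow_step(fixed, moving, u, dt=0.5, sigma=2.0):
    """One gradient-flow update of a dense 2D displacement field u (2, H, W)."""
    grid = np.indices(fixed.shape).astype(float)
    warped = map_coordinates(moving, grid + u, order=1)   # warp moving image
    gy, gx = np.gradient(warped)                          # image gradient
    force = (fixed - warped) * np.stack([gy, gx])         # SSD data term
    u_new = u + dt * force
    # Gaussian smoothing stands in for the regularization of the flow
    return np.stack([gaussian_filter(c, sigma) for c in u_new])
```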
Abstract:
Computed tomography (CT) is the standard imaging modality for tumor volume delineation in radiotherapy treatment planning of retinoblastoma, despite some inherent limitations. CT is very useful in providing physical density information for dose calculation and morphological volumetric information, but it has low sensitivity in assessing tumor viability. On the other hand, 3D ultrasound (US) allows a highly accurate definition of the tumor volume thanks to its high spatial resolution, but it is currently used only for diagnosis and follow-up rather than being integrated into treatment planning. Our ultimate goal is the automatic segmentation of the gross tumor volume (GTV) in 3D US, the segmentation of the organs at risk (OAR) in CT, and the registration of both modalities. In this paper, we present some preliminary results in this direction. We present 3D active contour-based segmentation of the eye ball and the lens in CT images; the approach incorporates prior knowledge of the anatomy through a 3D geometrical eye model. The automated segmentation results are validated by comparison with manual segmentations. We then present two approaches for the fusion of 3D CT and US images: (i) landmark-based transformation, and (ii) object-based transformation that makes use of eye ball contour information in the CT and US images.
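As a concrete illustration of step (i), a minimal sketch of landmark-based rigid registration using the Kabsch/Procrustes algorithm; the paper does not specify the landmarks or solver, so everything here is an assumption.

```python
import numpy as np

def rigid_landmark_registration(src, dst):
    """Least-squares rotation R and translation t mapping paired (N, 3)
    landmark sets src onto dst (Kabsch algorithm)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```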
Impact of partial-thickness tears on supraspinatus tendon strain based on a finite element analysis.
Abstract:
The high complexity of cortical convolutions in humans is very challenging, both for engineers to measure and compare and for biologists and physicians to understand. In this paper, we propose a surface-based method for the quantification of cortical gyrification. Our method uses an accurate 3-D cortical reconstruction and computes local measurements of gyrification at thousands of points over the whole cortical surface. The potential of our method to precisely identify and localize gyral abnormalities is illustrated by a clinical study on a group of children affected by 22q11 Deletion Syndrome, compared to control individuals.
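One plausible construction of such a local measurement is the ratio of buried cortical surface area to the area of a smooth outer hull within a spherical region of interest; the paper's exact definition may differ, so this is only an illustrative sketch.

```python
import numpy as np

def local_gyrification_index(pial_areas, pial_centroids,
                             hull_areas, hull_centroids,
                             center, radius):
    """Ratio of pial to outer-hull surface area inside a sphere.
    *_areas: per-triangle areas; *_centroids: (N, 3) triangle centroids."""
    in_pial = np.linalg.norm(pial_centroids - center, axis=1) < radius
    in_hull = np.linalg.norm(hull_centroids - center, axis=1) < radius
    return pial_areas[in_pial].sum() / hull_areas[in_hull].sum()
```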
Abstract:
BACKGROUND: A relative inability to capture a sufficiently large patient population in any one geographic location has traditionally limited research into rare diseases. METHODS AND RESULTS: Clinicians interested in the rare disease lymphangioleiomyomatosis (LAM) have worked with the LAM Treatment Alliance, the MIT Media Lab, and Clozure Associates to cooperate in the design of a state-of-the-art data coordination platform that can be used for clinical trials and other research focused on the global LAM patient population. This platform is a component of a set of web-based resources, including a patient self-report data portal, aimed at accelerating research in rare diseases in a rigorous fashion. CONCLUSIONS: Collaboration between clinicians, researchers, advocacy groups, and patients can create essential community resource infrastructure to accelerate rare disease research. The International LAM Registry is an example of such an effort.
Abstract:
Local soil classes generate map units directly related to soil-landscape properties; taking the knowledge of farmers into consideration is therefore essential when automating the procedure. The aim of this study was to map local soil classes by computer-assisted cartography (CAC), using several combinations of topographic properties derived by GIS (digital elevation model, aspect, slope, and profile curvature). A decision tree was used to find the number of topographic properties required for digital cartography of the local soil classes. The maps produced were evaluated on attributes of map quality defined as the precision and accuracy of the CAC-based maps. The evaluation was carried out in Central Mexico using three maps of local soil classes with contrasting landscape and climatic conditions (desert, temperate, and tropical). In all three areas, the precision of the CAC maps based on elevation as the topographic feature (56%) was higher than that of maps based on slope, aspect, and profile curvature. The accuracy of the maps (boundary locations), however, was low (33%); in other words, further research is required to improve this indicator.
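A minimal sketch of the decision-tree step, assuming the GIS rasters have been flattened into per-pixel feature vectors; the feature values and class names below are hypothetical placeholders.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Features per pixel: elevation (m), aspect (deg), slope (%), profile curvature
X_train = np.array([[2240.0, 135.0, 4.2, -0.01],
                    [2310.0,  90.0, 9.8,  0.03],
                    [2150.0, 270.0, 1.1,  0.00]])
y_train = ["class_A", "class_B", "class_C"]   # local soil classes (placeholder)

clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
# feature_importances_ indicates which terrain attributes the tree actually
# uses, mirroring the attribute-selection question addressed in the study.
print(clf.feature_importances_)
```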
Abstract:
One of the most important problems in optical pattern recognition by correlation is the appearance of sidelobes in the correlation plane, which causes false alarms. We present a method that eliminates sidelobes of up to a given height if certain conditions are satisfied. The method can be applied to any generalized synthetic discriminant function filter and is capable of rejecting lateral peaks that are even higher than the central correlation peak. Satisfactory results were obtained in both computer simulations and optical implementation.
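For orientation, a minimal sketch of how a correlation plane is computed with a frequency-domain filter and screened for false alarms; this illustrates the problem setting, not the authors' sidelobe-elimination procedure.

```python
import numpy as np

def correlation_plane(scene, filter_freq):
    """Correlation of a scene with a frequency-domain (e.g. SDF) filter."""
    return np.fft.ifft2(np.fft.fft2(scene) * np.conj(filter_freq)).real

def has_false_alarm(corr, center, height_ratio=0.8):
    """True if any sidelobe exceeds height_ratio of the central peak."""
    mask = np.ones_like(corr, dtype=bool)
    mask[center] = False
    return corr[mask].max() > height_ratio * corr[center]
```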
Abstract:
This paper presents the segmentation of the bilateral parotid glands in Head and Neck (H&N) CT images using active contour-based atlas registration. We compare segmentation results from three atlas selection strategies: (i) selection of the "single-most-similar" atlas for each image to be segmented, (ii) fusion of segmentation results from multiple atlases using STAPLE, and (iii) fusion of segmentation results using majority voting. Among these three approaches, fusion using majority voting provided the best results. Finally, we present a detailed evaluation of the majority-voting strategy on a dataset of eight images (provided as part of the H&N auto-segmentation challenge conducted in conjunction with the MICCAI 2010 conference).
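A minimal sketch of the majority-voting fusion step: each registered atlas contributes a binary parotid label map, and a voxel is retained when more than half of the atlases agree.

```python
import numpy as np

def majority_vote(label_maps):
    """label_maps: list of binary 3-D arrays, one per registered atlas."""
    votes = np.sum(np.stack(label_maps), axis=0)
    return votes > (len(label_maps) / 2.0)
```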
Abstract:
Underbody plows can be very useful tools in winter maintenance, especially when compacted snow or hard ice must be removed from the roadway. Through the application of significant down-force and the use of an appropriate cutting edge angle, compacted snow and ice can be removed very effectively by such plows, with much greater efficiency than any other tool under those circumstances. However, the successful operation of an underbody plow requires considerable skill. If too little down pressure is applied to the plow, it will not cut the ice or compacted snow. If too much force is applied, either the cutting edge may gouge the road surface, causing significant damage to both the road surface and the plow, or the plow may ride up on the cutting edge so that it is no longer controllable by the operator; in such situations the truck can easily spin. Further, excessive down-force results in rapid wear of the cutting edge. Given this need for a high level of operator skill, the operation of an underbody plow is a candidate for automation. To successfully automate the operation of an underbody plow, a control system must be developed that follows a set of rules representing appropriate operation of such a plow. These rules have been developed based upon earlier work in which operational underbody plows were instrumented to determine the loading upon them (both vertical and horizontal) and the angle at which the blade was operating. These rules have been successfully coded into two different computer programs, both using MATLAB® software. In the first program, various load and angle inputs are analyzed to determine when, whether, and how they violate the rules of operation; this program is essentially deterministic in nature. In the second program, the Simulink® package in MATLAB® was used to implement these rules using fuzzy logic. Fuzzy logic essentially replaces a fixed and constant rule with one that varies in such a way as to improve operational control. The fuzzy logic in this simulation was developed simply by using appropriate routines in the computer software, rather than being developed directly. The results of the computer testing and simulation indicate that a fully automated, computer-controlled underbody plow is indeed possible. The issue of whether the next steps toward full automation should be taken (and by whom) has also been considered, and the possibility of some sort of joint venture between a Department of Transportation and a vendor has been suggested.
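A minimal sketch of the deterministic rule checker described above, written in Python rather than MATLAB; the thresholds are hypothetical placeholders, not the calibrated values derived from the instrumented plows.

```python
def check_plow_state(down_force_kN, blade_angle_deg,
                     min_force=5.0, max_force=40.0,
                     min_angle=10.0, max_angle=35.0):
    """Return the list of operating rules violated by the current state."""
    violations = []
    if down_force_kN < min_force:
        violations.append("insufficient down-force: blade will not cut")
    if down_force_kN > max_force:
        violations.append("excessive down-force: gouging or ride-up risk")
    if not (min_angle <= blade_angle_deg <= max_angle):
        violations.append("cutting-edge angle outside effective range")
    return violations
```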
Abstract:
Introduction: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on measurements of blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents the gold standard in the TDM approach but requires computing assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Method: A literature and Internet search was performed to identify software. All programs were tested on a common personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: 12 software tools were identified, tested, and ranked, providing a comprehensive review of the available software's characteristics. The number of drugs handled varies widely, and 8 programs offer the user the ability to add their own drug models. 10 computer programs are able to compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 are also able to suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates such as age, gender, and weight. Among those applying Bayesian analysis, one uses a non-parametric approach. The top two software tools emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated (e.g., in terms of storage or report generation) or less user-friendly. Conclusion: Whereas two integrated programs are at the top of the ranked list, such complex tools may not fit all institutions, and each software tool must be considered with respect to the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic monitoring is still growing. Although developers have put effort into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capacity of data storage, and report generation.
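A minimal sketch of the Bayesian a posteriori step such tools automate: MAP estimation of one-compartment clearance and volume from a single measured concentration, with hypothetical population priors and log-normal error.

```python
import numpy as np
from scipy.optimize import minimize

pop = {"CL": 5.0, "V": 50.0, "omega_CL": 0.3, "omega_V": 0.2, "sigma": 0.1}
dose, t_obs, c_obs = 500.0, 12.0, 4.2    # mg, h, mg/L (illustrative values)

def neg_log_posterior(theta):
    cl, v = np.exp(theta)                         # log-parameterization
    c_pred = dose / v * np.exp(-cl / v * t_obs)   # 1-compartment IV bolus
    nll = ((np.log(c_obs) - np.log(c_pred)) / pop["sigma"]) ** 2
    nll += ((theta[0] - np.log(pop["CL"])) / pop["omega_CL"]) ** 2
    nll += ((theta[1] - np.log(pop["V"])) / pop["omega_V"]) ** 2
    return 0.5 * nll

theta_map = minimize(neg_log_posterior, x0=np.log([pop["CL"], pop["V"]])).x
cl_i, v_i = np.exp(theta_map)   # individual estimates for dosage adaptation
```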
Abstract:
In three-dimensional (3D) coronary magnetic resonance angiography (MRA), the in-flow contrast between the coronary blood and the surrounding myocardium is attenuated as compared to thin-slab two-dimensional (2D) techniques. The application of a gadolinium (Gd)-based intravascular contrast agent may provide an additional source of signal and contrast by reducing the T1 of blood and supporting the visualization of more distal or branching segments of the coronary arterial tree. In six healthy adults, the left coronary artery (LCA) system was imaged pre- and postcontrast with a 0.075-mmol/kg bodyweight dose of the intravascular contrast agent B-22956. For imaging, an optimized free-breathing, navigator-gated and -corrected 3D inversion recovery (IR) sequence was used. For comparison, state-of-the-art baseline 3D coronary MRA with T2 preparation for non-exogenous contrast enhancement was acquired. The combination of IR 3D coronary MRA, sophisticated navigator technology, and B-22956 allowed for an extensive visualization of the LCA system. Postcontrast, a significant increase in both the signal-to-noise ratio (SNR; 46%, P < 0.05) and the contrast-to-noise ratio (CNR; 160%, P < 0.01) was observed, while the vessel sharpness of the left anterior descending (LAD) artery and the left circumflex (LCX) artery improved by 20% (P < 0.05) and 18% (P < 0.05), respectively.
Abstract:
Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents the gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Methods: The literature and the Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: 12 software tools were identified, tested, and ranked, providing a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and the integration of different population types is available for some programs. Moreover, 8 programs offer the ability to add new drug models based on population PK data. 10 computer tools incorporate Bayesian computation to predict the dosage regimen (individual parameters are calculated based on population PK models). All of them are able to compute Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 are also able to suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly. Conclusions: Whereas two software packages rank at the top of the list, such complex tools may not fit all institutions, and each program must be considered with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Although interest in TDM tools is growing and effort has been put into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capability of data storage, and automated report generation.
Abstract:
The relaxivity of commercially available gadolinium (Gd)-based contrast agents was studied for X-nuclei resonances with long intrinsic relaxation times, ranging from 6 s to several hundred seconds. Omniscan in pure 13C formic acid had a relaxivity of 2.9 mM⁻¹ s⁻¹, whereas its relaxivity on glutamate C1 and C5 in aqueous solution was approximately 0.5 mM⁻¹ s⁻¹. Both relaxivities allow the preparation of solutions with a predetermined short T1 and suggest that substantial sensitivity gains in their in vitro measurement can be achieved. 6Li has a long intrinsic relaxation time, on the order of several minutes, which was strongly affected by the contrast agents: relaxivity ranged from approximately 0.1 mM⁻¹ s⁻¹ for Omniscan to 0.3 mM⁻¹ s⁻¹ for Magnevist, whereas the relaxivity of Gd-DOTP, at 11 mM⁻¹ s⁻¹, was two orders of magnitude higher. Overall, these experiments suggest that the presence of 0.1- to 10-µM contrast agents should be detectable, provided sufficient sensitivity is available, such as that afforded by hyperpolarization, recently introduced to in vivo imaging.
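These figures plug into the standard relaxivity relation 1/T1 = 1/T1,0 + r1·[CA]; a worked example with the Omniscan value quoted above and an illustrative intrinsic T1 of 60 s:

```python
r1 = 2.9          # relaxivity, mM^-1 s^-1 (Omniscan in 13C formic acid)
T1_0 = 60.0       # assumed intrinsic T1 of the X-nucleus resonance, s
conc_mM = 0.01    # 10 uM contrast agent
T1 = 1.0 / (1.0 / T1_0 + r1 * conc_mM)
print(f"T1 shortened from {T1_0:.0f} s to {T1:.1f} s")   # ~21.9 s
```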
Abstract:
Intensity-modulated radiotherapy (IMRT) treatment plan verification by comparison with measured data requires access to the linear accelerator and is time consuming. In this paper, we propose a method for monitor unit (MU) calculation and plan comparison for step-and-shoot IMRT based on the Monte Carlo code EGSnrc/BEAMnrc. The beamlets of an IMRT treatment plan are individually simulated using Monte Carlo and converted into absorbed dose to water per MU. The dose of the whole treatment can then be expressed through a linear matrix equation of the MUs and the dose per MU of every beamlet. Because the absorbed dose and the MU values must be positive, this equation is solved for the MU values using a non-negative least-squares (NNLS) optimization algorithm. The Monte Carlo plan is formed by multiplying the Monte Carlo absorbed dose to water per MU by the Monte Carlo/NNLS MUs. Treatment plans for several localizations calculated with a commercial treatment planning system (TPS) are compared with the proposed method for validation. The Monte Carlo/NNLS MUs are close to those calculated by the TPS and lead to a treatment dose distribution that is clinically equivalent to the one calculated by the TPS. This procedure can be used for IMRT QA, and further development could allow the technique to be applied to other radiotherapy techniques such as tomotherapy or volumetric modulated arc therapy.
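A minimal sketch of the MU-optimization step: the per-beamlet dose-per-MU matrix below is random data standing in for the BEAMnrc beamlet simulations, and the NNLS fit recovers non-negative MUs reproducing a target dose.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1e-2, size=(200, 15))   # Gy/MU, voxels x beamlets
d_target = A @ rng.uniform(50.0, 150.0, 15)  # dose distribution to reproduce

mu, residual = nnls(A, d_target)             # non-negative MUs per beamlet
d_mc = A @ mu                                # resulting Monte Carlo plan dose
```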