40 results for medical segmentation
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Purpose: To evaluate the suitability of an improved version of an automatic segmentation method based on geodesic active regions (GAR) for segmenting cerebral vasculature with aneurysms from 3D X-ray reconstruction angiography (3DRA) and time of flight magnetic resonance angiography (TOF-MRA) images available in the clinical routine. Methods: Three aspects of the GAR method have been improved: execution time, robustness to variability in imaging protocols and robustness to variability in image spatial resolutions. The improved GAR was retrospectively evaluated on images from patients containing intracranial aneurysms in the area of the Circle of Willis and imaged with two modalities: 3DRA and TOF-MRA. Images were obtained from two clinical centers, each using different imaging equipment. Evaluation included qualitative and quantitative analyses of the segmentation results on 20 images from 10 patients. The gold standard was built from 660 cross-sections (33 per image) of vessels and aneurysms, manually measured by interventional neuroradiologists. GAR has also been compared to an interactive segmentation method: iso-intensity surface extraction (ISE). In addition, since patients had been imaged with the two modalities, we performed an inter-modality agreement analysis with respect to both the manual measurements and each of the two segmentation methods. Results: Both GAR and ISE differed from the gold standard within acceptable limits compared to the imaging resolution. GAR (ISE, respectively) had an average accuracy of 0.20 (0.24) mm for 3DRA and 0.27 (0.30) mm for TOF-MRA, and had a repeatability of 0.05 (0.20) mm. Compared to ISE, GAR had a lower qualitative error in the vessel region and a lower quantitative error in the aneurysm region. The repeatability of GAR was superior to manual measurements and ISE. The inter-modality agreement was similar between GAR and the manual measurements. Conclusions: The improved GAR method outperformed ISE qualitatively as well as quantitatively and is suitable for segmenting 3DRA and TOF-MRA images from clinical routine.
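As a rough, hypothetical illustration of how accuracy and repeatability figures like those above could be computed from cross-sectional measurements (this is not the authors' evaluation code; the diameter arrays and noise model below are simulated), consider:

```python
# Hypothetical sketch: comparing automatic cross-section diameters against a
# manual gold standard, in the spirit of the accuracy figures quoted above.
import numpy as np

def accuracy_mm(auto_diams, manual_diams):
    """Mean absolute difference (mm) between automatic and manual measurements."""
    auto = np.asarray(auto_diams, dtype=float)
    manual = np.asarray(manual_diams, dtype=float)
    return np.mean(np.abs(auto - manual))

def repeatability_mm(run_a, run_b):
    """Mean absolute difference (mm) between two repeated automatic runs."""
    return np.mean(np.abs(np.asarray(run_a, dtype=float) - np.asarray(run_b, dtype=float)))

# Toy example: 33 cross-sections for one image (values are illustrative only).
rng = np.random.default_rng(0)
manual = rng.uniform(1.5, 5.0, size=33)          # manually measured diameters (mm)
auto = manual + rng.normal(0.0, 0.2, size=33)    # first automatic run
auto2 = manual + rng.normal(0.0, 0.2, size=33)   # second automatic run
print(f"accuracy: {accuracy_mm(auto, manual):.2f} mm")
print(f"repeatability: {repeatability_mm(auto, auto2):.2f} mm")
```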
Abstract:
This paper describes an evaluation framework that allows a standardized and quantitative comparison of IVUS lumen and media segmentation algorithms. This framework was introduced at the MICCAI 2011 Computing and Visualization for (Intra)Vascular Imaging (CVII) workshop, comparing the results of the eight teams that participated. We describe the available database, comprising multi-center, multi-vendor and multi-frequency IVUS datasets, their acquisition, the creation of the reference standard and the evaluation measures. The approaches address segmentation of the lumen, the media, or both borders; semi- or fully-automatic operation; and 2-D vs. 3-D methodology. Three performance measures for quantitative analysis have been proposed. The results of the evaluation indicate that segmentation of the vessel lumen and media is possible with an accuracy comparable to manual annotation when semi-automatic methods are used, and that encouraging results can also be obtained with fully-automatic segmentation. The analysis performed in this paper also highlights the challenges in IVUS segmentation that remain to be solved.
Abstract:
We explore the determinants of usage of six different types of health care services, using the Medical Expenditure Panel Survey data, years 1996-2000. We apply a number of models for univariate count data, including semiparametric, semi-nonparametric and finite mixture models. We find that the complexity of the model that is required to fit the data well depends upon the way in which the data is pooled across sexes and over time, and upon the characteristics of the usage measure. Pooling across time and sexes is almost always favored, but when more heterogeneous data is pooled it is often the case that a more complex statistical model is required.
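As a hedged sketch of one of the model families mentioned above, the following fits a two-component finite mixture of Poissons by EM to simulated counts; the data, component count and starting values are invented stand-ins for the MEPS usage measures.

```python
# Hedged sketch: a two-component finite mixture of Poissons fitted by EM,
# as one simple instance of the finite mixture models mentioned above.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
# Simulated utilisation counts: a low-use and a high-use subpopulation.
y = np.concatenate([rng.poisson(0.5, 700), rng.poisson(6.0, 300)])

pi, lam = 0.5, np.array([1.0, 5.0])      # mixing weight and component means
for _ in range(200):
    # E-step: posterior probability that each count comes from component 1.
    p1 = pi * poisson.pmf(y, lam[0])
    p2 = (1 - pi) * poisson.pmf(y, lam[1])
    resp = p1 / (p1 + p2)
    # M-step: update the mixing weight and the Poisson means.
    pi = resp.mean()
    lam = np.array([np.average(y, weights=resp),
                    np.average(y, weights=1 - resp)])

loglik = np.log(pi * poisson.pmf(y, lam[0]) + (1 - pi) * poisson.pmf(y, lam[1])).sum()
print(f"weight={pi:.2f}, means={lam.round(2)}, loglik={loglik:.1f}")
```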
Abstract:
We review recent likelihood-based approaches to modeling demand for medical care. A semi-nonparametric model along the lines of Cameron and Johansson's Poisson polynomial model, but using a negative binomial baseline model, is introduced. We apply these models, as well as a semiparametric Poisson, a hurdle semiparametric Poisson, and finite mixtures of negative binomial models, to six measures of health care usage taken from the Medical Expenditure Panel Survey. We conclude that most of the models lead to statistically similar results, both in terms of information criteria and conditional and unconditional prediction. This suggests that applied researchers may not need to be overly concerned with the choice of which of these models they use to analyze data on health care demand.
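A minimal sketch, assuming the comparison reduces to fitting candidate count models and comparing information criteria: the snippet below fits a Poisson and a negative binomial regression to simulated overdispersed counts with statsmodels and reports their AIC (covariates and data are invented for the example).

```python
# Hedged sketch: Poisson vs. negative binomial regression compared by AIC,
# on simulated stand-ins for health care usage counts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
nobs = 2000
X = sm.add_constant(np.column_stack([rng.normal(size=nobs),            # e.g. age (scaled)
                                     rng.integers(0, 2, size=nobs)]))  # e.g. sex dummy
mu = np.exp(0.3 + 0.4 * X[:, 1] + 0.2 * X[:, 2])
y = rng.negative_binomial(n=2, p=2 / (2 + mu))   # overdispersed counts with mean mu

poisson_fit = sm.Poisson(y, X).fit(disp=False)
negbin_fit = sm.NegativeBinomial(y, X).fit(disp=False)
print("Poisson AIC:", round(poisson_fit.aic, 1))
print("NegBin  AIC:", round(negbin_fit.aic, 1))
```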
Abstract:
We study the optimal public intervention in setting minimum standards of training for specialized medical care. The abilities that physicians obtain through their training allow them to improve their performance as providers of care and to earn some monopoly rents. Our aim is to characterize the most efficient regulation in this field, taking into account different regulatory frameworks. We find that the existing situation in some countries, in which the amount of specialization is controlled and the costs of this specialization process are publicly financed, can be supported as the best possible intervention.
Abstract:
The 1998 Spanish reform of the Personal Income Tax eliminated the 15% deduction for private medical expenditures, including payments on private health insurance (PHI) policies. To avoid an undesirable increase in the demand for publicly funded health care, tax incentives to buy PHI were not completely removed but basically shifted from individual to group employer-paid policies. In a unique fiscal experiment, at the same time that the tax relief for individually purchased policies was abolished, the government provided for tax allowances on policies taken out through employment. Using a bivariate probit model on data from National Health Surveys, we estimate the impact of this reform on the demand for PHI and the changes that occurred within it. Our findings suggest that the total probability of buying PHI was not significantly affected. Indeed, the fall in the demand for individual policies (by 10% between 1997 and 2001) was offset by an increase in the demand for group employer-paid ones, so that the overall size of the market remained virtually unchanged. We also briefly discuss the welfare effects on the state budget, the industry and society at large.
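As a simplified, hypothetical sketch of this kind of estimation (statsmodels has no built-in bivariate probit, so a univariate probit for the probability of holding any PHI is used here; all variables and data are invented):

```python
# Hedged sketch: univariate probit for PHI take-up with a post-reform dummy.
# This is a simplification of the bivariate probit used in the paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 3000
post_reform = rng.integers(0, 2, n)          # survey wave after the 1998 reform
income = rng.normal(size=n)                  # standardised household income
X = sm.add_constant(np.column_stack([post_reform, income]))
latent = -0.8 + 0.05 * post_reform + 0.6 * income + rng.normal(size=n)
has_phi = (latent > 0).astype(int)           # holds any private health insurance

probit_fit = sm.Probit(has_phi, X).fit(disp=False)
print(probit_fit.summary())
print(probit_fit.get_margeff().summary())    # average marginal effects
```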
Abstract:
The 1st chapter of this work presents the different experiments and collaborations in which I have been involved during my PhD studies in Physics. Following those descriptions, the 2nd chapter is dedicated to how radiation affects silicon sensors, together with some experimental measurements carried out at the CERN (Geneva, Switzerland) and IFIC (Valencia, Spain) laboratories. Besides the previous investigation results, this chapter includes the most recent scientific papers that appeared in the latest RD50 (Research & Development #50) Status Report, published in January 2007, as well as some others published this year. The 3rd and 4th chapters are dedicated to the simulation of the electrical behavior of solid state detectors. Chapter 3 reports the results obtained for the illumination of edgeless detectors irradiated at different fluences, in the framework of the TOSTER Collaboration. The 4th chapter reports on the design, simulation and fabrication of a novel 3D detector developed at CNM for ion detection in the future ITER fusion reactor. This chapter will be extended with irradiation simulations and experimental measurements in my PhD Thesis.
Abstract:
A review article in The New England Journal of Medicine recalls that, almost a century ago, Abraham Flexner, a research scholar at the Carnegie Foundation for the Advancement of Teaching, undertook an assessment of medical education in the 155 medical schools in operation in the United States and Canada. Flexner's report emphasized the nonscientific approach of American medical schools to preparation for the profession, which contrasted with the university-based system of medical education in Germany. At the core of Flexner's view was the notion that formal analytic reasoning, the kind of thinking integral to the natural sciences, should hold pride of place in the intellectual training of physicians. This idea was pioneered at Harvard University, the University of Michigan, and the University of Pennsylvania in the 1880s, but was most fully expressed in the educational program at Johns Hopkins University, which Flexner regarded as the ideal for medical education. (...)
Abstract:
Organ localization is an important topic in medical imaging for aiding the treatment and diagnosis of cancer. One example is found in the calibration of pharmacokinetic models, which can be performed using a reference tissue: in breast magnetic resonance images, for instance, a correct segmentation of the pectoral muscle is needed to detect signs of malignancy. Atlas-based segmentation methods have been extensively evaluated on brain magnetic resonance images, with satisfactory results. In this project, carried out in collaboration with the Diagnostic Image Analysis Group of the Radboud University Nijmegen Medical Centre under the supervision of Dr. N. Karssemeijer, we present a first approach to an atlas-based segmentation method for the different tissues visible in T1 magnetic resonance images of the female breast. The atlas consists of 5 structures (fatty tissue, dense tissue, heart, lungs and pectoral muscle) and has been used within a Bayesian segmentation algorithm to delineate these structures. In addition, a comparison was made between a global and a local registration method, used both in the atlas construction and in the segmentation phase, with the former giving the best results in terms of efficiency and accuracy. For the evaluation, the segmentations obtained were compared visually and numerically with manual segmentations made by the collaborating experts. The numerical comparison used the Dice similarity coefficient (a measure with values between 0 and 1, where 0 means no similarity and 1 maximum similarity), obtaining an overall mean of 0.8. This result confirms the validity of the presented method for segmenting breast magnetic resonance images.
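A minimal sketch of the Dice similarity coefficient used in the numerical evaluation, computed here on toy binary masks rather than the breast MRI label maps:

```python
# Minimal sketch: Dice similarity coefficient between two binary masks.
import numpy as np

def dice(seg, ref):
    """Dice coefficient between two binary masks (True = structure)."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    intersection = np.logical_and(seg, ref).sum()
    total = seg.sum() + ref.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

# Toy 2-D masks; real use would pass the 3-D label map of each structure.
auto = np.zeros((10, 10), dtype=bool);   auto[2:7, 2:7] = True
manual = np.zeros((10, 10), dtype=bool); manual[3:8, 3:8] = True
print(round(dice(auto, manual), 2))   # 0.64 for these toy masks
```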
Abstract:
A recent trend in digital mammography is computer-aided diagnosis systems, which are computerised tools designed to assist radiologists. Most of these systems are used for the automatic detection of abnormalities. However, recent studies have shown that their sensitivity decreases significantly as the density of the breast increases, and this dependence is method specific. In this paper we propose a new approach to the classification of mammographic images according to their breast parenchymal density. Our classification uses information extracted from segmentation results and is based on the underlying breast tissue texture. Classification performance was evaluated on a large set of digitised mammograms, using different classifiers and a leave-one-out methodology. Results demonstrate the feasibility of estimating breast density using image processing and analysis techniques.
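A hedged sketch of a leave-one-out evaluation of a density classifier; the texture features, class labels and k-NN classifier below are generic stand-ins, not the descriptors or classifiers used in the paper.

```python
# Hedged sketch: leave-one-out evaluation of a generic density classifier.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
features = rng.normal(size=(80, 12))          # texture features per mammogram (toy)
density_class = rng.integers(0, 4, size=80)   # e.g. four density classes (toy labels)

clf = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(clf, features, density_class, cv=LeaveOneOut())
print("leave-one-out accuracy:", scores.mean())
```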
Abstract:
In image segmentation, clustering algorithms are very popular because they are intuitive and some of them are easy to implement. For instance, k-means is one of the most widely used algorithms in the literature, and many authors successfully compare their new proposals with the results achieved by k-means. However, it is well known that clustering-based image segmentation has many problems. For instance, the number of regions in the image has to be known a priori, and different initial seed placements (initial clusters) can produce different segmentation results. Most of these algorithms can be slightly improved by considering the coordinates of the image as features in the clustering process (to take spatial region information into account). In this paper we propose a significant improvement of clustering algorithms for image segmentation. The method is qualitatively and quantitatively evaluated over a set of synthetic and real images, and compared with classical clustering approaches. Results demonstrate the validity of this new approach.
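A minimal sketch of the classical variant mentioned above, appending normalised pixel coordinates to the intensity features before running k-means so that spatial proximity influences the clustering (this illustrates the baseline trick, not the improvement proposed in the paper):

```python
# Minimal sketch: k-means segmentation with intensity + (x, y) features.
import numpy as np
from sklearn.cluster import KMeans

def segment_kmeans(image, n_regions, spatial_weight=1.0):
    """Cluster pixels of a grey-level image using intensity plus coordinates."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    features = np.column_stack([
        image.ravel().astype(float),
        spatial_weight * xs.ravel() / w,   # normalised column coordinate
        spatial_weight * ys.ravel() / h,   # normalised row coordinate
    ])
    labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(features)
    return labels.reshape(h, w)

# Toy image: two noisy intensity regions side by side.
rng = np.random.default_rng(5)
img = np.hstack([rng.normal(0.2, 0.05, (64, 32)), rng.normal(0.8, 0.05, (64, 32))])
print(segment_kmeans(img, n_regions=2).shape)
```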
Abstract:
One of the key aspects in 3D-image registration is the computation of the joint intensity histogram. We propose a new approach to compute this histogram using uniformly distributed random lines to stochastically sample the overlapping volume between two 3D-images. The intensity values are captured from the lines at evenly spaced positions, with an initial random offset that is different for each line. This method provides us with an accurate, robust and fast mutual information-based registration. The interpolation effects are drastically reduced, due to the stochastic nature of the line generation, and the alignment process is also accelerated. The results obtained show that the introduced method performs better than the classic computation of the joint histogram.
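A minimal sketch of mutual information computed from a joint intensity histogram of paired samples; in the method above the pairs would come from uniformly distributed random lines crossing the overlap volume, while here two correlated arrays simply stand in for the sampled intensities.

```python
# Minimal sketch: mutual information from a joint histogram of paired samples.
import numpy as np

def mutual_information(samples_a, samples_b, bins=64):
    joint, _, _ = np.histogram2d(samples_a, samples_b, bins=bins)
    p_ab = joint / joint.sum()                     # joint distribution
    p_a = p_ab.sum(axis=1, keepdims=True)          # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)          # marginal of image B
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log2(p_ab[nz] / (p_a @ p_b)[nz])))

rng = np.random.default_rng(6)
a = rng.normal(size=10000)
b = 0.7 * a + 0.3 * rng.normal(size=10000)   # correlated "second image" samples
print(round(mutual_information(a, b), 3))
```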
Abstract:
In this paper, an information theoretic framework for image segmentation is presented. This approach is based on the information channel that goes from the image intensity histogram to the regions of the partitioned image. It allows us to define a new family of segmentation methods which maximize the mutual information of the channel. Firstly, a greedy top-down algorithm which partitions an image into homogeneous regions is introduced. Secondly, a histogram quantization algorithm which clusters color bins in a greedy bottom-up way is defined. Finally, the resulting regions of the partitioning algorithm can optionally be merged using the quantized histogram.
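A minimal sketch, assuming the channel is the joint distribution between intensity-histogram bins and the regions of a partition: the function below computes the mutual information of that channel for a toy image and an arbitrary two-region split (the partition itself is not the paper's greedy algorithm).

```python
# Hedged sketch: mutual information of the channel between intensity bins
# and the regions of a given partition.
import numpy as np

def channel_mutual_information(image, region_labels, n_bins=32):
    bins = np.minimum((image * n_bins).astype(int), n_bins - 1)  # image in [0, 1]
    n_regions = region_labels.max() + 1
    joint = np.zeros((n_bins, n_regions))
    np.add.at(joint, (bins.ravel(), region_labels.ravel()), 1.0)
    p = joint / joint.sum()
    p_bin = p.sum(axis=1, keepdims=True)
    p_reg = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (p_bin @ p_reg)[nz])))

rng = np.random.default_rng(7)
img = np.clip(np.hstack([rng.normal(0.3, 0.1, (64, 32)),
                         rng.normal(0.7, 0.1, (64, 32))]), 0, 1)
labels = np.zeros((64, 64), dtype=int); labels[:, 32:] = 1  # toy two-region partition
print(round(channel_mutual_information(img, labels), 3))
```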
Abstract:
Given an observed test statistic and its degrees of freedom, one may compute the observed P value with most statistical packages. It is unknown to what extent test statistics and P values are congruent in published medical papers. Methods: We checked the congruence of statistical results reported in all the papers of volumes 409–412 of Nature (2001) and in a random sample of 63 results from volumes 322–323 of BMJ (2001). We also tested whether the frequencies of the last digit of a sample of 610 test statistics deviated from a uniform distribution (i.e., equally probable digits). Results: 11.6% (21 of 181) and 11.1% (7 of 63) of the statistical results published in Nature and BMJ, respectively, during 2001 were incongruent, probably mostly due to rounding, transcription, or typesetting errors. At least one such error appeared in 38% and 25% of the papers of Nature and BMJ, respectively. In 12% of the cases, the significance level might change by one or more orders of magnitude. The frequencies of the last digit of the statistics deviated from the uniform distribution and suggested digit preference in rounding and reporting. Conclusions: This incongruence of test statistics and P values is another example that statistical practice is generally poor, even in the most renowned scientific journals, and that the quality of papers should be more carefully controlled and valued.
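A minimal sketch of the congruence check for a t statistic: recompute the two-sided P value from the reported statistic and degrees of freedom and compare it with the reported P value (the tolerance is an arbitrary choice for the example).

```python
# Minimal sketch: recompute a two-sided P value from a t statistic and df.
from scipy import stats

def check_t(reported_t, df, reported_p, tol=0.005):
    recomputed = 2 * stats.t.sf(abs(reported_t), df)
    return recomputed, abs(recomputed - reported_p) <= tol

# Example: a paper reports t = 2.10 with 25 df and P = 0.05.
recomputed_p, congruent = check_t(2.10, 25, 0.05)
print(round(recomputed_p, 4), congruent)
```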
Abstract:
A novel technique for estimating the rank of the trajectory matrix in the local subspace affinity (LSA) motion segmentation framework is presented. This new rank estimation is based on the relationship between the estimated rank of the trajectory matrix and the affinity matrix built with LSA. The result is an enhanced model selection technique for trajectory matrix rank estimation, by which it is possible to automate LSA, without requiring any a priori knowledge, and to improve the final segmentation.
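As a hedged sketch of the generic model-selection step that the technique above automates, the following estimates the rank of a simulated feature-trajectory matrix from its singular-value spectrum; the energy-threshold rule is a simple stand-in, not the affinity-based criterion proposed in the paper.

```python
# Hedged sketch: rank estimation of a trajectory matrix via its singular values.
import numpy as np

def estimate_rank(trajectory_matrix, energy=0.99):
    """Smallest rank whose singular values retain the given fraction of energy."""
    s = np.linalg.svd(trajectory_matrix, compute_uv=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(cumulative, energy) + 1)

# Toy trajectory matrix: 2F rows (x and y over F frames) by P feature points,
# generated from a low-rank motion model plus small noise.
rng = np.random.default_rng(8)
F, P, true_rank = 30, 40, 4
W = rng.normal(size=(2 * F, true_rank)) @ rng.normal(size=(true_rank, P))
W += 0.01 * rng.normal(size=W.shape)
print(estimate_rank(W))   # recovers 4 for this toy example
```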