883 results for Radiographic Image Interpretation, Computer-Assisted
Abstract:
Dissertation for obtaining the degree of Master in Biomedical Engineering
Abstract:
OBJECTIVE: To assess the effect of angiotensin-converting enzyme inhibition on the collagen matrix (CM) of the heart of newborn spontaneously hypertensive rats (SHR) during embryonic development. METHODS: The study comprised the 2 following groups of SHR (n=5 each): treated group - rats conceived from SHR females treated with enalapril maleate (15 mg·kg⁻¹·day⁻¹) during gestation; and nontreated group - offspring of nontreated females. The newborns were euthanized within the first 24 hours after birth, and their hearts were removed and processed for histological study. Three fields per animal were considered for computer-assisted digital analysis and determination of the volume densities (Vv) of the nuclei and CM. The images were segmented with the aid of Image Pro Plus® 4.5.029 software (Media Cybernetics). RESULTS: No difference was observed between the treated and nontreated groups in regard to body mass, cardiac mass, or the cardiac-to-body mass ratio. A significant reduction in the Vv[matrix] and a concomitant increase in the Vv[nuclei] were observed in the treated group as compared with the nontreated group. CONCLUSION: Treatment of hypertensive rats with enalapril during pregnancy alters the collagen content and structure of the myocardium of their newborns.
Abstract:
Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculations currently represent the gold-standard TDM approach but require computation assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each of them. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. The number of drugs handled by the tools varies widely (from two to 180), and eight programs offer users the possibility of adding new drug models based on population pharmacokinetic analyses. Bayesian computation to predict dosage adaptation from blood concentration (a posteriori adjustment) is performed by ten tools, while nine are also able to propose a priori dosage regimens based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses a non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user friendly.
Programs vary in complexity and might not fit all healthcare settings. Each software tool must therefore be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Computer-assisted TDM is gaining growing interest and should further improve, especially in terms of information system interfacing, user friendliness, data storage capability and report generation.
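The a posteriori (Bayesian) adjustment described above can be sketched as a maximum a posteriori (MAP) estimate of an individual pharmacokinetic parameter. The toy Python example below assumes a one-compartment model at steady state (Css = dose_rate / CL) with a lognormal population prior on clearance and a proportional residual error; every function name and parameter value is illustrative and is not taken from any of the reviewed programs.

```python
import numpy as np

def map_clearance(dose_rate, c_obs, cl_pop=5.0, omega=0.3, sigma=0.2):
    """Grid-search MAP estimate of clearance CL (L/h) from one measured
    steady-state level c_obs (mg/L), under the illustrative model
    Css = dose_rate / CL, prior CL ~ Lognormal(ln cl_pop, omega) and
    proportional residual error sigma."""
    cl_grid = np.linspace(0.5, 20.0, 2000)       # candidate clearances
    pred = dose_rate / cl_grid                   # predicted Css per candidate
    log_prior = -0.5 * ((np.log(cl_grid) - np.log(cl_pop)) / omega) ** 2
    log_lik = -0.5 * ((c_obs - pred) / (sigma * pred)) ** 2
    return cl_grid[np.argmax(log_prior + log_lik)]

def a_posteriori_rate(dose_rate, c_obs, target_css, **kwargs):
    """Infusion rate expected to achieve target_css for the MAP clearance."""
    return target_css * map_clearance(dose_rate, c_obs, **kwargs)
```

A one-dimensional grid search suffices at this scale; real TDM software typically optimizes over several correlated parameters per population model.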
Abstract:
In recent years, multi-atlas fusion methods have gained significant attention in medical image segmentation. In this paper, we propose a general Markov Random Field (MRF) based framework that can perform edge-preserving smoothing of the labels at the time of fusing the labels itself. More specifically, we formulate the label fusion problem with MRF-based neighborhood priors, as an energy minimization problem containing a unary data term and a pairwise smoothness term. We present how the existing fusion methods like majority voting, global weighted voting and local weighted voting methods can be reframed to profit from the proposed framework, for generating more accurate segmentations as well as more contiguous segmentations by getting rid of holes and islands. The proposed framework is evaluated for segmenting lymph nodes in 3D head and neck CT images. A comparison of various fusion algorithms is also presented.
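The energy-minimization view above can be illustrated with a small sketch: vote counts act as the unary data term and a Potts penalty over 4-neighbors as the pairwise smoothness term. The minimizer used here, iterated conditional modes (ICM), and the value of `beta` are our illustrative choices, not prescribed by the paper.

```python
import numpy as np

def majority_vote(atlas_labels):
    """Fuse per-atlas label maps of shape (n_atlases, H, W) by voting.
    Returns the vote counts (C, H, W) and the plain majority-vote labels."""
    n_classes = int(atlas_labels.max()) + 1
    votes = np.stack([(atlas_labels == c).sum(axis=0)
                      for c in range(n_classes)])
    return votes, votes.argmax(axis=0)

def icm_fusion(votes, beta=0.8, n_iter=5):
    """Minimize unary (negative vote count) + Potts pairwise energy
    over a 4-neighborhood with iterated conditional modes."""
    n_classes, h, w = votes.shape
    unary = -votes.astype(float)          # more votes -> lower energy
    labels = votes.argmax(axis=0)
    for _ in range(n_iter):
        for y in range(h):
            for x in range(w):
                neigh = [labels[ny, nx]
                         for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                         if 0 <= ny < h and 0 <= nx < w]
                energy = [unary[c, y, x] + beta * sum(c != n for n in neigh)
                          for c in range(n_classes)]
                labels[y, x] = int(np.argmin(energy))
    return labels
```

On a toy stack of atlases that disagree at one interior pixel, plain voting leaves a hole while the smoothed fusion fills it, which is exactly the "holes and islands" effect the abstract describes.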
Abstract:
PURPOSE: To understand the reasons for differences in the delineation of target volumes between physicians. MATERIAL AND METHODS: 18 Swiss radio-oncology centers were invited to delineate volumes for one prostate and one head-and-neck case. In addition, a questionnaire was sent to evaluate the differences in the definition of volumes (GTV [gross tumor volume], CTV [clinical target volume], PTV [planning target volume]), the various estimated margins, and the nodes at risk. Coherence between the margins drawn and those stated by the centers was calculated. The questionnaire also included a nonspecific series of questions regarding planning methods in each institution. RESULTS: Fairly large differences between the centers were seen in the drawn volumes in both cases, and also in the definition of volumes. Correlation between drawn and stated margins was fair in the prostate case and poor in the head-and-neck case. The questionnaire revealed important differences in the planning methods between centers. CONCLUSION: These large differences could be explained by (1) variable knowledge/interpretation of the ICRU definitions, (2) variable interpretations of the potential microscopic extent, (3) difficulties in GTV identification, (4) differences in concept, and (5) incoherence between theory (i.e., stated margins) and practice (i.e., drawn margins).
Abstract:
The effect of copper (Cu) filtration on image quality and dose in different digital X-ray systems was investigated. Two computed radiography systems and one digital radiography detector were used. Three different polymethylmethacrylate blocks simulated the pediatric body. The effect of Cu filters of 0.1, 0.2, and 0.3 mm thickness on the entrance surface dose (ESD) and the corresponding effective doses (EDs) were measured at tube voltages of 60, 66, and 73 kV. Image quality was evaluated in a contrast-detail phantom with an automated analyzer software. Cu filters of 0.1, 0.2, and 0.3 mm thickness decreased the ESD by 25-32%, 32-39%, and 40-44%, respectively, the ranges depending on the respective tube voltages. There was no consistent decline in image quality due to increasing Cu filtration. The estimated ED of anterior-posterior (AP) chest projections was reduced by up to 23%. No relevant reduction in the ED was noted in AP radiographs of the abdomen and pelvis or in posterior-anterior radiographs of the chest. Cu filtration reduces the ESD, but generally does not reduce the effective dose. Cu filters can help protect radiosensitive superficial organs, such as the mammary glands in AP chest projections.
Abstract:
The EVS4CSCL project starts in the context of a Computer Supported Collaborative Learning (CSCL) environment. Previous UOC projects created a generic CSCL platform (CLPL) to facilitate the development of CSCL applications. A discussion forum (DF) was the first application developed on top of this framework. The DF differed from other products on the market in its focus on the learning process. It covered the specification and elaboration phases of the discussion learning process, but lacked support for the consensus phase. In a learning environment, consensus is not something to be achieved but something to be tested. Such tests are commonly performed with Electronic Voting System (EVS) tools, but a consensus test is not an assessment test: students are evaluated not by their answers but by their discussion activity. Our educational EVS can be used as a discussion catalyst, proposing a discussion about the results of an initial query, or after a discussion period in order to show how the discussion changed the students' minds (consensus). It can also be used by the teacher as a quick way to identify where a student needs reinforcement. This is important in a distance-learning environment, where there is no direct contact between teacher and student and learning gaps are difficult to detect. In an educational environment, assessment is a must, and the EVS provides direct assessment through peer usefulness evaluation and teacher marks on every query created, as well as indirect assessment from statistics on user activity.
Abstract:
A fully automated 3D image analysis method is proposed to segment lung nodules in HRCT. A specific gray-level mathematical morphology operator, the SMDC-connection cost, acting in the 3D space of the thorax volume, is defined in order to discriminate lung nodules from other dense (vascular) structures. Applied to clinical data from patients with pulmonary carcinoma, the proposed method detects isolated, juxtavascular and peripheral nodules from 2 to 20 mm in diameter. The segmentation accuracy was objectively evaluated on real and simulated nodules. The method showed a sensitivity ranging from 85% to 97% and a specificity ranging from 90% to 98%.
Abstract:
This paper addresses a fully automatic landmark detection method for breast reconstruction aesthetic assessment. The landmarks detected are the suprasternal notch (SSN), armpits, nipples, and inframammary fold (IMF). These landmarks are commonly used to perform anthropometric measurements for aesthetic assessment. The methodological approach is based on both illumination and morphological analysis. The proposed method has been tested on 21 images. A good overall performance is observed, although several improvements are needed to refine the detection of the nipples and the SSN.
Abstract:
We present a segmentation method for fetal brain tissues of T2w MR images, based on the well-known Expectation-Maximization Markov Random Field (EM-MRF) scheme. Our main contribution is an intensity model composed of 7 Gaussian distributions designed to deal with the large intensity variability of fetal brain tissues. The second main contribution is a 3-step MRF model that introduces both local spatial and anatomical priors given by a cortical distance map. Preliminary results on 4 subjects are presented and evaluated in comparison to manual segmentations, showing that our methodology can successfully be applied to such data, dealing with large intensity variability within brain tissues and partial volume (PV).
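The Gaussian intensity model can be fitted with standard expectation-maximization. The sketch below is a generic 1-D mixture fit, not the authors' implementation: the abstract's model would use `k=7` components, and the quantile initialization and iteration count are our assumptions.

```python
import numpy as np

def em_gmm_1d(x, k=7, n_iter=50):
    """EM for a 1-D Gaussian mixture intensity model.
    Returns mixture weights, means, variances and per-sample
    responsibilities (the E-step posteriors used for segmentation)."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        d2 = (x[:, None] - mu[None, :]) ** 2
        lik = w * np.exp(-0.5 * d2 / var) / np.sqrt(2 * np.pi * var)
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk + 1e-9
    return w, mu, var, r
```

In an MRF scheme such as the one described, the responsibilities would then feed the unary term of the energy, with the spatial and anatomical priors entering as pairwise terms.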
Abstract:
Positron emission tomography is a functional imaging technique that allows detection of the regional metabolic rate and is often coupled with morphological imaging techniques such as computed tomography. The rationale for its use is based on the clearly demonstrated fact that functional changes in tumor processes occur before morphological changes. Its introduction into clinical practice added a new dimension to conventional imaging techniques. This review presents the current and proposed indications for the use of positron emission tomography/computed tomography in tumors of the prostate, bladder and testes, and the potential role of this examination in radiotherapy planning.
Abstract:
Diagnosis of several neurological disorders is based on the detection of typical pathological patterns in the electroencephalogram (EEG). This is a time-consuming task requiring significant training and experience. Automatic detection of these EEG patterns would greatly assist in quantitative analysis and interpretation. We present a method that automatically detects epileptiform events and discriminates them from eye blinks, based on features derived using a novel application of independent component analysis. The algorithm was trained and cross-validated using seven EEGs with epileptiform activity. For epileptiform events with compensation for eye blinks, the sensitivity was 65 +/- 22% at a specificity of 86 +/- 7% (mean +/- SD). With feature extraction by PCA or classification of raw data, specificity was reduced to 76% and 74%, respectively, for the same sensitivity. On exactly the same data, the commercially available software Reveal had a maximum sensitivity of 30% and a concurrent specificity of 77%. Our algorithm performed well at detecting epileptiform events in this preliminary test and offers a flexible tool that is intended to be generalized to the simultaneous classification of many waveforms in the EEG.
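Independent component analysis itself can be sketched with the symmetric FastICA fixed-point iteration (tanh nonlinearity, whitening by eigendecomposition of the covariance); this generic illustration is not the authors' feature-derivation pipeline.

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Symmetric FastICA with a tanh nonlinearity.
    X: (n_channels, n_samples) mixed signals. Returns estimated
    sources, recovered up to sign and permutation."""
    n, t = X.shape
    X = X - X.mean(axis=1, keepdims=True)
    # whiten: eigendecomposition of the channel covariance matrix
    d, E = np.linalg.eigh(X @ X.T / t)
    Z = E @ np.diag(d ** -0.5) @ E.T @ X
    W = np.random.default_rng(seed).standard_normal((n, n))
    for _ in range(n_iter):
        # fixed-point update: E[z g(Wz)] - E[g'(Wz)] W, with g = tanh
        G = np.tanh(W @ Z)
        W = G @ Z.T / t - np.diag((1.0 - G ** 2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W)      # symmetric decorrelation
        W = U @ Vt
    return W @ Z
```

Recovered sources come back in arbitrary order and sign, so any downstream feature (e.g. for separating epileptiform events from eye blinks) must be invariant to both.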
Abstract:
One of the most relevant difficulties faced by first-year undergraduate students is settling into the educational environment of universities. This paper presents a case study that proposes a computer-assisted collaborative experience designed to help students in their transition from high school to university. This is done by facilitating their first contact with the campus and its services, the university community, methodologies and activities. The experience combines individual and collaborative activities, conducted in and out of the classroom, structured following the Jigsaw Collaborative Learning Flow Pattern. A specific environment including portable technologies with network and computer applications has been developed to support and facilitate the orchestration of a flow of learning activities into a single integrated learning setting. The result is a Computer-Supported Collaborative Blended Learning scenario, which has been evaluated with first-year university students of the degrees of Software and Audiovisual Engineering within the subject Introduction to Information and Communications Technologies. The findings reveal that the scenario significantly improves students’ interest in their studies and their understanding of the campus and the services provided. The environment is also an innovative approach that successfully supports the heterogeneous activities conducted by both teachers and students during the scenario. This paper introduces the goals and context of the case study, describes how the technology was employed to conduct the learning scenario, and presents the evaluation methods and the main results of the experience.
Abstract:
OBJECTIVE: Although intracranial hypertension is one of the important prognostic factors after head injury, increased intracranial pressure (ICP) may also be observed in patients with favourable outcome. We have studied whether the value of ICP monitoring can be augmented by indices describing cerebrovascular pressure-reactivity and pressure-volume compensatory reserve derived from ICP and arterial blood pressure (ABP) waveforms. METHOD: 96 patients with intracranial hypertension were studied retrospectively: 57 with fatal outcome and 39 with favourable outcome. ABP and ICP waveforms were recorded. Indices of cerebrovascular reactivity (PRx) and cerebrospinal compensatory reserve (RAP) were calculated as moving correlation coefficients between slow waves of ABP and ICP, and between slow waves of ICP pulse amplitude and mean ICP, respectively. The magnitude of 'slow waves' was derived using ICP low-pass spectral filtration. RESULTS: The most significant difference was found in the magnitude of slow waves that was persistently higher in patients with a favourable outcome (p<0.00004). In patients who died ICP was significantly higher (p<0.0001) and cerebrovascular pressure-reactivity (described by PRx) was compromised (p<0.024). In the same patients, pressure-volume compensatory reserve showed a gradual deterioration over time with a sudden drop of RAP when ICP started to rise, suggesting an overlapping disruption of the vasomotor response. CONCLUSION: Indices derived from ICP waveform analysis can be helpful for the interpretation of progressive intracranial hypertension in patients after brain trauma.
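PRx and RAP are both moving correlation coefficients between two slowly varying signals. A minimal sketch of that construction follows; the window length and the input signals are placeholders, and the clinical pipeline would first extract slow waves and pulse amplitudes from the raw waveforms.

```python
import numpy as np

def moving_correlation(a, b, window):
    """Pearson correlation of two signals over a sliding window, the
    construction behind PRx (slow waves of ABP vs mean ICP) and RAP
    (ICP pulse amplitude vs mean ICP). Window length is in samples;
    positions before the first full window are NaN."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    out = np.full(len(a), np.nan)
    for i in range(window - 1, len(a)):
        wa = a[i - window + 1:i + 1]
        wb = b[i - window + 1:i + 1]
        out[i] = np.corrcoef(wa, wb)[0, 1]
    return out
```

A persistently positive PRx (ICP passively following ABP) indicates compromised pressure-reactivity, which is the pattern the study reports in patients with fatal outcome.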
Abstract:
This paper presents the segmentation of the bilateral parotid glands in Head and Neck (H&N) CT images using active-contour-based atlas registration. We compare segmentation results from three atlas selection strategies: (i) selection of the single most similar atlas for each image to be segmented, (ii) fusion of segmentation results from multiple atlases using STAPLE, and (iii) fusion of segmentation results using majority voting. Among these three approaches, fusion using majority voting provided the best results. Finally, we present a detailed evaluation on a dataset of eight images (provided as part of the H&N auto-segmentation challenge conducted in conjunction with the MICCAI 2010 conference) using the majority voting strategy.