997 results for imaging software
Abstract:
BACKGROUND: Given the fragmentation of outpatient care, timely follow-up of abnormal diagnostic imaging results remains a challenge. We hypothesized that an electronic medical record (EMR) that facilitates the transmission and availability of critical imaging results through either automated notification (alerting) or direct access to the primary report would eliminate this problem. METHODS: We studied critical imaging alert notifications in the outpatient setting of a tertiary care Department of Veterans Affairs facility from November 2007 to June 2008. Tracking software determined whether the alert was acknowledged (ie, health care practitioner/provider [HCP] opened the message for viewing) within 2 weeks of transmission; acknowledged alerts were considered read. We reviewed medical records and contacted HCPs to determine timely follow-up actions (eg, ordering a follow-up test or consultation) within 4 weeks of transmission. Multivariable logistic regression models accounting for clustering effect by HCPs analyzed predictors for 2 outcomes: lack of acknowledgment and lack of timely follow-up. RESULTS: Of 123 638 studies (including radiographs, computed tomographic scans, ultrasonograms, magnetic resonance images, and mammograms), 1196 images (0.97%) generated alerts; 217 (18.1%) of these were unacknowledged. Alerts had a higher risk of being unacknowledged when the ordering HCPs were trainees (odds ratio [OR], 5.58; 95% confidence interval [CI], 2.86-10.89) and when dual-alert (>1 HCP alerted) as opposed to single-alert communication was used (OR, 2.02; 95% CI, 1.22-3.36). Timely follow-up was lacking in 92 (7.7% of all alerts) and was similar for acknowledged and unacknowledged alerts (7.3% vs 9.7%; P = .22). Risk for lack of timely follow-up was higher with dual-alert communication (OR, 1.99; 95% CI, 1.06-3.48) but lower when additional verbal communication was used by the radiologist (OR, 0.12; 95% CI, 0.04-0.38). 
Nearly all abnormal results lacking timely follow-up at 4 weeks were eventually found to have measurable clinical impact in terms of further diagnostic testing or treatment. CONCLUSIONS: Critical imaging results may not receive timely follow-up actions even when HCPs receive and read results in an advanced, integrated electronic medical record system. A multidisciplinary approach is needed to improve patient safety in this area.
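The unadjusted risks reported above reduce to 2×2 tables; a minimal sketch of an odds ratio with a Wald 95% confidence interval, using hypothetical counts (the study itself fitted multivariable logistic regression models accounting for clustering by HCP, which this does not reproduce):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    exposed:   a events, b non-events
    unexposed: c events, d non-events
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (not from the study): 30 of 100 trainee-ordered
# alerts unacknowledged vs 10 of 100 staff-ordered alerts.
or_, lo, hi = odds_ratio_ci(30, 70, 10, 90)
print(round(or_, 2))  # 3.86
```

An interval that excludes 1.0 corresponds to a statistically significant association at the 5% level, matching how the abstract reports its predictors.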
Abstract:
PURPOSE Fundus autofluorescence (FAF) can be characterized not only by its intensity or emission spectrum but also by its lifetime. As the lifetime of a fluorescent molecule is sensitive to its local microenvironment, this technique may provide more information than FAF intensity imaging. We report here the characteristics and repeatability of FAF lifetime measurements of the human macula using a new fluorescence lifetime imaging ophthalmoscope (FLIO). METHODS A total of 31 healthy phakic subjects, ranging in age from 22 to 61 years, were included in this study. For image acquisition, a fluorescence lifetime ophthalmoscope based on a Heidelberg Engineering Spectralis system was used. Fluorescence lifetime maps of the retina were recorded in a short (498-560 nm) and a long (560-720 nm) spectral channel. For quantification of fluorescence lifetimes, a standard ETDRS grid was used. RESULTS Mean fluorescence lifetimes were shortest in the fovea: 208 picoseconds for the short-spectral channel and 239 picoseconds for the long-spectral channel. Fluorescence lifetimes increased from the central area to the outer ring of the ETDRS grid. The test-retest reliability of FLIO was very high for all ETDRS areas (Spearman's ρ = 0.80 for the short- and 0.97 for the long-spectral channel, P < 0.0001). Fluorescence lifetimes increased with age. CONCLUSIONS FLIO allows reproducible measurements of fluorescence lifetimes of the macula in healthy subjects. Using custom-built software, we were able to quantify fluorescence lifetimes within the ETDRS grid. Establishing a clinically accessible standard against which to measure FAF lifetimes within the retina is a prerequisite for future studies in retinal disease.
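Averaging a lifetime map over the ETDRS grid can be sketched as masking concentric zones (the standard 1, 3, and 6 mm zone diameters are assumed; the function and its arguments are illustrative, not the FLIO software's actual API):

```python
import numpy as np

def etdrs_ring_means(lifetime_map, px_per_mm):
    """Mean value inside the three concentric ETDRS zones (central 1 mm,
    inner 3 mm, outer 6 mm diameter), assuming a fovea-centred map."""
    h, w = lifetime_map.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    y, x = np.ogrid[:h, :w]
    r_mm = np.hypot(y - cy, x - cx) / px_per_mm       # radius of each pixel in mm
    zones = {"center": r_mm <= 0.5,
             "inner": (r_mm > 0.5) & (r_mm <= 1.5),
             "outer": (r_mm > 1.5) & (r_mm <= 3.0)}
    return {name: float(lifetime_map[mask].mean()) for name, mask in zones.items()}

# Uniform 250 ps map at 10 px/mm: every zone mean is 250.
flat = np.full((70, 70), 250.0)
print(etdrs_ring_means(flat, 10.0)["center"])  # 250.0
```

On a real lifetime map the zone means would reproduce the foveal-to-peripheral gradient the abstract describes.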
Abstract:
OBJECTIVE To evaluate the role of an ultra-low-dose dual-source CT coronary angiography (CTCA) scan with high pitch for delimiting the range of the subsequent standard CTCA scan. METHODS 30 patients with an indication for CTCA were prospectively examined using a two-scan dual-source CTCA protocol (2.0 × 64.0 × 0.6 mm; pitch, 3.4; rotation time of 280 ms; 100 kV): Scan 1 was acquired with one-fifth of the tube current suggested by the automatic exposure control software [CareDose 4D™ (Siemens Healthcare, Erlangen, Germany) using 100 kV and 370 mAs as a reference] with the scan length from the tracheal bifurcation to the diaphragmatic border. Scan 2 was acquired with standard tube current extending with reduced scan length based on Scan 1. Nine central coronary artery segments were analysed qualitatively on both scans. RESULTS Scan 2 (105.1 ± 10.1 mm) was significantly shorter than Scan 1 (127.0 ± 8.7 mm). Image quality scores were significantly better for Scan 2. However, in 5 of 6 (83%) patients with stenotic coronary artery disease, a stenosis was already detected in Scan 1 and in 13 of 24 (54%) patients with non-stenotic coronary arteries, a stenosis was already excluded by Scan 1. Using Scan 2 as reference, the positive- and negative-predictive value of Scan 1 was 83% (5 of 6 patients) and 100% (13 of 13 patients), respectively. CONCLUSION An ultra-low-dose CTCA planning scan enables a reliable scan length reduction of the following standard CTCA scan and allows for correct diagnosis in a substantial proportion of patients. ADVANCES IN KNOWLEDGE Further dose reductions are possible owing to a change in the individual patient's imaging strategy as a prior ultra-low-dose CTCA scan may already rule out the presence of a stenosis or may lead to a direct transferal to an invasive catheter procedure.
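The reported positive and negative predictive values follow directly from the patient counts; a small sketch reproducing the 83% and 100% figures (using Scan 2 as the reference standard, as in the abstract):

```python
def predictive_values(tp, fp, tn, fn):
    """Positive and negative predictive value from a 2x2 confusion table."""
    return tp / (tp + fp), tn / (tn + fn)

# Counts from the abstract: Scan 1 was positive in 6 patients
# (5 confirmed stenotic by Scan 2) and negative in 13 (all confirmed).
ppv, npv = predictive_values(tp=5, fp=1, tn=13, fn=0)
print(round(ppv, 2), npv)  # 0.83 1.0
```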
Abstract:
PURPOSE To evaluate the utility of attenuation correction (AC) of V/P SPECT images for patients with pulmonary emphysema. MATERIALS AND METHODS Twenty-one patients (mean age 67.6 years) with pulmonary emphysema who underwent V/P SPECT/CT were included. AC and non-AC V/P SPECT images were compared visually and semiquantitatively. Visual comparison of AC/non-AC images was based on a 5-point Likert scale. Semiquantitative comparison assessed absolute counts per lung (aCpLu) and lung lobe (aCpLo) for AC/non-AC images using software-based analysis; percentage counts (PC = (aCpLo/aCpLu) × 100) were calculated. Correlation between AC and non-AC V/P SPECT images was analyzed using Spearman's rho correlation coefficient; differences were tested for significance with the Wilcoxon rank sum test. RESULTS Visual analysis revealed high conformity between AC and non-AC V/P SPECT images. Semiquantitative analysis of PC in AC/non-AC images showed excellent correlation and no significant differences in perfusion (ρ = 0.986) or ventilation (ρ = 0.979, p = 0.809) SPECT/CT images. CONCLUSION AC of V/P SPECT images for lung lobe-based function imaging in patients with pulmonary emphysema does not improve visual or semiquantitative image analysis.
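The percentage-count formula PC = (aCpLo/aCpLu) × 100 can be sketched in a few lines; the lobe names and counts below are hypothetical:

```python
def percentage_counts(lobe_counts):
    """PC = (aCpLo / aCpLu) * 100: each lobe's share of the whole-lung counts."""
    total = sum(lobe_counts.values())  # aCpLu: absolute counts per lung
    return {lobe: 100.0 * c / total for lobe, c in lobe_counts.items()}

# Hypothetical per-lobe counts for a right lung (three lobes).
pc = percentage_counts({"upper": 50_000, "middle": 25_000, "lower": 25_000})
print(pc["upper"])  # 50.0
```

Because PC is a ratio within the same image, global count changes introduced by attenuation correction largely cancel out, which is consistent with the near-perfect AC/non-AC correlations reported.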
Abstract:
BACKGROUND The aim of this study was to evaluate the accuracy of linear measurements on three imaging modalities: lateral cephalograms from a cephalometric machine with a 3 m source-to-mid-sagittal-plane distance (SMD), from a machine with a 1.5 m SMD, and 3D models from cone-beam computed tomography (CBCT) data. METHODS Twenty-one dry human skulls were used. Lateral cephalograms were taken using two cephalometric devices: one with a 3 m SMD and one with a 1.5 m SMD. CBCT scans were taken with a 3D Accuitomo® 170, and 3D surface models were created in Maxilim® software. Thirteen linear measurements were completed twice by two observers with a 4 week interval. Direct physical measurements with a digital calliper were defined as the gold standard. Statistical analysis was performed. RESULTS Nasion-Point A was significantly different from the gold standard in all methods. More statistically significant differences were found for the measurements on the 3 m SMD cephalograms than for the other methods. Intra- and inter-observer agreement based on 3D measurements was slightly better than for the other methods. LIMITATIONS Dry human skulls without soft tissues were used. Therefore, the results have to be interpreted with caution, as they do not fully represent clinical conditions. CONCLUSIONS 3D measurements resulted in better observer agreement. The accuracy of measurements based on CBCT and the 1.5 m SMD cephalogram was better than that of the 3 m SMD cephalogram. These findings demonstrate the accuracy and reliability of linear measurements based on CBCT data compared with 2D techniques. Future studies should focus on the implementation of 3D cephalometry in clinical practice.
Abstract:
High Angular Resolution Diffusion Imaging (HARDI) techniques, including Diffusion Spectrum Imaging (DSI), have been proposed to resolve crossing and other complex fiber architecture in human brain white matter. In these methods, directional information of diffusion is inferred from the peaks in the orientation distribution function (ODF). Extensive studies using histology on macaque brain, cat cerebellum, rat hippocampus and optic tracts, and bovine tongue are qualitatively in agreement with DSI-derived ODFs and tractography. However, there are only two studies in the literature which validated DSI results using physical phantoms, and neither was performed on a clinical MRI scanner. Also, the limited studies which optimized DSI in a clinical setting did not involve a comparison against physical phantoms. Finally, there is a lack of consensus on the necessary pre- and post-processing steps in DSI, and ground-truth diffusion fiber phantoms are not yet standardized. Therefore, the aims of this dissertation were to design and construct novel diffusion phantoms, employ post-processing techniques in order to systematically validate and optimize DSI-derived fiber ODFs in crossing regions on a clinical 3T MR scanner, and develop user-friendly software for DSI data reconstruction and analysis. Phantoms with a fixed crossing fiber configuration of two fibers crossing at 90° and 45°, respectively, along with a phantom with three fibers crossing at 60°, were constructed using novel hollow plastic capillaries and novel placeholders. T2-weighted MRI results on these phantoms demonstrated high SNR, homogeneous signal, and absence of air bubbles. Also, a technique to deconvolve the response function of an individual peak from the overall ODF was implemented, in addition to other DSI post-processing steps. This technique greatly improved the angular resolution of the otherwise unresolvable peaks in a crossing fiber ODF.
The effects of DSI acquisition parameters and SNR on the resultant angular accuracy of DSI on the clinical scanner were studied and quantified using the developed phantoms. With a high angular direction sampling and reasonable levels of SNR, quantification of a crossing region in the 90°, 45° and 60° phantoms resulted in a successful detection of angular information with mean ± SD of 86.93°±2.65°, 44.61°±1.6° and 60.03°±2.21° respectively, while simultaneously enhancing the ODFs in regions containing single fibers. For the applicability of these validated methodologies in DSI, improvement in ODFs and fiber tracking from known crossing fiber regions in normal human subjects were demonstrated; and an in-house software package in MATLAB which streamlines the data reconstruction and post-processing for DSI, with easy to use graphical user interface was developed. In conclusion, the phantoms developed in this dissertation offer a means of providing ground truth for validation of reconstruction and tractography algorithms of various diffusion models (including DSI). Also, the deconvolution methodology (when applied as an additional DSI post-processing step) significantly improved the angular accuracy of the ODFs obtained from DSI, and should be applicable to ODFs obtained from the other high angular resolution diffusion imaging techniques.
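Quantifying the angle between two detected ODF peaks, as in the phantom validation above, amounts to measuring the acute angle between direction vectors; since ODF peaks are antipodally symmetric, the sign of a peak direction is arbitrary. A minimal sketch:

```python
import math

def crossing_angle(v1, v2):
    """Acute angle (degrees) between two fiber directions. The absolute
    value of the dot product handles antipodal symmetry of ODF peaks."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    cosang = abs(dot) / (n1 * n2)
    return math.degrees(math.acos(min(1.0, cosang)))  # clamp for fp safety

print(round(crossing_angle((1, 0, 0), (1, 1, 0)), 1))  # 45.0
```

Applied to peak directions extracted from the phantom ODFs, this is the kind of measurement that yields the 86.93° ± 2.65° style statistics reported above.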
Abstract:
PURPOSE: To describe and follow cotton wool spots (CWS) in branch retinal vein occlusion (BRVO) using multimodal imaging. METHODS: In this prospective cohort study including 24 patients with new-onset BRVO, CWS were described and analyzed in color fundus photography (CF), spectral domain optical coherence tomography (SD-OCT), infrared (IR) and fluorescein angiography (FA) every 3 months for 3 years. The CWS area on SD-OCT and CF was evaluated using OCT-Tool-Kit software: CWS were marked in each single OCT B-scan and the software calculated the area by interpolation. RESULTS: 29 central CWS lesions were found. 100% of these CWS were visible on SD-OCT, 100% on FA and 86.2% on IR imaging, but only 65.5% on CF imaging. CWS were visible for 12.4 ± 7.5 months on SD-OCT, for 4.4 ± 3 months and 4.3 ± 3.4 months on CF and on IR, respectively, and for 17.5 ± 7.1 months on FA. The evaluated CWS area on SD-OCT was larger than on CF (0.26 ± 0.17 mm² vs. 0.13 ± 0.1 mm², p < 0.0001). The CWS area on SD-OCT and surrounding pathology such as intraretinal cysts, avascular zones and intraretinal hemorrhage were predictive for how long CWS remained visible (r² = 0.497, p < 0.002). CONCLUSIONS: The lifetime and presentation of CWS in BRVO seem comparable to other diseases. SD-OCT shows a higher sensitivity for detecting CWS compared to CF. The duration of visibility of CWS varies among different image modalities and depends on the surrounding pathology and the CWS size.
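Calculating a lesion area by marking it on consecutive B-scans and interpolating, as described for the OCT-Tool-Kit above, can be sketched as a trapezoidal integration of the marked widths across the scan spacing (the toolkit's actual interpolation scheme is not specified in the abstract, and the numbers below are hypothetical):

```python
def lesion_area_mm2(widths_mm, scan_spacing_mm):
    """Approximate en-face lesion area from the lesion width marked on
    consecutive B-scans, via the trapezoidal rule across the scan spacing."""
    area = 0.0
    for w0, w1 in zip(widths_mm[:-1], widths_mm[1:]):
        area += 0.5 * (w0 + w1) * scan_spacing_mm
    return area

# A lesion spanning five B-scans, 0.25 mm apart, widest in the middle.
area = lesion_area_mm2([0.0, 0.4, 0.6, 0.4, 0.0], 0.25)
print(round(area, 4))  # 0.35
```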
Abstract:
Background Gray-scale images make up the bulk of data in biomedical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time-intensive and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is using command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task-specific, and they don't provide a clear path when one wants to shape a new command line tool from a prototype shell script.
Results The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the requirement to touch or recompile existing code. Conclusion In this article, we describe the general design of MIA, a general-purpose framework for gray-scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high-resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms by using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
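The string-based descriptions MIA uses for filters and optimizers can be parsed in a few lines; a sketch of the idea in Python (the `name:key=value,...` syntax is modeled on the description above, not MIA's exact grammar):

```python
def parse_filter(desc):
    """Split a 'name:key=value,key=value' plug-in description into its
    name and a parameter dictionary (values kept as strings)."""
    name, _, argstr = desc.partition(":")
    params = dict(kv.partition("=")[::2] for kv in argstr.split(",")) if argstr else {}
    return name, params

print(parse_filter("gauss:w=2,unit=mm"))  # ('gauss', {'w': '2', 'unit': 'mm'})
```

Because the whole pipeline configuration lives in plain strings, the same descriptions can be passed unchanged from a prototype shell script to a compiled C++ program, which is the transition path the abstract describes.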
Abstract:
Quantification of neurotransmission single-photon emission computed tomography (SPECT) studies of the dopaminergic system can be used to track, stage, and facilitate early diagnosis of disease. The aim of this study was to implement QuantiDOPA, a semi-automatic quantification software for use in clinical routine to reconstruct and quantify neurotransmission SPECT studies using radioligands that bind the dopamine transporter (DAT). To this end, a workflow-oriented framework for biomedical imaging (GIMIAS) was employed. QuantiDOPA allows the user to perform a semi-automatic quantification of striatal uptake in three stages: reconstruction, normalization, and quantification. QuantiDOPA is a useful tool for semi-automatic quantification in DAT SPECT imaging and has proven simple and flexible.
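A common semiquantitative index of striatal uptake in DAT SPECT is the specific binding ratio; whether QuantiDOPA computes exactly this index is not stated in the abstract, so the sketch below is illustrative only:

```python
def specific_binding_ratio(striatal_mean, background_mean):
    """SBR = (striatal counts - background counts) / background counts,
    a standard semiquantitative index of striatal DAT binding."""
    return (striatal_mean - background_mean) / background_mean

# Hypothetical mean counts from striatal and occipital (background) VOIs.
print(specific_binding_ratio(300.0, 100.0))  # 2.0
```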
Abstract:
In recent decades, accumulated clinical evidence has proven that intraoperative radiation therapy (IORT) is a very valuable technique. In spite of that, planning technology has not evolved since its conception and is outdated in comparison with the current state of the art in other radiotherapy techniques, which slows down the adoption of IORT. RADIANCE is an IORT planning system, CE and FDA certified, developed by a consortium of companies, hospitals, and universities to overcome this technological backwardness. RADIANCE provides all basic radiotherapy planning tools, specifically adapted to IORT. These include, but are not limited to, image visualization, contouring, dose calculation algorithms (Pencil Beam [PB] and Monte Carlo [MC]), DVH calculation, and reporting. Other new tools, such as surgical simulation tools, have been developed to deal with the specific conditions of the technique. Planning with preoperative images (preplanning) has been evaluated, and the validity of the system has been proven in terms of documentation, treatment preparation, and learning, as well as improvement of the communication process between surgeons and radiation oncologists (ROs). Preliminary studies on navigation systems suggest benefits in helping the specialist accurately and safely apply the pre-plan to the treatment, updating the plan as needed. Improvements in the usability and workflow of such systems are needed to make them more practical. Preliminary studies on intraoperative imaging suggest that it could provide improved anatomy for the dose computation, to be compared with the previous pre-plan, although not all devices on the market offer characteristics good enough to do so. The DICOM.RT standard for radiotherapy information exchange has been updated to cover the particularities of IORT, enabling dose summation with external radiotherapy.
The effect of this planning technology on the global risk of the IORT technique has been assessed and documented as part of a failure mode and effects analysis (FMEA). Given these technological innovations and their clinical evaluation (including risk analysis), we consider RADIANCE a very valuable tool for the specialist, meeting the demands of professional societies (AAPM, ICRU, EURATOM) for current radiotherapy procedures.
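DVH calculation, one of the planning tools listed above, reduces to a cumulative histogram over the dose grid; a minimal sketch (equal voxel volumes assumed; RADIANCE's own implementation is not described in the abstract):

```python
import numpy as np

def cumulative_dvh(dose_voxels, dose_levels):
    """Cumulative DVH: fraction of the structure's volume receiving at
    least each dose level, assuming equal-sized voxels."""
    d = np.asarray(dose_voxels, dtype=float)
    return [float((d >= level).mean()) for level in dose_levels]

# Four voxels of a structure and three dose levels (Gy).
vols = cumulative_dvh([0.0, 10.0, 20.0, 30.0], [0.0, 15.0, 25.0])
print(vols)  # [1.0, 0.5, 0.25]
```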
Abstract:
Object tracking with subpixel accuracy is of fundamental importance in many fields, since it provides optimal performance at relatively low cost. Although there are many theoretical proposals that lead to resolution increments of several orders of magnitude, in practice this resolution is limited by the imaging system. In this paper we propose and demonstrate, through numerical models, a realistic limit for subpixel accuracy. The final result is that the maximum achievable resolution enhancement is connected with the dynamic range of the image, i.e. the detection limit is 1/2^(number of bits). The results presented here may help in the proper design of superresolution experiments in microscopy, surveillance, defense, and other fields.
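The stated detection limit of 1/2^(number of bits) is straightforward to compute; a small sketch:

```python
def detection_limit(n_bits):
    """Smallest detectable normalized intensity step for an n-bit image:
    1 / 2**n_bits, the bound the paper ties to subpixel accuracy."""
    return 1.0 / (1 << n_bits)

print(detection_limit(8))   # 0.00390625
print(detection_limit(12))  # 0.000244140625
```

In other words, moving from an 8-bit to a 12-bit sensor tightens the achievable subpixel detection limit by a factor of 16.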
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Purpose - To develop a non-invasive method for quantification of blood and pigment distributions across the posterior pole of the fundus from multispectral images using a computer-generated reflectance model of the fundus. Methods - A computer model was developed to simulate light interaction with the fundus at different wavelengths. The distribution of macular pigment (MP) and retinal haemoglobins in the fundus was obtained by comparing the model predictions with multispectral image data at each pixel. Fundus images were acquired from 16 healthy subjects from various ethnic backgrounds, and parametric maps showing the distribution of MP and of retinal haemoglobins throughout the posterior pole were computed. Results - The relative distributions of MP and retinal haemoglobins in the subjects were successfully derived from multispectral images acquired at wavelengths 507, 525, 552, 585, 596, and 611 nm, provided certain conditions were met and eye movement between exposures was minimal. Recovery of other fundus pigments was not feasible, and further development of the imaging technique and refinement of the software are necessary to understand the full potential of multispectral retinal image analysis. Conclusion - The distributions of MP and retinal haemoglobins obtained in this preliminary investigation are in good agreement with published data on normal subjects. The ongoing development of the imaging system should allow absolute parameter values to be computed. A further study will investigate subjects with known pathologies to determine the effectiveness of the method as a screening and diagnostic tool.
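The per-pixel comparison of model predictions with multispectral data is, at its core, an inverse problem. A deliberately simplified sketch, assuming a linear mixing of two pigment signatures across six wavelengths (the paper's actual model simulates light-fundus interaction and is not linear; the signature values and concentrations below are hypothetical):

```python
import numpy as np

# Hypothetical pigment signatures: rows are the six wavelengths
# (507...611 nm), columns are the two pigments (MP, haemoglobin).
signatures = np.array([[0.9, 0.1],
                       [0.8, 0.2],
                       [0.5, 0.5],
                       [0.3, 0.7],
                       [0.2, 0.8],
                       [0.1, 0.9]])
true_conc = np.array([0.4, 0.6])          # "ground truth" for one pixel
measured = signatures @ true_conc         # noiseless simulated measurement

# Recover the concentrations at this pixel by least squares.
est, *_ = np.linalg.lstsq(signatures, measured, rcond=None)
print(np.round(est, 3))  # [0.4 0.6]
```

Repeating this inversion at every pixel yields the parametric maps of MP and haemoglobin distribution that the abstract describes.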
Abstract:
Purpose: The Nidek F-10 is a scanning laser ophthalmoscope that is capable of a novel fundus imaging technique, so-called ‘retro-mode’ imaging. The standard method of imaging drusen in age-related macular degeneration (AMD) is by fundus photography. The aim of the study was to assess drusen quantification using retro-mode imaging. Methods: Stereoscopic fundus photographs and retro-mode images were captured in 31 eyes of 20 patients with varying stages of AMD. Two experienced masked retinal graders independently assessed images for the number and size of drusen, using purpose-designed software. Drusen were further assessed in a subset of eight patients using optical coherence tomography (OCT) imaging. Results: Drusen observed by fundus photography (mean 33.5) were significantly fewer in number than subretinal deposits seen in retro-mode (mean 81.6; p < 0.001). The predominant deposit diameter was on average 5 µm smaller in retro-mode imaging than in fundus photography (p = 0.004). Agreement between graders for both types of imaging was substantial for number of deposits (weighted κ = 0.69) and moderate for size of deposits (weighted κ = 0.42). Retro-mode deposits corresponded to drusen on OCT imaging in all eight patients. Conclusion: The subretinal deposits detected by retro-mode imaging were consistent with the appearance of drusen on OCT imaging; however, a larger longitudinal study would be required to confirm this finding. Retro-mode imaging detected significantly more deposits than conventional colour fundus photography. Retro-mode imaging provides a rapid non-invasive technique, useful in monitoring subtle changes and progression of AMD, which may be useful in monitoring the response of drusen to future therapeutic interventions.
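Inter-grader agreement above is reported as weighted kappa; a minimal sketch of linear-weighted Cohen's kappa for ordinal gradings (the study's exact weighting scheme is not stated in the abstract, so quadratic weights may have been used instead):

```python
def linear_weighted_kappa(r1, r2, n_cats):
    """Linear-weighted Cohen's kappa for two raters' ordinal scores
    (integers 0..n_cats-1): 1 - observed/expected weighted disagreement."""
    n = len(r1)
    obs = sum(abs(a - b) for a, b in zip(r1, r2)) / n
    p1 = [sum(a == k for a in r1) / n for k in range(n_cats)]   # marginals, rater 1
    p2 = [sum(b == k for b in r2) / n for k in range(n_cats)]   # marginals, rater 2
    exp = sum(p1[i] * p2[j] * abs(i - j)
              for i in range(n_cats) for j in range(n_cats))
    return 1.0 - obs / exp

# Perfect agreement gives kappa = 1.
print(linear_weighted_kappa([0, 1, 2, 1], [0, 1, 2, 1], 3))  # 1.0
```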
Abstract:
This dissertation establishes the foundation for a new 3-D visual interface integrating Magnetic Resonance Imaging (MRI) with Diffusion Tensor Imaging (DTI). The need for such an interface is critical for understanding brain dynamics and for providing more accurate diagnosis of key brain dysfunctions in terms of neuronal connectivity. This work involved two research fronts: (1) the development of new image processing and visualization techniques in order to accurately establish relational positioning of neuronal fiber tracts and key landmarks in 3-D brain atlases, and (2) the obligation to address the computational requirements such that the processing time is within the practical bounds of clinical settings. The system was evaluated using data from thirty patients and volunteers with the Brain Institute at Miami Children's Hospital. Innovative visualization mechanisms allow, for the first time, white matter fiber tracts to be displayed alongside key anatomical structures within accurately registered 3-D semi-transparent images of the brain. The segmentation algorithm is based on the calculation of mathematically tuned thresholds and region-detection modules. The uniqueness of the algorithm lies in its ability to perform fast and accurate segmentation of the ventricles. In contrast to manual selection of the ventricles, which averaged over 12 minutes, the segmentation algorithm averaged less than 10 seconds in its execution. The registration algorithm searches for and compares MR with DT images of the same subject, where derived correlation measures quantify the resulting accuracy. Overall, the images were 27% more correlated after registration, while registration, interpolation, and re-slicing of the images, performed together and in all the given dimensions, took an average of only 1.5 seconds.
This interface was fully embedded into a fiber-tracking software system in order to establish an optimal research environment. This highly integrated 3-D visualization system has reached a practical level that makes it ready for clinical deployment.
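Correlation-based assessment of registration accuracy, as described above, can be sketched with a normalized cross-correlation measure (whether the dissertation uses exactly this measure is an assumption):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally shaped images;
    1.0 means perfectly linearly related intensities."""
    a = np.ravel(np.asarray(a, dtype=float))
    b = np.ravel(np.asarray(b, dtype=float))
    a = a - a.mean()          # zero-mean copies; inputs are not modified
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

img = np.arange(16.0).reshape(4, 4)
print(round(ncc(img, img), 6))  # 1.0
```

Comparing such a score before and after alignment is one way to express a result like "27% more correlated after registration".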