938 results for automatic summarization
Abstract:
Purpose: Selective retina laser treatment (SRT), a sub-threshold therapy method, avoids widespread damage to the retina by targeting only a few retinal layers. While this facilitates faster healing, the lack of visual feedback during treatment is a considerable shortcoming: induced lesions remain invisible under conventional imaging, which makes clinical use challenging. To overcome this, we present a new strategy to provide location-specific, contact-free automatic feedback on SRT laser applications. Methods: We leverage time-resolved optical coherence tomography (OCT) to give clinicians informative feedback on the outcome of location-specific treatment. By coupling an OCT system to the SRT treatment laser, we visualize structural changes in the retinal layers as they occur via time-resolved depth images. We then propose a novel strategy for automatic assessment of these time-resolved OCT images, introducing new image features that, when combined with standard machine learning classifiers, yield excellent treatment-outcome classification. Results: Our approach was evaluated on both ex vivo porcine eyes and human patients in a clinical setting, reaching above 95% accuracy in predicting patient treatment outcomes. In addition, we show that accurate outcomes for human patients can be estimated even when our method is trained using only ex vivo porcine data. Conclusion: The proposed technique offers a much-needed strategy toward noninvasive, safe, reliable, and repeatable SRT applications. These results are encouraging for the broader use of new treatment options for neovascularization-based retinal pathologies.
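As an illustrative aside, the feature-plus-classifier pattern this abstract describes can be sketched in Python. The paper's actual OCT image features are not given here, so the extractor below (per-depth temporal variance plus an overall frame-change rate) is a hypothetical stand-in, and all data are placeholders.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_features(oct_sequence):
    # oct_sequence: array of shape (time, depth) for one laser application
    temporal_var = oct_sequence.var(axis=0)                      # per-depth variability over time
    frame_change = np.abs(np.diff(oct_sequence, axis=0)).mean()  # overall rate of structural change
    return np.concatenate([temporal_var, [frame_change]])

rng = np.random.default_rng(0)
X_raw = [rng.normal(size=(50, 32)) for _ in range(40)]  # placeholder time-resolved depth images
y = rng.integers(0, 2, size=40)                         # placeholder treatment outcome labels

X = np.stack([extract_features(s) for s in X_raw])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())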
Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry.
Abstract:
Information about the size of a tumor and its temporal evolution is needed for the diagnosis as well as the treatment of brain tumor patients. The aim of this study was to investigate the potential of a fully-automatic segmentation method, called BraTumIA, for longitudinal brain tumor volumetry by comparing the automatically estimated volumes with ground truth data acquired via manual segmentation. Longitudinal magnetic resonance (MR) imaging data of 14 patients with newly diagnosed glioblastoma, encompassing 64 MR acquisitions ranging from preoperative up to 12-month follow-up images, were analysed. Manual segmentation was performed by two human raters. Strong correlations (R = 0.83-0.96, p < 0.001) were observed between the volumetric estimates of BraTumIA and those of each human rater for the contrast-enhancing (CET) and non-enhancing T2-hyperintense (NCE-T2) tumor compartments. A quantitative analysis of the inter-rater disagreement showed that the disagreement between BraTumIA and each of the human raters was comparable to the disagreement between the two human raters. In summary, BraTumIA generated volumetric trend curves for the contrast-enhancing and non-enhancing T2-hyperintense tumor compartments comparable to the estimates of human raters. These findings suggest the potential of automated longitudinal tumor segmentation to substitute for manual volumetric follow-up of these compartments.
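A minimal sketch of the agreement analysis described above, assuming a Pearson correlation between automatic and manual volumes; the arrays below are synthetic placeholders, not the study's data.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
manual_vol = rng.uniform(5.0, 60.0, size=64)           # rater volumes in cm^3 (synthetic)
auto_vol = manual_vol + rng.normal(0.0, 3.0, size=64)  # automatic estimates (synthetic)

r, p = pearsonr(auto_vol, manual_vol)
print(f"R = {r:.2f}, p = {p:.3g}")                     # the study reports R = 0.83-0.96, p < 0.001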
Abstract:
MRSI grids frequently contain spectra of poor quality, mainly because of the high sensitivity of MRS to field inhomogeneities. These poor-quality spectra are prone to quantification and/or interpretation errors that can have a significant impact on the clinical use of spectroscopic data. Therefore, quality control of the spectra should always precede their clinical use. When performed manually, quality assessment of MRSI spectra is not only a tedious and time-consuming task, but is also affected by human subjectivity. Consequently, automatic, fast and reliable methods for spectral quality assessment are of utmost interest. In this article, we present a new random forest-based method for automatic quality assessment of ¹H MRSI brain spectra, which uses a new set of MRS signal features. The random forest classifier was trained on spectra from 40 MRSI grids that were classified as acceptable or non-acceptable by two expert spectroscopists. To account for the effects of intra-rater reliability, each spectrum was rated for quality three times by each rater. The automatic method classified these spectra with an area under the curve (AUC) of 0.976. Furthermore, on the subset of spectra containing only the cases that were classified the same way by the spectroscopists every time, an AUC of 0.998 was obtained. Feature importance for the classification was also evaluated: frequency-domain skewness and kurtosis, as well as time-domain signal-to-noise ratios (SNRs) in the 50-75 ms and 75-100 ms ranges, were the most important features. Given that the method can assess a whole MRSI grid faster than a spectroscopist (approximately 3 s versus approximately 3 min) and without loss of accuracy (agreement between a classifier trained on just one labelling session and any of the other labelling sessions, 89.88%; agreement between any two labelling sessions, 89.03%), the authors suggest its implementation in the clinical routine. The method presented in this article was implemented in jMRUI's SpectrIm plugin.
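A hedged sketch of the classification setup described above: a random forest trained on spectral features and evaluated by AUC. The feature names follow the abstract (frequency-domain skewness and kurtosis, time-domain SNR windows), but their computation here is a simplified guess on placeholder data.

import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def spectrum_features(fid, spectrum, noise_sd):
    # fid: time-domain signal sampled at 1 ms (simplifying assumption); spectrum: magnitude spectrum
    snr_50_75 = np.abs(fid[50:75]).mean() / noise_sd    # crude SNR over the 50-75 ms window
    snr_75_100 = np.abs(fid[75:100]).mean() / noise_sd  # crude SNR over the 75-100 ms window
    return [skew(spectrum), kurtosis(spectrum), snr_50_75, snr_75_100]

rng = np.random.default_rng(2)
X = np.array([spectrum_features(rng.normal(size=256), rng.normal(size=512), 1.0)
              for _ in range(300)])                     # placeholder spectra
y = rng.integers(0, 2, size=300)                        # acceptable / non-acceptable labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))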
Abstract:
This paper shows that countries characterized by a financial accelerator mechanism may reverse the usual finding of the literature: flexible exchange rate regimes do a worse job of insulating open economies from external shocks. I obtain this result with a calibrated small open economy model that endogenizes foreign interest rates by linking them to the banking sector's foreign currency leverage. This relationship makes exchange rate policy more important than under the usual exogeneity assumption. I find empirical support for this prediction using the Local Projections method. Finally, a second-order approximation to the model yields larger welfare losses under flexible regimes.
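For illustration, the Local Projections method mentioned above estimates impulse responses by regressing the outcome at each horizon h on the shock; the sketch below uses synthetic series and omits the paper's actual specification and controls.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
T = 200
shock = rng.normal(size=T)  # synthetic external shock series
y = np.convolve(shock, [0.0, 0.8, 0.5, 0.2], mode="full")[:T] + rng.normal(scale=0.5, size=T)

irf = []
for h in range(5):
    # regress y_{t+h} on shock_t plus a constant (lagged controls omitted for brevity)
    res = sm.OLS(y[h:], sm.add_constant(shock[:T - h])).fit()
    irf.append(res.params[1])
print("Impulse response by horizon:", irf)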
Abstract:
This talk illustrates how results from various Stata commands can be processed efficiently for inclusion in customized reports. A two-step procedure is proposed in which results are gathered and archived in the first step and then tabulated in the second step. This approach disentangles the task of computing results (which may take a long time) from that of preparing results for inclusion in presentations, papers, and reports (which you may have to do over and over). Examples using results from model estimation commands and from various other Stata commands, such as tabulate, summarize, or correlate, are presented. Users will also be shown how to dynamically link results into word processors or into LaTeX documents.
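Since the talk's workflow is Stata-specific, the following is only an analogous illustration of the same two-step pattern in Python: step 1 computes and archives results, step 2 tabulates them later without recomputation. The file name and result structure are hypothetical.

import json
import statistics

# Step 1: compute results (potentially slow) and archive them.
data = [2.1, 3.4, 2.9, 4.0, 3.3]
results = {"n": len(data), "mean": statistics.mean(data), "sd": statistics.stdev(data)}
with open("results.json", "w") as f:
    json.dump(results, f)

# Step 2: tabulate the archived results for a report, re-runnable without step 1.
with open("results.json") as f:
    r = json.load(f)
print(f"n = {r['n']}, mean = {r['mean']:.2f}, sd = {r['sd']:.2f}")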