4 results for annotated image database
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Background: Despite the increasingly high spatial and contrast resolution of CT, nodular lesions are prone to being missed on chest CT. Tinted lenses increase visual acuity and contrast sensitivity by filtering short-wavelength light of solar and artificial origin. Purpose: To test the impact of Gunnar eyewear, image quality (standard versus low-dose CT), and nodule location on the detectability of lung nodules in CT, and to compare their individual influence. Material and Methods: A pre-existing database of CT images of patients with lung nodules >5 mm, scanned at standard-dose image quality (150 ref mAs/120 kVp) and at lower dose/quality (40 ref mAs/120 kVp), was used. Five radiologists read 60 chest CTs twice: once with Gunnar glasses and once without, with a one-month break in between. At both read-outs the cases were shown at the lower-dose or standard-dose level to quantify the influence of both variables (eyewear vs. image quality) on nodule sensitivity. Results: The sensitivity of CT for lung nodules increased significantly with Gunnar eyewear for two readers and non-significantly for two other readers. Overall, the mean sensitivity across all radiologists rose significantly from 50% to 53% with the glasses (P = 0.034). In contrast, sensitivity for lung nodules was not significantly affected by lowering the image quality from 150 to 40 ref mAs: the average sensitivity at the low-dose level was 52%, which was even 0.7% higher than at the standard-dose level (P = 0.40). The factors with the strongest impact on sensitivity were the reader and the nodule location (lung segment). Conclusion: Sensitivity for lung nodules was significantly enhanced by Gunnar eyewear (+3%), while lower image quality (40 ref mAs) had no impact on nodule sensitivity. Not using the glasses had a bigger impact on sensitivity than lowering the image quality.
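The sensitivity comparison described above can be sketched in a few lines. This is a minimal illustration, not the study's analysis code: the per-reader detection counts below are invented, chosen only so the means reproduce the 50% vs. 53% figures quoted in the abstract.

```python
# Hedged sketch: per-reader and mean nodule-detection sensitivity for the
# two reading conditions. All counts are illustrative, not from the paper.

def sensitivity(true_positives: int, total_nodules: int) -> float:
    """Fraction of existing nodules that a reader detected."""
    return true_positives / total_nodules

total = 100  # hypothetical number of annotated nodules in the test set
with_glasses = [55, 52, 54, 51, 53]     # invented counts, five readers
without_glasses = [51, 50, 52, 49, 48]  # invented counts, five readers

mean_with = sum(sensitivity(tp, total) for tp in with_glasses) / len(with_glasses)
mean_without = sum(sensitivity(tp, total) for tp in without_glasses) / len(without_glasses)
print(f"mean sensitivity with glasses:    {mean_with:.0%}")
print(f"mean sensitivity without glasses: {mean_without:.0%}")
```

The study then tested whether such a difference in paired readings is significant (the abstract reports P = 0.034 for the eyewear effect).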
Abstract:
Background: Statistical shape models are widely used in biomedical research. They are routinely implemented for automatic image segmentation or object identification in medical images. In these fields, however, the acquisition of the large training datasets required to develop these models is usually a time-consuming process. Even after this effort, the collections of datasets are often lost or mishandled, resulting in duplication of work. Objective: To solve these problems, the Virtual Skeleton Database (VSD) is proposed as a centralized storage system where the data necessary to build statistical shape models can be stored and shared. Methods: The VSD provides an online repository system tailored to the needs of the medical research community. The processing of the most common image file types, a statistical shape model framework, and an ontology-based search provide the generic tools to store, exchange, and retrieve digital medical datasets. The hosted data are accessible to the community, and collaborative research catalyzes productivity. Results: To illustrate the need for an online repository for medical research, three exemplary projects of the VSD are presented: (1) an international collaboration to improve cochlear surgery and implant optimization, (2) a population-based analysis of femoral fracture risk between genders, and (3) an online application developed for the evaluation and comparison of the segmentation of brain tumors. Conclusions: The VSD is a novel system for scientific collaboration within the medical image community, with a data-centric concept and a semantically driven search option for anatomical structures. The repository has proven to be a useful tool for collaborative model building, as a resource for biomechanical population studies, and for enhancing segmentation algorithms.
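The "ontology-based search" the abstract mentions can be illustrated with a toy synonym lookup. Everything below (terms, synonyms, datasets) is invented for illustration; the real VSD ontology and API are not shown here.

```python
# Hedged sketch: an ontology-driven search over dataset metadata, in the
# spirit of the VSD's semantically driven search. All data is hypothetical.

# A toy stand-in for an anatomical ontology: term -> accepted synonyms.
ONTOLOGY = {
    "femur": {"femur", "thigh bone"},
    "cochlea": {"cochlea", "inner ear"},
}

# Hypothetical dataset records with free-text structure labels.
DATASETS = [
    {"id": 1, "structure": "thigh bone"},
    {"id": 2, "structure": "cochlea"},
    {"id": 3, "structure": "liver"},
]

def search(term: str) -> list:
    """Return datasets whose structure label matches the term or a synonym."""
    synonyms = ONTOLOGY.get(term, {term})
    return [d for d in DATASETS if d["structure"] in synonyms]

print(search("femur"))  # matches dataset 1 via the synonym "thigh bone"
```

The point of the semantic layer is exactly this: a query for "femur" still finds data labeled "thigh bone", which a plain string match would miss.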
Abstract:
Diet-related chronic diseases severely affect personal and global health. However, managing or treating these diseases currently requires long training and high personal involvement to succeed. Computer vision systems could assist with the assessment of diet by detecting and recognizing different foods and their portions in images. We propose novel methods for detecting a dish in an image and segmenting its contents with and without user interaction. All methods were evaluated on a database of over 1600 manually annotated images. The dish detection scored an average of 99% accuracy with a 0.2 s/image run time, while the automatic and semi-automatic dish segmentation methods reached average accuracies of 88% and 91%, respectively, with an average run time of 0.5 s/image, outperforming competing solutions.
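Evaluating a segmentation against a manually annotated image, as done above over the 1600-image database, can be sketched as follows. The metric shown (pixel accuracy) is an assumption for illustration; the paper's exact accuracy definition may differ, and the masks below are invented.

```python
# Hedged sketch: scoring a predicted segmentation mask against a manual
# annotation using pixel accuracy. Masks are tiny illustrative examples.

def pixel_accuracy(pred: list, truth: list) -> float:
    """Fraction of pixels where the predicted mask matches the annotation."""
    assert len(pred) == len(truth), "masks must have the same size"
    matches = sum(p == t for p, t in zip(pred, truth))
    return matches / len(pred)

# Flattened binary masks (1 = food/dish pixel, 0 = background).
pred  = [1, 1, 0, 0, 1, 0, 1, 1]
truth = [1, 1, 0, 1, 1, 0, 1, 0]
print(f"pixel accuracy: {pixel_accuracy(pred, truth):.2%}")  # 6 of 8 pixels agree
```

Averaging such a score over every annotated image yields the database-level accuracies the abstract reports.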