992 results for Image Berkeley database
Abstract:
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
Abstract:
In this paper we propose a novel method for shape analysis called HTS (Hough Transform Statistics), which uses statistics from the Hough Transform space to characterize the shape of objects in digital images. Experimental results showed that the HTS descriptor is robust and achieves better accuracy than some traditional shape description methods. Furthermore, the HTS algorithm has linear complexity, which is an important requirement for content-based image retrieval from large databases. © 2013 IEEE.
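The general idea of deriving a descriptor from Hough-space statistics can be sketched as follows. This is a minimal illustration (per-angle statistics of the rho values of contour points), not the authors' exact HTS algorithm; all names are hypothetical:

```python
import numpy as np

def hough_stats_descriptor(points, n_theta=36, n_rho=32):
    """Toy shape descriptor built from Hough-space statistics.

    points: (N, 2) array of (x, y) contour coordinates.
    Returns a 1-D feature vector of per-angle mean and standard
    deviation of the normalised rho histogram -- a simplification
    of the HTS idea, linear in the number of contour points.
    """
    points = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # rho = x*cos(theta) + y*sin(theta) for every point/angle pair
    rho = points[:, 0:1] * np.cos(thetas) + points[:, 1:2] * np.sin(thetas)
    rho_max = np.abs(rho).max() + 1e-9
    acc = np.zeros((n_theta, n_rho))
    for j in range(n_theta):
        hist, _ = np.histogram(rho[:, j], bins=n_rho, range=(-rho_max, rho_max))
        acc[j] = hist / len(points)  # normalise w.r.t. number of points
    # statistics per angle: mean and standard deviation of the rho histogram
    return np.concatenate([acc.mean(axis=1), acc.std(axis=1)])
```

The accumulator loop visits each angle once and histograms all points, so the cost grows linearly with the number of contour points for a fixed angular resolution.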
Abstract:
This paper describes an evaluation framework that allows a standardized and quantitative comparison of IVUS lumen and media segmentation algorithms. This framework was introduced at the MICCAI 2011 Computing and Visualization for (Intra)Vascular Imaging (CVII) workshop, comparing the results of the eight teams that participated. We describe the available database, comprising multi-center, multi-vendor and multi-frequency IVUS datasets; their acquisition; the creation of the reference standard; and the evaluation measures. The approaches address segmentation of the lumen, the media, or both borders; semi- or fully-automatic operation; and 2-D vs. 3-D methodology. Three performance measures for quantitative analysis have been proposed. The results of the evaluation indicate that, when semi-automatic methods are used, the vessel lumen and media can be segmented with an accuracy comparable to manual annotation, and that encouraging results can also be obtained with fully-automatic segmentation. The analysis performed in this paper also highlights the challenges in IVUS segmentation that remain to be solved.
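Segmentation-agreement measures of this kind are easy to state concretely. The sketch below shows two generic measures often used in such evaluations, Jaccard overlap and relative area difference; it is an illustration, not necessarily the three measures proposed in the paper:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index (intersection over union) of two binary masks."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 1.0

def area_difference(a, b):
    """Relative area difference |area(a) - area(b)| / area(b), with b the reference."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    return abs(int(a.sum()) - int(b.sum())) / int(b.sum())
```

A Jaccard of 1.0 means perfect overlap with the reference annotation; the area difference isolates systematic over- or under-segmentation independently of boundary placement.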
Abstract:
Although a vast amount of life sciences data is generated in the form of images, most scientists still store images on extremely diverse and often incompatible storage media, without any metadata structure, and thus with no standard facility with which to conduct searches or analyses. Here we present a solution to unlock the value of scientific images. The Global Image Database (GID) is a web-based (http://www.gwer.ch/qv/gid/gid.htm) structured central repository for annotated scientific images. The GID was designed to manage images from a wide spectrum of imaging domains, ranging from microscopy to automated screening. The annotations in the GID define the source experiment of the images by describing who the authors of the experiment are, when the images were created, the biological origin of the experimental sample, and how the sample was processed for visualization. A collection of experimental imaging protocols provides details of the sample preparation, labeling, and visualization procedures. In addition, the entries in the GID link these imaging protocols to the probe sequences or antibody names used in labeling experiments. The GID annotations are searchable by field or globally. The query results are first shown as image thumbnail previews, enabling quick browsing prior to retrieval of the original-sized annotated images. The development of the GID continues, aiming to facilitate the management and exchange of image data in the scientific community, and to create new query tools for mining image data.
Abstract:
Current image database metadata schemas require users to adopt a specific text-based vocabulary. Text-based metadata is good for searching but not for browsing. Existing image-based search facilities, on the other hand, are highly specialised and so suffer from similar problems. Wexelblat's semantic dimensional spatial visualisation schemas go some way towards addressing this problem by making both searching and browsing more accessible to the user in a single interface. But the question of how, and what, initial metadata to enter into a database remains. Different people see different things in an image and will organise a collection in equally diverse ways. However, we can find some similarity across groups of users regardless of their reasoning. For example, a search on Amazon.com also returns other products, based on an averaging of how users navigate the database. In this paper, we report on applying this concept to a set of images that we have visualised using both traditional methods and the Amazon.com method. We report the findings of this comparative investigation in a case study involving a group of randomly selected participants. We conclude with the recommendation that, in combination, the traditional and averaging methods would enhance current database visualisation, searching, and browsing facilities.
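The "averaging of how users navigate" idea resembles simple item co-occurrence counting, as used in "customers also viewed" features. A minimal sketch under that assumption (hypothetical function names, not the authors' implementation):

```python
from collections import Counter, defaultdict

def co_viewed(sessions):
    """Count how often each pair of images occurs in the same browsing session.

    sessions: iterable of lists of image identifiers.
    Returns a mapping image -> Counter of co-viewed images.
    """
    pairs = defaultdict(Counter)
    for session in sessions:
        uniq = set(session)
        for a in uniq:
            for b in uniq:
                if a != b:
                    pairs[a][b] += 1  # symmetric co-occurrence count
    return pairs

def also_viewed(pairs, image, k=3):
    """Top-k images most frequently co-viewed with `image`."""
    return [img for img, _ in pairs[image].most_common(k)]
```

Aggregating over many sessions averages out individual idiosyncrasies, which is what lets the collective navigation pattern drive browsing suggestions.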
Abstract:
Considering the difficulty of finding good-quality images for the development and testing of computer-aided diagnosis (CAD) schemes, this paper presents a public online mammographic image database, free to all interested viewers, aimed at helping to develop and evaluate CAD schemes. The digitization of the mammographic images is performed with contrast and spatial resolution suitable for processing purposes. The broad retrieval system allows the user to search for different images, exams, or patient characteristics. Comparison with other currently available databases has shown that the presented database has a sufficient number of images, is of high quality, and is the only one to include a functional search system.
Abstract:
Particle-image velocimetry (PIV) was used to visualize the flow within an optically transparent pediatric ventricular assist device (PVAD) under development in our laboratory. The device studied is a diaphragm-type pulsatile pump with an ejection volume of 30 ml per beating cycle, intended for temporary cardiac assistance as a bridge to transplantation or recovery in children. Of particular interest was the identification of flow patterns, including regions of stagnation and/or strong turbulence, that often promote thrombus formation and hemolysis, which can degrade the usefulness of such devices. For this purpose, phase-locked PIV measurements were performed in planes parallel to the diaphragm that drives the flow in the device. The test fluid was seeded with 10 μm polystyrene spheres, and the motion of these particles was used to determine the instantaneous flow velocity distribution in the illumination plane. These measurements revealed that flow velocities up to 1.0 m/s can occur within the PVAD. Phase-averaged velocity fields revealed the fixed vortices that drive the bulk flow within the device, though significant cycle-to-cycle variability was also quite apparent in the instantaneous velocity distributions, most notably during the filling phase. This cycle-to-cycle variability can generate strong turbulence that may contribute to greater hemolysis. Stagnation regions have also been observed between the input and output branches of the prototype, which can increase the likelihood of thrombus formation. [DOI: 10.1115/1.4001252]
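Phase-locked averaging of the kind described reduces to a simple computation once the velocity fields are on a grid. A minimal sketch, in which the array layout is our assumption rather than the authors' processing pipeline:

```python
import numpy as np

def phase_average(fields):
    """Phase-averaged velocity field and cycle-to-cycle RMS fluctuation.

    fields: array of shape (n_cycles, ny, nx, 2) holding the instantaneous
    (u, v) field measured at the same phase of successive pump cycles.
    Returns the phase-averaged field and a per-point RMS of the deviation
    from it, a simple proxy for cycle-to-cycle variability.
    """
    fields = np.asarray(fields, dtype=float)
    mean = fields.mean(axis=0)        # phase-averaged (u, v) at each grid point
    fluct = fields - mean             # deviation of each cycle from the average
    rms = np.sqrt((fluct ** 2).sum(axis=-1).mean(axis=0))
    return mean, rms
```

High RMS values flag the strongly varying regions (e.g. during the filling phase) that the abstract links to turbulence and potential hemolysis.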
Abstract:
While multimedia data, image data in particular, is an integral part of most websites and web documents, our quest for information is so far still restricted to text-based search. To explore the World Wide Web more effectively, especially its rich repository of truly multimedia information, we face a number of challenging problems. First, defining image semantics and similarity is ambiguous and highly subjective. Second, multimedia data can come from highly diversified sources, as a result of automatic image capture and generation processes. Finally, multimedia information exists in decentralised sources across the Web, making it difficult to use conventional content-based image retrieval (CBIR) techniques for effective and efficient search. In this special issue, we present a collection of five papers on visual and multimedia information management and retrieval, addressing some aspects of these challenges. These papers were selected from the conference proceedings (Kluwer Academic Publishers, ISBN: 1-4020-7060-8) of the Sixth IFIP 2.6 Working Conference on Visual Database Systems (VDB6), held in Brisbane, Australia, on 29–31 May 2002.
Abstract:
OBJECTIVE To analyze the Brazilian literature on body image and the theoretical and methodological advances that have been made. METHODS A detailed review of the Brazilian literature on body image was undertaken, selecting published articles, dissertations, and theses from the SciELO, SCOPUS, LILACS, and PubMed databases and the CAPES thesis database. Google Scholar was also used. There was no start date for the search, which used the following search terms: “body image” AND “Brazil” AND “scale(s)”; “body image” AND “Brazil” AND “questionnaire(s)”; “body image” AND “Brazil” AND “instrument(s)”; “body image” limited to Brazil; and “body image”. RESULTS The majority of the available measures were intended for use with college students, with half of them evaluating satisfaction/dissatisfaction with the body. Females and adolescents of both sexes were the most studied populations. There has been a significant increase in the number of available instruments. Nevertheless, numerous published studies have used non-validated instruments, with much confusion in the use of the appropriate terms (e.g., perception, dissatisfaction, distortion). CONCLUSIONS Much more is needed to understand body image within the Brazilian population, especially in terms of evaluating different age groups and diversifying the components/dimensions assessed. However, interest in this theme is increasing, and important steps have been taken in a short space of time.
Abstract:
Brain dopamine transporter imaging by Single Photon Emission Computed Tomography (SPECT) with 123I-FP-CIT (DaTScan™) has become an important tool in the diagnosis and evaluation of Parkinson syndromes. This diagnostic method allows the visualization of a portion of the striatum – where the healthy pattern resembles two symmetric commas – allowing evaluation of the presynaptic dopaminergic system, in which dopamine transporters are responsible for the reuptake of dopamine from the synaptic cleft into the nigrostriatal nerve terminals, where it is stored or degraded. In daily practice, assessment of DaTScan™ commonly relies on visual inspection alone for diagnosis. However, this process is complex and subjective, as it depends on the observer's experience, and it is associated with high intra- and inter-observer variability. Studies have shown that semiquantification can improve the diagnosis of Parkinson syndromes. Semiquantification requires image segmentation methods based on regions of interest (ROIs). ROIs are drawn over specific (striatum) and nonspecific (background) uptake areas, and specific binding ratios are then calculated. The low adoption of semiquantification in the diagnosis of Parkinson syndromes is related not only to the time it requires, but also to the need for a database of reference values adapted to the population concerned and to each department's examination protocol. Studies have concluded that this process increases the reproducibility of semiquantification. The aim of this investigation was to create and validate a database of healthy controls for dopamine transporter imaging with DaTScan™, named DBRV. The database was adapted to the protocol of the Nuclear Medicine Department and to the population of the Infanta Cristina Hospital in Badajoz, Spain.
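The specific binding ratio mentioned above is conventionally computed as (mean striatal counts − mean background counts) / mean background counts. A minimal sketch of that calculation, with mask-based ROI handling as our own assumption:

```python
import numpy as np

def specific_binding_ratio(image, striatum_mask, background_mask):
    """Specific binding ratio from mean ROI uptakes.

    image: 2-D array of reconstructed counts.
    striatum_mask / background_mask: boolean arrays selecting the
    specific (striatal) and nonspecific (background) ROIs.
    """
    image = np.asarray(image, dtype=float)
    striatal = image[striatum_mask].mean()      # mean specific uptake
    background = image[background_mask].mean()  # mean nonspecific uptake
    return (striatal - background) / background
```

Comparing the ratio against a healthy-control reference database, as the abstract proposes, is what turns this number into a diagnostic aid.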
Abstract:
Nowadays, information and communication technologies have become so widespread that most people spend a great deal of time using software for regular tasks, ranging from games and ordinary time and weather utilities to more sophisticated applications, such as retail or banking. This new way of life is supported by the Internet or by specific applications that have changed the image people had of using information and communication technologies. All over the world, the first cycle of studies in educational systems has also been addressed, with the justification that this encourages the development of children. Taking this into consideration, we designed and developed a visual explorer system for relational databases that can be used by everyone, from “7 to 77”, in an intuitive and easy way, producing immediate results – a new database querying experience. In this paper we present the main characteristics and features of this visual database explorer, showing how it works and how it can be used to execute the most common data manipulation operations over a database.
Abstract:
Diabetes is a rapidly increasing worldwide problem characterised by defective glucose metabolism that causes long-term dysfunction and failure of various organs. The most common complication of diabetes is diabetic retinopathy (DR), one of the primary causes of blindness and visual impairment in adults. The rapid increase of diabetes pushes the limits of current DR screening capabilities, for which digital imaging of the eye fundus (retinal imaging) and automatic or semi-automatic image analysis algorithms provide a potential solution. In this work, the use of colour in the detection of diabetic retinopathy is statistically studied using a supervised algorithm based on one-class classification and Gaussian mixture model estimation. The presented algorithm distinguishes a given diabetic lesion type from all other possible objects in eye fundus images by estimating only the probability density function of that lesion type. For training and ground truth estimation, the algorithm combines manual annotations from several experts, for which the best practices were experimentally selected. By assessing the algorithm's performance in experiments with colour space selection, illuminance and colour correction, and background class information, the use of colour in the detection of diabetic retinopathy was quantitatively evaluated. Another contribution of this work is a benchmarking framework for eye fundus image analysis algorithms, needed for the development of automatic DR detection algorithms. The benchmarking framework provides guidelines on how to construct a benchmarking database comprising true patient images, ground truth, and an evaluation protocol. The evaluation is based on standard receiver operating characteristic analysis and follows medical decision-making practice, providing protocols for image- and pixel-based evaluations.
During this work, two public medical image databases with ground truth were published: DIARETDB0 and DIARETDB1. The framework, the DR databases, and the final algorithm are made publicly available on the web to set baseline results for the automatic detection of diabetic retinopathy. Although deviating from the general context of the thesis, a simple and effective optic disc localisation method is also presented. Optic disc localisation is discussed because normal eye fundus structures are fundamental in the characterisation of DR.
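The one-class idea, modelling only the lesion-colour density and thresholding it, can be sketched with a single multivariate Gaussian in place of the full mixture model described in the abstract; both that simplification and all names here are ours:

```python
import numpy as np

def fit_gaussian(samples):
    """Fit one multivariate Gaussian to training colour samples
    (a one-component simplification of a Gaussian mixture model)."""
    samples = np.asarray(samples, dtype=float)
    mean = samples.mean(axis=0)
    # small diagonal term keeps the covariance invertible
    cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(samples.shape[1])
    return mean, cov

def log_density(x, mean, cov):
    """Log pdf of the fitted Gaussian at each row of x."""
    x = np.atleast_2d(x) - mean
    inv = np.linalg.inv(cov)
    d = mean.size
    quad = np.einsum('ij,jk,ik->i', x, inv, x)  # Mahalanobis terms
    return -0.5 * (quad + d * np.log(2 * np.pi) + np.log(np.linalg.det(cov)))

def is_lesion(x, mean, cov, threshold):
    """One-class decision: accept pixels whose log-density exceeds threshold."""
    return log_density(x, mean, cov) >= threshold
```

Because only lesion samples are modelled, everything else in the fundus image is rejected implicitly by the density threshold, which is the defining property of one-class classification.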