888 results for Database application, Biologia cellulare, Image retrieval


Relevance: 40.00%

Abstract:

This paper describes the open source framework MARVIN for rapid application development in the field of biomedical and clinical research. MARVIN applications consist of modules that can be plugged together to provide the functionality required for a specific experimental scenario. Application modules work on a common patient database that is used to store and organize medical data as well as derived data. MARVIN provides a flexible input/output system with support for many file formats, including DICOM, various 2D image formats, and surface mesh data. Furthermore, it implements an advanced visualization system and interfaces to a wide range of 3D tracking hardware. Since MARVIN uses only highly portable libraries, its applications run on Unix/Linux, Mac OS X, and Microsoft Windows.
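A minimal sketch of the plug-in idea described above, assuming a simple "modules share a common patient record" design; all class and method names are illustrative, not MARVIN's actual API:

```python
# Hypothetical sketch of MARVIN-style modules plugged into a pipeline.
# Names are invented for illustration only.

class Module:
    """Base class: each module reads from and writes to a shared record."""
    def process(self, patient_record: dict) -> dict:
        raise NotImplementedError

class DicomLoader(Module):
    def process(self, patient_record):
        # A real module would parse DICOM files with a DICOM library.
        patient_record["image"] = "<pixel data>"
        return patient_record

class MeshRenderer(Module):
    def process(self, patient_record):
        # Stand-in for the visualization system rendering derived data.
        patient_record["rendered"] = f"render({patient_record['image']})"
        return patient_record

def run_pipeline(modules, record):
    """Plug modules together; each works on the common patient record."""
    for module in modules:
        record = module.process(record)
    return record

print(run_pipeline([DicomLoader(), MeshRenderer()], {"patient_id": 42}))
```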

Relevance: 40.00%

Abstract:

Obesity is becoming an epidemic phenomenon in most developed countries. The fundamental cause of obesity and overweight is an energy imbalance between calories consumed and calories expended. It is essential to monitor everyday food intake for obesity prevention and management. Existing dietary assessment methods usually require manual recording and recall of food types and portions. The accuracy of the results largely relies on many uncertain factors, such as the user's memory, food knowledge, and portion estimation, and is therefore often compromised. Accurate and convenient dietary assessment methods are still lacking and are needed by both the general population and the research community. In this thesis, an automatic food intake assessment method using the cameras and inertial measurement units (IMUs) on smartphones was developed to help people foster a healthy lifestyle. With this method, users use their smartphones before and after a meal to capture images or videos of the meal. The smartphone then recognizes food items, calculates the volume of the food consumed, and provides the results to the user. The technical objective is to explore the feasibility of image-based food recognition and image-based volume estimation. This thesis comprises five publications that address four specific goals: (1) to develop a prototype system from existing methods in order to review those methods, identify their drawbacks, and explore the feasibility of developing novel methods; (2) building on the prototype system, to investigate new food classification methods that improve recognition accuracy to a field-application level; (3) to design indexing methods for large-scale image databases that facilitate the development of new food image recognition and retrieval algorithms; (4) to develop novel, convenient, and accurate food volume estimation methods using only smartphones with cameras and IMUs. A prototype system was implemented to review existing methods. An image feature detector and descriptor were developed, and a nearest-neighbor classifier was implemented to classify food items. A credit card marker method was introduced for metric-scale 3D reconstruction and volume calculation. To increase recognition accuracy, novel multi-view food recognition algorithms were developed to recognize regular-shaped food items. To further increase accuracy and make the algorithm applicable to arbitrary food items, new food features and new classifiers were designed. The efficiency of the algorithm was increased by developing a novel image indexing method for large-scale image databases. Finally, the volume calculation was enhanced by reducing the reliance on the marker and introducing IMUs. Sensor fusion techniques combining measurements from cameras and IMUs were explored to infer the metric scale of the 3D model as well as to reduce noise from these sensors.
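As a hedged illustration of the prototype's recognition stage, the sketch below implements a plain nearest-neighbor classifier over image feature vectors; the descriptors are random stand-ins, since the thesis's actual detector and descriptor are not reproduced here:

```python
# Minimal sketch of nearest-neighbour food classification over image
# feature vectors. Feature extraction is stubbed out with random
# descriptors; the labels and dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: one 64-d descriptor per labelled food image.
train_features = rng.normal(size=(300, 64))
train_labels = rng.integers(0, 10, size=300)   # 10 food categories

def classify(query: np.ndarray) -> int:
    """Assign the label of the closest training descriptor (1-NN)."""
    dists = np.linalg.norm(train_features - query, axis=1)
    return int(train_labels[np.argmin(dists)])

print(classify(rng.normal(size=64)))
```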

Relevance: 40.00%

Abstract:

All optical systems that operate in or through the atmosphere suffer from turbulence-induced image blur. Both military and civilian surveillance, gun-sighting, and target identification systems are interested in terrestrial imaging over very long horizontal paths, but atmospheric turbulence can blur the resulting images beyond usefulness. My dissertation explores the performance of a multi-frame blind deconvolution technique applied under anisoplanatic conditions for both Gaussian and Poisson noise model assumptions. The technique is evaluated for use in reconstructing images of scenes corrupted by turbulence in long horizontal-path imaging scenarios and is compared to other speckle imaging techniques. Performance is evaluated via the reconstruction of a common object from three sets of simulated turbulence-degraded imagery representing low, moderate, and severe turbulence conditions. Each set consists of 1000 simulated turbulence-degraded images. The MSE performance of the estimator is evaluated as a function of the number of images and the number of Zernike polynomial terms used to characterize the point spread function. I compare the mean-square-error (MSE) performance of speckle imaging methods and a maximum-likelihood, multi-frame blind deconvolution (MFBD) method applied to long-path horizontal imaging scenarios. Both methods are used to reconstruct a scene from simulated imagery featuring anisoplanatic turbulence-induced aberrations. This comparison is performed over three sets of 1000 simulated images each, for low, moderate, and severe turbulence-induced image degradation. The comparison shows that speckle-imaging techniques reduce the MSE by 46 percent, 42 percent, and 47 percent on average for the low, moderate, and severe cases, respectively, using 15 input frames under daytime conditions and moderate frame rates. Similarly, the MFBD method provides 40 percent, 29 percent, and 36 percent improvements in MSE on average under the same conditions. The comparison is repeated under low-light conditions (less than 100 photons per pixel), where improvements of 39 percent, 29 percent, and 27 percent are achieved using speckle imaging methods with 25 input frames, and of 38 percent, 34 percent, and 33 percent, respectively, for the MFBD method with 150 input frames. The MFBD estimator is applied to three sets of field data and the results are presented. Finally, a combined Bispectrum-MFBD hybrid estimator is proposed and investigated. This technique consistently provides a lower MSE and a smaller variance in the estimate under all three simulated turbulence conditions.
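The figure of merit throughout is the mean square error between a reconstruction and the true scene; a minimal sketch of how the MSE and the percent-improvement numbers quoted above are computed (the image arrays are synthetic stand-ins, not the dissertation's simulated imagery):

```python
# Sketch of the MSE metric and the percent-improvement figure used in
# the comparison; arrays here are synthetic stand-ins.
import numpy as np

def mse(estimate: np.ndarray, truth: np.ndarray) -> float:
    return float(np.mean((estimate - truth) ** 2))

rng = np.random.default_rng(1)
truth = rng.random((128, 128))
degraded = truth + 0.2 * rng.normal(size=truth.shape)   # raw frame
restored = truth + 0.1 * rng.normal(size=truth.shape)   # after reconstruction

improvement = 100.0 * (1.0 - mse(restored, truth) / mse(degraded, truth))
print(f"MSE reduced by {improvement:.0f} percent")      # roughly 75 here
```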

Relevance: 40.00%

Abstract:

Background: Statistical shape models are widely used in biomedical research. They are routinely implemented for automatic image segmentation or object identification in medical images. In these fields, however, the acquisition of the large training datasets required to develop these models is usually a time-consuming process. Even after this effort, the collections of datasets are often lost or mishandled, resulting in duplication of work. Objective: To solve these problems, the Virtual Skeleton Database (VSD) is proposed as a centralized storage system where the data necessary to build statistical shape models can be stored and shared. Methods: The VSD provides an online repository system tailored to the needs of the medical research community. The processing of the most common image file types, a statistical shape model framework, and an ontology-based search provide the generic tools to store, exchange, and retrieve digital medical datasets. The hosted data are accessible to the community, catalyzing collaborative research. Results: To illustrate the need for an online repository for medical research, three exemplary projects of the VSD are presented: (1) an international collaboration to achieve improvements in cochlear surgery and implant optimization, (2) a population-based analysis of femoral fracture risk between genders, and (3) an online application developed for the evaluation and comparison of brain tumor segmentations. Conclusions: The VSD is a novel system for scientific collaboration within the medical image community, with a data-centric concept and a semantically driven search option for anatomical structures. The repository has proven to be a useful tool for collaborative model building, as a resource for biomechanical population studies, and for enhancing segmentation algorithms.
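As a sketch of the kind of model the VSD's training datasets feed, the following builds the core of a statistical shape model, a PCA over corresponding landmark sets, on synthetic shapes; this is illustrative only, not the VSD's own shape model framework:

```python
# Hedged sketch of a statistical shape model: PCA over corresponding
# landmark sets pooled from a training database (synthetic shapes here).
import numpy as np

rng = np.random.default_rng(2)
n_shapes, n_landmarks = 20, 50
# Each training shape: 50 3-D landmarks, flattened to one row.
shapes = rng.normal(size=(n_shapes, n_landmarks * 3))

mean_shape = shapes.mean(axis=0)
_, s, vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)

def synthesize(coeffs: np.ndarray) -> np.ndarray:
    """New shape = mean + weighted sum of principal modes of variation."""
    return mean_shape + coeffs @ vt[: len(coeffs)]

new_shape = synthesize(np.array([1.5, -0.5]))   # first two modes
print(new_shape.reshape(n_landmarks, 3)[:3])    # first three landmarks
```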

Relevance: 40.00%

Abstract:

Extensive experience with the analysis of human prophase chromosomes and studies into the complexity of prophase GTG-banding patterns have suggested that at least some prophase chromosomal segments can be accurately identified and characterized independently of the morphology of the chromosome as a whole. In this dissertation, the feasibility of identifying and analyzing specified prophase chromosome segments was therefore investigated as an alternative to prophase chromosome analysis based on whole-chromosome recognition. Through the use of prophase idiograms at the 850-band stage (FRANCKE, 1981) and a comparison system based on the calculation of cross-correlation coefficients between idiogram profiles, we have demonstrated that it is possible to divide the 24 human prophase idiograms into a set of 94 unique band sequences. Each unique band sequence has a banding pattern that is recognizable and distinct from any other non-homologous chromosome portion. Using chromosomes 11p and 16 through 22 to demonstrate unique band sequence integrity at the chromosome level, we found that prophase chromosome banding pattern variation can be compensated for, and that a set of unique band sequences very similar to those at the idiogram level can be identified on actual chromosomes. The use of a unique band sequence approach in prophase chromosome analysis is expected to increase efficiency and sensitivity through more effective use of the available banding information. This approach is discussed both at the routine level, for use by cytogeneticists, and at the image processing level, as part of a semi-automated approach to prophase chromosome analysis.
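A minimal sketch of the comparison idea, matching a band sequence against an idiogram profile by normalized cross-correlation; the 1-D profiles below are synthetic stand-ins for 850-band-stage idiogram profiles:

```python
# Sketch of matching a band-sequence profile against an idiogram profile
# with normalized cross-correlation (synthetic 1-D banding profiles).
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

rng = np.random.default_rng(3)
idiogram = rng.random(200)     # stand-in densitometric idiogram profile
segment = idiogram[60:100]     # a candidate unique band sequence

# Slide the segment along the idiogram and keep the best-matching offset.
scores = [ncc(segment, idiogram[i:i + len(segment)])
          for i in range(len(idiogram) - len(segment) + 1)]
print("best match at offset", int(np.argmax(scores)))   # -> 60
```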

Relevance: 40.00%

Abstract:

X-ray imaging is one of the most commonly used medical imaging modalities. Although X-ray radiographs provide important clinical information for diagnosis, planning, and post-operative follow-up, their interpretation is challenging due to the 2D projection characteristics and an unknown magnification factor, which constrains the full benefit of X-ray imaging. To overcome these drawbacks, we propose an easy-to-use X-ray calibration object and an optimization method to robustly find correspondences between the 3D fiducials of the calibration object and their 2D projections. In this work we present all the details of this concept. Moreover, we demonstrate the potential of using such a method to precisely extract information from calibrated X-ray radiographs for two different orthopedic applications: post-operative measurement of acetabular cup implant orientation and 3D vertebral body displacement measurement during preoperative traction tests. In the first application, we achieved a clinically acceptable accuracy of below 1° for both anteversion and inclination angles, while in the second application an average displacement of 8.06±3.71 mm was measured. The results of both applications indicate the importance of using X-ray calibration in the clinical routine.
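A hedged sketch of the forward model underlying such a calibration: a pinhole projection that maps the calibration object's 3D fiducials onto the detector plane, which is what an optimization scores 3D-2D correspondences against (geometry and numbers are illustrative, not the paper's actual setup):

```python
# Sketch of a pinhole forward projection for scoring 3D-2D fiducial
# correspondences; distances and coordinates are illustrative.
import numpy as np

def project(points_3d: np.ndarray, focal: float) -> np.ndarray:
    """Perspective projection onto the detector plane at distance `focal`."""
    return focal * points_3d[:, :2] / points_3d[:, 2:3]

fiducials = np.array([[10.0, 5.0, 1000.0],
                      [-20.0, 15.0, 1100.0]])   # 3-D positions in mm
detected = project(fiducials, focal=1200.0)     # predicted 2-D positions
print(detected)
```

An optimizer would compare these predicted positions against the fiducials detected in the radiograph and minimize the residual, recovering the magnification factor along the way.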

Relevance: 40.00%

Abstract:

In attempts to elucidate the underlying mechanisms of spinal injuries and spinal deformities, several experimental and numerical studies have been conducted to understand the biomechanical behavior of the spine. Numerical biomechanical studies, however, suffer from uncertainties associated with hard- and soft-tissue anatomies. Currently, these parameters are identified manually on each mesh model prior to simulation. The determination of soft connective tissues on finite element meshes can be a tedious procedure, which limits the number of models used in numerical studies to a few instances. To address these limitations, an image-based method for the automatic morphing of soft connective tissues is proposed. Results showed that the proposed method is capable of accurately determining the spatial locations of predetermined bony landmarks. The method can be used to automatically generate patient-specific models, which may be helpful for designing studies involving a large number of instances and for understanding the mechanical behavior of biomechanical structures across a given population.
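As an illustration of the landmark-transfer idea, the sketch below maps template landmarks into a patient's space with a plain affine transform; the paper's actual method is image-based morphing, so this is a simplified stand-in:

```python
# Hedged sketch of transferring predefined landmarks from a template to
# a patient model via a registration transform (a plain affine map here;
# the transform parameters are stand-ins, not registration output).
import numpy as np

def apply_transform(points: np.ndarray, A: np.ndarray, t: np.ndarray):
    """Map template landmarks into patient space: x' = A x + t."""
    return points @ A.T + t

template_landmarks = np.array([[0.0, 0.0, 0.0],
                               [10.0, 0.0, 0.0]])   # template positions
A = np.eye(3) * 1.05            # scaling found by registration (stand-in)
t = np.array([1.0, -2.0, 0.5])  # translation (stand-in)
print(apply_transform(template_landmarks, A, t))
```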

Relevance: 40.00%

Abstract:

The metabolic network of a cell represents the catabolic and anabolic reactions that interconvert small molecules (metabolites) through the activity of enzymes, transporters and non-catalyzed chemical reactions. Our understanding of individual metabolic networks is increasing as we learn more about the enzymes that are active in particular cells under particular conditions and as technologies advance to allow detailed measurements of the cellular metabolome. Metabolic network databases are of increasing importance in allowing us to contextualise data sets emerging from transcriptomic, proteomic and metabolomic experiments. Here we present a dynamic database, TrypanoCyc (http://www.metexplore.fr/trypanocyc/), which describes the generic and condition-specific metabolic network of Trypanosoma brucei, a parasitic protozoan responsible for human and animal African trypanosomiasis. In addition to enabling navigation through the BioCyc-based TrypanoCyc interface, we have also implemented a network-based representation of the information through MetExplore, yielding a novel environment in which to visualise the metabolism of this important parasite.
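A toy sketch of the data structure such a database exposes, a bipartite network of metabolites and reactions, with a simple query over it (the reactions are textbook examples, not TrypanoCyc's actual content or schema):

```python
# Illustrative sketch of a metabolic network as a bipartite mapping of
# reactions to substrate and product metabolites (toy glycolysis steps).
reactions = {
    "hexokinase": {"in": ["glucose", "ATP"], "out": ["G6P", "ADP"]},
    "PGI":        {"in": ["G6P"],            "out": ["F6P"]},
}

def producers(metabolite: str):
    """All reactions whose products include the given metabolite."""
    return [r for r, spec in reactions.items() if metabolite in spec["out"]]

print(producers("G6P"))   # -> ['hexokinase']
```

A network-based visualisation such as the one MetExplore provides is essentially a rendering of this graph, with condition-specific subsets of reactions switched on or off.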

Relevance: 40.00%

Abstract:

A new topographic database for King George Island, one of the most visited areas in Antarctica, is presented. Data from differential GPS surveys, acquired during the summers of 1997/98 and 1999/2000, were combined with up-to-date coastlines from a SPOT satellite image mosaic and topographic information from maps as well as from the Antarctic Digital Database. A digital terrain model (DTM) was generated using the ARC/INFO GIS. From contour lines derived from the DTM and the satellite image mosaic, a satellite image map was assembled. Extensive information on data accuracy and the database, as well as on the criteria applied to select place names, is given in the multilingual map. A lack of accurate topographic information in the eastern part of the island was identified; it was concluded that additional topographic surveying or radar interferometry should be conducted to improve the data quality in this area. Three case studies demonstrate potential applications of the improved topographic database. The first two comprise the verification of glacier velocities and the study of glacier retreat from the various input datasets, as well as the use of the DTM for climatological modelling. The last case study focuses on the use of the new digital database as a basic GIS (Geographic Information System) layer for environmental monitoring and management on King George Island.
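A small sketch of the gridding step behind a DTM, here inverse-distance weighting of scattered survey heights; the actual database was built with the ARC/INFO GIS, so this is only a stand-in for that workflow:

```python
# Sketch of interpolating scattered GPS survey heights onto a grid cell
# with inverse-distance weighting (synthetic survey points).
import numpy as np

rng = np.random.default_rng(4)
xy = rng.uniform(0, 10, size=(100, 2))   # survey point coordinates (km)
z = 50 + 20 * np.sin(xy[:, 0])           # surveyed elevations (m)

def idw(px: float, py: float, power: float = 2.0) -> float:
    """Elevation at (px, py) as a distance-weighted mean of survey points."""
    d = np.hypot(xy[:, 0] - px, xy[:, 1] - py)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return float(np.sum(w * z) / np.sum(w))

print(f"interpolated elevation at (5, 5): {idw(5.0, 5.0):.1f} m")
```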

Relevance: 40.00%

Abstract:

At present, there is a lack of knowledge on the interannual climate-related variability of zooplankton communities of the tropical Atlantic, central Mediterranean Sea, Caspian Sea, and Aral Sea, due to the absence of appropriate databases. In the mid-latitudes, the North Atlantic Oscillation (NAO) is the dominant mode of atmospheric fluctuation over eastern North America, the northern Atlantic Ocean, and Europe. Therefore, one of the issues to be addressed through data synthesis is the evaluation of interannual patterns in species abundance and species diversity over these regions with regard to the NAO. The database has been used to investigate the ecological role of the NAO in interannual variations of mesozooplankton abundance and biomass along the zonal array of the NAO's influence. The basic approach to the proposed research involved: (1) developing cooperation between experts and data holders in Ukraine, Russia, Kazakhstan, Azerbaijan, the UK, and the USA to rescue and compile the oceanographic datasets and release them on CD-ROM; (2) organizing and compiling a database based on FSU cruises to the above regions; and (3) analyzing the basin-scale interannual variability of zooplankton species abundance, biomass, and species diversity.
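A minimal sketch of the kind of analysis the compiled database enables, correlating interannual abundance anomalies with the NAO index; all numbers below are synthetic:

```python
# Sketch of correlating interannual zooplankton abundance with the
# winter NAO index (synthetic 30-year series, for illustration only).
import numpy as np

rng = np.random.default_rng(5)
nao = rng.normal(size=30)                               # NAO index by year
abundance = 0.6 * nao + rng.normal(scale=0.8, size=30)  # toy response

r = np.corrcoef(nao, abundance)[0, 1]
print(f"correlation of abundance with NAO: r = {r:.2f}")
```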

Relevance: 40.00%

Abstract:

Vast portions of Arctic and sub-Arctic Siberia, Alaska, and the Yukon Territory are covered by ice-rich silty to sandy deposits containing large ice wedges, resulting from syngenetic sedimentation and freezing. Accompanied by wedge-ice growth in polygonal landscapes, the sedimentation process was driven by cold continental climatic and environmental conditions in unglaciated regions during the late Pleistocene, inducing the accumulation of the unique Yedoma deposits, up to >50 meters thick. Because organic material was rapidly incorporated into syngenetic permafrost during its formation, Yedoma deposits include well-preserved organic matter. Ice-rich deposits like Yedoma are especially prone to degradation triggered by climate change or human activity. When Yedoma deposits degrade, large amounts of sequestered organic carbon as well as other nutrients are released and become part of active biogeochemical cycling. This could be of global significance for future climate warming, as increased permafrost thaw is likely to lead to a positive feedback through enhanced greenhouse gas fluxes. A detailed assessment of the current Yedoma deposit coverage and volume is therefore important for estimating its potential response to future climate change. We synthesized a map of Yedoma coverage and thickness, which will provide critical data for further research. In particular, this preliminary Yedoma map is a significant step toward understanding the spatial heterogeneity of Yedoma deposits and their regional coverage. Further applications lie in reconstructing paleo-environmental dynamics and past ecosystems such as the mammoth steppe-tundra, and in mapping ground ice distribution, including future thermokarst vulnerability. Moreover, the map will be a crucial improvement of the data basis needed to refine the present-day Yedoma permafrost organic carbon inventory, which is assumed to be between 83±12 (Strauss et al., 2013, doi:10.1002/2013GL058088) and 129±30 (Walter Anthony et al., 2014, doi:10.1038/nature13560) gigatonnes (Gt) of organic carbon in perennially frozen archives. Hence, we synthesize here data on the circum-Arctic and sub-Arctic distribution and thickness of Yedoma to compile a preliminary circum-polar Yedoma map. For compiling this map, we used (1) maps of previous Yedoma coverage estimates and (2) the digitized areas from Grosse et al. (2013), as well as areas of potential Yedoma distribution extracted from additional surface geological and Quaternary geological maps (1.: 1:500,000: Q-51-V,G; P-51-A,B; P-52-A,B; Q-52-V,G; P-52-V,G; Q-51-A,B; R-51-V,G; R-52-V,G; R-52-A,B; 2.: 1:1,000,000: P-50-51; P-52-53; P-58-59; Q-42-43; Q-44-45; Q-50-51; Q-52-53; Q-54-55; Q-56-57; Q-58-59; Q-60-1; R-(40)-42; R-43-(45); R-(45)-47; R-48-(50); R-51; R-53-(55); R-(55)-57; R-58-(60); S-44-46; S-47-49; S-50-52; S-53-55; 3.: 1:2,500,000: Quaternary map of the territory of the Russian Federation; 4.: Alaska Permafrost Map). The digitization was done using GIS techniques (ArcGIS) and vectorization of raster images (Adobe Photoshop and Illustrator). Data on Yedoma thickness were obtained from boreholes and exposures reported in the scientific literature. The map and database are still preliminary and will have to undergo a technical and scientific vetting and review process.
In their current form, the data include a range of attributes for Yedoma area polygons based on lithological and stratigraphical information from the original source maps, as well as a confidence level for our classification of an area as Yedoma (three stages: confirmed, likely, or uncertain). In its current version, our database includes more than 365 boreholes and exposures and more than 2000 digitized Yedoma areas, and we expect it to continue to grow. At this preliminary stage, we estimate the Northern Hemisphere Yedoma deposit area to cover approximately 625,000 km². We estimate that 53% of the total Yedoma area today is located in the tundra zone and 47% in the taiga zone. Separated from west to east, 29% of the Yedoma area is found in North America and 71% in North Asia; the latter comprises 9% in West Siberia, 11% in Central Siberia, 44% in East Siberia, and 7% in Far East Russia. Adding the recent maximum Yedoma region (including all Yedoma uplands, thermokarst lakes and basins, and river valleys) of 1.4 million km² (Strauss et al., 2013, doi:10.1002/2013GL058088), and postulating that Yedoma occupied up to 80% of the adjacent formerly exposed and now flooded Beringia shelves (1.9 million km², down to 125 m below modern sea level, between 105°E - 128°W and >68°N), we assume that the Last Glacial Maximum Yedoma region likely covered more than 3 million km² of Beringia. Acknowledgements: This project is part of the Action Group "The Yedoma Region: A Synthesis of Circum-Arctic Distribution and Thickness" (funded by the International Permafrost Association (IPA) to J. Strauss) and is embedded in the Permafrost Carbon Network (working group Yedoma Carbon Stocks). We acknowledge the support of the European Research Council (Starting Grant #338335), the German Federal Ministry of Education and Research (Grant 01DM12011 and "CarboPerm" (03G0836A)), the Initiative and Networking Fund of the Helmholtz Association (#ERC-0013), and the German Federal Environment Agency (UBA, project UFOPLAN FKZ 3712 41 106).
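A quick bookkeeping sketch of the regional shares quoted above; the figures come directly from the text, and the code only multiplies them out:

```python
# Regional Yedoma area shares from the text, converted to km²;
# values are quoted figures, not independent estimates.
total_km2 = 625_000
shares = {"North America": 0.29, "West Siberia": 0.09,
          "Central Siberia": 0.11, "East Siberia": 0.44,
          "Far East Russia": 0.07}

for region, frac in shares.items():
    print(f"{region}: {frac * total_km2:,.0f} km²")

north_asia = sum(f for r, f in shares.items() if r != "North America")
print(f"North Asia share: {north_asia:.0%}")   # -> 71%
```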

Relevance: 40.00%

Abstract:

This paper describes the participation of DAEDALUS in the ImageCLEF 2011 Medical Retrieval task. We focused on multimodal (or mixed) experiments that combine textual and visual retrieval. The main objective of our research was to evaluate the effect on the medical retrieval process of an extended corpus annotated with the image type, associated with both the image itself and its textual description. For this purpose, an image classifier was developed to tag each document with its class (1st level of the hierarchy: Radiology, Microscopy, Photograph, Graphic, Other) and subclass (2nd level: AN, CT, MR, etc.). For the text-based experiments, several runs using different semantic expansion techniques were performed. For the visual-based retrieval, the different runs are defined by the corpus used in the retrieval process and by the strategy for obtaining the class and/or subclass. The best results were achieved in runs that make use of the image subclass based on the classification of the sample images. Although different multimodal strategies were submitted, none of them proved able to provide results at least comparable to those achieved by textual retrieval alone. We believe that we have not yet found an adequate metric for assessing the relevance of the results provided by the visual and textual processes.
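A hedged sketch of a mixed run's score fusion, a simple linear combination of textual and visual retrieval scores per document; the weighting scheme is illustrative and not necessarily the one used in the submitted runs:

```python
# Sketch of late fusion of textual and visual retrieval scores;
# document IDs, scores, and the weight alpha are illustrative.
def fuse(text_scores: dict, visual_scores: dict, alpha: float = 0.7):
    """score = alpha * textual + (1 - alpha) * visual, per document."""
    docs = set(text_scores) | set(visual_scores)
    fused = {d: alpha * text_scores.get(d, 0.0)
                + (1 - alpha) * visual_scores.get(d, 0.0) for d in docs}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

print(fuse({"doc1": 0.9, "doc2": 0.4}, {"doc2": 0.8, "doc3": 0.5}))
```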

Relevance: 40.00%

Abstract:

This paper presents a study on the effect of blurred images in hand biometrics. Blurred images simulate out-of-focus effects in hand image acquisition, a common consequence of unconstrained, contact-less, and platform-free hand biometrics on mobile devices. The proposed biometric system performs hand image segmentation based on multiscale aggregation, a segmentation method invariant to changes such as noise or blurriness, together with an innovative feature extraction and template creation oriented toward performance that is invariant to blurring effects. The results highlight that the proposed system is invariant to low degrees of blurriness, but requires image quality control to detect and correct images with a high degree of blurriness. The evaluation considered a synthetic database created from a publicly available database of 120 individuals. In addition, several biometric techniques could benefit from the approach proposed in this paper, since blurriness is a very common effect in biometric techniques involving image acquisition.
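As one plausible realization of the suggested image quality control, the sketch below scores blurriness with the variance-of-the-Laplacian heuristic; this is a standard measure, not the paper's own method:

```python
# Sketch of a blur gate: images whose Laplacian variance falls below a
# threshold would be flagged for correction (synthetic test images).
import numpy as np

def laplacian_variance(img: np.ndarray) -> float:
    """Variance of a discrete Laplacian; low values indicate blur."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return float(lap.var())

rng = np.random.default_rng(6)
sharp = rng.random((64, 64))
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)) / 3  # crude blur

print(laplacian_variance(sharp) > laplacian_variance(blurred))  # True
```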