919 results for Image-Based Visual Hull
Abstract:
Microspectrophotometric examination of the retina of a procellariiform marine bird, the wedge-tailed shearwater Puffinus pacificus, revealed the presence of five different types of vitamin A(1)-based visual pigment in seven different types of photoreceptor. A single class of rod contained a medium-wavelength sensitive visual pigment with a wavelength of maximum absorbance (lambda(max)) at 502 nm. Four different types of single cone contained visual pigments maximally sensitive in either the violet (VS, lambda(max) 406 nm), short (SWS, lambda(max) 450 nm), medium (MWS, lambda(max) 503 nm) or long (LWS, lambda(max) 566 nm) spectral ranges. In the peripheral retina, the SWS, MWS and LWS single cones contained pigmented oil droplets in their inner segments with cut-off wavelengths (lambda(cut)) at 445 (C-type), 506 (Y-type) and 562 nm (R-type), respectively. The VS visual pigment was paired with a transparent (T-type) oil droplet that displayed no significant absorption above at least 370 nm. Both the principal and accessory members of the double cone pair contained the same 566 nm lambda(max) visual pigment as the LWS single cones, but only the principal member contained an oil droplet, which had a lambda(cut) at 413 nm. The retina had a horizontal band or 'visual streak' of increased photoreceptor density running across the retina approximately 1.5 mm dorsal to the top of the pecten. Cones in the centre of the horizontal streak were smaller and had oil droplets that were either transparent/colourless or much less pigmented than at the periphery. It is proposed that the reduction in cone oil droplet pigmentation in retinal areas associated with high visual acuity is an adaptation to compensate for the reduced photon capture ability of the narrower photoreceptors found there. Measurements of the spectral transmittance of the ocular media reveal that wavelengths down to at least 300 nm would be transmitted to the retina.
Abstract:
Government agencies responsible for riparian environments are assessing the combined utility of field survey and remote sensing for mapping and monitoring indicators of riparian zone condition. The objective of this work was to compare the Tropical Rapid Appraisal of Riparian Condition (TRARC) method to a satellite image-based approach. TRARC was developed for rapid assessment of the environmental condition of savanna riparian zones. The comparison assessed mapping accuracy, representativeness of TRARC assessment, cost-effectiveness, and suitability for multi-temporal analysis. Two multi-spectral QuickBird images captured in 2004 and 2005, together with coincident field data covering sections of the Daly River in the Northern Territory, Australia, were used in this work. Both field and image data were processed to map riparian health indicators (RHIs) including percentage canopy cover, organic litter, canopy continuity, stream bank stability, and extent of tree clearing. Spectral vegetation indices, image segmentation and supervised classification were used to produce RHI maps. QuickBird image data were used to examine whether the spatial distribution of TRARC transects provided a representative sample of ground-based RHI measurements. Results showed that TRARC transects were required to cover at least 3% of the study area to obtain a representative sample. The mapping accuracy and costs of the image-based approach were compared to those of the ground-based TRARC approach. Results showed that TRARC was more cost-effective at smaller scales (1-100 km), while image-based assessment becomes more feasible at regional scales (100-1000 km). Finally, the ability to use both the image- and field-based approaches for multi-temporal analysis of RHIs was assessed. Change detection analysis demonstrated that image data can provide detailed information on gradual change, while the TRARC method was only able to identify coarser-scale changes. In conclusion, results from the two methods were considered to complement each other if used at appropriate spatial scales.
Abstract:
This paper addresses the problem of obtaining complete, detailed reconstructions of textureless shiny objects. We present an algorithm which uses silhouettes of the object, as well as images obtained under changing illumination conditions. In contrast with previous photometric stereo techniques, ours is not limited to a single viewpoint but produces accurate reconstructions in full 3D. A number of images of the object are obtained from multiple viewpoints, under varying lighting conditions. Starting from the silhouettes, the algorithm recovers camera motion and constructs the object's visual hull. This is then used to recover the illumination and initialize a multiview photometric stereo scheme to obtain a closed surface reconstruction. There are two main contributions in this paper: First, we describe a robust technique to estimate light directions and intensities and, second, we introduce a novel formulation of photometric stereo which combines multiple viewpoints and, hence, allows closed surface reconstructions. The algorithm has been implemented as a practical model acquisition system. Here, a quantitative evaluation of the algorithm on synthetic data is presented together with complete reconstructions of challenging real objects. Finally, we show experimentally how, even in the case of highly textured objects, this technique can greatly improve on correspondence-based multiview stereo results.
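The multiview formulation described above builds on the classic single-view photometric stereo step, which recovers per-pixel surface orientation and albedo from intensities observed under known directional lights. The following is a minimal sketch of that building block only, not the paper's full algorithm; the light directions, albedo and normal below are invented for illustration, and a Lambertian reflectance model is assumed.

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Recover albedo-scaled surface normals from per-pixel intensities
    observed under known directional lights (Lambertian model).

    intensities : (k, n) array, k images, n pixels
    light_dirs  : (k, 3) array of unit light directions
    Returns a (3, n) array g = albedo * normal per pixel.
    """
    # Lambertian model: I = L @ g; solve for g in least squares
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    return g

# Synthetic check: a single pixel with a known normal and albedo
n_true = np.array([0.0, 0.0, 1.0])
albedo = 0.8
L = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8]])   # three unit light directions
I = albedo * L @ n_true           # noiseless renderings
g = photometric_stereo(I[:, None], L)[:, 0]
print(np.linalg.norm(g))          # recovered albedo, close to 0.8
print(g / np.linalg.norm(g))      # recovered normal, close to [0, 0, 1]
```

With three or more non-coplanar lights the system is fully determined, which is why the paper's light estimation step matters: errors in `light_dirs` propagate directly into the recovered normals.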
Abstract:
Within the framework of heritage preservation, 3D scanning and modeling for heritage documentation has increased significantly in recent years, mainly due to the evolution of laser- and image-based techniques, modeling software, powerful computers and virtual reality. 3D laser acquisition constitutes a real development opportunity for 3D modeling previously based on theoretical data. Representing the object's information relies on knowledge of its historical and theoretical context in order to reconstitute its previous states a posteriori. This project proposes an approach to data extraction based on architectural knowledge and laser survey measurements, together leading to 3D reconstruction. The Khmer objects studied are exhibited at the Guimet Museum in Paris. This digital modeling meets the need for exploitable models for simulation projects, prototyping, exhibitions and the promotion of cultural tourism, and particularly for archiving against possible disaster and as a supporting tool in formulating the virtual museum concept.
Classification of Paintings by Artist, Movement, and Indoor Setting Using MPEG-7 Descriptor Features
Abstract:
ACM Computing Classification System (1998): I.4.9, I.4.10.
Abstract:
Photometric Stereo is a powerful image-based 3D reconstruction technique that has recently been used to obtain very high quality reconstructions. However, in its classic form, Photometric Stereo suffers from two main limitations: Firstly, one needs to obtain images of the 3D scene under multiple different illuminations. As a result the 3D scene needs to remain static during illumination changes, which prohibits the reconstruction of deforming objects. Secondly, the images obtained must be from a single viewpoint. This leads to depth-map based 2.5D reconstructions instead of full 3D surfaces. The aim of this Chapter is to show how these limitations can be alleviated, leading to the derivation of two practical 3D acquisition systems: the first, based on the powerful Coloured Light Photometric Stereo method, can be used to reconstruct moving objects such as cloth or human faces; the second permits the complete 3D reconstruction of challenging objects such as porcelain vases. In addition to algorithmic details, the Chapter pays attention to practical issues such as setup calibration and the detection and correction of self and cast shadows. We provide several evaluation experiments as well as reconstruction results. © 2010 Springer-Verlag Berlin Heidelberg.
Abstract:
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].
Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.
As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus a responsibility to optimize the radiation dose of CT examinations. The key to dose optimization is to determine the minimum radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would benefit significantly from effective metrics that characterize the radiation dose and image quality of a CT exam. Moreover, if accurate predictions of radiation dose and image quality were possible before the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models that prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement these theoretical models in clinical practice by developing an organ-based dose monitoring system and image-based noise addition software for protocol optimization.
More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current conditions. The study effectively modeled anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distributions. The dependence of organ dose coefficients on patient size and scanner model was further evaluated. Distinct from prior work, these studies used the largest number of patient models to date, spanning representative ages, weight percentiles, and body mass index (BMI) ranges.
With effective quantification of organ dose under constant tube current conditions, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model was validated by comparing the predicted organ dose with dose estimates from Monte Carlo simulations in which the TCM function was explicitly modeled.
Chapter 5 aims to implement the organ dose-estimation framework in clinical practice by developing an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations: the patient's major body landmark information was extracted from the scout image to match each clinical patient against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines the method developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed quantum noise in clinical images and further validated the correspondence between phantom-based measurements and expected clinical image quality as a function of patient size and scanner attributes.
Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
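The image-based noise addition in step (1) can be sketched under the simplified assumption that quantum noise variance scales inversely with dose, so simulating a scan at a fraction of the original dose amounts to injecting additional zero-mean noise. This is a generic illustration, not the thesis software (which would also account for the local noise power spectrum and reconstruction kernel); the function name and all numbers are invented.

```python
import numpy as np

def simulate_reduced_dose(image, noise_std_full, dose_fraction, seed=None):
    """Synthesize a reduced-dose CT image by injecting Gaussian noise.

    Under a quantum-noise model, variance scales roughly as 1/dose, so an
    image at `dose_fraction` of the original dose needs extra noise with
    variance noise_std_full**2 * (1/dose_fraction - 1) on top of the
    noise already present in the full-dose image.
    """
    rng = np.random.default_rng(seed)
    extra_std = noise_std_full * np.sqrt(1.0 / dose_fraction - 1.0)
    return image + rng.normal(0.0, extra_std, size=image.shape)

# Example: simulate a half-dose scan of a uniform phantom image
full = np.full((64, 64), 100.0)   # noiseless uniform region, HU-like values
half = simulate_reduced_dose(full, noise_std_full=10.0, dose_fraction=0.5,
                             seed=0)
print(half.std())                 # added noise std, close to 10.0
```

Halving the dose with a full-dose noise level of 10 HU requires extra noise of std 10 * sqrt(2 - 1) = 10 HU, which is what the measured standard deviation reflects here.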
Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.
Abstract:
Background - Image blurring in Full Field Digital Mammography (FFDM) is reported to be a problem within many UK breast screening units, resulting in a significant proportion of technical repeats/recalls. Our study investigates monitors of differing pixel resolution, and whether there is a difference in blurring detection between a 2.3 MP technical review monitor and a 5 MP standard reporting monitor. Methods - Simulation software was created to induce different magnitudes of blur on 20 artifact-free FFDM screening images. 120 blurred and non-blurred images were randomized and displayed on the 2.3 and 5 MP monitors; they were reviewed by 28 trained observers. Monitors were calibrated to the DICOM Grayscale Standard Display Function. A t-test was used to determine whether significant differences exist in blurring detection between the monitors. Results - The blurring detection rate on the 2.3 MP monitor for 0.2, 0.4, 0.6, 0.8 and 1 mm blur was 46, 59, 66, 77 and 78% respectively, and on the 5 MP monitor 44, 70, 83, 96 and 98%. All the non-motion images were identified correctly. A statistically significant difference (p < 0.01) in the blurring detection rate between the two monitors was demonstrated. Conclusions - Given the results of this study, and knowing that monitors as low as 1 MP are used in clinical practice, we speculate that technical recall/repeat rates due to blurring could be reduced if higher-resolution monitors were used for technical review at the time of imaging. Further work is needed to determine the minimum monitor specification for visual blurring detection.
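The blur-simulation step can be illustrated with a simple linear motion-blur kernel. This is a generic sketch, not the study's software; the mapping from millimetres of motion to pixels (which depends on detector pitch) is assumed to be handled elsewhere, and the tiny test image is invented.

```python
import numpy as np

def motion_blur_1d(image, blur_px, axis=0):
    """Simulate linear motion blur by averaging `blur_px` copies of the
    image shifted along one axis (np.roll wraps at the borders, so the
    outermost columns show a wrap-around artifact)."""
    shifts = np.arange(blur_px) - blur_px // 2
    stack = [np.roll(image, int(s), axis=axis) for s in shifts]
    return np.mean(stack, axis=0)

# A sharp vertical edge becomes a ramp after simulated motion blur
img = np.zeros((8, 8))
img[:, 4:] = 1.0
blurred = motion_blur_1d(img, blur_px=3, axis=1)
print(blurred[0])   # edge smeared across neighbouring columns
```

Observers in such a study would then be shown blurred and unblurred versions side by side; larger `blur_px` values correspond to the larger millimetre blur magnitudes reported above.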
Abstract:
Problem. This study approaches the school environment with the aim of advancing the understanding of adolescents' and teachers' imaginaries around the body, corporality and physical activity (PA), as a relevant element in the design of effective programs and plans to promote the practice of PA. Objective. To analyze the social imaginaries of teachers and adolescents around the concepts of body, corporality and PA. Methods. Qualitative, descriptive and interpretive research. Semi-structured interviews were conducted with teachers and with students aged 12 to 18 at a public school in Bogotá. Content analysis was performed, and student results were compared by age group and gender. Results. Teachers and students define the body in terms of biological characteristics, sexual differences and vital functions. Students link corporality to image and physical appearance, whereas teachers understand it as the possibility of interacting with the environment and as the materialization of existence. Students associate PA with the practice of exercise and sport; teachers understand it as a self-care practice that supports the maintenance of health. Conclusions. To promote PA early on as a vital experience, it is necessary to intervene in school spaces. The body must be incorporated into formative processes in order to develop bodily autonomy, which implies changes to the curricula.
Abstract:
Given the dynamic nature of cardiac function, correct temporal alignment of pre-operative models and intra-operative images is crucial for augmented reality in cardiac image-guided interventions. As such, the current study focuses on the development of an image-based strategy for temporal alignment of multimodal cardiac imaging sequences, such as cine Magnetic Resonance Imaging (MRI) or 3D Ultrasound (US). First, we derive a robust, modality-independent signal from the image sequences, estimated by computing the normalized cross-correlation between each frame in the temporal sequence and the end-diastolic frame. This signal resembles the left-ventricle (LV) volume curve over time, whose variation indicates different temporal landmarks of the cardiac cycle. We then perform the temporal alignment of these surrogate signals derived from MRI and US sequences of the same patient through Dynamic Time Warping (DTW), allowing both sequences to be synchronized. The proposed framework was evaluated in 98 patients who had undergone both 3D+t MRI and US scans. The end-systolic frame could be accurately estimated as the minimum of the image-derived surrogate signal, presenting a relative error of 1.6 ± 1.9% and 4.0 ± 4.2% for the MRI and US sequences, respectively, thus supporting its association with key temporal instants of the cardiac cycle. The use of DTW reduces the desynchronization of cardiac events in the MRI and US sequences, allowing multimodal cardiac imaging sequences to be temporally aligned. Overall, a generic, fast and accurate method for temporal synchronization of MRI and US sequences of the same patient was introduced. This approach could be used straightforwardly for the correct temporal alignment of pre-operative MRI information and intra-operative US images.
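The surrogate-signal and DTW steps can be sketched as follows. This is a generic illustration of the idea (NCC of each frame against the end-diastolic frame, then classic DTW), not the authors' implementation; the two surrogate signals below are invented toy values standing in for an MRI and a US sequence of the same cycle.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two frames."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def surrogate_signal(frames):
    """NCC of every frame against the end-diastolic (first) frame;
    the signal's minimum marks the end-systolic frame."""
    return np.array([ncc(f, frames[0]) for f in frames])

def dtw_path(x, y):
    """Classic dynamic time warping between two 1D signals;
    returns the optimal monotonic alignment path as (i, j) pairs."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, i, j = [], n, m            # backtrack from the final corner
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j), (i, j - 1), (i - 1, j - 1)],
                   key=lambda t: D[t])
    path.append((0, 0))
    return path[::-1]

# Invented surrogate signals: the same cardiac cycle sampled at two
# different frame rates (e.g. MRI vs. US)
mri = [1.0, 0.8, 0.4, 0.2, 0.5, 0.9]
us = [1.0, 0.8, 0.4, 0.4, 0.2, 0.5, 0.9]
print(int(np.argmin(mri)))   # end-systolic MRI frame index: 3
path = dtw_path(mri, us)
print(path[0], path[-1])     # endpoints stay aligned: (0, 0) and (5, 6)
```

The warping path pairs each MRI frame with one or more US frames, which is exactly the synchronization used to align the two modalities in time.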
Abstract:
Internship report presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the master's degree in Strategic Management of Public Relations.
Abstract:
High-content analysis has revolutionized cancer drug discovery by identifying substances that alter the phenotype of a cell in ways that prevent tumor growth and metastasis. The high-resolution biofluorescence images from assays allow precise quantitative measures enabling the distinction of small molecules of a host cell from a tumor. In this work, we are particularly interested in the application of deep neural networks (DNNs), a cutting-edge machine learning method, to the classification of compounds by chemical mechanism of action (MOA). Compound classification has been performed using image-based profiling methods, sometimes combined with feature reduction methods such as principal component analysis or factor analysis. In this article, we map the input features of each cell to a particular MOA class without using any treatment-level profiles or feature reduction methods. To the best of our knowledge, this is the first application of DNNs in this domain leveraging single-cell information. Furthermore, we use deep transfer learning (DTL) to alleviate the intensive and computationally demanding effort of searching the huge parameter space of a DNN. Results show that using this approach we obtain a 30% speedup and a 2% accuracy improvement.
Abstract:
This report gives a comprehensive and up-to-date review of Alzheimer's disease biomarkers. Recent years have seen significant advances in this field. Whilst considerable effort has focused on Aβ- and tau-related markers, a substantial number of other molecules have been identified that may offer new opportunities. This Report: Identifies 60 candidate Alzheimer's (AD) biomarkers and their associated studies. Of these, 49 are single species or single parameters, 7 are combinations or panels and 4 involve the measurement of two species or parameters or their ratios. These include proteins (n=34), genes (n=11), image-based parameters (n=7), small molecules (n=3), proteins + genes (n=2) and others (n=3). Of these, 30 (50%) relate to species identified in CSF and 19 (32%) were found in the blood. These candidates may be classified on the basis of their diagnostic utility, namely those which i) may allow AD to be detected when the disease has developed (48 of 75† = 64%), ii) may allow early detection of AD (18 of 75† = 24%) and iii) may allow AD to be predicted before the disease has begun to develop (9 of 75† = 12%). † Note: Of these, 11 were linked to two or more of these capabilities (e.g. allowed both early-stage detection as well as diagnosis after the disease has developed). Biomarkers: AD biomarkers identified in this report show significant diversity; however, of the 60 described, 18 (30%) are associated with amyloid beta (Aβ) and 9 (15%) relate to tau. The remainder of the biomarkers (just over half) fall into a number of different groups. Of these, some are associated with other hypotheses on the pathogenesis of AD; however, the vast majority are individually unique and not obviously linked with other markers. Analysis and discussion presented in this report include summaries of the studies and clinical trials that have led to the identification of these markers.
Where it has been calculated, the diagnostic sensitivity, specificity and capacity of these markers to differentiate patients with suspected AD from healthy controls and from individuals believed to be suffering from other neurodegenerative conditions have been indicated. These findings are discussed in relation to existing hypotheses on the pathogenesis of AD and the current drug development pipeline. Many uncertainties remain in relation to the pathogenesis of AD and to diagnosing and treating the disease, and many of the studies carried out to identify disease markers are at an early stage and will require confirmation through larger and longer investigations. Nevertheless, significant advances in the identification of AD biomarkers have now been made. Moreover, whilst much of the research on AD biomarkers has focused on amyloid- and tau-related species, it is evident that a substantial number of other species may provide important opportunities. Purpose of Report: To provide a comprehensive review of important and recently discovered candidate biomarkers of AD, in particular those with potential to reliably detect the disease or with utility in clinical development, drug repurposing, studies of pathogenesis, and monitoring of drug response and the course of the disease. Other key goals were to identify markers that support current pipeline developments, indicate new potential drug targets or advance understanding of the pathogenesis of this disease. Drug Repurposing: Studies of the pathogenesis of AD have identified aberrant changes in a number of other disease areas including inflammation, diabetes, oxidative stress, lipid metabolism and others. These findings have prompted studies to evaluate some existing approved drugs to treat AD.
This report identifies studies of 9 established drug classes currently being investigated for potential repurposing. Alzheimer's Disease: In 2005, the global prevalence of dementia was estimated at 25 million, with more than 4 million new cases occurring each year. It is also calculated that the number of people affected will double every 20 years, to 80 million by 2040, if a cure is not found. More than 50% of dementia cases are due to AD. Today, approximately 5 million individuals in the US suffer from AD, representing one in eight people over the age of 65. Direct and indirect costs of AD and other forms of dementia in the US are around $150 billion annually. Worldwide, costs for dementia care are estimated at $315 billion annually. Despite significant research into this debilitating and ultimately fatal disease, advances in the development of diagnostic tests for AD and, moreover, effective treatments remain elusive. Background: Alzheimer's disease is the most common cause of dementia, yet its clinical diagnosis remains uncertain until an eventual post-mortem histopathology examination is carried out. Currently, therapy for patients with Alzheimer's disease only treats the symptoms; however, it is anticipated that new disease-modifying drugs will soon become available. The urgency for new and effective treatments for AD is matched by the need for new tests to detect and diagnose the condition. Uncertainties in the diagnosis of AD mean that the disease is often undiagnosed and undertreated. Moreover, it is clear that clinical confirmation of AD, using cognitive tests, can only be made after substantial neuronal cell loss has occurred; a process that may have taken place over many years. Poor response to current therapies may therefore, in part, reflect the fact that such treatments are generally commenced only after neuronal damage has occurred.
The absence of tests to detect or diagnose presymptomatic AD also means that there is no standard that can be applied to validate experimental findings (e.g. in drug discovery) without performing lengthy studies, and eventual confirmation by autopsy. These limitations are focusing considerable effort on the identification of biomarkers that advance understanding of the pathogenesis of AD and of how the disease can be diagnosed in its early stages and treated. It is hoped that developments in these areas will help physicians to detect AD and guide therapy before the first signs of neuronal damage appear. The last 5-10 years have seen substantial research into the pathogenesis of AD, and this has led to the identification of a substantial number of AD biomarkers, which offer important insights into this disease. This report brings together the latest advances in the identification of AD biomarkers and analyses the opportunities they offer in drug R&D and diagnostics.
Abstract:
PURPOSE: Respiratory motion correction remains a challenge in coronary magnetic resonance imaging (MRI), and current techniques, such as navigator gating, suffer from sub-optimal scan efficiency and ease of use. To overcome these limitations, an image-based self-navigation technique is proposed that uses "sub-images" and compressed sensing (CS) to obtain translational motion correction in 2D. The method was preliminarily implemented as a 2D technique and tested for feasibility for targeted coronary imaging. METHODS: During a 2D segmented radial k-space data acquisition, heavily undersampled sub-images were reconstructed from the readouts collected during each cardiac cycle. These sub-images may then be used for respiratory self-navigation. Alternatively, a CS reconstruction may be used to create these sub-images, so as to partially compensate for the heavy undersampling. Both approaches were quantitatively assessed using simulations and in vivo studies, and the resulting self-navigation strategies were then compared to conventional navigator gating. RESULTS: Sub-images reconstructed using CS showed a lower artifact level than sub-images reconstructed without CS. As a result, the final image quality was significantly better when using CS-assisted self-navigation as opposed to the non-CS approach. Moreover, while both self-navigation techniques led to a 69% scan time reduction (as compared to navigator gating), there was no significant difference in image quality between the CS-assisted self-navigation technique and conventional navigator gating, despite the significant decrease in scan time. CONCLUSIONS: CS-assisted self-navigation using 2D translational motion correction demonstrated the feasibility of producing coronary MRA data with image quality comparable to that obtained with conventional navigator gating, without the use of additional acquisitions or motion modeling, while still allowing for 100% scan efficiency and improved ease of use.
In conclusion, compressed sensing may become a critical adjunct for 2D translational motion correction in free-breathing cardiac imaging with high spatial resolution. An expansion to modern 3D approaches is now warranted.
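The 2D translational correction that self-navigation relies on can be illustrated with phase correlation between a reference frame and a shifted sub-image. This is a hedged sketch of the general principle only: the CS reconstruction of the sub-images is omitted, and the random "images" and shift below are invented for the example.

```python
import numpy as np

def estimate_translation(ref, img):
    """Estimate the integer 2D shift of `img` relative to `ref` by
    phase correlation: the normalized cross-power spectrum's inverse
    FFT peaks at the displacement."""
    cross = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Map wrap-around peak indices to signed shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
ref = rng.normal(size=(32, 32))              # stand-in reference sub-image
moved = np.roll(ref, (3, -2), axis=(0, 1))   # simulated respiratory shift
print(estimate_translation(ref, moved))      # (3, -2)
```

Once the per-cycle shift is known, the correction can be applied directly in k-space as a linear phase ramp, which is why a translational model keeps 100% of the acquired data usable.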
Abstract:
Patient-specific simulations of the hemodynamics in intracranial aneurysms can be constructed by using image-based vascular models and CFD techniques. This work evaluates the impact of the choice of imaging technique on these simulations.