944 results for Segmentation of three-dimensional images
Abstract:
In this thesis we developed solutions to common issues affecting wide-field microscopes, addressing the intensity inhomogeneity of single images and two strong limitations: the impossibility of acquiring either highly detailed images representative of whole samples or deep 3D objects. First, we cope with the non-uniform distribution of the light signal inside a single image, known as vignetting. In particular, for both light and fluorescence microscopy, we proposed non-parametric multi-image methods in which the vignetting function is estimated directly from the sample without requiring any prior information. After obtaining flat-field-corrected images, we studied how to overcome the limited field of view of the camera, so as to be able to acquire large areas at high magnification. To this purpose, we developed mosaicing techniques capable of working on-line. Starting from a set of overlapping images acquired manually, we validated a fast registration approach to accurately stitch the images together. Finally, we worked to virtually extend the field of view of the camera in the third dimension, with the purpose of reconstructing a single, completely in-focus image from objects that have a relevant depth or are displaced across different focal planes. After studying the existing approaches for extending the depth of focus of the microscope, we proposed a general method that does not require any prior information. In order to compare the outcome of existing methods, different standard metrics are commonly used in the literature; however, no metric is available to compare different methods in real cases. First, we validated a metric able to rank the methods as the Universal Quality Index does, but without needing any reference ground truth. Second, we proved that the approach we developed performs better in both synthetic and real cases.
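The multi-image idea behind such flat-field correction can be sketched in a few lines: given many overlapping frames of a moving sample, the per-pixel median across frames approximates the vignetting function, which is then divided out. This is a minimal illustration of the non-parametric principle under our own assumptions, not the thesis's actual algorithm; all function names are ours.

```python
import numpy as np

def estimate_vignetting(images):
    """Non-parametric vignetting estimate: the per-pixel median over many
    frames of a moving sample converges to the (scaled) vignetting field,
    because the sample content varies while the vignetting stays fixed."""
    v = np.median(np.stack(images, axis=0), axis=0)
    return v / v.max()  # normalize so the brightest region has gain 1

def flat_field_correct(image, vignetting, eps=1e-6):
    """Divide out the estimated vignetting function (flat-field correction)."""
    return image / np.maximum(vignetting, eps)
```

On synthetic data with a known smooth vignetting field and random sample content per frame, the median-based estimate recovers the field closely as the number of frames grows.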
Abstract:
Natural stones have been widely used in the construction field since antiquity. Building materials undergo decay processes due to mechanical, chemical, physical and biological causes that can act together; an interdisciplinary approach is therefore required in order to understand the interaction between the stone and the surrounding environment. Utilization of buildings, inadequate restoration activities and, in general, anthropogenic weathering factors may contribute to this degradation process. For these reasons, in the last few decades new technologies and techniques have been developed and introduced in the restoration field. Consolidants are widely used in the restoration and conservation of cultural heritage in order to improve internal cohesion and to reduce the weathering rate of building materials. Defining the penetration depth of a consolidant is important for determining its efficacy. Impregnation mainly depends on the microstructure of the stone (i.e. its porosity) and on the properties of the product itself. Throughout this study, tetraethoxysilane (TEOS) applied on Globigerina limestone samples was chosen as the object of investigation. After hydrolysis and condensation, TEOS deposits silica gel inside the pores, improving the cohesion of the grains. X-ray computed tomography was used to characterize the internal structure of limestone samples treated and untreated with a TEOS-based consolidant. The aim of this work is to investigate the penetration depth and the distribution of the TEOS inside the pore space, using both traditional approaches and advanced X-ray tomographic techniques, the latter allowing the internal visualization of the materials in three dimensions. Fluid transport properties and porosity were studied both at the macroscopic scale, by means of capillary uptake tests and radiography, and at the microscopic scale, investigated with X-ray Tomographic Microscopy (XTM).
This allows identifying changes in porosity, by comparing images acquired before and after the treatment, and locating the consolidant inside the stone. Tests were initially run at the University of Bologna, where the characterization of the stone was carried out. The research then continued in Switzerland: X-ray tomography and radiography were performed at Empa, the Swiss Federal Laboratories for Materials Science and Technology, while XTM measurements with synchrotron radiation were run at the Paul Scherrer Institute in Villigen.
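At its simplest, the before/after comparison of segmented tomographic volumes reduces to counting pore voxels. A minimal sketch of that comparison (our own illustration, assuming pores are labeled 0 in already-segmented volumes; not the specific analysis pipeline of the study):

```python
import numpy as np

def porosity(volume, pore_value=0):
    """Porosity as the fraction of pore voxels in a segmented 3D volume."""
    return float(np.mean(volume == pore_value))

def porosity_change(before, after, pore_value=0):
    """Porosity reduction after consolidation; a positive value means
    pores were filled (e.g. by deposited silica gel)."""
    return porosity(before, pore_value) - porosity(after, pore_value)
```

Registering the before/after volumes voxel-to-voxel also allows mapping *where* the change occurred, e.g. porosity profiles along the depth axis to estimate penetration depth.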
Abstract:
The atmosphere is a global influence on the movement of heat and humidity between the continents, and thus significantly affects climate variability. Information about atmospheric circulation is of major importance for the understanding of different climatic conditions. Dust deposits from maar lakes and dry maars of the Eifel Volcanic Field (Germany) are therefore used as proxy data for the reconstruction of past aeolian dynamics.

In this thesis, two sediment cores from the Eifel region are examined: core SM3 from Lake Schalkenmehren and core DE3 from the Dehner dry maar. Both cores contain the tephra of the Laacher See eruption, dated to 12,900 years before present. Taken together, the cores cover the last 60,000 years: SM3 the Holocene and DE3 the marine isotope stages MIS-3 and MIS-2, respectively. The frequency of glacial dust storm events and their paleo wind directions are detected by high-resolution grain size and provenance analysis of the lake sediments. Two different methods are applied: geochemical measurements of the sediment using µXRF scanning, and the particle analysis method RADIUS (rapid particle analysis of digital images by ultra-high-resolution scanning of thin sections).

It is shown that single dust layers in the lake sediment are characterized by an increased content of aeolian-transported carbonate particles. The limestone-bearing Eifel North-South zone is the most likely source for the carbonate-rich aeolian dust in the lake sediments of the Dehner dry maar. The dry maar is located on the western side of the Eifel North-South zone; carbonate-rich aeolian sediment is therefore most likely transported towards the Dehner dry maar by easterly winds.
A methodology is developed which limits the detection to the aeolian-transported carbonate particles in the sediment: the RADIUS-carbonate module.

In summary, during marine isotope stage MIS-3 both the storm frequency and the east wind frequency were increased in comparison to MIS-2. These results lead to the suggestion that atmospheric circulation was affected by more turbulent conditions during MIS-3, in comparison to the more stable atmospheric circulation during the full glacial conditions of MIS-2.

The results of the investigations of the dust records are finally evaluated in relation to a study of atmospheric general circulation models for a comprehensive interpretation. Here, AGCM experiments (ECHAM3 and ECHAM4) with different prescribed SST patterns are used to develop a synoptic interpretation of long-persisting east wind conditions and of east wind storm events, which are suggested to lead to an enhanced accumulation of sediment transported by easterly winds to the proxy site of the Dehner dry maar.

The basic observations made on the proxy record are also illustrated in the 10 m wind vectors of the different model experiments under glacial conditions with different prescribed sea surface temperature patterns. Furthermore, the analysis of long-persisting east wind conditions in the AGCM data shows a stronger seasonality under glacial conditions: all experiments are characterized by an increase in the relative importance of the LEWIC during spring and summer. The different glacial experiments consistently show a shift of a long-lasting high from the Baltic Sea towards the northwest, directly above the Scandinavian Ice Sheet, together with contemporaneously enhanced westerly circulation over the North Atlantic.

This thesis is a comprehensive analysis of atmospheric circulation patterns during the last glacial period. It has been possible to reconstruct important elements of the glacial paleoclimate in Central Europe.
While the proxy data from the sediment cores yield only a binary signal of wind direction changes (east versus west wind), a synoptic interpretation using atmospheric circulation models succeeds in showing a possible distribution of high and low pressure areas, and thus the direction and strength of the wind fields that have the capacity to transport dust. In conclusion, the combination of numerical models, to enhance the understanding of processes in the climate system, with proxy data from the environmental record is the key to a comprehensive approach to paleoclimatic reconstruction.
Abstract:
Over the last decades the impact of natural disasters on the global environment has become more and more severe. The number of disasters has dramatically increased, as has the cost to the global economy and the number of people affected. Among natural disasters, flood catastrophes are considered the most costly, devastating, widespread and frequent, because of the tremendous fatalities, injuries, property damage, and economic and social disruption they cause to humankind. In the last thirty years, the world has suffered from severe flooding, and the huge impact of floods has caused hundreds of thousands of deaths, destruction of infrastructure, disruption of economic activity and property losses worth billions of dollars. In this context, satellite remote sensing, along with Geographic Information Systems (GIS), has become a key tool in flood risk management analysis. Remote sensing for supporting various aspects of flood risk management was investigated in the present thesis. In particular, the research focused on the use of satellite images for flood mapping and monitoring, damage assessment and risk assessment. The contribution of satellite remote sensing to the delineation of flood-prone zones, the identification of damaged areas and the development of hazard maps was explored with reference to selected case studies.
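As a minimal illustration of satellite-based flood mapping (one common approach, not necessarily the specific method of this thesis): water surfaces appear dark in SAR backscatter images, so a global threshold on a pre-/post-event image pair yields candidate newly flooded areas. The threshold value and function names below are our assumptions.

```python
import numpy as np

def flood_mask(backscatter_db, threshold_db=-15.0):
    """Water acts as a smooth reflector and appears dark (low backscatter)
    in SAR imagery; pixels below the threshold are candidate water pixels."""
    return backscatter_db < threshold_db

def newly_flooded(pre_db, post_db, threshold_db=-15.0):
    """Pixels classified as water in the post-event scene but not in the
    pre-event one: candidate flood extent for damage assessment."""
    return flood_mask(post_db, threshold_db) & ~flood_mask(pre_db, threshold_db)
```

In practice the threshold is usually chosen per scene (e.g. from the image histogram), and the raw mask is refined with speckle filtering, terrain masking and GIS layers such as permanent water bodies.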
Abstract:
21 cm cosmology opens an observational window to previously unexplored cosmological epochs, such as the Epoch of Reionization (EoR), the Cosmic Dawn and the Dark Ages, using powerful radio interferometers such as the planned Square Kilometer Array (SKA). Among the many applications that can potentially improve the understanding of standard cosmology, we study the promising opportunity offered by measuring the weak gravitational lensing of 21 cm radiation. We performed this study in two different cosmological epochs: at a typical EoR redshift and subsequently at a post-EoR redshift. We show how the lensing signal can be reconstructed using a three-dimensional optimal quadratic lensing estimator in Fourier space, using a single frequency band or combining measurements from multiple frequency bands. To this purpose, we implemented a simulation pipeline capable of dealing with issues that cannot be treated analytically. Considering the current SKA plans, we studied the performance of the quadratic estimator at typical EoR redshifts, for different survey strategies, comparing two thermal noise models for the SKA-Low array. The simulations take into account the beam of the telescope and the discreteness of visibility measurements. We found that an SKA-Low interferometer should obtain high-fidelity images of the underlying mass distribution in its phase 1 only if several bands are stacked together, covering a redshift range from z=7 to z=11.5. SKA-Low phase 2, modeled so as to improve the sensitivity of the instrument by almost an order of magnitude, should be capable of providing good-quality images even when the signal is detected within a single frequency band. Considering also the serious effect that foregrounds could have on these detections, we discuss the limits of these results and the possibility, provided by these models, of measuring an accurate lensing power spectrum.
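Schematically, a 3D quadratic lensing estimator of the kind described correlates pairs of temperature modes. A hedged sketch of the standard construction (our notation, not the thesis's exact expressions):

```latex
% Schematic quadratic estimator for the lensing potential \phi from
% 21 cm temperature modes \tilde{T}(\boldsymbol{\ell}, k_\parallel):
\hat{\phi}(\mathbf{L}) \;\propto\;
  \sum_{k_\parallel} \int \frac{\mathrm{d}^2\boldsymbol{\ell}}{(2\pi)^2}\,
  g(\boldsymbol{\ell}, \mathbf{L}, k_\parallel)\,
  \tilde{T}(\boldsymbol{\ell}, k_\parallel)\,
  \tilde{T}^{*}(\boldsymbol{\ell} - \mathbf{L}, k_\parallel)
```

Here the weights $g$ are chosen to minimize the estimator variance given the total (signal plus thermal noise) power spectrum; the many quasi-independent $k_\parallel$ modes within one frequency band, and across stacked bands, are combined with inverse-variance weighting, which is what drives the noise on $\hat{\phi}$ down.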
Abstract:
Time series are ubiquitous. The acquisition and processing of continuously measured data is present in all areas of the natural sciences, medicine and finance. The enormous growth of recorded data volumes, whether from automated monitoring systems or integrated sensors, calls for exceptionally fast algorithms in theory and practice. Consequently, this thesis deals with the efficient computation of subsequence alignments. Complex algorithms such as anomaly detection, motif queries or the unsupervised extraction of prototypical building blocks in time series make extensive use of these alignments, which motivates the need for fast implementations. This thesis is divided into three approaches that address this challenge: four alignment algorithms and their parallelization on CUDA-capable hardware, an algorithm for the segmentation of data streams, and a unified treatment of Lie-group-valued time series.

The first contribution is a complete CUDA port of the UCR suite, the world-leading implementation of subsequence alignment. It includes a new computation scheme for determining local alignment scores under the z-normalized Euclidean distance, which can be deployed on any parallel hardware with support for fast Fourier transforms. Furthermore, we give a SIMT-compatible implementation of the UCR suite's lower-bound cascade for the efficient computation of local alignment scores under Dynamic Time Warping. Both CUDA implementations enable computations one to two orders of magnitude faster than established methods.

Second, we investigate two linear-time approximations for the elastic alignment of subsequences. On the one hand, we treat a SIMT-compatible relaxation scheme for greedy DTW and its efficient CUDA parallelization.
On the other hand, we introduce a new local distance measure, the Gliding Elastic Match (GEM), which can be computed with the same asymptotic time complexity as greedy DTW but offers a complete relaxation of the penalty matrix. Further improvements include invariance to trends on the measurement axis and to uniform scaling on the time axis. We also discuss an extension of GEM to multi-shape segmentation and evaluate it on motion data. Both CUDA parallelizations achieve runtime improvements of up to two orders of magnitude.

The treatment of time series in the literature is usually restricted to real-valued measurements. The third contribution is a unified method for handling Lie-group-valued time series. Building on it, distance measures on the rotation group SO(3) and on the Euclidean group SE(3) are treated, and memory-efficient representations as well as group-compatible extensions of elastic measures are discussed.
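The core quantity behind UCR-style subsequence search is the z-normalized Euclidean distance of a query against every window of a series. A naive reference implementation (our illustration; the UCR suite and its CUDA port compute the same quantity far faster via FFT-based normalization and lower-bound pruning):

```python
import numpy as np

def znorm(x):
    """z-normalize a window: zero mean, unit variance (zeros if constant)."""
    s = x.std()
    return (x - x.mean()) / s if s > 0 else np.zeros_like(x)

def subsequence_distances(series, query):
    """z-normalized Euclidean distance of the query against every
    length-m subsequence of the series (the local alignment scores
    a UCR-style search minimizes over)."""
    s = np.asarray(series, dtype=float)
    q = znorm(np.asarray(query, dtype=float))
    m = len(q)
    return np.array([np.linalg.norm(q - znorm(s[i:i + m]))
                     for i in range(len(s) - m + 1)])
```

Because both the query and each window are z-normalized, the match is invariant to offset and amplitude scaling: a shifted, scaled copy of the query scores an exact zero.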
Abstract:
Modern imaging technologies, such as computed tomography (CT) techniques, represent a great challenge in forensic pathology. The field of forensics has experienced a rapid increase in the use of these new techniques to support investigations of critical cases, as indicated by the implementation of CT scanning by different forensic institutions worldwide. Advances in CT imaging techniques over the past few decades have finally led some authors to propose that virtual autopsy, a radiological method applied to post-mortem analysis, is a reliable alternative to traditional autopsy, at least in certain cases. The authors investigate the occurrence and the causes of errors and mistakes in diagnostic imaging applied to virtual autopsy. A case of suicide by gunshot wound was submitted to full-body CT scanning before autopsy. We compared the first examination of the sectional images with the autopsy findings and found a preliminary misdiagnosis of a peritoneal gunshot lesion that was due to a radiologist's error. We then discuss a newly emerging issue: the risk of diagnostic failure in virtual autopsy due to radiologist error, similar to what occurs in clinical radiology practice.
Abstract:
Computed tomography (CT) and magnetic resonance (MR) imaging have become important elements of forensic radiology. Whereas the feasibility and potential of CT angiography have long been explored, postmortem MR angiography (PMMRA) has so far been neglected. We tested the feasibility of PMMRA on four adult human cadavers. Technical quality of PMMRA was assessed relative to postmortem CT angiography (PMCTA), separately for each body region. Intra-aortic contrast volumes were calculated on PMCTA and PMMRA with segmentation software. The results showed that technical quality of PMMRA images was equal to PMCTA in 4/4 cases for the head, the heart, and the chest, and in 3/4 cases for the abdomen, and the pelvis. There was a mean decrease in intra-aortic contrast volume from PMCTA to PMMRA of 46%. PMMRA is technically feasible and allows combining the soft tissue detail provided by MR and the information afforded by angiography.
Abstract:
Pictorial representations of three-dimensional objects are often used to investigate animal cognitive abilities; however, investigators rarely evaluate whether the animals conceptualize the two-dimensional image as the object it is intended to represent. We tested for picture recognition in lion-tailed macaques by presenting five monkeys with digitized images of familiar foods on a touch screen. Monkeys viewed images of two different foods and learned that they would receive a piece of the one they touched first. After demonstrating that they would reliably select images of their preferred foods on one set of foods, animals were transferred to images of a second set of familiar foods. We assumed that if the monkeys recognized the images, they would spontaneously select images of their preferred foods on the second set of foods. Three monkeys selected images of their preferred foods significantly more often than chance on their first transfer session. In an additional test of the monkeys' picture recognition abilities, animals were presented with pairs of food images containing a medium-preference food paired with either a high-preference food or a low-preference food. The same three monkeys selected the medium-preference foods significantly more often when they were paired with low-preference foods and significantly less often when those same foods were paired with high-preference foods. Our novel design provided convincing evidence that macaques recognized the content of two-dimensional images on a touch screen. Results also suggested that the animals understood the connection between the two-dimensional images and the three-dimensional objects they represented.
Abstract:
In binocular rivalry, presentation of different images to the separate eyes leads to conscious perception alternating between the two possible interpretations every few seconds. During perceptual transitions, a stimulus emerging into dominance can spread in a wave-like manner across the visual field. These traveling waves of rivalry dominance have been successfully related to the cortical magnification properties and functional activity of early visual areas, including the primary visual cortex (V1). Curiously, however, these traveling waves undergo a delay when passing from one hemifield to another. In the current study, we used diffusion tensor imaging (DTI) to investigate whether the strength of interhemispheric connections between the left and right visual cortex might be related to the delay of traveling waves across hemifields. We measured the delay in traveling wave times (ΔTWT) in 19 participants and repeated this test 6 weeks later to evaluate the reliability of our behavioral measures. We found large interindividual variability but also good test-retest reliability for individual measures of ΔTWT. Using DTI in connection with fiber tractography, we identified parts of the corpus callosum connecting functionally defined visual areas V1-V3. We found that individual differences in ΔTWT were reliably predicted by the diffusion properties of transcallosal fibers connecting left and right V1, but observed no such effect for neighboring transcallosal visual fibers connecting V2 and V3. Our results demonstrate that the anatomical characteristics of topographically specific transcallosal connections predict the individual delay of interhemispheric traveling waves, providing further evidence that V1 is an important site for neural processes underlying binocular rivalry.
Abstract:
Nowadays computer simulation is used in various fields, particularly in laboratories, where it allows the exploration of data that are sometimes experimentally inaccessible. In less developed countries, where there is a need for up-to-date laboratories for practical lessons in chemistry, especially in secondary schools and some higher institutions of learning, it may permit learners to carry out experiments such as titrations without the use of laboratory materials and equipment. Computer simulations may also permit teachers to better explain the realities of practical lessons, given that computers have now become very accessible and less expensive than the acquisition of laboratory materials and equipment. This work is aimed at developing a virtual laboratory that permits the simulation of an acid-base titration and an oxidation-reduction titration with the use of synthetic images. To this effect, an appropriate numerical method was used to obtain flowcharts, which were then transcribed into source code with the help of a programming language so as to produce the software.
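The numerical core of a virtual acid-base titration is the pH as a function of the titrant volume added. A minimal sketch for the strong acid/strong base case (our own simple excess-ion model and function names, not necessarily those of the software described):

```python
import math

def titration_ph(c_acid, v_acid, c_base, v_base, kw=1e-14):
    """pH during titration of a strong acid (c_acid mol/L, v_acid L)
    with a strong base (c_base mol/L, v_base L added), computed from
    the excess of H+ or OH- after mixing."""
    n_h = c_acid * v_acid    # moles of H+ delivered by the acid
    n_oh = c_base * v_base   # moles of OH- delivered by the base
    v_tot = v_acid + v_base  # total mixed volume
    if n_h > n_oh:           # before the equivalence point: excess acid
        return -math.log10((n_h - n_oh) / v_tot)
    if n_oh > n_h:           # past the equivalence point: excess base
        return -math.log10(kw) + math.log10((n_oh - n_h) / v_tot)
    return -math.log10(math.sqrt(kw))  # equivalence point: neutral water
```

Sweeping `v_base` through this function produces the familiar sigmoidal titration curve with its sharp jump at the equivalence point, which is what the simulated experiment renders on screen.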
Abstract:
Background: Despite the increasingly high spatial and contrast resolution of CT, nodular lesions are prone to being missed on chest CT. Tinted lenses increase visual acuity and contrast sensitivity by filtering short-wavelength light of solar and artificial origin. Purpose: To test the impact of Gunnar eyewear, image quality (standard versus low-dose CT) and nodule location on the detectability of lung nodules in CT, and to compare their individual influence. Material and Methods: A pre-existing database of CT images of patients with lung nodules >5 mm, scanned at standard dose image quality (150 ref mAs/120 kVp) and lower dose/quality (40 ref mAs/120 kVp), was used. Five radiologists read 60 chest CTs twice: once with Gunnar glasses and once without, with a one-month break in between. At both read-outs the cases were shown at the lower dose or standard dose level to quantify the influence of both variables (eyewear vs. image quality) on nodule sensitivity. Results: The sensitivity of CT for lung nodules increased significantly with Gunnar eyewear for two readers, and insignificantly for two other readers. Overall, the mean sensitivity of all radiologists rose significantly from 50% to 53% using the glasses (P = 0.034). In contrast, sensitivity for lung nodules was not significantly affected by lowering the image quality from 150 to 40 ref mAs: the average sensitivity was 52% at the low dose level, even 0.7% higher than at the standard dose level (P = 0.40). The factors with the strongest impact on sensitivity were the readers and the nodule location (lung segments). Conclusion: Sensitivity for lung nodules was significantly enhanced by Gunnar eyewear (+3%), while lower image quality (40 ref mAs) had no impact on nodule sensitivity. Not using the glasses had a bigger impact on sensitivity than lowering the image quality.
Abstract:
A group of 406 Polish university students (210 women and 196 men) were asked to describe typical representatives of selected ethnic groups and their typical female and male members. The descriptions were based on a list of 24 traits and a list of 18 values, accompanied by scales for measuring trait-typicality and value-importance. The participants' level of confidence about the accuracy of the descriptions, their ethnic attitudes and their perception of the relative social status of men and women in ethnic groups were also measured. The results indicate an effect of masculinisation of ethnic images for both traits and values. Descriptions of typical representatives of ethnic groups resemble the images of typical men significantly more than those of typical women of these nationalities, even for the most modern nations. Differences registered between images of typical representatives of ethnic groups and their male and female members concerned primarily the traits and values basic to gender stereotypes. The images of women were significantly more favourable than those of men. The bias in ethnic perception towards the gender of the stereotype-holder was also indicated. Several differences were found between women's and men's perception of typical representatives of ethnic groups and especially of ethnic gender subgroups, without however the predicted effect of gender in-group favouritism. There was also a degree of ethnic in-group favouritism of Poles related to the gender both of participants and of the ethnic target groups.
Abstract:
The purpose of this study was to demonstrate the improvement in diagnostic quality and diagnostic accuracy of SonoVue microbubble contrast-enhanced ultrasound (CE-US) versus unenhanced ultrasound imaging in the investigation of extracranial carotid or peripheral arteries. 82 patients with suspected extracranial carotid or peripheral arterial disease received four SonoVue doses (0.3 ml, 0.6 ml, 1.2 ml and 2.4 ml), with Doppler ultrasound performed before and after each dose. Diagnostic quality of the CE-US examinations was evaluated off-site for the duration of clinically useful contrast enhancement, artefact effects and the percentage of examinations converted from non-diagnostic to diagnostic. Accuracy, sensitivity and specificity were assessed as the agreement of the CE-US diagnosis, evaluated by an independent panel of experts, with the reference standard modality. The median duration of clinically useful signal enhancement increased significantly with increasing SonoVue doses (p ≤ 0.002). At the 2.4 ml dose of SonoVue, diagnostic quality evaluated as the number of inconclusive examinations improved significantly, falling from 40.7% at baseline down to 5.1%. Furthermore, SonoVue significantly (p < 0.01) increased the accuracy, sensitivity and specificity of disease assessment compared with baseline ultrasound. SonoVue increases the diagnostic quality of Doppler images and improves the accuracy of both spectral and colour Doppler examinations of extracranial carotid or peripheral arterial disease.
Abstract:
OBJECTIVES: The goal of the present study was to compare the accuracy of in vivo tissue characterization obtained by intravascular ultrasound (IVUS) radiofrequency (RF) data analysis, known as Virtual Histology (VH), with the in vitro histopathology of coronary atherosclerotic plaques obtained by directional coronary atherectomy. BACKGROUND: Vulnerable plaque leading to acute coronary syndrome (ACS) has been associated with specific plaque composition, and its characterization is an important clinical focus. METHODS: Virtual Histology IVUS images were acquired before and after a single debulking cut with directional coronary atherectomy. The debulked region of the in vivo image was identified by comparing pre- and post-debulking VH images, and the VH images were then analyzed against the corresponding tissue cross-sections. RESULTS: Fifteen stable angina pectoris (AP) and 15 ACS patients were enrolled. The results of the IVUS RF data analysis correlated well with the histopathologic examination (predictive accuracy from all patient data: 87.1% for fibrous, 87.1% for fibro-fatty, 88.3% for necrotic core, and 96.5% for dense calcium regions, respectively). In addition, the frequency of necrotic core was significantly higher in the ACS group than in the stable AP group (in vitro histopathology: 22.6% vs. 12.6%, p = 0.02; in vivo Virtual Histology: 24.5% vs. 10.4%, p = 0.002). CONCLUSIONS: Correlation of in vivo IVUS RF data analysis with histopathology shows high accuracy. In vivo IVUS RF data analysis is a useful modality for the classification of different types of coronary plaque components, and may play an important role in the detection of vulnerable plaque.