881 results for image texture analysis
Abstract:
Ancient Lake Ohrid is a steep-sided, oligotrophic, karst lake that most likely formed tectonically during the Pliocene and is often referred to as a hotspot of endemic biodiversity. This study aims at tracing significant lake level fluctuations at Lake Ohrid using high-resolution acoustic data in combination with lithological, geochemical, and chronological information from two sediment cores recovered from sub-aquatic terrace levels at ca. 32 and 60 m water depth. According to our data, significant lake level fluctuations with prominent lowstands of ca. 60 and 35 m below the present water level occurred during Marine Isotope Stage (MIS) 6 and MIS 5, respectively. The effect of these lowstands on biodiversity in most coastal parts of the lake is negligible, due to only small changes in lake surface area, coastline, and habitat. In contrast, biodiversity in shallower areas was more severely affected due to the disconnection of today's sublacustrine springs from the main water body. Multichannel seismic data from deeper parts of the lake clearly image several clinoform structures stacked on top of each other. These stacked clinoforms indicate significantly lower lake levels prior to MIS 6 and a stepwise rise of the water level, with intermittent stillstands, since the lake's existence as a water-filled body, which might have caused enhanced expansion of endemic species within Lake Ohrid.
Abstract:
Magnetic resonance temperature imaging (MRTI) is recognized as a noninvasive means of providing temperature maps for guidance in thermal therapies. The most common method of estimating temperature changes in the body with MR is measuring the water proton resonant frequency (PRF) shift. Calculation of the complex phase difference (CPD) is the method of choice for measuring the PRF indirectly, since it facilitates temperature mapping with high spatiotemporal resolution. Chemical shift imaging (CSI) techniques can provide the PRF directly, with high sensitivity to temperature changes, while minimizing artifacts commonly seen in CPD techniques. However, CSI techniques are currently limited by poor spatiotemporal resolution. This research develops and validates a CSI-based MRTI technique with intentional spectral undersampling, which allows relaxed acquisition parameters that improve spatiotemporal resolution. An algorithm based on autoregressive moving average (ARMA) modeling is developed and validated to overcome limitations of Fourier-based analysis, allowing highly accurate and precise PRF estimates. From the determined acquisition parameters and ARMA modeling, robust temperature maps are generated with the k-means algorithm and validated in laser treatments of ex vivo tissue. The use of non-PRF-based measurements provided by the technique is also investigated to aid in the validation of thermal damage predicted by an Arrhenius rate dose model.
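As an illustration of the indirect PRF method mentioned above, the sketch below converts a complex-phase-difference measurement into a temperature change. It is a minimal sketch, not the dissertation's implementation; the gyromagnetic ratio and PRF thermal coefficient are standard literature values, and the function name is hypothetical.

```python
import math

GAMMA_HZ_PER_T = 42.58e6    # 1H gyromagnetic ratio (standard value)
ALPHA_PPM_PER_C = -0.01     # PRF thermal coefficient of water (typical value)

def cpd_delta_t(delta_phase_rad, b0_tesla, te_seconds):
    """Temperature change implied by the phase difference between two
    gradient-echo images (complex phase difference method)."""
    return delta_phase_rad / (2 * math.pi * GAMMA_HZ_PER_T
                              * ALPHA_PPM_PER_C * 1e-6
                              * b0_tesla * te_seconds)
```

For example, at an assumed 3 T field strength and 10 ms echo time, a phase change of about -0.08 rad corresponds to roughly +1 degree C.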
Abstract:
Multimodality – the interdependence of semiotic resources in text – is an essential element of today's media. The term multimodality attends systematically to the social interpretation of a wide range of communicational forms used in meaning making. A primary focus of social-semiotic multimodal analysis is on mapping how modal resources are used by people in a given social context. In November 2012 the “Ola ke ase” catchphrase, a play on “Hola ¿qué hace?”, appeared for the first time in Spain and was immediately adopted as a Twitter hashtag and an image macro series. Its viral spread on social networks has been tremendous, making it a trending topic in various Spanish-speaking countries. The objective of the analysis is to show how language and image work together in the “Ola ke ase” meme. The interplay between text and image in one of the original memes and some of its variations is quantitatively analysed by applying a social-semiotic approach. Results demonstrate how the “Ola ke ase” meme functions through its multimodal character and its non-standard orthography. The spread of countless variations of the meme shows the social process that goes on in the meaning making of the semiotic elements.
Abstract:
The combination of scaled analogue experiments, material mechanics, X-ray computed tomography (XRCT) and digital volume correlation (DVC) techniques is a powerful new tool not only to examine the three-dimensional structure and kinematic evolution of complex deformation structures in scaled analogue experiments, but also to fully quantify their spatial strain distribution and complete strain history. Digital image correlation (DIC) is an important advance in quantitative physical modelling and helps to understand non-linear deformation processes. Non-intrusive optical DIC techniques enable the quantification of localised and distributed deformation in analogue experiments, based either on images taken through transparent sidewalls (2D DIC) or on surface views (3D DIC). XRCT analysis permits the non-destructive visualisation of the internal structure and kinematic evolution of scaled analogue experiments simulating the tectonic evolution of complex geological structures. The combination of XRCT sectional image data of analogue experiments with 2D DIC only allows quantification of the 2D displacement and strain components in the section direction, which neglects the potential of CT experiments for full 3D strain analysis of complex, non-cylindrical deformation structures. In this study, we apply DVC techniques to XRCT scan data of “solid” analogue experiments to fully quantify the internal displacement and strain in three dimensions over time. Our first results indicate that the application of DVC techniques to XRCT volume data can successfully quantify the 3D spatial and temporal strain patterns inside analogue experiments. We demonstrate the potential of combining DVC techniques and XRCT volume imaging for 3D strain analysis of a contractional experiment simulating the development of a non-cylindrical pop-up structure.
Furthermore, we discuss various options for optimisation of granular materials, pattern generation, and data acquisition for increased resolution and accuracy of the strain results. Three-dimensional strain analysis of analogue models is of particular interest for geological and seismic interpretations of complex, non-cylindrical geological structures. The volume strain data enable the analysis of the large-scale and small-scale strain history of geological structures.
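The correlation step at the heart of DIC/DVC can be illustrated with a toy one-dimensional sketch: find the integer shift that best aligns a reference window with the deformed signal. Real DIC/DVC codes operate on 2D/3D subsets with subpixel interpolation; this simplified `best_shift` function is an illustrative assumption, not the workflow used in the study.

```python
def best_shift(ref, deformed, max_shift):
    """Integer displacement minimizing the mean squared difference between
    the reference window and the shifted deformed signal (a stand-in for
    the correlation-peak search used in DIC/DVC)."""
    best_s, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(ref[i], deformed[i + s]) for i in range(len(ref))
                 if 0 <= i + s < len(deformed)]
        if not pairs:
            continue
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best_err, best_s = err, s
    return best_s
```

Extending the same search over 3-D subvolumes of successive XRCT scans is what yields the internal displacement field.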
Abstract:
Extensive experience with the analysis of human prophase chromosomes and studies into the complexity of prophase GTG-banding patterns have suggested that at least some prophase chromosomal segments can be accurately identified and characterized independently of the morphology of the chromosome as a whole. In this dissertation the feasibility of identifying and analyzing specified prophase chromosome segments was thus investigated as an alternative approach to prophase chromosome analysis based on whole chromosome recognition. Through the use of prophase idiograms at the 850-band stage (FRANCKE, 1981) and a comparison system based on the calculation of cross-correlation coefficients between idiogram profiles, we have demonstrated that it is possible to divide the 24 human prophase idiograms into a set of 94 unique band sequences. Each unique band sequence has a banding pattern that is recognizable and distinct from any other non-homologous chromosome portion. Using chromosomes 11p and 16 through 22 to demonstrate unique band sequence integrity at the chromosome level, we found that prophase chromosome banding pattern variation can be compensated for and that a set of unique band sequences very similar to those at the idiogram level can be identified on actual chromosomes. The use of a unique band sequence approach in prophase chromosome analysis is expected to increase efficiency and sensitivity through more effective use of available banding information. The use of a unique band sequence approach to prophase chromosome analysis is discussed both at the routine level by cytogeneticists and at an image processing level with a semi-automated approach to prophase chromosome analysis.
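The comparison system described above rests on cross-correlation coefficients between banding profiles. A minimal sketch of such a coefficient (the standard Pearson formula, with a hypothetical function name) is:

```python
import math

def cross_correlation(profile_a, profile_b):
    """Pearson correlation coefficient between two equal-length
    band-intensity profiles; 1.0 means identical banding shape."""
    n = len(profile_a)
    mean_a = sum(profile_a) / n
    mean_b = sum(profile_b) / n
    cov = sum((a - mean_a) * (b - mean_b)
              for a, b in zip(profile_a, profile_b))
    norm_a = math.sqrt(sum((a - mean_a) ** 2 for a in profile_a))
    norm_b = math.sqrt(sum((b - mean_b) ** 2 for b in profile_b))
    return cov / (norm_a * norm_b)
```

Matching a measured segment profile against the 94 idiogram band sequences then reduces to picking the sequence with the highest coefficient.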
Abstract:
PURPOSE Fundus autofluorescence (FAF) can be characterized not only by its intensity or emission spectrum, but also by its lifetime. As the lifetime of a fluorescent molecule is sensitive to its local microenvironment, this technique may provide more information than fundus autofluorescence intensity imaging. We report here the characteristics and repeatability of FAF lifetime measurements of the human macula using a new fluorescence lifetime imaging ophthalmoscope (FLIO). METHODS A total of 31 healthy phakic subjects with an age range from 22 to 61 years were included in this study. For image acquisition, a fluorescence lifetime ophthalmoscope based on a Heidelberg Engineering Spectralis system was used. Fluorescence lifetime maps of the retina were recorded in a short- (498-560 nm) and a long- (560-720 nm) spectral channel. For quantification of fluorescence lifetimes, a standard ETDRS grid was used. RESULTS Mean fluorescence lifetimes were shortest in the fovea: 208 picoseconds for the short-spectral channel and 239 picoseconds for the long-spectral channel. Fluorescence lifetimes increased from the central area to the outer ring of the ETDRS grid. The test-retest reliability of FLIO was very high for all ETDRS areas (Spearman's ρ = 0.80 for the short- and 0.97 for the long-spectral channel, P < 0.0001). Fluorescence lifetimes increased with age. CONCLUSIONS The FLIO allows reproducible measurements of fluorescence lifetimes of the macula in healthy subjects. Using custom-built software, we were able to quantify fluorescence lifetimes within the ETDRS grid. Establishing a clinically accessible standard against which to measure FAF lifetimes within the retina is a prerequisite for future studies in retinal disease.
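Quantifying lifetimes over an ETDRS grid amounts to averaging per-pixel values within concentric zones around the fovea. The sketch below assumes a simplified circular grid with illustrative pixel radii; it is not the custom software used in the study.

```python
import math

def etdrs_zone_means(lifetimes, center, radii=(50, 150, 300)):
    """Mean lifetime per concentric zone (fovea, inner ring, outer ring).

    `lifetimes` is a 2-D list of per-pixel lifetimes (e.g. picoseconds);
    the zone radii are in pixels and are illustrative, not the calibrated
    ETDRS dimensions. Pixels beyond the largest radius are ignored.
    """
    sums = [0.0] * len(radii)
    counts = [0] * len(radii)
    cy, cx = center
    for y, row in enumerate(lifetimes):
        for x, tau in enumerate(row):
            r = math.hypot(y - cy, x - cx)
            for zone, r_max in enumerate(radii):
                if r <= r_max:        # assign to innermost containing zone
                    sums[zone] += tau
                    counts[zone] += 1
                    break
    return [s / c if c else None for s, c in zip(sums, counts)]
```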
Abstract:
In this thesis, we develop an adaptive framework for Monte Carlo rendering, and more specifically for Monte Carlo Path Tracing (MCPT) and its derivatives. MCPT is attractive because it can handle a wide variety of light transport effects, such as depth of field, motion blur, indirect illumination, participating media, and others, in an elegant and unified framework. However, MCPT is a sampling-based approach, and is only guaranteed to converge in the limit, as the sampling rate grows to infinity. At finite sampling rates, MCPT renderings are often plagued by noise artifacts that can be visually distracting. The adaptive framework developed in this thesis leverages two core strategies to address noise artifacts in renderings: adaptive sampling and adaptive reconstruction. Adaptive sampling consists of increasing the sampling rate on a per-pixel basis, to ensure that the error of each pixel value falls below a predefined threshold. Adaptive reconstruction leverages the available samples on a per-pixel basis, in an attempt to achieve an optimal trade-off between minimizing the residual noise artifacts and preserving the edges in the image. In our framework, we greedily minimize the relative Mean Squared Error (rMSE) of the rendering by iterating over sampling and reconstruction steps. Given an initial set of samples, the reconstruction step aims at producing the rendering with the lowest rMSE on a per-pixel basis, and the next sampling step then further reduces the rMSE by distributing additional samples according to the magnitude of the residual rMSE of the reconstruction. This iterative approach tightly couples the adaptive sampling and adaptive reconstruction strategies, by ensuring that we sample densely only those regions of the image where adaptive reconstruction cannot properly resolve the noise.
In a first implementation of our framework, we demonstrate the usefulness of our greedy error minimization using a simple reconstruction scheme leveraging a filterbank of isotropic Gaussian filters. In a second implementation, we integrate a powerful edge-aware filter that can adapt to the anisotropy of the image. Finally, in a third implementation, we leverage auxiliary feature buffers that encode scene information (such as surface normals, position, or texture) to improve the robustness of the reconstruction in the presence of strong noise.
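The sampling step described above, distributing additional samples according to residual per-pixel error, can be sketched as a proportional allocation with largest-remainder rounding; the function below is a hypothetical stand-in for the thesis's sample-distribution logic.

```python
def allocate_samples(rmse, budget):
    """Distribute `budget` extra samples across pixels in proportion to
    their residual error estimate (largest-remainder rounding)."""
    total = sum(rmse)
    if total == 0:
        return [0] * len(rmse)
    ideal = [budget * e / total for e in rmse]
    alloc = [int(x) for x in ideal]
    remainder = budget - sum(alloc)
    # hand the leftover samples to the largest fractional parts
    order = sorted(range(len(rmse)), key=lambda i: ideal[i] - alloc[i],
                   reverse=True)
    for i in order[:remainder]:
        alloc[i] += 1
    return alloc
```

Iterating this allocation against a fresh reconstruction at each pass is what concentrates the budget in regions the filter cannot resolve.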
Abstract:
Image denoising continues to be an active research topic. Although state-of-the-art denoising methods are numerically impressive and approach theoretical limits, they suffer from visible artifacts. While they produce acceptable results for natural images, human eyes are less forgiving when viewing synthetic images. At the same time, current methods are becoming more complex, making analysis and implementation difficult. We propose image denoising as a simple physical process which progressively reduces noise by deterministic annealing. The results of our implementation are numerically and visually excellent. We further demonstrate that our method is particularly suited for synthetic images. Finally, we offer a new perspective on image denoising using robust estimators.
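A toy one-dimensional sketch of the general idea, assuming a robust neighborhood average whose similarity scale (the "temperature") shrinks between passes; this illustrates deterministic annealing in the denoising setting, not the paper's actual algorithm.

```python
import math

def anneal_denoise(signal, t_start=1.0, t_end=0.05, steps=10):
    """Progressively smooth a 1-D signal: each pass replaces a sample by a
    neighborhood average whose weights depend on intensity similarity, and
    the similarity scale ('temperature') shrinks geometrically between
    passes, so smoothing becomes increasingly edge-preserving."""
    x = list(signal)
    t = t_start
    decay = (t_end / t_start) ** (1.0 / max(steps - 1, 1))
    for _ in range(steps):
        new = []
        for i in range(len(x)):
            idx = [j for j in (i - 1, i, i + 1) if 0 <= j < len(x)]
            w = [math.exp(-((x[j] - x[i]) ** 2) / (2 * t * t)) for j in idx]
            new.append(sum(wj * x[j] for wj, j in zip(w, idx)) / sum(w))
        x = new
        t *= decay
    return x
```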
Abstract:
One of the most promising applications for the restoration of small or moderately sized focal articular lesions is mosaicplasty (MP). Although recurrent hemarthrosis is a rare complication after MP, various strategies have recently been designed to find an effective filling material to prevent postoperative bleeding from the donor site. The porous biodegradable polymer Polyactive (PA; a polyethylene glycol terephthalate/polybutylene terephthalate copolymer) represents a promising solution in this respect. A histological evaluation of the long-term PA-filled donor sites obtained from 10 experimental horses was performed. In this study, attention was primarily focused on the bone tissue developed in the plug. Computer-assisted image analysis and quantitative polarized light microscopic measurements of decalcified, longitudinally sectioned, dimethylmethylene blue (DMMB)- and picrosirius red (PS)-stained sections revealed that the coverage area of the bone trabeculae in the PA-filled donor tunnels was substantially enlarged (by 25%) compared to the neighboring cancellous bone. For this quantification, identical ROIs (regions of interest) were used and compared. The birefringence retardation values were also measured with a polarized light microscope using monochromatic light. Identical retardation values could be recorded from the bone trabeculae developed in the PA and in the neighboring bone, which indicates that the collagen orientation pattern does not differ significantly among these bone trabeculae. Based on our new data, we speculate that PA promotes bone formation, and that some of the currently identified degradation products of PA may enhance osteoconduction and osteoinduction inside the donor canal.
Abstract:
Accurate three-dimensional (3D) models of lumbar vertebrae are required for image-based 3D kinematics analysis. MRI or CT datasets are frequently used to derive 3D models but have the disadvantages that they are expensive, time-consuming, or involve ionizing radiation (e.g., CT acquisition). In this chapter, we present an alternative technique that can reconstruct a scaled 3D lumbar vertebral model from a single two-dimensional (2D) lateral fluoroscopic image and a statistical shape model. Cadaveric studies were conducted to verify the reconstruction accuracy by comparing the surface models reconstructed from a single lateral fluoroscopic image to the ground truth data from 3D CT segmentation. A mean reconstruction error between 0.7 and 1.4 mm was found.
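The core of a statistical shape model is a point-distribution model: a new shape instance is the mean shape plus a weighted sum of variation modes, with the weights fitted so that the projected model matches the fluoroscopic image. A minimal sketch of the instantiation step, with illustrative names and flattened coordinates:

```python
def instantiate_shape(mean_shape, modes, weights):
    """Shape instance from a point-distribution model: the mean shape plus
    a weighted sum of variation modes (each mode is a per-coordinate
    displacement, shapes given as flat coordinate lists)."""
    shape = [float(s) for s in mean_shape]
    for b, mode in zip(weights, modes):
        shape = [s + b * m for s, m in zip(shape, mode)]
    return shape
```

Fitting the weights (and a scale) against the 2D silhouette is the optimization the chapter's method performs; the sketch only shows the forward model.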
Abstract:
Automated identification of vertebrae from X-ray image(s) is an important step for various medical image computing tasks such as 2D/3D rigid and non-rigid registration. In this chapter we present a graphical model-based solution for automated vertebra identification from X-ray image(s). Our solution does not require a training process or training data, and it can automatically determine the number of vertebrae visible in the image(s). This is achieved by combining a graphical model-based maximum a posteriori (MAP) estimate with mean-shift-based clustering. Experiments conducted on simulated X-ray images, as well as on a low-dose, low-quality spinal X-ray image of a scoliotic patient, verified its performance.
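Mean-shift clustering, the second ingredient above, iteratively moves each point toward the average of its neighbors until points settle on density modes; counting the distinct modes is one way to count vertebrae without fixing their number in advance. A one-dimensional flat-kernel sketch (illustrative, not the chapter's implementation):

```python
def mean_shift_1d(points, bandwidth, iters=50):
    """Move each point toward the mean of the input points within
    `bandwidth` of it (flat kernel); points converge onto density modes."""
    modes = list(points)
    for _ in range(iters):
        modes = [
            sum(p for p in points if abs(p - m) <= bandwidth)
            / max(1, sum(1 for p in points if abs(p - m) <= bandwidth))
            for m in modes
        ]
    return modes
```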
Abstract:
In this paper, we propose a new method for fully automatic landmark detection and shape segmentation in X-ray images. To detect landmarks, we estimate the displacements from randomly sampled image patches to the (unknown) landmark positions, and then integrate these predictions via a voting scheme. Our key contribution is a new algorithm for estimating these displacements. Unlike other methods, where each image patch independently predicts its displacement, we jointly estimate the displacements from all patches together in a data-driven way, considering not only the training data but also geometric constraints on the test image. The displacement estimation is formulated as a convex optimization problem that can be solved efficiently. Finally, we use the sparse shape composition model as a priori information to regularize the landmark positions and thus generate the segmented shape contour. We validate our method on X-ray image datasets of three different anatomical structures: complete femur, proximal femur, and pelvis. Experiments show that our method is accurate and robust in landmark detection and, combined with the shape model, gives better or comparable performance in shape segmentation compared to state-of-the-art methods. Finally, a preliminary study using CT data shows the extensibility of our method to 3D data.
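The voting scheme can be sketched as follows: each patch predicts the landmark at its own centre plus its estimated displacement, and the most-voted position wins. Names and coordinates are illustrative; the paper's joint convex estimation of the displacements is not reproduced here.

```python
from collections import Counter

def vote_landmark(patch_centers, displacements):
    """Accumulate one vote per patch at (centre + displacement) and return
    the most-voted integer position (a minimal Hough-style voting step)."""
    votes = Counter(
        (cy + dy, cx + dx)
        for (cy, cx), (dy, dx) in zip(patch_centers, displacements))
    return votes.most_common(1)[0][0]
```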
Abstract:
The nematode Caenorhabditis elegans is a well-known model organism used to investigate fundamental questions in biology. Motility assays of this small roundworm are designed to study the relationships between genes and behavior. Commonly, motility analysis is used to classify nematode movements and characterize them quantitatively. Over the past years, C. elegans' motility has been studied across a wide range of environments, including crawling on substrates, swimming in fluids, and locomoting through microfluidic substrates. However, each environment often requires customized image processing tools relying on heuristic parameter tuning. In the present study, we propose a novel Multi-Environment Model Estimation (MEME) framework for automated image segmentation that is versatile across various environments. The MEME platform is constructed around the concept of Mixture of Gaussians (MOG) models, where statistical models for both the background environment and the nematode appearance are explicitly learned and used to accurately segment a target nematode. Our method is designed to reduce the burden often imposed on users: only a single image which includes a nematode in its environment must be provided for model learning. In addition, our platform enables the extraction of nematode ‘skeletons’ for straightforward motility quantification. We test our algorithm on various locomotive environments and compare performances with an intensity-based thresholding method. Overall, MEME outperforms the threshold-based approach for the overwhelming majority of cases examined. Ultimately, MEME provides researchers with an attractive platform for C. elegans' segmentation and ‘skeletonizing’ across a wide range of motility assays.
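The segmentation decision behind an MOG-style model can be illustrated with single-component Gaussians: label each pixel as worm or background according to which learned appearance model gives it the higher likelihood. This single-Gaussian stand-in is an assumption for illustration; the actual framework learns full mixtures from the provided image.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Probability density of a 1-D Gaussian at intensity x."""
    return (math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
            / (sigma * math.sqrt(2 * math.pi)))

def segment(pixels, bg, fg):
    """Label each pixel 1 (nematode) or 0 (background) by comparing its
    likelihood under the two learned intensity models (mu, sigma)."""
    return [1 if gaussian_pdf(p, *fg) > gaussian_pdf(p, *bg) else 0
            for p in pixels]
```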
Abstract:
Digital light, fluorescence and electron microscopy in combination with wavelength-dispersive spectroscopy were used to visualize individual polymers, air voids, cement phases and filler minerals in a polymer-modified cementitious tile adhesive. In order to investigate the evolution of the mortar microstructure and the processes involved in its formation, quantifications of the phase distribution in the mortar were performed, including phase-specific imaging and digital image analysis. The required sample preparation techniques and imaging-related topics are discussed. As a case study, the different techniques were applied to obtain a quantitative characterization of a specific mortar mixture. The results indicate that the mortar fractionates during different stages, ranging from the early fresh mortar to the final hardened mortar stage. This induces process-dependent enrichments of the phases at specific locations in the mortar. The approach presented provides important information for a comprehensive understanding of the functionality of polymer-modified mortars.
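In its simplest form, quantifying the phase distribution reduces to counting labelled pixels per phase in a segmented image. The sketch below (with hypothetical phase labels) computes area fractions from a 2-D label map; comparing such fractions between ROIs is how phase enrichment at specific locations can be detected.

```python
from collections import Counter

def phase_fractions(labels):
    """Area fraction of each phase in a segmented image, given as a 2-D
    list of per-pixel phase labels."""
    flat = [p for row in labels for p in row]
    counts = Counter(flat)
    total = len(flat)
    return {phase: n / total for phase, n in counts.items()}
```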