899 results for Multi-scale Fractal Dimension
Abstract:
Digital image processing is exploited in many diverse applications, but the size of digital images places excessive demands on current storage and transmission technology. Image data compression is required to permit further use of digital image processing. Conventional image compression techniques based on statistical analysis have reached a saturation level, so it is necessary to explore more radical methods. This thesis is concerned with novel methods, based on the use of fractals, for achieving significant compression of image data within reasonable processing time without introducing excessive distortion. Images are modelled as fractal data and this model is exploited directly by compression schemes. The validity of this is demonstrated by showing that the fractal complexity measure of fractal dimension is an excellent predictor of image compressibility. A method of fractal waveform coding is developed which has low computational demands and performs better than conventional waveform coding methods such as PCM and DPCM. Fractal techniques based on the use of space-filling curves are developed as a mechanism for hierarchical application of conventional techniques. Two particular applications are highlighted: the re-ordering of data during image scanning and the mapping of multi-dimensional data to one dimension. It is shown that there are many possible space-filling curves which may be used to scan images and that selection of an optimum curve leads to significantly improved data compression. The multi-dimensional mapping property of space-filling curves is used to substantially speed up the lookup process in vector quantisation. Iterated function systems are compared with vector quantisers, and the computational complexity of iterated function system encoding is also reduced by using the efficient matching algorithms identified for vector quantisers.
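The space-filling-curve scanning idea can be illustrated with a short sketch (not the thesis's actual implementation): the standard iterative distance-to-coordinate conversion for the Hilbert curve, used here to re-order the pixels of a square image of side 2^k.

```python
def hilbert_d2xy(order, d):
    """Map distance d along the Hilbert curve to (x, y) coordinates
    on a 2^order x 2^order grid (iterative quadrant-rotation form)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                     # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scan(image):
    """Return the pixels of a square 2^k x 2^k image (list of rows)
    re-ordered along the Hilbert curve."""
    n = len(image)
    order = n.bit_length() - 1
    return [image[y][x]
            for x, y in (hilbert_d2xy(order, d) for d in range(n * n))]
```

Consecutive pixels in such a scan are always spatial neighbours, which preserves local correlation better than raster order and is one reason curve choice affects compressibility.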
Abstract:
T-cell activation requires interaction of T-cell receptors (TCR) with peptide epitopes bound by major histocompatibility complex (MHC) proteins. This interaction occurs at a special cell-cell junction known as the immune or immunological synapse. Fluorescence microscopy has shown that the interplay among one agonist peptide-MHC (pMHC), one TCR and one CD4 provides the minimum complexity needed to trigger transient calcium signalling. We describe a computational approach to the study of the immune synapse. Using molecular dynamics simulation, we report here on a study of the smallest viable model, a TCR-pMHC-CD4 complex in a membrane environment. The computed structural and thermodynamic properties are in fair agreement with experiment. A number of biomolecules participate in the formation of the immunological synapse. Multi-scale molecular dynamics simulations may be the best opportunity we have to reach a full understanding of this remarkable supra-macromolecular event at a cell-cell junction.
Abstract:
2010 Mathematics Subject Classification: 65D18.
Abstract:
The freshwater Everglades is a complex system containing thousands of tree islands embedded within a marsh-grassland matrix. The tree island-marsh mosaic is shaped and maintained by hydrologic, edaphic and biological mechanisms that interact across multiple scales. Preserving tree islands requires a more integrated understanding of how scale-dependent phenomena interact in the larger freshwater system. The hierarchical patch dynamics paradigm provides a conceptual framework for exploring multi-scale interactions within complex systems. We used a three-tiered approach to examine the spatial variability and patterning of nutrients in relation to site parameters within and between two hydrologically defined Everglades landscapes: the freshwater Marl Prairie and the Ridge and Slough. Results were scale-dependent and complexly interrelated. Total carbon and nitrogen patterning were correlated with organic matter accumulation, driven by hydrologic conditions at the system scale. Total and bioavailable phosphorus were most strongly related to woody plant patterning within landscapes, and were found to be 3 to 11 times more concentrated in tree island soils compared to surrounding marshes. Below canopy resource islands in the slough were elongated in a downstream direction, indicating soil resource directional drift. Combined multi-scale results suggest that hydrology plays a significant role in landscape patterning and also the development and maintenance of tree islands. Once developed, tree islands appear to exert influence over the spatial distribution of nutrients, which can reciprocally affect other ecological processes.
Abstract:
Edible oil is an important contaminant in water and wastewater. Oil droplets smaller than 40 μm may remain in effluent as an emulsion and combine with other contaminants in water. Coagulation/flocculation processes are used to remove oil droplets from water and wastewater. By adding a polymer at the proper dose, small oil droplets can be flocculated and separated from water. The purpose of this study was to characterize and analyze the morphology of flocs and floc formation in edible oil-water emulsions by using microscopic image analysis techniques. The fractal dimension, concentration of polymer, and effects of pH and temperature are investigated and analyzed to develop a fractal model of the flocs. Three types of edible oil (corn, olive, and sunflower oil) at concentrations of 600 ppm (by volume) were used to determine the optimum polymer dosage and the effects of pH and temperature. To find the optimum polymer dose, polymer was added to the oil-water emulsions at concentrations of 0.5, 1.0, 1.5, 2.0, 3.0 and 3.5 ppm (by volume). The clearest supernatants from flocculation of corn, olive, and sunflower oil were achieved at a polymer dosage of 3.0 ppm, producing turbidities of 4.52, 12.90, and 13.10 NTU, respectively. This concentration of polymer was subsequently used to study the effect of pH and temperature on flocculation. The effect of pH was studied at pH 5, 7, 9, and 11 at 30°C. Microscopic image analysis was used to investigate the morphology of flocs in terms of fractal dimension, radius of oil droplets trapped in flocs, floc size, and histograms of oil droplet distribution. Fractal dimension indicates the density of oil droplets captured in flocs. By comparison of fractal dimensions, pH was found to be one of the most important factors controlling droplet flocculation. Neutral pH (pH 7) showed the highest degree of flocculation, while acidic (pH 5) and basic conditions (pH 9 and pH 11) showed low flocculation efficiency.
The fractal dimensions obtained from flocculation of corn, olive, and sunflower oil at pH 7 and 30°C were 1.2763, 1.3592, and 1.4413, respectively. The effect of temperature was explored at 20°, 30°, and 40°C at pH 7. The results of flocculation at pH 7 and different temperatures revealed that temperature significantly affected flocculation. At 20°, 30°, and 40°C, the fractal dimensions of the flocs were 1.82, 1.28, and 1.29 for corn oil; 1.62, 1.36, and 1.42 for olive oil; and 1.36, 1.44, and 1.28 for sunflower oil, respectively. After comparing fractal dimension, radius of captured oil droplets, and floc length for each oil type, the optimal flocculation temperature was determined to be 30°C.
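The fractal dimension of a floc image is commonly estimated by box counting; the following minimal numpy sketch (assuming a pre-thresholded binary mask, not the study's exact image-analysis pipeline) computes it as a log-log slope.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a 2-D binary mask as the
    slope of log N(s) versus log(1/s), where N(s) counts the s x s
    boxes that contain at least one foreground pixel."""
    counts = []
    for s in sizes:
        n_boxes = 0
        for i in range(0, mask.shape[0], s):
            for j in range(0, mask.shape[1], s):
                if mask[i:i + s, j:j + s].any():
                    n_boxes += 1
        counts.append(n_boxes)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

A space-filling cluster approaches dimension 2, while a sparse dendritic floc falls between 1 and 2, which is why the measure tracks how densely droplets pack into flocs.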
Abstract:
In this work we investigated some aspects of the two-dimensional flow of a viscous Newtonian fluid through a disordered porous medium, modeled by a random fractal system similar to the Sierpinski carpet. This fractal is formed by obstacles of various sizes whose distribution function follows a power law, randomly placed in a rectangular channel. The velocity field and other details of the fluid dynamics are obtained by numerically solving the Navier-Stokes and continuity equations at the pore level, where the flow of fluids in porous media actually occurs. The results of the numerical simulations allowed us to analyze the distribution of shear stresses developed at the solid-fluid interfaces, and to find algebraic relations between the viscous or friction forces and the geometric parameters of the model, including its fractal dimension. Based on the numerical results, we propose scaling relations involving the relevant parameters of the phenomenon, quantifying the fractions of these forces with respect to size classes of obstacles. Finally, it was also possible to make inferences about the fluctuations in the distribution of viscous stresses developed on the surfaces of the obstacles.
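A power-law distribution of obstacle sizes, as used to build such a fractal model, can be sampled by inverse-transform sampling; the exponent and size bounds below are illustrative, not the values used in the study.

```python
import random

def power_law_sizes(n, s_min, s_max, alpha, seed=42):
    """Draw n obstacle sizes with density p(s) ~ s^(-alpha) on
    [s_min, s_max] by inverse-transform sampling (valid for alpha != 1)."""
    rng = random.Random(seed)
    a = 1.0 - alpha
    lo, hi = s_min ** a, s_max ** a
    return [(lo + rng.random() * (hi - lo)) ** (1.0 / a) for _ in range(n)]
```

For alpha > 1 the sample is dominated by small obstacles, reproducing the many-small/few-large hierarchy characteristic of Sierpinski-carpet-like media.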
Abstract:
The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant amount of tumor cells while sparing the surrounding healthy tissues/organs. Precise delineation of treatment and avoidance volumes is the key to precision radiation therapy. In recent years, considerable clinical and research efforts have been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft tissue contrast and functional imaging possibilities. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of tissue microvasculature. Its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI's clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI imaging implementation and the need for novel DCE-MRI data analysis methods that yield richer functional heterogeneity information.
This study aims at improving current DCE-MRI techniques and developing new DCE-MRI analysis methods for particular radiotherapy assessment. Thus, the study is naturally divided into two parts. The first part focuses on DCE-MRI temporal resolution as one of the key DCE-MRI technical factors, and some improvements regarding DCE-MRI temporal resolution are proposed; the second part explores the potential value of image heterogeneity analysis and multiple PK model combination for therapeutic response assessment, and several novel DCE-MRI data analysis methods are developed.
I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel MR image iterative reconstruction algorithm was studied for DCE-MRI reconstruction. This algorithm was built on the recently developed compressed sensing (CS) theory. By utilizing a limited k-space acquisition with shorter imaging time, images can be reconstructed in an iterative fashion under the regularization of a newly proposed total generalized variation (TGV) penalty term. In an IRB-approved retrospective study of brain radiosurgery patients' DCE-MRI scans, the clinically obtained image data were selected as reference data, and the simulated accelerated k-space acquisition was generated by undersampling the reference images' full k-space with designed sampling grids. Two undersampling strategies were proposed: 1) a radial multi-ray grid with a special angular distribution was adopted to sample each slice of the full k-space; 2) a Cartesian random sampling grid series with spatiotemporal constraints from adjacent frames was adopted to sample the dynamic k-space series at a slice location. Two sets of PK parameter maps were generated from the undersampled data and from the fully-sampled data, respectively. Multiple quantitative measurements and statistical studies were performed to evaluate the accuracy of PK maps generated from the undersampled data in reference to the PK maps generated from the fully-sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from the DCE images that were reconstructed using undersampled data, and no statistically significant differences were found between the regional PK mean values from the undersampled and fully-sampled data sets. These results suggest that DCE-MRI acceleration using the investigated image reconstruction method is feasible and promising.
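The radial multi-ray undersampling strategy can be sketched as a boolean k-space mask; the spoke count and the random global angular offset below are illustrative simplifications, not the study's exact sampling grids.

```python
import numpy as np

def radial_mask(n, num_rays, seed=0):
    """Boolean k-space sampling mask for an n x n slice: num_rays
    radial spokes through the k-space centre, rotated by a random
    global angular offset (a simplification of angularly distributed
    radial grids)."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n, n), dtype=bool)
    centre = (n - 1) / 2.0
    offset = rng.uniform(0.0, np.pi)
    for k in range(num_rays):
        theta = offset + np.pi * k / num_rays
        for r in np.linspace(-centre, centre, 2 * n):
            i = int(round(centre + r * np.sin(theta)))
            j = int(round(centre + r * np.cos(theta)))
            mask[i, j] = True
    return mask
```

The acceleration factor is roughly n² divided by the number of sampled points; radial spokes oversample the k-space centre, where most image energy lies, which is what makes aggressive acceleration tolerable.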
Second, for high temporal resolution DCE-MRI, a new PK model fitting method was developed to solve for PK parameters with better calculation accuracy and efficiency. This method is based on a derivative-based deformation of the commonly used Tofts PK model, which is usually presented as an integral expression. The method also includes an advanced Kolmogorov-Zurbenko (KZ) filter to remove potential noise effects in the data, and solves for the PK parameters as a linear problem in matrix form. In the computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and data noise levels. Results showed that at both high temporal resolutions (<1 s) and clinically feasible temporal resolution (~5 s), the new method calculated PK parameters more accurately than current calculation methods at clinically relevant noise levels; at high temporal resolutions, the calculation efficiency of the new method exceeded that of current methods by about two orders of magnitude. In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that this new method enables accurate and efficient PK model fitting for high temporal resolution DCE-MRI.
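The linear-matrix idea can be sketched as follows: integrating the derivative form dCt/dt = Ktrans·Cp − kep·Ct gives Ct(t) = Ktrans·∫Cp − kep·∫Ct, which is linear in the two parameters. The sketch below solves this by least squares and omits the KZ filtering step, so it is a simplification of, not a substitute for, the proposed method.

```python
import numpy as np

def fit_tofts_linear(t, cp, ct):
    """Fit Ktrans and kep of the Tofts model by linear least squares,
    using the integrated relation
        ct(t) = Ktrans * int_0^t cp - kep * int_0^t ct
    with trapezoidal cumulative integrals."""
    icp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    ict = np.concatenate(([0.0], np.cumsum(0.5 * (ct[1:] + ct[:-1]) * np.diff(t))))
    design = np.column_stack([icp, -ict])
    ktrans, kep = np.linalg.lstsq(design, ct, rcond=None)[0]
    return ktrans, kep
```

Because the fit is a single linear solve rather than an iterative nonlinear optimization, it is far cheaper per voxel, which is the source of the efficiency gain at high temporal resolution.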
II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part aims at methodological developments along two approaches. The first is to develop a model-free analysis method for DCE-MRI functional heterogeneity evaluation. This approach is inspired by the rationale that radiotherapy-induced functional change could be heterogeneous across the treatment area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, randomly assigned treatment and control groups received multiple-fraction treatments, with one pre-treatment and multiple post-treatment high-spatiotemporal-resolution DCE-MRI scans. In the post-treatment scan two weeks after the start of treatment, the investigated Rényi dimensions of the classic PK rate constant map demonstrated significant differences between the treatment and control groups; when Rényi dimensions were adopted for treatment/control group classification, the achieved accuracy was higher than that obtained using conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed. It addresses the lack of temporal information and the poor calculation efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM had overall better performance than the corresponding curves derived from current GLCOM techniques in treatment/control separation and classification. The second developed method is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity during contrast agent uptake on DCE images.
In the small-animal experiment mentioned before, the selected parameters from dynamic FSD analysis showed significant differences between treatment and control groups as early as after one treatment fraction; in contrast, metrics from conventional PK analysis showed significant differences only after three treatment fractions. When using dynamic FSD parameters, the treatment/control group classification after the first treatment fraction improved over that obtained using conventional PK statistics. These results suggest that this novel method is promising for capturing early therapeutic response.
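The Rényi dimensions used in the pilot work follow the generalised box-counting construction D_q = (1/(q−1))·lim log Σᵢ pᵢ(s)^q / log s; a minimal sketch on a non-negative parameter map is given below (box sizes and q values are illustrative, and the exact estimator in the study may differ).

```python
import numpy as np

def renyi_dimension(density, q, sizes=(1, 2, 4, 8)):
    """Estimate the Renyi (generalised) dimension D_q of a non-negative
    2-D map as 1/(q-1) times the slope of log sum_i p_i(s)^q versus
    log s, where p_i(s) is the mass fraction in box i of side s (q != 1)."""
    total = density.sum()
    logs, logsums = [], []
    for s in sizes:
        p = []
        for i in range(0, density.shape[0], s):
            for j in range(0, density.shape[1], s):
                m = density[i:i + s, j:j + s].sum()
                if m > 0.0:
                    p.append(m / total)
        logs.append(np.log(s))
        logsums.append(np.log(np.sum(np.asarray(p) ** q)))
    slope, _ = np.polyfit(logs, logsums, 1)
    return slope / (q - 1.0)
```

D_0 reduces to the ordinary box-counting dimension, while larger q weights high-intensity regions more, so the spectrum over q captures the heterogeneity of a PK map rather than a single summary value.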
The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model or its alternative version is widely adopted for DCE-MRI analysis as a gold-standard approach for therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification. In spite of its richer biological assumptions, its application in therapeutic response assessment has been limited. It is intriguing to combine information from the SS model and the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using regional mean PK parameter comparison. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model proved superior to that from the Tofts model in treatment/control separation and classification. Furthermore, a novel biomarker was designed to integrate the PK rate constants from the two models. When evaluated in the biological subvolume, this biomarker was able to reflect significant treatment/control differences in both post-treatment evaluations. These results confirm the potential value of the SS model, as well as its combination with the Tofts model, for therapeutic response assessment.
In summary, this study addressed two problems of DCE-MRI application in radiotherapy assessment. In the first part, a method of accelerating DCE-MRI acquisition for better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high temporal resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented work could benefit the future routine clinical application of DCE-MRI in radiotherapy assessment.
Abstract:
RNA viruses are an important cause of global morbidity and mortality. The rapid evolutionary rates of RNA virus pathogens, caused by high replication rates and error-prone polymerases, can make the pathogens difficult to control. RNA viruses can undergo immune escape within their hosts and develop resistance to the treatment and vaccines we design to fight them. Understanding the spread and evolution of RNA pathogens is essential for reducing human suffering. In this dissertation, I make use of the rapid evolutionary rate of viral pathogens to answer several questions about how RNA viruses spread and evolve. To address each of the questions, I link mathematical techniques for modeling viral population dynamics with phylogenetic and coalescent techniques for analyzing and modeling viral genetic sequences and evolution. The first project uses multi-scale mechanistic modeling to show that decreases in viral substitution rates over the course of an acute infection, combined with the timing of infectious hosts transmitting new infections to susceptible individuals, can account for discrepancies in viral substitution rates in different host populations. The second project combines coalescent models with within-host mathematical models to identify driving evolutionary forces in chronic hepatitis C virus infection. The third project compares the effects of intrinsic and extrinsic viral transmission rate variation on viral phylogenies.
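The within-host population dynamics that underlie such multi-scale models are commonly described by the target-cell-limited ODE system; the sketch below uses Euler integration with illustrative parameter values, not values fitted in the dissertation.

```python
def within_host_dynamics(days=30.0, dt=0.001, beta=2e-5, delta=1.0,
                         p=100.0, c=5.0, t0=1e4, v0=1.0):
    """Euler integration of the target-cell-limited model of acute
    viral infection:
        dT/dt = -beta*T*V
        dI/dt =  beta*T*V - delta*I
        dV/dt =  p*I - c*V
    Returns (time, viral load) samples taken every 0.1 days."""
    t_cells, i_cells, virus = t0, 0.0, v0
    samples = []
    record_every = int(round(0.1 / dt))
    for step in range(int(round(days / dt))):
        if step % record_every == 0:
            samples.append((step * dt, virus))
        dT = -beta * t_cells * virus
        dI = beta * t_cells * virus - delta * i_cells
        dV = p * i_cells - c * virus
        t_cells += dt * dT
        i_cells += dt * dI
        virus += dt * dV
    return samples
```

With these parameters the basic reproductive number beta·p·T(0)/(delta·c) exceeds one, so the viral load rises, peaks as target cells are depleted, and then declines; the timing of that trajectory is what links within-host dynamics to between-host substitution rates.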
Abstract:
With the increasing spatial resolution of satellite optical sensors, new strategies must be developed to classify remote sensing images. Indeed, the abundance of detail in these images greatly reduces the effectiveness of spectral classifications; many textural classification methods, notably statistical approaches, are no longer suitable. Conversely, structural approaches offer an interesting opening: these object-oriented approaches study the structure of the image in order to interpret its meaning. An algorithm of this type is proposed in the first part of this thesis. Based on the detection and analysis of keypoints (KPC: KeyPoint-based Classification), it offers an effective solution to the problem of classifying images of very high spatial resolution. The classifications performed on the data demonstrate in particular its ability to differentiate visually similar textures. Moreover, it has been shown in the literature that evidential fusion, based on Dempster-Shafer theory, is well suited to remote sensing images thanks to its ability to handle concepts such as ambiguity and uncertainty. Few studies, however, have addressed the application of this theory to complex textural data such as that produced by structural classifications. The second part of this thesis aims to fill this gap by examining the fusion of multi-scale KPC classifications through Dempster-Shafer theory. The tests carried out show that this multi-scale approach improves the final classification when the initial image is of low quality. Moreover, the study highlights the potential for improvement offered by estimating the reliability of the intermediate classifications, and provides avenues for carrying out these estimates.
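Dempster's rule of combination, the core operation of such evidential fusion, can be sketched for two mass functions; the class labels in the usage example are illustrative, not the thesis's classes.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions, given as
    dicts mapping frozenset focal elements to masses: multiply masses
    over all pairs, keep non-empty intersections, and renormalise by
    1 - K, where K is the conflict mass (empty intersections)."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

For example, fusing two scale-specific classifiers that assign mass to "urban", "forest", or the whole frame of discernment when uncertain redistributes the conflicting mass and sharpens the committed hypotheses, which is exactly how ambiguity and uncertainty are handled in this framework.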
Abstract:
We review the use of neural field models for modelling the brain at the large scales necessary for interpreting EEG, fMRI, MEG and optical imaging data. Although limited to coarse-grained or mean-field activity, neural field models provide a framework for unifying data from different imaging modalities. Starting with a description of neural mass models, we build up to spatially extended cortical models of layered two-dimensional sheets with long-range axonal connections mediating synaptic interactions. Reformulations of the fundamental non-local mathematical model in terms of more familiar local differential (brain wave) equations are described. Techniques for the analysis of such models, including how to determine the onset of spatio-temporal pattern-forming instabilities, are reviewed. Extensions of the basic formalism to treat refractoriness, adaptive feedback and inhomogeneous connectivity are described, along with open challenges for the development of multi-scale models that can integrate macroscopic models at large spatial scales with models at the microscopic scale.
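In its simplest 1-D Amari form, the non-local model described above reads du/dt = −u + w∗f(u); the sketch below integrates it on a ring with an illustrative Mexican-hat connectivity and a Heaviside firing rate (all parameter values are assumptions for demonstration, not taken from the review).

```python
import numpy as np

def simulate_neural_field(n=128, steps=200, dt=0.1, theta=0.3):
    """Euler integration of a 1-D Amari-style neural field on a ring,
        du/dt = -u + w * f(u),
    with Mexican-hat connectivity w (local excitation, lateral
    inhibition), a Heaviside firing rate f(u) = H(u - theta), and
    FFT-based circular convolution."""
    x = np.linspace(-np.pi, np.pi, n, endpoint=False)
    dx = x[1] - x[0]
    w = 2.0 * np.exp(-(x / 0.5) ** 2) - 0.8 * np.exp(-(x / 1.5) ** 2)
    w_hat = np.fft.fft(np.fft.ifftshift(w))
    u = 0.5 * np.exp(-(x / 0.3) ** 2)        # localised initial activity
    for _ in range(steps):
        f = (u > theta).astype(float)        # Heaviside firing rate
        conv = np.real(np.fft.ifft(w_hat * np.fft.fft(f))) * dx
        u += dt * (-u + conv)
    return x, u
```

With these parameters the initial activity settles into a stable localised bump rather than dying out or invading the whole domain, the classic existence result for lateral-inhibition fields.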
Abstract:
Over recent decades, remote sensing has emerged as an effective tool for improving agricultural productivity. In particular, many works have dealt with the problem of identifying characteristics or phenomena of crops and orchards on different scales using remotely sensed images. Since natural processes are scale dependent and most of them are hierarchically structured, determining optimal study scales is mandatory for understanding these processes and their interactions. The concept of multi-scale/multi-resolution inherent to OBIA methodologies allows the scale problem to be dealt with, but this requires multi-scale and hierarchical segmentation algorithms. The question that remains unsolved is how to determine the segmentation scale that allows different objects and phenomena to be characterized in a single image. In this work, an adaptation of the Simple Linear Iterative Clustering (SLIC) algorithm to perform a multi-scale hierarchical segmentation of satellite images is proposed. The selection of the optimal multi-scale segmentation for different regions of the image is carried out by evaluating the intra-variability and inter-heterogeneity of the regions obtained on each scale with respect to the parent regions defined by the coarsest scale. To achieve this goal, an objective function that combines weighted variance and the global Moran index has been used. Two different kinds of experiment have been carried out, generating the number of regions on each scale through linear and dyadic approaches. This methodology has allowed, on the one hand, the detection of objects on different scales and, on the other, their representation in a single image. Altogether, the procedure provides the user with a better comprehension of the land cover, the objects on it and the phenomena occurring.
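The global Moran index that enters the objective function measures spatial autocorrelation among neighbouring regions; the minimal sketch below takes a precomputed binary spatial weights matrix (its construction from the segmentation adjacency is left to the caller and is an assumption of this example).

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran index of spatial autocorrelation:
        I = (n / sum(W)) * (z^T W z) / (z^T z),
    with z the mean-centred values and W a (here binary) spatial
    weights matrix over neighbouring regions."""
    z = np.asarray(values, dtype=float)
    z = z - z.mean()
    w = np.asarray(weights, dtype=float)
    return (len(z) / w.sum()) * (z @ w @ z) / (z @ z)
```

Positive I means similar values cluster in space (under-segmentation risk), while negative I indicates checkerboard-like alternation; combining it with weighted variance balances inter-region heterogeneity against intra-region homogeneity.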
Abstract:
Integration, inclusion, and equity constitute fundamental dimensions of democracy in post-World War II societies and their institutions. The study presented here reports upon the ways in which individuals and institutions both use and account for the roles that technologies, including ICT, play in disabling and enabling access for learning in higher education for all. Technological innovations during the 20th and 21st centuries, including ICT, have been heralded as holding significant promise for revolutionizing issues of access in societal institutions like schools, healthcare services, etc. (at least in the global North). Taking a socially oriented perspective, the study presented in this paper focuses on an ethnographically framed analysis of two datasets that critically explores the role that technologies, including ICT, play in higher education for individuals who are “differently abled” and who constitute a variation on a continuum of capabilities. Functionality as a dimension of everyday life in higher education in the 21st century is explored through the analysis of (i) case studies of two “differently abled” students in Sweden and (ii) current support services at universities in Sweden. The findings make visible the work that institutions and their members do through analyses of the organization of time and space and the use of technologies in institutional settings against the backdrop of individuals’ accountings and life trajectories. This study also highlights the relevance of multi-scale data analyses for revisiting the ways in which identity positions become framed or understood within higher education.
Abstract:
In medicine, innovation depends on better knowledge of the human body's mechanisms, which constitute a complex system of multi-scale constituents. Unraveling the complexity underlying diseases proves challenging, and a deep understanding of their inner workings requires dealing with much heterogeneous information. Exploring the molecular status and the organization of genes, proteins and metabolites provides insights into what drives a disease, from aggressiveness to curability. Molecular constituents, however, are only the building blocks of the human body and cannot currently tell the whole story of diseases. This is why attention is now growing towards the joint exploitation of multi-scale information, and holistic methods are drawing interest for addressing the problem of integrating heterogeneous data. The heterogeneity may derive from the diversity across data types and from the diversity within diseases. Here, four studies conducted data integration using custom-designed workflows that implement novel methods and views to tackle the heterogeneous characterization of diseases. The first study was devoted to determining shared gene regulatory signatures in onco-hematology, and it showed partial co-regulation across blood-related diseases. The second study focused on Acute Myeloid Leukemia and refined the unsupervised integration of genomic alterations, which turned out to better resemble clinical practice. In the third study, network integration for atherosclerosis demonstrated, as a proof of concept, the impact of network intelligibility when it comes to modelling heterogeneous data, and was shown to accelerate the identification of new potential pharmaceutical targets. Lastly, the fourth study introduced a new method to integrate multiple data types into a unique latent heterogeneous representation, which facilitated the selection of important data types to predict the tumour stage of invasive ductal carcinoma.
The results of these four studies laid the groundwork to ease the detection of new biomarkers ultimately beneficial to medical practice and to the ever-growing field of Personalized Medicine.