521 results for Tikhonov regularization
Abstract:
A quantization scheme is suggested for a spatially inhomogeneous 1+1 Bianchi I model. The scheme consists of quantizing the equations of motion and yields operator (so-called quasi-Heisenberg) equations describing the explicit evolution of the system. A particular gauge suitable for quantization is proposed. The Wheeler-DeWitt equation is considered in the vicinity of zero scale factor and is used to construct the space in which the quasi-Heisenberg operators act. Spatial discretization is suggested as a UV regularization procedure for the equations of motion.
Abstract:
We propose and investigate an application of the method of fundamental solutions (MFS) to the radially symmetric and axisymmetric backward heat conduction problem (BHCP) in a solid or hollow cylinder. In the BHCP, the initial temperature is to be determined from the temperature measurements at a later time. This is an inverse and ill-posed problem, and we employ and generalize the MFS regularization approach [B.T. Johansson and D. Lesnic, A method of fundamental solutions for transient heat conduction, Eng. Anal. Boundary Elements 32 (2008), pp. 697–703] for the time-dependent heat equation to obtain a stable and accurate numerical approximation with small computational cost.
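As an illustration of the regularization step that MFS-type discretizations typically rely on, the following minimal Python sketch solves an ill-conditioned collocation system by Tikhonov-regularized least squares. The matrix, data and regularization parameters are synthetic placeholders, not the kernel or calibration used in the cited work.

import numpy as np

def tikhonov_solve(A, b, lam):
    # Minimize ||A x - b||^2 + lam * ||x||^2 via the regularized normal equations.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Smooth (hence ill-conditioned) toy collocation matrix standing in for an MFS system.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 40)
s = np.linspace(1.1, 2.0, 40)
A = np.exp(-np.subtract.outer(t, s) ** 2)
x_true = np.sin(2.0 * np.pi * s)
b = A @ x_true + 1e-3 * rng.standard_normal(t.size)   # noisy "temperature measurements"

for lam in (1e-8, 1e-4, 1e-1):
    x = tikhonov_solve(A, b, lam)
    print(lam, np.linalg.norm(A @ x - b), np.linalg.norm(x))   # residual vs. solution-norm trade-off

Larger lam stabilizes the solution at the price of a larger residual; choosing lam (e.g. by the discrepancy principle or L-curve) is the usual tuning step in such schemes.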
Abstract:
The great demand for Federal Institutions of Higher Education (IFES) designs, triggered by a favourable political moment, boosts the public works market and brings with it the stigma of seeking the lowest cost while complying with federal Law 8.666/93 (Public Bidding Law). In this context, this research analyzes compliance with Fire Safety normative requirements in IFES architectural designs, taking this as a measure of design quality. The study used IFES designs, specifically from UFERSA – Universidade Federal Rural do Semiárido and UFRN – Universidade Federal do Rio Grande do Norte, selected for the relationship between their use and the public served and for the replication of these buildings in construction. The research identified the Fire Safety legislation applicable to the designs in question and determined which required conditions fall within the architect's purview or affect the architectural design. Once the requirements were tabulated, data and measurements gathered from the blueprints were compared against them to verify compliance. The results of this evaluation reveal that the minimum requirements were not fulfilled and that the IFES architectural designs taken as the object of this study will certainly face restrictions in their regularization process with the Fire Department. It is concluded that IFES designs need improvement in order to meet minimum fire safety regulations and to improve in quality. Moreover, the results point to the understanding that the level of knowledge about Fire Safety that architects receive in undergraduate education is insufficient for the appropriate elaboration of architectural designs in this area.
Abstract:
In the late 1980s, the quilombola (or maroon) communities emerged on the Brazilian public scene. They established themselves as new collective subjects and ethnic groups, in a historical moment of sensitive political changes amid several social conflicts and struggles, both in Brazil and in Latin America. Because of their socio-cultural and historical singularities, these communities have self-identified under the same collective expression and have organized in search of recognition of and respect for their rights. Quilombo communities and others self-labeled as "traditional communities" seek to reaffirm their differences in opposition to a conscious colonizing cultural project and to re-signify their memories and traditions, which serve as references in the construction of alternative production projects and community organization. One of the distinguishing characteristics of this quilombola political emergence is the territorial nature of the struggles, manifested in at least two directions: on the one hand, the struggle for legal and formal recognition of a given space, i.e., the regularization and titling of occupied territories, considering that the Brazilian Constitution of 1988 recognizes the right of these communities to the definitive possession of their traditional lands; on the other hand, the struggle for recognition of their territoriality in a broader sense, not necessarily restricted to the demarcated area, but as the recognition of a culture and a way of life that originated historically in these territories. The current accomplishments and challenges of the Brazilian quilombola communities are well exemplified by the quilombo of Acauã, in the Poço Branco municipality of Rio Grande do Norte. The last fifteen years have been marked by important changes in this community, which has gained visibility and has emerged as a new political player. Acauã identified itself as a quilombola community in 2004, the same year in which it formalized its political structure through the creation of the Association of Residents of Quilombo Acauã (AMQA, in Portuguese). Also in 2004, it requested that the National Institute of Colonization and Land Reform (INCRA, in Portuguese) open the process for the regularization and titling of the quilombo territory, which is at an advanced stage but so far without definitive resolution. This study aims to understand the process of territorialization (the struggle for territorial claims) carried out over the last fifteen years by the community of Acauã.
Abstract:
In oil prospecting, seismic data are usually irregularly and sparsely sampled along the spatial coordinates due to obstacles in the placement of geophones. Fourier methods provide a way to regularize seismic data and are efficient when the input data are sampled on a regular grid. However, when these methods are applied to irregularly sampled data, the orthogonality among the Fourier components is broken and the energy of a Fourier component may "leak" into other components, a phenomenon called "spectral leakage". The objective of this research is to study methods for the spectral representation of irregularly sampled data. In particular, we present the basic structure of the NDFT (nonuniform discrete Fourier transform), study its properties, and demonstrate its potential in the processing of the seismic signal. Along the way we study the FFT (fast Fourier transform) and the NFFT (nonuniform fast Fourier transform), which rapidly compute the DFT (discrete Fourier transform) and the NDFT, respectively. We compare the recovery of the signal using the FFT, DFT and NFFT. We then approach the interpolation of seismic traces using the ALFT (antileakage Fourier transform) to overcome the spectral leakage caused by uneven sampling. Applications to synthetic and real data showed that the ALFT method works well on seismic data from complex geology, suffers little from irregular spatial sampling and edge effects, and is robust and stable with noisy data. However, it is not as efficient as the FFT, and its reconstruction is not as good in the case of irregular coverage with large holes in the acquisition.
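For reference, here is a minimal sketch of the direct (O(NK)) nonuniform DFT summation discussed above, applied to an irregularly sampled sinusoid so that the spectral leakage caused by uneven sampling is visible. The sample positions, frequency grid and 5-cycle test signal are illustrative choices only; the fast NFFT and the ALFT algorithm are not reproduced here.

import numpy as np

def ndft(values, positions, freqs):
    # Direct nonuniform DFT: X[k] = sum_n values[n] * exp(-2j*pi*freqs[k]*positions[n]).
    return np.exp(-2j * np.pi * np.outer(freqs, positions)) @ values

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 64))      # irregular sample positions (gaps mimic missing geophones)
s = np.cos(2.0 * np.pi * 5.0 * x)           # single 5-cycle component over the interval
freqs = np.arange(-16, 17)                  # integer frequencies to evaluate

X = ndft(s, x, freqs)
# With regular sampling the energy would sit at k = +/-5; with irregular sampling
# part of it "leaks" into neighbouring frequencies.
print(np.round(np.abs(X) / x.size, 3))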
Abstract:
Marine spatial planning and ecological research call for high-resolution species distribution data. However, those data are still not available for most marine large vertebrates. The dynamic nature of oceanographic processes and the wide-ranging behavior of many marine vertebrates create further difficulties, as distribution data must incorporate both the spatial and temporal dimensions. Cetaceans play an essential role in structuring and maintaining marine ecosystems and face increasing threats from human activities. The Azores holds a high diversity of cetaceans but the information about spatial and temporal patterns of distribution for this marine megafauna group in the region is still very limited. To tackle this issue, we created monthly predictive cetacean distribution maps for spring and summer months, using data collected by the Azores Fisheries Observer Programme between 2004 and 2009. We then combined the individual predictive maps to obtain species richness maps for the same period. Our results reflect a great heterogeneity in distribution among species and within species among different months. This heterogeneity reflects a contrasting influence of oceanographic processes on the distribution of cetacean species. However, some persistent areas of increased species richness could also be identified from our results. We argue that policies aimed at effectively protecting cetaceans and their habitats must include the principle of dynamic ocean management coupled with other area-based management such as marine spatial planning.
Abstract:
In this study, we draw on the theoretical assumptions of French Discourse Analysis in order to analyze the effects of the demand for objectification of language in the context of vestibular (university entrance exam) essays. More specifically, we analyze how this objectification operates through the discourses constructed by the traditional vestibular exam, given the requirement that students' essays contain paraphrases of statements from the motivating texts (TM) of the test. From our perspective, the objectification mechanism of language in the vestibular, the paraphrase, with its logic of clarity and non-contradiction of ideas, works through the (in)determination of senses both in the order of its discourse and in its practice: the correction of the vestibular essay. Therefore, in spite of what is assumed to guarantee language at the moment of the vestibular essay, we suggest that there are conflicts of regularization and recognition of the same senses (the constitutive senses of the TM) in the evaluative discourse of two vestibular-essay correctors (CA and CB). These correctors, with their histories of reading (grammar and Textual Linguistics), strain the concept of paraphrase adopted by the vestibular institution for the correction of students' essays. This strain creates a dispute between discourses: the discourse of knowledge (university policy) versus the discourse of production (neoliberal policy), the latter being a reading policy that favors literal meanings and consensus. Because of all this, we ask: what effects of sense are produced in (and about) vestibular essays by the demand to determine the saying instituted there? To answer this question, we analyze excerpts from the documents that regulate the vestibular exam in our country (institutional texts) and two vestibular essays in which, according to the judgment of correctors CA and CB, paraphrases of TM statements sometimes appear and sometimes do not. The analysis points to effects of sense of the process of objectification of the saying in the vestibular, primarily the rarefaction of the legal position of the subject-of-knowing through the current institution of the subject-of-making. Moreover, our work addresses affiliations of sense concerning the subject-discourse relationship in the evaluative exercise of vestibular essays, on the question of authorship.
Abstract:
Image super-resolution is defined as a class of techniques that enhance the spatial resolution of images. Super-resolution methods can be subdivided into single-image and multi-image methods. This thesis focuses on developing algorithms based on mathematical theories for single-image super-resolution problems. Indeed, in order to estimate an output image, we adopt a mixed approach: i.e., we use both a dictionary of patches with sparsity constraints (typical of learning-based methods) and regularization terms (typical of reconstruction-based methods). Although existing methods already perform well, they do not take into account the geometry of the data to: regularize the solution; cluster data samples (samples are often clustered using algorithms with the Euclidean distance as a dissimilarity metric); learn dictionaries (they are often learned using PCA or K-SVD). Thus, state-of-the-art methods still suffer from shortcomings. In this work, we propose three new methods to overcome these deficiencies. First, we developed SE-ASDS (a structure tensor based regularization term) in order to improve the sharpness of edges. SE-ASDS achieves much better results than many state-of-the-art algorithms. Then, we proposed the AGNN and GOC algorithms for determining a local subset of training samples from which a good local model can be computed for reconstructing a given input test sample, taking into account the underlying geometry of the data. The AGNN and GOC methods outperform spectral clustering, soft clustering, and geodesic-distance-based subset selection in most settings. Next, we proposed the aSOB strategy, which takes into account the geometry of the data and the dictionary size. The aSOB strategy outperforms both PCA and PGA methods. Finally, we combine all our methods in a single algorithm, named G2SR. Our proposed G2SR algorithm shows better visual and quantitative results when compared to state-of-the-art methods.
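To make the shared building block of such learning-based methods concrete, here is a minimal Python sketch of the sparse-coding step (an l1-regularized least-squares fit of a patch to a dictionary, solved with ISTA). The random dictionary, patch size and penalty weight are placeholders, and none of the thesis-specific components (SE-ASDS, AGNN/GOC, aSOB, G2SR) are implemented.

import numpy as np

def ista(D, y, lam, n_iter=200):
    # Solve min_a 0.5*||y - D a||^2 + lam*||a||_1 by iterative soft thresholding.
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a - D.T @ (D @ a - y) / L       # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return a

rng = np.random.default_rng(2)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)              # 256 unit-norm atoms for 8x8 patches
a_true = np.zeros(256)
a_true[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
y = D @ a_true + 0.01 * rng.standard_normal(64)   # noisy, degraded patch

a_hat = ista(D, y, lam=0.05)
patch_hat = D @ a_hat                       # reconstructed patch
print(np.count_nonzero(np.abs(a_hat) > 1e-6), np.linalg.norm(patch_hat - D @ a_true))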
Abstract:
We study the helical edge states of a two-dimensional topological insulator without axial spin symmetry due to the Rashba spin-orbit interaction. Lack of axial spin symmetry can lead to so-called generic helical edge states, which have energy-dependent spin orientation. This opens the possibility of inelastic backscattering and thereby nonquantized transport. Here we find analytically the new dispersion relations and the energy dependent spin orientation of the generic helical edge states in the presence of Rashba spin-orbit coupling within the Bernevig-Hughes-Zhang model, for both a single isolated edge and for a finite width ribbon. In the single-edge case, we analytically quantify the energy dependence of the spin orientation, which turns out to be weak for a realistic HgTe quantum well. Nevertheless, finite size effects combined with Rashba spin-orbit coupling result in two avoided crossings in the energy dispersions, where the spin orientation variation of the edge states is very significantly increased for realistic parameters. Finally, our analytical results are found to compare well to a numerical tight-binding regularization of the model.
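As a toy counterpart of the tight-binding regularization mentioned at the end, the sketch below lattice-regularizes a two-band Chern-insulator (Qi-Wu-Zhang-type) model and diagonalizes it on a ribbon, where gapless edge branches appear inside the bulk gap. This is deliberately much simpler than the four-band BHZ Hamiltonian with Rashba coupling studied in the paper; the mass parameter and ribbon width are arbitrary illustrative values.

import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)

def ribbon_hamiltonian(kx, ny, m):
    # Lattice regularization of H(k) = sin(kx)*sx + sin(ky)*sy + (m + cos(kx) + cos(ky))*sz,
    # Fourier-transformed along x and kept in real space along y (open boundaries).
    h0 = np.sin(kx) * sx + (m + np.cos(kx)) * sz   # on-site block at fixed kx
    t = 0.5 * sz - 0.5j * sy                       # hopping block from site y to y+1
    H = np.zeros((2 * ny, 2 * ny), dtype=complex)
    for y in range(ny):
        H[2 * y:2 * y + 2, 2 * y:2 * y + 2] = h0
        if y + 1 < ny:
            H[2 * y:2 * y + 2, 2 * y + 2:2 * y + 4] = t
            H[2 * y + 2:2 * y + 4, 2 * y:2 * y + 2] = t.conj().T
    return H

ny, m = 40, -1.0                                   # 0 < |m| < 2: topologically nontrivial phase
kxs = np.linspace(-np.pi, np.pi, 101)
bands = np.array([np.linalg.eigvalsh(ribbon_hamiltonian(k, ny, m)) for k in kxs])
# The two branches closest to E = 0 cross the bulk gap: these are the edge states.
gap = (bands[:, ny] - bands[:, ny - 1]).min()
print(bands.shape, float(gap))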
Environmental diagnosis of the area of influence of the sugar-ethanol complex Usina Vale do São Simão Ltda
Abstract:
One of the most widespread renewable energy sources in Brazil is ethanol from sugarcane; the sugar and alcohol sector is therefore expanding, with positive impacts on the country's economy. Sugar cane was introduced in Brazil as a crop during colonization, for the production of sugar, and placed the country on the global stage. The expansion of this crop occurred in the seventies, to reduce reliance on fossil energy sources and to stimulate the development of agricultural activity. Thus, the federal government has promoted the sugar cane crop and the production of ethanol as a fuel. However, it is important to minimize the possible impacts that the crop may cause to the environment. Sugar cane has expanded along the frontiers of the mesoregion of Triângulo Mineiro and Alto Paranaíba-MG, and, in this perspective, the agroindustrial complex known as Companhia Energética Vale do São Simão Ltda., whose mill is located in the municipality of Santa Vitória, Minas Gerais, was adopted to evaluate the environmental impacts caused by sugarcane in the area of influence of the mill. The mill has a polygonal area corresponding to 53,525.20 hectares, and for its establishment a Study and Report of Environmental Impacts (EIA/RIMA) was presented, as required as an environmental protection instrument by the National Environmental Policy (Law nº 6.938/81) and detailed by Resolution CONAMA nº 01/1986. These studies indicated that native vegetation fragments in the Area of Influence of the Mill, before its implantation, corresponded to approximately 20.7% of the area. Therefore, this study evaluated the impacts of the installation of Usina Vale do São Simão between 2007 and 2012, determining its effects on the environmental regularization of the farms, on the vegetation fragments existing in the area, and on the recovery and recomposition of areas defined as Legal Reserve and Permanent Preservation. Previous studies of the area were analyzed, land use and occupation were mapped for the years 2007 and 2012, and the areas of permanent preservation and the native vegetation fragments were delineated. In general, there was a decline in native vegetation coverage in the period, although it cannot be stated that this reduction was a direct effect of the milling activity. Even so, the legal requirement of preserving such areas was not capable of bringing about the positive effects of protection and recovery demanded by the Law, highlighting that the current legislation was not enough to protect these areas.
Abstract:
We propose a mathematically well-founded approach for locating the source (initial state) of density functions evolved within a nonlinear reaction-diffusion model. The reconstruction of the initial source is an ill-posed inverse problem since the solution is highly unstable with respect to measurement noise. To address this instability, we introduce a regularization procedure based on the nonlinear Landweber method for the stable determination of the source location. This amounts to solving a sequence of well-posed forward reaction-diffusion problems. The developed framework is general, and as a special instance we consider the problem of source localization of brain tumors. We show numerically that the source of the initial densities of tumor cells is reconstructed well for imaging data consisting of both simple and complex geometric structures.
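As a minimal illustration of the regularizing mechanism, the sketch below applies the classical (linear) Landweber iteration with discrepancy-principle early stopping to a toy smoothing operator. The operator, noise level and step size are synthetic stand-ins; the nonlinear reaction-diffusion forward model of the paper is not implemented.

import numpy as np

def landweber(A, y, omega, n_iter, tol):
    # Landweber iteration x_{k+1} = x_k + omega * A^T (y - A x_k),
    # stopped early by the discrepancy principle (early stopping is the regularization).
    x = np.zeros(A.shape[1])
    for k in range(n_iter):
        r = y - A @ x
        if np.linalg.norm(r) <= tol:       # stop once the residual reaches the noise level
            break
        x = x + omega * (A.T @ r)
    return x, k

rng = np.random.default_rng(3)
n = 80
# Smoothing forward operator (stand-in for the reaction-diffusion forward map).
A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 4.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
x_true = np.zeros(n)
x_true[30:40] = 1.0                                # compactly supported "source"
noise = 1e-3 * rng.standard_normal(n)
y = A @ x_true + noise

omega = 1.0 / np.linalg.norm(A, 2) ** 2            # step size below 2 / ||A||^2
x_hat, k_stop = landweber(A, y, omega, n_iter=5000, tol=np.linalg.norm(noise))
print(k_stop, np.linalg.norm(x_hat - x_true))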
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in the value of n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but they often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
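To fix ideas about the latent structure (PARAFAC-type) representation referred to here, the following sketch builds the joint probability mass function of a few categorical variables as a rank-k nonnegative tensor and samples a contingency table from it. The dimensions and Dirichlet-generated parameters are arbitrary illustrative choices; the collapsed Tucker decomposition proposed in Chapter 2 is not implemented.

import numpy as np

rng = np.random.default_rng(4)
p, d, k = 3, 4, 2                                  # 3 categorical variables, 4 levels each, 2 latent classes

nu = rng.dirichlet(np.ones(k))                     # latent class weights
lam = rng.dirichlet(np.ones(d), size=(k, p))       # lam[h, j] = conditional pmf of variable j in class h

# Joint pmf tensor: P(y1, y2, y3) = sum_h nu[h] * lam[h,0,y1] * lam[h,1,y2] * lam[h,2,y3],
# a nonnegative rank-k (PARAFAC-type) factorization.
pmf = np.einsum("h,ha,hb,hc->abc", nu, lam[:, 0], lam[:, 1], lam[:, 2])
print(pmf.shape, pmf.sum())                        # (4, 4, 4), sums to 1

# Sample observations and tabulate them in a contingency table.
classes = rng.choice(k, size=1000, p=nu)
obs = np.array([[rng.choice(d, p=lam[h, j]) for j in range(p)] for h in classes])
table = np.zeros((d,) * p, dtype=int)
np.add.at(table, tuple(obs.T), 1)
print(table.sum())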
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
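The core empirical quantity in this chapter, the waiting times between exceedances of a high threshold, is simple to compute. The sketch below does so for a synthetic AR(1) series (a placeholder for the max-stable velocity processes constructed in Chapter 5) and reports the fraction of consecutive exceedances, which rises when extremes cluster in time.

import numpy as np

def exceedance_waiting_times(x, u):
    # Indices where x exceeds the threshold u, and the gaps between successive exceedances.
    idx = np.flatnonzero(x > u)
    return idx, np.diff(idx)

rng = np.random.default_rng(5)
n, phi = 10_000, 0.8
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):                              # AR(1): temporally dependent toy series
    x[t] = phi * x[t - 1] + eps[t]

u = np.quantile(x, 0.98)                           # high threshold
idx, waits = exceedance_waiting_times(x, u)
# For an independent series the waiting times would be roughly geometric with mean ~50;
# temporal dependence in the extremes shows up as an excess of very short waits.
print(idx.size, waits.mean(), np.mean(waits == 1))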
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
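For concreteness, here is a compact sketch of the truncated-normal (Albert-Chib) data augmentation Gibbs sampler for probit regression referred to above, run on simulated rare-event data. The prior variance, sample size and true coefficients are hypothetical, and the spectral-gap analysis and the comparison samplers from Chapter 7 are not reproduced.

import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(X, y, n_iter=2000, tau2=100.0, seed=0):
    # Albert-Chib data augmentation Gibbs sampler for probit regression
    # with a N(0, tau2 * I) prior on the coefficients.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    V = np.linalg.inv(X.T @ X + np.eye(p) / tau2)    # posterior covariance of beta given z (fixed)
    C = np.linalg.cholesky(V)
    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        # 1) latent utilities z_i ~ N(x_i beta, 1), truncated by the observed label
        mu = X @ beta
        lo = np.where(y == 1, -mu, -np.inf)          # z > 0 when y = 1
        hi = np.where(y == 1, np.inf, -mu)           # z < 0 when y = 0
        z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
        # 2) beta | z ~ N(V X^T z, V)
        beta = V @ (X.T @ z) + C @ rng.standard_normal(p)
        draws[it] = beta
    return draws

rng = np.random.default_rng(1)
n, p = 2000, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
beta_true = np.array([-2.5, 0.5, -0.5])              # rare-event intercept: few successes
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(int)
draws = probit_gibbs(X, y)
print(y.mean(), draws[500:].mean(axis=0))            # low success rate, posterior means after burn-in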
Abstract:
The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant amount of tumor cells while sparing the surrounding healthy tissues/organs. Precise delineation of treatment and avoidance volumes is the key to precision radiation therapy. In recent years, considerable clinical and research efforts have been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft tissue contrast and functional imaging possibilities. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of tissue microvasculature. Its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI's clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI imaging implementation and the need for novel DCE-MRI data analysis methods that yield richer functional heterogeneity information.
This study aims at improving current DCE-MRI techniques and developing new DCE-MRI analysis methods for particular radiotherapy assessment. Thus, the study is naturally divided into two parts. The first part focuses on DCE-MRI temporal resolution as one of the key DCE-MRI technical factors, and some improvements regarding DCE-MRI temporal resolution are proposed; the second part explores the potential value of image heterogeneity analysis and multiple PK model combination for therapeutic response assessment, and several novel DCE-MRI data analysis methods are developed.
I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel MR image iterative reconstruction algorithm was studied for DCE-MRI reconstruction. This algorithm is built on the recently developed compressed sensing (CS) theory. By utilizing a limited k-space acquisition with shorter imaging time, images can be reconstructed in an iterative fashion under the regularization of a newly proposed total generalized variation (TGV) penalty term. In a retrospective study of brain radiosurgery patient DCE-MRI scans under IRB approval, the clinically obtained image data were selected as reference data, and simulated accelerated k-space acquisitions were generated by undersampling the full k-space of the reference images with designed sampling grids. Two undersampling strategies were proposed: 1) a radial multi-ray grid with a special angular distribution was adopted to sample each slice of the full k-space; 2) a Cartesian random sampling grid series with spatiotemporal constraints from adjacent frames was adopted to sample the dynamic k-space series at a slice location. Two sets of PK parameter maps were generated, one from the undersampled data and one from the fully sampled data. Multiple quantitative measurements and statistical studies were performed to evaluate the accuracy of the PK maps generated from the undersampled data with respect to the PK maps generated from the fully sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from the DCE images reconstructed using undersampled data, and no statistically significant differences were found between the regional PK mean values from the undersampled and fully sampled data sets. DCE-MRI acceleration using the investigated image reconstruction method therefore appears feasible and promising.
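The sketch below is a simplified stand-in for the reconstruction described above: gradient descent on a data-consistency term for randomly undersampled Cartesian k-space plus a smoothed total-variation penalty, applied to a synthetic phantom. The TGV penalty, the radial and spatiotemporally constrained sampling grids, and the clinical DCE data of the study are not reproduced, and the sampling rate, step size and penalty weight are arbitrary.

import numpy as np

def grad2d(x):
    # Forward differences with periodic boundaries.
    return np.roll(x, -1, axis=0) - x, np.roll(x, -1, axis=1) - x

def div2d(px, py):
    # Discrete divergence chosen so that <grad2d(x), p> = -<x, div2d(p)>.
    return (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))

def cs_recon(y, mask, lam=0.01, eps=1e-2, step=0.5, n_iter=300):
    # Gradient descent on 0.5*||mask*F(x) - y||^2 + lam * sum sqrt(|grad x|^2 + eps^2).
    x = np.real(np.fft.ifft2(y, norm="ortho"))          # zero-filled starting image
    for _ in range(n_iter):
        r = mask * (np.fft.fft2(x, norm="ortho") - y)   # k-space residual on sampled locations
        g_data = np.real(np.fft.ifft2(r, norm="ortho"))
        gx, gy = grad2d(x)
        w = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        g_tv = -div2d(gx / w, gy / w)
        x -= step * (g_data + lam * g_tv)
    return x

rng = np.random.default_rng(6)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
img[24:40, 24:40] = 2.0                                 # piecewise-constant phantom
mask = rng.random((64, 64)) < 0.25                      # keep ~25% of k-space (~4x acceleration)
y = mask * np.fft.fft2(img, norm="ortho")               # simulated undersampled acquisition
x_hat = cs_recon(y, mask)
print(float(np.abs(x_hat - img).mean()))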
Second, for high temporal resolution DCE-MRI, a new PK model fitting method was developed to solve for the PK parameters with better accuracy and efficiency. This method is based on a derivative-based reformulation of the commonly used Tofts PK model, which is usually presented as an integral expression. The method also includes an advanced Kolmogorov-Zurbenko (KZ) filter to remove the potential noise effect in the data, and it solves for the PK parameters as a linear problem in matrix form. In a computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and data noise levels. Results showed that at both high temporal resolutions (<1 s) and a clinically feasible temporal resolution (~5 s), the new method calculated PK parameters more accurately than current methods at clinically relevant noise levels; at high temporal resolutions, its calculation efficiency was superior to current methods by about two orders of magnitude. In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that this new method can be used for accurate and efficient PK model fitting for high temporal resolution DCE-MRI.
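For comparison, the sketch below fits the same Tofts model in its widely used linearized (integral) form by linear least squares on simulated concentration curves. This is not the derivative-based reformulation with Kolmogorov-Zurbenko filtering proposed in the study; the arterial input function, noise level and rate constants are illustrative values only.

import numpy as np
from scipy.integrate import cumulative_trapezoid

def tofts_forward(t, cp, ktrans, kep):
    # C_t(t) = Ktrans * int_0^t Cp(tau) * exp(-kep*(t - tau)) dtau, as a discrete convolution.
    dt = t[1] - t[0]
    kern = np.exp(-kep * t)
    return ktrans * np.convolve(cp, kern)[: t.size] * dt

def tofts_linear_fit(t, cp, ct):
    # Linearized (integral-form) Tofts fit: C_t = Ktrans * int Cp - kep * int Ct,
    # solved for (Ktrans, kep) by linear least squares.
    A = np.column_stack([
        cumulative_trapezoid(cp, t, initial=0.0),
        -cumulative_trapezoid(ct, t, initial=0.0),
    ])
    ktrans, kep = np.linalg.lstsq(A, ct, rcond=None)[0]
    return ktrans, kep

t = np.arange(0, 300, 1.0)                          # 1 s temporal resolution over 5 minutes
cp = (t / 10.0) * np.exp(1 - t / 10.0)              # toy arterial input function peaking at t = 10 s
ct = tofts_forward(t, cp, ktrans=0.25 / 60, kep=0.6 / 60)   # per-second rate constants
ct_noisy = ct + 0.002 * np.random.default_rng(7).standard_normal(t.size)
print(tofts_linear_fit(t, cp, ct_noisy))            # should be close to the simulated (0.25/60, 0.6/60)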
II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part develops methodology along two approaches. The first is a model-free analysis approach for evaluating DCE-MRI functional heterogeneity, inspired by the rationale that radiotherapy-induced functional change can be heterogeneous across the treatment area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, randomly assigned treatment/control groups received multiple treatment fractions, with one pre-treatment and multiple post-treatment high-spatiotemporal-resolution DCE-MRI scans. In the post-treatment scan two weeks after the start of treatment, the investigated Rényi dimensions of the classic PK rate constant map showed significant differences between the treatment and control groups; when Rényi dimensions were used for treatment/control group classification, the achieved accuracy was higher than that obtained using conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed. It addresses the lack of temporal information and the poor calculation efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM performed better overall than the corresponding curves derived from current GLCOM techniques in treatment/control separation and classification. The second method developed is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity during contrast agent uptake on DCE images. In the small-animal experiment mentioned above, selected parameters from the dynamic FSD analysis showed significant differences between treatment and control groups as early as after one treatment fraction; in contrast, metrics from conventional PK analysis showed significant differences only after three treatment fractions. Using dynamic FSD parameters, treatment/control group classification after the first treatment fraction was improved compared with using conventional PK statistics. These results suggest that this novel method is promising for capturing early therapeutic response.
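As a pointer to the kind of computation behind the fractal-dimension analyses mentioned above, the sketch below estimates the basic box-counting dimension (the q = 0 member of the Rényi family) of a synthetic binary map. The smoothed random field used as a stand-in for a thresholded PK parameter map, the box sizes and the threshold are all illustrative, and the GLLPM and dynamic FSD methods are not implemented.

import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    # Box-counting dimension of a 2D binary map:
    # slope of log N(s) vs log(1/s), where N(s) counts occupied s x s boxes.
    counts = []
    for s in sizes:
        h, w = mask.shape
        trimmed = mask[: h - h % s, : w - w % s]
        blocks = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(blocks.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(8)
img = rng.random((128, 128))
for _ in range(5):                                   # smooth to create spatial correlation
    img = (img + np.roll(img, 1, axis=0) + np.roll(img, 1, axis=1)) / 3.0
mask = img > np.quantile(img, 0.7)                   # toy "high-uptake" region
print(box_counting_dimension(mask))                  # between 1 and 2 for a fractal-like set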
The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model or its alternative version is widely adopted for DCE-MRI analysis as a gold-standard approach for therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification. In spite of its richer biological assumptions, its application in therapeutic response assessment has been limited. It is therefore intriguing to combine information from the SS model and from the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using comparisons of regional mean PK parameter values. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model proved superior to the one from the Tofts model in treatment/control separation and classification. Furthermore, a novel biomarker was designed to integrate the PK rate constants from these two models. When evaluated in the biological subvolume, this biomarker reflected significant treatment/control differences in both post-treatment evaluations. These results confirm the potential value of the SS model, as well as its combination with the Tofts model, for therapeutic response assessment.
In summary, this study addressed two problems in the application of DCE-MRI to radiotherapy assessment. In the first part, a method for accelerating DCE-MRI acquisition to achieve better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high temporal resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented work could benefit future routine clinical application of DCE-MRI in radiotherapy assessment.