11 results for scientific computation

in DRUM (Digital Repository at the University of Maryland)


Relevance:

60.00%

Publisher:

Abstract:

This thesis deals with tensor completion for the solution of multidimensional inverse problems. We study the problem of reconstructing an approximately low rank tensor from a small number of noisy linear measurements. New recovery guarantees, numerical algorithms, non-uniform sampling strategies, and parameter selection algorithms are developed. We derive a fixed point continuation algorithm for tensor completion and prove its convergence. A restricted isometry property (RIP) based tensor recovery guarantee is proved. Probabilistic recovery guarantees are obtained for sub-Gaussian measurement operators and for measurements obtained by non-uniform sampling from a Parseval tight frame. We show how tensor completion can be used to solve multidimensional inverse problems arising in NMR relaxometry. Algorithms are developed for regularization parameter selection, including accelerated k-fold cross-validation and generalized cross-validation. These methods are validated on experimental and simulated data. We also derive condition number estimates for nonnegative least squares problems. Tensor recovery promises to significantly accelerate N-dimensional NMR relaxometry and related experiments, enabling previously impractical experiments. Our methods could also be applied to other inverse problems arising in machine learning, image processing, signal processing, computer vision, and other fields.
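
As an illustration of the kind of iteration described above, the following is a minimal sketch of fixed-point continuation specialized to matrix (order-2 tensor) completion, where the shrinkage step is singular value soft-thresholding. The function names, step size tau, and continuation schedule are illustrative assumptions, not the algorithm developed in the thesis.

```python
import numpy as np

def svt(X, threshold):
    """Singular value soft-thresholding: the proximal step for the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - threshold, 0.0)
    return (U * s) @ Vt

def fpc_complete(M_obs, mask, mu=1.0, tau=1.0, n_iter=500):
    """Fixed-point continuation for low-rank completion (illustrative sketch).

    M_obs : observed entries (zeros elsewhere); mask : boolean array of observed positions.
    Each inner iteration takes a gradient step on the data-fit term followed by
    singular value soft-thresholding; the outer loop decreases the threshold mu.
    """
    X = np.zeros_like(M_obs)
    for mu_k in mu * np.logspace(0, -3, 4):       # continuation on the threshold
        for _ in range(n_iter):
            grad = mask * (X - M_obs)             # gradient of 0.5*||P_Omega(X - M)||^2
            X = svt(X - tau * grad, tau * mu_k)   # shrinkage step
    return X
```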

Relevance:

60.00%

Publisher:

Abstract:

Although tyrosine kinase inhibitors (TKIs) such as imatinib have transformed chronic myelogenous leukemia (CML) into a chronic condition, these therapies are not curative in the majority of cases. Most patients must continue TKI therapy indefinitely, a requirement that is both expensive and detrimental to a patient's quality of life. While TKIs are known to reduce leukemic cells' proliferative capacity and to induce apoptosis, their effects on leukemic stem cells, the immune system, and the microenvironment are not fully understood. A more complete understanding of their global therapeutic effects would help us to identify any limitations of TKI monotherapy and to address these issues through novel combination therapies. Mathematical models are a complementary tool to experimental and clinical data that can provide valuable insights into the underlying mechanisms of TKI therapy. Previous modeling efforts have focused on CML patients who show biphasic and triphasic exponential declines in BCR-ABL ratio during therapy. However, our patient data indicate that many patients treated with TKIs show fluctuations in BCR-ABL ratio yet are able to achieve durable remissions. To investigate these fluctuations, we construct a mathematical model that integrates CML with a patient's autologous immune response to the disease. In our model, we define an immune window, which is an intermediate range of leukemic concentrations that lead to an effective immune response against CML. While small leukemic concentrations provide insufficient stimulus, large leukemic concentrations actively suppress a patient's immune system, thus limiting its ability to respond. Our patient data and modeling results suggest that at diagnosis, a patient's high leukemic concentration is able to suppress their immune system. TKI therapy drives the leukemic population into the immune window, allowing the patient's immune cells to expand and eventually mount an efficient response against the residual CML. This response drives the leukemic population below the immune window, causing the immune population to contract and allowing the leukemia to partially recover. The leukemia eventually reenters the immune window, thus stimulating a sequence of weaker immune responses as the two populations approach equilibrium. We hypothesize that a patient's autologous immune response to CML may explain the fluctuations in BCR-ABL ratio that are regularly seen during TKI therapy. These fluctuations may serve as a signature of a patient's individual immune response to CML. By applying our modeling framework to patient data, we are able to construct an immune profile that can then be used to propose patient-specific combination therapies aimed at further reducing a patient's leukemic burden. Our characterization of a patient's anti-leukemia immune response may be especially valuable in the study of drug resistance, treatment cessation, and combination therapy.
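
A toy two-compartment ODE conveys the "immune window" idea described above: the immune stimulation term peaks at intermediate leukemic concentrations and vanishes at both small and large ones. The functional forms and parameter values below are illustrative assumptions, not the patient-calibrated model of the thesis.

```python
from scipy.integrate import solve_ivp

def cml_immune(t, y, r=0.01, d_tki=0.05, k=1e-3, a=20.0, s=1e4, d_z=0.05):
    """Toy leukemia (L) / immune (Z) system with an 'immune window'.

    The stimulation term a*L/(s + L**2) peaks near L = sqrt(s): small L gives
    little stimulus, while large L suppresses the immune response.
    """
    L, Z = y
    dL = r * L - d_tki * L - k * L * Z      # net leukemic growth under TKI, immune kill
    dZ = (a * L / (s + L**2) - d_z) * Z     # window-shaped stimulation vs. natural decay
    return [dL, dZ]

# Start from a high leukemic load and a small immune population at diagnosis.
sol = solve_ivp(cml_immune, (0.0, 2000.0), [1e5, 1.0], dense_output=True)
```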

Relevance:

60.00%

Publisher:

Abstract:

We present a detailed analysis of the application of a multi-scale Hierarchical Reconstruction method for solving a family of ill-posed linear inverse problems. Given observations of the unknown quantity of interest and the observation operators that produced them, these inverse problems are concerned with recovering the unknown from its observations. Although the observation operators we consider are linear, they are inevitably ill-posed in various ways. We recall in this context the classical Tikhonov regularization method with a stabilizing function that targets the specific ill-posedness of the observation operators and preserves desired features of the unknown. Having studied the mechanism of Tikhonov regularization, we propose a multi-scale generalization of the Tikhonov regularization method, the so-called Hierarchical Reconstruction (HR) method. The HR method traces back to the Hierarchical Decomposition method in image processing. At each finer hierarchical scale, the HR method extracts a new hierarchical term from the residual left by the previous scale. The hierarchical sum, i.e., the sum of all the hierarchical terms, provides a reasonable approximate solution to the unknown when the observation matrix satisfies certain conditions with specific stabilizing functions. Compared to the Tikhonov regularization method on the same inverse problems, the HR method is shown to decrease the total number of iterations, reduce the approximation error, and offer control of the approximation distance between the hierarchical sum and the unknown, thanks to its ladder of finitely many hierarchical scales. We report numerical experiments supporting these claimed advantages of the HR method over the Tikhonov regularization method.
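
A minimal sketch contrasting a single Tikhonov solve with a hierarchical loop that re-solves on the residual while halving the regularization parameter at each scale. The quadratic stabilizer and the dyadic ladder are illustrative choices, not the specific stabilizing functions analyzed in the thesis.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Classical Tikhonov step: argmin_x ||A x - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def hierarchical_reconstruction(A, b, lam0=1.0, levels=6):
    """Hierarchical Reconstruction sketch: at each scale a Tikhonov-type problem
    is solved for the current residual with the regularization parameter halved,
    and the hierarchical terms are summed."""
    x_sum = np.zeros(A.shape[1])
    residual = b.copy()
    lam = lam0
    for _ in range(levels):
        term = tikhonov(A, residual, lam)   # hierarchical term at this scale
        x_sum += term
        residual = residual - A @ term      # pass the finer-scale residual on
        lam /= 2.0                          # refine the hierarchical scale
    return x_sum
```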

Relevance:

60.00%

Publisher:

Abstract:

Theories of sparse signal representation, wherein a signal is decomposed as the sum of a small number of constituent elements, play increasing roles in both mathematical signal processing and neuroscience. This happens despite the differences between signal models in the two domains. After reviewing preliminary material on sparse signal models, I use work on compressed sensing for the electron tomography of biological structures as a target for exploring the efficacy of sparse signal reconstruction in a challenging application domain. My research in this area addresses a topic of keen interest to the biological microscopy community, and has resulted in the development of tomographic reconstruction software which is competitive with the state of the art in its field. Moving from the linear signal domain into the nonlinear dynamics of neural encoding, I explain the sparse coding hypothesis in neuroscience and its relationship with olfaction in locusts. I implement a numerical ODE model of the activity of neural populations responsible for sparse odor coding in locusts as part of a project involving offset spiking in the Kenyon cells. I also explain the validation procedures we have devised to help assess the model's similarity to the biology. The thesis concludes with the development of a new, simplified model of locust olfactory network activity, which seeks with some success to explain statistical properties of the sparse coding processes carried out in the network.
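
The compressed-sensing reconstructions referred to above typically reduce to an l1-regularized least-squares problem. The following generic iterative shrinkage-thresholding (ISTA) sketch shows that computation, with the operator A standing in for a measurement or projection matrix; it is an illustrative solver, not the thesis's tomography software.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam=0.1, n_iter=200):
    """Iterative shrinkage-thresholding for min_x 0.5*||A x - b||^2 + lam*||x||_1.

    A generic sparse-recovery solver of the kind used in compressed sensing;
    in a tomography setting A would be the projection operator.
    """
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```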

Relevance:

60.00%

Publisher:

Abstract:

This dissertation concerns the well-posedness of the Navier-Stokes-Smoluchowski system. The system models a mixture of fluid and particles in the so-called bubbling regime. The compressible Navier-Stokes equations governing the evolution of the fluid are coupled to the Smoluchowski equation for the particle density at a continuum level. First, working on fixed domains, the existence of weak solutions is established using a three-level approximation scheme and based largely on the Lions-Feireisl theory of compressible fluids. The system is then posed over a moving domain. By utilizing a Brinkman-type penalization as well as penalization of the viscosity, the existence of weak solutions of the Navier-Stokes-Smoluchowski system is proved over moving domains. As a corollary the convergence of the Brinkman penalization is proved. Finally, a suitable relative entropy is defined. This relative entropy is used to establish a weak-strong uniqueness result for the Navier-Stokes-Smoluchowski system over moving domains, ensuring that strong solutions are unique in the class of weak solutions.

Relevance:

60.00%

Publisher:

Abstract:

The dissertation is devoted to the study of problems in the calculus of variations, free boundary problems, and gradient flows with respect to the Wasserstein metric. More concretely, we consider the problem of characterizing the regularity of minimizers of a certain interaction energy. Minimizers of the interaction energy have a somewhat surprising relationship with solutions to obstacle problems. Here we prove and exploit this relationship to obtain novel regularity results. Another problem we tackle is describing the asymptotic behavior of the Cahn-Hilliard equation with degenerate mobility. By framing the Cahn-Hilliard equation with degenerate mobility as a gradient flow in the Wasserstein metric, in one space dimension, we prove its convergence to a degenerate parabolic equation under the framework recently developed by Sandier and Serfaty.
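
For orientation, one standard way to write the one-dimensional Cahn-Hilliard equation with degenerate mobility is given below; the particular mobility m(u) and double-well potential W(u) are common illustrative choices, not necessarily the ones treated in the dissertation.

```latex
% Cahn-Hilliard equation with degenerate mobility (one space dimension):
\partial_t u = \partial_x\!\left( m(u)\, \partial_x\left( -\partial_{xx} u + W'(u) \right) \right),
\qquad m(u) = 1 - u^2, \qquad W(u) = \tfrac{1}{4}\left(1 - u^2\right)^2 .
```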

Relevance:

60.00%

Publisher:

Abstract:

This dissertation investigates the connection between spectral analysis and frame theory. When considering the spectral properties of a frame, we present a few novel results relating to the spectral decomposition. We first show that scalable frames have the property that the inner product of the scaling coefficients and the eigenvectors must equal the inverse eigenvalues. From this, we prove a similar result when an approximate scaling is obtained. We then focus on the optimization problems inherent to scalable frames by first showing that there is an equivalence between scaling a frame and optimization problems with a non-restrictive objective function. Various objective functions are considered, and an analysis of the solution type is presented. With linear objectives we can encourage sparse scalings, and with barrier objective functions we force dense solutions. We further consider frames in high dimensions and derive various solution techniques. From here, we restrict ourselves to various frame classes to add more specificity to the results. Using frames generated from distributions allows for the placement of probabilistic bounds on scalability. For discrete distributions (Bernoulli and Rademacher), we bound the probability of encountering an orthonormal basis (ONB), and for continuous symmetric distributions (uniform and Gaussian), we show that symmetry is retained in the transformed domain. We also prove several hyperplane-separation results. With the theory developed, we discuss graph applications of the scalability framework. We make a connection with graph conditioning and show the infeasibility of the problem in the general case. After a modification, we show that any complete graph can be conditioned. We then present a modification of standard PCA (robust PCA) developed by Candès, and give some background on Electron Energy-Loss Spectroscopy (EELS). We design a novel scheme for the processing of EELS data through robust PCA and least-squares regression, and test this scheme on biological samples. Finally, we take the idea of robust PCA and apply the technique of kernel PCA to perform robust manifold learning. We derive the problem and present an algorithm for its solution. We also discuss the differences from RPCA that make theoretical guarantees difficult.
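
The scaling problem described above can be posed as a small feasibility check: a frame is scalable when nonnegative weights exist that turn the sum of scaled rank-one outer products into the identity. The sketch below casts this as a nonnegative least-squares problem; the NNLS formulation and the reading of the weights as squared scalings are illustrative assumptions, not the objective functions studied in the thesis.

```python
import numpy as np
from scipy.optimize import nnls

def scalability_residual(F):
    """Test (approximate) scalability of a frame.

    F : (n, m) array whose columns are the frame vectors f_i in R^n.
    A frame is scalable when nonnegative weights w_i (squared scalings) exist
    with sum_i w_i f_i f_i^T = I; we solve a nonnegative least-squares problem
    on the vectorized rank-one outer products and return the weights together
    with the residual norm (zero residual indicates an exact scaling).
    """
    n, m = F.shape
    G = np.column_stack([np.outer(F[:, i], F[:, i]).ravel() for i in range(m)])
    w, res = nnls(G, np.eye(n).ravel())
    return w, res
```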

Relevance:

60.00%

Publisher:

Abstract:

This work is devoted to creating an abstract framework for the study of certain spectral properties of parabolic systems. Specifically, we determine general conditions under which one can expect the presence of absolutely continuous spectral measures. We use these general conditions to derive results on the spectral properties of time-changes of unipotent flows on homogeneous spaces of semisimple groups, concerning both absolutely continuous spectrum and maximal spectral type; time-changes of the horocycle flow are special cases of this general category of flows. In addition, we use the general conditions to derive spectral results for twisted horocycle flows and to rederive spectral results for skew products over translations and Furstenberg transformations.

Relevance:

60.00%

Publisher:

Abstract:

A primary goal of this dissertation is to understand the links between mathematical models that describe crystal surfaces at three fundamental length scales: the scale of individual atoms, the scale of collections of atoms forming crystal defects, and the macroscopic scale. Characterizing connections between different classes of models is a critical task for gaining insight into the physics they describe, a long-standing objective in applied analysis, and also highly relevant in engineering applications. The key concept I use in each problem addressed in this thesis is coarse graining, which is a strategy for connecting fine representations or models with coarser representations. Often this idea is invoked to reduce a large discrete system to an appropriate continuum description, e.g., individual particles are represented by a continuous density. While there is no general theory of coarse graining, one closely related mathematical approach is asymptotic analysis, i.e., the description of limiting behavior as some parameter becomes very large or very small. In the case of crystalline solids, it is natural to consider cases where the number of particles is large or where the lattice spacing is small. Limits such as these often make explicit the nature of the links between models capturing different scales and, once established, provide a means of improving our understanding, or the models themselves. Finding appropriate variables whose limits illustrate the important connections between models is no easy task, however. This is one area where computer simulation is extremely helpful, as it allows us to see the results of complex dynamics and gather clues regarding the roles of different physical quantities. On the other hand, connections between models enable the development of novel multiscale computational schemes, so understanding can assist computation and vice versa. Some of these ideas are demonstrated in this thesis. The important outcomes of this thesis include: (1) a systematic derivation of the step-flow model of Burton, Cabrera, and Frank, with corrections, from an atomistic solid-on-solid-type model in 1+1 dimensions; (2) the inclusion of an atomistically motivated transport mechanism in an island dynamics model, allowing for a more detailed account of mound evolution; and (3) the development of a hybrid discrete-continuum scheme for simulating the relaxation of a faceted crystal mound. Central to all of these modeling and simulation efforts is the presence of steps composed of individual layers of atoms on vicinal crystal surfaces. Consequently, a recurring theme in this research is the observation that mesoscale defects play a crucial role in crystal morphological evolution.
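
As a small illustration of the atomistic end of the modeling hierarchy, here is a Metropolis sweep for a 1+1 dimensional solid-on-solid surface with periodic boundaries; the bond energy and move set are generic illustrative choices, not the specific atomistic model whose step-flow limit is derived in the thesis.

```python
import numpy as np

def sos_sweep(h, beta=2.0, rng=None):
    """One Metropolis sweep of a 1+1 dimensional solid-on-solid surface.

    h : integer column heights; the energy is the sum of |h_i - h_{i+1}| bond costs.
    Proposed moves add or remove one atom at a random site and are accepted with
    the Metropolis probability. Purely illustrative dynamics, not the thesis model.
    """
    rng = rng or np.random.default_rng()
    n = len(h)
    for _ in range(n):
        i = rng.integers(n)
        dh = rng.choice([-1, 1])
        left, right = h[(i - 1) % n], h[(i + 1) % n]
        dE = (abs(h[i] + dh - left) + abs(h[i] + dh - right)
              - abs(h[i] - left) - abs(h[i] - right))
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            h[i] += dh
    return h
```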

Relevance:

60.00%

Publisher:

Abstract:

Mathematical models of gene regulation are a powerful tool for understanding the complex features of genetic control. While various modeling efforts have been successful at explaining gene expression dynamics, much less is known about how evolution shapes the structure of these networks. An important feature of gene regulatory networks is their stability in response to environmental perturbations. Regulatory systems are thought to have evolved to exist near the transition between stability and instability, in order to have the required stability to environmental fluctuations while also being able to achieve a wide variety of functions (corresponding to different dynamical patterns). We study a simplified model of gene network evolution in which links are added via different selection rules. These growth models are inspired by recent work on 'explosive' percolation, which shows that when network links are added through competitive rather than random processes, the connectivity phase transition can be significantly delayed, and when it is reached, it appears to be first order (discontinuous, e.g., going from no failure at all to large expected failure) instead of second order (continuous, e.g., going from no failure at all to very small expected failure). We find that by modifying the traditional framework for networks grown via competitive link addition to capture how gene networks evolve to avoid damage propagation, we also see significant delays in the transition that depend on the selection rules, but the transitions always appear continuous rather than 'explosive'.
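
The competitive link addition referenced above is illustrated by the following sketch of the classic 'product rule' from the explosive-percolation literature, implemented with a union-find structure. The thesis modifies this kind of rule to reflect damage propagation in gene networks; the version shown here is only the generic one.

```python
import random

class DisjointSet:
    """Union-find with path compression and component sizes."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def grow(n, n_edges, competitive=True):
    """Grow a network by adding edges one at a time.

    With competitive=True, each step draws two candidate edges and keeps the one
    joining the smaller product of component sizes (the 'product rule' used in
    studies of explosive percolation); otherwise edges are added at random.
    Returns the fraction of nodes in the largest component.
    """
    ds = DisjointSet(n)
    for _ in range(n_edges):
        e1 = random.sample(range(n), 2)
        if competitive:
            e2 = random.sample(range(n), 2)
            cost1 = ds.size[ds.find(e1[0])] * ds.size[ds.find(e1[1])]
            cost2 = ds.size[ds.find(e2[0])] * ds.size[ds.find(e2[1])]
            e1 = e1 if cost1 <= cost2 else e2
        ds.union(*e1)
    return max(ds.size[ds.find(v)] for v in range(n)) / n
```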

Relevance:

20.00%

Publisher:

Abstract:

Presentation at the Controlling Dangerous Pathogens Project Regional Workshop on Dual-Use Research, Teresopolis, Brazil