11 results for computational algebra

in CaltechTHESIS


Relevance:

30.00%

Abstract:

This thesis studies three classes of randomized numerical linear algebra algorithms, namely: (i) randomized matrix sparsification algorithms, (ii) low-rank approximation algorithms that use randomized unitary transformations, and (iii) low-rank approximation algorithms for positive-semidefinite (PSD) matrices.

Randomized matrix sparsification algorithms set randomly chosen entries of the input matrix to zero. When the approximant is substituted for the original matrix in computations, its sparsity allows one to employ faster sparsity-exploiting algorithms. This thesis contributes bounds on the approximation error of nonuniform randomized sparsification schemes, measured in the spectral norm and two NP-hard norms that are of interest in computational graph theory and subset selection applications.
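
To make the mechanism concrete, here is a minimal NumPy sketch of one nonuniform sparsification scheme, in which each entry survives with probability proportional to its squared magnitude and survivors are rescaled so the sparsifier is unbiased; this particular probability rule is an illustrative assumption, not necessarily the scheme analyzed in the thesis.

```python
import numpy as np

def sparsify(A, s, rng=np.random.default_rng(0)):
    """Zero out entries of A at random, keeping roughly s of them.

    Entry (i, j) is kept with probability min(1, s * |A_ij|^2 / ||A||_F^2)
    and rescaled by 1/p_ij so that E[B] = A (an unbiased sparsifier).
    """
    p = np.minimum(1.0, s * A**2 / np.sum(A**2))  # nonuniform keep probabilities
    mask = rng.random(A.shape) < p
    B = np.zeros_like(A, dtype=float)
    B[mask] = A[mask] / p[mask]                   # rescale survivors for unbiasedness
    return B
```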

Low-rank approximations based on randomized unitary transformations have several desirable properties: they have low communication costs, are amenable to parallel implementation, and exploit the existence of fast transform algorithms. This thesis investigates the tradeoff between the accuracy and cost of generating such approximations. State-of-the-art spectral and Frobenius-norm error bounds are provided.
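
As a sketch of the idea (assuming an SRFT-style transform built from random signs, an FFT, and column subsampling; the exact transform and oversampling used in the thesis may differ):

```python
import numpy as np

def srft_lowrank(A, k, oversample=10, rng=np.random.default_rng(0)):
    """Rank-k approximation from an SRFT-type sketch Y = A (D F R):
    random signs D, FFT F, and a random subset R of columns."""
    m, n = A.shape
    ell = k + oversample
    signs = rng.choice([-1.0, 1.0], size=n)
    cols = rng.choice(n, size=ell, replace=False)
    Y = np.fft.fft(A * signs, axis=1)[:, cols] / np.sqrt(n)  # sketch of the range
    Q, _ = np.linalg.qr(np.column_stack([Y.real, Y.imag]))   # orthonormal range basis
    return Q @ (Q.T @ A)                                     # project A onto that range
```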

The last class of algorithms considered comprises PSD "sketching" algorithms. Such sketches can be computed faster than approximations based on projecting onto mixtures of the columns of the matrix. The performance of several such sketching schemes is empirically evaluated using a suite of canonical matrices drawn from machine learning and data analysis applications, and a framework is developed for establishing theoretical error bounds.
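
One widely used member of this class is the Nystrom-type sketch; a minimal version with a Gaussian test matrix follows (the thesis evaluates several sketching distributions, so the Gaussian choice here is just one instance):

```python
import numpy as np

def nystrom_sketch(A, ell, rng=np.random.default_rng(0)):
    """Nystrom-type PSD sketch: A ~ (A S)(S^T A S)^+ (A S)^T for a
    test matrix S with ell columns; only products with A are needed."""
    n = A.shape[0]
    S = rng.standard_normal((n, ell))
    AS = A @ S                        # tall sketch of A
    W = S.T @ AS                      # small ell-by-ell core matrix
    return AS @ np.linalg.pinv(W) @ AS.T
```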

In addition to studying these algorithms, this thesis extends the Matrix Laplace Transform framework to derive Chernoff and Bernstein inequalities that apply to all the eigenvalues of certain classes of random matrices. These inequalities are used to investigate the behavior of the singular values of a matrix under random sampling, and to derive convergence rates for each individual eigenvalue of a sample covariance matrix.
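
For orientation, a representative matrix Chernoff bound of the kind such a framework delivers (a standard statement for sums of independent random PSD matrices, quoted as a typical form rather than the thesis's exact result):

```latex
% For independent random PSD matrices X_k of dimension d with
% \lambda_{\max}(X_k) \le R almost surely, and
% \mu_{\max} = \lambda_{\max}\bigl(\sum_k \mathbb{E} X_k\bigr):
\Pr\Bigl\{\lambda_{\max}\Bigl(\textstyle\sum_k X_k\Bigr) \ge (1+\delta)\,\mu_{\max}\Bigr\}
  \;\le\; d \left[\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right]^{\mu_{\max}/R},
  \qquad \delta \ge 0.
```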

Relevance:

20.00%

Abstract:

Computational general relativity is a field of study which has reached maturity only within the last decade. This thesis details several studies that elucidate phenomena related to the coalescence of compact object binaries. Chapters 2 and 3 recount work towards developing new analytical tools for visualizing and reasoning about dynamics in strongly curved spacetimes. In both studies, the results employ analogies with the classical theory of electricity and magnetism, first (Ch. 2) in the post-Newtonian approximation to general relativity and then (Ch. 3) in full general relativity, though in the absence of matter sources. In Chapter 4, we examine the topological structure of absolute event horizons during binary black hole merger simulations conducted with the SpEC code. Chapter 6 reports on the progress of the SpEC code in simulating the coalescence of neutron star-neutron star binaries, while Chapter 7 tests the effects of various numerical gauge conditions on the robustness of black hole formation from stellar collapse in SpEC. In Chapter 5, we examine the nature of pseudospectral expansions of non-smooth functions, motivated by the need to simulate the stellar surface in Chapters 6 and 7. In Chapter 8, we study how thermal effects in the nuclear equation of state affect the equilibria and stability of hypermassive neutron stars. Chapter 9 presents supplements to the work in Chapter 8, including an examination of the stability question raised in Chapter 8 in greater mathematical detail.
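
As a toy illustration of the Chapter 5 issue (my example, not the thesis's): Chebyshev coefficients of a smooth function decay exponentially, while those of a function with a kink decay only algebraically, which is what degrades pseudospectral accuracy at a stellar surface.

```python
import numpy as np

# Compare the tail of the Chebyshev expansion for a smooth function
# and for one with a kink at x = 0.
x = np.cos(np.pi * np.arange(400) / 399)   # Chebyshev-Gauss-Lobatto nodes
for f, name in [(np.exp(x), "smooth exp(x)"), (np.abs(x), "kinked |x|")]:
    c = np.polynomial.chebyshev.chebfit(x, f, deg=60)
    print(f"{name:>15}: |c_60| = {abs(c[-1]):.1e}")
# exp(x) reaches machine precision quickly; |x| decays only like 1/n^2.
```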

Relevance:

20.00%

Abstract:

This thesis addresses a series of topics related to the question of how people find foreground objects in complex scenes. Using both computer vision modeling and psychophysical analyses, we explore the computational principles of low- and mid-level vision.

We first explore computational methods of generating saliency maps from images and image sequences. We propose an extremely fast algorithm called the Image Signature that detects the locations in an image that attract human gaze. Through a series of experimental validations based on human behavioral data collected from various psychophysical experiments, we conclude that the Image Signature and its spatio-temporal extension, the Phase Discrepancy, are among the most accurate algorithms for saliency detection under various conditions.
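
The algorithm itself is strikingly compact; a minimal grayscale sketch consistent with the published description follows (sign of the DCT, inverse transform, squaring, Gaussian smoothing; the smoothing width is an assumed parameter):

```python
import numpy as np
from scipy.fftpack import dct, idct
from scipy.ndimage import gaussian_filter

def image_signature_saliency(img, sigma=3.0):
    """Saliency map of a 2-D grayscale image from its image signature."""
    def dct2(a):
        return dct(dct(a, axis=0, norm='ortho'), axis=1, norm='ortho')
    def idct2(a):
        return idct(idct(a, axis=0, norm='ortho'), axis=1, norm='ortho')
    recon = idct2(np.sign(dct2(img)))          # reconstruct from the sign of the DCT
    return gaussian_filter(recon ** 2, sigma)  # square, then smooth
```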

In the second part, we bridge the gap between fixation prediction and salient object segmentation with two efforts. First, we propose a new dataset that contains both fixation and object segmentation information. By presenting the two types of human data in the same dataset, we are able to analyze their intrinsic connection and to understand the drawbacks of today’s “standard” but inappropriately labeled salient object segmentation datasets. Second, we propose an algorithm for salient object segmentation. Based on our findings on the connection between fixation data and salient object segmentation data, our model significantly outperforms all existing models on all three datasets by large margins.

In the third part of the thesis, we discuss topics around the human factors of boundary analysis. Closely related to salient object segmentation, boundary analysis focuses on delimiting the local contours of an object. We identify potential pitfalls in algorithm evaluation for the boundary detection problem. Our analysis indicates that today’s popular boundary detection datasets contain a significant level of noise, which may severely influence benchmarking results. To give further insight, we propose a model that characterizes the human factors at work during the labeling process.

The analyses reported in this thesis offer new perspectives on a series of interrelated issues in low- and mid-level vision. They raise warning signs about some of today’s “standard” procedures, while proposing new directions to encourage future research.

Relevance:

20.00%

Abstract:

Computational protein design (CPD) is a burgeoning field that uses physical-chemical or knowledge-based scoring functions to create protein variants with new or improved properties. This exciting approach has recently been used to generate proteins with entirely new functions, ones not observed in naturally occurring proteins. For example, several enzymes were designed to catalyze reactions that are not in the repertoire of any known natural enzyme. In these designs, novel catalytic activity was built de novo (from scratch) into a previously inert protein scaffold. In addition to de novo enzyme design, the computational design of protein-protein interactions can also be used to create novel functionality, such as the neutralization of influenza. Our goal here was to design a protein that can self-assemble with DNA into nanowires. We used computational tools to homodimerize a transcription factor that binds a specific sequence of double-stranded DNA. We arranged the protein-protein and protein-DNA binding sites so that self-assembly could occur in a linear fashion to generate nanowires. Upon mixing our designed protein homodimer with the double-stranded DNA, the molecules immediately self-assembled into nanowires. This nanowire topology was confirmed using atomic force microscopy. A co-crystal structure showed that the nanowire is assembled via the desired interactions. To the best of our knowledge, this is the first example of protein-DNA self-assembly that does not rely on covalent interactions. We anticipate that this new material will stimulate further interest in the development of advanced biomaterials.

Relevance:

20.00%

Abstract:

These studies explore how, where, and when variables critical to decision-making are represented in the brain. In order to produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that selects an action from those available. When choosing amongst alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and that these values are then compared in an abstract “value space” in order to produce a decision. Despite much progress, in particular the pinpointing of ventromedial prefrontal cortex (vmPFC) as a region that encodes value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner that is independent of stimulus type. This confirms that value is represented in the abstract, a key tenet of value-based decision-making. However, I also show that stimulus-dependent value representations are present in the brain during decision-making, and I suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.

More broadly, there is both neural and behavioral evidence that two distinct control systems are at work during action selection. These are the “goal-directed” system, which selects actions based on an internal model of the environment, and the “habitual” system, which generates responses based on antecedent stimuli only. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should utilize stimulus and action information only, while goal-directed behavior requires that outcomes, as well as stimuli and actions, be processed. In Chapter 3, I test whether areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.

The question of whether one or both of these neural systems drives Pavlovian conditioning is less well studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple yet non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), it was found that subjects’ reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable based on a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment can extend even to the most basic learning processes.

Knowledge of the state of hidden variables in an environment is required for optimal inference regarding the abstract decision structure of a given environment, and can therefore be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires the generation and manipulation of an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model fit and comparison process pointed to the use of "belief thresholding". This implies that subjects tended to eliminate low-probability hypotheses regarding the state of the environment from their internal model and ceased to update the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference, consistent with a serial hypothesis-testing strategy.
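
A minimal sketch of the belief-thresholding idea (my illustration; the pruning rule and threshold are assumptions, not the fitted model from Chapter 5): incremental Bayesian updating over discrete hypotheses, where hypotheses whose posterior falls below a threshold are eliminated and no longer updated.

```python
import numpy as np

def thresholded_bayes(prior, likelihood_seq, eps=0.01):
    """Incremental Bayes over discrete hypotheses with pruning:
    any hypothesis whose posterior drops below eps is zeroed out
    and ceases to be updated thereafter."""
    belief = np.asarray(prior, dtype=float)
    for lik in likelihood_seq:          # lik[i] = P(observation | hypothesis i)
        belief = belief * lik           # Bayes numerator
        belief /= belief.sum()
        belief[belief < eps] = 0.0      # eliminate low-probability hypotheses
        belief /= belief.sum()          # renormalize over survivors
    return belief
```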

Relevance:

20.00%

Abstract:

G protein-coupled receptors (GPCRs) are the largest family of proteins within the human genome. They consist of seven transmembrane (TM) helices, with an N-terminal region of varying length and structure on the extracellular side and a C-terminus on the intracellular side. GPCRs are involved in transmitting extracellular signals to cells, and as such are crucial drug targets. Designing pharmaceuticals that target GPCRs is greatly aided by full-atom structural information about the proteins. In particular, the TM region of GPCRs is where small molecule ligands (much more bioavailable than peptide ligands) typically bind to the receptors. In recent years nearly thirty distinct GPCR TM regions have been crystallized. However, there are more than 1,000 GPCRs, leaving the vast majority of GPCRs with limited structural information. Additionally, GPCRs are known to exist in a myriad of conformational states in the body, rendering the static X-ray crystal structures an incomplete reflection of GPCR structure. In order to obtain an ensemble of GPCR structures, we have developed the GEnSeMBLE procedure to rapidly sample a large number of variations of GPCR helix rotations and tilts. The lowest energy GEnSeMBLE structures are then docked to small molecule ligands and optimized. The GPCR family consists of five subfamilies with little to no sequence homology between them: class A, B1, B2, C, and Frizzled/Taste2. Almost all of the GPCR crystal structures have been of class A GPCRs, and much is known about their conserved interactions and binding sites. In this work we focus particularly on class B1 GPCRs, and aim to understand that family’s interactions and binding sites, both with small molecules and with their native peptide ligands. Specifically, we predict the full atom structure and peptide binding site of the glucagon-like peptide receptor and the TM region and small molecule binding sites for eight other class B1 GPCRs: CALRL, CRFR1, GIPR, GLR, PACR, PTH1R, VIPR1, and VIPR2. Our class B1 work reveals multiple conserved interactions across the B1 subfamily as well as a consistent small molecule binding site centrally located in the TM bundle. Both the interactions and the binding sites are distinct from those seen in the more well-characterized class A GPCRs, and as such our work provides a strong starting point for drug design targeting class B1 proteins. We also predict the full structure of CXCR4 bound to a small molecule, a class A GPCR that was not closely related to any of the crystallized class A GPCRs at the time of the work.
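
Schematically, the sampling step works like a combinatorial grid search over helix orientations; the sketch below is a deliberately simplified, hypothetical rendering (the energy function, step sizes, and hierarchical refinement of the real GEnSeMBLE procedure are far more elaborate):

```python
import itertools

ROTATIONS = range(0, 360, 90)  # coarse 90-degree grid; the real method samples far finer

def sample_bundles(energy_fn, n_keep=100):
    """Enumerate rotations of all 7 TM helices and keep the n_keep
    lowest-energy arrangements. energy_fn: 7-tuple of angles -> score."""
    scored = ((energy_fn(rots), rots)
              for rots in itertools.product(ROTATIONS, repeat=7))
    return sorted(scored)[:n_keep]
```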

Relevance:

20.00%

Abstract:

We present a complete system for Spectral Cauchy characteristic extraction (Spectral CCE). Implemented in C++ within the Spectral Einstein Code (SpEC), the method employs numerous innovative algorithms to efficiently calculate the Bondi strain, news, and flux.

Spectral CCE was envisioned to ensure that physically accurate gravitational waveforms are computed for the Laser Interferometer Gravitational-Wave Observatory (LIGO) and similar experiments, while working toward a template bank of more than a thousand waveforms to span the binary black hole (BBH) problem’s seven-dimensional parameter space.

The Bondi strain, news, and flux are physical quantities central to efforts to understand and detect astrophysical gravitational wave sources within the Simulations of eXtreme Spacetimes (SXS) collaboration, with the ultimate aim of providing the first strong-field probe of the Einstein field equation.

In a series of included papers, we demonstrate stability, convergence, and gauge invariance. We also demonstrate agreement between Spectral CCE and the legacy Pitt null code, while achieving a factor of 200 improvement in computational efficiency.

Spectral CCE represents a significant computational advance. It is the foundation upon which further capability will be built, specifically enabling the complete calculation of junk-free, gauge-free, and physically valid waveform data on the fly within SpEC.

Relevance:

20.00%

Abstract:

The layout of a typical optical microscope has remained effectively unchanged over the past century. Besides the widespread adoption of digital focal plane arrays, relatively few innovations have improved standard bright-field imaging. This thesis presents a new microscope imaging method, termed Fourier ptychography, which uses an LED array to provide variable sample illumination, together with post-processing algorithms that recover useful sample information. Examples include increasing the resolution of megapixel-scale images to one gigapixel, measuring quantitative phase, achieving oil-immersion-quality resolution without an immersion medium, and recovering complex three-dimensional sample structure.
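
The resolution gain comes from synthetic-aperture enlargement: in the standard Fourier ptychography analysis, oblique illumination shifts the sample spectrum so that the synthetic numerical aperture becomes the sum of the objective and illumination NAs.

```latex
\mathrm{NA}_{\mathrm{syn}} \;=\; \mathrm{NA}_{\mathrm{obj}} + \mathrm{NA}_{\mathrm{illum}},
\qquad
\text{resolvable half-pitch} \;\approx\; \frac{\lambda}{2\,\mathrm{NA}_{\mathrm{syn}}}.
```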

Relevance:

20.00%

Abstract:

Let M be an Abelian W*-algebra of operators on a Hilbert space H. Let M0 be the set of all linear, closed, densely defined transformations in H which commute with every unitary operator in the commutant M' of M. A well-known result of R. Pallu de la Barrière states that if φ is a normal positive linear functional on M, then φ is of the form T → (Tx, x) for some x in H, where T is in M. An elementary proof of this result is given, using only those properties which are consequences of the fact that ReM is a Dedekind complete Riesz space with plenty of normal integrals. The techniques used lead to a natural construction of the class M0, and an elementary proof is given of the fact that a positive self-adjoint transformation in M0 has a unique positive square root in M0. It is then shown that when the algebraic operations are suitably defined, M0 becomes a commutative algebra. If ReM0 denotes the set of all self-adjoint elements of M0, then it is proved that ReM0 is a Dedekind complete, universally complete Riesz space which contains ReM as an order dense ideal. A generalization of the result of R. Pallu de la Barrière is obtained for the Riesz space ReM0 which characterizes the normal integrals on the order dense ideals of ReM0. It is then shown that ReM0 may be identified with the extended order dual of ReM, and that ReM0 is perfect in the extended sense.

Some secondary questions related to the Riesz space ReM are also studied. In particular it is shown that ReM is a perfect Riesz space, and that every integral is normal under the assumption that every decomposition of the identity operator has non-measurable cardinal. The presence of atoms in ReM is examined briefly, and it is shown that ReM is finite dimensional if and only if every order bounded linear functional on ReM is a normal integral.

Relevance:

20.00%

Abstract:

In this thesis we are concerned with finding representations of the algebra of SU(3) vector and axial-vector charge densities at infinite momentum (the "current algebra") to describe the mesons, idealizing the real continua of multiparticle states as a series of discrete resonances of zero width. Such representations would describe the masses and quantum numbers of the mesons, the shapes of their Regge trajectories, their electromagnetic and weak form factors, and (approximately, through the PCAC hypothesis) pion emission or absorption amplitudes.

We assume that the mesons have internal degrees of freedom equivalent to being made of two quarks (one an antiquark) and look for models in which the mass is SU(3)-independent and the current is a sum of contributions from the individual quarks. Requiring that the current algebra, as well as conditions of relativistic invariance, be satisfied turns out to be very restrictive, and, in fact, no model has been found which satisfies all requirements and gives a reasonable mass spectrum. We show that using more general mass and current operators but keeping the same internal degrees of freedom will not make the problem any more solvable. In particular, in order for any two-quark solution to exist it must be possible to solve the "factorized SU(2) problem," in which the currents are isospin currents and are carried by only one of the component quarks (as in the K meson and its excited states).

In the free-quark model the currents at infinite momentum are found using a manifestly covariant formalism and are shown to satisfy the current algebra, but the mass spectrum is unrealistic. We then consider a pair of quarks bound by a potential, finding the current as a power series in 1/m, where m is the quark mass. Here it is found impossible to satisfy the algebra and relativistic invariance with the type of potential tried, because the current contributions from the two quarks do not commute with each other to order 1/m³. However, it may be possible to solve the factorized SU(2) problem with this model.

The factorized problem can be solved exactly in the case where all mesons have the same mass, using a covariant formulation in terms of an internal Lorentz group. For a more realistic, nondegenerate mass there is difficulty in covariantly solving even the factorized problem; one model is described which almost works but appears to require particles of spacelike 4-momentum, which seem unphysical.

Although the search for a completely satisfactory model has been unsuccessful, the techniques used here might eventually reveal a working model. There is also a possibility of satisfying a weaker form of the current algebra with existing models.

Relevance:

20.00%

Abstract:

Computational imaging is flourishing thanks to recent advances in array photodetectors and image processing algorithms. This thesis presents Fourier ptychography, a computational imaging technique implemented in microscopy to break the limits of conventional optics. With Fourier ptychography, the resolution of the imaging system can surpass the diffraction limit of the objective lens's numerical aperture; the quantitative phase information of a sample can be reconstructed from intensity-only measurements; and the aberration of a microscope system can be characterized and computationally corrected. This computational microscopy technique enhances the performance of conventional optical systems and expands the scope of their applications.
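
A minimal sketch of the alternating-projection recovery loop at the heart of Fourier ptychography (simplified to equal image sizes; real implementations reconstruct a larger spectrum and typically also update the pupil estimate to correct aberrations):

```python
import numpy as np

def fp_recover(images, pupil, shifts, n_iter=20):
    """images: measured low-res intensities, one per LED; pupil: 0/1
    aperture mask; shifts: Fourier-plane offset (dy, dx) per LED."""
    F = np.fft.fft2(np.sqrt(images[0]))               # initial spectrum guess
    for _ in range(n_iter):
        for img, (dy, dx) in zip(images, shifts):
            Fs = np.roll(F, (-dy, -dx), axis=(0, 1))  # center this LED's sub-aperture
            lo = np.fft.ifft2(Fs * pupil)             # simulated low-res field
            lo = np.sqrt(img) * np.exp(1j * np.angle(lo))    # enforce measured amplitude
            Fs = Fs * (1 - pupil) + np.fft.fft2(lo) * pupil  # write back into pupil region
            F = np.roll(Fs, (dy, dx), axis=(0, 1))
    return np.fft.ifft2(F)   # complex high-resolution sample field
```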