11 results for "computational costs"

in CaltechTHESIS


Relevance: 20.00%

Abstract:

This thesis studies three classes of randomized numerical linear algebra algorithms, namely: (i) randomized matrix sparsification algorithms, (ii) low-rank approximation algorithms that use randomized unitary transformations, and (iii) low-rank approximation algorithms for positive-semidefinite (PSD) matrices.

Randomized matrix sparsification algorithms set randomly chosen entries of the input matrix to zero. When the approximant is substituted for the original matrix in computations, its sparsity allows one to employ faster sparsity-exploiting algorithms. This thesis contributes bounds on the approximation error of nonuniform randomized sparsification schemes, measured in the spectral norm and two NP-hard norms that are of interest in computational graph theory and subset selection applications.
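To fix ideas, a minimal uniform sparsification scheme can be sketched in NumPy. This is an illustration only: the function name is made up, the keep-probability here is uniform (the thesis analyzes nonuniform, entry-dependent schemes), and the 1/p rescaling is the standard trick that keeps the sparsified matrix an unbiased estimate of the original.

```python
import numpy as np

def sparsify(A, p, rng=None):
    """Keep each entry of A independently with probability p and
    rescale the survivors by 1/p, so that E[sparsify(A, p)] == A.
    A uniform illustrative scheme, not the thesis's nonuniform one."""
    rng = np.random.default_rng(rng)
    mask = rng.random(A.shape) < p       # independent coin flip per entry
    return np.where(mask, A / p, 0.0)    # rescale kept entries, zero the rest

A = np.arange(12, dtype=float).reshape(3, 4)
S = sparsify(A, p=0.5, rng=0)            # roughly half the entries survive
```

The sparser `S` can then stand in for `A` in downstream computations that exploit sparsity.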

Low-rank approximations based on randomized unitary transformations have several desirable properties: they have low communication costs, are amenable to parallel implementation, and exploit the existence of fast transform algorithms. This thesis investigates the tradeoff between the accuracy and cost of generating such approximations. State-of-the-art spectral and Frobenius-norm error bounds are provided.
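The flavor of such transform-based schemes can be sketched with a subsampled randomized Fourier transform (SRFT): random signs, a fast transform, and a random column subset form the test matrix. This is a generic sketch of the idea, not the thesis's exact construction, and the function name and oversampling default are illustrative.

```python
import numpy as np

def srft_lowrank(A, k, oversample=10, rng=None):
    """Rank-(k + oversample) approximation of A via an SRFT-style
    sketch: flip signs, apply an FFT across columns, keep a random
    subset of the transformed columns, then project A onto the range
    of the resulting sketch."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    ell = min(n, k + oversample)
    signs = rng.choice([-1.0, 1.0], size=n)            # random sign flip per column
    cols = rng.choice(n, size=ell, replace=False)      # random column subsample
    Y = np.fft.fft(A * signs, axis=1)[:, cols]         # sketch of A's range
    Q, _ = np.linalg.qr(Y)                             # orthonormal range basis
    B = Q.conj().T @ A                                 # small projected factor
    return (Q @ B).real                                # low-rank approximation
```

Because the transform mixes all columns before subsampling, far fewer samples suffice than with plain column sampling, and the FFT keeps the sketching cost low.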

The last class of algorithms considered comprises PSD "sketching" algorithms. Such sketches can be computed faster than approximations based on projecting onto mixtures of the columns of the matrix. The performance of several such sketching schemes is empirically evaluated on a suite of canonical matrices drawn from machine learning and data analysis applications, and a framework is developed for establishing theoretical error bounds.
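One widely used member of this family is the column Nyström sketch, which approximates a PSD matrix from a subset of its columns. The sketch below is a minimal illustration (the index set is hand-picked; practical schemes choose it randomly), not the thesis's full evaluation framework.

```python
import numpy as np

def nystrom(A, idx):
    """Column Nystrom sketch of a PSD matrix A: approximate
    A ~= C @ pinv(W) @ C.T, where C = A[:, idx] holds the sampled
    columns and W is the corresponding principal submatrix."""
    C = A[:, idx]
    W = A[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T

# On a rank-2 PSD matrix, sampling three generic columns recovers A exactly.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2))
A = X @ X.T
A_hat = nystrom(A, [0, 1, 2])
```

The sketch touches only the sampled columns of `A`, which is the source of its speed advantage over projection-based approximations.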

In addition to studying these algorithms, this thesis extends the Matrix Laplace Transform framework to derive Chernoff and Bernstein inequalities that apply to all the eigenvalues of certain classes of random matrices. These inequalities are used to investigate the behavior of the singular values of a matrix under random sampling, and to derive convergence rates for each individual eigenvalue of a sample covariance matrix.
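For orientation, the classical matrix Bernstein inequality is the single-largest-eigenvalue bound that such frameworks extend to all eigenvalues. For independent, centered, symmetric d-by-d random matrices X_k with lambda_max(X_k) <= R almost surely and sigma^2 = ||sum_k E[X_k^2]||, a standard statement (given here as background, not as the thesis's exact result) reads:

```latex
\Pr\left\{ \lambda_{\max}\Big( \sum_{k} X_k \Big) \ge t \right\}
\;\le\; d \cdot \exp\!\left( \frac{-t^2/2}{\sigma^2 + R t / 3} \right)
```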

Relevance: 20.00%

Abstract:

Computational general relativity is a field of study which has reached maturity only within the last decade. This thesis details several studies that elucidate phenomena related to the coalescence of compact object binaries. Chapters 2 and 3 recount work towards developing new analytical tools for visualizing and reasoning about dynamics in strongly curved spacetimes. Both studies employ analogies with the classical theory of electricity and magnetism, first (Ch. 2) in the post-Newtonian approximation to general relativity and then (Ch. 3) in full general relativity, though in the absence of matter sources. In Chapter 4, we examine the topological structure of absolute event horizons during binary black hole merger simulations conducted with the SpEC code. Chapter 6 reports on the progress of the SpEC code in simulating the coalescence of neutron star-neutron star binaries, while Chapter 7 tests the effects of various numerical gauge conditions on the robustness of black hole formation from stellar collapse in SpEC. In Chapter 5, we examine the nature of pseudospectral expansions of non-smooth functions, motivated by the need to simulate the stellar surface in Chapters 6 and 7. In Chapter 8, we study how thermal effects in the nuclear equation of state affect the equilibria and stability of hypermassive neutron stars. Chapter 9 presents supplements to the work in Chapter 8, including an examination of the stability question raised there in greater mathematical detail.

Relevance: 20.00%

Abstract:

This thesis addresses a series of topics related to the question of how people find foreground objects in complex scenes. Using both computer vision modeling and psychophysical analyses, we explore the computational principles of low- and mid-level vision.

We first explore computational methods of generating saliency maps from images and image sequences. We propose an extremely fast algorithm, called the Image Signature, that detects the locations in an image that attract human gaze. Through a series of validations against human behavioral data collected in various psychophysical experiments, we conclude that the Image Signature and its spatio-temporal extension, the Phase Discrepancy, are among the most accurate algorithms for saliency detection under various conditions.
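The published Image Signature recipe is compact enough to sketch: keep only the signs of the image's DCT coefficients, invert the transform, square, and blur. The sketch below follows that recipe for a single grayscale channel; the function name and the smoothing parameter are illustrative choices, not values from this thesis.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def image_signature_saliency(img, sigma=3.0):
    """Saliency map from the image signature: the sign pattern of the
    DCT coefficients, inverted back to image space, squared, and
    smoothed with a Gaussian."""
    sig = np.sign(dctn(img, norm='ortho'))        # the "image signature"
    recon = idctn(sig, norm='ortho')              # reconstruct from signs only
    return gaussian_filter(recon * recon, sigma)  # energy map, blurred
```

Discarding coefficient magnitudes concentrates reconstruction energy on spatially sparse foreground structure, which is why the map highlights candidate fixation locations at negligible computational cost.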

In the second part, we bridge the gap between fixation prediction and salient object segmentation with two efforts. First, we propose a new dataset that contains both fixation and object segmentation information. By presenting the two types of human data in the same dataset, we are able to analyze their intrinsic connection and to expose the drawbacks of today's "standard" but inappropriately labeled salient object segmentation datasets. Second, we propose an algorithm for salient object segmentation. Building on our discoveries about the connection between fixation data and salient object segmentation data, our model significantly outperforms all existing models on all three datasets, by large margins.

In the third part of the thesis, we discuss the human factors of boundary analysis. Closely related to salient object segmentation, boundary analysis focuses on delimiting the local contours of an object. We identify potential pitfalls in algorithm evaluation for boundary detection: our analysis indicates that today's popular boundary detection datasets contain a significant level of noise, which may severely influence benchmarking results. To give further insight into the labeling process, we propose a model that characterizes the human factors at work during labeling.

The analyses reported in this thesis offer new perspectives on a series of interrelated issues in low- and mid-level vision. They raise warning signs about some of today's "standard" procedures, while proposing new directions to encourage future research.

Relevance: 20.00%

Abstract:

Computational protein design (CPD) is a burgeoning field that uses a physical-chemical or knowledge-based scoring function to create protein variants with new or improved properties. This exciting approach has recently been used to generate proteins with entirely new functions that are not observed in naturally occurring proteins. For example, several enzymes have been designed to catalyze reactions that are not in the repertoire of any known natural enzyme. In these designs, novel catalytic activity was built de novo (from scratch) into a previously inert protein scaffold. Beyond de novo enzyme design, the computational design of protein-protein interactions can also create novel functionality, such as neutralization of influenza. Our goal here was to design a protein that can self-assemble with DNA into nanowires. We used computational tools to homodimerize a transcription factor that binds a specific sequence of double-stranded DNA, and we arranged the protein-protein and protein-DNA binding sites so that self-assembly could occur in a linear fashion to generate nanowires. Upon mixing our designed protein homodimer with the double-stranded DNA, the molecules immediately self-assembled into nanowires. This nanowire topology was confirmed using atomic force microscopy, and a co-crystal structure showed that the nanowire is assembled via the desired interactions. To the best of our knowledge, this is the first example of protein-DNA self-assembly that does not rely on covalent interactions. We anticipate that this new material will stimulate further interest in the development of advanced biomaterials.

Relevance: 20.00%

Abstract:

Red fluorescent proteins (RFPs) have attracted significant engineering focus because of the promise of near-infrared fluorescent proteins, whose light penetrates biological tissue and which would allow imaging inside vertebrate animals. The RFP landscape, which numbers ~200 members, is mostly populated by engineered variants of four native RFPs, leaving the vast majority of native RFP biodiversity untouched. This is largely due to the fact that native RFPs are obligate tetramers, limiting their usefulness as fusion proteins. Monomerization has imposed critical costs on these evolved tetramers, however, as it has invariably led to loss of brightness, and often to other adverse effects on the fluorescent properties of the derived monomeric variants. Here we have attempted to understand why monomerization has taken such a large toll on Anthozoa-class RFPs, and to outline a clear strategy for their monomerization. We begin with a structural study of the far-red fluorescence of AQ143, one of the farthest-red-emitting RFPs. We then separate the problem of stable and bright fluorescence from the design of a soluble monomeric β-barrel surface by engineering a hybrid protein (DsRmCh) from an oligomeric parent that had previously been monomerized, DsRed, and a pre-stabilized monomeric core from mCherry. This allows us to use computational design to successfully engineer a stable, soluble, fluorescent monomer. Next we took HcRed, a previously unmonomerized RFP with far-red fluorescence (λem = 633 nm), and attempted to monomerize it using lessons learned from DsRmCh. We engineered two monomeric proteins by pre-stabilizing HcRed's core and then monomerizing in stages, using computational design and directed-evolution techniques such as error-prone mutagenesis and DNA shuffling. We call these proteins mGinger0.1 (λem = 637 nm / Φ = 0.02) and mGinger0.2 (λem = 631 nm / Φ = 0.04). They are the farthest-red first-generation monomeric RFPs yet developed, are significantly thermostabilized, and add diversity to the small field of far-red monomeric FPs. We anticipate that the techniques we describe will facilitate future RFP monomerization, and that further core optimization of the mGingers may allow significant improvements in brightness.

Relevance: 20.00%

Abstract:

These studies explore how, where, and when variables critical to decision-making are represented in the brain. To produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that selects an action from those available. When choosing among alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and then compared in an abstract "value space" in order to produce a decision. Despite much progress, in particular the pinpointing of ventromedial prefrontal cortex (vmPFC) as a region that encodes value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner that is independent of stimulus type. This confirms a key tenet of value-based decision-making that had remained an open question: that value is represented abstractly. However, I also show that stimulus-dependent value representations are present in the brain during decision-making, and I suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.

More broadly, there is both neural and behavioral evidence that two distinct control systems are at work during action selection: the "goal-directed" system, which selects actions based on an internal model of the environment, and the "habitual" system, which generates responses based on antecedent stimuli only. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should utilize stimulus and action information only, while goal-directed behavior requires that outcomes, as well as stimuli and actions, be processed. In Chapter 3, I test whether areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.
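The contrast in informational requirements can be made concrete with two toy update rules: a model-free (habitual) cached-value update and a model-based (goal-directed) evaluation. These are textbook sketches for illustration; the names, parameters, and tabular form are not the chapter's actual models.

```python
import numpy as np

def habitual_update(Q, s, a, r, alpha=0.1):
    """Model-free ('habitual') step: nudge the cached value of the
    taken action toward the received reward. Uses only stimulus s,
    action a, and reward r -- no representation of outcomes."""
    Q[s, a] += alpha * (r - Q[s, a])
    return Q

def goal_directed_value(T, R, s, a):
    """Model-based ('goal-directed') evaluation: combine an internal
    outcome model T[s, a, s'] with outcome values R[s']. Requires
    representing outcomes as well as stimuli and actions."""
    return T[s, a] @ R
```

The habitual rule never consults `T` or `R`, which is exactly the associative-learning prediction that the two systems should engage different variable representations in the brain.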

The question of whether one or both of these neural systems drives Pavlovian conditioning is less well-studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), it was found that subjects’ reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable based on a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment can extend even to the most basic learning processes.

Knowledge of the state of hidden variables in an environment is required for optimal inference regarding the abstract decision structure of a given environment, and can therefore be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires generating and manipulating an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model-fitting and comparison process pointed to the use of "belief thresholding": subjects tended to eliminate low-probability hypotheses regarding the state of the environment from their internal model and ceased to update the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference, consistent with a serial hypothesis-testing strategy.
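A minimal sketch of the belief-thresholding idea, assuming a discrete hypothesis space and an illustrative pruning threshold (this is not the fitted model from the chapter):

```python
import numpy as np

def update_beliefs(beliefs, likelihoods, threshold=0.01):
    """One Bayesian belief update with 'belief thresholding':
    hypotheses whose posterior probability falls below the threshold
    are pruned (set to zero) and never updated again, mimicking a
    serial hypothesis-testing strategy."""
    post = beliefs * likelihoods                 # unnormalized Bayes rule
    post[post / post.sum() < threshold] = 0.0    # prune implausible hypotheses
    return post / post.sum()                     # renormalize the survivors
```

Once a hypothesis is pruned its probability stays at zero under further updates, so the learner carries an explicitly simplified internal model rather than a full posterior.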

Relevance: 20.00%

Abstract:

G protein-coupled receptors (GPCRs) are the largest family of proteins within the human genome. They consist of seven transmembrane (TM) helices, with an N-terminal region of varying length and structure on the extracellular side and a C-terminus on the intracellular side. GPCRs transmit extracellular signals to cells, and as such are crucial drug targets. Designing pharmaceuticals to target GPCRs is greatly aided by full-atom structural information about the proteins. In particular, the TM region of GPCRs is where small-molecule ligands (much more bioavailable than peptide ligands) typically bind to the receptors. In recent years, nearly thirty distinct GPCR TM regions have been crystallized. However, there are more than 1,000 GPCRs, leaving the vast majority of GPCRs with limited structural information. Additionally, GPCRs are known to exist in a myriad of conformational states in the body, rendering the static X-ray crystal structures an incomplete reflection of GPCR structures. In order to obtain an ensemble of GPCR structures, we have developed the GEnSeMBLE procedure to rapidly sample a large number of variations of GPCR helix rotations and tilts. The lowest-energy GEnSeMBLE structures are then docked to small-molecule ligands and optimized. The GPCR family consists of five subfamilies with little to no sequence homology between them: class A, B1, B2, C, and Frizzled/Taste2. Almost all of the GPCR crystal structures have been of class A GPCRs, and much is known about their conserved interactions and binding sites. In this work we focus on class B1 GPCRs, and aim to understand that subfamily's interactions and binding sites with both small molecules and their native peptide ligands. Specifically, we predict the full-atom structure and peptide binding site of the glucagon-like peptide receptor, and the TM region and small-molecule binding sites for eight other class B1 GPCRs: CALRL, CRFR1, GIPR, GLR, PACR, PTH1R, VIPR1, and VIPR2.

Our class B1 work reveals multiple conserved interactions across the B1 subfamily, as well as a consistent small-molecule binding site centrally located in the TM bundle. Both the interactions and the binding sites are distinct from those seen in the better-characterized class A GPCRs, and as such our work provides a strong starting point for drug design targeting class B1 proteins. We also predict the full structure of CXCR4, a class A GPCR, bound to a small molecule; at the time of the work, CXCR4 was not closely related to any crystallized class A GPCR.
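The sample-and-rank strategy behind conformational ensemble generation can be sketched as a toy: enumerate combinations of a helix rotation and tilt, score each, and keep the lowest-energy conformations. Everything here is illustrative, assuming a single helix and a placeholder energy function rather than GEnSeMBLE's actual multi-helix sampling and force field.

```python
import itertools

def sample_and_rank(rotations, tilts, energy_fn, keep=10):
    """Toy GEnSeMBLE-style sweep: exhaustively score (rotation, tilt)
    combinations for one helix with a user-supplied energy function
    and return the `keep` lowest-energy conformations."""
    combos = list(itertools.product(rotations, tilts))
    return sorted(combos, key=energy_fn)[:keep]

# Placeholder quadratic "energy" favoring the untilted, unrotated pose.
best = sample_and_rank(range(0, 360, 30), range(-10, 11, 5),
                       energy_fn=lambda c: c[0] ** 2 + c[1] ** 2, keep=3)
```

In the real procedure the combinatorial sweep runs over all seven TM helices, and only the surviving low-energy structures proceed to ligand docking and optimization.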

Relevance: 20.00%

Abstract:

We present a complete system for Spectral Cauchy characteristic extraction (Spectral CCE). Implemented in C++ within the Spectral Einstein Code (SpEC), the method employs numerous innovative algorithms to efficiently calculate the Bondi strain, news, and flux.

Spectral CCE was envisioned to ensure that the gravitational waveforms computed for the Laser Interferometer Gravitational-Wave Observatory (LIGO) and similar experiments are physically accurate, while working toward a template bank of more than a thousand waveforms spanning the binary black hole (BBH) problem's seven-dimensional parameter space.

The Bondi strain, news, and flux are physical quantities central to efforts to understand and detect astrophysical gravitational wave sources within the Simulating eXtreme Spacetimes (SXS) collaboration, with the ultimate aim of providing the first strong-field probe of the Einstein field equations.

In a series of included papers, we demonstrate stability, convergence, and gauge invariance. We also demonstrate agreement between Spectral CCE and the legacy Pitt null code, while achieving a factor of 200 improvement in computational efficiency.

Spectral CCE represents a significant computational advance. It is the foundation upon which further capability will be built, specifically enabling the complete calculation of junk-free, gauge-free, and physically valid waveform data on the fly within SpEC.

Relevance: 20.00%

Abstract:

With the installation by the Pacific Electric Railway of a bus system in Pasadena to supplant most of its trolley lines, the problem of the comparison of the costs of the two systems naturally presented itself. The study here undertaken was originally started as just a comparison of the motor bus and Birney Safety Car, but as the work progressed it seemed advisable to include the trolley bus as well - a method of transportation that is comparatively new as far as development is concerned, but which seems to be finding increasing favor in the East.

Relevance: 20.00%

Abstract:

The layout of a typical optical microscope has remained effectively unchanged over the past century. Besides the widespread adoption of digital focal plane arrays, relatively few innovations have helped improve standard imaging with bright-field microscopes. This thesis presents a new microscope imaging method, termed Fourier ptychography, which uses an LED array to provide variable sample illumination and post-processing algorithms to recover useful sample information. Examples include increasing the resolution of megapixel-scale images to one gigapixel, measuring quantitative phase, achieving oil-immersion-quality resolution without an immersion medium, and recovering complex three-dimensional sample structure.

Relevance: 20.00%

Abstract:

Computational imaging is flourishing thanks to recent advances in array photodetectors and image processing algorithms. This thesis presents Fourier ptychography, a computational imaging technique implemented in microscopy to break the limits of conventional optics. With Fourier ptychography, the resolution of the imaging system can surpass the diffraction limit of the objective lens's numerical aperture; the quantitative phase of a sample can be reconstructed from intensity-only measurements; and the aberrations of a microscope system can be characterized and computationally corrected. This computational microscopy technique enhances the performance of conventional optical systems and expands the scope of their applications.
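The heart of a Fourier-ptychographic reconstruction can be sketched as a single sub-aperture update in the Fourier domain: an illustrative fragment that assumes an idealized binary pupil mask and omits the outer loop over LED illumination angles and convergence iterations.

```python
import numpy as np

def fp_subaperture_update(spectrum, mask, measured_amplitude):
    """One Fourier-ptychography style update: extract the low-resolution
    sub-aperture selected by `mask` from the high-resolution spectrum,
    replace the spatial-domain amplitude with the measured one while
    keeping the estimated phase, and write the result back."""
    low = np.fft.ifft2(spectrum * mask)                    # low-res field estimate
    low = measured_amplitude * np.exp(1j * np.angle(low))  # enforce measured intensity
    updated = np.fft.fft2(low)                             # back to Fourier domain
    return np.where(mask.astype(bool), updated, spectrum)  # stitch sub-aperture in
```

Looping this update over many illumination angles, each of which shifts a different sub-aperture of the sample spectrum into the objective's passband, is what lets the synthesized aperture exceed the lens's native numerical aperture and recover phase from intensity-only data.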