7 results for individual interest
in CaltechTHESIS
Abstract:
This thesis studies three classes of randomized numerical linear algebra algorithms, namely: (i) randomized matrix sparsification algorithms, (ii) low-rank approximation algorithms that use randomized unitary transformations, and (iii) low-rank approximation algorithms for positive-semidefinite (PSD) matrices.
Randomized matrix sparsification algorithms set randomly chosen entries of the input matrix to zero. When the approximant is substituted for the original matrix in computations, its sparsity allows one to employ faster sparsity-exploiting algorithms. This thesis contributes bounds on the approximation error of nonuniform randomized sparsification schemes, measured in the spectral norm and two NP-hard norms that are of interest in computational graph theory and subset selection applications.
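To make the idea concrete, here is a minimal sketch of one common nonuniform sparsification scheme: each entry is kept independently with probability proportional to its magnitude and rescaled so the sparsifier is unbiased. The probability rule and parameters are illustrative assumptions, not the specific schemes analyzed in the thesis.

```python
import numpy as np
from scipy import sparse

def sparsify(A, s, rng=None):
    """Nonuniform randomized sparsification (illustrative scheme).

    Keeps entry A[i, j] independently with probability
    p_ij = min(1, s * |A[i, j]| / sum|A|) and rescales kept entries
    by 1 / p_ij, so the sparsifier X is unbiased: E[X] = A.
    """
    rng = np.random.default_rng(rng)
    p = np.minimum(1.0, s * np.abs(A) / np.abs(A).sum())
    keep = rng.random(A.shape) < p          # Bernoulli(p_ij) coin flips
    X = np.zeros_like(A)
    X[keep] = A[keep] / p[keep]             # rescale surviving entries
    return sparse.csr_matrix(X)

A = np.random.default_rng(0).standard_normal((500, 500))
X = sparsify(A, s=20_000, rng=1)
err = np.linalg.norm(A - X.toarray(), 2) / np.linalg.norm(A, 2)
print(f"kept {X.nnz} of {A.size} entries, relative spectral-norm error {err:.3f}")
```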
Low-rank approximations based on randomized unitary transformations have several desirable properties: they have low communication costs, are amenable to parallel implementation, and exploit the existence of fast transform algorithms. This thesis investigates the tradeoff between the accuracy and cost of generating such approximations. State-of-the-art spectral and Frobenius-norm error bounds are provided.
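As a rough illustration of this family of methods, the sketch below samples the range of A by applying random sign flips, a fast orthogonal transform, and column subsampling, then projects A onto the resulting orthonormal basis. A DCT is used as a stand-in for the fast transform, and the oversampling parameter is an assumption; this is not the thesis's exact construction.

```python
import numpy as np
from scipy.fft import dct

def srft_lowrank(A, k, p=10, rng=None):
    """Low-rank approximation via a subsampled randomized fast transform.

    Forms the sketch Y = A @ Omega, where Omega applies a random diagonal
    of signs, a fast orthogonal transform (DCT here as a stand-in), and
    random column subsampling. Returns Q, B with A ~ Q @ B and Q having
    k + p orthonormal columns.
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    ell = k + p
    signs = rng.choice([-1.0, 1.0], size=n)            # random diagonal D
    cols = rng.choice(n, size=ell, replace=False)      # random subsampling R
    Y = dct(A * signs, axis=1, norm="ortho")[:, cols]  # Y = A D F R
    Q, _ = np.linalg.qr(Y)                             # orthonormal range basis
    B = Q.T @ A                                        # A ~ Q (Q^T A)
    return Q, B

# usage: approximate a noisy low-rank matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 20)) @ rng.standard_normal((20, 400))
A += 0.01 * rng.standard_normal(A.shape)
Q, B = srft_lowrank(A, k=20, rng=1)
print("relative Frobenius error:", np.linalg.norm(A - Q @ B) / np.linalg.norm(A))
```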
The last class of algorithms considered consists of "sketching" algorithms for symmetric positive-semidefinite (SPSD) matrices. Such sketches can be computed faster than approximations based on projecting onto mixtures of the columns of the matrix. The performance of several such sketching schemes is empirically evaluated using a suite of canonical matrices drawn from machine learning and data analysis applications, and a framework is developed for establishing theoretical error bounds.
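A representative member of this class is the Nyström-type sketch A ~ C W^+ C^T, where C collects a random subset of columns of A and W is the corresponding principal submatrix. The sketch below uses plain uniform column sampling and a toy kernel matrix purely for illustration.

```python
import numpy as np

def nystrom_sketch(A, ell, rng=None):
    """Nyström-type SPSD sketch: A ~ C @ pinv(W) @ C.T,
    with C = A[:, S] and W = A[S, S] for a random column subset S."""
    rng = np.random.default_rng(rng)
    S = rng.choice(A.shape[0], size=ell, replace=False)
    C = A[:, S]
    W = A[np.ix_(S, S)]
    return C @ np.linalg.pinv(W) @ C.T

# toy SPSD test matrix: RBF kernel of random points (illustrative)
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
A = np.exp(-d2)
A_hat = nystrom_sketch(A, ell=60, rng=1)
print("relative Frobenius error:", np.linalg.norm(A - A_hat) / np.linalg.norm(A))
```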
In addition to studying these algorithms, this thesis extends the Matrix Laplace Transform framework to derive Chernoff and Bernstein inequalities that apply to all the eigenvalues of certain classes of random matrices. These inequalities are used to investigate the behavior of the singular values of a matrix under random sampling, and to derive convergence rates for each individual eigenvalue of a sample covariance matrix.
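The following toy experiment (not the thesis's bounds) illustrates the phenomenon these inequalities quantify: each eigenvalue of a sample covariance matrix concentrates around the corresponding population eigenvalue as the number of samples grows.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
true_eigs = np.linspace(1.0, 10.0, d)            # population covariance eigenvalues
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
Sigma = U @ np.diag(true_eigs) @ U.T             # population covariance matrix

for n in (50, 500, 5000):
    X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    sample_eigs = np.linalg.eigvalsh(X.T @ X / n)        # sample covariance spectrum
    err = np.max(np.abs(np.sort(sample_eigs) - np.sort(true_eigs)))
    print(f"n = {n:5d}: max eigenvalue deviation = {err:.3f}")
```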
Abstract:
A general framework for multi-criteria optimal design is presented that is well suited for the automated design of structural systems. A systematic computer-aided optimal design decision process is developed that allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest arising in design, construction, and operation.
The proposed optimal design process requires selecting the most promising choice of design parameters from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of a design uses performance parameters, which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form: they give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain the design with the highest overall evaluation measure, which is an optimization problem.
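A stripped-down sketch of how such preference functions and a combination rule might be coded is given below. The linear preference shape, the weighted geometric-mean combination rule, and the numerical targets are illustrative assumptions rather than the thesis's actual definitions.

```python
import numpy as np

def preference(x, best, worst):
    """Soft preference in [0, 1]: 1 when the performance parameter x meets
    the "best" target, 0 beyond the "worst" limit, linear in between
    (smaller-is-better convention)."""
    return float(np.clip((worst - x) / (worst - best), 0.0, 1.0))

def overall_measure(prefs, weights):
    """Illustrative combination rule: a weighted geometric mean, so any
    criterion with zero preference drives the overall measure to zero."""
    prefs, weights = np.asarray(prefs), np.asarray(weights)
    return float(np.prod(prefs ** (weights / weights.sum())))

# e.g. a candidate design with drift ratio 0.012, cost 1.4 M$, risk 0.03
prefs = [
    preference(0.012, best=0.005, worst=0.020),   # structural response
    preference(1.4, best=1.0, worst=2.0),         # construction cost
    preference(0.03, best=0.01, worst=0.10),      # risk measure
]
print("overall evaluation measure:", overall_measure(prefs, weights=[2, 1, 1]))
```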
Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the exploration power needed to search high-dimensional design spaces for these optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively.
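The sketch below is not the thesis's hGA or vGA; it is a minimal real-coded genetic algorithm showing the basic selection, crossover, and mutation loop such methods use to explore a continuous design space.

```python
import numpy as np

def minimal_ga(fitness, bounds, pop_size=50, gens=100, rng=None):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation. Maximizes `fitness` over the box `bounds`."""
    rng = np.random.default_rng(rng)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(gens):
        fit = np.array([fitness(x) for x in pop])
        best = pop[np.argmax(fit)].copy()                   # elitism
        # tournament selection of parents
        idx = rng.integers(pop_size, size=(pop_size, 2))
        winners = np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # blend crossover between consecutive parents
        alpha = rng.random((pop_size, len(lo)))
        children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
        # Gaussian mutation, clipped back to the design space
        children += rng.normal(0.0, 0.05 * (hi - lo), children.shape)
        pop = np.clip(children, lo, hi)
        pop[0] = best
    fit = np.array([fitness(x) for x in pop])
    return pop[np.argmax(fit)]

# toy fitness on a 2-parameter design space (maximum at x = (1, -2))
best_x = minimal_ga(lambda x: -((x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2),
                    bounds=[(-5, 5), (-5, 5)], rng=0)
print("best design parameters:", best_x)
```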
The methodology is demonstrated with several examples involving the design of truss and frame systems, which are solved using the proposed hGA and vGA.
Abstract:
The visual system is a remarkable platform that evolved to solve difficult computational problems such as the detection, recognition, and classification of objects. Of particular interest is the face-processing network, a sub-system buried deep in the temporal lobe that is dedicated to analyzing a specific type of object: faces. In this thesis, I focus on the problem of face detection by the face-processing network. Insights obtained from years of developing computer-vision algorithms for this task suggest that it may be solved efficiently and effectively by detecting and integrating local contrast features. Does the brain use a similar strategy?

To answer this question, I embark on a journey through the development and optimization of dedicated tools for targeting and perturbing deep brain structures. Data collected using MR-guided electrophysiology in early face-processing regions revealed strong selectivity for contrast features similar to those used by artificial systems. While individual cells were tuned to only a small subset of features, the population as a whole encoded the full spectrum of features predictive of the presence of a face in an image. Together with additional evidence, my results suggest a possible computational mechanism for face detection in early face-processing regions.

To move from correlation to causation, I adopt an emerging technology for perturbing brain activity with light: optogenetics. While this technique has the potential to overcome problems associated with the de facto standard for brain stimulation, electrical microstimulation, many open questions remain about its applicability and effectiveness for perturbing the non-human primate (NHP) brain. In a set of experiments, I use viral vectors to deliver genetically encoded optogenetic constructs to the frontal eye field and face-selective regions in NHP and examine their effects side by side with electrical microstimulation to assess their effectiveness in perturbing neural activity as well as behavior. The results suggest that cells are robustly and strongly modulated upon light delivery and that such perturbation can modulate and even initiate motor behavior, paving the way for future explorations that apply these tools to study connectivity and information flow in the face-processing network.
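To make the contrast-feature idea from the first part of this abstract concrete, the toy sketch below computes the mean luminance of a few coarse face regions and takes pairwise differences, in the spirit of ratio-template face detectors. The region layout and feature set are illustrative assumptions, not the features the recorded neurons were actually tuned to.

```python
import numpy as np
from itertools import combinations

# coarse face regions as (top, bottom, left, right) fractions of the image;
# the layout is an illustrative assumption
REGIONS = {
    "forehead":  (0.05, 0.20, 0.20, 0.80),
    "left_eye":  (0.20, 0.40, 0.15, 0.45),
    "right_eye": (0.20, 0.40, 0.55, 0.85),
    "nose":      (0.40, 0.65, 0.35, 0.65),
    "mouth":     (0.70, 0.90, 0.25, 0.75),
}

def region_luminance(img, frac):
    """Mean luminance inside one fractional region of the image."""
    h, w = img.shape
    t, b, l, r = frac
    return img[int(t * h):int(b * h), int(l * w):int(r * w)].mean()

def contrast_features(img):
    """Signed luminance differences between all pairs of face regions;
    a detector could threshold or pool these polarity features."""
    lum = {name: region_luminance(img, f) for name, f in REGIONS.items()}
    return {(a, b): lum[a] - lum[b] for a, b in combinations(REGIONS, 2)}

img = np.random.default_rng(0).random((128, 128))   # placeholder image
for pair, value in contrast_features(img).items():
    print(pair, f"{value:+.3f}")
```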
Abstract:
The warm-plasma resonance cone structure of the quasistatic field produced by a gap source in a bounded, magnetized slab plasma is determined theoretically. This is done first for a homogeneous or mildly inhomogeneous plasma with source frequency lying between the lower hybrid frequency and the plasma frequency. The analysis is then extended to the more complicated case of an inhomogeneous plasma with two internal lower hybrid layers, which is of interest for radio-frequency heating of plasmas.
In the first case, the potential is obtained as a sum of multiply reflected warm plasma resonance cones, each of which has a similar structure, but a different size, amplitude, and position. An important interference between nearby multiply-reflected resonance cones is found. The cones are seen to spread out as they move away from the source, so that this interference increases and the individual resonance cones become obscured far away from the source.
In the second case, the potential is found to be expressible as a sum of multiply-reflected, multiply-tunnelled, and mode-converted resonance cones, each of which has a unique but similar structure. Both collisional and collisionless damping are included, and their effects on the decay of the cone structure are studied. Various properties of the cones are determined, such as how they move into and out of the hybrid layers, pass through the evanescent region, and transform at the hybrid layers. It is found that cones can tunnel through the evanescent layer if the layer is thin, and that the effect of a thin evanescent layer is to subdue the secondary maxima of the cone relative to the main peak, while slightly broadening the main peak and shifting it closer to the cold-plasma cone line.
Energy theorems for quasistatic fields are developed and applied to determine the power flow and absorption along the individual cones. This reveals the points of concentration of the flow and the various absorption mechanisms.
Abstract:
With the advent of the laser in the year 1960, the field of optics experienced a renaissance from what was considered to be a dull, solved subject to an active area of development, with applications and discoveries which are yet to be exhausted 55 years later. Light is now nearly ubiquitous not only in cutting-edge research in physics, chemistry, and biology, but also in modern technology and infrastructure. One quality of light, that of the imparted radiation pressure force upon reflection from an object, has attracted intense interest from researchers seeking to precisely monitor and control the motional degrees of freedom of an object using light. These optomechanical interactions have inspired myriad proposals, ranging from quantum memories and transducers in quantum information networks to precision metrology of classical forces. Alongside advances in micro- and nano-fabrication, the burgeoning field of optomechanics has yielded a class of highly engineered systems designed to produce strong interactions between light and motion.
Optomechanical crystals are one such system in which the patterning of periodic holes in thin dielectric films traps both light and sound waves to a micro-scale volume. These devices feature strong radiation pressure coupling between high-quality optical cavity modes and internal nanomechanical resonances. Whether for applications in the quantum or classical domain, the utility of optomechanical crystals hinges on the degree to which light radiating from the device, having interacted with mechanical motion, can be collected and detected in an experimental apparatus consisting of conventional optical components such as lenses and optical fibers. While several efficient methods of optical coupling exist to meet this task, most are unsuitable for the cryogenic or vacuum integration required for many applications. The first portion of this dissertation will detail the development of robust and efficient methods of optically coupling optomechanical resonators to optical fibers, with an emphasis on fabrication processes and optical characterization.
I will then proceed to describe a few experiments enabled by the fiber couplers. The first studies the performance of an optomechanical resonator as a precise sensor for continuous position measurement. The sensitivity of the measurement, limited by the detection efficiency of intracavity photons, is compared to the standard quantum limit imposed by the quantum properties of the laser probe light. The added noise of the measurement is seen to fall within a factor of 3 of the standard quantum limit, representing an order of magnitude improvement over previous experiments utilizing optomechanical crystals, and matching the performance of similar measurements in the microwave domain.
The next experiment uses single photon counting to detect individual phonon emission and absorption events within the nanomechanical oscillator. The scattering of laser light from mechanical motion produces correlated photon-phonon pairs, and detection of the emitted photon corresponds to an effective phonon counting scheme. In the process of scattering, the coherence properties of the mechanical oscillation are mapped onto the reflected light. Intensity interferometry of the reflected light then allows measurement of the temporal coherence of the acoustic field. These correlations are measured for a range of experimental conditions, including the optomechanical amplification of the mechanics to a self-oscillation regime, and comparisons are drawn to a laser system for phonons. Finally, prospects for using phonon counting and intensity interferometry to produce non-classical mechanical states are detailed, following recent proposals in the literature.
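As a rough sketch of how temporal coherence can be extracted from photon-counting records, the code below estimates the normalized second-order correlation g2(τ) by histogramming delays between detection events and normalizing by the coincidence rate expected for uncorrelated light. The synthetic timestamps and bin sizes are illustrative; this is not the experiment's actual analysis pipeline.

```python
import numpy as np

def g2_from_timestamps(t, bin_width, max_tau):
    """Estimate the normalized second-order correlation g2(tau) from a
    list of photon arrival times by histogramming pairwise delays."""
    t = np.sort(np.asarray(t))
    T = t[-1] - t[0]
    rate = len(t) / T
    edges = np.arange(0.0, max_tau + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for i, ti in enumerate(t):
        js = np.searchsorted(t, ti + max_tau, side="right")
        counts += np.histogram(t[i + 1:js] - ti, bins=edges)[0]
    # normalize by coincidences expected for uncorrelated (Poissonian) light
    expected = rate * len(t) * bin_width
    taus = 0.5 * (edges[:-1] + edges[1:])
    return taus, counts / expected

# synthetic record: Poissonian arrivals give g2(tau) ~ 1 at all delays
rng = np.random.default_rng(0)
t = np.cumsum(rng.exponential(1e-6, size=20_000))
taus, g2 = g2_from_timestamps(t, bin_width=1e-6, max_tau=2e-5)
print(np.round(g2, 2))
```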
Abstract:
The quasicontinuum (QC) method was introduced to coarse-grain crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. Though many QC formulations have been proposed with varying characteristics and capabilities, a crucial cornerstone of all QC techniques is the concept of summation rules, which attempt to efficiently approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of atoms.

In this work we propose a novel, fully-nonlocal, energy-based formulation of the QC method with support for legacy and new summation rules through a general energy-sampling scheme. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. Within this structure, we introduce a new class of summation rules which leverage the affine kinematics of this QC formulation to most accurately integrate thermodynamic quantities of interest. By comparing this new class of summation rules to commonly-employed rules through analysis of energy and spurious force errors, we find that the new rules produce no residual or spurious force artifacts in the large-element limit under arbitrary affine deformation, while allowing us to seamlessly bridge to full atomistics. We verify that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors than all comparable previous summation rules through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions.

Due to the unique structure of these summation rules, we also use the new formulation to study scenarios with large regions of free surface, a class of problems previously out of reach of the QC method. Lastly, we present the key components of a high-performance, distributed-memory realization of the new method, including a novel algorithm for supporting unparalleled levels of deformation. Overall, this new formulation and implementation allow us to efficiently perform simulations containing an unprecedented number of degrees of freedom with low approximation error.
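The summation-rule concept described above can be illustrated in a stripped-down one-dimensional form: the total Hamiltonian, a sum of site energies over all atoms, is approximated by a weighted sum over a small set of sampling atoms. The Lennard-Jones pair potential and uniform weights below are illustrative assumptions, not the thesis's optimized rules.

```python
import numpy as np

def site_energy(positions, i, cutoff=3.0):
    """Half of the pair energy of atom i under a Lennard-Jones potential
    (an illustrative choice of interatomic potential)."""
    r = np.abs(positions - positions[i])
    r = r[(r > 0) & (r < cutoff)]
    return 0.5 * np.sum(4.0 * (r ** -12 - r ** -6))

# full atomistic chain, uniformly (affinely) stretched
n = 2000
positions = 1.1 * np.arange(n, dtype=float)

# exact total Hamiltonian: sum of all site energies
E_exact = sum(site_energy(positions, i) for i in range(n))

# summation rule: sample a subset of atoms and weight each so the sampled
# sites "represent" the atoms in between (uniform weights for simplicity)
stride = 20
samples = np.arange(0, n, stride)
weights = np.full(len(samples), stride, dtype=float)
E_approx = sum(w * site_energy(positions, s) for w, s in zip(weights, samples))

print(f"exact {E_exact:.3f}, summation rule {E_approx:.3f}, "
      f"relative error {abs(E_approx - E_exact) / abs(E_exact):.2%}")
```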
Abstract:
Systems-level studies of biological systems rely on observations taken at a resolution lower than the essential unit of biology, the cell. Recent technical advances in DNA sequencing have enabled measurements of the transcriptomes in single cells excised from their environment, but it remains a daunting technical problem to reconstruct in situ gene expression patterns from sequencing data. In this thesis I develop methods for the routine, quantitative in situ measurement of gene expression using fluorescence microscopy.
The number of molecular species that can be measured simultaneously by fluorescence microscopy is limited by the palette of spectrally distinct fluorophores. Thus, fluorescence microscopy is traditionally limited to measuring only five labeled biomolecules at a time. The two methods described in this thesis, super-resolution barcoding and sequential (temporal) barcoding, represent strategies for overcoming this limitation to monitor expression of many genes in a single cell. Super-resolution barcoding employs optical super-resolution microscopy (SRM) and combinatorial labeling via smFISH (single-molecule fluorescence in situ hybridization) to uniquely label individual mRNA species with distinct barcodes resolvable at nanometer resolution. This method dramatically increases the optical space in a cell, allowing a large number of barcodes to be visualized simultaneously. As a proof of principle, this technology was used to study the S. cerevisiae calcium stress response. The second method, sequential barcoding, reads out a temporal barcode through multiple rounds of oligonucleotide hybridization to the same mRNA. The multiplexing capacity of sequential barcoding increases exponentially with the number of rounds of hybridization, allowing over a hundred genes to be profiled in only a few rounds of hybridization.
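The multiplexing arithmetic behind sequential barcoding is simple to state in code: with F spectrally distinct fluorophores and N hybridization rounds, each gene can be assigned a unique length-N color sequence, so capacity grows as F^N. The dye names and gene panel below are hypothetical.

```python
from itertools import product

FLUOROPHORES = ["A488", "Cy3b", "A594", "Cy5"]   # F = 4 spectrally distinct dyes

def barcode_capacity(n_rounds, n_colors=len(FLUOROPHORES)):
    """Number of distinct sequential barcodes: F ** N."""
    return n_colors ** n_rounds

def assign_barcodes(genes, n_rounds):
    """Assign each gene a unique sequence of colors, one per hybridization round."""
    codes = product(FLUOROPHORES, repeat=n_rounds)
    return {gene: code for gene, code in zip(genes, codes)}

genes = [f"gene{i:03d}" for i in range(100)]      # hypothetical gene panel
book = assign_barcodes(genes, n_rounds=4)

print("capacity with 4 dyes:", [barcode_capacity(n) for n in (1, 2, 3, 4)])
# -> [4, 16, 64, 256]: four rounds already cover over a hundred genes
print("gene042 barcode:", book["gene042"])

# decoding: the observed color sequence at one mRNA spot identifies the gene
lookup = {code: gene for gene, code in book.items()}
print("decoded:", lookup[book["gene042"]])
```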
The utility of sequential barcoding was further demonstrated by adapting this method to study gene expression in mammalian tissues. Mammalian tissues suffer from both high autofluorescence and strong light scattering, making detection of smFISH probes on mRNA difficult. An amplified single-molecule detection technology, smHCR (single-molecule hybridization chain reaction), was developed to allow for the quantification of mRNA in tissue. This technology is demonstrated in combination with light-sheet microscopy and background-reducing tissue-clearing technology, enabling whole-organ sequential barcoding to monitor in situ gene expression directly in intact mammalian tissue.
The methods presented in this thesis, specifically sequential barcoding and smHCR, enable multiplexed transcriptional observations in any tissue of interest. These technologies will serve as a general platform for future transcriptomic studies of complex tissues.