937 results for iterative multitier ensembles
Abstract:
This thesis presents the development of chip-based technology for informative in vitro cancer diagnostics. In the first part of this thesis, I present my contributions to the development of a technology called “Nucleic Acid Cell Sorting (NACS)”, based on microarrays composed of nucleic acid-encoded peptide major histocompatibility complexes (p/MHC), along with the experimental and theoretical methods to detect and analyze proteins secreted from single or few cells.
Secondly, a novel portable platform for imaging cellular metabolism with radio probes is presented. A microfluidic chip, the so-called “Radiopharmaceutical Imaging Chip” (RIMChip), combined with a beta-particle imaging camera, is developed to visualize the uptake of radio probes in a small number of cells. Owing to its sophisticated design, RIMChip allows robust and user-friendly execution of sensitive and quantitative radio assays. The performance of this platform is validated with adherent and suspension cancer cell lines. The platform is then applied to study the metabolic response of cancer cells to drug treatment. In both mouse lymphoma and human glioblastoma cell lines, the metabolic responses to drug exposure are observed within a short time (~1 hour) and correlate with cell-cycle arrest or with changes in receptor tyrosine kinase signaling.
The last parts of this thesis summarize ongoing projects: the development of a new agent as an in vivo imaging probe for c-MET, and the quantitative monitoring of glycolytic metabolism in primary glioblastoma cells. To develop a new agent for c-MET imaging, the one-bead-one-compound combinatorial library method is used, coupled with iterative screening. The performance of the agent is quantitatively validated with cell-based fluorescent assays. To monitor the metabolism of primary glioblastoma cells with RIMChip, cells were sorted according to their oncoprotein expression levels, or were treated with different kinds of drugs, to study the metabolic heterogeneity of cancer cells and the metabolic response of glioblastoma cells to drug treatment, respectively.
Abstract:
Quasi Delay-Insensitive (QDI) systems must be reset into a valid initial state before normal operation can start; otherwise, deadlock may occur due to incorrect handshake communication between processes. This thesis first reviews the traditional Global Reset Scheme (GRS). It then proposes a new Wave Reset Scheme (WRS). By utilizing the third possible value of QDI data codes, the reset value, WRS propagates data carrying the reset value and triggers Local Reset (LR) sequentially. The global reset network required by GRS can be removed, and all reset signals are generated locally for each process. Circuit templates as well as some special blocks are modified to accommodate the reset value in WRS. An algorithm is proposed to choose the proper Local Reset Input (LRI) in order to shorten reset time. WRS is then applied to an iterative multiplier, which is shown to work under different operating conditions.
Abstract:
This thesis presents two different forms of the Born approximation for acoustic and elastic wavefields and discusses their application to the inversion of seismic data. The Born approximation is valid for small-amplitude heterogeneities superimposed on a slowly varying background. The first method is related to frequency-wavenumber migration methods. Applied to data generated using an exact theory for flat interfaces, it is shown to properly recover two independent acoustic parameters, within the bandpass of the source time function of the experiment, for contrasts of about 5 percent. The independent determination of two parameters is shown to depend on the angular coverage of the medium. For surface data, the impedance profile is well recovered.
The second method explored is mathematically similar to iterative tomographic methods recently introduced in the geophysical literature. Its basis is an integral relation between the scattered wavefield and the medium parameters, obtained by applying a far-field approximation to the first-order Born approximation. The Davidon-Fletcher-Powell algorithm is used since it converges faster than the steepest descent method. It consists essentially of successive backprojections of the recorded wavefield, with angular and propagation weighting coefficients for density and bulk modulus. After each backprojection, the forward problem is computed and the residual evaluated. Each backprojection is similar to a before-stack Kirchhoff migration and is therefore readily applicable to seismic data. Several examples of reconstruction for simple point-scatterer models are performed. Recovery of the amplitudes of the anomalies is improved with successive iterations. Iterations also improve the sharpness of the images.
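The Davidon-Fletcher-Powell step mentioned above is a standard rank-two quasi-Newton scheme. As a minimal sketch of how the inverse-Hessian approximation is built from gradient differences, the following applies it to a toy 2-D quadratic rather than the seismic inverse problem; all names, tolerances, and test values are illustrative assumptions, not from the thesis.

```python
import numpy as np

def dfp_minimize(f, grad, x0, max_iter=200, tol=1e-8):
    """Davidon-Fletcher-Powell quasi-Newton minimization (sketch).

    Maintains an approximation H to the inverse Hessian; each step is a
    backtracking line search along p = -H g, followed by the rank-two
    DFP update of H.
    """
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)                 # initial inverse-Hessian guess
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                     # quasi-Newton search direction
        # Backtracking (Armijo) line search.
        alpha, fx = 1.0, f(x)
        while f(x + alpha * p) > fx + 1e-4 * alpha * (g @ p):
            alpha *= 0.5
        x_new = x + alpha * p
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                 # curvature condition
            Hy = H @ y
            H = H + np.outer(s, s) / sy - np.outer(Hy, Hy) / (y @ Hy)
        x, g = x_new, g_new
    return x

# Toy quadratic f(x) = 0.5 x.A x - b.x, whose minimizer is A^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = dfp_minimize(f, grad, np.zeros(2))
```

On a quadratic the accumulated H converges to the true inverse Hessian, which is the sense in which DFP outpaces steepest descent.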
The elastic Born approximation, with the addition of a far-field approximation, is shown to correspond physically to a sum of WKBJ-asymptotic scattered rays. Four types of scattered rays enter the sum, corresponding to the P-P, P-S, S-P, and S-S pairs of incident and scattered rays. Incident rays propagate in the background medium, interacting only once with the scatterers. Scattered rays also propagate as if in the background medium, with no further interaction with the scatterers. An example of P-wave impedance inversion is performed on a VSP data set consisting of three offsets recorded in two wells.
Abstract:
We investigate the propagation of an arbitrarily elliptically polarized few-cycle ultrashort laser pulse in resonant two-level quantum systems using an iterative predictor-corrector finite-difference time-domain method. It is shown that when the initial effective area is equal to 2π, the effective area remains invariant during propagation, and a complete Rabi oscillation can be achieved. However, for an elliptically polarized few-cycle ultrashort laser pulse, polarization conversion can occur. Eventually, the laser pulse evolves into two separate circularly polarized laser pulses with opposite helicities.
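The 2π-area result described above can be illustrated in a much simpler setting than the full vectorial FDTD scheme of the abstract: integrating the resonant two-level amplitudes in the rotating-wave approximation with a Heun-type predictor-corrector step, a pulse of area 2π drives a complete Rabi oscillation and returns all population to the ground state. The sech pulse shape and all parameters below are illustrative assumptions.

```python
import numpy as np

def rhs(c, w):
    """RWA equations of motion on resonance:
    i dc_g/dt = (w/2) c_e,  i dc_e/dt = (w/2) c_g, with c = [c_g, c_e]."""
    cg, ce = c
    return np.array([-0.5j * w * ce, -0.5j * w * cg])

def rabi_heun(omega, t, c0):
    """Heun (predictor-corrector) integration of the two-level amplitudes."""
    c = np.array(c0, dtype=complex)
    for k in range(len(t) - 1):
        dt = t[k + 1] - t[k]
        k1 = rhs(c, omega[k])                # predictor slope
        k2 = rhs(c + dt * k1, omega[k + 1])  # slope at predicted point
        c = c + 0.5 * dt * (k1 + k2)         # trapezoidal corrector
    return c

tau = 1.0
t = np.linspace(-10 * tau, 10 * tau, 20001)
omega = (2.0 / tau) / np.cosh(t / tau)       # sech pulse: area = 2*pi
area = np.sum(0.5 * (omega[1:] + omega[:-1]) * np.diff(t))
c_final = rabi_heun(omega, t, [1.0, 0.0])    # start in the ground state
ground_pop = abs(c_final[0]) ** 2            # ~1 after a full Rabi cycle
```

The predictor (Euler) step and trapezoidal corrector mirror, in scalar form, the iterative predictor-corrector structure the abstract names.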
Abstract:
This thesis describes the expansion and improvement of the iterative in situ click chemistry OBOC peptide library screening technology. Previous work provided a proof-of-concept demonstration that this technique was advantageous for the production of protein-catalyzed capture (PCC) agents that could be used as drop-in replacements for antibodies in a variety of applications. Chapter 2 describes the technology development that was undertaken to optimize this screening process and make it readily available for a wide variety of targets. This optimization is what has allowed for the explosive growth of the PCC agent project over the past few years.
These technology improvements were applied to the discovery of PCC agents specific for single amino acid point mutations in proteins, which have many applications in cancer detection and treatment. Chapter 3 describes a general all-chemical epitope-targeting strategy that focuses PCC agent development directly on a site of interest on a protein surface. This technique combines a chemically synthesized fragment of the protein, called an epitope, substituted with a click handle, with the OBOC in situ click chemistry libraries in order to focus ligand development at the site of interest. Specifically, Chapter 3 discusses the use of this technique to develop a PCC agent specific for the E17K mutation of Akt1. Chapter 4 details the expansion of this ligand into a mutation-specific inhibitor, with applications in therapeutics.
Abstract:
The propagation of an arbitrarily polarized few-cycle ultrashort laser pulse in a degenerate three-level medium is investigated using an iterative predictor-corrector finite-difference time-domain method. It is found that the polarization evolution of the ultrashort laser pulse depends not only on the initial atomic coherence of the medium but also on the polarization of the incident laser pulse. When the initial effective area is equal to 2π, complete linear-to-circular and circular-to-linear polarization conversion of few-cycle ultrashort laser pulses can be achieved due to quantum interference between the two different transition paths.
Abstract:
Fundamental studies of the magnetic alignment of highly anisotropic mesostructures can enable the clean-room-free fabrication of flexible, array-based solar and electronic devices, in which preferential orientation of nano- or microwire-type objects is desired. In this study, ensembles of 100-µm-long Si microwires with ferromagnetic Ni and Co coatings are oriented vertically in the presence of magnetic fields. The degree of vertical alignment and the threshold field strength depend on geometric factors, such as microwire length and ferromagnetic coating thickness, as well as on interfacial interactions, which are modulated by varying solvent and substrate surface chemistry. Microwire ensembles with vertical alignment over 97% within 10 degrees of normal, as measured by X-ray diffraction, are achieved over square-centimeter-scale areas and set into flexible polymer films. A force balance model has been developed as a predictive tool for magnetic alignment, incorporating magnetic torque and empirically derived surface adhesion parameters. As supported by these calculations, microwires are shown to detach from the surface and align vertically in the presence of magnetic fields on the order of 100 gauss. Microwires aligned in this manner are set into a polydimethylsiloxane film, where they retain their vertical alignment after the field has been removed and can subsequently be used as a flexible solar absorber layer. Finally, these microwire arrays can be protected for use in electrochemical cells by the conformal deposition of a graphene layer.
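A minimal version of the force-balance idea can be sketched numerically: compare the maximum magnetic torque m·B on a saturated Ni coating against an assumed adhesion torque to back out the threshold field. The geometry, the Ni saturation magnetization, and especially the adhesion torque below are representative assumptions, not values reported in the thesis.

```python
import math

# Geometry of a Ni-coated Si microwire (representative values, not thesis data)
L = 100e-6        # wire length, m
r = 1.0e-6        # Si core radius, m
t = 300e-9        # Ni coating thickness, m

M_s = 4.85e5      # saturation magnetization of Ni, A/m
V_ni = math.pi * ((r + t) ** 2 - r ** 2) * L  # coating shell volume, m^3
m = M_s * V_ni    # magnetic moment, A*m^2 (coating assumed saturated)

tau_adh = 5.0e-13  # hypothetical surface-adhesion torque, N*m

# Field at which the maximum magnetic torque m*B equals the adhesion torque
B_threshold = tau_adh / m              # tesla
B_threshold_gauss = B_threshold * 1e4  # 1 T = 10^4 gauss
```

With these assumed numbers the threshold comes out at tens of gauss, i.e. the same order of magnitude as the ~100 gauss fields quoted in the abstract.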
Abstract:
While concentrator photovoltaic cells have shown significant improvements in efficiency in the past ten years, once these cells are integrated into concentrating optics, connected to a power conditioning system, and deployed in the field, the overall module efficiency drops to only 34 to 36%. This efficiency is impressive compared to conventional flat-plate modules, but it falls far short of the theoretical limits for solar energy conversion. A system capable of ultrahigh efficiency, 50% or greater, cannot be achieved by refinement and iteration of current design approaches.
This thesis takes a systems approach to designing a photovoltaic system capable of 50% efficient performance using conventional diode-based solar cells. The effort began with an exploration of the limiting efficiency of spectrum-splitting ensembles with 2 to 20 sub-cells in different electrical configurations. Incorporating realistic non-ideal performance into the computationally simple detailed-balance approach resulted in practical limits that are useful for identifying specific cell performance requirements. This effort quantified the relative benefit of additional cells and of concentration for system efficiency, which will help in designing practical optical systems.
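The relative benefit of additional sub-cells can be seen even in a toy "ultimate efficiency" calculation that credits each absorbed photon with the band-edge energy of its sub-cell and ignores the voltage and non-ideality losses treated in the thesis. The blackbody sun temperature and band edges below are illustrative assumptions, not optimized and not the thesis's detailed-balance model.

```python
import numpy as np

kT = 0.5                                   # ~5800 K sun, in eV
E = np.linspace(1e-3, 12.0, 120001)        # photon energy grid, eV
dE = E[1] - E[0]
flux = E**2 / np.expm1(E / kT)             # blackbody photon flux (arb. units)
power_in = np.sum(E * flux) * dE           # incident power ~ integral of E^3/(e^{E/kT}-1)

def ultimate_eff(edges):
    """Spectrum splitting: sub-cell i absorbs photons in [E_i, E_{i+1})
    and delivers its band-edge energy E_i per photon."""
    bounds = list(edges) + [np.inf]
    out = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        band = (E >= lo) & (E < hi)
        out += lo * np.sum(flux[band]) * dE
    return out / power_in

# Nested band-edge sets: adding an edge can only raise the credited energy.
eff1 = ultimate_eff([1.1])
eff2 = ultimate_eff([1.1, 2.0])
eff4 = ultimate_eff([0.7, 1.1, 1.6, 2.0])
```

Because the edge sets are nested, each refinement strictly increases the output, mirroring (in caricature) the diminishing-returns curve the thesis quantifies with a full detailed-balance treatment.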
Efforts to improve the quality of the solar cells themselves focused on the development of tunable lattice constant epitaxial templates. Initially intended to enable lattice-matched multijunction solar cells, these templates would enable increased flexibility in band gap selection for spectrum splitting ensembles and enhanced radiative quality relative to metamorphic growth. The III-V material family is commonly used for multijunction solar cells both for its high radiative quality and for the ease of integrating multiple band gaps into one monolithic growth. The band gap flexibility is limited by the lattice constant of available growth templates. The virtual substrate consists of a thin III-V film with the desired lattice constant. The film is grown strained on an available wafer substrate, but the thickness is below the dislocation nucleation threshold. By removing the film from the growth substrate, allowing the strain to relax elastically, and bonding it to a supportive handle, a template with the desired lattice constant is formed. Experimental efforts towards this structure and initial proof of concept are presented.
Cells with high radiative quality present the opportunity to recover a large amount of their radiative losses if they are incorporated in an ensemble that couples emission from one cell to another. This effect is well known, but has been explored previously in the context of sub cells that independently operate at their maximum power point. This analysis explicitly accounts for the system interaction and identifies ways to enhance overall performance by operating some cells in an ensemble at voltages that reduce the power converted in the individual cell. Series connected multijunctions, which by their nature facilitate strong optical coupling between sub-cells, are reoptimized with substantial performance benefit.
Photovoltaic efficiency is usually measured relative to a standard incident spectrum to allow comparison between systems. Deployed in the field, systems may differ in energy production because of their sensitivity to changes in the spectrum. The series connection constraint in particular causes system efficiency to decrease as the incident spectrum deviates from the standard spectral composition. This thesis performs a case study comparing the performance of systems over a year at a particular location to identify the energy production penalty caused by series connection relative to independent electrical connection.
Abstract:
We study the possibility of manipulating the focusing properties of a medium with electromagnetically induced transparency. In the focal region of focused ultraslow light pulses, the anomalous spectral behavior can be actively modified by varying the control field intensity. Unlike the case in free space, we find in slow-light focusing that the spectral bandwidth of the incident field needed to produce observable spectral changes can be reduced by several orders of magnitude. Numerical simulations with accessible parameters clearly show that spectral anomalies of focused μs pulses are observable.
Abstract:
Part I: The dynamic response of an elastic half space to an explosion in a buried spherical cavity is investigated by two methods. The first method is implicit, and the final expressions for the displacements at the free surface are given as a series of spherical wave functions whose coefficients are solutions of an infinite set of linear equations. The second method is based on Schwarz's technique for solving boundary value problems, and leads to an iterative solution whose first term is the known expression for a point source in a half space. The iterative series is transformed into a system of two integral equations, and into an equivalent set of linear equations. In this way, a dual interpretation of the physical phenomena is achieved. The systems are treated numerically, and the Rayleigh wave part of the displacements is given in the frequency domain. Several comparisons with simpler cases are analyzed to show the effect of the cavity radius-to-depth ratio on the spectra of the displacements.
Part II: A high-speed, large-capacity hypocenter location program has been written for an IBM 7094 computer. Important modifications to the standard method of least squares have been incorporated in it, among them a new way to obtain the depth of shocks from the normal equations, and the computation of variable travel times for local shocks in order to account automatically for crustal variations. The multiregional travel times, largely based upon the investigations of the United States Geological Survey, are compared with actual traverses to test their validity.
It is shown that several crustal phases provide enough control to obtain good solutions in depth for nuclear explosions, even though not all the recording stations are in the region where crustal corrections are considered. The use of European travel times to locate the French nuclear explosion of May 1962 in the Sahara proved more adequate than previous work.
A simpler program, with manual crustal corrections, is used to process the Kern County series of aftershocks, and a clearer picture of the tectonic mechanism of the White Wolf fault is obtained.
Shocks in the California region are processed automatically, and statistical frequency-depth and energy-depth curves are discussed in relation to the tectonics of the area.
Abstract:
An exciting frontier in quantum information science is the integration of otherwise "simple" quantum elements into complex quantum networks. The laboratory realization of even small quantum networks enables the exploration of physical systems that have not heretofore existed in the natural world. Within this context, there is active research to achieve nanoscale quantum optical circuits, for which atoms are trapped near nanoscopic dielectric structures and "wired" together by photons propagating through the circuit elements. Single atoms and atomic ensembles endow otherwise linear optical circuits with quantum functionality and thereby enable the capability of building quantum networks component by component. Toward these goals, we have experimentally investigated three different systems, from conventional to rather exotic: free-space atomic ensembles, optical nanofibers, and photonic crystal waveguides. First, we demonstrate measurement-induced quadripartite entanglement among four quantum memories. Next, following the landmark realization of a nanofiber trap, we demonstrate the implementation of a state-insensitive, compensated nanofiber trap. Finally, we reach more exotic systems based on photonic crystal devices. Beyond conventional topologies of resonators and waveguides, new opportunities emerge from the powerful capabilities of dispersion and modal engineering in photonic crystal waveguides. We have implemented an integrated optical circuit with a photonic crystal waveguide capable of both trapping and interfacing atoms with guided photons, and have observed a collective effect, superradiance, mediated by the guided photons. These advances provide an important capability for engineered light-matter interactions, enabling explorations of novel quantum transport and quantum many-body phenomena.
Abstract:
We develop a logarithmic potential theory on Riemann surfaces which generalizes logarithmic potential theory on the complex plane. We show the existence of an equilibrium measure and examine its structure. This leads to a formula for the structure of the equilibrium measure which is new even in the plane. We then use our results to study quadrature domains, Laplacian growth, and Coulomb gas ensembles on Riemann surfaces. We prove that the complement of the support of the equilibrium measure satisfies a quadrature identity. Furthermore, our setup allows us to naturally realize weak solutions of Laplacian growth (for a general time-dependent source) as an evolution of the support of equilibrium measures. When applied to the Riemann sphere this approach unifies the known methods for generating interior and exterior Laplacian growth. We later narrow our focus to a special class of quadrature domains which we call Algebraic Quadrature Domains. We show that many of the properties of quadrature domains generalize to this setting. In particular, the boundary of an Algebraic Quadrature Domain is the inverse image of a planar algebraic curve under a meromorphic function. This makes the study of the topology of Algebraic Quadrature Domains an interesting problem. We briefly investigate this problem and then narrow our focus to the study of the topology of classical quadrature domains. We extend the results of Lee and Makarov and prove (for n ≥ 3) c ≤ 5n-5, where c and n denote the connectivity and degree of a (classical) quadrature domain. At the same time we obtain a new upper bound on the number of isolated points of the algebraic curve corresponding to the boundary and thus a new upper bound on the number of special points. In the final chapter we study Coulomb gas ensembles on Riemann surfaces.
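For reference, the classical planar quadrature identity alluded to above can be stated as follows (this is the standard textbook form, included here as background rather than taken from the thesis): a bounded domain Ω is a quadrature domain if there exist finitely many points z_k in Ω and coefficients c_kj such that

```latex
\int_{\Omega} h \, dA \;=\; \sum_{k=1}^{n} \sum_{j=0}^{m_k - 1} c_{kj}\, h^{(j)}(z_k)
\qquad \text{for every integrable analytic function } h \text{ on } \Omega .
```

The abstract's result is that an identity of this type holds, suitably generalized, for the complement of the support of the equilibrium measure on a Riemann surface.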
Abstract:
The quasicontinuum (QC) method was introduced to coarse-grain crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. Though many QC formulations have been proposed with varying characteristics and capabilities, a crucial cornerstone of all QC techniques is the concept of summation rules, which attempt to efficiently approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of atoms. In this work we propose a novel, fully nonlocal, energy-based formulation of the QC method with support for legacy and new summation rules through a general energy-sampling scheme. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. Within this structure, we introduce a new class of summation rules which leverage the affine kinematics of this QC formulation to most accurately integrate thermodynamic quantities of interest. By comparing this new class of summation rules to commonly employed rules through analysis of energy and spurious-force errors, we find that the new rules produce no residual or spurious force artifacts in the large-element limit under arbitrary affine deformation, while allowing us to seamlessly bridge to full atomistics. We verify that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors than all comparable previous summation rules through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions. Due to the unique structure of these summation rules, we also use the new formulation to study scenarios with large regions of free surface, a class of problems previously out of reach of the QC method.
Lastly, we present the key components of a high-performance, distributed-memory realization of the new method, including a novel algorithm for supporting unparalleled levels of deformation. Overall, this new formulation and implementation allows us to efficiently perform simulations containing an unprecedented number of degrees of freedom with low approximation error.
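The essence of a summation rule, and its exactness under affine deformation, can be sketched in one dimension: for a uniformly strained harmonic chain every bond carries the same energy, so a weighted sample of bonds (with weights summing to the bond count) reproduces the full Hamiltonian. The chain, potential, and sampling pattern below are illustrative, not the thesis's two- and three-dimensional formulation.

```python
import numpy as np

def chain_energy(x, k=1.0, a=1.0):
    """Exact total energy of a 1D nearest-neighbor harmonic chain."""
    r = np.diff(x)                       # bond lengths
    return 0.5 * k * np.sum((r - a) ** 2)

def sampled_energy(x, sample_idx, weights, k=1.0, a=1.0):
    """Summation-rule approximation: weighted sum over sampled bonds only."""
    r = np.diff(x)
    return 0.5 * k * np.sum(weights * (r[sample_idx] - a) ** 2)

n = 10001                                # atoms -> n - 1 bonds
strain = 0.03                            # uniform (affine) stretch
x = (1 + strain) * np.arange(n, dtype=float)

# Sample every 100th bond; each sampled bond represents 100 bonds.
sample_idx = np.arange(0, n - 1, 100)
weights = np.full(sample_idx.size, (n - 1) / sample_idx.size)

E_full = chain_energy(x)                 # sums all 10000 bonds
E_sampled = sampled_energy(x, sample_idx, weights)  # sums only 100
```

Under affine deformation the sampled and exact energies coincide (up to floating-point noise), which is a one-dimensional caricature of the "no spurious force artifacts under arbitrary affine deformation" property claimed above; for non-affine fields the choice of sampling points and weights is exactly where summation rules differ.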
Abstract:
The application of principles from evolutionary biology has long been used to gain new insights into the progression and clinical control of both infectious diseases and neoplasms. This iterative evolutionary process consists of expansion, diversification, and selection within an adaptive landscape: species are subject to random genetic or epigenetic alterations that result in variation; genetic information is inherited through asexual reproduction; and strong selective pressures, such as therapeutic intervention, can lead to the adaptation and expansion of resistant variants. These principles lie at the center of the modern evolutionary synthesis and constitute the primary reasons for the development of resistance and therapeutic failure, but they also provide a framework that allows for more effective control.
A model system for studying the evolution of resistance and the control of therapeutic failure is the treatment of chronic HIV-1 infection by broadly neutralizing antibody (bNAb) therapy. A relatively recent discovery is that a minority of HIV-infected individuals produce broadly neutralizing antibodies, that is, antibodies that inhibit infection by many strains of HIV. Passive transfer of human antibodies for the prevention and treatment of HIV-1 infection is increasingly being considered as an alternative to a conventional vaccine. However, recent evolution studies have shown that antibody treatment can exert selective pressure on the virus that results in the rapid evolution of resistance. In certain cases, complete resistance to an antibody is conferred by a single amino acid substitution on the viral envelope of HIV.
The challenges in uncovering resistance mechanisms and designing effective combination strategies to control evolutionary processes and prevent therapeutic failure apply more broadly. We are motivated by two questions: Can we predict the evolution to resistance by characterizing genetic alterations that contribute to modified phenotypic fitness? Given an evolutionary landscape and a set of candidate therapies, can we computationally synthesize treatment strategies that control evolution to resistance?
To address the first question, we propose a mathematical framework to reason about the evolutionary dynamics of HIV from computationally derived Gibbs energy fitness landscapes, expanding the theoretical concept of an evolutionary landscape originally conceived by Sewall Wright into a computable, quantifiable, multidimensional, structurally defined fitness surface upon which to study complex HIV evolutionary outcomes.
To design combination treatment strategies that control evolution to resistance, we propose a methodology that solves for optimal combinations and concentrations of candidate therapies, and that allows tradeoffs in treatment design, such as limiting the number of candidate therapies in the combination, dosage constraints, and robustness to error, to be explored quantitatively. Our algorithm is based on the application of recent results in optimal control to an HIV evolutionary dynamics model, and is constructed from experimentally derived antibody-resistant phenotypes and their single-antibody pharmacodynamics. This method represents a first step toward integrating principled engineering techniques with an experimentally based mathematical model in the rational design of combination treatment strategies, and offers a predictive understanding of the effects of combination therapies on the evolutionary dynamics and resistance of HIV. Preliminary in vitro studies suggest that the combination antibody therapies predicted by our algorithm can neutralize heterogeneous viral populations despite the presence of resistance mutations.
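The flavor of the combination-design problem can be sketched with a toy model: hypothetical IC50s, Hill-type pharmacodynamics with independent action, and a brute-force grid search in place of the optimal-control machinery described above. All antibodies, variants, and numbers below are invented for illustration; the point is only that a dose budget split across complementary antibodies can suppress variants that defeat either antibody alone.

```python
import itertools
import numpy as np

# Hypothetical IC50s (ug/mL) of two antibodies against three viral variants;
# each resistance mutation raises the IC50 for one antibody but not the other.
ic50 = np.array([[0.1, 0.2],    # wild type
                 [50.0, 0.2],   # resistant to antibody 1
                 [0.1, 40.0]])  # resistant to antibody 2
hill = 1.0
r0, death = 1.2, 0.3            # per-day replication / clearance (toy values)

def unaffected(c, ic50_v):
    """Fraction of infection events not neutralized (independent action)."""
    return np.prod(1.0 / (1.0 + (c / ic50_v) ** hill))

def worst_growth(c):
    """Net growth rate of the worst-case (most resistant) variant."""
    return max(r0 * unaffected(c, row) - death for row in ic50)

budget = 10.0                   # total concentration constraint
grid = np.linspace(0.0, budget, 101)
best = min(
    (np.array([c1, c2]) for c1, c2 in itertools.product(grid, grid)
     if c1 + c2 <= budget),
    key=worst_growth,
)
```

Here the minimax objective forces a nontrivial split of the budget: spending everything on one antibody leaves its resistant variant growing, while the optimal combination drives every variant's net growth negative.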