8 results for LIMITATION

in CaltechTHESIS


Relevance:

10.00%

Abstract:

Nucleic acids are most commonly associated with the genetic code, transcription and gene expression. Recently, interest has grown in engineering nucleic acids for biological applications such as controlling or detecting gene expression. The natural presence and functionality of nucleic acids within living organisms coupled with their thermodynamic properties of base-pairing make them ideal for interfacing (and possibly altering) biological systems. We use engineered small conditional RNA or DNA (scRNA, scDNA, respectively) molecules to control and detect gene expression. Three novel systems are presented: two for conditional down-regulation of gene expression via RNA interference (RNAi) and a third system for simultaneous sensitive detection of multiple RNAs using labeled scRNAs.

RNAi is a powerful tool to study genetic circuits by knocking down a gene of interest. RNAi executes the logic: If gene Y is detected, silence gene Y. The fact that detection and silencing are restricted to the same gene means that RNAi is constitutively on. This poses a significant limitation when spatiotemporal control is needed. In this work, we engineered small nucleic acid molecules that execute the logic: If mRNA X is detected, form a Dicer substrate that targets independent mRNA Y for silencing. This is a step towards implementing the logic of conditional RNAi: If gene X is detected, silence gene Y. We use scRNAs and scDNAs to engineer signal transduction cascades that produce an RNAi effector molecule in response to hybridization to a nucleic acid target X. The first mechanism is solely based on hybridization cascades and uses scRNAs to produce a double-stranded RNA (dsRNA) Dicer substrate against target gene Y. The second mechanism is based on hybridization of scDNAs to detect a nucleic acid target and produce a template for transcription of a short hairpin RNA (shRNA) Dicer substrate against target gene Y. Test-tube studies for both mechanisms demonstrate that the output Dicer substrate is produced predominantly in the presence of a correct input target and is cleaved by Dicer to produce a small interfering RNA (siRNA). Both output products can lead to gene knockdown in tissue culture. To date, signal transduction is not observed in cells; possible reasons are explored.
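To make the logic distinction concrete, here is a toy Boolean sketch contrasting constitutive RNAi with the conditional RNAi logic described above; the function and molecule names are illustrative placeholders, not constructs from this work.

```python
# Toy Boolean model of constitutive vs. conditional RNAi logic (illustrative names only).
from dataclasses import dataclass
from typing import Optional

@dataclass
class DicerSubstrate:
    """Duplex produced by the scRNA/scDNA cascade; Dicer processing silences its target."""
    target_gene: str

def constitutive_rnai(detected: set, gene: str) -> Optional[DicerSubstrate]:
    # Ordinary RNAi: detection and silencing are tied to the same gene Y.
    return DicerSubstrate(gene) if gene in detected else None

def conditional_rnai(detected: set, trigger: str, target: str) -> Optional[DicerSubstrate]:
    # Conditional RNAi: if mRNA X is detected, produce a substrate against independent gene Y.
    return DicerSubstrate(target) if trigger in detected else None

print(conditional_rnai({"mRNA_X"}, trigger="mRNA_X", target="mRNA_Y"))  # substrate against Y
print(conditional_rnai({"mRNA_Z"}, trigger="mRNA_X", target="mRNA_Y"))  # None: cascade stays off
```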

Signal transduction cascades are composed of multiple scRNAs (or scDNAs). The need to study multiple molecules simultaneously has motivated the development of a highly sensitive method for multiplexed northern blots. The core technology of our system is the utilization of a hybridization chain reaction (HCR) of scRNAs as the detection signal for a northern blot. To achieve multiplexing (simultaneous detection of multiple genes), we use fluorescently tagged scRNAs. Moreover, by using radioactive labeling of scRNAs, the system exhibits a five-fold increase, compared to the literature, in detection sensitivity. Sensitive multiplexed northern blot detection provides an avenue for exploring the fate of scRNAs and scDNAs in tissue culture.

Relevance:

10.00%

Abstract:

This thesis is motivated by safety-critical applications involving autonomous air, ground, and space vehicles carrying out complex tasks in uncertain and adversarial environments. We use temporal logic as a language to formally specify complex tasks and system properties. Temporal logic specifications generalize the classical notions of stability and reachability that are studied in the control and hybrid systems communities. Given a system model and a formal task specification, the goal is to automatically synthesize a control policy for the system that ensures that the system satisfies the specification. This thesis presents novel control policy synthesis algorithms for optimal and robust control of dynamical systems with temporal logic specifications. Furthermore, it introduces algorithms that are efficient and extend to high-dimensional dynamical systems.

The first contribution of this thesis is the generalization of a classical linear temporal logic (LTL) control synthesis approach to optimal and robust control. We show how we can extend automata-based synthesis techniques for discrete abstractions of dynamical systems to create optimal and robust controllers that are guaranteed to satisfy an LTL specification. Such optimal and robust controllers can be computed at little extra computational cost compared to computing a feasible controller.

The second contribution of this thesis addresses the scalability of control synthesis with LTL specifications. A major limitation of the standard automaton-based approach for control with LTL specifications is that the automaton might be doubly-exponential in the size of the LTL specification. We introduce a fragment of LTL for which one can compute feasible control policies in time polynomial in the size of the system and specification. Additionally, we show how to compute optimal control policies for a variety of cost functions, and identify interesting cases when this can be done in polynomial time. These techniques are particularly relevant for online control, as one can guarantee that a feasible solution can be found quickly, and then iteratively improve on the quality as time permits.

The final contribution of this thesis is a set of algorithms for computing feasible trajectories for high-dimensional, nonlinear systems with LTL specifications. These algorithms avoid a potentially computationally-expensive process of computing a discrete abstraction, and instead compute directly on the system's continuous state space. The first method uses an automaton representing the specification to directly encode a series of constrained-reachability subproblems, which can be solved in a modular fashion by using standard techniques. The second method encodes an LTL formula as mixed-integer linear programming constraints on the dynamical system. We demonstrate these approaches with numerical experiments on temporal logic motion planning problems with high-dimensional (10+ states) continuous systems.
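As a concrete illustration of the mixed-integer encoding, the sketch below writes the single specification "eventually reach a goal interval" as big-M MILP constraints on a discretized double integrator. The dynamics, horizon, solver (PuLP with its default CBC backend), and all numerical values are assumptions for illustration, not the systems or tools used in the thesis.

```python
# Hedged sketch: "eventually (x in [9, 10])" encoded as big-M MILP constraints
# on a 1-D double integrator; all values and the PuLP/CBC solver are assumptions.
import pulp

T, dt, M = 20, 0.5, 100.0                 # horizon, time step, big-M constant
goal_lo, goal_hi = 9.0, 10.0              # goal interval for the position state

prob = pulp.LpProblem("eventually_reach_goal", pulp.LpMinimize)
x = [pulp.LpVariable(f"x_{t}", -M, M) for t in range(T + 1)]           # position
v = [pulp.LpVariable(f"v_{t}", -M, M) for t in range(T + 1)]           # velocity
u = [pulp.LpVariable(f"u_{t}", -1, 1) for t in range(T)]               # bounded input
z = [pulp.LpVariable(f"z_{t}", cat="Binary") for t in range(T + 1)]    # "in goal at step t"
a = [pulp.LpVariable(f"a_{t}", 0, 1) for t in range(T)]                # |u_t| for the objective

prob += pulp.lpSum(a)                     # objective: minimize total control effort
prob += x[0] == 0
prob += v[0] == 0
for t in range(T):
    prob += x[t + 1] == x[t] + dt * v[t]  # double-integrator dynamics
    prob += v[t + 1] == v[t] + dt * u[t]
    prob += a[t] >= u[t]
    prob += a[t] >= -u[t]
for t in range(T + 1):
    # Big-M implication: z_t = 1 forces x_t into the goal interval.
    prob += x[t] >= goal_lo - M * (1 - z[t])
    prob += x[t] <= goal_hi + M * (1 - z[t])
prob += pulp.lpSum(z) >= 1                # "eventually": the goal is visited at least once

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], [round(pulp.value(xi), 2) for xi in x])
```

Richer temporal operators ("always", "until", nesting) add further binary variables and constraints in the same style, which is why solver effort grows with the size of the formula.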

Relevance:

10.00%

Abstract:

The concept of a carbon nanotube microneedle array is explored in this thesis from multiple perspectives including microneedle fabrication, physical aspects of transdermal delivery, and in vivo transdermal drug delivery experiments. Starting with standard techniques in carbon nanotube (CNT) fabrication, including catalyst patterning and chemical vapor deposition, vertically-aligned carbon nanotubes are utilized as a scaffold to define the shape of the hollow microneedle. Passive, scalable techniques based on capillary action and unique photolithographic methods are utilized to produce a CNT-polymer composite microneedle. Specific examples of CNT-polyimide and CNT-epoxy microneedles are investigated. Further analysis of the transport properties of polymer resins reveals general requirements for applying arbitrary polymers to the fabrication process.

The bottom-up fabrication approach embodied by vertically-aligned carbon nanotubes allows for more direct construction of complex high-aspect-ratio features than standard top-down fabrication approaches, making microneedles an ideal application for CNTs. However, current vertically-aligned CNT fabrication techniques only allow for the production of extruded geometries with a constant cross-sectional area, such as cylinders. To overcome this limitation, isotropic oxygen etching is introduced as a novel fabrication technique to create true 3D CNT geometry. Oxygen etching is utilized to create a conical geometry from a cylindrical CNT structure as well as to create complex shape transformations in other CNT geometries.

CNT-polymer composite microneedles are anchored onto a common polymer base less than 50 µm thick, which allows the microneedles to be incorporated into multiple drug delivery platforms, including modified hypodermic syringes and silicone skin patches. Cylindrical microneedles are fabricated with a 100 µm outer diameter, a height of 200-250 µm, and a central cavity, or lumen, 30 µm in diameter to facilitate liquid drug flow. In vitro delivery experiments in swine skin demonstrate the ability of the microneedles to successfully penetrate the skin and deliver aqueous solutions.

An in vivo study was performed to assess the ability of the CNT-polymer microneedles to deliver drugs transdermally. CNT-polymer microneedles are attached to a hand-actuated silicone skin patch that holds a liquid reservoir of drugs. Fentanyl, a potent analgesic, was administered to New Zealand White rabbits through three routes of delivery: topical patch, CNT-polymer microneedles, and subcutaneous hypodermic injection. Results demonstrate that the CNT-polymer microneedles have a similar onset of action to the topical patch. The CNT-polymer microneedles were also assessed as a painless delivery approach compared to hypodermic injection. Comparative analysis with contemporary microneedle designs demonstrates that the delivery achieved through CNT-polymer microneedles is comparable to that of current hollow microneedle architectures. The inherent advantage of a bottom-up fabrication approach, alongside delivery performance similar to contemporary microneedle designs, demonstrates that the CNT-polymer composite microneedle is a viable architecture in the emerging field of painless transdermal delivery.

Relevance:

10.00%

Abstract:

Part I.

In recent years, backscattering spectrometry has become an important tool for the analysis of thin films. An inherent limitation, though, is the loss of depth resolution due to energy straggling of the beam. To investigate this, energy straggling of ⁴He has been measured in thin films of Ni, Al, Au and Pt. Straggling is roughly proportional to the square root of thickness, appears to have a slight energy dependence, and generally decreases with decreasing atomic number of the absorber. The results are compared with predictions of theory and with previous measurements. While the Ni measurements are in fair agreement with Bohr's theory, the Al measurements are 30% above and the Au measurements are 40% below predicted values. The Au and Pt measurements give straggling values which are close to one another.
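For context, Bohr's classical estimate gives the straggling variance as Ω_B² = 4πZ₁²e⁴NZ₂t, which produces the square-root-of-thickness behavior noted above. A minimal numeric sketch follows; the ~1000 Å Ni film and its atomic density are assumed values for illustration, not measurements from this work.

```python
# Bohr straggling estimate, Omega_B^2 = 4*pi*Z1^2*e^4*N*Z2*t (Gaussian-units form).
# The film parameters below are illustrative assumptions, not data from the thesis.
import math

E2_EV_CM = 1.44e-7  # e^2 in eV*cm

def bohr_straggling_ev(z1, z2, areal_density_atoms_cm2):
    """Standard deviation (eV) of the energy-loss distribution in Bohr's approximation."""
    variance = 4 * math.pi * z1**2 * E2_EV_CM**2 * z2 * areal_density_atoms_cm2
    return math.sqrt(variance)

# Example: 4He (Z1 = 2) traversing ~1000 Å of Ni (Z2 = 28, ~9.1e22 atoms/cm^3),
# i.e. an areal density of roughly 9.1e17 atoms/cm^2.
omega = bohr_straggling_ev(2, 28, 9.1e17)
print(f"Bohr straggling ~ {omega / 1e3:.1f} keV (standard deviation)")
# Doubling the thickness doubles N*t, so the straggling grows by sqrt(2) ~ 1.41x,
# i.e. roughly proportional to the square root of thickness, as observed above.
```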

Part II.

MeV backscattering spectrometry and X-ray diffraction are used to investigate the behavior of sputter-deposited Ti-W mixed films on Si substrates. During vacuum anneals at temperatures near 700°C for several hours, the metallization layer reacts with the substrate. Backscattering analysis shows that the resulting compound layer is uniform in composition and contains Ti, W and Si. The Ti:W ratio in the compound corresponds to that of the deposited metal film. X-ray analyses with Reed and Guinier cameras reveal the presence of the ternary TiₓW₁₋ₓSi₂ compound. Its composition is unaffected by oxygen contamination during annealing, but the reaction rate is affected. The rate measured on samples with about 15% oxygen contamination after annealing is linear, of the order of 0.5 Å per second at 725°C, and depends on the crystallographic orientation of the substrate and the dc bias during sputter-deposition of the Ti-W film.

Au layers of about 1000 Å thickness were deposited onto unreacted Ti-W films on Si. When annealed at 400°C, these samples underwent a color change, and SEM micrographs showed that an intricate pattern of fissures, typically 3 µm wide, had evolved. Analysis by electron microprobe revealed that Au had segregated preferentially into the fissures. This result suggests that Ti-W is not a barrier to Au-Si intermixing at 400°C.

Relevance:

10.00%

Abstract:

Experimental studies were conducted with the goals of 1) determining the origin of Pt-group element (PGE) alloys and associated mineral assemblages in refractory inclusions from meteorites and 2) developing a new ultrasensitive method for the in situ chemical and isotopic analysis of PGE. A general review of the geochemistry and cosmochemistry of the PGE is given, and specific research contributions are presented within the context of this broad framework.

An important step toward understanding the cosmochemistry of the PGE is the determination of the origin of PGE-rich metallic phases (most commonly εRu-Fe) that are found in Ca,Al-rich refractory inclusions (CAI) in C3V meteorites. These metals occur along with γNi-Fe metals, Ni-Fe sulfides and Fe oxides in multiphase opaque assemblages. Laboratory experiments were used to show that the mineral assemblages and textures observed in opaque assemblages could be produced by sulfidation and oxidation of once-homogeneous Ni-Fe-PGE metals. Phase equilibria, partitioning and diffusion kinetics were studied in the Ni-Fe-Ru system in order to quantify the conditions of opaque assemblage formation. Phase boundaries and tie lines in the Ni-Fe-Ru system were determined at 1273, 1073 and 873 K using an experimental technique that allowed the investigation of a large portion of the Ni-Fe-Ru system with a single experiment at each temperature by establishing a concentration gradient within which local equilibrium between coexisting phases was maintained. A wide miscibility gap was found to be present at each temperature, separating a hexagonal close-packed εRu-Fe phase from a face-centered cubic γNi-Fe phase. Phase equilibria determined here for the Ni-Fe-Ru system, and phase equilibria from the literature for the Ni-Fe-S and Ni-Fe-O systems, were compared with analyses of minerals from opaque assemblages to estimate the temperature and chemical conditions of opaque assemblage formation. It was determined that opaque assemblages equilibrated at a temperature of ~770 K, a sulfur fugacity 10 times higher than an equilibrium solar gas, and an oxygen fugacity 10⁶ times higher than an equilibrium solar gas.

Diffusion rates between γNi-Fe and εRu-Fe metal play a critical role in determining the time (with respect to CAI petrogenesis) and duration of the opaque assemblage equilibration process. The diffusion coefficient for Ru in Ni, D(Ru in Ni), was determined as an analog for the Ni-Fe-Ru system by the thin-film diffusion method in the temperature range of 1073 to 1673 K and is given by the expression:

D(Ru in Ni) (cm² s⁻¹) = 5.0(±0.7) × 10⁻³ exp(−2.3(±0.1) × 10¹² erg mol⁻¹ / RT),

where R is the gas constant and T is the temperature in K. Based on the rates of dissolution and exsolution of metallic phases in the Ni-Fe-Ru system, it is suggested that opaque assemblages equilibrated after the melting and crystallization of the host CAI during a metamorphic event of ≥ 10³ years duration. It is inferred that opaque assemblages originated as immiscible metallic liquid droplets in the CAI silicate liquid. The bulk compositions of PGE in these precursor alloys reflect an early stage of condensation from the solar nebula, and the partitioning of V between the precursor alloys and CAI silicate liquid reflects the reducing nebular conditions under which CAI were melted. The individual mineral phases now observed in opaque assemblages do not preserve an independent history prior to CAI melting and crystallization, but instead provide important information on the post-accretionary history of C3V meteorites and allow the quantification of the temperature, sulfur fugacity and oxygen fugacity of cooling planetary environments. This contrasts with previous models that called upon the formation of opaque assemblages by aggregation of phases that formed independently under highly variable conditions in the solar nebula prior to the crystallization of CAI.
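Evaluating the Arrhenius expression above at a few temperatures makes the strong temperature dependence explicit; the short sketch below simply plugs in the fitted parameters, with the gas constant taken in erg mol⁻¹ K⁻¹ to match the erg-based activation energy.

```python
# Evaluate the fitted Arrhenius expression for D(Ru in Ni) quoted above.
import math

D0 = 5.0e-3   # cm^2/s, pre-exponential factor from the fit
Q = 2.3e12    # erg/mol, activation energy from the fit
R = 8.314e7   # erg mol^-1 K^-1, gas constant

def d_ru_in_ni(temperature_k: float) -> float:
    """Diffusion coefficient of Ru in Ni (cm^2/s) from the thin-film fit above."""
    return D0 * math.exp(-Q / (R * temperature_k))

for T in (1673, 1273, 873):
    print(f"T = {T:4d} K  ->  D = {d_ru_in_ni(T):.2e} cm^2/s")
```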

Analytical studies were carried out on PGE-rich phases from meteorites and the products of synthetic experiments using traditional electron microprobe X-ray analytical techniques. The concentrations of PGE in common minerals from meteorites and terrestrial rocks are far below the ~100 ppm detection limit of the electron microprobe. This has limited the scope of analytical studies to the very few cases where PGE are unusually enriched. To study the distribution of PGE in common minerals will require an in situ analytical technique with much lower detection limits than any methods currently in use. To overcome this limitation, resonance ionization of sputtered atoms was investigated for use as an ultrasensitive in situ analytical technique for the analysis of PGE. The mass spectrometric analysis of Os and Re was investigated using a pulsed primary Ar⁺ ion beam to provide sputtered atoms for resonance ionization mass spectrometry. An ionization scheme for Os that utilizes three resonant energy levels (including an autoionizing energy level) was investigated and found to have superior sensitivity and selectivity compared to nonresonant and one- and two-energy-level resonant ionization schemes. An elemental selectivity for Os over Re of ≥ 10³ was demonstrated. It was found that detuning the ionizing laser from the autoionizing energy level to an arbitrary region in the ionization continuum resulted in a five-fold decrease in signal intensity and a ten-fold decrease in elemental selectivity. Osmium concentrations in synthetic metals and iron meteorites were measured to demonstrate the analytical capabilities of the technique. A linear correlation between Os⁺ signal intensity and the known Os concentration was observed over a range of nearly 10⁴ in Os concentration with an accuracy of ~±10%, a minimum detection limit of 7 parts per billion atomic, and a useful yield of 1%. Resonance ionization of sputtered atoms samples the dominant neutral fraction of sputtered atoms and utilizes multiphoton resonance ionization to achieve high sensitivity and to eliminate atomic and molecular interferences. Matrix effects should be small compared to secondary ion mass spectrometry because ionization occurs in the gas phase and is largely independent of the physical properties of the matrix material. Resonance ionization of sputtered atoms can be applied to in situ chemical analysis of most high ionization potential elements (including all of the PGE) in a wide range of natural and synthetic materials. The high useful yield and elemental selectivity of this method should eventually allow the in situ measurement of Os isotope ratios in some natural samples and in sample extracts enriched in PGE by fire assay fusion.
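The quoted linear correlation between Os⁺ signal and concentration is the basis of a calibration curve; the sketch below illustrates such a log-log fit on synthetic data (the numbers are generated for illustration and are not measurements from this work).

```python
# Illustrative log-log calibration fit of Os+ signal vs. Os concentration.
# The data points are synthetic (generated with ~5% scatter), not thesis measurements.
import numpy as np

rng = np.random.default_rng(0)
conc_ppb = np.logspace(1, 5, 9)                                   # ~four decades of concentration
signal = 2.0 * conc_ppb * rng.normal(1.0, 0.05, conc_ppb.size)    # near-linear synthetic response

slope, intercept = np.polyfit(np.log10(conc_ppb), np.log10(signal), 1)
print(f"log-log slope ~ {slope:.2f} (1.00 would be a perfectly linear response)")

def os_concentration_ppb(measured_signal):
    """Read an unknown concentration off the fitted calibration line."""
    return 10 ** ((np.log10(measured_signal) - intercept) / slope)

print(f"signal 5.0e3 -> ~{os_concentration_ppb(5.0e3):.0f} ppb Os (illustrative)")
```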

Phase equilibria and diffusion experiments have provided the basis for a reinterpretation of the origin of opaque assemblages in CAI and have yielded quantitative information on conditions in the primitive solar nebula and cooling planetary environments. Development of the method of resonance ionization of sputtered atoms for the analysis of Os has shown that this technique has wide applications in geochemistry and will for the first time allow in situ studies of the distribution of PGE at the low concentration levels at which they occur in common minerals.

Relevance:

10.00%

Abstract:

Many applications in cosmology and astrophysics at millimeter wavelengths including CMB polarization, studies of galaxy clusters using the Sunyaev-Zeldovich effect (SZE), and studies of star formation at high redshift and in our local universe and our galaxy, require large-format arrays of millimeter-wave detectors. Feedhorn and phased-array antenna architectures for receiving mm-wave light present numerous advantages for control of systematics, for simultaneous coverage of both polarizations and/or multiple spectral bands, and for preserving the coherent nature of the incoming light. This enables the application of many traditional "RF" structures such as hybrids, switches, and lumped-element or microstrip band-defining filters.

Simultaneously, kinetic inductance detectors (KIDs) using high-resistivity materials like titanium nitride are an attractive sensor option for large-format arrays because they are highly multiplexable and because they can have sensitivities reaching background-limited detection. A KID is an LC resonator whose inductance includes the geometric and kinetic inductance of the inductor in the superconducting phase. A photon absorbed by the superconductor breaks a Cooper pair into normal-state electrons and perturbs the kinetic inductance, rendering the device a detector of light. The responsivity of a KID is given by the fractional frequency shift of the LC resonator per unit optical power.
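A minimal numeric sketch of this LC-resonator picture is given below; the component values and the size of the kinetic-inductance perturbation are assumptions for illustration, not parameters of the detector design described here.

```python
# LC-resonator sketch of a KID: f0 = 1 / (2*pi*sqrt((L_geo + L_kin) * C)), and an
# absorbed-power-induced change in kinetic inductance pulls the resonance frequency.
# Component values below are illustrative assumptions.
import math

L_GEO = 5e-9    # geometric inductance, H (assumed)
L_KIN = 15e-9   # kinetic inductance, H (assumed; large for high-resistivity TiN)
C = 2e-12       # capacitance, F (assumed)

def resonance_hz(l_total: float, c: float) -> float:
    return 1.0 / (2.0 * math.pi * math.sqrt(l_total * c))

f0 = resonance_hz(L_GEO + L_KIN, C)
alpha = L_KIN / (L_GEO + L_KIN)      # kinetic-inductance fraction

# Pair breaking perturbs L_kin; to first order the fractional frequency shift is
# df/f ~ -(alpha / 2) * (dL_kin / L_kin), which is what the responsivity tracks per
# unit absorbed optical power.
dLk_over_Lk = 1e-5                   # illustrative perturbation
df_over_f = -0.5 * alpha * dLk_over_Lk
print(f"f0 = {f0 / 1e9:.3f} GHz, fractional frequency shift df/f = {df_over_f:.2e}")
```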

However, coupling these types of optical reception elements to KIDs is a challenge because of the impedance mismatch between the microstrip transmission line exiting these architectures and the high resistivity of titanium nitride. Another challenge is mitigating direct absorption of light that couples through free space to the inductor of the KID. We present a detailed titanium nitride KID design that addresses these challenges. The KID inductor is capacitively coupled to the microstrip in such a way as to form a lossy termination without creating an impedance mismatch. A parallel-plate capacitor design using hydrogenated amorphous silicon mitigates direct absorption and yields acceptable noise. We show that the optimized design can yield expected sensitivities very close to the fundamental limit for a long-wavelength imager (LWCam) that covers six spectral bands from 90 to 400 GHz for SZE studies.

Excess phase (frequency) noise has been observed in KIDs and is very likely caused by two-level systems (TLS) in dielectric materials. The TLS hypothesis is supported by the measured dependence of the noise on resonator internal power and temperature. However, a unified microscopic theory that can quantitatively model the properties of TLS noise is still lacking. In this thesis we derive the noise power spectral density due to the coupling of TLS to the phonon bath, based on an existing model, and compare the theoretical predictions of the power and temperature dependences with experimental data. We discuss the limitations of this model and propose directions for future study.
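For reference, measured TLS noise is usually summarized by semi-empirical power laws in readout power and temperature, which is the behavior a microscopic model must reproduce; the sketch below encodes such a scaling law with an assumed reference level and assumed exponents (the roughly inverse-square-root dependence on internal power is the commonly reported trend).

```python
# Semi-empirical TLS-noise scaling sketch: power-law dependence on resonator internal
# power and bath temperature. The reference level and exponents are assumptions used
# for illustration, not fitted values from this thesis.
def tls_noise_psd(p_int_dbm: float, temp_k: float, s0: float = 1e-17,
                  p_ref_dbm: float = -80.0, t_ref_k: float = 0.1,
                  power_exp: float = -0.5, temp_exp: float = -1.7) -> float:
    """Fractional-frequency-noise PSD (1/Hz) versus internal power and temperature."""
    p_ratio = 10 ** ((p_int_dbm - p_ref_dbm) / 10.0)   # linear internal-power ratio
    return s0 * (p_ratio ** power_exp) * ((temp_k / t_ref_k) ** temp_exp)

for p_dbm in (-90, -80, -70):
    print(f"P_int = {p_dbm} dBm, T = 0.1 K -> S = {tls_noise_psd(p_dbm, 0.1):.2e} /Hz")
```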

Relevance:

10.00%

Abstract:

Systems-level studies of biological systems rely on observations taken at a resolution lower than the essential unit of biology, the cell. Recent technical advances in DNA sequencing have enabled measurements of the transcriptomes in single cells excised from their environment, but it remains a daunting technical problem to reconstruct in situ gene expression patterns from sequencing data. In this thesis I develop methods for the routine, quantitative in situ measurement of gene expression using fluorescence microscopy.

The number of molecular species that can be measured simultaneously by fluorescence microscopy is limited by the palette of spectrally distinct fluorophores. Thus, fluorescence microscopy is traditionally limited to the simultaneous measurement of only five labeled biomolecules at a time. The two methods described in this thesis, super-resolution barcoding and sequential barcoding, represent strategies for overcoming this limitation to monitor the expression of many genes in a single cell. Super-resolution barcoding employs optical super-resolution microscopy (SRM) and combinatorial labeling via smFISH (single-molecule fluorescence in situ hybridization) to uniquely label individual mRNA species with distinct barcodes resolvable at nanometer resolution. This method dramatically increases the optical space in a cell, allowing a large number of barcodes to be visualized simultaneously. As a proof of principle, this technology was used to study the S. cerevisiae calcium stress response. The second method, sequential barcoding, reads out a temporal barcode through multiple rounds of oligonucleotide hybridization to the same mRNA. The multiplexing capacity of sequential barcoding increases exponentially with the number of rounds of hybridization, allowing over a hundred genes to be profiled in only a few rounds of hybridization.
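The exponential growth in multiplexing capacity is simple to make concrete: with F spectrally distinct fluorophores and N rounds of hybridization, up to F^N barcodes can be distinguished. The sketch below uses four colors purely as an assumed example.

```python
# Multiplexing capacity of sequential barcoding grows as F**N
# (F fluorophores per round, N hybridization rounds). Four colors is an assumed example.
def n_barcodes(n_fluorophores: int, n_rounds: int) -> int:
    return n_fluorophores ** n_rounds

for rounds in (1, 2, 3, 4):
    print(f"{rounds} round(s) x 4 colors -> {n_barcodes(4, rounds)} distinguishable barcodes")
# Three to four rounds already exceed a hundred genes, consistent with the text above.
```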

The utility of sequential barcoding was further demonstrated by adapting this method to study gene expression in mammalian tissues. Mammalian tissues suffer from both a large amount of autofluorescence and light scattering, making detection of smFISH probes on mRNA difficult. An amplified single-molecule detection technology, smHCR (single-molecule hybridization chain reaction), was developed to allow for the quantification of mRNA in tissue. This technology is demonstrated in combination with light sheet microscopy and background-reducing tissue clearing technology, enabling whole-organ sequential barcoding to monitor in situ gene expression directly in intact mammalian tissue.

The methods presented in this thesis, specifically sequential barcoding and smHCR, enable multiplexed transcriptional observations in any tissue of interest. These technologies will serve as a general platform for future transcriptomic studies of complex tissues.

Relevance:

10.00%

Abstract:

This thesis is an investigation into the nature of data analysis and computer software systems which support this activity.

The first chapter develops the notion of data analysis as an experimental science which has two major components: data-gathering and theory-building. The basic role of language in determining the meaningfulness of theory is stressed, and the informativeness of a language and data base pair is studied. The static and dynamic aspects of data analysis are then considered from this conceptual vantage point. The second chapter surveys the available types of computer systems which may be useful for data analysis. Particular attention is paid to the questions raised in the first chapter about the language restrictions imposed by the computer system and its dynamic properties.

The third chapter discusses the REL data analysis system, which was designed to satisfy the needs of the data analyzer in an operational relational data system. The major limitation on the use of such systems is the amount of access to data stored on a relatively slow secondary memory. This problem of the paging of data is investigated and two classes of data structure representations are found, each of which has desirable paging characteristics for certain types of queries. One representation is used by most of the generalized data base management systems in existence today, but the other is clearly preferred in the data analysis environment, as conceptualized in Chapter I.
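To make the paging contrast concrete, the sketch below counts pages touched by a single-attribute scan under a record-oriented layout (the style of most generalized data base management systems) versus an attribute-oriented layout, the kind preferred for data analysis; the page size, record width, and value size are modern, assumed figures for illustration only.

```python
# Page-count comparison for a scan of one variable over all cases, under two layouts.
# All sizes are illustrative assumptions, not parameters of the REL system.
PAGE_BYTES = 4096
N_RECORDS = 100_000      # cases in the data base
N_ATTRIBUTES = 50        # variables per case
VALUE_BYTES = 8          # fixed-size value

def pages_record_wise() -> int:
    # Record-oriented layout: every whole record is paged in to read one attribute.
    record_bytes = N_ATTRIBUTES * VALUE_BYTES
    return -(-N_RECORDS * record_bytes // PAGE_BYTES)   # ceiling division

def pages_attribute_wise() -> int:
    # Attribute-oriented layout: only the pages holding that one attribute's values.
    return -(-N_RECORDS * VALUE_BYTES // PAGE_BYTES)

print("record-wise   :", pages_record_wise(), "pages")
print("attribute-wise:", pages_attribute_wise(), "pages")
```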

This data representation has strong implications for a fundamental process of data analysis -- the quantification of variables. Since quantification is one of the few means of summarizing and abstracting, data analysis systems are under strong pressure to facilitate the process. Two implementations of quantification are studied: one analogous to the form of the lower predicate calculus and another more closely attuned to the data representation. A comparison of these indicates that the use of the "label class" method results in orders of magnitude improvement over the lower predicate calculus technique.