15 results for Aeronautics in police work
in CaltechTHESIS
Abstract:
Because so little is known about the structure of membrane proteins, an attempt has been made in this work to develop techniques by which to model them in three dimensions. The procedures devised rely heavily upon the availability of several sequences of a given protein. The modelling procedure is composed of two parts. The first identifies transmembrane regions within the protein sequence on the basis of hydrophobicity, β-turn potential, and the presence of certain amino acid types, specifically, proline and basic residues. The second part of the procedure arranges these transmembrane helices within the bilayer based upon the evolutionary conservation of their residues. Conserved residues are oriented toward other helices and variable residues are positioned to face the surrounding lipids. Available structural information concerning the protein's helical arrangement, including the lengths of interhelical loops, is also taken into account. Rhodopsin, band 3, and the nicotinic acetylcholine receptor have all been modelled using this methodology, and mechanisms of action could be proposed based upon the resulting structures.
Specific residues in the rhodopsin and iodopsin sequences were identified, which may regulate the proteins' wavelength selectivities. A hinge-like motion of helices M3, M4, and M5 with respect to the rest of the protein was proposed to result in the activation of transducin, the G-protein associated with rhodopsin. A similar mechanism is also proposed for signal transduction by the muscarinic acetylcholine and β-adrenergic receptors.
The nicotinic acetylcholine receptor was modelled with four transmembrane helices per subunit and with the five homologous M2 helices forming the cation channel. Putative channel-lining residues were identified and a mechanism of channel-opening based upon the concerted, tangential rotation of the M2 helices was proposed.
Band 3, the anion exchange protein found in the erythrocyte membrane, was modelled with 14 transmembrane helices. In general the pathway of anion transport can be viewed as a channel composed of six helices that contains a single hydrophobic restriction. This hydrophobic region will not allow the passage of charged species, unless they are part of an ion-pair. An arginine residue located near this restriction is proposed to be responsible for anion transport. When ion-paired with a transportable anion it rotates across the barrier and releases the anion on the other side of the membrane. A similar process returns it to its original position. This proposed mechanism, based on the three-dimensional model, can account for the passive, electroneutral, anion exchange observed for band 3. Dianions can be transported through a similar mechanism with the additional participation of a histidine residue. Both residues are located on M10.
Abstract:
This thesis presents a study of the dynamical, nonlinear interaction of colliding gravitational waves, as described by classical general relativity. It is focused mainly on two fundamental questions: First, what is the general structure of the singularities and Killing-Cauchy horizons produced in the collisions of exactly plane-symmetric gravitational waves? Second, under what conditions will the collisions of almost-plane gravitational waves (waves with large but finite transverse sizes) produce singularities?
In the work on the collisions of exactly-plane waves, it is shown that Killing horizons in any plane-symmetric spacetime are unstable against small plane-symmetric perturbations. It is thus concluded that the Killing-Cauchy horizons produced by the collisions of some exactly plane gravitational waves are nongeneric, and that generic initial data for the colliding plane waves always produce "pure" spacetime singularities without such horizons. This conclusion is later proved rigorously (using the full nonlinear theory rather than perturbation theory), in connection with an analysis of the asymptotic singularity structure of a general colliding plane-wave spacetime. This analysis also proves that asymptotically the singularities created by colliding plane waves are of inhomogeneous-Kasner type; the asymptotic Kasner axes and exponents of these singularities in general depend on the spatial coordinate that runs tangentially to the singularity in the non-plane-symmetric direction.
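For reference, the Kasner behavior invoked above can be written in its standard form (this is the textbook Kasner metric, not an equation reproduced from the thesis):

```latex
ds^2 = -dt^2 + t^{2p_1}\,dx^2 + t^{2p_2}\,dy^2 + t^{2p_3}\,dz^2,
\qquad p_1 + p_2 + p_3 = p_1^2 + p_2^2 + p_3^2 = 1 .
```

In the inhomogeneous-Kasner singularities described above, the axes and exponents p_i become functions of the spatial coordinate that runs tangentially to the singularity.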
In the work on collisions of almost-plane gravitational waves, first some general properties of single almost-plane gravitational-wave spacetimes are explored. It is shown that, by contrast with an exact plane wave, an almost-plane gravitational wave cannot have a propagation direction that is Killing; i.e., it must diffract and disperse as it propagates. It is also shown that an almost-plane wave cannot be precisely sandwiched between two null wavefronts; i.e., it must leave behind tails in the spacetime region through which it passes. Next, the occurrence of spacetime singularities in the collisions of almost-plane waves is investigated. It is proved that if two colliding, almost-plane gravitational waves are initially exactly plane-symmetric across a central region of sufficiently large but finite transverse dimensions, then their collision produces a spacetime singularity with the same local structure as in the exact-plane-wave collision. Finally, it is shown that a singularity still forms when the central regions are only approximately plane-symmetric initially. Stated more precisely, it is proved that if the colliding almost-plane waves are initially sufficiently close to being exactly plane-symmetric across a bounded central region of sufficiently large transverse dimensions, then their collision necessarily produces spacetime singularities. In this case, nothing is now known about the local and global structures of the singularities.
Abstract:
In this work, computationally efficient approximate methods are developed for analyzing uncertain dynamical systems. Uncertainties in both the excitation and the modeling are considered and examples are presented illustrating the accuracy of the proposed approximations.
For nonlinear systems under uncertain excitation, methods are developed to approximate the stationary probability density function and statistical quantities of interest. The methods are based on approximating solutions to the Fokker-Planck equation for the system and differ from traditional methods in which approximate solutions to stochastic differential equations are found. The new methods require little computational effort and examples are presented for which the accuracy of the proposed approximations compares favorably to results obtained by existing methods. The most significant improvements are made in approximating quantities related to the extreme values of the response, such as expected outcrossing rates, which are crucial for evaluating the reliability of the system.
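To fix ideas, for a scalar diffusion dx = -V'(x) dt + sqrt(2D) dW the stationary Fokker-Planck equation has the closed-form solution p(x) ∝ exp(-V(x)/D), which a short sketch can evaluate numerically (the bistable potential and noise level below are illustrative choices, not examples from the thesis):

```python
import numpy as np

# Stationary Fokker-Planck density for dx = -V'(x) dt + sqrt(2D) dW:
# p(x) ∝ exp(-V(x)/D).  Illustrative potential (not from the thesis):
def V(x):
    return 0.25 * x**4 - 0.5 * x**2   # bistable Duffing-type potential

D = 0.5                               # noise intensity (assumed)
x = np.linspace(-4, 4, 2001)
p = np.exp(-V(x) / D)
p /= np.trapz(p, x)                   # normalize to a probability density

mean = np.trapz(x * p, x)             # statistical quantities of interest
var = np.trapz(x**2 * p, x) - mean**2
print(mean, var)
```

Moments such as the variance follow directly by quadrature against the stationary density, without simulating the stochastic differential equation itself.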
Laplace's method of asymptotic approximation is applied to approximate the probability integrals which arise when analyzing systems with modeling uncertainty. The asymptotic approximation reduces the problem of evaluating a multidimensional integral to solving a minimization problem and the results become asymptotically exact as the uncertainty in the modeling goes to zero. The method is found to provide good approximations for the moments and outcrossing rates for systems with uncertain parameters under stochastic excitation, even when there is a large amount of uncertainty in the parameters. The method is also applied to classical reliability integrals, providing approximations in both the transformed (independently, normally distributed) variables and the original variables. In the transformed variables, the asymptotic approximation yields a very simple formula for approximating the value of SORM integrals. In many cases, it may be computationally expensive to transform the variables, and an approximation is also developed in the original variables. Examples are presented illustrating the accuracy of the approximations and results are compared with existing approximations.
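The essence of Laplace's method is visible in one dimension: the integral concentrates near the minimizer of the exponent, so evaluating a multidimensional probability integral reduces to a minimization plus a curvature term. The integrand below is an invented toy, not one of the thesis's reliability integrals:

```python
import numpy as np

# Laplace approximation of I(n) = ∫ exp(-n f(x)) dx for large n:
# I(n) ≈ exp(-n f(x*)) * sqrt(2π / (n f''(x*))), x* the minimizer of f.
def f(x):
    return 0.5 * (x - 1.0)**2 + 0.1 * (x - 1.0)**4

x = np.linspace(-4, 6, 200001)
n = 50.0
exact = np.trapz(np.exp(-n * f(x)), x)   # "exact" by fine quadrature

x_star = 1.0                             # minimizer of f
f2 = 1.0                                 # f''(x*) for this f
laplace = np.exp(-n * f(x_star)) * np.sqrt(2 * np.pi / (n * f2))

print(exact, laplace)                    # agree closely for large n
```

The approximation becomes asymptotically exact as n grows, mirroring the thesis's statement that the results become exact as the modeling uncertainty goes to zero.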
Abstract:
The negative impacts of ambient aerosol particles, or particulate matter (PM), on human health and climate are well recognized. However, owing to the complexity of aerosol particle formation and chemical evolution, emissions control strategies remain difficult to develop in a cost-effective manner. In this work, three studies are presented to address several key issues currently stymieing California's efforts to continue improving its air quality.
Gas-phase organic mass (GPOM) and CO emission factors are used in conjunction with measured enhancements in oxygenated organic aerosol (OOA) relative to CO to quantify the significant lack of closure between expected and observed organic aerosol concentrations attributable to fossil-fuel emissions. Two possible conclusions emerge from the analysis to yield consistency with the ambient organic data: (1) vehicular emissions are not a dominant source of anthropogenic fossil SOA in the Los Angeles Basin, or (2) the ambient SOA mass yields used to determine the SOA formation potential of vehicular emissions are substantially higher than those derived from laboratory chamber studies. Additional laboratory chamber studies confirm that, owing to vapor-phase wall loss, the SOA mass yields currently used in virtually all 3D chemical transport models are biased low by as much as a factor of 4. Furthermore, predictions from the Statistical Oxidation Model suggest that this bias could be as high as a factor of 8 if the influence of the chamber walls could be removed entirely.
Once vapor-phase wall loss has been accounted for in a new suite of laboratory chamber experiments, the SOA parameterizations within atmospheric chemical transport models should also be updated. To address the numerical challenges of implementing the next generation of SOA models in atmospheric chemical transport models, a novel mathematical framework, termed the Moment Method, is designed and presented. Assessment of the Moment Method's strengths and weaknesses provides valuable insight that can guide future development of SOA modules for atmospheric CTMs.
Finally, regional inorganic aerosol formation and evolution is investigated via detailed comparison of predictions from the Community Multiscale Air Quality (CMAQ version 4.7.1) model against a suite of airborne and ground-based meteorological measurements, gas- and aerosol-phase inorganic measurements, and black carbon (BC) measurements over Southern California during the CalNex field campaign in May/June 2010. Results suggest that continuing to target sulfur emissions in the hope of reducing ambient PM concentrations may not be the most effective strategy for Southern California. Instead, targeting dairy emissions is likely to be an effective strategy for substantially reducing ammonium nitrate concentrations in the eastern part of the Los Angeles Basin.
Biophysical and network mechanisms of high frequency extracellular potentials in the rat hippocampus
Abstract:
A fundamental question in neuroscience is how distributed networks of neurons communicate and coordinate dynamically and specifically. Several models propose that oscillating local networks can transiently couple to each other through phase-locked firing. Coherent local field potentials (LFP) between synaptically connected regions are often presented as evidence for such coupling. The physiological correlates of LFP signals depend on many anatomical and physiological factors, however, and how the underlying neural processes collectively generate features of different spatiotemporal scales is poorly understood. High frequency oscillations in the hippocampus, including gamma rhythms (30-100 Hz) that are organized by the theta oscillations (5-10 Hz) during active exploration and REM sleep, as well as sharp wave-ripples (SWRs, 140-200 Hz) during immobility or slow wave sleep, have each been associated with various aspects of learning and memory. Deciphering their physiology and functional consequences is crucial to understanding the operation of the hippocampal network.
We investigated the origins and coordination of high frequency LFPs in the hippocampo-entorhinal network using both biophysical models and analyses of large-scale recordings in behaving and sleeping rats. We found that the synchronization of pyramidal cell spikes substantially shapes, or even dominates, the electrical signature of SWRs in area CA1 of the hippocampus. The precise mechanisms coordinating this synchrony are still unresolved, but they appear to also affect CA1 activity during theta oscillations. The input to CA1, which often arrives in the form of gamma-frequency waves of activity from area CA3 and layer 3 of entorhinal cortex (EC3), did not strongly influence the timing of CA1 pyramidal cells. Rather, our data are more consistent with local network interactions governing pyramidal cells' spike timing during the integration of their inputs. Furthermore, the relative timing of input from EC3 and CA3 during the theta cycle matched that found in previous work to engage mechanisms for synapse modification and active dendritic processes. Our work demonstrates how local networks interact with upstream inputs to generate a coordinated hippocampal output during behavior and sleep, in the form of theta-gamma coupling and SWRs.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise, or subject errors. Furthermore, we use the adaptive submodular property of EC2 to implement an accelerated greedy version of BROAD which leads to orders of magnitude speedup over other methods.
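The select-observe-update cycle behind such adaptive designs can be sketched in miniature. The toy below uses plain information gain rather than EC2, invented theory predictions, and a noiseless subject, purely to illustrate the loop structure:

```python
import numpy as np

# Toy adaptive model testing: each "theory" predicts a binary choice on each
# candidate test; we greedily run the test with the highest expected
# information gain, then update the posterior.  (BROAD in the thesis uses the
# EC2 criterion, which behaves better under noise; this sketch does not.)
predictions = np.array([            # rows: theories, cols: candidate tests
    [0, 0, 1, 1, 0],
    [0, 1, 1, 0, 0],
    [1, 1, 0, 0, 1],
])
posterior = np.ones(3) / 3.0
true_theory = 1                     # hidden "subject" used for simulation
eps = 1e-12

def entropy(p):
    return -np.sum(p * np.log2(p + eps))

for _ in range(3):
    gains = []
    for t in range(predictions.shape[1]):
        g = entropy(posterior)
        for outcome in (0, 1):
            mass = posterior[predictions[:, t] == outcome].sum()
            if mass > 0:
                cond = posterior * (predictions[:, t] == outcome) / mass
                g -= mass * entropy(cond)
        gains.append(g)
    t = int(np.argmax(gains))                      # most informative test
    response = predictions[true_theory, t]         # noiseless subject
    posterior *= (predictions[:, t] == response)   # eliminate inconsistent
    posterior /= posterior.sum()

print(np.argmax(posterior), posterior.max())
```

With noiseless responses the posterior collapses onto the data-generating theory after a few tests; it is exactly this greedy step that EC2 replaces with an edge-cutting objective to retain guarantees under noise.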
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
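The behavioral signature that separates these discounting families is preference reversal, which a few lines can demonstrate (the discount parameters and payoffs below are illustrative, not estimates from the thesis):

```python
# Preference reversal under hyperbolic discounting: with the parameters
# below, $100 now beats $120 in 5 periods, yet $120 at t+5 beats $100 at t
# once both options are far away.  Exponential discounting never reverses,
# because the ratio of discount factors at a fixed gap is constant.
def hyperbolic(t, k=0.25):
    return 1.0 / (1.0 + k * t)

def exponential(t, delta=0.95):
    return delta ** t

for delay in (0, 40):
    prefers_sooner_hyp = 100 * hyperbolic(delay) > 120 * hyperbolic(delay + 5)
    prefers_sooner_exp = 100 * exponential(delay) > 120 * exponential(delay + 5)
    print(delay, prefers_sooner_hyp, prefers_sooner_exp)
# prints: 0 True True
#         40 False True
```

A subjective-time model of the kind described above produces exactly this hyperbolic-like reversal even when the decision maker discounts subjective time exponentially.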
We also test the predictions of behavioral theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute would increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
In future work, BROAD can be widely applicable for testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
The superspace approach provides a manifestly supersymmetric formulation of supersymmetric theories. For N=1 supersymmetry one can use either constrained or unconstrained superfields for such a formulation. Only the unconstrained formulation is suitable for quantum calculations. Until now, all interacting N>1 theories have been written using constrained superfields. No solutions of the nonlinear constraint equations were known.
In this work, we first review the superspace approach and its relation to conventional component methods. The difference between constrained and unconstrained formulations is explained, and the origin of the nonlinear constraints in supersymmetric gauge theories is discussed. It is then shown that these nonlinear constraint equations can be solved by transforming them into linear equations. The method is shown to work for N=1 Yang-Mills theory in four dimensions.
N=2 Yang-Mills theory is formulated in constrained form in six-dimensional superspace, which can be dimensionally reduced to four-dimensional N=2 extended superspace. We construct a superfield calculus for six-dimensional superspace, and show that known matter multiplets can be described very simply. Our method for solving constraints is then applied to the constrained N=2 Yang-Mills theory, and we obtain an explicit solution in terms of an unconstrained superfield. The solution of the constraints can easily be expanded in powers of the unconstrained superfield, and a similar expansion of the action is also given. A background-field expansion is provided for any gauge theory in which the constraints can be solved by our methods. Some implications of this for superspace gauge theories are briefly discussed.
Abstract:
Part I
Regression analyses are performed on in vivo hemodialysis data for the transfer of creatinine, urea, uric acid and inorganic phosphate to determine the effects of variations in certain parameters on the efficiency of dialysis with a Kiil dialyzer. In calculating the mass transfer rates across the membrane, the effects of cell-plasma mass transfer kinetics are considered. The concept of the effective permeability coefficient for the red cell membrane is introduced to account for these effects. A discussion of the consequences of neglecting cell-plasma kinetics, as has been done to date in the literature, is presented.
A physical model for the Kiil dialyzer is presented in order to calculate the available membrane area for mass transfer, the linear blood and dialysate velocities, and other variables. The equations used to determine the independent variables of the regression analyses are presented. The potential dependent variables in the analyses are discussed.
Regression analyses were carried out considering overall mass-transfer coefficients, dialysances, relative dialysances, and relative permeabilities for each substance as the dependent variables. The independent variables were linear blood velocity, linear dialysate velocity, the pressure difference across the membrane, the elapsed time of dialysis, the blood hematocrit, and the arterial plasma concentrations of each substance transferred. The resulting correlations are tabulated, presented graphically, and discussed. The implications of these correlations are discussed from the viewpoint of a research investigator and from the viewpoint of patient treatment.
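The fitting step described above is ordinary multiple linear regression, which can be sketched as follows (all data, units, and coefficient values below are synthetic, invented only to illustrate the least-squares machinery, not the thesis's measurements):

```python
import numpy as np

# Multiple linear regression of the kind used in Part I: a dependent
# variable (e.g., dialysance) regressed on several operating variables.
rng = np.random.default_rng(0)
n = 50
blood_vel = rng.uniform(5, 25, n)        # hypothetical units
dialysate_vel = rng.uniform(1, 10, n)
pressure = rng.uniform(20, 120, n)

# Synthetic "measured" dialysance with known coefficients plus noise.
y = 10 + 1.5 * blood_vel + 0.8 * dialysate_vel + 0.05 * pressure \
    + rng.normal(0, 0.5, n)

# Design matrix with an intercept column; solve by least squares.
X = np.column_stack([np.ones(n), blood_vel, dialysate_vel, pressure])
coef, res, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 2))
```

The recovered coefficients approximate the generating values, which is the sense in which the tabulated correlations in the thesis quantify each independent variable's effect.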
Recommendations for further experimental work are presented.
Part II
The interfacial structure of concurrent air-water flow in a two-inch diameter horizontal tube in the wavy flow regime has been measured using resistance wave gages. The median water depth, r.m.s. wave height, wave frequency, extrema frequency, and wave velocity have been measured as functions of air and water flow rates. Reynolds numbers, Froude numbers, Weber numbers, and bulk velocities for each phase may be calculated from these measurements. No theory for wave formation and propagation available in the literature was sufficient to describe these results.
The water surface level distribution generally is not adequately represented as a stationary Gaussian process. Five types of deviation from the Gaussian process function were noted in this work. The presence of the tube walls and the relatively large interfacial shear stresses preclude the use of simple statistical analyses to describe the interfacial structure. A detailed study of the behavior of individual fluid elements near the interface may be necessary to adequately describe wavy two-phase flow in systems similar to the one used in this work.
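A simple first check on the Gaussian-process hypothesis is to compute the skewness and excess kurtosis of a surface level record, both of which vanish for a Gaussian signal. The records below are simulated for illustration; they are not the wave-gage data from this work:

```python
import numpy as np

# For a Gaussian record, skewness ≈ 0 and excess kurtosis ≈ 0; systematic
# departures of these moments flag non-Gaussian interfacial structure.
rng = np.random.default_rng(1)
gaussian_record = rng.normal(0.0, 1.0, 100_000)
skewed_record = np.exp(0.5 * gaussian_record)   # lognormal: non-Gaussian

def moments(h):
    d = h - h.mean()
    rms = np.sqrt(np.mean(d**2))                # r.m.s. level fluctuation
    skew = np.mean(d**3) / rms**3
    kurt = np.mean(d**4) / rms**4 - 3.0         # excess kurtosis
    return rms, skew, kurt

print(moments(gaussian_record))
print(moments(skewed_record))
```

The same r.m.s. statistic appears in the measurements above as the r.m.s. wave height.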
Abstract:
The isotopic composition of hydrogen and helium in solar cosmic rays provides a means of studying solar flare particle acceleration mechanisms since the enhanced relative abundance of rare isotopes, such as 2H, 3H and 3He, is due to their production by inelastic nuclear collisions in the solar atmosphere during the flare. In this work the Caltech Electron/Isotope Spectrometer on the IMP-7 spacecraft has been used to measure this isotopic composition. The response of the dE/dx-E particle telescope is discussed and alpha particle channeling in thin detectors is identified as an important background source affecting measurement of low values of (3He/4He).
The following flare-averaged results are obtained for the period October 1972 - November 1973: (2H/1H) = 7 (+10/-6) × 10^-6 (1.6 - 8.6 MeV/nuc), (3H/1H) < 3.4 × 10^-6 (1.2 - 6.8 MeV/nuc), (3He/4He) = (9 ± 4) × 10^-3, and (3He/1H) = (1.7 ± 0.7) × 10^-4 (3.1 - 15.0 MeV/nuc). The deuterium and tritium ratios are significantly lower than the same ratios at higher energies, suggesting that the deuterium and tritium spectra are harder than that of the protons. They are, however, consistent with the same thin-target-model relativistic path length of ~1 g/cm^2 (or equivalently ~0.3 g/cm^2 at 30 MeV/nuc) implied by the higher-energy results. The 3He results, consistent with previous observations, would imply a path length at least 3 times as long, but the observations may be contaminated by small 3He-rich solar events.
During 1973, three "3He-rich events," containing much more 3He than 2H or 3H, were observed on 14 February, 29 June and 5 September. Although the total production cross sections for 2H, 3H and 3He are comparable, an upper limit to (2H/3He) and (3H/3He) of 0.053 (2.9 - 6.8 MeV/nuc) was obtained, summing over the three events. This upper limit is marginally consistent with Ramaty and Kozlovsky's thick-target model, which accounts for such events by the nuclear reaction kinematics and directional properties of the flare acceleration process. The 5 September event was particularly significant in that much more 3He was observed than 4He, and the fluxes of 3He and 1H were about equal. The range of (3He/4He) for such events reported to date is 0.2 to ~6, while (3He/1H) extends from 10^-3 to ~1. The role of backscattered and mirroring protons and alphas in accounting for such variations is discussed.
Abstract:
Biological information storage and retrieval is a dynamic process that requires the genome to undergo dramatic structural rearrangements. Recent advances in single-molecule techniques have allowed precise quantification of the nano-mechanical properties of DNA [1, 2], and direct in vivo observation of molecules in action [3]. In this work, we will examine elasticity in protein-mediated DNA looping, whose structural rearrangement is essential for transcriptional regulation in both prokaryotes and eukaryotes. We will look at hydrodynamics in the process of viral DNA ejection, which mediates information transfer and exchange and has prominent implications in evolution. As in the case of Kepler's laws of planetary motion leading to Newton's gravitational theory, and the allometric scaling laws in biology revealing the organizing principles of complex networks [4], experimental data collapse in these biological phenomena has guided much of our studies and urged us to find the underlying physical principles.
Abstract:
Paralysis is a debilitating condition afflicting millions of people across the globe, and is particularly deleterious to quality of life when motor function of the legs is severely impaired or completely absent. Fortunately, spinal cord stimulation has shown great potential for improving motor function after spinal cord injury and other pathological conditions. Many animal studies have shown that stimulation of the neural networks in the spinal cord can improve motor ability so dramatically that the animals can even stand and step after a complete spinal cord transection.
This thesis presents the successful development of a chronically implantable device for rats that greatly enhances the ability to control the site of spinal cord stimulation. This is achieved through the use of a parylene-C based microelectrode array, which enables a density of stimulation sites unattainable with conventional wire electrodes. While many microelectrode devices have been proposed in the past, the spinal cord is a particularly challenging environment due to the bending and movement it undergoes in a live animal. The developed microelectrode array is the first to have been implanted in vivo while retaining functionality for over a month. With this array, different neural pathways can be selectively activated to facilitate standing and stepping in spinalized rats using various electrode combinations, and important differences in responses are observed.
An engineering challenge for the usability of any high density electrode array is connecting the numerous electrodes to a stimulation source. This thesis develops several technologies to address this challenge, beginning with a fully passive implant that uses one wire per electrode to connect to an external stimulation source. The number of wires passing through the body and the skin proved to be a hazard for the health of the animal, so a multiplexed implant was devised in which active electronics reduce the number of wires. Finally, a fully wireless implant was developed. As these implants are tested in vivo, encapsulation is of critical importance to retain functionality in a chronic experiment, especially for the active implants, and it was achieved without the use of costly ceramic or metallic hermetic packaging. Active implants were built that retained functionality 8 weeks after implantation, and achieved stepping in spinalized rats after just 8-10 days, which is far sooner than wire-based electrical stimulation has achieved in prior work.
Abstract:
Part I
The latent heat of vaporization of n-decane is measured calorimetrically at temperatures between 160° and 340°F. The internal energy change upon vaporization, and the specific volume of the vapor at its dew point are calculated from these data and are included in this work. The measurements are in excellent agreement with available data at 77°F and also at 345°F, and are presented in graphical and tabular form.
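The internal energy change follows from the latent heat and the volume change on vaporization via Δu = L - P·(v_g - v_l). As a check case, the sketch below uses well-known property values for water at 100 °C rather than the thesis's n-decane data:

```python
# Internal energy of vaporization from latent heat and specific volumes:
# Δu = L − P·(v_g − v_l).  Water at 100 °C, standard property values.
L = 2257.0        # latent heat of vaporization, kJ/kg
P = 101.325       # saturation pressure, kPa
v_g = 1.673       # specific volume of saturated vapor, m^3/kg
v_l = 0.001       # specific volume of saturated liquid, m^3/kg

delta_u = L - P * (v_g - v_l)     # kPa·m^3/kg = kJ/kg
print(round(delta_u, 1))          # → 2087.6
```

The same relation, with measured L and dew-point vapor volumes, yields the internal energy changes tabulated in Part I.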
Part II
Simultaneous material and energy transport from a one-inch adiabatic porous cylinder is studied as a function of free stream Reynolds Number and turbulence level. Experimental data are presented for Reynolds Numbers between 1600 and 15,000 based on the cylinder diameter, and for apparent turbulence levels between 1.3 and 25.0 per cent. n-heptane and n-octane are the evaporating fluids used in this investigation.
Gross Sherwood Numbers are calculated from the data and are in substantial agreement with existing correlations of the results of other workers. The Sherwood Numbers, characterizing mass transfer rates, increase approximately as the 0.55 power of the Reynolds Number. At a free stream Reynolds Number of 3700 the Sherwood Number showed a 40% increase as the apparent turbulence level of the free stream was raised from 1.3 to 25 per cent.
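The reported power law makes the scale of the effect easy to check; for instance, over the full Reynolds Number range of the experiments:

```python
# Sh ∝ Re^0.55 implies that raising Re from 1600 to 15,000 should multiply
# the Sherwood Number (i.e., the mass transfer rate) by roughly:
ratio = (15000 / 1600) ** 0.55
print(round(ratio, 2))   # → 3.42
```

By comparison, the 40% rise observed at fixed Re = 3700 came entirely from free-stream turbulence, underscoring that turbulence level acts independently of the Reynolds Number scaling.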
Within the uncertainties involved in the diffusion coefficients used for n-heptane and n-octane, the Sherwood Numbers are comparable for both materials. A dimensionless Frössling Number is computed which characterizes either heat or mass transfer rates for cylinders on a comparable basis. The calculated Frössling Numbers based on mass transfer measurements are in substantial agreement with Frössling Numbers calculated from the data of other workers in heat transfer.
Abstract:
Biomolecular circuit engineering is critical for implementing complex functions in vivo, and is a baseline method in the synthetic biology space. However, current methods for conducting biomolecular circuit engineering are time-consuming and tedious. A complete design-build-test cycle typically takes weeks to months due to the lack of an intermediary between design ex vivo and testing in vivo. In this work, we explore the development and application of a "biomolecular breadboard" composed of an in-vitro transcription-translation (TX-TL) lysate to rapidly speed up the engineering design-build-test cycle. We first developed protocols for creating and using lysates for conducting biological circuit design. By doing so we simplified the existing technology to an affordable ($0.03/μL) and easy-to-use three-tube reagent system. We then developed tools to accelerate circuit design by allowing for linear DNA use in lieu of plasmid DNA, and by utilizing principles of modular assembly. This allowed the design-build-test cycle to be reduced to under a business day. We then characterized protein degradation dynamics in the breadboard to aid in implementing complex circuits. Finally, we demonstrated that the breadboard could be applied to engineer complex synthetic circuits in vitro and in vivo. Specifically, we utilized our understanding of linear DNA prototyping, modular assembly, and protein degradation dynamics to characterize the repressilator oscillator and to prototype novel three- and five-node negative feedback oscillators both in vitro and in vivo. We therefore believe the biomolecular breadboard has wide applicability as an intermediary for biological circuit engineering.
Abstract:
Experimental measurements of rate of energy loss were made for protons of energy 0.5 to 1.6 MeV channeling through 1 μm thick silicon targets along the <110>, <111>, and <211> axial directions, and the {100}, {110}, {111}, and {211} planar directions. A 0.05% resolution, automatically controlled magnetic spectrometer was used. The data are presented graphically along with an extensive summary of data in the literature. The data taken cover a wider range of channels than has previously been examined, and are in agreement with the data of F. Eisen et al., Rad. Eff. 13, 93 (1972).
The theory in the literature for channeling energy loss due to interaction with local electrons, core electrons, and distant valence electrons of the crystal atoms is summarized. Straggling is analyzed, and a computer program is described which calculates energy loss and straggling using this theory, the Molière approximation to the Thomas-Fermi potential V_TF, and the detailed silicon crystal structure. Values for the local electron density Z_loc in each of the channels listed above are extracted from the data by graphical matching of the experimental and computer results.
Zeroth and second order contributions to Z_loc as a function of distance from the center of the channel were computed from ∇²V_TF = 4πρ for various channels in silicon. For data taken in this work and data of F. Eisen et al., Rad. Eff. 13, 93 (1972), the calculated zeroth order contribution to Z_loc lies between the experimentally extracted Z_loc values obtained by using the peak and the leading edge of the transmission spectra, suggesting that the observed straggling is due both to statistical fluctuations and to path variation.
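The density extraction from ∇²V_TF = 4πρ can be sketched for a single screened atom. The Molière screening coefficients below are the standard published values, but the units are schematic (Ze² = 1, screening length a_TF = 1), signs and the nuclear point-charge term are omitted, and this illustrates only the spherically symmetric Poisson step, not the thesis's crystal-structure program.

```python
import math

# Molière approximation to the Thomas-Fermi screening function:
# chi(x) = sum_i a_i * exp(-b_i * x), with x = r / a_TF (standard coefficients)
ALPHAS = (0.35, 0.55, 0.10)
BETAS = (0.3, 1.2, 6.0)

def chi(x):
    return sum(a * math.exp(-b * x) for a, b in zip(ALPHAS, BETAS))

def density(x):
    """Density from Poisson's equation, nabla^2 V = 4*pi*rho, for the
    screened Coulomb potential V(r) = chi(r)/r in schematic units.
    For r > 0, nabla^2 V = (1/r) d^2(r V)/dr^2 = chi''(r)/r, so
    rho(r) = chi''(r) / (4*pi*r)."""
    chi2 = sum(a * b * b * math.exp(-b * x) for a, b in zip(ALPHAS, BETAS))
    return chi2 / (4.0 * math.pi * x)

# Sanity check: a numerical second derivative of r*V = chi reproduces
# the analytic density at x = 1 (away from the nucleus).
x, h = 1.0, 1e-4
num = (chi(x + h) - 2.0 * chi(x) + chi(x - h)) / h**2 / (4.0 * math.pi * x)
print(abs(num - density(x)) < 1e-6)  # True
```

Superposing such single-atom densities over the silicon lattice and averaging along a channel is, in outline, how a zeroth order Z_loc profile across a channel could be built up.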
Abstract:
I. The attenuation of sound due to particles suspended in a gas was first calculated by Sewell and later by Epstein in their classical works on the propagation of sound in a two-phase medium. In their work, and in more recent works which include calculations of sound dispersion, the calculations were made for systems in which there was no mass transfer between the two phases. In the present work, mass transfer between phases is included in the calculations.
The attenuation and dispersion of sound in a two-phase condensing medium are calculated as functions of frequency. The medium in which the sound propagates consists of a gaseous phase, a mixture of inert gas and condensable vapor, which contains condensable liquid droplets. The droplets, which interact with the gaseous phase through the interchange of momentum, energy, and mass (through evaporation and condensation), are treated from the continuum viewpoint. Limiting cases, for flow either frozen or in equilibrium with respect to the various exchange processes, help demonstrate the effects of mass transfer between phases. Included in the calculation is the effect of thermal relaxation within droplets. Pressure relaxation between the two phases is examined, but is not included as a contributing factor because it is of interest only at much higher frequencies than the other relaxation processes. The results for a system typical of sodium droplets in sodium vapor are compared to calculations in which there is no mass exchange between phases. It is found that the maximum attenuation is about 25 per cent greater and occurs at about one-half the frequency for the case which includes mass transfer, and that the dispersion at low frequencies is about 35 per cent greater. Results for different values of latent heat are compared.
II. In the flow of a gas-particle mixture through a nozzle, a normal shock may exist in the diverging section of the nozzle. In Marble’s calculation for a shock in a constant area duct, the shock was described as a usual gas-dynamic shock followed by a relaxation zone in which the gas and particles return to equilibrium. The thickness of this zone, which is the total shock thickness in the gas-particle mixture, is of the order of the relaxation distance for a particle in the gas. In a nozzle, the area may change significantly over this relaxation zone so that the solution for a constant area duct is no longer adequate to describe the flow. In the present work, an asymptotic solution, which accounts for the area change, is obtained for the flow of a gas-particle mixture downstream of the shock in a nozzle, under the assumption of small slip between the particles and gas. This amounts to the assumption that the shock thickness is small compared with the length of the nozzle. The shock solution, valid in the region near the shock, is matched to the well known small-slip solution, which is valid in the flow downstream of the shock, to obtain a composite solution valid for the entire flow region. The solution is applied to a conical nozzle. A discussion of methods of finding the location of a shock in a nozzle is included.