11 results for Pauli-like contributions

in CaltechTHESIS


Relevance:

30.00%

Publisher:

Abstract:

Synthetic biology combines biological parts from different sources in order to engineer non-native, functional systems. While there is a lot of potential for synthetic biology to revolutionize processes such as the production of pharmaceuticals, engineering synthetic systems has been challenging. It is oftentimes necessary to explore a large design space to balance the levels of interacting components in the circuit. There are also times when it is desirable to incorporate enzymes that have non-biological functions into a synthetic circuit. Tuning the levels of different components, however, is often restricted to a fixed operating point, and this makes synthetic systems sensitive to changes in the environment. Natural systems are able to respond dynamically to a changing environment by obtaining information relevant to the function of the circuit. This work addresses these problems by establishing frameworks and mechanisms that allow synthetic circuits to communicate with the environment, maintain fixed ratios between components, and potentially add new parts that are outside the realm of current biological function. These frameworks provide a way for synthetic circuits to behave more like natural circuits by enabling a dynamic response, and provide a systematic and rational way to narrow the design space to an experimentally tractable size where likely solutions exist. We hope that the contributions described below will help synthetic biology realize its potential.

Relevance:

20.00%

Publisher:

Abstract:

Sensory-motor circuits course through the parietal cortex of the human and monkey brain. How parietal cortex manipulates these signals has been an important question in behavioral neuroscience. This thesis presents experiments that explore the contributions of monkey parietal cortex to sensory-motor processing, with an emphasis on the area's contributions to reaching. First, it is shown that parietal cortex is organized into subregions devoted to specific movements. Area LIP encodes plans to make saccadic eye movements. A nearby area, the parietal reach region (PRR), plans reaches. A series of experiments are then described which explore the contributions of PRR to reach planning. Reach plans are represented in an eye-centered reference frame in PRR. This representation is shown to be stable across eye movements. When a sequence of reaches is planned, only the impending movement is represented in PRR, showing that the area is more related to movement planning than to storing the memory of reach targets. PRR resembles area LIP in each of these properties: the two areas may provide a substrate for hand-eye coordination. These findings yield new perspectives on the functions of the parietal cortex and on the organization of sensory-motor processing in primate brains.

Relevance:

20.00%

Publisher:

Abstract:

Advances in optical techniques have enabled many breakthroughs in biology and medicine. However, light scattering by biological tissues remains a great obstacle, restricting the use of optical methods to thin ex vivo sections or superficial layers in vivo. In this thesis, we present two related methods that overcome the optical depth limit—digital time reversal of ultrasound encoded light (digital TRUE) and time reversal of variance-encoded light (TROVE). These two techniques share the same principle of using acousto-optic beacons for time reversal optical focusing within highly scattering media, like biological tissues. Ultrasound, unlike light, is not significantly scattered in soft biological tissues, allowing for ultrasound focusing. In addition, a fraction of the scattered optical wavefront that passes through the ultrasound focus gets frequency-shifted via the acousto-optic effect, essentially creating a virtual source of frequency-shifted light within the tissue. The scattered ultrasound-tagged wavefront can be selectively measured outside the tissue and time-reversed to converge at the location of the ultrasound focus, enabling optical focusing within deep tissues. In digital TRUE, we time reverse ultrasound-tagged light with an optoelectronic time reversal device (the digital optical phase conjugate mirror, DOPC). The use of the DOPC enables high optical gain, allowing for high intensity optical focusing and focal fluorescence imaging in thick tissues at a lateral resolution of 36 µm by 52 µm. The resolution of the TRUE approach is fundamentally limited to that of the wavelength of ultrasound. The ultrasound focus (~ tens of microns wide) usually contains hundreds to thousands of optical modes, such that the scattered wavefront measured is a linear combination of the contributions of all these optical modes. In TROVE, we make use of our ability to digitally record, analyze and manipulate the scattered wavefront to demix the contributions of these spatial modes using variance encoding. In essence, we encode each spatial mode inside the scattering sample with a unique variance, allowing us to computationally derive the time reversal wavefront that corresponds to a single optical mode. In doing so, we uncouple the system resolution from the size of the ultrasound focus, demonstrating optical focusing and imaging between highly diffusing samples at an unprecedented, speckle-scale lateral resolution of ~ 5 µm. Our methods open up the possibility of fully exploiting the prowess and versatility of biomedical optics in deep tissues.
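As a rough numerical illustration of the time-reversal focusing principle described above, the sketch below models the scattering medium as a random complex transmission matrix and phase-conjugates the field measured from a point-like guidestar; the matrix model, mode counts, and variable names are illustrative assumptions, not the DOPC hardware or data from the thesis.

```python
import numpy as np

# Toy model: treat the scattering medium as a random complex transmission matrix T
# mapping n_in internal modes (the plane containing the ultrasound focus) to n_out
# detector pixels. Illustrative sketch only; not the thesis apparatus or data.
rng = np.random.default_rng(0)
n_in, n_out = 256, 4096
T = (rng.standard_normal((n_out, n_in)) +
     1j * rng.standard_normal((n_out, n_in))) / np.sqrt(2)

# A "guidestar" at one internal mode stands in for the ultrasound-tagged virtual source.
focus_idx = 100
src = np.zeros(n_in, dtype=complex)
src[focus_idx] = 1.0

E_out = T @ src                # scattered field measured outside the medium
E_back = T.T @ np.conj(E_out)  # phase-conjugate playback (reciprocity: backward propagation ~ T transpose)

intensity = np.abs(E_back) ** 2
background = np.mean(np.delete(intensity, focus_idx))
print(f"peak-to-background enhancement ~ {intensity[focus_idx] / background:.0f} "
      f"(expected to scale with the number of measured modes, n_out = {n_out})")
```

In this toy model the intensity returned to the guidestar mode exceeds the diffuse background by roughly the number of measured output modes, which is the same scaling argument that motivates measuring and conjugating as many optical modes as possible.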

Relevance:

20.00%

Publisher:

Abstract:

A group G ⊂ Homeo_+(S^1) is a Möbius-like group if every element of G is conjugate in Homeo(S^1) to a Möbius transformation. Our main result is: given a Möbius-like group G which has at least one global fixed point, G is conjugate in Homeo(S^1) to a Möbius group if and only if the limit set of G is all of S^1. Moreover, we prove that if the limit set of G is not S^1, then after identifying some closed subintervals of S^1 to points, the induced action of G is conjugate to an action of a Möbius group.

We also show that the above result does not hold in the case when G has no global fixed points. Namely, we construct examples of Möbius-like groups with limit set equal to S^1, but these groups cannot be conjugated to Möbius groups.

Relevance:

20.00%

Publisher:

Abstract:

Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits have led to robots on Mars, desktop computers and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast, all while remaining functional.

This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of “active self-assembly” of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology’s numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules.

One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved.

One might think that because a system is Turing-complete, capable of computing “anything,” it can do any arbitrary task. But while it can simulate any digital computational problem, there are many behaviors that are not “computations” in a classical sense and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface.

Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors “energetically incomplete” programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time.

As we will demonstrate and prove, a sufficiently expressive implementation of an “active” molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not “energetically incomplete.” But the programmable system also needs to have sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive.

Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly.

We show that simple primitives such as insertion and deletion are able to generate complex and interesting results, such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active self-assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a machine capable of producing languages that are strictly more expressive than the regular languages and, at most, as expressive as context-free grammars. This is a great advance in the theory of active self-assembly, as prior models were either entirely theoretical or only implementable in the context of macro-scale robotics.

We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. The building of a line in logarithmic time is a first step toward building a shape in logarithmic time. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion. We show that monomer molecules are converted into the polymer in logarithmic time via spectrofluorimetry and gel electrophoresis experiments. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism. This shows the growth of a population of polymers in logarithmic time. We characterize the DNA insertion mechanism that we utilize in Chapter 4. We experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude, by programming the sequences of DNA that initiate the reaction.
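To make the growth law concrete, here is a toy discrete-time simulation of the mechanism described above, in which each insertion consumes one open site and exposes two new ones; the round structure and counts are illustrative abstractions, not the actual DNA reaction kinetics.

```python
import math

def simulate_insertion_growth(initial_sites: int = 1, rounds: int = 20) -> int:
    """Toy model of insertion-driven growth: in each round every open insertion
    site accepts one monomer, and each insertion exposes two new sites, so the
    number of sites doubles per round and the polymer length grows geometrically."""
    sites = initial_sites
    length = 0
    for r in range(1, rounds + 1):
        length += sites        # one monomer incorporated per open site
        sites *= 2             # each insertion creates two new insertion sites
        print(f"round {r:2d}: polymer length = {length}, open sites = {sites}")
    return length

final_length = simulate_insertion_growth()
# Reaching a target length N therefore takes on the order of log2(N) rounds.
print("log2(final length) ≈", round(math.log2(final_length), 1), "rounds of insertion")
```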

In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.

Relevance:

20.00%

Publisher:

Abstract:

The nature of the subducted lithospheric slab is investigated seismologically by tomographic inversions of ISC residual travel times. The slab, in which nearly all deep earthquakes occur, is fast in the seismic images because it is much cooler than the ambient mantle. High resolution three-dimensional P and S wave models in the NW Pacific are obtained using regional data, while inversion for the SW Pacific slabs includes teleseismic arrivals. Resolution and noise estimates show the models are generally well resolved.
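For readers unfamiliar with this class of methods, the sketch below shows the generic damped least-squares form of a travel-time residual inversion on synthetic data; the ray matrix, grid, damping value, and anomaly are invented for illustration and bear no relation to the actual ISC data set or the model parameterization used in the thesis.

```python
import numpy as np

# Generic damped least-squares travel-time inversion on synthetic data.
# d_i = sum_j G_ij * s_j, where s_j is the slowness perturbation in cell j and
# G_ij is the length of ray i in cell j. Everything below is illustrative only.
rng = np.random.default_rng(1)
n_rays, n_cells = 500, 100

# Sparse, random "ray coverage" matrix standing in for traced ray paths.
G = rng.random((n_rays, n_cells)) * (rng.random((n_rays, n_cells)) < 0.1)

s_true = np.zeros(n_cells)
s_true[40:50] = -0.02                                      # fast, slab-like anomaly (negative slowness)
d_obs = G @ s_true + 0.005 * rng.standard_normal(n_rays)   # noisy residual travel times

# Minimize ||G s - d||^2 + lam^2 ||s||^2 by stacking a damping block onto G.
lam = 0.5
A = np.vstack([G, lam * np.eye(n_cells)])
b = np.concatenate([d_obs, np.zeros(n_cells)])
s_est, *_ = np.linalg.lstsq(A, b, rcond=None)

print("recovered anomaly in cells 40-49:", np.round(s_est[40:50], 3))
```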

The slab anomalies in these models, as inferred from the seismicity, are generally coherent in the upper mantle and become contorted and decrease in amplitude with depth. Fast slabs are surrounded by slow regions shallower than 350 km depth. Slab fingering, including segmentation and spreading, is indicated near the bottom of the upper mantle. The fast anomalies associated with the Japan, Izu-Bonin, Mariana and Kermadec subduction zones tend to flatten to sub-horizontal at depth, while downward spreading may occur under parts of the Mariana and Kuril arcs. The Tonga slab appears to end around 550 km depth, but is underlain by a fast band at 750-1000 km depths.

The NW Pacific model combined with the Clayton-Comer mantle model predicts many observed residual sphere patterns. The predictions indicate that the near-source anomalies affect the residual spheres less than the teleseismic contributions. The teleseismic contributions may be removed either by using a mantle model, or by using teleseismic station averages of residuals from only regional events. The slab-like fast bands in the corrected residual spheres are consistent with seismicity trends under the Mariana, Izu-Bonin and Japan trenches, but are inconsistent for the Kuril events.

The comparison of the tomographic models with earthquake focal mechanisms shows that deep compression axes and fast velocity slab anomalies are in consistent alignment, even when the slab is contorted or flattened. Abnormal stress patterns are seen at major junctions of the arcs. The depth boundary between tension and compression in the central parts of these arcs appears to depend on the dip and topology of the slab.

Relevance:

20.00%

Publisher:

Abstract:

Dynamic rupture simulations are unique in their contributions to the study of earthquake physics. The current rapid development of dynamic rupture simulations poses several new questions: Do the simulations reflect the real world? Do the simulations have predictive power? Which one should we believe when the simulations disagree? This thesis illustrates how integration with observations can help address these questions and reduce the effects of non-uniqueness of both dynamic rupture simulations and kinematic inversion problems. Dynamic rupture simulations with observational constraints can effectively identify non-physical features inferred from observations. Moreover, the integrative technique can also provide more physical insights into the mechanisms of earthquakes. This thesis demonstrates two examples of such kinds of integration: dynamic rupture simulations of the Mw 9.0 2011 Tohoku-Oki earthquake and of earthquake ruptures in damaged fault zones:

(1) We develop simulations of the Tohoku-Oki earthquake based on a variety of observations and minimal assumptions about model parameters. The simulations provide realistic estimates of the stress drop and fracture energy of the region and explain the physical mechanisms of high-frequency radiation in the deep region. We also find that the overriding subduction wedge contributes significantly to the up-dip rupture propagation and large final slip in the shallow region. Such findings are also applicable to other megathrust earthquakes.

(2) Damaged fault zones are usually found around natural faults, but their effects on earthquake ruptures have been largely unknown. We simulate earthquake ruptures in damaged fault zones with material properties constrained by seismic and geological observations. We show that reflected waves in fault zones are effective at generating pulse-like ruptures and head waves tend to accelerate and decelerate rupture speeds. These mechanisms are robust in natural fault zones with large attenuation and off-fault plasticity. Moreover, earthquakes in damaged fault zones can propagate at super-Rayleigh speeds that are unstable in homogeneous media. Supershear transitions in fault zones do not require large fault stresses. In the end, we present observations in the Big Bear region, where variability of rupture speeds of small earthquakes correlates with the laterally variable materials in a damaged fault zone.

Relevance:

20.00%

Publisher:

Abstract:

Alternative scaffolds are non-antibody proteins that can be engineered to bind new targets. They have found useful niches in the therapeutic space due to their smaller size and the ease with which they can be engineered to be bispecific. We sought a new scaffold that could be used for therapeutic ends and chose the C2 discoidin domain of factor VIII, which is well studied and of human origin. Using yeast surface display, we engineered the C2 domain to bind to αvβ3 integrin with a 16 nM affinity while retaining its thermal stability and monomeric nature. We obtained a crystal structure of the engineered domain at 2.1 Å resolution. We have christened this discoidin domain alternative scaffold the “discobody.”

Relevance:

20.00%

Publisher:

Abstract:

The amorphous phases of the Pd-Cu-P system have been obtained using the technique of rapid quenching from the liquid state. Broad maxima in the diffraction pattern, indicative of a glass-like structure, were obtained in the X-ray diffraction studies. The composition range over which the amorphous solid phase is retained for the Pd-Cu-P system is (Pd100-xCux)80P20 with 10 ≤ x ≤ 50, (Pd65Cu35)100-yPy with 15 ≤ y ≤ 24, and (Pd60Cu40)100-yPy with 15 ≤ y ≤ 24.

The electrical resistivity for the Pd-Cu-P alloys decreases with temperature as T^2 at low temperatures and as T at high temperatures up to the crystallization temperature. The structural scattering model of the resistivity proposed by Sinha and the spin-fluctuation resistivity model proposed by Hasegawa are re-examined in the light of the similarity of this result to the Pt-Ni-P and Pd-Ni-P systems. Objections are raised to these interpretations of the resistivity results, and an alternative model is proposed that is consistent with the new results on Pd-Cu-P and with the observation of similar effects in crystalline transition metal alloys. The observed negative temperature coefficients of resistivity in these amorphous alloys are thus interpreted as being due to the modification of the density of states with temperature through the electron-phonon interaction. The weak Pauli paramagnetism of the Pd-Cu-P, Pt-Ni-P and Pd-Ni-P alloys is interpreted as a modification of the transition d-states resulting from the formation of strong transition metal-metalloid bonds, rather than a large transfer of electrons from the glass-former atoms (P in this case) to the d-band of the transition metal in a rigid-band picture.
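As a minimal illustration of the two temperature regimes quoted above, the sketch below fits synthetic resistivity data to a T^2 law at low temperature and a linear law at high temperature; the coefficients, temperature ranges, and data are invented and are not the thesis measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic resistivity data with a negative temperature coefficient, illustrating
# the quoted rho ~ T^2 (low T) and rho ~ T (high T) regimes. Invented numbers only.
rng = np.random.default_rng(2)
T_low = np.linspace(4, 40, 20)
T_high = np.linspace(150, 500, 20)
rho_low = 100.0 - 1.0e-3 * T_low**2 + 0.01 * rng.standard_normal(T_low.size)
rho_high = 99.0 - 0.02 * T_high + 0.05 * rng.standard_normal(T_high.size)

def quadratic(T, rho0, a):
    return rho0 - a * T**2   # low-temperature regime

def linear(T, rho0, b):
    return rho0 - b * T      # high-temperature regime

p_low, _ = curve_fit(quadratic, T_low, rho_low)
p_high, _ = curve_fit(linear, T_high, rho_high)
print("low-T fit  (rho0, a):", np.round(p_low, 4))
print("high-T fit (rho0, b):", np.round(p_high, 4))
```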

Relevance:

20.00%

Publisher:

Abstract:

An instrument, the Caltech High Energy Isotope Spectrometer Telescope (HEIST), has been developed to measure isotopic abundances of cosmic ray nuclei in the charge range 3 ≤ Z ≤ 28 and the energy range between 30 and 800 MeV/nuc by employing an energy-loss versus residual-energy technique. Measurements of particle trajectories and energy losses are made using a multiwire proportional counter hodoscope and a stack of CsI(Tl) crystal scintillators, respectively. A detailed analysis has been made of the mass resolution capabilities of this instrument.

Landau fluctuations set a fundamental limit on the attainable mass resolution, which for this instrument ranges between ~0.07 AMU for Z ≈ 3 and ~0.2 AMU for Z ≈ 28. Contributions to the mass resolution due to uncertainties in measuring the path length and energy losses of the detected particles are shown to degrade the overall mass resolution to between ~0.1 AMU (Z ≈ 3) and ~0.3 AMU (Z ≈ 28).
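As a back-of-the-envelope sketch of the energy-loss versus residual-energy technique mentioned above, the function below inverts a simple power-law range-energy relation to recover a mass-like quantity from ΔE and E′; the constants k and a, the detector thickness, and the example numbers are illustrative assumptions, not the HEIST calibration.

```python
# Back-of-the-envelope sketch of the energy-loss / residual-energy technique.
# Assumes a power-law range-energy relation R(E) = k * (M / Z**2) * (E / M)**a;
# k, a, and all example numbers below are illustrative, not the HEIST calibration.

def mass_from_dE_E(delta_E, E_res, Z, thickness, k=0.01, a=1.77):
    """Solve thickness = R(delta_E + E_res) - R(E_res) for the particle mass M.

    delta_E   : energy deposited in the front (Delta-E) detector
    E_res     : residual energy deposited in the stopping detector
    Z         : particle charge number
    thickness : effective front-detector thickness (same units as R)
    """
    # thickness = (k / Z**2) * M**(1 - a) * ((delta_E + E_res)**a - E_res**a)
    # =>  M = [ k * ((delta_E + E_res)**a - E_res**a) / (thickness * Z**2) ]**(1 / (a - 1))
    bracket = k * ((delta_E + E_res) ** a - E_res ** a) / (thickness * Z ** 2)
    return bracket ** (1.0 / (a - 1.0))

# Two events with the same charge and similar total energy but different Delta-E
# partitioning map to different mass values -- the basis of isotope separation.
print(mass_from_dE_E(delta_E=120.0, E_res=480.0, Z=8, thickness=0.5))
print(mass_from_dE_E(delta_E=130.0, E_res=470.0, Z=8, thickness=0.5))
```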

A formalism, based on the leaky box model of cosmic ray propagation, is developed for obtaining isotopic abundance ratios at the cosmic ray sources from abundances measured in local interstellar space for elements having three or more stable isotopes, one of which is believed to be absent at the cosmic ray sources. This purely secondary isotope is used as a tracer of secondary production during propagation. This technique is illustrated for the isotopes of the elements O, Ne, S, Ar and Ca.

The uncertainties in the derived source ratios due to errors in fragmentation and total inelastic cross sections, in observed spectral shapes, and in measured abundances are evaluated. It is shown that the dominant sources of uncertainty are uncorrelated errors in the fragmentation cross sections and statistical uncertainties in measuring local interstellar abundances.

These results are applied to estimate the extent to which uncertainties must be reduced in order to distinguish between cosmic ray production in a solar-like environment and in various environments with greater neutron enrichments.

Relevance:

20.00%

Publisher:

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or sub-millimeter in human tissue). While OCT utilizes infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before getting back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing down the simulation time from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but it also provides the underlying ground truth of the simulated images at the same time, because we specify that ground truth at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, clever implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which will be explained in detail later in the thesis.
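To give a flavor of the forward simulation, here is a deliberately minimal Monte Carlo photon random walk in a homogeneous slab; it omits anisotropic scattering, importance sampling, photon splitting, the voxel mesh, and OCT signal formation, and every parameter value is an invented placeholder rather than a setting from the thesis simulator.

```python
import numpy as np

# Deliberately minimal Monte Carlo photon random walk in a homogeneous slab.
# Isotropic scattering, no anisotropy factor, no importance sampling or photon
# splitting, and no interference/OCT signal formation; all values are invented.
rng = np.random.default_rng(0)
mu_s, mu_a = 10.0, 0.1            # scattering / absorption coefficients (1/mm)
slab_depth = 1.0                  # mm
n_photons = 5_000

backscattered_depths = []
for _ in range(n_photons):
    pos = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])          # launched straight into the tissue
    weight, max_depth = 1.0, 0.0
    while True:
        step = -np.log(rng.random()) / (mu_s + mu_a)   # free path to the next event
        pos = pos + step * direction
        max_depth = max(max_depth, pos[2])
        if pos[2] < 0.0:                            # escaped back through the surface
            backscattered_depths.append(max_depth)
            break
        if pos[2] > slab_depth or weight < 1e-4:    # transmitted, or killed by absorption
            break
        weight *= mu_s / (mu_s + mu_a)              # survival probability at each interaction
        cos_t = 2.0 * rng.random() - 1.0            # isotropic scattering: new direction
        phi = 2.0 * np.pi * rng.random()
        sin_t = np.sqrt(1.0 - cos_t ** 2)
        direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

print(f"{len(backscattered_depths)} of {n_photons} photons backscattered; "
      f"median depth probed ≈ {np.median(backscattered_depths):.3f} mm")
```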

Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we are looking at 93%. We achieved this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a great position by providing us with a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and the types of layers present in the image); then the image is handed to a regression model that is trained specifically for that particular structure to predict the length of the different layers and, by doing so, reconstruct the ground truth of the image. We also demonstrate that ideas from Deep Learning can be useful to further improve the performance.
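The sketch below illustrates the classify-then-regress idea on synthetic feature vectors using scikit-learn; the random-forest models, the feature representation, and the data are placeholders standing in for the committee-of-experts architecture, not a reproduction of it.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Sketch of the classify-then-regress pipeline: a classifier first predicts the
# structure class (e.g. number of layers) from image features, then a regressor
# trained only on that class predicts the layer extents. Synthetic data only.
rng = np.random.default_rng(0)
n_samples, n_features = 2000, 64
X = rng.standard_normal((n_samples, n_features))          # stand-in for A-scan features
structure = rng.integers(0, 3, n_samples)                 # ground-truth structure class
layer_len = rng.random((n_samples, 4)) * (structure[:, None] + 1)  # per-layer lengths

classifier = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, structure)

# One regressor ("expert") per structure class, trained only on that class's images.
experts = {
    c: RandomForestRegressor(n_estimators=100, random_state=0).fit(
        X[structure == c], layer_len[structure == c])
    for c in np.unique(structure)
}

def reconstruct(x):
    """Predict the structure class, then hand off to the class-specific regressor."""
    c = int(classifier.predict(x[None, :])[0])
    return c, experts[c].predict(x[None, :])[0]

print(reconstruct(X[0]))
```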

It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that, when fed into a well-trained machine learning model, yields precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a successful attempt but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting this kind of task requires fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.