8 results for Individual ability
in CaltechTHESIS
Abstract:
Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits have led to robots on Mars, desktop computers, and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast, all while remaining functional.
This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of “active self-assembly” of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology’s numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules.
One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved.
One might think that because a system is Turing-complete, capable of computing “anything,” it can perform any arbitrary task. But while such a system can simulate any digital computational problem, there are many behaviors that are not “computations” in the classical sense and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface.
Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors “energetically incomplete” programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time.
As we will demonstrate and prove, a sufficiently expressive implementation of an “active” molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not “energetically incomplete.” But the programmable system also needs sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive.
Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly.
We show that simple primitives such as insertion and deletion are able to generate complex and interesting results, such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active self-assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a machine capable of producing languages that are strictly more expressive than regular languages and, at most, as expressive as context-free grammars. This is a significant advance in the theory of active self-assembly, as prior models were either entirely theoretical or only implementable in the context of macro-scale robotics.
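To make the expressivity claim concrete, the following sketch (an analogy, not the thesis's formal model) shows why insertion alone already exceeds regular languages: repeatedly inserting a monomer pair at a fixed boundary generates {aⁿbⁿ}, a classic context-free but non-regular language.

```python
# Toy analogy for the expressive range of insertion-based rewriting (not the
# thesis's formal active self-assembly model): repeated insertion of "ab" at
# the a/b boundary of "ab" generates {a^n b^n}, a language that is
# context-free but provably not regular.

def grow(word: str, rounds: int) -> str:
    for _ in range(rounds):
        boundary = word.index("b")          # insertion site between the blocks
        word = word[:boundary] + "ab" + word[boundary:]
    return word

print([grow("ab", k) for k in range(4)])    # ['ab', 'aabb', 'aaabbb', 'aaaabbbb']
```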
We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. The building of a line in logarithmic time is a first step toward building a shape in logarithmic time. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion. We show that monomer molecules are converted into the polymer in logarithmic time via spectrofluorimetry and gel electrophoresis experiments. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism. This shows the growth of a population of polymers in logarithmic time. We characterize the DNA insertion mechanism that we utilize in Chapter 4. We experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude, by programming the sequences of DNA that initiate the reaction.
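The logarithmic-time claim rests on a simple counting argument: if every insertion event exposes two new insertion sites, the number of available sites doubles each round and the polymer reaches length n after roughly log₂(n) rounds. The idealized, synchronous sketch below illustrates only this bookkeeping, not the actual chemistry or kinetics.

```python
# Idealized, synchronous model of insertion-driven growth (illustration only):
# every open insertion site accepts one monomer per round, and each insertion
# exposes two new sites. Real reactions are asynchronous and rate-limited.

def rounds_to_length(target_length: int) -> int:
    length, sites, rounds = 1, 1, 0
    while length < target_length:
        length += sites      # one monomer inserted at every open site
        sites *= 2           # each insertion creates two new insertion sites
        rounds += 1
    return rounds

for n in (10, 100, 1000, 10**6):
    print(n, rounds_to_length(n))   # scales like log2(n), not linearly in n
```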
In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.
Abstract:
Waking up from a dreamless sleep, I open my eyes, recognize my wife’s face and am filled with joy. In this thesis, I used functional Magnetic Resonance Imaging (fMRI) to gain insights into the mechanisms involved in this seemingly simple daily occurrence, which poses at least three great challenges to neuroscience: how does conscious experience arise from the activity of the brain? How does the brain process visual input to the point of recognizing individual faces? How does the brain store semantic knowledge about people that we know? To start tackling the first question, I studied the neural correlates of unconscious processing of invisible faces. I was unable to image significant activations related to the processing of completely invisible faces, despite existing reports in the literature. I thus moved on to the next question and studied how recognition of a familiar person was achieved in the brain; I focused on finding invariant representations of person identity – representations that would be activated any time we think of a familiar person, read their name, see their picture, hear them talk, etc. There again, I could not find significant evidence for such representations with fMRI, even in regions where they had previously been found with single unit recordings in human patients (the Jennifer Aniston neurons). Faced with these null outcomes, I eventually turned my investigation back towards the technique that I had been using, fMRI, and the recently praised analytical tool that I had been trusting, Multivariate Pattern Analysis. After a mostly disappointing attempt at replicating a strong single unit finding of a categorical response to animals in the right human amygdala with fMRI, I put fMRI decoding to an ultimate test with a unique dataset acquired in the macaque monkey. There I showed a dissociation between the ability of fMRI to pick up face viewpoint information and its inability to pick up face identity information, which I mostly traced back to the poor clustering of identity-selective units. Though fMRI decoding is a powerful new analytical tool, it does not rid fMRI of its inherent limitations as a hemodynamics-based measure.
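For readers unfamiliar with the decoding analyses referred to above, the sketch below shows the generic shape of an MVPA decoding test: a cross-validated linear classifier applied to multi-voxel response patterns. The data here are simulated and the pipeline deliberately minimal; it is not the analysis code used in the thesis.

```python
# Minimal, generic MVPA decoding sketch on simulated "voxel" patterns
# (illustrative only; not the thesis's preprocessing or analysis pipeline).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200
labels = np.repeat([0, 1], n_trials // 2)       # e.g., two face viewpoints
X = rng.normal(0, 1, (n_trials, n_voxels))
X[labels == 1, :20] += 0.5                      # weak signal in a few voxels

scores = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5)
print("cross-validated decoding accuracy:", scores.mean())  # well above 0.5 chance
```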
Abstract:
Humans are particularly adept at modifying their behavior in accordance with changing environmental demands. Through various mechanisms of cognitive control, individuals are able to tailor actions to fit complex short- and long-term goals. The research described in this thesis uses functional magnetic resonance imaging to characterize the neural correlates of cognitive control at two levels of complexity: response inhibition and self-control in intertemporal choice. First, we examined changes in neural response associated with increased experience and skill in response inhibition; successful response inhibition was associated with decreased neural response over time in the right ventrolateral prefrontal cortex, a region widely implicated in cognitive control, providing evidence for increased neural efficiency with learned automaticity. We also examined a more abstract form of cognitive control using intertemporal choice. In two experiments, we identified putative neural substrates for individual differences in temporal discounting, or the tendency to prefer immediate to delayed rewards. Using dynamic causal models, we characterized the neural circuit between ventromedial prefrontal cortex, an area involved in valuation, and dorsolateral prefrontal cortex, a region implicated in self-control in intertemporal and dietary choice, and found that connectivity from dorsolateral prefrontal cortex to ventromedial prefrontal cortex increases at the time of choice, particularly when delayed rewards are chosen. Moreover, estimates of the strength of connectivity predicted out-of-sample individual rates of temporal discounting, suggesting a neurocomputational mechanism for variation in the ability to delay gratification. Next, we interrogated the hypothesis that individual differences in temporal discounting are in part explained by the ability to imagine future reward outcomes. Using a novel paradigm, we imaged neural response during the imagining of primary rewards, and identified negative correlations between activity in regions associated with the processing of both real and imagined rewards (lateral orbitofrontal cortex and ventromedial prefrontal cortex, respectively) and the individual temporal discounting parameters estimated in the previous experiment. These data suggest that individuals who are better able to represent reward outcomes neurally are less susceptible to temporal discounting. Together, these findings provide further insight into the role of the prefrontal cortex in implementing cognitive control, and propose neurobiological substrates for individual variation.
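The discounting parameters mentioned above are conventionally estimated with a hyperbolic value function, V = A / (1 + kD); the sketch below assumes that standard form and uses hypothetical amounts, since the abstract does not specify the model actually fit.

```python
# Standard hyperbolic discounting (assumed form; amounts and k values are
# hypothetical): subjective value V = A / (1 + k * D) for amount A at delay D.
# Larger k means steeper devaluation of delayed rewards.

def discounted_value(amount: float, delay_days: float, k: float) -> float:
    return amount / (1.0 + k * delay_days)

def prefers_delayed(immediate: float, delayed: float, delay_days: float, k: float) -> bool:
    return discounted_value(delayed, delay_days, k) > immediate

# $20 now vs. $40 in 30 days, for a shallow and a steep discounter.
for k in (0.01, 0.1):
    print(k, prefers_delayed(20, 40, 30, k))   # True for k=0.01, False for k=0.1
```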
Abstract:
Uncovering the demographics of extrasolar planets is crucial to understanding the processes of their formation and evolution. In this thesis, we present four studies that contribute to this end, three of which relate to NASA's Kepler mission, which has revolutionized the field of exoplanets in the last few years.
In the pre-Kepler study, we investigate a sample of exoplanet spin-orbit measurements (measurements of the inclination of a planet's orbit relative to the spin axis of its host star) to determine whether a dominant planet migration channel can be identified, and at what confidence. Applying methods of Bayesian model comparison to distinguish between the predictions of several different migration models, we find that the data strongly favor a two-mode migration scenario combining planet-planet scattering and disk migration over a single-mode Kozai migration scenario. While we test only the predictions of particular Kozai and scattering migration models in this work, these methods may be used to test the predictions of any other spin-orbit misaligning mechanism.
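As a caricature of the model-comparison machinery (the actual migration models and their obliquity predictions are far richer than this), a Bayes factor between two parameter-free predictions for projected obliquities reduces to a ratio of likelihoods; all distributions and "data" below are placeholders.

```python
# Toy Bayesian model comparison between two fixed predictions for projected
# spin-orbit angles: "aligned" (Rayleigh-distributed small obliquities) vs.
# "isotropic" (taken flat over [0, 180] deg). Placeholder data and models,
# not those tested in the thesis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
obliquities = np.abs(rng.normal(0, 15, size=30))        # fake measurements, deg

evidence_aligned = np.prod(stats.rayleigh.pdf(obliquities, scale=10))
evidence_isotropic = (1 / 180) ** obliquities.size

print("Bayes factor (aligned vs. isotropic):", evidence_aligned / evidence_isotropic)
```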
We then present two studies addressing astrophysical false positives in Kepler data. The Kepler mission has identified thousands of transiting planet candidates, but only relatively few have yet been dynamically confirmed as bona fide planets, with only a handful more even conceivably amenable to future dynamical confirmation. As a result, the ability to draw detailed conclusions about the diversity of exoplanet systems from Kepler detections relies critically on understanding the probability that any individual candidate might be a false positive. We show that a typical a priori false positive probability for a well-vetted Kepler candidate is only about 5-10%, enabling confidence in demographic studies that treat candidates as true planets. We also present a detailed procedure that can be used to securely and efficiently validate any individual transit candidate using detailed information about the signal's shape as well as follow-up observations, if available.
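The validation logic can be summarized as a Bayesian comparison of scenarios: the probability that the candidate is a planet is the planet scenario's prior-weighted likelihood divided by the sum over all scenarios (planet plus the astrophysical false positive configurations). The priors and likelihoods below are placeholders, not numbers from the thesis.

```python
# Minimal sketch of scenario-based probabilistic validation (the general idea,
# not the thesis's actual code or numbers). Each scenario carries a prior
# (how often such configurations occur) and a likelihood (how well it
# reproduces the observed transit signal shape).

scenarios = {
    # name: (prior, likelihood of the observed signal shape) -- placeholders
    "planet":              (0.40, 0.80),
    "eclipsing_binary":    (0.05, 0.10),
    "background_binary":   (0.50, 0.05),
    "hierarchical_triple": (0.05, 0.05),
}

evidence = {name: prior * like for name, (prior, like) in scenarios.items()}
fpp = 1.0 - evidence["planet"] / sum(evidence.values())
print(f"false positive probability: {fpp:.1%}")
```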
Finally, we calculate an empirical, non-parametric estimate of the shape of the radius distribution of small planets with periods less than 90 days orbiting cool (less than 4000 K) dwarf stars in the Kepler catalog. This effort reveals several notable features of the distribution, in particular a maximum in the radius function around 1-1.25 Earth radii and a steep drop-off in the distribution at radii larger than 2 Earth radii. Even more importantly, the methods presented in this work can be applied to a broader subsample of Kepler targets to understand how the radius function of planets changes across different types of host stars.
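One simple way to sketch a non-parametric radius function like the one described is a completeness-weighted kernel density estimate over the detected radii; the toy data, weighting, and kernel choice below are illustrative assumptions, not the estimator used in the thesis.

```python
# Illustrative completeness-weighted KDE of a planet radius distribution
# (assumed approach with fake data; the thesis's estimator may differ).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
# Fake "detected" radii in Earth radii, just to make the sketch runnable.
radii = np.concatenate([rng.normal(1.1, 0.2, 200), rng.normal(2.0, 0.4, 80)])
radii = radii[radii > 0.5]

# Up-weight small planets, which are harder to detect (toy completeness model).
completeness = np.clip(radii / 2.5, 0.1, 1.0)
kde = gaussian_kde(radii, weights=1.0 / completeness)

grid = np.linspace(0.5, 4.0, 200)
density = kde(grid)
print("mode of the estimated radius function:", round(grid[density.argmax()], 2))
```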
Abstract:
The Notch signaling pathway enables neighboring cells to coordinate developmental fates in diverse processes such as angiogenesis, neuronal differentiation, and immune system development. Although key components and interactions in the Notch pathway are known, it remains unclear how they work together to determine a cell's signaling state, defined as its quantitative ability to send and receive signals using particular Notch receptors and ligands. Recent work suggests that several aspects of the system can lead to complex signaling behaviors: First, receptors and ligands interact in two distinct ways, inhibiting each other in the same cell (in cis) while productively interacting between cells (in trans) to signal. The ability of a cell to send or receive signals depends strongly on both types of interactions. Second, mammals have multiple types of receptors and ligands, which interact with different strengths, and are frequently co-expressed in natural systems. Third, the three mammalian Fringe proteins can modify receptor-ligand interaction strengths in distinct and ligand-specific ways. Consequently, cells can exhibit non-intuitive signaling states even with relatively few components.
In order to understand what signaling states occur in natural processes, and what types of signaling behaviors they enable, this thesis puts forward a quantitative and predictive model of how the Notch signaling state is determined by the expression levels of receptors, ligands, and Fringe proteins. To specify the parameters of the model, we constructed a set of cell lines that allow control of ligand and Fringe expression level, and readout of the resulting Notch activity. We subjected these cell lines to an assay to quantitatively assess the levels of Notch ligands and receptors on the surface of individual cells. We further analyzed the dependence of these interactions on the level and type of Fringe expression. We developed a mathematical modeling framework that uses these data to predict the signaling states of individual cells from component expression levels. These methods allow us to reconstitute and analyze a diverse set of Notch signaling configurations from the bottom up, and provide a comprehensive view of the signaling repertoire of this major signaling pathway.
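As a caricature of the cis/trans logic described above (not the quantitative framework developed in the thesis): cis interactions sequester receptor and ligand within the same cell, and the remaining free receptor and ligand determine how strongly the cell can receive and send trans signals.

```python
# Toy bookkeeping of mutual cis-inhibition plus trans-activation in Notch
# signaling. The thesis builds a parameterized quantitative model fit to data;
# this sketch only illustrates the qualitative sender/receiver logic.

def signaling_state(receptor, ligand, neighbor_receptor, neighbor_ligand):
    # Cis interaction removes receptor and ligand pairwise within each cell.
    cis = min(receptor, ligand)
    free_r, free_l = receptor - cis, ligand - cis
    cis_n = min(neighbor_receptor, neighbor_ligand)
    free_rn, free_ln = neighbor_receptor - cis_n, neighbor_ligand - cis_n
    # Trans interaction: receive with free receptor, send with free ligand.
    received = min(free_r, free_ln)
    sent = min(free_l, free_rn)
    return received, sent

# A high-ligand cell next to a high-receptor cell becomes a pure sender.
print(signaling_state(receptor=10, ligand=100,
                      neighbor_receptor=100, neighbor_ligand=10))   # (0, 90)
```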
Abstract:
Wide field-of-view (FOV) microscopy is of high importance to biological research and clinical diagnosis, where high-throughput screening of samples is needed. This thesis presents the development of several novel wide FOV imaging technologies and demonstrates their capabilities in longitudinal imaging of living organisms, at scales ranging from viral plaques to live cells and tissues.
The ePetri Dish is a wide FOV on-chip bright-field microscope. Here we applied an ePetri platform to plaque analysis of murine norovirus 1 (MNV-1). The ePetri offers the ability to dynamically track plaques at the level of individual cell death events over a wide FOV of 6 mm × 4 mm at 30 min intervals. A density-based clustering algorithm is used to analyze the spatial-temporal distribution of cell death events to identify plaques at their earliest stages. We also demonstrate the capabilities of the ePetri in viral titer counting and in dynamically monitoring plaque formation, growth, and the influence of antiviral drugs.
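The clustering step can be sketched with DBSCAN (an assumption; the abstract names only "a density-based clustering algorithm") applied to the (x, y, t) coordinates of individual cell death events, with time rescaled so that spatial and temporal proximity are comparable. All thresholds and data below are placeholders.

```python
# Sketch of grouping cell death events into nascent plaques with DBSCAN
# (algorithm choice and thresholds are assumptions; the data are simulated).
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)
# Fake events as (x_um, y_um, t_min): two tight bursts plus scattered deaths.
plaque1 = rng.normal([500, 400, 120], [30, 30, 60], size=(40, 3))
plaque2 = rng.normal([3000, 2500, 300], [30, 30, 60], size=(25, 3))
noise = rng.uniform([0, 0, 0], [6000, 4000, 600], size=(30, 3))
events = np.vstack([plaque1, plaque2, noise])

scaled = events.copy()
scaled[:, 2] *= 0.5          # placeholder space/time trade-off for the metric
labels = DBSCAN(eps=60, min_samples=5).fit_predict(scaled)
print("plaques found:", len(set(labels) - {-1}),
      "| unclustered events:", int((labels == -1).sum()))
```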
We developed another wide FOV imaging technique, the Talbot microscope, for the fluorescence imaging of live cells. The Talbot microscope takes advantage of the Talbot effect and can generate a focal spot array to scan fluorescence samples directly on-chip. It has a resolution of 1.2 μm and a FOV of ~13 mm². We further upgraded the Talbot microscope for long-term time-lapse fluorescence imaging of live cell cultures, and analyzed the cells’ dynamic response to an anticancer drug.
We present two wide FOV endoscopes for tissue imaging, named the AnCam and the PanCam. The AnCam is based on contact image sensor (CIS) technology, and can scan the whole anal canal within 10 seconds with a resolution of 89 μm, a maximum FOV of 100 mm × 120 mm, and a depth-of-field (DOF) of 0.65 mm. We also demonstrate the performance of the AnCam in whole anal canal imaging in both animal models and real patients. In addition, the PanCam is based on a smartphone platform integrated with a panoramic annular lens (PAL), and can capture a FOV of 18 mm × 120 mm in a single shot with a resolution of 100-140 μm. In this work we demonstrate the PanCam’s performance in imaging a stained tissue sample.
Abstract:
STEEL, the Caltech-created nonlinear large displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models using this software was difficult. SteelConverter was created as a means to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, fixity, etc., into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. The productivity of the researcher, as well as the level of confidence in the model being analyzed, is greatly increased.
It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferable. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.
In order to increase confidence in the use of STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons was done between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two software packages on every aspect of each analysis. However, these analyses also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in a software package more capable of conducting highly nonlinear analysis, called Perform. These analyses again showed very strong agreement between the two software packages in every aspect of each analysis up to the point of instability. However, due to some limitations in Perform, free vibration analyses for the three-story one-bay chevron brace frame, two-bay chevron brace frame, and twenty-story moment frame could not be conducted. With the current trend towards ultimate capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building’s behavior under these extreme load scenarios.
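For readers unfamiliar with the free vibration step, the generic textbook procedure recovers stiffness and damping from a decaying displacement record: the damped period gives the natural frequency, k = mω² gives the stiffness, and the logarithmic decrement of successive peaks gives the damping ratio. The sketch below applies that generic procedure to synthetic data; it is not the STEEL or ETABS workflow.

```python
# Generic single-degree-of-freedom free-vibration identification: natural
# frequency from peak spacing, damping ratio from the logarithmic decrement,
# stiffness from k = m * wn**2. Synthetic record; not STEEL/ETABS output.
import numpy as np

m = 1.0e5                       # assumed mass, kg
zeta_true, f_true = 0.02, 1.5   # "unknown" damping ratio and frequency, Hz
t = np.linspace(0, 20, 20001)
wn = 2 * np.pi * f_true
x = np.exp(-zeta_true * wn * t) * np.cos(wn * np.sqrt(1 - zeta_true**2) * t)

peaks = [i for i in range(1, len(x) - 1) if x[i] > x[i - 1] and x[i] > x[i + 1]]
period = np.mean(np.diff(t[peaks]))                       # damped period
delta = np.mean(np.log(x[peaks][:-1] / x[peaks][1:]))     # log decrement
zeta = delta / np.sqrt(4 * np.pi**2 + delta**2)
wn_est = 2 * np.pi / (period * np.sqrt(1 - zeta**2))
print(f"f = {wn_est / (2 * np.pi):.3f} Hz, damping = {zeta:.3%}, k = {m * wn_est**2:.3e} N/m")
```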
Following this, a final study was done on Hall’s U20 structure [1], where the structure was analyzed in all three software packages and their results compared. The pushover curves from each software package were compared, and the differences caused by variations in software implementation were explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, following the onset of inelastic behavior the analysis tool failed to converge. However, for the small number of time steps during which the ETABS analysis was converging, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in the material model resulted in a pushover curve that did not match that of STEEL exactly, particularly post-collapse. However, such problems could be alleviated by choosing a simpler material model.
Abstract:
Experiments are described using the random dot stereo patterns devised by Julesz, but substituting various colors and luminances for the usual black and white random squares. The ability to perceive the patterns in depth depends on a luminance difference between the colors used. If two colors have the same luminance, depth is not perceived, although each of the individual squares that make up the patterns is easily seen due to the color difference. This is true for any combination of different colors. If different colors are used for corresponding random squares between the left- and right-eye patterns, stereopsis is possible for all combinations of binocular rivalry in color, provided the luminance difference is large enough. Rivalry in luminance always precludes stereopsis, regardless of the colors involved.
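A minimal sketch of the stimulus construction described above (illustrative values only): a Julesz-style random-square pair in which a central region is shifted in one eye's image to create disparity, and the two square colors can be set to roughly equal or unequal luminance to probe the effect reported here.

```python
# Generating a Julesz-style random-dot stereogram pair in which the two square
# colors can be manipulated in hue and luminance (illustrative values, not the
# stimuli used in the experiments).
import numpy as np

rng = np.random.default_rng(4)
n, shift = 100, 4                       # grid size (squares) and disparity
pattern = rng.integers(0, 2, (n, n))    # binary random-square pattern

left = pattern.copy()
right = pattern.copy()
# Shift the central region in the right-eye image to create binocular disparity
# (wrap-around at the region edge is a simplification).
r, c = slice(n // 4, 3 * n // 4), slice(n // 4, 3 * n // 4)
right[r, c] = np.roll(pattern[r, c], shift, axis=1)

def colorize(p, on_rgb, off_rgb):
    """Map the binary pattern to an RGB image with the chosen square colors."""
    img = np.empty((*p.shape, 3), dtype=float)
    img[p == 1] = on_rgb
    img[p == 0] = off_rgb
    return img

# Red/green pair with roughly equal Rec. 709 luma: per the results above,
# depth should collapse even though the squares remain clearly visible.
left_img = colorize(left, on_rgb=(1.0, 0.0, 0.0), off_rgb=(0.0, 0.3, 0.0))
right_img = colorize(right, on_rgb=(1.0, 0.0, 0.0), off_rgb=(0.0, 0.3, 0.0))
print(left_img.shape, right_img.shape)
```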