10 results for directed polymers in random environment

in CaltechTHESIS


Relevance:

100.00%

Publisher:

Abstract:

Long linear polymers that are end-functionalized with associative groups were studied as additives to hydrocarbon fluids to mitigate the fire hazard associated with the presence of mist in a crash scenario. These polymers were molecularly designed to overcome both the shear-degradation of long polymer chains in turbulent flows, and the chain collapse induced by the random placement of associative groups along polymer backbones. Architectures of associative groups on the polymer chain ends that were tested included clusters of self-associative carboxyl groups and pairs of hetero-complementary associative units.

Linear polymers with clusters of discrete numbers of carboxyl groups on their chain ends were investigated first: an innovative synthetic strategy was devised to achieve unprecedented backbone lengths and precise control of the number of carboxyl groups on chain ends (N). We found that only a very narrow range of N allows the co-existence of sufficient end-association strength and polymer solubility in apolar media. A subsequent steady-flow rheological study of the solution behavior of these soluble polymers in apolar media revealed that the end-association of very long chains leads to the formation of flower-like micelles interconnected by bridging chains, which trap a significant fraction of polymer chains in looped structures that contribute little to mist control. The efficacy of very long 1,4-polybutadiene chains end-functionalized with clusters of four carboxyl groups as mist-control additives for jet fuel was then tested. In addition to being shear-resistant, the polymer was found capable of providing fire protection to jet fuel at concentrations as low as 0.3 wt%. We also found that this polymer has excellent solubility in jet fuel over a wide temperature range (-30 to +70°C) and negligible interference with dewatering of jet fuel, and that it does not cause an adverse increase in viscosity at concentrations where mist-control efficacy exists.

Four pairs of hetero-complementary associative end-groups of varying strengths were subsequently investigated, in the hope of achieving supramolecular aggregates with both mist-control ability and better utilization of polymer building blocks. A rheological study of solutions of the corresponding complementary associative polymer pairs in apolar media revealed the strength of complementary end-association required to form supramolecular aggregates capable of modulating the rheological properties of the solution.

Both self-associating and complementary associating polymers have therefore been found to resist shear degradation. The successful strategy of building soluble, end-associative polymers with either self-associative or complementary associative groups will guide the next generation of mist-control technology.

Relevance:

100.00%

Publisher:

Abstract:

Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits have led to robots on Mars, desktop computers and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast, all while remaining functional.

This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of “active self-assembly” of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology’s numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules.

One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved.

One might think that because a system is Turing-complete, capable of computing “anything,” it can perform any arbitrary task. But while such a system can simulate any digital computational problem, there are many behaviors that are not “computations” in a classical sense and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface.

Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors “energetically incomplete” programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time.

As we will demonstrate and prove, a sufficiently expressive implementation of an “active” molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not “energetically incomplete.” But the programmable system also needs to have sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive.

Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly.

We show that simple primitives such as insertion and deletion are able to generate complex and interesting results such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active self-assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a string-generating machine strictly more powerful than regular languages and at most as powerful as context-free grammars. This is a great advance in the theory of active self-assembly, as prior models were either entirely theoretical or only implementable in the context of macro-scale robotics.

We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. The building of a line in logarithmic time is a first step toward building a shape in logarithmic time. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion. We show via spectrofluorimetry and gel electrophoresis experiments that monomer molecules are converted into the polymer in logarithmic time. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism. This shows the growth of a population of polymers in logarithmic time. We characterize the DNA insertion mechanism that we utilize in Chapter 4. We experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude by programming the sequences of DNA that initiate the reaction.
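The doubling argument behind logarithmic-time growth can be made concrete with a short sketch (Python, purely illustrative; it models only the counting argument, not the thesis's chemical kinetics): if every insertion event exposes two new insertion sites, the number of open sites doubles each synchronous round, so a polymer of length roughly N is reached in O(log N) rounds rather than the O(N) rounds of end-by-end growth.

```python
def rounds_to_reach(target_length: int, initial_sites: int = 1) -> int:
    """Count synchronous insertion rounds until the polymer reaches target_length."""
    length = initial_sites + 1      # start with initial_sites + 1 monomers, one site per adjacent pair
    sites = initial_sites
    rounds = 0
    while length < target_length:
        length += sites             # every open site accepts one inserted monomer this round
        sites *= 2                  # each insertion exposes two new insertion sites
        rounds += 1
    return rounds

if __name__ == "__main__":
    for n in (10, 100, 1000, 10_000):
        print(n, rounds_to_reach(n))   # grows roughly like log2(n)
```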

In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.

Relevance:

100.00%

Publisher:

Abstract:

Escherichia coli is one of the best-studied living organisms and a model system for many biophysical investigations. Despite countless discoveries about the details of its physiology, we still lack a holistic understanding of how these bacteria react to changes in their environment. One of the most important examples is their response to osmotic shock. One of the mechanistic elements protecting cell integrity upon exposure to sudden changes of osmolarity is the presence of mechanosensitive channels in the cell membrane. These channels are believed to act as tension-release valves protecting the inner membrane from rupturing. This thesis presents an experimental study of various aspects of mechanosensation in bacteria. We examine cell survival after osmotic shock and how the number of MscL (Mechanosensitive channel of Large conductance) channels expressed in a cell influences its physiology. We developed an assay that allows real-time monitoring of the rate of the osmotic challenge and direct observation of cell morphology during and after the exposure to the osmolarity change. The work described in this thesis introduces tools that can be used to determine quantitatively, at the single-cell level, the number of expressed proteins (in this case MscL channels) as a function of, e.g., growth conditions. The improvement in our quantitative description of mechanosensation in bacteria allows us to address many so-far-unsolved problems, such as the minimal number of channels needed for survival, and can begin to paint a clearer picture of why there are so many distinct types of mechanosensitive channels.

Relevance:

100.00%

Publisher:

Abstract:

This thesis belongs to the growing field of economic networks. In particular, we develop three essays in which we study the problems of bargaining, discrete choice representation, and pricing in the context of networked markets. Despite analyzing very different problems, the three essays share the common feature of using a network representation to describe the market of interest.

In Chapter 1 we present an analysis of bargaining in networked markets. We make two contributions. First, we characterize market equilibria in a bargaining model and find that players' equilibrium payoffs coincide with their degree of centrality in the network, as measured by Bonacich's centrality measure. This characterization allows us to map, in a simple way, network structures into market equilibrium outcomes, so that payoff dispersion in networked markets is driven by players' network positions. Second, we show that the market equilibrium for our model converges to the so-called eigenvector centrality measure. We show that the economic condition for reaching convergence is that the players' discount factor goes to one. In particular, we show how the discount factor, the matching technology, and the network structure interact so that eigenvector centrality emerges as the limiting case of our market equilibrium.
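For reference, a commonly used form of Bonacich's centrality measure is the following (an assumed textbook parameterization; the thesis's exact normalization may differ):

```latex
% Bonacich centrality (assumed standard form). A is the adjacency matrix,
% \alpha a scaling constant, \beta a decay parameter with
% |\beta| < 1/\lambda_{\max}(A), and \mathbf{1} the all-ones vector.
c(\alpha, \beta) = \alpha \,(I - \beta A)^{-1} A \,\mathbf{1}
```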

We point out that the eigenvector approach is a way of finding the most central or relevant players in terms of the “global” structure of the network, while paying less attention to patterns that are more “local”. Mathematically, eigenvector centrality captures the relevance of players in the bargaining process using the eigenvector associated with the largest eigenvalue of the adjacency matrix of a given network. Thus our result may be viewed as an economic justification of the eigenvector approach in the context of bargaining in networked markets.
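As a concrete illustration of the eigenvector approach, the following minimal Python sketch (standard eigenvector centrality via power iteration, not the thesis's bargaining model) extracts the eigenvector associated with the largest eigenvalue of the adjacency matrix:

```python
import numpy as np

def eigenvector_centrality(A: np.ndarray, iters: int = 1000, tol: float = 1e-10) -> np.ndarray:
    """Power iteration on the adjacency matrix A; returns the leading eigenvector."""
    x = np.ones(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x_new = A @ x
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: a 4-node line graph 0-1-2-3; the interior nodes come out most central.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(eigenvector_centrality(A))
```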

As an application, we analyze the special case of seller-buyer networks, showing how our framework may be useful for analyzing price dispersion as a function of sellers' and buyers' network positions.

Finally, in Chapter 3 we study the problem of price competition and free entry in networked markets subject to congestion effects. In many environments, such as communication networks in which network flows are allocated, or transportation networks in which traffic is directed through the underlying road architecture, congestion plays an important role. In particular, we consider a network with multiple origins and a common destination node, where each link is owned by a firm that sets prices in order to maximize profits, whereas users want to minimize the total cost they face, which is given by the congestion cost plus the prices set by firms. In this environment, we introduce the notion of Markovian traffic equilibrium to establish the existence and uniqueness of a pure-strategy price equilibrium, without assuming that the demand functions are concave or imposing particular functional forms for the latency functions. We derive explicit conditions that guarantee existence and uniqueness of equilibria. Given this existence and uniqueness result, we apply our framework to study entry decisions and welfare, and establish that in congested markets with free entry, the number of firms exceeds the social optimum.
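To make the user-side problem concrete, here is a toy two-link example in Python (an illustration only, not the thesis's Markovian traffic equilibrium; it assumes affine latency functions and fixed prices): a unit mass of users splits across two parallel links so that the generalized cost, latency plus price, is equalized on the links that carry flow.

```python
from scipy.optimize import brentq

def equilibrium_split(a1, b1, p1, a2, b2, p2):
    """Affine latencies l_i(x) = a_i + b_i * x; returns the equilibrium flow on link 1."""
    cost_gap = lambda x1: (a1 + b1 * x1 + p1) - (a2 + b2 * (1 - x1) + p2)
    # If one link is cheaper even when it carries all the flow, users do not split.
    if cost_gap(0.0) >= 0:
        return 0.0
    if cost_gap(1.0) <= 0:
        return 1.0
    return brentq(cost_gap, 0.0, 1.0)   # interior split equalizes latency + price

# Identical latency functions; link 1 charges a higher price, so it carries less flow.
print(equilibrium_split(a1=1.0, b1=2.0, p1=0.5, a2=1.0, b2=2.0, p2=0.1))  # -> 0.4
```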

Relevance:

100.00%

Publisher:

Abstract:

Despite the complexity of biological networks, we find that certain common architectures govern network structures. These architectures impose fundamental constraints on system performance and create tradeoffs that the system must balance in the face of uncertainty in the environment. This means that while a system may be optimized for a specific function through evolution, the optimal achievable state must respect these constraints.

One such constraining architecture is autocatalysis, as seen in many biological networks including glycolysis and ribosomal protein synthesis. Using a minimal model, we show that ATP autocatalysis in glycolysis imposes stability and performance constraints and that the experimentally well-studied glycolytic oscillations are in fact a consequence of a tradeoff between error minimization and stability. We also show that additional complexity in the network results in increased robustness. Ribosome synthesis is also autocatalytic, since ribosomes must be used to make more ribosomal proteins. When ribosomes have higher protein content, the autocatalysis is stronger. We show that this autocatalysis destabilizes the system, slows down its response, and constrains the system's performance.

On a larger scale, transcriptional regulation of whole organisms also follows architectural constraints, and this can be seen in the differences between bacterial and yeast transcription networks. We show that the degree distribution of the bacterial transcription network follows a power law, while that of the yeast network follows an exponential distribution. We then explore previously proposed evolutionary models and show that neither the preferential-linking model nor the duplication-divergence model of network evolution generates the power-law, hierarchical structure found in bacteria. However, in real biological systems, the generation of new nodes occurs through both duplication and horizontal gene transfer, and we show that a biologically reasonable combination of the two mechanisms generates the desired network.
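As a rough illustration of how such a combined growth mechanism can be simulated (a hypothetical toy in Python, not the thesis's fitted model; the mixing probability p_dup and link-retention probability q_keep are assumptions), one can grow a network by mixing duplication-divergence steps with random-attachment steps standing in for horizontal gene transfer, and then inspect the resulting degree distribution:

```python
import random
from collections import Counter

def grow(n_nodes: int, p_dup: float = 0.5, q_keep: float = 0.5, m_random: int = 1, seed: int = 0):
    """Grow an undirected network by duplication-divergence mixed with random attachment."""
    random.seed(seed)
    adj = {0: {1}, 1: {0}}                          # seed graph: a single edge
    for new in range(2, n_nodes):
        adj[new] = set()
        if random.random() < p_dup:                 # duplication-divergence step
            template = random.choice(list(adj.keys() - {new}))
            for nbr in adj[template]:
                if random.random() < q_keep:        # keep each copied link with prob q_keep
                    adj[new].add(nbr)
                    adj[nbr].add(new)
        else:                                       # random attachment (horizontal-transfer stand-in)
            for nbr in random.sample(sorted(adj.keys() - {new}), k=m_random):
                adj[new].add(nbr)
                adj[nbr].add(new)
    return adj

adj = grow(2000)
degree_counts = Counter(len(nbrs) for nbrs in adj.values())
for k in sorted(degree_counts):
    print(k, degree_counts[k])                      # degree histogram for visual inspection
```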

Relevance:

100.00%

Publisher:

Abstract:

The compound eye of Drosophila melanogaster begins to differentiate during the late third larval instar in the eye-antennal imaginal disc. A wave of morphogenesis crosses the disc from posterior to anterior, leaving behind precisely patterned clusters of photoreceptor cells and accessory cells that will constitute the adult ommatidia of the retina. By the analysis of genetically mosaic eyes, it appears that any cell in the eye disc can adopt the characteristics of any one of the different cell types found in the mature eye, including photoreceptor cells and non-neuronal accessory cells such as cone cells. Therefore, cells within the prospective retinal epithelium presumably assume different fates via information present in the environment. The sevenless^+ (sev^+) gene appears to play a role in the expression of one of the possible fates, since the mutant phenotype is the lack of one of the pattern elements, namely photoreceptor cell R7. The sev^+ gene product had been shown to be required during development of the eye, and had also been shown in genetic mosaics to be autonomous to presumptive R7. As a means of better understanding the pathway instructing the differentiation of R7, the gene and its protein product were characterized.

The sev^+ gene was cloned by P-element transposon tagging and was found to encode an 8.2 kb transcript expressed in developing eye discs and adult heads. By raising monoclonal antibodies (MAbs) against a sev^+-β-galactosidase fusion protein, the expression of the protein in the eye disc was localized by immuno-electron microscopy. The protein localizes to the apical cell membranes and microvilli of cells in the eye disc epithelium. It appears during development at a time coincident with the initial formation of clusters, and it is present in all the developing photoreceptors and accessory cone cells prior to the overt differentiation of R7. This result is consistent with the pluripotency of cells in the eye disc. Its localization in the membranes suggests that it may receive information directing the development of R7. Its localization in the apical membranes and microvilli, however, is away from the bulk of the cell contacts, which have been cited as likely regions for information presentation and processing. Biochemical characterization of the sev^+ protein will be necessary to describe its role in development further.

Other mutations in Drosophila have eye phenotypes. These were analyzed to find which ones affected the initial patterning of cells in the eye disc, in order to identify other genes, like sev, whose gene products may be involved in generating the pattern. The adult eye phenotypes ranged from severe reduction of the eye, to variable numbers of photoreceptor cells per ommatidium, to subtle defects in the organization of the supporting cells. Developing eye discs from the different strains were screened using a panel of MAbs that highlight various developmental stages. Two of these identified matrix elements in and anterior to the furrow, while others, like the anti-sev MAb, identified the developing ommatidia themselves. Mutant phenotypes were shown to appear at many stages of development. Some mutations seem to affect the precursor cells; others, the setting up of the pattern; and still others, the maintenance of the pattern. Thus, additional genes have now been identified that may function to support the development of a complex pattern.

Relevance:

100.00%

Publisher:

Abstract:

My thesis studies how people pay attention to other people and the environment. How does the brain figure out what is important and what are the neural mechanisms underlying attention? What is special about salient social cues compared to salient non-social cues? In Chapter I, I review social cues that attract attention, with an emphasis on the neurobiology of these social cues. I also review neurological and psychiatric links: the relationship between saliency, the amygdala and autism.

The first empirical chapter then begins by noting that people constantly move in the environment. In Chapter II, I study the spatial cues that attract attention during locomotion using a cued speeded discrimination task. I found that when the motion was expansive, attention was attracted towards the singular point of the optic flow (the focus of expansion, FOE) in a sustained fashion. The more ecologically valid the motion features became (e.g., temporal expansion of each object, spatial depth structure implied by distribution of the size of the objects), the stronger the attentional effects.

However, compared to inanimate objects and cues, people preferentially attend to animals and faces, a process in which the amygdala is thought to play an important role. To directly compare social cues and non-social cues in the same experiment and investigate the neural structures processing social cues, in Chapter III, I employ a change detection task and test four rare patients with bilateral amygdala lesions. All four amygdala patients showed a normal pattern of reliably faster and more accurate detection of animate stimuli, suggesting that advantageous processing of social cues can be preserved even without the amygdala, a key structure of the “social brain”.

People not only attend to faces, but also pay attention to others’ facial emotions and analyze faces in great detail. Humans have a dedicated system for processing faces and the amygdala has long been associated with a key role in recognizing facial emotions. In Chapter IV, I study the neural mechanisms of emotion perception and find that single neurons in the human amygdala are selective for subjective judgment of others’ emotions.

Lastly, people typically pay special attention to faces and people, but people with autism spectrum disorders (ASD) might not. To further study social attention and explore possible deficits of social attention in autism, in Chapter V, I employ a visual search task and show that people with ASD have reduced attention, especially social attention, to target-congruent objects in the search array. This deficit cannot be explained by low-level visual properties of the stimuli and is independent of the amygdala, but it is dependent on task demands.

Overall, through visual psychophysics with concurrent eye-tracking, my thesis found and analyzed socially salient cues and compared social vs. non-social cues and healthy vs. clinical populations. Neural mechanisms underlying social saliency were elucidated through electrophysiology and lesion studies. I finally propose further research questions based on the findings in my thesis and introduce my follow-up studies and preliminary results beyond the scope of this thesis in the very last section, Future Directions.

Relevance:

100.00%

Publisher:

Abstract:

These studies explore how, where, and when variables critical to decision-making are represented in the brain. In order to produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that will select an action from those available. When choosing amongst alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and that these values are then compared in an abstract “value space” in order to produce a decision. Despite much progress, in particular regarding the pinpointing of ventromedial prefrontal cortex (vmPFC) as a region that encodes value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner that is independent of stimulus type. This confirms that value is represented abstractly, a key tenet of value-based decision-making. However, I show that stimulus-dependent value representations are also present in the brain during decision-making, and I suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.

More broadly speaking, there is both neural and behavioral evidence that two distinct control systems are at work during action selection: the “goal-directed” system, which selects actions based on an internal model of the environment, and the “habitual” system, which generates responses based on antecedent stimuli only. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should utilize stimulus and action information only, while goal-directed behavior requires that outcomes as well as stimuli and actions be processed. In Chapter 3, I test whether areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.
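The differing informational requirements can be sketched in code (a textbook-style contrast in Python, assumed rather than taken from the thesis): a model-free learner, often identified with the habitual system, needs only the current stimulus, action, reward, and next stimulus, whereas a model-based learner, often identified with the goal-directed system, must carry an explicit model of transitions and outcomes.

```python
import numpy as np

n_states, n_actions, gamma, alpha = 4, 2, 0.9, 0.1

# Model-free ("habitual"): temporal-difference update of a cached Q-table,
# driven only by (state, action, reward, next state).
Q = np.zeros((n_states, n_actions))
def td_update(s: int, a: int, r: float, s_next: int) -> None:
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# Model-based ("goal-directed"): action values recomputed from an explicit
# model of transitions T(s'|s,a) and outcomes R(s,a) by value iteration.
T = np.full((n_states, n_actions, n_states), 1.0 / n_states)
R = np.zeros((n_states, n_actions))
def model_based_Q(n_sweeps: int = 50) -> np.ndarray:
    Qmb = np.zeros((n_states, n_actions))
    for _ in range(n_sweeps):
        V = Qmb.max(axis=1)          # state values under the current estimate
        Qmb = R + gamma * T @ V      # back up through the learned model
    return Qmb
```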

The question of whether one or both of these neural systems drives Pavlovian conditioning is less well studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple but non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), it was found that subjects’ reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable based on a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment can extend even to the most basic learning processes.

Knowledge of the state of hidden variables in an environment is required for optimal inference regarding the abstract decision structure of a given environment and can therefore be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires the generation and manipulation of an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model-fitting and comparison process pointed to the use of “belief thresholding”. This implies that subjects tended to eliminate low-probability hypotheses regarding the state of the environment from their internal model and ceased to update the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference, consistent with a serial hypothesis-testing strategy.
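A minimal sketch of belief thresholding of this kind, in Python (the exact thresholding rule, threshold value, and task structure in the thesis may differ; this only illustrates the idea): maintain a posterior over candidate hidden states, and once a hypothesis falls below a probability threshold, eliminate it and stop updating it.

```python
import numpy as np

def thresholded_bayes(prior, likelihoods, observations, threshold=0.05):
    """prior: (K,); likelihoods: (K, n_obs) giving P(obs | state); observations: obs indices."""
    belief = np.asarray(prior, dtype=float)
    active = np.ones(len(belief), dtype=bool)       # hypotheses still under consideration
    for obs in observations:
        belief[active] *= likelihoods[active, obs]  # Bayesian update on active hypotheses only
        belief[~active] = 0.0                       # eliminated hypotheses are no longer updated
        belief /= belief.sum()
        active &= belief >= threshold               # prune low-probability hypotheses
    return belief

prior = np.ones(3) / 3
lik = np.array([[0.8, 0.2],
                [0.5, 0.5],
                [0.2, 0.8]])
print(thresholded_bayes(prior, lik, observations=[0, 0, 0]))  # hypothesis 2 gets pruned
```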

Relevance:

100.00%

Publisher:

Abstract:

We study some aspects of conformal field theory, wormhole physics and two-dimensional random surfaces. Despite being rather different, these topics serve as examples of the issues that are involved, both at high and low energy scales, in formulating a quantum theory of gravity. In conformal field theory we show that fusion and braiding properties can be used to determine the operator product coefficients of the non-diagonal Wess-Zumino-Witten models. In wormhole physics we show how Coleman's proposed probability distribution would result in wormholes determining the value of θQCD. We attempt such a calculation and find the most probable value of θQCD to be π. This hints at a potential conflict with nature. In random surfaces we explore the behaviour of conformal field theories coupled to gravity and calculate some partition functions and correlation functions. Our results shed some light on the transition that is believed to occur when the central charge of the matter theory becomes larger than one.

Relevance:

100.00%

Publisher:

Abstract:

This work deals with two related areas: processing of visual information in the central nervous system, and the application of computer systems to research in neurophysiology.

Certain classes of interneurons in the brain and optic lobes of the blowfly Calliphora phaenicia were previously shown to be sensitive to the direction of motion of visual stimuli. These units were identified by visual field, preferred direction of motion, and the anatomical location from which they were recorded. The present work addresses two questions: (1) is there interaction between pairs of these units, and (2) if such relationships can be found, what is their nature? To answer these questions, it is essential to record from two or more units simultaneously, and to use more than a single recording electrode if recording points are to be chosen independently. Accordingly, such techniques were developed and are described.

One must also have practical, convenient means for analyzing the large volumes of data so obtained. It is shown that use of an appropriately designed computer system is a profitable approach to this problem. Both hardware and software requirements for a suitable system are discussed and an approach to computer-aided data analysis is developed. A description is given of a collection of application programs developed for the analysis of neurophysiological data and operated in the environment of, and with support from, an appropriate computer system. In particular, techniques developed for the classification of multiple units recorded on the same electrode are illustrated, as are methods for convenient graphical manipulation of data via a computer-driven display.
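For a sense of what classifying multiple units on one electrode involves, here is a brief modern sketch in Python (purely illustrative and anachronistic; it is not the system described in this thesis): detect threshold crossings in the extracellular signal, cut out fixed-length waveform snippets, and cluster the snippets so that each cluster is treated as one putative unit.

```python
import numpy as np

def detect_spikes(signal: np.ndarray, threshold: float, window: int = 32) -> np.ndarray:
    """Return fixed-length waveform snippets starting at rising threshold crossings."""
    above = np.where(signal[1:] >= threshold)[0] + 1
    crossings = above[signal[above - 1] < threshold]          # rising edges only
    snippets = [signal[i:i + window] for i in crossings if i + window <= len(signal)]
    return np.array(snippets)

def cluster_spikes(snippets: np.ndarray, n_units: int = 2, iters: int = 50, seed: int = 0) -> np.ndarray:
    """Crude k-means on raw waveforms; each cluster is treated as one putative unit."""
    rng = np.random.default_rng(seed)
    centers = snippets[rng.choice(len(snippets), n_units, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((snippets[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([snippets[labels == k].mean(0) if np.any(labels == k) else centers[k]
                            for k in range(n_units)])
    return labels
```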

By means of the multiple-electrode techniques and the computer-aided data acquisition and analysis system, the path followed by one of the motion-detection units was traced from one optic lobe through the brain and into the opposite lobe. It is further shown that this unit and its mirror image in the opposite lobe have a mutually inhibitory relationship, and this relationship is investigated. The existence of interaction between other pairs of units is also shown. For pairs of units responding to motion in the same direction, the relationship is excitatory; for those responding to motion in opposed directions, it is inhibitory.

Experience gained from use of the computer system is discussed and a critical review of the current system is given. The most useful features of the system were found to be its fast response, the ability to move from one analysis technique to another rapidly and conveniently, and the interactive nature of the display system. The shortcomings of the system were problems in real-time use and the programming barrier: the fact that building new analysis techniques requires a high degree of programming knowledge and skill. It is concluded that computer systems of the kind discussed will play an increasingly important role in studies of the central nervous system.