930 results for ENTANGLEMENT MANIPULATION
Abstract:
The development of Ring Opening Metathesis Polymerization has allowed the world of block copolymers to expand into brush block copolymers. Brush block copolymers consist of a polymer backbone with polymeric side chains, forcing the backbone to hold a stretched conformation and giving it a worm-like shape. These brush block copolymers have a number of advantages over traditional block copolymers, including faster self-assembly behavior, larger domain sizes, and much less entanglement. This makes them ideal candidates in the development of a bottom-up approach to forming photonic crystals. Photonic crystals are periodic nanostructures that transmit and reflect only certain wavelengths of light, forming a band gap. These are used in a number of coatings and other optical applications. One- and two-dimensional photonic crystals are commercially available, though they are often expensive and difficult to manufacture. Previous work has focused on the creation of one-dimensional photonic crystals from brush block copolymers. In this thesis, I will focus on the synthesis and characterization of asymmetric brush block copolymers for self-assembly into two- and three-dimensional photonic crystals. Three series of brush block copolymers were made and characterized by Gel Permeation Chromatography and Nuclear Magnetic Resonance spectroscopy. They were then made into films through compressive thermal annealing and characterized by UV-Vis Spectroscopy and Scanning Electron Microscopy. Evidence of non-lamellar structures was seen, indicating the first reported creation of two- or three-dimensional photonic crystals from brush block copolymers.
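As a rough guide to how such lamellar films act as wavelength-selective mirrors, the first-order reflection peak of a one-dimensional photonic crystal at normal incidence follows the Bragg condition. The sketch below uses hypothetical refractive indices and domain thicknesses chosen only for illustration, not values from the thesis:

```python
# First-order Bragg condition for a lamellar 1D photonic crystal at
# normal incidence: lambda_max = 2 * (n_A * d_A + n_B * d_B)
def bragg_peak_nm(n_a, d_a_nm, n_b, d_b_nm):
    """Peak reflected wavelength (nm) of an A/B lamellar stack."""
    return 2.0 * (n_a * d_a_nm + n_b * d_b_nm)

# Hypothetical example: polystyrene-like (n ~ 1.59) and PDMS-like
# (n ~ 1.43) domains of 80 nm each -- domain sizes of the order brush
# block copolymers can reach -- give a reflection peak in the visible.
peak = bragg_peak_nm(1.59, 80.0, 1.43, 80.0)  # ~483 nm
```

Larger domains shift the peak to longer wavelengths, which is why the large domain sizes of brush block copolymers are attractive for visible and infrared photonic crystals.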
Abstract:
DNA damage is extremely detrimental to the cell and must be repaired to protect the genome. DNA is capable of conducting charge through the overlapping π-orbitals of stacked bases; this phenomenon is extremely sensitive to the integrity of the π-stack, as perturbations attenuate DNA charge transport (CT). Based on studies of the E. coli base excision repair (BER) proteins EndoIII and MutY, it has recently been proposed that redox-active proteins containing metal clusters can utilize DNA CT to signal one another and locate sites of DNA damage.
To expand our repertoire of proteins that utilize DNA-mediated signaling, we measured the DNA-bound redox potential of the nucleotide excision repair (NER) helicase XPD from Sulfolobus acidocaldarius. A midpoint potential of 82 mV versus NHE was observed, resembling that of the previously reported BER proteins. The redox signal increases in intensity with ATP hydrolysis only in the WT protein and in mutants that maintain ATPase activity, not in ATPase-deficient mutants. The signal increase correlates directly with ATPase activity, suggesting that DNA-mediated signaling may play a general role in communication among repair proteins. Several mutations in human XPD that lead to XP-related diseases have been identified; using SaXPD, we explored how these mutations, which are conserved in the thermophile, affect protein electrochemistry.
To further understand the electrochemical signaling of XPD, we studied the yeast S. cerevisiae Rad3 protein. ScRad3 mutants were incubated on a DNA-modified electrode and exhibited a redox potential similar to that of SaXPD. We developed a haploid strain of S. cerevisiae that allowed for easy manipulation of Rad3. In a survival assay, the ATPase- and helicase-deficient mutants showed little survival, while the two disease-related mutants exhibited survival similar to WT. When both a WT and a G47R (ATPase/helicase-deficient) strain were challenged with different DNA-damaging agents, the two exhibited comparable survival in the presence of hydroxyurea, while with methyl methanesulfonate and camptothecin the G47R strain exhibited a significant change in growth, suggesting that Rad3 is involved in repairing damage beyond traditional NER substrates. Together, these data expand our understanding of redox-active proteins at the interface of DNA repair.
Abstract:
We present a universal analyzer for the three-particle Greenberger-Horne-Zeilinger (GHZ) states with quantum nondemolition parity detectors and linear-optics elements. In our scheme, all of the three-photon GHZ states can be discriminated with nearly unit probability in the regime of weak nonlinearity that is feasible with present experimental technology. We also show that our scheme can be easily extended to the analysis of multi-particle GHZ states.
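For reference, the eight three-photon GHZ basis states that such an analyzer must discriminate can be written (in one standard notation; the labeling here is ours, not necessarily the paper's) as:

```latex
|\Phi^{\pm}_{0}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|000\rangle \pm |111\rangle\bigr), \qquad
|\Phi^{\pm}_{1}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|001\rangle \pm |110\rangle\bigr),

|\Phi^{\pm}_{2}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|010\rangle \pm |101\rangle\bigr), \qquad
|\Phi^{\pm}_{3}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|100\rangle \pm |011\rangle\bigr)
```

A complete analyzer assigns a distinct measurement outcome to each of these eight mutually orthogonal states.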
Abstract:
A feasible scheme for constructing quantum logic gates on the basis of quantum switches in cavity QED is proposed. It is shown that the light field fed into the cavity by the passage of an atom in a certain state can be used to manipulate the conditional quantum logic gate. In our scheme, the quantum information is encoded in the states of Rydberg atoms, and the cavity mode is used neither as a logical qubit nor as a communicating "bus"; thus, the effect of atomic spontaneous emission can be neglected and the strict requirements on the cavity can be relaxed.
Abstract:
The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are here studied in detail up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps -- but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and for this process can be carried out with the aid of the Reduce algebra manipulation computer program.
The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at the threshold for annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.
Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.
A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra manipulation language.
The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods which were used to evaluate them -- primarily dispersion techniques -- are briefly discussed.
Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.
Abstract:
The ability to regulate gene expression is of central importance for the adaptability of living organisms to changes in their internal and external environment. At the transcriptional level, binding of transcription factors (TFs) in the vicinity of promoters can modulate the rate at which transcripts are produced, and as such plays an important role in gene regulation. TFs with regulatory action at multiple promoters are the rule rather than the exception, with examples ranging from TFs like the cAMP receptor protein (CRP) in E. coli, which regulates hundreds of different genes, to situations involving multiple copies of the same gene, such as on plasmids or viral DNA. When the number of TFs greatly exceeds the number of binding sites, TF binding to each promoter can be regarded as independent. However, when the number of TF molecules is comparable to the number of binding sites, TF titration will result in coupling ("entanglement") between transcription of different genes. The last few decades have seen rapid advances in our ability to quantitatively measure such effects, which calls for biophysical models to explain these data. Here we develop a statistical mechanical model which takes the TF titration effect into account and use it to predict both the level of gene expression and the resulting correlation in transcription rates for a general set of promoters. To test these predictions experimentally, we create genetic constructs with known TF copy number, binding site affinities, and gene copy number, hence avoiding the need for free fit parameters. Our results clearly demonstrate the TF titration effect and show that the statistical mechanical model can accurately predict the fold change in gene expression for the studied cases. We also generalize these experimental efforts to cover systems with multiple different genes, using the method of mRNA fluorescence in situ hybridization (FISH).
Interestingly, we can use the TF titration effect as a tool to measure the plasmid copy number at different points in the cell cycle, as well as the plasmid copy number variance. Finally, we investigate the strategies of transcriptional regulation used in a real organism by analyzing the thousands of known regulatory interactions in E. coli. We introduce a "random promoter architecture model" to identify overrepresented regulatory strategies, such as TF pairs that coregulate the same genes more frequently than would be expected by chance, indicating a related biological function. Furthermore, we investigate whether promoter architecture has a systematic effect on gene expression by linking the regulatory data of E. coli to genome-wide expression censuses.
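A minimal sketch of the titration effect in the statistical mechanical spirit described above. The partition function and parameter values here are illustrative toys, not the thesis's actual model: a fixed pool of TFs partitions among identical binding sites, and per-site occupancy falls once the sites begin to titrate the TF pool:

```python
from math import comb, factorial

def mean_occupancy(n_tf, n_sites, w):
    """Mean fraction of sites occupied when n_tf TFs partition among
    n_sites identical specific binding sites; w is the statistical
    weight of a bound TF relative to the unbound reservoir.
    Z = sum_k C(n_sites, k) * C(n_tf, k) * k! * w**k  (k TFs bound)."""
    kmax = min(n_tf, n_sites)
    weights = [comb(n_sites, k) * comb(n_tf, k) * factorial(k) * w**k
               for k in range(kmax + 1)]
    z = sum(weights)
    mean_k = sum(k * wk for k, wk in enumerate(weights)) / z
    return mean_k / n_sites

# Titration: with strong binding (w = 50, arbitrary), per-site occupancy
# drops as the number of competing sites approaches the TF copy number.
few_sites  = mean_occupancy(n_tf=10, n_sites=2,  w=50.0)   # nearly full
many_sites = mean_occupancy(n_tf=10, n_sites=40, w=50.0)   # TF-limited
```

This is the qualitative coupling mechanism: adding gene copies (e.g. on a plasmid) lowers occupancy at every promoter sharing the same TF pool.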
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and higher model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
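The following toy sketch illustrates the EC2-style greedy selection step in a simplified noiseless form. All names and structures here are hypothetical, not the BROAD implementation: hypotheses are grouped into theory classes, edges join hypotheses from different classes, and a test outcome "cuts" every edge with an endpoint inconsistent with that outcome; the greedy rule picks the test with the largest expected cut weight:

```python
from itertools import combinations

def ec2_greedy_pick(tests, hypotheses, prior):
    """Pick the test with the largest expected weight of cut edges.
    hypotheses: name -> (theory_class, {test: predicted_answer});
    edges join hypothesis pairs from different classes,
    edge weight = product of the endpoints' priors."""
    names = list(hypotheses)
    edges = [(a, b) for a, b in combinations(names, 2)
             if hypotheses[a][0] != hypotheses[b][0]]
    best, best_gain = None, -1.0
    for t in tests:
        gain = 0.0
        for ans in {hypotheses[h][1][t] for h in names}:
            # prior probability of observing this answer
            p_ans = sum(prior[h] for h in names
                        if hypotheses[h][1][t] == ans)
            # an edge is cut if either endpoint disagrees with the answer
            cut = sum(prior[a] * prior[b] for a, b in edges
                      if hypotheses[a][1][t] != ans
                      or hypotheses[b][1][t] != ans)
            gain += p_ans * cut
        if gain > best_gain:
            best, best_gain = t, gain
    return best

# Toy example: two candidate theories ("EV", "PT"); only test t1
# distinguishes their predictions, so the criterion should pick it.
toy = {"ev_fit": ("EV", {"t1": "A", "t2": "A"}),
       "pt_fit": ("PT", {"t1": "B", "t2": "A"})}
picked = ec2_greedy_pick(["t2", "t1"], toy, {"ev_fit": 0.5, "pt_fit": 0.5})
```

The adaptive-submodularity guarantee applies to the expected-cut objective above; the full method additionally handles noisy responses and posterior reweighting between rounds.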
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out both because it is infeasible in practice and because we find no signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models -- quasi-hyperbolic (α, β) discounting and fixed cost discounting -- and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting; most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
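A small illustration of why the shape of the discount function matters for choice consistency: hyperbolic discounting produces preference reversals as both options recede into the future, while exponential discounting cannot. Amounts and parameters below are arbitrary:

```python
def hyperbolic(t, k=1.0):
    """Hyperbolic discount factor, D(t) = 1 / (1 + k*t)."""
    return 1.0 / (1.0 + k * t)

def exponential(t, delta=0.7):
    """Exponential discount factor, D(t) = delta**t."""
    return delta ** t

# Choice: $50 at time t versus $100 at time t + 2 (illustrative amounts).
def prefers_sooner(discount, t):
    return 50 * discount(t) > 100 * discount(t + 2)

# Hyperbolic discounting reverses preference with delay:
now_choice   = prefers_sooner(hyperbolic, t=0)   # impatient when immediate
later_choice = prefers_sooner(hyperbolic, t=20)  # patient when both far off
# Exponential discounting never reverses: D(t+2)/D(t) is constant in t.
```

The temporal inconsistency exhibited by the hyperbolic learner above is exactly the behavioral signature that a positively dependent subjective clock reproduces.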
We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
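A minimal sketch of a loss-averse discrete choice (logit) model of the general kind described. The prices are hypothetical and the loss-aversion coefficient is the textbook Tversky-Kahneman value; this is not the retailer model itself:

```python
import math

def loss_averse_utility(price, ref_price, lam=2.25):
    """Reference-dependent utility of paying `price`: paying below the
    reference feels like a gain, paying above it like a loss weighted
    by the loss-aversion coefficient lam > 1."""
    gain = ref_price - price
    return gain if gain >= 0 else lam * gain

def logit_demand(prices, ref_prices, beta=1.0):
    """Multinomial-logit choice probabilities over items."""
    utils = [beta * loss_averse_utility(p, r)
             for p, r in zip(prices, ref_prices)]
    m = max(utils)
    exps = [math.exp(u - m) for u in utils]   # stabilized softmax
    z = sum(exps)
    return [e / z for e in exps]

# Asymmetry: a $2 loss hurts more than a $2 gain helps.
gain_util = loss_averse_utility(8.0, 10.0)    # paying $2 below reference
loss_util = loss_averse_utility(12.0, 10.0)   # paying $2 above reference

# A discount below the reference price boosts demand beyond what the
# same price would generate were it itself the reference:
discounted = logit_demand([8.0, 10.0], ref_prices=[10.0, 10.0])[0]
neutral    = logit_demand([8.0, 10.0], ref_prices=[8.0, 10.0])[0]
```

When the discount ends, the reference has adjusted downward, so the now-undiscounted price registers as a loss, pushing demand toward substitutes, which is the prediction tested against the retail data.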
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
We present a theoretical analysis and numerical modeling of optical levitation and trapping of stuck particles with pulsed optical tweezers. In our model, a pulsed laser was used to generate a large gradient force within a short duration that overcame the adhesive interaction between the stuck particles and the surface; a low-power continuous-wave (cw) laser was then used to capture the levitated particle. We describe the gradient force generated by the pulsed optical tweezers and model the binding interaction between the stuck beads and the glass surface by the dominant van der Waals force with a randomly distributed binding strength. We numerically calculate the single-pulse levitation efficiency for polystyrene beads as a function of the pulse energy, the axial displacement from the surface to the pulsed laser focus, and the pulse duration. The result of our numerical modeling is qualitatively consistent with the experimental result. (C) 2005 Optical Society of America.
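A toy Monte Carlo version of the levitation-efficiency idea, assuming only that a bead is released when the pulse's peak force exceeds its randomly drawn adhesion strength. All units, distributions, and parameter values below are hypothetical, for illustration only:

```python
import random

def levitation_efficiency(pulse_force, n_beads=10_000,
                          mean_adhesion=1.0, spread=0.5, seed=0):
    """Fraction of stuck beads released by a single pulse.  Each bead's
    van der Waals adhesion is drawn from a (hypothetical) truncated
    Gaussian; release occurs when the pulse force exceeds it."""
    rng = random.Random(seed)
    released = 0
    for _ in range(n_beads):
        adhesion = max(0.0, rng.gauss(mean_adhesion, spread))
        if pulse_force > adhesion:
            released += 1
    return released / n_beads

# Efficiency rises monotonically with pulse force (hence pulse energy):
low  = levitation_efficiency(pulse_force=0.5)
high = levitation_efficiency(pulse_force=2.0)
```

The random binding strength is what turns the sharp single-bead force threshold into the smooth efficiency-versus-energy curves computed in the paper.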
Abstract:
Over the last several decades there have been significant advances in the study and understanding of light behavior in nanoscale geometries. Entire fields such as those based on photonic crystals, plasmonics, and metamaterials have been developed, accelerating the growth of knowledge related to nanoscale light manipulation. Coupled with recent interest in cheap, reliable renewable energy, this progress has given rise to a new field: nanophotonic solar cells.
In this thesis, we examine important properties of thin-film solar cells from a nanophotonics perspective. We identify key differences between nanophotonic devices and traditional, thick solar cells. We propose a new way of understanding and describing limits to light trapping and show that certain nanophotonic solar cell designs can have light trapping limits above the so-called ray-optic or ergodic limit. We propose that a necessary prerequisite for exceeding the traditional light trapping limit is that the active region of the solar cell possess a local density of optical states (LDOS) higher than that of the corresponding bulk material. We further show that, in addition to having an increased density of states, the absorber must have an appropriate incoupling mechanism to transfer light from free space into the optical modes of the device. We outline a portfolio of new solar cell designs that have the potential to exceed the traditional light trapping limit and numerically validate our predictions for select cases.
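For context, the ray-optic (ergodic) limit referred to here is the classical bound on absorption path-length enhancement for a weakly absorbing slab of refractive index $n$ with ideal Lambertian light trapping:

```latex
F_{\max} = 4 n^2
```

For silicon, with $n \approx 3.5$, this gives an enhancement of roughly 49; designs that raise the LDOS above its bulk value are what permit exceeding this figure.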
We emphasize the importance of thinking about light trapping in terms of maximizing the optical modes of the device and efficiently coupling light into them from free space. To further explore these two concepts, we optimize patterns of superlattices of air holes in thin slabs of Si and show that by adding a roughened incoupling layer the total absorbed current can be increased synergistically. We suggest that the addition of a random scattering surface to a periodic patterning can increase incoupling by lifting the constraint of selective mode occupation associated with periodic systems.
Lastly, through experiment and simulation, we investigate a potential high-efficiency solar cell architecture that can be improved with the nanophotonic light trapping concepts described in this thesis. Optically thin GaAs solar cells are prepared by the epitaxial liftoff process, in which they are removed from their growth substrate and a metallic back reflector is added. A process for depositing large-area nanopatterns on the surface of the cells is developed using nanoimprint lithography and implemented on the thin GaAs cells.
Abstract:
We propose a surface planar ion chip which forms a linear radio-frequency Paul ion trap. The electrodes reside in the two planes of a chip, and the trap axis is located above the chip surface. Its electric field and potential distribution are similar to those of the standard linear radio-frequency Paul ion trap. This ion trap geometry may be of considerable value for quantum information processing.
Abstract:
The effects of the relative phase between two laser beams on the propagation of a weak electromagnetic pulse are investigated in a V-type system with spontaneously generated coherence (SGC). By tuning the relative phase, the subluminal and superluminal group velocities can be treated in a unified manner. Meanwhile, SGC can be regarded as a knob for switching light propagation between the subluminal and superluminal regimes.
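For reference, the subluminal/superluminal distinction follows from the standard expression for the group velocity in a dispersive medium:

```latex
v_g = \frac{c}{\,n(\omega) + \omega \,\dfrac{dn}{d\omega}\,}
```

Normal dispersion ($dn/d\omega > 0$) gives $v_g < c$ (subluminal), while sufficiently steep anomalous dispersion ($dn/d\omega < 0$) yields superluminal or even negative group velocities; the relative phase and SGC shape $dn/d\omega$ at the pulse carrier frequency.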
Abstract:
Ghost imaging, a branch of quantum information science, is a fascinating field of study because the image of an object appears in the optical path that does not contain the object. It was once believed that ghost imaging could only be realized with entangled two-photon light sources, but research in recent years has shown that classical thermal light fields can also achieve this process. Starting from classical statistical optics, we establish a numerical model of the thermal light field, simulating light-field fluctuations consistent with thermal-light statistics, free-space propagation, and the modulation of the thermal field by an object's transmission function. From the correlation function of the intensity fluctuations, we then recover the Fourier-transform images of an amplitude object and a pure-phase object, respectively. Comparison with real experimental results shows that the predictions of this numerical model, based on the principles of statistical optics, are fully consistent with experiment.
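A minimal numerical sketch of the intensity-fluctuation correlation at the heart of thermal-light ghost imaging. This toy model omits free-space propagation, so it recovers the object's transmission function directly rather than its Fourier transform, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_frames = 64, 20_000

# Each frame is a fresh 1D pseudothermal speckle pattern: intensity of
# a circular-Gaussian field E ~ CN(0, 1), so <I> = 1 per pixel.
fields = (rng.standard_normal((n_frames, n_pix)) +
          1j * rng.standard_normal((n_frames, n_pix))) / np.sqrt(2)
intensity = np.abs(fields) ** 2

# Amplitude object in the "test" arm; the "reference" arm records the
# bare speckle with spatial resolution.
obj = np.zeros(n_pix)
obj[24:40] = 1.0                  # a simple slit as the object
bucket = intensity @ obj          # bucket (single-pixel) detector signal

# Ghost image from the intensity-fluctuation correlation:
#   G(x) = < dI_ref(x) * dI_bucket >
g = ((intensity - intensity.mean(0)) *
     (bucket - bucket.mean())[:, None]).mean(0)
recovered = g / g.max()           # peaks where the object transmits
```

Averaging over many speckle realizations makes the correlation converge to the object profile; inserting the free-space propagators between source, object, and detectors is what turns this correlation into the Fourier-transform images reported above.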
Abstract:
These studies explore how, where, and when variables critical to decision-making are represented in the brain. In order to produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that selects an action from those available. When choosing amongst alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and that these values are then compared in an abstract “value space” in order to produce a decision. Despite much progress, in particular the pinpointing of ventromedial prefrontal cortex (vmPFC) as a region that encodes value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner that is independent of stimulus type. This confirms a key tenet of value-based decision-making: that value is represented in the abstract. However, I also show that stimulus-dependent value representations are present in the brain during decision-making, and I suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.
More broadly, there is both neural and behavioral evidence that two distinct control systems are at work during action selection: the “goal-directed” system, which selects actions based on an internal model of the environment, and the “habitual” system, which generates responses based on antecedent stimuli only. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should utilize stimulus and action information only, while goal-directed behavior requires that outcomes as well as stimuli and actions be processed. In Chapter 3, I test whether areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.
The question of whether one or both of these neural systems drives Pavlovian conditioning is less well studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple but non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), it was found that subjects’ reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable based on a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment can extend even to the most basic learning processes.
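A compact illustration of the informational difference between the two algorithm families. The values and structures below are arbitrary toys, not the thesis's fitted models:

```python
def model_free_update(v, reward, alpha=0.1):
    """Rescorla-Wagner / temporal-difference style update: the cue's
    cached value moves toward each observed reward by a fraction of
    the prediction error, with no model of why rewards occur."""
    return v + alpha * (reward - v)

def model_based_value(p_state, reward_by_state):
    """Model-based value: expected reward under an internal model, a
    belief distribution over the hidden state of the world."""
    return sum(p * reward_by_state[s] for s, p in p_state.items())

# The model-free learner only tracks a slow running average...
v = 0.0
for r in [1, 1, 1, 0, 1]:
    v = model_free_update(v, r)

# ...while the model-based learner revalues instantly when its belief
# about the world's state changes (illustrative beliefs and rewards):
v_mb = model_based_value({"rewarding": 0.8, "neutral": 0.2},
                         {"rewarding": 1.0, "neutral": 0.0})
```

Quantities like the precision signal mentioned above only exist for the model-based learner, since they are defined on the belief distribution rather than on a cached value.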
Knowledge of the state of hidden variables in an environment is required for optimal inference regarding the abstract decision structure of a given environment and therefore can be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires the generation and manipulation of an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model fit and comparison process pointed to the use of "belief thresholding". This implies that subjects tended to eliminate low-probability hypotheses regarding the state of the environment from their internal model and ceased to update the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference consistent with a serial hypothesis testing strategy.
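A minimal sketch of belief thresholding layered on an incremental Bayesian update, in the spirit of the strategy inferred above; the threshold value and likelihoods are illustrative, not fitted quantities:

```python
def update_beliefs(beliefs, likelihoods, threshold=0.01):
    """One step of Bayesian belief updating with belief thresholding:
    hypotheses whose posterior falls below `threshold` are pruned
    (set to zero and no longer updated), and the remainder are
    renormalized."""
    posterior = {h: beliefs[h] * likelihoods[h] for h in beliefs}
    z = sum(posterior.values())
    posterior = {h: p / z for h, p in posterior.items()}
    posterior = {h: (p if p >= threshold else 0.0)
                 for h, p in posterior.items()}
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

beliefs = {"h1": 0.5, "h2": 0.3, "h3": 0.2}
# An observation that strongly favors h1, weakly supports h2, and
# all but rules out h3 (likelihood values are arbitrary):
beliefs = update_beliefs(beliefs, {"h1": 0.9, "h2": 0.2, "h3": 0.001})
```

After the update, the low-probability hypothesis is dropped from the internal model entirely, which is the computational saving that distinguishes thresholded inference from full incremental Bayesian learning.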
Abstract:
Terns and skimmers nesting on saltmarsh islands often suffer large nest losses due to tidal and storm flooding. Nests located near the center of an island and on wrack (mats of dead vegetation, mostly eelgrass Zostera) are less susceptible to flooding than those near the edge of an island and those on bare soil or in saltmarsh cordgrass (Spartina alterniflora). In the 1980s, Burger and Gochfeld constructed artificial eelgrass mats on saltmarsh islands in Ocean County, New Jersey. These mats were used as nesting substrate by common terns (Sterna hirundo) and black skimmers (Rynchops niger). Every year since 2002 I have transported eelgrass to one of their original sites to make artificial mats. This site, Pettit Island, typically supports between 125 and 200 pairs of common terns. There has often been very little natural wrack present on the island at the start of the breeding season, and in most years natural wrack has been most common along the edges of the island. The terns readily used the artificial mats as nesting substrate. Because I placed artificial mats in the center of the island, the terns have often avoided the large nest losses incurred by terns nesting in peripheral locations. However, during particularly severe flooding events, even centrally located nests on mats are vulnerable. Construction of eelgrass mats represents an easy habitat manipulation that can improve the nesting success of marsh-nesting seabirds.
Abstract:
An exciting frontier in quantum information science is the integration of otherwise "simple" quantum elements into complex quantum networks. The laboratory realization of even small quantum networks enables the exploration of physical systems that have not heretofore existed in the natural world. Within this context, there is active research to achieve nanoscale quantum optical circuits, in which atoms are trapped near nanoscopic dielectric structures and "wired" together by photons propagating through the circuit elements. Single atoms and atomic ensembles endow otherwise linear optical circuits with quantum functionality and thereby enable the capability of building quantum networks component by component. Toward these goals, we have experimentally investigated three different systems, ranging from conventional to rather exotic: free-space atomic ensembles, optical nanofibers, and photonic crystal waveguides. First, we demonstrate measurement-induced quadripartite entanglement among four quantum memories. Next, following the landmark realization of a nanofiber trap, we demonstrate the implementation of a state-insensitive, compensated nanofiber trap. Finally, we reach more exotic systems based on photonic crystal devices. Beyond conventional topologies of resonators and waveguides, new opportunities emerge from the powerful capabilities of dispersion and modal engineering in photonic crystal waveguides. We have implemented an integrated optical circuit with a photonic crystal waveguide capable of both trapping and interfacing atoms with guided photons, and have observed a collective effect, superradiance, mediated by the guided photons. These advances provide an important capability for engineered light-matter interactions, enabling explorations of novel quantum transport and quantum many-body phenomena.