15 results for Computational experiment

in CaltechTHESIS


Relevance: 30.00%

Abstract:

These studies explore how, where, and when variables critical to decision-making are represented in the brain. In order to produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that selects an action from those available. When choosing among alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and that these values are then compared in an abstract “value space” in order to produce a decision. Despite much progress, in particular the identification of ventromedial prefrontal cortex (vmPFC) as a region that encodes value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner that is independent of stimulus type, confirming that value is represented abstractly, a key tenet of value-based decision-making. However, I also show that stimulus-dependent value representations are present in the brain during decision-making, and I suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.

More broadly, there is both neural and behavioral evidence that two distinct control systems are at work during action selection: the “goal-directed” system, which selects actions based on an internal model of the environment, and the “habitual” system, which generates responses based only on antecedent stimuli. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should use stimulus and action information only, while goal-directed behavior requires that outcomes, as well as stimuli and actions, be processed. In Chapter 3, I test whether areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.
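
To make the two systems' informational requirements concrete, here is a generic, textbook-style contrast (not the specific models fit in this thesis): a model-free update that needs only the experienced stimulus, action, and reward, versus a model-based evaluation that additionally consults an internal model of transitions and outcomes. All quantities and parameters below are illustrative.

```python
import numpy as np

n_states, n_actions = 4, 2
rng = np.random.default_rng(0)

# Model-free ("habitual") learner: caches stimulus-action values from experience only.
Q = np.zeros((n_states, n_actions))

def model_free_update(s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Temporal-difference (Q-learning) update: needs only (stimulus, action, reward, next stimulus)."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# Model-based ("goal-directed") learner: evaluates actions through an internal model
# of transition probabilities T[s, a, s'] and expected outcomes R[s, a] (toy values here).
T = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.random((n_states, n_actions))

def model_based_value(s, a, gamma=0.9, depth=3):
    """Finite-depth lookahead over the internal model: requires outcome knowledge."""
    if depth == 0:
        return R[s, a]
    future = T[s, a] @ np.array([max(model_based_value(s2, a2, gamma, depth - 1)
                                     for a2 in range(n_actions))
                                 for s2 in range(n_states)])
    return R[s, a] + gamma * future

model_free_update(s=0, a=1, r=1.0, s_next=2)
print(Q[0], model_based_value(0, 1))
```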

The question of whether one or both of these neural systems drives Pavlovian conditioning is less well studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple but non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), it was found that subjects’ reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable based on a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment can extend even to the most basic learning processes.

Knowledge of the state of hidden variables in an environment is required for optimal inference about the abstract decision structure of that environment, and can therefore be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires generating and manipulating an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model fit and comparison process pointed to the use of "belief thresholding": subjects tended to eliminate low-probability hypotheses about the state of the environment from their internal model and ceased to update the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference, consistent with a serial hypothesis-testing strategy.
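
As a minimal illustration of belief thresholding (a generic sketch; the task's actual generative model, hypothesis space, and threshold are not specified here), hypotheses whose posterior falls below a cutoff are pruned from the internal model and no longer updated:

```python
import numpy as np

def update_beliefs(prior, likelihoods, threshold=0.05):
    """One Bayesian update with belief thresholding.

    prior       : current belief over hypotheses (sums to 1)
    likelihoods : P(observation | hypothesis) for the latest observation
    threshold   : hypotheses whose posterior drops below this are eliminated
                  (set to zero, so they are never updated again)
    """
    posterior = prior * likelihoods
    posterior /= posterior.sum()
    posterior[posterior < threshold] = 0.0      # prune low-probability hypotheses
    return posterior / posterior.sum()          # renormalize over surviving hypotheses

beliefs = np.full(4, 0.25)                      # four candidate hidden states (toy example)
for like in ([0.8, 0.4, 0.1, 0.1], [0.9, 0.3, 0.2, 0.1], [0.7, 0.5, 0.1, 0.2]):
    beliefs = update_beliefs(beliefs, np.array(like))
    print(beliefs)
```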

Relevance: 30.00%

Abstract:

We carried out quantum mechanics (QM) studies aimed at improving the performance of hydrogen fuel cells. This led to predictions of improved materials, some of which were subsequently validated with experiments by our collaborators.

In part I, the challenge was to find a replacement for the Pt cathode that would improve performance for the Oxygen Reduction Reaction (ORR) while remaining stable under operating conditions and decreasing cost. Our design strategy was to find an alloy with composition Pt3M that would lead to surface segregation such that the top layer would be pure Pt, with the second and subsequent layers richer in M. Under operating conditions we expect the surface to have significant O and/or OH chemisorbed on it, and hence we searched for M that would remain segregated under these conditions. Using QM we examined surface segregation for 28 Pt3M alloys, where M is a transition metal. We found that only Pt3Os and Pt3Ir showed significant surface segregation when O and OH are chemisorbed on the catalyst surfaces. This result indicates that Pt3Os and Pt3Ir favor formation of a Pt-skin surface layer structure that would resist corrosion by the acidic electrolyte under fuel cell operating conditions. We chose to focus on Os because the Pt-Ir phase diagram indicated that Pt-Ir cannot form a homogeneous alloy at lower temperatures. To determine the performance for the ORR, we used QM to examine all intermediates, reaction pathways, and reaction barriers involved in the processes by which protons from the anode reaction react with O2 to form H2O. These QM calculations used our Poisson-Boltzmann implicit solvation model to include the effects of the solvent (water with dielectric constant 78, at pH 7 and 298 K). We found that the rate-determining step (RDS) was the Oad hydration reaction (Oad + H2Oad -> OHad + OHad) in both cases, but that the 0.50 eV barrier for pure Pt is reduced to 0.48 eV for Pt3Os, which at 80 degrees C would increase the rate by 218%. We collaborated with Pu-Wei Wu’s group to carry out experiments, which showed that the dealloying-treated Pt2Os catalyst had two-fold higher activity at 25 degrees C than pure Pt and 272% improved stability, validating our theoretical predictions.

We also carried out similar QM studies, followed by experimental validation, for the Os/Pt core-shell catalyst fabricated by the underpotential deposition (UPD) method. The QM results indicated that the RDS for the ORR is a compromise between the OOH formation step (0.37 eV for Pt, 0.23 eV for the Pt2ML/Os core-shell) and the H2O formation step (0.32 eV for Pt, 0.22 eV for the Pt2ML/Os core-shell). We found that Pt2ML/Os has the highest activity (compared to pure Pt and to the Pt3Os alloy) because the 0.37 eV barrier decreases to 0.23 eV. To understand what aspects of the core-shell structure lead to this improved performance, we considered the effect on the ORR of compressing the alloy slab to the dimensions of pure Pt. However, this had little effect: the RDS barrier remained 0.37 eV. This shows that the ligand effect (the electronic structure modification arising from the Os substrate) plays a more important role than the strain effect and is responsible for the improved activity of the core-shell catalyst. Experimental materials characterization confirmed the core-shell structure of our catalyst. Electrochemical experiments on Pt2ML/Os/C showed 3.5 to 5 times better ORR activity at 0.9 V (vs. NHE) in 0.1 M HClO4 solution at 25 degrees C compared to commercially available Pt/C. The excellent correlation between the experimental half-wave potential and the OH binding energies and RDS barriers validates the feasibility of predicting catalyst activity using QM calculations and a simple Langmuir–Hinshelwood model.

In part II, we used QM calculations to study methane steam reforming on Ni-alloy catalyst surfaces for solid oxide fuel cell (SOFC) applications. SOFCs have wide fuel adaptability, but coking and sulfur poisoning reduce their stability. Experimental results suggested that the Ni4Fe alloy improves both activity and stability compared to pure Ni. To understand the atomistic origin of this, we carried out QM calculations on surface segregation and found that the most stable configuration for Ni4Fe has an Fe atom distribution of (0%, 50%, 25%, 25%, 0%) starting at the bottom layer. We calculated the binding of C atoms on the Ni4Fe surface to be 142.9 kcal/mol, about 10 kcal/mol weaker than on the pure Ni surface. This weaker C binding energy is expected to make coke formation less favorable, explaining why Ni4Fe has better coking resistance and confirming the experimental observation. The reaction energy barriers for CHx decomposition and C binding on various alloy surfaces, Ni4X (X = Fe, Co, Mn, and Mo), showed that Ni4Fe, Ni4Co, and Ni4Mn all have better coking resistance than pure Ni, but that only Ni4Fe and Ni4Mn have (slightly) improved activity compared to pure Ni.

In part III, we used QM to examine proton transport in doped perovskite ceramics. We used a 2x2x2 perovskite supercell with composition Ba8X7M1(OH)1O23, where X = Ce or Zr and M = Y, Gd, or Dy; thus in each case a 4+ X is replaced by a 3+ M plus a proton on one O. We predicted the barriers for proton diffusion, allowing both intra-octahedron and inter-octahedron proton transfer. Without any restriction, we observed only inter-octahedron proton transfer, with an energy barrier similar to previous computational work but 0.2 eV higher than the experimental result for Y-doped zirconate. When the Odonor-Oacceptor distance was instead held fixed, we found that the barrier differences between cerates and zirconates with the various dopants are only 0.02-0.03 eV. To fully address performance one would need to examine proton transfer at grain boundaries, which will require larger-scale ReaxFF reactive dynamics for systems with millions of atoms. The QM calculations reported here will be used to train the ReaxFF force field.

Relevance: 20.00%

Abstract:

Inspired by key experimental and analytical results regarding Shape Memory Alloys (SMAs), we propose a modelling framework to explore the interplay between martensitic phase transformations and plastic slip in polycrystalline materials, with an eye towards computational efficiency. The resulting framework uses a convexified potential for the internal energy density to capture the stored energy associated with transformation at the meso-scale, and introduces kinetic potentials to govern the evolution of transformation and plastic slip. The framework is novel in the way it treats plasticity on par with transformation.

We implement the framework in the setting of anti-plane shear, using a staggered implicit/explicit update: we first use a Fast Fourier Transform (FFT) solver based on an augmented Lagrangian formulation to implicitly solve for the full-field displacements of a simulated polycrystal, then explicitly update the volume fraction of martensite and the plastic slip using their respective stick-slip type kinetic laws. We observe that, even in this simple setting with an idealized material comprising four martensitic variants and four slip systems, the model recovers a rich variety of SMA-type behaviors. We use this model to gain insight into the isothermal behavior of stress-stabilized martensite, examining the effects of the relative plastic yield strength and the memory of deformation history under non-proportional loading, among others.
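
The stick-slip character of the kinetic laws can be illustrated with a generic scalar update (hypothetical threshold and mobility values; this is not the thesis's calibrated model): the internal variable evolves only while its driving force exceeds a critical value.

```python
import numpy as np

def stick_slip_update(xi, driving_force, f_crit=1.0, mobility=0.1, dt=1e-3):
    """Explicit stick-slip update for an internal variable xi (e.g. a martensite
    volume fraction): no evolution ("stick") while the driving force is below the
    critical value, rate-limited flow ("slip") above it."""
    excess = np.abs(driving_force) - f_crit
    rate = np.where(excess > 0.0, mobility * excess * np.sign(driving_force), 0.0)
    return np.clip(xi + dt * rate, 0.0, 1.0)    # keep a volume-fraction-like variable in [0, 1]

xi = np.zeros(5)
forces = np.array([0.5, 0.9, 1.5, 3.0, -2.0])   # driving forces at five material points
for _ in range(1000):
    xi = stick_slip_update(xi, forces)
print(xi)                                        # only points above the threshold have evolved
```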

We extend the framework to the generalized 3-D setting, for which the convexified potential is a lower bound on the actual internal energy, and show that the fully implicit discrete time formulation of the framework is governed by a variational principle for mechanical equilibrium. We further propose an extension of the method to finite deformations via an exponential mapping. We implement the generalized framework using an existing Optimal Transport Mesh-free (OTM) solver. We then model the α–γ and α–ε transformations in pure iron, with an initial attempt in the latter to account for twinning in the parent phase. We demonstrate the scalability of the framework to large scale computing by simulating Taylor impact experiments, observing nearly linear (ideal) speed-up through 256 MPI tasks. Finally, we present preliminary results of a simulated Split-Hopkinson Pressure Bar (SHPB) experiment using the α–ε model.

Relevance: 20.00%

Abstract:

Computational general relativity is a field of study that has reached maturity only within the last decade. This thesis details several studies that elucidate phenomena related to the coalescence of compact object binaries. Chapters 2 and 3 recount work towards developing new analytical tools for visualizing and reasoning about dynamics in strongly curved spacetimes. In both studies, the results employ analogies with the classical theory of electricity and magnetism, first (Ch. 2) in the post-Newtonian approximation to general relativity and then (Ch. 3) in full general relativity, though in the absence of matter sources. In Chapter 4, we examine the topological structure of absolute event horizons during binary black hole merger simulations conducted with the SpEC code. Chapter 6 reports on the progress of the SpEC code in simulating the coalescence of neutron star-neutron star binaries, while Chapter 7 tests the effects of various numerical gauge conditions on the robustness of black hole formation from stellar collapse in SpEC. In Chapter 5, we examine the nature of pseudospectral expansions of non-smooth functions, motivated by the need to simulate the stellar surface in Chapters 6 and 7. In Chapter 8, we study how thermal effects in the nuclear equation of state affect the equilibria and stability of hypermassive neutron stars. Chapter 9 presents supplements to the work in Chapter 8, including an examination of the stability question raised there in greater mathematical detail.

Relevance: 20.00%

Abstract:

This thesis addresses a series of topics related to the question of how people find foreground objects in complex scenes. Using both computer vision modeling and psychophysical analyses, we explore the computational principles of low- and mid-level vision.

We first explore computational methods of generating saliency maps from images and image sequences. We propose an extremely fast algorithm called the Image Signature that detects the locations in an image that attract human eye gaze. Based on a series of experimental validations using human behavioral data collected in various psychophysical experiments, we conclude that the Image Signature and its spatio-temporal extension, the Phase Discrepancy, are among the most accurate algorithms for saliency detection under various conditions.
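
A minimal sketch of the published Image Signature recipe (sign of the image's DCT, inverse transform, squaring, Gaussian smoothing); the working resolution and blur width below are illustrative choices:

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def image_signature_saliency(img, blur_sigma=4.0):
    """Saliency map from the image signature: reconstruct from sign(DCT(img)),
    square, and smooth. `img` is a 2-D grayscale array (one channel)."""
    signature = np.sign(dctn(img, norm='ortho'))        # the "image signature"
    recon = idctn(signature, norm='ortho')              # reconstruct from the sign only
    saliency = gaussian_filter(recon ** 2, blur_sigma)  # squared magnitude + Gaussian blur
    return saliency / saliency.max()

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0                                  # a toy "foreground" patch
print(image_signature_saliency(img).round(2)[32, ::8])   # saliency profile across the center row
```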

In the second part, we bridge the gap between fixation prediction and salient object segmentation with two efforts. First, we propose a new dataset that contains both fixation and object segmentation information. By presenting the two types of human data in the same dataset, we are able to analyze their intrinsic connection as well as understand the drawbacks of today’s “standard” but inappropriately labeled salient object segmentation datasets. Second, we propose an algorithm for salient object segmentation. Based on our findings about the connection between fixation data and salient object segmentation data, our model significantly outperforms all existing models on all three datasets by large margins.

In the third part of the thesis, we discuss topics around the human factors of boundary analysis. Closely related to salient object segmentation, boundary analysis focuses on delimiting the local contours of an object. We identify potential pitfalls in algorithm evaluation for the problem of boundary detection. Our analysis indicates that today’s popular boundary detection datasets contain a significant level of noise, which may severely influence benchmarking results. To give further insight into the labeling process, we propose a model characterizing the human factors involved in labeling.

The analyses reported in this thesis offer new perspectives on a series of interrelated issues in low- and mid-level vision. They raise warning signs about some of today’s “standard” procedures, while proposing new directions to encourage future research.

Relevance: 20.00%

Abstract:

Computational protein design (CPD) is a burgeoning field that uses physical-chemical or knowledge-based scoring functions to create protein variants with new or improved properties. This exciting approach has recently been used to generate proteins with entirely new functions, ones that are not observed in naturally occurring proteins. For example, several enzymes have been designed to catalyze reactions that are not in the repertoire of any known natural enzyme. In these designs, novel catalytic activity was built de novo (from scratch) into a previously inert protein scaffold. In addition to de novo enzyme design, the computational design of protein-protein interactions can also be used to create novel functionality, such as neutralization of influenza. Our goal here was to design a protein that can self-assemble with DNA into nanowires. We used computational tools to homodimerize a transcription factor that binds a specific sequence of double-stranded DNA. We arranged the protein-protein and protein-DNA binding sites so that self-assembly could occur in a linear fashion to generate nanowires. Upon mixing our designed protein homodimer with the double-stranded DNA, the molecules immediately self-assembled into nanowires. This nanowire topology was confirmed using atomic force microscopy, and a co-crystal structure showed that the nanowire is assembled via the desired interactions. To the best of our knowledge, this is the first example of protein-DNA self-assembly that does not rely on covalent interactions. We anticipate that this new material will stimulate further interest in the development of advanced biomaterials.

Relevance: 20.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments in which subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices, so theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests; this imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
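
A toy sketch of EC2-style greedy test selection in the noiseless case, with deterministic hypothesis predictions and a hypothetical pair of "theories"; BROAD itself handles noisy responses, parameter priors, and far larger design spaces:

```python
from itertools import combinations

# Hypotheses: (theory label, prior weight, deterministic predictions over four binary tests).
hypotheses = [
    ("theory_A", 0.25, [0, 0, 1, 1]),
    ("theory_A", 0.25, [0, 1, 1, 1]),
    ("theory_B", 0.25, [1, 0, 0, 1]),
    ("theory_B", 0.25, [1, 1, 0, 0]),
]
n_tests = 4

def edge_weight(alive):
    """EC2 edge mass: prior products between surviving hypotheses in different classes."""
    return sum(hypotheses[i][1] * hypotheses[j][1]
               for i, j in combinations(alive, 2)
               if hypotheses[i][0] != hypotheses[j][0])

def expected_gain(test, alive):
    """Expected edge mass cut by running `test`, averaged over its possible outcomes."""
    total = sum(hypotheses[i][1] for i in alive)
    gain = 0.0
    for outcome in (0, 1):
        consistent = [i for i in alive if hypotheses[i][2][test] == outcome]
        p_outcome = sum(hypotheses[i][1] for i in consistent) / total
        gain += p_outcome * (edge_weight(alive) - edge_weight(consistent))
    return gain

true_hypothesis = 2                       # simulate a subject obeying theory_B, variant 1
alive, asked = list(range(len(hypotheses))), []
while edge_weight(alive) > 0 and len(asked) < n_tests:
    test = max((t for t in range(n_tests) if t not in asked),
               key=lambda t: expected_gain(t, alive))
    outcome = hypotheses[true_hypothesis][2][test]            # observe the response
    alive = [i for i in alive if hypotheses[i][2][test] == outcome]
    asked.append(test)
print("tests run:", asked, "-> surviving theory:", {hypotheses[i][0] for i in alive})
```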

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries they chose. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we do not find any signature of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present-bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
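
For reference, the standard functional forms of these discount functions are shown below (generic notation; the thesis's (α, β) parameterization of the quasi-hyperbolic model may differ):

\[
D_{\text{exponential}}(t) = \delta^{\,t}, \qquad
D_{\text{hyperbolic}}(t) = \frac{1}{1 + k t}, \qquad
D_{\text{quasi-hyperbolic}}(t) =
\begin{cases}
1, & t = 0,\\
\beta\,\delta^{\,t}, & t > 0,
\end{cases}
\qquad
D_{\text{generalized hyperbolic}}(t) = (1 + \alpha t)^{-\beta/\alpha}.
\]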

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioral theories in the "wild," paying particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity, and, even more importantly, that when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
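
One illustrative way to operationalize this in a discrete choice (logit) setting is to add a reference-price loss term to the utility of each item (a hypothetical specification shown for concreteness; the estimated model in the thesis may differ):

\[
U_{ijt} = \beta^{\top} x_{jt} \;-\; \eta\, p_{jt} \;-\; \lambda \,\max\!\left(p_{jt} - r_{ijt},\, 0\right) \;+\; \varepsilon_{ijt},
\]

where x_{jt} are item attributes, p_{jt} the posted price, r_{ijt} consumer i's reference price (e.g. the recent discounted price), and λ > 0 captures loss aversion: a return to the undiscounted price is penalized beyond what the ordinary price coefficient η implies.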

In future work, BROAD can be widely applied to testing different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data and encourage combined lab-field experiments.

Relevance: 20.00%

Abstract:

The intensities and relative abundances of galactic cosmic ray protons and antiprotons have been measured with the Isotope Matter Antimatter Experiment (IMAX), a balloon-borne magnet spectrometer. The IMAX payload had a successful flight from Lynn Lake, Manitoba, Canada on July 16, 1992. Particles detected by IMAX were identified by mass and charge via the Cherenkov-rigidity and TOF-rigidity techniques, with measured rms mass resolution ≤0.2 amu for Z=1 particles.

Cosmic ray antiprotons are of interest because they can be produced by the interactions of high energy protons and heavier nuclei with the interstellar medium as well as by more exotic sources. Previous cosmic ray antiproton experiments have reported an excess of antiprotons over that expected solely from cosmic ray interactions.

Analysis of the flight data has yielded 124405 protons and 3 antiprotons in the energy range 0.19-0.97 GeV at the instrument, 140617 protons and 8 antiprotons in the energy range 0.97-2.58 GeV, and 22524 protons and 5 antiprotons in the energy range 2.58-3.08 GeV. These measurements are a statistical improvement over previous antiproton measurements, and they demonstrate improved separation of antiprotons from the more abundant fluxes of protons, electrons, and other cosmic ray species.

When these results are corrected for instrumental and atmospheric background and losses, the antiproton/proton ratios at the top of the atmosphere are p̄/p = 3.21(+3.49, -1.97)x10^(-5) in the energy range 0.25-1.00 GeV, p̄/p = 5.38(+3.48, -2.45)x10^(-5) in the energy range 1.00-2.61 GeV, and p̄/p = 2.05(+1.79, -1.15)x10^(-4) in the energy range 2.61-3.11 GeV. The corresponding antiproton intensities, also corrected to the top of the atmosphere, are 2.3(+2.5, -1.4)x10^(-2) (m^2 s sr GeV)^(-1), 2.1(+1.4, -1.0)x10^(-2) (m^2 s sr GeV)^(-1), and 4.3(+3.7, -2.4)x10^(-2) (m^2 s sr GeV)^(-1) for the same energy ranges.

The IMAX antiproton fluxes and antiproton/proton ratios are compared with recent Standard Leaky Box Model (SLBM) calculations of the cosmic ray antiproton abundance. According to this model, cosmic ray antiprotons are secondary cosmic rays arising solely from the interaction of high energy cosmic rays with the interstellar medium. The effects of solar modulation of protons and antiprotons are also calculated, showing that the antiproton/proton ratio can vary by as much as an order of magnitude over the solar cycle. When solar modulation is taken into account, the IMAX antiproton measurements are found to be consistent with the most recent calculations of the SLBM. No evidence is found in the IMAX data for excess antiprotons arising from the decay of galactic dark matter, which had been suggested as an interpretation of earlier measurements. Furthermore, the consistency of the current results with the SLBM calculations suggests that the mean antiproton lifetime is at least as large as the cosmic ray storage time in the galaxy (~10^7 yr, based on measurements of cosmic ray ^(10)Be). Recent measurements by two other experiments are consistent with this interpretation of the IMAX antiproton results.

Relevance: 20.00%

Abstract:

Melting temperature calculations have important applications in the theoretical study of phase diagrams and in computational materials screening. In this thesis, we present two new methods, the improved Widom particle insertion method and the small-cell coexistence method, which we developed in order to compute melting temperatures both accurately and quickly.

We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by efficiently sampling cavities while calculating the integrals that provide the chemical potential of a physical system. This enables us to calculate the chemical potentials of liquids directly from first principles, without the help of any reference system, which is required in the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is then used to locate the melting temperature, and the results agree closely with experiment.
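
The underlying relation is Widom's test-particle expression for the excess chemical potential,

\[
\mu_{\mathrm{ex}} = -k_B T \,\ln \left\langle \exp\!\left(-\frac{\Delta U}{k_B T}\right) \right\rangle_{N},
\]

where ΔU is the energy of inserting a test particle into an N-particle configuration. Because the Boltzmann factor is essentially zero unless the insertion lands in a cavity, biasing the insertions toward cavities (with the sampling bias accounted for) is what lets the ensemble average converge with far fewer samples.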

We propose the small-cell coexistence method, based on statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computational cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of large system size, and an accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of tantalum, high-pressure sodium, and the ionic material NaCl demonstrate the accuracy and flexibility of the method in practical applications. The method is a promising approach for large-scale automated materials screening in which the melting temperature is a design criterion.

We present in detail two examples involving refractory materials. First, we demonstrate how key material properties that provide guidance in the design of refractory materials can be accurately determined via ab initio thermodynamic calculations in conjunction with experimental techniques based on synchrotron X-ray diffraction and thermal analysis under laser-heated aerodynamic levitation. The properties considered include the melting point, heat of fusion, heat capacity, thermal expansion coefficients, thermal stability, and sublattice disordering, as illustrated in a motivating example, lanthanum zirconate (La2Zr2O7). The close agreement with experiment in this known but structurally complex compound provides a good indication that the computational methods described can be used within a screening framework to identify novel refractory materials. Second, we report an extensive investigation into the melting temperatures of the Hf-C and Hf-Ta-C systems using ab initio calculations. With melting points above 4000 K, hafnium carbide (HfC) and tantalum carbide (TaC) are among the most refractory binary compounds known to date. Their mixture, with the general formula TaxHf1-xCy, is known to have a melting point of 4215 K at the composition Ta4HfC5, which has long been considered the highest melting temperature of any solid. Very few measurements of the melting points of tantalum and hafnium carbides have been documented, because of the obvious experimental difficulties at such extreme temperatures. Our investigation identifies three major chemical factors that contribute to the high melting temperatures. Based on these factors, we propose and explore a new class of materials which, according to our ab initio calculations, may possess even higher melting temperatures than Ta-Hf-C. This example also demonstrates the feasibility of materials screening and discovery via ab initio calculations for the optimization of "higher-level" properties whose determination requires extensive sampling of atomic configuration space.

Relevance: 20.00%

Abstract:

G protein-coupled receptors (GPCRs) are the largest family of proteins within the human genome. They consist of seven transmembrane (TM) helices, with an N-terminal region of varying length and structure on the extracellular side and a C-terminus on the intracellular side. GPCRs transmit extracellular signals to cells and as such are crucial drug targets. Designing pharmaceuticals that target GPCRs is greatly aided by full-atom structural information about the proteins; in particular, the TM region of GPCRs is where small-molecule ligands (much more bioavailable than peptide ligands) typically bind. In recent years nearly thirty distinct GPCR TM regions have been crystallized. However, there are more than 1,000 GPCRs, leaving the vast majority with limited structural information. Additionally, GPCRs exist in a myriad of conformational states in the body, making static X-ray crystal structures an incomplete reflection of GPCR structure. In order to obtain an ensemble of GPCR structures, we have developed the GEnSeMBLE procedure to rapidly sample a large number of variations of GPCR helix rotations and tilts. The lowest-energy GEnSeMBLE structures are then docked to small-molecule ligands and optimized. The GPCR family consists of five subfamilies with little to no sequence homology between them: classes A, B1, B2, C, and Frizzled/Taste2. Almost all of the GPCR crystal structures have been of class A GPCRs, and much is known about their conserved interactions and binding sites. In this work we focus on class B1 GPCRs and aim to understand that family's interactions and binding sites, both with small molecules and with their native peptide ligands. Specifically, we predict the full-atom structure and peptide binding site of the glucagon-like peptide receptor, and the TM region and small-molecule binding sites for eight other class B1 GPCRs: CALRL, CRFR1, GIPR, GLR, PACR, PTH1R, VIPR1, and VIPR2. Our class B1 work reveals multiple conserved interactions across the B1 subfamily as well as a consistent small-molecule binding site centrally located in the TM bundle. Both the interactions and the binding sites are distinct from those seen in the more well-characterized class A GPCRs, so our work provides a strong starting point for drug design targeting class B1 proteins. We also predict the full structure of CXCR4, a class A GPCR that was not closely related to any crystallized class A GPCR at the time of the work, bound to a small molecule.
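
Schematically, the combinatorial sampling step can be pictured as enumerating a coarse grid of helix orientations and keeping the best-scoring combinations. The energy function, angle grid, and helix count below are placeholders, not the actual GEnSeMBLE scoring (which also samples helix tilts):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
N_HELICES = 7
ROTATIONS = range(-30, 31, 15)             # coarse grid of helix rotation angles (degrees)
TARGET = rng.normal(0.0, 10.0, N_HELICES)  # placeholder "preferred" orientations

def packing_energy(rotations):
    """Placeholder score standing in for the real inter-helix interaction energy."""
    r = np.asarray(rotations, dtype=float)
    return float(-np.sum(np.cos(np.radians(r - TARGET))))

# Enumerate every combination of helix rotations, score each, keep the lowest-energy set.
scored = sorted((packing_energy(combo), combo)
                for combo in itertools.product(ROTATIONS, repeat=N_HELICES))
ensemble = scored[:10]                     # ensemble of best-packed configurations
for energy, combo in ensemble[:3]:
    print(round(energy, 3), combo)
```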

Relevance: 20.00%

Abstract:

This thesis describes investigations of two classes of laboratory plasmas with rather different properties: partially ionized low pressure radiofrequency (RF) discharges, and fully ionized high density magnetohydrodynamically (MHD)-driven jets. An RF pre-ionization system was developed to enable neutral gas breakdown at lower pressures and create hotter, faster jets in the Caltech MHD-Driven Jet Experiment. The RF plasma source used a custom pulsed 3 kW 13.56 MHz RF power amplifier that was powered by AA batteries, allowing it to safely float at 4-6 kV with the cathode of the jet experiment. The argon RF discharge equilibrium and transport properties were analyzed, and novel jet dynamics were observed.

Although the RF plasma source was conceived as a wave-heated helicon source, scaling measurements and numerical modeling showed that inductive coupling was the dominant energy input mechanism. A one-dimensional time-dependent fluid model was developed to quantitatively explain the expansion of the pre-ionized plasma into the jet experiment chamber. The plasma transitioned from an ionizing phase with depressed neutral emission to a recombining phase with enhanced emission during the course of the experiment, causing fast camera images to be a poor indicator of the density distribution. Under certain conditions, the total visible and infrared brightness and the downstream ion density both increased after the RF power was turned off. The time-dependent emission patterns were used for an indirect measurement of the neutral gas pressure.

The low-mass jets formed with the aid of the pre-ionization system were extremely narrow and collimated near the electrodes, with peak density exceeding that of jets created without pre-ionization. The initial neutral gas distribution prior to plasma breakdown was found to be critical in determining the ultimate jet structure. The visible radius of the dense central jet column was several times narrower than the axial current channel radius, suggesting that the outer portion of the jet must have been force free, with the current parallel to the magnetic field. The studies of non-equilibrium flows and plasma self-organization being carried out at Caltech are relevant to astrophysical jets and fusion energy research.

Relevance: 20.00%

Abstract:

We present a complete system for Spectral Cauchy characteristic extraction (Spectral CCE). Implemented in C++ within the Spectral Einstein Code (SpEC), the method employs numerous innovative algorithms to efficiently calculate the Bondi strain, news, and flux.

Spectral CCE was envisioned to ensure physically accurate gravitational waveforms computed for the Laser Interferometer Gravitational-Wave Observatory (LIGO) and similar experiments, while working toward a template bank with more than a thousand waveforms to span the binary black hole (BBH) problem’s seven-dimensional parameter space.

The Bondi strain, news, and flux are physical quantities central to efforts to understand and detect astrophysical gravitational wave sources within the Simulating eXtreme Spacetimes (SXS) collaboration, with the ultimate aim of providing the first strong-field probe of the Einstein field equations.

In a series of included papers, we demonstrate stability, convergence, and gauge invariance. We also demonstrate agreement between Spectral CCE and the legacy Pitt null code, while achieving a factor of 200 improvement in computational efficiency.

Spectral CCE represents a significant computational advance. It is the foundation upon which further capability will be built, specifically enabling the complete calculation of junk-free, gauge-free, and physically valid waveform data on the fly within SpEC.

Relevance: 20.00%

Abstract:

The layout of a typical optical microscope has remained effectively unchanged over the past century. Besides the widespread adoption of digital focal plane arrays, relatively few innovations have improved standard bright-field microscope imaging. This thesis presents a new microscope imaging method, termed Fourier ptychography, which uses an LED array to provide variable sample illumination, together with post-processing algorithms that recover useful sample information. Examples include increasing the resolution of megapixel-scale images to one gigapixel, measuring quantitative phase, achieving oil-immersion-quality resolution without an immersion medium, and recovering complex three-dimensional sample structure.

Relevance: 20.00%

Abstract:

Part 1. Many interesting visual and mechanical phenomena occur in the critical region of fluids, for both the gas-liquid and liquid-liquid transitions. The precise thermodynamic and transport behavior here has broad consequences for the molecular theory of liquids. Previous ultrasonic studies in this laboratory on a liquid-liquid critical mixture supported a basically classical analysis of fluid behavior by M. Fixman (e.g., the free energy is assumed analytic in the intensive thermodynamic variables), at least when the fluid is not too close to critical. A breakdown in classical concepts is evidenced close to critical, in some well-defined ways. We have studied here a liquid-liquid critical system complementary in nature to all previous mixtures (possessing a lower critical mixing, or consolute, temperature) to look for new qualitative critical behavior. We did not find such new behavior in the ultrasonic absorption ascribable to the critical fluctuations, but we did find extra absorption due to chemical processes (which are nonetheless related to the mixing behavior that generates the lower consolute point). We rederived, corrected, and extended Fixman's analysis to interpret our experimental results in these more complex circumstances. The entire account of theory and experiment is prefaced by an extensive introduction recounting the general status of liquid-state theory. The introduction provides context for the present work and also points out problems deserving attention; interest in these problems was stimulated by this work as well as by the work in Part 3.

Part 2. Among variational theories of electronic structure, Hartree-Fock theory has proved particularly valuable for a practical understanding of such properties as chemical binding, electric multipole moments, and X-ray scattering intensity. It also provides the most tractable method of calculating first-order properties under external or internal one-electron perturbations, either developed explicitly in orders of perturbation theory or treated in the fully self-consistent method. The accuracy and consistency of first-order properties are poorer than those of zero-order properties, but this is most often due to the use of explicit approximations in solving the perturbed equations, or to inadequacy of the variational basis in size or composition. We have calculated the electric polarizabilities of H2, He, Li, Be, LiH, and N2 by Hartree-Fock theory, using exact perturbation theory or the fully self-consistent method, as dictated by convenience. Through careful studies of total basis set composition, we obtained good approximations to the limiting Hartree-Fock values of the polarizabilities with bases of reasonable size. The values for all species, and for each direction in the molecular cases, are within 8% of experiment, or of the best theoretical values where experiment is unavailable. Our results support the use of unadorned Hartree-Fock theory for the static polarizabilities needed in interpreting electron-molecule scattering data, collision-induced light scattering experiments, and other phenomena involving experimentally inaccessible polarizabilities.
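
The computed quantity is the static dipole polarizability, defined through the expansion of the energy in a uniform external field F:

\[
E(\mathbf{F}) = E(0) \;-\; \boldsymbol{\mu}_0\!\cdot\!\mathbf{F} \;-\; \tfrac{1}{2}\,\mathbf{F}^{\top}\boldsymbol{\alpha}\,\mathbf{F} \;+\; \cdots,
\qquad
\alpha_{ij} = -\left.\frac{\partial^{2} E}{\partial F_i\,\partial F_j}\right|_{\mathbf{F}=0}.
\]

At the Hartree-Fock level, the perturbative and fully self-consistent routes target the same limiting values, which is why either could be used as convenience dictated.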

Part 3. Numerical integration of the close-coupled scattering equations has been carried out to obtain vibrational transition probabilities for some models of the electronically adiabatic H2-H2 collision. All the models use a Lennard-Jones interaction potential between nearest atoms in the collision partners. We have analyzed the results for some insight into the vibrational excitation process in its dependence on the energy of collision, the nature of the vibrational binding potential, and other factors. We conclude also that replacement of earlier, simpler models of the interaction potential by the Lennard-Jones form adds very little realism for all the complication it introduces. A brief introduction precedes the presentation of our work and places it in the context of attempts to understand the collisional activation process in chemical reactions as well as some other chemical dynamics.
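
The Lennard-Jones form used for the nearest-atom interactions is the standard two-parameter potential

\[
V(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right],
\]

with well depth ε and length parameter σ (the specific values adopted for the H2-H2 models are not reproduced here).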

Relevance: 20.00%

Abstract:

Computational imaging is flourishing thanks to recent advances in array photodetectors and image processing algorithms. This thesis presents Fourier ptychography, a computational imaging technique implemented in microscopy to break the limits of conventional optics. With Fourier ptychography, the resolution of the imaging system can surpass the diffraction limit set by the objective lens's numerical aperture; the quantitative phase information of a sample can be reconstructed from intensity-only measurements; and the aberrations of a microscope system can be characterized and computationally corrected. This computational microscopy technique enhances the performance of conventional optical systems and expands the scope of their applications.
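
A minimal numerical sketch of the Fourier ptychographic recovery loop, using simulated data, an idealized binary pupil, and only nine illumination angles (real implementations refine the pupil/aberration function and use many more LEDs):

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulate a high-resolution complex sample (amplitude and phase). ---
N = 256
sample = (1.0 - 0.5 * rng.random((N, N))) * np.exp(1j * np.pi * (rng.random((N, N)) - 0.5))
SPEC = np.fft.fftshift(np.fft.fft2(sample))          # centered object spectrum

# --- Forward model: each LED angle selects a shifted, pupil-sized patch of the spectrum. ---
M = 64                                               # camera (low-resolution) grid
yy, xx = np.mgrid[:M, :M]
pupil = ((xx - M // 2) ** 2 + (yy - M // 2) ** 2) <= (M // 4) ** 2   # idealized binary pupil
shifts = [(dy, dx) for dy in (-20, 0, 20) for dx in (-20, 0, 20)]    # LED-induced spectrum shifts

def patch(spec, dy, dx):
    cy, cx = N // 2 + dy, N // 2 + dx
    return spec[cy - M // 2: cy + M // 2, cx - M // 2: cx + M // 2]

intensities = [np.abs(np.fft.ifft2(np.fft.ifftshift(patch(SPEC, dy, dx) * pupil))) ** 2
               for dy, dx in shifts]                 # intensity-only low-resolution images

# --- Recovery: alternately enforce measured intensities and stitch the spectrum. ---
EST = np.zeros((N, N), dtype=complex)
EST[N // 2 - M // 2: N // 2 + M // 2, N // 2 - M // 2: N // 2 + M // 2] = \
    np.fft.fftshift(np.fft.fft2(np.sqrt(intensities[4])))            # init from the on-axis LED

for _ in range(25):
    for (dy, dx), I in zip(shifts, intensities):
        cy, cx = N // 2 + dy, N // 2 + dx
        sl = (slice(cy - M // 2, cy + M // 2), slice(cx - M // 2, cx + M // 2))
        low = np.fft.ifft2(np.fft.ifftshift(EST[sl] * pupil))
        low = np.sqrt(I) * np.exp(1j * np.angle(low))                # keep phase, enforce amplitude
        EST[sl] = np.where(pupil, np.fft.fftshift(np.fft.fft2(low)), EST[sl])

recovered = np.fft.ifft2(np.fft.ifftshift(EST))      # complex field: amplitude and phase
print("recovered phase std:", np.angle(recovered).std().round(3))
```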