960 results for Computational studies


Relevance:

30.00%

Publisher:

Abstract:

Today our understanding of the vibrational thermodynamics of materials at low temperatures is well developed, based on the harmonic model in which phonons are independent. At high temperatures, however, this understanding must accommodate how phonons interact with other phonons or with other excitations. We shall see that phonon-phonon interactions give rise to interesting coupling problems and essentially modify the equilibrium and non-equilibrium properties of materials, e.g., thermodynamic stability, heat capacity, optical properties, and thermal transport. Despite its importance, anharmonic lattice dynamics remains poorly understood, and most studies of lattice dynamics still rely on the harmonic or quasiharmonic models; very few address pure phonon anharmonicity and phonon-phonon interactions. The work presented in this thesis is devoted to the development of experimental and computational methods on this subject.

Modern inelastic scattering techniques with neutrons or photons are ideal for sorting out the anharmonic contribution. Analysis of the experimental data yields the vibrational spectra of the materials, i.e., their phonon densities of states or phonon dispersion relations. We obtained high-quality data from a laser Raman spectrometer, a Fourier transform infrared spectrometer, and an inelastic neutron spectrometer. From accurate phonon spectra, we obtained the energy shifts and lifetime broadenings of the interacting phonons and the vibrational entropies of different materials. Interpreting these quantities then relies on the development of fundamental theories and computational methods.

We developed an efficient post-processor for analyzing anharmonic vibrations from molecular dynamics (MD) calculations. Currently, most first-principles methods cannot deal with strong anharmonicity because phonon-phonon interactions at finite temperature are ignored. Our method applies the Fourier-transformed velocity autocorrelation technique to the large volumes of time-dependent atomic velocity data from MD calculations and efficiently reconstructs the phonon DOS and phonon dispersion relations. Our calculations reproduce the phonon frequency shifts and lifetime broadenings very well at various temperatures.
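The core of such a post-processor can be sketched in a few lines. The snippet below is a minimal illustration (not the thesis code): by the Wiener-Khinchin theorem, the power spectrum of the atomic velocities equals the Fourier transform of the velocity autocorrelation function, so a phonon DOS estimate comes directly from an FFT of the MD velocity trajectory. The function name and array shapes are assumptions.

```python
import numpy as np

def phonon_dos(velocities, dt):
    """Phonon density of states from MD velocities (illustrative sketch).

    velocities: array of shape (n_steps, n_atoms, 3), sampled every dt.
    By the Wiener-Khinchin theorem, the power spectrum of the velocities
    equals the Fourier transform of the velocity autocorrelation function,
    so its sum over all degrees of freedom approximates the phonon DOS.
    """
    n_steps = velocities.shape[0]
    v = velocities.reshape(n_steps, -1)   # one column per degree of freedom
    power = np.abs(np.fft.rfft(v, axis=0)) ** 2
    dos = power.sum(axis=1)               # sum over all degrees of freedom
    freqs = np.fft.rfftfreq(n_steps, d=dt)
    dos /= dos.sum()                      # normalize
    return freqs, dos
```

For a harmonic trajectory the spectrum collapses onto the normal-mode frequencies; anharmonic frequency shifts and lifetime broadenings appear directly as shifted, widened peaks.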

To understand non-harmonic interactions microscopically, we developed a numerical fitting method to analyze the decay channels of phonon-phonon interactions. Based on the quantum perturbation theory of many-body interactions, this method calculates the three-phonon and four-phonon kinematics subject to the conservation of energy and momentum, taking into account the weights of phonon couplings. With the calculated two-phonon DOS, we can assess the strengths of phonon-phonon interaction channels of different anharmonic orders. Because of its high computational efficiency, this method is a promising direction for advancing our understanding of non-harmonic lattice dynamics and thermal transport properties.
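As a rough illustration of the kinematics involved, the sketch below (not the thesis code) evaluates a Gaussian-broadened two-phonon DOS for down-conversion decay on a 1D monatomic-chain dispersion: for a phonon at grid point `qi`, it sums over all momentum-conserving pairs and weights each by how well the pair conserves energy. The dispersion and broadening width are illustrative assumptions.

```python
import numpy as np

def two_phonon_dos(omega, qi, sigma=0.05):
    """Gaussian-broadened two-phonon DOS for down-conversion decay (sketch).

    omega: phonon energies omega[q] on a uniform 1D grid of N q-points.
    For a phonon at grid index qi, sum over all pairs (q1, q2) with
    q1 + q2 = qi (mod N, i.e. momentum conservation up to a reciprocal
    lattice vector) a Gaussian weight measuring how well the pair
    conserves energy. A large value means many allowed decay channels.
    """
    n = omega.size
    q1 = np.arange(n)
    q2 = (qi - q1) % n                        # momentum conservation
    mismatch = omega[qi] - omega[q1] - omega[q2]
    return float(np.exp(-mismatch**2 / (2.0 * sigma**2)).sum())

# Illustrative dispersion of a 1D monatomic chain: omega(q) = 2|sin(q/2)|
N = 256
q = 2.0 * np.pi * np.arange(N) / N
omega = 2.0 * np.abs(np.sin(q / 2.0))
channels = two_phonon_dos(omega, qi=N // 2)   # zone-boundary phonon
```

Scanning this quantity over temperature-dependent phonon energies is what produces channel-counting effects like the concave-downward broadening discussed below for SnO2.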

These experimental techniques and theoretical methods were applied successfully to the study of anharmonic behaviors of metal oxides, including rutile and cuprite structures, and are discussed in detail in Chapters 4 to 6. For example, for rutile titanium dioxide (TiO2), we found that the anomalous anharmonic behavior of the B1g mode can be explained by the volume effects on quasiharmonic force constants and by explicit cubic and quartic anharmonicity. For rutile tin dioxide (SnO2), the broadening of the B2g mode with temperature showed an unusual concave-downwards curvature, caused by a change with temperature in the number of down-conversion decay channels, which originates from the wide band gap in the phonon dispersions. For silver oxide (Ag2O), strong anharmonic effects were found both for the phonons and for the negative thermal expansion.

Abstract:

These studies explore how, where, and when representations of variables critical to decision-making arise in the brain. To produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that selects an action from those available. When choosing amongst alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and then compared in an abstract "value space" to produce a decision. Despite much progress, in particular the pinpointing of ventromedial prefrontal cortex (vmPFC) as a region that encodes value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner independent of stimulus type. This confirms that value is represented abstractly, a key tenet of value-based decision-making. However, I show that stimulus-dependent value representations are also present in the brain during decision-making, and I suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.

More broadly, there is both neural and behavioral evidence that two distinct control systems are at work during action selection: the "goal-directed" system, which selects actions based on an internal model of the environment, and the "habitual" system, which generates responses based on antecedent stimuli only. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should utilize stimulus and action information only, while goal-directed behavior requires that outcomes as well as stimuli and actions be processed. In Chapter 3, I test whether areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.
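The differing informational requirements can be made concrete with a toy two-stimulus, two-action task (all numbers are invented for illustration, not taken from the experiments): a model-free learner caches stimulus-action values from reward feedback alone, while a model-based learner estimates the outcome contingencies themselves and plans over them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: 2 stimuli (states) x 2 actions; action a in state s pays off
# with probability p_reward[s, a]. All numbers are illustrative.
p_reward = np.array([[0.8, 0.2],
                     [0.3, 0.7]])

# Habitual / model-free: cache Q(s, a) from stimulus-action-reward data only.
Q = np.zeros((2, 2))
alpha = 0.1
for _ in range(5000):
    s, a = rng.integers(2), rng.integers(2)
    r = float(rng.random() < p_reward[s, a])
    Q[s, a] += alpha * (r - Q[s, a])       # no model of outcomes is ever built

# Goal-directed / model-based: learn the outcome contingencies, then plan.
counts = np.zeros((2, 2))
reward_sums = np.zeros((2, 2))
for _ in range(5000):
    s, a = rng.integers(2), rng.integers(2)
    r = float(rng.random() < p_reward[s, a])
    counts[s, a] += 1
    reward_sums[s, a] += r
outcome_model = reward_sums / np.maximum(counts, 1)
policy_mb = outcome_model.argmax(axis=1)   # planning: greedy over the model
```

Both learners reach the same policy in a stationary task; they dissociate after outcome devaluation, since only the model-based learner can re-plan from its outcome model without further experience.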

The question of whether one or both of these neural systems drives Pavlovian conditioning is less well studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple but non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), it was found that subjects' reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable based on a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment extends even to the most basic learning processes.

Knowledge of the state of hidden variables in an environment is required for optimal inference regarding the abstract decision structure of a given environment and can therefore be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires the generation and manipulation of an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model fitting and comparison process pointed to the use of "belief thresholding": subjects tended to eliminate low-probability hypotheses about the state of the environment from their internal model and ceased updating the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference, consistent with a serial hypothesis-testing strategy.
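A minimal sketch of belief thresholding, with an assumed function name and a hypothetical threshold value: hypotheses whose posterior probability falls below the threshold are zeroed out and, because each subsequent Bayes update multiplies by the current belief, they are never updated again.

```python
import numpy as np

def update_beliefs(beliefs, likelihood, threshold=1e-3):
    """One Bayesian update with belief thresholding (illustrative sketch).

    beliefs: current probabilities over hypotheses about the hidden state.
    likelihood: P(observation | hypothesis) for the new observation.
    Hypotheses whose posterior falls below `threshold` are zeroed out;
    since the next update multiplies by the current belief, they are
    effectively eliminated and receive no further computation.
    """
    posterior = beliefs * likelihood        # Bayes rule (unnormalized)
    posterior /= posterior.sum()
    posterior[posterior < threshold] = 0.0  # prune low-probability hypotheses
    if posterior.sum() > 0.0:
        posterior /= posterior.sum()        # renormalize over survivors
    return posterior
```

Feeding repeated observations that favor one hypothesis drives the weakest hypothesis to exactly zero after a few updates, mirroring serial hypothesis elimination layered on top of incremental Bayesian learning.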

Abstract:

We carried out quantum mechanics (QM) studies aimed at improving the performance of hydrogen fuel cells. This led to predictions of improved materials, some of which were subsequently validated with experiments by our collaborators.

In Part I, the challenge was to find a replacement for the Pt cathode that would improve performance for the oxygen reduction reaction (ORR) while remaining stable under operational conditions and reducing cost. Our design strategy was to find an alloy with composition Pt3M that would lead to surface segregation such that the top layer would be pure Pt, with the second and subsequent layers richer in M. Under operating conditions we expect significant O and/or OH to be chemisorbed on the surface, and hence we searched for M that would remain segregated under these conditions. Using QM we examined surface segregation for 28 Pt3M alloys, where M is a transition metal. We found that only Pt3Os and Pt3Ir showed significant surface segregation when O and OH are chemisorbed on the catalyst surface. This result indicates that Pt3Os and Pt3Ir favor formation of a Pt-skin surface layer that would resist corrosion by the acidic electrolyte under fuel cell operating conditions. We chose to focus on Os because the Pt-Ir phase diagram indicated that Pt-Ir cannot form a homogeneous alloy at lower temperatures. To determine the ORR performance, we used QM to examine all intermediates, reaction pathways, and reaction barriers involved in the processes by which protons from the anode reaction react with O2 to form H2O. These QM calculations used our Poisson-Boltzmann implicit solvation model to include the effects of the solvent (water with dielectric constant 78 at pH 7 and 298 K). We found that the rate-determining step (RDS) in both cases was the Oad hydration reaction (Oad + H2Oad -> OHad + OHad), but that the 0.50 eV barrier for pure Pt is reduced to 0.48 eV for Pt3Os, which at 80 degrees C would increase the rate by 218%.
We collaborated with Pu-Wei Wu's group on experiments, which found that the dealloyed Pt2Os catalyst showed two-fold higher activity at 25 degrees C than pure Pt and 272% improved stability, validating our theoretical predictions.
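The connection between a barrier change and a rate change can be sketched with the Boltzmann/Arrhenius factor alone. The snippet below compares only the exponential parts, assuming equal prefactors, so it is an order-of-magnitude sketch rather than the full microkinetic rate used in the thesis.

```python
import math

K_B_EV = 8.617333e-5  # Boltzmann constant in eV/K

def arrhenius_speedup(e_old, e_new, temperature_k):
    """Rate ratio k_new / k_old from a change in activation barrier (eV).

    Compares only the Boltzmann exponentials; prefactors are assumed
    equal, so this is an order-of-magnitude sketch, not a full
    microkinetic rate calculation.
    """
    return math.exp((e_old - e_new) / (K_B_EV * temperature_k))

# Illustrative: a 0.50 eV barrier lowered to 0.48 eV at 80 degrees C (353 K)
speedup = arrhenius_speedup(0.50, 0.48, 353.15)
```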

We also carried out similar QM studies, followed by experimental validation, for the Os/Pt core-shell catalyst fabricated by the underpotential deposition (UPD) method. The QM results indicated that the RDS for ORR is a compromise between the OOH formation step (0.37 eV for Pt, 0.23 eV for Pt2ML/Os core-shell) and the H2O formation step (0.32 eV for Pt, 0.22 eV for Pt2ML/Os core-shell). We found that Pt2ML/Os has the highest activity (compared to pure Pt and to the Pt3Os alloy) because the 0.37 eV barrier decreases to 0.23 eV. To understand what aspects of the core-shell structure lead to this improved performance, we considered the effect on ORR of compressing the alloy slab to the dimensions of pure Pt. This had little effect, leaving the RDS barrier at 0.37 eV, which shows that the ligand effect (the electronic structure modification resulting from the Os substrate) plays a more important role than the strain effect and is responsible for the improved activity of the core-shell catalyst. Experimental materials characterization confirmed the core-shell structure of our catalyst. Electrochemical experiments for Pt2ML/Os/C showed 3.5 to 5 times better ORR activity at 0.9 V (vs. NHE) in 0.1 M HClO4 solution at 25 degrees C compared to commercially available Pt/C. The excellent correlation between the experimental half-wave potential and the OH binding energies and RDS barriers validates the feasibility of predicting catalyst activity using QM calculations and a simple Langmuir-Hinshelwood model.

In Part II, we used QM calculations to study methane steam reforming on Ni-alloy catalyst surfaces for solid oxide fuel cell (SOFC) applications. SOFCs have wide fuel adaptability, but coking and sulfur poisoning reduce their stability. Experimental results suggested that the Ni4Fe alloy improves both activity and stability compared to pure Ni. To understand the atomistic origin of this, we carried out QM calculations on surface segregation and found that the most stable configuration for Ni4Fe has an Fe distribution of (0%, 50%, 25%, 25%, 0%) starting from the bottom layer. We calculated the binding energy of C atoms on the Ni4Fe surface to be 142.9 kcal/mol, about 10 kcal/mol weaker than on the pure Ni surface. This weaker C binding is expected to make coke formation less favorable, explaining why Ni4Fe has better coking resistance and confirming the experimental observation. The reaction energy barriers for CHx decomposition and C binding on various alloy surfaces, Ni4X (X = Fe, Co, Mn, and Mo), showed that Ni4Fe, Ni4Co, and Ni4Mn all have better coking resistance than pure Ni, but that only Ni4Fe and Ni4Mn have (slightly) improved activity compared to pure Ni.

In Part III, we used QM to examine proton transport in doped perovskite ceramics. We used a 2x2x2 supercell of perovskite with composition Ba8X7M1(OH)1O23, where X = Ce or Zr and M = Y, Gd, or Dy; thus in each case a 4+ X is replaced by a 3+ M plus a proton on one O. We predicted the barriers for proton diffusion, allowing both intra-octahedron and inter-octahedra proton transfer. Without any restriction, we observed only inter-octahedra proton transfer, with an energy barrier similar to previous computational work but 0.2 eV higher than the experimental result for Y-doped zirconate. When the Odonor-Oacceptor distance was kept fixed, we found that the barrier differences between cerates and zirconates with various dopants are only 0.02-0.03 eV. To fully address performance, one would need to examine proton transfer at grain boundaries, which will require larger-scale ReaxFF reactive dynamics on systems with millions of atoms. The QM calculations reported here will be used to train the ReaxFF force field.

Abstract:

Parallel-strand models for the base sequences d(A)10·d(T)10, d(AT)5·d(TA)5, d(G5C5)·d(C5G5), d(GC)5·d(CG)5 and d(CTATAGGGAT)·d(GATATCCCTA), adopting reverse Watson-Crick A-T pairing with two H-bonds and reverse Watson-Crick G-C pairing with one or two H-bonds, together with three models of the d(T)14·d(A)14·d(T)14 triple helix with different strand orientations, were built up by molecular architecture and energy minimization. Comparisons of the parallel duplex models with their corresponding B-DNA models, and among the three triple helices, showed: (i) the conformational energies of the parallel A-T duplex models were slightly lower than, while those of the G-C duplex models were about 8% higher than, those of their corresponding B-DNA models; (ii) the energy differences between parallel and B-type duplex models and among the three triple helices arose mainly from base stacking energies, especially for G-C pairing; (iii) parallel duplexes with one-H-bond G-C pairs were less stable than those with two-H-bond G-C pairs. The paper includes a brief discussion of the effect of base stacking and base sequences on DNA conformations. (C) 1997 Academic Press Limited.

Abstract:

From modelling to manufacturing, computers have increasingly become partners in the design process, helping automate many phases once carried out by hand. In the creative phase, computational synthesis methods aim to facilitate designers' tasks through the automated generation of optimally directed design alternatives. Nevertheless, applications of these techniques remain mainly academic, and industrial design practice is still far from applying them routinely. This is due to the complex nature of many design tasks and to the difficulty of developing synthesis methods that can be easily adapted to multiple case studies and automated simulation. This work stems from an analysis of the implementation issues and obstacles to the widespread use of these tools. The research investigates the possibility of removing these obstacles by applying a novel technique to complex design tasks, and demonstrates the ability of this technique to scale up without sacrificing accuracy. The successful results confirm that synthesis methods can be used in complex design tasks, broadening their commercial and industrial application.

Abstract:

Two-phase computational fluid dynamics modelling is used to investigate the magnitude of different contributions to the wet steam losses in a three-stage model low-pressure steam turbine. The thermodynamic losses (due to irreversible heat transfer across a finite temperature difference) and the kinematic relaxation losses (due to the frictional drag of the drops) are evaluated directly from the computational fluid dynamics simulation using a concept based on entropy production rates. The braking losses (due to the impact of large drops on the rotor) are investigated by a separate numerical prediction. The simulations show that, in the present case, the dominant effect is the thermodynamic loss, which accounts for over 90% of the wetness losses, and that both the thermodynamic and the kinematic relaxation losses depend on the droplet diameter. The numerical results are brought into context with the well-known Baumann correlation, and a comparison with available measurement data in the literature is given. The ability of the numerical approach to predict the main wetness losses is confirmed, which permits the use of computational fluid dynamics for further studies on wetness loss correlations. © IMechE 2013.
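The thermodynamic loss mechanism can be illustrated with the textbook entropy-production expression for heat crossing a finite temperature difference; the numbers below are illustrative, not taken from the simulations.

```python
def entropy_production_rate(q_dot, t_hot, t_cold):
    """Entropy production rate (W/K) for heat flow q_dot (W) crossing a
    finite temperature difference from t_hot to t_cold (K) -- the
    irreversibility underlying the thermodynamic wetness loss.
    """
    return q_dot * (1.0 / t_cold - 1.0 / t_hot)

# Illustrative numbers (not from the simulations): 1 kW of latent heat
# released by drops at 330 K into subcooled vapor at 320 K.
s_dot = entropy_production_rate(1000.0, 330.0, 320.0)
lost_power = 300.0 * s_dot   # lost work rate, with an assumed 300 K dead state
```

Summing such entropy production rates over the flow field is the essence of the loss-accounting concept used in the paper.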

Abstract:

Our recent studies on the kinetic behavior of gas flows are reviewed in this paper. These flows arise in a wide range of contexts but share the common feature that the flow Knudsen number is larger than 0.01, so kinetic approaches such as the direct simulation Monte Carlo (DSMC) method are required for their description. In the past few years, we studied several micro/nano-scale flows by developing a novel particle simulation approach, and investigated flows in low-pressure chambers and at high altitude. In addition, we analyzed the microscopic behavior of a couple of classical flow problems, showing the potential of kinetic approaches to reveal the microscopic mechanisms of gas flows.
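The Kn > 0.01 criterion can be made concrete with the hard-sphere mean free path; the gas properties below are illustrative.

```python
import math

K_B_J = 1.380649e-23  # Boltzmann constant, J/K

def knudsen_number(temperature_k, pressure_pa, molecule_diameter_m, length_m):
    """Knudsen number Kn = lambda / L for a hard-sphere gas.

    Mean free path: lambda = k_B T / (sqrt(2) pi d^2 p). For Kn > 0.01
    the continuum (Navier-Stokes) description starts to break down and
    kinetic methods such as DSMC become necessary.
    """
    mean_free_path = K_B_J * temperature_k / (
        math.sqrt(2.0) * math.pi * molecule_diameter_m**2 * pressure_pa)
    return mean_free_path / length_m

# Illustrative: air-like gas (d ~ 0.37 nm) at 1 atm, 300 K in a 1 um channel
kn = knudsen_number(300.0, 101325.0, 3.7e-10, 1.0e-6)
```

Even at atmospheric pressure, micron-scale geometry puts the flow in the slip/transition regime (Kn well above 0.01), which is why micro/nano-scale flows need kinetic treatment.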

Abstract:

In this research, we found that CoMFA alone could not produce a sufficiently strong equation to allow confident prediction for aminobenzenes. When another parameter, such as the heat of formation of the compounds, was introduced into the CoMFA model, the results were improved greatly. This suggests that a better description of molecular structures yields a better prediction model, and it prompted us to look for another method: the projection areas of molecules in 3D space for 3D-QSAR. Surprisingly, this achieved much better results than CoMFA. Besides CoMFA analysis, multiple regression analysis and neural network methods were used to build the models in this paper.
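The effect of augmenting a field-based model with an additional descriptor can be sketched as a least-squares comparison on synthetic data; the descriptors and coefficients below are invented for illustration and are not the paper's dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

def r_squared(descriptors, y):
    """Coefficient of determination of an ordinary least-squares fit."""
    X = np.column_stack([np.ones(len(y)), descriptors])   # add intercept
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Synthetic QSAR-like data: activity depends on a field descriptor x1 and
# a thermochemical descriptor x2 (a stand-in for heat of formation).
n = 60
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
activity = 0.6 * x1 + 0.8 * x2 + rng.normal(scale=0.3, size=n)

r2_field_only = r_squared(x1[:, None], activity)
r2_augmented = r_squared(np.column_stack([x1, x2]), activity)
```

The augmented model recovers variance the single-descriptor model cannot, the same qualitative pattern reported when the thermochemical parameter is added to CoMFA.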

Abstract:

Quantitative structure-activity/property relationship (QSAR/QSPR) studies have been exploited extensively in the design of drugs and pesticides, but few such studies have been applied to the design of colour reagents. In this work, the topological indices A(x1)-A(x3) suggested in this laboratory were applied to multivariate analysis in structure-property studies. The topological indices of 43 phosphono bisazo derivatives of chromotropic acid were calculated, and the structure-property relationships between the colour reagents and their colour reactions with cerium were studied using the A(x1)-A(x3) indices, with satisfactory results. The purpose of this work was to establish whether QSAR can be used to predict the contrasts of colour reactions and, in the longer term, to become a helpful tool in colour reagent design.

Abstract:

In this paper, the new topological indices A(x1)-A(x3) suggested in our laboratory, together with molecular connectivity indices, were applied to multivariate analysis in structure-property studies. The topological indices of twenty asymmetrical phosphono bisazo derivatives of chromotropic acid were calculated, and the structure-property relationships between the colour reagents and their colour reactions with ytterbium were studied using the A(x1)-A(x3) and molecular connectivity indices, with satisfactory results. Multiple regression analysis and neural networks were employed simultaneously in this study.

Abstract:

The conditional nonlinear optimal perturbation (CNOP), a nonlinear generalization of the linear singular vector (LSV), is applied to important problems in the atmospheric and oceanic sciences, including ENSO predictability, targeted observations, and ensemble forecasting. In this study, we investigate the computational cost of obtaining the CNOP by several methods. Differences and similarities in computational error and cost are compared among the sequential quadratic programming (SQP) algorithm, the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm, and the spectral projected gradient (SPG2) algorithm, using a theoretical grassland ecosystem model and the classical Lorenz model as examples. Numerical results demonstrate that the computational error is acceptable with all three algorithms and that the computational cost is reduced by using the SQP algorithm. The experimental results also reveal that the L-BFGS algorithm is the most effective of the three optimization algorithms for obtaining the CNOP. The numerical results suggest a new approach and algorithm for obtaining the CNOP in large-scale optimization problems.
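A minimal stand-in for these solvers is projected gradient ascent on the CNOP objective: maximize the nonlinear perturbation growth subject to an initial-norm constraint. The toy "forecast model" and all parameters below are illustrative assumptions, not the grassland or Lorenz models of the study.

```python
import numpy as np

def cnop_projected_gradient(model, x0, radius, steps=200, lr=0.05):
    """Conditional nonlinear optimal perturbation by projected gradient ascent.

    Maximizes J(p) = ||model(x0 + p) - model(x0)|| over perturbations p with
    ||p|| <= radius, using finite-difference gradients and projection back
    onto the constraint ball -- a minimal stand-in for SQP/L-BFGS/SPG2.
    """
    base = model(x0)
    p = np.zeros_like(x0)
    eps = 1e-6
    for _ in range(steps):
        j0 = np.linalg.norm(model(x0 + p) - base)
        grad = np.zeros_like(p)
        for i in range(p.size):              # finite-difference gradient of J
            dp = p.copy()
            dp[i] += eps
            grad[i] = (np.linalg.norm(model(x0 + dp) - base) - j0) / eps
        p = p + lr * grad                    # gradient ascent step
        norm = np.linalg.norm(p)
        if norm > radius:
            p = p * (radius / norm)          # project onto ||p|| <= radius
    return p

# Toy nonlinear "forecast model": several steps of a logistic-type map
def toy_model(x, r=2.5, n=8):
    x = np.array(x, dtype=float)
    for _ in range(n):
        x = x + r * x * (1.0 - x)
    return x

x0 = np.array([0.2])
p_star = cnop_projected_gradient(toy_model, x0, radius=0.01)
```

The constrained maximizer typically sits on the constraint boundary, which is why dedicated constrained solvers such as SQP and SPG2 are natural choices for the full-scale problem.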

Abstract:

This article describes advances in statistical computation for large-scale data analysis in structured Bayesian mixture models via graphics processing unit (GPU) programming. The developments are partly motivated by computational challenges arising in fitting models of increasing heterogeneity to increasingly large datasets. An example context concerns common biological studies using high-throughput technologies, which generate many very large datasets and require increasingly high-dimensional mixture models with large numbers of mixture components. We outline important strategies and processes for GPU computation in Bayesian simulation and optimization approaches, give examples of the benefits of GPU implementations in terms of processing speed and scale-up in the ability to analyze large datasets, and provide a detailed, tutorial-style exposition that will benefit readers interested in developing GPU-based approaches for other statistical models. Novel, GPU-oriented approaches to modifying existing algorithms and software design can lead to vast speed-ups and, critically, enable statistical analyses that presently go unperformed due to compute time limitations in traditional computational environments. Supplemental materials are provided with all source code, example data, and details that will enable readers to implement and explore the GPU approach in this mixture modeling context. © 2010 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
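The computation that benefits most from GPUs in this setting is the per-observation, per-component density evaluation in the mixture E-step, which is embarrassingly parallel. The numpy sketch below is a CPU analogue of that data-parallel structure (illustrative, not the article's code), with log-space stabilization.

```python
import numpy as np

def responsibilities(x, means, variances, weights):
    """E-step of a 1D Gaussian mixture as one data-parallel array
    expression over an (n_obs x n_components) grid -- the structure
    that maps directly onto per-thread GPU evaluation.
    """
    log_dens = (np.log(weights)
                - 0.5 * np.log(2.0 * np.pi * variances)
                - 0.5 * (x[:, None] - means) ** 2 / variances)
    log_dens -= log_dens.max(axis=1, keepdims=True)   # log-sum-exp stabilization
    dens = np.exp(log_dens)
    return dens / dens.sum(axis=1, keepdims=True)

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-3.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])
r = responsibilities(x,
                     means=np.array([-3.0, 3.0]),
                     variances=np.array([1.0, 1.0]),
                     weights=np.array([0.5, 0.5]))
```

Because every (observation, component) cell is independent, the same expression ports directly to one GPU thread per cell, which is where the reported speed-ups come from.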

Abstract:

Transcriptional regulation has been studied intensively in recent decades. One important aspect of this regulation is the interaction between regulatory proteins, such as transcription factors (TFs) and nucleosomes, and the genome. Different high-throughput techniques have been developed to map these interactions genome-wide, including ChIP-based methods (ChIP-chip, ChIP-seq, etc.), nuclease digestion methods (DNase-seq, MNase-seq, etc.), and others. However, a single experimental technique often provides only partial and noisy information about the whole picture of protein-DNA interactions. Therefore, the overarching goal of this dissertation is to provide computational developments for jointly modeling different experimental datasets to achieve a holistic inference of the protein-DNA interaction landscape.

We first present a computational framework that can incorporate the protein binding information in MNase-seq data into a thermodynamic model of protein-DNA interaction. We use a correlation-based objective function to model the MNase-seq data and a Markov chain Monte Carlo method to maximize the function. Our results show that the inferred protein-DNA interaction landscape is concordant with the MNase-seq data and provides a mechanistic explanation for the experimentally collected MNase-seq fragments. Our framework is flexible and can easily incorporate other data sources. To demonstrate this flexibility, we use prior distributions to integrate experimentally measured protein concentrations.
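A toy version of this fitting strategy (synthetic data and invented parameter names; not the dissertation's model) is Metropolis sampling over binding parameters with a correlation-based objective between predicted and observed coverage:

```python
import numpy as np

rng = np.random.default_rng(3)

def correlation(a, b):
    """Pearson correlation, the shape-matching objective against the data."""
    a = a - a.mean()
    b = b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

def predict(center, width, n=200):
    """Toy forward model: fragment coverage from a single bound protein."""
    pos = np.arange(n)
    return np.exp(-0.5 * ((pos - center) / width) ** 2)

# Synthetic "observed" coverage: one site at position 120, width 15, plus noise
observed = predict(120.0, 15.0) + rng.normal(scale=0.05, size=200)

# Metropolis search over (center, width), maximizing the correlation objective
params = np.array([90.0, 10.0])
score = correlation(predict(*params), observed)
temp = 0.01                                   # fictitious sampling temperature
for _ in range(4000):
    proposal = params + rng.normal(scale=[5.0, 1.0])
    if proposal[1] <= 1.0:                    # keep the width physical
        continue
    new_score = correlation(predict(*proposal), observed)
    if new_score > score or rng.random() < np.exp((new_score - score) / temp):
        params, score = proposal, new_score
```

The correlation objective rewards matching the shape of the coverage profile rather than its absolute scale, which is convenient when sequencing depth is arbitrary.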

We also study the ability of DNase-seq data to position nucleosomes. Traditionally, DNase-seq has only been widely used to identify DNase hypersensitive sites, which tend to be open chromatin regulatory regions devoid of nucleosomes. We reveal for the first time that DNase-seq datasets also contain substantial information about nucleosome translational positioning, and that existing DNase-seq data can be used to infer nucleosome positions with high accuracy. We develop a Bayes-factor-based nucleosome scoring method to position nucleosomes using DNase-seq data. Our approach utilizes several effective strategies to extract nucleosome positioning signals from the noisy DNase-seq data, including jointly modeling data points across the nucleosome body and explicitly modeling the quadratic and oscillatory DNase I digestion pattern on nucleosomes. We show that our DNase-seq-based nucleosome map is highly consistent with previous high-resolution maps. We also show that the oscillatory DNase I digestion pattern is useful in revealing the nucleosome rotational context around TF binding sites.

Finally, we present a state-space model (SSM) for jointly modeling different kinds of genomic data to provide an accurate view of the protein-DNA interaction landscape. We also provide an efficient expectation-maximization algorithm to learn model parameters from data. We first show in simulation studies that the SSM can effectively recover underlying true protein binding configurations. We then apply the SSM to model real genomic data (both DNase-seq and MNase-seq data). Through incrementally increasing the types of genomic data in the SSM, we show that different data types can contribute complementary information for the inference of protein binding landscape and that the most accurate inference comes from modeling all available datasets.

This dissertation provides a foundation for future research by taking a step toward the genome-wide inference of protein-DNA interaction landscape through data integration.

Abstract:

The adoption of antisense gene silencing as a novel disinfectant for prokaryotic organisms is hindered by poor silencing efficiencies. Few studies have considered the effects of off-targets on silencing efficiencies, especially in prokaryotic organisms. In this computational study, a novel algorithm was developed that determined and sorted the number of off-targets as a function of alignment length in Escherichia coli K-12 MG1655 and Mycobacterium tuberculosis H37Rv. The mean number of off-targets for a single location was calculated to be 14.1 ± 13.3 and 36.1 ± 58.5 for the genomes of E. coli K-12 MG1655 and M. tuberculosis H37Rv, respectively. Furthermore, when the entire transcriptome was analyzed, no general gene location was found that could be targeted to minimize or maximize the number of off-targets. To determine the effects of off-targets on silencing efficiencies, previously published studies were used. Analyses with acpP, ino1, and marORAB revealed a statistically significant relationship between the number of short-alignment-length off-target hybrids and the efficacy of antisense gene silencing, suggesting that minimizing off-targets may be beneficial for antisense gene silencing in prokaryotic organisms. © 2014.
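The central counting step of such an algorithm can be sketched with a k-mer index (tiny invented genome for illustration; a real analysis would also consider the reverse complement and transcriptome context): for each alignment-length window of the antisense target, count how many other genomic sites match exactly.

```python
from collections import defaultdict

def count_off_targets(genome, target, k):
    """For each k-mer window of the target sequence, count its occurrences
    elsewhere in the genome (sketch; the on-target occurrence itself is
    subtracted, assuming the target lies within the genome).
    """
    index = defaultdict(int)
    for i in range(len(genome) - k + 1):        # build the k-mer index
        index[genome[i:i + k]] += 1
    counts = []
    for i in range(len(target) - k + 1):        # scan target windows
        kmer = target[i:i + k]
        counts.append(max(index[kmer] - 1, 0))  # subtract the on-target hit
    return counts

genome = "ATGCGTATGCGTTTATGCAA"
offs = count_off_targets(genome, target="ATGCGT", k=4)
```

Sweeping `k` over a range of alignment lengths yields the off-target counts as a function of alignment length that the study tabulates genome-wide.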