988 results for Monte Carlo experiments


Relevance: 90.00%

Abstract:

Redshift Space Distortions (RSD) are an apparent anisotropy in the distribution of galaxies due to their peculiar motion. These features are imprinted in the correlation function of galaxies, which describes how these structures are distributed around each other. RSD can be represented by a distortion parameter $\beta$, which is strictly related to the growth of cosmic structures. For this reason, measurements of RSD can be exploited to constrain cosmological parameters such as, for example, the neutrino mass. Neutrinos are neutral subatomic particles that come in three flavours: electron, muon and tau. Their mass differences can be measured in oscillation experiments, while information on the absolute scale of the neutrino mass can come from cosmology, since neutrinos leave a characteristic imprint on the large-scale structure of the universe. The aim of this thesis is to provide constraints on the accuracy with which the neutrino mass can be estimated by exploiting measurements of RSD. In particular, we describe how the error on the neutrino mass estimate depends on three fundamental parameters of a galaxy redshift survey: the density of the catalogue, the bias of the sample considered and the volume observed. To this end, we make use of the BASICC simulation, from which we extract a series of dark matter halo catalogues characterized by different values of bias, density and volume. These mock data are analysed via a Markov Chain Monte Carlo procedure in order to estimate the neutrino mass fraction, using a suitably modified version of the software package CosmoMC. In this way we are able to extract a fitting formula describing our measurements, which can be used to forecast the precision reachable with this kind of observation in future surveys such as Euclid.
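The Markov Chain Monte Carlo step described above is, at its core, a Metropolis random walk over the parameter space. The following minimal sketch in Python illustrates the idea on a single parameter; it is not CosmoMC, and the measurement values, the toy $\beta(f_\nu)$ dependence and the prior range are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical mock measurement: distortion parameter beta and its error.
beta_obs, sigma_beta = 0.43, 0.03

def beta_model(f_nu):
    # Toy stand-in for the beta(f_nu) prediction; the real dependence
    # would come from a Boltzmann code such as the one driven by CosmoMC.
    return 0.45 * (1.0 - 0.6 * f_nu)

def log_posterior(f_nu):
    if not 0.0 <= f_nu <= 0.2:          # assumed flat prior on the neutrino fraction
        return -np.inf
    return -0.5 * ((beta_model(f_nu) - beta_obs) / sigma_beta) ** 2

# Metropolis random walk
chain, f = [], 0.05
lp = log_posterior(f)
for _ in range(20000):
    f_new = f + rng.normal(0.0, 0.01)   # proposal step
    lp_new = log_posterior(f_new)
    if np.log(rng.random()) < lp_new - lp:
        f, lp = f_new, lp_new           # accept
    chain.append(f)

burned = np.array(chain[5000:])         # discard burn-in
print(f"f_nu = {burned.mean():.4f} +/- {burned.std():.4f}")
```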

Relevance: 90.00%

Abstract:

For the improvement of current neutron capture therapy, several liposomal formulations of the neutron capture agent gadolinium were developed and tested in a glioma cell model. The formulations were analyzed with regard to physicochemical and biological parameters, such as size, zeta potential, uptake into cancer cells and performance under neutron irradiation. The neutron and photon dose derived from intracellular as well as extracellular Gd was calculated via Monte Carlo simulations and correlated with the reduction of cell survival after irradiation. To investigate the suitability of Gd as a radiosensitizer for photon radiation, cells were also irradiated with synchrotron radiation in addition to clinically used photons generated by a linear accelerator. Irradiation with neutrons led to significantly lower survival for Gd-liposome-treated F98 and LN229 cells, compared with irradiated control cells and cells treated with non-liposomal Gd-DTPA. The correlation between Gd content and dose and the respective cell survival displayed a proportional relationship for most of the applied formulations. Photon irradiation experiments provided a proof of principle for the radiosensitizer approach, although the photon spectra currently used have to be optimized for higher efficiency of the radiosensitizer. In conclusion, the newly developed Gd-liposomes show great potential for improving radiation treatment options for highly malignant glioblastoma.

Relevance: 90.00%

Abstract:

In condensed matter systems, the interfacial tension plays a central role in a multitude of phenomena. It is the driving force for nucleation processes, determines the shape and structure of crystalline structures and is important for industrial applications. Despite its importance, the interfacial tension is hard to determine in experiments and also in computer simulations. While sophisticated simulation methods exist to compute liquid-vapor interfacial tensions, current methods for solid-liquid interfaces produce unsatisfactory results.

As a first approach to this topic, the influence of the interfacial tension on nuclei is studied within the three-dimensional Ising model. This model is well suited because, despite its simplicity, one can learn much about the nucleation of crystalline nuclei. Below the so-called roughening temperature, nuclei in the Ising model are no longer spherical but become cubic because of the anisotropy of the interfacial tension. This is similar to crystalline nuclei, which are in general not spherical but rather convex polyhedra with flat facets on the surface. In this context, the problem of distinguishing between the two bulk phases in the vicinity of the diffuse droplet surface is addressed. A new definition is found which correctly determines the volume of a droplet in a given configuration when compared to the volume predicted by simple macroscopic assumptions.

To compute the interfacial tension of solid-liquid interfaces, a new Monte Carlo method called the "ensemble switch method" is presented, which allows the interfacial tension of liquid-vapor as well as solid-liquid interfaces to be computed with great accuracy. In the past, the dependence of the interfacial tension on the finite size and shape of the simulation box has often been neglected, although there is a nontrivial dependence on the box dimensions. As a consequence, one needs to systematically increase the box size and extrapolate to infinite volume in order to accurately predict the interfacial tension. Therefore, a thorough finite-size scaling analysis is established in this thesis. Logarithmic corrections to the finite-size scaling are motivated and identified; they are of leading order and therefore must not be neglected. The astounding feature of these logarithmic corrections is that they do not depend at all on the model under consideration. Using the ensemble switch method, the validity of a finite-size scaling ansatz containing the aforementioned logarithmic corrections is carefully tested and confirmed. Combining the finite-size scaling theory with the ensemble switch method, the interfacial tension of several model systems, ranging from the Ising model to colloidal systems, is computed with great accuracy.
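The abstract does not spell out the scaling ansatz itself. A generic form consistent with the description, i.e. a leading-order logarithmic correction with a model-independent coefficient, would read (the symbols $a$ and $b$ and the precise power of $L$ are assumptions here, not taken from the thesis):

\[
\gamma_{\mathrm{eff}}(L) \;=\; \gamma_{\infty} + \frac{a \ln L + b}{L^{\,d-1}},
\]

where $\gamma_{\infty}$ is the infinite-volume interfacial tension, $L$ the linear box dimension, $d$ the spatial dimension, $a$ a model-independent coefficient and $b$ a nonuniversal constant; extrapolation to $L \to \infty$ then yields $\gamma_{\infty}$.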

Relevance: 90.00%

Abstract:

In this thesis I present a new coarse-grained model suitable for investigating the phase behavior of rod-coil block copolymers on mesoscopic length scales. In this model the rods are represented by hard spherocylinders, whereas the coil block consists of interconnected beads. The interactions between the constituents are based on local densities, which facilitates an efficient Monte Carlo sampling of the phase space. I verify the applicability of the model and the simulation approach by means of several examples. I treat pure rod systems and mixtures of rod and coil polymers. Then I append coils to the rods and investigate the role of the different model parameters. Furthermore, I compare different implementations of the model. I demonstrate the capability of the rod-coil block copolymers in our model to exhibit typical micro-phase-separated configurations as well as extraordinary phases, such as the wavy lamellar state, percolating structures and clusters. Additionally, I demonstrate the metastability of the observed zigzag phase in our model. A central point of this thesis is the examination of the phase behavior of the rod-coil block copolymers in dependence on the chain lengths and the interaction strengths between rods and coils. The observations of these studies are summarized in a phase diagram for rod-coil block copolymers. Furthermore, I validate a stabilization of the smectic phase with increasing coil fraction.

In the second part of this work I present a side project in which I derive a model permitting the simulation of tetrapods with and without grafted semiconducting block copolymers. The effect of these polymers is added in an implicit manner through effective interactions between the tetrapods. While the depletion interaction is described approximately within the Asakura-Oosawa model, the free-energy penalty for the brush compression is calculated within the Alexander-de Gennes model. Recent experiments with CdSe tetrapods show that grafted tetrapods are clearly much better dispersed in the polymer matrix than bare tetrapods. My simulations confirm that bare tetrapods tend to aggregate in the matrix of excess polymers, while clustering is significantly reduced after grafting polymer chains to the tetrapods. Finally, I propose a possible extension enabling the simulation of a system with fluctuating volume and demonstrate its basic functionality. This study originated in a cooperation with an experimental group, with the goal of analyzing the morphology of these systems in order to find the ideal morphology for hybrid solar cells.
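For the depletion part, the Asakura-Oosawa pair potential between two hard spheres is minus the ideal depletant osmotic pressure times the lens-shaped overlap volume of their exclusion shells. A minimal sketch of this sphere-sphere form follows; treating the tetrapods as spheres of radius R_c is a simplification made here for illustration, and all numerical values are hypothetical:

```python
import numpy as np

kT = 1.0  # thermal energy unit

def ao_depletion(r, R_c, r_d, n_d):
    """Asakura-Oosawa depletion attraction between two hard spheres.

    r   : centre-to-centre distance(s)
    R_c : colloid radius (hard core: the potential diverges for r < 2*R_c,
          which is handled by the hard-sphere overlap check, not here)
    r_d : effective depletant (excess polymer) radius
    n_d : depletant number density
    """
    R = R_c + r_d                      # radius of the exclusion sphere
    r = np.asarray(r, dtype=float)
    # lens-shaped overlap volume of the two exclusion spheres
    v_ov = (4*np.pi/3) * R**3 * (1 - 3*r/(4*R) + r**3/(16*R**3))
    return np.where((r >= 2*R_c) & (r < 2*R), -n_d * kT * v_ov, 0.0)

# hypothetical numbers purely for illustration
print(ao_depletion([2.1, 2.5], R_c=1.0, r_d=0.2, n_d=0.3))
```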

Relevance: 90.00%

Abstract:

Tissue phantoms play a central role in validating biomedical imaging techniques. Here we employ a series of methods that aim to fully determine the optical properties, i.e., the refractive index n, absorption coefficient μa, transport mean free path ℓ∗, and scattering coefficient μs, of a TiO2-in-gelatin phantom intended for use in optoacoustic imaging. For the determination of the key parameters μa and ℓ∗, we employ a variant of time-of-flight measurements in which fiber optodes are immersed into the phantom to minimize the influence of boundaries. The robustness of the method was verified with Monte Carlo simulations, where the experimentally obtained values served as input parameters for the simulations. The excellent agreement between simulations and experiments confirmed the reliability of the results. The parameters determined at 780 nm are n = 1.359(±0.002), μ′s = 1/ℓ∗ = 0.22(±0.02) mm⁻¹, μa = 0.0053(+0.0006/−0.0003) mm⁻¹, and μs = 2.86(±0.04) mm⁻¹. The asymmetry parameter g obtained from the parameters μ′s and μs is 0.93, which indicates that the scattering entities are not bare TiO2 particles but large sparse clusters. The interaction between the scattering particles and the gelatin matrix should be taken into account when developing such phantoms.
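The quoted asymmetry parameter follows from the standard relation μ′s = μs(1 − g) between the reduced and the full scattering coefficient; with the values above,

\[
g \;=\; 1 - \frac{\mu_s'}{\mu_s} \;=\; 1 - \frac{0.22\ \mathrm{mm^{-1}}}{2.86\ \mathrm{mm^{-1}}} \;\approx\; 0.92,
\]

consistent with the quoted g = 0.93 within the stated uncertainties.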

Relevance: 90.00%

Abstract:

(31)P MRS magnetization transfer ((31)P-MT) experiments allow the estimation of exchange rates of biochemical reactions, such as the creatine kinase equilibrium and adenosine triphosphate (ATP) synthesis. Although various (31)P-MT methods have been used successfully on isolated organs or animals, their application to humans in clinical scanners poses specific challenges. This study compared two major (31)P-MT methods on a clinical MR system using heteronuclear surface coils. Although saturation transfer (ST) is the most commonly used (31)P-MT method, sequences such as inversion transfer (IT) with short pulses might be better suited to the specific hardware and software limitations of a clinical scanner. In addition, small NMR-undetectable metabolite pools can transfer magnetization to NMR-visible pools during long saturation pulses, which is prevented with short pulses. The (31)P-MT sequences were adapted for limited pulse length, for heteronuclear transmit-receive surface coils with inhomogeneous B1, for the need for volume selection and for the inherently low signal-to-noise ratio (SNR) of a clinical 3-T MR system. The ST and IT sequences were applied to skeletal muscle and liver in 10 healthy volunteers. Monte Carlo simulations were used to evaluate the behavior of the IT measurements with increasing imperfections. In skeletal muscle of the thigh, ATP synthesis resulted in forward reaction constants (k) of 0.074 ± 0.022 s(-1) (ST) and 0.137 ± 0.042 s(-1) (IT), whereas the creatine kinase reaction yielded 0.459 ± 0.089 s(-1) (IT). In the liver, ATP synthesis resulted in k = 0.267 ± 0.106 s(-1) (ST), whereas the IT experiment yielded no consistent results. The ST results were close to literature values; the IT results, however, were either much larger than the corresponding ST values or widely scattered. To summarize, ST and IT experiments can both be implemented on a clinical body scanner with heteronuclear transmit-receive surface coils; however, the ST results are much more robust against experimental imperfections than the current implementation of IT.
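In the standard two-pool analysis of a saturation transfer experiment (the classic Forsén-Hoffman treatment; that this exact estimator was used in the study is an assumption), the forward rate constant follows from the steady-state magnetization of the observed pool under saturation of its exchange partner and the apparent longitudinal relaxation time:

\[
k \;=\; \frac{1}{T_1^{\mathrm{app}}}\left(1 - \frac{M_z^{\mathrm{ss}}}{M_0}\right),
\qquad
\frac{1}{T_1^{\mathrm{app}}} \;=\; \frac{1}{T_1} + k,
\]

where $M_0$ is the equilibrium magnetization and $M_z^{\mathrm{ss}}$ the saturated steady-state value.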

Relevance: 90.00%

Abstract:

Gaussian random field (GRF) conditional simulation is a key ingredient in many spatial statistics problems for computing Monte Carlo estimators and quantifying uncertainties on non-linear functionals of GRFs conditional on data. Conditional simulations are known to often be computer intensive, especially when appealing to matrix decomposition approaches with a large number of simulation points. This work studies settings where conditioning observations are assimilated batch-sequentially, with one point or a batch of points at each stage. Assuming that conditional simulations have been performed at a previous stage, the goal is to take advantage of already available sample paths and by-products to produce updated conditional simulations at minimal cost. Explicit formulae are provided which allow updating an ensemble of sample paths conditioned on n ≥ 0 observations to an ensemble conditioned on n + q observations, for arbitrary q ≥ 1. Compared to direct approaches, the proposed formulae prove to substantially reduce computational complexity. Moreover, these formulae explicitly exhibit how the q new observations update the old sample paths. Detailed complexity calculations highlighting the benefits of this approach with respect to state-of-the-art algorithms are provided and complemented by numerical experiments.
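The n = 0 special case of such an update is the classical residual-kriging correction, which already displays the structure of the formulae: each existing sample path is corrected by kriging its own residual at the q new points. A minimal sketch follows; the squared-exponential covariance, the grid, the nugget and the observation values are assumed choices for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def k(a, b, ell=0.3):
    """Squared-exponential covariance (an assumed choice)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

x = np.linspace(0.0, 1.0, 200)                    # simulation grid
K = k(x, x) + 1e-6 * np.eye(x.size)               # small nugget for stability
paths = np.linalg.cholesky(K) @ rng.standard_normal((x.size, 50))

# A batch of q = 3 new observations arrives (batch-sequential assimilation).
idx = np.array([40, 110, 180])                    # grid indices of the new sites
x_new, y_new = x[idx], np.array([0.5, -1.0, 0.3])

# Kriging weights of every grid point with respect to the new sites.
W = np.linalg.solve(k(x_new, x_new) + 1e-6 * np.eye(3), k(x_new, x))

# Residual update: each old path is corrected by kriging its own residual,
# so the updated ensemble interpolates y_new (up to the nugget).
paths_cond = paths + W.T @ (y_new[:, None] - paths[idx])
```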

Relevance: 90.00%

Abstract:

Background: Several meta-analysis methods can be used to quantitatively combine the results of a group of experiments, including the weighted mean difference (WMD), statistical vote counting (SVC), the parametric response ratio (RR) and the non-parametric response ratio (NPRR). The software engineering community has focused on the weighted mean difference method. However, other meta-analysis methods have distinct strengths, such as being usable when variances are not reported. There are as yet no guidelines to indicate which method is best for use in each case. Aim: Compile a set of rules that SE researchers can use to ascertain which aggregation method is best for use in the synthesis phase of a systematic review. Method: Monte Carlo simulation varying the number of experiments in the meta-analyses, the number of subjects that they include, their variance and effect size. We empirically calculated the reliability and statistical power in each case. Results: WMD is generally reliable if the variance is low, whereas its power depends on the effect size and number of subjects per meta-analysis; the reliability of RR is generally unaffected by changes in variance, but it requires more subjects than WMD to be powerful; NPRR is the most reliable method, but it is not very powerful; SVC behaves well when the effect size is moderate, but is less reliable with other effect sizes. Detailed tables of results are annexed. Conclusions: Before undertaking statistical aggregation in software engineering, it is worthwhile checking whether there is any appreciable difference in the reliability and power of the methods. If there is, software engineers should select the method that optimizes both parameters.
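The simulation design can be paraphrased as: repeatedly generate a meta-analysis of E experiments with n subjects per group and true effect d, aggregate, and count rejections; power is the rejection rate at d > 0, while behavior at d = 0 speaks to reliability. A toy sketch for the WMD method only (all factor levels are invented and differ from the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

def wmd_meta(d, n, experiments, reps=2000):
    """Empirical rejection rate of an inverse-variance weighted mean
    difference meta-analysis (toy version of the simulation design)."""
    rejections = 0
    for _ in range(reps):
        effects, weights = [], []
        for _ in range(experiments):
            t = rng.normal(d, 1.0, n)        # treatment group
            c = rng.normal(0.0, 1.0, n)      # control group
            diff = t.mean() - c.mean()
            var = t.var(ddof=1)/n + c.var(ddof=1)/n
            effects.append(diff)
            weights.append(1.0 / var)
        w = np.array(weights)
        pooled = np.dot(w, effects) / w.sum()  # weighted mean difference
        se = np.sqrt(1.0 / w.sum())
        if abs(pooled / se) > 1.96:            # two-sided z-test, alpha = 0.05
            rejections += 1
    return rejections / reps

print("power  (d=0.5):", wmd_meta(0.5, n=10, experiments=5))
print("type I (d=0.0):", wmd_meta(0.0, n=10, experiments=5))
```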

Relevance: 90.00%

Abstract:

The management of long-lived radioactive wastes produced by nuclear reactors constitutes one of the main challenges of nuclear technology nowadays. A possible option for their management is the transmutation of long-lived nuclides into shorter-lived ones. Accelerator Driven Subcritical Systems (ADS) are one of the technologies under development to achieve this goal. An ADS consists of a subcritical nuclear reactor maintained in a steady state by an external neutron source driven by a particle accelerator. The interest of these systems lies in their capacity to be loaded with fuels having larger contents of minor actinides than conventional critical reactors and, in this way, to increase the transmutation rates of these elements, which are mainly responsible for the long-term radiotoxicity of nuclear waste. One of the key points identified for the operation of an industrial-scale ADS is the need to continuously monitor the reactivity of the subcritical system during operation. For this reason, since the 1990s a number of experiments have been conducted in zero-power subcritical assemblies (MUSE, RACE, KUCA, Yalina, GUINEVERE/FREYA) in order to experimentally validate these techniques. In this context, the present thesis is concerned with the validation of reactivity monitoring techniques at the Yalina-Booster subcritical assembly. This assembly belongs to the Joint Institute for Power and Nuclear Research (JIPNR-Sosny) of the National Academy of Sciences of Belarus. Experiments concerning reactivity monitoring were performed in this facility in 2008 under the EUROTRANS project of the 6th EU Framework Programme, under the direction of CIEMAT. Two types of experiments were carried out: experiments with a pulsed neutron source (PNS) and experiments with a continuous source with short interruptions (beam trips). For the first type, PNS experiments, two fundamental techniques exist to measure the reactivity, known as the prompt-to-delayed neutron area-ratio technique (or Sjöstrand technique) and the prompt neutron decay constant technique. However, previous experiments have shown the need to apply correction techniques to take into account the spatial and energy effects present in a real system and thus obtain accurate values of the reactivity. In this thesis, these corrections have been investigated through simulations of the system with the Monte Carlo code MCNPX. This research has also served to propose a generalized version of these techniques in which relationships between the reactivity of the system and the measured quantities are obtained through Monte Carlo simulations. The second type of experiment, with a continuous source and beam trips, is more likely to be employed in an industrial ADS. The generalized version of the techniques developed for the PNS experiments has also been applied to the results of these experiments.
Furthermore, in the work presented in this thesis the reactivity of a subcritical system has been monitored during operation, for the first time to my knowledge, with three simultaneous techniques: the current-to-flux technique, the source-jerk technique and the prompt neutron decay technique. The cases analyzed include the fast variation of the system reactivity (insertion and extraction of a control rod) and the fast variation of the neutron source (long beam interruption and subsequent recovery).
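In its uncorrected point-kinetics form, the Sjöstrand area-ratio technique mentioned above infers the reactivity in dollars from the detector response following a neutron pulse, with $A_p$ the area under the prompt-decay part and $A_d$ the area of the delayed-neutron plateau:

\[
\frac{\rho}{\beta_{\mathrm{eff}}} \;=\; -\frac{A_p}{A_d}.
\]

The spatial and energy corrections investigated in the thesis modify this simple relation for a real, reflected system.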

Relevance: 90.00%

Abstract:

How colloidal particles interact with each other is one of the key issues that determines our ability to interpret experimental results for phase transitions in colloidal dispersions and our ability to apply colloid science to various industrial processes. The long-accepted theories for answering this question have been challenged by results from recent experiments. Herein we show from Monte-Carlo simulations that there is a short-range attractive force between identical macroions in electrolyte solutions containing divalent counterions. Complementing some recent and related results by others, we present strong evidence of attraction between a pair of spherical macroions in the presence of added salt ions for the conditions where the interacting macroion pair is not affected by any other macroions that may be in the solution. This attractive force follows from the internal-energy contribution of counterion mediation. Contrary to conventional expectations, for charged macroions in an electrolyte solution, the entropic force is repulsive at most solution conditions because of localization of small ions in the vicinity of macroions. Both Derjaguin–Landau–Verwey–Overbeek theory and Sogami–Ise theory fail to describe the attractive interactions found in our simulations; the former predicts only repulsive interaction and the latter predicts a long-range attraction that is too weak and occurs at macroion separations that are too large. Our simulations provide fundamental “data” toward an improved theory for the potential of mean force as required for optimum design of new materials including those containing nanoparticles.

Relevance: 90.00%

Abstract:

Two of the most important models to account for the specificity and sensitivity of the T cell receptor (TCR) are the kinetic proofreading and serial ligation models. However, although kinetic proofreading provides a means for individual TCRs to measure accurately the length of time they are engaged and signal appropriately, the stochastic nature of ligand dissociation means the kinetic proofreading model implies that at high concentrations the response of the cell will be relatively nonspecific. Recent ligand experiments have revealed the phenomenon of both negative and positive crosstalk among neighboring TCRs. By using a Monte Carlo simulation of a lattice of TCRs, we integrate receptor crosstalk with the kinetic proofreading and serial ligation models and discover that receptor cooperativity can enhance T cell specificity significantly at a very modest cost to the sensitivity of the response.
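A toy version of such a lattice simulation can make the interplay concrete: each receptor triggers only after continuous engagement beyond a proofreading time, and a triggered neighbour lowers that threshold (positive crosstalk). The lattice size, the per-step rates and the crosstalk rule below are all invented for illustration; the published model is more detailed:

```python
import numpy as np

rng = np.random.default_rng(7)

L, steps = 30, 2000
k_on, k_off = 0.05, 0.02     # assumed per-step binding/unbinding probabilities
tau_proof = 50               # proofreading threshold in time steps (assumed)

bound_time = np.zeros((L, L), dtype=int)   # consecutive engagement time per TCR
triggered = np.zeros((L, L), dtype=bool)

for _ in range(steps):
    bound = bound_time > 0
    # stochastic ligand dissociation resets the proofreading clock
    release = bound & (rng.random((L, L)) < k_off)
    bound_time[release] = 0
    # binding of free receptors
    attach = ~bound & (rng.random((L, L)) < k_on)
    bound_time[bound & ~release] += 1
    bound_time[attach] = 1
    # positive crosstalk: a triggered neighbour halves the effective threshold
    n_trig = sum(np.roll(triggered, s, a) for s in (-1, 1) for a in (0, 1))
    thresh = np.where(n_trig > 0, tau_proof // 2, tau_proof)
    triggered |= bound_time >= thresh

print("fraction of TCRs triggered:", triggered.mean())
```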

Relevance: 90.00%

Abstract:

The relationship between the optimization of the potential function and the foldability of theoretical protein models is studied based on investigations of a 27-mer cubic-lattice protein model and a more realistic lattice model for the protein crambin. In both the simple and the more complicated systems, optimization of the energy parameters achieves significant improvements in the statistical-mechanical characteristics of the systems and leads to foldable protein models in simulation experiments. The foldability of the protein models is characterized by their statistical-mechanical properties, e.g., by the density of states and by Monte Carlo folding simulations of the models. With optimized energy parameters, a high level of consistency exists among different interactions in the native structures of the protein models, as revealed by a correlation function between the optimized energy parameters and the native structure of the model proteins. The results of this work are relevant to the design of a general potential function for folding proteins by theoretical simulations.

Relevance: 90.00%

Abstract:

Over the last century, great advances have been made in understanding the interactions of radiation with matter. This understanding is necessary for several applications, among them the use of X-rays in diagnostic imaging. In this case, images are formed by the contrast resulting from the difference in the attenuation of the X-rays by the different tissues of the body. However, some of the interactions of X-rays with matter can reduce the quality of these images, as is the case for scattering phenomena. Many approaches have been proposed to estimate the spectral distribution of photons scattered by a barrier, i.e., the spectrum of a broad beam reaching a detector plane, among them models using Monte Carlo methods and models using analytical approximations. The previously proposed models essentially represent the spectrum of a primary beam that does not interact with any object after its emission by the X-ray tube. However, for a broad X-ray beam interacting with an object, the radiation detected by a spectrometer consists of the primary beam, attenuated by the added material, plus a fraction of scattered radiation; the sum of these two contributions composes the resulting beam, and it is this sum that a real detector measures under broad-beam conditions. The model proposed in this work aims to calculate the spectrum of an X-ray tube, under broad-beam conditions, as faithfully as possible to what is measured under real conditions. This work proposes the discretization of the interaction volume into small volume elements, in each of which the Compton scattering is calculated, making use of a photon spectrum generated by the TBC model, the Klein-Nishina equation and geometric considerations. Finally, the spectrum of photons scattered in each volume element is added to the scattering from the other volume elements, resulting in the total scattered spectrum. The proposed model was implemented in the MATLAB® computing environment and compared with experimental measurements for its validation. The proposed model was able to produce scattered spectra under different conditions, showing good agreement with the measured values, both in quantitative terms, where the difference between the calculated and the measured air kerma is less than 10%, and in qualitative terms, with figures of merit above 90%.
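The per-element scattering step rests on two standard relations: the Compton energy shift and the Klein-Nishina differential cross section. A minimal sketch of both follows (standard physics, not the authors' MATLAB® implementation; the 80 keV example value is arbitrary):

```python
import numpy as np

R_E = 2.8179403262e-15   # classical electron radius, m
MEC2 = 0.51099895e6      # electron rest energy, eV

def compton_energy(e_in, theta):
    """Energy of a photon of energy e_in (eV) Compton-scattered by angle theta."""
    return e_in / (1.0 + (e_in / MEC2) * (1.0 - np.cos(theta)))

def klein_nishina(e_in, theta):
    """Klein-Nishina differential cross section dsigma/dOmega (m^2/sr)."""
    ratio = compton_energy(e_in, theta) / e_in   # E'/E
    return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0/ratio - np.sin(theta)**2)

# e.g. an 80 keV photon scattered by 90 degrees:
theta = np.pi / 2
print(compton_energy(80e3, theta) / 1e3, "keV")
print(klein_nishina(80e3, theta), "m^2/sr")
```

Summing such contributions, weighted by the incident spectrum, the electron density and the solid angle of each volume element toward the detector, yields the total scattered spectrum described above.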

Relevance: 90.00%

Abstract:

Motivation: The clustering of gene profiles across some experimental conditions of interest contributes significantly to the elucidation of unknown gene function, the validation of gene discoveries and the interpretation of biological processes. However, this clustering problem is not straightforward, as the profiles of the genes are not all independently distributed and the expression levels may have been obtained from an experimental design involving replicated arrays. Ignoring the dependence between the gene profiles and the structure of the replicated data can result in important sources of variability in the experiments being overlooked in the analysis, with the consequent possibility of misleading inferences being made. We propose a random-effects model that provides a unified approach to the clustering of genes with correlated expression levels measured in a wide variety of experimental situations. Our model is an extension of the normal mixture model to account for the correlations between the gene profiles and to enable covariate information to be incorporated into the clustering process. Hence the model is applicable to longitudinal studies with or without replication, for example, time-course experiments by using time as a covariate, and to cross-sectional experiments by using categorical covariates to represent the different experimental classes. Results: We show that our random-effects model can be fitted by maximum likelihood via the EM algorithm, for which the E (expectation) and M (maximization) steps can be implemented in closed form. Hence our model can be fitted deterministically, without the need for time-consuming Monte Carlo approximations. The effectiveness of our model-based procedure for the clustering of correlated gene profiles is demonstrated on three real datasets, representing typical microarray experimental designs, covering time-course, repeated-measurement and cross-sectional data. In these examples, relevant clusters of the genes are obtained, which are supported by existing gene-function annotation. A synthetic dataset is also considered.
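The closed-form E and M steps are easiest to see in the plain normal mixture that the authors' random-effects model extends. A minimal one-dimensional sketch with two components on synthetic data follows; the actual model additionally handles correlated profiles, covariates and replication:

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic 1-D "expression" data from two clusters
x = np.concatenate([rng.normal(-1.0, 0.5, 150), rng.normal(2.0, 0.8, 100)])

def em_normal_mixture(x, iters=100):
    pi, mu, sig = 0.5, np.array([x.min(), x.max()]), np.array([1.0, 1.0])
    for _ in range(iters):
        # E step: posterior membership probabilities (closed form)
        pdf = np.exp(-0.5 * ((x[:, None] - mu) / sig)**2) / (sig * np.sqrt(2*np.pi))
        w = pdf * np.array([pi, 1.0 - pi])
        tau = w / w.sum(axis=1, keepdims=True)
        # M step: weighted maximum-likelihood updates (closed form)
        nk = tau.sum(axis=0)
        mu = (tau * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((tau * (x[:, None] - mu)**2).sum(axis=0) / nk)
        pi = nk[0] / x.size
    return pi, mu, sig

print(em_normal_mixture(x))
```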

Relevance: 90.00%

Abstract:

Monte Carlo and molecular dynamics simulations and neutron scattering experiments are used to study the adsorption and diffusion of hydrogen and deuterium in zeolite Rho in the temperature range of 30-150 K. In the molecular simulations, quantum effects are incorporated via the Feynman-Hibbs variational approach. We suggest a new set of potential parameters for hydrogen, which can be used when the Feynman-Hibbs variational approach is employed for quantum corrections. The dynamic properties obtained from molecular dynamics simulations are in excellent agreement with the experimental results and show significant quantum effects on the transport at very low temperature. The molecular dynamics simulation results show that the quantum effect is very sensitive to pore dimensions and under suitable conditions can lead to a reverse kinetic molecular sieving, with deuterium diffusing faster than hydrogen.
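The quadratic Feynman-Hibbs approach replaces a classical pair potential U(r) by an effective potential that smears each particle over its thermal de Broglie wavelength. A minimal sketch for a Lennard-Jones pair follows; the LJ well depth and diameter used here are hypothetical placeholders, not the new hydrogen parameters proposed in the paper:

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J/K

def lj(r, eps, sig):
    return 4*eps*((sig/r)**12 - (sig/r)**6)

def lj_d1(r, eps, sig):   # first derivative U'(r)
    return 4*eps*(-12*sig**12/r**13 + 6*sig**6/r**7)

def lj_d2(r, eps, sig):   # second derivative U''(r)
    return 4*eps*(156*sig**12/r**14 - 42*sig**6/r**8)

def feynman_hibbs(r, T, mu, eps, sig):
    """Quadratic Feynman-Hibbs effective pair potential:
    U_FH = U + (hbar^2 / (24 mu kT)) * (U'' + 2 U'/r)."""
    corr = HBAR**2 / (24.0 * mu * KB * T)
    return lj(r, eps, sig) + corr * (lj_d2(r, eps, sig) + 2.0*lj_d1(r, eps, sig)/r)

# H2-H2 pair with assumed (hypothetical) LJ parameters
m_h2 = 2 * 1.6735575e-27                 # kg
mu = m_h2 / 2                            # reduced mass of the pair
eps, sig = 34.2 * KB, 2.96e-10           # placeholder parameter choice
r = np.linspace(2.5e-10, 8e-10, 5)
print(feynman_hibbs(r, T=50.0, mu=mu, eps=eps, sig=sig))
```

Because the correction scales as 1/(mu T), it is larger for hydrogen than for deuterium and grows as the temperature drops, which is the mechanism behind the kinetic sieving discussed above.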