849 results for GIBBS SAMPLER
Abstract:
The hydrogen storage capacity of Ti(n-1)B (n = 3-7) clusters is studied and compared with that of the pristine Ti(n) (n = 3-7) clusters, using density functional theory (DFT) based calculations. Among these clusters, Ti3B shows the most significant enhancement in storage capacity, adsorbing 12 H2, of which three are dissociated and the other nine are stored as dihydrogen via the Kubas interaction. The superior storage in Ti3B is attributed to a large charge transfer from Ti to B along with the largest separation of the empty Ti d-states above the Fermi level, a distinct feature of this particular cluster. Furthermore, the effect of substrates on the storage capacity of Ti3B was assessed by calculating the number of H2 adsorbed on a Ti3 cluster anchored onto B atoms in B-doped graphene, BC3, and BN substrates. Similar to free-standing Ti3B, Ti3 anchored onto a boron atom in BC3 stores nine dihydrogen molecules via the Kubas interaction, while eliminating non-useful dissociated hydrogen entirely. The Gibbs energy of adsorption as a function of H2 partial pressure indicates that at 250 K and 300 K the dihydrogens on Ti3@BC3 adsorb and desorb at ambient pressures. Importantly, Ti3@BC3 avoids clustering, hence meeting the criteria for an efficient and reversible hydrogen storage medium. Copyright (C) 2014, Hydrogen Energy Publications, LLC. Published by Elsevier Ltd. All rights reserved.
Abstract:
Background: Computational protein design is a rapidly maturing field within structural biology, with the goal of designing proteins with custom structures and functions. Such proteins could find widespread medical and industrial applications. Here, we have adapted algorithms from the Rosetta software suite to design much larger proteins, based on ideal geometric and topological criteria. Furthermore, we have developed techniques to incorporate symmetry into designed structures. For our first design attempt, we targeted the (alpha/beta)8 TIM barrel scaffold. We gained novel insights into TIM barrel folding mechanisms from studying natural TIM barrel structures and from analyzing previous TIM barrel design attempts. Methods: Computational protein design and analysis was performed using the Rosetta software suite and custom scripts. Genes encoding all designed proteins were synthesized and cloned on the pET20-b vector. Standard circular dichroism and gel chromatographic experiments were performed to determine protein biophysical characteristics. 1D NMR and 2D HSQC experiments were performed to determine protein structural characteristics. Results: Extensive protein design simulations coupled with ab initio modeling yielded several all-atom models of ideal, 4-fold symmetric TIM barrels. Four such models were experimentally characterized. The best designed structure (Symmetrin-1) contained a polar, histidine-rich pore forming an extensive hydrogen-bonding network. Symmetrin-1 was easily expressed and readily soluble. It showed circular dichroism spectra characteristic of well-folded alpha/beta proteins. Temperature melting experiments revealed cooperative and reversible unfolding, with a Tm of 44 °C and a Gibbs free energy of unfolding (ΔG°) of 8.0 kJ/mol. Urea denaturation experiments confirmed these observations, revealing a Cm of 1.6 M and a ΔG° of 8.3 kJ/mol.
Symmetrin-1 adopted a monomeric conformation, with an apparent molecular weight of 32.12 kDa, and displayed well-resolved 1D NMR spectra. However, the HSQC spectrum revealed somewhat molten characteristics. Conclusions: Despite the detection of molten characteristics, the creation of a soluble, cooperatively folding protein represents an advance over previous attempts at TIM barrel design. Strategies to further improve Symmetrin-1 are elaborated. Our techniques may be used to create other large, internally symmetric proteins.
Abstract:
Computational models based on the phase-field method typically operate on a mesoscopic length scale, resolving structural changes of the material and providing valuable information about microstructure and mechanical property relations. An accurate calculation of the stresses and mechanical energy at the transition region is therefore indispensable. We derive a quantitative phase-field elasticity model based on force balance and Hadamard jump conditions at the interface. Comparing the simulated stress profiles calculated with the Voigt/Taylor (Annalen der Physik 274(12):573, 1889), Reuss/Sachs (Z Angew Math Mech 9:49, 1929), and proposed models with the theoretically predicted stress fields in a plate with a round inclusion under hydrostatic tension, we show the quantitative characteristics of the model. In order to validate the elastic contribution to the driving force for phase transition, we demonstrate the absence of excess energy, calculated by Durga et al. (Model Simul Mater Sci Eng 21(5):055018, 2013), in a one-dimensional equilibrium condition of serial and parallel material chains. To validate the driving force for systems with curved transition regions, we relate simulations to the Gibbs-Thomson equilibrium condition.
Abstract:
Melt-spun ribbons of Fe(95-x)Zr(x)B4Cu1 alloys with x = 7 (Z7B4) and 9 (Z9B4) have been prepared, and their structure and magnetic properties have been evaluated using XRD, DSC, TEM, VSM, and Mossbauer spectroscopy. The glass-forming ability (GFA) of both alloys has been calculated theoretically using thermodynamic parameters, and the Z9B4 alloy is found to possess higher GFA than the Z7B4 alloy, which is validated by the XRD results. On annealing, the amorphous Z7B4 ribbon crystallizes into nanocrystalline alpha-Fe, whereas the amorphous Z9B4 ribbon shows a two-stage crystallization process: first partially to a bcc solid solution, which is then transformed to nanocrystalline alpha-Fe and Fe2Zr phases exhibiting a bimodal distribution. A detailed phase analysis using Mossbauer spectroscopy, through the hyperfine field distributions of the phases, has been carried out to understand the crystallization behavior of the Z7B4 and Z9B4 alloy ribbons. In order to understand the phase transformation behavior of the Z7B4 and Z9B4 ribbons, the molar Gibbs free energies of the amorphous, alpha-Fe, and Fe2Zr phases have been evaluated. It is found that in the case of Z7B4, alpha-Fe is always a stable phase, whereas Fe2Zr is stable at higher temperatures for Z9B4. (C) The Minerals, Metals & Materials Society and ASM International 2015
Abstract:
We present the Gaussian process density sampler (GPDS), an exchangeable generative model for use in nonparametric Bayesian density estimation. Samples drawn from the GPDS are consistent with exact, independent samples from a distribution defined by a density that is a transformation of a function drawn from a Gaussian process prior. Our formulation allows us to infer an unknown density from data using Markov chain Monte Carlo, which gives samples from the posterior distribution over density functions and from the predictive distribution on data space. We describe two such MCMC methods. Both methods also allow inference of the hyperparameters of the Gaussian process.
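The core construction described here, a density obtained by transforming a function drawn from a Gaussian process prior, can be sketched on a grid. The squared-exponential kernel, the exponential link, and all parameter values below are illustrative assumptions, not the GPDS authors' exact choices:

```python
import numpy as np

def gp_prior_draw(xs, lengthscale=1.0, variance=1.0, seed=0):
    """Draw one function sample from a zero-mean GP with a
    squared-exponential kernel, evaluated at the grid points xs."""
    rng = np.random.default_rng(seed)
    d = xs[:, None] - xs[None, :]
    K = variance * np.exp(-0.5 * (d / lengthscale) ** 2)
    K += 1e-6 * np.eye(len(xs))              # jitter for numerical stability
    return rng.multivariate_normal(np.zeros(len(xs)), K)

# Pass the GP draw through a positive link (exp) and normalise on the grid,
# yielding one random density -- a discretised stand-in for the GPDS prior.
xs = np.linspace(-3.0, 3.0, 200)
dx = xs[1] - xs[0]
f = gp_prior_draw(xs)
unnorm = np.exp(f)
density = unnorm / (unnorm.sum() * dx)       # sums to 1 under the grid measure
```

Different draws of `f` give different random densities, which is the sense in which the construction is a prior over density functions.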
Abstract:
With the objective of verifying the presence of the fungus that causes green berry disease, also known as coffee berry disease (CBD), samples were collected from seven Coffea arabica L. farms in Regions IV and VI of Nicaragua, located at different altitudinal zones and under different agronomic management. The study of the strains obtained was carried out in the Phytopathology laboratory of the Plant Health School of the Universidad Nacional Agraria (UNA) during the period from August 1991 to August 1992. Four groups associated with the system were found: Colletotrichum coffeanum Noack and Colletotrichum gloeosporioides Penz, the latter with three forms known as cca, ccm, and Vermeulen. Colletotrichum coffeanum variety virulans Noack, the cause of coffee berry disease (CBD) or green berry disease in Africa, was not found in the study areas. Each group presented different ecological, pathogenic, and cultural characteristics that are qualitatively similar, but not identical, to the Kenyan (Africa) groups reported by Gibbs (1969), Hindorf (1970; 1972), and Muthappa (1976). The fungus was found parasitizing green and ripe coffee berries, and although its in vitro characteristics were not similar to those of the fungus that causes CBD, its strains were pathogenic on radicles and on coffee seedlings with two cotyledonal leaves (yellow Catuaí variety). The factors inducing the selectivity of the pathogen are unknown, and we believe that further, more comprehensive studies should be carried out on all the elements that influence the system, such as crop fertilization, soil conservation, the use of fungicides, etc., in order to prevent in time any epidemic outbreak of anthracnose, in which production costs would rise and losses would be substantial because the planted area would be reduced.
Abstract:
This paper compares parallel and distributed implementations of an iterative, Gibbs-sampling machine learning algorithm. The distributed implementations run under Hadoop on facility computing clouds. The probabilistic model under study is the infinite HMM [1], whose parameters are learnt using instance-blocked Gibbs sampling, with a step consisting of a dynamic program. We apply this model to learn part-of-speech tags from newswire text in an unsupervised fashion. However, our focus here is on runtime performance rather than NLP-relevant scores: iteration duration, ease of development, deployment, and debugging. © 2010 IEEE.
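For readers unfamiliar with the underlying primitive, a Gibbs sampler alternates draws from each variable's full conditional distribution. The bivariate-normal target below is a standard textbook illustration of the mechanics, not the iHMM model of the paper:

```python
import numpy as np

def gibbs_bivariate_normal(n_samples, rho=0.8, seed=0):
    """Minimal Gibbs sampler for a standard bivariate normal with
    correlation rho.  Each full conditional is itself Gaussian:
        x | y ~ N(rho * y, 1 - rho^2),   y | x ~ N(rho * x, 1 - rho^2)
    """
    rng = np.random.default_rng(seed)
    sd = np.sqrt(1.0 - rho ** 2)
    x = y = 0.0
    out = np.empty((n_samples, 2))
    for i in range(n_samples):
        x = rng.normal(rho * y, sd)   # update x given the current y
        y = rng.normal(rho * x, sd)   # update y given the new x
        out[i] = (x, y)
    return out

samples = gibbs_bivariate_normal(20000)
```

The empirical correlation of the draws converges to rho; "blocked" variants, as used in the paper, resample whole groups of variables jointly at each step instead of one coordinate at a time.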
Abstract:
This work addresses the problem of estimating the optimal value function in a Markov decision process from observed state-action pairs. We adopt a Bayesian approach to inference, which allows both the model to be estimated and predictions about actions to be made in a unified framework, providing a principled approach to mimicry of a controller on the basis of observed data. A new Markov chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior distribution over the optimal value function. The sampler includes a parameter-expansion step, which is shown to be essential for good convergence properties of the MCMC sampler. As an illustration, the method is applied to learning a human controller.
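The quantity being inferred here, the optimal value function, satisfies the Bellman optimality equation V*(s) = max_a [R(s,a) + gamma * sum_s' P(s'|s,a) V*(s')]. A minimal value-iteration sketch for a hypothetical two-state MDP shows what that object is; the transition probabilities and rewards below are made up for illustration, and the paper's Bayesian MCMC sampler is not shown:

```python
import numpy as np

# A tiny 2-state, 2-action MDP: P[a][s, s'] transition probs, R[s, a] rewards.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # action 0
              [[0.5, 0.5], [0.6, 0.4]]])   # action 1
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])                 # R[s, a]
gamma = 0.9

def value_iteration(P, R, gamma, tol=1e-10):
    """Compute V* by iterating the Bellman optimality operator to a fixed point."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * np.einsum('ast,t->sa', P, V)  # Q[s, a]
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

V_star = value_iteration(P, R, gamma)
```

The Bayesian formulation replaces this direct computation with posterior sampling over V* when the model itself must be learnt from state-action data.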
Abstract:
A constitutive model, based on an (n + 1)-phase mixture of the Mori-Tanaka average theory, has been developed for stress-induced martensitic transformation and reorientation in single-crystalline shape memory alloys. Volume fractions of the different martensite lattice correspondence variants are chosen as internal variables to describe microstructural evolution. The macroscopic Gibbs free energy for the phase transformation is derived with thermodynamic principles and the ensemble-average method of micromechanics. The critical condition and the evolution equation are proposed for both phase transition and reorientation. This model can also simulate interior hysteresis loops during loading/unloading by switching the critical driving forces when an opposite transition takes place.
Abstract:
Methods for generating a new population are a fundamental component of estimation of distribution algorithms (EDAs). They serve to transfer the information contained in the probabilistic model to the new generated population. In EDAs based on Markov networks, methods for generating new populations usually discard information contained in the model to gain in efficiency. Other methods like Gibbs sampling use information about all interactions in the model but are computationally very costly. In this paper we propose new methods for generating new solutions in EDAs based on Markov networks. We introduce approaches based on inference methods for computing the most probable configurations and model-based template recombination. We show that the application of different variants of inference methods can increase the EDAs’ convergence rate and reduce the number of function evaluations needed to find the optimum of binary and non-binary discrete functions.
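As context for the sampling methods being compared, the simplest EDA scheme estimates independent bitwise marginals from selected individuals and samples the next population from them (UMDA); the Markov-network models and inference-based methods of this paper generalize this univariate case. All parameters below (population size, truncation selection, the OneMax objective) are illustrative assumptions:

```python
import numpy as np

def umda_onemax(n_bits=30, pop_size=100, elite_frac=0.5, n_gens=60, seed=0):
    """Univariate EDA (UMDA) sketch: re-estimate bitwise marginals from the
    elite individuals each generation, then sample a new population from them.
    The objective is OneMax (maximise the number of ones)."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                        # initial marginal model
    for _ in range(n_gens):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        fitness = pop.sum(axis=1)                   # OneMax: count of ones
        elite = pop[np.argsort(fitness)[-int(elite_frac * pop_size):]]
        p = elite.mean(axis=0).clip(0.05, 0.95)     # re-estimate, avoid fixation
    return pop[fitness.argmax()], int(fitness.max())

best, best_fit = umda_onemax()
```

Because UMDA ignores all interactions between variables, it is exactly the kind of model whose sampling step discards information; Gibbs sampling or most-probable-configuration inference over a Markov network, as studied in the paper, retains those interactions at extra cost.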
Abstract:
Summary: The offshore shelf and canyon habitats of the OCNMS (Fig. 1) are areas of high primary productivity and biodiversity that support extensive groundfish fisheries. Recent acoustic surveys conducted in these waters have indicated the presence of hard-bottom substrates believed to harbor unique deep-sea coral and sponge assemblages. Such fauna are often associated with shallow tropical waters; however, an increasing number of studies around the world have recorded them in deeper, cold-water habitats in both northern and southern latitudes. These habitats are of tremendous value as sites of recruitment for commercially important fishes. Yet, ironically, studies have shown how the gear used in offshore demersal fishing, as well as other commercial operations on the seafloor, can cause severe physical disturbances to resident benthic fauna. Due to their exposed structure, slow growth and recruitment rates, and long life spans, deep-sea corals and sponges may be especially vulnerable to such disturbances, requiring very long periods to recover. Potential effects of fishing and other commercial operations in such critical habitats, and the need to define appropriate strategies for the protection of these resources, have been identified as a high-priority management issue for the sanctuary. To begin addressing this issue, an initial pilot survey was conducted June 1-12, 2004 at six sites in offshore waters of the OCNMS (Fig. 2, average depths of 147-265 m) to explore for the presence of deep-sea coral/sponge assemblages and to look for evidence of potential anthropogenic impacts in these critical habitats. The survey was conducted on the NOAA Ship McARTHUR-II using the Navy's Phantom DHD2+2 remotely operated vehicle (ROV), which was equipped with a video camera, lasers, and a manipulator arm for the collection of voucher specimens.
At each site, a 0.1-m2 grab sampler also was used to collect samples of sediments for the analysis of macroinfauna (> 1.0 mm), total organic carbon (TOC), grain size, and chemical contaminants. Vertical profiles of salinity, dissolved oxygen (DO), temperature, and pressure were recorded at each site with a small SeaCat conductivity-temperature-depth (CTD) profiler. Niskin bottles attached to the CTD also obtained near-bottom water samples in support of a companion study of microbial indicators of coral health and general ecological condition across these sites. All samples except the sediment-contaminant samples are being analyzed with present project funds. Original cruise plans included a total of 12 candidate stations to investigate (Fig. 3). However, inclement weather and equipment failures restricted the sampling to half of these sites. In spite of the limited sampling, the work completed was sufficient to address key project objectives and included several significant scientific observations. Foremost, the cruise was successful in demonstrating the presence of target deepwater coral species in these waters. Patches of the rare stony coral Lophelia pertusa, more characteristic of deepwater coral/sponge assemblages in the North Atlantic, were observed for the first time in OCNMS at a site in 271 meters of water. A large proportion of these corals consisted of dead and broken skeletal remains, and a broken gorgonian (soft coral) also was observed nearby. The source of these disturbances is not known. However, observations from several sites included evidence of bottom trawl marks in the sediment and derelict fishing gear (long lines). Preliminary results also support the view that these areas are important reservoirs of marine biodiversity and of value as habitat for demersal fishes. For example, onboard examination of 18 bottom-sediment grabs revealed benthic infaunal species representative of 14 different invertebrate phyla. 
Twenty-eight species of fishes from 11 families, including 11 (possibly 12) species of commercially important rockfishes, also were identified from ROV video footage. These initial discoveries have sparked considerable interest in follow-up studies to learn more about the spatial extent of these assemblages and the magnitude of potential impacts from commercial fishing and other anthropogenic activities in the area. It is essential to expand our knowledge of these deep-sea communities and their vulnerability to potential environmental risks in order to determine the most appropriate management strategies. The survey was conducted under a partnership between NOAA's National Centers for Coastal Ocean Science (NCCOS) and National Marine Sanctuary Program (NMSP) and included scientists from NCCOS, OCNMS, and several other west-coast State, academic, private, and tribal research institutions (see Section 4 for a complete listing of participating scientists). (PDF contains 20 pages)
Abstract:
Sediment sampling was used to evaluate chinook salmon (Oncorhynchus tshawytscha) and steelhead (O. mykiss) spawning habitat quality in the South Fork Trinity River (SFTR) basin. Sediment samples were collected using a McNeil-type sampler and wet-sieved through a series of Tyler screens (25.00 mm, 12.50 mm, 6.30 mm, 3.35 mm, 1.00 mm, and 0.85 mm). Fines (particles < 0.85 mm) were determined after a 10-minute settling period in Imhoff cones. Thirteen stations were sampled in the SFTR basin: five stations were located in the mainstem SFTR between rk 2.1 and 118.5, two stations each were located in the EF of the SFTR, Grouse Creek, and Madden Creek, and one station each was located in Eltapom and Hayfork Creeks. Sample means for fines (particles < 0.85 mm) for SFTR stations ranged between 14.4 and 19.4%; tributary station sample mean fines ranged between 3.4 and 19.4%. Decreased egg survival would be expected at 4 of 5 mainstem SFTR stations and at one station each in the EF of the SFTR and Grouse Creek, where fines content exceeded 15%. Small gravel/sand content measured at all stations was high and exceeded levels associated with reduced sac-fry emergence rates. Reduction of egg survival or sac-fry emergence due to sedimentation in spawning gravels could lead to reduced juvenile production from the South Fork Trinity River. (PDF contains 18 pages.)
Abstract:
538 p.
Abstract:
How powerful are Quantum Computers? Despite the prevailing belief that Quantum Computers are more powerful than their classical counterparts, this remains a conjecture backed by little formal evidence. Shor's famous factoring algorithm [Shor97] gives an example of a problem that can be solved efficiently on a quantum computer with no known efficient classical algorithm. Factoring, however, is unlikely to be NP-Hard, meaning that few unexpected formal consequences would arise, should such a classical algorithm be discovered. Could it then be the case that any quantum algorithm can be simulated efficiently classically? Likewise, could it be the case that Quantum Computers can quickly solve problems much harder than factoring? If so, where does this power come from, and what classical computational resources do we need to solve the hardest problems for which there exist efficient quantum algorithms?
We make progress toward understanding these questions through studying the relationship between classical nondeterminism and quantum computing. In particular, is there a problem that can be solved efficiently on a Quantum Computer that cannot be efficiently solved using nondeterminism? In this thesis we address this problem from the perspective of sampling problems. Namely, we give evidence that approximately sampling the Quantum Fourier Transform of an efficiently computable function, while easy quantumly, is hard for any classical machine in the Polynomial Time Hierarchy. In particular, we prove the existence of a class of distributions that can be sampled efficiently by a Quantum Computer, that likely cannot be approximately sampled in randomized polynomial time with an oracle for the Polynomial Time Hierarchy.
Our work complements and generalizes the evidence given in Aaronson and Arkhipov's work [AA2013] where a different distribution with the same computational properties was given. Our result is more general than theirs, but requires a more powerful quantum sampler.
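The distribution class discussed, outcomes k drawn with probability proportional to |f̂(k)|² for the Fourier transform f̂ of an efficiently computable f, can be simulated classically by brute force for tiny instances (exactly what becomes infeasible at scale). The random ±1 function and the sizes below are illustrative, not the thesis's construction:

```python
import numpy as np

def fourier_sampling_dist(f_values):
    """Outcome distribution for sampling the DFT of f: Pr[k] is proportional
    to |f_hat(k)|^2.  For a +/-1-valued f of length N, Parseval's theorem
    gives the normaliser N^2, so dividing by the total is exact."""
    f_hat = np.fft.fft(f_values)
    p = np.abs(f_hat) ** 2
    return p / p.sum()

rng = np.random.default_rng(0)
f = rng.choice([-1.0, 1.0], size=16)             # a small random +/-1 function
probs = fourier_sampling_dist(f)
draws = rng.choice(len(f), size=1000, p=probs)   # brute-force classical sampling
```

The brute-force step costs time exponential in the number of input bits, which is why quantum hardness arguments focus on whether any classical sampler, possibly with a Polynomial Time Hierarchy oracle, can approximate such distributions efficiently.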