954 results for nuclear potential energy surface
Abstract:
A non-local gradient-based damage formulation within a geometrically non-linear setting is presented. The hyperelastic constitutive response at the local material-point level is governed by a strain energy that is additively composed of an isotropic matrix contribution and an anisotropic fibre-reinforced contribution. The inelastic constitutive response is governed by a scalar [1-d]-type damage formulation, where only the anisotropic elastic part is assumed to be affected by the damage. Following the concept of Dimitrijević and Hackl [28], the local free energy function is enhanced by a gradient term. This term contains the gradient of the non-local damage variable, which is itself introduced as an additional independent variable. In order to guarantee the equivalence between the local and non-local damage variables, a penalisation term is incorporated within the free energy function. Based on the principle of minimum total potential energy, a coupled system of Euler–Lagrange equations, i.e., the balance of linear momentum and the balance of the non-local damage field, is obtained and solved in weak form. The resulting coupled, highly non-linear system of equations is symmetric and can conveniently be solved by a standard incremental-iterative Newton–Raphson-type solution scheme. Several three-dimensional displacement- and force-driven boundary value problems, partially motivated by biomechanical applications, highlight the mesh-objective characteristics and constitutive properties of the model and underline the capabilities of the proposed formulation.
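The penalty coupling and the symmetric Newton-Raphson solve can be illustrated with a minimal zero-dimensional sketch. The parameters (stiffness, hardening, penalty, prescribed strain) are assumed round numbers, the gradient term drops out because there is no spatial variation, and this is only the structure of the coupled scheme, not the paper's model:

```python
import numpy as np

# Toy potential Pi(d, phi) = 0.5*(1 - d)*E*eps**2 + 0.5*H*d**2
#                            + 0.5*beta*(d - phi)**2,
# with local damage d, non-local field phi, and penalty beta enforcing d ~ phi.
E, H, beta = 1.0, 0.2, 10.0   # stiffness, damage hardening, penalty (assumed)
eps = 0.4                     # prescribed strain

def residual(x):
    d, phi = x
    return np.array([-0.5 * E * eps**2 + H * d + beta * (d - phi),  # dPi/dd
                     -beta * (d - phi)])                            # dPi/dphi

def tangent(x):
    # Symmetric tangent, mirroring the symmetry noted for the full problem.
    return np.array([[H + beta, -beta],
                     [-beta,     beta]])

x = np.zeros(2)
for _ in range(20):                          # Newton-Raphson iterations
    r = residual(x)
    if np.linalg.norm(r) < 1e-12:
        break
    x -= np.linalg.solve(tangent(x), r)

d, phi = x                                   # converges to d = phi = 0.4 here
```

The penalty makes the stationarity condition for phi drive d and phi together, which is exactly the role the penalisation term plays in the full formulation.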
Abstract:
There are about 1,300 large dams in Spain, some 20% of which were built before the 1960s. The fact that many old dams are still in operation has produced growing interest in reevaluating their safety with new or updated tools that incorporate more complete failure modes, more advanced geotechnical concepts, and new safety assessment techniques. For instance, one common approach to the stability analysis of gravity dams considers sliding along the dam-foundation interface, using the linear Mohr-Coulomb failure criterion, in which constant cohesion and friction angle parameters define the shear strength of the contact surface. However, aspects such as the presence of planes of weakness in the foundation rock mass, the influence of other failure criteria proposed for rock joints and rock masses (e.g. the Hoek-Brown criterion), or the volumetric strains that occur during plastic failure of the rock mass (i.e., the influence of dilatancy) are often not considered during the original dam design. In this context, an analytical methodology is proposed herein to assess the sliding stability of concrete dams, considering an extended failure mechanism in the rock foundation characterized by the presence of an inclined, impersistent joint set.
In particular, the possibility of a preexisting, sub-horizontal, impersistent joint set is considered, with a potential failure surface that could extend through the rock mass; the safety factor is therefore computed by combining the strength along the rock joint (using the nonlinear Barton and Choubey (1977) and Barton and Bandis (1990) failure criteria) with that along the rock mass (using the nonlinear Hoek and Brown (1980) failure criterion in its generalized form from Hoek et al. (2002)). The proposed methodology also considers the influence of a non-associated flow rule, incorporated through a (constant) dilation angle (Hoek and Brown 1997). The newly proposed analytical methodology is used to assess the dam stability conditions by means of a deterministic model and a probabilistic model, whose results are the sliding safety factor and the probability of failure, respectively. The deterministic model, implemented in MATLAB, is validated against numerical solutions computed with the finite difference code FLAC 6.0. It provides results that are very similar to those computed with FLAC; however, since the new formulation can be implemented in a spreadsheet, its computational cost is significantly smaller, making it easier to conduct parametric analyses of the influence of the different input parameters on the dam's safety. Once the model is validated, parametric analyses are conducted using the main parameters that describe the dam's foundation. From this study, the most influential parameters on the sliding stability are identified, and the error incurred by the choice of flow rule is assessed. The probability of failure is obtained with the First Order Reliability Method (FORM), and the probabilistic model is then validated using Monte Carlo simulation.
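The two strength criteria named above have standard closed forms; the sketch below evaluates them for hypothetical foundation values (all numbers are illustrative, not parameters from the thesis):

```python
import math

def barton_bandis_tau(sigma_n, phi_r, JRC, JCS):
    """Joint shear strength, Barton & Choubey (1977):
    tau = sigma_n * tan(phi_r + JRC * log10(JCS / sigma_n)), angles in degrees."""
    angle = math.radians(phi_r + JRC * math.log10(JCS / sigma_n))
    return sigma_n * math.tan(angle)

def hoek_brown_sigma1(sigma3, sigma_ci, mi, GSI, D=0.0):
    """Generalized Hoek-Brown (Hoek et al. 2002) major principal stress at failure."""
    mb = mi * math.exp((GSI - 100.0) / (28.0 - 14.0 * D))
    s = math.exp((GSI - 100.0) / (9.0 - 3.0 * D))
    a = 0.5 + (math.exp(-GSI / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a

# Hypothetical foundation values (stresses in MPa, angles in degrees):
tau_joint = barton_bandis_tau(sigma_n=1.0, phi_r=25.0, JRC=8.0, JCS=50.0)
sigma1_rm = hoek_brown_sigma1(sigma3=1.0, sigma_ci=30.0, mi=10.0, GSI=50.0)
```

In the thesis the safety factor combines resistances of this kind along the joint and through the rock mass; here only the two building-block criteria are shown.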
Results obtained with both methodologies show good agreement for cases in which the rock mass follows a non-associated flow rule. For cases with an associated flow rule, errors between 0.7% and 66% are obtained, although good agreement is still found for cases at, or close to, limit equilibrium conditions. The main advantage of FORM for sliding stability analyses of gravity dams is its computational efficiency: whereas Monte Carlo simulation requires at least 4 hours per run, FORM requires only 1 to 3 minutes.
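FORM and its Monte Carlo check can be sketched for the simplest possible limit state, a linear g = R - S with independent normal variables (the moments below are hypothetical; the thesis' limit state is of course far more involved, and FORM then requires an iterative search for the design point):

```python
import math, random

# Linear limit state g = R - S with independent normal R (resistance)
# and S (load effect): beta = (muR - muS) / sqrt(sR^2 + sS^2), Pf = Phi(-beta).
muR, sR = 10.0, 2.0        # hypothetical resistance moments
muS, sS = 6.0, 1.5         # hypothetical load moments

beta = (muR - muS) / math.hypot(sR, sS)          # reliability index

def Phi(z):                                      # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

pf_form = Phi(-beta)

# Monte Carlo estimate of the same probability of failure.
rng = random.Random(0)
N = 200_000
fails = sum(rng.gauss(muR, sR) - rng.gauss(muS, sS) < 0.0 for _ in range(N))
pf_mc = fails / N
```

Even in this toy, the cost asymmetry reported in the abstract is visible: FORM is a single closed-form evaluation, while the Monte Carlo estimate needs hundreds of thousands of samples to resolve a small failure probability.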
Abstract:
From the eighth century until practically the twentieth, tide mills were a source of development for the areas in which they were built. These devices, connected by their nature with nearby ports, spread along the whole Atlantic coast of Europe and later in America. In them, raw materials whose origin or destination was those ports were processed. The emergence of cheaper and more efficient energy sources caused the gradual decline of these devices, leading to the disappearance of a large number of them. In recent years, both private and public institutions, especially municipalities, have shown interest in preserving them, whether as singular buildings hosting different businesses, or as museums or "visitor centers" where the working of the mill and its relationship with the surrounding region are explained. This renewed interest, coupled with the need to find new sources of renewable energy in order to comply with the Kyoto Treaty, has created the need for a study of the possible application of small hydro power to the old mills. This document first describes the history of the tide mills and then locates them province by province to identify possible sites for energy generation.
In the next step, we identified the different types of turbines suitable for these cases and selected two of them: one consolidated and one in an experimental phase. With this pair of turbines we determined and analyzed the behaviour of certain financial figures as a function of the tidal range. The conclusions drawn from this analysis are that operating the mill on the flood tide is less productive than on the ebb tide, and that the limited tidal range on the Spanish coast considerably reduces the return on investment. This outcome forces potential investors to carry out a very detailed feasibility study of each case. Future research in this field should be directed towards the development of a new type of mini-turbine with finer regulation and higher performance, taking into account that the basin of a tide mill can also be an excellent test bench. Furthermore, the potential energy output could be a factor to consider in a distributed generation system within a smart city.
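The sensitivity of the outcome to the tidal range can be sketched with the standard basin-energy estimate E = rho*g*A*H^2/2 per tide (filled basin, centroid of the stored water at H/2). The basin area and efficiency below are assumed round numbers, not data from the study:

```python
RHO, G = 1025.0, 9.81          # seawater density (kg/m^3), gravity (m/s^2)

def energy_per_tide_kwh(area_m2, tidal_range_m, efficiency=0.6):
    # Recoverable energy of one basin emptying: E = rho*g*A*H^2/2, times an
    # assumed overall turbine/generator efficiency.
    joules = 0.5 * RHO * G * area_m2 * tidal_range_m ** 2 * efficiency
    return joules / 3.6e6       # J -> kWh

# Hypothetical small mill basin of 10,000 m^2:
e_low  = energy_per_tide_kwh(10_000, 2.0)   # modest tidal range
e_high = energy_per_tide_kwh(10_000, 4.0)   # doubled tidal range
```

Because the yield scales with the square of the tidal range, halving the range quarters the energy per tide, which is consistent with the conclusion that the modest tides on the Spanish coast strongly limit the return on investment.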
Abstract:
In a Finite Element (FE) analysis of elastic solids several items are usually considered, namely, the type and shape of the elements, the number of nodes per element, the node positions, the FE mesh, and the total number of degrees of freedom (dof), among others. In this paper a method to improve a given FE mesh used for a particular analysis is described. As improvement criteria, different objective functions have been chosen (total potential energy and average quadratic error), while the number of nodes and dof's of the new mesh remains constant and equal to that of the initial FE mesh. In order to find the mesh producing the minimum of the selected objective function, the steepest descent gradient technique has been applied as the optimization algorithm. However, this efficient technique has the drawback of demanding large computational power. Extensive application of this methodology to different 2-D elasticity problems leads to the conclusion that isometric isostatic meshes (ii-meshes) produce better results than the reasonable initial regular meshes used in practice. This conclusion seems to be independent of the objective function used for comparison. These ii-meshes are obtained by placing FE nodes along the isostatic lines, i.e. curves tangent at each point to the principal direction lines of the elastic problem to be solved, and the nodes should be regularly spaced in order to build regular elements. This means ii-meshes are usually obtained by iteration: the elastic analysis is first carried out with the initial FE mesh; from its results the net of isostatic lines can be drawn and a first trial ii-mesh can be built. This first ii-mesh can be improved, if necessary, by analyzing the problem again and generating a new, improved ii-mesh after the FE analysis. Typically, two tentative ii-meshes are sufficient to produce good FE results from the elastic analysis. Several examples of this procedure are presented.
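The node-placement optimisation can be sketched in one dimension: place a single movable node p in (0, 1) for the piecewise-linear interpolant of u(x) = x^2 and minimise the average quadratic error. For linear interpolation of x^2 the squared L2 error over [a, b] is (b-a)^5/30, which gives the closed-form objective below; the paper works with full 2-D meshes and also the total-potential-energy objective, so this is only the structure of the descent, not the method itself:

```python
# Objective: squared L2 interpolation error of x^2 with interior node p,
# f(p) = (p^5 + (1 - p)^5) / 30, minimised at the symmetric placement p = 0.5.
def objective(p):
    return (p ** 5 + (1.0 - p) ** 5) / 30.0

def gradient(p, h=1e-6):
    # Central finite difference, standing in for an analytic sensitivity.
    return (objective(p + h) - objective(p - h)) / (2.0 * h)

p, step = 0.2, 2.0             # initial node position and fixed step length
for _ in range(500):           # steepest descent iterations
    p -= step * gradient(p)
```

The descent moves the node toward the placement that equidistributes the quadratic error between the two elements, the 1-D analogue of the regular spacing sought along isostatic lines.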
Abstract:
An evolutionary process is simulated with a simple spin-glass-like model of proteins to examine the origin of folding ability. At each generation, sequences are randomly mutated and subjected to a simulation of the folding process based on the model. According to the frequency of local configurations at the active sites, sequences are selected and passed to the next generation. After a few hundred generations, a sequence capable of folding globally into a native conformation emerges. Moreover, the selected sequence has a distinct energy minimum and an anisotropic funnel on the energy surface, which are essential features for fast folding of proteins. The proposed model reveals that functional selection on the local configurations leads a sequence to fold globally into a conformation at a faster rate.
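The mutate-and-select loop can be caricatured with a minimal genetic algorithm on binary sequences, where fitness counts matches to a stand-in "native" pattern. Everything here, from the pattern to the mutation rate and truncation selection, is an illustrative assumption, not the spin-glass model of the paper:

```python
import random

rng = random.Random(1)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]   # stand-in "native" local pattern

def fitness(seq):
    # Selection score: matches to the target local configurations.
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq, rate=0.05):
    # Flip each bit independently with small probability.
    return [b ^ 1 if rng.random() < rate else b for b in seq]

pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(40)]
for _ in range(200):                       # generations
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                   # truncation selection
    pop = survivors + [mutate(rng.choice(survivors)) for _ in range(30)]

best = max(pop, key=fitness)
```

As in the abstract, selection acts only on a local score, yet after a few hundred generations the population converges on the globally "correct" sequence.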
Abstract:
We use an off-lattice minimalist model to describe the effects of pressure in slowing down the folding/unfolding kinetics of proteins when subjected to increasingly larger pressures. The potential energy function used to describe the interactions between beads in the model includes the effects of pressure on the pairwise interaction of hydrophobic groups in water. We show that pressure affects the participation of contacts in the transition state. More significantly, pressure exponentially decreases the chain reconfigurational diffusion coefficient. These results are consistent with experimental results on the kinetics of pressure-denaturation of staphylococcal nuclease.
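An exponential pressure dependence of this kind is what an activation-volume picture produces, D(P) = D0 * exp(-P * dV / (R * T)); the sketch below uses an assumed activation volume, not a value fitted in the paper:

```python
import math

def diffusion_coeff(P, D0=1.0, dV=50e-6, T=300.0):
    """Reconfigurational diffusion coefficient vs pressure (activation-volume
    form, assumed): D = D0 * exp(-P * dV / (R * T)).
    P in Pa, dV in m^3/mol (50e-6 = 50 cm^3/mol, an illustrative value)."""
    R = 8.314  # J/(mol K)
    return D0 * math.exp(-P * dV / (R * T))

d_ambient = diffusion_coeff(1.0e5)   # ~1 atm: essentially D0
d_high    = diffusion_coeff(3.0e8)   # ~3 kbar: orders of magnitude slower
```

The exponential form means each additional kilobar multiplies the chain diffusion coefficient by the same factor, which is the qualitative slowing-down described in the abstract.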
Abstract:
The calculated folding thermodynamics of a simple off-lattice three-helix-bundle protein model under equilibrium conditions shows the experimentally observed protein transitions: a collapse transition, a disordered-to-ordered globule transition, a globule to native-state transition, and the transition from the active native state to a frozen inactive state. The cooperativity and physical origin of the various transitions are explored with a single “optimization” parameter and characterized with the Lindemann criterion for liquid versus solid-state dynamics. Below the folding temperature, the model has a simple free energy surface with a single basin near the native state; the surface is similar to that calculated from a simulation of the same three-helix-bundle protein with an all-atom representation [Boczko, E. M. & Brooks III, C. L. (1995) Science 269, 393–396].
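The Lindemann criterion compares the rms positional fluctuation with a characteristic spacing: Delta = sqrt(<|dr|^2>) / a, with solid-like dynamics below a threshold of roughly 0.1-0.15. A sketch with synthetic Gaussian displacements (the widths and threshold are illustrative):

```python
import math, random

rng = random.Random(2)

def lindemann(displacement_sigma, spacing=1.0, n=50_000):
    """Delta = sqrt(<|dr|^2>) / a from sampled 3-D Gaussian displacements
    with per-axis standard deviation displacement_sigma (in units of a)."""
    msd = 0.0
    for _ in range(n):
        dx, dy, dz = (rng.gauss(0.0, displacement_sigma) for _ in range(3))
        msd += dx * dx + dy * dy + dz * dz
    return math.sqrt(msd / n) / spacing

delta_solid  = lindemann(0.04)   # per-axis rms 0.04a -> Delta ~ 0.07, solid-like
delta_liquid = lindemann(0.12)   # Delta ~ 0.21, above the melting threshold
```

In the paper this kind of parameter separates the frozen inactive state from the fluid-like active native state.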
Abstract:
To identify yeast cytosolic proteins that mediate targeting of precursor proteins to mitochondria, we developed an in vitro import system consisting of purified yeast mitochondria and a radiolabeled mitochondrial precursor protein whose C terminus was still attached to the ribosome. In this system, the N terminus of the nascent chain was translocated across both mitochondrial membranes, generating a translocation intermediate spanning both membranes. The nascent chain could then be completely chased into the mitochondrial matrix after release from the ribosome. Generation of this import intermediate was dependent on a mitochondrial membrane potential, mitochondrial surface proteins, and was stimulated by proteins that could be released from the ribosomes by high salt. The major salt-released stimulatory factor was yeast nascent polypeptide–associated complex (NAC). Purified NAC fully restored import of salt-washed ribosome-bound nascent chains by enhancing productive binding of the chains to mitochondria. We propose that ribosome-associated NAC facilitates recognition of nascent precursor chains by the mitochondrial import machinery.
Abstract:
Protein folding is a grand challenge of the postgenomic era. In this paper, 58 folding events sampled during 47 molecular dynamics trajectories for a total simulation time of more than 4 μs provide an atomic detail picture of the folding of a 20-residue synthetic peptide with a stable three-stranded antiparallel β-sheet fold. The simulations successfully reproduce the NMR solution conformation, irrespective of the starting structure. The sampling of the conformational space is sufficient to determine the free energy surface and localize the minima and transition states. The statistically predominant folding pathway involves the formation of contacts between strands 2 and 3, starting with the side chains close to the turn, followed by association of the N-terminal strand onto the preformed 2–3 β-hairpin. The folding mechanism presented here, formation of a β-hairpin followed by consolidation, is in agreement with a computational study of the free energy surface of another synthetic three-stranded antiparallel β-sheet by Bursulaya and Brooks [(1999) J. Am. Chem. Soc. 121, 9947–9951]. Hence, it might hold in general for antiparallel β-sheets with short turns.
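Extracting a free energy surface from such sampling reduces, in its simplest form, to Boltzmann inversion of a histogram, F(q) = -kT ln P(q). The sketch below applies it to synthetic bimodal data standing in for the trajectory coordinates (populations and basin positions are assumed):

```python
import math, random

rng = random.Random(3)
kT = 1.0

# Synthetic samples of a folding coordinate q from a two-basin distribution,
# standing in for the molecular dynamics sampling.
samples = [rng.gauss(-1.0, 0.3) if rng.random() < 0.7 else rng.gauss(1.0, 0.3)
           for _ in range(100_000)]

# Histogram, then F(q) = -kT * ln P(q) up to an additive constant.
nbins, lo, hi = 40, -2.0, 2.0
width = (hi - lo) / nbins
counts = [0] * nbins
for q in samples:
    if lo <= q < hi:
        counts[int((q - lo) / width)] += 1
F = [(-kT * math.log(c / len(samples) / width) if c else float("inf"))
     for c in counts]
F_min = min(F)
F = [f - F_min for f in F]   # shift so the global minimum is zero
```

The deeper basin, the shallower basin, and the barrier between them fall out directly; locating minima and transition states on the sampled surface, as in the paper, amounts to reading off this profile (in more dimensions and with better estimators).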
Abstract:
The correlation functions of the fluctuations of vibrational frequencies of azide ions and carbon monoxide in proteins are determined directly from stimulated photon echoes generated with femtosecond infrared pulses. The asymmetric stretching vibration of azide bound to carbonic anhydrase II exhibits a pronounced evolution of its vibrational frequency distribution on the time scale of a few picoseconds, which is attributed to modifications of the ligand structure through interactions with the nearby Thr-199. When azide is bound in hemoglobin, a more complex evolution of the protein structure is required to interchange the different ligand configurations, as evidenced by the much slower relaxation of the frequency distribution in this case. The time evolution of the distribution of frequencies of carbon monoxide bound in hemoglobin occurs on the ≈10-ps time scale and is very nonexponential. The correlation functions of the frequency fluctuations determine the evolution of the protein structure local to the probe and the extent to which the probe can navigate those parts of the energy landscape where the structural configurations are able to modify the local potential energy function of the probe.
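The central object here, the frequency-fluctuation correlation function C(t) = <dw(0) dw(t)>, can be estimated directly from any trajectory of the fluctuating frequency. Below, an Ornstein-Uhlenbeck process with assumed parameters stands in for the measured fluctuations; for it, C(t) = sigma^2 * exp(-t / tau):

```python
import math, random

rng = random.Random(4)

# Ornstein-Uhlenbeck stand-in for the frequency fluctuation dw(t):
# d(dw) = -(dw / tau) dt + sqrt(2 sigma^2 dt / tau) * N(0, 1).
tau, sigma, dt, nsteps = 1.0, 1.0, 0.01, 200_000
dw, traj = 0.0, []
for _ in range(nsteps):
    dw += -dw / tau * dt + math.sqrt(2.0 * sigma ** 2 * dt / tau) * rng.gauss(0, 1)
    traj.append(dw)

def corr(lag):
    # Time-averaged estimator of C(lag * dt).
    return sum(a * b for a, b in zip(traj, traj[lag:])) / (len(traj) - lag)

c0 = corr(0)                    # ~ sigma^2
c_tau = corr(int(tau / dt))     # ~ sigma^2 / e
```

In the experiments this correlation function is what the stimulated photon echo delivers; its decay time reports how fast the local protein structure reshapes the probe's potential energy function.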
Abstract:
Intramolecular chain diffusion is an elementary process in the conformational fluctuations of the DNA hairpin-loop. We have studied the temperature and viscosity dependence of a model DNA hairpin-loop by FRET (fluorescence resonance energy transfer) fluctuation spectroscopy (FRETfs). Apparent thermodynamic parameters were obtained by analyzing the correlation amplitude through a two-state model and are consistent with steady-state fluorescence measurements. The kinetics of closing the loop show non-Arrhenius behavior, in agreement with theoretical prediction and other experimental measurements on peptide folding. The fluctuation rates show a fractional power dependence (β = 0.83) on the solution viscosity. A much slower intrachain diffusion coefficient in comparison to that of polypeptides was derived based on the first passage time theory of SSS [Szabo, A., Schulten, K. & Schulten, Z. (1980) J. Chem. Phys. 72, 4350–4357], suggesting that intrachain interactions, especially stacking interaction in the loop, might increase the roughness of the free energy surface of the DNA hairpin-loop.
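For a two-state open/closed hairpin, the FRET fluctuation correlation function decays with the sum of the opening and closing rates while its amplitude fixes the state populations, which is how apparent thermodynamic parameters are read off the correlation amplitude. A sketch with hypothetical rates (unit signal contrast assumed):

```python
import math

# Two-state hairpin: C(t) = p_open * p_close * exp(-(k_open + k_close) * t)
# for a unit-contrast FRET signal.
k_open, k_close = 2.0e3, 6.0e3          # hypothetical rates, s^-1

K = k_open / k_close                     # equilibrium constant, open/closed
p_open = K / (1.0 + K)
p_close = 1.0 - p_open
amplitude = p_open * p_close             # C(0): fixes the populations
relax_rate = k_open + k_close            # observed fluctuation rate
RT = 8.314 * 298.0
dG_open = -RT * math.log(K)              # apparent free energy of opening, J/mol
```

Measuring the amplitude and relaxation rate at several temperatures, as in the paper, then yields van't Hoff-style apparent thermodynamics and the non-Arrhenius closing kinetics.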
Abstract:
The topic of femtochemistry is surveyed from both theoretical and experimental points of view. A time-dependent wave packet description of the photodissociation of the O—C—S molecule reveals vibrational motion in the transition-state region and suggests targets for direct experimental observation. Theoretical approaches for treating femtosecond chemical phenomena in condensed phases are featured along with prospects for laser-controlled chemical reactions by using tailored ultrashort chirped pulses. An experimental study of the photoisomerization of retinal in the protein bacteriorhodopsin is discussed with an aim to gain insight into the potential energy surfaces on which this remarkably efficient and selective reaction proceeds. Finally, a prospective view of new frontiers in femtochemistry is given.
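Time-dependent wave packet propagation of the kind described is commonly done with the split-operator FFT method. The sketch below propagates a displaced Gaussian in a 1-D harmonic well (hbar = m = omega = 1), a generic stand-in for vibrational wave packet motion, not a model of the O—C—S surface:

```python
import numpy as np

# Split-operator step: psi <- e^{-iV dt/2} F^{-1} e^{-i k^2 dt/2} F e^{-iV dt/2} psi
N, L, dt = 256, 20.0, 0.01
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
V = 0.5 * x ** 2                              # harmonic potential

psi = np.exp(-0.5 * (x - 1.0) ** 2).astype(complex)   # displaced ground-state packet
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

half_V = np.exp(-0.5j * V * dt)
kinetic = np.exp(-0.5j * k ** 2 * dt)
for _ in range(314):                          # t ~ pi: half an oscillation period
    psi = half_V * np.fft.ifft(kinetic * np.fft.fft(half_V * psi))

norm = float(np.sum(np.abs(psi) ** 2) * dx)           # conserved by unitarity
x_mean = float(np.sum(x * np.abs(psi) ** 2) * dx)     # <x>(t) = cos(t) here
```

For this coherent state the packet simply swings to the other side of the well, <x> going from +1 to about -1, the kind of transition-state-region vibrational motion the survey describes.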
Abstract:
Elucidating the mechanism of folding of polynucleotides depends on accurate estimates of free energy surfaces and a quantitative description of the kinetics of structure formation. Here, the kinetics of hairpin formation in single-stranded DNA are measured after a laser temperature jump. The kinetics are modeled as configurational diffusion on a free energy surface obtained from a statistical mechanical description of equilibrium melting profiles. The effective diffusion coefficient is found to be strongly temperature-dependent in the nucleation step as a result of formation of misfolded loops that do not lead to subsequent zipping. This simple system exhibits many of the features predicted from theoretical studies of protein folding, including a funnel-like energy surface with many folding pathways, trapping in misfolded conformations, and non-Arrhenius folding rates.
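A strongly temperature-dependent effective diffusion coefficient arises naturally on a rough free energy surface; Zwanzig's result for Gaussian roughness, D_eff = D0 * exp(-(eps / kT)^2), gives the flavour. The roughness eps below is an assumed value for illustration, not one fitted to the DNA hairpin data:

```python
import math

def d_eff(T, D0=1.0, eps=2.0):
    """Effective configurational diffusion on a rough surface (Zwanzig 1988):
    D_eff = D0 * exp(-(eps / T)^2) with k_B = 1; eps is the rms roughness
    from traps such as misfolded loops (assumed value)."""
    return D0 * math.exp(-(eps / T) ** 2)

d_hot, d_cold = d_eff(4.0), d_eff(1.0)   # roughness barely felt vs. dominant
```

Because the exponent goes as 1/T^2, a modest temperature drop suppresses the effective diffusion far more than an Arrhenius factor would, mirroring the strong temperature dependence attributed here to misfolded loops in the nucleation step.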
Abstract:
The phenomenon of Manning-Oosawa counterion condensation is given an explicit statistical mechanical and qualitative basis via a dressed polyelectrolyte formalism in connection with the topology of the electrostatic free-energy surface and is derived explicitly in terms of the adsorption excess of ions about the polyion via the nonlinear Poisson-Boltzmann equation. The approach is closely analogous to the theory of ion binding in micelles. Our results not only elucidate a Poisson-Boltzmann analysis, which shows that a fraction of the counterions lie within a finite volume around the polyion even if the volume of the system tends towards infinity, but also provide a direct link between Manning's theta (the number of condensed counterions for each polyion site) and a statistical thermodynamic quantity, namely, the adsorption excess per monomer.
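Manning's theta has a simple closed form for monovalent counterions: theta = 1 - 1/xi for charge-density parameter xi = l_B/b > 1 (and zero below the condensation threshold). The classic B-DNA numbers recover the familiar ~76% condensed fraction:

```python
def manning_theta(lB_nm, b_nm):
    """Condensed counterion fraction per polyion site for monovalent ions:
    theta = 1 - 1/xi with xi = l_B / b, zero when xi <= 1 (no condensation)."""
    xi = lB_nm / b_nm
    return max(0.0, 1.0 - 1.0 / xi)

# B-DNA: axial charge spacing b ~ 0.17 nm; Bjerrum length in water at room
# temperature l_B ~ 0.71 nm, so xi ~ 4.2.
theta_dna = manning_theta(0.71, 0.17)
theta_weak = manning_theta(0.5, 0.7)   # below threshold: no condensation
```

The paper's contribution is to tie this limiting-law theta to a well-defined statistical mechanical quantity, the Poisson-Boltzmann adsorption excess per monomer, rather than to treat it as an ansatz.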
Abstract:
We show numeric evidence that, at low enough temperatures, the potential energy density of a glass-forming liquid fluctuates over length scales much larger than the interaction range. We focus on the behavior of translationally invariant quantities. The growing correlation length is unveiled by studying the finite-size effects. In the thermodynamic limit, the specific heat and the relaxation time diverge as a power law. Both features point towards the existence of a critical point in the metastable supercooled liquid phase.
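The specific heat in such simulations is obtained from energy fluctuations, C = (<E^2> - <E>^2) / (k_B T^2). The sketch below applies the fluctuation formula to synthetic Gaussian energy samples with ordinary extensive scaling (variance proportional to system size); near the suggested critical point the per-particle value would instead grow with size, which is exactly the finite-size effect the paper exploits:

```python
import random

rng = random.Random(5)
kB, T = 1.0, 1.0

def specific_heat(n_particles, sigma_per_particle=0.5, nsamples=20_000):
    """C = (<E^2> - <E>^2) / (kB * T^2) from synthetic energy samples whose
    variance grows linearly with system size (non-critical behaviour)."""
    sigma = sigma_per_particle * n_particles ** 0.5
    es = [rng.gauss(0.0, sigma) for _ in range(nsamples)]
    mean = sum(es) / nsamples
    var = sum((e - mean) ** 2 for e in es) / nsamples
    return var / (kB * T * T)

c_small, c_large = specific_heat(64), specific_heat(256)
```

Here C per particle is size-independent; a divergence of C per particle with growing box size, as reported in the abstract, signals a growing correlation length.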