984 results for Lorenz, Hendrik


Relevance:

10.00%

Publisher:

Abstract:

Homologous desensitization of beta(2)-adrenergic and other G-protein-coupled receptors is a two-step process. After phosphorylation of agonist-occupied receptors by G-protein-coupled receptor kinases, they bind beta-arrestins, which triggers desensitization and internalization of the receptors. Because it is not known which regions of the receptor are recognized by beta-arrestins, we have investigated beta-arrestin interaction and internalization of a set of mutants of the human beta(2)-adrenergic receptor. Mutation of the four serine/threonine residues between residues 355 and 364 led to the loss of agonist-induced receptor-beta-arrestin2 interaction as revealed by fluorescence resonance energy transfer (FRET), translocation of beta-arrestin2 to the plasma membrane, and receptor internalization. Mutation of all seven serine/threonine residues distal to residue 381 did not affect agonist-induced receptor internalization and beta-arrestin2 translocation. A beta(2)-adrenergic receptor truncated distal to residue 381 interacted normally with beta-arrestin2, whereas its ability to internalize in an agonist-dependent manner was compromised. A similar impairment of internalization was observed when only the last eight residues of the C terminus were deleted. Our experiments show that the C terminus distal to residue 381 does not affect the initial interaction between receptor and beta-arrestin, but its last eight amino acids facilitate receptor internalization in concert with beta-arrestin2.


Homologous desensitization of beta(2)-adrenergic receptors has been shown to be mediated by phosphorylation of the agonist-stimulated receptor by G-protein-coupled receptor kinase 2 (GRK2) followed by binding of beta-arrestins to the phosphorylated receptor. Binding of beta-arrestin to the receptor is a prerequisite for subsequent receptor desensitization, internalization via clathrin-coated pits, and the initiation of alternative signaling pathways. In this study we have investigated the interactions between receptors and beta-arrestin2 in living cells using fluorescence resonance energy transfer. We show that (a) the initial kinetics of beta-arrestin2 binding to the receptor is limited by the kinetics of GRK2-mediated receptor phosphorylation; (b) repeated stimulation leads to the accumulation of GRK2-phosphorylated receptor, which can bind beta-arrestin2 very rapidly; and (c) the interaction of beta-arrestin2 with the receptor depends on the activation of the receptor by agonist because agonist withdrawal leads to swift dissociation of the receptor-beta-arrestin2 complex. This fast agonist-controlled association and dissociation of beta-arrestins from prephosphorylated receptors should permit rapid control of receptor sensitivity in repeatedly stimulated cells such as neurons.


Using a recent theoretical approach, we study how global warming impacts the thermodynamics of the climate system by performing experiments with a simplified yet Earth-like climate model. The intensity of the Lorenz energy cycle, the Carnot efficiency, the material entropy production, and the degree of irreversibility of the system change monotonically with the CO2 concentration. Moreover, these quantities feature an approximately linear behaviour with respect to the logarithm of the CO2 concentration over a relatively wide range. These generalized sensitivities suggest that the climate becomes less efficient, more irreversible, and features higher entropy production as it becomes warmer, with changes in the latent heat fluxes playing a predominant role. These results may help explain recent findings obtained with state-of-the-art climate models regarding how increases in the CO2 concentration impact the vertical stratification of the tropical and extratropical atmosphere and the position of the storm tracks.
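The near-linear dependence on the logarithm of the CO2 concentration described above can be illustrated with a simple least-squares fit of a climate quantity against log2(CO2). The concentrations and efficiency values below are invented for illustration only; they are not taken from the paper.

```python
import math

def log2_sensitivity(co2, response):
    """Least-squares slope and intercept of a response variable
    regressed on log2(CO2 concentration). The slope is the change
    per CO2 doubling."""
    x = [math.log2(c) for c in co2]
    n = len(x)
    mx, my = sum(x) / n, sum(response) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, response)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# Hypothetical efficiency values that decrease linearly in log2(CO2):
co2 = [180, 360, 720, 1440]          # ppm (made up)
eta = [0.060, 0.055, 0.050, 0.045]   # Carnot-like efficiency (made up)
slope, intercept = log2_sensitivity(co2, eta)
# slope is the generalized sensitivity: change in efficiency per doubling
```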


The climate belongs to the class of non-equilibrium, forced, and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows one to compute the response of the system in terms of expectation values of explicit and computable functions of the phase space, averaged over the invariant measure of the unperturbed state. We choose as a test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has a well-recognized prototypical value, as it is a spatially extended one-dimensional model and presents the basic ingredients of the actual atmosphere, such as dissipation, advection, and the presence of an external forcing. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions, as well as the integral constraints, can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy.
Some newly obtained empirical closure equations for such parameters allow one to express these properties as explicit functions of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions against the outputs of the simulations to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations of general time patterns into changes in the expectation value of the considered observable for finite as well as infinite time. Finally, we propose a simple yet general methodology for studying general climate change problems on virtually any time scale by resorting only to well-selected simulations and by taking full advantage of ensemble methods. The specific case of the globally averaged surface temperature response to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problems of climate sensitivity, climate prediction, and climate change from a radically new perspective.
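The Lorenz 96 model used as a test bed above is the standard spatially extended system dX_i/dt = (X_{i+1} − X_{i−2}) X_{i−1} − X_i + F with cyclic indices, where the quadratic term plays the role of advection, −X_i of dissipation, and F of the external forcing. A minimal sketch integrating the 40-variable configuration with a fourth-order Runge-Kutta scheme (step size and forcing F = 8 are conventional choices, not prescribed by the abstract):

```python
def lorenz96_tendency(x, forcing=8.0):
    """dX_i/dt = (X_{i+1} - X_{i-2}) * X_{i-1} - X_i + F, cyclic indices."""
    n = len(x)
    return [(x[(i + 1) % n] - x[i - 2]) * x[i - 1] - x[i] + forcing
            for i in range(n)]

def rk4_step(x, dt, forcing=8.0):
    """One fourth-order Runge-Kutta step of the Lorenz 96 model."""
    f = lambda y: lorenz96_tendency(y, forcing)
    k1 = f(x)
    k2 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k1)])
    k3 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k2)])
    k4 = f([xi + dt * ki for xi, ki in zip(x, k3)])
    return [xi + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

# 40-variable configuration; a small perturbation pushes the system off
# the unstable fixed point x_i = F into the chaotic regime:
state = [8.0] * 40
state[0] += 0.01
for _ in range(100):
    state = rk4_step(state, 0.05)
```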


The coadsorption of water with organic molecules under near-ambient pressure and temperature conditions opens up new reaction pathways on model catalyst surfaces that are not accessible in conventional ultrahigh-vacuum surface-science experiments. The surface chemistry of glycine and alanine at the water-exposed Cu{110} interface was studied in situ using ambient-pressure photoemission and X-ray absorption spectroscopy techniques. At water pressures above 10^-5 Torr a significant pressure-dependent decrease in the temperature for dissociative desorption was observed for both amino acids, accompanied by the appearance of a new CN intermediate, which is not observed at lower pressures. The most likely reaction mechanisms involve dehydrogenation induced by O and/or OH surface species resulting from the dissociative adsorption of water. The linear relationship between the inverse decomposition temperature and the logarithm of the water pressure enables determination of the activation energy for the surface reaction, between 213 and 232 kJ/mol, and a prediction of the decomposition temperature at the solid-liquid interface by extrapolating toward the equilibrium vapor pressure. Such experiments near the equilibrium vapor pressure provide important information about elementary surface processes at the solid-liquid interface, which can be retrieved neither under ultrahigh-vacuum conditions nor from interfaces immersed in a solution.
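The stated linear relationship between inverse decomposition temperature and the logarithm of water pressure amounts to an Arrhenius-type analysis: assuming ln p = c − Ea/(R·T_dec), the slope of ln(p) versus 1/T_dec yields −Ea/R. A sketch of that extraction, using synthetic data (the temperatures, pressures, and the 220 kJ/mol value are invented for illustration, chosen only to fall inside the 213-232 kJ/mol range reported above):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def activation_energy(pressures_torr, temps_k):
    """Least-squares slope of ln(p) vs 1/T gives -Ea/R for an
    Arrhenius-type relation; returns Ea in J/mol."""
    x = [1.0 / t for t in temps_k]
    y = [math.log(p) for p in pressures_torr]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return -slope * R

# Synthetic data consistent with a hypothetical Ea of 220 kJ/mol:
ea_true = 220e3
temps = [450.0, 470.0, 490.0, 510.0]                      # K
pressures = [math.exp(30.0 - ea_true / (R * t)) for t in temps]
ea_est = activation_energy(pressures, temps)              # recovers ea_true
```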


Analogue computers provide actual rather than virtual representations of model systems. They are powerful and engaging computing machines that are cheap and simple to build. This two-part Retronics article helps you build (and understand!) your own analogue computer to simulate the Lorenz butterfly that's become iconic for Chaos theory.
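The system the analogue computer integrates is the classic Lorenz '63 model, dx/dt = σ(y − x), dy/dt = x(ρ − z) − y, dz/dt = xy − βz, with the standard butterfly parameters σ = 10, ρ = 28, β = 8/3. A minimal digital counterpart, using a simple forward-Euler step in place of the op-amp integrators:

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz '63 equations - the same
    system the analogue computer solves with op-amp integrators."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

# Trace out part of the butterfly from a standard starting point:
x, y, z = 1.0, 1.0, 1.0
trajectory = []
for _ in range(2000):
    x, y, z = lorenz_step(x, y, z)
    trajectory.append((x, y, z))
```

Plotting x against z of `trajectory` reproduces the familiar two-lobed attractor; on the analogue machine the same curve appears directly on an oscilloscope.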


Almost all research fields in geosciences use numerical models and observations and combine these using data-assimilation techniques. With ever-increasing resolution and complexity, the numerical models tend to be highly nonlinear and also observations become more complicated and their relation to the models more nonlinear. Standard data-assimilation techniques like (ensemble) Kalman filters and variational methods like 4D-Var rely on linearizations and are likely to fail in one way or another. Nonlinear data-assimilation techniques are available, but are only efficient for small-dimensional problems, hampered by the so-called ‘curse of dimensionality’. Here we present a fully nonlinear particle filter that can be applied to higher dimensional problems by exploiting the freedom of the proposal density inherent in particle filtering. The method is illustrated for the three-dimensional Lorenz model using three particles and the much more complex 40-dimensional Lorenz model using 20 particles. By also applying the method to the 1000-dimensional Lorenz model, again using only 20 particles, we demonstrate the strong scale-invariance of the method, leading to the optimistic conjecture that the method is applicable to realistic geophysical problems. Copyright © 2010 Royal Meteorological Society


New ways of combining observations with numerical models are discussed in which the size of the state space can be very large and the model can be highly nonlinear. The observations of the system can also be related to the model variables in highly nonlinear ways, making this data-assimilation (or inverse) problem highly nonlinear. First we discuss the connection between data assimilation and inverse problems, including regularization. We explore the choice of proposal density in a Particle Filter and show how the 'curse of dimensionality' might be beaten. In the standard Particle Filter, ensembles of model runs are propagated forward in time until observations are encountered, rendering it a pure Monte Carlo method. In large-dimensional systems this is very inefficient, and very large numbers of model runs are needed to solve the data-assimilation problem realistically. In our approach we steer all model runs towards the observations, resulting in a much more efficient method. By further 'ensuring almost equal weight' we avoid performing model runs that turn out to be useless. Results are shown for the 40- and 1000-dimensional Lorenz 1995 model.
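The "standard Particle Filter" referred to above is the bootstrap filter: propagate an ensemble with the model, weight each member by its observation likelihood, and resample. A minimal sketch on a one-dimensional random-walk model (the model, noise levels, and observations below are invented stand-ins, not the Lorenz 1995 setup of the paper):

```python
import math, random

random.seed(0)  # make the Monte Carlo run reproducible

def bootstrap_pf_step(particles, obs, model_noise=0.5, obs_noise=1.0):
    """One cycle of the standard (bootstrap) particle filter:
    propagate, weight by the observation likelihood, resample."""
    # Propagate: a random walk stands in for the dynamical model.
    forecast = [p + random.gauss(0.0, model_noise) for p in particles]
    # Weight each member by a Gaussian observation likelihood.
    w = [math.exp(-0.5 * ((obs - p) / obs_noise) ** 2) for p in forecast]
    total = sum(w)
    w = [wi / total for wi in w]
    # Multinomial resampling restores equal weights.
    return random.choices(forecast, weights=w, k=len(forecast))

particles = [random.gauss(0.0, 2.0) for _ in range(500)]   # prior ensemble
for obs in [1.0, 1.2, 1.1, 0.9]:                           # synthetic obs
    particles = bootstrap_pf_step(particles, obs)
mean_estimate = sum(particles) / len(particles)
```

In high dimensions the weights of such a filter collapse onto a few members; the proposal-density freedom exploited in the paper steers the forecasts toward the observations instead, which is what keeps the required ensemble size small.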


A particle filter is a data assimilation scheme that employs a fully nonlinear, non-Gaussian analysis step. Unfortunately, as the size of the state grows, the number of ensemble members required for the particle filter to converge to the true solution increases exponentially. To overcome this, Vaswani [Vaswani N. 2008. IEEE Trans Signal Process 56:4583–97] proposed a new method known as mode tracking to improve the efficiency of the particle filter. When mode tracking, the state is split into two subspaces. One subspace is forecast using the particle filter; the other is treated so that its values are set equal to the mode of the marginal pdf. There are many ways to split the state. One hypothesis is that the best results should be obtained from the particle filter with mode tracking when we mode track the maximum number of unimodal dimensions. The aim of this paper is to test this hypothesis using the three-dimensional stochastic Lorenz equations with direct observations. It is found that mode tracking the maximum number of unimodal dimensions does not always provide the best result. The best choice of states to mode track depends on the number of particles used and on the accuracy and frequency of the observations.


We present an outlook on the climate system thermodynamics. First, we construct an equivalent Carnot engine with efficiency η and frame the Lorenz energy cycle in a macroscale thermodynamic context. Then, by exploiting the second law, we prove that the lower bound to the entropy production is η times the integrated absolute value of the internal entropy fluctuations. An exergetic interpretation is also proposed. Finally, the controversial maximum entropy production principle is reinterpreted as requiring the joint optimization of heat transport and mechanical work production. These results provide tools for climate change analysis and for climate models’ validation.
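The equivalent Carnot engine above has efficiency η = (T_warm − T_cold)/T_warm, built from the mean temperatures of the warm and cold reservoirs of the system; the mechanical work output is then η times the heat input. A worked example with made-up, roughly Earth-like reservoir temperatures (the specific numbers are not from the paper):

```python
def carnot_efficiency(t_warm, t_cold):
    """Efficiency of the equivalent Carnot engine,
    eta = (T_warm - T_cold) / T_warm."""
    return (t_warm - t_cold) / t_warm

# Hypothetical mean temperatures of the warm and cold reservoirs:
t_warm, t_cold = 267.0, 252.0   # K (illustrative values only)
eta = carnot_efficiency(t_warm, t_cold)

# Mechanical work produced per unit area for a given heat input (toy number):
heat_in = 100.0                  # W m^-2
work = eta * heat_in             # W m^-2 available for the Lorenz energy cycle
```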


In this paper the authors exploit two equivalent formulations of the average rate of material entropy production in the climate system to propose an approximate splitting between contributions due to vertical and eminently horizontal processes. This approach is based only on 2D radiative fields at the surface and at the top of the atmosphere. Using 2D fields at the top of the atmosphere alone, lower bounds to the rate of material entropy production and to the intensity of the Lorenz energy cycle are derived. By introducing a measure of the efficiency of the planetary system with respect to horizontal thermodynamic processes, it is possible to gain insight into a previous intuition on the possibility of defining a baroclinic heat engine extracting work from the meridional heat flux. The approximate formula for the material entropy production is verified and used for studying the global thermodynamic properties of the climate models (CMs) included in the Program for Climate Model Diagnosis and Intercomparison (PCMDI)/phase 3 of the Coupled Model Intercomparison Project (CMIP3) dataset under preindustrial climate conditions. It is found that about 90% of the material entropy production is due to vertical processes such as convection, whereas the large-scale meridional heat transport contributes only about 10% of the total. This suggests that the traditional two-box models used for providing a minimal representation of entropy production in planetary systems are not appropriate, whereas a basic—but conceptually correct—description can be framed in terms of a four-box model. The total material entropy production is typically 55 mW m−2 K−1, with discrepancies on the order of 5%, and CMs’ baroclinic efficiencies are clustered around 0.055. The lower bounds on the intensity of the Lorenz energy cycle featured by CMs are found to be around 1.0–1.5 W m−2, which implies that the derived inequality is rather stringent.
When looking at the variability and covariability of the considered thermodynamic quantities, the agreement among CMs is worse, suggesting that the description of feedbacks is more uncertain. The contributions to material entropy production from vertical and horizontal processes are positively correlated, so that no compensation mechanism seems in place. Quite consistently among CMs, the variability of the efficiency of the system is a better proxy for variability of the entropy production due to horizontal processes than that of the large-scale heat flux. The possibility of providing constraints on the 3D dynamics of the fluid envelope based only on 2D observations of radiative fluxes seems promising for the observational study of planets and for testing numerical models.
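The 90%/10% partition quoted above can be made concrete by splitting the typical total material entropy production of 55 mW m−2 K−1 into its vertical and horizontal contributions; a one-line arithmetic check:

```python
# Partition the typical total material entropy production (from the text)
# into vertical (~90%) and horizontal (~10%) contributions.
total_entropy_production = 55.0                      # mW m^-2 K^-1
vertical = 0.9 * total_entropy_production            # convection etc.
horizontal = 0.1 * total_entropy_production          # meridional transport
```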


Using the formalism of the Ruelle response theory, we study how the invariant measure of an Axiom A dynamical system changes as a result of adding noise, and describe how the stochastic perturbation can be used to explore the properties of the underlying deterministic dynamics. We first find the expression for the change in the expectation value of a general observable when a white noise forcing is introduced in the system, both in the additive and in the multiplicative case. We also show that the difference between the expectation value of the power spectrum of an observable in the stochastically perturbed case and of the same observable in the unperturbed case is equal to the variance of the noise times the square of the modulus of the linear susceptibility describing the frequency-dependent response of the system to perturbations with the same spatial patterns as the considered stochastic forcing. This provides a conceptual bridge between the change in the fluctuation properties of the system due to the presence of noise and the response of the unperturbed system to deterministic forcings. Using Kramers-Kronig theory, it is then possible to derive the real and imaginary part of the susceptibility and thus deduce the Green function of the system for any desired observable. We then extend our results to rather general patterns of random forcing, from the case of several white noise forcings, to noise terms with memory, up to the case of a space-time random field. Explicit formulas are provided for each relevant case analysed. As a general result, we find, using an argument of positive-definiteness, that the power spectrum of the stochastically perturbed system is larger at all frequencies than the power spectrum of the unperturbed system. We provide an example of application of our results by considering the spatially extended chaotic Lorenz 96 model. 
These results clarify the property of stochastic stability of SRB measures in Axiom A flows, provide tools for analysing stochastic parameterisations and related closure ansätze to be implemented in modelling studies, and introduce new ways to study the response of a system to external perturbations. Taking into account the chaotic hypothesis, we expect that our results have practical relevance for a more general class of systems than those belonging to Axiom A.
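The central identity above, spectral increase = noise variance × |χ(ω)|², and its positivity at all frequencies can be illustrated on the simplest linear surrogate rather than the Lorenz 96 model itself: for dx/dt = −λx + f(t) the susceptibility is χ(ω) = 1/(λ + iω), so the increase is σ²/(λ² + ω²) > 0 for every ω. A sketch (the linear model, λ, and σ² are illustrative choices, not from the paper):

```python
def susceptibility_sq(omega, lam=1.0):
    """|chi(omega)|^2 for the linear model dx/dt = -lam*x + f(t),
    where chi(omega) = 1 / (lam + i*omega)."""
    return 1.0 / (lam ** 2 + omega ** 2)

def spectrum_increase(omega, noise_var=0.25, lam=1.0):
    """Noise-induced increase of the power spectrum: the variance of
    the white noise times the squared modulus of the susceptibility."""
    return noise_var * susceptibility_sq(omega, lam)

# The increase is strictly positive at every frequency and decays with omega:
freqs = [0.0, 0.5, 1.0, 2.0, 5.0]
increases = [spectrum_increase(w) for w in freqs]
```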


The accurate prediction of the biochemical function of a protein is becoming increasingly important, given the unprecedented growth of both structural and sequence databanks. Consequently, computational methods are required to analyse such data in an automated manner to ensure genomes are annotated accurately. Protein structure prediction methods, for example, are capable of generating approximate structural models on a genome-wide scale. However, the detection of functionally important regions in such crude models, as well as structural genomics targets, remains an extremely important problem. The method described in the current study, MetSite, represents a fully automatic approach for the detection of metal-binding residue clusters applicable to protein models of moderate quality. The method involves using sequence profile information in combination with approximate structural data. Several neural network classifiers are shown to be able to distinguish metal sites from non-sites with a mean accuracy of 94.5%. The method was demonstrated to identify metal-binding sites correctly in LiveBench targets where no obvious metal-binding sequence motifs were detectable using InterPro. Accurate detection of metal sites was shown to be feasible for low-resolution predicted structures generated using mGenTHREADER where no side-chain information was available. High-scoring predictions were observed for a recently solved hypothetical protein from Haemophilus influenzae, indicating a putative metal-binding site.


A new incremental four-dimensional variational (4D-Var) data assimilation algorithm is introduced. The algorithm does not require the computationally expensive integrations with the nonlinear model in the outer loops. Nonlinearity is accounted for by modifying the linearization trajectory of the observation operator based on integrations with the tangent linear (TL) model. This allows us to update the linearization trajectory of the observation operator in the inner loops at negligible computational cost. As a result the distinction between inner and outer loops is no longer necessary. The key idea on which the proposed 4D-Var method is based is that by using Gaussian quadrature it is possible to get an exact correspondence between the nonlinear time evolution of perturbations and the time evolution in the TL model. It is shown that J-point Gaussian quadrature can be used to derive the exact adjoint-based observation impact equations and furthermore that it is straightforward to account for the effect of multiple outer loops in these equations if the proposed 4D-Var method is used. The method is illustrated using a three-level quasi-geostrophic model and the Lorenz (1996) model.
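The Gaussian-quadrature idea underlying the method rests on a standard property: J-point Gauss-Legendre quadrature integrates polynomials of degree up to 2J − 1 exactly, which is what permits an exact correspondence between nonlinear evolution and tangent-linear quantities for sufficiently smooth perturbations. A minimal two-point illustration showing exactness for a cubic (the particular cubic is an arbitrary example):

```python
import math

def gauss_legendre_2pt(f, a, b):
    """Two-point Gauss-Legendre quadrature on [a, b]; exact for
    polynomials of degree <= 2*2 - 1 = 3."""
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    node = 1.0 / math.sqrt(3.0)  # nodes at +/- 1/sqrt(3) on [-1, 1]
    return half * (f(mid - half * node) + f(mid + half * node))

# A cubic is integrated exactly from just two function evaluations:
cubic = lambda t: 4 * t ** 3 - 3 * t ** 2 + 2 * t - 1
exact = 0.0  # antiderivative t^4 - t^3 + t^2 - t, evaluated from 0 to 1
approx = gauss_legendre_2pt(cubic, 0.0, 1.0)
```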