865 results for topological stability
Abstract:
The riboregulator RsmY of Pseudomonas fluorescens strain CHA0 is an example of the small regulatory RNAs belonging to the global Rsm/Csr regulatory systems, which control diverse cellular processes such as glycogen accumulation, motility, or formation of extracellular products in various bacteria. By binding multiple molecules of the small regulatory protein RsmA, RsmY relieves the negative effect of RsmA on the translation of several target genes involved in the biocontrol properties of strain CHA0. RsmY and functionally related riboregulators have repeated GGA motifs predicted to be exposed in single-stranded regions, notably in the loops of hairpins. The secondary structure of RsmY was corroborated by in vivo cleavage with lead acetate. RsmY mutants lacking three or five (out of six) of the GGA motifs showed a reduced ability to derepress the expression of target genes in vivo and failed to bind the RsmA protein efficiently in vitro. The absence of GGA motifs in RsmY mutants resulted in reduced abundance of these transcripts and in a shorter half-life (≤6 min, compared with 27 min for wild-type RsmY). These results suggest that both the interaction of RsmY with RsmA and the stability of RsmY strongly depend on the GGA repeats, and that the ability of RsmY to interact with small regulatory proteins such as RsmA may protect this RNA from degradation.
Abstract:
Background: With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows. Results: In this paper we extend Poisson τ-leap methods to a general class of Runge-Kutta (RK) τ-leap methods. We show that with the proper selection of the coefficients, the variance of the extended τ-leap can be well-behaved, leading to significantly larger step sizes. Conclusions: The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original τ-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
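The basic Poisson τ-leap step that this abstract extends can be sketched in a few lines. The following is a minimal illustration for a single decay channel X → ∅ with propensity a(x) = rate·x; the reaction system, parameter values, and function name are assumptions chosen for the example, not the paper's RK τ-leap scheme.

```python
import numpy as np

def tau_leap(x0, tau, n_steps, rate, seed=0):
    """Poisson tau-leap for the decay reaction X -> 0 with propensity a(x) = rate * x.

    A toy sketch of the standard Poisson tau-leap method, not the
    Runge-Kutta extension proposed in the paper."""
    rng = np.random.default_rng(seed)
    x = x0
    traj = [x]
    for _ in range(n_steps):
        a = rate * x              # propensity of the single reaction channel
        k = rng.poisson(a * tau)  # number of firings in [t, t + tau)
        x = max(x - k, 0)         # each firing removes one molecule
        traj.append(x)
    return traj
```

Unlike the exact SSA, which draws a waiting time per individual firing, one Poisson draw per channel covers the whole step, which is why larger τ values pay off when the variance can be kept under control.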
Abstract:
Background: The cooperative interaction between transcription factors has a decisive role in the control of the fate of the eukaryotic cell. Computational approaches for characterizing cooperative transcription factors in yeast, however, are based on different rationales and provide a low overlap between their results. Because the wealth of information contained in protein interaction networks and regulatory networks has proven highly effective in elucidating functional relationships between proteins, we compared different sets of cooperative transcription factor pairs (predicted by four different computational methods) within the frame of those networks. Results: Our results show that the overlap between the sets of cooperative transcription factors predicted by the different methods is low yet significant. Cooperative transcription factors predicted by all methods are closer and more clustered in the protein interaction network than expected by chance. On the other hand, members of a cooperative transcription factor pair neither seemed to regulate each other nor shared similar regulatory inputs, although they do regulate similar groups of target genes. Conclusion: Despite the different definitions of transcriptional cooperativity and the different computational approaches used to characterize cooperativity between transcription factors, the analysis of their roles in the framework of the protein interaction network and the regulatory network indicates a common denominator for the predictions under study. The knowledge of the shared topological properties of cooperative transcription factor pairs in both networks can be useful not only for designing better prediction methods but also for better understanding the complexities of transcriptional control in eukaryotes.
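One of the network quantities this kind of analysis relies on is the clustering coefficient mentioned throughout the listing. Below is a minimal sketch of the local clustering coefficient on an adjacency-set representation; the toy graph is invented for illustration and is not data from the study.

```python
from itertools import combinations

def clustering(adj, node):
    """Local clustering coefficient: the fraction of a node's neighbour
    pairs that are themselves connected (0.0 if fewer than 2 neighbours)."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
    return 2.0 * links / (k * (k - 1))

# Toy undirected interaction network (assumed data, for illustration only)
adj = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A"},
}
```

For node "A", one of the three neighbour pairs (B–C) is connected, giving a coefficient of 1/3; averaging this quantity over predicted cooperative pairs versus random pairs is one way to test whether they are "more clustered than expected by chance".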
Abstract:
In neurons, the regulation of microtubules plays an important role in neurite outgrowth, axonal elongation, and growth cone steering. SCG10 family proteins are the only known neuronal proteins that have a strong microtubule-destabilizing effect; they are highly enriched in growth cones and are thought to play an important role during axonal elongation. MAP1B, a microtubule-stabilizing protein, is found in growth cones as well, so it was important to test the effect on microtubules when both proteins are present. We used recombinant proteins in microtubule assembly assays and in transfected COS-7 cells to analyze their combined effects in vitro and in living cells, respectively. Individually, both proteins showed their expected activities in microtubule stabilization and destabilization, respectively. In MAP1B/SCG10 double-transfected cells, MAP1B could not protect microtubules from SCG10-induced disassembly in most cells, in particular not in cells containing high levels of SCG10. This suggests that SCG10 is more potent at destabilizing microtubules than MAP1B is at rescuing them. In microtubule assembly assays, MAP1B promoted microtubule formation at a ratio of 1 MAP1B per 70 tubulin dimers, while a ratio of 1 SCG10 per two tubulin dimers was needed to destroy microtubules. In addition to its known binding to tubulin dimers, SCG10 also binds purified microtubules in growth cones of cultured dorsal root ganglion neurons. In conclusion, neuronal microtubules are regulated by the antagonistic effects of MAP1B and SCG10, and fine tuning of the balance between these proteins may be critical for the regulation of microtubule dynamics in growth cones.
Abstract:
Stability berms are commonly constructed where roadway embankments cross soft or unstable ground. Under certain circumstances, the construction of stability berms causes unfavorable environmental impacts, either directly or indirectly, through effects on wetlands, endangered species habitat, stream channelization, longer culvert lengths, larger right-of-way purchases, and construction access limits. In an ever more restrictive regulatory environment, these impacts are problematic. The result is the loss of valuable natural resources to the public, lengthy permitting reviews for the department of transportation and permitting agencies, and additional expenditures of time and money for all parties. The purpose of this project was to review existing stability berm alternatives for potential use in environmentally sensitive areas. The project also evaluates how stabilization technologies are made feasible, desirable, and cost-effective for transportation projects and determines which alternatives afford practical solutions for avoiding and minimizing impacts to environmentally sensitive areas. An online survey of engineers at state departments of transportation was also conducted to assess the frequency and cost-effectiveness of the various stabilization technologies. Geotechnical engineers who responded to the survey overwhelmingly use geosynthetic reinforcement as a suitable and cost-effective solution for stabilizing embankments and cut slopes. By contrast, chemical stabilization and installation of lime/cement columns are rarely employed as remediation measures by state departments of transportation.
Abstract:
This paper builds on previous work of the authors on the relationship between non-quasi-competitiveness (the increase in price caused by an increase in the number of oligopolists) and stability of the equilibrium in the classical Cournot oligopoly model. Although it has been widely accepted in the literature that the loss of quasi-competitiveness is linked, in the long run as new firms enter the market, to instability of the model, the authors in their previous work put forward a model in which a situation of monopoly changed to duopoly, losing quasi-competitiveness while maintaining the stability of the equilibrium. That model could not, at the time, be extended to an arbitrary number of oligopolists. The present paper exhibits such an extension: an oligopoly model in which the loss of quasi-competitiveness persists however many firms are present in the market, and in which the successive Cournot equilibrium points are unique and asymptotically stable. In this way, for the first time, the conjecture that non-quasi-competitiveness and instability are equivalent in the long run is proved false.
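The stability notion at stake here can be made concrete with the textbook linear Cournot model, where equilibrium stability is checked by iterating best responses. The sketch below uses the standard symmetric linear case (inverse demand p = a − bQ, constant marginal cost c), which is quasi-competitive; it is an illustration of best-response dynamics only, not the authors' non-quasi-competitive construction. All parameter values are assumptions.

```python
def cournot_best_response_path(n=2, a=10.0, b=1.0, c=1.0, steps=200, q0=0.1):
    """Iterate simultaneous best responses in a symmetric linear Cournot market.

    Each firm i plays q_i = (a - c - b * Q_{-i}) / (2b) against the rivals'
    current total output Q_{-i}. Returns the quantity vector after `steps`
    rounds; if the process settles, it has reached a Cournot equilibrium."""
    q = [q0] * n
    for _ in range(steps):
        total = sum(q)
        q = [max((a - c - b * (total - qi)) / (2 * b), 0.0) for qi in q]
    return q

# Analytic symmetric equilibrium for comparison: q* = (a - c) / (b * (n + 1))
q_star = (10.0 - 1.0) / (1.0 * (2 + 1))  # = 3.0 for the duopoly case
```

With n = 2 the process converges to q* (the adjustment eigenvalue is −1/2); the classical Theocharis result is that for n ≥ 3 this same simultaneous adjustment becomes unstable, which is the kind of entry-driven instability the paper's model is designed to avoid.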
Abstract:
BACKGROUND: Complex foot and ankle fractures, such as calcaneum fractures or Lisfranc dislocations, are often associated with a poor outcome, especially in terms of gait capacity. Indeed, degenerative changes often lead to chronic pain and chronic functional limitations. Prescription footwear represents an important therapeutic tool during the rehabilitation process. Local Dynamic Stability (LDS) is the ability of the locomotor system to maintain continuous walking by accommodating the small perturbations that occur naturally during walking. Because it reflects the degree of control over the gait, LDS has been advocated as a relevant indicator for evaluating different conditions and pathologies. The aim of this study was to analyze changes in LDS induced by orthopaedic shoes in patients with persistent foot and ankle injuries. We hypothesised that footwear adaptation might help patients improve gait control, which could lead to higher LDS. METHODS: Twenty-five middle-aged inpatients (5 females, 20 males) participated in the study. They were treated for chronic post-traumatic disabilities following ankle and/or foot fractures in a Swiss rehabilitation clinic. During their stay, included inpatients received orthopaedic shoes with custom-made orthoses (insoles). They performed two 30 s walking trials with standard shoes and two 30 s trials with orthopaedic shoes. A triaxial motion sensor recorded 3D accelerations at the lower back. LDS was assessed by computing divergence exponents (maximal Lyapunov exponents) from the acceleration signals. Pain was evaluated with a Visual Analogue Scale (VAS). LDS and pain differences between the trials with standard shoes and those with orthopaedic shoes were assessed. RESULTS: Orthopaedic shoes significantly improved LDS along all three axes (medio-lateral: 10% relative change, paired t-test p < 0.001; vertical: 9%, p = 0.03; antero-posterior: 7%, p = 0.04). A significant decrease in pain level (VAS score -29%) was observed.
CONCLUSIONS: Footwear adaptation led to pain relief and to improved foot and ankle proprioception. It is likely that this enhancement allows patients to better control foot placement, resulting in the higher dynamic stability observed. LDS therefore seems a valuable index that could be used in the early evaluation of footwear outcome in clinical settings.
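The divergence exponent used as the LDS metric is typically estimated along the lines of Rosenstein's algorithm. The following is a deliberately simplified sketch of that general approach (delay embedding, nearest neighbour outside a Theiler window, slope of the mean log separation); the parameter values are assumptions and this is not the authors' exact processing pipeline.

```python
import numpy as np

def divergence_exponent(signal, dim=5, delay=10, horizon=50, theiler=10):
    """Simplified Rosenstein-style estimate of the maximal Lyapunov exponent.

    Sketch only: delay-embed the signal, find each point's nearest neighbour
    outside a Theiler window, track the mean log separation over `horizon`
    steps, and fit its slope. Higher values indicate less stable dynamics."""
    x = np.asarray(signal, dtype=float)
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
    m = n - horizon
    d = np.linalg.norm(emb[:m, None, :] - emb[None, :m, :], axis=2)
    for i in range(m):  # exclude temporally close points (and the point itself)
        lo, hi = max(0, i - theiler), min(m, i + theiler + 1)
        d[i, lo:hi] = np.inf
    nn = np.argmin(d, axis=1)
    logs = np.empty(horizon)
    for k in range(horizon):
        sep = np.linalg.norm(emb[np.arange(m) + k] - emb[nn + k], axis=1)
        logs[k] = np.mean(np.log(sep + 1e-12))
    slope, _ = np.polyfit(np.arange(horizon), logs, 1)
    return slope
```

Applied to trunk acceleration signals, a lower exponent means nearby gait states diverge more slowly, i.e. better-controlled, more stable walking, which is the direction of change the study reports with orthopaedic shoes.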
Abstract:
The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study and much of the following work was based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs. 
Among other things, they usually display broad degree distributions, and show small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economy and sociology are well described neither by a fixed geographical position of the individuals in a regular lattice, nor by a random graph. Furthermore, it is a known fact that network structure can highly influence dynamical phenomena such as the way diseases spread across a population and ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks could give explanations as to why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks using diverse statistical measures. Furthermore, I extract and describe its community structure taking into account the intensity of a collaboration. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, suggesting also an effective view of it as opposed to a historical one. 
Thereafter, I combine evolutionary game theory with several network models along with the studied coauthorship network in order to highlight which specific network properties foster cooperation and shed some light on the various mechanisms responsible for the maintenance of this same cooperation. I point out the fact that, to resist defection, cooperators take advantage, whenever possible, of the degree-heterogeneity of social networks and their underlying community structure. Finally, I show that cooperation level and stability depend not only on the game played, but also on the evolutionary dynamic rules used and the individual payoff calculations.
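The Nowak and May (1992) result cited in this abstract — that spatial structure alone can sustain cooperation — is easy to sketch. The following is a toy version of their spatial game on a torus with von Neumann neighbourhoods and imitate-the-best updating; the grid size, payoff value, and function name are assumptions for illustration, not an exact reproduction of the 1992 setup.

```python
import numpy as np

def step(strat, b):
    """One synchronous round of a Nowak-May-style spatial game on a torus.

    strat: 2D array of 1 (cooperator) / 0 (defector); b > 1: temptation payoff.
    Against each of the 4 von Neumann neighbours: C vs C earns 1, D vs C
    earns b, all other pairings earn 0. Each site then copies the strategy
    of its best-scoring neighbour, keeping its own if none scores higher."""
    nbrs = [np.roll(strat, s, axis=a) for s in (1, -1) for a in (0, 1)]
    coop_nbrs = sum(nbrs)  # number of cooperating neighbours per site
    payoff = np.where(strat == 1, coop_nbrs * 1.0, coop_nbrs * b)
    best_payoff, best_strat = payoff.copy(), strat.copy()
    for s in (1, -1):
        for a in (0, 1):
            p = np.roll(payoff, s, axis=a)
            st = np.roll(strat, s, axis=a)
            better = p > best_payoff
            best_payoff = np.where(better, p, best_payoff)
            best_strat = np.where(better, st, best_strat)
    return best_strat
```

Iterating this map from a random initial grid produces the coexisting clusters of cooperators and defectors that the spatial-structure argument relies on; replacing the grid with a degree-heterogeneous network is the direction the thesis then explores.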
Abstract:
In experiments with two-person sequential games we analyze whether responses to favorable and unfavorable actions depend on the elicitation procedure. In our hot treatment the second player responds to the first player's observed action, while in our cold treatment we follow the strategy method and have the second player decide on a contingent action for each and every possible first-player move, without first observing this move. Our analysis centers on the degree to which subjects deviate from the maximization of their pecuniary rewards as a response to others' actions. Our results show no difference in behavior between the two treatments. We also find evidence of the stability of subjects' preferences with respect to their behavior over time and to the consistency of their choices as first and second mover.
Abstract:
We estimate a forward-looking monetary policy reaction function for the postwar United States economy, before and after Volcker's appointment as Fed Chairman in 1979. Our results point to substantial differences in the estimated rule across periods. In particular, interest rate policy in the Volcker-Greenspan period appears to have been much more sensitive to changes in expected inflation than in the pre-Volcker period. We then compare some of the implications of the estimated rules for the equilibrium properties of inflation and output, using a simple macroeconomic model, and show that the Volcker-Greenspan rule is stabilizing.
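The stabilizing property referred to here hinges on the Taylor principle: a rule that raises the nominal rate more than one-for-one with inflation raises the real rate and damps inflation deviations. The sketch below is a deliberately stylised backward-looking illustration of that principle; the law of motion, parameter values, and function name are assumptions for the example, not the paper's estimated forward-looking model.

```python
def simulate_inflation(phi, pi0=2.0, lam=0.5, steps=40):
    """Toy illustration of the Taylor principle.

    Inflation's deviation from target evolves as
        pi_{t+1} = (1 - lam * (phi - 1)) * pi_t,
    where phi is the interest-rate response to inflation and lam > 0 scales
    how strongly the implied real rate feeds back into inflation. With
    phi > 1 deviations shrink (stabilizing rule); with phi < 1 they grow."""
    path = [pi0]
    for _ in range(steps):
        path.append((1.0 - lam * (phi - 1.0)) * path[-1])
    return path
```

Under this toy dynamic, a rule with phi = 2 (a Volcker-Greenspan-like response) drives a 2-point inflation deviation toward zero, while phi = 0.8 (a pre-Volcker-like response) lets the same deviation explode, which is the qualitative contrast the paper formalises.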
Abstract:
Choosing a financially strong insurance company is important when buying health insurance. You want the company to still be in business when you have claims, which may arise 20 to 30 years from now. Insurance companies selling insurance in Iowa have met the minimum legal standards to be licensed by the State of Iowa Insurance Division. This licensure does not mean the company has a high financial stability rating. Several independent rating agencies evaluate the financial stability of insurance companies. The rating for an individual insurance company is an opinion as to its financial strength and ability to pay claims in the future. When evaluating a company, a rating agency may consider the company's balance sheet strength, operating performance, and business management and strategies.
Abstract:
Avidity of Ag recognition by tumor-specific T cells is one of the main parameters that determine the potency of a tumor rejection Ag. In this study we show that the relative efficiency of staining of tumor Ag-specific T lymphocytes with the corresponding fluorescent MHC class I/peptide multimeric complexes can vary considerably with staining conditions and does not necessarily correlate with avidity of Ag recognition. Instead, we found a clear correlation between avidity of Ag recognition and the stability of the interaction between MHC class I/peptide multimeric complexes and the TCR, as measured in dissociation kinetics experiments. These findings are relevant for both the identification and the isolation of tumor-reactive CTL.
Abstract:
It is widely accepted in the literature about the classical Cournot oligopoly model that the loss of quasi-competitiveness is linked, in the long run as new firms enter the market, to instability of the equilibrium. In this paper, though, we present a model in which a stable unique symmetric equilibrium is reached for any number of oligopolists as industry price increases with each new entry. Consequently, the suspicion that non-quasi-competitiveness implies, in the long run, instability is proved false.
Abstract:
We develop a coordination game to model interactions between fundamentals and liquidity during unstable periods in financial markets. We then propose a flexible econometric framework for estimation of the model and analysis of its quantitative implications. The specific empirical application is carry trades in the yen-dollar market, including the turmoil of 1998. We find a generally very deep market, with low information disparities amongst agents. We occasionally observe episodes of market fragility, or turmoil with "up by the escalator, down by the elevator" patterns in prices. The key role of strategic behavior in the econometric model is also confirmed.