987 results for complexity theory
Abstract:
The article is concerned with the formal definition of a largely unnoticed factor in narrative structure. It rests on three assumptions: (1) the semantics of a written text depend, among other factors, directly on its visual arrangement in space; (2) the formal structure of a text has to correspond to that of its spatial presentation; and (3) these assumptions also hold for narrative texts, which in modern times typically conceal their spatial dimension behind a low-key linear layout. On this basis it is argued that, however low-key, the expected material shape of a given narrative shapes how its author configures the plot. The 'implied book' thus denotes an author's historically plausible, not necessarily conscious idea of how the text still in the process of creation will be materially presented and, under those circumstances, visually absorbed. Assuming that an author's knowledge of this later (potentially) realized material form influences the composition, the implied book is to be understood as a text-genetically determined, structuring moment of the text. Reconstructed historically, it thus serves the methodical analysis of the structural characteristics of a completed text.
Abstract:
By means of computer simulations and solution of the equations of mode coupling theory (MCT), we investigate the role of intramolecular barriers in several dynamic aspects of nonentangled polymers. The investigated dynamic range extends from the caging regime characteristic of glass-formers to the relaxation of the chain Rouse modes. We review our recent work on this question, provide new results, and critically discuss the limitations of the theory. Solutions of the MCT for the structural relaxation reproduce qualitative trends of the simulations for weak and moderate barriers. However, a progressive discrepancy is revealed as the limit of stiff chains is approached. This disagreement does not seem related to dynamic heterogeneities, which indeed are not enhanced by increasing barrier strength. Nor is it connected with a breakdown of the convolution approximation for three-point static correlations, which retains its validity for stiff chains. These findings suggest the need for an improvement of the MCT equations for polymer melts. Concerning the relaxation of the chain degrees of freedom, MCT provides a microscopic basis for time scales from chain reorientation down to the caging regime. It rationalizes, from first principles, the observed deviations from the Rouse model on increasing the barrier strength. These include anomalous scaling of the relaxation times, long-time plateaux, and a nonmonotonic wavelength dependence of the mode correlators.
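As a point of reference for the "deviations from the Rouse model" mentioned above, here is a minimal Python sketch of how discrete Rouse modes and their correlators are typically computed from a bead trajectory; the trajectory array, function names and analysis workflow are illustrative assumptions, not material from the paper. In the ideal Rouse model the mode correlators decay exponentially with relaxation times scaling roughly as (N/p)^2 at low mode number p, so anomalous scaling, long-time plateaux, or a nonmonotonic p-dependence appear as departures from that reference.

```python
import numpy as np

def rouse_modes(traj):
    """Discrete Rouse modes X_p(t) of a single chain.

    traj : array of shape (n_frames, N, 3) with bead positions
           (an assumed input format, for illustration only).
    Returns an array of shape (n_frames, N, 3), mode index p = 0..N-1.
    """
    n_frames, N, _ = traj.shape
    n = np.arange(1, N + 1)                     # bead index 1..N
    p = np.arange(N).reshape(-1, 1)             # mode index 0..N-1
    # Standard discrete cosine transform used in Rouse-mode analysis
    weights = np.cos(np.pi * p * (n - 0.5) / N) / N   # shape (N, N)
    return np.einsum('pn,tnd->tpd', weights, traj)

def mode_autocorrelation(modes, p, max_lag):
    """Normalized correlator <X_p(t).X_p(0)> / <X_p^2> for mode p."""
    x = modes[:, p, :]
    c0 = np.mean(np.sum(x * x, axis=1))
    acf = [np.mean(np.sum(x[:len(x) - lag] * x[lag:], axis=1)) / c0
           for lag in range(max_lag)]
    return np.array(acf)

# Ideal Rouse reference: tau_p proportional to 1 / sin^2(p*pi/(2N)),
# i.e. roughly (N/p)^2 for small p; barrier-induced deviations show up as
# stretched decays, plateaux, or a nonmonotonic p-dependence of tau_p.
```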
Abstract:
We present a new phenomenological approach to nucleation, based on the combination of the extended modified liquid drop model and dynamical nucleation theory. The model rests on a new cluster definition that properly includes the effect of fluctuations and is consistent both thermodynamically and kinetically. It successfully predicts the free energy of formation of the critical nucleus using only macroscopic thermodynamic properties. It also accounts for the spinodal and provides excellent agreement with the results of recent simulations.
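For orientation, the sketch below evaluates the classical nucleation theory (CNT) expressions for the critical radius and free-energy barrier, r* = 2*sigma*v_m/(k_B*T*ln S) and Delta G* = 16*pi*sigma^3*v_m^2/(3*(k_B*T*ln S)^2), which are the usual macroscopic baseline such models are compared against. This is not the authors' extended modified liquid drop / dynamical nucleation theory model, and the numerical inputs are assumed, roughly water-like values chosen only for illustration.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def cnt_critical_cluster(sigma, v_mol, temperature, supersaturation):
    """Classical nucleation theory estimate of the critical cluster.

    sigma           : planar surface tension [J/m^2]
    v_mol           : molecular volume in the liquid [m^3]
    temperature     : T [K]
    supersaturation : S = p / p_eq (> 1)

    Returns (r_star [m], delta_G_star [J]); shown only as the standard
    macroscopic baseline, not the model proposed in the paper.
    """
    kt_ln_s = K_B * temperature * math.log(supersaturation)
    r_star = 2.0 * sigma * v_mol / kt_ln_s
    dg_star = 16.0 * math.pi * sigma**3 * v_mol**2 / (3.0 * kt_ln_s**2)
    return r_star, dg_star

# Illustrative, water-like numbers at 300 K and S = 3 (assumed, not taken
# from the paper): critical radius ~1 nm, barrier of a few tens of kT.
r_c, dG = cnt_critical_cluster(sigma=0.072, v_mol=3.0e-29,
                               temperature=300.0, supersaturation=3.0)
print(r_c, dG / (K_B * 300.0))   # radius [m] and barrier in units of kT
```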
Abstract:
We present a model in which particles (or individuals of a biological population) disperse with a rest time between consecutive motions (or migrations) that may take several possible values from a discrete set. Particles (or individuals) may also react (or reproduce). We derive a new equation for the effective rest time T˜ of the random walk. Application to the Neolithic transition in Europe makes it possible to derive more realistic theoretical values for its wavefront speed than those following from the single-delay framework presented previously [J. Fort and V. Méndez, Phys. Rev. Lett. 82, 867 (1999)]. The new results are consistent with the archaeological observations of this important historical process.
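To make concrete how a rest time lowers the predicted front speed, the hedged sketch below compares the classical Fisher front speed, v = 2*sqrt(a*D), with the single-delay correction v = 2*sqrt(a*D)/(1 + a*T/2) commonly quoted for the time-delayed framework cited above. Both that expression and all parameter values are stated here as assumptions for illustration; the paper's multi-delay result and its effective rest time T˜ are not reproduced.

```python
import math

def fisher_speed(a, D):
    """Classical Fisher-KPP front speed, no rest time."""
    return 2.0 * math.sqrt(a * D)

def single_delay_speed(a, D, T):
    """Front speed with a single rest time T, 2*sqrt(a*D) / (1 + a*T/2).

    Stated as an assumption about the single-delay framework cited in the
    abstract; the paper itself replaces T by an effective rest time.
    """
    return 2.0 * math.sqrt(a * D) / (1.0 + 0.5 * a * T)

# Illustrative demographic numbers (assumed, not taken from the paper):
a = 0.03    # population growth rate [1/yr]
D = 15.0    # diffusivity [km^2/yr]
T = 25.0    # rest time between migrations, ~one generation [yr]
print(fisher_speed(a, D), single_delay_speed(a, D, T))   # speeds in km/yr
```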
Abstract:
The "one-gene, one-protein" rule, coined by Beadle and Tatum, has been fundamental to molecular biology. The rule implies that the genetic complexity of an organism depends essentially on its gene number. The discovery, however, that alternative gene splicing and transcription are widespread phenomena dramatically altered our understanding of the genetic complexity of higher eukaryotic organisms; in these, a limited number of genes may potentially encode a much larger number of proteins. Here we investigate yet another phenomenon that may contribute to generate additional protein diversity. Indeed, by relying on both computational and experimental analysis, we estimate that at least 4%-5% of the tandem gene pairs in the human genome can be eventually transcribed into a single RNA sequence encoding a putative chimeric protein. While the functional significance of most of these chimeric transcripts remains to be determined, we provide strong evidence that this phenomenon does not correspond to mere technical artifacts and that it is a common mechanism with the potential of generating hundreds of additional proteins in the human genome.
Abstract:
The theory of language has occupied a special place in the history of Indian thought. Indian philosophers give particular attention to the analysis of the cognition obtained from language, known under the generic name of śābdabodha. This term is used to denote, among other things, the cognition episode of the hearer, the content of which is described in the form of a paraphrase of a sentence represented as a hierarchical structure. Philosophers subject the meaning of the component items of a sentence and their relationships to a thorough examination, and represent the content of the resulting cognition as a paraphrase centred on one meaning element, which is taken as the principal qualificand (mukhyaviśesya) and is qualified by the other meaning elements. This analysis is the object of continuous debate over a period of more than a thousand years between the philosophers of the schools of Mīmāmsā, Nyāya (mainly in its Navya form) and Vyākarana. While these philosophers are in complete agreement on the idea that the cognition of sentence meaning has a hierarchical structure, and share the concept of a single principal qualificand (qualified by other meaning elements), they strongly disagree on the question of which meaning element has this role and by which morphological item it is expressed. This disagreement is the central point of their debate and gives rise to competing versions of the theory. The Mīmāmsakas argue that the principal qualificand is what they call bhāvanā ('bringing into being', 'efficient force' or 'productive operation'), expressed by the verbal affix and distinct from the specific procedures signified by the verbal root; the Naiyāyikas generally take it to be the meaning of the word with the first case ending, while the Vaiyākaranas take it to be the operation expressed by the verbal root. All the participants rely on the Pāninian grammar, insofar as the Mīmāmsakas and Naiyāyikas do not compose a new grammar of Sanskrit, but they use different interpretive strategies to justify their views, which are often in overt contradiction with the interpretation of the Pāninian rules accepted by the Vaiyākaranas. In each of the three positions, weakness in one area is compensated by strength in another, and the cumulative force of the total argumentation shows that no position can be declared correct or overall superior to the others. This book is an attempt to understand this debate and to show that, to make full sense of the irreconcilable positions of the three schools, one must go beyond linguistic factors and consider the very beginnings of each school's concern with the issue under scrutiny. The texts, and particularly the late texts of each school, present very complex versions of the theory, yet the key to understanding why these positions remain irreconcilable seems to lie elsewhere, despite extensive argumentation involving a great deal of linguistic and logical technicality. Historically, this theory arises in Mīmāmsā (with Śabara and Kumārila), then in Nyāya (with Udayana), in a doctrinal and theological context, as a byproduct of the debate over Vedic authority. The Navya-Vaiyākaranas enter this debate last (with Bhattoji Dīksita and Kaunda Bhatta), with the declared aim of refuting the arguments of the Mīmāmsakas and Naiyāyikas by bringing to light the shortcomings in their understanding of Pāninian grammar.
The central argument has focused on the capacity of the initial contexts, with the network of issues to which the principal qualificand theory is connected, to render intelligible the presuppositions and aims behind the complex linguistic justification of the classical and late stages of this debate. Reading the debate in this light not only reveals the rationality and internal coherence of each position beyond the linguistic arguments, but makes it possible to understand why the thinkers of the three schools have continued to hold on to three mutually exclusive positions. They are defending not only their version of the principal qualificand theory, but (though not openly acknowledged) the entire network of arguments, linguistic and/or extra-linguistic, to which this theory is connected, as well as the presuppositions and aims underlying these arguments.
Abstract:
Two-way alternating automata were introduced by Vardi in order to study the satisfiability problem for the modal μ-calculus extended with backwards modalities. In this paper, we present a very simple proof, by way of Wadge games, of the strictness of the hierarchy of Mostowski indices of two-way alternating automata over trees.
Abstract:
The spatial resolution of hydrological models and of conceptualized images of subsurface hydrological processes often exceeds the resolution of the data collected with classical instrumentation at the field scale. In recent years, the application of hydrogeophysical methods at the field scale has made it possible to progressively narrow this inherent gap to point-like field data. Among the common geophysical exploration techniques, electric and electromagnetic methods arguably have the greatest sensitivity to hydrologically relevant parameters. Of particular interest in this context are induced polarization (IP) measurements, which essentially constrain the capacity of a probed subsurface region to store an electrical charge. In the absence of metallic conductors, the IP response is largely driven by current conduction along grain surfaces. This offers the prospect of linking such measurements to the characteristics of the solid-fluid interface and thus, at least in unconsolidated sediments, should allow first-order estimates of the permeability structure.

While the IP effect is well explored through laboratory experiments and in part verified through field data for clay-rich environments, the applicability of IP-based characterization to clay-poor aquifers is not clear. For example, polarization mechanisms such as membrane polarization are not applicable in the rather wide pore systems of clay-free sands, and the direct transposition of Schwarz's theory, which relates the polarization of spheres to the relaxation mechanism of polarized cells, to complex natural sediments yields ambiguous results.

In order to improve our understanding of the structural origins of IP signals in such environments, as well as their correlation with pertinent hydrological parameters, various laboratory measurements have been conducted. We consider saturated quartz samples with a grain size spectrum varying from fine sand to fine gravel, that is, grain diameters between 0.09 and 5.6 mm, as well as corresponding mixtures that can be regarded as proxies for widespread alluvial deposits. The pore space characteristics are altered by changing (i) the grain size spectra, (ii) the degree of compaction, and (iii) the level of sorting. We then examine how these changes affect the SIP response, the hydraulic conductivity, and the specific surface area of the samples, while keeping any electrochemical variability during the measurements as small as possible. The results do not follow simple assumed relationships to single parameters such as grain size: the complexity of naturally occurring media is not yet sufficiently represented when modelling IP. At the same time, a simple correlation with permeability was found to be strong and consistent. Hence, adaptations aimed at better representing the geo-structure of natural porous media were applied to the simplified model space used in Schwarz's theory of the IP effect. The resulting semi-empirical relationship was found to predict the IP effect, and its relation to grain size and permeability, more accurately. Combined with recent findings on the effect of pore-fluid electrochemistry and with advanced complex resistivity tomography, these results will allow us to picture diverse aspects of the subsurface with relative certainty.
Within the framework of single measurement campaigns, hydrologists can then collect data carrying information about the geo-structure and geo-chemistry of the subsurface. However, additional research efforts will be necessary to further improve the understanding of the physical origins of the IP effect and to minimize the potential for false interpretations.

-

In the study of subsurface hydrological processes and characteristics, the spatial resolution provided by hydrological models often exceeds the resolution of field data collected with classical hydrological methods. Recently, it has become increasingly possible to reduce this spatial divergence between numerical models and field data through the use of geophysical methods, notably geoelectrical ones. Among the electrical methods, induced polarization (IP) makes it possible to represent the capacity of porous rocks and soils to store an electrical charge. In the absence of metals in the subsurface, this effect is largely influenced by the surface characteristics of the materials. IP measurements therefore provide information on the interfaces between solids and fluids in porous materials, which can be linked to the permeability that is likewise governed by these same parameters. The induced polarization effect has been studied in various laboratory investigations as well as in the field. Owing to the weak polarization capacity of sandy materials compared with clays, their characterization by the IP effect remains difficult to interpret coherently for heterogeneous environments.

To improve knowledge of the importance of the structure of sandy subsurface material for the IP effect and for hydrological parameters, we carried out a variety of laboratory measurements. Specifically, we considered quartz sand samples with grain size distributions ranging from fine sand to fine gravel, that is, diameters between 0.09 and 5.6 mm. The characteristics of the pore space were altered by modifying (i) the grain size distribution, (ii) the degree of compaction, and (iii) the level of heterogeneity in the grain size distribution. We then studied how these changes influence the IP effect, the permeability, and the specific surface area of the samples. Electrochemical parameters were kept to a minimum during the measurements. The results show no simple relation to petrophysical parameters such as, for example, grain size. The complexity of natural media is not yet sufficiently represented by models of IP processes. Nevertheless, the simple correlation between the IP effect and permeability is strong and consistent. Consequently, Schwarz's theory of the IP effect was adapted in a semi-empirical manner to better estimate the relation between IP results and the parameters grain size and permeability. Our results concerning the influence of material texture, and those concerning the effect of pore-fluid electrochemistry, will make it possible to visualize various aspects of the subsurface. With such geoelectrical measurements, hydrologists can collect data containing information on the structure and fluid chemistry of the subsurface. Nevertheless, more research on the physical origins of the IP effect is needed in order to minimize the potential risk of misinterpreting the data.
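To make the link between grain size and IP relaxation more tangible, the sketch below combines two textbook ingredients referred to in the abstract: a Schwarz-type relaxation time for a polarized sphere, tau = a^2/(2*D_s) with a the grain radius and D_s the surface counterion diffusion coefficient, and a Pelton-style Cole-Cole description of the complex resistivity spectrum. The parameter values and function names are illustrative assumptions; the thesis's own semi-empirical adaptation is not reproduced here.

```python
import numpy as np

def schwarz_relaxation_time(grain_radius, d_surface=1.0e-9):
    """Schwarz-type relaxation time tau = a^2 / (2 * D_s).

    grain_radius : grain (sphere) radius [m]
    d_surface    : surface counterion diffusion coefficient [m^2/s]
                   (the default is an assumed order-of-magnitude value)
    """
    return grain_radius**2 / (2.0 * d_surface)

def cole_cole_resistivity(freq, rho0, m, tau, c):
    """Pelton-style Cole-Cole complex resistivity spectrum.

    rho0 : DC resistivity [ohm m], m : chargeability, tau : relaxation
    time [s], c : frequency exponent. Returns a complex array over freq [Hz].
    """
    iwt = (1j * 2.0 * np.pi * freq * tau) ** c
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + iwt)))

# Illustrative use: fine-sand vs. fine-gravel radii spanning the grain-size
# range quoted in the abstract (0.09 mm and 5.6 mm diameters).
freqs = np.logspace(-2, 3, 50)
for radius in (0.045e-3, 2.8e-3):
    tau = schwarz_relaxation_time(radius)
    spec = cole_cole_resistivity(freqs, rho0=100.0, m=0.05, tau=tau, c=0.5)
    peak = freqs[np.argmax(-np.angle(spec))]
    # Larger grains give longer relaxation times and a lower phase-peak
    # frequency (here clipped to the bottom of the sweep for the gravel case).
    print(radius, tau, peak)
```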
Abstract:
We report on a study of nonequilibrium ordering in the reaction-diffusion lattice gas. This is a kinetic model that relaxes towards steady states under the simultaneous competition of a thermally activated creation-annihilation (reaction) process at temperature T and a diffusion process driven by a heat bath at temperature T′ ≠ T. We investigate the phase diagram as one varies T and T′, the system dimension d, the relative a priori probabilities of the two processes, and their dynamical rates. We compare mean-field theory, new Monte Carlo data, and known exact results for some limiting cases. In particular, for d = 2 with Metropolis rates we find no numerical evidence of Landau critical behavior, but instead Onsager critical points and a variety of first-order phase transitions.
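A minimal Monte Carlo sketch of this kind of two-temperature dynamics is given below, written in spin language (occupation n mapped to s = 2n - 1): with probability p a single-site flip (the creation-annihilation "reaction") is attempted with a Metropolis rate at temperature T, and otherwise a nearest-neighbour exchange (diffusion) is attempted with a Metropolis rate at temperature T′. The lattice size, parameter values and function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 32                           # linear lattice size (assumed)
J = 1.0                          # nearest-neighbour coupling
NEIGHBOR_STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
spins = rng.choice([-1, 1], size=(L, L))

def local_field(s, i, j):
    """Sum of the four nearest-neighbour spins (periodic boundaries)."""
    return (s[(i + 1) % L, j] + s[(i - 1) % L, j] +
            s[i, (j + 1) % L] + s[i, (j - 1) % L])

def metropolis(delta_e, temperature):
    """Standard Metropolis acceptance rule."""
    return delta_e <= 0 or rng.random() < np.exp(-delta_e / temperature)

def attempt_move(s, T_reaction, T_diffusion, p_reaction):
    """One attempted move of the competing reaction/diffusion dynamics."""
    i, j = rng.integers(L, size=2)
    if rng.random() < p_reaction:
        # 'Reaction' (creation-annihilation): single flip, Metropolis rate at T
        dE = 2.0 * J * s[i, j] * local_field(s, i, j)
        if metropolis(dE, T_reaction):
            s[i, j] *= -1
    else:
        # 'Diffusion': nearest-neighbour exchange, Metropolis rate at T'
        di, dj = NEIGHBOR_STEPS[rng.integers(4)]
        k, m = (i + di) % L, (j + dj) % L
        d = s[i, j] - s[k, m]
        if d != 0:
            # Energy change for swapping the two unlike spins (the shared
            # bond is unchanged; the partner is included in each local field)
            dE = J * d * (local_field(s, i, j) - local_field(s, k, m)) + J * d * d
            if metropolis(dE, T_diffusion):
                s[i, j], s[k, m] = s[k, m], s[i, j]

# Illustrative run: equal a priori probabilities, reaction bath below and
# diffusion bath above the Onsager temperature (values assumed for demo only).
for _ in range(200 * L * L):
    attempt_move(spins, T_reaction=1.5, T_diffusion=4.0, p_reaction=0.5)
print(abs(spins.mean()))         # order parameter (magnetization in spin language)
```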
Abstract:
Closing talk of the Open Access Week 2011 at the UOC, by Josep Jover. Why do altruistic strategies beat selfish ones in the spheres of both free software and the #15m movement? The #15m movement, like software but unlike tangible goods, cannot be owned. It can be used (by joining it) by an indeterminate number of people without depriving anyone else of the chance to do the same. And that turns everything on its head: how universities manage information and what their mission is in this new society. In the immediate future, universities will be valued not for the information they harbour, which will always be richer and more extensive beyond their walls, but rather for their capacity to create critical masses, whether of knowledge, research, skill-building, or networks of peers... Universities must implement the new model or risk becoming obsolete.
Abstract:
Different organisations and countries often reach different conclusions about the appropriateness of introducing a genetic screening test in the general population. This article describes the complexity of screening based on genetic tests. Using the example of cystic fibrosis, for which a national working group is currently evaluating the appropriateness of genetic screening, the authors highlight situations in which screening recommendations are sometimes based on the emergence of new technologies (for example, genetic testing) and on public opinion rather than on evidence. They also present the ethical and economic issues of genetic screening for cystic fibrosis. [Abstract] Various institutions and countries often reach different conclusions about the utility of introducing a newborn screening test in the general population. This paper highlights the complexity of population screening involving genetic tests. Using the example of cystic fibrosis genetic screening, the pertinence of which a Swiss Working Group for Cystic Fibrosis is currently evaluating, we show that screening recommendations are often based more on expert opinion and emerging new technologies than on evidence. We also present some ethical and economic issues related to cystic fibrosis genetic screening.