61 results for subgrid-scale models
Abstract:
Accurate modeling of flow instabilities requires computational tools able to deal with several interacting scales, from the scale at which fingers are triggered up to the scale at which their effects need to be described. The Multiscale Finite Volume (MsFV) method offers a framework to couple fine- and coarse-scale features by solving a set of localized problems which are used both to define a coarse-scale problem and to reconstruct the fine-scale details of the flow. The MsFV method can be seen as an upscaling-downscaling technique, which is computationally more efficient than standard discretization schemes and more accurate than traditional upscaling techniques. We show that, although the method has proven accurate in modeling density-driven flow under stable conditions, the accuracy of the MsFV method deteriorates in the case of unstable flow, and an iterative scheme is required to control the localization error. To avoid the large computational overhead of the iterative scheme, we suggest several adaptive strategies for both flow and transport. In particular, the concentration gradient is used to identify a front region where instabilities are triggered and an accurate (iteratively improved) solution is required. Outside the front region the problem is upscaled and both flow and transport are solved only at the coarse scale. This adaptive strategy leads to very accurate solutions at roughly the same computational cost as the non-iterative MsFV method. In many circumstances, however, an accurate description of flow instabilities requires a refinement of the computational grid rather than a coarsening. For these problems, we propose a modified iterative MsFV, which can be used as a downscaling method (DMsFV). Compared to other grid refinement techniques, the DMsFV clearly separates the computational domain into refined and non-refined regions, which can be treated separately and matched later.
This gives great flexibility to employ different physical descriptions in different regions, where different equations could be solved, offering an excellent framework to construct hybrid methods.
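The gradient-based adaptivity described above can be illustrated with a minimal sketch (assumptions: a regular 2-D grid, a simple finite-difference gradient, and an arbitrary threshold; this is not the authors' actual MsFV implementation):

```python
import numpy as np

def flag_front_cells(c, block, threshold):
    """Flag coarse cells whose fine-scale concentration gradient is steep.

    c         -- 2-D fine-scale concentration field
    block     -- number of fine cells per coarse cell (per direction)
    threshold -- gradient magnitude above which a cell joins the front region
    """
    gx, gy = np.gradient(c)
    grad = np.hypot(gx, gy)
    n = c.shape[0] // block
    flags = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            tile = grad[i * block:(i + 1) * block, j * block:(j + 1) * block]
            flags[i, j] = tile.max() > threshold
    return flags

# Synthetic sharp front at x = 0.5 (a stand-in for a triggered finger)
x = np.linspace(0.0, 1.0, 64)
c = np.tile(1.0 / (1.0 + np.exp((x - 0.5) * 80.0)), (64, 1))
front = flag_front_cells(c, block=8, threshold=0.1)
```

Only the flagged coarse cells would then be solved iteratively at the fine scale; elsewhere, flow and transport stay on the coarse grid.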
Abstract:
Debris flows are among the most dangerous processes in mountainous areas due to their rapid rate of movement and long runout zone. Sudden and rather unexpected impacts not only damage buildings and infrastructure but also threaten human lives. Medium- to regional-scale susceptibility analyses allow the identification of the most endangered areas and suggest where further detailed studies have to be carried out. Since data availability for larger regions is mostly the key limiting factor, empirical models with low data requirements are suitable for first overviews. In this study a susceptibility analysis was carried out for the Barcelonnette Basin, situated in the southern French Alps. By means of a methodology based on empirical rules for source identification and the empirical angle-of-reach concept for the 2-D runout computation, a worst-case scenario was first modelled. In a second step, scenarios for high-, medium- and low-frequency events were developed. A comparison with the footprints of a few mapped events indicates reasonable results but suggests a high dependency on the quality of the digital elevation model. This fact emphasises the need for a careful interpretation of the results, while remaining conscious of the inherent assumptions of the model used and the quality of the input data.
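The angle-of-reach (Fahrböschung) concept used above for the runout computation relates the total drop height H to the horizontal travel distance L via tan(α) = H/L. A minimal sketch (the 500 m drop and 11° reach angle are purely illustrative values, not figures from the study):

```python
import math

def runout_length(drop_height_m, reach_angle_deg):
    """Horizontal runout distance from the empirical angle of reach:
    tan(alpha) = H / L  =>  L = H / tan(alpha)."""
    return drop_height_m / math.tan(math.radians(reach_angle_deg))

# A hypothetical source 500 m above the valley floor, 11-degree reach angle
length = runout_length(500.0, 11.0)
```

Scenario magnitudes can then be encoded as different reach angles, lower angles producing longer modelled runouts, which is one simple way frequency scenarios alter the predicted footprint in such empirical schemes.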
Abstract:
This study investigated the psychometric properties of the Horizontal and Vertical Individualism and Collectivism Scale (HVIC) and the Auckland Individualism and Collectivism Scale (AICS). The sample consisted of 1,403 working individuals from Switzerland (N = 585) and South Africa (N = 818). Principal component factor analyses indicated that a two-factor structure replicated well across the two countries for both scales. In addition, the HVIC four-factor structure replicated well across countries, whereas the responsibility dimension of individualism of the AICS replicated poorly. Confirmatory factor analyses provided satisfactory support for the original theoretical models of both the HVIC and the AICS. Measurement equivalence indices indicated that the cross-cultural replicability properties of both instruments are generally acceptable. However, canonical correlations and correlations between the HVIC and AICS dimensions confirm that these two instruments differ in their underlying conceptions of the individualism and collectivism constructs, suggesting that they assess individualism and collectivism differently.
Abstract:
There is a debate on whether the influence of biotic interactions on species distributions can be detected at macro-scale levels. Whereas the influence of biotic interactions on spatial arrangements is beginning to be studied at local scales, similar studies at macro-scale levels are scarce. There is no example disentangling the influence of predator-prey interactions on species distributions at macro-scale levels from other similarities with related species. In this study we aimed to disentangle predator-prey interactions from species distribution data following an experimental approach that included a factorial design. As a case study we selected the short-toed eagle because of its known specialization on certain prey reptiles. We used presence-absence data at a 100 km² spatial resolution to extract the explanatory capacity of different environmental predictors (five abiotic and two biotic predictors) on the short-toed eagle distribution in peninsular Spain. Abiotic predictors were relevant climatic and topographic variables, and the biotic predictors were prey richness and forest density. In addition to the short-toed eagle, we also obtained the predictors' explanatory capacities i) for species of the same family, Accipitridae (as references), ii) for other birds of different families (as controls) and iii) for species with randomly selected presences (as null models). We ran 650 models to test for similarities of the short-toed eagle, controls and null models with reference species, assessed by regressions of explanatory capacities. We found higher similarities between the short-toed eagle and other species of the family Accipitridae than for the other two groups. Once corrected for the family effect, our analyses revealed a signal of predator-prey interaction embedded in species distribution data.
This result was corroborated with additional analyses testing for differences in the concordance between the distributions of different bird categories and the distributions of either prey or non-prey species of the short-toed eagle. Our analyses were useful to disentangle a signal of predator-prey interactions from species distribution data at a macro-scale. This study highlights the importance of disentangling specific features from the variation shared with a given taxonomic level.
Abstract:
Recent advances in remote sensing technologies have facilitated the generation of very high resolution (VHR) environmental data. Exploratory studies suggested that, if used in species distribution models (SDMs), these data should enable modelling of species' micro-habitats and improve predictions for fine-scale biodiversity management. In the present study, we tested the influence, in SDMs, of predictors derived from a VHR digital elevation model (DEM) by comparing the predictive power of models for 239 plant species and their assemblages fitted at six different resolutions in the Swiss Alps. We also tested whether the change in model quality for a species is related to its functional and ecological characteristics. Refining the resolution contributed only slight improvements to the models for more than half of the examined species, with the best results obtained at 5 m, but no significant improvement was observed, on average, across all species. Contrary to our expectations, we could not consistently correlate the changes in model performance with species characteristics such as vegetation height. Temperature, the most important variable in the SDMs across the different resolutions, did not contribute any substantial improvement. Our results suggest that improving the resolution of topographic data alone is not sufficient to improve SDM predictions - and therefore local management - compared to previously used resolutions (here 25 and 100 m). More effort should now be dedicated to conducting finer-scale in-situ environmental measurements (e.g. of temperature, moisture, snow) to obtain improved predictors for fine-scale species mapping and management.
Abstract:
Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced-complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models.
A further implication of this is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in terms of computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
Aims: Several studies have questioned the validity of separating the diagnosis of alcohol abuse from that of alcohol dependence, and the DSM-5 task force has proposed combining the criteria from these two diagnoses to assess a single category of alcohol use disorders (AUD). Furthermore, the DSM-5 task force has proposed including a new 2-symptom threshold and a severity scale based on symptom counts for the AUD diagnosis. The current study aimed to examine these modifications in a large population-based sample. Method: Data stemmed from an adult sample (N=2588; mean age 51.3 years (s.d.: 0.2), 44.9% female) of current and lifetime drinkers from the PsyCoLaus study, conducted in the Lausanne area in Switzerland. AUDs and validating variables were assessed using a semi-structured diagnostic interview for the assessment of alcohol and other major psychiatric disorders. First, the adequacy of the proposed 2-symptom threshold was tested by comparing threshold models at each possible cutoff and a linear model, in relation to different validating variables. The model with the smallest Akaike Information Criterion (AIC) value was established as the best model for each validating variable. Second, models with varying subsets of individual AUD symptoms were created to assess the associations between each symptom and the validating variables. The subset of symptoms with the smallest AIC value was established as the best subset for each validator. Results: 1) For the majority of validating variables, the linear model was found to be the best fitting model. 2) Among the various subsets of symptoms, the symptoms most frequently associated with the validating variables were: a) drinking despite having knowledge of a physical or psychological problem, b) having had a persistent desire or unsuccessful efforts to cut down or control drinking and c) craving. The least frequent symptoms were: d) drinking in larger amounts or over a longer period than was intended, e) spending a great deal of time in obtaining, using or recovering from alcohol use and f) failing to fulfill major role obligations. Conclusions: The proposed DSM-5 2-symptom threshold did not receive support in our data. Instead, a linear AUD diagnosis was supported, with individuals receiving an increasingly severe AUD diagnosis as their symptom count increases. Moreover, certain symptoms were more frequently associated with the validating variables, which suggests that these symptoms should be considered as more severe.
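The model-selection logic described in the Method above — compare a linear model against threshold models at every candidate cutoff and keep the smallest AIC — can be sketched as follows. The data here are synthetic (a linearly generated validating variable, so by construction the linear model should win, mirroring the study's main finding); this is not the PsyCoLaus analysis itself:

```python
import numpy as np

def aic_gaussian(y, y_hat, k):
    """AIC under a Gaussian error model, additive constants dropped:
    AIC = 2k + n * ln(RSS / n)."""
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    return 2 * k + n * np.log(rss / n)

rng = np.random.default_rng(0)
counts = rng.integers(0, 12, size=500).astype(float)    # symptom counts 0..11
validator = 0.8 * counts + rng.normal(0.0, 1.0, 500)    # hypothetical validating variable

# Linear model: validator ~ a + b * counts  (k = 3: intercept, slope, sigma)
b, a = np.polyfit(counts, validator, 1)
aic_linear = aic_gaussian(validator, a + b * counts, 3)

# Threshold models: group means below/above each candidate cutoff (k = 3)
aic_threshold = {}
for cut in range(1, 12):
    above = counts >= cut
    y_hat = np.where(above, validator[above].mean(), validator[~above].mean())
    aic_threshold[cut] = aic_gaussian(validator, y_hat, 3)
```

For each validating variable, the model (linear, or threshold at some cutoff) with the smallest AIC is retained as the best fitting.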
Abstract:
Detailed large-scale information on mammal distribution has often been lacking, hindering conservation efforts. We used the information from the 2009 IUCN Red List of Threatened Species as a baseline for developing habitat suitability models for 5027 out of 5330 known terrestrial mammal species, based on their habitat relationships. We focused on the following environmental variables: land cover, elevation and hydrological features. Models were developed at 300 m resolution and limited to within species' known geographical ranges. A subset of the models was validated using points of known species occurrence. We conducted a global, fine-scale analysis of patterns of species richness. The richness of mammal species estimated by the overlap of their suitable habitat is on average one-third less than that estimated by the overlap of their geographical ranges. The highest absolute difference is found in tropical and subtropical regions in South America, Africa and Southeast Asia that are not covered by dense forest. The proportion of suitable habitat within mammal geographical ranges correlates with the IUCN Red List category to which they have been assigned, decreasing monotonically from Least Concern to Endangered. These results demonstrate the importance of fine-resolution distribution data for the development of global conservation strategies for mammals.
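The richness comparison reported above — stacking per-species layers and counting overlaps per cell — can be sketched with hypothetical binary layers (random data, not the IUCN-based models; the key point is that each species' suitable habitat is a subset of its geographical range, so habitat-based richness can only be lower):

```python
import numpy as np

rng = np.random.default_rng(42)
n_species, rows, cols = 50, 20, 20

# Hypothetical binary layers: geographical range, and suitable habitat within it
ranges = rng.random((n_species, rows, cols)) < 0.4
habitat = ranges & (rng.random((n_species, rows, cols)) < 0.6)

# Species richness per cell under each definition
richness_by_range = ranges.sum(axis=0)
richness_by_habitat = habitat.sum(axis=0)

# Overall deficit of habitat-based richness relative to range-based richness
deficit = 1.0 - richness_by_habitat.sum() / richness_by_range.sum()
```

In the study, the analogous deficit averaged about one-third; here it is an arbitrary consequence of the simulated habitat fraction.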
Abstract:
Indirect topographic variables have been used successfully as surrogates for disturbance processes in plant species distribution models (SDMs) in mountain environments. However, no SDM studies have directly tested the performance of disturbance variables. In this study, we developed two disturbance variables: a geomorphic index (GEO) and an index of snow redistribution by wind (SNOW). These were developed in order to assess how they improved both the fit and predictive power of presence-absence SDMs based on commonly used topoclimatic (TC) variables for 91 plants in the Western Swiss Alps. The individual contribution of the disturbance variables was compared to that of the TC variables. Maps of the models were prepared to test the effect of the disturbance variables spatially. On average, disturbance variables significantly improved the fit but not the predictive power of the TC models, and their individual contribution was weak (5.6% for GEO and 3.3% for SNOW). However, their maximum individual contributions were substantial (24.7% and 20.7%, respectively). Finally, maps including disturbance variables (i) diverged significantly from TC models in terms of predicted suitable surfaces and connectivity between potential habitats, and (ii) were interpreted as more ecologically relevant. Disturbance variables did not improve the transferability of models at the local scale in a complex mountain system, and the performance and contribution of these variables were highly species-specific. However, improved spatial projections and changes in connectivity are important issues when preparing projections under climate change, because the future range size of a species will determine its sensitivity to changing conditions.
Abstract:
At the beginning of the 1990s, the concept of "European integration" could still be said to be fairly unambiguous. Nowadays, it has become plural and complex almost to the point of unintelligibility. This is due, of course, to the internal differentiation of EU membership, with several Member States pulling out of key integrative projects such as establishing an area without frontiers, the "Schengen" area, and a common currency. But this is also due to the differentiated extension of key integrative projects to European non-EU countries - Schengen is again a case in point. Such processes of "integration without membership", the focus of the present publication, are acquiring an ever-growing topicality both in the political arena and in academia. International relations between the EU and its neighbouring countries are crucial for both, and their development through new agreements features prominently on the continent's political agenda. Over and above this aspect, the dissemination of EU values and standards beyond the Union's borders raises a whole host of theoretical and methodological questions, unsettling in some cases traditional conceptions of the autonomy and separation of national legal orders. This publication brings together the papers presented at the Integration without EU Membership workshop held in May 2008 at the EUI (Max Weber Programme and Department of Law). It aims to compare different models and experiences of integration between the EU, on the one hand, and those European countries that do not currently have an accession perspective on the other hand. In delimiting the geographical scope of the inquiry, so as to scale it down to manageable proportions, the guiding principles have been to include both the "Eastern" and "Western" neighbours of the EU, and to examine both structured frameworks of cooperation, such as the European Neighbourhood Policy and the European Economic Area, and bilateral relations developing on a more ad hoc basis. 
These principles are reflected in the arrangement of the papers, which consider in turn the positions of Ukraine, Russia, Norway, and Switzerland in European integration - current standing, perspectives for evolution, and consequences in terms of the EU-ization of their respective legal orders. These subjects are examined from several perspectives. We had the privilege of receiving contributions from leading practitioners and scholars from the countries concerned, from high-ranking EU officials, from prominent specialists in EU external relations law, and from young and talented researchers. We wish to thank them all here for their invaluable insights. We are moreover deeply indebted to Marise Cremona (Law Department, EUI) for her inspiring advice and encouragement, as well as to Ramon Marimon, Karin Tilmans, Lotte Holm, Alyson Price and Susan Garvin (Max Weber Programme, EUI) for their unflinching support throughout this project. A word is perhaps needed on the propriety and usefulness of the research concept embodied in this publication. Does it make sense to compare the integration models and experiences of countries as different as Norway, Russia, Switzerland, and Ukraine? Needless to say, this list of four evokes a staggering diversity of political, social, cultural, and economic conditions, and at least as great a diversity of approaches to European integration. Still, we would argue that such diversity only makes comparisons more meaningful.
Indeed, while the particularities and idiosyncratic elements of each "model" of integration are fully displayed in the present volume, common themes and preoccupations run through the pages of every contribution: the difficulty in conceptualizing the finalité and essence of integration, which is evident in the EU today but which is greatly amplified for non-EU countries; the asymmetries and trade-offs between integration and autonomy that are inherent in any attempt to participate in European integration from outside; and the alteration of deep-seated legal concepts, and concepts about the law, that is already observable in the most integrated of the non-EU countries concerned. These issues are not transient or coincidental: they are inextricably bound up with the integration of non-EU countries in the EU project. By publishing this collection, we make no claim to have dealt with them in an exhaustive, still less a definitive, manner. Our ambition is more modest: to highlight the relevance of these themes, to place them more firmly on the scientific agenda, and to provide a stimulating basis for future research and reflection.
Abstract:
AIM: Atomic force microscopy nanoindentation of myofibers was used to assess and quantitatively diagnose muscular dystrophies in human patients. MATERIALS & METHODS: Myofibers were probed in fresh or frozen muscle biopsies from human dystrophic patients and healthy volunteers, as well as from mouse models, and Young's modulus stiffness values were determined. RESULTS: Fibers displaying abnormally low mechanical stability were detected in biopsies from patients affected by 11 distinct muscle diseases, and Young's modulus values were commensurate with the severity of the disease. Abnormal myofiber resistance was also observed in consulting patients whose muscle condition could not be detected or unambiguously diagnosed otherwise. DISCUSSION & CONCLUSION: This study provides a proof-of-concept that atomic force microscopy yields a quantitative read-out of human muscle function from clinical biopsies, and that it may thereby complement current muscular dystrophy diagnosis.
Abstract:
The current study aimed to explore the validity of a French adaptation of the self-rated form of the Health of the Nation Outcome Scales for Children and Adolescents (F-HoNOSCA-SR) and to test its usefulness in routine clinical use. One hundred and twenty-nine patients admitted to two inpatient units were asked to participate in the study. One hundred and seven patients filled out the F-HoNOSCA-SR (for a subsample (N=17), on two occasions one week apart) and the Strengths and Difficulties Questionnaire (SDQ). In addition, the clinician completed the clinician-rated form of the HoNOSCA (HoNOSCA-CR, N=82). The reliability analyses (assessed with the split-half coefficient, item response theory (IRT) models and intraclass correlations (ICC) between the two occasions) revealed that the F-HoNOSCA-SR provides reliable measures. The concurrent validity, assessed by correlating the F-HoNOSCA-SR and the SDQ, revealed a good convergent validity of the instrument. The relationship analyses between the F-HoNOSCA-SR and the HoNOSCA-CR revealed weak but significant correlations. The comparison between the F-HoNOSCA-SR and the HoNOSCA-CR with paired-sample t-tests revealed a higher score for the self-rated version. The F-HoNOSCA-SR thus provides reliable measures and, in addition, captures complementary information when used together with the HoNOSCA-CR.
Abstract:
Palinspastic reconstructions offer an ideal framework for geological, geographical, oceanographic and climatological studies. As historians of the Earth, "reconstructers" try to decipher the past. Since they know that continents are moving, geologists have been trying to retrieve the distribution of the continents through the ages.
If Wegener's view of continental motion was revolutionary at the beginning of the 20th century, we have known since the early 1960s that continents do not drift aimlessly in the oceanic realm but are included in a larger ensemble combining continental and oceanic crust: the tectonic plates. Unfortunately, mainly due to technical and historical issues, this idea has not received a sufficient echo within the reconstruction community. However, we are intimately convinced that, by applying specific methods and principles, we can escape the traditional "Wegenerian" point of view to, at last, reach real plate tectonics. The main aim of this study is to defend this point of view by exposing, with all necessary details, our methods and tools. Starting with the paleomagnetic and paleogeographic data classically used in reconstruction studies, we developed a modern methodology placing the plates and their kinematics at the centre of the issue. Using assemblies of continents (referred to as "key assemblies") as anchors distributed across the scope of our study (ranging from Eocene to Cambrian time), we develop geodynamic scenarios leading from one to the next, from the past to the present. In between, lithospheric plates are progressively reconstructed by adding/removing oceanic material (symbolized by synthetic isochrons) to/from the major continents. Except during collisions, plates are moved as single rigid entities. The only evolving elements are the plate boundaries, which are preserved, follow a consistent geodynamic evolution through time, and always form an interconnected network through space. This "dynamic plate boundaries" approach integrates plate buoyancy factors, ocean spreading rates, subsidence patterns, stratigraphic and paleobiogeographic data, as well as major tectonic and magmatic events. It offers good control on plate kinematics and provides severe constraints for the model.
This multi-source approach requires efficient data management. Prior to this study, the critical mass of necessary data had become a barely surmountable obstacle. GIS (Geographic Information Systems) and geodatabases are modern informatic tools specifically devoted to storing, analyzing and managing spatially referenced data and their attributes. By developing the PaleoDyn database in ArcGIS we converted this mass of scattered data from the geological record into valuable geodynamic information easily accessible for the creation of reconstructions. At the same time, by programming specific tools we both facilitated the reconstruction work (task automation) and enhanced the model (by greatly increasing the kinematic control of plate motions thanks to plate velocity models). Based on the 340 newly defined terranes, we developed a revised set of 35 reconstructions, each associated with its own velocity model. Using this unique dataset we are now able to tackle major issues of modern geology, such as global sea-level variations and climate change. We started by studying one of the major unsolved issues of modern plate tectonics: the driving mechanism of plate motions. We observed that, throughout the Earth's history, plate rotation poles (describing plate motions across the Earth's surface) tend to follow a linear distribution along a band going from the northern Pacific through northern South America, the central Atlantic, northern Africa and central Asia up to Japan. Basically, this signifies that plates tend to escape this median plane. Barring an unidentified methodological bias, we interpreted this as the potential secular influence of the Moon on plate motions. The oceanic realms are the cornerstone of our model, and we attached particular importance to reconstructing them in detail. In this model, the oceanic crust is preserved from one reconstruction to the next.
The crustal material is symbolized by synthetic isochrons of known age. We also reconstruct the margins (active or passive), ridges and intra-oceanic subduction zones. Using this detailed oceanic dataset, we developed unique 3-D bathymetric models offering better precision than any previously existing ones.
Abstract:
Cooperation and coordination are desirable behaviors that are fundamental for the harmonious development of society. People need to rely on cooperation with other individuals in many aspects of everyday life, such as teamwork and economic exchange in anonymous markets. However, cooperation may easily fall prey to exploitation by selfish individuals who only care about short-term gain. For cooperation to evolve, specific conditions and mechanisms are required, such as kinship, direct and indirect reciprocity through repeated interactions, or external interventions such as punishment. In this dissertation we investigate the effect of the network structure of the population on the evolution of cooperation and coordination. We consider several kinds of static and dynamical network topologies, such as Barabási-Albert, social network models and spatial networks. We perform numerical simulations and laboratory experiments using the Prisoner's Dilemma and coordination games in order to contrast human behavior with theoretical results. We show by numerical simulations that even a moderate amount of random noise on the links of a Barabási-Albert scale-free network causes a significant loss of cooperation, to the point that cooperation almost vanishes altogether in the Prisoner's Dilemma when the noise rate is high enough. Moreover, when we consider fixed social-like networks we find that current models of social networks may allow cooperation to emerge and to be at least as robust as in scale-free networks. In the framework of spatial networks, we investigate whether cooperation can evolve and be stable when agents move randomly or perform Lévy flights in a continuous space. We also consider discrete space, adopting purposeful mobility and a binary birth-death process to discover emergent cooperative patterns. The fundamental result is that cooperation may be enhanced when this migration is opportunistic or even when agents follow very simple heuristics.
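The kind of simulation described above can be sketched in a few dozen lines. This is a minimal illustrative sketch, not the dissertation's actual code: the preferential-attachment construction, the payoff values (T, R, P, S) and the unconditional-imitation update rule are all assumptions chosen for concreteness.

```python
import random

def barabasi_albert(n, m, rng):
    """Grow a Barabási-Albert graph: each new node attaches to m
    existing nodes chosen preferentially by degree."""
    # start with a complete core of m+1 nodes
    adj = {i: set(j for j in range(m + 1) if j != i) for i in range(m + 1)}
    targets = [i for i in range(m + 1) for _ in range(m)]  # degree-weighted pool
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:                 # m distinct degree-biased picks
            chosen.add(rng.choice(targets))
        adj[new] = set(chosen)
        for t in chosen:
            adj[t].add(new)
        targets.extend(chosen)                 # each endpoint gained one edge
        targets.extend([new] * m)
    return adj

def play_round(adj, strat, T=1.3, R=1.0, P=0.1, S=0.0):
    """Accumulated Prisoner's Dilemma payoff against all neighbours
    (True = cooperate, False = defect); payoff values are illustrative."""
    pay = {}
    for i, nbrs in adj.items():
        p = 0.0
        for j in nbrs:
            if strat[i] and strat[j]:
                p += R
            elif strat[i] and not strat[j]:
                p += S
            elif not strat[i] and strat[j]:
                p += T
            else:
                p += P
        pay[i] = p
    return pay

def imitate_best(adj, strat, pay):
    """Unconditional imitation: copy the strategy of the best-scoring
    player in the neighbourhood (including oneself)."""
    return {i: strat[max(nbrs | {i}, key=lambda k: pay[k])]
            for i, nbrs in adj.items()}

def cooperation_level(n=500, m=4, rounds=50, seed=1):
    """Fraction of cooperators after `rounds` synchronous updates."""
    rng = random.Random(seed)
    adj = barabasi_albert(n, m, rng)
    strat = {i: rng.random() < 0.5 for i in adj}
    for _ in range(rounds):
        pay = play_round(adj, strat)
        strat = imitate_best(adj, strat, pay)
    return sum(strat.values()) / n
```

The noise experiments mentioned above would then correspond to randomly rewiring a fraction of the links between rounds and measuring how `cooperation_level` degrades as that fraction grows.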
In the experimental laboratory, we investigate the issue of social coordination between individuals located on networks of contacts. In contrast to simulations, we find that the dynamics of human players do not converge to the efficient outcome more often in a social-like network than in a random network. In another experiment, we study the behavior of people who play a pure coordination game in a spatial environment in which they can move around and in which changing convention is costly. We find that each convention forms homogeneous clusters and is adopted by approximately half of the individuals. When we provide them with global information, i.e., the number of subjects currently adopting one of the conventions, global consensus is reached in most, but not all, cases. Our results allow us to extract the heuristics used by the participants and to build a numerical simulation model that agrees very well with the experiments. Our findings have important implications for policymakers intending to promote specific, desired behaviors in a mobile population. Furthermore, we carry out an experiment with human subjects playing the Prisoner's Dilemma game in a diluted grid where people are able to move around. In contrast to previous results on purposeful rewiring in relational networks, we find no noticeable effect of mobility in space on the level of cooperation. Clusters of cooperators form momentarily but dissolve within a few rounds as cooperators at the boundaries stop tolerating being cheated upon. Our results highlight the difficulty mobile agents have in establishing a cooperative environment in a spatial setting without a device such as reputation or the possibility of retaliation, i.e., punishment. Finally, we test experimentally the evolution of cooperation in social networks in a setting where we allow people to make or break links at will.
In this work we give particular attention to whether information on an individual's actions is freely available to potential partners or not. Studying the role of information is relevant as information on other people's actions is often not available for free: a recruiting firm may need to call a job candidate's references, a bank may need to find out about the credit history of a new client, etc. We find that people cooperate almost fully when information on their actions is freely available to their potential partners. Cooperation is less likely, however, if people have to pay about half of what they gain from cooperating with a cooperator. Cooperation declines even further if people have to pay a cost that is almost equivalent to the gain from cooperating with a cooperator. Thus, costly information on potential neighbors' actions can undermine the incentive to cooperate in dynamical networks.
Abstract:
BACKGROUND: A recent large randomized controlled trial of glutamine and antioxidant supplementation suggested that high-dose glutamine is associated with increased mortality in critically ill patients with multiorgan failure. The objectives of the present analyses were to reevaluate the effect of supplementation after controlling for baseline covariates and to identify potentially important subgroup effects. MATERIALS AND METHODS: This study was a post hoc analysis of a prospective factorial 2 × 2 randomized trial conducted in 40 intensive care units in North America and Europe. In total, 1223 mechanically ventilated adult patients with multiorgan failure were randomized to receive glutamine, antioxidants, both glutamine and antioxidants, or placebo, administered separately from artificial nutrition. We compared each of the 3 active treatment arms (glutamine alone, antioxidants alone, and glutamine + antioxidants) with placebo on 28-day mortality. Post hoc, treatment effects were examined within subgroups defined by baseline patient characteristics. Logistic regression was used to estimate treatment effects within subgroups after adjustment for baseline covariates and to identify treatment-by-subgroup interactions (effect modification). RESULTS: The 28-day mortality rates in the placebo, glutamine, antioxidant, and combination arms were 25%, 32%, 29%, and 33%, respectively. After adjusting for prespecified baseline covariates, the adjusted odds ratio of 28-day mortality vs placebo was 1.5 (95% confidence interval, 1.0-2.1, P = .05), 1.2 (0.8-1.8, P = .40), and 1.4 (0.9-2.0, P = .09) for the glutamine, antioxidant, and glutamine plus antioxidant arms, respectively. In the post hoc subgroup analysis, both glutamine and antioxidants appeared most harmful in patients with baseline renal dysfunction. No subgroup suggested reduced mortality with supplements.
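Adjusted odds ratios of the kind reported here come from a logistic model of 28-day mortality on a treatment indicator plus baseline covariates. A minimal sketch on synthetic data follows; the effect sizes, the single covariate (renal dysfunction), and the sample size are invented for illustration and are not taken from the trial:

```python
import math
import random

def fit_logistic(X, y, lr=0.8, iters=1500):
    """Logistic regression by batch gradient descent.
    Returns coefficients [intercept, b1, b2, ...]."""
    n, k = len(X), len(X[0])
    beta = [0.0] * (k + 1)
    for _ in range(iters):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            z = beta[0] + sum(b * x for b, x in zip(beta[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            grad[0] += p - yi
            for j, xij in enumerate(xi):
                grad[j + 1] += (p - yi) * xij
        beta = [b - lr * g / n for b, g in zip(beta, grad)]
    return beta

# Synthetic "trial": the treatment and a baseline covariate (renal
# dysfunction) both raise the odds of death; all numbers are invented.
rng = random.Random(0)
X, y = [], []
for _ in range(800):
    treat = 1.0 if rng.random() < 0.5 else 0.0
    renal = 1.0 if rng.random() < 0.3 else 0.0
    logit = -1.5 + 0.7 * treat + 1.0 * renal  # true treatment log-odds ratio 0.7
    p = 1.0 / (1.0 + math.exp(-logit))
    X.append([treat, renal])
    y.append(1.0 if rng.random() < p else 0.0)

beta = fit_logistic(X, y)
adjusted_or = math.exp(beta[1])  # treatment odds ratio, adjusted for renal status
print(f"adjusted OR for treatment: {adjusted_or:.2f}")
```

Exponentiating the treatment coefficient gives the covariate-adjusted odds ratio; a subgroup analysis would add a treatment-by-covariate interaction term and test its coefficient against zero.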
CONCLUSIONS: After adjustment for baseline covariates, early provision of high-dose glutamine administered separately from artificial nutrition was not beneficial and may be associated with increased mortality in critically ill patients with multiorgan failure. For both glutamine and antioxidants, the greatest potential for harm was observed in patients with multiorgan failure that included renal dysfunction upon study enrollment.