971 results for Reading and Interpretation of Statistical Graphs


Relevance: 100.00%

Abstract:

BACKGROUND AND AIMS: Data from the literature reveal the contrasting influences of family members and friends on the survival of older adults. On one hand, numerous studies have reported a positive association between social relationships and survival. On the other, ties with children may be associated with an increased risk of disability, whereas ties with friends or other relatives tend to improve survival. A five-year prospective, population-based study of 295 Swiss octogenarians tested the hypothesis that having a spouse, siblings or close friends, and regular contacts with relatives or friends, are associated with longer survival, even at a very old age. METHODS: Data were collected through individual interviews, and a Cox regression model was applied to assess the effects of kinship and friendship networks on survival, after adjusting for socio-demographic and health-related variables. RESULTS: Our analyses indicate that the presence of a spouse in the household is not significantly related to survival, whereas the presence of siblings at baseline improves the oldest old's chances of surviving five years later. Moreover, the existence of close friends is a central component of the patterns of social relationships of the oldest old, and one which is significantly associated with survival. Overall, the protective effect of social relationships on survival is related more to the quality of those relationships (close friends) than to the frequency of contact (regular contacts). CONCLUSIONS: We hypothesize that the existence of siblings or close friends may beneficially affect survival through their potential influence on octogenarians' attitudes regarding health practices and adaptive strategies.
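The Cox model used in the study estimates covariate effects on the hazard through the partial likelihood. As a minimal illustration (with hypothetical data, not the Swiss cohort), the sketch below evaluates the negative log partial likelihood for a single binary covariate and locates its minimum by grid search; a real analysis would use a dedicated package such as R's survival or Python's lifelines.

```python
import math

def neg_log_partial_likelihood(beta, times, events, x):
    """Cox negative log partial likelihood for one covariate (no tied event times)."""
    nll = 0.0
    for i, (t, e) in enumerate(zip(times, events)):
        if not e:
            continue  # censored observations contribute only through risk sets
        # risk set: everyone still under observation at time t
        log_risk = math.log(sum(math.exp(beta * xj)
                                for tj, xj in zip(times, x) if tj >= t))
        nll -= beta * x[i] - log_risk
    return nll

# Hypothetical cohort: x = 1 marks "has close friends" (illustrative values only)
times  = [2, 3, 5, 6, 8, 9, 11, 12]   # years of follow-up
events = [1, 1, 1, 0, 1, 1, 0, 1]     # 1 = died, 0 = censored
x      = [0, 0, 1, 0, 1, 0, 1, 1]

grid = [i / 100 for i in range(-300, 301)]
beta_hat = min(grid, key=lambda b: neg_log_partial_likelihood(b, times, events, x))
print(f"estimated log hazard ratio: {beta_hat:.2f}")
```

A negative estimate corresponds to a protective effect (hazard ratio exp(β) < 1), the direction the study reports for close friendships.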

Relevance: 100.00%

Abstract:

The classical statistical study of wind speed in the atmospheric surface layer is generally based on the analysis of the three habitual components of the wind data, that is, the W-E component, the S-N component and the vertical component, treated as independent. When the goal is wind energy, that is, when the wind is studied from an energetic point of view, the squares of the wind components can be considered as compositional variables. To do so, each component has to be divided by the module of the corresponding vector. In this work, the theoretical analysis of the wind components as compositional data is presented, together with the conclusions that can be drawn from the point of view of practical applications, as well as those that can be derived from applying this technique under different weather conditions.
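The closure step described above can be sketched directly: dividing each component by the wind vector's module makes the squared components sum to one, so they form a composition. The centered log-ratio transform shown afterwards is a standard follow-up tool from compositional data analysis, added here for illustration.

```python
import math

def energy_composition(u, v, w):
    """Squared wind components normalized by the squared module: parts sum to 1."""
    m2 = u*u + v*v + w*w
    return (u*u / m2, v*v / m2, w*w / m2)

def clr(parts):
    """Centered log-ratio transform, a standard tool for compositional data."""
    g = math.exp(sum(math.log(p) for p in parts) / len(parts))  # geometric mean
    return tuple(math.log(p / g) for p in parts)

parts = energy_composition(3.0, 4.0, 12.0)   # W-E, S-N, vertical components
print(parts, sum(parts))                     # the three parts sum to 1
print(clr(parts))                            # clr coordinates sum to 0
```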

Relevance: 100.00%

Abstract:

This paper presents and discusses further aspects of the subjectivist interpretation of probability (also known as the 'personalist' view of probabilities) as initiated in earlier forensic and legal literature. It shows that operational devices to elicit subjective probabilities - in particular the so-called scoring rules - provide additional arguments in support of the standpoint according to which categorical claims of forensic individualisation do not follow from a formal analysis under that view of probability theory.
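The scoring-rule argument can be made concrete: under a proper scoring rule such as the quadratic (Brier) rule, an expert minimizes expected penalty only by reporting his or her actual degree of belief, never a categorical 0 or 1 that the evidence does not warrant. A minimal numeric check (illustrative, not taken from the paper):

```python
def brier_penalty(report, outcome):
    """Quadratic (Brier) penalty for a reported probability against a 0/1 outcome."""
    return (report - outcome) ** 2

def expected_penalty(report, belief):
    """Expected Brier penalty when the expert's actual degree of belief is `belief`."""
    return belief * brier_penalty(report, 1) + (1 - belief) * brier_penalty(report, 0)

belief = 0.7  # the expert's true subjective probability
for report in (0.5, 0.7, 0.9, 1.0):
    print(report, expected_penalty(report, belief))
# Honest reporting (0.7) gives the smallest expected penalty; the categorical
# claim (1.0) is penalized, which mirrors the paper's argument against stating
# forensic individualisation as a certainty.
```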

Relevance: 100.00%

Abstract:

Computed tomography (CT) is an imaging technique in which interest has been growing since it first began to be used in the early 1970s. In the clinical environment, this imaging system has emerged as a gold-standard modality because of its high sensitivity in producing accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionizing radiation on the population. To ensure a benefit-risk balance that works in favor of the patient, it is important to balance image quality and dose in order to avoid unnecessary patient exposure.

If this balance is important for adults, it should be an absolute priority for children undergoing CT examinations, especially for patients suffering from diseases requiring several follow-up examinations over the patient's lifetime. Indeed, children and young adults are more sensitive to ionizing radiation and have an extended life span in comparison to adults.
For this population, the risk of developing cancer, whose latency period can exceed 20 years, is significantly higher than for adults. Assuming that each patient examination is justified, it then becomes a priority to optimize CT acquisition protocols in order to minimize the dose delivered to the patient. Over the past few years, CT technology has been advancing at a rapid pace. Since 2009, new iterative image reconstruction techniques, called statistical iterative reconstructions, have been introduced in order to decrease patient exposure and improve image quality.

The goal of the present work was to determine the potential of statistical iterative reconstructions to reduce dose as much as possible in examinations of children and young adults without compromising the image quality needed for diagnosis.

The optimization step requires evaluating both the delivered dose and the image quality useful for diagnosis. While the dose is estimated using CT dose indices (CTDIvol and DLP), the particularity of this research was to use two radically different approaches to evaluate image quality. The first approach, called the "physical approach", computed physical metrics (SD, MTF, NPS, etc.) measured on phantoms under well-defined conditions. Although this technique has some limitations because it does not take the radiologists' perception into account, it enables the physical characterization of image properties in a simple and timely way. The second approach, called the "clinical approach", was based on the evaluation of anatomical structures (diagnostic criteria) present on patient images. Radiologists, involved in the assessment step, were asked to score the image quality of structures for diagnostic purposes using a simple rating scale. This approach is relatively complicated to implement and also time-consuming.
Nevertheless, it has the advantage of being very close to the practice of radiologists and can be considered a reference method.

Primarily, this work revealed that the statistical iterative reconstructions studied in the clinic (ASIR and Veo) have a strong potential to reduce CT dose (by up to 90%). However, through their mechanisms, they modify the image appearance with a change in image texture which may in turn affect the quality of the diagnosis. By comparing the results of the "clinical" and "physical" approaches, it was shown that this change in texture is related to a modification of the noise spectrum bandwidth, and that NPS analysis makes it possible to anticipate or avoid a loss of diagnostic image quality. This project also demonstrated that integrating these new statistical iterative reconstruction techniques in the clinic is complex and cannot be done simply on the basis of protocols using conventional reconstructions. The conclusions of this work and the image quality tools developed will be able to guide future studies in the field of image quality, such as texture analysis or model observers dedicated to CT.
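The noise power spectrum (NPS) used in the "physical" approach can be sketched in one dimension: the NPS is the squared magnitude of the Fourier transform of a noise-only signal, suitably normalized, and a shift of its bandwidth is exactly the kind of texture change discussed above. The sketch below (stdlib-only, with synthetic noise standing in for phantom data) checks the Parseval relation: the spectrum sums to the total noise power.

```python
import cmath
import random

def dft(signal):
    """Naive discrete Fourier transform (O(N^2)); fine for a small demo."""
    n = len(signal)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(signal)) for k in range(n)]

def noise_power_spectrum(noise):
    """1-D NPS estimate: |DFT|^2 / N of a mean-subtracted noise trace."""
    n = len(noise)
    mean = sum(noise) / n
    centered = [v - mean for v in noise]
    return [abs(X) ** 2 / n for X in dft(centered)]

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(64)]  # stand-in for a phantom ROI
nps = noise_power_spectrum(noise)
# Parseval: the spectrum sums to the total noise power (sum of squared deviations),
# so a reconstruction that narrows the spectrum redistributes, not removes, texture.
```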

Relevance: 100.00%

Abstract:

BACKGROUND: Treatment strategies for acute basilar artery occlusion (BAO) are based on case series and data that have been extrapolated from stroke intervention trials in other cerebrovascular territories, and information on the efficacy of different treatments in unselected patients with BAO is scarce. We therefore assessed outcomes and differences in treatment response after BAO. METHODS: The Basilar Artery International Cooperation Study (BASICS) is a prospective, observational registry of consecutive patients who presented with an acute symptomatic and radiologically confirmed BAO between November 1, 2002, and October 1, 2007. Stroke severity at time of treatment was dichotomised as severe (coma, locked-in state, or tetraplegia) or mild to moderate (any deficit that was less than severe). Outcome was assessed at 1 month. Poor outcome was defined as a modified Rankin scale score of 4 or 5, or death. Patients were divided into three groups according to the treatment they received: antithrombotic treatment only (AT), which comprised antiplatelet drugs or systemic anticoagulation; primary intravenous thrombolysis (IVT), including subsequent intra-arterial thrombolysis; or intra-arterial therapy (IAT), which comprised thrombolysis, mechanical thrombectomy, stenting, or a combination of these approaches. Risk ratios (RR) for treatment effects were adjusted for age, the severity of neurological deficits at the time of treatment, time to treatment, prodromal minor stroke, location of the occlusion, and diabetes. FINDINGS: 619 patients were entered in the registry. 27 patients were excluded from the analyses because they did not receive AT, IVT, or IAT, and all had a poor outcome. Of the 592 patients who were analysed, 183 were treated with only AT, 121 with IVT, and 288 with IAT. Overall, 402 (68%) of the analysed patients had a poor outcome. No statistically significant superiority was found for any treatment strategy. 
Compared with outcome after AT, patients with a mild-to-moderate deficit (n=245) had about the same risk of poor outcome after IVT (adjusted RR 0.94, 95% CI 0.60-1.45) or after IAT (adjusted RR 1.29, 0.97-1.72) but had a worse outcome after IAT compared with IVT (adjusted RR 1.49, 1.00-2.23). Compared with AT, patients with a severe deficit (n=347) had a lower risk of poor outcome after IVT (adjusted RR 0.88, 0.76-1.01) or IAT (adjusted RR 0.94, 0.86-1.02), whereas outcomes were similar after treatment with IAT or IVT (adjusted RR 1.06, 0.91-1.22). INTERPRETATION: Most patients in the BASICS registry received IAT. Our results do not support unequivocal superiority of IAT over IVT, and the efficacy of IAT versus IVT in patients with an acute BAO needs to be assessed in a randomised controlled trial. FUNDING: Department of Neurology, University Medical Center Utrecht.
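The risk ratios above are regression-adjusted, but the crude version is easy to state: a risk ratio from a 2×2 table, with a Wald 95% confidence interval computed on the log scale. The sketch below uses made-up counts, not BASICS data.

```python
import math

def risk_ratio_ci(a, b, c, d, z=1.96):
    """Risk ratio for exposed (a events out of a+b) vs unexposed (c events out
    of c+d), with a Wald 95% confidence interval computed on the log scale."""
    rr = (a / (a + b)) / (c / (c + d))
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))  # SE of log(RR)
    lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
    return rr, lo, hi

# Hypothetical counts: 40/100 poor outcomes in one arm vs 20/100 in the other
rr, lo, hi = risk_ratio_ci(40, 60, 20, 80)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A CI that includes 1.0, as several of the adjusted intervals above do, is what "no statistically significant superiority" means in practice.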

Relevance: 100.00%

Abstract:

This report presents systematic empirical annotation of transcript products from 399 annotated protein-coding loci across the 1% of the human genome targeted by the Encyclopedia of DNA elements (ENCODE) pilot project using a combination of 5' rapid amplification of cDNA ends (RACE) and high-density resolution tiling arrays. We identified previously unannotated and often tissue- or cell-line-specific transcribed fragments (RACEfrags), both 5' distal to the annotated 5' terminus and internal to the annotated gene bounds for the vast majority (81.5%) of the tested genes. Half of the distal RACEfrags span large segments of genomic sequences away from the main portion of the coding transcript and often overlap with the upstream-annotated gene(s). Notably, at least 20% of the resultant novel transcripts have changes in their open reading frames (ORFs), most of them fusing ORFs of adjacent transcripts. A significant fraction of distal RACEfrags show expression levels comparable to those of known exons of the same locus, suggesting that they are not part of very minority splice forms. These results have significant implications concerning (1) our current understanding of the architecture of protein-coding genes; (2) our views on locations of regulatory regions in the genome; and (3) the interpretation of sequence polymorphisms mapping to regions hitherto considered to be "noncoding," ultimately relating to the identification of disease-related sequence alterations.
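The distinction drawn above between 5'-distal and internal RACEfrags is an interval comparison against the annotated gene bounds, made strand-aware because the 5' end of a minus-strand gene is its higher coordinate. A toy version (coordinates invented for illustration, not from the ENCODE data):

```python
def classify_racefrag(frag_start, frag_end, gene_start, gene_end, strand):
    """Classify a transcribed fragment relative to annotated gene bounds
    (inclusive coordinates).
    '5prime_distal' = entirely upstream of the annotated 5' terminus;
    'internal'      = entirely within the annotated bounds;
    'overlapping'   = anything else."""
    if gene_start <= frag_start and frag_end <= gene_end:
        return "internal"
    if strand == "+" and frag_end < gene_start:
        return "5prime_distal"
    if strand == "-" and frag_start > gene_end:
        return "5prime_distal"
    return "overlapping"

# Hypothetical coordinates for a gene annotated at 1000-5000
print(classify_racefrag(200, 400, 1000, 5000, "+"))    # 5prime_distal
print(classify_racefrag(2000, 2500, 1000, 5000, "+"))  # internal
print(classify_racefrag(900, 1200, 1000, 5000, "+"))   # overlapping
print(classify_racefrag(6000, 6500, 1000, 5000, "-"))  # 5prime_distal (minus strand)
```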

Relevance: 100.00%

Abstract:

Abstract The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study and much of the following work was based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs. 
Among other things, they usually display broad degree distributions and show small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economics and sociology are well described neither by a fixed geographical position of the individuals in a regular lattice, nor by a random graph. Furthermore, it is a known fact that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks could explain why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks using diverse statistical measurements. Furthermore, I extract and describe its community structure, taking into account the intensity of a collaboration. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, suggesting also an effective view of it as opposed to a historical one.
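The clustering coefficient mentioned above measures exactly this local structure: for each node, the fraction of pairs of its neighbors that are themselves connected. A minimal sketch on a toy graph:

```python
def clustering_coefficient(graph, node):
    """Local clustering: fraction of neighbor pairs that are themselves linked.
    `graph` maps each node to the set of its neighbors; degree < 2 gives 0."""
    neighbors = graph[node]
    k = len(neighbors)
    if k < 2:
        return 0.0
    links = sum(1 for u in neighbors for v in neighbors
                if u < v and v in graph[u])
    return links / (k * (k - 1) / 2)

def average_clustering(graph):
    return sum(clustering_coefficient(graph, n) for n in graph) / len(graph)

# A triangle with a pendant node: strong local structure around the triangle
graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print([clustering_coefficient(graph, n) for n in graph])
print(average_clustering(graph))
```

A random graph of the same size would typically score far lower, which is the property that separates small-world networks from random ones.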
Thereafter, I combine evolutionary game theory with several network models, along with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for the maintenance of that cooperation. I point out the fact that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and their underlying community structure. Finally, I show that the level and stability of cooperation depend not only on the game played, but also on the evolutionary dynamic rules used and on how individual payoffs are calculated.
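The Nowak and May (1992) result discussed above is easy to reproduce in miniature: cooperators and defectors on a torus play a one-shot prisoner's dilemma with their eight Moore neighbors (the payoffs below follow the usual weak-dilemma simplification: mutual cooperation pays 1, defecting against a cooperator pays b > 1, everything else 0), then each site imitates the best-scoring strategy in its neighborhood, itself included.

```python
def step(grid, b=1.9):
    """One synchronous Nowak-May update on a toroidal grid of 'C'/'D' cells."""
    n = len(grid)
    moore = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]

    def payoff(i, j):
        s, total = grid[i][j], 0.0
        for di, dj in moore:
            t = grid[(i + di) % n][(j + dj) % n]
            if s == "C" and t == "C":
                total += 1.0       # mutual cooperation
            elif s == "D" and t == "C":
                total += b         # temptation to defect
        return total

    scores = [[payoff(i, j) for j in range(n)] for i in range(n)]
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            best, best_s = scores[i][j], grid[i][j]
            for di, dj in moore:   # imitate a strictly better-scoring neighbor
                ii, jj = (i + di) % n, (j + dj) % n
                if scores[ii][jj] > best:
                    best, best_s = scores[ii][jj], grid[ii][jj]
            new[i][j] = best_s
    return new

# A single defector invading a 9x9 sea of cooperators grows into a 3x3 block
grid = [["C"] * 9 for _ in range(9)]
grid[4][4] = "D"
grid = step(grid)
print(sum(row.count("D") for row in grid))  # 9
```

Iterating this update for suitable b produces the coexisting patches of cooperators and defectors that the text describes, without repetition or reputation.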

Relevance: 100.00%

Abstract:

THESIS ABSTRACT Nucleation and growth of metamorphic minerals are the consequence of changing P-T-X conditions. The thesis presented here focuses on the processes governing nucleation and growth of minerals in contact metamorphic environments, using a combination of geochemical analytics (chemical, isotope, and trace element composition), statistical treatment of spatial data, and numerical models. It is shown that a combination of textural modeling and stable isotope analysis allows a distinction between several possible reaction paths for olivine growth in a siliceous dolomite contact aureole. It is suggested that olivine forms directly from dolomite and quartz. The formation of olivine from this metastable reaction implies metamorphic crystallization far from equilibrium. As a major consequence, the spatial distribution of metamorphic mineral assemblages in a contact aureole cannot be interpreted as a proxy for the temporal evolution of a single rock specimen, because each rock undergoes a different reaction path, depending on temperature, heating rate, and fluid-infiltration rate. A detailed calcite-dolomite thermometry study was undertaken on multiple scales, ranging from the aureole scale to the size of individual crystals. Quantitative forward models were developed to evaluate the effect of growth zoning, volume diffusion and the formation of submicroscopic exsolution lamellae (<1 µm) on the measured Mg-distribution in individual calcite crystals, and the modeling results were compared to field data. This study concludes that Mg-distributions in calcite grains of the Ubehebe Peak contact aureole are the consequence of rapid crystal growth in combination with diffusion and exsolution. The crystallization history of a rock is recorded in the chemical composition, the size and the distribution of its minerals.
Near the Cima Uzza summit, located in the southern Adamello massif (Italy), contact metamorphic brucite-bearing dolomite marbles are exposed as xenoliths surrounded by mafic intrusive rocks. Brucite formed retrogradely, pseudomorphing spherical periclase crystals. Crystal size distributions (CSDs) of the brucite pseudomorphs are presented for two profiles and combined with geochemical and petrological information. Textural analyses are combined with geochemical data in a qualitative model that describes the formation of periclase. As a major outcome, this expands the potential use of CSDs to systems of mineral formation driven by fluid infiltration.
THESIS ABSTRACT (GENERAL PUBLIC) Rock textures are essentially the result of a complex interaction of nucleation, growth and deformation as a function of changing physical conditions such as pressure and temperature. Igneous and metamorphic textures are especially attractive for studying the different mechanisms of texture formation, since most of the parameters, such as pressure-temperature paths, are quite well known for a variety of geological settings. The fact that textures are supposed to record the crystallization history of a rock has traditionally allowed them to be used for geothermobarometry or dating. During the last decades, the focus of metamorphic petrology has shifted from a static point of view, i.e. the representation of a texture as a single point in the petrogenetic grid, towards a more dynamic view in which multiple metamorphic processes, including non-equilibrium processes, govern texture formation. This thesis tries to advance our understanding of the processes governing nucleation and growth of minerals in contact metamorphic environments, and of their dynamic interplay, by using a combination of geochemical analyses (chemical, isotope, and trace element composition), statistical treatment of spatial data and numerical models. In a first part, the thesis describes the formation of metamorphic olivine porphyroblasts in the Ubehebe Peak contact aureole (USA). It is shown that the textures present in the rocks today were formed not by the commonly assumed succession of equilibrium reactions along a T-t path, but rather by a metastable reaction responsible for forming the olivine porphyroblasts. Consequently, the spatial distribution of metamorphic minerals within a contact aureole can no longer be regarded as a proxy for the temporal evolution of a single rock sample. Metamorphic peak temperatures for samples of the Ubehebe Peak contact aureole were determined using calcite-dolomite thermometry.
This geothermometer is based on the temperature-dependent exchange of Mg between calcite and dolomite. The purpose of the second part of this thesis is to explain the systematic scatter of measured Mg content at different scales, and thus to clarify the interpretation of metamorphic temperatures recorded in carbonates. Quantitative numerical forward models are used to evaluate the effect of several processes on the distribution of magnesium in individual calcite crystals, and the modeling results are compared to field measurements. The crystallization history is recorded not only in chemical properties of grains, such as isotope composition or mineral zoning. Crystal size distributions (CSDs) also provide essential information about the complex interaction of nucleation and growth of minerals. CSDs of brucite pseudomorphs formed retrogradely after periclase in the southern Adamello massif (Italy) are presented. A combination of three-dimensional textural information with geochemical data is then used to evaluate reaction kinetics and to constrain the actual reaction mechanism for the formation of periclase. The reaction is shown to be the consequence of the infiltration of a limited amount of a fluid phase at high temperature. The composition of this fluid phase is in strong disequilibrium with the rest of the rock, resulting in very fast reaction rates. THESIS SUMMARY (GENERAL PUBLIC): The texture of a rock results from the complex interaction between nucleation, growth and deformation processes as physical conditions such as pressure and temperature change. Igneous and metamorphic textures are of particular interest for studying the mechanisms that produce them, since most of the parameters, such as pressure-temperature paths, are relatively well constrained in most geological settings.
Because textures are assumed to record the crystallization history of rocks, they can be used for dating and geothermobarometry. During the last decades, research in metamorphic petrology has evolved from a static view, in which a given texture corresponded to a single point on the petrogenetic grid, towards a more dynamic view in which the multiple metamorphic processes governing the formation of a texture include non-equilibrium processes. This thesis aims to improve current knowledge of the processes governing the nucleation and growth of minerals during contact metamorphism, and of the dynamic interplay between nucleation and growth. To this end, geochemical analyses (major and trace element compositions and isotope compositions), statistical treatment of spatial data and numerical modeling were combined. In the first part, this thesis describes the formation of metamorphic olivine porphyroblasts in the Ubehebe Peak contact aureole (USA). It is shown that the generally accepted succession of equilibrium reactions along a T-t path cannot explain the textures present in the rocks today; rather, a metastable reaction appears to be responsible for the formation of the olivine porphyroblasts. Consequently, the spatial distribution of metamorphic minerals in the contact aureole can no longer be interpreted as a record of the temporal evolution of a single rock sample. Peak temperatures for samples of the Ubehebe Peak contact aureole were determined with the calcite-dolomite geothermometer, which is based on the temperature-dependent exchange of magnesium between calcite and dolomite.
The aim of the second part of this thesis is to explain the systematic scatter of the magnesium composition at different scales, and thereby to improve the interpretation of metamorphic temperatures recorded in carbonates. Quantitative numerical models were used to evaluate the role of different processes in the distribution of magnesium in individual calcite crystals. The model results were compared with natural samples. The chemical composition of grains, such as isotope composition or mineral zoning, is not the only record of the crystallization history. Crystal size distributions (CSDs) provide essential information about the interactions between nucleation and growth of minerals. The CSD of retrograde brucite pseudomorphs formed after periclase in the southern Adamello massif (Italy) is presented in the third part. Combining three-dimensional textural information with geochemical data made it possible to evaluate reaction kinetics and to constrain the mechanisms leading to the formation of periclase. This reaction is shown to be the consequence of the infiltration of a limited amount of a fluid phase at high temperature. The composition of this fluid phase is in strong disequilibrium with the rest of the rock, which allows very fast reaction kinetics.
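The calcite-dolomite thermometry discussed above is, at its core, the inversion of a calibrated solvus: the equilibrium mole fraction of MgCO3 in calcite increases with temperature, so a measured X_Mg maps to a temperature. A minimal sketch of that inversion, assuming a generic calibration of the form ln X_Mg = A - B/T; the coefficients below are placeholders chosen only for illustration, not any published calibration:

```python
import math

# Placeholder calibration constants for ln(X_Mg) = A - B / T(K).
# Real calcite-dolomite thermometry uses published experimental
# calibrations; these values exist only to make the example runnable.
A = 1.0
B = 3000.0

def xmg_from_temperature(t_kelvin):
    """Forward model: equilibrium Mg mole fraction in calcite at T."""
    return math.exp(A - B / t_kelvin)

def temperature_from_xmg(x_mg):
    """Invert ln(x_mg) = A - B/T for T in kelvin (the thermometer)."""
    if not 0.0 < x_mg < 1.0:
        raise ValueError("mole fraction must lie in (0, 1)")
    return B / (A - math.log(x_mg))

# Round trip: a peak temperature implies an X_Mg, which recovers T.
t_recovered = temperature_from_xmg(xmg_from_temperature(800.0))
```

The thesis's point is that this inversion is only safe if the measured X_Mg actually reflects the peak-temperature equilibrium value; growth zoning, diffusion and submicroscopic exsolution all perturb the measured composition and hence the apparent temperature.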

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Analyzes a geochemical sediment series from nine samples collected along the Panama transect.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

This paper exploits an unusual transportation setting to estimate the value of a statistical life (VSL). We estimate the trade-offs individuals are willing to make between mortality risk and cost as they travel to and from the international airport in Sierra Leone (which is separated from the capital Freetown by a body of water). Travelers choose from among multiple transport options, namely ferry, helicopter, hovercraft, and water taxi. The setting and original dataset allow us to address some typical omitted variable concerns in order to generate some of the first revealed preference VSL estimates from Africa. The data also allow us to compare VSL estimates for travelers from 56 countries, including 20 African and 36 non-African countries, all facing the same choice situation. The average VSL estimate for African travelers in the sample is US$577,000 compared to US$924,000 for non-Africans. Individual characteristics, particularly job earnings, can largely account for the difference between Africans and non-Africans; Africans in the sample typically earn somewhat less. There is little evidence that individual VSL estimates are driven by a lack of information, predicted life expectancy, or cultural norms around risk-taking or fatalism. The data imply an income elasticity of the VSL of 1.77. These revealed preference VSL estimates from a developing country fill an important gap in the existing literature, and can be used for a variety of public policy purposes, including in current debates within Sierra Leone regarding the desirability of constructing new transportation infrastructure.
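The revealed-preference logic above reduces to a simple trade-off: if a traveler accepts an extra cost to avoid an extra mortality risk, the implied VSL is cost divided by risk, and the reported income elasticity of 1.77 lets one extrapolate a VSL across income levels. A hedged sketch of both calculations (the dollar and risk figures in the demo are invented, not the paper's data):

```python
def vsl_from_tradeoff(extra_cost_usd, mortality_risk_reduction):
    """VSL implied by paying extra_cost_usd to lower the probability
    of dying on the trip by mortality_risk_reduction."""
    if mortality_risk_reduction <= 0:
        raise ValueError("risk reduction must be positive")
    return extra_cost_usd / mortality_risk_reduction

def scale_vsl_by_income(vsl_base, income_base, income_target,
                        elasticity=1.77):
    """Extrapolate a VSL to another income level using a constant
    income elasticity (1.77 is the elasticity reported above)."""
    return vsl_base * (income_target / income_base) ** elasticity

# Invented illustration: paying $50 more to cut a 1-in-10,000
# fatality risk implies a VSL of about $500,000.
vsl = vsl_from_tradeoff(50.0, 1e-4)
# Doubling income with elasticity 1.77 roughly triples the VSL.
vsl_richer = scale_vsl_by_income(vsl, 1000.0, 2000.0)
```

A constant-elasticity extrapolation is the standard shortcut for transferring VSL estimates between populations; it is also why the African/non-African gap largely disappears once earnings are accounted for.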

Relevância:

100.00% 100.00%

Publicador:

Resumo:

BACKGROUND: As part of EUROCAT's surveillance of congenital anomalies in Europe, a statistical monitoring system has been developed to detect recent clusters or long-term (10 year) time trends. The purpose of this article is to describe the system for the identification and investigation of 10-year time trends, conceived as a "screening" tool ultimately leading to the identification of trends which may be due to changing teratogenic factors. METHODS: The EUROCAT database consists of all cases of congenital anomalies including livebirths, fetal deaths from 20 weeks gestational age, and terminations of pregnancy for fetal anomaly. Monitoring of 10-year trends is performed for each registry for each of 96 non-independent EUROCAT congenital anomaly subgroups, while Pan-Europe analysis combines data from all registries. The monitoring results are reviewed, prioritized according to a prioritization strategy, and communicated to registries for investigation. Twenty-one registries covering over 4 million births, from 1999 to 2008, were included in monitoring in 2010. CONCLUSIONS: Significant increasing trends were detected for abdominal wall anomalies, gastroschisis, hypospadias, Trisomy 18 and renal dysplasia in the Pan-Europe analysis, while 68 increasing trends were identified in individual registries. A decreasing trend was detected in over one-third of anomaly subgroups in the Pan-Europe analysis, and 16.9% of individual registry tests. Registry preliminary investigations indicated that many trends are due to changes in data quality, ascertainment, screening, or diagnostic methods. Some trends are inevitably chance phenomena related to multiple testing, while others seem to represent real and continuing change needing further investigation and response by regional/national public health authorities.
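The 10-year trend screening described above amounts to testing, for each anomaly subgroup and registry, whether annual case counts drift with calendar year. Under a Poisson model this can be done with a simple score test for linear trend; the sketch below uses invented counts (not EUROCAT data) and flags a series when |z| exceeds a chosen threshold:

```python
import math

def poisson_trend_z(years, counts):
    """Score test for a linear trend in Poisson counts.
    Under H0 (constant rate), U = sum((x - xbar) * y) has mean 0 and
    variance lambda_hat * sum((x - xbar)^2); return z = U / sqrt(var)."""
    n = len(years)
    xbar = sum(years) / n
    lam = sum(counts) / n                          # pooled rate under H0
    u = sum((x - xbar) * y for x, y in zip(years, counts))
    var = lam * sum((x - xbar) ** 2 for x in years)
    return u / math.sqrt(var)

# Invented 10-year series with a clearly rising count sequence.
years = list(range(1999, 2009))
counts = [4, 5, 7, 6, 9, 10, 12, 11, 14, 16]
z = poisson_trend_z(years, counts)
significant = abs(z) > 1.96  # two-sided 5% screen
```

A screen like this deliberately over-flags: as the abstract notes, many flagged trends turn out to reflect changes in ascertainment or diagnostics, or multiple-testing artifacts, which is why flagged series go back to registries for investigation rather than being reported directly.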

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Ground clutter caused by anomalous propagation (anaprop) can seriously affect radar rain-rate estimates, particularly in fully automatic radar processing systems, and, if not filtered, can produce frequent false alarms. A statistical study of anomalous propagation detected from two operational C-band radars in the northern Italian region of Emilia Romagna is discussed, paying particular attention to its diurnal and seasonal variability. The analysis shows a high incidence of anaprop in summer, mainly in the morning and evening, due to the humid and hot summer climate of the Po Valley, particularly in the coastal zone. Thereafter, a comparison between different techniques and datasets used to retrieve the vertical profile of the refractive-index gradient in the boundary layer is presented; in particular, their capability to detect anomalous propagation conditions is compared. Furthermore, beam path trajectories are simulated using a multilayer ray-tracing model, and the influence of the propagation conditions on the beam trajectory and shape is examined. High-resolution radiosounding data are identified as the best available dataset for reproducing the local propagation conditions accurately, while lower-resolution standard TEMP data suffer from interpolation degradation, and Numerical Weather Prediction model data (Lokal Model) are able to capture a tendency towards superrefraction but not to detect ducting conditions. By tracing rays along the centre and the lower and upper limits of the radar antenna's 3-dB half-power main beam lobe, it is concluded that ducting layers produce a change in the measured volume and in the power distribution that can lead to an additional error in the reflectivity estimate and, subsequently, in the estimated rainfall rate.
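The propagation conditions discussed above are conventionally classified from the vertical gradient of radio refractivity N = 77.6/T * (P + 4810 e/T), with T in kelvin and P and e in hPa: gradients below about -157 N-units per km bend the beam more strongly than the Earth's curvature and produce ducting. A minimal sketch of that classification from two sounding levels (the thresholds follow the standard convention; the sample profile values are invented):

```python
def refractivity(p_hpa, t_kelvin, e_hpa):
    """Radio refractivity N (N-units) from pressure, temperature,
    and water-vapour partial pressure."""
    return 77.6 / t_kelvin * (p_hpa + 4810.0 * e_hpa / t_kelvin)

def propagation_class(n_low, n_high, dh_km):
    """Classify propagation from the refractivity gradient (N/km)."""
    g = (n_high - n_low) / dh_km
    if g > 0:
        return "subrefraction"
    if g >= -79.0:
        return "normal"
    if g >= -157.0:
        return "superrefraction"
    return "ducting"

# Invented two-level profile with a sharp humidity drop aloft, the
# typical cause of summer morning/evening ducts over the Po Valley.
n_sfc = refractivity(1013.0, 298.0, 25.0)   # humid surface air
n_200m = refractivity(990.0, 297.0, 8.0)    # much drier air 200 m up
condition = propagation_class(n_sfc, n_200m, 0.2)
```

With a high-resolution sounding this gradient can be computed layer by layer, which is exactly why the study finds radiosoundings superior: coarse TEMP levels or NWP model layers smooth out the thin, strongly negative-gradient layers that cause ducting.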