966 results for Precision
Abstract:
[1] We present new analytical data of major and trace elements for the geological MPI-DING glasses KL2-G, ML3B-G, StHs6/80-G, GOR128-G, GOR132-G, BM90/21-G, T1-G, and ATHO-G. Different analytical methods were used to obtain a large spectrum of major and trace element data, in particular, EPMA, SIMS, LA-ICPMS, and isotope dilution by TIMS and ICPMS. Altogether, more than 60 qualified geochemical laboratories worldwide contributed to the analyses, allowing us to present new reference and information values and their uncertainties (at the 95% confidence level) for up to 74 elements. We complied with the recommendations for the certification of geological reference materials by the International Association of Geoanalysts (IAG). The reference values were derived from the results of 16 independent techniques, including definitive (isotope dilution) and comparative bulk (e.g., INAA, ICPMS, SSMS) and microanalytical (e.g., LA-ICPMS, SIMS, EPMA) methods. Agreement between two or more independent methods and the use of definitive methods provided traceability to the fullest extent possible. We also present new and recently published data for the isotopic compositions of H, B, Li, O, Ca, Sr, Nd, Hf, and Pb. The results were mainly obtained by high-precision bulk techniques, such as TIMS and MC-ICPMS. In addition, LA-ICPMS and SIMS isotope data of B, Li, and Pb are presented.
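As an illustration of how a reference value and its 95% confidence interval can be derived from the means of independent analytical methods, here is a minimal sketch; the method means are hypothetical placeholders, not MPI-DING data.

import math
from statistics import mean, stdev
from scipy.stats import t

method_means = [8.42, 8.51, 8.39, 8.47, 8.55]  # hypothetical method means (µg/g)

n = len(method_means)
ref_value = mean(method_means)
# 95% CI of the mean across independent methods (t-distribution, n-1 df)
half_width = t.ppf(0.975, n - 1) * stdev(method_means) / math.sqrt(n)
print(f"reference value: {ref_value:.2f} +/- {half_width:.2f} (95% CL)")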
Abstract:
When a new treatment is compared to an established one in a randomized clinical trial, it is standard practice to statistically test for non-inferiority rather than for superiority. When the endpoint is binary, one usually compares two treatments using either an odds ratio or a difference of proportions. In this paper, we propose a mixed approach which uses both concepts: one first defines the non-inferiority margin using an odds ratio, and one ultimately proves non-inferiority statistically using a difference of proportions. The mixed approach is shown to be more powerful than the conventional odds-ratio approach when the efficacy of the established treatment is known (with good precision) and high (e.g. a success rate above 56%). The gain in power achieved may in turn lead to a substantial reduction in the sample size needed to prove non-inferiority. The mixed approach can be generalized to ordinal endpoints.
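To make the conversion concrete, the following hedged sketch translates an odds-ratio non-inferiority margin into the implied difference-of-proportions margin at a known reference success rate; the numerical values are illustrative only, not taken from the paper.

# Define the non-inferiority margin on the odds-ratio scale, then
# translate it into a difference of proportions at the (assumed known)
# success rate of the reference arm.
def or_to_difference_margin(p_ref: float, or_margin: float) -> float:
    """Difference-of-proportions margin implied by an odds-ratio margin."""
    odds_ref = p_ref / (1.0 - p_ref)
    p_new_limit = or_margin * odds_ref / (1.0 + or_margin * odds_ref)
    return p_ref - p_new_limit  # largest tolerated drop in success rate

# Example: reference success rate 80%, odds-ratio margin 0.5
print(or_to_difference_margin(0.80, 0.5))  # ~0.133 -> test p_ref - p_new < 0.133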
Abstract:
This paper analyzes the implications of pre-trade transparency for market performance. We find that transparency increases the precision of the information held by agents; however, we show that this increase in precision may not be due to prices themselves. In competitive markets, transparency increases market liquidity and reduces price volatility, whereas these results may not hold under imperfect competition. More importantly, market depth and volatility might be positively related under proper priors. Moreover, we study the incentives for liquidity traders to engage in sunshine trading. We find that a noise trader's choice between sunshine and dark trading is independent of his order size, with the traders who have higher liquidity needs being the most interested in sunshine trading, as long as this practice is desirable. Key words: Market Microstructure, Transparency, Prior Information, Market Quality, Sunshine Trading
Abstract:
Over thirty years ago, Leamer (1983) - among many others - expressed doubts about the quality and usefulness of empirical analyses for the economic profession by stating that "hardly anyone takes data analyses seriously. Or perhaps more accurately, hardly anyone takes anyone else's data analyses seriously" (p. 37). Improvements in data quality, more robust estimation methods and the evolution of better research designs seem to make that assertion no longer justifiable (see Angrist and Pischke (2010) for a recent response to Leamer's essay). The economic profession and policy makers alike often rely on empirical evidence as a means to investigate policy-relevant questions. The approach of using scientifically rigorous and systematic evidence to identify policies and programs capable of improving policy-relevant outcomes is known under the increasingly popular notion of evidence-based policy. Evidence-based economic policy often relies on randomized or quasi-natural experiments in order to identify causal effects of policies. These can require relatively strong assumptions or raise concerns about external validity. In the context of this thesis, potential concerns are, for example, endogeneity of policy reforms with respect to the business cycle in the first chapter, the trade-off between precision and bias in the regression-discontinuity setting in chapter 2, or non-representativeness of the sample due to self-selection in chapter 3. While the identification strategies are very useful for gaining insights into the causal effects of specific policy questions, transforming the evidence into concrete policy conclusions can be challenging. Policy development should therefore rely on the systematic evidence of a whole body of research on a specific policy question rather than on a single analysis. In this sense, this thesis cannot and should not be viewed as a comprehensive analysis of specific policy issues but rather as a first step towards a better understanding of certain aspects of a policy question. The thesis applies new and innovative identification strategies to policy-relevant and topical questions in the fields of labor economics and behavioral environmental economics. Each chapter relies on a different identification strategy. In the first chapter, we employ a difference-in-differences approach to exploit the quasi-experimental change in the entitlement to the maximum unemployment benefit duration to identify the medium-run effects of reduced benefit durations on post-unemployment outcomes. Shortening benefit duration carries a double dividend: it generates fiscal benefits without deteriorating the quality of job matches. On the contrary, shortened benefit durations improve medium-run earnings and employment, possibly through containing the negative effects of skill depreciation or stigmatization. While the first chapter provides only indirect evidence on the underlying behavioral channels, in the second chapter I develop a novel approach that allows us to learn about the relative importance of the two key margins of job search - reservation wage choice and search effort. In the framework of a standard non-stationary job search model, I show how the exit rate from unemployment can be decomposed in a way that is informative about reservation wage movements over the unemployment spell.
The empirical analysis relies on a sharp discontinuity in unemployment benefit entitlement, which can be exploited in a regression-discontinuity approach to identify the effects of extended benefit durations on unemployment and survivor functions. I find evidence that calls for an important role of reservation wage choices in job search behavior. This can have direct implications for the optimal design of unemployment insurance policies. The third chapter - while thematically detached from the other chapters - addresses one of the major policy challenges of the 21st century: climate change and resource consumption. Many governments have recently put energy efficiency on top of their agendas. While pricing instruments aimed at regulating energy demand have often been found to be short-lived and difficult to enforce politically, the focus of energy conservation programs has shifted towards behavioral approaches, such as the provision of information or social norm feedback. The third chapter describes a randomized controlled field experiment in which we discuss the effectiveness of different types of feedback on residential electricity consumption. We find that detailed and real-time feedback caused persistent electricity reductions on the order of 3 to 5% of daily electricity consumption. Social norm information can also generate substantial electricity savings when designed appropriately. The findings suggest that behavioral approaches constitute an effective and relatively cheap way of improving residential energy efficiency.
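As an illustration of the regression-discontinuity logic used in the second chapter, here is a minimal sketch on synthetic data (not the thesis code or data): a local linear fit on both sides of an eligibility cutoff recovers the jump in the outcome.

import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(35, 45, 2000)          # running variable, cutoff at 40
cutoff, bandwidth = 40.0, 2.0
treated = (age >= cutoff).astype(float)
# synthetic outcome: smooth trend plus a jump of 0.5 at the cutoff
y = 0.1 * (age - cutoff) + 0.5 * treated + rng.normal(0, 1, age.size)

# keep observations within the bandwidth and allow separate slopes
mask = np.abs(age - cutoff) <= bandwidth
X = np.column_stack([
    np.ones(mask.sum()),
    treated[mask],
    age[mask] - cutoff,
    (age[mask] - cutoff) * treated[mask],
])
beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
print(f"estimated jump at the cutoff: {beta[1]:.3f}")  # close to 0.5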
Abstract:
This project was carried out at the American Museum of Natural History (AMNH, New York) between 31 December 2010 and 30 December 2012. Its objective was to elucidate the evolutionary history of the human hand: to trace the evolutionary changes in its shape and proportions that gave rise to its modern structure, which allows humans to manipulate with precision. The work included data collection and analysis, writing up of results, and training in specific analytical methods. During this time, the author completed his existing database of linear measurements of the hominoid hand. Data on the foot were also collected, so that the database now comprises more than 500 individuals, with more than 200 measurements each. Three-dimensional data were also acquired using a laser scanner, and 3D geometric morphometric techniques were learned directly from the pioneers of the field at the AMNH. This work has produced 10 abstracts (published at international conferences) and 9 manuscripts (many of them already published in international journals) with results of great relevance: the human hand possesses relatively primitive proportions, which are more similar to those of the Miocene fossil hominoids than to those of the extant great apes. The latter have elongated hands with very short thumbs, reflecting the use of the hand as a suspensory hook below branches. In contrast, Miocene hominoids had relatively short hands with a long thumb, which they used to stabilize their weight when walking on top of branches. Once the first hominins appeared at the end of the Miocene (about 6 Ma) and adopted bipedalism as their most common mode of locomotion, their hands were "freed" from locomotor functions. Natural selection - now acting on manipulation alone - shaped the already existing hand proportions of these primates into the manipulative organ that the human hand represents today.
Abstract:
A liquid chromatography method coupled to mass spectrometry was developed for the quantification of bupropion, its metabolite hydroxy-bupropion, moclobemide, reboxetine and trazodone in human plasma. The validation of the analytical procedure was assessed according to the Société Française des Sciences et Techniques Pharmaceutiques and the latest Food and Drug Administration guidelines. The sample preparation was performed with 0.5 mL of plasma extracted on a cation-exchange solid phase 96-well plate. The separation was achieved in 14 min on a C18 XBridge column (2.1 mm × 100 mm, 3.5 μm) using a 50 mM ammonium acetate pH 9/acetonitrile mobile phase in gradient mode. The compounds of interest were analysed in single ion monitoring mode on a single quadrupole mass spectrometer working in positive electrospray ionisation mode. Two ions were selected per molecule to increase the number of identification points and to avoid false positives as much as possible. Since selectivity is always a critical point for routine therapeutic drug monitoring, more than sixty common comedications for the psychiatric population were tested. For each analyte, the analytical procedure was validated to cover the common range of concentrations measured in plasma samples: 1-400 ng/mL for reboxetine and bupropion, 2-2000 ng/mL for hydroxy-bupropion, moclobemide, and trazodone. For all investigated compounds, reliable performance in terms of accuracy, precision, trueness, recovery, selectivity and stability was obtained. One year after its implementation in a routine process, this method demonstrated high robustness, with accurate values over the wide concentration range commonly observed in a psychiatric population.
Abstract:
A selective and sensitive method was developed for the simultaneous quantification of seven typical antipsychotic drugs (cis-chlorprothixene, flupentixol, haloperidol, levomepromazine, pipamperone, promazine and zuclopenthixol) in human plasma. Ultra-high performance liquid chromatography (UHPLC) was used for complete separation of the compounds in less than 4.5 min on an Acquity UPLC BEH C18 column (2.1 mm × 50 mm; 1.7 μm), with a gradient elution of ammonium formate buffer pH 4.0 and acetonitrile at a flow rate of 400 μl/min. Detection was performed on a tandem quadrupole mass spectrometer (MS/MS) equipped with an electrospray ionization interface. A simple protein precipitation procedure with acetonitrile was used for sample preparation. Thanks to the use of stable isotope-labeled internal standards for all analytes, internal standard-normalized matrix effects were in the range of 92-108%. The method was fully validated to cover large concentration ranges of 0.2-90 ng/ml for haloperidol, 0.5-90 ng/ml for flupentixol, 1-450 ng/ml for levomepromazine, promazine and zuclopenthixol, and 2-900 ng/ml for cis-chlorprothixene and pipamperone. Trueness (89.1-114.8%), repeatability (1.8-9.9%), intermediate precision (1.9-16.3%) and accuracy profiles (<30%) were in accordance with the latest international recommendations. The method was successfully used in our laboratory for routine quantification of more than 500 patient plasma samples for therapeutic drug monitoring. To the best of our knowledge, this is the first UHPLC-MS/MS method for the quantification of the studied drugs with a sample preparation based on protein precipitation.
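For readers unfamiliar with these validation quantities, the following sketch shows how trueness, repeatability and intermediate precision can be computed from nested replicate runs; the numbers are hypothetical placeholders, not the laboratory's validation data.

import numpy as np

nominal = 10.0  # spiked concentration, ng/ml
# rows = series (e.g. days), columns = replicates within a series
runs = np.array([[9.8, 10.1, 9.9], [10.4, 10.2, 10.5], [9.7, 9.9, 9.6]])

trueness = 100.0 * runs.mean() / nominal            # % of nominal
# repeatability: within-series variance; intermediate precision adds
# the between-series component (one-way ANOVA decomposition)
k, n = runs.shape
var_within = runs.var(axis=1, ddof=1).mean()
var_between = max(runs.mean(axis=1).var(ddof=1) - var_within / n, 0.0)
repeatability_cv = 100.0 * np.sqrt(var_within) / runs.mean()
intermediate_cv = 100.0 * np.sqrt(var_within + var_between) / runs.mean()
print(trueness, repeatability_cv, intermediate_cv)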
Abstract:
In vivo dosimetry is a way to verify the radiation dose delivered to the patient by measuring the dose, generally during the first fraction of the treatment. It is the only dose delivery control based on a measurement performed during the treatment. In today's radiotherapy practice, the dose delivered to the patient is planned using 3D dose calculation algorithms and volumetric images representing the patient. Due to the high accuracy and precision necessary in radiation treatments, national and international organisations like the ICRU and AAPM recommend the use of in vivo dosimetry; it is also mandatory in some countries, such as France. Various in vivo dosimetry methods have been developed over the past years. These methods are point-, line-, plane- or 3D dose controls. 3D in vivo dosimetry provides the most information about the dose delivered to the patient, compared with 1D and 2D methods. However, to our knowledge, it is generally not routinely applied to patient treatments yet. The aim of this PhD thesis was to determine whether it is possible to reconstruct the 3D delivered dose using transmitted beam measurements in the context of narrow beams. An iterative dose reconstruction method has been described and implemented. The iterative algorithm includes a simple 3D dose calculation algorithm based on the convolution/superposition principle. The methodology was applied to narrow beams produced by a conventional 6 MV linac. The transmitted dose was measured using an array of ion chambers, so as to simulate the linear nature of a tomotherapy detector. We showed that the iterative algorithm converges quickly and reconstructs the dose with good agreement (at least 3% / 3 mm locally), which is inside the 5% recommended by the ICRU. Moreover, it was demonstrated on phantom measurements that the proposed method allows us to detect some set-up errors and interfraction geometry modifications. We also discussed the limitations of 3D dose reconstruction for dose delivery error detection. Afterwards, stability tests of the tomotherapy MVCT built-in onboard detector were performed in order to evaluate whether such a detector is suitable for 3D in vivo dosimetry. The detector showed short- and long-term stability comparable to other imaging devices, such as the EPIDs also used for in vivo dosimetry. Subsequently, a methodology for dose reconstruction using the tomotherapy MVCT detector is proposed in the context of static irradiations. This manuscript is composed of two articles and a script providing further information related to this work. In the latter, the first chapter introduces the state of the art of in vivo dosimetry and adaptive radiotherapy, and explains why we are interested in performing 3D dose reconstructions. In chapter 2, the dose calculation algorithm implemented for this work is reviewed, with a detailed description of the physical parameters needed for calculating 3D absorbed dose distributions. The tomotherapy MVCT detector used for transit measurements and its characteristics are described in chapter 3. Chapter 4 contains a first article, entitled '3D dose reconstruction for narrow beams using ion chamber array measurements', which describes the dose reconstruction method and presents tests of the methodology on phantoms irradiated with 6 MV narrow photon beams. Chapter 5 contains a second article, 'Stability of the Helical TomoTherapy HiArt II detector for treatment beam irradiations'.
A dose reconstruction process specific to the use of the tomotherapy MVCT detector is presented in chapter 6. A discussion and perspectives of the PhD thesis are presented in chapter 7, followed by a conclusion in chapter 8. The tomotherapy treatment device is described in appendix 1, and an overview of 3D conformal and intensity-modulated radiotherapy is presented in appendix 2.
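The iterative reconstruction idea described above can be caricatured in a few lines: re-estimate the dose until a forward model of the transmitted signal matches the measurement. The sketch below uses a toy one-dimensional attenuation/blur model, not a convolution/superposition engine, and is only a loose illustration of the principle.

import numpy as np

def forward(dose: np.ndarray) -> np.ndarray:
    """Toy transmitted-signal model: attenuate and smooth the dose."""
    kernel = np.array([0.25, 0.5, 0.25])
    return 0.6 * np.convolve(dose, kernel, mode="same")

measured = forward(np.array([0, 1, 3, 3, 1, 0], dtype=float))  # "detector" data
dose = np.ones_like(measured)  # initial guess

for _ in range(50):  # multiplicative update toward forward(dose) == measured
    simulated = forward(dose)
    dose *= measured / np.maximum(simulated, 1e-9)

print(np.round(dose, 2))  # approaches the delivered dose profile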
Abstract:
Scientific Abstract: Petrology and chemistry of the Chaltén Plutonic Complex and implications for the magmatic and tectonic evolution of the southernmost Andes (Patagonia) during the Miocene. The subject of this thesis is the Chaltén Plutonic Complex (CHPC), located at the border between Chile and Argentina in Patagonia (49°15'S). This complex intruded during the early Miocene in a context of major tectonic changes. The plate geometry of Patagonia was modified by changes in plate motions after the break-up of the Farallon plate at 25 Ma (Pardo-Casas and Molnar 1987) and by the subduction of the Chile spreading ridge beneath South America at 14 Ma (Cande and Leslie 1986). The effects of this tectonic setting on the morphology and magmatism of the overriding plate are a matter of ongoing discussion. Particularly intriguing in this context is a group of isolated Miocene intrusions - like the CHPC - located in a transitional position between the Patagonian Batholith and the Cenozoic and Recent volcanic arc in the west, and the Patagonian plateau lavas in the east (Fig. 1). Owing to their transitional tectonic position, these isolated plutons outside the batholith represent a key to understanding the interaction between large-scale tectonics and magmatism in Patagonia. Here, I present new field, petrological, geochemical and geochronological data to characterize the nature of the CHPC, which was largely unknown before this study, in order to test the hypothesis of time-transgressive magmatism. The results of the geochemical investigation (Chapter 2) show that the CHPC is only one among these isolated back-arc plutons, but with a characteristic calc-alkaline composition, i.e. an arc signature. Most of these isolated intrusives have an alkaline character. The CHPC, in contrast, has a medium-K calc-alkaline signature, like the Patagonian batholith and most of the Quaternary arc-related volcanic rocks along the Andes. New high-precision TIMS U-Pb zircon dating of the CHPC yields ages between 17.0 and 16.4 Ma. The absolute ages support the sequence of intrusive relations established in the field (Chapter 1). These data are the first U-Pb age constraints on the CHPC, and clearly show that the magmatic history of the CHPC has no direct link to the subduction of the ridge, since the complex is at least 6 Ma older than the time of collision of the Chile ridge at this latitude (Cande and Leslie 1986). An in-depth comparison with other intrusions of Miocene age in Patagonia reveals - for the first time - an interesting temporal pattern. There is a distinct E-W trend of calc-alkaline magmatism between 20 and 16 Ma, with ages younging to the east; the CHPC is the easternmost expression of this trend. I suggest that this time-space relation reflects an eastward (landward) migration of the magmatic arc. I propose that the main factor controlling this migration is the fast rate of subduction after the major reconfiguration of plate motions following the break-up of the Farallon plate (at ~26 Ma), resulting in strong deformation and high rates of subduction erosion. High radiogenic isotope ratios (Pb, Sr, Nd), a low δ18O signature and high Th/La ratios in mafic rocks are distinctive features of the CHPC. The isotopic models presented (Chapter 2) suggest that this signature reflects source contamination of the mantle wedge rather than crustal contamination. The trace element signature of the CHPC indicates that the mantle wedge was contaminated with a terrigenous component, most likely derived from Paleozoic sediments. Fieldwork, petrography and geothermobarometry were used to further unravel the internal history of the CHPC (Chapter 3). These data suggest two main levels of crystallization: one at mid-crustal levels (6 to 4.5 kbar) and the other at a shallow level (3.5 to 2 kbar). Isotopic AFC modeling of crustal contamination indicates variable rates of assimilation, which are not correlated with the degree of differentiation. This suggests that different batches of magma differentiated independently at depth, implying that the CHPC formed from several pulses of magma deriving from at least three different sources. Textures of the granodiorites and granites indicate a high crystal content prior to emplacement and, consequently, low emplacement temperatures. Field observations show that the mafic rocks are deformed, whereas the (younger) granodiorites and granites are not. It is still under investigation whether the deformation of the mafic rocks is related to regional deformation during a compressional regime or to the emplacement itself. However, the emplacement of large volumes of crystal-rich felsic magma suggests an extensional regime. Lay summary: Petrology and geochemistry of the Chaltén Plutonic Complex and implications for the magmatic and tectonic evolution of the southern Andes (Patagonia) during the Miocene. The Chaltén Plutonic Complex (CHPC) is a mountain massif located at 49°S on the border between Chile and Argentina, in Patagonia (the southernmost region of South America). It comprises mountains that exceed 3000 metres in altitude, such as Cerro Fitz Roy (3400 m) and Cerro Torre (3100 m). These mountains are composed of plutonic rocks, i.e. magmas that cooled and crystallized beneath the Earth's surface. The chemical composition of these rocks shows that the magmas that formed this plutonic complex belong to arc volcanism. Such volcanism develops where an oceanic plate plunges beneath a continental plate, a process geologists call subduction. In such a scenario, the Earth's mantle, caught between the two plates, melts to form magma. This magma rises through the continental plate towards the surface. If it reaches the surface, it forms volcanic rocks, such as lavas; if it does not, it cools at depth to form plutonic rocks. Along the western margin of South America, the Nazca plate - which lies southeast of the Pacific oceanic plate - passes beneath the South American plate. The western edge of the southern South American plate has also been affected by other tectonic processes, such as dramatic changes in plate motions (25 Ma ago) and the collision of the Chile ridge (from 15 Ma to the present). These tectonic and magmatic features make this region a prime location for geologists. The Nazca plate formed from the break-up of an older oceanic plate 25 Ma ago, an event associated with the fastest subduction rates ever known. The Chile ridge is the place where the floor of the Pacific Ocean spreads apart, forming two oceanic plates: the Nazca and Antarctic plates. The Chile ridge has been subducting beneath the South American plate for 15 Ma, in association with the formation of large volumes of magma and major morphological changes. Which of these global tectonic changes affects the geology and geography of Patagonia has been, and still is, debated. Many researchers suggest that most morphological and magmatic features of Patagonia are linked to the subduction of the Chile ridge, but this suggestion is still debated, as our study shows. The Southern Patagonian Batholith (SPB) is an enormous massif composed of plutonic rocks extending all along the west coast of Patagonia (south of 47°S). These rocks most likely correspond to the roots of an ancient volcanic arc that has been uplifted and eroded. The CHPC, together with other small intrusions in the region, lies in an exotic position 100 km east of the SPB, and some researchers have suggested that these intrusions could be related to the subduction of the Chile ridge. To address this question, we used different geochronological methods to determine the age of the CHPC and compared it (a) with the age of similar intrusive rocks of the SPB and (b) with the age of the collision of the Chile ridge. In this work we show that the CHPC formed at least 7 Ma before the collision with the Chile ridge. Based on the ages of the CHPC and the chemical composition of its rocks and minerals, we propose that the CHPC is part of an ancient volcanic arc. The migration of the volcanic arc deeper into the continent results from the high subduction rate between 25 and 10 Ma. Clear signatures of such a process - such as strong deformation and high erosion rates - can be found all along the western edge of South America.
Abstract:
Background: Single Nucleotide Polymorphisms, among other types of sequence variants, constitute key elements in genetic epidemiology and pharmacogenomics. While sequence data about genetic variation are found in databases such as dbSNP, clues about the functional and phenotypic consequences of the variations are generally found in the biomedical literature. The identification of the relevant documents and the extraction of the information from them are hampered by the large size of literature databases and the lack of a widely accepted standard notation for biomedical entities. Thus, automatic systems for the identification of citations of allelic variants of genes in biomedical texts are required. Results: Our group has previously reported the development of OSIRIS, a system aimed at the retrieval of literature about allelic variants of genes (http://ibi.imim.es/osirisform.html). Here we describe the development of a new version of OSIRIS (OSIRISv1.2, http://ibi.imim.es/OSIRISv1.2.html) which incorporates a new entity recognition module and is built on top of a local mirror of the MEDLINE collection and HgenetInfoDB, a database that collects data on human gene sequence variations. The new entity recognition module is based on a pattern-based search algorithm for the identification of variation terms in the texts and their mapping to dbSNP identifiers. The performance of OSIRISv1.2 was evaluated on a manually annotated corpus, resulting in 99% precision, 82% recall, and an F-score of 0.89. As an example, the application of the system for collecting literature citations for the allelic variants of genes related to the diseases intracranial aneurysm and breast cancer is presented. Conclusion: OSIRISv1.2 can be used to link literature references to dbSNP database entries with high accuracy, and is therefore suitable for collecting current knowledge on gene sequence variations and supporting the functional annotation of variation databases. The application of OSIRISv1.2 in combination with controlled vocabularies like MeSH provides a way to identify associations of biomedical interest, such as those that relate SNPs to diseases.
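A toy illustration of a pattern-based variant mention recogniser in the spirit of the entity recognition module (the patterns below are invented examples, not those of OSIRIS), together with the F-score arithmetic behind the reported 0.89:

import re

VARIANT_PATTERNS = [
    re.compile(r"\brs\d+\b"),                      # dbSNP identifiers
    re.compile(r"\b[ACGT]\d+[ACGT]\b"),            # e.g. C677T
    re.compile(r"\b\d+[ACGT]\s*[>/]\s*[ACGT]\b"),  # e.g. 1691G>A
]

def find_variants(text: str) -> list[str]:
    return [m.group(0) for p in VARIANT_PATTERNS for m in p.finditer(text)]

print(find_variants("The MTHFR C677T variant (rs1801133) and 1691G>A were studied."))

precision, recall = 0.99, 0.82
print(f"F-score: {2 * precision * recall / (precision + recall):.2f}")  # 0.89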
Abstract:
Time scale parametric spike train distances like the Victor and the van Rossum distances are often applied to study the neural code based on neural stimuli discrimination. Different neural coding hypotheses, such as rate or coincidence coding, can be assessed by combining a time scale parametric spike train distance with a classifier in order to obtain the optimal discrimination performance. The time scale for which the responses to different stimuli are distinguished best is assumed to be the discriminative precision of the neural code. The relevance of temporal coding is evaluated by comparing the optimal discrimination performance with the one achieved when assuming a rate code. We here characterize the measures quantifying the discrimination performance, the discriminative precision, and the relevance of temporal coding. Furthermore, we evaluate the information these quantities provide about the neural code. We show that the discriminative precision is too unspecific to be interpreted in terms of the time scales relevant for encoding. Accordingly, the time scale parametric nature of the distances is mainly an advantage because it allows maximizing the discrimination performance across a whole set of measures with different sensitivities determined by the time scale parameter, not because it allows examining the temporal properties of the neural code.
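For concreteness, here is a minimal implementation of the van Rossum distance, one of the time scale parametric distances discussed; tau is the time scale parameter that interpolates between coincidence-like and rate-like sensitivity. The spike times are arbitrary examples.

import numpy as np

def van_rossum(train_a, train_b, tau=0.01, t_max=1.0, dt=1e-4):
    """Discretized van Rossum distance between two spike trains (seconds)."""
    t = np.arange(0.0, t_max, dt)
    def filtered(spikes):
        out = np.zeros_like(t)
        for s in spikes:  # convolve each spike with a causal exponential
            out += np.where(t >= s, np.exp(-(t - s) / tau), 0.0)
        return out
    diff = filtered(train_a) - filtered(train_b)
    return np.sqrt(np.sum(diff**2) * dt / tau)

# small tau ~ coincidence detection, large tau ~ rate comparison
print(van_rossum([0.1, 0.3, 0.5], [0.1, 0.32, 0.5], tau=0.005))
print(van_rossum([0.1, 0.3, 0.5], [0.1, 0.32, 0.5], tau=0.2))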
Abstract:
A simple wipe sampling procedure was developed for determining the surface contamination of ten cytotoxic drugs: cytarabine, gemcitabine, methotrexate, etoposide phosphate, cyclophosphamide, ifosfamide, irinotecan, doxorubicin, epirubicin and vincristine. Wiping was performed using Whatman filter paper on different surfaces such as stainless steel, polypropylene, polystyrene, glass, latex gloves, a computer mouse and coated paperboard. Wiping and desorption procedures were investigated; the same solution, containing 20% acetonitrile and 0.1% formic acid in water, gave the best results for both. After ultrasonic desorption and centrifugation, samples were analysed by a validated liquid chromatography-tandem mass spectrometry (LC-MS/MS) method in selected reaction monitoring mode. The whole analytical strategy, from wipe sampling to LC-MS/MS analysis, was evaluated to determine its quantitative performance. The lowest limit of quantification, 10 ng per wipe sample (i.e. 0.1 ng cm(-2)), was determined for the ten investigated cytotoxic drugs. The relative standard deviation for intermediate precision was always below 20%. As recovery depended on the tested surface for each drug, a correction factor was determined and applied to real samples. The method was then successfully applied at the cytotoxic production unit of the Geneva University Hospitals pharmacy.
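The quantification step implied by the correction factor can be sketched as follows; the amount, wiped area and recovery below are illustrative values, not the paper's data.

def surface_contamination(amount_ng: float, area_cm2: float, recovery: float) -> float:
    """Recovery-corrected surface load in ng/cm^2."""
    return amount_ng / recovery / area_cm2

# e.g. 10 ng found on a 100 cm^2 stainless-steel area with 80% recovery
print(surface_contamination(10.0, 100.0, 0.80))  # 0.125 ng/cm^2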
Abstract:
In this work we propose a new automatic methodology for computing accurate digital elevation models (DEMs) in urban environments from low baseline stereo pairs that shall be available in the future from a new kind of earth observation satellite. This setting makes both views of the scene similar, thus avoiding occlusions and illumination changes, which are the main disadvantages of the commonly accepted large-baseline configuration. There still remain two crucial technological challenges: (i) precisely estimating DEMs with strong discontinuities and (ii) providing a statistically proven result, automatically. The first one is solved here by a piecewise affine representation that is well adapted to man-made landscapes, whereas the application of computational Gestalt theory introduces reliability and automation. In fact this theory allows us to reduce the number of parameters to be adjusted, and to control the number of false detections. This leads to the selection of a suitable segmentation into affine regions (whenever possible) by a novel and completely automatic perceptual grouping method. It also allows us to discriminate, e.g., vegetation-dominated regions, where such an affine model does not apply and a more classical correlation technique should be preferred. In addition we propose here an extension of the classical "quantized" Gestalt theory to continuous measurements, thus combining its reliability with the precision of variational robust estimation and fine interpolation methods that are necessary in the low baseline case. Such an extension is very general and will be useful for many other applications as well.
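The piecewise affine representation amounts to fitting, within each segmented region, an affine elevation model z = ax + by + c. A minimal least-squares sketch on synthetic points (not the paper's data or algorithm):

import numpy as np

rng = np.random.default_rng(1)
x, y = rng.uniform(0, 10, 200), rng.uniform(0, 10, 200)
z = 0.3 * x - 0.1 * y + 5.0 + rng.normal(0, 0.02, 200)  # roof-like facet

A = np.column_stack([x, y, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
print(f"fitted plane: z = {a:.2f}x + {b:.2f}y + {c:.2f}")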
Abstract:
Automatic classification of makams from symbolic data is a rarely studied topic. In this paper, first a review of an n-gram based approach is presented, using various representations of the symbolic data. While a high degree of precision can be obtained, confusion happens mainly for makams that use (almost) the same scale and pitch hierarchy but differ in overall melodic progression (seyir). To further improve the system, n-gram based classification is first tested on various sections of the piece, to take into account the seyir feature that the melodic progression starts in a certain region of the scale. In a second test, a hierarchical classification structure is designed which uses n-gram and seyir features at different levels to further improve the system.
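As a rough illustration of the n-gram approach (with invented note sequences, not real makam data), a piece can be assigned to the makam whose smoothed bigram model gives its pitch sequence the highest likelihood:

from collections import Counter

def bigrams(seq):
    return list(zip(seq, seq[1:]))

def train(pieces):
    return Counter(bg for piece in pieces for bg in bigrams(piece))

def score(counts, piece, vocab_size):
    total = sum(counts.values())
    s = 1.0
    for bg in bigrams(piece):  # add-one smoothed bigram probability
        s *= (counts[bg] + 1) / (total + vocab_size)
    return s

models = {
    "makam_A": train([["G", "A", "B", "C", "B", "A"]]),
    "makam_B": train([["D", "E", "F", "G", "F", "E"]]),
}
query = ["G", "A", "B", "A"]
print(max(models, key=lambda m: score(models[m], query, vocab_size=49)))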
Abstract:
Automatic creation of polarity lexicons is a crucial issue to be solved in order to reduce the time and effort required in the first steps of Sentiment Analysis. In this paper we present a methodology based on linguistic cues that allows us to automatically discover, extract and label subjective adjectives that should be collected in a domain-based polarity lexicon. For this purpose, we designed a bootstrapping algorithm that, from a small set of seed polar adjectives, can iteratively identify, extract and annotate positive and negative adjectives. Additionally, the method automatically creates lists of highly subjective elements that change their prior polarity even within the same domain. The proposed algorithm reached a precision of 97.5% for positive adjectives and 71.4% for negative ones in the semantic orientation identification task.
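A hedged sketch of the bootstrapping idea: the conjunction heuristic below ("and" preserves polarity, "but" flips it) is a classical simplification standing in for the paper's linguistic cues, which are not detailed in the abstract; the corpus and seeds are invented.

import re

def bootstrap(corpus: list[str], seeds: dict[str, int], rounds: int = 3):
    lexicon = dict(seeds)  # word -> +1 (positive) / -1 (negative)
    for _ in range(rounds):  # newly labelled words act as seeds next round
        for sentence in corpus:
            for a, conj, b in re.findall(r"(\w+) (and|but) (\w+)", sentence):
                for known, new in ((a, b), (b, a)):
                    if known in lexicon and new not in lexicon:
                        flip = -1 if conj == "but" else 1
                        lexicon[new] = flip * lexicon[known]
    return lexicon

corpus = ["the room was clean and comfortable", "cheap but noisy location"]
print(bootstrap(corpus, {"clean": 1, "noisy": -1}))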