Abstract:
OBJECTIVE: To evaluate the power of various parameters of the vestibulo-ocular reflex (VOR) in detecting unilateral peripheral vestibular dysfunction and in characterizing certain inner ear pathologies. STUDY DESIGN: Prospective study of consecutive ambulatory patients presenting with acute onset of peripheral vertigo and spontaneous nystagmus. SETTING: Tertiary referral center. PATIENTS: Seventy-four patients (40 females, 34 males) and 22 normal subjects (11 females, 11 males) were included in the study. Patients were classified into three main diagnostic groups: vestibular neuritis (n = 40), viral labyrinthitis (n = 22) and Meniere's disease (n = 12). METHODS: The VOR function was evaluated by standard caloric and impulse rotary tests (velocity step). A mathematical model of vestibular function was used to characterize the VOR response to rotational stimulation. The diagnostic value of the different VOR parameters was assessed by uni- and multivariable logistic regression. RESULTS: In univariable analysis, caloric asymmetry emerged as the most powerful VOR parameter in identifying unilateral vestibular deficit, with a cut-off set at 20%. In multivariable analysis, the combination of caloric asymmetry and rotational time constant asymmetry significantly improved the discriminatory power over caloric testing alone (p<0.0001) and produced a detection score with a correct classification rate of 92.4%. In discriminating labyrinthine diseases, different combinations of the VOR parameters were obtained for each diagnosis (p<0.003), supporting the conclusion that the VOR characteristics differ between the three inner ear disorders. However, the clinical usefulness of these characteristics in separating the pathologies was limited. CONCLUSION: We propose a powerful logistic model combining the indices of caloric and time constant asymmetries to detect a peripheral vestibular loss, with an accuracy of 92.4%.
Based on vestibular data only, the discrimination between the different inner ear diseases is statistically possible, which supports different pathophysiologic changes in labyrinthine pathologies.
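The detection score above combines two asymmetry indices in a logistic model. A minimal sketch of that idea, with placeholder coefficients (the fitted values are not reported in the abstract, so `b0`, `b1`, `b2` and the inputs below are illustrative only):

```python
import math

def detection_score(caloric_asym, tc_asym, b0=-4.0, b1=0.12, b2=0.08):
    """Logistic detection score for unilateral vestibular loss.

    caloric_asym, tc_asym: asymmetry indices in percent.
    Coefficients b0..b2 are illustrative placeholders, not the
    values fitted in the study.
    """
    z = b0 + b1 * caloric_asym + b2 * tc_asym
    return 1.0 / (1.0 + math.exp(-z))

def classify(caloric_asym, tc_asym, threshold=0.5):
    # A patient is flagged as having a peripheral deficit when the
    # predicted probability exceeds the decision threshold.
    return detection_score(caloric_asym, tc_asym) >= threshold
```

A large caloric plus time-constant asymmetry pushes the score toward 1, a near-symmetric response toward 0; the decision threshold would be tuned to the reported 92.4% correct classification.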
Abstract:
Prostate cancer (PCa) is a potentially curable disease when diagnosed in early stages and subsequently treated with radical prostatectomy (RP). However, a significant proportion of patients tend to relapse early, with the emergence of biochemical failure (BF) as an established precursor of progression to metastatic disease. Several candidate molecular markers have been studied in an effort to enhance the accuracy of existing predictive tools regarding the risk of BF after RP. We studied the immunohistochemical expression of p53, cyclooxygenase-2 (COX-2) and cyclin D1 in a cohort of 70 patients that underwent RP for early stage, hormone naïve PCa, with the aim of prospectively identifying any possible interrelations as well as correlations with known prognostic parameters such as Gleason score, pathological stage and time to prostate-specific antigen (PSA) relapse. We observed a significant (p = 0.003) prognostic role of p53, with high protein expression correlating with shorter time to BF (TTBF) in univariate analysis. Both p53 and COX-2 expression were directly associated with cyclin D1 expression (p = 0.055 and p = 0.050 respectively). High p53 expression was also found to be an independent prognostic factor (p = 0.023). Based on previous data and results provided by this study, p53 expression exerts an independent negative prognostic role in localized prostate cancer and could therefore be evaluated as a useful new molecular marker to be added in the set of known prognostic indicators of the disease. With respect to COX-2 and cyclin D1, further studies are required to elucidate their role in early prediction of PCa relapse after RP.
Abstract:
The parasellar region is the location of a wide variety of inflammatory and benign or malignant lesions. A pathological diagnostic strategy may be difficult to establish relying solely on imaging data. Percutaneous biopsy through the foramen ovale using the Hartel technique has been developed to support the decision-making process. It is an accurate diagnostic tool allowing pathological diagnosis to determine the best treatment strategy. However, in some cases, this procedure may fail or may be inappropriate, particularly for anterior parasellar lesions. Over the past decades, endoscopy has been widely developed and promoted in many indications. It represents an interesting alternative approach to parasellar lesions with low morbidity when compared to the classic microscopic sub-temporal extradural approach with or without orbito-zygomatic removal. In this chapter, we describe our experience with the endoscopic approach to parasellar lesions. We propose a complete overview of surgical anatomy and describe methods and results of the technique. We also suggest a model of a decision-making tree for the diagnosis and treatment of parasellar lesions.
Abstract:
Our current knowledge of the general factor requirement in transcription by the three mammalian RNA polymerases is based on a small number of model promoters. Here, we present a comprehensive chromatin immunoprecipitation (ChIP)-on-chip analysis for 28 transcription factors on a large set of known and novel TATA-binding protein (TBP)-binding sites experimentally identified via ChIP cloning. A large fraction of identified TBP-binding sites is located in introns or lacks a gene/mRNA annotation and is found to direct transcription. Integrated analysis of the ChIP-on-chip data and functional studies revealed that TAF12, hitherto regarded as RNA polymerase II (RNAP II)-specific, is also involved in RNAP I transcription. Distinct profiles for general transcription factors and TAF-containing complexes were uncovered for RNAP II promoters located in CpG and non-CpG islands, suggesting distinct transcription initiation pathways. Our study broadens the spectrum of general transcription factor function and uncovers a plethora of novel, functional TBP-binding sites in the human genome.
Abstract:
A statistical methodology for the objective comparison of LDI-MS mass spectra of blue gel pen inks was evaluated. Thirty-three blue gel pen inks previously studied by Raman spectroscopy were analyzed directly on the paper using both positive and negative mode. The obtained mass spectra were first compared using relative areas of selected peaks, using the Pearson correlation coefficient and the Euclidean distance. Intra-variability among results from one ink and inter-variability between results from different inks were compared in order to choose a differentiation threshold minimizing the rate of false negatives (i.e. avoiding false differentiation of the inks). This yielded a discriminating power (DP) of up to 77% for analyses made in the negative mode. The whole mass spectra were then compared using the same methodology, allowing for a better DP in the negative mode of 92% using the Pearson correlation on standardized data. The positive mode results generally yielded a lower DP than the negative mode due to a higher intra-variability compared to the inter-variability in the mass spectra of the ink samples.
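The pairwise comparison logic can be sketched as follows; the correlation threshold and the peak-area vectors below are illustrative stand-ins, not the study's calibrated figures:

```python
import numpy as np

def pearson_similarity(a, b):
    # Pearson correlation between two mass spectra (peak-area vectors).
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.corrcoef(a, b)[0, 1])

def differentiated(spec1, spec2, threshold=0.99):
    # Two inks are declared different only when their spectra correlate
    # below the threshold; the threshold is chosen from intra-ink
    # variability so replicate analyses of the same ink are never
    # falsely differentiated. The 0.99 value here is illustrative.
    return pearson_similarity(spec1, spec2) < threshold

def discriminating_power(decisions):
    # Fraction of ink pairs declared different among all compared pairs.
    return sum(decisions) / len(decisions)
```

With a threshold set just below the worst intra-ink correlation, replicate spectra of one ink fall above it (not differentiated) while dissimilar inks fall below it, and the DP is simply the proportion of distinguishable pairs.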
Abstract:
OBJECTIVE. Data on human natality, stillbirth and perinatal mortality from Switzerland (1979-1987), available in four birthweight categories, are reexamined to assess any about-weekly (circaseptan) patterns and changes in about-daily (circadian) patterns in central Europe over a century and a half. DESIGN. Retrospective analyses of archived data. SETTING. Federal Office of Statistics for Switzerland. RESULTS. In addition to prominent circadian patterns, weekly patterns are also documented. CONCLUSION. Exogenous variations, prominent in early extrauterine life, such as changes of scheduling in obstetrics, may contribute to circadian and circaseptan natality patterns. Information on these patterns serves in the optimization of neonatal care. Partly endogenous, partly physical environmental aspects, at least of about-weekly patterns, remain to be elucidated in series consisting exclusively of spontaneous parturitions.
Abstract:
An extensive study of the central part of the Sesia Lanzo Zone has been undertaken to identify pre-Alpine protoliths and to reconstruct the lithologic and tectonic setting of this part of the Western Alps. Three main complexes have been defined: 1) the Polymetamorphic Basement Complex, corresponding to the lower unit of the Sesia Lanzo Zone after COMPAGNONI et al. (1977), is further subdivided into the three following units: a) an Internal Unit characterized by eo-Alpine high pressure (HP) assemblages (DAL PIAZ et al., 1972) (Eclogitic Micaschists); b) an Intermediate Unit where HP parageneses are partially re-equilibrated under greenschist conditions; and c) an External Unit where the main foliation is defined by a greenschist paragenesis (Gneiss Minuti auct.). 2) the Monometamorphic Cover Complex, subdivided into the following: a) the Bonze Unit, composed of sheared metagabbros, eclogitized metabasalts with MORB geochemical affinity and related metasediments (micaschists, quartzites and Mn-cherts); and b) the Scalaro Unit, containing predominantly metasediments of supposed Permo-Triassic age (yellow dolomitic marbles, calcschists and conglomeratic limestones, micaschists and quartzites with thin levels of basic rocks with within-plate basalt (WPB) geochemical affinity). Multiple lithostratigraphic sequences for the Monometamorphic Cover Complex are proposed. The contact between the Bonze and Scalaro Units is defined by repetitions of dolomitic marbles and metabasalts; the ages of the metasediments have been assigned solely by analogy with other sediments of the Western Alps, due to the absence of fossils. The Monometamorphic Cover Complex can be considered as the autochthonous cover of the Sesia Lanzo Zone because of the primary contacts with the basement and because of the presence of pre-Alpine HT basement blocks in the cover sequences.
3) The pre-Alpine high temperature (HT) Basement Complex (or "Seconda Zona Diorito-Kinzigitica") comprises HT Hercynian rocks such as kinzigites, amphibolites, granulites and calcite marbles; this Complex is always located between the Internal and the External Units and can be followed continuously for several kilometers from south of the Gressoney Valley to the Orco Valley. A schematic evolution for the Sesia Lanzo Zone is proposed; based on available data together with new geochronological data, this study shows that the internal and external parts of the polymetamorphic basement of the Sesia Zone experienced different cooling histories.
Abstract:
In addition to genetic changes affecting the function of gene products, changes in gene expression have been suggested to underlie many or even most of the phenotypic differences among mammals. However, detailed gene expression comparisons were, until recently, restricted to closely related species, owing to technological limitations. Thus, we took advantage of the latest technologies (RNA-Seq) to generate extensive qualitative and quantitative transcriptome data for a unique collection of somatic and germline tissues from representatives of all major mammalian lineages (placental mammals, marsupials and monotremes) and birds, the evolutionary outgroup. In the first major project of my thesis, we performed global comparative analyses of gene expression levels based on these data. Our analyses provided fundamental insights into the dynamics of transcriptome change during mammalian evolution (e.g., the rate of expression change across species, tissues and chromosomes) and allowed the exploration of the functional relevance and phenotypic implications of transcription changes at a genome-wide scale (e.g., we identified numerous potentially selectively driven expression switches). In a second project of my thesis, which was also based on the unique transcriptome data generated in the context of the first project, we focused on the evolution of alternative splicing in mammals. Alternative splicing contributes to transcriptome complexity by generating several transcript isoforms from a single gene, which can, thus, perform various functions. To complete the global comparative analysis of gene expression changes, we explored patterns of alternative splicing evolution.
This work uncovered several general and unexpected patterns of alternative splicing evolution (e.g., we found that alternative splicing evolves extremely rapidly) as well as a large number of conserved alternative isoforms that may be crucial for the functioning of mammalian organs. Finally, the third and final project of my PhD consisted of analyzing in detail the unique functional and evolutionary properties of the testis by exploring the extent of its transcriptome complexity. This organ was previously shown to evolve rapidly both at the phenotypic and molecular level, apparently because of the specific pressures that act on this organ and are associated with its reproductive function. Moreover, my analyses of the amniote tissue transcriptome data described above revealed strikingly widespread transcriptional activity of both functional and nonfunctional genomic elements in the testis compared to the other organs. To elucidate the cellular source and mechanisms underlying this promiscuous transcription in the testis, we generated deep-coverage RNA-Seq data for all major testis cell types as well as epigenetic data (DNA and histone methylation) using the mouse as a model system. The integration of these complete datasets revealed that meiotic and especially post-meiotic germ cells are the major contributors to the widespread functional and nonfunctional transcriptome complexity of the testis, and that this "promiscuous" spermatogenic transcription results, at least partially, from an overall transcriptionally permissive chromatin state. We hypothesize that this particular open state of the chromatin results from the extensive chromatin remodeling that occurs during spermatogenesis, which ultimately leads to the replacement of histones by protamines in the mature spermatozoa.
Our results have important functional and evolutionary implications (e.g., regarding new gene birth and testicular gene expression evolution). Generally, these three large-scale projects of my thesis provide complete and massive datasets that constitute valuable resources for further functional and evolutionary analyses of mammalian genomes.
Abstract:
MoS(x) lubricating thin films were deposited by nonreactive, reactive, and low energy ion-assisted radio-frequency (rf) magnetron sputtering from a MoS2 target. Depending on the total and reactive gas pressures, the film composition ranges between MoS0.7 and MoS2.8. A low working pressure was found to have effects similar to those of low-energy ion irradiation. Films deposited at high pressure have (002) planes preferentially perpendicular to the substrate, whereas films deposited at low pressure or under low-energy ion irradiation have (002) mainly parallel to it. Parallel films are sulfur deficient (MoS1.2-1.4). Their growth is explained in terms of an increased reactivity of the basal surfaces, itself a consequence of the creation of surface defects due to ion irradiation. The films exhibit a lubricating character for all compositions above MoS1.2. The longest lifetime in ball-on-disk wear test was found for MoS1.5.
Abstract:
1. Identifying those areas suitable for recolonization by threatened species is essential to support efficient conservation policies. Habitat suitability models (HSM) predict species' potential distributions, but the quality of their predictions should be carefully assessed when the species-environment equilibrium assumption is violated. 2. We studied the Eurasian otter Lutra lutra, whose numbers are recovering in southern Italy. To produce widely applicable results, we chose standard HSM procedures and assessed the models' capacity to predict the suitability of a recolonization area. We used two fieldwork datasets: presence-only data, used in the Ecological Niche Factor Analyses (ENFA), and presence-absence data, used in a Generalized Linear Model (GLM). In addition to cross-validation, we independently evaluated the models with data from a recolonization event, providing presences on a previously unoccupied river. 3. Three of the models successfully predicted the suitability of the recolonization area, but the GLM built with data before the recolonization disagreed with these predictions, missing the recolonized river's suitability and badly describing the otter's niche. Our results highlighted three points of relevance to modelling practices: (1) absences may prevent the models from correctly identifying areas suitable for a species' spread; (2) the selection of variables may lead to randomness in the predictions; and (3) the Area Under Curve (AUC), a commonly used validation index, was not well suited to the evaluation of model quality, whereas the Boyce Index (CBI), based on presence data only, better highlighted the models' fit to the recolonization observations. 4. For species with unstable spatial distributions, presence-only models may work better than presence-absence methods in making reliable predictions of suitable areas for expansion.
An iterative modelling process, using new occurrences from each step of the species' spread, may also help in progressively reducing errors. 5. Synthesis and applications. Conservation plans depend on reliable models of the species' suitable habitats. In non-equilibrium situations, as is the case for threatened or invasive species, models could be affected negatively by the inclusion of absence data when predicting the areas of potential expansion. Presence-only methods will therefore provide a better basis for productive conservation management practices.
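The Boyce Index mentioned in point 3 evaluates a model from presence data only, by asking whether presences concentrate in the suitability classes the model ranks highest. A minimal class-based sketch (the published continuous index uses moving windows and a Spearman rank correlation; here a plain Pearson correlation over fixed classes stands in):

```python
import numpy as np

def boyce_index(pres_suit, background_suit, n_classes=4):
    """Class-based Boyce index (a minimal sketch).

    pres_suit: suitability values predicted at presence points.
    background_suit: suitability values over the whole study area.
    Returns the correlation between the predicted-to-expected ratio
    and the rank of the suitability class; values near +1 indicate
    that presences concentrate in the classes predicted as suitable.
    """
    edges = np.linspace(0.0, 1.0, n_classes + 1)
    p = np.histogram(pres_suit, bins=edges)[0] / len(pres_suit)
    e = np.histogram(background_suit, bins=edges)[0] / len(background_suit)
    mask = e > 0
    f = p[mask] / e[mask]               # predicted-to-expected ratio per class
    ranks = np.arange(n_classes)[mask]
    # Pearson correlation of F against class rank stands in for the
    # Spearman correlation used in the original formulation.
    return float(np.corrcoef(ranks, f)[0, 1])
```

Because only presences enter the numerator, the index remains usable when reliable absences do not exist, which is exactly the non-equilibrium situation the abstract describes.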
Abstract:
With the advancement of high-throughput sequencing and the dramatic increase of available genetic data, statistical modeling has become an essential part of the field of molecular evolution. Statistical modeling has resulted in many interesting discoveries in the field, from the detection of highly conserved or diverse regions in a genome to phylogenetic inference of species' evolutionary history. Among the different types of genome sequences, protein-coding regions are particularly interesting due to their impact on proteins. The building blocks of proteins, i.e. amino acids, are coded by triplets of nucleotides, known as codons. Accordingly, studying the evolution of codons leads to a fundamental understanding of how proteins function and evolve. The current codon models can be classified into three principal groups: mechanistic codon models, empirical codon models and hybrid ones. The mechanistic models attract particular attention due to the clarity of their underlying biological assumptions and parameters. However, they suffer from simplifying assumptions that are required to overcome the burden of computational complexity. The main assumptions applied to the current mechanistic codon models are (a) double and triple substitutions of nucleotides within codons are negligible, (b) there is no mutation variation among nucleotides of a single codon and (c) the HKY nucleotide model is assumed sufficient to capture the essence of transition-transversion rates at the nucleotide level. In this thesis, I develop a framework of mechanistic codon models, named the KCM-based model family framework, based on holding or relaxing the mentioned assumptions. Accordingly, eight different models are proposed from the eight combinations of holding or relaxing the assumptions, from the simplest one that holds all the assumptions to the most general one that relaxes all of them.
The models derived from the proposed framework allow me to investigate the biological plausibility of the three simplified assumptions on real data sets, as well as to find the best model that is aligned with the underlying characteristics of the data sets. -- With the advancement of high-throughput sequencing and the dramatic increase in available genetic data, statistical modeling has become an essential element in the field of molecular evolution. Statistical modeling has led to many interesting discoveries in the field, from the detection of highly conserved or diverse regions in a genome to the phylogenetic inference of species' evolutionary history. Among the different types of genome sequences, protein-coding regions are particularly interesting because of their impact on proteins. The building blocks of proteins, namely amino acids, are coded by triplets of nucleotides called codons. Consequently, the study of codon evolution leads to a fundamental understanding of how proteins function and evolve. Current codon models can be classified into three principal groups: mechanistic codon models, empirical codon models and hybrid ones. Mechanistic models attract particular attention owing to the clarity of their underlying biological assumptions and parameters. However, they suffer from simplifying assumptions introduced to overcome the burden of computational complexity. The main assumptions adopted in current mechanistic codon models are: (a) double and triple substitutions of nucleotides within codons are negligible; (b) there is no mutational variation among the nucleotides of a single codon; and (c) the HKY nucleotide model is assumed sufficient to capture the essence of transition-transversion rates at the nucleotide level.
In this thesis, I pursue two main objectives. The first objective is to develop a framework of mechanistic codon models, named the KCM-based model family framework, based on holding or relaxing the assumptions mentioned above. Accordingly, eight different models are proposed from the eight combinations of holding or relaxing the assumptions, from the simplest one, which holds all the assumptions, to the most general one, which relaxes them all. The models derived from the proposed framework allow us to investigate the biological plausibility of the three simplifying assumptions on real data sets, as well as to find the best model aligned with the underlying characteristics of the data sets. Our experiments show that holding all three assumptions is not realistic for any of the real data sets; using simple models that hold these assumptions can therefore be misleading and can result in inaccurate parameter estimates. The second objective is to develop a generalized mechanistic codon model that relaxes all three simplifying assumptions while remaining computationally efficient, using a matrix operation called the Kronecker product. Our experiments show that, on randomly chosen data sets, the proposed generalized mechanistic codon model outperforms the other codon models with respect to the AICc metric in about half of the data sets. Furthermore, I show through several experiments that the proposed general model is biologically plausible.
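One way Kronecker operations enter codon model construction can be sketched as follows. The rate values are illustrative, and the single-substitution case shown (a Kronecker sum of three per-position 4x4 matrices) is only the simplest member of such a framework, not the thesis's generalized model:

```python
import numpy as np

def nucleotide_rate_matrix(kappa=2.0):
    """Toy HKY-like 4x4 exchangeability matrix (order A, C, G, T).

    kappa scales transitions (A<->G, C<->T); the numbers are
    illustrative, not fitted values.
    """
    q = np.ones((4, 4))
    q[0, 2] = q[2, 0] = kappa  # A <-> G transition
    q[1, 3] = q[3, 1] = kappa  # C <-> T transition
    np.fill_diagonal(q, 0.0)
    return q

def single_substitution_codon_matrix(q):
    """64x64 codon matrix allowing only single-nucleotide changes,
    assembled as a Kronecker sum of three per-position matrices.
    Codon index = 16*n1 + 4*n2 + n3 over the A, C, G, T ordering."""
    i4 = np.eye(4)
    big = (np.kron(np.kron(q, i4), i4)    # change at first position
           + np.kron(np.kron(i4, q), i4)  # change at second position
           + np.kron(np.kron(i4, i4), q)) # change at third position
    np.fill_diagonal(big, 0.0)
    return big
```

Because each term pairs one substitution matrix with two identities, codon pairs differing at two or three positions get rate zero, which is exactly assumption (a); replacing the identities with further rate matrices is the kind of relaxation the generalized model allows.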
Abstract:
Despite the key importance of altered oceanic mantle as a repository and carrier of light elements (B, Li, and Be) to depth, its inventory of these elements has hardly been explored and quantified. In order to constrain the systematics and budget of these elements we have studied samples of highly serpentinized (>50%) spinel harzburgite drilled at the Mid-Atlantic Ridge (Fifteen-Twenty Fracture zone, ODP Leg 209, Sites 1272A and 1274A). In-situ analysis by secondary ion mass spectrometry reveals that the B, Li and Be contents of mantle minerals (olivine, orthopyroxene, and clinopyroxene) remain unchanged during serpentinization. B and Li abundances largely correspond to those of unaltered mantle minerals whereas Be is close to the detection limit. The Li contents of clinopyroxene are slightly higher (0.44-2.8 mu g g(-1)) compared to unaltered mantle clinopyroxene, and olivine and clinopyroxene show an inverse Li partitioning compared to literature data. These findings along with textural observations and major element composition obtained from microprobe analysis suggest reaction of the peridotites with a mafic silicate melt before serpentinization. Serpentine minerals are enriched in B (most values between 10 and 100 mu g g(-1)) and depleted in Li (most values below 1 mu g g(-1)) compared to the primary phases, with considerable variation within and between samples. Be is at the detection limit. Analysis of whole rock samples by prompt gamma activation shows that serpentinization tends to increase B (10.4-65.0 mu g g(-1)), H2O and Cl contents and to lower Li contents (0.07-3.37 mu g g(-1)) of peridotites, implying that, contrary to alteration of oceanic crust, B is fractionated from Li and that the B and Li inventory should depend essentially on rock-water ratios. Based on our results and on literature data, we calculate the inventory of B and Li contained in the oceanic lithosphere, and its partitioning between crust and mantle as a function of plate characteristics.
We model four cases, an ODP Leg 209-type lithosphere with almost no igneous crust, and a Semail-type lithosphere with a thick igneous crust, both at 1 and 75 Ma, respectively. The results show that the Li contents of the oceanic lithosphere are highly variable (17-307 kg in a column of 1 m x 1 m x the thickness of the lithosphere (kg/col)). They are controlled by the primary mantle phases and by altered crust, whereas the B contents (25-904 kg/col) depend entirely on serpentinization. In all cases, large quantities of B reside in the uppermost part of the plate and could hence be easily liberated during slab dehydration. The most prominent input of Li into subduction zones is to be expected from Semail-type lithosphere because most of the Li is stored at shallow levels in the plate. Subducting an ODP Leg 209-type lithosphere would mean only very little Li contribution from the slab. Serpentinized mantle thus plays an important role in B recycling in subduction zones, but it is of lesser importance for Li. (C) 2008 Elsevier Ltd. All rights reserved.
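The column inventories (kg/col) follow from integrating element concentration over the thickness of the plate. A minimal sketch of that bookkeeping, with hypothetical layer values rather than the paper's profiles:

```python
def element_column_inventory(layers):
    """Mass (kg) of a trace element beneath a 1 m x 1 m patch of seafloor.

    layers: iterable of (thickness_m, density_kg_m3, conc_ug_per_g)
    tuples describing the lithospheric column from top to bottom.
    1 ug/g equals 1e-6 kg of element per kg of rock, so each layer
    contributes thickness * density * concentration * 1e-6 kg
    per square metre of column. All values here are hypothetical.
    """
    return sum(t * rho * c * 1e-6 for t, rho, c in layers)
```

For example, a single 1000 m serpentinite layer at 3000 kg/m3 carrying 10 ug/g of B contributes 30 kg to the column, which is the right order of magnitude relative to the 25-904 kg/col range quoted above.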
Abstract:
General Summary Although the chapters of this thesis address a variety of issues, the principal aim is common: to test economic ideas in an international economic context. The intention has been to supply empirical findings using the largest suitable data sets and making use of the most appropriate empirical techniques. This thesis can roughly be divided into two parts: the first one, corresponding to the first two chapters, investigates the link between trade and the environment; the second one, the last three chapters, is related to economic geography issues. Environmental problems are omnipresent in the daily press nowadays and one of the arguments put forward is that globalisation causes severe environmental problems through the reallocation of investments and production to countries with less stringent environmental regulations. A measure of the amplitude of this undesirable effect is provided in the first part. The third and the fourth chapters explore the productivity effects of agglomeration. The computed spillover effects between different sectors indicate how cluster-formation might be productivity enhancing. The last chapter is not about how to better understand the world but how to measure it and it was just a great pleasure to work on it. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much and how fast did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows us to visualize our results in Google Earth. A short summary of each of the five chapters is provided below.
The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution haven effect (PH, comparative advantage in dirty products due to differences in environmental regulation) and the factor endowment effect (FE, comparative advantage in dirty, capital intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients (for three pollutants, namely biological oxygen demand, sulphur dioxide and toxic pollution intensity for all manufacturing sectors) provided by the World Bank and use a gravity-type framework to isolate the two above-mentioned effects. Our study covers 48 countries that can be classified into 29 Southern and 19 Northern countries and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects going in the expected, opposite directions and being of similar magnitude. However, when looking at world trade, the effects become very small because of the high North-North trade share, where we have no a priori expectations about the signs of these effects. Therefore popular fears about the trade effects of differences in environmental regulations might be exaggerated. The second chapter is entitled "Is Trade Bad for the Environment? Decomposing Worldwide SO2 Emissions, 1990-2000". First we construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit of labor that vary across countries, periods and manufacturing sectors. Then we use these original data (covering 31 developed and 31 developing countries) to decompose the worldwide SO2 emissions into the three well-known dynamic effects (scale, technique and composition effects). We find that the positive scale (+9.5%) and the negative technique (-12.5%) effect are the main driving forces of emission changes.
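The scale/composition/technique accounting can be sketched as follows; the decomposition shown is the standard one built on total emissions E = sum over sectors of labor times emission intensity, with illustrative numbers rather than the chapter's estimates:

```python
def decompose_emissions(labor0, intensity0, labor1, intensity1):
    """Decompose the change in total emissions E = sum(L_i * e_i)
    between two periods into scale, composition and technique effects.
    A minimal sketch of the standard accounting, not the chapter's
    exact estimator; inputs are per-sector lists."""
    E0 = sum(l * e for l, e in zip(labor0, intensity0))
    L0, L1 = sum(labor0), sum(labor1)
    shares1 = [l / L1 for l in labor1]
    # Scale: total activity grows, holding shares and intensities fixed.
    scale = (L1 - L0) * E0 / L0
    # Composition: sectoral shares shift, holding scale and intensities.
    comp = L1 * sum(s * e for s, e in zip(shares1, intensity0)) - L1 * E0 / L0
    # Technique: intensities change, holding the new activity structure.
    tech = sum(l * (e1 - e0)
               for l, e0, e1 in zip(labor1, intensity0, intensity1))
    total = sum(l * e for l, e in zip(labor1, intensity1)) - E0
    return scale, comp, tech, total
```

By construction the three effects sum exactly to the total change, so a positive scale effect can be offset by negative technique and composition effects, which is the pattern the chapter reports.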
Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. We next construct, in a first experiment, a hypothetical world where no trade happens, i.e. each country produces its imports at home and no longer produces its exports. The difference between the actual and this no-trade world allows us (ignoring price effects) to compute a static first-order trade effect. The latter now increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, this effect is smaller (3.5%) in 2000 than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing effective emissions with the maximum or minimum possible level of SO2 emissions. These hypothetical levels of emissions are obtained by reallocating labour across sectors within each country (under the country-employment and the world industry-production constraints). Using linear programming techniques, we show that emissions are reduced by 90% with respect to the worst case, but that they could still be reduced by another 80% if emissions were to be minimized. The findings from this chapter go together with those from chapter one in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past. Going now to the economic geography part of this thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", consists of a short note that derives the theoretical model estimated in the fourth chapter. The derivation is directly based on the multi-regional framework by Ciccone (2002) but extends it in order to include sectoral disaggregation and a temporal dimension.
This allows us to formally write present productivity as a function of past productivity and other contemporaneous and past control variables. The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity based on regional data (245 NUTS-2 regions over the period 1980-2003). Using dynamic panel techniques allows us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. When using data at the sectoral level, it appears that positive cross-sector and negative own-sector externalities are present in manufacturing, while financial services display strong positive own-sector effects. The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world's economic, demographic and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia. To sum up, this thesis makes three main contributions. First, it provides new estimates of the orders of magnitude of the role of trade in the globalisation and environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it tracks, in a geometrically rigorous way, the path of the world's economic center of gravity.
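The center-of-mass idea can be sketched as follows: weight each city's position on the globe by its economic size, average the positions in three-dimensional Cartesian space, and project the mean point back onto the sphere. The cities and weights below are hypothetical, chosen only to make the computation concrete.

```python
# Toy sketch of a weighted center of gravity on the sphere.
# City list and GDP weights are invented for illustration.
import math

cities = [  # (latitude, longitude in degrees, economic weight)
    (40.7, -74.0, 1.0),   # New York
    (51.5,  -0.1, 0.8),   # London
    (35.7, 139.7, 0.9),   # Tokyo
]

x = y = z = 0.0
total = 0.0
for lat, lon, w in cities:
    phi, lam = math.radians(lat), math.radians(lon)
    x += w * math.cos(phi) * math.cos(lam)   # convert to unit-sphere
    y += w * math.cos(phi) * math.sin(lam)   # Cartesian coordinates,
    z += w * math.sin(phi)                   # accumulate weighted sums
    total += w
x, y, z = x / total, y / total, z / total    # weighted mean (inside the globe)

# Project the interior mean point back to the surface.
lat_cg = math.degrees(math.atan2(z, math.hypot(x, y)))
lon_cg = math.degrees(math.atan2(y, x))
print(lat_cg, lon_cg)
```

Because the mean of surface points lies inside the globe, the projection step is what makes the comparison across years geometrically consistent.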
Abstract:
In addition to differences in protein-coding gene sequences, changes in expression resulting from mutations in regulatory sequences have long been hypothesized to be responsible for phenotypic differences between species. However, unlike comparisons of genome sequences, few studies, generally restricted to pairwise comparisons of closely related mammalian species, have assessed between-species differences at the transcriptome level. These studies reported that gene expression evolves at different rates in different organs and in a pattern that is overall consistent with neutral models of evolution. In the first part of my thesis, I investigated the evolution of gene expression in therian mammals (i.e., placentals and marsupials), based on microarray data from human, mouse and the gray short-tailed opossum (Monodelphis domestica). In addition to autosomal genes, a special focus was given to the evolution of X-linked genes. The therian X chromosome was recently shown to be younger than previously thought and to harbor a specific gene content (e.g., genes involved in brain or reproductive functions) that is thought to have been shaped by specific sex-related evolutionary forces. Sex chromosomes derive from ordinary autosomes, and their differentiation led to the degeneration of the Y chromosome (in mammals) or W chromosome (in birds). Consequently, X- or Z-linked genes differ in gene dose between males and females, such that the heterogametic sex has half the X/Z gene dose compared to the ancestral state. To cope with this dosage imbalance, mammals have been reported to have evolved mechanisms of dosage compensation. In the first project, I first showed that transcriptomes evolve at different rates in different organs. Of the five tissues I investigated, the testis is the most rapidly evolving organ at the gene expression level, while the brain has the most conserved transcriptome.
Second, my analyses revealed that mammalian gene expression evolution is compatible with a neutral model, in which the rate of change in gene expression levels is linked to the efficiency of purifying selection in a given lineage, which, in turn, is determined by the long-term effective population size of that lineage. Thus, the rate of DNA sequence evolution, which could be expected to determine the rate of regulatory sequence change, does not seem to be a major determinant of the rate of gene expression evolution, and most gene expression changes seem to be (slightly) deleterious. Finally, X-linked genes seem to have experienced elevated rates of gene expression change during the early stages of X evolution. To further investigate the evolution of mammalian gene expression, we generated an extensive RNA-Seq gene expression dataset for nine mammalian species and a bird. The analyses of this dataset confirmed the patterns previously observed with microarrays and helped to significantly deepen our view of gene expression evolution. In a specific project based on these data, I sought to assess in detail the evolution of dosage compensation in amniotes. My analyses revealed the absence of male-to-female dosage compensation in monotremes and its presence in marsupials and, in addition, confirmed the patterns previously described for placental mammals and birds. I then assessed the global expression level of the X/Z chromosomes and contrasted it with the ancestral expression level estimated from orthologous autosomal genes in species with non-homologous sex chromosomes. This analysis revealed a lack of up-regulation in placental mammals, the expression level of X-linked genes being proportional to gene dose. Interestingly, the ancestral gene expression level was at least partially restored in marsupials, as well as in the heterogametic sex of monotremes and birds.
Finally, I investigated alternative mechanisms of dosage compensation and found that gene duplication does not seem to be a widespread mechanism for restoring the ancestral gene dose. However, I could show that placental mammals have preferentially down-regulated autosomal genes that interact with X-linked genes whose expression decreased, thereby identifying a novel alternative mechanism of dosage compensation.
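The current-versus-ancestral comparison can be illustrated with a minimal sketch. All expression values below are fabricated, and a single median current-to-ancestral ratio stands in for the much richer per-gene analysis the chapter describes.

```python
# Toy sketch (fabricated values) of the ratio test: compare X-linked
# expression in a focal species with an ancestral proxy taken from
# orthologous autosomal genes in a species with non-homologous
# sex chromosomes.
import statistics

x_linked_now = [5.1, 3.9, 6.2, 4.4, 5.0]            # X-linked, focal species
autosomal_ancestral = [10.3, 8.1, 11.9, 9.0, 10.2]  # orthologs, outgroup proxy

# A median ratio near 1 suggests restored (up-regulated) expression;
# a ratio near 0.5 suggests expression simply proportional to gene dose.
ratios = [x / a for x, a in zip(x_linked_now, autosomal_ancestral)]
print(round(statistics.median(ratios), 2))
```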
Abstract:
Summary: International comparisons in the area of victimization, particularly in the field of violence against women, are fraught with methodological problems that previous research has not systematically addressed, and whose answer does not seem to be agreed upon. For obvious logistic and financial reasons, international studies on violence against women (i.e., studies that administer the same instrument in different countries) are rare; therefore, researchers are bound to resort to secondary comparisons. Many studies simply juxtapose their results with those of previous work or with findings obtained in different contexts, in order to offer an allegedly comparative perspective on their conclusions. While researchers most of the time indicate the methodological limitations of a direct comparison, it is not rare that these limitations do not translate into concrete methodological controls. Yet many studies have shown the influence of surveys' methodological parameters on findings, listing recommendations for a «best practice» of research. Although, over the past decades, violence against women surveys have become more and more similar, tending towards a sort of uniformization that could be interpreted as a passive consensus, these instruments retain more or less subtle differences that may still affect the validity of a comparison. Only a small number of studies have directly worked on the comparability of violence against women data, striving to control the methodological parameters of the surveys in order to guarantee the validity of their comparisons. The goal of this work is to compare data from two national surveys on violence against women: the Swiss component of the International Violence Against Women Survey [CH-IVAWS] and the National Violence Against Women Survey [NVAWS] administered in the United States. The choice of these studies certainly ensues from the author's affiliations; however, it is far from being trivial.
Indeed, the criminological field currently grants American and Anglo-Saxon literature a predominant place, compelling researchers from other countries to engage in contortions to interpret their results in the light of previous work or to develop effective interventions in their own context. Turning to hypotheses or concepts developed in a specific framework inevitably raises the issue of their applicability to another context, here the Swiss, if not more broadly the European, context. This issue thus takes on an interest that goes beyond the particular topic of violence against women, adding to its relevance. This work articulates around three axes. First, it shows how survey characteristics influence estimates. The comparability of the nature of the CH-IVAWS and NVAWS, their sampling designs and the characteristics of their administration are discussed. The definitions used, the operationalization of variables based on comparable items, the control of reference periods, as well as the nature of the victim-offender relationship are included among the controlled factors. This study establishes content validity within and across studies, presenting a systematic process designed to maximize the comparability of secondary data. The implications of the process are illustrated with the successive presentation of comparable and non-comparable operationalizations of computed variables. Measuring violence against women in Switzerland and the United States, this work compares the prevalence of different forms (threats, physical violence and sexual violence) and types of violence (partner and nonpartner violence). Second, it endeavors to analyze the concepts of multivictimization (i.e., experiencing different forms of victimization), repeat victimization (i.e., experiencing the same form of violence more than once), and revictimization (i.e., the link between childhood and adulthood victimization) in a comparative, and comparable, approach.
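The three concepts just defined can be operationalized quite directly on per-respondent incident records. The following is a minimal sketch on invented toy data; the field names and coding are mine, not the CH-IVAWS/NVAWS operationalizations.

```python
# Toy illustration (fabricated data) of multivictimization, repeat
# victimization, and revictimization, computed per respondent.
records = [
    {"id": 1, "adult": ["physical", "physical"], "childhood": True},
    {"id": 2, "adult": ["threat", "sexual"],     "childhood": False},
    {"id": 3, "adult": [],                        "childhood": True},
    {"id": 4, "adult": ["sexual"],                "childhood": True},
]

def multivictimized(r):
    # different forms of victimization experienced in adulthood
    return len(set(r["adult"])) > 1

def repeat_victimized(r):
    # the same form of violence experienced more than once
    forms = r["adult"]
    return any(forms.count(f) > 1 for f in set(forms))

def revictimized(r):
    # childhood victimization followed by adult victimization
    return r["childhood"] and len(r["adult"]) > 0

n = len(records)
print(sum(map(multivictimized, records)) / n,
      sum(map(repeat_victimized, records)) / n,
      sum(map(revictimized, records)) / n)
```

The comparability problem discussed in the text amounts to ensuring that both surveys feed these definitions with equivalently coded incidents (same forms, same reference periods, same victim-offender categories).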
Third, aiming at understanding why partner violence appears more frequent in the United States, while victims of nonpartners are more frequent in Switzerland, as well as in other European countries, different victimization correlates are examined. This research contributes to a better understanding of the relevance of controlling methodological parameters in comparisons across studies, as it illustrates, systematically, the imposed controls and their implications for quantitative data. Moreover, it details how ignoring these parameters might lead to erroneous conclusions, statistically as well as theoretically. The conclusion of the study puts into a wider perspective the discussion of differences and similarities in violence against women in Switzerland and the United States, and integrates recommendations as to the relevance and validity of international comparisons, whatever the field in which they are conducted.