952 results for Data sets storage


Relevance: 80.00%

Abstract:

Background and aim of the study: Genomic gains and losses play a crucial role in the development and progression of DLBCL and are closely related to gene expression profiles (GEP), including the germinal center B-cell-like (GCB) and activated B-cell-like (ABC) cell-of-origin (COO) molecular signatures. To identify new oncogenes or tumor suppressor genes (TSG) involved in DLBCL pathogenesis and to determine their prognostic value, an integrated analysis of high-resolution gene expression and copy number profiling was performed. Patients and methods: Two hundred and eight adult patients with de novo CD20+ DLBCL enrolled in the prospective multicentric randomized LNH-03 GELA trials (LNH03-1B, -2B, -3B, -39B, -5B, -6B, -7B), with available frozen tumor samples, centralized review and adequate DNA/RNA quality, were selected. 116 patients were treated with Rituximab(R)-CHOP/R-miniCHOP and 92 patients were treated with the high-dose (R)-ACVBP regimen dedicated to frontline patients younger than 60 years (y). Tumor samples were simultaneously analysed by high-resolution comparative genomic hybridization (CGH, Agilent, 144K) and gene expression arrays (Affymetrix, U133+2). Minimal common regions (MCR), defined as segments that affect the same chromosomal region in different cases, were delineated. Gene expression and MCR data sets were merged using the Gene Expression and Dosage Integrator algorithm (GEDI, Lenz et al., PNAS 2008) to identify new potential driver genes. Results: A total of 1363 recurrent MCRs (defined by a penetrance > 5%) within the DLBCL data set, ranging in size from 386 bp (affecting a single gene) to more than 24 Mb, were identified by CGH. Of these MCRs, 756 (55%) showed a significant association with gene expression: 396 (59%) gains, 354 (52%) single-copy deletions, and 6 (67%) homozygous deletions. With this integrated approach, in addition to previously reported genes (CDKN2A/2B, PTEN, DLEU2, TNFAIP3, B2M, CD58, TNFRSF14, FOXP1, REL...), several genes targeted by gene copy abnormalities with a dosage effect and potential physiopathological impact were identified, including genes with TSG activity involved in the cell cycle (HACE1, CDKN2C), immune response (CD68, CD177, CD70, TNFSF9, IRAK2), DNA integrity (XRCC2, BRCA1, NCOR1, NF1, FHIT) or oncogenic functions (CD79b, PTPRT, MALT1, AUTS2, MCL1, PTTG1...), with distinct distributions according to COO signature. The CDKN2A/2B tumor suppressor locus (9p21) was deleted homozygously in 27% of cases and hemizygously in 9% of cases. Biallelic loss was observed in 49% of ABC DLBCL and in 10% of GCB DLBCL. This deletion was strongly correlated with age and associated with a limited number of additional genetic abnormalities, including trisomy 3 and 18 and short gains/losses of chromosome 1, 2 and 19 regions (FDR < 0.01), making it possible to identify genes that may have synergistic effects with CDKN2A/2B inactivation. With a median follow-up of 42.9 months, only CDKN2A/2B biallelic deletion strongly correlated (FDR p-value < 0.01) with a poor outcome in the entire cohort (4-y PFS = 44% [32-61] vs. 74% [66-82] for patients in germline configuration; 4-y OS = 53% [39-72] vs. 83% [76-90]). In a Cox proportional hazards model of PFS, CDKN2A/2B deletion remained predictive (HR = 1.9 [1.1-3.2], p = 0.02) when combined with IPI (HR = 2.4 [1.4-4.1], p = 0.001) and GCB status (HR = 1.3 [0.8-2.3], p = 0.31). This difference remained predictive in the subgroup of patients treated with R-CHOP (4-y PFS = 43% [29-63] vs. 66% [55-78], p = 0.02), in patients treated with R-ACVBP (4-y PFS = 49% [28-84] vs. 83% [74-92], p = 0.003), and in the GCB (4-y PFS = 50% [27-93] vs. 81% [73-90], p = 0.02) and ABC/unclassified (5-y PFS = 42% [28-61] vs. 67% [55-82], p = 0.009) molecular subtypes (Figure 1). Conclusion: We report for the first time an integrated genetic analysis of a large cohort of DLBCL patients included in a prospective multicentric clinical trial program, allowing the identification of new potential driver genes with pathogenic impact. However, CDKN2A/2B deletion constitutes the strongest, and the only, prognostic factor of chemoresistance to R-CHOP, regardless of the COO signature, and it is not overcome by a more intensified immunochemotherapy. Patients displaying this frequent genomic abnormality warrant new, dedicated therapeutic approaches.
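
The multivariable analysis described above is a Cox proportional hazards model combining CDKN2A/2B deletion status, IPI and COO. Below is a minimal sketch of that kind of model using the lifelines library; the data frame, column names and values are hypothetical toy data, not the trial's.

```python
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "pfs_months":       [12.0, 48.0, 7.5, 60.0, 23.0, 36.0, 55.0, 9.0],
    "progressed":       [1, 0, 1, 0, 1, 1, 0, 1],   # event indicator
    "cdkn2a_biallelic": [1, 0, 1, 0, 0, 1, 0, 1],   # hypothetical deletion status
    "ipi_high":         [1, 0, 0, 1, 1, 0, 0, 1],   # hypothetical IPI group
    "gcb":              [0, 1, 0, 1, 1, 0, 1, 0],   # hypothetical COO status
})

# A small ridge penalty stabilises the fit on this tiny toy sample.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="pfs_months", event_col="progressed")
cph.print_summary()  # the exp(coef) column gives hazard ratios like those quoted
```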

Relevance: 80.00%

Abstract:

ABSTRACT: BACKGROUND: Many studies have been published outlining the global effects of 17 beta-estradiol (E2) on gene expression in MCF-7 cells, a human breast cancer-derived epithelial line. These studies show large variation in results, reporting between ~100 and ~1500 genes regulated by E2, with poor overlap. RESULTS: We performed a meta-analysis of these expression studies, using the rank product method to obtain a more accurate and stable list of the differentially expressed genes, and of the pathways regulated by E2. We analyzed 9 time-series data sets, concentrating on the response at 3-4 hrs (early) and at 24 hrs (late). We found >1000 statistically significant probe sets after correction for multiple testing at 3-4 hrs, and >2000 significant probe sets at 24 hrs. Differentially expressed genes were examined by pathway analysis. This revealed 15 early response pathways, mostly related to cell signaling and proliferation, and 20 late response pathways, mostly related to breast cancer, cell division, and DNA repair and recombination. CONCLUSIONS: Our results show that the meta-analysis identified more differentially expressed genes than the individual studies, and that these genes act together in networks. These results provide new insight into E2-regulated mechanisms, especially in the context of breast cancer.
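
The rank product statistic used here (Breitling et al., 2004) ranks each gene within every study and takes the geometric mean of its ranks across studies, assessing significance by permutation. A minimal NumPy sketch, assuming a toy matrix of log fold changes and a deliberately crude per-gene permutation p-value:

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_studies = 1000, 9                     # nine data sets, as above
logfc = rng.normal(size=(n_genes, n_studies))    # toy log fold changes

def rank_product(fc):
    # Rank genes within each study (1 = most up-regulated), then take the
    # geometric mean of the ranks across studies.
    ranks = (-fc).argsort(axis=0).argsort(axis=0) + 1
    return np.exp(np.log(ranks).mean(axis=1))

rp = rank_product(logfc)

# Permutation null: shuffle gene labels within each study and recompute.
null = np.array([rank_product(rng.permuted(logfc, axis=0))
                 for _ in range(100)])
pvals = (null <= rp).mean(axis=0)   # crude per-gene permutation p-value
```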

Relevance: 80.00%

Abstract:

This paper describes the development and applications of a super-resolution method known as Super-Resolution Variable-Pixel Linear Reconstruction. The algorithm works by combining different lower-resolution images in order to obtain, as a result, a higher-resolution image. We show that it can make significant spatial-resolution improvements to satellite images of the Earth's surface, allowing recognition of objects with sizes approaching the limiting spatial resolution of the lower-resolution images. The algorithm is based on the Variable-Pixel Linear Reconstruction algorithm developed by Fruchter and Hook, a well-known method in astronomy never before used for Earth remote sensing purposes. The algorithm preserves photometry, can weight input images according to the statistical significance of each pixel, and removes the effect of geometric distortion on both image shape and photometry. In this paper, we describe its development for remote sensing purposes, show that the algorithm is useful for images as different from astronomical images as remote sensing ones, and show applications to: 1) a set of simulated multispectral images obtained from a real QuickBird image; and 2) a set of real multispectral Landsat Enhanced Thematic Mapper Plus (ETM+) images. These examples show that the algorithm provides a substantial improvement in limiting spatial resolution for both simulated and real data sets without significantly altering the multispectral content of the input low-resolution images, without amplifying the noise, and with very few artifacts.
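
Variable-Pixel Linear Reconstruction (Drizzle, Fruchter and Hook) maps input pixels onto a finer output grid and accumulates weighted flux, normalizing by the accumulated weight. The sketch below is a strong simplification, assuming each input pixel is dropped onto the nearest fine-grid cell rather than distributed by fractional area overlap; the frames, shifts and weights are toy values.

```python
import numpy as np

def drizzle(frames, shifts, scale=2, weights=None):
    h, w = frames[0].shape
    out = np.zeros((h * scale, w * scale))
    wgt = np.zeros_like(out)
    weights = weights or [np.ones((h, w)) for _ in frames]
    for img, (dy, dx), wmap in zip(frames, shifts, weights):
        for y in range(h):
            for x in range(w):
                # Drop the input pixel at its (shifted) position on the fine grid.
                oy = int(round((y + dy) * scale))
                ox = int(round((x + dx) * scale))
                if 0 <= oy < out.shape[0] and 0 <= ox < out.shape[1]:
                    out[oy, ox] += wmap[y, x] * img[y, x]
                    wgt[oy, ox] += wmap[y, x]
    # Normalize by accumulated weight; empty cells stay zero.
    return np.divide(out, wgt, out=np.zeros_like(out), where=wgt > 0)

# Four toy 32x32 frames offset by half-pixel shifts -> one 64x64 estimate.
rng = np.random.default_rng(1)
truth = rng.random((32, 32))
frames = [truth, truth, truth, truth]
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
hi_res = drizzle(frames, shifts)
```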

Relevance: 80.00%

Abstract:

This paper reports the results from a second characterisation of the 91500 zircon, including data from electron probe microanalysis, laser ablation inductively coupled plasma-mass spectrometry (LA-ICP-MS), secondary ion mass spectrometry (SIMS) and laser fluorination analyses. The focus of this initiative was to establish the suitability of this large single zircon crystal for calibrating in situ analyses of the rare earth elements and oxygen isotopes, as well as to provide working values for key geochemical systems. In addition to extensive testing of the chemical and structural homogeneity of this sample, the occurrence of banding in 91500 in both backscattered electron and cathodoluminescence images is described in detail. Blind intercomparison data reported by both LA-ICP-MS and SIMS laboratories indicate that only small systematic differences exist between the data sets provided by these two techniques. Furthermore, the use of NIST SRM 610 glass as the calibrant for SIMS analyses was found to introduce little or no systematic error into the results for zircon. Based on both laser fluorination and SIMS data, zircon 91500 seems to be very well suited for calibrating in situ oxygen isotopic analyses.

Relevance: 80.00%

Abstract:

The evolution of ants is marked by remarkable adaptations that allowed the development of very complex social systems. To identify how ant-specific adaptations are associated with patterns of molecular evolution, we searched for signs of positive selection on amino-acid changes in proteins. We identified 24 functional categories of genes that were enriched for positively selected genes in the ant lineage. We also reanalyzed genome-wide data sets in bees and flies with the same methodology to check whether positive selection was specific to ants or also present in other insects. Notably, genes implicated in immunity were enriched for positively selected genes in all three lineages, ruling out the hypothesis that the evolution of hygienic behaviors in social insects caused a major relaxation of selective pressure on immune genes. Our scan also indicated that genes implicated in neurogenesis and olfaction started to undergo increased positive selection before the evolution of sociality in Hymenoptera. Finally, the comparison between these three lineages allowed us to pinpoint molecular evolution patterns specific to the ant lineage. In particular, there was ant-specific recurrent positive selection on genes with mitochondrial functions, suggesting that mitochondrial activity was improved during the evolution of this lineage. This might have been an important step toward the evolution of the extreme lifespan that is a hallmark of ants.
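
The enrichment of functional categories for positively selected genes is typically tested with a contingency-table test such as Fisher's exact test. A minimal sketch with entirely hypothetical counts:

```python
# Are positively selected genes over-represented in a functional category?
from scipy.stats import fisher_exact

genome_size = 5000   # genes tested in the lineage (toy number)
selected    = 250    # genes with significant positive selection
in_category = 120    # genes annotated to one functional category
both        = 18     # positively selected genes in that category

table = [[both, selected - both],
         [in_category - both, genome_size - selected - in_category + both]]
odds, p = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds:.2f}, one-sided p = {p:.3g}")
```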

Relevance: 80.00%

Abstract:

PURPOSE: To use diffusion-tensor (DT) magnetic resonance (MR) imaging in patients with essential tremor who were treated with transcranial MR imaging-guided focused ultrasound lesion inducement to identify the structural connectivity of the ventralis intermedius nucleus of the thalamus and determine how DT imaging changes correlated with tremor changes after lesion inducement. MATERIALS AND METHODS: With institutional review board approval, and with prospective informed consent, 15 patients with medication-refractory essential tremor were enrolled in a HIPAA-compliant pilot study and were treated with transcranial MR imaging-guided focused ultrasound surgery targeting the ventralis intermedius nucleus of the thalamus contralateral to their dominant hand. Fourteen patients were ultimately included. DT MR imaging studies at 3.0 T were performed preoperatively and 24 hours, 1 week, 1 month, and 3 months after the procedure. Fractional anisotropy (FA) maps were calculated from the DT imaging data sets for all time points in all patients. Voxels where FA consistently decreased over time were identified, and FA change in these voxels was correlated with clinical changes in tremor over the same period by using Pearson correlation. RESULTS: Ipsilateral brain structures that showed prespecified negative correlation values of FA over time of -0.5 or less included the pre- and postcentral subcortical white matter in the hand knob area; the region of the corticospinal tract in the centrum semiovale, in the posterior limb of the internal capsule, and in the cerebral peduncle; the thalamus; the region of the red nucleus; the location of the central tegmental tract; and the region of the inferior olive. The contralateral middle cerebellar peduncle and bilateral portions of the superior vermis also showed persistent decrease in FA over time. There was strong correlation between decrease in FA and clinical improvement in hand tremor 3 months after lesion inducement (P < .001). CONCLUSION: DT MR imaging after MR imaging-guided focused ultrasound thalamotomy depicts changes in specific brain structures. The magnitude of the DT imaging changes after thalamic lesion inducement correlates with the degree of clinical improvement in essential tremor.
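
Fractional anisotropy is computed voxelwise from the three eigenvalues of the diffusion tensor; the FA maps described above are built this way. A minimal sketch of the standard FA formula (the eigenvalues below are illustrative values in mm²/s):

```python
import numpy as np

def fractional_anisotropy(ev):
    # FA from the diffusion-tensor eigenvalues (l1, l2, l3).
    ev = np.asarray(ev, dtype=float)
    md = ev.mean()                          # mean diffusivity
    num = np.sqrt(((ev - md) ** 2).sum())
    den = np.sqrt((ev ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0

print(fractional_anisotropy([1.7e-3, 0.3e-3, 0.3e-3]))  # anisotropic, FA ~ 0.8
print(fractional_anisotropy([1.0e-3, 1.0e-3, 1.0e-3]))  # isotropic, FA = 0.0
```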

Relevance: 80.00%

Abstract:

The most advanced stage of water erosion, the gully, represents a severe problem in different contexts, both in rural and urban environments. In the search for stabilization of the process in a viable manner, it is of utmost importance to assess the efficiency of evaluation methodologies. For this purpose, the efficiency of low-cost conservation practices was tested for the reduction of soil and nutrient losses caused by erosion from gullies in Pinheiral, state of Rio de Janeiro. The following areas were studied: a gully recovered by means of physical and biological strategies; a gully in the recovering stage, by means of physical strategies only; and a gully under no restoration treatment. During the summer of 2005/2006, the following data sets were collected for this study: soil classification of each of the eroded gully areas; planimetric and altimetric surveys; rain erosivity indexes; the amount of soil sediment; sediment grain-size characteristics; and natural amounts of the nutrients Ca, Mg, K and P, as well as total C and N concentrations. The first three measurements yielded 52.5, 20.5, and 29.0 Mg of sediment from the gully without intervention, and 1.0, 1.7 and 1.8 Mg from the gully with physical interventions, indicating an average reduction of 95 %. The fully recovered gully produced no sediment during the period. The data on total nutrient loss from the three gullies under investigation showed reductions of 98 % for the recovering gully and 99 % for the fully recovered one. As for the loss of nutrients, the data indicate a nutrient loss of 1,811 kg from the non-treated gully. The use of physical and biological interventions made it possible to reduce overall nutrient loss by more than 96 % over the entire rainy season, as compared to the non-treated gully. These results show that the methods used were effective in reducing soil and nutrient losses from gullies.
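
A quick check of the reported sediment figures reproduces the stated average reduction of roughly 95 %:

```python
# Verify the sediment-reduction arithmetic quoted above.
untreated = [52.5, 20.5, 29.0]   # Mg of sediment, gully without intervention
treated   = [1.0, 1.7, 1.8]      # Mg, gully with physical interventions

reductions = [1 - t / u for t, u in zip(treated, untreated)]
print([f"{r:.1%}" for r in reductions])             # ~98%, ~92%, ~94%
print(f"mean reduction = {sum(reductions)/3:.0%}")  # ~95%, as stated
```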

Relevance: 80.00%

Abstract:

Objective: Health status measures usually have an asymmetric distribution and present a high percentage of respondents with the best possible score (ceiling effect), especially when they are assessed in the overall population. Different methods have been proposed to model this type of variable while taking the ceiling effect into account: the tobit models, the Censored Least Absolute Deviations (CLAD) models or the two-part models, among others. The objective of this work was to describe the tobit model and compare it with the Ordinary Least Squares (OLS) model, which ignores the ceiling effect. Methods: Two different data sets were used to compare both models: a) real data coming from the European Study of Mental Disorders (ESEMeD), in order to model the EQ-5D index, one of the utility measures most commonly used for the evaluation of health status; and b) data obtained from simulation. Cross-validation was used to compare the predicted values of the tobit model and the OLS model. The following estimators were compared: the percentage of absolute error (R1), the percentage of squared error (R2), the Mean Squared Error (MSE) and the Mean Absolute Prediction Error (MAPE). Different data sets were created for different values of the error variance and different percentages of individuals with ceiling effect. The estimations of the coefficients, the percentage of explained variance and the plots of residuals versus predicted values obtained under each model were compared. Results: With regard to the results of the ESEMeD study, the predicted values obtained with the OLS model and those obtained with the tobit model were very similar. The regression coefficients of the linear model were consistently smaller than those from the tobit model. In the simulation study, we observed that when the error variance was small (s=1), the tobit model presented unbiased estimations of the coefficients and accurate predicted values, especially when the percentage of individuals with the highest possible score was small. However, when the error variance was greater (s=10 or s=20), the percentage of explained variance for the tobit model and the predicted values were more similar to those obtained with an OLS model. Conclusions: The proportion of variability accounted for by the models and the percentage of individuals with the highest possible score have an important effect on the performance of the tobit model in comparison with the linear model.
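
The tobit model treats scores at the ceiling as right-censored observations of a latent variable and fits the parameters by maximum likelihood, whereas OLS regresses on the observed, truncated scores directly. A minimal sketch on simulated data (the ceiling, coefficients and sample size are arbitrary choices, not the ESEMeD setup):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
latent = 0.8 + 0.3 * x + rng.normal(scale=0.25, size=n)
ceiling = 1.0
y = np.minimum(latent, ceiling)          # observed scores pile up at 1.0

def neg_loglik(params):
    b0, b1, log_s = params
    s = np.exp(log_s)
    mu = b0 + b1 * x
    cens = y >= ceiling
    # Density for uncensored points, tail probability for censored ones.
    ll = norm.logpdf(y[~cens], mu[~cens], s).sum()
    ll += norm.logsf(ceiling, mu[cens], s).sum()   # P(latent > ceiling)
    return -ll

res = minimize(neg_loglik, x0=[0.5, 0.0, np.log(0.5)], method="Nelder-Mead")
b_ols = np.polyfit(x, y, 1)[::-1]        # OLS (intercept, slope) for comparison
print("tobit:", res.x[:2], "  OLS:", b_ols)   # the OLS slope is attenuated
```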

Relevance: 80.00%

Abstract:

Purpose: SIOPEN scoring of 123I-mIBG imaging has been shown to predict response to induction chemotherapy and outcome at diagnosis in children with high-risk neuroblastoma (HRN). Method: Patterns of skeletal 123I-mIBG uptake were assigned numerical scores (Mscore) ranging from 0 (no metastasis) to 72 (diffuse metastases) within 12 body areas, as described previously. 271 anonymised, paired image data sets acquired at diagnosis and on completion of Rapid COJEC induction chemotherapy were reviewed, constituting a representative sample of the 1602 children treated prospectively within the HR-NBL1/SIOPEN trial. Pre- and post-treatment Mscores were compared with bone marrow cytology (BM) and 3-year event-free survival (EFS). Results: 224/271 patients showed skeletal mIBG uptake at diagnosis and were evaluable for mIBG response. Complete response (CR) on mIBG to Rapid COJEC induction was achieved by 66%, 34% and 15% of patients who had pre-treatment Mscores of <18 (n=65, 29%), 18-44 (n=95, 42%) and ≥45 (n=64, 28.5%), respectively (chi-squared test, p<0.0001). Mscore at diagnosis and on completion of Rapid COJEC correlated strongly with BM involvement (p<0.0001). The correlation of pre-treatment scores with post-treatment scores and response was highly significant (p<0.001). Most importantly, the 3-year EFS in 47 children with Mscore 0 at diagnosis was 0.68 (±0.07), by comparison with 0.42 (±0.06), 0.35 (±0.05) and 0.25 (±0.06) for patients in pre-treatment score groups <18, 18-44 and ≥45, respectively (p<0.001). An Mscore threshold of ≥45 at diagnosis was associated with significantly worse outcome by comparison with all other Mscore groups (p=0.029). The 3-year EFS of 0.53 (±0.07) in patients in metastatic CR (mIBG and BM) after Rapid COJEC (33%) is clearly superior to that of patients not achieving metastatic CR (0.24 (±0.04), p=0.005). Conclusion: SIOPEN scoring of 123I-mIBG imaging predicts response to induction chemotherapy and outcome at diagnosis in children with HRN.
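
The Mscore arithmetic implied above (12 body segments, a maximum total of 72) corresponds to grading each segment and summing; the per-segment 0-6 range used below is an assumption consistent with the 0-72 total, and the patient grades are invented:

```python
# Toy sketch of SIOPEN-style skeletal scoring (assumed 0-6 grade per segment).
def mscore(segment_grades):
    assert len(segment_grades) == 12, "12 skeletal segments"
    assert all(0 <= g <= 6 for g in segment_grades), "each segment graded 0-6"
    return sum(segment_grades)

diagnosis      = [3, 2, 4, 1, 0, 2, 5, 3, 2, 1, 0, 2]   # hypothetical patient
post_induction = [1, 0, 2, 0, 0, 1, 2, 1, 0, 0, 0, 0]
print(mscore(diagnosis), "->", mscore(post_induction))   # 25 -> 7
```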

Relevance: 80.00%

Abstract:

Statistical models allow the representation of data sets and the estimation and/or prediction of the behavior of a given variable through its interaction with the other variables involved in a phenomenon. Among the various statistical models are the autoregressive state-space models (ARSS) and the linear regression models (LR), which allow the quantification of the relationships among soil-plant-atmosphere system variables. To compare the quality of the ARSS and LR models for modeling the relationships between soybean yield and soil physical properties, Akaike's Information Criterion (AIC), which provides a coefficient for the selection of the best model, was used in this study. The data sets were sampled in a Rhodic Acrudox soil, along a spatial transect with 84 points spaced 3 m apart. At each sampling point, soybean samples were collected for yield quantification. At the same site, soil penetration resistance was also measured, and soil samples were collected to measure soil bulk density in the 0-0.10 m and 0.10-0.20 m layers. Results showed autocorrelation and a cross-correlation structure between soybean yield and soil penetration resistance data. Soil bulk density data, however, were only autocorrelated in the 0-0.10 m layer and were not cross-correlated with soybean yield. By Akaike's Information Criterion, the autoregressive state-space models were more efficient than the equivalent simple and multiple linear regression models: the resulting AIC values were lower than those obtained by the regression models for all combinations of explanatory variables.
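
A minimal sketch of the comparison described here: the same transect variable is modelled with an ordinary regression and with a first-order autoregressive model carrying the soil covariate, and the fits are ranked by AIC. The data are simulated stand-ins for the 84-point transect; statsmodels is assumed, and the AIC comparison is purely illustrative.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(2)
n = 84                                    # 84 transect points, 3 m apart
resistance = np.cumsum(rng.normal(size=n)) * 0.1 + 2.0   # spatially correlated
yield_ = 3.5 - 0.4 * resistance + rng.normal(scale=0.2, size=n)

ols = sm.OLS(yield_, sm.add_constant(resistance)).fit()
ar = AutoReg(yield_, lags=1, exog=resistance.reshape(-1, 1)).fit()

print(f"OLS AIC = {ols.aic:.1f}, AR(1)+covariate AIC = {ar.aic:.1f}")
# The model carrying the spatial autocorrelation typically attains lower AIC.
```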

Relevance: 80.00%

Abstract:

The evolution of continuous traits is the central component of comparative analyses in phylogenetics, and the comparison of alternative models of trait evolution has greatly improved our understanding of the mechanisms driving phenotypic differentiation. Several factors influence the comparison of models, and we explore the effects of random errors in trait measurement on the accuracy of model selection. We simulate trait data under a Brownian motion model (BM) and introduce different magnitudes of random measurement error. We then evaluate the resulting statistical support for this model against two alternative models: Ornstein-Uhlenbeck (OU) and accelerating/decelerating rates (ACDC). Our analyses show that even small measurement errors (10%) consistently bias model selection towards erroneous rejection of BM in favour of more parameter-rich models (most frequently the OU model). Fortunately, methods that explicitly incorporate measurement errors in phylogenetic analyses considerably improve the accuracy of model selection. Our results call for caution in interpreting the results of model selection in comparative analyses, especially when complex models garner only modest additional support. Importantly, as measurement errors occur in most trait data sets, we suggest that estimation of measurement errors should always be performed during comparative analysis to reduce chances of misidentification of evolutionary processes.
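
A non-phylogenetic analogue of this bias can be illustrated with a one-dimensional random walk (the time-series counterpart of BM) and an AR(1) process (the counterpart of OU): adding measurement error to a simulated walk tends to shift AIC support towards the mean-reverting model. This is a simplified stand-in, not the phylogenetic simulation design of the study.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
bm = np.cumsum(rng.normal(size=300))                      # Brownian-motion path
noisy = bm + rng.normal(scale=0.1 * bm.std(), size=300)   # 10% measurement error

for label, y in [("clean", bm), ("10% error", noisy)]:
    aic_bm = ARIMA(y, order=(0, 1, 0)).fit().aic   # random walk = BM analogue
    aic_ou = ARIMA(y, order=(1, 0, 0)).fit().aic   # AR(1) = OU analogue
    print(f"{label}: BM AIC = {aic_bm:.1f}, OU AIC = {aic_ou:.1f}")
# Measurement error typically shifts AIC support towards the AR(1) fit.
```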

Relevance: 80.00%

Abstract:

With the advancement of high-throughput sequencing and the dramatic increase of available genetic data, statistical modeling has become an essential part of the field of molecular evolution. Statistical modeling has led to many interesting discoveries in the field, from the detection of highly conserved or diverse regions in a genome to the phylogenetic inference of species' evolutionary history. Among the different types of genome sequences, protein-coding regions are particularly interesting due to their impact on proteins. The building blocks of proteins, i.e. amino acids, are coded by triplets of nucleotides, known as codons. Accordingly, studying the evolution of codons leads to a fundamental understanding of how proteins function and evolve. The current codon models can be classified into three principal groups: mechanistic codon models, empirical codon models and hybrid ones. The mechanistic models attract particular attention due to the clarity of their underlying biological assumptions and parameters. However, they suffer from simplifying assumptions that are required to overcome the burden of computational complexity. The main assumptions applied to the current mechanistic codon models are: (a) double and triple substitutions of nucleotides within codons are negligible, (b) there is no mutation variation among the nucleotides of a single codon, and (c) the HKY nucleotide model is sufficient to capture the essence of transition/transversion rates at the nucleotide level. In this thesis, I pursue two main objectives. The first is to develop a framework of mechanistic codon models, named the KCM-based model family framework, based on holding or relaxing the assumptions mentioned above. Accordingly, eight different models are proposed, corresponding to the eight combinations of holding or relaxing the assumptions, from the simplest one that holds all the assumptions to the most general one that relaxes all of them. The models derived from the proposed framework allow me to investigate the biological plausibility of the three simplifying assumptions on real data sets, as well as to find the best model aligned with the underlying characteristics of each data set. Our experiments show that holding all three assumptions is not realistic in any of the real data sets; using simple models that hold them can therefore be misleading and result in inaccurate parameter estimates. The second objective is to develop a generalized mechanistic codon model that relaxes all three simplifying assumptions while remaining computationally efficient, using a matrix operation called the Kronecker product. Our experiments show that, on randomly selected data sets, the proposed generalized mechanistic codon model outperforms the other codon models with respect to the AICc metric in about half of the data sets. Furthermore, I show through several experiments that the proposed general model is biologically plausible.
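
The Kronecker construction mentioned in the second objective can be illustrated by assembling a 64 × 64 codon generator from three per-position nucleotide matrices. The sketch below uses the Kronecker sum, which encodes exactly the single-substitution assumption (a); relaxing it would add Kronecker-product cross terms. This is a schematic reading of the approach, not the thesis' actual parameterisation.

```python
import numpy as np

def hky(kappa=2.0, pi=(0.25, 0.25, 0.25, 0.25)):
    # HKY85 generator over (T, C, A, G): transitions scaled by kappa.
    pi = np.asarray(pi)
    q = np.tile(pi, (4, 4))
    for i, j in [(0, 1), (1, 0), (2, 3), (3, 2)]:   # T<->C, A<->G transitions
        q[i, j] *= kappa
    np.fill_diagonal(q, 0)
    np.fill_diagonal(q, -q.sum(axis=1))
    return q

I = np.eye(4)
q1, q2, q3 = hky(), hky(), hky()   # one nucleotide matrix per codon position

# Kronecker sum: codon events change exactly one position at a time,
# i.e. the classic "no double/triple substitutions" assumption (a) above.
Q_codon = (np.kron(np.kron(q1, I), I)
           + np.kron(np.kron(I, q2), I)
           + np.kron(np.kron(I, I), q3))
assert Q_codon.shape == (64, 64)
assert np.allclose(Q_codon.sum(axis=1), 0)   # valid generator: rows sum to 0
```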

Relevance: 80.00%

Abstract:

Pedotransfer functions (PTF) were developed to estimate the parameters (α, n, θr and θs) of the van Genuchten (1980) model describing soil water retention curves. The data came from various sources, mainly from studies conducted by universities in Northeast Brazil, by the Brazilian Agricultural Research Corporation (Embrapa) and by a corporation for the development of the São Francisco and Parnaíba river basins (Codevasf), totaling 786 retention curves, which were divided into two data sets: 85 % for the development of the PTFs, and 15 % for testing and validation, considered independent data. Aside from the development of general PTFs for all soils together, specific PTFs were developed for the soil classes Ultisols, Oxisols, Entisols, and Alfisols by multiple regression techniques, using a stepwise procedure (forward and backward) to select the best predictors. Two types of PTFs were developed: the first included all predictors (soil density and the proportions of sand, silt, clay, and organic matter), and the second only the proportions of sand, silt and clay. The evaluation of the adequacy of the PTFs was based on the correlation coefficient (R) and the Willmott index (d). To evaluate the PTFs for the moisture content at specific pressure heads, we used the root mean square error (RMSE). The PTF-predicted retention curves were relatively poor, except for the residual water content. The inclusion of organic matter as a PTF predictor improved the prediction of the van Genuchten parameter α. The performance of the soil-class-specific PTFs was not better than that of the general PTFs. Except for the water content of saturated soil estimated by particle size distribution, the tested models for water content prediction at specific pressure heads proved satisfactory. Predictions of water content at pressure heads more negative than -0.6 m, using a PTF considering particle size distribution, are only slightly inferior to those obtained by PTFs including bulk density and organic matter content.
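
For reference, this is the van Genuchten (1980) curve whose parameters the PTFs predict, with a least-squares fit to a toy retention data set (the suction/water-content pairs below are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, theta_r, theta_s, alpha, n):
    # Water content as a function of pressure head h (positive suction).
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

h = np.array([0.1, 0.3, 0.6, 1.0, 3.3, 10.0, 150.0])          # m of suction
theta = np.array([0.45, 0.42, 0.38, 0.33, 0.25, 0.18, 0.10])  # toy curve

popt, _ = curve_fit(van_genuchten, h, theta,
                    p0=[0.05, 0.45, 1.0, 1.5],
                    bounds=([0, 0.2, 1e-3, 1.01], [0.2, 0.6, 20, 5]))
theta_r, theta_s, alpha, n = popt
print(f"theta_r={theta_r:.3f} theta_s={theta_s:.3f} alpha={alpha:.2f} n={n:.2f}")
```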

Relevance: 80.00%

Abstract:

General Summary: Although the chapters of this thesis address a variety of issues, the principal aim is common: to test economic ideas in an international economic context. The intention has been to supply empirical findings using the largest suitable data sets and the most appropriate empirical techniques. This thesis can roughly be divided into two parts: the first one, corresponding to the first two chapters, investigates the link between trade and the environment; the second one, the last three chapters, is related to economic geography issues. Environmental problems are omnipresent in the daily press nowadays, and one of the arguments put forward is that globalisation causes severe environmental problems through the reallocation of investments and production to countries with less stringent environmental regulations. A measure of the amplitude of this undesirable effect is provided in the first part. The third and the fourth chapters explore the productivity effects of agglomeration. The computed spillover effects between different sectors indicate how cluster formation might be productivity-enhancing. The last chapter is not about how to better understand the world but about how to measure it, and it was just a great pleasure to work on it. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much and how fast did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows us to visualize the results in Google Earth. A short summary of each of the five chapters is provided below.

The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution haven effect (PH: comparative advantage in dirty products due to differences in environmental regulation) and the factor endowment effect (FE: comparative advantage in dirty, capital-intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients (for three pollutants, namely biological oxygen demand, sulphur dioxide and toxic pollution intensity, for all manufacturing sectors) provided by the World Bank and use a gravity-type framework to isolate the two above-mentioned effects. Our study covers 48 countries that can be classified into 29 Southern and 19 Northern countries and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects going in the expected, opposite directions and being of similar magnitude. However, when looking at world trade, the effects become very small because of the high North-North trade share, for which we have no a priori expectations about the signs of these effects. Popular fears about the trade effects of differences in environmental regulations might therefore be exaggerated.

The second chapter is entitled "Is Trade Bad for the Environment? Decomposing Worldwide SO2 Emissions, 1990-2000". First, we construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit of labor that vary across countries, periods and manufacturing sectors. Then we use these original data (covering 31 developed and 31 developing countries) to decompose the worldwide SO2 emissions into the three well-known dynamic effects (scale, technique and composition effects). We find that the positive scale (+9.5%) and the negative technique (-12.5%) effects are the main driving forces of emission changes. Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. We next construct, in a first experiment, a hypothetical world where no trade happens, i.e. each country produces its imports at home and no longer produces its exports. The difference between the actual world and this no-trade world allows us (omitting price effects) to compute a static first-order trade effect. The latter now increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, this effect is smaller in 2000 (3.5%) than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing effective emissions with the maximum or minimum possible levels of SO2 emissions. These hypothetical levels of emissions are obtained by reallocating labour across sectors within each country (under country-employment and world industry-production constraints). Using linear programming techniques, we show that emissions are 90% lower than in the worst case, but that they could still be reduced by another 80% if emissions were minimized. The findings from this chapter go together with those from chapter one in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past.

Turning to the economic geography part of this thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", consists of a short note that derives the theoretical model estimated in the fourth chapter. The derivation is directly based on the multi-regional framework by Ciccone (2002) but extends it to include sectoral disaggregation and a temporal dimension. This allows us to write present productivity formally as a function of past productivity and other contemporaneous and past control variables.

The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity based on regional data (245 NUTS-2 regions over the period 1980-2003). Using dynamic panel techniques allows us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. When using data at the sectoral level, it seems that positive cross-sector and negative own-sector externalities are present in manufacturing, while financial services display strong positive own-sector effects.

The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world's economic, demographic and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia. To sum up, this thesis makes three main contributions. First, it provides new estimates of the orders of magnitude of the role of trade in the globalisation-and-environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it tracks, in a geometrically rigorous way, the path of the world's economic center of gravity.
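
The center-of-gravity computation in the fifth chapter weights city coordinates by economic mass, averages them in three-dimensional Cartesian space, and projects the mean back onto the sphere. A minimal sketch with three toy cities and made-up weights:

```python
import numpy as np

def center_of_gravity(lat, lon, weight):
    lat, lon = np.radians(lat), np.radians(lon)
    # Convert to unit-sphere Cartesian coordinates.
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)
    w = np.asarray(weight, dtype=float)
    cx, cy, cz = (w @ x) / w.sum(), (w @ y) / w.sum(), (w @ z) / w.sum()
    # Project the (interior) mean point back onto the Earth's surface.
    clat = np.degrees(np.arctan2(cz, np.hypot(cx, cy)))
    clon = np.degrees(np.arctan2(cy, cx))
    return clat, clon

# Toy example: New York, London, Shanghai weighted by made-up output shares.
print(center_of_gravity([40.7, 51.5, 31.2], [-74.0, -0.1, 121.5],
                        [10.0, 8.0, 9.0]))
```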

Relevance: 80.00%

Abstract:

Taking into account the nature of the hydrological processes involved in the in situ measurement of field capacity (FC), this study proposes a variation of the definition of FC aiming not only at minimizing the inadequacies of its determination, but also at maintaining its original, practical meaning. Analysis of FC data for 22 Brazilian soils and additional FC data from the literature, all measured according to the proposed definition, which is based on a 48-h drainage time after infiltration by shallow ponding, indicates a weak dependency on the amount of infiltrated water, antecedent moisture level, soil morphology, and the level of the groundwater table, but a strong dependency on basic soil properties. This dependence on basic soil properties allowed determination of the FC of the 22 soil profiles by pedotransfer functions (PTFs) using the input variables usually adopted in the prediction of soil water retention. Among the input variables, the soil moisture content θ(6 kPa) had the greatest impact. Indeed, a linear PTF based on it alone resulted in an FC with a root mean squared residual of less than 0.04 m³ m⁻³ for most soils individually. Such a PTF proved to be a better FC predictor than the traditional method of using the moisture content at an arbitrary suction. Our FC data were compatible with an equivalent and broader USA database found in the literature, mainly for medium-texture soil samples. One reason for the differences between the FCs of fine-textured soils in the two data sets is their different drainage times. Thus, a standardized procedure for the in situ determination of FC is recommended.
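
A minimal sketch of the single-predictor PTF suggested above, regressing FC on θ(6 kPa) and reporting the root mean squared residual; the paired values are toy data:

```python
import numpy as np

theta6 = np.array([0.12, 0.18, 0.24, 0.30, 0.36, 0.42])  # m3/m3 at 6 kPa
fc     = np.array([0.11, 0.16, 0.22, 0.27, 0.33, 0.38])  # measured 48-h FC

slope, intercept = np.polyfit(theta6, fc, 1)
pred = intercept + slope * theta6
rmse = np.sqrt(np.mean((fc - pred) ** 2))
print(f"FC ~ {intercept:.3f} + {slope:.3f} * theta(6 kPa), RMSE = {rmse:.4f}")
# The RMSE here is well under the 0.04 m3/m3 threshold quoted for most soils.
```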