23 results for machine and metal product industry
at Université de Lausanne, Switzerland
Abstract:
Duchenne muscular dystrophy (DMD) is an X-linked genetic disease, caused by the absence of the dystrophin protein. Although many novel therapies are under development for DMD, there is currently no cure and affected individuals are often confined to a wheelchair by their teens and die in their twenties/thirties. DMD is a rare disease (prevalence <5/10,000). Even the largest countries do not have enough affected patients to rigorously assess novel therapies, unravel genetic complexities, and determine patient outcomes. TREAT-NMD is a worldwide network for neuromuscular diseases that provides an infrastructure to support the delivery of promising new therapies for patients. The harmonized implementation of national and ultimately global patient registries has been central to the success of TREAT-NMD. For the DMD registries within TREAT-NMD, individual countries have chosen to collect patient information in the form of standardized patient registries to increase the overall patient population on which clinical outcomes and new technologies can be assessed. The registries comprise more than 13,500 patients from 31 different countries. Here, we describe how the TREAT-NMD national patient registries for DMD were established. We look at their continued growth and assess how successful they have been at fostering collaboration between academia, patient organizations, and industry.
Abstract:
Purpose: The aim of this review was to systematically evaluate and compare the frequency of veneer chipping and core fracture of zirconia fixed dental prostheses (FDPs) and porcelain-fused-to-metal (PFM) FDPs and to determine possible influencing factors. Materials and Methods: The SCOPUS database and International Association for Dental Research abstracts were searched for clinical studies involving zirconia and PFM FDPs. Furthermore, studies that were integrated into systematic reviews on PFM FDPs were also evaluated. The principal investigators of any clinical studies on zirconia FDPs were contacted to provide additional information. Based on the available information for each FDP, a data file was constructed. Veneer chipping was divided into three grades (grade 1 = polishing, grade 2 = repair, grade 3 = replacement). To assess the frequency of veneer chipping and possible influencing factors, a piecewise exponential model was used to adjust for a study effect. Results: None of the studies on PFM FDPs (reviews and additional searching) sufficiently satisfied the criteria of this review to be included. Thirteen clinical studies on zirconia FDPs and two studies that investigated both zirconia and PFM FDPs were identified. These studies involved 664 zirconia and 134 PFM FDPs at baseline. Follow-up data were available for 595 zirconia and 127 PFM FDPs. The mean observation period was approximately 3 years for both groups. The frequency of core fracture was less than 1% in the zirconia group and 0% in the PFM group. When all studies were included, 142 veneer chippings were recorded for zirconia FDPs (24%) and 43 for PFM FDPs (34%). However, the studies differed extensively with regard to veneer chipping of zirconia: 85% of all chippings occurred in 4 studies, which together included 43% of all zirconia FDPs. If only studies that evaluated both types of core materials were included, the frequency of chipping was 54% for the zirconia-supported FDPs and 34% for the PFM FDPs. When adjusting the survival rate for the study effect, the difference between zirconia and PFM FDPs was statistically significant for all grades of chipping (P = .001), as well as for grade 3 chipping (P = .02). If all grades of veneer chipping were taken into account, the survival rate of PFM FDPs was 97%, while the survival rate of the zirconia FDPs was 90% after 3 years for a typical study. For both PFM and zirconia FDPs, the frequency of grade 1 and 2 veneer chippings was considerably higher than that of grade 3. Veneer chipping was significantly less frequent in pressed materials than in hand-layered materials, both for zirconia and PFM FDPs (P = .04). Conclusions: Since the frequency of veneer chipping was significantly higher in zirconia FDPs than in PFM FDPs, and as refined processing procedures have started to yield better results in the laboratory, new clinical studies with these new procedures must confirm whether the frequency of veneer chipping can be reduced to the level of PFM FDPs. Int J Prosthodont 2010;23:493-502
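For readers unfamiliar with the piecewise exponential model mentioned above, the sketch below shows one common way such a model can be fit: as a Poisson regression on event counts with a log-exposure offset, with a study indicator standing in for the "study effect". The data frame, column names and numbers are invented for illustration; this is not the review's actual analysis code.

```python
# Hedged sketch: a piecewise exponential (Poisson) model for chipping events,
# assuming a toy data frame with hypothetical columns; not the authors' code.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical example data: one row per study x core material x follow-up interval.
df = pd.DataFrame({
    "events":    [3, 1, 5, 0, 2, 4],                      # veneer chippings in the interval
    "fdp_years": [40.0, 35.0, 60.0, 20.0, 50.0, 45.0],     # FDP-years at risk (exposure)
    "material":  ["zirconia", "pfm", "zirconia", "pfm", "zirconia", "pfm"],
    "study":     ["A", "A", "B", "B", "C", "C"],
})

# A piecewise exponential hazard corresponds to a Poisson rate for event counts
# with a log-exposure offset; the study indicator adjusts for a study effect.
model = smf.glm(
    "events ~ C(material) + C(study)",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["fdp_years"]),
)
result = model.fit()
print(result.summary())
# exp(coefficient of the material term) approximates the chipping-rate ratio
# of zirconia versus PFM (PFM is the reference level here).
```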
Abstract:
Financial markets play an important role in an economy, performing various functions such as mobilizing and pooling savings, producing information about investment opportunities, screening and monitoring investments, implementing corporate governance, and diversifying and managing risk. These functions influence saving rates, investment decisions and technological innovation and, therefore, have important implications for welfare. In my PhD dissertation I examine the interplay of financial and product markets by looking at different channels through which financial markets may influence an economy. My dissertation consists of four chapters. The first chapter is a co-authored work with Martin Strieborny, a PhD student from the University of Lausanne. The second chapter is a co-authored work with Melise Jaud, a PhD student from the Paris School of Economics. The third chapter is co-authored with both Melise Jaud and Martin Strieborny. The last chapter of my PhD dissertation is a single-author paper.

Chapter 1 of my PhD thesis analyzes the effect of financial development on the growth of contract-intensive industries. These industries intensively use intermediate inputs that can neither be sold on an organized exchange nor are reference-priced (Levchenko, 2007; Nunn, 2007). A typical example of a contract-intensive industry would be one where an upstream supplier has to make investments in order to customize a product for the needs of a downstream buyer. After the investment is made and the product is adjusted, the buyer may refuse to meet a commitment and trigger ex post renegotiation. Since the product is customized to the buyer's needs, the supplier cannot sell the product to a different buyer at the original price. This is referred to in the literature as the hold-up problem. As a consequence, individually rational suppliers will underinvest in relationship-specific assets, hurting the downstream firms, with negative consequences for aggregate growth. The standard way to mitigate the hold-up problem is to write a binding contract and to rely on legal enforcement by the state. However, even the most effective contract enforcement might fail to protect the supplier in tough times when the buyer lacks a reliable source of external financing. This suggests a potential role for financial intermediaries, banks in particular, in mitigating the incomplete-contract problem. First, financial products like letters of credit and letters of guarantee can substantially decrease the risk and transaction costs of the parties. Second, a bank loan can serve as a signal of a buyer's true financial situation: an upstream firm will be more willing to undertake relationship-specific investment knowing that its business partner is creditworthy and will abstain from myopic behavior (Fama, 1985; von Thadden, 1995). Therefore, a well-developed financial (especially banking) system should disproportionately benefit contract-intensive industries. The empirical test confirms this hypothesis. Indeed, contract-intensive industries seem to grow faster in countries with a well-developed financial system. Furthermore, this effect comes from a more developed banking sector rather than from a deeper stock market. These results are reaffirmed by examining the effect of US bank deregulation on the growth of contract-intensive industries in different states. Beyond an overall pro-growth effect, bank deregulation seems to disproportionately benefit the industries requiring relationship-specific investments from their suppliers.

Chapter 2 of my PhD focuses on the role of the financial sector in promoting exports of developing countries. In particular, it investigates how credit constraints affect the ability of firms operating in the agri-food sectors of developing countries to keep exporting to foreign markets. Trade in high-value agri-food products from developing countries has expanded enormously over the last two decades, offering opportunities for development. However, trade in agri-food is governed by a growing array of standards. Sanitary and phytosanitary (SPS) standards and technical regulations impose additional sunk, fixed and operating costs along firms' export life. Such costs may be detrimental to firms' survival, "pricing out" producers that cannot comply. The existence of these costs suggests a potential role of credit constraints in shaping the duration of trade relationships on foreign markets. A well-developed financial system provides exporters with the funds necessary to adjust production processes in order to meet quality and quantity requirements in foreign markets and to maintain long-standing trade relationships. Products with higher needs for financing should benefit the most from a well-functioning financial system. This differential effect calls for the difference-in-differences approach initially proposed by Rajan and Zingales (1998). As a proxy for the demand for financing of agri-food products, the sanitary risk index developed by Jaud et al. (2009) is used. The empirical literature on standards and norms shows high costs of compliance, both variable and fixed, for high-value food products (Garcia-Martinez and Poole, 2004; Maskus et al., 2005). The sanitary risk index reflects the propensity of products to fail health and safety controls on the European Union (EU) market. Given the high costs of compliance, the sanitary risk index captures the demand for external financing to comply with such regulations. The prediction is empirically tested by examining the export survival of different agri-food products from firms operating in Ghana, Mali, Malawi, Senegal and Tanzania. The results suggest that agri-food products that require more financing to keep up with the food safety regulations of the destination market indeed survive longer in foreign markets when they are exported from countries with better-developed financial markets.

Chapter 3 analyzes the link between financial markets and the efficiency of resource allocation in an economy. Producing and exporting products inconsistent with a country's factor endowments constitutes a serious misallocation of funds, which undermines the competitiveness of the economy and inhibits its long-term growth. In this chapter, inefficient exporting patterns are analyzed through the lens of agency theories from the corporate finance literature. Managers may pursue projects with negative net present values because their perquisites or even their jobs might depend on them. Exporting activities are particularly prone to this problem. Business related to foreign markets involves both high levels of additional spending and strong incentives for managers to overinvest. Rational managers might have incentives to push for exports that use the country's scarce factors, which is suboptimal from a social point of view. Export subsidies might further skew the incentives towards inefficient exporting. Management can divert export subsidies into investments promoting inefficient exporting. The corporate finance literature stresses the disciplining role of outside debt in counteracting the internal pressures to divert such "free cash flow" into unprofitable investments. Managers can lose both their reputation and control of "their" firm if unpaid external debt triggers a bankruptcy procedure. The threat of possible failure to satisfy debt service payments pushes managers toward an efficient use of available resources (Jensen, 1986; Stulz, 1990; Hart and Moore, 1995). In most countries, the main source of debt financing is banks. The disciplining role of banks might be especially important in countries suffering from insufficient judicial quality. Banks, in pursuing their rights, rely on comparatively simple legal interventions that can be implemented even by mediocre courts. In addition to their disciplining role, banks can promote efficient exporting patterns in a more direct way by relaxing producers' credit constraints through screening, identifying and investing in the most profitable investment projects. Therefore, a well-developed domestic financial system, and the banking system in particular, would help to push a country's exports towards products congruent with its comparative advantage. This prediction is tested by looking at the survival of different product categories exported to the US market. Products are identified according to the Euclidean distance between their revealed factor intensity and the country's factor endowments. The results suggest that products suffering from a comparative disadvantage (labour-intensive products from capital-abundant countries) survive for a shorter time on the competitive US market. This pattern is stronger if the exporting country has a well-developed banking system. Thus, a strong banking sector promotes exports consistent with a country's comparative advantage.

Chapter 4 of my PhD thesis further examines the role of financial markets in fostering efficient resource allocation in an economy. In particular, the allocative efficiency hypothesis is investigated in the context of equity market liberalization. Many empirical studies document a positive and significant effect of financial liberalization on growth (Levchenko et al., 2009; Quinn and Toyoda, 2009; Bekaert et al., 2005). However, the decrease in the cost of capital and the associated growth in investment appear rather modest in comparison to the large GDP growth effect (Bekaert and Harvey, 2005; Henry, 2000, 2003). Therefore, financial liberalization may have a positive impact on growth through its effect on the allocation of funds across firms and sectors. Free access to international capital markets allows the largest and most profitable domestic firms to borrow funds in foreign markets (Rajan and Zingales, 2003). As domestic banks lose some of their best clients, they reoptimize their lending practices, seeking new clients among smaller and younger industrial firms. These firms are likely to be riskier than large and established companies. Screening of customers becomes prevalent as the return to screening rises. Banks, ceteris paribus, tend to focus on firms operating in comparative-advantage sectors because they are better risks. Firms in comparative-disadvantage sectors, finding it harder to finance their entry into or survival in export markets, either exit or refrain from entering export markets. On aggregate, one should therefore expect to see less entry, more exit, and shorter survival on export markets in those sectors after financial liberalization. The paper investigates the effect of financial liberalization on a country's export pattern by comparing the dynamics of entry and exit of different products in a country's export portfolio before and after financial liberalization. The results suggest that products that lie far from the country's comparative advantage set tend to disappear relatively faster from the country's export portfolio following the liberalization of financial markets. In other words, financial liberalization tends to rebalance the composition of a country's export portfolio towards products that intensively use the economy's abundant factors.
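As a purely illustrative aside on the Chapter 3 methodology, the sketch below computes the Euclidean distance between a product's revealed factor intensity and a country's factor endowments; the factor coordinates, product names and country vector are hypothetical, chosen only to show the mechanics of the measure.

```python
# Hedged sketch of the Chapter 3 distance measure: the Euclidean distance between a
# product's revealed factor intensity and a country's factor endowments, using made-up
# two-dimensional (capital, skill) coordinates purely for illustration.
import numpy as np

def factor_distance(product_intensity, country_endowment):
    """Euclidean distance in factor space; larger = further from comparative advantage."""
    return float(np.linalg.norm(np.asarray(product_intensity) - np.asarray(country_endowment)))

# Hypothetical revealed factor intensities (capital per worker, skill share), standardized.
products = {"apparel": (0.2, 0.3), "machinery": (0.8, 0.7)}
# Hypothetical endowment vector of a capital-abundant country.
country = (0.9, 0.8)

for name, intensity in products.items():
    d = factor_distance(intensity, country)
    print(f"{name}: distance {d:.2f}")
# Products with a larger distance (here: apparel) are the comparative-disadvantage
# exports that the thesis finds survive less on the US market, a pattern that is
# stronger when the exporter has a well-developed banking system.
```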
Abstract:
The motivation for this research originated from the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and business applications due to their significantly lower cost than their predecessors, the mainframes. Later, industrial automation developed its own vertically integrated hardware and software to address the application needs of uninterrupted operation, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation, as used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200'000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries of information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into a single information and communication technology (ICT) industry.

The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC started to evolve in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption system-on-chip (SoC) architecture. Unlike with CISC processors, RISC processor architecture is a separate industry from the RISC chip manufacturing industry. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which gives customers more choice thanks to hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers and is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open-source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed - all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries. Industrial automation incumbents continue to supply vertically integrated systems consisting of proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability levels on a very narrow customer base due to strong technology-enabled customer lock-in and customers' high risk exposure, as their production is dependent on fault-free operation of the industrial automation systems. When will this balance of power be disrupted?

The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation technology (ICAT) industry. Lately, the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation and the competition between the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and second through research on process re-engineering in the case of complex-system global software support. Third, we investigate the views of the industry actors (customers, incumbents and newcomers) on the future direction of industrial automation and conclude with our assessment of the possible routes along which industrial automation could advance, taking into account the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focused on maintaining their own proprietary solutions. The rise of de facto standards like the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created the new markets of personal computers, smartphones and tablets and will eventually also impact industrial automation through game-changing commoditization and related control point and business model changes. This trend will inevitably continue, but the transition to commoditized industrial automation will not happen in the near future.
Abstract:
OBJECTIVES: Given recent progress, musculoskeletal ultrasound (US) will probably soon be integrated into the standard care of patients with rheumatoid arthritis (RA). However, in daily care, the quality of US machines and the level of experience of sonographers vary. We conducted a study to assess the reproducibility and feasibility of a US score for RA, including US devices of different quality and rheumatologists with various levels of expertise in US, as would be the case in daily care. METHODS: The Swiss Sonography in Arthritis and Rheumatism (SONAR) group has developed a semi-quantitative score using OMERACT criteria for synovitis and erosion in RA. The score was taught to 108 rheumatologists trained in US. One year after the last workshop, 19 rheumatologists participated in the study. Scans were performed on 6 US machines ranging from low to high quality, each with a different patient. Weighted kappa was calculated for each pair of readers. RESULTS: Overall, agreement was fair to moderate. Quality of the device, experience of the sonographers and practice of the score before the study substantially improved agreement. Agreement assessed on the higher-quality machines among sonographers with good experience in US increased to substantial (median kappa for B-mode and Doppler: 0.64; for erosion: 0.41). CONCLUSIONS: This study demonstrated the feasibility and reproducibility of the Swiss US SONAR score for RA. Our results confirm the importance of the quality of the US machine and of the training of sonographers for the implementation of US scoring in the routine daily care of RA.
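As an illustration of the pairwise weighted kappa analysis described in the Methods, the following sketch computes linearly weighted Cohen's kappa for every pair of readers; the reader names and scores are invented and stand in for the actual SONAR joint scores.

```python
# Hedged sketch: pairwise weighted Cohen's kappa between readers, in the spirit of the
# SONAR reproducibility analysis; scores and reader names here are purely illustrative.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Hypothetical semi-quantitative scores (e.g. 0-3 per joint) from three readers
# scoring the same set of joints.
scores = {
    "reader_1": [0, 1, 2, 3, 1, 0, 2],
    "reader_2": [0, 1, 1, 3, 2, 0, 2],
    "reader_3": [1, 1, 2, 2, 1, 0, 3],
}

# Linear weights penalize disagreements by their distance on the ordinal scale.
for a, b in combinations(scores, 2):
    kappa = cohen_kappa_score(scores[a], scores[b], weights="linear")
    print(f"{a} vs {b}: weighted kappa = {kappa:.2f}")
```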
Abstract:
BACKGROUND: There are limited data on the composition and smoke emissions of 'herbal' shisha products and the air quality of establishments where they are smoked. METHODS: Three studies of 'herbal' shisha were conducted: (1) samples of 'herbal' shisha products were chemically analysed; (2) 'herbal' and tobacco shisha were burned in a waterpipe smoking machine and the mainstream and sidestream smoke were analysed by standard methods and (3) the air quality of six waterpipe cafes was assessed by measurement of CO, particulate and nicotine vapour content. RESULTS: We found considerable variation in heavy metal content between the three products sampled, one being particularly high in lead, chromium, nickel and arsenic. A similar pattern emerged for polycyclic aromatic hydrocarbons (PAHs). Smoke emission analyses indicated that the toxic byproducts produced by the combustion of 'herbal' shisha were equivalent to or greater than those produced by tobacco shisha. The results of our air quality assessment demonstrated that mean PM2.5 levels and CO content were significantly higher in waterpipe establishments than in a casino where cigarette smoking was permitted. Nicotine vapour was detected in one of the waterpipe cafes. CONCLUSIONS: The 'herbal' shisha products tested contained toxic trace metals and PAH levels equivalent to, or in excess of, those found in cigarettes. Their mainstream and sidestream smoke emissions contained carcinogens equivalent to, or in excess of, those of tobacco products. The content of the air in the waterpipe cafes tested was potentially hazardous. These data, in aggregate, suggest that smoking 'herbal' shisha may well be dangerous to health.
Abstract:
PRINCIPLES: The literature has described opinion leaders not only as marketing tools of the pharmaceutical industry, but also as educators promoting good clinical practice. This qualitative study addresses the distinction between the opinion-leader-as-marketing-tool and the opinion-leader-as-educator, as it is revealed in the discourses of physicians and experts, focusing on the prescription of antidepressants. We explore the relational dynamic between physicians, opinion leaders and the pharmaceutical industry in an area of French-speaking Switzerland. METHODS: Qualitative content analysis of 24 semistructured interviews with physicians and local experts in psychopharmacology, complemented by direct observation of educational events led by the experts, which were all sponsored by various pharmaceutical companies. RESULTS: Both physicians and experts were critical of the pharmaceutical industry and its use of opinion leaders. Local experts, in contrast, were perceived by the physicians as critical of the industry and, therefore, as a legitimate source of information. Local experts did not consider themselves opinion leaders and argued that they remained intellectually independent from the industry. Field observations confirmed that local experts criticised the industry at continuing medical education events. CONCLUSIONS: Local experts were vocal critics of the industry, which nevertheless sponsors their continuing education. This critical attitude enhanced their credibility in the eyes of the prescribing physicians. We discuss how the experts, despite their critical attitude, might still be beneficial to the industry's interests.
Abstract:
PURPOSE: Postmortem computed tomography angiography (PMCTA) was introduced into forensic investigations a few years ago. It provides reliable images that can be consulted at any time. Conventional autopsy remains the reference standard for defining the cause of death, but provides only a limited possibility of a second examination. This study compares these two procedures and discusses findings that can be detected exclusively using each method. MATERIALS AND METHODS: This retrospective study compared radiological reports from PMCTA to reports from conventional autopsy for 50 forensic autopsy cases. Reported findings from autopsy and PMCTA were extracted and compared to each other. PMCTA was performed using a modified heart-lung machine and the oily contrast agent Angiofil® (Fumedica AG, Muri, Switzerland). RESULTS: PMCTA and conventional autopsy would have drawn similar conclusions regarding causes of death. Nearly 60% of all findings were visualized with both techniques. PMCTA demonstrates a higher sensitivity for identifying skeletal and vascular lesions. However, vascular occlusions due to postmortem blood clots could be falsely assumed to be vascular lesions. In contrast, conventional autopsy does not detect all bone fractures or the exact source of bleeding. Conventional autopsy provides important information about organ morphology and remains the only way to diagnose a vital vascular occlusion with certainty. CONCLUSION: Overall, PMCTA and conventional autopsy provide comparable findings. However, each technique presents advantages and disadvantages for detecting specific findings. To correctly interpret findings and clearly define the indications for PMCTA, these differences must be understood.
Abstract:
Building a personalized model to describe the drug concentration inside the human body for each patient is highly important to clinical practice and demanding for modeling tools. Instead of using traditional explicit methods, in this paper we propose a machine learning approach to describe the relation between the drug concentration and patients' features. Machine learning has been widely applied to analyze data in various domains, but it is still new to personalized medicine, especially dose individualization. We focus mainly on the prediction of drug concentrations as well as on the analysis of the influence of different features. Models are built based on Support Vector Machines, and the prediction results are compared with those of traditional analytical models.
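To make the modelling idea concrete, the sketch below trains a support vector regression model mapping patient features to a drug concentration. It is only a minimal illustration of the general approach described in the abstract, not the authors' pipeline: the feature names, hyperparameters and synthetic data are assumptions.

```python
# Hedged sketch: support vector regression from patient features to drug concentration,
# on synthetic data with invented feature names; not the paper's actual pipeline.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic patient features: [dose_mg, weight_kg, age_yr, hours_since_dose]
X = rng.uniform(low=[100, 40, 18, 1], high=[600, 120, 90, 24], size=(200, 4))
# Synthetic "concentration" with noise, standing in for measured plasma levels.
y = 0.02 * X[:, 0] - 0.02 * X[:, 1] + 0.005 * X[:, 2] - 0.03 * X[:, 3] + rng.normal(0, 0.3, 200)

# Standardize features, then fit an epsilon-SVR with an RBF kernel.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")

model.fit(X, y)
print("predicted concentration:", model.predict([[400, 70, 50, 12]]))
```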
Abstract:
BACKGROUND: Many factors affect survival in haemodialysis (HD) patients. Our aim was to study whether the quality of clinical care may affect survival in this population, when adjusted for demographic characteristics and co-morbidities. METHODS: We studied survival in 553 patients treated by chronic HD during March 2001 in 21 dialysis facilities in western Switzerland. Indicators of quality of care were established for anaemia control, calcium-phosphate product, serum albumin, pre-dialysis blood pressure (BP), type of vascular access and dialysis adequacy (spKt/V), and their baseline values were related to 3-year survival. The modified Charlson co-morbidity index (including age) and transplantation status were also considered as predictors of survival. RESULTS: Three-year survival status was obtained for 96% of the patients; 39% (211/541) of these patients had died. The 3-year survival was 50, 62 and 69%, respectively, in patients who had 0-2, 3 and ≥4 fulfilled indicators of quality of care (test for linear trend, P < 0.001). In a Cox multivariate analysis model, the absence of transplantation, a higher modified Charlson score, decreased fulfilment of indicators of good clinical care and low pre-dialysis systolic BP were independent predictors of death. CONCLUSION: Good clinical care improves survival in HD patients, even after adjustment for availability of transplantation and co-morbidities.
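As an illustration of the kind of Cox multivariate analysis described above, the sketch below fits a proportional hazards model with the lifelines library on synthetic data; the column names, effect sizes and censoring scheme are invented and do not reproduce the study's dataset.

```python
# Hedged sketch: Cox proportional hazards model on synthetic dialysis-like data,
# illustrating the multivariate adjustment described in the abstract; not the study's data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "charlson_score":     rng.integers(2, 10, n),   # modified Charlson index (incl. age)
    "quality_indicators": rng.integers(0, 7, n),    # number of fulfilled indicators of care
    "transplanted":       rng.integers(0, 2, n),
})

# Synthetic survival times: higher co-morbidity shortens survival,
# more fulfilled quality indicators lengthen it.
baseline = rng.exponential(60, n)
time = baseline * np.exp(-0.15 * df["charlson_score"] + 0.10 * df["quality_indicators"])
df["time_months"] = np.minimum(time, 36)            # administrative censoring at 3 years
df["died"] = (time <= 36).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="died")
cph.print_summary()   # hazard ratios for each covariate, adjusted for the others
```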
Abstract:
This study aimed to compare two different maximal incremental tests with different time durations [a maximal incremental ramp test with a short time duration (8-12 min) (STest) and a maximal incremental test with a longer time duration (20-25 min) (LTest)] to investigate whether an LTest accurately assesses aerobic fitness in class II and III obese men. Twenty obese men (BMI ≥35 kg·m-2) without secondary pathologies (mean±SE; 36.7±1.9 yr; 41.8±0.7 kg·m-2) completed an STest (warm-up: 40 W; increment: 20 W·min-1) and an LTest [warm-up: 20% of the peak power output (PPO) reached during the STest; increment: 10% PPO every 5 min until 70% PPO was reached or until the respiratory exchange ratio reached 1.0, followed by 15 W·min-1 until exhaustion] on a cycle ergometer to assess the peak oxygen uptake (VO2peak) and peak heart rate (HRpeak) of each test. There were no significant differences in VO2peak (STest: 3.1±0.1 L·min-1; LTest: 3.0±0.1 L·min-1) or HRpeak (STest: 174±4 bpm; LTest: 173±4 bpm) between the two tests. Bland-Altman plot analyses showed good agreement, and Pearson product-moment and intra-class correlation coefficients showed a strong correlation between the two tests for VO2peak (r=0.81 for both; p≤0.001) and HRpeak (r=0.95 for both; p≤0.001). VO2peak and HRpeak assessments were not compromised by test duration in class II and III obese men. Therefore, we suggest that the LTest is a feasible test that accurately assesses aerobic fitness and may allow for the exercise intensity prescription and individualization that will lead to improved therapeutic approaches in treating obesity and severe obesity.
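For readers who want to reproduce this type of agreement analysis, the sketch below computes a Bland-Altman bias with 95% limits of agreement and a Pearson correlation for two repeated measurements; the VO2peak values are invented for illustration.

```python
# Hedged sketch of the agreement analysis described above (Bland-Altman limits of
# agreement plus Pearson correlation) for paired VO2peak measurements; invented numbers.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical VO2peak (L/min) from the short ramp test (STest) and the long test (LTest).
stest = np.array([3.2, 2.9, 3.4, 3.0, 2.8, 3.1, 3.3, 2.7])
ltest = np.array([3.1, 2.9, 3.3, 3.1, 2.7, 3.0, 3.2, 2.8])

diff = stest - ltest
bias = diff.mean()                  # mean difference between tests
loa = 1.96 * diff.std(ddof=1)       # half-width of the 95% limits of agreement
r, p = pearsonr(stest, ltest)

print(f"bias = {bias:.2f} L/min, limits of agreement = {bias - loa:.2f} to {bias + loa:.2f}")
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```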
Abstract:
General Summary: Although the chapters of this thesis address a variety of issues, the principal aim is common: to test economic ideas in an international economic context. The intention has been to supply empirical findings using the largest suitable data sets and making use of the most appropriate empirical techniques. This thesis can roughly be divided into two parts: the first one, corresponding to the first two chapters, investigates the link between trade and the environment; the second one, the last three chapters, is related to economic geography issues. Environmental problems are omnipresent in the daily press nowadays, and one of the arguments put forward is that globalisation causes severe environmental problems through the reallocation of investments and production to countries with less stringent environmental regulations. A measure of the amplitude of this undesirable effect is provided in the first part. The third and fourth chapters explore the productivity effects of agglomeration. The computed spillover effects between different sectors indicate how cluster formation might be productivity enhancing. The last chapter is not about how to better understand the world but how to measure it, and it was just a great pleasure to work on it. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much and how fast did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows us to visualize our results in Google Earth. A short summary of each of the five chapters is provided below.

The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution haven effect (PH, comparative advantage in dirty products due to differences in environmental regulation) and the factor endowment effect (FE, comparative advantage in dirty, capital-intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients (for three pollutants, namely biological oxygen demand, sulphur dioxide and toxic pollution intensity, for all manufacturing sectors) provided by the World Bank and use a gravity-type framework to isolate the two above-mentioned effects. Our study covers 48 countries that can be classified into 29 Southern and 19 Northern countries and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects going in the expected, opposite directions and being of similar magnitude. However, when looking at world trade, the effects become very small because of the high North-North trade share, where we have no a priori expectations about the signs of these effects. Therefore, popular fears about the trade effects of differences in environmental regulations might be exaggerated.

The second chapter is entitled "Is Trade Bad for the Environment? Decomposing Worldwide SO2 Emissions, 1990-2000". First, we construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit of labor that vary across countries, periods and manufacturing sectors. Then we use these original data (covering 31 developed and 31 developing countries) to decompose the worldwide SO2 emissions into the three well-known dynamic effects (scale, technique and composition effects). We find that the positive scale (+9.5%) and the negative technique (-12.5%) effects are the main driving forces of emission changes. Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. We next construct, in a first experiment, a hypothetical world where no trade happens, i.e., each country produces its imports at home and no longer produces its exports. The difference between the actual world and this no-trade world allows us (omitting price effects) to compute a static first-order trade effect. The latter now increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, this effect is smaller in 2000 (3.5%) than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing effective emissions with the maximum or minimum possible level of SO2 emissions. These hypothetical levels of emissions are obtained by reallocating labour accordingly across sectors within each country (under the country-employment and the world industry-production constraints). Using linear programming techniques, we show that emissions are reduced by 90% with respect to the worst case, but that they could still be reduced by another 80% if emissions were to be minimized. The findings from this chapter go together with those from chapter one in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past.

Turning to the economic geography part of this thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", consists of a short note that derives the theoretical model estimated in the fourth chapter. The derivation is directly based on the multi-regional framework by Ciccone (2002) but extends it in order to include sectoral disaggregation and a temporal dimension. This allows us to formally write present productivity as a function of past productivity and other contemporaneous and past control variables.

The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity based on regional data (245 NUTS-2 regions over the period 1980-2003). Using dynamic panel techniques allows us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. When using data at the sectoral level, it seems that positive cross-sector and negative own-sector externalities are present in manufacturing, while financial services display strong positive own-sector effects.

The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world's economic, demographic and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia.

To sum up, this thesis makes three main contributions. First, it provides new estimates of orders of magnitude for the role of trade in the globalisation and environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it allows us, in a geometrically rigorous way, to track the path of the world's economic center of gravity.
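As a purely illustrative companion to the last chapter's method, the sketch below applies the physical center-of-mass idea to city coordinates weighted by economic size and projects the result back onto the globe; the city list and GDP weights are made up and do not reproduce the thesis data.

```python
# Hedged sketch of the center-of-mass idea: convert city coordinates to 3D unit vectors,
# weight them by economic mass, average, and project back onto the Earth's surface.
import numpy as np

def center_of_gravity(lat_deg, lon_deg, weights):
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    # Unit vectors on the sphere for each location.
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)
    w = np.asarray(weights, dtype=float)
    cx, cy, cz = np.average(x, weights=w), np.average(y, weights=w), np.average(z, weights=w)
    # Project the (interior) mean vector back to latitude/longitude.
    clat = np.degrees(np.arctan2(cz, np.hypot(cx, cy)))
    clon = np.degrees(np.arctan2(cy, cx))
    return clat, clon

# Hypothetical "cities" with illustrative economic weights (arbitrary units).
lats = [40.7, 51.5, 35.7, 31.2, 19.1]        # New York, London, Tokyo, Shanghai, Mumbai
lons = [-74.0, -0.1, 139.7, 121.5, 72.9]
gdp  = [100, 80, 90, 70, 40]

print("economic center of gravity (lat, lon):", center_of_gravity(lats, lons, gdp))
```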
Abstract:
Fungi are known to be involved in biogeochemical cycles in numerous ways. In particular, they are recognized as key players in organic matter recycling, as nutrient suppliers via mineral weathering, and as large producers of oxalic acid and metal-oxalates. However, little is known about their contribution to the genesis of other types of minerals such as calcium carbonate (CaCO3). Yet CaCO3 is a ubiquitous mineral in many ecosystems and plays an essential role in the biogeochemical cycles of both carbon (C) and calcium (Ca). CaCO3 may be physicochemical or biogenic in origin, and numerous organisms have been recognized to control or induce calcite biomineralization. While fungi have often been suspected to be involved in this process, only scarce information supports this hypothesis.

This Ph.D. thesis aims at investigating this disregarded aspect of fungal impact on biogeochemical cycles by exploring their possible implication in the formation of a particular type of secondary CaCO3 deposit ubiquitously observed in soils and caves of calcareous environments. In caves, these deposits are known as moonmilk, whereas in soils they are known as Needle Fibre Calcite (NFC, sensu lato). However, they both correspond to the same microscopic assemblage of two distinct and unusual habits of calcite: NFC (sensu stricto) and nanofibres. Both features are acicular habits of calcite displaying different dimensions. Whether these habits are physicochemical or biogenic in origin has been under discussion for a long time.

Observations of natural samples using microscopic techniques (electron microscopy and micromorphology) and EDX microanalyses have demonstrated several interesting relationships between NFC, nanofibres, and organic features. First, they have shown that nanofibres can be either organic or mineral in nature. Second, both nanofibres and NFC display strong structural analogies with fungal hyphal features, supporting their fungal origin. Furthermore, laboratory experiments have confirmed the fungal origin of nanofibres through an enzymatic digestion of fungal hyphae. Indeed, structures made of nanofibres with features similar to those observed in natural samples were produced by this approach. Finally, calcium enrichments have been measured in both the cell walls and intrahyphal inclusions of hyphae from rhizomorphs sampled in the natural environment. These results point to an involvement of calcium sequestration in nanofibre and/or NFC genesis.

Several aspects need further investigation, in particular the understanding of the physiological processes involved in hyphal calcite nucleation. However, the results obtained during this study have allowed the confirmation of the implication of fungi in the formation of both NFC and nanofibres. These findings are of great importance regarding global biogeochemical cycles as they bring new insights into the coupled C and Ca cycles. Conventionally, fungi are considered to be involved mainly in organic matter mineralization and mineral weathering. In this study, we demonstrate that they must also be considered as major agents in mineral genesis, in particular of CaCO3. This is a completely new perspective in geomycology regarding the role of fungi in the short-term (or biological) C cycle. Indeed, the presence of these secondary CaCO3 precipitates represents a bypass in the short-term carbon cycle, as soil inorganic C is trapped in calcite rather than being readily returned to the atmosphere.
Abstract:
Different interactions have been described between glucocorticoids and the product of the ob gene, leptin. Leptin can inhibit the activation of the hypothalamo-pituitary-adrenal axis by stressful stimuli, whereas adrenal glucocorticoids stimulate leptin production by the adipocyte. The present study was designed to investigate the potential direct effects of leptin in modulating glucocorticoid production by the adrenal. Human adrenal glands from kidney transplant donors were dissociated, and isolated primary cells were studied in vitro. These cells were preincubated with recombinant leptin (10^-10 to 10^-7 M) for 6 or 24 h, and basal or ACTH-stimulated cortisol secretion was subsequently measured. Basal cortisol secretion was unaffected by leptin, but a significant and dose-dependent inhibition of ACTH-stimulated cortisol secretion was observed [down by 29 ± 0.1% of controls with the highest leptin dose, P < 0.01 vs. CT (unrelated positive control)]. This effect of leptin was also observed in rat primary adrenocortical cells, where leptin inhibited stimulated corticosterone secretion in a dose-dependent manner (down by 46 ± 0.1% of controls with the highest leptin dose, P < 0.001 vs. CT). These effects of leptin in adrenal cells are likely mediated by the long isoform of the leptin receptor (OB-R), because its transcript was found to be expressed in the adrenal tissue and leptin had no inhibitory effect in adrenal glands obtained from db/db mice. Therefore, leptin directly inhibits stimulated cortisol secretion from human and rat adrenal glands, and this may represent an important mechanism for modulating glucocorticoid levels in various metabolic states.