65 results for Make to Stock
Abstract:
Introduction: Glenoid bone volume and bone quality can render the fixation of a reversed shoulder arthroplasty (RSA) baseplate hazardous. A cadaveric study at our institution demonstrated that optimal baseplate fixation could be achieved with screws in three major columns. Our aim is to review our early rate of aseptic glenoid loosening in a series of baseplates fixed according to this principle. Methods: Between 2005 and 2008, 48 consecutive RSAs (Reversed Aequalis) were implanted in 48 patients with an average age of 74.4 years (range, 56 to 86 years). There were 37 women and 11 men. Twenty-seven primary RSAs were performed for cuff tear arthropathy, 3 after failed rotator cuff surgery, 6 for failed arthroplasties, 7 for acute fractures and 5 after failed ORIF. All baseplate fixations were done using a nonlocking posterior screw in the scapular spine, a nonlocking anterior screw in the glenoid body, a locking superior screw in the coracoid and a locking inferior screw in the pillar. All patients were reviewed with standardized radiographs. We reported the positions of the screws in relation to the scapular spine and the coracoid process in two different views. We defined screw positions as totally in, partially in, or out of the target. Finally, we reported aseptic glenoid loosening, which was defined as implant subsidence. Results: Four patients were lost to follow-up. Thus 44 shoulders could be reviewed after a mean follow-up of 16 months (range, 9 to 32 months). Thirty-seven (84%) screws were either partially or totally in the spine; thus, 7 (16%) scapular spine screws were out of the target. No coracoid screw was out of the target. At the final follow-up control, we found no glenoid loosening. Conclusion: Early glenoid loosening occurs before the two-year follow-up and is most of the time related to technical problems and/or insufficient glenoid bone stock and bone quality. Our study demonstrates that baseplate fixation of an RSA according to the three-column principle is a reproducible technique and a valuable way to prevent early glenoid loosening.
Abstract:
Financial markets play an important role in an economy, performing various functions such as mobilizing and pooling savings, producing information about investment opportunities, screening and monitoring investments, implementing corporate governance, and diversifying and managing risk. These functions influence saving rates, investment decisions and technological innovation and therefore have important implications for welfare. In my PhD dissertation I examine the interplay of financial and product markets by looking at different channels through which financial markets may influence an economy. My dissertation consists of four chapters. The first chapter is co-authored with Martin Strieborny, a PhD student from the University of Lausanne. The second chapter is co-authored with Melise Jaud, a PhD student from the Paris School of Economics. The third chapter is co-authored with both Melise Jaud and Martin Strieborny. The last chapter of my PhD dissertation is a single-author paper.

Chapter 1 of my PhD thesis analyzes the effect of financial development on the growth of contract-intensive industries. These industries intensively use intermediate inputs that can neither be sold on an organized exchange nor are reference-priced (Levchenko, 2007; Nunn, 2007). A typical example of a contract-intensive industry would be one where an upstream supplier has to make investments in order to customize a product for the needs of a downstream buyer. After the investment is made and the product is adjusted, the buyer may refuse to meet a commitment and trigger ex post renegotiation. Since the product is customized to the buyer's needs, the supplier cannot sell it to a different buyer at the original price. This is referred to in the literature as the holdup problem. As a consequence, individually rational suppliers will underinvest in relationship-specific assets, hurting the downstream firms, with negative consequences for aggregate growth. The standard way to mitigate the holdup problem is to write a binding contract and to rely on legal enforcement by the state. However, even the most effective contract enforcement might fail to protect the supplier in tough times when the buyer lacks a reliable source of external financing. This suggests a potential role for financial intermediaries, banks in particular, in mitigating the incomplete-contract problem. First, financial products like letters of credit and letters of guarantee can substantially decrease the risk and transaction costs of the parties. Second, a bank loan can serve as a signal of a buyer's true financial situation: an upstream firm will be more willing to undertake relationship-specific investment knowing that the business partner is creditworthy and will abstain from myopic behavior (Fama, 1985; von Thadden, 1995). Therefore, a well-developed financial (especially banking) system should disproportionately benefit contract-intensive industries.

The empirical test confirms this hypothesis. Indeed, contract-intensive industries seem to grow faster in countries with a well-developed financial system. Furthermore, this effect comes from a more developed banking sector rather than from a deeper stock market. These results are reaffirmed by examining the effect of US bank deregulation on the growth of contract-intensive industries in different states. Beyond an overall pro-growth effect, bank deregulation seems to disproportionately benefit the industries requiring relationship-specific investments from their suppliers.

Chapter 2 of my PhD focuses on the role of the financial sector in promoting exports of developing countries. In particular, it investigates how credit constraints affect the ability of firms operating in agri-food sectors of developing countries to keep exporting to foreign markets. Trade in high-value agri-food products from developing countries has expanded enormously over the last two decades, offering opportunities for development. However, trade in agri-food is governed by a growing array of standards. Sanitary and Phytosanitary standards (SPS) and technical regulations impose additional sunk, fixed and operating costs along the firms' export life. Such costs may be detrimental to firms' survival, "pricing out" producers that cannot comply. The existence of these costs suggests a potential role of credit constraints in shaping the duration of trade relationships on foreign markets. A well-developed financial system provides exporters with the funds necessary to adjust production processes in order to meet quality and quantity requirements in foreign markets and to maintain long-standing trade relationships. The products with higher needs for financing should benefit the most from a well-functioning financial system. This differential effect calls for the difference-in-differences approach initially proposed by Rajan and Zingales (1998). As a proxy for the demand for financing of agri-food products, the sanitary risk index developed by Jaud et al. (2009) is used. The empirical literature on standards and norms shows high costs of compliance, both variable and fixed, for high-value food products (Garcia-Martinez and Poole, 2004; Maskus et al., 2005). The sanitary risk index reflects the propensity of products to fail health and safety controls on the European Union (EU) market. Given the high costs of compliance, the sanitary risk index captures the demand for external financing to comply with such regulations. The prediction is empirically tested by examining the export survival of different agri-food products from firms operating in Ghana, Mali, Malawi, Senegal and Tanzania. The results suggest that agri-food products that require more financing to keep up with the food safety regulations of the destination market indeed survive longer in foreign markets when they are exported from countries with better-developed financial markets.

Chapter 3 analyzes the link between financial markets and the efficiency of resource allocation in an economy. Producing and exporting products inconsistent with a country's factor endowments constitutes a serious misallocation of funds, which undermines the competitiveness of the economy and inhibits its long-term growth. In this chapter, inefficient exporting patterns are analyzed through the lens of the agency theories from the corporate finance literature. Managers may pursue projects with negative net present values because their perquisites or even their job might depend on them. Exporting activities are particularly prone to this problem. Business related to foreign markets involves both high levels of additional spending and strong incentives for managers to overinvest. Rational managers might have incentives to push for exports that use the country's scarce factors, which is suboptimal from a social point of view. Export subsidies might further skew the incentives towards inefficient exporting. Management can divert the export subsidies into investments promoting inefficient exporting. The corporate finance literature stresses the disciplining role of outside debt in counteracting the internal pressures to divert such "free cash flow" into unprofitable investments. Managers can lose both their reputation and the control of "their" firm if the unpaid external debt triggers a bankruptcy procedure. The threat of possible failure to satisfy debt service payments pushes managers toward an efficient use of available resources (Jensen, 1986; Stulz, 1990; Hart and Moore, 1995). In most countries, the main sources of debt financing are banks. The disciplining role of banks might be especially important in countries suffering from insufficient judicial quality. Banks, in pursuing their rights, rely on comparatively simple legal interventions that can be implemented even by mediocre courts. In addition to their disciplining role, banks can promote efficient exporting patterns in a more direct way by relaxing the credit constraints of producers, through screening, identifying and investing in the most profitable investment projects. Therefore, a well-developed domestic financial system, and the banking system in particular, would help to push a country's exports towards products congruent with its comparative advantage. This prediction is tested by looking at the survival of different product categories exported to the US market. Products are identified according to the Euclidean distance between their revealed factor intensity and the country's factor endowments. The results suggest that products suffering from a comparative disadvantage (labour-intensive products from capital-abundant countries) survive less long on the competitive US market. This pattern is stronger if the exporting country has a well-developed banking system. Thus, a strong banking sector promotes exports consistent with a country's comparative advantage.

Chapter 4 of my PhD thesis further examines the role of financial markets in fostering efficient resource allocation in an economy. In particular, the allocative efficiency hypothesis is investigated in the context of equity market liberalization. Many empirical studies document a positive and significant effect of financial liberalization on growth (Levchenko et al., 2009; Quinn and Toyoda, 2009; Bekaert et al., 2005). However, the decrease in the cost of capital and the associated growth in investment appear rather modest in comparison to the large GDP growth effect (Bekaert and Harvey, 2005; Henry, 2000, 2003). Therefore, financial liberalization may have a positive impact on growth through its effect on the allocation of funds across firms and sectors. Free access to international capital markets allows the largest and most profitable domestic firms to borrow funds in foreign markets (Rajan and Zingales, 2003). As domestic banks lose some of their best clients, they reoptimize their lending practices, seeking new clients among small and younger industrial firms. These firms are likely to be more risky than large and established companies. Screening of customers becomes prevalent as the return to screening rises. Banks, ceteris paribus, tend to focus on firms operating in comparative-advantage sectors because they are better risks. Firms in comparative-disadvantage sectors, finding it harder to finance their entry into or survival in export markets, either exit or refrain from entering export markets. On aggregate, one should therefore expect to see less entry, more exit, and shorter survival on export markets in those sectors after financial liberalization. The paper investigates the effect of financial liberalization on a country's export pattern by comparing the dynamics of entry and exit of different products in a country's export portfolio before and after financial liberalization. The results suggest that products that lie far from the country's comparative advantage set tend to disappear relatively faster from the country's export portfolio following the liberalization of financial markets. In other words, financial liberalization tends to rebalance the composition of a country's export portfolio towards the products that intensively use the economy's abundant factors.
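Chapter 1's identification follows a Rajan and Zingales (1998) style interaction design: an industry-level trait (contract intensity) is interacted with a country-level trait (financial development), with country and industry fixed effects absorbing everything else. The sketch below, on synthetic data with invented variable names and magnitudes, is only meant to illustrate that regression form, not the dissertation's actual specification.

# Illustrative sketch (not the dissertation's data or code): a
# Rajan-Zingales style interaction regression in which industry growth is
# explained by an industry trait (contract intensity, constant across
# countries) interacted with a country trait (financial development),
# while country and industry fixed effects absorb the main effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
countries = [f"c{i}" for i in range(30)]
industries = [f"i{j}" for j in range(20)]
fin_dev = {c: rng.uniform(0.2, 1.5) for c in countries}        # e.g. credit/GDP
contract_int = {i: rng.uniform(0.1, 0.9) for i in industries}  # input specificity

rows = [dict(country=c, industry=i,
             fin_dev=fin_dev[c], contract_int=contract_int[i],
             growth=0.02 + 0.03 * fin_dev[c] * contract_int[i]
                    + rng.normal(0, 0.02))
        for c in countries for i in industries]
df = pd.DataFrame(rows)

# The coefficient of interest is the interaction: a positive estimate means
# contract-intensive industries grow faster where finance is deeper.
model = smf.ols("growth ~ fin_dev:contract_int + C(country) + C(industry)",
                data=df).fit(cov_type="cluster",
                             cov_kwds={"groups": df["country"].astype("category").cat.codes})
print(round(model.params["fin_dev:contract_int"], 3))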
Abstract:
General Introduction: This thesis can be divided into two main parts: the first one, corresponding to the first three chapters, studies Rules of Origin (RoOs) in Preferential Trade Agreements (PTAs); the second part, the fourth chapter, is concerned with Anti-Dumping (AD) measures. Despite the wide-ranging preferential access granted to developing countries by industrial ones under North-South Trade Agreements (whether reciprocal, like the Europe Agreements (EAs) or NAFTA, or not, such as the GSP, AGOA, or EBA), it has been claimed that the benefits from improved market access keep falling short of their full potential. RoOs are largely regarded as a primary cause of the under-utilization of the improved market access offered by PTAs. RoOs are the rules that determine the eligibility of goods for preferential treatment. Their economic justification is to prevent trade deflection, i.e. to prevent non-preferred exporters from using the tariff preferences. However, they are complex, cost-raising and cumbersome, and can be manipulated by organised special interest groups. As a result, RoOs can restrain trade beyond what is needed to prevent trade deflection and hence restrict market access to a statistically significant and quantitatively large extent.

Part I: In order to further our understanding of the effects of RoOs in PTAs, the first chapter, written with Pr. Olivier Cadot, Celine Carrère and Pr. Jaime de Melo, describes and evaluates the RoOs governing EU and US PTAs. It draws on utilization-rate data for Mexican exports to the US in 2001 and on similar data for ACP exports to the EU in 2002. The paper makes two contributions. First, we construct an R-index of restrictiveness of RoOs along the lines first proposed by Estevadeordal (2000) for NAFTA, modifying it and extending it for the EU's single list (SL). This synthetic R-index is then used to compare RoOs under NAFTA and PANEURO. The two main findings of the chapter are as follows. First, it shows, in the case of PANEURO, that the R-index is useful to summarize how countries are differently affected by the same set of RoOs because of their different export baskets to the EU. Second, it is shown that the R-index is a relatively reliable statistic in the sense that, subject to caveats, after controlling for the extent of tariff preference at the tariff-line level, it accounts for differences in utilization rates at the tariff-line level. Finally, together with utilization rates, the index can be used to estimate the total compliance costs of RoOs.

The second chapter proposes a reform of preferential RoOs with the aim of making them more transparent and less discriminatory. Such a reform would make preferential blocs more "cross-compatible" and would therefore facilitate cumulation. It would also contribute to moving regionalism toward more openness and hence make it more compatible with the multilateral trading system. It focuses on NAFTA, one of the most restrictive FTAs (see Estevadeordal and Suominen 2006), and proposes a way forward that is close in spirit to what the EU Commission is considering for the PANEURO system. In a nutshell, the idea is to replace the current array of RoOs by a single instrument: Maximum Foreign Content (MFC). An MFC is a conceptually clear and transparent instrument, like a tariff. Therefore, changing all instruments into an MFC would bring improved transparency, pretty much like the "tariffication" of NTBs. The methodology for this exercise is as follows: in step 1, I estimate the relationship between utilization rates, tariff preferences and RoOs. In step 2, I retrieve the estimates and invert the relationship to get a simulated MFC that gives, line by line, the same utilization rate as the old array of RoOs. In step 3, I calculate the trade-weighted average of the simulated MFC across all lines to get an overall equivalent of the current system and explore the possibility of setting this unique instrument at a uniform rate across lines. This would have two advantages. First, like a uniform tariff, a uniform MFC would make it difficult for lobbies to manipulate the instrument at the margin. This argument is standard in the political-economy literature and has been used time and again in support of reductions in the variance of tariffs (together with standard welfare considerations). Second, uniformity across lines is the only way to eliminate the indirect source of discrimination alluded to earlier. Only if two countries face uniform RoOs and tariff preferences will they face uniform incentives, irrespective of their initial export structure. The result of this exercise is striking: the average simulated MFC is 25% of the good's value, a very low (i.e. restrictive) level, confirming Estevadeordal and Suominen's critical assessment of NAFTA's RoOs. Adopting a uniform MFC would imply a relaxation from the benchmark level for sectors like chemicals or textiles & apparel, and a stiffening for wood products, paper and base metals. Overall, however, the changes are not drastic, suggesting perhaps only moderate resistance to change from special interests.

The third chapter of the thesis considers whether the Europe Agreements of the EU, with the current sets of RoOs, could be a potential model for future EU-centered PTAs. First, I studied and coded, at the six-digit level of the Harmonised System (HS), both the old RoOs used before 1997 and the "Single List" RoOs used since 1997. Second, using a Constant Elasticity of Transformation function in which CEEC exporters smoothly mix sales between the EU and the rest of the world by comparing producer prices on each market, I estimated the trade effects of the EU RoOs. The estimates suggest that much of the market access conferred by the EAs, outside sensitive sectors, was undone by the cost-raising effects of RoOs. The chapter also contains an analysis of the evolution of the CEECs' trade with the EU from post-communism to accession.

Part II: The last chapter of the thesis is concerned with anti-dumping, another trade-policy instrument having the effect of reducing market access. In 1995, the Uruguay Round introduced into the Anti-Dumping Agreement (ADA) a mandatory "sunset-review" clause (Article 11.3 ADA) under which anti-dumping measures should be reviewed no later than five years from their imposition and terminated unless there was a serious risk of resumption of injurious dumping. The last chapter, written with Pr. Olivier Cadot and Pr. Jaime de Melo, uses a new database on Anti-Dumping (AD) measures worldwide to assess whether the sunset-review agreement had any effect. The question we address is whether the WTO Agreement succeeded in imposing the discipline of a five-year cycle on AD measures and, ultimately, in curbing their length. Two methods are used: count-data analysis and survival analysis. First, using Poisson and Negative Binomial regressions, the count of AD measures' revocations is regressed on (inter alia) the count of "initiations" lagged five years. The analysis yields a coefficient on measures' initiations lagged five years that is larger and more precisely estimated after the agreement than before, suggesting some effect. However, the coefficient estimate is nowhere near the value that would give a one-for-one relationship between initiations and revocations after five years. We also find that (i) if the agreement affected EU AD practices, the effect went the wrong way, the five-year cycle being quantitatively weaker after the agreement than before; and (ii) the agreement had no visible effect on the United States except for a one-time peak in 2000, suggesting a mopping-up of old cases. Second, the survival analysis of AD measures around the world suggests a shortening of their expected lifetime after the agreement, and this shortening effect (a downward shift in the survival function post-agreement) was larger and more significant for measures targeted at WTO members than for those targeted at non-members (for which WTO disciplines do not bind), suggesting de jure compliance. A difference-in-differences Cox regression confirms this diagnosis: controlling for the countries imposing the measures, for the investigated countries and for the products' sector, we find a larger increase in the hazard rate of AD measures covered by the Agreement than for other measures.
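The three-step MFC simulation in the second chapter (estimate the utilization relationship, invert it line by line, trade-weight the result) can be caricatured in a few lines. The sketch below assumes a made-up logistic relationship on simulated tariff-line data; it shows only the mechanics of the inversion, not the chapter's specification or estimates.

# Toy illustration of the three-step MFC simulation described above (not
# the chapter's data or estimates): (1) fit utilization rates as a
# function of tariff preference and a rule-of-origin variable, (2) invert
# the fitted relationship line by line to find the Maximum Foreign Content
# (MFC) reproducing each line's utilization, (3) trade-weight the result.
# The logistic functional form and every number below are hypothetical.
import numpy as np
import statsmodels.api as sm
from scipy.optimize import brentq

rng = np.random.default_rng(1)
n = 500                                   # tariff lines
pref = rng.uniform(0.0, 0.15, n)          # preference margin
mfc_true = rng.uniform(0.1, 0.6, n)       # foreign-content allowance
util = 1 / (1 + np.exp(-(-2.0 + 25.0 * pref + 4.0 * mfc_true)))
util = (util + rng.normal(0, 0.03, n)).clip(0.01, 0.99)  # observed utilization
trade = rng.lognormal(0.0, 1.0, n)        # trade weights

# Step 1: estimate the utilization relationship (here on the logit scale).
X = sm.add_constant(np.column_stack([pref, mfc_true]))
a, b, c = sm.OLS(np.log(util / (1 - util)), X).fit().params

# Step 2: invert it line by line for the simulated MFC.
def simulated_mfc(u, p):
    return brentq(lambda m: a + b * p + c * m - np.log(u / (1 - u)), -5.0, 5.0)

mfc_sim = np.array([simulated_mfc(u, p) for u, p in zip(util, pref)])

# Step 3: the trade-weighted average is the single uniform MFC equivalent.
print("uniform MFC equivalent:", round(np.average(mfc_sim, weights=trade), 3))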
Abstract:
Background: Gout patients initiating urate-lowering therapy have an increased risk of flares. Inflammation in gouty arthritis is induced by interleukin (IL)-1b. Canakinumab inhibits IL-1b effectively in clinical studies. This study compared different doses of canakinumab vs colchicine in preventing flares in gout patients initiating allopurinol therapy. Methods: In this 24 wk double-blind study, gout patients (20-79 years) initiating allopurinol were randomized (1:1:1:1:1:1:2) to canakinumab s.c. single doses of 25, 50, 100, 200 or 300 mg, or 150 mg divided into doses every 4 wks (50+50+25+25 mg [q4wk]), or colchicine 0.5 mg p.o. daily for 16 wks. The primary outcome was to determine the canakinumab dose giving comparable efficacy to colchicine with respect to the number of flares occurring during the first 16 wks. Secondary outcomes included the number of patients with flares and C-reactive protein (CRP) levels during the first 16 wks. Results: 432 patients were randomized and 391 (91%) completed the study. All canakinumab doses were better than colchicine in preventing flares and therefore a canakinumab dose comparable to colchicine could not be determined. Based on a negative binomial model, all canakinumab groups, except 25 mg, reduced the flare rate ratio per patient significantly compared to the colchicine group (rate ratio estimates: 25 mg 0.60, 50 mg 0.34, 100 mg 0.28, 200 mg 0.37, 300 mg 0.29, q4wk 0.38; p ≤ 0.05). The percentage of patients with flares was lower for all canakinumab groups (25 mg 27.3%, 50 mg 16.7%, 100 mg 14.8%, 200 mg 18.5%, 300 mg 15.1%, q4wk 16.7%) compared to the colchicine group (44.4%). All patients taking canakinumab were significantly less likely to experience at least one gout flare than patients taking colchicine (odds ratio range [0.22 - 0.47]; p ≤ 0.05 for all). Median baseline CRP levels were 2.86 mg/L for 25 mg, 3.42 mg/L for 50 mg, 1.76 mg/L for 100 mg, 3.66 mg/L for 200 mg, 3.21 mg/L for 300 mg, 3.23 mg/L for the q4wk canakinumab groups and 2.69 mg/L for the colchicine group. In all canakinumab groups with median CRP levels above the normal range at baseline, median levels declined within 15 days of treatment and were maintained at normal levels (ULN = 3 mg/L) throughout the 16 wk period. Adverse events (AEs) occurred in 52.7% (25 mg), 55.6% (50 mg), 51.9% (100 mg), 51.9% (200 mg), 54.7% (300 mg) and 58.5% (q4wk) of patients on canakinumab vs 53.7% of patients on colchicine. Serious AEs (SAEs) were reported in 2 (3.6%; 25 mg), 2 (3.7%; 50 mg), 3 (5.6%; 100 mg), 3 (5.6%; 200 mg), 3 (5.7%; 300 mg) and 1 (1.9%; q4wk) patients on canakinumab and in 5 (4.6%) patients on colchicine. One fatal SAE (myocardial infarction, not related to study drug) occurred in the colchicine group. Conclusions: In this randomized, double-blind, active-controlled study of flare prevention in gout patients initiating allopurinol therapy, treatment with canakinumab led to a statistically significant reduction in flares compared with colchicine and was well tolerated. Disclosure statement: U.A., A.B., G.K., D.R. and P.S. are employees of and have stock options or bond holdings with Novartis Pharma AG. E.M. is a principal investigator for Novartis Pharmaceuticals Corporation. E.N. has received consulting fees from Roche. N.S. has received research grants from Novartis Pharmaceuticals Corporation. A.S. has received consultancy fees from Novartis Pharma AG, Abbott, Bristol-Myers Squibb, Essex, Pfizer, MSD, Roche, UCB and Wyeth. All other authors have declared no conflicts of interest.
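The flare rate ratios quoted above come from a negative binomial count model. The sketch below, on simulated counts rather than the trial data, shows how such a rate ratio is obtained: regress flare counts on a treatment indicator and exponentiate the coefficient.

# Minimal sketch on simulated counts (not the trial data): a negative
# binomial regression of 16-week flare counts on a treatment indicator;
# exponentiating the coefficient gives the flare rate ratio of
# canakinumab relative to colchicine.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 60                                    # hypothetical patients per arm
frailty = rng.gamma(1.0, 1.0, 2 * n)      # patient-level overdispersion
rates = np.concatenate([np.full(n, 0.3),  # assumed canakinumab flare rate
                        np.full(n, 0.9)]) # assumed colchicine flare rate
df = pd.DataFrame({
    "canakinumab": [1] * n + [0] * n,
    "flares": rng.poisson(rates * frailty),
})

model = smf.negativebinomial("flares ~ canakinumab", data=df).fit(disp=0)
rate_ratio = float(np.exp(model.params["canakinumab"]))
print("flare rate ratio (canakinumab vs colchicine):", round(rate_ratio, 2))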
Abstract:
Health literacy is defined as "the degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions." Low health literacy mainly affects certain at-risk populations, limiting access to care, interaction with caregivers and self-management. Although screening tests exist, their routine use is not recommended, and the interventions advocated in practice consist rather in reducing the barriers to patient-caregiver communication. It is thus important to take into account not only the population's health literacy but also the communication skills of a health system that is becoming increasingly complex.
Abstract:
Although dispersal is recognized as a key issue in several fields of population biology (such as behavioral ecology, population genetics, metapopulation dynamics or evolutionary modeling), these disciplines focus on different aspects of the concept and often make different implicit assumptions regarding migration models. Using simulations, we investigate how such assumptions translate into effective gene flow and the fixation probability of selected alleles. Assumptions regarding migration type (e.g. source-sink, resident pre-emption, or balanced dispersal) and pattern (e.g. stepping-stone versus island dispersal) have large impacts when demes differ in size or selective pressure. The effects of fragmentation, as well as the spatial localization of newly arising mutations, also strongly depend on migration type and pattern. Migration rate also matters: depending on the migration type, fixation probabilities at an intermediate migration rate may lie outside the range defined by the low- and high-migration limits when demes differ in size. Given the extreme sensitivity of fixation probability to the characteristics of dispersal, we underline the importance of making explicit (and documenting empirically) the crucial ecological/behavioral assumptions underlying migration models.
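A minimal version of the kind of simulation described here, a two-deme Wright-Fisher model with island migration and unequal deme sizes, is sketched below; the migration scheme, selection coefficient and deme sizes are arbitrary placeholders rather than the study's settings.

# Stripped-down illustration (not the paper's simulation code) of how
# migration pattern and unequal deme sizes shape the fixation probability
# of a selected allele: a two-deme Wright-Fisher model with island-model
# migration.  All parameter values are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(3)

def fixation_probability(deme_sizes=(100, 400), s=0.02, m=0.05,
                         n_replicates=1000):
    """Estimate the fixation probability of a beneficial allele arising
    as a single copy in the first (smaller) deme."""
    sizes = np.array(deme_sizes)
    fixed = 0
    for _ in range(n_replicates):
        p = np.array([1.0 / (2 * sizes[0]), 0.0])      # allele frequencies
        while True:
            # selection within each deme
            p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
            # island-model migration: migrants come from a common pool
            pool = np.average(p_sel, weights=sizes)
            p_mig = (1 - m) * p_sel + m * pool
            # genetic drift: binomial sampling of 2N gene copies per deme
            p = rng.binomial(2 * sizes, p_mig) / (2 * sizes)
            mean_freq = np.average(p, weights=sizes)
            if mean_freq == 0.0:                        # allele lost
                break
            if mean_freq == 1.0:                        # allele fixed
                fixed += 1
                break
    return fixed / n_replicates

# Changing deme_sizes, m or the migration scheme alters the estimate,
# which is the sensitivity the abstract emphasizes.
print(fixation_probability())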
Abstract:
The molecular diversity of viruses complicates the interpretation of viral genomic and proteomic data. To make sense of viral gene functions, investigators must be familiar with the virus host range, replication cycle and virion structure. Our aim is to provide a comprehensive resource bridging textbook knowledge with genomic and proteomic sequences. The ViralZone web resource (www.expasy.org/viralzone/) provides fact sheets on all known virus families/genera with easy access to sequence data. A selection of reference strains (RefStrain) provides annotated standards to circumvent the exponential increase of virus sequences. Moreover, ViralZone offers a complete set of detailed and accurate virion pictures.
Abstract:
"Living is passing from one space to another while trying as much as possible not to bump into anything," claimed Georges Perec. This poetically geographic statement could, in a way, sum up the challenge of knowledge taken up by this research. The aim is indeed to treat dwelling, understood as individuals' "making do with space", as something that cannot be taken for granted, and to highlight the problematic character that the practice of a place represents for an individual. Accordingly, one of the proposals of this work is to consider every place as an assemblage of spatial tests with which individuals are confronted. The question then arises of how individuals cope with these spatial tests. The hypothesis defended in this work is that they mobilize, on the one hand, skills, in the sense of an "ability to" as expressed by Wittgenstein in the linguistic field, i.e. a "technical mastery", and, on the other hand, a spatial capital, which can be summarized as the experience accumulated by an individual in the practice of places. The argument supports the hypothesis that the ways of dwelling in a metropolis as a tourist depend in particular on these two interdependent elements, which every individual possesses to a variable and evolving degree; their importance, without in any way determining specific practices, contributes to an increased mastery of space, an ease in coping with spatial tests, attenuating the potentially constraining character of the latter. The inquiry therefore reflects both on the actorial dimension of individuals and on the place as inhabited space: working on this question means investigating the urban layout of a place, i.e. understanding how an urban configuration (whose main characteristics coincide with the spatial tests) is inhabited, and more particularly, in the present case, how it is inhabited by tourists. To address this issue empirically, the inquiry focuses on tourists: on the one hand because of their low degree of familiarity with the place they practise (making do with this space is therefore not a matter of routine), and on the other hand because their now massive presence within metropolises has effects on the layout of these places that need to be considered. The laboratory used is Los Angeles, an urban area of 18 million residents: its considerable sprawl, the absence of a historically dominant downtown, and the strong salience of its automobile metric are characteristics that make this place a "normal exceptional" with particularly prominent spatial tests. On this basis, the research advances arguments underlining a layout, produced by tourists' ways of dwelling, that differs from the classical model of the tourist metropolis: to express this singularity, the inquiry supports the hypothesis of describing this place as a tourist metapolis.
Abstract:
Background: Public hospitals' long waiting lists make outpatient surgery in private facilities very attractive, provided a standardized protocol is applied. The aim of this study was to assess this kind of innovative collaboration in abdominal surgery from a clinical and economic perspective. Methods: All consecutive patients operated on an outpatient basis in a private facility by a public hospital abdominal surgeon and an assistant over a 5-year period (2004-2009) were included. Clinical assessment was carried out from patients' charts and a satisfaction questionnaire, and economic assessment from the comparison between the surgeons' charges paid by the private facility and the surgeons' hospital salaries during the days devoted to surgery at the private facility. Results: Over the 5 years, 602 operative procedures were carried out during 190 operative days. All patients could be discharged the same day, and minor complications occurred in only 1% of cases. Patient satisfaction was 98%. The balance between the surgeons' charges paid by the private facility and their hospital salary costs was positive by 25.8% for the senior surgeon and 12.6% for the assistant or, on average, 21.9% for both. Conclusion: Collaboration between an overloaded university hospital surgery department and a private surgical facility was successful, effective, safe, and cost-effective. It could be extended to other surgical specialities. Copyright (C) 2011 S. Karger AG, Basel
Abstract:
With increased activity and reduced financial and human resources, there is a need for automation in clinical bacteriology. The initial processing of clinical samples includes repetitive and fastidious steps. These tasks are suitable for automation, and several instruments are now available on the market, including the WASP (Copan), Previ-Isola (BioMerieux), Innova (Becton-Dickinson) and Inoqula (KIESTRA) systems. These new instruments allow efficient and accurate inoculation of samples, comprising four main steps: (i) selecting the appropriate Petri dish; (ii) inoculating the sample; (iii) spreading the inoculum on agar plates to obtain, upon incubation, well-separated bacterial colonies; and (iv) accurately labelling and sorting each inoculated medium. The challenge for clinical bacteriologists is to determine which automated system is ideal for their own laboratory. Indeed, different solutions will be preferred according to the number and variety of samples, and to the types of sample that will be processed with the automated system. The final choice is difficult, because audits proposed by manufacturers risk being biased towards the solution proposed by their own company, and because these automated systems may not be easily tested on site prior to the final decision, owing to the complexity of computer connections between the laboratory information system and the instrument. This article thus summarizes the main parameters that need to be taken into account when choosing the optimal system, and provides some clues to help clinical bacteriologists make their choice.
Abstract:
Previous studies have shown that arbuscular mycorrhizal fungi (AMF) can influence plant diversity and ecosystem productivity. However, little is known about the effects of AMF and different AMF taxa on other important community properties such as nutrient acquisition, plant survival and soil structure. We established experimental grassland microcosms and tested the impact of AMF and of different AMF taxa on a number of grassland characteristics. We also tested whether plant species benefited from the same or different AMF taxa in subsequent growing seasons. AMF enhanced phosphorus acquisition, soil aggregation and survival of several plant species, but AMF did not increase total plant productivity. Moreover, AMF increased nitrogen acquisition by some plant species, but AMF had no effect on total N uptake by the plant community. Plant growth responses to AMF were temporally variable and some plant species obtained the highest biomass with different AMF in different years. Hence the results indicate that it may be beneficial for a plant to be colonized by different AMF taxa in different seasons. This study shows that AMF play a key role in grassland by improving plant nutrition and soil structure, and by regulating the make-up of the plant community.
Abstract:
Social scientists often estimate models from correlational data, where the independent variable has not been exogenously manipulated; they also make implicit or explicit causal claims based on these models. When can such claims be made? We answer this question by first discussing design and estimation conditions under which model estimates can be interpreted, using the randomized experiment as the gold standard. We show how endogeneity (which includes omitted variables, omitted selection, simultaneity, common methods bias, and measurement error) renders estimates causally uninterpretable. Second, we present methods that allow researchers to test causal claims in situations where randomization is not possible or when causal interpretation is confounded, including fixed-effects panel, sample selection, instrumental variable, regression discontinuity, and difference-in-differences models. Third, we take stock of the methodological rigor with which causal claims are being made in a social sciences discipline by reviewing a representative sample of 110 articles on leadership published in the previous 10 years in top-tier journals. Our key finding is that researchers fail to address at least 66% and up to 90% of the design and estimation conditions that make causal claims invalid. We conclude by offering 10 suggestions on how to improve non-experimental research.
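As one concrete instance of the designs listed above, the sketch below estimates a difference-in-differences model with unit and time fixed effects on synthetic panel data; the variable names and effect size are invented for illustration.

# Synthetic sketch of one design named above: a difference-in-differences
# model with unit and period fixed effects, where the causal effect is
# identified from the treated*post interaction.  Data and effect size are
# invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
units, periods = 60, 10
rows = []
for u in range(units):
    treated = int(u < units // 2)
    unit_effect = rng.normal(0, 1)
    for t in range(periods):
        post = int(t >= periods // 2)
        y = (unit_effect + 0.3 * t          # unit heterogeneity + common trend
             + 1.5 * treated * post         # "true" treatment effect
             + rng.normal(0, 1))
        rows.append(dict(unit=u, period=t, treated=treated, post=post, y=y))
df = pd.DataFrame(rows)

did = smf.ols("y ~ treated:post + C(unit) + C(period)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["unit"]})
print("difference-in-differences estimate:", round(did.params["treated:post"], 2))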
Abstract:
Health literacy is defined as "the degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions." Low health literacy mainly affects certain at-risk populations, limiting access to care, interaction with caregivers and self-management. Although screening tests exist, their routine use is not recommended, and the interventions advocated in practice consist rather in reducing the barriers to patient-caregiver communication. It is thus important to take into account not only the population's health literacy but also the communication skills of a health system that is becoming increasingly complex.
Abstract:
Abstract: The occupational health risk involved with handling nanoparticles is the probability that a worker will experience an adverse health effect: this is calculated as a function of the worker's exposure relative to the potential biological hazard of the material. Addressing the risks of nanoparticles therefore requires knowledge on occupational exposure and the release of nanoparticles into the environment as well as toxicological data. However, information on exposure is currently not systematically collected; therefore this risk assessment lacks quantitative data. This thesis aimed, first, at creating the fundamental data necessary for a quantitative assessment and, second, at evaluating methods to measure occupational nanoparticle exposure. The first goal was to determine what is being used where in Swiss industries. This was followed by an evaluation of the adequacy of existing measurement methods to assess workplace nanoparticle exposure to complex size distributions and concentration gradients. The study was conceived as a series of methodological evaluations aimed at better understanding nanoparticle measurement devices and methods. It focused on inhalation exposure to airborne particles, as respiration is considered to be the most important entrance pathway for nanoparticles into the body in terms of risk. The targeted survey (pilot study) was conducted as a feasibility study for a later nationwide survey on the handling of nanoparticles and the application of specific protection means in industry. The study consisted of targeted phone interviews with health and safety officers of Swiss companies that were believed to use or produce nanoparticles. This was followed by a representative survey on the level of nanoparticle usage in Switzerland. It was designed based on the results of the pilot study. The study was conducted among a representative selection of clients of the Swiss National Accident Insurance Fund (SUVA), covering about 85% of Swiss production companies. The third part of this thesis focused on the methods to measure nanoparticles. Several pre-studies were conducted studying the limits of commonly used measurement devices in the presence of nanoparticle agglomerates. This focus was chosen because several discussions with users and producers of the measurement devices raised questions about their accuracy when measuring nanoparticle agglomerates and because, at the same time, the two survey studies revealed that such powders are frequently used in industry. The first preparatory experiment focused on the accuracy of the scanning mobility particle sizer (SMPS), which showed an improbable size distribution when measuring powders of nanoparticle agglomerates. Furthermore, the thesis includes a series of smaller experiments that took a closer look at problems encountered with other measurement devices in the presence of nanoparticle agglomerates: condensation particle counters (CPC), a portable aerosol spectrometer (PAS), a device to estimate the aerodynamic diameter, as well as diffusion size classifiers. Some initial feasibility tests for the efficiency of filter-based sampling and subsequent counting of carbon nanotubes (CNT) were conducted last. The pilot study provided a detailed picture of the types and amounts of nanoparticles used and the knowledge of the health and safety experts in the companies.
Considerable maximal quantities (> 1'000 kg/year per company) of Ag, Al-Ox, Fe-Ox, SiO2, TiO2 and ZnO (mainly first-generation particles) were declared by the contacted Swiss companies. The median quantity of handled nanoparticles, however, was 100 kg/year. The representative survey was conducted by contacting by post a representative selection of 1'626 clients of the Swiss National Accident Insurance Fund (SUVA). It allowed estimation of the number of companies and workers dealing with nanoparticles in Switzerland. The extrapolation from the surveyed companies to all companies of the Swiss production sector suggested that 1'309 workers (95% confidence interval: 1'073 to 1'545) of the Swiss production sector are potentially exposed to nanoparticles in 586 companies (145 to 1'027). These numbers correspond to 0.08% (0.06% to 0.09%) of all workers and to 0.6% (0.2% to 1.1%) of companies in the Swiss production sector. A few well-known methods exist to measure airborne concentrations of sub-micrometre-sized particles. However, it was unclear how well the different instruments perform in the presence of the often quite large agglomerates of nanostructured materials. The evaluation of devices and methods focused on nanoparticle agglomerate powders. It allowed the identification of the following potential sources of inaccurate measurements at workplaces with considerably high concentrations of airborne agglomerates:
- A standard SMPS showed bi-modal particle size distributions when measuring large nanoparticle agglomerates.
- Differences in the range of a factor of a thousand were shown between diffusion size classifiers and CPC/SMPS.
- The comparison between CPC/SMPS and the portable aerosol spectrometer (PAS) was much better but, depending on the concentration, size or type of the powders measured, the differences could still amount to an order of magnitude.
- Specific difficulties and uncertainties in the assessment of workplaces were identified: background particles can interact with particles created by a process, which makes the handling of the background concentration difficult.
- Electric motors produce high numbers of nanoparticles and confound the measurement of the process-related exposure.
Conclusion: The surveys showed that nanoparticle applications exist in many industrial sectors in Switzerland and that some companies already use high quantities of them. The representative survey demonstrated a low prevalence of nanoparticle usage in most branches of Swiss industry and led to the conclusion that the introduction of applications using nanoparticles (especially outside industrial chemistry) is only beginning. Even though the number of potentially exposed workers was reportedly rather small, it nevertheless underscores the need for exposure assessments. Understanding exposure and how to measure it correctly is very important because the potential health effects of nanomaterials are not yet fully understood. The evaluation showed that many devices and methods of measuring nanoparticles need to be validated for nanoparticle agglomerates before large exposure assessment studies can begin.

Summary: The occupational health risk of nanoparticles at the workplace is the probability that a worker suffers an adverse health effect when exposed to this material; it is usually calculated as the product of hazard and exposure. A thorough assessment of the possible risks of nanomaterials therefore requires, on the one hand, information on the release of such materials into the environment and, on the other hand, information on the exposure of workers. Much of this information is not yet systematically collected and is therefore missing for risk analyses. The aim of this doctoral thesis was to create the basis for a quantitative estimate of occupational exposure to nanoparticles and to evaluate the methods needed to measure such exposure. The study was to investigate to what extent nanoparticles are already used in Swiss industry, how many workers potentially come into contact with them, and whether the measurement technology already suffices for the necessary workplace exposure measurements. The study focused on exposure to airborne particles, because respiration is regarded as the main entry route for particles into the body. The thesis is built on three phases: a qualitative survey (pilot study), a representative Swiss survey, and several technical studies serving the specific understanding of the possibilities and limits of individual measurement devices and techniques. The qualitative telephone survey was conducted as a preliminary study for a national, representative survey of Swiss industry. It aimed at information on the occurrence of nanoparticles and on the protective measures applied. The study consisted of targeted telephone interviews with occupational health and safety specialists of Swiss companies. The companies were selected on the basis of publicly available information indicating that they handle nanoparticles. The second part of the thesis was the representative study evaluating the prevalence of nanoparticle applications in Swiss industry. The study built on information from the pilot study and was conducted with a representative selection of companies insured by the Swiss National Accident Insurance Fund (SUVA). The majority of Swiss companies in the industrial sector were thereby covered. The third part of the thesis focused on the methodology for measuring nanoparticles. Several preliminary studies were conducted to probe the limits of commonly used nanoparticle measurement devices when they have to measure larger quantities of nanoparticle agglomerates. This focus was chosen for two reasons: because several discussions with users and with producers of the measurement devices suggested a weakness there, raising doubts about the accuracy of the devices, and because the two survey studies showed that such nanoparticle agglomerates occur frequently. First, a preliminary study addressed the accuracy of the Scanning Mobility Particle Sizer (SMPS). In the presence of nanoparticle agglomerates, this device displayed an implausible bimodal particle size distribution. A series of short experiments followed, concentrating on other measurement devices and their problems when measuring nanoparticle agglomerates: the condensation particle counter (CPC), the portable aerosol spectrometer (PAS), a device for estimating the aerodynamic diameter of particles, and the diffusion size classifier were tested. Finally, some initial feasibility tests were carried out to determine the efficiency of filter-based sampling and counting of airborne carbon nanotubes (CNT). The pilot study provided a detailed picture of the types and quantities of nanoparticles used in Swiss companies and showed the state of knowledge of the interviewed health and safety specialists. The following types of nanoparticles were reported by the contacted companies at maximum quantities (> 1'000 kg per year per company): Ag, Al-Ox, Fe-Ox, SiO2, TiO2 and ZnO (mainly first-generation nanoparticles). The quantities of nanoparticles used varied widely, with a median of 100 kg per year. In the quantitative questionnaire study, 1'626 companies, all clients of the Swiss National Accident Insurance Fund (SUVA), were contacted by post. The results of the survey allowed an estimate of the number of companies and workers using nanoparticles in Switzerland. Extrapolation to the Swiss industrial sector gave the following picture: in 586 companies (95% confidence interval: 145 to 1'027 companies), 1'309 workers are potentially exposed to nanoparticles (95% CI: 1'073 to 1'545). These numbers correspond to 0.6% of Swiss companies (95% CI: 0.2% to 1.1%) and 0.08% of the workforce (95% CI: 0.06% to 0.09%). Some well-established technologies exist for measuring the airborne concentration of sub-micrometre particles. However, there are doubts as to how far these technologies can also be used to measure engineered nanoparticles. For this reason, the preparatory studies for the workplace assessments focused on measuring powders containing nanoparticle agglomerates. They allowed the identification of the following possible sources of erroneous measurements at workplaces with elevated airborne concentrations of nanoparticle agglomerates:
- A standard SMPS showed an implausible bimodal particle size distribution when measuring larger nanoparticle agglomerates.
- Large differences, in the range of a factor of a thousand, were found between a diffusion size classifier and several CPCs (and the SMPS, respectively).
- The differences between CPC/SMPS and the PAS were smaller but, depending on the size or type of powder measured, could still amount to about an order of magnitude.
- Specific difficulties and uncertainties in workplace measurements were identified: background particles can interact with particles released during a work process, and such interactions make it difficult to account correctly for the background particle concentration in the measurement data.
- Electric motors produce large numbers of nanoparticles and can thus confound the measurement of process-related exposure.
Conclusion: The surveys showed that nanoparticles are already a reality in Swiss industry and that some companies already use large quantities of them. The representative survey moderated this striking finding somewhat by showing that the number of such companies across Swiss industry as a whole is relatively small. In most branches (especially outside the chemical industry), few or no applications were found, which suggests that the introduction of this new technology is only at the beginning of its development. Even if the number of potentially exposed workers is still relatively small, the study nevertheless underlines the need for exposure measurements at these workplaces. Knowledge of exposure and of how to measure it correctly is very important, above all because the possible health effects are not yet fully understood. The evaluation of several devices and methods showed, however, that there is still ground to make up: before larger measurement studies can be carried out, the devices and methods must be validated for use with nanoparticle agglomerates.
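The extrapolation reported in the representative survey (a share of companies and workers, with a 95% confidence interval, scaled up to the Swiss production sector) follows standard survey arithmetic. The sketch below shows the mechanics with invented counts and without the stratification and weighting a real analysis would use.

# Sketch of the survey extrapolation reported above: estimate the share of
# companies using nanoparticles from the survey responses, attach a
# normal-approximation 95% confidence interval, and scale up to the number
# of companies in the Swiss production sector.  The counts and population
# size below are invented placeholders (a real analysis would also weight
# by stratum), not the thesis data.
import math

n_responded = 900        # hypothetical number of responding companies
n_using = 6              # hypothetical respondents reporting nanoparticle use
population = 100_000     # hypothetical number of companies in the sector

p_hat = n_using / n_responded
se = math.sqrt(p_hat * (1 - p_hat) / n_responded)
low, high = max(p_hat - 1.96 * se, 0.0), p_hat + 1.96 * se

print(f"share of companies: {p_hat:.2%} (95% CI {low:.2%} to {high:.2%})")
print(f"extrapolated companies: {p_hat * population:.0f} "
      f"(95% CI {low * population:.0f} to {high * population:.0f})")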
Abstract:
Introduction: Human experience takes place along the line of mental time (MT) created through the imagination of oneself at different time-points in the past or future (self-projection in time). Here we manipulated self-projection in MT not only with respect to one's life events but also with respect to one's faces from different past and future time-points. Methods: We compared mental time travel with respect to one's facial images from different time-points in the past and future (study 1: MT-faces) as well as with respect to different past and future life events (study 2: MT-events). Participants were asked to make judgments about past and future face images and past and future events from three different time-points: the present (Now), eight years earlier (Past) or eight years later (Future). In addition, as a control task, participants were asked to make recognition judgments with respect to faces and memory-related judgments with respect to events without changing their habitual self-location in time. Behavioral measures and functional magnetic resonance imaging (fMRI) activity after subtraction of recognition- and memory-related activity show both absolute MT and relative MT effects for faces and events, signifying a fundamental brain mechanism of MT disentangled from episodic memory functions. Results: Behavioural and event-related fMRI activity showed three independent effects characterized by (1) similarity between past recollection and future imagination, (2) facilitation of judgments related to the future as compared to the past, and (3) facilitation of judgments related to time-points distant from the present. These effects were found with respect to faces and events, suggesting that the brain mechanisms of MT are independent of whether actual life episodes have to be re-/pre-experienced, and recruited a common cerebral network including the medial temporal, precuneus, inferior frontal, temporo-parietal and insular cortices. Conclusions: These behavioural and neural data suggest that self-projection in time is a crucial aspect of MT, relying on neural structures encoding memory, mental imagery, and the self. Furthermore, our results emphasize the idea that mental temporal processing is more strongly directed to future prediction than to past recollection.
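One of the behavioural facilitation effects reported above, faster judgments about the future than about the past, can be tested with a simple within-subject comparison of reaction times, as in the sketch below on simulated data.

# Minimal sketch of testing one behavioural facilitation effect reported
# above (faster judgments about the future than about the past) with a
# within-subject paired comparison.  The reaction times are simulated,
# not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_subjects = 20
rt_past = rng.normal(1.60, 0.15, n_subjects)              # seconds, hypothetical
rt_future = rt_past - rng.normal(0.08, 0.05, n_subjects)  # assumed facilitation

t_stat, p_value = stats.ttest_rel(rt_past, rt_future)
print(f"mean difference: {np.mean(rt_past - rt_future):.3f} s, "
      f"t({n_subjects - 1}) = {t_stat:.2f}, p = {p_value:.4f}")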