93 results for Make to availability


Relevance:

30.00%

Publisher:

Abstract:

Cyclosporine A (CsA) has been demonstrated to be effective for the treatment of a variety of ophthalmological conditions, including ocular surface disorders such as dry eye disease (DED). Since CsA is characterised by its low water solubility, the development of a topical ophthalmic formulation represents a genuine pharmaceutical challenge. In the present study, two different strategies to address this challenge were studied and compared: (i) a water-soluble CsA prodrug formulated within an aqueous solution and (ii) a CsA oil-in-water emulsion (Restasis, Allergan Inc., Irvine, CA). First, the prodrug formulation was shown to have excellent ocular tolerance as well as no influence on basal tear production; the ocular surface properties remained unchanged. Then, in order to allow in vivo investigations, a specific analytical method based on ultra-high-pressure liquid chromatography coupled with a triple quadrupole mass spectrometer (UHPLC-MS/MS) was developed and optimised to quantify CsA in ocular tissues and fluids. The CsA ocular kinetics in lachrymal fluid were found to be similar for both formulations between 15 min and 48 h. The CsA ocular distribution study evidenced the ability of the prodrug formulation to penetrate into the eye, achieving therapeutically active CsA levels in tissues of both the anterior and posterior segments. In addition, a detailed analysis of the in vivo data using a bicompartmental model pointed to a higher bioavailability and a lower elimination rate for CsA when generated from the prodrug than after direct application as an emulsion. These in vivo properties make the prodrug solution a safe and suitable option for the treatment of DED.
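The bicompartmental (two-compartment) analysis mentioned above can be illustrated with a short sketch. All parameter values below are hypothetical placeholders, not values from the study; the sketch only shows how two-compartment disposition yields a biexponential concentration profile, and how a lower elimination rate translates into a larger area under the curve (AUC), i.e. higher exposure.

```python
import math

def biexponential_concentration(t, A, alpha, B, beta):
    """Two-compartment disposition: C(t) = A*exp(-alpha*t) + B*exp(-beta*t),
    with alpha > beta (fast distribution phase, slow elimination phase)."""
    return A * math.exp(-alpha * t) + B * math.exp(-beta * t)

def auc_trapezoid(times, concentrations):
    """Area under the concentration-time curve by the trapezoidal rule;
    for a given dose, AUC is proportional to bioavailability."""
    return sum(
        (t2 - t1) * (c1 + c2) / 2.0
        for (t1, c1), (t2, c2) in zip(
            zip(times, concentrations), zip(times[1:], concentrations[1:])
        )
    )

# Hypothetical parameters for two formulations of the same dose:
# the profile with the lower elimination rate (beta) yields the larger AUC.
times = [0.25 * k for k in range(193)]  # 0 to 48 h in 15-min steps
prodrug = [biexponential_concentration(t, 8.0, 1.2, 2.0, 0.05) for t in times]
emulsion = [biexponential_concentration(t, 8.0, 1.2, 2.0, 0.12) for t in times]

auc_prodrug = auc_trapezoid(times, prodrug)
auc_emulsion = auc_trapezoid(times, emulsion)
```

Comparing the two AUCs for the same hypothetical dose mirrors the qualitative finding reported above: the formulation with the lower elimination rate shows the higher exposure.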


BACKGROUND: Cardiovascular diseases (CVD) cause 1.8 million premature (<75 years) deaths annually in Europe. The majority of these deaths are preventable, the most efficient and cost-effective approach being intervention at the population level. The aim of this position paper is to assist authorities in selecting the most adequate management strategies to prevent CVD. DESIGN AND METHODS: Experts reviewed and summarized the published evidence on the major modifiable CVD risk factors: food, physical inactivity, smoking, and alcohol. Population-based preventive strategies focus on fiscal measures (e.g. taxation), national and regional policies (e.g. smoke-free legislation), and environmental changes (e.g. availability of alcohol). RESULTS: Food is a complex area, but several strategies can be effective in increasing the intake of fruit and vegetables and lowering the intake of salt, saturated fat, trans-fats, and free sugars. Tobacco and alcohol can be regulated mainly by fiscal measures and national policies, but local availability also plays a role. Changes in national policies and the built environment can integrate physical activity into daily life. CONCLUSION: Societal changes and commercial influences have led to the present unhealthy environment, in which the default lifestyle options increase CVD risk. The challenge for both central and local authorities is therefore to ensure healthier defaults. This position paper summarizes the evidence and recommends a number of structural strategies at international, national, and regional levels that in combination can substantially reduce CVD.


General Introduction This thesis can be divided into two main parts: the first, corresponding to the first three chapters, studies Rules of Origin (RoOs) in Preferential Trade Agreements (PTAs); the second part, the fourth chapter, is concerned with Anti-Dumping (AD) measures. Despite wide-ranging preferential access granted to developing countries by industrial ones under North-South Trade Agreements (whether reciprocal, like the Europe Agreements (EAs) or NAFTA, or non-reciprocal, such as the GSP, AGOA, or EBA), it has been claimed that the benefits from improved market access keep falling short of their full potential. RoOs are largely regarded as a primary cause of the under-utilization of the improved market access offered by PTAs. RoOs are the rules that determine the eligibility of goods for preferential treatment. Their economic justification is to prevent trade deflection, i.e. to prevent non-preferred exporters from using the tariff preferences. However, they are complex, cost-raising and cumbersome, and can be manipulated by organised special interest groups. As a result, RoOs can restrain trade beyond what is needed to prevent trade deflection and hence restrict market access to a statistically significant and quantitatively large extent. Part I In order to further our understanding of the effects of RoOs in PTAs, the first chapter, written with Prof. Olivier Cadot, Céline Carrère and Prof. Jaime de Melo, describes and evaluates the RoOs governing EU and US PTAs. It draws on utilization-rate data for Mexican exports to the US in 2001 and on similar data for ACP exports to the EU in 2002. The paper makes two contributions. First, we construct an R-index of restrictiveness of RoOs along the lines first proposed by Estevadeordal (2000) for NAFTA, modifying and extending it for the EU's single list (SL). This synthetic R-index is then used to compare RoOs under NAFTA and PANEURO. The two main findings of the chapter are as follows.
First, it shows, in the case of PANEURO, that the R-index is useful for summarizing how countries are differently affected by the same set of RoOs because of their different export baskets to the EU. Second, it is shown that the R-index is a relatively reliable statistic in the sense that, subject to caveats, after controlling for the extent of tariff preference at the tariff-line level, it accounts for differences in utilization rates. Finally, together with utilization rates, the index can be used to estimate the total compliance costs of RoOs. The second chapter proposes a reform of preferential RoOs with the aim of making them more transparent and less discriminatory. Such a reform would make preferential blocs more "cross-compatible" and would therefore facilitate cumulation. It would also help move regionalism toward more openness and hence make it more compatible with the multilateral trading system. It focuses on NAFTA, one of the most restrictive FTAs (see Estevadeordal and Suominen 2006), and proposes a way forward that is close in spirit to what the EU Commission is considering for the PANEURO system. In a nutshell, the idea is to replace the current array of RoOs by a single instrument: a Maximum Foreign Content (MFC). An MFC is a conceptually clear and transparent instrument, like a tariff. Changing all instruments into an MFC would therefore bring improved transparency, much like the "tariffication" of NTBs. The methodology for this exercise is as follows. In step 1, I estimate the relationship between utilization rates, tariff preferences and RoOs. In step 2, I retrieve the estimates and invert the relationship to get a simulated MFC that gives, line by line, the same utilization rate as the old array of RoOs.
In step 3, I calculate the trade-weighted average of the simulated MFC across all lines to get an overall equivalent of the current system and explore the possibility of setting this unique instrument at a uniform rate across lines. This would have two advantages. First, like a uniform tariff, a uniform MFC would make it difficult for lobbies to manipulate the instrument at the margin. This argument is standard in the political-economy literature and has been used time and again in support of reductions in the variance of tariffs (together with standard welfare considerations). Second, uniformity across lines is the only way to eliminate the indirect source of discrimination alluded to earlier. Only if two countries face uniform RoOs and tariff preferences will they face uniform incentives irrespective of their initial export structure. The result of this exercise is striking: the average simulated MFC is 25% of the good's value, a very low (i.e. restrictive) level, confirming Estevadeordal and Suominen's critical assessment of NAFTA's RoOs. Adopting a uniform MFC would imply a relaxation from the benchmark level for sectors like chemicals or textiles & apparel, and a stiffening for wood products, paper and base metals. Overall, however, the changes are not drastic, suggesting perhaps only moderate resistance to change from special interests. The third chapter of the thesis considers whether the Europe Agreements of the EU, with the current sets of RoOs, could be the model for future EU-centered PTAs. First, I studied and coded, at the six-digit level of the Harmonised System (HS), both the old RoOs (used before 1997) and the "single list" RoOs (used since 1997). Second, using a constant elasticity of transformation function in which CEEC exporters smoothly allocate sales between the EU and the rest of the world by comparing producer prices on each market, I estimated the trade effects of the EU's RoOs.
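Step 3 of the methodology described above, aggregating line-by-line simulated MFCs into one overall figure, is a trade-weighted average. The sketch below uses invented tariff lines, trade values, and MFC shares purely to show the computation; the HS codes and numbers are not from the chapter.

```python
def trade_weighted_mfc(tariff_lines):
    """Aggregate line-level simulated Maximum Foreign Content (MFC) values
    into one overall figure, weighting each tariff line by its trade value."""
    total_trade = sum(trade for _, trade, _ in tariff_lines)
    return sum(trade * mfc for _, trade, mfc in tariff_lines) / total_trade

# Hypothetical lines: (HS code, trade value in USD millions, simulated MFC
# expressed as a share of the good's value; lower = more restrictive).
tariff_lines = [
    ("620342", 120.0, 0.15),  # apparel: restrictive (low allowed foreign content)
    ("870323", 300.0, 0.30),  # vehicles
    ("440710", 45.0, 0.40),   # wood products: permissive
]

overall_mfc = trade_weighted_mfc(tariff_lines)
```

Setting a single uniform rate at this trade-weighted level is exactly the "unique instrument" experiment described in the text.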
The estimates suggest that much of the market access conferred by the EAs, outside sensitive sectors, was undone by the cost-raising effects of RoOs. The chapter also contains an analysis of the evolution of the CEECs' trade with the EU from post-communism to accession. Part II The last chapter of the thesis is concerned with anti-dumping, another trade-policy instrument having the effect of reducing market access. In 1995, the Uruguay Round introduced into the Anti-Dumping Agreement (ADA) a mandatory "sunset review" clause (Article 11.3 ADA) under which anti-dumping measures must be reviewed no later than five years from their imposition and terminated unless there is a serious risk of resumption of injurious dumping. The last chapter, written with Prof. Olivier Cadot and Prof. Jaime de Melo, uses a new database on Anti-Dumping (AD) measures worldwide to assess whether the sunset-review agreement had any effect. The question we address is whether the WTO Agreement succeeded in imposing the discipline of a five-year cycle on AD measures and, ultimately, in curbing their length. Two methods are used: count-data analysis and survival analysis. First, using Poisson and negative binomial regressions, the count of AD measure revocations is regressed on (inter alia) the count of initiations lagged five years. The analysis yields a coefficient on initiations lagged five years that is larger and more precisely estimated after the agreement than before, suggesting some effect. However, the coefficient estimate is nowhere near the value that would give a one-for-one relationship between initiations and revocations after five years. We also find that (i) if the agreement affected EU AD practices, the effect went the wrong way, the five-year cycle being quantitatively weaker after the agreement than before; and (ii) the agreement had no visible effect on the United States except for a one-time peak in 2000, suggesting a mopping-up of old cases.
Second, the survival analysis of AD measures around the world suggests a shortening of their expected lifetime after the agreement, and this shortening effect (a downward shift in the survival function post-agreement) was larger and more significant for measures targeted at WTO members than for those targeted at non-members (for which WTO disciplines do not bind), suggesting that compliance was de jure. A difference-in-differences Cox regression confirms this diagnosis: controlling for the countries imposing the measures, for the investigated countries and for the product sector, we find a larger increase in the hazard rate of AD measures covered by the Agreement than of other measures.
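The survival analysis described above rests on comparing lifetime distributions of AD measures before and after the Agreement. A minimal sketch of its non-parametric building block, the Kaplan-Meier estimator, using invented measure lifetimes (the real study uses a worldwide AD database and Cox regressions):

```python
def kaplan_meier(durations, events):
    """Kaplan-Meier survival estimator.
    durations: observed lifetimes (years an AD measure stayed in force);
    events: 1 if the measure was revoked, 0 if still in force (censored).
    Returns a list of (time, survival probability) steps."""
    curve = []
    survival = 1.0
    for t in sorted(set(d for d, e in zip(durations, events) if e == 1)):
        deaths = sum(1 for d, e in zip(durations, events) if d == t and e == 1)
        n_at_risk = sum(1 for d in durations if d >= t)
        survival *= 1.0 - deaths / n_at_risk
        curve.append((t, survival))
    return curve

# Invented AD-measure lifetimes in years; 1 = revoked, 0 = censored.
pre_agreement = kaplan_meier([3, 5, 7, 9, 12, 12], [1, 1, 1, 1, 1, 0])
post_agreement = kaplan_meier([2, 4, 5, 5, 6, 10], [1, 1, 1, 1, 1, 0])
```

A post-agreement curve lying below the pre-agreement curve at a given age is what the abstract calls a downward shift in the survival function.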


Health literacy is defined as "the degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions." Low health literacy mainly affects certain at-risk populations, limiting access to care, interaction with caregivers, and self-management. Although screening tests exist, their routine use is not advisable; the interventions recommended in practice consist rather in reducing barriers to patient-caregiver communication. It is thus important to address not only the population's health literacy but also the communication skills of a health system which tends to become ever more complex.


One of the key emphases of these three essays is to provide practical managerial insight. However, good practical insight can only be created by grounding it firmly in theoretical and empirical research. Practical, experience-based understanding without theoretical grounding remains tacit and cannot be easily disseminated; theoretical understanding without links to real life remains sterile. My studies aim to increase the understanding of how radical innovation can be generated at large established firms and how it can affect business performance, as most businesses pursue innovation with one prime objective: value creation. My studies focus on large established firms with sales revenue exceeding USD 1 billion. Usually large established firms cannot rely on informal ways of management, as these firms tend to be multinational businesses operating with subsidiaries, offices, or production facilities in more than one country. I. Internal and External Determinants of Corporate Venture Capital Investment The goal of this chapter is to focus on corporate venture capital (CVC) as one of the mechanisms available to established firms to source new ideas that can be exploited. We explore the internal and external determinants under which established firms engage in CVC to source new knowledge through investment in startups. We attempt to make scholars and managers aware of the forces that influence CVC activity by providing findings and insights to facilitate the strategic management of CVC. There are research opportunities to further understand the CVC phenomenon. Why do companies engage in CVC? What motivates them to continue "playing the game" and keep their active CVC investment status? The study examines CVC investment activity and the importance of understanding the influential factors that make a firm decide to engage in CVC. The main question is: how do established firms' CVC programs adapt to changing internal conditions and external environments?
Adaptation typically involves learning from exploratory endeavors, which enable companies to transform the ways they compete (Guth & Ginsberg, 1990). Our study extends the current stream of research on CVC. It aims to contribute to the literature by providing an extensive comparison of the internal and external determinants leading to CVC investment activity. To our knowledge, this is the first study to examine the influence of internal and external determinants on CVC activity throughout specific expansion and contraction periods determined by structural breaks occurring between 1985 and 2008. Our econometric analysis indicates a strong and significant positive association between CVC activity and R&D, cash flow availability and external financial market conditions, as well as a significant negative association between sales growth and the decision to engage in CVC. The analysis reveals that CVC investment is highly volatile, as demonstrated by dramatic fluctuations in CVC investment activity over the past decades. When analyzing the overall cyclical CVC period from 1985 to 2008, the results of our study suggest that CVC activity follows a pattern influenced by financial factors such as the level of R&D, free cash flow and lack of sales growth, and by external conditions of the economy, with the NASDAQ price index as the most significant variable influencing CVC during this period. II. Contribution of CVC and its Interaction with R&D to Value Creation The second essay takes into account the demands of corporate executives and shareholders regarding business performance and value-creation justifications for investments in innovation. Billions of dollars are invested in CVC and R&D, yet there is little evidence that CVC and its interaction with R&D create value. Firms operating in dynamic business sectors seek to innovate to create the value demanded by changing market conditions, consumer preferences, and competitive offerings.
Consequently, firms operating in such business sectors put a premium on finding new, sustainable and competitive value propositions. CVC and R&D can help them meet this challenge. Dushnitsky and Lenox (2006) presented evidence that CVC investment is associated with value creation. However, studies have shown that the most innovative firms do not necessarily benefit from innovation. For instance, Oyon (2007) indicated that between 1995 and 2005 the most innovative automotive companies did not obtain adequate rewards for shareholders. The interaction between CVC and R&D has generated much debate in the CVC literature. Some researchers see them as substitutes, suggesting that firms have to choose between CVC and R&D (Hellmann, 2002), while others expect them to be complementary (Chesbrough & Tucci, 2004). This study explores the interaction effect of CVC and R&D on value creation. The essay examines the impact of CVC and R&D on value creation over sixteen years across six business sectors and different geographical regions. Our findings suggest that the effect of CVC and its interaction with R&D on value creation is positive and significant. In dynamic business sectors, technologies rapidly become obsolete; consequently, firms operating in such sectors need to continuously develop new sources of value creation (Eisenhardt & Martin, 2000; Qualls, Olshavsky, & Michaels, 1981). We conclude that in order to affect value creation, firms operating in business sectors such as Engineering & Business Services and Information & Communication Technology ought to consider CVC a vital element of their innovation strategy. Moreover, regarding the interaction effect of CVC and R&D, our findings suggest that R&D and CVC are complementary for value creation; hence, firms in certain business sectors can be better off supporting both R&D and CVC simultaneously to increase the probability of creating value. III.
MCS and Organizational Structures for Radical Innovation Incremental innovation is necessary for continuous improvement, but it does not provide a sustainable, permanent source of competitiveness (Cooper, 2003). On the other hand, radical innovation pursuing new technologies and new market frontiers can generate new platforms for growth, providing firms with competitive advantages and high economic rents (Duchesneau et al., 1979; Markides & Geroski, 2005; O'Connor & DeMartino, 2006; Utterback, 1994). Interestingly, not all companies distinguish between incremental and radical innovation, and more importantly, firms that manage innovation through a one-size-fits-all process can almost guarantee a sub-optimization of certain systems and resources (Davila et al., 2006). Moreover, we conducted research on the use of management control systems (MCS) along with radical innovation and flexible organizational structures, as these have been associated with firm growth (Cooper, 2003; Davila & Foster, 2005, 2007; Markides & Geroski, 2005; O'Connor & DeMartino, 2006). Davila et al. (2009) identified research opportunities for innovation management and provided a list of pending issues: How do companies manage the process of radical and incremental innovation? What performance measures do companies use to manage radical ideas, and how do they select them? The fundamental objective of this paper is to address the following research question: what are the processes, MCS, and organizational structures for generating radical innovation? In recent years, research on innovation management has been conducted mainly either at the firm level (Birkinshaw, Hamel, & Mol, 2008a) or at the project level, examining appropriate management techniques associated with high levels of uncertainty (Burgelman & Sayles, 1988; Dougherty & Heller, 1994; Jelinek & Schoonhoven, 1993; Kanter, North, Bernstein, & Williamson, 1990; Leifer et al., 2000).
Therefore, we embarked on a novel process-related research framework to observe the process stages, MCS, and organizational structures that can generate radical innovation. This article is based on a case study at Alcan Engineered Products, a division of a multinational provider of lightweight material solutions. Our observations suggest that incremental and radical innovation should be managed through different processes, MCS and organizational structures, activated and adapted contingent on the type of innovation being pursued (i.e. incremental or radical). More importantly, we conclude that radical innovation can be generated in a systematic way through enablers such as processes, MCS, and organizational structures. This is in line with the findings of Jelinek and Schoonhoven (1993) and Davila et al. (2006; 2007), who show that innovative firms have institutionalized mechanisms, arguing that radical innovation cannot occur in an organic environment where flexibility and consensus are the main managerial mechanisms. They argue instead that radical innovation requires a clear organizational structure and formal MCS.


OBJECTIVE: The presence of minority nonnucleoside reverse transcriptase inhibitor (NNRTI)-resistant HIV-1 variants prior to antiretroviral therapy (ART) has been linked to virologic failure in treatment-naive patients. DESIGN: We performed a large retrospective study to determine the number of treatment failures that could have been prevented by implementing minority drug-resistant HIV-1 variant analyses in ART-naive patients in whom no NNRTI resistance mutations were detected by routine resistance testing. METHODS: Of the 1608 patients in the Swiss HIV Cohort Study who initiated first-line ART with two nucleoside reverse transcriptase inhibitors (NRTIs) and one NNRTI before July 2008, 519 were eligible on the basis of HIV-1 subtype, viral load and sample availability. The key NNRTI drug resistance mutations K103N and Y181C were measured by allele-specific PCR in 208 of the 519 patients, chosen at random. RESULTS: Minority K103N and Y181C drug resistance mutations were detected in five of 190 (2.6%) and 10 of 201 (5%) patients, respectively. Among the 183 patients for whom virologic success or failure could be examined, virologic failure occurred in seven (3.8%); minority K103N and/or Y181C variants were present prior to ART initiation in only two of those patients. The NNRTI-containing first-line ART was effective in 10 patients with preexisting minority NNRTI-resistant HIV-1 variants. CONCLUSION: As revealed in case-control studies, minority NNRTI-resistant HIV-1 variants can have an impact on ART. However, the implementation of minority NNRTI-resistant HIV-1 variant analysis in addition to genotypic resistance testing (GRT) cannot be recommended in routine clinical settings. Additional associated risk factors need to be discovered.
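The minority-variant detection rates quoted above are simple proportions; the short sketch below just reproduces that arithmetic with the counts given in the abstract.

```python
def detection_rate(positives, tested):
    """Percentage of tested patients in whom a minority resistance
    variant was detected by allele-specific PCR."""
    return 100.0 * positives / tested

k103n_rate = detection_rate(5, 190)   # minority K103N variants: 5 of 190 patients
y181c_rate = detection_rate(10, 201)  # minority Y181C variants: 10 of 201 patients
```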


Although dispersal is recognized as a key issue in several fields of population biology (such as behavioral ecology, population genetics, metapopulation dynamics or evolutionary modeling), these disciplines focus on different aspects of the concept and often make different implicit assumptions regarding migration models. Using simulations, we investigate how such assumptions translate into effective gene flow and the fixation probability of selected alleles. Assumptions regarding migration type (e.g. source-sink, resident pre-emption, or balanced dispersal) and pattern (e.g. stepping-stone versus island dispersal) have large impacts when demes differ in size or selective pressure. The effects of fragmentation, as well as the spatial localization of newly arising mutations, also depend strongly on migration type and pattern. Migration rate also matters: depending on the migration type, fixation probabilities at an intermediate migration rate may lie outside the range defined by the low- and high-migration limits when demes differ in size. Given the extreme sensitivity of fixation probability to the characteristics of dispersal, we underline the importance of making explicit (and documenting empirically) the crucial ecological/behavioral assumptions underlying migration models.
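The simulation approach described above can be caricatured in a few lines. The sketch below implements a haploid Wright-Fisher island model with selection; all parameter values are arbitrary illustrations, and the real study compares many more migration types and patterns (source-sink, resident pre-emption, stepping-stone, etc.).

```python
import random

def fixation_probability(n_demes, deme_size, s, m, replicates, rng, max_gen=5000):
    """Estimate the fixation probability of a beneficial allele (selection
    coefficient s) in a haploid island model: each generation a fraction m of
    every deme is exchanged with a common migrant pool, then each deme is
    resampled binomially after selection (Wright-Fisher)."""
    fixed = 0
    for _ in range(replicates):
        freqs = [0.0] * n_demes
        freqs[rng.randrange(n_demes)] = 1.0 / deme_size  # one new mutant copy
        for _ in range(max_gen):
            mean = sum(freqs) / n_demes
            if mean == 0.0 or mean == 1.0:
                break
            new_freqs = []
            for p in freqs:
                p_mig = (1.0 - m) * p + m * mean               # island-model migration
                p_sel = p_mig * (1.0 + s) / (1.0 + s * p_mig)  # haploid selection
                count = sum(rng.random() < p_sel for _ in range(deme_size))
                new_freqs.append(count / deme_size)
            freqs = new_freqs
        if sum(freqs) / n_demes == 1.0:
            fixed += 1
    return fixed / replicates

rng = random.Random(42)
p_fix = fixation_probability(n_demes=4, deme_size=50, s=0.05, m=0.1,
                             replicates=200, rng=rng)
```

Varying m, the deme sizes, or the migration scheme in such a sketch is exactly the kind of comparison the abstract argues can move fixation probabilities outside the classical low- and high-migration limits.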


The molecular diversity of viruses complicates the interpretation of viral genomic and proteomic data. To make sense of viral gene functions, investigators must be familiar with the virus host range, replication cycle and virion structure. Our aim is to provide a comprehensive resource bridging textbook knowledge with genomic and proteomic sequences. The ViralZone web resource (www.expasy.org/viralzone/) provides fact sheets on all known virus families/genera with easy access to sequence data. A selection of reference strains (RefStrain) provides annotated standards to circumvent the exponential increase of virus sequences. Moreover, ViralZone offers a complete set of detailed and accurate virion pictures.


BACKGROUND: Sphingomonas wittichii strain RW1 can completely oxidize dibenzo-p-dioxins and dibenzofurans, which are persistent contaminants of soils and sediments. For successful application in soil bioremediation systems, strain RW1 must cope with fluctuations in water availability, or water potential. Thus far, however, little is known about the adaptive strategies used by Sphingomonas bacteria to respond to changes in water potential. To improve our understanding, strain RW1 was perturbed with either the cell-permeating solute sodium chloride or the non-permeating solute polyethylene glycol with a molecular weight of 8000 (PEG8000). These solutes are assumed to simulate the solute and matric components of the total water potential, respectively. The responses to these perturbations were then assessed and compared using a combination of growth assays, transcriptome profiling, and membrane fatty acid analyses. RESULTS: Under conditions producing a similar decrease in water potential but without effect on growth rate, there was only a limited shared response to perturbation with sodium chloride or PEG8000. This shared response included the increased expression of genes involved in trehalose and exopolysaccharide biosynthesis and the reduced expression of genes involved in flagella biosynthesis. For the most part, the responses to perturbation with sodium chloride or PEG8000 were very different. Only sodium chloride triggered the increased expression of two ECF-type RNA polymerase sigma factors and the differential expression of many genes involved in outer membrane and amino acid metabolism. In contrast, only PEG8000 triggered the increased expression of a heat shock-type RNA polymerase sigma factor along with many genes involved in protein turnover and repair. Membrane fatty acid analyses further corroborated these differences: the degree of saturation of membrane fatty acids increased after perturbation with sodium chloride but decreased after perturbation with PEG8000. CONCLUSIONS: A combination of growth assays, transcriptome profiling, and membrane fatty acid analyses revealed that permeating and non-permeating solutes trigger different adaptive responses in strain RW1, suggesting that these solutes affect cells in fundamentally different ways. Future work is now needed to connect these responses with those observed in more realistic scenarios of soil desiccation.


Background: Public hospitals' long waiting lists make outpatient surgery in private facilities very attractive, provided a standardized protocol is applied. The aim of this study was to assess this kind of innovative collaboration in abdominal surgery from a clinical and economic perspective. Methods: All consecutive patients operated on an outpatient basis in a private facility by a public hospital abdominal surgeon and an assistant over a 5-year period (2004-2009) were included. Clinical assessment was based on patients' charts and a satisfaction questionnaire, and economic assessment on the comparison between the surgeons' charges paid by the private facility and the surgeons' hospital salaries for the days devoted to surgery at the private facility. Results: Over the 5 years, 602 operative procedures were carried out during 190 operative days. All patients could be discharged the same day, and minor complications occurred in only 1% of cases. Patient satisfaction was 98%. The balance between the surgeons' charges paid by the private facility and their hospital salary costs was positive by 25.8% for the senior surgeon and 12.6% for the assistant, or 21.9% on average for both. Conclusion: Collaboration between an overloaded university hospital surgery department and a private surgical facility was successful, effective, safe, and cost-effective. It could be extended to other surgical specialities. Copyright (C) 2011 S. Karger AG, Basel
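The economic assessment above boils down to comparing two cash flows. The sketch below assumes the balance is expressed as a percentage of the salary cost (the abstract does not spell out the exact accounting), and the amounts are invented so that the percentages match those reported.

```python
def balance_percent(charges_paid, salary_cost):
    """Relative balance between the charges a surgeon received from the private
    facility and the hospital salary cost for the same operative days,
    expressed as a percentage of the salary cost (assumed definition)."""
    return 100.0 * (charges_paid - salary_cost) / salary_cost

# Hypothetical amounts (currency units) chosen only to illustrate the computation.
senior_balance = balance_percent(charges_paid=251_600, salary_cost=200_000)
assistant_balance = balance_percent(charges_paid=112_600, salary_cost=100_000)
```

A positive balance under this (assumed) definition means the private facility's payments exceeded what the hospital paid in salary for the same days, which is the sense in which the collaboration is called cost-effective.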


Summary Landscapes are continuously changing. Natural forces of change such as heavy rainfall and fires can exert lasting influences on their physical form. However, changes related to human activities have often shaped landscapes more distinctly. In Western Europe, modern agricultural practices in particular and the expansion of built-up land have left their marks on the landscapes since the middle of the 20th century. In recent years it has been realised that more and more changes formerly attributed to natural forces may indirectly be the result of human action. Perhaps the most striking landscape change indirectly driven by human activity that we can witness today is the large-scale retreat of Alpine glaciers. Together with the landscapes, the habitats of animal and plant species have undergone vast and sometimes rapid changes, which have been held responsible for the ongoing loss of biodiversity. Still, little is known about the probable effects of the rate of landscape change on species persistence and disappearance. Therefore, the development and speed of land use/land cover change in the Swiss communes between the 1950s and 1990s were reconstructed using 10 parameters from agriculture and housing censuses, and were then correlated with changes in butterfly species occurrences. Cluster analyses were used to detect spatial patterns of change at broad spatial scales. Clusters of communes showing similar changes or transformation rates were identified for single decades and put into a temporally dynamic sequence. The picture obtained showed a prevalent replacement of non-intensive agriculture by intensive practices, a strong spread of urban communes around city centres, and transitions towards larger farm sizes in the mountainous areas. Increasing transformation rates toward more intensive agricultural management were mainly found until the 1970s, whereas afterwards the trends were commonly negative.
However, transformation rates representing the development of residential buildings remained positive throughout. The analyses concerning the butterfly species showed that grassland species reacted sensitively to the density of livestock in the communes. This might indicate the increased use of dry grasslands as cattle pastures, which show altered plant species compositions. Furthermore, these species also decreased in communes where farms with an agricultural area >5 ha have disappeared. The species of the wetland habitats were favoured in communes with smaller fractions of agricultural areas and lower densities of large farms (>10 ha) but did not show any correlation with transformation rates. It was concluded from these analyses that transformation rates might influence species disappearance to a certain extent but that the states of the environmental predictors generally outweigh the importance of the corresponding rates. Information on the current distribution of species is essential for nature conservation. Planning authorities that define priority areas for species protection or examine and authorise construction projects need to know the spatial distribution of species. Hence, models that simulate the potential spatial distribution of species have become important decision tools. The underlying statistical analyses, such as the widely used generalised linear models (GLMs), often rely on binary species presence-absence data. However, often only species presence data have been collected, especially for vagrant, rare or cryptic species such as butterflies or reptiles. Modellers have thus introduced randomly selected absence data to build distribution models. Yet selecting false absence data might bias the model results. Therefore, we investigated several strategies to select more reliable absence data for modelling the distribution of butterfly species based on historical distribution data.
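One of the absence-selection ideas discussed above, drawing pseudo-absences from sites that were demonstrably surveyed (ecologically similar species were recorded there) but where the target species never was, can be sketched in a few lines. The species names and site IDs are invented; this illustrates the idea, not the thesis's actual procedure.

```python
def reliable_absences(target_sites, similar_species_records):
    """Select pseudo-absence sites for a presence-only distribution model.
    A site counts as a reliable absence if at least one ecologically similar
    species was recorded there (so the site was surveyed and suitable for
    detecting butterflies) but the target species never was.
    target_sites: set of site IDs with target-species presences;
    similar_species_records: mapping from species name to its set of site IDs."""
    surveyed = set()
    for sites in similar_species_records.values():
        surveyed |= sites
    return surveyed - target_sites

# Invented occurrence data: site IDs where each species was recorded.
target = {"s1", "s4"}
similar = {
    "species_a": {"s1", "s2", "s3"},
    "species_b": {"s3", "s4", "s5"},
}

absences = reliable_absences(target, similar)
```

These selected absences, together with the presences, would then feed a binary model such as a logistic GLM.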
The results showed that better models were obtained when historical data from longer time periods were considered. Model performance was further increased when long-term data of species with habitat requirements similar to those of the modelled species were used. This successful methodological approach was then applied to assess the consequences of future landscape changes for the occurrence of butterfly species inhabiting dry grasslands or wetlands. These habitat types have been subject to strong deterioration in recent decades, which makes their protection a pressing task. Four spatially explicit scenarios describing (i) ongoing land use changes as observed between 1985 and 1997, (ii) liberalised agricultural markets, and (iii) slightly and (iv) strongly lowered agricultural production provided probable directions of landscape change. Current species-environment relationships were derived from a statistical model and used to predict future occurrence probabilities in six major biogeographical regions of Switzerland: the Jura Mountains, the Plateau, the Northern and Southern Alps, and the Western and Eastern Central Alps. The main results were that dry grassland species profited from lowered agricultural production, whereas the overgrowth of open areas in the liberalisation scenario might impair species occurrence. The wetland species mostly responded with decreases in their occurrence probabilities under the scenarios, owing to a loss of their preferred habitat. Further analyses of the factors currently influencing species occurrences confirmed anthropogenic causes such as urbanisation, abandonment of open land, and agricultural intensification. Hence, landscape planning should pay more attention to these forces in areas currently inhabited by these butterfly species in order to enable their sustainable persistence. 
In this thesis historical data were used intensively to reconstruct past developments and make them useful for current investigations. Yet the availability of historical data and the analyses at broader spatial scales often limited the explanatory power of the conducted analyses. Meaningful descriptors of former habitat characteristics and abundant species distribution data are generally sparse, especially for fine-scale analyses. This situation can, however, be ameliorated by broadening the extent of the study site and coarsening the grain size, as was done in this thesis by considering the whole of Switzerland at the level of its communes. Nevertheless, current monitoring projects and data-recording techniques are promising data sources that might soon allow more detailed analyses of long-term species reactions to landscape changes. This work also showed the value of historical species distribution data, for example their potential to help locate still unknown species occurrences. The results might therefore contribute to further research activities investigating current and future species distributions on the basis of the immense richness of historical distribution data. Résumé Les paysages changent continuellement. Des forces naturelles comme des pluies violentes ou des feux peuvent avoir une influence durable sur la forme du paysage. Cependant, les changements attribués aux activités humaines ont souvent modelé les paysages plus profondément. Depuis les années 1950 surtout, les pratiques agricoles modernes ou l'expansion des surfaces d'habitat et d'infrastructure ont caractérisé le développement du paysage en Europe de l'Ouest. Ces dernières années, l'homme a commencé à réaliser que beaucoup de changements « naturels » pourraient indirectement résulter de ses propres activités. Le changement de paysage le plus apparent dont nous sommes témoins de nos jours est probablement l'immense retraite des glaciers alpins. 
Avec les paysages, les habitats des animaux et des plantes ont aussi été exposés à des changements vastes et quelquefois rapides, tenus pour coresponsables de la continuelle diminution de la biodiversité. Cependant, nous savons peu des effets probables de la rapidité des changements du paysage sur la persistance et la disparition des espèces. Le développement et la rapidité du changement de l'utilisation et de la couverture du sol dans les communes suisses entre les années 50 et 90 ont donc été reconstruits au moyen de 10 variables issues des recensements agricoles et résidentiels et ont été corrélés avec des changements de présence des papillons diurnes. Des analyses de groupes (cluster analyses) ont été utilisées pour détecter des arrangements spatiaux de changements à l'échelle de la Suisse. Des communes avec des changements ou des rapidités comparables ont été délimitées pour des décennies séparées et ont été placées en séquence temporelle, rendant ainsi une certaine dynamique du changement. Les résultats ont montré un remplacement répandu d'une agriculture extensive par des pratiques intensives, une forte expansion des faubourgs urbains autour des grandes cités et des transitions vers de plus grandes surfaces d'exploitation dans les Alpes. Dans le cas des exploitations agricoles, des taux de changement croissants ont été observés jusqu'aux années 70, alors que la tendance a généralement été inversée dans les années suivantes. Par contre, la vitesse de construction des nouvelles maisons a montré des courbes positives pendant les 50 années. Les analyses sur la réaction des papillons diurnes ont montré que les espèces des prairies sèches réagissaient de façon sensible à une grande densité de bétail. Il est possible que dans ces communes beaucoup de prairies sèches aient été fertilisées et utilisées comme pâturages, qui ont une autre composition floristique. 
De plus, les espèces ont diminué dans les communes caractérisées par une rapide perte des fermes avec une surface cultivable supérieure à 5 ha. Les espèces des marais ont été favorisées dans des communes avec peu de surface cultivable et peu de grandes fermes, mais n'ont pas réagi aux taux de changement. Il en a donc été conclu que la rapidité des changements pourrait expliquer les disparitions d'espèces dans certains cas, mais que les variables prédictives qui expriment des états pourraient être des descripteurs plus importants. Des informations sur la distribution récente des espèces sont importantes par rapport aux mesures pour la conservation de la nature. Pour des autorités occupées à définir des zones de protection prioritaires ou à autoriser des projets de construction, ces informations sont indispensables. Les modèles de distribution spatiale d'espèces sont donc devenus des moyens de décision importants. Les méthodes statistiques courantes comme les modèles linéaires généralisés (GLM) demandent des données de présence et d'absence des espèces. Cependant, souvent seules les données de présence sont disponibles, surtout pour les animaux migrants, rares ou cryptiques comme des papillons ou des reptiles. C'est pourquoi certains modélisateurs ont choisi des absences au hasard, avec le risque d'influencer le résultat en choisissant de fausses absences. Nous avons établi plusieurs stratégies, basées sur des données de distribution historique des papillons diurnes, pour sélectionner des absences plus fiables. Les résultats ont démontré que de meilleurs modèles pouvaient être obtenus lorsque les données proviennent de périodes de temps plus longues. En plus, la performance des modèles a pu être augmentée en considérant des données de distribution à long terme d'espèces qui occupent des habitats similaires à ceux de l'espèce cible. 
Vu le succès de cette stratégie, elle a été utilisée pour évaluer les effets potentiels des changements de paysage futurs sur la distribution des papillons des prairies sèches et marais, deux habitats qui ont souffert de graves détériorations. Quatre scénarios spatialement explicites, décrivant (i) l'extrapolation des changements de l'utilisation de sol tels qu'observés entre 1985 et 1997, (ii) la libéralisation des marchés agricoles, et une production agricole (iii) légèrement amoindrie et (iv) fortement diminuée, ont été utilisés pour générer des directions de changement probables. Les relations actuelles entre la distribution des espèces et l'environnement ont été déterminées par le biais des modèles statistiques et ont été utilisées pour calculer des probabilités de présence selon les scénarios dans six régions biogéographiques majeures de la Suisse, comportant le Jura, le Plateau, les Alpes du Nord, du Sud, centrales orientales et centrales occidentales. Les résultats principaux ont montré que les espèces des prairies sèches pourraient profiter d'une diminution de la production agricole, mais qu'elles pourraient aussi disparaître à cause de l'embroussaillement des terres ouvertes dû à la libéralisation des marchés agricoles. La probabilité de présence des espèces de marais a décrû à cause d'une perte générale des habitats favorables. De plus, les analyses ont confirmé que des causes humaines comme l'urbanisation, l'abandon des terres ouvertes et l'intensification de l'agriculture affectent actuellement ces espèces. Ainsi ces forces devraient être mieux prises en compte lors de planifications paysagères, pour que ces papillons diurnes puissent survivre dans leurs habitats actuels. Dans ce travail de thèse, des données historiques ont été intensivement utilisées pour reconstruire des développements anciens et pour les rendre utiles à des recherches contemporaines. 
Cependant, la disponibilité des données historiques et les analyses à grande échelle ont souvent limité le pouvoir explicatif des analyses. Des descripteurs pertinents pour caractériser les habitats anciens et des données suffisantes sur la distribution des espèces sont généralement rares, spécialement pour des analyses à des échelles fines. Cette situation peut être améliorée en augmentant l'étendue du site d'étude et la taille de grain, comme il a été fait dans cette thèse en considérant toute la Suisse avec ses communes. Cependant, les récents projets de surveillance et les techniques de collecte de données sont des sources prometteuses, qui pourraient permettre des analyses plus détaillées sur les réactions à long terme des espèces aux changements de paysage dans le futur. Ce travail a aussi montré la valeur des anciennes données de distribution, par exemple leur potentiel pour aider à localiser des présences d'espèces encore inconnues. Les résultats peuvent contribuer à des activités de recherche à venir, qui étudieraient les distributions récentes ou futures d'espèces en considérant l'immense richesse des données de distribution historiques.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

With increased activity and reduced financial and human resources, there is a need for automation in clinical bacteriology. Initial processing of clinical samples includes repetitive and fastidious steps. These tasks are suitable for automation, and several instruments are now available on the market, including the WASP (Copan), Previ-Isola (BioMerieux), Innova (Becton-Dickinson) and Inoqula (KIESTRA) systems. These new instruments allow efficient and accurate inoculation of samples in four main steps: (i) selecting the appropriate Petri dish; (ii) inoculating the sample; (iii) spreading the inoculum on agar plates to obtain, upon incubation, well-separated bacterial colonies; and (iv) accurately labelling and sorting each inoculated medium. The challenge for clinical bacteriologists is to determine the ideal automated system for their own laboratory. Indeed, different solutions will be preferred according to the number and variety of samples, and to the types of sample that will be processed with the automated system. The final choice is difficult, because audits proposed by manufacturers risk being biased towards the solution proposed by their own company, and because these automated systems may not easily be tested on site prior to the final decision, owing to the complexity of the computer connections between the laboratory information system and the instrument. This article thus summarizes the main parameters that need to be taken into account in choosing the optimal system, and provides some clues to help clinical bacteriologists make their choice.
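The article's closing point, that each laboratory must weigh the selection parameters against its own sample volume and mix, can be illustrated with a simple weighted-score comparison. The criteria, weights, generic instrument names and scores below are entirely hypothetical and are not taken from the article.

```python
# Illustrative decision sketch only: weighted-score comparison of two
# hypothetical inoculation systems. Every number here is invented; a real
# laboratory would set its own criteria and weights.
criteria_weights = {
    "throughput (samples/day)": 0.35,
    "range of sample types":    0.30,
    "LIS connectivity":         0.20,
    "footprint and cost":       0.15,
}

# hypothetical 1-5 scores per instrument and criterion
scores = {
    "System A": {"throughput (samples/day)": 5, "range of sample types": 3,
                 "LIS connectivity": 4, "footprint and cost": 2},
    "System B": {"throughput (samples/day)": 3, "range of sample types": 5,
                 "LIS connectivity": 3, "footprint and cost": 5},
}

def weighted_score(instrument):
    """Sum of criterion scores weighted by the laboratory's priorities."""
    return sum(criteria_weights[c] * s for c, s in scores[instrument].items())

ranking = sorted(scores, key=weighted_score, reverse=True)
```

Shifting the weights (e.g. towards throughput for a high-volume laboratory) can reverse the ranking, which is precisely why the article argues the choice is laboratory-specific.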

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Arabidopsis thaliana (L.) Heynh. expressing the Crepis palaestina (L.) linoleic acid delta12-epoxygenase in its developing seeds typically accumulates low levels of vernolic acid (12,13-epoxy-octadec-cis-9-enoic acid) in comparison to levels found in seeds of the native C. palaestina. In order to determine some of the factors limiting the accumulation of this unusual fatty acid, we have examined the effects of increasing the availability of linoleic acid (9cis, 12cis-octadecadienoic acid), the substrate of the delta12-epoxygenase, on the quantity of epoxy fatty acids accumulating in transgenic A. thaliana. The addition of linoleic acid to liquid cultures of transgenic plants expressing the delta12-epoxygenase under the control of the cauliflower mosaic virus 35S promoter increased the amount of vernolic acid in vegetative tissues by 2.8-fold. In contrast, the addition to these cultures of linoelaidic acid (9trans, 12trans-octadecadienoic acid), which is not a substrate of the delta12-epoxygenase, resulted in a slight decrease in vernolic acid accumulation. Expression of the delta12-epoxygenase under the control of the napin promoter in the A. thaliana triple mutant fad3/fad7-1/fad9, which is deficient in the synthesis of tri-unsaturated fatty acids and has a 60% higher level of linoleic acid than the wild type, was found to increase the average vernolic acid content of the seeds by 55% compared to the expression of the delta12-epoxygenase in a wild-type background. Together, these results reveal that the availability of linoleic acid is an important factor affecting the synthesis of epoxy fatty acid in transgenic plants.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Previous studies have shown that arbuscular mycorrhizal fungi (AMF) can influence plant diversity and ecosystem productivity. However, little is known about the effects of AMF and different AMF taxa on other important community properties such as nutrient acquisition, plant survival and soil structure. We established experimental grassland microcosms and tested the impact of AMF and of different AMF taxa on a number of grassland characteristics. We also tested whether plant species benefited from the same or different AMF taxa in subsequent growing seasons. AMF enhanced phosphorus acquisition, soil aggregation and survival of several plant species, but AMF did not increase total plant productivity. Moreover, AMF increased nitrogen acquisition by some plant species, but AMF had no effect on total N uptake by the plant community. Plant growth responses to AMF were temporally variable and some plant species obtained the highest biomass with different AMF in different years. Hence the results indicate that it may be beneficial for a plant to be colonized by different AMF taxa in different seasons. This study shows that AMF play a key role in grassland by improving plant nutrition and soil structure, and by regulating the make-up of the plant community.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Achieving a high degree of dependability in complex macro-systems is challenging. Because of the large number of components and the numerous independent teams involved, an overview of global system performance is usually lacking to support both design and operation adequately. A functional failure mode, effects and criticality analysis (FMECA) approach is proposed to address the dependability optimisation of large and complex systems. The basic inductive FMECA model has been enriched to include considerations such as operational procedures, alarm systems, environmental and human factors, as well as operation in degraded mode. Its implementation in a commercial software tool allows active linking between the functional layers of the system and facilitates data processing and retrieval, contributing actively to system optimisation. The proposed methodology has been applied to optimise dependability in a railway signalling system. Signalling systems are a typical example of large complex systems made of multiple hierarchical layers. The proposed approach appears appropriate for assessing the global risk and availability level of the system as well as for identifying its vulnerabilities. This enriched FMECA approach makes it possible to overcome some of the limitations and pitfalls previously reported for classical FMECA approaches.
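The criticality ranking at the core of an FMECA, here tagged with a functional layer as in the enriched, layered approach described above, might be sketched as follows. All failure modes, ratings and layer names are invented examples, not data from the signalling study, and the classic risk priority number (severity x occurrence x detection) stands in for whatever criticality measure the tool actually uses.

```python
# Minimal FMECA-style criticality ranking; all entries are invented examples.
from dataclasses import dataclass

@dataclass
class FailureMode:
    component: str
    layer: str        # functional layer of the (hypothetical) signalling system
    effect: str
    severity: int     # 1 (negligible) .. 10 (catastrophic)
    occurrence: int   # 1 (rare) .. 10 (frequent)
    detection: int    # 1 (always detected) .. 10 (undetectable)

    @property
    def rpn(self):
        # classic risk priority number: severity x occurrence x detection
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("track circuit", "train detection", "ghost occupancy", 7, 4, 3),
    FailureMode("signal lamp", "signal display", "dark signal", 8, 3, 2),
    FailureMode("interlocking CPU", "route logic", "unsafe route set", 10, 1, 2),
]

# rank failure modes by criticality to focus design and maintenance effort
ranked = sorted(modes, key=lambda m: m.rpn, reverse=True)
```

The layer tag is what enables the linking between functional layers described above: grouping `modes` by `layer` gives a per-layer view of where the most critical failure modes concentrate.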