988 results for "Empirical qualitative"
Abstract:
Objectives: To review the epidemiology of native septic arthritis in order to establish local guidelines for empirical antibiotic therapy as part of an antibiotic stewardship programme. Methods: We conducted a 10-year retrospective study based on positive synovial fluid cultures and a discharge diagnosis of septic arthritis in adult patients. Microbiology results and medical records were reviewed. Results: Between 1999 and 2008, we identified 233 episodes of septic arthritis. The predominant causative pathogens were methicillin-susceptible Staphylococcus aureus (MSSA) and streptococci (44.6% and 14.2% of cases, respectively). Only 11 cases (4.7%) of methicillin-resistant S. aureus (MRSA) arthritis were diagnosed, of which 5 (45.5%) occurred in known carriers. For large-joint infections, amoxicillin/clavulanate or cefuroxime would have been appropriate in 84.5% of cases; MRSA and Mycobacterium tuberculosis would have been the most frequent pathogens left uncovered. In contrast, amoxicillin/clavulanate would have been appropriate for only 75.3% of small-joint infections (82.6% if diabetics are excluded), with MRSA and Pseudomonas aeruginosa the main pathogens not covered. Piperacillin/tazobactam would have been appropriate in 93.8% of cases (P < 0.01 versus amoxicillin/clavulanate); this statistically significant advantage is lost after exclusion of diabetics (P = 0.19). Conclusions: Amoxicillin/clavulanate or cefuroxime would be adequate empirical coverage for large-joint septic arthritis in our area. A broad-spectrum antibiotic would be significantly superior for small-joint infections in diabetics. Systematic coverage of MRSA is not justified but should be considered for known carriers. These recommendations apply to our local setting and might also apply to hospitals sharing the same epidemiology.
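A minimal sketch of the kind of comparison reported above (piperacillin/tazobactam vs. amoxicillin/clavulanate coverage), assuming a chi-squared test on a 2x2 table; the abstract does not state which test was used, and the episode counts below are invented to match the reported percentages, not the study's raw data.

```python
# Hypothetical reconstruction of the coverage comparison; all counts are assumed.
from scipy.stats import chi2_contingency

n_small_joint = 81                            # assumed number of small-joint episodes
covered_amc = round(0.753 * n_small_joint)    # 75.3% coverage, amoxicillin/clavulanate
covered_tzp = round(0.938 * n_small_joint)    # 93.8% coverage, piperacillin/tazobactam

table = [
    [covered_amc, n_small_joint - covered_amc],   # covered vs. not covered
    [covered_tzp, n_small_joint - covered_tzp],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")      # abstract reports P < 0.01
```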
Abstract:
Background: Primary care physicians are often asked to assess their patients' fitness to drive, yet little is known about what they need to carry out this task. Aims: To develop theory on the needs, expectations, and barriers surrounding clinical instruments that help physicians assess fitness to drive in primary care. Methods: This qualitative study used semi-structured interviews to investigate needs and expectations for instruments used to assess fitness to drive. From August 2011 to April 2013, we recorded the opinions of five experts in traffic medicine, five primary care physicians, and five senior drivers. All interviews were transcribed in full. Two independent researchers extracted, coded, and stratified categories relying on multi-grounded theory. All participants validated the final scheme. Results: Our theory suggests that for an instrument assessing fitness to drive to be implemented in primary care, it needs to contribute to the decision-making process. This requires at least five conditions: 1) it needs to reduce the range of uncertainty; 2) it needs to be adapted to local resources and possibilities; 3) it needs to be accepted by patients; 4) the choice of tasks needs to be adaptable to clinical conditions; and 5) the interpretation of results needs to remain dependent on each patient's context. Discussion and conclusions: Most existing instruments for assessing fitness to drive are not designed for primary care settings. Future instruments should also aim to support patient-centred dialogue, help anticipate driving cessation, and offer patients the opportunity to freely make their own decision on driving cessation as often as possible.
Abstract:
Discussion on improving the power of genome-wide association studies to identify candidate variants and genes is generally centered on issues of maximizing sample size; less attention is given to the role of phenotype definition and ascertainment. The authors used genome-wide data from patients infected with human immunodeficiency virus type 1 (HIV-1) to assess whether differences in type of population (622 seroconverters vs. 636 seroprevalent subjects) or the number of measurements available for defining the phenotype resulted in differences in the effect sizes of associations between single nucleotide polymorphisms and the phenotype, HIV-1 viral load at set point. The effect estimate for the top 100 single nucleotide polymorphisms was 0.092 (95% confidence interval: 0.074, 0.110) log10 viral load (log10 copies of HIV-1 per mL of blood) greater in seroconverters than in seroprevalent subjects. The difference was even larger when the authors focused on chromosome 6 variants (0.153 log10 viral load) or on variants that achieved genome-wide significance (0.232 log10 viral load). The estimates of the genetic effects tended to be slightly larger when more viral load measurements were available, particularly among seroconverters and for variants that achieved genome-wide significance. Differences in phenotype definition and ascertainment may affect the estimated magnitude of genetic effects and should be considered in optimizing power for discovering new associations.
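The headline number here is a difference in per-SNP effect sizes between two ascertainment groups. A minimal sketch of that contrast on simulated effect estimates; the betas and their spread are placeholders, not the study's GWAS output.

```python
# Contrast per-SNP effects between two ascertainment groups (simulated data).
import numpy as np

rng = np.random.default_rng(0)
beta_sc = rng.normal(0.25, 0.05, 100)   # assumed top-100 effects, seroconverters
beta_sp = rng.normal(0.16, 0.05, 100)   # assumed top-100 effects, seroprevalent

diff = beta_sc - beta_sp                # per-SNP difference in log10 viral load
mean = diff.mean()
se = diff.std(ddof=1) / np.sqrt(diff.size)
lo, hi = mean - 1.96 * se, mean + 1.96 * se   # normal-approximation 95% CI
print(f"mean difference = {mean:.3f} log10 VL (95% CI {lo:.3f}, {hi:.3f})")
```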
Abstract:
Advancements in high-throughput technologies to measure increasingly complex biological phenomena at the genomic level are rapidly changing the face of biological research from the single-gene, single-protein experimental approach to studying the behavior of a gene in the context of the entire genome (and proteome). This shift in research methodologies has resulted in a new field of network biology that deals with modeling cellular behavior in terms of network structures such as signaling pathways and gene regulatory networks. In these networks, different biological entities such as genes, proteins, and metabolites interact with each other, giving rise to a dynamical system. Even though there exists a mature field of dynamical systems theory to model such network structures, some technical challenges are unique to biology, such as the inability to measure precise kinetic information on gene-gene or gene-protein interactions and the need to model increasingly large networks comprising thousands of nodes. These challenges have renewed interest in developing new computational techniques for modeling complex biological systems. This chapter presents a modeling framework based on Boolean algebra and finite-state machines, reminiscent of the approach used for digital circuit synthesis and simulation in the field of very-large-scale integration (VLSI). The proposed formalism enables a common mathematical framework for developing computational techniques to model different aspects of regulatory networks, such as steady-state behavior, stochasticity, and gene perturbation experiments.
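To make the formalism concrete, here is a minimal sketch of a synchronous Boolean network in the spirit of the chapter's digital-circuit analogy. The three genes and their update rules are invented for illustration, and the attractor search simply iterates each start state until a state repeats; this shows the general technique, not the chapter's specific framework.

```python
# Toy synchronous Boolean network: genes as binary variables, update rules as
# Boolean functions, attractors found by iterating until a state recurs.
from itertools import product

RULES = {                      # next state of each gene, given current state s
    "A": lambda s: s["B"] and not s["C"],
    "B": lambda s: s["A"] or s["C"],
    "C": lambda s: not s["A"],
}
GENES = sorted(RULES)

def step(state):
    """One synchronous update of all genes."""
    return {g: int(RULES[g](state)) for g in GENES}

def attractor(state, max_iter=100):
    """Iterate until a previously seen state recurs; return the cycle."""
    seen = []
    for _ in range(max_iter):
        if state in seen:
            return seen[seen.index(state):]      # steady state or limit cycle
        seen.append(state)
        state = step(state)
    return None

# Enumerate all 2^3 start states and report the attractor each one reaches.
for bits in product([0, 1], repeat=len(GENES)):
    s0 = dict(zip(GENES, bits))
    cyc = attractor(s0)
    print(bits, "->", [tuple(c[g] for g in GENES) for c in cyc])
```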
Abstract:
Summary: This dissertation explores how stakeholder dialogue influences corporate processes, and speculates about the potential of this phenomenon - particularly with actors, like non-governmental organizations (NGOs) and other representatives of civil society, which have received growing attention against a backdrop of increasing globalisation and which have often been cast in an adversarial light by firms - as a source of learning and a spark for innovation in the firm. The study is set within the context of the introduction of genetically modified organisms (GMOs) in Europe. Its significance lies in the fact that scientific developments and new technologies are being generated at an unprecedented rate in an era where civil society is becoming more informed, more reflexive, and more active in facilitating or blocking such new developments, which could have the potential to trigger widespread changes in economies, attitudes, and lifestyles, and to address global problems like poverty, hunger, climate change, and environmental degradation. In the 1990s, companies using biotechnology to develop and offer novel products began to experience increasing pressure from civil society to disclose information about the risks associated with the use of biotechnology and GMOs in particular. Although no harmful effects for humans or the environment have been factually demonstrated even to date (2008), this technology remains highly contested, and its introduction in Europe catalysed major companies to invest significant financial and human resources in stakeholder dialogue. A relatively new phenomenon at the time, with little theoretical backing, dialogue was seen to reflect a move towards greater engagement with stakeholders, commonly defined as those "individuals or groups with which business interacts who have a 'stake', or vested interest in the firm" (Carroll, 1993:22), with whom firms are seen to be inextricably embedded (Andriof & Waddock, 2002). Regarding the organisation of this dissertation, Chapter 1 (Introduction) describes the context of the study and elaborates its significance for academics and business practitioners as an empirical work embedded in a sector at the heart of the debate on corporate social responsibility (CSR). Chapter 2 (Literature Review) traces the roots and evolution of CSR, drawing on Stakeholder Theory, Institutional Theory, Resource Dependence Theory, and Organisational Learning to establish what has already been developed in the literature regarding the stakeholder concept, motivations for engagement with stakeholders, the corporate response to external constituencies, and outcomes for the firm in terms of organisational learning and change. I used this review of the literature to guide my inquiry and to develop the key constructs through which I viewed the empirical data that was gathered. In this respect, concepts related to how the firm views itself (as victim, follower, or leader), how stakeholders are viewed (as a source of pressure and/or threat; as an asset, current or future), corporate responses (in the form of buffering, bridging, or boundary redefinition), and types of organisational learning (single-loop, double-loop, triple-loop) and change (first-order, second-order, third-order) were particularly important in building the key constructs of the conceptual model that emerged from the analysis of the data.
Chapter 3 (Methodology) describes the methodology that was used to conduct the study, affirms the appropriateness of the case study method in addressing the research question, and describes the procedures for collecting and analysing the data. Data collection took place in two phases - extending from August 1999 to October 2000, and from May to December 2001 - which functioned as 'snapshots' in time of the three companies under study. The data was systematically analysed and coded using ATLAS/ti, a qualitative data analysis tool, which enabled me to sort, organise, and reduce the data into a manageable form. Chapter 4 (Data Analysis) contains the three cases that were developed (anonymised as Pioneer, Helvetica, and Viking). Each case is presented in its entirety (constituting a 'within-case' analysis), followed by a 'cross-case' analysis, backed up by extensive verbatim evidence. Chapter 5 presents the research findings, outlines the study's limitations, describes managerial implications, and offers suggestions for where more research could elaborate the conceptual model developed through this study, as well as suggestions for additional research in areas where managerial implications were outlined. References and Appendices are included at the end. This dissertation results in the construction and description of a conceptual model, grounded in the empirical data and tied to existing literature, which portrays a set of elements and relationships deemed important for understanding the impact of stakeholder engagement for firms in terms of organisational learning and change. This model suggests that corporate perceptions about the nature of stakeholders influence the perceived value of stakeholder contributions. When stakeholders are primarily viewed as a source of pressure or threat, firms tend to adopt a reactive/defensive posture in an effort to manage stakeholders and protect the firm from sources of outside pressure - behaviour consistent with Resource Dependence Theory, which suggests that firms try to gain control over external threats by focussing on the relevant stakeholders on whom they depend for critical resources, and try to reverse the control potentially exerted by external constituencies by trying to influence and manipulate these valuable stakeholders. In situations where stakeholders are viewed as a current strategic asset, firms tend to adopt a proactive/offensive posture in an effort to tap stakeholder contributions and connect the organisation to its environment - behaviour consistent with Institutional Theory, which suggests that firms try to ensure their continuing licence to operate by internalising external expectations. In instances where stakeholders are viewed as a source of future value, firms tend to adopt an interactive/innovative posture in an effort to reduce or widen the embedded system and bring stakeholders into systems of innovation and feedback - behaviour consistent with the literature on Organisational Learning, which suggests that firms can learn how to optimise their performance as they develop systems and structures that are more adaptable and responsive to change. The conceptual model moreover suggests that the perceived value of stakeholder contributions drives corporate aims for engagement, which can be usefully categorised as dialogue intentions spanning a continuum from low-level to high-level to very-high-level.
This study suggests that activities aimed at disarming critical stakeholders ('manipulation'), providing guidance and correcting misinformation ('education'), being transparent about corporate activities and policies ('information'), alleviating stakeholder concerns ('placation'), and accessing stakeholder opinion ('consultation') represent low-level dialogue intentions and are experienced by stakeholders as asymmetrical, persuasive, compliance-gaining activities that are not in line with 'true' dialogue. This study also finds evidence that activities aimed at redistributing power ('partnership'), involving stakeholders in internal corporate processes ('participation'), and demonstrating corporate responsibility ('stewardship') reflect high-level dialogue intentions. This study additionally finds evidence that building and sustaining high-quality, trusted relationships that can meaningfully influence organisational policies inclines a firm towards the type of interactive, proactive processes that underpin the development of sustainable corporate strategies. Dialogue intentions are related to the type of corporate response: low-level intentions can lead to buffering strategies; high-level intentions can underpin bridging strategies; very-high-level intentions can incline a firm towards boundary redefinition. The nature of the corporate response (which encapsulates a firm's posture towards stakeholders, demonstrated by the level of dialogue intention and the firm's strategy for dealing with stakeholders) favours the type of learning and change experienced by the organisation. This study indicates that buffering strategies, where the firm attempts to protect itself against external influences and carry out its existing strategy, typically lead to single-loop learning, whereby the firm learns how to perform better within its existing paradigm and, at most, improves the performance of the established system - an outcome associated with first-order change. Bridging responses, where the firm adapts organisational activities to meet external expectations, typically lead a firm to acquire new behavioural capacities characteristic of double-loop learning, whereby insights and understanding are uncovered that are fundamentally different from existing knowledge and where stakeholders are brought into problem-solving conversations that enable them to influence corporate decision-making to address shortcomings in the system - an outcome associated with second-order change. Boundary redefinition suggests that the firm engages in triple-loop learning, where the firm changes relations with stakeholders in profound ways, considers problems from a whole-system perspective, examines the deep structures that sustain the system, and produces innovation to address chronic problems and develop new opportunities - an outcome associated with third-order change. This study supports earlier theoretical and empirical studies (e.g., Weick's (1979, 1985) work on self-enactment; Maitlis & Lawrence's (2007), Maitlis' (2005), and Weick et al.'s (2005) work on sensegiving and sensemaking in organisations; Brickson's (2005, 2007) and Scott & Lane's (2000) work on organisational identity orientation), which indicate that corporate self-perception is a key underlying factor driving the dynamics of organisational learning and change.
Such theorizing has important implications for managerial practice; namely, that a company which perceives itself as a 'victim' may be highly inclined to view stakeholders as a source of negative influence, and would therefore be potentially unable to benefit from the positive influence of engagement. Such a self-perception can blind the firm to seeing stakeholders in a more positive, contributing light, which suggests that such firms may not be inclined to embrace external sources of innovation and learning, as they are focussed on protecting the firm against disturbing environmental influences (through buffering) and remain more likely to perform better within an existing paradigm (single-loop learning). By contrast, a company that perceives itself as a 'leader' may be highly inclined to view stakeholders as a source of positive influence. On the downside, such a firm might have difficulty distinguishing when stakeholder contributions are less pertinent, as it is deliberately more open to elements in its operating environment (including stakeholders) as potential sources of learning and change, being oriented towards creating space for fundamental change (through boundary redefinition), opening issues to entirely new ways of thinking, and addressing issues from a whole-system perspective. A significant implication of this study is that potentially only those companies that see themselves as leaders are ultimately able to tap the innovation potential of stakeholder dialogue.
Abstract:
STUDY DESIGN: Retrospective radiologic study on a prospective patient cohort. OBJECTIVE: To devise a qualitative grading of lumbar spinal stenosis (LSS) and to study its reliability and clinical relevance. SUMMARY OF BACKGROUND DATA: Radiologic stenosis is commonly assessed by measuring the dural sac cross-sectional area (DSCA), yet great variation is observed in the areas recorded between symptomatic and asymptomatic individuals. METHODS: We describe a 7-grade classification based on the morphology of the dural sac, as observed on T2 axial magnetic resonance images, using the rootlet/cerebrospinal fluid ratio. Grades A and B show cerebrospinal fluid presence, while grades C and D show none at all. The grading was applied to magnetic resonance images of 95 subjects divided into 3 groups as follows: 37 symptomatic LSS surgically treated patients; 31 symptomatic LSS conservatively treated patients (average follow-up, 2.5 and 3.1 years); and 27 low back pain (LBP) sufferers. DSCA was also digitally measured. We studied intra- and interobserver reliability, the distribution of grades, the relation between morphologic grading and DSCA, and the relation between grades, DSCA, and the Oswestry Disability Index. RESULTS: Average intra- and interobserver agreement was substantial and moderate, respectively (k = 0.65 and 0.44), and both were substantial for physicians working in the study's originating unit. Surgical patients had the smallest DSCA. A larger proportion of C and D grades was observed in the surgical group. Area measurements resulted in overdiagnosis of stenosis in 35 patients and underdiagnosis in 12. No relation could be found between stenosis grade or DSCA and baseline Oswestry Disability Index or surgical result. Grade C and D patients were more likely to fail conservative treatment, whereas grade A and B patients were less likely to warrant surgery. CONCLUSION: The grading identifies stenosis in different subjects than area measurements alone. Since it mainly considers impingement of neural tissue, it might be a more appropriate clinical and research tool and may also carry prognostic value.
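The agreement figures quoted above (k = 0.65 and 0.44) are kappa statistics. A minimal sketch of how such a value is computed between two raters, with made-up grade assignments rather than the study's actual readings:

```python
# Cohen's kappa between two raters' stenosis grades (illustrative data only).
from sklearn.metrics import cohen_kappa_score

rater1 = ["A", "B", "C", "C", "D", "B", "A", "C", "D", "B"]
rater2 = ["A", "B", "C", "B", "D", "B", "A", "D", "D", "C"]

kappa = cohen_kappa_score(rater1, rater2)
print(f"kappa = {kappa:.2f}")   # 0.61-0.80 is 'substantial' on the Landis & Koch scale
```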
Abstract:
Some faculty members from different universities around the world have begun to use Wikipedia as a teaching tool in recent years. These experiences show, in most cases, very satisfactory results and a substantial improvement in various basic skills, as well as a positive influence on the students' motivation. Nevertheless, and despite the growing importance of e-learning methodologies based on the use of the Internet for higher education, the use of Wikipedia as a teaching resource remains scarce among university faculty. Our investigation tries to identify the main factors that determine acceptance of, or resistance to, that use. We approach the decision to use Wikipedia as a teaching tool by analyzing both the individual attributes of faculty members and the characteristics of the environment where they develop their teaching activity. From a specific survey sent to all faculty of the Universitat Oberta de Catalunya (UOC), a pioneer and leader in online education in Spain, we have tried to infer the influence of these internal and external elements. The questionnaire was designed to measure different constructs: perceived quality of Wikipedia, teaching practices involving Wikipedia, use experience, perceived usefulness, and use of web 2.0 tools. Control items were also included to gather information on gender, age, teaching experience, academic rank, and area of expertise. Our results reveal that academic rank, teaching experience, age, and gender are not decisive factors in explaining the educational use of Wikipedia. Instead, the decision to use it is closely linked to the perception of Wikipedia's quality, the use of other collaborative learning tools, an active attitude towards web 2.0 applications, and connections with the professional non-academic world. Situational context is also very important, since use is higher when faculty members have reference models in their close environment and when they perceive that it is positively valued by their colleagues. Because these attitudes, practices, and cultural norms diverge across scientific disciplines, we have also detected clear differences in the use of Wikipedia among areas of academic expertise. As a consequence, a greater application of Wikipedia, both as a teaching resource and as a driver of teaching innovation, would require much more active institutional policies and some changes in the dominant academic culture among faculty members.
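The abstract does not name its statistical model, but inferring the influence of such factors on a binary outcome (using Wikipedia in teaching or not) is commonly done with logistic regression. A sketch on simulated survey data, with predictor names taken from the abstract's constructs; everything numeric here is a placeholder.

```python
# Logistic regression of teaching use of Wikipedia on survey constructs
# (simulated placeholder data, not the UOC survey).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 300
X = np.column_stack([
    rng.normal(size=n),   # perceived quality of Wikipedia
    rng.normal(size=n),   # use of other collaborative / web 2.0 tools
    rng.normal(size=n),   # perceived approval by colleagues (context)
])
logit = 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.9 * X[:, 2]   # assumed true effects
y = rng.random(n) < 1 / (1 + np.exp(-logit))            # 1 = uses Wikipedia in teaching

model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_.round(2))
```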
Abstract:
Extensive gene flow between wheat (Triticum sp.) and several wild relatives of the genus Aegilops has recently been detected, despite notoriously high levels of selfing in these species. Here, we assess and model the spread of wheat alleles into natural populations of the barbed goatgrass (Aegilops triuncialis), a wild wheat relative prevailing in the Mediterranean flora. Our sampling, based on an extensive survey of 31 Ae. triuncialis populations collected along a 60 km × 20 km area in southern Spain (Grazalema mountain chain, Andalusia; 458 specimens in total), is completed with 33 wheat cultivars representative of the European domesticated pool. All specimens were genotyped with amplified fragment length polymorphism (AFLP) markers with the aim of estimating wheat admixture levels in Ae. triuncialis populations. This survey first confirmed extensive hybridization and backcrossing of wheat into the wild species. We then used explicit modelling of populations and approximate Bayesian computation to estimate the selfing rate of Ae. triuncialis along with the magnitude, the tempo, and the geographical distance over which wheat alleles introgress into Ae. triuncialis populations. These simulations confirmed that extensive introgression of wheat alleles into Ae. triuncialis (2.7 × 10⁻⁴ wheat immigrants per Ae. triuncialis resident, each generation) occurs despite a high selfing rate (FIS ≈ 1, selfing rate = 97%). These results are discussed in the light of the risks associated with the release of genetically modified wheat cultivars in Mediterranean agrosystems.
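The approximate Bayesian computation step can be illustrated with a minimal rejection sampler: draw parameters from priors, simulate a summary statistic, and keep draws whose simulation lands near the observed value. The priors, toy simulator, observed summary, and tolerance below are all assumptions for illustration; the study used an explicit population-genetic model.

```python
# Rejection-ABC sketch for (selfing rate, immigration rate); toy simulator.
import numpy as np

rng = np.random.default_rng(1)
obs_admixture = 0.05            # assumed observed wheat-admixture summary statistic

def simulate(selfing, migration):
    # Toy stand-in model: admixture rises with migration, is diluted by selfing.
    return migration * (1 - 0.5 * selfing) + rng.normal(0, 0.005)

accepted = []
for _ in range(100_000):
    selfing = rng.uniform(0.8, 1.0)       # prior: high selfing, as in Aegilops
    migration = rng.uniform(0.0, 0.2)     # prior: per-generation immigration rate
    if abs(simulate(selfing, migration) - obs_admixture) < 0.002:   # tolerance
        accepted.append((selfing, migration))

post = np.array(accepted)
print("posterior means: selfing = %.3f, migration = %.4f"
      % (post[:, 0].mean(), post[:, 1].mean()))
```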
Abstract:
Summary: The effectiveness of agricultural environmental policy reforms in reducing nutrient emissions - a theoretical and empirical analysis
Abstract:
The aim of this study was to examine how the oversight decisions of the Auditing Board of the Central Chamber of Commerce (Keskuskauppakamarin tilintarkastuslautakunta) and the State Auditing Board (Valtion tilintarkastuslautakunta) shape generally accepted auditing practice. This aim was pursued through sub-objectives: a) identifying the requirements set for auditing and the principles governing the oversight of auditors; b) examining, through the oversight decisions, auditors' duties: independence, confidentiality, and professional competence and due care; c) investigating which issues the two boards emphasise and raise in connection with their oversight decisions; and d) investigating what these oversight bodies focus on when handling cases and what their decisions are based on. The study is qualitative and descriptive in nature. It is also norm-based, since auditing is governed by a range of laws, decrees, and recommendations. The empirical material consists of the published oversight decisions of the two boards from 1995 to 2004. The study shows that oversight decisions do shape generally accepted auditing practice: the Auditing Board publishes position statements prompted by its decisions, issues recommendations and guidance supplementing auditing regulation where needed, and states its own position on ambiguous questions. The oversight decisions are based on the legislation in force, the recommendations of the auditing profession, and the bodies' earlier decisions. In assessing an auditor's conduct, the materiality of the auditor's error is of central importance. Particular attention is also paid to the auditor's work as a whole and to whether the error could have given an outside party a mistaken impression. The decisions emphasise the special audit responsibility attached to foundations, publicly traded companies, and special-purpose audits.
Abstract:
Internet governance is a recent issue in global politics. However, it has gradually become a major political and economic issue, taking on particular importance in recent months as a recurrent news topic. Against this background, this research outlines the history of Internet governance from its emergence as a political issue in the 1980s to the end of the World Summit on the Information Society (WSIS) in 2005. Rather than focusing on one or another institution involved in Internet governance, this research analyses the emergence and historical evolution of a space of struggle drawing in a growing number of different actors. This evolution is described through the analysis of the dialectical relation between elites and non-elites and through the struggle around the definition of Internet governance. The thesis explores the question of how the relations among the elites of Internet governance, and between these elites and non-elites, explain the emergence, evolution, and structuration of a relatively autonomous field of world politics centred on Internet governance. Against dominant realist and liberal perspectives, this research draws upon a cross-fertilisation of heterodox international political economy and international political sociology, articulated around the concepts of field, elites, and hegemony. The concept of field, as developed by Bourdieu, is increasingly used in International Relations to build a differentiated analysis of globalisation and to describe the emergence of transnational spaces of struggle and domination. Elite sociology allows for a pragmatic, actor-centred analysis of the issue of power in the globalisation process. This research particularly draws on Wright Mills' concept of the power elite in order to explore the unification of a priori different elites around shared projects. Finally, this thesis uses the Neo-Gramscian concept of hegemony in order to study both the relative stability of elite power guaranteed by the consensual dimension of domination and the seeds of change contained in any international order. Through the analysis of the documents produced within the period studied, and through the creation of databases of networks of actors, this research focuses on the debates that followed the commercialisation of the Internet in the early 1990s and on the negotiations during the WSIS. The first period led to the creation of the Internet Corporation for Assigned Names and Numbers (ICANN) in 1998. This creation resulted from consensus-building between the dominant discourses of the 1990s, and from a coalition of interests within an emerging power elite of Internet governance. However, this institutionalisation of Internet governance around ICANN excluded a number of actors and discourses that have since tried to overturn this order. The WSIS became the institutional framework within which the governance system was questioned by excluded states, scholars, NGOs, and intergovernmental organisations; this is why the WSIS constitutes the second historical period studied in this thesis.
The confrontation between the power elite and counter-elites during the WSIS triggered a reconfiguration of the power elite as well as a redefinition of the boundaries of the field. A new hegemonic project emerged around discursive elements such as multistakeholderism and institutional elements such as the Internet Governance Forum. The relative success of this project has allowed for an unprecedented institutional stability since the end of the WSIS and an acceptance of the elite discourse by a large number of actors in the field. It is only recently that this order has begun to be questioned by the emerging powers of Internet governance. This research provides three main contributions to the scientific debate. On the theoretical level, it contributes to the emergence of a dialogue between International Political Economy and International Political Sociology perspectives in order to analyse both the structural trends of the globalisation process and the situated practices of actors in a given issue-area. It notably stresses the contribution of the notions of field and power elite and their compatibility with Neo-Gramscian analyses of hegemony. On the methodological level, this dialogue translates into the use of mixed methods, combining qualitative content analysis with social network analysis of actors and statements. Finally, on the empirical level, this research provides an original perspective on Internet governance: it stresses the historical dimension of current Internet governance arrangements, demonstrates the fragility of the concept of multistakeholder governance, and focuses on power relations and the links between Internet governance and globalisation.
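As a toy illustration of the actor-network side of that mixed-methods design, the sketch below builds a small actor-statement graph and ranks actors by degree centrality. The actors, positions, and the choice of centrality measure are invented for illustration, not taken from the thesis's databases.

```python
# Bipartite actor-statement graph; degree centrality as a crude reach measure.
import networkx as nx

edges = [  # (actor, position) pairs: who endorsed which discourse
    ("ICANN", "technical self-regulation"),
    ("US government", "technical self-regulation"),
    ("ITU", "intergovernmental oversight"),
    ("civil society NGO", "multistakeholderism"),
    ("ICANN", "multistakeholderism"),
    ("EU", "multistakeholderism"),
]
G = nx.Graph(edges)

for node, c in sorted(nx.degree_centrality(G).items(), key=lambda x: -x[1]):
    print(f"{c:.2f}  {node}")
```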
Abstract:
Abstract: To understand the processes of evolution, biologists are interested in the ability of a population to respond to natural or artificial selection. The amount of genetic variation is often viewed as the main factor allowing a species to respond to selection, and many theories have therefore focused on the maintenance of genetic variability. Ecologists and population geneticists have long suspected that the structure of the environment is connected to the maintenance of diversity. Theorists have shown that, under certain conditions, diversity can be permanently and stably maintained in temporally and spatially varying environments. Moreover, varying environments have been theoretically shown to cause the evolution of divergent life-history strategies in the different niches constituting the environment. Although there is a large number of theoretical studies on selection and on life-history evolution in heterogeneous environments, there is a clear lack of empirical studies. The purpose of this thesis was to empirically study the evolutionary consequences of a heterogeneous environment in a freshwater snail, Galba truncatula. G. truncatula lives in two habitat types that differ in water availability. First, it can be found in streams or ponds that never completely dry out: a permanent habitat. Second, it can be found in pools that freeze during winter and dry out during summer: a temporary habitat. Using a common-garden approach, we empirically demonstrated local adaptation of G. truncatula to temporary and permanent habitats. We first compared molecular (FST) and quantitative (QST) genetic differentiation between temporary and permanent habitats. To confirm the pattern QST > FST between habitats, which suggests local adaptation, we then tested the desiccation resistance of individuals from temporary and permanent habitats. This study confirmed that drought resistance seemed to be the main trait under selection between habitats, and life-history traits linked to desiccation resistance were accordingly found to diverge between habitats. However, despite this evidence of selection acting on trait means between habitats, drift appeared to be the main factor responsible for variation in variance-covariance structure between populations. Finally, we found variation in the life-history traits of individuals in a heterogeneous environment varying in parasite prevalence. This thesis empirically demonstrated the importance of heterogeneous environments for local adaptation and life-history evolution, and suggests that more experimental studies are needed on this topic.
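The QST side of the QST/FST comparison can be computed from variance components. A minimal sketch using the standard outbred-population formula QST = V_B / (V_B + 2·V_W) on invented trait values; note that under near-complete selfing, as in this species, the weighting of the within-population component differs.

```python
# QST from between- and within-population variance components (toy trait data).
import numpy as np

pops = [np.array([3.1, 3.4, 3.0, 3.3]),   # e.g. a desiccation-resistance trait,
        np.array([4.2, 4.5, 4.1, 4.4]),   # one array per population
        np.array([3.2, 3.5, 3.1, 3.6])]

grand = np.concatenate(pops).mean()
v_between = np.mean([(p.mean() - grand) ** 2 for p in pops])   # naive V_B estimate
v_within = np.mean([p.var(ddof=1) for p in pops])              # mean V_W

qst = v_between / (v_between + 2 * v_within)
print(f"QST = {qst:.2f}")   # compared against molecular FST in the study
```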
Abstract:
BACKGROUND: Medication adherence has been identified as an important factor for clinical success. Twenty-four Swiss community pharmacists participated in the implementation of an adherence support programme for patients with hypertension, diabetes mellitus and/or dyslipidemia. The programme combined tailored consultations with patients about medication taking (expected at an average of one intervention per month) with the delivery of each drug in an electronic monitoring system (MEMS6). OBJECTIVE: To explore pharmacists' perceptions of and experiences with the implementation of the medication adherence programme, and to clarify why only seven patients were enrolled in total. SETTING: Community pharmacies in French-speaking Switzerland. METHOD: Individual in-depth interviews were audio-recorded with 20 of the pharmacists who participated in the adherence programme. These were transcribed verbatim, coded, and thematically analysed. Process quality was ensured by using an audit trail detailing the development of codes and themes; furthermore, each step in the coding and analysis was verified by a second, experienced qualitative researcher. MAIN OUTCOME MEASURE: Community pharmacists' experiences and perceptions of the determining factors influencing the implementation of the adherence programme. RESULTS: Four major barriers were identified: (1) poor communication with patients, resulting in insufficient promotion of the programme; (2) insufficient collaboration with physicians; (3) difficulty in integrating the programme into pharmacy organisation; and (4) insufficient pharmacist motivation, related to remuneration perceived as insufficient and to the absence of clear strategic thinking about the pharmacist's position in the health care system. One major facilitator of the programme's implementation was pre-existing collaboration with physicians. CONCLUSION: A wide range of barriers was identified. The implementation of medication adherence programmes in Swiss community pharmacies would benefit from extended training aimed at developing communication and change-management skills. Individualised on-site support addressing the relevant barriers would also be necessary throughout the implementation process.