124 results for concept of man
Abstract:
The focus of my PhD research was the concept of modularity. In the last 15 years, modularity has become a classic term in different fields of biology. On the conceptual level, a module is a set of interacting elements that remain mostly independent from the elements outside of the module. I used modular analysis techniques to study gene expression evolution in vertebrates. In particular, I identified "natural" modules of gene expression in mouse and human, and I showed that the expression of organ-specific and system-specific genes tends to be conserved between vertebrates as distant as mammals and fishes. Also with a modular approach, I studied patterns of developmental constraints on transcriptome evolution. I showed that neither of the two commonly accepted models of the evolution of embryonic development ("evo-devo") is exclusively valid. In particular, I found that the conservation of the sequences of regulatory regions is highest during mid-development of zebrafish, which supports the "hourglass model". In contrast, events of gene duplication and new gene introduction are rarest in early development, which supports the "early conservation model". In addition to the biological insights on transcriptome evolution, I have also discussed in detail the advantages of modular approaches in large-scale data analysis. Moreover, I re-analyzed several studies (published in high-ranking journals) and showed that their conclusions do not hold up under detailed analysis. This demonstrates that complex analysis of high-throughput data requires cooperation between biologists, bioinformaticians, and statisticians.
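The abstract does not spell out how the "natural" expression modules were identified; purely as a hedged illustration, the sketch below shows one common way co-expression modules can be extracted, by hierarchically clustering a gene-gene correlation matrix. The data, distance threshold and variable names are assumptions for illustration, not the pipeline actually used in the thesis.

```python
# Minimal sketch of one common way to identify co-expression "modules":
# hierarchical clustering of a gene-gene correlation matrix.
# Data, thresholds and variable names are illustrative assumptions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
expression = rng.normal(size=(200, 30))   # 200 genes x 30 samples (placeholder data)

corr = np.corrcoef(expression)            # gene-gene correlation matrix
dist = 1.0 - np.abs(corr)                 # distance: strongly (anti)correlated genes are close
np.fill_diagonal(dist, 0.0)

tree = linkage(squareform(dist, checks=False), method="average")
modules = fcluster(tree, t=0.7, criterion="distance")  # cut the tree into modules

for m in np.unique(modules):
    print(f"module {m}: {np.sum(modules == m)} genes")
```

Cutting the tree at a stricter or looser threshold trades module size against within-module coherence; real analyses would also assess whether modules are reproducible across species or datasets.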
Abstract:
This research was conducted in the context of the project IRIS 8A Health and Society (2002-2008) and financially supported by the University of Lausanne. It was aimed at developing a model based on elderly people's experience, which allowed us to develop a "Portrait evaluation" of fear of falling using their examples and words. It is a very simple evaluation, which can be used by professionals but also by elderly people themselves. The "Portrait evaluation" and the user's guide are freely accessible, but we would very much appreciate knowing whether other people or scientists have used it, and we welcome their comments (contact: Chantal.Piot-Ziegler@unil.ch). The purpose of this study is to create a model grounded in elderly people's experience, allowing the development of an original instrument to evaluate fear of falling (FOF). In a previous study, 58 semi-structured interviews were conducted with community-dwelling elderly people. The qualitative thematic analysis showed that fear of falling was defined through the functional, social and psychological long-term consequences of falls (Piot-Ziegler et al., 2007). In order to reveal patterns in the expression of fear of falling, an original qualitative thematic pattern analysis (QUAlitative Pattern Analysis - QUAPA) is developed and applied to these interviews. The results of this analysis show an internal coherence across the three dimensions (functional, social and psychological). Four different patterns are found, corresponding to four degrees of fear of falling. They are formalized in a fear-of-falling intensity model. This model leads to a portrait evaluation for fallers and non-fallers. The evaluation must still be tested on large samples of elderly people living in different environments. It presents an original alternative to the concept of self-efficacy for evaluating fear of falling in older people. The model of FOF presented in this article is grounded in elderly people's experience. It gives an experiential description of the three dimensions constitutive of FOF and of their evolution as fear increases, and defines an evaluation tool using situations and wordings based on elderly people's discourse.
Abstract:
The motivation for this research arose from the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and business applications due to their significantly lower cost than their predecessors, the mainframes. Later, industrial automation developed its own vertically integrated hardware and software to address the application needs of uninterrupted operations, real-time control and resilience to harsh environmental conditions. This has led to the creation of an independent industry, namely industrial automation as used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communication networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption system-on-chip (SoC) architecture. Unlike in the CISC world, RISC processor architecture design is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which gives customers more choice thanks to hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An underlying additional element of this transition is the increasing role of open source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominant closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed - all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries.
Industrial automation incumbents continue to supply vertically integrated systems consisting of proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability levels on a very narrow customer base, due to strong technology-enabled customer lock-in and customers' high risk exposure, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately, the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation, subject to the competition between the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and second through research on process re-engineering in the case of complex-system global software support. Third, we investigate the views of industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and conclude with our assessment of the possible routes industrial automation could take, considering the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focusing on maintaining their own proprietary solutions. The rise of de facto standards like the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created the new markets of personal computers, smartphones and tablets, and will eventually also impact industrial automation through game-changing commoditization and related control-point and business-model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.
Abstract:
Glucose-induced thermogenesis (GIT) after a 100-g oral glucose load was measured by continuous indirect calorimetry in 32 nondiabetic and diabetic obese subjects and compared to 17 young and 13 middle-aged control subjects. The obese subjects were divided into three groups: A (n = 12) with normal glucose tolerance, B (n = 13) with impaired glucose tolerance, and C (n = 7) diabetics, and were studied before and after a body weight loss ranging from 9.6 to 33.5 kg following a 4- to 6-month hypocaloric diet. GIT, measured over 3 h and expressed as a percentage of the energy content of the load, was significantly reduced in obese groups A and C (6.2 +/- 0.6 and 3.8 +/- 0.7%, respectively) when compared to their age-matched control groups: 8.6 +/- 0.7 (young) and 5.8 +/- 0.3% (middle-aged). Obese group B had a GIT of 6.1 +/- 0.6%, which was lower than that of the young control group but not different from that of the middle-aged control group. After weight loss, GIT in the obese was further reduced in groups A and B compared with before weight loss, i.e., 3.4 +/- 0.6% (p < 0.001) and 3.7 +/- 0.5% (p < 0.01), respectively, whereas in group C weight loss induced no further decrease in GIT (3.8 +/- 0.6%). These results support the concept of a thermogenic defect after glucose ingestion in obese individuals that is not the consequence of their excess body weight but may be one of the factors favoring the relapse of obesity after weight loss.
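As a worked illustration of the quantity reported above (GIT expressed as a percentage of the energy content of the 100-g load), the sketch below converts hypothetical indirect-calorimetry readings into a GIT percentage. The energy density of glucose is a standard approximate value, and all expenditure figures are made up for illustration; none are taken from the study.

```python
# Illustrative calculation of glucose-induced thermogenesis (GIT) as a percentage
# of the energy content of a 100-g oral glucose load.
# The energy density of glucose (~15.7 kJ/g, i.e. ~3.75 kcal/g) is a standard value;
# the energy-expenditure figures below are made-up numbers for illustration only.

GLUCOSE_KJ_PER_G = 15.7          # approximate energy density of glucose
load_g = 100.0                   # oral glucose load
load_energy_kj = load_g * GLUCOSE_KJ_PER_G

# Hypothetical indirect-calorimetry readings (kJ/min)
basal_ee_kj_per_min = 4.8        # resting energy expenditure before the load
postload_ee_kj_per_min = 5.35    # mean energy expenditure during the 3 h after the load
duration_min = 180

extra_energy_kj = (postload_ee_kj_per_min - basal_ee_kj_per_min) * duration_min
git_percent = 100.0 * extra_energy_kj / load_energy_kj

print(f"GIT = {git_percent:.1f}% of the load's energy content")
# -> roughly 6%, i.e. in the range reported for the normal-glucose-tolerance obese group
```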
Abstract:
OBJECTIVE: To discuss the difficulty of using the concept of sepsis for clinical trials and to propose new ways of designing future trials for severe infections. DESIGN: Short position statement. METHODS AND MAIN RESEARCH: Using a thorough evaluation of the recent literature in the field of severe sepsis and septic shock, the authors challenge the concept of sepsis as used in the past two decades and propose new ideas for designing future trials in this setting. The two main proposals are, first, to use a systematic assessment of the targeted inflammatory mediators when the study intends to counteract or replace those mediators (e.g., anti-tumor necrosis factor-alpha, activated protein C) and, second, to select more homogeneous populations, coming back to "precise infectious diseases," such as severe community-acquired pneumonia, severe peritonitis, or meningitis. CONCLUSIONS: The concept of sepsis has been useful in helping clinicians to suspect and detect severe infections. Due to the considerable heterogeneity in the patients and types of infections included in the trials performed in the last two decades, it has not been useful in demonstrating the efficacy of new compounds. The authors propose a dramatic change in the design of future trials dealing with severe infections.
Abstract:
The assimilation model is a qualitative and integrative approach that enables the study of change processes occurring in psychotherapy. According to Stiles, this model conceives of the individual's personality as constituted of different voices; the concept of voice is used to describe traces left by past experiences. During psychotherapy, we can observe the progressive integration of the problematic voices into the patient's personality. We applied the assimilation model to a 34-session case of an effective short-term dynamic psychotherapy. We chose eight sessions, which we transcribed and analyzed by establishing points of contact between the case and the theory. The results are presented and discussed in terms of the evolution of the main voices in the patient.
Abstract:
Migration partnerships (MPs) have become a key instrument in global migration governance. In contrast to traditional unilateral approaches, MPs emphasize a more comprehensive and inclusive tackling of migration issues between countries of origin, transit, and destination. Due to this cooperation-oriented concept, most of the existing studies on MPs neglect power questions within partnerships in line with the official discourse, reflecting a broader trend in the international migration governance literature. Others take an instrumentalist view in analysing the power of partnerships or focus on soft power. Illustrated with the examples of the European Mobility Partnerships (EU MPs) and the Swiss Migration Partnerships (CH MPs), we conduct an analysis based on a concept of productive power drawing on post-structural and post-colonial insights. Our main argument is that in contrast to their seemingly consent-oriented and technical character, MPs are sites of intense (discursive) struggles, and (re-)produce meanings, subjects, and resistances. A productive power analysis allows us to move beyond the dichotomy in the literature between coercion and cooperation, as well as between power and resistance more broadly.
Abstract:
The generic concept of the artificial meteorite experiment STONE is to fix rock samples bearing microorganisms on the heat shield of a recoverable space capsule and to study their modifications during atmospheric re-entry. The STONE-5 experiment was performed mainly to answer astrobiological questions. The rock samples mounted on the heat shield were used (i) as a carrier for microorganisms and (ii) as an internal control to verify whether physical conditions during atmospheric re-entry were comparable to those experienced by "real" meteorites. Samples of dolerite (an igneous rock), sandstone (a sedimentary rock), and gneiss impactite from Haughton Crater carrying endolithic cyanobacteria were fixed to the heat shield of the unmanned recoverable capsule FOTON-M2. Holes drilled on the back side of each rock sample were loaded with bacterial and fungal spores and with dried vegetative cryptoendoliths. The front of the gneissic sample was also soaked with cryptoendoliths.

The mineralogical differences between pre- and post-flight samples are detailed. Despite intense ablation resulting in deeply eroded samples, all rocks partly survived atmospheric re-entry. Temperatures attained during re-entry were high enough to melt dolerite, silica, and the gneiss impactite sample. The formation of fusion crusts in STONE-5 was a real novelty and strengthens the link with real meteorites. The exposed part of the dolerite is covered by a fusion crust consisting of silicate glass formed from the rock sample with an admixture of holder material (silica). Compositionally, the fusion crust varies from silica-rich areas (undissolved silica fibres of the holder material) to areas whose composition is "basaltic". Likewise, the fusion crust on the exposed gneiss surface was formed from gneiss with an admixture of holder material. The corresponding composition of the fusion crust varies from silica-rich areas to areas with "gneiss" composition (main component potassium-rich feldspar). The sandstone sample was retrieved intact and did not develop a fusion crust. Thermal decomposition of the calcite matrix, followed by disintegration and liberation of the silicate grains, prevented the formation of a melt.

Furthermore, the non-exposed surfaces of all samples experienced strong thermal alteration. Hot gases released during ablation pervaded the empty space between sample and sample holder, leading to intense local heating. The intense heating below the protective sample holder led to surface melting of the dolerite rock and to the formation of calcium-silicate rims on quartz grains in the sandstone sample.
Abstract:
The theory of language has occupied a special place in the history of Indian thought. Indian philosophers give particular attention to the analysis of the cognition obtained from language, known under the generic name of śābdabodha. This term is used to denote, among other things, the cognition episode of the hearer, the content of which is described in the form of a paraphrase of a sentence represented as a hierarchical structure. Philosophers submit the meaning of the component items of a sentence and their relationship to a thorough examination, and represent the content of the resulting cognition as a paraphrase centred on a meaning element that is taken as the principal qualificand (mukhyaviśesya), which is qualified by the other meaning elements. This analysis is the object of continuous debate over a period of more than a thousand years between the philosophers of the schools of Mīmāmsā, Nyāya (mainly in its Navya form) and Vyākarana. While these philosophers are in complete agreement on the idea that the cognition of sentence meaning has a hierarchical structure, and share the concept of a single principal qualificand (qualified by other meaning elements), they strongly disagree on the question of which meaning element has this role and by which morphological item it is expressed. This disagreement is the central point of their debate and gives rise to competing versions of this theory. The Mīmāmsakas argue that the principal qualificand is what they call bhāvanā, 'bringing into being', 'efficient force' or 'productive operation', expressed by the verbal affix and distinct from the specific procedures signified by the verbal root; the Naiyāyikas generally take it to be the meaning of the word with the first case ending, while the Vaiyākaranas take it to be the operation expressed by the verbal root. All the participants rely on the Pāninian grammar, insofar as the Mīmāmsakas and Naiyāyikas do not compose a new grammar of Sanskrit, but they use different interpretive strategies in order to justify their views, which are often in overt contradiction with the interpretation of the Pāninian rules accepted by the Vaiyākaranas. In each of the three positions, weakness in one area is compensated by strength in another, and the cumulative force of the total argumentation shows that no position can be declared correct or overall superior to the others. This book is an attempt to understand this debate, and to show that, to make full sense of the irreconcilable positions of the three schools, one must go beyond linguistic factors and consider the very beginnings of each school's concern with the issue under scrutiny. The texts, and particularly the late texts of each school, present very complex versions of the theory, yet the key to understanding why these positions remain irreconcilable seems to lie elsewhere, in spite of extensive argumentation involving a great deal of linguistic and logical technicalities. Historically, this theory arises in Mīmāmsā (with Sabara and Kumārila), then in Nyāya (with Udayana), in a doctrinal and theological context, as a byproduct of the debate over Vedic authority. The Navya-Vaiyākaranas enter this debate last (with Bhattoji Dīksita and Kaunda Bhatta), with the declared aim of refuting the arguments of the Mīmāmsakas and Naiyāyikas by bringing to light the shortcomings in their understanding of Pāninian grammar.
The central argument has focused on the capacity of the initial contexts, with the network of issues to which the principal qualificand theory is connected, to render intelligible the presuppositions and aims behind the complex linguistic justification of the classical and late stages of this debate. Reading the debate in this light not only reveals the rationality and internal coherence of each position beyond the linguistic arguments, but makes it possible to understand why the thinkers of the three schools have continued to hold on to three mutually exclusive positions. They are defending not only their version of the principal qualificand theory, but (though not openly acknowledged) the entire network of arguments, linguistic and/or extra-linguistic, to which this theory is connected, as well as the presuppositions and aims underlying these arguments.
Abstract:
Since the introduction of the principle of respect for autonomy in medical ethics, respect for the will of the patient has occupied a central place in the decision-making process. To address the difficulties that appeared during the application of this principle in clinical medicine, Bruce Miller proposed in the early 1980s a way to clarify the significance of this notion in the field of medical practice. He showed that the concept of autonomy can be understood in four senses, which deserve to be explored in cases of ethical conflict. This article shows, through the analysis of a clinical situation, the relevance of the approach suggested by this author and proposes referring to this approach in cases of ethical dilemmas in clinical practice.
Abstract:
Diagnostic reference levels (DRLs) were established for 21 indication-based CT examinations for adults in Switzerland. One hundred and seventy-nine of the 225 computed tomography (CT) scanners operated in hospitals and private radiology institutes were audited on-site and patient doses were collected. For each CT scanner, a correction factor was calculated expressing the deviation of the measured weighted computed tomography dose index (CTDI) from the nominal weighted CTDI displayed on the workstation. Patient doses were corrected by this factor, providing a realistic basis for establishing national DRLs. Results showed large variations in doses between different radiology departments in Switzerland, especially for examinations of the petrous bone, pelvis, lower limbs and heart. This indicates that the concept of DRLs has not yet been correctly applied to CT examinations in clinical routine. A close collaboration of all stakeholders is mandatory to ensure effective radiation protection of patients. On-site audits will be intensified to further establish the concept of DRLs in Switzerland.
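The abstract describes correcting displayed patient doses by a per-scanner factor relating measured to nominal weighted CTDI; the sketch below illustrates that step with hypothetical numbers. Deriving the DRL from the 75th percentile of the corrected dose distribution is a common convention but is an assumption here, not a detail taken from the study.

```python
# Sketch of the dose-correction step described above: displayed doses are rescaled
# by the ratio of measured to nominal (displayed) weighted CTDI, and a DRL is then
# derived from the corrected distribution. All numbers are illustrative assumptions.
import statistics

def correction_factor(measured_ctdi_w: float, nominal_ctdi_w: float) -> float:
    """Ratio expressing how the scanner's displayed CTDIw deviates from measurement."""
    return measured_ctdi_w / nominal_ctdi_w

def corrected_doses(displayed_doses, factor):
    return [d * factor for d in displayed_doses]

# Hypothetical scanner audit: measured 11.4 mGy vs displayed 12.0 mGy
f = correction_factor(measured_ctdi_w=11.4, nominal_ctdi_w=12.0)

# Hypothetical displayed CTDIvol values (mGy) collected for one indication
doses = corrected_doses([9.8, 12.5, 11.0, 14.2, 10.7, 13.1], f)

# Pooling corrected doses across audited scanners, a DRL is commonly set at the
# 75th percentile of the distribution (assumption; not stated in the abstract).
drl = statistics.quantiles(doses, n=4)[2]
print(f"correction factor = {f:.3f}, 75th percentile = {drl:.1f} mGy")
```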
Abstract:
Internet governance is a recent issue in global politics. However, it gradually became a major political and economic issue, and it has recently become even more important, now appearing regularly in the news. Against this background, this research outlines the history of Internet governance from its emergence as a political issue in the 1980s to the end of the World Summit on the Information Society (WSIS) in 2005. Rather than focusing on one or the other institution involved in Internet governance, this research analyses the emergence and historical evolution of a space of struggle bringing together a growing number of different actors. This evolution is described through the analysis of the dialectical relation between elites and non-elites and through the struggle around the definition of Internet governance. The thesis explores the question of how the relations among the elites of Internet governance, and between these elites and non-elites, explain the emergence, the evolution, and the structuration of a relatively autonomous field of world politics centred on Internet governance. Against dominant realist and liberal perspectives, this research draws upon a cross-fertilisation of heterodox international political economy and international political sociology. This approach focuses on the concepts of field, elites and hegemony. The concept of field, as developed by Bourdieu, is increasingly used in International Relations to build a differentiated analysis of globalisation and to describe the emergence of transnational spaces of struggle and domination. Elite sociology allows for a pragmatic, actor-centred analysis of the issue of power in the globalisation process.
This research particularly draws on Wright Mills's concept of the power elite in order to explore the unification of different elites around shared projects. Finally, this thesis uses the Neo-Gramscian concept of hegemony in order to study both the consensual dimension of domination and the prospect of change contained in any international order. Through the analysis of the documents produced within the period under study, and through the creation of databases of networks of actors, this research focuses on the debates that followed the commercialisation of the Internet throughout the 1990s and during the WSIS. The first period led to the creation of the Internet Corporation for Assigned Names and Numbers (ICANN) in 1998. This creation resulted from consensus-building between the dominant discourses of the time. It also resulted from a coalition of interests among an emerging power elite. However, this institutionalisation of Internet governance around ICANN excluded a number of actors and discourses that resisted this mode of governance. The WSIS became the institutional framework within which the governance system was questioned by some excluded states, scholars, NGOs and intergovernmental organisations. The confrontation between the power elite and counter-elites during the WSIS triggered a reconfiguration of the power elite as well as a redefinition of the boundaries of the field. A new hegemonic project emerged around discursive elements such as the idea of multistakeholderism and institutional elements such as the Internet Governance Forum. The relative success of this hegemonic project allowed for a certain stability within the field and an acceptance of the new order by most non-elites. It is only recently that this order began to be questioned by the emerging powers of Internet governance. This research provides three main contributions to the scientific debate. On the theoretical level, it contributes to the emergence of a dialogue between International Political Economy and International Political Sociology perspectives in order to analyse both the structural trends of the globalisation process and the situated practices of actors in a given issue-area. It notably stresses the contribution of concepts such as field and power elite and their compatibility with a Neo-Gramscian framework for analysing hegemony. On the methodological level, this perspective relies on the use of mixed methods, combining qualitative content analysis with social network analysis of actors and statements. Finally, on the empirical level, this research provides an original perspective on Internet governance. It stresses the historical dimension of current Internet governance arrangements. It also criticises the notion of multistakeholderism and focuses instead on the power dynamics and the relation between Internet governance and globalisation.
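As a hedged illustration of the kind of actor/statement network analysis mentioned above, the sketch below builds a bipartite actor-statement graph and projects it onto an actor co-signature network; the actors, statements and projection choice are hypothetical and do not reproduce the thesis's actual databases or coding scheme.

```python
# Minimal sketch of an actor/statement network analysis: a bipartite graph linking
# actors to the statements they are associated with, projected onto an actor-actor
# co-occurrence network. All records below are hypothetical placeholders.
import networkx as nx
from networkx.algorithms import bipartite

records = [
    ("State A", "WSIS Declaration 2003"),
    ("State B", "WSIS Declaration 2003"),
    ("NGO X", "Civil Society Statement 2003"),
    ("State A", "Tunis Agenda 2005"),
    ("NGO X", "Tunis Agenda 2005"),
]

g = nx.Graph()
actors = {a for a, _ in records}
statements = {s for _, s in records}
g.add_nodes_from(actors, bipartite=0)
g.add_nodes_from(statements, bipartite=1)
g.add_edges_from(records)

# Project onto actors: edge weight = number of statements the two actors share
actor_net = bipartite.weighted_projected_graph(g, actors)
for u, v, d in actor_net.edges(data=True):
    print(f"{u} -- {v}: co-occur in {d['weight']} statement(s)")
```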
Abstract:
Answering patients' evolving, more complex needs has been recognized as a main incentive for the development of interprofessional care. Thus, it is not surprising that patient-centered practice (PCP) has been adopted as a major outcome for interprofessional education. Nevertheless, little research has focused on how PCP is perceived across the professions. This study aimed to address this issue by adopting a phenomenological approach and interviewing three groups of professionals: social workers (n = 10), nurses (n = 10) and physicians (n = 8). All the participants worked in the same department (the General Internal Medicine department of a university-affiliated hospital). Although the participants agreed on a core meaning of PCP as identifying, understanding and answering patients' needs, they used many dimensions to define PCP. Overall, the participants valued PCP as a philosophy of care, but there was a sense of a hierarchy of patient-centeredness across the professions, in which both social work and nursing regarded themselves as more patient-centered than the others. For their part, physicians seemed inclined to accept their lower position in this hierarchy. Gieryn's concept of boundary work is employed to help illuminate the nature of PCP within an interprofessional context.
Abstract:
In the late 19th century, it was already known that severe infections could be associated with cardiovascular collapse, a fact essentially attributed to cardiac failure. A major experimental work in the rabbit, published by Romberg and Pässler in 1899, shifted attention to disturbed peripheral vascular tone as the mechanism of hypotension in these conditions. In the first half of the 20th century, great progress was made in the pathophysiologic understanding of hemorrhagic and traumatic shock, while researchers devoted relatively little attention to septic shock. Progress in the hemodynamic understanding of septic shock resumed with the advent of critical care units. The hyperdynamic state was recognized in the late fifties and early sixties. The present short review ends with landmark studies by Max Harry Weil, demonstrating the importance of venous pooling, and by John H. Siegel, who introduced the concept of deficient peripheral utilization of oxygen, inspiring later work on the microvascular disturbances of septic shock.
Abstract:
Most corporate codes of conduct and multi-stakeholder sustainability standards guarantee workers' rights to freedom of association and collective bargaining, but many authors are sceptical about the concrete impact of codes and standards of this kind. In this paper we use Hancher and Moran's (1998) concept of 'regulatory space' to assess the potential of private transnational regulation to support the growth of trade union membership and collective bargaining relationships, drawing on some preliminary case study results from a project on the impact of the International Finance Corporation's (IFC) social conditionality on worker organization and social dialogue. One of the major effects of neoliberal economic and industrial policy has been the routine exclusion of workers' organizations from regulatory processes on the grounds that they introduce inappropriate 'political' motives into what ought to be technical decision-making processes. This, rather than any direct attack on their capacity to take action, is what seems best to explain the global decline in union influence (Cradden 2004; Howell 2007; Howe 2012). The evidence we present in the paper suggests that private labour regulation may under certain conditions contribute to a reversal of this tendency, re-establishing the legitimacy of workers' organizations within regulatory processes and by extension the legitimacy of their use of economic and social power. We argue that guarantees of freedom of association and bargaining rights within private regulation schemes are effective to the extent that they can be used by workers' organizations in support of a claim for access to the regulatory space within which the terms and conditions of the employment relationship are determined. Our case study evidence shows that certain trade unions in East Africa have indeed been able to use IFC and other private regulation schemes as levers to win recognition from employers and to establish collective bargaining relationships. Although they did not attempt to use formal procedures to make a claim for the enforcement of freedom of association rights on behalf of their members, the unions did use enterprises' adherence to private regulation schemes as a normative point of reference in argument and political exchange about worker representation. For these unions, the regulation was a useful addition to the range of arguments that they could deploy as means to justify their demand for recognition by employers. By contrast, the private regulation that helps workers' organizations to win access to regulatory processes does little to ensure that they are able to participate meaningfully, whether in terms of technical capacity or of their ability to mobilize social power as a counterweight to the economic power of employers. To the extent that our East African unions were able to make an impact on terms and conditions of employment via their participation in regulatory space it was solely on the basis of their own capacities and resources and the application of national labour law.