57 results for first order transition system
at Université de Lausanne, Switzerland
Abstract:
OBJECTIVE: To assess the change in non-compliant items in prescription orders following the implementation of a computerized physician order entry (CPOE) system named PreDiMed. SETTING: The department of internal medicine (39 and 38 beds) in two regional hospitals in Canton Vaud, Switzerland. METHOD: The prescription lines in 100 pre- and 100 post-implementation patients' files were classified according to three modes of administration (medicines for oral or other non-parenteral uses; medicines administered parenterally or via nasogastric tube; pro re nata (PRN), as needed) and analyzed for a number of relevant variables constitutive of medical prescriptions. MAIN OUTCOME MEASURE: The monitored variables depended on the pharmaceutical category and included mainly name of medicine, pharmaceutical form, posology and route of administration, diluting solution, flow rate and identification of prescriber. RESULTS: In 2,099 prescription lines, the total number of non-compliant items was 2,265 before CPOE implementation, or 1.079 non-compliant items per line. Two-thirds of these were due to missing information, and the remaining third to incomplete information. In 2,074 prescription lines post-CPOE implementation, the number of non-compliant items had decreased to 221, or 0.107 non-compliant items per line, a dramatic 10-fold decrease (χ² = 4615; P < 10⁻⁶). Limitations of the computerized system were the risk of erroneous items in some non-prefilled fields and ambiguity due to a field with doses shown on commercial products. CONCLUSION: The deployment of PreDiMed in two departments of internal medicine has led to a major improvement in formal aspects of physicians' prescriptions. Some limitations of the first version of PreDiMed were unveiled and are being corrected.
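As a quick, hedged check on the arithmetic reported above, the short script below (illustrative only, not part of the study) recomputes the per-line non-compliance rates and the resulting fold decrease from the counts quoted in the abstract.

```python
# Recompute the per-line non-compliance rates quoted in the abstract.
pre_items, pre_lines = 2265, 2099     # non-compliant items / prescription lines before CPOE
post_items, post_lines = 221, 2074    # non-compliant items / prescription lines after CPOE

pre_rate = pre_items / pre_lines      # ~1.079 non-compliant items per line
post_rate = post_items / post_lines   # ~0.107 non-compliant items per line

print(f"pre-CPOE rate:  {pre_rate:.3f} items per line")
print(f"post-CPOE rate: {post_rate:.3f} items per line")
print(f"fold decrease:  {pre_rate / post_rate:.1f}x")   # ~10-fold, as reported
```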
Abstract:
The Helvetic nappe system in Western Switzerland is a stack of fold nappes and thrust sheets emplaced under low-grade metamorphic conditions. Fold nappes and thrust sheets are also some of the most common features in orogens. Fold nappes are kilometer-scale recumbent folds which feature a weakly deformed normal limb and an intensely deformed overturned limb. Thrust sheets, on the other hand, are characterized by the absence of an overturned limb and can be defined as almost rigid blocks of crust that are displaced sub-horizontally over up to several tens of kilometers. The Morcles and Doldenhorn nappes are classic examples of fold nappes and constitute the so-called infra-Helvetic complex in Western and Central Switzerland, respectively. This complex is overridden by thrust sheets such as the Diablerets and Wildhörn nappes in Western Switzerland. One of the most famous examples of a thrust sheet worldwide is the Glarus thrust sheet in Central Switzerland, which features over 35 kilometers of thrusting accommodated by a ~1 m thick shear zone. Since the works of the early Alpine geologists such as Heim and Lugeon, the knowledge of these nappes has been steadily refined, and today the geometry and kinematics of the Helvetic nappe system are generally agreed upon. However, despite the extensive knowledge we have today of the kinematics of fold nappes and thrust sheets, the mechanical processes leading to the emplacement of these nappes are still poorly understood. For a long time geologists were facing the so-called 'mechanical paradox', which arises from the fact that a block of rock several kilometers high and tens of kilometers long (i.e. a nappe) would break internally rather than start moving on a low-angle plane. Several solutions were proposed to solve this apparent paradox. Certainly the most successful is the theory of critical wedges (e.g. Chapple, 1978; Dahlen, 1984). In this theory the orogen is considered as a whole, and this change of scale allows thrust-sheet-like structures to form while remaining consistent with mechanics. However, this theory is intricately linked to brittle rheology, and fold nappes, which are inherently ductile structures, cannot be created in these models. When considering the problem of nappe emplacement from the perspective of ductile rheology, the problem of strain localization arises. The aim of this thesis was to develop and apply models based on continuum mechanics and integrating heat transfer to understand the emplacement of nappes. Models were solved either analytically or numerically. In the first two papers of this thesis we derived a simple model which describes channel flow in a homogeneous material with temperature-dependent viscosity. We applied this model to the Morcles fold nappe and to several kilometer-scale shear zones worldwide. In the last paper we zoomed out and studied the tectonics of (i) ductile and (ii) visco-elasto-plastic and temperature-dependent wedges. In this last paper we focused on the relationship between basement and cover deformation. We demonstrated that during the compression of a ductile passive margin both fold nappes and thrust sheets can develop and that these apparently different structures constitute two end-members of a single structure (i.e. nappe). The transition from fold nappe to thrust sheet is to first order controlled by the deformation of the basement. -- The Helvetic nappe system in Western Switzerland is a stack of fold nappes and thrust sheets emplaced under low-grade metamorphic conditions.
Fold nappes and thrust sheets are among the most common geological objects in orogens. Fold nappes are kilometer-scale recumbent folds characterized by a weakly deformed normal limb, in contrast to their intensely deformed overturned limb. Thrust sheets, conversely, are characterized by the absence of a well-defined overturned limb. They can be defined as blocks of crust that are displaced almost rigidly, sub-horizontally, over up to several tens of kilometers. The Morcles nappe and the Doldenhorn nappe are classic examples of fold nappes and constitute the infra-Helvetic complex in Western and Central Switzerland, respectively. This complex lies beneath thrust sheets such as the Diablerets and Wildhörn nappes in Western Switzerland. The Glarus nappe in Central Switzerland is distinguished by a displacement of more than 35 kilometers that took place along a basal shear zone only 1 meter thick. Today the geometry and kinematics of the Alpine nappes are the subject of a general consensus. Despite this, the mechanical processes by which these nappes were emplaced remain poorly understood. Throughout the first half of the twentieth century, geologists were confronted with the 'mechanical paradox', which arises from the fact that a block of rock several kilometers high and several tens of kilometers long (i.e., a nappe) will fracture internally rather than slide on a frictional surface. Several solutions have been proposed to circumvent this apparent paradox. The most popular is the theory of critical accretionary wedges (e.g. Chapple, 1978; Dahlen, 1984). Within this theory the orogen is considered as a whole, and this simple change of scale resolves the mechanical paradox (the internal fracturing of the orogen corresponds to the nappes). This theory is, however, closely tied to brittle rheology, and consequently fold nappes cannot be created within a critical wedge. The aim of this thesis was to develop and apply models based on continuum mechanics and heat transfer to understand the emplacement of nappes. These models were solved analytically or numerically. In the first two papers presented in this thesis we derived a model of channel flow in a homogeneous material whose viscosity depends on temperature. We applied this model to the Morcles nappe and to several kilometer-scale shear zones from different orogens around the world. In the last paper we considered the problem at the scale of the orogen and studied the tectonics of (i) ductile and (ii) visco-elasto-plastic wedges, taking heat transfer into account. We demonstrated that during the compression of a ductile passive margin both fold nappes and thrust sheets can develop. We also demonstrated that fold nappes and thrust sheets are two end-members of a single structure (i.e. nappe). The transition between the development of a fold nappe or a thrust sheet is controlled to first order by the deformation of the basement.
-- The Helvetic nappe system in Western Switzerland is a stack of fold nappes and thrust sheets emplaced under low-grade metamorphic conditions. Fold nappes and thrust sheets are among the most common geological objects in orogens. Fold nappes are kilometer-scale recumbent folds characterized by a weakly deformed normal limb, in contrast to their intensely deformed overturned limb. Thrust sheets, conversely, are characterized by the absence of a well-defined overturned limb. They can be defined as blocks of crust that are displaced almost rigidly, sub-horizontally, over up to several tens of kilometers. The Morcles nappe and the Doldenhorn nappe are classic examples of fold nappes and constitute the infra-Helvetic complex in Western and Central Switzerland, respectively. This complex lies beneath thrust sheets such as the Diablerets and Wildhörn nappes in Western Switzerland. The Glarus nappe in Central Switzerland is certainly the most famous example of a thrust sheet in the world. It is distinguished by a displacement of more than 35 kilometers that took place along a basal shear zone only 1 meter thick. The geometry and kinematics of the Alpine nappes are the subject of a general consensus among geologists. In contrast, the physical processes by which these nappes were emplaced remain poorly understood. The sediments that form the Alpine nappes were deposited during the Mesozoic and Tertiary on the basement of the European margin, which had been stretched during the opening of the Tethys ocean. During the closure of the Tethys, which gave birth to the Alps, the basement and the sediments of the European margin were deformed to form the Alpine nappes. The aim of this thesis was to develop and apply models based on continuum mechanics and heat transfer to understand the emplacement of nappes. These models were solved analytically or numerically. In the first two papers presented in this thesis we addressed strain localization at the scale of a nappe. We applied the model we developed to the Morcles nappe and to several shear zones from different orogens around the world. In the last paper we studied the relationship between basement deformation and sediment deformation. We demonstrated that fold nappes and thrust sheets constitute the end-members of a continuum. The transition between fold nappe and thrust sheet is intrinsically linked to the deformation of the basement on which the sediments rest.
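Where the abstracts above refer to channel flow in a homogeneous material with temperature-dependent viscosity, the following minimal sketch illustrates the basic mechanism with an assumed Arrhenius viscosity law and purely illustrative parameter values; it is not the thesis' actual model, only a toy version of the same idea: strain concentrates where the channel is hottest and therefore weakest.

```python
import numpy as np

# Minimal sketch (illustrative values only): simple-shear channel flow with an
# Arrhenius-type temperature-dependent viscosity, eta(T) = eta0 * exp(Q / (R * T)).
# A constant shear stress tau acts across a channel of thickness H whose
# temperature varies linearly from bottom to top; the velocity profile follows
# from integrating the strain rate du/dy = tau / eta(T(y)).

R = 8.314            # gas constant, J/(mol K)
eta0 = 1e10          # pre-exponential viscosity, Pa s (illustrative)
Q = 2.0e5            # activation energy, J/mol (illustrative)
tau = 50e6           # driving shear stress, Pa (illustrative)
H = 1000.0           # channel thickness, m
T_bottom, T_top = 650.0, 550.0    # temperatures in K (illustrative)

y = np.linspace(0.0, H, 501)
T = T_bottom + (T_top - T_bottom) * y / H      # linear temperature profile
eta = eta0 * np.exp(Q / (R * T))               # viscosity grows as T drops
strain_rate = tau / eta                        # du/dy for a linear viscous material

# Trapezoidal integration of the strain rate gives the velocity profile u(y).
increments = 0.5 * (strain_rate[1:] + strain_rate[:-1]) * np.diff(y)
u = np.concatenate(([0.0], np.cumsum(increments)))

print(f"velocity at the top of the channel: {u[-1]:.3e} m/s")
# Strain is highest where the material is hottest (lowest viscosity), which is the
# qualitative behaviour such temperature-dependent channel-flow models explore.
```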
Abstract:
The specific interactions of the pairs laminin binding protein (LBP)-purified tick-borne encephalitis viral surface protein E and certain recombinant fragments of this protein, as well as West Nile viral surface protein E and certain recombinant fragments of that protein, are studied by combined methods of single-molecule dynamic force spectroscopy (SMDFS), enzyme immunoassay and optical surface waves-based biosensor measurements. The experiments were performed at neutral pH (7.4) and acid pH (5.3) conditions. The data obtained confirm the role of LBP as a cell receptor for two typical viral species of the Flavivirus genus. A comparison of these data with similar data obtained for another cell receptor of this family, namely human αVβ3 integrin, reveals that both of these receptors are important for virus-cell interaction. Studying the specific interaction between the cell receptors in question and specially prepared monoclonal antibodies against them, we could show that both interaction sites involved in the process of virus-cell interaction remain intact at pH 5.3. At the same time, for these acid conditions, characteristic of an endosome during flavivirus-cell membrane fusion, SMDFS data reveal the existence of a force-induced (effective already for forces as small as 30-70 pN) sharp globule-coil transition for LBP and LBP-fragments of protein E complexes. We argue that this conformational transformation, being an analog of an abrupt first-order phase transition and having similarity with the famous Rayleigh hydrodynamic instability, might be indispensable for the flavivirus-cell membrane fusion process. Copyright © 2014 John Wiley & Sons, Ltd.
Abstract:
Rockfall hazard zoning is usually achieved using a qualitative estimate of hazard, rather than an absolute scale. In Switzerland, danger maps, which correspond to a hazard zoning that depends on the intensity of the considered phenomenon (e.g. kinetic energy for rockfalls), are replacing hazard maps. Basically, the danger grows with the mean frequency and with the intensity of the rockfall. This principle, based on intensity thresholds, may also be applied with intensity threshold values other than those used in the Swiss rockfall hazard zoning method, i.e. danger mapping. In this paper, we explore the effect of slope geometry and rockfall frequency on rockfall hazard zoning. First, the transition from 2D zoning to 3D zoning based on rockfall trajectory simulation is examined; then, its dependency on slope geometry is emphasized. The spatial extent of hazard zones is examined, showing that limits may vary widely depending on the rockfall frequency. This approach is especially dedicated to highly populated regions, because the hazard zoning has to be very fine in order to delineate the greatest possible territory containing acceptable risks.
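To make the intensity-frequency principle concrete, a small illustrative sketch follows. The intensity thresholds, frequency classes and danger matrix used here are assumed for demonstration only and are not the official Swiss values.

```python
# Illustrative sketch of a threshold-based danger classification in the spirit of
# the intensity/frequency approach described above. All thresholds and the matrix
# below are assumed for demonstration, not the official Swiss values.

def intensity_class(kinetic_energy_kj: float) -> int:
    """Return 0 (low), 1 (medium) or 2 (high) intensity from kinetic energy in kJ."""
    if kinetic_energy_kj < 30:        # assumed threshold
        return 0
    if kinetic_energy_kj < 300:       # assumed threshold
        return 1
    return 2

def frequency_class(return_period_years: float) -> int:
    """Return 0 (rare), 1 (occasional) or 2 (frequent) from the return period."""
    if return_period_years > 300:     # assumed threshold
        return 0
    if return_period_years > 30:      # assumed threshold
        return 1
    return 2

# Danger grows with both frequency (rows) and intensity (columns).
DANGER_MATRIX = [
    ["residual", "low",    "medium"],
    ["low",      "medium", "high"],
    ["medium",   "high",   "high"],
]

def danger_level(kinetic_energy_kj: float, return_period_years: float) -> str:
    return DANGER_MATRIX[frequency_class(return_period_years)][intensity_class(kinetic_energy_kj)]

print(danger_level(kinetic_energy_kj=150, return_period_years=50))   # -> "medium"
```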
Abstract:
This paper reports molar heat capacities of Ru50SixGe(50-x) and Ru40SiyGe(60-y) ternary solid solutions determined by differential scanning calorimetry. A second-order transition has been characterised for alloys ranging from Ru40Ge60 to Ru40Si10Ge50, at temperatures ranging from 850 to 1040 K, respectively. Tie lines have been established at 1000, 900, 800, 700 and 600 °C by electron microprobe measurements on annealed alloys of the two-phase domains Ru50SixGe(50-x)-Ru40SiyGe(60-y) and Ru40SiyGe(60-y)-SizGe(100-z).
Abstract:
BACKGROUND: The aim of this study was to evaluate the efficacy of sustained release of vancomycin and teicoplanin from a resorbable gelatin glycerol sponge, in order to establish a new delivery system for local anti-infective therapy. MATERIALS AND METHODS: 60 plasticized glycerol gelatin sponges containing either 10 or 20% gelatin (w/v) were incubated in vancomycin or teicoplanin solution at 20 °C for either 1 or 24 h. In vitro release properties of the sponges were investigated over a period of 1 week by determining the levels of vancomycin and teicoplanin eluted in plasma using fluorescence polarization immunoassay. The rate constant and the half-life for the antibiotic release of each group were calculated by linear regression assuming first-order kinetics. RESULTS: Presoaking for 24 h was associated with a significant increase in the total antibiotic release in all groups as opposed to 1 h of incubation, except for the 10% sponges presoaked in teicoplanin. Doubling the gelatin content of the sponges from 10 to 20% significantly increased the total release of antibiotic load only in teicoplanin-containing sponges after 24 h incubation. In all corresponding groups investigated, release of vancomycin was more prolonged compared to teicoplanin, which allowed a gradual release beyond 5 days. The half-life (h ± SEM) of both types of vancomycin-containing sponges was significantly prolonged by 24 h incubation in comparison to 1 h incubation (29.1 ± 5.9 vs 5.9 ± 1.0; p < 0.001, 30.0 ± 2.1 vs 11.1 ± 1.9; p < 0.001). However, neither doubling the gelatin content of the sponges nor a prolonged incubation was associated with a significantly prolonged delivery of teicoplanin. CONCLUSION: This study demonstrated a better diffusion-controlled release of vancomycin-impregnated glycerol gelatin sponges compared to those pretreated with teicoplanin. The plasticized glycerol gelatin sponge may be a promising carrier for the application of vancomycin to infected wounds for local anti-infective therapy.
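The half-life estimation principle stated in the methods (linear regression assuming first-order kinetics) can be sketched as follows; the sampling times and release data below are hypothetical and serve only to show the calculation.

```python
import numpy as np

# Minimal sketch of the stated analysis principle, not the study's data: under
# first-order kinetics the remaining drug follows A(t) = A0 * exp(-k * t), so
# ln(A) is linear in t; k comes from linear regression and t_1/2 = ln(2) / k.

t = np.array([0, 6, 12, 24, 48, 72, 120, 168], dtype=float)   # hours (hypothetical times)
A0 = 100.0                                                     # hypothetical initial amount
k_true = np.log(2) / 30.0                                      # hypothetical 30 h half-life
A = A0 * np.exp(-k_true * t)                                   # hypothetical remaining amounts

slope, intercept = np.polyfit(t, np.log(A), 1)                 # linear regression on ln(A)
k_est = -slope
print(f"estimated rate constant k = {k_est:.4f} 1/h")
print(f"estimated half-life       = {np.log(2) / k_est:.1f} h")
```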
Abstract:
OBJECTIVE: The reverse transcriptase inhibitor efavirenz is currently used at a fixed dose of 600 mg/d. However, dosage individualization based on plasma concentration monitoring might be indicated. This study aimed to assess the efavirenz pharmacokinetic profile and interpatient versus intrapatient variability in patients who are positive for human immunodeficiency virus, to explore the relationship between drug exposure, efficacy, and central nervous system toxicity, and to build up a Bayesian approach for dosage adaptation. METHODS: The population pharmacokinetic analysis was performed by use of NONMEM based on plasma samples from a cohort of unselected patients receiving efavirenz. With the use of a 1-compartment model with first-order absorption, the influence of demographic and clinical characteristics on oral clearance and oral volume of distribution was examined. The average drug exposure during 1 dosing interval was estimated for each patient and correlated with markers of efficacy and toxicity. The population kinetic parameters and the variabilities were integrated into a Bayesian equation for dosage adaptation based on a single plasma sample. RESULTS: Data from 235 patients with a total of 719 efavirenz concentrations were collected. Oral clearance was 9.4 L/h, oral volume of distribution was 252 L, and the absorption rate constant was 0.3 h⁻¹. Neither the demographic covariates evaluated nor the comedications showed a clinically significant influence on efavirenz pharmacokinetics. A large interpatient variability was found to affect efavirenz relative bioavailability (coefficient of variation, 54.6%), whereas the intrapatient variability was small (coefficient of variation, 26%). An inverse correlation between average drug exposure and viral load and a trend with central nervous system toxicity were detected. This enabled the derivation of a dosage adaptation strategy suitable to bring the average concentration within a therapeutic target range of 1000 to 4000 µg/L, to optimize viral load suppression and to minimize central nervous system toxicity. CONCLUSIONS: The high interpatient and low intrapatient variability values, as well as the potential relationship with markers of efficacy and toxicity, support the therapeutic drug monitoring of efavirenz. However, further evaluation is needed before individualization of the efavirenz dosage regimen based on routine drug level monitoring can be recommended for optimal patient management.
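A minimal sketch of the reported structural model is given below, using the typical population values quoted in the abstract (oral clearance 9.4 L/h, oral volume 252 L, absorption rate constant 0.3 h⁻¹) in a standard 1-compartment model with first-order absorption; it is not the study's NONMEM code, and the 600 mg once-daily regimen is simply the usual dose mentioned above.

```python
import numpy as np

# Sketch only: steady-state concentration-time profile of a 1-compartment model
# with first-order absorption, using the population estimates from the abstract.
CL_F, V_F, ka = 9.4, 252.0, 0.3     # L/h, L, 1/h (typical values from the abstract)
dose, tau = 600.0, 24.0             # mg, h (600 mg once daily)
ke = CL_F / V_F                     # elimination rate constant, 1/h

t = np.linspace(0.0, tau, 241)
C = (dose * ka / (V_F * (ka - ke))) * (
    np.exp(-ke * t) / (1 - np.exp(-ke * tau)) - np.exp(-ka * t) / (1 - np.exp(-ka * tau))
)   # standard steady-state superposition formula, mg/L

C_avg = dose / (CL_F * tau)         # average concentration over the interval = dose / (CL * tau)
print(f"average steady-state concentration ~ {C_avg * 1000:.0f} microg/L")   # ~2660, inside the 1000-4000 target
print(f"steady-state peak ~ {C.max() * 1000:.0f} microg/L, trough ~ {C[-1] * 1000:.0f} microg/L")
```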
Abstract:
Energy balance exerts a critical influence on reproductive function. Leptin and insulin are among the metabolic factors signaling the nutritional status of an individual to the hypothalamus, and their role in the overall modulation of the activity of GnRH neurons is increasingly recognized. As such, they participate in a more generalized phenomenon: the signaling of peripheral metabolic changes to the central nervous system. The physiological importance of the interactions between peripheral metabolic factors and the central nervous system for the control of food intake is increasingly recognized, and the central mechanisms implicated are the focus of attention of many research groups worldwide. We review here the experimental data suggesting that similar mechanisms are at play in the metabolic control of the neuroendocrine reproductive function. It appears that metabolic signals are integrated at the level of first-order neurons equipped with the proper receptors, and that these neurons send their signals towards hypothalamic GnRH neurons, which constitute the integrative element of this network.
Abstract:
This thesis is a compilation of projects studying the sediment processes that recharge debris flow channels. These works, conducted during my stay at the University of Lausanne, focus on the geological and morphological implications of torrent catchments for characterizing debris supply, a fundamental element in predicting debris flows. Other aspects of sediment dynamics are considered, e.g. the headwaters-torrent coupling, as well as the development of a modeling software that simulates sediment transfer in torrent systems. The sediment activity at Manival, an active torrent system of the northern French Alps, was investigated using terrestrial laser scanning and supplemented with geostructural investigations and a survey of sediment transferred in the main torrent. A full year of sediment flux could be observed, which coincided with two debris flows and several bedload transport events. This study revealed that both debris flows were generated in the torrent and were preceded in time by recharge of material from the headwaters. Debris production occurred mostly during winter and early spring and was caused by large slope failures. Sediment transfers were more puzzling, occurring almost exclusively in early spring, subordinated to runoff conditions, and in autumn during long rainfall. Intense rainstorms in summer did not affect debris storage, which seems to rely on the stability of debris deposits. The morpho-geological implication in debris supply was evaluated using DEM and field surveys. A slope angle-based classification of topography could characterize the mode of debris production and transfer. A slope stability analysis derived from the structures in the rock mass could assess susceptibility to failure. The modeled rockfall source areas included more than 97% of the recorded events, and the sediment budgets appeared to be correlated to the density of potential slope failures. This work showed that the analysis of process-related terrain morphology and of susceptibility to slope failure documents the sediment dynamics to quantitatively assess erosion zones leading to debris flow activity. The development of erosional landforms was evaluated by analyzing their geometry with the orientations of potential rock slope failures and with the direction of the maximum joint frequency. Structures in the rock mass, in particular wedge failures and the dominant discontinuities, appear as a first-order control on the erosional mechanisms affecting bedrock-dominated catchments. They represent weaknesses that are exploited primarily by mass wasting processes and erosion, promoting not only the initiation of rock couloirs and gullies, but also their propagation. Incorporating the geological control in geomorphic processes contributes to a better understanding of the landscape evolution of active catchments. A sediment flux algorithm was implemented in a sediment cascade model that discretizes the torrent catchment into channel reaches and individual process-response systems. Each conceptual element includes in a simple manner geomorphological and sediment flux information derived from GIS and complemented with field mapping. This tool enables the simulation of sediment transfers in channels considering evolving debris supply and conveyance, and helps reduce the uncertainty inherent to sediment budget prediction in torrent systems. -- This thesis is a compilation of projects studying the sediment processes that recharge debris flow channels.
These works, carried out while I was employed at the University of Lausanne, focus on the geological and morphological controls of catchments on sediment supply, a fundamental element in the prediction of debris flows. Other aspects of sediment dynamics were addressed, e.g. the torrent-catchment coupling, as well as a model simulating sediment transfer in torrent systems. The sediment activity of the Manival, an active torrent system of the French Alps, was studied by terrestrial laser scanning surveys, complemented by a geostructural study and by monitoring of sediment transfer in the torrent. A full year of sediment flux could be observed, coinciding with two debris flows and several bedload transport events. This study revealed that the debris flows were generated in the torrent and were preceded by a recharge of debris from the slopes. Debris production took place mainly in winter and early spring, caused by large slope failures. Transfer was more puzzling, occurring almost exclusively in early spring, subordinate to runoff conditions, and in autumn during long rainfall events. Summer storms hardly affected the deposits, which seem to depend on their stability. The morpho-geological controls on sediment supply were evaluated using DEMs and field studies. A slope-based classification of the topography made it possible to characterize the mode of debris production and transfer. A slope stability analysis based on rock structures allowed the susceptibility to failure to be estimated. The modelled source areas include more than 97% of the observed rockfalls, and the sediment budgets are correlated with the density of potential failures. This work of analysing terrain morphology and failure susceptibility documents the sediment dynamics for the quantitative estimation of the erosion zones that drive torrential activity. The development of erosional landforms was evaluated by analysing their geometry together with that of potential failures and with the direction of maximum joint frequency. Rock structures, in particular wedges and the dominant discontinuities, appear to exert a strong influence on the erosional mechanisms affecting rocky catchments. They represent zones of weakness exploited preferentially by mass-wasting and erosion processes, promoting not only the initiation of gullies and rock couloirs but also their propagation. Incorporating the geological control into surface processes contributes to a better understanding of the topographic evolution of active catchments. A sediment flux algorithm was implemented in a cascade model, which divides the catchment into reaches and individual process-response systems. Each unit includes, in a simple manner, the geomorphological and sediment flux information derived from GIS and from field mapping. This tool allows the simulation of mass transfers in channels, taking into account the variability of supply and of its transport, and helps to reduce the uncertainty associated with the prediction of torrential sediment budgets. This work very humbly aims to shed light on some aspects of sediment dynamics in torrent systems.
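The sediment cascade idea described above (a catchment discretized into reaches, each receiving hillslope recharge and passing material downstream) can be sketched conceptually as follows; this is not the thesis' software, and all storages, transfer ratios and inputs are hypothetical.

```python
from dataclasses import dataclass

# Conceptual sketch of a sediment cascade: each reach stores debris, receives
# recharge from its hillslopes plus the supply from upstream, and transfers a
# fraction of its storage downstream at every time step. Values are hypothetical.

@dataclass
class Reach:
    storage: float          # debris currently stored in the reach (m^3)
    transfer_ratio: float   # fraction of storage passed downstream per step

def step(reaches: list[Reach], hillslope_input: list[float]) -> float:
    """Advance the cascade one time step; return the volume exported at the outlet."""
    incoming = 0.0
    for reach, recharge in zip(reaches, hillslope_input):
        reach.storage += recharge + incoming          # hillslope recharge + upstream supply
        outgoing = reach.transfer_ratio * reach.storage
        reach.storage -= outgoing
        incoming = outgoing                           # becomes the supply of the next reach
    return incoming                                   # export at the catchment outlet

reaches = [Reach(500.0, 0.1), Reach(200.0, 0.3), Reach(50.0, 0.6)]   # hypothetical reaches
for year in range(3):
    export = step(reaches, hillslope_input=[120.0, 40.0, 10.0])      # hypothetical recharge (m^3)
    print(f"year {year}: exported {export:.0f} m^3")
```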
Abstract:
Summary This dissertation explores how stakeholder dialogue influences corporate processes, and speculates about the potential of this phenomenon - particularly with actors, like non-governmental organizations (NGOs) and other representatives of civil society, which have received growing attention against a backdrop of increasing globalisation and which have often been cast in an adversarial light by firms - as a source of learning and a spark for innovation in the firm. The study is set within the context of the introduction of genetically-modified organisms (GMOs) in Europe. Its significance lies in the fact that scientific developments and new technologies are being generated at an unprecedented rate in an era where civil society is becoming more informed, more reflexive, and more active in facilitating or blocking such new developments, which could have the potential to trigger widespread changes in economies, attitudes, and lifestyles, and address global problems like poverty, hunger, climate change, and environmental degradation. In the 1990s, companies using biotechnology to develop and offer novel products began to experience increasing pressure from civil society to disclose information about the risks associated with the use of biotechnology and GMOs, in particular. Although no harmful effects for humans or the environment have been factually demonstrated even to date (2008), this technology remains highly-contested and its introduction in Europe catalysed major companies to invest significant financial and human resources in stakeholder dialogue. A relatively new phenomenon at the time, with little theoretical backing, dialogue was seen to reflect a move towards greater engagement with stakeholders, commonly defined as those "individuals or groups with which business interacts who have a 'stake', or vested interest in the firm" (Carroll, 1993:22) with whom firms are seen to be inextricably embedded (Andriof & Waddock, 2002). Regarding the organisation of this dissertation, Chapter 1 (Introduction) describes the context of the study, elaborates its significance for academics and business practitioners as an empirical work embedded in a sector at the heart of the debate on corporate social responsibility (CSR). Chapter 2 (Literature Review) traces the roots and evolution of CSR, drawing on Stakeholder Theory, Institutional Theory, Resource Dependence Theory, and Organisational Learning to establish what has already been developed in the literature regarding the stakeholder concept, motivations for engagement with stakeholders, the corporate response to external constituencies, and outcomes for the firm in terms of organisational learning and change. I used this review of the literature to guide my inquiry and to develop the key constructs through which I viewed the empirical data that was gathered. In this respect, concepts related to how the firm views itself (as a victim, follower, leader), how stakeholders are viewed (as a source of pressure and/or threat; as an asset: current and future), corporate responses (in the form of buffering, bridging, boundary redefinition), and types of organisational learning (single-loop, double-loop, triple-loop) and change (first-order, second-order, third-order) were particularly important in building the key constructs of the conceptual model that emerged from the analysis of the data.
Chapter 3 (Methodology) describes the methodology that was used to conduct the study, affirms the appropriateness of the case study method in addressing the research question, and describes the procedures for collecting and analysing the data. Data collection took place in two phases - extending from August 1999 to October 2000, and from May to December 2001 - which functioned as 'snapshots' in time of the three companies under study. The data was systematically analysed and coded using ATLAS/ti, a qualitative data analysis tool, which enabled me to sort, organise, and reduce the data into a manageable form. Chapter 4 (Data Analysis) contains the three cases that were developed (anonymised as Pioneer, Helvetica, and Viking). Each case is presented in its entirety (constituting a 'within-case' analysis), followed by a 'cross-case' analysis, backed up by extensive verbatim evidence. Chapter 5 presents the research findings, outlines the study's limitations, describes managerial implications, and offers suggestions for where more research could elaborate the conceptual model developed through this study, as well as suggestions for additional research in areas where managerial implications were outlined. References and Appendices are included at the end. This dissertation results in the construction and description of a conceptual model, grounded in the empirical data and tied to existing literature, which portrays a set of elements and relationships deemed important for understanding the impact of stakeholder engagement for firms in terms of organisational learning and change. This model suggests that corporate perceptions about the nature of stakeholder influence the perceived value of stakeholder contributions. When stakeholders are primarily viewed as a source of pressure or threat, firms tend to adopt a reactive/defensive posture in an effort to manage stakeholders and protect the firm from sources of outside pressure - behaviour consistent with Resource Dependence Theory, which suggests that firms try to get control over external threats by focussing on the relevant stakeholders on whom they depend for critical resources, and try to reverse the control potentially exerted by external constituencies by trying to influence and manipulate these valuable stakeholders. In situations where stakeholders are viewed as a current strategic asset, firms tend to adopt a proactive/offensive posture in an effort to tap stakeholder contributions and connect the organisation to its environment - behaviour consistent with Institutional Theory, which suggests that firms try to ensure the continuing license to operate by internalising external expectations. In instances where stakeholders are viewed as a source of future value, firms tend to adopt an interactive/innovative posture in an effort to reduce or widen the embedded system and bring stakeholders into systems of innovation and feedback - behaviour consistent with the literature on Organisational Learning, which suggests that firms can learn how to optimize their performance as they develop systems and structures that are more adaptable and responsive to change. The conceptual model moreover suggests that the perceived value of stakeholder contribution drives corporate aims for engagement, which can be usefully categorised as dialogue intentions spanning a continuum running from low-level to high-level to very-high level.
This study suggests that activities aimed at disarming critical stakeholders ('manipulation'), providing guidance and correcting misinformation ('education'), being transparent about corporate activities and policies ('information'), alleviating stakeholder concerns ('placation'), and accessing stakeholder opinion ('consultation') represent low-level dialogue intentions and are experienced by stakeholders as asymmetrical, persuasive, compliance-gaining activities that are not in line with 'true' dialogue. This study also finds evidence that activities aimed at redistributing power ('partnership'), involving stakeholders in internal corporate processes ('participation'), and demonstrating corporate responsibility ('stewardship') reflect high-level dialogue intentions. This study additionally finds evidence that building and sustaining high-quality, trusted relationships which can meaningfully influence organisational policies incline a firm towards the type of interactive, proactive processes that underpin the development of sustainable corporate strategies. Dialogue intentions are related to type of corporate response: low-level intentions can lead to buffering strategies; high-level intentions can underpin bridging strategies; very high-level intentions can incline a firm towards boundary redefinition. The nature of corporate response (which encapsulates a firm's posture towards stakeholders, demonstrated by the level of dialogue intention and the firm's strategy for dealing with stakeholders) favours the type of learning and change experienced by the organisation. This study indicates that buffering strategies, where the firm attempts to protect itself against external influences and carry out its existing strategy, typically lead to single-loop learning, whereby the firm learns how to perform better within its existing paradigm and at most improves the performance of the established system - an outcome associated with first-order change. Bridging responses, where the firm adapts organisational activities to meet external expectations, typically lead a firm to acquire new behavioural capacities characteristic of double-loop learning, whereby insights and understanding are uncovered that are fundamentally different from existing knowledge and where stakeholders are brought into problem-solving conversations that enable them to influence corporate decision-making to address shortcomings in the system - an outcome associated with second-order change. Boundary redefinition suggests that the firm engages in triple-loop learning, where the firm changes relations with stakeholders in profound ways, considers problems from a whole-system perspective, examining the deep structures that sustain the system, producing innovation to address chronic problems and develop new opportunities - an outcome associated with third-order change. This study supports earlier theoretical and empirical studies (e.g. Weick's (1979, 1985) work on self-enactment; Maitlis & Lawrence's (2007), Maitlis' (2005), and Weick et al.'s (2005) work on sensegiving and sensemaking in organisations; Brickson's (2005, 2007) and Scott & Lane's (2000) work on organisational identity orientation), which indicate that corporate self-perception is a key underlying factor driving the dynamics of organisational learning and change.
Such theorizing has important implications for managerial practice; namely, that a company which perceives itself as a 'victim' may be highly inclined to view stakeholders as a source of negative influence, and would therefore be potentially unable to benefit from the positive influence of engagement. Such a self-perception can blind the firm from seeing stakeholders in a more positive, contributing light, which suggests that such firms may not be inclined to embrace external sources of innovation and learning, as they are focussed on protecting the firm against disturbing environmental influences (through buffering), and remain more likely to perform better within an existing paradigm (single-loop learning). By contrast, a company that perceives itself as a 'leader' may be highly inclined to view stakeholders as a source of positive influence. On the downside, such a firm might have difficulty distinguishing when stakeholder contributions are less pertinent, as it is deliberately more open to elements in its operating environment (including stakeholders) as potential sources of learning and change, and is oriented towards creating space for fundamental change (through boundary redefinition), opening issues to entirely new ways of thinking and addressing issues from a whole-system perspective. A significant implication of this study is that potentially only those companies that see themselves as leaders are ultimately able to tap the innovation potential of stakeholder dialogue.
Abstract:
Automation was introduced many years ago in several diagnostic disciplines such as chemistry, haematology and molecular biology. The first laboratory automation system for clinical bacteriology was released in 2006, and it rapidly proved its value by increasing productivity, allowing a continuous increase in sample volumes despite limited budgets and personnel shortages. Today, two major manufacturers, BD Kiestra and Copan, are commercializing partial or complete laboratory automation systems for bacteriology. The laboratory automation systems are rapidly evolving to provide improved hardware and software solutions to optimize laboratory efficiency. However, the complex parameters of the laboratory and automation systems must be considered to determine the best system for each given laboratory. We address several topics on laboratory automation that may help clinical bacteriologists to understand the particularities and operative modalities of the different systems. We present (a) a comparison of the engineering and technical features of the various elements composing the two different automated systems currently available, (b) the system workflows of partial and complete laboratory automation, which define the basis for laboratory reorganization required to optimize system efficiency, (c) the concept of digital imaging and telebacteriology, (d) the connectivity of laboratory automation to the laboratory information system, (e) the general advantages and disadvantages as well as the expected impacts provided by laboratory automation and (f) the laboratory data required to conduct a workflow assessment to determine the best configuration of an automated system for the laboratory activities and specificities.
Abstract:
The Ajjanahalli gold mine is spatially associated with a Late Archean craton-scale shear zone in the eastern Chitradurga greenstone belt of the Dharwar craton, India. Gold mineralization is hosted by a ~100-m-wide antiform in a banded iron formation. Original magnetite and siderite are replaced by a peak metamorphic alteration assemblage of chlorite, stilpnomelane, minnesotaite, sericite, ankerite, arsenopyrite, pyrite, pyrrhotite, and gold at ca. 300° to 350°C. Elements enriched in the banded iron formation include Ca, Mg, C, S, Au, As, Bi, Cu, Sb, Zn, Pb, Se, Ag, and Te, whereas in the wall rocks As, Cu, Zn, Bi, Ag, and Au are only slightly enriched. Strontium correlates with CaO, MgO, CO2, and As, which indicates cogenetic formation of arsenopyrite and Mg-Ca carbonates. The greater extent of alteration in the Fe-rich banded iron formation layers than in the wall rock reflects the greater reactivity of the banded iron formation layers. The ore fluids, as interpreted from their isotopic composition (δ18O = 6.5-8.5‰; initial 87Sr/86Sr = 0.7068-0.7078), formed by metamorphic devolatilization of deeper levels of the Chitradurga greenstone belt. Arsenopyrite, chalcopyrite, and pyrrhotite have δ34S values within a narrow range between 2.1 and 2.7 per mil, consistent with a sulfur source in Chitradurga greenstone belt lithologies. Based on spatial and temporal relationships between mineralization, local structure development, and sinistral strike-slip deformation in the shear zone at the eastern contact of the Chitradurga greenstone belt, we suggest that the Ajjanahalli gold mineralization formed by fluid infiltration into a low-strain area within the first-order structure. The ore fluids were transported along this shear zone into relatively shallow crustal levels during lateral terrane accretion and a change from thrust to transcurrent tectonics. Based on this model of fluid flow, exploration should focus on similar low-strain areas or potentially connected higher-order splays of the first-order shear zone.
Abstract:
BACKGROUND: Recommended oral voriconazole (VRC) doses are lower than intravenous doses. Because plasma concentrations impact efficacy and safety of therapy, optimizing individual drug exposure may improve these outcomes. METHODS: A population pharmacokinetic analysis (NONMEM) was performed on 505 plasma concentration measurements involving 55 patients with invasive mycoses who received recommended VRC doses. RESULTS: A 1-compartment model with first-order absorption and elimination best fitted the data. VRC clearance was 5.2 L/h, the volume of distribution was 92 L, the absorption rate constant was 1.1 h⁻¹, and oral bioavailability was 0.63. Severe cholestasis decreased VRC elimination by 52%. A large interpatient variability was observed on clearance (coefficient of variation [CV], 40%) and bioavailability (CV, 84%), and an interoccasion variability was observed on bioavailability (CV, 93%). Lack of response to therapy occurred in 12 of 55 patients (22%), and grade 3 neurotoxicity occurred in 5 of 55 patients (9%). A logistic multivariate regression analysis revealed an independent association between VRC trough concentrations and probability of response or neurotoxicity by identifying a therapeutic range of 1.5 mg/L (>85% probability of response) to 4.5 mg/L (<15% probability of neurotoxicity). Population-based simulations with the recommended 200 mg oral or 300 mg intravenous twice-daily regimens predicted probabilities of 49% and 87%, respectively, for achievement of 1.5 mg/L, and of 8% and 37%, respectively, for achievement of 4.5 mg/L. With 300-400 mg twice-daily oral doses and 200-300 mg twice-daily intravenous doses, the predicted probabilities of achieving the lower target concentration were 68%-78% for the oral regimen and 70%-87% for the intravenous regimen, and the predicted probabilities of achieving the upper target concentration were 19%-29% for the oral regimen and 18%-37% for the intravenous regimen. CONCLUSIONS: Higher oral than intravenous VRC doses, followed by individualized adjustments based on measured plasma concentrations, improve achievement of the therapeutic target that maximizes the probability of therapeutic response and minimizes the probability of neurotoxicity. These findings challenge dose recommendations for VRC.
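A hedged sketch of the kind of population-based simulation described above is shown below. It reuses the typical values quoted in the abstract (clearance 5.2 L/h, volume 92 L, ka 1.1 h⁻¹, bioavailability 0.63, interpatient CVs of 40% and 84%) in a 1-compartment model with first-order absorption; it is not the authors' NONMEM analysis, and the simulated probabilities need not match the paper's.

```python
import numpy as np

# Sketch of a population-based simulation: sample interpatient variability around
# the typical values reported in the abstract and estimate how often steady-state
# troughs fall within the 1.5-4.5 mg/L window for 200 mg oral twice daily.

rng = np.random.default_rng(0)
n = 10_000
CL_typ, V_typ, ka, F_typ = 5.2, 92.0, 1.1, 0.63     # L/h, L, 1/h, - (abstract values)
dose, tau = 200.0, 12.0                             # mg, h (oral 200 mg twice daily)

# Approximate log-normal interpatient variability, using the reported CVs as sigma
# (a common simplification; 40% on clearance, 84% on bioavailability).
CL = CL_typ * rng.lognormal(mean=0.0, sigma=0.40, size=n)
F = np.clip(F_typ * rng.lognormal(mean=0.0, sigma=0.84, size=n), 0.0, 1.0)

ke = CL / V_typ
# Steady-state trough (t = tau) of a 1-compartment model with first-order absorption.
trough = (F * dose * ka / (V_typ * (ka - ke))) * (
    np.exp(-ke * tau) / (1 - np.exp(-ke * tau)) - np.exp(-ka * tau) / (1 - np.exp(-ka * tau))
)

print(f"P(trough >= 1.5 mg/L) ~ {np.mean(trough >= 1.5):.2f}")
print(f"P(trough <= 4.5 mg/L) ~ {np.mean(trough <= 4.5):.2f}")
```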
Abstract:
Modern sonic logging tools designed for shallow environmental and engineering applications allow for P-wave phase velocity measurements over a wide frequency band. Methodological considerations indicate that, for saturated unconsolidated sediments in the silt to sand range and source frequencies ranging from approximately 1 to 30 kHz, the observable poro-elastic P-wave velocity dispersion is sufficiently pronounced to allow for reliable first-order estimations of the underlying permeability structure. These predictions have been tested on and verified for a surficial alluvial aquifer. Our results indicate that, even without any further calibration, the thus obtained permeability estimates as well as their variabilities within the pertinent lithological units are remarkably close to those expected based on the corresponding granulometric characteristics.
Abstract:
A growing number of studies have been addressing the relationship between theory of mind (TOM) and executive functions (EF) in patients with acquired neurological pathology. In order to provide a global overview of the main findings, we conducted a systematic review of group studies in which we aimed to (1) evaluate the patterns of impaired and preserved abilities of both TOM and EF in groups of patients with acquired neurological pathology and (2) investigate the existence of particular relations between different EF domains and TOM tasks. The search was conducted in Pubmed/Medline. A total of 24 articles met the inclusion criteria. We considered for analysis classical, clinically accepted TOM tasks (first- and second-order false belief stories, the Faux Pas test, Happé's stories, the Mind in the Eyes task, and cartoon tasks) and EF domains (updating, shifting, inhibition, and access). The review suggests that (1) EF and TOM appear tightly associated; however, the few dissociations observed suggest they cannot be reduced to a single function; (2) no executive subprocess could be specifically associated with TOM performances; (3) the first-order false belief task and the Happé stories task seem to be less sensitive to neurological pathologies and less associated with EF. Even though the analysis of the reviewed studies demonstrates a close relationship between TOM and EF in patients with acquired neurological pathology, the nature of this relationship must be further investigated. Studies investigating the ecological consequences of TOM and EF deficits, as well as intervention research, may bring further contributions to this question.