Abstract:
Road dust is generated when wind entrains fine material from the roadway surface; the main source of Iowa road dust is attrition of the carbonate rock used as aggregate. The mechanisms of dust suppression can be considered as two processes: increasing the particle size of the surface fines by agglomeration and inhibiting degradation of the coarse material. Agglomeration may occur by capillary tension in the pore water, by surfactants that increase bonding between clay particles, and by cements that bind the mineral matter together. Hygroscopic dust suppressants such as calcium chloride have short durations of effectiveness because capillary tension is their primary agglomeration mechanism. Somewhat more permanent agglomeration results from chemicals that cement smaller particles into a mat or into larger particles. These cements include lignosulfonates, resins, and asphalt products; their duration depends on their solubility and the climate. The only dust palliative that decreases aggregate degradation is shredded shingles, which act as cushions between aggregate particles. Synthetic polymers likely also provide some protection against coarse-aggregate attrition. Calcium chloride and lignosulfonates are widely used in Iowa; both palliatives have a useful duration of about 6 months. Calcium chloride is effective with surface soils of moderate fines content and plasticity, whereas lignin works best with materials that have high fines content and high plasticity indices. Bentonite appears to be effective for up to two years and works well with surface materials having low fines and plasticity, and with limestone aggregate. Selection of appropriate dust suppressants should be based on characterization of the road surface material. Dosage rates for potential palliatives can be estimated from data in this report, from technical reports, from information from reliable vendors, or from laboratory screening tests.
The selection should include economic analysis of construction and maintenance costs. The effectiveness of the treatment should be evaluated with any of the field performance measurement techniques discussed in this report. Novel dust control agents that need research for potential application in Iowa include: acidulated soybean oil (soapstock), soybean oil, ground-up asphalt shingles, and foamed asphalt. New laboratory evaluation protocols are needed to screen additives for potential effectiveness and to determine dosage. A modification of ASTM D 560, used to estimate the freeze-thaw and wet-dry durability of Portland cement-stabilized soils, would be a starting point for improved laboratory testing of dust palliatives.
Abstract:
The expansion of a recovering population - whether re-introduced or spontaneously returning - is shaped by (i) biological (intrinsic) factors such as the land tenure system or dispersal, (ii) the distribution and availability of resources (e.g. prey), (iii) habitat and landscape features, and (iv) human attitudes and activities. In order to develop efficient conservation and recovery strategies, we need to understand all these factors, predict the potential distribution, and explore ways to reach it. An increased number of lynx in the north-western Swiss Alps in the nineties led to new controversy about the return of this cat. When the large carnivores were given legal protection in many European countries, most organizations and individuals promoting their protection did not foresee the consequences. Management plans describing how to handle conflicts with large predators are needed to find a balance between "overabundance" and extinction. Wildlife and conservation biologists need to evaluate the various threats confronting populations so that adequate management decisions can be taken. I developed a GIS probability model for the lynx, based on habitat information and radio-telemetry data from the Swiss Jura Mountains, in order to predict the potential distribution of the lynx in this mountain range, which is presently only partly occupied. Three of the 18 variables tested for each square kilometre - describing land use, vegetation, and topography - qualified to predict the probability of lynx presence. The resulting map was evaluated with data from dispersing subadult lynx. Young lynx that were not able to establish home ranges in what was identified as good lynx habitat did not survive their first year of independence, whereas the only one that died in good lynx habitat was illegally killed. Radio-telemetry fixes are often used as input data to calibrate habitat models.
Radio-telemetry is the only way to gather accurate and unbiased data on habitat use by elusive larger terrestrial mammals. However, it is time-consuming and expensive, and can therefore only be applied in limited areas. Habitat models extrapolated over large areas can in turn be problematic, as habitat characteristics and availability may change from one area to the other. I analysed the predictive power of Ecological Niche Factor Analysis (ENFA) in Switzerland with the lynx as focal species. According to my results, the optimal sampling strategy to predict species distribution in an Alpine area lacking available data would be to pool presence cells from contrasted regions (Jura Mountains, Alps), whereas in regions with low ecological variance (Jura Mountains), only local presence cells should be used for the calibration of the model. Dispersal influences the dynamics and persistence of populations and the distribution and abundance of species, and gives communities and ecosystems their characteristic texture in space and time. Between 1988 and 2001, the spatio-temporal behaviour of subadult Eurasian lynx in two re-introduced populations in Switzerland was studied, based on 39 juvenile lynx, of which 24 were radio-tagged, to understand the factors influencing dispersal. Subadults become independent from their mothers at the age of 8-11 months. No sex bias was detected in either the dispersal rate or the distance moved. Lynx are conservative dispersers compared to bears and wolves, and settled within or close to known lynx occurrences. Dispersal distances reached in the high-density lynx population - shorter than those reported in other Eurasian lynx studies - are limited by habitat restrictions hindering connections with neighbouring metapopulations.
I postulated that high lynx density would lead to an expansion of the population and tested my predictions with data from the north-western Swiss Alps, where a strong increase in lynx abundance took place around 1995. The general hypothesis that high population density will foster the expansion of the population was not confirmed. This has consequences for the re-introduction and recovery of carnivores in a fragmented landscape. Establishing a strong source population in one place might not be an optimal strategy. Rather, population nuclei should be founded in several neighbouring patches. Exchange between established neighbouring subpopulations will later take place, as adult lynx show a higher propensity to cross barriers than subadults. To estimate the potential population size of the lynx in the Jura Mountains and to assess possible corridors between this population and adjacent areas, I adapted a habitat probability model for lynx distribution in the Jura Mountains with new environmental data and extrapolated it over the entire mountain range. The model predicts a breeding population of 74-101 individuals, or 51-79 individuals when continuous habitat patches < 50 km2 are disregarded. The Jura Mountains could one day form part of a metapopulation, as potential corridors exist to the adjoining areas (Alps, Vosges Mountains, and Black Forest). Monitoring of population size and spatial expansion, and genetic surveillance, must be continued in the Jura Mountains, as the status of the population is still critical. ENFA was used to predict the potential distribution of lynx in the Alps. The resulting model divided the Alps into 37 suitable habitat patches ranging from 50 to 18,711 km2, covering a total area of about 93,600 km2. When using the range of lynx densities found in field studies in Switzerland, the Alps could host a population of 961 to 1,827 residents.
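The Alpine population projection is, at its core, density-area arithmetic: a range of resident densities observed in field studies scaled to the total suitable habitat area. A minimal sketch of that calculation (the density values below are illustrative assumptions chosen to bracket the reported range, not figures taken from the study):

```python
def estimate_population(area_km2, density_per_100km2):
    """Scale a resident density (individuals per 100 km2) to a habitat area."""
    return area_km2 * density_per_100km2 / 100.0

# Total suitable Alpine habitat from the abstract; densities of roughly
# 1-2 residents per 100 km2 are assumed for illustration.
suitable_area = 93600  # km2
low = estimate_population(suitable_area, 1.0)
high = estimate_population(suitable_area, 2.0)
print(f"{low:.0f}-{high:.0f} residents")  # prints "936-1872 residents"
```

With these assumed densities the sketch brackets the 961-1,827 residents reported; the study's actual range comes from densities measured in Swiss radio-telemetry studies applied patch by patch.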
The results of the cost-distance analysis revealed that all patches were within reach of dispersing lynx, as the connection costs were in the range of the dispersal costs of radio-tagged subadult lynx moving through unfavourable habitat. Thus, the whole of the Alps could one day be considered a metapopulation. Experience suggests, however, that only a few dispersers will cross unsuitable areas and barriers. This low migration rate may seldom allow the spontaneous foundation of new populations in unsettled areas. As an alternative to natural dispersal, artificial transfer of individuals across the barriers should be considered. Wildlife biologists can play a crucial role in developing adaptive management experiments that help managers learn by trial. The case of the lynx in Switzerland is a good example of fruitful cooperation between wildlife biologists, managers, decision makers and politicians in an adaptive management process. This cooperation resulted in a Lynx Management Plan, implemented in 2000 and updated in 2004, that gives the cantons directives on how to handle lynx-related problems. This plan has been put into practice, e.g. with regard to the translocation of lynx into unsettled areas. (A French résumé duplicating the above abstract is omitted.)
Abstract:
Aims: To compare the frequency of life events in the year preceding illness onset in a series of Conversion Disorder (CD) patients with those of a matched control group, and to characterize the nature of those events in terms of "escape" potential. Traditional models of CD hypothesise that relevant stressful experiences are "converted" into physical symptoms to relieve psychological pressure, and that the resultant disability allows "escape" from the stressor, providing some advantage to the individual. Methods: The Life Events and Difficulties Schedule (LEDS) is a validated semi-structured interview designed to minimise recall and interviewer bias through rigorous assessment and independent rating of events. An additional "escape" rating was developed. Results: In the year preceding onset in 25 CD patients (mean age 38.9 ± 8 years) and a similar matched period in 13 controls (mean age 36.2 ± 10 years), no significant difference was found in the proportion of subjects having ≥1 severe event (CD 64%, controls 38%; p=0.2). In the month preceding onset, however, more patients than controls experienced ≥1 severe event (52% vs 15%; odds ratio 5.95 (CI: 1.09-32.57)). Patients were also markedly more likely than controls to have experienced a severe "escape" event in the month preceding onset (44% vs 7%; odds ratio 9.43 (CI: 1.06-84.04)). Conclusion: Preliminary data from this ongoing study suggest that the time frame (preceding month) and the nature ("escape") of the events may play an important role in identifying key events related to CD onset.
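The odds ratios reported here follow from the standard 2x2 cross-product. As a sanity check, back-calculating the cell counts from the rounded percentages (an assumption: 52% of 25 patients = 13, 15% of 13 controls = 2, 44% of 25 = 11, 7% of 13 = 1) reproduces the reported values closely:

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Cross-product odds ratio for a 2x2 contingency table."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# Severe event in the last month: 13/25 patients vs 2/13 controls (assumed counts).
or_severe = odds_ratio(13, 12, 2, 11)   # ~5.96, reported as 5.95
# Severe "escape" event: 11/25 patients vs 1/13 controls (assumed counts).
or_escape = odds_ratio(11, 14, 1, 12)   # ~9.43, as reported
```

The small discrepancy (5.96 vs 5.95) is rounding in the back-calculated counts; the confidence intervals quoted in the abstract require a separate variance estimate (e.g. Woolf's method) not shown here.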
Abstract:
Background: Mantle cell lymphoma (MCL) is a rare subtype (3-9%) of Non-Hodgkin Lymphoma (NHL) with a relatively poor prognosis (5-year survival < 40%). Although consolidation of first remission with autologous stem cell transplantation (ASCT) is regarded as the gold standard, fewer than half of patients can undergo this intensive treatment, owing to advanced age and co-morbidities. Standard-dose non-myeloablative radioimmunotherapy (RIT) appears to be a very efficient approach for the treatment of certain NHL. However, almost no data are available on the efficacy and safety of RIT in MCL. Methods and Patients: In the RIT-Network, a web-based international registry collecting real-world observational data from RIT-treated patients, 115 MCL patients treated with ibritumomab tiuxetan were recorded. Most of the patients were elderly males with advanced-stage disease: median age 63 (range 31-78); males 70.4%; stage III/IV 92%. RIT (i.e. application of ibritumomab tiuxetan) was part of first-line therapy in 48 pts. (43%). A further 38 pts. (33%) received ibritumomab tiuxetan after two previous chemotherapy regimens, and 33 pts. (24%) after completing 3-8 lines. In 75 cases RIT was applied as consolidation of a chemotherapy-induced response; the remaining patients received ibritumomab tiuxetan because of relapsed/refractory disease. At present, follow-up data are available for 74 MCL patients. Results: After RIT the patients achieved a high response rate: CR 60.8%, PR 25.7%, and SD 2.7%. Only 10.8% of the patients progressed. For survival analysis many data had to be censored, since documentation had not yet been completed. The projected 3-year overall survival (OAS, fig. 1) after radioimmunotherapy was 72% for pts. subjected to RIT consolidation versus 29% for those treated for relapsed/refractory disease (p=0.03). RIT was feasible for almost all patients; only 3 procedure-related deaths were reported in the whole group.
The main adverse event was hematological toxicity (grade III/IV cytopenias), with median times to recovery of Hb, WBC and Plt of 45, 40 and 38 days, respectively. Conclusion: Standard-dose non-myeloablative RIT is a feasible and safe treatment modality, even for elderly MCL pts. Consolidation radioimmunotherapy with ibritumomab tiuxetan may prolong the survival of patients who have achieved a clinical response after chemotherapy. This consolidation approach should therefore be considered as a treatment strategy for those who are not eligible for ASCT. RIT also has a potential role as palliative therapy in relapsed/resistant cases.
Abstract:
Amnestic mild cognitive impairment (aMCI) is characterized by memory deficits alone (single-domain, sd-aMCI) or associated with other cognitive disabilities (multi-domain, md-aMCI). The present study assessed the patterns of electroencephalographic (EEG) activity during the encoding and retrieval phases of short-term memory in these two aMCI subtypes, to identify potential functional differences according to the neuropsychological profile. Continuous EEG was recorded in 43 aMCI patients (16 sd-aMCI, 27 md-aMCI) and 36 age-matched controls (EC) during delayed match-to-sample tasks with face and letter stimuli. At encoding, attended stimuli elicited a parietal alpha (8-12 Hz) power decrease (desynchronization), whereas distracting stimuli were associated with an alpha power increase (synchronization) over right central sites. No difference was observed in parietal alpha desynchronization among the three groups. For attended faces, the alpha synchronization underlying suppression of distracting letters was reduced in both aMCI subgroups, but more severely in md-aMCI, which differed significantly from EC. At retrieval, the early N250r recognition effect for faces was significantly reduced in md-aMCI as compared to both sd-aMCI and EC. The results suggest a differential alteration of working-memory cerebral processes for faces in the two aMCI subtypes, with covert face recognition processes specifically altered in md-aMCI.
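The alpha (de)synchronization effects described are quantified as band-limited spectral power within 8-12 Hz, typically relative to a pre-stimulus baseline. A minimal sketch of the band-power step on a synthetic signal (illustrative only; real EEG pipelines use Welch or wavelet estimates on baseline-corrected, artifact-rejected epochs, and all parameter values here are assumptions):

```python
import numpy as np

def band_power(signal, fs, fmin, fmax):
    """Summed FFT power of `signal` within the [fmin, fmax] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= fmin) & (freqs <= fmax)
    return psd[band].sum()

fs = 250                             # sampling rate in Hz (illustrative)
t = np.arange(0, 2, 1.0 / fs)        # one 2-second epoch
epoch = np.sin(2 * np.pi * 10 * t)   # synthetic 10 Hz alpha-band oscillation

alpha = band_power(epoch, fs, 8, 12)
beta = band_power(epoch, fs, 13, 30)
# The synthetic oscillation concentrates nearly all power in the alpha band.
```

Desynchronization would then show up as a task-epoch alpha value below the baseline-epoch value, and synchronization as one above it.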
Abstract:
Additions of lactams, imides, (S)-4-benzyl-1,3-oxazolidin-2-one, 2-pyridone, pyrimidine-2,4-diones (AZT derivatives), or inosines to the electron-deficient triple bonds of methyl propynoate, tert-butyl propynoate, 3-butyn-2-one, N-propynoylmorpholine, or N-methoxy-N-methylpropynamide were examined in the presence of many potential catalysts. DABCO and, to a lesser extent, DMAP proved the best (highest reaction rates and E/Z ratios), while RuCl3, RuClCp*(PPh3)2, AuCl, AuCl(PPh3), CuI, and Cu2(OTf)2 were incapable of catalyzing such additions. The groups incorporated (for example, the 2-(methoxycarbonyl)ethenyl group, which we name MocVinyl) serve as protecting groups for the above-mentioned heterocyclic CONH or CONHCO moieties. Deprotections were accomplished via exchange with good nucleophiles: the 1-dodecanethiolate anion turned out to be the most general and efficient reagent, but in some particular cases other nucleophiles also worked (e.g., MocVinyl-inosines can be cleaved with the succinimide anion). Some structural and mechanistic details have been accounted for with the help of DFT and MP2 calculations.
Abstract:
The potential consequences of early and late puberty for the psychological and behavioural development of adolescents are not well known. This paper presents focused analyses from the Swiss SMASH study, a self-administered questionnaire survey conducted among a representative sample of 7488 adolescents aged 16 to 20. Data from participants reporting early or late timing of puberty were compared with those reporting average timing of maturation. Early-maturing girls reported a higher rate of dissatisfaction with body image (OR=1.32) and functional symptoms (OR=1.52), and reported engaging in sexual activity more often (OR=1.93). Early-maturing boys reported engaging in exploratory behaviours (sexual intercourse, legal and illegal substance use) at a significantly higher rate (OR between 1.4 and 1.99). Both early- and late-maturing boys reported higher rates of dysfunctional eating patterns (OR=1.59 and 1.38, respectively), victimisation (OR=1.61 and 1.37, respectively) and depressive symptoms (OR=2.11 and 1.53, respectively). Clinicians should take into account the pubertal stage of their patients and provide them, as well as their parents, with appropriate counselling in the fields of mental health and health behaviour.
Abstract:
Summary: This dissertation explores how stakeholder dialogue influences corporate processes, and speculates about the potential of this phenomenon - particularly with actors, like non-governmental organizations (NGOs) and other representatives of civil society, which have received growing attention against a backdrop of increasing globalisation and which have often been cast in an adversarial light by firms - as a source of learning and a spark for innovation in the firm. The study is set within the context of the introduction of genetically-modified organisms (GMOs) in Europe. Its significance lies in the fact that scientific developments and new technologies are being generated at an unprecedented rate in an era where civil society is becoming more informed, more reflexive, and more active in facilitating or blocking such new developments, which could have the potential to trigger widespread changes in economies, attitudes, and lifestyles, and to address global problems like poverty, hunger, climate change, and environmental degradation. In the 1990s, companies using biotechnology to develop and offer novel products began to experience increasing pressure from civil society to disclose information about the risks associated with the use of biotechnology and GMOs in particular. Although no harmful effects for humans or the environment have been factually demonstrated even to date (2008), this technology remains highly contested, and its introduction in Europe catalysed major companies to invest significant financial and human resources in stakeholder dialogue. A relatively new phenomenon at the time, with little theoretical backing, dialogue was seen to reflect a move towards greater engagement with stakeholders, commonly defined as those "individuals or groups with which business interacts who have a 'stake', or vested interest, in the firm" (Carroll, 1993:22), with whom firms are seen to be inextricably embedded (Andriof & Waddock, 2002).
Regarding the organisation of this dissertation, Chapter 1 (Introduction) describes the context of the study and elaborates its significance for academics and business practitioners as an empirical work embedded in a sector at the heart of the debate on corporate social responsibility (CSR). Chapter 2 (Literature Review) traces the roots and evolution of CSR, drawing on Stakeholder Theory, Institutional Theory, Resource Dependence Theory, and Organisational Learning to establish what has already been developed in the literature regarding the stakeholder concept, motivations for engagement with stakeholders, the corporate response to external constituencies, and outcomes for the firm in terms of organisational learning and change. I used this review of the literature to guide my inquiry and to develop the key constructs through which I viewed the empirical data that was gathered. In this respect, concepts related to how the firm views itself (as a victim, follower, or leader), how stakeholders are viewed (as a source of pressure and/or threat; as an asset, current and future), corporate responses (in the form of buffering, bridging, or boundary redefinition), and types of organisational learning (single-loop, double-loop, triple-loop) and change (first-order, second-order, third-order) were particularly important in building the key constructs of the conceptual model that emerged from the analysis of the data. Chapter 3 (Methodology) describes the methodology that was used to conduct the study, affirms the appropriateness of the case study method in addressing the research question, and describes the procedures for collecting and analysing the data. Data collection took place in two phases - extending from August 1999 to October 2000, and from May to December 2001 - which functioned as 'snapshots' in time of the three companies under study.
The data was systematically analysed and coded using ATLAS/ti, a qualitative data analysis tool, which enabled me to sort, organise, and reduce the data into a manageable form. Chapter 4 (Data Analysis) contains the three cases that were developed (anonymised as Pioneer, Helvetica, and Viking). Each case is presented in its entirety (constituting a 'within-case' analysis), followed by a 'cross-case' analysis backed up by extensive verbatim evidence. Chapter 5 presents the research findings, outlines the study's limitations, describes managerial implications, and offers suggestions for where further research could elaborate the conceptual model developed through this study, as well as suggestions for additional research in areas where managerial implications were outlined. References and Appendices are included at the end. This dissertation results in the construction and description of a conceptual model, grounded in the empirical data and tied to existing literature, which portrays a set of elements and relationships deemed important for understanding the impact of stakeholder engagement for firms in terms of organisational learning and change. This model suggests that corporate perceptions about the nature of stakeholders influence the perceived value of stakeholder contributions. When stakeholders are primarily viewed as a source of pressure or threat, firms tend to adopt a reactive/defensive posture in an effort to manage stakeholders and protect the firm from sources of outside pressure - behaviour consistent with Resource Dependence Theory, which suggests that firms try to gain control over external threats by focussing on the relevant stakeholders on whom they depend for critical resources, and try to reverse the control potentially exerted by external constituencies by trying to influence and manipulate these valuable stakeholders.
In situations where stakeholders are viewed as a current strategic asset, firms tend to adopt a proactive/offensive posture in an effort to tap stakeholder contributions and connect the organisation to its environment - behaviour consistent with Institutional Theory, which suggests that firms try to ensure their continuing licence to operate by internalising external expectations. In instances where stakeholders are viewed as a source of future value, firms tend to adopt an interactive/innovative posture in an effort to reduce or widen the embedded system and bring stakeholders into systems of innovation and feedback - behaviour consistent with the literature on Organisational Learning, which suggests that firms can learn how to optimize their performance as they develop systems and structures that are more adaptable and responsive to change. The conceptual model moreover suggests that the perceived value of stakeholder contributions drives corporate aims for engagement, which can be usefully categorised as dialogue intentions spanning a continuum from low-level to high-level to very-high-level. This study suggests that activities aimed at disarming critical stakeholders ('manipulation'), providing guidance and correcting misinformation ('education'), being transparent about corporate activities and policies ('information'), alleviating stakeholder concerns ('placation'), and accessing stakeholder opinion ('consultation') represent low-level dialogue intentions and are experienced by stakeholders as asymmetrical, persuasive, compliance-gaining activities that are not in line with 'true' dialogue. This study also finds evidence that activities aimed at redistributing power ('partnership'), involving stakeholders in internal corporate processes ('participation'), and demonstrating corporate responsibility ('stewardship') reflect high-level dialogue intentions.
This study additionally finds evidence that building and sustaining high-quality, trusted relationships which can meaningfully influence organisational policies inclines a firm towards the type of interactive, proactive processes that underpin the development of sustainable corporate strategies. Dialogue intentions are related to the type of corporate response: low-level intentions can lead to buffering strategies; high-level intentions can underpin bridging strategies; very-high-level intentions can incline a firm towards boundary redefinition. The nature of corporate response (which encapsulates a firm's posture towards stakeholders, demonstrated by the level of dialogue intention and the firm's strategy for dealing with stakeholders) favours the type of learning and change experienced by the organisation. This study indicates that buffering strategies, where the firm attempts to protect itself against external influences and carry out its existing strategy, typically lead to single-loop learning, whereby the firm learns how to perform better within its existing paradigm and, at most, improves the performance of the established system - an outcome associated with first-order change. Bridging responses, where the firm adapts organisational activities to meet external expectations, typically lead a firm to acquire new behavioural capacities characteristic of double-loop learning, whereby insights and understanding are uncovered that are fundamentally different from existing knowledge and where stakeholders are brought into problem-solving conversations that enable them to influence corporate decision-making to address shortcomings in the system - an outcome associated with second-order change. 
Boundary redefinition suggests that the firm engages in triple-loop learning, whereby the firm changes relations with stakeholders in profound ways, considers problems from a whole-system perspective, examines the deep structures that sustain the system, and produces innovation to address chronic problems and develop new opportunities - an outcome associated with third-order change. This study supports earlier theoretical and empirical studies (e.g. Weick's (1979, 1985) work on self-enactment; Maitlis & Lawrence's (2007), Maitlis' (2005), and Weick et al.'s (2005) work on sensegiving and sensemaking in organisations; Brickson's (2005, 2007) and Scott & Lane's (2000) work on organisational identity orientation), which indicate that corporate self-perception is a key underlying factor driving the dynamics of organisational learning and change. Such theorising has important implications for managerial practice; namely, that a company which perceives itself as a 'victim' may be highly inclined to view stakeholders as a source of negative influence, and would therefore be potentially unable to benefit from the positive influence of engagement. Such a self-perception can blind the firm to seeing stakeholders in a more positive, contributing light, which suggests that such firms may not be inclined to embrace external sources of innovation and learning, as they are focussed on protecting the firm against disturbing environmental influences (through buffering), and remain more likely to perform better within an existing paradigm (single-loop learning). By contrast, a company that perceives itself as a 'leader' may be highly inclined to view stakeholders as a source of positive influence. 
On the downside, such a firm might have difficulty distinguishing when stakeholder contributions are less pertinent, as it is deliberately more open to elements in its operating environment (including stakeholders) as potential sources of learning and change, and because the firm is oriented towards creating space for fundamental change (through boundary redefinition), opening issues to entirely new ways of thinking and addressing issues from a whole-system perspective. A significant implication of this study is that potentially only those companies that see themselves as leaders are ultimately able to tap the innovation potential of stakeholder dialogue.
Resumo:
Additions of lactams, imides, (S)-4-benzyl-1,3-oxazolidin-2-one, 2-pyridone, pyrimidine-2,4-diones (AZT derivatives), or inosines to the electron-deficient triple bonds of methyl propynoate, tert-butyl propynoate, 3-butyn-2-one, N-propynoylmorpholine, or N-methoxy-N-methylpropynamide in the presence of many potential catalysts were examined. DABCO and, second, DMAP appeared to be the best (highest reaction rates and E/Z ratios), while RuCl3, RuClCp*(PPh3)2, AuCl, AuCl(PPh3), CuI, and Cu2(OTf)2 were incapable of catalyzing such additions. The groups incorporated (for example, the 2-(methoxycarbonyl)ethenyl group that we name MocVinyl) serve as protecting groups for the above-mentioned heterocyclic CONH or CONHCO moieties. Deprotections were accomplished via exchange with good nucleophiles: the 1-dodecanethiolate anion turned out to be the most general and efficient reagent, but in some particular cases other nucleophiles also worked (e.g., MocVinyl-inosines can be cleaved with succinimide anion). Some structural and mechanistic details have been accounted for with the help of DFT and MP2 calculations.
Resumo:
Intensifying competition between companies has confronted them with difficult challenges. Products should reach the market faster, and new products should be better than old ones and, in particular, better than competitors' equivalent products. At the same time, design, manufacturing, and other costs should not be high. Companies often try to meet these challenges with the help of product data, its management, and its exchange. Andritz, like other companies, must take these matters into account to succeed in the competition. This thesis was written for Andritz, one of the world's leading manufacturers of equipment and providers of maintenance services for pulp and paper production. Andritz is deploying an ERP system at all of its sites. The company wants to exploit the system as effectively as possible, so product data covering the entire life cycle is to be stored in it. Some of the product data is created by Andritz's partners and subcontractors, so data exchange between the partners should also be arranged in such a way that the data flows directly into the ERP system. The goal of this thesis is therefore to find a solution for handling the data exchange between Andritz and its partners. The thesis presents the purpose and importance of product data, its management, and its exchange. Several alternative solutions for implementing a data exchange system are presented, some of them based on general and industry-specific standards. Two commercial products are also introduced. The standards examined are PaperIXI, papiNet, X-OSCO, the PSK standards, and RosettaNet. In addition, the data exchange solutions of the ERP vendor SAP are examined. The best of these alternatives are analysed in more detail, and finally the different solutions are compared with one another in order to find the best option for Andritz's needs.
Resumo:
Different compounds have been reported as biomarkers of a smoking habit, but, to date, there is no appropriate biomarker of tobacco-related exposure because the proposed chemicals seem to be nonspecific or are only appropriate for short-term exposure. Moreover, conventional sampling methodologies are invasive because blood or urine samples are required. The use of a microtrap system coupled to gas chromatography-mass spectrometry analysis has been found to be very effective for the noninvasive analysis of volatile organic compounds in breath samples. The levels of benzene, 2,5-dimethylfuran, toluene, o-xylene, and m-/p-xylene were analysed in breath samples obtained from 204 volunteers (100 smokers, 104 nonsmokers; 147 females, 57 males; ages 16 to 53 years). 2,5-Dimethylfuran was always below the limit of detection (0.005 ppbv) in the nonsmoker population and always detected in smokers, independent of smoking habits. Benzene was an effective biomarker only for medium and heavy smokers, and its level was affected by smoking habits. The levels of xylenes and toluene differed only in heavy smokers and after short-term exposure. The results obtained suggest that 2,5-dimethylfuran is a specific breath biomarker of smoking status independent of smoking habits (e.g., short- and long-term exposure, light and heavy consumption), and so this compound might be useful as a biomarker of smoking exposure.
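The decision rule implied by this finding (2,5-dimethylfuran detectable above the limit of detection only in smokers) can be sketched as a minimal classifier. This is an illustrative sketch, not part of the study; the threshold is the limit of detection reported in the abstract, and the function and sample names are hypothetical:

```python
# Limit of detection for 2,5-dimethylfuran reported in the abstract (ppbv).
DMF_LOD_PPBV = 0.005

def classify_smoking_status(dmf_ppbv):
    """Classify a breath sample: 2,5-dimethylfuran at or above the LOD
    indicates a smoker (it was below the LOD in all nonsmokers in the
    reported cohort, and always detected in smokers)."""
    return "smoker" if dmf_ppbv >= DMF_LOD_PPBV else "nonsmoker"

# Hypothetical breath measurements for two volunteers (ppbv).
samples = {"volunteer_A": 0.002, "volunteer_B": 0.031}
labels = {name: classify_smoking_status(v) for name, v in samples.items()}
```

A rule this simple is only as good as the underlying specificity claim; for benzene, toluene, or the xylenes the abstract reports overlapping levels, so a single-threshold rule would not apply.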
Resumo:
The integration of information which can be gained from accessory minerals [i.e. age (t)] and rock-forming minerals [i.e. temperature (T) and pressure (P)] requires a more profound understanding of equilibration kinetics during metamorphic processes. This paper presents an approach comparing conventional P-T estimates from equilibrated assemblages of rock-forming minerals with temperature data derived from yttrium-garnet-monazite (YGM) and yttrium-garnet-xenotime (YGX) geothermometry. Such a comparison provides an initial indication of differences between the equilibration of major and trace elements. For this purpose, two migmatites, two polycyclic gneisses and one monocyclic gneiss from the Central Alps (Switzerland, northern Italy) were investigated. While the polycyclic samples exhibit trace-element equilibration between monazite and garnet grains assigned to the same metamorphic event, there are relics of monazite and garnet that evidently survived independent of their textural position. These observations suggest that surface processes dominate transport processes during equilibration of those samples. The monocyclic gneiss, on the contrary, displays rare isolated monazite with equilibration of all elements, despite comparably large transport distances. With a nearly linear crystal-size distribution of the garnet grain population, growth kinetics related to the major elements were likely surface-controlled in this sample. In contrast to these completely equilibrated examples, the migmatites indicate disequilibrium between garnet and monazite, with a change in REE patterns along garnet transects. This disequilibrium may have been initiated by a changing bulk chemistry during melt segregation. While migmatite environments are expected to support high transport rates (i.e. high temperatures and melt presence), the evolution of equilibration in migmatites is additionally related to changes in chemistry. 
As a key finding, surface-controlled equilibration kinetics seem to dominate over transport-controlled processes in the investigated samples. This may provide decisive information for understanding age data derived from monazite.
Resumo:
Mechanisms concerning life or death decisions in protozoan parasites are still imperfectly understood. Comparison with higher eukaryotes has led to the hypothesis that caspase-like enzymes could be involved in death pathways. This hypothesis was reinforced by the description of caspase-related sequences in the genome of several parasites, including Plasmodium, Trypanosoma and Leishmania. Although several teams are working to decipher the exact role of metacaspases in protozoan parasites, partial, conflicting or negative results have been obtained with respect to the relationship between protozoan metacaspases and cell death. The aim of this paper is to review current knowledge of protozoan parasite metacaspases within a drug targeting perspective.
Resumo:
BACKGROUND: Persons infected with human immunodeficiency virus (HIV) have increased rates of coronary artery disease (CAD). The relative contribution of genetic background, HIV-related factors, antiretroviral medications, and traditional risk factors to CAD has not been fully evaluated in the setting of HIV infection. METHODS: In the general population, 23 common single-nucleotide polymorphisms (SNPs) were shown to be associated with CAD through genome-wide association analysis. Using the Metabochip, we genotyped 1875 HIV-positive, white individuals enrolled in 24 HIV observational studies, including 571 participants with a first CAD event during the 9-year study period and 1304 controls matched on sex and cohort. RESULTS: A genetic risk score built from 23 CAD-associated SNPs contributed significantly to CAD (P = 2.9 × 10⁻⁴). In the final multivariable model, participants with an unfavorable genetic background (top genetic score quartile) had a CAD odds ratio (OR) of 1.47 (95% confidence interval [CI], 1.05-2.04). This effect was similar to hypertension (OR = 1.36; 95% CI, 1.06-1.73), hypercholesterolemia (OR = 1.51; 95% CI, 1.16-1.96), diabetes (OR = 1.66; 95% CI, 1.10-2.49), ≥ 1 year lopinavir exposure (OR = 1.36; 95% CI, 1.06-1.73), and current abacavir treatment (OR = 1.56; 95% CI, 1.17-2.07). The effect of the genetic risk score was additive to the effect of nongenetic CAD risk factors, and did not change after adjustment for family history of CAD. CONCLUSIONS: In the setting of HIV infection, the effect of an unfavorable genetic background was similar to traditional CAD risk factors and certain adverse antiretroviral exposures. Genetic testing may provide prognostic information complementary to family history of CAD.
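As an illustration of how an additive genetic risk score of this kind is typically computed and how "top genetic score quartile" membership is assigned, the sketch below uses simulated data. The genotype matrix, per-SNP weights, and cutoff convention are assumptions for illustration; the study's actual SNP list and weighting are not reproduced here:

```python
import numpy as np

def genetic_risk_score(genotypes, weights):
    """Additive score: weighted sum of risk-allele counts (0, 1, or 2
    per SNP) across all SNPs, one score per participant."""
    return np.dot(genotypes, weights)

def top_quartile_flags(scores):
    """Flag participants whose score falls in the top quartile of the
    observed score distribution ('unfavorable genetic background')."""
    cutoff = np.percentile(scores, 75)
    return scores >= cutoff

# Simulated cohort: 8 participants, 23 CAD-associated SNPs.
rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(8, 23))       # risk-allele counts
weights = rng.uniform(0.01, 0.2, size=23)          # e.g. per-allele log-odds

scores = genetic_risk_score(genotypes, weights)    # one score per person
flags = top_quartile_flags(scores)                 # boolean, per person
```

In practice the weights would come from published per-allele effect sizes (log odds ratios) rather than being uniform random draws, and the quartile cutoff would be defined within the study population.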
Resumo:
Cancer-related inflammation has emerged in recent years as a major event contributing to tumor angiogenesis, tumor progression and metastasis formation. Bone marrow-derived and inflammatory cells promote tumor angiogenesis by providing endothelial progenitor cells that differentiate into mature endothelial cells, and by secreting pro-angiogenic factors and remodeling the extracellular matrix to stimulate angiogenesis through paracrine mechanisms. Several bone marrow-derived myelomonocytic cells, including monocytes and macrophages, have been identified and characterized by several laboratories in recent years. While the central role of these cells in promoting tumor angiogenesis, tumor progression and metastasis is nowadays well established, many questions remain open and new ones are emerging. These include the relationship between their phenotype and function, the mechanisms of pro-angiogenic programming, their contribution to resistance to anti-angiogenic treatments and to metastasis, and their potential clinical use as biomarkers of angiogenesis and anti-angiogenic therapies. Here, we will review phenotypical and functional aspects of bone marrow-derived myelomonocytic cells and discuss some of the current outstanding questions.