Abstract:
Valuable minerals can be recovered by froth flotation, a widely used separation technique in mineral processing. In a flotation cell, hydrophobic particles attach to air bubbles dispersed in the slurry and rise to the top of the cell. Valuable particles are made hydrophobic by adding collector chemicals to the slurry. With the help of a frother reagent, a stable froth forms on top of the cell, and the froth carrying the valuable minerals, i.e. the concentrate, can be removed for further processing. Normally the collector is dosed on the basis of the feed rate of the flotation circuit and the head grade of the valuable metal. However, the mineral composition of the ore also affects the consumption of the collector, i.e. how much is adsorbed on the mineral surfaces. It is therefore worth monitoring the residual collector concentration in the flotation tailings: excess collector usage causes unnecessary costs and may even disturb the process. The literature part of the Master's thesis introduces the basics of the flotation process and collector chemicals. Capillary electrophoresis (CE), an analytical technique suitable for detecting collector chemicals, is also reviewed. The experimental part of the thesis presents the development of an on-line CE method for monitoring the concentration of collector chemicals in a flotation process, together with the results of a measurement campaign. With the developed on-line CE method, it was possible to determine the quality and quantity of collector chemicals in nickel flotation tailings at a concentrator plant. Sodium ethyl xanthate and sodium isopropyl xanthate residuals were found in the tailings, and a slight correlation between the measured concentrations and the dosage amounts could be seen.
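The dosing logic described above (a base collector rate set from the circuit feed rate and head grade, then trimmed against the residual concentration measured by the on-line analyser) can be sketched as follows. All numbers, the target threshold and the simple proportional cut are illustrative assumptions, not the plant's actual control law.

```python
def collector_setpoint(feed_rate_tph, head_grade_pct, dose_g_per_t=20.0):
    """Base collector feed (g/h) from circuit feed rate (t/h) and
    valuable-metal head grade (%). dose_g_per_t is an assumed
    specific dosage, scaled by head grade (illustrative only)."""
    return feed_rate_tph * dose_g_per_t * head_grade_pct

def corrected_setpoint(base_g_per_h, residual_mg_per_l,
                       target_mg_per_l=2.0, gain=0.1):
    """Trim the dose when the on-line CE analyser reports excess
    residual collector in the tailings (simple proportional cut)."""
    excess = max(0.0, residual_mg_per_l - target_mg_per_l)
    return base_g_per_h * max(0.0, 1.0 - gain * excess)

base = collector_setpoint(feed_rate_tph=500.0, head_grade_pct=0.25)  # 2500.0 g/h
trimmed = corrected_setpoint(base, residual_mg_per_l=4.0)            # 2000.0 g/h
```

A measured residual above the target cuts the feed-forward dose, which is the monitoring rationale the abstract describes.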
Abstract:
The aim of the present set of longitudinal studies was to explore 3-7-year-old children's Spontaneous FOcusing On Numerosity (SFON) and its relation to early mathematical development. The specific goals were to capture, in method and theory, the distinct process by which children focus on numerosity as a part of their activities involving exact number recognition, and the individual differences in this process that may be informative for the development of more complex number skills. Over the course of conducting the five studies, fifteen novel tasks were progressively developed for the SFON assessments. The tasks aimed to control for the confounding effects of insufficient number recognition, verbal comprehension, other procedural skills and working memory capacity. Furthermore, it was explored how children's individual differences in SFON relate to their development of number sequence, subitizing-based enumeration, object counting and basic arithmetic skills. The effect of social interaction on SFON was also tested. Study I captured the first phase of the 3-year longitudinal study with 39 children. It investigated whether there were differences in 3-year-old children's tendency to focus on numerosity, and whether these differences were related to the children's development of cardinality recognition skills from the age of 3 to 4 years. The two groups of children, formed on the basis of the strength of their SFON tendency at the age of 3 years, were found to differ in their development of recognising and producing small numbers. The children whose SFON tendency was very predominant developed faster in cardinality-related skills from the age of 3 to 4 years than the children whose SFON tendency was less predominant. Thus, children's development in cardinality recognition skills is related to their SFON tendency.
Studies II and III were conducted to investigate, firstly, children's individual differences in SFON and, secondly, whether children's SFON is related to their counting development. Altogether nine tasks were designed for the assessments of spontaneous and guided focusing on numerosity. The longitudinal data of 39 children in Study II, from the age of 3.5 to 6 years, showed individual differences in SFON at the ages of 4, 5 and 6 years, as well as stability in children's SFON across the tasks used at different ages. Counting skills were assessed at the ages of 3.5, 5 and 6 years. Path analyses indicated a reciprocal relationship between SFON and counting development. In Study III, these results on individual differences in SFON tendency, the stability of SFON across different tasks, and the relationship between SFON and mathematical skills were confirmed by a larger-scale cross-sectional study of 183 children averaging 6.5 years of age (range 6;0-7;0 years). The unique variance that SFON accounted for in number sequence elaboration, object counting and basic arithmetic skills remained statistically significant (partial correlations varying from .27 to .37) when the effects of non-verbal IQ and verbal comprehension were controlled. In addition, to confirm that the SFON tasks assess SFON tendency independently of enumeration skills, guided focusing tasks were used with children who had failed the SFON tasks. It was explored whether these children were able to proceed in tasks similar to the SFON tasks once they were guided to focus on number. The results showed that these children's poor performance in the SFON tasks was caused not by a deficiency in executing the tasks but by a lack of focusing on numerosity. The longitudinal Study IV of 39 children aimed at increasing knowledge of the associations between children's long-term SFON tendency, subitizing-based enumeration and verbal counting skills.
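The partial correlations reported in Study III (SFON versus counting skills, with non-verbal IQ and verbal comprehension partialled out) can be computed by correlating regression residuals. The sketch below uses fabricated toy data purely for illustration; it is not the study's dataset.

```python
import numpy as np

def partial_corr(x, y, controls):
    """Correlation between x and y after regressing both on the
    control variables (plus an intercept) and keeping the residuals."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    Z = np.column_stack([np.ones(len(x))] + [np.asarray(c, float) for c in controls])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

# Fabricated example: y depends linearly on x and the control z, so
# the partial correlation of x and y given z is exactly 1.
z = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
x = [1.0, 0.0, 2.0, 1.0, 3.0, 2.0]
y = [2 * xi + 3 * zi for xi, zi in zip(x, z)]
print(round(partial_corr(x, y, [z]), 6))  # 1.0
```

Residual-based partial correlation is the standard construction behind the .27-.37 values the abstract cites.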
Children were tested twice, at the age of 4-5 years, on their SFON, and once at the age of 5 on their subitizing-based enumeration, number sequence production and object counting skills. Results showed considerable stability in SFON tendency measured at different ages, and a positive direct association between SFON and number sequence production. The association between SFON and object counting skills was significantly mediated by subitizing-based enumeration. These results indicate that the associations between a child's SFON and the sub-skills of verbal counting may differ depending on how significant a role understanding the cardinal meanings of number words plays in learning these skills. The specific goal of Study V was to investigate whether it is possible to enhance 3-year-old children's SFON tendency, and thus start children's deliberate practice in early mathematical skills. Participants were 3-year-old children in Finnish day care. The SFON scores and cardinality-related skills of the experimental group of 17 children were compared to the corresponding results of the 17 children in the control group. The results show an experimental effect on SFON tendency and subsequent development in cardinality-related skills during the 6-month period from pretest to delayed posttest in those children of the experimental group who had some initial SFON tendency. Social interaction thus has an effect on children's SFON tendency. The results of the five studies assert that, within a child's existing mathematical competence, it is possible to distinguish a separate process: the child's tendency to spontaneously focus on numerosity. Moreover, there are significant individual differences in children's SFON at the age of 3-7 years. Moderate stability was found in this tendency across different tasks assessed both at the same and at different ages. Furthermore, SFON tendency is related to the development of early mathematical skills.
Educational implications of the findings emphasise, first, the importance of regarding focusing on numerosity as a separate, essential process in the assessment of young children's mathematical skills. Second, the substantial individual differences in SFON tendency during the childhood years suggest that uncovering and modeling this kind of mathematically meaningful perceiving of the surroundings and tasks could be an efficient tool for promoting young children's mathematical development, and thus preventing later failures in learning mathematical skills. It is proposed that future studies consider focusing on numerosity as one potential sub-process of activities involving exact number recognition.
Abstract:
Results of subgroup analyses (SA) reported in randomized clinical trials (RCT) cannot be adequately interpreted without information about the methods used in the study design and the data analysis. Our aim was to show how often inaccurate or incomplete reports occur. First, we selected eight methodological aspects of SA on the basis of their importance to a reader in determining the confidence that should be placed in the author's conclusions regarding such analyses. Then, we reviewed the current practice of reporting these methodological aspects of SA in clinical trials in four leading journals: the New England Journal of Medicine (NEJM), the Journal of the American Medical Association (JAMA), the Lancet, and the American Journal of Public Health (AJPH). Eight consecutive reports from each journal published after July 1, 1998 were included. Of the 32 trials surveyed, 17 (53%) had at least one SA. Overall, the proportion of RCT reporting a particular methodological aspect ranged from 23 to 94%. Information on whether the SA was specified before or after the analysis was reported in only 7 (41%) of the studies. Of the total possible number of items to be reported, NEJM, JAMA, the Lancet and AJPH clearly mentioned 59, 67, 58 and 72%, respectively. We conclude that current reporting of SA in RCT is incomplete and inaccurate. The results of such SA may have harmful effects on treatment recommendations if accepted without judicious scrutiny. We recommend that editors improve the reporting of SA in RCT by giving authors a list of the important items to be reported.
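The per-journal completeness percentages above follow from a simple tally: items clearly mentioned divided by the total possible (eight aspects per trial). The trials and item sets below are invented placeholders, not the survey's data.

```python
ASPECTS = 8  # methodological aspects of subgroup analysis surveyed

def journal_completeness(trials):
    """trials: list of sets, each holding the aspect numbers (1-8) a
    trial clearly mentions. Returns the percent of all possible items
    reported across the journal's trials."""
    possible = ASPECTS * len(trials)
    mentioned = sum(len(t) for t in trials)
    return 100.0 * mentioned / possible

# Two hypothetical trials from one journal:
demo = [{1, 2, 3, 5, 6}, {1, 2, 4, 5, 6, 7, 8}]
print(round(journal_completeness(demo), 1))  # 75.0
```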
Abstract:
This thesis consists of three essays related to mechanism design and auctions. In the first essay, I study the design of efficient Bayesian mechanisms in environments where agents' utility functions depend on the chosen alternative even when they do not participate in the mechanism. In addition to an allocation rule and a payment rule, the planner can issue threats in order to induce agents to participate in the mechanism and to maximize his own surplus; the planner can also presume the type of an agent who does not participate. I show that the solution to the design problem can be found through a max-min choice of presumed types and threats. I apply this to the design of an efficient multi-object auction when ownership of the good by one buyer imposes negative externalities on the other buyers. The second essay considers the fair-return rule used by the European Space Agency (ESA). It guarantees each member state a return proportional to its contribution, in the form of contracts awarded to firms from that state. The fair-return rule conflicts with the principle of free competition, since contracts are not necessarily awarded to the firms submitting the lowest bids. This has raised debate about the use of the rule: the large states, which have strong national space programmes, see its strict application as an obstacle to competitiveness and profitability. A priori, the rule seems more costly to the agency than traditional auctions. We show, on the contrary, that an appropriate implementation of the fair-return rule can make it less costly than traditional competitive auctions.
We consider both the complete-information case, in which the firms' technology levels are common knowledge, and the incomplete-information case, in which firms privately observe their production costs. Finally, in the third essay, I derive an optimal tendering mechanism in an environment where a buyer of heterogeneous items faces potential suppliers from different groups and is constrained to choose a list of winners that is compatible with quotas assigned to the groups. The optimal allocation rule assigns priority levels to suppliers on the basis of the individual costs they report to the decision maker. The way these priority levels are determined is subjective but known to all before the tender takes place. The reported costs induce a score for each potential list of winners. The items are then bought from the list with the best score, provided that score is not greater than the buyer's value. I also show that, in general, it is not optimal to buy the items through separate auctions.
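The allocation rule of the third essay, i.e. score every quota-feasible list of winners from the reported costs and buy from the best-scoring list if its score does not exceed the buyer's value, can be sketched by brute-force enumeration. The priority weights, the quota format and the example numbers below are illustrative assumptions, not the essay's exact mechanism.

```python
from itertools import combinations

def best_winner_list(suppliers, k, min_per_group, buyer_value):
    """suppliers: list of (name, group, reported_cost, priority_weight).
    Pick the k-supplier list with the lowest weighted score that meets
    each group's minimum quota; return None if even the best score
    exceeds the buyer's value."""
    feasible = []
    for combo in combinations(suppliers, k):
        counts = {}
        for _, g, _, _ in combo:
            counts[g] = counts.get(g, 0) + 1
        if all(counts.get(g, 0) >= m for g, m in min_per_group.items()):
            score = sum(c * w for _, _, c, w in combo)
            feasible.append((score, combo))
    if not feasible:
        return None
    score, combo = min(feasible, key=lambda t: t[0])
    return [s[0] for s in combo] if score <= buyer_value else None

suppliers = [("a1", "A", 10.0, 1.0), ("a2", "A", 12.0, 1.0),
             ("b1", "B", 9.0, 1.2), ("b2", "B", 15.0, 1.2)]
# Two winners, at least one from each group:
print(best_winner_list(suppliers, 2, {"A": 1, "B": 1}, buyer_value=30.0))
```

With these invented numbers the list ("a1", "b1") wins at weighted score 20.8; lowering the buyer's value below that score makes the buyer keep the items unbought, as the mechanism prescribes.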
Abstract:
Introduction: Dementia can be caused by Alzheimer's disease (AD), cerebrovascular disease (CVD), or a combination of the two. When cerebrovascular disease is associated with dementia, survival is considered to be reduced. It remains to be shown whether treatment with cholinesterase inhibitors (ChEIs), which improves cognitive symptoms and global function in patients with AD, also acts on vascular forms of dementia. Objectives: The present study was designed to determine whether coexisting CVD was associated with survival or with time to nursing home placement in AD patients treated with ChEIs. Studies showing worse outcomes in patients with CVD than in those without could argue against the use of ChEIs in patients with both AD and CVD. The objective of a second analysis was to assess, for the first time in AD patients, the potential impact of immortal-time (and follow-up) bias on these outcomes (death or nursing home placement). Methods: A retrospective cohort study was conducted using the Régie de l'Assurance Maladie du Québec (RAMQ) databases to examine time to nursing home placement or death among AD patients aged 66 years and over, with or without CVD, treated with ChEIs between July 1, 2000 and June 30, 2003. Since ChEIs are indicated only for AD in Canada, each ChEI prescription was taken as a diagnosis of AD. Concomitant CVD was identified on the basis of a lifetime diagnosis of stroke or endarterectomy, or a diagnosis of transient ischemic attack in the six months preceding the entry date.
Separate analyses were conducted for patients who used ChEIs persistently and for those who discontinued therapy. Seven Cox proportional hazards regression models, which varied in the definition of the entry date (start of follow-up) and in the duration of follow-up, were used to assess the impact of immortal-time bias. Results: 4,428 patients met the inclusion criteria for AD with CVD; the group of patients with AD alone comprised 13,512 individuals. For the composite endpoint of time to nursing home placement or death, 1,000-day survival rates were lower among AD patients with CVD than among those with AD alone (p<0.01), but the absolute differences were very small (84% vs. 86% for continuous ChEI use; 77% vs. 78% for discontinued ChEI therapy). For the secondary endpoints, time to death was shorter in patients with CVD than in those without, but time to nursing home placement did not differ between the two groups. In the primary (unbiased) analysis, no association was found between the type of ChEI and death or nursing home placement. However, after the immortal-time bias was introduced, a strong differential effect was observed. Limitations: The results may have been affected by selection bias (misclassification), by between-group differences in smoking and body mass index (this information was not available in the RAMQ databases), and by differences in the duration of ChEI therapy.
Conclusions: The associations between coexisting CVD and time to nursing home placement or death appear to be of little clinical relevance among AD patients treated with ChEIs. The absence of a difference between AD patients with and without CVD suggests that coexisting CVD should not be a reason to deny AD patients access to ChEI treatment. Counting unexposed person-time in the analysis eliminates biased estimates of drug effectiveness.
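The immortal-time bias examined in the second analysis arises when the interval between cohort entry and the first prescription is counted as exposed person-time: an exposed patient must, by definition, have survived that interval, which deflates the exposed event rate. A minimal numeric sketch, with entirely fabricated person-time figures:

```python
def event_rate(events, person_years):
    """Events per 100 person-years."""
    return 100.0 * events / person_years

# Fabricated cohort: 10 deaths over 200 person-years of truly exposed
# time. 50 person-years elapsed before treatment started; by design
# no event can occur during that wait (patients must survive to be treated).
deaths, exposed_py, immortal_py = 10, 200.0, 50.0

unbiased = event_rate(deaths, exposed_py)              # 5.0 per 100 py
biased = event_rate(deaths, exposed_py + immortal_py)  # 4.0 per 100 py
# Misclassifying immortal time as exposed makes the drug look protective.
```

This is why the seven Cox models, which vary the entry-date definition, produce a spurious differential effect once the immortal interval is attributed to exposure.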
Abstract:
The country has witnessed a tremendous increase in the vehicle population and in axle loading patterns during the last decade, leaving its road network overstressed and prone to premature failure. The type of deterioration present in the pavement should be considered when determining whether it has a functional or a structural deficiency, so that an appropriate overlay type and design can be developed. Structural failure arises from conditions that adversely affect the load-carrying capability of the pavement structure. Inadequate thickness, cracking, distortion and disintegration cause structural deficiency. Functional deficiency arises when the pavement does not provide a smooth riding surface and comfort to the user. This can be due to poor surface friction and texture, hydroplaning and splash from the wheel path, rutting, and excess surface distortion such as potholes, corrugation, faulting, blow-ups, settlement and heaves. Functional condition determines the level of service provided by the facility to its users at a particular time and also the Vehicle Operating Costs (VOC), thus influencing the national economy. Prediction of pavement deterioration is helpful for assessing the remaining effective service life (RSL) of the pavement structure on the basis of the reduction in performance levels, and for applying alternative designs and rehabilitation strategies with long-range funding requirements for pavement preservation. In addition, such models can predict the impact of treatment on the condition of the sections. Infrastructure prediction models can thus be classified into four groups, namely primary response models, structural performance models, functional performance models and damage models. The factors affecting the deterioration of roads are very complex in nature and vary from place to place.
Hence there is a need for a thorough study of the deterioration mechanism under varied climatic zones and soil conditions before arriving at a definite strategy of road improvement. Realizing the need for a detailed study involving all types of roads in the state, with varying traffic and soil conditions, the present study has been attempted. This study attempts to identify the parameters that affect the performance of roads and to develop performance models suitable for Kerala conditions. A critical review of the various factors that contribute to pavement performance is presented, based on data collected from selected road stretches and from five corporations of Kerala. These roads represent urban conditions as well as National Highways, State Highways and Major District Roads in suburban and rural conditions. This research work studies the road condition of Kerala with respect to varying soil, traffic and climatic conditions, carries out periodic performance evaluation of selected roads of representative types, and develops distress prediction models for the roads of Kerala. In order to achieve this aim, the study is divided into two parts. The first part deals with the study of the pavement condition and subgrade soil properties of urban roads distributed over the five Corporations of Kerala, namely Thiruvananthapuram, Kollam, Kochi, Thrissur and Kozhikode. From the 44 selected roads, 68 homogeneous sections were studied. The data collected on the functional and structural condition of the surface include pavement distress in terms of cracks, potholes, rutting, raveling and pothole patching. The structural strength of the pavement was measured as rebound deflection using Benkelman Beam deflection studies. In order to record the pavement layer details and determine the subgrade soil properties, trial pits were dug and the in-situ field density was found using the Sand Replacement Method.
Laboratory investigations were carried out to determine the subgrade soil properties: soil classification, Atterberg limits, Optimum Moisture Content, Field Moisture Content and 4-day soaked CBR. The relative compaction in the field was also determined. Traffic details were collected by conducting a traffic volume count survey and an axle load survey. From the data thus collected, the strength of the pavement was calculated as a function of the layer coefficients and thicknesses, represented as the Structural Number (SN). This was further related to the CBR value of the soil to obtain the Modified Structural Number (MSN). The condition of the pavement was represented in terms of the Pavement Condition Index (PCI), which is a function of the surface distress at the time of the investigation and was calculated in the present study using the deduct value method developed by the US Army Corps of Engineers. The influence of subgrade soil type and pavement condition on the relationship between MSN and rebound deflection was studied using appropriate plots for the predominant soil types and for classified values of the Pavement Condition Index. The relationship will help practicing engineers design the overlay thickness required for a pavement without conducting the BBD test. Regression analysis using SPSS was carried out, with various trials, to find the best-fit relationship between rebound deflection and CBR and other soil properties for the Gravel, Sand, and Silt & Clay fractions. The second part of the study deals with the periodic performance evaluation of selected road stretches representing a National Highway (NH), State Highway (SH) and Major District Road (MDR), located in different geographical conditions and carrying varying traffic. Eight road sections, divided into 15 homogeneous sections, were selected for the study, and 6 sets of continuous periodic data were collected.
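The two strength indices used above can be sketched as follows: SN is the sum of layer coefficient times layer thickness, and for MSN one widely cited form (from TRRL practice) adds a subgrade-support term based on the soaked CBR. The layer coefficients and thicknesses below are illustrative assumptions, and the MSN coefficients should be checked against the design manual actually used in the thesis.

```python
import math

def structural_number(layers):
    """layers: list of (layer_coefficient, thickness_inches)."""
    return sum(a * d for a, d in layers)

def modified_structural_number(sn, cbr):
    """MSN adds a subgrade contribution from the soaked CBR value
    (coefficients from the commonly cited TRRL relationship;
    verify against the governing design manual)."""
    lc = math.log10(cbr)
    return sn + 3.51 * lc - 0.85 * lc ** 2 - 1.43

# Illustrative pavement: surfacing, base and sub-base layers
layers = [(0.30, 1.6), (0.14, 10.0), (0.11, 8.0)]   # assumed coefficients
sn = structural_number(layers)                       # 2.76
msn = modified_structural_number(sn, cbr=10)         # 2.76 + 1.23 = 3.99
```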
The periodic data collected include the functional and structural condition in terms of distress (potholes, pothole patching, cracks, rutting and raveling), skid resistance using a portable skid resistance pendulum, surface unevenness using a Bump Integrator, texture depth using the sand patch method, and rebound deflection using the Benkelman Beam. Baseline data for the study stretches were collected as one-time data. Pavement history was obtained as secondary data. Pavement drainage characteristics were recorded in terms of camber or cross slope, measured using a camber board (slope meter) for the carriageway and shoulders, availability of a longitudinal side drain, presence of a valley, terrain condition, soil moisture content, water table data, High Flood Level, rainfall data, land use and cross slope of the adjoining land. These data were used to establish the drainage condition of the study stretches. Traffic studies were conducted, including classified volume counts and axle load studies. From the field data thus collected, the progression of each parameter was plotted for all the study roads and validated for accuracy. The Structural Number (SN) and Modified Structural Number (MSN) were calculated for the study stretches. Progression of the deflection, distress, unevenness, skid resistance and macro texture of the study roads was evaluated. Since pavement deterioration is a complex phenomenon to which all the above factors contribute, pavement deterioration models were developed as nonlinear regression models, using SPSS, with the periodic data collected for all the above road stretches. General models were developed for cracking progression, raveling progression, pothole progression and roughness progression using SPSS. A model for construction quality was also developed. Calibration of the HDM-4 pavement deterioration models for local conditions was done using the data for cracking, raveling, potholes and roughness. Validation was done using the data collected in 2013.
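Deterioration progression models of the kind fitted above are often power laws in pavement age, e.g. roughness = a * age^b, which can be fitted by log-linearizing with plain NumPy. The thesis used SPSS; this is only an illustrative stand-in, with synthetic noiseless data generated from known parameters.

```python
import numpy as np

def fit_power_model(age, iri):
    """Fit iri = a * age**b by least squares on
    log(iri) = log(a) + b * log(age)."""
    b, log_a = np.polyfit(np.log(age), np.log(iri), 1)
    return np.exp(log_a), b

# Synthetic periodic observations generated from a = 2.5, b = 0.6
age = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
iri = 2.5 * age ** 0.6
a, b = fit_power_model(age, iri)
print(round(a, 3), round(b, 3))  # 2.5 0.6
```

With field data the exponent b captures how quickly roughness (or cracking, raveling, potholing) progresses, which is what the calibrated HDM-4 coefficients adjust for local conditions.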
The application of HDM-4 to compare different maintenance and rehabilitation options was studied, considering deterioration parameters such as cracking, potholes and raveling. The alternatives considered for analysis were a base alternative with crack sealing and patching, an overlay with 40 mm BC using ordinary bitumen, an overlay with 40 mm BC using Natural Rubber Modified Bitumen, and an overlay of Ultra Thin White Topping. Economic analysis of these options was done considering the Life Cycle Cost (LCC). The average speed that can be obtained with each option was also compared. The results were in favour of Ultra Thin White Topping over flexible pavements. Hence, Design Charts were also plotted for the estimation of maximum wheel load stresses for different slab thicknesses under different soil conditions. The design charts show the maximum stress for a particular slab thickness under different soil conditions, incorporating different k values, and can be handy for a design engineer. Fuzzy rule based models developed for site-specific conditions were compared with the regression models developed using SPSS. The Riding Comfort Index (RCI) was calculated and correlated with unevenness to develop a relationship. Relationships were also developed between Skid Number and the Macro Texture of the pavement. The effort made through this research work will help highway engineers understand the behaviour of flexible pavements under Kerala conditions and arrive at suitable maintenance and rehabilitation strategies. Key Words: Flexible Pavements – Performance Evaluation – Urban Roads – NH – SH and other roads – Performance Models – Deflection – Riding Comfort Index – Skid Resistance – Texture Depth – Unevenness – Ultra Thin White Topping
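The life-cycle cost comparison behind the economic analysis reduces to discounting each alternative's construction and maintenance outlays to present value. The cash flows, horizon and discount rate below are invented for illustration only; they are not the thesis's HDM-4 inputs.

```python
def life_cycle_cost(initial_cost, annual_costs, discount_rate):
    """Present value of the initial cost plus maintenance outlays,
    with annual_costs[t] spent at the end of year t+1."""
    pv = initial_cost
    for t, c in enumerate(annual_costs, start=1):
        pv += c / (1.0 + discount_rate) ** t
    return pv

# Invented alternatives over a 3-year horizon at a 10% discount rate:
patching = life_cycle_cost(0.0, [100.0, 100.0, 100.0], 0.10)
overlay = life_cycle_cost(220.0, [10.0, 10.0, 10.0], 0.10)
print(round(patching, 2), round(overlay, 2))  # 248.69 244.87
```

A higher initial outlay can still win on LCC when it suppresses recurring maintenance, which is the shape of the argument favouring Ultra Thin White Topping.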
Abstract:
Collaborative working methods offer the hope of reduced waste, lower tendering costs and improved outputs. The costs of tendering may be influenced by the introduction of different working methods. Transaction cost economics appears to offer an analytical framework for studying the costs of tendering, but it is more concerned with providing explanations at the institutional/industry level than at the level of individual projects. Surveys and interviews were carried out with small samples in the UK. The data show that while tendering costs are not necessarily higher in collaborative working arrangements, there is no correlation between the costs of tendering and the way the work is organized. Practitioners perceive that the benefits of working in collaborative procurement routes far outweigh the costs. Tendering practices can be improved to avoid waste, and the suggested improvements include restricting selective tendering lists to 23 bidders, letting bidders know whom they are competing with, reimbursing tendering costs for aborted projects and ensuring that timely and comprehensive information is provided to bidders.
Abstract:
This report summarises a workshop convened by the UK Food Standards Agency (FSA) on 11 September 2006 to review the results of three FSA-funded studies and other recent research on the effects of the dietary n-6:n-3 fatty acid ratio on cardiovascular health. The objective of the workshop was to reach a clear conclusion on whether or not it was worth funding any further research in this area. On the basis of this review of the experimental evidence, and on theoretical grounds, it was concluded that the n-6:n-3 fatty acid ratio is not a useful concept and that it distracts attention from increasing absolute intakes of long-chain n-3 fatty acids, which have been shown to have beneficial effects on cardiovascular health. Other markers of fatty acid intake, which relate more closely to physiological function, may be more useful.
Abstract:
In the present contribution, I discuss the claim, endorsed by a number of authors, that contributing to a collective harm is the ground for special responsibilities to the victims of that harm. Contributors should, between them, cover the costs of the harms they have inflicted, at least if those harms would otherwise be rights-violating. I raise some doubts about the generality of this principle before moving on to sketch a framework for thinking about liability for the costs of harms in general. The framework is contractualist: it builds an account of how to think about liability for costs on the basis of the presumably attractive thought that individual agents should have as much control over their liabilities as is compatible with others having like control. I then use that framework to suggest that liability on the basis of contribution should be restricted to cases in which the contributors could have avoided their contribution relatively costlessly, in which meeting the liability is not crippling for them, and in which such a liability would not have chilling effects, either on them or on third parties. This account of the grounds for contributory liability also has the advantage of avoiding a number of awkward questions about what counts as a contribution, by shifting the issue away from often unanswerable questions about the precise causal genesis of some harm or other. Instead, control over conduct, which plausibly has some relation to the harm, becomes crucial. On the basis of this account, I then investigate whether a number of uses of the contributory principle are in fact appropriate. I argue that contributory liability is not appropriate for cases of collective harms committed by coordinated groups in the way that, for example, Iris Marion Young and Thomas Pogge have suggested, and that further investigation of how members of such groups may be liable will be needed.
Abstract:
In the 1970s, Corporate Social Responsibility (CSR) was discussed by Nobel laureate Milton Friedman in his article "The Social Responsibility of Business Is to Increase Its Profits" (Friedman, 1970). His view of CSR was contemptuous: he referred to it as "hypocritical window-dressing", reflecting the view of corporate America on CSR at the time. For a long time, short-term maximization of shareholder value was the only maxim for top management across industries and companies. Over the last decade, CSR has become a more important and relevant factor in a company's reputation, shifting the discussion from whether CSR is necessary to how CSR commitments should best be carried out (Smith, 2003). Inevitably, companies have an environmental, social and economic impact, thereby imposing social costs on current and future generations. In 2013, 50 of the world's biggest companies were responsible for 73 percent of total carbon dioxide (CO2) emissions (Global 500 Climate Change Report 2013). Post et al. (2002) refer to these social costs as a company's need to retain its "license to operate". In the late 1990s, CSR reporting was nearly unknown; this changed drastically during the last decade. Allen White, co-founder of the Global Reporting Initiative (GRI), said that CSR reporting "… has evolved from the extraordinary to the exceptional to the expected" (Confino, 2013). In confirmation of this, virtually all of the world's largest 250 companies report on CSR (93%), and reporting by now appears to be business standard (KPMG, 2013). CSR reports are a medium for transparency which may lead to an improved company reputation (Noked, 2013; Thorne et al., 2008; Wilburn and Wilburn, 2013). In addition, they may be used as part of an ongoing shareholder relations campaign, which may prevent shareholders from submitting Environmental and Social (E&S) proposals (Noked, 2013); according to an Ernst & Young report, E&S proposals represented the largest category of shareholder proposals submitted in 2013. (The top five E&S proposal topic areas in 2013 were: 1. Political spending/lobbying; 2. Environmental sustainability; 3. Corporate diversity/EEO; 4. Labor/human rights; and 5. Animal testing/animal welfare.) PricewaterhouseCoopers (PwC) even goes as far as to claim that CSR reports are "…becoming critical to a company's credibility, transparency and endurance" (PwC, 2013).
Resumo:
It is an ethical and legal requirement that patients be informed about therapeutic possibilities, including the risks, benefits, prognosis and costs of each possible and indicated alternative. However, health professionals hold the clinical, technical and scientific knowledge and determine what information will (or will not) be provided. The patient then decides whether to undergo a treatment, giving his or her free and informed consent on the basis of the data presented. Unfortunately, some professionals may not provide all the information necessary for an informed decision or, after obtaining the patient's consent, may provide information that leads the patient to give up the treatment initially accepted. Such information, if relevant and not a supervening fact, should have been provided from the outset. Moreover, the information may not be entirely true, leading the patient, for instance, to decide on the basis of inadequately presented risks. Craniofacial rehabilitation of the temporomandibular joint (TMJ) by means of a TMJ prosthesis is indicated in many situations. Often, patients in need of such prostheses have aesthetic and functional problems, and expectations of rehabilitation run high. This work presents a case and discusses the ethical and legal issues involved, including liability for partial and inadequate information given to a patient.
Resumo:
Summary: The formation of mid-ocean ridge basalts (MORB) is one of the most important material fluxes on Earth. Each year, more than 20 km³ of new magmatic crust is formed along the 75,000 km long mid-ocean ridge system, about 90 percent of global magma production. Although ocean ridges and MORB are among the most intensively studied geological topics, several controversies remain. Among the most important are the role of geodynamic boundary conditions, such as spreading rate or proximity to hotspots or transform faults, as well as the absolute degree of partial melting and the depth at which melting begins beneath the ridges. This dissertation addresses these topics on the basis of major and trace element compositions of minerals in oceanic mantle rocks. Geochemical characteristics of MORB suggest that the oceanic mantle begins to melt in the stability field of garnet peridotite. Recent experiments, however, show that the heavy rare earth elements (REE) are compatible in clinopyroxene (Cpx). Owing to this garnet-like behaviour of Cpx, garnet is no longer required to explain the MORB data, which shifts the onset of melting to lower pressures. It is therefore important to test whether this hypothesis is consistent with data from abyssal peridotites. These mantle fragments exposed on the ocean floor represent the residues of the melting process, and their mineral chemistry carries information about the conditions under which the magmas formed. Major and trace element compositions of peridotite samples from the Central Indian Ridge (CIR) were determined by electron microprobe and ion microprobe and compared with published data. Cpx in the CIR peridotites show low ratios of middle to heavy REE and high absolute concentrations of the heavy REE.
Melting models for a spinel peridotite using conventional incompatible partition coefficients (Kd's) cannot reproduce the measured fractionation of middle to heavy REE. Applying the new Kd's, which predict compatible behaviour of the heavy REE in Cpx, yields better results but still cannot explain the most strongly fractionated samples; moreover, very high degrees of melting would be required, which is inconsistent with the major element data. Low degrees of melting (~3-5%) in the stability field of garnet peridotite, followed by further melting of spinel peridotite, can, however, largely explain the observations. Garnet must therefore still be regarded as an important phase in the genesis of MORB (Chapter 1). A further obstacle to a quantitative understanding of melting processes beneath mid-ocean ridges is the lack of correlation between major and trace elements in residual abyssal peridotites. The Cr/(Cr+Al) ratio (Cr#) in spinel is generally regarded as a good qualitative indicator of the degree of melting. The mineral chemistry of the CIR peridotites and published data from other abyssal peridotites show that the heavy REE correlate very well (r² ~ 0.9) with the Cr# of coexisting spinels. Evaluating this correlation yields a quantitative, spinel-based melting indicator for residues, so that the degree of melting can be expressed as a function of Cr# in spinel: F = 0.10×ln(Cr#) + 0.24 (Hellebrand et al., Nature, in review; Chapter 2). Applying this indicator to mantle samples for which no ion probe data are available makes it possible to link geochemical and geophysical data.
From a geodynamic perspective, the Gakkel Ridge in the Arctic Ocean is of great importance for understanding melting processes, since it has the lowest spreading rate worldwide and lacks large transform faults. Published basalt data indicate an extremely low degree of melting, consistent with global correlations. Strongly altered mantle peridotites from one locality along the sparsely sampled Gakkel Ridge were therefore examined for primary minerals. In only one sample are oxidized spinel pseudomorphs with traces of primary spinel preserved. Their Cr# is significantly higher than that of some peridotites from faster-spreading ridges, and their degree of melting is thus higher than inferred from the basalt compositions. The degree of melting obtained with the indicator described above allows the crustal thickness at the Gakkel Ridge to be calculated. It is considerably greater than the thickness derived from gravity data, or the crustal thickness obtained from the global correlation between spreading rate and seismic data. This unexpected result may be attributable to compositional heterogeneities at low degrees of melting, or to an overall greater depletion of the mantle beneath the Gakkel Ridge (Hellebrand et al., Chem. Geol., in review; Chapter 3). Additional information on the modelling and analytical methods is given in Appendices A-C.
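The spinel-based melting indicator F = 0.10×ln(Cr#) + 0.24 quoted above can be sketched as a small function. This is a minimal illustration only: the function name, the validity check and the example Cr# value are assumptions, while the formula itself is the one stated in the abstract.

```python
import math

def melt_fraction(cr_number: float) -> float:
    """Degree of melting F from spinel Cr# = Cr/(Cr+Al),
    via the empirical indicator F = 0.10*ln(Cr#) + 0.24
    (Hellebrand et al.). Intended for residual abyssal
    peridotites; Cr# must lie in (0, 1]."""
    if not 0.0 < cr_number <= 1.0:
        raise ValueError("Cr# must be in (0, 1]")
    return 0.10 * math.log(cr_number) + 0.24

# Example: a spinel with Cr# = 0.30 (hypothetical value)
print(f"F = {melt_fraction(0.30):.3f}")  # -> F = 0.120, i.e. ~12% melting
```

Note that the logarithm makes F very sensitive to Cr# at low values, consistent with the indicator being calibrated on residues rather than on melts.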
Resumo:
In this work, the microscopic, chemical and spectroscopic characteristics of 260 natural emeralds and 66 synthetic "emeralds" are investigated. The concentrations of the chemical elements in the emeralds were determined by LA-ICP-MS and electron microprobe (EMS). Complementary Raman and IR spectroscopic methods make it possible to determine the origin of the various emeralds and their synthetic analogues. On the basis of their differing Si, Al and Be contents, synthetic "emeralds" can be separated from natural ones. The emeralds from Malipo and Chivor, as well as the synthetic "emeralds", can be distinguished from all other natural emeralds by their different Cr, V and Fe contents. Differing Mg, Na and K contents allow "schist-hosted" emeralds to be identified. It is found, however, that the distinction between "schist-hosted" and "non-schist-hosted" emerald deposits essentially describes only the end members of an evidently very variable crystal chemistry of beryls and emeralds; it does not establish a petrologically defensible separation, but rather shows that emeralds simply reflect the prevailing chemical regime under suitable pressure-temperature conditions. Inclusion features play a major role in distinguishing different deposits and production methods. For example, the emeralds of the three deposits Santa Terezinha, Chivor and Kafubu can be identified by their characteristic pyrite inclusions. The band positions and FWHM values of the Raman band at 1068 cm-1 and the IR band at 1200 cm-1 allow a differentiation between synthetic and natural emeralds and can, in addition, provide information about the deposit. Together with the chemical measurements, it can be shown that these bands are caused by Si-O vibrations.
The Raman and IR bands in the region of the water vibrations, and in particular the IR band around 1140 cm-1, allow flux-grown synthetics, hydrothermally grown synthetics and natural emeralds to be distinguished.
Resumo:
Tonalite-trondhjemite-granodiorite (TTG) gneisses form up to two-thirds of the preserved Archean continental crust, and there is considerable debate regarding the primary magmatic processes by which these rocks were generated. The popular theories hold that these rocks were formed by partial melting of basaltic oceanic crust which was previously metamorphosed to garnet-amphibolite and/or eclogite facies conditions, either at the base of thick oceanic crust or by subduction processes. This study investigates a new aspect regarding the source rock for Archean continental crust, which is inferred to have had a bulk composition richer in magnesium (picrite) than present-day basaltic oceanic crust. This difference is supposed to originate from a higher geothermal gradient in the early Archean, which may have induced higher degrees of partial melting in the mantle and hence a thicker and more magnesian oceanic crust. The methods used to investigate the role of a more MgO-rich source rock in the formation of TTG-like melts in the context of this new approach are mineral equilibria calculations with the software THERMOCALC and high-pressure experiments conducted from 10–20 kbar and 900–1100 °C, both combined in a forward modelling approach. Initially, P–T pseudosections for natural rock compositions with increasing MgO contents were calculated in the system NCFMASHTO (Na2O–CaO–FeO–MgO–Al2O3–SiO2–H2O–TiO2) to ascertain the metamorphic products of rocks with MgO contents increasing from a MORB up to a komatiite. A small number of previous experiments on komatiites showed the development of pyroxenite instead of eclogite and garnet-amphibolite during metamorphism and established that melts of these pyroxenites are of basaltic composition, thus again building oceanic crust instead of continental crust. The P–T pseudosections calculated represent a continuous development of their metamorphic products from amphibolites and eclogites towards pyroxenites.
On the basis of these calculations and the changes within the range of compositions, three picritic Models of Archean Oceanic Crust (MAOC) were established with different MgO contents (11, 13 and 15 wt%) ranging between basalt and komatiite. The thermodynamic modelling for MAOC 11, 13 and 15 at supersolidus conditions is imprecise, since no appropriate melt model for metabasic rocks is currently available and the melt model for metapelitic rocks yielded unsatisfactory calculations. The partially molten region is therefore covered by high-pressure experiments. The results of the experiments show a transition from predominantly tonalitic melts in MAOC 11 to basaltic melts in MAOC 15, and a solidus moving towards higher temperatures with increasing magnesium in the bulk composition. Tonalitic melts were generated in MAOC 11 and 13 at pressures up to 12.5 kbar in the presence of garnet, clinopyroxene, plagioclase plus/minus quartz (plus/minus orthopyroxene in the presence of quartz and at lower pressures) in the absence of amphibole, but it could not be explicitly established whether the tonalitic melts coexisting with an eclogitic residue and rutile at 20 kbar belong to the Archean TTG suite. Basaltic melts were generated predominantly in the presence of granulite facies residues, such as amphibole plus/minus garnet, plagioclase and orthopyroxene lacking quartz, in all MAOC compositions at pressures up to 15 kbar. The tonalitic melts generated in MAOC 11 and 13 indicate that a thicker oceanic crust with more magnesium than a modern basalt is also a viable source for the generation of TTG-like melts, and therefore of continental crust, in the Archean. The experimental results are related to different geologic settings as a function of pressure.
The favoured setting for the generation of early TTG-like melts at 15 kbar is the base of an oceanic crust thicker than exists today, or melting of slabs in shallow subduction zones, both without interaction of the tonalitic melts with the mantle. Tonalitic melts at 20 kbar may have been generated below the plagioclase stability field by slab melting in deeper subduction zones that developed with time during the progressive cooling of the Earth, but it is unlikely that those melts reached lower pressure levels without further mantle interaction.
Resumo:
A recent study showed increased resistance against strongylid nematodes in the offspring of a stallion affected by recurrent airway obstruction (RAO) compared with unrelated pasture mates; resistance against strongylid nematodes was associated with RAO status. Hypothesis: resistance against strongylid nematodes has a genetic basis, and the genetic variants influencing strongylid resistance also influence RAO susceptibility. Faecal samples from the half-sibling offspring of two RAO-affected Warmblood stallions (98 offspring from the first family, family 1, and 79 from the second, family 2) were analysed using a combined sedimentation-flotation method. The phenotype was defined as a binary trait, either positive or negative for egg shedding. The influence of non-genetic factors on egg shedding was analysed using SAS, the mode of inheritance was investigated using PAP and iBay, and the association between shedding of strongyle eggs and RAO was estimated by odds ratios. Previously established genotypes for 315 microsatellite markers were used for QTL analyses with GridQTL. The inheritance of strongylid egg shedding is influenced by major genes on ECA15 and ECA20. Shedding of strongylid eggs is associated with RAO in family 1 but not in family 2. Conclusions: the status of strongyle egg shedding has a genetic background. The results were inconclusive as to whether egg shedding and RAO share common genetic components. Our results suggest that it may be possible to select for resistance against strongylid nematodes.
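The abstract states that the association between strongyle egg shedding and RAO was estimated by odds ratios. A minimal sketch of an odds-ratio calculation on a 2×2 contingency table is given below; the counts are purely hypothetical, since the abstract reports no raw numbers, and the function name is an illustrative assumption.

```python
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Odds ratio for a 2x2 contingency table:
                 egg-positive  egg-negative
        RAO           a             b
        no RAO        c             d
    OR = (a*d)/(b*c); OR > 1 suggests a positive association."""
    if b == 0 or c == 0:
        raise ZeroDivisionError("cells b and c must be non-zero")
    return (a * d) / (b * c)

# Hypothetical counts, for illustration only
print(odds_ratio(30, 20, 15, 33))  # -> 3.3
```

In practice such an estimate would be accompanied by a confidence interval (e.g. via the log-odds standard error), which this sketch omits for brevity.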