Abstract:
Introduction

1.1 Occurrence of polycyclic aromatic hydrocarbons (PAHs) in the environment

Worldwide industrial and agricultural development has released a large number of natural and synthetic hazardous compounds into the environment through careless waste disposal, illegal waste dumping and accidental spills. As a result, numerous sites around the world require cleanup of soils and groundwater. Polycyclic aromatic hydrocarbons (PAHs) are one of the major groups of these contaminants (Da Silva et al., 2003). PAHs constitute a diverse class of organic compounds consisting of two or more aromatic rings in various structural configurations (Prabhu and Phale, 2003). As benzene derivatives, PAHs are thermodynamically stable. In addition, their low water solubility and strong hydrophobicity cause them to adhere to particle surfaces such as soils, which results in greater persistence under natural conditions. This persistence, coupled with their potential carcinogenicity, makes PAHs problematic environmental contaminants (Cerniglia, 1992; Sutherland, 1992). PAHs are found at high concentrations at many industrial sites, particularly those associated with the petroleum, gas production and wood preserving industries (Wilson and Jones, 1993).

1.2 Remediation technologies

Conventional techniques for the remediation of soil polluted with organic contaminants include excavation of the contaminated soil and disposal to a landfill, or capping (containment) of the contaminated areas of a site. These methods have drawbacks. The first simply moves the contamination elsewhere and may create significant risks in the excavation, handling and transport of hazardous material; in addition, it is increasingly difficult and expensive to find new landfill sites for final disposal. The cap-and-containment method is only an interim solution, since the contamination remains on site and the isolation barriers must be monitored and maintained far into the future, with all the associated costs and potential liability. A better approach is to destroy the pollutants completely, if possible, or to transform them into harmless substances. Technologies that have been used include high-temperature incineration and various types of chemical decomposition (for example, base-catalyzed dechlorination and UV oxidation). However, these methods have significant disadvantages, principally their technological complexity, high cost and lack of public acceptance. Bioremediation, in contrast, is a promising option for the complete removal and destruction of contaminants.

1.3 Bioremediation of PAH-contaminated soil and groundwater

Bioremediation is the use of living organisms, primarily microorganisms, to degrade or detoxify hazardous wastes into harmless substances such as carbon dioxide, water and cell biomass. Most PAHs are biodegradable under natural conditions (Da Silva et al., 2003; Meysami and Baheri, 2003), and bioremediation for the cleanup of PAH wastes has been extensively studied at both laboratory and commercial scales. It has been implemented at a number of contaminated sites, including the cleanup of the Exxon Valdez oil spill in Prince William Sound, Alaska in 1989, the Mega Borg spill off the Texas coast in 1990 and the Burgan Oil Field, Kuwait in 1994 (Purwaningsih, 2002). Different strategies for PAH bioremediation, such as in situ, ex situ or on-site bioremediation, have been developed in recent years. In situ bioremediation is applied to soil and groundwater at the site, without removing the contaminated material, and relies on providing optimum conditions for microbiological breakdown of the contaminants. Ex situ bioremediation of PAHs, on the other hand, is applied to soil and groundwater that have been removed from the site by excavation (soil) or pumping (water); hazardous contaminants are then converted efficiently into harmless compounds in controlled bioreactors.

1.4 Bioavailability of PAHs in the subsurface

PAH contamination in the environment frequently occurs as contaminants sorbed onto soil particles rather than as a separate phase (NAPL, non-aqueous phase liquid). The biodegradation rate of most PAHs sorbed onto soil is known to be far lower than the rates measured in solution cultures of microorganisms with pure solid pollutants (Alexander and Scow, 1989; Hamaker, 1972). It is generally believed that only the fraction of PAHs dissolved in the soil solution can be metabolized by microorganisms. The amount of contaminant that can be readily taken up and degraded by microorganisms is defined as its bioavailability (Bosma et al., 1997; Maier, 2000). Two phenomena have been suggested to cause the low bioavailability of PAHs in soil (Danielsson, 2000). The first is strong adsorption of the contaminants to soil constituents, which leads to very slow release of contaminants to the aqueous phase. Sorption is often well correlated with soil organic matter content (Means, 1980) and significantly reduces biodegradation (Manilal and Alexander, 1991). The second is slow mass transfer of pollutants, such as pore diffusion within soil aggregates or diffusion in the soil organic matter. The complex set of these physical, chemical and biological processes is illustrated schematically in Figure 1: biodegradation takes place in the soil solution, while diffusion occurs in the narrow pores in and between soil aggregates (Danielsson, 2000). Seemingly contradictory studies in the literature report that the rate and final extent of metabolism of soil-sorbed PAHs may be either lower or higher than those of pure PAHs (Van Loosdrecht et al., 1990). These contrasting results demonstrate that the bioavailability of organic contaminants sorbed onto soil is far from well understood. Besides bioavailability, several other factors influence the rate and extent of PAH biodegradation in soil, including the characteristics of the microbial population, the physical and chemical properties of the PAHs, and environmental factors (temperature, moisture, pH and degree of contamination).

Figure 1: Schematic diagram showing possible rate-limiting processes during bioremediation of hydrophobic organic contaminants in a contaminated soil-water system (not to scale) (Danielsson, 2000).

1.5 Increasing the bioavailability of PAHs in soil

Attempts to improve the biodegradation of PAHs in soil by increasing their bioavailability include the use of surfactants, solvents or solubility enhancers. However, the introduction of a synthetic surfactant may simply add one more pollutant (Wang and Brusseau, 1993). A study conducted by Mulder et al. showed that the introduction of hydroxypropyl-β-cyclodextrin (HPCD), a well-known PAH solubility enhancer, significantly increased the solubilization of PAHs but did not improve their biodegradation rate (Mulder et al., 1998), indicating that further research is required to develop a feasible and efficient remediation method. Enhancing the mass transfer of PAHs from the soil phase to the liquid phase might prove an efficient and environmentally low-risk way of addressing the problem of slow PAH biodegradation in soil.
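To make the rate-limiting role of desorption concrete, the sketch below (Python) simulates a minimal two-compartment system in which PAH mass is released from the sorbed pool by first-order mass transfer and degraded in the aqueous phase by first-order kinetics. The rate constants, time step and units are illustrative assumptions rather than values from the studies cited above; the point is only that when desorption is much slower than biodegradation, overall removal is controlled by the release step.

```python
# Minimal two-compartment sketch of desorption-limited biodegradation.
# All parameter values are illustrative assumptions, not data from the cited studies.
import numpy as np

def simulate(s0=100.0, c0=0.0, k_des=0.01, k_bio=0.5, dt=0.1, t_end=500.0):
    """Explicit-Euler time series of sorbed (S) and aqueous (C) PAH mass.

    s0, c0 : initial sorbed and dissolved mass (here, % of the total)
    k_des  : first-order desorption (mass-transfer) rate constant [1/h]
    k_bio  : first-order biodegradation rate constant in solution [1/h]
    """
    n = int(t_end / dt)
    S, C = np.empty(n + 1), np.empty(n + 1)
    S[0], C[0] = s0, c0
    for i in range(n):
        desorbed = k_des * S[i] * dt   # slow release from soil particles
        degraded = k_bio * C[i] * dt   # microbial degradation in the aqueous phase
        S[i + 1] = S[i] - desorbed
        C[i + 1] = C[i] + desorbed - degraded
    return S, C

S, C = simulate()
print(f"sorbed PAH remaining after 500 h: {S[-1]:.1f}% of initial")
print(f"total PAH remaining: {S[-1] + C[-1]:.1f}% of initial")
```

With k_des much smaller than k_bio, the dissolved pool stays near a quasi-steady state and the overall disappearance rate approaches the desorption rate, mirroring the desorption-limited behaviour described above.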
Abstract:
Nephrotic syndrome (NS) is defined as the concomitant presence of proteinuria greater than 3.5 g/24 h, hypoalbuminemia, hypercholesterolemia and edema. Patients with NS are at higher risk than those with non-nephrotic glomerular disease (NNGD) of developing hypertension, hypernatremia, thromboembolic complications and renal failure. In veterinary medicine, the literature on this subject is very limited, and the correlation between NS and the severity of proteinuria, hypoalbuminemia and the development of thromboembolism is not well established. The aim of this retrospective study was to describe and characterize the clinical and clinicopathological alterations occurring in patients with a urine protein:urine creatinine ratio (UPC) >2, in order to define the clinical status of these patients more precisely and to identify the major complications they may develop. Over a nine-year period, 338 dogs were selected and divided according to a cut-off value of UPC≥3.5. Median values of creatinine, urea, phosphorus, urinary albumin, C-reactive protein (CRP) and fibrinogen were above the upper limit of the reference interval, while median values of serum albumin, hematocrit and antithrombin were below the lower reference limit. Patients with UPC≥3.5 showed significantly lower concentrations of albumin, hematocrit, calcium and Total Iron Binding Capacity (TIBC) than those with UPC<3.5, and significantly higher concentrations of CRP, urea and phosphorus. There was no difference between groups in the concentrations of creatinine, cholesterol, triglycerides, sodium, potassium, chloride and total iron, or in systolic blood pressure. Patients with UPC≥3.5 are likely in a greater "inflammatory state" than those with UPC<3.5, a hypothesis supported by their lower albumin and transferrin concentrations and higher CRP concentration. Patients with UPC≥3.5 do not have higher creatinine concentrations but are at greater risk of anemia.
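The abstract does not state which statistical test was used for the group comparisons, so the sketch below is only a hypothetical illustration of the analysis it describes: simulated dogs are split at the UPC ≥ 3.5 cut-off and a clinicopathological variable (serum albumin) is compared between groups with a non-parametric test.

```python
# Hypothetical sketch of the group comparison described above.
# Data are simulated and the choice of test (Mann-Whitney U) is an assumption;
# the abstract does not report which test the authors used.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
upc = rng.uniform(2.0, 12.0, size=338)                  # simulated UPC values for 338 dogs
albumin = 3.0 - 0.1 * upc + rng.normal(0, 0.3, 338)     # simulated serum albumin (g/dL)

nephrotic = upc >= 3.5                                  # cut-off used in the study
stat, p = mannwhitneyu(albumin[nephrotic], albumin[~nephrotic])
print(f"median albumin, UPC>=3.5: {np.median(albumin[nephrotic]):.2f} g/dL; "
      f"UPC<3.5: {np.median(albumin[~nephrotic]):.2f} g/dL; p = {p:.3g}")
```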
Abstract:
This thesis analyses everyday life in a boys' boarding secondary school in the Upper West Region of Ghana and places this analysis in an anthropological perspective on the state. Inspired by Michel Foucault's reflections on disciplinary mechanisms and "technologies of the self", the everyday disciplining of the students stands at the centre of this work. Discipline is not conceived as restrictive, imposed from above and omnipresent, but as something the students actively help to shape. On the one hand, the students of the Nandom Secondary School are subjected to disciplinary techniques aimed at the control of their bodies: they are standardized, classified, ranked, supervised, examined and punished. On the other hand, the students are not overpowered objects of a total institution; they participate creatively in weaving the web of control and latitude at their school: they play an important role in monitoring the teachers, and they recognize and use spaces of freedom whose limits they negotiate with the teachers. Nor is their sense of self simply dictated to them by the teachers. The image of the good student does suggest a particular view of themselves, but how they adopt, reject or reinterpret this image lies in their own hands.
Abstract:
Background The dose–response relation between physical activity and all-cause mortality is not well defined at present. We conducted a systematic review and meta-analysis to determine the association with all-cause mortality of different domains of physical activity and of defined increases in physical activity and energy expenditure. Methods MEDLINE, Embase and the Cochrane Library were searched up to September 2010 for cohort studies examining all-cause mortality across different domains and levels of physical activity in adult general populations. We estimated combined risk ratios (RRs) associated with defined increments and recommended levels, using random-effects meta-analysis and dose–response meta-regression models. Results Data from 80 studies with 1 338 143 participants (118 121 deaths) were included. Combined RRs comparing highest with lowest activity levels were 0.65 [95% confidence interval (95% CI) 0.60–0.71] for total activity, 0.74 (95% CI 0.70–0.77) for leisure activity, 0.64 (95% CI 0.55–0.75) for activities of daily living and 0.83 (95% CI 0.71–0.97) for occupational activity. RRs per 1-h increment per week were 0.91 (95% CI 0.87–0.94) for vigorous exercise and 0.96 (95% CI 0.93–0.98) for moderate-intensity activities of daily living. RRs corresponding to 150 and 300 min/week of moderate to vigorous activity were 0.86 (95% CI 0.80–0.92) and 0.74 (95% CI 0.65–0.85), respectively. Mortality reductions were more pronounced in women. Conclusion Higher levels of total and domain-specific physical activity were associated with reduced all-cause mortality. Risk reduction per unit of time increase was largest for vigorous exercise; moderate-intensity activities of daily living were beneficial to a lesser extent.
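As a concrete illustration of the pooling step described in the Methods, the sketch below applies DerSimonian-Laird random-effects weighting to a few invented study-level risk ratios on the log scale. The input numbers are made up for illustration; the review itself pooled 80 cohort studies and additionally fitted dose-response meta-regression models, which are not reproduced here.

```python
# DerSimonian-Laird random-effects pooling of study-level risk ratios (RRs).
# The three study estimates below are invented for illustration only.
import numpy as np

rr = np.array([0.70, 0.62, 0.75])          # hypothetical RRs, highest vs lowest activity
ci_upper = np.array([0.85, 0.80, 0.95])    # hypothetical upper 95% CI limits

y = np.log(rr)                             # pool on the log scale
se = (np.log(ci_upper) - y) / 1.96         # back out standard errors from the CIs
w = 1.0 / se**2                            # inverse-variance (fixed-effect) weights

mu_fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - mu_fixed) ** 2)        # Cochran's Q
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (se**2 + tau2)                # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
se_pooled = np.sqrt(1.0 / np.sum(w_re))
lo, hi = np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled)
print(f"pooled RR {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f}), tau^2 = {tau2:.4f}")
```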
Abstract:
INTRODUCTION: Winter sports have evolved from an upper-class activity to a mass industry. Sledging in particular regained popularity at the start of this century, with more and more winter sports resorts offering sledge runs. This study investigated the rates of sledging injuries over the last 13 years and analysed injury patterns specific to certain age groups, enabling us to make suggestions for preventive measures. METHODS: We present a retrospective analysis of prospectively collected data. From 1996/1997 to 2008/2009, all patients with sledging injuries were recorded upon admission to a Level III trauma centre. Injuries were classified into body regions according to the Abbreviated Injury Scale (AIS). The Injury Severity Score (ISS) was calculated. Patients were stratified into 7 age groups. Associations between age and injured body region were tested using the chi-squared test. For each injured body region, the slope of a linear regression of the proportion of affected patients against winter season was calculated, with 95% confidence intervals. RESULTS: 4956 winter sports patients were recorded. 263 patients (5%) sustained sledging injuries. Sledging injury patients had a median age of 22 years (interquartile range [IQR] 14-38 years) and a median ISS of 4 (IQR 1-4). 136 (51.7%) were male. Injuries (AIS≥2) were most frequent to the lower extremities (n=91, 51.7% of all AIS≥2 injuries), followed by the upper extremities (n=48, 27.3%), the head (n=17, 9.7%) and the spine (n=7, 4.0%). AIS≥2 injuries to different body regions varied from season to season, with no significant trends (p>0.19). However, the number of patients admitted with AIS≥2 injuries increased significantly over the seasons analysed (p=0.031), as did the number of patients with any kind of sledging injury (p=0.004). Mild head injuries were most frequent in the youngest age group (1-10 years old). Injuries to the lower extremities were more often seen in the age groups from 21 to 60 years (p<0.001). CONCLUSION: Mild head trauma was mainly found in very young sledgers, and injuries to the lower extremities were more frequent in adults. In accordance with the current literature, we suggest that sledging should be performed in designated, obstacle-free areas that are specially prepared, and that children should always be supervised by adults. The effect of routine use of helmets and other protective devices needs further evaluation, but it seems evident that these should be obligatory on official runs.
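The seasonal trend analysis reported above can be illustrated with a short sketch: a linear regression of a per-season proportion against season, with the slope and its 95% confidence interval. The data below are invented, and the implementation is only an assumption about how such a trend test might be coded.

```python
# Illustrative trend test: linear regression of a per-season injury proportion
# against winter season, with a 95% CI for the slope. Data are invented.
import numpy as np
from scipy import stats

seasons = np.arange(13)                                   # 1996/97 ... 2008/09
rng = np.random.default_rng(1)
proportion = 0.03 + 0.002 * seasons + rng.normal(0, 0.004, 13)

res = stats.linregress(seasons, proportion)
t_crit = stats.t.ppf(0.975, len(seasons) - 2)             # t quantile, n-2 degrees of freedom
ci = (res.slope - t_crit * res.stderr, res.slope + t_crit * res.stderr)
print(f"slope {res.slope:.4f} per season, 95% CI [{ci[0]:.4f}, {ci[1]:.4f}], p = {res.pvalue:.3f}")
```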
Abstract:
This study evaluated the effects of 8 weeks of eccentric endurance training (EET) in male subjects (age range 42-66 years) with coronary artery disease (CAD). EET was compared to concentric endurance training (CET) carried out at the same metabolic exercise intensity, three times per week for half an hour. CET (n=6) was done on a conventional cycle ergometer and EET (n=6) on a custom-built motor-driven ergometer. During the first 5 weeks of the training program the metabolic load was progressively increased to 60% of peak oxygen uptake in both groups. At this metabolic load, the mechanical work rate achieved was 97 (8) W [mean (SE)] for CET and 338 (34) W for EET. Leg muscle mass was determined by dual-energy X-ray absorptiometry, quadriceps strength with an isokinetic dynamometer and muscle fibre composition of the vastus lateralis muscle by morphometry. Leg muscle mass increased significantly in both groups by some 3%. Strength parameters of the knee extensors improved in EET only: significant changes of +11 (4.9)%, +15 (3.2)% and +9 (2.5)% were reached for peak isometric torque and peak concentric torques at 60 degrees s(-1) and 120 degrees s(-1), respectively. Fibre size increased significantly by 19% in CET only. In conclusion, the present investigation showed that EET is feasible in middle-aged CAD patients and has functional advantages over CET by increasing muscle strength. Muscle mass increased similarly in both groups, whereas muscle structural composition was differently affected by the respective training protocols. Potential limitations of this study are the cautiously chosen conditioning protocol and the restricted number of subjects.