127 results for "Minimum quantity of fluid"


Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: To provide an update to the original Surviving Sepsis Campaign clinical management guidelines, "Surviving Sepsis Campaign guidelines for management of severe sepsis and septic shock," published in 2004. DESIGN: Modified Delphi method with a consensus conference of 55 international experts, several subsequent meetings of subgroups and key individuals, teleconferences, and electronic-based discussion among subgroups and among the entire committee. This process was conducted independently of any industry funding. METHODS: We used the GRADE system to guide assessment of quality of evidence from high (A) to very low (D) and to determine the strength of recommendations. A strong recommendation indicates that an intervention's desirable effects clearly outweigh its undesirable effects (risk, burden, cost), or clearly do not. Weak recommendations indicate that the tradeoff between desirable and undesirable effects is less clear. The grade of strong or weak is considered of greater clinical importance than a difference in letter level of quality of evidence. In areas without complete agreement, a formal process of resolution was developed and applied. Recommendations are grouped into those directly targeting severe sepsis, recommendations targeting general care of the critically ill patient that are considered high priority in severe sepsis, and pediatric considerations. 
RESULTS: Key recommendations, listed by category, include: early goal-directed resuscitation of the septic patient during the first 6 hrs after recognition (1C); blood cultures prior to antibiotic therapy (1C); imaging studies performed promptly to confirm potential source of infection (1C); administration of broad-spectrum antibiotic therapy within 1 hr of diagnosis of septic shock (1B) and severe sepsis without septic shock (1D); reassessment of antibiotic therapy with microbiology and clinical data to narrow coverage, when appropriate (1C); a usual 7-10 days of antibiotic therapy guided by clinical response (1D); source control with attention to the balance of risks and benefits of the chosen method (1C); administration of either crystalloid or colloid fluid resuscitation (1B); fluid challenge to restore mean circulating filling pressure (1C); reduction in rate of fluid administration with rising filling pressures and no improvement in tissue perfusion (1D); vasopressor preference for norepinephrine or dopamine to maintain an initial target of mean arterial pressure > or = 65 mm Hg (1C); dobutamine inotropic therapy when cardiac output remains low despite fluid resuscitation and combined inotropic/vasopressor therapy (1C); stress-dose steroid therapy given only in septic shock after blood pressure is identified to be poorly responsive to fluid and vasopressor therapy (2C); recombinant activated protein C in patients with severe sepsis and clinical assessment of high risk for death (2B except 2C for post-operative patients).
In the absence of tissue hypoperfusion, coronary artery disease, or acute hemorrhage, target a hemoglobin of 7-9 g/dL (1B); a low tidal volume (1B) and limitation of inspiratory plateau pressure strategy (1C) for acute lung injury (ALI)/acute respiratory distress syndrome (ARDS); application of at least a minimal amount of positive end-expiratory pressure in acute lung injury (1C); head of bed elevation in mechanically ventilated patients unless contraindicated (1B); avoiding routine use of pulmonary artery catheters in ALI/ARDS (1A); to decrease days of mechanical ventilation and ICU length of stay, a conservative fluid strategy for patients with established ALI/ARDS who are not in shock (1C); protocols for weaning and sedation/analgesia (1B); using either intermittent bolus sedation or continuous infusion sedation with daily interruptions or lightening (1B); avoidance of neuromuscular blockers, if at all possible (1B); institution of glycemic control (1B) targeting a blood glucose < 150 mg/dL after initial stabilization (2C); equivalency of continuous veno-venous hemofiltration or intermittent hemodialysis (2B); prophylaxis for deep vein thrombosis (1A); use of stress ulcer prophylaxis to prevent upper GI bleeding using H2 blockers (1A) or proton pump inhibitors (1B); and consideration of limitation of support where appropriate (1D). Recommendations specific to pediatric severe sepsis include: greater use of physical examination therapeutic end points (2C); dopamine as the first drug of choice for hypotension (2C); steroids only in children with suspected or proven adrenal insufficiency (2C); a recommendation against the use of recombinant activated protein C in children (1B). CONCLUSION: There was strong agreement among a large cohort of international experts regarding many level 1 recommendations for the best current care of patients with severe sepsis.
Evidence-based recommendations regarding the acute management of sepsis and septic shock are the first step toward improved outcomes for this important group of critically ill patients.


We perform direct numerical simulations of drainage by solving the Navier-Stokes equations in the pore space and employing the Volume of Fluid (VOF) method to track the evolution of the fluid-fluid interface. After demonstrating that the method is able to deal with large viscosity contrasts and to model the transition from stable flow to viscous fingering, we focus on the definition of macroscopic capillary pressure. When the fluids are at rest, the difference between inlet and outlet pressures and the difference between the intrinsic phase-average pressures coincide with the capillary pressure. However, when the fluids are in motion, these quantities are dominated by viscous forces. In this case, only a definition based on the variation of the interfacial energy provides an accurate measure of the macroscopic capillary pressure and allows separating the viscous from the capillary pressure components.
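The energy-based definition mentioned above can be sketched with a standard formulation from the porous-media literature; this is an illustrative reconstruction, not the paper's exact equations, and the symbols (porosity φ, domain volume V, interfacial tension γ, fluid-fluid interfacial area A_nw, wetting saturation S_w) are assumptions:

```latex
% At rest, intrinsic phase-averaged pressures recover the capillary pressure:
%   p_c = <p_n>^n - <p_w>^w
% Under flow, an energy-based definition separates the capillary from the
% viscous contribution by tracking how interfacial energy changes with
% wetting saturation:
\[
  p_c \,=\, -\frac{1}{\phi V}\,\frac{\mathrm{d}E_s}{\mathrm{d}S_w}
      \,=\, -\frac{\gamma}{\phi V}\,\frac{\mathrm{d}A_{nw}}{\mathrm{d}S_w},
  \qquad E_s = \gamma\,A_{nw}.
\]
```

Because the interfacial area is available directly from the VOF interface reconstruction, this definition remains meaningful even when viscous pressure drops dominate the inlet-outlet difference.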


Colony social organization in the fire ant Solenopsis invicta appears to be under strong genetic control. In the invasive USA range, polygyny (multiple queens per colony) is marked by the presence of the Gp-9(b) allele in most of a colony's workers, whereas monogyny (single queen per colony) is associated with the exclusive occurrence of the Gp-9(B) allele. Ross and Keller, Behav Ecol Sociobiol 51:287-295 (2002) experimentally manipulated social organization by cross-fostering queens into colonies of the alternate form, thereby changing adult worker Gp-9 genotype frequencies over time. Although these authors showed that social behavior switched predictably when the frequency of b-bearing adult workers crossed a threshold of 5-10%, the possibility that queen effects caused the conversions could not be excluded entirely. We addressed this problem by fostering polygyne brood into queenright monogyne colonies. All such treatment colonies switched social organization to become polygyne, coincident with their proportions of b-bearing workers exceeding 12%. Our results support the conclusion that polygyny in S. invicta is induced by a minimum frequency of colony workers carrying the b allele, and further confirm that its expression is independent of queen genotype or history, worker genotypes at genes not linked to Gp-9, and colony genetic diversity.


BACKGROUND: In alcohol withdrawal, fixed doses of benzodiazepine are generally recommended as a first-line pharmacologic approach. This study determines the benefits of an individualized treatment regimen on the quantity of benzodiazepine administered and the duration of its use during alcohol withdrawal treatment. METHODS: We conducted a prospective, randomized, double-blind, controlled trial including 117 consecutive patients with alcohol dependence, according to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, entering an alcohol treatment program at both the Lausanne and Geneva university hospitals, Switzerland. Patients were randomized into 2 groups: (1) 56 were treated with oxazepam in response to the development of signs of alcohol withdrawal (symptom-triggered); and (2) 61 were treated with oxazepam every 6 hours with additional doses as needed (fixed-schedule). The administration of oxazepam in group 1 and additional oxazepam in group 2 was determined using a standardized measure of alcohol withdrawal. The main outcome measures were the total amount and duration of treatment with oxazepam, the incidence of complications, and the comfort level. RESULTS: A total of 22 patients (39%) in the symptom-triggered group were treated with oxazepam vs 100% in the fixed-schedule group (P<.001). The mean oxazepam dose administered in the symptom-triggered group was 37.5 mg compared with 231.4 mg in the fixed-schedule group (P<.001). The mean duration of oxazepam treatment was 20.0 hours in the symptom-triggered group vs 62.7 hours in the fixed-schedule group (P<.001). Withdrawal complications were limited to a single episode of seizures in the symptom-triggered group. There were no differences in the measures of comfort between the 2 groups. CONCLUSIONS: Symptom-triggered benzodiazepine treatment for alcohol withdrawal is safe, comfortable, and associated with a decrease in the quantity of medication and duration of treatment.
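The two dosing strategies compared in this trial can be sketched in code. This is an illustrative toy model, not the study protocol: the withdrawal-score threshold, dose sizes, and the example score series are invented.

```python
# Toy comparison of symptom-triggered vs fixed-schedule benzodiazepine dosing.
# Threshold, doses, and scores are illustrative assumptions, not trial values.

def symptom_triggered(scores, threshold=8, dose_mg=15):
    """Administer a dose only when the withdrawal score reaches the threshold."""
    return sum(dose_mg for s in scores if s >= threshold)

def fixed_schedule(scores, base_dose_mg=30, threshold=8, extra_dose_mg=15):
    """Administer a dose at every assessment, plus an extra dose if symptomatic."""
    total = 0
    for s in scores:
        total += base_dose_mg
        if s >= threshold:
            total += extra_dose_mg
    return total

scores = [3, 9, 5, 2, 0, 0]  # withdrawal scores at successive assessments
print(symptom_triggered(scores))  # 15
print(fixed_schedule(scores))     # 195
```

Even in this toy example the fixed schedule accumulates far more drug for the same symptom course, which is the qualitative pattern the trial quantified (37.5 mg vs 231.4 mg mean oxazepam).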


Cardiovascular magnetic resonance (CMR) has become an established imaging modality which provides often unique information on a wide range of cardiovascular diseases. The European Society of Cardiology (ESC) training curriculum reflects the emerging role of CMR by recommending that all trainees obtain a minimum level of training in CMR and by defining criteria for subspecialty training in CMR.1 The wider use of CMR requires the definition of standards for data acquisition, reporting, and training in CMR across Europe. At the same time, training and accreditation in all cardiac imaging methods should be harmonized and integrated to promote the training of cardiac imaging specialists. The recommendations presented in this document are intended to inform the discussion about standards for accreditation and certification in CMR in Europe and the discussion on integrated imaging training. At present, the recommendations in this position statement are not to be interpreted as guidelines. Until such guidelines are available and nationally ratified, physicians will be able to train and practice CMR according to current national regulations.


Arabidopsis thaliana (L.) Heynh. expressing the Crepis palaestina (L.) linoleic acid delta12-epoxygenase in its developing seeds typically accumulates low levels of vernolic acid (12,13-epoxy-octadec-cis-9-enoic acid) in comparison to levels found in seeds of the native C. palaestina. In order to determine some of the factors limiting the accumulation of this unusual fatty acid, we have examined the effects of increasing the availability of linoleic acid (9cis, 12cis-octadecadienoic acid), the substrate of the delta12-epoxygenase, on the quantity of epoxy fatty acids accumulating in transgenic A. thaliana. The addition of linoleic acid to liquid cultures of transgenic plants expressing the delta12-epoxygenase under the control of the cauliflower mosaic virus 35S promoter increased the amount of vernolic acid in vegetative tissues by 2.8-fold. In contrast, the addition to these cultures of linoelaidic acid (9trans, 12trans-octadecadienoic acid), which is not a substrate of the delta12-epoxygenase, resulted in a slight decrease in vernolic acid accumulation. Expression of the delta12-epoxygenase under the control of the napin promoter in the A. thaliana triple mutant fad3/fad7-1/fad9, which is deficient in the synthesis of tri-unsaturated fatty acids and has a 60% higher level of linoleic acid than the wild type, was found to increase the average vernolic acid content of the seeds by 55% compared to the expression of the delta12-epoxygenase in a wild-type background. Together, these results reveal that the availability of linoleic acid is an important factor affecting the synthesis of epoxy fatty acid in transgenic plants.


Daptomycin is a promising candidate for local treatment of bone infection due to its activity against multi-resistant staphylococci. We investigated the activity of antibiotic-loaded PMMA against Staphylococcus epidermidis biofilms using an ultra-sensitive bacterial heat-detection method (microcalorimetry). PMMA cylinders loaded with daptomycin (alone or in combination with gentamicin or PEG600), vancomycin, or gentamicin were incubated with S. epidermidis-RP62A in tryptic soy broth (TSB) for 72 h. Cylinders were thereafter washed and transferred into microcalorimetry ampoules pre-filled with TSB. Bacterial heat production, proportional to the quantity of biofilm on the PMMA, was measured by isothermal microcalorimetry at 37°C. Heat detection time was defined as the time to reach 20 μW. Experiments were performed in duplicate. The heat detection time was 5.7-7.0 h for PMMA without antibiotics. When loaded with 5% daptomycin, vancomycin or gentamicin, detection times were 5.6-16.4 h, 16.8-35.7 h and 4.7-6.2 h, respectively. No heat was detected when 5% gentamicin or 0.5% PEG600 was added to the daptomycin-loaded PMMA. The study showed that vancomycin was superior to daptomycin and gentamicin in inhibiting staphylococcal adherence in vitro. However, PMMA loaded with daptomycin combined with gentamicin or PEG600 completely inhibited S. epidermidis biofilm formation. PMMA loaded with these combinations may represent an effective strategy for local treatment in the presence of multi-resistant staphylococci.


Accreted terranes, comprising a wide variety of Late Jurassic and Early Cretaceous igneous and sedimentary rocks, are an important feature of Cuban geology. Their characterization is helpful for understanding Caribbean paleogeography. The Guaniguanico terrane (western Cuba) is formed by Upper Jurassic platform sediments intruded by microgranular dolerite dykes. The geochemical characteristics of the dolerite whole-rock samples and their minerals (augitic clinopyroxene, labradorite and andesine) are consistent with a tholeiitic affinity. Major and trace element concentrations as well as Nd, Sr and Pb isotopes show that these rocks also have a continental affinity. Sample chemistry indicates that these lavas are similar to a low Ti-P2O5 (LTi) variety of continental flood basalts (CFB), comparable to the dolerites of Ferrar (Tasmania). They derived from mixing of a lithospheric mantle source and an asthenospheric component similar to E-MORB, with minor markers of crustal contamination and sediment assimilation. However, the small quantity of Cuban magmatic rocks, as in Tasmania, Antarctica and Siberia, differs from other volumetrically important CFB occurrences such as Paraná and Deccan. These dolerites are dated at 165-150 Ma and were emplaced during the separation of the Yucatan block from South America. They could in fact be part of the Yucatan-South America margin through which the intrusive system was emplaced and which was later accreted to the Cretaceous arc of central Cuba and to the Palaeogene arc of eastern Cuba. These samples could therefore reflect the pre-rift stage between North and South America and the opening of the Gulf of Mexico.


PURPOSE: Characterization of persistent diffuse subretinal fluid using optical coherence tomography (OCT) after successful encircling buckle surgery for inferior macula-off retinal detachment in young patients. METHODS: Institutional retrospective review of six young patients (mean age 31 +/- 6 years; five female, one male) with spontaneous inferior rhegmatogenous macula-off retinal detachment. All patients were treated with encircling buckle surgery and five out of six underwent additional external drainage of subretinal fluid. Mean follow-up was 37 +/- 25 months (range 17-75 months) and included complete ophthalmic and OCT examination. RESULTS: At 6 months, 100% of patients showed persistence of subretinal fluid on OCT. Four patients had diffuse fluid accumulation, whereas two patients showed a 'bleb-like' accumulation of fluid. This fluid was present independent of whether or not patients had been treated with external fluid drainage. Subretinal fluid only started to disappear on OCT between 6 and more than 12 months after surgery. CONCLUSION: Young patients with inferior macula-off retinal detachments and a marginally liquefied vitreous may show persisting postoperative subclinical fluid under the macula for longer periods of time than described previously.


The circadian timing system is critically involved in the maintenance of fluid and electrolyte balance and BP control. However, the role of peripheral circadian clocks in these homeostatic mechanisms remains unknown. We addressed this question in a mouse model carrying a conditional allele of the circadian clock gene Bmal1 and expressing Cre recombinase under the endogenous Renin promoter (Bmal1(lox/lox)/Ren1(d)Cre mice). Analysis of Bmal1(lox/lox)/Ren1(d)Cre mice showed that the floxed Bmal1 allele was excised in the kidney. In the kidney, BMAL1 protein expression was absent in the renin-secreting granular cells of the juxtaglomerular apparatus and the collecting duct. A partial reduction of BMAL1 expression was observed in the medullary thick ascending limb. Functional analyses showed that Bmal1(lox/lox)/Ren1(d)Cre mice exhibited multiple abnormalities, including increased urine volume, changes in the circadian rhythm of urinary sodium excretion, increased GFR, and significantly reduced plasma aldosterone levels. These changes were accompanied by a reduction in BP. These results show that local renal circadian clocks control body fluid and BP homeostasis.


The coverage and volume of geo-referenced datasets are extensive and incessantly growing. The systematic capture of geo-referenced information generates large volumes of spatio-temporal data to be analyzed. Clustering and visualization play a key role in the exploratory data analysis and the extraction of knowledge embedded in these data. However, the special characteristics of these data pose new challenges for visualization and clustering: for instance, their complex structures, large number of samples, variables involved in a temporal context, high dimensionality and large variability in cluster shapes.

The central aim of my thesis is to propose new algorithms and methodologies for clustering and visualization, in order to assist the knowledge extraction from spatio-temporal geo-referenced data and thus improve decision-making processes. I present two original algorithms, one for clustering, the Fuzzy Growing Hierarchical Self-Organizing Networks (FGHSON), and one for exploratory visual data analysis, the Tree-structured Self-Organizing Maps Component Planes. In addition, I present methodologies that, combined with the FGHSON and the Tree-structured SOM Component Planes, allow space and time to be integrated seamlessly and simultaneously in order to extract knowledge embedded in a temporal context.

The originality of the FGHSON lies in its capability to reflect the underlying structure of a dataset in a hierarchical, fuzzy way. A hierarchical fuzzy representation of clusters is crucial when data include complex structures with large variability of cluster shapes, variances, densities and number of clusters. The most important characteristics of the FGHSON are: (1) it does not require an a priori setup of the number of clusters; (2) the algorithm executes several self-organizing processes in parallel, so when dealing with large datasets the processes can be distributed, reducing the computational cost; (3) only three parameters are necessary to set up the algorithm.

In the case of the Tree-structured SOM Component Planes, the novelty lies in the ability to create a structure that allows the visual exploratory analysis of large high-dimensional datasets. This algorithm creates a hierarchical structure of Self-Organizing Map Component Planes, arranging similar variables' projections in the same branches of the tree. Hence, similarities in the variables' behavior can be easily detected (e.g. local correlations, maximal and minimal values and outliers). Both the FGHSON and the Tree-structured SOM Component Planes were applied to several agroecological problems, proving to be very efficient in the exploratory analysis and clustering of spatio-temporal datasets.

In this thesis I also tested three soft competitive learning algorithms: two well-known unsupervised soft competitive algorithms, namely the Self-Organizing Maps (SOMs) and the Growing Hierarchical Self-Organizing Maps (GHSOMs), and our original contribution, the FGHSON. Although the algorithms presented here have been used in several areas, to my knowledge there is no work applying and comparing the performance of these techniques on spatio-temporal geospatial data, as is presented in this thesis. I propose original methodologies to explore spatio-temporal geo-referenced datasets through time. Our approach uses time windows to capture temporal similarities and variations with the FGHSON clustering algorithm. The developed methodologies are used in two case studies: in the first, the objective was to find similar agroecozones through time; in the second, to find similar environmental patterns shifted in time. Several results presented in this thesis have led to new contributions to agroecological knowledge, for instance in sugar cane and blackberry production. Finally, in the framework of this thesis we developed several software tools: (1) a Matlab toolbox that implements the FGHSON algorithm, and (2) a program called BIS (Bio-inspired Identification of Similar agroecozones), an interactive graphical-user-interface tool which integrates the FGHSON algorithm with Google Earth in order to show zones with similar agroecological characteristics.
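The branching idea behind the Tree-structured SOM Component Planes can be loosely illustrated in a few lines of code; this is not the thesis implementation, only a sketch of the principle that variables whose component planes behave similarly end up in the same branch. The example planes, the similarity measure (Pearson correlation), and the threshold are all invented:

```python
# Sketch: group SOM component planes (here flattened to lists of codebook
# values) so that strongly correlated variables share a branch.
# Planes, measure, and threshold are illustrative assumptions.
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def group_planes(planes, threshold=0.9):
    """Greedily place each plane in the first branch it correlates with."""
    branches = []
    for name, plane in planes.items():
        for branch in branches:
            if abs(pearson(plane, planes[branch[0]])) >= threshold:
                branch.append(name)
                break
        else:
            branches.append([name])
    return branches

planes = {
    "rainfall":      [1.0, 2.0, 3.0, 4.0],
    "soil_moisture": [1.1, 2.1, 2.9, 4.2],  # tracks rainfall -> same branch
    "elevation":     [4.0, 1.0, 3.0, 2.0],  # unrelated -> own branch
}
print(group_planes(planes))  # [['rainfall', 'soil_moisture'], ['elevation']]
```

A real tree-structured arrangement would recurse on each branch with progressively finer maps, but the grouping step above is the core of how "similar variables' projections" land in the same subtree.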


It has been known for some time that different arbuscular mycorrhizal fungal (AMF) taxa confer differences in plant growth. Although genetic variation within AMF species has been given less attention, it could potentially be an ecologically important source of variation. Ongoing studies on variability in AMF genes within Glomus intraradices indicate that at least for some genes, such as the BiP gene, sequence variability can be high, even in coding regions. This suggests that genetic variation within an AMF may not be selectively neutral. This clearly needs to be investigated in more detail for other coding regions of AMF genomes. Similarly, studies on AMF population genetics indicate high genetic variation in AMF populations, and a considerable amount of variation seen in phenotypes in the population can be attributed to genetic differences among the fungi. The existence of high within-species genetic variation could have important consequences for how investigations on AMF gene expression and function are conducted. Furthermore, studies of within-species genetic variability and how it affects variation in plant growth will help to identify at what level of precision ecological studies should be conducted to identify AMF in plant roots in the field. A population genetic approach to studying AMF genetic variability can also be useful for inoculum development. By knowing the amount of genetic variability in an AMF population, the maximum and minimum numbers of spores that will contain a given amount of genetic diversity can be estimated. This could be particularly useful for developing inoculum with high adaptability to different environments.
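The sampling idea in the final point, estimating how many spores are needed for an inoculum to capture a given amount of genetic diversity, can be sketched with elementary probability. The allele frequencies below are invented for illustration; the calculation (inclusion-exclusion over missed alleles, assuming independent sampling) is standard, not the authors' method:

```python
# Sketch: smallest spore sample size n such that all alleles at a locus are
# captured with >= 95% probability. Frequencies are illustrative assumptions.
from itertools import combinations

def prob_all_alleles(freqs, n):
    """P(every allele appears in n spores), by inclusion-exclusion on the
    subsets of alleles that could be missed entirely."""
    total = 1.0
    for k in range(1, len(freqs) + 1):
        for subset in combinations(freqs, k):
            total += (-1) ** k * (1 - sum(subset)) ** n
    return total

freqs = [0.5, 0.3, 0.2]  # assumed allele frequencies in the population
n = 1
while prob_all_alleles(freqs, n) < 0.95:
    n += 1
print(n)  # smallest sample size reaching 95% capture probability
```

Rarer alleles dominate the answer: halving the rarest frequency roughly doubles the required sample size, which is why population-level frequency estimates matter for inoculum design.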


Wolves in Italy declined strongly in the past and have been confined south of the Alps since the turn of the last century, reduced in the 1970s to approximately 100 individuals surviving in two fragmented subpopulations in the central-southern Apennines. The Italian wolves are presently expanding in the Apennines, and started to recolonize the western Alps in Italy, France and Switzerland about 16 years ago. In this study, we used a population genetic approach to elucidate some aspects of the wolf recolonization process. DNA extracted from 3068 tissue and scat samples collected in the Apennines (the source populations) and in the Alps (the colony) was genotyped at 12 microsatellite loci, aiming to assess (i) the strength of the bottleneck and founder effects during the onset of colonization; (ii) the rates of gene flow between source and colony; and (iii) the minimum number of colonizers that are needed to explain the genetic variability observed in the colony. We identified a total of 435 distinct wolf genotypes, which showed that wolves in the Alps: (i) have significantly lower genetic diversity (heterozygosity, allelic richness, number of private alleles) than wolves in the Apennines; (ii) are genetically distinct using pairwise F(ST) values, population assignment test and Bayesian clustering; (iii) are not in genetic equilibrium (significant bottleneck test). Spatial autocorrelations are significant among samples separated by up to c. 230 km, roughly corresponding to the apparent gap in permanent wolf presence between the Alps and north Apennines. The estimated number of first-generation migrants indicates that migration has been unidirectional and male-biased, from the Apennines to the Alps, and that wolves in southern Italy did not contribute to the Alpine population.
These results suggest that: (i) the Alps were colonized by a few long-range migrating wolves originating in the north Apennine subpopulation; (ii) during the colonization process there has been a moderate bottleneck; and (iii) gene flow between sources and colonies was moderate (corresponding to 1.25-2.50 wolves per generation), despite high potential for dispersal. Bottleneck simulations showed that a total of c. 8-16 effective founders are needed to explain the genetic diversity observed in the Alps. Levels of genetic diversity in the expanding Alpine wolf population, and the permanence of genetic structuring, will depend on the future rates of gene flow among distinct wolf subpopulation fragments.
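The founder-number reasoning can be illustrated with the textbook expected-heterozygosity recursion, H_t = H_0 (1 - 1/(2Ne))^t: the fewer the effective founders, the faster heterozygosity erodes. The formula is standard population genetics; the H_0, Ne, and t values below are invented, not the study's simulation settings:

```python
# Sketch of heterozygosity retained after a founder bottleneck.
# H_t = H_0 * (1 - 1/(2*Ne))**t is textbook; the numbers are assumptions.

def retained_heterozygosity(h0, ne, t):
    """Expected heterozygosity after t generations at effective size ne."""
    return h0 * (1 - 1 / (2 * ne)) ** t

h0 = 0.60  # assumed heterozygosity in the source (Apennine) population
t = 5      # assumed generations since colonization
for ne in (4, 8, 16):
    # larger founding Ne -> more of the source diversity is retained
    print(ne, round(retained_heterozygosity(h0, ne, t), 3))
```

Comparing the retained values against the diversity actually observed in the colony is, in spirit, how a simulation narrows the plausible founder range (here, the study's c. 8-16 effective founders).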


The proportion of the population living in or around cities is greater than ever. Urban sprawl and car dependence have taken over from the pedestrian-friendly compact city. Environmental problems, like air pollution, land waste or noise, and health problems are the result of this still-continuing process. Urban planners have to find solutions to these complex problems while at the same time ensuring the economic performance of the city and its surroundings. Meanwhile, an increasing quantity of socio-economic and environmental data is acquired. In order to get a better understanding of the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist and are still under development, and they can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help in communicating the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is shown using two examples at different scales. The problem of spatio-temporal modelling and data representation is treated and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic due to commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming the geographic space into a feature space, and the distance circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are exposed.
A new approach to defining urban areas at different scales is developed, and the link with percolation theory established. Fractal statistics, especially the lacunarity measure, and scale laws are used for characterising urban clusters. In a last section, the population evolution is modelled using a model close to the well-established gravity model. The work covers quite a wide range of methods useful in urban geography. These methods should be developed further and, at the same time, find their way into the daily work and decision processes of urban planners.
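The lacunarity measure mentioned for characterising urban clusters is commonly computed with the gliding-box algorithm: slide a box over a binary map, record the "mass" (built-up cells) in each position, and take the ratio of the second moment to the squared first moment of those masses. The small grid and box size below are invented for illustration; this is a generic sketch, not the thesis code:

```python
# Gliding-box lacunarity on a binary grid (1 = built-up cell).
# Grid and box size are illustrative assumptions.

def lacunarity(grid, box):
    """Second moment of box masses divided by the squared first moment;
    values near 1 indicate a homogeneous pattern, larger values gappier ones."""
    rows, cols = len(grid), len(grid[0])
    masses = []
    for i in range(rows - box + 1):
        for j in range(cols - box + 1):
            masses.append(sum(grid[i + a][j + b]
                              for a in range(box) for b in range(box)))
    n = len(masses)
    m1 = sum(masses) / n
    m2 = sum(m * m for m in masses) / n
    return m2 / (m1 * m1)

grid = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 1],
]
print(lacunarity(grid, 2))  # clumped pattern -> lacunarity well above 1
```

Repeating the computation over a range of box sizes gives the lacunarity curve used to compare the texture of urban clusters across scales.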

Relevância:

100.00%

Publicador:

Resumo:

Abstract: The occupational health risk involved in handling nanoparticles is the probability that a worker will experience an adverse health effect: this is calculated as a function of the worker's exposure relative to the potential biological hazard of the material. Addressing the risks of nanoparticles therefore requires knowledge of occupational exposure and of the release of nanoparticles into the environment, as well as toxicological data. However, information on exposure is currently not collected systematically, so this risk assessment lacks quantitative data. This thesis aimed, first, at creating the fundamental data necessary for a quantitative assessment and, second, at evaluating methods to measure occupational nanoparticle exposure. The first goal was to determine what is being used, and where, in Swiss industries. This was followed by an evaluation of the adequacy of existing measurement methods for assessing workplace nanoparticle exposure to complex size distributions and concentration gradients. The study was conceived as a series of methodological evaluations aimed at better understanding nanoparticle measurement devices and methods. It focused on inhalation exposure to airborne particles, as respiration is considered the most important entry pathway for nanoparticles into the body in terms of risk. The targeted survey (pilot study) was conducted as a feasibility study for a later nationwide survey on the handling of nanoparticles and the application of specific protective measures in industry. The study consisted of targeted phone interviews with health and safety officers of Swiss companies that were believed to use or produce nanoparticles. This was followed by a representative survey on the level of nanoparticle usage in Switzerland, designed on the basis of the results of the pilot study.
The study was conducted among a representative selection of clients of the Swiss National Accident Insurance Fund (SUVA), covering about 85% of Swiss production companies. The third part of this thesis focused on the methods to measure nanoparticles. Several pre-studies were conducted on the limits of commonly used measurement devices in the presence of nanoparticle agglomerates. This focus was chosen because several discussions with users and producers of the measurement devices raised questions about their accuracy in measuring nanoparticle agglomerates and because, at the same time, the two survey studies revealed that such powders are frequently used in industry. The first preparatory experiment focused on the accuracy of the scanning mobility particle sizer (SMPS), which showed an improbable size distribution when measuring powders of nanoparticle agglomerates. Furthermore, the thesis includes a series of smaller experiments that took a closer look at problems encountered with other measurement devices in the presence of nanoparticle agglomerates: condensation particle counters (CPC), the portable aerosol spectrometer (PAS), a device to estimate the aerodynamic diameter, as well as diffusion size classifiers. Some initial feasibility tests of the efficiency of filter-based sampling and subsequent counting of carbon nanotubes (CNT) were conducted last. The pilot study provided a detailed picture of the types and amounts of nanoparticles used and of the knowledge of the health and safety experts in the companies. Considerable maximal quantities (> 1'000 kg/year per company) of Ag, Al-Ox, Fe-Ox, SiO2, TiO2, and ZnO (mainly first-generation particles) were declared by the contacted Swiss companies. The median quantity of handled nanoparticles, however, was 100 kg/year. The representative survey was conducted by contacting by post a representative selection of 1'626 clients of the Swiss National Accident Insurance Fund (SUVA).
It allowed estimation of the number of companies and workers dealing with nanoparticles in Switzerland. The extrapolation from the surveyed companies to all companies of the Swiss production sector suggested that 1'309 workers (95% confidence interval 1'073 to 1'545) of the Swiss production sector are potentially exposed to nanoparticles in 586 companies (145 to 1'027). These numbers correspond to 0.08% (0.06% to 0.09%) of all workers and to 0.6% (0.2% to 1.1%) of companies in the Swiss production sector. A few well-known methods exist to measure airborne concentrations of sub-micrometre particles. However, it was unclear how well the different instruments perform in the presence of the often quite large agglomerates of nanostructured materials. The evaluation of devices and methods focused on nanoparticle agglomerate powders. It allowed the identification of the following potential sources of inaccurate measurements at workplaces with considerably high concentrations of airborne agglomerates:
- A standard SMPS showed bimodal particle size distributions when measuring large nanoparticle agglomerates.
- Differences in the range of a factor of a thousand were shown between diffusion size classifiers and CPC/SMPS.
- The comparison between CPC/SMPS and the portable aerosol spectrometer (PAS) was much better, but depending on the concentration, size, or type of the powders measured, the differences could still amount to an order of magnitude.
- Specific difficulties and uncertainties in the assessment of workplaces were identified: background particles can interact with particles created by a process, which makes the handling of the background concentration difficult.
- Electric motors produce high numbers of nanoparticles and confound the measurement of process-related exposure.
Conclusion: The surveys showed that nanoparticle applications exist in many industrial sectors in Switzerland and that some companies already use high quantities of them.
The representative survey demonstrated a low prevalence of nanoparticle usage in most branches of Swiss industry and led to the conclusion that the introduction of applications using nanoparticles (especially outside industrial chemistry) is only beginning. Even though the number of potentially exposed workers was reportedly rather small, it nevertheless underscores the need for exposure assessments. Understanding exposure and how to measure it correctly is very important because the potential health effects of nanomaterials are not yet fully understood. The evaluation showed that many devices and methods for measuring nanoparticles need to be validated for nanoparticle agglomerates before large exposure assessment studies can begin.

Zusammenfassung: The occupational health risk of nanoparticles at the workplace is the probability that a worker will suffer a possible adverse health effect when exposed to the substance; it is usually calculated as the product of hazard and exposure. A thorough assessment of the possible risks of nanomaterials therefore requires information on the release of such materials into the environment on the one hand, and on the exposure of workers on the other. Much of this information is still not collected systematically and is therefore missing from risk analyses. The aim of this thesis was to create the foundations for a quantitative estimation of occupational exposure to nanoparticles and to evaluate the methods needed to measure such exposure.
The study was to investigate the extent to which nanoparticles are already used in Swiss industry, how many workers potentially come into contact with them, and whether measurement technology is already adequate for the necessary workplace exposure measurements. The study focused on exposure to airborne particles, because respiration is considered the main entry route for particles into the body. The thesis is built on three phases: a qualitative survey (pilot study), a representative Swiss survey, and several technical studies serving the specific understanding of the possibilities and limits of individual measurement devices and techniques. The qualitative telephone survey was conducted as a preliminary study for a national, representative survey of Swiss industry. It aimed at information on the occurrence of nanoparticles and the protective measures applied. The study consisted of targeted telephone interviews with occupational health and safety specialists of Swiss companies. The companies were selected on the basis of publicly available information indicating that they handle nanoparticles. The second part of the thesis was the representative study to evaluate the prevalence of nanoparticle applications in Swiss industry. The study built on information from the pilot study and was carried out with a representative selection of companies insured by the Swiss National Accident Insurance Fund (SUVA), thereby covering the majority of Swiss companies in the industrial sector. The third part of the thesis focused on the methodology for measuring nanoparticles. Several preliminary studies were conducted to probe the limits of commonly used nanoparticle measurement devices when they have to measure larger quantities of nanoparticle agglomerates.
This focus was chosen for two reasons: because several discussions with users and also with the producer of the measurement devices suggested a weakness there, raising doubts about the accuracy of the devices, and because the two survey studies showed that such nanoparticle agglomerates occur frequently. A first preliminary study addressed the accuracy of the Scanning Mobility Particle Sizer (SMPS). In the presence of nanoparticle agglomerates, this device reported an implausible bimodal particle size distribution. A series of short experiments followed, which concentrated on other measurement devices and their problems in measuring nanoparticle agglomerates. The Condensation Particle Counter (CPC), the portable aerosol spectrometer (PAS), a device for estimating the aerodynamic diameter of particles, and the diffusion size classifier were tested. Finally, some first feasibility tests were conducted to determine the efficiency of filter-based measurement of airborne carbon nanotubes (CNT). The pilot study provided a detailed picture of the types and quantities of nanoparticles used in Swiss companies and showed the state of knowledge of the interviewed health and safety specialists. The following types of nanoparticles were declared by the contacted companies as maximal quantities (> 1'000 kg per year per company): Ag, Al-Ox, Fe-Ox, SiO2, TiO2, and ZnO (mainly first-generation nanoparticles). The quantities of nanoparticles used varied strongly, with a median of 100 kg per year. In the quantitative questionnaire study, 1'626 companies were contacted by mail, all of them clients of the Swiss National Accident Insurance Fund (SUVA). The results of the survey allowed an estimation of the number of companies and workers using nanoparticles in Switzerland.
The extrapolation to the Swiss industrial sector yielded the following picture: in 586 companies (95% confidence interval: 145 to 1'027 companies), 1'309 workers are potentially exposed to nanoparticles (95% CI: 1'073 to 1'545). These figures correspond to 0.6% of Swiss companies (95% CI: 0.2% to 1.1%) and 0.08% of the workforce (95% CI: 0.06% to 0.09%). A few well-established technologies exist for measuring airborne concentrations of sub-micrometre particles. It is doubtful, however, to what extent these technologies can also be used to measure engineered nanoparticles. For this reason, the preparatory studies for the workplace assessments focused on the measurement of powders containing nanoparticle agglomerates. They allowed the identification of the following possible sources of inaccurate measurements at workplaces with elevated airborne concentrations of nanoparticle agglomerates:
- A standard SMPS showed an implausible bimodal particle size distribution when measuring larger nanoparticle agglomerates.
- Large differences, in the range of a factor of a thousand, were found between a diffusion size classifier and several CPCs (and the SMPS, respectively).
- The differences between CPC/SMPS and the PAS were smaller, but depending on the size or type of the powder measured, they could still amount to an order of magnitude.
- Specific difficulties and uncertainties in workplace measurements were identified: background particles can interact with particles released during a work process; such interactions make it difficult to correctly account for the background particle concentration in the measurement data.
- Electric motors produce large quantities of nanoparticles and can thus confound the measurement of process-related exposure.
Conclusion: The surveys showed that nanoparticles are already a reality in Swiss industry and that some companies already use large quantities of them. The representative survey moderated this finding, however, by showing that the number of such companies across Swiss industry as a whole is relatively small. In most branches (especially outside the chemical industry), few or no applications were found, which suggests that the introduction of this new technology is only at the beginning of its development. Even though the number of potentially exposed workers is still relatively small, the study nevertheless underscores the need for exposure measurements at these workplaces. Knowledge of the exposure, and of how to measure such exposure correctly, is very important, especially because the possible health effects are not yet fully understood. The evaluation of several devices and methods showed, however, that there is still ground to make up here: before larger measurement studies can be conducted, the devices and methods must be validated for use with nanoparticle agglomerates.
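The survey extrapolation reported above (586 companies and 1'309 workers, each with a 95% confidence interval) follows the usual pattern of scaling a sampled proportion up to a population total. A minimal sketch, assuming a simple random sample and a normal approximation to the binomial (the function name and example numbers are illustrative; the thesis' actual estimator reflects its sampling design):

```python
import math

def extrapolate(positives: int, sample_size: int, population: int,
                z: float = 1.96) -> tuple[float, float, float]:
    """Scale a surveyed count up to a population, with a normal-approximation
    95% confidence interval on the underlying proportion.

    Returns (point estimate, lower bound, upper bound) as population counts.
    Hypothetical illustration only: assumes simple random sampling.
    """
    p = positives / sample_size
    se = math.sqrt(p * (1 - p) / sample_size)          # standard error of p
    lo = max(0.0, p - z * se)                          # clip at zero
    hi = p + z * se
    return p * population, lo * population, hi * population
```

For example, if 5 of 1'626 surveyed companies reported nanoparticle use, the function scales that proportion to the full production sector and brackets it with an interval that widens as the sample share shrinks.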