946 results for Minimal quantity of lubricant (MQL)
Abstract:
Arabidopsis thaliana (L.) Heynh. expressing the Crepis palaestina (L.) linoleic acid delta12-epoxygenase in its developing seeds typically accumulates low levels of vernolic acid (12,13-epoxy-octadec-cis-9-enoic acid) compared with the levels found in seeds of native C. palaestina. To determine some of the factors limiting the accumulation of this unusual fatty acid, we examined the effects of increasing the availability of linoleic acid (9cis,12cis-octadecadienoic acid), the substrate of the delta12-epoxygenase, on the quantity of epoxy fatty acids accumulating in transgenic A. thaliana. The addition of linoleic acid to liquid cultures of transgenic plants expressing the delta12-epoxygenase under the control of the cauliflower mosaic virus 35S promoter increased the amount of vernolic acid in vegetative tissues 2.8-fold. In contrast, the addition to these cultures of linoelaidic acid (9trans,12trans-octadecadienoic acid), which is not a substrate of the delta12-epoxygenase, resulted in a slight decrease in vernolic acid accumulation. Expression of the delta12-epoxygenase under the control of the napin promoter in the A. thaliana triple mutant fad3/fad7-1/fad9, which is deficient in the synthesis of tri-unsaturated fatty acids and has a 60% higher level of linoleic acid than the wild type, increased the average vernolic acid content of the seeds by 55% compared with expression of the delta12-epoxygenase in a wild-type background. Together, these results reveal that the availability of linoleic acid is an important factor affecting the synthesis of epoxy fatty acids in transgenic plants.
Abstract:
The objective of this paper is to study selected components of the nutrient cycle in pure and mixed stands of native forest species of the Atlantic Forest in southeastern Brazil. Tree diameter, height, above-ground biomass, and nutrient content were determined in 22-year-old stands. Litterfall, litter decomposition, and nutrient concentration were evaluated from August 1994 to July 1995. The following species were studied: Peltogyne angustiflora, Centrolobium robustum, Arapatiella psilophylla, Sclerolobium chrysophyllum, Cordia trichotoma, and Macrolobium latifolium. The litter of a natural forest and of a 40-year-old naturally regenerated second-growth forest was sampled as well. The mixed-species stand outmatched the pure stands in height, stem volume and total biomass (29.4 % more). The greatest amount of forest litter was observed in the natural forest (9.3 Mg ha-1), followed by the mixed-species stand (7.6 Mg ha-1) and the secondary forest (7.3 Mg ha-1), while the lowest litterfall was measured in the pure C. robustum stand (5.5 Mg ha-1). Litterfall seasonality varied among species in pure stands (CV from 44.7 to 91.4 %), unlike litterfall in the mixed-species stand, where the variation was lower (CV 31.2 %). In the natural and second-growth forest, litterfall varied by 57.8 and 34.0 %, respectively. The annual rate of nutrient return via litterfall varied widely among forest ecosystems. Differences were detected between forest ecosystems in both litter accumulation and the quantity of litter-layer nutrients. The highest mean nutrient accumulation in above-ground biomass was observed in the mixed-species stands. Total nutrient accumulation (N + P + K + Ca + Mg) ranged from 0.97 to 1.93 kg tree-1 in pure stands and from 1.21 to 2.63 kg tree-1 in mixed-species stands. Soil fertility under mixed-species stands (0-10 cm) was intermediate between the primary forest and the pure-stand systems. The litterfall rate of native forest species in a mixed-species system is more constant, resulting in a more continuous decomposition rate. Consequently, both nutrient availability and the quantity of organic matter in the soil are higher, and the production system is ecologically more sustainable.
Abstract:
Diversity patterns of ammonoids are analyzed and compared with the timing of anoxic deposits around the Cenomanian/Turonian (C/T) boundary in the Vocontian, Anglo-Paris, and Münster basins of Western Europe. Unlike most previous studies, which concentrate on a narrow time span bracketing the C/T boundary, the present analysis covers the latest Albian to Early Turonian interval, for which a high-resolution, ammonoid-based biochronology comprising 34 Unitary Association zones is now available. During the latest Albian-Middle Cenomanian interval, the species richness of ammonoids reveals a dynamic equilibrium oscillating around an average of 20 species, whereas the Late Cenomanian-Early Turonian interval displays an equilibrium centered on an average value of 6 species. The abrupt transition between these two successive equilibria lasted no longer than two Unitary Associations. The onset of the decline in species richness thus largely predates the spread of oxygen-poor water masses onto the shelves, while minimal values of species richness coincide only with the Cenomanian-Turonian boundary. The decline in species richness during the entire Late Cenomanian seems to result from lower origination percentages rather than from higher extinction percentages. This result is also supported by the absence of statistically significant changes in the extinction probabilities of the poly-cohorts. Separate analyses of species richness for acanthoceratids and heteromorphs, the two essential components of the Cenomanian ammonoid community, reveal that heteromorphs declined sooner than acanthoceratids. Moreover, acanthoceratids showed a later decline at the genus level than at the species level. This decoupling is accompanied by a significant increase in the morphological disparity of acanthoceratids, expressed by the appearance of new genera. Lastly, during the Late Cenomanian, paedomorphic processes, juvenile innovations and reductions of adult size dominated the evolutionary radiation of acanthoceratids. Hence, the decrease in ammonoid species richness and their major evolutionary changes significantly predate the spread of anoxic deposits. Other environmental factors, such as global flooding of platforms, a warmer and more equable climate, and productivity changes, correlate better with the timing of diversity changes and evolutionary patterns of ammonoids and therefore provide more likely causative mechanisms than anoxia alone.
Abstract:
The forced oscillation technique (FOT) is a method for non-invasively assessing respiratory mechanics that is applicable in both paralysed and non-paralysed patients. As the FOT requires only minimal modification of the conventional ventilation setting and does not interfere with the ventilation protocol, the technique is potentially useful for monitoring patient mechanics during invasive and noninvasive ventilation. FOT allows assessment of the linearity of the respiratory system by measuring resistance and reactance at different lung volumes or end-expiratory pressures. Moreover, FOT allows the physician to track changes in patient mechanics along the ventilation cycle. Applying FOT at different frequencies may allow the physician to interpret patient mechanics in terms of models of pathophysiological interest. Current methodological and technical experience makes possible the implementation of portable, compact, computerised FOT systems specifically designed for application in the mechanical ventilation setting.
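A brief illustration of how resistance and reactance are obtained at a single forcing frequency may help here. The sketch below is a minimal, self-contained simulation, not anything described in the abstract: the single-compartment respiratory model, the parameter values and the signal names are all assumptions made for illustration only.

```python
import numpy as np

# Minimal FOT-style sketch with synthetic signals: resistance (R) and
# reactance (X) are read off as the real and imaginary parts of the
# respiratory impedance Zrs(f) = P(f)/V'(f) at the forcing frequency.
fs, f_osc, dur = 200.0, 5.0, 10.0            # sampling rate (Hz), forcing (Hz), length (s)
t = np.arange(0.0, dur, 1.0 / fs)
w = 2.0 * np.pi * f_osc

# Assumed single-compartment model: Z = R + j*(w*I - 1/(w*C)).
R_true, I_true, C_true = 3.0, 0.01, 0.02     # cmH2O.s/L, cmH2O.s^2/L, L/cmH2O
X_true = w * I_true - 1.0 / (w * C_true)

flow = 0.2 * np.sin(w * t)                                           # L/s
pressure = 0.2 * (R_true * np.sin(w * t) + X_true * np.cos(w * t))   # cmH2O

# Estimate the impedance from the spectra of both signals at f_osc.
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
k = np.argmin(np.abs(freqs - f_osc))
Z = np.fft.rfft(pressure)[k] / np.fft.rfft(flow)[k]
print(f"R = {Z.real:.2f}, X = {Z.imag:.2f}")  # recovers R_true and X_true
```

In a real measurement the same ratio would be formed from recorded pressure and flow, typically with cross-spectral averaging to reject breathing and noise.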
Abstract:
Daptomycin is a promising candidate for local treatment of bone infection due to its activity against multi-resistant staphylococci. We investigated the activity of antibiotic-loaded PMMA against Staphylococcus epidermidis biofilms using an ultra-sensitive bacterial heat detection method (isothermal microcalorimetry). PMMA cylinders loaded with daptomycin (alone or in combination with gentamicin or PEG600), vancomycin, or gentamicin were incubated with S. epidermidis RP62A in tryptic soy broth (TSB) for 72 h. The cylinders were then washed and transferred into microcalorimetry ampoules pre-filled with TSB. Bacterial heat production, proportional to the quantity of biofilm on the PMMA, was measured by isothermal microcalorimetry at 37 °C. The heat detection time was defined as the time to reach 20 μW. Experiments were performed in duplicate. The heat detection time was 5.7-7.0 h for PMMA without antibiotics. When loaded with 5% daptomycin, vancomycin or gentamicin, detection times were 5.6-16.4 h, 16.8-35.7 h and 4.7-6.2 h, respectively. No heat was detected when 5% gentamicin or 0.5% PEG600 was added to the daptomycin-loaded PMMA. The study showed that vancomycin was superior to daptomycin and gentamicin in inhibiting staphylococcal adherence in vitro. However, PMMA loaded with daptomycin combined with gentamicin or PEG600 completely inhibited S. epidermidis biofilm formation. PMMA loaded with these combinations may represent an effective strategy for local treatment in the presence of multi-resistant staphylococci.
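Since the heat detection time is the key read-out, here is a minimal sketch of how such a threshold time could be extracted from a heat-flow curve. The curve and its logistic shape are synthetic placeholders, not data from the study; only the 20 μW threshold is taken from the text above.

```python
import numpy as np

def detection_time(time_h, heat_uW, threshold_uW=20.0):
    """First time (h) at which heat flow exceeds the threshold,
    or None if it never does (read here as complete inhibition)."""
    above = np.nonzero(np.asarray(heat_uW) > threshold_uW)[0]
    return None if above.size == 0 else float(np.asarray(time_h)[above[0]])

# Hypothetical logistic-shaped heat-flow curve for a growing biofilm.
t = np.linspace(0.0, 24.0, 2401)                 # hours
heat = 300.0 / (1.0 + np.exp(-(t - 8.0) / 1.2))  # microwatts
print(detection_time(t, heat))                   # ~4.8 h for this synthetic curve
```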
Abstract:
Accreted terranes, comprising a wide variety of Late Jurassic and Early Cretaceous igneous and sedimentary rocks, are an important feature of Cuban geology. Their characterization is helpful for understanding Caribbean paleogeography. The Guaniguanico terrane (western Cuba) is formed by Upper Jurassic platform sediments intruded by microgranular dolerite dykes. The geochemical characteristics of the dolerite whole-rock samples and their minerals (augitic clinopyroxene, labradorite and andesine) are consistent with a tholeiitic affinity. Major and trace element concentrations as well as Nd, Sr and Pb isotopes show that these rocks also have a continental affinity. Sample chemistry indicates that these lavas are similar to a low Ti-P2O5 (LTi) variety of continental flood basalts (CFB), comparable to the dolerites of Ferrar (Tasmania). They derived from mixing of a lithospheric mantle source and an asthenospheric component similar to E-MORB, with minor markers of crustal contamination and sediment assimilation. However, the small quantity of Cuban magmatic rocks, as in Tasmania, Antarctica and Siberia, differs from other volumetrically important CFB occurrences such as Paraná and Deccan. These dolerites are dated at 165-150 Ma and were emplaced during the separation of the Yucatan block from South America. They could in fact be part of the Yucatan-South America margin through which the intrusive system was emplaced and which was later accreted to the Cretaceous arc of central Cuba and to the Palaeogene arc of eastern Cuba. These samples could therefore reflect the pre-rift stage between North and South America and the opening of the Gulf of Mexico.
Abstract:
Dispersed information on water retention and availability in soils may be compiled in databases to generate pedotransfer functions. The objectives of this study were: to generate pedotransfer functions to estimate soil water retention based on easily measurable soil properties; to evaluate the efficiency of existing pedotransfer functions developed for other geographical regions in estimating water retention in soils of Rio Grande do Sul (RS); and to estimate plant-available water capacity based on soil particle-size distribution. Two databases of soil properties, including water retention, were set up: one based on literature data (725 entries) and the other with soil data from an irrigation scheduling and management system (239 entries). From the literature database, pedotransfer functions were generated, nine pedotransfer functions available in the literature were evaluated, and the plant-available water capacity was calculated. The coefficient of determination of some pedotransfer functions ranged from 0.56 to 0.66. Pedotransfer functions generated from soils of other regions were not appropriate for estimating water retention in RS soils. The plant-available water content varied with soil texture class, from 0.089 kg kg-1 for the sand class to 0.191 kg kg-1 for the silty clay class. These variations were more closely related to sand and silt than to clay content. Soils with a greater silt/clay ratio, which are less weathered and contain a greater quantity of smectite clay minerals, had higher water retention and plant-available water capacity.
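To make the idea of a pedotransfer function concrete, the sketch below fits one by ordinary least squares on synthetic data. The predictors, coefficients and the R² obtained are placeholders chosen for illustration; they are not the functions or databases described above.

```python
import numpy as np

# Illustrative pedotransfer-function fit on synthetic data: predict water
# content at field capacity (kg kg-1) from clay, silt and organic matter.
rng = np.random.default_rng(0)
n = 200
clay = rng.uniform(5, 60, n)      # %
silt = rng.uniform(5, 50, n)      # %
om = rng.uniform(0.5, 5.0, n)     # %
theta_fc = 0.05 + 0.004 * clay + 0.002 * silt + 0.01 * om + rng.normal(0, 0.01, n)

X = np.column_stack([np.ones(n), clay, silt, om])
coef, *_ = np.linalg.lstsq(X, theta_fc, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((theta_fc - pred) ** 2) / np.sum((theta_fc - theta_fc.mean()) ** 2)
print("coefficients:", coef.round(4), "R2:", round(r2, 2))
```

Evaluating an existing function against data from a new region amounts to computing the same R² (or a similar error statistic) with the published coefficients instead of refitting.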
Abstract:
High-altitude pulmonary edema (HAPE) is a life-threatening condition occurring in predisposed but otherwise healthy individuals. It therefore permits the study of the underlying mechanisms of pulmonary edema in the absence of confounding factors such as coexisting cardiovascular or pulmonary disease and/or drug therapy. There is evidence that some degree of asymptomatic alveolar fluid accumulation may represent a normal phenomenon in healthy humans shortly after arrival at high altitude. Two fundamental mechanisms then determine whether this fluid accumulation is cleared or whether it progresses to HAPE: the quantity of liquid escaping from the pulmonary vasculature and the rate of its clearance by the alveolar respiratory epithelium. The former is directly related to the degree of hypoxia-induced pulmonary hypertension, whereas the latter is determined by alveolar epithelial sodium transport. Here, we will review evidence that, in HAPE-prone subjects, impaired pulmonary endothelial and epithelial NO synthesis and/or bioavailability may represent a central underlying defect predisposing to exaggerated hypoxic pulmonary vasoconstriction and, in turn, to capillary stress failure and alveolar flooding. We will then demonstrate that exaggerated pulmonary hypertension, although possibly a conditio sine qua non, may not always be sufficient to induce HAPE, and show how defective alveolar fluid clearance may represent a second important pathogenic mechanism.
Abstract:
Obtaining information about soil properties under different agricultural uses is very important for planning soil management with a view to the sustainability of the different agricultural systems. The aim of this study was to evaluate changes in selected indicators of the physical quality of a dystrophic Red Latosol (Oxisol) under different agricultural uses. The study was conducted in an agricultural area located in northern Paraná State. Dystrophic Red Latosol samples were taken from four sites featuring types of land use typical of the region: pasture of Brachiaria decumbens (P); sugarcane (CN); annual crops under no-tillage (CAPD); and native forest (permanent conservation area), used as the control (C). For each land use, 20 disturbed and undisturbed soil samples were collected in a completely randomized design from the 0-20 cm soil layer to determine soil texture, volume of water-dispersible clay, degree of soil flocculation (FD), particle density, quantity of organic matter (OM), soil bulk density (Ds), soil macroporosity (Ma), microporosity (Mi), total soil porosity (TSP), mean geometric diameter of soil aggregates (MGD), and penetration resistance (PR). The results showed differences in OM, FD, MGD, Ds, PR, and Ma between the control (soil under forest) and the areas used for agriculture (P, CN and CAPD). The soils of lowest physical quality were those under CN and CAPD, although only the former presented a Ma level very close to that representing unfavorable conditions for plant growth. For the purposes of this study, the physical properties examined performed well as indicators of soil quality.
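For reference, the bulk density, particle density and total porosity listed above are linked by a standard relation (not restated in the abstract): TSP = 1 - Ds/Dp, where Ds is the soil bulk density and Dp the particle density. For example, Ds = 1.3 Mg m-3 and Dp = 2.65 Mg m-3 would give TSP ≈ 0.51; these example values are illustrative, not measurements from this study.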
Abstract:
The contribution of humic substances from different composts to the synthesis of humin in a tropical soil was evaluated. Increasing doses (0, 13, 26, 52, and 104 Mg ha-1) of five different composts consisting of agroindustrial residues were applied to a Red-Yellow Latosol. These composts were chemically characterized, 13C NMR spectra were determined, and the quantity of alkyl functional groups of the humic acids applied to the soil as compost was estimated. Thirty days after application of the treatments, organic matter samples were collected for fractionation into humic acids (HA), fulvic acids (FA) and humin (HU), from which the HA/FA and (HA + FA)/HU ratios were calculated. The application of the composts based on castor cake resulted in the highest HU levels in the soil; alkyl groups of the HA fraction of the composts were predominant among the organic components added to the HU fraction of the soil.
Abstract:
We show how to decompose any density matrix of the simplest binary composite systems, whether separable or not, in terms of only product vectors. We determine for all cases the minimal number of product vectors needed for such a decomposition. Separable states correspond to mixing from one to four pure product states. Inseparable states can be described as pseudomixtures of four or five pure product states, and can be made separable by mixing them with one or two pure product states.
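As a concrete illustration of a decomposition in terms of product vectors (a generic numerical check, not the construction given in the paper), the sketch below builds a two-qubit density matrix as an explicit mixture of pure product states and verifies its separability with the Peres-Horodecki positive-partial-transpose test, which is conclusive for 2x2 systems. The particular states and weights are arbitrary choices.

```python
import numpy as np

def ket(*amps):
    """Normalised state vector from amplitudes."""
    v = np.array(amps, dtype=complex)
    return v / np.linalg.norm(v)

def product_projector(a, b):
    """Projector onto the product vector |a> ⊗ |b>."""
    psi = np.kron(a, b)
    return np.outer(psi, psi.conj())

# A mixture of three pure product states (arbitrary illustrative weights).
rho = (0.5 * product_projector(ket(1, 0), ket(1, 0))
       + 0.3 * product_projector(ket(0, 1), ket(1, 1))
       + 0.2 * product_projector(ket(1, 1), ket(0, 1)))

# Partial transpose over the second qubit; for a 2x2 system a non-negative
# spectrum here is equivalent to separability.
rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
print(np.linalg.eigvalsh(rho_pt).min() >= -1e-12)   # True: separable
```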
Abstract:
In the quest to completely describe entanglement in the general case of a finite number of parties sharing a physical system with a finite-dimensional Hilbert space, an entanglement magnitude is introduced for its pure and mixed states: robustness. It corresponds to the minimal amount of mixing with locally prepared states that washes out all entanglement. It quantifies, in a sense, the endurance of entanglement against noise and jamming. Its properties are studied comprehensively. Analytical expressions for the robustness are given for pure states of two-party systems, and analytical bounds for mixed states of two-party systems. Specific results are obtained mainly for the qubit-qubit system (qubit denotes quantum bit). As by-products, local pseudomixtures are generalized, a lower bound for the relative volume of separable states is deduced, and arguments are put forward for considering convexity a necessary condition of any entanglement measure.
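For orientation, the quantity sketched in this abstract is commonly written as follows (a paraphrase of the standard definition, not a quotation of the paper): with \(\mathcal{S}\) the set of separable (locally preparable) states,

\[ R(\rho) \;=\; \min\Big\{\, s \ge 0 \;:\; \tfrac{1}{1+s}\big(\rho + s\,\sigma\big) \in \mathcal{S} \ \text{for some } \sigma \in \mathcal{S} \Big\}, \]

i.e. the minimal relative weight of locally prepared noise that must be admixed to wash out all entanglement, so that \(R(\rho) = 0\) exactly when \(\rho\) is itself separable.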
Abstract:
The proportion of the population living in or around cities is higher than ever. Urban sprawl and car dependence have taken over from the pedestrian-friendly compact city. Environmental problems such as air pollution, land waste or noise, and health problems are the result of this still continuing process. Urban planners have to find solutions to these complex problems while at the same time ensuring the economic performance of the city and its surroundings. Meanwhile, an increasing quantity of socio-economic and environmental data is being acquired. In order to gain a better understanding of the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist, are still under development, and can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help in communicating the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is shown using two examples at different scales. The problem of spatio-temporal modelling and data representation is treated and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic due to commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming the geographic space into a feature space, and the distance circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are discussed. A new approach to defining urban areas at different scales is developed, and the link with percolation theory is established. Fractal statistics, especially the lacunarity measure, and scale laws are used for characterising urban clusters. In a last section, population evolution is modelled using a model close to the well-established gravity model. The work covers quite a wide range of methods useful in urban geography. These methods should be developed further and at the same time find their way into the daily work and decision processes of urban planners.
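Since the abstract only says the population model is "close to the well-established gravity model" without giving its form, the sketch below shows the classic gravity model of spatial interaction for orientation. The function, the parameter values and the three hypothetical towns are assumptions made for illustration, not the model used in the thesis.

```python
import numpy as np

def gravity_flows(pop, dist, k=1.0, beta=2.0):
    """Classic gravity model: T_ij = k * P_i * P_j / d_ij**beta for i != j."""
    p = np.asarray(pop, dtype=float)
    d = np.asarray(dist, dtype=float)
    with np.errstate(divide="ignore"):
        t = k * np.outer(p, p) / d ** beta
    np.fill_diagonal(t, 0.0)          # no self-interaction flows
    return t

# Three hypothetical towns: populations and a symmetric distance matrix (km).
pop = [50_000, 20_000, 5_000]
dist = np.array([[0.0, 10.0, 25.0],
                 [10.0, 0.0, 18.0],
                 [25.0, 18.0, 0.0]])
print(gravity_flows(pop, dist).round(0))
```

The exponent beta controls how quickly interaction decays with distance and is normally calibrated against observed flows.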
Abstract:
A laboratory study was conducted with two aims in mind. The first goal was to develop a description of how a cutting edge scrapes ice from the road surface. The second goal was to investigate the extent, if any, to which serrated blades are better than un-serrated or "classical" blades at ice removal. The tests were conducted in the Ice Research Laboratory at the Iowa Institute of Hydraulic Research of the University of Iowa. A specialized testing machine, with a hydraulic ram capable of attaining scraping velocities of up to 30 m.p.h., was used in the testing. In order to characterize the ice scraping process, the effects of scraping velocity, ice thickness, and blade geometry on the ice scraping forces were determined. Greater ice thickness led to more ice chipping (as opposed to pulverization at lower thicknesses) and thus lower loads. Behavior was observed at higher velocities. The study of blade geometry included the effects of rake angle, clearance angle, and flat width. The latter two were found to be particularly important in developing a clear picture of the scraping process. As the clearance angle decreases and the flat width increases, the scraping loads show a marked increase, due to the need to re-compress pulverized ice fragments. The effect of serrations was to decrease the scraping forces. However, for the coarsest serrated blades (with the widest teeth and gaps) the quantity of ice removed was significantly less than for a classical blade. Finer serrations appear to be able to match the ice removal of classical blades at lower scraping loads. Thus, one of the recommendations of this study is to examine the use of serrated blades in the field. Preliminary work (by Nixon and Potter, 1996) suggests such work will be fruitful. A second and perhaps more challenging result of the study is that chipping of ice is preferable to pulverization of the ice. How such chipping can be forced to occur is at present an open question.
Abstract:
The occupational health risk involved in handling nanoparticles is the probability that a worker will experience an adverse health effect: this is calculated as a function of the worker's exposure relative to the potential biological hazard of the material. Addressing the risks of nanoparticles therefore requires knowledge of occupational exposure and of the release of nanoparticles into the environment, as well as toxicological data. However, information on exposure is currently not systematically collected; this risk assessment therefore lacks quantitative data. This thesis aimed, first, at creating the fundamental data necessary for a quantitative assessment and, second, at evaluating methods to measure occupational nanoparticle exposure. The first goal was to determine what is being used, and where, in Swiss industries. This was followed by an evaluation of the adequacy of existing measurement methods for assessing workplace nanoparticle exposure to complex size distributions and concentration gradients. The study was conceived as a series of methodological evaluations aimed at better understanding nanoparticle measurement devices and methods. It focused on inhalation exposure to airborne particles, as respiration is considered to be the most important entry pathway for nanoparticles into the body in terms of risk. The targeted survey (pilot study) was conducted as a feasibility study for a later nationwide survey on the handling of nanoparticles and the use of specific protective measures in industry. The study consisted of targeted phone interviews with health and safety officers of Swiss companies that were believed to use or produce nanoparticles. This was followed by a representative survey on the level of nanoparticle usage in Switzerland. It was designed based on the results of the pilot study and was conducted among a representative selection of clients of the Swiss National Accident Insurance Fund (SUVA), covering about 85% of Swiss production companies. The third part of this thesis focused on the methods to measure nanoparticles. Several pre-studies were conducted to study the limits of commonly used measurement devices in the presence of nanoparticle agglomerates. This focus was chosen because several discussions with users and producers of the measurement devices raised questions about their accuracy in measuring nanoparticle agglomerates and because, at the same time, the two survey studies revealed that such powders are frequently used in industry. The first preparatory experiment focused on the accuracy of the scanning mobility particle sizer (SMPS), which showed an improbable size distribution when measuring powders of nanoparticle agglomerates. Furthermore, the thesis includes a series of smaller experiments that took a closer look at problems encountered with other measurement devices in the presence of nanoparticle agglomerates: condensation particle counters (CPC), the portable aerosol spectrometer (PAS), a device to estimate the aerodynamic diameter, as well as diffusion size classifiers. Some initial feasibility tests of the efficiency of filter-based sampling and subsequent counting of carbon nanotubes (CNT) were conducted last. The pilot study provided a detailed picture of the types and amounts of nanoparticles used and of the knowledge of the health and safety experts in the companies.
Considerable maximal quantities (> 1'000 kg/year per company) of Ag, Al-Ox, Fe-Ox, SiO2, TiO2, and ZnO (mainly first-generation particles) were declared by the contacted Swiss companies. The median quantity of handled nanoparticles, however, was 100 kg/year. The representative survey was conducted by contacting by post a representative selection of 1'626 SUVA clients (Swiss Accident Insurance Fund). It allowed estimation of the number of companies and workers dealing with nanoparticles in Switzerland. The extrapolation from the surveyed companies to all companies of the Swiss production sector suggested that 1'309 workers (95% confidence interval: 1'073 to 1'545) of the Swiss production sector are potentially exposed to nanoparticles in 586 companies (145 to 1'027). These numbers correspond to 0.08% (0.06% to 0.09%) of all workers and to 0.6% (0.2% to 1.1%) of companies in the Swiss production sector. A few well-known methods exist to measure airborne concentrations of sub-micrometre-sized particles. However, it was unclear how well the different instruments perform in the presence of the often quite large agglomerates of nanostructured materials. The evaluation of devices and methods therefore focused on nanoparticle agglomerate powders. It allowed the identification of the following potential sources of inaccurate measurements at workplaces with considerably high concentrations of airborne agglomerates:
- A standard SMPS showed bi-modal particle size distributions when measuring large nanoparticle agglomerates.
- Differences in the range of a factor of a thousand were found between diffusion size classifiers and CPC/SMPS.
- The agreement between CPC/SMPS and the portable aerosol spectrometer (PAS) was much better, but depending on the concentration, size or type of the powders measured, the differences were still of the order of one order of magnitude.
- Specific difficulties and uncertainties in the assessment of workplaces were identified: background particles can interact with particles created by a process, which makes handling of the background concentration difficult.
- Electric motors produce high numbers of nanoparticles and confound the measurement of process-related exposure.
Conclusion: The surveys showed that nanoparticle applications exist in many industrial sectors in Switzerland and that some companies already use high quantities of them. The representative survey demonstrated a low prevalence of nanoparticle usage in most branches of Swiss industry and led to the conclusion that the introduction of applications using nanoparticles (especially outside industrial chemistry) is only beginning. Even though the number of potentially exposed workers was reportedly rather small, it nevertheless underscores the need for exposure assessments. Understanding exposure and how to measure it correctly is very important because the potential health effects of nanomaterials are not yet fully understood. The evaluation showed that many devices and methods for measuring nanoparticles need to be validated for nanoparticle agglomerates before large exposure assessment studies can begin.
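To make the extrapolation step concrete, the sketch below shows one standard way of turning a sampled proportion into a population-level estimate with an approximate 95% confidence interval (normal approximation). All input numbers are placeholders; they are not the thesis's raw survey counts, which are not given in the abstract, and the thesis may well have used a different (e.g. weighted) estimator.

```python
import math

def extrapolate(hits, sample_size, population, z=1.96):
    """Scale a sampled proportion up to `population`, with an approximate
    95% confidence interval from the normal approximation."""
    p = hits / sample_size
    se = math.sqrt(p * (1.0 - p) / sample_size)
    lo, hi = max(p - z * se, 0.0), p + z * se
    return population * p, (population * lo, population * hi)

# Hypothetical example: 12 of 947 responding companies report nanoparticle
# use, scaled to roughly 46,000 companies in the production sector.
est, (lo, hi) = extrapolate(hits=12, sample_size=947, population=46_000)
print(f"estimated companies: {est:.0f} (95% CI {lo:.0f} to {hi:.0f})")
```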