85 results for GasFree Filter
Abstract:
Molecular shape has long been known to be an important property in molecular recognition. Previous studies postulated the existence of a drug-like shape space that could be used to deliberately bias the composition of screening libraries, with the aim of increasing the chance of success in Hit Identification. In this work, it was analysed to what extent this assumption holds true. Normalized Principal Moments of Inertia Ratios (NPRs) were used to describe the molecular shape of small molecules, and it was investigated whether active molecules of diverse targets are located in preferred subspaces of the NPR shape space. The results showed significantly stronger clustering than could be expected by chance, with parts of the space unlikely to be occupied by active compounds. Furthermore, a strong enrichment of elongated, rather flat shapes was observed, while globular compounds were highly underrepresented. This was confirmed for a wide range of small-molecule datasets of different origins. Active compounds exhibited a high overlap in their shape distributions across different targets, making a purely shape-based discrimination very difficult. An additional perspective was provided by comparing the shapes of protein binding pockets with those of their respective ligands. Although more globular than their ligands, binding-site shapes exhibited a similarly skewed distribution in shape space: spherical shapes were highly underrepresented. This was different for unoccupied binding pockets of smaller size, which were, on the contrary, found to possess a more globular shape. The relation between shape complementarity and bioactivity was analysed; a moderate correlation between bioactivity and parameters including pocket coverage and distance in shape space, among others, was identified, which reflects the importance of shape complementarity.
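The NPR descriptors used throughout this work are simple ratios of the principal moments of inertia. A minimal sketch of their computation, assuming unit atomic masses and plain NumPy rather than a cheminformatics toolkit (which would also supply the 3D conformers):

```python
import numpy as np

def npr_shape_descriptors(coords, masses=None):
    """Compute normalized PMI ratios (NPR1, NPR2) from 3D coordinates.

    NPR1 = I1/I3 and NPR2 = I2/I3 with principal moments I1 <= I2 <= I3.
    Rods map to (0, 1), flat discs to (0.5, 0.5), spheres to (1, 1).
    """
    coords = np.asarray(coords, dtype=float)
    if masses is None:
        masses = np.ones(len(coords))  # unit masses: a simplification
    masses = np.asarray(masses, dtype=float)
    # centre the coordinates on the centre of mass
    com = np.average(coords, axis=0, weights=masses)
    r = coords - com
    x, y, z = r[:, 0], r[:, 1], r[:, 2]
    # build the inertia tensor
    I = np.array([
        [np.sum(masses * (y**2 + z**2)), -np.sum(masses * x * y), -np.sum(masses * x * z)],
        [-np.sum(masses * x * y), np.sum(masses * (x**2 + z**2)), -np.sum(masses * y * z)],
        [-np.sum(masses * x * z), -np.sum(masses * y * z), np.sum(masses * (x**2 + y**2))],
    ])
    # principal moments are the eigenvalues, returned in ascending order
    i1, i2, i3 = np.linalg.eigvalsh(I)
    return i1 / i3, i2 / i3
```

In the resulting triangular NPR plot, the corners (0, 1), (0.5, 0.5) and (1, 1) correspond to rod, disc and sphere, which is the space in which the enrichment of elongated, flat shapes was observed.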
However, this also suggests that other aspects are relevant for molecular recognition. A subsequent analysis assessed if and how shape and volume information retrieved from pockets or their respective reference ligands could be used as a pre-filter in a virtual screening approach. In Lead Optimization, compounds need to be optimized with respect to a variety of parameters. Here, the availability of past success stories is very valuable, as they can guide medicinal chemists during their analogue synthesis plans. However, although of tremendous interest for the public domain, so far only large corporations had the ability to mine historical knowledge in their proprietary databases. With the aim of providing such information, the SwissBioisostere database was developed and released during this thesis. This database contains information on 21,293,355 performed substructural exchanges, corresponding to 5,586,462 unique replacements that have been measured in 35,039 assays against 1,948 molecular targets representing 30 target classes, and on their impact on bioactivity. A user-friendly interface was developed that provides facile access to these data and is accessible at http://www.swissbioisostere.ch. The ChEMBL database was used as the primary source of bioactivity information. Matched molecular pairs were identified in the extracted and cleaned data using a modified version of the Hussain and Rea algorithm. Success-based scores were developed and integrated into the database to allow re-ranking of proposed replacements by their past outcomes. It was analysed to what degree these scores correlate with the chemical similarity of the underlying fragments. An unexpectedly weak relationship was detected and further investigated.
Use cases of this database were envisioned, and functionalities were implemented accordingly: replacement outcomes are aggregatable at the assay level, and it was shown that aggregation at the target or target-class level could also be performed, but should be accompanied by a careful case-by-case assessment. It was furthermore observed that replacement success depends on the activity of the starting compound A within a matched molecular pair A-B: with increasing potency, the probability of losing bioactivity through any substructural exchange was significantly higher than in binders of low affinity. The potential existence of a publication bias could be refuted. Furthermore, frequently performed medicinal chemistry strategies for structure-activity-relationship exploration were analysed using the acquired data. Finally, data originating from pharmaceutical companies were compared with those reported in the literature. It could be seen that industrial medicinal chemistry can access replacement information not available in the public domain. Conversely, a large number of replacements performed often within companies could also be identified in literature data. Preferences for particular replacements differed between these two sources. The value of combining different endpoints (e.g., bioactivity and metabolic stability) in an evaluation of molecular replacements was investigated. The performed studies furthermore highlighted that there seems to exist no universal substructural replacement that always retains bioactivity irrespective of the biological environment. A generalization of bioisosteric replacements therefore does not seem possible.
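The matched-molecular-pair identification that underlies these analyses can be illustrated with a toy sketch of the fragment-and-index idea behind the Hussain and Rea algorithm. Here molecules are assumed to be already fragmented into (core, substituent) pairs; a real implementation would enumerate single acyclic-bond cuts with a cheminformatics toolkit, and the SMILES-like strings below are purely illustrative.

```python
from collections import defaultdict
from itertools import combinations

def find_mmps(fragmentations):
    """Index single-cut fragmentations by their common core.

    `fragmentations` maps a molecule id to a list of (core, substituent)
    pairs, one per single-bond cut.  Two molecules sharing a core form a
    matched molecular pair (MMP) whose difference is the substructural
    exchange substituent_A >> substituent_B.
    """
    index = defaultdict(list)
    for mol_id, cuts in fragmentations.items():
        for core, substituent in cuts:
            index[core].append((mol_id, substituent))
    mmps = []
    for core, entries in index.items():
        # every pair of molecules attached to the same core is an MMP
        for (mol_a, sub_a), (mol_b, sub_b) in combinations(entries, 2):
            if mol_a != mol_b and sub_a != sub_b:
                mmps.append((mol_a, mol_b, core, (sub_a, sub_b)))
    return mmps
```

Linking each pair to the bioactivities measured for both compounds in the same assay is what turns this index into replacement statistics such as the success-based scores described above.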
Abstract:
STUDY OBJECTIVE: To evaluate the safety of a combined heat and moisture exchanger filter (HMEF) for the conditioning of inspired gas in long-term mechanical ventilation (MV). DESIGN: Randomized controlled trial. SETTING: Medical ICU in a large teaching hospital. PATIENTS: One hundred fifteen consecutive patients who required ≥ 48 h of MV. INTERVENTIONS: Patients were randomized at intubation time (day 1) to receive inspired gas conditioned either by a water-bath humidifier heated to 32°C (HWBH) or by an HMEF (Hygroster; DAR; Mirandola, Italy). MEASUREMENTS AND MAIN RESULTS: The two study groups were comparable in terms of primary pathologic condition at the time of hospital admission, disease severity as measured by the Simplified Acute Physiology Score, and ICU mortality. They did not differ with respect to ventilator days per patient (mean ± SD: HMEF, 7.6 ± 6.5; HWBH, 7.8 ± 5.8), incidence of endotracheal tube obstruction (HMEF, 0/59; HWBH, 1/56), or incidence of hypothermic episodes (HMEF, five; HWBH, two). In 41 patients receiving MV for ≥ 5 days, the morphologic integrity of the respiratory epithelium was evaluated on day 1 and day 5 using a cytologic examination of tracheal aspirate smears. The state of the ciliated epithelium was scored on a scale from 0 (poorest integrity) to 1,200 (maximum integrity), according to a well-described method. In both patient groups, the scores slightly but significantly decreased from day 1 to day 5 (mean ± SD: HWBH, from 787 ± 104 to 745 ± 88; HMEF, from 813 ± 79 to 739 ± 62; p < 0.01 for both groups); there were no statistically significant differences between groups. CONCLUSIONS: These data indicate acceptable safety of HMEFs of the type used in the present study for long-term mechanical ventilation.
Abstract:
Self-categorization theory is a social psychology theory dealing with the relation between the individual and the group. It explains group behaviour through the conception of self and others as members of social categories, and through the attribution of the prototypical characteristics of those categories to individuals. Hence, it is a theory of the individual that intends to explain collective phenomena. Situations involving a large number of non-trivially interacting individuals typically generate complex collective behaviours, which are difficult to anticipate on the basis of individual behaviour. Computer simulation of such systems is a reliable way of systematically exploring the dynamics of the collective behaviour depending on individual specifications. In this thesis, we present a formal model of a part of self-categorization theory called the metacontrast principle. Given the distribution of a set of individuals on one or several comparison dimensions, the model generates categories and their associated prototypes. We show that the model behaves coherently with respect to the theory and is able to replicate experimental data concerning various group phenomena, for example polarization.
Moreover, it allows the predictions of the theory from which it is derived to be described systematically, especially in previously unencountered situations. At the collective level, several dynamics can be observed, among which convergence towards consensus, towards fragmentation, or towards the emergence of extreme attitudes. We also study the effect of the social network on the dynamics and show that, except for the convergence speed, which increases as the mean distances on the network decrease, the observed convergence types do not depend much on the chosen network. We further note that individuals located at the border of the groups (whether in the social network or spatially) have a decisive influence on the outcome of the dynamics. In addition, the model can be used as an automatic classification algorithm. It identifies prototypes around which groups are built. Prototypes are positioned so as to accentuate the groups' typical characteristics and are not necessarily central. Finally, if we consider the set of pixels of an image as individuals in a three-dimensional colour space, the model provides a filter that can attenuate noise, help detect objects, and simulate perception biases such as chromatic induction.
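The metacontrast principle at the heart of the model can be made concrete with a small sketch. The function below is an illustrative one-dimensional version of the meta-contrast ratio (the function name and the handling of duplicate positions are our own simplifications): an individual's prototypicality for a candidate category is the mean distance to positions outside the category divided by the mean distance to the other positions inside it.

```python
def metacontrast_ratio(x, ingroup, outgroup):
    """Meta-contrast ratio of position x for a candidate category.

    Ratio of the mean distance from x to positions outside the category
    over the mean distance to the other positions inside it; higher
    values mark more prototypical members.  Single comparison dimension
    only; duplicates of x in the ingroup are excluded by value, which is
    a simplification of this sketch.
    """
    inside = [i for i in ingroup if i != x]
    inter = sum(abs(x - o) for o in outgroup) / len(outgroup)
    intra = sum(abs(x - i) for i in inside) / len(inside)
    return inter / intra
```

Run on attitudes {1, 2, 3} facing an outgroup at {8, 9, 10}, the extreme member at 1 obtains the highest ratio: the prototype shifts away from the outgroup, which is how the model accounts for phenomena such as polarization.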
Abstract:
The occupational health risk involved with handling nanoparticles is the probability that a worker will experience an adverse health effect: this is calculated as a function of the worker's exposure relative to the potential biological hazard of the material. Addressing the risks of nanoparticles therefore requires knowledge of occupational exposure and of the release of nanoparticles into the environment, as well as toxicological data. However, information on exposure is currently not systematically collected; such risk assessments therefore lack quantitative data. This thesis aimed, first, at creating the fundamental data necessary for a quantitative assessment and, second, at evaluating methods to measure occupational nanoparticle exposure. The first goal was to determine what is being used where in Swiss industries. This was followed by an evaluation of the adequacy of existing measurement methods for assessing workplace nanoparticle exposure to complex size distributions and concentration gradients. The study was conceived as a series of methodological evaluations aimed at better understanding nanoparticle measurement devices and methods. It focused on inhalation exposure to airborne particles, as respiration is considered to be the most important entry pathway for nanoparticles into the body in terms of risk. The targeted survey (pilot study) was conducted as a feasibility study for a later nationwide survey on the handling of nanoparticles and the application of specific protection means in industry. The study consisted of targeted phone interviews with health and safety officers of Swiss companies believed to use or produce nanoparticles. This was followed by a representative survey on the level of nanoparticle usage in Switzerland, designed based on the results of the pilot study.
The study was conducted among a representative selection of clients of the Swiss National Accident Insurance Fund (SUVA), covering about 85% of Swiss production companies. The third part of this thesis focused on methods to measure nanoparticles. Several pre-studies were conducted examining the limits of commonly used measurement devices in the presence of nanoparticle agglomerates. This focus was chosen because several discussions with users and producers of the measurement devices raised questions about their accuracy in measuring nanoparticle agglomerates and because, at the same time, the two survey studies revealed that such powders are frequently used in industry. The first preparatory experiment focused on the accuracy of the scanning mobility particle sizer (SMPS), which showed an improbable size distribution when measuring powders of nanoparticle agglomerates. Furthermore, the thesis includes a series of smaller experiments that took a closer look at problems encountered with other measurement devices in the presence of nanoparticle agglomerates: condensation particle counters (CPC), a portable aerosol spectrometer (PAS; a device to estimate the aerodynamic diameter), as well as diffusion size classifiers. Some initial feasibility tests of the efficiency of filter-based sampling and subsequent counting of carbon nanotubes (CNT) were conducted last. The pilot study provided a detailed picture of the types and amounts of nanoparticles used and of the knowledge of the health and safety experts in the companies. Considerable maximal quantities (> 1'000 kg/year per company) of Ag, Al-Ox, Fe-Ox, SiO2, TiO2, and ZnO (mainly first-generation particles) were declared by the contacted Swiss companies. The median quantity of handled nanoparticles, however, was 100 kg/year. The representative survey was conducted by contacting by post a representative selection of 1'626 clients of SUVA (the Swiss National Accident Insurance Fund).
It allowed estimation of the number of companies and workers dealing with nanoparticles in Switzerland. Extrapolation from the surveyed companies to all companies of the Swiss production sector suggested that 1'309 workers (95% confidence interval: 1'073 to 1'545) of the Swiss production sector are potentially exposed to nanoparticles in 586 companies (145 to 1'027). These numbers correspond to 0.08% (0.06% to 0.09%) of all workers and to 0.6% (0.2% to 1.1%) of companies in the Swiss production sector. A few well-known methods exist to measure airborne concentrations of sub-micrometre-sized particles. However, it was unclear how well the different instruments perform in the presence of the often quite large agglomerates of nanostructured materials. The evaluation of devices and methods focused on nanoparticle agglomerate powders. It allowed the identification of the following potential sources of inaccurate measurements at workplaces with considerably high concentrations of airborne agglomerates:
- A standard SMPS showed bi-modal particle size distributions when measuring large nanoparticle agglomerates.
- Differences in the range of a factor of a thousand were shown between diffusion size classifiers and CPC/SMPS.
- The agreement between CPC/SMPS and the portable aerosol spectrometer (PAS) was much better, but depending on the concentration, size, or type of the powders measured, the differences could still reach an order of magnitude.
- Specific difficulties and uncertainties in the assessment of workplaces were identified: background particles can interact with particles created by a process, which makes handling of the background concentration difficult.
- Electric motors produce high numbers of nanoparticles and can confound the measurement of process-related exposure.
Conclusion: The surveys showed that nanoparticle applications exist in many industrial sectors in Switzerland and that some companies already use high quantities of them.
The representative survey demonstrated a low prevalence of nanoparticle usage in most branches of Swiss industry and led to the conclusion that the introduction of applications using nanoparticles (especially outside industrial chemistry) is only beginning. Even though the number of potentially exposed workers was reportedly rather small, it nevertheless underscores the need for exposure assessments. Understanding exposure and how to measure it correctly is very important because the potential health effects of nanomaterials are not yet fully understood. The evaluation showed that many devices and methods for measuring nanoparticles need to be validated for nanoparticle agglomerates before large exposure assessment studies can begin.
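The survey extrapolation reported above (a point estimate with a 95% confidence interval, scaled from a sample to the whole production sector) can be sketched as a simple proportion scale-up. This is a simplified normal-approximation version with illustrative numbers; the function name is our own, and the actual study may have used survey weights.

```python
import math

def extrapolate_count(n_sample, n_positive, population):
    """Scale a sample proportion up to a population count with a 95%
    normal-approximation confidence interval.

    Simplified sketch: assumes a simple random sample and a large
    enough n_positive for the normal approximation to hold.
    """
    p = n_positive / n_sample
    se = math.sqrt(p * (1 - p) / n_sample)          # standard error of p
    estimate = p * population
    lower = max(0.0, p - 1.96 * se) * population     # 95% CI bounds
    upper = (p + 1.96 * se) * population
    return estimate, lower, upper
```

With a sample of 1'626 companies, this turns the observed fraction of nanoparticle users into an estimated count for the whole sector, together with an interval of the kind quoted in the abstract.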
Resumo:
An assessment of wood workers' exposure to airborne cultivable bacteria, fungi, inhalable endotoxins and inhalable organic dust was performed at 12 sawmills that process mainly coniferous wood species. In each plant, samples were collected at four or five different work sites (debarking, sawing, sorting, planing and sawing cockpit) and the efficiency of sampling devices (impinger or filter) for determining endotoxin levels was evaluated. Results show that fungi are present in very high concentrations (up to 35 000 CFU m(-3)) in all sawmills. We also find that there are more bioaerosols at the sorting work site (mean +/- SD: 7723 +/- 9919 CFU m(-3) for total bacteria, 614 +/- 902 CFU m(-3) for Gram-negative bacteria, 19 438 +/- 14 246 CFU m(-3) for fungi, 7.0 +/- 9.0 EU m(-3) for endotoxin and 2.9 +/- 4.8 mg m(-3) for dust) than at the sawing station (mean +/- SD: 1938 +/- 2478 CFU m(-3) for total bacteria, 141 +/- 206 CFU m(-3) for Gram-negative bacteria, 12 207 +/- 10 008 CFU m(-3) for fungi, 2.1 +/- 1.9 EU m(-3) for endotoxin and 0.75 +/- 0.49 mg m(-3) for dust). At the same time, the species composition and concentration of airborne Gram-negative bacteria were studied. Penicillium sp. were the predominant fungi, while Bacillus sp. and the Pseudomonadaceae family were the predominant Gram-positive and Gram-negative bacteria encountered, respectively. [Authors]
Resumo:
Anti-doping authorities have high expectations of the athlete steroidal passport (ASP) for the detection of anabolic-androgenic steroid misuse. However, it is still limited to the monitoring of known, well-established compounds and might greatly benefit from the discovery of new relevant biomarker candidates. In this context, steroidomics opens the way to the untargeted simultaneous evaluation of a high number of compounds. Analytical platforms associating the performance of ultra-high pressure liquid chromatography (UHPLC) and the high mass-resolving power of quadrupole time-of-flight (QTOF) mass spectrometers are particularly adapted for such a purpose. An untargeted steroidomic approach was proposed to analyse urine samples from a clinical trial for the discovery of relevant biomarkers of testosterone undecanoate oral intake. Automatic peak detection was performed and a filter of reference steroid metabolite mass-to-charge ratio (m/z) values was applied to the raw data to ensure the selection of a subset of steroid-related features. Chemometric tools were applied for the filtering and the analysis of UHPLC-QTOF-MS(E) data. Time kinetics could be assessed with N-way projections to latent structures discriminant analysis (N-PLS-DA) and a detection window was confirmed. Orthogonal projections to latent structures discriminant analysis (O-PLS-DA) classification models were evaluated in a second step to assess the predictive power of both known metabolites and unknown compounds. A shared and unique structure plot (SUS-plot) analysis was performed to select the most promising unknown candidates, and receiver operating characteristic (ROC) curves were computed to assess the specificity criteria applied in routine doping control. This approach underlined the pertinence of monitoring both glucuronide and sulphate steroid conjugates and including them in the athlete's passport, while promising biomarkers were also highlighted.
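The ROC analysis mentioned above can be illustrated with a minimal sketch: a generic threshold-sweep ROC and area-under-curve computation on hypothetical discriminant scores (not the study's actual O-PLS-DA output):

```python
import numpy as np

def roc_curve(scores_pos, scores_neg):
    """Sweep a decision threshold over the pooled scores and record
    (false positive rate, true positive rate) pairs."""
    thresholds = np.sort(np.concatenate([scores_pos, scores_neg]))[::-1]
    tpr = np.array([(scores_pos >= t).mean() for t in thresholds])
    fpr = np.array([(scores_neg >= t).mean() for t in thresholds])
    return fpr, tpr

def auc(fpr, tpr):
    """Trapezoidal area under the ROC curve."""
    return float(np.sum(np.diff(fpr) * (tpr[:-1] + tpr[1:]) / 2))

# Hypothetical scores: positives (post-administration) vs negatives (baseline)
fpr, tpr = roc_curve(np.array([0.9, 0.8, 0.7, 0.4]),
                     np.array([0.6, 0.3, 0.2, 0.1]))
```

For routine doping control, one would read off the true-positive rate at a fixed, very low false-positive rate rather than report the overall AUC.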
Resumo:
Neuronal oscillations are an important aspect of EEG recordings. These oscillations are thought to be involved in several cognitive mechanisms; for instance, oscillatory activity is considered a key component of the top-down control of perception. However, measuring this activity and its influence requires precise extraction of frequency components, and this processing is not straightforward. In particular, difficulties in extracting oscillations arise from their time-varying characteristics. Moreover, when phase information is needed, it is of the utmost importance to extract narrow-band signals. This paper presents a novel method using adaptive filters for tracking and extracting these time-varying oscillations. The scheme is designed to maximize the oscillatory behavior at the output of the adaptive filter. It is thus capable of tracking an oscillation and describing its temporal evolution even during low-amplitude time segments. Moreover, the method can be extended to track several oscillations simultaneously and to use multiple signals. These two extensions are particularly relevant in the framework of EEG data processing, where oscillations are active at the same time in different frequency bands and signals are recorded with multiple sensors. The presented tracking scheme is first tested with synthetic signals in order to highlight its capabilities. It is then applied to data recorded during a visual shape discrimination experiment to assess its usefulness during EEG processing and in detecting functionally relevant changes. This method is an interesting additional processing step for providing information complementary to classical time-frequency analyses and for improving the detection and analysis of cross-frequency couplings.
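The abstract does not spell out the filter structure. As a minimal sketch of the general idea of adaptively extracting a time-varying oscillation, the following uses a classical normalized-LMS adaptive line enhancer (not necessarily the authors' scheme): the filter predicts the signal from its own delayed samples, and since only narrow-band components are predictable, the prediction is the extracted oscillation.

```python
import numpy as np

def adaptive_line_enhancer(x, order=32, delay=1, mu=0.5):
    """NLMS adaptive line enhancer: predicts x[k] from delayed samples.
    Only narrow-band (oscillatory) components are predictable, so the
    prediction y is the extracted oscillation."""
    w = np.zeros(order)
    y = np.zeros(len(x))
    for k in range(delay + order, len(x)):
        u = x[k - delay - order + 1:k - delay + 1][::-1]  # delayed tap vector
        y[k] = w @ u                                      # predicted oscillation
        e = x[k] - y[k]                                   # broadband residual
        w += mu * e * u / (u @ u + 1e-12)                 # normalized LMS update
    return y

# Demo: a slowly drifting ~10 Hz rhythm buried in noise (250 Hz sampling)
rng = np.random.default_rng(0)
t = np.arange(4000) / 250.0
osc = np.sin(2 * np.pi * (10.0 + 0.5 * t) * t)  # instantaneous freq 10 + t Hz
x = osc + 0.8 * rng.standard_normal(t.size)
y = adaptive_line_enhancer(x)
```

Because the predictor keeps adapting, the extracted component follows the drifting frequency; tracking several bands or several sensors, as discussed in the paper, would require running several such filters in parallel.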
Resumo:
A passive sampling device called the Monitor of NICotine, or "MoNIC", was constructed and evaluated by the IST laboratory for determining nicotine in Second Hand Tobacco Smoke (SHTS), also known as Environmental Tobacco Smoke (ETS). Vapour-phase nicotine was passively collected on a potassium bisulfate-treated glass fibre filter as collection medium. Nicotine collected on the treated filter was analysed by gas chromatography with a thermionic-specific detector (GC-TSD) after liquid-liquid extraction (1 mL of 5 N NaOH : 1 mL of n-heptane saturated with NH3) using quinoline as internal standard. Based on a nicotine amount of 0.2 mg/cigarette as the reference, the cigarette equivalents (CE) inhaled by non-smokers can be calculated. Using the CE detected on the badge for non-smokers, and comparing it with the amounts of nicotine and cotinine in the saliva of both smokers and exposed non-smokers, we can confirm the validity of the CE concept for estimating exposure to ETS. The regional CIPRETs (centres for information and prevention of tobacco addiction) of several cantons (Valais (VS), Vaud (VD), Neuchâtel (NE) and Fribourg (FR)) organized a large campaign on passive smoking. This campaign took place in 2007-2008 and aimed to clearly inform the Swiss population about the dangers of passive smoke. More than 3'900 MoNIC badges were distributed free of charge to the Swiss population for self-monitoring of exposure levels to ETS, expressed in terms of CE. Unstimulated saliva was also collected to determine the levels of the ETS biomarkers nicotine and cotinine in participating volunteers. Results for the different levels of CE in occupational and non-occupational situations related to ETS are presented in this study. This study, unique in Switzerland, has established a baseline map of the population's exposure to SHTS.
It underscored the fact that all the Swiss people involved in this campaign (N=1241) were exposed to passive smoke, ranging from <0.2 cig/d (10.8%) to between 1-2 and more than 10 cig/d (89.2%). In the high-exposure range (15-38 cig/d) were mostly workers in restaurants, cafés, bars and discos. By monitoring the ETS tracer nicotine and its biomarkers, salivary nicotine and cotinine, it was demonstrated that the MoNIC badge can serve as an indicator of passive-smoking CE. The MoNIC badge, together with the salivary nicotine/cotinine content, can serve as a tool for evaluating passive exposure to ETS and contributes useful data for future epidemiological studies. It was also demonstrated that salivary nicotine (without stimulation) is a better biomarker of ETS exposure than cotinine.
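The cigarette-equivalent conversion described above is a simple ratio of collected nicotine to the 0.2 mg/cigarette reference; a sketch (the function and variable names are ours):

```python
NICOTINE_PER_CIGARETTE_MG = 0.2  # reference value used in the study

def cigarette_equivalents(badge_nicotine_mg):
    """Convert nicotine collected on a MoNIC badge (mg) into inhaled
    cigarette equivalents (CE)."""
    return badge_nicotine_mg / NICOTINE_PER_CIGARETTE_MG

# A badge carrying 0.5 mg of nicotine corresponds to 2.5 cigarette equivalents
ce = cigarette_equivalents(0.5)
```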
Resumo:
The evaluation of radioactivity accidentally released into the atmosphere involves determining the radioactivity levels of rainwater samples. Rainwater scavenges atmospheric airborne radioactivity in such a way that surface contamination can be deduced from the rainfall rate and the radioactivity content of the rainwater. For this purpose, rainwater is usually collected in large-surface collectors and then measured by gamma-spectrometry after treatments such as evaporation or iron hydroxide precipitation. We found that collectors can be adapted to accept large-surface (diameter 47 mm) cartridges containing a strongly acidic resin (Dowex AG 88) which is able to quantitatively extract radioactivity from rainwater, even during heavy rainfall. The resin can then be measured by gamma-spectrometry. The detection limit is 0.1 Bq per sample of resin (80 g) for (137)Cs. Natural (7)Be and (210)Pb can also be measured, and the activity ratio of the two radionuclides is comparable with those obtained through iron hydroxide precipitation and air filter measurements. Occasionally, (22)Na has also been measured above the detection limit. A comparison between the evaporation method and the resin method demonstrated that 2/3 of the (7)Be can be lost during the evaporation process. The resin method is simple and highly efficient at extracting radioactivity. Because of these advantages, we anticipate that it could replace previous methods of rainwater determination. Moreover, it does not require transporting large volumes of rainwater to the laboratory.
Resumo:
The use of urinary hexane diamine (HDA) as a biomarker to assess human respiratory exposure to hexamethylene diisocyanate (HDI) aerosol was evaluated. Twenty-three auto body shop workers were exposed to HDI biuret aerosol for two hours using a closed exposure apparatus. HDI exposures were quantified using both a direct-reading instrument and a treated-filter method. Urine samples collected at baseline, immediately post exposure, and every four to five hours for up to 20 hours were analyzed for HDA using gas chromatography and mass spectrometry. Mean urinary HDA (microg/g creatinine) increased sharply from the baseline value of 0.7 to 18.1 immediately post exposure and decreased rapidly to 4.7, 1.9 and 1.1 at 4, 9, and 18 hours post exposure, respectively. Considerable individual variability was found. Urinary HDA can assess acute respiratory exposure to HDI aerosol, but may have limited use as a biomarker of exposure in the workplace. [Authors]
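The mean values quoted above suggest roughly first-order elimination. As an illustrative back-of-the-envelope fit (a mono-exponential model, which the abstract itself does not claim), a log-linear regression on the post-exposure means gives a half-life of about 4-5 hours:

```python
import numpy as np

# Mean urinary HDA (microg/g creatinine) at hours post exposure, from the abstract
t = np.array([0.0, 4.0, 9.0, 18.0])
c = np.array([18.1, 4.7, 1.9, 1.1])

# First-order elimination: ln C = ln C0 - k*t, fitted by least squares
slope, intercept = np.polyfit(t, np.log(c), 1)
k = -slope                 # elimination rate constant (1/h)
half_life = np.log(2) / k  # ~4.7 h for these means
```

The steep initial drop (18.1 to 4.7 in four hours) hints at a faster early phase, which is one reason such a single-exponential fit is only indicative.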
Resumo:
The epithelial Na+ channel (ENaC) belongs to a new class of channel proteins called the ENaC/DEG superfamily involved in epithelial Na+ transport, mechanotransduction, and neurotransmission. The role of ENaC in Na+ homeostasis and in the control of blood pressure has been demonstrated recently by the identification of mutations in ENaC beta and gamma subunits causing hypertension. The function of ENaC in Na+ reabsorption depends critically on its ability to discriminate between Na+ and other ions like K+ or Ca2+. ENaC is virtually impermeant to K+ ions, and the molecular basis for its high ionic selectivity is largely unknown. We have identified a conserved Ser residue in the second transmembrane domain of the ENaC alpha subunit (alphaS589), which when mutated allows larger ions such as K+, Rb+, Cs+, and divalent cations to pass through the channel. The relative ion permeability of each of the alphaS589 mutants is related inversely to the ionic radius of the permeant ion, indicating that alphaS589 mutations increase the molecular cutoff of the channel by modifying the pore geometry at the selectivity filter. Proper geometry of the pore is required to tightly accommodate Na+ and Li+ ions and to exclude larger cations. We provide evidence that ENaC discriminates between cations mainly on the basis of their size and the energy of dehydration.
Resumo:
The functional response of predators is usually modelled as a function of absolute prey density. Arditi and Ginzburg have suggested that it should often depend instead on the prey available per capita of predators, i.e. on the prey/predator ratio. Theory suggests that these two forms of dependence are related to the degree of spatial and temporal heterogeneity. Experiments using four filter-feeding cladoceran species were designed to test this hypothesis and to investigate the relation between individual behaviour and population dynamics. The patterns of population abundance that the cladocerans reached at equilibrium match the expectation that species with homogeneous spatial behaviour follow prey-dependent dynamics while those with heterogeneous behaviour follow ratio-dependent dynamics.
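The two hypotheses contrasted above are easy to state formally. A minimal sketch using a Holling type-II form (the parameter values are arbitrary): in the prey-dependent case the per-capita consumption rate is g(N), while in the ratio-dependent case it is g(N/P):

```python
def holling_ii(x, a=1.0, h=0.2):
    """Holling type-II response: attack rate a, handling time h."""
    return a * x / (1 + a * h * x)

def prey_dependent(N, P, **kw):
    """Per-capita consumption depends on absolute prey density N."""
    return holling_ii(N, **kw)

def ratio_dependent(N, P, **kw):
    """Per-capita consumption depends on prey available per predator, N/P."""
    return holling_ii(N / P, **kw)

# Doubling predator density leaves the prey-dependent response unchanged,
# but halves the argument of the ratio-dependent one:
same = prey_dependent(10, 1) == prey_dependent(10, 2)    # True
lower = ratio_dependent(10, 2) < ratio_dependent(10, 1)  # True
```

This captures the experimental distinction: if cladocerans with heterogeneous spatial behaviour follow ratio-dependent dynamics, their per-capita intake should fall as predator density rises even at fixed prey density.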
Resumo:
Genetically constructed microbial biosensors for measuring organic pollutants are mostly applied in aqueous samples. Unfortunately, the detection limit of most biosensors is insufficient to detect pollutants at low but environmentally relevant concentrations. However, organic pollutants with low levels of water solubility often have significant gas-water partitioning coefficients, which in principle makes it possible to measure such compounds in the gas rather than the aqueous phase. Here we describe the first use of a microbial biosensor for measuring organic pollutants directly in the gas phase. For this purpose, we reconstructed a bioluminescent Pseudomonas putida naphthalene biosensor strain to carry the NAH7 plasmid and a chromosomally inserted gene fusion between the sal promoter and the luxAB genes. Specific calibration studies were performed with suspended and filter-immobilized biosensor cells, in aqueous solution and in the gas phase. Gas phase measurements with filter-immobilized biosensor cells in closed flasks, with a naphthalene-contaminated aqueous phase, showed that the biosensor cells can measure naphthalene effectively. The biosensor cells on the filter responded with increasing light output proportional to the naphthalene concentration added to the water phase, even though only a small proportion of the naphthalene was present in the gas phase. In fact, the biosensor cells could concentrate a larger proportion of naphthalene through the gas phase than in the aqueous suspension, probably due to faster transport of naphthalene to the cells in the gas phase. This led to a 10-fold lower detectable aqueous naphthalene concentration (50 nM instead of 0.5 micro M). Thus, the use of bacterial biosensors for measuring organic pollutants in the gas phase is a valid method for increasing the sensitivity of these valuable biological devices.
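The feasibility argument above rests on gas-water partitioning. As a rough illustration (the K_aw value is a typical room-temperature literature figure for naphthalene, not from this study, and the flask volumes are invented), the equilibrium headspace fraction in a closed flask follows from the dimensionless Henry constant:

```python
def gas_phase_fraction(k_aw, v_gas, v_water):
    """Equilibrium mass fraction of a solute in the headspace of a closed
    flask. With K_aw = C_gas / C_water (dimensionless Henry constant),
    m_gas / m_total = K_aw*V_gas / (K_aw*V_gas + V_water)."""
    return k_aw * v_gas / (k_aw * v_gas + v_water)

# Naphthalene, K_aw ~ 0.017; 0.5 L headspace over 0.1 L of water
f = gas_phase_fraction(0.017, v_gas=0.5, v_water=0.1)  # ~8% in the gas phase
```

Even with only a small fraction in the headspace, the abstract notes that faster transport in the gas phase let filter-immobilized cells accumulate naphthalene more effectively than suspended cells.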
Resumo:
Six gases (N(CH3)3, NH2OH, CF3COOH, HCl, NO2, O3) were selected to probe the surfaces of seven combustion aerosols (amorphous carbon, flame soot) and three types of TiO2 nanoparticles using heterogeneous (i.e. gas-surface) reactions. The gas uptake to saturation of the probes was measured under molecular flow conditions in a Knudsen flow reactor and expressed as a density of surface functional groups on a particular aerosol, namely acidic (carboxylic) and basic (conjugated oxides such as pyrones, N-heterocycles) sites, carbonyl (R1-C(O)-R2) and oxidizable (olefinic, -OH) groups. The limit of detection was generally well below 1% of a formal monolayer of adsorbed probe gas. With few exceptions, most investigated aerosol samples interacted with all probe gases, which points to the coexistence of different functional groups, such as acidic and basic groups, on the same aerosol surface. Generally, the carbonaceous particles displayed significant differences in surface group density: Printex 60 amorphous carbon had the lowest density of surface functional groups throughout, whereas Diesel soot recovered from a Diesel particulate filter had the largest. The presence of basic oxides on carbonaceous aerosol particles was inferred from the ratio of the uptakes of CF3COOH and HCl, owing to the greater stability of the acetate compared to the chloride counterion in the resulting pyrylium salt. Both soots generated from a rich and a lean hexane diffusion flame had a large density of oxidizable groups, similar to amorphous carbon FS 101. TiO2 15 had the lowest density of functional groups among the three studied TiO2 nanoparticles for all probe gases, despite the smallest size of its primary particles. The technique used enabled the measurement of the uptake probability of the probe gases on the various supported aerosol samples.
The initial uptake probability, g0, of the probe gas onto the supported nanoparticles differed significantly among the various investigated aerosol samples but was roughly correlated with the density of surface groups, as expected. [Authors]
Resumo:
The goal of this work is to develop a method to objectively compare the performance of a digital and a screen-film mammography system in terms of image quality. The method takes into account the dynamic range of the image detector, the detection of high and low contrast structures, the visualisation of the images and the observer response. A test object, designed to represent a compressed breast, was constructed from various tissue equivalent materials ranging from purely adipose to purely glandular composition. Different areas within the test object permitted the evaluation of low and high contrast detection, spatial resolution and image noise. All the images (digital and conventional) were captured using a CCD camera to include the visualisation process in the image quality assessment. A mathematical model observer (non-prewhitening matched filter), that calculates the detectability of high and low contrast structures using spatial resolution, noise and contrast, was used to compare the two technologies. Our results show that for a given patient dose, the detection of high and low contrast structures is significantly better for the digital system than for the conventional screen-film system studied. The method of using a test object with a large tissue composition range combined with a camera to compare conventional and digital imaging modalities can be applied to other radiological imaging techniques. In particular it could be used to optimise the process of radiographic reading of soft copy images.
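The non-prewhitening matched-filter observer mentioned above has a standard Fourier-domain formulation: d'^2 = (sum |S|^2)^2 / sum(|S|^2 * NPS), where S is the expected signal spectrum and NPS the noise power spectrum. A sketch of this computation (the Gaussian test structure and flat noise spectrum below are toy stand-ins, not the paper's data):

```python
import numpy as np

def npw_detectability(signal, nps):
    """Squared detectability d'^2 of the non-prewhitening matched filter:
    d'^2 = (sum |S|^2)^2 / sum(|S|^2 * NPS), with S the 2-D signal spectrum."""
    S2 = np.abs(np.fft.fft2(signal)) ** 2
    return S2.sum() ** 2 / (S2 * nps).sum()

# Toy example: a small Gaussian blob (low-contrast structure) on a detector
# with a flat (white) noise power spectrum
x = np.arange(32) - 16
xx, yy = np.meshgrid(x, x)
blob = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))
white_nps = np.full((32, 32), 10.0)
d2 = npw_detectability(blob, white_nps)
```

The model behaves as expected for such a figure of merit: doubling the noise power halves d'^2, and doubling the signal contrast quadruples it, which is how spatial resolution, noise and contrast are combined into a single detectability number.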