869 results for Factor of risk


Relevance:

100.00%

Publisher:

Abstract:

The nuclear factor of activated T cells (NFAT) family of transcription factors controls calcium signaling in T lymphocytes. In this study, we have identified a crucial regulatory role of the transcription factor NFATc2 in T cell-dependent experimental colitis. As in ulcerative colitis in humans, the expression of NFATc2 was up-regulated in oxazolone-induced chronic intestinal inflammation. Furthermore, NFATc2 deficiency suppressed colitis induced by oxazolone administration. This finding was associated with enhanced T cell apoptosis in the lamina propria and strikingly reduced production of IL-6, -13, and -17 by mucosal T lymphocytes. Further studies using knockout mice showed that IL-6, rather than IL-23 or IL-17, is essential for oxazolone colitis induction. Administration of hyper-IL-6 blocked the protective effects of NFATc2 deficiency in experimental colitis, suggesting that IL-6 signal transduction plays a major pathogenic role in vivo. Finally, adoptive transfer of IL-6-deficient and wild-type T cells demonstrated that oxazolone colitis is critically dependent on IL-6 production by T cells. Collectively, these results define a unique regulatory role for NFATc2 in colitis: controlling mucosal T cell activation in an IL-6-dependent manner. NFATc2 in T cells thus emerges as a potential new therapeutic target for inflammatory bowel diseases.


BACKGROUND: We assessed the prevalence of risk factors for cardiovascular disease (CVD) in a middle-income country in rapid epidemiological transition and estimated direct costs for treating all individuals at increased cardiovascular risk, i.e. following the so-called "high risk strategy". METHODS: Survey of risk factors using an age- and sex-stratified random sample of the population of Seychelles aged 25-64 in 2004. Assessment of CVD risk and treatment modalities were in line with international guidelines. Costs are expressed as USD per capita per year. RESULTS: 1255 persons took part in the survey (participation rate of 80.2%). Prevalence of main risk factors was: 39.6% for high blood pressure (≥140/90 mmHg or treatment), of which 59% were under treatment; 24.2% for high cholesterol (≥6.2 mmol/l); 20.8% for low HDL-cholesterol (<1.0 mmol/l); 9.3% for diabetes (fasting glucose ≥7.0 mmol/l); 17.5% for smoking; 25.1% for obesity (body mass index ≥30 kg/m2); and 22.1% for the metabolic syndrome. Overall, 43% had high blood pressure, high cholesterol or diabetes, and hence substantially increased CVD risk. The cost of medications needed to treat all high-risk individuals amounted to USD 45.6, i.e. USD 11.2 for high blood pressure, USD 3.8 for diabetes, and USD 30.6 for dyslipidemia (using generic drugs except for hypercholesterolemia). The cost of minimal follow-up medical care and laboratory tests amounted to USD 22.6. CONCLUSION: A high prevalence of major risk factors was found in a rapidly developing country, and the cost of the treatment needed to reduce risk factors in all high-risk individuals exceeded the resources generally available in low- or middle-income countries. Our findings emphasize the need for affordable cost-effective treatment strategies and the critical importance of population strategies aimed at reducing risk factors in the entire population.
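The per-capita cost breakdown reported in this abstract can be cross-checked in a few lines; all figures below are USD per capita per year taken directly from the abstract (the combined total is simple addition, not a number stated by the authors):

```python
# Per-capita annual medication costs reported in the abstract (USD).
medication_costs = {
    "high blood pressure": 11.2,
    "diabetes": 3.8,
    "dyslipidemia": 30.6,
}
follow_up = 22.6  # minimal follow-up medical care and laboratory tests

medication_total = round(sum(medication_costs.values()), 1)
print(medication_total)                          # 45.6, matching the reported total
print(round(medication_total + follow_up, 1))    # 68.2 overall per capita per year
```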


Background: Population-based cohort studies of risk factors for stroke are scarce in developing countries, and none had been done in the African region. We conducted a longitudinal study in the Seychelles (Indian Ocean, east of Kenya), a middle-income island state where the majority of the population is of African descent. Such data in Africa are important for international comparison and for advocacy in the region. Methods: Three examination surveys of cardiovascular risk factors were performed in independent samples representative of the general population aged 25-64 in 1989, 1994 and 2004 (n=1081, 1067, and 1255, respectively). Baseline risk factor data were linked with cause-specific mortality from vital statistics up to May 2007 (all deaths are medically certified in the Seychelles and kept in an electronic database). We considered stroke (any type) as a cause of death if the diagnosis was reported in any of the 4 fields of the death certificate for underlying and concomitant causes of death. Results: Among the 2479 persons aged 35-64 at baseline, 280 died during follow-up (maximum: 18.2 years; mean: 10.2 years), including 56 with stroke. In this age range, age-adjusted mortality rates (per 100,000 per year) were 969 for all causes and 187 for stroke; the age-adjusted prevalence of high blood pressure (≥140/90 mmHg) was 48%. In multivariate Cox survival time regression, stroke mortality was increased by 18% and 35% for a 10-mmHg increase in systolic and diastolic BP, respectively (p<0.001). Stroke mortality was also associated with age, smoking ≥5 cigarettes per day vs. no smoking (HR: 2.4; 95% CI: 1.2-4.8) and diabetes (HR: 1.9; 95% CI: 1.02-3.6), but not with sex, LDL-cholesterol, alcohol intake or professional occupation. Conclusion: This first population-based cohort study in the African region demonstrates high mortality rates from stroke in middle-aged adults and confirms associations with high BP and other risk factors. This emphasizes the importance of reducing BP and other modifiable risk factors, in high-risk individuals and in the general population, as a main strategy to reduce the burden of stroke.
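The per-10-mmHg effect sizes quoted above follow from the exponential form of the Cox model: a coefficient β per mmHg implies a hazard ratio exp(10β) over a 10-mmHg increase. A minimal sketch of that conversion, with coefficients back-calculated from the reported 18% and 35% increases (hypothetical values, not the study's fitted coefficients):

```python
import math

def hr_for_increment(beta_per_unit: float, increment: float) -> float:
    """Hazard ratio implied by a Cox log-hazard coefficient over a covariate increase."""
    return math.exp(beta_per_unit * increment)

# Hypothetical per-mmHg coefficients chosen so that a 10-mmHg rise reproduces
# the reported ~18% (systolic) and ~35% (diastolic) increases in stroke mortality.
beta_systolic = math.log(1.18) / 10
beta_diastolic = math.log(1.35) / 10

print(round(hr_for_increment(beta_systolic, 10), 2))   # 1.18
print(round(hr_for_increment(beta_diastolic, 10), 2))  # 1.35
```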


Rational learning theories postulate that information channels and cognitive biases such as individual optimism may influence an individual's assessment of the risk of undesired events, especially with regard to those that have a cumulative nature. This is the case with disability in old age, which may take place upon survival to an advanced age, and such factors have been regarded as responsible for certain individual behaviours (for example, the limited incidence of insurance purchase). This paper examines the determinants of individual perceptions with regard to disability in old age and longevity. The cumulative nature of such perceptions of risk is tested, and potential biases are identified, including 'optimism' and a set of information determinants. Empirical evidence from a representative survey of Catalonia is presented to illustrate these effects. The findings from this research suggest a significant overestimation of disability in old age, yet this is not the case with longevity. Furthermore, individual perceptions with regard to disability in old age, unlike those with regard to longevity, exhibit, on aggregate, an 'optimistic bias' and are perceived as 'cumulative risks'. Gender influences the perceived risk of disability in old age at a population level but not at the individual level, and the opposite holds true for age. Finally, self-reported health status is the main variable behind risk perceptions at both the individual and population level.
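The "cumulative" character of a risk such as disability in old age can be made concrete with a small illustration: the probability of ever experiencing the event grows with each year survived. The 2% annual risk below is a hypothetical value chosen for illustration, not an estimate from the survey:

```python
def cumulative_risk(annual_risk: float, years: int) -> float:
    """Probability that the event occurs at least once over `years` years,
    assuming a constant, independent annual risk."""
    return 1 - (1 - annual_risk) ** years

# With a hypothetical 2% annual risk of disability onset:
print(round(cumulative_risk(0.02, 1), 3))   # 0.02  (one year of exposure)
print(round(cumulative_risk(0.02, 20), 3))  # 0.332 (twenty years of survival)
```

This is why survival to an advanced age matters for the perceived risk: the same small annual hazard accumulates into a substantial lifetime probability.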


Abstract: The occupational health risk involved with handling nanoparticles is the probability that a worker will experience an adverse health effect: this is calculated as a function of the worker's exposure relative to the potential biological hazard of the material. Addressing the risks of nanoparticles therefore requires knowledge of occupational exposure and of the release of nanoparticles into the environment, as well as toxicological data. However, information on exposure is currently not systematically collected; risk assessment therefore lacks quantitative data. This thesis aimed, first, at creating the fundamental data necessary for a quantitative assessment and, second, at evaluating methods to measure occupational nanoparticle exposure. The first goal was to determine what is being used, and where, in Swiss industries. This was followed by an evaluation of the adequacy of existing measurement methods for assessing workplace nanoparticle exposure to complex size distributions and concentration gradients. The study was conceived as a series of methodological evaluations aimed at better understanding nanoparticle measurement devices and methods. It focused on inhalation exposure to airborne particles, as respiration is considered the most important entry pathway for nanoparticles into the body in terms of risk. The targeted survey (pilot study) was conducted as a feasibility study for a later nationwide survey on the handling of nanoparticles and the use of specific protective measures in industry. The study consisted of targeted phone interviews with health and safety officers of Swiss companies believed to use or produce nanoparticles. This was followed by a representative survey on the level of nanoparticle usage in Switzerland, designed on the basis of the results of the pilot study.
The study was conducted among a representative selection of clients of the Swiss National Accident Insurance Fund (SUVA), covering about 85% of Swiss production companies. The third part of this thesis focused on the methods to measure nanoparticles. Several pre-studies were conducted on the limits of commonly used measurement devices in the presence of nanoparticle agglomerates. This focus was chosen because several discussions with users and producers of the measurement devices raised questions about their accuracy when measuring nanoparticle agglomerates and because, at the same time, the two survey studies revealed that such powders are frequently used in industry. The first preparatory experiment focused on the accuracy of the scanning mobility particle sizer (SMPS), which showed an improbable size distribution when measuring powders of nanoparticle agglomerates. Furthermore, the thesis includes a series of smaller experiments that took a closer look at problems encountered with other measurement devices in the presence of nanoparticle agglomerates: condensation particle counters (CPC), the portable aerosol spectrometer (PAS, a device to estimate the aerodynamic diameter), and diffusion size classifiers. Some initial feasibility tests of the efficiency of filter-based sampling and subsequent counting of carbon nanotubes (CNT) were conducted last. The pilot study provided a detailed picture of the types and amounts of nanoparticles used and of the knowledge of the health and safety experts in the companies. Considerable maximal quantities (> 1,000 kg/year per company) of Ag, Al-Ox, Fe-Ox, SiO2, TiO2, and ZnO (mainly first-generation particles) were declared by the contacted Swiss companies. The median quantity of handled nanoparticles, however, was 100 kg/year. The representative survey was conducted by contacting by post a representative selection of 1,626 SUVA clients.
It allowed estimation of the number of companies and workers dealing with nanoparticles in Switzerland. The extrapolation from the surveyed companies to all companies of the Swiss production sector suggested that 1,309 workers (95% confidence interval: 1,073 to 1,545) are potentially exposed to nanoparticles in 586 companies (145 to 1,027). These numbers correspond to 0.08% (0.06% to 0.09%) of all workers and 0.6% (0.2% to 1.1%) of companies in the Swiss production sector. A few well-known methods exist to measure airborne concentrations of sub-micrometre-sized particles. However, it was unclear how well the different instruments perform in the presence of the often quite large agglomerates of nanostructured materials. The evaluation of devices and methods therefore focused on nanoparticle agglomerate powders. It allowed the identification of the following potential sources of inaccurate measurements at workplaces with considerably high concentrations of airborne agglomerates: a standard SMPS showed bimodal particle size distributions when measuring large nanoparticle agglomerates; differences of up to a factor of a thousand were found between diffusion size classifiers and CPC/SMPS; the agreement between CPC/SMPS and the portable aerosol spectrometer (PAS) was much better but, depending on the concentration, size or type of the powders measured, differences could still reach an order of magnitude; background particles can interact with particles created by a process, which makes accounting for the background concentration difficult; and electric motors produce high numbers of nanoparticles and confound the measurement of process-related exposure. Conclusion: The surveys showed that nanoparticle applications exist in many industrial sectors in Switzerland and that some companies already use high quantities of them.
The representative survey demonstrated a low prevalence of nanoparticle usage in most branches of Swiss industry and led to the conclusion that the introduction of applications using nanoparticles (especially outside industrial chemistry) is only beginning. Even though the number of potentially exposed workers was reportedly rather small, it nevertheless underscores the need for exposure assessments. Understanding exposure, and how to measure it correctly, is very important because the potential health effects of nanomaterials are not yet fully understood. The evaluation showed that many devices and methods for measuring nanoparticles need to be validated for nanoparticle agglomerates before large exposure assessment studies can begin.

Zusammenfassung (Summary): The occupational health risk of nanoparticles is the probability that a worker exposed to the material suffers a possible health effect; it is usually calculated as the product of hazard and exposure. A thorough assessment of the possible risks of nanomaterials therefore requires information on the release of such materials into the environment on the one hand and on worker exposure on the other. Much of this information is not yet systematically collected and is therefore missing from risk analyses. The aim of this thesis was to lay the groundwork for a quantitative estimate of occupational exposure to nanoparticles and to evaluate the methods needed to measure such exposure. The study examined to what extent nanoparticles are already used in Swiss industry, how many workers potentially come into contact with them, and whether measurement technology is already adequate for the necessary workplace exposure measurements. It focused on exposure to airborne particles, because respiration is regarded as the main entry route for particles into the body. The thesis is built on three phases: a qualitative survey (pilot study), a representative Swiss survey, and several technical studies serving the specific understanding of the possibilities and limits of individual measurement devices and techniques.

The qualitative telephone survey was conducted as a preparatory study for a national, representative survey of Swiss industry. It targeted information on the occurrence of nanoparticles and the protective measures applied, and consisted of targeted telephone interviews with occupational health and safety specialists of Swiss companies, selected on the basis of publicly available information suggesting that they handle nanoparticles. The second part of the thesis was the representative study evaluating the prevalence of nanoparticle applications in Swiss industry. It built on information from the pilot study and was conducted with a representative selection of companies insured by the Swiss National Accident Insurance Fund (SUVA), thereby covering the majority of Swiss companies in the industrial sector. The third part of the thesis focused on the methodology for measuring nanoparticles. Several preliminary studies probed the limits of commonly used nanoparticle measurement devices when measuring larger quantities of nanoparticle agglomerates. This focus was chosen for two reasons: several discussions with users and with the producer of the measurement devices suggested a weakness there, raising doubts about the accuracy of the devices, and the two survey studies showed that such nanoparticle agglomerates occur frequently. A first preliminary study addressed the accuracy of the Scanning Mobility Particle Sizer (SMPS), which displayed an implausible bimodal particle size distribution in the presence of nanoparticle agglomerates. A series of short experiments followed, concentrating on other devices and their problems when measuring nanoparticle agglomerates: the Condensation Particle Counter (CPC), the portable aerosol spectrometer (PAS), a device for estimating the aerodynamic diameter of particles, and the diffusion size classifier were tested. Finally, initial feasibility tests on the efficiency of filter-based measurement of airborne carbon nanotubes (CNT) were carried out.

The pilot study provided a detailed picture of the types and quantities of nanoparticles used in Swiss companies and documented the state of knowledge of the interviewed health and safety specialists. The contacted companies reported the following nanoparticle types at maximum quantities above 1,000 kg per year per company: Ag, Al-Ox, Fe-Ox, SiO2, TiO2, and ZnO (mainly first-generation nanoparticles). The quantities of nanoparticles used varied widely, with a median of 100 kg per year. In the quantitative questionnaire study, 1,626 companies, all clients of the Swiss National Accident Insurance Fund (SUVA), were contacted by post. The results allowed an estimate of the number of companies and workers using nanoparticles in Switzerland. Extrapolated to the Swiss industrial sector, 1,309 workers (95% confidence interval: 1,073 to 1,545) are potentially exposed to nanoparticles in 586 companies (95% CI: 145 to 1,027). These figures correspond to 0.6% of Swiss companies (95% CI: 0.2% to 1.1%) and 0.08% of the workforce (95% CI: 0.06% to 0.09%).

There are some well-established technologies for measuring the airborne concentration of sub-micrometre particles, but it was doubtful to what extent they can be used to measure engineered nanoparticles. The preparatory studies for the workplace assessments therefore focused on measuring powders containing nanoparticle agglomerates. They allowed the identification of the following possible sources of erroneous measurements at workplaces with elevated airborne concentrations of nanoparticle agglomerates: a standard SMPS showed an implausible bimodal particle size distribution when measuring larger nanoparticle agglomerates; large differences, on the order of a factor of a thousand, were found between a diffusion size classifier and several CPCs (or the SMPS); the differences between CPC/SMPS and the PAS were smaller but, depending on the size or type of the powder measured, could still reach an order of magnitude; background particles can interact with particles released during a work process, which makes it difficult to account correctly for the background particle concentration in the measurement data; and electric motors produce large numbers of nanoparticles and can thus confound the measurement of process-related exposure.

Conclusion: the surveys showed that nanoparticles are already a reality in Swiss industry and that some companies already use large quantities of them. The representative survey moderated this picture by showing that the number of such companies across Swiss industry is relatively small; in most branches (especially outside the chemical industry) few or no applications were found, suggesting that the introduction of this new technology is only at the beginning of its development. Even though the number of potentially exposed workers is still relatively small, the study nevertheless underscores the need for exposure measurements at these workplaces. Knowledge of exposure, and of how to measure it correctly, is very important, above all because the possible health effects are not yet fully understood. The evaluation of several devices and methods showed, however, that there is still ground to make up: before larger measurement studies can be carried out, the devices and methods must be validated for use with nanoparticle agglomerates.
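The survey extrapolation reported above (absolute worker counts converted into shares of the production-sector workforce) can be sketched numerically. The total workforce figure below is a hypothetical back-calculation from the reported 0.08%, not a number given in the thesis, so the recomputed interval bounds may differ slightly from the published ones due to rounding:

```python
# Extrapolated count of potentially exposed workers and its 95% CI,
# as reported in the abstract.
workers, workers_ci = 1309, (1073, 1545)

# Assumption: total production-sector workforce back-calculated from the
# reported 0.08% share (hypothetical, not from the thesis).
workforce = round(workers / 0.0008)

def pct(x: int) -> float:
    """Share of the assumed total workforce, in percent."""
    return round(100 * x / workforce, 2)

print(pct(workers))                        # 0.08
print(pct(workers_ci[0]), pct(workers_ci[1]))
```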


In recent years, numerous studies have demonstrated the toxic effects of organic micropollutants on the species of our lakes and rivers. Most of these studies, however, have focused on the toxicity of individual substances, whereas organisms are exposed every day to thousands of substances in mixture, and the effects of these cocktails are not negligible. This doctoral thesis therefore examined models for predicting the environmental risk that such cocktails pose to the aquatic environment. The main objective was to assess the ecological risk of the mixtures of chemical substances measured in Lake Geneva, but also to take a critical look at the methodologies used and to propose adaptations for a better estimation of the risk. In the first part of this work, the risk of mixtures of pesticides and pharmaceuticals for the Rhône and for Lake Geneva was established using approaches envisioned, in particular, in European legislation. These are "screening" approaches, i.e. approaches allowing a general evaluation of mixture risk. Such an approach highlights the most problematic substances, that is, those contributing most to the toxicity of the mixture; in our case, these were essentially four pesticides. The study also shows that all substances, even in minute traces, contribute to the effect of the mixture. This finding has implications for environmental management: it implies that all sources of pollutants must be reduced, not only the most problematic ones. The proposed approach, however, also presents an important conceptual bias, which makes its use questionable beyond screening and would require an adaptation of the safety factors employed.
The second part of the study addressed the use of mixture models in environmental risk calculation. Mixture models were developed and validated species by species, not for an assessment of the whole ecosystem. Their use should therefore proceed via a species-by-species calculation, which is rarely done owing to the lack of available ecotoxicological data. The goal was therefore to compare, using randomly generated values, the risk calculation performed with a rigorous species-by-species method against the one performed classically, where the models are applied to the whole community without accounting for inter-species variation. The results were similar in the majority of cases, which validates the traditionally used approach. However, this work identified certain cases where the classical application can lead to an under- or overestimation of the risk. Finally, the last part of this thesis examined the influence that cocktails of micropollutants may have had on communities in situ. A two-step approach was adopted. First, the toxicity of fourteen herbicides detected in Lake Geneva was determined. Over the period studied, from 2004 to 2009, the toxicity due to herbicides decreased, from 4% of species affected to less than 1%. The next question was whether this decrease in toxicity had an impact on the development of certain species within the algal community. Statistical analysis made it possible to isolate other factors that can influence the flora, such as water temperature or the presence of phosphates, and thus to identify which species turned out to have been influenced, positively or negatively, by the decrease in toxicity in the lake over time.
Interestingly, some of these species had already shown similar behaviour in mesocosm studies. In conclusion, this work shows that robust models exist to predict the risk of micropollutant mixtures to aquatic species, and that they can be used to explain the role of these substances in ecosystem functioning. These models nevertheless have limits and underlying assumptions that must be considered when they are applied. - For several years, the risks that organic micropollutants pose to the aquatic environment have been of great concern to scientists and to society. Numerous studies have demonstrated the toxic effects these chemical substances can have on the species of our lakes and rivers when they are exposed to acute or chronic concentrations. Most of these studies, however, have focused on the toxicity of individual substances, i.e. considered separately, as is currently also the case in European regulatory procedures for the environmental risk assessment of a substance. Yet organisms are exposed every day to thousands of substances in mixture, and the effects of these "cocktails" are not negligible. The ecological risk assessment of such mixtures must therefore be addressed in the most appropriate and reliable way possible. In the first part of this thesis, we examined the methods currently envisioned for integration into European legislation for assessing mixture risk to the aquatic environment.
These methods are based on the concentration addition model, using either the predicted no-effect concentrations of the substances (PNEC) or the effect concentrations (EC50) on certain species of a trophic level, combined with safety factors. We applied these methods to two specific cases, Lake Geneva and the Rhône in Switzerland, and discussed the results of these applications. These first assessment steps showed that mixture risk in these case studies rapidly reaches a value above a critical threshold, generally driven by two or three main substances. The proposed procedures therefore make it possible to identify the most problematic substances, for which management measures, such as reducing their entry into the aquatic environment, should be envisioned. However, we also found that the risk level associated with these mixtures is not negligible even without taking these main substances into account: the accumulation of substances, even in minute traces, reaches a critical threshold, which is more difficult to handle in terms of risk management. In addition, we highlighted a lack of reliability in these procedures, which can lead to contradictory results in terms of risk; this is linked to the incompatibility of the safety factors used in the different methods. In the second part of the thesis, we studied the reliability of more advanced methods for predicting the effect of mixtures on the communities of the aquatic system. These methods rest on the concentration addition (CA) or response addition (RA) models applied to species sensitivity distribution (SSD) curves.
Indeed, mixture models were developed and validated to be applied species by species, not to several species aggregated simultaneously in SSD curves. We therefore proposed a more rigorous procedure for assessing the risk of a mixture: first apply the CA or RA models to each species separately and then, in a second step, combine the results to establish an SSD for the mixture. Unfortunately, this method is not applicable in most cases, because it requires too much generally unavailable data. We therefore compared, using randomly generated values, the risk calculation performed with this more rigorous method against the one performed traditionally, in order to characterize the robustness of the approach that applies mixture models to SSD curves. Our results showed that using CA directly on SSDs can lead to an underestimation of the mixture concentration affecting 5% or 50% of species, in particular when the substances show a large standard deviation in their species sensitivity distribution. Applying the RA model can lead to over- or underestimation, mainly depending on the slope of the dose-response curves of the individual species composing the SSDs. Underestimation with RA becomes potentially important when the ratio between the EC50 and the EC10 of the species' dose-response curve is smaller than 100. However, for most substances in real cases, the ecotoxicity data are such that the mixture risk calculated by applying the models directly to the SSDs remains consistent and, if anything, slightly overestimates the risk. These results thus validate the traditionally used approach.
Nevertheless, this source of error must be kept in mind when assessing the risk of a mixture with this traditional method, in particular when the SSDs show data distributions outside the limits determined in this study. Finally, in the last part of this thesis, we confronted predictions of mixture effects with biological changes observed in the environment. In this study, we used data from the long-term monitoring of a large European lake, Lake Geneva, which offered the possibility of evaluating to what extent the predicted toxicity of herbicide mixtures explained changes in the composition of the phytoplankton community, alongside other classical limnological parameters such as nutrients. To this end, we determined the mixture toxicity, over several years, of 14 herbicides regularly detected in the lake, using the CA and RA models with species sensitivity distribution curves. A decreasing temporal gradient of toxicity was observed from 2004 to 2009. Redundancy and partial redundancy analyses showed that this gradient explains a significant part of the variation in the composition of the phytoplankton community, even after removing the effect of all other covariates. Moreover, some of the species shown to have been influenced, positively or negatively, by the decrease in toxicity in the lake over time displayed similar behaviour in mesocosm studies. We can conclude that herbicide mixture toxicity is one of the key parameters explaining phytoplankton changes in Lake Geneva. In conclusion, various methods exist to predict the risk of micropollutant mixtures to aquatic species, and this risk can play a role in ecosystem functioning.
However, these models naturally have limits and underlying assumptions that must be considered when applying them, before their results are used for environmental risk management. - For several years now, scientists as well as society have been concerned about the risk that organic micropollutants may pose to aquatic environments. Indeed, several studies have shown the toxic effects these substances may induce on organisms living in our lakes or rivers, especially when they are exposed to acute or chronic concentrations. However, most of these studies focused on the toxicity of single compounds, i.e. considered individually. The same is true of the current European regulations governing environmental risk assessment procedures for these substances. Yet aquatic organisms are typically exposed every day, and simultaneously, to thousands of organic compounds. The toxic effects resulting from these "cocktails" cannot be neglected, and the ecological risk assessment of such mixtures therefore has to be addressed by scientists in the most reliable and appropriate way. In the first part of this thesis, the procedures currently envisioned for aquatic mixture risk assessment in European legislation are described. These methodologies are based on the concentration addition mixture model and the use of predicted no effect concentrations (PNEC) or effect concentrations (EC50) with assessment factors. These principal approaches were applied to two specific case studies, Lake Geneva and the River Rhône in Switzerland, and the outcomes of these applications are discussed. These first-tier assessments showed that the mixture risks for the studied cases rapidly exceeded the critical value, an exceedance generally driven by two or three main substances.
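The first-tier approach described above amounts to summing per-substance risk quotients under concentration addition. A minimal sketch, with concentrations and PNEC values that are hypothetical, chosen purely for illustration:

```python
# First-tier mixture risk under concentration addition:
# RQ_mix = sum(c_i / PNEC_i); a value above 1 flags a potential mixture risk.
# All concentrations and PNEC values below are hypothetical placeholders.

def mixture_risk_quotient(concentrations_ug_l, pnec_ug_l):
    """Sum of per-substance risk quotients (concentration addition)."""
    return sum(c / pnec for c, pnec in zip(concentrations_ug_l, pnec_ug_l))

measured = {"substance_A": 0.05, "substance_B": 0.10, "substance_C": 0.02}
pnec = {"substance_A": 0.10, "substance_B": 0.15, "substance_C": 0.50}

rq_mix = mixture_risk_quotient(measured.values(), pnec.values())
print(f"Mixture risk quotient: {rq_mix:.2f}")  # 0.50 + 0.67 + 0.04 -> 1.21
```

Note how the sum exceeds 1 even though every individual quotient stays below it, which is exactly the situation the abstract describes as challenging for risk management.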
The proposed procedures therefore allow the identification of the most problematic substances, for which management measures, such as a reduction of their input to the aquatic environment, should be envisioned. However, it was also shown that the risk levels associated with mixtures of compounds are not negligible even without considering these main substances. Indeed, it is the sum of the substances that is problematic, which is more challenging in terms of risk management. Moreover, a lack of reliability in the procedures was highlighted, which can lead to contradictory results in terms of risk. This inconsistency stems from the different assessment factors applied in the different methods. In the second part of the thesis, the reliability of more advanced procedures for predicting mixture effects on communities in aquatic systems was investigated. These established methodologies combine the model of concentration addition (CA) or response addition (RA) with species sensitivity distribution curves (SSD). Indeed, mixture effect predictions were shown to be consistent only when the mixture models are applied to a single species, and not to several species simultaneously aggregated into SSDs. Hence, a more stringent procedure for mixture risk assessment is proposed, which would be to first apply the CA or RA models to each species separately and, in a second step, to combine the results to build an SSD for the mixture. Unfortunately, this methodology is not applicable in most cases, because it requires large data sets that are usually not available. Therefore, the differences between the two methodologies were studied with artificially created datasets, to characterize the robustness of the traditional approach of applying mixture models directly to species sensitivity distributions.
The results showed that the use of CA directly on SSDs might lead to underestimations of the mixture concentration affecting 5% or 50% of species, especially when substances present a large standard deviation in their species sensitivity distribution. The application of RA can lead to over- or underestimates, depending mainly on the slope of the dose-response curves of the individual species. The potential underestimation with RA becomes important when the ratio between the EC50 and the EC10 of the dose-response curves of the species composing the SSD is smaller than 100. However, considering common real cases of ecotoxicity data, the mixture risk calculated by applying mixture models directly to SSDs remains consistent and would rather slightly overestimate the risk. These results can be used as a theoretical validation of the currently applied methodology. Nevertheless, when assessing the risk of mixtures, one has to keep this source of error in mind with this classical methodology, especially when SSDs present a distribution of the data outside the range determined in this study. Finally, in the last part of this thesis, we confronted the mixture effect predictions with biological changes observed in the environment. In this study, long-term monitoring of a great European lake, Lake Geneva, provided the opportunity to assess to what extent the predicted toxicity of herbicide mixtures explains the changes in the composition of the phytoplankton community, next to other classical limnology parameters such as nutrients. To reach this goal, the gradient of the mixture toxicity of 14 herbicides regularly detected in the lake was calculated, using the concentration addition and response addition models. A decreasing temporal gradient of toxicity was observed from 2004 to 2009.
Redundancy analysis and partial redundancy analysis showed that this gradient explains a significant portion of the variation in phytoplankton community composition, even after removing the effect of all other co-variables. Moreover, some species that were found to be influenced, positively or negatively, by the decrease of toxicity in the lake over time showed similar behaviors in mesocosm studies. It could be concluded that herbicide mixture toxicity is one of the key parameters explaining phytoplankton changes in Lake Geneva. To conclude, different methods exist to predict the risk of mixtures in ecosystems, but their reliability varies depending on the underlying hypotheses. One should therefore carefully consider these hypotheses, as well as the limits of the approaches, before using the results for environmental risk management.
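The two-step procedure favored in this thesis (apply the mixture model to each species first, then pool the per-species results into a mixture SSD) can be sketched for the concentration addition case as follows; all species names and EC50 values are invented for illustration:

```python
# Step 1 of the "more stringent" procedure: concentration addition per species.
# For one species, the mixture EC50 under CA is the harmonic combination
# EC50_mix = 1 / sum(p_i / EC50_i), where p_i are the mixture fractions.
# EC50 values (mg/L) below are hypothetical.

def ca_mixture_ec50(fractions, ec50s):
    """Concentration addition: mixture EC50 for ONE species."""
    return 1.0 / sum(p / ec for p, ec in zip(fractions, ec50s))

fractions = [0.5, 0.5]  # binary mixture, equal fractions

ec50_per_species = {
    "species_1": [2.0, 8.0],
    "species_2": [1.0, 4.0],
    "species_3": [0.5, 16.0],
}

# Step 1: per-species mixture EC50s.
mixture_ssd_points = {
    sp: ca_mixture_ec50(fractions, ecs) for sp, ecs in ec50_per_species.items()
}
# Step 2: these per-species values would then be fitted into an SSD
# for the mixture (fitting omitted here).
for sp, ec in sorted(mixture_ssd_points.items()):
    print(f"{sp}: mixture EC50 = {ec:.2f} mg/L")
```

The traditional shortcut criticized in the text instead applies the same CA formula to the SSD-level HC5/HC50 values directly, skipping step 1.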

Relevância:

100.00%

Publicador:

Resumo:

The objective was to analyze the situation in Switzerland regarding the prevalence of overweight or obesity in children, adolescents and adults. The data were compared with France, an adjacent, much larger country. The results showed that there is a definite lack of objective information in Switzerland on the prevalence of obesity at different ages. As in other European studies, the fact that many national surveys are classically based on subject interviews (self-reported weights and heights rather than measured values) implies that the overweight/obesity prevalence is largely underestimated in adulthood. For example, in a recent Swiss epidemiological study, the prevalence of obesity (BMI greater than 30 kg/m(2)) averaged 6-7% in young men and women (25-34 y), the prevalence being underestimated by a factor of two to three when body weight was self-reported rather than measured. This phenomenon has already been observed in previous European studies. It is concluded that national surveys based on telephone interviews generally produce biased obesity prevalence results, although the direction of the changes in obesity prevalence and its evolution with repeated surveys using a strictly standardized methodology may be evaluated correctly. Therefore, these surveys should be complemented by large-scale epidemiological studies (based on measured rather than self-reported anthropometric variables) covering the different linguistic areas of Switzerland. An epidemiological body weight (BMI) monitoring surveillance system, using a methodology harmonized among European countries, would help to accurately assess differences in obesity prevalence across Europe without methodological bias. It would also permit monitoring of the dynamic evolution of obesity prevalence as well as the development of appropriate strategies (taking into account the specificity of each country) for obesity prevention and treatment.

Relevância:

100.00%

Publicador:

Resumo:

The amino acid composition of the protein from three strains of rat (Wistar, Zucker lean and Zucker obese), subjected to reference and high-fat diets, has been used to determine the mean empirical formula, molecular weight and N content of whole-rat protein. The combined whole protein of the rat was uniform across the six experimental groups, with an estimated 17.3% N content and a mean aminoacyl residue molecular weight of 103.7. This suggests that the appropriate protein factor for the calculation of rat protein from its N content should be 5.77 instead of the classical 6.25. In addition, the non-protein N mass in the whole rat was estimated at about 5.5% of all N. Combining the two calculations gives a protein factor of 5.5 for the conversion of total N into rat protein.
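The two conversion factors reported above follow directly from the stated figures (17.3% N in protein, and roughly 5.5% of total N being non-protein); a quick arithmetic check:

```python
# Reproducing the protein conversion factors from the figures in the abstract.
nitrogen_fraction = 0.173  # whole-rat protein is 17.3% N by mass

# Factor for converting PROTEIN nitrogen to protein mass:
protein_only_factor = 1 / nitrogen_fraction  # ~5.78, reported as 5.77

# About 5.5% of all N is non-protein, so only 94.5% of total N is protein N.
non_protein_n_share = 0.055
whole_body_factor = protein_only_factor * (1 - non_protein_n_share)  # ~5.46 -> ~5.5

print(round(protein_only_factor, 2), round(whole_body_factor, 2))
```

This makes explicit why the factor for total N (5.5) is lower than the factor for protein N alone (5.77): part of the measured nitrogen is not protein-bound.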

Relevância:

100.00%

Publicador:

Resumo:

Rational learning theories postulate that information channels and cognitive biases such as individual optimism may influence an individual's assessment of the risk of undesired events, especially with regard to those that have a cumulative nature. This is the case with disability in old age, which may take place upon survival to an advanced age, and such factors have been regarded as responsible for certain individual behaviours (for example, the limited incidence of insurance purchase). This paper examines the determinants of individual perceptions with regard to disability in old age and longevity. The cumulative nature of such perceptions of risk is tested, and potential biases are identified, including 'optimism' and a set of information determinants. Empirical evidence from a representative survey of Catalonia is presented to illustrate these effects. The findings from this research suggest a significant overestimation of disability in old age, yet this is not the case with longevity. Furthermore, individual perceptions with regard to disability in old age, unlike those with regard to longevity, exhibit on aggregate an 'optimistic bias' and are perceived as 'cumulative risks'. Gender influences the perceived risk of disability in old age at the population level but not at the individual level, and the opposite holds true for age. Finally, self-reported health status is the main variable behind risk perceptions at both the individual and population levels.

Relevância:

100.00%

Publicador:

Resumo:

OBJECTIVE: To assess the contribution of modifiable risk factors to social inequalities in the incidence of type 2 diabetes when these factors are measured at study baseline or repeatedly over follow-up and when long term exposure is accounted for. DESIGN: Prospective cohort study with risk factors (health behaviours (smoking, alcohol consumption, diet, and physical activity), body mass index, and biological risk markers (systolic blood pressure, triglycerides and high density lipoprotein cholesterol)) measured four times and diabetes status assessed seven times between 1991-93 and 2007-09. SETTING: Civil service departments in London (Whitehall II study). PARTICIPANTS: 7237 adults without diabetes (mean age 49.4 years; 2196 women). MAIN OUTCOME MEASURES: Incidence of type 2 diabetes and contribution of risk factors to its association with socioeconomic status. RESULTS: Over a mean follow-up of 14.2 years, 818 incident cases of diabetes were identified. Participants in the lowest occupational category had a 1.86-fold (hazard ratio 1.86, 95% confidence interval 1.48 to 2.32) greater risk of developing diabetes relative to those in the highest occupational category. Health behaviours and body mass index explained 33% (-1% to 78%) of this socioeconomic differential when risk factors were assessed at study baseline (attenuation of hazard ratio from 1.86 to 1.51), 36% (22% to 66%) when they were assessed repeatedly over the follow-up (attenuated hazard ratio 1.48), and 45% (28% to 75%) when long term exposure over the follow-up was accounted for (attenuated hazard ratio 1.41). With additional adjustment for biological risk markers, a total of 53% (29% to 88%) of the socioeconomic differential was explained (attenuated hazard ratio 1.35, 1.05 to 1.72). CONCLUSIONS: Modifiable risk factors such as health behaviours and obesity, when measured repeatedly over time, explain almost half of the social inequalities in incidence of type 2 diabetes. 
This is more than was seen in previous studies based on a single measurement of risk factors.
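The attenuation percentages reported above are consistent with computing the explained share of the socioeconomic differential on the log hazard-ratio scale, a common convention in this literature; a quick check under that assumption (the formula itself is not stated in the abstract):

```python
# Share of the socioeconomic differential "explained" by adjustment,
# computed on the log hazard-ratio scale:
#   100 * (ln HR_base - ln HR_adjusted) / ln HR_base
import math

def attenuation_pct(hr_base, hr_adjusted):
    """Percent of the excess log-hazard removed by adjustment."""
    return 100 * (math.log(hr_base) - math.log(hr_adjusted)) / math.log(hr_base)

# Baseline HR 1.86 (lowest vs highest occupational category) and the
# adjusted HRs quoted in the abstract.
for hr_adj in (1.51, 1.48, 1.41, 1.35):
    print(f"adjusted HR {hr_adj}: ~{attenuation_pct(1.86, hr_adj):.0f}% explained")
```

The four values come out close to the reported 33%, 36%, 45% and 53%, supporting the log-scale reading of the attenuation figures.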

Relevância:

100.00%

Publicador:

Resumo:

The Ca(2+)-regulated calcineurin/nuclear factor of activated T cells (NFAT) cascade controls alternative pathways of T-cell activation and peripheral tolerance. Here, we describe reduced NFATc2 mRNA expression in the lungs of patients with bronchial adenocarcinoma. In a murine model of bronchoalveolar adenocarcinoma, mice lacking NFATc2 developed more and larger solid tumors than wild-type littermates. The extent of central tumor necrosis was decreased in the tumors of NFATc2((-/-)) mice, and this finding was associated with reduced tumor necrosis factor-alpha and interleukin-2 (IL-2) production by CD8(+) T cells. Adoptive transfer of CD8(+) T cells of NFATc2((-/-)) mice induced transforming growth factor-beta(1) in the airways of recipient mice, thus supporting CD4(+)CD25(+)Foxp-3(+)glucocorticoid-induced tumor necrosis factor receptor (GITR)(+) regulatory T (T(reg)) cell survival. Finally, engagement of GITR in NFATc2((-/-)) mice induced IFN-gamma levels in the airways, reversed the suppression by T(reg) cells, and costimulated effector CD4(+)CD25(+) (IL-2Ralpha) and memory CD4(+)CD127(+) (IL-7Ralpha) T cells, resulting in abrogation of carcinoma progression. Agonistic signaling through GITR thus emerges as a possible novel strategy for the treatment of human bronchial adenocarcinoma in the absence of NFATc2, by enhancing IL-2Ralpha(+) effector and IL-7Ralpha(+) memory-expressing T cells.

Relevância:

100.00%

Publicador:

Resumo:

The objective of this paper is to distinguish between different types of working poverty, on the basis of the mechanisms that produce it. Whereas the poverty literature identifies a myriad of risk factors and of categories of disadvantaged workers, we focus on three immediate causes of working poverty, namely low wage rate, weak labour force attachment, and high needs, the latter mainly due to the presence of children (and sometimes to the increase in needs caused by a divorce). These three mechanisms are the channels through which macroeconomic, demographic and policy factors have a direct bearing on working households. The main assumption tested here is that welfare regimes strongly influence the relative weight of these three mechanisms in producing working poverty, and, hence, the composition of the working-poor population. Our figures confirm this hypothesis and show that low-wage employment is a key factor, but, by far, not the only one and that family policies broadly understood play a decisive role, as well as patterns of labour market participation and integration.

Relevância:

100.00%

Publicador:

Resumo:

The comparison of radiotherapy techniques regarding secondary cancer risk has yielded contradictory results, possibly stemming from the many different approaches used to estimate risk. The purpose of this study was to make a comprehensive evaluation of different available risk models applied to detailed whole-body dose distributions computed by Monte Carlo for various breast radiotherapy techniques, including conventional open tangents, 3D conformal wedged tangents and hybrid intensity modulated radiation therapy (IMRT). First, organ-specific linear risk models developed by the International Commission on Radiological Protection (ICRP) and the Biological Effects of Ionizing Radiation (BEIR) VII committee were applied to mean doses for remote organs only and for all solid organs. Then, different general non-linear risk models were applied to the whole-body dose distribution. Finally, organ-specific non-linear risk models for the lung and breast were used to assess the secondary cancer risk for these two specific organs. A total of 32 different calculated absolute risks resulted in a broad range of values (between 0.1% and 48.5%), underlining the large uncertainties in absolute risk calculation. The ratio of risk between two techniques has often been proposed as a more robust assessment than the absolute risk. We found that the ratio of risk between two techniques could also vary substantially across the different approaches to risk estimation. Sometimes the ratio of risk between two techniques would range between values smaller and larger than one, which translates into inconsistent conclusions about whether one technique carries a higher risk than another. We found, however, that the hybrid IMRT technique resulted in a systematic reduction of risk compared to the other techniques investigated, even though the magnitude of this reduction varied substantially with the different approaches investigated.
Based on the epidemiological data available, a reasonable approach to risk estimation would be to use organ-specific non-linear risk models applied to the dose distributions of organs within or near the treatment fields (lungs and contralateral breast in the case of breast radiotherapy) as the majority of radiation-induced secondary cancers are found in the beam-bordering regions.
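In their simplest form, the organ-specific linear models mentioned above reduce to multiplying each organ's mean dose by an organ-specific risk coefficient and summing over organs. A minimal sketch; the doses and coefficients below are placeholders for illustration, not actual ICRP or BEIR VII values:

```python
# Linear, organ-specific secondary-cancer risk estimate:
#   total risk = sum over organs of (mean organ dose) x (risk coefficient).
# All numbers below are hypothetical placeholders, not published coefficients.

mean_organ_dose_gy = {"lung": 1.2, "contralateral_breast": 0.8, "thyroid": 0.05}
risk_coeff_per_gy = {"lung": 0.010, "contralateral_breast": 0.009, "thyroid": 0.002}

total_risk = sum(
    mean_organ_dose_gy[organ] * risk_coeff_per_gy[organ]
    for organ in mean_organ_dose_gy
)
print(f"Estimated lifetime attributable risk: {100 * total_risk:.2f}%")
```

The ratio-of-risk comparison discussed in the abstract would then divide two such totals, one per technique, which is why the result inherits all the coefficient and model choices made here.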

Relevância:

100.00%

Publicador:

Resumo:

This work consists of three essays investigating the ability of structural macroeconomic models to price zero-coupon U.S. government bonds. 1. A small-scale 3-factor DSGE model implying a constant term premium is able to provide a reasonable fit for the term structure only at the expense of the persistence parameters of the structural shocks. Testing the structural model against one with constant but unrestricted price-of-risk parameters shows that the exogenous-prices-of-risk model is only weakly preferred. We provide an MLE-based variance-covariance matrix for the Metropolis proposal density that improves convergence speeds in MCMC chains. 2. A prices-of-risk specification that is affine in observable macro-variables is excessively flexible and provides term-structure fit without significantly altering the structural parameters. The exogenous component of the SDF separates the macro part of the model from the term structure, and the good term-structure fit is driven by an extremely volatile SDF and an inexplicable implied average short rate. We conclude that the no-arbitrage restrictions do not suffice to temper the SDF, so more restrictions are needed. We introduce a penalty-function methodology that proves useful in showing that affine prices-of-risk specifications are able to reconcile stable macro-dynamics with good term-structure fit and a plausible SDF. 3. The level factor is reproduced most importantly by the preference shock, to which it is strongly and positively related, but technology and monetary shocks, with negative loadings, also contribute to its replication. The slope factor is only related to the monetary policy shocks and is poorly explained. We find that there are gains in in- and out-of-sample forecasts of consumption and inflation if term-structure information is used in a time-varying hybrid prices-of-risk setting.
In-sample yield forecasts are better in models with non-stationary shocks for the period 1982-1988. After this period, time-varying market-price-of-risk models provide better in-sample forecasts. For the period 2005-2008, out-of-sample forecasts of consumption and inflation are better if term-structure information is incorporated in the DSGE model, but yields are better forecasted by a pure macro DSGE model.
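For readers unfamiliar with the affine prices-of-risk machinery these essays build on, a one-factor discrete-time Gaussian sketch shows how zero-coupon yields follow from the standard bond-price recursions when the market price of risk is affine in the factor; all parameter values here are illustrative, not estimated from the essays:

```python
# One-factor discrete-time Gaussian affine term-structure sketch.
# Factor dynamics: x' = mu + phi * x + sigma * eps
# Short rate:      r_t = delta0 + delta1 * x_t
# Affine price of risk: lambda_t = lam0 + lam1 * x_t
# Bond prices are P_n = exp(A_n + B_n * x) with the usual recursions
#   A_{n+1} = A_n + B_n*(mu - sigma*lam0) + 0.5*(B_n*sigma)**2 - delta0
#   B_{n+1} = B_n*(phi - sigma*lam1) - delta1
# All parameter values are made up for illustration.

delta0, delta1 = 0.02, 1.0
mu, phi, sigma = 0.0, 0.9, 0.01
lam0, lam1 = -0.1, 0.0  # constant price of risk in this toy example

def affine_yields(x, n_max):
    """Zero-coupon yields y_n = -(A_n + B_n * x) / n for n = 1..n_max."""
    A, B = 0.0, 0.0
    yields = []
    for n in range(1, n_max + 1):
        A, B = (A + B * (mu - sigma * lam0) + 0.5 * (B * sigma) ** 2 - delta0,
                B * (phi - sigma * lam1) - delta1)
        yields.append(-(A + B * x) / n)
    return yields

curve = affine_yields(x=0.0, n_max=10)  # upward-sloping here since lam0 < 0
```

With `lam1` nonzero the price of risk, and hence the term premium, becomes time-varying, which is the mechanism exploited in the third essay's hybrid setting.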

Relevância:

100.00%

Publicador:

Resumo:

BACKGROUND: Only a few countries have cohorts enabling specific and up-to-date cardiovascular disease (CVD) risk estimation. Individual risk assessment based on study samples that differ too much from the target population could jeopardize the benefit of risk charts in general practice. Our aim was to provide up-to-date and valid CVD risk estimation for a Swiss population using a novel record linkage approach. METHODS: Anonymous record linkage was used to follow up (for mortality, until 2008) 9,853 men and women aged 25-74 years who participated in the Swiss MONICA (MONItoring of trends and determinants in CVD) study of 1983-92. The linkage success rate was 97.8%; loss to follow-up during 1990-2000 was 4.7%. Based on the ESC SCORE methodology (Weibull regression), we used age, sex, blood pressure, smoking, and cholesterol to generate three models. We compared 1) the original SCORE model with 2) a recalibrated and 3) a new model using the Brier score (BS) and cross-validation. RESULTS: Based on the cross-validated BS, the new model (BS = 14107×10(-6)) was somewhat more appropriate for risk estimation than the original (BS = 14190×10(-6)) and the recalibrated (BS = 14172×10(-6)) models. Particularly at younger ages, the derived absolute risks were consistently lower than those from the original and the recalibrated models, mainly owing to a smaller impact of total cholesterol. CONCLUSION: Using record linkage of observational and routine data is an efficient procedure to obtain valid and up-to-date CVD risk estimates for a specific population.
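The Brier score used above to compare the three models is simply the mean squared difference between predicted risks and observed binary outcomes (lower is better); a minimal sketch with made-up predictions:

```python
# Brier score: mean squared difference between predicted probability
# and observed 0/1 outcome. Lower values indicate better calibration
# and discrimination combined.
def brier_score(predicted_probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(predicted_probs, outcomes)) / len(outcomes)

# Illustrative predicted CVD mortality risks and observed outcomes
# (these four subjects are hypothetical, not study data).
preds = [0.10, 0.05, 0.30, 0.02]
obs = [0, 0, 1, 0]

print(f"Brier score: {brier_score(preds, obs):.4f}")
```

On this scale, the small differences reported in the abstract (14107 vs 14190 ×10⁻⁶) correspond to modest but consistent improvements in probabilistic accuracy across the cohort.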